Kubrusly
Preface VII
1 Preliminaries 1
1.1 Notation and Terminology . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1
1.2 Inverse Theorems . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 3
1.3 Orthogonal Structure . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 5
1.4 Orthogonal Projections . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 7
1.5 Adjoint Operator . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 9
1.6 Normal Operators . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 12
1.7 Orthogonal Eigenspaces . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 16
1.8 Compact Operators . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 17
1.9 Additional Propositions . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 21
2 Spectrum 27
2.1 Basic Spectral Properties . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 27
2.2 A Classical Partition of the Spectrum . . . . . . . . . . . . . . . . . . . . . . . . . 30
2.3 Spectral Mapping . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 35
2.4 Spectral Radius . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 38
2.5 Numerical Radius . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 41
2.6 Spectrum of Compact Operators . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 44
2.7 Additional Propositions . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 47
3 Spectral Theorem 55
3.1 Spectral Theorem for Compact Operators . . . . . . . . . . . . . . . . . . . . . 55
3.2 Diagonalizable Operators . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 59
3.3 Spectral Measure . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 63
3.4 Spectral Theorem: General Case . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 67
3.5 Fuglede Theorems and Reducing Subspaces . . . . . . . . . . . . . . . . . . . 79
3.6 Additional Propositions . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 84
4 Functional Calculus 91
4.1 Rudiments of C*-Algebra . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 91
4.2 Functional Calculus for Normal Operators . . . . . . . . . . . . . . . . . . . . 95
4.3 Analytic Functional Calculus: Riesz Functional Calculus . . . . . . 102
4.4 Riesz Decomposition Theorem and Riesz Idempotents . . . . . . . . 115
4.5 Additional Propositions . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 126
References 187
Index 193
1
Preliminaries
This chapter summarizes the background required for the rest of the book. Its
purpose is threefold: notation, terminology, and basic results. By basic results
we mean well-known theorems from single operator theory that will be needed
in the sequel. As usual, the sets of nonnegative integers, positive integers, and all integers will be denoted by N₀, N, and Z, and the sets of rational numbers, real numbers, and complex numbers will be denoted by Q, R, and C, respectively.
is a Banach space, then B[X] is a unital (complex) Banach algebra. Since the induced uniform norm has the operator norm property, a trivial induction shows that ‖T^n‖ ≤ ‖T‖^n for every operator T ∈ B[X] on a normed space X, for each integer n ≥ 0.
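In finite dimensions the inequality can be checked directly. Here is a small numerical sketch (numpy and the sample matrix are illustrative choices, not from the text); ‖·‖ denotes the operator norm induced by the Euclidean norm, i.e., the largest singular value:

```python
import numpy as np

# Operator (spectral) norm induced by the Euclidean norm.
op_norm = lambda T: np.linalg.norm(T, 2)

T = np.array([[0.0, 2.0],
              [0.0, 0.0]])  # a nilpotent (hence non-normal) operator on C^2

# Submultiplicativity of the operator norm gives ‖T^n‖ ≤ ‖T‖^n for n ≥ 0.
for n in range(5):
    assert op_norm(np.linalg.matrix_power(T, n)) <= op_norm(T) ** n + 1e-12
```

For this nilpotent example the inequality is strict for n ≥ 2, since T² = O while ‖T‖² = 4.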
A transformation T ∈ B[X, Y] is a contraction if ‖T‖ ≤ 1; equivalently, if ‖T x‖ ≤ ‖x‖ for every x ∈ X. It is a strict contraction if ‖T‖ < 1. If X ≠ {0}, then a transformation T is a contraction (or a strict contraction) if and only if sup_{x≠0}(‖T x‖/‖x‖) ≤ 1 (or sup_{x≠0}(‖T x‖/‖x‖) < 1). If X = {0}, then
Proof. Part (i). The equivalence between (a) and (b) still holds if X and Y are just normed spaces. Indeed, if there exists T^{-1} ∈ B[R(T), X], then there exists a constant β > 0 such that ‖T^{-1} y‖ ≤ β‖y‖ for every y ∈ R(T). Take an arbitrary x ∈ X so that T x ∈ R(T). Thus ‖x‖ = ‖T^{-1} T x‖ ≤ β‖T x‖, and so (1/β)‖x‖ ≤ ‖T x‖. Hence (a) implies (b). Conversely, if (b) holds true, then 0 < ‖T x‖ for every nonzero x in X, and so N(T) = {0}, which means that T has a (linear) inverse on its range — a linear transformation is injective if and only if it has a null kernel. Take an arbitrary y ∈ R(T) so that y = T x for some x ∈ X. Thus ‖T^{-1} y‖ = ‖T^{-1} T x‖ = ‖x‖ ≤ (1/α)‖T x‖ = (1/α)‖y‖, which means that T^{-1} is bounded. Therefore (b) implies (a).
Part (ii). Take an arbitrary R(T )-valued convergent sequence {yn }. Since each
yn lies in R(T ), there exists an X -valued sequence {xn } such that yn = T xn
for each n. Since {T xn } converges in Y, it is a Cauchy sequence in Y. Thus,
if T is bounded below, then there exists α > 0 such that
0 ≤ α‖x_m − x_n‖ ≤ ‖T(x_m − x_n)‖ = ‖T x_m − T x_n‖
for every m, n so that {xn } is a Cauchy sequence in X , and thus it converges
in X to, say, x ∈ X if X is a Banach space. Since T is continuous, it preserves
convergence so that yn = T xn → T x, which implies that the (unique) limit
of {yn } lies in R(T ). Conclusion: R(T ) is closed in Y by the classical Closed
Set Theorem. That is, R(T )− = R(T ) whenever X is a Banach space, where
R(T )− stands for the closure of R(T ). Moreover, since (b) trivially implies
N (T ) = {0}, it follows that (b) implies (c). On the other hand, if N (T ) = {0},
then T is injective. If, in addition, the linear manifold R(T ) is closed in the
Banach space Y, then it is itself a Banach space so that T : X → R(T ) is an
injective and surjective bounded linear transformation of the Banach space X
onto the Banach space R(T ). Therefore, its inverse T −1 lies in B[R(T ), X ] by
the Inverse Mapping Theorem. That is, (c) implies (a).
Notice that if one of the Banach spaces X or Y is zero, then the results in the previous theorems are either void or hold trivially. Indeed, if X ≠ {0} and Y = {0}, then the unique operator in B[X, Y] is not injective (thus not bounded below and not invertible); if X = {0} and Y ≠ {0}, then the unique operator in B[X, Y] is not surjective but is bounded below (thus injective) and has a bounded inverse on its singleton range; if X = {0} and Y = {0}, then the unique operator in B[X, Y] has an inverse in B[X, Y].
The next theorem is a rather useful result establishing a power series expansion for (λI − T)^{-1}. This is the Neumann expansion (or Neumann series) due to C.G. Neumann. The condition ‖T‖ < |λ| will be weakened in Corollary 2.12.

Σ_{k=0}^{n} (T/λ)^k →^u Σ_{k=0}^{∞} (T/λ)^k.

(λI − T)(1/λ)Σ_{k=0}^{n} (T/λ)^k →^u I  and  (1/λ)Σ_{k=0}^{n} (T/λ)^k (λI − T) →^u I.

Hence

(λI − T)(1/λ)Σ_{k=0}^{∞} (T/λ)^k = (1/λ)Σ_{k=0}^{∞} (T/λ)^k (λI − T) = I,

and so (1/λ)Σ_{k=0}^{∞} (T/λ)^k ∈ B[X] is the inverse of λI − T ∈ B[X].
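A numerical sketch of the Neumann expansion (the matrix and λ are arbitrary choices satisfying the hypothesis ‖T‖ < |λ|; numpy is used for illustration only):

```python
import numpy as np

# Neumann expansion (Theorem 1.3): if ‖T‖ < |λ|, then
# (λI − T)^{-1} = (1/λ) Σ_{k≥0} (T/λ)^k.
T = np.array([[0.2, 0.1],
              [0.0, 0.3]])
lam = 1.0
assert np.linalg.norm(T, 2) < abs(lam)   # hypothesis of the theorem

I = np.eye(2)
partial = np.zeros((2, 2))
term = I
for k in range(60):                      # partial sums of (1/λ) Σ (T/λ)^k
    partial += term / lam
    term = term @ (T / lam)

exact = np.linalg.inv(lam * I - T)
assert np.allclose(partial, exact)
```

The series converges in the operator norm at a geometric rate governed by ‖T‖/|λ|.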
then {E_γ}_{γ∈Γ} is called a resolution of the identity on X. (Recall that for each x ∈ X the sum x = Σ_{γ∈Γ} E_γ x has only a countable number of nonzero vectors.) If {E_k}_{k=0}^{∞} is an infinite sequence, then the above identity in X means convergence in the strong operator topology:

Σ_{k=0}^{n} E_k →^s I  as n → ∞.
(See, e.g., [66, Theorems 5.52 and 5.59].) Let {Eγ }γ∈Γ be a resolution of
the identity on a nonzero Hilbert space made up of nonzero projections, let
{λγ }γ∈Γ be a similarly indexed family of scalars, and set
D(T) = {x ∈ H : {λ_γ E_γ x}_{γ∈Γ} is a summable family of vectors in H}.
is called a weighted sum of projections. It can be shown that the domain D(T )
of a weighted sum of projections is a linear manifold of H and that T is a
linear transformation. Moreover, T is bounded if and only if {λγ }γ∈Γ is a
bounded family of scalars, which happens if and only if D(T ) = H. In this
case T ∈ B[H] and ‖T‖ = sup_{γ∈Γ} |λ_γ|. (See, e.g., [66, Proposition 5.61].)
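A finite-dimensional model may help fix ideas: on H = C⁴ the coordinate projections form a resolution of the identity, and the corresponding weighted sum of projections is the diagonal operator with the weights on its diagonal, so that ‖T‖ = sup_k |λ_k|. A numerical sketch (the weights are arbitrary):

```python
import numpy as np

# T = Σ_k λ_k E_k with E_k = e_k e_k^*, the orthogonal projections onto
# the coordinate axes of C^4; then T = diag(λ_0, ..., λ_3).
weights = np.array([0.5, -2.0, 1.5, 0.25])
T = np.zeros((4, 4), dtype=complex)
for k, lam in enumerate(weights):
    e = np.zeros(4); e[k] = 1.0
    T += lam * np.outer(e, e)            # the term λ_k E_k

# ‖T‖ equals the supremum of the |λ_k| (here: 2.0).
assert np.isclose(np.linalg.norm(T, 2), np.max(np.abs(weights)))
```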
‖T∗‖² = ‖T∗T‖ = ‖T T∗‖ = ‖T‖².
R(T ∗ )⊥ = N (T ).
N (T ) = N (T ∗ T ).
This completes the proof of the first identities. Since these hold for every T
in B[H, K], they also hold for T ∗ in B[K, H] and for T T ∗ in B[K]. Thus
If R(T) = R(T)^−, then take the inverse T_⊥^{-1} in B[R(T), N(T)^⊥] (Theorem 1.2). For any w ∈ N(T)^⊥ consider the functional f_w : R(T) → C defined by

f_w(y) = ⟨T_⊥^{-1} y ; w⟩

for every y ∈ R(T). Since T_⊥^{-1} is linear and the inner product is linear in the first argument, it follows that f_w is linear. Since |f_w(y)| ≤ ‖T_⊥^{-1}‖‖w‖‖y‖ for every y ∈ R(T), it follows that f_w is bounded. Moreover, R(T) is a subspace of the Hilbert space K (it is closed in K), and so a Hilbert space itself. Then the Riesz Representation Theorem in Hilbert spaces (see, e.g., [66, Theorem 5.62]) says that there exists z_w in the Hilbert space R(T) such that

f_w(y) = ⟨y ; z_w⟩

⟨x ; T∗ z_w⟩ = ⟨T x ; z_w⟩ = ⟨T u ; z_w⟩ + ⟨T v ; z_w⟩ = ⟨T v ; z_w⟩
= f_w(T v) = ⟨T_⊥^{-1} T v ; w⟩ = ⟨T_⊥^{-1} T_⊥ v ; w⟩
= ⟨v ; w⟩ = ⟨u ; w⟩ + ⟨v ; w⟩ = ⟨x ; w⟩.
R(T ∗ ) = R(T ∗ )− .
Therefore R(T ) = R(T )− implies R(T ∗ ) = R(T ∗ )− , and so the converse also
holds because T ∗∗ = T . That is,
In spite of the equivalence between (a) and (c) above, the square of a hyponormal operator is not necessarily hyponormal. It is clear that every self-adjoint
E∗E = EE∗ =⇒ R(E) ⊥ N(E).
E = E∗ =⇒ ‖E‖ = 1 =⇒ ‖E‖ ≤ 1.
‖E‖ ≤ 1 =⇒ E = E∗.
which ensures the claimed result, thus completing the proof of Claim 1.
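The implications displayed above say in particular that an idempotent with norm at most 1 is self-adjoint, i.e., an orthogonal projection. A small numerical sketch in C² (the matrices are arbitrary examples): an oblique (non-self-adjoint) projection necessarily has norm greater than 1.

```python
import numpy as np

E = np.array([[1.0, 1.0],
              [0.0, 0.0]])
assert np.allclose(E @ E, E)             # idempotent: a projection
assert not np.allclose(E, E.T)           # but not self-adjoint
assert np.linalg.norm(E, 2) > 1.0        # so its norm exceeds 1 (here √2)

P = np.array([[1.0, 0.0],
              [0.0, 0.0]])               # an orthogonal projection
assert np.allclose(P, P.T)
assert np.isclose(np.linalg.norm(P, 2), 1.0)
```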
Claim 2. ‖T^k‖ = ‖T‖^k for every positive integer k ≤ n, for all n ≥ 1.
Proof. The above result holds trivially if T = O and it also holds trivially for n = 1 (for all T ∈ B[H]). Let T ≠ O and suppose the above result holds for some integer n ≥ 1. By Claim 1 we get

‖T‖^{2n} = (‖T‖^n)² = ‖T^n‖² ≤ ‖T^{n+1}‖‖T^{n-1}‖ = ‖T^{n+1}‖‖T‖^{n-1}.

Hence ‖T‖^{n+1} ≤ ‖T^{n+1}‖, and since the reverse inequality always holds, ‖T^{n+1}‖ = ‖T‖^{n+1}. Then the claimed result holds for n + 1 whenever it holds for n, which concludes the proof of Claim 2 by induction.

Therefore, ‖T^n‖ = ‖T‖^n for every integer n ≥ 1 by Claim 2, and so T is normaloid by Theorem 1.11.
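The identity ‖T^n‖ = ‖T‖^n is easy to observe numerically for a normal matrix, and to see fail for a non-normal one (a sketch with arbitrary sample matrices):

```python
import numpy as np

op_norm = lambda T: np.linalg.norm(T, 2)

N = np.diag([2.0, -1.0, 0.5])            # normal (indeed self-adjoint)
for n in range(1, 6):
    # ‖N^n‖ = ‖N‖^n holds exactly: both sides equal 2^n here.
    assert np.isclose(op_norm(np.linalg.matrix_power(N, n)), op_norm(N) ** n)

J = np.array([[0.0, 1.0],
              [0.0, 0.0]])               # Jordan block: ‖J²‖ = 0 < ‖J‖² = 1
assert op_norm(J @ J) < op_norm(J) ** 2
```

The nilpotent block J is not normaloid: its spectral radius is 0 while ‖J‖ = 1.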
Proof. Take any operator T ∈ B[H] and any scalar λ ∈ C . Observe that
by Theorem 1.7, which yields the inclusion in (a). If T is normal, then the
preceding inequality becomes an identity, yielding the identity in (b).
{M_n}_{n=0}^{∞} is a decreasing sequence of subspaces of H.
½|λ| < |λ|‖x_n − x_m‖ = ‖T x_n − T x_m‖,
We show next that m = n. Let {e_i}_{i=1}^{m} and {f_i}_{i=1}^{n} be orthonormal bases for the Hilbert spaces N(λI − T) and N(λ̄I − T∗). Set k = min{m, n} ≥ 1 and consider the mappings S : H → H and S∗ : H → H defined by

S x = Σ_{i=1}^{k} ⟨x ; e_i⟩ f_i  and  S∗ x = Σ_{i=1}^{k} ⟨x ; f_i⟩ e_i

for every x ∈ H. It is clear that S and S∗ lie in B[H], and also that S∗ is the adjoint of S; that is, ⟨S x ; y⟩ = ⟨x ; S∗ y⟩ for every x, y ∈ H. Actually,

R(S) ⊆ span{f_i}_{i=1}^{k} ⊆ N(λ̄I − T∗)  and  R(S∗) ⊆ span{e_i}_{i=1}^{k} ⊆ N(λI − T)

so that S and S∗ in B[H] are finite-rank operators, thus compact (i.e., they lie in B∞[H]), and hence T + S and T∗ + S∗ also lie in B∞[H] (since this is a
Together, the results of Theorems 1.17 and 1.19 are referred to as the
Fredholm Alternative, which is stated below.
(a1) f(x, y) = ¼[ f(x + y, x + y) − f(x − y, x − y) + i f(x + iy, x + iy) − i f(x − iy, x − iy) ].
Notes: These are standard results that will be required in the sequel. Proofs
can be found in many texts on operator theory (see Suggested Reading). For
instance, Proposition 1.A can be found in [66, Proposition 5.4 and Problem 5.3], and Proposition 1.B in [66, Propositions 4.7, 4.12, 4.15, and Example 4.P].
Proposition 1.C can be found in [27, Proposition III.4.3] (for a Hilbert space
proof see [50, Problem 13]), and Proposition 1.D in [27, Theorem III.13.2] or
[66, Problem 4.35]. For Propositions 1.E and 1.F see [66, Problems 4.20 and
4.22]. Proposition 1.G(a) is referred to as extension by continuity; see, e.g.,
[66, Theorem 4.35]. It implies Proposition 1.G(b): an orthogonal projection is
continuous, and T = TE|R(E) ∈ B[M−, H] is a restriction of TE ∈ B[X , H] to
R(E) = M−. Proposition 1.H will be needed in Theorem 5.4. For Proposition
1.H(a) see [66, Problem 5.8]. A proof of Proposition 1.H(b) goes as follows.
Proof of Proposition 1.H(b). Since (M ∩ N) ⊥ (M⊥ ∩ N), consider the orthogonal projection E ∈ B[H] on M⊥ ∩ N (i.e., R(E) = M⊥ ∩ N) along M ∩ N (i.e., N(E) = M ∩ N). Let L : N ⊖ (M ∩ N) → M⊥ ∩ N be the restriction of E to N ⊖ (M ∩ N). It is clear that L = E|_{N ⊖ (M∩N)} is linear, injective (N(L) = {0}), and surjective (R(L) = R(E) = M⊥ ∩ N). Therefore, N ⊖ (M ∩ N) is isomorphic to M⊥ ∩ N. Since M is closed and N is finite dimensional, M + N is closed (even if M and N are not orthogonal — cf. Proposition 1.C). Then (Projection Theorem) H = (M + N)⊥ ⊕ (M + N) and M⊥ = (M + N)⊥ ⊕ [M⊥ ⊖ (M + N)⊥], and hence M⊥ ⊖ (M + N)⊥ coincides with M⊥ ∩ (M + N) = M⊥ ∩ N. Therefore, N ⊖ (M ∩ N) is isomorphic to M⊥ ⊖ (M + N)⊥, and so they have the same dimension.
For Proposition 1.I see, e.g., [63, Solutions 4.1 and 4.6], and for Proposition 1.J
see, e.g., [62, Corollaries 4.2 and 4.8]. By the way, does quasisimilarity preserve
nontrivial invariant subspaces? (See, e.g., [76, p. 194].) As for Propositions 1.K
and 1.L see [66, Proposition 5.28 and p. 385]. Propositions 1.M, 1.N, and 1.O
are the classical Square Root Theorem, the Polar Decomposition Theorem,
and the Cartesian Decomposition Theorem (see, e.g., [66, Theorems 5.85
and 5.89, and Problem 5.46]). Propositions 1.P, 1.Q, 1.R can be found in
[66, Problems 6.16 and 6.17, and Theorem 5.73]. Propositions 1.S, 1.T, 1.U,
1.V, 1.W, 1.X, 1.Y, and 1.Z are all related to compact operators (see, e.g.,
[66, Theorem 4.52(d,e), Problem 4.51, p. 258, Problem 5.41, Proposition 4.50,
Remark to Theorem 4.52, Corollary 4.55]).
Suggested Reading
Theorem 2.1. The resolvent set ρ(T) is nonempty and open, and the spectrum σ(T) is compact.
Proof. Take any T ∈ B[X]. By the Neumann expansion (Theorem 1.3), if ‖T‖ < |λ|, then λ ∈ ρ(T). Equivalently, since σ(T) = C\ρ(T),
Remark. Since Bδ (λ) ⊆ ρ(T ), it follows that the distance of any λ in ρ(T ) to
the spectrum σ(T ) is greater than δ; that is (compare with Proposition 2.E),
Since RT (λ) − RT (ν) = RT (λ)[RT (ν)−1 − RT (λ)−1 ]RT (ν), it follows that
f′(ν) with the following property. For every ε > 0 there is a δ > 0 such that |(f(λ) − f(ν))/(λ − ν) − f′(ν)| < ε for all λ in Λ for which 0 < |λ − ν| < δ. If there exists such an f′(ν) ∈ C, then it is called the derivative of f at ν. If f′(ν) exists for every ν in Λ, then f : Λ → C is analytic (or holomorphic) on Λ. A function
f : C → C is entire if it is analytic on the whole complex plane C . To prove the
next result we need the Liouville Theorem, which says that every bounded
entire function is constant.
Remark. σ(T) is compact and nonempty, and so is its boundary ∂σ(T). Hence,
∂σ(T) = ∂ρ(T) ≠ ∅.
σ(T ) = σP (T ) ∪ σC (T ) ∪ σR (T ).
The diagram below, borrowed from [62], summarizes such a partition of the spectrum. The residual spectrum is split into two disjoint parts, σR(T) = σR1(T) ∪ σR2(T), and the point spectrum into four disjoint parts, σP(T) = ⋃_{i=1}^{4} σPi(T). We adopt the following abbreviated notation: T_λ = λI − T,
Nλ = N (Tλ ), and Rλ = R(Tλ ). Recall that if N (Tλ ) = {0}, then its linear
inverse Tλ−1 on Rλ is continuous if and only if Rλ is closed (Theorem 1.2).
[Diagram: the partition of σ(T) into σP1(T), ..., σP4(T), σR1(T), σR2(T), and σC(T), arranged according to whether N_λ = {0} or N_λ ≠ {0}, whether R_λ^− = X or R_λ^− ≠ X, and whether R_λ^− = R_λ or R_λ^− ≠ R_λ.]
Theorem 2.2 says that σ(T) ≠ ∅, but any of the above disjoint parts of the spectrum may be empty (Section 2.7). However, if σP(T) ≠ ∅, then a set of eigenvectors associated with distinct eigenvalues is linearly independent.
There are some overlapping parts of the spectrum which are commonly
used. For instance, the compression spectrum σCP (T ) and the approximate
point spectrum (or approximation spectrum) σAP (T ), which are defined by
σCP(T) = {λ ∈ C : R(λI − T) is not dense in X} = σP3(T) ∪ σP4(T) ∪ σR(T),
σAP(T) = {λ ∈ C : λI − T is not bounded below} = σP(T) ∪ σC(T) ∪ σR2(T) = σ(T)\σR1(T).
The points of σAP (T ) are referred to as the approximate eigenvalues of T .
Proof. Clearly (a) implies (b). If (b) holds, then there is no constant α > 0 such that α = α‖x_n‖ ≤ ‖(λI − T)x_n‖ for all n. Thus λI − T is not bounded below, and so (b) implies (c). Conversely, if λI − T is not bounded below, then there is no constant α > 0 such that α‖x‖ ≤ ‖(λI − T)x‖ for all x ∈ X or, equivalently, for every ε > 0 there exists a nonzero y_ε in X such that ‖(λI − T)y_ε‖ < ε‖y_ε‖. Set x_ε = ‖y_ε‖^{-1} y_ε, and hence (c) implies (a).
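In finite dimensions, inf_{‖x‖=1} ‖(λI − T)x‖ equals the smallest singular value of λI − T, so the equivalent conditions above can be tested numerically (a sketch with an arbitrary diagonal matrix):

```python
import numpy as np

# The infimum of ‖(λI − T)x‖ over unit vectors is the smallest singular
# value of λI − T; it vanishes exactly when λ is an approximate eigenvalue.
T = np.diag([1.0, 2.0, 3.0])
I = np.eye(3)

def lower_bound(lam):
    # inf_{‖x‖=1} ‖(λI − T)x‖ = smallest singular value of λI − T
    return np.linalg.svd(lam * I - T, compute_uv=False)[-1]

assert np.isclose(lower_bound(2.0), 0.0)   # 2 ∈ σP(T): not bounded below
assert np.isclose(lower_bound(0.5), 0.5)   # bounded below with α = 0.5
```

For this diagonal example the lower bound at λ is just the distance from λ to the set of diagonal entries.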
If sup_n ‖(λ_n I − T)^{-1}‖ < ∞, then (λ_n I − T)^{-1} → (λI − T)^{-1} in B[X], because (λ_n I − T) → (λI − T) in B[X], and so (λI − T)^{-1} ∈ B[X]. That is, (λI − T) ∈ G[X], which is again a contradiction (since λ ∈ σ(T)). This completes the proof of the claimed result.
Since ‖(λ_n I − T)^{-1}‖ = sup_{‖y‖=1} ‖(λ_n I − T)^{-1} y‖, take a unit vector y_n in X for which ‖(λ_n I − T)^{-1}‖ − 1/n ≤ ‖(λ_n I − T)^{-1} y_n‖ ≤ ‖(λ_n I − T)^{-1}‖ for each n. The above claim ensures that sup_n ‖(λ_n I − T)^{-1} y_n‖ = ∞, and hence inf_n ‖(λ_n I − T)^{-1} y_n‖^{-1} = 0, so that there exist subsequences of {λ_n} and {y_n}, say {λ_k} and {y_k}, for which

‖(λ_k I − T)^{-1} y_k‖^{-1} → 0.

Set x_k = ‖(λ_k I − T)^{-1} y_k‖^{-1} (λ_k I − T)^{-1} y_k and get a sequence {x_k} of unit vectors in X such that ‖(λ_k I − T) x_k‖ = ‖(λ_k I − T)^{-1} y_k‖^{-1}. Hence

‖(λI − T) x_k‖ = ‖(λ_k I − T) x_k − (λ_k − λ) x_k‖ ≤ ‖(λ_k I − T)^{-1} y_k‖^{-1} + |λ_k − λ|.
∂σ(T ) ⊆ σAP (T ).
This inclusion clearly implies that σAP(T) ≠ ∅ (for σ(T) is closed and nonempty). Finally, take an arbitrary λ ∈ C\σAP(T) so that λI − T is bounded
below. Therefore, there exists an α > 0 for which
For the remainder of this section we assume that T lies in B[H], where
H is a nonzero complex Hilbert space. This will bring forth some important
σR (T ) = σP (T ∗ )∗ \σP (T ).
Proof. Since S ∈ G[H] if and only if S ∗ ∈ G[H], we get ρ(T ) = ρ(T ∗ )∗ . Hence
σ(T )∗ = (C \ρ(T ))∗ = C \ρ(T ∗ ) = σ(T ∗ ). Recall that R(S)− = R(S) if and
only if R(S ∗ )− = R(S ∗ ), and N (S) = {0} if and only if R(S ∗ )⊥ = {0} (cf.
Lemmas 1.4 and 1.5), which means that R(S ∗ )− = H. Thus σP1(T ) = σR1(T ∗ )∗,
σP2(T ) = σR2(T ∗ )∗, σP3(T ) = σP3(T ∗ )∗, and σP4(T ) = σP4(T ∗ )∗. Applying the
same argument, σC (T ) = σC (T ∗ )∗ and σCP (T ) = σP (T ∗ )∗ . Hence,
Therefore, σAP (T ∗ )∗ ∩ σAP (T ) = σ(T )\(σP1(T ) ∪ σR1(T )). But σ(T ) is closed
and σR1(T ) is open (and so is σP1(T ) = σR1(T ∗ )∗ ) in C . This implies that
σP1(T ) ∪ σR1(T ) ⊆ σ(T )◦ , where σ(T )◦ denotes the interior of σ(T ), and
∂σ(T ) ⊆ σ(T )\(σP1(T ) ∪ σR1(T )).
with β_n = (−1)^{n+1} α_n, where {λ_i}_{i=1}^{n} are the roots of ν − p(λ), so that

νI − p(T) = νI − Σ_{i=0}^{n} α_i T^i = β_n Π_{i=1}^{n} (λ_i I − T).

If λ_i ∈ ρ(T) for every i = 1, ..., n, then β_n Π_{i=1}^{n} (λ_i I − T) ∈ G[X] so that ν ∈ ρ(p(T)). Thus if ν ∈ σ(p(T)), then there exists λ_j ∈ σ(T) for some j = 1, ..., n. However, λ_j is a root of ν − p(λ), that is,

ν − p(λ_j) = β_n Π_{i=1}^{n} (λ_i − λ_j) = 0.

If ν ∈ ρ(p(T)), then (νI − p(T)) ∈ G[X] so that

(λ_j I − T) β_n Π_{i≠j} (λ_i I − T)(νI − p(T))^{-1} = I = (νI − p(T))^{-1} β_n Π_{i≠j} (λ_i I − T)(λ_j I − T),

since (λ_j I − T) commutes with (λ_i I − T) for every i. Then (λ_j I − T) has a right and a left inverse (i.e., it is invertible), and so (λ_j I − T) ∈ G[X] by Theorem 1.1. Hence, λ = λ_j ∈ ρ(T), which contradicts the fact that λ ∈ σ(T). Conclusion: if ν ∈ p(σ(T)), then ν ∉ ρ(p(T)), that is, ν ∈ σ(p(T)). Thus

p(σ(T)) ⊆ σ(p(T)).
In particular,
σ(T n ) = σ(T )n for every n ≥ 0,
that is, ν ∈ σ(T )n = {λn ∈ C : λ ∈ σ(T )} if and only if ν ∈ σ(T n ), and
that is, ν ∈ ασ(T ) = {αλ ∈ C : λ ∈ σ(T )} if and only if ν ∈ σ(αT ). Also notice
(even though this is not a particular case of the Spectral Mapping Theorem
for polynomials) that if T ∈ G[X ], then
σ(T ∗ ) = σ(T )∗
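The Spectral Mapping Theorem for polynomials can be checked numerically on matrices (an illustrative sketch; the matrix and polynomial are arbitrary choices):

```python
import numpy as np

# σ(p(T)) = p(σ(T)) for p(λ) = λ² − 3λ + 1 and an arbitrary 2×2 matrix.
T = np.array([[2.0, 1.0],
              [0.0, 5.0]])
p = lambda lam: lam**2 - 3*lam + 1
pT = T @ T - 3*T + np.eye(2)             # p evaluated at the operator T

spec_T = np.linalg.eigvals(T)            # {2, 5}
spec_pT = np.linalg.eigvals(pT)          # should be {p(2), p(5)} = {−1, 11}
assert np.allclose(np.sort(spec_pT.real), np.sort(p(spec_T).real))
```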
Take a surjective homomorphism Φ : A(T) → C (i.e., take any Φ ∈ Â(T), the set of all such homomorphisms). Consider the Cartesian decomposition T = A + iB, where A, B ∈ B[H] are self-adjoint, and so T∗ = A − iB (Proposition 1.O). Thus Φ(T) = Φ(A) + iΦ(B) and Φ(T∗) = Φ(A) − iΦ(B). Since A = ½(T + T∗) and B = −(i/2)(T − T∗) lie in P(T, T∗), we get A(A) = A(B) = A(T). Moreover, since they are self-adjoint,

{Φ(A) ∈ C : Φ ∈ Â(T)} = σ(A) ⊂ R  and  {Φ(B) ∈ C : Φ ∈ Â(T)} = σ(B) ⊂ R

(Propositions 2.A and 2.Q(b)), and so Φ(A) ∈ R and Φ(B) ∈ R. Hence

Φ(T∗) = Φ(T)‾, the complex conjugate of Φ(T).

Therefore, since σ(T∗) = σ(T)∗ for every T ∈ B[H] by Theorem 2.6, and according to Proposition 2.Q(b),

σ(p(T, T∗)) = {p(Φ(T), Φ(T)‾) ∈ C : Φ ∈ Â(T)}
            = {p(λ, λ̄) ∈ C : λ ∈ {Φ(T) ∈ C : Φ ∈ Â(T)}}
            = {p(λ, λ̄) ∈ C : λ ∈ σ(T)}
            = p(σ(T), σ(T)∗) = p(σ(T), σ(T∗)).
The first identity in the above expression defines the spectral radius rσ (T ),
and the second one is a consequence of the Weierstrass Theorem (cf. proof of
Theorem 2.2) since σ(T) ≠ ∅ is compact in C and the function |·| : C → R is
continuous. A straightforward consequence of the Spectral Mapping Theorem
for polynomials reads as follows.
The next result is the well-known Gelfand–Beurling formula for the spec-
tral radius. Its proof requires another piece of elementary complex analysis,
viz., every analytic function has a power series representation. That is, if
f : Λ → C is analytic, and if B_{α,β}(ν) = {λ ∈ C : 0 ≤ α < |λ − ν| < β} lies in the open set Λ ⊆ C, then f has a unique Laurent expansion about the point ν, namely, f(λ) = Σ_{k=−∞}^{∞} γ_k (λ − ν)^k for every λ ∈ B_{α,β}(ν).
Proof. Since r_σ(T)^n ≤ ‖T^n‖ for every positive integer n, and since the limit of the sequence {‖T^n‖^{1/n}} exists by Lemma 1.10, we get

r_σ(T) ≤ lim_n ‖T^n‖^{1/n}.

For the reverse inequality, proceed as follows. Consider the Neumann expansion (Theorem 1.3) for the resolvent function R_T : ρ(T) → G[X],

R_T(λ) = (λI − T)^{-1} = λ^{-1} Σ_{k=0}^{∞} T^k λ^{-k}

for every λ ∈ ρ(T) such that ‖T‖ < |λ|, where the above series converges in the (uniform) topology of B[X]. Take an arbitrary bounded linear functional η : B[X] → C in B[X]∗ (cf. proof of Theorem 2.2). Since η is continuous,

η(R_T(λ)) = λ^{-1} Σ_{k=0}^{∞} η(T^k) λ^{-k}
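The Gelfand–Beurling formula r_σ(T) = lim_n ‖T^n‖^{1/n} is easy to observe numerically. For a non-normal matrix the convergence is from above and ‖T‖ can greatly overestimate the spectral radius (a sketch with an arbitrary matrix):

```python
import numpy as np

# r(T) = 0.5 here, while ‖T‖ ≈ 100; ‖T^n‖^{1/n} approaches 0.5 from above.
T = np.array([[0.5, 100.0],
              [0.0, 0.5]])
r = np.max(np.abs(np.linalg.eigvals(T)))       # spectral radius = 0.5

n = 200
approx = np.linalg.norm(np.linalg.matrix_power(T, n), 2) ** (1.0 / n)
assert r <= approx <= np.linalg.norm(T, 2)     # r(T) ≤ ‖T^n‖^{1/n} ≤ ‖T‖
assert abs(approx - r) < 0.1
```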
The converses to the above one-way implications fail in general. The next
result applies the preceding characterization of uniform stability to extend
the Neumann expansion of Theorem 1.3.
Therefore, (λI − T)^{-1} = (1/λ) Σ_{k=0}^{∞} (T/λ)^k, where Σ_{k=0}^{∞} (T/λ)^k ∈ B[X] is the strong limit of Σ_{k=0}^{n} (T/λ)^k, which concludes the proof of (b).
It can be shown that W (T ) is a convex set in C (see, e.g., [50, Problem 210]),
and it is clear that
W (T ∗ ) = W (T )∗ .
σP (T ) ∪ σR (T ) ⊆ W (T ).
σ(T ) = σR (T ) ∪ σAP (T ) ⊆ W (T )− .
Unlike the spectral radius, the numerical radius is a norm on B[H]. That is, 0 ≤ w(T) for every T ∈ B[H] and 0 < w(T) if T ≠ O, w(αT) = |α|w(T), and w(T + S) ≤ w(T) + w(S) for every α ∈ C and every S, T ∈ B[H]. However, the
numerical radius does not have the operator norm property in the sense that
the inequality w(S T ) ≤ w(S)w(T ) is not true for all operators S, T ∈ B[H].
Nevertheless, the power inequality holds: w(T n ) ≤ w(T )n for all T ∈ B[H] and
every positive integer n (see, e.g., [50, p. 118 and Problem 221]). Moreover,
the numerical radius is a norm equivalent to the (induced uniform) operator
norm of B[H] and dominates the spectral radius, as in the next theorem.
|⟨T x ; y⟩| ≤ ¼( |⟨T(x + y) ; (x + y)⟩| + |⟨T(x − y) ; (x − y)⟩| + |⟨T(x + iy) ; (x + iy)⟩| + |⟨T(x − iy) ; (x − iy)⟩| )
≤ ¼ w(T)( ‖x + y‖² + ‖x − y‖² + ‖x + iy‖² + ‖x − iy‖² ) = 2 w(T)

whenever ‖x‖ = ‖y‖ = 1 (the last identity by the parallelogram law). Thus, since ‖T‖ = sup_{‖x‖=‖y‖=1} |⟨T x ; y⟩| (see, e.g., [66, Corollary 5.71]), it follows that ‖T‖ ≤ 2w(T).
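For matrices, w(T) can be computed from the standard characterization w(T) = max_θ λ_max(Re(e^{iθ}T)), with Re(S) = ½(S + S∗). The sketch below (an arbitrary example, with a coarse grid over θ) checks the resulting inequalities ½‖T‖ ≤ w(T) ≤ ‖T‖:

```python
import numpy as np

T = np.array([[0.0, 1.0],
              [0.0, 0.0]], dtype=complex)

def numerical_radius(T, grid=720):
    # w(T) = max over θ of the largest eigenvalue of the Hermitian part
    # of e^{iθ} T (a standard characterization of the numerical radius).
    thetas = np.linspace(0.0, 2.0*np.pi, grid, endpoint=False)
    return max(np.linalg.eigvalsh(
        (np.exp(1j*t)*T + np.exp(-1j*t)*T.conj().T) / 2.0)[-1] for t in thetas)

w = numerical_radius(T)
norm = np.linalg.norm(T, 2)
assert norm/2.0 - 1e-9 <= w <= norm + 1e-9   # ½‖T‖ ≤ w(T) ≤ ‖T‖
```

For this nilpotent block the lower bound is attained: w(T) = ½ while ‖T‖ = 1.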
The scalar 0 may be anywhere. That is, if T ∈ B∞[H], then λ = 0 may lie
in σP (T ), σR (T ), σC (T ), or ρ(T ). However, if T is a compact operator on a
nonzero space H and 0 ∈ ρ(T ), then H must be finite dimensional. Indeed,
if 0 ∈ ρ(T ), then T −1 ∈ B[H] so that I = T −1 T is compact (since B∞[H] is
an ideal of B[H]), which implies that H is finite dimensional (cf. Proposition
1.Y). The preceding theorem in fact is a rewriting of the Fredholm Alternative
(and it is also referred to as the Fredholm Alternative). It will be applied
often from now on. Here is a first application. Let B0 [H] denote the class of
all finite-rank operators on H (i.e., the class of all operators from B[H] with
a finite-dimensional range). Recall that B0 [H] ⊆ B∞[H] (finite-rank operators
are compact — cf. Proposition 1.X). Let #A denote the cardinality of a set
A, so that #A < ∞ means “A is a finite set”.
Proof. If dim H < ∞, then an injective operator is surjective, and linear mani-
folds are closed (see, e.g., [66, Problem 2.18 and Corollary 4.29]), and so the di-
agram of Section 2.2 says that σ(T ) = σP (T ) = σP4(T ) (for σP1(T ) = σR1(T ∗ )∗
according to Theorem 2.6). On the other hand, suppose dim H = ∞. Since
B0 [H] ⊆ B∞[H], Theorem 2.18 says that
σ(T )\{0} = σP (T )\{0} = σP4(T )\{0}.
Since dim R(T) < ∞ and dim H = ∞, it follows that R(T)^− = R(T) ≠ H and N(T) ≠ {0} (because dim N(T) + dim R(T) = dim H; see, e.g., [66, Problem 2.17]). Then 0 ∈ σP4(T) (cf. diagram of Section 2.2), and therefore
σ(T ) = σP (T ) = σP4(T ).
If σP (T ) is infinite, then there exists an infinite set of linearly independent
eigenvectors of T (Theorem 2.3). Since every eigenvector of T lies in R(T ),
this implies that dim R(T ) = ∞ (because every linearly independent subset
of a linear space is included in some Hamel basis — see, e.g., [66, Theorem
2.5]), which is a contradiction. Conclusion: σP (T ) must be finite.
Mn ⊂ Mn+1
(λ_{n+1} I − T) y_{n+1} = Σ_{i=1}^{n+1} α_i (λ_{n+1} − λ_i) x_i = Σ_{i=1}^{n} α_i (λ_{n+1} − λ_i) x_i ∈ M_n.
Recall that λ_n ≠ 0 for all n, take any pair of integers 1 ≤ m < n, and set

y = y_m − λ_m^{-1}(λ_m I − T) y_m + λ_n^{-1}(λ_n I − T) y_n

so that T(λ_m^{-1} y_m) − T(λ_n^{-1} y_n) = y − y_n. Since y lies in M_{n−1},

½ < ‖y − y_n‖ = ‖T(λ_m^{-1} y_m) − T(λ_n^{-1} y_n)‖,
Proposition 2.A. Let H = {0} be a complex Hilbert space and let T denote
the unit circle about the origin of the complex plane.
(a) If H ∈ B[H] is hyponormal, then σP (H)∗ ⊆ σP (H ∗ ) and σR (H ∗ ) = ∅.
(b) If N ∈ B[H] is normal, then σP (N ∗ ) = σP (N )∗ and σR (N ) = ∅.
(c) If U ∈ B[H] is unitary, then σ(U ) ⊆ T .
(d) If A ∈ B[H] is self-adjoint, then σ(A) ⊂ R .
(e) If Q ∈ B[H] is nonnegative, then σ(Q) ⊂ [0, ∞).
(f) If R ∈ B[H] is strictly positive, then σ(R) ⊂ [α, ∞) for some α > 0.
(g) If E ∈ B[H] is a nontrivial projection, then σ(E) = σP (E) = {0, 1}.
Proposition 2.B. Similarity preserves the spectrum and its parts, and so it
preserves the spectral radius. That is, let H and K be nonzero complex Hilbert
spaces. For every T ∈ B[H] and W ∈ G[H, K],
(a) σP (T ) = σP (W T W −1 ),
(b) σR (T ) = σR (W T W −1 ),
(c) σC (T ) = σC (W T W −1 ).
Hence σ(T ) = σ(W T W −1 ), ρ(T ) = ρ(W T W −1 ), and r(T ) = r(W T W −1 ).
Unitary equivalence also preserves the norm: if W is a unitary transformation, then, in addition, ‖T‖ = ‖W T W^{-1}‖.
Proposition 2.E. Take any operator T ∈ B[H]. Let d denote the usual dis-
tance in C . If λ ∈ ρ(T ), then
Proposition 2.I. If H and K are complex Hilbert spaces and T ∈ B[H], then

r(T) = inf_{W ∈ G[H,K]} ‖W T W^{-1}‖.
Proposition 2.M. Let D and T = ∂ D denote the open unit disk and the unit
circle about the origin of the complex plane, respectively. If S+ ∈ B[K+] is a
unilateral shift and S ∈ B[K] is a bilateral shift on complex spaces, then
(a) σP (S+ ) = σR (S+∗ ) = ∅, σR (S+ ) = σP (S+∗ ) = D , σC (S+ ) = σC (S+∗ ) = T .
(b) σ(S) = σ(S ∗ ) = σC (S ∗ ) = σC (S) = T .
A unilateral weighted shift T_+ = S_+ D_+ in B[ℓ²_+(H)] is the product of a canonical unilateral shift S_+ and a diagonal operator D_+ = ⊕_{k=0}^{∞} α_k I, both in B[ℓ²_+(H)], where {α_k}_{k=0}^{∞} is a bounded sequence of scalars. A bilateral weighted shift T = SD in B[ℓ²(H)] is the product of a canonical bilateral shift S and a diagonal operator D = ⊕_{k=−∞}^{∞} α_k I, both in B[ℓ²(H)], where {α_k}_{k=−∞}^{∞} is a bounded family of scalars.

Proposition 2.N. Let T_+ ∈ B[ℓ²_+(H)] be a unilateral weighted shift, and let T ∈ B[ℓ²(H)] be a bilateral weighted shift, where H is complex.
(a) If α_k → 0 as |k| → ∞, then T_+ and T are compact and quasinilpotent.
If, in addition, α_k ≠ 0 for all k, then
(b) σ(T_+) = σR(T_+) = σR2(T_+) = {0} and σ(T_+∗) = σP(T_+∗) = σP2(T_+∗) = {0},
(c) σ(T) = σC(T) = σC(T∗) = σ(T∗) = {0}.
Proposition 2.Q. Let A be any unital complex Banach algebra (for instance, A = B[X]). If T ∈ A′, where A′ is any closed unital subalgebra of A, then
(a) ρ_{A′}(T) ⊆ ρ_A(T) and r_{A′}(T) = r_A(T) (invariance of the spectral radius).
Hence ∂σ_{A′}(T) ⊆ ∂σ_A(T) and σ_A(T) ⊆ σ_{A′}(T).
Thus σ_{A′}(T) is obtained by adding to σ_A(T) some holes of σ_A(T).
(b) If A′ is a maximal commutative subalgebra of A, then
σ_A(T) = {Φ(T) ∈ C : Φ ∈ Â′} for each T ∈ A′.
This is the Rosenblum Corollary, which will be used to prove the Fuglede
Theorems in Chapter 3.
Notes: Again, as in the previous chapter, these are basic results that will be
needed throughout the text. We will not point out here the original sources
but, instead, well-known secondary sources where the reader can find the
proofs for those propositions, as well as some deeper discussions on them.
Proposition 2.A holds independently of the forthcoming Spectral Theorem
(Theorem 3.11) — see, e.g., [66, Corollary 6.18]. A partial converse, however,
needs the Spectral Theorem (see Proposition 3.D in Chapter 3). Proposition
2.B is a standard result (see, e.g., [50, Problem 75] and [66, Problem 6.10]), as
is Proposition 2.C (see, e.g., [50, Problem 76]). Proposition 2.D also dispenses with the Spectral Theorem of the next chapter; it is obtained from the Square Root Theorem (Proposition 1.M) and the Spectral Mapping Theorem (Theorem 2.7). Proposition 2.E is a rather useful technical result (see, e.g., [63, Problem 6.14]). Proposition 2.F is a synthesis of some sparse results (cf. [28, Proposition I.5.1], [50, Solution 98], [55, Theorem 5.42], [66, Problem 6.37], and Proposition 2.E). For Propositions 2.G and 2.H see [66, Sections 6.3 and 6.4]. The
spectral radius formula in Proposition 2.I (see, e.g., [42, p. 22]) ensures that an
operator is uniformly stable if and only if it is similar to a strict contraction
(hint: use the equivalence between (a) and (b) in Corollary 2.11). For Propo-
sitions 2.J and 2.K see [66, Sections 6.3 and 6.4]. The examples of spectra
in Propositions 2.L, 2.M, and 2.N are widely known (see, e.g., [66, Examples
6.C, 6.D, 6.E, 6.F, and 6.G]). Proposition 2.O deals with the equivalence be-
tween uniform and weak stabilities for compact operators, and it is extended
to a wider class of operators in Proposition 2.P (see [63, Problems 8.8 and
8.9]). Proposition 2.Q(a) is readily verified, where the invariance of the spec-
tral radius follows by the Gelfand–Beurling formula. Proposition 2.Q(b) is a
key result for proving both Theorem 2.8 (the Spectral Mapping Theorem for
normal operators) and also Lemma 5.43 of Chapter 5 (for the characterization
of the Browder spectrum in Theorem 5.44) — see, e.g., [76, Theorems 0.3 and
0.4]. For Propositions 2.R, 2.S, and 2.T see [80, Theorem 11.22], [76, Theorem
0.8], and [76, Corollary 0.13], respectively. Proposition 2.T will play a central
role in the proof of the Fuglede Theorems of the next chapter (precisely, in
Corollary 3.5 and Theorem 3.17).
Suggested Reading
Take an arbitrary λ ∈ C such that λ ≠ λ_γ for all γ ∈ Γ. The weighted sum of projections λI − T = Σ_{γ∈Γ} (λ − λ_γ) E_γ has an inverse, and this inverse is the weighted sum of projections (λI − T)^{-1} = Σ_{γ∈Γ} (λ − λ_γ)^{-1} E_γ because

( Σ_{α∈Γ} (λ − λ_α)^{-1} E_α )( Σ_{β∈Γ} (λ − λ_β) E_β x ) = Σ_{α∈Γ} Σ_{β∈Γ} (λ − λ_α)^{-1}(λ − λ_β) E_α E_β x = Σ_{γ∈Γ} E_γ x = x

for every x in H (since the resolution of the identity {E_γ}_{γ∈Γ} is an orthogonal family of continuous projections, and the inverse is unique). Moreover, the weighted sum of projections (λI − T)^{-1} is bounded if and only if sup_{γ∈Γ} |λ − λ_γ|^{-1} < ∞, that is, if and only if inf_{γ∈Γ} |λ − λ_γ| > 0. So

ρ(T) = {λ ∈ C : inf_{γ∈Γ} |λ − λ_γ| > 0}.
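For a finite family of weights this computation reduces to a diagonal matrix, where the description of ρ(T), and the norm of the inverse, can be verified directly (a numerical sketch with arbitrary weights):

```python
import numpy as np

# Diagonal model of a weighted sum of projections on C^4.
weights = np.array([0.0, 1.0, 1.0, 3.0])
T = np.diag(weights)

def in_resolvent(lam):
    # λ ∈ ρ(T) if and only if inf_γ |λ − λ_γ| > 0
    return np.min(np.abs(lam - weights)) > 0

assert not in_resolvent(1.0)               # 1 is an eigenvalue: 1 ∈ σ(T)
assert in_resolvent(2.0)                   # d(2, weights) = 1 > 0: 2 ∈ ρ(T)
# and ‖(λI − T)^{-1}‖ = (inf_γ |λ − λ_γ|)^{-1}:
assert np.isclose(np.linalg.norm(np.linalg.inv(2.0*np.eye(4) - T), 2), 1.0)
```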
We have just seen that dim R(T_n) < ∞. That is, each T_n is a finite-rank operator. However, since E_j ⊥ E_k whenever j ≠ k, we get

‖(T − T_n)x‖² = ‖Σ_k λ_k E_k x‖² ≤ sup_k |λ_k|² Σ_k ‖E_k x‖² ≤ (1/n²)‖x‖²

for all x ∈ H, so that T_n →^u T. Then T is compact by Proposition 1.Z.
Proof. We have already seen (in the proof of the previous theorem) that σP(T) is nonempty and countable. Recall that σP(T) = {0} if and only if T = O (since T is normaloid; see Corollary 2.21) or, equivalently, if and only if N(T)⊥ = {0} (i.e., N(T) = H). If T = O, then the above assertions hold trivially (σP(T)\{0} = ∅, {ek} = ∅, N(T)⊥ = {0}, and Tx = 0x = 0 for every x ∈ H because the empty sum is null). Thus suppose T ≠ O (so that N(T)⊥ ≠ {0}), and take an arbitrary λ ≠ 0 in σP(T). Theorem 1.19 ensures that dim N(λI − T) is finite, say, dim N(λI − T) = nλ for some positive integer nλ. This ensures the existence of a finite orthonormal basis {ek(λ)}_{k=1}^{nλ} for the Hilbert space N(λI − T) ≠ {0}. Recall that each ek(λ) is an eigenvector of T (since 0 ≠ ek(λ) ∈ N(λI − T)).

Claim. ⋃_{λ∈σP(T)\{0}} {ek(λ)}_{k=1}^{nλ} is an orthonormal basis for N(T)⊥.
Proof. Theorem 3.3 ensures that

H = (⊕_{λ∈σP(T)} N(λI − T))⁻.

Take any x ∈ H and write x = u + v with u ∈ N(T) and v ∈ N(T)⊥. Consider the Fourier series expansion v = ∑_k ⟨v ; ek⟩ek = ∑_{λ∈σP(T)\{0}} ∑_{k=1}^{nλ} ⟨v ; ek(λ)⟩ek(λ) of v in terms of the orthonormal basis {ek} = ⋃_{λ∈σP(T)\{0}} {ek(λ)}_{k=1}^{nλ} for the Hilbert space N(T)⊥ ≠ {0}. Since T is linear and continuous, and since T ek(λ) = λ ek(λ) for each k = 1, . . . , nλ and each λ ∈ σP(T)\{0}, it follows that

Tx = Tu + Tv = Tv = ∑_{λ∈σP(T)\{0}} ∑_{k=1}^{nλ} ⟨v ; ek(λ)⟩ T ek(λ) = ∑_{λ∈σP(T)\{0}} ∑_{k=1}^{nλ} λ ⟨v ; ek(λ)⟩ ek(λ).
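A finite-dimensional sketch of this expansion (an illustrative matrix and vector, not taken from the text): for the self-adjoint, hence normal, matrix T = [ 2 1 ; 1 2 ], reconstructing Tx from the eigenvector expansion agrees with applying T directly.

```python
import math

# T = [[2, 1], [1, 2]] has eigenvalues 3 and 1 with orthonormal eigenvectors e1, e2.
e1 = (1 / math.sqrt(2), 1 / math.sqrt(2))    # eigenvector for lambda = 3
e2 = (1 / math.sqrt(2), -1 / math.sqrt(2))   # eigenvector for lambda = 1

def inner(x, y):
    return x[0] * y[0] + x[1] * y[1]

x = (0.7, -0.3)
direct = (2 * x[0] + x[1], x[0] + 2 * x[1])            # Tx computed directly
c1, c2 = inner(x, e1), inner(x, e2)                    # Fourier coefficients <x ; e_k>
expanded = (3 * c1 * e1[0] + 1 * c2 * e2[0],           # sum_k lambda_k <x ; e_k> e_k
            3 * c1 * e1[1] + 1 * c2 * e2[1])
```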
H. In this case, {Eγ}γ∈Γ is said to diagonalize T. (See also Proposition 3.A — just recall that the identity on any infinite-dimensional Hilbert space H is trivially diagonalizable (a diagonal, actually, if H = ℓ²(Γ)), and thus normal, but not compact — cf. Proposition 1.Y.)
The implication (b) ⇒ (d) in Corollary 3.5 is the compact version of the
Fuglede Theorem (Theorem 3.17) that will be developed in Section 3.5.
The central result in the above statement is the implication (b) ⇒ (e). The main idea behind its proof relies on Proposition 2.T (also see, e.g., [76, Theorem 1.16]). Take any operator S on H. Suppose ST = TS. Since

Ek T = λk Ek = T Ek

(recall: Ek T = ∑_j λj Ek Ej = λk Ek = ∑_j λj Ej Ek = T Ek), it follows that each R(Ek) = N(λk I − T) reduces T (Proposition 1.I), so that

σ(T|R(Ek)) ∩ σ(T|R(Ej)) = ∅ whenever j ≠ k,

and hence, by Proposition 2.T,

Ej S Ek = O whenever j ≠ k.
∑_{k=1}^n E(Λk) → E(⋃_k Λk) strongly.

Indeed, since Λj ∩ Λk = ∅ if j ≠ k, it follows by properties (b) and (c) that E(Λj)E(Λk) = E(Λj ∩ Λk) = E(∅) = O for j ≠ k, so that {E(Λk)} is an orthogonal sequence of projections; in particular, when ⋃_k Λk = Ω,

∑_{k=1}^n E(Λk) → E(⋃_k Λk) = E(Ω) = I strongly.
Let E: AΩ → B[H] be any spectral measure into B[H]. For each pair of vectors x, y ∈ H consider the mapping πx,y: AΩ → C defined by πx,y(Λ) = ⟨E(Λ)x ; y⟩ for every Λ ∈ AΩ. Thus, by the polarization identity for sesquilinear forms (Proposition 1.A(a₁)), and by the parallelogram law (Proposition 1.A(b)), we get

|f(x, y)| ≤ ‖φ‖∞ (‖x‖² + ‖y‖²)
a star-cyclic vector x for T, which means that there is a vector x ∈ H such that ⋁_{m,n}{Tⁿ T*ᵐ x} = H (each index m and n runs over all nonnegative integers; that is, (m, n) ∈ N₀ × N₀). In part (b) we consider the complementary case.
(a) Fix a unit vector x ∈ H. Take a compact ∅ ≠ Ω ⊂ C. Let P(Ω) be the set of all polynomials p(·, ·): Ω × Ω → C in λ and λ̄ (equivalently, p(·, ·): Ω → C such that λ ↦ p(λ, λ̄)). Let C(Ω) be the set of all complex-valued continuous functions on Ω. Thus P(Ω) ⊂ C(Ω). When equipped with the sup-norm, C(Ω) is a Banach space. The Stone–Weierstrass Theorem for complex functions (see, e.g., [31, p. 139] or [27, Theorem V.8.1]) says that, if Ω is compact, then P(Ω)⁻ = C(Ω). That is, P(Ω) is dense in the Banach space (C(Ω), ‖·‖∞). Consider the functional Φ: P(Ω) → C given, for each p ∈ P(Ω), by

Φ(p) = ⟨p(T, T∗)x ; x⟩.

Then |Φ(p)| ≤ ‖p(T, T∗)‖ by the Schwarz inequality, since ‖x‖ = 1 and p(T, T∗) is normal (thus normaloid). From now on set Ω = σ(T). According to Theorem 2.8 (the Spectral Mapping Theorem for normal operators), ‖p(T, T∗)‖ = r(p(T, T∗)) = sup_{λ∈σ(T)} |p(λ, λ̄)|, and so Φ is bounded.

Claim. Φ(ψ) ≥ 0 for every ψ ∈ C(σ(T)) such that ψ(λ) ≥ 0 for every λ ∈ σ(T).
Proof. Take any ψ ∈ C(σ(T)) such that ψ(λ) ≥ 0 for every λ ∈ σ(T). Take an arbitrary ε > 0. Since Φ is continuous and P(σ(T))⁻ = C(σ(T)), there is a polynomial pε ∈ P(σ(T)) such that pε(λ, λ̄) ≥ 0 for all λ ∈ σ(T), and

sup_{λ∈σ(T)} |ψ(λ) − pε(λ, λ̄)| ≤ ε, so that |Φ(pε) − Φ(ψ)| ≤ ε.

The Stone–Weierstrass Theorem for real functions (see, e.g., [31, p. 137] or [27, Theorem V.8.1]) ensures that there exists a polynomial qε in the real variables α, β and with real coefficients, (α, β) ↦ qε(α, β) ∈ R, such that

sup_{λ=α+iβ∈σ(T)} |qε(α, β)² − pε(λ, λ̄)| ≤ ε.

Hence

|⟨qε(A, B)²x ; x⟩ − Φ(pε)| = |⟨qε(A, B)²x ; x⟩ − ⟨pε(T, T∗)x ; x⟩|
= |⟨(qε(A, B)² − pε(T, T∗))x ; x⟩|
≤ ‖qε(A, B)² − pε(T, T∗)‖
= r(qε(A, B)² − pε(T, T∗))

since qε(A, B)² − pε(T, T∗) is normal, thus normaloid (reason: qε(A, B)² is a self-adjoint operator that commutes with the normal operator pε(T, T∗)). So

|⟨qε(A, B)²x ; x⟩ − Φ(pε)| ≤ sup_{λ=α+iβ∈σ(T)} |qε(α, β)² − pε(λ, λ̄)| ≤ ε,

and therefore

|⟨qε(A, B)²x ; x⟩ − Φ(ψ)| ≤ |⟨qε(A, B)²x ; x⟩ − Φ(pε)| + |Φ(pε) − Φ(ψ)| ≤ 2ε.

But 0 ≤ ⟨qε(A, B)²x ; x⟩, and so 0 ≤ ⟨qε(A, B)²x ; x⟩ ≤ 2ε + Φ(ψ) for every ε > 0, which implies that 0 ≤ Φ(ψ), thus completing the proof.
Now take the bounded, linear, positive functional Φ on C(Ω) (positiveness is understood in the sense of the above Claim) with Ω = σ(T), which is compact. The Riesz Representation Theorem in C(Ω) (see, e.g., [79, Theorem 2.14]) ensures the existence of a finite positive measure μ on the σ-algebra AΩ = Aσ(T) of Borel subsets of Ω (where continuous functions are measurable) such that

Φ(ψ) = ∫ ψ dμ for every ψ ∈ C(Ω).
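For a diagonal operator this representation is concrete: the measure is atomic, with mass |⟨x ; ek⟩|² at each eigenvalue. A small sketch (hypothetical eigenvalues and vector, not from the text):

```python
# For T = diag{lam_k} and a unit vector x, Phi(psi) = <psi(T)x ; x> is
# represented by the atomic measure mu({lam_k}) = |x_k|^2.
lams = [0.5, -1.0, 2.0]
x = [0.6, 0.0, 0.8]                      # unit vector: 0.36 + 0 + 0.64 = 1
psi = lambda lam: lam * lam + 1.0        # a nonnegative continuous test function

phi_psi = sum(psi(l) * abs(c) ** 2 for l, c in zip(lams, x))   # <psi(T)x ; x>
mu = {l: abs(c) ** 2 for l, c in zip(lams, x)}                 # representing measure
integral = sum(psi(l) * m for l, m in mu.items())              # integral of psi d(mu)
# phi_psi == integral, and both are >= 0 since psi >= 0 on the spectrum
```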
Consider the mapping U: P(Ω) → H given by

Up = p(T, T∗)x.

Then ‖Up‖ = ‖p‖₂ for every p ∈ P(Ω) (since p(T, T∗) is normal), which says that U is an isometry, thus injective, and so it has an inverse on its range

R(U) = {p(T, T∗)x : p ∈ P(Ω)} ⊆ H,

say U⁻¹: R(U) → P(Ω). For each polynomial p in P(Ω) let q be the polynomial in P(Ω) defined by q(λ, λ̄) = λ p(λ, λ̄) for every λ ∈ Ω. Let ϕ: Ω → Ω ⊂ C be the identity function, ϕ(λ) = λ, so that T U p = U q = U Mϕ p for every p ∈ P(Ω). Since P(Ω) is dense in the Banach space (C(Ω), ‖·‖∞), and since Ω is a compact set in C and μ is a finite positive measure on AΩ, it follows that P(Ω) is also dense in the normed space (C(Ω), ‖·‖₂), which in turn is a dense linear manifold of the Banach space (L²(Ω, μ), ‖·‖₂), and hence P(Ω) is dense in (L²(Ω, μ), ‖·‖₂). That is, P(Ω)⁻ = L²(Ω, μ).
Moreover, since U is a linear isometry of P(Ω) onto R(U), which are linear manifolds of the Hilbert spaces L²(Ω, μ) and H, it follows that U extends by continuity to a unique unitary transformation, also denoted by U, of the Hilbert space P(Ω)⁻ = L²(Ω, μ) onto the range of U, R(U) ⊆ H. (In fact, it extends onto the closure of R(U), but R(U) is closed in H because every isometry of a Banach space into a normed space has a closed range.) Thus if x is a star-cyclic vector for T; that is, if ⋁_{m,n}{Tⁿ T*ᵐ x} = H, then

R(U) = {p(T, T∗)x : p ∈ P(Ω)}⁻ = ⋁_{m,n}{Tⁿ T*ᵐ x} = H,

so that U is unitary from L²(Ω, μ) onto H, and

U⁻¹ T U = Mϕ.
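A finite-dimensional sketch of star-cyclicity (a hypothetical two-point example, not the book's): for T = diag(1, 2), the vector x = (1, 1) is star-cyclic, since x and Tx already span C²; under the corresponding U, the operator T acts as multiplication by λ on the two-point set {1, 2}.

```python
# T = diag(1, 2) on C^2 with the candidate star-cyclic vector x = (1, 1):
# {x, Tx} is linearly independent (a 2x2 Vandermonde in the distinct
# eigenvalues), so span{T^n T*^m x} = C^2 and x is star-cyclic.
x = (1.0, 1.0)
Tx = (1.0 * x[0], 2.0 * x[1])
det = x[0] * Tx[1] - x[1] * Tx[0]   # Vandermonde determinant = 2 - 1
# det != 0 confirms the span is all of C^2
```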
(b) On the other hand, suppose there is no star-cyclic vector for T. Assume that T and H are nonzero to avoid trivialities. Since T is normal, the nontrivial subspace Mx = ⋁_{m,n}{Tⁿ T*ᵐ x} of H reduces T for each unit vector x ∈ H. Consider the set of all orthogonal direct sums ⊕ Mx of these subspaces, which is partially ordered (in the inclusion ordering). This set is not empty (if a unit y ∈ Mx⊥, then My = ⋁_{m,n}{Tⁿ T*ᵐ y} ⊆ Mx⊥ because Mx⊥ reduces T, and so Mx ⊥ My). Moreover, every chain in it has an upper bound
T = ⊕_{γ∈Γ} T|Mγ in B[H] = B[⊕_{γ∈Γ} Mγ] is unitarily equivalent to ⊕_{γ∈Γ} Mϕγ in B[⊕_{γ∈Γ} L²(Ωγ, μγ)].
Consider the disjoint union of {Ωγ}, which is obtained by reindexing the elements of each Ωγ as follows. For each γ ∈ Γ write Ωγ = {λδ(γ)}δ∈Δγ so that, if there is a complex number λ in Ωα ∩ Ωβ for α ≠ β in Γ, then the same complex number λ is represented as λδ(α) ∈ Ωα for some δ ∈ Δα and λδ(β) ∈ Ωβ for some δ ∈ Δβ, and these representations are considered as distinct elements. Therefore the sets Ωα and Ωβ become disjoint. (Regard the sets in {Ωγ} as subsets of distinct copies of C so that, for distinct indices α, β ∈ Γ, the sets Ωα and Ωβ are disjoint.) Set Ω = ⊔_{γ∈Γ} Ωγ. The disjoint union Ω — unlike the ordinary union — can be viewed as the union of all σ(T|Mγ), considering the multiplicity of each point in ⋃_{γ∈Γ} σ(T|Mγ) and disregarding possible overlappings. Note that σ(T|Mγ) ⊆ σ(T) since each Mγ reduces T (cf. Proposition 2.F(b)), and so Ω ⊆ ⊔_{γ∈Γ} σ(T) is bounded.
Let AΩ be the σ-algebra consisting of all subsets of Ω of the form Λ = ⊔_{γ∈Γ} Λγ, where each Λγ is a Borel subset of Ωγ (i.e., Λγ ∈ AΩγ). These are referred to as the Borel subsets of Ω. Let μγ: AΩγ → R be the finite measure on each AΩγ whose existence was ensured in part (a). Consider the Borel subsets Λ = ⊔_{j∈J} Λj of Ω made up of disjoint countable unions of Borel subsets of Ωj (i.e., Λj ∈ AΩj), and let μ be the measure on AΩ such that μ(Λ) = ∑_{j∈J} μj(Λj) for every Λ = ⊔_{j∈J} Λj in AΩ with {Λj} being any disjoint collection of sets with each Λj ∈ AΩj, for any countable subset J of Γ. (For a proper subset J of Γ we identify ⊔_{j∈J} Λj with ⊔_{γ∈Γ} Λγ when Λγ = ∅ for every γ ∉ J.) Note that μ is positive since each μj is. (Indeed, if Λ = Ω, then Λj = Ωj, so that μj(Λj) > 0 for at least one j in one J.)
Let ϕ: Ω → C be the function obtained from the functions ϕγ: Ωγ → Ωγ ⊂ C as follows. If λ ∈ ⊔_{γ∈Γ} Ωγ, then λ ∈ Ωβ for some β ∈ Γ, and so set ϕ(λ) = ϕβ(λ) = λ (each ϕγ is the identity function on Ωγ). Since Ω = ⊔_{γ∈Γ} Ωγ, where Ωγ = σ(T|Mγ) ⊆ σ(T), it follows that ϕ ∈ L∞(Ω, μ) with

W V (⊕_{γ∈Γ} Mϕγ) = Mϕ W V,

and so ⊕_{γ∈Γ} Mϕγ is unitarily equivalent to Mϕ (i.e., ⊕_γ Mϕγ ≅ Mϕ) through the unitary W V. Then T ≅ Mϕ by transitivity, since T ≅ ⊕_{γ∈Γ} Mϕγ.
It can also be shown along this line that x is a cyclic vector for a normal
operator T if and only if it is a cyclic vector for T ∗ [50, Problem 164].
Consider the proof of Theorem 3.11. Part (a) deals with the case where the
normal operator T is star-cyclic, while part (b) deals with the case where T is
not star-cyclic. In the former case, H must be separable by the above Remark.
We show next that if H is separable, then the measure in Theorem 3.11 can
be taken to be finite. Indeed, the positive measure μ: AΩ → R constructed in
part (a) is finite. However, the positive measure μ: AΩ → R constructed in
part (b) may fail to be even σ-finite (see, e.g., [50, p. 66]). But finiteness can
be restored by separability.
Remark. Corollary 3.13 forces the Hilbert space L2 (Ω, μ), with that particular
finite measure μ, to be separable (since it is unitarily equivalent to a separable
Hilbert space H). Recall that there exist nonseparable L2 spaces with finite
measures, and also non-σ-finite measures for which L2 is separable (see, e.g.,
[21, pp. 174, 192, 376] and [51, pp. 2, 3]).
If μ is finite, then the constant function 1 (i.e., 1(λ) = 1 for all λ ∈ Ω) lies in L²(Ω, μ). Set ψ = 1 in L²(Ω, μ), and let P(Ω) denote the set of all polynomials p(·, ·): Ω × Ω → C in λ and λ̄ (equivalently, p(·, ·): Ω → C such that λ ↦ p(λ, λ̄)). Thus (cf. proof of Theorem 3.11),

L²(Ω, μ) = P(Ω)⁻ = ⋁_{m,n}{λⁿ λ̄ᵐ} = ⋁_{m,n}{Mϕⁿ Mϕ*ᵐ ψ},

so that ψ = 1 is a star-cyclic vector for Mϕ. Thus (b) implies (c). The Remark preceding Corollary 3.13 ensures that (c) ⇒ (d) ⇒ (a).
for every ψ ∈ Ψ (by the previously displayed identity), which also holds for the function ϕ: Ω → C such that ϕ(λ) = λ. (Reason: inf_{ψ∈Ψ} ‖ϕ − ψ‖∞ = 0, so that ϕ ∈ Ψ⁻ ⊂ L∞(Ω, μ); thus apply extension by continuity.) Therefore,

⟨∫ ϕ dEλ f ; f⟩ = ∫ ϕ |f|² dμ = ∫ ϕ f f̄ dμ = ⟨Mϕ f ; f⟩

for every f ∈ L²(Ω, μ). By the polarization identity (Proposition 1.A) we get ⟨Sf ; g⟩ = ¼[⟨S(f + g) ; (f + g)⟩ − ⟨S(f − g) ; (f − g)⟩ + i⟨S(f + ig) ; (f + ig)⟩ − i⟨S(f − ig) ; (f − ig)⟩] for every f, g ∈ L²(Ω, μ) and S ∈ B[L²(Ω, μ)]. Thus, replacing S with ∫ ϕ dEλ on the one hand, and with Mϕ on the other hand,

⟨∫ ϕ dEλ f ; g⟩ = ⟨Mϕ f ; g⟩
for every p ∈ P(Ω). Let K be the closure of the union of the supports of the measures E and E′, which is compact since Ω is bounded, and let C(K) be the set of all complex-valued continuous functions on K. The Stone–Weierstrass Theorem for complex functions says that P(K) is dense in (C(K), ‖·‖∞), and so P(K) is dense in (L²(K, ν), ‖·‖₂) for every finite positive measure ν on AΩ with support in K (see proof of Theorem 3.11). Since the positive measures ⟨E′(·)x ; x⟩ = ‖E′(·)x‖² and ⟨E(·)x ; x⟩ = ‖E(·)x‖² on AΩ are finite, we get

⟨∫ χΛ dE′λ x ; x⟩ = ⟨∫ χΛ dEλ x ; x⟩

of μ in the proof of Theorem 3.11). However, since χ∪Λ ∈ L²(Ω, μ), and since ⟨E′(Λ′)f ; f⟩ = ∫ χΛ′ |f|² dμ for each Λ′ ∈ AΩ and every f ∈ L²(Ω, μ),

‖E′(∪Λ)χ∪Λ‖² = ⟨E′(∪Λ)χ∪Λ ; χ∪Λ⟩ = ∫ χ∪Λ |χ∪Λ|² dμ = ∫_{∪Λ} dμ = μ(∪Λ).
Mϕ|R(MχΛ) ≅ MϕχΛ.

Hence T|R(E(Λ)) ≅ MϕχΛ by transitivity, and therefore (cf. Proposition 2.B),

σ(T|R(E(Λ))) = σ(MϕχΛ).

Since Λ ≠ ∅, Proposition 3.B ensures that, for ϕ: Ω → C such that ϕ(λ) = λ,

σ(MϕχΛ) ⊆ ⋂{ϕ(Δ)⁻ : Δ ∈ AΛ and μ|Λ(Λ\Δ) = 0} = ⋂{ϕ(Δ ∩ Λ)⁻ : Δ ∈ AΩ and μ(Λ\Δ) = 0} ⊆ ϕ(Λ)⁻.
Finally, take an arbitrary Λ̃ ∈ Aσ(T) and let Ẽ: Aσ(T) → B[H] be the spectral measure defined by Ẽ(Λ̃) = E(∪Λ̃) (cf. part (c) in the proof of Theorem 3.15). Since Λ̃⁻ = ϕ(Λ)⁻ for some Λ ∈ AΩ, and since ∪(ϕ(Λ)⁻) ⊆ Λ⁻, we get

σ(T|R(Ẽ(Λ̃))) ⊆ σ(T|R(Ẽ(ϕ(Λ)⁻))) ⊆ σ(T|R(E(Λ⁻))) ⊆ ϕ(Λ⁻)⁻ = Λ̃⁻.

Again, use the same notation for E and Ẽ, and for Λ and Λ̃.
The proof of the next theorem follows the same argument used in the proof
of Corollary 3.5, now extended to the general case (see [76, Theorem 1.12]).
Theorem 3.17. (Fuglede Theorem). Let T = ∫ λ dEλ be the spectral decomposition of a normal operator in B[H]. If S ∈ B[H] commutes with T, then
S commutes with E(Λ) for every Λ ∈ Aσ(T ) .
Proof. Take any Λ in Aσ(T). Suppose ∅ ≠ Λ ≠ σ(T); otherwise the result is trivially verified with E(Λ) = O or E(Λ) = I. Set Λ′ = σ(T)\Λ in Aσ(T). Let Δ and Δ′ be arbitrary nonempty measurable closed subsets of Λ and Λ′. That is, ∅ ≠ Δ ⊆ Λ and ∅ ≠ Δ′ ⊆ Λ′ lie in Aσ(T) and are closed in C; and so they are closed relative to the compact set σ(T). Lemma 3.16(a) ensures that
E(Λ)T|R(E(Λ)) = T E(Λ)|R(E(Λ)) = T|R(E(Λ)): R(E(Λ)) → R(E(Λ))

for every Λ ∈ Aσ(T). In particular, this holds for Δ and Δ′. Take any operator S on H such that ST = TS. Thus, on R(E(Δ)),

(∗) (E(Δ′)SE(Δ))T|R(E(Δ)) = (E(Δ′)SE(Δ))(T E(Δ)) = E(Δ′)S T E(Δ)
= E(Δ′)T S E(Δ) = (T E(Δ′))(E(Δ′)SE(Δ))
= T|R(E(Δ′))(E(Δ′)SE(Δ)).

Since the sets Δ and Δ′ are closed and nonempty, Lemma 3.16(b) says that

σ(T|R(E(Δ))) ⊆ Δ and σ(T|R(E(Δ′))) ⊆ Δ′,

and hence, as Δ ⊆ Λ and Δ′ ⊆ Λ′ = σ(T)\Λ,

(∗∗) σ(T|R(E(Δ))) ∩ σ(T|R(E(Δ′))) = ∅.
Then Proposition 2.T, via (∗) and (∗∗), ensures that E(Δ′)SE(Δ) = O for all such Δ and Δ′, and so, in the extension ordering (see, e.g., [66, Problem 1.17]),

E(Λ) = sup_{Δ∈CΛ} E(Δ) and E(Λ′) = sup_{Δ′∈CΛ′} E(Δ′),

where CΛ and CΛ′ denote the collections of all closed nonempty subsets of Λ and Λ′, respectively.
Then

E(Λ′)SE(Λ) = O.

Since E(Λ) + E(Λ′) = E(Λ ∪ Λ′) = I, this leads to

SE(Λ) = E(Λ ∪ Λ′)SE(Λ) = (E(Λ) + E(Λ′))SE(Λ) = E(Λ)SE(Λ).

Thus R(E(Λ)) is S-invariant (cf. Proposition 1.I). Since this holds for every Λ in Aσ(T), it holds for Λ′ = σ(T)\Λ, so that R(E(Λ′)) is S-invariant. But R(E(Λ)) ⊥ R(E(Λ′)) (i.e., E(Λ)E(Λ′) = O) and H = R(E(Λ)) + R(E(Λ′)) (since H = R(E(Λ ∪ Λ′)) = R(E(Λ) + E(Λ′)) ⊆ R(E(Λ)) + R(E(Λ′)) ⊆ H). Hence R(E(Λ′)) = R(E(Λ))⊥, and so R(E(Λ))⊥ also is S-invariant. Then

R(E(Λ)) reduces S,

which, by Proposition 1.I, is equivalent to

SE(Λ) = E(Λ)S.
so that σ(T) is the disjoint union of Λλ and Λ′λ. Observe that both Λλ and Λ′λ lie in Aσ(T). Since Λλ and σ(T)\Dλ⁻ are nonempty relatively open subsets of σ(T) and σ(T)\Dλ⁻ ⊆ Λ′λ, it follows by Theorem 3.15 that E(Λλ) ≠ O and E(Λ′λ) ≠ O. Then I = E(σ(T)) = E(Λλ ∪ Λ′λ) = E(Λλ) + E(Λ′λ), and so E(Λλ) = I − E(Λ′λ) ≠ I. Therefore,

O ≠ E(Λλ) ≠ I,

{0} ≠ R(E(Λλ)) ≠ H.
This proves (a) (for the particular case of S = T), and also proves (b) — the nontrivial R(E(Λλ)) reduces every S that commutes with T. Thus (b) ensures that if T commutes with a nonscalar normal operator, then it is reducible. The converse follows by Proposition 1.I, since every nontrivial orthogonal projection is a nonscalar normal operator. This proves (c). Since a projection E is nontrivial if and only if {0} ≠ R(E) ≠ H, Proposition 1.I ensures that T is reducible if and only if it commutes with a nontrivial orthogonal projection, which proves
All the equivalent assertions in Corollary 3.5, which hold for compact
normal operators, remain equivalent in the general case (for plain normal, not
necessarily compact operators). Again, the implication (a) ⇒ (c) in Corollary
3.19 below is the Fuglede Theorem (Theorem 3.17).
Corollary 3.19. Let T = ∫ λ dEλ be a normal operator in B[H] and take any
operator S in B[H]. The following assertions are pairwise equivalent.
(a) S commutes with T .
(b) S commutes with T ∗ .
(c) S commutes with every E(Λ).
(d) R(E(Λ)) reduces S for every Λ.
Proof. Let T = ∫ λ dEλ be the spectral decomposition of a normal operator
in B[H]. Take any Λ ∈ Aσ(T ) and an arbitrary S ∈ B[H]. First we show that
ST = TS =⇒ SE(Λ) = E(Λ)S =⇒ S T ∗ = T ∗S ⇐⇒ S ∗ T = T S ∗.
Since these hold for every S ∈ B[H], and since S ∗∗ = S, it follows that
S∗T = T S∗ =⇒ S T = T S.
This shows that (a), (b), and (c) are equivalent, and (d) is equivalent to (c)
by Proposition 1.I (as in the final part of the proof of Theorem 3.17).
consider the Banach space ℓ∞(Γ) made up of all bounded families a = {αγ}γ∈Γ of complex numbers, and also the Hilbert space ℓ²(Γ) consisting of all square-summable families z = {ζγ}γ∈Γ of complex numbers indexed by Γ. The definition of a diagonal operator in B[ℓ²(Γ)] goes as follows.

(3) A diagonal operator in B[ℓ²(Γ)] is a mapping D: ℓ²(Γ) → ℓ²(Γ) such that Dz = {αγ ζγ}γ∈Γ for every z = {ζγ}γ∈Γ ∈ ℓ²(Γ), where a = {αγ}γ∈Γ ∈ ℓ∞(Γ) is a (bounded) family of scalars. (Notation: D = diag{αγ}γ∈Γ.)

A diagonal operator D is clearly diagonalizable by the resolution of the identity Eγ z = ⟨z ; fγ⟩fγ = ζγ fγ given by the canonical orthonormal basis {fγ}γ∈Γ for ℓ²(Γ), viz., fγ = {δγ,β}β∈Γ with δγ,β = 1 if β = γ and δγ,β = 0 if β ≠ γ. So
D is normal with r(D) = ‖D‖ = ‖a‖∞ = sup_{γ∈Γ}|αγ|. Consider the Fourier series expansion of an arbitrary x in H in terms of any orthonormal basis {eγ}γ∈Γ for H, namely, x = ∑_{γ∈Γ}⟨x ; eγ⟩eγ, so that we can identify x in H with the square-summable family {⟨x ; eγ⟩}γ∈Γ in ℓ²(Γ) (i.e., x is the image of {⟨x ; eγ⟩}γ∈Γ under a unitary transformation between the unitarily equivalent spaces H and ℓ²(Γ)). Thus, if Tx = ∑_{γ∈Γ} λγ⟨x ; eγ⟩eγ, then we can identify Tx with the square-summable family {λγ⟨x ; eγ⟩}γ∈Γ in ℓ²(Γ), and so we can identify T with a diagonal operator D that takes z = {⟨x ; eγ⟩}γ∈Γ into Dz = {λγ⟨x ; eγ⟩}γ∈Γ. This means that T and D are unitarily equivalent: there is a unitary transformation U ∈ B[H, ℓ²(Γ)] such that UT = DU. It is in this sense that T is said to be a diagonal operator with respect to the orthonormal basis {eγ}γ∈Γ. Thus definition (2) can be rephrased as follows.
(2 ) An operator T on a Hilbert space H is diagonalizable if it is a diagonal
operator with respect to some orthonormal basis for H.
[50, Chapter 7]). The first version of the Spectral Theorem (Theorem 3.11)
says that multiplication operators are prototypes of normal operators.
Mφ f = φf for every f ∈ L2 .
That is, (Mφ f )(λ) = φ(λ)f (λ) for λ ∈ Ω. This is called the multiplication
operator on L2, which is linear and bounded (i.e., Mφ ∈ B[L2 ]). Moreover :
(a) Mφ∗ = Mφ̄ (i.e., (Mφ∗ f)(λ) = φ̄(λ)f(λ) for λ ∈ Ω).
(b) Mφ is a normal operator.
(c) ‖Mφ‖ ≤ ‖φ‖∞ = ess sup|φ| = inf_{Λ∈NΩ} sup_{λ∈Ω\Λ}|φ(λ)|,
where NΩ denotes the collection of all Λ ∈ AΩ such that μ(Λ) = 0.
(d) σ(Mφ) ⊆ ess R(φ) = essential range of φ
= {α ∈ C : μ({λ ∈ Ω: |φ(λ) − α| < ε}) > 0 for every ε > 0}
= {α ∈ C : μ(φ⁻¹(Vα)) > 0 for every neighborhood Vα of α}
= ⋂{φ(Λ)⁻ : Λ ∈ AΩ and μ(Ω\Λ) = 0} ⊆ φ(Ω)⁻ = R(φ)⁻.
If, in addition, μ is σ-finite, then
(e) ‖Mφ‖ = ‖φ‖∞,
(f) σ(Mφ) = ess R(φ).
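Statements (c)–(f) are easy to check for a purely atomic measure. The sketch below (a hypothetical three-atom example, with one atom of measure zero) shows that values taken only on μ-null sets drop out of the essential range and of ‖Mφ‖:

```python
# Counting-type measure on Omega = {0, 1, 2}, with the point 2 a mu-null set.
phi = {0: 2.0, 1: -1.0, 2: 5.0}
mu = {0: 1.0, 1: 1.0, 2: 0.0}

ess_range = {phi[w] for w in phi if mu[w] > 0}        # {2.0, -1.0}: the value 5.0 is invisible
ess_sup = max(abs(phi[w]) for w in phi if mu[w] > 0)  # ||M_phi|| = 2.0, not sup|phi| = 5.0
```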
Proposition 3.D. Let H be a complex Hilbert space and let T be the unit
circle about the origin of the complex plane. If T ∈ B[H] is normal, then
(a) T is unitary if and only if σ(T ) ⊆ T ,
(b) T is self-adjoint if and only if σ(T ) ⊆ R ,
(c) T is nonnegative if and only if σ(T ) ⊆ [0, ∞),
(d) T is strictly positive if and only if σ(T ) ⊆ [α, ∞) for some α > 0,
(e) T is an orthogonal projection if and only if σ(T ) ⊆ {0, 1}.
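Assertion (a) can be sanity-checked numerically (an illustrative rotation matrix, not from the text): the eigenvalues of a unitary 2×2 matrix, which exhaust its spectrum in finite dimensions, all have modulus 1.

```python
import math, cmath

theta = 0.73
c, s = math.cos(theta), math.sin(theta)
U = [[c, -s], [s, c]]                       # a rotation: unitary on C^2

# eigenvalues of a 2x2 matrix via the characteristic polynomial
tr = U[0][0] + U[1][1]
det = U[0][0] * U[1][1] - U[0][1] * U[1][0]
disc = cmath.sqrt(tr * tr - 4 * det)
eigs = [(tr + disc) / 2, (tr - disc) / 2]   # cos(theta) +/- i sin(theta)
# every eigenvalue lies on the unit circle T
```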
Observe that, if T = [ 1 0 ; 0 0 ] and S = [ 2 1 ; 1 1 ] in B[C²], then S² − T² = [ 4 3 ; 3 2 ], which is not positive (its determinant is −1). Thus O ≤ T ≤ S does not imply T² ≤ S². However, the converse holds.
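The counterexample is easy to verify by hand or numerically; the sketch below uses the 2×2 criterion that a symmetric matrix is positive (semidefinite) if and only if its trace and determinant are nonnegative.

```python
def mul(A, B):
    """Product of 2x2 matrices."""
    return [[sum(A[i][k] * B[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

def sub(A, B):
    return [[A[i][j] - B[i][j] for j in range(2)] for i in range(2)]

def is_positive(M):
    # symmetric 2x2: positive semidefinite iff trace >= 0 and det >= 0
    return M[0][0] + M[1][1] >= 0 and M[0][0] * M[1][1] - M[0][1] * M[1][0] >= 0

T = [[1, 0], [0, 0]]
S = [[2, 1], [1, 1]]
ok_order = is_positive(T) and is_positive(sub(S, T))   # O <= T <= S holds
D = sub(mul(S, S), mul(T, T))                          # S^2 - T^2 = [[4, 3], [3, 2]]
square_order = is_positive(D)                          # False: det = 8 - 9 < 0
```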
There are diagonalizable operators that are not reductive. In fact, since C is a separable metric space, every nonempty compact subset Ω of C includes a countable dense subset Λ = {λk}, and the diagonal operator D = diag{λk} on ℓ²₊ is such that

σ(D) = Λ⁻ = Ω.

Therefore, every closed and bounded subset of the complex plane is the spectrum of some diagonal operator on ℓ²₊. Hence, if Ω° ≠ ∅, then Proposition 3.J ensures that the diagonal D is not reductive.
Suggested Reading
Lemma 4.1. Let A and B be algebras over the same scalar field and let
Φ: A → B be a homomorphism. Take an arbitrary x ∈ A.
(a) If A is a ∗-algebra, then x = a + i b where a∗ = a and b∗ = b in A.
(b) If A, B, and Φ are unital and x is invertible, then Φ(x)−1 = Φ(x−1 ).
(c) If A is unital and normed, and x is invertible, then ‖x‖⁻¹ ≤ ‖x⁻¹‖.
(d) If A is a unital ∗-algebra and x is invertible, then (x−1 )∗ = (x∗ )−1 .
(e) If A is a C*-algebra and x∗x = 1, then ‖x‖ = 1.
(f) If A is a C*-algebra, then ‖x∗‖ = ‖x‖.
Suppose A and B are unital complex Banach algebras.
(g) If Φ is a unital homomorphism, then σ(Φ(x)) ⊆ σ(x).
(h) If Φ is an injective unital homomorphism, then σ(Φ(x)) = σ(x).
(i) r(x) = limₙ ‖xⁿ‖^{1/n}.
(j) If A is a unital complex C*-algebra and x∗ = x, then r(x) = ‖x‖.
From now on assume that A and B are unital complex C*-algebras.
(k) If Φ is a unital ∗-homomorphism, then ‖Φ(x)‖ ≤ ‖x‖.
(ℓ) If Φ is an injective unital ∗-homomorphism, then ‖Φ(x)‖ = ‖x‖.
(where the series converges uniformly in B[H]). However, unlike in the preceding paragraph, complex conjugation does not define an involution on P(σ(T)) (and so P(σ(T)) is not a ∗-algebra under it). Indeed, p∗(λ) = p̄(λ) is not a polynomial in λ — the continuous function p̄(·): σ(T) → C given by p̄(λ) = ∑_{i=0}^n ᾱᵢ λ̄ⁱ is not a polynomial in λ (for σ(T) ⊄ R, even if σ(T)∗ = σ(T)).
Theorem 4.2. Let T ∈ B[H] be a normal operator and consider its spectral decomposition T = ∫ λ dEλ. For every function ψ ∈ B(σ(T)) there is a unique operator ψ(T) ∈ B[H] given by

ψ(T) = ∫ ψ(λ) dEλ,
Let P(σ(T)) be the set of all polynomials p(·, ·): σ(T) × σ(T) → C in λ and λ̄ (or, p(·, ·): σ(T) → C such that λ ↦ p(λ, λ̄)), and let C(σ(T)) be the set of all complex-valued continuous (thus Aσ(T)-measurable) functions on σ(T). So, writing P₀(σ(T)) for the polynomials in λ alone,

P₀(σ(T)) ⊂ P(σ(T)) ⊂ C(σ(T)) ⊂ B(σ(T)),

where the last inclusion follows from the Weierstrass Theorem since σ(T) is compact — cf. proof of Theorem 2.2. Except for P₀(σ(T)), these are all unital ∗-subalgebras of B(σ(T)), and the restriction of the mapping ΦT to any of them remains a unital ∗-homomorphism into B[H]. Thus Theorem 4.2 holds for all those unital ∗-subalgebras of the unital C*-algebra B(σ(T)). Note that P₀(σ(T)) is a unital subalgebra of P(σ(T)), and the restriction of ΦT to it is a unital homomorphism into B[H], and so Theorem 4.2 also holds for P₀(σ(T)) by replacing ∗-homomorphism with plain homomorphism. In particular, for p in P₀(σ(T)), say p(λ) = ∑_{i=0}^n αᵢλⁱ, we get, if T is normal,

p(T) = ∫ p(λ) dEλ = ∑_{i=0}^n αᵢ ∫ λⁱ dEλ = ∑_{i=0}^n αᵢ Tⁱ

and, for p in P(σ(T)), say p(λ, λ̄) = ∑_{i,j=0}^{n,m} αᵢ,ⱼ λⁱλ̄ʲ (cf. Lemma 3.10), we get

p(T, T∗) = ∫ p(λ, λ̄) dEλ = ∑_{i,j=0}^{n,m} αᵢ,ⱼ ∫ λⁱλ̄ʲ dEλ = ∑_{i,j=0}^{n,m} αᵢ,ⱼ Tⁱ T∗ʲ.
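For a diagonal normal operator these formulas reduce to pointwise evaluation on the spectrum: p(T, T∗) = diag{p(λk, λ̄k)}. A sketch with hypothetical eigenvalues and coefficients (not from the text):

```python
# p(T, T*) = sum a_ij T^i T*^j acts on T = diag{lam_k} as diag{p(lam_k, conj(lam_k))}.
lams = [1 + 2j, -0.5j, 3.0 + 0j]
a = {(0, 0): 1.0, (1, 0): 2.0, (1, 1): -1.0, (0, 2): 0.5}   # coefficients a_ij

def p_of(lam):
    return sum(c * lam ** i * lam.conjugate() ** j for (i, j), c in a.items())

diag_pT = [p_of(l) for l in lams]   # the diagonal of p(T, T*)
# e.g. for lam = 3: 1 + 2*3 - 3*3 + 0.5*9 = 2.5
```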
Remark. Let E : Aσ(T ) → B[H] be the spectral measure of the spectral decom-
position of a normal operator T, and consider the statements of Lemmas 3.7,
3.8, and 3.9 for Ω = σ(T ). Lemma 3.7 remains true if B(σ(T )) is replaced by
for every φ ∈ L∞(σ(T), μ), where Nσ(T) denotes the collection of all Λ ∈ Aσ(T) such that μ(Λ) = 0 or, equivalently, such that E(Λ) = O. Thus the sup-norm ‖φ‖∞ of φ: σ(T) → C with respect to μ dominates the sup-norm with respect to πx,x for every x ∈ H. Then the proof of Lemma 3.7 remains unchanged if B(σ(T)) is replaced by L∞(σ(T), μ) for any positive measure μ equivalent to E, since |f(x, x)| ≤ ‖φ‖∞ ∫ dπx,x = ‖φ‖∞‖x‖² for every x ∈ H still holds if φ ∈ L∞(σ(T), μ). Hence, for every φ ∈ L∞(σ(T), μ), there is a unique operator F ∈ B[H] such that F = ∫ φ(λ) dEλ as in Lemma 3.7. This ensures that Lemmas 3.8 and 3.9 also hold if B(σ(T)) is replaced by L∞(σ(T), μ).
Theorem 4.3. Let T ∈ B[H] be a normal operator and consider its spectral decomposition T = ∫ λ dEλ. If μ is a positive measure on Aσ(T) equivalent to the spectral measure E: Aσ(T) → B[H], then for every ψ ∈ L∞(σ(T), μ) there is a unique operator ψ(T) ∈ B[H] given by

ψ(T) = ∫ ψ(λ) dEλ,
Definition 4.5. A separating vector for a collection {Cγ}γ∈Γ ⊆ B[H] of operators is a vector e ∈ H such that e ∉ N(C) for every O ≠ C ∈ {Cγ}γ∈Γ.
Take the collection {E(Λ)}Λ∈AΩ of all images of the spectral measure E associated with a normal operator T on H, where Ω is either the disjoint union of {Ωγ} or σ(T), as in the proof of Theorem 3.15. For each x, y ∈ H, let πx,y: AΩ → C be the scalar-valued (complex) measure of Section 3.3 related to E, namely, πx,y(Λ) = ⟨E(Λ)x ; y⟩. Similarly, let E′ be the spectral measure of the normal operator Mϕ ∈ B[L²(Ω, μ)]. Hence, for each f ∈ L²(Ω, μ),

πf,f(Λ) = ⟨E′(Λ)f ; f⟩ = ⟨MχΛ f ; f⟩ = ∫ χΛ f f̄ dμ = ∫_Λ |f|² dμ

for every Λ ∈ AΩ. In particular, set f = 1 = χΩ (the constant function f(λ) = 1 for all λ ∈ Ω, i.e., the characteristic function χΩ of Ω), which lies in L²(Ω, μ) because μ is finite. Note that f = 1 = χΩ is a separating vector for {E′(Λ)}Λ∈AΩ since E′(Λ)1 = MχΛ 1 = χΛ ≠ 0 if Λ ≠ ∅ in AΩ. Thus π1,1 is a scalar spectral measure for Mϕ by Lemma 4.6. Now, if T ∈ B[H] is normal, then T ≅ Mϕ by Theorem 3.11. Let E: AΩ → B[H] be the spectral measure in H for T, given by E(Λ) = U∗E′(Λ)U for each Λ ∈ AΩ, where U is a unitary in B[H, L²(Ω, μ)]
Remark. The preceding proof shows that the scalar spectral measure π1,1 for Mϕ is precisely the measure μ of Theorem 3.11. Indeed, π1,1(Λ) = ‖E′(Λ)1‖² = ‖MχΛ 1‖² = ‖χΛ‖² = ∫_Λ dμ = μ(Λ) for every Λ ∈ AΩ. And so is the scalar spectral measure πe,e for T. That is, μ = π1,1 = πe,e. In fact, for every Λ ∈ AΩ,

μ(Λ) = π1,1(Λ) = ⟨E′(Λ)1 ; 1⟩ = ⟨U E(Λ)U∗ U e ; U e⟩ = ⟨E(Λ)e ; e⟩ = πe,e(Λ).
Suppose ΦT(ψ) = O. Then ∫ |ψ(λ)|² dπx,x = 0 for every x ∈ H. Lemma 4.7 ensures the existence of a separating vector e ∈ H for {E(Λ)}Λ∈Aσ(T). Thus consider the scalar spectral measure πe,e, so that ∫ |ψ(λ)|² dπe,e = 0, and hence ψ = 0 in L∞(σ(T), πe,e), which means that ψ = 0 in L∞(σ(T), μ) for every measure μ equivalent to πe,e; in particular, for every scalar spectral measure μ for T. So N(ΦT) = {0} in L∞(σ(T), μ), and hence the linear transformation ΦT: L∞(σ(T), μ) → B[H] is injective; thus an isometry by Lemma 4.1(ℓ).
The mapping ΦT of Theorems 4.2, 4.3, 4.8, and 4.9 such that ΦT(ψ) = ψ(T) is referred to as the functional calculus for T, and the theorems themselves are referred to as the Functional Calculus for Normal Operators. Since σ(T) is compact it follows that, for any positive measure μ on Aσ(T), C(σ(T)) ⊆ L∞(σ(T), μ).
for every x, y ∈ H (see Lemma 3.7), which means that S ψ(T ) = ψ(T )S.
Proof. If the theorem holds for a scalar spectral measure, then it holds for any scalar spectral measure. Suppose T is a normal operator on a separable Hilbert space. Let μ be the positive measure of Theorem 3.11, which is a scalar spectral measure for T (cf. Remark that follows the proof of Lemma 4.7). Now set μ̃ = π̃e,e as in the proof of Lemma 4.7, which is again a scalar spectral measure for T (and, as before, use the same notation for μ and μ̃). Take any ψ in L∞(σ(T), μ). By Theorem 4.8, ΦT: L∞(σ(T), μ) → B[H] is an isometric (thus injective) unital ∗-isomorphism between C*-algebras and ΦT(ψ) = ψ(T). So

σ(ψ) = σ(ΦT(ψ)) = σ(ψ(T))

by Lemma 4.1(h). Consider the multiplication operator Mϕ on the Hilbert space L²(Ω, μ) of Theorem 3.11. Since T ≅ Mϕ, L²(Ω, μ) is separable. Since ϕ(Ω)⁻ = σ(T) (with ϕ(λ) = λ), take ψ∘ϕ ∈ L∞(Ω, μ) (also denoted by ψ) so that the spectrum σ(ψ) coincides in both algebras. Since μ is a scalar spectral measure for Mϕ (cf. Remark that follows the proof of Lemma 4.7 again), we may apply Theorem 4.8 to Mϕ. Therefore, using the same argument,

σ(ψ) = σ(ψ(Mϕ)).
Claim. ψ(Mϕ ) = Mψ .
Proof. Let E′ be the spectral measure for Mϕ, where E′(Λ) = MχΛ with χΛ being the characteristic function of Λ in AΩ, as in the proof of Lemma 4.7. Take the scalar-valued measure πf,g given by πf,g(Λ) = ⟨E′(Λ)f ; g⟩ for every Λ in AΩ and each f, g ∈ L²(Ω, μ). Recall from the Remark that follows the proof of Lemma 4.7 that μ = π1,1. Observe that, for every Λ in AΩ,

∫_Λ dπf,g = πf,g(Λ) = ⟨E′(Λ)f ; g⟩ = ⟨χΛ f ; g⟩ = ∫_Λ f ḡ dμ = ∫_Λ f ḡ dπ1,1.

So dπf,g = f ḡ dπ1,1 (f ḡ is the Radon–Nikodým derivative of πf,g with respect to π1,1; see proof of Lemma 3.8). Since ψ(Mϕ) = ∫ ψ(λ) dE′λ by Theorem 4.8,

⟨ψ(Mϕ)f ; g⟩ = ⟨∫ ψ dE′λ f ; g⟩ = ∫ ψ dπf,g = ∫ ψ f ḡ dπ1,1 = ∫ ψ f ḡ dμ = ⟨Mψ f ; g⟩

for every f, g ∈ L²(Ω, μ), and so ψ(Mϕ) = Mψ, proving the claimed identity.
Hence, since μ is finite (thus σ-finite), it follows by Proposition 3.B that

σ(ψ(T)) = σ(ψ(Mϕ)) = σ(Mψ) = ess R(ψ).
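In the diagonal (atomic) case the identity σ(ψ(T)) = ess R(ψ) is simply evaluation of ψ on the eigenvalues, as in this sketch (hypothetical data, not from the text):

```python
# For T = diag{1, 2, 4}, psi(T) = diag{psi(1), psi(2), psi(4)}, so
# sigma(psi(T)) = psi(sigma(T)) = ess R(psi) for the atomic spectral measure.
spectrum_T = [1.0, 2.0, 4.0]
psi = lambda lam: lam * lam - 1.0
spectrum_psi_T = sorted(psi(l) for l in spectrum_T)   # [0.0, 3.0, 15.0]
```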
Measure theory was the main ingredient for the last four sections of Chap-
ter 3 and also for Section 2 of this chapter. The background for the present
section is complex analysis, where the integrals involved here are all Riemann’s
(unlike the measure theoretic integrals we have been dealing with so far). For
the standard results of complex analysis used in this section the reader is
referred, for instance, to [1], [13], [21], [26], and [79].
An arc is a continuous function from an arbitrary (nondegenerate) closed
interval of the real line into the complex plane, say, α:[0, 1] → C , where Γ =
R(α) = {α(t) ∈ C : t ∈ [0, 1]} is the range of the arc α, which is connected
and compact (continuous image of connected sets is connected, and continuous
image of compact sets is compact). Let G = {#X : ∅ ≠ X ∈ ℘([0, 1])} be the set of all cardinal numbers of nonempty subsets of the interval [0, 1]. Given an arc α: [0, 1] → Γ set μ(t) = #{s ∈ [0, 1): α(s) = α(t)} ∈ G for each t ∈ [0, 1).
That is, μ(t) ∈ G is the cardinality of the subset of [0, 1] consisting of all
points s ∈ [0, 1) such that α(s) = α(t), for every t ∈ [0, 1). We say that μ(t)
is the multiplicity of the point α(t) ∈ Γ . Consider the function μ:[0, 1) → G
defined by t → μ(t), which is referred to as the multiplicity of the arc α. To
avoid space-filling curves, assume from now on that Γ is nowhere dense. A
pair (Γ, μ) consisting of the range Γ and the multiplicity μ of an arc is called a
directed pair if the direction in which the arc traverses its range Γ , according
to the natural order of the interval [0, 1], is taken into account. An oriented
curve generated by an arc α:[0, 1] → Γ is the directed pair (Γ, μ), and the arc
α itself is referred to as a parameterization of the curve (Γ, μ) (which is not
unique for (Γ, μ)). If an oriented curve (Γ, μ) is such that α(0) = α(1) for some
parameterization α (and so for all parameterizations), then it is called a closed
curve. A parameterization α of an oriented curve (Γ, μ) is injective on [0, 1)
if and only if μ = 1 (i.e., μ(t) = 1 for all t ∈ [0, 1)). If one parameterization of
(Γ, 1) is injective, then so are all of them. A curve (Γ, 1) parameterized by an
arc α that is injective on [0, 1) is called a simple curve (or a Jordan curve).
Thus an oriented curve consists of a directed pair (Γ, μ), where Γ ⊂ C is the
(nowhere dense) range of a continuous function α:[0, 1] → C called a parame-
terization of (Γ, μ), and μ is a G -valued function that assigns to each point t
of [0, 1) the multiplicity of the point α(t) in Γ (i.e., each μ(t) says how often
α traverses the point γ = α(t) of Γ ). For notational simplicity we shall refer
to Γ itself as an oriented curve when the multiplicity μ is either clear in the
context or is immaterial. In this case we simply say "Γ is an oriented curve".
By a partition P of the interval [0, 1] we mean a finite sequence {t_j}_{j=0}^n of points in [0, 1] such that 0 = t_0, t_{j−1} < t_j for 1 ≤ j ≤ n, and t_n = 1. The total variation of a function α: [0, 1] → C is the supremum of the set {ν ∈ R : ν = Σ_{j=1}^n |α(t_j) − α(t_{j−1})|} taken over all partitions P of [0, 1]. A function α: [0, 1] → C is of bounded variation if its total variation is finite. An oriented curve Γ is rectifiable if some (and so any) parameterization α of it is of bounded variation, and its length ℓ(Γ) is the total variation of α. An arc α: [0, 1] → C is
104 4. Functional Calculus
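As a numerical aside (an illustration added here, not part of the original text; it assumes NumPy), the total variation defining the length of a rectifiable arc can be approximated by partition sums, which increase toward ℓ(Γ):

```python
import numpy as np

# Illustration (not from the text): the total variation of an arc is the
# supremum of partition sums  sum_j |alpha(t_j) - alpha(t_{j-1})|.
# Hypothetical example: alpha parameterizes the unit circle, of length 2*pi.
def partition_sum(alpha, n):
    t = np.linspace(0.0, 1.0, n + 1)        # partition 0 = t_0 < ... < t_n = 1
    return np.abs(np.diff(alpha(t))).sum()  # sum of chord lengths

alpha = lambda t: np.exp(2j * np.pi * t)    # unit circle

for n in (4, 64, 1024):
    print(n, partition_sum(alpha, n))       # sums increase toward 2*pi
```

For the circle the sums converge to 2π from below, since each chord is shorter than the sub-arc it replaces.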
Let C[Γ, Y] and B[Γ, Y] be the linear spaces of all Y-valued continuous or bounded functions on Γ, respectively, equipped with the sup-norm, which are Banach spaces because Γ is compact and Y is complete. Recall that, since Γ is compact, C[Γ, Y] ⊆ B[Γ, Y]. It is readily verified that the transformation ∫ : C[Γ, Y] → Y, assigning to each f in C[Γ, Y] its integral in Y, is linear and bounded (i.e., linear and continuous — ∫ ∈ B[C[Γ, Y], Y]). Indeed,

    ‖∫_Γ f(λ) dλ‖ ≤ sup_{λ∈Γ} ‖f(λ)‖ ∫_0^1 |dα(t)| = ‖f‖_∞ ℓ(Γ),

and therefore, for every L ∈ B[Y, Z], where Z also is a Banach space,

    L ∫_Γ f(λ) dλ = ∫_Γ L f(λ) dλ.
The reverse (or the opposite) of an oriented curve Γ is an oriented curve
denoted by −Γ obtained from Γ by reversing the orientation of Γ . To reverse
the orientation of Γ means to reverse the order of the domain [0, 1] of its
parameterization α or, equivalently, to replace α(t) with α(−t) for t running
over [−1, 0]. Reversing the orientation of a curve Γ has the effect of replacing ∫_Γ f(λ) dλ = ∫_0^1 f(α(t)) dα(t) with ∫_1^0 f(α(t)) dα(t) = ∫_{−Γ} f(λ) dλ, so that

    ∫_{−Γ} f(λ) dλ = − ∫_Γ f(λ) dλ.
Also note that the preceding arguments and results all remain valid if the
continuous function f is defined on a subset Λ of C that includes the curve Γ
(by simply replacing it with its restriction to Γ, which remains continuous).
In particular, if Λ is a nonempty open subset of C and Γ is a rectifiable curve
included in Λ, and if f : Λ → Y is a continuous$ function on the open set Λ,
then f has an integral over Γ ⊂ Λ ⊆ C , viz., Γ f (λ) dλ.
Recall that a topological space is disconnected if it is the union of two dis-
joint nonempty subsets that are both open and closed. Otherwise, the topo-
logical space is said to be connected . A subset of a topological space is called
a connected set (or a connected subset ) if, as a topological subspace, it is a
connected topological space itself. Take any nonempty subset Λ of C . A com-
ponent of Λ is a maximal connected subset of Λ, which coincides with the
union of all connected subsets of Λ containing a given point of Λ. Thus any
two components of Λ are disjoint. Since the closure of a connected set is con-
nected, any component of Λ is closed relative to Λ. By a region (or a domain)
we mean a nonempty connected open subset of C . Every open subset of C is
uniquely expressed as a countable union of disjoint regions that are compo-
nents of it (see, e.g., [21, Proposition 3.9]). The closure of a region is sometimes
called a closed region. Carefully note that different regions may have the same
closure (sample: a punctured open disk).
If Γ is a closed rectifiable oriented curve in C and if ζ ∈ C \Γ, then a
classical and important result in complex analysis says that the integral
    w_Γ(ζ) = (1/2πi) ∫_Γ (λ − ζ)^{-1} dλ
has an integer value that is constant on each component of C \Γ and zero
on the unbounded component of C \Γ . The number wΓ (ζ) is referred to as
the winding number of Γ about ζ. If a closed rectifiable oriented curve Γ is a
simple curve, then C\Γ has only two components, just one of them is bounded,
and Γ is their common boundary (this is the Jordan Curve Theorem). In this
case (i.e., for a simple closed rectifiable oriented curve Γ ), wΓ (ζ) takes on only
two values for each ζ ∈ C \Γ , either 0 or ±1, and if for every ζ in the bounded
component of C \Γ we get wΓ (ζ) = 1, then we say that Γ is positively (i.e.,
counterclockwise) oriented and, otherwise, if wΓ (ζ) = −1, then we say that
Γ is negatively (i.e., clockwise) oriented ; and for every ζ in the unbounded
component of C \Γ we get wΓ (ζ) = 0. If Γ is positively oriented, then the
reverse curve −Γ is negatively oriented, and vice versa. These notions can be extended as follows. A finite union Γ = ⋃_{j=1}^m Γ_j of disjoint closed rectifiable oriented simple curves Γ_j is called a path, and its winding number w_Γ(ζ) about ζ ∈ C\Γ is defined by w_Γ(ζ) = Σ_{j=1}^m w_{Γ_j}(ζ). A path Γ is positively oriented if for every ζ ∈ C\Γ the winding number w_Γ(ζ) is either 0 or 1, and negatively oriented if for every ζ ∈ C\Γ the winding number w_Γ(ζ) is either 0 or −1. If a path Γ = ⋃_{j=1}^m Γ_j is positively oriented, then the reverse path −Γ = ⋃_{j=1}^m −Γ_j
is negatively oriented. The inside (notation: ins Γ ) and the outside (notation:
out Γ ) of a positively oriented path Γ are the sets
    ins Γ = {ζ ∈ C : w_Γ(ζ) = 1}  and  out Γ = {ζ ∈ C : w_Γ(ζ) = 0}.
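The winding number integral can be checked numerically; the following sketch (an illustration added here, not part of the text; it assumes NumPy) discretizes the parameterization of a positively oriented circle:

```python
import numpy as np

# Illustration (not from the text): the winding number
#   w(zeta) = (1/2 pi i) * integral over Gamma of (lambda - zeta)^(-1) d lambda
# evaluated by the trapezoidal rule on a positively oriented circle of radius 2.
def winding_number(zeta, radius=2.0, n=2000):
    t = np.arange(n) / n                      # uniform parameter grid on [0, 1)
    lam = radius * np.exp(2j * np.pi * t)     # alpha(t) on the circle
    dlam = 2j * np.pi * lam / n               # alpha'(t) dt
    return np.sum(dlam / (lam - zeta)) / (2j * np.pi)

print(winding_number(0.5 + 0.5j))   # point in the bounded component: about 1
print(winding_number(3.0))          # point in the unbounded component: about 0
```

Points of the bounded component give 1 and points of the unbounded component give 0, matching the discussion above.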
From now on all paths are positively oriented. Note that if Γ = ⋃_{j=1}^m Γ_j is a path and if there is a finite subset {Γ_j}_{j=1}^n of Γ such that Γ_j ⊂ ins Γ_{j+1}, then these nested (disjoint closed rectifiable oriented) simple curves {Γ_j} are oppositely oriented, with Γ_n being positively oriented, because w_Γ(ζ) is either 0 or 1 for every ζ ∈ C\Γ. An open subset of C is a Cauchy domain
(or a Jordan domain) if it has finitely many components whose closures are
pairwise disjoint, and its boundary is a path. The closure of a Jordan domain
is sometimes referred to as a Jordan closed region. Also observe that if Γ is
a path, then {Γ, ins Γ, out Γ } is a partition of C . Since a path Γ is a closed
set in C (finite union of closed sets), and since it is the common boundary
of ins Γ and out Γ, it follows that ins Γ and out Γ are open sets in C , and
their closures are given by the union with their common boundary.
That is, the integral of the C -valued function ψ(·)[(·) − ζ]−1 : Λ\{ζ} → C , for
each ζ ∈ ins Γ , on any path Γ such that (ins Γ )− ⊂ Λ, is generalized to the
B[X ]-valued function ψ(·)[(·)I − T ]−1 : Λ ∩ ρ(T ) → B[X ], for each T ∈ B[X ],
on any path Γ such that Γ ⊂ Λ ∩ ρ(T ).
We shall see below that ψRT in fact is analytic, where the definition of
a Banach-space-valued analytic function on a nonempty open subset of C is
exactly the same as that for a scalar-valued analytic function.
Proof. Let T be an arbitrary operator in B[X] and take its resolvent function R_T : ρ(T) → G[X] so that R_T(λ) = (λI − T)^{-1} for every λ ∈ ρ(T). Let ψ: Λ → C be an analytic function on an open set Λ ⊆ C that properly includes σ(T), so that there is a path Γ satisfying the above assumption. If there were only one such path, there would be nothing to prove. Thus suppose Γ and Γ′ are distinct paths satisfying the above assumption. We shall show that

    ∫_Γ ψ(λ) R_T(λ) dλ = ∫_{Γ′} ψ(λ) R_T(λ) dλ.
    (R_T(λ) − R_T(ν))/(λ − ν) + R_T(ν)² = (R_T(ν) − R_T(λ)) R_T(ν)

(cf. proof of Theorem 2.2). Since R_T : ρ(T) → G[X] is continuous,

    lim_{λ→ν} ‖(R_T(λ) − R_T(ν))/(λ − ν) + R_T(ν)²‖ ≤ lim_{λ→ν} ‖R_T(ν) − R_T(λ)‖ ‖R_T(ν)‖ = 0
for every ν ∈ ρ(T ). Thus the resolvent function RT : ρ(T ) → G[X ] is analytic on
ρ(T ) and so it is analytic on the nonempty open set Λ ∩ ρ(T ). Since the func-
tion ψ: Λ → C also is analytic on Λ ∩ ρ(T ), and since the product of analytic
functions on the same domain is again analytic, it follows that the product
4.3 Analytic Functional Calculus: Riesz Functional Calculus 109
function ψRT : Λ ∩ ρ(T ) → B[X ] such that (ψRT )(λ) = ψ(λ)RT (λ) ∈ B[X ] for
every λ ∈ Λ ∩ ρ(T ) is analytic on Λ ∩ ρ(T ), concluding the proof of Claim 1.
The Cauchy Theorem says that if ψ: Λ → C is analytic on a nonempty open
subset Λ of C that includes a path Γ and its inside ins Γ , then
    ∫_Γ ψ(λ) dλ = 0
(see, e.g., [21, Problem 5.O]). This can be extended from scalar-valued functions to Banach-space-valued functions. Let Y be a complex Banach space, let f : Λ → Y be an analytic function on a nonempty open subset Λ of C that includes a path Γ and its inside ins Γ, and consider the integral ∫_Γ f(λ) dλ.
Claim 2. If f : Λ → Y is analytic on a given nonempty open subset Λ of C
and if Γ is any path such that (ins Γ )− ⊂ Λ, then
    ∫_Γ f(λ) dλ = 0.
But Claim 1 ensures that ψR_T is analytic on Λ ∩ ρ(T) and therefore, since Δ*⁻ = (ins Γ*)⁻ ⊂ Λ ∩ ρ(T), Claim 2 ensures that ∫_{Γ*} ψ(λ) R_T(λ) dλ = 0. Hence

    ∫_Γ ψ(λ) R_T(λ) dλ = ∫_{Γ′} ψ(λ) R_T(λ) dλ.
After defining the integral of ψR_T over any path Γ such that Γ ⊂ Λ ∩ ρ(T), and after ensuring in Lemma 4.13 its invariance for any path Γ such that σ(T) ⊂ ins Γ ⊂ (ins Γ)⁻ ⊂ Λ, it is clear that the preceding definition of the operator ψ(T) is motivated by the Cauchy Integral Formula.
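In finite dimensions this definition can be tested directly: for a matrix T, the contour integral (1/2πi) ∫_Γ ψ(λ) (λI − T)^{-1} dλ over a circle enclosing σ(T) reproduces ψ(T). A numerical sketch (an illustration added here, not part of the text; it assumes NumPy, and the name psi_of_T is ours):

```python
import numpy as np

# Illustration (not from the text): psi(T) as the contour integral
#   (1/2 pi i) * integral of psi(lambda) (lambda I - T)^(-1) d lambda
# over a circle about the origin enclosing sigma(T), via the trapezoidal rule.
def psi_of_T(psi, T, n=4000):
    T = np.asarray(T, dtype=complex)
    radius = np.max(np.abs(np.linalg.eigvals(T))) + 1.0   # circle encloses sigma(T)
    I = np.eye(T.shape[0], dtype=complex)
    total = np.zeros_like(T)
    for t in np.arange(n) / n:
        lam = radius * np.exp(2j * np.pi * t)
        dlam = 2j * np.pi * lam / n                       # alpha'(t) dt
        total += psi(lam) * np.linalg.inv(lam * I - T) * dlam
    return total / (2j * np.pi)

T = np.array([[2.0, 1.0], [0.0, 3.0]])                    # sigma(T) = {2, 3}
print(np.allclose(psi_of_T(lambda lam: lam**2, T), T @ T))  # psi(lambda) = lambda^2 gives T^2
```

For ψ = 1 the same routine returns the identity, in line with the computation of 1(T) and ϕ(T) carried out below.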
Given an operator T in B[X ], we say that a complex-valued function ψ
is analytic on σ(T ) (or analytic on a neighborhood of σ(T )) if it is analytic
on an open set that includes σ(T ). By a path about σ(T ) we mean a path Γ
whose inside (properly) includes σ(T ) (i.e., a path such that σ(T ) ⊂ ins Γ ).
Thus what is behind Definition 4.14 is that, if ψ is analytic on σ(T ), then the
above integral exists as an operator in B[X ], and the integral does not depend
on the path about σ(T ) whose closure of its inside is included in the open set
upon which ψ is defined (i.e., it is included in the domain of ψ).
Lemma 4.15. If φ and ψ are analytic on σ(T ) and if Γ is any path about
σ(T ) such that σ(T ) ⊂ ins Γ ⊂ (ins Γ )− ⊂ Λ, where the nonempty open set
Λ ⊆ C is the intersection of the domains of φ and ψ, then
    φ(T) ψ(T) = (1/2πi) ∫_Γ φ(λ) ψ(λ) R_T(λ) dλ.
Thus, by Definition 4.14 and by the resolvent identity (cf. Section 2.1),

    φ(T) ψ(T) = ((1/2πi) ∫_Γ φ(λ) R_T(λ) dλ) ((1/2πi) ∫_{Γ′} ψ(ν) R_T(ν) dν)
              = −(1/4π²) ∫_Γ ∫_{Γ′} φ(λ) ψ(ν) R_T(λ) R_T(ν) dν dλ
              = −(1/4π²) ∫_Γ ∫_{Γ′} φ(λ) ψ(ν) (λ − ν)^{-1} (R_T(λ) − R_T(ν)) dν dλ
              = −(1/4π²) ∫_Γ ∫_{Γ′} φ(λ) ψ(ν) (λ − ν)^{-1} R_T(λ) dν dλ
                + (1/4π²) ∫_{Γ′} ∫_Γ φ(λ) ψ(ν) (λ − ν)^{-1} R_T(ν) dλ dν
              = −(1/4π²) ∫_Γ φ(λ) (∫_{Γ′} ψ(ν) (λ − ν)^{-1} dν) R_T(λ) dλ
Given an operator T ∈ B[X ], let A(σ(T )) denote the set of all analytic
functions on the spectrum σ(T ) of T . That is, ψ ∈ A(σ(T )) if ψ: Λ → C is
analytic on an open set Λ ⊆ C that includes σ(T ). It is readily verified that
A(σ(T )) is a unital algebra, where the domain of the product of two functions
in A(σ(T )) is the intersection of their domains, and the identity element 1 in
A(σ(T )) is the constant function 1(λ) = 1 for all λ ∈ Λ. Note that the identity
function also lies in A(σ(T )) (i.e., if ϕ: Λ → Λ ⊆ C is such that ϕ(λ) = λ for
every λ ∈ Λ, then ϕ ∈ A(σ(T ))). The next theorem is the main result of this
section. (Recall that X is an arbitrary nonzero complex Banach space.)
for every ψ ∈ A(σ(T )), where the above series converges uniformly (i.e., it
converges in the operator norm topology of B[X ]). Next consider two special
cases consisting of elementary polynomials. First let ψ = 1 so that
    1(T) = I (1/2πi) ∫_Γ (1/λ) dλ + Σ_{k=1}^∞ T^k (1/2πi) ∫_Γ (1/λ^{k+1}) dλ = I + 0 = I,

since (1/2πi) ∫_Γ (1/λ) dλ = 1(0) = 1 by the Cauchy Integral Formula and, for k ≥ 1, (1/2πi) ∫_Γ (1/λ^{k+1}) dλ = 0. Indeed, with α(θ) = ρe^{iθ} for every θ ∈ [0, 2π] we get ∫_Γ (1/λ) dλ = ∫_0^{2π} α(θ)^{-1} α′(θ) dθ = ∫_0^{2π} ρ^{-1} e^{-iθ} ρ i e^{iθ} dθ = i ∫_0^{2π} dθ = 2πi and, for k ≥ 1, we also get ∫_Γ (1/λ^{k+1}) dλ = ∫_0^{2π} α(θ)^{-(k+1)} α′(θ) dθ = ∫_0^{2π} ρ^{-(k+1)} e^{-i(k+1)θ} i ρ e^{iθ} dθ = i ρ^{-k} ∫_0^{2π} e^{-ikθ} dθ = 0. Now let ψ = ϕ, the identity function, so that

    ϕ(T) = I (1/2πi) ∫_Γ dλ + T (1/2πi) ∫_Γ (1/λ) dλ + Σ_{k=2}^∞ T^k (1/2πi) ∫_Γ (1/λ^k) dλ = 0 + T + 0 = T,

since (1/2πi) ∫_Γ dλ = 0 by the Cauchy Theorem (i.e., ∫_Γ dλ = ∫_0^{2π} α′(θ) dθ = i ρ ∫_0^{2π} e^{iθ} dθ = 0) and the other two integrals were computed above. Thus
    T² = ϕ(T)² = (1/2πi) ∫_Γ ϕ(λ)² R_T(λ) dλ = (1/2πi) ∫_Γ λ² R_T(λ) dλ

by Lemma 4.15, and so a trivial induction ensures that

    T^j = (1/2πi) ∫_Γ λ^j R_T(λ) dλ

for every integer j ≥ 0. Therefore, by linearity of the integral,

    p(T) = Σ_{j=0}^m α_j T^j = (1/2πi) ∫_Γ p(λ) R_T(λ) dλ
whenever p(λ) = Σ_{j=0}^m α_j λ^j; that is, for every polynomial p ∈ P(σ(T)). Next we extend this result from finite power series (i.e., from polynomials) to infinite power series. To say that a function ψ has a power series expansion ψ(λ) = Σ_{k=0}^∞ α_k λ^k on a neighborhood of σ(T) means that the radius of convergence of the series Σ_{k=0}^∞ α_k λ^k is greater than r(T), which implies that the polynomials p_n(λ) = Σ_{k=0}^n α_k λ^k make a sequence {p_n}_{n=0}^∞ that converges uniformly to ψ on a closed disk whose boundary is a circle Γ of radius r(T) + ε about the origin for some ε > 0, which in turn implies that ψ ∈ A(σ(T)). Thus, since the integral is linear and continuous (i.e., since ∫ ∈ B[C[Γ, B[X]], B[X]]), and according to Corollary 2.12(a), it follows that

    ψ(T) = Σ_{k=0}^∞ α_k T^k.
Indeed,
    ψ(T) = (1/2πi) ∫_Γ ψ(λ) R_T(λ) dλ = (1/2πi) ∫_Γ Σ_{k=0}^∞ α_k λ^k R_T(λ) dλ
         = Σ_{k=0}^∞ α_k (1/2πi) ∫_Γ λ^k R_T(λ) dλ = Σ_{k=0}^∞ α_k T^k,
where the operator series Σ_{k=0}^∞ α_k T^k converges uniformly (i.e., it converges in the operator norm topology to ψ(T) ∈ B[X]). In fact, since the scalar series Σ_{k=0}^∞ α_k λ^k converges uniformly on Γ to ψ ∈ A(σ(T)),

    ‖ψ(T) − p_n(T)‖ = ‖(1/2πi) ∫_Γ (ψ(λ) − p_n(λ)) R_T(λ) dλ‖
                    ≤ sup_{λ∈Γ} |ψ(λ) − p_n(λ)| ‖(1/2πi) ∫_Γ R_T(λ) dλ‖
                    = sup_{λ∈Γ} |ψ(λ) − p_n(λ)| → 0 as n → ∞.
Hence ΦT (pn ) → ΦT (ψ) in B[X ]. The same argument shows that if {ψn }∞n=0 is
a sequence of functions in A(σ(T )) that converges uniformly on every compact
subset of its domain to ψ ∈ A(σ(T )), then ΦT (ψn ) → ΦT (ψ) in B[X ].
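As a numerical aside (an illustration added here, not part of the text; it assumes NumPy), the norm convergence p_n(T) → ψ(T) can be watched for ψ = exp, whose power series has infinite radius of convergence:

```python
import numpy as np

# Illustration (not from the text): partial sums of the exponential series
# applied to a matrix converge in norm. Here exp(T) is rotation by 1 radian.
T = np.array([[0.0, 1.0], [-1.0, 0.0]])    # sigma(T) = {i, -i}

partial = np.zeros((2, 2))
term = np.eye(2)                            # T^0 / 0!
for k in range(1, 60):
    partial = partial + term
    term = term @ T / k                     # next term T^k / k!

expected = np.array([[np.cos(1.0), np.sin(1.0)],
                     [-np.sin(1.0), np.cos(1.0)]])
print(np.max(np.abs(partial - expected)))   # near machine precision
```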
Corollary 4.17. Take any operator T ∈ B[X]. If ψ ∈ A(σ(T)), then ψ(T)
commutes with every operator that commutes with T .
Proof. This is the counterpart of Corollary 4.10. Let S be an operator in B[X ].
Claim. If S T = T S, then RT (λ)S = SRT (λ) for every λ ∈ ρ(T ).
Proof. S = S(λI − T)(λI − T)^{-1} = (λI − T)S(λI − T)^{-1} if ST = TS, and so R_T(λ)S = (λI − T)^{-1}(λI − T)S(λI − T)^{-1} = SR_T(λ) for λ ∈ ρ(T).
Therefore, according to Definition 4.14, since S lies in B[X],

    S ψ(T) = S (1/2πi) ∫_Γ ψ(λ) R_T(λ) dλ = (1/2πi) ∫_Γ ψ(λ) S R_T(λ) dλ
           = (1/2πi) ∫_Γ ψ(λ) R_T(λ) S dλ = ψ(T) S.
Proof. Take T ∈ B[X] and ψ: Λ → C in A(σ(T)). If λ ∈ σ(T) (so that λ ∈ Λ), consider the function φ: Λ → C defined by φ(λ) = 0 and φ(ν) = (ψ(λ) − ψ(ν))/(λ − ν) for every ν ∈ Λ\{λ}. Since the function ψ(λ) − ψ(·) = φ(·)(λ − ·): Λ → C lies in A(σ(T)), Lemma 4.15, Theorem 4.16, and Corollary 4.17 ensure that

    ψ(λ)I − ψ(T) = (1/2πi) ∫_Γ [ψ(λ) − ψ(ν)] R_T(ν) dν = (1/2πi) ∫_Γ (λ − ν) φ(ν) R_T(ν) dν
If ψ(λ) ∈ ρ(ψ(T )), then ψ(λ)I − ψ(T ) has an inverse [ψ(λ)I − ψ(T )]−1 in
B[X ]. Since ψ(λ)I − ψ(T ) = φ(T ) (λI − T ), we get
[ψ(λ)I − ψ(T )]−1 φ(T ) (λI − T ) = [ψ(λ)I − ψ(T )]−1 [ψ(λ)I − ψ(T )] = I,
(λI − T ) φ(T ) [ψ(λ)I − ψ(T )]−1 = [ψ(λ)I − ψ(T )] [ψ(λ)I − ψ(T )]−1 = I.
Thus λI − T has a left and a right bounded inverse, and so it has an inverse
in B[X ], which means that λ ∈ ρ(T ), which is a contradiction. Therefore, if
λ ∈ σ(T ), then ψ(λ) ∈ σ(ψ(T )), and hence
    ψ(σ(T)) = {ψ(λ) ∈ C : λ ∈ σ(T)} ⊆ σ(ψ(T)).
Conversely, take any ν ∉ ψ(σ(T)), which means that ν − ψ(λ) ≠ 0 for every λ ∈ σ(T), and consider the function φ′(·) = 1/(ν − ψ(·)): Λ′ → C, which lies in A(σ(T)) with Λ′ ⊆ Λ. Take Γ such that σ(T) ⊂ ins Γ ⊆ (ins Γ)⁻ ⊂ Λ′ as in Definition 4.14. Then Lemma 4.15 and Theorem 4.16 ensure that

    φ′(T) [νI − ψ(T)] = [νI − ψ(T)] φ′(T) = (1/2πi) ∫_Γ R_T(λ) dλ = I.

Thus νI − ψ(T) has a bounded inverse, φ′(T) ∈ B[X], and so ν ∈ ρ(ψ(T)). Equivalently, if ν ∈ σ(ψ(T)), then ν ∈ ψ(σ(T)). That is,
and closed relative to σ(T )). Suppose Δ ⊆ σ(T ) is a spectral set. Then there
is a function ψΔ ∈ A(σ(T )) (analytic on a neighborhood of σ(T )) such that
    ψ_Δ(λ) = 1 for λ ∈ Δ  and  ψ_Δ(λ) = 0 for λ ∈ σ(T)\Δ.
It does not matter which value ψΔ (λ) takes for those λ in a neighborhood of
σ(T) but not in σ(T). Recall that if Δ is nontrivial (i.e., if ∅ ≠ Δ ≠ σ(T)),
then σ(T ) must be disconnected. Set EΔ = EΔ (T ) = ψΔ (T ) so that
    E_Δ = ψ_Δ(T) = (1/2πi) ∫_Γ ψ_Δ(λ) R_T(λ) dλ = (1/2πi) ∫_{Γ_Δ} R_T(λ) dλ

for any path Γ such that σ(T) ⊂ ins Γ and (ins Γ)⁻ is included in a neighborhood of σ(T) as in Definition 4.14, and Γ_Δ is any path for which Δ ⊆ ins Γ_Δ
and σ(T )\Δ ⊆ out ΓΔ . The operator EΔ ∈ B[X ] is referred to as the Riesz
idempotent associated with Δ. In particular, the Riesz idempotent associated
with an isolated point λ0 of the spectrum of T , namely, E{λ0 } = E{λ0 } (T ) =
ψ{λ0 } (T ), will be denoted by
    E_{λ_0} = (1/2πi) ∫_{Γ_{λ_0}} R_T(λ) dλ = (1/2πi) ∫_{Γ_{λ_0}} (λI − T)^{-1} dλ,
where Γλ0 is any simple closed rectifiable positively oriented curve (e.g., any
positively oriented circle) enclosing λ0 but no other point of σ(T ).
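The Riesz idempotent can be computed numerically for a matrix by discretizing the contour integral; a sketch (an illustration added here, not part of the text; it assumes NumPy):

```python
import numpy as np

# Illustration (not from the text): the Riesz idempotent for the spectral
# set Delta = {1} of a 2 x 2 matrix, from the contour integral
#   E = (1/2 pi i) * integral of (lambda I - T)^(-1) d lambda
# over a small circle about lambda_0 = 1 (the other eigenvalue, 4, stays outside).
T = np.array([[1.0, 5.0], [0.0, 4.0]])     # sigma(T) = {1, 4}
I = np.eye(2)

n = 2000
E = np.zeros((2, 2), dtype=complex)
for t in np.arange(n) / n:
    lam = 1.0 + np.exp(2j * np.pi * t)     # circle of radius 1 about 1
    dlam = 2j * np.pi * (lam - 1.0) / n
    E += np.linalg.inv(lam * I - T) * dlam
E = (E / (2j * np.pi)).real

print(np.allclose(E @ E, E))               # E is idempotent
print(np.allclose(E @ T, T @ E))           # E commutes with T
```

Here E is the projection onto the eigenspace of 1 along the eigenspace of 4, in line with Lemma 4.19 below.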
Let T be an operator in B[X ]. It is readily verified that the collection of all
spectral subsets of σ(T ) forms a (Boolean) algebra of subsets of σ(T ). Indeed,
the empty set and whole set σ(T ) are spectral sets, complements relative to
σ(T ) of spectral sets remain spectral sets, and finite unions of spectral sets
are spectral sets, and so are differences and intersections of spectral sets.
Lemma 4.19. Take any operator T ∈ B[X ] and let EΔ ∈ B[X ] be the Riesz
idempotent associated with an arbitrary spectral subset Δ of σ(T ).
(a) EΔ is a projection such that
(a1 ) E∅ = O, Eσ(T ) = I and Eσ(T )\Δ = I − EΔ .
If Δ1 and Δ2 are spectral subsets of σ(T ), then
(a2 ) EΔ1 ∩Δ2 = EΔ1 EΔ2 and EΔ1 ∪Δ2 = EΔ1 + EΔ2 − EΔ1 EΔ2 .
(b) EΔ commutes with every operator that commutes with T .
(c) R(EΔ ) is a subspace of X that is S-invariant for every operator S ∈ B[X ]
that commutes with T .
Proof. (a) Lemma 4.15 ensures that a Riesz idempotent deserves its name: E_Δ² = E_Δ, so that E_Δ ∈ B[X] is a projection. Observe from the definition of the function ψ_Δ for an arbitrary spectral subset Δ of σ(T) that
4.4 Riesz Decomposition Theorem and Riesz Idempotents 117
ST = TS implies SEΔ = EΔ S.
σ(T |R(EΔ ) ) = Δ.
Moreover, the map Δ → EΔ that assigns to each spectral subset of σ(T ) the
associated Riesz idempotent EΔ ∈ B[X ] is injective.
Proof. Consider the results in Lemma 4.19(a1 ). If Δ = σ(T ), then EΔ = I so
that R(EΔ ) = X , and hence σ(T |R(EΔ ) ) = σ(T ). On the other hand, if Δ = ∅,
then EΔ = O so that R(EΔ ) = {0}, and hence T |R(EΔ ) : {0} → {0} is the null
operator on the zero space, which implies that σ(T |R(EΔ ) ) = ∅. In both cases
the identity σ(T |R(EΔ ) ) = Δ holds trivially. (Recall that the spectrum of a
bounded linear operator is nonempty on every nonzero complex Banach space.)
Thus suppose ∅ ≠ Δ ≠ σ(T). Let ψ_Δ ∈ A(σ(T)) be the function that defines
the Riesz idempotent EΔ associated with Δ,
    ψ_Δ(λ) = 1 for λ ∈ Δ  and  ψ_Δ(λ) = 0 for λ ∈ σ(T)\Δ,

so that

    E_Δ = ψ_Δ(T) = (1/2πi) ∫_Γ ψ_Δ(λ) R_T(λ) dλ
for any path Γ as in Definition 4.14, and let ϕ ∈ A(σ(T )) be the identity
function on σ(T ), that is,
    ϕ φ_0 = 1 − ψ_Δ on σ(T),

    T φ_0(T) = φ_0(T) T = I − ψ_Δ(T) = I − E_Δ.
If 0 ∈ σ(T |R(EΔ ) ), then T |R(EΔ ) has a bounded inverse on the Banach space
R(EΔ ), which means that there exists an operator [T |R(EΔ ) ]−1 on R(EΔ ) such
that T |R(EΔ) [T |R(EΔ) ]−1 = [T |R(EΔ ) ]−1 T |R(EΔ ) = I. Consider the operator
[T |R(EΔ ) ]−1 EΔ in B[X ] so that, since T ψΔ (T ) = ψΔ (T )T ,
    ϕ φ_1 = ψ_Δ on σ(T),

    T φ_1(T) = φ_1(T) T = ψ_Δ(T) = E_Δ.
σ(T |R(EΔ ) ) = Δ.
EΔ = O =⇒ Δ=∅
by the above displayed identity (that holds for all spectral subsets of σ(T )).
Take arbitrary spectral subsets Δ1 and Δ2 of σ(T ). From Lemma 4.19(a),
EΔ1 \Δ2 = EΔ1 ∩(σ(T )\Δ2 ) = EΔ1 (I − EΔ2 ) = EΔ1 − EΔ1 EΔ2 ,
where EΔ1 and EΔ2 commute since EΔ1 EΔ2 = EΔ1 ∩Δ2 = EΔ2 EΔ1 , and so
    (E_{Δ_1} − E_{Δ_2})² = E_{Δ_1} + E_{Δ_2} − 2E_{Δ_1}E_{Δ_2} = E_{Δ_1\Δ_2} + E_{Δ_2\Δ_1} = E_{Δ_1△Δ_2},

where Δ_1△Δ_2 = (Δ_1\Δ_2) ∪ (Δ_2\Δ_1) is the symmetric difference. Thus
such that

    ⋃_{j=1}^n Δ_j = σ(T) with Δ_j ∩ Δ_k = ∅ whenever j ≠ k.

The results of Lemma 4.19 are readily extended for any integer n ≥ 2 so that each E_{Δ_j} is a nontrivial projection (O ≠ E_{Δ_j} = E_{Δ_j}² ≠ I) and

    Σ_{j=1}^n E_{Δ_j} = I with E_{Δ_j} E_{Δ_k} = O whenever j ≠ k.
Observe that these operators commute. Indeed, recall that T RT (λ) = RT (λ)T ,
and T EΔj = EΔj T , and hence RT (λ)EΔj = EΔj RT (λ) (cf. Corollary 4.17 and
Lemma 4.19). Therefore, if j ≠ k, then
σ(Tj ) = Δj ∪ {0}.
Note that the spectral radii are such that r(T_j) ≤ r(T). Since T_j^k = T^k E_{Δ_j} for every nonnegative integer k, it follows by Corollary 2.12 that

    R_j(λ) = (λI − T)^{-1} E_{Δ_j} = (1/λ) Σ_{k=0}^∞ (T/λ)^k E_{Δ_j} = (1/λ) Σ_{k=0}^∞ (T_j/λ)^k

if λ in ρ(T) is such that r(T) < |λ| and, again by Corollary 2.12,

    R_{T_j}(λ) = (λI − T E_{Δ_j})^{-1} = (1/λ) Σ_{k=0}^∞ (T E_{Δ_j}/λ)^k = (1/λ) Σ_{k=0}^∞ (T_j/λ)^k

if λ in ρ(T_j) is such that r(T_j) < |λ|, where the above series converge uniformly in B[X]. Hence, if r(T) < |λ|, then

    R_j(λ) = R_T(λ) E_{Δ_j} = R_{T E_{Δ_j}}(λ) = R_{T_j}(λ)
The properties of Riesz idempotents in Lemma 4.19 and Theorem 4.20 lead
to a major result in operator theory, which says that operators whose spectra
are disconnected have nontrivial hyperinvariant subspaces. (For the original
statement of the Riesz Decomposition Theorem see [77, p. 421].)
Proof. Let EΔ1 and EΔ2 be the Riesz idempotents associated with Δ1 and
Δ2 . Lemma 4.19(a) ensures that R(EΔ1 ) and R(EΔ2 ) are complementary
subspaces (i.e., X = R(E_{Δ_1}) + R(E_{Δ_2}) and R(E_{Δ_1}) ∩ R(E_{Δ_2}) = {0}, since Δ_1
and Δ2 are complementary spectral sets, so that EΔ1 and EΔ2 are comple-
mentary projections). Thus (cf. Section 1.1), there is a unique decomposition
x = u + v = EΔ1 x + EΔ2 x
Proof. Let Δ be any spectral set of σ(T ) and let EΔ be the Riesz idempotent
associated to it. Since R(EΔ ) is an invariant subspace for T (cf. Lemma 4.19),
take the restriction T |R(EΔ ) ∈ B[R(EΔ )] of T to the Banach space R(EΔ ). If
λ ∈ σ(T )\Δ, then λ ∈ ρ(T |R(EΔ ) ). So (λI − T )|R(EΔ ) = λI|R(EΔ ) − T |R(EΔ)
has a bounded inverse, where I|R(EΔ ) = EΔ |R(EΔ ) stands for the identity on
R(EΔ ). Thus, since R(EΔ ) is T -invariant,
Therefore,
EΔ (N (λI − T )) = {0}.
Moreover, since
it follows that, if λ ∈ σ(T )\Δ, then λ ∈ ρ(T |R(EΔ ) ), by Theorem 4.20 (where
T |R(EΔ ) ∈ B[R(EΔ )]), so that (see diagram of Section 2.2)
(λI − T |R(Eλ ) )m = O.
(λI − T )m Eλ = O.
(λI − T )0 Eλ = O.
(Recall that AB = O if and only if R(B) ⊆ N (A), for all operators A and B.)
The remaining assertions are readily verified. In fact, {0} = N (λI − T ) since
λ ∈ σ(T )\{0} is an eigenvalue of the compact operator T (by the Fredholm
Alternative of Theorem 2.18), and N (λI − T ) ⊆ R(Eλ ) by Corollary 4.22.
Remark. Corollary 4.23 and Proposition 4.F ensure that, if T ∈ B∞[X ] (i.e.,
if T is compact) and λ ∈ σ(T )\{0}, then λ is a pole of the resolvent function
RT : ρ(T ) → G[X ], and the integer n in the statement of the preceding result
is the order of the pole λ. Moreover, it follows from Proposition 4.G that
    R(E_λ) = N((λI − T)^n).
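This identity can be observed numerically for a matrix with a Jordan block; a sketch (an illustration added here, not part of the text; it assumes NumPy):

```python
import numpy as np

# Illustration (not from the text): lambda_0 = 2 is a pole of order 2 of R_T
# for the matrix below (a 2 x 2 Jordan block plus a separated eigenvalue 5),
# and the Riesz idempotent E_2 is the projection onto N((2I - T)^2).
T = np.array([[2.0, 1.0, 0.0],
              [0.0, 2.0, 0.0],
              [0.0, 0.0, 5.0]])            # sigma(T) = {2, 5}
I = np.eye(3)

n = 2000
E = np.zeros((3, 3), dtype=complex)
for t in np.arange(n) / n:
    lam = 2.0 + np.exp(2j * np.pi * t)     # circle about 2, excluding 5
    dlam = 2j * np.pi * (lam - 2.0) / n
    E += np.linalg.inv(lam * I - T) * dlam
E = (E / (2j * np.pi)).real

# N((2I - T)^2) = span{e1, e2}, and E is the projection onto it along span{e3}
print(np.allclose(E, np.diag([1.0, 1.0, 0.0])))
```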
for every λ in the punctured disk B_{δ_0}(λ_0)\{λ_0} = {λ ∈ C : 0 < |λ − λ_0| < δ_0} ⊆ ρ(T), where δ_0 = d(λ_0, σ(T)\{λ_0}) is the distance from λ_0 to the rest of the
spectrum, and Tk ∈ B[X ] is such that
4.5 Additional Propositions 127
    T_k = (1/2πi) ∫_{Γ_{λ_0}} (λ − λ_0)^{-(k+1)} R_T(λ) dλ

for every k ∈ Z, where Γ_{λ_0} is any positively oriented circle centered at λ_0 with radius less than δ_0.
An isolated point of σ(T ) is a singularity (or an isolated singularity) of
the resolvent function RT : ρ(T ) → G[X ]. The expansion of RT in Proposition
4.E is the Laurent expansion of RT about an isolated point of the spectrum.
Note that, for every positive integer k (i.e., for every k ∈ N ),
and so T_{−1} = E_{λ_0}, the Riesz idempotent associated with the isolated point λ_0.
The notion of poles associated with isolated singularities of analytic functions
is extended as follows. An isolated point of σ(T) is a pole of order n of R_T, for some n ∈ N, if T_{−n} ≠ O and T_k = O for every k < −n (i.e., n is the largest positive integer such that T_{−n} ≠ O). Otherwise, if the number of nonzero
coefficients Tk with negative indices is infinite, then the isolated point λ0 of
σ(T ) is said to be an essential singularity of RT .
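The Laurent coefficients can be computed numerically for a matrix with an isolated eigenvalue; a sketch (an illustration added here, not part of the text; it assumes NumPy) in which λ_0 = 2 is a pole of order 2:

```python
import numpy as np

# Illustration (not from the text): the Laurent coefficients
#   T_k = (1/2 pi i) * integral of (lambda - 2)^(-(k+1)) R_T(lambda) d lambda
# about the isolated point lambda_0 = 2 of sigma(T) = {2, 5}. Here 2 is a
# pole of order 2: T_{-1} is the Riesz idempotent, T_{-2} is nilpotent, T_{-3} = O.
T = np.array([[2.0, 1.0, 0.0],
              [0.0, 2.0, 0.0],
              [0.0, 0.0, 5.0]])
I = np.eye(3)

def laurent_coeff(k, lam0=2.0, n=2000):
    total = np.zeros((3, 3), dtype=complex)
    for t in np.arange(n) / n:
        lam = lam0 + 0.5 * np.exp(2j * np.pi * t)   # radius 0.5 < d(2, {5})
        dlam = 2j * np.pi * (lam - lam0) / n
        total += (lam - lam0) ** (-(k + 1)) * np.linalg.inv(lam * I - T) * dlam
    return (total / (2j * np.pi)).real

print(np.round(laurent_coeff(-1), 8))   # the Riesz idempotent E_2 = diag(1, 1, 0)
print(np.round(laurent_coeff(-2), 8))   # (T - 2I) E_2, the nilpotent part
```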
where Eλ0 denotes the Riesz idempotent associated with λ0 and EΔ denotes
the Riesz idempotent associated with the spectral set Δ = σ(T )\{λ0 }.
Proposition 4.I. Take T ∈ B[X ]. Let Δ be a spectral subset of σ(T ), and let
EΔ ∈ B[X ] be the Riesz idempotent associated with Δ. If ψ ∈ A(σ(T )), then
ψ ∈ A(σ(T |R(EΔ ) )) and ψ(T )|R(EΔ ) = ψ(T |R(EΔ ) ), so that
    ((1/2πi) ∫_{Γ_Δ} ψ(λ) R_T(λ) dλ)|_{R(E_Δ)} = (1/2πi) ∫_{Γ_Δ} ψ(λ) R_{T|_{R(E_Δ)}}(λ) dλ.
for every hyponormal operator T ∈ B[H], where Eλ0 is the Riesz idempotent
associated with the isolated point λ0 of the spectrum of T .
Notes: For Proposition 4.A see, for instance, [27, Theorem IX.8.10]. Propo-
sition 4.B is the composition counterpart of the product-preserving result
of Lemma 4.15 (see, e.g., [56, Theorem 5.3.2] or [39, Theorem VII.3.12]).
For Proposition 4.C see [39, Exercise VII.5.18] or [87, 1st edn. Theorem
5.7-B]. Proposition 4.D follows from Theorem 4.20, since, by Theorem 2.7, λ_0 I − T|_{R(E_{λ_0})} is quasinilpotent so that lim_n ‖(λ_0 I − T)^n x‖^{1/n} = 0 whenever x ∈ R(E_{λ_0}). For the converse see [87, 1st edn. Lemma 5.8-C]. The Laurent
expansion in Proposition 4.E and also Proposition 4.F are standard results
(see [27, Lemma VII.6.11, Proposition 6.12, and Corollary 6.13]). Proposition
4.G refines the results of Proposition 4.F — see, e.g., [39, Theorem VII.3.24].
For Proposition 4.H see [39, Exercise VII.5.34]. Proposition 4.I is a conse-
quence of Theorem 4.20 (see, e.g., [39, Theorem VII.3.20]). Proposition 4.J
also holds for Banach space spectral operators of the scalar type (cf. [41,
Theorem XV.5.1]), and Proposition 4.K is a particular case of it for the char-
acteristic function in a neighborhood of a clopen set in the σ-algebra of Borel
subsets of σ(T ). Proposition 4.L is the extension of Proposition 3.G from nor-
mal to hyponormal operators, which is obtained by the Riesz Decomposition
Theorem (Corollary 4.21) — see, for instance, [66, Problem 6.28].
Suggested Reading
    SF = F_ℓ ∪ F_r and F = F_ℓ ∩ F_r.

    T ∈ F_ℓ if and only if T* ∈ F_r.

Therefore,

    T ∈ SF if and only if T* ∈ SF,
    T ∈ F if and only if T* ∈ F.

The following properties of the kernel and range of a Hilbert space operator and its adjoint (Lemmas 1.4 and 1.5) will often be used throughout this chapter:

    N(T*) = H ⊖ R(T)⁻ = R(T)^⊥,
    R(T*) is closed if and only if R(T) is closed.
see Proposition 1.C). This concludes the proof of (v) which, together with (i),
concludes half of the claimed result.
(a2 ) Conversely, suppose dim N (T ) < ∞ and R(T ) is closed. Since R(T )
is closed, it follows that T |N (T )⊥ : N (T )⊥ → H (which is injective because
N (T |N (T )⊥) = {0}) has a closed range R(T |N (T )⊥) = R(T ). Hence it has a
bounded inverse on its range (Theorem 1.2). Let E ∈ B[H] be the orthogonal
projection onto R(E) = R(T |N (T )⊥ ) = R(T ), and define A ∈ B[H] as follows:
A = (T |N (T )⊥ )−1 E.
AT = I + K,
Let Z be the set of all integers and set Z̄ = Z ∪ {−∞} ∪ {+∞}, the extended integers. Take any T in SF so that N(T) or N(T*) is finite dimensional. The Fredholm index ind(T) of T in SF is defined in Z̄ by
ind (T ∗ ) = −ind (T ).
This is how the theory advances in a Banach space. Many properties, among
those that will be developed in this chapter, work smoothly in any Banach
space. Some of them, in order to be properly translated into a Banach space
setting, will behave well if the Banach space is complemented. A Banach space
is complemented if every subspace of it has a complementary subspace (i.e., if for every subspace M of X there exists a subspace N of X such that M + N = X and M ∩ N = {0}). However, if a Banach space is complemented, then it
is isomorphic to a Hilbert space [72] (i.e., topologically isomorphic, after the
5.1 Fredholm Operators and Fredholm Index 135
(f) Product. Still from Corollary 5.2, a nonzero scalar operator is Fredholm
with a null index. This is readily generalized:
Proof. (a) If S and T are left semi-Fredholm, then there are A, B, K, L in B[H],
being K, L compact, such that BS = I + L and AT = I + K. Then
(b) We shall split the proof of ind (S T ) = ind (S) + ind (T ) into three parts.
(b1 ) Take S, T ∈ F . First suppose ind (S) and ind (T ) are both finite or, equiv-
alently, suppose S, T ∈ F = F ∩ Fr . Consider the surjective transformation
L: N (S T ) → R(L) ⊆ H defined by
Lx = T x for every x ∈ N (S T ),
N (L) = N (T ).
Again, as is well known from linear algebra, dim X = dim N (L) + dim R(L)
for every linear transformation L on a linear space X (see, e.g., [66, Problem
2.17]). Thus, with X = N (S T ), and recalling that dim N (S) < ∞,
Since R(T ) is closed, R(T ) ∩ N (S) is a subspace of N (S), so that (by the
Projection Theorem) N (S) = R(T ) ∩ N (S) ⊕ (N (S) (R(T ) ∩ N (S))), and
hence dim (N (S) (R(T ) ∩ N (S))) = dim N (S) − dim (R(T ) ∩ N (S)) (be-
cause dim N (S) < ∞). Thus
and so
ind (S T ) = ind (S) + ind (T ).
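The additivity of the index can be probed in finite dimensions, where every linear map between finite-dimensional spaces is Fredholm; a sketch (an illustration added here, not part of the text; it assumes NumPy):

```python
import numpy as np

# Illustration (not from the text): in finite dimensions every linear map
# A: C^n -> C^m is Fredholm, and rank-nullity gives
#   ind(A) = dim N(A) - dim N(A*) = n - m,
# so the additivity ind(S T) = ind(S) + ind(T) can be checked directly.
rng = np.random.default_rng(0)

def index(A):
    m, n = A.shape
    rank = np.linalg.matrix_rank(A)
    return (n - rank) - (m - rank)          # dim N(A) - dim N(A*)

T = rng.standard_normal((5, 7))             # T: C^7 -> C^5, index 2
S = rng.standard_normal((4, 5))             # S: C^5 -> C^4, index 1
print(index(T), index(S), index(S @ T))     # 2 1 3
```

Note that the index n − m does not depend on the particular matrices, only on the dimensions, which is the finite-dimensional shadow of the stability results below.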
If S, T ∈ SF\F, so that ind(S) and ind(T) are both not finite, then the expression ind(S) + ind(T) makes sense as an extended integer in Z̄ if and only if either ind(S) = ind(T) = +∞ or ind(S) = ind(T) = −∞. But this is equivalent to saying that either S, T ∈ F_r\F = F_r\F_ℓ or S, T ∈ F_ℓ\F = F_ℓ\F_r. Therefore, if S, T ∈ SF with one of them in F_r\F_ℓ and the other in F_ℓ\F_r, then the index expression of Theorem 5.4 does not make sense. Moreover, if one of them lies in F_r\F_ℓ and the other lies in F_ℓ\F_r, then it may happen that their product is not in SF. For instance, let H be an infinite-dimensional Hilbert space, and let S₊ be the canonical unilateral shift (of infinite multiplicity) on the Hilbert space ℓ²₊(H) = ⊕_{k=0}^∞ H (cf. Section 2.7). It is readily verified that dim N(S₊) = 0, dim N(S₊*) = ∞, and R(S₊) is closed in ℓ²₊(H). In fact, N(S₊) = {0}, N(S₊*) = ℓ²₊(H) ⊖ ⊕_{k=1}^∞ H ≅ H, and R(S₊) = {0} ⊕ ⊕_{k=1}^∞ H. Therefore, S₊ ∈ F_ℓ\F_r and S₊* ∈ F_r\F_ℓ (according to Theorem 5.1). However, it is easy to see that S₊S₊* ∉ SF. Indeed, S₊S₊* = O ⊕ I (where O stands for the null operator on H and I for the identity operator on ⊕_{k=1}^∞ H) does not lie in SF because it is self-adjoint and dim N(S₊S₊*) = ∞ (cf. Theorem 5.1 again).
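A finite truncation of the unilateral shift reproduces the O ⊕ I pattern; a sketch (an illustration added here, not part of the text; it assumes NumPy):

```python
import numpy as np

# Illustration (not from the text): a finite truncation of the unilateral
# shift, S e_j = e_{j+1} on C^5 (the last basis vector is annihilated, so
# this is only a caricature of S_+). It reproduces the block pattern of
# S_+ S_+* discussed above: S S* = diag(0, 1, ..., 1), S* S = diag(1, ..., 1, 0).
n = 5
S = np.eye(n, k=-1)        # ones on the subdiagonal: forward shift matrix

print(np.diag(S @ S.T))    # projection onto the range of S
print(np.diag(S.T @ S))    # projection onto the initial space of S
```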
Proof. The result holds trivially for n = 0 (the identity is Fredholm with index
zero), and tautologically for n = 1. Thus suppose n ≥ 2. Theorem 5.4 says
that the claimed result holds for n = 2. By using Theorem 5.4 again, a trivial
induction ensures that the claimed result holds for each n ≥ 2.
Proof. (a) If T ∈ F , then there exist A ∈ B[H] and K1 ∈ B∞[H] such that
AT = I + K1 . So A ∈ Fr . Take an arbitrary K ∈ B∞[H]. Set K2 = K1 + AK,
which lies in B∞[H] because B∞[H] is an ideal of B[H]. Since A(T + K) =
I + K2 , it follows that T + K ∈ F . Therefore, T ∈ F implies T + K ∈ F .
The converse also holds because T = (T + K) − K. Hence
T ∈ F ⇐⇒ T + K ∈ F .
T ∈ Fr ⇐⇒ T + K ∈ Fr .
Since ind(T ) is finite (under the assumption that T ∈ F), it follows that
ind (A) = −ind(T ) is finite as well, and so we may subtract ind (A) to get
ind(T + K) = ind(T ).
ind(T + K) = −∞ = ind(T ).
(b) Weyl Operator. A Weyl operator is a Fredholm operator with null index
(equivalently, a semi-Fredholm operator with null index). Let
    W = {T ∈ F : ind(T) = 0}
denote the class of all Weyl operators from B[H]. Since T ∈ F if and only if
T ∗ ∈ F and ind (T ∗ ) = −ind(T ), it follows that
    T ∈ W ⇐⇒ T* ∈ W.
Items (c), (d), and (f) in Remark 5.3 ensure that (i) every operator on a
finite-dimensional space is a Weyl operator,
    K ∈ B∞[H] and λ ≠ 0 =⇒ λI − K ∈ W,
and (iii) every nonzero multiple of a Weyl operator is again a Weyl operator,
S, T ∈ W =⇒ S T ∈ W.
Thus integral powers of Weyl operators are Weyl operators (Corollary 5.5),
    T ∈ W =⇒ T^n ∈ W for every n ∈ N₀.
5.2 Essential Spectrum and Spectral Picture 141
(Set T = −K in Theorem 5.6 and, for the converse, see Remark 5.3(c).) Also
note that, if T is normal, then N (T ) = N (T ∗ T ) = N (T T ∗ ) = N (T ∗ ) by
Lemma 1.4, and so every normal Fredholm operator is Weyl,
T normal in F =⇒ T ∈ W.
T ∈ G[H] =⇒ T ∈ W.
T ∈W =⇒ T + K ∈ W.
Proof. S ∈ B[H] is said to have a bounded inverse on its range if there exists S⁻¹ ∈ B[R(S), H] such that S⁻¹S = I in B[H] and SS⁻¹ = I in B[R(S)].

Claim. S is left invertible if and only if it has a bounded inverse on its range.

Proof. If S ∈ B[H] is such that S_ℓ S = I, then S_ℓ|_{R(S)} ∈ B[R(S), H] is such that S_ℓ|_{R(S)} S = I ∈ B[H]. Consider the operator S S_ℓ|_{R(S)} in B[R(S)], and take any y in R(S). Thus y = Sx for some x in H, and so S_ℓ|_{R(S)} y = S_ℓ Sx =
Let T be an operator in B[H]. The left spectrum σℓ(T) and the right spec-
trum σr(T) of T ∈ B[H] are the sets
σℓ(T) = {λ ∈ C : λI − T is not left invertible},
σr(T) = {λ ∈ C : λI − T is not right invertible},
so that the spectrum σ(T) of T ∈ B[H] is given by
σ(T) = {λ ∈ C : λI − T is not invertible} = σℓ(T) ∪ σr(T).
By Corollary 5.9 and Theorem 2.6 (see also the diagram of Section 2.2),
σℓ(T) = σAP(T) and σr(T) = σAP(T*)*,
and so
σℓ(T) and σr(T) are closed subsets of C.
Indeed, σAP(T) is closed (cf. Theorem 2.5). Note that σP1(T) and σR1(T)
are open in C (cf. Remarks following Theorems 2.5 and 2.6), and therefore
σAP(T) = σ(T)\σR1(T) = σ(T) ∩ (C\σR1(T)) and σAP(T*)* = σ(T)\σP1(T) =
σ(T) ∩ (C\σP1(T)) are closed (and bounded) in C, since σ(T) is closed (and
bounded); and so is the intersection
σℓ(T) ∩ σr(T) = σ(T)\(σP1(T) ∪ σR1(T)) = σAP(T) ∩ σAP(T*)*.
The fact that each σP1(T) and σR1(T) is an open subset of C, and so each
σℓ(T) and σr(T) is a closed subset of C, is a consequence of the property
that T lies in the unital Banach algebra B[H] (so that σ(T) is a compact
set), which actually happens in any unital Banach algebra. Also recall that
(Sℓ S)* = S* Sℓ* and (S Sr)* = Sr* S*, so that Sℓ is a left inverse of S if and
only if Sℓ* is a right inverse of S*, and Sr is a right inverse of S if and only if
Sr* is a left inverse of S*. Therefore (as we can also infer from Corollary 5.9),
σℓ(T) = σr(T*)* and σr(T) = σℓ(T*)*
for every T in B[H]. Recall that the origin of the linear space B[H]/B∞[H] is
[0] = B∞[H] ∈ B[H]/B∞[H], the kernel of the natural map π is
N(π) = {T ∈ B[H]: π(T) = [0]} = B∞[H] ⊆ B[H],
and
‖[T]‖ = inf {‖T + K‖ : K ∈ B∞[H]} ≤ ‖T‖,
so that π is a contraction.
By definition, (a) and (b) are equivalent, and (b) implies (c) trivially. If (c)
holds, then there exist B ∈ [A] = π(A), S ∈ [T ] = π(T ), and J ∈ [I] = π(I)
such that BS = J. Therefore, π(A)π(T ) = [A] [T ] = [B] [S] = π(B)π(S) =
π(BS) = π(J) = [J] = [I] = π(I), so that (c) implies (d). Now observe that
X ∈ π(A)π(T ) = π(AT ) if and only if X = AT + K1 for some K1 ∈ B∞[H],
and X ∈ π(I) if and only if X = I + K2 for some K2 ∈ B∞[H]. Therefore, if
π(A)π(T ) = π(I), then X = AT + K1 if and only if X = I + K2 , and hence
AT = I + K with K = K2 − K1 ∈ B∞[H]. Thus (d) implies (b). Finally, (d)
and (e) are equivalent by the definition of left invertibility.
Outcome: T ∈ F if and only if π(T ) is left invertible in B[H]/B∞[H]. Dually,
T ∈ Fr if and only if π(T ) is right invertible in B[H]/B∞[H].
σe (T ) = σ(π(T )),
Proof. The expressions for σℓe(T) and σre(T) in Corollary 5.11 lead to the
claimed identity, since σe(T) = σℓe(T) ∪ σre(T) and F = Fℓ ∩ Fr.
Thus the essential spectrum is the set of all scalars λ for which λI − T is
not Fredholm. The following equivalent version is also frequently used [7].
so that
σℓe(T) ∩ σre(T) ⊆ σ(T)\(σP1(T) ∪ σR1(T)),
which is an inclusion of closed sets, and therefore
σP1(T) ∪ σR1(T) ⊆ σ(T)\(σℓe(T) ∩ σre(T)) ⊆ C\(σℓe(T) ∩ σre(T)),
σe(T) ≠ ∅ ⇐⇒ dim H = ∞.
Actually, since σe (T ) was defined as the spectrum of π(T ) in the Calkin alge-
bra B[H]/B∞[H], which is a unital Banach algebra if and only if H is infinite
dimensional (in a finite-dimensional space all operators are compact), and
since spectra are nonempty in a unital Banach algebra, it follows that
dim H = ∞ =⇒ σe(T) ≠ ∅.
dim H < ∞ =⇒ σe (T ) = ∅.
σe (T + K) = σe (T ).
(a) σk (T ) = σ−k (T ∗ )∗ .
(b) If k ∈ Z\{0}, then
σk(T) = {λ ∈ C : λI − T ∈ F and ind(λI − T) = k} ⊆ σPF(T) if 0 < k,
and σk(T) ⊆ σPF(T*)* if k < 0,
so that σk(T) ⊆ σ(T), and
⋃_{k∈Z\{0}} σk(T) = {λ ∈ C : λI − T ∈ F and ind(λI − T) ≠ 0}
= {λ ∈ C : λI − T ∈ F\W}
⊆ {λ ∈ C : λI − T ∈ F} = C\σe(T).
(c) If k = ±∞, then
σ+∞(T) = {λ ∈ C : λI − T ∈ SF and ind(λI − T) = +∞}
= {λ ∈ C : λI − T ∈ Fr\Fℓ} = σℓe(T)\σre(T) ⊆ C\σre(T),
σ−∞(T) = {λ ∈ C : λI − T ∈ SF and ind(λI − T) = −∞}
= {λ ∈ C : λI − T ∈ Fℓ\Fr} = σre(T)\σℓe(T) ⊆ C\σℓe(T),
which are all open subsets of C. Moreover,
σ+∞(T) ⊆ σe(T) and σ−∞(T) ⊆ σe(T).
(d) Furthermore, if Z̄+\{0} = N ∪ {+∞} denotes the set of all extended
positive integers, and Z̄−\{0} = −(N ∪ {+∞}) denotes the set of all extended
negative integers, then
σP1(T) ⊆ ⋃_{k∈Z̄+\{0}} σk(T) = {λ ∈ C : λI − T ∈ SF and 0 < ind(λI − T)},
σR1(T) ⊆ ⋃_{k∈Z̄−\{0}} σk(T) = {λ ∈ C : λI − T ∈ SF and ind(λI − T) < 0},
so that
σP1(T) ∪ σR1(T) ⊆ ⋃_{k∈Z̄\{0}} σk(T) = {λ ∈ C : λI − T ∈ SF and ind(λI − T) ≠ 0}
⊆ {λ ∈ C : λI − T ∈ SF} = C\(σℓe(T) ∩ σre(T)),
σ+∞(T) and σ−∞(T) are open in C. Moreover, the expressions for σ+∞(T) and σ−∞(T)
and Corollary 5.12 ensure that these sets are subsets of σe (T ).
(d) Recall from Section 1.3 that M− = H if and only if M⊥ = {0}. Hence
R(λI − T )− = H if and only if N (λI − T ∗ ) = {0} by Lemma 1.4. In other
words, R(λI − T )− = H if and only if dim N (λI − T ∗ ) = 0. Thus, according
to the diagram of Section 2.2, we get
σP1(T) = {λ ∈ C : R(λI − T) is closed, 0 = dim N(λI − T*) < dim N(λI − T)}
= {λ ∈ C : R(λI − T) is closed, 0 = dim N(λI − T*), ind(λI − T) > 0}.
[Diagram of the spectral picture: the essential spectrum σe(T) = σℓe(T) ∪ σre(T),
split into the overlap σℓe(T) ∩ σre(T) and the pseudoholes
σ−∞(T) = σre(T)\σℓe(T) and σ+∞(T) = σℓe(T)\σre(T), together with the holes σk(T).]
σ0 (T ) ⊆ σPF (T ) ∩ σPF (T ∗ )∗ ⊆ σP (T ) ∩ σP (T ∗ )∗ .
Thus
σ0 (T ) = σ0 (T ∗ )∗ .
{0} = σ0(T + K) ≠ σ0(T) = {1}.
This will be generalized in the proof of Theorem 5.24, where it is shown that
σ0 (T ) is never invariant under compact perturbation.
where
σe(T) ∩ ⋃_{k∈Z} σk(T) = ∅,
Recall that the collection {σk (T )}k∈Z \{0} consists of pairwise disjoint open
sets (cf. Theorem 5.16), which are subsets of σ(T )\σe (T ). These are the holes
of the essential spectrum σe (T ). Moreover, σ±∞ (T ) also are open sets (cf.
Theorem 5.16), which are subsets of σe (T ). These are the pseudoholes of σe (T )
(they are holes of σre(T) and σℓe(T), respectively). Thus the spectral picture of T consists
of the essential spectrum σe (T ), the holes σk (T ) and pseudoholes σ±∞ (T ) (to
each is associated a nonzero index k in Z \{0}), and the set σ0 (T ). It is worth
noticing that any spectral picture can be attained [25] by an operator in B[H]
(see also Proposition 5.G).
precisely the set of all isolated points of σ(T ) that lie in σ0 (T ). Let Eλ ∈ B[H]
be the Riesz idempotent associated with an isolated point λ of the spectrum
σ(T ) of an operator T ∈ B[H].
Recall that σacc(T) is the set of all accumulation points of σ(T). Let π0(T) denote the set of all
isolated points of σ(T ) that lie in σ0 (T ); that is,
π0 (T ) = σiso (T ) ∩ σ0 (T ).
τ0 (T ) = σ0 (T )\π0 (T ) ⊆ σP4 (T ),
σ0 (T ) = τ0 (T ) ∪ π0 (T ).
Let π00 (T ) denote the set of all isolated eigenvalues of finite multiplicity,
Corollary 5.22. π0(T) = {λ ∈ π00(T): R(λI − T) is closed}.
Proof. If λ ∈ π0 (T ), then λ ∈ π00 (T ) and R(λI−T ) is closed (since λ ∈ σ0 (T )).
Conversely, if R(λI − T ) is closed and λ ∈ π00 (T ), then R(λI − T ) is closed
and dim N (λI − T ) < ∞ (since λ ∈ σPF (T )), which means by Theorem 5.19
that λ ∈ σ0 (T ) (since λ ∈ σiso (T )). Thus λ ∈ σiso (T ) ∩ σ0 (T ) = π0 (T ).
which is the largest part of σ(T ) that remains unchanged under compact
perturbations. That is, σw (T ) is the largest part of σ(T ) such that
Take any x ∈ N(T) and consider its Fourier series expansion with respect to
the orthonormal basis {e1, . . . , en}, namely, x = Σ_{i=1}^{n} ⟨x; e_i⟩ e_i. Observe that
‖x‖² = Σ_{i=1}^{n} |⟨x; e_i⟩|² = ‖Kx‖² for every x ∈ N(T).
N (T + K) = {0}.
σ0(T) = σ(T)\(σe(T) ∪ ⋃_{k∈Z\{0}} σk(T)).
Take an arbitrary λ ∈ σ(T). If λ lies in σe(T) ∪ ⋃_{k∈Z\{0}} σk(T), then λ lies in
σe(T + K) ∪ ⋃_{k∈Z\{0}} σk(T + K) for every K ∈ B∞[H], since
σe(T + K) = σe(T) and σk(T + K) = σk(T)
for every K ∈ B∞[H] (cf. Remarks 5.15(c) and 5.17(c)). On the other hand,
if λ ∈ σ0 (T ), then the preceding claim ensures that there exists K ∈ B∞[H]
for which λ ∈ σ(T + K). Therefore, as the largest part of σ(T ) that remains
invariant under compact perturbation is σw (T ),
σw(T) = σe(T) ∪ ⋃_{k∈Z\{0}} σk(T) = σ(T)\σ0(T).
σe (T ) ⊆ σw (T ) ⊆ σ(T ).
(i.e., if and only if the essential spectrum has no holes). Since σw (T ) and
σ0 (T ) are both subsets of σ(T ), it follows by Theorem 5.24 that σ0 (T ) is the
complement of σw (T ) in σ(T ),
σ0 (T ) = σ(T )\σw (T ),
σ(T ) = σw (T ) ∪ σ0 (T ) and σw (T ) ∩ σ0 (T ) = ∅.
Thus
σw (T ) = σ(T ) ⇐⇒ σ0 (T ) = ∅,
and so, by Theorem 5.24 again,
σe(T) = σw(T) = σ(T) ⇐⇒ ⋃_{k∈Z} σk(T) = ∅.
Then the Weyl spectrum σw (T ) is the set of all scalars λ for which λI − T
is not a Weyl operator (i.e., for which λI − T is not a Fredholm operator of
index zero). Since an operator lies in W together with its adjoint, it follows
by Corollary 5.25 that λ ∈ σw (T ) if and only if λ ∈ σw (T ∗ ):
σw (T ) = σw (T ∗ )∗ .
Theorem 5.24 also gives us another characterization of the set W of all Weyl
operators from B[H] in terms of the set σ0 (T ) = σ(T )\σw (T ).
Corollary 5.26.
W = {T ∈ B[H]: 0 ∈ ρ(T) ∪ σ0(T)} = {T ∈ F : 0 ∈ ρ(T) ∪ σ0(T)}.
of the Weyl spectrum that precedes Lemma 5.23), and hence 0 ∈ ρ(T ) ∪ σ0 (T )
by Theorem 5.24. The converse is (almost) trivial: if 0 ∈ ρ(T ) ∪ σ0 (T ), then
either 0 ∈ ρ(T ), so that T ∈ G[H] ⊂ W by Remark 5.7(b); or 0 ∈ σ0 (T ), which
means that 0 ∈ σ(T ) and T ∈ W by the very definition of σ0 (T ). This proves
the first identity, which implies the second one since W ⊆ F .
Remark 5.27. (a) Finite Dimensional. Take any operator T ∈ B[H]. Since
σe (T ) ⊆ σw (T ), it follows by Remark 5.15(a) that
σw (T ) = ∅ =⇒ dim H < ∞
dim H < ∞ =⇒ σw (T ) = ∅.
Thus
σw(T) ≠ ∅ ⇐⇒ dim H = ∞,
and so, either by Theorem 5.24 or by item (a) and Remark 5.15(a),
dim H < ∞ =⇒ σe (T ) = σw (T ) = ∅.
Since π0(K) = σiso(K) ∩ σ0(K) ⊆ σiso(K) ∩ σPF(K) = π00(K) ⊆ σPF(K) ⊆ σP(K),
Take any T ∈ B[H] and consider the following expressions (cf. Theorem
5.24 or Corollary 5.25, and the very definitions of σ0(T), π0(T), and π00(T)).
σ0 (T ) = σ(T )\σw (T ), π0 (T ) = σiso (T )∩σ0 (T ), π00 (T ) = σiso (T )∩σPF (T ).
The equivalent assertions in the next corollary define an important class of
operators. An operator for which any of those equivalent assertions holds is
said to satisfy Weyl’s Theorem. This will be discussed later in Section 5.5.
Proof. By Theorem 5.24, σ(T)\σ0(T) = σw(T), and so (a) implies (b). If (b)
holds, then σ0(T) = σ(T)\σw(T) = σ(T)\(σ(T)\π00(T)) = π00(T), because
π00 (T ) ⊆ σ(T ), and so (b) implies (a). Moreover, if σ0 (T ) = σiso (T ) (or, if
their complements in σ(T ) coincide), then (a) holds because σ0 (T ) ⊆ σPF (T ).
Conversely, if (a) holds, then π0 (T ) = σiso (T ) ∩ σ0 (T ) = σiso (T ) ∩ π00 (T ) =
π00 (T ) because π00 (T ) ⊆ σiso (T ).
The claimed result holds trivially for k = 0. Suppose it holds for some k ≥ 0.
Take an arbitrary x ∈ N (T n0 +k+2 ) so that T n0 +k+1 (T x) = T n0 +k+2 x = 0.
Thus T x ∈ N (T n0 +k+1 ) = N (T n0 +k ), and so T n0 +k+1 x = T n0 +k (T x) =
0, which implies that x ∈ N (T n0 +k+1 ). Hence, N (T n0 +k+2 ) ⊆ N (T n0 +k+1 ).
But N (T n0 +k+1 ) ⊆ N (T n0 +k+2 ) since {N (T n )} is nondecreasing. Therefore
N (T n0 +k+2 ) = N (T n0 +k+1 ), and the claimed result holds for k + 1 whenever
it holds for k, which completes the proof of (a) by induction.
(b) Rewrite the statements in (b) as follows.
The claimed result holds trivially for k = 0. Suppose it holds for some integer
k ≥ 0. Take an arbitrary y ∈ R(T n0 +k+1 ) so that y = T n0 +k+1 x = T (T n0 +k x)
for some x ∈ H, and hence y = T u for some u ∈ R(T n0 +k ). If R(T n0 +k ) =
R(T n0 +k+1 ), then u ∈ R(T n0 +k+1 ), and so y = T (T n0 +k+1 v) for some v ∈ H.
Thus y ∈ R(T n0 +k+2 ). Therefore, R(T n0 +k+1 ) ⊆ R(T n0 +k+2 ). Since the se-
quence {R(T n )} is nonincreasing, it follows that R(T n0 +k+2 ) ⊆ R(T n0 +k+1 ).
Hence R(T n0 +k+2 ) = R(T n0 +k+1 ). Thus the claimed result holds for k + 1
whenever it holds for k, which completes the proof of (b) by induction.
and the descent of T is the least (extended) nonnegative integer such that
R(T n+1 ) = R(T n ), that is
dsc(T) = min {n ∈ N0 : R(T n+1) = R(T n)}.
Note that the ascent of T is null if and only if T is injective, and the descent of
T is null if and only if T is surjective. Indeed, since N (I) = {0} and R(I) = H,
asc(T ) = 0 ⇐⇒ N (T ) = {0},
dsc(T ) = 0 ⇐⇒ R(T ) = H.
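In finite dimension the kernel and range chains can be tracked with numerical ranks, since the kernels N(T^n) are nested and nondecreasing while the ranges R(T^n) are nested and nonincreasing, so equality of dimensions already gives equality of the subspaces. The helper functions below are an illustrative sketch (not from the text); the classical example is a nilpotent Jordan block, whose ascent and descent both equal its size:

```python
import numpy as np

def nullity(A):
    return A.shape[1] - np.linalg.matrix_rank(A)

def ascent(T, max_n=20):
    """Least n with N(T^(n+1)) = N(T^n), detected via nullities
    (valid because the kernels are nested)."""
    for n in range(max_n + 1):
        if nullity(np.linalg.matrix_power(T, n + 1)) == nullity(np.linalg.matrix_power(T, n)):
            return n

def descent(T, max_n=20):
    """Least n with R(T^(n+1)) = R(T^n), detected via ranks
    (valid because the ranges are nested)."""
    for n in range(max_n + 1):
        if np.linalg.matrix_rank(np.linalg.matrix_power(T, n + 1)) == np.linalg.matrix_rank(np.linalg.matrix_power(T, n)):
            return n

# 3x3 nilpotent Jordan block: neither injective nor surjective,
# and the chains stabilize exactly at the block size.
J = np.diag(np.ones(2), k=1)
assert ascent(J) == 3 and descent(J) == 3

# An invertible matrix is injective and surjective: asc = dsc = 0.
assert ascent(2.0 * np.eye(3)) == 0 and descent(2.0 * np.eye(3)) == 0
```

This also illustrates the equivalences displayed above: asc(T) = 0 exactly when N(T) = {0}, and dsc(T) = 0 exactly when R(T) = H.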
Again, the notion of ascent and descent holds for every linear transformation
of a linear space into itself, and so does the next result.
Proof. (a) Suppose dsc(T) = 0 (i.e., suppose R(T) = H). If asc(T) ≠ 0 (i.e., if
N(T) ≠ {0}), then take 0 ≠ x1 ∈ N(T) ∩ R(T) and x2, x3 in R(T) = H such
that x1 = T x2 and x2 = T x3, and so x1 = T 2 x3. Continuing this way, we can
construct a sequence {xn}n≥1 of vectors in H = R(T) such that xn = T xn+1
and 0 ≠ x1 = T n xn+1 lies in N(T), and so T n+1 xn+1 = 0. This implies that
xn+1 ∈ N(T n+1)\N(T n) for each n ≥ 1, and hence asc(T) = ∞ by Lemma
5.29. Summing up: If dsc(T) = 0, then asc(T) ≠ 0 implies asc(T) = ∞.
(b) Set m = dsc(T ), so that R(T m ) = R(T m+1 ), and set S = T |R(T m ) . It
is readily verified by using Proposition 1.E that R(T m ) is T -invariant, and
so S ∈ B[R(T m)]. Note that R(S) = S(R(T m )) = R(S T m ) = R(T m+1 ) =
R(T m ). Thus S is surjective, which means that dsc (S) = 0. Moreover, since
asc(S) < ∞ (because asc(T ) < ∞), it follows by (a) that asc(S) = 0. That is,
N (S) = {0}. Take x ∈ N (T m+1 ) and set y = T m x in R(T m ), so that Sy =
T m+1 x = 0. Thus y = 0, and hence x ∈ N (T m ). Then N (T m+1 ) ⊆ N (T m ),
and so N (T m+1 ) = N (T m ) since {N (T n )} is nondecreasing. This implies that
asc(T) ≤ m by Lemma 5.29. To prove the reverse inequality, suppose m ≠ 0
(otherwise apply (a)) and take z in R(T m−1)\R(T m) so that z = T m−1 u and
T z = T(T m−1 u) = T m u lies in R(T m), for some u in H. Since T m(R(T m)) =
R(T 2m) = R(T m), we get T z = T m v for some v in R(T m). Now note that
T m(u − v) = T z − T z = 0 and T m−1(u − v) = z − T m−1 v ≠ 0 (reason:
T m−1 v ∈ R(T 2m−1) = R(T m) since v ∈ R(T m), and z ∉ R(T m)). Therefore,
(u − v) ∈ N(T m)\N(T m−1), and so asc(T) ≥ m. Outcome: If dsc(T) = m
and asc(T ) < ∞, then asc(T ) = m.
If dsc (T ) = ∞, then R(T n+1 ) ⊂ R(T n ) so that R(T n )⊥ ⊂ R(T n+1 )⊥ or,
equivalently, N (T ∗n ) ⊂ N (T ∗(n+1) ), and therefore asc(T ∗ ) = ∞. Dually, if
dsc(T*) = ∞, then asc(T) = ∞. That is,
dsc(T) = ∞ =⇒ asc(T*) = ∞ and dsc(T*) = ∞ =⇒ asc(T) = ∞.
A pair of subspaces (i.e., closed linear manifolds) of a normed space that are
algebraic complements of each other are called complementary subspaces.
G[H] ⊂ B ⊂ F ,
and it is readily verified that the inclusions are, in fact, proper. Therefore, if
T ∈ B and T ∉ G[H], then asc(T) = dsc(T) = m for some integer m ≥ 1, and
so N (T m ) and R(T m ) are complementary linear manifolds of H by Lemma
5.32. If T ∈ B, then T ∈ F , so that T m ∈ F by Corollary 5.5, and hence R(T m )
is closed by Corollary 5.2. Recall that N (T m ) is closed since T m is bounded.
Thus R(T m) and N(T m) are subspaces of H.
(b)⇒(c). If (b) holds, then, again, T ∈ F, so that T m ∈ F by Corollary 5.5,
and hence N(T m) and N(T m∗) are finite dimensional, and R(T m) is closed
by Corollary 5.2. Moreover, suppose there exists m ∈ N such that
H = R(T m) ⊕ N(T m)
(where the direct sum is not necessarily orthogonal and equality in fact means
equivalence; that is, up to a topological isomorphism). Since both R(T m ) and
N (T m ) are T m -invariant, it follows that
T m = T m |R(T m ) ⊕ T m |N (T m ) .
0 ∈ σiso (T ).
Outcome: 0 ∈ π0 (T ) = σiso (T ) ∩ σ0 (T ).
(b) Take an arbitrary m ∈ N. If 0 ∈ σiso(T), then 0 ∈ σiso(T m). Indeed, if 0 is
an isolated point of σ(T), then 0 ∈ σ(T)^m = σ(T m), and hence 0 is an isolated
point of σ(T)^m = σ(T m). Therefore,
σ(T m ) = {0} ∪ Δ,
where Δ is a compact subset of C that does not contain 0, which means that
{0} and Δ are complementary spectral sets for the operator T m (i.e., they
form a disjoint partition for σ(T m )). Let E0 = E{0} ∈ B[H] be the Riesz idem-
potent associated with 0. Corollary 4.22 ensures that
N (T m ) ⊆ R(E0 ).
Corollary 5.35.
B = {T ∈ F : 0 ∈ ρ(T) ∪ σiso(T)} = {T ∈ F : 0 ∈ ρ(T) ∪ π0(T)}.
(b) Complements. Recall that σ(T )\σiso (T ) = σacc (T ), and also that
σ(T)\π0(T) = σ(T)\(σiso(T) ∩ σ0(T))
= (σ(T)\σiso(T)) ∪ (σ(T)\σ0(T)) = σacc(T) ∪ σw(T)
(compare with Corollary 5.25). Since an operator lies in B together with its
adjoint, it follows at once that λ ∈ σb (T ) if and only if λ ∈ σb (T ∗ ):
σb (T ) = σb (T ∗ )∗ .
σb (T ) ⊆ σ(T ).
σe (T ) ⊆ σw (T ) ⊆ σb (T ) ⊆ σ(T ).
Equivalently, G[H] ⊂ B ⊂ W ⊂ F .
σb (T ) = σe (T ) ∪ σacc (T ) = σw (T ) ∪ σacc (T ).
Proof. Recall that σ(T ) = {λ} − σ(λI − T ) by Theorem 2.7 (Spectral Map-
ping), and so 0 ∈ σ(λI − T )\σiso (λI − T ) if and only if λ ∈ σ(T )\σiso (T ).
Thus, by the definition of σb (T ), and using Corollaries 5.12 and 5.35,
σb(T) = {λ ∈ C : λI − T ∉ F or 0 ∉ ρ(λI − T) ∪ σiso(λI − T)}
= {λ ∈ C : λ ∈ σe(T) or 0 ∈ σ(λI − T)\σiso(λI − T)}
= {λ ∈ C : λ ∈ σe(T) or λ ∈ σ(T)\σiso(T)}
= σe(T) ∪ (σ(T)\σiso(T)) = σe(T) ∪ σacc(T).
σb (T ) = σ(T )\π0 (T ).
Proof. By the Spectral Picture (Corollary 5.18) and Corollary 5.37 we get
σ(T)\σb(T) = σ(T)\(σe(T) ∪ σacc(T)) = (σ(T)\σe(T)) ∩ (σ(T)\σacc(T))
= (⋃_{k∈Z} σk(T)) ∩ σiso(T) = σ0(T) ∩ σiso(T) = π0(T),
σb (T ) ∪ π0 (T ) = σ(T ) and σb (T ) ∩ π0 (T ) = ∅,
Proof. The identities involving σb (T ) are readily verified. In fact, recall that
π0 (T ) = σiso (T ) ∩ σ0 (T ) ⊆ σiso (T ) ∩ σPF (T ) = π00 (T ) ⊆ σiso (T ) ⊆ σ(T ).
Since σb (T ) ∩ π0 (T ) = ∅ and σ(T )\π0 (T ) = σb (T ) (Corollary 5.38), we get
Moreover,
using Theorem 5.24 again, σw(T) = σe(T) ∪ κ(T), where κ(T) =
⋃_{k∈Z\{0}} σk(T) is the collection of all holes of σe(T), which is a subset of σ(T)
that is open in C (union of open sets — Theorem 5.16). Thus
σiso(T)\σw(T) = σiso(T)\(σe(T) ∪ κ(T)) = (σiso(T)\σe(T)) ∩ (σiso(T)\κ(T)).
Remark 5.40. (a) Finite Dimensional. Take any T ∈ B[H]. Since every op-
erator on a finite-dimensional space is Browder (cf. Remark 5.36(a)), it follows
by the very definition of the Browder spectrum σb (T ) that
dim H < ∞ =⇒ σb (T ) = ∅.
dim H = ∞ =⇒ σb(T) ≠ ∅.
Thus
σb(T) ≠ ∅ ⇐⇒ dim H = ∞,
and σb (T ) = σ(T )\π0 (T ) (cf. Corollary 5.38) is a compact set in C because
σ(T ) is compact and π0 (T ) ⊆ σiso (T ). Moreover, by Remark 5.27(b),
dim H < ∞ =⇒ σe (T ) = σw (T ) = σb (T ) = ∅.
(b) Compact Operators. Since σacc (K) ⊆ {0} if K is compact (cf. Corollary
2.20), it follows by Remark 5.27(d), Corollary 5.37, and item (a) that
Proof. Recall that {σw (T ), σ0 (T )} forms a partition of σ(T ), and also that
π0 (T ) ⊆ π00 (T ) ⊆ σ(T ). Thus (a) and (b) are equivalent and, if (a) holds,
so that (a) implies (c); and (c) implies (d) because π00 (T ) ⊆ σiso (T ):
σb (T + K) = σb (T ).
F^A = F ∩ A =⇒ σe^A(T) = σe(T),
where
σe^A(T) = {λ ∈ σ^A(T): λI − T ∈ A\F^A}
stands for the essential spectrum of T ∈ A with respect to A, with σ^A(T)
denoting the spectrum of T ∈ A with respect to the unital complex Banach
algebra A. Similarly, let W^A denote the class of Weyl operators in A,
W^A = {T ∈ F^A : ind(T) = 0},
and let
σw^A(T) = {λ ∈ σ^A(T): λI − T ∈ A\W^A}
stand for the Weyl spectrum of T ∈ A with respect to A. If T ∈ A, set
σ0^A(T) = σ^A(T)\σw^A(T) = {λ ∈ σ^A(T): λI − T ∈ W^A}.
Moreover, let B^A stand for the class of Browder operators in A,
B^A = {T ∈ F^A : 0 ∈ ρ^A(T) ∪ σiso^A(T)}
according to Corollary 5.35, where σiso^A(T) denotes the set of all isolated points
of σ^A(T), and ρ^A(T) = C\σ^A(T) is the resolvent set of T ∈ A with respect to
the unital complex Banach algebra A. Let
σb^A(T) = {λ ∈ σ^A(T): λI − T ∈ A\B^A}
be the Browder spectrum of T ∈ A with respect to A.
A = {T}′ and 0 ∈ σ0^A(T) =⇒ 0 ∈ σiso^A(T).
A(T + K) = (T + K)A = I.
Proof. Take any operator T ∈ B[H], and let A be a unital closed subalgebra of
the unital complex Banach algebra B[H] (thus a unital complex Banach
algebra itself). Let σ^A(T), σb^A(T), and σw^A(T) be the spectrum, the Browder
spectrum, and the Weyl spectrum of T with respect to A, respectively.
Claim 1. If A = {T}′, then σ^A(T) = σ(T).
Proof. Suppose A = {T}′. Trivially, T ∈ A. Let P = P(T) be the collection of
all polynomials p(T ) in T with complex coefficients, which is a unital commu-
tative subalgebra of B[H]. Consider the collection T of all unital commutative
subalgebras of B[H] containing T . Note that every element of T is included
in A, and also that T is partially ordered (in the inclusion ordering) and
nonempty (e.g., P ∈ T ). Moreover, every chain in T has an upper bound in
T (the union of all subalgebras in a given chain of subalgebras in T is again
a subalgebra in T ). Thus Zorn’s Lemma says that T has a maximal element,
say Â = Â(T) ∈ T. Hence there is a maximal commutative subalgebra Â
of B[H] containing T (which is unital and closed — see the paragraph that
precedes Proposition 2.Q). Therefore,
P ⊆ Â ⊆ A.
Let σ^Â(T) denote the spectrum of T with respect to Â. If the preceding
inclusions hold true, and since Â is a maximal commutative subalgebra of A,
then Proposition 2.Q(a,b) ensure that the following relations also hold true:
σ(T) ⊆ σ^A(T) ⊆ σ^Â(T) = σ(T).
Claim 2. If A = {T}′, then σb^A(T) = σb(T).
Proof. Claim 1 and Lemma 5.42.
Claim 3. If A = {T}′, then σb^A(T) = σw^A(T).
Proof. Recall that λ ∈ σ0^A(T) if and only if λ ∈ σ^A(T) and λI − T ∈ W^A. But
λ ∈ σ^A(T) if and only if 0 ∈ σ^A(λI − T) by the Spectral Mapping Theorem
(Theorem 2.7). Hence,
λ ∈ σ0^A(T) ⇐⇒ 0 ∈ σ0^A(λI − T).
Moreover, since A = {T}′,
0 ∈ σ0^A(λI − T) =⇒ 0 ∈ σiso^A(λI − T).
However, applying the Spectral Mapping Theorem again,
0 ∈ σiso^A(λI − T) ⇐⇒ λ ∈ σiso^A(T).
Therefore,
σ0^A(T) ⊆ σiso^A(T),
which means that (cf. Corollary 5.41)
σw^A(T) = σb^A(T).
Claim 4. σw^A(T) = ⋂_{K∈B∞[H]∩A} σ(T + K).
Proof. This is the very definition of the Weyl spectrum of T with respect to
A. That is, σw^A(T) = ⋂_{K∈B∞^A[H]} σ(T + K), where B∞^A[H] = B∞[H] ∩ A.
σ0 (T ) = σiso (T ) ∩ σPF (T ),
σ0 (T ) ⊆ σiso (T ) ∩ σPF (T ),
σ0 (T ) = π0 (T ), or σ0 (T ) ⊆ σiso (T ), or
σacc (T ) ⊆ σw (T ), or σw (T ) = σb (T ).
In other words, Weyl’s Theorem holds if and only if Browder’s Theorem and
any of the equivalent assertions of Remark 5.40(c) hold true.
(d) Trivial Cases. By Remark 5.27(a), if dim H < ∞, then σ0 (T ) = σ(T ).
Thus, according to Corollary 2.19, dim H < ∞ =⇒ σ0 (T ) = π00 (T ) = σ(T ):
σ0 (T ) = σiso (T ) ∩ σPF (T ).
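The trivial finite-dimensional case can be checked directly: the spectrum of a matrix is a finite set, so every spectral point is an isolated eigenvalue of finite multiplicity, and σ(T) = σ0(T) = π00(T). A short sketch (the matrix below is an arbitrary illustration, not from the text):

```python
import numpy as np

# dim H < infinity: sigma_w(T) = sigma_e(T) = empty, so
# sigma_0(T) = sigma(T), and every spectral point is an isolated
# eigenvalue of finite multiplicity, i.e. sigma(T) = pi_00(T).
T = np.array([[2.0, 1.0, 0.0],
              [0.0, 2.0, 0.0],
              [0.0, 0.0, 5.0]])
# The spectrum is the finite set of eigenvalues {2, 5}
# (2 with multiplicity two, 5 with multiplicity one).
spectrum = sorted(set(np.round(np.linalg.eigvals(T).real, 8)))
assert spectrum == [2.0, 5.0]
```

Since the set is finite, each point is isolated and of finite multiplicity, which is exactly the statement σ0(T) = σiso(T) ∩ σPF(T) in this trivial case.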
For the Weyl spectra, only inclusion is ensured. In fact, the Weyl spectrum of
a direct sum is included in the union of the Weyl spectra of the summands,
σw (T ⊕ S) ⊆ σw (T ) ∪ σw (S),
but equality does not hold in general [53]. However, it holds if σw (T ) ∩ σw (S)
has empty interior [71],
(σw(T) ∩ σw(S))◦ = ∅ =⇒ σw(T ⊕ S) = σw(T) ∪ σw(S).
In general, Weyl’s Theorem does not transfer from T and S to their direct sum
T ⊕ S. The above identity involving the Weyl spectrum of a direct sum, viz.,
σw (T ⊕ S) = σw (T ) ∪ σw (S), when it holds, plays an important role in estab-
lishing sufficient conditions for a direct sum to satisfy Weyl’s Theorem. This
was recently investigated in [70] and [38]. As for the problem of transferring
Browder’s Theorem from T and S to their direct sum T ⊕ S, the following
necessary and sufficient condition was proved in [53, Theorem 4].
σw (T ⊕ S) = σw (T ) ∪ σw (S).
For the Weyl spectrum it was proved in [57] that the inclusion
σw(T ⊗ S) ⊆ σw(T) · σ(S) ∪ σ(T) · σw(S)
holds for every T and S.
Again, Weyl’s Theorem does not necessarily transfer from T and S to their
tensor product T ⊗ S. The preceding identity involving the Weyl spectrum of
a tensor product, namely, σw (T ⊗ S) = σw (T ) · σ(S) ∪ σ(T ) · σw (S), when
it holds, plays a crucial role in establishing sufficient conditions for a tensor
product to satisfy Weyl’s Theorem, as was recently investigated in [84], [67],
and [68]. As for the problem of transferring Browder’s Theorem from T and S
to their tensor product T ⊗ S, the following necessary and sufficient condition
was proved in [67, Corollary 6].
σw(T ⊗ S) = σw(T) · σ(S) ∪ σ(T) · σw(S).
Proposition 5.A. Fℓ and Fr are open sets in B[H], and so are SF and F.
Proposition 5.F. Let D denote the open unit disk centered at the origin of
the complex plane, and let T = ∂D be the unit circle. If S+ ∈ B[ℓ²+] is a unilat-
eral shift on ℓ²+ = ℓ²+(C), then
σℓe(S+) = σre(S+) = σe(S+) = σC(S+) = ∂σ(S+) = T,
and ind(λI − S+) = −1 if |λ| < 1 (i.e., if λ ∈ D = σ(S+)\∂σ(S+)).
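The index count of Proposition 5.F can be mimicked by a finite section of the shift, viewed as a map from C^n to C^(n+1) (this rectangular truncation is an illustrative convention, not the operator S+ itself): the kernel is trivial while the range has codimension one, so the kernel/cokernel count gives −1, matching ind(λI − S+) = −1 at λ = 0.

```python
import numpy as np

n = 6
# Finite section of the unilateral shift as a map C^n -> C^(n+1):
# (x_1, ..., x_n) |-> (0, x_1, ..., x_n).
S = np.eye(n + 1, n, k=-1)

rank = np.linalg.matrix_rank(S)
dim_ker = S.shape[1] - rank      # dim N(S) = 0: the shift is injective
dim_coker = S.shape[0] - rank    # codim R(S) = dim N(S*) = 1
assert dim_ker - dim_coker == -1  # mirrors ind(S+) = -1
```

In infinite dimension this −1 persists for every λ with |λ| < 1, which is exactly why the open disk is a hole of the essential spectrum T with associated index −1.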
Notes: Propositions 5.A to 5.C are standard results on Fredholm and semi-
Fredholm operators. For instance, see [75, Proposition 1.25] or [27, Proposi-
tion XI.2.6] for Proposition 5.A, and [75, Proposition 1.17] or [27, Proposition
XI.3.13] for Proposition 5.B. For Proposition 5.C, see [6, Remark 3.3.3] or
[50, Problem 181]. The locally constant dimension of kernels is considered in
Proposition 5.D (see, e.g., [27, Theorem XI.6.7]), and a finer analysis of the
spectra of normal operators and of unilateral shifts is discussed in Propositions
5.E and 5.F (see, e.g., [27, Proposition XI.4.6] and [27, Example XI.4.10]). Ev-
ery spectral picture is attainable [25]. This is described in Proposition 5.G —
see [27, Proposition XI.6.13]. For Proposition 5.H, see [16, §7]. The results of
Proposition 5.I are from [23, p. 57] (also see [32]), and Proposition 5.J is an
immediate consequence of Proposition 5.I. Regarding the Fredholm Alterna-
tive version of Proposition 5.K, for item (a) see [74, Theorem 1.4.6]; item (b)
follows from Corollary 5.35, which goes back to Corollary 2.20. The compact
perturbation result of Proposition 5.L is from [9, Theorem 11].
Suggested Reading
1. L.V. Ahlfors, Complex Analysis, 3rd edn. McGraw-Hill, New York, 1978.
2. P. Aiena, Fredholm and Local Spectral Theory, with Applications to Multipliers,
Kluwer, Dordrecht, 2004.
3. N.I. Akhiezer and I.M. Glazman, Theory of Linear Operators in Hilbert Space –
Volume I , Pitman, London, 1981; reprinted: Dover, New York, 1993.
4. N.I. Akhiezer and I.M. Glazman, Theory of Linear Operators in Hilbert Space –
Volume II , Pitman, London, 1981; reprinted: Dover, New York, 1993.
5. W. Arveson, An Invitation to C*-Algebras, Springer, New York, 1976.
6. W. Arveson, A Short Course in Spectral Theory, Springer, New York, 2002.
7. F.V. Atkinson, The normal solvability of linear equations in normed spaces, Mat.
Sbornik 28 (1951), 3–14.
8. G. Bachman and L. Narici, Functional Analysis, Academic Press, New York,
1966; reprinted: Dover, Mineola, 2000.
9. B.A. Barnes, Riesz points and Weyl’s theorem, Integral Equations Operator The-
ory 34 (1999), 187–196.
10. B.A. Barnes, G.J. Murphy, M.R.F. Smyth, and T.T. West, Riesz and Fredholm
Theory in Banach Algebras, Pitman, London, 1982.
11. R.G. Bartle, The Elements of Integration and Lebesgue Measure, Wiley, New
York, 1995; enlarged 2nd edn. of The Elements of Integration, Wiley, New York,
1966.
12. R. Beals, Topics in Operator Theory, The University of Chicago Press, Chicago,
1971.
13. R. Beals, Advanced Mathematical Analysis, Springer, New York, 1973.
14. S.K. Berberian, Notes on Spectral Theory, Van Nostrand, New York, 1966.
15. S.K. Berberian, An extension of Weyl’s theorem to a class of not necessarily
normal operators, Michigan Math. J. 16 (1969), 273–279.
16. S.K. Berberian, The Weyl spectrum of an operator , Indiana Univ. Math. J. 20
(1971), 529–544.
17. S.K. Berberian, Lectures in Functional Analysis and Operator Theory, Springer,
New York, 1974.
18. S.K. Berberian, Introduction to Hilbert Space, 2nd edn. Chelsea, New York, 1976.
19. J. Bram, Subnormal operators, Duke Math. J. 22 (1955), 75–94.
20. A. Brown and C. Pearcy, Spectra of tensor products of operators, Proc. Amer.
Math. Soc. 17 (1966), 162–166.
21. A. Brown and C. Pearcy, Introduction to Operator Theory I – Elements of Func-
tional Analysis, Springer, New York, 1977.
22. A. Brown and C. Pearcy, An Introduction to Analysis, Springer, New York, 1995.
23. S.R. Caradus, W.E. Pfaffenberger, and B. Yood, Calkin Algebras and Algebras of
Operators on Banach Spaces, Lecture Notes in Pure and Applied Mathematics,
Vol. 9. Marcel Dekker, New York, 1974.
24. L.A. Coburn, Weyl’s theorem for nonnormal operators, Michigan Math. J. 13
(1966), 285–288.
25. J.B. Conway, Every spectral picture is possible, Notices Amer. Math. Soc. 24
(1977), A-431.
26. J.B. Conway, Functions of One Complex Variable, Springer, New York, 1978.
27. J.B. Conway, A Course in Functional Analysis, 2nd edn. Springer, New York,
1990.
28. J.B. Conway, The Theory of Subnormal Operators, Mathematical Surveys and
Monographs, Vol. 36, Amer. Math. Soc., Providence, 1991.
29. J.B. Conway, A Course in Operator Theory, Graduate Studies in Mathematics,
Vol. 21, Amer. Math. Soc., Providence, 2000.
30. K.R. Davidson, C*-Algebras by Example, Fields Institute Monographs, Vol. 6,
Amer. Math. Soc., Providence, 1996.
31. J. Dieudonné, Foundations of Modern Analysis, Academic Press, New York,
1969.
32. D.S. Djordjević, Semi-Browder essential spectra of quasisimilar operators, Novi
Sad J. Math. 31 (2001), 115–123.
33. R.G. Douglas, On majorization, factorization, and range inclusion of operators
on Hilbert space, Proc. Amer. Math. Soc. 17 (1966), 413–415.
34. R.G. Douglas, Banach Algebra Techniques in Operator Theory, Academic Press,
New York, 1972; 2nd edn. Springer, New York, 1998.
35. H.R. Dowson, Spectral Theory of Linear Operators, Academic Press, New York,
1978.
36. B.P. Duggal and S.V. Djordjević, Generalized Weyl’s theorem for a class of op-
erators satisfying a norm condition, Math. Proc. Royal Irish Acad. 104 (2004),
75–81.
37. B.P. Duggal, S.V. Djordjević, and C.S. Kubrusly, Hereditarily normaloid con-
tractions, Acta Sci. Math. (Szeged) 71 (2005), 337–352.
38. B.P. Duggal and C.S. Kubrusly, Weyl’s theorem for direct sums, Studia Sci.
Math. Hungar. 44 (2007), 275–290.
39. N. Dunford and J.T. Schwartz, Linear Operators – Part I: General Theory,
Interscience, New York, 1958.
40. N. Dunford and J.T. Schwartz, Linear Operators – Part II: Spectral Theory –
Self Adjoint Operators in Hilbert Space, Interscience, New York, 1963.
41. N. Dunford and J.T. Schwartz, Linear Operators – Part III: Spectral Operators,
Interscience, New York, 1971.
42. P.A. Fillmore, Notes on Operator Theory, Van Nostrand, New York, 1970.
43. P.A. Fillmore, A User’s Guide to Operator Algebras, Wiley, New York, 1996.
44. K. Gustafson, Necessary and sufficient conditions for Weyl’s theorem, Michigan
Math. J. 19 (1972), 71–81.
45. K. Gustafson and D.K.M. Rao, Numerical Range, Springer, New York, 1997.
46. P.R. Halmos, Measure Theory, Van Nostrand, New York, 1950; reprinted:
Springer, New York, 1974.
47. P.R. Halmos, Introduction to Hilbert Space and the Theory of Spectral Multi-
plicity, 2nd edn. Chelsea, New York, 1957; reprinted: AMS Chelsea, Providence,
1998.
48. P.R. Halmos, Finite-Dimensional Vector Spaces, Van Nostrand, New York, 1958;
reprinted: Springer, New York, 1974.
49. P.R. Halmos, Shifts on Hilbert spaces, J. Reine Angew. Math. 208 (1961), 102–
112.
50. P.R. Halmos, A Hilbert Space Problem Book , Van Nostrand, New York, 1967;
2nd edn. Springer, New York, 1982.
51. P.R. Halmos and V.S. Sunder, Bounded Integral Operators on L2 Spaces,
Springer, Berlin, 1978.
52. R. Harte, Invertibility and Singularity for Bounded Linear Operators, Marcel
Dekker, New York, 1988.
53. R. Harte and W.Y. Lee, Another note on Weyl’s theorem, Trans. Amer. Math.
Soc. 349 (1997), 2115–2124.
54. G. Helmberg, Introduction to Spectral Theory in Hilbert Space, North-Holland,
Amsterdam, 1969.
55. D. Herrero, Approximation of Hilbert Space Operators – Volume 1, 2nd edn.
Longman, Harlow, 1989.
56. E. Hille and R.S. Phillips, Functional Analysis and Semi-Groups, Colloquium
Publications Vol. 31, Amer. Math. Soc., Providence, 1957; reprinted: 1974.
57. T. Ichinose, Spectral properties of linear operators I, Trans. Amer. Math. Soc.
235 (1978), 75–113.
58. V.I. Istrăţescu, Introduction to Linear Operator Theory, Marcel Dekker, New
York, 1981.
59. M.A. Kaashoek and D.C. Lay, Ascent, descent, and commuting perturbations,
Trans. Amer. Math. Soc. 169 (1972), 35–47.
60. T. Kato, Perturbation Theory for Linear Operators, 2nd edn. Springer, Berlin,
1980; reprinted: 1995.
61. D. Kitson, R. Harte, and C. Hernandez, Weyl’s theorem and tensor products: a
counterexample, J. Math. Anal. Appl. 378 (2011), 128–132.
62. C.S. Kubrusly, An Introduction to Models and Decompositions in Operator The-
ory, Birkhäuser, Boston, 1997.
63. C.S. Kubrusly, Hilbert Space Operators, Birkhäuser, Boston, 2003.
64. C.S. Kubrusly, A concise introduction to tensor product, Far East J. Math. Sci.
22 (2006), 137–174.
65. C.S. Kubrusly, Measure Theory, Academic Press/Elsevier, San Diego, 2007.
66. C.S. Kubrusly, The Elements of Operator Theory, Birkhäuser/Springer, New
York, 2011; enlarged 2nd edn. of Elements of Operator Theory, Birkhäuser,
Boston, 2001.
67. C.S. Kubrusly and B.P. Duggal, On Weyl and Browder spectra of tensor products,
Glasgow Math. J. 50 (2008), 289–302.
68. C.S. Kubrusly and B.P. Duggal, On Weyl’s theorem of tensor products, to appear.
69. D.C. Lay, Characterizations of the essential spectrum of F.E. Browder, Bull.
Amer. Math. Soc. 74 (1968), 246–248.
70. W.Y. Lee, Weyl’s theorem for operator matrices, Integral Equations Operator
Theory 32 (1998), 319–331.
71. W.Y. Lee, Weyl spectrum of operator matrices, Proc. Amer. Math. Soc. 129
(2001), 131–138.
72. J. Lindenstrauss and L. Tzafriri, On the complemented subspaces problem, Israel
J. Math. 9 (1971), 263–269.
73. V. Müller, Spectral Theory of Linear Operators and Spectral Systems in Banach
Algebras, 2nd edn. Birkhäuser, Basel, 2007.
74. G. Murphy, C*-Algebras and Operator Theory, Academic Press, San Diego, 1990.
75. C.M. Pearcy, Some Recent Developments in Operator Theory, CBMS Regional
Conference Series in Mathematics No. 36, Amer. Math. Soc., Providence, 1978.
76. H. Radjavi and P. Rosenthal, Invariant Subspaces, Springer, Berlin, 1973; 2nd
edn. Dover, New York, 2003.
77. F. Riesz and B. Sz.-Nagy, Functional Analysis, Frederick Ungar, New York, 1955;
reprinted: Dover, New York, 1990.
78. H.L. Royden, Real Analysis, 3rd edn. Macmillan, New York, 1988.
79. W. Rudin, Real and Complex Analysis, 3rd edn. McGraw-Hill, New York, 1987.
80. W. Rudin, Functional Analysis, 2nd edn. McGraw-Hill, New York, 1991.
81. M. Schechter, On the essential spectrum of an arbitrary operator. I, J. Math.
Anal. Appl. 13 (1966), 205–215.
82. M. Schechter, Principles of Functional Analysis, Academic Press, New York,
1971; 2nd edn. Graduate Studies in Mathematics, Vol. 36, Amer. Math. Soc.,
Providence, 2002.
83. J. Schwartz, Some results on the spectra and spectral resolutions of a class of
singular operators, Comm. Pure Appl. Math. 15 (1962), 75–90.
84. Y.-M. Song and A.-H. Kim, Weyl’s theorem for tensor products, Glasgow Math.
J. 46 (2004), 301–304.
85. V.S. Sunder, Functional Analysis – Spectral Theory, Birkhäuser, Basel, 1998.
86. B. Sz.-Nagy, C. Foiaş, H. Bercovici, and L. Kérchy, Harmonic Analysis of Opera-
tors on Hilbert Space, Springer, New York, 2010; enlarged 2nd edn. of B. Sz.-Nagy
and C. Foiaş, North-Holland, Amsterdam, 1970.
87. A.E. Taylor and D.C. Lay, Introduction to Functional Analysis, Wiley, New York,
1980; reprinted: Krieger, Melbourne, 1986; enlarged 2nd edn. of A.E. Taylor,
Wiley, New York, 1958.
88. A. Uchiyama, On the isolated points of the spectrum of paranormal operators,
Integral Equations Operator Theory 55 (2006), 145–151.
89. J. Weidmann, Linear Operators in Hilbert Spaces, Springer, New York, 1980.
90. H. Weyl, Über beschränkte quadratische Formen, deren Differenz vollstetig ist,
Rend. Circ. Mat. Palermo 27 (1909), 373–392.
Index