
Functional Analysis I

Term 1, 2010–2011

Vassili Gelfreich

Contents

1 Vector spaces
  1.1 Definition
  1.2 Examples of vector spaces
  1.3 Hamel bases

2 Normed spaces
  2.1 Norms
  2.2 Four famous inequalities
  2.3 Examples of norms on a space of functions
  2.4 Equivalence of norms
  2.5 Linear Isometries

3 Convergence in a normed space
  3.1 Definition and examples
  3.2 Topology on a normed space
  3.3 Closed sets
  3.4 Compactness

4 Banach spaces
  4.1 Completeness: Definition and examples
  4.2 The completion of a normed space
  4.3 Weierstrass Approximation Theorem

5 Lebesgue spaces
  5.1 Lebesgue measure
  5.2 Lebesgue integral
  5.3 Lebesgue space L^1(R)
  5.4 L^p spaces

6 Hilbert spaces
  6.1 Inner product spaces
  6.2 Natural norms
  6.3 Parallelogram law and polarisation identity
  6.4 Hilbert spaces: Definition and examples

7 Orthonormal bases in Hilbert spaces
  7.1 Orthonormal sets
  7.2 Gram-Schmidt orthonormalisation
  7.3 Bessel's inequality
  7.4 Convergence
  7.5 Orthonormal basis in a Hilbert space
  7.6 Separable Hilbert spaces

8 Closest points and approximations
  8.1 Closest points in convex subsets
  8.2 Orthogonal complements
  8.3 Best approximations

9 Linear maps between Banach spaces
  9.1 Continuous linear maps
  9.2 Examples
  9.3 Kernel and range

10 Linear functionals
  10.1 Definition and examples
  10.2 Riesz representation theorem

11 Linear operators on Hilbert spaces
  11.1 Complexification
  11.2 Adjoint operators
  11.3 Self-adjoint operators

12 Introduction to Spectral Theory
  12.1 Point spectrum
  12.2 Invertible operators
  12.3 Resolvent and spectrum

13 Compact operators
  13.1 Definition, properties and examples
  13.2 Spectral theory for compact self-adjoint operators

14 Sturm-Liouville problems

Preface
These notes follow the lectures on Functional Analysis given in the Autumn of 2010.
If you find a mistake or misprint please inform the author by sending an e-mail to
v.gelfreich@warwick.ac.uk. The author thanks James Robinson for his set of
notes and selection of exercises which significantly facilitated the preparation of the
lectures.

1 Vector spaces

1.1 Definition

A vector space V over a field K is a set equipped with two binary operations called vector addition and multiplication by scalars. Elements of V are called vectors and elements of K are called scalars. The sum of two vectors x, y ∈ V is denoted x + y; the product of a scalar λ ∈ K and a vector x ∈ V is denoted λx.
It is possible to consider vector spaces over an arbitrary field K, but we will consider the fields R and C only. So we will always assume that K denotes either R or C
and refer to V as a real or complex vector space respectively.
In a vector space, addition and multiplication have to satisfy the following set of axioms. Let x, y, z be arbitrary vectors in V, and let λ, μ be arbitrary scalars in K; then

Associativity of addition: x + (y + z) = (x + y) + z.

Commutativity of addition: x + y = y + x.

There exists an element 0 ∈ V, called the zero vector, such that x + 0 = x for all x ∈ V.

For all x ∈ V, there exists an element y ∈ V, called the additive inverse of x, such that x + y = 0. The additive inverse is denoted −x.

Associativity of multiplication:¹ λ(μx) = (λμ)x.

Distributivity: λ(x + y) = λx + λy and (λ + μ)x = λx + μx.

There is an element 1 ∈ K such that 1x = x for all x ∈ V. This element is called the multiplicative identity in K.

¹ The purist would not use the word associativity for this property as it includes two different operations: λμ is a product of two scalars and μx involves a vector and a scalar.

It is convenient to define two additional operations: subtraction of two vectors and division by a (non-zero) scalar are defined by
\[ x - y = x + (-y), \qquad x/\lambda = (1/\lambda)\,x. \]

1.2 Examples of vector spaces

1. R^n is a real vector space.

2. C^n is a complex vector space.

3. C^n is a real vector space.

4. The set P of all polynomials is a vector space:
\[ P = \Big\{ \sum_{k=0}^{n} \alpha_k x^k : \ \alpha_k \in K, \ n \in \mathbb{N} \Big\}. \]

5. The set ℓ^∞(K) of all bounded sequences is a vector space:
\[ \ell^\infty(K) = \Big\{ (x_1, x_2, \dots) : x_k \in K \text{ for all } k \in \mathbb{N}, \ \sup_{k \in \mathbb{N}} |x_k| < \infty \Big\}. \]
For two sequences x, y ∈ ℓ^∞(K), we define x + y by
\[ x + y = (x_1 + y_1, x_2 + y_2, \dots). \]
For λ ∈ K, we set
\[ \lambda x = (\lambda x_1, \lambda x_2, \dots). \]
We will always use these definitions of addition and multiplication by scalars for sequences.
In order to show that ℓ^∞(K) is a vector space it is necessary to check that
- the binary operations are consistently defined, i.e. that λx ∈ ℓ^∞(K) and x + y ∈ ℓ^∞(K) for any x, y ∈ ℓ^∞(K) and any λ ∈ K,
- the axioms of a vector space are satisfied.
6. Let 1 ≤ p < ∞. The set ℓ^p(K) of all p-th power summable sequences is a vector space:
\[ \ell^p(K) = \Big\{ (x_1, x_2, \dots) : x_k \in K, \ \sum_{k=1}^{\infty} |x_k|^p < \infty \Big\}. \]
The definition of the multiplication by scalars and vector addition is the same as in the previous example. Let us check that the sum x + y ∈ ℓ^p(K) for any x, y ∈ ℓ^p(K). Indeed,
\[ \sum_{k=1}^{\infty} |x_k + y_k|^p \le \sum_{k=1}^{\infty} \big( |x_k| + |y_k| \big)^p \le \sum_{k=1}^{\infty} \big( 2 \max\{ |x_k|, |y_k| \} \big)^p \le 2^p \sum_{k=1}^{\infty} |x_k|^p + 2^p \sum_{k=1}^{\infty} |y_k|^p < \infty. \]

7. The space C[0,1] of all real-valued continuous functions on the closed interval [0,1] is a vector space. The addition and multiplication by scalars are defined naturally: for f, g ∈ C[0,1] and λ ∈ R we denote by f + g the function whose values are given by
\[ (f + g)(t) = f(t) + g(t), \qquad t \in [0,1], \]
and λf is the function whose values are
\[ (\lambda f)(t) = \lambda f(t), \qquad t \in [0,1]. \]
We will always use similar definitions for the spaces of functions to be considered later.
8. The set L^1(0,1) of all real-valued continuous functions f on the open interval (0,1) for which
\[ \int_0^1 |f(t)| \, dt < \infty \]
is a vector space.
If f ∈ C[0,1] then f ∈ L^1(0,1). Indeed, since [0,1] is compact, f is bounded (and attains its lower and upper bounds). Then
\[ \int_0^1 |f(t)| \, dt \le \max_{t \in [0,1]} |f(t)| < \infty, \]
i.e. f ∈ L^1(0,1).
We note that L^1(0,1) contains some functions which do not belong to C[0,1]. For example, f(t) = t^{-1/2} is not continuous on [0,1] but it is continuous on (0,1) and
\[ \int_0^1 |f(t)| \, dt = \int_0^1 t^{-1/2} \, dt = \big[\, 2 t^{1/2} \,\big]_0^1 = 2 < \infty, \]
so f ∈ L^1(0,1).
We conclude that C[0,1] is a strict subset of L^1(0,1).
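As a small numerical sanity check (a sketch using SciPy, not part of the original notes), one can confirm that the integral of t^{-1/2} over (0,1) is finite even though the integrand is unbounded near 0:

    # Numerical check (illustration only): f(t) = t**(-1/2) is unbounded on (0,1)
    # but its integral over (0,1) is finite and equals 2, so f lies in L^1(0,1).
    from scipy.integrate import quad

    value, error_estimate = quad(lambda t: t**-0.5, 0.0, 1.0)
    print(value)   # approximately 2.0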

1.3 Hamel bases

Definition 1.1 The linear span of a subset E of a vector space V is the collection of all finite linear combinations of elements of E:
\[ \mathrm{Span}(E) = \Big\{ x \in V : x = \sum_{j=1}^{n} \alpha_j e_j, \ n \in \mathbb{N}, \ \alpha_j \in K, \ e_j \in E \Big\}. \]
We say that E spans V if V = Span(E), i.e. every element of V can be written as a finite linear combination of elements of E.
Definition 1.2 A set E is linearly independent if any finite collection of elements of E is linearly independent:
\[ \sum_{j=1}^{n} \alpha_j e_j = 0 \quad \Longrightarrow \quad \alpha_1 = \alpha_2 = \dots = \alpha_n = 0 \]
for any choice of n ∈ N, e_j ∈ E and α_j ∈ K.


Definition 1.3 A Hamel basis E for V is a linearly independent subset of V which spans V.

Examples:
1. Any basis in R^n is a Hamel basis.
2. The set E = { 1, x, x², ... } is a Hamel basis in the space of all polynomials.

Lemma 1.4 If E is a Hamel basis for a vector space V then any element x ∈ V can be uniquely written in the form
\[ x = \sum_{j=1}^{n} \alpha_j e_j, \]
where n ∈ N, α_j ∈ K, and e_j ∈ E.
Exercise: Prove the lemma.
Definition 1.5 We say that a set is finite if it consists of a finite number of elements.
Theorem 1.6 If V has a finite Hamel basis then every Hamel basis for V has the same
number of elements.

Proof. Let E = { e_1, ..., e_n } be a finite Hamel basis in V. Suppose there is a Hamel basis E' = { e'_1, ..., e'_m } which has more elements than E (if m < n, swap E and E'). Since Span(E') = V we can write e_1 as a linear combination of elements from E':
\[ e_1 = \sum_{k=1}^{m} \alpha_k e'_k. \]
Since e_1 ≠ 0 there is k_1 such that α_{k_1} ≠ 0, so we can write
\[ e'_{k_1} = \alpha_{k_1}^{-1} e_1 - \alpha_{k_1}^{-1} \sum_{\substack{1 \le k \le m \\ k \ne k_1}} \alpha_k e'_k. \]
Let S_1 = { e_1 } and S'_1 = { e'_{k_1} }. The set E'_1 = (E' \ S'_1) ∪ S_1 is linearly independent and Span(E'_1) = Span(E') = V (check these two claims).
We can repeat the procedure inductively. Let S_j = { e_1, ..., e_j }. Suppose for some j, 1 ≤ j ≤ n − 1, there is a set S'_j = { e'_{k_1}, ..., e'_{k_j} } such that the set
\[ E'_j = (E' \setminus S'_j) \cup S_j \]
is linearly independent and Span(E'_j) = V. Then there are α_k, β_k ∈ K such that
\[ e_{j+1} = \sum_{e'_k \in E' \setminus S'_j} \alpha_k e'_k + \sum_{e_k \in S_j} \beta_k e_k. \]
Since S_{j+1} is linearly independent, there is k_{j+1} such that α_{k_{j+1}} ≠ 0. Let S'_{j+1} = S'_j ∪ { e'_{k_{j+1}} }. Then E'_{j+1} is linearly independent and spans V (by the same arguments as in the case j = 1).
After n inductive steps we get that E'_n is linearly independent, which is impossible because S_n = E and consequently E'_n = (E' \ S'_n) ∪ E. This contradiction implies that m = n.

Definition 1.7 If V has a finite basis E then the dimension of V (denoted dim V) is the number of elements in E. If V has no finite basis then we say that V is infinite-dimensional.
Example: In R^n any basis consists of n vectors. Therefore dim R^n = n.
Let V and W be two vector spaces over K.

Definition 1.8 A map L : V → W is called linear if for any x, y ∈ V and any λ ∈ K
\[ L(x + \lambda y) = L(x) + \lambda L(y). \]

Definition 1.9 If a linear map L : V → W is a bijection, then L is called a linear isomorphism. We say that V and W are linearly isomorphic if there is a bijective linear map L : V → W.

Proposition 1.10 Any n-dimensional vector space over K is linearly isomorphic to K^n.

Proof: Let E = { e_j : 1 ≤ j ≤ n } be a basis in V; then every element x ∈ V is represented uniquely in the form
\[ x = \sum_{j=1}^{n} \alpha_j e_j. \]
The map L : x ↦ (α_1, ..., α_n) is a linear bijection V → K^n. Therefore V is linearly isomorphic to K^n.

In order to show that a vector space is infinite-dimensional it is sufficient to find an infinite linearly independent subset. Let's consider the following examples:

1. ℓ^p(K) is infinite-dimensional (1 ≤ p ≤ ∞).
Proof. The set
\[ E = \{ (1,0,0,0,\dots), \ (0,1,0,0,\dots), \ (0,0,1,0,\dots), \ \dots \} \]
is linearly independent and not finite. Therefore dim ℓ^p(K) = ∞.
Remark: This linearly independent set E is not a Hamel basis. Indeed, the sequence x = (x_1, x_2, x_3, ...) with x_k = e^{-k} belongs to ℓ^p(K) for any p ≥ 1 but cannot be represented as a sum of finitely many elements of the set E.

2. C[0,1] is infinite-dimensional.
Proof: The set E = { x^k : k ∈ N } is an infinite linearly independent subset of C[0,1]. Indeed, suppose
\[ p(x) = \sum_{k=1}^{n} \alpha_k x^k = 0 \quad \text{for all } x \in [0,1]. \]
Differentiating the equality n times we get p^{(n)}(x) = n!\,\alpha_n = 0, which implies α_n = 0. Therefore p(x) ≡ 0 implies α_k = 0 for all k.
Note that the functions
\[ f_\alpha(x) = \begin{cases} x(\alpha - x), & 0 \le x \le \alpha, \\ 0, & \alpha \le x \le 1 \end{cases} \]
for α ∈ (0,1) form an uncountable linearly independent subset of C[0,1].


The linearly independent sets provided in the last two examples are not Hamel bases. This is not a coincidence: ℓ^p(K) and C[0,1] (as well as many other functional spaces) do not have a countable Hamel basis.²
² Why?

Theorem 1.11 Every vector space has a Hamel basis.


The proof of this theorem is based on Zorn's Lemma.
We note that in many interesting vector spaces (called normed spaces), a very large
number of elements should be included into a Hamel basis in order to enable representation of every element in the form of a finite sum. Then the basis is too large to
be useful for the study of the original vector space. A natural idea would be to allow
infinite sums in the definition of a basis. In order to use infinite sums we need to define convergence which cannot be done using the axioms of vector spaces only. An
additional structure on the vector space should be defined.

2 Normed spaces

2.1 Norms

Definition 2.1 A norm on a vector space V is a map ‖·‖ : V → R such that for any x, y ∈ V and any λ ∈ K:
1. ‖x‖ ≥ 0, and ‖x‖ = 0 ⟺ x = 0 (positive definiteness);
2. ‖λx‖ = |λ| ‖x‖ (positive homogeneity);
3. ‖x + y‖ ≤ ‖x‖ + ‖y‖ (triangle inequality).
The pair (V, ‖·‖) is called a normed space.
In other words, a normed space is a vector space equipped with a norm.
Examples:
1. R^n with each of the following norms is a normed space:
\[ \text{(a)} \ \|x\| = \sqrt{\sum_{k=1}^{n} |x_k|^2}; \qquad \text{(b)} \ \|x\|_p = \Big( \sum_{k=1}^{n} |x_k|^p \Big)^{1/p}, \ 1 \le p < \infty; \qquad \text{(c)} \ \|x\|_\infty = \max_{1 \le k \le n} |x_k|. \]
2. ℓ^p(K) is a vector space with the following norm (1 ≤ p < ∞):
\[ \|x\|_{\ell^p} = \Big( \sum_{k=1}^{\infty} |x_k|^p \Big)^{1/p}. \]
3. ℓ^∞(K) is a vector space with the following norm:
\[ \|x\|_{\ell^\infty} = \sup_{k \in \mathbb{N}} |x_k|. \]
We will often use ‖x‖_p to denote the norm of a vector x ∈ ℓ^p.
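As a minimal numerical illustration (an assumed example using NumPy, not from the notes), the various norms of the same vector generally take different values:

    # The l^1, l^2, l^3 and sup norms of the same vector in R^3.
    import numpy as np

    x = np.array([3.0, -4.0, 1.0])
    print(np.sum(np.abs(x)))              # ||x||_1   = 8.0
    print(np.sqrt(np.sum(x**2)))          # ||x||_2   = sqrt(26) ~ 5.10
    print(np.sum(np.abs(x)**3)**(1/3))    # ||x||_3   ~ 4.51
    print(np.max(np.abs(x)))              # ||x||_inf = 4.0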


In order to prove the triangle inequality for the ℓ^p norm, we will state and prove several inequalities.

2.2 Four famous inequalities

Lemma 2.2 (Young's inequality) If a, b > 0 and 1 < p, q < ∞ with 1/p + 1/q = 1, then
\[ ab \le \frac{a^p}{p} + \frac{b^q}{q}. \]
Proof: Consider the function f(t) = t^p/p − t + 1/q defined for t ≥ 0. Since f'(t) = t^{p−1} − 1 vanishes at t = 1 only, and f''(t) = (p−1)t^{p−2} ≥ 0, the point t = 1 is a global minimum for f. Consequently, f(t) ≥ f(1) = 0 for all t ≥ 0. Now substitute t = a b^{−q/p}:
\[ f\big(a b^{-q/p}\big) = \frac{a^p b^{-q}}{p} - a b^{-q/p} + \frac{1}{q} \ge 0. \]
Multiplying the inequality by b^q yields Young's inequality (note that b^q \cdot b^{-q/p} = b^{q(1-1/p)} = b).


Lemma 2.3 (Hölder's inequality) If 1 ≤ p, q ≤ ∞ with 1/p + 1/q = 1, x ∈ ℓ^p(K) and y ∈ ℓ^q(K), then
\[ \sum_{j=1}^{\infty} |x_j y_j| \le \|x\|_{\ell^p} \|y\|_{\ell^q}. \]
Proof. If 1 < p, q < ∞, we use Young's inequality to get that for any n ∈ N
\[ \sum_{j=1}^{n} \frac{|x_j|}{\|x\|_{\ell^p}} \frac{|y_j|}{\|y\|_{\ell^q}} \le \sum_{j=1}^{n} \Big( \frac{1}{p} \frac{|x_j|^p}{\|x\|_{\ell^p}^p} + \frac{1}{q} \frac{|y_j|^q}{\|y\|_{\ell^q}^q} \Big) \le \frac{1}{p} + \frac{1}{q} = 1. \]
Therefore for any n ∈ N
\[ \sum_{j=1}^{n} |x_j y_j| \le \|x\|_{\ell^p} \|y\|_{\ell^q}. \]
Since the partial sums are monotonically increasing and bounded above, the series converges and Hölder's inequality follows by taking the limit as n → ∞.
If p = 1 and q = ∞:
\[ \sum_{j=1}^{n} |x_j y_j| \le \Big( \max_{1 \le j \le n} |y_j| \Big) \sum_{j=1}^{n} |x_j| \le \|x\|_{\ell^1} \|y\|_{\ell^\infty}. \]
Therefore the series converges and Hölder's inequality follows by taking the limit as n → ∞.

Lemma 2.4 (Cauchy–Schwarz inequality) If x, y ∈ ℓ²(K) then
\[ \sum_{j=1}^{\infty} |x_j y_j| \le \Big( \sum_{j=1}^{\infty} |x_j|^2 \Big)^{1/2} \Big( \sum_{j=1}^{\infty} |y_j|^2 \Big)^{1/2}. \]
Proof: This inequality coincides with Hölder's inequality with p = q = 2.

Now we state and prove the triangle inequality for the ℓ^p norm.

Lemma 2.5 (Minkowski's inequality) If x, y ∈ ℓ^p(K) for 1 ≤ p ≤ ∞ then x + y ∈ ℓ^p(K) and
\[ \|x + y\|_{\ell^p} \le \|x\|_{\ell^p} + \|y\|_{\ell^p}. \]
Proof: If 1 < p < ∞, define q from the equation 1/p + 1/q = 1. Then using Hölder's inequality (finite sequences belong to ℓ^p for any p) we get³
\[ \sum_{j=1}^{n} |x_j + y_j|^p = \sum_{j=1}^{n} |x_j + y_j|^{p-1} |x_j + y_j| \le \sum_{j=1}^{n} |x_j + y_j|^{p-1} |x_j| + \sum_{j=1}^{n} |x_j + y_j|^{p-1} |y_j| \]
\[ \le \Big( \sum_{j=1}^{n} |x_j + y_j|^{(p-1)q} \Big)^{1/q} \Big( \sum_{j=1}^{n} |x_j|^p \Big)^{1/p} + \Big( \sum_{j=1}^{n} |x_j + y_j|^{(p-1)q} \Big)^{1/q} \Big( \sum_{j=1}^{n} |y_j|^p \Big)^{1/p} \]
(Hölder's inequality). Dividing the inequality by \big( \sum_{j=1}^{n} |x_j + y_j|^p \big)^{1/q} and using that (p−1)q = p and 1 − 1/q = 1/p, we get for all n
\[ \Big( \sum_{j=1}^{n} |x_j + y_j|^p \Big)^{1/p} \le \Big( \sum_{j=1}^{n} |x_j|^p \Big)^{1/p} + \Big( \sum_{j=1}^{n} |y_j|^p \Big)^{1/p}. \]
The series on the right-hand side converge to ‖x‖_{ℓ^p} + ‖y‖_{ℓ^p}. Consequently the series on the left-hand side also converges. Therefore x + y ∈ ℓ^p(K), and Minkowski's inequality follows by taking the limit as n → ∞.

Exercise: Prove Minkowski's inequality for p = 1 and p = ∞.

³ We do not start directly with n = ∞ because a priori we do not know convergence for some of the series involved in the proof.
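A hedged numerical check of the three inequalities on random data (an added sketch using NumPy, with arbitrarily chosen p = 3); all three lines should print True:

    import numpy as np

    rng = np.random.default_rng(0)
    p = 3.0
    q = p / (p - 1.0)                        # conjugate exponent, 1/p + 1/q = 1

    a, b = rng.uniform(0.1, 5.0, size=2)
    print(a * b <= a**p / p + b**q / q)      # Young

    x = rng.normal(size=50)
    y = rng.normal(size=50)
    lhs = np.sum(np.abs(x * y))
    rhs = np.sum(np.abs(x)**p)**(1/p) * np.sum(np.abs(y)**q)**(1/q)
    print(lhs <= rhs)                        # Hoelder

    norm_p = lambda v: np.sum(np.abs(v)**p)**(1/p)
    print(norm_p(x + y) <= norm_p(x) + norm_p(y))   # Minkowski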

2.3 Examples of norms on a space of functions

Each of the following formulae defines a norm on C[0,1], the space of all continuous functions on [0,1]:

1. the sup(remum) norm
\[ \|f\|_\infty = \sup_{t \in [0,1]} |f(t)|; \]
2. the L¹ norm
\[ \|f\|_{L^1} = \int_0^1 |f(t)| \, dt; \]
3. the L² norm
\[ \|f\|_{L^2} = \Big( \int_0^1 |f(t)|^2 \, dt \Big)^{1/2}. \]
Exercise: Check that each of these formulae defines a norm. For the case of the L² norm, you will need a Cauchy–Schwarz inequality for integrals.

Example: Let k ∈ N. The space C^k[0,1] consists of all continuous real-valued functions which have continuous derivatives up to order k. The norm on C^k[0,1] is defined by
\[ \|f\|_{C^k} = \sum_{j=0}^{k} \sup_{t \in [0,1]} |f^{(j)}(t)|, \]
where f^{(j)} denotes the derivative of order j.
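A small numerical illustration (an assumed example using SciPy) of the three norms above for f(t) = t, whose exact values are 1, 1/2 and 1/√3:

    import numpy as np
    from scipy.integrate import quad

    f = lambda t: t
    grid = np.linspace(0.0, 1.0, 10001)
    sup_norm = np.max(np.abs(f(grid)))
    l1_norm = quad(lambda s: abs(f(s)), 0.0, 1.0)[0]
    l2_norm = quad(lambda s: f(s)**2, 0.0, 1.0)[0] ** 0.5
    print(sup_norm, l1_norm, l2_norm)    # 1.0, 0.5, 0.577...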

2.4 Equivalence of norms

We have seen that various different norms can be introduced on a vector space. In order to compare two norms it is convenient to introduce the following equivalence relation.

Definition 2.6 Two norms ‖·‖₁ and ‖·‖₂ on a vector space V are equivalent if there are constants c₁, c₂ > 0 such that
\[ c_1 \|x\|_1 \le \|x\|_2 \le c_2 \|x\|_1 \quad \text{for all } x \in V. \]
In this case we write ‖·‖₁ ∼ ‖·‖₂.

Theorem 2.7 Any two norms on R^n are equivalent.⁴

Example: The norms ‖·‖_{L¹} and ‖·‖_∞ on C[0,1] are not equivalent.

⁴ You already saw this statement in Analysis III and/or Differentiation in Year 2. The proof is based on the observation that the unit sphere S ⊂ R^n is sequentially compact. Then we checked that f(x) = ‖x‖₂/‖x‖₁ is continuous on S and consequently it is bounded and attains its lower and upper bounds on S. We set c₁ = min_S f and c₂ = max_S f.

Proof: Consider the sequence of functions f_n(t) = t^n with n ∈ N. Obviously f_n ∈ C[0,1] and
\[ \|f_n\|_\infty = \max_{t \in [0,1]} |t|^n = 1, \qquad \|f_n\|_{L^1} = \int_0^1 t^n \, dt = \frac{1}{n+1}. \]
Suppose the norms are equivalent. Then there is a constant c₂ > 0 such that for all n:
\[ \frac{\|f_n\|_\infty}{\|f_n\|_{L^1}} = n + 1 \le c_2. \]
But this is not possible for all n. This contradiction implies the norms are not equivalent.
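The unbounded ratio can be observed numerically (a sketch, not from the notes; the exact value 1/(n+1) of the L¹ norm is approximated by quadrature):

    from scipy.integrate import quad

    for n in (1, 10, 100, 1000):
        sup_norm = 1.0                                  # max of t**n on [0,1]
        l1_norm = quad(lambda t: t**n, 0.0, 1.0)[0]     # equals 1/(n+1)
        print(n, sup_norm / l1_norm)                    # roughly n + 1, unbounded in n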


2.5 Linear Isometries

Suppose V and W are normed spaces.

Definition 2.8 If a linear map L : V → W preserves norms, i.e. ‖L(x)‖ = ‖x‖ for all x ∈ V, it is called a linear isometry.

This definition implies L is injective, i.e., L : V → L(V) is bijective, but it does not imply L(V) = W, i.e., L is not necessarily invertible. Note that sometimes the invertibility property is included in the definition of an isometry. Finally, in Metric Spaces the word isometry is used to denote distance-preserving transformations.

Definition 2.9 We say that two normed spaces are isometric if there is an invertible linear isometry between them.
A linear invertible map can be used to pull back a norm as follows.

Proposition 2.10 Let (V, ‖·‖_V) be a normed space, W a vector space, and L : W → V a linear isomorphism. Then
\[ \|x\|_W := \|L(x)\|_V \]
defines a norm on W.

Proof: For any x, y ∈ W and any λ ∈ K we have:
\[ \|x\|_W = \|L(x)\|_V \ge 0, \qquad \|\lambda x\|_W = \|L(\lambda x)\|_V = |\lambda| \, \|L(x)\|_V = |\lambda| \, \|x\|_W. \]
If ‖x‖_W = ‖L(x)‖_V = 0, then L(x) = 0 due to the non-degeneracy of the norm ‖·‖_V. Since L is invertible, we get x = 0. Therefore ‖·‖_W is non-degenerate.
Finally, the triangle inequality follows from the triangle inequality for ‖·‖_V:
\[ \|x + y\|_W = \|L(x) + L(y)\|_V \le \|L(x)\|_V + \|L(y)\|_V = \|x\|_W + \|y\|_W. \]
Therefore ‖·‖_W is a norm.

Note that in the proposition the new norm is introduced in such a way that L : (W, ‖·‖_W) → (V, ‖·‖_V) is a linear isometry.
Let V be a finite-dimensional vector space and n = dim V. We have seen that V is linearly isomorphic to K^n. Then the proposition implies the following statements.

Corollary 2.11 Any finite-dimensional vector space V can be equipped with a norm.

Corollary 2.12 Any n-dimensional normed space V is isometric to K^n equipped with a suitable norm.

Since any two norms on R^n (and therefore on C^n) are equivalent we also get the following statement.

Theorem 2.13 If V is a finite-dimensional vector space, then all norms on V are equivalent.


3 Convergence in a normed space

3.1 Definition and examples

The norm on a vector space V can be used to measure distances between points x, y ∈ V. So we can define the limit of a sequence.

Definition 3.1 A sequence (x_n)_{n=1}^∞, x_n ∈ V, converges to a limit x ∈ V if for any ε > 0 there is N ∈ N such that
\[ \|x_n - x\| < \varepsilon \quad \text{for all } n > N. \]
Then we write x_n → x (or lim x_n = x).

We see directly from this definition that x_n → x if and only if the sequence of non-negative real numbers ‖x_n − x‖ → 0.
Exercises: Prove the following statements.
1. The limit of a convergent sequence is unique.
2. Any convergent sequence is bounded.
3. If x_n converges to x, then ‖x_n‖ → ‖x‖.

It is possible to check convergence of a sequence of real numbers without actually finding its limit: it is sufficient to check that it satisfies the following definition.

Definition 3.2 (Cauchy sequence) A sequence (x_n)_{n=1}^∞ in a normed space V is Cauchy if for any ε > 0 there is an N such that
\[ \|x_n - x_m\| < \varepsilon \quad \text{for all } m, n > N. \]

Theorem 3.3 A sequence of real numbers converges iff it is Cauchy.

Exercises: Prove the following statements.
1. Any convergent sequence is Cauchy.
2. Any Cauchy sequence is bounded.
Example: Consider the sequence f_n ∈ C[0,1] defined by f_n(t) = t^n.

1. f_n → 0 in the L¹ norm.
Proof: We have already computed the norms:
\[ \|f_n\|_{L^1} = \frac{1}{n+1} \to 0. \]
Consequently, f_n → 0.

2. f_n does not converge in the sup norm.
Proof: If m > 2n − 1 then
\[ f_n(2^{-1/n}) - f_m(2^{-1/n}) = \frac{1}{2} - 2^{-m/n} \ge \frac{1}{4}. \]
Consequently (f_n) is not Cauchy in the sup norm and hence not convergent.

This example shows that convergence in the L¹ norm does not imply pointwise convergence and, as a result, does not imply convergence in the sup norm (often called uniform convergence). Note that in contrast to the uniform and L¹ convergences the notion of pointwise convergence is not based on a norm on the space of continuous functions.

Exercise: Pointwise convergence does not imply L¹ convergence.
Hint: Construct f_n with support in (0, 1/n) but make the maximum of f_n very large to ensure that ‖f_n‖_{L¹} > n. Then f_n(t) → 0 for every t ∈ [0,1] but f_n is not bounded in the L¹ norm, hence not convergent.
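One concrete choice matching the hint (an added sketch, with tent functions of height 2n² on (0, 1/n) so that ‖f_n‖_{L¹} = n; the construction is ours, not the only possible one):

    import numpy as np

    def tent(n, t):
        # piecewise-linear spike on (0, 1/n) with apex 2*n**2 at t = 1/(2n)
        t = np.asarray(t, dtype=float)
        up = 4.0 * n**3 * t
        down = 4.0 * n**3 * (1.0 / n - t)
        return np.where(t < 1.0 / (2.0 * n), up, np.maximum(down, 0.0)) * (t > 0)

    t = np.linspace(0.0, 1.0, 200001)
    for n in (10, 100, 1000):
        y = tent(n, t)
        l1 = np.sum(0.5 * (y[1:] + y[:-1]) * np.diff(t))   # trapezoid rule, ~ n
        print(n, l1, float(tent(n, 0.5)))                  # L1 norm grows, value at t = 0.5 is 0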
Proposition 3.4 If f_n ∈ C[0,1] for all n ∈ N and f_n → f in the sup norm, then f_n → f in the L¹ norm, i.e.,
\[ \|f_n - f\|_\infty \to 0 \quad \Longrightarrow \quad \|f_n - f\|_{L^1} \to 0. \]
Proof:
\[ 0 \le \|f_n - f\|_{L^1} = \int_0^1 |f_n(t) - f(t)| \, dt \le \sup_{0 \le t \le 1} |f_n(t) - f(t)| = \|f_n - f\|_\infty \to 0. \]
Therefore ‖f_n − f‖_{L¹} → 0.

We have seen that different norms may lead to different conclusions about convergence of a given sequence, but sometimes convergence in one norm implies convergence in another one. The following lemma shows that equivalent norms give rise to the same notion of convergence.

Lemma 3.5 Suppose ‖·‖₁ and ‖·‖₂ are equivalent norms on a vector space V. Then for any sequence (x_n):
\[ \|x_n - x\|_1 \to 0 \quad \Longleftrightarrow \quad \|x_n - x\|_2 \to 0. \]
Proof: Since the norms are equivalent, there are constants c₁, c₂ > 0 such that
\[ 0 \le c_1 \|x_n - x\|_1 \le \|x_n - x\|_2 \le c_2 \|x_n - x\|_1 \]
for all n. Then ‖x_n − x‖₂ → 0 implies ‖x_n − x‖₁ → 0, and vice versa.

3.2 Topology on a normed space

We say that a collection T of subsets of V is a topology on V if it satisfies the following properties:
1. ∅, V ∈ T;
2. any finite intersection of elements of T belongs to T;
3. any union of elements of T belongs to T.

A set equipped with a topology is called a topological space. The elements of T are called open sets. The topology can be used to define convergent sequences and continuous functions.
A norm on V can be used to define a topology on V, i.e., to define the notion of an open set.

Definition 3.6 A subset X ⊆ V is open if for any x ∈ X there is ε > 0 such that the ball of radius ε centred around x belongs to X:
\[ B(x, \varepsilon) = \{ y \in V : \|y - x\| < \varepsilon \} \subseteq X. \]

Example: In any normed space V:
1. The unit ball centred around zero, B₀ = { x : ‖x‖ < 1 }, is open.
2. Any open ball B(x, ε) is open.
3. V is open.
4. The empty set is open.

It is not too difficult to check that the collection of open sets defines a topology on V. You can easily check from the definition that equivalent norms generate the same topology, i.e., the open sets are exactly the same. The notion of convergence can be defined in terms of the topology.
Definition 3.7 An open neighbourhood of x is an open set which contains x.

Lemma 3.8 A sequence x_n → x if and only if for any open neighbourhood X of x there is N ∈ N such that x_n ∈ X for all n > N.

Proof: (⟹) Let x_n → x. Take any open X such that x ∈ X. Then there is ε > 0 such that B(x, ε) ⊆ X. Since the sequence converges there is N such that ‖x_n − x‖ < ε for all n > N. Then x_n ∈ B(x, ε) ⊆ X for the same values of n.
(⟸) Take any ε > 0. The ball B(x, ε) is open, therefore there is N such that x_n ∈ B(x, ε) for all n > N. Hence ‖x_n − x‖ < ε and x_n → x.


3.3 Closed sets

Definition 3.9 A set X ⊆ V is closed if its complement V \ X is open.


Example: In any normed space V:
1. The unit sphere S = { x : ‖x‖ = 1 } is closed.
2. Any closed ball
\[ \bar{B}(x, \varepsilon) = \{ y \in V : \|y - x\| \le \varepsilon \} \]
is closed.
3. V is closed.
4. The empty set is closed.

Lemma 3.10 A subset X ⊆ V is closed if and only if any convergent sequence with elements in X has its limit in X.

Exercise: Prove it. (You have seen the proof in Year 2.)

Definition 3.11 We say that a subset L ⊆ V is a linear subspace if it is a vector space itself, i.e., if x₁, x₂ ∈ L and λ ∈ K imply x₁ + λx₂ ∈ L.

Proposition 3.12 A finite-dimensional linear subspace W of a normed space V is closed.

Proof: Since n = dim W < ∞, there is a finite Hamel basis of W:
\[ E = \{ e_1, e_2, \dots, e_n \}, \qquad \mathrm{Span}(E) = W. \]
Suppose W is not closed; then by Lemma 3.10 there is a convergent sequence x_k → x*, x_k ∈ W but x* ∈ V \ W. Then x* is linearly independent from E (otherwise it would belong to W). Consequently
\[ \tilde E = \{ e_1, e_2, \dots, e_n, x^* \} \]
is a Hamel basis in L = Span(Ẽ). In this basis, the components of x_k are given by (α₁^k, ..., α_n^k, 0) and x* corresponds to the vector (0, ..., 0, 1). In a finite-dimensional normed vector space, a sequence of vectors converges iff each component converges. We get in the limit as k → ∞
\[ (\alpha_1^k, \dots, \alpha_n^k, 0) \to (0, \dots, 0, 1), \]
which is obviously impossible. Therefore W is closed.

Example: The subspace of polynomial functions is linear but not closed in C[0,1] equipped with the sup norm.

3.4 Compactness

Definition 3.13 (sequential compactness) A subset K of a normed space (V, ‖·‖_V) is (sequentially) compact if any sequence (x_n)_{n=1}^∞ with x_n ∈ K has a convergent subsequence x_{n_j} → x* with x* ∈ K.
Proposition 3.14 A compact set is closed and bounded.

Theorem 3.15 A subset of R^n is compact iff it is closed and bounded.

Corollary 3.16 A subset of a finite-dimensional vector space is compact iff it is closed and bounded.

Example: The unit sphere in ℓ^p(K) is closed and bounded but not compact.
Proof: Take the sequence e_j with
\[ e_j = (0, \dots, 0, \underbrace{1}_{j\text{-th place}}, 0, \dots). \]
We note that ‖e_j − e_k‖_{ℓ^p} = 2^{1/p} for all j ≠ k. Consequently, (e_j)_{j=1}^∞ does not have any convergent subsequence, hence S is not compact.

Lemma 3.17 (Riesz Lemma) Let X be a normed vector space, Y a closed linear subspace of X such that Y ≠ X, and α ∈ R, 0 < α < 1. Then there is x_α ∈ X such that ‖x_α‖ = 1 and ‖x_α − y‖ > α for all y ∈ Y.

Proof: Since Y ⊂ X and Y ≠ X there is x ∈ X \ Y. Since Y is closed, X \ Y is open and therefore
\[ d := \inf\{ \|x - y\| : y \in Y \} > 0. \]
Since α^{-1} > 1 there is a point z ∈ Y such that ‖x − z‖ < d α^{-1}. Let x_α = (x − z)/‖x − z‖. Then ‖x_α‖ = 1 and for any y ∈ Y,
\[ \|x_\alpha - y\| = \Big\| \frac{x - z}{\|x - z\|} - y \Big\| = \frac{\big\| x - (z + \|x - z\| \, y) \big\|}{\|x - z\|} > \frac{d}{d \, \alpha^{-1}} = \alpha, \]
since z + ‖x − z‖ y ∈ Y because Y is a linear subspace.

Theorem 3.18 A normed space is finite-dimensional iff the unit sphere is compact.

Proof: If the vector space is finite-dimensional, Corollary 3.16 implies the unit sphere is compact, as the unit sphere is bounded and closed.
So we only need to show that if the unit sphere S is sequentially compact, then the normed space V is finite-dimensional. Suppose that dim V = ∞. Then the Riesz Lemma can be used to construct an infinite sequence of x_n ∈ S such that ‖x_n − x_m‖ > 1/2 > 0 for all m ≠ n. This sequence does not have any convergent subsequence (none of its subsequences is Cauchy) and therefore S is not compact.
We construct x_n inductively. First take any x₁ ∈ S. Then suppose that for some n ≥ 1 we have found x₁, ..., x_n ∈ S such that ‖x_l − x_k‖ > 1/2 for all 1 ≤ k, l ≤ n, k ≠ l (note that this property is satisfied for n = 1). The linear subspace Y_n = Span(x₁, ..., x_n) is n-dimensional and hence closed (see Proposition 3.12). Since X is infinite-dimensional, Y_n ≠ X. Then the Riesz Lemma with Y = Y_n and α = 1/2 implies that there is x_{n+1} ∈ S such that ‖x_{n+1} − x_k‖ > 1/2 for all 1 ≤ k ≤ n.
Repeating this argument inductively we generate the sequence x_n for all n ∈ N.



4 Banach spaces

4.1 Completeness: Definition and examples

Definition 4.1 (Banach space) A normed space V is called complete if any Cauchy
sequence in V converges to a limit in V . A complete normed space is called a Banach
space.
Theorem 3.3 implies that R is complete, i.e., every Cauchy sequence of numbers
has a limit. We also know that C is complete (Do you know how to deduce it from the
completeness of R?).
Theorem 4.2 Every finite-dimensional normed space is complete.
Proof: Now let V be a vector space over K ∈ {R, C}, dim V = n < ∞. Take any
basis in V . Then a sequence of vectors in V converges iff each component of the
vectors converges, and a sequence of vectors is Cauchy iff each component is Cauchy.
Therefore each component has a limit, and those limits constitute the limit vector for
the original sequence. Hence V is complete.

In particular, Rn and Cn are complete.
Theorem 4.3 (ℓ^p is a Banach space) The space ℓ^p(K) equipped with the standard ℓ^p norm is complete.

Proof: Suppose that x^k = (x_1^k, x_2^k, ...) ∈ ℓ^p(K) is Cauchy. Then for every ε > 0 there is N such that
\[ \|x^m - x^n\|_{\ell^p}^p = \sum_{j=1}^{\infty} |x_j^m - x_j^n|^p < \varepsilon \]
for all m, n > N. Consequently, for each j ∈ N the sequence (x_j^k)_k is Cauchy, and the completeness of K implies that there is a_j ∈ K such that
\[ x_j^k \to a_j \quad \text{as } k \to \infty. \]
Let a = (a₁, a₂, ...). First we note that for any M ≥ 1 and m, n > N:
\[ \sum_{j=1}^{M} |x_j^m - x_j^n|^p \le \sum_{j=1}^{\infty} |x_j^m - x_j^n|^p < \varepsilon. \]
Taking the limit as n → ∞ we get
\[ \sum_{j=1}^{M} |x_j^m - a_j|^p \le \varepsilon. \]
This holds for any M, so we can take the limit as M → ∞:
\[ \sum_{j=1}^{\infty} |x_j^m - a_j|^p \le \varepsilon. \]
We conclude that x^m − a ∈ ℓ^p(K). Since ℓ^p(K) is a vector space and x^m ∈ ℓ^p(K), then a ∈ ℓ^p(K). Moreover, ‖x^m − a‖_{ℓ^p} ≤ ε^{1/p} for all m > N. Consequently x^m → a in ℓ^p(K) with the standard norm, and so ℓ^p(K) is complete.

Theorem 4.4 (C is a Banach space) The space C[0,1] equipped with the sup norm is complete.

Proof: Let (f_k) be a Cauchy sequence. Then for any ε > 0 there is N such that
\[ \sup_{t \in [0,1]} |f_n(t) - f_m(t)| < \varepsilon \]
for all m, n > N. In particular, (f_n(t)) is Cauchy for any fixed t and consequently has a limit. Set
\[ f(t) = \lim_{n \to \infty} f_n(t). \]
Let's prove that f_n(t) → f(t) uniformly in t. Indeed, we already know that
\[ |f_n(t) - f_m(t)| < \varepsilon \]
for all n, m > N and all t ∈ [0,1]. Taking the limit as m → ∞ we get
\[ |f_n(t) - f(t)| \le \varepsilon \]
for all n > N and all t ∈ [0,1]. Therefore f_n converges uniformly:
\[ \|f_n - f\|_\infty = \sup_{t \in [0,1]} |f_n(t) - f(t)| \le \varepsilon \]
for all n > N. The uniform limit of a sequence of continuous functions is continuous. Consequently, f ∈ C[0,1], which completes the proof of completeness.

Example: The space C[0,2] equipped with the L¹ norm is not complete.

Proof: Consider the following sequence of functions:
\[ f_n(t) = \begin{cases} t^n & \text{for } 0 \le t \le 1, \\ 1 & \text{for } 1 \le t \le 2. \end{cases} \]
This is a Cauchy sequence in the L¹ norm. Indeed, for any n < m:
\[ \|f_n - f_m\|_{L^1} = \int_0^1 (t^n - t^m) \, dt = \frac{1}{n+1} - \frac{1}{m+1} < \frac{1}{n+1}, \]
and consequently for any m, n > N
\[ \|f_n - f_m\|_{L^1} < \frac{1}{N}. \]
Now let us show that f_n does not converge to a continuous function in the L¹ norm. Indeed, suppose such a limit exists and call it f. Then
\[ \|f_n - f\|_{L^1} = \int_0^1 |t^n - f(t)| \, dt + \int_1^2 |1 - f(t)| \, dt \to 0. \]
Since
\[ |f(t)| - |t^n| \le |t^n - f(t)| \le |f(t)| + |t^n| \]
implies that
\[ \int_0^1 |f(t)| \, dt - \int_0^1 t^n \, dt \le \int_0^1 |t^n - f(t)| \, dt \le \int_0^1 |f(t)| \, dt + \int_0^1 t^n \, dt, \]
we have \int_0^1 |t^n - f(t)| \, dt \to \int_0^1 |f(t)| \, dt as n → ∞, and consequently
\[ \int_0^1 |f(t)| \, dt + \int_1^2 |1 - f(t)| \, dt = 0. \]
As f is assumed to be continuous, it follows that
\[ f(t) = \begin{cases} 0, & 0 < t < 1, \\ 1, & 1 < t < 2. \end{cases} \]
We see that the limit function f cannot be continuous. This contradiction implies that C[0,2] is not complete with respect to the L¹ norm.
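A numerical sketch of this behaviour (an added illustration, not part of the notes): the L¹ distance between f_n and f_m becomes small for large n, m, while the pointwise limit jumps at t = 1:

    import numpy as np

    def f(n, t):
        return np.minimum(t, 1.0)**n        # equals t**n on [0,1] and 1 on [1,2]

    t = np.linspace(0.0, 2.0, 400001)
    dt = t[1] - t[0]
    for n, m in ((10, 20), (100, 200), (1000, 2000)):
        dist = np.sum(np.abs(f(n, t) - f(m, t))) * dt   # ~ 1/(n+1) - 1/(m+1)
        print(n, m, dist)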

4.2 The completion of a normed space

A normed space may be incomplete. However, every normed space X can be considered as a subset of a larger Banach space X̃. The minimal among these spaces⁵ is called the completion of X.
Informally we can say that X̃ consists of limit points of all Cauchy sequences in X. Of course, every point x ∈ X is a limit point of the constant sequence (x_n = x for all n ∈ N) and therefore X ⊆ X̃. If X is not complete, some of the limit points are not in X, so X̃ is larger than the original set X.

Definition 4.5 (dense set) We say that a subset X ⊆ V is dense in V if for any v ∈ V and any ε > 0 there is x ∈ X such that ‖x − v‖ < ε.

⁵ In this context minimal means that if any other space X̂ has the same property, then the minimal X̃ is isometric to a subspace of X̂. It turns out that this property can be achieved by requiring X to be dense in X̃.

Note that X is dense in V iff for every point v ∈ V there is a sequence x_n ∈ X such that x_n → v.

Theorem 4.6 Let (X, ‖·‖_X) be a normed space. Then there is a complete normed space (X̃, ‖·‖_X̃) and a linear map i : X → X̃ such that i is an isometric isomorphism between (X, ‖·‖_X) and (i(X), ‖·‖_X̃), and i(X) is dense in X̃.
Moreover, X̃ is unique up to isometry, i.e., if there is another complete normed space (X̂, ‖·‖_X̂) with these properties, then X̃ and X̂ are isometrically isomorphic.

Proof: The proof is relatively long so we break it into a sequence of steps.

Construction of X̃. Let Y be the set of all Cauchy sequences in X. We say that two Cauchy sequences x = (x_n)_{n=1}^∞, x_n ∈ X, and y = (y_n)_{n=1}^∞, y_n ∈ X, are equivalent, and write x ∼ y, if
\[ \lim_{n \to \infty} \|x_n - y_n\|_X = 0. \]
Let X̃ be the space of all equivalence classes in Y, i.e., it is the factor space X̃ = Y/∼. The elements of X̃ are collections of equivalent Cauchy sequences from X. We will use [x] to denote the equivalence class of x.
Exercise: Show that X̃ is a vector space.

Norm on X̃. For ξ ∈ X̃ take any representative x = (x_n)_{n=1}^∞, x_n ∈ X, of the equivalence class ξ. Then the equation
\[ \|\xi\|_{X̃} = \lim_{n \to \infty} \|x_n\|_X \qquad (4.1) \]
defines a norm on X̃. Indeed:
1. Equation (4.1) defines a function X̃ → R, i.e., for any ξ ∈ X̃ and any representative x the limit exists and is independent of the choice of the representative. (Exercise)
2. The function defined by (4.1) satisfies the axioms of a norm. (Exercise)

Definition of i : X → X̃. For any x ∈ X let
\[ i(x) = [(x, x, x, x, \dots)] \]
(the equivalence class of the constant sequence). Obviously, i is a linear isometry, and it is a bijection X → i(X). Therefore the spaces X and i(X) are isometrically isomorphic.

Completeness of X̃. Let (ξ^{(k)})_{k=1}^∞ be a Cauchy sequence in (X̃, ‖·‖_X̃). For every k ∈ N take a representative x^{(k)} ∈ ξ^{(k)}. Note that x^{(k)} ∈ Y is a Cauchy sequence in the space (X, ‖·‖_X). Then there is a strictly monotone sequence of integers n_k such that
\[ \big\| x_j^{(k)} - x_l^{(k)} \big\|_X \le \frac{1}{k} \quad \text{for all } j, l \ge n_k. \qquad (4.2) \]
Now consider the sequence x* defined by
\[ x^* = \big( x_{n_k}^{(k)} \big)_{k=1}^{\infty}. \]
Next we will check that x* is Cauchy, and consider its equivalence class ξ* = [x*] ∈ X̃. Then we will prove that ξ^{(k)} → ξ* in (X̃, ‖·‖_X̃).

The sequence x* is Cauchy. Since the sequence ξ^{(k)} is Cauchy, for any ε > 0 there is M_ε such that
\[ \lim_{n \to \infty} \big\| x_n^{(k)} - x_n^{(l)} \big\|_X = \big\| \xi^{(k)} - \xi^{(l)} \big\|_{X̃} < \varepsilon \quad \text{for all } k, l > M_\varepsilon. \]
Consequently, for every k, l > M_ε there is N_{k,l}^{\varepsilon} such that
\[ \big\| x_n^{(k)} - x_n^{(l)} \big\|_X < \varepsilon \quad \text{for all } n > N_{k,l}^{\varepsilon}. \qquad (4.3) \]
Then fix any ε > 0. If j, l > M_{ε/3} and m > \max\{ n_j, n_l, N_{j,l}^{\varepsilon/3} \} we have
\[ \| x_j^* - x_l^* \|_X = \big\| x_{n_j}^{(j)} - x_{n_l}^{(l)} \big\|_X \le \big\| x_{n_j}^{(j)} - x_m^{(j)} \big\|_X + \big\| x_m^{(j)} - x_m^{(l)} \big\|_X + \big\| x_m^{(l)} - x_{n_l}^{(l)} \big\|_X < \frac{1}{j} + \frac{\varepsilon}{3} + \frac{1}{l} < \varepsilon, \]
where we used (4.3) and (4.2). Therefore x* is Cauchy and ξ* = [x*] ∈ X̃.

The sequence ξ^{(k)} → [x*]. Indeed, take any ε > 0 and k > 3ε^{-1}; then
\[ \big\| \xi^{(k)} - \xi^* \big\|_{X̃} = \lim_{j \to \infty} \big\| x_j^{(k)} - x_j^* \big\|_X = \lim_{j \to \infty} \big\| x_j^{(k)} - x_{n_j}^{(j)} \big\|_X \le \lim_{j \to \infty} \Big( \big\| x_j^{(k)} - x_{n_k}^{(k)} \big\|_X + \big\| x_{n_k}^{(k)} - x_{n_j}^{(j)} \big\|_X \Big) \le \frac{1}{k} + \varepsilon < 2\varepsilon. \]
Therefore ξ^{(k)} → ξ*.
We have proved that any Cauchy sequence in X̃ has a limit in X̃, so X̃ is complete.

Density of i(X) in X̃. Take ξ ∈ X̃ and let x ∈ ξ. Take any ε > 0. Since x is Cauchy, there is N_ε such that ‖x_m − x_k‖_X < ε for all k, m > N_ε. Then
\[ \| \xi - i(x_k) \|_{X̃} = \lim_{m \to \infty} \| x_m - x_k \|_X \le \varepsilon. \]
Therefore i(X) is dense in X̃.

Uniqueness of X̃ up to isometry. Suppose that X̂ is a complete normed space, î : X → X̂ is an isometry and î(X) is dense in X̂. Then X̃ and X̂ are isometric. Indeed, since i and î are isometries, the linear map L : i(X) → î(X) defined by L = î ∘ i^{-1} is also an isometry. Then define the map L̃ : X̃ → X̂ by continuously extending L, i.e. if ξ_n → ξ and the ξ_n belong to the domain of L, then set L̃(ξ) = lim_{n→∞} L(ξ_n).
Exercise: show that
1. L̃(ξ) is independent of the choice of the sequence ξ_n.
2. L̃(ξ) = L(ξ) for all ξ ∈ i(X).
3. L̃ is a linear map defined for all ξ ∈ X̃.
4. L̃ is an isometry and L̃(X̃) = X̂.
This completes the uniqueness statement of the theorem.

The theorem provides an explicit construction for the completion of a normed space. Often this description is not sufficiently convenient and a more direct description is desirable.
Example: Let ℓ_f(K) be the space of all sequences which have only a finite number of non-zero elements. This space is not complete in the ℓ^p norm. The completion of ℓ_f(K) in the ℓ^p norm is isometric to ℓ^p(K).
Indeed, we have already seen that ℓ^p(K) is complete. So in order to prove the claim you only need to check that ℓ_f(K) is dense in ℓ^p(K) (Exercise).
We see that the completion of a normed space depends both on the space and on
the norm.

4.3 Weierstrass Approximation Theorem

In this section we will prove an approximation theorem which is independent from the discussions of the previous lectures. This theorem implies that polynomials are dense in the space of continuous functions on an interval.

Theorem 4.7 (Weierstrass Approximation Theorem) If f : [0,1] → R is continuous on [0,1], then the sequence of polynomials
\[ P_n(x) = \sum_{p=0}^{n} \binom{n}{p} f(p/n) \, x^p (1-x)^{n-p} \]
converges uniformly to f on [0,1].


Proof: First we derive several useful identities. The binomial theorem states that
\[ (x + y)^n = \sum_{p=0}^{n} \binom{n}{p} x^p y^{n-p}. \]
Differentiating with respect to x and multiplying by x we get
\[ n x (x + y)^{n-1} = \sum_{p=0}^{n} p \binom{n}{p} x^p y^{n-p}. \]
Differentiating the original identity twice with respect to x and multiplying by x² we get
\[ n(n-1) x^2 (x + y)^{n-2} = \sum_{p=0}^{n} p(p-1) \binom{n}{p} x^p y^{n-p}. \]
Now substitute y = 1 − x and denote
\[ r_p(x) = \binom{n}{p} x^p (1-x)^{n-p}. \]
We get
\[ \sum_{p=0}^{n} r_p(x) = 1, \qquad \sum_{p=0}^{n} p \, r_p(x) = n x, \qquad \sum_{p=0}^{n} p(p-1) r_p(x) = n(n-1) x^2. \]
Consequently,
\[ \sum_{p=0}^{n} (p - nx)^2 r_p(x) = \sum_{p=0}^{n} p^2 r_p(x) - 2nx \sum_{p=0}^{n} p \, r_p(x) + n^2 x^2 \sum_{p=0}^{n} r_p(x) = n(n-1)x^2 + nx - 2(nx)^2 + n^2 x^2 = n x (1 - x). \]
Note that as f is continuous it is also uniformly continuous: take any ε > 0; there is δ > 0 such that
\[ |x - y| < \delta \quad \Longrightarrow \quad |f(x) - f(y)| < \varepsilon. \]
Now we can estimate
\[ |f(x) - P_n(x)| = \Big| f(x) - \sum_{p=0}^{n} f(p/n) r_p(x) \Big| = \Big| \sum_{p=0}^{n} \big( f(x) - f(p/n) \big) r_p(x) \Big| \le \sum_{|x - p/n| < \delta} \big| f(x) - f(p/n) \big| r_p(x) + \sum_{|x - p/n| \ge \delta} \big| f(x) - f(p/n) \big| r_p(x). \]
The first sum is bounded by
\[ \sum_{|x - p/n| < \delta} \big| f(x) - f(p/n) \big| r_p(x) \le \varepsilon \sum_{|x - p/n| < \delta} r_p(x) < \varepsilon. \]
The second sum is bounded by
\[ \sum_{|x - p/n| \ge \delta} \big| f(x) - f(p/n) \big| r_p(x) \le 2 \|f\|_\infty \sum_{|nx - p| \ge n\delta} r_p(x) \le 2 \|f\|_\infty \sum_{p=0}^{n} \frac{(p - nx)^2}{n^2 \delta^2} r_p(x) = 2 \|f\|_\infty \frac{x(1-x)}{n \delta^2} \le \frac{\|f\|_\infty}{2 n \delta^2}, \]
which is less than ε for any n > \frac{\|f\|_\infty}{2 \varepsilon \delta^2}. Therefore for these values of n
\[ |f(x) - P_n(x)| < 2\varepsilon. \]
Consequently,
\[ \|f - P_n\|_\infty = \sup_{x \in [0,1]} |f(x) - P_n(x)| \to 0 \quad \text{as } n \to \infty. \]
Consider the space P[0, 1] of all polynomial functions restricted to the interval [0, 1]
and equip this space with the sup norm. This space is not complete (think about Taylor
series). On the other hand polynomials are continuous, and therefore P[0, 1] can be
considered as a subspace of C[0, 1] which is complete. The Weierstrass approximation
theorem states that any continuous function on [0, 1] can be uniformly approximated by
polynomials. In other words, the polynomials are dense in C[0, 1]. Then Theorem 4.6
implies that the completion of the polynomials P[0, 1] is isometric to C[0, 1] equipped
with the sup norm.
Corollary 4.8 The set of polynomials is dense in C[0, 1] equipped with the supremum
norm.
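A short computational sketch of the Bernstein polynomials P_n from the proof (an added illustration; the test function f(x) = |x − 1/2| and the function names are ours):

    import numpy as np
    from math import comb

    def bernstein(f, n, x):
        # P_n(x) = sum_p C(n,p) f(p/n) x^p (1-x)^(n-p)
        x = np.asarray(x, dtype=float)
        total = np.zeros_like(x)
        for p in range(n + 1):
            total += comb(n, p) * f(p / n) * x**p * (1.0 - x)**(n - p)
        return total

    f = lambda x: abs(x - 0.5)
    xs = np.linspace(0.0, 1.0, 1001)
    fx = np.abs(xs - 0.5)
    for n in (5, 20, 80, 320):
        err = np.max(np.abs(fx - bernstein(f, n, xs)))
        print(n, err)     # the sup-norm error decreases as n grows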

5 Lebesgue spaces

Lebesgue spaces play an important role in Functional Analysis and some of its applications. These spaces consist of integrable functions. In this section we discuss the main
definitions and the properties required later in this module. Most of the statements
are given without proofs. A more detailed study of these topics is a part of MA359
Measure Theory module.

5.1 Lebesgue measure

First we need to define a measure, which can be considered as a generalisation of the length of an interval. A measure can be defined for a class of subsets which are called measurable and form a σ-algebra.

Definition 5.1 A σ-algebra Σ is a class of subsets of a set X which has the following properties:
(a) ∅, X ∈ Σ;
(b) if S ∈ Σ then X \ S ∈ Σ;
(c) if S_n ∈ Σ for all n ∈ N then ⋃_{n=1}^∞ S_n ∈ Σ.

Definition 5.2 A function μ : Σ → R is a measure if it has the following properties:
(a) μ(S) ≥ 0 for all S ∈ Σ;
(b) μ(∅) = 0;
(c) μ is countably additive, i.e., if the sets S_n ∈ Σ are pairwise disjoint (S_n ∩ S_m = ∅ for n ≠ m), then
\[ \mu\Big( \bigcup_{n=1}^{\infty} S_n \Big) = \sum_{n=1}^{\infty} \mu(S_n). \]
The triple (X, Σ, μ) is called a measure space.


The Lebesgue measure is defined on R^n and coincides with the standard volume for those sets where the standard volume can be defined. It is the only measure we consider in this module. The Lebesgue measure is constructed in the following way.
A box in R^n is a set of the form
\[ B = \prod_{i=1}^{n} [a_i, b_i], \]
where b_i ≥ a_i. The volume vol(B) of this box is defined to be
\[ \mathrm{vol}(B) = \prod_{i=1}^{n} (b_i - a_i). \]

For any subset A of R^n, we can define its outer measure μ*(A) by:
\[ \mu^*(A) = \inf\Big\{ \sum_{B \in C} \mathrm{vol}(B) : C \text{ is a countable collection of boxes whose union covers } A \Big\}. \]
Finally, the set A is called Lebesgue measurable if for every S ⊆ R^n,
\[ \mu^*(S) = \mu^*(A \cap S) + \mu^*(S \setminus A). \]
If A is measurable, then its Lebesgue measure is defined by μ(A) = μ*(A).
Lebesgue measurable sets form a σ-algebra. The Lebesgue measure of a box coincides with the volume of the box. The class of Lebesgue measurable sets is very large, and the existence of sets which are not Lebesgue measurable is equivalent to the axiom of choice.
In particular, a Borel set is any set in a topological space that can be formed from open sets (or, equivalently, from closed sets) through the operations of countable union, countable intersection, and relative complement. The Borel sets also form a σ-algebra. We note that the σ-algebra of Lebesgue measurable sets includes all Borel sets.

Sets of measure zero

We say that a set A has measure zero if μ*(A) = 0.

Proposition 5.3 A set A ⊆ R has Lebesgue measure zero iff for any ε > 0 there is an (at most countable) collection of intervals that cover A and whose total length is less than ε:
\[ A \subseteq \bigcup_{j=1}^{\infty} [a_j, b_j] \quad \text{and} \quad \sum_{j=1}^{\infty} |b_j - a_j| < \varepsilon. \]

Corollary 5.4 The Lebesgue measure has the following property: any subset of a measure zero set is measurable and itself has measure zero.

Exercise: Show that a countable union of measure zero sets has measure zero. Hint: for A_n choose a cover with ε_n = ε/2^n.

Examples: The set Q of all rational numbers has measure zero. The Cantor set has measure zero.

Definition 5.5 A property is said to hold "almost everywhere" or "for almost every x" (abbreviated to a.e.) if the set of points at which the property does not hold has measure zero.


5.2 Lebesgue integral

Integrals of simple functions

We say that φ : X → R is a simple function if it can be represented as a finite sum
\[ \varphi(x) = \sum_{j=1}^{n} c_j \, \chi_{S_j}(x), \]
where c_j ∈ R, S_j ∈ Σ, and χ_S is the characteristic function of a set S ⊆ X:
\[ \chi_S(x) = \begin{cases} 1, & x \in S, \\ 0, & x \notin S. \end{cases} \]
We define the integral of a simple function by
\[ \int \varphi := \sum_{j=1}^{n} c_j \, \mu(S_j). \]
We note that if all the S_j are intervals, this sum equals the Riemann integral which you studied in Year 1, i.e., ∫φ is the algebraic area under the graph of the step function φ (the area is counted negative on those intervals where φ(x) < 0).

Lebesgue integrable functions

Definition 5.6 A function f : X → R is measurable if the preimage of any interval is measurable.

We note that sums, products and pointwise limits of measurable functions are measurable. If f is measurable, then |f| and f_± are also measurable, where f_+(x) = max{ f(x), 0 } and f_−(x) = −min{ f(x), 0 }. Note that both f_+, f_− ≥ 0 and f = f_+ − f_−.

Definition 5.7 If a function f : R → R is measurable and f ≥ 0 then
\[ \int f = \sup\Big\{ \int \varphi : \varphi \text{ is a simple function and } 0 \le \varphi(x) \le f(x) \text{ for all } x \Big\}. \]
If f is measurable and ∫|f| < ∞ then
\[ \int f = \int f_+ - \int f_- \]
and we say that f is integrable on X.


Properties of Lebesgue integrals

First we state the main elementary properties of Lebesgue integration.

Theorem 5.8 If f, f₁, f₂ are integrable and λ ∈ R, then
1. λf₁ + f₂ is also integrable and ∫(λf₁ + f₂) = λ∫f₁ + ∫f₂.
2. |f| is also integrable and |∫f| ≤ ∫|f|.
3. If additionally f(x) ≥ 0 a.e., then ∫f ≥ 0.

We note that |f| being integrable (measurable) does not imply that f is integrable (measurable).⁶

Exercise: If f is integrable then f₊ and f₋ are integrable. (Hint: f₊ = (f + |f|)/2 and f₋ = (|f| − f)/2.)

⁶ Indeed, we can sketch an example. It is based on partitioning the interval [0,1] into two very nasty subsets. So let f(x) = 0 outside [0,1]; for x ∈ [0,1] let f(x) = 1 if x belongs to the Vitali set and f(x) = −1 otherwise. Then |f| = χ_[0,1] and is integrable, but f is not integrable as the Vitali set is not measurable.

Integrals and limits

You should be careful when swapping lim and ∫.

Examples:
\[ \lim_{n \to \infty} \int n \, \chi_{(0, 1/n)} = 1 \ne \int \lim_{n \to \infty} n \, \chi_{(0, 1/n)} = 0, \]
\[ \lim_{n \to \infty} \int \frac{1}{n} \, \chi_{(0, n)} = 1 \ne \int \lim_{n \to \infty} \frac{1}{n} \, \chi_{(0, n)} = 0. \]
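The first example can be checked numerically (an added sketch, using a Riemann sum on a uniform grid as a stand-in for the integral):

    import numpy as np

    x = np.linspace(0.0, 1.0, 2_000_001)
    dx = x[1] - x[0]
    mid = len(x) // 2                       # grid point closest to x = 0.5
    for n in (10, 100, 1000):
        fn = np.where((x > 0.0) & (x < 1.0 / n), float(n), 0.0)
        print(n, np.sum(fn) * dx, fn[mid])  # integral stays ~ 1, value at x = 0.5 is 0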

The following two theorems establish conditions which allow swapping the limit and integration. They play a fundamental role in the theory of Lebesgue integrals.

Theorem 5.9 (Monotone Convergence Theorem) Suppose that f_n are integrable functions, f_n(x) ≤ f_{n+1}(x) a.e., and ∫f_n < K for some constant K independent of n. Then there is an integrable function g such that f_n(x) → g(x) a.e. and
\[ \int g = \lim_{n \to \infty} \int f_n. \]

Corollary 5.10 If f is integrable and ∫|f| = 0, then f(x) = 0 a.e.

Proof: Let f_n(x) = n|f(x)|. This sequence satisfies the MCT (integrable, increasing and ∫f_n = 0 < 1), consequently there is an integrable g(x) such that f_n(x) → g(x) for a.e. x. Since the sequence is increasing, f_n(x) ≤ g(x) a.e., which implies |f(x)| ≤ g(x)/n for all n and a.e. x. Consequently f(x) = 0 a.e.

Theorem 5.11 (Dominated Convergence Theorem) Suppose that f_n are integrable functions and f_n(x) → f(x) for a.e. x. If there is an integrable function g such that |f_n(x)| ≤ g(x) for every n and a.e. x, then f is integrable and
\[ \int f = \lim_{n \to \infty} \int f_n. \]

It is also possible to integrate complex-valued functions: f : R → C is integrable if its real and imaginary parts are both integrable, and
\[ \int f := \int \mathrm{Re}\, f + i \int \mathrm{Im}\, f. \]
The MCT has no meaning for complex-valued functions. The DCT is valid without modifications (and indeed follows easily from the real version).

5.3 Lebesgue space L¹(R)

Definition 5.12 The Lebesgue space L¹(R) is the space of Lebesgue integrable functions modulo the following equivalence relation: f ∼ g iff f(x) = g(x) a.e. The Lebesgue space is equipped with the L¹ norm:
\[ \|f\|_{L^1} = \int |f|. \]
Note that the value of the integral on the right-hand side is independent of the choice of a representative of the equivalence class.
It is convenient to think about elements of L¹(R) as functions R → R, interpreting the equality f = g as f(x) = g(x) a.e.
From the viewpoint of Functional Analysis, the equivalence relation is introduced to ensure non-degeneracy of the L¹ norm. Indeed, suppose f is an integrable function. Then ‖f‖_{L¹} = 0 is equivalent to ∫|f| = 0, which is equivalent to f(x) = 0 a.e.

Theorem 5.13 L¹(R) is a Banach space.

The properties of the Lebesgue integral imply that L¹(R) is a normed space. The completeness of L¹(R) follows from the combination of the following two statements: the first lemma gives a criterion for completeness of a normed space, and the second one implies that the assumptions of the first lemma are satisfied for X = L¹(R).


Lemma 5.14 If (X, ‖·‖) is a normed space in which
\[ \sum_{j=1}^{\infty} \|y_j\| < \infty \]
implies the series ∑_{j=1}^∞ y_j converges, then X is complete.

Proof: Let (x_j) ⊂ X be a Cauchy sequence. Then there is a monotone increasing sequence n_k ∈ N such that for every k ∈ N
\[ \|x_j - x_l\| < 2^{-k} \quad \text{for all } j, l \ge n_k. \]
Let y₁ = x_{n₁} and y_k = x_{n_k} − x_{n_{k−1}} for k > 1. Since ‖y_k‖ ≤ 2^{1−k} for k ≥ 2,
\[ \sum_{k=1}^{\infty} \|y_k\|_X \le \|x_{n_1}\| + \sum_{k=2}^{\infty} 2^{1-k} = \|x_{n_1}\| + 1 < \infty. \]
By the assumption of the lemma, the series converges and therefore there is x* ∈ X such that
\[ x^* = \sum_{j=1}^{\infty} y_j. \]
On the other hand
\[ \sum_{j=1}^{k} y_j = x_{n_1} + \sum_{j=2}^{k} (x_{n_j} - x_{n_{j-1}}) = x_{n_k}, \]
and therefore x_{n_k} → x*. Consequently x_k → x* and the space X is complete.

Lemma 5.15 If (f_k)_{k=1}^∞ is a sequence of integrable functions such that ∑_{k=1}^∞ ‖f_k‖_{L¹} < ∞, then
1. ∑_{k=1}^∞ |f_k(x)| converges a.e. to an integrable function,
2. ∑_{k=1}^∞ f_k(x) converges a.e. to an integrable function.

Proof: The first statement follows from the MCT applied to the sequence g_n = ∑_{k=1}^n |f_k| and K = ∑_{k=1}^∞ ‖f_k‖_{L¹}. So there is an integrable function g(x) such that
\[ g(x) = \sum_{k=1}^{\infty} |f_k(x)| \]
for almost all x. For these values of x the partial sums h_n(x) = ∑_{k=1}^n f_k(x) obviously converge, so let
\[ h(x) = \sum_{k=1}^{\infty} f_k(x). \]
Moreover,
\[ |h_n(x)| = \Big| \sum_{k=1}^{n} f_k(x) \Big| \le \sum_{k=1}^{n} |f_k(x)| \le \sum_{k=1}^{\infty} |f_k(x)| = g(x). \]
Therefore the partial sums h_n satisfy the DCT and the second statement follows.

Exercise: Check that Lemma 5.15 implies that L1 (R) satisfies the assumptions of
Lemma 5.14.
In addition to L1 (R) we will sometimes consider the Lebesgue spaces L1 (a, b)
where (a, b) is an interval.
Proposition 5.16 The space C[0, 1] is dense in L1 (0, 1).
About the proof: The proof uses the fact that simple functions (= piecewise constant functions) are dense in L¹(0,1). Then one checks that every step function can be approximated by a piecewise linear continuous function.

Consequently L1 (a, b) is isometric to the completion of C[a, b] in the L1 norm.

5.4 L^p spaces

Another important class of Lebesgue spaces consists of the L^p spaces for 1 ≤ p < ∞, among which the L² space is the most remarkable (it is also a Hilbert space; see the next chapter for details). In this section we will sketch the main definitions of those spaces, noting that the full discussion requires more knowledge of Measure Theory than we can fit into this module.
The Lebesgue space L^p(I) is the space of all measurable functions f such that
\[ \|f\|_{L^p} = \Big( \int_I |f|^p \Big)^{1/p} < \infty, \]
modulo the equivalence relation: f = g iff f(x) = g(x) a.e.
We note that in this case L^p(a, b) ⊆ L¹(a, b).
We note that although L¹(R) ∩ L²(R) ≠ ∅ (e.g. both spaces contain all simple functions), neither of these spaces is a subset of the other. For example,
\[ f(x) = \frac{1}{1 + |x|} \]
belongs to L²(R) but not to L¹(R). Indeed, ∫ f² < ∞ but ∫ f = ∞, so f is not integrable on R. On the other hand
\[ g(x) = \frac{\chi_{(0,1)}(x)}{|x|^{1/2}} \]
belongs to L¹(R) but not to L²(R).
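For the record, the integrals behind both claims can be evaluated directly (a short worked check added here):
\[ \int_{\mathbb{R}} f^2 = 2 \int_0^\infty \frac{dx}{(1+x)^2} = 2 < \infty, \qquad \int_0^R \frac{dx}{1+x} = \log(1+R) \to \infty \ \text{as } R \to \infty, \]
\[ \int_{\mathbb{R}} g = \int_0^1 x^{-1/2} \, dx = 2 < \infty, \qquad \int_{\mathbb{R}} g^2 = \int_0^1 \frac{dx}{x} = \infty. \]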

Theorem 5.17 L^p(R) and L^p(I) are Banach spaces for any p ≥ 1 and any interval I.

We will not give a complete proof but sketch the main ideas instead.
Let 1/p + 1/q = 1 and f ∈ L^p(R), g ∈ L^q(R). Then the Hölder inequality states⁷ that
\[ \int |f g| \le \|f\|_p \, \|g\|_q. \]
Note that the characteristic function χ_I ∈ L^q(R) for any interval I and any q ≥ 1; moreover ‖χ_I‖_{L^q} = |I|^{1/q}, where |I| = b − a is the length of I. The Hölder inequality with g = χ_I implies that
\[ \int_I |f| = \int \chi_I \, |f| \le |I|^{1/q} \|f\|_p. \]
The left-hand side of this inequality is the norm of f in L¹(I):
\[ \|f\|_{L^1(I)} \le |I|^{1/q} \|f\|_{L^p(I)}. \]
Consequently any Cauchy sequence in L^p(I) is automatically a Cauchy sequence in L¹(I). Since L¹ is complete, the Cauchy sequence converges to a limit in L¹(I). In order to prove completeness of L^p it is sufficient to show that the p-th power of this limit is integrable. This can be done on the basis of the Dominated Convergence Theorem.
Exercise: The next two exercises show that L²(R) is complete (compare with the proof of completeness for L¹(R)).
1. Let (f_k)_{k=1}^∞ be a sequence in L²(R) such that
\[ \sum_{k=1}^{\infty} \|f_k\|_{L^2} < \infty. \]
Applying the MCT to the sequence
\[ g_n = \Big( \sum_{k=1}^{n} |f_k| \Big)^2, \]
show that ∑_k f_k converges to a function f with integrable f².
2. Now use the DCT applied to h_n = | f − ∑_{k=1}^n f_k |² to deduce that ∑_k f_k converges in the L² norm to a function in L².

⁷ A proof of this inequality is similar to the proof of Lemma 2.3, provided we take for granted that a product of two measurable functions is measurable. We will not discuss this proof further.


6 Hilbert spaces

6.1 Inner product spaces

You have already seen the inner product on R^n.

Definition 6.1 An inner product on a vector space V is a map (·,·) : V × V → K such that for all x, y, z ∈ V and for all λ ∈ K:
(i) (x, x) ≥ 0, and (x, x) = 0 iff x = 0;
(ii) (x + y, z) = (x, z) + (y, z);
(iii) (λx, y) = λ(x, y);
(iv) (x, y) = \overline{(y, x)}.
A vector space equipped with an inner product is called an inner product space.

In a real vector space the complex conjugate in (iv) is not necessary.
If K = C, then (iv) with y = x implies that (x, x) is real and therefore the requirement (x, x) ≥ 0 makes sense.
Properties (iii) and (iv) imply that (x, λy) = \bar\lambda (x, y).

1. Example: R^n is an inner product space with
\[ (x, y) = \sum_{k=1}^{n} x_k y_k. \]
2. Example: C^n is an inner product space with
\[ (x, y) = \sum_{k=1}^{n} x_k \overline{y_k}. \]
3. Example: ℓ²(K) is an inner product space with
\[ (x, y) = \sum_{k=1}^{\infty} x_k \overline{y_k}. \]
Note that the sum converges because ∑_k |x_k \overline{y_k}| ≤ ½ ∑_k ( |x_k|² + |y_k|² ).
4. Example: L²(a, b) is an inner product space with
\[ (f, g) = \int_a^b f(x) \overline{g(x)} \, dx. \]


6.2 Natural norms

Every inner product space is a normed space as well.

Proposition 6.2 If V is an inner product space, then
\[ \|v\| = \sqrt{(v, v)} \]
defines a norm on V.

Definition 6.3 We say that ‖x‖ = √(x, x) is the natural norm induced by the inner product.

The proof of the proposition uses the following inequality.

Lemma 6.4 (Cauchy–Schwarz inequality) If V is an inner product space and ‖v‖ = √(v, v) for all v ∈ V, then
\[ |(x, y)| \le \|x\| \, \|y\| \quad \text{for all } x, y \in V. \]
Proof of the lemma: The inequality is obvious if y = 0. So suppose that y ≠ 0. Then for any λ ∈ K:
\[ 0 \le (x - \lambda y, x - \lambda y) = (x, x) - \lambda (y, x) - \bar\lambda (x, y) + |\lambda|^2 (y, y). \]
Then substitute λ = (x, y)/‖y‖²:
\[ 0 \le (x, x) - 2 \frac{|(x, y)|^2}{\|y\|^2} + \frac{|(x, y)|^2}{\|y\|^2} = \|x\|^2 - \frac{|(x, y)|^2}{\|y\|^2}, \]
which implies the desired inequality.

Proof of Proposition 6.2: We note that positive definiteness and homogeneity of ‖·‖ easily follow from (i), (iii) and (iv) in the definition of the inner product. In order to establish the triangle inequality we use the Cauchy–Schwarz inequality. Let x, y ∈ V. Then
\[ \|x + y\|^2 = (x + y, x + y) = (x, x) + (x, y) + (y, x) + (y, y) \le \|x\|^2 + 2 \|x\| \, \|y\| + \|y\|^2 = \big( \|x\| + \|y\| \big)^2, \]
and the triangle inequality follows by taking the square root. Therefore ‖·‖ is a norm.

We have already proved the Cauchy–Schwarz inequality for ℓ²(K) using a different strategy (see Lemma 2.4).
The Cauchy–Schwarz inequality in L²(a, b) takes the form
\[ \Big| \int_a^b f(x) \overline{g(x)} \, dx \Big| \le \Big( \int_a^b |f(x)|^2 \, dx \Big)^{1/2} \Big( \int_a^b |g(x)|^2 \, dx \Big)^{1/2}. \]
In particular, it states that f, g ∈ L²(a, b) implies fg ∈ L¹(a, b).


Lemma 6.5 If V is an inner product space equipped with the natural norm, then x_n → x and y_n → y imply that
\[ (x_n, y_n) \to (x, y). \]
Proof: Since any convergent sequence is bounded, the inequality
\[ |(x_n, y_n) - (x, y)| = |(x_n - x, y_n) + (x, y_n - y)| \le |(x_n - x, y_n)| + |(x, y_n - y)| \le \|x_n - x\| \, \|y_n\| + \|x\| \, \|y_n - y\| \]
implies that (x_n, y_n) → (x, y).

The lemma implies that we can swap inner products and limits.

6.3

Parallelogram law and polarisation identity

Natural norms have some special properties.


Lemma 6.6 (Parallelogram law) If V is an inner product space with the natural norm
k k, then

kx + yk2 + kx yk2 = 2 kxk2 + kyk2
for all x, y V .
Proof: The linearity of the inner product implies that for any x, y V
kx + yk2 + kx yk2 = (x + y, x + y) + (x y, x y)
= (x, x) + (x, y) + (y, x) + (y, y)
+(x, x) (x, y) (y, x) + (y, y)

= 2 kxk2 + kyk2

Example (some norms are not induced by an inner product): There is no inner product which could induce the following norms on C[0, 1]:
    ‖f‖_∞ = \sup_{t∈[0,1]} |f(t)|    or    ‖f‖_{L¹} = \int_0^1 |f(t)| \, dt .
Indeed, these norms do not satisfy the parallelogram law: take f(x) = x and g(x) = 1 − x; obviously f, g ∈ C[0, 1] and
    ‖f‖_∞ = ‖g‖_∞ = ‖f − g‖_∞ = ‖f + g‖_∞ = 1 ,
and substituting these numbers into the parallelogram law we see 2 ≠ 4.
Exercise: Is the parallelogram law for the L1 norm satisfied for these f , g?
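A quick numerical check of both norms (a hedged illustration assuming NumPy; grid sums and maxima only approximate the true integrals and suprema, and the exercise still asks for an exact computation) evaluates the two sides of the parallelogram law for f(x) = x and g(x) = 1 − x:

import numpy as np

t = np.linspace(0.0, 1.0, 100001)
f, g = t, 1.0 - t

def sup_norm(h):
    return np.max(np.abs(h))

def l1_norm(h):
    # Riemann-sum approximation of the integral of |h| over [0, 1]
    return np.sum(np.abs(h)) * (t[1] - t[0])

for name, norm in [("sup", sup_norm), ("L1", l1_norm)]:
    lhs = norm(f + g) ** 2 + norm(f - g) ** 2
    rhs = 2 * (norm(f) ** 2 + norm(g) ** 2)
    print(f"{name} norm: ||f+g||^2 + ||f-g||^2 = {lhs:.4f},  2(||f||^2 + ||g||^2) = {rhs:.4f}")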


Lemma 6.7 (Polarisation identity) Let V be an inner product space with the natural norm ‖·‖. Then
1. if V is real,
    4(x, y) = ‖x + y‖² − ‖x − y‖² ;
2. if V is complex,
    4(x, y) = ‖x + y‖² − ‖x − y‖² + i‖x + iy‖² − i‖x − iy‖² .
Proof: Plug the definition of the natural norm into the right hand side and use linearity of the inner product.

Lemma 6.7 shows that the inner product can be recovered from its natural norm. Although the right hand sides of the polarisation identities are meaningful for any norm, we should not rush to the conclusion that any normed space is automatically an inner product space. Indeed, the example above shows that for some norms these formulae cannot define an inner product. Nevertheless, if the norm satisfies the parallelogram law, we do get an inner product:
Proposition 6.8 Let V be a real normed space with the norm ‖·‖ satisfying the parallelogram law. Then
    (x, y) = \frac{‖x + y‖² − ‖x − y‖²}{4} = \frac{‖x + y‖² − ‖x‖² − ‖y‖²}{2}
defines an inner product on V.⁸


Proof: Let us check that (x, y) satisfies the axioms of an inner product. Positivity and symmetry are straightforward (Exercise). The linearity:
    4(x, y) + 4(z, y) = ‖x + y‖² − ‖x − y‖² + ‖z + y‖² − ‖z − y‖²
       = ½( ‖x + 2y + z‖² + ‖x − z‖² ) − ½( ‖x − 2y + z‖² + ‖x − z‖² )
       = ½ ‖x + 2y + z‖² − ½ ‖x − 2y + z‖²
       = ½( 2‖x + y + z‖² + 2‖y‖² − ‖x + z‖² ) − ½( 2‖x − y + z‖² + 2‖y‖² − ‖x + z‖² )
       = ‖x + y + z‖² − ‖x − y + z‖² = 4(x + z, y) ,
where the parallelogram law has been used twice. We have proved that
    (x, y) + (z, y) = (x + z, y) .
Applying this identity several times and setting z = x/m we obtain
    n(x/m, y) = (nx/m, y)    and    m(x/m, y) = (x, y)
for any n ∈ ℤ and m ∈ ℕ. Consequently, for any rational λ = n/m,
    (λx, y) = λ(x, y) .
We note that the right hand side of the definition involves the norms only, which commute with limits. Any real number is a limit of rational numbers and therefore the homogeneity holds for all λ ∈ ℝ.

⁸ Can you find a simpler proof?
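As a small sanity check of Proposition 6.8 (an informal numerical illustration assuming NumPy, not part of the proof), the polarisation formula applied to the Euclidean norm on ℝⁿ reproduces the usual dot product:

import numpy as np

def polarisation(x, y):
    # (x, y) = (||x + y||^2 - ||x - y||^2) / 4 for the Euclidean norm
    return (np.linalg.norm(x + y) ** 2 - np.linalg.norm(x - y) ** 2) / 4.0

rng = np.random.default_rng(1)
x, y = rng.standard_normal(6), rng.standard_normal(6)
print(polarisation(x, y), np.dot(x, y))   # the two numbers agree up to rounding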


6.4 Hilbert spaces: Definition and examples

Definition 6.9 A Hilbert space is a complete inner product space (equipped with the
natural norm).
Of course, any Hilbert space is a Banach space.
1. Example: ℝⁿ is a Hilbert space with
    (x, y) = \sum_{k=1}^{n} x_k y_k ,    ‖x‖ = \left( \sum_{k=1}^{n} x_k² \right)^{1/2} .

2. Example: ℂⁿ is a Hilbert space with
    (x, y) = \sum_{k=1}^{n} x_k \overline{y_k} ,    ‖x‖ = \left( \sum_{k=1}^{n} |x_k|² \right)^{1/2} .

3. Example: ℓ²(K) is a Hilbert space with
    (x, y) = \sum_{k=1}^{∞} x_k \overline{y_k} ,    ‖x‖ = \left( \sum_{k=1}^{∞} |x_k|² \right)^{1/2} .

4. Example: L²(a, b) is a Hilbert space with
    ( f , g) = \int_a^b f(x) \overline{g(x)} \, dx ,    ‖f‖ = \left( \int_a^b |f(x)|² \, dx \right)^{1/2} .

7 Orthonormal bases in Hilbert spaces

The goal of this section is to discuss properties of orthonormal bases in a Hilbert space H. Unlike Hamel bases, the orthonormal ones involve a countable number of elements: a vector x is represented in the form of an infinite sum
    x = \sum_{k=1}^{∞} α_k e_k
for some α_k ∈ K.
We will mainly consider complex spaces with K = C. The real case K = R is not
very different. We will use (, ) to denote an inner product on H, and k k will stand
for the natural norm induced by the inner product.

7.1 Orthonormal sets

Definition 7.1 Two vectors x, y ∈ H are called orthogonal if (x, y) = 0. Then we write x ⊥ y.

Theorem 7.2 (Pythagoras theorem) If x ⊥ y then ‖x + y‖² = ‖x‖² + ‖y‖².
Proof: Since (x, y) = 0,
    ‖x + y‖² = (x + y, x + y) = (x, x) + (x, y) + (y, x) + (y, y) = ‖x‖² + ‖y‖² .

Definition 7.3 A set E is orthonormal if ‖e‖ = 1 for all e ∈ E and (e₁, e₂) = 0 for all e₁, e₂ ∈ E such that e₁ ≠ e₂.
Note that this definition does not require the set E to be countable.
Exercise: Any orthonormal set is linearly independent.
Indeed, suppose \sum_{k=1}^{n} α_k e_k = 0 with e_k ∈ E and α_k ∈ K. Multiplying this equality by e_j we get
    0 = \left( \sum_{k=1}^{n} α_k e_k , e_j \right) = \sum_{k=1}^{n} α_k (e_k, e_j) = α_j .
Since α_j = 0 for all j, we conclude that the set E is linearly independent.


Definition 7.4 (Kronecker delta) The Kronecker delta is the function defined by
    δ_{jk} = 1 if j = k,    δ_{jk} = 0 if j ≠ k .


Example: For every j ∈ ℕ, let e_j = (δ_{jk})_{k=1}^{∞} (an infinite sequence of zeros with 1 at the jth position). The set E = { e_j : j ∈ ℕ } is orthonormal in ℓ². Indeed, from the definition of the scalar product in ℓ² we see that (e_j, e_k) = δ_{jk} for all j, k ∈ ℕ.

Example: The set⁹
    E = \left\{ f_k = \frac{e^{ikx}}{\sqrt{2π}} : k ∈ ℤ \right\}
is an orthonormal set in L²(−π, π). Indeed, since |f_k(x)|² = \frac{1}{2π} for all x:
    ‖f_k‖²_{L²} = \int_{−π}^{π} |f_k(x)|² \, dx = 1 ,
and if j ≠ k
    (f_k, f_j) = \int_{−π}^{π} f_k(x) \overline{f_j(x)} \, dx = \frac{1}{2π} \int_{−π}^{π} e^{i(k−j)x} \, dx = \left. \frac{e^{i(k−j)x}}{2π i(k−j)} \right|_{x=−π}^{x=π} = 0 .

⁹ Remember that for any x ∈ ℝ and any k ∈ ℤ: e^{ikx} = cos kx + i sin kx. Then |e^{ikx}| = 1 and e^{±ikπ} = cos kπ ± i sin kπ = (−1)^k.
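The orthonormality can also be observed numerically (a hedged illustration assuming NumPy; the L² inner product is approximated by the trapezoidal rule on a grid of (−π, π)):

import numpy as np

x = np.linspace(-np.pi, np.pi, 20001)
dx = x[1] - x[0]

def f(k):
    return np.exp(1j * k * x) / np.sqrt(2 * np.pi)

def l2_inner(u, v):
    # trapezoidal approximation of (u, v) = \int u * conj(v) dx on (-pi, pi)
    w = u * np.conj(v)
    return (np.sum(w) - 0.5 * (w[0] + w[-1])) * dx

for j in range(-2, 3):
    row = [l2_inner(f(j), f(k)) for k in range(-2, 3)]
    print([f"{val.real:+.3f}" for val in row])   # approximately the identity matrix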

Lemma 7.5 If {e₁, . . . , e_n} is an orthonormal set in an inner product space V, then for any α_j ∈ K
    \left\| \sum_{j=1}^{n} α_j e_j \right\|² = \sum_{j=1}^{n} |α_j|² .
Proof: The following computation is straightforward:
    \left\| \sum_{j=1}^{n} α_j e_j \right\|² = \left( \sum_{j=1}^{n} α_j e_j , \sum_{l=1}^{n} α_l e_l \right) = \sum_{j=1}^{n} \sum_{l=1}^{n} α_j \overline{α_l} (e_j, e_l) = \sum_{j=1}^{n} \sum_{l=1}^{n} α_j \overline{α_l} δ_{jl} = \sum_{j=1}^{n} |α_j|² .

7.2 Gram-Schmidt orthonormalisation

Lemma 7.6 (Gram-Schmidt orthonormalisation) Let V be an inner product space and (v_k) be a sequence (finite or infinite) of linearly independent vectors in V. Then there is an orthonormal sequence (e_k) such that
    Span{ v₁, . . . , v_k } = Span{ e₁, . . . , e_k }
for all k.

Proof: Let e₁ = v₁/‖v₁‖. Then
    Span{ v₁ } = Span{ e₁ }
and the statement is true for k = 1, as the set E₁ = { e₁ } is obviously orthonormal.
Then we continue inductively.¹⁰ Suppose that for some k ≥ 2 we have found an orthonormal set E_{k−1} = { e₁, . . . , e_{k−1} } such that its span coincides with the span of { v₁, . . . , v_{k−1} }. Then set
    ẽ_k = v_k − \sum_{j=1}^{k−1} (v_k, e_j) e_j .
Since \sum_{j=1}^{k−1} (v_k, e_j) e_j ∈ Span(E_{k−1}) = Span{ v₁, . . . , v_{k−1} } and v₁, . . . , v_k are linearly independent, we conclude that ẽ_k ≠ 0. For every l < k
    (ẽ_k, e_l) = (v_k, e_l) − \sum_{j=1}^{k−1} (v_k, e_j)(e_j, e_l) = (v_k, e_l) − (v_k, e_l) = 0 ,
which implies that ẽ_k ⊥ e_l. Finally let e_k = ẽ_k/‖ẽ_k‖. Then { e₁, . . . , e_k } is an orthonormal set such that
    Span{ e₁, . . . , e_k } = Span{ v₁, . . . , v_k } .
If the original sequence is finite, the orthonormalisation procedure stops after a finite number of steps. Otherwise, we obtain an infinite sequence (e_k).

¹⁰ For example, let k = 2. We define ẽ₂ = v₂ − (v₂, e₁)e₁. Then (ẽ₂, e₁) = (v₂, e₁) − (v₂, e₁)(e₁, e₁) = 0. Since v₁, v₂ are linearly independent, ẽ₂ ≠ 0. So we can define e₂ = ẽ₂/‖ẽ₂‖.
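The proof is constructive, and a direct transcription into code is straightforward. The sketch below (a hedged illustration assuming NumPy, using the standard inner product on ℝⁿ as a stand-in for a general inner product space) follows the steps of the proof literally; in floating-point practice a more stable variant or a QR factorisation would normally be preferred.

import numpy as np

def inner(x, y):
    return np.sum(x * np.conj(y))

def gram_schmidt(vectors):
    """Orthonormalise a list of linearly independent vectors, as in Lemma 7.6."""
    basis = []
    for v in vectors:
        # subtract the components along the already constructed e_1, ..., e_{k-1}
        w = v - sum(inner(v, e) * e for e in basis)
        basis.append(w / np.sqrt(inner(w, w).real))
    return basis

rng = np.random.default_rng(2)
vs = [rng.standard_normal(4) for _ in range(3)]
es = gram_schmidt(vs)
print(np.round([[inner(ei, ej).real for ej in es] for ei in es], 10))  # identity matrix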

Corollary 7.7 Any infinite-dimensional inner product space contains a countable orthonormal sequence.
Corollary 7.8 Any finite-dimensional inner product space has an orthonormal basis.
Proposition 7.9 Any finite dimensional inner product space is isometric to Cn (or Rn
if the space is real) equipped with the standard inner product.
Proof: Let n = dim V and let e_j, j = 1, . . . , n, be an orthonormal basis in V. Note that (e_k, e_j) = δ_{kj}. Any two vectors x, y ∈ V can be written as
    x = \sum_{k=1}^{n} x_k e_k    and    y = \sum_{j=1}^{n} y_j e_j .
Then
    (x, y) = \left( \sum_{k=1}^{n} x_k e_k , \sum_{j=1}^{n} y_j e_j \right) = \sum_{k=1}^{n} \sum_{j=1}^{n} x_k \overline{y_j} (e_k, e_j) = \sum_{k=1}^{n} x_k \overline{y_k} .
Therefore the map x ↦ (x₁, . . . , x_n) is an isometry.
We see that an arbitrary inner product, when written in orthonormal coordinates, takes the form of the canonical inner product on ℂⁿ (or ℝⁿ if the original space is real).

7.3 Bessel's inequality

Lemma 7.10 (Bessel's inequality) If V is an inner product space and E = (e_k)_{k=1}^{∞} is an orthonormal sequence, then for every x ∈ V
    \sum_{k=1}^{∞} |(x, e_k)|² ≤ ‖x‖² .

Proof: We note that for any n ∈ ℕ:
    \left\| x − \sum_{k=1}^{n} (x, e_k) e_k \right\|² = \left( x − \sum_{k=1}^{n} (x, e_k) e_k , x − \sum_{k=1}^{n} (x, e_k) e_k \right)
       = ‖x‖² − 2 \sum_{k=1}^{n} |(x, e_k)|² + \sum_{k=1}^{n} |(x, e_k)|²
       = ‖x‖² − \sum_{k=1}^{n} |(x, e_k)|² .
Since the left hand side is not negative,
    \sum_{k=1}^{n} |(x, e_k)|² ≤ ‖x‖²
and the lemma follows by taking the limit as n → ∞.
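As a numerical illustration (hedged: quadrature on a finite grid, assuming NumPy), one can watch the partial sums in Bessel's inequality for f(x) = x in L²(−π, π) against the orthonormal set { e^{ikx}/√(2π) } from Section 7.1:

import numpy as np

x = np.linspace(-np.pi, np.pi, 40001)
dx = x[1] - x[0]
f = x.astype(complex)

def l2_inner(u, v):
    w = u * np.conj(v)
    return (np.sum(w) - 0.5 * (w[0] + w[-1])) * dx

norm_sq = l2_inner(f, f).real                 # ||f||^2 = 2*pi^3/3 for f(x) = x
partial = 0.0
for k in [s * k for k in range(1, 21) for s in (1, -1)]:
    e_k = np.exp(1j * k * x) / np.sqrt(2 * np.pi)
    partial += abs(l2_inner(f, e_k)) ** 2     # the running sum never exceeds ||f||^2
print(partial, "<=", norm_sq)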

Corollary 7.11 If E is an orthonormal set in an inner product space V, then for any x ∈ V the set
    E_x = { e ∈ E : (x, e) ≠ 0 }
is at most countable.
Proof: For any m ∈ ℕ the set E_m = { e ∈ E : |(x, e)| > 1/m } has a finite number of elements. Otherwise there would be an infinite sequence (e_k)_{k=1}^{∞} with e_k ∈ E_m, and then \sum_{k=1}^{∞} |(x, e_k)|² = +∞, which contradicts Bessel's inequality. Therefore E_x = \bigcup_{m=1}^{∞} E_m is a countable union of finite sets and hence at most countable.


7.4 Convergence

In this section we will discuss convergence of series which involve elements from an
orthonormal set.
Lemma 7.12 Let H be a Hilbert space and E = (e_k)_{k=1}^{∞} an orthonormal sequence. The series \sum_{k=1}^{∞} α_k e_k converges iff \sum_{k=1}^{∞} |α_k|² < +∞. Then
    \left\| \sum_{k=1}^{∞} α_k e_k \right\|² = \sum_{k=1}^{∞} |α_k|² .    (7.1)

Proof: Let x_n = \sum_{k=1}^{n} α_k e_k and β_n = \sum_{k=1}^{n} |α_k|². Lemma 7.5 implies that ‖x_n‖² = β_n and that for any n > m
    ‖x_n − x_m‖² = \left\| \sum_{k=m+1}^{n} α_k e_k \right\|² = \sum_{k=m+1}^{n} |α_k|² = β_n − β_m .
Consequently, (x_n) is a Cauchy sequence in H iff (β_n) is Cauchy in ℝ. Since both spaces are complete, the sequences converge or diverge simultaneously.
If they converge, we take the limit as n → ∞ in the equality ‖x_n‖² = β_n to get (7.1) (the limit commutes with ‖·‖²).

Definition 7.13 A series \sum_{n=1}^{∞} x_n in a Banach space X is unconditionally convergent if for every permutation σ : ℕ → ℕ the series \sum_{n=1}^{∞} x_{σ(n)} converges.
In ℝⁿ a series is unconditionally convergent if and only if it is absolutely convergent. Every absolutely convergent series is unconditionally convergent, but the converse implication does not hold in general.
Example: Let (e_k) be an orthonormal sequence. Then
    \sum_{k=1}^{∞} \frac{1}{k} e_k
converges unconditionally (by Lemma 7.12, since \sum_k 1/k² < ∞) but not absolutely (since \sum_k ‖e_k/k‖ = \sum_k 1/k = ∞).

The sum of an unconditionally convergent series is independent of the order of summation.¹¹
Lemma 7.12 and Bessel's inequality imply:

Corollary 7.14 If H is a Hilbert space and E = (e_k)_{k=1}^{∞} is an orthonormal sequence, then for every x ∈ H the series
    \sum_{k=1}^{∞} (x, e_k) e_k
converges unconditionally.
Lemma 7.15 Let H be a Hilbert space, E = (e_k)_{k=1}^{∞} an orthonormal sequence and x ∈ H. If x = \sum_{k=1}^{∞} α_k e_k, then
    α_k = (x, e_k)
for all k ∈ ℕ.
Proof: Exercise.
¹¹ Prove it.

7.5 Orthonormal basis in a Hilbert space

Definition 7.16 A set E is a basis for H if every x ∈ H can be written uniquely in the form
    x = \sum_{k=1}^{∞} α_k e_k
for some α_k ∈ K and e_k ∈ E. If additionally E is an orthonormal set, then E is an orthonormal basis.

If E is a basis, then it is a linearly independent set. Indeed, if \sum_{k=1}^{n} α_k e_k = 0 then α_k = 0 due to the uniqueness.
Note that in this definition the uniqueness is a delicate point. Indeed, the sum \sum_{k=1}^{∞} α_k e_k is defined as a limit of partial sums x_n = \sum_{k=1}^{n} α_k e_k. A permutation of the e_k changes the partial sums and may lead to a different limit. In general, we cannot even guarantee that after a permutation the series remains convergent.
If E is countable, we can assume that the sum involves all elements of the basis (some α_k can be zero) and that the summation is taken following the order of a selected enumeration of E. The situation is more difficult if E is uncountable, since in this case there is no natural way of numbering the elements.
The situation is much simpler if E is orthonormal, as in this case the series converges unconditionally and the order of summation is not important.

Proposition 7.17 Let E = { e_j : j ∈ ℕ } be an orthonormal set in a Hilbert space H. Then the following statements are equivalent:
(a) E is a basis in H;
(b) x = \sum_{k=1}^{∞} (x, e_k) e_k for all x ∈ H;
(c) ‖x‖² = \sum_{k=1}^{∞} |(x, e_k)|² for all x ∈ H;
(d) (x, e_n) = 0 for all n ∈ ℕ implies x = 0;
(e) the linear span of E is dense in H.
Proof:
(b) ⇒ (a): Take any x ∈ H and let α_k = (x, e_k). Then x = \sum_{k=1}^{∞} α_k e_k. In order to check uniqueness of the coefficients we suppose that x = \sum_{k=1}^{∞} β_k e_k. Then
    α_j = (x, e_j) = \left( \sum_{k=1}^{∞} β_k e_k , e_j \right) = \sum_{k=1}^{∞} β_k (e_k, e_j) = β_j ,
i.e. the coefficients are unique. Therefore E is a basis.
(b) ⇒ (c): use Lemma 7.12.

(c) ⇒ (d): Let (x, e_k) = 0 for all k; then (c) implies that ‖x‖ = 0, hence x = 0.
(d) ⇒ (b): Let y = x − \sum_{k=1}^{∞} (x, e_k) e_k. Corollary 7.14 implies that the series converges. Then Lemma 6.5 implies we can swap the limit and the inner product to get, for every n,
    (y, e_n) = \left( x − \sum_{k=1}^{∞} (x, e_k) e_k , e_n \right) = (x, e_n) − \sum_{k=1}^{∞} (x, e_k)(e_k, e_n) = (x, e_n) − (x, e_n) = 0 .
Since (y, e_n) = 0 for all n, (d) implies that y = 0, which is equivalent to x = \sum_{k=1}^{∞} (x, e_k) e_k as required.
(e) ⇒ (d): Since Span(E) is dense in H, for any x ∈ H there is a sequence x_n ∈ Span(E) such that x_n → x. Take x such that (x, e_n) = 0 for all n. Then (x_n, x) = 0 and consequently
    ‖x‖² = \left( \lim_{n→∞} x_n , x \right) = \lim_{n→∞} (x_n, x) = 0 .
Therefore x = 0.
(a) ⇒ (e): Since E is a basis, any x = \lim_{n→∞} x_n with x_n = \sum_{k=1}^{n} α_k e_k ∈ Span(E).

Example: The orthonormal sets from the examples of Section 7.1 are also examples of orthonormal bases.

7.6 Separable Hilbert spaces

Definition 7.18 A normed space is separable if it contains a countable dense subset.

In other words, a space H is separable if there is a countable set { x_n ∈ H : n ∈ ℕ } such that for any u ∈ H and any ε > 0 there is n ∈ ℕ such that
    ‖x_n − u‖ < ε .
Examples: ℝ is separable (ℚ is dense). ℝⁿ is separable (ℚⁿ is dense), ℂⁿ is separable (ℚⁿ + iℚⁿ is dense).
Example: ℓ² is separable. Indeed, the set of sequences (x₁, x₂, . . . , x_n, 0, 0, 0, . . .) with x_j ∈ ℚ is dense and countable.
Example: The space C[0, 1] is separable. Indeed, the Weierstrass approximation theorem states that every continuous function can be approximated (in the sup norm) by a polynomial. The dense countable set is given by polynomials with rational coefficients.

Example: L2 (0, 1) is separable. Indeed, continuous functions are dense in L2 (0, 1)


(in the L2 -norm). The polynomials are dense in C[0, 1] (in the supremum norm and
therefore in the L2 norm as well). The set of polynomials with rational coefficients is
dense in the set of all polynomials and, consequently, it is also dense in L2 [0, 1] (in the
L2 norm).
Proposition 7.19 An infinite-dimensional Hilbert space is separable iff it has a countable orthonormal basis.
Proof: If a Hilbert space has a countable orthonormal basis, then we can construct a countable dense set by taking finite linear combinations of the basis elements with rational coefficients. Therefore the space is separable.
If H is separable, then it contains a countable dense subset V = { x_n : n ∈ ℕ }. Obviously, the closed linear span of V coincides with H. First we construct a linearly independent set Ṽ which has the same linear span as V, by eliminating from V those x_n which are not linearly independent from { x₁, . . . , x_{n−1} }. Then the Gram-Schmidt process gives an orthonormal sequence with the same linear span, i.e., it is a basis by characterisation (e) of Proposition 7.17.

The following theorem shows that all infinite-dimensional separable Hilbert spaces are isometric to ℓ². In this sense, ℓ² is the only separable infinite-dimensional Hilbert space.
Theorem 7.20 Any infinite-dimensional separable Hilbert space is isometric to ℓ².
Proof: Let { e_j : j ∈ ℕ } be an orthonormal basis in H. The map A : H → ℓ² defined by
    A : u ↦ ((u, e₁), (u, e₂), (u, e₃), . . . )
is invertible. Indeed, the image of A lies in ℓ² by Bessel's inequality, and the inverse map is given by
    A^{-1} : (x_k)_{k=1}^{∞} ↦ \sum_{k=1}^{∞} x_k e_k ,
where the series converges by Lemma 7.12. The characterisation of a basis in Proposition 7.17 implies that ‖u‖_H = ‖A(u)‖_{ℓ²}.



Note that there are Hilbert spaces which are not separable.
Example: Let J be an uncountable set. The space H of all functions f : J → ℝ such that
    ‖f‖² := \sum_{j∈J} |f(j)|² < ∞
is a Hilbert space.¹² It is not separable. Indeed, let φ_k(j) = δ_{kj}, where δ_{kj} is the Kronecker delta. The set { φ_k : k ∈ J } ⊂ H is not countable and ‖φ_k − φ_{k′}‖ = √2 for k ≠ k′. Consequently, if k ≠ k′, B(φ_k, ½) ∩ B(φ_{k′}, ½) = ∅. So we have found an uncountable number of nonintersecting balls of radius ½. This obviously contradicts the existence of a countable dense set.

¹² How do we define the sum over an uncountable set? For any n ∈ ℕ the set J_n = { j ∈ J : |f(j)| > 1/n } is finite (otherwise the sum is obviously infinite). Consequently, the set J(f) := { j ∈ J : |f(j)| > 0 } is countable, because it is a countable union of finite sets: J(f) = \bigcup_{n=1}^{∞} J_n. Therefore the number of non-zero terms in the sum is countable and the usual definition of an infinite sum can be used.

8 Closest points and approximations

8.1 Closest points in convex subsets

Definition 8.1 A subset A of a vector space V is convex if λx + (1 − λ)y ∈ A for any two vectors x, y ∈ A and any λ ∈ [0, 1].

Lemma 8.2 If A is a non-empty closed convex subset of a Hilbert space H, then for any x ∈ H there is a unique a* ∈ A such that
    ‖x − a*‖ = \inf_{a∈A} ‖x − a‖ .

Proof: The parallelogram law implies
    ‖(x − u) + (x − v)‖² + ‖(x − u) − (x − v)‖² = 2‖x − u‖² + 2‖x − v‖² .
Then
    ‖u − v‖² = 2‖x − u‖² + 2‖x − v‖² − 4‖x − ½(u + v)‖² .
Let d = \inf_{a∈A} ‖x − a‖. Since A is convex, ½(u + v) ∈ A for any u, v ∈ A, and consequently ‖x − ½(u + v)‖ ≥ d. Then
    ‖u − v‖² ≤ 2‖x − u‖² + 2‖x − v‖² − 4d² .    (8.1)
Since d is the infimum, for any n there is a_n ∈ A such that ‖x − a_n‖² < d² + 1/n. Then equation (8.1) implies that
    ‖a_n − a_m‖² ≤ 2d² + 2/n + 2d² + 2/m − 4d² = 2/n + 2/m .
Consequently (a_n) is Cauchy and, since H is complete, it converges to some a*. Since A is closed, a* ∈ A. Then
    ‖x − a*‖² = \lim_{n→∞} ‖x − a_n‖² = d² .
Therefore a* is the point closest to x. Now suppose that there is another point ã ∈ A such that ‖x − ã‖ = d; then (8.1) implies
    ‖a* − ã‖² ≤ 2‖x − a*‖² + 2‖x − ã‖² − 4d² = 2d² + 2d² − 4d² = 0 .
So ã = a* and a* is unique.

8.2 Orthogonal complements

Definition 8.3 Let X ⊂ H. The orthogonal complement of X in H is the set
    X^⊥ = { u ∈ H : (u, x) = 0 for all x ∈ X } .
In an infinite-dimensional space a linear subspace need not be closed. For example the space ℓ_f of all sequences with only a finite number of non-zero elements is a linear subspace of ℓ², but it is not closed in ℓ² (e.g. consider the sequence x_n = (1, 2^{−1}, 2^{−2}, . . . , 2^{−n}, 0, 0, . . .)).
Proposition 8.4 If X ⊂ H, then X^⊥ is a closed linear subspace of H.
Proof: If u, v ∈ X^⊥ and λ, μ ∈ K then
    (λu + μv, x) = λ(u, x) + μ(v, x) = 0
for all x ∈ X. Therefore X^⊥ is a linear subspace. Now suppose that u_n ∈ X^⊥ and u_n → u ∈ H. Then for all x ∈ X
    (u, x) = \left( \lim_{n→∞} u_n , x \right) = \lim_{n→∞} (u_n, x) = 0 .
Consequently, u ∈ X^⊥ and so X^⊥ is closed.

Exercises:
1. If E is a basis in H, then E^⊥ = { 0 }.
2. If Y ⊂ X, then X^⊥ ⊂ Y^⊥.
3. X ⊂ (X^⊥)^⊥.
4. If X is a closed linear subspace in H, then X = (X^⊥)^⊥.

Definition 8.5 The closed linear span of E ⊂ H is the minimal closed set which contains Span(E):
    \overline{Span}(E) = { u ∈ H : ∀ε > 0 ∃x ∈ Span(E) such that ‖x − u‖ < ε } .
Proposition 8.6 If E ⊂ H then E^⊥ = (Span(E))^⊥ = (\overline{Span}(E))^⊥.
Proof: Since E ⊂ Span(E) ⊂ \overline{Span}(E) we have (\overline{Span}(E))^⊥ ⊂ (Span(E))^⊥ ⊂ E^⊥. So we need to prove the reverse inclusion. Take u ∈ E^⊥ and x ∈ \overline{Span}(E). Then there is x_n ∈ Span(E) such that x_n → x. Then
    (x, u) = \left( \lim_{n→∞} x_n , u \right) = \lim_{n→∞} (x_n, u) = 0 .
Consequently, u ∈ (\overline{Span}(E))^⊥ and we have proved E^⊥ ⊂ (\overline{Span}(E))^⊥.



Theorem 8.7 If U is a closed linear subspace of a Hilbert space H, then
1. any x ∈ H can be written uniquely in the form x = u + v with u ∈ U and v ∈ U^⊥;
2. u is the closest point to x in U;
3. the map P_U : H → U defined by P_U x = u is linear and satisfies
    P_U² x = P_U x    and    ‖P_U(x)‖ ≤ ‖x‖    for all x ∈ H.

Definition 8.8 The map PU is called the orthogonal projector onto U.


Proof: Any linear subspace is obviously convex. Then Lemma 8.2 implies that there is a unique u ∈ U such that
    ‖x − u‖ = \inf_{a∈U} ‖x − a‖ .
Let v = x − u. Let us show that v ∈ U^⊥. Indeed, take any y ∈ U and consider the function φ : ℂ → ℝ defined by
    φ(t) = ‖v + ty‖² = ‖x − (u − ty)‖² .
Since the definition of u together with u − ty ∈ U imply that φ(t) ≥ φ(0) = ‖x − u‖², the function φ has a minimum at t = 0. On the other hand
    φ(t) = ‖v + ty‖² = (v + ty, v + ty) = (v, v) + t(y, v) + \overline{t}(v, y) + |t|²(y, y) .
First suppose that t is real. Then \overline{t} = t and dφ/dt(0) = 0 implies
    (y, v) + (v, y) = 0 .
Then suppose that t is purely imaginary. Then \overline{t} = −t and dφ/dt(0) = 0 implies
    (y, v) − (v, y) = 0 .
Taking the sum of these two equalities we conclude
    (y, v) = 0    for every y ∈ U.
Therefore v ∈ U^⊥.
In order to prove the uniqueness of the representation, suppose x = u₁ + v₁ = u + v with u₁, u ∈ U and v₁, v ∈ U^⊥. Then u₁ − u = v − v₁. Since u − u₁ ∈ U and v − v₁ ∈ U^⊥,
    ‖v − v₁‖² = (v − v₁, v − v₁) = (v − v₁, u₁ − u) = 0 .
Therefore u and v are unique.
Finally, x = u + v with u ⊥ v implies ‖x‖² = ‖u‖² + ‖v‖². Consequently ‖P_U(x)‖ = ‖u‖ ≤ ‖x‖. We also note that P_U(u) = u for any u ∈ U. So P_U²(x) = P_U(x), as P_U(x) ∈ U.

Corollary 8.9 If U is a closed linear subspace in a Hilbert space H and x ∈ H, then P_U(x) is the closest point to x in U.

8.3 Best approximations

Theorem 8.10 Let E = { e_j : j ∈ J } be an orthonormal set, where J is either a finite or countable set. Then for any x ∈ H, the closest point to x in \overline{Span}(E) is given by
    y = \sum_{j∈J} (x, e_j) e_j .

Proof: Corollary 7.14 implies that u = \sum_{j∈J} (x, e_j) e_j converges. Then obviously u ∈ \overline{Span}(E), which is a closed linear subspace. Let v = x − u. Since (v, e_k) = (x, e_k) − (u, e_k) = 0 for all k ∈ J, we conclude v ∈ E^⊥ = (\overline{Span}(E))^⊥ (Proposition 8.6). Theorem 8.7 implies that u is the closest point.

Corollary 8.11 If E is an orthonormal basis in a closed subspace U ⊂ H, then the orthogonal projection onto U is given by
    P_U(x) = \sum_{j∈J} (x, e_j) e_j .

Example: The best approximation of an element x ∈ ℓ² in terms of the elements of the standard basis (e_j)_{j=1}^{n} is given by
    \sum_{j=1}^{n} (x, e_j) e_j = (x₁, x₂, . . . , x_n, 0, 0, . . . ) .

Example: Let (e_j)_{j=1}^{∞} be an orthonormal basis in H. The best approximation of an element x ∈ H in terms of the first n elements of the orthonormal basis is given by
    \sum_{j=1}^{n} (x, e_j) e_j .

Now suppose that the set E is not orthonormal. If the set E is finite or countable we can use the Gram-Schmidt orthonormalisation procedure to construct an orthonormal basis in \overline{Span}(E). After that the theorem above gives us an explicit expression for the best approximation. Let's consider some examples.

Example: Find the best approximation of a function f ∈ L²(−1, 1) by polynomials of degree up to n. In other words, let E = { 1, x, x², . . . , xⁿ }. We need to find u ∈ Span(E) such that
    ‖f − u‖_{L²} = \inf_{p∈Span(E)} ‖f − p‖_{L²} .
The set E is not orthonormal. Let's apply the Gram-Schmidt orthonormalisation procedure to construct an orthonormal basis in Span(E). For brevity, write ‖·‖ = ‖·‖_{L²(−1,1)}.

First note that ‖1‖ = √2, and let
    e₁ = \frac{1}{\sqrt{2}} .
Then (1, x) = \int_{−1}^{1} x \, dx = 0 and ‖x‖² = \int_{−1}^{1} |x|² \, dx = \frac{2}{3}, so let
    e₂ = \sqrt{\frac{3}{2}} \, x .
Then
    ẽ₃ = x² − (x², e₂)e₂ − (x², e₁)e₁
       = x² − \frac{3}{2} x \int_{−1}^{1} t³ \, dt − \frac{1}{2} \int_{−1}^{1} t² \, dt
       = x² − \frac{1}{2} \int_{−1}^{1} t² \, dt = x² − \frac{1}{3} .
Taking into account that
    ‖ẽ₃‖² = \int_{−1}^{1} \left( x² − \frac{1}{3} \right)² dx = \frac{8}{45} ,
we obtain
    e₃ = \frac{ẽ₃}{‖ẽ₃‖} = \sqrt{\frac{5}{8}} \, (3x² − 1) .

Exercise: Show that e₄ = \sqrt{\frac{7}{8}} \, (5x³ − 3x) is orthogonal to e₁, e₂ and e₃.

The best approximation of any function f ∈ L²(−1, 1) by a polynomial of degree three is given by
    \frac{7}{8}(5x³ − 3x) \int_{−1}^{1} f(t)(5t³ − 3t) \, dt + \frac{5}{8}(3x² − 1) \int_{−1}^{1} f(t)(3t² − 1) \, dt + \frac{3}{2} x \int_{−1}^{1} t f(t) \, dt + \frac{1}{2} \int_{−1}^{1} f(t) \, dt .
For example, if f(x) = |x| its best approximation by a third degree polynomial is
    p₃ = \frac{15x² + 3}{16} .
We can check (after computing the corresponding integral) that
    ‖f − p₃‖² = \frac{1}{96} .

Note that the best approximation in the L² norm is not necessarily the best approximation in the sup norm. Indeed, for example,
    \sup_{x∈[−1,1]} \left| |x| − \frac{15x² + 3}{16} \right| ≥ \frac{3}{16}
(the supremum is at least the value at x = 0). At the same time
    \sup_{x∈[−1,1]} \left| |x| − \left( x² + \frac{1}{8} \right) \right| = \frac{1}{8} .
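The computation above is easy to reproduce numerically. The following sketch (an illustration assuming NumPy; the integrals are approximated on a grid, so the printed values are only close to the exact ones) projects f(x) = |x| onto the orthonormal polynomials e₁, …, e₄ and compares the L² and sup errors:

import numpy as np

x = np.linspace(-1.0, 1.0, 200001)
dx = x[1] - x[0]

def integral(h):
    return (np.sum(h) - 0.5 * (h[0] + h[-1])) * dx

# the orthonormal polynomials constructed by Gram-Schmidt above
e = [np.full_like(x, 1 / np.sqrt(2)),
     np.sqrt(3 / 2) * x,
     np.sqrt(5 / 8) * (3 * x**2 - 1),
     np.sqrt(7 / 8) * (5 * x**3 - 3 * x)]

f = np.abs(x)
p3 = sum(integral(f * ek) * ek for ek in e)       # best L^2 approximation of degree <= 3
print("max |p3 - (15x^2+3)/16| =", np.max(np.abs(p3 - (15 * x**2 + 3) / 16)))
print("||f - p3||^2 =", integral((f - p3) ** 2))  # approximately 1/96
print("sup|f - p3|   =", np.max(np.abs(f - p3)))  # 3/16, attained at x = 0
print("sup|f - (x^2 + 1/8)| =", np.max(np.abs(f - (x**2 + 1 / 8))))  # 1/8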


9 Linear maps between Banach spaces

A linear map on a vector space is traditionally called a linear operator. All linear operators defined on a finite-dimensional space are continuous. This statement is no longer true in the case of an infinite-dimensional space.
We will begin our study with continuous operators: this class has a rich theory and numerous applications. We will only slightly touch on some of them (the most notable examples will be the shift operators on ℓ², and integral operators and multiplication operators on L²).
Of course many interesting linear maps are not continuous. For example, consider the differentiation operator A : f ↦ f′ on the space of continuously differentiable functions. More precisely, let D(A) = C¹[0, 1] ⊂ L²(0, 1) be the domain of A. Obviously, A : D(A) → L²(0, 1) is linear but not continuous. Indeed, consider the sequence x_n(t) = n^{−1} sin(nt). Obviously ‖x_n‖_{L²} ≤ n^{−1}, so x_n → 0, but A(x_n) = cos(nt) does not converge to A(0) = 0 in the L² norm, so A is not continuous.
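A hedged numerical illustration of this example (assuming NumPy; derivatives and integrals are approximated by finite differences and Riemann sums on a grid of [0, 1]):

import numpy as np

t = np.linspace(0.0, 1.0, 100001)
dt = t[1] - t[0]

def l2_norm(h):
    return np.sqrt(np.sum(h**2) * dt)

for n in [1, 10, 100, 1000]:
    xn = np.sin(n * t) / n
    Axn = np.gradient(xn, dt)          # numerical derivative approximating A(x_n) = cos(nt)
    print(f"n = {n:5d}:  ||x_n|| = {l2_norm(xn):.2e},  ||A x_n|| = {l2_norm(Axn):.3f}")
# ||x_n|| -> 0 while ||A x_n|| stays close to 1/sqrt(2), so A is not continuous at 0.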
Some definitions and properties from the theory of continuous linear operators can be extended verbatim to unbounded ones, but sometimes subtle differences appear: e.g., we will see that a bounded operator is self-adjoint iff it is symmetric, which is no longer true for unbounded operators. In the study of unbounded operators, special attention should be paid to their domains.

9.1 Continuous linear maps

Let U and V be vector spaces over K.

Definition 9.1 A function A : U → V is called a linear operator if
    A(αx + βy) = αA(x) + βA(y)
for all x, y ∈ U and α, β ∈ K. We will often write Ax to denote A(x).

The collection of all linear operators from U to V is a vector space. If A, B : U → V are linear operators and α, β ∈ K then we define
    (αA + βB)(x) = αAx + βBx .
Obviously, αA + βB is also linear.

Definition 9.2 A linear operator A : U → V is bounded if there is a constant M such that
    ‖Ax‖_V ≤ M‖x‖_U    for all x ∈ U.    (9.1)

If an operator is bounded, then the image of a bounded set is also bounded. Since A(λx) = λA(x) for all λ ∈ K, a bounded operator is rarely a bounded function. Rather, it is a locally bounded function (i.e. every point has a neighbourhood such that the restriction of A to the neighbourhood is bounded).

Lemma 9.3 A linear operator A : U → V is continuous iff it is bounded.

Proof: Suppose A is bounded. Then there is M > 0 such that
    ‖A(x) − A(y)‖ = ‖A(x − y)‖ ≤ M‖x − y‖
for all x, y ∈ U, and consequently A is continuous.
Now suppose A is continuous. Obviously A(0) = 0. Then for ε = 1 there is δ > 0 such that ‖A(x)‖ < ε = 1 for all ‖x‖ < δ. For any u ∈ U, u ≠ 0,
    A(u) = \frac{2‖u‖}{δ} A\left( \frac{δ}{2‖u‖} u \right) .
Since \left\| \frac{δ}{2‖u‖} u \right\| = \frac{δ}{2} < δ, we get ‖A(u)‖ ≤ \frac{2‖u‖}{δ} and consequently A is bounded.
The space of all bounded linear operators from U to V is denoted by B(U, V).

Definition 9.4 The operator norm of A : U → V is
    ‖A‖_{B(U,V)} = \sup_{x≠0} \frac{‖A(x)‖_V}{‖x‖_U} .
We will often write ‖A‖_{op} instead of ‖A‖_{B(U,V)}.

Since A is linear,
    ‖A‖_{B(U,V)} = \sup_{‖x‖_U = 1} ‖A(x)‖_V .
We note that ‖A‖_{B(U,V)} is the smallest M such that (9.1) holds: indeed, it is easy to see that the definition of the operator norm implies
    ‖A(x)‖_V ≤ ‖A‖_{B(U,V)} ‖x‖_U ,
so (9.1) holds with M = ‖A‖_{B(U,V)}. On the other hand, (9.1) implies M ≥ \frac{‖Ax‖_V}{‖x‖_U} for any x ≠ 0 and consequently M ≥ ‖A‖_{B(U,V)}.

Theorem 9.5 Let U be a normed space and V be a Banach space. Then B(U, V) is a Banach space.

Proof: Let (A_n)_{n=1}^{∞} be a Cauchy sequence in B(U, V). Take a vector u ∈ U. The sequence v_n = A_n(u) is a Cauchy sequence in V:
    ‖v_n − v_m‖ = ‖A_n(u) − A_m(u)‖ = ‖(A_n − A_m)(u)‖ ≤ ‖A_n − A_m‖_{op} ‖u‖ .
Since V is complete there is v ∈ V such that v_n → v. Let A(u) = v.
The operator A is linear. Indeed,
    A(α₁u₁ + α₂u₂) = \lim_{n→∞} A_n(α₁u₁ + α₂u₂) = \lim_{n→∞} ( α₁A_n(u₁) + α₂A_n(u₂) ) = α₁ \lim_{n→∞} A_n u₁ + α₂ \lim_{n→∞} A_n u₂ = α₁Au₁ + α₂Au₂ .
The operator A is bounded. Indeed, (A_n) is Cauchy and hence bounded: there is a constant M ∈ ℝ such that ‖A_n‖_{op} < M for all n. Taking the limit in the inequality ‖A_n u‖ ≤ M‖u‖ implies ‖Au‖ ≤ M‖u‖. Therefore A ∈ B(U, V).
Finally, A_n → A in the operator norm. Indeed, since (A_n) is Cauchy, for any ε > 0 there is N such that ‖A_n − A_m‖_{op} < ε, i.e.
    ‖A_n(u) − A_m(u)‖ ≤ ε‖u‖    for all m, n > N.
Taking the limit as m → ∞,
    ‖A_n(u) − A(u)‖ ≤ ε‖u‖    for all n > N.
Consequently ‖A_n − A‖ ≤ ε and so A_n → A. Therefore B(U, V) is complete.

9.2 Examples

1. Example: Shift operators T_l, T_r : ℓ² → ℓ²:
    T_r(x) = (0, x₁, x₂, x₃, . . . )    and    T_l(x) = (x₂, x₃, x₄, . . . ) .
Both operators are obviously linear. Moreover,
    ‖T_r(x)‖²_{ℓ²} = \sum_{k=1}^{∞} |x_k|² = ‖x‖²_{ℓ²} .
Consequently, ‖T_r‖_{op} = 1. We also have
    ‖T_l(x)‖²_{ℓ²} = \sum_{k=2}^{∞} |x_k|² ≤ ‖x‖²_{ℓ²} .
Consequently, ‖T_l‖_{op} ≤ 1. However, if x = (0, x₂, x₃, x₄, . . . ) then ‖T_l(x)‖_{ℓ²} = ‖x‖_{ℓ²}. Therefore ‖T_l‖_{op} = 1.
2. Example: Multiplication operator: Let f be a continuous function on [a, b]. The formula
    (Ax)(t) = f(t)x(t)
defines a bounded linear operator A : L²[a, b] → L²[a, b]. Indeed, A is obviously linear. It is bounded since
    ‖Ax‖²_{L²} = \int_a^b |f(t)x(t)|² \, dt ≤ ‖f‖²_∞ \int_a^b |x(t)|² \, dt = ‖f‖²_∞ ‖x‖²_{L²} .
Consequently ‖A‖_{op} ≤ ‖f‖_∞. Now let t₀ be a maximum point of |f|. If t₀ ≠ b, consider the characteristic function
    x_ε = χ_{[t₀, t₀+ε]} .
(If t₀ = b let x_ε = χ_{[t₀−ε, t₀]}.) Since f is continuous,
    \frac{‖Ax_ε‖²}{‖x_ε‖²} = \frac{1}{ε} \int_{t₀}^{t₀+ε} |f(t)|² \, dt → |f(t₀)|²    as ε → 0.
Therefore ‖A‖_{op} = ‖f‖_∞ .
3. Example: Integral operator on L²(a, b):
    (Ax)(t) = \int_a^b K(t, s) x(s) \, ds    for all t ∈ [a, b],
where
    \int_a^b \int_a^b |K(t, s)|² \, ds \, dt < +∞ .
Let us estimate the norm of A:
    ‖Ax‖² = \int_a^b \left| \int_a^b K(t, s)x(s) \, ds \right|² dt
         ≤ \int_a^b \left( \int_a^b |K(t, s)|² \, ds \int_a^b |x(s)|² \, ds \right) dt    (Cauchy-Schwartz)
         = \int_a^b \int_a^b |K(t, s)|² \, ds \, dt \; ‖x‖² .
Consequently
    ‖A‖²_{op} ≤ \int_a^b \int_a^b |K(t, s)|² \, ds \, dt .
Note that this example requires a bit more from the theory of Lebesgue integrals than we discussed in Section 5. If you are not taking Measure Theory and feel uncomfortable with these integrals, you may assume that x, y and K are continuous functions.

9.3 Kernel and range

Definition 9.6 The kernel of A is
    Ker A = { x ∈ U : Ax = 0 } ;
the range of A is
    Range A = { y ∈ V : ∃x ∈ U such that y = Ax } .


We note that 0 ∈ Ker A for any linear operator A. We say that Ker A is trivial if Ker A = { 0 }.

Proposition 9.7 If A ∈ B(U, V) then Ker A is a closed linear subspace of U.
Proof: If x, y ∈ Ker A and λ, μ ∈ K, then
    A(λx + μy) = λA(x) + μA(y) = 0 .
Consequently λx + μy ∈ Ker A, so Ker A is a linear subspace. Furthermore, if x_n → x and A(x_n) = 0 for all n, then A(x) = 0 due to continuity of A.
Note that the range is a linear subspace but not necessarily closed. Exercise: construct an example (see Examples sheet 3).


10 Linear functionals

10.1 Definition and examples

Definition 10.1 If U is a vector space then a linear map U → K is called a linear functional on U.

Definition 10.2 The space of all continuous linear functionals on a normed space U is called the dual space, i.e., U* = B(U, K).
The dual space equipped with the operator norm is a Banach space. Indeed, K = ℝ or ℂ, which are both complete, so Theorem 9.5 implies that U* is Banach.

1. Example: δ_x(f) = f(x), for a fixed x ∈ [a, b], is a bounded linear functional on C[a, b].

2. Example: Let φ ∈ C[a, b] (or φ ∈ L²(a, b)). Then ℓ_φ(x) = \int_a^b φ(t)x(t) \, dt is a bounded linear functional on L²(a, b).

3. Example: Let H be a Hilbert space and y ∈ H. Then ℓ_y : H → K defined by
    ℓ_y(x) = (x, y)
is a bounded linear functional (by the Cauchy-Schwartz inequality), with ‖ℓ_y‖_{op} = ‖y‖_H.

10.2 Riesz representation theorem

The following theorem is one of the fundamental results of Functional Analysis: it states that the map y ↦ ℓ_y is an isometry between H and its dual space H*.

Theorem 10.3 (Riesz Representation Theorem) Let H be a Hilbert space. For any bounded linear functional f : H → K there is a unique y ∈ H such that
    f(x) = (x, y)    for all x ∈ H.
Moreover, ‖f‖_{H*} = ‖y‖_H.

Proof: Let K = Ker f. It is a closed linear subspace of H.
If K = H then f(x) = 0 for all x and the statement of the theorem is true with y = 0.
If K ≠ H, we first prove that dim K^⊥ = 1. Indeed, since K^⊥ ≠ {0}, there is a vector z ∈ K^⊥ with ‖z‖_H = 1. Now take any u ∈ K^⊥. Since K^⊥ is a linear subspace, v = f(z)u − f(u)z ∈ K^⊥. On the other hand
    f(v) = f( f(z)u − f(u)z ) = f(z)f(u) − f(u)f(z) = 0
and so v ∈ K. For any linear subspace K ∩ K^⊥ = { 0 }, and so v = 0. Then f(z)u − f(u)z = v = 0, i.e. u = \frac{f(u)}{f(z)} z. Consequently { z } is a basis in K^⊥ and dim K^⊥ = 1.
Since K is closed, Theorem 8.7 implies that every vector x ∈ H can be written uniquely in the form
    x = u + v    where u ∈ K^⊥ and v ∈ K.
Since {z} is an orthonormal basis in K^⊥, we have u = (x, z)z. Moreover,
    f(x) = f(u) + f(v) = f(u) = (x, z) f(z) = (x, \overline{f(z)} z) .
Set y = \overline{f(z)} z to get the desired equality:
    f(x) = (x, y)    for all x ∈ H.
If there is another y′ ∈ H such that f(x) = (x, y′) for all x ∈ H, then (x, y) = (x, y′) for all x, i.e., (x, y − y′) = 0. Setting x = y − y′ we conclude ‖y − y′‖² = 0, i.e. y = y′, so y is unique.
Finally, the Cauchy-Schwartz inequality implies
    |f(x)| = |(x, y)| ≤ ‖x‖ ‖y‖ ,
i.e., ‖f‖_{H*} = ‖f‖_{op} ≤ ‖y‖. On the other hand,
    ‖f‖_{op} ≥ \frac{|f(y)|}{‖y‖} = \frac{|(y, y)|}{‖y‖} = ‖y‖ .
Consequently, ‖f‖_{H*} = ‖y‖_H.


11 Linear operators on Hilbert spaces

11.1 Complexification

In the next lectures we will discuss the spectral theory of linear operators. The spectral theory looks more natural in complex spaces. In particular, the theory studies eigenvalues and eigenvectors of linear maps (i.e. non-zero solutions of the equation Ax = λx). In a finite-dimensional space a linear operator can be described by a matrix. You already know that a matrix (even a real one) can have complex eigenvalues. Fortunately a real Hilbert space can always be considered as a part of a complex one due to the complexification procedure.

Definition 11.1 Let H be a real Hilbert space. The complexification of H is the complex vector space
    H_ℂ = { x + iy : x, y ∈ H }
where the addition and multiplication are respectively defined by
    (x + iy) + (u + iw) = (x + u) + i(y + w) ,
    (α + iβ)(x + iy) = (αx − βy) + i(αy + βx) .
The inner product is defined by
    (x + iy, u + iw) = (x, u) − i(x, w) + i(y, u) + (y, w) .

Exercise: Show that H_ℂ is a Hilbert space.
Example: The complexification of ℓ²(ℝ) is ℓ²(ℂ).
Exercise: Show that ‖x + iy‖²_{H_ℂ} = ‖x‖² + ‖y‖² for all x, y ∈ H.
The following lemma states that any bounded operator on H can be extended to a bounded operator on H_ℂ.

Lemma 11.2 Let H be a real Hilbert space and A : H → H a bounded operator. Then
    A_ℂ(x + iy) = A(x) + iA(y)
defines a bounded operator H_ℂ → H_ℂ.

Exercise: Prove the lemma.


11.2 Adjoint operators

Theorem 11.3 If A : H → H is a bounded linear operator on a Hilbert space H, then there is a unique bounded operator A* : H → H such that
    (Ax, y) = (x, A*y)    for all x, y ∈ H.
Moreover, ‖A*‖_{op} ≤ ‖A‖_{op}.

Definition 11.4 The operator A* is called the adjoint operator of a bounded operator A if (Ax, y) = (x, A*y) for all x, y ∈ H.
Proof: Let y ∈ H and f(x) = (Ax, y) for all x ∈ H. The map f : H → K is linear and
    |f(x)| = |(Ax, y)| ≤ ‖Ax‖ ‖y‖ ≤ ‖A‖_{op} ‖x‖ ‖y‖ ,
where we have used the Cauchy-Schwartz inequality. Consequently, f is a bounded functional on H. The Riesz representation theorem implies that there is a unique z ∈ H such that
    (Ax, y) = (x, z)    for all x ∈ H.
Define the map A* : H → H by A*y = z. Then
    (Ax, y) = (x, A*y)    for all x, y ∈ H.
First, A* is linear since for any x, y₁, y₂ ∈ H and λ₁, λ₂ ∈ K
    (x, A*(λ₁y₁ + λ₂y₂)) = (Ax, λ₁y₁ + λ₂y₂) = \overline{λ₁}(Ax, y₁) + \overline{λ₂}(Ax, y₂)
       = \overline{λ₁}(x, A*y₁) + \overline{λ₂}(x, A*y₂) = (x, λ₁A*y₁ + λ₂A*y₂) .
Since the equality is valid for all x ∈ H, it implies
    A*(λ₁y₁ + λ₂y₂) = λ₁A*y₁ + λ₂A*y₂ .
Second, A* is bounded since
    ‖A*y‖² = (A*y, A*y) = (AA*y, y) ≤ ‖AA*y‖ ‖y‖ ≤ ‖A‖_{op} ‖A*y‖ ‖y‖ .
If ‖A*y‖ ≠ 0 we divide by ‖A*y‖ and obtain
    ‖A*y‖ ≤ ‖A‖_{op} ‖y‖ .
If A*y = 0, this inequality is obvious. Therefore the inequality holds for all y. Thus A* is bounded and ‖A*‖_{op} ≤ ‖A‖_{op}.

1. Example: If A : ℂⁿ → ℂⁿ, then A* is the Hermitian conjugate of A, i.e. A* = \overline{A}^T (the complex conjugate of the transposed matrix).

2. Example: Integral operator on L²(0, 1):
    (Ax)(t) = \int_0^1 K(t, s) x(s) \, ds .
The adjoint operator is
    (A*y)(s) = \int_0^1 \overline{K(t, s)} \, y(t) \, dt .
Indeed, for any x, y ∈ L²(0, 1):
    (Ax, y) = \int_0^1 \left( \int_0^1 K(t, s) x(s) \, ds \right) \overline{y(t)} \, dt
           = \int_0^1 \int_0^1 K(t, s) x(s) \overline{y(t)} \, ds \, dt
           = \int_0^1 x(s) \, \overline{ \left( \int_0^1 \overline{K(t, s)} \, y(t) \, dt \right) } \, ds
           = (x, A*y) .
Note that we used Fubini's Theorem to change the order of integration.
3. Example: Shift operators: T_l* = T_r and T_r* = T_l. Indeed,
    (T_r x, y) = \sum_{k=1}^{∞} x_k \overline{y_{k+1}} = (x, T_l y) .
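A hedged finite-dimensional check of this identity (assuming NumPy; infinite sequences are replaced by truncations in ℂⁿ, with the shifted entry dropping off the end):

import numpy as np

def T_r(x):
    return np.concatenate(([0.0], x[:-1]))   # right shift on a truncated sequence

def T_l(x):
    return np.concatenate((x[1:], [0.0]))    # left shift on a truncated sequence

def inner(x, y):
    return np.sum(x * np.conj(y))

rng = np.random.default_rng(3)
x = rng.standard_normal(50) + 1j * rng.standard_normal(50)
y = rng.standard_normal(50) + 1j * rng.standard_normal(50)
print(inner(T_r(x), y))    # (T_r x, y)
print(inner(x, T_l(y)))    # (x, T_l y) -- the same number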

The following lemma states some elementary properties of adjoint operators.

Lemma 11.5 If A, B : H → H are bounded operators on a Hilbert space H and α, β ∈ ℂ, then
1. (αA + βB)* = \overline{α}A* + \overline{β}B* ;
2. (AB)* = B*A* ;
3. (A*)* = A ;
4. ‖A*‖ = ‖A‖ ;
5. ‖A*A‖ = ‖AA*‖ = ‖A‖².

Proof: Statements 1–3 follow directly from the definition of the adjoint operator (Exercise). Statement 4 follows from 3 and the estimate of Theorem 11.3: indeed, ‖A*‖ ≤ ‖A‖ = ‖(A*)*‖ ≤ ‖A*‖.
Finally, in order to prove statement 5 we note that
    ‖Ax‖² = (Ax, Ax) = (x, A*Ax) ≤ ‖x‖ ‖A*Ax‖ ≤ ‖A*A‖ ‖x‖²
implies ‖A‖² ≤ ‖A*A‖. On the other hand ‖A*A‖ ≤ ‖A*‖ ‖A‖ = ‖A‖², and consequently ‖A*A‖ = ‖A‖².

11.3 Self-adjoint operators

Definition 11.6 A linear operator A is self-adjoint if A* = A.

Lemma 11.7 An operator A ∈ B(H, H) is self-adjoint iff it is symmetric:
    (x, Ay) = (Ax, y)    for all x, y ∈ H.

1. Example: H = ℝⁿ: a linear map defined by a symmetric matrix is self-adjoint.
2. Example: H = ℂⁿ: a linear map defined by a Hermitian matrix is self-adjoint.
3. Example: A : L²(0, 1) → L²(0, 1),
    (Af)(t) = \int_0^1 K(t, s) f(s) \, ds
with real symmetric K, K(t, s) = K(s, t), is self-adjoint.


Let A : H → H be a linear operator. A scalar λ ∈ K is an eigenvalue of A if there is x ∈ H, x ≠ 0, such that Ax = λx. The vector x is called an eigenvector of A.

Theorem 11.8 Let A be a self-adjoint operator on a Hilbert space H. Then all eigenvalues of A are real and eigenvectors corresponding to distinct eigenvalues are orthogonal.

Proof: Suppose Ax = λx with x ≠ 0. Then
    λ‖x‖² = (λx, x) = (Ax, x) = (x, A*x) = (x, Ax) = (x, λx) = \overline{λ}‖x‖² .
Consequently, λ is real.
Now if λ₁ and λ₂ are distinct eigenvalues and Ax₁ = λ₁x₁, Ax₂ = λ₂x₂, then
    0 = (Ax₁, x₂) − (x₁, Ax₂) = (λ₁x₁, x₂) − (x₁, λ₂x₂) = (λ₁ − λ₂)(x₁, x₂) .
Since λ₁ − λ₂ ≠ 0, we conclude (x₁, x₂) = 0.

Exercise: Let A be a self-adjoint operator on a real Hilbert space H. Show that its complexification A_ℂ : H_ℂ → H_ℂ is also self-adjoint. Show that if λ is an eigenvalue of A_ℂ, then there is x ∈ H, x ≠ 0, such that Ax = λx.
Theorem 11.9 If A is a bounded self-adjoint operator, then
1. (Ax, x) is real for all x ∈ H;
2. ‖A‖_{op} = \sup_{‖x‖=1} |(Ax, x)| .

Proof: For any x ∈ H
    (Ax, x) = (x, Ax) = \overline{(Ax, x)} ,
which implies that (Ax, x) is real. Now let
    M = \sup_{‖x‖=1} |(Ax, x)| .
The Cauchy-Schwartz inequality implies
    |(Ax, x)| ≤ ‖Ax‖ ‖x‖ ≤ ‖A‖_{op} ‖x‖² = ‖A‖_{op}
for all x ∈ H such that ‖x‖ = 1. Consequently M ≤ ‖A‖_{op}. On the other hand, for any u, v ∈ H we have
    4 Re(Au, v) = (A(u + v), u + v) − (A(u − v), u − v)
              ≤ M( ‖u + v‖² + ‖u − v‖² )
              = 2M( ‖u‖² + ‖v‖² )
using the parallelogram law. If Au ≠ 0 let
    v = \frac{‖u‖}{‖Au‖} Au
to obtain, since ‖u‖ = ‖v‖, that
    ‖u‖ ‖Au‖ ≤ M‖u‖² .
Consequently ‖Au‖ ≤ M‖u‖ (for all u, including those with Au = 0) and ‖A‖_{op} ≤ M. Therefore ‖A‖_{op} = M.


Unbounded operators and their adjoint operators¹³

The notions of adjoint and self-adjoint operators play an important role in the general theory of linear operators. If an operator is not bounded, special care should be taken in the consideration of its domain of definition.
Let D(A) be a linear subspace of a Hilbert space H, and A : D(A) → H be a linear operator. If D(A) is dense in H we say that A is densely defined.
Example: Consider the operator A(f) = df/dt on the set of all continuously differentiable functions, i.e., D(A) = C¹[0, 1] ⊂ L²(0, 1). Since continuously differentiable functions are dense in L², this operator is densely defined.
Given a densely defined linear operator A on H, its adjoint A* is defined as follows. D(A*), the domain of A*, consists of all vectors x ∈ H such that
    y ↦ (x, Ay)
is a continuous linear functional D(A) → K. By continuity and density of D(A), it extends to a unique continuous linear functional on all of H.
By the Riesz representation theorem, if x ∈ D(A*), there is a unique vector z ∈ H such that
    (x, Ay) = (z, y)    for all y ∈ D(A).
This vector z is defined to be A*x. It can be shown that A* : D(A*) → H is linear. The definition implies (Ax, y) = (x, A*y) for all x ∈ D(A) and y ∈ D(A*).
Note that two properties play a key role in this definition: the density of the domain of A in H, and the uniqueness part of the Riesz representation theorem.
A linear operator is symmetric if
    (Ax, y) = (x, Ay)    for all x, y ∈ D(A).
If A is symmetric then D(A) ⊂ D(A*) and A coincides with the restriction of A* onto D(A). An operator is self-adjoint if A = A*, i.e., it is symmetric and D(A) = D(A*). In general, the condition for a linear operator on a Hilbert space to be self-adjoint is stronger than to be symmetric. If an operator is bounded then it is normally assumed that D(A) = D(A*) = H and therefore a symmetric operator is self-adjoint.
The Hellinger-Toeplitz theorem states that an everywhere defined symmetric operator on a Hilbert space is bounded.

¹³ Optional topic.

12 Introduction to Spectral Theory

12.1 Point spectrum

Let H be a complex Hilbert space and A : H → H a linear operator. If Ax = λx for some x ∈ H, x ≠ 0, and λ ∈ ℂ, then λ is an eigenvalue of A and x is an eigenvector. The space
    E_λ = { x ∈ H : Ax = λx }
is called the eigenspace.

Exercise: Prove the following: If A ∈ B(H, H) and λ is an eigenvalue of A, then E_λ is a closed linear subspace of H. Moreover, E_λ is invariant, i.e., A(E_λ) = E_λ (if λ ≠ 0).

Definition 12.1 The point spectrum of A consists of all eigenvalues of A:
    σ_p(A) = { λ ∈ ℂ : Ax = λx for some x ∈ H, x ≠ 0 } .

Proposition 12.2 If A : H → H is bounded and λ is an eigenvalue of A, then
    |λ| ≤ ‖A‖_{op} .
Proof: If Ax = λx with x ≠ 0, then
    ‖A‖_{op} = \sup_{y≠0} \frac{‖Ay‖}{‖y‖} ≥ \frac{‖Ax‖}{‖x‖} = |λ| .

Examples:
1. A linear map on an n-dimensional vector space has at least one and at most n different eigenvalues.
2. The right shift T_r : ℓ² → ℓ² has no eigenvalues, i.e., the point spectrum is empty. Indeed, suppose T_r x = λx; then
    (0, x₁, x₂, x₃, x₄, . . . ) = λ(x₁, x₂, x₃, x₄, . . . )
implies 0 = λx₁, x₁ = λx₂, x₂ = λx₃, . . . If λ ≠ 0, we divide by λ and conclude x₁ = x₂ = · · · = 0. If λ = 0 we also get x = 0. Consequently
    σ_p(T_r) = ∅ .
3. The point spectrum of the left shift T_l : ℓ² → ℓ² is the open unit disk. Indeed, suppose T_l x = λx with λ ∈ ℂ. Then
    (x₂, x₃, x₄, . . . ) = λ(x₁, x₂, x₃, x₄, . . . )
is equivalent to x₂ = λx₁, x₃ = λx₂, x₄ = λx₃, . . . Consequently, x = (x_k)_{k=1}^{∞} with x_k = λ^{k−1} x₁ for all k ≥ 2. This sequence belongs to ℓ² if and only if \sum_{k=1}^{∞} |x_k|² = \sum_{k=1}^{∞} |x₁|² |λ|^{2(k−1)} converges, or equivalently |λ| < 1. Therefore
    σ_p(T_l) = { λ ∈ ℂ : |λ| < 1 } .

12.2 Invertible operators

Let us discuss the concept of an inverse operator.


Definition 12.3 (injective operator) We say that A : U → V is injective if the equation Ax = y has a unique solution for every y ∈ Range(A).

Definition 12.4 (bijective operator) We say that A : U → V is bijective if the equation Ax = y has exactly one solution for every y ∈ V.

Definition 12.5 (inverse operator) We say that A is invertible if it is bijective. Then the equation Ax = y has a unique solution for all y ∈ V and we define A^{-1}y = x.

1. Exercise: Show that A^{-1} is a linear operator.
2. Exercise: Show that a linear operator A : U → V is invertible iff
    Ker(A) = { 0 }    and    Range(A) = V.
3. Exercise: Show that if A is invertible, then A^{-1} is also invertible and (A^{-1})^{-1} = A.
4. Exercise: Show that if A and B are two invertible linear operators, then AB is also invertible and (AB)^{-1} = B^{-1}A^{-1}.

We will use I_V : V → V to denote the identity operator on V, i.e., I_V(x) = x for all x ∈ V. Moreover, we will skip the subscript V if there is no danger of confusion. It is easy to see that if A : U → V is invertible then
    AA^{-1} = I_V    and    A^{-1}A = I_U .

Example: The right shift T_r : ℓ² → ℓ² has a trivial kernel and
    T_l T_r = I ,
but it is not invertible since Range(T_r) ≠ ℓ². (Indeed, any sequence in the range of T_r has a zero in the first place.) Consequently, the equality AB = I alone does not imply that B = A^{-1}.
Lemma 12.6 If A : U → V and B : V → U are linear operators such that
    AB = I_V    and    BA = I_U ,
then A and B are both invertible and B = A^{-1}.

Proof: The equality ABy = y for all y ∈ V implies that Ker B = { 0 } and Range A = V. On the other hand, BAx = x for all x ∈ U implies Ker A = { 0 } and Range B = U. Therefore both A and B satisfy the definition of an invertible operator.

12.3 Resolvent and spectrum

Let A : V → V be a linear operator on a vector space V. A complex number λ is an eigenvalue of A if Ax = λx for some x ≠ 0. This equation is equivalent to (A − λI)x = 0. Then we immediately see that A − λI is not invertible, since 0 has infinitely many preimages: αx with α ∈ ℂ.
If V is finite-dimensional the reverse statement is also true: if A − λI is not invertible then λ is an eigenvalue of A (recall the Fredholm alternative from first year Linear Algebra). In the infinite-dimensional case this is not necessarily true.

Definition 12.7 (resolvent set and spectrum) The resolvent set of a linear operator A : H → H is defined by
    R(A) = { λ ∈ ℂ : (A − λI)^{-1} ∈ B(H, H) } .
The resolvent set consists of regular values. The spectrum is the complement of the resolvent set in ℂ:
    σ(A) = ℂ \ R(A) .

Note that the definition of the resolvent set assumes existence of the inverse operator (A − λI)^{-1} for λ ∈ R(A). If λ ∈ σ_p(A) then (A − λI) is not invertible. Consequently any eigenvalue λ ∈ σ(A), and
    σ_p(A) ⊂ σ(A) .
The spectrum of A can be larger than the point spectrum.

Example: The point spectrum of the right shift operator T_r is empty, but since Range T_r ≠ ℓ² it is not invertible and therefore 0 ∈ σ(T_r). So σ_p(T_r) ≠ σ(T_r).

Technical lemmas
The following two lemmas will help us in the study of the resolvent set: they establish useful conditions which guarantee that an operator has a bounded inverse.

Lemma 12.8 If T ∈ B(H, H) and ‖T‖ < 1, then (I − T)^{-1} ∈ B(H, H). Moreover
    (I − T)^{-1} = I + T + T² + T³ + · · ·
and
    ‖(I − T)^{-1}‖ ≤ (1 − ‖T‖)^{-1} .
Proof: Consider the sequence V_n = I + T + T² + · · · + Tⁿ. Since
    ‖Tⁿx‖ ≤ ‖T‖ ‖T^{n−1}x‖
we conclude that ‖Tⁿ‖ ≤ ‖T‖ⁿ. Consequently, for any m > n we have
    ‖V_m − V_n‖ = ‖T^{n+1} + T^{n+2} + · · · + T^m‖
              ≤ ‖T‖^{n+1} + ‖T‖^{n+2} + · · · + ‖T‖^m
              = \frac{‖T‖^{n+1} − ‖T‖^{m+1}}{1 − ‖T‖} ≤ \frac{‖T‖^{n+1}}{1 − ‖T‖} .
Since ‖T‖ < 1, (V_n) is a Cauchy sequence in the operator norm. The space B(H, H) is complete, so there is V ∈ B(H, H) such that V_n → V. Moreover,
    ‖V‖ ≤ 1 + ‖T‖ + ‖T‖² + · · · = (1 − ‖T‖)^{-1} .
Finally, taking the limit as n → ∞ in the equalities
    V_n(I − T) = V_n − V_nT = I − T^{n+1} ,
    (I − T)V_n = V_n − TV_n = I − T^{n+1}
and using that T^{n+1} → 0 in the operator norm, we get V(I − T) = (I − T)V = I. Lemma 12.6 implies (I − T)^{-1} = V.
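The Neumann series of Lemma 12.8 is also a practical way to invert a perturbation of the identity. A hedged finite-dimensional sketch (assuming NumPy; matrices on ℝⁿ stand in for operators on H, and the spectral norm plays the role of the operator norm):

import numpy as np

rng = np.random.default_rng(4)
T = rng.standard_normal((5, 5))
T *= 0.4 / np.linalg.norm(T, 2)            # rescale so that ||T||_op = 0.4 < 1

# partial sums V_n = I + T + ... + T^n of the Neumann series
V = np.eye(5)
term = np.eye(5)
for _ in range(60):
    term = term @ T
    V += term

exact = np.linalg.inv(np.eye(5) - T)
print("error of the Neumann series:", np.linalg.norm(V - exact, 2))
print("bound (1 - ||T||)^(-1) =", 1 / (1 - 0.4), ">= ||(I-T)^{-1}|| =", np.linalg.norm(exact, 2))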

Lemma 12.9 Let H be a Hilbert space and T, T^{-1} ∈ B(H, H). If U ∈ B(H, H) and ‖U‖ ‖T^{-1}‖ < 1, then the operator T + U is invertible and
    ‖(T + U)^{-1}‖ ≤ \frac{‖T^{-1}‖}{1 − ‖U‖ ‖T^{-1}‖} .

Proof: Consider the operator V = T^{-1}(T + U) = I + T^{-1}U. Since
    ‖T^{-1}U‖ ≤ ‖T^{-1}‖ ‖U‖ < 1 ,
Lemma 12.8 implies that V is invertible and
    ‖V^{-1}‖ ≤ ( 1 − ‖T^{-1}‖ ‖U‖ )^{-1} .
Moreover, the definition of V implies that T + U = TV. The composition of the invertible operators T and V is invertible and consequently
    (T + U)^{-1} = V^{-1}T^{-1} .
Finally, ‖(T + U)^{-1}‖ ≤ ‖V^{-1}‖ ‖T^{-1}‖ implies the desired upper bound for the norm of the inverse operator.


Properties of the spectrum

Lemma 12.10 If A : H → H is bounded and λ ∈ σ(A) then \overline{λ} ∈ σ(A*).
Proof: If λ ∈ R(A) then A − λI has a bounded inverse:
    (A − λI)(A − λI)^{-1} = I = (A − λI)^{-1}(A − λI) .
Taking adjoints we obtain
    ((A − λI)^{-1})^* (A* − \overline{λ}I) = I = (A* − \overline{λ}I) ((A − λI)^{-1})^* .
Consequently, (A* − \overline{λ}I) has a bounded inverse ((A − λI)^{-1})^* (the adjoint of a bounded operator). Therefore λ ∈ R(A) iff \overline{λ} ∈ R(A*). Since the spectrum is the complement of the resolvent set, we also get λ ∈ σ(A) iff \overline{λ} ∈ σ(A*).

Proposition 12.11 If A is bounded and λ ∈ σ(A) then |λ| ≤ ‖A‖_{op}.
Proof: Take λ ∈ ℂ such that |λ| > ‖A‖_{op}. Since ‖λ^{-1}A‖_{op} < 1, Lemma 12.8 implies that I − λ^{-1}A is invertible and the inverse operator is bounded. Consequently, A − λI = −λ(I − λ^{-1}A) also has a bounded inverse and so λ ∈ R(A). The proposition follows immediately since σ(A) is the complement of R(A).

Proposition 12.12 If A is bounded then R(A) is open and σ(A) is closed.
Proof: Let λ ∈ R(A). Then T = (A − λI) has a bounded inverse. Set U = −μI. Obviously, ‖U‖ = |μ|. Let
    |μ| < ‖T^{-1}‖^{-1} ;
then Lemma 12.9 implies that T + U = A − (λ + μ)I also has a bounded inverse. So λ + μ ∈ R(A). Consequently R(A) is open and σ(A) = ℂ \ R(A) is closed.

Example: The spectra of T_l and of T_r are both equal to the closed unit disk in the complex plane.
Indeed, σ_p(T_l) = { λ ∈ ℂ : |λ| < 1 }. Since σ_p(T_l) ⊂ σ(T_l) and σ(T_l) is closed, we conclude that σ(T_l) includes the closed unit disk. On the other hand, Proposition 12.11 implies that σ(T_l) is a subset of the closed disk |λ| ≤ ‖T_l‖_{op} = 1. Therefore
    σ(T_l) = { λ ∈ ℂ : |λ| ≤ 1 } .
Since T_r = T_l* and σ(T_l) is symmetric with respect to the real axis, Lemma 12.10 implies σ(T_r) = σ(T_l).


13 Compact operators

13.1 Definition, properties and examples

Definition 13.1 Let X be a normed space and Y be a Banach space. Then a linear operator A : X → Y is compact if the image of any bounded sequence has a convergent subsequence.

Obviously a compact operator is bounded. Indeed, otherwise there is a sequence (x_n) with ‖x_n‖ = 1 such that ‖Ax_n‖ > n for each n. The sequence (Ax_n) does not contain a convergent subsequence (it does not even contain a bounded subsequence).

Example: Any bounded operator with finite-dimensional range is compact. Indeed, in a finite-dimensional space any bounded sequence has a convergent subsequence.

Proposition 13.2 Let X be a normed space and Y be a Banach space. A linear operator A : X → Y is compact iff the image of the unit sphere is sequentially compact.
Theorem 13.3 If X is a normed space and Y is a Banach space, then the compact linear operators form a closed linear subspace of B(X, Y).

Proof: If K₁, K₂ are compact operators and α₁, α₂ ∈ K, then α₁K₁ + α₂K₂ is also compact. Indeed, take any bounded sequence (x_n) in X. There is a subsequence (x_{n_{1j}}) such that (K₁x_{n_{1j}}) converges. This subsequence is also bounded, so it contains a subsequence (x_{n_{2j}}) such that (K₂x_{n_{2j}}) converges. Obviously (K₁x_{n_{2j}}) also converges, therefore α₁K₁x_{n_{2j}} + α₂K₂x_{n_{2j}} is convergent, and consequently α₁K₁ + α₂K₂ is compact. Therefore the compact operators form a linear subspace.
Let us prove that this subspace is closed. Let (K_n) be a convergent sequence of compact operators: K_n → K in B(X, Y). Take any bounded sequence (x_n) in X. Since K₁ is compact, there is a subsequence (x_{n_{1j}}) such that (K₁x_{n_{1j}}) converges. Since (x_{n_{1j}}) is bounded and K₂ is compact, there is a subsequence (x_{n_{2j}}) such that (K₂x_{n_{2j}}) converges. Repeat this inductively: for each k there is a subsequence (x_{n_{kj}}) of the original sequence such that (K_l x_{n_{kj}}) converges as j → ∞ for all l ≤ k.
Consider the diagonal sequence y_j = x_{n_{jj}}. Obviously (y_j)_{j=k}^{∞} is a subsequence of (x_{n_{kj}})_{j=1}^{∞}. Consequently (K_l y_j) converges as j → ∞ for every l.
In order to show that K is compact it is sufficient to prove that (Ky_j) is Cauchy:
    ‖Ky_j − Ky_l‖ ≤ ‖Ky_j − K_n y_j‖ + ‖K_n y_j − K_n y_l‖ + ‖Ky_l − K_n y_l‖
               ≤ ‖K − K_n‖ ( ‖y_j‖ + ‖y_l‖ ) + ‖K_n y_j − K_n y_l‖ .
Given ε > 0, choose n sufficiently large to ensure that the first term is less than ε/2, then choose N sufficiently large to guarantee that the second term is less than ε/2 for all j, l > N. So (Ky_j) is Cauchy and consequently converges. Therefore K is a compact operator, and the subspace formed by compact operators is closed.


Proposition 13.4 The integral operator A : L²(a, b) → L²(a, b) defined by
    (Af)(t) = \int_a^b K(t, s) f(s) \, ds    with    \int_a^b \int_a^b |K(t, s)|² \, ds \, dt < ∞
is compact.

Proposition 13.5 If X is a normed space and Y is a Banach space, then the operators with finite-dimensional range are dense among the compact operators in B(X, Y).

13.2 Spectral theory for compact self-adjoint operators

Lemma 13.6 If T : H → H is a compact self-adjoint operator on a Hilbert space H, then at least one of λ = ±‖T‖_{op} is an eigenvalue of T.

Proof: Assume T ≠ 0 (otherwise the lemma is trivial). Since
    ‖T‖_{op} = \sup_{‖x‖=1} |(Tx, x)| ,
there is a sequence x_n ∈ H such that ‖x_n‖ = 1 and |(Tx_n, x_n)| → ‖T‖_{op}. Since (Tx_n, x_n) is real, passing to a subsequence we may assume (Tx_n, x_n) → λ with |λ| = ‖T‖_{op}. Since T is compact, y_n = Tx_n has a convergent subsequence. Relabel this subsequence as (x_n) and let y = \lim_{n→∞} Tx_n. Then (Tx_n, x_n) → λ and
    ‖Tx_n − λx_n‖² = ‖Tx_n‖² − 2λ(Tx_n, x_n) + λ² ≤ 2λ² − 2λ(Tx_n, x_n) .
The right hand side converges to 0 as n → ∞. Consequently Tx_n − λx_n → 0. On the other hand Tx_n → y and consequently x_n also converges:
    x_n → x = λ^{-1}y .
The operator T is continuous and consequently Tx = λx. Finally, since ‖x_n‖ = 1 for all n, we have ‖x‖ = 1, and consequently λ is an eigenvalue.

Proposition 13.7 Let H be an infinite-dimensional Hilbert space and T : H → H a compact self-adjoint operator. Then σ_p(T) is either a finite set or a countable sequence tending to zero. Moreover, every non-zero eigenvalue corresponds to a finite-dimensional eigenspace.

Proof: Suppose there is ε > 0 such that T has infinitely many different eigenvalues λ_n with |λ_n| > ε. Let x_n be corresponding eigenvectors with ‖x_n‖ = 1. Since the operator is self-adjoint, this sequence is orthonormal and for any n ≠ m
    ‖Tx_n − Tx_m‖² = ‖λ_n x_n − λ_m x_m‖² = (λ_n x_n − λ_m x_m, λ_n x_n − λ_m x_m) = |λ_n|² + |λ_m|² > 2ε² .
Consequently, (Tx_n) does not have a convergent subsequence (none of its subsequences is Cauchy). This contradicts the compactness of T. Consequently, σ_p(T) is either finite or a sequence converging to zero.
Now let λ ≠ 0 be an eigenvalue and E_λ the corresponding eigenspace. Let Â : E_λ → E_λ be the restriction of T to E_λ. Since Âx = λx for any x ∈ E_λ, the operator Â maps the unit sphere onto the sphere of radius |λ|. Since Â is compact, the image of the unit sphere is sequentially compact. Therefore the sphere of radius |λ| is compact. Since E_λ is a Hilbert (and consequently Banach) space itself, Theorem 3.18 implies that E_λ is finite-dimensional.

Theorem 13.8 (Hilbert-Schmidt theorem) Let H be a Hilbert space and T : H → H a compact self-adjoint operator. Then there is a finite or countable orthonormal sequence (e_n) of eigenvectors of T, with corresponding real eigenvalues (λ_n), such that
    Tx = \sum_j λ_j (x, e_j) e_j    for all x ∈ H.

Proof: We construct the sequence (e_j) inductively. Let H₁ = H and T₁ = T : H₁ → H₁. Lemma 13.6 implies that there is an eigenvector e₁ ∈ H₁ with ‖e₁‖ = 1 and an eigenvalue λ₁ ∈ ℝ such that |λ₁| = ‖T₁‖_{B(H₁,H₁)}.
Then let H₂ = { x ∈ H₁ : x ⊥ e₁ }. If x ∈ H₂ then Tx ∈ H₂. Indeed, since T is self-adjoint
    (Tx, e₁) = (x, Te₁) = λ₁(x, e₁) = 0
and Tx ∈ H₂. Therefore the restriction of T to H₂ is an operator T₂ : H₂ → H₂. Since H₂ is an orthogonal complement, it is closed and so is a Hilbert space itself. Lemma 13.6 implies that there is an eigenvector e₂ ∈ H₂ with ‖e₂‖ = 1 and an eigenvalue λ₂ ∈ ℝ such that |λ₂| = ‖T₂‖_{B(H₂,H₂)}. Then let H₃ = { x ∈ H₂ : x ⊥ e₂ } and repeat the procedure as long as T_n is not zero.
Suppose T_n = 0 for some n ∈ ℕ. Then for any x ∈ H let
    y = x − \sum_{j=1}^{n−1} (x, e_j) e_j .
Applying T to this equality we get
    Ty = Tx − \sum_{j=1}^{n−1} (x, e_j) Te_j = Tx − \sum_{j=1}^{n−1} (x, e_j) λ_j e_j .
Since y ⊥ e_j for j < n we have y ∈ H_n and consequently Ty = T_n y = 0. Therefore
    Tx = \sum_{j=1}^{n−1} λ_j (x, e_j) e_j ,
which is the required formula for T.
Suppose T_n ≠ 0 for all n ∈ ℕ. Then for any x ∈ H and any n consider
    y_n = x − \sum_{j=1}^{n−1} (x, e_j) e_j .
Since y_n ⊥ e_j for j < n we have y_n ∈ H_n and
    ‖x‖² = ‖y_n‖² + \sum_{j=1}^{n−1} |(x, e_j)|² .
Consequently ‖y_n‖² ≤ ‖x‖². On the other hand ‖T_n‖ = |λ_n| and
    \left\| Tx − \sum_{j=1}^{n−1} λ_j (x, e_j) e_j \right\| = ‖Ty_n‖ ≤ ‖T_n‖ ‖y_n‖ ≤ |λ_n| ‖x‖ ,
and since λ_n → 0 as n → ∞ we have
    Tx = \sum_{j=1}^{∞} λ_j (x, e_j) e_j .
Corollary 13.9 Let H be an infinite-dimensional separable Hilbert space and T : H → H a compact self-adjoint operator. Then there is an orthonormal basis E = { e_j : j ∈ ℕ } of H such that Te_j = λ_j e_j for all j ∈ ℕ and
    Tx = \sum_{j=1}^{∞} λ_j (x, e_j) e_j    for all x ∈ H.

Exercise: Deduce that operators with finite-dimensional range are dense among compact self-adjoint operators.
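In a finite-dimensional space every self-adjoint operator is compact, and the Hilbert-Schmidt theorem reduces to the familiar spectral theorem for symmetric (or Hermitian) matrices. A hedged NumPy sketch illustrating the formula Tx = Σ_j λ_j (x, e_j) e_j in this finite-dimensional setting:

import numpy as np

rng = np.random.default_rng(5)
B = rng.standard_normal((4, 4))
T = (B + B.T) / 2                     # a real symmetric matrix = self-adjoint operator on R^4

lams, E = np.linalg.eigh(T)           # eigenvalues and an orthonormal basis of eigenvectors
x = rng.standard_normal(4)

Tx_spectral = sum(lam * np.dot(x, e) * e for lam, e in zip(lams, E.T))
print(np.allclose(T @ x, Tx_spectral))   # True: T x = sum_j lambda_j (x, e_j) e_j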
Theorem 13.10 If H is an infinite-dimensional Hilbert space and T : H → H is a compact self-adjoint operator, then σ(T) = \overline{σ_p(T)}.

Proposition 13.7 states that σ_p(T) is either a finite or countably infinite set. Moreover, if σ_p(T) is not finite, zero is its unique limit point. If σ_p(T) is finite, then the Hilbert-Schmidt theorem implies that the kernel of T is not trivial (since Ker T ⊇ { e_n }^⊥), and consequently 0 ∈ σ_p(T). Therefore Theorem 13.10 means that σ(T) = σ_p(T) ∪ { 0 }. In particular, σ(T) = σ_p(T) if zero is an eigenvalue.

Proof: We will prove the theorem assuming that H is separable.¹⁴

¹⁴ The proof uses a countable basis in H. The theorem remains valid for a non-separable H, but the proof requires a modification, which is based on the following observation: Proposition 13.7 implies that (Ker T)^⊥ has an at most countable orthonormal basis of eigenvectors { e_j }. Then for any vector x ∈ H write x = P_{Ker T}(x) + \sum_{j=1}^{∞} (x, e_j) e_j, where P_{Ker T} is the orthogonal projection onto the kernel of T. Then follow the arguments of the proof, adding this term when necessary.

Then Corollary 13.9 implies that
    Tx = \sum_{j=1}^{∞} λ_j (x, e_j) e_j ,
where { e_j } is an orthonormal basis in H. Then x = \sum_{j=1}^{∞} (x, e_j) e_j and for any λ ∈ ℂ
    (T − λI)x = \sum_{j=1}^{∞} (λ_j − λ)(x, e_j) e_j .
Let λ ∈ ℂ \ \overline{σ_p(T)}, which is an open subset of ℂ. Consequently there is ε > 0 such that |λ − μ| > ε for all μ ∈ \overline{σ_p(T)} ⊇ σ_p(T). Consider the operator S defined by
    Sy = \sum_{k=1}^{∞} \frac{(y, e_k)}{λ_k − λ} e_k .
Lemma 7.12 implies that the series converges, since |λ_k − λ| > ε and
    ‖Sy‖² = \sum_{k=1}^{∞} \left| \frac{(y, e_k)}{λ_k − λ} \right|² ≤ ε^{-2} \sum_{k=1}^{∞} |(y, e_k)|² = ε^{-2} ‖y‖² .
In particular we see that S is bounded with ‖S‖_{op} ≤ ε^{-1}. Moreover S = (T − λI)^{-1}. Indeed,
    (T − λI)Sy = \sum_{j=1}^{∞} (λ_j − λ)(Sy, e_j) e_j = \sum_{j=1}^{∞} (y, e_j) e_j = y
and
    S(T − λI)x = \sum_{j=1}^{∞} \frac{((T − λI)x, e_j)}{λ_j − λ} e_j = \sum_{j=1}^{∞} (x, e_j) e_j = x .
Then S = (T − λI)^{-1} and λ ∈ R(T), so σ(T) ⊂ \overline{σ_p(T)}. On the other hand, σ_p(T) ⊂ σ(T), and since σ(T) is closed, \overline{σ_p(T)} ⊂ σ(T). We conclude σ(T) = \overline{σ_p(T)}.



14 Sturm-Liouville problems

In this chapter we will study the Sturm-Liouville problem: a differential equation of the form
$$ -\frac{d}{dx}\Bigl( p(x)\,\frac{du}{dx} \Bigr) + q(x)\,u = \lambda u \qquad \text{with } u(a) = u(b) = 0, $$
where $p$ and $q$ are given functions on the interval $[a,b]$. The values of $\lambda$ for which the problem has a non-trivial solution are called eigenvalues of the Sturm-Liouville problem, and the corresponding solutions $u$ are called eigenfunctions.
An eigenvalue is called simple if the corresponding eigenspace is one-dimensional.
The main conclusion of this chapter is the following theorem:

Theorem 14.1 If $p \in C^1[a,b]$, $q \in C^0[a,b]$, $p(x) > 0$ and $q(x) \ge 0$ for all $x \in [a,b]$, then
(i) the eigenvalues of the Sturm-Liouville problem are all simple,
(ii) they form an unbounded monotone sequence,
(iii) the eigenfunctions of the Sturm-Liouville problem form an orthonormal basis in $L^2(a,b)$.
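A quick illustration of the statement (this computation is not part of the notes; the case $a = 0$, $b = 1$ reappears in detail at the end of the chapter): for $p \equiv 1$, $q \equiv 0$ the problem is $-u'' = \lambda u$ with $u(a) = u(b) = 0$, whose eigenvalues and eigenfunctions are
$$ \lambda_k = \Bigl( \frac{k\pi}{b-a} \Bigr)^2, \qquad u_k(x) = \sin\frac{k\pi(x-a)}{b-a}, \qquad k \in \mathbb{N} , $$
so each eigenvalue is simple and $\lambda_k \nearrow \infty$, in agreement with (i) and (ii).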
For a function $u \in C^2[a,b]$ we define
$$ L(u) = -\frac{d}{dx}\Bigl( p(x)\,\frac{du}{dx} \Bigr) + q(x)\,u . $$
Let $L_0 : D_0 \to C^0[a,b]$ be the restriction of $L$ to the space
$$ D_0 := \bigl\{\, u \in C^2[a,b] : u(a) = u(b) = 0 \,\bigr\} . $$
We can equip both $D_0$ and $C^0[a,b]$ with the $L^2$ norm. Integrating by parts, we can check that $L_0$ is symmetric, i.e. $(u, L_0 v) = (L_0 u, v)$ for all $u, v \in D_0$ (a sketch of the computation is given below). On the other hand, considering $L_0$ on the sequence $u_n = \frac{1}{n}\sin\bigl(\pi n(x-a)/(b-a)\bigr)$, we can check that $L_0$ is not bounded. We have not studied unbounded operators.
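For completeness, here is the integration-by-parts computation behind the symmetry claim (a sketch, for real-valued $u, v \in D_0$; the boundary terms vanish because $u$ and $v$ vanish at $a$ and $b$):
$$ (L_0 u, v) = \int_a^b \bigl( -(p u')' + q u \bigr) v \,dx = \bigl[ -p\,u'\,v \bigr]_a^b + \int_a^b \bigl( p\,u'v' + q\,uv \bigr)\,dx = \int_a^b \bigl( p\,u'v' + q\,uv \bigr)\,dx , $$
and the last expression is symmetric in $u$ and $v$, hence it also equals $(u, L_0 v)$.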
Eigenfunctions of the Sturm-Liouville problem are eigenvectors of $L_0$. In order to prove Theorem 14.1, we will show that $L_0$ is invertible and that $L_0^{-1}$ coincides with the restriction to $C^0[a,b]$ of a compact self-adjoint operator $A : L^2(a,b) \to L^2(a,b)$. The Hilbert-Schmidt theorem (see Corollary 13.9) implies that the eigenfunctions of $A$ form an orthonormal basis in $L^2(a,b)$. Moreover, we will see that all eigenfunctions of $A$ belong to $D_0$ and, consequently, $L_0$ has the same eigenfunctions as $A$.


Differential equation Lu = f

Lemma 14.2 If both $u_1$ and $u_2$ satisfy the equation $Lu = 0$, i.e.
$$ -(pu')' + qu = 0, \qquad (14.1) $$
then
$$ W_p(u_1, u_2) = p\,(u_1'u_2 - u_1 u_2') $$
is constant. Moreover, if $W_p(u_1, u_2) \ne 0$ then $u_1$ and $u_2$ are linearly independent.
Proof: Differentiating $W_p$ with respect to $x$ and using $pu'' = -p'u' + qu$ we obtain
$$ W_p' = p'(u_1'u_2 - u_1u_2') + p\,(u_1''u_2 - u_1u_2'') = p'(u_1'u_2 - u_1u_2') + \bigl( (-p'u_1' + qu_1)u_2 - (-p'u_2' + qu_2)u_1 \bigr) = 0 . $$
Therefore $W_p$ is constant.

Suppose $u_1$ and $u_2$ are linearly dependent; then there are constants $\alpha_1, \alpha_2$, at least one of which does not vanish, such that $\alpha_1 u_1 + \alpha_2 u_2 = 0$. Suppose $\alpha_2 \ne 0$ (otherwise swap $u_1$ and $u_2$). Then $u_2 = -\alpha_1 u_1/\alpha_2$ and $u_2' = -\alpha_1 u_1'/\alpha_2$. Substituting these equalities into $W_p(u_1, u_2)$ we see that $W_p(u_1, u_2) = 0$. Therefore $W_p(u_1, u_2) \ne 0$ implies that $u_1$, $u_2$ are linearly independent.
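A minimal illustration (not in the notes): for $p \equiv 1$, $q \equiv 0$, equation (14.1) reads $u'' = 0$, and the solutions $u_1(x) = 1$, $u_2(x) = x$ give
$$ W_p(u_1, u_2) = u_1'u_2 - u_1u_2' = 0\cdot x - 1\cdot 1 = -1 , $$
which is indeed constant and non-zero, confirming that $u_1$, $u_2$ are linearly independent; for $p \equiv 1$, $W_p$ is the classical Wronskian up to sign.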

Lemma 14.3 The equation (14.1) has two linearly independent solutions $u_1, u_2 \in C^2[a,b]$ such that $u_1(a) = u_2(b) = 0$.

Proof: Let $u_1$, $u_2$ be the solutions of the Cauchy problems
$$ -(pu_1')' + qu_1 = 0, \qquad u_1(a) = 0, \quad u_1'(a) = 1, $$
$$ -(pu_2')' + qu_2 = 0, \qquad u_2(b) = 0, \quad u_2'(b) = 1 . $$
According to the theory of linear ordinary differential equations, $u_1$ and $u_2$ exist, belong to $C^2[a,b]$ and are unique.
Moreover, $u_1$ and $u_2$ are linearly independent. Indeed, suppose $Lu = 0$ for some $u \in C^2[a,b]$ with $u(a) = u(b) = 0$. Then
$$ 0 = (Lu, u) = \int_a^b \bigl( -(pu')'\,u + qu^2 \bigr)\,dx = \Bigl[ -p\,u'\,u \Bigr]_a^b + \int_a^b \bigl( p(u')^2 + qu^2 \bigr)\,dx = \int_a^b \bigl( p(u')^2 + qu^2 \bigr)\,dx , $$
using the definition of $L$ and integration by parts; the boundary term vanishes since $u(a) = u(b) = 0$. Since $p > 0$ on $[a,b]$, we conclude that $u' \equiv 0$. Then $u(a) = u(b) = 0$ implies $u(x) = 0$ for all $x \in [a,b]$.

Consequently $u_2(a) \ne 0$: otherwise $u_2$ would satisfy $u_2(a) = u_2(b) = 0$, and the computation above would force $u_2 \equiv 0$, contradicting $u_2'(b) = 1$. Therefore
$$ W_p(u_1, u_2) = p(a)\bigl( u_1'(a)\,u_2(a) - u_1(a)\,u_2'(a) \bigr) = p(a)\,u_1'(a)\,u_2(a) \ne 0, $$
and so $u_1$, $u_2$ are linearly independent by Lemma 14.2.

Lemma 14.4 If $u_1$ and $u_2$ are linearly independent solutions of the equation $Lu = 0$ such that $u_1(a) = u_2(b) = 0$, and
$$ G(x,y) = \frac{1}{W_p(u_1,u_2)} \begin{cases} u_1(x)\,u_2(y), & a \le x < y \le b, \\ u_1(y)\,u_2(x), & a \le y \le x \le b, \end{cases} $$
then for any $f \in C^0[a,b]$ the function
$$ u(x) = \int_a^b G(x,y)\,f(y)\,dy $$
belongs to $C^2[a,b]$, satisfies the equation $Lu = f$ and the boundary conditions $u(a) = u(b) = 0$.
Proof: The statement is proved by direct substitution of
$$ u(x) = \frac{u_2(x)}{W_p(u_1,u_2)} \int_a^x u_1(y)\,f(y)\,dy + \frac{u_1(x)}{W_p(u_1,u_2)} \int_x^b u_2(y)\,f(y)\,dy $$
into the differential equation. Moreover, $u_1(a) = u_2(b) = 0$ implies $u(a) = u(b) = 0$.
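As an illustration (not part of the notes): for $p \equiv 1$, $q \equiv 0$ on $[0,1]$ one can take $u_1(x) = x$ and $u_2(x) = x - 1$, so that $W_p(u_1,u_2) = u_1'u_2 - u_1u_2' = (x-1) - x = -1$ and the Green's function becomes
$$ G(x,y) = \begin{cases} x\,(1-y), & 0 \le x < y \le 1, \\ y\,(1-x), & 0 \le y \le x \le 1 . \end{cases} $$
A direct check shows that $u(x) = \int_0^1 G(x,y)\,f(y)\,dy$ satisfies $-u'' = f$ and $u(0) = u(1) = 0$.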


Integral operator

Lemma 14.5 The operator $A : L^2(a,b) \to L^2(a,b)$ defined by
$$ (Af)(x) = \int_a^b G(x,y)\,f(y)\,dy $$
is compact and self-adjoint. Moreover, $\operatorname{Range}(A)$ is dense in $L^2(a,b)$, $\operatorname{Ker} A = \{0\}$, and all eigenfunctions, $Au = \mu u$, belong to $C^2[a,b]$ and satisfy $u(a) = u(b) = 0$.
Proof: Since the kernel $G$ is continuous, the operator $A$ is compact by Proposition 13.4. Moreover, $G$ is real and symmetric, so $A$ is self-adjoint. Lemma 14.4 implies that the range of $A$ contains all functions $u \in C^2[a,b]$ with $u(a) = u(b) = 0$. This set is dense in $L^2(a,b)$.

Now suppose $Au = 0$ for some $u \in L^2(a,b)$. Then for any $v \in L^2(a,b)$
$$ 0 = (Au, v) = (u, Av) , $$
which implies $u = 0$ because $u$ is orthogonal to a dense set (the range of $A$). Thus $\operatorname{Ker}(A) = \{0\}$.

Finally, let $u \in L^2(a,b)$ be an eigenfunction of $A$, i.e. $Au = \mu u$. Since $\operatorname{Ker}(A) = \{0\}$, $\mu \ne 0$. So we can write $u = \mu^{-1} A u$, which takes the form of the integral equation
$$ u(x) = \mu^{-1} \int_a^b G(x,y)\,u(y)\,dy . $$
Clearly $|G(x,y)u(y)| \le \|G\|_{\infty}\,|u(y)|$ for all $x, y \in [a,b]$. Since $G$ is continuous, the Dominated Convergence Theorem implies that we can swap the limit $x \to x_0$ with the integration, and thus the integral on the right-hand side is a continuous function of $x$. Consequently, $u$ is continuous. For a continuous $u$ the integral belongs to $C^2[a,b]$ and satisfies the boundary conditions $u(a) = u(b) = 0$ by Lemma 14.4. Thus $u \in D_0$. Therefore the eigenfunctions of $A$ belong to $D_0$.

Proof of Theorem 14.1: Since $A : L^2(a,b) \to L^2(a,b)$ is compact and self-adjoint, Corollary 13.9 implies that its eigenvectors form an orthonormal basis in $L^2(a,b)$. If $u$ is an eigenfunction of $A$, then Lemma 14.5 implies that $u \in C^2[a,b]$ and $u(a) = u(b) = 0$. Moreover, Lemma 14.4 and $Au = \mu u$ with $\mu \ne 0$ imply that $Lu = \lambda u$ with $\lambda = \mu^{-1}$. Consequently, $u$ is also an eigenfunction of the Sturm-Liouville problem.

Finally, suppose that $u$ is an eigenfunction of the Sturm-Liouville problem and $\tilde{u}$ is another eigenfunction corresponding to the same eigenvalue. Both eigenfunctions satisfy the linear ordinary differential equation $L(u) = \lambda u$ and $u(a) = \tilde{u}(a) = 0$. Note that $u'(a) \ne 0$, for otherwise $u$ would vanish identically by uniqueness for the Cauchy problem. Then $\tilde{u}(x) = u(x)\,\tilde{u}'(a)/u'(a)$, again by uniqueness of the solution of the Cauchy problem. Thus the eigenspace is one-dimensional.


Example: An application to Fourier series

Consider the Sturm-Liouville problem
$$ -\frac{d^2u}{dx^2} = \lambda u, \qquad u(0) = u(1) = 0 . $$
It corresponds to the choice $p = 1$, $q = 0$. Theorem 14.1 implies that the normalised eigenfunctions of this problem form an orthonormal basis in $L^2(0,1)$. In this example the eigenfunctions are easy to find:
$$ \bigl\{\, \sqrt{2}\,\sin k\pi x : k \in \mathbb{N} \,\bigr\} . $$
Consequently any function $f \in L^2(0,1)$ can be written in the form
$$ f(x) = \sum_{k=1}^{\infty} \alpha_k \sin k\pi x, \qquad \text{where } \alpha_k = 2 \int_0^1 f(x)\,\sin k\pi x \,dx . $$
The series converges in the $L^2$ norm.
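The conclusions of Theorem 14.1 can also be checked numerically. The following short script is an illustrative sketch only (it is not part of the notes): it uses a standard finite-difference discretisation of $-u'' = \lambda u$, $u(0) = u(1) = 0$, assumes NumPy is available, and compares the smallest computed eigenvalues with $(k\pi)^2$.

import numpy as np

# Numerical sketch (not from the notes): finite-difference approximation of
# -u'' = lambda * u,  u(0) = u(1) = 0,  on N interior grid points.
N = 200
h = 1.0 / (N + 1)

# Symmetric tridiagonal matrix approximating -d^2/dx^2 with Dirichlet conditions.
A = (2.0 * np.eye(N) - np.eye(N, k=1) - np.eye(N, k=-1)) / h**2

eigvals, eigvecs = np.linalg.eigh(A)  # eigenvalues returned in ascending order

# The smallest eigenvalues should approximate (k*pi)^2, k = 1, 2, 3, ...
for k in range(1, 6):
    print(k, eigvals[k - 1], (k * np.pi) ** 2)

# The columns of eigvecs are orthonormal in R^N; rescaled by 1/sqrt(h) they are
# orthonormal in the discrete L^2(0,1) inner product (u, v) ~ h * sum(u * v),
# mirroring Theorem 14.1(iii).
U = eigvecs / np.sqrt(h)
print(np.allclose(h * (U.T @ U), np.eye(N)))

For N = 200 the first few computed eigenvalues agree with $(k\pi)^2$ to within a fraction of a percent, and the rescaled eigenvectors are orthonormal in the discrete $L^2$ inner product.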
