A First Course in Functional Analysis
Theory and Applications

Rabindranath Sen

Anthem Press
An imprint of Wimbledon Publishing Company
www.anthempress.com

This edition first published in UK and USA 2013
by ANTHEM PRESS
75–76 Blackfriars Road, London SE1 8HA, UK
or PO Box 9779, London SW19 7ZG, UK
and
244 Madison Ave #116, New York, NY 10016, USA

Copyright © Rabindranath Sen 2013

The author asserts the moral right to be identified as the author of this work.

All rights reserved. Without limiting the rights under copyright reserved above, no part of this publication may be reproduced, stored or introduced into a retrieval system, or transmitted, in any form or by any means (electronic, mechanical, photocopying, recording or otherwise), without the prior written permission of both the copyright owner and the above publisher of this book.

British Library Cataloguing-in-Publication Data
A catalogue record for this book is available from the British Library.

Library of Congress Cataloging-in-Publication Data
A catalog record for this book has been requested.

ISBN-13: 978 0 85728 320 7 (Hbk)
ISBN-10: 0 85728 320 0 (Hbk)

This title is also available as an eBook.
Preface
This book is the outgrowth of the lectures delivered on functional
analysis and allied topics to the postgraduate classes in the Department
of Applied Mathematics, Calcutta University, India. I feel I owe an
explanation as to why I should write a new book, when a large number of
books on functional analysis at the elementary level are available. Behind
every abstract thought there is a concrete structure. I have tried to unveil
the motivation behind every important development of the subject matter.
I have endeavoured to make the presentation lucid and simple so that the
learner can read without outside help.
The first chapter, entitled Preliminaries, contains discussions on topics
of which knowledge will be necessary for reading the later chapters. The
first concepts introduced are those of a set, the cardinal number, the
different operations on a set and a partially ordered set respectively.
Important notions like Zorn's lemma and Zermelo's axiom of choice are stated
next. The concepts of a function and mappings of different types are
introduced and exhibited with examples. Next comes the notion of a linear
space and examples of different types of linear spaces. The definition of
subspace and the notion of linear dependence or independence of members
of a subspace are introduced. Ideas of partition of a space as a direct
sum of subspaces and quotient space are explained. Metric space as an
abstraction of the real line is introduced. A broad overview of a metric
space including the notions of convergence of a sequence, completeness,
compactness and a criterion for compactness in a metric space is provided in
the first chapter. Examples of a non-metrizable space and an incomplete
metric space are also given. The contraction mapping principle and its
application in solving different types of equations are demonstrated. The
concepts of an open set, a closed set and a neighbourhood in a metric
space are also explained in this chapter. The necessity for the introduction
of topology is explained first. Next, the axioms of a topological space are
stated. It is pointed out that the conclusions of the Heine–Borel theorem
in a real space are taken as the axioms of an abstract topological space.
Next the ideas of openness and closedness of a set, the neighbourhood of
a point in a set, the continuity of a mapping, compactness, criterion for
compactness and separability of a space naturally follow.
Chapter 2 is entitled Normed Linear Space. If a linear space admits a
metric structure it is called a metric linear space. A normed linear space is a
type of metric linear space, and for every element x of the space there exists
a positive number, called the norm of x or ‖x‖, fulfilling certain axioms. A normed
linear space can always be reduced to a metric space by the choice of a
suitable metric. Ideas of convergence in norm and completeness of a normed
linear space are introduced with examples of several normed linear spaces,
Banach spaces (complete normed linear spaces) and incomplete normed
linear spaces.
Continuity of a norm and equivalence of norms in a finite dimensional
normed linear space are established. The definition of a subspace and its
various properties as induced by the normed linear space of which this
is a subspace are discussed. The notion of a quotient space and its role
in generating new Banach spaces are explained. Riesz's lemma is also
discussed.
Chapter 3 dwells on Hilbert space. The concepts of inner product space,
complete inner product space or Hilbert space are introduced. The parallelogram law,
orthogonality of vectors, the Cauchy–Bunyakovsky–Schwarz inequality, and
continuity of scalar (inner) product in a Hilbert space are discussed. The
notions of a subspace, orthogonal complement and direct sum in the setting
of a Hilbert space are introduced. The orthogonal projection theorem takes
a special place.
Orthogonality, various orthonormal polynomials and Fourier series are
discussed elaborately. Isomorphism between separable Hilbert spaces is
also addressed. Linear operators and their elementary properties, space
of linear operators, linear operators in normed linear spaces and the norm
of an operator are discussed in Chapter 4. Linear functionals, space of
bounded linear operators and the uniform boundedness principle and its
applications, uniform and pointwise convergence of operators and inverse
operators and the related theories are presented in this chapter. Various
types of linear operators are illustrated. In the next chapter, the theory of
linear functionals is discussed. In this chapter I introduce the notions of
a linear functional, a bounded linear functional and the limiting process,
and assert continuity in the case of boundedness of the linear functional
and vice versa. In the case of linear functionals, apart from different
examples, representations of functionals in different
Banach and Hilbert spaces are studied. The famous Hahn–Banach theorem
on the extension of a functional from a subspace to the entire space with
preservation of norm is explained, and the consequences of the theorem
are presented in a separate chapter. The notions of adjoint operators and
conjugate space are also discussed. Chapter 6 is entitled Space of Bounded
Linear Functionals. The chapter dwells on the duality between a normed
linear space and the space of all bounded linear functionals on it. Initially
the notions of dual of a normed linear space and the transpose of a bounded
linear operator on it are introduced. The zero spaces and range spaces of a
bounded linear operator and of its duals are related. The duals of Lp ([a, b])
and C([a, b]) are described. Weak convergence in a normed linear space
and its dual is also discussed. A reflexive normed linear space is one for
which the canonical embedding in the second dual is surjective (onto). An elementary proof of Eberlein's theorem is presented. Chapter 7 is
entitled Closed Graph Theorem and its Consequences. At the outset the
definitions of a closed operator and the graph of an operator are given. The
closed graph theorem, which establishes the conditions under which a closed
linear operator is bounded, is provided. After introducing the concept of an
open mapping, the open mapping theorem and the bounded inverse theorem
are proved. Application of the open mapping theorem is also provided. The
next chapter bears the title Compact Operators on Normed Linear Spaces.
Compact linear operators are very important in applications. They play a
crucial role in the theory of integral equations and in various problems of
mathematical physics. Starting from the definition of compact operators,
the criterion for compactness of a linear operator with a finite dimensional
domain or range in a normed linear space and other results regarding
compact linear operators are established. The spectral properties of a
compact linear operator are studied. The notion of the Fredholm alternative
is discussed and the relevant theorems are provided. Methods of finding an
approximate solution of certain equations involving compact operators in
a normed linear space are explored. Chapter 9 bears the title Elements of
Spectral Theory of Self-Adjoint Operators in Hilbert Spaces. Starting from
the definition of adjoint operators, self-adjoint operators and their various
properties are elaborated upon in the context of a Hilbert space. Quadratic
forms and quadratic Hermitian forms are introduced in a Hilbert space and
their bounds are discovered. I define a unitary operator in a Hilbert space,
and the situation when two operators are said to be unitarily equivalent
is explained. The notion of a projection operator in a Hilbert space is
introduced and its various properties are investigated. Positive operators
and the square root of operators in a Hilbert space are introduced and
their properties are studied. The spectrum of a self-adjoint operator in a
Hilbert space is studied and the point spectrum and continuous spectrum
are explained. The notion of invariant subspaces in a Hilbert space is
also brought within the purview of the discussion. Chapter 10 is entitled
Measure and Integration in Lp Spaces. In this chapter I discuss the theory
of Lebesgue integration and p-integrable functions on ℝ. Spaces of these
functions provide very useful examples of many theorems in functional
analysis. It is pointed out that the concept of the Lebesgue measure is
a generalization of the idea of subintervals of given length in ℝ to a class
of subsets of ℝ. The ideas of the Lebesgue outer measure of a set E,
Lebesgue measurable set E and the Lebesgue measure of E are introduced.
The notions of measurable functions and integrable functions in the sense
of Lebesgue are explained. Fundamental theorems of Riemann integration
and Lebesgue integration, and the Fubini and Tonelli theorems, are stated and
explained. Lp spaces (spaces of functions p-integrable on a measurable
subset E of ℝ) are introduced; it is shown that Lp(E) is complete, and related
properties are discussed. Fourier series and then the Fourier integral for such functions are
investigated. In the next chapter, entitled Unbounded Linear Operators,
I first give some examples of differential operators that are not bounded.
But these are closed operators, or at least have closed linear extensions. It
is indicated in this chapter that many of the important theorems that hold
for continuous linear operators on a Banach space also hold for closed linear
operators. I define the different states of an operator depending on whether
the range of the operator is the whole of a Banach space or the closure of
the range is the whole space or the closure of the range is not equal to
the space. Next the characterization of states of operators is presented.
Strictly singular operators are then defined and accompanied by examples.
Operators that appear in connection with the study of quantum mechanics
also come within the purview of the discussion. The relationship between
strictly singular and compact operators is explored. Next comes the study
of perturbation theory. The reader is given an operator A, certain
properties of which need to be found out. If A is a complicated operator, we
sometimes express A = T + B, where T is a relatively simple operator and
B is related to T in such a manner that knowledge about the properties
of T is sufficient to gain information about the corresponding properties
of A. In that case, to learn the specific properties of A, we can
replace A with T; in other words, we can treat A as a perturbation of T by B. Here
we study perturbation by a bounded linear operator and perturbation by
a strictly singular operator. Chapter 12 bears the title The Hahn–Banach
Theorem and Optimization Problems. I first explain an optimization
problem. I define a hyperplane and describe what is meant by separating
a set into two parts by a hyperplane. Next the separation theorems for
a convex set are proved with the help of the Hahn–Banach theorem. A
minimum norm problem is posed and the Hahn–Banach theorem is applied
to the proving of various duality theorems. The theorem is also applied to prove
Chebyshev approximation theorems. The optimal control problem is posed
and Pontryagin's problem is mentioned. Theorems on optimal control of
rockets are proved using the Hahn–Banach theorem. Chapter 13 is entitled
Variational Problems and begins by introducing a variational problem.
The aim is to investigate under which conditions a given functional in a
normed linear space admits of an optimum. Many differential equations are
often difficult to solve. In such cases a functional is built out of the given
equation and minimized. One needs to show that such a minimum solves
the given equation. To study those problems, the Gâteaux derivative and the
Fréchet derivative are defined as prerequisites. The equivalence of solving
a variational problem and solving a variational inequality is established.
I then introduce the Sobolev space to study the solvability of differential
equations. In Chapter 14, entitled The Wavelet Analysis, I provide a
brief introduction to the origin of wavelet analysis. It is the outcome of
the confluence of mathematics, engineering and computer science. Wavelet
analysis has begun to play a serious role in a broad range of applications
including signal processing, data and image compression, the solving of
partial differential equations, the modeling of multiscale phenomena and
statistics. Starting from the notion of information, we discuss the scalable
structure of information. Next we discuss the algebra and geometry of
wavelet matrices like the Haar matrices and Daubechies matrices of different
ranks. Thereafter come the onedimensional wavelet systems where the
scaling equation associated with a wavelet matrix, the expansion of a
function in terms of the wavelet system associated with a matrix and other
results are presented. The final chapter is concerned with dynamical
systems. The theory of dynamical systems has its roots in the theory of
ordinary differential equations. Henri Poincaré and later Ivar Bendixson
studied the topological properties of the solutions of autonomous ordinary
differential equations (ODEs) in the plane. They did so with a view to
studying the basic properties of autonomous ODEs without trying to find
out the solutions of the equations. The discussion is confined to one-dimensional flow only.
Prerequisites The reader of the book is expected to have a knowledge
of set theory, elements of linear algebra as well as having been exposed to
metric spaces.
Courses The book can be used to teach two semester courses at the M.Sc.
level in universities (MS level in Engineering Institutes):
(i) Basic course on functional analysis. For this, Chapters 2–9 may be
consulted.
(ii) Another course may be developed on linear operator theory. For
this, Chapters 2, 3–5, 7–9 and 11 may be consulted. The Lebesgue
measure is discussed at an elementary level in Chapter 10; Chapters
2–9 can, however, be read without any knowledge of the Lebesgue
measure.
Those who are interested in applications of functional analysis may look
into Chapters 12 and 13.
Acknowledgements I wish to express my profound gratitude to my
advisor, the late Professor Parimal Kanti Ghosh, former Ghose professor
in the Department of Applied Mathematics, Calcutta University, who
introduced me to this subject. My indebtedness to colleagues and teachers
like Professor J. G. Chakraborty, Professor S. C. Basu is duly acknowledged.
Special mention must be made of my colleague and friend Professor A. Roy
who constantly encouraged me to write this book. My wife Mrs. M. Sen
offered all possible help and support to make this project a success, and
thanks are duly accorded. I am also indebted to my sons Dr. Sugata Sen
and Professor Shamik Sen for providing editorial support. Finally I express
my gratitude to the in-house editors and the external reviewer. Several
improvements in form and content were made at their suggestion.
CONTENTS

Introduction

1. Preliminaries
    Set
    Function, Mapping
    Linear Space
    Metric Spaces
    Topological Spaces
    Continuity, Compactness

2. Normed Linear Spaces
    Definitions and Elementary Properties
    Subspace, Closed Subspace
    Finite Dimensional Normed Linear Spaces and Subspaces
    Quotient Spaces
    Completion of Normed Spaces

3. Hilbert Space
    Inner Product Space, Hilbert Space
    Cauchy–Bunyakovsky–Schwarz Inequality
    Parallelogram Law
    Orthogonality
    Orthogonal Projection Theorem
    Orthogonal Complements, Direct Sum
    Orthogonal System
    Complete Orthonormal System
    Isomorphism between Separable Hilbert Spaces

4. Linear Operators
    Definition: Linear Operator
    Linear Operators in Normed Linear Spaces
    Linear Functionals
    The Space of Bounded Linear Operators
    Uniform Boundedness Principle
    Some Applications
    Inverse Operators
    Banach Space with a Basis

5. Linear Functionals
    Hahn–Banach Theorem
    Hahn–Banach Theorem for Complex Vector and Normed Linear Space
    Application to Bounded Linear Functionals on C[a, b]
    The General Form of Linear Functionals in Certain Functional Spaces
    The General Form of Linear Functionals in Hilbert Spaces
    Conjugate Spaces and Adjoint Operators

6. Space of Bounded Linear Functionals
    Conjugates (Duals) and Transposes (Adjoints)
    Conjugates (Duals) of Lp([a, b]) and C([a, b])
    Weak and Weak* Convergence
    Reflexivity
    Best Approximation in Reflexive Spaces

7. Closed Graph Theorem and Its Consequences
    Closed Graph Theorem
    Open Mapping Theorem
    Bounded Inverse Theorem
    Application of the Open Mapping Theorem

8. Compact Operators on Normed Linear Spaces
    Compact Linear Operators
    Spectrum of a Compact Operator
    Fredholm Alternative
    Approximate Solutions

9. Elements of Spectral Theory of Self-Adjoint Operators in Hilbert Spaces
    Adjoint Operators
    Self-Adjoint Operators
    Quadratic Form
    Unitary Operators, Projection Operators
    Positive Operators, Square Roots of a Positive Operator
    Spectrum of Self-Adjoint Operators
    Invariant Subspaces
    Continuous Spectra and Point Spectra

10. Measure and Integration in Lp Spaces
    The Lebesgue Measure on ℝ
    Measurable and Simple Functions
    Calculus with the Lebesgue Measure
    The Fundamental Theorem for Riemann Integration
    The Fundamental Theorem for Lebesgue Integration
    Lp Spaces and Completeness
    Lp Convergence of Fourier Series

11. Unbounded Linear Operators
    Definition: An Unbounded Linear Operator
    States of a Linear Operator
    Definition: Strictly Singular Operators
    Relationship between Singular and Compact Operators
    Perturbation by Bounded Operators
    Perturbation by Strictly Singular Operators
    Perturbation in a Hilbert Space and Applications

12. The Hahn–Banach Theorem and Optimization Problems
    The Separation of a Convex Set
    Minimum Norm Problem and the Duality Theory
    Application to Chebyshev Approximation
    Application to Optimal Control Problems

13. Variational Problems
    Minimization of Functionals in a Normed Linear Space
    Gâteaux Derivative
    Fréchet Derivative
    Equivalence of the Minimization Problem to Solving a Variational Inequality
    Distributions
    Sobolev Space

14. The Wavelet Analysis
    An Introduction to Wavelet Analysis
    The Scalable Structure of Information
    Algebra and Geometry of Wavelet Matrices
    One-dimensional Wavelet Systems

15. Dynamical Systems
    A Dynamical System and Its Properties
    Homeomorphism, Diffeomorphism, Riemannian Manifold
    Stable Points, Periodic Points and Critical Points
    Existence, Uniqueness and Topological Consequences
    Bifurcation Points and Some Results

List of Symbols

Bibliography

Index
Introduction
Functional analysis is an abstract branch of mathematics that grew
out of classical analysis.
It represents one of the most important
branches of the mathematical sciences. Together with abstract algebra and
mathematical analysis, it serves as a foundation of many other branches of
mathematics. Functional analysis is in particular widely used in probability
and random function theory, numerical analysis, mathematical physics and
their numerous applications. It serves as a powerful tool in modern control
and information sciences.
The development of the subject started from the beginning of the
twentieth century, mainly through the initiative of the Russian school
of mathematicians. The impetus came from the developments of linear
algebra, linear ordinary and partial dierential equations, calculus of
variation, approximation theory and, in particular, those of linear integral
equations, the theory of which had the greatest impact on the development
and promotion of modern ideas. Mathematicians observed that problems
from different fields often possess related features and properties. This
allowed for an effective unifying approach towards the problems, the
unification being obtained by the omission of inessential details. Hence
the advantage of such an abstract approach is that it concentrates on the
essential facts, so that they become clearly visible.
Since any such abstract system will in general have concrete realisations
(concrete models), we see that the abstract method is quite versatile in
its applications to concrete situations. In the abstract approach, one
usually starts from a set of elements satisfying certain axioms. The nature
of the elements is left unspecified. The theory then consists of logical
consequences, which result from the axioms and are derived as theorems
once and for all. This means that in the axiomatic fashion one obtains a
mathematical structure with a theory that is developed in an abstract way.
For example, in algebra this approach is used in connection with fields,
rings and groups. In functional analysis, we use it in connection with
abstract spaces; these are all of basic importance.
In functional analysis, the concept of space is used in a very wide and
surprisingly general sense. An abstract space will be a set of (unspecified)
elements satisfying certain axioms, and by choosing different sets of axioms,
we obtain different types of abstract spaces.
The idea of using abstract spaces in a systematic fashion goes back to M.
Fréchet (1906) and is justified by its great success. With the introduction
of abstract space in functional analysis, the language of geometry entered
the arena of the problems of analysis. The result is that some problems of
analysis were subjected to geometric interpretations. Furthermore many
conjectures in mechanics and physics were suggested, keeping in mind
two-dimensional geometry. The geometric methods of proof of many
theorems came into frequent use.
The generalisation of algebraic concepts took place side by side with that
of geometric concepts. Classical analysis, fortified with geometric and
algebraic concepts, became versatile and ready to cope with new problems
not only of mathematics but also of mathematical physics. Thus functional
analysis should form an indispensable part of the mathematics curricula at
the college level.
CHAPTER 1
PRELIMINARIES
In this chapter we recapitulate the mathematical preliminaries that will
be relevant to the development of functional analysis in later chapters.
This chapter comprises six sections. We presume that the reader has been
exposed to an elementary course in real analysis and linear algebra.
1.1 Set
The theory of sets is one of the principal tools of mathematics. One type of
study of set theory addresses the realm of logic, philosophy and foundations
of mathematics. The other study goes into the highlands of mathematics,
where set theory is used as a medium of expression for various concepts
in mathematics. We assume that the sets are not too big, in order to avoid any
unnecessary contradiction. In this connection one can recall the famous
Russell's Paradox (Russell, 1959). A set is a collection of distinct and
distinguishable objects. The objects that belong to a set are called elements,
members or points of the set. If an object a belongs to a set A, then we
write a ∈ A. On the other hand, if a does not belong to A, we write
a ∉ A. A set may be described by listing the elements and enclosing them
in braces. For example, the set A formed out of the letters a, a, a, b, b, c can
be expressed as A = {a, b, c}. A set can also be described by some defining
properties. For example, the set of natural numbers can be written as
N = {x : x is a natural number} or {x | x is a natural number}. Next we
discuss set inclusion. If every element of a set A is an element of the set
B, A is said to be a subset of the set B or B is said to be a superset of
A, and this is denoted by A ⊆ B or B ⊇ A. Two sets A and B are said
to be equal if every element of A is an element of B and every element of
B is an element of A; in other words, if A ⊆ B and B ⊆ A. If A is equal
to B, then we write A = B. A set is generally completely determined by
its elements, but there may be a set that has no element in it. Such a set
is called an empty (or void or null) set and the empty set is denoted by
∅ (phi). ∅ ⊆ A; in other words, the null set is included in any set A; this
fact is vacuously satisfied. Furthermore, if A is a subset of B, A ≠ ∅ and
A ≠ B, then A is said to be a proper subset of B (or B is said to properly
contain A). The fact that A is a proper subset of B is expressed as A ⊂ B.
Let A be a set. Then the set of all subsets of A is called the power set
of A and is denoted by P(A). If A has three elements, like the letters p, q and
r, then the set of all subsets of A has 8 (= 2³) elements. It may be noted
that the null set is also a subset of A. A set is called a finite set if it is
empty or it has n elements for some positive integer n; otherwise it is said
to be infinite. It is clear that the empty set and the set A are members of
P(A). A set A is called denumerable or enumerable if it is in one-to-one
correspondence with the set of natural numbers. A set is called countable
if it is either finite or denumerable. A set that is not countable is called
uncountable.
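The count of subsets above generalizes: a set with n elements has 2ⁿ subsets. As an illustrative aside (the Python sketch is ours, not the book's), the power set of a three-element set can be generated and counted directly:

```python
from itertools import chain, combinations

def power_set(s):
    """Return the set of all subsets of s, each represented as a frozenset."""
    items = list(s)
    return {frozenset(c)
            for c in chain.from_iterable(combinations(items, r)
                                         for r in range(len(items) + 1))}

A = {"p", "q", "r"}
P_A = power_set(A)
assert len(P_A) == 2 ** len(A)   # 8 subsets for a 3-element set
assert frozenset() in P_A        # the null set is a member of P(A)
assert frozenset(A) in P_A       # A itself is a member of P(A)
```

Each subset is taken as a `frozenset` because, unlike a plain `set`, it is hashable and so can itself be a member of a set.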
We now state without proof a few results which might be used in
subsequent chapters:
(i) An infinite set is equivalent to a subset of itself.
(ii) A subset of a countable set is a countable set.
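Result (i) can be made concrete: the map n ↦ 2n puts the natural numbers in one-to-one correspondence with the even numbers, a proper subset of themselves. A small sketch (our illustration):

```python
def double(n):
    """The map n -> 2n from the naturals onto the even naturals."""
    return 2 * n

naturals = list(range(10))
evens = [double(n) for n in naturals]

# distinct naturals get distinct images, so the map is one-to-one
assert len(set(evens)) == len(naturals)
# every image lies in the even numbers, a proper subset of the naturals
assert all(m % 2 == 0 for m in evens)
```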
The following are examples of countable sets: a) the set J of all integers,
b) the set Q of all rational numbers, c) the set P of all polynomials with
rational coefficients, d) the set of all straight lines in a plane each of which
passes through (at least) two different points with rational coordinates and
e) the set of all rational points in ℝⁿ.
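For example a), the integers can be enumerated explicitly by interleaving positive and negative values; the sketch below (ours) exhibits such a one-to-one correspondence with the natural numbers:

```python
def nat_to_int(n):
    """Enumerate the integers from the naturals: 0, 1, -1, 2, -2, 3, -3, ..."""
    return (n + 1) // 2 if n % 2 == 1 else -(n // 2)

first = [nat_to_int(n) for n in range(7)]
assert first == [0, 1, -1, 2, -2, 3, -3]
assert len(set(first)) == len(first)   # no integer is listed twice
```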
Examples of uncountable sets are as follows: (i) an open interval ]a, b[ and a
closed interval [a, b] where a ≠ b, (ii) the set of all irrational numbers, (iii)
the set of all real numbers and (iv) the family of all subsets of a denumerable
set.
1.1.1 Cardinal numbers
Let all the sets be divided into two families such that two sets fall into
one family if and only if they are equivalent. This is possible because the
relation between the sets is an equivalence relation. To every such family
of sets, we assign some arbitrary symbol and call it the cardinal number of
each set of the given family. If the cardinal number of a set A is α, we write
A̅ = α or card A = α. The cardinal number of the empty set is defined to be
0 (zero). We designate the number of elements of a nonempty finite set
as the cardinal number of the finite set. We assign ℵ₀ (aleph-null) to the class of all
denumerable sets, and as such ℵ₀ is the cardinal number of a denumerable
set. The letter c, the first letter of the word continuum, stands for the cardinal number
of the set [0, 1].
1.1.2 The algebra of sets
In the following section we discuss some operations that can be
performed on sets. By universal set we mean a set that contains all the sets
under reference. The universal set is denoted by U . For example, while
discussing the set of real numbers we take ℝ as the universal set. Once
again for sets of complex numbers the universal set is the set of complex
numbers. Given two sets A and B, the union of A and B is denoted by
A ∪ B and stands for a set whose every element is an element of either A
or B (including elements of both A and B). A ∪ B is also called the sum
of A and B and is written as A + B. The intersection of two sets A and B
is denoted by A ∩ B, and is a set the elements of which are the elements
common to both A and B. The intersection of two sets A and B is also
called the product of A and B and is denoted by A · B. The difference of
two sets A and B is denoted by A − B and is defined by the set of elements
in A which are not elements of B. Two sets A and B are said to be disjoint
if A ∩ B = ∅. If A ⊆ B, B − A will be called the complement of A with
reference to B. If B is the universal set, Aᶜ will denote the complement of
A and will be the set of all elements which are not in A.
Let A, B and C be three nonempty sets. Then the following laws hold
true:
1. Commutative laws
A ∪ B = B ∪ A and A ∩ B = B ∩ A
2. Associative laws
A ∪ (B ∪ C) = (A ∪ B) ∪ C and (A ∩ B) ∩ C = A ∩ (B ∩ C)
3. Distributive laws
A ∩ (B ∪ C) = (A ∩ B) ∪ (A ∩ C)
A ∪ (B ∩ C) = (A ∪ B) ∩ (A ∪ C)
4. De Morgan's laws
(A ∪ B)ᶜ = Aᶜ ∩ Bᶜ and (A ∩ B)ᶜ = Aᶜ ∪ Bᶜ
Suppose we have a finite class of sets of the form {A1, A2, A3, . . . , An};
then we can form A1 ∪ A2 ∪ A3 ∪ . . . ∪ An and A1 ∩ A2 ∩ A3 ∩ . . . ∩ An. We
can shorten the above expressions by using the index set I = {1, 2, 3, . . . , n}.
The above expressions for union and intersection can then be written in short
as ⋃_{i∈I} Ai and ⋂_{i∈I} Ai respectively.
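These laws can be checked mechanically on small concrete sets. The sketch below (ours, using Python's built-in set operators `|` for union, `&` for intersection and `-` for difference, with complement taken with reference to a small universal set) verifies each law for one particular choice of A, B and C:

```python
U = set(range(8))                       # a small universal set
A, B, C = {0, 1, 2}, {2, 3, 4}, {4, 5, 6}
comp = lambda S: U - S                  # complement with reference to U

assert A | B == B | A and A & B == B & A        # commutative laws
assert A | (B | C) == (A | B) | C               # associative laws
assert (A & B) & C == A & (B & C)
assert A & (B | C) == (A & B) | (A & C)         # distributive laws
assert A | (B & C) == (A | B) & (A | C)
assert comp(A | B) == comp(A) & comp(B)         # De Morgan's laws
assert comp(A & B) == comp(A) | comp(B)
```

Passing for one choice of sets is not a proof, of course; it is only a sanity check of the statements.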
1.1.3 Partially ordered set
Let A be a set of elements a, b, c, d, . . . of a certain nature. Let us
introduce between certain pairs (a, b) of elements of A the relation a ≤ b
with the properties:
(i) If a ≤ b and b ≤ c, then a ≤ c (transitivity)
(ii) a ≤ a (reflexivity)
(iii) If a ≤ b and b ≤ a, then a = b (antisymmetry)
Such a set A is said to be partially ordered by ≤, and a and b satisfying
a ≤ b and b ≤ a are said to be congruent. A set A is said to be totally
ordered if for each pair of its elements a, b, either a ≤ b or b ≤ a.
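Divisibility on the positive integers is a standard instance: it satisfies all three axioms but is only a partial order, since some pairs are incomparable. A quick check (our illustration, not the book's):

```python
S = range(1, 13)
leq = lambda a, b: b % a == 0   # a <= b taken to mean "a divides b"

# (ii) reflexivity: every a divides itself
assert all(leq(a, a) for a in S)
# (iii) antisymmetry: mutual divisibility of positive integers forces equality
assert all(a == b for a in S for b in S if leq(a, b) and leq(b, a))
# (i) transitivity: a | b and b | c imply a | c
assert all(leq(a, c) for a in S for b in S for c in S
           if leq(a, b) and leq(b, c))
# not totally ordered: 4 and 6 are incomparable under divisibility
assert not leq(4, 6) and not leq(6, 4)
```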
A subset B of a partially ordered set A is said to be bounded above if
there is an element b such that y b for all y B, the element b is called
an upper bound of B. The smallest of all upper bounds of B is called the
least upper bound (l.u.b.) or supremum of B. The terms bounded below
and greatest lower bound (g.l.b.) or inmum can be analogously dened.
Finally, an element x0 A is said to be maximal if there exists in A no
element x = x0 satisfying the relation x0 x. The natural numbers are
totally ordered but the branches of a tree are not. We next state a highly
important lemma known as Zorn's lemma.
1.1.4 Zorn's lemma
Let X be a partially ordered set such that every totally ordered subset
of X has an upper bound in X. Then X contains a maximal element.
Although the above statement is called a lemma, it is actually an axiom.
1.1.5 Zermelo's theorem
Every set can be well ordered by introducing certain order relations.
The proof of Zermelo's theorem rests upon Zermelo's axiom of arbitrary
choice, which is as follows:
If one system of nonempty, pairwise disjoint sets is given, then there is
a new set possessing exactly one element in common with each of the sets
of the system.
Zorn's lemma, Zermelo's axiom of choice and the well-ordering theorem
are equivalent.
1.2 Function, Mapping
Given two nonempty sets X and Y, the Cartesian product of X and Y,
denoted by X × Y, is the set of all ordered pairs (x, y) such that x ∈ X and
y ∈ Y.
Thus X × Y = {(x, y) : x ∈ X, y ∈ Y}.
1.2.1 Example
Let X = {a, b, c} and let Y = {d, e}.
Then, X × Y = {(a, d), (b, d), (c, d), (a, e), (b, e), (c, e)}.
It may be noted that the Cartesian product of two countable sets is
countable.
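The example above, and the counting behind it, can be reproduced directly; in the sketch below (ours), `itertools.product` returns exactly the ordered pairs of the Cartesian product:

```python
from itertools import product

X, Y = {"a", "b", "c"}, {"d", "e"}
XY = set(product(X, Y))

assert len(XY) == len(X) * len(Y)   # 3 x 2 = 6 ordered pairs
assert ("a", "d") in XY             # pairs are ordered:
assert ("d", "a") not in XY         # (d, a) belongs to Y x X, not X x Y
```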
1.2.2 Function
Let X and Y be two nonempty sets. A function f from X to Y is a subset
of X × Y with the property that no two members of f have the same first
coordinate. Thus (x, y) ∈ f and (x, z) ∈ f imply that y = z. The domain
of a function f from X to Y is the subset of X that consists of all first
coordinates of members of f. Thus x is in the domain of f if and only if
(x, y) ∈ f for some y ∈ Y.
The range of f is the subset of Y that consists of all second coordinates
of members of f. Thus y is in the range of f if and only if (x, y) ∈ f for
some x ∈ X. If f is a function and x is a point in the domain of f, then f(x)
is the second coordinate of the unique member of f whose first coordinate
is x.
Thus y = f(x) if and only if (x, y) ∈ f. This point f(x) is called the
image of x under f .
1.2.3
Mappings: into, onto (surjective), one-to-one (injective)
and bijective
A function f is said to be a mapping of X into Y if the domain of f is
X and the range of f is a subset of Y. A function f is said to be a mapping
of X onto Y (surjective) if the domain of f is X and the range of f is Y.
The fact that f is a mapping of X onto Y is denoted by f : X → Y (onto).
A function f from X to Y is said to be one-to-one (injective) if distinct
points in X have distinct images under f in Y. Thus f is one-to-one if and
only if (x1, y) ∈ f and (x2, y) ∈ f imply that x1 = x2. A function from X
to Y is said to be bijective if it is both injective and surjective.
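For finite sets these properties can be tested directly on a function given as a set of pairs; a small Python sketch (the helper names are ours, not the text's):

```python
def is_function(f, X):
    """f is a set of (x, y) pairs; a function on X assigns exactly one y to each x."""
    domain = [x for x, _ in f]
    return set(domain) == set(X) and len(domain) == len(set(domain))

def is_injective(f):
    # distinct first coordinates must have distinct second coordinates
    values = [y for _, y in f]
    return len(values) == len(set(values))

def is_surjective(f, Y):
    # the range must be all of Y
    return {y for _, y in f} == set(Y)

X, Y = {"a", "b", "c"}, {"d", "e"}
H = {("a", "d"), ("b", "e"), ("c", "e")}   # the mapping H of Example 1.2.4
assert is_function(H, X) and is_surjective(H, Y) and not is_injective(H)
```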
1.2.4
Example
Let X = {a, b, c} and let Y = {d, e}. Consider the following subsets of
X × Y:
F = {(a, d), (b, e), (c, d), (a, e)},  G = {(a, d), (b, d), (c, d)},
H = {(a, d), (b, e), (c, e)},  K = {(a, e), (b, d)}.
The set F is not a function from X to Y because (a, d) and (a, e) are
distinct members of F that have the same first coordinate. The domain of
both G and H is X, while the domain of K is {a, b}.
1.3
Linear Space
A nonempty set is said to be a space if the set is closed with respect to
certain operations defined on it. It is apparent that some familiar sets
(e.g., sets of matrices, sets of functions, sets of number sequences)
are closed with respect to addition and multiplication by a scalar. Such
sets have given rise to a space called a linear space.
Definition. Let E be a set of elements of a certain nature satisfying the
following axioms:
(i) E is an additive abelian group. This means that if x and y ∈ E,
then their sum x + y also belongs to the same set E, where the operation
of addition satisfies the following axioms:
(a) x + y = y + x (commutativity);
(b) x + (y + z) = (x + y) + z (associativity);
(c) there exists a uniquely defined element θ, such that x + θ = x for
any x in E;
(d) for every element x ∈ E there exists a unique element (−x) of the
same space, such that x + (−x) = θ.
The element θ is said to be the null element or zero element of E and
the element −x is called the inverse element of x.
(ii) A scalar multiplication is said to be defined if for every x ∈ E and for
any scalar α (real or complex) the element αx ∈ E, and the following
conditions are satisfied:
(a) α(βx) = (αβ)x (associativity);
(b) α(x + y) = αx + αy and (α + β)x = αx + βx (distributivity);
(c) 1 · x = x.
The set E satisfying the axioms (i) and (ii) is called a linear or vector
space. It is said to be a real or complex space depending on whether the
set of multipliers is real or complex.
1.3.1
Examples
(i) Real line ℝ
The set of all real numbers, for which the ordinary addition and
multiplication are taken as the linear operations, is a real linear space ℝ.
(ii) The Euclidean space ℝ^n, unitary space ℂ^n, and complex plane ℂ
Let X be the set of all ordered n-tuples of real numbers. If x =
(ξ1, ξ2, ..., ξn) and y = (η1, η2, ..., ηn), we define the operations of addition
and scalar multiplication as x + y = (ξ1 + η1, ξ2 + η2, ..., ξn + ηn) and
αx = (αξ1, αξ2, ..., αξn). In the above equations, α is a real scalar. The
above linear space is called the real n-dimensional space and is denoted by ℝ^n.
The set of all ordered n-tuples of complex numbers, ℂ^n, is a linear space
with the operations of addition and scalar multiplication defined as above.
The complex plane ℂ is a linear space with addition and multiplication of
complex numbers taken as the linear operations over ℝ (or ℂ).
(iii) Space of m × n matrices, ℝ^{m×n}
ℝ^{m×n} is the set of all m × n matrices with real elements. Then ℝ^{m×n} is a
real linear space with addition and scalar multiplication defined as follows.
Let A = {a_ij} and B = {b_ij} be two m × n matrices. Then A + B =
{a_ij + b_ij} and αA = {αa_ij}, where α is a scalar. In this space −A = {−a_ij},
and the matrix with all its elements zero is the zero element of the
space ℝ^{m×n}.
(iv) Sequence space l∞
Let X be the set of all bounded sequences of complex numbers, i.e., every
element of X is a complex sequence x = {ξ_i} such that |ξ_i| ≤ C_x for all i,
where C_x is a real number depending only on x. If y = {η_i} then we define
x + y = {ξ_i + η_i} and αx = {αξ_i}. Thus l∞ is a linear space, and is called
a sequence space.
(v) C([a, b])
Let X be the set of all real-valued continuous functions x, y, etc., which
are functions of an independent variable t defined on a given closed interval
J = [a, b]. Then X is closed with respect to the addition of two continuous
functions and the multiplication of a continuous function by a scalar, i.e.,
(x + y)(t) = x(t) + y(t) and (αx)(t) = α(x(t)), where α is a scalar.
(vi) Space lp, Hilbert sequence space l2
Let p ≥ 1 be a fixed real number. By definition, each element in the space
lp is a sequence x = {ξ_i} = {ξ1, ξ2, ..., ξn, ...} such that

Σ_{i=1}^∞ |ξ_i|^p < ∞,

for p real and p ≥ 1. If y ∈ lp and y = {η_i}, let x + y = {ξ_i + η_i} and
αx = {αξ_i}. Since |ξ_i + η_i|^p ≤ 2^p max(|ξ_i|^p, |η_i|^p) ≤ 2^p (|ξ_i|^p + |η_i|^p), it
follows that Σ_{i=1}^∞ |ξ_i + η_i|^p < ∞. Therefore x + y ∈ lp. Similarly, we can show
that αx ∈ lp where α is a scalar. Hence lp is a linear space with respect
to the algebraic operations defined above. If p = 2, the space lp becomes
l2, a square summable space which possesses some special properties to be
revealed later.
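The closure inequality |ξ_i + η_i|^p ≤ 2^p(|ξ_i|^p + |η_i|^p) used above can be spot-checked numerically; a small Python sketch (the sample values and tolerance are ours):

```python
import random

def lp_bound_holds(a: float, b: float, p: float) -> bool:
    # |a + b|^p <= 2^p * (|a|^p + |b|^p): the inequality used to show x + y is in lp
    return abs(a + b) ** p <= 2 ** p * (abs(a) ** p + abs(b) ** p) + 1e-12

random.seed(0)
checks = [lp_bound_holds(random.uniform(-10, 10), random.uniform(-10, 10), p)
          for p in (1, 1.5, 2, 3) for _ in range(1000)]
assert all(checks)
```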
(vii) Space Lp([a, b]) of all pth-power Lebesgue integrable functions
Let f be a Lebesgue measurable function defined on [a, b] and 0 < p < ∞.
Since f ∈ Lp([a, b]), we have ∫_a^b |f(t)|^p dt < ∞. Again, if g ∈ Lp([a, b]),
then ∫_a^b |g(t)|^p dt < ∞. Since |f + g|^p ≤ 2^p max(|f|^p, |g|^p) ≤ 2^p (|f|^p + |g|^p),
f ∈ Lp([a, b]) and g ∈ Lp([a, b]) imply that (f + g) ∈ Lp([a, b]) and αf ∈
Lp([a, b]). This shows that Lp([a, b]) is a linear space. If p = 2, we get
L2([a, b]), which is known as the space of square integrable functions. The
space possesses some special properties.
1.3.2
Subspace, linear combination, linear dependence, linear
independence
A subset X of a linear space E is said to be a subspace if X is a linear
space with respect to vector addition and scalar multiplication as defined
in E. A vector of the form x = α1 x1 + α2 x2 + ··· + αn xn is called a
linear combination of the vectors x1, x2, ..., xn in the linear space E, where
α1, α2, ..., αn are real or complex scalars. If X is any subset of E, then
the set of all linear combinations of vectors in X forms a subspace of E. The
subspace so obtained is called the subspace spanned by X and is denoted
by span X. It is, in fact, the smallest subspace of E containing X. In other
words, it is the intersection of all subspaces of E containing X.
A finite set of vectors {x1, x2, ..., xn} in X is said to be linearly
dependent if there exist scalars {α1, α2, ..., αn}, real or complex and not all
zero, such that α1 x1 + α2 x2 + ··· + αn xn = θ. On the other hand, if for all
scalars {α1, α2, ..., αn}, α1 x1 + α2 x2 + ··· + αn xn = θ implies α1 = 0,
α2 = 0, ..., αn = 0, then the set of vectors is said to be linearly independent.
A subset X (finite or infinite) of E is linearly independent if every finite
subset of X is linearly independent. As a convention we regard the empty
set as linearly independent.
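In ℝ^n, linear independence of finitely many vectors can be decided by Gaussian elimination; a minimal pure-Python sketch (exact arithmetic via fractions; the helper name is ours):

```python
from fractions import Fraction

def is_linearly_independent(vectors):
    """Row-reduce the vectors; they are independent iff the rank equals their number."""
    rows = [[Fraction(c) for c in v] for v in vectors]
    rank, ncols = 0, len(rows[0])
    for col in range(ncols):
        # find a pivot row for this column among the unreduced rows
        pivot = next((r for r in range(rank, len(rows)) if rows[r][col] != 0), None)
        if pivot is None:
            continue
        rows[rank], rows[pivot] = rows[pivot], rows[rank]
        for r in range(len(rows)):
            if r != rank and rows[r][col] != 0:
                factor = rows[r][col] / rows[rank][col]
                rows[r] = [a - factor * b for a, b in zip(rows[r], rows[rank])]
        rank += 1
    return rank == len(vectors)

assert is_linearly_independent([(1, 0, 0), (0, 1, 0), (0, 0, 1)])   # e1, e2, e3
assert not is_linearly_independent([(1, 2, 3), (2, 4, 6)])          # second = 2 * first
```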
1.3.3
Hamel basis, dimension
A subset L of a linear space E is said to be a basis (or Hamel basis) for
E if (i) L is a linearly independent set, and (ii) L spans the whole space.
In this case, any nonzero vector x of the space E can be expressed
uniquely as a linear combination of finitely many vectors of L with
scalar coefficients that are not all zero. Clearly any maximal linearly
independent set in E (one to which no new nonzero vector can be added without
destroying linear independence) is a basis for E, and any minimal set
spanning E is also a basis for E.
1.3.4
Theorem
Every linear space X ≠ {θ} has a Hamel basis.
Proof: Let 𝓛 be the collection of all linearly independent subsets of X. Since X ≠ {θ},
it has an element x ≠ θ and {x} ∈ 𝓛; therefore 𝓛 ≠ ∅. Let the partial
ordering in 𝓛 be given by set inclusion. We show that for every totally
ordered subcollection {L_α, α ∈ A} of 𝓛, the set L = ∪{L_α : α ∈ A} is also in 𝓛.
Otherwise span L would be generated by a proper subset T ⊂ L. Therefore,
for every α ∈ A, span L_α is generated by T_α = T ∩ L_α. However, the linear
independence of L_α implies T_α = L_α. Thus T = ∪{T ∩ L_α : α ∈ A} =
∪{T_α : α ∈ A} = ∪{L_α : α ∈ A} = L, contradicting the assumption that T
is a proper subset of L. Thus, the conditions of Zorn's lemma having been
satisfied, there is a maximal element M ∈ 𝓛. Suppose span M is a proper subspace
of X. Let y ∈ X and y ∉ span M. The subspace Y of X generated by M
and y then contains span M as a proper subspace. If, for any proper subset
T ⊂ M, T together with y generates Y, it follows that T also generates span M,
thus contradicting the fact that M is linearly independent. There is
thus no y ∈ X with y ∉ span M. Hence M generates X.
A linear space X is said to be finite dimensional if it has a finite basis.
Otherwise, X is said to be infinite dimensional.
1.3.5
Examples
(i) Trivial linear space
Let X = {θ} be the trivial linear space. We have assumed that ∅ is a
linearly independent set. The span of ∅ is the intersection of all subspaces
of X containing ∅. However, θ belongs to every subspace of X. Hence it
follows that span ∅ = {θ}. Therefore ∅ is a basis for X.
(ii) ℝ^n
Consider the real linear space ℝ^n, where every x ∈ ℝ^n is an
ordered n-tuple of real numbers. Let e1 = (1, 0, 0, ..., 0), e2 =
(0, 1, 0, 0, ..., 0), ..., en = (0, 0, ..., 1). We may note that {e_i}, i =
1, 2, ..., n, is a linearly independent set and spans the whole space ℝ^n.
Hence {e1, e2, ..., en} forms a basis of ℝ^n. For n = 1, we get ℝ^1, and any
singleton set comprising a nonzero element forms a basis for ℝ^1.
(iii) ℂ^n
The complex linear space ℂ^n is a linear space where every x ∈ ℂ^n is
an ordered n-tuple of complex numbers, and the space is finite dimensional.
The set {e1, e2, ..., en}, where e_i is the ith unit vector, is a basis for ℂ^n.
(iv) C([a, b]), Pn([a, b])
C([a, b]) is the space of continuous real functions on the closed interval
[a, b]. Let B = {1, x, x^2, ..., x^n, ...} be a set of functions in C([a, b]). The
set B is linearly independent in C([a, b]); its span is the subspace of all
polynomials. Pn([a, b]) is the space of real polynomials of degree at most n
defined on [a, b]. The set Bn = {1, x, x^2, ..., x^n} is a basis in Pn([a, b]).
(v) ℝ^{m×n} (ℂ^{m×n})
ℝ^{m×n} is the space of all matrices of order m × n. For i = 1, 2, ..., m and
j = 1, 2, ..., n, let E_ij be the m × n matrix with (i, j)th entry 1 and all other
entries zero. Then {E_ij : i = 1, 2, ..., m; j = 1, 2, ..., n} is a basis for
ℝ^{m×n} (ℂ^{m×n}).
1.3.6
Theorem
Let E be a finite dimensional linear space. Then all the bases of E have
the same number of elements.
Proof: Suppose, if possible, {e1, e2, ..., en} and {f1, f2, ..., fn, fn+1} are two
different bases in E. Then any element f_i can be expressed as a linear
combination of e1, e2, ..., en; i.e., f_i = Σ_{j=1}^n a_ij e_j. Since f_i,
i = 1, 2, ..., n, are linearly independent, the matrix [a_ij] has rank n.
Therefore we can express fn+1 as fn+1 = Σ_{j=1}^n a_{n+1,j} e_j, and hence as a
linear combination of f1, f2, ..., fn. Thus the elements f1, f2, ..., fn+1 are not
linearly independent. Since {f1, f2, ..., fn+1} forms a basis for the space, it must
contain a number of linearly independent elements, say m (≤ n). On the
other hand, since {f_i}, i = 1, 2, ..., n+1, forms a basis for E, each e_i can be
expressed as a linear combination of {f_j}, j = 1, 2, ..., n+1, so that
n ≤ m. Comparing m ≤ n and n ≤ m we conclude that m = n. Hence the
number of elements of any two bases in a finite dimensional space E is the
same.
The above theorem helps us to define the dimension of a finite
dimensional space.
1.3.7
Dimension, examples
The dimension of a finite dimensional linear space E is defined as the
number of elements of any basis of the space and is written as dim E.
(i) dim ℝ = dim ℂ = 1
(ii) dim ℝ^n = dim ℂ^n = n
For an infinite dimensional space it can be shown that all bases are
equivalent sets.
1.3.8
Theorem
If E is a linear space, all the bases have the same cardinal number.
Proof: Let S = {e_i} and T = {f_i} be two bases. Suppose S is an infinite
set and has cardinal number μ. Let ν be the cardinal number of T. Every
f_i ∈ T is a linear combination, with nonzero coefficients, of a finite number
of elements e1, e2, ..., en of S, and only a finite number of elements of T are
associated in this way with the same set e1, e2, ..., en or some subset of it.
Since the cardinal number of the set of finite subsets of S is the same as
that of S itself, it follows that ν ≤ ℵ0 μ = μ. Similarly, we can show that
μ ≤ ν. Hence μ = ν. Thus the common cardinal number of all bases in
an infinite dimensional space E is defined as the dimension of E.
1.3.9
Direct sum
Here we consider the representation of a linear space E as a direct sum
of two or more subspaces. Let E be a linear space and X1, X2, ..., Xn
be n subspaces of E. If every x ∈ E has a unique representation of the form
x = x1 + x2 + ··· + xn, x_i ∈ X_i, i = 1, 2, ..., n, then E is said to be the
direct sum of its subspaces X1, X2, ..., Xn. The above representation is
called the decomposition of the element x into the elements of the subspaces
X1, X2, ..., Xn. In that case we can write E = X1 ⊕ X2 ⊕ ··· ⊕ Xn = ⊕_{i=1}^n X_i.
1.3.10
Quotient spaces
Let M be a subspace of a linear space E. The coset of an element x ∈ E
with respect to M, denoted by x + M, is defined as x + M = {x + m : m ∈ M}.
The set of all cosets is written E/M = {x + M : x ∈ E}. One observes that
M = θ + M, that x1 + M = x2 + M if and only if x1 − x2 ∈ M, and, as
a result, that for each pair x1, x2 ∈ E either (x1 + M) ∩ (x2 + M) = ∅ or
x1 + M = x2 + M. Further, if x1, x2, y1, y2 ∈ E with x1 − x2 ∈ M and
y1 − y2 ∈ M, then (x1 + y1) − (x2 + y2) ∈ M and, for any scalar α,
α(x1 − x2) ∈ M, because M is a linear subspace. We define the linear
operations on E/M by
(x + M) + (y + M) = (x + y) + M,
α(x + M) = αx + M, where x, y ∈ E and α is a scalar (real or complex).
It is clearly apparent that E/M under the linear operations defined
above is a linear space over ℝ (or ℂ). The linear space E/M is called
the quotient space of E by M. The function φ : E → E/M defined
by φ(x) = x + M is called the canonical mapping of E onto E/M. The
dimension of E/M is called the codimension (codim) of M in E. Thus,
codim M = dim(E/M). The quotient space has a simple geometrical
interpretation. Let the linear space E = ℝ^2 and let the subspace M be given
by the straight line shown in Fig. 1.1.
(x + M ) + (y + M )
= (x + y ) + M
y+M
x+y
x+M
y
x+m
Fig. 1.1 Addition in quotient space
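Cosets in E = ℝ^2 with M the line {(t, t)} can be handled concretely by choosing a canonical representative in each coset; a small Python sketch (the representative choice is ours, purely illustrative):

```python
def coset_rep(x):
    """E = R^2, M = {(t, t)}: two vectors lie in the same coset iff their
    difference is in M, i.e. they have the same value of x1 - x2.
    Use (x1 - x2, 0) as the canonical representative of x + M."""
    return (x[0] - x[1], 0.0)

x, y = (3.0, 1.0), (0.5, 2.0)

# (x + M) + (y + M) = (x + y) + M: adding representatives agrees with
# the representative of the sum.
sum_of_reps = tuple(a + b for a, b in zip(coset_rep(x), coset_rep(y)))
assert sum_of_reps == coset_rep((x[0] + y[0], x[1] + y[1]))

# x and x + m lie in the same coset for any m in M
m = (7.0, 7.0)
assert coset_rep((x[0] + m[0], x[1] + m[1])) == coset_rep(x)
```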
1.4
Metric Spaces
Limiting processes and continuity are two important concepts in classical
analysis. Both these concepts in real analysis, specifically in ℝ, are based on
distance. The concept of distance has been generalized to abstract spaces,
yielding what are known as metric spaces. For two points x, y in ℝ let
d(x, y) be the distance between them, i.e., d(x, y) = |x − y|.
The concept of distance gives rise to the concept of limit, i.e., {x_n} is said
to tend to x as n → ∞ if d(x_n, x) → 0 as n → ∞. The concept of continuity
can be introduced through the limiting process. We replace the set of real
numbers underlying ℝ by an abstract set X of elements (all the attributes of
which are known, but the concrete forms are not spelled out) and introduce
on X a distance function. This will help us in studying different classes of
problems under a single umbrella and in drawing some conclusions that are
universally valid for such different sets of elements.
1.4.1
Definition: metric space, metric
A metric space is a pair (X, ρ) where X is a set and ρ is a metric on
X (or a distance function on X), that is, a function defined on X × X such
that for all x, y, z ∈ X the following axioms hold:
1. ρ is real-valued, finite and nonnegative;
2. ρ(x, y) = 0 if and only if x = y;
3. ρ(x, y) = ρ(y, x) (symmetry);
4. ρ(x, y) ≤ ρ(x, z) + ρ(z, y) (triangle inequality).
These axioms obviously express the fundamental properties of the distance
between the points of three-dimensional Euclidean space.
A subspace (Y, ρ̃) of (X, ρ) is obtained by taking a subset Y ⊆ X and
restricting ρ to Y × Y. Thus the metric on Y is the restriction ρ̃ = ρ|_{Y×Y};
ρ̃ is called the metric induced on Y by ρ.
In the above, × denotes the Cartesian product of sets: A × B is the set
of ordered pairs (a, b), where a ∈ A and b ∈ B. Hence X × X is the set of
all ordered pairs of elements of X.
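The four axioms can be checked mechanically on a finite sample of points; a small Python sketch (the function name and sample are ours):

```python
from itertools import product

def satisfies_metric_axioms(rho, points, tol=1e-12):
    """Check axioms 1-4 on every triple drawn from a finite sample of points."""
    for x, y, z in product(points, repeat=3):
        if rho(x, y) < 0:                                   # axiom 1
            return False
        if (rho(x, y) == 0) != (x == y):                    # axiom 2
            return False
        if abs(rho(x, y) - rho(y, x)) > tol:                # axiom 3
            return False
        if rho(x, y) > rho(x, z) + rho(z, y) + tol:         # axiom 4
            return False
    return True

usual = lambda x, y: abs(x - y)
squared = lambda x, y: (x - y) ** 2   # violates the triangle inequality

sample = [-2.0, -0.5, 0.0, 1.0, 3.0]
assert satisfies_metric_axioms(usual, sample)
assert not satisfies_metric_axioms(squared, sample)
```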
1.4.2
Examples
(i) Real line ℝ
This is the set of all real numbers for which the metric is taken as
ρ(x, y) = |x − y|. This is known as the usual metric on ℝ.
(ii) Euclidean space ℝ^n, unitary space ℂ^n, and complex plane ℂ
Let X be the set of all ordered n-tuples of real numbers. If x =
(ξ1, ξ2, ..., ξn) and y = (η1, η2, ..., ηn), then we set

ρ(x, y) = ( Σ_{i=1}^n (ξ_i − η_i)^2 )^{1/2}.    (1.1)

It is easily seen that ρ(x, y) ≥ 0. Furthermore, ρ(x, y) = ρ(y, x).
Let z = (ζ1, ζ2, ..., ζn). Then,

ρ^2(x, z) = Σ_{i=1}^n (ξ_i − ζ_i)^2
= Σ_{i=1}^n (ξ_i − η_i)^2 + Σ_{i=1}^n (η_i − ζ_i)^2 + 2 Σ_{i=1}^n (ξ_i − η_i)(η_i − ζ_i).

Now by the Cauchy–Bunyakovsky–Schwartz inequality [see 1.4.3],

Σ_{i=1}^n (ξ_i − η_i)(η_i − ζ_i) ≤ ( Σ_{i=1}^n (ξ_i − η_i)^2 )^{1/2} ( Σ_{i=1}^n (η_i − ζ_i)^2 )^{1/2}
= ρ(x, y) ρ(y, z).

Thus ρ^2(x, z) ≤ (ρ(x, y) + ρ(y, z))^2, i.e.,

ρ(x, z) ≤ ρ(x, y) + ρ(y, z).

Hence all the axioms of a metric space are fulfilled. Therefore ℝ^n
under the metric defined by (1.1) is a metric space and is known as
n-dimensional Euclidean space. If x, y, z denote three distinct points in ℝ^2,
then the inequality ρ(x, z) ≤ ρ(x, y) + ρ(y, z) implies that the length of any
side of the triangle obtained by joining x, y, z is at most the sum of the
lengths of the other two sides. Hence axiom 4 of the set of metric axioms
is known as the triangle inequality. The n-dimensional unitary space ℂ^n is
the space of all ordered n-tuples of complex numbers with metric defined by
ρ(x, y) = ( Σ_{i=1}^n |ξ_i − η_i|^2 )^{1/2}. When n = 1 this is the
complex plane ℂ with the usual metric defined by ρ(x, y) = |x − y|.
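A quick numerical sanity check of the triangle inequality for the Euclidean metric (the sample points are arbitrary):

```python
import math
import random

def euclid(x, y):
    # the metric (1.1) on R^n
    return math.sqrt(sum((xi - yi) ** 2 for xi, yi in zip(x, y)))

random.seed(1)
for _ in range(1000):
    x, y, z = (tuple(random.uniform(-5, 5) for _ in range(4)) for _ in range(3))
    assert euclid(x, z) <= euclid(x, y) + euclid(y, z) + 1e-9

triangle_ok = True
```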
(iii) Sequence space l∞
Let X be the set of all bounded sequences of complex numbers, i.e.,
every element of X is a complex sequence x = (ξ1, ξ2, ..., ξn, ...) or x = {ξ_i}
such that |ξ_i| ≤ C_x for all i, where C_x is a real number depending only on x.
We define the metric as ρ(x, y) = sup_{i∈N} |ξ_i − η_i|, where y = {η_i},
N = {1, 2, 3, ...}, and sup denotes the least upper bound (l.u.b.). l∞ is called
a sequence space because each element of X (each point in X) is a sequence.
(iv) C([a, b])
Let X be the set of all real-valued continuous functions x, y, etc., which
are functions of an independent variable t defined on a given closed interval
J = [a, b].
We choose the metric defined by ρ(x, y) = max_{t∈J} |x(t) − y(t)|, where max
denotes the maximum. We may note that ρ(x, y) ≥ 0 and ρ(x, y) = 0 if and
only if x(t) = y(t) for all t ∈ J. Moreover, ρ(x, y) = ρ(y, x). To verify the
triangle inequality, we note that for every t ∈ J,

|x(t) − z(t)| ≤ |x(t) − y(t)| + |y(t) − z(t)|
≤ max_{t∈J} |x(t) − y(t)| + max_{t∈J} |y(t) − z(t)|
= ρ(x, y) + ρ(y, z).

Hence ρ(x, z) ≤ ρ(x, y) + ρ(y, z). Thus all the axioms of a metric space
are satisfied.
The set of all continuous functions defined on the interval J = [a, b] with the
above metric is called the space of continuous functions on J and is denoted
by C([a, b]). This is a function space because every point of C([a, b]) is a
function.
(v) Discrete metric space
Let X be a set and let ρ be defined by
ρ(x, y) = 0 if x = y,  ρ(x, y) = 1 if x ≠ y.
The above is called a discrete metric, and the set X endowed with the
above metric is a discrete metric space.
(vi) The space M ([a, b]) of bounded real functions
Consider the set of all bounded functions x(t) of a real variable t, defined
on the segment [a, b]. Let the metric be defined by ρ(x, y) = sup_t |x(t) − y(t)|.
All the metric axioms are fulfilled with the above metric. The set of real
bounded functions with the above metric is designated as the space M([a, b]).
It may be noted that C([a, b]) ⊂ M([a, b]).
(vii) The space BV ([a, b]) of functions of bounded variation
Let BV([a, b]) denote the class of all functions of bounded variation on
[a, b], i.e., all f for which the total variation

V(f) = sup Σ_{i=1}^n |f(x_i) − f(x_{i−1})|

is finite, where the supremum is taken over all partitions a = x0 < x1 <
x2 < ··· < xn = b. Let us take ρ(f, g) = V(f − g). If f = g, V(f − g) = 0.
Otherwise, V(f − g) = 0 if and only if f and g differ by a constant.
ρ(f, g) = ρ(g, f), since V(f − g) = V(g − f).
If h is a function of bounded variation,

ρ(f, h) = V(f − h) = sup Σ_{i=1}^n |(f(t_i) − h(t_i)) − (f(t_{i−1}) − h(t_{i−1}))|
= sup Σ_{i=1}^n |(f(t_i) − g(t_i) + g(t_i) − h(t_i)) − (f(t_{i−1}) − g(t_{i−1}) + g(t_{i−1}) − h(t_{i−1}))|
≤ sup Σ_{i=1}^n |(f(t_i) − g(t_i)) − (f(t_{i−1}) − g(t_{i−1}))|
+ sup Σ_{i=1}^n |(g(t_i) − h(t_i)) − (g(t_{i−1}) − h(t_{i−1}))|
≤ V(f − g) + V(g − h) = ρ(f, g) + ρ(g, h).

Thus all the axioms of a metric space except axiom 2 are fulfilled.
If BV([a, b]) is decomposed into equivalence classes according to the
equivalence relation defined by f ∼ g if and only if f(t) − g(t) is constant on
[a, b], then this ρ(f, g) determines a metric on the space of
such equivalence classes in an obvious way. Alternatively, we may modify
the definition of ρ so as to obtain a metric on the original class BV([a, b]).
For example, ρ(f, g) = |f(a) − g(a)| + V(f − g) is a metric on BV([a, b]).
The subspace of this metric space, consisting of all f ∈ BV([a, b]) for which
f(a) = 0, can naturally be identified with the space of equivalence classes above.
(viii) The space c of convergent numerical sequences
Let X be the set of convergent numerical sequences x =
{ξ1, ξ2, ξ3, ..., ξn, ...}, where lim_{i→∞} ξ_i = ξ exists. Let x = {ξ1, ξ2, ..., ξn, ...}
and y = {η1, η2, ..., ηn, ...}. Set ρ(x, y) = sup_i |ξ_i − η_i|.
(ix) The space m of bounded numerical sequences
Let X be the set of bounded numerical sequences x =
{ξ1, ξ2, ..., ξn, ...}, implying that for every x there is a constant K(x)
such that |ξ_i| ≤ K(x) for all i. Let x = {ξ_i}, y = {η_i} belong to X.
Introduce the metric ρ(x, y) = sup_i |ξ_i − η_i|.
It may be noted that the space c of convergent numerical sequences is
a subspace of the space m of bounded numerical sequences.
(x) Sequence space s
This space consists of the set of all (not necessarily bounded) sequences
of complex numbers, and the metric is defined by

ρ(x, y) = Σ_{i=1}^∞ (1/2^i) |ξ_i − η_i| / (1 + |ξ_i − η_i|),

where x = {ξ_i} and y = {η_i}.
Axioms 1–3 of a metric space are satisfied. To see that ρ(x, y) also
satisfies axiom 4 of a metric space, we proceed as follows.
Let f(t) = t/(1 + t), t ≥ 0. Since f′(t) = 1/(1 + t)^2 > 0,
f(t) is monotonically increasing.
Hence |a + b| ≤ |a| + |b| implies f(|a + b|) ≤ f(|a| + |b|).
Thus,

|a + b| / (1 + |a + b|) ≤ (|a| + |b|) / (1 + |a| + |b|) ≤ |a| / (1 + |a|) + |b| / (1 + |b|).

Let a = ξ_i − ζ_i and b = ζ_i − η_i, where z = {ζ_i}. Then a + b = ξ_i − η_i, and

|ξ_i − η_i| / (1 + |ξ_i − η_i|) ≤ |ξ_i − ζ_i| / (1 + |ξ_i − ζ_i|) + |ζ_i − η_i| / (1 + |ζ_i − η_i|).

Multiplying by 1/2^i and summing over i, we get ρ(x, y) ≤ ρ(x, z) + ρ(z, y),
indicating that the axiom on the triangle inequality has been satisfied.
Thus s is a metric space.
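The triangle inequality for ρ on s can be spot-checked on truncated series; a short Python sketch (truncating at 50 terms is our choice):

```python
import random

def rho_s(x, y, terms=50):
    # truncated version of rho(x, y) = sum over i of 2^-i * |xi - yi| / (1 + |xi - yi|)
    return sum(2.0 ** -(i + 1) * abs(a - b) / (1 + abs(a - b))
               for i, (a, b) in enumerate(zip(x[:terms], y[:terms])))

random.seed(2)
n = 50
for _ in range(200):
    x, y, z = ([random.uniform(-100, 100) for _ in range(n)] for _ in range(3))
    assert rho_s(x, z) <= rho_s(x, y) + rho_s(y, z) + 1e-12

s_triangle_ok = True
```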
Problems
1. Show that ρ(x, y) = √|x − y| defines a metric on the set of all real
numbers.
2. Show that the set of all n-tuples of real numbers becomes a metric
space under the metric ρ(x, y) = max{|x1 − y1|, ..., |xn − yn|}, where
x = {x_i}, y = {y_i}.
3. Let the underlying space be the real or complex numbers. The distance of
two elements f, g shall be defined as ρ(f, g) = φ(|f − g|), where φ(x) is
a function defined for x ≥ 0, φ is twice continuously differentiable
and strictly monotonically increasing (that is, φ′(x) > 0), and φ(0) = 0.
Then show that ρ(f, g) = 0 if and only if f = g.
4. Let C(B) be the space of continuous (real or complex) functions f,
defined on a closed bounded domain B in ℝ^n. Define ρ(f, g) = φ(r),
where r = max |f − g|. For φ(r) we make the same assumptions as
in Problem 3. When φ″(r) < 0, show that the function space is
metric, but when φ″(r) > 0 the space is no longer metric.
1.4.3
Theorem (Hölder's inequality)
(H1) If p > 1 and q is defined by 1/p + 1/q = 1, then

Σ_{i=1}^n |x_i y_i| ≤ ( Σ_{i=1}^n |x_i|^p )^{1/p} ( Σ_{i=1}^n |y_i|^q )^{1/q}    (H1)

for any complex numbers x1, x2, x3, ..., xn, y1, ..., yn.
(H2) If x ∈ lp, i.e., x is pth-power summable, and y ∈ lq, where p, q are defined
as above, x = {x_i}, y = {y_i}, then

Σ_{i=1}^∞ |x_i y_i| ≤ ( Σ_{i=1}^∞ |x_i|^p )^{1/p} ( Σ_{i=1}^∞ |y_i|^q )^{1/q}.

The inequality is known as Hölder's inequality for sums.
(H3) If x(t) ∈ Lp(0, 1), i.e., x is pth-power integrable, and y(t) ∈ Lq(0, 1), i.e.,
y is qth-power integrable, where p and q are defined as above, then

∫_0^1 |x(t) y(t)| dt ≤ ( ∫_0^1 |x(t)|^p dt )^{1/p} ( ∫_0^1 |y(t)|^q dt )^{1/q}.

The above inequality is known as Hölder's inequality for integrals. Here p
and q are said to be conjugate to each other.
Proof: We first prove the inequality

a^{1/p} b^{1/q} ≤ a/p + b/q,  a ≥ 0, b ≥ 0.    (1.2)

In order to prove the inequality we consider the function
f(t) = t^α − αt + α − 1, defined for 0 < α < 1, t ≥ 0.
Then,
f′(t) = α(t^{α−1} − 1),
so that
f(1) = f′(1) = 0,
f′(t) > 0 for 0 < t < 1,
and
f′(t) < 0 for t > 1.
It follows that f(t) ≤ 0 for t ≥ 0. The inequality (1.2) is true for b = 0 since
p > 1. Suppose b > 0 and let t = a/b and α = 1/p. Then

f(a/b) = (a/b)^{1/p} − (1/p)(a/b) + (1/p) − 1 ≤ 0.

Multiplying by b, we obtain

a^{1/p} b^{1−1/p} ≤ a/p + b(1 − 1/p) = a/p + b/q, since 1 − 1/p = 1/q.

Applying this to the numbers

a_j = |x_j|^p / Σ_{i=1}^n |x_i|^p,  b_j = |y_j|^q / Σ_{i=1}^n |y_i|^q

for j = 1, 2, ..., n, we get

|x_j y_j| / [ ( Σ_{j=1}^n |x_j|^p )^{1/p} ( Σ_{j=1}^n |y_j|^q )^{1/q} ] ≤ a_j/p + b_j/q,  j = 1, 2, ..., n.

By adding these inequalities the right-hand side takes the form

Σ_{j=1}^n a_j / p + Σ_{j=1}^n b_j / q = 1/p + 1/q = 1.

The left-hand side gives Hölder's inequality (H1),

Σ_{j=1}^n |x_j y_j| ≤ ( Σ_{j=1}^n |x_j|^p )^{1/p} ( Σ_{j=1}^n |y_j|^q )^{1/q},    (1.3)

which proves (H1).
To prove (H2), we note that
x ∈ lp implies Σ_{j=1}^∞ |x_j|^p < ∞ [see H2], and y ∈ lq implies Σ_{j=1}^∞ |y_j|^q < ∞ [see H2].
Taking

a_j = |x_j|^p / Σ_{i=1}^∞ |x_i|^p,  b_j = |y_j|^q / Σ_{i=1}^∞ |y_i|^q  for j = 1, 2, ...,

we obtain, as above,

|x_j y_j| / [ ( Σ_{i=1}^∞ |x_i|^p )^{1/p} ( Σ_{i=1}^∞ |y_i|^q )^{1/q} ] ≤ a_j/p + b_j/q.

Summing both sides over j = 1, 2, ..., we obtain Hölder's
inequality for sums.
In case p = 2, and hence q = 2, the above inequality reduces to the
Cauchy–Bunyakovsky–Schwartz inequality, namely

Σ_{i=1}^∞ |x_i y_i| ≤ ( Σ_{i=1}^∞ |x_i|^2 )^{1/2} ( Σ_{i=1}^∞ |y_i|^2 )^{1/2}.

The Cauchy–Bunyakovsky–Schwartz inequality has numerous applications in a
variety of mathematical investigations and will find important applications
in some of the later chapters.
To prove (H3) we note that
x(t) ∈ Lp(0, 1) implies ∫_0^1 |x(t)|^p dt < ∞ [see H3], and
y(t) ∈ Lq(0, 1) implies ∫_0^1 |y(t)|^q dt < ∞ [see H3].
Taking

a = |x(t)|^p / ∫_0^1 |x(t)|^p dt  and  b = |y(t)|^q / ∫_0^1 |y(t)|^q dt

in the inequality a^{1/p} b^{1/q} ≤ a/p + b/q, and integrating from 0 to 1, we have

∫_0^1 |x(t) y(t)| dt / [ ( ∫_0^1 |x(t)|^p dt )^{1/p} ( ∫_0^1 |y(t)|^q dt )^{1/q} ] ≤ 1/p + 1/q = 1,    (1.4)

which yields Hölder's inequality for integrals.
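Hölder's inequality (H1) is easy to verify numerically; a short Python sketch with random complex vectors:

```python
import random

def holder_holds(x, y, p):
    q = p / (p - 1)                      # conjugate exponent: 1/p + 1/q = 1
    lhs = sum(abs(a * b) for a, b in zip(x, y))
    rhs = (sum(abs(a) ** p for a in x) ** (1 / p)
           * sum(abs(b) ** q for b in y) ** (1 / q))
    return lhs <= rhs + 1e-9

random.seed(3)
for p in (1.5, 2, 3, 10):
    for _ in range(200):
        x = [complex(random.uniform(-5, 5), random.uniform(-5, 5)) for _ in range(6)]
        y = [complex(random.uniform(-5, 5), random.uniform(-5, 5)) for _ in range(6)]
        assert holder_holds(x, y, p)

holder_ok = True
```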
Preliminaries
1.4.4
Theorem (Minkowski's inequality)
(M1) If p ≥ 1, then

( Σ_{i=1}^n |x_i + y_i|^p )^{1/p} ≤ ( Σ_{i=1}^n |x_i|^p )^{1/p} + ( Σ_{i=1}^n |y_i|^p )^{1/p}

for any complex numbers x1, ..., xn, y1, y2, ..., yn.
(M2) If p ≥ 1 and x = {x_i} ∈ lp and y = {y_i} ∈ lp, i.e., both are pth-power
summable, then

( Σ_{i=1}^∞ |x_i + y_i|^p )^{1/p} ≤ ( Σ_{i=1}^∞ |x_i|^p )^{1/p} + ( Σ_{i=1}^∞ |y_i|^p )^{1/p}.

(M3) If x(t) and y(t) belong to Lp(0, 1), then

( ∫_0^1 |x(t) + y(t)|^p dt )^{1/p} ≤ ( ∫_0^1 |x(t)|^p dt )^{1/p} + ( ∫_0^1 |y(t)|^p dt )^{1/p}.
Proof: If p = 1 or p = ∞, (M1) is easily seen to be true.
Suppose 1 < p < ∞. Then

( Σ_{i=1}^n |x_i + y_i|^p )^{1/p} ≤ ( Σ_{i=1}^n (|x_i| + |y_i|)^p )^{1/p}.

Moreover, (|x_i| + |y_i|)^p = (|x_i| + |y_i|)^{p−1} |x_i| + (|x_i| + |y_i|)^{p−1} |y_i|.
Summing these identities for i = 1, 2, ..., n, and applying Hölder's
inequality (H1), noting that (p − 1)q = p,

Σ_{i=1}^n (|x_i| + |y_i|)^{p−1} |x_i| ≤ ( Σ_{i=1}^n |x_i|^p )^{1/p} ( Σ_{i=1}^n (|x_i| + |y_i|)^{(p−1)q} )^{1/q}
= ( Σ_{i=1}^n |x_i|^p )^{1/p} ( Σ_{i=1}^n (|x_i| + |y_i|)^p )^{1/q}.

Similarly we have

Σ_{i=1}^n (|x_i| + |y_i|)^{p−1} |y_i| ≤ ( Σ_{i=1}^n |y_i|^p )^{1/p} ( Σ_{i=1}^n (|x_i| + |y_i|)^p )^{1/q}.

From the above two inequalities,

Σ_{i=1}^n (|x_i| + |y_i|)^p ≤ [ ( Σ_{i=1}^n |x_i|^p )^{1/p} + ( Σ_{i=1}^n |y_i|^p )^{1/p} ] ( Σ_{i=1}^n (|x_i| + |y_i|)^p )^{1/q},    (1.5)

or, dividing by ( Σ_{i=1}^n (|x_i| + |y_i|)^p )^{1/q} and assuming that Σ_{i=1}^n (|x_i| + |y_i|)^p ≠ 0,

( Σ_{i=1}^n (|x_i| + |y_i|)^p )^{1/p} ≤ ( Σ_{i=1}^n |x_i|^p )^{1/p} + ( Σ_{i=1}^n |y_i|^p )^{1/p}.    (1.6)

From (1.5) and (1.6) we have

( Σ_{i=1}^n |x_i + y_i|^p )^{1/p} ≤ ( Σ_{i=1}^n |x_i|^p )^{1/p} + ( Σ_{i=1}^n |y_i|^p )^{1/p}.    (1.7)
(M2) is true for p = 1 and p = ∞.
To prove (M2) for 1 < p < ∞, we note that
x = {x_i} ∈ lp implies Σ_{i=1}^∞ |x_i|^p < ∞, and also y = {y_i} ∈ lp implies Σ_{i=1}^∞ |y_i|^p < ∞.
We examine Σ_{i=1}^∞ |x_i + y_i|^p.
Let us note that z = {z_i} ∈ lp implies {|z_i|^{p−1}} ∈ lq.
On applying Hölder's inequality twice, to the sequences {x_i} ∈ lp and
{|x_i + y_i|^{p−1}} ∈ lq, and correspondingly to {y_i} ∈ lp, we get

Σ_{i=1}^∞ |x_i + y_i|^p ≤ Σ_{i=1}^∞ |x_i + y_i|^{p−1} |x_i| + Σ_{i=1}^∞ |x_i + y_i|^{p−1} |y_i|
≤ [ ( Σ_{i=1}^∞ |x_i|^p )^{1/p} + ( Σ_{i=1}^∞ |y_i|^p )^{1/p} ] ( Σ_{i=1}^∞ |x_i + y_i|^{(p−1)q} )^{1/q}.

Assuming Σ_{i=1}^∞ |x_i + y_i|^p ≠ 0, and noting that (p − 1)q = p, the above
inequality yields on division by ( Σ_{i=1}^∞ |x_i + y_i|^p )^{1/q},

( Σ_{i=1}^∞ |x_i + y_i|^p )^{1/p} ≤ ( Σ_{i=1}^∞ |x_i|^p )^{1/p} + ( Σ_{i=1}^∞ |y_i|^p )^{1/p}.    (1.8)
It is easily seen that (M3) is true for p = 1 and p = ∞. To prove (M3)
for 1 < p < ∞ we proceed as follows.
Let x(t) ∈ Lp(0, 1), i.e., ∫_0^1 |x(t)|^p dt < ∞, and
y(t) ∈ Lp(0, 1), i.e., ∫_0^1 |y(t)|^p dt < ∞.
If z(t) ∈ Lp(0, 1), i.e., ∫_0^1 |z(t)|^p dt < ∞, then
∫_0^1 (|z(t)|^{p−1})^{p/(p−1)} dt < ∞, i.e., |z(t)|^{p−1} ∈ Lq(0, 1).
Let us consider the integral ∫_0^1 |x(t) + y(t)|^p dt. For 1 < p < ∞,

|x(t) + y(t)|^p ≤ (|x(t)| + |y(t)|)^p ≤ 2^p (|x(t)|^p + |y(t)|^p).

Hence,

∫_0^1 |x(t) + y(t)|^p dt ≤ 2^p ( ∫_0^1 |x(t)|^p dt + ∫_0^1 |y(t)|^p dt ) < ∞, since x(t), y(t) ∈ Lp(0, 1).

Furthermore, ∫_0^1 |x(t) + y(t)|^p dt < ∞ implies
∫_0^1 (|x(t) + y(t)|^{p−1})^{p/(p−1)} dt < ∞, i.e., |x(t) + y(t)|^{p−1} ∈ Lq(0, 1),
where p and q are conjugate to each other.
Using Hölder's inequality we conclude

∫_0^1 |x(t) + y(t)|^p dt ≤ ∫_0^1 |x(t) + y(t)|^{p−1} |x(t)| dt + ∫_0^1 |x(t) + y(t)|^{p−1} |y(t)| dt
≤ ( ∫_0^1 |x(t) + y(t)|^{(p−1)q} dt )^{1/q} ( ∫_0^1 |x(t)|^p dt )^{1/p}
+ ( ∫_0^1 |x(t) + y(t)|^{(p−1)q} dt )^{1/q} ( ∫_0^1 |y(t)|^p dt )^{1/p}
= ( ∫_0^1 |x(t) + y(t)|^p dt )^{1/q} [ ( ∫_0^1 |x(t)|^p dt )^{1/p} + ( ∫_0^1 |y(t)|^p dt )^{1/p} ].

Assuming that ∫_0^1 |x(t) + y(t)|^p dt ≠ 0 and dividing both sides of the
above inequality by ( ∫_0^1 |x(t) + y(t)|^p dt )^{1/q}, we get

( ∫_0^1 |x(t) + y(t)|^p dt )^{1/p} ≤ ( ∫_0^1 |x(t)|^p dt )^{1/p} + ( ∫_0^1 |y(t)|^p dt )^{1/p}.    (1.9)
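As with Hölder's inequality, (M1) can be spot-checked numerically; a short Python sketch:

```python
import random

def p_norm(v, p):
    # ( sum over i of |v_i|^p )^(1/p)
    return sum(abs(a) ** p for a in v) ** (1 / p)

random.seed(4)
for p in (1, 1.5, 2, 4):
    for _ in range(200):
        x = [random.uniform(-5, 5) for _ in range(8)]
        y = [random.uniform(-5, 5) for _ in range(8)]
        s = [a + b for a, b in zip(x, y)]
        # Minkowski: the p-norm of the sum is at most the sum of the p-norms
        assert p_norm(s, p) <= p_norm(x, p) + p_norm(y, p) + 1e-9

minkowski_ok = True
```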
Problems
1. Show that the Cauchy–Bunyakovsky–Schwartz inequality implies that
(|ξ1| + |ξ2| + ··· + |ξn|)^2 ≤ n (|ξ1|^2 + ··· + |ξn|^2).
2. In the plane of complex numbers show that the points z on the open
unit disk |z| < 1 form a metric space if the metric is defined as
ρ(z1, z2) = (1/2) log (1 + u)/(1 − u), where u = |z1 − z2| / |1 − z̄1 z2|.
1.4.5
The spaces lp^(n), l∞^(n), lp (p ≥ 1), l∞
(i) The spaces lp^(n), l∞^(n)
Let X be an n-dimensional arithmetic space, i.e., the set of all possible
n-tuples of real numbers, and let x = {x1, x2, ..., xn}, y = {y1, y2, ..., yn},
and p ≥ 1.
We define ρ_p(x, y) = ( Σ_{i=1}^n |x_i − y_i|^p )^{1/p}. Let max_{1≤i≤n} |x_i − y_i| = |x_k − y_k|.
Then,

ρ_p(x, y) = |x_k − y_k| ( 1 + Σ_{i=1, i≠k}^n |x_i − y_i|^p / |x_k − y_k|^p )^{1/p}.

Making p → ∞, we get ρ_∞(x, y) = max_{1≤i≤n} |x_i − y_i|. It may be noted that
ρ_p(x, y), x, y ∈ X, satisfies axioms 1–3 of a metric space. Since by
Minkowski's inequality,

( Σ_{i=1}^n |x_i − z_i|^p )^{1/p} ≤ ( Σ_{i=1}^n |x_i − y_i|^p )^{1/p} + ( Σ_{i=1}^n |y_i − z_i|^p )^{1/p},

axiom 4 of a metric space is satisfied. Hence the set X with the metric
ρ_p(x, y) is a metric space and is called lp^(n); with the metric ρ_∞ it is
called l∞^(n).
(ii) The spaces lp (p ≥ 1), l∞
Let X be the set of sequences x = {x1, x2, ..., xn, ...} of real numbers.
x is said to belong to the space lp if Σ_{i=1}^∞ |x_i|^p < ∞ (p ≥ 1, p fixed).
In lp we introduce the metric ρ(x, y), for x = {x_i} and y = {y_i}, as

ρ_p(x, y) = ( Σ_{i=1}^∞ |x_i − y_i|^p )^{1/p}.

The metric is a natural extension of the metric in lp^(n) when n → ∞. To see
that the series for ρ_p converges for x, y ∈ lp we use Minkowski's inequality
(M2). It may be noted that the above metric satisfies axioms 1–3 of a metric
space. If z = {z_i} ∈ lp, then Minkowski's inequality (M2) yields

ρ(x, z) = ( Σ_{i=1}^∞ |x_i − z_i|^p )^{1/p} = ( Σ_{i=1}^∞ |(x_i − y_i) + (y_i − z_i)|^p )^{1/p} ≤ ρ(x, y) + ρ(y, z).

Thus lp is a metric space.
If p = 2, we have the space l2 with the metric ρ(x, y) = ( Σ_{i=1}^∞ |x_i − y_i|^2 )^{1/2}.
Later chapters will reveal that l2 possesses a special property in that it
admits a scalar product and hence becomes a Hilbert space.
l∞ is the space of all bounded sequences, i.e., all x = {x1, x2, ...}
for which sup_{1≤i<∞} |x_i| < ∞, with metric ρ(x, y) = sup_{1≤i<∞} |x_i − y_i|, where
y = {y1, y2, ...}.
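The limit ρ_p → ρ_∞ as p → ∞ can be seen numerically; a small Python sketch (the sample tuples are arbitrary):

```python
def rho_p(x, y, p):
    # the l_p^(n) metric on n-tuples
    return sum(abs(a - b) ** p for a, b in zip(x, y)) ** (1 / p)

def rho_inf(x, y):
    # the l_inf^(n) metric: the maximum coordinate difference
    return max(abs(a - b) for a, b in zip(x, y))

x, y = (1.0, 4.0, -2.0), (0.0, 1.0, 3.0)   # coordinate differences 1, 3, 5

# rho_p decreases toward the max-difference metric as p grows
assert rho_inf(x, y) == 5.0
assert abs(rho_p(x, y, 200) - rho_inf(x, y)) < 1e-3
assert rho_p(x, y, 1) >= rho_p(x, y, 2) >= rho_p(x, y, 10) >= rho_inf(x, y)
```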
1.4.6
The complex spaces, non-metrizable spaces
(i) Complex spaces
Together with the real spaces C([0, 1]), m, lp it is possible to consider
the corresponding complex spaces C([0, 1]), m, lp.
The elements of the complex space C([0, 1]) are complex-valued continuous
functions of a real variable. Similarly, the elements of the complex space m are
complex sequences that are bounded, as in the case of the complex lp spaces,
whose series of pth powers of moduli converge.
(ii) Non-metrizable spaces
Let us consider the set F([0, 1]) of all real functions defined on the
interval [0, 1]. A sequence {x_n(t)} ⊂ F([0, 1]) will converge to x(t) ∈
F([0, 1]) if for every fixed t we have x_n(t) → x(t). Thus the convergence of a
sequence of functions in F([0, 1]) is pointwise convergence. We will show
that F([0, 1]) is not a metric space. Let M be the set of continuous functions
in F([0, 1]). If F([0, 1]) were a metric space, then using the properties of
closure in a metric space we would have M̄ = M [see 1.4.6]. Since M is a set
of continuous functions, its sequential limits are in the sense of uniform
convergence, whereas F([0, 1]) admits only pointwise convergence. This means
int M = ∅, i.e., M is nowhere dense, and is therefore a set of the first
category [see 1.4.9]. On the other hand, M together with the limits of its
sequences in the sense of pointwise convergence forms a set of the second
category [see 1.4.9]. Hence pointwise convergence on F([0, 1]) is
non-metrizable.
Problems
1. Find a sequence which converges to $0$ but is not in any space $l_p$, where $1 \leq p < \infty$.
2. Show that the real line with $\rho(x, y) = \dfrac{|x - y|}{1 + |x - y|}$ is a metric space.
3. If $(X, \rho)$ is any metric space, show that another metric on $X$ is defined by
$$\bar{\rho}(x, y) = \frac{\rho(x, y)}{1 + \rho(x, y)}.$$
4. Find a sequence $\{x\}$ which is in $l_p$ with $p > 1$ but $x \notin l_1$.
5. Show that the set of continuous functions on $(-\infty, \infty)$ with
$$\rho(x, y) = \sum_{n=1}^{\infty} \frac{1}{2^n} \cdot \frac{\max[|x(t) - y(t)| : |t| \leq n]}{1 + \max[|x(t) - y(t)| : |t| \leq n]}$$
is a metric space.
6. Diameter, bounded set: The diameter $D(A)$ of a nonempty set $A$ in a metric space $(X, \rho)$ is defined to be $D(A) = \sup_{x, y \in A} \rho(x, y)$. $A$ is said to be bounded if $D(A) < \infty$. Show that $A \subseteq B$ implies that $D(A) \leq D(B)$.
7. Distance between sets: The distance $D(A, B)$ between two nonempty sets $A$ and $B$ of a metric space $(X, \rho)$ is defined to be $D(A, B) = \inf_{x \in A, \, y \in B} \rho(x, y)$. Show that $D$ does not define a metric on the power set of $X$.
8. Distance of a point from a set: The distance $D(x, A)$ from a point $x$ to a nonempty subset $A$ of $(X, \rho)$ is defined to be $D(x, A) = \inf_{a \in A} \rho(x, a)$. Show that for any $x, y \in X$, $|D(x, A) - D(y, A)| \leq \rho(x, y)$.
1.4.7
Definition: ball and sphere
In this section we introduce certain concepts which are quite important in metric spaces. When applied to Euclidean spaces these concepts can be visualised as extensions of objects in classical geometry to higher dimensions. Given a point $x_0 \in X$ and a real number $r > 0$, we define three types of sets:
(a) $B(x_0, r) = \{x \in X : \rho(x, x_0) < r\}$ (open ball)
(b) $\bar{B}(x_0, r) = \{x \in X : \rho(x, x_0) \leq r\}$ (closed ball)
(c) $S(x_0, r) = \{x \in X : \rho(x, x_0) = r\}$ (sphere).
In all these cases $x_0$ is called the centre and $r$ the radius. An open ball in $X$ is the set of all points of $X$ whose distance from the centre $x_0$ is less than the radius $r$.
Note 1.4.1. In working with metric spaces we borrow some terminology from Euclidean geometry. But we should remember that balls and spheres in an arbitrary metric space do not possess the same properties as balls and spheres in $\mathbb{R}^3$. An unusual property of a sphere is that it may be empty. For example, a sphere in a discrete metric space is null, i.e., $S(x_0, r) = \emptyset$ if $r \neq 1$. We next consider two related concepts.
1.4.8
Definition: open set, closed set, neighbourhood, interior point, limit point, closure
A subset $M$ of a metric space $X$ is said to be open if it contains a ball about each of its points. A subset $K$ of $X$ is said to be closed if its complement (in $X$) is open, that is, $K^C = X - K$ is open.
An open ball $B(x_0, \varepsilon)$ of radius $\varepsilon$ is often called an $\varepsilon$-neighbourhood of $x_0$. By a neighbourhood of $x_0$ we mean any subset of $X$ which contains an $\varepsilon$-neighbourhood of $x_0$. We see that every neighbourhood of $x_0$ contains $x_0$; in other words, $x_0$ is a point of each of its neighbourhoods. If $N$ is a neighbourhood of $x_0$ and $N \subseteq M$, then $M$ is also a neighbourhood of $x_0$.
We call $x_0$ an interior point of a set $M \subseteq X$ if $M$ is a neighbourhood of $x_0$. The interior of $M$ is the set of all interior points of $M$ and is denoted by $M^0$ or int($M$). int($M$) is open and is the largest open set contained in $M$. Symbolically, int($M$) $= \{x : x \in M \text{ and } B(x, \varepsilon) \subseteq M \text{ for some } \varepsilon > 0\}$.
If $A \subseteq X$ and $x_0 \in X$, then $x_0$ is called a limit point of $A$ if every neighbourhood of $x_0$ contains at least one point of $A$ other than $x_0$. That is, $x_0$ is a limit point of $A$ if and only if every neighbourhood $N_{x_0}$ of $x_0$ satisfies $(N_{x_0} - \{x_0\}) \cap A \neq \emptyset$.
If for all neighbourhoods $N_x$ of $x$, $N_x \cap A \neq \emptyset$, then $x$ is called a contact point of $A$. For each $A \subseteq X$, the set $\bar{A}$, consisting of all points which are either points of $A$ or limit points of $A$, is called the closure of $A$. The closure of a set is a closed set and is the smallest closed set containing $A$.
Note 1.4.2. In what follows we show how different metrics yield different types of open balls. Let $X = \mathbb{R}^2$ be the Euclidean space. Then the unit open ball $B(0,1)$ is given in Figure 1.2(a). If the $l_\infty$ norm is used, the unit open ball $B(0,1)$ is the unit square, as given in Figure 1.2(b). If the $l_1$ norm is used, the unit open ball $B(0,1)$ becomes the diamond-shaped region shown in Figure 1.2(c). If we select $p > 2$, $B(0,1)$ becomes a figure with curved sides, shown in Figure 1.2(d) for $p = 3$. The unit ball in $C[0,1]$ is given in Figure 1.2(e).
Fig. 1.2 Unit balls $B(0,1)$ in the $(x_1, x_2)$-plane: (a) $(\mathbb{R}^2, \rho)$, the Euclidean disc; (b) $(\mathbb{R}^2, l_\infty)$, the square; (c) $(\mathbb{R}^2, l_1)$, the diamond; (d) $(\mathbb{R}^2, l_3)$, a curved-sided figure; (e) the ball $\{f : |f - f_0| < r\}$ in $C[0,1]$, a tube of width $2r$ about $f_0$.
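The shapes in Fig. 1.2 can also be probed numerically: a point of $\mathbb{R}^2$ lies in the unit open ball exactly when its distance from the origin under the chosen metric is less than $1$. A small sketch of ours (the function name is an assumption, not from the text):

```python
# Membership of a point of R^2 in the open unit ball B(0, 1) under
# the l_p metrics (p = 1 gives the diamond, p = 2 the disc) and the
# l_infinity metric (the square).
def in_unit_ball(point, p):
    x, y = point
    if p == float("inf"):                      # l_inf: max metric
        return max(abs(x), abs(y)) < 1
    return (abs(x) ** p + abs(y) ** p) ** (1.0 / p) < 1

# (0.9, 0.9) lies inside the l_inf unit square but outside both the
# Euclidean disc and the l_1 diamond.
assert in_unit_ball((0.9, 0.9), float("inf"))
assert not in_unit_ball((0.9, 0.9), 2)
assert not in_unit_ball((0.9, 0.9), 1)
```

The nesting visible in the figure (diamond inside disc inside square) corresponds to the inequality $\max_i |x_i| \leq (\sum_i |x_i|^p)^{1/p} \leq \sum_i |x_i|$.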
Note 1.4.3. It may be noted that the closed sets of a metric space have
the same basic properties as the closed numerical point sets, namely:
(i) $\overline{M \cup N} = \overline{M} \cup \overline{N}$;
(ii) $M \subseteq \overline{M}$;
(iii) $\overline{\overline{M}} = \overline{M}$;
(iv) the closure of the empty set is empty.
1.4.9
Theorem
In any metric space $X$, the empty set $\emptyset$ and the full space $X$ are open.
To show that $\emptyset$ is open, we must show that each point in $\emptyset$ is the centre of an open ball contained in $\emptyset$; but since there are no points in $\emptyset$, this requirement is satisfied vacuously. $X$ is clearly open, since every open ball centred at any of its points is contained in $X$, $X$ being the entire space.
Note 1.4.4. It may be noted that an open ball $B(0, r)$ on the real line is the bounded open interval $]-r, r[$ with its centre at the origin and a total length of $2r$. We may note that $[0, 1[$ on the real line is not an open set: the interval being closed on the left, every bounded open interval with the origin as centre, in other words every open ball $B(0, r)$, contains points of $\mathbb{R}$ not belonging to $[0, 1[$. On the other hand, if we consider $X = [0, 1[$ as a space in itself, then the set $X$ is open. There is no inconsistency in the above statements once we note that when $X = [0, 1[$ is considered as a space, there are no points of the space outside $[0, 1[$; whereas when $X = [0, 1[$ is considered as a subspace of $\mathbb{R}$, there are points of $\mathbb{R}$ outside $X$. One should take note of the fact that whether or not a set is open is meaningful only with respect to a specific metric space containing it, never on its own.
1.4.10
Theorem
In a metric space $X$ each open ball is an open set.
Let $B(x_0, r)$ be a given ball in a metric space $X$, and let $x$ be any point in $B(x_0, r)$. Then $\rho(x_0, x) < r$. Let $r_1 = r - \rho(x_0, x)$, and consider the open ball $B(x, r_1)$ with centre $x$ and radius $r_1$. We want to show that $B(x, r_1) \subseteq B(x_0, r)$. For if $y \in B(x, r_1)$, then
$$\rho(y, x_0) \leq \rho(y, x) + \rho(x, x_0) < r_1 + \rho(x, x_0) = r, \quad \text{i.e., } y \in B(x_0, r).$$
Thus $B(x, r_1)$ is an open ball contained in $B(x_0, r)$. Since $x$ is arbitrary, it follows that $B(x_0, r)$ is an open set. In what follows we state some results that will be used later on:
Let $X$ be a metric space.
(i) A subset $G$ of $X$ is open if and only if it is a union of open balls.
(ii) (a) Every union of open sets in $X$ is open, and (b) any finite intersection of open sets in $X$ is open.
Note 1.4.5. The two properties mentioned in (ii) are vital properties
of a metric space and these properties are established by using only the
openness of a set in a metric space. No use of distance or metric is required
in the proof of the above theorem. These properties are germane to the
development of topology and topological spaces. We discuss them in the
next section.
We will next mention some properties of closed sets in a metric space.
We should recall that a subset $K$ of $X$ is said to be closed if its complement $K^C = X - K$ is open.
(i) The null set $\emptyset$ and the entire set $X$ in a metric space are closed.
(ii) In a metric space a closed ball is a closed set.
(iii) In a metric space, (a) the intersection of closed sets is closed and (b) the union of a finite number of closed sets is a closed set.
Note 1.4.6. This leads to what is known as closed set topology.
1.4.11
Convergence, Cauchy sequence, completeness
In real analysis we know that a sequence of real numbers $\{\xi_i\}$ is said to tend to a limit $l$ if the distance of $\xi_i$ from $l$ is arbitrarily small for all $i$ excepting a finite number of terms. In other words, it is the metric on $\mathbb{R}$ that helps us introduce the concept of convergence. This idea is generalized to any metric space, where the convergence of a sequence is defined with the help of the relevant metric.
Definition: convergence of a sequence, limit
A sequence $\{x_n\}$ in a metric space $X = (X, \rho)$ is said to converge, or to be convergent, if there is an $x \in X$ such that $\lim_{n \to \infty} \rho(x, x_n) = 0$. $x$ is called the limit of $\{x_n\}$ and we write $x = \lim_{n \to \infty} x_n$. In other words, given $\varepsilon > 0$, there exists $n_0(\varepsilon)$ such that $\rho(x, x_n) < \varepsilon$ for all $n > n_0(\varepsilon)$.
Note 1.4.7. The limit of a convergent sequence must be a point of the space $X$.
For example, let $X$ be the open interval $]0, 1[$ in $\mathbb{R}$ with the usual metric $\rho(x, y) = |x - y|$. Then the sequence $(1/2, 1/3, \ldots, 1/n, \ldots)$ is not convergent in $X$, since $0$, the point to which the sequence is supposed to converge, does not belong to the space $X$.
1.4.12
Theorem
A sequence $\{x_n\}$ of points of a metric space $X$ can converge to at most one limit.
If the limit is not unique, let $x_n \to x$ and $x_n \to y$ as $n \to \infty$ with $x \neq y$. Then $\rho(x, y) \leq \rho(x_n, x) + \rho(x_n, y) < \varepsilon + \varepsilon = 2\varepsilon$ for $n \geq n_0(\varepsilon)$. Since $\varepsilon$ is an arbitrary positive number, it follows that $\rho(x, y) = 0$, i.e., $x = y$.
1.4.13
Theorem
If a sequence $\{x_n\}$ of points of $X$ converges to a point $x \in X$, then the set of numbers $\rho(x_n, \xi)$ is bounded for every fixed point $\xi$ of the space $X$.
Note 1.4.8. In some spaces the limit of a sequence of elements is defined directly. If we can introduce in such a space a metric so that the limit induced by the metric coincides with the initial limit, the given space is called metrizable.
Note 1.4.9. It is known that in $\mathbb{R}$ the Cauchy convergence criterion ensures the existence of the limit. Yet in an arbitrary metric space the fulfilment of the Cauchy convergence criterion does not ensure the existence of the limit. This motivates the introduction of the notion of completeness.
1.4.14
Definition: Cauchy sequence, completeness
A sequence $\{x_n\}$ in a metric space $X = (X, \rho)$ is said to be a Cauchy sequence, or fundamental sequence, if given $\varepsilon > 0$ there exists a positive integer $n_0(\varepsilon)$ such that $\rho(x_n, x_m) < \varepsilon$ for $n, m > n_0(\varepsilon)$.
Note 1.4.10. Every convergent sequence is a Cauchy sequence, but the converse is not true in an arbitrary metric space: there exist metric spaces that contain Cauchy sequences having no element of the space as limit.
1.4.15
Examples
(i) The space of rational numbers
Let $X$ be the set of rational numbers, in which the metric is taken as $\rho(r_1, r_2) = |r_1 - r_2|$. Thus $X$ is a metric space. Let us take $r_1 = 1, r_2 = \frac{1}{2}, \ldots, r_n = \frac{1}{n}$. $\{r_n\}$ is a Cauchy sequence and $r_n \to 0$ as $n \to \infty$. On the other hand, let us take $r_n = \left(1 + \frac{1}{n}\right)^n$, where $n$ is an integer. $\{r_n\}$ is a Cauchy sequence. However, $\lim_{n \to \infty} \left(1 + \frac{1}{n}\right)^n = e$, which is not a rational number.
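Numerically (an illustration of ours, not part of the text), exact rational arithmetic shows the terms $r_n = (1 + 1/n)^n$ clustering together, as the Cauchy property demands, while approaching $e \approx 2.71828$, which lies outside $\mathbb{Q}$:

```python
from fractions import Fraction  # exact rational arithmetic

def r(n):
    """The rational number (1 + 1/n)^n."""
    return (1 + Fraction(1, n)) ** n

# Consecutive terms get closer in the metric rho(r1, r2) = |r1 - r2| ...
gap_small = abs(r(201) - r(200))
gap_large = abs(r(11) - r(10))
assert gap_small < gap_large
# ... yet the terms approach e = 2.71828..., which is not rational.
assert abs(float(r(200)) - 2.718281828459045) < 0.01
```

Every term here is a bona fide rational number, so the whole sequence lives in the space $X$; only its would-be limit does not.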
(ii) The space of polynomials $P(t)$ $(0 \leq t \leq 1)$
Let $X$ be the set of polynomials $P(t)$ $(0 \leq t \leq 1)$ and let the metric be defined by $\rho(P, Q) = \max_t |P(t) - Q(t)|$. It can be seen that with the above metric the space $X$ is a metric space. Let $\{P_n(t)\}$ be a sequence of polynomials converging uniformly to a continuous function that is not a polynomial. Then this sequence of polynomials is a Cauchy sequence with no limit in the space $(X, \rho)$. In what follows, we give some examples of complete metric spaces.
(iii) Completeness of $\mathbb{R}^n$ and $\mathbb{C}^n$
Let us consider $x_p \in \mathbb{R}^n$. Then we can write $x_p = \{\xi_1^{(p)}, \xi_2^{(p)}, \ldots, \xi_n^{(p)}\}$. Similarly, $x_q = \{\xi_1^{(q)}, \xi_2^{(q)}, \ldots, \xi_n^{(q)}\}$. Then,
$$\rho(x_p, x_q) = \left( \sum_{i=1}^{n} |\xi_i^{(p)} - \xi_i^{(q)}|^2 \right)^{1/2}.$$
Now, if $\{x_m\}$ is a Cauchy sequence, for every $\varepsilon > 0$ there exists $n_0(\varepsilon)$ s.t.
$$\rho(x_p, x_q) = \left( \sum_{i=1}^{n} |\xi_i^{(p)} - \xi_i^{(q)}|^2 \right)^{1/2} < \varepsilon \quad \text{for } p, q > n_0(\varepsilon). \qquad (1.10)$$
Squaring, we have for $p, q > n_0(\varepsilon)$ and $i = 1, 2, \ldots, n$, $|\xi_i^{(p)} - \xi_i^{(q)}|^2 < \varepsilon^2$, i.e., $|\xi_i^{(p)} - \xi_i^{(q)}| < \varepsilon$. This shows that for each fixed $i$ $(1 \leq i \leq n)$, the sequence $\{\xi_i^{(1)}, \xi_i^{(2)}, \ldots\}$ is a Cauchy sequence of real numbers. Therefore $\xi_i^{(m)} \to \xi_i$ as $m \to \infty$. Let us denote by $x$ the vector $x = (\xi_1, \xi_2, \ldots, \xi_n)$. Clearly, $x \in \mathbb{R}^n$. It follows from (1.10) that $\rho(x_m, x) \leq \varepsilon$ for $m \geq n_0(\varepsilon)$. This shows that $x$ is the limit of $\{x_m\}$, and this proves completeness because $\{x_m\}$ is an arbitrary Cauchy sequence. Completeness of $\mathbb{C}^n$ can be proven in a similar fashion.
(iv) Completeness of $C([a, b])$ and incompleteness of $S([a, b])$
Let $\{x_n(t)\} \subseteq C([a, b])$ be a Cauchy sequence, so that $\rho(x_n(t), x_m(t)) \to 0$ as $n, m \to \infty$. Thus, given $\varepsilon > 0$, there exists $n_0(\varepsilon)$ such that $\max_{t \in [a, b]} |x_n(t) - x_m(t)| < \varepsilon$ for $n, m > n_0(\varepsilon)$. Hence, for every fixed $t = t_0 \in J = [a, b]$, $|x_n(t_0) - x_m(t_0)| < \varepsilon$ for $m, n > n_0(\varepsilon)$. Thus $\{x_n(t_0)\}$ is a Cauchy sequence of real numbers. Since $\mathbb{R}$ is complete, $\{x_n(t_0)\} \to x(t_0) \in \mathbb{R}$. In this way we can associate with each $t \in J$ a unique real number $x(t)$ as the limit of the sequence $\{x_n(t)\}$. This defines a (pointwise) function $x$ on $J$. Making $n \to \infty$ in the inequality above, $\max_{t \in [a, b]} |x_m(t) - x(t)| \leq \varepsilon$ for $m \geq n_0(\varepsilon)$. Therefore $\{x_m(t)\}$ converges uniformly to $x(t)$ on $J$. Since the $x_m(t)$ are continuous functions of $t$ and the convergence is uniform, the limit $x(t)$ is continuous on $J$. Hence $x(t) \in C([a, b])$, i.e., $C([a, b])$ is complete.
Note 1.4.11. We call $C([a, b])$ real $C([a, b])$ if each member of $C([a, b])$ is real-valued. On the other hand, if each member of $C([a, b])$ is complex-valued, then we call the space complex $C([a, b])$.
complete. We next consider the set X of all continuous realvalued functions
on J = [a, b]. Let us dene the metric (x(t), y(t)) for x(t), y(t) X as
b
(x, y) =
x(t) y(t)dt.
We can easily see that the set X with
the metric dened above is a metric space
S[a, b] = (X, ). We next show that
S[a, b] is not complete. Let us construct
a {xn } as follows: If a < c < b and for
every n so large that a < c n1
We dene
0
if a t c
1
xn (t) =
nt nc + 1 if c t c
1
if c t b
xm (t ), xn (t )
O
1
c
m
1
c
n
C
t
Fig. 1.3
For n > m
b
a
1
1 1
1
1
1
xn (t) xm (t)dt = AOB = 1 c c +
<
+
2
n
m
2 n m
Thus (xn , xm ) 0 as n, m . Hence(xn ) is a Cauchy sequence.
x(t) = 0, t [a, c)
Now, let x S[a, b], then lim (xn , x) = 0
x(t) = 1, t (c, b]
ni
Since it is impossible for a continuous function to have the property,
(xn ) does not have a limit.
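For the ramp functions above, the integral $\int |x_n - x_m| \, dt$ can also be evaluated numerically; with $a = 0$, $b = 1$, $c = 1/2$ (our choice of constants for illustration) it matches the triangle-area value $\frac{1}{2}(\frac{1}{m} - \frac{1}{n})$ and shrinks as $m, n$ grow:

```python
# Numerical check that the ramp sequence {x_n} is Cauchy in the
# integral metric of S[0, 1], using a = 0, b = 1, c = 1/2.
a, b, c = 0.0, 1.0, 0.5

def x(n, t):
    """Piecewise-linear ramp: 0 up to c - 1/n, linear up to c, then 1."""
    if t <= c - 1.0 / n:
        return 0.0
    if t >= c:
        return 1.0
    return n * t - n * c + 1.0

def rho(n, m, steps=50000):
    """Approximate integral of |x_n - x_m| over [a, b], midpoint rule."""
    h = (b - a) / steps
    return sum(abs(x(n, a + (k + 0.5) * h) - x(m, a + (k + 0.5) * h)) * h
               for k in range(steps))

d1 = rho(20, 10)
d2 = rho(200, 100)
assert abs(d1 - 0.5 * (1 / 10 - 1 / 20)) < 1e-3   # matches (1/m - 1/n)/2
assert d2 < d1                                     # distances shrink
```

The same computation with the sup metric of $C([0,1])$ would give $\rho(x_n, x_m) = 1 - m/n$ for $n > m$, which does not tend to zero; the incompleteness phenomenon is specific to the integral metric.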
(v) m, the space of bounded number sequences, is complete.
(vi) Completeness of $l_p$, $1 \leq p < \infty$
(a) Let $\{x_n\}$ be any Cauchy sequence in the space $l_p$, where $x_n = \{\xi_1^{(n)}, \xi_2^{(n)}, \ldots, \xi_i^{(n)}, \ldots\}$. Then given $\varepsilon > 0$ there exists $n_0(\varepsilon)$ such that
$$\rho(x_n, x_m) = \left( \sum_{i=1}^{\infty} |\xi_i^{(n)} - \xi_i^{(m)}|^p \right)^{1/p} < \varepsilon \quad \text{for } n, m \geq n_0(\varepsilon).$$
It follows that for every $i = 1, 2, \ldots$, $|\xi_i^{(n)} - \xi_i^{(m)}| < \varepsilon$ $(n, m \geq n_0(\varepsilon))$. We choose a fixed $i$. The above inequality shows $\{\xi_i^{(1)}, \xi_i^{(2)}, \ldots\}$ to be a Cauchy sequence of real numbers. The space $\mathbb{R}$ being complete, $\xi_i^{(n)} \to \xi_i$ as $n \to \infty$. Using these limits, we define $x = \{\xi_1, \xi_2, \ldots\}$ and show that $x \in l_p$ and $x_m \to x$ as $m \to \infty$. Since $\rho(x_n, x_m) < \varepsilon$,
$$\sum_{i=1}^{k} |\xi_i^{(n)} - \xi_i^{(m)}|^p < \varepsilon^p \quad (k = 1, 2, \ldots).$$
Making $n \to \infty$, we obtain for $m > n_0(\varepsilon)$
$$\sum_{i=1}^{k} |\xi_i - \xi_i^{(m)}|^p \leq \varepsilon^p.$$
We may now let $k \to \infty$; then for $m > n_0(\varepsilon)$,
$$\sum_{i=1}^{\infty} |\xi_i - \xi_i^{(m)}|^p \leq \varepsilon^p.$$
This shows that $x - x_m = \{\xi_i - \xi_i^{(m)}\} \in l_p$. Since $x_m \in l_p$, it follows by the Minkowski inequality that $x = (x - x_m) + x_m \in l_p$. It also follows from the above inequality that $(\rho(x_m, x))^p \leq \varepsilon^p$. Further, since $\varepsilon$ is an arbitrary small positive number, $x_m \to x$ as $m \to \infty$. Since $\{x_m\}$ is an arbitrary Cauchy sequence in $l_p$, this proves the completeness of $l_p$, $1 \leq p < \infty$.
(b) Let $\{x_n\}$ be a Cauchy sequence in $l_\infty$, where $x_n = \{\xi_1^{(n)}, \xi_2^{(n)}, \ldots, \xi_i^{(n)}, \ldots\}$. Thus for each $\varepsilon > 0$ there is an $N$ such that for $m, n > N$ we have $\sup_i |\xi_i^{(n)} - \xi_i^{(m)}| < \varepsilon$. It follows that for each $i$, $\{\xi_i^{(n)}\}$ is a Cauchy sequence. Let $\xi_i = \lim_{n \to \infty} \xi_i^{(n)}$ and let $x = \{\xi_1, \xi_2, \ldots\}$. Now for each $i$ and $n > N$, it follows that $|\xi_i^{(n)} - \xi_i| \leq \varepsilon$. Therefore $|\xi_i| \leq |\xi_i^{(n)}| + |\xi_i^{(n)} - \xi_i| \leq |\xi_i^{(n)}| + \varepsilon$ for $n > N$. Hence the $\xi_i$ are bounded uniformly in $i$, i.e., $x \in l_\infty$, and $\{\xi^{(n)}\}$ converges to $x$ in the $l_\infty$ metric. Hence $l_\infty$ is complete under the metric defined for $l_\infty$.
Problems
1. Show that in a metric space an open ball is an open set and a closed ball is a closed set.
2. What is an open ball $B(x_0; 1)$ on $\mathbb{R}$? In $\mathbb{C}$? In $l_1$? In $C([0, 1])$? In $l_2$?
3. Let $X$ be a metric space. If $\{x\}$ is a subset of $X$ consisting of a single point, show that its complement $\{x\}^C$ is open. More generally, show that $A^C$ is open if $A$ is any finite subset of $X$.
4. Let $X$ be a metric space and $B(x, r)$ the open ball in $X$ with centre $x$ and radius $r$. Let $A$ be a subset of $X$ with diameter less than $r$ that intersects $B(x, r)$. Prove that $A \subseteq B(x, 2r)$.
5. Show that the closure $\overline{B(x_0, r)}$ of an open ball $B(x_0, r)$ in a metric space can differ from the closed ball $\bar{B}(x_0, r)$.
6. Describe the interior of each of the following subsets of the real line: the set of all integers; the set of rationals; the set of all irrationals; $]0, 1]$; $[0, 1]$; and $[0, 1[ \, \cup \, \{1, 2\}$.
7. Give an example of an infinite class of closed sets, the union of which is not closed. Give an example of a set that (a) is both open and closed, (b) is neither open nor closed, (c) contains a point that is not a limit point of the set, and (d) contains no points that are not limit points of the set.
8. Describe the closure of each of the following subsets of the real line: the integers; the rationals; $]0, +\infty[$; $]-1, 0[ \, \cup \, ]0, 1[$.
9. Show that the set of all real numbers constitutes an incomplete metric space if we choose $\rho(x, y) = |\arctan x - \arctan y|$.
10. Show that the set of continuous real-valued functions on $J = [0, 1]$ does not constitute a complete metric space with the metric $\rho(x, y) = \int_0^1 |x(t) - y(t)| \, dt$.
11. Let $X$ be the metric space of all real sequences $x = \{\xi_i\}$, each of which has only finitely many nonzero terms, and $\rho(x, y) = \sum_i |\xi_i - \eta_i|$, where $y = \{\eta_i\}$. Show that $\{x_n\}$ with $x_n = \{\xi_j^{(n)}\}$, where $\xi_j^{(n)} = j^{-2}$ for $j = 1, 2, \ldots, n$ and $\xi_j^{(n)} = 0$ for $j > n$, is a Cauchy sequence but does not converge.
12. Show that $\{x_n\}$ is a Cauchy sequence if and only if $\rho(x_{n+k}, x_n)$ converges to zero uniformly in $k$.
13. Prove that the sequence $0.1, 0.101, 0.101001, 0.1010010001, \ldots$ is a Cauchy sequence of rational numbers that does not converge in the space of rational numbers.
14. In the space $l_2$, let $A = \{x = (x_1, x_2, \ldots) : |x_n| \leq \frac{1}{n}, \ n = 1, 2, \ldots\}$. Prove that $A$ is closed.
1.4.16
Criterion for completeness
Definition (dense set, everywhere dense set and nowhere dense set)
Given subsets $A$ and $B$ of $X$, $A$ is said to be dense in $B$ if $B \subseteq \bar{A}$. $A$ is said to be everywhere dense in $X$ if $\bar{A} = X$. $A$ is said to be nowhere dense in $X$ if $\overline{(\bar{A})^C} = X$, i.e., $\overline{X - \bar{A}} = X$, or equivalently int $\bar{A} = \emptyset$. The set of rational numbers is dense in $\mathbb{R}$.
As an example, in two-dimensional Euclidean space (the plane) the set of points whose coordinates are both rational is of the first category [see 1.4.18]: it is the union of a countable family of one-point sets, each of which is nowhere dense. Although this set is of the first category, it is nevertheless dense in $\mathbb{R}^2$.
We state without proof the famous Cantor's intersection theorem.
1.4.17
Theorem
Let a nested sequence of closed balls [i.e., each of which contains all that follow: $\bar{B}_1 \supseteq \bar{B}_2 \supseteq \cdots \supseteq \bar{B}_n \supseteq \cdots$] be given in a complete metric space $X$. If the radii of the balls tend to zero, then these balls have a unique common point.
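On the real line, closed balls are closed intervals $[x_0 - r, x_0 + r]$, and the theorem can be watched in action: nested intervals with radii tending to zero pin down a single common point. A toy illustration of ours (the halving scheme is an arbitrary choice):

```python
# Nested closed balls B_n = [lo_n, hi_n] in R with lengths -> 0,
# obtained by repeatedly keeping the left half of [0, 1].
balls = []
lo, hi = 0.0, 1.0
for n in range(60):
    balls.append((lo, hi))
    hi = (lo + hi) / 2.0        # each ball contains all that follow

# every ball contains the next one, and the radii tend to zero
assert all(b2[0] >= b1[0] and b2[1] <= b1[1]
           for b1, b2 in zip(balls, balls[1:]))
assert balls[-1][1] - balls[-1][0] < 1e-15
# the unique common point here is 0.0
assert all(lo_ <= 0.0 <= hi_ for lo_, hi_ in balls)
```

Completeness of $\mathbb{R}$ is what guarantees the common point exists; carried out inside $\mathbb{Q}$ with intervals shrinking onto an irrational number, the intersection would be empty.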
1.4.18
Definition: first category, second category
A set $M$ is said to be of the first category if it can be written as a countable union of nowhere dense sets. Otherwise, it is said to be of the second category. The set of rational points of a straight line is of the first category, while that of the irrational points is of the second category, as borne out by the following.
1.4.19
Theorem
A nonempty complete metric space is a set of the second category.
As an application of theorem 1.4.19, we prove the existence of functions on $[0, 1]$ that are continuous in the said interval but nowhere differentiable. Let us consider the metric space $C_0([0, 1])$ of continuous functions $f$ for which $f(0) = f(1)$, with $\rho(f, g) = \max\{|f(x) - g(x)| : x \in [0, 1]\}$. Then $C_0([0, 1])$ is a complete metric space. We would like to show that those functions in $C_0([0, 1])$ that are somewhere differentiable form a subset of the first category. $C_0([0, 1])$, being complete, is of the second category, and therefore cannot consist solely of somewhere differentiable functions. Hence $C_0([0, 1])$ must contain functions that are nowhere differentiable. For convenience we extend the functions of $C_0([0, 1])$ to the entire axis by periodicity and treat the space $\tilde{C}$ of such extensions with the metric defined above.
Let $K$ be the set of functions $f \in \tilde{C}$ such that for some $\xi$ the set of numbers $\left\{ \dfrac{f(\xi + h) - f(\xi)}{h} : h > 0 \right\}$ is bounded. $K$ contains the set of functions that are somewhere differentiable. We want to show that $K$ is of the first category in $\tilde{C}$.
Let
$$K_n = \left\{ f : \text{for some } \xi, \ \left| \frac{f(\xi + h) - f(\xi)}{h} \right| \leq n \ \text{for all } h > 0 \right\}.$$
Then $K = \bigcup_{n=1}^{\infty} K_n$. We shall show that for every $n = 1, 2, \ldots$, (i) $K_n$ is closed, and (ii) the complement $K_n^C$ is everywhere dense in $\tilde{C}$. If (i) and (ii) are both true, then int $K_n = \emptyset$ and, $K_n$ being a closed set, it follows that $K_n$ is nowhere dense in $\tilde{C}$. Hence $K$ is of the first category in $\tilde{C}$.
For (i), let $f$ be a limit point of $K_n$ and let $\{f_k\}$ be a sequence in $K_n$ converging to $f$. For each $k = 1, 2, \ldots$ let $\xi_k$ be in $[0, 1]$ such that
$$\left| \frac{f_k(\xi_k + h) - f_k(\xi_k)}{h} \right| \leq n \quad \text{for all } h > 0.$$
Let $\xi$ be a limit point of $\{\xi_k\}$ and let $\{\xi_{k_j}\}$ converge to $\xi$. For $h > 0$ and $\varepsilon > 0$,
$$\left| \frac{f(\xi + h) - f(\xi)}{h} \right| \leq \left| \frac{f_{k_j}(\xi_{k_j} + h) - f_{k_j}(\xi_{k_j})}{h} \right| + \frac{1}{h} \Big\{ |f(\xi + h) - f_{k_j}(\xi_{k_j} + h)| + |f_{k_j}(\xi_{k_j}) - f(\xi_{k_j})| + |f(\xi_{k_j}) - f(\xi)| \Big\}.$$
There is an $N = N(\varepsilon, h)$ such that $k > N$ implies $\sup_t |f_k(t) - f(t)| \leq \dfrac{\varepsilon h}{4}$. Since $f$ is continuous and $\lim_j \xi_{k_j} = \xi$, there is an $M > N$ such that for $k_j > M$ we have $|f(\xi + h) - f(\xi_{k_j} + h)| < \dfrac{\varepsilon h}{4}$ and $|f(\xi_{k_j}) - f(\xi)| < \dfrac{\varepsilon h}{4}$. Hence, if $k_j > M$,
$$\left| \frac{f(\xi + h) - f(\xi)}{h} \right| < \left| \frac{f_{k_j}(\xi_{k_j} + h) - f_{k_j}(\xi_{k_j})}{h} \right| + \varepsilon \leq n + \varepsilon.$$
Since $\varepsilon$ is arbitrary, it follows that $\left| \dfrac{f(\xi + h) - f(\xi)}{h} \right| \leq n$ for all $h > 0$. Thus $f \in K_n$ and $K_n$ is closed.
For (ii), let us suppose that $g \in \tilde{C}$. Let $\varepsilon > 0$ and let us partition $[0, 1]$ into $k$ equal intervals so fine that if $x, x'$ are in the same interval of the partitioning, $|g(x) - g(x')| < \varepsilon/2$ holds. Let us consider the $i$-th subinterval $\frac{i-1}{k} \leq x \leq \frac{i}{k}$ and the rectangle with base this subinterval whose points $(x, y)$ satisfy $\left| y - g\left(\frac{i-1}{k}\right) \right| \leq \varepsilon/2$. Then $\left( \frac{i-1}{k}, g\left(\frac{i-1}{k}\right) \right)$ lies on the left-hand side of the rectangle and $\left( \frac{i}{k}, g\left(\frac{i}{k}\right) \right)$ within its right-hand side. By joining these two points by a polygonal graph that remains within the rectangle and whose line segments have slopes exceeding $n$ in absolute value, we obtain a continuous function that is within $\varepsilon$ of $g$ and that, because its slopes exceed $n$, belongs to $K_n^C$. Thus $K_n^C$ is dense in $\tilde{C}$. Combining (i) and (ii) we can say that each $K_n$ is nowhere dense, and hence $K$ is of the first category in $\tilde{C}$.
Fig. 1.4 The graph of $g(x)$ on the subinterval beginning at $\frac{i-1}{k}$, enclosed in a rectangle of half-height $\varepsilon/2$, with the steep polygonal approximation inside it.
1.4.20
Isometric mapping, isometric spaces, metric completion, necessary and sufficient conditions
Definition: isometric mapping, isometric spaces
Let $X_1 = (X_1, \rho_1)$ and $X_2 = (X_2, \rho_2)$ be two metric spaces. Then
(a) A mapping $f$ of $X_1$ into $X_2$ is said to be isometric, or an isometry, if $f$ preserves distance, i.e., for all $x, y \in X_1$, $\rho_2(fx, fy) = \rho_1(x, y)$, where $fx$ and $fy$ are the images of $x$ and $y$ respectively.
(b) The space $X_1$ is said to be isometric to $X_2$ if there exists a one-to-one and onto (bijective) isometry of $X_1$ onto $X_2$. The spaces $X_1$ and $X_2$ are then called isometric spaces.
In what follows we aim to show that every metric space can be embedded in a complete metric space in which it is dense. If the metric space $X$ is dense in $\tilde{X}$, then $\tilde{X}$ is called the metric completion of $X$. For example, the space $\mathbb{R}$ of real numbers is the completion of the space $X$ of rational numbers under the metric $\rho(x, y) = |x - y|$, $x, y \in X$.
1.4.21
Theorem
Any metric space admits of a completion.
1.4.22
Theorem
If $X \subseteq \tilde{X}$, $X$ is dense in $\tilde{X}$, and every fundamental sequence of points of $X$ has a limit in $\tilde{X}$, then $\tilde{X}$ is a completion of $X$.
1.4.23
Theorem
A subspace of a complete metric space is complete if and only if it is
closed.
1.4.24
Theorem
Given a metric space $X_1$, assume that this space is incomplete, i.e., there exists in this space a Cauchy sequence that has no limit in $X_1$. Then there exists a complete space $X_2$ having a subset $X_2'$ that is everywhere dense in $X_2$ and isometric to $X_1$.
Problems
1. Let $X$ be a metric space. If $(x_n)$ and $(y_n)$ are sequences in $X$ such that $x_n \to x$ and $y_n \to y$, show that $\rho(x_n, y_n) \to \rho(x, y)$.
2. Show that a Cauchy sequence is convergent if and only if it has a convergent subsequence.
3. Exhibit a non-convergent Cauchy sequence in the space of polynomials on $[0, 1]$ with the uniform metric.
4. If $\rho_1$ and $\rho_2$ are metrics on the same set $X$ and there are positive numbers $a$ and $b$ such that for all $x, y \in X$, $a\rho_1(x, y) \leq \rho_2(x, y) \leq b\rho_1(x, y)$, show that the Cauchy sequences in $(X, \rho_1)$ and $(X, \rho_2)$ are the same.
5. Using completeness of $\mathbb{R}$, prove completeness of $\mathbb{C}$.
6. Show that the set of real numbers constitutes an incomplete metric space if we choose $\rho(x, y) = |\arctan x - \arctan y|$.
7. Show that a discrete metric space is complete.
8. Show that the space $C$ of convergent numerical sequences is complete with respect to a metric you are to specify.
9. Show that convergence in $C$ implies coordinatewise convergence.
10. Show that the set of rational numbers is dense in $\mathbb{R}$.
11. Let $X$ be a metric space and $A$ a subset of $X$. Prove that $A$ is everywhere dense in $X$ if and only if the only closed superset of $A$ is $X$, if and only if the only open set disjoint from $A$ is $\emptyset$.
12. Prove that a closed set $F$ is nowhere dense if and only if it contains no nonempty open set.
13. Prove that if $E$ is of the first category and $A \subseteq E$, then $A$ is also of the first category.
14. Show that a closed set is nowhere dense if and only if its complement is everywhere dense.
15. Show that the notion of being nowhere dense is not the opposite of being everywhere dense. [Hint: Let $\mathbb{R}$ be a metric space with the usual metric and consider the subset consisting of the open interval $]1, 2[$. The interior of the closure of this set is nonempty, whereas the closure of $]1, 2[$ is certainly not all of $\mathbb{R}$.]
1.4.25
Contraction mapping principle
1.4.26
Theorem
In a complete metric space $X$, let $A$ be a mapping that maps the elements of the space $X$ again into elements of this space. Further, for all $x$ and $y$ in $X$, let $\rho(A(x), A(y)) \leq \alpha \rho(x, y)$ with $0 \leq \alpha < 1$ independent of $x$ and $y$. Then there exists a unique point $x^*$ such that $A(x^*) = x^*$. The point $x^*$ is called a fixed point of $A$.
Proof: Starting from an arbitrary element $x_0 \in X$, we build up the sequence $\{x_n\}$ such that $x_1 = A(x_0), x_2 = A(x_1), \ldots, x_n = A(x_{n-1}), \ldots$. It is to be shown that $\{x_n\}$ is a Cauchy or fundamental sequence. For this we note that
$$\rho(x_1, x_2) = \rho(A(x_0), A(x_1)) \leq \alpha \rho(x_0, x_1) = \alpha \rho(x_0, A(x_0)),$$
$$\rho(x_2, x_3) = \rho(A(x_1), A(x_2)) \leq \alpha \rho(x_1, x_2) \leq \alpha^2 \rho(x_0, A(x_0)),$$
and in general $\rho(x_n, x_{n+1}) \leq \alpha^n \rho(x_0, A(x_0))$. Further,
$$\rho(x_n, x_{n+p}) \leq \rho(x_n, x_{n+1}) + \rho(x_{n+1}, x_{n+2}) + \cdots + \rho(x_{n+p-1}, x_{n+p}) \leq (\alpha^n + \alpha^{n+1} + \cdots + \alpha^{n+p-1}) \rho(x_0, A(x_0)) \leq \frac{\alpha^n}{1 - \alpha} \rho(x_0, A(x_0)).$$
Since, by hypothesis, $\alpha < 1$, we have $\rho(x_n, x_{n+p}) \leq \dfrac{\alpha^n}{1 - \alpha} \rho(x_0, A(x_0)) \to 0$ as $n \to \infty$, for every $p > 0$. Thus $\{x_n\}$ is a Cauchy sequence. Since the space is complete, there is an element $x^* \in X$, the limit of the sequence, $x^* = \lim_{n \to \infty} x_n$.
We shall show that $A(x^*) = x^*$:
$$\rho(x^*, A(x^*)) \leq \rho(x^*, x_n) + \rho(x_n, A(x^*)) = \rho(x^*, x_n) + \rho(A(x_{n-1}), A(x^*)) \leq \rho(x^*, x_n) + \alpha \rho(x_{n-1}, x^*).$$
For $n$ sufficiently large, we can write $\rho(x^*, x_n) < \varepsilon/2$ and $\rho(x^*, x_{n-1}) < \varepsilon/2$ for any given $\varepsilon$. Hence $\rho(x^*, A(x^*)) < \varepsilon$. Since $\varepsilon > 0$ is arbitrary, $\rho(x^*, A(x^*)) = 0$, i.e., $A(x^*) = x^*$.
Let us assume that there exist two elements $x^*, y^* \in X$, $x^* \neq y^*$, satisfying $A(x^*) = x^*$ and $A(y^*) = y^*$. Then $\rho(x^*, y^*) = \rho(A(x^*), A(y^*)) \leq \alpha \rho(x^*, y^*)$. Since $\alpha < 1$, this inequality is impossible unless $\rho(x^*, y^*) = 0$, i.e., $x^* = y^*$. Making $p \to \infty$ in the inequality $\rho(x_n, x_{n+p}) \leq \dfrac{\alpha^n}{1 - \alpha} \rho(x_0, A(x_0))$, we obtain the error estimate
$$\rho(x_n, x^*) \leq \frac{\alpha^n}{1 - \alpha} \rho(x_0, A(x_0)).$$
Note 1.4.12. Given an equation $F(x) = 0$, where $F : \mathbb{R}^n \to \mathbb{R}^n$, we can write the equation $F(x) = 0$ in the form $x = x - F(x)$. Denoting $x - F(x)$ by $A(x)$, we see that the problem of finding a solution of $F(x) = 0$ is equivalent to finding a fixed point of $A(x)$, and vice versa.
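A minimal numerical sketch of the iteration in the proof (our example, not from the text): $A(x) = \cos x$ is a contraction on $[0, 1]$, since $|A'(x)| = |\sin x| \leq \sin 1 \approx 0.84 < 1$ there, so $x_n = A(x_{n-1})$ converges to the unique fixed point $x^*$ with $\cos x^* = x^*$. Equivalently, this solves $F(x) = x - \cos x = 0$ in the form $x = x - F(x)$ of Note 1.4.12.

```python
import math

def fixed_point(A, x0, tol=1e-12, max_iter=1000):
    """Iterate x_{n+1} = A(x_n) until successive terms differ by < tol."""
    x = x0
    for _ in range(max_iter):
        x_next = A(x)
        if abs(x_next - x) < tol:
            return x_next
        x = x_next
    raise RuntimeError("no convergence within max_iter")

x_star = fixed_point(math.cos, 0.5)
assert abs(math.cos(x_star) - x_star) < 1e-10   # A(x*) = x*
```

The error estimate $\rho(x_n, x^*) \leq \frac{\alpha^n}{1-\alpha}\rho(x_0, A(x_0))$ from the proof tells in advance how many iterations suffice for a prescribed tolerance.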
1.4.27
Applications
(i) Solution of a system of linear equations by the iterative method
Let us consider the real $n$-dimensional space. If $x = (\xi_1, \xi_2, \ldots, \xi_n)$ and $y = (\eta_1, \eta_2, \ldots, \eta_n)$, let us define the metric as $\rho(x, y) = \max_i |\xi_i - \eta_i|$. Let us consider $y = Ax$, where $A$ is an $n \times n$ matrix, i.e., $A = (a_{ij})$. The system of linear equations is given by $\eta_i = \sum_{j=1}^{n} a_{ij} \xi_j$, $i = 1, 2, \ldots, n$. Then $\rho(y_1, y_2) = \rho(Ax_1, Ax_2)$ yields
$$\max_i |\eta_i^{(1)} - \eta_i^{(2)}| = \max_i \left| \sum_{j=1}^{n} a_{ij} (\xi_j^{(1)} - \xi_j^{(2)}) \right| \leq \left( \max_i \sum_{j=1}^{n} |a_{ij}| \right) \rho(x_1, x_2).$$
Now if it is assumed that $\sum_{j=1}^{n} |a_{ij}| < 1$ for all $i$, then the mapping $x \mapsto Ax$ is a contraction, and the contraction mapping principle becomes applicable. Consequently, for any fixed vector $b$, the mapping $x \mapsto Ax + b$ (which satisfies the same contraction estimate) has a unique fixed point, and the system $x = Ax + b$ has a unique solution obtainable by iteration.
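A sketch of the iterative method under the row-sum condition $\sum_j |a_{ij}| < 1$: the system $x = Ax + b$ is solved by repeated substitution $x_{k+1} = Ax_k + b$, which is a contraction in the max metric. The matrix and right-hand side below are illustrative values chosen by us.

```python
# Solve x = Ax + b by iteration; each row of A has |a_i1| + |a_i2| < 1,
# so x -> Ax + b is a contraction in the metric rho(x, y) = max_i |x_i - y_i|.
A = [[0.3, 0.2],
     [0.1, 0.4]]
b = [1.0, 2.0]

def step(x):
    """One application of the affine map x -> Ax + b."""
    return [sum(A[i][j] * x[j] for j in range(2)) + b[i] for i in range(2)]

x = [0.0, 0.0]
for _ in range(200):
    x = step(x)

# the limit satisfies the fixed-point equation x = Ax + b
residual = max(abs(step(x)[i] - x[i]) for i in range(2))
assert residual < 1e-12
```

Here the contraction constant is $\alpha = \max_i \sum_j |a_{ij}| = 0.5$, so the error shrinks at least by half at every step.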
(ii) Existence and uniqueness of the solution of an integral
equation
1.4.28
Theorem
Let $k(t, s)$ be a real-valued function defined in the square $a \leq t, s \leq b$ such that $\int_a^b \int_a^b k^2(t, s) \, dt \, ds < \infty$. Let $f(t) \in L_2([a, b])$, i.e., $\int_a^b f(t)^2 \, dt < \infty$. Then the integral equation
$$x(t) = f(t) + \lambda \int_a^b k(t, s) x(s) \, ds$$
has a unique solution $x(t) \in L_2([a, b])$ for every sufficiently small value of the parameter $\lambda$.
Proof: Consider the operator $Ax(t) = f(t) + \lambda \int_a^b k(t, s) x(s) \, ds$. Let $x(t) \in L_2([a, b])$, i.e., $\int_a^b x^2(t) \, dt < \infty$. We first show that for $x(t) \in L_2([a, b])$, $Ax \in L_2([a, b])$:
$$\int_a^b (Ax)^2 \, dt = \int_a^b f(t)^2 \, dt + 2\lambda \int_a^b f(t) \left( \int_a^b k(t, s) x(s) \, ds \right) dt + \lambda^2 \int_a^b \left( \int_a^b k(t, s) x(s) \, ds \right)^2 dt.$$
Using Fubini's theorem (th. 10.5), the square integrability of $k(t, s)$, and the Cauchy–Schwarz inequality, we can show that
$$\int_a^b f(t) \int_a^b k(t, s) x(s) \, ds \, dt = \int_a^b \int_a^b k(t, s) x(s) f(t) \, dt \, ds \leq \left( \int_a^b \int_a^b k^2(t, s) \, dt \, ds \right)^{1/2} \left( \int_a^b f^2(t) \, dt \right)^{1/2} \left( \int_a^b x^2(s) \, ds \right)^{1/2} < +\infty.$$
Similarly we have $\int_a^b \left( \int_a^b k(t, s) x(s) \, ds \right)^2 dt < \infty$. Thus $A(x) \in L_2([a, b])$, and therefore $A : L_2([a, b]) \to L_2([a, b])$. Using the metric in $L_2([a, b])$, for given $x(t), y(t) \in L_2([a, b])$,
$$\rho(Ax, Ay) = \left( \int_a^b |Ax - Ay|^2 \, dt \right)^{1/2} = |\lambda| \left( \int_a^b \left( \int_a^b k(t, s) (x(s) - y(s)) \, ds \right)^2 dt \right)^{1/2}$$
$$\leq |\lambda| \left( \int_a^b \int_a^b k(t, s)^2 \, dt \, ds \right)^{1/2} \left( \int_a^b |x(s) - y(s)|^2 \, ds \right)^{1/2} = \alpha \rho(x, y),$$
where
$$\alpha = |\lambda| \left( \int_a^b \int_a^b k(t, s)^2 \, dt \, ds \right)^{1/2} \quad \text{and} \quad \alpha < 1 \text{ if } |\lambda| < \left( \int_a^b \int_a^b k^2(t, s) \, dt \, ds \right)^{-1/2}.$$
Thus the contraction mapping principle holds, proving the existence and uniqueness of the solution of the given integral equation for values of $\lambda$ satisfying the above inequality.
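A discretized sketch of the successive approximations behind the proof. The kernel $k(t, s) = ts$ on $[0, 1]$, the choice $f(t) = t$, and $\lambda = 1$ are all our own illustrative assumptions; for this separable kernel the exact solution works out to $x(t) = 3t/(3 - \lambda)$, and the contraction condition $|\lambda| (\iint k^2)^{1/2} = |\lambda|/3 < 1$ is satisfied.

```python
# Successive approximations for x(t) = f(t) + lam * ∫_0^1 k(t,s) x(s) ds
# with k(t,s) = t*s, f(t) = t, lam = 1.0; exact solution x(t) = 3t/(3-lam).
N = 2000                                  # midpoint grid on [0, 1]
h = 1.0 / N
ts = [(i + 0.5) * h for i in range(N)]
lam = 1.0

x = [t for t in ts]                       # start from x_0 = f
for _ in range(50):
    c = sum(s * xs for s, xs in zip(ts, x)) * h    # ∫ s x(s) ds
    x = [t + lam * t * c for t in ts]              # A x = f + lam*t*c

# compare with the exact solution 3t/(3 - lam) = 1.5 t
err = max(abs(xs - 1.5 * t) for t, xs in zip(ts, x))
assert err < 1e-3
```

Each pass applies the operator $A$ once; by the contraction estimate the iterates approach the solution geometrically with ratio $\alpha = \lambda/3$.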
(iii) Existence and uniqueness of solutions of ordinary differential equations
Definition: Lipschitz condition
Let $E$ be a connected open set in the plane $\mathbb{R}^2$ of the form $E = \,]s_0 - a, s_0 + a[ \times ]t_0 - b, t_0 + b[$, where $a > 0$, $b > 0$, $(s_0, t_0) \in E$. Let $f$ be a real function defined on $E$. We shall say that $f$ satisfies a Lipschitz condition in $t$ on $E$, with Lipschitz constant $M$, if for every $(s, t_1)$ and $(s, t_2)$ in $E$, with $s \in \,]s_0 - a, s_0 + a[$, we have $|f(s, t_1) - f(s, t_2)| \leq M |t_1 - t_2|$.
Let $(s_0, t_0) \in E$. By a local solution passing through $(s_0, t_0)$ we mean a function $\varphi$ defined on $]s_0 - a, s_0 + a[$ such that $\varphi(s_0) = t_0$, $(s, \varphi(s)) \in E$ for every $s \in \,]s_0 - a, s_0 + a[$, and $\varphi'(s) = f(s, \varphi(s))$ for every $s \in \,]s_0 - a, s_0 + a[$.
1.4.29
Theorem
If $f$ is continuous on the open connected set $E = \,]s_0 - a, s_0 + a[ \times ]t_0 - b, t_0 + b[$ and satisfies a Lipschitz condition in $t$ on $E$, then for every $(s_0, t_0) \in E$ the differential equation $\dfrac{dt}{ds} = f(s, t)$ has a unique local solution passing through $(s_0, t_0)$.
Proof: We rst show that the function dened on the interval ]s0 a, s0 +
a[ such that (s0 ) = t0 and (s) = f (s, (s)) for every s in the said interval
s
is of the form (s) = t0 +
s0
f (t , (t ))dt . It may be observed from the
above form that (s0 ) = t0 , (s), is dierentiable and (s) = f (s, (s)).
Let E1 E =]s0 a, s0 + a[]t0 a, t0 + b[ a > 0, b > 0 be an
open connected set containing (s0 , t0 ). Let f be bounded on E1 and let
f (s, t) A for all (s, t) E1 . Let d > 0 be such that (a) the rectangle
40
A First Course in Functional Analysis
R E1 where R =]s0 d, s0 + d[]t0 dA, t0 + dA[ and (b) M d < 1, where
M is a Lipschitz constant for f in E.
Let J =]s0 d, s0 + d[. The set B of continuous functions on J such
that (s0 ) = t0 and (s) t0  dA for every s J is a complete metric
space under the uniform metric .
Consider the mapping T defined by
(Tφ)(s) = t₀ + ∫_{s₀}^{s} f(t, φ(t)) dt for φ ∈ B and s ∈ J.
Now (Tφ)(s₀) = t₀, Tφ is continuous, and for every s ∈ J,
|Tφ(s) − t₀| = |∫_{s₀}^{s} f(t, φ(t)) dt| ≤ |∫_{s₀}^{s} |f(t, φ(t))| dt| ≤ dA.
Hence Tφ ∈ B. Thus T maps B into B.
We now show that T is a contraction. Let φ₁, φ₂ ∈ B. Then for every
s ∈ J,
|Tφ₁(s) − Tφ₂(s)| = |∫_{s₀}^{s} (f(t′, φ₁(t′)) − f(t′, φ₂(t′))) dt′|
≤ Md max{|φ₁(t′) − φ₂(t′)| : t′ ∈ J},
so that ρ(Tφ₁, Tφ₂) ≤ Md ρ(φ₁, φ₂). Hence, since Md < 1, T is a contraction.
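The proof is constructive: starting from φ₀ ≡ t₀ and iterating T (Picard iteration) converges to the fixed point. A minimal numerical sketch, assuming the illustrative choice f(s, t) = t with (s₀, t₀) = (0, 1) on J = [0, 0.5] (so the exact solution is eˢ), with a trapezoidal rule standing in for the exact integral — none of these choices come from the text:

```python
# Picard iteration for (T phi)(s) = t0 + integral_{s0}^{s} f(t, phi(t)) dt.
# Illustrative data (not from the text): f(s, t) = t, (s0, t0) = (0, 1),
# J = [0, 0.5]; the exact local solution is phi(s) = e^s.
import math

def picard_iterate(f, s0, t0, d, n_grid=1000, n_iter=30):
    h = d / n_grid
    s = [s0 + i * h for i in range(n_grid + 1)]
    phi = [t0] * (n_grid + 1)            # phi_0: the constant function t0
    for _ in range(n_iter):              # phi_{k+1} = T(phi_k)
        new, acc = [t0], 0.0
        for i in range(1, n_grid + 1):   # cumulative trapezoidal integral
            acc += 0.5 * h * (f(s[i - 1], phi[i - 1]) + f(s[i], phi[i]))
            new.append(t0 + acc)
        phi = new
    return s, phi

s, phi = picard_iterate(lambda s_, t_: t_, 0.0, 1.0, 0.5)
sup_err = max(abs(p - math.exp(v)) for v, p in zip(s, phi))
```

Since Md = 0.5 here, each sweep at least halves the sup-distance to the fixed point, so after 30 sweeps `sup_err` is dominated by the quadrature error alone.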
We next show that the local solution can be extended across E₁. Let J =
J₁, d = d₁, s₀ + d = s₁ and φ(s₁) = t₁. By theorem 1.4.29 applied to (s₁, t₁)
we obtain J₂, d₂ and (s₂, t₂). The solution functions φ = φ₁ on J₁ and φ₂
on J₂ agree on an interval and so yield a solution on J₁ ∪ J₂. In this way, we
obtain a sequence {(sₙ, tₙ)} with sₙ₊₁ > sₙ, n = 1, 2, .... We assume that
E₁ is bounded and show that the distance of (sₙ, tₙ) from the boundary
of E₁ converges to zero. If (sₙ, tₙ) ∈ E₁, we denote by εₙ the distance of
(sₙ, tₙ) from the boundary. We take
dₙ = min{ εₙ/√(A² + 1), 1/(2M) },
so that Mdₙ < 1 and dₙ ≤ εₙ/√(A² + 1). Thus sₙ₊₁ = sₙ + dₙ and φ(sₙ₊₁) = tₙ₊₁.
Hence (sₙ₊₁, tₙ₊₁) ∈ E₁. Since dₙ > 0 for all n and the increasing sums
s₁ + d₁ + ··· + dₙ remain in the bounded set E₁, we have ∑_{n=1}^{∞} dₙ < ∞.
If εₙ/√(A² + 1) is smaller than 1/(2M), then dₙ = εₙ/√(A² + 1); since
∑_{n=1}^{∞} dₙ < ∞, it follows that
∑_{n=1}^{∞} εₙ = √(A² + 1) ∑_{n=1}^{∞} dₙ < ∞.
On the other hand, if 1/(2M) is smaller than εₙ/√(A² + 1), then dₙ = 1/(2M);
since ∑_{n=1}^{∞} dₙ < ∞, this can happen for at most finitely many n, for
otherwise M would have to be of the order Kn with K finite, and the Lipschitz
constant M cannot be arbitrarily large. Hence eventually εₙ/√(A² + 1) < 1/(2M)
and ∑_{n=1}^{∞} εₙ < ∞. Therefore εₙ → 0 as n → ∞. Keeping in mind that E is the union of an
Preliminaries
41
increasing sequence of sets, each having the above properties of E1 , we have
the following theorem.
1.4.30
Theorem
If f is continuous on an open connected set E and satisfies the Lipschitz
condition in t on E, then for every (s₀, t₀) ∈ E the differential equation
dt/ds = f(s, t) has a unique solution t = φ(s) such that t₀ = φ(s₀) and such
that the curve given by the solution passes through E from boundary to
boundary.
1.4.31
Quasimetric space
If we relax the condition ρ(x, y) = 0 ⇔ x = y, we get what is known as
a quasimetric space. Formally, a quasimetric space is a pair (X, q) where
X is a set and q (the quasidistance) is a real function defined on X × X such
that for all x, y, z ∈ X we have q(x, x) = 0 and q(x, z) ≤ q(x, y) + q(z, y).
We next aim to show that the quasidistance is symmetric and nonnegative.
If we take x = y, then q(x, z) ≤ q(x, x) + q(z, x). Since q(x, x) = 0,
q(x, z) ≤ q(z, x). Similarly, we can show q(z, x) ≤ q(x, z). This is only
possible if q(x, z) = q(z, x), which proves symmetry.
Taking x = z in the inequality, we have 0 ≤ 2q(z, y), or q(z, y) ≥ 0,
which shows nonnegativity.
Combining q(x, y) ≥ q(x, z) − q(y, z) and q(x, y) ≥ q(y, z) − q(x, z), we
can write |q(x, z) − q(y, z)| ≤ q(x, y).
1.4.32
Example
Let ℝ² be the two-dimensional plane and let x = (ξ₁, η₁), y = (ξ₂, η₂), where
x, y ∈ ℝ². The quasidistance between x and y is given by q(x, y) = |ξ₁ − ξ₂|.
We will show that ℝ² with the above quasidistance between two points is
a quasimetric space.
Firstly, q(x₁, x₁) = 0. If x₁ = (ξ₁, η₁), x₂ = (ξ₂, η₂) and x₃ = (ξ₃, η₃), then
q(x₁, x₂) = |ξ₁ − ξ₂| ≤ |ξ₁ − ξ₃| + |ξ₃ − ξ₂| = q(x₁, x₃) + q(x₂, x₃).
Hence ℝ² with the above quasidistance is a quasimetric space.
Note 1.4.13. q(x₁, x₂) = 0 does not imply x₁ = x₂. Thus a quasimetric space is not
necessarily a metric space.
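The axioms and the derived symmetry can be spot-checked numerically for this example; the random sample below is purely illustrative and not part of the text:

```python
# Spot-check of q(x, x) = 0, q(x, z) <= q(x, y) + q(z, y) and symmetry for the
# quasidistance q(x, y) = |xi_1 - xi_2| on R^2 (first coordinates only).
import itertools, random

def q(x, y):
    return abs(x[0] - y[0])

random.seed(0)
pts = [(random.uniform(-5, 5), random.uniform(-5, 5)) for _ in range(40)]
axiom1 = all(q(x, x) == 0 for x in pts)
axiom2 = all(q(x, z) <= q(x, y) + q(z, y) + 1e-12
             for x, y, z in itertools.product(pts, repeat=3))
symmetric = all(q(x, y) == q(y, x) for x, y in itertools.product(pts, repeat=2))
# q is not a metric: distinct points sharing the first coordinate are at distance 0
degenerate = q((1.0, 0.0), (1.0, 3.0))
```

The last line exhibits exactly the failure noted in Note 1.4.13: q vanishes on a pair of distinct points.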
Problems
1. Show that theorem 1.4.26 fails to hold if T has only the property
ρ(Tx, Ty) < ρ(x, y) for x ≠ y.
2. If T : X → X satisfies ρ(Tx, Ty) < ρ(x, y) when x ≠ y and T has
a fixed point, show that the fixed point is unique; here (X, ρ) is a
metric space.
3. Prove that if T is a contraction in a complete metric space and x ∈ X,
then
T(lim_{n→∞} Tⁿx) = lim_{n→∞} Tⁿ⁺¹x.
4. If T is a contraction, show that Tⁿ (n ∈ ℕ) is a contraction. If Tⁿ is
a contraction for some n > 1, show that T need not be a contraction.
5. Show that f defined by f(t, x) = |sin x| + t satisfies a Lipschitz
condition on the whole tx-plane with respect to its second argument,
but that ∂f/∂x does not exist when x = 0. What fact does this illustrate?
6. Does f defined by f(t, x) = |x|^{1/2} satisfy a Lipschitz condition?
7. Show that the differential equation d²u/dx² = f(x), where u ∈
C²(0, 1), f(x) ∈ C(0, 1) and u(0) = u(1) = 0, is equivalent to the
integral equation
u(x) = ∫₀¹ G(x, t)f(t) dt,
where G(x, t) is defined as
G(x, t) = x(1 − t) for x ≤ t and G(x, t) = t(1 − x) for t ≤ x.
8. For the vector iteration
(xₙ₊₁, yₙ₊₁) = (2xₙ, ½xₙ),
show that x = y = 0 is a fixed point.
9. Let X = {x ∈ ℝ : x ≥ 1} and let the mapping T : X → X be
defined by Tx = x/2 + x⁻¹. Show that T is a contraction.
10. Let the mapping T : [a, b] → [a, b] satisfy the condition |Tx − Ty| ≤
k|x − y| for all x, y ∈ [a, b]. (a) Is T a contraction? (b) If
T is continuously differentiable, show that T satisfies a Lipschitz
condition. (c) Does the converse of (b) hold?
11. Apply the Banach fixed point theorem to prove that the following system
of equations has a unique solution:
2ξ₁ + ξ₂ + ξ₃ = 4
ξ₁ + 2ξ₂ + ξ₃ = 4
ξ₁ + ξ₂ + 2ξ₃ = 4
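This system can be attacked numerically with a contraction. One convenient choice (a sketch, not the only possibility) is the damped map T(x) = x − ω(Ax − b), which is a contraction in the Euclidean norm whenever 0 < ω < 2/λ_max(A); here λ_max = 4, ω = 0.3 is an illustrative choice, and the unique solution is ξ₁ = ξ₂ = ξ₃ = 1:

```python
# Banach-fixed-point sketch for 2x1+x2+x3=4, x1+2x2+x3=4, x1+x2+2x3=4.
# T(x) = x - w*(A x - b) contracts in the Euclidean norm for 0 < w < 2/4;
# with w = 0.3 the contraction factor is max|1 - w*lambda| = 0.7.
A = [[2.0, 1.0, 1.0], [1.0, 2.0, 1.0], [1.0, 1.0, 2.0]]
b = [4.0, 4.0, 4.0]
w = 0.3

def T(x):
    Ax = [sum(A[i][j] * x[j] for j in range(3)) for i in range(3)]
    return [x[i] - w * (Ax[i] - b[i]) for i in range(3)]

x = [0.0, 0.0, 0.0]
for _ in range(200):     # iterate to the unique fixed point (1, 1, 1)
    x = T(x)
```

Each application shrinks the error by a factor 0.7, so 200 iterations reduce it far below machine precision.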
12. Show that x′ = 3x^{2/3}, x(0) = 0 has infinitely many solutions x, given
by x(t) = 0 if t < c and x(t) = (t − c)³ if t ≥ c, where c > 0 is
any constant. Does 3x^{2/3} on the right-hand side satisfy a Lipschitz
condition?
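The claimed solutions can be verified numerically by comparing a difference quotient with the right-hand side; c = 1 and the sample points below are illustrative choices:

```python
# Check numerically that x(t) = 0 (t < c) and x(t) = (t - c)^3 (t >= c)
# satisfies x' = 3 x^(2/3); c = 1 and the sample points are illustrative.
c = 1.0

def x(t):
    return 0.0 if t < c else (t - c) ** 3

def rhs(v):
    return 3.0 * v ** (2.0 / 3.0)   # 3 x^(2/3); x >= 0 along these solutions

h = 1e-6
max_resid = max(abs((x(t + h) - x(t - h)) / (2 * h) - rhs(x(t)))
                for t in (0.0, 0.5, 0.9, 1.5, 2.0, 3.0))
```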
13. Pseudometric: A finite pseudometric on a set X is a function
ρ : X × X → ℝ satisfying, for all x, y, z ∈ X, conditions (1), (3) and
(4) of Section 1.4.1 and (2′) (i.e. ρ(x, x) = 0, for all x ∈ X).
What is the difference between a metric space and a pseudometric
space? Show that ρ(x, y) = |ξ₁ − η₁| defines a pseudometric on the set
of all ordered pairs of real numbers, where x = (ξ₁, ξ₂), y = (η₁, η₂).
14. Show that the (real or complex) vector space ℝⁿ of vectors x =
(x₁, ..., xₙ), y = (y₁, ..., yₙ) becomes a pseudometric space by
introducing the distance as a vector: ρ(x, y) = (p₁|x₁ − y₁|, p₂|x₂ −
y₂|, ..., pₙ|xₙ − yₙ|). The pⱼ (j = 1, 2, ..., n) are fixed positive
constants. The order is introduced as follows: for x, y ∈ ℝⁿ, x ≤ y ⇔
xᵢ ≤ yᵢ, i = 1, 2, ..., n.
15. Show that the space of real or complex valued functions
f(x₁, x₂, ..., xₙ) that are continuous on the closure B̄ of a region B
is pseudometric when the distance is the function ρ(f(x), g(x)) =
p(x)|f(x) − g(x)|, where p(x) is a given positive function on B̄.
1.4.33
Separable space
Definition: separable space A space X is said to be separable if it
contains a countable everywhere dense set; in other words, if there is in X
a sequence (x₁, x₂, ..., xₙ, ...) such that for each x ∈ X we can find a subsequence
{x_{n₁}, x_{n₂}, ..., x_{n_k}, ...} of the above sequence which converges to x. If X
is a metric space, then separability can be defined as follows: there exists
a sequence {x₁, x₂, ..., xₙ, ...} in X such that for every x ∈ X and every
ε > 0 we can find an element x_{n₀} of it satisfying ρ(x, x_{n₀}) < ε.
The separability of the n-dimensional Euclidean space ℝⁿ
The set ℝ₀ⁿ, which consists of all points in the space ℝⁿ with rational
coordinates, is countable and everywhere dense in ℝⁿ.
The separability of the space C([0, 1]) In the space C([0, 1]), the set C₀,
consisting of all polynomials with rational coefficients, is countable. Take
any function x(t) ∈ C([0, 1]). By the Weierstrass approximation theorem
[Theorem 1.4.34] there is a polynomial p(t) s.t.
max_t |x(t) − p(t)| < ε/2,
ε > 0 being any preassigned number. On the other hand, there exists
another polynomial p₀(t) with rational coefficients, s.t.
max_t |p(t) − p₀(t)| < ε/2.
Hence ρ(x, p₀) = max_t |x(t) − p₀(t)| < ε. Hence C([0, 1]) is separable.
1.4.34
The Weierstrass approximation theorem
If [a, b] is a closed interval on the real line, then the polynomials with
real coefficients are dense in C([a, b]).
In other words, every continuous function on [a, b] is the limit of a uniformly
convergent sequence of polynomials.
For f continuous on [0, 1], the polynomials
Bₙ(x) = ∑_{k=0}^{n} C(n, k) xᵏ (1 − x)ⁿ⁻ᵏ f(k/n)
are called Bernstein polynomials associated with f. We can prove our
theorem by finding a Bernstein polynomial with the required property.
Note 1.4.14. The Weierstrass theorem for C([0, 1]) says in effect that all
real linear combinations of the functions 1, x, x², ..., xⁿ, ... are dense in C([0, 1]).
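The Bernstein construction is directly computable. A sketch, with the hypothetical test function f(x) = |x − 1/2| (any continuous function on [0, 1] would serve):

```python
# Bernstein polynomial B_n(x) = sum_{k=0}^n C(n, k) x^k (1 - x)^(n-k) f(k/n),
# evaluated for the hypothetical test function f(x) = |x - 1/2| on [0, 1].
from math import comb

def bernstein(f, n, x):
    return sum(comb(n, k) * x ** k * (1 - x) ** (n - k) * f(k / n)
               for k in range(n + 1))

f = lambda u: abs(u - 0.5)
grid = [i / 100 for i in range(101)]
err_10 = max(abs(bernstein(f, 10, u) - f(u)) for u in grid)
err_200 = max(abs(bernstein(f, 200, u) - f(u)) for u in grid)
```

The uniform error shrinks as n grows, in line with the theorem; for this particular f it decays only like 1/√n, so convergence is slow but sure.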
The Separability of the Space lᵖ (1 < p < ∞)
Let E₀ be the set of all elements x of the form (r₁, r₂, ..., rₙ, 0, 0, ...), where
the rᵢ are rational numbers and n is an arbitrary natural number. E₀ is countable.
We would like to show that E₀ is everywhere dense in lᵖ. Let us take an
element x = {ξ_k} ∈ lᵖ and let an arbitrary ε > 0 be given. We find a
natural number n such that
∑_{k=n+1}^{∞} |ξ_k|ᵖ < εᵖ/2.
Next, take an element x₀ = (r₁, r₂, ..., rₙ, 0, 0, ...) such that
∑_{k=1}^{n} |ξ_k − r_k|ᵖ < εᵖ/2. Then
[ρ(x, x₀)]ᵖ = ∑_{k=1}^{n} |ξ_k − r_k|ᵖ + ∑_{k=n+1}^{∞} |ξ_k|ᵖ < εᵖ/2 + εᵖ/2 = εᵖ,
whence ρ(x, x₀) < ε.
The space s is separable.
The space m of bounded numerical sequences is inseparable.
Problems
1. Which of the spaces ℝⁿ, ℂⁿ, l^∞ are separable?
2. Using the separability property of C([a, b]), show that Lp ([a, b]), a < b
(the space of pth power integrable functions) is separable.
1.5
Topological Spaces
The predominant feature of a metric space is its metric or distance. We
have dened open sets and closed sets in a metric space in terms of a metric
or distance. We have proved certain results for open sets (see results (i)
and (ii) stated after theorem 1.4.11.) in a metric space. The assertions of
the above results are taken as axioms in a topological space and are used to
dene an open set. No metric is used in a topological space. Thus a metric
space is a topological space but a topological space is not always a metric
space.
Open sets play a crucial role in a topological space.
Many important concepts such as limit points, continuity and
compactness (to be discussed in later sections) can be characterised in terms
of open sets. It will be shown in this chapter that under a continuous mapping
the inverse image of an open set is again an open set.
We think of deformation as stretching and bending without tearing.
This last condition implies that points that are neighbours in one
conguration are neighbours in another conguration, a fact that we
should recognize as a description of continuity of mapping. The notion
of stretching and bending can be mathematically expressed in terms of
functions. The notion of without tearing can be expressed in terms of
continuity. Let us transform a figure A into a figure A′ subject to the
following conditions:
(1) To each distinct point p of A corresponds one point p′ of A′, and vice
versa.
(2) If we take any two points p, q of A and move p so that the
distance between it and q approaches zero, then the distance between the
corresponding points p′ and q′ of A′ will also approach zero, and vice versa.
If we take a circle made out of a rubber sheet and deform it subject to the
above two conditions, then we get an ellipse, a triangle or a square but not
a gure eight, a horseshoe or a single point.
These types of transformations are called topological transformations
and are dierent from the transformations of elementary geometry or of
projective geometry. A topological property is therefore a property that
remains invariant under such a transformation or in particular deformation.
In a more sophisticated fashion one may say that a topological property of
a topological space X is a property that is possessed by another topological
space Y homeomorphic to X (homeomorphism will be explained later in this
chapter). In this section we mention some elementary ideas of a topological
space. The notions of neighbourhood, limit point and interior of a set,
amongst others, will be discussed in this section.
1.5.1
Topological space, topology
Let X be a nonempty set. A class τ of subsets of X is called a topology
on X if it satisfies the following conditions:
(i) ∅, X ∈ τ.
(ii) The union of every class of sets in τ is a set in τ.
(iii) The intersection of finitely many members of τ is a member of τ.
Accordingly, one defines a topological space (X, τ) as a set X and a class
τ of subsets of X such that τ satisfies the axioms (i) to (iii). The members
of τ are called open sets.
1.5.2
Example
Let X = {ξ₁, ξ₂, ξ₃}. Consider τ₁ = {∅, X, {ξ₁}, {ξ₁, ξ₂}}, τ₂ =
{∅, X, {ξ₁}, {ξ₂}}, τ₃ = {∅, X, {ξ₁}, {ξ₂}, {ξ₁, ξ₂}, {ξ₂, ξ₃}} and
τ₄ = {∅, X}.
Here, τ₁, τ₃ and τ₄ are topologies, but τ₂ is not a topology due to
the fact that {ξ₁} ∪ {ξ₂} = {ξ₁, ξ₂} ∉ τ₂.
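On a finite set, axioms (i)-(iii) can be checked mechanically, since it suffices to test unions and intersections of every subfamily. A sketch (writing the points ξ₁, ξ₂, ξ₃ simply as 1, 2, 3; the checker itself is an illustration, not from the text) applied to τ₁, τ₂ and τ₄ above:

```python
# Mechanical check of the topology axioms (i)-(iii) on a finite set.
from itertools import combinations

def is_topology(X, tau):
    tau = set(tau)
    if frozenset() not in tau or frozenset(X) not in tau:
        return False                         # axiom (i)
    for r in range(2, len(tau) + 1):
        for sub in combinations(tau, r):
            if frozenset().union(*sub) not in tau:
                return False                 # axiom (ii): unions stay in tau
            inter = frozenset(X)
            for member in sub:
                inter &= member
            if inter not in tau:
                return False                 # axiom (iii): intersections too
    return True

F = lambda *xs: frozenset(xs)
X = {1, 2, 3}
tau1 = [F(), F(1, 2, 3), F(1), F(1, 2)]
tau2 = [F(), F(1, 2, 3), F(1), F(2)]
tau4 = [F(), F(1, 2, 3)]
results = (is_topology(X, tau1), is_topology(X, tau2), is_topology(X, tau4))
```

τ₂ fails precisely because {ξ₁} ∪ {ξ₂} = {ξ₁, ξ₂} is missing, as the example states.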
1.5.3
Definition: indiscrete topology, discrete topology
An indiscrete topology, denoted by J, has only two members, ∅ and
X. The topological space (X, J) is called an indiscrete topological space.
Another trivial topology for a nonempty set X is the discrete topology
denoted by D. The discrete topology for X consists of all subsets of X.
A topological space (X, D) is called a discrete topological space.
1.5.4
Example
Let X = ℝ. Consider the topology S defined as follows: ∅ ∈ S, and if G ⊂ ℝ
and G ≠ ∅, then G ∈ S if for each p ∈ G there is a set H of the form
{x ∈ ℝ : a ≤ x < b}, a < b, such that p ∈ H and H ⊂ G. The set
H = {x ∈ ℝ : a ≤ x < b} is called a right-half open interval. Thus a
nonempty set G is S-open if for each p ∈ G, there is a right-half open
interval H such that p ∈ H ⊂ G. The topology defined above is called a
lower limit topology.
1.5.5
Definition: usual topology, upper limit topology, lower
limit topology
Let X = ℝ, the set of reals. Let us consider the topology U = {∅, ℝ, all open
intervals ]a, b[ with a < b, and all unions of open intervals} on X = ℝ. This
type of topology is called the usual topology.
Let X = ℝ and let U_U = {∅, ℝ, all intervals ]a, b] with a < b, and all
unions of left-open right-closed intervals}. This type of topology is called
the upper limit topology.
Let X = ℝ and let U_L = {∅, ℝ, all intervals [a, b[ with a < b, and all
unions of left-closed right-open intervals}. Then this type of topology
is called the lower limit topology.
1.5.6
Examples
(i) (Finite complement topology)
Let us consider an infinite set X and let τ = {∅, X, and all A ⊂ X such that
Aᶜ is a finite subset of X}. Then we see that τ is a topology and we call it a
finite complement topology.
(ii) (Countable complement topology)
Let X be a nonenumerable set and τ = {∅, X, and all A ⊂ X such that Aᶜ is
countable}. Then τ is a topology and will be known as a
countable complement topology.
(iii) In the usual topology in the real line, a single point set is closed.
1.5.7
Definition: T₁ space, closure of a set
A topological space is called a T₁ space if each set consisting of a single
point is closed.
1.5.8
Theorem
Let X be a topological space. Then (i) any intersection of closed sets in
X is a closed set and (ii) a finite union of closed sets in X is closed.
Closure of a set
If A is a subset of a topological space, then the closure of A (denoted
by Ā) is the intersection of all closed supersets of A. It is easy to see that
the closure of A is a closed superset of A that is contained in every closed
superset of A, and that A is closed ⇔ A = Ā.
A subset A of a topological space X is said to be dense or everywhere dense
if Ā = X.
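In a finite space this closure can be computed literally as the intersection of all closed supersets. A sketch, using the illustrative topology {∅, X, {a}, {b, c}} on X = {a, b, c}:

```python
# Closure of A = intersection of all closed supersets of A, in a finite space.
def closure(X, tau, A):
    X, A = frozenset(X), frozenset(A)
    closed = [X - G for G in tau]        # closed sets = complements of open sets
    result = X                           # X itself is always a closed superset
    for C in closed:
        if A <= C:
            result &= C
    return result

X = {'a', 'b', 'c'}
tau = [frozenset(), frozenset(X), frozenset({'a'}), frozenset({'b', 'c'})]
cl_a = closure(X, tau, {'a'})   # {'a'} is already closed (complement of {b, c})
cl_b = closure(X, tau, {'b'})   # smallest closed superset of {'b'} is {'b', 'c'}
```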
1.5.9
Definition: neighbourhood
Let (X, τ) be a topological space and x ∈ X be an arbitrary point. A
subset Nₓ ⊂ X containing the point x is said to be a neighbourhood
of x if there exists a τ-open set Gₓ such that x ∈ Gₓ ⊂ Nₓ.
Clearly, every open set Gₓ containing x is a neighbourhood of x.
1.5.10
Examples
(i) ]x − ε₁, x + ε₂[ is an open neighbourhood of x in (ℝ, U).
(ii) ]x − ε, x] is a neighbourhood of x in (ℝ, U_U), the upper limit topology
defined above.
(iii) [x, x + ε[ is a neighbourhood of x in (ℝ, U_L), the lower limit topology.
(iv) Let (X, τ) be a topological space and N₁, N₂ be any two
neighbourhoods of the element x ∈ X. Then N₁ ∩ N₂ is also a
neighbourhood of x.
Note 1.5.1. A neighbourhood of a point need not be an open set, but
an open set is a neighbourhood of each of its points.
1.5.11
Theorem
A necessary and sufficient condition that a set G in a topological space
(X, τ) be open is that G contains a neighbourhood of each x ∈ G.
Problems
1. Show that the indiscrete topology J satisfies all the conditions of 1.5.1.
2. Show that the discrete topology D satisfies all the conditions of 1.5.1.
3. If D represents the discrete topology for X, show that every subset
of X is both open and closed.
4. If {Iᵢ, i = 1, 2, ..., n} is a finite collection of open intervals such that
∩{Iᵢ, i = 1, 2, ..., n} ≠ ∅, show that ∩{Iᵢ, i = 1, 2, ..., n} is an open
interval.
5. Show that any finite set of real numbers is closed in the usual topology
for ℝ.
6. Which of the following subsets of ℝ are U-neighbourhoods of 2?
(i) ]1, 3[ (ii) [1, 3] (iii) ]1, 3] (iv) [1, 3[ (v) [2, 3[.
1.5.12
Bases for a topology
In the preceding subsections the topologies U and S for ℝ, among others,
were introduced. Neighbourhoods of each point were specified, and then a set was
declared to be a member of the topology if and only if the set contains a
neighbourhood of each of its points. This is an extremely useful way of
defining a topology. It should be clear, however, that neighbourhoods must
have certain properties. In what follows we present a characterization of
the neighbourhoods in a topological space.
1.5.13
Theorem
Let (X, τ) be a topological space, and for each p ∈ X let u_p be the family
of neighbourhoods of p. Then:
(i) If U ∈ u_p, then p ∈ U.
(ii) If U ∈ u_p and V ∈ u_p then, by the definition of neighbourhood,
there are open sets G₁ and G₂ such that p ∈ G₁ ⊂ U and
p ∈ G₂ ⊂ V. Now, p ∈ G₁ ∩ G₂, where G₁ ∩ G₂ is an open set. Since
p ∈ G₁ ∩ G₂ ⊂ U ∩ V, it follows that U ∩ V is a neighbourhood of
p. Hence U ∩ V ∈ u_p.
(iii) If U ∈ u_p and U ⊂ V, then there exists an open set G
such that p ∈ G ⊂ U. Therefore, p ∈ G ⊂ V and V is a
neighbourhood of p. Hence V ∈ u_p.
(iv) If U ∈ u_p, then there is an open set V such that p ∈ V ⊂ U.
Since V is an open set, V is a neighbourhood of each of its points.
Therefore V ∈ u_q for each q ∈ V.
1.5.14
Theorem
Let X be a nonempty set and for each p ∈ X, let B_p be a nonempty
collection of subsets of X such that
(i) if B ∈ B_p, then p ∈ B;
(ii) if B ∈ B_p and C ∈ B_p, then B ∩ C ∈ B_p.
If τ consists of the empty set together with all nonempty subsets G of
X having the property that p ∈ G implies that there is a B ∈ B_p such that
B ⊂ G, then τ is a topology for X.
1.5.15
Definition: base at a point, base of a topology
Base at a point
Let (X, τ) be a topological space and for each x ∈ X, let B_x be a
nonempty collection of neighbourhoods of x. We shall say that B_x is a
base for the neighbourhood system of x if for each neighbourhood N_x
of x there is a B ∈ B_x such that x ∈ B ⊂ N_x.
If B_x is a base for the neighbourhood system of x, then the members
of B_x will be called basic neighbourhoods of x.
Base for a topology
Let (X, τ) be a topological space and B be a nonempty collection of
subsets of X such that
(i) B ⊂ τ;
(ii) for every x ∈ X and every neighbourhood N_x of x, there exists a B_x ∈ B
such that x ∈ B_x ⊂ N_x.
Then B is called a base of the topology τ, and the sets belonging to B
are called basic open sets.
1.5.16
Examples
(i) Consider the usual topology U for the set of real numbers ℝ. The set
of all open intervals of lengths 2/n (n = 1, 2, ...) is a base for (ℝ, U). The
open intervals {x : |x − x₀| < 1/n} (n = 1, 2, ...) form a base at x₀ for (ℝ, U).
(ii) In the case of a point in a metric space, an open ball centred on the
point is a neighbourhood of the point, and the class of all such open balls is
a base at the point. In the theorem below we give a characterization
of openness of a set in terms of members of a base B for a topological
space (X, τ).
1.5.17
Theorem
Let (X, τ) be a topological space and B a base of the topology. Then a
necessary and sufficient condition that a set G ⊂ X be open is that G can be
expressed as a union of members of B.
1.5.18
Definition: first countable, second countable
A topological space (X, τ) that has a countable local base at each x ∈ X
is called first countable. A topological space (X, τ) is said to be second
countable if there exists a countable base for the topology.
1.5.19
Lindelöf's theorem
Let X be a second countable space. If a nonempty open set G in X
is represented as the union of a class {Gi } of open sets, then G can be
represented as a countable union of Gi s.
1.5.20
Example
(ℝ, U) is second countable. (ℝ, U_L) is first countable. It is to be noted
that a second countable space is also first countable.
Problems
1. For each p ∈ ℝ, find a collection B_p such that B_p is a base for the
D-neighbourhood system of p.
2. Let X = {a, b, c} and let τ = {∅, X, {a}, {b, c}}. Show that τ is a
topology for X.
3. For each p ∈ X find a collection B_p of basic neighbourhoods of p.
4. Prove that open rectangles in the Euclidean plane form an open base.
1.5.21
Limit points, closure and interior
We have characterized open sets of a topological space (X, ) in terms
of neighbourhoods. We now introduce and examine another concept that
is conveniently described in terms of neighbourhoods.
Definition: limit point, contact point, isolated point, derived set, closure
Let (X, τ) be a topological space and let A be a subset of X. The
point x ∈ X is said to be a limit point of A if every neighbourhood
of x contains at least one point of A other than x. That is, x is a limit
point of A if and only if every neighbourhood N_x of x satisfies the condition
N_x ∩ (A − {x}) ≠ ∅.
If every neighbourhood N_x of x satisfies N_x ∩ A ≠ ∅, then x is called a contact
point of A. D(A) = {x : x is a limit point of A} is called the derived set of
A.
A ∪ D(A) is called the closure of A, denoted by Ā.
Problems
Let X = ℝ and A = ]0, 1[. Then find D(A) for the following cases:
(i) τ = U, the usual topology on ℝ;
(ii) τ = U_L, the lower limit topology on ℝ;
(iii) τ = U_U, the upper limit topology on ℝ.
1.5.22
Theorem
Let (X, τ) be a topological space. Let A be a subset of X and D(A) the
set of all limit points of A. Then A ∪ D(A) is closed.
1.5.23
Definition: closure
Let (X, τ) be a topological space and A be a subset of X. The closure
of A, denoted by Ā, is the smallest closed subset of X that contains A.
1.5.24
Theorem
Let (X, τ) be a topological space and let A be a subset of X. Then
Ā = A ∪ D(A), where D(A) is the set of limit points of A. It follows from
the previous theorem that A ∪ D(A) is a closed set.
1.5.25
Definition: interior, exterior, boundary
Let (X, τ) be a topological space and let A be a subset of X. A point x
is an interior point of A if A is a neighbourhood of x. The interior
of A, denoted by Int A, is the set of all interior points of A.
x ∈ X is said to be an exterior point of A if x is an interior point of
Aᶜ = X − A.
A point x ∈ X is called a boundary point of A if each neighbourhood
of x contains points both of A and of Aᶜ. The boundary of A is the set
of all boundary points of A.
1.5.26
Example
Consider the space (ℝ, U). The point 1/2 is an interior point of [0, 1],
but neither 0 nor 1 is an interior point of [0, 1]. The U-interior of [0, 1] is
]0, 1[. In the U_L topology for ℝ, 0 is an interior point of [0, 1] but 1 is not.
The U_L-interior of [0, 1] is [0, 1[.
1.5.27
Definition: separable space
Let (X, τ) be a topological space. If there exists a denumerable (enumerable)
subset A of X, A ⊂ X, such that Ā = X, then X is called a separable
space. Or, in other words, a topological space is said to be separable if
it contains a denumerable everywhere dense subset.
1.5.28
Example
Let X = ℝ and A be the set of all irrational numbers. Now let D(A) be the derived
set of A. It is clear that D(A) = ℝ. Hence Ā = A ∪ D(A) = ℝ = X.
Hence A is everywhere dense in X = ℝ. Similarly, the set ℚ of rational
points, which is also a subset of ℝ, is everywhere dense in ℝ. Again, since
ℚ is countable, the topological space ℝ is separable.
1.6
Continuity, Compactness
1.6.1
Definition: continuity
Let D ⊂ ℝ (or ℂ), f : D → ℝ (or ℂ) and a ∈ D. The function f is said
to be continuous at a if lim_{x→a} f(x) = f(a). In other words, given ε > 0,
there exists a δ = δ(ε) > 0 such that |x − a| < δ ⇒ |f(x) − f(a)| < ε.
Note 1.6.1. f is said to be continuous in D if f is continuous at every
a ∈ D.
1.6.2
Definition: continuity in a metric space
Given two metric spaces (X, ρ_X) and (Y, ρ_Y), let f : D ⊂ X → Y be a
mapping. f is said to be continuous at a ∈ D if, for each ε > 0,
there exists δ > 0 such that for x ∈ D, ρ_X(a, x) < δ ⇒ ρ_Y(f(a), f(x)) < ε.
1.6.3
Definition: continuity on topological spaces
Let (X, F) and (Y, V) be topological spaces and let f be a mapping of
X into Y. The mapping f is said to be continuous (or F-V continuous)
if f⁻¹(G) is F-open whenever G is V-open. That is, the mapping f is
continuous if and only if the inverse image under f of every V-open set is
an F-open set.
1.6.4
Theorem (characterization of continuity)
Let (X, ρ_X) and (Y, ρ_Y) be metric spaces and let f : X → Y be a
mapping. Then the following statements are equivalent:
(i) f is continuous on X.
(ii) For each x ∈ X, f(xₙ) → f(x) for every sequence {xₙ} ⊂ X with
xₙ → x.
(iii) f⁻¹(G) is open in X whenever G is open in Y.
(iv) f⁻¹(F) is closed in X whenever F is closed in Y.
(v) f(Ā) ⊂ cl f(A) for every A ⊂ X (cl denoting closure).
(vi) cl f⁻¹(B) ⊂ f⁻¹(B̄) for every B ⊂ Y.
1.6.5
Denition: homeomorphism
Let (X, ρ_X) and (Y, ρ_Y) be metric spaces. A mapping f : X → Y is
said to be a homeomorphism if
(i) f is bijective,
(ii) f is continuous,
(iii) f⁻¹ is continuous.
If a homeomorphism from X to Y exists, we say that the spaces X and
Y are homeomorphic.
1.6.6
Theorem
Let I₁ and I₂ be any two open intervals. Then (I₁, U_{I₁}) and (I₂, U_{I₂}) are
homeomorphic, where U_I denotes the topology induced on I by the usual
topology U.
[Fig. 1.5: a homeomorphism mapping one open interval I₁ onto another open interval I₂.]
1.6.7
Definition: covering, subcovering, open covering
A collection C = {S_α : α ∈ Λ} of subsets of a set X is said to be a
covering of X if ∪{S_α : α ∈ Λ} = X. If C₁ is a covering of X, and C₂ is
a covering of X such that C₂ ⊂ C₁, then C₂ is called a subcovering of C₁.
Let (X, τ) be a topological space. A covering C of X is said to be an open
covering of X if every member of C is a τ-open set. A covering C is said to
be finite if C has only a finite number of members.
1.6.8
Definition: a compact topological space
A topological space (X, τ) is said to be compact if every open covering
of X has a finite subcovering.
Note 1.6.2. The outcome of the Heine-Borel theorem on the real line is
taken as the definition of compactness in a topological space. The Heine-Borel
theorem reads as follows: if X is a closed and bounded subset of the
real line ℝ, then any class of open subsets of ℝ, the union of which contains
X, has a finite subclass whose union also contains X.
1.6.9
Theorem
The space (ℝ, U) is not compact. Therefore, no open interval is compact
w.r.t. the U topology.
1.6.10
Definition: finite intersection property
Let (X, τ) be a topological space and {F_α | α ∈ Λ} be a class of subsets
such that the intersection of any finite number of elements of {F_α | α ∈ Λ}
is non-void, i.e.,
∩_{k=1}^{n} F_{α_k} ≠ ∅
irrespective of the manner in which the finite set {α_k | k = 1, 2, ..., n} is
chosen from Λ. Then F = {F_α | α ∈ Λ} is said to have the finite
intersection property (FIP).
1.6.11
Theorem
The topological space (X, τ) is compact only if every class of closed subsets
{F_α | α ∈ Λ} possessing the finite intersection property has non-void
intersection. In other words, if (X, τ) is compact, all the F_α's are closed and
∩_{k=1}^{n} F_{α_k} ≠ ∅ for every finite subcollection {α₁, α₂, ..., αₙ}, then
∩_{α∈Λ} F_α ≠ ∅. The converse result
is also true, i.e., if every class of closed subsets of (X, τ) having the FIP
has non-void intersection, then (X, τ) is compact.
1.6.12
Theorem
A continuous image of a compact space is compact.
1.6.13
Definition: compactness in metric spaces
A metric space being a topological space under the metric topology,
the definition of compactness as given in definition 1.6.8 is valid in metric
spaces. However, the concept of compactness in a metric space can also be
introduced in terms of sequences and can be related to completeness.
1.6.14
Definition: a compact metric space
A metric space (X, ρ) is said to be compact if every infinite subset of X
has at least one limit point.
Remark 1.6.1. A set K ⊂ X is then compact if the space (K, ρ) is
compact. K is compact if and only if every sequence with values in K has
a subsequence which converges to a point in K.
1.6.15
Definition: relatively compact
If X is a metric space and K is a subset of X such that its closure K̄ is
compact, then K is said to be relatively compact.
Lemma 1.6.1. A compact subset of a metric space is closed and bounded.
The converse of the lemma is in general false.
1.6.16
Example
Consider the sequence {eₙ} in l², where e₁ = (1, 0, 0, ..., 0, ...), e₂ = (0, 1, 0, ..., 0, ...),
e₃ = (0, 0, 1, ..., 0, ...), etc. This sequence is bounded, since ρ(θ, eₙ) = 1, where
θ is the null element and ρ stands for the metric in l². Its terms constitute a point
set that is closed, because it has no limit point (indeed ρ(eₘ, eₙ) = √2 for m ≠ n).
Hence the set is closed and bounded but not compact.
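These facts about {eₙ} can be checked numerically; a finite-dimensional truncation of l² suffices here, since each eₙ has a single nonzero coordinate:

```python
# The unit vectors e_n: norm 1, pairwise distance sqrt(2), hence no Cauchy
# subsequence and no convergent subsequence.
import math

def e(n, dim):
    v = [0.0] * dim
    v[n] = 1.0
    return v

def dist(u, v):
    return math.sqrt(sum((a - b) ** 2 for a, b in zip(u, v)))

dim = 10
vecs = [e(n, dim) for n in range(dim)]
norms = {dist(v, [0.0] * dim) for v in vecs}                      # all equal 1
gaps = {round(dist(u, v), 12) for u in vecs for v in vecs if u is not v}
```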
Lemma 1.6.2. Every compact metric space is complete.
1.6.17
Denition: sequentially compact, totally bounded
(i) A metric space X is said to be sequentially compact if every sequence
in X has a convergent subsequence.
(ii) A metric space X is said to be totally bounded if for every ε > 0,
X contains a finite set, called an ε-net, such that the finite set of open balls
of radius ε > 0 and centres in the ε-net covers X.
Lemma 1.6.3. X is totally bounded if and only if every sequence in X
has a Cauchy subsequence.
1.6.18
Theorem
In a metric space (X, ρ), the following statements are equivalent:
(i) X is compact;
(ii) X is sequentially compact;
(iii) X is complete and totally bounded.
1.6.19
Corollary
For 1 ≤ p < ∞ and x, y ∈ ℝⁿ (ℂⁿ), consider
ρ_p(x, y) = ( ∑_{j=1}^{n} |x_j − y_j|ᵖ )^{1/p}.
(a) (Heine-Borel) A subset of ℝⁿ (ℂⁿ) is compact if and only if it is
closed and bounded.
(b) (Bolzano-Weierstrass) Every bounded sequence in ℝⁿ (ℂⁿ) has a
convergent subsequence.
Proof: Since a bounded subset of ℝⁿ (ℂⁿ) is totally bounded, part (a)
follows from theorem 1.6.18. Since the closure of a bounded set of ℝⁿ (ℂⁿ)
is complete and totally bounded, part (b) follows from theorem 1.6.18.
1.6.20
Theorem
In a metric space (X, ρ) the following statements are true:
X is totally bounded ⇒ X is separable;
X is compact ⇒ X is separable;
X is separable ⇔ X is second countable.
1.6.21
Theorem
If f is a real-valued continuous function defined on a metric space (X, ρ),
then for any compact set A ⊂ X the values sup{f(x) : x ∈ A} and
inf{f(x) : x ∈ A} are finite and are attained by f at some points of A.
Remark 1.6.2. If a continuous function f(x) is defined on some set M
that is not compact, then sup_{x∈M} f(x) and inf_{x∈M} f(x) need not be attained. For
example, consider the set M of all continuous functions x(t) such that x(0) = 0, x(1) = 1
and |x(t)| ≤ 1. The functional
f(x) = ∫₀¹ x²(t) dt,
though continuous on M, does not attain its g.l.b. on M. For if x(t) = tⁿ, then f(x) =
1/(2n + 1) → 0 as n → ∞. Hence inf[f(x)] = 0. But the form of f(x)
indicates that f(x) > 0 for every continuous curve x = x(t) that joins the
points (0, 0) and (1, 1). The fallacy is that the set of curves considered is
not compact, even though the set is closed and bounded in C([0, 1]).
1.6.22
Definition: uniformly bounded, equicontinuous
Let (X, ρ) be a compact metric space. The space of continuous real-valued
functions on X with the metric ρ(f, g) = max{|f(x) − g(x)| : x ∈ X}
is a complete metric space, which we denote by C(X). A collection F of
functions on a set X is said to be uniformly bounded if there is an M > 0
such that |f(x)| ≤ M for all x ∈ X and all f ∈ F. For subsets of C(X),
uniform boundedness agrees with boundedness in a metric space, i.e., a set
F is uniformly bounded if and only if it is contained in a ball.
A collection F of functions defined on a metric space X is called
equicontinuous if for each ε > 0, there is a δ > 0 such that ρ(x, x′) < δ ⇒
|f(x) − f(x′)| < ε for all x, x′ ∈ X and for all f ∈ F. It may be noted
that the functions belonging to an equicontinuous collection are uniformly
continuous.
1.6.23
Theorem (Arzelà-Ascoli)
If (X, ρ) is a compact metric space, a subset K ⊂ C(X) is relatively
compact if and only if it is uniformly bounded and equicontinuous.
1.6.24
Theorem
If f is continuous on an open set D, then for every (x₀, y₀) ∈ D
the differential equation dy/dx = f(x, y) has a local solution passing through
(x₀, y₀).
Problems
1. Show that if (X, τ) is a topological space such that X has only a finite
number of points, then (X, τ) is compact.
2. Which of the following subspaces of (ℝ, U) are compact: (i) J, (ii)
[0, 1], (iii) [0, 1] ∪ [2, 3], (iv) the set of all rational numbers or (v) [2, 3[.
3. Show that a continuous real or complex function defined on a compact
space is bounded. More generally, show that a continuous real or
complex function mapping a compact space into any metric space is
bounded.
4. Show that if D is an open connected set and a differential equation
dy/dx = f(x, y) is such that its solutions form a simple covering of D,
then f is the limit of a sequence of continuous functions.
5. Prove the Heine–Borel theorem: A subspace (Y, U_Y) of (ℝ, U) is compact if and only if Y is bounded and closed.
6. Show that the subset of l₂ of points {x_n} such that |x_n| ≤ 1/n, n = 1, 2, . . . , is compact.
7. Show that the unit ball in C([0, 1]), the set of points {x : max |x(t)| ≤ 1, t ∈ [0, 1]}, is not compact.
8. Show that X is compact if and only if for any collection F of closed sets with the property that F₁ ∩ F₂ ∩ · · · ∩ F_n ≠ ∅ for every finite collection F₁, . . . , F_n in F, it follows that ⋂{F : F ∈ F} ≠ ∅.
CHAPTER 2
NORMED LINEAR
SPACES
If a linear space is simultaneously a metric space, it is called a metric linear
space. The normed linear spaces form an important class of metric linear
spaces. Furthermore, in each such space there is defined a notion of the distance from an arbitrary element to the null element or origin, that is, a notion of the size of an arbitrary element. This gives rise to the concept of the norm ‖x‖ of an element x, and finally to that of the normed linear space.
2.1
Definitions and Elementary Properties
2.1.1
Definition
Let E be a linear space over ℝ (or ℂ).
To every element x of the linear space E, let there be assigned a unique real number, called the norm of this element and denoted by ‖x‖, satisfying the following properties (axioms of a normed linear space):
(a) ‖x‖ ≥ 0, and ‖x‖ = 0 ⇔ x = 0;
(b) ‖x + y‖ ≤ ‖x‖ + ‖y‖ for all x, y ∈ E (triangle inequality);
(c) ‖αx‖ = |α| ‖x‖ for every scalar α (homogeneity of the norm).
The normed linear space E is also written as (E, ‖ · ‖).
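The axioms above can be probed numerically. The following Python sketch (an illustration added here, not part of the text) spot-checks (a), (b) and (c) for the Euclidean norm on ℝ³ on random samples; of course, a sample check is not a proof:

```python
import math
import random

# Spot check of the norm axioms (a)-(c) of 2.1.1 for the Euclidean
# norm on R^3, evaluated on random vectors and scalars.

def norm(x):
    return math.sqrt(sum(xi * xi for xi in x))

random.seed(0)
for _ in range(1000):
    x = [random.uniform(-10, 10) for _ in range(3)]
    y = [random.uniform(-10, 10) for _ in range(3)]
    a = random.uniform(-5, 5)
    # (a) non-negativity
    assert norm(x) >= 0
    # (b) triangle inequality
    assert norm([xi + yi for xi, yi in zip(x, y)]) <= norm(x) + norm(y) + 1e-12
    # (c) homogeneity
    assert abs(norm([a * xi for xi in x]) - abs(a) * norm(x)) < 1e-9

assert norm([0, 0, 0]) == 0
print("norm axioms hold on all samples")
```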
Remark 2.1:
1. Properties (b) and (c) imply the first part of property (a):
0 = ‖x − x‖ ≤ ‖x‖ + ‖−x‖ = 2‖x‖,
which yields ‖x‖ ≥ 0.
2. If we regard x in a normed linear space E as a vector, its length is ‖x‖, and the length ‖x − y‖ of the vector x − y is the distance between the end points of the vectors x and y. Thus, in view of (a) and definition 2.1.1, all vectors in a normed linear space E have positive lengths except the zero vector. Property (b) states that the length of one side of a triangle can never exceed the sum of the lengths of the other two sides (Fig. 2.1(a)).
[Fig. 2.1(a) and Fig. 2.1(b): the vectors x, y and x + y, illustrating ‖x + y‖ ≤ ‖x‖ + ‖y‖.]
3. In a normed linear space a metric (distance) can be introduced by ρ(x, y) = ‖x − y‖. It is clear that this distance satisfies all the metric axioms.
2.1.2
Examples
1. The n-dimensional Euclidean space ℝⁿ and the unitary space ℂⁿ are normed linear spaces.
Define the sum of elements and the product of an element by a scalar as in Sec. 1.4.2, Ex. 2. The norm of x = {ξ_i} ∈ ℝⁿ is defined by
‖x‖ = ( Σ_{i=1}^n |ξ_i|² )^{1/2}.
This norm satisfies all the axioms of 2.1.1. Hence ℝⁿ and ℂⁿ are normed linear spaces.
2. Space C([0, 1]).
We define the addition of functions and multiplication by a scalar in the usual way.
We set ‖x‖ = max_{t∈[0,1]} |x(t)|. It is clear that the axioms of a normed linear space are satisfied.
3. Space l_p: We define the addition of elements and multiplication of elements by a scalar as indicated earlier.
We set, for x = {ξ_i} ∈ l_p,
‖x‖ = ( Σ_{i=1}^∞ |ξ_i|^p )^{1/p}.
The axioms (a) and (c) of 2.1.1 are satisfied. If y = {η_i} ∈ l_p, it can be shown, by making an appeal to Minkowski's inequality [1.4.5], that
( Σ_{i=1}^∞ |ξ_i + η_i|^p )^{1/p} ≤ ( Σ_{i=1}^∞ |ξ_i|^p )^{1/p} + ( Σ_{i=1}^∞ |η_i|^p )^{1/p} < ∞,
where {ξ_i} and {η_i} are pth power summable. This establishes axiom (b).
Thus l_p is a normed linear space.
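Minkowski's inequality, which supplies the triangle inequality above, can be checked numerically. The following sketch (an added illustration; sample sizes and exponents are arbitrary choices) verifies it on random finite vectors for several values of p:

```python
import random

# Numerical check of Minkowski's inequality, which gives the triangle
# inequality (b) for the l_p norm: ||x + y||_p <= ||x||_p + ||y||_p.

def lp_norm(x, p):
    return sum(abs(xi) ** p for xi in x) ** (1.0 / p)

random.seed(1)
for p in (1, 1.5, 2, 3, 10):
    for _ in range(500):
        x = [random.uniform(-1, 1) for _ in range(8)]
        y = [random.uniform(-1, 1) for _ in range(8)]
        s = [xi + yi for xi, yi in zip(x, y)]
        assert lp_norm(s, p) <= lp_norm(x, p) + lp_norm(y, p) + 1e-12
print("Minkowski's inequality verified on all samples")
```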
4. Space L_p([0, 1]), p > 1.
We set
‖x‖ = ( ∫_0^1 |x(t)|^p dt )^{1/p} for x(t) ∈ L_p([0, 1]).
If y(t) ∈ L_p([0, 1]) then ( ∫_0^1 |y(t)|^p dt )^{1/p} < ∞, where x(t) and y(t) are Riemann integrable. Then by Minkowski's inequality for integrals, we have the triangle inequality (b) of 2.1.1.
5. Space l_∞: Let x = {ξ_i} be such that |ξ_i| ≤ C_x for all i, where C_x is a real number depending on x. Setting ‖x‖ = sup_i |ξ_i|, we see that all the norm axioms are fulfilled. Hence l_∞ is a normed linear space.
6. Space C^k([0, 1]): Consider the space of functions x(t) defined and continuous on [0, 1] and having continuous derivatives up to order k, i.e., x(t) ∈ C^k([0, 1]). The norm on this function space is defined by
‖x‖ = max{ max_t |x(t)|, max_t |x′(t)|, . . . , max_t |x^(k)(t)| }.
Then ‖x + y‖ = max{ max_t |x(t) + y(t)|, max_t |x′(t) + y′(t)|, . . . , max_t |x^(k)(t) + y^(k)(t)| }.
It is clear that all the axioms of 2.1.1 are fulfilled. Therefore C^k([0, 1]) is a normed linear space.
2.1.3
Induced metric, convergence in norm, Banach space
Induced metric: In a normed linear space a metric (distance) can be introduced by ρ(x, y) = ‖x − y‖. It may be seen that the metric defined above satisfies all the axioms of a metric space. Thus the normed linear space (E, ‖ · ‖) can be treated as a metric space if we define the metric in the above manner.
Convergence in norm: After introducing the metric, we define the convergence of a sequence of elements {x_n} to x, namely x = lim x_n or x_n → x, if ‖x_n − x‖ → 0 as n → ∞. Such convergence in a normed linear space is called convergence in norm.
Banach space: A real (complex) Banach space is a real (complex) normed linear space that is complete in the sense of convergence in norm.
2.1.4
Examples
1. The n-dimensional Euclidean space ℝⁿ is a Banach space.
Defining the sum of elements and the product of elements by a scalar (real or complex) in the usual manner, the norm is defined as
‖x‖ = ( Σ_{i=1}^n |ξ_i|² )^{1/2},
where x = {ξ_i}. By example 1, 2.1.2 we see that ℝⁿ is a normed linear space. If x_m = {ξ_i^(m)} and if ξ_i^(m) → ξ_i as m → ∞ for every i, then
( Σ_{i=1}^n |ξ_i^(m) − ξ_i|² )^{1/2} → 0 as m → ∞,
or in other words ‖x_m − x‖ → 0 as m → ∞, where x = {ξ_i}. Since x ∈ ℝⁿ, it follows that ℝⁿ is a complete normed linear space, i.e., ℝⁿ is a Banach space.
2. l_p is a Banach space (1 ≤ p < ∞).
Defining the sum of elements and the product of elements by a scalar as in 1.3, and taking the norm as
‖x‖ = ( Σ_{i=1}^∞ |ξ_i|^p )^{1/p},
where x = {ξ_i}, we proceed as follows.
Let x_n = {ξ_i^(n)} ∈ l_p and ξ_i^(n) → ξ_i as n → ∞. Then ‖x_n − x‖ → 0 as n → ∞, where x = {ξ_i}. Now by Minkowski's inequality [1.4.5] we have
( Σ_{i=1}^∞ |ξ_i|^p )^{1/p} ≤ ( Σ_{i=1}^∞ |ξ_i − ξ_i^(n)|^p )^{1/p} + ( Σ_{i=1}^∞ |ξ_i^(n)|^p )^{1/p},
or, ‖x‖ ≤ ‖x − x_n‖ + ‖x_n‖ < ∞ for n sufficiently large.
Hence x ∈ l_p, 1 ≤ p < ∞, and the space is a Banach space.
3. C([0, 1]) is a Banach space
C([0, 1]) is called the function space. Consider the linear space of all scalar-valued (real or complex) continuous functions defined on [0, 1].
Let {x_n(t)} ⊆ C([0, 1]) and let {x_n(t)} be a Cauchy sequence in C([0, 1]). C([0, 1]) being a normed linear space [see 2.1],
‖x_m − x_n‖ = max_t |x_m(t) − x_n(t)| < ε for m, n ≥ N.   (2.1)
Therefore, for any fixed t = t_0 ∈ [0, 1], we get
|x_m(t_0) − x_n(t_0)| < ε for m, n ≥ N.
This shows that {x_m(t_0)} is a Cauchy sequence in ℝ (ℂ). But ℝ (ℂ) being complete, the sequence converges to a limit in ℝ (ℂ). In this way, we can assign to each t ∈ [0, 1] a unique x(t) ∈ ℝ (ℂ). This defines a (pointwise) function x on [0, 1]. Now, we show that x ∈ C([0, 1]) and x_m → x.
Letting n → ∞, we have from (2.1)
|x_m(t) − x(t)| ≤ ε for m ≥ N and t ∈ [0, 1].   (2.2)
This shows that the sequence {x_m} of continuous functions converges uniformly to the function x on [0, 1], and hence the limit function x is a continuous function on [0, 1]. As such, x ∈ C([0, 1]). Also from (2.2), we have
max_t |x_m(t) − x(t)| ≤ ε for m ≥ N,
i.e., ‖x_m − x‖ ≤ ε for m ≥ N,
i.e., x_m → x in C([0, 1]).
Hence C[0, 1] is a Banach space.
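The mechanism of the proof — sup-norm Cauchy sequences converge uniformly to a continuous limit — can be seen concretely. The following sketch (an added illustration; the particular functions are chosen only because their bounds are easy to verify by hand) uses x_m(t) = √(t² + 1/m²), which converges uniformly to t on [0, 1]:

```python
import numpy as np

# Illustration of the completeness argument for C([0,1]): the functions
# x_m(t) = sqrt(t^2 + 1/m^2) form a Cauchy sequence in the sup norm and
# converge uniformly to the continuous limit x(t) = t on [0, 1].

t = np.linspace(0.0, 1.0, 2001)

def x_m(m):
    return np.sqrt(t ** 2 + 1.0 / m ** 2)

def sup_dist(f, g):
    return np.max(np.abs(f - g))

# Cauchy in sup norm: ||x_m - x_n|| <= 1/min(m, n)
for m, n in [(10, 20), (100, 400), (1000, 5000)]:
    assert sup_dist(x_m(m), x_m(n)) <= 1.0 / min(m, n) + 1e-12

# Uniform convergence to the limit x(t) = t:
# sqrt(t^2 + 1/m^2) - t <= 1/m for t >= 0.
for m in (10, 100, 1000):
    assert sup_dist(x_m(m), t) <= 1.0 / m + 1e-12
print("sup-norm Cauchy sequence with a continuous uniform limit")
```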
4. l is a Banach space
Let {x_m} be a Cauchy sequence in l_∞ and let x_m = {ξ_i^(m)} ∈ l_∞. Then
sup_i |ξ_i^(m) − ξ_i^(n)| < ε for m, n ≥ N.
This gives |ξ_i^(m) − ξ_i^(n)| < ε for m, n ≥ N (i = 1, 2, . . .).
This shows that for each i, {ξ_i^(m)} is a Cauchy sequence in ℝ (ℂ). Since ℝ (ℂ) is complete, {ξ_i^(m)} converges in ℝ (ℂ). Let ξ_i^(m) → ξ_i as m → ∞, and let x = (ξ_1, ξ_2, . . . , ξ_n, . . .).
Letting n → ∞, we get
|ξ_i^(m) − ξ_i| ≤ ε for m ≥ N (i = 1, 2, . . .).   (2.3)
Since x_m ∈ l_∞, there is a real number M_m such that |ξ_i^(m)| ≤ M_m for all i.
Therefore, |ξ_i| ≤ |ξ_i^(m)| + |ξ_i − ξ_i^(m)| ≤ M_m + ε for m ≥ N, i = 1, 2, . . .
Since the RHS is independent of i, it follows that {ξ_i} is a bounded sequence of numbers and thus x ∈ l_∞. Furthermore, it follows from (2.3) that
‖x_m − x‖ ≤ ε for m ≥ N.
Hence x_m → x ∈ l_∞ and l_∞ is a Banach space.
5. Incomplete normed linear space
Let X be the set of continuous functions defined on a closed interval [a, b]. For x ∈ X let us take ‖x‖ as
‖x‖ = ∫_a^b |x(t)| dt.   (2.4)
The metric induced by (2.4) for x, y ∈ X is given by
ρ(x, y) = ∫_a^b |x(t) − y(t)| dt.   (2.5)
In note 1.4.10, we have shown that the metric space (X, ρ) is not complete, i.e., there is a Cauchy sequence {x_m} in (X, ρ) that does not converge to a point in X. Hence the normed linear space (X, ‖ · ‖) is not complete.
6. An incomplete normed linear space and its completion L_2([a, b]).
The linear space of all continuous real-valued functions on [a, b] forms a normed linear space X with norm defined by
‖x‖ = ( ∫_a^b (x(t))² dt )^{1/2}.   (2.6)
Let us take a = 0 and b = 1, and consider {x_m} as follows:
x_m(t) = 0 if t ∈ [0, 1/2],
x_m(t) = m(t − 1/2) if t ∈ [1/2, a_m],
x_m(t) = 1 if t ∈ [a_m, 1],
where a_m = 1/2 + 1/m.
Let n > m. Hence
‖x_n − x_m‖² = ∫_0^1 [x_n(t) − x_m(t)]² dt = ∫_{1/2}^{1/2+1/m} [x_n(t) − x_m(t)]² dt
≤ ∫_{1/2}^{1/2+1/m} |x_n(t) − x_m(t)| dt = area of the triangle ABC = (1/2)(1/m − 1/n) < 1/m
[see figure 2.1(c)]. Hence {x_m} is a Cauchy sequence in (X, ‖ · ‖).
The Cauchy sequence does not converge to a point in (X, ‖ · ‖). For every x ∈ X,
‖x_n − x‖² = ∫_0^1 |x_n(t) − x(t)|² dt
= ∫_0^{1/2} |x(t)|² dt + ∫_{1/2}^{a_n} |x_n(t) − x(t)|² dt + ∫_{a_n}^1 |1 − x(t)|² dt.
Since the integrands are nonnegative, x_n → x in the space (X, ‖ · ‖) implies that x(t) = 0 if t ∈ [0, 1/2[ and x(t) = 1 if t ∈ ]1/2, 1]. Since it is impossible for a continuous function to have this property, {x_n} does not have a limit in X.
The space X can be completed by Theorem 1.4.5. The completion is denoted by L_2([0, 1]). This is a Banach space. In fact, the norm on X and the operations of a linear space can be extended to the completion of X. This process can be seen in Theorem 2.1.10 in the next section. In general, for any p ≥ 1, the Banach space L_p([a, b]) is the completion of the normed linear space which consists of all continuous real-valued functions on [a, b] with the norm defined by
‖x‖_p = ( ∫_a^b |x(t)|^p dt )^{1/p}.
The space L_p([0, 1]) can also be obtained in a direct way by the use of the Lebesgue integral and Lebesgue measurable functions x on [0, 1] such that the Lebesgue integral of |x|^p on [0, 1] exists and is finite. The elements of L_p([0, 1]) are equivalence classes of those functions, where x is equivalent to y if the Lebesgue integral of |x − y|^p over [0, 1] is zero. We discuss these (Lebesgue measures) in Chapter Ten. Until then the development will take place without the use of measure theory.
[Fig. 2.1(c) and Fig. 2.1(d): the ramp functions x_m and x_n (n > m), with the triangle ABC between their graphs over [1/2, a_m].]
7. Space s.
Every normed linear space can be reduced to a metric space. However, not every metric can be recovered from a norm. Consider the space s of all numerical sequences with the metric defined by
ρ(x, y) = Σ_{i=1}^∞ (1/2^i) |ξ_i − η_i| / (1 + |ξ_i − η_i|),
where x = {ξ_i} and y = {η_i} belong to s. This metric cannot be obtained from a norm, because it lacks the two properties that a metric derived from a norm possesses. The following lemma identifies these two properties.
2.1.5
Lemma (translation invariance)
A metric ρ induced by a norm on a normed linear space E satisfies:
(a) ρ(x + a, y + a) = ρ(x, y) for all x, y, a ∈ E;
(b) ρ(αx, αy) = |α| ρ(x, y) for all x, y ∈ E and every scalar α.
Proof: We have
ρ(x + a, y + a) = ‖x + a − (y + a)‖ = ‖x − y‖ = ρ(x, y),
ρ(αx, αy) = ‖αx − αy‖ = |α| ‖x − y‖ = |α| ρ(x, y).
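Both properties are easy to test numerically. The following sketch (an added illustration with arbitrary sample choices) spot-checks (a) and (b) for the metric induced by the Euclidean norm on ℝ²:

```python
import math
import random

# Spot check of lemma 2.1.5 for the metric rho(x, y) = ||x - y|| induced
# by the Euclidean norm on R^2: translation invariance and homogeneity.

def norm(x):
    return math.hypot(x[0], x[1])

def rho(x, y):
    return norm((x[0] - y[0], x[1] - y[1]))

random.seed(2)
for _ in range(1000):
    x = (random.uniform(-5, 5), random.uniform(-5, 5))
    y = (random.uniform(-5, 5), random.uniform(-5, 5))
    a = (random.uniform(-5, 5), random.uniform(-5, 5))
    alpha = random.uniform(-3, 3)
    # (a) rho(x + a, y + a) == rho(x, y)
    assert abs(rho((x[0] + a[0], x[1] + a[1]),
                   (y[0] + a[0], y[1] + a[1])) - rho(x, y)) < 1e-9
    # (b) rho(alpha x, alpha y) == |alpha| rho(x, y)
    assert abs(rho((alpha * x[0], alpha * x[1]),
                   (alpha * y[0], alpha * y[1])) - abs(alpha) * rho(x, y)) < 1e-9
print("translation invariance and homogeneity verified")
```

The metric of the space s above fails both checks, which is why it cannot come from a norm.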
Problems
1. Show that the norm ‖x‖ of x is the distance from x to 0.
2. Verify that the usual length of a vector in the plane or in three
dimensional space has properties (a), (b) and (c) of a norm.
3. Show that for any element x of a normed linear space, ‖x‖ ≥ 0 follows from axioms (b) and (c) of a normed linear space.
4. Given x = {ξ_i} ∈ ℝⁿ, show that ‖x‖ = ( Σ_{i=1}^n |ξ_i|² )^{1/2} defines a norm on ℝⁿ.
5. Let E be the linear space of all ordered triplets x = {ξ_1, ξ_2, ξ_3}, y = {η_1, η_2, η_3} of real numbers. Show that norms on E are defined by
‖x‖_1 = |ξ_1| + |ξ_2| + |ξ_3|,
‖x‖_2 = {ξ_1² + ξ_2² + ξ_3²}^{1/2},
‖x‖_∞ = max{|ξ_1|, |ξ_2|, |ξ_3|}.
6. Show that the norm is continuous on the metric space associated with
a normed linear space.
7. In case 0 < p < 1, show with the help of an example that ‖ · ‖_p^(n) does not define a norm on l_p^(n) unless n = 1, where ‖x‖_p^(n) = ( Σ_{i=1}^n |ξ_i|^p )^{1/p}, x = {ξ_i}.
8. Show that each of the following defines a norm on ℝ².
(i) ‖x‖_1 = |x_1|/a + |x_2|/b,
(ii) ‖x‖_2 = ( x_1²/a² + x_2²/b² )^{1/2},
(iii) ‖x‖_∞ = max{ |x_1|/a, |x_2|/b },
where a and b are two fixed positive real numbers and x = (x_1, x_2) ∈ ℝ². Draw the closed unit sphere (‖x‖ = 1) corresponding to each of these norms.
9. Let ‖ · ‖ be a norm on a linear space E. If x, y ∈ E and ‖x + y‖ = ‖x‖ + ‖y‖, then show that ‖sx + ty‖ = s‖x‖ + t‖y‖ for all s ≥ 0, t ≥ 0.
10. Show that a nonempty subset A of ℝⁿ is bounded if and only if there exists a real number K such that for each x = (x_1, x_2, . . . , x_n) in A we have |x_i| ≤ K for each subscript i.
11. Show that the real linear space C([−1, 1]) equipped with the norm given by
‖x‖_1 = ∫_{−1}^1 |x(t)| dt,
where the integral is taken in the sense of Riemann, is an incomplete normed linear space. [Note: ‖x‖_1 is precisely the area of the region enclosed between the graph of |x(t)| and the t-axis from t = −1 to t = 1.]
12. Let E be a linear space and ρ a metric on E such that
ρ(x, y) = ρ(x − y, 0) and ρ(αx, 0) = |α| ρ(x, 0) for all x, y ∈ E and α ∈ ℝ (ℂ).
Define ‖x‖ = ρ(x, 0), x ∈ E. Prove that ‖ · ‖ is a norm on E and that ρ is the metric induced by the norm ‖ · ‖ on E.
13. Let E be the linear space of all real-valued functions defined on [0, 1] possessing continuous first-order derivatives. Show that ‖f‖ = |f(0)| + ‖f′‖_∞ is a norm on E that is equivalent to the norm ‖f‖_∞ + ‖f′‖_∞.
2.1.6
Lemma
In a normed linear space E,
| ‖x‖ − ‖y‖ | ≤ ‖x − y‖, x, y ∈ E.
Proof: ‖x‖ = ‖(x − y) + y‖ ≤ ‖x − y‖ + ‖y‖.
Hence ‖x‖ − ‖y‖ ≤ ‖x − y‖.
Interchanging x with y,
‖y‖ − ‖x‖ ≤ ‖y − x‖ = ‖x − y‖.
Hence | ‖x‖ − ‖y‖ | ≤ ‖x − y‖.
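The reverse triangle inequality just proved is easy to confirm on samples. A minimal sketch (added for illustration, with arbitrary sample choices):

```python
import math
import random

# Spot check of lemma 2.1.6, | ||x|| - ||y|| | <= ||x - y||, in R^3.

def norm(x):
    return math.sqrt(sum(v * v for v in x))

random.seed(3)
for _ in range(1000):
    x = [random.uniform(-10, 10) for _ in range(3)]
    y = [random.uniform(-10, 10) for _ in range(3)]
    diff = [a - b for a, b in zip(x, y)]
    assert abs(norm(x) - norm(y)) <= norm(diff) + 1e-12
print("reverse triangle inequality verified")
```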
2.1.7
Lemma
The norm ‖ · ‖ is a continuous mapping of E into ℝ, where E is a Banach space.
Proof: Let {x_n} ⊆ E and x_n → x as n → ∞; it then follows from lemma 2.1.6 that
| ‖x_n‖ − ‖x‖ | ≤ ‖x_n − x‖ → 0 as n → ∞.
Hence the result follows.
2.1.8
Corollary
Let E be a complete normed linear space over ℝ (ℂ). If {x_n}, {y_n} ⊆ E, α_n ∈ ℝ (ℂ), and x_n → x, y_n → y as n → ∞ and α_n → α ∈ ℝ (ℂ), then (i) x_n + y_n → x + y, (ii) α_n x_n → αx as n → ∞.
Proof: Now, ‖(x_n + y_n) − (x + y)‖ ≤ ‖x_n − x‖ + ‖y_n − y‖ → 0 as n → ∞.
Hence x_n + y_n → x + y as n → ∞.
‖α_n x_n − αx‖ = ‖α_n (x_n − x) + (α_n − α)x‖ ≤ |α_n| ‖x_n − x‖ + |α_n − α| ‖x‖ → 0,
because {α_n}, being a convergent sequence, is bounded and ‖x‖ is finite.
2.1.9
Summable sequence
Definition: A sequence {x_n} in a normed linear space E is said to be summable to the limit sum s if the sequence {s_m} of the partial sums of the series Σ_{n=1}^∞ x_n converges to s in E, i.e.,
‖s_m − s‖ → 0 as m → ∞, or ‖ Σ_{n=1}^m x_n − s ‖ → 0 as m → ∞.
In this case we write s = Σ_{n=1}^∞ x_n. {x_n} is said to be absolutely summable if Σ_{n=1}^∞ ‖x_n‖ < ∞.
It is known that for a sequence of real (complex) numbers absolute summability implies summability. This is not true in general for sequences in normed linear spaces. But in a Banach space every absolutely summable sequence is summable, and the converse is also true. This may be regarded as a characterisation of a Banach space.
2.1.10
Theorem
A normed linear space E is a Banach space if and only if every absolutely
summable sequence in E is summable in E.
Proof: Assume that E is a Banach space and that {x_n} is an absolutely summable sequence in E. Then
Σ_{n=1}^∞ ‖x_n‖ = M < ∞,
i.e., for each ε > 0 there exists a K such that Σ_{n=K}^∞ ‖x_n‖ < ε. Hence
‖s_n − s_m‖ = ‖ Σ_{k=m+1}^n x_k ‖ ≤ Σ_{k=m+1}^n ‖x_k‖ < ε for n, m > K.
In the above, s_n = Σ_{k=1}^n x_k.
Thus {s_n} is a Cauchy sequence in E and must converge to some element s in E, since E is complete. Hence {x_n} is summable in E.
Conversely, let us suppose that each absolutely summable sequence in E is summable in E. We need to show that E is a Banach space. Let {x_n} be a Cauchy sequence in E. Then for each k there exists an integer n_k such that
‖x_n − x_m‖ < 1/2^k for n, m ≥ n_k.
We may choose the n_k such that n_{k+1} > n_k. Then {x_{n_k}} is a subsequence of {x_n}.
Let us set y_0 = x_{n_1}, y_1 = x_{n_2} − x_{n_1}, . . . , y_k = x_{n_{k+1}} − x_{n_k}, . . .. Then
(a) Σ_{n=0}^k y_n = x_{n_{k+1}}, (b) ‖y_k‖ < 1/2^k, k ≥ 1.
Thus
Σ_{k=0}^∞ ‖y_k‖ ≤ ‖y_0‖ + Σ_{k=1}^∞ 1/2^k = ‖y_0‖ + 1 < ∞.
Thus the sequence {y_k} is absolutely summable and hence summable to some element x in E. Therefore by (a), x_{n_k} → x as k → ∞. Thus the Cauchy sequence {x_n} in E has a convergent subsequence {x_{n_k}} converging to x. Now, if a subsequence of a Cauchy sequence converges to a limit, then the whole sequence converges to that limit. Thus the space is complete and is therefore a Banach space.
2.1.11
Ball, sphere, convex set, segment of a straight line
Since normed linear spaces can be treated as metric spaces, all concepts
introduced in metric spaces (e.g., balls, spheres, bounded set, separability,
compactness, linear dependence of elements, linear subspace, etc.) have
similar meanings in normed linear spaces. Therefore, theorems proved
in metric spaces using such concepts can have parallels in normed linear
spaces.
Definition: ball, sphere
Let (E,  ) be a normed linear space.
(i) The set {x ∈ E : ‖x − x_0‖ < r}, denoted by B(x_0, r), is called the open ball with centre x_0 and radius r.
(ii) The set {x ∈ E : ‖x − x_0‖ ≤ r}, denoted by B̄(x_0, r), is called the closed ball with centre x_0 and radius r.
(iii) The set {x ∈ E : ‖x − x_0‖ = r}, denoted by S(x_0, r), is called the sphere with centre x_0 and radius r.
Note 2.1.1.
1. An open ball is an open set.
2. A closed ball is a closed set.
3. Given r > 0,
B(0, r) = {x ∈ E : ‖x‖ < r} = { x ∈ E : ‖x/r‖ < 1 }
= {ry : y ∈ E, ‖y‖ < 1}, where y = x/r,
= rB(0, 1).
Therefore, in a normed linear space, without any loss of generality, we can
consider B(0, 1), the ball centred at zero with a radius of 1. The ball B(0, 1)
is called the unit open ball in E.
Definition: convex set, segment of a straight line
A set of elements of a linear space E having the form y = tx, x ∈ E, x ≠ 0, −∞ < t < ∞, is called a real line defined by the given element x, and a set of elements of the form
y = αx_1 + (1 − α)x_2, x_1, x_2 ∈ X, 0 ≤ α ≤ 1,
is called a segment joining the points x_1 and x_2. A set X in E is called a convex set if
x_1, x_2 ∈ X ⇒ αx_1 + (1 − α)x_2 ∈ X for 0 ≤ α ≤ 1.
2.1.12
Lemma. In a normed linear space an open (closed) ball
is a convex set
Let x_1, x_2 ∈ B(x_0, r), i.e., ‖x_1 − x_0‖ < r, ‖x_2 − x_0‖ < r.
Let us select any element of the form
y = αx_1 + (1 − α)x_2, 0 < α < 1.
Then ‖y − x_0‖ = ‖αx_1 + (1 − α)x_2 − x_0‖
= ‖α(x_1 − x_0) + (1 − α)(x_2 − x_0)‖
≤ α‖x_1 − x_0‖ + (1 − α)‖x_2 − x_0‖
< r.
Thus, y ∈ B(x_0, r).
Note 2.1.2
(i) For any point x ≠ 0, a ball of radius r > ‖x‖ with its centre at the origin contains the point x.
(ii) Any ball of radius r < ‖x‖ with centre at the origin does not contain this point.
In order to have geometrical interpretations of the different abstract spaces l_p^(n), the n-dimensional pth power summable spaces, we draw the shapes of unit balls for different values of p.
Examples: unit closed balls in ℝ² with different norms: Given x = (x_1, x_2),
(i) ‖x‖_{1/2} = (|x_1|^{1/2} + |x_2|^{1/2})²,
(ii) ‖x‖_1 = |x_1| + |x_2|,
(iii) ‖x‖_2 = (|x_1|² + |x_2|²)^{1/2},
(iv) ‖x‖_4 = (|x_1|⁴ + |x_2|⁴)^{1/4},
(v) ‖x‖_∞ = max(|x_1|, |x_2|).
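The case p = 1/2 is not actually a norm, which is what makes its "unit ball" non-convex. A short sketch (added for illustration; the test vectors e₁, e₂ are an arbitrary choice) exhibits the failure of the triangle inequality for p < 1:

```python
# For p < 1 the "p-norm" expression fails the triangle inequality on R^2,
# which is why the unit "ball" for p = 1/2 is not convex.

def phi(x, p):
    return (abs(x[0]) ** p + abs(x[1]) ** p) ** (1.0 / p)

e1, e2 = (1.0, 0.0), (0.0, 1.0)
s = (1.0, 1.0)  # e1 + e2

p = 0.5
lhs = phi(s, p)                     # phi(e1 + e2) = 2^(1/p) = 4
rhs = phi(e1, p) + phi(e2, p)       # 1 + 1 = 2
assert lhs > rhs                    # triangle inequality fails

# For p >= 1 the inequality holds for this pair.
for p in (1, 2, 4):
    assert phi(s, p) <= phi(e1, p) + phi(e2, p)
print("p = 1/2 violates the triangle inequality; p >= 1 does not")
```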
Problems
1. Show that for the norms in examples (ii), (iii), (iv) and (v) the unit spheres are as shown in Fig. 2.2.
[Fig. 2.2: the unit spheres ‖x‖_1 = 1, ‖x‖_2 = 1, ‖x‖_4 = 1 and ‖x‖_∞ = 1.]
2. Show that the closed unit ball is a convex set.
3. Show that φ(x) = (|ξ_1|^{1/2} + |ξ_2|^{1/2})² does not define a norm on the linear space of all ordered pairs x = {ξ_1, ξ_2} of real numbers. Sketch the curve φ(x) = 1 and compare it with figure 2.3.
[Fig. 2.3: the curve φ(x) = 1.]
4. Let ρ be the metric induced by a norm on a linear space E ≠ {0}. If ρ_1 is defined by
ρ_1(x, y) = 0 if x = y, and ρ_1(x, y) = 1 + ρ(x, y) if x ≠ y,
then prove that ρ_1 cannot be obtained from a norm on E.
5. Let E be a normed linear space and let X be a convex subset of E. Show that (i) the interior X⁰ and (ii) the closure X̄ of X are convex sets. Show also that if X⁰ ≠ ∅, then X̄ is the closure of X⁰.
2.2
Subspace, Closed Subspace
Since the normed linear space E is a special case of linear space, all
the concepts introduced in a linear space (e.g., linear dependence and
independence of elements, linear subspace, decomposition of E into direct
sums, etc.) have a relevance for E.
Definition: subspace
A subset X of a normed linear space E is called a subspace if
(i) X is a linear space with respect to the vector addition and scalar multiplication defined in E (1.3.2);
(ii) X is equipped with the norm ‖ · ‖_X induced by the norm ‖ · ‖ on E, i.e., ‖x‖_X = ‖x‖ for x ∈ X.
We may write this subspace (X, ‖ · ‖_X) simply as X.
Note 2.2.1. It is easy to see that X is a normed linear space. Furthermore, the metric defined on X by the norm coincides with the restriction to X of the metric defined on E by its norm. Therefore X is a subspace of the metric space E.
Definition: closed subspace
A subspace X of a normed linear space E is called a closed subspace of E if X is closed as a subset of the metric space E.
Definition: subspace of a Banach space
If E is a Banach space, a subspace X of E is called a subspace of the Banach space E.
Examples
1. The space c of convergent numerical sequences in ℝ (or ℂ) is a closed subspace of l_∞, the space of bounded numerical sequences in ℝ (ℂ).
2. c_0, the space of all sequences converging to 0, is a closed subspace of c.
3. The space P([0, 1]) is a subspace of C([0, 1]), but it is not closed. P([0, 1]) is spanned by the elements x_0 = 1, x_1 = t, . . . , x_n = tⁿ, . . .. Then P([0, 1]) is the set of all polynomials, and its closure is C([0, 1]) ≠ P([0, 1]).
2.2.1
Theorem (subspace of a Banach space)
A subspace X of a Banach space E is complete if and only if the set X
is closed in E.
Proof: Let X be complete, and let x be a limit point of X in E. Then every open ball B(x, 1/n) contains points of X (other than x). For each positive integer n, choose a point x_n of X, other than x, in B(x, 1/n). Thus {x_n} is a sequence in X such that
‖x_n − x‖ < 1/n for all n, i.e., lim x_n = x.
{x_n} is a convergent, hence Cauchy, sequence in E and therefore in X. X being complete, it follows that x ∈ X. This proves that X is closed.
On the other hand, let X be closed, in which case it contains all of its limit points. Let {x_n} be a Cauchy sequence in X. Since E is complete, {x_n} converges to some x ∈ E; x is either a point of X or a limit point of X, and X being closed, x ∈ X. Hence every Cauchy sequence in X converges to a point of X, i.e., X is complete.
Examples:
4. Consider the space, call it c_00, of sequences
x = (ξ_1, ξ_2, . . . , ξ_n, 0, 0, . . .) in ℝ (ℂ),
where ξ_n ≠ 0 for only finitely many values of n. Clearly, c_00 ⊆ c_0 ⊆ l_∞ and the closure of c_00 in (l_∞, ‖ · ‖_∞) is c_0. Thus c_00 is not closed in l_∞, and hence c_00 is an incomplete normed linear space equipped with the norm induced by the norm ‖ · ‖_∞ on l_∞.
5. For every real number p ≥ 1, we have l_p ⊆ c_0. It may be noted that c_0 is the closure of l_p in c_0 and l_p ≠ c_0. Thus l_p is not closed in c_0, and hence l_p is an incomplete normed linear space when equipped with the norm induced by the norm ‖ · ‖_∞ on c_0.
Problems
1. Show that the closure X of a subspace X of a normed linear space E
is again a subspace of E.
2. If n ≥ m ≥ 0, prove that Cⁿ([a, b]) ⊆ Cᵐ([a, b]) and that the space Cⁿ([a, b]) with the norm induced by the norm on Cᵐ([a, b]) is not closed.
3. Prove that the intersection of an arbitrary collection of nonempty closed subspaces of a normed linear space E is a closed subspace of E.
4. Show that c ⊆ l_∞ is a vector subspace of l_∞ and so is c_0.
5. Let X be a subspace of a normed linear space E. Then show that X is nowhere dense in E (i.e., the interior of the closure of X is empty) if and only if X̄ ≠ E.
6. Show that c is a nowhere dense subspace of m.
2.3
Finite Dimensional Normed Linear Spaces and Subspaces
Although infinite dimensional normed linear spaces are more general than finite dimensional normed linear spaces, finite dimensional normed linear spaces are more useful. This is because in application areas we consider the finite dimensional spaces as subspaces of infinite dimensional spaces. Quite a number of interesting results can be derived in the case of finite dimensional spaces.
2.3.1
Theorem
All finite-dimensional normed linear spaces of a given dimension n are isomorphic to the n-dimensional Euclidean space E_n, and are, consequently, isomorphic to each other.
Proof: Let E be an n-dimensional normed linear space and let e_1, e_2, . . . , e_n be a basis of the space. Then any element x ∈ E can be uniquely expressed in the form x = ξ_1 e_1 + ξ_2 e_2 + · · · + ξ_n e_n. Corresponding to x ∈ E, let us consider the element x̃ = {ξ_1, ξ_2, . . . , ξ_n} in the n-dimensional Euclidean space E_n. The correspondence established in this manner between x and x̃ is one-to-one. Moreover, let y ∈ E be of the form
y = η_1 e_1 + η_2 e_2 + · · · + η_n e_n.
Then y ∈ E is in one-to-one correspondence with ỹ = {η_1, η_2, . . . , η_n} ∈ E_n. It is apparent that x ↔ x̃ and y ↔ ỹ imply x + y ↔ x̃ + ỹ and αx ↔ αx̃, α ∈ ℝ (ℂ).
To prove that E and E_n are isomorphic, we go on to show that the linear mapping from E onto E_n is continuous in both directions.
For any x ∈ E, we have
‖x‖ = ‖ Σ_{i=1}^n ξ_i e_i ‖ ≤ Σ_{i=1}^n |ξ_i| ‖e_i‖
≤ ( Σ_{i=1}^n ‖e_i‖² )^{1/2} ( Σ_{i=1}^n |ξ_i|² )^{1/2}
= μ ‖x̃‖, where μ = ( Σ_{i=1}^n ‖e_i‖² )^{1/2}.
In particular, for all x, y ∈ E,
‖x − y‖_E ≤ μ ‖x̃ − ỹ‖_{E_n}.   (2.7)
Next we establish a reverse inequality. We note that the unit sphere S(0, 1) = {x̃ : ‖x̃‖ = 1} in E_n is compact. We next prove that the function
f(x̃) = f(ξ_1, . . . , ξ_n) = ‖x‖ = ‖ξ_1 e_1 + ξ_2 e_2 + · · · + ξ_n e_n‖
defined on S(0, 1) is continuous. Now, a continuous function defined on a compact set attains its extremum.
Since all the ξ_i's cannot vanish simultaneously on S and since e_1, e_2, . . . , e_n are linearly independent, f(ξ_1, ξ_2, . . . , ξ_n) > 0 on S.
Now, |f(ξ_1, ξ_2, . . . , ξ_n) − f(η_1, η_2, . . . , η_n)| = | ‖x‖ − ‖y‖ | ≤ ‖x − y‖_E ≤ μ ‖x̃ − ỹ‖_{E_n}.
The above shows that f is a continuous function.
Now, since the unit sphere S(0, 1) in E_n is compact and the function f(ξ_1, ξ_2, . . . , ξ_n) defined on it is continuous, it follows that f(ξ_1, . . . , ξ_n) attains a minimum on S. Hence
f(ξ_1, ξ_2, . . . , ξ_n) ≥ r on S, where r > 0.
Hence for any x̃ ≠ 0 in E_n, the point x̃/‖x̃‖ lies on S(0, 1), so that
f(x̃) = ‖x‖ = ‖x̃‖ · ‖ Σ_{i=1}^n (ξ_i/‖x̃‖) e_i ‖ ≥ r ‖x̃‖,
or in other words
‖x − y‖ ≥ r ‖x̃ − ỹ‖.   (2.8)
From (2.7) and (2.8) it follows that the mapping of E onto E_n is one-to-one and onto, and that both the mapping and its inverse are continuous. Thus, the mapping is a homeomorphism. The homeomorphism between E and E_n implies that in a finite dimensional normed linear space convergence in norm reduces to coordinatewise convergence, and that such a space is always complete.
The following lemma is useful in deriving various results. Very roughly speaking, it states that, with regard to a linearly independent set of vectors, we cannot find a linear combination that involves large scalars but represents a small vector.
2.3.2
Lemma (linear combination)
Let {e_1, e_2, . . . , e_n} be a linearly independent set of vectors in a normed linear space E (of any finite dimension). Then there is a number c > 0 such that for every choice of scalars α_1, α_2, . . . , α_n we have
‖α_1 e_1 + α_2 e_2 + · · · + α_n e_n‖ ≥ c(|α_1| + |α_2| + · · · + |α_n|).   (2.9)
Proof: Let S = |α_1| + |α_2| + · · · + |α_n|. If S = 0, all the α_j are zero, so that the above inequality holds for every c. Let S > 0. Writing β_i = α_i/S, (2.9) is equivalent to the following inequality:
‖β_1 e_1 + β_2 e_2 + · · · + β_n e_n‖ ≥ c.   (2.10)
Note that Σ_{i=1}^n |β_i| = 1.
Hence it suffices to prove the existence of a c > 0 such that (2.10) holds for every n-tuple of scalars β_1, β_2, . . . , β_n with Σ_{i=1}^n |β_i| = 1.
Suppose that this is false. Then there exists a sequence {y_m} of vectors
y_m = β_1^(m) e_1 + · · · + β_n^(m) e_n, with Σ_{i=1}^n |β_i^(m)| = 1,
such that ‖y_m‖ → 0 as m → ∞.
Since Σ_{i=1}^n |β_i^(m)| = 1, we have |β_i^(m)| ≤ 1. Hence for each fixed i the sequence
(β_i^(m)) = (β_i^(1), β_i^(2), . . .)
is bounded. Consequently, by the Bolzano–Weierstrass theorem, {β_1^(m)} has a convergent subsequence. Let β_1 denote the limit of that subsequence, and let {y_{1,m}} denote the corresponding subsequence of {y_m}. By the same argument, {y_{1,m}} has a subsequence {y_{2,m}} for which the corresponding subsequence of scalars β_2^(m) converges; let β_2 denote the limit. Continuing in this way, after n steps we obtain a subsequence {y_{n,m}} = {y_{n,1}, y_{n,2}, . . .} of {y_m} whose terms are of the form
y_{n,m} = Σ_{i=1}^n γ_i^(m) e_i, with Σ_{i=1}^n |γ_i^(m)| = 1 and scalars γ_i^(m) → β_i as m → ∞.
Hence
y_{n,m} → y = Σ_{i=1}^n β_i e_i,
where Σ_{i=1}^n |β_i| = 1, so that not all β_i can be zero.
Since {e_1, e_2, . . . , e_n} is a linearly independent set, we thus have y ≠ 0.
On the other hand, y_{n,m} → y implies ‖y_{n,m}‖ → ‖y‖, by the continuity of the norm. Since ‖y_m‖ → 0 by assumption and {y_{n,m}} is a subsequence of {y_m}, we must have ‖y_{n,m}‖ → 0. Hence ‖y‖ = 0, so that y = 0 by (a) of 2.1.1. This contradicts the fact that y ≠ 0, and the lemma is proved.
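The constant c of lemma 2.3.2 can be estimated numerically for a concrete basis. The following sketch (an added illustration; the nearly parallel basis e₁ = (1, 0), e₂ = (1, 0.1) is an arbitrary choice that makes c small but still positive) minimises ‖β₁e₁ + β₂e₂‖ over the set |β₁| + |β₂| = 1 by a grid search:

```python
import math

# A numerical look at lemma 2.3.2 in R^2 with the (non-orthogonal) basis
# e1 = (1, 0), e2 = (1, 0.1): minimise ||b1 e1 + b2 e2|| over the set
# |b1| + |b2| = 1.  The minimum c is strictly positive because e1, e2
# are linearly independent.

e1, e2 = (1.0, 0.0), (1.0, 0.1)

def norm_comb(b1, b2):
    v = (b1 * e1[0] + b2 * e2[0], b1 * e1[1] + b2 * e2[1])
    return math.hypot(v[0], v[1])

# Parametrise |b1| + |b2| = 1 by b1 = s, b2 = +/-(1 - |s|), s in [-1, 1].
c = min(
    norm_comb(s, sign * (1.0 - abs(s)))
    for sign in (1.0, -1.0)
    for s in [k / 10000.0 for k in range(-10000, 10001)]
)
assert c > 0.0
print(f"empirical constant c = {c:.4f} > 0")
```

The closer to linearly dependent the basis is, the smaller c becomes, which matches the intuition stated before the lemma.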
Using the above lemma we prove the following theorem.
2.3.3
Theorem (completeness)
Every nite dimensional subspace of a normed linear space E is
complete. In particular, every nite dimensional normed linear space is
complete.
Proof: Let us consider an arbitrary Cauchy sequence {y_m} in X, a subspace of E, and let the dimension of X be n. Let {e_1, e_2, . . . , e_n} be a basis for X. Then each y_m can be written in the form
y_m = ξ_1^(m) e_1 + ξ_2^(m) e_2 + · · · + ξ_n^(m) e_n.
Since {y_m} is a Cauchy sequence, for every ε > 0 there is an N such that ‖y_m − y_p‖ < ε when m, p > N. From this and lemma 2.3.2 we have, for some c > 0,
ε > ‖y_m − y_p‖ = ‖ Σ_{i=1}^n (ξ_i^(m) − ξ_i^(p)) e_i ‖ ≥ c Σ_{i=1}^n |ξ_i^(m) − ξ_i^(p)| for all m, p > N.
On division of both sides by c, we get
|ξ_i^(m) − ξ_i^(p)| ≤ Σ_{i=1}^n |ξ_i^(m) − ξ_i^(p)| < ε/c for all m, p > N.
This shows that {ξ_i^(m)} is a Cauchy sequence in ℝ (ℂ) for each i = 1, 2, . . . , n. Hence each such sequence converges; let ξ_i denote the limit. Using these n limits ξ_1, ξ_2, . . . , ξ_n, let us construct
y = ξ_1 e_1 + ξ_2 e_2 + · · · + ξ_n e_n.
Here y ∈ X and
‖y_m − y‖ = ‖ Σ_{i=1}^n (ξ_i^(m) − ξ_i) e_i ‖ ≤ Σ_{i=1}^n |ξ_i^(m) − ξ_i| ‖e_i‖.
Since ξ_i^(m) → ξ_i as m → ∞ for each i, y_m → y as m → ∞. This shows that {y_m} is convergent in X. Since {y_m} was an arbitrary Cauchy sequence in X, it follows that X is complete.
2.3.4
Theorem (closedness)
Every finite dimensional subspace X of a normed linear space E is closed in E.
Proof: By theorem 2.3.3, X is complete. X being a complete subspace of the normed linear space E, it follows from theorem 2.2.1 that X is closed in E.
2.3.5
Equivalent norms
A norm on a linear space E induces a topology, called the norm topology on E.
Definition 1: Two norms on a normed linear space are said to be equivalent if they induce the same norm topology, i.e., if any set open in one norm is also open in the other norm. Alternatively, we can express the concept in the following form:
Definition 2: Two norms ‖ · ‖ and ‖ · ‖′ on the same linear space E are said to be equivalent norms on E if the identity mapping I_E : (E, ‖ · ‖) → (E, ‖ · ‖′) is a topological homeomorphism of (E, ‖ · ‖) onto (E, ‖ · ‖′).
Theorem: Two norms ‖ · ‖ and ‖ · ‖′ on the same normed linear space E are equivalent if and only if there exist positive constants α_1 and α_2 such that
α_1 ‖x‖ ≤ ‖x‖′ ≤ α_2 ‖x‖ for all x ∈ E.
Proof: In view of definition 2 above, we know that ‖ · ‖ and ‖ · ‖′ are equivalent norms on E
⇔ the identity mapping I_E is a topological isomorphism of (E, ‖ · ‖) onto (E, ‖ · ‖′)
⇔ there exist constants α_1 > 0 and α_2 > 0 such that α_1 ‖x‖ ≤ ‖I_E x‖′ ≤ α_2 ‖x‖ for all x ∈ E,
i.e., α_1 ‖x‖ ≤ ‖x‖′ ≤ α_2 ‖x‖ for all x ∈ E.
This completes the proof.
Note 2.3.1. The relation of norm equivalence is an equivalence relation among the norms on E. The special feature of a finite dimensional normed linear space is that all norms on the space are equivalent, or in other words, all norms on E lead to the same topology for E.
2.3.6
Theorem (equivalent norms)
On a finite dimensional normed linear space any norm ‖ · ‖ is equivalent to any other norm ‖ · ‖′.
Proof: Let E be an n-dimensional normed linear space and let {e_1, e_2, . . . , e_n} be any basis in E. Then every x ∈ E can be written as
x = ξ_1 e_1 + ξ_2 e_2 + · · · + ξ_n e_n   (2.11)
for some scalars ξ_1, ξ_2, . . . , ξ_n. Then by lemma 2.3.2 we can find a constant c > 0 such that
‖x‖ = ‖ Σ_{i=1}^n ξ_i e_i ‖ ≥ c Σ_{i=1}^n |ξ_i|.   (2.12)
On the other hand,
‖x‖′ = ‖ Σ_{i=1}^n ξ_i e_i ‖′ ≤ Σ_{i=1}^n |ξ_i| ‖e_i‖′ ≤ k_1 Σ_{i=1}^n |ξ_i|,   (2.13)
where k_1 = max_i ‖e_i‖′. Hence, combining (2.12) and (2.13),
‖x‖′ ≤ α_2 ‖x‖, where α_2 = k_1/c.   (2.14)
Interchanging the roles of ‖ · ‖ and ‖ · ‖′, we obtain as above
‖x‖′ ≥ α_1 ‖x‖, where α_1 = c′/k_2, k_2 = max_i ‖e_i‖,
and c′ is the constant supplied by lemma 2.3.2 for the norm ‖ · ‖′.
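For the standard norms on ℝⁿ the equivalence constants can be written down explicitly; for example ‖x‖_∞ ≤ ‖x‖_2 ≤ √n ‖x‖_∞. The following sketch (an added illustration with arbitrary samples) checks these constants on ℝ³:

```python
import math
import random

# Theorem 2.3.6 in action on R^3: the norms ||x||_2 and ||x||_inf are
# equivalent, with explicit constants
#   ||x||_inf <= ||x||_2 <= sqrt(3) ||x||_inf.

def norm2(x):
    return math.sqrt(sum(v * v for v in x))

def norm_inf(x):
    return max(abs(v) for v in x)

random.seed(4)
for _ in range(1000):
    x = [random.uniform(-10, 10) for _ in range(3)]
    assert norm_inf(x) <= norm2(x) + 1e-12
    assert norm2(x) <= math.sqrt(3) * norm_inf(x) + 1e-12
print("alpha_1 = 1 and alpha_2 = sqrt(3) work for all samples")
```

By problem 3 below, the two norms therefore have the same Cauchy sequences.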
Problems
1. Let E1 be a closed subspace and E2 be a nite dimensional subspace
of a normed linear space E. Then show that E1 + E2 is closed in E.
2. Show that equivalent norms on a vector space E induce the same topology on E.
3. If ‖ · ‖ and ‖ · ‖′ are equivalent norms on a normed linear space E, show that the Cauchy sequences in (E, ‖ · ‖) and (E, ‖ · ‖′) are the same.
4. Show that a nite dimensional normed linear space is separable. (A
normed linear space is said to be separable if it is separable as a metric
space.)
5. Show that a subset L = {u1, u2, ..., un} of a normed linear space E is
linearly independent if and only if for every x ∈ span L, there exists
a unique (α1, α2, ..., αn) ∈ ℝⁿ (ℂⁿ) such that x = α1 u1 + α2 u2 + ··· + αn un.
6. Show that a Banach space is finite dimensional if and only if every
subspace is closed.
7. Let 1 ≤ p ≤ ∞. Prove that a unit ball in lp is convex, closed and
bounded but not compact.
8. Let V be the vector space generated by the functions
1, sin x, sin² x, sin³ x, ... defined on [0, 1]. That is, f ∈ V if and only
if there are a nonnegative integer k and real numbers α0, α1, ..., αk
(all depending on f) such that f(x) = Σ_{n=0}^k αn sinⁿ x for each x ∈ [0, 1].
Show that V is an algebra and V is dense in C([0, 1]) with respect to
the uniform metric. (A vector space V of real-valued functions is called
an algebra of functions whenever the product of any two functions
in V is in V (Aliprantis and Burkinshaw [1]).)
9. Let E1 be a compact subset and E2 be a closed subset of a normed
linear space such that E1 ∩ E2 = ∅. Then show that (E1 + B(0, r)) ∩ E2 = ∅
for some r > 0.
10. Let 1 ≤ p ≤ ∞. Show that the closed unit ball in lp is convex, closed
and bounded, but not compact.
11. Let E denote the linear space of all polynomials in one variable with
coefficients in ℝ (ℂ). For p ∈ E with p(t) = α0 + α1 t + α2 t² + ··· + αn tⁿ,
let

‖p‖ = sup{|p(t)| : 0 ≤ t ≤ 1},
‖p‖1 = |α0| + |α1| + ··· + |αn|,
‖p‖∞ = max{|α0|, |α1|, ..., |αn|}.

Then show that ‖·‖, ‖·‖1, ‖·‖∞ are norms on E, and that ‖p‖ ≤ ‖p‖1 and
‖p‖∞ ≤ ‖p‖1 for all p ∈ E.
12. Show that equivalent norms on a linear space E induce the same
topology for E.
13. If two norms ‖·‖ and ‖·‖0 on a linear space are equivalent, show
that (i) ‖xn − x‖ → 0 implies (ii) ‖xn − x‖0 → 0 (and vice versa).
2.3.7 Riesz's lemma
Let Y and Z be subspaces of a normed linear space X, and suppose that
Y is closed and is a proper subset of Z. Then for every real number θ in
the interval (0, 1) there is a z ∈ Z such that ‖z‖ = 1 and ‖z − y‖ ≥ θ for all
y ∈ Y.
Proof: Take any v ∈ Z − Y and denote its distance from Y by d (fig. 2.4), i.e.,

d = inf_{y∈Y} ‖v − y‖.
Fig. 2.4
Clearly, d > 0 since Y is closed. We now take any θ ∈ (0, 1). By the
definition of an infimum there is a y0 ∈ Y such that

d ≤ ‖v − y0‖ ≤ d/θ    (2.15)

(note that d/θ > d since 0 < θ < 1).
Let z = c(v − y0), where c = 1/‖v − y0‖.
Then ‖z‖ = 1, and we shall show that ‖z − y‖ ≥ θ for every y ∈ Y.
Now,

‖z − y‖ = ‖c(v − y0) − y‖ = c‖(v − y0) − c⁻¹y‖ = c‖v − y1‖,

where y1 = y0 + c⁻¹y.
The form of y1 shows that y1 ∈ Y. Hence ‖v − y1‖ ≥ d, by the definition
of d. Writing c out and using (2.15), we have

‖z − y‖ = c‖v − y1‖ ≥ cd = d/‖v − y0‖ ≥ d/(d/θ) = θ.

Since y ∈ Y was arbitrary, this completes the proof.
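Riesz's lemma can be illustrated in a concrete case. In ℝ³ take Y = span{e1} (closed) as a proper subspace of Z = span{e1, e2}; then z = e2 satisfies ‖z‖ = 1 and ‖z − y‖ ≥ 1 for every y ∈ Y, so any θ ∈ (0, 1) works. A Python sketch (the helper name is ours):

```python
import math

def dist_to_span_e1(z):
    """Euclidean distance from z in R^3 to Y = span{e1} = {(t, 0, 0)}."""
    # the nearest point on Y is (z[0], 0, 0)
    return math.sqrt(z[1] ** 2 + z[2] ** 2)

z = (0.0, 1.0, 0.0)  # z = e2 lies in Z = span{e1, e2} and has norm 1
assert abs(math.sqrt(sum(t * t for t in z)) - 1.0) < 1e-12
# ||z - y|| >= 1 for every y in Y, so the conclusion holds for any theta in (0, 1)
assert dist_to_span_e1(z) >= 1.0
```

In infinite dimensions the strict gap θ < 1 in the lemma is in general unavoidable, as problem 3 below shows.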
2.3.8 Lemma
Let Ex and Ey be Banach spaces, let A : Ex → Ey be compact, and let R(A) be
closed in Ey; then the range of A is finite dimensional.
Proof: Let, if possible, {z1, z2, ...} be an infinite linearly independent
subset of R(A) and let Zn = span{z1, z2, ..., zn}, n = 1, 2, ....
Zn is finite dimensional and is therefore a closed subspace of Z_{n+1}. Also,
Zn ≠ Z_{n+1}, since {z1, z2, ..., z_{n+1}} is linearly independent. By the Riesz
lemma (2.3.7), there is a yⁿ ∈ Z_{n+1} such that

‖yⁿ‖ = 1 and dist(yⁿ, Zn) ≥ 1/2.

Now, {yⁿ} is a sequence in {y ∈ R(A) : ‖y‖ ≤ 1} having no convergent
subsequence, because ‖yⁿ − yᵐ‖ ≥ 1/2 for all m ≠ n. Hence the set
{y ∈ R(A) : ‖y‖ ≤ 1} cannot be compact. But R(A), being a closed subspace
of Ey, is a Banach space, and the compactness of A together with the open
mapping theorem applied to A : Ex → R(A) forces this set to be compact.
This contradiction shows that R(A) is finite dimensional.
Problems
1. Prove that if E is a finite dimensional normed linear space and X
is a proper subspace, there exists a point on the unit ball of E at a
distance 1 from X.
2. Let E be a normed linear space. Show that the Riesz lemma with
θ = 1 holds if and only if for every closed proper subspace X of E,
there are x ∈ E and y0 ∈ X such that ‖x − y0‖ = dist(x, X) > 0.
3. Let E = {x ∈ C([0, 1]) : x(0) = 0} with the sup norm, and let

X = {x ∈ E : ∫₀¹ x(t) dt = 0}.

Then show that X is a proper closed subspace of E. Also show that
there is no x ∈ E with ‖x‖ = 1 and dist(x, X) = 1.
2.4 Quotient Spaces
In this section we consider a useful method of constructing a new Banach
space from a given Banach space. Earlier, in section 1.3.7, we constructed
a quotient space over a linear space. Because a Banach space is also a linear
space, we can similarly construct a quotient space by introducing a norm
consistent with the norm of the given Banach space.
2.4.1 Theorem
Let E be a normed linear space over ℝ (ℂ) and let L be a closed subspace
of E. Define ‖·‖q : E/L → ℝ by

‖x + L‖q = inf{‖x + m‖ : m ∈ L}.

Then (E/L, ‖·‖q) is a normed linear space. Furthermore, if E is a
Banach space, then E/L is a Banach space.
Proof: We first show that ‖·‖q defines a norm on E/L. We note first that

‖x + L‖q ≥ 0 for all x ∈ E.

Next, if x + L = L, then ‖x + L‖q = ‖θ + L‖q = 0.
Conversely, let ‖x + L‖q = 0 for some x ∈ E. Then there exists a
sequence {mk} ⊂ L such that

lim_{k→∞} ‖x + mk‖ = 0, i.e. mk → −x in E as k → ∞.

Now −x ∈ L as L is closed, and hence x ∈ L.
Hence x + L = L.
Thus ‖x + L‖q = 0 ⟺ x + L = L.
Further, for x, y ∈ E, we have

‖(x + L) + (y + L)‖q = ‖(x + y) + L‖q
= inf{‖(x + y) + m‖ : m ∈ L}
= inf{‖(x + m1) + (y + m2)‖ : m1, m2 ∈ L}
≤ inf{‖x + m1‖ : m1 ∈ L} + inf{‖y + m2‖ : m2 ∈ L}
= ‖x + L‖q + ‖y + L‖q.

This proves the triangle inequality.
Now, for x ∈ E and α ∈ ℝ (ℂ) with α ≠ 0, we have

‖α(x + L)‖q = ‖αx + L‖q
= inf{‖αx + m‖ : m ∈ L}
= inf{|α| ‖x + m′‖ : m′ = m/α ∈ L}
= |α| inf{‖x + m′‖ : m′ ∈ L}
= |α| ‖x + L‖q.

Thus we conclude that (E/L, ‖·‖q) is a normed linear space.
We next suppose that E is a Banach space. Let {xn + L} be a
Cauchy sequence in E/L.
We shall show that {xn + L} contains a convergent subsequence {xnk + L}.
Choose the subsequence {xnk + L} such that

‖(xn2 + L) − (xn1 + L)‖q < 1/2,
‖(xn3 + L) − (xn2 + L)‖q < 1/2²,
⋮
‖(xn_{k+1} + L) − (xn_k + L)‖q < 1/2ᵏ.

Let us choose any vector y1 ∈ xn1 + L. Next choose y2 ∈ xn2 + L such
that ‖y2 − y1‖ < 1/2. We then find y3 ∈ xn3 + L such that ‖y3 − y2‖ < 1/2².
Proceeding in this way, we get a sequence {yk} in E such that

xn_k + L = yk + L and ‖y_{k+1} − yk‖ < 1/2ᵏ (k = 1, 2, ...).

Then for p = 1, 2, ...

‖y_{k+p} − yk‖ ≤ Σ_{i=1}^p ‖y_{k+i} − y_{k+i−1}‖ < Σ_{i=1}^p 1/2^{k+i−1} = 1/2^{k−1} − 1/2^{k+p−1} < 1/2^{k−1}.

Therefore, it follows that {yk} is a Cauchy sequence in E. Because E is
complete, there exists y ∈ E such that lim_{k→∞} ‖yk − y‖ = 0. Since

‖(xn_k + L) − (y + L)‖q = ‖(yk + L) − (y + L)‖q
= ‖(yk − y) + L‖q ≤ ‖yk − y‖ → 0,

it follows that

lim_{k→∞} (xn_k + L) = y + L ∈ E/L.

Hence {xn + L} has a subsequence {xn_k + L} that converges to some
element in E/L. Then

‖(xn + L) − (y + L)‖q ≤ ‖(xn + L) − (xn_k + L)‖q + ‖(xn_k + L) − (y + L)‖q
= ‖(xn − xn_k) + L‖q + ‖(xn_k − y) + L‖q → 0 as n, k → ∞.

Hence the Cauchy sequence {xn + L} converges in E/L and thus E/L is
complete.
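The quotient norm can be computed explicitly in a toy case: in ℝ² with the Euclidean norm and L the x-axis (a closed subspace), ‖x + L‖q = inf{‖x + m‖ : m ∈ L} is attained at m = (−x1, 0) and equals |x2|. The Python sketch below (illustrative; quotient_norm is our own name) checks the norm axioms on sample cosets:

```python
def quotient_norm(x):
    """||x + L||_q in R^2 with L = {(t, 0)}: inf over m in L of ||x + m||.
    The infimum is attained at m = (-x[0], 0), giving |x[1]|."""
    return abs(x[1])

# norm axioms, checked on a few sample cosets
x, y = (3.0, 4.0), (-1.0, 2.0)
assert quotient_norm(x) >= 0
assert quotient_norm((5.0, 0.0)) == 0.0  # x in L  =>  x + L = L
# triangle inequality: ||(x + y) + L||_q <= ||x + L||_q + ||y + L||_q
assert quotient_norm((x[0] + y[0], x[1] + y[1])) <= quotient_norm(x) + quotient_norm(y)
# homogeneity: ||alpha(x + L)||_q = |alpha| ||x + L||_q
alpha = -2.5
assert quotient_norm((alpha * x[0], alpha * x[1])) == abs(alpha) * quotient_norm(x)
```

Geometrically, the quotient norm of a coset x + L is the distance from x to the subspace L.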
Problems
1. Let M be a closed subspace of a normed linear space E. Prove that
the quotient mapping x ↦ x + M of E onto the quotient space E/M
is continuous and that it maps open subsets of E onto open subsets
of E/M.
2. Let X1 = (X1, ‖·‖1) and X2 = (X2, ‖·‖2) be Banach spaces over the
same scalar field ℝ (ℂ). Let X = X1 × X2 be the Cartesian product
of X1 and X2. Then show that X is a linear space over ℝ (ℂ). Prove
that the following is a norm on X:

‖(x1, x2)‖ = max{‖x1‖1, ‖x2‖2}.
3. Let M and N be subspaces of a linear space E and let E = M + N .
Show that the mapping y ↦ y + M, which sends each y in N to y + M
in E/M, is an isomorphism of N onto E/M.
4. Let M be a closed subspace of a normed space E. Prove that if E is
separable, then E/M is separable (A space E is separable if it has a
denumerable everywhere dense set).
5. If E is a normed vector space and M ⊆ E is a Banach space, then
show that if E/M is a Banach space, E itself is a Banach space.
2.5 Completion of Normed Spaces
Definition: Let E1 and E2 be normed linear spaces over ℝ (ℂ).
(i) A mapping T : E1 → E2 (not necessarily linear) is said to be an
isometry if it preserves norms, i.e., if

‖Tx‖_{E2} = ‖x‖_{E1} for all x ∈ E1.

Such an isometry is said to imbed E1 into E2.
(ii) Two spaces E1 and E2 are said to be isometric if there exists a
one-one (bijective) isometry of E1 onto E2. The spaces E1 and E2
are then called isometric spaces.
Theorem 2.5.1. Let E1 = (E1, ‖·‖_{E1}) be a normed linear space. Then
there exist a Banach space Ê2 and an isometry T from E1 onto a subspace E2
of Ê2 which is dense in Ê2. The space Ê2 is unique, except for isometries.
Proof: Theorem 1.4.25 implies the existence of a complete metric space
X̂2 = (X̂2, ρ̂2) and an isometry T : X1 → X2 = T(X1), where X2 is dense in
X̂2 and X̂2 is unique, except for isometries. In order to prove the theorem,
we need to make X̂2 a linear space Ê2 and then introduce a suitable norm
on Ê2.
To define on X̂2 the two algebraic operations of a linear space, we
consider any x̂, ŷ ∈ X̂2 and any representatives {xn} ∈ x̂ and {yn} ∈ ŷ.
We recall that x̂, ŷ are equivalence classes of Cauchy sequences in E1.
We set zn = xn + yn. Then {zn} is a Cauchy sequence in E1 since

‖zn − zm‖ = ‖xn + yn − (xm + ym)‖ ≤ ‖xn − xm‖ + ‖yn − ym‖.

We define the sum ẑ = x̂ + ŷ of x̂ and ŷ to be the equivalence class of
which {zn} is a representative; thus {zn} ∈ ẑ. This definition is independent
of the particular choices of Cauchy sequences belonging to x̂ and ŷ. Indeed,
if {xn} ∼ {xn′} and {yn} ∼ {yn′}, then {xn + yn} ∼ {xn′ + yn′},
because

‖xn + yn − (xn′ + yn′)‖ ≤ ‖xn − xn′‖ + ‖yn − yn′‖.

Similarly, we define the product αx̂ ∈ X̂2 of a scalar α and x̂ to be
the equivalence class for which {αxn} is a representative. Moreover, this
definition is independent of the particular choice of a representative of x̂. The
zero element of X̂2 is the equivalence class containing all Cauchy sequences
that converge to zero. We thus see that these algebraic operations have
all the properties required by the definition of a linear space, and therefore
X̂2 is a linear space. Let us call it the normed linear space Ê2. From the
definition it follows that on X2 [see theorem 1.4.25] the operations of a linear
space induced from Ê2 agree with those induced from E1 by means of T.
We regard X2, a subspace of Ê2, as E2.
Furthermore, T induces on E2 a norm ‖·‖1, the value of which at every
ŷ = Tx ∈ E2 is ‖ŷ‖1 = ‖x‖. The corresponding metric on E2 is the
restriction of ρ̂2 to E2 since T is isometric. We can extend the norm ‖·‖1
to Ê2 by setting ‖x̂‖2 = ρ̂2(0̂, x̂) for every x̂ ∈ Ê2. It is clear that ‖·‖2
satisfies axiom (a) of subsection 2.1.1 and that the other two axioms (b)
and (c) of the above follow from those for ‖·‖1 by a limiting process.
The space Ê2 constructed as above is sometimes called the completion
of the normed linear space E1.
Definition (completion): A completion of a normed linear space E1 is any
Banach space that contains a dense subspace isometric to E1.
Theorem 2.5.2. All completions of a normed linear space are isometric.
Proof: Let, if possible, Ẽ and Ẽ′ be two completions of a normed linear
space E. In particular, we assume that Ẽ and Ẽ′ are complete and both
contain E as a dense subset. We now define an isometry T between Ẽ and
Ẽ′. For each x̃ ∈ Ẽ, since E is dense in Ẽ, there exists a sequence {xn} of
points of E converging to x̃. But we may also consider {xn} as a Cauchy
sequence in Ẽ′, and Ẽ′ being complete, it must converge to some x̃′ ∈ Ẽ′.
Define T x̃ = x̃′ by this construction. In what follows we will show that this
construction is independent of the particular sequence {xn} converging to x̃
and gives a one-to-one mapping of Ẽ onto Ẽ′. Clearly Tx = x for all x ∈ E.
Now, if {xn} → x̃ in Ẽ and xn → x̃′ in Ẽ′, then

‖x̃‖_Ẽ = lim_{n→∞} ‖xn‖ and ‖x̃′‖_Ẽ′ = lim_{n→∞} ‖xn‖.

Thus ‖T x̃‖_Ẽ′ = ‖x̃‖_Ẽ. Hence T is isometric.
Corollary 2.5.1. The space Ẽ in theorem 2.5.2 is unique except for
isometries.
Example: The completion of the normed linear space (P[a, b], ‖·‖), where
P[a, b] is the set of all polynomials with real coefficients defined on the
closed interval [a, b], is the space (C([a, b]), ‖·‖).
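This example rests on the Weierstrass approximation theorem; one concrete route to it uses Bernstein polynomials, which converge uniformly on [0, 1] to any continuous function. A Python sketch (illustrative only; the book does not construct the approximants this way):

```python
import math

def bernstein(f, n, t):
    """Value at t of the n-th Bernstein polynomial of f on [0, 1]."""
    return sum(f(k / n) * math.comb(n, k) * t ** k * (1 - t) ** (n - k)
               for k in range(n + 1))

def sup_error(f, n, samples=101):
    """Approximate sup-norm distance between B_n f and f on [0, 1]."""
    pts = [i / (samples - 1) for i in range(samples)]
    return max(abs(bernstein(f, n, t) - f(t)) for t in pts)

f = math.sin
# the polynomials B_n f converge to f uniformly as the degree grows
assert sup_error(f, 64) < sup_error(f, 4) < sup_error(f, 1)
assert sup_error(f, 64) < 0.01
```

Each B_n f is a polynomial, so the sequence exhibits polynomials that are uniformly close to f, illustrating the density of P[a, b] in (C([a, b]), ‖·‖).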
CHAPTER 3
HILBERT SPACE
A Hilbert space is a Banach space endowed with a dot product or scalar
product. A normed linear space has a norm, or the concept of distance,
but does not admit the concept of the angle between two elements or
two vectors. An inner product space admits both concepts: the concept
of distance, or norm, and the concept of orthogonality, in other words,
the angle between two vectors. Just as a complete normed
linear space is called a Banach space, a complete inner product space is
called a Hilbert space. An inner product space is a generalisation of the
n-dimensional Euclidean space to infinite dimensions.
The whole theory was initiated by the work of D. Hilbert (1912)
[24] on integral equations. The currently used geometrical notation and
terminology is analogous to that of Euclidean geometry and was coined by
E. Schmidt (1908) [50]. These spaces have up to now been the most useful
spaces in practical applications of functional analysis.
3.1 Inner Product Space, Hilbert Space
3.1.1 Definition: inner product space, Hilbert space
An inner product space (pre-Hilbert space) is a linear (vector) space
H with an inner product defined on H. A Hilbert space is a complete
inner product space (complete in the metric defined by the inner product)
(cf. (3.2) below). Here an inner product on H is a mapping of H × H into
the scalar field K (ℝ or ℂ) of H; that is, with every pair of elements x and
y there is associated a scalar which is written ⟨x, y⟩ and is called the inner
product (or scalar product) of x and y, such that for all elements x, y and
z and every scalar α we have
(a) ⟨x + y, z⟩ = ⟨x, z⟩ + ⟨y, z⟩
(b) ⟨αx, y⟩ = α⟨x, y⟩
(c) ⟨x, y⟩ = the complex conjugate of ⟨y, x⟩
(d) ⟨x, x⟩ ≥ 0, and ⟨x, x⟩ = 0 ⟺ x = θ.
An inner product on H defines a norm on H given by

‖x‖ = ⟨x, x⟩^{1/2} ≥ 0    (3.1)

and a metric on H given by

ρ(x, y) = ‖x − y‖ = √⟨x − y, x − y⟩.    (3.2)

Hence inner product spaces are normed linear spaces, and Hilbert
spaces are Banach spaces.
In (c) the bar denotes complex conjugation. In case

x = (ξ1, ξ2, ..., ξn, ...) and y = (η1, η2, ..., ηn, ...),

⟨x, y⟩ = Σ_{i=1}^∞ ξi η̄i.    (3.3)

In case H is a real linear space,

⟨x, y⟩ = ⟨y, x⟩.

The proof that (3.1) satisfies the axioms (a) to (d) of a norm [see 2.1] will
be given in section 3.2.
From (a) to (d) we obtain the formulae

(a′) ⟨αx + βy, z⟩ = α⟨x, z⟩ + β⟨y, z⟩ for all scalars α, β,
(b′) ⟨x, αy⟩ = ᾱ⟨x, y⟩,
(c′) ⟨x, y + z⟩ = ⟨x, y⟩ + ⟨x, z⟩.    (3.4)
3.1.2 Observation
It follows from (a′) that the inner product is linear in the first argument,
while (b′) shows that the inner product is conjugate linear in the second
argument. Consequently, the inner product is sesquilinear, which means
"1½ times linear".
3.1.3 Examples
1. ℝⁿ, the n-dimensional Euclidean space
The space ℝⁿ is a Hilbert space with inner product defined by

⟨x, y⟩ = Σ_{i=1}^n ξi ηi    (3.5)

where x = (ξ1, ξ2, ..., ξn) and y = (η1, η2, ..., ηn).
In fact, from (3.5) it follows that

‖x‖ = ⟨x, x⟩^{1/2} = (Σ_{i=1}^n ξi²)^{1/2}.

The metric induced by the norm takes the form

ρ(x, y) = ‖x − y‖ = ⟨x − y, x − y⟩^{1/2} = {(ξ1 − η1)² + ··· + (ξn − ηn)²}^{1/2}.

The completeness was established in 1.4.16.
2. Unitary space ℂⁿ
The unitary space ℂⁿ is a Hilbert space with inner product defined by

⟨x, y⟩ = Σ_{i=1}^n ξi η̄i    (3.6)

where x = (ξ1, ξ2, ..., ξn) and y = (η1, η2, ..., ηn).
From (3.6) we obtain the norm defined by

‖x‖ = ⟨x, x⟩^{1/2} = (Σ_{i=1}^n |ξi|²)^{1/2}.

The metric induced by the norm is given by

ρ(x, y) = ‖x − y‖ = (Σ_{i=1}^n |ξi − ηi|²)^{1/2}.

Completeness was shown in 1.4.16.
Note 3.1.1. In (3.6) we take the conjugate η̄i so that ⟨y, x⟩ is the
complex conjugate of ⟨x, y⟩, which is the requirement of condition (c),
and so that ⟨x, x⟩ is real.
3. Space l2
l2 is a Hilbert space with inner product defined by

⟨x, y⟩ = Σ_{i=1}^∞ ξi η̄i    (3.7)

where

x = (ξ1, ξ2, ..., ξn, ...) ∈ l2 and y = (η1, η2, ..., ηn, ...) ∈ l2.

Since x, y ∈ l2,

Σ_{i=1}^∞ |ξi|² < ∞ and Σ_{i=1}^∞ |ηi|² < ∞.

By the Cauchy-Bunyakovsky-Schwartz inequality [see theorem 1.4.3]
we have

|⟨x, y⟩| = |Σ_{i=1}^∞ ξi η̄i| ≤ (Σ_{i=1}^∞ |ξi|²)^{1/2} (Σ_{i=1}^∞ |ηi|²)^{1/2} < ∞.    (3.8)

From (3.7) we obtain the norm defined by

‖x‖ = ⟨x, x⟩^{1/2} = (Σ_{i=1}^∞ |ξi|²)^{1/2}.

Using the metric induced by the norm, for l2 we see that all the axioms
3.1.1(a)-(d) are fulfilled.
4. Space L2([a, b])
The inner product is defined by

⟨x, y⟩ = ∫ₐᵇ x(t)y(t) dt    (3.9)

for x(t), y(t) ∈ L2([a, b]), i.e., x(t), y(t) are Riemann square integrable.
The norm is then

‖x‖ = ⟨x, x⟩^{1/2} = (∫ₐᵇ x(t)² dt)^{1/2}, where x(t) ∈ L2([a, b]).

Using the metric induced by the norm we can show that L2([a, b]) is a
Hilbert space.
In the above x(t) is a real-valued function.
In case x(t) and y(t) are complex-valued functions we can define the
inner product ⟨x, y⟩ as

⟨x, y⟩ = ∫ₐᵇ x(t) ȳ(t) dt,

with the norm given by

‖x‖ = (∫ₐᵇ |x(t)|² dt)^{1/2},

because x(t) x̄(t) = |x(t)|².
Note 3.1.2. l2 is the prototype of the Hilbert space. It was introduced
and studied by D. Hilbert in his work on integral equations. The axiomatic
definition of a Hilbert space was given by J. von Neumann in a paper on
the mathematical foundation of quantum mechanics [30].
3.2 Cauchy-Bunyakovsky-Schwartz Inequality
3.2.1 Lemma (Cauchy-Bunyakovsky-Schwartz (CBS) inequality)
If x, y are two elements of an inner product space, then

|⟨x, y⟩| ≤ ‖x‖ ‖y‖.    (3.10)

The equality occurs if and only if {x, y} is a linearly dependent set.
Proof: If y = θ, then ⟨x, y⟩ = ⟨x, 0y⟩ = 0 and the conclusion is clear;
the case x = θ is similar. Let x ≠ θ and y ≠ θ. Then, for any scalar λ,
we have

0 ≤ ‖y + λx‖² = ⟨y + λx, y + λx⟩ = ⟨y, y⟩ + λ⟨x, y⟩ + λ̄⟨y, x⟩ + |λ|²⟨x, x⟩.

Let

λ = −⟨y, x⟩ / ⟨x, x⟩.

Then the above inequality reduces to

⟨y, y⟩ − |⟨x, y⟩|² / ⟨x, x⟩ ≥ 0, or |⟨x, y⟩| ≤ ‖x‖ ‖y‖.
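The inequality, together with the equality case for linearly dependent pairs, can be spot-checked numerically in ℂ⁴ using the inner product (3.7) restricted to finitely many coordinates. A Python sketch (helper names ours):

```python
import random

def inner(x, y):
    # <x, y> = sum_i x_i * conj(y_i), cf. (3.7), here on C^4
    return sum(a * b.conjugate() for a, b in zip(x, y))

def norm(x):
    return abs(inner(x, x)) ** 0.5

random.seed(1)
for _ in range(500):
    x = [complex(random.gauss(0, 1), random.gauss(0, 1)) for _ in range(4)]
    y = [complex(random.gauss(0, 1), random.gauss(0, 1)) for _ in range(4)]
    # CBS inequality |<x, y>| <= ||x|| ||y||
    assert abs(inner(x, y)) <= norm(x) * norm(y) + 1e-9
    # equality when {x, y} is linearly dependent, e.g. y = (2 - 1j) x
    z = [(2 - 1j) * a for a in x]
    assert abs(abs(inner(x, z)) - norm(x) * norm(z)) < 1e-9
```

The random trials illustrate the strict inequality for "generic" pairs and exact equality along one-dimensional subspaces.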
Note 3.2.1. By using the Cauchy-Bunyakovsky-Schwartz inequality we can
show that the norm defined by the scalar product of an inner product space
[cf. (3.1)] satisfies all the axioms of a normed linear space.
For x, y belonging to an inner product space, we obtain

‖x + y‖² = ⟨x + y, x + y⟩ = ⟨x, x⟩ + ⟨x, y⟩ + ⟨y, x⟩ + ⟨y, y⟩
= ‖x‖² + 2 Re ⟨x, y⟩ + ‖y‖²,

since ⟨y, x⟩ is the conjugate of ⟨x, y⟩. The Cauchy-Bunyakovsky-Schwartz
inequality, |⟨x, y⟩| ≤ ‖x‖ ‖y‖, reduces the above to

‖x + y‖² ≤ ‖x‖² + 2 ‖x‖ ‖y‖ + ‖y‖² = (‖x‖ + ‖y‖)².

Hence ‖x + y‖ ≤ ‖x‖ + ‖y‖, which is the triangular inequality of a
normed linear space.
Thus, the distance introduced by the norm satisfies all the axioms of a
metric space.
Thus a Hilbert space ⟹ a Banach space ⟹ a complete metric space.
3.3 Parallelogram Law
3.3.1 Parallelogram law
The parallelogram law states that the norm induced by a scalar product
satisfies

‖x + y‖² + ‖x − y‖² = 2(‖x‖² + ‖y‖²),    (3.11)

where x, y are elements of an inner product space.
Proof: ‖x + y‖² = ⟨x + y, x + y⟩ = ⟨x, x⟩ + ⟨x, y⟩ + ⟨y, x⟩ + ⟨y, y⟩,
‖x − y‖² = ⟨x − y, x − y⟩ = ⟨x, x⟩ − ⟨x, y⟩ − ⟨y, x⟩ + ⟨y, y⟩.
Therefore, ‖x + y‖² + ‖x − y‖² = 2(‖x‖² + ‖y‖²).
The term parallelogram equality is suggested by elementary geometry,
as we shall see from the figure below. Since the norm stands for the length
of a vector, the parallelogram law states an important property of a
parallelogram, i.e., the sum of the squares of the lengths of the diagonals is
equal to twice the sum of the squares of the lengths of the sides.
Thus the parallelogram law generalises a known property of elementary
geometry to an inner product space.

Fig. 3.1 Parallelogram with sides x and y in the plane
In what follows we give some examples of normed linear spaces which
are not Hilbert spaces.
3.3.2 Space lp
The space lp with p ≠ 2 is not an inner product space, hence not a
Hilbert space. We show that the norm of lp with p ≠ 2 cannot be
obtained from an inner product, by showing that the norm does not
satisfy the parallelogram law. Let us take x = (1, 1, 0, 0, ...) ∈ lp and
y = (1, −1, 0, 0, ...) ∈ lp. Then ‖x‖ = 2^{1/p} and ‖y‖ = 2^{1/p}. Now
x + y = (2, 0, 0, ...), so ‖x + y‖ = 2; again x − y = (0, 2, 0, 0, ...),
hence ‖x − y‖ = 2. Thus

‖x + y‖² + ‖x − y‖² = 8, while 2(‖x‖² + ‖y‖²) = 4 · 2^{2/p} ≠ 8 for p ≠ 2,

and the parallelogram law is not satisfied. Hence lp (p ≠ 2), though a
Banach space (cf. 2.1.4), is not a Hilbert space.
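The same computation can be replayed for arbitrary p, confirming that the defect in the parallelogram law vanishes only at p = 2. A Python sketch (illustrative; the finite pairs (1, 1) and (1, −1) stand in for the lp vectors, whose remaining coordinates are zero):

```python
def lp_norm(x, p):
    # lp norm of a finite real sequence
    return sum(abs(t) ** p for t in x) ** (1.0 / p)

def parallelogram_defect(p):
    """||x+y||^2 + ||x-y||^2 - 2(||x||^2 + ||y||^2) for the book's x, y."""
    x, y = (1.0, 1.0), (1.0, -1.0)
    xp = (x[0] + y[0], x[1] + y[1])   # x + y = (2, 0)
    xm = (x[0] - y[0], x[1] - y[1])   # x - y = (0, 2)
    lhs = lp_norm(xp, p) ** 2 + lp_norm(xm, p) ** 2        # = 8
    rhs = 2 * (lp_norm(x, p) ** 2 + lp_norm(y, p) ** 2)    # = 4 * 2^(2/p)
    return lhs - rhs

assert abs(parallelogram_defect(2.0)) < 1e-12   # law holds at p = 2
assert abs(parallelogram_defect(1.0)) > 1e-6    # fails for p = 1
assert abs(parallelogram_defect(3.0)) > 1e-6    # fails for p = 3
```

By theorem 3.3.4 below, a nonzero defect already rules out any inner product inducing the lp norm.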
3.3.3 Space C([0, 1])
The space C([0, 1]) is not an inner product space and hence not a Hilbert
space.
We show that the norm defined by

‖x‖ = max_{0≤t≤1} |x(t)|

cannot be obtained from an inner product. Let us take x(t) = 1 and
y(t) = t. Hence ‖x‖ = 1, ‖y‖ = 1, and

‖x + y‖ = max_{0≤t≤1} |1 + t| = 2, ‖x − y‖ = max_{0≤t≤1} |1 − t| = 1.

Thus ‖x + y‖² + ‖x − y‖² = 5 ≠ 4 = 2(‖x‖² + ‖y‖²).
Hence the parallelogram law is not satisfied.
Thus C([0, 1]), although a complete normed linear space, i.e., a Banach
space [cf. 2.1.4], is not a Hilbert space.
We know that the norm of a vector in an inner product space can be
expressed in terms of the inner product: ‖x‖ = √⟨x, x⟩. The inner product
can also be recovered from the induced norm by the following formula,
known as the polarization identity:

⟨x, y⟩ = ¼(‖x + y‖² − ‖x − y‖²) in ℝ,
⟨x, y⟩ = ¼[(‖x + y‖² − ‖x − y‖²) + i(‖x + iy‖² − ‖x − iy‖²)] in ℂ.    (3.12)

Now, in ℝ,

¼(‖x + y‖² − ‖x − y‖²) = ¼[⟨x + y, x + y⟩ − ⟨x − y, x − y⟩]
= ¼[2⟨x, y⟩ + 2⟨y, x⟩] = ⟨x, y⟩, since in ℝ, ⟨x, y⟩ = ⟨y, x⟩.

In ℂ, expanding in the same way,

‖x + y‖² − ‖x − y‖² = 2[⟨x, y⟩ + ⟨y, x⟩] = 4 Re ⟨x, y⟩,

while, since ⟨x, iy⟩ = −i⟨x, y⟩ and ⟨iy, x⟩ = i⟨y, x⟩,

‖x + iy‖² − ‖x − iy‖² = 2[⟨x, iy⟩ + ⟨iy, x⟩] = 2i[⟨y, x⟩ − ⟨x, y⟩] = 4 Im ⟨x, y⟩.

Hence

¼[(‖x + y‖² − ‖x − y‖²) + i(‖x + iy‖² − ‖x − iy‖²)] = Re ⟨x, y⟩ + i Im ⟨x, y⟩ = ⟨x, y⟩,

which is the polarization identity.
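The identity can be verified numerically: the code below (Python; helper names ours) recovers ⟨x, y⟩ in ℂ² purely from values of the induced norm:

```python
def inner(x, y):
    # <x, y> = sum_i x_i * conj(y_i), linear in the first argument
    return sum(a * b.conjugate() for a, b in zip(x, y))

def norm(x):
    return abs(inner(x, x)) ** 0.5

def polarize(x, y):
    """Recover <x, y> from the induced norm via the complex polarization identity."""
    def nsq(u):
        return norm(u) ** 2
    def add(u, v, c):
        # u + c*v, componentwise
        return [a + c * b for a, b in zip(u, v)]
    return 0.25 * ((nsq(add(x, y, 1)) - nsq(add(x, y, -1)))
                   + 1j * (nsq(add(x, y, 1j)) - nsq(add(x, y, -1j))))

x = [1 + 2j, -3j]
y = [2 - 1j, 1 + 1j]
assert abs(polarize(x, y) - inner(x, y)) < 1e-9
```

The sign of the imaginary part in the formula matches this book's convention (linear in the first argument, conjugate linear in the second); the opposite convention flips it.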
3.3.4 Theorem
A norm ‖·‖ on a linear space E is induced by an inner product ⟨·, ·⟩ on it
if and only if the norm satisfies the parallelogram law. In that case the inner
product ⟨·, ·⟩ is given by the polarization identity.
Proof: Suppose that the norm is induced by the inner product. Then
the parallelogram law holds true; furthermore, the inner product can be
recovered from the norm by the polarization identity.
Conversely, let us suppose that ‖·‖ obeys the parallelogram law and
⟨·, ·⟩ is defined by the polarization identity as given in (3.12). We have to
show that ⟨·, ·⟩ is an inner product and generates the norm ‖·‖ on E. Let us
consider the formula (3.12) for the complex space.
(i) For all x, y ∈ E,

⟨x, y⟩ = ¼[(‖x + y‖² − ‖x − y‖²) + i(‖x + iy‖² − ‖x − iy‖²)].

Putting y = x ∈ E we get

⟨x, x⟩ = ¼[4‖x‖² − 0 + i(|1 + i|² ‖x‖² − |1 − i|² ‖x‖²)]
= ¼[4‖x‖² + i(2‖x‖² − 2‖x‖²)]
= ‖x‖².

Therefore the inner product ⟨·, ·⟩ generates the norm ‖·‖.
(ii) For all x, y ∈ E we have

⟨y, x⟩ = ¼[(‖y + x‖² − ‖y − x‖²) + i(‖y + ix‖² − ‖y − ix‖²)].

Since ‖y + ix‖ = ‖i(x − iy)‖ = ‖x − iy‖ and ‖y − ix‖ = ‖−i(x + iy)‖ = ‖x + iy‖,
this equals

¼[(‖x + y‖² − ‖x − y‖²) − i(‖x + iy‖² − ‖x − iy‖²)],

which is the complex conjugate of ⟨x, y⟩, as required by condition (c).
(iii) Let u, v, w ∈ E. Then the parallelogram law yields

‖(u + v) + w‖² + ‖(u + v) − w‖² = 2(‖u + v‖² + ‖w‖²),
‖(u − v) + w‖² + ‖(u − v) − w‖² = 2(‖u − v‖² + ‖w‖²).

On subtraction we get

(‖(u + w) + v‖² − ‖(u + w) − v‖²) + (‖(u − w) + v‖² − ‖(u − w) − v‖²)
= 2(‖u + v‖² − ‖u − v‖²).

Using the polarization identity, we get

Re ⟨u + w, v⟩ + Re ⟨u − w, v⟩ = 2 Re ⟨u, v⟩,    (3.13)
Im ⟨u + w, v⟩ + Im ⟨u − w, v⟩ = 2 Im ⟨u, v⟩.

Hence

⟨u + w, v⟩ + ⟨u − w, v⟩ = 2⟨u, v⟩.    (3.14)

Putting u + w = x, u − w = y and v = z, we obtain from (3.14)

⟨x, z⟩ + ⟨y, z⟩ = 2⟨(x + y)/2, z⟩ = ⟨x + y, z⟩,

since on putting w = u, (3.14) reduces to ⟨2u, v⟩ = 2⟨u, v⟩ for all
u, v ∈ E.
Thus condition (a) of 3.1.1 is proved.
(iv) Next we want to prove condition (b), i.e., ⟨αx, y⟩ = α⟨x, y⟩, for
every complex scalar α and x, y ∈ E. We shall prove it in stages.
Stage 1. Let α = m, a positive integer, m > 1. Then

⟨mx, y⟩ = ⟨(m − 1)x + x, y⟩ = ⟨(m − 1)x, y⟩ + ⟨x, y⟩
= ⟨(m − 2)x, y⟩ + 2⟨x, y⟩
⋮
= ⟨x, y⟩ + (m − 1)⟨x, y⟩ = m⟨x, y⟩.

Also, for any positive integer n, we have

⟨x, y⟩ = ⟨n(x/n), y⟩ = n⟨x/n, y⟩.

Hence

⟨x/n, y⟩ = (1/n)⟨x, y⟩.

If m is a negative integer, splitting m as (m − 1) + 1, we can show that (b)
is true.
Stage 2. Let α = r = m/n be a rational number, with m and n prime to each
other. Then

⟨rx, y⟩ = ⟨(m/n)x, y⟩ = m⟨x/n, y⟩ = (m/n)⟨x, y⟩ = r⟨x, y⟩.
Stage 3. Let α be a real number. Then there exists a sequence {rn} of
rational numbers such that rn → α as n → ∞. Since ‖rn x ± y‖ → ‖αx ± y‖
and ⟨·, ·⟩ is given by the polarization identity,

⟨rn x, y⟩ → ⟨αx, y⟩.

But

⟨rn x, y⟩ = rn ⟨x, y⟩ → α⟨x, y⟩.

Therefore, ⟨αx, y⟩ = α⟨x, y⟩ for any real α.
Stage 4. Let α = i. Then the polarization identity yields

⟨ix, y⟩ = ¼[‖ix + y‖² − ‖ix − y‖² + i(‖ix + iy‖² − ‖ix − iy‖²)]
= ¼[‖x − iy‖² − ‖x + iy‖² + i(‖x + y‖² − ‖x − y‖²)]
= i · ¼[(‖x + y‖² − ‖x − y‖²) + i(‖x + iy‖² − ‖x − iy‖²)]
= i⟨x, y⟩.
Stage 5. Finally, let α = p + iq be any complex number. Then

⟨αx, y⟩ = ⟨px, y⟩ + ⟨iqx, y⟩ = p⟨x, y⟩ + iq⟨x, y⟩ = (p + iq)⟨x, y⟩ = α⟨x, y⟩.

Thus we have shown that ⟨·, ·⟩ is the inner product inducing the norm
‖·‖ on E.
3.3.5 Lemma
Let E be an inner product space with an inner product ⟨·, ·⟩.
(i) The linear space E is uniformly convex in the norm ‖·‖; that is,
for every ε > 0 there is some δ > 0 such that for all x, y ∈ E with
‖x‖ ≤ 1, ‖y‖ ≤ 1 and ‖x − y‖ ≥ ε, we have ‖x + y‖ ≤ 2 − 2δ.
(ii) The scalar product is a continuous function with respect to norm
convergence.
Proof: (i) Let ε > 0 be given, and let x, y ∈ E with ‖x‖ ≤ 1, ‖y‖ ≤ 1 and
‖x − y‖ ≥ ε. Then

ε ≤ ‖x − y‖ ≤ ‖x‖ + ‖y‖ ≤ 2.

The parallelogram law gives

‖x + y‖² = 2(‖x‖² + ‖y‖²) − ‖x − y‖² ≤ 4 − ε².

Hence

‖x + y‖ ≤ (4 − ε²)^{1/2} = 2 − 2δ if δ = 1 − (1 − ε²/4)^{1/2}.

(ii) Let x, y ∈ E and xn → x, yn → y, where xn, yn are elements of
E. Then ‖xn‖ and ‖yn‖ are bounded above; let M be an upper bound of
‖xn‖ and of ‖y‖. Hence

|⟨xn, yn⟩ − ⟨x, y⟩| = |⟨xn, yn⟩ − ⟨xn, y⟩ + ⟨xn, y⟩ − ⟨x, y⟩|
= |⟨xn, yn − y⟩ + ⟨xn − x, y⟩| ≤ M ‖yn − y‖ + M ‖xn − x‖,

using the Cauchy-Bunyakovsky-Schwartz inequality. Since ‖xn − x‖ → 0 and
‖yn − y‖ → 0 as n → ∞, we have from the above inequality

⟨xn, yn⟩ → ⟨x, y⟩ as n → ∞.

This shows that ⟨x, y⟩, which is a function on E × E, is continuous in both
x and y.
3.4 Orthogonality
3.4.1 Definitions (orthogonal, acute, obtuse)
Let x, y be vectors in an inner product space.
(i) Orthogonal: x is said to be orthogonal to y, written x ⊥ y, if
⟨x, y⟩ = 0.
(ii) Acute: x is said to be acute to y if ⟨x, y⟩ ≥ 0.
(iii) Obtuse: x is said to be obtuse to y if ⟨x, y⟩ ≤ 0.
In ℝ² or ℝ³, x is orthogonal to y if the angle between the vectors is 90°.
Similarly, when x is acute to y, the angle between x and y is less than or
equal to 90°; and when ⟨x, y⟩ ≤ 0, the angle between x
and y is greater than or equal to 90°. This geometrical interpretation can
be extended to infinite dimensions in an inner product space.
3.4.2 Definition: subspace
A nonempty subset X of an inner product space E is said to be a
subspace of E if
(i) X is a (linear) subspace of E considered as a linear space;
(ii) X admits the inner product ⟨·, ·⟩_X induced by the inner product
⟨·, ·⟩ on E, i.e.,

⟨x, y⟩_X = ⟨x, y⟩ for all x, y ∈ X.

Note 3.4.1. A subspace X of an inner product space E is itself an inner product
space, and the induced norm ‖·‖_X on X coincides with the induced norm
‖·‖ on E.
3.4.3 Closed subspace
A subspace X of an inner product space E is said to be a closed subspace
of E if X is closed with respect to the norm ‖·‖_X induced by the norm
‖·‖ on E.
Note 3.4.2. Given a Hilbert space H, when we call X a subspace (closed
subspace) of H, we treat H as an inner product space and X its subspace
(closed subspace).
Note 3.4.3. Every subspace of a finite dimensional inner product space
is closed.
This is not true in general, as the following example shows.
3.4.4 Example
Consider the Hilbert space l2 and let X be the subset of all finite
sequences in l2, given by

X = {x = {ξi} ∈ l2 : ξi = 0 for i > N, N some positive integer}.

X is a proper subspace of l2, but X is dense in l2. Hence X̄ = l2 ≠ X,
and X is not closed in l2.
Problems [3.1-3.4]
1. Let α1, α2, ..., αn be n strictly positive real numbers. Show that
the function of two variables ⟨·, ·⟩ : ℝⁿ × ℝⁿ → ℝ, defined by

⟨x, y⟩ = Σ_{i=1}^n αi xi yi,

is an inner product on ℝⁿ.
2. Show that equality holds in the Cauchy-Bunyakovsky-Schwartz
inequality (i.e., |⟨x, y⟩| = ‖x‖ ‖y‖) if and only if x and y are linearly
dependent.
3. Let ⟨·, ·⟩ be an inner product on a linear space E. For x ≠ θ, y ≠ θ
in E define the angle between x and y as follows:

θ_{x,y} = arccos [Re ⟨x, y⟩ / √(⟨x, x⟩⟨y, y⟩)], 0 ≤ θ_{x,y} ≤ π.

Then show that θ_{x,y} is well-defined and satisfies the identity

⟨x, x⟩ + ⟨y, y⟩ − ⟨x − y, x − y⟩ = 2⟨x, x⟩^{1/2} ⟨y, y⟩^{1/2} cos θ_{x,y}.
4. Let ‖·‖ be a norm on a linear space E which satisfies the parallelogram
law

‖x + y‖² + ‖x − y‖² = 2(‖x‖² + ‖y‖²), x, y ∈ E.

For x, y ∈ E define

⟨x, y⟩ = ¼[‖x + y‖² − ‖x − y‖² + i‖x + iy‖² − i‖x − iy‖²].

Then show that ⟨·, ·⟩ is the unique inner product on E satisfying
√⟨x, x⟩ = ‖x‖ for all x ∈ E.
5. [Limaye [33]] Let X be a normed space over ℝ. Show that the norm
satises the parallelogram law, if and only if in every plane through
the origin, the set of all elements having norm equal to 1 forms an
ellipse with its centre at the origin.
6. Let {xn} be a sequence in a Hilbert space H and x ∈ H such that
lim_{n→∞} ‖xn‖ = ‖x‖ and lim_{n→∞} ⟨xn, x⟩ = ⟨x, x⟩. Show that lim_{n→∞} xn = x.
7. Let C be a convex set in a Hilbert space H, and d = inf{‖x‖ : x ∈ C}.
If {xn} is a sequence in C such that lim_{n→∞} ‖xn‖ = d, show that {xn}
is a Cauchy sequence.
8. (Pythagorean theorem) (Kreyszig [30]). If x ⊥ y in an inner
product space E, show that (fig. 3.2)

‖x + y‖² = ‖x‖² + ‖y‖².

Fig. 3.2
9. (Appolonius' identity) (Kreyszig [30]). Verify by direct calculation
that for any three elements x, y and z in an inner product space E,

‖z − x‖² + ‖z − y‖² = ½ ‖x − y‖² + 2 ‖z − ½(x + y)‖².

Show that this identity can also be obtained from the parallelogram
law.
3.5 Orthogonal Projection Theorem
In 1.4.7 we defined the distance of a point x from a set A in a metric
space E, which runs as follows:

D(x, A) = inf_{y∈A} ρ(x, y).    (3.15)

In case E is a normed linear space, this reads

D(x, A) = inf_{y∈A} ‖x − y‖.    (3.16)

If y∗ is the value of y for which the infimum is attained, then D(x, A) =
‖x − y∗‖, y∗ ∈ A. Hence y∗ is the element in A closest to x. The existence
of such an element y∗ is not guaranteed, and even if it exists it may not be
unique. Such behaviour may be observed even if A happens to be a curve
in ℝ²; for example, let A be an open line segment in ℝ².

Figs. 3.3-3.5: existence and uniqueness of points y∗ ∈ A satisfying (3.16).
When the given set A ⊂ ℝ² is an open segment there may be no y∗ (fig. 3.3)
or a unique y∗ (fig. 3.4); when A is a circular arc there may be infinitely
many y∗'s (fig. 3.5).
The study of the problem of existence and uniqueness of a point in
a set closest to a given point falls within the purview of the theory of
optimization.
In what follows we discuss the orthogonal projection theorem in a
Hilbert space which partially answers the above problem.
3.5.1 Theorem (orthogonal projection theorem)
If x ∈ H (a Hilbert space) and L is some closed subspace of H, then x
has a unique representation of the form

x = y + z,    (3.17)

with y ∈ L and z ⊥ L, i.e. z is orthogonal to every element of L.
Proof: If x ∈ L, take y = x and z = θ. Let us next suppose that x ∉ L. Let

d = inf_{y∈L} ‖x − y‖²,

i.e., d is the square of the distance of x from L. Let {yn} be a sequence
in L such that dn = ‖x − yn‖² and dn → d as n → ∞.
Let h be any nonzero element of L. Then yn + λh ∈ L for any complex
number λ.
Therefore, ‖x − (yn + λh)‖² ≥ d, i.e. ⟨x − (yn + λh), x − (yn + λh)⟩ ≥ d,
or

⟨x − yn, x − yn⟩ − λ⟨h, x − yn⟩ − λ̄⟨x − yn, h⟩ + |λ|²⟨h, h⟩ ≥ d,

or

‖x − yn‖² − λ⟨h, x − yn⟩ − λ̄⟨x − yn, h⟩ + |λ|² ‖h‖² ≥ d.

Let us put

λ = ⟨x − yn, h⟩ / ‖h‖².

The above inequality reduces to

‖x − yn‖² − |⟨x − yn, h⟩|² / ‖h‖² ≥ d.
Since ‖x − yn‖² = dn, this gives

|⟨x − yn, h⟩|² ≤ ‖h‖² (dn − d),

or

|⟨x − yn, h⟩| ≤ ‖h‖ √(dn − d).    (3.18)

Inequality (3.18) is evidently satisfied for h = θ.
It then follows that

|⟨ym − yn, h⟩| ≤ |⟨x − yn, h⟩| + |⟨x − ym, h⟩|
≤ (√(dn − d) + √(dm − d)) ‖h‖.    (3.19)

(3.19) yields

|⟨yn − ym, h⟩| / ‖h‖ ≤ √(dn − d) + √(dm − d) for h ≠ θ.

Taking the supremum of the left-hand side over h ≠ θ, we obtain

‖yn − ym‖ ≤ √(dn − d) + √(dm − d).    (3.20)

Since dn → d as n → ∞, the above inequality shows that {yn} is a
Cauchy sequence. H being complete, {yn} converges to some element y ∈ H.
Since L is closed, y ∈ L. Letting n → ∞ in (3.18), it then follows that

⟨x − y, h⟩ = 0,    (3.21)

where h is an arbitrary element of L.
Hence x − y is perpendicular to every element h ∈ L, i.e. x − y ⊥ L.
Setting z = x − y we have

x = y + z.    (3.22)

Next we want to show that the representation (3.22) is unique. If that
be not so, let us suppose there exist y′ ∈ L and z′ ⊥ L such that

x = y′ + z′.    (3.23)

It follows from (3.22) and (3.23) that

y − y′ = z′ − z.    (3.24)

Since L is a subspace and y, y′ ∈ L, we have y − y′ ∈ L.
Similarly z′ − z ⊥ L. Hence (3.24) can be true if and only if y − y′ =
z′ − z = θ, showing that the representation (3.22) is unique. Otherwise we
may note that ‖y − y′‖² = ⟨y − y′, z′ − z⟩ = 0, since y − y′ ∈ L is orthogonal to z′ − z.
Hence y = y′ and z = z′. In (3.22), y is called the projection of x on L.
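The decomposition x = y + z can be computed in ℝ³: project x onto a two-dimensional closed subspace L via Gram-Schmidt and check both z ⊥ L and the minimality of ‖x − y‖. A Python sketch (the helper names project and dot are ours):

```python
import random

def dot(a, b):
    return sum(p * q for p, q in zip(a, b))

def project(x, basis):
    """Orthogonal projection of x onto span(basis),
    via Gram-Schmidt on the (assumed linearly independent) basis."""
    ortho = []
    for u in basis:
        w = list(u)
        for v in ortho:
            c = dot(w, v) / dot(v, v)
            w = [wi - c * vi for wi, vi in zip(w, v)]
        ortho.append(w)
    y = [0.0] * len(x)
    for v in ortho:
        c = dot(x, v) / dot(v, v)
        y = [yi + c * vi for yi, vi in zip(y, v)]
    return y

x = [1.0, 2.0, 0.0]
L = [[1.0, 0.0, 1.0], [0.0, 1.0, 1.0]]
y = project(x, L)                        # y in L
z = [xi - yi for xi, yi in zip(x, y)]    # z = x - y
# z is orthogonal to every element of L
assert abs(dot(z, L[0])) < 1e-9 and abs(dot(z, L[1])) < 1e-9
# y is the point of L closest to x: moving within L never shrinks the distance
random.seed(0)
for _ in range(100):
    a, b = random.uniform(-2, 2), random.uniform(-2, 2)
    p = [a * L[0][i] + b * L[1][i] for i in range(3)]
    diff = [xi - pi for xi, pi in zip(x, p)]
    assert dot(z, z) <= dot(diff, diff) + 1e-12
```

The minimality check is the Pythagorean relation in action: since z ⊥ L, ‖x − p‖² = ‖z‖² + ‖y − p‖² for any p ∈ L.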
3.5.2 Lemma
The collection of all elements z orthogonal to L (≠ ∅) forms a
closed subspace M (say).
Proof: Let z1, z2 be orthogonal to L and let y ∈ L, where L is nonempty.
Then

⟨y, z1⟩ = 0, ⟨y, z2⟩ = 0.

Therefore, for scalars α, β,

⟨y, αz1 + βz2⟩ = ᾱ⟨y, z1⟩ + β̄⟨y, z2⟩ = 0.

Again, let {zn} → z in H, where each zn is orthogonal to L. Then, a scalar
product being a continuous function,

|⟨y, zn⟩ − ⟨y, z⟩| = |⟨y, zn − z⟩| ≤ ‖y‖ ‖zn − z‖ → 0 for every y ∈ L,

since zn → z as n → ∞.
Therefore, ⟨y, z⟩ = lim_{n→∞} ⟨y, zn⟩ = 0.
Hence z ⊥ L.
Thus the elements orthogonal to L form a closed subspace M. We write
L ⊥ M. M is called the orthogonal complement of L, and is written
as M = L^⊥.
Note 3.5.1. In ℝ², if the line L is the subspace, the projection of any
vector x with reference to a point O on L (the vector x not lying on L) is
shown in fig. 3.6: OB is the projection of the vector x on L, and B is the
point on L closest to A, the end point of x.

Fig. 3.6 Projection of x on L

Next, we present a theorem enumerating the conditions under which
the point of a set closest to a given point (not lying on the set) can be found.
3.5.3 Theorem
Let $H$ be a Hilbert space, $L$ a closed, convex set in $H$ and $x \in H - L$. Then there is a unique $y_0 \in L$ such that
$$\|x - y_0\| = \inf_{y \in L} \|x - y\|.$$
Proof: Let $d = \inf_{y \in L} \|x - y\|^2$. Let us choose $\{y_n\}$ in $L$ such that
$$\lim_{n\to\infty} \|x - y_n\|^2 = d. \qquad (3.25)$$
Then by the parallelogram law,
$$\|(y_m - x) - (y_n - x)\|^2 + \|(y_m - x) + (y_n - x)\|^2 = 2\left(\|y_m - x\|^2 + \|y_n - x\|^2\right)$$
A First Course in Functional Analysis
or
$$\|y_m - y_n\|^2 = 2\left(\|y_m - x\|^2 + \|y_n - x\|^2\right) - 4\left\|\frac{y_m + y_n}{2} - x\right\|^2.$$
Since $L$ is convex, $y_m, y_n \in L$ implies $\dfrac{y_m + y_n}{2} \in L$, and hence
$$\left\|\frac{y_m + y_n}{2} - x\right\|^2 \ge d.$$
Hence,
$$\|y_m - y_n\|^2 \le 2\|y_m - x\|^2 + 2\|y_n - x\|^2 - 4d.$$
Using (3.25) we conclude from the above that $\{y_n\}$ is Cauchy. Since $L$ is closed, $\{y_n\} \to y_0$ (say) $\in L$ as $n \to \infty$. Then
$$\|x - y_0\|^2 = d.$$
For uniqueness, let us suppose that $\|x - z_0\|^2 = d$, where $z_0 \in L$. Then,
$$\|y_0 - z_0\|^2 = \|(y_0 - x) - (z_0 - x)\|^2 = 2\|y_0 - x\|^2 + 2\|z_0 - x\|^2 - \|(y_0 - x) + (z_0 - x)\|^2$$
$$= 4d - 4\left\|\frac{y_0 + z_0}{2} - x\right\|^2 \le 4d - 4d = 0.$$
Hence $y_0 = z_0$.
3.5.4 Theorem
Let $H$ be a Hilbert space and $L$ a closed subspace of $H$, and let $x \in H - L$. Then there is a unique $y_0 \in L$ such that (i) $\|x - y_0\| = \inf_{y \in L} \|x - y\|$ and (ii) $x - y_0 \in L^{\perp}$.
Proof: Theorem 3.5.3 guarantees the existence of a unique $y_0 \in L$ such that $\|x - y_0\| = \inf_{y \in L} \|x - y\|$.
If $y \in L$ and $\lambda$ is a scalar, then $y_0 + \lambda y \in L$, so that
$$\|x - (y_0 + \lambda y)\|^2 \ge \|x - y_0\|^2,$$
and hence
$$-\bar\lambda \langle x - y_0, y\rangle - \lambda \langle y, x - y_0\rangle + |\lambda|^2 \|y\|^2 \ge 0 \qquad (3.26)$$
or
$$|\lambda|^2 \|y\|^2 \ge 2\,\mathrm{Re}\,\{\bar\lambda \langle x - y_0, y\rangle\}.$$
If $\lambda$ is real and $\lambda = \varepsilon > 0$,
$$2\,\mathrm{Re}\,\langle x - y_0, y\rangle \le \varepsilon \|y\|^2. \qquad (3.27)$$
If $\lambda = i\varepsilon$, $\varepsilon$ real and $\varepsilon > 0$, and if $\langle x - y_0, y\rangle = \alpha + i\beta$, (3.26) yields
$$i\varepsilon(\alpha + i\beta) - i\varepsilon(\alpha - i\beta) + \varepsilon^2\|y\|^2 \ge 0, \quad \text{or} \quad 2\beta \le \varepsilon\|y\|^2,$$
i.e.
$$2\,\mathrm{Im}\,\langle x - y_0, y\rangle \le \varepsilon \|y\|^2. \qquad (3.28)$$
Since $\varepsilon > 0$ is arbitrary, it follows from (3.27) and (3.28) that $\langle x - y_0, y\rangle = 0$, and $x - y_0$ is perpendicular to every $y \in L$. Thus $x - y_0 \in L^{\perp}$. Therefore $x - y_0 = z$ (say), where $z \in L^{\perp}$. Thus $x = y_0 + z$, where $y_0 \in L$ and $z \in L^{\perp}$. $y_0$ is the vector in $L$ at minimum distance from $x$; $y_0$ is called the orthogonal projection of $x$ on $L$.
3.5.5 Lemma
In order that a linear subspace $M$ be everywhere dense in a Hilbert space $H$, it is necessary and sufficient that no element different from zero be orthogonal to all elements of $M$.
Proof: Let $M$ be everywhere dense in $H$ and let $x \perp M$. Then $x \perp \bar M$. By hypothesis $\bar M = H$, and consequently $x \perp H$; in particular $x \perp x$, implying $x = \theta$.
Conversely, let us suppose $M$ is not everywhere dense in $H$. Then $\bar M \neq H$ and there is an $x \in H$ with $x \notin \bar M$. By theorem 3.5.1, $x = y + z$ with $y \in \bar M$ and $z \perp \bar M$, and since $x \notin \bar M$, it follows that $z \neq \theta$. Thus $z$ is a nonzero element orthogonal to $M$, which contradicts our hypothesis. Hence $\bar M = H$.
3.6
Orthogonal Complements, Direct Sum
In 3.5.2 we have defined the orthogonal complement of a set $L$ in a Hilbert space $H$ as
$$L^{\perp} = \{y \mid y \in H,\ y \perp x \text{ for every } x \in L\}. \qquad (3.29)$$
We write $(L^{\perp})^{\perp} = L^{\perp\perp}$, $(L^{\perp\perp})^{\perp} = L^{\perp\perp\perp}$, etc.
Note 3.6.1. (i) $\{\theta\}^{\perp} = H$ and $H^{\perp} = \{\theta\}$, i.e., $\theta$ is the only vector orthogonal to every vector.
Note that $\{\theta\}^{\perp} = \{x \in H : \langle x, \theta\rangle = 0\} = H$, since $\langle x, \theta\rangle = 0$ for all $x \in H$. Also, if $x \neq \theta$, then $\langle x, x\rangle \neq 0$. Hence a nonzero vector $x$ cannot be orthogonal to the entire space. Therefore, $H^{\perp} = \{\theta\}$.
(ii) If $L \neq \emptyset$ is a subset of $H$, then the set $L^{\perp}$ is a closed subspace of $H$. Furthermore, $L \cap L^{\perp}$ is either $\{\theta\}$ or empty (when $\theta \notin L$).
For the first part of the above, see Lemma 3.5.2. For the second part, let us suppose $L \cap L^{\perp} \neq \emptyset$ and let $x \in L \cap L^{\perp}$. Then we must have $x \perp x$. Hence $x = \theta$.
(iii) If $A$ is a subset of $H$, then $A \subseteq A^{\perp\perp}$. Let $x \in A$. Then $x \perp A^{\perp}$, which means that $x \in (A^{\perp})^{\perp}$.
(iv) If $A$ and $B$ are subsets of $H$ such that $A \subseteq B$, then $B^{\perp} \subseteq A^{\perp}$. Let $x \in B^{\perp}$; then $\langle x, y\rangle = 0$ for every $y \in B$, and therefore for every $y \in A$, since $A \subseteq B$. Thus $x \in B^{\perp}$ implies $x \in A^{\perp}$, that is, $B^{\perp} \subseteq A^{\perp}$.
(v) If $A \neq \emptyset$ is a subset of $H$, then $A^{\perp} = A^{\perp\perp\perp}$. Replacing $A$ by $A^{\perp}$ in (iii) we get $A^{\perp} \subseteq A^{\perp\perp\perp}$. Since $A \subseteq A^{\perp\perp}$, it follows from (iv) that $A^{\perp\perp\perp} \subseteq A^{\perp}$. Hence, from the above two inclusions, $A^{\perp} = A^{\perp\perp\perp}$.
3.6.1 Direct sum
In 1.3.9 we have seen that a linear space $E$ can be expressed as the direct sum of its subspaces $X_1, X_2, \ldots, X_n$ if every $x \in E$ can be expressed uniquely in the form
$$x = x_1 + x_2 + \cdots + x_n, \quad x_i \in X_i.$$
In that case we write $E = X_1 \oplus X_2 \oplus X_3 \oplus \cdots \oplus X_n$. In what follows we mention that, using the orthogonal projection theorem [cf. 3.5.1], we can partition a Hilbert space $H$ into a closed subspace $L$ and its orthogonal complement $L^{\perp}$, i.e.,
$$x = y + z \qquad (3.30)$$
where $x \in H$, $y \in L$ and $z \in L^{\perp}$. This representation is unique. Hence we can write $H = L \oplus L^{\perp}$.
Thus the orthogonal projection theorem can also be stated as follows:
If $L$ is a closed subspace of a Hilbert space $H$, then $H = L \oplus L^{\perp}$.
Proof: $L^{\perp}$ is a closed subspace of $H$ (cf. Lemma 3.5.2). Therefore $L$ and $L^{\perp}$ are orthogonal closed subspaces of $H$. We first want to show that $L + L^{\perp}$ is a closed subspace of $H$. Let $z \in \overline{L + L^{\perp}}$. Then there exists a sequence $\{z_n\}$ in $L + L^{\perp}$ such that $\lim_{n\to\infty} z_n = z$.
Each $z_n \in L + L^{\perp}$ can be uniquely represented as $z_n = x_n + y_n$, with $x_n \in L$ and $y_n \in L^{\perp}$.
Since $(x_m - x_n) \perp (y_m - y_n)$, by virtue of the Pythagorean theorem we have
$$\|z_m - z_n\|^2 = \|(x_m - x_n) + (y_m - y_n)\|^2 = \|x_m - x_n\|^2 + \|y_m - y_n\|^2 \to 0 \quad \text{as } m, n \to \infty.$$
Hence $\{z_n\}$ is Cauchy, and $\{x_n\}$ and $\{y_n\}$ are Cauchy sequences in $L$ and $L^{\perp}$ respectively. Since $L$ and $L^{\perp}$ are closed subspaces of $H$, they are complete. Hence $\{x_n\} \to x$, an element of $L$, and $\{y_n\} \to y$, an element of $L^{\perp}$, as $n \to \infty$.
Thus, $z = \lim_{n\to\infty} z_n = \lim_{n\to\infty}(x_n + y_n) = x + y$.
Hence $z = x + y \in L + L^{\perp}$, i.e. $L + L^{\perp}$ is a closed subspace of $H$, and is therefore complete.
We next have to prove that $L + L^{\perp} = H$. If $L = H$ then $L^{\perp} = \{\theta\}$, and if $L = \{\theta\}$ then $L^{\perp} = H$; in either case $L + L^{\perp} = H$ holds trivially. Otherwise, suppose $L + L^{\perp}$ is a proper (closed) subspace of $H$, and let $x \in H - (L + L^{\perp})$, $x \neq \theta$. Then by theorem 3.5.4 there exists a $y_0 \in L + L^{\perp}$ such that
$$\|x - y_0\| = \inf_{y \in L + L^{\perp}} \|x - y\| = d > 0$$
and $x - y_0 \perp L + L^{\perp}$. Let $z_0 = x - y_0$; then $z_0 \neq \theta$, $z_0 \in H$ and $z_0 \perp L + L^{\perp}$. This gives
$$\langle z_0, y + z\rangle = 0, \quad \text{i.e.} \quad \langle z_0, y\rangle + \langle z_0, z\rangle = 0 \quad \forall\, y \in L \text{ and } z \in L^{\perp}.$$
In particular, taking $z = \theta$ and $y = \theta$ respectively,
$$\langle z_0, y\rangle = 0 \ \forall\, y \in L \quad \text{and} \quad \langle z_0, z\rangle = 0 \ \forall\, z \in L^{\perp}.$$
Consequently, $z_0 \in L^{\perp} \cap L^{\perp\perp}$. But $L^{\perp} \cap L^{\perp\perp} = \{\theta\}$, these being mutually orthogonal subspaces. Therefore it follows that $z_0 = \theta$, which contradicts $z_0 \neq \theta$. Hence $H = L + L^{\perp}$. Since $L \cap L^{\perp} = \{\theta\}$, $H = L \oplus L^{\perp}$.
3.6.2 Projection operator
Let $H$ be a Hilbert space and $L$ a closed subspace of $H$. Then by the orthogonal projection theorem, every $x \in H$ can be uniquely represented as $x = y + z$ with $y \in L$ and $z \in L^{\perp}$. $y$ is called the projection of $x$ on $L$. We write $y = Px$; $P$ is called the projection mapping of $H$ onto $L$, i.e., $P : H \xrightarrow{\text{onto}} L$.
Further,
$$z = x - y = x - Px = (I - P)x.$$
Thus
$$PH = L \quad \text{and} \quad (I - P)H = L^{\perp}.$$
Now,
$$Py = y, \ y \in L \quad \text{and} \quad Pz = \theta, \ z \in L^{\perp}.$$
Thus the range of $P$ and its null space are mutually orthogonal. Hence the projection mapping $P$ is called an orthogonal projection.
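The action of $P$ can be illustrated by a finite-dimensional sketch. The matrix $A$ spanning $L$ below is a hypothetical example of my own; $P = A(A^{\mathsf T}A)^{-1}A^{\mathsf T}$ is the standard matrix of the orthogonal projection onto the column space of $A$.

```python
import numpy as np

# L = column space of A in R^4; P projects onto L, I - P onto L-perp,
# mirroring the unique decomposition x = Px + (I - P)x.
A = np.array([[1.0, 0.0],
              [1.0, 1.0],
              [0.0, 1.0],
              [0.0, 0.0]])
P = A @ np.linalg.inv(A.T @ A) @ A.T
I = np.eye(4)

x = np.array([1.0, 2.0, 3.0, 4.0])
y, z = P @ x, (I - P) @ x               # y in L, z in L-perp

assert np.allclose(P @ P, P)            # P is idempotent
assert np.allclose(P @ y, y)            # P y = y for y in L
assert abs(np.dot(y, z)) < 1e-10        # range(P) is orthogonal to null(P)
```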
3.6.3 Theorem
A subspace $L$ of a Hilbert space $H$ is closed if and only if $L = L^{\perp\perp}$.
Proof: If $L = L^{\perp\perp}$, then $L$ is a closed subspace of $H$, because $L^{\perp\perp} = (L^{\perp})^{\perp}$ is already a closed subspace of $H$ [see note 3.6.1].
Conversely, let us suppose that $L$ is a closed subspace of $H$. For any subset $L$ of $H$ we have $L \subseteq L^{\perp\perp}$ [see note 3.6.1]. So it remains to prove that $L^{\perp\perp} \subseteq L$.
Let $x \in L^{\perp\perp}$. By the projection theorem, $x = y + z$ with $y \in L$ and $z \in L^{\perp}$. Since $L \subseteq L^{\perp\perp}$, it follows that $y \in L^{\perp\perp}$. $L^{\perp\perp}$ being a subspace of $H$, $z = x - y \in L^{\perp\perp}$. Hence $z \in L^{\perp} \cap L^{\perp\perp}$. As such $z \perp z$, i.e., $z = \theta$. Hence $x - y = \theta$, so $x = y \in L$. Hence $L^{\perp\perp} \subseteq L$. This proves the theorem.
3.6.4 Theorem
Let $L$ be a nonempty subset of a Hilbert space $H$. Then $\mathrm{span}\, L$ is dense in $H$ if and only if $L^{\perp} = \{\theta\}$.
Proof: We assume that $\mathrm{span}\, L$ is dense in $H$. Let $M = \mathrm{span}\, L$, so that $\bar M = H$. Now, $\{\theta\} \subseteq L^{\perp}$. Let $x \in L^{\perp}$; since $x \in H = \bar M$, there exists a sequence $\{x_n\} \subseteq M$ such that
$$\lim_{n\to\infty} x_n = x.$$
Now, since $x \in L^{\perp}$ and each $x_n \in \mathrm{span}\, L$, we have $\langle x, x_n\rangle = 0$ for all $n$.
Since the inner product is a continuous function, proceeding to the limit in the above we have $\langle x, x\rangle = 0$, i.e. $x = \theta$. Thus $L^{\perp} \subseteq \{\theta\}$. Hence $L^{\perp} = \{\theta\}$.
Conversely, let us suppose that $L^{\perp} = \{\theta\}$. Let $x \in M^{\perp}$. Then $x \perp M$ and in particular $x \perp L$, so $x \in L^{\perp} = \{\theta\}$. Hence $M^{\perp} = \{\theta\}$. But $\bar M$ is a closed subspace of $H$; hence by the projection theorem $H = \bar M \oplus \bar M^{\perp} = \bar M$, i.e. $\mathrm{span}\, L$ is dense in $H$.
Problems [3.5 and 3.6]
1. Find the projections of the position vector of a point in a two-dimensional plane along the initial line and along the line through the origin perpendicular to the initial line.
2. Let $Ox$, $Oy$, $Oz$ be mutually perpendicular axes through a fixed point $O$ as origin. Let the spherical polar coordinates of $P$ be $(r, \theta, \phi)$. Find the projections of the position vector of $P$, with respect to $O$ as the fixed point, on the axes $Ox$, $Oy$, $Oz$ respectively.
3. Let $x_1, x_2, \ldots, x_n$ satisfy $x_i \neq \theta$ and $x_i \perp x_j$ if $i \neq j$, $i, j = 1, 2, \ldots, n$. Show that the $x_i$'s are linearly independent, and extend the Pythagorean theorem from 2 to $n$ dimensions.
4. Show that if $M$ and $N$ are closed subspaces of a Hilbert space $H$, then $M + N$ is closed provided $x \perp y$ for all $x \in M$ and $y \in N$.
5. Let $H$ be a Hilbert space, $M \subseteq H$ a convex subset, and $\{x_n\}$ a sequence in $M$ such that $\|x_n\| \to d$ as $n \to \infty$, where $d = \inf_{x \in M} \|x\|$. Show that $\{x_n\}$ converges in $H$.
6. If $M \subseteq H$, a Hilbert space, show that $M^{\perp\perp}$ is the closure of the span of $M$.
7. If $M_1$ and $M_2$ are closed subspaces of a Hilbert space $H$, then show that $(M_1 \cap M_2)^{\perp}$ equals the closure of $M_1^{\perp} + M_2^{\perp}$.
8. In the usual Hilbert space $\mathbb{R}^2$, find $L^{\perp}$ if
(i) $L = \{x\}$, where $x$ has two nonzero components $x_1$ and $x_2$;
(ii) $L = \{x, y\} \subseteq \mathbb{R}^2$ is a linearly independent set.
Hints: (i) If $y \in L^{\perp}$ with components $y_1, y_2$, then $x_1 y_1 + x_2 y_2 = 0$. (ii) If $L$ is a linearly independent set, then $x \neq ky$ for any scalar $k \neq 0$.
9. Show that equality holds in the Cauchy–Bunyakovsky–Schwarz inequality (i.e. $|\langle x, y\rangle| = \|x\|\,\|y\|$) if and only if $x$ and $y$ are linearly dependent vectors.
3.7
Orthogonal System
Orthogonal systems play a great role in expressing a vector in a Hilbert space in terms of mutually orthogonal vectors. The consequences of the orthogonal projection theorem are germane to this approach. In the two-dimensional plane any vector can be expressed as the sum of the projections of the vector in two mutually perpendicular directions. If $e_1$ and $e_2$ are two unit vectors along the perpendicular axes $Ox$ and $Oy$, and $P$ has coordinates $(x_1, x_2)$ [Fig. 3.7], then the position vector $r$ of $P$ is given by
$$r = x_1 e_1 + x_2 e_2.$$
Fig. 3.7 Expressing a vector in terms of two perpendicular vectors
The above expression of the position vector can be extended to $n$ dimensions.
3.7.1 Definition: orthogonal sets and sequences
A set $L$ in a Hilbert space $H$ is said to be an orthogonal set if its elements are pairwise orthogonal. An orthonormal set is an orthogonal subset $L \subseteq H$ whose elements $e_i, e_j$ satisfy the condition $\langle e_i, e_j\rangle = \delta_{ij}$, where
$$\delta_{ij} = \begin{cases} 0, & i \neq j \\ 1, & i = j \end{cases}$$
is the Kronecker symbol.
If the orthogonal or orthonormal set is countable, then we call the said set an orthogonal or orthonormal sequence respectively.
3.7.2 Example
$\{e^{2\pi i n x}\}$, $n = 0, \pm 1, \pm 2, \ldots$ is an example of an orthonormal sequence in the complex space $L_2([0, 1])$.
3.7.3 Theorem (Pythagorean)
If $a_1, a_2, \ldots, a_k$ are mutually orthogonal elements of a Hilbert space $H$, then
$$\|a_1 + a_2 + \cdots + a_k\|^2 = \|a_1\|^2 + \cdots + \|a_k\|^2.$$
Proof: Since $a_1, a_2, \ldots, a_k$ are mutually orthogonal elements,
$$\langle a_i, a_j\rangle = \begin{cases} \|a_i\|^2, & i = j \\ 0, & i \neq j. \end{cases}$$
Hence
$$\|a_1 + a_2 + \cdots + a_k\|^2 = \langle a_1 + \cdots + a_k,\ a_1 + \cdots + a_k\rangle = \sum_{i=1}^{k}\sum_{j=1}^{k} \langle a_i, a_j\rangle = \sum_{i=1}^{k} \langle a_i, a_i\rangle = \sum_{i=1}^{k} \|a_i\|^2,$$
since $\langle a_i, a_j\rangle = 0$ for $i \neq j$.
3.7.4 Lemma (linear independence)
An orthonormal set is linearly independent.
Proof: Let $\{e_1, e_2, \ldots, e_n\}$ be an orthonormal set. If the set is not linearly independent, we can find scalars $\alpha_1, \alpha_2, \ldots, \alpha_n$, not all zero, such that
$$\alpha_1 e_1 + \alpha_2 e_2 + \cdots + \alpha_n e_n = \theta.$$
Taking the scalar product of both sides with $e_j$, we get $\alpha_j \langle e_j, e_j\rangle = 0$, since $\langle e_i, e_j\rangle = 0$ for $i \neq j$. Therefore $\alpha_j = 0$ for $j = 1, 2, \ldots, n$, a contradiction. Hence $\{e_i\}$ is linearly independent.
3.7.5 Examples
1. $\mathbb{R}^n$ ($n$-dimensional Euclidean space): In $\mathbb{R}^n$ an orthonormal set is given by $e_1 = (1, 0, \ldots, 0)$, $e_2 = (0, 1, \ldots, 0)$, ..., $e_n = (0, 0, \ldots, 1)$.
2. $l_2$ space: In the $l_2$ space the orthonormal set $\{e_n\}$ is given by $e_1 = (1, 0, 0, \ldots)$, $e_2 = (0, 1, 0, \ldots)$, ..., $e_n = (0, 0, \ldots, 0, 1, 0, \ldots)$ ($1$ in the $n$th place).
3. $C([0, 2\pi])$: the inner product space of all continuous real-valued functions defined on $[0, 2\pi]$, with the inner product given by
$$\langle u, v\rangle = \int_0^{2\pi} u(t)v(t)\,dt.$$
We want to show that $\{u_n(t)\}$, where $u_n(t) = \sin nt$, is an orthogonal sequence:
$$\langle u_n, u_m\rangle = \int_0^{2\pi} u_n(t)u_m(t)\,dt = \int_0^{2\pi} \sin nt \sin mt\,dt = \begin{cases} 0, & n \neq m \\ \pi, & n = m = 1, 2, \ldots \end{cases}$$
Hence $\|u_n\| = \sqrt{\pi}$, and $\left\{\dfrac{\sin nt}{\sqrt{\pi}}\right\}$ is an orthonormal sequence.
On the other hand, if we take $v_n = \cos nt$,
$$\langle v_n, v_m\rangle = \int_0^{2\pi} \cos nt \cos mt\,dt = \begin{cases} 0, & m \neq n \\ \pi, & m = n = 1, 2, \ldots \\ 2\pi, & m = n = 0. \end{cases}$$
Hence $\|v_n\| = \sqrt{\pi}$ for $n \neq 0$. Therefore, $\dfrac{1}{\sqrt{2\pi}}, \dfrac{\cos t}{\sqrt{\pi}}, \ldots, \dfrac{\cos nt}{\sqrt{\pi}}, \ldots$ is an orthonormal sequence.
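The integrals in example 3.7.5(3) can be confirmed by quadrature; the grid size in the sketch below is an arbitrary choice of mine.

```python
import numpy as np

# Numerical check of the orthogonality relations for u_n(t) = sin nt on [0, 2π].
# The rectangle rule on a full-period equispaced grid is exact here
# (up to roundoff) for trigonometric polynomials of modest degree.
t = np.linspace(0.0, 2*np.pi, 4096, endpoint=False)
dt = t[1] - t[0]

def ip(f, g):
    # inner product <f, g> = ∫_0^{2π} f(t) g(t) dt
    return np.sum(f * g) * dt

for n in range(1, 4):
    for m in range(1, 4):
        val = ip(np.sin(n*t), np.sin(m*t))
        expected = np.pi if n == m else 0.0
        assert abs(val - expected) < 1e-9

# the normalized functions sin(nt)/sqrt(pi) then have unit norm
assert abs(ip(np.sin(t)/np.sqrt(np.pi), np.sin(t)/np.sqrt(np.pi)) - 1.0) < 1e-9
```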
3.7.6 Gram–Schmidt orthonormalization process
Theoretically, every element in a Hilbert space $H$ can be expressed in terms of any linearly independent set. But an orthonormal set in $H$ has an edge over other linearly independent sets in this regard. If $\{e_i\}$ is a linearly independent set and $x \in H$, a Hilbert space, then $x$ can be expressed in terms of the $e_i$ as
$$x = \sum_{i=1}^{n} c_i e_i.$$
In case the $e_i$, $i = 1, 2, \ldots, n$, are mutually orthonormal, then $c_i = \langle x, e_i\rangle$, since $\langle e_i, e_j\rangle = 0$ for $i \neq j$. Thus, in the case of an orthonormal system it is very easy to find the coefficients $c_i$, $i = 1, 2, \ldots, n$.
Another advantage of an orthonormal set is as follows. Suppose we want to add $c_{n+1}e_{n+1}$ to $x$, so that $\tilde x = x + c_{n+1}e_{n+1} \in \mathrm{span}\,\{e_1, e_2, \ldots, e_{n+1}\}$.
Now, $\langle \tilde x, e_j\rangle = \langle x, e_j\rangle + c_{n+1}\langle e_{n+1}, e_j\rangle = \langle x, e_j\rangle = c_j$, $j = 1, 2, \ldots, n$, and $\langle \tilde x, e_{n+1}\rangle = \langle x, e_{n+1}\rangle + c_{n+1}\langle e_{n+1}, e_{n+1}\rangle = c_{n+1}$, since $x \in \mathrm{span}\,\{e_1, e_2, \ldots, e_n\}$. Thus the determination of $c_{n+1}$ does not depend on the values $c_1, c_2, \ldots, c_n$.
In what follows we explain the Gram–Schmidt orthonormalization process.
Let $\{a_n\}$ be a (finite or countably infinite) set of linearly independent vectors in a Hilbert space $H$. The problem is to convert the set into an orthonormal set $\{e_n\}$ such that $\mathrm{span}\,\{e_1, e_2, \ldots, e_n\} = \mathrm{span}\,\{a_1, a_2, \ldots, a_n\}$ for each $n$. A few steps are written down.
Step 1. Normalize $a_1$, which is necessarily nonzero, so that $e_1 = \dfrac{a_1}{\|a_1\|}$.
Step 2. Let $g_2 = a_2 - \langle a_2, e_1\rangle e_1$, so that $\langle g_2, e_1\rangle = 0$. Here take $e_2 = \dfrac{g_2}{\|g_2\|}$, so that $e_2 \perp e_1$.
Here $g_2 \neq \theta$, because otherwise $a_2$ and $a_1$ would be linearly dependent, which is a contradiction. Since $g_2$ is a linear combination of $a_2$ and $a_1$, we have $\mathrm{span}\,\{a_1, a_2\} = \mathrm{span}\,\{e_1, e_2\}$.
Let us assume by way of induction that
$$g_m = a_m - \langle a_m, e_1\rangle e_1 - \langle a_m, e_2\rangle e_2 - \cdots - \langle a_m, e_{m-1}\rangle e_{m-1}$$
and $g_m \neq \theta$; then
$$\langle g_m, e_1\rangle = 0, \ \ldots, \ \langle g_m, e_{m-1}\rangle = 0, \quad \text{i.e.} \quad g_m \perp e_1, e_2, \ldots, e_{m-1}.$$
We take $e_m = \dfrac{g_m}{\|g_m\|}$ and assume $\mathrm{span}\,\{a_1, a_2, \ldots, a_m\} = \mathrm{span}\,\{e_1, e_2, \ldots, e_m\}$.
Next we take
$$g_{m+1} = a_{m+1} - \langle a_{m+1}, e_1\rangle e_1 - \langle a_{m+1}, e_2\rangle e_2 - \cdots - \langle a_{m+1}, e_m\rangle e_m.$$
Now,
$$\langle g_{m+1}, e_1\rangle = \langle g_{m+1}, e_2\rangle = \cdots = \langle g_{m+1}, e_m\rangle = 0,$$
i.e. $g_{m+1} \perp e_1, e_2, \ldots, e_m$.
If $g_{m+1} = \theta$, then $a_{m+1}$ is a linear combination of $e_1, e_2, \ldots, e_m$, i.e. $a_{m+1}$ is a linear combination of $a_1, a_2, \ldots, a_m$, which contradicts the hypothesis that $\{a_1, a_2, \ldots, a_{m+1}\}$ are linearly independent. Hence $g_{m+1} \neq \theta$, i.e. $\|g_{m+1}\| \neq 0$. We write $e_{m+1} = \dfrac{g_{m+1}}{\|g_{m+1}\|}$.
Hence $e_1, e_2, \ldots, e_{m+1}$ form an orthonormal system.
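The steps above can be sketched directly in code. This is a minimal illustrative implementation (the helper name `gram_schmidt` and the sample vectors are my own), following exactly the recursion $g_m = a_m - \sum_k \langle a_m, e_k\rangle e_k$, $e_m = g_m/\|g_m\|$:

```python
import numpy as np

def gram_schmidt(vectors):
    """Orthonormalize a linearly independent list, as in 3.7.6:
    g_m = a_m - sum_k <a_m, e_k> e_k,  e_m = g_m / ||g_m||."""
    es = []
    for a in vectors:
        g = a - sum(np.dot(a, e) * e for e in es)
        norm = np.linalg.norm(g)
        if norm < 1e-12:
            # g_{m+1} = 0 would mean the a's are linearly dependent
            raise ValueError("vectors are linearly dependent")
        es.append(g / norm)
    return es

a = [np.array([1.0, 1.0, 0.0]),
     np.array([1.0, 0.0, 1.0]),
     np.array([0.0, 1.0, 1.0])]
e = gram_schmidt(a)
E = np.column_stack(e)
assert np.allclose(E.T @ E, np.eye(3))   # the e_i are pairwise orthonormal
```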
3.7.7 Examples
1. (Orthonormal polynomials) Let $L_{2,\rho}([a, b])$ be the space of square-summable functions with weight function $\rho(t)$. Let us take the linearly independent set $1, t, t^2, \ldots, t^n, \ldots$ in $L_{2,\rho}([a, b])$. If we orthonormalize the above linearly independent set, we get a Chebyshev system of polynomials $p_0 = \text{const.}, p_1(t), p_2(t), \ldots, p_n(t), \ldots$, which are orthonormal with weight $\rho(t)$, i.e.,
$$\int_a^b \rho(t)\, p_i(t)\, p_j(t)\,dt = \delta_{ij}.$$
We mention below a few types of orthogonal polynomials.
$[a, b]$ | $\rho(t)$ | Polynomial
(i) $a = -1$, $b = 1$ | $1$ | Legendre polynomials
(ii) $a = -\infty$, $b = \infty$ | $e^{-t^2}$ | Hermite polynomials
(iii) $a = 0$, $b = \infty$ | $e^{-t}$ | Laguerre polynomials
(i) Legendre polynomials
Let us take $g_1 = a_1 = 1$; then $\langle g_1, g_1\rangle = \int_{-1}^{1} dt = 2$, i.e. $\|g_1\| = \sqrt{2}$. Take
$$e_1 = \frac{g_1}{\|g_1\|} = \frac{1}{\sqrt 2} = \left.\sqrt{\frac{2n+1}{2}}\, P_n(t)\right|_{n=0},$$
where
$$P_n(t) = \frac{1}{2^n n!} \frac{d^n}{dt^n}(t^2 - 1)^n. \qquad (3.31)$$
Take
$$g_2 = a_2 - \frac{\langle a_2, g_1\rangle}{\|g_1\|^2}\, g_1 = t - \frac{1}{2}\int_{-1}^{1} t\,dt = t.$$
Thus
$$\|g_2\|^2 = \int_{-1}^{1} t^2\,dt = \frac{2}{3}, \qquad e_2 = \frac{g_2}{\|g_2\|} = \sqrt{\frac{3}{2}}\,t = \left.\sqrt{\frac{2n+1}{2}}\, P_n(t)\right|_{n=1}.$$
It may be noted that
$$P_n(t) = \frac{1}{2^n n!} \frac{d^n}{dt^n}\left[t^{2n} - \binom{n}{1} t^{2n-2} + \cdots + (-1)^n\right] = \sum_{j=0}^{N} (-1)^j \frac{(2n - 2j)!}{2^n\, j!\,(n-j)!\,(n-2j)!}\, t^{n-2j}, \qquad (3.32)$$
applying the binomial theorem, where $N = \dfrac{n}{2}$ if $n$ is even and $N = \dfrac{n-1}{2}$ if $n$ is odd.
Next, let us take
$$g_3 = a_3 - \langle a_3, e_1\rangle e_1 - \langle a_3, e_2\rangle e_2 = t^2 - \frac{1}{2}\int_{-1}^{1} t^2\,dt - \frac{3}{2}\,t \int_{-1}^{1} t^3\,dt = t^2 - \frac{1}{3}.$$
$$\|g_3\|^2 = \int_{-1}^{1}\left(t^2 - \frac{1}{3}\right)^2 dt = \frac{8}{45};$$
$$e_3 = \sqrt{\frac{5}{2}}\cdot\frac{1}{2}(3t^2 - 1) = \left.\sqrt{\frac{2n+1}{2}}\, P_n(t)\right|_{n=2}.$$
We next want to show that
$$\|P_n\|^2 = \int_{-1}^{1} P_n^2(t)\,dt = \frac{2}{2n+1}. \qquad (3.33)$$
Let us write $v = t^2 - 1$. The function $v^n$ and its derivatives $(v^n)', \ldots, (v^n)^{(n-1)}$ are zero at $t = \pm 1$, and $(v^n)^{(2n)} = (2n)!$. Integrating $n$ times by parts, we thus obtain from (3.31)
$$(2^n n!)^2 \|P_n\|^2 = \int_{-1}^{1} (v^n)^{(n)} (v^n)^{(n)}\,dt$$
$$= \left[(v^n)^{(n-1)}(v^n)^{(n)}\right]_{-1}^{1} - \int_{-1}^{1} (v^n)^{(n-1)}(v^n)^{(n+1)}\,dt = \cdots = (-1)^n (2n)! \int_{-1}^{1} v^n\,dt$$
$$= 2(2n)! \int_{0}^{1} (1 - t^2)^n\,dt = 2(2n)! \int_{0}^{\pi/2} \cos^{2n+1}\theta\,d\theta \quad (t = \sin\theta) = \frac{2^{2n+1}(n!)^2}{2n+1}.$$
Thus $\|P_n\|^2 = \dfrac{2}{2n+1}$.
Next,
$$\langle P_m, P_n\rangle = \frac{1}{2^{m+n}\, m!\, n!} \int_{-1}^{1} \left((t^2 - 1)^m\right)^{(m)} \left((t^2 - 1)^n\right)^{(n)} dt = \frac{1}{2^{m+n}\, m!\, n!} \int_{-1}^{1} (v^m)^{(m)} (v^n)^{(n)}\,dt,$$
where $m > n$ (suppose) and $v = t^2 - 1$. Integrating by parts as before, the boundary terms vanish, and after $m$ integrations by parts
$$\langle P_m, P_n\rangle = \frac{(-1)^m}{2^{m+n}\, m!\, n!} \int_{-1}^{1} v^m\, (v^n)^{(n+m)}\,dt = 0, \quad m > n,$$
since $v^n$ is a polynomial of degree $2n < n + m$, so that $(v^n)^{(n+m)} \equiv 0$. A similar conclusion is drawn if $n > m$.
Thus $\{P_n(t)\}$ as given in (3.31) is an orthogonal sequence of polynomials, and $\left\{\sqrt{\tfrac{2n+1}{2}}\,P_n(t)\right\}$ is the corresponding orthonormal sequence.
The Legendre polynomials are solutions of the important Legendre differential equation
$$(1 - t^2)P_n'' - 2tP_n' + n(n+1)P_n = 0 \qquad (3.34)$$
and (3.32) can also be obtained by applying the power series method to (3.34).
Fig. 3.8 Legendre polynomials
Remark 1. Legendre polynomials find frequent use in applied mathematics, especially in quantum mechanics, numerical analysis, the theory of approximations, etc.
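As a numerical cross-check of (3.31) and (3.33), the sketch below (the function names are mine; it relies on numpy's `Polynomial` class) builds $P_n$ from the Rodrigues formula and verifies both the norm $2/(2n+1)$ and pairwise orthogonality:

```python
import math
import numpy as np
from numpy.polynomial import Polynomial

def legendre(n):
    # Rodrigues formula (3.31): P_n(t) = 1/(2^n n!) d^n/dt^n (t^2 - 1)^n
    p = Polynomial([-1.0, 0.0, 1.0]) ** n
    return p.deriv(n) / (2.0**n * math.factorial(n))

def ip(p, q):
    # <p, q> = ∫_{-1}^{1} p(t) q(t) dt, computed exactly via the antiderivative
    r = (p * q).integ()
    return r(1.0) - r(-1.0)

for n in range(5):
    assert abs(ip(legendre(n), legendre(n)) - 2.0/(2*n + 1)) < 1e-9  # (3.33)
    for m in range(n):
        assert abs(ip(legendre(m), legendre(n))) < 1e-9              # orthogonality
```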
(ii) Hermite polynomials
Since $a = -\infty$ and $b = \infty$, we consider $L_2((-\infty, \infty))$, which is also a Hilbert space. Since the interval of integration is infinite, we need to introduce a weight function which will make the integrals convergent. We take the weight function $w(t) = e^{-t^2/2}$, so that the Gram–Schmidt orthonormalization process is to be applied to $w, wt, wt^2, \ldots$ etc.
$$\|a_0\|^2 = \int_{-\infty}^{\infty} w^2(t)\,dt = \int_{-\infty}^{\infty} e^{-t^2}\,dt = \sqrt{\pi}.$$
Take
$$e_0 = \frac{w}{\pi^{1/4}} = \frac{e^{-t^2/2}}{\pi^{1/4}}.$$
$$g_1 = a_1 - \frac{\langle a_1, g_0\rangle}{\|g_0\|^2}\, g_0 = wt - \frac{\int_{-\infty}^{\infty} t\, e^{-t^2}\,dt}{\sqrt\pi}\, w = wt,$$
since the integrand $t\,e^{-t^2}$ is odd.
$$\|g_1\|^2 = \int_{-\infty}^{\infty} w^2 t^2\,dt = \int_{-\infty}^{\infty} t^2 e^{-t^2}\,dt = \int_0^{\infty} z^{1/2} e^{-z}\,dz \quad (\text{putting } z = t^2)$$
$$= \Gamma\!\left(\frac{3}{2}\right) = \frac{\sqrt\pi}{2}, \quad \text{since } \Gamma(n+1) = n\Gamma(n),$$
so that
$$e_1(t) = \frac{g_1}{\|g_1\|} = \frac{2t\, e^{-t^2/2}}{\sqrt{2}\,\pi^{1/4}}.$$
We want to show that
$$e_n(t) = \frac{e^{-t^2/2}}{\sqrt{2^n n! \sqrt\pi}}\, H_n(t), \quad n \ge 1, \qquad (3.35)$$
and
$$e_0(t) = \frac{e^{-t^2/2}}{\pi^{1/4}}\, H_0(t),$$
where
$$H_0(t) = 1, \qquad H_n(t) = (-1)^n e^{t^2} \frac{d^n}{dt^n}\left(e^{-t^2}\right), \quad n = 1, 2, 3, \ldots \qquad (3.36)$$
The $H_n$ are called Hermite polynomials of order $n$.
Performing the differentiations indicated in (3.36) we obtain
$$H_n(t) = n! \sum_{j=0}^{N} (-1)^j \frac{2^{n-2j}}{j!\,(n-2j)!}\, t^{n-2j}, \qquad (3.37)$$
where $N = \dfrac{n}{2}$ if $n$ is even and $N = \dfrac{n-1}{2}$ if $n$ is odd. The above form can also be written as
$$H_n(t) = \sum_{j=0}^{N} \frac{(-1)^j}{j!}\, n(n-1)\cdots(n-2j+1)\,(2t)^{n-2j}. \qquad (3.38)$$
Thus
$$e_0(t) = \frac{e^{-t^2/2}}{\pi^{1/4}}\, H_0(t), \qquad e_1(t) = \frac{2t\, e^{-t^2/2}}{\sqrt 2\, \pi^{1/4}} = \left.\frac{e^{-t^2/2}}{\sqrt{2^n n! \sqrt\pi}}\, H_n(t)\right|_{n=1}.$$
(3.36) yields explicit expressions for $H_n(t)$, as given below for a few values of $n$:
$$H_0(t) = 1, \quad H_1(t) = 2t, \quad H_2(t) = 4t^2 - 2, \quad H_3(t) = 8t^3 - 12t,$$
$$H_4(t) = 16t^4 - 48t^2 + 12, \quad H_5(t) = 32t^5 - 160t^3 + 120t.$$
We next want to show that $\{e_n(t)\}$ is orthonormal, where $e_n(t)$ is given by (3.35) and $H_n(t)$ by (3.37):
$$\langle e_n, e_m\rangle = \frac{1}{\sqrt{2^{n+m}\, n!\, m!}\,\sqrt\pi} \int_{-\infty}^{\infty} e^{-t^2} H_m H_n\,dt.$$
Differentiating (3.38) we obtain, for $n \ge 1$,
$$H_n'(t) = 2n \sum_{j=0}^{M} \frac{(-1)^j}{j!}\,(n-1)(n-2)\cdots(n-2j)\,(2t)^{n-1-2j} = 2nH_{n-1}(t),$$
where $M = \dfrac{n-2}{2}$ if $n$ is even and $M = \dfrac{n-1}{2}$ if $n$ is odd.
Let us assume $m \le n$ and put $u = e^{-t^2}$, so that by (3.36), $H_n(t)\,u = (-1)^n u^{(n)}$. Integrating $m$ times by parts we obtain
$$\int_{-\infty}^{\infty} e^{-t^2} H_m(t) H_n(t)\,dt = (-1)^n \int_{-\infty}^{\infty} H_m(t)\, u^{(n)}\,dt$$
$$= (-1)^n \left[H_m(t)\,u^{(n-1)}\right]_{-\infty}^{\infty} - (-1)^n \cdot 2m \int_{-\infty}^{\infty} H_{m-1}(t)\, u^{(n-1)}\,dt$$
$$= \cdots = (-1)^{n+m}\, 2^m m! \int_{-\infty}^{\infty} H_0(t)\, u^{(n-m)}\,dt = (-1)^{n+m}\, 2^m m! \left[u^{(n-m-1)}\right]_{-\infty}^{\infty} = 0$$
if $n > m$, because as $t \to \pm\infty$, $e^{-t^2}$ times any polynomial tends to $0$, so each boundary term vanishes.
This proves the orthogonality of $\{e_m(t)\}$ when $m \neq n$. When $m = n$,
$$\int_{-\infty}^{\infty} e^{-t^2} H_n^2(t)\,dt = 2^n n! \int_{-\infty}^{\infty} H_0(t)\, e^{-t^2}\,dt = 2^n n! \sqrt\pi.$$
This proves the orthonormality of $\{e_n\}$.
The Hermite polynomials $H_n$ satisfy the Hermite differential equation $H_n'' - 2tH_n' + 2nH_n = 0$.
Like Legendre polynomials, Hermite polynomials also find applications in numerical analysis, approximation theory, quantum mechanics, etc.
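A hedged numerical check of the Hermite computations: the sketch below generates $H_n$ by the standard three-term recurrence $H_{n+1} = 2tH_n - 2nH_{n-1}$ (equivalent to (3.36), though the recurrence itself is only quoted here, not derived in the text) and verifies $\int e^{-t^2} H_n H_m\,dt = 2^n n!\sqrt\pi\,\delta_{nm}$ with Gauss–Hermite quadrature:

```python
import math
import numpy as np

def hermite(n, t):
    # physicists' Hermite polynomials via H_{n+1} = 2 t H_n - 2 n H_{n-1}
    h0, h1 = np.ones_like(t), 2.0 * t
    if n == 0:
        return h0
    for k in range(1, n):
        h0, h1 = h1, 2.0 * t * h1 - 2.0 * k * h0
    return h1

# Gauss-Hermite nodes/weights: ∫ e^{-t^2} f(t) dt is exact for
# polynomials f of degree < 2 * (number of nodes)
t, w = np.polynomial.hermite.hermgauss(20)

for n in range(6):
    for m in range(6):
        val = np.sum(w * hermite(n, t) * hermite(m, t))
        expected = 2.0**n * math.factorial(n) * math.sqrt(math.pi) if n == m else 0.0
        assert abs(val - expected) < 1e-6 * max(1.0, expected)
```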
(iii) Laguerre polynomials
We consider $L_2([0, \infty))$ and apply the Gram–Schmidt process to the sequence defined by
$$e^{-t/2}, \quad te^{-t/2}, \quad t^2 e^{-t/2}, \ldots$$
Take
$$g_0 = e^{-t/2}, \qquad \|g_0\|^2 = \int_0^{\infty} e^{-t}\,dt = 1, \qquad e_0 = \frac{g_0}{\|g_0\|} = e^{-t/2}.$$
$$g_1 = te^{-t/2} - \langle te^{-t/2},\, e^{-t/2}\rangle\, e^{-t/2};$$
$$\langle te^{-t/2},\, e^{-t/2}\rangle = \int_0^{\infty} te^{-t}\,dt = \left[-te^{-t}\right]_0^{\infty} + \int_0^{\infty} e^{-t}\,dt = 1.$$
$$g_1 = (t - 1)e^{-t/2}, \qquad \|g_1\|^2 = \int_0^{\infty} (t-1)^2 e^{-t}\,dt = 1,$$
$$e_1(t) = (t - 1)e^{-t/2}.$$
Let us take (up to sign, an orthonormal vector being determined only up to a factor of modulus one)
$$e_n(t) = e^{-t/2} L_n(t), \quad n = 0, 1, 2, \ldots \qquad (3.39)$$
where the Laguerre polynomial of order $n$ is defined by
$$L_0(t) = 1, \qquad L_n(t) = \frac{e^t}{n!} \frac{d^n}{dt^n}\left(t^n e^{-t}\right), \quad n = 1, 2, \ldots \qquad (3.40)$$
i.e.
$$L_n(t) = \sum_{j=0}^{n} \frac{(-1)^j}{j!} \binom{n}{j}\, t^j. \qquad (3.41)$$
Explicit expressions for the first few Laguerre polynomials are
$$L_0(t) = 1, \quad L_1(t) = 1 - t, \quad L_2(t) = 1 - 2t + \frac{1}{2}t^2,$$
$$L_3(t) = 1 - 3t + \frac{3}{2}t^2 - \frac{1}{6}t^3, \quad L_4(t) = 1 - 4t + 3t^2 - \frac{2}{3}t^3 + \frac{1}{24}t^4.$$
The Laguerre polynomials $L_n$ are solutions of the Laguerre differential equation
$$tL_n'' + (1 - t)L_n' + nL_n = 0. \qquad (3.42)$$
In what follows we find $e_2(t)$ by the Gram–Schmidt process. We know $g_2(t) = a_2 - \langle a_2, e_1\rangle e_1 - \langle a_2, e_0\rangle e_0$, where $a_2(t) = t^2 e^{-t/2}$.
$$\langle a_2, e_1\rangle = \int_0^{\infty} t^2(t - 1)e^{-t}\,dt = \int_0^{\infty} (t^3 - t^2)e^{-t}\,dt = 3! - 2! = 4;$$
$$\langle a_2, e_0\rangle = \int_0^{\infty} t^2 e^{-t}\,dt = 2.$$
$$g_2(t) = t^2 e^{-t/2} - 4(t - 1)e^{-t/2} - 2e^{-t/2} = (t^2 - 4t + 2)e^{-t/2}.$$
$$\|g_2\|^2 = \int_0^{\infty} (t^2 - 4t + 2)^2 e^{-t}\,dt = \int_0^{\infty} (t^4 - 8t^3 + 20t^2 - 16t + 4)e^{-t}\,dt = 24 - 48 + 40 - 16 + 4 = 4.$$
$$e_2(t) = \frac{g_2(t)}{\|g_2\|} = \left(1 - 2t + \frac{1}{2}t^2\right)e^{-t/2}.$$
The orthogonality of $\{L_m(t)\}$ can be proved as before.
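That orthogonality can also be checked numerically. The sketch below (function names mine) builds $L_n$ from the series (3.41) and uses Gauss–Laguerre quadrature, which evaluates $\int_0^{\infty} e^{-t} f(t)\,dt$ exactly for polynomials of moderate degree:

```python
import numpy as np
from math import comb, factorial
from numpy.polynomial import Polynomial

def laguerre(n):
    # series form (3.41): L_n(t) = Σ_j (-1)^j C(n, j) t^j / j!
    return Polynomial([(-1)**j * comb(n, j) / factorial(j) for j in range(n + 1)])

# Gauss-Laguerre nodes/weights for ∫_0^∞ e^{-t} f(t) dt
t, w = np.polynomial.laguerre.laggauss(15)

for n in range(5):
    for m in range(5):
        val = np.sum(w * laguerre(n)(t) * laguerre(m)(t))
        # ∫_0^∞ e^{-t} L_n L_m dt = δ_{nm}
        assert abs(val - (1.0 if n == m else 0.0)) < 1e-8
```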
3.7.8 Fourier series
Let $L$ be a linear subspace of a Hilbert space spanned by $e_1, e_2, \ldots, e_n, \ldots$ and let $x \in \bar L$. Therefore, for every $\epsilon > 0$ there is a linear combination $\sum_{i=1}^{n} \alpha_i e_i$ such that
$$\left\|x - \sum_{i=1}^{n} \alpha_i e_i\right\| < \epsilon.$$
Hence
$$0 \le \left\|x - \sum_{i=1}^{n} \alpha_i e_i\right\|^2 = \left\langle x - \sum_{i=1}^{n} \alpha_i e_i,\ x - \sum_{i=1}^{n} \alpha_i e_i\right\rangle$$
$$= \langle x, x\rangle - \left\langle x, \sum_{i=1}^{n} \alpha_i e_i\right\rangle - \left\langle \sum_{i=1}^{n} \alpha_i e_i,\, x\right\rangle + \left\langle \sum_{i=1}^{n} \alpha_i e_i,\ \sum_{j=1}^{n} \alpha_j e_j\right\rangle$$
$$= \|x\|^2 - \sum_{i=1}^{n} \bar\alpha_i d_i - \sum_{i=1}^{n} \alpha_i \bar d_i + \sum_{i=1}^{n} |\alpha_i|^2, \quad \text{where } d_i = \langle x, e_i\rangle.$$
Therefore,
$$\left\|x - \sum_{i=1}^{n} \alpha_i e_i\right\|^2 = \|x\|^2 - \sum_{i=1}^{n} |d_i|^2 + \sum_{i=1}^{n} |\alpha_i - d_i|^2.$$
The numbers $d_i$ are called the Fourier coefficients of the element $x$ with respect to the orthonormal system $\{e_i\}$.
The expression on the RHS, for different values of the $\alpha_i$, takes its least value when $\alpha_i = d_i$, the $i$th Fourier coefficient of $x$. Hence,
$$0 \le \left\|x - \sum_{i=1}^{n} d_i e_i\right\|^2 = \|x\|^2 - \sum_{i=1}^{n} |d_i|^2 < \epsilon^2. \qquad (3.43)$$
$\epsilon$ being arbitrarily small, it follows that
$$x = \lim_{n\to\infty} \sum_{i=1}^{n} d_i e_i = \sum_{i=1}^{\infty} d_i e_i.$$
The convergence of the series $\sum_{i=1}^{\infty} |d_i|^2$ also follows from (3.43), and moreover
$$\sum_{i=1}^{\infty} |d_i|^2 = \|x\|^2. \qquad (3.44)$$
Next let $x$ be an arbitrary element in the Hilbert space $H$. Let $y$ be the projection of $x$ on $\bar L$. Then
$$y = \sum_{i=1}^{\infty} d_i e_i, \quad \text{where } d_i = \langle y, e_i\rangle = \langle x, e_i\rangle, \quad \text{and} \quad \sum_{i=1}^{\infty} |d_i|^2 = \|y\|^2.$$
Since $x = y + z$, $y \in \bar L$, $z \perp \bar L$, it follows from the Pythagorean theorem that
$$\|x\|^2 = \|y\|^2 + \|z\|^2 \ge \|y\|^2. \qquad (3.45)$$
Consequently, the inequality
$$\sum_{i=1}^{\infty} |d_i|^2 \le \|x\|^2 \qquad (3.46)$$
holds for any $x \in H$, where $d_i = \langle x, e_i\rangle$, $i = 1, 2, 3, \ldots$. The above inequality is called the Bessel inequality.
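The Fourier coefficients and the Bessel inequality (3.46) can be illustrated numerically. In this sketch the choice $x(t) = t$, the grid, and the truncation at $n = 49$ are arbitrary choices of mine; the orthonormal system is the trigonometric one from example 3.7.5(3):

```python
import numpy as np

t = np.linspace(0.0, 2*np.pi, 8192, endpoint=False)
dt = t[1] - t[0]

def ip(f, g):
    # inner product <f, g> = ∫_0^{2π} f(t) g(t) dt (rectangle rule)
    return np.sum(f * g) * dt

x = t.copy()                        # the element x(t) = t of L2([0, 2π])

# orthonormal trig system: 1/sqrt(2π), cos(nt)/sqrt(π), sin(nt)/sqrt(π)
basis = [np.full_like(t, 1.0/np.sqrt(2*np.pi))]
for n in range(1, 50):
    basis.append(np.cos(n*t)/np.sqrt(np.pi))
    basis.append(np.sin(n*t)/np.sqrt(np.pi))

d = np.array([ip(x, e) for e in basis])    # Fourier coefficients d_i = <x, e_i>
assert np.sum(d**2) <= ip(x, x) + 1e-9     # Bessel inequality (3.46)

# the choice α_i = d_i gives a strictly better approximation than x itself
approx = sum(di * e for di, e in zip(d, basis))
assert ip(x - approx, x - approx) < ip(x, x)
```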
3.8
Complete Orthonormal System
It is known that in two-dimensional Euclidean space $\mathbb{R}^2$ any vector can be uniquely expressed in terms of two mutually orthogonal vectors of unit norm; in other words, any nonzero vector $x$ can be expressed uniquely in terms of the basis vectors $e_1, e_2$. Similarly, in $\mathbb{R}^3$ we have three basis vectors $e_1, e_2, e_3$, each of unit norm and pairwise orthogonal. This concept can easily be extended to $n$-dimensional Euclidean space, where we have $n$ pairwise orthonormal basis vectors $e_1, e_2, \ldots, e_n$. But the question that invariably comes up is whether this concept can be extended to infinite dimensions; in other words, whether an infinite-dimensional space, like an inner product space, can contain a sufficiently large set of orthonormal vectors such that any element $x \in H$ (a Hilbert space) can be uniquely represented by the said set of orthonormal vectors.
Let $H \neq \{\theta\}$ be a Hilbert space. Then the collection $\mathcal{C}$ of all orthonormal subsets of $H$ is clearly nonempty. It can easily be seen that the class $\mathcal{C}$ can be partially ordered under the set inclusion relation.
3.8.1 Definition: complete orthonormal system
V.A. Steklov first introduced the concept of a complete orthonormal system. An orthonormal system $\{e_i\}$ in $H$ is said to be a complete orthonormal system if there is no nonzero $x \in H$ such that $x$ is orthogonal to every element of $\{e_i\}$. In other words, a complete orthonormal system cannot be extended to a larger orthonormal system by adding new elements to $\{e_i\}$; hence a complete orthonormal system is maximal with respect to inclusion.
3.8.2 Examples
1. The unit vectors $e_1 = (1, 0, 0)$, $e_2 = (0, 1, 0)$, $e_3 = (0, 0, 1)$ in the directions of the three axes of a rectangular coordinate system form a complete orthonormal system.
2. In the Hilbert space $l_2$, the sequence $\{e_n\}$, where $e_n = \{\delta_{nj}\}_{j=1}^{\infty}$, is a complete orthonormal system.
3. The orthonormal system in example 3.7.5(3), i.e., the sequence $\dfrac{1}{\sqrt{2\pi}}, \dfrac{\cos t}{\sqrt\pi}, \dfrac{\cos 2t}{\sqrt\pi}, \ldots$, although an orthonormal set, is not complete, for $\langle \sin t, u_n\rangle = 0$ for every member $u_n$ of the system. But the system $\dfrac{1}{\sqrt{2\pi}}, \dfrac{\cos t}{\sqrt\pi}, \dfrac{\sin t}{\sqrt\pi}, \dfrac{\cos 2t}{\sqrt\pi}, \ldots$ is a complete orthonormal system.
3.8.3 Theorem
Every Hilbert space $H \neq \{\theta\}$ contains a complete orthonormal set.
Proof: Consider the family $\mathcal{C}$ of orthonormal sets in $H$, partially ordered by set inclusion. For any nonzero $x \in H$, the set $\left\{\dfrac{x}{\|x\|}\right\}$ is an orthonormal set; therefore $\mathcal{C} \neq \emptyset$. Now let us consider any totally ordered subfamily of $\mathcal{C}$. The union of the sets in this subfamily is clearly an orthonormal set and is an upper bound for the totally ordered subfamily. Therefore, by Zorn's lemma (1.1.4), we conclude that $\mathcal{C}$ has a maximal element, which is a complete orthonormal set in $H$.
The next theorem provides another characterization of a complete orthonormal system.
3.8.4 Theorem
Let $\{e_i\}$ be an orthonormal set in a Hilbert space $H$. Then $\{e_i\}$ is a complete orthonormal set if and only if it is impossible to adjoin an additional element $e \in H$, $e \neq \theta$, to $\{e_i\}$ such that $\{e, e_i\}$ is an orthonormal set in $H$.
Proof: Suppose $\{e_i\}$ is a complete orthonormal set, and let it be possible to adjoin an additional vector $e \in H$ of unit norm, $e \neq \theta$, such that $\{e, e_i\}$ is an orthonormal set in $H$; i.e. $e \perp \{e_i\}$ and $e$ is a nonzero vector of unit norm. But this contradicts the fact that $\{e_i\}$ is a complete orthonormal set. On the other hand, let us suppose that it is impossible to adjoin an additional element $e \in H$, $e \neq \theta$, to $\{e_i\}$ such that $\{e, e_i\}$ is an orthonormal set. In other words, there exists no nonzero $e \in H$ such that $e \perp \{e_i\}$ (for any such $e$ could be normalized). Hence the system $\{e_i\}$ is a complete orthonormal system.
In what follows we define a closed orthonormal system in a Hilbert space and show that it is the same as a complete orthonormal system.
3.8.5 Definition: closed orthonormal system
An orthonormal system $\{e_i\}$ in $H$ is said to be closed if the closed subspace $L$ spanned by the system coincides with $H$.
3.8.6 Theorem
A Fourier series with respect to a closed orthonormal system, constructed for any $x \in H$, converges to this element, and for every $x \in H$ the Parseval–Steklov equality
$$\sum_{i=1}^{\infty} d_i^2 = \|x\|^2 \qquad (3.47)$$
holds.
Proof: Let $\{e_i\}$ be a closed orthonormal system. Then the closed subspace spanned by $\{e_i\}$ coincides with $H$. Let $x \in H$ be any element. Then the Fourier series (cf. 3.7.8) with respect to the closed system is given by
$$x = \sum_{i=1}^{\infty} d_i e_i, \quad \text{where } d_i = \langle x, e_i\rangle.$$
Then by the relation (3.43) we have
$$\|x\|^2 = \sum_{i=1}^{\infty} d_i^2, \quad \text{where } d_i = \langle x, e_i\rangle \text{ and } \|e_i\|^2 = 1.$$
3.8.7 Corollary
An orthonormal system is complete if and only if the system is closed.
Proof: Let $\{e_i\}$ be a complete orthonormal system in $H$. If $\{e_i\}$ is not closed, let the closed subspace spanned by $\{e_i\}$ be $L$, where $L \neq H$. Then by the projection theorem there is a nonzero $x \in H$ orthogonal to $L$, and hence to every $e_i$, contradicting the completeness of $\{e_i\}$. Hence $\{e_i\}$ is closed.
Conversely, let us suppose that the orthonormal system $\{e_i\}$ is closed. Then, by theorem 3.8.6, we have for any $x \in H$
$$\|x\|^2 = \sum_{i=1}^{\infty} d_i^2, \quad \text{where } d_i = \langle x, e_i\rangle.$$
If $x \perp e_i$, $i = 1, 2, \ldots$, that is, $d_i = 0$, $i = 1, 2, \ldots$, then $\|x\| = 0$, i.e. $x = \theta$. This implies that the system $\{e_i\}$ is complete.
3.8.8 Definition: orthonormal basis
A closed orthonormal system in a Hilbert space $H$ is also called an orthonormal basis of $H$.
Note 3.8.1. We note that completeness of $\{e_n\}$ is tantamount to the statement that each $x \in L_2$ has the Fourier expansion
$$x(t) = \frac{1}{\sqrt{2\pi}} \sum_{n=-\infty}^{\infty} x_n e^{int}. \qquad (3.48)$$
It must be emphasized that the expansion is not to be interpreted as saying that the series converges pointwise to the function $x(t)$.
One can conclude only that the partial sum of (3.48), i.e. the vector
$$u_n(t) = \frac{1}{\sqrt{2\pi}} \sum_{k=-n}^{n} x_k e^{ikt},$$
converges to the vector $x$ in the sense of $L_2$, i.e.,
$$\|u_n(t) - x(t)\| \to 0 \quad \text{as } n \to \infty.$$
This situation is often expressed by saying that $x$ is the limit in the mean of the $u_n$'s.
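The limit-in-the-mean statement of Note 3.8.1 can be seen numerically: for a square wave (which no trigonometric polynomial matches pointwise at the jumps), the $L_2$ error of the partial sums still decreases. The square-wave example and grid size below are my own choices:

```python
import numpy as np

t = np.linspace(0.0, 2*np.pi, 8192, endpoint=False)
dt = t[1] - t[0]
x = np.sign(np.sin(t))             # a square wave, an element of L2([0, 2π])

def partial_sum(n):
    # Fourier partial sum with real sine coefficients c_k = <x, sin kt>/π
    u = np.zeros_like(t)
    for k in range(1, n + 1):
        c = np.sum(x * np.sin(k*t)) * dt / np.pi
        u += c * np.sin(k*t)
    return u

# the L2 norm ||u_n - x|| decreases as n grows, even though the partial
# sums never agree with x pointwise at the discontinuities
errs = [np.sqrt(np.sum((partial_sum(n) - x)**2) * dt) for n in (5, 20, 80)]
assert errs[0] > errs[1] > errs[2]
```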
3.8.9 Theorem
In a separable Hilbert space $H$, a complete orthonormal set is enumerable.
Proof: Let $H$ be a separable Hilbert space; then there exists an enumerable set $M$, everywhere dense in $H$, i.e., $\bar M = H$. Let $\{e_{\lambda}\}$ be a complete orthonormal set in $H$. If possible, let the set $\{e_{\lambda}\}$ be non-enumerable; then there is an element $e_{\mu} \in \{e_{\lambda}\}$ such that $e_{\mu} \notin M$. Since $\bar M = H$ and $M$ is dense in $H$, there is a sequence $\{x_n\} \subseteq M$ such that $\lim_{n\to\infty} x_n = e_{\mu}$.
Since $\{e_{\lambda}\}$ is a complete set in $H$ and $x_n \in M \subseteq H$, $x_n$ can be expressed as
$$x_n = \sum_{\lambda} C_{\lambda}^{(n)} e_{\lambda}, \quad \text{so that} \quad e_{\mu} = \lim_{n\to\infty} x_n = \lim_{n\to\infty} \sum_{\lambda} C_{\lambda}^{(n)} e_{\lambda}.$$
The above relation shows that $e_{\mu}$ is a limit of linear combinations of $\{e_{\lambda}\}$, $\lambda \neq \mu$, which is a contradiction, since $\{e_{\lambda}\}$ is an orthonormal set and hence linearly independent.
Hence $\{e_{\lambda}\}$ cannot be non-enumerable.
Problems [3.7 and 3.8]
1. Let $\{a_1, a_2, \ldots, a_n\}$ be an orthogonal set in a Hilbert space $H$, and let $\lambda_1, \lambda_2, \ldots, \lambda_n$ be scalars whose absolute values are respectively $1$. Show that
$$\|\lambda_1 a_1 + \cdots + \lambda_n a_n\| = \|a_1 + a_2 + \cdots + a_n\|.$$
2. Let $\{e_n\}$ be an orthonormal sequence in a Hilbert space $H$. If $\{\lambda_n\}$ is a sequence of scalars such that $\sum_{i=1}^{\infty} |\lambda_i|^2$ converges, then show that $\sum_{i=1}^{\infty} \lambda_i e_i$ converges to an $x \in H$ and $\lambda_n = \langle x, e_n\rangle$, $n \in \mathbb{N}$.
3. Let $\{e_1, e_2, \ldots, e_n\}$ be a finite orthonormal set in a Hilbert space $H$ and let $x$ be a vector in $H$. If $p_1, p_2, \ldots, p_n$ are arbitrary scalars, show that $\left\|x - \sum_{i=1}^{n} p_i e_i\right\|$ attains its minimum value when $p_i = \langle x, e_i\rangle$ for each $i$.
4. Let $\{e_n\}$ be an orthonormal sequence in a Hilbert space $H$. Prove that
$$\sum_{n=1}^{\infty} |\langle x, e_n\rangle\, \langle y, e_n\rangle| \le \|x\|\,\|y\|, \quad x, y \in H.$$
5. Show that on the unit disk $B(|z| < 1)$ of the complex plane, $z = x + iy$, the functions
$$g_k(z) = \left(\frac{k}{\pi}\right)^{1/2} z^{k-1} \quad (k = 1, 2, 3, \ldots)$$
form an orthonormal system under the usual definition of a scalar product on a complex domain.
6. Let $\{e_{\lambda}\}$ be an orthonormal set in a Hilbert space $H$.
(a) If $x$ belongs to the closure of $\mathrm{span}\,\{e_n\}$, then show that
$$x = \sum_{n=1}^{\infty} \langle x, e_n\rangle e_n \quad \text{and} \quad \|x\|^2 = \sum_{n=1}^{\infty} |\langle x, e_n\rangle|^2,$$
where $\{e_1, e_2, \ldots\} = \{e_{\lambda} : \langle x, e_{\lambda}\rangle \neq 0\}$.
(b) Prove that $\mathrm{span}\,\{e_{\lambda}\}$ is dense in $H$ if and only if every $x$ in $H$ has a Fourier expansion as above and, for every $x, y$ in $H$, the identity
$$\langle x, y\rangle = \sum_{n=1}^{\infty} \langle x, e_n\rangle\, \langle e_n, y\rangle$$
holds, where $\{e_1, e_2, \ldots\} = \{e_{\lambda} : \langle x, e_{\lambda}\rangle \neq 0 \text{ and } \langle y, e_{\lambda}\rangle \neq 0\}$.
7. Let $L$ be a complete orthonormal set in a Hilbert space $H$. If $\langle u, x\rangle = \langle v, x\rangle$ for all $x \in L$, show that $u = v$.
8. The Haar system in $L_2([0, 1])$ is defined as follows:
$$\chi_0^{(0)}(x) = 1, \quad x \in [0, 1];$$
$$\chi_1^{(0)}(x) = \begin{cases} 1, & x \in \left[0, \frac{1}{2}\right) \\ -1, & x \in \left(\frac{1}{2}, 1\right] \\ 0, & x = \frac{1}{2} \end{cases}$$
and for $m = 1, 2, \ldots$; $K = 1, \ldots, 2^m$,
$$\chi_m^{(K)}(x) = \begin{cases} 2^{m/2}, & x \in \left(\dfrac{K-1}{2^m},\ \dfrac{K - \frac{1}{2}}{2^m}\right) \\[4pt] -2^{m/2}, & x \in \left(\dfrac{K - \frac{1}{2}}{2^m},\ \dfrac{K}{2^m}\right) \\[4pt] 0, & x \in [0, 1] \setminus \left[\dfrac{K-1}{2^m},\ \dfrac{K}{2^m}\right], \end{cases}$$
and at the finite set of points at which $\chi_m^{(K)}(x)$ has not yet been defined, let $\chi_m^{(K)}(x)$ be the average of the left and right limits of $\chi_m^{(K)}(x)$ as $x$ approaches the point in question; at $0$ and $1$, let $\chi_m^{(K)}(x)$ assume the value of the one-sided limit. Show that the Haar system given by
$$\{\chi_0^{(0)},\ \chi_1^{(0)},\ \chi_m^{(K)},\ m = 1, 2, \ldots;\ K = 1, \ldots, 2^m\}$$
is orthonormal in $L_2([0, 1])$.
9. If $H$ has a denumerable orthonormal basis, show that every orthonormal basis for $H$ is denumerable.
10. Show that the Legendre differential equation can be written as
$$\left[(1 - t^2)P_n'\right]' = -n(n+1)P_n.$$
Multiply the above equation by $P_m$ and the corresponding equation in $P_m$ by $P_n$. Then, subtracting the two and integrating the resulting equation from $-1$ to $1$, show that $\{P_n\}$ is an orthogonal sequence in $L_2([-1, 1])$.
11. (Generating function) Show that
$$\frac{1}{\sqrt{1 - 2tw + w^2}} = \sum_{n=0}^{\infty} P_n(t)\, w^n.$$
12. (Generating function) Show that
$$\exp(2wt - w^2) = \sum_{n=0}^{\infty} \frac{1}{n!} H_n(t)\, w^n.$$
The function on the left is called a generating function of the Hermite polynomials.
13. Given $H_0(t) = 1$, $H_n(t) = (-1)^n e^{t^2} \dfrac{d^n}{dt^n}\left(e^{-t^2}\right)$, $n = 1, 2, \ldots$, show that $H_{n+1}(t) = 2tH_n(t) - H_n'(t)$.
14. Differentiating the generating function in problem 12 with respect to $t$, show that $H_n'(t) = 2nH_{n-1}(t)$, $(n \ge 1)$, and using problem 13 show that $H_n$ satisfies the Hermite differential equation.
128
3.9
A First Course in Functional Analysis
Isomorphism between Separable Hilbert
Spaces
Consider a separable Hilbert space H and let {e_i} be a complete
orthonormal system in the space. If x is some element in H then we
can assign to this element a sequence of numbers {c_1, c_2, . . ., c_n, . . .}, the
Fourier coefficients. As shown earlier the series Σ_{i=1}^∞ |c_i|² is convergent
and consequently the sequence {c_1, c_2, . . ., c_n, . . .} can be treated as some
element x̃ of the complex space l². Thus to every element x ∈ H there
can be assigned some element x̃ ∈ l². Moreover, the assumption on the
completeness of the system implies
‖x‖_H = (Σ_{i=1}^∞ |c_i|²)^{1/2} = ‖x̃‖_{l²}      (3.49)
where the subscripts H and l² denote the respective spaces whose norms
are taken.
Moreover, it is clear that if x ∈ H corresponds to x̃ ∈ l², and y ∈ H
corresponds to ỹ ∈ l², then x − y corresponds to x̃ − ỹ. It then follows from
(3.49) that
‖x − y‖_H = ‖x̃ − ỹ‖_{l²}.      (3.50)
Let us suppose that z̃ = {ζ_i} is an arbitrary element in l². We next consider
in H the elements z_n = Σ_{i=1}^n ζ_i e_i, n = 1, 2, . . .
We have then
‖z_n − z_m‖² = ‖Σ_{i=m+1}^n ζ_i e_i‖² = Σ_{i=m+1}^n |ζ_i|².
Now,
‖z_m − z_n‖ → 0 as n, m → ∞.
Then {z_n} is a Cauchy sequence in the sense of the metric in H and,
by virtue of completeness of H, converges to some element z of the space.
Since
⟨z, e_i⟩ = lim_{n→∞} ⟨z_n, e_i⟩ = ζ_i, i = 1, 2, . . .,
it therefore follows that the ζ_i are Fourier coefficients of z w.r.t. the chosen
orthonormal system {e_i}. Thus each element z̃ ∈ l² is assigned to some
element z ∈ H. In the same manner, corresponding to every z̃ ∈ l²
we can find a z ∈ H. Thus a one-to-one correspondence between the
elements of H and l² is established. The formula (3.50) shows that the
correspondence between H and l² is an isometric correspondence. Now, if
x ∈ H corresponds to x̃ ∈ l² and y ∈ H corresponds to ỹ ∈ l², then x − y ∈ H
corresponds to x̃ − ỹ ∈ l². Again, λx ∈ H corresponds to λx̃ ∈ l² for any
scalar λ. Since ‖x − y‖_H = ‖x̃ − ỹ‖_{l²} and ‖x‖_H = ‖x̃‖_{l²}, it follows that
the correspondence between H and l² is both isometric and isomorphic.
We thus obtain the following theorem.
3.9.1 Theorem
Every complex (real) separable Hilbert space is isomorphic and isometric
to a complex (real) space l2 . Hence all complex (real) separable Hilbert
spaces are isomorphic and isometric to each other.
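Theorem 3.9.1, via (3.49), can be illustrated numerically in a finite-dimensional slice of H: the coefficient map x ↦ {c_i} preserves norms. A sketch; building an orthonormal system from a QR factorization is our own device, not the book's:

```python
import numpy as np

rng = np.random.default_rng(0)

# an orthonormal system e_1, ..., e_5 for C^5 (rows of `basis`)
M = rng.standard_normal((5, 5)) + 1j * rng.standard_normal((5, 5))
Q, _ = np.linalg.qr(M)
basis = Q.T

x = rng.standard_normal(5) + 1j * rng.standard_normal(5)
c = basis.conj() @ x          # Fourier coefficients c_i = <x, e_i>

# (3.49): ||x||_H equals the l2 norm of the coefficient sequence
assert np.isclose(np.linalg.norm(x), np.linalg.norm(c))
```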
CHAPTER 4
LINEAR OPERATORS
There are many operators, such as matrix operator, differential operator,
integral operator, etc. which we come across in applied mathematics,
physics, engineering, etc. The purpose of this chapter is to bring the
above operators under one umbrella and call them linear operators. The
continuity, boundedness and allied properties are studied. If the range of
the operators is ℝ, then they are called functionals. Bounded functionals
and space of bounded linear functionals are studied. The inverse of an
operator is defined and the condition of the existence of an inverse operator
is investigated. The study will facilitate an investigation into whether a
given equation has a solution or not. The setting is always a Banach space.
4.1 Definition: Linear Operator
We know of a mapping from one space onto another space. In the case of
vector spaces, and in particular, of normed spaces, a mapping is called an
operator.
4.1.1 Definition: linear operator
Given two topological linear spaces (E_x, τ_x) and (E_y, τ_y) over the same
scalar field (real or complex), an operator A is defined on E_x with range in
E_y.
We write y = Ax; x ∈ E_x and y ∈ E_y.
The operator A is said to be linear if:
(i) it is additive, that is,
A(x_1 + x_2) = Ax_1 + Ax_2, for all x_1, x_2 ∈ E_x;
(ii) it is homogeneous, that is,
A(αx) = αAx,      (4.1)
for all x ∈ E_x and every real (complex) α whenever E_x is real
(complex).
Observe the notation. Ax is written instead of A(x); this simplication
is standard in functional analysis.
4.1.1a Example
Consider a real square matrix (a_{ij}) of order n, (i, j = 1, 2, . . ., n). The
equations
η_i = Σ_{j=1}^n a_{ij}ξ_j   (i = 1, 2, . . ., n)
can be written in the compact form
y = Ax
where y = (η_1, η_2, . . ., η_n) ∈ E_y, A = (a_{ij})_{i,j=1,...,n} and
x = (ξ_1, ξ_2, . . ., ξ_n) ∈ E_x.
If x^1 = (ξ_1^1, . . ., ξ_n^1), x^2 = (ξ_1^2, . . ., ξ_n^2), y^1 = (η_1^1, . . ., η_n^1)
and y^2 = (η_1^2, . . ., η_n^2) are such that
Ax^1 = y^1, i.e., Σ_{j=1}^n a_{ij}ξ_j^1 = η_i^1, i = 1, 2, . . ., n,
Ax^2 = y^2, i.e., Σ_{j=1}^n a_{ij}ξ_j^2 = η_i^2, i = 1, 2, . . ., n,
then
A(x^1 + x^2) = (Σ_{j=1}^n a_{ij}(ξ_j^1 + ξ_j^2)) = (Σ_{j=1}^n a_{ij}ξ_j^1) + (Σ_{j=1}^n a_{ij}ξ_j^2)
= Ax^1 + Ax^2 = y^1 + y^2.
The above shows that A is additive.
The fact that A is homogeneous can be proved in a similar manner.
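The additivity and homogeneity just shown can be spot-checked numerically; the random matrix and vectors below are our own test data:

```python
import numpy as np

rng = np.random.default_rng(1)
A = rng.standard_normal((4, 4))          # matrix operator of example 4.1.1a
x1, x2 = rng.standard_normal(4), rng.standard_normal(4)
alpha = 2.7

assert np.allclose(A @ (x1 + x2), A @ x1 + A @ x2)      # additive
assert np.allclose(A @ (alpha * x1), alpha * (A @ x1))  # homogeneous
```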
4.1.2a Example
Let k(t, s) be a continuous function of t and s, a ≤ t, s ≤ b. Consider
the integral equation
y(t) = ∫_a^b k(t, s)x(s)ds.
If x(s) ∈ C([a, b]), then y(t) ∈ C([a, b]).
The above equation can be written as
y = Ax, where Ax = ∫_a^b k(t, s)x(s)ds.
The operator A maps the space C([a, b]) into itself.
Let x_1(s), x_2(s) ∈ C([a, b]); then
A(x_1 + x_2) = ∫_a^b k(t, s)(x_1(s) + x_2(s))ds
= ∫_a^b k(t, s)x_1(s)ds + ∫_a^b k(t, s)x_2(s)ds = Ax_1 + Ax_2.
Moreover, A(αx) = ∫_a^b k(t, s)(αx(s))ds = α∫_a^b k(t, s)x(s)ds = αAx.
Thus A is additive and homogeneous.
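The same properties can be spot-checked for the integral operator after discretizing the integral with a trapezoid rule; the kernel k(t, s) = e^{ts} is an arbitrary continuous choice of ours:

```python
import numpy as np

a, b, m = 0.0, 1.0, 201
s = np.linspace(a, b, m)
h = (b - a) / (m - 1)
w = np.full(m, h); w[[0, -1]] = h / 2    # trapezoid quadrature weights

def A(x):
    """(Ax)(t) = integral_a^b k(t,s) x(s) ds with k(t,s) = exp(t*s)."""
    return np.exp(np.outer(s, s)) @ (w * x)

x1, x2 = np.sin(np.pi * s), s**2
assert np.allclose(A(x1 + x2), A(x1) + A(x2))   # additive
assert np.allclose(A(3.0 * x1), 3.0 * A(x1))    # homogeneous
```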
4.1.3a Example
Let E_x = C¹([a, b]) = {x(t) : x(t) is continuously differentiable in
a < t < b, dx/dt is continuous in (a, b)}.
Define the norm as
‖x‖ = sup_{a≤t≤b} |x(t)| + sup_{a≤t≤b} |dx/dt|      (4.2)
Let x ∈ C¹([a, b]); then y = Ax = dx/dt ∈ C([a, b]) and
‖y‖ = sup_{a≤t≤b} |y(t)| = sup_{a≤t≤b} |dx/dt|.
Since the sup in (4.2) exists, sup |y(t)| also exists. Moreover, the operator
A = d/dt is linear, for
A(x_1 + x_2) = Ax_1 + Ax_2, x_1, x_2 ∈ C¹([a, b]),
A(αx) = αAx, α ∈ ℝ (ℂ).
4.1.4 Continuity
We know that the continuity of A in the case of a metric space means
that for every ε > 0 there is a δ > 0 such that the collection of images of
elements in the ball B(x, δ) lies in B(Ax, ε).
4.1.1b Example
Let us suppose in example 4.1.1a {ξ_i^{(m)}} is convergent.
Then
η_i^{(m)} − η_i^{(p)} = Σ_{j=1}^n a_{ij}(ξ_j^{(m)} − ξ_j^{(p)}).
Hence, by the Cauchy–Bunyakovsky–Schwartz inequality (1.4.3),
Σ_{i=1}^n (η_i^{(m)} − η_i^{(p)})² ≤ (Σ_{i=1}^n Σ_{j=1}^n a_{ij}²)(Σ_{j=1}^n (ξ_j^{(m)} − ξ_j^{(p)})²),
where x_m = {ξ_i^{(m)}}, y_m = Ax_m = {η_i^{(m)}}.
Since Σ_{i=1}^n Σ_{j=1}^n a_{ij}² is finite, convergence of {ξ_i^{(m)}} implies convergence of
{η_i^{(m)}}. Hence continuity of A is established.
4.1.2b Example
Consider Example 4.1.2a.
Let {x_m(t)} converge to x(t) in the sense of convergence in C([a, b]), i.e.,
converge uniformly on [a, b]. Now, in the case of uniform convergence
we can take the limit under the integral sign.
It follows that
lim_{m→∞} ∫_a^b K(t, s)x_m(s)ds = ∫_a^b K(t, s)x(s)ds, i.e.,
lim Ax_m = Ax, and the continuity of A is proved.
4.1.3b Example
We refer to the example 4.1.3a. The operator A, in this case, although
additive and homogeneous, is not continuous. This is because the
derivative of a limit element of a uniformly convergent sequence of functions
need not be equal to the limit of the derivative of these functions, even
though all these derivatives exist and are continuous.
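A concrete instance of this failure (our illustration, not the book's): x_n(t) = sin(n²t)/n converges uniformly to 0 on [0, 1], yet Ax_n = x_n′ = n cos(n²t) grows without bound.

```python
import numpy as np

t = np.linspace(0.0, 1.0, 10001)
for n in (10, 100, 1000):
    xn = np.sin(n**2 * t) / n       # sup|x_n| <= 1/n -> 0 uniformly
    dxn = n * np.cos(n**2 * t)      # exact derivative; sup|x_n'| = n
    assert np.max(np.abs(xn)) <= 1.0 / n + 1e-12
    assert np.isclose(np.max(np.abs(dxn)), n)   # attained at t = 0
```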
4.1.4 Example
Let A be a continuous linear operator. Then
(i) A(θ) = θ,
(ii) A(−z) = −Az for any z ∈ E_x.
Proof: (i) For any x, y, z ∈ E_x, put x = y + z and consequently y = x − z.
Now,
Ax = A(y + z) = Ay + Az = Az + A(x − z).
Hence, A(x − z) = Ax − Az.      (4.3)
Putting x = z, we get A(θ) = θ.
(ii) Taking x = θ in (4.3) we get
A(−z) = −Az.
4.1.5 Theorem
If an additive operator A, mapping a real linear space E_x into a real
linear space E_y s.t. y = Ax, x ∈ E_x, y ∈ E_y, is continuous at a point
x_0 ∈ E_x, then it is continuous on the entire space E_x.
Proof: Let x be any point of E_x and let x_n → x as n → ∞. Then
x_n − x + x_0 → x_0 as n → ∞.
Since A is continuous at x_0,
lim_{n→∞} A(x_n − x + x_0) = Ax_0.
However,
A(x_n − x + x_0) = Ax_n − Ax + Ax_0,
since A is additive in nature.
Therefore, lim_{n→∞} (Ax_n − Ax + Ax_0) = Ax_0,
or
lim_{n→∞} Ax_n = Ax,
where x is any element of E_x.
4.1.6 Theorem
An additive and continuous operator A defined on a real linear space is
homogeneous.
Proof: (i) Let α = m, a positive integer. Then
A(αx) = Ax + Ax + · · · + Ax (m terms) = mAx.
(ii) Let α = −m, m a positive integer. By sec. 4.1.4,
A(αx) = A(−mx) = −A(mx) = −mAx = αAx.
(iii) Let α = m/n be a rational number, m and n prime to each other.
First let β = 1/n, n an integer. Then x = n(x/n), so that
Ax = A(n(x/n)) = nA(x/n).
Hence A(x/n) = (1/n)Ax, i.e., A(βx) = (1/n)Ax.
Therefore
A(αx) = A(m(x/n)) = mA(x/n) = (m/n)Ax = αAx.
If α = −m/n, where m/n > 0, then also A(αx) = αAx.
Let us next consider α to be an irrational number. Then we can find a
sequence of rational numbers {s_i} such that lim s_i = α.
s_i being a rational number,
A(s_i x) = s_i Ax.      (4.4)
Since A is continuous, α is a real number, and lim_{i→∞} s_i x = αx,
we have
lim_{i→∞} A(s_i x) = A(αx).
Again, lim_{i→∞} s_i Ax = αAx.
Hence, taking limits in (4.4), we get
A(αx) = αAx.
4.1.7 The space of operators
Algebraic operations can be introduced on the set of linear
continuous operators mapping a linear space E_x into a linear space E_y.
Let A and B map E_x into E_y.
For any x ∈ E_x, we define addition by
(A + B)x = Ax + Bx
and scalar multiplication by
(αA)x = αAx.
Thus, we see the set of linear operators defined on E_x is closed w.r.t.
addition and scalar multiplication. Hence, the set of linear operators
mapping E_x into E_y is a linear space.
In particular, if we take B = −A, then
(A + B)x = (A − A)x = 0x = Ax − Ax = θ.
Thus 0, the null operator, is an element of the said space. The limit of
a sequence is defined in a space of linear operators by assuming, for example,
that A_n → A if A_n x → Ax for every x ∈ E_x.
This space of continuous linear operators will be discussed later.
4.1.8 The ring of continuous linear operators
Let E be a linear space over a scalar field ℝ (ℂ). We next consider the
space of continuous linear operators mapping E into itself. Such a space
we denote by (E → E). The product of two linear operators A and B in
(E → E) is defined by (AB)x = A(Bx) for all x ∈ E.
Let x_n → x in E. A and B being continuous linear operators,
Ax_n → Ax and Bx_n → Bx.
Since ABx_n − ABx = A(Bx_n − Bx), and since Bx_n − Bx → θ as n → ∞ and A is a
continuous linear operator, A(Bx_n − Bx) → θ as n → ∞. Thus, AB is a
continuous linear operator.
Since A maps E → E and is continuous linear, A² = A · A ∈ (E → E).
Let us suppose Aⁿ ∈ (E → E) for any finite n. Then
A^{n+1} = A · Aⁿ ∈ (E → E).
It can be easily seen that, if A, B and C ∈ (E → E), then
(AB)C = A(BC), (A + B)C = AC + BC and also C(A + B) = CA + CB.
Moreover, there exists an identity operator I, defined by Ix = x
for all x and such that AI = IA = A for every operator A. Since in general
AB ≠ BA, the set (E → E) is a noncommutative ring with identity.
4.1.9 Example
Consider the linear space of all polynomials p(s) with real coefficients.
The operators A and B are defined by
y(t) = ∫_0^1 tsp(s)ds = Ap and y(t) = t dp/dt = Bp.
Then
ABp = ∫_0^1 ts² (dp(s)/ds) ds = [ts²p(s)]_0^1 − 2t∫_0^1 sp(s)ds
= tp(1) − 2t∫_0^1 sp(s)ds,
BAp = t (d/dt) ∫_0^1 tsp(s)ds = t∫_0^1 sp(s)ds.
Thus AB ≠ BA.
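The non-commutativity can be replayed symbolically on a sample polynomial; the choice p = 1 + t + t² is ours:

```python
import sympy as sp

t, s = sp.symbols("t s")

def A(p):
    """(Ap)(t) = integral_0^1 t*s*p(s) ds."""
    return sp.integrate(t * s * p.subs(t, s), (s, 0, 1))

def B(p):
    """(Bp)(t) = t * dp/dt."""
    return t * sp.diff(p, t)

p = 1 + t + t**2
assert sp.simplify(A(B(p)) - sp.Rational(5, 6) * t) == 0    # AB p = (5/6) t
assert sp.simplify(B(A(p)) - sp.Rational(13, 12) * t) == 0  # BA p = (13/12) t
assert sp.simplify(A(B(p)) - B(A(p))) != 0                  # AB != BA
```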
4.1.10 Function of an operator
The operator Aⁿ = A · A · · · A (n terms) represents a simple example of an
operator function. A more general function is the polynomial
function of an operator,
p_n(A) = a_0 I + a_1 A + a_2 A² + · · · + a_n Aⁿ.
4.2 Linear Operators in Normed Linear Spaces
Let Ex and Ey be two normed linear spaces. Since a normed linear space
is a particular case of a topological linear space, the denition of a linear
operator mapping a topological linear space Ex into a topological linear
space Ey holds good in case the spaces Ex and Ey reduce to normed linear
spaces. Theorems 4.1.5 and 4.1.6 also remain valid in normed linear spaces.
4.2.1 Definition: continuity of a linear operator mapping E_x into E_y
Since convergence in a normed linear space is introduced through the
convergence in the induced metric space, we define continuity of an
operator A in a normed linear space as follows:
Given A, a linear operator mapping a normed linear space E_x into
a normed linear space E_y, A is said to be continuous at x ∈ E_x if
‖x_n − x‖_{E_x} → 0 as n → ∞ implies ‖Ax_n − Ax‖_{E_y} → 0 as n → ∞.
4.1.1c Example
Now refer back to example 4.1.1a. Let x_m = {ξ_i^{(m)}}, y_m = {η_i^{(m)}}.
Then y_m = Ax_m. Let x_m → x as m → ∞. We assume that x_m ∈ E_x, the
n-dimensional Euclidean space. Then
Σ_{i=1}^n (ξ_i^{(m)})² < ∞.
Now, by the Cauchy–Bunyakovsky–Schwartz inequality (Sec. 1.4.3),
|η_i − η_i^{(m)}| = |Σ_{j=1}^n a_{ij}(ξ_j − ξ_j^{(m)})|
≤ (Σ_{j=1}^n a_{ij}²)^{1/2} (Σ_{j=1}^n |ξ_j − ξ_j^{(m)}|²)^{1/2}.
Thus
(Σ_{i=1}^n |η_i − η_i^{(m)}|²)^{1/2} ≤ (Σ_{i=1}^n Σ_{j=1}^n a_{ij}²)^{1/2} (Σ_{j=1}^n |ξ_j − ξ_j^{(m)}|²)^{1/2}.
Since Σ_{i=1}^n Σ_{j=1}^n a_{ij}² < ∞,
‖x_m − x‖² → 0, i.e., Σ_{i=1}^n |ξ_i^{(m)} − ξ_i|² → 0,
implies
Σ_{i=1}^n |η_i^{(m)} − η_i|² → 0, i.e., ‖Ax_m − Ax‖² → 0 as m → ∞.
This shows that A is continuous.
4.1.2c Example
We consider example 4.1.2a in the normed linear space C([a, b]). Since
x(s) ∈ C([a, b]) and K(t, s) is continuous in a ≤ t, s ≤ b, it follows that
y(t) ∈ C([a, b]).
Let x_n, x ∈ C([a, b]) and ‖x_n − x‖ → 0 as n → ∞, where ‖x‖ =
max_{a≤t≤b} |x(t)|.
Now,
‖y_n(t) − y(t)‖ = max_{a≤t≤b} |y_n(t) − y(t)| = max_{a≤t≤b} |∫_a^b K(t, s)(x_n(s) − x(s))ds|
≤ [b − a] max_{a≤t,s≤b} |K(t, s)| max_{a≤s≤b} |x_n(s) − x(s)|
= [b − a] max_{a≤t,s≤b} |K(t, s)| ‖x_n − x‖ → 0 as n → ∞,
or
‖Ax_n − Ax‖ → 0 as ‖x_n − x‖ → 0,
showing that A is continuous.
4.2.2 Example
Let A = (a_{ij}), i, j = 1, 2, . . ..
Let x = {ξ_i}, i = 1, 2, . . .. Then y = Ax yields
η_i = Σ_{j=1}^∞ a_{ij}ξ_j, where y = {η_i}.
Let us suppose
K = Σ_{i=1}^∞ Σ_{j=1}^∞ |a_{ij}|^q < ∞, q > 1      (4.5)
and x ∈ l^p, i.e., Σ_{i=1}^∞ |ξ_i|^p < ∞.
For x_1 = {ξ_i^{(1)}} ∈ l^p, x_2 = {ξ_i^{(2)}} ∈ l^p it is easy to show that A is linear.
Then, using Hölder's inequality (1.4.3),
Σ_{i=1}^∞ |η_i|^q ≤ Σ_{i=1}^∞ (Σ_{j=1}^∞ |a_{ij}|^q)(Σ_{j=1}^∞ |ξ_j|^p)^{q/p}
= ‖x‖^q Σ_{i=1}^∞ Σ_{j=1}^∞ |a_{ij}|^q.
Hence
‖y‖ = (Σ_{i=1}^∞ |η_i|^q)^{1/q} ≤ (Σ_{i=1}^∞ Σ_{j=1}^∞ |a_{ij}|^q)^{1/q} ‖x‖, where 1/p + 1/q = 1.
Hence, using (4.5), we can say that x ∈ l^p ⟹ y ∈ l^q.
Let now ‖x_m − x‖_p → 0, i.e., x_m → x in l^p as m → ∞, where
x_m = {ξ_i^{(m)}} and x = {ξ_i}.
Now, using Hölder's inequality (sec. 1.4.3),
‖Ax_m − Ax‖_q^q = Σ_{i=1}^∞ |Σ_{j=1}^∞ a_{ij}(ξ_j^{(m)} − ξ_j)|^q
≤ Σ_{i=1}^∞ (Σ_{j=1}^∞ |a_{ij}|^q)(Σ_{j=1}^∞ |ξ_j^{(m)} − ξ_j|^p)^{q/p}
= K‖x_m − x‖_p^q.
Hence ‖x_m − x‖_p → 0 ⟹ ‖Ax_m − Ax‖_q → 0.
Hence A is linear and continuous.
4.2.3 Definition: bounded linear operator
Let A be a linear operator mapping E_x into E_y, where E_x and E_y are
normed linear spaces over the scalar field ℝ (ℂ). A is said to be bounded if there
exists a constant K > 0 s.t.
‖Ax‖_{E_y} ≤ K‖x‖_{E_x} for all x ∈ E_x.
Note 4.2.1. The definition 4.2.3 of a bounded linear operator is not the
same as that of an ordinary real or complex function, where a bounded
function is one whose range is a bounded set.
We would next show that a bounded linear operator and a continuous
linear operator are one and the same.
4.2.4 Theorem
In order that an additive and homogeneous operator A be continuous, it
is necessary and sufficient that it be bounded.
Proof: (Necessity) Let A be a continuous operator. Assume that it is not
bounded. Then, there is a sequence {x_n} of elements such that
‖Ax_n‖ > n‖x_n‖.      (4.6)
Let us construct the elements
η_n = x_n/(n‖x_n‖), i.e., ‖η_n‖ = ‖x_n‖/(n‖x_n‖) = 1/n → 0 as n → ∞.
Therefore, η_n → θ as n → ∞.
On the other hand,
‖Aη_n‖ = ‖Ax_n‖/(n‖x_n‖) > 1.      (4.7)
Now, A being continuous, and since η_n → θ as n → ∞,
η_n → θ ⟹ Aη_n → Aθ = θ.
This contradicts (4.7). Hence, A is bounded.
(Sufficiency) Let the additive operator A be bounded, i.e., ‖Ax‖ ≤
K‖x‖ for all x ∈ E_x.
Let x_n → x as n → ∞, i.e., ‖x_n − x‖ → 0 as n → ∞.
Now, ‖Ax_n − Ax‖ = ‖A(x_n − x)‖ ≤ K‖x_n − x‖ → 0 as n → ∞.
Hence, A is continuous at x.
4.2.5 Lemma
Let a given linear (not necessarily bounded) operator A map a Banach
space E_x into a Banach space E_y. Let us denote by E_n the set of those
x ∈ E_x for which ‖Ax‖ ≤ n‖x‖. Then E_x is equal to ∪_{n=1}^∞ E_n and at least
one E_n is everywhere dense.
Proof: Since ‖Aθ‖ ≤ n‖θ‖, the null element belongs to every E_n. Again,
for every x ≠ θ, we can find an n, say n_1, such that ‖Ax‖ ≤ n_1‖x‖, i.e.,
n_1 ≥ ‖Ax‖/‖x‖.
Therefore, every x belongs to some E_n.
Hence E_x = ∪_{n=1}^∞ E_n. E_x, being a Banach space, is a complete metric
space. By theorem 1.4.19, a complete metric space is a set
of the second category and hence cannot be expressed as a countable union
of nowhere dense sets. Hence, at least one of the sets E_n, say E_{n_1}, fails to be
nowhere dense, i.e., is everywhere dense in some ball.
To actually construct from it a set that is everywhere dense in E_x we proceed
as follows. Since E_{n_1} is dense in some ball, there is a ball B(x_0, r) in which
B(x_0, r) ∩ E_{n_1} is everywhere dense.
Let us consider a ball B(x_1, r_1) lying completely inside B(x_0, r) and
such that x_1 ∈ E_{n_1}. Take any element x with norm ‖x‖ = r_1. Now
x_1 + x ∈ B̄(x_1, r_1). Since B(x_1, r_1) ∩ E_{n_1} is dense in B(x_1, r_1), there is a
sequence {y_k} of elements in B(x_1, r_1) ∩ E_{n_1} such that y_k → x_1 + x as k → ∞.
Therefore, x_k = y_k − x_1 → x. Since ‖x‖ = r_1, there is no loss of generality if we
assume r_1/2 < ‖x_k‖.
Since y_k and x_1 ∈ E_{n_1},
‖Ax_k‖ = ‖Ay_k − Ax_1‖ ≤ ‖Ay_k‖ + ‖Ax_1‖ ≤ n_1(‖y_k‖ + ‖x_1‖).
Besides, ‖y_k‖ = ‖x_k + x_1‖ ≤ ‖x_k‖ + ‖x_1‖ ≤ r_1 + ‖x_1‖.
Hence,
‖Ax_k‖ ≤ n_1(r_1 + 2‖x_1‖) ≤ [2n_1(r_1 + 2‖x_1‖)/r_1]‖x_k‖, since r_1/2 < ‖x_k‖.
Let n* be the least integer greater than 2n_1(r_1 + 2‖x_1‖)/r_1; then ‖Ax_k‖ ≤
n*‖x_k‖, implying that all x_k ∈ E_{n*}.
Thus, any element x with norm equal to r_1 can be approximated by
elements x_k in E_{n*}. Let x now be any element in E_x, x ≠ θ. Then
x̃ = (r_1/‖x‖)x satisfies ‖x̃‖ = r_1. Hence there is a sequence {x̃_k} ⊂ E_{n*}
which converges to x̃. Then x_k = (‖x‖/r_1)x̃_k → x as k → ∞, and
‖Ax_k‖ = (‖x‖/r_1)‖Ax̃_k‖ ≤ (‖x‖/r_1)n*‖x̃_k‖ = n*‖x_k‖.
Thus x_k ∈ E_{n*}. Consequently, E_{n*} is everywhere dense in E_x.
4.2.6 Definition: the norm of an operator
Let A be a bounded linear operator mapping E_x into E_y. Then we can
find K > 0 such that
‖Ax‖_{E_y} ≤ K‖x‖_{E_x}.      (4.8)
The smallest value of K, say M, for which the above inequality holds is
called the norm of A and is denoted by ‖A‖.
4.2.7 Lemma
The operator A has the following two properties:
(i) ‖Ax‖ ≤ ‖A‖‖x‖ for all x ∈ E_x; (ii) for every ε > 0, there is an
element x_ε such that
‖Ax_ε‖ > (‖A‖ − ε)‖x_ε‖.
Proof: (i) By definition of the norm of A, we can write
‖A‖ = inf{K : K > 0 and ‖Ax‖_{E_y} ≤ K‖x‖_{E_x}, x ∈ E_x}.      (4.9)
Hence
‖Ax‖_{E_y} ≤ ‖A‖‖x‖_{E_x}.
(ii) Since ‖A‖ is the infimum of K in (4.9), there exists
x_ε ∈ E_x s.t. ‖Ax_ε‖ > (‖A‖ − ε)‖x_ε‖.
4.2.8 Lemma
(i) ‖A‖ = sup_{x≠θ} ‖Ax‖_{E_y}/‖x‖_{E_x}      (4.10)
(ii) ‖A‖ = sup_{‖x‖≤1} ‖Ax‖_{E_y}      (4.11)
Proof: It follows from (4.9) that
sup_{x≠θ} ‖Ax‖_{E_y}/‖x‖_{E_x} ≤ ‖A‖.      (4.12)
Again, (ii) of lemma 4.2.7 yields
‖Ax_ε‖/‖x_ε‖ > ‖A‖ − ε.
Hence,
sup_{x≠θ} ‖Ax‖_{E_y}/‖x‖_{E_x} ≥ ‖A‖ − ε.      (4.13)
Since ε > 0 is arbitrary, it follows from (4.12) and (4.13) that
sup_{x≠θ} ‖Ax‖_{E_y}/‖x‖_{E_x} = ‖A‖.
Next, if ‖x‖_{E_x} ≤ 1, it follows from (i) of lemma 4.2.7 that
sup_{‖x‖≤1} ‖Ax‖_{E_y} ≤ ‖A‖.      (4.14)
Again, we obtain from (ii) of lemma 4.2.7
‖Ax_ε‖ > (‖A‖ − ε)‖x_ε‖.
Put η_ε = x_ε/‖x_ε‖; then
‖Aη_ε‖ = ‖Ax_ε‖/‖x_ε‖ > ‖A‖ − ε.
Since ‖η_ε‖ = 1, it follows that
sup_{‖x‖≤1} ‖Ax‖ ≥ ‖Aη_ε‖ > ‖A‖ − ε.
Hence, sup_{‖x‖≤1} ‖Ax‖ ≥ ‖A‖.      (4.15)
Using (4.14) and (4.15) we prove the result.
4.2.9 Examples of operators
Examples of operators include the identity operator, the zero operator,
the differential operator, and the integral operator. We discuss these
operators in more detail below.
4.2.10 Identity operator
The identity operator I_{E_x} : E_x → E_x is defined by I_{E_x}x = x for all
x ∈ E_x. In case E_x ≠ {θ} is a normed linear space, the operator I is
bounded and the norm ‖I‖ = 1.
4.2.11 Zero operator
The zero operator 0 : E_x → E_y, where E_x and E_y are normed linear
spaces, is bounded and the norm ‖0‖ = 0.
4.2.12 Differential operator
Let E_x be the normed linear space of all polynomials on J = [0, 1] with
norm given by ‖x‖ = max_{t∈J} |x(t)|. A differential operator A is defined on
E_x by Ax(t) = x′(t), where the prime denotes differentiation w.r.t. t.
The operator is linear, for differentiable x(t) and y(t) ∈ E_x we have
A(x(t) + y(t)) = (x(t) + y(t))′ = x′(t) + y′(t) = Ax(t) + Ay(t).
Again, A(αx(t)) = (αx(t))′ = αx′(t) = αA(x(t)), where α is a scalar.
If x_n(t) = tⁿ, then Ax_n(t) = x_n′(t) = ntⁿ⁻¹, t ∈ J;
then ‖x_n‖ = 1 and ‖Ax_n‖ = n, so that
‖Ax_n‖/‖x_n‖ = n.
Since n is any positive integer, we cannot find any fixed number M s.t.
‖Ax_n‖/‖x_n‖ ≤ M.
Hence A is not bounded.
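The blow-up ‖Ax_n‖/‖x_n‖ = n is easy to see numerically; a sketch over a grid on J = [0, 1]:

```python
import numpy as np

t = np.linspace(0.0, 1.0, 2001)
for n in (1, 5, 25, 125):
    xn = t**n                    # x_n(t) = t^n, ||x_n|| = 1
    Axn = n * t ** (n - 1)       # Ax_n(t) = n t^{n-1}, ||Ax_n|| = n
    ratio = np.max(np.abs(Axn)) / np.max(np.abs(xn))
    assert np.isclose(ratio, n)  # no fixed M can dominate all ratios
```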
4.2.13 Integral operator
Refer to example 4.1.2a.
A : C([a, b]) → C([a, b]); for x(s) ∈ C([a, b]),
Ax = ∫_a^b K(t, s)x(s)ds,
where K(t, s) is continuous in a ≤ t, s ≤ b. Therefore
y(t) = Ax(s) ∈ C([a, b]) and
‖y(t)‖ = max_{a≤t≤b} |y(t)| = max_{a≤t≤b} |∫_a^b K(t, s)x(s)ds|
≤ max_{a≤t≤b} ∫_a^b |K(t, s)|ds · max_{a≤s≤b} |x(s)| = max_{a≤t≤b} ∫_a^b |K(t, s)|ds ‖x‖.
Thus,
‖A‖ ≤ max_{a≤t≤b} ∫_a^b |K(t, s)|ds.      (4.16)
Since ∫_a^b |K(t, s)|ds is a continuous function of t, it attains the maximum
at some point t_0 of the interval [a, b]. Let us take
z_0(s) = sgn K(t_0, s),
where sgn z = z/|z| for z ≠ 0.
Let x_n(s) be a continuous function such that |x_n(s)| ≤ 1 and x_n(s) =
z_0(s) everywhere except on a set E_n of measure less than 1/(2Mn), where
M = max_{t,s} |K(t, s)|. Then |x_n(s) − z_0(s)| ≤ 2 everywhere on E_n.
We have
|∫_a^b K(t, s)z_0(s)ds − ∫_a^b K(t, s)x_n(s)ds|
≤ ∫_a^b |K(t, s)||x_n(s) − z_0(s)|ds = ∫_{E_n} |K(t, s)||x_n(s) − z_0(s)|ds
≤ 2M · 1/(2Mn) = 1/n.
Thus
∫_a^b K(t, s)z_0(s)ds ≤ |∫_a^b K(t, s)x_n(s)ds| + 1/n ≤ ‖A‖‖x_n‖ + 1/n
for t ∈ [a, b]; putting t = t_0,
∫_a^b |K(t_0, s)|ds ≤ ‖A‖‖x_n‖ + 1/n.
Since ‖x_n‖ ≤ 1, the preceding inequality in the limit as n → ∞ gives
rise to
max_t ∫_a^b |K(t, s)|ds ≤ ‖A‖.      (4.17)
From (4.16) and (4.17) it follows that
‖A‖ = max_t ∫_a^b |K(t, s)|ds.
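The estimates (4.16) and (4.17) can be checked numerically for a specific kernel; K(t, s) = t − s on [0, 1] is our arbitrary choice, and the grid quadrature stands in for the integrals:

```python
import numpy as np

m = 2001
s = np.linspace(0.0, 1.0, m)
h = 1.0 / (m - 1)
w = np.full(m, h); w[[0, -1]] = h / 2      # trapezoid weights

K = s[:, None] - s[None, :]                # K(t_i, s_j) = t_i - s_j
colint = np.abs(K) @ w                     # t -> integral of |K(t,s)| ds
bound = colint.max()                       # the RHS of (4.16)

t0 = int(np.argmax(colint))
z0 = np.sign(K[t0])                        # the extremal "sign" function z_0
attained = np.max(np.abs(K @ (w * z0)))    # ||A z_0|| with ||z_0|| <= 1

assert attained <= bound + 1e-9            # (4.16)
assert attained >= 0.99 * bound            # the bound is essentially attained
```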
4.2.14 Theorem
A bounded linear operator A_0, defined on a linear subset X which
is everywhere dense in a normed linear space E_x, with values in a
complete normed linear space E_y, can be extended to the entire space with
preservation of norm.
Proof: A can be defined on E_x such that Ax = A_0x, x ∈ X, and
‖A‖_{E_x} = ‖A_0‖_X.
Let x be any element in E_x not belonging to X.
Since X is everywhere dense in E_x, there is a sequence {x_n} ⊂ X s.t.
‖x_n − x‖ → 0 as n → ∞ and hence ‖x_n − x_m‖ → 0 as n, m → ∞. However,
then
‖A_0x_n − A_0x_m‖ = ‖A_0(x_n − x_m)‖ ≤ ‖A_0‖_X ‖x_n − x_m‖ → 0 as n, m → ∞,
so {A_0x_n} is a Cauchy sequence and converges, by the completeness of E_y, to some
limit denoted by Ax. Let {η_n} ⊂ X be another sequence convergent to
x. Evidently ‖x_n − η_n‖ → 0 as n → ∞. Hence, ‖A_0x_n − A_0η_n‖ → 0
as n → ∞. Consequently A_0η_n → Ax, implying that Ax is defined
uniquely by x. If x ∈ X, select x_n = x for all n. Then
Ax = lim_{n→∞} A_0x_n = A_0x.
The operator A is additive, since
A(x_1 + x_2) = lim_{n→∞} A_0(x_n^{(1)} + x_n^{(2)}) = lim A_0x_n^{(1)} + lim A_0x_n^{(2)} = Ax_1 + Ax_2.
We will next show that the operator is bounded:
‖A_0x_n‖ ≤ ‖A_0‖_X ‖x_n‖.
Making n → ∞ we have
‖Ax‖ ≤ ‖A_0‖_X ‖x‖.
Dividing both sides by ‖x‖ and taking the supremum,
‖A‖_{E_x} ≤ ‖A_0‖_X.
But the norm of A over E_x cannot be smaller than the norm of A_0 over
X; therefore we have
‖A‖_{E_x} = ‖A_0‖_X.
The above process exhibits the extension by continuity of a bounded
linear operator from a dense subspace to the entire space.
Problems [4.1 & 4.2]
1. Let A be an nth order square matrix, i.e., A = (a_{ij}), i, j = 1, . . ., n. Prove
that A is linear, continuous and bounded.
2. Let B be a bounded, closed domain in ℝⁿ and let x =
(x_1, x_2, . . ., x_n)ᵀ. We denote by C([B]) the space of functions f(x)
which are continuous on B. The function φ(x) and the n-dimensional
vector p(x) are fixed members of C([B]). The values of p(x) lie in B
for all x ∈ B. Show that T_1 and T_2 given by T_1f(x) = φ(x)f(x) and
T_2(f(x)) = f(p(x)) are linear operators.
3. Show that the matrix A = (a_{ij}), i, j = 1, . . ., n,
is a bounded linear operator in ℝⁿ normed by l_p for p = 1, 2, ∞, and that
for p = 1, ‖A‖_1 ≤ max_{1≤j≤n} Σ_{i=1}^n |a_{ij}|;
for p = 2, ‖A‖_2 ≤ (Σ_{i=1}^n Σ_{j=1}^n |a_{ij}|²)^{1/2};
for p = ∞, ‖A‖_∞ ≤ max_{1≤i≤n} Σ_{j=1}^n |a_{ij}|.
4. Let E = C([a, b]) with ‖·‖_∞. Let t_1, . . ., t_n be different points in
[a, b] and φ_1, . . ., φ_n ∈ E be such that φ_j(t_i) = δ_{ij} for i, j = 1, 2, . . ., n.
Let P : E → E be defined by
Px = Σ_{j=1}^n x(t_j)φ_j, x ∈ E.
Then show that P is a projection operator onto span{φ_1, φ_2, . . ., φ_n},
P ∈ (E → E) and
‖P‖ = sup_{a≤t≤b} Σ_{j=1}^n |φ_j(t)|.
The projection operator P above is called the interpolatory projection
onto the span of {φ_1, φ_2, . . ., φ_n} corresponding to the nodes
{t_1, . . ., t_n}.
[For Projection operator see Lemma 4.5.3]
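The norm bounds in problem 3 can be compared with the induced norms that `numpy` computes directly; a sketch on a random matrix:

```python
import numpy as np

rng = np.random.default_rng(2)
A = rng.standard_normal((5, 5))

col = np.max(np.abs(A).sum(axis=0))                  # max column sum
row = np.max(np.abs(A).sum(axis=1))                  # max row sum
fro = np.sqrt((A**2).sum())                          # the p = 2 bound
spec = np.sqrt(np.max(np.linalg.eigvalsh(A.T @ A)))  # spectral norm

assert np.isclose(col, np.linalg.norm(A, 1))         # l1-induced norm
assert np.isclose(row, np.linalg.norm(A, np.inf))    # l-infinity-induced norm
assert np.isclose(spec, np.linalg.norm(A, 2))        # l2-induced norm
assert spec <= fro + 1e-12                           # ||A||_2 <= Frobenius bound
```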
4.3 Linear Functionals
If the range of an operator consists of real numbers, then the operator
is called a functional. In particular, if the functional is additive and
homogeneous it is called a linear functional.
Thus, a functional f(x) defined on a linear topological space E is said
to be linear if
(i) f(x_1 + x_2) = f(x_1) + f(x_2), x_1, x_2 ∈ E, and
(ii) f(x_n) → f(x) as x_n → x, x_n, x ∈ E, in the sense of convergence
in the linear topological space E.
4.3.1 Similarity between linear operators and linear functionals
Since the range of a linear functional f(x) is the real line ℝ, which is
a Banach space, the following theorems, which hold for a linear operator
mapping a Banach space into another Banach space, are also true for linear
functionals defined on a Banach space.
4.3.2 Theorem
If an additive functional f(x), defined on a normed linear space E over
ℝ (ℂ), is continuous at a single point of this space, then it is continuous
on the whole space E.
4.3.3 Theorem
Every linear functional is homogeneous.
4.3.4 Definition
A linear functional f defined on a normed linear space E over ℝ (ℂ) is
said to be bounded if there exists a constant M > 0 such that
|f(x)| ≤ M‖x‖ for all x ∈ E.
4.3.5 Theorem
An additive functional defined on a normed linear space E over ℝ (ℂ) is
linear if and only if it is bounded.
The smallest of the constants M in the above inequality is called the
norm of the functional f(x) and is denoted by ‖f‖.
Thus |f(x)| ≤ ‖f‖‖x‖ and
‖f‖ = sup_{‖x‖≤1} |f(x)|,
or, in other words, ‖f‖ = sup_{‖x‖=1} |f(x)| = sup_{x≠θ} |f(x)|/‖x‖.
4.3.6 Examples
1. Norm: The norm ‖·‖ : E_x → ℝ on a normed linear space (E_x, ‖·‖)
is a functional on E_x which is not linear.
2. Dot product: The dot product, with one factor kept fixed, defines a
functional f : ℝⁿ → ℝ by means of
f(x) = a · x = Σ_{i=1}^n a_i x_i,
where a = {a_i} is fixed and x = {x_i}; f is linear and bounded:
|f(x)| = |a · x| ≤ ‖a‖‖x‖.
Therefore, sup_{x≠θ} |f(x)|/‖x‖ ≤ ‖a‖. For x = a, f(a) = a · a = ‖a‖².
Hence |f(a)|/‖a‖ = ‖a‖, i.e., ‖f‖ = ‖a‖.
3. Definite integral: The definite integral is a number when we take the
integral of a single function. But when we consider the integral over a class
of functions in a function space, then the integral becomes a functional.
Let us consider the functional
f(x) = ∫_a^b x(t)dt, x ∈ C([a, b]).
f is additive and homogeneous.
Now, |f(x)| = |∫_a^b x(t)dt| ≤ sup_{a≤t≤b} |x(t)| ∫_a^b dt = (b − a)‖x‖.
Therefore, |f(x)| ≤ (b − a)‖x‖, showing that f is bounded, and
sup_{x∈C([a,b]), x≠θ} |f(x)|/‖x‖ ≤ (b − a).
Next we choose x = x_0 = 1, so that ‖x_0‖ = 1.
Again, f(x_0) = ∫_a^b x_0(t)dt = ∫_a^b 1 dt = (b − a), or |f(x_0)|/‖x_0‖ = (b − a).
Hence, sup_{x≠θ} |f(x)|/‖x‖ = (b − a), i.e., ‖f‖ = b − a.
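A numerical rendering of example 3 (the interval [1, 3] and the grid are our own choices):

```python
import numpy as np

a, b, m = 1.0, 3.0, 100001
t = np.linspace(a, b, m)
h = (b - a) / (m - 1)

def f(x):
    """f(x) = integral_a^b x(t) dt, trapezoid rule."""
    return h * (x.sum() - 0.5 * (x[0] + x[-1]))

x0 = np.ones(m)                 # the extremal element x_0 = 1
assert np.isclose(f(x0), b - a)
x = np.sin(t)
assert abs(f(x)) <= (b - a) * np.max(np.abs(x)) + 1e-9   # |f(x)| <= (b-a)||x||
```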
4. Definite integral on C([a, b]):
If K(·, ·) is a continuous function on [a, b] × [a, b]
and
F(x)(s) = ∫_a^b K(s, t)x(t)dt, x ∈ C([a, b]), s ∈ [a, b],
then
|F(x)(s)| ≤ ∫_a^b |K(s, t)||x(t)|dt ≤ ∫_a^b |K(s, t)|dt · sup_t |x(t)|
= ‖x‖ ∫_a^b |K(s, t)|dt, s ∈ [a, b].
Therefore,
‖F‖ = sup_{x≠θ} sup_{s∈[a,b]} |F(x)(s)|/‖x‖ ≤ sup_{s∈[a,b]} ∫_a^b |K(s, t)|dt.
Since ∫_a^b |K(s, t)|dt is a continuous function of s defined over a compact
interval [a, b], there is an s_0 ∈ [a, b] s.t.
∫_a^b |K(s_0, t)|dt = sup_{s∈[a,b]} ∫_a^b |K(s, t)|dt.
Thus, ‖F‖ ≤ ∫_a^b |K(s_0, t)|dt.
4.3.7 Geometrical interpretation of the norm of a linear functional
Consider in a normed linear space E over ℝ (ℂ) a linear functional f(x).
The equation f(x) = c, where in Eⁿ f(x) = Σ_{i=1}^n c_i x_i, is called a hyperplane.
This is because in n-dimensional Euclidean space Eⁿ such an equation of
the form f(x) = c represents an (n − 1)-dimensional plane.
Now, |f(x)| ≤ ‖f‖‖x‖. If ‖x‖ ≤ 1, i.e., x lies in the unit ball, then
|f(x)| ≤ ‖f‖. Thus the hyperplane f(x) = ‖f‖ has the property that the
unit ball ‖x‖ ≤ 1 lies completely on one side of it (because
f(x) < ‖f‖ holds for the points of the ball ‖x‖ < 1). The plane f(x) = ‖f‖
is called a support of the ball ‖x‖ ≤ 1: some points on the surface of
the ball ‖x‖ = 1 may lie on the hyperplane, while all other points of the
ball lie on one side of it. Such a hyperplane is also called
a supporting hyperplane.
Problems
1. Find the norm of the linear functional f defined on C([−2, 2]) by
f(x) = ∫_{−2}^0 x(t)dt − ∫_0^2 x(t)dt.
[Ans. ‖f‖ = 4]
2. Let f be a bounded linear functional on a complex normed linear
space. Show that f̄, although bounded, is not linear (the bar denotes
the complex conjugate).
150
A First Course in Functional Analysis
3. The space C′([a, b]) is the normed linear space of all continuously
differentiable functions on J = [a, b] with norm defined by
‖x‖ = max_{t∈J} |x(t)| + max_{t∈J} |x′(t)|.
Show that all the axioms of a norm are satisfied.
Show that f(x) = x′(γ), γ = (a + b)/2, defines a bounded linear
functional on C′([a, b]).
4. Given F(x)(s) = ∫_0^1 [st/(2 − st)]x(t)dt, x ∈ C([0, 1]), s ∈ [0, 1], show
that ‖F‖ = 1.
5. If X is a subspace of a vector space E over ℝ and f is a linear
functional on E such that f(X) is not the whole of ℝ, show that
f(y) = 0 for all y ∈ X.
4.4 The Space of Bounded Linear Operators
In what follows we want to show that the set of bounded linear operators
mapping a normed linear space (E_x, ‖·‖_{E_x}) into another normed linear
space (E_y, ‖·‖_{E_y}) forms again a normed linear space.
4.4.1 Definition: space of bounded linear operators
Let two bounded linear operators, A_1 and A_2, map a normed linear
space (E_x, ‖·‖_{E_x}) into the normed linear space (E_y, ‖·‖_{E_y}).
We can define addition and scalar multiplication by
(A_1 + A_2)x = A_1x + A_2x,
(αA_1)x = αA_1x for all scalars α ∈ ℝ (ℂ).
This set of linear operators forms a linear space denoted by (E_x → E_y).
We next show that (E_x → E_y) is a normed linear space. Let us
define the norm of A as ‖A‖ = sup_{‖x‖≤1} ‖Ax‖_{E_y}. Then ‖A‖ ≥ 0. If ‖A‖ = 0,
i.e., if sup_{‖x‖_{E_x}≤1} ‖Ax‖_{E_y} = 0, then ‖Ax‖_{E_y} = 0 for all x s.t. ‖x‖_{E_x} ≤ 1.
However, because of the homogeneity of A, Ax = θ for all x and therefore
A = 0.
Now, ‖αA‖ = sup_{‖x‖≤1} ‖αAx‖ = |α| sup_{‖x‖≤1} ‖Ax‖ = |α|‖A‖, and
‖A + B‖ = sup_{‖x‖≤1} ‖(A + B)x‖ ≤ sup_{‖x‖≤1} ‖Ax‖ + sup_{‖x‖≤1} ‖Bx‖
= ‖A‖ + ‖B‖.
Thus, the space of bounded linear operators is a normed linear space.
4.4.2 Theorem
If E_y is complete, the space of bounded linear operators is also complete
and is consequently a Banach space.
Proof: Let us be given a Cauchy sequence {A_n} of linear operators
with respect to the norm, i.e., a sequence of linear operators such that
‖A_n − A_m‖ → 0 as n, m → ∞. Hence, ‖A_nx − A_mx‖ ≤ ‖A_n − A_m‖‖x‖ → 0
as n, m → ∞, for any x.
Therefore the sequence {A_nx} of elements of E_y is a Cauchy sequence
for any fixed x. Now, since E_y is complete, {A_nx} has some limit, y. Thus,
every x ∈ E_x is associated with some y ∈ E_y and we obtain some operator
A defined by the equation Ax = y. Such an operator A is additive and
homogeneous. Because
(i) A(x_1 + x_2) = lim_{n→∞} A_n(x_1 + x_2) = lim A_nx_1 + lim A_nx_2
= Ax_1 + Ax_2, for x_1, x_2 ∈ E_x;
(ii) A(αx_1) = lim_{n→∞} A_n(αx_1) = α lim A_nx_1 = αAx_1, α ∈ ℝ (ℂ).
To show that A is bounded, we note that
‖A_n − A_m‖ → 0 as n, m → ∞.
Hence |‖A_n‖ − ‖A_m‖| → 0, i.e., {‖A_n‖} is a Cauchy sequence and is
therefore bounded. Then there is a constant k s.t. ‖A_n‖ ≤ k for all n.
Consequently, ‖A_nx‖_{E_y} ≤ k‖x‖_{E_x} for all n and hence
‖Ax‖_{E_y} = lim_{n→∞} ‖A_nx‖_{E_y} ≤ k‖x‖_{E_x}.
Hence, it is proved that A is bounded. Since in addition, A is additive
and homogeneous, A is a bounded linear operator.
Next, we shall prove that A is the limit of the sequence {A_n} in the
sense of norm convergence, in a space of linear operators.
Because of the convergence of {A_nx}, there is an index n_0 for every
ε > 0 s.t.
‖A_{n+p}x − A_nx‖_{E_y} < ε      (4.18)
for n ≥ n_0, p ≥ 1 and all x with ‖x‖ ≤ 1.
Taking the limit in (4.18) as p → ∞, we get
‖Ax − A_nx‖_{E_y} ≤ ε for n ≥ n_0
and all x with ‖x‖ ≤ 1. Hence for n ≥ n_0,
‖A_n − A‖ = sup_{‖x‖≤1} ‖(A_n − A)x‖_{E_y} ≤ ε.
Consequently, A = lim_{n→∞} A_n in the sense of norm convergence in a space
of bounded linear operators and completeness of the space is proved.
We next discuss the composition of two bounded linear operators, each
mapping E_x → E_x, E_x being a normed linear space.
4.4.3 Theorem
Let E_x be a normed linear space over ℝ (ℂ). If A, B ∈ (E_x → E_x),
then
AB ∈ (E_x → E_x) and ‖AB‖ ≤ ‖A‖‖B‖.
Proof: Since A, B : E_x → E_x, we have
(AB)(αx + βy) = A(B(αx + βy)) = A(αBx + βBy)
= α(AB)x + β(AB)y, x, y ∈ E_x, α, β ∈ ℝ (ℂ).
Furthermore, A and B being bounded operators,
x_n → x ⟹ AB(x_n) = A(Bx_n) → ABx,
showing that AB is bounded and hence continuous. Moreover,
‖(AB)(x)‖_{E_x} = ‖A(Bx)‖_{E_x} ≤ ‖A‖‖Bx‖_{E_x} ≤ ‖A‖‖B‖‖x‖_{E_x}.
Hence, ‖AB‖ = sup_{x≠θ} ‖(AB)x‖_{E_x}/‖x‖_{E_x} ≤ ‖A‖‖B‖.
4.4.4 Example
1. Let A ∈ (ℝⁿ → ℝᵐ) where both ℝᵐ and ℝⁿ are normed by the l_1
norm.
Then ‖A‖_{l_1} = max_{1≤j≤n} Σ_{i=1}^m |a_{ij}|.
Proof: For any x ∈ ℝⁿ,
‖Ax‖_{l_1} = Σ_{i=1}^m |Σ_{j=1}^n a_{ij}x_j| ≤ Σ_{i=1}^m Σ_{j=1}^n |a_{ij}||x_j|
≤ max_{1≤j≤n} (Σ_{i=1}^m |a_{ij}|) Σ_{j=1}^n |x_j| = max_{1≤j≤n} (Σ_{i=1}^m |a_{ij}|) ‖x‖_1,
or
‖A‖ = sup_{x≠θ} ‖Ax‖_1/‖x‖_1 ≤ max_{1≤j≤n} Σ_{i=1}^m |a_{ij}|.      (4.19)
We have to next show that there exists some x ∈ ℝⁿ s.t. the RHS in
the above inequality is attained.
Let k be an index for which the maximum in (4.19) is attained; then
‖Ae_k‖_1 = Σ_{i=1}^m |a_{ik}| = max_{1≤j≤n} Σ_{i=1}^m |a_{ij}|,
where e_k = (0, . . ., 0, 1, 0, . . ., 0)ᵀ is the kth coordinate vector, i.e., the
maximum in (4.19) is attained at the kth coordinate vector.
Problems
1. Let A ∈ (ℝⁿ → ℝᵐ) where ℝⁿ, ℝᵐ are normed by the l_∞ norm.
Show that (i) ‖A‖_{l_∞} = max_{1≤i≤m} Σ_{j=1}^n |a_{ij}|;
(ii) ‖A‖_{l_2} = √λ,
where λ is the maximum eigenvalue of AᵀA.
[Hint. ‖Ax‖² = (Ax)ᵀAx = xᵀ(AᵀA)x.]
2. (Limaye [33]) Let A = (a_{ij}) be an infinite matrix with scalar entries
and
α_{p,r} = sup_{j=1,2,...} (Σ_{i=1}^∞ |a_{ij}|^r)^{1/r} if p = 1, 1 ≤ r < ∞,
α_{p,r} = sup_{i=1,2,...} (Σ_{j=1}^∞ |a_{ij}|^q)^{1/q} if 1 < p ≤ ∞, 1/p + 1/q = 1, r = ∞,
α_{p,r} = sup_{i,j=1,2,...} |a_{ij}| if p = 1, r = ∞.
If α_{p,r} < ∞ then show that A defines a continuous linear map from
l^p to l^r and its operator norm equals α_{p,r}.
[Note that α_{1,1} = sup_{j=1,2,...} Σ_{i=1}^∞ |a_{ij}| and α_{∞,∞} = sup_{i=1,2,...} Σ_{j=1}^∞ |a_{ij}|.]
j=1
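Both formulas in Problem 1 can be checked numerically. The sketch below (illustrative, not from the text; the matrix is invented) computes the maximum absolute row sum and √λ with λ the largest eigenvalue of AᵀA, and compares them with library matrix norms:

```python
import numpy as np

# Illustration of Problem 1: the l-infinity operator norm is the maximum
# absolute row sum; the l2 operator norm is sqrt(lambda_max(A^T A)).
A = np.array([[1.0, -2.0, 3.0],
              [0.5,  4.0, -1.0]])

row_sum_norm = max(abs(A[i, :]).sum() for i in range(A.shape[0]))
lam = max(np.linalg.eigvalsh(A.T @ A))   # largest eigenvalue of A^T A
l2_norm = np.sqrt(lam)
```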
4.5
Uniform Boundedness Principle
4.5.1 Definition: uniform operator convergence
Let {A_n} ⊂ B(E_x → E_y) be a sequence of bounded linear operators, where E_x and E_y are complete normed linear spaces. {A_n} is said to converge uniformly if {A_n} converges in the sense of the norm, i.e.,
‖A_n − A_m‖ → 0 as n, m → ∞.
4.5.2 Lemma
{A_n} converges uniformly if and only if {A_n x} converges uniformly for x in the unit ball of E_x.
Proof: Let us suppose that {A_n} converges uniformly, i.e., ‖A_n − A_m‖ → 0 as n, m → ∞. Hence,
sup_{x≠θ} ‖(A_n − A_m)x‖/‖x‖ → 0 as n, m → ∞,
or, in other words, given ε > 0 and r > 0, there exists an n₀ = n₀(ε/r) s.t.
‖A_n x − A_m x‖ < (ε/r) · r = ε, where x ∈ B(0, r) and n, m ≥ n₀.
Hence the uniform convergence of {A_n x} for x ∈ B(0, r) is established. Conversely, let us suppose that {A_n x} is uniformly convergent for x ∈ B(0, 1). Hence sup_{‖x‖≤1} ‖A_n x − A_m x‖ → 0 as n, m → ∞, or, in other words, ‖A_n − A_m‖ → 0 as n, m → ∞. Using theorem 4.4.2, we can say
A_n → A ∈ B(E_x → E_y) as n → ∞.
4.5.3 Definition: pointwise convergence
A sequence of bounded linear operators {A_n} is said to converge pointwise to a linear operator A if, for every fixed x, the sequence {A_n x} converges to Ax.
4.5.4 Lemma
If {A_n} converges uniformly to A, then {A_n} converges pointwise to A.
Proof: ‖A_n − A‖ → 0 as n → ∞ implies sup_{x≠θ} ‖(A_n − A)x‖/‖x‖ → 0 as n → ∞. Hence, if ‖x‖ ≤ r, then for ε/r > 0 there exists an n₀ s.t. for all n ≥ n₀,
‖A_n x − Ax‖ < (ε/r)‖x‖ ≤ (ε/r) · r = ε,
i.e., {A_n} is pointwise convergent.
On the other hand, if {A_n} is pointwise convergent, {A_n} is not necessarily uniformly convergent, as is evident from the example below.
Let H be a Hilbert space with orthonormal basis {e₁, e₂, . . .}, and for x ∈ H let A_n x denote the projection of x on H_n, the n-dimensional subspace spanned by e₁, e₂, . . . , e_n. Then
A_n x = Σ_{i=1}^{n} ⟨x, e_i⟩ e_i → Σ_{i=1}^{∞} ⟨x, e_i⟩ e_i = x
for every x ∈ H. Hence, A_n → I in the sense of pointwise convergence.
On the other hand, for ε < 1, any n and p > 0,
‖A_{n+p} e_{n+1} − A_n e_{n+1}‖ = ‖e_{n+1} − θ‖ = ‖e_{n+1}‖ = 1 > ε.
Hence the uniform convergence of the sequence {A_n} on the unit ball ‖x‖ ≤ 1 of the space H does not hold.
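A finite slice of l₂ makes the phenomenon concrete. The sketch below is illustrative code, not from the text; the dimension N merely stands in for the infinite-dimensional space. It shows ‖A_n x − x‖ → 0 for a fixed x while the operator gap ‖(A_{n+1} − A_n)e_{n+1}‖ stays equal to 1:

```python
import numpy as np

# A_n projects onto the span of the first n coordinate vectors.
N = 200  # finite stand-in for l2

def project(x, n):
    y = np.zeros_like(x)
    y[:n] = x[:n]
    return y

# For a fixed square-summable x, ||A_n x - x|| shrinks as n grows
x = 1.0 / np.arange(1, N + 1)
pointwise_errors = [np.linalg.norm(project(x, n) - x) for n in (10, 50, 150)]

# But (A_{n+1} - A_n) e_{n+1} = e_{n+1}, so the norm gap is always 1
n = 20
e = np.zeros(N); e[n] = 1.0          # e_{n+1} in 0-based indexing
gap = np.linalg.norm(project(e, n + 1) - project(e, n))
```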
4.5.5 Theorem
If the spaces E_x and E_y are complete, then the space of bounded linear operators is also complete in the sense of pointwise convergence.
Proof: Let a sequence {A_n} of bounded linear operators converge pointwise. Since {A_n x} is a Cauchy sequence for every x, there exists a limit y = lim_{n→∞} A_n x for every x. Since E_y is complete, y ∈ E_y. This asserts the existence of an operator A such that Ax = y. That A is linear can be shown as follows:
for x₁, x₂ ∈ E_x,
A(x₁ + x₂) = lim_{n→∞} A_n(x₁ + x₂) = lim_{n→∞} (A_n x₁ + A_n x₂) = Ax₁ + Ax₂, which shows A is additive.
Again, for α ∈ ℝ (ℂ),
A(αx) = lim_{n→∞} A_n(αx) = α lim_{n→∞} A_n x = αAx, showing A is homogeneous.
That A is bounded can be proved by making an appeal to the Banach-Steinhaus theorem (4.5.7).
4.5.6 Uniform boundedness principle
The uniform boundedness principle was discovered by Banach and Steinhaus. It is one of the basic props of functional analysis. Earlier, Lebesgue first discovered the principle in his investigations of Fourier series; Banach and Steinhaus isolated and developed it as a general principle.
4.5.7 Theorem (Banach and Steinhaus)
If a sequence {A_n} of bounded linear operators is a Cauchy sequence at every point x of a Banach space E_x, then the sequence {‖A_n‖} of the norms of these operators is bounded.
Proof: Let us suppose the contrary. We first show that if the set {‖A_n x‖} were bounded on some closed ball B(x₀, ε), then {‖A_n‖} would be bounded. Any x ∈ B(x₀, ε) can be written as
x = x₀ + ε ξ/‖ξ‖ for any ξ ∈ E_x, ξ ≠ θ.
In fact, if ‖A_n x‖ ≤ C for all n and all x in some ball B(x₀, ε), then
‖A_n(x₀ + ε ξ/‖ξ‖)‖ ≤ C,
or (ε/‖ξ‖)‖A_n ξ‖ − ‖A_n x₀‖ ≤ C,
or ‖A_n ξ‖/‖ξ‖ ≤ (C + ‖A_n x₀‖)/ε.
Since the norm sequence {‖A_n x₀‖} is bounded, {A_n x₀} being a convergent sequence, it follows that
‖A_n ξ‖ ≤ C₁‖ξ‖, where C₁ = (C + sup_n ‖A_n x₀‖)/ε.
The above inequality yields
‖A_n‖ = sup_{ξ≠θ} ‖A_n ξ‖/‖ξ‖ ≤ C₁.
Thus ‖A_n x‖ ≤ C for all x ∈ B(x₀, ε) implies ‖A_n‖ ≤ C₁, which would contradict our assumption. Consequently, under the assumption, the set {‖A_n x‖} is not bounded on any closed ball.
Next, let B₀(x₀, ε₀) be any closed ball in E_x. The sequence {‖A_n x‖} is not bounded on it. Hence there are an index n₁ and an element
x₁ ∈ B₀(x₀, ε₀) s.t. ‖A_{n₁} x₁‖ > 1.
By continuity of the operator A_{n₁}, the above inequality then holds in some closed ball B̄₁(x₁, ε₁) ⊂ B₀(x₀, ε₀) (see fig. 4.1, which sketches the nested closed balls about x₀). The sequence {‖A_n x‖} is again not bounded on B̄₁(x₁, ε₁), and therefore there are an index n₂ > n₁ and an element
x₂ ∈ B̄₁(x₁, ε₁) s.t. ‖A_{n₂} x₂‖ > 2.
Since A_{n₂} is continuous, the above inequality must hold in some ball B̄₂(x₂, ε₂) ⊂ B̄₁(x₁, ε₁), and so on. If we continue this process and let ε_n → 0 as n → ∞, there is a point x̄ belonging to all balls B̄_k(x_k, ε_k). At this point
‖A_{n_k} x̄‖ > k,
which contradicts the hypothesis that {A_n x} converges for all x ∈ E_x. Hence the theorem.
Now we revert to the operator
Ax = lim_{n→∞} A_n x.
The inequality ‖A_n x‖ ≤ M‖x‖, n = 1, 2, . . . holds good. Now, given ε > 0, there exists n₀ s.t. for n ≥ n₀,
‖Ax‖ ≤ ‖A_n x‖ + ‖(A − A_n)x‖ ≤ (M + ε)‖x‖.
Making ε → 0, we get
‖Ax‖ ≤ M‖x‖.
4.5.8 Theorem (uniform boundedness principle)
Let E_x and E_y be two Banach spaces. Let {A_n} be a sequence of bounded linear operators mapping E_x → E_y s.t. {A_n x} is a bounded subset of E_y for each x ∈ E_x. Then the sequence {‖A_n‖} of the norms of these operators is bounded.
Proof: Let S_k = {x ∈ E_x : ‖A_n x‖ ≤ k for all n}. Clearly, S_k is a closed subset of E_x. Since {A_n x} is bounded for any x, E_x can be expressed as
E_x = ⋃_{k=1}^{∞} S_k.
Since E_x is a Banach space, it follows by Baire's category theorem 1.3.7 that there exists at least one S_{k₀} with nonempty interior, which thus contains a closed ball B₀(x₀, r₀) with centre x₀ and radius r₀ > 0. In particular, ‖A_n x₀‖ ≤ k₀ for all n.
Now, x = x₀ + r₀ ξ/‖ξ‖ belongs to B₀(x₀, r₀) for every ξ ∈ E_x, ξ ≠ θ.
Thus ‖A_n(x₀ + r₀ ξ/‖ξ‖)‖ ≤ k₀, so that
(r₀/‖ξ‖)‖A_n ξ‖ − ‖A_n x₀‖ ≤ k₀.
Hence, ‖A_n ξ‖/‖ξ‖ ≤ (k₀ + ‖A_n x₀‖)/r₀ ≤ 2k₀/r₀,
and thus ‖A_n‖ = sup_{ξ≠θ} ‖A_n ξ‖/‖ξ‖ ≤ 2k₀/r₀.
Hence, {‖A_n‖} is bounded.
4.5.9 Remark
The theorem does not hold unless E_x is a Banach space, as the following examples show.
4.5.10 Examples
1. Let E_x = {x = {x_j} : only a finite number of the x_j's are nonzero}, with
‖x‖ = sup_j |x_j|.
Consider the mapping, a linear functional, defined by f_n(x) = Σ_{j=1}^{n} x_j. Then, for x with x_i = 0 for i > m,
|f_n(x)| = |Σ_{j=1}^{n} x_j| ≤ Σ_{j=1}^{m} |x_j|, since x_i = 0 for i > m,
so that |f_n(x)| ≤ m‖x‖.
Hence, {f_n(x)} is bounded for each fixed x.
Now, |f_n(x)| = |Σ_{j=1}^{n} x_j| ≤ Σ_{j=1}^{n} |x_j| ≤ n‖x‖.
Hence ‖f_n‖ ≤ n.
Next, consider the element ξ = {ξ₁, ξ₂, . . . , ξ_i, . . .}, where ξ_i = 1 for 1 ≤ i ≤ n and ξ_i = 0 for i > n. Then ‖ξ‖ = 1 and
f_n(ξ) = Σ_{i=1}^{n} ξ_i = n, so that ‖f_n‖ ≥ |f_n(ξ)|/‖ξ‖ = n.
Thus {f_n(x)} is bounded for each x but {‖f_n‖} is not bounded. This is because E_x = {x = {x_j} : only a finite number of the x_j's are nonzero} is not a Banach space.
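The example can be played through numerically. In the sketch below (illustrative code; the function and element names are invented), f_n is the partial-sum functional on finitely supported sequences:

```python
# On the space of finitely supported sequences with the sup norm,
# f_n(x) = x_1 + ... + x_n is bounded for each fixed x, yet ||f_n|| = n,
# attained at xi = (1, 1, ..., 1, 0, 0, ...).
def f(n, x):
    """Partial-sum functional f_n applied to a finitely supported sequence x."""
    return sum(x[:n])

# A fixed x with m = 3 nonzero terms: {f_n(x)} stays bounded by m*||x||
x = [2.0, -1.0, 3.0]          # zeros beyond the third entry are implicit
values = [f(n, x) for n in range(1, 50)]

# The norming element xi for n = 10: ||xi|| = 1 but f_10(xi) = 10
n = 10
xi = [1.0] * n
```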
2. Let E_x be the set of polynomials x = x(t) = Σ_{n=0}^{∞} p_n tⁿ, where p_n = 0 for n > N_x. Let
‖x‖ = max{|p_n| : n = 0, 1, 2, . . .}.
Let f_n(x) = Σ_{k=0}^{n−1} p_k. The functionals f_n are continuous linear functionals on E_x. Moreover, for every x = p₀ + p₁t + ⋯ + p_m t^m, it is clear that for every n, |f_n(x)| ≤ (m + 1)‖x‖, so that {f_n(x)} is bounded. For ‖f_n‖ we choose x(t) = 1 + t + ⋯ + tⁿ.
Now, ‖f_n‖ = sup_{x≠θ} |f_n(x)|/‖x‖ ≥ n, since ‖x‖ = 1 and f_n(x) = n.
Hence {‖f_n‖} is unbounded.
4.6
Some Applications
4.6.1 Lagrange's interpolation polynomial
Lagrange's interpolation formula finds the form of a given function on a given interval when the values of the function are known at (not necessarily equidistant) interpolating points within the said interval. In what follows, we want to show that although the Lagrangian operator converges pointwise to the identity operator, it is not uniformly convergent.
For any function f defined on the interval [0, 1] and any partition 0 ≤ t₁ < t₂ < ⋯ < t_n ≤ 1 of [0, 1], there is a polynomial of degree (n − 1) which interpolates to f at the given points, i.e., takes the values f(t_i) at t = t_i, i = 1, 2, . . . , n. This is called the Lagrangian interpolation polynomial and is given by
L_n f = Σ_{k=1}^{n} w_k^{(n)}(t) f(t_k^{(n)})   (4.20)
where w_k^{(n)}(t) = p_n(t) / (p_n′(t_k^{(n)})(t − t_k^{(n)}))   (4.21)
and p_n(t) = Π_{k=1}^{n} (t − t_k^{(n)}).   (4.22)
4.6.2 Theorem
We are given points on the segment [0, 1] forming the infinite triangular matrix
t₁¹
t₁² t₂²
t₁³ t₂³ t₃³
. . . . . . . . .   (4.23)
For a given function f(t) defined on [0, 1], we construct the Lagrangian interpolation polynomial L_n f whose partition points are the points of the nth row of (4.23),
L_n f = Σ_{k=1}^{n} w_k^{(n)}(t) f(t_k^{(n)}),
where w_k^{(n)}(t) = p_n(t) / (p_n′(t_k^{(n)})(t − t_k^{(n)})), p_n(t) = Π_{k=1}^{n} (t − t_k^{(n)}).
For every choice of the matrix (4.23), there is a continuous function f(t) s.t. L_n f does not converge uniformly to f(t) as n → ∞.
Proof: Let us consider L_n as an operator mapping the function f(t) ∈ C([0, 1]) into elements of the same space, and put
λ_n = max_t λ_n(t), where λ_n(t) = Σ_{k=1}^{n} |w_k^{(n)}(t)|.
Now, ‖L_n f‖ = max_t |Σ_{k=1}^{n} w_k^{(n)}(t) f(t_k^{(n)})| ≤ max_t Σ_{k=1}^{n} |w_k^{(n)}(t)| · max_k |f(t_k^{(n)})| ≤ λ_n ‖f‖,
since on C([0, 1]), ‖f‖ = max_t |f(t)|. Hence
‖L_n‖ = sup_{f≠θ} ‖L_n f‖/‖f‖ ≤ λ_n.
Since λ_n(t) is a continuous function defined on the closed and bounded set [0, 1], the supremum is attained. Hence ‖L_n‖ = λ_n.
On the other hand, the Bernstein inequality (see Natanson [40])
λ_n > (ln n)/(8√π)
holds. Consequently ‖L_n‖ → ∞ as n → ∞.
This proves the said theorem, because if L_n f → f uniformly for all f(t) ∈ C([0, 1]), then by the uniform boundedness principle the norms ‖L_n‖ must be bounded.
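The growth of ‖L_n‖ = λ_n is easy to observe numerically. The sketch below (illustrative code, not from the text; the node choice and sampling resolution are invented) computes the Lebesgue constant λ_n = max_t Σ_k |w_k(t)| for equidistant nodes on [0, 1]:

```python
import numpy as np

# Lebesgue constant of Lagrange interpolation: lambda_n = max_t sum_k |w_k(t)|.
# For equidistant nodes it grows without bound, so L_n f cannot converge
# uniformly for every continuous f.
def lebesgue_constant(nodes, num_samples=2000):
    t = np.linspace(0.0, 1.0, num_samples)
    total = np.zeros_like(t)
    for k, tk in enumerate(nodes):
        others = np.delete(nodes, k)
        # w_k(t) = prod_{j != k} (t - t_j) / (t_k - t_j)
        w = np.prod((t[:, None] - others) / (tk - others), axis=1)
        total += np.abs(w)
    return total.max()

constants = [lebesgue_constant(np.linspace(0, 1, n)) for n in (5, 10, 15, 20)]
```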
4.6.3 Divergence of Fourier series of continuous functions
In section 7 of Chapter 3 we introduced Fourier series and Fourier coefficients in a Hilbert space. In L₂([−π, π]) the orthonormal set can be taken as
e_n(t) = e^{int}/√(2π), n = 0, ±1, ±2, . . .
If x(t) ∈ L₂([−π, π]), then x(t) can be written as Σ_{n=−∞}^{∞} x_n e_n, where
x_n = ⟨x, e_n⟩ = (1/√(2π)) ∫_{−π}^{π} x(t) e^{−int} dt
= (1/√(2π)) ∫_{−π}^{π} x(t) cos nt dt − (i/√(2π)) ∫_{−π}^{π} x(t) sin nt dt = c_n − i d_n (say).
Then x_{−n} = c_n + i d_n. Thus
Σ_{n=−∞}^{∞} x_n e_n = x₀e₀ + Σ_{n=1}^{∞} (2c_n cos nt)/√(2π) + Σ_{n=1}^{∞} (2d_n sin nt)/√(2π)   (4.24)
Here, c_n and d_n are called the Fourier coefficients of x(t).
Note 4.6.1. It may be noted that the completeness of {e_n} is equivalent to the statement that each x ∈ L₂ has the Fourier expansion
x(t) = (1/√(2π)) Σ_{n=−∞}^{∞} x_n e^{int}   (4.25)
It must be emphasized that this expansion is not to be interpreted as saying that the series converges pointwise to the function.
One can conclude that the partial sums in (4.25), i.e., the vectors
u_n(t) = (1/√(2π)) Σ_{k=−n}^{n} x_k e^{ikt}   (4.26)
converge to the vector x in the sense of L₂, i.e.,
‖u_n(t) − x(t)‖ → 0 as n → ∞.
This situation is often expressed by saying that x is the limit in the mean of the u_n's.
4.6.4 Theorem
Let E = {x ∈ C([−π, π]) : x(−π) = x(π)} with the sup norm. Then the Fourier series of every x in a dense subset of E diverges at 0. We recall (see equation (4.25)) that
x(t) = x₀ + Σ_{n=1}^{∞} c_n cos nt + Σ_{n=1}^{∞} d_n sin nt   (4.27)
where x₀ = (1/2π) ∫_{−π}^{π} x(t) dt   (4.28)
c_n = (1/π) ∫_{−π}^{π} x(t) cos nt dt   (4.29)
d_n = (1/π) ∫_{−π}^{π} x(t) sin nt dt   (4.30)
For x(t) ∈ E,
‖x‖ = max_t |x(t)|   (4.31)
Let us take the operator A_n = u_n, where u_n(x) is the value at t = 0 of the nth partial sum of the Fourier series of x. Since for t = 0 the sine terms are zero and the cosines are one, we see from (4.28) and (4.29) that
u_n(x) = x₀ + Σ_{m=1}^{n} c_m = (1/2π) ∫_{−π}^{π} x(t) [1 + 2 Σ_{m=1}^{n} cos mt] dt   (4.32)
Now, 2 sin(t/2) Σ_{m=1}^{n} cos mt = Σ_{m=1}^{n} 2 sin(t/2) cos mt = Σ_{m=1}^{n} [sin(m + 1/2)t − sin(m − 1/2)t] = −sin(t/2) + sin(n + 1/2)t.
It may be noted that, except for the end terms, all other intermediate terms in the summation vanish in pairs.
Dividing both sides by sin(t/2) and adding 1 to both sides, we have
1 + 2 Σ_{m=1}^{n} cos mt = sin((n + 1/2)t) / sin(t/2).
Consequently, the expression for u_n(x) can be written in the simple form
u_n(x) = (1/2π) ∫_{−π}^{π} x(t) D_n(t) dt, where D_n(t) = sin((n + 1/2)t) / sin(t/2).
It should be noted that u_n(x) is a linear functional in x(t).
Linear Operators
163
We next show that u_n is bounded. It follows from (4.31) and the above integral that
|u_n(x)| ≤ (1/2π) ∫_{−π}^{π} |x(t)| |D_n(t)| dt ≤ [(1/2π) ∫_{−π}^{π} |D_n(t)| dt] ‖x‖.
Therefore, ‖u_n‖ = sup_{x≠θ} |u_n(x)|/‖x‖ ≤ (1/2π) ∫_{−π}^{π} |D_n(t)| dt = (1/2π)‖D_n‖₁,
where ‖·‖₁ denotes the L₁ norm.
Actually the equality sign holds, as we shall prove. Let us write |D_n(t)| = y(t)D_n(t), where y(t) = +1 at every t at which D_n(t) ≥ 0 and y(t) = −1 otherwise. y(t) is not continuous, but for any given ε > 0 it may be modified to a continuous x of norm 1 such that for this x we have
|u_n(x) − (1/2π) ∫_{−π}^{π} |D_n(t)| dt| = (1/2π) |∫_{−π}^{π} (x(t) − y(t)) D_n(t) dt| < ε.
It follows that
‖u_n‖ = (1/2π) ∫_{−π}^{π} |D_n(t)| dt.
We next show that the sequence {‖u_n‖} is unbounded. Since sin u ≤ u for u ≥ 0, we note (substituting t = 2u and then v = (2n + 1)u) that
∫_{−π}^{π} |sin((n + 1/2)t) / sin(t/2)| dt ≥ 4 ∫_{0}^{π/2} |sin((2n + 1)u)| / u du
= 4 ∫_{0}^{(2n+1)π/2} |sin v| / v dv
= 4 Σ_{k=0}^{2n} ∫_{kπ/2}^{(k+1)π/2} |sin v| / v dv
≥ 4 Σ_{k=0}^{2n} [1/((k + 1)π/2)] ∫_{kπ/2}^{(k+1)π/2} |sin v| dv
= (8/π) Σ_{k=0}^{2n} 1/(k + 1) → ∞ as n → ∞,
because the harmonic series Σ_{k=1}^{∞} 1/k diverges.
Hence {‖u_n‖} is unbounded. Since E is complete, the uniform boundedness principle implies that if {u_n(x)} were bounded for every x ∈ E, the norms ‖u_n‖ would be bounded. Hence there must be an x̃ ∈ E such that {u_n(x̃)} is unbounded. This implies that the Fourier series of that x̃ diverges at t = 0.
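The logarithmic growth of ‖u_n‖ = (1/2π)‖D_n‖₁ can be observed directly. The following sketch (illustrative code, not from the text; the grid size is invented) evaluates the integral by a midpoint rule:

```python
import numpy as np

# (1/2pi) * integral over [-pi, pi] of |D_n(t)|, with
# D_n(t) = sin((n + 1/2)t) / sin(t/2), grows like log n.
def dirichlet_l1(n, num=200000):
    # midpoint grid on (-pi, pi); t = 0 is never hit exactly
    t = (np.arange(num) + 0.5) / num * 2 * np.pi - np.pi
    D = np.sin((n + 0.5) * t) / np.sin(0.5 * t)
    return np.abs(D).mean()   # mean over the grid = (1/2pi) * integral

norms = [dirichlet_l1(n) for n in (1, 10, 100, 1000)]
```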
Problems [4.5 & 4.6]
1. Let E_x be a Banach space and E_y a normed linear space. If {T_n} is a sequence in B(E_x → E_y) such that Tx = lim_{n→∞} T_n x exists for each x in E_x, prove that T is a continuous linear operator.
2. Let E_x and E_y be normed linear spaces and A : E_x → E_y a linear operator with the property that the set {‖Ax_n‖ : n ∈ ℕ} is bounded whenever x_n → θ in E_x. Prove that A ∈ B(E_x → E_y).
3. Given that E_x and E_y are Banach spaces and A : E_x → E_y is a bounded linear operator, show that A(E_x) either equals E_y or is a set of the first category in E_y.
[Hint: A set X ⊂ E_x is said to be of the first category in E_x if it is the union of countably many nowhere dense sets in E_x.]
4. If E_x is a Banach space and {f_n} is a sequence of continuous linear functionals on E_x such that {f_n(x)} is bounded for every x ∈ E_x, then show that the sequence {‖f_n‖} is bounded.
[Hint: Consult theorem 4.5.7]
5. Let E_x and E_y be normed linear spaces and E a bounded, complete convex subset of E_x. A mapping A from E_x to E_y is called affine if
A(λa + (1 − λ)b) = λA(a) + (1 − λ)A(b) for all 0 < λ < 1 and a, b ∈ E.
Let F be a set of continuous affine mappings from E_x to E_y. Then show that either the set {‖A(x)‖ : A ∈ F} is unbounded for each x in some dense subset of E, or else F is uniformly bounded in E_y.
6. Let E_x and E_y be Banach spaces and A_n ∈ B(E_x → E_y), n = 1, 2, . . .. Then show that there is some A ∈ B(E_x → E_y) such that A_n x → Ax for every x ∈ E_x if and only if {A_n x} converges for every x in some set whose span is dense in E_x and the set {‖A_n‖ : n = 1, 2, . . .} is bounded.
7. Show that there exists a dense subset of E_x = {x ∈ C([−π, π]) : x(−π) = x(π)} such that the Fourier series of every x in it diverges at every rational number in [−π, π].
4.7
Inverse Operators
In what follows, we introduce the notion of the inverse of a linear operator
and investigate the conditions for its existence and uniqueness. This is, in other words, searching for the conditions under which a given system of equations has a solution and whether, when a solution exists, it is unique.
4.7.1 Definition: domain of an operator
Let a linear operator A map a subspace E of a Banach space E_x into a Banach space E_y. The subspace E on which the operator A is defined is called the domain of A and is denoted by D(A).
4.7.2 Example
Let A : C²([0, 1]) → C([0, 1]), Au = f, where A = d²/dx², u(0) = u(1) = 0 and f ∈ C([0, 1]).
Here, D(A) = {u(x) : u(x) ∈ C²([0, 1]), u(0) = u(1) = 0}.
4.7.3 Definition: range of an operator
Let a linear operator A map D(A) ⊂ E_x into E_y, where E_x and E_y are Banach spaces. The range of A is the subspace of E_y onto which D(A) is mapped, and is denoted by R(A).
4.7.4 Example
Let (Ax)(s) = y(s) = ∫₀ˢ K(s, t)x(t) dt, where x(t) ∈ C([0, 1]) and the kernel K(s, t) is continuous; A maps C([0, 1]) into C([0, 1]).
R(A) = {y : y ∈ C([0, 1]), y = Ax}.
4.7.5
Definition: null space of a linear operator
The null space of a linear operator A is defined as the set of elements of E which are mapped into the null element; it is denoted by N(A). Thus N(A) = {x ∈ E : Ax = θ}.
4.7.6
Example
Let A : ℝ² → ℝ² with
A = [ 2 1 ; 4 2 ].
N(A) = {x ∈ ℝ² : x₁ = −(1/2)x₂}, where x = (x₁, x₂)ᵀ.
4.7.7 Definition: left inverse and right inverse of a linear operator
A linear continuous operator B is said to be a left inverse of A if
BA = I.
A linear continuous operator C is said to be a right inverse of A if
AC = I.
4.7.8 Lemma
If A has a left inverse B and a right inverse C, then B = C.
For B = B(AC) = (BA)C = C.
If A has a left inverse as well as a right inverse, then A is said to have an inverse, and the inverse operator is denoted by A⁻¹.
Thus if A⁻¹ exists, then by definition A⁻¹A = AA⁻¹ = I.
4.7.9 Inverse operators and algebraic equations
Let E_x and E_y be two Banach spaces and let A be an operator s.t. A ∈ B(E_x → E_y).
We want to know when one can solve
Ax = y   (4.33)
Here y is a known element of the linear space E_y and x ∈ E_x is unknown.
If R(A) = E_y, we can solve (4.33) for each y ∈ E_y. If N(A) consists of only the null element, the solution is unique. Thus, if R(A) = E_y and N(A) = {θ}, we can assign to each y ∈ E_y the unique solution of (4.33). This assignment gives the inverse operator A⁻¹ of A. We next show that A⁻¹, if it exists, is linear. Let x = A⁻¹(y₁ + y₂) − A⁻¹y₁ − A⁻¹y₂. Then, A being linear,
Ax = AA⁻¹(y₁ + y₂) − AA⁻¹y₁ − AA⁻¹y₂ = y₁ + y₂ − y₁ − y₂ = θ.
Thus x = A⁻¹Ax = A⁻¹θ = θ, i.e., A⁻¹(y₁ + y₂) = A⁻¹y₁ + A⁻¹y₂, proving A⁻¹ to be additive. Analogously, the homogeneity of A⁻¹ is established.
Note 4.7.1. It may be noted that the continuity of the operator A in
some topology does not necessarily imply the continuity of its inverse,
i.e., an operator inverse to a bounded linear operator is not necessarily a
bounded linear operator. In what follows we investigate sufficient conditions
for the existence of the inverse to a linear operator.
4.7.10 Theorem (Banach)
Let a linear operator A map a normed linear (Banach) space E_x onto a normed linear (Banach) space E_y, satisfying for every x ∈ E_x the condition
‖Ax‖ ≥ m‖x‖, m > 0,   (4.34)
m some constant. Then the bounded inverse linear operator A⁻¹ exists.
Proof: The condition (4.34) implies that A maps E_x onto E_y in a one-to-one fashion: if Ax₁ = y and Ax₂ = y, then A(x₁ − x₂) = θ yields
m‖x₁ − x₂‖ ≤ ‖A(x₁ − x₂)‖ = 0, whence x₁ = x₂.
Hence, there is a linear operator A⁻¹. The operator is bounded, as is evident from (4.34):
‖A⁻¹y‖ ≤ (1/m)‖AA⁻¹y‖ = (1/m)‖y‖
for every y ∈ E_y.
4.7.11 Theorem (Banach)
Let A and B be two bounded linear operators mapping a normed linear
space E into itself, so that A and B are conformable for multiplication.
Then
‖AB‖ ≤ ‖A‖ ‖B‖.
If further A_n → A and B_n → B as n → ∞, then A_nB_n → AB as n → ∞.
Proof: For any x ∈ E,
‖ABx‖ ≤ ‖A‖ ‖Bx‖ ≤ ‖A‖ ‖B‖ ‖x‖,
or ‖AB‖ = sup_{x≠θ} ‖ABx‖/‖x‖ ≤ ‖A‖ ‖B‖.
Hence ‖AB‖ ≤ ‖A‖ ‖B‖.
Now, since {A_n} and {B_n} are sequences of bounded linear operators (with {‖A_n‖} bounded, being convergent),
‖A_nB_n − AB‖ = ‖A_nB_n − A_nB + A_nB − AB‖ ≤ ‖A_n‖ ‖B_n − B‖ + ‖A_n − A‖ ‖B‖ → 0 as n → ∞, since A_n → A and B_n → B as n → ∞.
4.7.12 Theorem
Let a bounded linear operator A map E into E and let ‖A‖ ≤ q < 1. Then the operator I + A has an inverse, which is a bounded linear operator.
Proof: In the space of operators with domain E and range likewise in E, we consider the series
I − A + A² − A³ + ⋯ + (−1)ⁿAⁿ + ⋯   (4.35)
Since ‖A²‖ ≤ ‖A‖ ‖A‖ = ‖A‖², and analogously ‖Aⁿ‖ ≤ ‖A‖ⁿ, it follows for the partial sums S_n of the series (4.35) that
‖S_{n+p} − S_n‖ = ‖(−1)^{n+1}A^{n+1} + (−1)^{n+2}A^{n+2} + ⋯ + (−1)^{n+p}A^{n+p}‖
≤ ‖A^{n+1}‖ + ‖A^{n+2}‖ + ⋯ + ‖A^{n+p}‖
≤ q^{n+1} + q^{n+2} + ⋯ + q^{n+p} = q^{n+1}(1 − q^p)/(1 − q) → 0 as n → ∞, since p > 0, 0 < q < 1.
Hence {S_n} is a Cauchy sequence and, the space of operators being complete, the sequence {S_n} converges to a limit.
Let S be the sum of the series. Then
S(I + A) = lim_{n→∞} S_n(I + A) = lim_{n→∞} (I − A + A² − ⋯ + (−1)ⁿAⁿ)(I + A) = lim_{n→∞} (I + (−1)ⁿA^{n+1}) = I,
since ‖A^{n+1}‖ ≤ q^{n+1} → 0. Hence S = (I + A)⁻¹.
Let x₁ = (I + A)⁻¹y₁, x₂ = (I + A)⁻¹y₂, with x₁, x₂, y₁, y₂ ∈ E. Then
y₁ + y₂ = (I + A)(x₁ + x₂),
or x₁ + x₂ = (I + A)⁻¹y₁ + (I + A)⁻¹y₂ = (I + A)⁻¹(y₁ + y₂).
Hence, S is a linear operator. Moreover,
‖S‖ ≤ Σ_{n=0}^{∞} ‖Aⁿ‖ ≤ Σ_{n=0}^{∞} qⁿ = 1/(1 − q), 0 < q < 1.   (4.36)
Thus (I + A)⁻¹ is a bounded linear operator.
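Theorem 4.7.12 is constructive, and the partial sums of (4.35) can be computed. The sketch below (illustrative code; the matrix and the value q = 0.4 are invented for the demonstration) compares the Neumann series with a direct inverse:

```python
import numpy as np

# When ||A|| <= q < 1, the series I - A + A^2 - A^3 + ... converges to
# (I + A)^{-1}, and ||(I + A)^{-1}|| <= 1/(1 - q), as in (4.36).
rng = np.random.default_rng(1)
A = rng.standard_normal((5, 5))
A *= 0.4 / np.linalg.norm(A, 2)      # rescale so ||A|| = q = 0.4

S = np.zeros_like(A)
term = np.eye(5)
for _ in range(60):                   # partial sums S_n of (4.35)
    S += term
    term = -term @ A                  # next term (-1)^n A^n

exact = np.linalg.inv(np.eye(5) + A)
err = np.linalg.norm(S - exact, 2)
bound = 1.0 / (1.0 - 0.4)
```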
4.7.13 Theorem
Let A ∈ B(E_x → E_y) have a bounded inverse with ‖A⁻¹‖ ≤ α, and let ‖A − C‖ ≤ β with αβ < 1. Then C has a bounded inverse and
‖C⁻¹‖ ≤ α/(1 − αβ).
Proof: Since ‖I − A⁻¹C‖ = ‖A⁻¹(A − C)‖ ≤ αβ < 1 and A⁻¹C = I − (I − A⁻¹C), it follows from theorem 4.7.12 that A⁻¹C has a bounded inverse, and hence C has a bounded inverse.
Hence, ‖C⁻¹‖ = ‖(A⁻¹C)⁻¹A⁻¹‖ = ‖(I − (I − A⁻¹C))⁻¹A⁻¹‖ ≤ ‖(I − (I − A⁻¹C))⁻¹‖ ‖A⁻¹‖ ≤ α Σ_{n=0}^{∞} (αβ)ⁿ = α/(1 − αβ),
using (4.35) and noting that ‖I − A⁻¹C‖ ≤ αβ.
4.7.14 Example
Consider the integral operator
Cx = x(s) − ∫₀¹ K(s, t)x(t) dt   (4.36′)
with continuous kernel K(s, t), which maps the space C([0, 1]) into C([0, 1]).
Let K₀(s, t) be a degenerate kernel close to K(s, t); that is, K₀(s, t) is of the form Σ_{i=1}^{n} a_i(s)b_i(t). In such a case, the equation
Ax = x(s) − ∫₀¹ K₀(s, t)x(t) dt = y   (4.37)
can be reduced to a system of algebraic equations, and finding the solution of equation (4.37) can be reduced to finding the solution of the concerned algebraic equations. Let us assume that equation (4.37) has a solution.
In order to know whether the integral equation
Cx = y   (4.38)
has a solution, we frame equation (4.37) in such a manner that
w = max_{t,s} |K(t, s) − K₀(t, s)| < 1/r   (4.39)
where r = ‖R‖, R = (r_{ij}) being the matrix associated with the solution x₀(s) = Ry of the linear algebraic system generated from equation (4.37).
It follows from (4.38), (4.37) and (4.39) that
‖C − A‖ ≤ w < 1/r   (4.40)
It follows from theorem 4.7.13 that equation (4.38), with a continuous kernel, has a solution; if x(t) is the solution, then
‖x(t) − x₀(t)‖ ≤ wr²/(1 − wr).
The above inequality gives an estimate of how much the solution of equation (4.38) differs from the solution of (4.37), the explicit form of the solution of (4.38) not being known.
Finally, we obtain the following theorem.
4.7.15 Theorem (Banach)
If a bounded linear operator A maps the whole of the Banach space E_x onto the whole of the Banach space E_y in a one-to-one manner, then there exists a bounded linear operator A⁻¹ which maps E_y onto E_x.
Proof: The operator A, being one-to-one and onto, has an inverse A⁻¹. We need to show that A⁻¹ is bounded.
Let S_k = {y ∈ E_y : ‖A⁻¹y‖ ≤ k‖y‖}. E_y can be represented as
E_y = ⋃_{k=1}^{∞} S_k.
Since E_y is a complete metric space, by Baire's category theorem (th. 1.4.19) at least one of the sets S_k is everywhere dense. Let this set be S_n.
Let us take an element y ∈ E_y with ‖y‖ = l. Then there exists y₁ ∈ S_n such that
‖y − y₁‖ ≤ l/2, ‖y₁‖ ≤ l.
This is possible, since B(0, l) ∩ S_n is everywhere dense in B(0, l) and y ∈ B(0, l).
Moreover, we can find an element y₂ ∈ S_n such that
‖(y − y₁) − y₂‖ ≤ l/2², ‖y₂‖ ≤ l/2.
Continuing this process, we can find elements y_k ∈ S_n such that
‖y − (y₁ + y₂ + ⋯ + y_k)‖ ≤ l/2^k, ‖y_k‖ ≤ l/2^{k−1}.
Making k → ∞, we have y = lim_{k→∞} Σ_{i=1}^{k} y_i.
Let x_k = A⁻¹y_k; then
‖x_k‖_{E_x} = ‖A⁻¹y_k‖_{E_x} ≤ n‖y_k‖_{E_y} ≤ nl/2^{k−1}.
Expressing s_k = Σ_{i=1}^{k} x_i,
‖s_{k+p} − s_k‖ = ‖Σ_{i=k+1}^{k+p} x_i‖ ≤ Σ_{i=k+1}^{k+p} nl/2^{i−1} = (nl/2^{k−1})(1 − (1/2)^p) < nl/2^{k−1}.
E_x being complete, {s_k} converges to some element x ∈ E_x as k → ∞. Hence x = lim_{k→∞} Σ_{i=1}^{k} x_i = Σ_{i=1}^{∞} x_i. Moreover, A being continuous,
Ax = A(lim_{k→∞} Σ_{i=1}^{k} x_i) = lim_{k→∞} Σ_{i=1}^{k} Ax_i = lim_{k→∞} Σ_{i=1}^{k} y_i = y.
Hence ‖A⁻¹y‖ = ‖x‖ = lim_{k→∞} ‖Σ_{i=1}^{k} x_i‖ ≤ Σ_{i=1}^{∞} ‖x_i‖ ≤ Σ_{i=1}^{∞} nl/2^{i−1} = 2nl = 2n‖y‖.
Since y is an arbitrary element of E_y, it is proved that A⁻¹ is a bounded linear operator.
Note 4.7.2. This is further to the note 4.7.1.
(i) A bounded linear operator A mapping a Banach space Ex into a
Banach space Ey may have an inverse which is not bounded.
(ii) An unbounded linear operator mapping E_x → E_y may have a bounded inverse.
4.7.16 Examples
1. Let E = C([0, 1]) and let
(Au)(x) = ∫₀ˣ u(τ) dτ.
Thus, A is a bounded linear operator mapping C([0, 1]) into C([0, 1]).
A⁻¹, given by
A⁻¹u = du/dt,
is an unbounded operator defined on the linear subspace of continuously differentiable functions u such that u(0) = 0.
2. Let E = C([0, 1]). The operator B is given by
Bu = f(x)   (4.41)
where B = d²/dx² and
D_B = {u : u ∈ C²([0, 1]), u(0) = u(1) = 0}.
Integrating equation (4.41) twice, we obtain
u(x) = ∫₀ˣ ds ∫₀ˢ f(t) dt + C₁x + C₂.
The condition u(0) = 0 immediately gives C₂ = 0 and consequently
u(x) = ∫₀ˣ ds ∫₀ˢ f(t) dt + C₁x   (4.42)
We change the order of integration and use Fubini's theorem [see 10.5]. This leads to
u(x) = ∫₀ˣ f(t) dt ∫ₜˣ ds + C₁x = ∫₀ˣ (x − t)f(t) dt + C₁x.
u(1) = 0 ⟹ C₁ = −∫₀¹ (1 − t)f(t) dt   (4.43)
Using (4.43), (4.42) reduces to
u(x) = −∫₀ˣ t(1 − x)f(t) dt − ∫ₓ¹ x(1 − t)f(t) dt = −∫₀¹ K(x, t)f(t) dt = B⁻¹f   (4.44)
where K(x, t), the kernel of the integral equation (4.44), is given by
K(x, t) = t(1 − x) for 0 ≤ t ≤ x, and K(x, t) = x(1 − t) for x ≤ t ≤ 1.
We note that B is not a bounded operator. For example, take u(x) = sin nπx; then u(x) ∈ D(B) and
d²(sin nπx)/dx² = −n²π² sin nπx,
so that ‖d²(sin nπx)/dx²‖/‖sin nπx‖ = n²π² → ∞ as n → ∞.
On the other hand,
‖B⁻¹f‖ ≤ max_x ∫₀¹ K(x, t) dt ‖f‖ ≤ (1/8)‖f‖.
Hence B⁻¹ is bounded.
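The Green's-function representation (4.44) and the bound 1/8 can be verified numerically. In the sketch below (illustrative code, not from the text) we take f = −2, for which u(x) = x(1 − x) solves u″ = f with u(0) = u(1) = 0:

```python
import numpy as np

# Kernel from (4.44): K(x,t) = t(1-x) for t <= x, x(1-t) for t >= x.
def K(x, t):
    return np.where(t <= x, t * (1 - x), x * (1 - t))

N = 4000
t = (np.arange(N) + 0.5) / N          # midpoint quadrature nodes on [0, 1]

def integral(vals):
    return vals.mean()                 # interval has length 1

f = -2.0                               # u'' = -2 has solution u = x(1 - x)
xs = np.linspace(0, 1, 11)
u_num = np.array([-integral(K(x, t) * f) for x in xs])   # u = -int K f
u_exact = xs * (1 - xs)

# max_x int_0^1 K(x,t) dt = max_x x(1-x)/2 = 1/8
kernel_bound = max(integral(K(x, t)) for x in xs)
```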
4.7.17 Operators depending on a parameter
Let us consider an equation of the form
Ax − λx = y, or (A − λI)x = y   (4.45)
Here λ is a parameter. Such equations occur frequently in Applied Mathematics and Theoretical Physics.
4.7.18 Definition: homogeneous equation, trivial solution
In case y = θ in equation (4.45), it is called a homogeneous equation. Thus
(A − λI)x = θ   (4.46)
is a homogeneous equation. This equation always has the solution x = θ, called the trivial solution.
4.7.19 Definition: resolvent operator, regular values
In case (A − λI) has an inverse (A − λI)⁻¹, the operator R_λ = (A − λI)⁻¹ is called the resolvent of (4.45). Equation (4.46) will then have the unique solution x = θ. Those λ for which equation (4.45) has a unique solution for every y, and for which the operator R_λ is bounded, are called regular values.
4.7.20 Definition: eigenvector, eigenvalue or characteristic value, spectrum
If the homogeneous equation (4.46), i.e., the equation Ax = λx, has a non-trivial solution x, then that x is called an eigenvector. The values of λ corresponding to non-trivial solutions, the eigenvectors, are called eigenvalues or characteristic values.
The collection of all non-regular values of λ is called the spectrum of the operator A.
4.7.21 Theorem
If in the equation (A − λI)x = y the condition (1/|λ|)‖A‖ ≤ q < 1 holds for λ, then A − λI has an inverse operator; moreover,
R_λ = −(1/λ)(I + A/λ + A²/λ² + ⋯).
If λ is a regular value, then λ + δ for |δ| < ‖(A − λI)⁻¹‖⁻¹ is also a regular value. This implies that the collection of regular values is an open set and hence the spectrum of an operator is a closed set.
Proof: The equation (A − λI)x = y can be written as
(A/λ − I)x = (1/λ)y.
Thus, by theorem 4.7.12 we can say that if (1/|λ|)‖A‖ = q < 1, then I − A/λ has an inverse, and the concerned values of λ are regular values.
If in theorem 4.7.13 we take C = A − (λ + δ)I, then, provided ‖C − (A − λI)‖ = |δ| < ‖(A − λI)⁻¹‖⁻¹, we conclude that C = A − (λ + δ)I has an inverse and (λ + δ) is also a regular value.
4.7.22 Example
Let us consider the Fredholm integral equation of the second kind:
φ(x) − λ ∫_a^b K(x, s)φ(s) ds = f(x)   (4.47)
If we denote by A the operator φ ↦ ∫_a^b K(x, s)φ(s) ds, equation (4.47) can be written as
(I − λA)φ = f   (4.48)
The solution of the equation (4.47) can be expressed in the form of an infinite series
φ(x) = f(x) + Σ_{m=1}^{∞} λ^m ∫_a^b K_m(x, s)f(s) ds (see Mikhlin [37])   (4.49)
where the mth iterated kernel K_m(x, s) is given by K₁(x, s) = K(x, s) and
K_m(x, s) = ∫_a^b K_{m−1}(x, t)K(t, s) dt   (4.50)
By making an appeal to theorem 4.7.21, we can obtain the resolvent operator R_λ as
R_λ f = (I − λA)⁻¹f = [I + λA + λ²A² + ⋯ + λᵖAᵖ + ⋯]f,
where Aᵖf = ∫_a^b K_p(x, s)f(s) ds is given by the pth iterate of the kernel K(x, s). Thus if |λ| < 1/‖A‖, we get
φ(x) = f(x) + λ ∫_a^b R(x, s, λ)f(s) ds.
Here R(x, s, λ), the resolvent of the kernel K(x, s), is defined by
R(x, s, λ) = K(x, s) + λK₂(x, s) + ⋯ + λᵖK_{p+1}(x, s) + ⋯
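The Neumann-series solution (4.49) can be carried out numerically once the integral is discretized by quadrature. The sketch below is illustrative, not from the text; the kernel K(x, s) = xs on [0, 1], λ = 1/2 and f = 1 are invented for the demonstration, for which the exact solution is φ(x) = 1 + 0.3x:

```python
import numpy as np

# Solve phi - lambda * int K(x,s) phi(s) ds = f by the Neumann series
# phi = f + lambda*A f + lambda^2 A^2 f + ..., valid for |lambda| < 1/||A||.
N = 400
s = (np.arange(N) + 0.5) / N           # midpoint quadrature nodes
w = 1.0 / N                             # equal weights
Kmat = np.outer(s, s)                   # K(x_i, s_j) = x_i * s_j

lam = 0.5
f = np.ones(N)

phi = f.copy()
term = f.copy()
for _ in range(100):
    term = lam * (Kmat * w) @ term      # next Neumann term lambda^m A^m f
    phi = phi + term

# Direct solve of the discretized (I - lam*A) phi = f for comparison
phi_direct = np.linalg.solve(np.eye(N) - lam * Kmat * w, f)
```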
Problems
1. Let E_x and E_y be normed linear spaces over ℝ (ℂ) and let A : E_x → E_y be a given linear operator. Show that
(i) A is one-to-one ⟺ A has an inverse ⟺ N(A) = {θ};
(ii) A⁻¹, if it exists, is a linear operator.
2. L(ℝⁿ) denotes the space of linear operators mapping ℝⁿ → ℝⁿ. Suppose that the mapping A : D ⊂ ℝᵐ → L(ℝⁿ) is continuous at a point x₀ ∈ D for which A(x₀) has an inverse. Then show that there are a δ > 0 and an α > 0 such that A(x) has an inverse and
‖A(x)⁻¹‖ ≤ α for all x ∈ D ∩ B(x₀, δ).
[Hint: Use theorem 4.7.13]
3. (Sherman-Morrison-Woodbury Formula) Let A ∈ L(ℝⁿ) have an inverse and let U, V map ℝᵐ → ℝⁿ, m ≤ n. Show that A + UVᵀ has an inverse if and only if I + VᵀA⁻¹U has an inverse, and that
(A + UVᵀ)⁻¹ = A⁻¹ − A⁻¹U(I + VᵀA⁻¹U)⁻¹VᵀA⁻¹
[Hint: Use theorem 4.7.13]
4. If L is a bounded linear operator mapping a Banach space E into E, show that L⁻¹ exists if and only if there is a bounded linear operator K in E such that K⁻¹ exists and
‖I − KL‖ < 1.
If L⁻¹ exists, then show that
L⁻¹ = Σ_{n=0}^{∞} (I − KL)ⁿK and ‖L⁻¹‖ ≤ ‖K‖/(1 − ‖I − KL‖).
5. Use the result of problem 4 to find the solution of the linear differential equation
dU/dt − λU = f, U(t) ∈ C¹([0, 1]), f ∈ C([0, 1]) and |λ| < 1.
6. Let m be the space of bounded number sequences, i.e., for x ∈ m, x = {ξᵢ}, |ξᵢ| ≤ K_x, ‖x‖ = sup_i |ξᵢ|. In m the shift operator E is defined by
Ex = (0, ξ₁, ξ₂, ξ₃, . . .) for x = (ξ₁, ξ₂, ξ₃, . . .)
Find ‖E‖ and discuss the inversion of the difference operator Δ = E − I.
[Hint: Show that Δx = θ ⟹ x = θ]
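The Sherman-Morrison-Woodbury identity of Problem 3 is easy to verify numerically; note that only an m × m matrix has to be inverted on the right-hand side. A sketch (illustrative code with invented data):

```python
import numpy as np

# Verify (A + U V^T)^{-1} = A^{-1} - A^{-1} U (I + V^T A^{-1} U)^{-1} V^T A^{-1}
rng = np.random.default_rng(2)
n, m = 6, 2
A = np.eye(n) + 0.1 * rng.standard_normal((n, n))   # safely invertible
U = rng.standard_normal((n, m))
V = rng.standard_normal((n, m))

Ainv = np.linalg.inv(A)
small = np.linalg.inv(np.eye(m) + V.T @ Ainv @ U)    # only an m x m inverse
smw = Ainv - Ainv @ U @ small @ V.T @ Ainv
direct = np.linalg.inv(A + U @ V.T)
```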
4.8
Banach Space with a Basis
If a space E has a denumerable basis, then it is a separable space. A denumerable everywhere dense set in a space with a basis is the set of linear combinations of the form Σ_i r_i e_i with rational coefficients r_i. Though many separable Banach spaces have bases, it is not the case that every separable Banach space has a basis.
Note 4.8.1.
1. It can be shown that a Banach space E is either finite dimensional or else it has a Hamel basis which is not denumerable, and hence is uncountable.
2. An infinite dimensional separable Banach space has, in fact, a Hamel basis which is in one-to-one correspondence with the set of real numbers.
Note 4.8.1 exposes a severe limitation of the Hamel basis, namely that every element of the Banach space E must be a finite linear combination of the basis elements; this has given rise to the concept of a new basis, known as the Schauder basis.
4.8.1 Definition: Schauder basis
Let E be a normed linear space. A denumerable subset {e₁, e₂, . . .} of E is called a Schauder basis for E if ‖e_n‖ = 1 for each n and if for every x ∈ E there are unique scalars α₁, α₂, . . . in ℝ (ℂ) such that
x = Σ_{i=1}^{∞} α_i e_i.
In case E is finite dimensional and {a₁, a₂, . . . , a_n} is a Hamel basis, then {a₁/‖a₁‖, a₂/‖a₂‖, . . . , a_n/‖a_n‖} is a Schauder basis for E.
If {e₁, e₂, . . .} is a Schauder basis for E, then for n = 1, 2, . . . let us define functionals f_n : E → ℝ (ℂ) by f_n(x) = α_n for
x = Σ_{n=1}^{∞} α_n e_n ∈ E.   (4.51)
The uniqueness condition in the definition of a Schauder basis yields that each f_n is well-defined and linear on E. It is called the nth coefficient functional on E.
4.8.2 Definition: biorthogonal sequences
Putting x = e_j in (4.51), we have
e_j = Σ_{i=1}^{∞} f_i(e_j) e_i;
since the e_i are linearly independent,
f_i(e_j) = 1 if i = j, and 0 if i ≠ j.   (4.52)
The sequence {e_i} and the sequence of functionals {f_i} for which (4.52) is true are called biorthogonal sequences.
4.8.3 Lemma
For every continuous linear functional f defined on the Banach space E, we can find coefficients c_i = f(e_i), where {e_i} is a Schauder basis in E, such that
f = Σ_{i=1}^{∞} f(e_i) f_i = Σ_{i=1}^{∞} c_i f_i,
{f_i} being the sequence of coefficient functionals defined on E and satisfying (4.52).
Proof: For any continuous linear functional f defined on E, it follows from (4.51) that
f(x) = Σ_{i=1}^{∞} f_i(x) f(e_i).
Writing f(e_i) = c_i,
f(x) = Σ_{i=1}^{∞} c_i f_i(x), or f = Σ_{i=1}^{∞} c_i f_i.   (4.53)
The representation (4.53) is unique, and the series in (4.53) converges for every x ∈ E.
Problems
1. Let E = C([0, 1]). Consider in C([0, 1]) the sequence of elements
t, (1 − t), u₀₀(t), u₁₀(t), u₁₁(t), u₂₀(t), u₂₁(t), u₂₂(t), u₂₃(t), . . .   (4.54)
where the u_{kl}(t), k = 0, 1, 2, . . ., 0 ≤ l ≤ 2^k − 1, are defined in the following way: u_{kl}(t) = 0 if t is located outside the interval (l/2^k, (l + 1)/2^k), but inside this interval the graph of u_{kl}(t) is an isosceles triangle with height equal to 1 [see figure 4.2]. Take a function x(t) ∈ C([0, 1]) representable in the form of the series
x(t) = a₀(1 − t) + a₁t + Σ_{k=0}^{∞} Σ_{l=0}^{2^k − 1} a_{kl} u_{kl}(t)
where a₀ = x(0), a₁ = x(1) and the coefficients a_{kl} admit a unique geometric construction, as shown in figure 4.2.
[Figures 4.2 and 4.3 show the graphs of the hat functions u_{kl}(t) and of the polygonal partial sums interpolating the curve x = x(t) at the dyadic points 1/4, 1/2, 3/4.]
The graph of the partial sum
a₀(1 − t) + a₁t + Σ_{k=0}^{s−1} Σ_{l=0}^{2^k − 1} a_{kl} u_{kl}(t)
is an open polygon with 2^s + 1 vertices lying on the curve x = x(t) at the points with equidistant abscissae.
Show that the collection of functions in (4.54) forms a basis in C([0, 1]).
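For readers who want to experiment, the coefficients a_{kl} admit the standard geometric construction a_{kl} = x(midpoint) − (x(left) + x(right))/2 over the interval (l/2^k, (l + 1)/2^k); this formula is supplied here as an assumption, not quoted from the text. With it, the partial sums interpolate x(t) at the dyadic points, as the following sketch shows:

```python
import math

def x(t):
    return math.sin(math.pi * t) + t          # a sample continuous function

def partial_sum(t, s):
    """Polygonal approximation using hat-function levels k = 0, ..., s-1."""
    val = x(0.0) * (1 - t) + x(1.0) * t       # a0*(1 - t) + a1*t part
    for k in range(s):
        for l in range(2 ** k):
            left, right = l / 2 ** k, (l + 1) / 2 ** k
            mid = (left + right) / 2
            # assumed coefficient formula: deviation from the chord
            a_kl = x(mid) - (x(left) + x(right)) / 2
            if left < t < right:
                # hat function u_kl: 0 outside (left, right), 1 at mid
                u = 1 - abs(t - mid) / (mid - left)
                val += a_kl * u
    return val
```

With s levels, the partial sum reproduces x exactly at every multiple of 2^{-s}, which is the interpolation property claimed in the problem.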
2. Let 1 ≤ p < ∞. For t ∈ [0, 1], let x_1(t) = 1,

       x_2(t) = 1 if 0 ≤ t ≤ 1/2,   x_2(t) = −1 if 1/2 < t ≤ 1,

   and, for n = 1, 2, . . . and j = 1, . . . , 2^n,

       x_{2^n + j}(t) = 2^{n/2}    if (2j − 2)/2^{n+1} ≤ t < (2j − 1)/2^{n+1}
                      = −2^{n/2}   if (2j − 1)/2^{n+1} ≤ t < 2j/2^{n+1}
                      = 0          otherwise

   Show that the Haar system {x_1, x_2, x_3, . . .} is a Schauder basis
   for L_p([0, 1]). Note that each x_n is a step function.
   [Figure 4.4: graphs of the step functions x_1(t), x_2(t), x_3(t)
   (n = 1, j = 1) and x_4(t) (n = 1, j = 2).]
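Orthonormality of the Haar functions in L^2, which underlies the basis property, can be spot-checked by quadrature. A minimal sketch (the indexing helper and grid size are illustrative choices; the convention at the jump points differs from the problem statement only on a set of measure zero):

```python
import numpy as np

def haar(i, t):
    # Haar system on [0,1]: x_1 = 1; for i = 2^n + j (1 <= j <= 2^n),
    # +2^{n/2} on the left half of the j-th dyadic interval, -2^{n/2} on the right
    t = np.asarray(t, dtype=float)
    if i == 1:
        return np.ones_like(t)
    n = 0
    while 2**(n + 1) < i:          # find n with 2^n < i <= 2^{n+1}
        n += 1
    j = i - 2**n                   # 1 <= j <= 2^n
    lo, mid, hi = (2*j - 2) / 2**(n + 1), (2*j - 1) / 2**(n + 1), (2*j) / 2**(n + 1)
    return np.where((t >= lo) & (t < mid), 2**(n / 2),
           np.where((t >= mid) & (t < hi), -2**(n / 2), 0.0))

N = 1024
t = (np.arange(N) + 0.5) / N       # midpoint rule: exact for these step functions
G = np.array([[np.mean(haar(i, t) * haar(k, t)) for k in range(1, 9)]
              for i in range(1, 9)])   # Gram matrix of x_1 .. x_8 in L^2
print(np.allclose(G, np.eye(8)))   # True: the system is orthonormal
```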
CHAPTER 5
LINEAR FUNCTIONALS
In this chapter we explore some simple properties of functionals defined on
a normed linear space. We indicate how linear functionals can be extended
from a subspace to the entire normed linear space; this makes the normed
linear space richer in linear functionals. The stage is thus set for an
adequate theory of conjugate spaces, which is an essential part of the
general theory of normed linear spaces. The Hahn-Banach extension theorem
plays a pivotal role in extending linear functionals from a subspace to an
entire normed linear space. The theorem was discovered by H. Hahn (1927)
[23] and rediscovered in its present, more general form (5.2.2) by
S. Banach (1929) [5]. The theorem was further generalized to complex
spaces by H.F. Bohnenblust and A. Sobczyk (1938) [8].

Besides the Hahn-Banach extension theorem, there is another important
theorem bearing the same names, known as the Hahn-Banach separation
theorem. While the Hahn-Banach extension theorem is analytic in nature,
the Hahn-Banach separation theorem is geometric in nature.
5.1 Hahn-Banach Theorem

4.3 introduced linear functionals and 4.4 the space of bounded linear
operators. Next comes the notion of the space of bounded linear
functionals.

5.1.1 Definition: conjugate or dual space

The space of bounded linear functionals mapping a Banach space E_x into
ℝ (or ℂ) is called the conjugate (or dual) space of E_x and is denoted
by E_x*.
In theorem 4.2.13 we have seen how a bounded linear operator A0
defined on a linear subspace X, which is everywhere dense in a complete
normed linear space E_x, with values in a complete normed linear space,
can be extended to the entire space with preservation of norm. The
Hahn-Banach theorem provides such an extension even if X is not
necessarily dense in E_x.
What follows is a series of results serving as a prelude to the main
theorem.

5.1.2 Lemma

Let L be a linear subspace of a normed linear space E_x and f a bounded
linear functional defined on L. If x_0 is a vector not in L, and if
L_1 = (L, x_0) is the set of elements of the form x + t x_0, with x ∈ L
and t any real number, then f can be extended to a functional f_0 defined
on L_1 such that ‖f_0‖ = ‖f‖.
Proof: We assume that E_x is a real normed linear space. L_1 is a linear
subspace, because x_1 + t_1 x_0, x_2 + t_2 x_0 ∈ L_1 implies
(x_1 + x_2) + (t_1 + t_2) x_0 ∈ L_1 and α(x_1 + t_1 x_0) ∈ L_1, etc.
Moreover, each u ∈ L_1 has a unique representation of the form x + t x_0.
For suppose u = x_1 + t_1 x_0 = x_2 + t_2 x_0. If t_1 ≠ t_2, then

    x_0 = (x_1 − x_2)/(t_2 − t_1),

showing that x_0 ∈ L since x_1, x_2 ∈ L, a contradiction. Hence t_1 = t_2,
and then x_1 + t_1 x_0 = x_2 + t_1 x_0 gives x_1 = x_2, i.e., the
representation of u is unique.

Let us take two elements x′ and x″ in L. We have

    f(x″) − f(x′) = f(x″ − x′) ≤ ‖f‖ ‖x″ − x′‖
                  ≤ ‖f‖ (‖x″ + x_0‖ + ‖x′ + x_0‖)

Thus  −f(x′) − ‖f‖ ‖x′ + x_0‖ ≤ −f(x″) + ‖f‖ ‖x″ + x_0‖.

Since x′ and x″ are arbitrary in L, independent of each other,

    sup_{x∈L} {−f(x) − ‖f‖ ‖x + x_0‖} ≤ inf_{x∈L} {−f(x) + ‖f‖ ‖x + x_0‖}

Consequently, there is a real number c satisfying the inequality

    sup_{x∈L} {−f(x) − ‖f‖ ‖x + x_0‖} ≤ c ≤ inf_{x∈L} {−f(x) + ‖f‖ ‖x + x_0‖}   (5.1)

Now any element u ∈ L_1 has the form u = x + t x_0, x ∈ L, t ∈ ℝ. Let us
define a new functional f_0 on L_1 by

    f_0(u) = f(x) + t c                                               (5.2)

where c is some fixed real number satisfying (5.1).
If in particular u ∈ L, then t = 0 and u = x; hence f and f_0 coincide
on L.

Now let u_1 = x_1 + t_1 x_0, u_2 = x_2 + t_2 x_0. Then

    f_0(u_1 + u_2) = f(x_1 + x_2) + (t_1 + t_2)c
                   = f(x_1) + t_1 c + f(x_2) + t_2 c
                   = f_0(u_1) + f_0(u_2)

Thus f_0 is additive. To show that f_0 is bounded and has the same norm
as f, we consider two cases:

(i) For t > 0, it follows from (5.1) and (5.2) that

    f_0(u) = t [f(x/t) + c] ≤ t ‖f‖ ‖x/t + x_0‖ = ‖f‖ ‖x + t x_0‖ = ‖f‖ ‖u‖

Hence  f_0(u) ≤ ‖f‖ ‖u‖                                               (5.3)

(ii) For t < 0, (5.1) yields

    f(x/t) + c ≥ −‖f‖ ‖x/t + x_0‖,

and multiplying by t < 0,

    f_0(u) = t [f(x/t) + c] ≤ |t| ‖f‖ ‖x/t + x_0‖ = ‖f‖ ‖x + t x_0‖ = ‖f‖ ‖u‖

That is, we get back (5.3).

Hence inequality (5.3) remains valid for all u ∈ (L, x_0) = L_1.
Applying (5.3) to −u as well, we get |f_0(u)| ≤ ‖f‖ ‖u‖, so that
‖f_0‖ ≤ ‖f‖. On the other hand, since the functional f_0 is an extension
of f from L to L_1, ‖f_0‖ ≥ ‖f‖. Hence ‖f_0‖ = ‖f‖.

Note that we have determined the norm of the functional f_0 with respect
to the linear subspace on which f_0 is defined. Thus the functional f(x)
is extended to L_1 = (L, x_0) with preservation of norm.
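The admissible range (5.1) for the constant c can be illustrated in a concrete two-dimensional case. The sketch below takes E = ℝ² with the Euclidean norm, L the first coordinate axis and f(s, 0) = s (so ‖f‖ = 1); all of these are illustrative assumptions, not taken from the text:

```python
import numpy as np

# Numerical check of (5.1): L = {(s, 0)}, f(s, 0) = s, x0 = (0, 1) not in L.
s = np.linspace(-50, 50, 100001)
norm_x_plus_x0 = np.sqrt(s**2 + 1)      # ||x + x0|| for x = (s, 0)
lower = np.max(-s - norm_x_plus_x0)     # sup_x {-f(x) - ||f|| ||x + x0||}
upper = np.min(-s + norm_x_plus_x0)     # inf_x {-f(x) + ||f|| ||x + x0||}
print(lower <= upper)                   # True: an admissible c exists

c = (lower + upper) / 2
# Check |f0(u)| <= ||f|| ||u|| for u = x + t*x0, with f0(u) = f(x) + t*c
t = np.linspace(-5, 5, 201)
S, T = np.meshgrid(s[::500], t)
ok = np.all(np.abs(S + T * c) <= np.sqrt(S**2 + T**2) + 1e-12)
print(ok)                               # True: the extension keeps the norm
```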
5.1.3 Theorem (the Hahn-Banach theorem)

Every bounded linear functional f(x) defined on a linear subspace L of a
normed linear space E can be extended to the entire space with
preservation of norm. That is, we can construct a linear functional F(x)
defined on E such that

(i) F(x) = f(x) for x ∈ L,   (ii) ‖F‖_E = ‖f‖_L

Proof: Let us first suppose the space E is separable. Let N be a
countable everywhere dense set in E. Let us select those elements of this
set which
do not fall in L and arrange them in the sequence x0 , x1 , x2 , . . . , xn , . . .
By virtue of lemma 5.1.2 we can extend the functional f(x) successively
to the subspaces (L, x_0) = L_1, (L_1, x_1) = L_2, and so on, and
ultimately construct a functional f_ω defined on the linear subspace L_ω,
which is everywhere dense in E and is equal to the union of all the L_n.
Moreover, ‖f_ω‖ = ‖f‖. Since L_ω is everywhere dense in E, we can apply
theorem 4.2.13 to extend the functional f_ω by continuity to the entire
space E and obtain a functional F defined on E such that

    F(x) = f(x) for x ∈ L   and   ‖F‖_E = ‖f‖_L

In case the space is not separable, we can proceed as follows. Consider
the set Φ of all possible extensions of the functional with preservation
of norm. Such extensions always exist. We introduce a partial ordering on
Φ as follows: we say f′ < f″ if the linear subspace L′ on which f′ is
defined is contained in the linear subspace L″ on which f″ is defined and
f″(x) = f′(x) for x ∈ L′. Evidently < has all the properties of a partial
ordering.

Now let {f_α} be an arbitrary totally ordered subset of the set Φ. This
subset has an upper bound, namely the functional g defined on the linear
subspace L_g = ∪_α L_α, where L_α is the domain of f_α: g(x) = f_α(x) if
x ∈ L_α. Hence g is a linear functional and ‖g‖ = ‖f‖, that is, g ∈ Φ.
Thus all the hypotheses of Zorn's lemma (1.1.4) are satisfied and Φ has a
maximal element F. This functional is defined on the entire space E; if
that were not so, the functional could be further extended, contradicting
the fact that F is a maximal element of Φ.

Hence the proof is complete.
Note 5.1.1 Since the constant c satisfying (5.1) may be chosen
arbitrarily within the stated bounds, there may be more than one maximal
element in Φ; the extension of a linear functional by the Hahn-Banach
theorem is therefore, in general, not unique.

Note that the Hahn-Banach theorem is a potent source of linear
functionals on a Banach space or a normed linear space. The two theorems
which are offshoots of the Hahn-Banach theorem give rise to various
applications.
5.1.4 Theorem

Let E be a normed linear space and x_0 ≠ θ a fixed element in E. Then
there exists a linear functional f(x), defined on the entire space E,
such that

(i) ‖f‖ = 1   and   (ii) f(x_0) = ‖x_0‖

Proof: Consider the set L = {t x_0}, where t runs through all real
numbers. The set L is the subspace of E spanned by x_0.
A functional f_0(x) defined on L may be given the following form: if
x = t x_0, then

    f_0(x) = t ‖x_0‖                                                  (5.4)

Then f_0(x_0) = ‖x_0‖ and |f_0(x)| = |t| ‖x_0‖ = ‖x‖.

Thus |f_0(x)|/‖x‖ = 1 for all x ≠ θ, i.e., sup_{x≠θ} |f_0(x)|/‖x‖ = 1,
i.e., ‖f_0‖ = 1.

Now, if the functional f_0(x) is extended to the entire space with
preservation of norm, we get a functional f(x) having the required
properties.
5.1.5 Theorem

Given a subspace L and an element x_0 ∉ L in a normed linear space E,
let d > 0 be the distance from x_0 to L, i.e.,

    d = inf_{x∈L} ‖x − x_0‖

Then there is a functional f(x) defined everywhere on E such that

(i) f(x) = 0, x ∈ L,   (ii) f(x_0) = 1   and   (iii) ‖f‖ = 1/d

Proof: Consider the set (L, x_0). Each of its elements is uniquely
representable in the form u = x + t x_0, where x ∈ L and t is real. Let
us construct the functional f_0(u) by the following rule: if
u = x + t x_0, define f_0(u) = t. Evidently f_0(x) = 0 if x ∈ L, and
f_0(x_0) = 1.
To determine ‖f_0‖ we have, for t ≠ 0,

    |f_0(u)| = |t| = |t| ‖u‖ / ‖u‖ = |t| ‖u‖ / ‖x + t x_0‖ ≤ ‖u‖ / d,

since

    ‖x + t x_0‖ = |t| ‖(x/t) + x_0‖ ≥ |t| d   (note that −x/t ∈ L).

Thus  ‖f_0‖ ≤ 1/d                                                     (5.5)
Furthermore, there is a sequence {x_n} ⊂ L such that

    lim_{n→∞} ‖x_n − x_0‖ = d

Then we have |f_0(x_n − x_0)| ≤ ‖f_0‖ ‖x_n − x_0‖.

Since f_0(x_n − x_0) = f_0(x_n) − f_0(x_0) = −1 for x_n ∈ L,

    1 ≤ ‖f_0‖ ‖x_n − x_0‖

Hence, by taking the limit, we get

    1 ≤ ‖f_0‖ d,   or   ‖f_0‖ ≥ 1/d                                    (5.6)

Then, extending f_0(x) to the entire space with preservation of norm, we
obtain a functional f(x) with the required property.
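A concrete instance of the theorem: in E = ℝ² with the Euclidean norm, take L to be the x-axis and x_0 = (3, 4), so d = 4; the functional f(u, v) = v/4 then has the three stated properties. A quick numerical check (the circle sampling approximates the supremum defining ‖f‖):

```python
import numpy as np

# E = R^2 (Euclidean norm), L = x-axis, x0 = (3, 4), d = dist(x0, L) = 4.
x0 = np.array([3.0, 4.0])
d = 4.0
f = lambda p: p[1] / 4.0        # f(u, v) = v / 4

print(f(np.array([7.0, 0.0])))  # 0.0: f vanishes on L
print(f(x0))                    # 1.0: f(x0) = 1

# ||f|| = sup |f(p)| / ||p||, sampled on the unit circle
theta = np.linspace(0, 2 * np.pi, 100001)
vals = np.abs(np.sin(theta)) / 4.0
print(np.isclose(np.max(vals), 1 / d))   # True: ||f|| = 1/d
```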
5.1.6 Geometric interpretation

The conclusion of the above theorem admits the following geometric
interpretation: through every point x_0 on the surface of the ball
‖x‖ ≤ r a supporting hyperplane can be drawn.

Note that ‖f‖ = sup_{x≠θ} |f(x)|/‖x‖. Hence, for a functional f(x) on
the ball ‖x‖ ≤ r,

    sup_{‖x‖≤r} |f(x)| ≤ r ‖f‖

Consider the hyperplane f(x) = r‖f‖. For a point x_0 on the surface of
the ball, ‖x_0‖ = r, with f(x_0) = r‖f‖, the hyperplane passes through
x_0, while for the points of the ball

    f(x) ≤ r ‖f‖

Thus f(x) = r‖f‖ is a supporting hyperplane.
5.1.7 Definition: sublinear functional

A real-valued function p on a normed linear space E is said to be
sublinear if it is

(i) subadditive, i.e.,

    p(x + y) ≤ p(x) + p(y),  x, y ∈ E                                  (5.7)

and (ii) positive homogeneous, i.e.,

    p(αx) = α p(x),  α ≥ 0 in ℝ and x ∈ E                              (5.8)

5.1.8 Example

The norm ‖·‖ is a sublinear functional, for

    ‖x + y‖ ≤ ‖x‖ + ‖y‖,  x, y ∈ E
    ‖αx‖ = α ‖x‖,  α ≥ 0 in ℝ and x ∈ E
The Hahn-Banach theorem (5.1.3) can be generalized as in the following
theorem. Let a functional f(x) be defined on a subspace L of a normed
linear space E and be majorized on L by a sublinear functional p(x)
defined on E. Then f can be extended from L to E without losing the
linearity and the majorization, so that the extended functional F on E is
still linear and majorized by p. Here we have taken E to be real.
5.1.9 Theorem (Hahn-Banach theorem using a sublinear functional)
Let E be a normed linear space and p a sublinear functional defined on
E. Furthermore, let f be a linear functional defined on a subspace L of E
and let f satisfy

    f(x) ≤ p(x),  x ∈ L                                                (5.9)

Then f can be extended to a linear functional F satisfying

    F(x) ≤ p(x),  x ∈ E                                                (5.10)

where F(x) is a linear functional on E and F(x) = f(x), x ∈ L.

Proof: The proof comprises the following steps:

(i) The set of all linear extensions g of f satisfying g(x) ≤ p(x) on
D(g) can be partially ordered, and Zorn's lemma yields a maximal
element F.

(ii) F is defined on the entire space E.

To show that D(F) is all of E, the argument is as follows. If D(F) is
not all of E, choose a y_1 ∈ E − D(F) and consider the subspace Z_1 of E
spanned by D(F) and y_1; any x ∈ Z_1 can be uniquely represented as
x = y + αy_1 with y ∈ D(F). A functional g_1 on Z_1, defined by

    g_1(y + αy_1) = F(y) + αc

with a suitable constant c, is linear and a proper extension of F, i.e.,
D(F) is a proper subset of D(g_1). If, in addition, we show that c can be
chosen so that

    g_1(x) ≤ p(x),  x ∈ D(g_1),

then the maximality of F is contradicted.

For details of the proof see Kreyszig [30].
5.1.10 Theorem

Let E be a normed linear space over ℝ and f a bounded linear functional
defined on a subspace L of E. Then theorem 5.1.9 implies theorem 5.1.3.

Proof: Let p(x) = ‖f‖_L ‖x‖, x ∈ E. Then p is a sublinear functional on
E and f(x) ≤ |f(x)| ≤ p(x), x ∈ L.

Hence, by theorem 5.1.9, there exists a real linear functional F on E
such that

    F(x) = f(x),  x ∈ L    and    F(x) ≤ ‖f‖_L ‖x‖,  x ∈ E

Applying the latter inequality to −x as well, |F(x)| ≤ ‖f‖_L ‖x‖ for all
x ∈ E, whence ‖F‖_E ≤ ‖f‖_L.
On the other hand, take x_1 ∈ L, x_1 ≠ θ. Then

    ‖F‖ ≥ |F(x_1)| / ‖x_1‖ = |f(x_1)| / ‖x_1‖,

and taking the supremum over x_1 ∈ L gives ‖F‖ ≥ ‖f‖_L. Hence
‖F‖_E = ‖f‖_L. This completes the proof when E is a real normed linear
space.
5.2 Hahn-Banach Theorem for Complex Vector and Normed Linear Spaces

5.2.1 Lemma

Let E be a normed linear space over ℂ. Regarding E as a linear space
over ℝ, consider a real-linear functional u : E → ℝ. Define

    f(x) = u(x) − i u(ix),  x ∈ E

Then f is a complex-linear functional on E.

Proof: Since u is additive and real-homogeneous, so is f. Now, since u
is linear,

    f(ix) = u(ix) − i u(i·ix) = u(ix) + i u(x)
          = i [u(x) − i u(ix)] = i f(x),  x ∈ E

Hence f is complex-linear.
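The lemma can be exercised in the simplest setting E = ℂ. The real-linear functional u below is an arbitrary illustrative choice; the constructed f turns out to be multiplication by 3 − 2i:

```python
import numpy as np

# Lemma 5.2.1 with E = C: u(z) = 3 Re z + 2 Im z is real-linear,
# and f(z) = u(z) - i u(iz) is then complex-linear.
u = lambda z: 3 * z.real + 2 * z.imag
f = lambda z: u(z) - 1j * u(1j * z)

rng = np.random.default_rng(3)
z = rng.normal(size=100) + 1j * rng.normal(size=100)
lam = rng.normal() + 1j * rng.normal()

print(np.allclose(f(lam * z), lam * f(z)))   # True: complex homogeneity
print(np.isclose(f(1 + 0j), 3 - 2j))         # here f(z) = (3 - 2i) z
```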
5.2.2 Hahn-Banach theorem (generalized)

Let E be a real or complex vector space and p a real-valued functional
on E which is subadditive, i.e., for all x, y ∈ E,

    p(x + y) ≤ p(x) + p(y)                                            (5.11)

and which for every scalar α satisfies

    p(αx) = |α| p(x)                                                  (5.12)

Moreover, let f be a linear functional defined on a subspace L of E and
satisfying

    |f(x)| ≤ p(x),  x ∈ L                                             (5.13)

Then f has a linear extension F from L to E satisfying

    |F(x)| ≤ p(x),  x ∈ E                                             (5.14)

Proof: (a) Real vector space. Let E be real. Then (5.13) yields
f(x) ≤ p(x), x ∈ L. It follows from theorem 5.1.9 that f can be extended
to a linear functional F from L to E such that

    F(x) ≤ p(x),  x ∈ E                                               (5.15)
(5.12) and (5.15) together yield

    −F(x) = F(−x) ≤ p(−x) = |−1| p(x) = p(x),  x ∈ E                   (5.16)

From (5.15) and (5.16) we get (5.14).
(b) Complex vector space. Let E be complex. Then L is a complex vector
space too; hence f is complex-valued and we can write

    f(x) = f_1(x) + i f_2(x),  x ∈ L                                   (5.17)

where f_1(x) and f_2(x) are real-valued. Let us regard, for the time
being, E and L as real vector spaces and denote them by E_r and L_r
respectively; we thus restrict the scalars to real numbers. Since f is
linear on L and f_1 and f_2 are real-valued, f_1 and f_2 are linear
functionals on L_r. Also f_1(x) ≤ |f(x)|, because the real part of a
complex quantity cannot exceed the absolute value of the whole complex
quantity. Hence, by (5.13),

    f_1(x) ≤ p(x),  x ∈ L_r

Hence, by theorem 5.1.9, f_1 can be extended to a linear functional F_1
from L_r to E_r such that

    F_1(x) ≤ p(x),  x ∈ E_r                                            (5.18)

We next consider f_2. For every x ∈ L,

    i[f_1(x) + i f_2(x)] = i f(x) = f(ix) = f_1(ix) + i f_2(ix)

The real parts on both sides must be equal; hence

    f_2(x) = −f_1(ix),  x ∈ L                                          (5.19)

Just as f_1(x) has been extended to F_1(x), x ∈ E, in (5.18), the
functional f_2(x) = −f_1(ix) is, in view of (5.19), extended by
F_2(x) = −F_1(ix), x ∈ E. Thus we can write

    F(x) = F_1(x) − i F_1(ix),  x ∈ E                                  (5.20)
(i) We next prove that F is a linear functional on the complex vector
space E. Additivity is clear from (5.20). For real a, b, a + ib is a
complex scalar, and using the real-linearity of F_1,

    F((a + ib)x) = F_1(ax + ibx) − i F_1(iax − bx)
                 = a F_1(x) + b F_1(ix) − i [a F_1(ix) − b F_1(x)]
                 = (a + ib) F_1(x) − i (a + ib) F_1(ix)
                 = (a + ib) [F_1(x) − i F_1(ix)]
                 = (a + ib) F(x)
(ii) Next to be shown is that |F(x)| ≤ p(x), x ∈ E.

It follows from (5.12) that p(θ) = 0. Taking y = −x in (5.11), we get

    0 = p(θ) ≤ p(x) + p(−x) = 2p(x),  i.e.,  p(x) ≥ 0,  x ∈ E

Thus, if F(x) = 0, then |F(x)| = 0 ≤ p(x). Let F(x) ≠ 0. Then, using the
polar form, we can write

    F(x) = |F(x)| e^{iφ},   so that   |F(x)| = F(x) e^{−iφ} = F(e^{−iφ} x)

Since |F(x)| is real, the last expression is real and equal to its real
part. Hence, by (5.18) and (5.12),

    |F(x)| = F(e^{−iφ} x) = F_1(e^{−iφ} x) ≤ p(e^{−iφ} x) = |e^{−iφ}| p(x) = p(x)

This completes the proof.
We next consider the Hahn-Banach theorem (generalized) in the setting of
a normed linear space E over ℝ (or ℂ).

5.2.3 Hahn-Banach theorem (generalized form in a normed linear space)

Every bounded linear functional f defined on a subspace L of a normed
linear space E over ℝ (or ℂ) can be extended with preservation of norm to
the entire space, i.e., there exists a linear functional F(x) defined on
E such that

(i) F(x) = f(x) for x ∈ L;   (ii) ‖F‖_E = ‖f‖_L

Proof: If L = {θ}, then f = 0 and the extension is F = 0. Let L ≠ {θ}.
We want to use theorem 5.2.2; for that purpose we have to find a
suitable p.

We know that |f(x)| ≤ ‖f‖_L ‖x‖ for all x ∈ L. Let us take

    p(x) = ‖f‖_L ‖x‖,  x ∈ E

Thus p is defined on all of E.
Furthermore,

    p(x + y) = ‖f‖_L ‖x + y‖ ≤ ‖f‖_L (‖x‖ + ‖y‖) = p(x) + p(y),  x, y ∈ E
    p(αx) = ‖f‖_L ‖αx‖ = |α| ‖f‖_L ‖x‖ = |α| p(x),  x ∈ E

Thus conditions (5.11) and (5.12) of theorem 5.2.2 are satisfied. Hence
that theorem can be applied, and we get a linear functional F on E which
is an extension of f and satisfies

    |F(x)| ≤ p(x) = ‖f‖_L ‖x‖,  x ∈ E

Hence  ‖F‖_E = sup_{x≠θ} |F(x)| / ‖x‖ ≤ ‖f‖_L                          (5.21)
On the other hand, F being an extension of f,

    ‖F‖_E ≥ ‖f‖_L                                                     (5.22)

Combining (5.21) and (5.22) we get ‖F‖_E = ‖f‖_L.
5.2.4 Hyperspace and related results

5.2.5 Definition: hyperspace

A proper subspace E_0 of a normed linear space E is called a hyperspace
in E if it is a maximal proper subspace of E. It may be noted that a
proper subspace E_0 of E is maximal if and only if the span of E_0 ∪ {a}
equals E for each a ∉ E_0.
5.2.6 Remark 1

A hyperplane H is a translate of a hyperspace by a vector, i.e., H is of
the form

    H = x + E_0

where E_0 is a hyperspace and x ∈ E.
5.2.7 Theorem

A subspace of a normed linear space E is a hyperspace if and only if it
is the null space of a nonzero linear functional.

Proof: We first show that null spaces of nonzero linear functionals are
hyperspaces.

Let f : E → ℝ (or ℂ) be a nonzero linear functional and x_0 ∈ E be such
that f(x_0) ≠ 0. Then, for every x ∈ E, there exists a u ∈ N(f) such that

    x = u + (f(x)/f(x_0)) x_0,  u ∈ N(f)

Since x_0 ∈ E \ N(f), it follows that

    E = span {x_0 ; N(f)}

Thus the null space of f is a hyperspace.

Conversely, let us suppose that H is a hyperspace of E and x_0 ∈ E \ H
is such that E = span {x_0 ; H}. Then for every x ∈ E there exists a
unique pair (α, u) in ℝ (or ℂ) × H such that x = αx_0 + u.

Let us define f(αx_0 + u) = α, for α ∈ ℝ (or ℂ) and u ∈ H.

Now, for α, β ∈ ℝ (or ℂ) and u_1, u_2 ∈ H,

    f((αx_0 + u_1) + (βx_0 + u_2)) = α + β = f(αx_0 + u_1) + f(βx_0 + u_2),
    f(p(αx_0 + u)) = pα = p f(αx_0 + u),  p ∈ ℝ (or ℂ)
Thus f is a linear functional.

Again, taking α = 1 and u = θ gives f(x_0) = 1, so f is nonzero; and
f(αx_0 + u) = α = 0 if and only if αx_0 + u = u ∈ H, i.e., N(f) = H.
5.2.8 Remark

A subset H ⊆ E is a hyperplane in E if and only if there exist a nonzero
linear functional f and a scalar α such that

    H = {x ∈ E : f(x) = α},

since {x ∈ E : f(x) = α} = x′ + N(f) for some x′ ∈ E with f(x′) = α.
Thus hyperplanes are the sets of this form for some nonzero linear
functional f and some α ∈ ℝ (or ℂ).
5.2.9 Definition

If the scalar field is ℝ, a set X is said to be on the left side of a
hyperplane H = {x ∈ E : f(x) = α} if

    X ⊆ {x ∈ E : f(x) ≤ α},

and strictly on the left side of H if

    X ⊆ {x ∈ E : f(x) < α}.

Similarly, X is said to be on the right side of H if

    X ⊆ {x ∈ E : f(x) ≥ α},

and strictly on the right side of H if

    X ⊆ {x ∈ E : f(x) > α}.
5.2.10 Theorem (Hahn-Banach separation theorem)

Let E be a normed linear space and X_1 and X_2 nonempty disjoint convex
subsets of E with X_1 open. Then there exist a functional f ∈ E* and a
real number α such that

    X_1 ⊆ {x ∈ E : Re f(x) < α},   X_2 ⊆ {x ∈ E : Re f(x) ≥ α}

Before we prove the theorem we introduce a few terminologies and a
lemma.
5.2.11 Definition: absorbing set

Let E be a linear space. A set X ⊆ E is said to be an absorbing set if
for every x ∈ E there exists t > 0 such that t^{−1} x ∈ X.

5.2.12 Definition: Minkowski functional

Let X ⊆ E be a convex, absorbing set. Then μ_X : E → ℝ, given by

    μ_X(x) = inf {t > 0 : t^{−1} x ∈ X},

is said to be the Minkowski functional of X.
5.2.13 Remark

(i) If X is an absorbing set, μ_X(x) < ∞ for every x ∈ E.

(ii) If X is an absorbing set, then θ ∈ X and μ_X(θ) = 0.

(iii) If E is a normed linear space, then every open set containing θ is
an absorbing set.

Proof: We prove (iii). An open set X containing θ contains an open ball
B(θ, ε). Given x ∈ E, choose t > 0 so large that ‖t^{−1} x‖ = ‖x‖/t < ε;
then t^{−1} x ∈ B(θ, ε) ⊆ X. Hence every open set containing θ is an
absorbing set.
5.2.14 Lemma

Let X be a convex, absorbing subset of a linear space E and let μ_X be
the corresponding Minkowski functional. Then μ_X is a sublinear
functional [see 5.1.7] and

    {x ∈ E : μ_X(x) < 1} ⊆ X ⊆ {x ∈ E : μ_X(x) ≤ 1}.                   (5.23)

Proof: For μ_X to be a sublinear functional, it has to satisfy

    μ_X(x + y) ≤ μ_X(x) + μ_X(y),   μ_X(αx) = α μ_X(x)

for all x, y ∈ E and all α ≥ 0.

Let x, y ∈ E, and let p > 0, q > 0 be such that p^{−1} x ∈ X and
q^{−1} y ∈ X. Then, using the convexity of X, we have

    (p + q)^{−1}(x + y) = (p/(p + q)) p^{−1} x + (q/(p + q)) q^{−1} y ∈ X

Hence μ_X(x + y) ≤ p + q. Taking the infimum over all such p and q, it
follows that

    μ_X(x + y) ≤ μ_X(x) + μ_X(y)

Next, to show that μ_X(αx) = α μ_X(x) for all x ∈ E and all α ≥ 0: for
α = 0 this holds since μ_X(θ) = 0, so let α > 0. If p > 0 is such that
p^{−1} x ∈ X, then (αp)^{−1}(αx) = p^{−1} x ∈ X, so μ_X(αx) ≤ αp.
Taking the infimum over all p > 0 with p^{−1} x ∈ X, we have

    μ_X(αx) ≤ α μ_X(x)                                                (5.24)

Applying (5.24) with αx in place of x and α^{−1} in place of α,

    μ_X(x) = μ_X(α^{−1}(αx)) ≤ α^{−1} μ_X(αx),

that is,

    α μ_X(x) ≤ μ_X(αx)                                                (5.25)

It follows from (5.24) and (5.25) that μ_X(αx) = α μ_X(x), α ≥ 0.

To prove the last part of the lemma we proceed as follows. Let x ∈ X.
Then 1 ∈ {t > 0 : t^{−1} x ∈ X}, so μ_X(x) ≤ 1.

Next, let x ∈ E be such that μ_X(x) < 1. Then there exists p_0 with
0 < p_0 < 1 and p_0^{−1} x ∈ X. Since X is convex and θ ∈ X, we have

    x = p_0 (p_0^{−1} x) + (1 − p_0) θ ∈ X

Thus μ_X(x) < 1 implies x ∈ X. Hence (5.23) is proved.
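For a concrete convex absorbing set, here the open ellipse u²/4 + v² < 1 (an illustrative choice), the Minkowski functional works out to √(u²/4 + v²); the properties of lemma 5.2.14 can then be spot-checked on random samples:

```python
import numpy as np

# Minkowski functional of X = {(u, v): u^2/4 + v^2 < 1}: mu_X(u, v) = sqrt(u^2/4 + v^2)
def mu(p):
    return np.sqrt(p[0]**2 / 4 + p[1]**2)

rng = np.random.default_rng(0)
P = rng.normal(size=(2, 1000))
Q = rng.normal(size=(2, 1000))

# subadditivity and positive homogeneity (lemma 5.2.14)
sub = np.all(mu(P + Q) <= mu(P) + mu(Q) + 1e-12)
hom = np.allclose(mu(2.5 * P), 2.5 * mu(P))
print(sub, hom)                     # True True

# {mu < 1} subset of X subset of {mu <= 1}, checked on the samples
in_X = P[0]**2 / 4 + P[1]**2 < 1
print(np.all(mu(P)[in_X] <= 1), np.all(in_X[mu(P) < 1]))   # True True
```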
5.2.15 Proof of Theorem 5.2.10

We prove the theorem for the scalar field ℝ.

Consider X_1 − X_2 = {x_1 − x_2 : x_1 ∈ X_1, x_2 ∈ X_2}. Since X_1 and
X_2 are nonempty and convex, X_1 − X_2 is nonempty and convex. We next
show that X_1 − X_2 is open, given that X_1 is open. Since X_1 is open,
it contains an open ball B(x_1, ε) about each x_1 ∈ X_1, and for
x_2 ∈ X_2,

    B(x_1 − x_2, ε) = B(x_1, ε) − x_2 ⊆ X_1 − x_2

Hence

    X_1 − X_2 = ∪_{x_2 ∈ X_2} (X_1 − x_2)

is open in E. Also θ ∉ X_1 − X_2, since X_1 ∩ X_2 = ∅.

Fix x_1 ∈ X_1, x_2 ∈ X_2 and let X = X_1 − X_2 + u_0, where
u_0 = x_2 − x_1. Then X is an open convex set with θ ∈ X; hence X is an
absorbing set as well. Let μ_X be the Minkowski functional of X.

In order to obtain the required functional, we apply theorem 5.1.9. Let
E_0 = span {u_0}, p = μ_X, and define the linear functional
f_0 : E_0 → ℝ by

    f_0(βu_0) = β,  β ∈ ℝ                                              (5.26)

Since X_1 ∩ X_2 = ∅ and u_0 = x_2 − x_1, we have u_0 ∉ X, so by lemma
5.2.14, μ_X(u_0) ≥ 1. Hence, for β > 0,

    f_0(βu_0) = β ≤ β μ_X(u_0) = μ_X(βu_0),

while for β ≤ 0, f_0(βu_0) = β ≤ 0 ≤ μ_X(βu_0). Thus f_0 ≤ μ_X on E_0.

Therefore, by theorem 5.1.9, f_0 has a linear extension f : E → ℝ such
that

    f(x) ≤ μ_X(x),  x ∈ E                                              (5.27)

Lemma 5.2.14 yields μ_X(x) ≤ 1 for every x ∈ X. Hence it follows from
(5.27) that f(x) ≤ 1 for every x ∈ X, and consequently f(x) ≥ −1 for
every x ∈ (−X). Thus we have

    |f(x)| ≤ 1,  x ∈ X ∩ (−X)

Since X ∩ (−X) is an open set containing θ, f is bounded on a
neighbourhood of θ and is therefore continuous.

We next show that f(x_1) ≤ f(x_2) for every x_1 ∈ X_1 and every
x_2 ∈ X_2. Since x_1 − x_2 + u_0 ∈ X, taking note of (5.26) and lemma
5.2.14, we have

    f(x_1) − f(x_2) + 1 = f(x_1 − x_2 + u_0) ≤ μ_X(x_1 − x_2 + u_0) ≤ 1

Thus f(x_1) ≤ f(x_2) for all x_1 ∈ X_1, x_2 ∈ X_2.

Since f : E → ℝ is linear and X_1, X_2 are convex, f(X_1) and f(X_2) are
intervals in ℝ. Given that X_1 is open, we next show that f(X_1) is open.
Since f is nonzero, there is some a ∈ E with f(a) = 1, a ≠ θ. Let
x ∈ X_1. Since X_1 is open, it contains an open ball B(x, ε), ε > 0. If
|k| < ε/‖a‖, then x − ka ∈ X_1, so that f(x) − k = f(x − ka) ∈ f(X_1).
Hence f(X_1) is open in ℝ.

Now set α = sup f(X_1). Since f(X_1) is an open interval lying to the
left of f(X_2), we get

    f(x_1) < α ≤ f(x_2),  x_1 ∈ X_1, x_2 ∈ X_2,

which is the required separation.

In case the scalar field is ℂ, we can prove the theorem by using lemma
5.2.1.

5.2.16 Remark
Geometrically, the above separation theorem says that the set X_1 lies
strictly on one side of the hyperplane {x ∈ E : Re f(x) = α} and the set
X_2 lies on the other, since

    X_1 ⊆ {x ∈ E : Re f(x) < α}   and   X_2 ⊆ {x ∈ E : Re f(x) ≥ α}
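A concrete separation in ℝ²: for the open disc X_1 about (−2, 0) and the closed right half-plane X_2 (both illustrative choices), the functional f(u, v) = u with α = −1 separates the sets as in the theorem:

```python
import numpy as np

# X1 = open unit disc about (-2, 0); X2 = closed half-plane {u >= 0}.
# f(u, v) = u and alpha = -1 give X1 in {f < alpha}, X2 in {f >= alpha}.
rng = np.random.default_rng(1)
r, phi = rng.uniform(0, 1, 2000), rng.uniform(0, 2 * np.pi, 2000)
X1 = np.stack([-2 + r * np.cos(phi), r * np.sin(phi)])        # samples of X1
X2 = np.stack([rng.uniform(0, 5, 2000), rng.normal(size=2000)])

f = lambda P: P[0]
alpha = -1.0
print(np.all(f(X1) < alpha), np.all(f(X2) >= alpha))   # True True
```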
Problems [5.1 and 5.2]

1. Show that a norm on a vector space E is a sublinear functional on E.

2. Show that a sublinear functional p satisfies (i) p(θ) = 0 and
   (ii) p(−x) ≥ −p(x).

3. Let L be a closed linear subspace of a normed linear space E, and x_0
   a vector not in L. Given that d is the distance from x_0 to L, show
   that there exists a functional f_0 ∈ E* such that f_0(L) = 0,
   f_0(x_0) = 1 and ‖f_0‖ = 1/d.

4. Let L be a linear subspace of a normed linear space E over ℝ (or ℂ),
   and let f : L → ℝ (or ℂ) be a linear functional such that
   |f(x)| ≤ γ‖x‖ for all x ∈ L and some fixed γ > 0. Show that f can be
   extended to a continuous linear functional F : E → ℝ (or ℂ) such that

       |F(x)| ≤ γ‖x‖,  x ∈ E

5. Every bounded linear functional f(x) defined on a linear subspace L
   of a normed linear space E can be extended to the entire space with
   preservation of norm; that is, we can construct a linear functional
   F(x), defined on E, such that

   (i) F(x) = f(x) for x ∈ L,  (ii) ‖F‖_E = ‖f‖_L.

   Prove the above theorem in the case where E is separable, without
   using Zorn's lemma (1.1.4).

6. Let L be a closed subspace of a normed linear space E such that every
   f ∈ E* with f(L) = 0 satisfies f = 0 on E. Prove that L = E.

7. Let E be a normed linear space. Prove that for every subspace L of E
   and every functional f defined on L there is a unique Hahn-Banach
   extension of f to E if and only if E* is strictly convex, that is,
   for f_1 ≠ f_2 in E* with ‖f_1‖ = 1 = ‖f_2‖ we have ‖f_1 + f_2‖ < 2.

   [Hint: if F_1 and F_2 are distinct extensions of f, show that
   (F_1 + F_2)/2 is also a continuous linear extension of f with the
   same norm, so the strict convexity condition is violated.]
5.3 Application to Bounded Linear Functionals on C([a, b])

In this section we shall use theorem 5.1.3 to obtain a general
representation formula for bounded linear functionals on C([a, b]), where
[a, b] is a fixed compact interval. In what follows, we use
representations in terms of the Riemann-Stieltjes integral. As a sort of
recapitulation, we mention a few definitions and properties of
Riemann-Stieltjes integration, which is a generalization of Riemann
integration.
5.3.1 Definitions: partition, total variation, bounded variation

A collection of points P = [t_0, t_1, . . . , t_n] is called a partition
of an interval [a, b] if

    a = t_0 < t_1 < · · · < t_n = b                                    (5.28)

Let w : [a, b] → ℝ (or ℂ) be a function. Then the (total) variation
Var(w) of w over [a, b] is defined to be

    Var(w) = sup ∑_{j=1}^n |w(t_j) − w(t_{j−1})|,                      (5.29)

the supremum being taken over all partitions (5.28) of the interval
[a, b]. If Var(w) < ∞ holds, then w is said to be a function of bounded
variation.

All functions of bounded variation on [a, b] form a normed linear space;
a norm on this space is given by

    ‖w‖ = |w(a)| + Var(w)                                             (5.30)

The normed linear space thus defined is denoted by BV([a, b]), where BV
suggests bounded variation.
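For a continuously differentiable w, the supremum in (5.29) is approached by fine partitions and equals the integral of |w′|. A numerical sketch with w(t) = sin 2πt on [0, 1], whose total variation is 4:

```python
import numpy as np

def variation(w, a, b, n):
    # sum |w(t_j) - w(t_{j-1})| over the uniform partition a = t_0 < ... < t_n = b
    t = np.linspace(a, b, n + 1)
    return np.sum(np.abs(np.diff(w(t))))

w = lambda t: np.sin(2 * np.pi * t)
v = variation(w, 0.0, 1.0, 100000)
print(abs(v - 4.0) < 1e-6)   # True: Var(w) = integral of |w'| = 4
```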
We now obtain the concept of a Riemann-Stieltjes integral as follows.
Let x ∈ C([a, b]) and w ∈ BV([a, b]). Let P_n be any partition of [a, b]
given by (5.28), and denote by η(P_n) the length of a largest interval
[t_{j−1}, t_j], that is,

    η(P_n) = max (t_1 − t_0, t_2 − t_1, . . . , t_n − t_{n−1}).

For every partition P_n of [a, b], we consider the sum

    S(P_n) = ∑_{j=1}^n x(t_j) [w(t_j) − w(t_{j−1})]                    (5.31)

There exists a number I with the property that for every ε > 0 there is
a δ > 0 such that

    η(P_n) < δ   implies   |I − S(P_n)| < ε

I is called the Riemann-Stieltjes integral of x over [a, b] with respect
to w and is denoted by

    ∫_a^b x(t) dw(t)                                                  (5.32)

Thus we obtain (5.32) as the limit of the sums (5.31) for a sequence
{P_n} of partitions of [a, b] satisfying η(P_n) → 0 as n → ∞.

In case w(t) = t, (5.32) reduces to the familiar Riemann integral of x
over [a, b]. Also, if x is continuous on [a, b] and w has a derivative
which is integrable on [a, b], then

    ∫_a^b x(t) dw(t) = ∫_a^b x(t) w′(t) dt                             (5.33)
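The reduction (5.33) can be checked numerically: for x(t) = t² and w(t) = t³ on [0, 1] (an illustrative choice), the Riemann-Stieltjes sums (5.31) approach ∫₀¹ t² · 3t² dt = 3/5:

```python
import numpy as np

def rs_sum(x, w, a, b, n):
    # S(P_n) = sum x(t_j) [w(t_j) - w(t_{j-1})] over a uniform partition
    t = np.linspace(a, b, n + 1)
    return np.sum(x(t[1:]) * np.diff(w(t)))

I = rs_sum(lambda t: t**2, lambda t: t**3, 0.0, 1.0, 200000)
print(abs(I - 0.6) < 1e-4)   # True: the sums converge to 3/5
```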
The integral (5.32) depends linearly on x, i.e., given
x_1, x_2 ∈ C([a, b]) and scalars p, q,

    ∫_a^b [p x_1(t) + q x_2(t)] dw(t) = p ∫_a^b x_1(t) dw(t) + q ∫_a^b x_2(t) dw(t)

The integral also depends linearly on w ∈ BV([a, b]), because for all
w_1, w_2 ∈ BV([a, b]) and scalars r, s,

    ∫_a^b x(t) d(r w_1 + s w_2)(t) = r ∫_a^b x(t) dw_1(t) + s ∫_a^b x(t) dw_2(t)
5.3.2 Lemma

For x(t) ∈ C([a, b]) and w(t) ∈ BV([a, b]),

    |∫_a^b x(t) dw(t)| ≤ max_{t∈[a,b]} |x(t)| Var(w)                   (5.34)

Proof: If P_n is any partition of [a, b],

    |S(P_n)| = |∑_{j=1}^n x(t_j) (w(t_j) − w(t_{j−1}))|
             ≤ ∑_{j=1}^n |x(t_j)| |w(t_j) − w(t_{j−1})|
             ≤ max_{t∈[a,b]} |x(t)| ∑_{j=1}^n |w(t_j) − w(t_{j−1})|
             ≤ max_{t∈[a,b]} |x(t)| Var(w)

Hence, making n → ∞ with η(P_n) → 0, we get

    |∫_a^b x(t) dw(t)| ≤ max_{t∈[a,b]} |x(t)| Var(w)                   (5.35)
The representation theorem for bounded linear functionals on C([a, b]),
due to F. Riesz (1909) [30], is discussed next.

5.3.3 Theorem (Riesz's representation theorem for functionals on C([a, b]))
Every bounded linear functional f on C([a, b]) can be represented by a
Riemann-Stieltjes integral

    f(x) = ∫_a^b x(t) dw(t)                                           (5.36)

where w is of bounded variation on [a, b] and has the total variation

    Var(w) = ‖f‖                                                      (5.37)

Proof: Let M([a, b]) be the space of functions bounded on the closed
interval [a, b], normed by

    ‖x‖ = sup_{t∈[a,b]} |x(t)|

By making an appeal to the Hahn-Banach theorem 5.1.3, we can extend the
functional f from C([a, b]) to a functional F on the normed linear space
M([a, b]); F is a bounded linear functional with

    ‖f‖_{C([a,b])} = ‖F‖_{M([a,b])}

We now define the function w needed in (5.36). For this purpose we
consider the function u_t(τ) given by

    u_t(τ) = 1 for a ≤ τ ≤ t,   u_t(τ) = 0 otherwise                   (5.38)

[see figure 5.1, the graph of the function u_t]. Clearly u_t ∈ M([a, b]);
u_t is called the characteristic function of the interval [a, t]. Using
u_t and the functional F, we define w on [a, b] by

    w(a) = 0,   w(t) = F(u_t),  t ∈ (a, b]

We show that this function w is of bounded variation and Var(w) ≤ ‖f‖.

For a complex quantity ζ we can use the polar form. In fact, setting
θ = arg ζ, we may write ζ = |ζ| e(ζ), where

    e(ζ) = 1 if ζ = 0,   e(ζ) = e^{iθ} if ζ ≠ 0

We see that if ζ ≠ 0, then |ζ| = ζ e^{−iθ}. Hence, for any ζ, zero or
not, we have

    |ζ| = ē(ζ) ζ                                                      (5.39)
where the bar indicates complex conjugation.

In what follows we write

    ε_j = e(w(t_j) − w(t_{j−1}))   and   u_{t_j}(τ) = u_j(τ)

Then, for any partition (5.28), we obtain by (5.39) and (5.34)

    ∑_{j=1}^n |w(t_j) − w(t_{j−1})| = ε̄_1 F(u_1) + ∑_{j=2}^n ε̄_j [F(u_j) − F(u_{j−1})]
                                    = F(ε̄_1 u_1 + ∑_{j=2}^n ε̄_j [u_j − u_{j−1}])
                                    ≤ ‖F‖ ‖ε̄_1 u_1 + ∑_{j=2}^n ε̄_j [u_j − u_{j−1}]‖

Now ‖F‖ = ‖f‖, and the other factor on the right-hand side equals 1:
|ε̄_j| = 1, and from the definition of the u_j we see that for each
t ∈ [a, b] only one of the terms u_1, u_2 − u_1, . . . , u_n − u_{n−1} is
not zero (and its absolute value is 1). On the left we can now take the
supremum over all partitions of [a, b]. Then we have

    Var(w) ≤ ‖f‖                                                      (5.40)

Hence w is of bounded variation on [a, b].
We now prove (5.36) for x ∈ C([a, b]). For every partition P_n of the
form (5.28) we define a function, which we denote simply by z_n(t),
keeping in mind that z_n depends on P_n and not merely on n:

    z_n(t) = x(t_0) u_{t_1}(t) + ∑_{j=2}^n x(t_{j−1}) [u_{t_j}(t) − u_{t_{j−1}}(t)]   (5.41)

z_n(t) is a step function, so z_n ∈ M([a, b]). By the definition of w,

    F(z_n) = x(t_0) F(u_{t_1}) + ∑_{j=2}^n x(t_{j−1}) [F(u_{t_j}) − F(u_{t_{j−1}})]
           = x(t_0) w(t_1) + ∑_{j=2}^n x(t_{j−1}) [w(t_j) − w(t_{j−1})]
           = ∑_{j=1}^n x(t_{j−1}) [w(t_j) − w(t_{j−1})]                 (5.42)
where the last equality follows from w(t_0) = w(a) = 0. We now choose any
sequence {P_n} of partitions of [a, b] such that η(P_n) → 0 (it is to be
kept in mind that the t_j depend on the particular partition P_n). As
n → ∞, the sum on the right-hand side of (5.42) tends to the integral in
(5.36), and (5.36) follows, provided F(z_n) → F(x), which equals f(x)
since x ∈ C([a, b]).

We need to prove that F(z_n) → F(x). Keeping in mind the definition of
u_t(τ) (fig. 5.1), we note that (5.41) yields z_n(a) = x(a) · 1, since
the sum in (5.41) is zero at t = a. Hence z_n(a) − x(a) = 0. Moreover, by
(5.41), if t_{j−1} < t ≤ t_j, then z_n(t) = x(t_{j−1}) · 1, and it
follows that for those t,

    |z_n(t) − x(t)| = |x(t_{j−1}) − x(t)|

Consequently, if η(P_n) → 0, then ‖z_n − x‖ → 0, because x is continuous
on [a, b], hence uniformly continuous on [a, b], since [a, b] is compact
in ℝ. The continuity of F now implies that F(z_n) → F(x), and
F(x) = f(x), so that

    f(x) = ∫_a^b x(t) dw(t)

It follows from (5.34) and (5.36) that

    |f(x)| = |∫_a^b x(t) dw(t)| ≤ max_{t∈[a,b]} |x(t)| Var(w) = ‖x‖ Var(w)

Therefore, for x ∈ C([a, b]),

    ‖f‖ = sup_{x≠θ} |f(x)| / ‖x‖ ≤ Var(w)                              (5.43)
It follows from (5.40) and (5.43) that

    ‖f‖ = Var(w)

Note 5.3.1 We note that the w in the theorem is not unique. If we impose
on w the following conditions,

(i) w is zero at a and continuous from the right:

    w(a) = 0,   w(t + 0) = w(t)  (a < t < b),

then w will be unique [see A.E. Taylor [55]].
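A concrete instance of the representation (5.36): the evaluation functional f(x) = x(1/2) on C([0, 1]) is represented by the unit jump function at 1/2, whose total variation is 1 = ‖f‖. A numerical sketch:

```python
import numpy as np

# f(x) = x(1/2) is represented by w(t) = 0 for t < 1/2, w(t) = 1 for t >= 1/2.
w = lambda t: np.where(t >= 0.5, 1.0, 0.0)
x = lambda t: np.cos(np.pi * t)

t = np.linspace(0.0, 1.0, 2**16 + 1)     # dyadic grid, so 1/2 is a grid point
I = np.sum(x(t[1:]) * np.diff(w(t)))     # Riemann-Stieltjes sum (5.31)
print(np.isclose(I, x(0.5), atol=1e-4))  # True: the sum picks out x(1/2)
```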
5.4 The General Form of Linear Functionals in Certain Function Spaces

5.4.1 Linear functionals on the n-dimensional Euclidean space ℝⁿ

Let f be a linear functional defined on Eⁿ. Now, x ∈ Eⁿ can be written as

x = Σ_{i=1}^n ξ_i e_i, where x = {ξ_1, ξ_2, …, ξ_n}.

f being a linear functional,

f(x) = f(Σ_{i=1}^n ξ_i e_i) = Σ_{i=1}^n ξ_i f(e_i) = Σ_{i=1}^n ξ_i f_i, where f_i = f(e_i).   (5.44)

For x = {ξ_i}, let us suppose

φ(x) = Σ_{i=1}^n ξ_i η_i,

where the η_i are arbitrary. For y = {ζ_i}, we note that

φ(x + y) = Σ_{i=1}^n (ξ_i + ζ_i)η_i = Σ_{i=1}^n ξ_i η_i + Σ_{i=1}^n ζ_i η_i = φ(x) + φ(y),

and

φ(αx) = Σ_{i=1}^n αξ_i η_i = α Σ_{i=1}^n ξ_i η_i = αφ(x) for all scalars α.

Hence φ is a linear functional defined on an n-dimensional space. Since the η_i can be regarded as the components of an n-dimensional vector η, the space (ℝⁿ)*, the dual of ℝⁿ, is also an n-dimensional space, with a metric, generally speaking, different from the metric of ℝⁿ.

Let ‖x‖ = max_i |ξ_i|; then

|φ(x)| = |Σ_{i=1}^n ξ_i η_i| ≤ Σ_{i=1}^n |ξ_i||η_i| ≤ (Σ_{i=1}^n |η_i|) ‖x‖,
so that

‖φ‖ = sup_{x≠θ} |φ(x)|/‖x‖ ≤ Σ_{i=1}^n |η_i|.   (5.45)

On the other hand, if we select the element x_0 = Σ_{i=1}^n (sgn η_i) e_i ∈ ℝⁿ, then ‖x_0‖ = 1 and

φ(x_0) = Σ_{i=1}^n (sgn η_i) η_i = Σ_{i=1}^n |η_i| = (Σ_{i=1}^n |η_i|) ‖x_0‖.

Hence

‖φ‖ ≥ Σ_{i=1}^n |η_i|.   (5.46)

From (5.45) and (5.46) it follows that

‖φ‖ = Σ_{i=1}^n |η_i|.

If a Euclidean metric is introduced, we can verify that the metric in (ℝⁿ)* is also Euclidean.
5.4.2 The general form of linear functionals on s

s is the space of all sequences of numbers; the convergence in s is coordinatewise. Let f(x) be a continuous linear functional defined on s. Put

e_n = {δ_{in}}, where δ_{in} = 0 for i ≠ n and δ_{nn} = 1,

and let f(e_n) = u_n. Therefore, for x = {ξ_i},

x = lim_{n→∞} Σ_{k=1}^n ξ_k e_k = Σ_{k=1}^∞ ξ_k e_k

holds. Because f is continuous,

f(x) = lim_{n→∞} Σ_{k=1}^n ξ_k f(e_k) = Σ_{k=1}^∞ ξ_k u_k.

Since this series must converge for every number sequence {ξ_k}, the u_k must be equal to zero from a certain index m onwards, and consequently

f(x) = Σ_{k=1}^m ξ_k u_k.
Conversely, for any x = {ξ_i}, let φ(x) be given by

φ(x) = Σ_{k=1}^m ξ_k u_k,   (5.47)

where the u_k are real and arbitrary. For y = {ζ_k},

φ(x + y) = Σ_{k=1}^m (ξ_k + ζ_k)u_k = Σ_{k=1}^m ξ_k u_k + Σ_{k=1}^m ζ_k u_k = φ(x) + φ(y).

Moreover, φ(αx) = Σ_{k=1}^m αξ_k u_k = α Σ_{k=1}^m ξ_k u_k = αφ(x).

Hence φ(x) is a linear functional on s. It therefore follows that every linear functional defined on s has the general form given by (5.47), where m and the u_k, k = 1, 2, …, m, are uniquely determined by f.
5.4.3 The general form of linear functionals on l_p

Let f(x) be a bounded linear functional defined on l_p, 1 < p < ∞. Since the elements e_i = {δ_{ij}}, where δ_{ij} = 1 for i = j and δ_{ij} = 0 for i ≠ j, form a basis of l_p, every element x ∈ l_p can be written in the form

x = Σ_{i=1}^∞ ξ_i e_i.

Since f(x) is bounded linear,

f(x) = Σ_{i=1}^∞ ξ_i f(e_i).

Writing u_i = f(e_i), f(x) takes the form

f(x) = Σ_{i=1}^∞ u_i ξ_i.   (5.48)

Let us put x_n = {ξ_i^{(n)}}, where

ξ_i^{(n)} = |u_i|^{q−1} sgn u_i if i ≤ n, and ξ_i^{(n)} = 0 if i > n,

q being chosen such that the equality (1/p) + (1/q) = 1 holds. Then

f(x_n) = Σ_{i=1}^n u_i ξ_i^{(n)} = Σ_{i=1}^n |u_i|^{q−1} u_i sgn u_i = Σ_{i=1}^n |u_i|^q.   (5.49)
On the other hand, since p(q − 1) = q,

f(x_n) ≤ ‖f‖ ‖x_n‖ = ‖f‖ (Σ_{i=1}^n |ξ_i^{(n)}|^p)^{1/p} = ‖f‖ (Σ_{i=1}^n |u_i|^{p(q−1)})^{1/p} = ‖f‖ (Σ_{i=1}^n |u_i|^q)^{1/p}.

Thus

Σ_{i=1}^n |u_i|^q ≤ ‖f‖ (Σ_{i=1}^n |u_i|^q)^{1/p},

and since (1/p) + (1/q) = 1,

(Σ_{i=1}^n |u_i|^q)^{1/q} ≤ ‖f‖.

Since the above is true for every n, it follows that

(Σ_{i=1}^∞ |u_i|^q)^{1/q} ≤ ‖f‖.   (5.50)

Thus {u_i} ∈ l_q.
Conversely, let us take an arbitrary sequence {v_i} ∈ l_q. Then, for x = {ξ_i} ∈ l_p, let us write

φ(x) = Σ_{i=1}^∞ v_i ξ_i.

To show that φ is a linear functional, we proceed as follows. For y = {ζ_i} ∈ l_p,

φ(x + y) = Σ_{i=1}^∞ v_i (ξ_i + ζ_i).   (5.51)

Since x, y ∈ l_p, x + y ∈ l_p, i.e., (Σ_{i=1}^∞ |ξ_i + ζ_i|^p)^{1/p} < ∞. Since {v_i} ∈ l_q, Hölder's inequality gives

|φ(x + y)| ≤ (Σ_{i=1}^∞ |v_i|^q)^{1/q} (Σ_{i=1}^∞ |ξ_i + ζ_i|^p)^{1/p} < ∞,

so the series in (5.51) converges absolutely and splits into φ(x) + φ(y). Hence φ is additive, and homogeneity follows in the same way. Moreover,

|φ(x)| ≤ (Σ_{i=1}^∞ |v_i|^q)^{1/q} (Σ_{i=1}^∞ |ξ_i|^p)^{1/p} = M ‖x‖,

where M = (Σ_{i=1}^∞ |v_i|^q)^{1/q} < ∞.
Thus φ is a bounded linear functional. For calculating the norm of the functional f we proceed as follows. By Hölder's inequality, (5.48) yields

|f(x)| ≤ (Σ_{i=1}^∞ |u_i|^q)^{1/q} ‖x‖.

Consequently,

‖f‖ ≤ (Σ_{i=1}^∞ |u_i|^q)^{1/q}.   (5.52)

It follows from (5.50) and (5.52) that

‖f‖ = (Σ_{i=1}^∞ |u_i|^q)^{1/q}.
5.4.4 Corollary

Every bounded linear functional defined on l_2 can be written in the general form

f(x) = Σ_{i=1}^∞ u_i ξ_i,

where Σ_{i=1}^∞ |u_i|² < ∞ and ‖f‖ = (Σ_{i=1}^∞ |u_i|²)^{1/2}.

5.5 The General Form of Linear Functionals in Hilbert Spaces

Let H be a Hilbert space over ℝ (ℂ) and f(x) a linear bounded functional defined on H. Let N(f) denote the null space of f(x), i.e., the space of zeroes of f(x). Let x_1, x_2 ∈ N(f). Then

f(αx_1 + βx_2) = αf(x_1) + βf(x_2) = 0 for all scalars α, β.

Hence N(f) is a subspace of H. Since f is a bounded linear functional, for a convergent sequence {x_n} ⊆ N(f) with x_n → x,

|f(x_n) − f(x)| ≤ ‖f‖ ‖x_n − x‖ → 0 as n → ∞,

so f(x) = 0. Hence N(f) is a closed subspace. Let x ∈ H and x ∉ N(f). Let x_0 be the projection of x on the subspace H ⊖ N(f), the orthogonal complement of N(f). Let f(x_0) = α; obviously α ≠ 0. Put x_1 = x_0/α. Then

f(x_1) = f(x_0/α) = (1/α) f(x_0) = 1.

If now x ∈ H is arbitrary and f(x) = β, then f(x − βx_1) = f(x) − βf(x_1) = 0. Let us put x − βx_1 = z; then z ∈ N(f) and we have x = βx_1 + z. This equality shows that H is the orthogonal sum of N(f) and the one-dimensional subspace spanned by x_1.

Since z ∈ N(f) and x_1 ∈ H ⊖ N(f), z ⊥ x_1, i.e., ⟨x − βx_1, x_1⟩ = 0, where ⟨·,·⟩ stands for the scalar product. Hence

⟨x, x_1⟩ = β⟨x_1, x_1⟩ = β‖x_1‖².

Since β = f(x), we have

f(x) = ⟨x, x_1/‖x_1‖²⟩.

If x_1/‖x_1‖² = u, then

f(x) = ⟨x, u⟩,   (5.53)

i.e., we get the representation of an arbitrary functional as an inner product of the element x and a fixed element u. The element u is defined uniquely by f, because if f(x) = ⟨x, v⟩, then ⟨x, u − v⟩ = 0 for every x ∈ H, implying u = v.

Further, (5.53) yields

|f(x)| = |⟨x, u⟩| ≤ ‖x‖ ‖u‖,

which implies that

sup_{x≠θ} |f(x)|/‖x‖ ≤ ‖u‖, or ‖f‖ ≤ ‖u‖.   (5.54)

Since, on the other hand, f(u) = ⟨u, u⟩ = ‖u‖², it follows that ‖f‖ cannot be smaller than ‖u‖; hence ‖f‖ = ‖u‖.

Thus, every bounded linear functional f(x) on a Hilbert space H can be represented uniquely in the form f(x) = ⟨x, u⟩, where the element u is uniquely defined by the functional f. Moreover, ‖f‖ = ‖u‖.
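The construction in the proof can be traced step by step in ℝⁿ. A sketch (our own illustration, names ours): given f(x) = ⟨x, u_true⟩, project onto the orthogonal complement of N(f), normalise to get x_1 with f(x_1) = 1, and recover u = x_1/‖x_1‖².

```python
import numpy as np

rng = np.random.default_rng(2)
u_true = rng.normal(size=4)
f = lambda x: float(u_true @ x)

# an orthonormal basis of N(f), via the SVD of the 1 x 4 matrix u_true^T
_, _, Vt = np.linalg.svd(u_true.reshape(1, 4))
null_basis = Vt[1:]                          # rows span N(f)

x = rng.normal(size=4)
x0 = x - null_basis.T @ (null_basis @ x)     # projection onto N(f)-perp
alpha = f(x0)                                # alpha != 0 (almost surely)
x1 = x0 / alpha                              # f(x1) = 1
u = x1 / float(x1 @ x1)                      # u = x1 / ||x1||^2, as in (5.53)

assert np.allclose(u, u_true)                # f(x) = <x, u> with this u
assert np.isclose(np.linalg.norm(u), np.linalg.norm(u_true))  # ||f|| = ||u||
```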
Problem

1. If l_1 is the space of real elements x = {ξ_i} with Σ_{i=1}^∞ |ξ_i| < ∞, show that a bounded linear functional f on l_1 can be represented in the form

f(x) = Σ_{k=1}^∞ c_k ξ_k,

where {c_k} is a bounded sequence of real numbers.
5.6 Conjugate Spaces and Adjoint Operators

In 5.1.1 we have defined the conjugate (or dual) space E* of a Banach space E. We may recall that the conjugate (or dual) space E* is the space of bounded linear functionals mapping the Banach space E into the scalar field. The idea that comes next is to find a characterisation, if possible, of a conjugate or dual space. In this, isomorphism plays a great role. We recall that two spaces E and E′ are said to be isomorphic if, between their elements, there can be established a one-to-one correspondence preserving the algebraic structure, that is, such that

x ↔ x′, y ↔ y′ implies x + y ↔ x′ + y′ and αx ↔ αx′ for every scalar α.   (5.55)
5.6.1 Space ℝⁿ: the dual space of ℝⁿ is ℝⁿ

Let {e_1, e_2, …, e_n} be a basis in ℝⁿ. Then any x ∈ ℝⁿ can be written as

x = Σ_{i=1}^n ξ_i e_i, where x = {ξ_i}.

Let f be a linear functional defined on ℝⁿ. Then

f(x) = Σ_{i=1}^n ξ_i f(e_i) = Σ_{i=1}^n ξ_i a_i, where a_i = f(e_i).

Now, by the Cauchy–Bunyakovsky–Schwarz inequality (sec. 1.4.3),

|f(x)| ≤ (Σ_{i=1}^n |ξ_i|²)^{1/2} (Σ_{i=1}^n |a_i|²)^{1/2} = (Σ_{i=1}^n |a_i|²)^{1/2} ‖x‖,

where ‖x‖ = (Σ_{i=1}^n |ξ_i|²)^{1/2}. Hence

‖f‖ = sup_{x≠θ} |f(x)|/‖x‖ ≤ (Σ_{i=1}^n |a_i|²)^{1/2}.   (5.56)

Taking x = {a_i} we see that

f(x) = Σ_{i=1}^n |a_i|² = (Σ_{i=1}^n |a_i|²)^{1/2} ‖x‖.

Hence the upper bound in (5.56) is attained; that is,

‖f‖ = (Σ_{i=1}^n |a_i|²)^{1/2}.

This shows that the norm of f is the Euclidean norm and ‖f‖ = ‖a‖, where a = {a_i} ∈ ℝⁿ. Hence the mapping of (ℝⁿ)* onto ℝⁿ defined by f → a = {a_i}, where a_i = f(e_i), is norm preserving and, since it is linear and bijective, it is an isomorphism.
5.6.2 Space l_1: the dual space of l_1 is l_∞

Let us take a Schauder basis {e_i} for l_1, where e_i = (δ_{ij}), δ_{ij} being the Kronecker symbol. Thus every x ∈ l_1 has a unique representation of the form

x = Σ_{i=1}^∞ ξ_i e_i.   (5.57)

For any bounded linear functional f defined on l_1, i.e., for every f ∈ l_1*, we have

f(x) = Σ_{i=1}^∞ ξ_i f(e_i) = Σ_{i=1}^∞ ξ_i a_i,   (5.58)

where the a_i = f(e_i) are uniquely defined by f. Also ‖e_i‖ = 1, i = 1, 2, …, and

|a_i| = |f(e_i)| ≤ ‖f‖ ‖e_i‖ = ‖f‖.   (5.59)

Hence sup_i |a_i| ≤ ‖f‖. Therefore {a_i} ∈ l_∞.

Conversely, for every b = {b_i} ∈ l_∞ we can obtain a corresponding bounded linear functional φ on l_1. We define φ on l_1 by

φ(x) = Σ_{i=1}^∞ ξ_i b_i, where x = {ξ_i} ∈ l_1.

If y = {ζ_i} ∈ l_1, then

φ(x + y) = Σ_{i=1}^∞ (ξ_i + ζ_i)b_i = Σ_{i=1}^∞ ξ_i b_i + Σ_{i=1}^∞ ζ_i b_i = φ(x) + φ(y),

showing that φ is additive. For all scalars α,

φ(αx) = Σ_{i=1}^∞ (αξ_i)b_i = α Σ_{i=1}^∞ ξ_i b_i = αφ(x),

i.e., φ is homogeneous. Hence φ is linear. Moreover,

|φ(x)| ≤ Σ_{i=1}^∞ |ξ_i||b_i| ≤ sup_i |b_i| Σ_{i=1}^∞ |ξ_i| = ‖x‖ sup_i |b_i|,

so that |φ(x)|/‖x‖ ≤ sup_i |b_i| < ∞, since b = {b_i} ∈ l_∞. Thus φ is bounded linear and φ ∈ l_1*.

We finally show that the norm of f is the norm on the set l_∞. From (5.58) we have

|f(x)| ≤ Σ_{i=1}^∞ |ξ_i||a_i| ≤ sup_i |a_i| Σ_{i=1}^∞ |ξ_i| = ‖x‖ sup_i |a_i|.

Therefore ‖f‖ = sup_{x≠θ} |f(x)|/‖x‖ ≤ sup_i |a_i|. It follows from (5.59) and the above inequality that

‖f‖ = sup_i |a_i|,

which is the norm on l_∞. Hence we can write ‖f‖ = ‖a‖_∞, where a = {a_i} ∈ l_∞. It shows that the bijective linear mapping of l_1* onto l_∞ defined by f → a = {a_i} is an isomorphism.
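A finite-dimensional sketch (our own illustration) of the duality just proved: on (ℝⁿ, ‖·‖_1) the dual norm of f(x) = Σ a_i ξ_i is sup_i |a_i|, mirroring (l_1)* = l_∞.

```python
import numpy as np

rng = np.random.default_rng(3)
a = rng.normal(size=7)                           # a_i = f(e_i)

# random unit vectors in the 1-norm never beat sup|a_i| ...
xs = rng.normal(size=(10_000, 7))
xs /= np.sum(np.abs(xs), axis=1, keepdims=True)  # ||x||_1 = 1
assert np.max(np.abs(xs @ a)) <= np.max(np.abs(a)) + 1e-12

# ... and a signed basis vector at the largest |a_i| attains it
k = int(np.argmax(np.abs(a)))
e_k = np.zeros(7)
e_k[k] = np.sign(a[k])
assert np.isclose(float(a @ e_k), np.max(np.abs(a)))
```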
5.6.3 Space l_p: theorem

The dual space of l_p is l_q; here 1 < p < ∞ and q is the conjugate of p, that is, (1/p) + (1/q) = 1.

Proof: A Schauder basis for l_p is {e_i}, where e_i = {δ_{ij}}, δ_{ij} being the Kronecker symbol. Thus for every x = {ξ_i} ∈ l_p we can find a unique representation of the form

x = Σ_{i=1}^∞ ξ_i e_i.   (5.60)

We consider any f ∈ l_p*, where l_p* is the conjugate (or dual) space of l_p. Since f is linear and bounded,

f(x) = Σ_{i=1}^∞ ξ_i f(e_i) = Σ_{i=1}^∞ ξ_i a_i,   (5.61)

where a_i = f(e_i). Let q be the conjugate of p, i.e., (1/p) + (1/q) = 1.

Let x_n = {ξ_i^{(n)}} with

ξ_i^{(n)} = |a_i|^q / a_i if i ≤ n and a_i ≠ 0;  ξ_i^{(n)} = 0 if i > n or a_i = 0.   (5.62)

Then

f(x_n) = Σ_{i=1}^∞ ξ_i^{(n)} a_i = Σ_{i=1}^n |a_i|^q.

Using (5.62) and the fact that (q − 1)p = q, it follows from the above:
f(x_n) ≤ ‖f‖ ‖x_n‖ = ‖f‖ (Σ_{i=1}^n |ξ_i^{(n)}|^p)^{1/p} = ‖f‖ (Σ_{i=1}^n |a_i|^{p(q−1)})^{1/p} = ‖f‖ (Σ_{i=1}^n |a_i|^q)^{1/p}.

Hence

f(x_n) = Σ_{i=1}^n |a_i|^q ≤ ‖f‖ (Σ_{i=1}^n |a_i|^q)^{1/p}.

Dividing both sides by the last factor, we get

(Σ_{i=1}^n |a_i|^q)^{1−1/p} = (Σ_{i=1}^n |a_i|^q)^{1/q} ≤ ‖f‖.   (5.63)

Hence, on letting n → ∞, we prove that {a_i} ∈ l_q.
Conversely, for b = {b_i} ∈ l_q we can get a corresponding bounded linear functional φ on l_p. For x = {ξ_i} ∈ l_p, let us define φ as

φ(x) = Σ_{i=1}^∞ ξ_i b_i.   (5.64)

For y = {ζ_i} ∈ l_p we have

φ(x + y) = Σ_{i=1}^∞ (ξ_i + ζ_i)b_i = Σ_{i=1}^∞ ξ_i b_i + Σ_{i=1}^∞ ζ_i b_i = φ(x) + φ(y),

the rearrangement being justified since, by Hölder's inequality,

Σ_{i=1}^∞ |ξ_i + ζ_i||b_i| ≤ [(Σ_{i=1}^∞ |ξ_i|^p)^{1/p} + (Σ_{i=1}^∞ |ζ_i|^p)^{1/p}] (Σ_{i=1}^∞ |b_i|^q)^{1/q} < ∞.

Also, φ(αx) = Σ_{i=1}^∞ (αξ_i)b_i = α Σ_{i=1}^∞ ξ_i b_i = αφ(x) for all scalars α. Hence φ is linear.

To prove that φ is bounded, we note that

|φ(x)| = |Σ_{i=1}^∞ ξ_i b_i| ≤ (Σ_{i=1}^∞ |ξ_i|^p)^{1/p} (Σ_{i=1}^∞ |b_i|^q)^{1/q} < ∞
= (Σ_{i=1}^∞ |b_i|^q)^{1/q} ‖x‖.

Hence φ is bounded, since {b_i} ∈ l_q. Thus φ is a bounded linear functional. Finally, to show that the norm of f is the norm on the space l_q, we proceed as follows. (5.61) yields

|f(x)| = |Σ_{i=1}^∞ ξ_i a_i| ≤ (Σ_{i=1}^∞ |ξ_i|^p)^{1/p} (Σ_{i=1}^∞ |a_i|^q)^{1/q} = ‖x‖ (Σ_{i=1}^∞ |a_i|^q)^{1/q}.

Hence

‖f‖ = sup_{x≠θ} |f(x)|/‖x‖ ≤ (Σ_{i=1}^∞ |a_i|^q)^{1/q}.   (5.65)

It follows from (5.63) and (5.65) that

‖f‖ = (Σ_{i=1}^∞ |a_i|^q)^{1/q} = ‖a‖_q.   (5.66)

Thus ‖f‖ = ‖a‖_q, where a = {a_i} ∈ l_q and a_i = f(e_i). The mapping of l_p* onto l_q defined by f → a is linear and bijective, and from (5.66) we see that it is norm preserving. Therefore it is an isomorphism.
Note 5.6.1.

(i) It can be shown, in the same way, that l_q* is isomorphic to l_p.

(ii) Hence l_2* = l_2, i.e., l_2 is called a self-conjugate space.

(iii) A linear functional on a Hilbert space is represented by an element of the same space. A Hilbert space is, therefore, self-conjugate.
5.6.4 Reflexive space

Let E be a normed linear space. In 5.1.2 we have defined E*, the conjugate space of E, as the space of bounded linear functionals defined on E. In the same manner we can introduce the concept of the conjugate space (E*)* = E** of the Banach space E*, and call the space E** the second conjugate of the normed linear space E. To be more specific, consider a bounded linear functional f defined on E, so that in f(x), f remains fixed and x varies over E. We can also think of a situation where x is kept fixed and f varies in E*. For example, let

f(x) = ∫_0^1 x(t) dg(t).

Then we have two cases: namely, (i) g(t) is fixed and x(t) varies, or (ii) x(t) is fixed and g(t) varies. Now, since f(x) is a scalar, f(x) can be treated as a functional F_x defined on E*, for fixed x and variable f. Hence it is possible to write f(x) = F_x(f). In what follows, we shall show that the mapping x → F_x is an isometric isomorphism of E onto a subspace of E**.
5.6.5 Theorem

Let E ≠ {θ} be a normed linear space over ℝ (ℂ). Given x ∈ E, let

F_x(f) = f(x) for all f ∈ E*.   (5.67)

Then F_x is a bounded linear functional on E*, i.e., F_x ∈ E**. Further, the mapping x → F_x is an isometric isomorphism of E onto the subspace Ê = {F_x : x ∈ E} of E**.

Proof: The mapping F_x satisfies

F_x(αf_1 + βf_2) = (αf_1 + βf_2)(x) = αf_1(x) + βf_2(x) = αF_x(f_1) + βF_x(f_2)   (5.68)

for all f_1, f_2 ∈ E* and α, β ∈ ℝ (ℂ). Hence F_x is linear. Also, F_x is bounded, since

|F_x(f)| = |f(x)| ≤ ‖f‖ ‖x‖ for all f ∈ E*.

Consequently, F_x ∈ E**. We keep x fixed and vary f. If F_x is not unique, let us suppose F¹_x(f) = F²_x(f), or (F¹_x − F²_x)(f) = 0. Since f is arbitrary, F¹_x = F²_x, showing that F_x is unique. Thus to every x ∈ E there corresponds a unique F_x ∈ E** given by (5.67). This defines a function ψ: E → E** given by ψ(x) = F_x.

(i) We show that ψ is linear. For x, y ∈ E and α, β ∈ ℝ (ℂ),

(ψ(αx + βy))(f) = F_{αx+βy}(f) = f(αx + βy) = αf(x) + βf(y) = (αF_x + βF_y)(f) = (αψ(x) + βψ(y))(f), f ∈ E*.

Hence ψ(αx + βy) = αψ(x) + βψ(y).   (5.69)

(ii) We next show that ψ preserves norm. For each x ∈ E we have

‖ψ(x)‖ = ‖F_x‖ = sup{|F_x(f)|/‖f‖ : f ∈ E*, f ≠ θ} = sup{|f(x)|/‖f‖ : f ∈ E*, f ≠ θ} ≤ ‖x‖.

Now, by theorem 5.1.4, for every x ∈ E there exists a functional g ∈ E* such that ‖g‖ = 1 and g(x) = ‖x‖. Therefore

‖ψ(x)‖ = ‖F_x‖ = sup_{f≠θ} |f(x)|/‖f‖ ≥ |g(x)|/‖g‖ = ‖x‖.

Combining the two inequalities, ‖ψ(x)‖ = ‖x‖.

(iii) We next show that ψ is injective. Let x, y ∈ E with x ≠ y. Then ‖x − y‖ ≠ 0, so by (5.69) and (ii), ‖ψ(x) − ψ(y)‖ = ‖ψ(x − y)‖ = ‖x − y‖ ≠ 0, and hence ψ(x) ≠ ψ(y).

We thus conclude that ψ is an isometric isomorphism of E onto the subspace Ê = ψ(E) of E**.
5.6.6 Definition

Let E be a normed linear space over ℝ (ℂ). The isometric isomorphism ψ: E → E** defined by ψ(x) = F_x is called the natural embedding (or the canonical embedding) of E into the second conjugate space E**. The functional F_x ∈ E** is called the functional induced by the vector x. We refer to functionals of this type as induced functionals.

[Fig. 5.2: the natural embedding carries x ∈ E to F_x ∈ E**, with E* as the intermediate space of functionals.]

5.6.7 Definition: reflexive normed linear space

A normed linear space E is said to be reflexive if the natural embedding ψ maps the space E onto its second conjugate space E**, i.e., ψ(E) = E**.
Note 5.6.2.

(i) If E is a reflexive normed linear space, then E is isometrically isomorphic to E** under the natural embedding.

(ii) If E is a reflexive normed linear space, then, since the second conjugate space E** is always complete, the space E must be complete. Hence completeness is a necessary condition for a normed linear space to be reflexive. However, this condition need not be sufficient (Example 4 (sec. 5.6.4)). Thus it is clear that if E is not a Banach space, then the natural embedding cannot map E onto E**, and hence E is not reflexive.
5.6.8 Definitions: algebraic dual, topological dual

Algebraic dual (conjugate): Given E_x, a topological linear space, the space of all linear functionals on E_x is called the algebraic dual (conjugate) of E_x.

Topological dual (conjugate): On the other hand, the space of all continuous linear functionals on E_x is called the topological dual (conjugate) of E_x.
5.6.9 Examples

1. ℝⁿ, the n-dimensional Euclidean space, is reflexive.
In 5.4.1 we have seen that (ℝⁿ)* = ℝⁿ. Then

(ℝⁿ)** = ((ℝⁿ)*)* = (ℝⁿ)* = ℝⁿ.

Hence ℝⁿ is reflexive.

Note 5.6.3. Every finite dimensional normed linear space is reflexive. We know that in a finite dimensional normed linear space E every linear functional on E is bounded, so that the reflexivity of E follows.

2. The space l_p (p > 1).
In 5.6.3 we have seen that l_p* = l_q, (1/p) + (1/q) = 1. Therefore

l_p** = (l_p*)* = (l_q)* = l_p.

Hence l_p is reflexive.

3. The space C([0, 1]) is non-reflexive. For that see 6.2.
5.6.10 Theorem

A normed linear space is isometrically isomorphic to a dense subspace of a Banach space.

Proof: Let E be a normed linear space. If ψ: E → E** is the natural embedding, then E and ψ(E) are isometrically isomorphic spaces. Now ψ(E) is a dense subspace of its closure cl ψ(E), and since cl ψ(E) is a closed subspace of the Banach space E**, cl ψ(E) is itself a Banach space. Hence E is isometrically isomorphic to the dense subspace ψ(E) of the Banach space cl ψ(E).
We next discuss the relationship between separability and reflexivity of a normed linear space.

5.6.11 Theorem

Let E be a normed linear space and E* its dual. Then E* separable ⇒ E separable.

Proof: Since E* is separable, there exists a countable set S = {f_n : f_n ∈ E*, n ∈ ℕ} such that S is dense in E*, i.e., cl S = E*.

For each n ∈ ℕ, choose x_n ∈ E such that

‖x_n‖ = 1 and |f_n(x_n)| ≥ (1/2)‖f_n‖.

Let X be the closed subspace of E generated by the sequence {x_n}, i.e., X = cl span{x_n ∈ E, n ∈ ℕ}; X is separable, the countable set of finite rational linear combinations of the x_n being dense in it.

Suppose X ≠ E. Then there exists a point x_0 ∈ E − X, and theorem 5.1.5 yields that we can find a functional g ∈ E*, g ≠ θ, such that g(x_0) ≠ 0 and g(X) = 0. Thus g(x_n) = 0 for all n, and

(1/2)‖f_n‖ ≤ |f_n(x_n)| = |(f_n − g)(x_n)| ≤ ‖f_n − g‖.

Therefore ‖g‖ ≤ ‖f_n − g‖ + ‖f_n‖ ≤ 3‖f_n − g‖ for all n. But since cl S = E*, inf_n ‖f_n − g‖ = 0, and it follows that g = θ, which contradicts g(x_0) ≠ 0. Hence X = E, and thus E is separable.
5.6.12 Theorem

Let E be a separable normed linear space. If the dual E* is non-separable, then E is non-reflexive.

Proof: If possible, let E be reflexive. Then E** is isometrically isomorphic to E under the natural embedding. E being separable, E** will then be separable. But then, by theorem 5.6.11 applied to the pair (E*, E**), E* is separable, which contradicts our assumption. Hence E is non-reflexive.
5.6.13 Example

The space (l_1, ‖·‖_1) is not reflexive. The space l_1 is separable. Now (l_1)* = l_∞, but l_∞ is not separable. By theorem 5.6.12 we can say that l_1 is non-reflexive.
5.6.14 Adjoint operator

We have, so far, talked about bounded linear operators and studied their properties. We have also discussed bounded linear functionals. Associated with linear operators are adjoint linear operators. Adjoint linear operators find much use in the solution of equations involving operators. Such equations arise in physics, applied mathematics and other areas.

Let A be a bounded linear operator mapping a Banach space E_x into a Banach space E_y, and let us consider the equation Ax = y, x ∈ E_x, y ∈ E_y. If g is a linear functional on E_y, then

g(y) = g(Ax) = a functional of x = f(x) (say).   (5.70)

f(x) is a functional on E_x. We can see that f is linear. Let x_1, x_2 ∈ E_x and y_1, y_2 ∈ E_y be such that y_1 = Ax_1, y_2 = Ax_2. Then g(y_1 + y_2) = g(Ax_1 + Ax_2) = g(A(x_1 + x_2)) = f(x_1 + x_2). Since g is linear,

f(x_1 + x_2) = g(y_1 + y_2) = g(y_1) + g(y_2) = f(x_1) + f(x_2).   (5.71)

Thus f is linear. Hence to each functional g ∈ E_y* there corresponds a functional f ∈ E_x*. This sets up the definition of an adjoint operator. The correspondence so obtained forms a certain operator with domain E_y* and range contained in E_x*.
5.6.15 Definition: adjoint operator

Let A be a bounded linear operator mapping a normed linear space E_x into a normed linear space E_y, and let f ∈ E_x* and g ∈ E_y* be linear functionals related by f(x) = g(Ax). Then the operator adjoint to A is denoted by A* and is given by f = A*g [see figure 5.3].

[Fig. 5.3: A maps E_x into E_y, while A* carries g ∈ E_y* back to f = A*g ∈ E_x*.]
5.6.16 Examples

1. Let A be an operator in (ℝⁿ → ℝⁿ), where ℝⁿ is an n-dimensional space. Then A is defined by a matrix (a_{ij}) of order n, and the equality y = Ax, where x = {ξ_1, ξ_2, …, ξ_n} and y = {η_1, η_2, …, η_n}, means that

η_i = Σ_{j=1}^n a_{ij} ξ_j.

Consider a functional f ∈ (ℝⁿ)*. Since ℝⁿ is self-conjugate, f = (f_1, f_2, …, f_n) and f(x) = Σ_{i=1}^n f_i ξ_i. Hence

f(Ax) = Σ_{i=1}^n f_i η_i = Σ_{i=1}^n f_i Σ_{j=1}^n a_{ij} ξ_j = Σ_{j=1}^n (Σ_{i=1}^n a_{ij} f_i) ξ_j = Σ_{j=1}^n φ_j ξ_j,   (5.72)

where

φ_j = Σ_{i=1}^n a_{ij} f_i.   (5.73)

The vector φ = (φ_1, φ_2, …, φ_n) is an element of ℝⁿ and is obtained from the vector f = (f_1, f_2, …, f_n) of the same space by the linear transformation

φ = A′f,

where A′ is the transpose of the matrix A. Therefore, in the n-dimensional space, the adjoint of the operator given by the matrix A corresponds to the transpose of that matrix.
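This is easy to verify numerically. In the sketch below (our own illustration), the identity (5.72)–(5.73) reads f(Ax) = ⟨A′f, x⟩ with A′ the transpose of A.

```python
import numpy as np

rng = np.random.default_rng(4)
A = rng.normal(size=(5, 5))     # matrix of the operator
f = rng.normal(size=5)          # components of the functional
x = rng.normal(size=5)

# f(Ax) = (A' f)(x), where A' is the transpose of A, as in (5.73)
assert np.isclose(f @ (A @ x), (A.T @ f) @ x)
```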
2. Let us consider in L_2([0, 1]) the integral operator

Tf = g(s) = ∫_0^1 K(s, t) f(t) dt,

where K(s, t) is a continuous kernel. An arbitrary linear functional φ(g), φ ∈ L_2([0, 1])*, will be of the form ⟨g, v⟩, where v ∈ L_2([0, 1]) and ⟨·,·⟩ denotes the scalar product. This is because L_2([0, 1]) is a Hilbert space. Then

φ(g) = ⟨g, v⟩ = ∫_0^1 [∫_0^1 K(s, t) f(t) dt] v(s) ds = ∫_0^1 [∫_0^1 K(s, t) v(s) ds] f(t) dt

(on change of the order of integration by Fubini's theorem 10.5.3)

= ∫_0^1 (T*v)(t) f(t) dt = ⟨T*v, f⟩,

where

(T*v)(t) = ∫_0^1 K(s, t) v(s) ds.

Thus, in the given case, the adjoint operator is also an integral operator, with the kernel K(t, s) obtained by interchanging the arguments of K(s, t). K(t, s) is called the transpose of the kernel K(s, t).
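Discretising the integrals with a quadrature rule turns T into a matrix, and the identity ⟨Tf, v⟩ = ⟨f, T*v⟩ becomes exact matrix algebra. A sketch under these assumptions (the kernel and grid below are our own choices):

```python
import numpy as np

n = 400
s = (np.arange(n) + 0.5) / n                     # midpoint-rule nodes on [0, 1]
w = 1.0 / n                                      # equal quadrature weights
K = lambda s, t: np.exp(-(s - t) ** 2)           # a smooth sample kernel

S, T_ = np.meshgrid(s, s, indexing="ij")
Kmat = K(S, T_)                                  # Kmat[i, j] = K(s_i, t_j)

f = np.sin(2 * np.pi * s)
v = s ** 2

Tf = (Kmat @ f) * w                              # (Tf)(s_i) ~ int K(s_i, t) f(t) dt
Tstar_v = (Kmat.T @ v) * w                       # adjoint uses the transposed kernel

lhs = float(np.sum(Tf * v) * w)                  # <Tf, v>
rhs = float(np.sum(f * Tstar_v) * w)             # <f, T*v>
assert np.isclose(lhs, rhs)
```

Both sides reduce to the same bilinear form vᵀK f (times the squared weight), which is the discrete shadow of Fubini's theorem.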
5.6.17 Theorem

Given A, a bounded linear operator mapping a normed linear space E_x into a normed linear space E_y, its adjoint A* is also a bounded linear operator, and ‖A‖ = ‖A*‖.

Proof: Let f_1 = A*g_1 and f_2 = A*g_2. Hence g_1(y) = g_1(Ax) = f_1(x), x ∈ E_x, f_1 ∈ E_x*, y = Ax ∈ E_y, g_1 ∈ E_y*. Also g_2(y) = g_2(Ax) = f_2(x). Now g_1 and g_2 are linear functionals, and hence f_1 and f_2 are linear functionals. Now,

(g_1 + g_2)(y) = g_1(y) + g_2(y) = f_1(x) + f_2(x) = (f_1 + f_2)(x),

or f_1 + f_2 = A*(g_1 + g_2), i.e., A*g_1 + A*g_2 = A*(g_1 + g_2); homogeneity follows similarly. Thus A* is a linear operator.

Moreover, |(A*g)(x)| = |f(x)| = |g(Ax)| ≤ ‖g‖ ‖A‖ ‖x‖, so that

‖A*g‖ = sup_{x≠θ} |(A*g)(x)|/‖x‖ ≤ ‖g‖ ‖A‖.

Hence

‖A*‖ = sup_{g≠θ} ‖A*g‖/‖g‖ ≤ ‖A‖.   (5.74)

Let x_0 be an arbitrary element of E_x. Then, by theorem 5.1.4, there exists a functional g_0 ∈ E_y* such that ‖g_0‖ = 1 and g_0(Ax_0) = ‖Ax_0‖. Hence, with f_0 = A*g_0,

‖Ax_0‖ = g_0(Ax_0) = f_0(x_0) ≤ ‖f_0‖ ‖x_0‖ = ‖A*g_0‖ ‖x_0‖ ≤ ‖A*‖ ‖g_0‖ ‖x_0‖,

so that

‖A‖ = sup_{x_0≠θ} ‖Ax_0‖/‖x_0‖ ≤ ‖A*‖ [since ‖g_0‖ = 1].   (5.75)

It follows from (5.74) and (5.75) that ‖A‖ = ‖A*‖.
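For matrices on Euclidean space the operator norm is the largest singular value, which is unchanged by transposition, so ‖A‖ = ‖A*‖ can be seen directly (our own illustration):

```python
import numpy as np

rng = np.random.default_rng(5)
A = rng.normal(size=(6, 4))

# the 2-norm (largest singular value) is invariant under transposition
assert np.isclose(np.linalg.norm(A, 2), np.linalg.norm(A.T, 2))
```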
5.6.18 Adjoint operator for an unbounded linear operator

Let A be an unbounded linear operator defined on a subspace L_x dense in E_x, with range in the space E_y. The notion of an adjoint to such an unbounded operator can be introduced. Let g ∈ E_y* and let

g(Ax) = f_0(x), x ∈ L_x.

Let x_1, x_2 ∈ L_x. Then

g(A(x_1 + x_2)) = g(Ax_1 + Ax_2) = g(Ax_1) + g(Ax_2) = f_0(x_1) + f_0(x_2),

since g is a linear functional defined on E_y. On the other hand, g(A(x_1 + x_2)) = f_0(x_1 + x_2), showing that f_0 is additive. Similarly, we can show that f_0 is homogeneous. Thus f_0 is linear. But f_0 is not in general bounded. In case f_0 is bounded, since L_x is everywhere dense in E_x, f_0 can be extended to the entire space E_x.

Since A* is in general not defined on the whole space E_y*, it must be defined on some subspace L_y ⊆ E_y*: to each g ∈ L_y there corresponds the linear functional f_0 ∈ E_x*. This operator A* is also called the adjoint of the unbounded linear operator A. Thus we can write f_0 = A*g, g ∈ L_y.

Let g_1, g_2 ∈ L_y. Then, for fixed x ∈ L_x,

g_1(Ax) = f_{1,0}(x)   (5.76)

and similarly

g_2(Ax) = f_{2,0}(x).   (5.77)

Therefore (g_1 + g_2)(Ax) = (f_{1,0} + f_{2,0})(x). Thus g_1 + g_2 ∈ L_y, showing that L_y is a subspace. It follows from (5.76) and (5.77) that f_{1,0} = A*g_1, f_{2,0} = A*g_2, and hence

A*(g_1 + g_2) = f_{1,0} + f_{2,0} = A*g_1 + A*g_2.

This shows that A* is a linear operator, but generally not bounded.
5.6.19 The matrix form of operators in a space with a basis, and the adjoint

Let E be a Banach space with a basis and A a bounded linear operator mapping E into itself. Let {e_i} be a basis in E, so that x ∈ E can be written as

x = Σ_{i=1}^∞ ξ_i e_i.

Thus, A being bounded,

y = Ax = Σ_{i=1}^∞ ξ_i Ae_i.

Since Ae_i is again an element of E, it can be represented by

Ae_i = Σ_{k=1}^∞ p_{ki} e_k.   (5.78)

Then we can write

y = Ax = lim_{n→∞} Σ_{i=1}^n ξ_i Ae_i = lim_{n→∞} Σ_{i=1}^n ξ_i Σ_{k=1}^∞ p_{ki} e_k.

Thus

y = Σ_{k=1}^∞ η_k e_k,   (5.79)

where

η_k = Σ_{i=1}^∞ p_{ki} ξ_i.   (5.80)

Let {φ_j} be a sequence of functionals biorthogonal to the sequence {e_i}, i.e.,

φ_j(e_k) = 1 if j = k, and φ_j(e_k) = 0 if j ≠ k.   (5.81)

Then (5.79) and (5.80) imply

η_m = φ_m(y) = φ_m(lim_{n→∞} Σ_{i=1}^n ξ_i Σ_{k=1}^∞ p_{ki} e_k) = lim_{n→∞} Σ_{i=1}^n ξ_i Σ_{k=1}^∞ p_{ki} φ_m(e_k) = Σ_{i=1}^∞ p_{mi} ξ_i.   (5.82)

Equation (5.82) shows that the operator A is uniquely defined by the infinite matrix (p_{ki}): the components of the element y = Ax are uniquely determined by the components of the element x. Thus the familiar finite matrix representation extends to an infinite-dimensional matrix.
5.6.20 Adjoint A* of an operator A represented by an infinite matrix

Let A* denote the operator adjoint to A, and let A* map E* into itself. Let f = A*g, i.e., g(y) = g(Ax) = f(x) for every x ∈ E. Furthermore, let

g = Σ_{i=1}^∞ c_i φ_i and f = Σ_{i=1}^∞ d_i φ_i.

Then

g(Ax) = g(A Σ_{i=1}^∞ ξ_i e_i) = g(lim_{n→∞} Σ_{i=1}^n ξ_i Σ_{k=1}^∞ p_{ki} e_k)
      = lim_{n→∞} Σ_{k=1}^∞ (Σ_{i=1}^n p_{ki} ξ_i) g(e_k)
      = lim_{n→∞} Σ_{k=1}^∞ (Σ_{i=1}^n p_{ki} ξ_i) c_k = Σ_{i=1}^∞ ξ_i Σ_{k=1}^∞ p_{ki} c_k.

On the other hand,

g(Ax) = f(x) = Σ_{i=1}^∞ d_i φ_i(x) = Σ_{i=1}^∞ d_i ξ_i.

Consequently,

Σ_{i=1}^∞ d_i ξ_i = Σ_{i=1}^∞ ξ_i Σ_{k=1}^∞ p_{ki} c_k.   (5.83)

Let x = e_m, i.e., ξ_m = 1, ξ_i = 0 for i ≠ m. Thus (5.83) gives

d_m = lim_{n→∞} Σ_{k=1}^n p_{km} c_k = Σ_{k=1}^∞ p_{km} c_k.

Thus d = A*c, where c_m is the m-th component of g, and the matrix of A* is the transpose of the matrix (p_{ki}) of A. Thus, in the case of a matrix with an infinite number of elements, the adjoint operator is represented by the transpose of the corresponding matrix. Such representations of operators and their adjoints hold, for instance, in the space l_2.
Note 5.6.4. Many equations of mathematical physics are converted into
algebraic equations so that numerical methods can be adopted to solve
them.
5.6.21 Representation of sums, products and inverses of adjoints of operators which admit an infinite matrix representation

Given A, an operator which admits an infinite matrix representation in a Banach space with a basis, we have seen that the adjoint operator A* admits a similar matrix representation. By routine manipulation we can show that

(i) (A + B)* = A* + B*.

(ii) (AB)* = B*A*, where A and B are conformable for multiplication.

(iii) (A⁻¹)* = (A*)⁻¹, where A⁻¹ exists.
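With the adjoint realised as the matrix transpose, the three identities can be checked directly (our own illustration):

```python
import numpy as np

rng = np.random.default_rng(6)
A = rng.normal(size=(4, 4))
B = rng.normal(size=(4, 4))

assert np.allclose((A + B).T, A.T + B.T)                    # (A + B)* = A* + B*
assert np.allclose((A @ B).T, B.T @ A.T)                    # (AB)* = B* A*
assert np.allclose(np.linalg.inv(A).T, np.linalg.inv(A.T))  # (A^-1)* = (A*)^-1
```

Note the reversal of factors in (AB)* = B*A*, which the transpose of a product exhibits exactly.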
Problems

1. Prove that the dual space of ℝⁿ is ℝⁿ.

2. Prove that the dual space of (ℝⁿ, ‖·‖_∞) is the space (ℝⁿ, ‖·‖_1).

3. Prove that the dual space of l_2 is l_2.

4. Show that, although the sequence space l_1 is separable, its dual (l_1)* is not separable.

5. Show that if E is a normed linear space, its conjugate space is a Banach space.

6. Show that the space l_p, 1 < p < ∞, is reflexive, but l_1 is not reflexive.

7. If E, a normed linear space, is reflexive, and X ⊆ E is a closed subspace, then show that X is reflexive.

8. Show that a Banach space E is reflexive if and only if E* is reflexive.

9. If E is a Banach space and E* is reflexive, then show that the image of E under the natural embedding is closed and dense in E**.

10. Let E be a compact metric space. Show that C(E) with the sup norm is reflexive if and only if E has only a finite number of points.
CHAPTER 6

SPACE OF BOUNDED LINEAR FUNCTIONALS

In the previous chapter, the notion of functionals and their extensions was introduced. We have also talked about the space of functionals, or conjugate space, and the adjoint operator defined on the conjugate space. In this chapter, the notion of the conjugate of a normed linear space and its adjoints is revisited. The null space and the range space of a bounded linear operator and of its transpose (adjoint) are related. Weaker concepts of convergence in a normed linear space and its dual (conjugate) are considered. The connection of the notion of reflexivity with weak convergence and with the geometry of normed linear spaces is explored.
6.1 Conjugates (Adjoints, Duals) and Transposes

In 5.6 we have seen the conjugate (dual) space E* of a Banach space E as the space of bounded linear functionals mapping the Banach space E into the scalar field. Thus, if f ∈ E*,

‖f‖ = sup_{x≠θ} |f(x)|/‖x‖, x ∈ E.

If f_1, f_2 ∈ E*, then f_1 = f_2 ⟹ f_1(x) = f_2(x) for all x ∈ E. Again, f_1(x) = f_2(x) for all x ∈ E ⟹ (f_1 − f_2)(x) = 0 for all x ⟹ ‖f_1 − f_2‖ = 0 ⟹ f_1 = f_2.

On the other hand, a consequence of the Hahn–Banach extension theorem 5.1.4 shows that x_1 = x_2 in E if and only if f(x_1) = f(x_2) for all f ∈ E*. This shows that

‖x‖ = sup_{f≠θ} |f(x)|/‖f‖, x ∈ E,

in analogy with the definition of ‖f‖ above. This interchangeability between E and E* explains the nomenclature 'conjugate' or 'dual' for E*.
6.1.1 Definition: restriction of a mapping

If F: E_1 → E_2, E_1 and E_2 being normed linear spaces, and if E_0 ⊆ E_1, then F|_{E_0}, defined for all x ∈ E_0, is called the restriction of F to E_0.

6.1.2 Theorem

Let E be a normed linear space.

(a) Let E_0 be a dense subspace of E. For f ∈ E*, let F(f) denote the restriction of f to E_0. Then the map F is a linear isometry from E* onto E_0*.

(b) If E* is separable, then so is E.

Proof: (a) Let f ∈ E*. Now F(f), being defined on E_0 ⊆ E, belongs to E_0*; since E_0 is dense in E, ‖F(f)‖ = ‖f‖, and the map F is clearly linear. By theorem 5.1.3, a functional defined on E_0 can be extended to the entire space with preservation of norm, so every element of E_0* arises as a restriction. Hence F is onto (surjective).

(b) [See theorem 5.6.11.]
6.1.3 Theorem

Let 1 ≤ p ≤ ∞ and (1/p) + (1/q) = 1. For a fixed y = {y_i} ∈ l_q, let

f_y(x) = Σ_{i=1}^∞ ξ_i y_i, where x = {ξ_i} ∈ l_p.

Then f_y ∈ (l_p)* and ‖f_y‖ = ‖y‖_q. The map F: l_q → (l_p)* defined by

F(y) = f_y for all y ∈ l_q

is a linear isometry from l_q into (l_p)*. If 1 ≤ p < ∞, then F is surjective (onto); in fact, if f ∈ (l_p)* and y = (f(e_1), f(e_2), …), then y ∈ l_q and f = F(y).

Proof: Let y ∈ l_q. For x ∈ l_p, we have

Σ_{i=1}^∞ |ξ_i y_i| ≤ ‖x‖_p ‖y‖_q.

For p = 1 or ∞ the above is immediate, and for 1 < p < ∞ it follows by letting n → ∞ in Hölder's inequality (sec. 1.4.3). Hence f_y is well defined, linear, and ‖f_y‖ ≤ ‖y‖_q. Next, to prove ‖y‖_q ≤ ‖f_y‖: if y = θ, there is nothing to prove. Assume, therefore, that y ≠ θ; the inequality can then be proved by following the arguments in 5.6.3. If we let F(y) = f_y, y ∈ l_q, then F is a linear isometry from l_q into (l_p)*.

Let 1 ≤ p < ∞. To show that F is surjective, consider f ∈ (l_p)* and let y = (f(e_1), f(e_2), …). If p = 1, we see from the expression for y that y ∈ l_∞, as in 5.6.2. Let 1 < p < ∞ and, for n = 1, 2, …, define yⁿ = (y_1, y_2, …, y_n, 0, 0, …). Thus yⁿ ∈ l_q and ‖yⁿ‖_q ≤ ‖f_{yⁿ}‖. Now,

‖f_{yⁿ}‖ = sup{|Σ_{i=1}^∞ x_i y_i^n| : x ∈ l_p, ‖x‖_p ≤ 1}.

Let us consider x ∈ l_p with ‖x‖_p ≤ 1 and define xⁿ = (x_1, x_2, …, x_n, 0, 0, …). Then xⁿ belongs to l_p, ‖xⁿ‖_p ≤ ‖x‖_p ≤ 1, and

f(xⁿ) = Σ_{i=1}^n x_i f(e_i) = Σ_{i=1}^n x_i y_i = f_{yⁿ}(x).

Thus ‖f_{yⁿ}‖ ≤ ‖f‖ = sup{|f(x)| : x ∈ l_p, ‖x‖_p ≤ 1}, so that

‖y‖_q = lim_{n→∞} ‖yⁿ‖_q ≤ lim sup_{n→∞} ‖f_{yⁿ}‖ ≤ ‖f‖ < ∞, that is, y ∈ l_q.

Now let x ∈ l_p. Since p < ∞, we see that x = lim_{n→∞} Σ_{i=1}^n x_i e_i. Hence, by the continuity and the linearity of f,

f(x) = lim_{n→∞} f(Σ_{i=1}^n x_i e_i) = Σ_{i=1}^∞ x_i f(e_i) = Σ_{i=1}^∞ x_i y_i = f_y(x).

Thus f = f_y, that is, F(y) = f, showing that F is surjective.

In what follows we take c_0 as the space of scalar sequences converging to zero, and c_00 as the space of scalar sequences having only finitely many nonzero terms.
6.1.4 Corollary

Let 1 ≤ p ≤ ∞ and (1/p) + (1/q) = 1.

(i) The dual of ℝⁿ (ℂⁿ) with the norm ‖·‖_p is linearly isometric to ℝⁿ (ℂⁿ) with the norm ‖·‖_q.

(ii) The dual of c_00 with the norm ‖·‖_p is linearly isometric to l_q.

(iii) The dual of c_0 with the norm ‖·‖_∞ is linearly isometric to l_1.

Proof: (i) If we replace the summation Σ_{i=1}^∞ ξ_i y_i with the summation Σ_{i=1}^n ξ_i y_i in theorem 6.1.3 and follow its argument, we get the result.

(ii) If 1 ≤ p < ∞, then c_00 is a dense subspace of l_p, so that the dual of c_00 is linearly isometric to l_q by theorems 6.1.2(a) and 6.1.3.

Let p = ∞, so that q = 1. Consider y ∈ l_1 and define

f_y(x) = Σ_{j=1}^∞ x_j y_j, x ∈ c_00.

Following 6.1.3, we can show that f_y ∈ (c_00)* and ‖f_y‖ ≤ ‖y‖_1. Next we show that ‖f_y‖ = Σ_{j=1}^∞ |y_j| = ‖y‖_1, so that the map F: l_1 → (c_00)* given by F(y) = f_y is a linear isometry from l_1 into (c_00)*.

To prove that F is surjective, we consider f in (c_00)* and let y = (f(e_1), f(e_2), …). Next we define, for n = 1, 2, …,

x_j^n = sgn y_j if 1 ≤ j ≤ n, and x_j^n = 0 if j > n,

so that

‖f‖ ≥ f(xⁿ) = Σ_{j=1}^n x_j^n y_j = Σ_{j=1}^n |y_j|, n = 1, 2, …,

so that y ∈ l_1. If x ∈ c_00, then x = Σ_{i=1}^n x_i e_i for some n, and hence

f(x) = Σ_{j=1}^n x_j f(e_j) = Σ_{i=1}^n x_i y_i = f_y(x).

Thus f = f_y, that is, F(y) = f, showing that F is surjective.

(iii) Since c_00 is dense in c_0, we use theorem 6.1.2(a) together with part (ii) above.
Note 6.1.1. Having considered the dual of a normed linear space E, we now turn to a similar concept for a bounded linear operator on a normed linear space E_x.

Let E_x and E_y be two normed linear spaces and A ∈ (E_x → E_y). Define a map A*: E_y* → E_x* as follows. For φ ∈ E_y* and x ∈ E_x, let φ(Ax) = f(x), where f ∈ E_x*. Then we can write

f = A*φ.

A* is called the adjoint or transpose of A. A* is linear and bounded [see 5.6.14 to 5.6.17].
Space of Bounded Linear Functionals
6.1.5
225
Theorem
Let Ex ,Ey and Ez be normed linear spaces.
(i) Let A, B (Ex Ey ) and k
and (kA) = kA .
4(+).Then (A + B) = A + B ,
(ii)Let A (Ex Ey ) and C (Ey Ez ). Then (CA) = A C .
(iii) Let A (Ex Ey ). Then A  = A = A .
Proof:
(i) For the proof of $(A + B)' = A' + B'$ and $(kA)' = kA'$, see 5.7.13.

(ii) Since $A'$ maps $E_y'$ into $E_x'$, for $\phi \in E_y'$ we can find $f \in E_x'$ such that $f(x) = \phi(Ax)$. Next, since $C'$ maps $E_z'$ into $E_y'$, for $\psi \in E_z'$ we can find $\phi \in E_y'$ such that $\phi(y) = \psi(Cy)$.

Thus $\psi(CAx) = \phi(Ax) = f(x)$, so $f = (CA)'\psi$.

Now $f = A'\phi = A'(C'\psi)$.

Hence $(CA)' = A'C'$.
(iii) For the proof that $\|A'\| = \|A\|$, see theorem 5.6.17.

Now $A'' = (A')'$. Hence the above result yields $\|A''\| = \|A'\| = \|A\|$.

We have $f = A'\phi$, i.e., $A' : E_y' \to E_x'$, since $\phi \in E_y'$ and $f \in E_x'$. Since $A''$ is the adjoint of $A'$, $A''$ maps $E_x''$ into $E_y''$.

If we write $f(x) = F_x(f)$, then for fixed $x$, $F_x$ can be treated as a functional defined on $E_x'$. Therefore $F_x \in E_x''$.

Thus, for $F_x \in E_x''$ and $\phi \in E_y'$, we have $A''(F_x)(\phi) = F_x(A'(\phi))$.

In particular, let $x \in E_x$ and $F_x = G_{E_x}(x)$, where $G_{E_x}$ is the canonical embedding of $E_x$ into $E_x''$. Thus, for every $\phi \in E_y'$, we obtain
$$A''(G_{E_x}(x))(\phi) = G_{E_x}(x)(A'(\phi)) = A'(\phi)(x) = \phi(Ax) = G_{E_y}(Ax)(\phi).$$
Hence $A'' \circ G_{E_x} = G_{E_y} \circ A$.
Schematically,

[Fig. 6.1: the commutative square with $G_{E_x} : E_x \to E_x''$, $A : E_x \to E_y$, $G_{E_y} : E_y \to E_y''$, and $A'' : E_x'' \to E_y''$.]
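In finite dimensions the identity $A'' \circ G_{E_x} = G_{E_y} \circ A$ can be checked concretely: functionals on $\mathbb{R}^n$ are row vectors, the adjoint (transpose) of a matrix is $A^T$, and both canonical embeddings are the identity. A small NumPy sketch (illustrative, not from the text):

```python
import numpy as np

rng = np.random.default_rng(1)
A = rng.standard_normal((3, 2))    # A : E_x = R^2 -> E_y = R^3
x = rng.standard_normal(2)         # x in E_x
phi = rng.standard_normal(3)       # phi in E_y' (a row functional)

# defining relation of the adjoint: (A' phi)(x) = phi(Ax), with A' = A^T
lhs = float((A.T @ phi) @ x)
rhs = float(phi @ (A @ x))
print(abs(lhs - rhs))              # zero up to rounding

# A'' = (A^T)^T = A, so A'' G_{E_x} = G_{E_y} A with both embeddings the identity
print(np.allclose(A.T.T, A))
```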
6.1.6  Example
Let $E_x = c_{00} = E_y$, with the norm $\|\cdot\|_\infty$. Then, by 6.1.4, $E_x'$ is linearly isometric to $l_1$ and, by 6.1.3, $E_x''$ is linearly isometric to $l_\infty$. The completion of $c_{00}$ (that is, the closure of $c_{00}$ in $(c_{00})''$) is linearly isometric to $c_0$. Let $A \in \mathcal{B}(c_{00}, c_{00})$. Then $A''$ can be thought of as a norm-preserving linear extension of $A$ to $l_\infty$.
We next explore the relation between the null spaces and the range spaces of $A$ and $A'$ respectively.
6.1.7  Theorem

Let $E_x$ and $E_y$ be normed linear spaces and $A \in \mathcal{B}(E_x, E_y)$. Then

(i) $N(A) = \{x \in E_x : f(x) = 0 \text{ for all } f \in R(A')\}$.

(ii) $N(A') = \{\phi \in E_y' : \phi(y) = 0 \text{ for all } y \in R(A)\}$. In particular, $A'$ is one-to-one if and only if $R(A)$ is dense in $E_y$.

(iii) $R(A) \subseteq \{y \in E_y : \phi(y) = 0 \text{ for all } \phi \in N(A')\}$, where equality holds if and only if $R(A)$ is closed in $E_y$.

(iv) $R(A') \subseteq \{f \in E_x' : f(x) = 0 \text{ for all } x \in N(A)\}$, where equality holds if $E_x$ and $E_y$ are Banach spaces and $R(A)$ is closed in $E_y$.

In the above, $N(A)$ denotes the null space of $A$ and $R(A)$ the range space of $A$; $N(A')$ and $R(A')$ have similar meanings.
Proof: (i) Let $x \in E_x$. For $f \in E_x'$ and $\phi \in E_y'$ with $f = A'\phi$, we have $f(x) = A'\phi(x) = \phi(Ax)$. Since $Ax = 0$ if and only if $\phi(Ax) = 0$ for every $\phi \in E_y'$, it follows that $Ax = 0$ if and only if $f(x) = 0$ for every $f \in R(A')$.

(ii) Let $\phi \in E_y'$. Then $A'\phi = 0$ if and only if $\phi(Ax) = A'\phi(x) = 0$ for every $x \in E_x$.

Now, $A'$ is one-to-one, that is, $N(A') = \{0\}$, if and only if $\phi = 0$ whenever $\phi(y) = 0$ for every $y \in R(A)$. Hence, by theorem 5.1.5, this happens if and only if the closure of $R(A)$ equals $E_y$, i.e., $R(A)$ is dense in $E_y$.

(iii) Let $y \in R(A)$, say $y = Ax$ for some $x \in E_x$. If $\phi \in N(A')$ then
$$\phi(y) = \phi(Ax) = A'\phi(x) = 0.$$
Hence $R(A) \subseteq \{y \in E_y : \phi(y) = 0 \text{ for all } \phi \in N(A')\}$.

If equality holds in this inclusion, then $R(A)$ is closed in $E_y$, since then $R(A) = \bigcap\{N(\phi) : \phi \in N(A')\}$, and each $N(\phi)$ is a closed subspace of $E_y$.

Conversely, let us assume that $R(A)$ is closed in $E_y$, and let $y_0 \notin R(A)$. Then by 5.1.5 there is some $\phi \in E_y'$ such that $\phi(y_0) \ne 0$ but $\phi(y) = 0$ for every $y \in R(A)$. In particular, $A'\phi(x) = \phi(Ax) = 0$ for all $x \in E_x$, i.e., $\phi \in N(A')$. This shows that $y_0 \notin \{y \in E_y : \phi(y) = 0 \text{ for all } \phi \in N(A')\}$. Thus equality holds in the inclusion mentioned above.

(iv) Let $f \in R(A')$, say $f = A'\phi$ for some $\phi \in E_y'$. If $x \in N(A)$, then $f(x) = A'\phi(x) = \phi(Ax) = \phi(0) = 0$. Hence $R(A') \subseteq \{f \in E_x' : f(x) = 0 \text{ for all } x \in N(A)\}$.
Let us now assume that $R(A)$ is closed in $E_y$ and that $E_x$ and $E_y$ are Banach spaces; we want to show that the above inclusion becomes an equality. Let $f \in E_x'$ be such that $f(x) = 0$ whenever $Ax = 0$. We need to find $\phi \in E_y'$ such that $A'\phi = f$, that is, $\phi(Ax) = f(x)$ for every $x \in E_x$. Let us define $g : R(A) \to \mathbb{R}$ ($\mathbb{C}$) by $g(y) = f(x)$ if $y = Ax$.

Since $f(x) = 0$ for all $x \in N(A)$, $g$ is well defined: if $y = Ax_1 = Ax_2$ then $x_1 - x_2 \in N(A)$, so $f(x_1) = f(x_2)$. Moreover, if $y_1 = Ax_1$ and $y_2 = Ax_2$, then since $f$ is linear,
$$g(y_1 + y_2) = f(x_1 + x_2) = f(x_1) + f(x_2) = g(y_1) + g(y_2),$$
$$g(\alpha y) = f(\alpha x) = \alpha f(x) = \alpha g(y), \qquad \alpha \in \mathbb{R}\ (\mathbb{C}).$$
Thus $g$ is well defined and linear.

Also, the map $A : E_x \to R(A)$ is linear, bounded and surjective, where $E_x$ is a Banach space and so is the closed subspace $R(A)$ of the Banach space $E_y$. Hence, by the open mapping theorem [see 7.3], there is some $r > 0$ such that for every $y \in R(A)$ there is some $x \in E_x$ with $Ax = y$ and $\|x\| \le r\|y\|$, so that
$$|g(y)| = |f(x)| \le \|f\|\,\|x\| \le r\|f\|\,\|y\|.$$
This shows that $g$ is a continuous linear functional on $R(A)$. By the Hahn–Banach extension theorem 5.2.3, there is some $\phi \in E_y'$ such that $\phi|_{R(A)} = g$. Then $A'\phi(x) = \phi(Ax) = g(Ax) = f(x)$ for every $x \in E_x$, as desired.
Problems

1. Prove that the dual space of $(\mathbb{R}^n, \|\cdot\|_1)$ is isometrically isomorphic to $(\mathbb{R}^n, \|\cdot\|_\infty)$.

2. Show that the dual space of $(c_0, \|\cdot\|_\infty)$ is $(l_1, \|\cdot\|_1)$.

3. Let $\|\cdot\|_1$ and $\|\cdot\|_2$ be two norms on the normed linear space $E$ with $\|x\|_1 \le K\|x\|_2$, $x \in E$ and $K > 0$. Prove that $(E, \|\cdot\|_1)' \subseteq (E, \|\cdot\|_2)'$.

4. Let $E_x$ and $E_y$ be normed spaces. For $F \in \mathcal{B}(E_x, E_y)$, show that
$$\|F\| = \sup\{|\phi(F(x))| : x \in E_x,\ \|x\| \le 1,\ \phi \in E_y',\ \|\phi\| \le 1\}.$$

5. If $S$ is a linear subspace of a Banach space $E$, define the annihilator $S^0$ of $S$ to be the subset $S^0 = \{f \in E' : f(s) = 0 \text{ for all } s \in S\}$. If $T$ is a subspace of $E'$, define $T^0 = \{x \in E : f(x) = 0 \text{ for all } f \in T\}$. Show that

(a) $S^0$ is a closed linear subspace of $E'$;

(b) $S^{00} = \overline{S}$, where $\overline{S}$ is the closure of $S$;

(c) if $S$ is a closed subspace of $E$, then $S'$ is isomorphic to $E'/S^0$.

6. $c$ denotes the vector subspace of $l_\infty$ consisting of all convergent sequences. Define the limit functional $\lambda : c \to \mathbb{R}$ by $\lambda(x) = \lambda(x_1, x_2, \ldots) = \lim_{n\to\infty} x_n$, and $\mu : l_\infty \to \mathbb{R}$ by $\mu(x_1, x_2, \ldots) = \limsup_{n\to\infty} x_n$.

(i) Show that $\lambda$ is a continuous linear functional where $c$ is equipped with the sup norm.

(ii) Show that $\mu$ is sublinear and $\lambda(x) = \mu(x)$ holds for all $x \in c$.
6.2  Conjugates (Duals) of $L_p([a, b])$ and $C([a, b])$

The problem of finding the conjugate (dual) of $L_p([a, b])$ is deferred until Chapter 10.
6.2.1  Conjugate (dual) of C([a, b])

Riesz's representation theorem for functionals on $C([a, b])$ has already been discussed. We have seen that every bounded linear functional $f$ on $C([a, b])$ can be represented by a Riemann–Stieltjes integral
$$f(x) = \int_a^b x(t)\,dw(t) \tag{6.1}$$
where $w$ is a function of bounded variation on $[a, b]$ with total variation
$$\operatorname{Var}(w) = \|f\|. \tag{6.2}$$
Note 6.2.1. Let $BV([a, b])$ denote the linear space of $\mathbb{R}$- ($\mathbb{C}$-)valued functions of bounded variation on $[a, b]$. For $w \in BV([a, b])$ consider
$$\|w\| = |w(a)| + \operatorname{Var}(w).$$
Then $\|\cdot\|$ is a norm on $BV([a, b])$. For a fixed $w \in BV([a, b])$, let us define $f_w : C([a, b]) \to \mathbb{R}$ ($\mathbb{C}$) by
$$f_w(x) = \int_a^b x\,dw, \qquad x \in C([a, b]).$$
Then $f_w \in C'([a, b])$ and $\|f_w\| \le \|w\|$. However, $\|f_w\|$ may not be equal to $\|w\|$. For example, if $z = w + 1$, then $f_z = f_w$, but $\|z\| \ne \|w\|$ in general, so that $\|f_w\| \ne \|w\|$ or $\|f_z\| \ne \|z\|$.

This shows that distinct functions of bounded variation can give rise to the same linear functional on $C([a, b])$. In order to overcome this difficulty, a new concept is introduced.
6.2.2  Definition [normalized function of bounded variation]
A function $w$ of bounded variation on $[a, b]$ is said to be normalized if $w(a) = 0$ and $w$ is right continuous on $]a, b[$. We denote the set of all normalized functions of bounded variation on $[a, b]$ by $NBV([a, b])$. It is a linear space, and the total variation gives rise to a norm on it.
6.2.3  Lemma

Let $w \in BV([a, b])$. Then there is a unique $y \in NBV([a, b])$ such that
$$\int_a^b x\,dw = \int_a^b x\,dy$$
for all $x \in C([a, b])$. In fact,
$$y(t) = \begin{cases} 0, & \text{if } t = a \\ w(t^+) - w(a), & \text{if } t \in\, ]a, b[ \\ w(b) - w(a), & \text{if } t = b. \end{cases}$$
Moreover, $\operatorname{Var}(y) \le \operatorname{Var}(w)$.
Proof: Let $y : [a, b] \to \mathbb{R}$ ($\mathbb{C}$) be defined as above. Note that the right limit $w(t^+)$ exists for every $t \in\, ]a, b[$, because $\operatorname{Re} w$ and $\operatorname{Im} w$ are real-valued functions of bounded variation, and hence each of them is a difference of two monotonically increasing functions. This also shows that $w$ has only a countable number of discontinuities in $[a, b]$.
Let $\epsilon > 0$. We show that $\operatorname{Var}(y) \le \operatorname{Var}(w) + \epsilon$. Consider a partition
$$a = t_0 < t_1 < t_2 < \cdots < t_{n-1} < t_n = b.$$

[Fig. 6.2: the partition points $a = t_0, t_1, t_2, \ldots, t_n = b$, with intermediate points $s_0 = a, s_1, s_2, \ldots, s_n = b$.]

Choose points $s_1, s_2, \ldots, s_{n-1}$ in $]a, b[$ at which $w$ is continuous and which satisfy
$$t_j < s_j, \qquad |w(t_j^+) - w(s_j)| < \frac{\epsilon}{2n}, \qquad j = 1, 2, \ldots, n - 1.$$
Let $s_0 = a$ and $s_n = b$. Then,
$$|y(t_1) - y(t_0)| \le |w(t_1^+) - w(s_1)| + |w(s_1) - w(s_0)|,$$
$$|y(t_j) - y(t_{j-1})| \le |w(t_j^+) - w(s_j)| + |w(s_j) - w(s_{j-1})| + |w(s_{j-1}) - w(t_{j-1}^+)|, \quad j = 2, \ldots, n - 1,$$
$$|y(t_n) - y(t_{n-1})| \le |w(s_n) - w(s_{n-1})| + |w(s_{n-1}) - w(t_{n-1}^+)|.$$
Hence,
$$\sum_{j=1}^{n} |y(t_j) - y(t_{j-1})| \le \sum_{j=1}^{n} |w(s_j) - w(s_{j-1})| + 2(n-1)\frac{\epsilon}{2n} < \sum_{j=1}^{n} |w(s_j) - w(s_{j-1})| + \epsilon.$$
Since the above is true for every partition $P_n$ of $[a, b]$, $\operatorname{Var}(y) \le \operatorname{Var}(w) + \epsilon$. As $\epsilon > 0$ is arbitrary, $\operatorname{Var}(y) \le \operatorname{Var}(w)$. In particular, $y$ is of bounded variation on $[a, b]$. Hence $y \in NBV([a, b])$.

Next, let $x \in C([a, b])$. Apart from the subtraction of the constant $w(a)$, the function $y$ agrees with the function $w$ except possibly at the points of discontinuity of $w$. Since these points are countable, they can be avoided while calculating the Riemann–Stieltjes sum
$$\sum_{j=1}^{n} x(t_j)[w(t_j) - w(t_{j-1})],$$
which approximates $\int_a^b x\,dw$, since each such sum is then equal to $\sum_{j=1}^{n} x(t_j)[y(t_j) - y(t_{j-1})]$, which in turn approximates $\int_a^b x\,dy$. Hence
$$\int_a^b x\,dw = \int_a^b x\,dy.$$
To prove the uniqueness of $y$, let $y_0 \in NBV([a, b])$ be such that $\int_a^b x\,dy = \int_a^b x\,dy_0$ for all $x \in C([a, b])$, and let $z = y - y_0$. Then $z(a) = y(a) - y_0(a) = 0 - 0 = 0$.

Also, since $z(a) = 0$,
$$z(b) = z(b) - z(a) = \int_a^b dz = \int_a^b dy - \int_a^b dy_0 = 0.$$
Now, let $\tau \in\, ]a, b[$. For a sufficiently small positive $h$, let
$$x(t) = \begin{cases} 1 & \text{if } a \le t \le \tau \\ 1 - \dfrac{t - \tau}{h} & \text{if } \tau < t \le \tau + h \\ 0 & \text{if } \tau + h < t \le b. \end{cases}$$
Then $x \in C([a, b])$ and $|x(t)| \le 1$ for all $t \in [a, b]$. Since
$$0 = \int_a^b x\,dy - \int_a^b x\,dy_0 = \int_a^b x\,dz = \int_a^\tau dz + \int_\tau^{\tau+h} \left(1 - \frac{t - \tau}{h}\right) dz,$$
we have
$$z(\tau) = -\int_\tau^{\tau+h} \left(1 - \frac{t - \tau}{h}\right) dz.$$
It follows that $|z(\tau)| \le \operatorname{Var}_\tau^{\tau+h}(z)$, where $\operatorname{Var}_\tau^{\tau+h}(z)$ denotes the total variation of $z$ on $[\tau, \tau + h]$. As $z$ is right continuous at $\tau$, its total variation function $v(t) = \operatorname{Var}_a^t(z)$, $t \in [a, b]$, is also right continuous at $\tau$. Given $\epsilon > 0$, there is some $\delta > 0$ such that for $0 < h < \delta$,
$$|z(\tau)| \le \operatorname{Var}_\tau^{\tau+h}(z) = v(\tau + h) - v(\tau) < \epsilon.$$
Hence $z(\tau) = 0$. Thus $z = 0$, that is, $y_0 = y$.
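The content of lemma 6.2.3 can be checked numerically for a simple jump function: the normalized version $y$ produces the same Riemann–Stieltjes integrals as $w$. A Python sketch (illustrative only; the partition nodes are shifted slightly so they avoid the jump point, just as the discontinuities are avoided in the proof):

```python
import numpy as np

def rs_integral(x, w, a=0.0, b=1.0, n=200000, shift=1e-7):
    """Riemann-Stieltjes sum  sum_j x(t_j)[w(t_j) - w(t_{j-1})]  over a uniform
    partition whose interior nodes are shifted so they avoid the jump point."""
    t = np.linspace(a, b, n + 1) + shift
    t[0], t[-1] = a, b
    return float(np.sum(x(t[1:]) * np.diff(w(t))))

x = np.cos                      # a continuous integrand

def w(t):  # BV function with w(0) = 3 and a jump of height 2 just after t = 0.5
    return 3.0 + 2.0 * (t > 0.5)

def y(t):  # its normalization: y(0) = 0, right continuous, same jump
    return 2.0 * (t >= 0.5)

print(rs_integral(x, w), rs_integral(x, y), 2 * np.cos(0.5))  # all three agree
```

The constant offset $w(a) = 3$ and the left-versus-right continuity at the jump change neither the increments picked up by the partition nor the integral, which is $2\cos(0.5)$.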
6.2.4  Theorem
Let $E = C([a, b])$. Then $E'$ is isometrically isomorphic to the subspace of $BV([a, b])$ consisting of all normalized functions of bounded variation. If $y$ is such a normalized function ($y \in NBV([a, b])$), the corresponding $f$ is given by
$$f(x) = \int_a^b x(t)\,dy(t). \tag{6.3}$$
Proof: Formula (6.3) defines a linear mapping $f = Ay$, where $y$ is normalized and $f \in C'([a, b])$. We evidently have $\|f\| \le \operatorname{Var}(y)$. For a normalized $y$, $\operatorname{Var}(y)$ is the norm of $y$, because $y(a) = 0$. Now consider any $g \in C'([a, b])$. Theorem 5.3.3 then tells us that there is a $w \in BV([a, b])$ such that
$$g(x) = \int_a^b x(t)\,dw(t) \qquad \text{and} \qquad \operatorname{Var}(w) = \|g\|.$$
The integral is not changed if we replace $w$ by the corresponding normalized function $y$ of bounded variation. Then, by lemma 6.2.3,
$$g(x) = \int_a^b x(t)\,dw(t) = \int_a^b x(t)\,dy(t), \qquad \text{i.e., } g = Ay, \qquad \text{and} \qquad \|g\| \le \operatorname{Var}(y).$$
Also
$$\operatorname{Var}(y) \le \operatorname{Var}(w) = \|g\|.$$
Therefore $\|g\| = \operatorname{Var}(y)$. Since by lemma 6.2.3 there is just one normalized function corresponding to the functional $g$, a one-to-one correspondence exists between the set of all bounded linear functionals on $C([a, b])$ and the set of all elements of $NBV([a, b])$.

It is evident that the sum of functions $y_1, y_2 \in NBV([a, b])$ corresponds to the sum of the functionals $g_1, g_2 \in C'([a, b])$, and the function $\alpha y$ corresponds to the functional $\alpha g$, in case the functionals $g_1, g_2, g$ correspond to the functions $y_1, y_2, y \in NBV([a, b])$. It therefore follows that the association between $C'([a, b])$ and the space of normalized functions of bounded variation $NBV([a, b])$ is an isomorphism. Furthermore, since
$$\|g\| = \operatorname{Var}(y) = \|y\|,$$
the correspondence is isometric too.

Thus, the dual of a space of continuous functions is a space of normalized functions of bounded variation.
6.2.5  Moment problem of Hausdorff, or the little moment problem

Let us consider the discrete analogue of the Laplace transform
$$\mu(s) = \int_0^\infty e^{-su}\,d\chi(u), \qquad s \in \mathbb{R}\ (\mathbb{C}), \tag{6.4}$$
where $\chi : [0, \infty) \to \mathbb{R}$ ($\mathbb{C}$) is of bounded variation on every subinterval of $[0, \infty)$. If we put $t = e^{-u}$ and put $s = n$, a positive integer, the above integral takes the form
$$\mu(n) = \int_0^1 t^n\,dz(t), \qquad n = 0, 1, 2, \ldots \tag{6.5}$$
where $\chi(u) = z(e^{-u})$.
The integral (6.5), where $z$ is a function of bounded variation on $[0, 1]$, is called the $n$th moment of $z$. The moments of the distribution of a random variable play an important role in statistics. For example, in the case of a rectangular (uniform) distribution
$$F(x) = \begin{cases} 0, & x < a \\ \dfrac{x - a}{b - a}, & a \le x \le b \\ 1, & x > b, \end{cases}$$
the frequency density function is given by the step function equal to $\dfrac{1}{b - a}$ on $[a, b]$ and $0$ elsewhere [fig. 6.3].

[Fig. 6.3: the density $f(t) = \frac{1}{b-a}$ on $[a, b]$, zero outside.]

Hence, the $n$th moment function for a rectangular distribution is
$$\mu(n) = \int t^n\,dF(t) = \int_a^b t^n f(t)\,dt.$$
A sequence of scalars $\mu(n)$, $n = 0, 1, 2, \ldots$ is called a moment sequence if there is some $z \in BV([0, 1])$ whose $n$th moment is $\mu(n)$, $n = 0, 1, 2, \ldots$

For example, if $\alpha$ is a positive integer, then taking $z(t) = \dfrac{t^\alpha}{\alpha}$, $t \in [0, 1]$, we see that
$$\mu(n) = \int_0^1 t^n\,dz(t) = \int_0^1 t^{n+\alpha-1}\,dt = \frac{1}{n + \alpha}, \qquad n = 0, 1, 2, \ldots$$
Similarly, if $0 < r \le 1$, then $(r^n)$, $n = 0, 1, 2, \ldots$ is a moment sequence since, if $z$ is the characteristic function of $[r, 1]$ [see Chapter 10], then
$$\int_0^1 t^n\,dz(t) = r^n, \qquad n = 0, 1, 2, \ldots$$
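As a quick numerical check of the first example, the $n$th moment of $z(t) = t^\alpha/\alpha$ can be approximated by the midpoint rule and compared with $1/(n+\alpha)$. A Python sketch (illustrative only):

```python
def moment_numeric(n, alpha, steps=20000):
    """Midpoint-rule approximation of the n-th moment of z(t) = t**alpha/alpha,
    i.e. of integral_0^1 t**n dz(t) = integral_0^1 t**(n+alpha-1) dt."""
    h = 1.0 / steps
    return sum(((i + 0.5) * h) ** (n + alpha - 1) * h for i in range(steps))

for n in range(4):
    print(n, moment_numeric(n, 2), 1.0 / (n + 2))  # approximation vs 1/(n+alpha)
```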
If $\mu(n)$ is the $n$th moment of $z \in BV([0, 1])$, then
$$|\mu(n)| \le \operatorname{Var}(z), \qquad n = 0, 1, 2, \ldots$$
Hence every moment sequence is bounded. It can in fact be proved that every moment sequence $(\mu(n))$ is convergent [see Limaye [33]]. Thus not every bounded scalar sequence is a moment sequence. The problem of determining the criteria that a sequence must fulfil in order to be a moment sequence is known as the moment problem of Hausdorff, or the little moment problem. We next discuss some mathematical preliminaries relevant to the discussion.
6.2.6  The shift operator, the forward difference operator

Let $X$ denote the linear space of all scalar sequences $(\mu(n))$, $n = 0, 1, 2, \ldots$ and let $E : X \to X$ be defined by
$$E(\mu)(n) = \mu(n + 1), \qquad \mu \in X,\ n = 0, 1, 2, \ldots$$
$E$ is called the shift operator.

Let $I$ denote the identity operator from $X$ to $X$. Define $\Delta = E - I$; $\Delta$ is called the forward difference operator. Thus, for all $\mu \in X$ and $n = 0, 1, 2, \ldots$
$$\Delta(\mu)(n) = \mu(n + 1) - \mu(n).$$
For $r = 0, 1, 2, \ldots$ we have
$$\Delta^r = (E - I)^r = \sum_{j=0}^{r} (-1)^{r-j} \binom{r}{j} E^j, \tag{6.6}$$
so that
$$\Delta^r(\mu)(n) = \sum_{j=0}^{r} (-1)^{r-j} \binom{r}{j} \mu(n + j).$$
In particular,
$$\Delta^r(\mu)(0) = \sum_{j=0}^{r} (-1)^{r-j} \binom{r}{j} \mu(j). \tag{6.7}$$

6.2.7  Definition: P([0, 1])

Let $\mathcal{P}([0, 1])$ denote the linear space of all scalar-valued polynomials on $[0, 1]$. For $m = 0, 1, 2, \ldots$ let
$$p_m(t) = t^m, \qquad t \in [0, 1].$$
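The binomial expansion (6.6)/(6.7) of the iterated forward difference is easy to verify in code. A Python sketch (illustrative; `mu` is an arbitrary scalar sequence, here a cubic, so that $\Delta^4\mu = 0$):

```python
from math import comb

def delta(mu, order=1):
    """Iterated forward difference: (Delta mu)(n) = mu(n+1) - mu(n)."""
    f = mu
    for _ in range(order):
        f = (lambda g: (lambda n: g(n + 1) - g(n)))(f)
    return f

def delta_binomial(mu, r, n):
    """Right-hand side of (6.6): sum_j (-1)**(r-j) C(r,j) mu(n+j)."""
    return sum((-1) ** (r - j) * comb(r, j) * mu(n + j) for j in range(r + 1))

mu = lambda n: n ** 3                             # a cubic sequence
print(delta(mu, 3)(0), delta_binomial(mu, 3, 0))  # both give 6 = 3!
print(delta(mu, 4)(5), delta_binomial(mu, 4, 5))  # both give 0: Delta^4 kills cubics
```

That $\Delta$ lowers the degree of a polynomial sequence by one is exactly the fact used in lemma 6.2.10 below.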
We next prove the Weierstrass approximation theorem.

6.2.8  The Weierstrass approximation theorem (RALSTON [43])

The Weierstrass approximation theorem asserts that the set of polynomials on $[0, 1]$ is dense in $C([0, 1])$; in other words, $\mathcal{P}([0, 1])$ is dense in $C([0, 1])$ under the sup norm.
In order to prove the above we need to show that for every continuous function $f \in C([0, 1])$ and $\epsilon > 0$ there is a polynomial $p \in \mathcal{P}([0, 1])$ such that
$$\max_{x \in [0, 1]} |f(x) - p(x)| < \epsilon.$$
In what follows, we denote by $\binom{n}{k}$ the binomial coefficient $\dfrac{n!}{k!(n-k)!}$, where $n$ is a positive integer and $k$ an integer such that $0 \le k \le n$. The polynomial $B_n(f)(x)$ defined by
$$B_n(f)(x) = \sum_{k=0}^{n} f\!\left(\frac{k}{n}\right) \binom{n}{k} x^k (1 - x)^{n-k} \tag{6.8}$$
is called the Bernstein polynomial associated with $f$. We prove our theorem by finding a Bernstein polynomial with the required property. Before we take up the proof, we mention some identities which will be used:
$$\text{(i)} \quad \sum_{k=0}^{n} \binom{n}{k} x^k (1 - x)^{n-k} = [x + (1 - x)]^n = 1 \tag{6.9}$$
$$\text{(ii)} \quad \sum_{k=0}^{n} \binom{n}{k} x^k (1 - x)^{n-k} (k - nx) = 0 \tag{6.10}$$
(6.10) is obtained by differentiating both sides of (6.9) w.r.t. $x$ and multiplying both sides by $x(1 - x)$.

On differentiating (6.10) w.r.t. $x$, we get
$$\sum_{k=0}^{n} \binom{n}{k} \left[-n x^k (1 - x)^{n-k} + x^{k-1} (1 - x)^{n-k-1} (k - nx)^2\right] = 0.$$
Using (6.9), this reduces to
$$\sum_{k=0}^{n} \binom{n}{k} x^{k-1} (1 - x)^{n-k-1} (k - nx)^2 = n. \tag{6.11}$$
Multiplying both sides by $x(1 - x)$ and dividing by $n^2$, we obtain
$$\text{(iii)} \quad \sum_{k=0}^{n} \binom{n}{k} x^k (1 - x)^{n-k} \left(\frac{k}{n} - x\right)^2 = \frac{x(1 - x)}{n}. \tag{6.12}$$
(6.12) is the third identity to be used in proving the theorem.

It then follows from (6.8) and (6.9) that
$$f(x) - B_n(f)(x) = \sum_{k=0}^{n} \binom{n}{k} x^k (1 - x)^{n-k} \left[f(x) - f\!\left(\frac{k}{n}\right)\right] \tag{6.13}$$
or
$$|f(x) - B_n(f)(x)| \le \sum_{k=0}^{n} \binom{n}{k} x^k (1 - x)^{n-k} \left|f(x) - f\!\left(\frac{k}{n}\right)\right|. \tag{6.14}$$
Since $f$ is uniformly continuous on $[0, 1]$, we can find $\delta > 0$ and $M$ such that
$$\left|x - \frac{k}{n}\right| < \delta \implies \left|f(x) - f\!\left(\frac{k}{n}\right)\right| < \frac{\epsilon}{2} \tag{6.15}$$
and $|f(x)| < M$ for $x \in [0, 1]$.
Let us partition the sum on the RHS of (6.13) into two parts, denoted by $\sum'$ and $\sum''$: $\sum'$ stands for the sum over those $k$ for which $\left|x - \frac{k}{n}\right| < \delta$ ($x$ is fixed but arbitrary), and $\sum''$ is the sum of the remaining terms. Thus, by (6.15) and (6.9),
$$\sum{}' = \sum_{|x - k/n| < \delta} \binom{n}{k} x^k (1 - x)^{n-k} \left|f(x) - f\!\left(\frac{k}{n}\right)\right| < \frac{\epsilon}{2} \sum_{k=0}^{n} \binom{n}{k} x^k (1 - x)^{n-k} = \frac{\epsilon}{2}. \tag{6.16}$$

We next show that if $n$ is sufficiently large then $\sum''$ can be made less than $\frac{\epsilon}{2}$ independently of $x$. Since $f$ is bounded, we get
$$\sum{}'' \le 2M \sum_{|x - k/n| \ge \delta} \binom{n}{k} x^k (1 - x)^{n-k},$$
where the sum is taken over all $k$ such that $\left|x - \frac{k}{n}\right| \ge \delta$. For such $k$, $\left(\frac{k}{n} - x\right)^2 / \delta^2 \ge 1$, so (6.12) yields
$$\sum_{|x - k/n| \ge \delta} \binom{n}{k} x^k (1 - x)^{n-k} \le \frac{1}{\delta^2} \sum_{k=0}^{n} \binom{n}{k} x^k (1 - x)^{n-k} \left(\frac{k}{n} - x\right)^2 = \frac{x(1 - x)}{n\delta^2} \le \frac{1}{4n\delta^2},$$
since $\max x(1 - x) = \frac{1}{4}$ for $x \in [0, 1]$. Hence
$$\sum{}'' \le \frac{2M}{4n\delta^2} = \frac{M}{2n\delta^2} < \frac{\epsilon}{2}, \qquad \text{taking } n > \frac{M}{\epsilon\delta^2}.$$
Therefore,
$$|f(x) - B_n(f)(x)| \le \sum_{k=0}^{n} \binom{n}{k} x^k (1 - x)^{n-k} \left|f(x) - f\!\left(\frac{k}{n}\right)\right| < \frac{\epsilon}{2} + \frac{\epsilon}{2} = \epsilon.$$
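The proof is constructive: $B_n(f)$ itself converges uniformly to $f$. The following Python sketch (illustrative only) computes $B_n(f)$ from (6.8) for $f(t) = |t - \tfrac12|$, which is continuous but not differentiable at $\tfrac12$, and shows the sup-norm error shrinking as $n$ grows:

```python
import math

def bernstein(f, n, x):
    """B_n(f)(x) as in (6.8)."""
    return sum(f(k / n) * math.comb(n, k) * x ** k * (1 - x) ** (n - k)
               for k in range(n + 1))

f = lambda t: abs(t - 0.5)            # continuous, not differentiable at 1/2
grid = [i / 200 for i in range(201)]  # sample points approximating the sup norm
for n in (10, 100, 1000):
    err = max(abs(f(x) - bernstein(f, n, x)) for x in grid)
    print(n, err)                     # sup-norm error decreases with n
```

For this $f$ the convergence is slow (roughly like $1/\sqrt{n}$ near the kink), which matches the $n > M/(\epsilon\delta^2)$ bound in the proof.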
6.2.9  Definition: P^(m)

Let us define, for a nonnegative integer $m$,
$$\mathcal{P}^{(m)} = \{p \in \mathcal{P}([0, 1]) : p \text{ is of degree} \le m\}.$$
6.2.10  Lemma ($B_n(p) \in \mathcal{P}^{(m)}$ where $p$ is a polynomial of degree $\le m$)

The Bernstein polynomial $B_n(f)$ is given by
$$B_n(f)(x) = \sum_{k=0}^{n} f\!\left(\frac{k}{n}\right) \binom{n}{k} x^k (1 - x)^{n-k}, \qquad x \in [0, 1],\ n = 0, 1, 2, \ldots$$
We express $B_n(f)$ as a linear combination of $p_0, p_1, p_2, \ldots$. Since
$$(1 - x)^{n-k} = \sum_{j=0}^{n-k} (-1)^j \binom{n-k}{j} x^j = \sum_{j=0}^{n-k} (-1)^j \binom{n-k}{j} p_j(x)$$
and $p_k p_j = p_{j+k}$, we have
$$B_n(f) = \sum_{k=0}^{n} f\!\left(\frac{k}{n}\right) \binom{n}{k} \sum_{j=0}^{n-k} (-1)^j \binom{n-k}{j} p_{j+k}.$$
As $\binom{n}{k}\binom{n-k}{j} = \binom{n}{j+k}\binom{j+k}{k}$, we put $j + k = r$ and obtain
$$B_n(f) = \sum_{r=0}^{n} \binom{n}{r} \left[\sum_{k=0}^{r} (-1)^{r-k} \binom{r}{k} f\!\left(\frac{k}{n}\right)\right] p_r.$$
In particular, $B_n(f)$ is a polynomial of degree at most $n$. Also,
$$B_n(p_0)(x) = \sum_{k=0}^{n} \binom{n}{k} x^k (1 - x)^{n-k} = [x + (1 - x)]^n = 1 = p_0(x), \qquad x \in [0, 1].$$
If $n \le m$ then clearly $B_n(p) \in \mathcal{P}^{(n)} \subseteq \mathcal{P}^{(m)}$.

Next, fix $n \ge m + 1$. Consider the sequence $(p_\mu(k))$ defined by
$$p_\mu(k) = p\!\left(\frac{k}{n}\right), \qquad k = 0, 1, 2, \ldots$$
Noting the expression (6.7) for $\Delta^r(\mu)(0)$ and the above expression for $B_n(f)$, we obtain
$$B_n(p) = \sum_{r=0}^{n} \binom{n}{r} (\Delta^r p_\mu)(0)\, p_r.$$
Since $p$ is a polynomial of degree at most $m$, it follows that
$$(\Delta p_\mu)(k) = p\!\left(\frac{k+1}{n}\right) - p\!\left(\frac{k}{n}\right) = \alpha_0 + \alpha_1 k + \cdots + \alpha_{m-1} k^{m-1}, \qquad k = 0, 1, 2, \ldots$$
for some scalars $\alpha_0, \ldots, \alpha_{m-1}$. Proceeding similarly, we conclude that $(\Delta^m p_\mu)(k)$ equals a constant for $k = 0, 1, 2, \ldots$, and for each $r \ge m + 1$ we have $(\Delta^r p_\mu)(k) = 0$, $k = 0, 1, 2, \ldots$

In particular, $(\Delta^r p_\mu)(0) = 0$ for all $r \ge m + 1$, so that
$$B_n(p) = \sum_{r=0}^{m} \binom{n}{r} (\Delta^r p_\mu)(0)\, p_r.$$
Hence $B_n(p) \in \mathcal{P}^{(m)}$.
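Lemma 6.2.10 can be verified with exact rational arithmetic: for $p = p_2$ (degree $m = 2$) and any $n \ge 3$, $B_n(p)$ again has degree $2$. A Python sketch (illustrative only), expanding $B_n(p)$ in the power basis:

```python
from fractions import Fraction
from math import comb

def bernstein_coeffs(p_coeffs, n):
    """Exact power-basis coefficients of B_n(p), where p(t) = sum_i a_i t**i
    is given by p_coeffs = [a_0, a_1, ...]; Fractions make the cancellation
    of the coefficients of t**r for r > deg(p) exact."""
    p = lambda t: sum(Fraction(a) * t ** i for i, a in enumerate(p_coeffs))
    out = [Fraction(0)] * (n + 1)
    for k in range(n + 1):
        fk = p(Fraction(k, n))
        # C(n,k) x**k (1-x)**(n-k) = sum_j (-1)**j C(n,k) C(n-k,j) x**(k+j)
        for j in range(n - k + 1):
            out[k + j] += fk * comb(n, k) * comb(n - k, j) * (-1) ** j
    while len(out) > 1 and out[-1] == 0:   # strip exactly-zero top coefficients
        out.pop()
    return out

print(bernstein_coeffs([0, 0, 1], 10))  # B_10(t^2) = t/10 + 9t^2/10: degree 2
```

This also illustrates the known identity $B_n(p_2) = \frac{1}{n}p_1 + \left(1 - \frac{1}{n}\right)p_2$.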
6.2.11  Lemma
Let $h$ be a linear functional on $\mathcal{P}([0, 1])$ with $\mu(n) = h(p_n)$ for $n = 0, 1, 2, \ldots$. If $f_{k,l}(x) = x^k(1 - x)^l$ for $x \in [0, 1]$, then
$$h(f_{k,l}) = (-1)^l \Delta^l(\mu)(k), \qquad k, l = 0, 1, 2, \ldots$$
Proof: We have $f_{k,l} = p_k (1 - x)^l$, where $p_k(x) = x^k$. Thus
$$f_{k,l} = p_k \sum_{i=0}^{l} (-1)^i \binom{l}{i} p_i = \sum_{i=0}^{l} (-1)^i \binom{l}{i} p_{i+k}.$$
Hence,
$$h(f_{k,l}) = \sum_{i=0}^{l} (-1)^i \binom{l}{i} h(p_{i+k}) = \sum_{i=0}^{l} (-1)^i \binom{l}{i} \mu(i + k),$$
which equals $(-1)^l \Delta^l(\mu)(k)$.
We next frame the criterion for a sequence to be a moment sequence.

6.2.12  Theorem (Hausdorff, 1921) [Limaye [33]]

Let $(\mu(n))$, $n = 0, 1, 2, \ldots$ be a sequence of scalars. Then the following conditions are equivalent:

(i) $(\mu(n))$ is a moment sequence.

(ii) For $n = 0, 1, 2, \ldots$ and $k = 0, 1, 2, \ldots, n$, let
$$d_{n,k} = \binom{n}{k} (-1)^{n-k} \Delta^{n-k}(\mu)(k).$$
Then $\sum_{k=0}^{n} |d_{n,k}| \le d$ for all $n$ and some $d > 0$.

(iii) The linear functional $h : \mathcal{P}([0, 1]) \to \mathbb{R}$ ($\mathbb{C}$) defined by $h(\alpha_0 p_0 + \alpha_1 p_1 + \cdots + \alpha_n p_n) = \alpha_0 \mu(0) + \cdots + \alpha_n \mu(n)$ is continuous, where $n = 0, 1, 2, \ldots$ and $\alpha_0, \alpha_1, \ldots, \alpha_n \in \mathbb{R}$ ($\mathbb{C}$).

Further, there is a nondecreasing function on $[0, 1]$ whose $n$th moment is $\mu(n)$ if and only if $d_{n,k} \ge 0$ for all $n = 0, 1, 2, \ldots$ and $k = 0, 1, 2, \ldots, n$. This happens if and only if the linear functional $h$ is positive.
Proof: (i) $\Rightarrow$ (ii). Let $z \in BV([0, 1])$ be such that the $n$th moment of $z$ is $\mu(n)$, $n = 0, 1, 2, \ldots$. Then
$$h(p) = \int_0^1 p\,dz, \qquad p \in \mathcal{P}([0, 1]),$$
defines a linear functional $h$ on $\mathcal{P}([0, 1])$ such that $h(p_n) = \mu(n)$ for $n = 0, 1, 2, \ldots$. By lemma 6.2.11,
$$d_{n,k} = \binom{n}{k} (-1)^{n-k} \Delta^{n-k}(\mu)(k) = \binom{n}{k} h(f_{k,n-k}) \qquad \text{for } n = 0, 1, 2, \ldots$$
and $k = 0, 1, 2, \ldots, n$. Since $f_{k,n-k} \ge 0$ on $[0, 1]$, it follows that
$$|d_{n,k}| \le \binom{n}{k} \int_0^1 f_{k,n-k}\,dv_z,$$
where $v_z(x)$ is the total variation of $z$ on $[0, x]$. But, for $n = 0, 1, 2, \ldots$,
$$\sum_{k=0}^{n} \binom{n}{k} f_{k,n-k} = B_n(1) = 1.$$
Hence,
$$\sum_{k=0}^{n} |d_{n,k}| \le \int_0^1 B_n(1)\,dv_z = \operatorname{Var} z,$$
where $\operatorname{Var} z$ is the total variation of $z$ on $[0, 1]$.

Note that if $z$ is nondecreasing, then since $f_{k,n-k} \ge 0$ we have
$$d_{n,k} = \binom{n}{k} \int_0^1 f_{k,n-k}\,dz \ge 0 \qquad \text{for all } n = 0, 1, 2, \ldots \text{ and } k = 0, 1, 2, \ldots, n.$$
(ii) (iii) For a nonnegative integer m, let hm denote the restriction
of h to P(m) . Since h is linear on P ([0, 1]) and P(m) is a nite dimensional
subspace or P ([0, 1]), it follows that hm is continuous, since every linear
map on a nite dimensional normed linear space is continuous.
Let p P([0, 1]). Since
n
k
n k
p
Bn (p) =
x (1 x)nk
k
n
k=0
and
h(pn ) = (n) for n = 0, 1, 2, . . ., we have
n
n
k
h(xk (1 x)nk )
p
h(Bn (p)) =
k
n
k=0
n
n
k
(1)nk nk ()(k).
p
=
k
n
k=0
n
k
p
=
dn,k ,
n
k=0
by lemma 6.2.11. Hence,
n
p k dn,k 
h(Bn (p)
n
k=0
p
n
dn,k  dp .
k=0
Now, let the degree of p be m. Then Bn (p) P(m) , for all n = 0, 1, 2, . . .
as proved earlier.
Since Bn (p) p 0 as n and hm is continuous,
Space of Bounded Linear Functionals
239
h(p) = hm (p) = lim hm (Bn (p) dp .
n
This shows that h is continuous on P([0, 1]).
If, dm,k 0 for all m and k, and if p 0 on [0,1]
n
k
then h(p) = lim h(Bn (p)) = lim
dn,k 0,
p
n
n
n
k=0
i.e., h is a positive functional.
n
n
dn,k  =
dn,k =
In this case,
k=0
k=0
1
0
Bn (1)dz =
dz = (0).
0
(iii) $\Rightarrow$ (i). Since $\mathcal{P}([0, 1])$ is dense in $C([0, 1])$ with the sup norm $\|\cdot\|_\infty$, there is some $F \in C'([0, 1])$ with $F|_{\mathcal{P}([0,1])} = h$ and $\|F\| = \|h\|$. By the Riesz representation theorem for $C([0, 1])$, there is some $z \in NBV([0, 1])$ such that
$$F(f) = \int_0^1 f\,dz, \qquad f \in C([0, 1]).$$
In particular, for $n = 0, 1, 2, \ldots$,
$$\mu(n) = h(p_n) = F(p_n) = \int_0^1 t^n\,dz(t),$$
that is, $\mu(n)$ is the $n$th moment of $z$.

If the functional $h$ is positive and $f \in C([0, 1])$ with $f \ge 0$ on $[0, 1]$, then $B_n(f) \ge 0$ on $[0, 1]$ for all $n$, and we have $F(f) = \lim_{n\to\infty} F(B_n(f)) = \lim_{n\to\infty} h(B_n(f)) \ge 0$, i.e., $F$ is a positive functional on $C([0, 1])$. By the Riesz representation theorem for $C([0, 1])$, we can say that there is a nondecreasing function $z$ such that $F = F_z$. In particular, $\mu(n)$ is the $n$th moment of a nondecreasing function $z$.

Thus, if $(\mu(n))$ is a moment sequence, then there exists a unique $y \in NBV([0, 1])$ such that $\mu(n)$ is the $n$th moment of $y$. This follows from lemma 6.2.3 by noting that $\mathcal{P}([0, 1])$ is dense in $C([0, 1])$ with the sup norm $\|\cdot\|_\infty$.
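Hausdorff's criterion is easy to test numerically. For $\mu(n) = 1/(n+1)$, the moments of the nondecreasing function $z(t) = t$, every $d_{n,k}$ equals $\binom{n}{k}\int_0^1 t^k(1-t)^{n-k}\,dt = \frac{1}{n+1} \ge 0$, and each row sums to $\mu(0) = 1$. A Python sketch (illustrative only):

```python
from math import comb

def d(mu, n, k):
    """d_{n,k} = C(n,k)(-1)**(n-k) Delta**(n-k) mu(k)
              = C(n,k) sum_j (-1)**j C(n-k,j) mu(k+j)."""
    return comb(n, k) * sum((-1) ** j * comb(n - k, j) * mu(k + j)
                            for j in range(n - k + 1))

mu = lambda n: 1.0 / (n + 1)     # moments of the nondecreasing z(t) = t

for n in (1, 5, 10):
    row = [d(mu, n, k) for k in range(n + 1)]
    print(n, min(row), sum(abs(v) for v in row))  # entries >= 0, row sum = mu(0)
```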
Problems

1. For a fixed $x \in [a, b]$, let $F_x \in C'([a, b])$ be defined by $F_x(f) = f(x)$, $f \in C([a, b])$. Let $y_x$ be the function in $NBV([a, b])$ which represents $F_x$ as in theorem 5.3.3. Show that if $x = a$, then $y_x$ is the characteristic function of $]a, b]$, and if $a < x \le b$, then $y_x$ is the characteristic function of $[x, b]$.

If $a \le x_1 < x_2 < \cdots < x_{n-1} < x_n \le b$ and
$$F(f) = k_1 f(x_1) + \cdots + k_n f(x_n), \qquad f \in C([a, b]),$$
then show that the function in $NBV([a, b])$ corresponding to $F \in C'([a, b])$ is a step function.

2. Prove the inequality
$$\left|\int_a^b f\,dg\right| \le \max\{|f(x)| : x \in [a, b]\}\,\operatorname{Var}(g).$$

3. Show that for any $g \in BV([a, b])$ there is a unique $\tilde{g} \in BV([a, b])$, continuous from the right, such that
$$\int_a^b f\,dg = \int_a^b f\,d\tilde{g} \quad \text{for all } f \in C([a, b]) \qquad \text{and} \qquad \operatorname{Var}(\tilde{g}) \le \operatorname{Var}(g).$$

4. Show that a sequence of scalars $\mu(n)$, $n = 0, 1, 2, \ldots$ is a moment sequence if and only if
$$\mu(n) = \mu_1(n) - \mu_2(n) + i\mu_3(n) - i\mu_4(n), \qquad \text{where } i = \sqrt{-1},$$
and $(-1)^{n-k} \Delta^{n-k} \mu_j(k) \ge 0$ for all $k = 0, 1, 2, \ldots, n$, $j = 1, 2, 3, 4$ and $n = 0, 1, 2, \ldots$

5. Let $y \in NBV([0, 1])$ and $\mu(n) = \int_0^1 t^n\,dy(t)$ for $n = 0, 1, 2, \ldots$. Then show that
$$\operatorname{Var}(y) = \sup\left\{\sum_{k=0}^{n} \left|\binom{n}{k} \Delta^{n-k}(\mu)(k)\right| : n = 0, 1, 2, \ldots\right\},$$
where $\Delta$ is the forward difference operator.
6.3  Weak* and Weak Convergence

6.3.1  Definition: weak* convergence of functionals

Let $E$ be a normed linear space. A sequence $\{f_n\}$ of linear functionals in $E'$ is said to be weak* convergent to a linear functional $f_0 \in E'$ if $f_n(x) \to f_0(x)$ for every $x \in E$.

Thus, for linear functionals the notion of weak* convergence is equivalent to pointwise convergence.
6.3.2  Theorem

If a sequence $\{f_n\}$ of functionals in $E'$ is Cauchy in norm, then $\{f_n\}$ converges weak* to some linear functional $f_0$.

For the notion of pointwise convergence see 4.5.2. Theorem 4.4.2 asserts that $E'$ is complete, where $E'$ is the space conjugate to the normed linear space $E$. Therefore, if $\{f_n\} \subseteq E'$ is Cauchy, then $f_n \to f_0 \in E'$ in norm. Therefore, for every $x \in E$, $f_n(x) \to f_0(x)$ as $n \to \infty$.
6.3.3  Theorem

Let $\{f_n\}$ be a sequence of bounded linear functionals defined on the Banach space $E_x$. A necessary and sufficient condition for $\{f_n\}$ to converge weak* to $f$ as $n \to \infty$ is that

(i) $\{\|f_n\|\}$ is bounded;

(ii) $f_n(x) \to f(x)$ for every $x \in M$, where the subspace $M$ is everywhere dense in $E_x$.

Proof: Let $f_n \to f$ weak*, i.e., $f_n(x) \to f(x)$ for every $x \in E_x$. It follows from theorem 4.5.6 that $\{\|f_n\|\}$ is bounded. Since $M$ is a subspace of $E_x$, condition (ii) is valid.

We next show that conditions (i) and (ii) are sufficient. Let $\{\|f_n\|\}$ be bounded and let $L = \sup_n \|f_n\|$.

The linear functional $f$ is defined on $M$. Hence, by the Hahn–Banach extension theorem (5.1.3), we can extend $f$ from $M$ to the whole of $E_x$. Moreover, $\|f\| = \|f\|_M \le \sup_n \|f_n\| = L$.

Let $x \in E_x$. Since $M$ is everywhere dense in $E_x$, given an arbitrary $\epsilon > 0$ there exists $x_0 \in M$ such that
$$\|x - x_0\| < \frac{\epsilon}{4L}.$$
Condition (ii) yields, for the above $\epsilon > 0$, an $n_0$ depending on $\epsilon$ such that
$$|f_n(x_0) - f(x_0)| < \frac{\epsilon}{2} \qquad \text{for } n > n_0.$$
Now,
$$|f_n(x) - f(x)| \le |f_n(x) - f_n(x_0)| + |f_n(x_0) - f(x_0)| + |f(x_0) - f(x)|$$
$$\le \|f_n\|\,\|x - x_0\| + \frac{\epsilon}{2} + \|f\|\,\|x - x_0\| < L\,\frac{\epsilon}{4L} + \frac{\epsilon}{2} + L\,\frac{\epsilon}{4L} = \epsilon$$
for $n > n_0(\epsilon)$. Hence $f_n(x) \to f(x)$ for every $x \in E_x$,
i.e., $f_n \xrightarrow{\ w^*\ } f$.

6.3.4  Application to the theory of quadrature formulae
Let $x(t) \in C([a, b])$. Then
$$f(x) = \int_a^b x(t)\,dt \tag{6.17}$$
is a bounded linear functional on $C([a, b])$. Consider
$$f_n(x) = \sum_{k=1}^{k_n} C_k^{(n)} x(t_k^{(n)}), \qquad n = 1, 2, 3, \ldots \tag{6.18}$$
The $C_k^{(n)}$ are called weights and the $t_k^{(n)}$ the nodes, with
$$a \le t_1^{(n)} < t_2^{(n)} < \cdots < t_{k_n-1}^{(n)} < t_{k_n}^{(n)} \le b.$$
(6.18) is a bounded linear functional.
6.3.5  Definition: quadrature formula

Let $C_k^{(n)}$ be so chosen in (6.18) that $f(x)$ and $f_n(x)$ coincide for all polynomials of degree less than or equal to $n$, i.e.,
$$f(x) = f_n(x) \qquad \text{if } x(t) = \sum_{p=0}^{n} a_p t^p. \tag{6.19}$$
The relation $f(x) \approx f_n(x)$, which becomes an equality for all polynomials of degree less than or equal to $n$, is called a quadrature formula. For example, in the case of Gaussian quadrature with $[a, b] = [-1, 1]$, the nodes $t_k^{(n)}$, $k = 1, 2, \ldots, n$, are the $n$ roots of the Legendre polynomial
$$P_n(t) = \frac{1}{2^n n!}\,\frac{d^n}{dt^n}(t^2 - 1)^n = 0.$$
Consider the sequence of quadrature formulae
$$f(x) \approx f_n(x), \qquad n = 1, 2, 3, \ldots$$
The problem that arises is whether the sequence $\{f_n(x)\}$ converges to the value of $f(x)$ as $n \to \infty$ for every $x(t) \in C([0, 1])$. The theorem below answers this question.

6.3.6  Theorem

The necessary and sufficient condition for the convergence of the sequence of quadrature formulae, i.e., in order that
$$\lim_{n\to\infty} \sum_{k=1}^{k_n} C_k^{(n)} x(t_k^{(n)}) = \int_0^1 x(t)\,dt$$
holds for every continuous function $x(t)$, is that
$$\sum_{k=1}^{k_n} |C_k^{(n)}| \le K = \text{const}$$
must be true for every $n$.
Proof: By definition of the quadrature formula, the functional $f_m$ satisfies
$$f_m(x) = f(x) \quad \text{for } m \ge n \tag{6.20}$$
for every polynomial $x(t)$ of degree $\le n$, where
$$f_m(x) = \sum_{k=1}^{k_m} C_k^{(m)} x(t_k^{(m)}). \tag{6.21}$$
Each $f_m$ is bounded, since $|x(t_k^{(m)})| \le \|x\|$ by the definition of the norm. Consequently,
$$|f_m(x)| \le \sum_{k=1}^{k_m} |C_k^{(m)}|\,|x(t_k^{(m)})| \le \|x\| \sum_{k=1}^{k_m} |C_k^{(m)}|.$$
For later use we show that $f_m$ has the norm
$$\|f_m\| = \sum_{k=1}^{k_m} |C_k^{(m)}|, \tag{6.22}$$
i.e., $\|f_m\|$ cannot exceed the right-hand side of (6.22), and equality holds if we take an $x_0 \in C([0, 1])$ such that $|x_0(t)| \le 1$ on $[0, 1]$ and
$$x_0(t_k^{(m)}) = \operatorname{sgn} C_k^{(m)} = \begin{cases} 1 & \text{if } C_k^{(m)} \ge 0 \\ -1 & \text{if } C_k^{(m)} < 0, \end{cases}$$
since then $\|x_0\| = 1$ and
$$f_m(x_0) = \sum_{k=1}^{k_m} C_k^{(m)} \operatorname{sgn} C_k^{(m)} = \sum_{k=1}^{k_m} |C_k^{(m)}|.$$
For a given $x \in E_x$, (6.21) yields an approximate value $f_n(x)$ for $f(x)$ in (6.20).

We know that the set $P$ of all polynomials with real coefficients is dense in the real space $E_x = C([0, 1])$ by the Weierstrass approximation theorem (th. 1.4.32). Thus, the sequence of functionals $\{f_n\}$ converges to the functional $f$ on the set of all polynomials, which is everywhere dense in $C([0, 1])$. Since $\sum_{k=1}^{k_m} |C_k^{(m)}| \le K = \text{const}$, it follows from (6.22) that $\|f_m\|$ is bounded. Hence, by theorem 6.3.3, $f_n(x) \to f(x)$ for every continuous function $x(t)$.
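A concrete family satisfying the theorem is the composite trapezoidal rule: its weights are positive and sum to $b - a$, so $\sum_k |C_k^{(n)}|$ is bounded uniformly in $n$, and the approximations converge for every continuous integrand. A Python sketch on $[0, 1]$ (illustrative only):

```python
import math

def trapezoid_rule(x, n):
    """Composite trapezoidal quadrature on [0,1]; all weights C_k are positive
    and sum to 1, so sum_k |C_k| = 1 = K for every n (hypothesis of 6.3.6)."""
    h = 1.0 / n
    weights = [h] * (n + 1)
    weights[0] = weights[-1] = h / 2
    nodes = [k * h for k in range(n + 1)]
    value = sum(c * x(t) for c, t in zip(weights, nodes))
    return value, sum(abs(c) for c in weights)

x = math.exp                  # a continuous integrand; exact integral e - 1
for n in (4, 16, 64, 256):
    approx, weight_sum = trapezoid_rule(x, n)
    print(n, abs(approx - (math.e - 1.0)), weight_sum)  # error -> 0, sum stays 1
```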
6.3.7  Theorem

If all the coefficients $C_k^{(n)}$ of the quadrature formulae are positive, then the sequence of quadrature formulae $f(x) \approx f_n(x)$, $n = 1, 2, \ldots$ is convergent for every continuous function $x(t)$.

In fact, $f_n(x_0) = f(x_0)$ for every $n$, where $x_0(t) \equiv 1$. Hence,
$$f_n(x_0) = \sum_{k=1}^{k_n} |C_k^{(n)}| = \sum_{k=1}^{k_n} C_k^{(n)} = \int_0^1 dt = 1 - 0.$$
Therefore, the hypothesis of theorem 6.3.6 is satisfied.
6.3.8  Weak convergence of a sequence of elements of a space

6.3.9  Definition: weak convergence of a sequence of elements

Let $E$ be a normed linear space, $\{x_n\}$ a sequence of elements in $E$, and $x \in E$. $\{x_n\}$ is said to converge weakly to the element $x$ if for every linear functional $f \in E'$, $f(x_n) \to f(x)$ as $n \to \infty$; in symbols we write $x_n \xrightarrow{\ w\ } x$. We say that $x$ is the weak limit of the sequence of elements $\{x_n\}$.
6.3.10  Lemma: a sequence cannot converge weakly to two limits

Let $\{x_n\} \subseteq E$, a normed linear space, converge weakly to $x_0$ and to $y_0 \in E$. Then, for any linear functional $f \in E'$,
$$f(x_0) = f(y_0) = \lim_{n\to\infty} f(x_n).$$
$f$ being linear, $f(x_0 - y_0) = 0$. The above is true for every functional belonging to $E'$. Hence $x_0 - y_0 = \theta$, i.e., $x_0 = y_0$: the weak limit is unique.

It is easy to see that any subsequence $\{x_{n_k}\}$ also converges weakly to $x$ if $x_n \xrightarrow{\ w\ } x$.
6.3.11  Definition: strong convergence of a sequence of elements

The convergence of a sequence of elements (functions) with respect to the norm of the given space is called strong convergence.

6.3.12  Lemma

The strong convergence of a sequence $\{x_n\}$ in a normed linear space $E$ to an element $x \in E$ implies weak convergence. For any functional $f \in E'$,
$$|f(x_n) - f(x)| = |f(x_n - x)| \le \|f\|\,\|x_n - x\| \to 0$$
as $n \to \infty$, since $\|f\|$ is finite. Thus $x_n \to x$ strongly implies $x_n \xrightarrow{\ w\ } x$.
Note 6.3.1. The converse is not always true. Let us consider the sequence of elements $\{\sin n\pi t\}$ in $L_2([0, 1])$.

Put $x_n(t) = \sin n\pi t$, so that $f(x_n) = \int_0^1 \sin(n\pi t)\,\phi(t)\,dt$, where $\phi(t)$ is a square integrable function uniquely defined with respect to the functional $f$. Up to a constant factor, $f(x_n)$ is the $n$th Fourier coefficient of $\phi(t)$ relative to $\{\sin n\pi t\}$. Consequently, $f(x_n) \to 0$ as $n \to \infty$, so that $x_n \xrightarrow{\ w\ } 0$ as $n \to \infty$.

On the other hand,
$$\|x_n - x_m\|^2 = \int_0^1 (\sin n\pi t - \sin m\pi t)^2\,dt = \int_0^1 \sin^2 n\pi t\,dt - 2\int_0^1 \sin n\pi t \sin m\pi t\,dt + \int_0^1 \sin^2 m\pi t\,dt = 1 \quad \text{if } n \ne m.$$
Thus $\{x_n\}$ is not Cauchy and does not converge strongly.
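The two halves of this example, Fourier coefficients tending to $0$ while pairwise distances stay equal to $1$, can be observed numerically. A Python sketch (illustrative; $\phi(t) = e^{-t}$ is an arbitrary choice of square integrable function):

```python
import numpy as np

N = 50000
t = (np.arange(N) + 0.5) / N           # midpoint grid on [0, 1]

def inner(u, v):
    # midpoint-rule approximation of the L2([0,1]) inner product
    return float(np.sum(u * v)) / N

phi = np.exp(-t)                        # a fixed square integrable function
xs = [np.sin(n * np.pi * t) for n in range(1, 41)]

coeffs = [inner(x, phi) for x in xs]    # f(x_n): Fourier coefficients of phi
dists = [np.sqrt(inner(xs[n] - xs[m], xs[n] - xs[m]))
         for n in range(5) for m in range(5) if n != m]
print(abs(coeffs[0]), abs(coeffs[-1]))  # coefficients shrink: weak convergence to 0
print(min(dists), max(dists))           # pairwise distances all stay near 1
```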
6.3.13  Theorem

In a finite dimensional space, the notions of weak and strong convergence are equivalent.

Proof: Let $E$ be a finite dimensional space and $\{x_n\}$ a given sequence such that $x_n \xrightarrow{\ w\ } x_0$. Since $E$ is finite dimensional, there is a finite system of linearly independent elements $e_1, e_2, \ldots, e_m$ such that every $x \in E$ can be represented as
$$x = \xi_1 e_1 + \xi_2 e_2 + \cdots + \xi_m e_m.$$
Let
$$x_n = \xi_1^{(n)} e_1 + \xi_2^{(n)} e_2 + \cdots + \xi_m^{(n)} e_m, \qquad x_0 = \xi_1^{(0)} e_1 + \xi_2^{(0)} e_2 + \cdots + \xi_m^{(0)} e_m.$$
Now, consider the functionals $f_i$ such that
$$f_i(e_i) = 1, \qquad f_i(e_j) = 0 \quad (i \ne j).$$
Then
$$f_i(x_n) = \xi_i^{(n)}, \qquad f_i(x_0) = \xi_i^{(0)}.$$
But since $f(x_n) \to f(x_0)$ for every functional $f$, we also have $f_i(x_n) \to f_i(x_0)$, i.e., $\xi_i^{(n)} \to \xi_i^{(0)}$. Hence
$$\|x_n - x_0\| = \left\|\sum_{i=1}^{m} (\xi_i^{(n)} - \xi_i^{(0)}) e_i\right\| \le \sum_{i=1}^{m} |\xi_i^{(n)} - \xi_i^{(0)}|\,\|e_i\| \to 0$$
as $n \to \infty$, showing that in a finite dimensional normed linear space, weak convergence of $\{x_n\}$ implies strong convergence of $\{x_n\}$.
6.3.14  Remark

There also exist infinite dimensional spaces in which strong and weak convergence of sequences are equivalent. An example is $E = l_1$, the space of sequences $\{\xi_1, \xi_2, \ldots, \xi_n, \ldots\}$ such that the series $\sum_{i=1}^{\infty} |\xi_i|$ converges (this is known as Schur's property of $l_1$). We note that in $l_1$, strong convergence of elements implies coordinate-wise convergence.
6.3.15  Theorem

If the normed linear space $E$ is separable, then we can find an equivalent norm such that the weak convergence $x_n \xrightarrow{\ w\ } x_0$ and $\|x_n\| \to \|x_0\|$ in the new norm imply the strong convergence of the sequence $\{x_n\}$ to $x_0$.

Proof: Let $E$ be a normed linear space. Since the space is separable, it has a countable everywhere dense set $\{e_i\}$ with $\|e_i\| = 1$. Let
$$x_n = \xi_1^{(n)} e_1 + \cdots + \xi_i^{(n)} e_i + \cdots, \qquad x_0 = \xi_1^{(0)} e_1 + \cdots + \xi_i^{(0)} e_i + \cdots,$$
where $x_n \xrightarrow{\ w\ } x_0$ as $n \to \infty$. Let us consider the functionals $f_i \in E'$ such that
$$f_i(e_i) = 1, \qquad f_i(e_j) = 0 \quad (i \ne j).$$
Now $f_i(x_n) = \xi_i^{(n)}$ and $f_i(x_0) = \xi_i^{(0)}$. Since $f_i(x_n) \to f_i(x_0)$ as $n \to \infty$,
$$\xi_i^{(n)} \to \xi_i^{(0)}, \qquad i = 1, 2, 3, \ldots$$
If $\|x_n\| \to \|x_0\|$ as $n \to \infty$, then
$$\left\|\sum_{i=1}^{\infty} \xi_i^{(n)} e_i\right\| \to \left\|\sum_{i=1}^{\infty} \xi_i^{(0)} e_i\right\| \qquad \text{as } n \to \infty.$$
Let us introduce in $E$ a new norm $\|\cdot\|_1$ as follows:
$$\|x_n - x_0\|_1 = \frac{\|x_n - x_0\|}{1 + \|x_n - x_0\|}. \tag{6.23}$$
Since $\|x_n - x_0\| \ge 0$, we have $\|x_n - x_0\|_1 \le \|x_n - x_0\|$. Again, since $\{\|x_n\|_1\}$ is convergent and hence bounded, $\|x_n\|_1 \le M$ (say), and
$$\|x_n - x_0\| \le (1 - M)^{-1} \|x_n - x_0\|_1.$$
Thus
$$\|x_n - x_0\|_1 \le \|x_n - x_0\| \le (1 - M)^{-1} \|x_n - x_0\|_1,$$
so $\|\cdot\|_1$ and $\|\cdot\|$ are equivalent norms. (6.23) yields that
$$\|x_n\| \le \frac{M}{1 - M} = L \ \text{(say)}.$$
Hence $\left\|\sum_{i=1}^{\infty} \xi_i^{(n)} e_i\right\| \le L$.

Let
$$S_m = \sum_{i=1}^{m} \xi_i^{(n)} e_i \qquad \text{and} \qquad S = \sum_{i=1}^{\infty} \xi_i^{(n)} e_i.$$
Let $\epsilon > 0$; then $\|S_m - S\| < \epsilon$ for $m \ge m_0(\epsilon)$. Now,
$$x_n - x_0 = \sum_{i=1}^{\infty} (\xi_i^{(n)} - \xi_i^{(0)}) e_i.$$
Then, for $n \ge n_0(\epsilon)$ and $m \ge m_0(\epsilon)$,
$$\left\|\sum_{i=1}^{m} (\xi_i^{(n)} - \xi_i^{(0)}) e_i\right\| \le \sum_{i=1}^{m} |\xi_i^{(n)} - \xi_i^{(0)}|\,\|e_i\| < \frac{\epsilon}{2L} \cdot 2L = \epsilon.$$
Thus $\|x_n - x_0\| \to 0$ as $n \to \infty$, proving the strong convergence of $\{x_n\}$.
6.3.16
Theorem
If the sequence $\{x_n\}$ of a normed linear space $E$ converges weakly to $x_0$, then there is a sequence of linear combinations $\sum_{k=1}^{k_n} C_k^{(n)} x_k$ which converges strongly to $x_0$.
In other words, $x_0$ belongs to the closed linear subspace $L$ spanned by the elements $x_1, x_2, \ldots, x_n, \ldots$.
Proof: Let us assume that the theorem is not true, i.e., x0 does not
belong to the closed subspace $L$. Then, by theorem 5.1.5, there is a linear functional $f \in E^*$ such that $f(x_0) = 1$ and $f(x_n) = 0$, $n = 1, 2, \ldots$. But this means that $f(x_n)$ does not converge to $f(x_0)$, contradicting the hypothesis that $x_n \xrightarrow{w} x_0$.
6.3.17
Theorem
Let $A$ be a bounded linear operator with domain $E_x$ and range in $E_y$, both normed linear spaces. If the sequence $\{x_n\} \subset E_x$ converges weakly to $x_0 \in E_x$, then the sequence $\{Ax_n\} \subset E_y$ converges weakly to $Ax_0 \in E_y$.
Proof: Let $\varphi \in E_y^*$ be any functional. Then $\varphi(Ax_n) = f(x_n)$, where $f = \varphi \circ A \in E_x^*$. Analogously, $\varphi(Ax_0) = f(x_0)$.
Since $x_n \xrightarrow{w} x_0$, $f(x_n) \to f(x_0)$, i.e., $\varphi(Ax_n) \to \varphi(Ax_0)$. Since $\varphi$ is an arbitrary functional in $E_y^*$, it follows that $Ax_n \xrightarrow{w} Ax_0$. Thus, every bounded linear operator is not only strongly, but also weakly continuous.
6.3.18
Theorem
If a sequence $\{x_n\}$ in a normed linear space converges weakly to $x_0$, then the norms of the elements of this sequence are bounded.
Proof: We regard the $x_n$ $(n = 1, 2, \ldots)$ as elements of $E^{**}$, the conjugate space of $E^*$; then the weak convergence of $\{x_n\}$ to $x_0$ means that the sequence of functionals $x_n(f)$ converges to $x_0(f)$ for all $f \in E^*$. But by theorem 4.5.7 (the Banach–Steinhaus theorem) the norms $\|x_n\|$ are bounded, which completes the proof.
6.3.19
Remark
If $x_0$ is the weak limit of the sequence $\{x_n\}$, then
$$\|x_0\| \le \liminf_{n \to \infty} \|x_n\|.$$
Moreover, the existence of this finite inferior limit follows from the preceding theorem.
Proof: Let us assume that $\|x_0\| > \liminf_n \|x_n\|$. Then there is a number $\alpha$ such that $\|x_0\| > \alpha > \liminf_n \|x_n\|$. Hence, there is a subsequence $\{x_{n_i}\}$ such that $\|x_0\| > \alpha > \|x_{n_i}\|$. Let us construct a linear functional $f_0$ such that
$$\|f_0\| = 1 \qquad \text{and} \qquad f_0(x_0) = \|x_0\| > \alpha.$$
Then
$$f_0(x_{n_i}) \le \|f_0\|\,\|x_{n_i}\| = \|x_{n_i}\| < \alpha \quad \text{for all } i.$$
Consequently, $f_0(x_n)$ does not converge to $f_0(x_0)$, contradicting the hypothesis that $x_n \xrightarrow{w} x_0$.
Note 6.3.2. The following example shows that the strict inequality $\|x_0\| < \liminf \|x_n\|$ can actually hold.
In the space $L_2([0, 1])$ we consider the functions $x_n(t) = \sqrt{2}\,\sin n\pi t$. Now,
$$\|x_n\|^2 = \langle x_n, x_n\rangle = \int_0^1 x_n(t)^2\,dt = \int_0^1 2\sin^2 n\pi t\,dt = 1.$$
Thus $\lim_{n \to \infty} \|x_n\| = 1$. On the other hand, for every linear functional $f$,
$$f(x_n) = \sqrt{2}\int_0^1 g(t)\sin n\pi t\,dt = \sqrt{2}\,c_n,$$
where the $c_n$ are the Fourier coefficients of $g(t) \in L_2([0, 1])$.
Thus $f(x_n) \to 0$ as $n \to \infty$ for every linear functional $f$, i.e., $x_n \xrightarrow{w} \theta$; consequently $x_0 = \theta$ and
$$\|x_0\| = 0 < 1 = \lim_{n \to \infty} \|x_n\|.$$
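The two computations in the example are easy to confirm numerically. The sketch below (Python; the grid resolution and the particular $g$ are illustrative choices) approximates the integrals by the composite trapezoidal rule: the norms stay at $1$ while the Fourier coefficients tend to $0$.

```python
import numpy as np

t = np.linspace(0.0, 1.0, 20001)

def integrate(y):
    # composite trapezoidal rule on the grid t
    return float(np.sum((y[1:] + y[:-1]) / 2.0) * (t[1] - t[0]))

def x(n):
    # x_n(t) = sqrt(2) sin(n*pi*t), a unit vector of L2([0,1]) for every n
    return np.sqrt(2.0) * np.sin(n * np.pi * t)

norms = [integrate(x(n) ** 2) for n in (1, 5, 25)]   # each approximately 1

# an illustrative g in L2([0,1]); any square-integrable g would do
g = t ** 2
coeffs = [abs(integrate(g * x(n))) for n in (1, 5, 25, 125)]   # -> 0
```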
6.3.20
Theorem
In order that a sequence $\{x_n\}$ of a normed linear space $E$ converges weakly to $x_0$, it is necessary and sufficient that
(i) the sequence $\{\|x_n\|\}$ is bounded and
(ii) $f(x_n) \to f(x_0)$ for every $f$ of a certain set of linear functionals, linear combinations of whose elements are everywhere dense in $E^*$.
Proof: This theorem is a particular case of theorem 6.3.3. This is because the weak convergence of $\{x_n\} \subset E$ to $x_0 \in E$ is equivalent to the convergence of the linear functionals $\{x_n\} \subset E^{**}$ to $x_0 \in E^{**}$.
6.3.21
Weak convergence in certain spaces
(a) Weak convergence in lp .
6.3.22
Theorem
In order that a sequence $\{x_n\}$, $x_n = \{\xi_i^{(n)}\} \in l_p$, converges weakly to $x_0 = \{\xi_i^{(0)}\} \in l_p$, it is necessary and sufficient that
(i) the sequence $\{\|x_n\|\}$ be bounded and
(ii) $\xi_i^{(n)} \to \xi_i^{(0)}$ as $n \to \infty$ for every $i$ (in general, however, non-uniformly).
Proof: We note that the linear combinations of the functionals $f_i = (0, 0, \ldots, 1, \ldots, 0, \ldots)$ (with $1$ in the $i$-th place), $i = 1, 2, \ldots$, are everywhere dense in $l_q = l_p^*$. Hence, by theorem 6.3.20, in order that $x_n \xrightarrow{w} x_0$ it is necessary and sufficient that (i) $\{\|x_n\|\}$ is bounded and (ii) $f_i(x_n) = \xi_i^{(n)} \to f_i(x_0) = \xi_i^{(0)}$ for every $i$.
Thus, weak convergence in $l_p$ is equivalent to coordinatewise convergence together with the boundedness of norms.
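This criterion can be seen at work in $l_2$: the unit vectors $e_n$ have norms bounded by $1$ and every coordinate tends to $0$, so $e_n \xrightarrow{w} \theta$, yet $\|e_n - e_m\| = \sqrt{2}$ for $n \ne m$, so there is no strong convergence. A minimal numerical sketch (Python; the truncation dimension is an assumption needed merely to compute):

```python
import numpy as np

N = 2000   # truncation dimension: a computational stand-in for l2

def e(n):
    v = np.zeros(N)
    v[n] = 1.0
    return v

y = 1.0 / np.arange(1, N + 1)   # a fixed element y of l2

# (i) ||e_n|| = 1 (bounded); (ii) each coordinate of e_n tends to 0,
# hence <e_n, y> = y_n -> 0 for every fixed y ...
inner = [abs(np.dot(e(n), y)) for n in (10, 100, 1000)]
# ... yet ||e_n - e_m|| = sqrt(2) for n != m: no strong convergence.
gap = np.linalg.norm(e(10) - e(1000))
```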
(b) Weak convergence in Hilbert spaces
Let $H$ be a Hilbert space. Any linear functional $f$ defined on $H$ can be expressed in the form $f(x) = \langle x, y\rangle$, where $y \in H$ corresponds to $f$.
Now, $x_n \xrightarrow{w} x_0 \iff f(x_n) \to f(x_0) \iff \langle x_n, y\rangle \to \langle x_0, y\rangle$ for every $y \in H$.
6.3.23
Lemma
In a Hilbert space $H$, if $x_n \to x$ and $y_n \to y$ strongly as $n \to \infty$, then $\langle x_n, y_n\rangle \to \langle x, y\rangle$ as $n \to \infty$, where $\langle \cdot, \cdot\rangle$ denotes the scalar product in $H$.
Proof: $\langle x_n, y_n\rangle - \langle x, y\rangle = \langle x_n, y_n\rangle - \langle x_n, y\rangle + \langle x_n, y\rangle - \langle x, y\rangle = \langle x_n, y_n - y\rangle + \langle x_n - x, y\rangle$, so that
$$|\langle x_n, y_n\rangle - \langle x, y\rangle| \le \|x_n\|\,\|y_n - y\| + \|x_n - x\|\,\|y\| \to 0 \quad \text{as } n \to \infty,$$
because $\|x_n\|$ and $\|y\|$ are bounded.
Note 6.3.3. If, however, $x_n \xrightarrow{w} x$ and $y_n \xrightarrow{w} y$, then in general $\langle x_n, y_n\rangle$ does not converge to $\langle x, y\rangle$.
For example, if $x_n = y_n = e_n$, where $\{e_n\}$ is an arbitrary orthonormal sequence, then $e_n \xrightarrow{w} \theta$ but $\langle e_n, e_n\rangle = \|e_n\|^2 = 1$ does not converge to $\langle\theta, \theta\rangle = 0$.
However, if $x_n \to x$ strongly and $y_n \xrightarrow{w} y$, then $\langle x_n, y_n\rangle \to \langle x, y\rangle$, provided $\{\|y_n\|\}$ is bounded. Let $P = \sup_n \|y_n\|$. Then
$$|\langle x_n, y_n\rangle - \langle x, y\rangle| \le |\langle x_n - x, y_n\rangle| + |\langle x, y_n - y\rangle| \le P\,\|x_n - x\| + |\langle x, y_n - y\rangle| \to 0 \quad \text{as } n \to \infty.$$
Finally, we note that if $x_n \xrightarrow{w} x$ and $\|x_n\| \to \|x\|$, then $x_n \to x$ strongly. This is because
$$\|x_n - x\|^2 = \langle x_n - x, x_n - x\rangle = [\langle x_n, x_n\rangle - \langle x, x\rangle] + [\langle x, x\rangle - \langle x_n, x\rangle] + [\langle x, x\rangle - \langle x, x_n\rangle]$$
$$= \big[\|x_n\|^2 - \|x\|^2\big] + [\langle x, x\rangle - \langle x_n, x\rangle] + [\langle x, x\rangle - \langle x, x_n\rangle] \to 0 \quad \text{as } n \to \infty.$$
Problems
1. Let E be a normed linear space.
(a) If $X$ is a closed convex subset of $E$, $\{x_n\}$ is a sequence in $X$ and $x_n \xrightarrow{w} x$ in $E$, then prove that $x \in X$ (use 6.3.20).
(b) Let $Y$ be a closed subspace of $E$. If $x_n \xrightarrow{w} x$ in $E$, then show that $x_n + Y \xrightarrow{w} x + Y$ in $E/Y$.
2. In a Hilbert space $H$, if $x_n \xrightarrow{w} x$ and $\|x_n\| \to \|x\|$ as $n \to \infty$, show that $\{x_n\}$ converges to $x$ strongly.
3. Let $f_n(x) = \sum_{m=1}^{p_n} C_{n,m}\, x(t_{n,m})$ be a sequence of quadrature formulae for $f(x) = \int_a^b x(t)\,dt$ on the Banach space $E_x = C([a, b])$ ($C_{n,m}$ are the weights and $t_{n,m}$ are the nodes).
Show that $\|f_n\| = \sum_{m=1}^{p_n} |C_{n,m}|$.
Further, if (i) $f_n(x_k) \to f(x_k)$, $k = 0, 1, 2, \ldots$, and (ii) $C_{n,m} \ge 0$ for all $n, m$, show that $f_n(x) \to f(x)$ for all $x \in C([a, b])$.
Hence, show that for the sequence of Gaussian quadrature formulae $G_n(x) \to f(x)$ as $n \to \infty$.
4. Show that a sequence $\{x_n\}$ in a normed linear space is norm bounded whenever it is weakly convergent.
5. Given $\{f_n\} \subset E^*$, where $E$ is a normed linear space, show that if $\{f_n\}$ is weakly convergent to $f \in E^*$, then $\|f\| \le \liminf_{n \to \infty} \|f_n\|$.
6. In $l_1$, show that $x_n \xrightarrow{w} x$ if and only if $\|x_n - x\| \to 0$.
7. A space $E$ is called weakly sequentially complete if the existence of $\lim_{n \to \infty} f(x_n)$ for each $f \in E^*$ implies the existence of $x \in E$ such that $\{x_n\}$ converges weakly to $x$. Show that the space $C([a, b])$ is not weakly sequentially complete.
8. If $x_n \xrightarrow{w} x_0$ in a normed linear space $E$, show that $x_0 \in Y$, where $Y = \overline{\operatorname{span}}\,\{x_n\}$. (Use theorem 5.1.5.)
9. Let $\{x_n\}$ be a sequence in a normed linear space $E$ such that $x_n \xrightarrow{w} x$ in $E$. Prove that there is a sequence $\{y_n\}$ of linear combinations of elements of $\{x_n\}$ which converges strongly to $x$. (Use the Hahn–Banach theorem.)
10. In the space $l_2$, we consider a sequence $\{T_n\}$, where $T_n : l_2 \to l_2$ is defined by
$$T_n x = (0, 0, \ldots, 0, \xi_1, \xi_2, \ldots) \ \ (n \text{ zeros}), \qquad x = \{\xi_i\} \in l_2.$$
Show that
(i) $T_n$ is linear and bounded;
(ii) $\{T_n\}$ is weakly operator convergent to $0$, but not strongly.
(Note that $l_2$ is a Hilbert space.)
11. Let $E$ be a separable Banach space and $M \subset E^*$ a bounded set. Show that every sequence of elements of $M$ contains a subsequence which is weak* convergent to an element of $E^*$.
12. Let $E = C([a, b])$ with the sup norm. Fix $t_0 \in (a, b)$. For each positive integer $n$ with $t_0 + \frac{4}{n} < b$, let
$$x_n(t) = \begin{cases} 0 & \text{if } a \le t \le t_0 \ \text{or}\ t_0 + \frac{4}{n} \le t \le b,\\[2pt] n(t - t_0) & \text{if } t_0 \le t \le t_0 + \frac{2}{n},\\[2pt] 4 - n(t - t_0) & \text{if } t_0 + \frac{2}{n} \le t \le t_0 + \frac{4}{n}. \end{cases}$$
Then show that $x_n \xrightarrow{w} \theta$ in $E$ but $x_n \not\to \theta$ in $E$.
13. Let $E_x$ be a Banach space and $E_y$ be a normed space. Let $\{F_n\}$ be a sequence in $B(E_x, E_y)$ such that for each fixed $x \in E_x$, $\{F_n(x)\}$ is weakly convergent in $E_y$. If $F_n(x) \xrightarrow{w} y$ in $E_y$, let $F(x) = y$. Then show that $F \in B(E_x, E_y)$ and
$$\|F\| \le \liminf_{n \to \infty} \|F_n\| \le \sup_n \|F_n\| < \infty.$$
(Use theorem 4.5.7 and example 14.)
14. Let $E$ be a normed linear space and $\{x_n\}$ be a sequence in $E$. Then show that $\{x_n\}$ is weakly convergent if and only if (i) $\{x_n\}$ is a bounded sequence in $E$ and (ii) there is some $x \in E$ such that $f(x_n) \to f(x)$ for every $f$ in some subset of $E^*$ whose span is dense in $E^*$.
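Problem 3's conclusion for Gaussian quadrature can be illustrated numerically: the Gauss–Legendre weights $C_{n,m}$ are positive and sum to $b - a$, so the norms $\|f_n\|$ stay bounded, and $G_n(x) \to \int_a^b x(t)\,dt$ for continuous integrands. A sketch (Python, using NumPy's Gauss–Legendre nodes; the integrand $\cos t$ is an arbitrary illustrative choice):

```python
import numpy as np

def gauss(f, n, a=-1.0, b=1.0):
    # n-point Gauss-Legendre rule G_n on [a, b]
    t, c = np.polynomial.legendre.leggauss(n)   # nodes t_{n,m}, weights C_{n,m}
    u = 0.5 * (b - a) * t + 0.5 * (b + a)
    return 0.5 * (b - a) * float(np.sum(c * f(u)))

# weights are positive and sum to b - a = 2, so ||f_n|| stays bounded
_, cc = np.polynomial.legendre.leggauss(8)
weight_sum = float(cc.sum())

exact = 2.0 * np.sin(1.0)                       # integral of cos t over [-1, 1]
errs = [abs(gauss(np.cos, n) - exact) for n in (2, 4, 8)]
```

The errors drop rapidly with $n$, consistent with the convergence claimed in the problem.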
6.4
Reflexivity
In 5.6.6 the notion of the canonical or natural embedding of a normed linear space $E$ into its second conjugate $E^{**}$ was introduced. We have discussed when a normed linear space is said to be reflexive, along with some relevant theorems. Since the conjugate spaces $E^*$, $E^{**}$, $E^{***}$ often appear in discussions on reflexivity of spaces, there may be some relationship between weak convergence and reflexivity. In what follows, some results which were not discussed in 5.6 are presented.
6.4.1
Theorem
Let $E$ be a normed linear space and $\{f_1, \ldots, f_m\}$ be a linearly independent subset of $E^*$. Then there are $e_1, e_2, \ldots, e_m$ in $E$ such that $f_j(e_i) = \delta_{ij}$ for $i, j = 1, 2, \ldots, m$.
Proof: We prove by induction on $m$. If $m = 1$, then since $\{f_1\}$ is linearly independent, $f_1 \ne 0$, so there is $a_0 \in E$ with $f_1(a_0) \ne 0$. Let $e_1 = \dfrac{a_0}{f_1(a_0)}$; hence $f_1(e_1) = 1$.
Next, let us assume that the result is true for $m = k$. Let $\{f_1, f_2, \ldots, f_{k+1}\}$ be a linearly independent subset of $E^*$. Since $\{f_1, f_2, \ldots, f_k\}$ is linearly independent, there are $a_1, a_2, \ldots, a_k$ in $E$ such that $f_j(a_i) = \delta_{ij}$ for $1 \le i, j \le k$. We claim that there is some $a_0 \in E$ such that $f_j(a_0) = 0$ for $1 \le j \le k$ but $f_{k+1}(a_0) \ne 0$. For $x \in E$, let
$$a = f_1(x)a_1 + f_2(x)a_2 + \cdots + f_k(x)a_k.$$
Then $x - a \in \bigcap_{j=1}^{k} N(f_j)$. If $\bigcap_{j=1}^{k} N(f_j) \subseteq N(f_{k+1})$, then
$$f_{k+1}(x) = f_{k+1}(x - a) + f_{k+1}(a) = f_1(x)f_{k+1}(a_1) + \cdots + f_k(x)f_{k+1}(a_k),$$
so that $f_{k+1} = f_{k+1}(a_1)f_1 + \cdots + f_{k+1}(a_k)f_k \in \operatorname{span}\{f_1, f_2, \ldots, f_k\}$, violating the linear independence of $\{f_1, \ldots, f_{k+1}\}$. In the above, $N(f_j)$ stands for the null space of $f_j$. Hence our claim is justified.
Now let
$$e_{k+1} = \frac{a_0}{f_{k+1}(a_0)}, \qquad \text{and for } i = 1, 2, \ldots, k \ \text{let} \ e_i = a_i - f_{k+1}(a_i)\,e_{k+1}.$$
Then
$$f_j(e_i) = f_j(a_i) - f_{k+1}(a_i)f_j(e_{k+1}) = f_j(a_i) - f_{k+1}(a_i)\,\frac{f_j(a_0)}{f_{k+1}(a_0)} = \begin{cases} 1 & \text{for } j = i = 1, 2, \ldots, k,\\ 0 & \text{for } j \ne i, \end{cases}$$
since $f_j(a_0) = 0$ for $j = 1, 2, \ldots, k$. Also
$$f_j(e_{k+1}) = f_j\Big(\frac{a_0}{f_{k+1}(a_0)}\Big) = \frac{f_j(a_0)}{f_{k+1}(a_0)} = 0 \quad \text{for } j = 1, 2, \ldots, k,$$
and $f_{k+1}(e_{k+1}) = 1$. Hence
$$f_j(e_i) = \delta_{ij}, \qquad i, j = 1, 2, \ldots, k + 1.$$
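In a finite dimensional space, theorem 6.4.1 reduces to matrix inversion: if the functionals $f_j$ act as the rows of a matrix $A$, then $f_j(e_i) = \delta_{ij}$ says exactly that the vectors $e_i$ are the columns of $A^{-1}$. A minimal sketch (Python; the matrix is an arbitrary illustrative choice):

```python
import numpy as np

# Identify E with R^3; the functional f_j acts as f_j(x) = A[j] @ x.
# A linearly independent family of functionals (the rows of A):
A = np.array([[1.0, 2.0, 0.0],
              [0.0, 1.0, 1.0],
              [1.0, 0.0, 1.0]])

# f_j(e_i) = delta_ij  <=>  A @ E = I, so the e_i are the columns of A^{-1}.
E = np.linalg.inv(A)

delta = A @ E   # should be the identity matrix
```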
6.4.2
Theorem (Helly, 1912) [33]
Let E be a normed linear space.
(a) Consider $f_1, f_2, \ldots, f_m$ in $E^*$, $k_1, k_2, \ldots, k_m$ in $\mathbb{R}$ (or $\mathbb{C}$), and $\gamma \ge 0$. Then, for every $\varepsilon > 0$, there is some $x_\varepsilon \in E$ such that $f_j(x_\varepsilon) = k_j$ for each $j = 1, 2, \ldots, m$ and $\|x_\varepsilon\| < \gamma + \varepsilon$ if and only if
$$\Big|\sum_{j=1}^{m} h_j k_j\Big| \le \gamma\,\Big\|\sum_{j=1}^{m} h_j f_j\Big\|$$
for all $h_1, h_2, \ldots, h_m$ in $\mathbb{R}$ (or $\mathbb{C}$).
(b) Let $S$ be a finite dimensional subspace of $E^*$ and $F_x \in E^{**}$. If $\varepsilon > 0$, then there is some $x_\varepsilon \in E$ such that
$$F_x|_S = \Phi(x_\varepsilon)|_S \qquad \text{and} \qquad \|x_\varepsilon\| < \|F_x\| + \varepsilon,$$
where $\Phi : E \to E^{**}$ denotes the canonical embedding.
Proof: (a) Suppose that for every $\varepsilon > 0$ there is some $x_\varepsilon \in E$ such that $f_j(x_\varepsilon) = k_j$ for each $j = 1, \ldots, m$ and $\|x_\varepsilon\| < \gamma + \varepsilon$. Let us fix $h_1, h_2, \ldots, h_m$ in $\mathbb{R}$ (or $\mathbb{C}$). Then
$$\Big|\sum_{j=1}^{m} h_j k_j\Big| = \Big|\sum_{j=1}^{m} h_j f_j(x_\varepsilon)\Big| = \Big|\Big(\sum_{j=1}^{m} h_j f_j\Big)(x_\varepsilon)\Big| \le \Big\|\sum_{j=1}^{m} h_j f_j\Big\|\,\|x_\varepsilon\| < (\gamma + \varepsilon)\,\Big\|\sum_{j=1}^{m} h_j f_j\Big\|.$$
As this is true for every $\varepsilon > 0$, we conclude that
$$\Big|\sum_{j=1}^{m} h_j k_j\Big| \le \gamma\,\Big\|\sum_{j=1}^{m} h_j f_j\Big\|.$$
Conversely, suppose that for all $h_1, h_2, \ldots, h_m$ in $\mathbb{R}$ (or $\mathbb{C}$),
$$\Big|\sum_{j=1}^{m} h_j k_j\Big| \le \gamma\,\Big\|\sum_{j=1}^{m} h_j f_j\Big\|.$$
It may be noted that $\{f_1, f_2, \ldots, f_m\}$ can be assumed to be a linearly independent set. If that is not so, let $f_1, f_2, \ldots, f_n$ with $n \le m$ be a maximal linearly independent subset of $\{f_1, f_2, \ldots, f_m\}$. Given $\varepsilon > 0$, let $x_\varepsilon \in E$ be such that $\|x_\varepsilon\| < \gamma + \varepsilon$ and $f_j(x_\varepsilon) = k_j$ for $j = 1, 2, \ldots, n$. If $n < l \le m$, then $f_l = h_1 f_1 + \cdots + h_n f_n$ for some $h_1, h_2, \ldots, h_n$ in $\mathbb{R}$ (or $\mathbb{C}$). Hence
$$f_l(x_\varepsilon) = h_1 f_1(x_\varepsilon) + \cdots + h_n f_n(x_\varepsilon) = h_1 k_1 + \cdots + h_n k_n.$$
But
$$\Big|k_l - \sum_{j=1}^{n} h_j k_j\Big| \le \gamma\,\Big\|f_l - \sum_{j=1}^{n} h_j f_j\Big\| = 0,$$
so that $f_l(x_\varepsilon) = k_l$ as well.
Consider the map $\mathcal{F} : E \to \mathbb{R}^m$ (or $\mathbb{C}^m$) given by $\mathcal{F}(x) = (f_1(x), \ldots, f_m(x))$. Clearly, $\mathcal{F}$ is a linear map. Next, we show that it is a surjective (onto) mapping. To this end consider $(h_1, h_2, \ldots, h_m) \in \mathbb{R}^m$ (or $\mathbb{C}^m$). Since $f_1, f_2, \ldots, f_m$ are linearly independent, it follows from theorem 6.4.1 that there exist $e_1, e_2, \ldots, e_m$ in $E$ such that $f_j(e_i) = \delta_{ij}$, $1 \le i, j \le m$. If we take $x = h_1 e_1 + \cdots + h_m e_m$, then it follows that $\mathcal{F}(x) = (h_1, h_2, \ldots, h_m)$.
We next want to show that $\mathcal{F}$ maps each open subset of $E$ onto an open subset of $\mathbb{R}^m$ (or $\mathbb{C}^m$). By surjectivity we can find a nonzero vector $a$ in $E$ such that $\mathcal{F}(a) = (1, 1, \ldots, 1)$. Let $P$ be an open set in $E$ and $x \in P$; then there exists an open ball $U(x, r) \subseteq P$ with $r > 0$. For every scalar $k$ with $0 < |k| < \dfrac{r}{\|a\|}$ we have $x - ka \in U(x, r) \subseteq P$, and therefore
$$\mathcal{F}(x - ka) = \mathcal{F}(x) - k\mathcal{F}(a) = \mathcal{F}(x) - k(1, 1, \ldots, 1) \in \mathcal{F}(P).$$
Thus $\mathcal{F}$ maps each open subset of $E$ onto an open subset of $\mathbb{R}^m$ (or $\mathbb{C}^m$).
Let $\varepsilon > 0$ and let us consider $U = \{x \in E : \|x\| < \gamma + \varepsilon\}$. We want to show that there is some $x_\varepsilon \in U$ with $\mathcal{F}(x_\varepsilon) = (k_1, k_2, \ldots, k_m)$. If that be not the case, then $(k_1, \ldots, k_m)$ does not belong to the open convex set $\mathcal{F}(U)$. By the Hahn–Banach separation theorem (5.2.10) for $\mathbb{R}^m$ (or $\mathbb{C}^m$) there is a continuous linear functional $g$ on $\mathbb{R}^m$ (or $\mathbb{C}^m$) such that
$$\operatorname{Re} g\big((f_1(x), \ldots, f_m(x))\big) \le \operatorname{Re} g\big((k_1, \ldots, k_m)\big)$$
for all $x \in U$. By 5.4.1, there is some $(h_1, h_2, \ldots, h_m) \in \mathbb{R}^m$ (or $\mathbb{C}^m$) such that
$$g(c_1, c_2, \ldots, c_m) = c_1 h_1 + \cdots + c_m h_m$$
for all $(c_1, c_2, \ldots, c_m) \in \mathbb{R}^m$ (or $\mathbb{C}^m$). Hence
$$\operatorname{Re}\,[h_1 f_1(x) + \cdots + h_m f_m(x)] \le \operatorname{Re}\,(h_1 k_1 + \cdots + h_m k_m)$$
for all $x \in U$. If $h_1 f_1(x) + \cdots + h_m f_m(x) = r e^{i\theta}$ with $r \ge 0$ and $-\pi < \theta \le \pi$, then by considering $x e^{-i\theta}$ in place of $x$, it follows that
$$|h_1 f_1(x) + \cdots + h_m f_m(x)| \le \operatorname{Re}\,(h_1 k_1 + \cdots + h_m k_m)$$
for all $x \in U$. But
$$\sup\Big\{\Big|\sum_{j=1}^{m} h_j f_j(x)\Big| : x \in U\Big\} = (\gamma + \varepsilon)\,\Big\|\sum_{j=1}^{m} h_j f_j\Big\|.$$
Hence
$$(\gamma + \varepsilon)\,\Big\|\sum_{j=1}^{m} h_j f_j\Big\| \le \operatorname{Re} \sum_{j=1}^{m} h_j k_j \le \Big|\sum_{j=1}^{m} h_j k_j\Big| \le \gamma\,\Big\|\sum_{j=1}^{m} h_j f_j\Big\|.$$
This contradiction shows that there must be some $x_\varepsilon \in U$ with $\mathcal{F}(x_\varepsilon) = (k_1, k_2, \ldots, k_m)$, as wanted.
(b) Let $\{f_1, f_2, \ldots, f_m\}$ be a basis for the finite dimensional subspace $S$ of $E^*$ and let $k_j = F_x(f_j)$, $j = 1, 2, \ldots, m$. Then for all $h_1, h_2, \ldots, h_m$ in $\mathbb{R}$ (or $\mathbb{C}$), we have
$$\Big|\sum_{j=1}^{m} h_j k_j\Big| = \Big|\sum_{j=1}^{m} h_j F_x(f_j)\Big| = \Big|F_x\Big(\sum_{j=1}^{m} h_j f_j\Big)\Big| \le \|F_x\|\,\Big\|\sum_{j=1}^{m} h_j f_j\Big\|.$$
Letting $\gamma = \|F_x\|$ in (a) above, we see that for every $\varepsilon > 0$ there is some $x_\varepsilon \in E$ such that $\|x_\varepsilon\| < \|F_x\| + \varepsilon$ and
$$\Phi(x_\varepsilon)(f_j) = f_j(x_\varepsilon) = k_j = F_x(f_j) \quad \text{for } j = 1, 2, \ldots, m,$$
i.e., $\Phi(x_\varepsilon)|_S = F_x|_S$, as desired.
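For the Euclidean norm on a finite dimensional space, the condition in (a) is the familiar duality between least-norm interpolation and a dual bound: with $\gamma = \min\{\|x\| : f_j(x) = k_j,\ j = 1, \ldots, m\}$, the inequality $\big|\sum_j h_j k_j\big| \le \gamma \big\|\sum_j h_j f_j\big\|$ holds for every choice of $h_j$, since $\sum_j h_j k_j = \big(\sum_j h_j f_j\big)(x)$ for any interpolant $x$. A numerical sketch (Python; the functionals and the values $k_j$ are random illustrative data):

```python
import numpy as np

rng = np.random.default_rng(0)

# Two functionals on R^4, acting as f_j(x) = A[j] @ x, and prescribed values k_j.
A = rng.standard_normal((2, 4))
k = np.array([1.0, -2.0])

# Least-norm interpolant for the Euclidean norm: x* = A^+ k; gamma = ||x*||.
x_star = np.linalg.pinv(A) @ k
gamma = np.linalg.norm(x_star)

# Helly's inequality |sum h_j k_j| <= gamma * ||sum h_j f_j|| for every h:
ok = all(
    abs(h @ k) <= gamma * np.linalg.norm(A.T @ h) + 1e-9
    for h in rng.standard_normal((1000, 2))
)
interp = A @ x_star   # should reproduce the prescribed values k
```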
6.4.3
Remark
(i) It may be noted that if we restrict ourselves to a finite dimensional subspace of $E^*$, then we are close to reflexivity.
The relationship between reexivity and weak convergence is
demonstrated in the following theorem.
6.4.4
Theorem (Eberlein, 1947)
Let $E$ be a normed linear space. Then $E$ is reflexive if and only if every bounded sequence in $E$ has a weakly convergent subsequence.
Proof: For the proof see Limaye [33].
6.4.5
Uniform convexity
We next explore a geometric condition which implies reflexivity. In 2.1.12 we have seen that the closed unit ball of a normed linear space $E$ is a convex set in $E$. In the case of strict convexity of $E$, the midpoint of the segment joining two points on the unit sphere of $E$ does not lie on the
unit sphere of $E$. Next comes a concept in $E$ which implies reflexivity of $E$.
A normed space $E$ is said to be uniformly convex if, for every $\varepsilon > 0$, there exists some $\delta > 0$ such that for all $x$ and $y$ in $E$ with $\|x\| \le 1$, $\|y\| \le 1$ and $\|x - y\| \ge \varepsilon$, we have $\|x + y\| \le 2(1 - \delta)$.
This idea admits of a geometrical interpretation as follows: given $\varepsilon > 0$, there is some $\delta > 0$ such that if $x$ and $y$ are in the closed unit ball of $E$, and if they are at least $\varepsilon$ apart, then their midpoint lies at a distance of at least $\delta$ from the unit sphere. Here $\delta$ may depend on $\varepsilon$. In what follows the relationship between a strictly convex and a uniformly convex space is discussed.
6.4.6
Definition
A normed space $E$ is said to be strictly convex if, for $x \ne y$ in $E$ with $\|x\| = 1 = \|y\|$, we have $\|x + y\| < 2$.
6.4.7
Lemma
A uniformly convex space is strictly convex. This is evident from the definition itself.
6.4.8
Lemma
If $E$ is finite dimensional and strictly convex, then $E$ is uniformly convex.
Proof: For $\varepsilon > 0$, let
$$\Delta = \{(x, y) \in E \times E : \|x\| \le 1,\ \|y\| \le 1,\ \|x - y\| \ge \varepsilon\}.$$
Then $\Delta$ is a closed and bounded subset of $E \times E$. We next show that $\Delta$ is compact, i.e., that every sequence in $\Delta$ has a subsequence converging in $\Delta$. Let $u_n = (x^n, y^n)$ be a sequence in $\Delta$, where $(\cdot, \cdot)$ denotes the cartesian product. Let $\{e_1, \ldots, e_m\}$ be a basis of $E$. Then we can write $x^n = \sum_{j=1}^{m} p_j^n e_j$ and $y^n = \sum_{j=1}^{m} q_j^n e_j$, where $p_j^n$ and $q_j^n$ are scalars for $j = 1, 2, \ldots, m$. Since $\{(x^n, y^n)\}$ is a bounded sequence in $E \times E$, $\{p_j^n\}$ and $\{q_j^n\}$ are bounded for $j = 1, 2, \ldots, m$.
By the Bolzano–Weierstrass theorem (Cor. 1.6.19), $\{p_j^n\}$ and $\{q_j^n\}$ have convergent subsequences $\{p_j^{n_k}\}$ and $\{q_j^{n_k}\}$ for each $j = 1, 2, \ldots, m$.
Hence $\{u_{n_k}\} = \{(x^{n_k}, y^{n_k})\}$ converges to some element $u \in E \times E$ as $k \to \infty$. Since $u_{n_k} \in \Delta$ and $\Delta$ is closed, $u \in \Delta$. Therefore, if $x^{n_k} \to x$ and $y^{n_k} \to y$ as $k \to \infty$, we have $u = (x, y) \in \Delta$. Thus $\Delta$ is compact.
For $(x, y) \in \Delta$, let
$$f(x, y) = 2 - \|x + y\|.$$
Now, $f$ is a continuous and strictly positive function on $\Delta$. Hence there is some $\delta > 0$ such that $f(x, y) \ge 2\delta$ for all $(x, y) \in \Delta$. This implies the uniform convexity of $E$.
6.4.9
Remark
A strictly convex normed linear space need not in general be uniformly convex. Let $E = c_{00}$, the space of numerical sequences with a finite number of nonzero terms. Let
$$E_n = \{x : x_j = 0 \ \text{for all } j > n\}.$$
For $x \in E_1$, let $\|x\| = |x_1|$. Let us assume that $\|x\|$ is defined for all $x \in E_{n-1}$. If $x \in E_n$, then $x = z_{n-1} + x_n e_n$ for some $z_{n-1} \in E_{n-1}$. Define
$$\|x\| = \big(\|z_{n-1}\|^n + |x_n|^n\big)^{1/n}.$$
By making an appeal to induction we can verify that $\|\cdot\|$ is a strictly convex norm on $E$.
For $n = 1, 2, \ldots$ let
$$x_n = \frac{e_1 + e_n}{2^{1/n}} \qquad \text{and} \qquad z_n = \frac{e_1 - e_n}{2^{1/n}}.$$
Then $\|x_n\| = 1 = \|z_n\|$ and
$$\|x_n + z_n\| = 2^{(n-1)/n} = \|x_n - z_n\|.$$
Thus $\|x_n - z_n\| \ge 1$ for all $n$, but $\|x_n + z_n\| \to 2$ as $n \to \infty$. Hence, $E$ is not uniformly convex.
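The recursively defined norm on $c_{00}$ and the computations $\|x_n\| = 1 = \|z_n\|$ and $\|x_n + z_n\| = 2^{(n-1)/n}$ can be verified directly. A sketch (Python; elements of $c_{00}$ are represented as finite lists, which is exactly what the space allows):

```python
def norm(x):
    # the recursively defined norm on c_00 (x given as a finite list):
    # ||x|| = |x_1| on E_1, and ||z + x_n e_n|| = (||z||^n + |x_n|^n)^(1/n) on E_n
    if len(x) == 1:
        return abs(x[0])
    n = len(x)
    return (norm(x[:-1]) ** n + abs(x[-1]) ** n) ** (1.0 / n)

def pair(n):
    # x_n = (e_1 + e_n) / 2^(1/n),  z_n = (e_1 - e_n) / 2^(1/n)
    s = 2.0 ** (-1.0 / n)
    return ([s] + [0.0] * (n - 2) + [s],
            [s] + [0.0] * (n - 2) + [-s])

n = 50
x, z = pair(n)
unit = (norm(x), norm(z))                    # both equal 1
mid = norm([a + b for a, b in zip(x, z)])    # = 2^((n-1)/n), close to 2
gap = norm([a - b for a, b in zip(x, z)])    # = 2^((n-1)/n) >= 1
```

So the two unit vectors stay at least $1$ apart while $\|x_n + z_n\|$ creeps up to $2$, which is precisely the failure of uniform convexity.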
6.4.10
Remark
It is noted that the normed spaces $l_1$, $l_\infty$ and $C([a, b])$ are not strictly convex. It was proved by Clarkson [12] that the normed spaces $l_p$ and $L_p([a, b])$ with $1 < p < \infty$ are uniformly convex.
6.4.11
Lemma
Let $E$ be a uniformly convex normed linear space and $\{x_n\}$ be a sequence in $E$ such that $\|x_n\| \to 1$ and $\|x_n + x_m\| \to 2$ as $n, m \to \infty$. Then $\lim_{n, m \to \infty} \|x_n - x_m\| = 0$; that is, $\{x_n\}$ is a Cauchy sequence.
Proof: If $\{x_n\}$ is not Cauchy in $E$, there is some $\varepsilon > 0$ such that for every positive integer $n_0$ there are $n, m \ge n_0$ with
$$\|x_n - x_m\| \ge \varepsilon.$$
This implies that for a given $x \in E$ and a positive integer $m_0$, there are $n, m > m_0$ with
$$\|x_n - x\| + \|x_m - x\| \ge \|x_n - x_m\| \ge \varepsilon,$$
so that for some $m > m_0$ we have
$$\|x_m - x\| \ge \frac{\varepsilon}{2}.$$
Since $\|x_n\| \to 1$ as $n \to \infty$, we see that for each $k = 1, 2, \ldots$ there is a positive integer $n_k$ such that
$$\|x_n\| \le 1 + \frac{1}{k} \quad \text{for all } n \ge n_k.$$
Choose $m_1 = n_1$. Then $\|x_{m_1}\| \le 1 + 1 = 2$. Let $m_0 = \max\{m_1, n_2\}$ and $x = x_{m_1}$. We see that there is some $m_2 > m_0$ with
$$\|x_{m_2} - x_{m_1}\| \ge \frac{\varepsilon}{2},$$
and we note that $\|x_{m_2}\| \le 1 + \frac{1}{2}$, since $m_2 > m_0 \ge n_2$. Thus we can find a subsequence $\{x_{m_k}\}$ of $\{x_m\}$ such that for $k = 1, 2, \ldots$,
$$\|x_{m_{k+1}} - x_{m_k}\| \ge \frac{\varepsilon}{2} \qquad \text{and} \qquad \|x_{m_k}\| \le 1 + \frac{1}{k}.$$
Let us put $y_k = \dfrac{x_{m_k}}{1 + \frac{1}{k}}$ for $k = 1, 2, \ldots$; then
$$\|y_k\| \le 1, \qquad \|y_{k+1}\| \le 1, \qquad \text{and} \qquad \|y_{k+1} - y_k\| \ge \varepsilon' \ (\text{say})$$
for some $\varepsilon' > 0$ and all sufficiently large $k$. By the uniform convexity of $E$, there is a $\delta > 0$ such that $\|x + y\| \le 2(1 - \delta)$ whenever $x$ and $y$ are in $E$, $\|x\| \le 1$, $\|y\| \le 1$ and $\|x - y\| \ge \varepsilon'$. Hence
$$\|y_{k+1} + y_k\| \le 2(1 - \delta).$$
Thus
$$\limsup_{k \to \infty} \|y_{k+1} + y_k\| \le 2(1 - \delta) < 2, \qquad \text{i.e.,} \qquad \limsup_{k \to \infty} \|x_{m_{k+1}} + x_{m_k}\| < 2.$$
The above contradicts the fact that
$$\|x_m + x_n\| \to 2 \quad \text{as } m, n \to \infty.$$
Hence, $\{x_n\}$ is a Cauchy sequence in $E$.
6.4.12
Theorem (Milman, 1938) [33]
Let $E$ be a Banach space which is uniformly convex in some equivalent norm. Then $E$ is reflexive.
Proof: We first show that a reflexive normed space remains reflexive in an equivalent norm.
From theorem 4.4.2 we can conclude that the space of bounded linear functionals defined on a normed linear space $E$ is complete and hence a Banach space. Thus the dual $E^*$, and in turn the second dual $E^{**}$, of the normed linear space $E$ are Banach spaces. Since $E$ is reflexive, $E$ is isometrically isomorphic to $E^{**}$ and hence $E$ is a Banach space. Also, in any equivalent norm on $E$, the dual $E^*$ and the second dual $E^{**}$ remain unchanged, so that $E$ remains reflexive.
Hence, we can assume without loss of generality that $E$ is a uniformly convex Banach space in the given norm $\|\cdot\|$ on $E$.
Let $F_x \in E^{**}$. Without loss of generality we assume that $\|F_x\| = 1$. We are to show that there is some $x \in E$ with $\Phi(x) = F_x$, $\Phi : E \to E^{**}$ being the canonical embedding. First, we find a sequence $\{f_n\}$ in $E^*$ such that $\|f_n\| = 1$ and $F_x(f_n) > 1 - \frac{1}{n}$ for $n = 1, 2, \ldots$.
For a fixed $n$, let $S_n = \operatorname{span}\{f_1, f_2, \ldots, f_n\}$. We put $\varepsilon = \frac{1}{n}$ in Helly's theorem (6.4.2) and find $x_n \in E$ such that
$$F_x|_{S_n} = \Phi(x_n)|_{S_n} \qquad \text{and} \qquad \|x_n\| < 1 + \frac{1}{n}.$$
Then for $n = 1, 2, \ldots$ and $m = 1, 2, \ldots, n$,
$$F_x(f_m) = \Phi(x_n)(f_m) = f_m(x_n),$$
so that
$$1 - \frac{1}{n} < F_x(f_n) = f_n(x_n) \le \|x_n\| < 1 + \frac{1}{n}$$
and, for $m \ge n$,
$$2 - \frac{2}{n} < 2F_x(f_n) = f_n(x_n) + f_n(x_m) \le \|x_m + x_n\| \le 2 + \frac{1}{n} + \frac{1}{m}.$$
Then we have
$$\lim_{n \to \infty} \|x_n\| = 1 \qquad \text{and} \qquad \lim_{n, m \to \infty} \|x_n + x_m\| = 2.$$
By lemma 6.4.11, $\{x_n\}$ is a Cauchy sequence in $E$. Since $E$ is a Banach space, let $x_n \to x$ in $E$. Then $\|x\| = 1$. Also, since $F_x(f_m) = f_m(x_n)$ for all $n \ge m$, the continuity of $f_m$ shows that
$$F_x(f_m) = f_m(x), \quad m = 1, 2, \ldots.$$
Let us next consider $f \in E^*$. Replacing $S_n$ by the span of $\{f, f_1, f_2, \ldots, f_n\}$, we find some $z \in E$ such that $\|z\| = 1$,
$$F_x(f) = f(z) \qquad \text{and} \qquad F_x(f_m) = f_m(z).$$
We want to show that $x = z$. Now
$$\|x + z\| \ge f_m(x + z) = 2F_x(f_m) > 2 - \frac{2}{m}$$
for all $m = 1, 2, \ldots$, so that $\|x + z\| \ge 2$, and hence $\|x + z\| = 2$.
Since $\|x\| = 1$, $\|z\| = 1$, the strict convexity of $E$ implies that $x = z$ and $F_x(f) = f(z) = f(x)$ for all $f \in E^*$, i.e., $F_x = \Phi(x)$. Hence, $E$ is reflexive.
6.4.13
Remark
The converse of Milman's theorem is false [see Limaye [33]].
Problems
1. Let $E$ be a reflexive normed linear space. Then show that $E$ is strictly convex (resp. smooth) if and only if $E^*$ is smooth (resp. strictly convex).
(Hint: A normed linear space $E$ is said to be smooth if, for every $x_0 \in E$ with $\|x_0\| = 1$, there is a unique supporting hyperplane [see 4.3.7] for $B(\theta, 1)$ at $x_0$.)
2. [Weak Schauder basis. Let $E$ be a normed linear space. A countable subset $\{a_1, a_2, \ldots\}$ of $E$ is called a weak Schauder basis for $E$ if $\|a_i\| = 1$ for each $i$ and for every $x \in E$ there are unique $\alpha_i \in \mathbb{R}$ (or $\mathbb{C}$), $i = 1, 2, \ldots$, such that $\sum_{i=1}^{n} \alpha_i a_i \xrightarrow{w} x$ as $n \to \infty$.
Weak* Schauder basis. A countable subset $\{f_1, f_2, \ldots\}$ of $E^*$ is called a weak* Schauder basis if $\|f_i\| = 1$ for all $i$ and for every $g \in E^*$ there are unique $\alpha_i \in \mathbb{R}$ (or $\mathbb{C}$), $i = 1, 2, \ldots$, such that $\sum_{i=1}^{n} \alpha_i f_i \xrightarrow{w^*} g$ as $n \to \infty$.]
Let $E$ be a reflexive normed linear space and $\{a_1, a_2, \ldots\}$ be a Schauder basis for $E$ with coefficient functionals $\{g_1, g_2, \ldots\}$. If $f_n = g_n / \|g_n\|$, $n = 1, 2, \ldots$, then show that $\{f_1, f_2, \ldots\}$ is a Schauder basis for $E^*$ with coefficient functionals $(\|g_1\| F_{a_1}, \|g_2\| F_{a_2}, \ldots)$.
3. Let $E$ be a separable normed linear space. Let $\{x_n\}$ be a dense subset of $\{x \in E : \|x\| = 1\}$.
(a) Then show that there is a sequence $\{f_n\}$ in $E^*$ such that $\|f_n\| = 1$ for all $n$ and for every $x \ne \theta$ in $E$, $f_n(x) \ne 0$ for some $n$; and that if, for $x \in E$,
$$\|x\|_0 = \Big(\sum_{n=1}^{\infty} \frac{|f_n(x)|^2}{2^n}\Big)^{1/2},$$
then $\|\cdot\|_0$ is a norm on $E$, in which $E$ is strictly convex, and $\|x\|_0 \le \|x\|$ for all $x \in E$.
(b) There is an equivalent norm on $E$ in which $E$ is strictly convex. (Hint: consider $\|x\|_1 = \|x\| + \|x\|_0$.)
(c) Show that $l_1$ is strictly convex but not reflexive in some norm which is equivalent to the norm $\|\cdot\|_1$.
4. Let $E$ be a uniformly convex normed linear space, $x \in E$, and $\{x_n\}$ be a sequence in $E$.
(a) If $\|x\| = 1$, $\|x_n\| \le 1$ and $\|x_n + x\| \to 2$, then show that $\|x_n - x\| \to 0$.
(b) Show that $x_n \to x$ in $E$ if and only if $x_n \xrightarrow{w} x$ in $E$ and $\limsup_{n \to \infty} \|x_n\| \le \|x\|$.
6.5
Best Approximation in Reflexive Spaces
The problem of best approximation of functions concerns finding a suitable combination of known functions which is closest to a given function. P.L. Chebyshev [43] was the first to address this problem. Let $E$ be a normed linear space and let $x \in E$ be an arbitrary element. We want to approximate $x$ by a finite linear combination of linearly independent elements $x_1, x_2, \ldots, x_n \in E$.
6.5.1
Lemma
If $\sum_{i=1}^{n} \alpha_i^2$ increases indefinitely, then
$$\varphi(\alpha_1, \alpha_2, \ldots, \alpha_n) = \|x - \alpha_1 x_1 - \alpha_2 x_2 - \cdots - \alpha_n x_n\| \to \infty.$$
Proof: We have
$$\varphi(\alpha_1, \alpha_2, \ldots, \alpha_n) \ge \|\alpha_1 x_1 + \cdots + \alpha_n x_n\| - \|x\|.$$
The continuous function
$$\psi(\alpha_1, \alpha_2, \ldots, \alpha_n) = \|\alpha_1 x_1 + \alpha_2 x_2 + \cdots + \alpha_n x_n\|$$
of the parameters $\alpha_1, \alpha_2, \ldots, \alpha_n$ assumes its minimum $m$ on the unit sphere
$$S = \Big\{(\alpha_1, \alpha_2, \ldots, \alpha_n) \in E_n : \sum_{i=1}^{n} \alpha_i^2 = 1\Big\}$$
in $E_n$, where $E_n$ denotes the $n$-dimensional Euclidean space. Since the unit sphere in $E_n$ is compact, the continuous function $\psi$ assumes its minimum on $S$. Since $x_1, x_2, \ldots, x_n$ are linearly independent, the value of $\psi$ on $S$ is always positive. Therefore $m > 0$.
Given an arbitrary $K > 0$,
$$\varphi(\alpha_1, \ldots, \alpha_n) \ge \|\alpha_1 x_1 + \cdots + \alpha_n x_n\| - \|x\| = \Big(\sum_{i=1}^{n} \alpha_i^2\Big)^{1/2}\Big\|\frac{\alpha_1}{\sqrt{\sum_i \alpha_i^2}}\,x_1 + \cdots + \frac{\alpha_n}{\sqrt{\sum_i \alpha_i^2}}\,x_n\Big\| - \|x\| \ge \Big(\sum_{i=1}^{n} \alpha_i^2\Big)^{1/2} m - \|x\|,$$
since $\Big(\frac{\alpha_1}{\sqrt{\sum_i \alpha_i^2}}, \ldots, \frac{\alpha_n}{\sqrt{\sum_i \alpha_i^2}}\Big)$ lies on the unit sphere. Thus if
$$\Big(\sum_{i=1}^{n} \alpha_i^2\Big)^{1/2} > \frac{1}{m}\big(K + \|x\|\big), \qquad \text{then} \qquad \varphi(\alpha_1, \alpha_2, \ldots, \alpha_n) > K,$$
which proves the lemma.
6.5.2
Theorem
There exist real numbers $\alpha_1^{(0)}, \alpha_2^{(0)}, \ldots, \alpha_n^{(0)}$ such that $\varphi(\alpha_1, \ldots, \alpha_n) = \|x - \alpha_1 x_1 - \alpha_2 x_2 - \cdots - \alpha_n x_n\|$ assumes its minimum for $\alpha_1 = \alpha_1^{(0)}$, $\alpha_2 = \alpha_2^{(0)}$, \ldots, $\alpha_n = \alpha_n^{(0)}$.
Proof: If $x$ depends linearly on $x_1, x_2, \ldots, x_n$, then the theorem is true immediately. Let us assume that $x$ does not lie in the subspace spanned by $x_1, x_2, \ldots, x_n$.
We first show that $\varphi(\alpha_1, \alpha_2, \ldots, \alpha_n)$ is a continuous function of its arguments. Now
$$|\varphi(\alpha_1, \ldots, \alpha_n) - \varphi(\beta_1, \ldots, \beta_n)| = \Big|\,\Big\|x - \sum_{i=1}^{n} \alpha_i x_i\Big\| - \Big\|x - \sum_{i=1}^{n} \beta_i x_i\Big\|\,\Big| \le \Big\|\sum_{i=1}^{n} (\alpha_i - \beta_i) x_i\Big\| \le \max_{1 \le i \le n} |\alpha_i - \beta_i| \sum_{i=1}^{n} \|x_i\|.$$
If
$$S_\rho = \Big\{(\alpha_1, \alpha_2, \ldots, \alpha_n) \in E_n : \sum_{i=1}^{n} \alpha_i^2 \le \rho^2\Big\},$$
then, for $\rho$ sufficiently large, the previous lemma yields $\varphi(\alpha_1, \ldots, \alpha_n) \ge \|x\|$ outside $S_\rho$. Now, the ball $S_\rho \subset E_n$ being compact, the continuous function $\varphi$ assumes its minimum $r$ on $S_\rho$ at some point $(\alpha_1^{(0)}, \alpha_2^{(0)}, \ldots, \alpha_n^{(0)})$. But $r \le \varphi(0, 0, \ldots, 0) = \|x\|$. Hence $r$ is the least value of the function $\varphi$ on the entire space of the points $(\alpha_1, \alpha_2, \ldots, \alpha_n)$, which proves the theorem.
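In a Hilbert space such as $L_2([0, 1])$, the minimizing coefficients $\alpha_i^{(0)}$ of the theorem can be computed from the normal (Gram) equations. The sketch below (Python; the target $x(t) = e^t$, the monomial system and the quadrature grid are illustrative assumptions) finds the best approximation from $\operatorname{span}\{1, t, t^2\}$ and checks that it beats a non-optimal choice of coefficients:

```python
import numpy as np

t = np.linspace(0.0, 1.0, 4001)
dt = t[1] - t[0]
w = np.full_like(t, dt)
w[0] = w[-1] = dt / 2.0          # trapezoidal quadrature weights

basis = np.vstack([np.ones_like(t), t, t ** 2])   # x_1, x_2, x_3
x = np.exp(t)                                     # the element to approximate

G = (basis * w) @ basis.T        # Gram matrix <x_i, x_j>
b = (basis * w) @ x              # right-hand side <x, x_i>
alpha = np.linalg.solve(G, b)    # minimizing coefficients alpha^(0)

err_best = np.sqrt(np.sum(w * (x - alpha @ basis) ** 2))
# any other coefficients do worse, e.g. the Taylor coefficients of e^t:
err_taylor = np.sqrt(np.sum(w * (x - np.array([1.0, 1.0, 0.5]) @ basis) ** 2))
```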
6.5.3
Remark
(i) The linear combination $\sum_{i=1}^{n} \alpha_i^{(0)} x_i$, giving the best approximation of the element $x$, is in general not unique.
(ii) Let $Y$ be a finite dimensional subspace of $C([a, b])$. Then the best approximation out of $Y$ is unique for every $x \in C([a, b])$ if and only if $Y$ satisfies the Haar condition [see Kreyszig [30]].
(iii) However, there exist certain spaces in which the best approximation is everywhere uniquely defined.
6.5.4
Definition: strictly normed
A space $E$ is said to be strictly normed if the equality $\|x + y\| = \|x\| + \|y\|$ for $x \ne \theta$, $y \ne \theta$, is possible only when $y = ax$ with $a > 0$.
6.5.5
Theorem
In a strictly normed linear space the best approximation of an arbitrary $x$ in terms of a linear combination of a given finite system of linearly independent elements is unique.
Proof: Let us suppose that there exist two linear combinations $\sum_{i=1}^{n} \alpha_i x_i$ and $\sum_{i=1}^{n} \beta_i x_i$ such that
$$\Big\|x - \sum_{i=1}^{n} \alpha_i x_i\Big\| = \Big\|x - \sum_{i=1}^{n} \beta_i x_i\Big\| = d,$$
where
$$d = \min_{r_i} \Big\|x - \sum_{i=1}^{n} r_i x_i\Big\| > 0.$$
Then
$$\Big\|x - \sum_{i=1}^{n} \frac{\alpha_i + \beta_i}{2}\,x_i\Big\| \le \frac{1}{2}\Big\|x - \sum_{i=1}^{n} \alpha_i x_i\Big\| + \frac{1}{2}\Big\|x - \sum_{i=1}^{n} \beta_i x_i\Big\| = \frac{1}{2}d + \frac{1}{2}d = d,$$
and since
$$\Big\|x - \sum_{i=1}^{n} \frac{\alpha_i + \beta_i}{2}\,x_i\Big\| \ge d,$$
we have
$$\Big\|x - \sum_{i=1}^{n} \frac{\alpha_i + \beta_i}{2}\,x_i\Big\| = d.$$
Consequently,
$$\Big\|x - \sum_{i=1}^{n} \frac{\alpha_i + \beta_i}{2}\,x_i\Big\| = \Big\|\frac{1}{2}\Big(x - \sum_{i=1}^{n} \alpha_i x_i\Big)\Big\| + \Big\|\frac{1}{2}\Big(x - \sum_{i=1}^{n} \beta_i x_i\Big)\Big\|.$$
The space being strictly normed,
$$x - \sum_{i=1}^{n} \alpha_i x_i = a\Big(x - \sum_{i=1}^{n} \beta_i x_i\Big), \qquad a > 0.$$
If $a \ne 1$, then $x$ would be a linear combination of the elements $x_1, x_2, \ldots, x_n$, which is a contradiction. Thus $a = 1$; but then $\sum_{i=1}^{n} (\alpha_i - \beta_i)x_i = \theta$, and $x_1, x_2, \ldots, x_n$ are linearly independent. Hence we get $\alpha_i = \beta_i$, $i = 1, 2, \ldots, n$.
6.5.6
Remark
(i) $L_p([0, 1])$ and $l_p$ for $p > 1$ are strictly normed.
(ii) $C([0, 1])$ is not strictly normed.
Let us take $x(t)$ and $y(t)$ as two nonnegative, linearly independent functions attaining their maximum values at one and the same point $\bar t$ of the interval. Then
$$\|x + y\| = \max_t\,[x(t) + y(t)] = x(\bar t\,) + y(\bar t\,) = \max_t x(t) + \max_t y(t) = \|x\| + \|y\|,$$
but $y \ne ax$. Hence, $C([0, 1])$ is not strictly normed.
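The failure of strict normedness in $C([0, 1])$ is easy to check on a grid: take $x(t) = t$ and $y(t) = t^2$, both nonnegative with their maximum at $t = 1$; then $\|x + y\| = \|x\| + \|y\|$ although $y$ is not a multiple of $x$. A sketch (Python):

```python
import numpy as np

t = np.linspace(0.0, 1.0, 10001)

x = t        # nonnegative, maximum at t = 1
y = t ** 2   # nonnegative, maximum at t = 1, and not a multiple of x

sup = lambda f: float(np.max(np.abs(f)))

lhs = sup(x + y)            # ||x + y|| = 2
rhs = sup(x) + sup(y)       # ||x|| + ||y|| = 2
# if y = a*x with equal norms, then necessarily a = 1; but sup|y - x| = 1/4 > 0
not_multiple = sup(y - x)
```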
6.5.7
Lemma
Let $E$ be a reflexive normed linear space and $M$ be a nonempty closed convex subset of $E$. Then, for every $x \in E$, there is some $y \in M$ such that $\|x - y\| = \operatorname{dist}(x, M)$; that is, there is a best approximation to $x$ from $M$.
Proof: Let $x \in E$ and $d = \operatorname{dist}(x, M)$. If $x \in M$ then the result is trivially satisfied. Let $x \notin M$. Then there is a sequence $\{y_n\}$ in $M$ such that $\|x - y_n\| \to d$ as $n \to \infty$. Since $x$ is fixed and $\{x - y_n\}$ is bounded, $\{y_n\}$ is bounded, and since $E$ is reflexive, $\{y_n\}$ contains a weakly convergent subsequence $\{y_{n_p}\}$ (6.4.4). Now $\{y_{n_p}\} \subset M$ and, $M$ being closed and convex, its weak limit $y^{(1)}$ lies in $M$: $\lim_p \varphi_1(y_{n_p}) = \varphi_1(y^{(1)})$, $y^{(1)} \in M$, for every linear functional $\varphi_1$. Therefore $\lim_p \varphi_1(x - y_{n_p}) = \varphi_1(x - y^{(1)})$. Since $\{y_{n_p}\}$ is a subsequence of $\{y_n\}$,
$$\lim_p \|x - y_{n_p}\| = \lim_n \|x - y_n\| = d.$$
Thus, by theorem 6.3.15, $\{x - y_{n_p}\}$ is strongly convergent to $x - y^{(1)}$. Hence $\{y_{n_p}\}$ is strongly convergent to $y^{(1)}$, and $\|x - y^{(1)}\| = d$. Similarly, if $\{y_n\}$ contains another weakly convergent subsequence $\{y_{n_q}\}$, then we can find a $y^{(2)} \in M$ such that $\{y_{n_q}\}$ is strongly convergent to $y^{(2)}$ and $\|x - y^{(2)}\| = d$. Since $M$ is convex, $y^{(1)}, y^{(2)} \in M$ implies $\alpha y^{(1)} + (1 - \alpha)y^{(2)} \in M$, $0 \le \alpha \le 1$. Thus $\|x - (\alpha y^{(1)} + (1 - \alpha)y^{(2)})\| = d$.
6.5.8
Minimization of functionals
The problem of best approximation is just a particular case of a wider
and more comprehensive problem, namely the problem of minimization of
a functional.
The classical Weierstrass existence theorem states the following:
(W) The minimum problem
$$F(u) = \min!, \quad u \in M \tag{6.25}$$
has a solution provided the functional $F : M \to \mathbb{R}$ is continuous on the nonempty compact subset $M$ of the Banach space $E$.
Unfortunately, this result is not useful for many variational problems because of the following crucial drawback: in infinite-dimensional Banach spaces, closed balls are not compact. This is the main difficulty in the calculus of variations. To overcome this difficulty the notion of weak convergence is introduced. The basic result runs as follows:
(C) In a reflexive Banach space, each bounded sequence has a weakly convergent subsequence (6.4.4).
If $H$ is a Hilbert space, it is reflexive and the convergence condition (C) is a consequence of the Riesz theorem.
In a reflexive Banach space, the convergence principle (C) implies the following fundamental generalization of the classical Weierstrass theorem (W):
(W*) The minimum problem (6.25) has a solution provided the functional $F : M \to \mathbb{R}$ is weakly sequentially lower semicontinuous on the closed ball $M$ of the reflexive Banach space $E$.
More generally, this is also true if $M$ is a nonempty bounded closed convex set in the reflexive Banach space $E$. These matters will be discussed in Chapter 13.
Problems
1. Prove that a one-to-one continuous linear mapping of one Banach space onto another is a homeomorphism. In particular, if a one-to-one linear mapping $A$ of a Banach space onto itself is continuous, prove that its inverse $A^{-1}$ is automatically continuous.
2. Let $A : E_x \to E_y$ be a linear continuous operator, where $E_x$ and $E_y$ are Banach spaces over $\mathbb{R}$ (or $\mathbb{C}$). If the inverse operator $A^{-1} : E_y \to E_x$ exists, then show that it is continuous.
3. Let $A : E_x \to E_y$ be a linear continuous operator, where $E_x$ and $E_y$ are Banach spaces over $\mathbb{R}$ (or $\mathbb{C}$). Then show that the following two conditions are equivalent:
(i) The equation $Au = v$, $u \in E_x$, is well-posed; that is, by definition, for each given $v \in E_y$, $Au = v$ has a unique solution $u$, which depends continuously on $v$.
(ii) For each $v \in E_y$, $Au = v$ has a solution $u$, and $Aw = \theta$ implies $w = \theta$.
CHAPTER 7
CLOSED GRAPH
THEOREM AND ITS
CONSEQUENCES
7.1
Closed Graph Theorem
Bounded linear operators are discussed in chapter 4. But in 4.2.11 we have seen that differential operators defined on a normed linear space are not bounded. They belong, however, to a class of operators known as closed operators. In what follows, the relationship between closed and bounded linear operators and related concepts is discussed.
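The unboundedness of differentiation mentioned above can be seen on the classical example $x_n(t) = t^n$ in $C([0, 1])$: $\|x_n\| = 1$ while $\|T x_n\| = \|x_n'\| = n$, so no bound $\|Tx\| \le c\|x\|$ can hold. A sketch (Python; the derivatives are evaluated analytically on a grid):

```python
import numpy as np

t = np.linspace(0.0, 1.0, 10001)

ratios = []
for n in (1, 10, 100):
    xn = t ** n              # ||x_n|| = sup |t^n| = 1 on [0, 1]
    dxn = n * t ** (n - 1)   # (T x_n)(t) = n t^(n-1), so ||T x_n|| = n
    ratios.append(float(np.max(np.abs(dxn)) / np.max(np.abs(xn))))
```

The ratios $\|Tx_n\| / \|x_n\|$ grow without bound, which is exactly what "unbounded operator" means.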
Let $E_x$ and $E_y$ be normed linear spaces, let $T : D(T) \subseteq E_x \to E_y$ be a linear operator, and let $D(T)$ stand for the domain of $T$.
7.1.1
Definition: graph
The graph of an operator $T$ is denoted by $G(T)$ and defined by
$$G(T) = \{(x, y) : x \in D(T),\ y = Tx\}. \tag{7.1}$$
7.1.2
Definition: closed linear operator
A linear operator $T : D(T) \subseteq E_x \to E_y$ is said to be a closed operator if its graph $G(T)$ is closed in the normed space $E_x \times E_y$.
The two algebraic operations of the vector space $E_x \times E_y$ are defined as usual, that is: for $x_1, x_2 \in E_x$, $y_1, y_2 \in E_y$,
$$(x_1, y_1) + (x_2, y_2) = (x_1 + x_2,\ y_1 + y_2), \tag{7.2}$$
and for $x \in E_x$, $y \in E_y$,
$$\alpha(x, y) = (\alpha x, \alpha y),$$
where $\alpha$ is a scalar, and the norm on $E_x \times E_y$ is defined by
$$\|(x, y)\| = \|x\| + \|y\|. \tag{7.3}$$
Under what conditions will a closed linear operator be bounded? An
answer is given by the following theorem which is known as the closed
graph theorem.
7.1.3
Theorem: closed graph theorem
Let Ex and Ey be Banach spaces and let T : D(T) → Ey be a closed linear operator, where D(T) ⊂ Ex. Then, if D(T) is closed in Ex, the operator T is bounded.
Proof: We first show that Ex × Ey with the norm defined by (7.3) is complete. Let {zn} be Cauchy in Ex × Ey, where zn = (xn, yn). Then, for every ε > 0, there is an N = N(ε) such that
‖zn − zm‖ = ‖(xn − xm, yn − ym)‖ = ‖xn − xm‖ + ‖yn − ym‖ < ε    (7.4)
for n, m > N(ε).
Hence, {xn} and {yn} are Cauchy sequences in Ex and Ey respectively. Ex being complete, xn → x (say) ∈ Ex as n → ∞. Similarly, Ey being complete, yn → y (say) ∈ Ey. Hence,
zn → z = (x, y) ∈ Ex × Ey as n → ∞.
Hence, Ex × Ey is complete.
By assumption, G(T) is closed in Ex × Ey and D(T) is closed in Ex.
Hence, D(T ) and G(T ) are complete.
We now consider the mapping P : G(T) → D(T), (x, Tx) ↦ x.
We see that P is linear, because
P [(x1 , T x1 ) + (x2 , T x2 )] = P [(x1 + x2 , T (x1 + x2 ))]
= x1 + x2 = P (x1 , T x1 ) + P (x2 , T x2 ),
where
x1 , x2 Ex .
P is bounded, because
‖P(x, Tx)‖ = ‖x‖ ≤ ‖x‖ + ‖Tx‖ = ‖(x, Tx)‖.
P is bijective, with inverse mapping
P⁻¹ : D(T) → G(T), i.e. x ↦ (x, Tx).
Since G(T ) and D(T ) are complete, we can apply the bounded inverse
theorem (section 7.3) and see that P⁻¹ is bounded, say ‖(x, Tx)‖ ≤ b‖x‖ for some b and all x ∈ D(T).
Therefore, ‖x‖ + ‖Tx‖ ≤ b‖x‖.
Hence,
‖Tx‖ ≤ ‖x‖ + ‖Tx‖ ≤ b‖x‖
for all x ∈ D(T). Thus, T is bounded.
7.1.4
Remark
G(T) is closed if and only if z in the closure of G(T) implies z ∈ G(T). Now, z lies in the closure of G(T) if and only if there are zn = (xn, Txn) ∈ G(T) such that zn → z = (x, y), hence xn → x, Txn → y.
This leads to the following theorem, where an important criterion for
an operator T to be closed is discovered.
7.1.5
Theorem (closed linear operator)
Let T : D(T) ⊂ Ex → Ey be a linear operator, where Ex and Ey are normed linear spaces. Then T is closed if and only if it fulfils the following condition: xn → x, where xn ∈ D(T), and Txn → y together imply that x ∈ D(T) and Tx = y.
7.1.6
Remark
(i) If T is a continuous linear operator, then T is closed.
Since T is continuous, xn → x in Ex implies that Txn → Tx in Ey.
(ii) A closed linear operator need not be continuous. For example, let Ex = Ey = ℝ and Tx = 1/x for x ≠ 0, T0 = 0. Here, if xn → 0, then Txn → ∞, so {Txn} does not converge, showing that T is closed. But T is not continuous.
(iii) Given T is closed, and two sequences {xn} and {x̃n} in the domain converging to the same limit x: if the corresponding sequences {Txn} and {Tx̃n} both converge, then they have the same limit.
T being closed, xn → x and Txn → y1 imply x ∈ D(T) and Tx = y1. Since x̃n → x, T being closed, x̃n → x and Tx̃n → y2 imply that x ∈ D(T) and y2 = Tx.
Thus, {Txn} and {Tx̃n} have the same limit.
7.1.7
Example (differential operator)
We refer to example 4.2.11.
We have seen that the operator A given by Ax(t) = x′(t), where Ex = C([0, 1]) and D(A) ⊂ Ex is the subspace of functions having continuous derivatives, is not bounded. We show now that A is a closed operator. Let xn ∈ D(A) be such that
xn → x and Axn = x′n → y.
Since convergence in the norm of C([0, 1]) is uniform convergence on [0, 1], from x′n → y we have
∫₀ᵗ y(τ) dτ = ∫₀ᵗ lim_{n→∞} x′n(τ) dτ = lim_{n→∞} ∫₀ᵗ x′n(τ) dτ = x(t) − x(0),
i.e.,
x(t) = x(0) + ∫₀ᵗ y(τ) dτ.
This shows that x ∈ D(A) and x′ = y. Theorem 7.1.5 now implies that A is closed.
Note 7.1.1. Here, D(A) is not closed in Ex = C([0, 1]), for otherwise A
would be bounded by the closed graph theorem.
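The unboundedness of A in this example is easy to check numerically. The sketch below is an illustration outside the text (NumPy and the grid size are our choices): it uses the monomials xn(t) = tⁿ, for which ‖xn‖ = 1 in C([0, 1]) while ‖Axn‖ = ‖x′n‖ = n.

```python
import numpy as np

# Monomials x_n(t) = t^n on [0, 1]: sup-norm 1, but the derivative
# A x_n = n t^(n-1) has sup-norm n, so no constant c can satisfy
# ||A x|| <= c ||x|| on D(A).
t = np.linspace(0.0, 1.0, 10001)

def sup_norm(f):
    return float(np.max(np.abs(f)))

for n in (1, 5, 50):
    xn = t ** n
    dxn = n * t ** (n - 1)                 # exact derivative of t^n
    print(n, sup_norm(dxn) / sup_norm(xn))  # the ratio grows like n
```

The growing ratio ‖Axn‖/‖xn‖ = n is exactly the obstruction to boundedness used in example 4.2.11.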
7.1.8
Theorem
Closedness does not imply boundedness of a linear operator. Conversely,
boundedness does not imply closedness.
Proof: The first statement is shown to be true by remark 7.1.6(ii) and example 7.1.7. The second statement is demonstrated by the following example. Let T : D(T) → D(T) ⊂ Ex be the identity operator on D(T), where D(T) is a proper dense subspace of a normed linear space Ex. It is evident that T is linear and bounded. However, we show that T is not closed. Let us take x ∈ Ex \ D(T) and a sequence {xn} in D(T) which converges to x. Then Txn = xn → x with x ∉ D(T), so by theorem 7.1.5, T is not closed.
7.1.9
Lemma (closed operator)
Since a broad class of operators in mathematical and theoretical physics
are dierential operators and hence unbounded operators, it is important
to determine the domain and extensions of such operators. The following
lemma will be an aid in investigation in this direction.
Let T : D(T) → Ey be a bounded linear operator with domain D(T) ⊂ Ex, where Ex and Ey are normed linear spaces. Then:
(a) If D(T ) is a closed subset of Ex , then T is closed.
(b) If T is closed and Ey is complete, then D(T) is a closed subset of Ex.
Proof: (a) If {xn} is in D(T) and converges, say xn → x, and is such that {Txn} also converges, then x lies in the closure of D(T), which equals D(T), since D(T) is closed.
Txn → Tx since T is continuous.
Hence, T is closed by theorem 7.1.5.
(b) For x in the closure of D(T) there is a sequence {xn} in D(T) such that xn → x.
Since T is bounded,
‖Txn − Txm‖ = ‖T(xn − xm)‖ ≤ ‖T‖ ‖xn − xm‖.
This shows that {Txn} is Cauchy; {Txn} converges, say Txn → y ∈ Ey, because Ey is complete. Since T is closed, x ∈ D(T) by theorem 7.1.5 and Tx = y. Hence, D(T) is closed, because x in the closure of D(T) was arbitrary.
7.1.10
Projection mapping
We next discuss the partition of a Banach space into two subspaces and
the related question of the existence of operators, which are projections
onto subspaces. These ideas help very much in the analysis of the structure
of a linear transformation. We provide here an illustration of the use of the closed graph theorem.
7.1.11
Definition: direct sum
A vector space E is said to be the direct sum of two of its subspaces M and N, i.e.,
E = M ⊕ N    (7.5)
if every x ∈ E has a unique decomposition
x = y + z,    (7.6)
with y ∈ M and z ∈ N.
Thus, if E = M ⊕ N then M ∩ N = {θ}.
7.1.12
Definition: projection
A linear map P from a linear space E to itself is called a projection if P² = P.
If P is a projection then (I − P) is also a projection. For
(I − P)² = (I − P)(I − P) = I − 2P + P² = I − 2P + P = I − P.
Moreover, P(I − P) = P − P² = P − P = 0.
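As a concrete finite dimensional sketch (our illustration, outside the text, assuming NumPy): the diagonal matrix projecting ℝ³ onto span{e1, e2} along span{e3} satisfies all three identities above.

```python
import numpy as np

# Projection of R^3 onto M = span{e1, e2} along N = span{e3}.
P = np.diag([1.0, 1.0, 0.0])
I = np.eye(3)

assert np.allclose(P @ P, P)                        # P^2 = P
assert np.allclose((I - P) @ (I - P), I - P)        # (I - P)^2 = I - P
assert np.allclose(P @ (I - P), np.zeros((3, 3)))   # P(I - P) = 0

x = np.array([2.0, -1.0, 3.0])
y, z = P @ x, (I - P) @ x   # unique decomposition x = y + z, y in M, z in N
print(y, z)
```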
7.1.13
Lemma
If a normed linear space E is the direct sum of two subspaces M and N, and if P is a projection of E onto M, then
(i) Px = x if and only if x ∈ M;
(ii) Px = θ if and only if x ∈ N.
Proof: If x ∈ E, then x = y + z where y ∈ M and z ∈ N. Since P is a projection of E onto M,
Px = y.
If Px = x, then y = x and z = θ, so x ∈ M. Conversely, if x ∈ M, Px = x.
If Px = θ then y = θ and hence x ∈ N. Conversely, if x ∈ N, Px = θ.
If R(P ) and N (P ) denote respectively the range space and null space
of P , then
R(P) = N(I − P),  N(P) = R(I − P).
Therefore, E = R(P) + N(P) and R(P) ∩ N(P) = {θ} for every projection P defined on E.
The closedness and the continuity of a projection can be determined by
the closedness of its range space and null space respectively.
7.1.14
Theorem
Let E be a normed linear space and P : E E be a projection. Then
P is a closed map, if and only if, the subspaces R(P ) and N (P ) are closed
in E. In that case, P is in fact, continuous if E is a Banach space.
Proof: Let P be a closed map, yn ∈ R(P), zn ∈ N(P). Further, let yn → y, zn → z in E. Then Pyn = yn → y and Pzn = θ → θ in E, so that Py = y and Pz = θ. Then y ∈ R(P) and z ∈ N(P). The above shows that R(P) and N(P) are closed subspaces of E.
Conversely, let R(P) and N(P) be closed in E. Let xn → x and Pxn → y in E. Since R(P) is closed and Pxn ∈ R(P), we see that y ∈ R(P). Also, since N(P) is closed and xn − Pxn ∈ N(P), we see that x − y ∈ N(P). Thus, x − y = z with z ∈ N(P); that is, x = y + z, with y ∈ R(P) and z = x − y ∈ N(P). Hence, Px = y, showing that P is a closed mapping.
If E is a Banach space and R(P ) and N (P ) are closed, then by the closed
graph theorem (theorem 7.1.3) the closed mapping P is in fact continuous.
7.1.15
Remark
(i) Let E be a normed linear space and M a subspace of E. Then there exists a projection P defined on E such that R(P) = M. Let {fα} be a Hamel basis for M. Let {fα} be extended to a basis {hα} of the space E, with {hα} = {fα} ∪ {gα}. Let N = span{gα}; then E = M + N and M ∩ N = {θ}. The above shows that there is a projection of E onto M along N.
(ii) A question that arises is: given a normed linear space E and a closed subspace M of E, does there exist a closed projection P defined on E such that R(P) = M? By theorem 7.1.14, such a projection exists if and only if there is a closed subspace N of E such that E = M + N and M ∩ N = {θ}. In such a case, N is called a closed complement of M in E.
7.2
Open Mapping Theorem
7.2.1
Definition: open mapping
Let Ex and Ey be two Banach spaces and T a linear operator mapping Ex into Ey. Then T : D(T) → Ey with D(T) ⊂ Ex is called an open mapping if for every open set in D(T) the image is an open set in Ey.
Note 7.2.1. A continuous mapping T : Ex → Ey has the property that for every open set in Ey the inverse image is an open set. This does not imply that T maps open sets in Ex into open sets in Ey. For example, the mapping ℝ → ℝ given by t ↦ sin t is continuous but maps ]0, 2π[ onto [−1, 1], which is not open.
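A small numerical check of this example (our illustration; the sampling grid is an arbitrary choice): sin attains 1 at π/2 and −1 at 3π/2, both inside ]0, 2π[, so the image of the open interval is the closed set [−1, 1].

```python
import math

# Sample sin on the open interval ]0, 2*pi[; the extreme values +-1 are
# attained inside the interval, so the image is the closed set [-1, 1].
ts = [k * 2 * math.pi / 10**5 for k in range(1, 10**5)]
vals = [math.sin(t) for t in ts]
print(min(vals), max(vals))   # numerically -1 and 1
```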
7.2.2
Theorem: open mapping theorem
A bounded linear operator T mapping a Banach space Ex onto all of a
Banach space Ey is an open mapping.
Before proving the above theorem, the following lemma will be proved.
7.2.3
Lemma (open unit ball)
A bounded linear operator T from a Banach space Ex onto all of a Banach space Ey has the property that the image T(B(0, 1)) of the unit ball B(0, 1) ⊂ Ex contains an open ball about 0 ∈ Ey.
Proof: The proof comprises three parts:
(i) The closure of the image of the open ball B1 = B(0, 1/2) contains an open ball B*.
(ii) The closure of T(Bn) contains an open ball Vn about 0 ∈ Ey, where Bn = B(0, 2⁻ⁿ) ⊂ Ex.
(iii) T(B(0, 1)) contains an open ball about 0 ∈ Ey.
(i) Given a set A ⊂ Ex and a scalar α, we shall write [see figs 7.1(a) and 7.1(b)]
αA = {x ∈ Ex : x = αa, a ∈ A}    (7.7)
A + g = {x ∈ Ex : x = a + g, a ∈ A}, g ∈ Ex,    (7.8)
and similarly for subsets of Ey.
Fig. 7.1(a) Illustration of formula (7.7)
Fig. 7.1(b) Illustration of formula (7.8)
We consider the open ball B1 = B(0, 1/2) ⊂ Ex. Any fixed x ∈ Ex is in kB1 with real k sufficiently large (k > 2‖x‖). Hence,
Ex = ⋃_{k=1}^∞ kB1.
Since T is surjective and linear,
Ey = T(Ex) = T(⋃_{k=1}^∞ kB1) = ⋃_{k=1}^∞ kT(B1) = ⋃_{k=1}^∞ k cl T(B1),    (7.9)
where cl denotes closure in Ey.
Since Ey is complete, it is a set of the second category by Baire's category theorem (theorem 1.4.20). Hence Ey = ⋃_{k=1}^∞ k cl T(B1) cannot be expressed as a countable union of nowhere dense sets, so at least one k cl T(B1) must contain an open ball. This means that cl T(B1) must contain an open ball, say
B* = B(y0, ε) ⊂ cl T(B1).
It therefore follows from (7.8) that
B* − y0 = B(0, ε) ⊂ cl T(B1) − y0.    (7.10)
We show now that
cl T(B1) − y0 ⊂ cl T(B(0, 1)).    (7.11)
Let y ∈ cl T(B1) − y0. Then y + y0 ∈ cl T(B1), and we remember that y0 ∈ cl T(B1) too. Since y + y0 ∈ cl T(B1), there exist wn ∈ B1 such that un = Twn ∈ T(B1) and un → y + y0. Similarly, we can find zn ∈ B1 such that vn = Tzn ∈ T(B1) and vn → y0. Since wn, zn ∈ B1 and B1 has radius 1/2,
‖wn − zn‖ ≤ ‖wn‖ + ‖zn‖ < 1, hence wn − zn ∈ B(0, 1).
Now,
T(wn − zn) = Twn − Tzn → y.
Thus y ∈ cl T(B(0, 1)). Since y ∈ cl T(B1) − y0 was arbitrary, it follows that
cl T(B1) − y0 ⊂ cl T(B(0, 1)).
Hence, (7.11) is proved. Combining (7.10) and (7.11) we obtain
B* − y0 = B(0, ε) ⊂ cl T(B1) − y0 ⊂ cl T(B(0, 1)).    (7.12)
(ii) Let Bn = B(0, 2⁻ⁿ) ⊂ Ex. Since T is linear, T(Bn) = 2⁻ⁿ T(B(0, 1)). It follows from (7.12) that
Vn = B(0, ε/2ⁿ) ⊂ cl T(Bn).    (7.13)
(iii) We finally prove that
V1 = B(0, ε/2) ⊂ T(B(0, 1)).
For that, let y ∈ V1. (7.13) with n = 1 yields V1 = B(0, ε/2) ⊂ cl T(B1); hence y ∈ cl T(B1). Since y lies in the closure of T(B1), there is a v ∈ T(B1) close to y such that
‖y − v‖ < ε/4.
Now, v ∈ T(B1) implies that there is an x1 ∈ B1 such that v = Tx1. Hence
‖y − Tx1‖ < ε/4.
From the above and (7.13), putting n = 2, we see that
y − Tx1 ∈ V2 ⊂ cl T(B2).
We can again find an x2 ∈ B2 such that
‖(y − Tx1) − Tx2‖ < ε/8.
Hence y − Tx1 − Tx2 ∈ V3 ⊂ cl T(B3), and so on.
Proceeding in the above manner we get, at the nth stage, an xn ∈ Bn such that
‖y − Σ_{k=1}^n Txk‖ < ε/2^(n+1),  (n = 1, 2, ...).    (7.14)
Writing zn = x1 + x2 + ⋯ + xn, and since xk ∈ Bk, so that ‖xk‖ < 1/2ᵏ, we have, for n > m,
‖zn − zm‖ ≤ Σ_{k=m+1}^n ‖xk‖ < Σ_{k=m+1}^n 1/2ᵏ
= (1/2^(m+1)) (1 + 1/2 + 1/2² + ⋯ + 1/2^(n−m−1)) → 0 as m → ∞.
Hence, {zn} is a Cauchy sequence and, Ex being complete, zn → z ∈ Ex. Also z ∈ B(0, 1), since B(0, 1) has radius 1 and
Σ_{k=1}^∞ ‖xk‖ < Σ_{k=1}^∞ 1/2ᵏ = 1.    (7.15)
Since T is continuous, Tzn → Tz, and (7.14) shows that Tz = y. Hence y ∈ T(B(0, 1)).
Proof of theorem 7.2.2
We have to prove that for every open set A ⊂ Ex the image T(A) is open in Ey.
Let y = Tz ∈ T(A), where z ∈ A. Since A is open, it contains an open ball with centre z. Hence A − z contains an open ball with centre 0. Let the radius of the ball be r and let k = 1/r. Then k(A − z) contains the open ball B(0, 1). By lemma 7.2.3, T(k(A − z)) = k[T(A) − Tz] contains an open ball about 0, and therefore T(A) − Tz contains an open ball about 0. Hence, T(A) contains an open ball about Tz = y. Since y ∈ T(A) is arbitrary, T(A) is open.
7.3
Bounded Inverse Theorem
Let Ex and Ey be Banach spaces and let T : Ex → Ey be a bounded linear operator. If T is bijective (i.e., injective and surjective), then T⁻¹ is continuous and thus bounded.
Since T is open by the open mapping theorem 7.2.2, given A ⊂ Ex open, T(A) is open. Again, T is bijective, so T⁻¹ exists. For every open set A in Ex, the inverse image of A under T⁻¹, namely T(A), is open. Hence by theorem 1.6.4, T⁻¹ is continuous, and being linear it is hence bounded (4.2.4).
7.3.1
Remark
(i) The inverse of a bijective closed mapping from a complete metric
space to a complete metric space is closed.
(ii) The inverse of a bijective, linear, continuous mapping from a Banach
space to a Banach space is linear and continuous.
(iii) If the normed linear spaces are not complete, then the above ((i) and (ii)) may not be true.
Let Ex = c00 with ‖·‖1 and Ey = c00 with ‖·‖∞. If P(x) = x for x ∈ Ex, then P : Ex → Ey is bijective, linear and continuous. But P⁻¹ is not continuous since, for xn = (1, 1, 1, ..., 1, 0, 0, 0, ...) (with n ones) we have ‖xn‖∞ = 1 and ‖P⁻¹(xn)‖ = ‖xn‖1 = n for all n = 1, 2, ....
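The counterexample is easy to verify numerically (our illustration, outside the text):

```python
# x_n = (1, 1, ..., 1, 0, 0, ...) with n ones: the sup-norm stays 1
# while the l1-norm grows like n, so the inverse of the identity map
# P : (c00, ||.||_1) -> (c00, ||.||_inf) cannot be bounded.
def sup_norm(x):
    return max(abs(c) for c in x)

def l1_norm(x):
    return sum(abs(c) for c in x)

for n in (1, 10, 1000):
    xn = [1.0] * n
    print(n, sup_norm(xn), l1_norm(xn))   # 1.0 and n
```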
7.3.2
Definition: stronger norm, comparable norm
Given a normed linear space E, a norm ‖·‖ on E is said to be stronger than the norm ‖·‖′ if for every x ∈ E and every ε > 0, there is some δ > 0 such that B(x, δ) ⊂ B′(x, ε). Here, B(x, δ) denotes an open ball in E w.r.t. ‖·‖, and B′(x, ε) denotes an open ball in E w.r.t. ‖·‖′. In other words, ‖·‖ is stronger than ‖·‖′ if and only if every open subset of E with respect to ‖·‖′ is also an open subset with respect to ‖·‖.
The norms ‖·‖ and ‖·‖′ are said to be comparable if one of them is stronger than the other. For the definition of two equivalent norms ‖·‖ and ‖·‖′, see 2.3.5.
7.3.3
Theorem
Let ‖·‖ and ‖·‖′ be norms on a linear space E. Then the norm ‖·‖ is stronger than ‖·‖′ if and only if there is some α > 0 such that ‖x‖′ ≤ α‖x‖ for all x ∈ E.
Proof: Let ‖·‖ be stronger than ‖·‖′; then there is some r > 0 such that
{x ∈ E : ‖x‖ < r} ⊂ {x ∈ E : ‖x‖′ < 1}.
Let θ ≠ x ∈ E and ε > 0. Since
‖ rx/((1 + ε)‖x‖) ‖ < r,
then
‖ rx/((1 + ε)‖x‖) ‖′ < 1, i.e., ‖x‖′ < ((1 + ε)/r) ‖x‖.
Since ε > 0 is arbitrary,
‖x‖′ ≤ (1/r) ‖x‖, or ‖x‖′ ≤ α‖x‖ with α = 1/r.
Conversely, let ‖x‖′ ≤ α‖x‖ for all x ∈ E. Let {xn} be a sequence in E such that ‖xn − x‖ → 0. Since ‖xn − x‖′ ≤ α‖xn − x‖, ‖xn − x‖′ → 0. Hence, the norm ‖·‖ is stronger than the norm ‖·‖′.
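On a finite dimensional space the criterion of theorem 7.3.3 is easy to test numerically (our illustration, outside the text): ‖x‖∞ ≤ ‖x‖₁ for every x, so ‖·‖₁ is stronger than ‖·‖∞ with α = 1.

```python
import random

# Check ||x||_inf <= 1 * ||x||_1 on random vectors of R^8; by theorem
# 7.3.3 this means ||.||_1 is stronger than ||.||_inf (alpha = 1).
random.seed(0)
for _ in range(1000):
    x = [random.uniform(-10.0, 10.0) for _ in range(8)]
    n_inf = max(abs(c) for c in x)
    n_one = sum(abs(c) for c in x)
    assert n_inf <= n_one
print("||x||_inf <= ||x||_1 held on all samples")
```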
7.3.4
Two-norm theorem
Let E be a Banach space in the norm ‖·‖. Then a norm ‖·‖′ of the linear space E is equivalent to the norm ‖·‖ if and only if E is also a Banach space in the norm ‖·‖′ and the norm ‖·‖′ is comparable to the norm ‖·‖.
Proof: If the norms ‖·‖ and ‖·‖′ are equivalent, then clearly they are comparable. Moreover,
α1‖x‖ ≤ ‖x‖′ ≤ α2‖x‖, α1, α2 > 0, for all x ∈ E.
Let {xn} be a Cauchy sequence in E with the norm ‖·‖′. Since α1‖xn − xm‖ ≤ ‖xn − xm‖′, {xn} is Cauchy in the Banach space (E, ‖·‖), so xn → x in the norm ‖·‖. Then
‖xn − x‖′ ≤ α2‖xn − x‖ → 0,
and hence xn → x in the norm ‖·‖′. Therefore, E is a Banach space with respect to the norm ‖·‖′.
Conversely, let us suppose that E is a Banach space in the norm ‖·‖′ and that the norm ‖·‖′ is comparable to the norm ‖·‖. Let us suppose, without loss of generality, that ‖·‖ is stronger than ‖·‖′. Then, by theorem 7.3.3, we can find an α > 0 such that ‖x‖′ ≤ α‖x‖ for all x ∈ E. Let E′ denote the linear space E with the norm ‖·‖′ and let us consider the identity map I : E → E′. Clearly I is bijective, linear and continuous. By the bounded inverse theorem 7.3, I⁻¹ : E′ → E is also continuous, that is, ‖x‖ ≤ β‖x‖′ for all x ∈ E and some β > 0. Letting γ = 1/β, we have
γ‖x‖ ≤ ‖x‖′ ≤ α‖x‖.
Therefore, it follows from 2.3.5 that ‖·‖ and ‖·‖′ are equivalent.
7.3.5
Remark
(i) The result above shows that two comparable complete norms on a
normed linear space are equivalent.
Problems [7.1, 7.2 and 7.3]
1. Given that E is a Banach space, D(T) ⊂ E is closed, and the linear operator T is bounded, show that T is closed.
2. If T is a linear transformation from a Banach space Ex into a Banach space Ey, find a necessary and sufficient condition that a subspace G of Ex × Ey is the graph of T.
3. Given three normed linear spaces Ex, Ey and Ez:
(i) If F : Ex → Ey is continuous and G : Ey → Ez is closed, then show that G ∘ F : Ex → Ez is closed.
(ii) If F : Ex → Ey is continuous and G : Ex → Ey is closed, then show that F + G : Ex → Ey is closed.
4. Let Ex and Ey be normed linear spaces and T : Ex → Ey be linear. Let T̃ : Ex/N(T) → Ey be defined by T̃(x + N(T)) = T(x), x ∈ Ex. Show that T is closed if and only if N(T) is closed in Ex and T̃ is closed.
5. Let Ex and Ey be normed linear spaces and A : Ex → Ey be linear, such that the range R(A) of A is finite dimensional. Then show that A is continuous if and only if the null space N(A) of A is closed in Ex.
In particular, show that a linear functional f on Ex is continuous if and only if N(f) is closed in Ex.
6. Let Ex be a normed linear space and f : Ex → ℝ be linear. Then show that f is closed if and only if f is continuous.
(Hint: Problems 7.4 and 7.5).
7. Let Ex and Ey be Banach spaces and let T : Ex → Ey be a closed linear operator; then show that
(i) if C is compact in Ex, T(C) is closed in Ey, and
(ii) if K is compact in Ey, T⁻¹(K) is closed in Ex.
8. Give an example of a discontinuous operator A from a Banach space
Ex to a normed linear space Ey , such that A has a closed graph.
9. Show that the null space N(A) of a closed linear operator A : Ex → Ey, Ex, Ey being normed linear spaces, is a closed subspace of Ex.
10. Let Ex and Ey be normed linear spaces. If A1 : Ex → Ey is a closed linear operator and A2 ∈ B(Ex, Ey), show that A1 + A2 is a closed linear operator.
11. Show that A : ℝ² → ℝ, defined by (x1, x2) ↦ x1, is open. Is the mapping ℝ² → ℝ² given by (x1, x2) ↦ (x1, 0) an open mapping?
12. Let A : c00 → c00 be defined by
y = Ax = (ξ1, ξ2/2, ξ3/3, ...),
where x = {ξi}. Show that A is linear and bounded but A⁻¹ is unbounded.
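A numerical sketch of this problem (our illustration, outside the text): on the unit vectors en, A contracts by the factor 1/n while A⁻¹ stretches by n, so A is bounded but A⁻¹ is not.

```python
# A x = (xi_1, xi_2/2, xi_3/3, ...) and its inverse, acting on
# finitely supported sequences (c00) represented as lists.
def A(x):
    return [xi / (i + 1) for i, xi in enumerate(x)]

def A_inv(y):
    return [yi * (i + 1) for i, yi in enumerate(y)]

def sup_norm(x):
    return max(abs(c) for c in x)

n = 100
e_n = [0.0] * (n - 1) + [1.0]    # n-th unit vector, sup-norm 1
print(sup_norm(A(e_n)))          # 1/n: consistent with ||A|| <= 1
print(sup_norm(A_inv(e_n)))      # n: ||A^{-1} e_n|| grows without bound
```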
13. Let Ex and Ey be Banach spaces and A : Ex → Ey be an injective bounded linear operator. Show that A⁻¹ : R(A) → Ex is bounded if and only if R(A) is closed in Ey.
14. Let A : Ex → Ey be a bounded linear operator, where Ex and Ey are Banach spaces. If A is bijective, show that there are positive real numbers α and β such that α‖x‖ ≤ ‖Ax‖ ≤ β‖x‖ for all x ∈ Ex.
15. Prove that the closed graph theorem can be deduced from the open
mapping theorem.
(Hint: Ex × Ey is a Banach space and the map (x, A(x)) ↦ x ∈ Ex is one-to-one and onto, where A : Ex → Ey.)
16. Let Ex and Ey be Banach spaces and A ∈ B(Ex, Ey) be surjective.
Let yn → y in Ey. If Ax = y, show that there is a sequence {xn} in Ex such that Axn = yn for each n and xn → x in Ex.
17. Show that the uniform boundedness principle for functionals [see 4.5.5]
can be deduced from the closed graph theorem.
18. Let Ex and Ey be Banach spaces and Ez be a normed linear space.
Let B ∈ B(Ex, Ez) and A2 ∈ B(Ey, Ez). Suppose that for every x ∈ Ex there is a unique y ∈ Ey such that Bx = A2y, and define A1x = y. Then show that A1 ∈ B(Ex, Ey).
19. Let Ex and Ey be Banach spaces and A ∈ B(Ex, Ey). Show that R(A) is linearly homeomorphic to Ex/N(A) if and only if R(A) is closed in Ey.
(Hint: Two metric spaces Ex and Ey are said to be homeomorphic to each other if there is a homeomorphism from Ex onto Ey. Use theorem 7.1.3.)
20. Let Ex denote the sequence space lp (1 ≤ p ≤ ∞). Let ‖·‖ be a complete norm on Ex such that if ‖xn − x‖ → 0 then xnj → xj for every j = 1, 2, .... Show that ‖·‖ is equivalent to the usual norm ‖·‖p on Ex.
21. Let ‖·‖ be a complete norm on C([a, b]) such that if ‖xn − x‖ → 0 then xn(t) → x(t) for every t ∈ [a, b]. Show that ‖·‖ is equivalent to the supremum norm on C([a, b]).
22. Give an example of a bounded linear operator mapping C¹([0, 1]) into C¹([0, 1]) which is not a closed operator.
7.4
Applications of the Open Mapping Theorem
Recall the definition of a Schauder basis in 4.8.3. Let E be a normed linear space. A denumerable subset {e1, e2, ...} of E is called a Schauder basis for E if ‖en‖ = 1 for each n and if for every x ∈ E there are unique scalars α1, α2, ... in ℝ (ℂ) such that x = Σ_{i=1}^∞ αi ei.
In case {e1, e2, ...} is a Schauder basis for E, then for n = 1, 2, ..., let us define functionals fn : E → ℝ (ℂ) by fn(x) = αn for x = Σ_{n=1}^∞ αn en ∈ E. fn is well-defined and linear on E. It is called the nth coefficient functional on E.
7.4.1
Theorem
The coefficient functionals fn(x) = αn(x), x ∈ E, are bounded.
Proof: We consider the vector space E′ of all sequences (α1, α2, ..., αn, ...) for which Σ_{n=1}^∞ αn en converges in E. The norm
‖y‖ = sup_n ‖Σ_{i=1}^n αi ei‖
converts E′ into a normed vector space. We show that E′ is a Banach space. Let ym = {αn^(m)}, m = 1, 2, ..., be a Cauchy sequence in E′. Let ε > 0; then there is an N such that m, p > N implies
‖ym − yp‖ = sup_n ‖Σ_{i=1}^n (αi^(m) − αi^(p)) ei‖ < ε.
But this implies |αn^(m) − αn^(p)| < 2ε for every n. Hence for every n,
lim_{m→∞} αn^(m) = αn exists. It remains to be shown that
y = (α1, α2, ..., αn, ...) ∈ E′ and lim_{m→∞} ym = y.
Now, ym = {α1^(m), α2^(m), ..., αn^(m), ...} ∈ E′. Since in E′ convergence implies coordinatewise convergence, and since lim_{m→∞} αn^(m) = αn, we have lim_{m→∞} ym = (α1, α2, ..., αn, ...). Now,
‖y − ym‖ = sup_n ‖Σ_{i=1}^n (αi − αi^(m)) ei‖ → 0 as m → ∞.
Hence y = (α1, α2, ..., αn, ...) ∈ E′ and, {ym} being Cauchy, E′ is a Banach space.
Let us next consider the mapping P : E′ → E which takes y = (α1, α2, ...) ∈ E′ to Py = Σ_{n=1}^∞ αn en ∈ E. If z = (β1, β2, ...) ∈ E′ with Pz = Σ_{n=1}^∞ βn en ∈ E, then P(y + z) = Σ_{n=1}^∞ (αn + βn) en = Py + Pz, showing P is linear. Since the {ei} are linearly independent, Py = θ implies y = θ. Hence P is one-to-one. Now, {en} being a Schauder basis in E, every element of E is representable in the form Σ_{n=1}^∞ rn en where (r1, r2, ..., rn, ...) ∈ E′. Hence P is onto. P is bounded since
‖Py‖ = lim_{n→∞} ‖Σ_{i=1}^n αi ei‖ ≤ sup_n ‖Σ_{i=1}^n αi ei‖ = ‖y‖.
By the open mapping theorem, the inverse P⁻¹ of P is bounded.
Now,
|αn(x)| = |αn| ‖en‖ = ‖αn en‖ = ‖Σ_{i=1}^n αi ei − Σ_{i=1}^{n−1} αi ei‖ ≤ 2 sup_n ‖Σ_{i=1}^n αi ei‖ = 2‖y‖ = 2‖P⁻¹x‖ ≤ 2‖P⁻¹‖ ‖x‖.
This proves the boundedness of αn(x).
CHAPTER 8
COMPACT
OPERATORS ON
NORMED LINEAR
SPACES
This chapter focusses on a natural and useful generalisation of bounded
linear operators having a nite dimensional range. The concept of a
compact linear operator is introduced in section 8.1. Compact linear
operators often appear in applications. They play a crucial role in the
theory of integral equations and in various problems of mathematical
physics. The relation of compactness with weak convergence and reexivity
is highlighted. The spectral properties of a compact linear operator are
studied in section 8.2. The notion of the Fredholm alternative and the
relevant theorems are provided in section 8.3. Section 8.4 shows how to construct finite rank approximations of a compact operator. A reduction
of the nite rank problem to a nite dimensional problem is also given.
8.1
Compact Linear Operators
8.1.1
Definition: compact linear operator
A linear operator mapping a normed linear space Ex into a normed linear space Ey is said to be compact if it maps every bounded set of Ex into a compact set of Ey.
8.1.2
Remark
(i) A linear map A from a normed linear space Ex into a normed linear
space Ey is continuous if and only if it sends the open unit ball B(0, 1) in
Ex to a bounded subset of Ey .
(ii) Compactness of a linear operator A is a stronger property than boundedness, in the sense that A(B(0, 1)) is a compact subset of Ey, where B(0, 1) is the open unit ball.
(iii) A compact linear operator is also known as a completely
continuous operator in view of a result we shall prove in 8.1.14(a).
8.1.3
Remark
(i) A compact linear operator is continuous, but the converse is not
always true. For example, if Ex is an innite dimensional normed linear
space, then the identity map I on Ex is clearly linear and continuous, but
it is not compact. See example 1.6.16.
8.1.4
Lemma
Let Ex and Ey be normed linear spaces.
Further, let A1 and A2 map B(0, 1) into A1(B(0, 1)) and A2(B(0, 1)), which are respectively compact.
Then (i) (A1 + A2)(B(0, 1)) is compact;
(ii) (A1A2)(B(0, 1)) is compact.
Proof: (i) Let ‖xn‖ ≤ 1. Since A1(B(0, 1)) is compact, and hence sequentially compact (1.6), {A1xn} contains a convergent subsequence {A1xnp}. A2 being compact, we can similarly argue that {A2xnq} contains a convergent subsequence. Let {xnr} be a subsequence common to both {xnp} and {xnq}. Then {(A1 + A2)(xnr)} is convergent. Hence A1 + A2 is compact.
(ii) Let ‖xn‖ ≤ 1. A2(B(0, 1)) being compact and hence sequentially compact (1.6.17), {A2xn} contains a convergent, hence bounded, subsequence {A2xnp}. A1 being compact, {(A1A2)(xnp)} contains a convergent subsequence. Hence A1A2 maps the bounded sequence {xn} into a sequence with a convergent subsequence, and A1A2 is compact.
8.1.5
Examples
(1) Let Ex = Ey = C([0, 1]) and let
Ax = y(t) = ∫₀¹ K(t, s)x(s) ds.
The kernel K(t, s) is continuous on 0 ≤ t, s ≤ 1. We want to show that A is compact. Let {x(t)} be a bounded set of functions of C([0, 1]), ‖x‖ ≤ μ, and let L = max_{t,s} |K(t, s)|. Then y(t) satisfies
|y(t)| ≤ ∫₀¹ |K(t, s)| |x(s)| ds ≤ Lμ,
showing that y(t) is uniformly bounded. Furthermore, we show that the functions y(t) are equicontinuous.
Given ε > 0, we can find a δ > 0, on account of the uniform continuity of K(t, s), such that
|K(t2, s) − K(t1, s)| < ε/μ for |t2 − t1| < δ and every s ∈ [0, 1].
Therefore,
|y(t2) − y(t1)| = |∫₀¹ (K(t2, s) − K(t1, s))x(s) ds| ≤ max_s |K(t2, s) − K(t1, s)| ‖x‖ < (ε/μ)μ = ε
whenever |t2 − t1| < δ, for all y(t). Hence {y(t)} is equicontinuous.
By the Arzelà–Ascoli theorem (1.6.23) the set of functions {y(t)} is compact in the sense of the metric of the space C([0, 1]). Hence the operator A is compact.
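The uniform bound |y(t)| ≤ Lμ above can be checked on a discretization (our illustration, outside the text; the kernel K(t, s) = e^(−|t−s|) and the grid size are arbitrary choices):

```python
import numpy as np

# Quadrature approximation of Ax(t) = integral_0^1 K(t,s) x(s) ds for a
# continuous kernel; verify |y(t)| <= L * mu whenever ||x|| <= mu.
m = 400
s = np.linspace(0.0, 1.0, m)
K = np.exp(-np.abs(s[:, None] - s[None, :]))
L = float(K.max())                   # L = max |K(t,s)| = 1 for this kernel

rng = np.random.default_rng(0)
mu = 1.0
for _ in range(20):
    x = rng.uniform(-mu, mu, m)      # a bounded "function" with ||x|| <= mu
    y = (K @ x) / m                  # approximate integral operator
    assert float(np.max(np.abs(y))) <= L * mu + 1e-9
print("uniform bound |y(t)| <= L*mu verified on random samples")
```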
8.1.6
Lemma
If a sequence {xn } is weakly convergent to x0 and compact, then the
sequence is strongly convergent to x0 .
Proof: Let us assume, by way of contradiction, that {xn} is not strongly convergent to x0. Then we can find an ε > 0 and an increasing sequence of indices n1, n2, ..., nk, ... such that ‖xnk − x0‖ ≥ ε. Since the sequence {xnk} is compact, it contains a convergent subsequence {xnkj}. Thus, let {xnkj} converge strongly to u0. Moreover, xnkj → u0 weakly. Since at the same time xnkj → x0 weakly, u0 = x0. Thus on one hand ‖xnkj − x0‖ ≥ ε, whereas on the other ‖xnkj − x0‖ → 0, a contradiction to our hypothesis. Hence the lemma.
8.1.7
Theorem
A compact operator A maps a weakly convergent sequence into
a strongly convergent sequence.
Let the sequence {xn} converge weakly to x0. Then the norms of the elements of this sequence are bounded (theorem 6.3.3). Thus A maps the bounded sequence {xn} into a compact sequence {Axn}. Let yn = Axn. Since a compact linear operator is bounded, and since {xn} is weakly convergent to x0, by theorem 6.3.17 Axn converges weakly to Ax0 = y0. Given that A is compact and {xn} is bounded, {yn} = {Axn} is compact. Now, since yn → y0 weakly and {yn} is compact, by lemma 8.1.6,
yn → y0 strongly.
8.1.8
Theorem
Let A be a linear compact operator mapping an infinite dimensional
space E into itself and let B be an arbitrary bounded linear operator acting
in the same space. Then AB and BA are compact.
Proof: See lemma 8.1.4.
Note 8.1.1. In case A is a compact linear operator mapping an infinite dimensional space E into E and admits of an inverse A⁻¹, then AA⁻¹ = I. Since I is not compact, A⁻¹ is not bounded (for otherwise I would be compact by theorem 8.1.8).
8.1.9
Theorem
If a sequence {An} of compact linear operators mapping a normed linear space Ex into a Banach space Ey converges in the operator norm to the operator A, that is, if ‖An − A‖ → 0, then A is also a compact operator.
Proof: Let M be a bounded set in Ex and μ a constant such that ‖x‖ ≤ μ for every x ∈ M. For given ε > 0, there is an index n0 such that ‖An − A‖ < ε/μ for n ≥ n0(ε). Let A(M) = L and An0(M) = N. We assert that the set N is an ε-net of L. Let us take, for every y ∈ L, one of the preimages x ∈ M and put y0 = An0x ∈ N, to receive
‖y − y0‖ = ‖Ax − An0x‖ ≤ ‖A − An0‖ ‖x‖ < (ε/μ)μ = ε.
On the other hand, since An0 is compact and M is bounded, the set N is compact. It follows then that L, for every ε > 0, has a compact ε-net and is therefore itself compact (theorem 1.6.18). Thus, the operator A maps an arbitrary bounded set into a set whose closure is compact, and hence the operator A is compact.
8.1.10
Example
1. If Ex = Ey = L2([0, 1]), then the operator
Ax = y(t) = ∫₀¹ K(t, s)x(s) ds, with ∫₀¹∫₀¹ K²(t, s) dt ds < ∞,
is compact.
Proof: Let us assume first that K(t, s) is a continuous kernel. Let M be a bounded set in L2([0, 1]) and let
∫₀¹ x²(t) dt ≤ μ² for all x(t) ∈ M.
Consider the set of functions
y(t) = ∫₀¹ K(t, s)x(s) ds, x(s) ∈ M.
It is to be shown that the functions y(t) are uniformly bounded and equicontinuous (theorem 1.6.22). This implies the compactness of the set {y(t)} in the sense of uniform convergence, and also in the sense of convergence in the mean square. By the Cauchy–Bunyakovsky–Schwarz inequality (1.4.3) we have
|y(t)| = |∫₀¹ K(t, s)x(s) ds| ≤ (∫₀¹ K²(t, s) ds)^(1/2) (∫₀¹ x²(s) ds)^(1/2) ≤ Lμ,
where L = max_{t,s} |K(t, s)|. Consequently the functions y(t) are uniformly bounded. Furthermore,
|y(t2) − y(t1)| ≤ (∫₀¹ [K(t2, s) − K(t1, s)]² ds)^(1/2) (∫₀¹ x²(s) ds)^(1/2) < (ε/μ)μ = ε
for |t2 − t1| < δ, where δ is chosen such that
|K(t2, s) − K(t1, s)| < ε/μ for |t2 − t1| < δ.
The estimate |y(t2) − y(t1)| < ε does not depend on the positions of t1, t2 on [0, 1] and also does not depend on the choice of y(t). Hence, the functions y(t) are equicontinuous.
Thus, in the case of a continuous kernel the operator is compact.
Next let us assume K(t, s) to be an arbitrary square-integrable kernel. We select a sequence of continuous kernels {Kn(t, s)} which converges in the mean to K(t, s), i.e., a sequence such that
∫₀¹∫₀¹ (K(t, s) − Kn(t, s))² dt ds → 0 as n → ∞.
Set
An x = ∫₀¹ Kn(t, s)x(s) ds.
Then,
‖Ax − An x‖ = [∫₀¹ (∫₀¹ [K(t, s) − Kn(t, s)] x(s) ds)² dt]^(1/2)
≤ [∫₀¹ (∫₀¹ [K(t, s) − Kn(t, s)]² ds) (∫₀¹ x²(s) ds) dt]^(1/2)
= [∫₀¹∫₀¹ [K(t, s) − Kn(t, s)]² ds dt]^(1/2) ‖x‖.
Hence,
‖A − An‖ = sup_{x ≠ 0} ‖Ax − An x‖ / ‖x‖ ≤ [∫₀¹∫₀¹ [K(t, s) − Kn(t, s)]² dt ds]^(1/2).
Since Kn(t, s) → K(t, s) in the mean as n → ∞, ‖A − An‖ → 0. Since all the An are compact, A is also compact by theorem 8.1.9.
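Theorem 8.1.9 at work can be sketched numerically (our illustration, outside the text, assuming NumPy): truncated singular value decompositions give finite rank, hence compact, operators An with ‖A − An‖ → 0.

```python
import numpy as np

# A discretized stand-in for a kernel operator and its best rank-n
# approximations A_n obtained by truncating the SVD.
rng = np.random.default_rng(1)
m = 200
A = rng.standard_normal((m, m)) / m

U, sing, Vt = np.linalg.svd(A)
errs = []
for n in (5, 20, 80):
    A_n = (U[:, :n] * sing[:n]) @ Vt[:n, :]   # finite rank operator
    errs.append(np.linalg.norm(A - A_n, 2))   # spectral-norm error
print(errs)   # strictly decreasing toward 0 as the rank grows
```

The error ‖A − An‖ equals the (n+1)-th singular value of A, so it decreases to 0 as n grows, mirroring the mean-square kernel approximation above.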
8.1.11
Remark
The limit of a weakly convergent sequence {An } of compact operators is
not necessarily compact.
Let us consider an infinite dimensional Banach space E with a basis
{e_i}. Then every x ∈ E can be written in the form

x = Σ_{i=1}^∞ ξ_i e_i.

Let S_n x = Σ_{i=1}^n ξ_i e_i, so that S_n x is the projection of x onto
a finite dimensional space.
Let us consider the unit ball B(0, 1) = {x : x ∈ E, ‖x‖ ≤ 1}.
Then S_n(B(0, 1)) is closed and bounded in the n-dimensional space E_n
and hence compact. Thus S_n is compact.
As n → ∞, S_n x → x for every x, i.e., S_n converges weakly to I, where
the identity operator I is not compact.
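A concrete sketch of this remark in l² (the particular x below is an arbitrary choice): the projections S_n converge pointwise to the identity, yet the identity is not compact, because the unit vectors e_n stay a fixed distance apart.

```python
import math

M = 10_000
xi = [1.0 / i for i in range(1, M + 1)]           # coordinates of some x in l^2

def tail_norm(n):                                  # ||x - S_n x||
    return math.sqrt(sum(v * v for v in xi[n:]))

norms = [tail_norm(n) for n in (10, 100, 1000)]
assert norms[0] > norms[1] > norms[2]              # S_n x -> x

# but the limit I is not compact: ||e_n - e_m|| = sqrt(2) for n != m, so
# {I e_n} = {e_n}, the image of part of the unit ball, has no convergent
# subsequence
e_dist = math.sqrt(2)
print(norms, e_dist)
```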
8.1.12
Theorem (Schauder, 1930) [49]
Let E_x and E_y be normed linear spaces and A ∈ B(E_x, E_y). If A is
compact then A*, the adjoint of A, is a compact linear operator mapping
E_y* into E_x*. The converse holds if E_y is a Banach space.
Proof: Let A be a compact linear operator mapping E_x into E_y. Let us
consider a bounded sequence {f_n} in E_y*, say ‖f_n‖ ≤ α for all n. For
y_1, y_2 ∈ E_y,

|f_n(y_1) − f_n(y_2)| ≤ ‖f_n‖ ‖y_1 − y_2‖ ≤ α ‖y_1 − y_2‖.

Let L be the closure of A(B(0, 1)); L is compact since A is a compact
operator. Then {f_n|_L : n = 1, 2, ...} is a set of uniformly bounded,
equicontinuous functions on the compact metric space L. By the
Arzelà–Ascoli theorem (1.6.23), {f_n|_L} has a subsequence {f_{n_j}|_L}
which converges uniformly on L.
For i, j = 1, 2, ..., we have

‖A*(f_{n_i}) − A*(f_{n_j})‖ = sup{|A*(f_{n_i} − f_{n_j})(x)| : ‖x‖ ≤ 1}
                            = sup{|(f_{n_i} − f_{n_j})(Ax)| : ‖x‖ ≤ 1}
                            ≤ sup{|f_{n_i}(y) − f_{n_j}(y)| : y ∈ L}.

Since the sequence {f_{n_j}|_L} is uniformly Cauchy on L, we see that
(A*(f_{n_j})) is a Cauchy sequence in E_x*. It must converge in E_x*,
since E_x* is a complete normed linear space and hence a Banach space
(theorem 4.4.2). We have thus shown that (A*(f_n)) has a convergent
subsequence. Thus A* maps every bounded sequence in E_y* onto a sequence
with a convergent subsequence in E_x*, and is hence a compact operator.
Conversely, let us assume that E_y is a Banach space, A ∈ B(E_x, E_y),
and A* is a compact operator mapping E_y* into E_x*. Then, by the
argument above, A** is a compact operator mapping E_x** into E_y**. Now
let us consider the canonical embeddings J_{E_x} : E_x → E_x** and
J_{E_y} : E_y → E_y** introduced in sec. 5.6.6. Since
A** ∘ J_{E_x} = J_{E_y} ∘ A by sec. 6.1.5, we see that
J_{E_y}(A(B(0, 1))) = {A** J_{E_x}(x) : x ∈ B(0, 1) ⊂ E_x} is contained
in {A**(F) : F ∈ E_x**, ‖F‖ ≤ 1}.
This last set is totally bounded in E_y**, since A** is a compact map.
As a result, J_{E_y}(A(B(0, 1))) is a totally bounded subset of E_y**.
Since J_{E_y} is an isometry, A(B(0, 1)) is a totally bounded subset of
E_y. As E_y is a Banach space and A(B(0, 1)) is a totally bounded subset
of it, its closure is complete and totally bounded. Hence by theorem
1.6.18 the closure of A(B(0, 1)) is compact. Hence A is a compact
operator mapping E_x into E_y.
8.1.13
Theorem
Let E_x and E_y be normed linear spaces and A : E_x → E_y be linear. If
A is continuous and the range of A is finite dimensional, then A is
compact and R(A) is closed in E_y.
Conversely, if E_x and E_y are Banach spaces, A is compact and R(A) is
closed in E_y, then A is continuous and its range is finite dimensional.
Proof: Since A is linear and continuous, it is bounded by theorem 4.2.4.
Since R(A) is a finite dimensional subspace of E_y, it is closed [see
theorem 2.3.4]. Thus if {x_n} is a bounded sequence in E_x, {Ax_n} is a
bounded subset of the finite dimensional space R(A). We next show that A
is compact. By th. 2.3.1 every finite dimensional normed linear space of
a given dimension n is isomorphic to the n-dimensional Euclidean space
ℝⁿ. By the Heine–Borel theorem a closed and bounded subset of ℝⁿ is
compact. Therefore {Ax_n} is relatively compact, and hence A is compact.
Conversely, let us assume that E_x and E_y are Banach spaces and A is
compact with R(A) closed in E_y. Then A, being compact, is continuous.
Also, R(A) is a Banach space and A : E_x → R(A) is onto. Then by the
open mapping theorem (7.2.2), A(B(0, 1)) is open in R(A). Hence there is
some δ > 0 such that

X = {y ∈ R(A) : ‖y‖ ≤ δ} ⊂ A(B(0, 1)).

Since R(A) is closed, we have

X ⊂ (closure of A(B(0, 1))) ∩ R(A).

As the closure of A(B(0, 1)) is compact (A being a compact operator), we
find that the closed ball of radius δ about zero in the normed linear
space R(A) is compact. We can next show, using 2.3.8, that R(A) is
finite dimensional.
8.1.14
Remark
An operator A on a linear space E is said to be of finite rank if the
range of A is finite dimensional.
8.1.15
Theorem
Let E_x and E_y be normed linear spaces and A : E_x → E_y be linear.
(a) Let A be a compact operator mapping E_x into E_y. If x_n → x weakly
in E_x, then Ax_n → Ax in E_y.
(b) Let E_x be reflexive and let Ax_n → Ax in E_y whenever x_n → x
weakly in E_x. Then A is a compact linear operator mapping E_x into E_y.
Proof: (a) Let x_n → x weakly in E_x. By theorem 6.3.3, {x_n} is a
bounded sequence in E_x. Let us suppose, by way of contradiction, that
Ax_n ↛ Ax. Then there exist ε > 0 and a subsequence {x_{n_i}} such that
‖Ax_{n_i} − Ax‖ ≥ ε for all i = 1, 2, .... Since A is compact and
{x_{n_i}} is a bounded sequence, there is a subsequence {x_{n_{i_j}}} of
{x_{n_i}} such that Ax_{n_{i_j}} converges, as j → ∞, to some element y
in E_y. Then ‖y − Ax‖ ≥ ε, so that y ≠ Ax. On the other hand, if
f ∈ E_y*, then f ∘ A ∈ E_x* and, since x_n → x weakly in E_x, we have

f(Ax) = lim_{j→∞} f(Ax_{n_{i_j}}) = f(lim_{j→∞} Ax_{n_{i_j}}) = f(y).

Thus f(y − Ax) = 0 for every f ∈ E_y*. Then by 5.1.4 we must have
y = Ax. This contradiction proves that A(x_n) → Ax in E_y.
(b) Let {x_n} be a bounded sequence in E_x. Since E_x is reflexive,
Eberlein's theorem (6.4.4) shows that {x_n} has a weakly convergent
subsequence {x_{n_i}}. Let x_{n_i} → x weakly in E_x. Then by our
hypothesis Ax_{n_i} → Ax in E_y. Thus, for every bounded sequence {x_n}
in E_x, {Ax_n} contains a subsequence which converges in E_y. Hence A is
a compact map by 8.1.1.
8.1.16
Remark
The requirement of reflexivity of E_x cannot be dropped from 8.1.15(b).
For example, if A denotes the identity operator from l¹ to l¹, then
Schur's lemma [see Limaye [33]] shows that Ax_n → Ax whenever x_n → x
weakly in l¹. However, the identity operator is not compact.
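Theorem 8.1.15(a) can be seen concretely in l² with the diagonal operator (Ax)_i = ξ_i/i (the operator of example 8.2.10 below): the unit vectors e_n converge weakly to 0 but not in norm, while the compact A does send them to a norm-null sequence. The particular functional f is an arbitrary choice:

```python
# e_n -> 0 weakly in l^2: f(e_n) = f_n -> 0 for every f in l^2 (here one
# arbitrary f); but ||e_n|| = 1, so there is no norm convergence.  The
# compact diagonal operator (Ax)_i = x_i / i gives ||A e_n|| = 1/n -> 0.
f = [1.0 / i for i in range(1, 2001)]               # an l^2 functional (illustrative)

ns = (1, 10, 100, 1000)
weak = [abs(f[n - 1]) for n in ns]                  # |f(e_n)| -> 0
Ae = [1.0 / n for n in ns]                          # ||A e_n|| = 1/n -> 0
norm_e = 1.0                                        # ||e_n|| = 1 for every n

assert weak[-1] < 1e-2 and Ae[-1] < 1e-2 and norm_e == 1.0
print(weak, Ae)
```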
8.1.17
Theorem
The range of a compact operator A is separable.
Proof: Let K_n be the image of the ball {x : ‖x‖ ≤ n}. Since A is
compact, the closure of K_n is compact and therefore also a separable
set [see 1.6.19]. Let L_n be a countable everywhere dense set in the
closure of K_n. Since K = ∪_{n=1}^∞ K_n is the range of A,
L = ∪_{n=1}^∞ L_n is a countable, everywhere dense set in K.
8.2
Spectrum of a Compact Operator

In this section we develop the Riesz–Schauder theory of the spectrum of
a compact operator on a normed linear space E_x over the scalar field
𝕂 (= ℝ or ℂ). We show that this spectrum resembles the spectrum of a
finite matrix except for the number 0. We begin the study by referring
to some preliminary results (4.7.17–4.7.20).
8.2.1
Theorem
Let E_x be a normed linear space, A a compact linear operator mapping
E_x into E_x, and 0 ≠ k ∈ 𝕂. If {x_n} is a bounded sequence in E_x such
that Ax_n − kx_n → y in E_x, then there is a subsequence {x_{n_i}} of
{x_n} such that x_{n_i} → x in E_x and Ax − kx = y.
Proof: Since {x_n} is bounded and A is a compact operator, {x_n} has a
subsequence {x_{n_i}} such that {A(x_{n_i})} converges to some z in E_x.
Then

kx_{n_i} = (kx_{n_i} − Ax_{n_i}) + Ax_{n_i} → −y + z,

so that x_{n_i} → (z − y)/k = x (say). Also, since A is continuous,

Ax − kx = lim_{i→∞} {Ax_{n_i} − kx_{n_i}} = z − {z − y} = y.
8.2.2
Remark
The above result shows that if A is a compact linear operator mapping
E_x into E_x, and if {x_n} is a bounded sequence of approximate
solutions of Ax − kx = y, then a subsequence of {x_n} converges to an
exact solution of that equation. The following result, which is based on
Riesz's lemma (2.3.7), is instrumental in analysing the spectrum of a
compact operator.
8.2.3
Lemma
Let E_x be a normed linear space and A : E_x → E_x.
(a) Let 0 ≠ k ∈ 𝕂 and let E_y be a proper closed subspace of E_x such
that (A − kI)E_x ⊂ E_y. Then there is some x ∈ E_x such that ‖x‖ = 1 and
for all y ∈ E_y,

‖Ax − Ay‖ ≥ |k|/2.

(b) Let A be a compact linear operator mapping E_x into E_x and let
k_0, k_1, ... be scalars with |k_n| ≥ δ for some δ > 0 and
n = 0, 1, 2, .... Let E_0, E_1, ... and E^0, E^1, ... be closed
subspaces of E_x such that for n = 0, 1, 2, ...,

E_{n+1} ⊂ E_n,  (A − k_n I)(E_n) ⊂ E_{n+1},
E^n ⊂ E^{n+1},  (A − k_{n+1} I)(E^{n+1}) ⊂ E^n.

Then there are nonnegative integers p and q such that

E_{p+1} = E_p  and  E^{q+1} = E^q.
Proof: (a) First, we note that A(E_y) ⊂ E_y, since

Ay = [Ay − ky] + ky ∈ E_y for all y ∈ E_y.

Now by the Riesz lemma (2.3.7), there is some x ∈ E_x such that ‖x‖ = 1
and dist(x, E_y) ≥ 1/2.
Let us consider y ∈ E_y. Since Ax − kx ∈ E_y and Ay ∈ E_y, we have

‖Ax − Ay‖ = ‖kx − [kx − Ax + Ay]‖ = |k| ‖x − (1/k)[kx − Ax + Ay]‖
          ≥ |k| dist(x, E_y) ≥ |k|/2.
(b) Let us suppose that E_{p+1} is a proper closed subspace of E_p for
each p = 0, 1, 2, .... By (a) above, applied with k = k_p, we can find
y_p ∈ E_p such that ‖y_p‖ = 1 and for all y ∈ E_{p+1},

‖Ay_p − Ay‖ ≥ |k_p|/2 ≥ δ/2,  p = 0, 1, 2, ....

It follows that {y_p} is a bounded sequence in E_x and

‖Ay_p − Ay_r‖ ≥ δ/2,  p, r = 0, 1, ... with p ≠ r.

The above shows that {Ay_p} cannot have a convergent subsequence. But
this contradicts the fact that A is compact. Hence there is some
nonnegative integer p such that E_p = E_{p+1}.
It can similarly be proved that there is some nonnegative integer q such
that E^{q+1} = E^q.
8.2.4
Definitions: ρ(A), σ(A), σ_e(A), σ_a(A)
In view of the discussion in 4.7.17–4.7.20, we write the following
definitions:
(i) Resolvent set: ρ(A) = {λ ∈ 𝕂 : A − λI is invertible}.
(ii) Spectrum: σ(A) = {λ ∈ 𝕂 : A − λI does not have an inverse}. A
scalar λ belonging to σ(A) is known as a spectral value of A.
(iii) The eigenspectrum σ_e(A) of A consists of all λ in 𝕂 such that
A − λI is not injective (one-to-one). Thus λ ∈ σ_e(A) if and only if
there is some nonzero x in E_x such that Ax = λx; λ is called an
eigenvalue of A and x is called a corresponding eigenvector of A. The
subspace N(A − λI) is known as the eigenspace of A corresponding to the
eigenvalue λ.
(iv) The approximate eigenspectrum σ_a(A) consists of all λ in 𝕂 such
that (A − λI) is not bounded below. Thus λ ∈ σ_a(A) if and only if there
is a sequence {x_n} in E_x such that ‖x_n‖ = 1 for each n and
‖Ax_n − λx_n‖ → 0 as n → ∞. Then λ is called an approximate eigenvalue
of A. If λ ∈ σ_e(A) and x is a corresponding eigenvector, then letting
x_n = x/‖x‖ for all n, we conclude that λ ∈ σ_a(A). Hence,

σ_e(A) ⊂ σ_a(A) ⊂ σ(A).

(v) An operator A on a linear space E_x is said to be of finite rank if
the range of A is finite dimensional.
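In finite dimensions these sets can be computed directly; a minimal sketch with an arbitrary 2×2 matrix (for which σ(A) = σ_e(A) = σ_a(A) = {2, 3}, and every other scalar lies in ρ(A)):

```python
import numpy as np

A = np.array([[2.0, 1.0],
              [0.0, 3.0]])                    # arbitrary example matrix
spectrum = sorted(np.linalg.eigvals(A).real)
assert np.allclose(spectrum, [2.0, 3.0])      # sigma(A) = sigma_e(A) here

lam = 5.0                                     # lam in rho(A): A - lam*I invertible
R = np.linalg.inv(A - lam * np.eye(2))        # the resolvent operator at lam
assert np.allclose((A - lam * np.eye(2)) @ R, np.eye(2))
print(spectrum)
```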
8.2.5
Theorem
Let E_x be a normed linear space and A be a compact linear operator
mapping E_x into E_x.
(a) Every nonzero spectral value of A is an eigenvalue of A, so that

{λ : λ ∈ σ_e(A), λ ≠ 0} = {λ : λ ∈ σ(A), λ ≠ 0}.

(b) If E_x is infinite dimensional, then 0 ∈ σ_a(A).
(c) σ_a(A) = σ(A).
Proof: (a) Let 0 ≠ λ ∈ 𝕂. If λ is not an eigenvalue, then A − λI is
one-to-one. We prove that λ is not a spectral value of A, i.e., that
(A − λI) is invertible. We first show that (A − λI) is bounded below.
Otherwise, we can find a sequence {x_n} in E_x such that ‖x_n‖ = 1 for
each n and ‖(A − λI)(x_n)‖ → 0 as n → ∞. Then, by theorem 8.2.1, there
is a subsequence {x_{n_i}} of {x_n} such that x_{n_i} → x in E_x and
Ax − λx = θ. Since A − λI is one-to-one, we have x = θ. But
‖x‖ = lim ‖x_{n_i}‖ = 1. This leads to a contradiction. Thus A − λI is
bounded below.
Next we show that A − λI is onto, i.e., R(A − λI) = E_x. First, we show
that R(A − λI) is a closed subspace of E_x. Let (Ax_n − λx_n) be a
sequence in R(A − λI) which converges to some element y ∈ E_x. Then
((A − λI)x_n) is a bounded sequence in E_x and, since (A − λI) is
bounded below, i.e., ‖(A − λI)x‖ ≥ m‖x‖ for some m > 0, we see that
{x_n} is also a bounded sequence in E_x. By theorem 8.2.1, there is a
subsequence {x_{n_i}} of (x_n) such that x_{n_i} → x in E_x and
Ax − λx = y. Thus y ∈ R(A − λI), showing that the range of (A − λI) is
closed in E_x.
Now, let E_n = R((A − λI)^n) for n = 0, 1, 2, .... Then we show by
induction that each E_n is closed in E_x. For n = 0, E_0 = E_x; for
n = 1, E_1 is closed, as shown above. For n ≥ 2,

(A − λI)^n = A^n − nλA^{n−1} + ··· + ⁿC_r (−1)^r λ^r A^{n−r} + ···
             + (−1)^{n−1} nλ^{n−1} A + (−1)^n λ^n I
           = P_n(A) − λ_n I,

where λ_n = −(−1)^n λ^n and P_n(A) is an nth degree polynomial in A
without constant term.
Then, by lemma 8.1.4, P_n(A) is a compact operator, and clearly
λ_n ≠ 0. Further, since (A − λI) is one-to-one, (A − λI)^n is also
one-to-one.
If we replace A with P_n(A) and λ with λ_n and follow the arguments put
forward above, we conclude that R(P_n(A) − λ_n I) = E_n is a closed
subspace of E_x.
Since E_{n+1} ⊂ E_n and E_{n+1} = (A − λI)(E_n), part (b) of lemma
8.2.3 shows that there is a nonnegative integer p with E_{p+1} = E_p. If
p = 0 then E_1 = E_0. If p > 0, we want to show that E_p = E_{p−1}.
Let y ∈ E_{p−1}, that is, y = (A − λI)^{p−1} x for some x ∈ E_x. Then
(A − λI)y = (A − λI)^p x ∈ E_p = E_{p+1}, so that there is some z ∈ E_x
with (A − λI)y = (A − λI)^{p+1} z. Since (A − λI)(y − (A − λI)^p z) = θ
and since (A − λI) is one-to-one, it follows that y − (A − λI)^p z = θ,
i.e., y = (A − λI)^p z ∈ E_p. Thus E_p = E_{p−1}. Proceeding as above,
if p > 1, we see that E_{p+1} = E_p = E_{p−1} = E_{p−2} = ··· = E_1 =
E_0. But E_1 = R(A − λI) and E_0 = E_x. Hence A − λI is onto.
Being bounded below and onto, (A − λI) has a bounded inverse. Hence
every nonzero spectral value of A is an eigenvalue of A. Since
σ_e(A) ⊂ σ(A) always, the proof of (a) is complete.
(b) Let E_x be infinite dimensional. Let us consider an infinite
linearly independent set {e_1, e_2, ...} of E_x and let
E^n = span{e_1, e_2, ..., e_n}, n = 1, 2, .... Then E^n is a proper
subspace of E^{n+1}; E^n is of finite dimension and is closed by theorem
2.3.4. By the Riesz lemma (theorem 2.3.7), there is some element
a_{n+1} ∈ E^{n+1} such that ‖a_{n+1}‖ = 1 and dist(a_{n+1}, E^n) ≥ 1/2.
Let us assume that A is bounded below, i.e., ‖Ax‖ ≥ m‖x‖ for all
x ∈ E_x and some m > 0. Then for all p, q = 1, 2, ... with p ≠ q we have

‖Aa_p − Aa_q‖ ≥ m‖a_p − a_q‖ ≥ m/2,

so that {Aa_p} cannot have a convergent subsequence, which contradicts
the fact that A is compact.
Hence A is not bounded below, and so 0 ∈ σ_a(A).
(c) If E_x is finite dimensional and D(A) = E_x, then the operator A
can be represented by a matrix (a_{ij}); then A − λI is also represented
by a matrix, and σ(A) is composed of those scalars λ which are the roots
of the equation

det(A − λI) = |a_{ij} − λδ_{ij}| = 0  [see Taylor [55]].

Every such root is an eigenvalue, and hence an approximate eigenvalue,
so that

σ_a(A) = σ(A).

If E_x is infinite dimensional, then 0 ∈ σ_a(A) by (b) above. Also,
since σ_e(A) ⊂ σ_a(A) ⊂ σ(A) always, it follows from (a) above that
σ_a(A) = σ(A).
8.2.6
Lemma
Let E_x be a linear space, A : E_x → E_x linear, and Ax_n = k_n x_n for
some θ ≠ x_n ∈ E_x and k_n ∈ 𝕂, n = 1, 2, ....
(a) Let k_n ≠ k_m whenever n ≠ m. Then {x_1, x_2, ...} is a linearly
independent subset of E_x.
(b) Let E_x be a normed linear space, let A be a compact linear operator
mapping E_x into E_x, and let the set {x_1, x_2, ...} be linearly
independent and infinite. Then k_n → 0 as n → ∞.
Proof: (a) Since x_1 ≠ θ, the set {x_1} is linearly independent. Let
n = 2, 3, ... and assume that {x_1, x_2, ..., x_n} is linearly
independent. Let, if possible,
x_{n+1} = α_1 x_1 + α_2 x_2 + ··· + α_n x_n for some α_1, α_2, ..., α_n
in 𝕂.
Then k_{n+1} x_{n+1} = α_1 k_{n+1} x_1 + α_2 k_{n+1} x_2 + ··· + α_n k_{n+1} x_n
and also

k_{n+1} x_{n+1} = A(x_{n+1}) = Σ_{i=1}^n α_i Ax_i = α_1 k_1 x_1 + α_2 k_2 x_2 + ··· + α_n k_n x_n.

Thus we get on subtraction

α_1(k_{n+1} − k_1)x_1 + α_2(k_{n+1} − k_2)x_2 + ··· + α_n(k_{n+1} − k_n)x_n = θ.

Since x_1, x_2, ..., x_n are linearly independent, α_j(k_{n+1} − k_j) = 0
for each j. As x_{n+1} ≠ θ, we see that α_j ≠ 0 for some j, 1 ≤ j ≤ n,
so that k_{n+1} = k_j. But this is impossible. Thus the set
{x_1, x_2, ..., x_{n+1}} is linearly independent. Using mathematical
induction we conclude that {x_1, x_2, ...} is linearly independent.
(b) For n = 1, 2, ..., let E^n = span{x_1, x_2, ..., x_n}. Since x_{n+1}
does not belong to E^n, E^n is a proper subspace of E^{n+1}. Also, E^n
is closed in E_x by th. 2.3.4, and (A − k_{n+1}I)(E^{n+1}) ⊂ E^n since
(A − k_{n+1}I)x_{n+1} = θ.
If k_n ↛ 0 as n → ∞, we can assume, by passing to a subsequence, that
|k_n| ≥ δ > 0 for all n = 1, 2, .... Now 8.2.3(b) yields that
E^{q+1} = E^q for some nonnegative integer q, which contradicts the fact
that E^q is a proper subspace of E^{q+1}. Hence k_n → 0 as n → ∞.
8.2.7
Theorem
Let E_x be a normed linear space and A be a compact linear operator
mapping E_x into E_x.
(a) The eigenspectrum and the spectrum of A are countable sets and have
0 as the only possible limit point. In particular, if {λ_1, λ_2, ...} is
an infinite set of eigenvalues of A, then λ_n → 0 as n → ∞.
(b) Every eigenspace of A corresponding to a nonzero eigenvalue of A is
finite dimensional.
Proof: Since {λ : λ ∈ σ(A), λ ≠ 0} = {λ : λ ∈ σ_e(A), λ ≠ 0} by
8.2.5(a), we have to show that the set σ_e(A) is countable and 0 is the
only possible limit point of it.
For δ > 0, let

L_δ = {λ ∈ σ_e(A) : |λ| ≥ δ}.
Suppose that L_δ is an infinite set for some δ > 0. Let λ_n ∈ L_δ for
n = 1, 2, ... with λ_n ≠ λ_m whenever n ≠ m. If x_n is an eigenvector of
A corresponding to the eigenvalue λ_n, then by theorem 8.2.6(a) the set
{x_1, x_2, ...} is linearly independent, and consequently λ_n → 0 as
n → ∞ by 8.2.6(b). But this is impossible since |λ_n| ≥ δ for each n.
Hence L_δ is a finite set for every δ > 0. Since
σ_e(A) = ∪_{n=1}^∞ L_{1/n}, it follows that σ_e(A) is a countable set
and that σ_e(A) has no limit points except possibly the number 0.
Furthermore, σ_e(A) is a bounded subset of 𝕂, since |λ| ≤ ‖A‖ for every
λ ∈ σ_e(A). If {λ_1, λ_2, ...} is an infinite subset of σ_e(A), then it
must have a limit point by the Bolzano–Weierstrass theorem for 𝕂
(theorem 1.6.19). As the only possible limit point is 0, we see that
λ_n → 0 as n → ∞.
(b) Let 0 ≠ λ ∈ σ_e(A), and suppose that the eigenspace N(A − λI) is
infinite dimensional. Then there is an infinite linearly independent set
{x_1, x_2, ...} of eigenvectors, each corresponding to the same
eigenvalue λ. Applying lemma 8.2.6(b) with k_n = λ for all n, we
conclude that k_n → 0 as n → ∞. But this is impossible, since
k_n = λ ≠ 0 for each n.
Thus the eigenspace of A corresponding to λ is finite dimensional.
We next consider the spectrum of the transpose of a compact operator.
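Theorem 8.2.7 can be seen numerically: the eigenvalues of a discretized compact integral operator decay to 0. The symmetric kernel K(s, t) = min(s, t) and the grid are arbitrary illustrative choices (for this kernel the eigenvalues are known to be 4/((2n−1)²π²)):

```python
import numpy as np

N = 300
t = (np.arange(N) + 0.5) / N
h = 1.0 / N
A = np.minimum.outer(t, t) * h                  # Nystrom matrix of the operator

lams = np.sort(np.abs(np.linalg.eigvals(A)))[::-1]
assert lams[0] > 0.3                            # largest eigenvalue ~ 4/pi^2 ~ 0.405
assert lams[30] < 1e-2                          # the eigenvalues accumulate only at 0
print(lams[:5])
```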
8.2.8
Theorem
Let E_x be a normed linear space and A ∈ B(E_x, E_x). Then
σ(A*) ⊂ σ(A).
If E_x is a Banach space, then

σ(A) = σ_a(A) ∪ σ_e(A*) = σ(A*).

Proof: Let λ ∈ 𝕂 be such that (A − λI) is invertible, i.e., (A − λI)
has a bounded inverse. If (A − λI)B = I = B(A − λI) for some bounded
linear operator B mapping E_x into E_x, then by 6.1.5(ii)
B*(A* − λI) = I = (A* − λI)B*, where A*, B* stand for the adjoints of A
and B respectively. Hence σ(A*) ⊂ σ(A).
Let E_x be a Banach space. By 8.2.4(iii), λ ∈ σ(A) if and only if either
A − λI is not bounded below or R(A − λI) is not dense in E_x. Now
(A − λI) is not bounded below precisely when λ ∈ σ_a(A).
Let f ∈ E_x*. Then (A* − λI)f = 0 if and only if
f((A − λI)x) = ((A* − λI)f)(x) = 0 for every x ∈ E_x.
Now (A* − λI) is one-to-one, i.e., N(A* − λI) = {θ}, if and only if
f = θ whenever f(y) = 0 for every y ∈ R(A − λI); this happens if and
only if the closure of R(A − λI) is E_x, i.e., R(A − λI) is dense in
E_x. Hence R(A − λI) is not dense in E_x if and only if λ ∈ σ_e(A*).
Thus σ(A) = σ_a(A) ∪ σ_e(A*).
Finally, to conclude σ(A) = σ(A*), it will suffice to show that
σ_a(A) ⊂ σ(A*). Let λ ∉ σ(A*), that is, A* − λI is invertible. If
x ∈ E_x, then by 5.1.4 there is some f ∈ E_x* such that f(x) = ‖x‖ and
‖f‖ = 1, so that

‖x‖ = f(x) = ((A* − λI)(A* − λI)^{−1}(f))(x)
           = ((A* − λI)^{−1}(f))((A − λI)(x))
           ≤ ‖(A* − λI)^{−1}‖ ‖Ax − λx‖.

Thus (A − λI) is bounded below, that is, λ ∉ σ_a(A).
If A is a compact operator we get some interesting results.
8.2.9
Theorem
Let E_x be a normed linear space and A be a compact operator mapping
E_x into E_x. Then
(a) dim N(A − λI) = dim N(A* − λI) < ∞ for 0 ≠ λ ∈ 𝕂,
(b) {λ : λ ∈ σ_e(A*), λ ≠ 0} = {λ : λ ∈ σ_e(A), λ ≠ 0},
(c) σ(A*) = σ(A).
Proof: (a) By theorem 8.1.12, A* is a compact linear operator mapping
E_x* into E_x*. Then theorem 8.2.7 yields that the dimension r of
N(A − λI) and the dimension s of N(A* − λI) are both finite.
First we show that s ≤ r.
If r = 0, that is, λ ∉ σ_e(A), then by theorem 8.2.5(a) we see that
λ ∉ σ(A). Since σ(A*) ⊂ σ(A) by theorem 8.2.8, we have λ ∉ σ(A*). In
particular (A* − λI) is one-to-one, i.e., s = 0.
Next, let r ≥ 1. Consider a basis {e_1, e_2, ..., e_r} of N(A − λI).
Then from 4.8.3 we can find f_1, ..., f_r in E_x* such that
f_j(e_i) = δ_{ij}, i, j = 1, 2, ..., r.
Let, if possible, {φ_1, φ_2, ..., φ_{r+1}} be a linearly independent
subset of N(A* − λI) containing (r + 1) elements. By 4.8.2 there are
y_1, y_2, ..., y_{r+1} in E_x such that

φ_j(y_i) = δ_{ij},  i, j = 1, 2, ..., r + 1.
Consider the map B : E_x → E_x given by

B(x) = Σ_{i=1}^r f_i(x) y_i,  x ∈ E_x.

Since f_i ∈ E_x*, B is a bounded linear operator mapping E_x into E_x,
and B is of finite rank. Therefore B is a compact operator on E_x by
theorem 8.1.13.
Since A is also compact, lemma 8.1.4 shows that A − B is a compact
operator. We show that A − B − λI is one-to-one but not onto and obtain
a contradiction.
We note that φ_j ∈ N(A* − λI) and hence,

φ_j((A − B − λI)(x)) = ((A* − λI)φ_j)(x) − φ_j(Σ_{i=1}^r f_i(x) y_i)
                     = −Σ_{i=1}^r f_i(x) φ_j(y_i)
                     = −f_j(x) if 1 ≤ j ≤ r,  and 0 if j = r + 1.
Now, let x ∈ E_x satisfy (A − B − λI)x = θ. Then it follows that
f_j(x) = −φ_j((A − B − λI)x) = −φ_j(θ) = 0 for 1 ≤ j ≤ r, and in turn
B(x) = θ. Hence (A − λI)x = θ, i.e., x ∈ N(A − λI). Since
{e_1, e_2, ..., e_r} is a basis of N(A − λI), we have
x = α_1 e_1 + ··· + α_r e_r for some α_1, α_2, ..., α_r in 𝕂. But

0 = f_j(x) = f_j(α_1 e_1 + α_2 e_2 + ··· + α_r e_r) = α_j,  j = 1, 2, ..., r,

so that x = 0·e_1 + 0·e_2 + ··· + 0·e_r = θ.
Thus A − B − λI is one-to-one, because

(A − B − λI)x = θ ⟹ x = θ.

Next we assert that y_{r+1} ∉ R(A − B − λI). For if
y_{r+1} = (A − B − λI)x for some x ∈ E_x, then

1 = φ_{r+1}(y_{r+1}) = φ_{r+1}((A − B − λI)x) = 0,

as we have noted above. Hence (A − B − λI) is not onto. But A − B is
compact and λ ≠ 0, so by theorem 8.2.5(a) the one-to-one operator
A − B − λI must be onto; this is the desired contradiction.
Thus a linearly independent subset of N(A* − λI) can have at most r
elements, i.e., s ≤ r.
To obtain r ≤ s we proceed as follows. Let t denote the dimension of
N(A** − λI). Considering the compact operator A* in place of A, we find
that t ≤ s. If J_{E_x} denotes the canonical embedding of E_x into
E_x** considered in sec. 5.6.6, then by theorem 6.1.5,
A** ∘ J_{E_x} = J_{E_x} ∘ A. Hence J_{E_x}(N(A − λI)) ⊂ N(A** − λI), so
that r ≤ t. Thus r ≤ t ≤ s. Hence r = s.
(b) Let 0 ≠ λ ∈ 𝕂. Part (a) shows that N(A − λI) = {θ} if and only if
N(A* − λI) = {θ}, that is, λ ∈ σ_e(A) if and only if λ ∈ σ_e(A*).
(c) Since A and A* are compact operators, we have by theorem 8.2.5

{λ : λ ∈ σ(A), λ ≠ 0} = {λ : λ ∈ σ_e(A), λ ≠ 0},
{λ : λ ∈ σ(A*), λ ≠ 0} = {λ : λ ∈ σ_e(A*), λ ≠ 0}.

It follows from (b) above that

{λ : λ ∈ σ(A*), λ ≠ 0} = {λ : λ ∈ σ(A), λ ≠ 0}.
If E_x is finite dimensional, then det(A − λI) = det(A* − λI). Hence
0 ∈ σ(A*) if and only if 0 ∈ σ(A). If E_x is infinite dimensional, then
E_x* is infinite dimensional, and hence 0 ∈ σ_a(A) as well as
0 ∈ σ_a(A*) by theorem 8.2.5(b). Thus, in both cases, σ(A*) = σ(A).
It follows from the above that the spectrum of a compact operator is
very much like the spectrum of a finite matrix, except for the number
zero.
8.2.10
Examples
1. Let E_x = l^p, 1 ≤ p ≤ ∞, and

Ax = (ξ_1/1, ξ_2/2, ξ_3/3, ...), where x = {ξ_1, ξ_2, ξ_3, ...} ∈ l^p.

Let A_n x = (ξ_1/1, ξ_2/2, ..., ξ_n/n, 0, 0, ...). Since A_n is of
finite rank, A_n is a compact linear operator [see theorem 8.1.13].
Furthermore, for 1 ≤ p < ∞,

‖(A − A_n)x‖_p^p = Σ_{i=n+1}^∞ |ξ_i/i|^p = Σ_{i=n+1}^∞ |ξ_i|^p/i^p
                 ≤ (1/(n+1)^p) Σ_{i=n+1}^∞ |ξ_i|^p ≤ ‖x‖_p^p/(n+1)^p.

Hence

‖A − A_n‖ = sup_{x≠0} ‖(A − A_n)x‖/‖x‖ ≤ 1/(n+1).

Hence A_n → A as n → ∞, and since each A_n is compact, by theorem 8.1.9
A is also a compact operator. A is clearly one-to-one, so 0 is not an
eigenvalue of A; but since A is not bounded below, 0 is a spectral value
of A. Also, λ_n = 1/n is an eigenvalue of A and λ_n → 0 as n → ∞.
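A short numerical check of this example for p = 2; the particular x ∈ l² is an arbitrary choice:

```python
import math

M = 5_000
xi = [1.0 / i ** 0.75 for i in range(1, M + 1)]      # some x in l^2 (illustrative)
norm_x = math.sqrt(sum(v * v for v in xi))

def tail(n):   # ||(A - A_n)x|| = (sum_{i>n} (xi_i / i)^2)^{1/2}, truncated at M
    return math.sqrt(sum((xi[i] / (i + 1)) ** 2 for i in range(n, M)))

for n in (1, 10, 100):
    assert tail(n) <= norm_x / (n + 1) + 1e-12       # the bound ||A - A_n|| <= 1/(n+1)

eigs = [1.0 / n for n in range(1, 6)]                # A e_n = (1/n) e_n, eigenvalues -> 0
print([round(tail(n), 6) for n in (1, 10, 100)], eigs)
```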
2. The eigenspace of a compact operator corresponding to the eigenvalue
0 can be infinite dimensional. The easiest example is the zero operator
on an infinite dimensional normed linear space.
3. λ = 0 can be an eigenvalue of a compact operator A while λ = 0 is not
an eigenvalue of the transpose A*, and vice versa.
4. Let E_x = l^p and A denote the compact operator on E_x defined by

Ax = (x_3, x_4/3, x_5/4, ...)^T for x = (x_1, x_2, ...)^T ∈ l^p,

i.e., Ax is given by the infinite matrix

    [0  0  1  0    0   ...]
    [0  0  0  1/3  0   ...]
    [0  0  0  0    1/4 ...]
    [.....................]

acting on (x_1, x_2, x_3, x_4, ...)^T, which yields
(x_3, x_4/3, x_5/4, ...)^T.
Hence A* can be identified with the operator B on l^q, 1/p + 1/q = 1,
given by the transposed matrix

    [0  0    0   ...]
    [0  0    0   ...]
    [1  0    0   ...]
    [0  1/3  0   ...]
    [0  0    1/4 ...]
    [................]

Hence Bx = (0, 0, x_1, x_2/3, x_3/4, ...)^T for
x = (x_1, x_2, x_3, ...)^T ∈ l^q.
Since A(1, 0, 0, ...)^T = (0, 0, 0, ...)^T, we see that 0 is an
eigenvalue of A. But since B is one-to-one, 0 is not an eigenvalue of B.
Thus the compact operator B does not have the eigenvalue 0, whereas its
adjoint B* = A does, illustrating example 3 in both directions.
5. Let E_x = C([0, 1]). For x ∈ E_x, let

Ax(s) = (1 − s)∫₀^s t x(t) dt + s∫_s^1 (1 − t) x(t) dt,  s ∈ [0, 1].  (8.1)

Since the kernel is continuous, A is a compact operator mapping E_x into
E_x [see example 8.1.10].
The above is a Fredholm integral operator with the continuous kernel
given by

K(s, t) = (1 − s)t if 0 ≤ t ≤ s ≤ 1,  s(1 − t) if 0 ≤ s ≤ t ≤ 1.  (8.2)

Let x ∈ E_x and 0 ≠ λ ∈ 𝕂 be such that Ax = λx. Then for all s ∈ [0, 1],

λx(s) = (1 − s)∫₀^s t x(t) dt + s∫_s^1 (1 − t) x(t) dt.  (8.3)

Putting s = 0 and s = 1, we note that x(0) = 0 = x(1). Since t x(t) and
(1 − t)x(t) are integrable functions of t ∈ [0, 1], it follows that the
right-hand side of the equation given above is an absolutely continuous
function of s ∈ [0, 1]. Hence x is (absolutely) continuous on [0, 1].
This implies that t x(t) and (1 − t)x(t) are continuous functions of t
on [0, 1]. Thus the right-hand side is, in fact, a continuously
differentiable function of s, and we have, for all s ∈ [0, 1],
λx′(s) = (1 − s)s x(s) − ∫₀^s t x(t) dt − s(1 − s)x(s) + ∫_s^1 (1 − t) x(t) dt
       = −∫₀^s t x(t) dt + ∫_s^1 (1 − t) x(t) dt.

This shows that x′ is a continuously differentiable function, and for
all s ∈ [0, 1] we have

λx″(s) = −s x(s) − (1 − s)x(s) = −x(s).

Thus the differential equation λx″ + x = 0 has a nonzero solution
satisfying x(0) = 0 = x(1) if and only if λ = 1/(n²π²), n = 1, 2, ...,
and in such a case its most general solution is given by
x(s) = c sin nπs, s ∈ [0, 1], where c ∈ 𝕂.
Let λ_n = 1/(n²π²), n = 1, 2, ..., and x_n(s) = sin nπs for s ∈ [0, 1].
Thus each λ_n is an eigenvalue of A and the corresponding eigenspace
N(A − λ_n I) = span{x_n} is one dimensional.
Next, 0 is not an eigenvalue of A: if Ax = θ for some x ∈ E_x, then by
differentiating the expression for Ax(s) with respect to s twice, we see
that x(s) = 0 for all s ∈ [0, 1]. On the other hand, since A is compact
and E_x is infinite dimensional, 0 is an approximate eigenvalue of A by
theorem 8.2.5. Thus,

σ_e(A) = {1/π², 1/(2²π²), 1/(3²π²), ...}  and
σ_a(A) = σ(A) = {0, 1/π², 1/(2²π²), 1/(3²π²), ...}.
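The eigenvalues 1/(n²π²) of this example can be recovered numerically from a discretization of the kernel (8.2); the grid size is an arbitrary choice:

```python
import math
import numpy as np

N = 400
s = (np.arange(N) + 0.5) / N
h = 1.0 / N
S, T = np.meshgrid(s, s, indexing="ij")
K = np.where(T <= S, (1 - S) * T, S * (1 - T))   # the kernel of (8.2)
A = K * h                                        # Nystrom matrix of the operator

lams = np.sort(np.linalg.eigvalsh(A))[::-1]      # K is symmetric
expected = [1.0 / (n * n * math.pi ** 2) for n in (1, 2, 3)]
assert np.allclose(lams[:3], expected, rtol=1e-2)
print(lams[:3], expected)
```

The largest discretized eigenvalues agree with 1/π², 1/(4π²), 1/(9π²) to within the quadrature error.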
8.2.11
Problems [8.1 and 8.2]
1. Show that the zero operator on any normed linear space is compact.
2. If A_1 and A_2 are two compact linear operators mapping a normed
linear space E_x into a normed linear space E_y, show that A_1 + A_2 is
a compact linear operator.
3. If E_x is finite dimensional and A is linear mapping E_x into E_y,
then show that A is compact.
4. If A ∈ B(E_x, E_y) and E_y is finite dimensional, then show that A
is compact.
5. If A, B ∈ B(E_x, E_x) and A is compact, then show that AB and BA are
compact.
6. Let E_x be a Banach space and P ∈ B(E_x, E_x) be a projection. Then
show that P is a compact linear operator if and only if P is of finite
rank.
7. Given E_x an infinite dimensional normed linear space and A a
compact linear operator mapping E_x into E_x, show that λI − A is not a
compact operator, where λ is a nonzero scalar.
8. Let A = (a_{ij}) be an infinite matrix with a_{ij} ∈ 𝕂, i, j ∈ ℕ. If
x ∈ l^p and Ax ∈ l^r, where (Ax)_i = Σ_{j=1}^∞ a_{ij} x_j, show that in
the following cases A : l^p → l^r is a compact operator:
(i) 1 ≤ p < ∞, 1 ≤ r < ∞ and Σ_{j=1}^∞ |a_{ij}| → 0 as i → ∞;
(ii) 1 ≤ p < ∞, 1 ≤ r < ∞ and Σ_{i=1}^∞ Σ_{j=1}^∞ |a_{ij}|^r < ∞.
9. Let E_x = C([a, b]) with ‖·‖_∞ and A : E_x → E_x be defined by

Ax(s) = ∫_a^b K(s, t)x(t) dt,  x ∈ E_x,

where K(·, ·) ∈ C([a, b] × [a, b]). Let {A_n} be the Nyström
approximation of A corresponding to a convergent quadrature formula with
nodes t_{1,n}, t_{2,n}, ..., t_{n,n} in [a, b] and weights
w_{1,n}, w_{2,n}, ..., w_{n,n} in 𝕂, i.e.,

A_n x(s) = Σ_{j=1}^n K(s, t_{j,n}) x(t_{j,n}) w_{j,n},  x ∈ E_x, n ∈ ℕ,

where the nodes and weights are such that

Σ_{j=1}^n x(t_{j,n}) w_{j,n} → ∫_a^b x(t) dt as n → ∞

for every x ∈ C([a, b]). Then show that
(i) ‖Ax − A_n x‖ → 0 for every x ∈ C([a, b]);
(ii) ‖(A_n − A)A‖ → 0 and ‖(A_n − A)A_n‖ → 0 as n → ∞.
For the quadrature formula see 6.3.5. In order to solve the integral
equation Ax = y, x, y ∈ C([a, b]), numerically, the given equation is
approximately reduced to a system of algebraic equations by using
quadrature.
(Hint: (i) Show that for each u ∈ C([a, b]), {A_n u(s)} converges to
Au(s) and {A_n u : n ∈ ℕ} is equicontinuous; hence (i) follows. For
(ii), use the result in (i) and the fact that {Au : ‖u‖ ≤ 1} and
{A_n u : ‖u‖ ≤ 1, n ∈ ℕ} are equicontinuous.)
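A minimal sketch of the Nyström construction in problem 9, using the composite trapezoidal rule on [0, 1]; the kernel and the test function are arbitrary choices, and a very fine rule stands in for the exact integral:

```python
import math

K = lambda s, t: math.exp(s * t)       # hypothetical continuous kernel
x = lambda t: math.sin(t)              # hypothetical x in C([0, 1])

def trap_rule(n):                      # nodes and weights of the trapezoid rule
    h = 1.0 / n
    nodes = [i * h for i in range(n + 1)]
    w = [h] * (n + 1)
    w[0] = w[-1] = h / 2
    return nodes, w

def A_n(s, n):                         # Nystrom approximation A_n x(s)
    nodes, w = trap_rule(n)
    return sum(K(s, tj) * x(tj) * wj for tj, wj in zip(nodes, w))

ref = A_n(0.5, 20_000)                 # stands in for the exact Ax(0.5)
errs = [abs(A_n(0.5, n) - ref) for n in (4, 16, 64)]
assert errs[0] > errs[1] > errs[2]     # A_n x -> Ax pointwise
print(errs)
```

Refining the quadrature drives A_n x(s) to Ax(s), which is part (i) of the problem; solving Ax = y then reduces to a linear system in the nodal values x(t_{j,n}).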
8.3
Fredholm Alternative
In this section, linear equations with compact operators will be
considered. F. Riesz has shown that such equations admit the application
of the basic results of the Fredholm theory of linear integral
equations.
8.3.1
A linear equation with compact operator and its adjoint
Let A be a compact operator which maps a Banach space E into itself.
Consider the equation
Au − u = v  (8.4)

or

Pu = v  (8.5)

where P = A − I. Together with equation (8.5), consider

A*f − f = g  (8.6)

or

P*f = g  (8.7)

where A* is the adjoint operator of A and acts in the space E*. By
theorem 8.1.12, A* is a compact operator.
8.3.2
Lemma
Let N be the null space of the operator P, that is, the collection of
elements u such that Pu = θ. Then N is a finite-dimensional subspace of
E.
Proof: Let M be an arbitrary bounded set in N. For every u ∈ N, Au = u;
that is, the operator A leaves the elements of the subspace N invariant
and, in particular, carries the set M into itself. The subspace N of E
is then said to be invariant with respect to A.
As A is a compact operator, A carries M into a relatively compact set.
Consequently, every bounded set M ⊂ N is relatively compact, implying by
theorem 2.3 that N is finite dimensional.
8.3.3
Remark
The elements of the subspace N are eigenvectors of the operator A
corresponding to the eigenvalue λ_0 = 1. The above conclusion remains
valid if λ_0 is replaced by any nonzero eigenvalue.
Thus a compact linear operator can have only a finite number of linearly
independent eigenvectors corresponding to the same nonzero eigenvalue.
8.3.4
Lemma
Let L = P(E), that is, let L be the collection of elements v ∈ E
representable in the form Au − u = v. Then L is a closed subspace.
Proof: To prove that L is linear we note that if Au_1 − u_1 = v_1 and
Au_2 − u_2 = v_2, then
α_1 v_1 + α_2 v_2 = A(α_1 u_1 + α_2 u_2) − (α_1 u_1 + α_2 u_2),
α_1, α_2 ∈ ℂ. Thus v_1, v_2 ∈ L ⟹ α_1 v_1 + α_2 v_2 ∈ L. We next prove
that L is closed. We first show that there is a constant m > 0,
depending only on A − I, such that whenever the equation Pu = v is
solvable, at least one of the solutions satisfies the inequality

m‖u‖ ≤ ‖v‖.  (8.8)

Let u_0 be a solution of Pu = v. Then every other solution of Pu = v is
expressible in the form u = u_0 + w, where w is a solution of the
homogeneous equation

Pu = θ.  (8.9)
Let us consider F(w) = ‖u_0 + w‖, a continuous functional bounded
below. Let d = inf_{w∈N} F(w) and let {w_n} ⊂ N be a minimizing
sequence, that is,

F(w_n) = ‖u_0 + w_n‖ → d.  (8.10)

The sequence {‖u_0 + w_n‖} has a limit and is hence bounded. The
sequence {w_n} is then also bounded, since

‖w_n‖ = ‖(u_0 + w_n) − u_0‖ ≤ ‖u_0 + w_n‖ + ‖u_0‖.

Thus {w_n} is a bounded sequence in the finite-dimensional space N and
hence, by the Bolzano–Weierstrass theorem (theorem 1.6.19), has a
convergent subsequence. Hence we can find a subsequence {w_{n_p}} such
that w_{n_p} → w_0. Then

F(w_{n_p}) → F(w_0).  (8.11)

From (8.10) and (8.11) it follows that

F(w_0) = ‖u_0 + w_0‖ = d.

Therefore the equation Pu = v always has a solution ũ = u_0 + w_0 with
minimal norm. In order to show that (8.8) holds for ũ, we consider the
ratio ‖ũ‖/‖v‖, and let us assume that this ratio is not bounded. Then
there exist sequences v_n and ũ_n such that

‖ũ_n‖/‖v_n‖ → ∞.

Since ũ_n, evidently, is the minimal solution corresponding to v_n, we
can assume, without loss of generality, that ‖ũ_n‖ = 1; then
‖v_n‖ → 0.
Since the sequence {ũ_n} is bounded and A is compact, the sequence
{Aũ_n} is relatively compact and consequently contains a convergent
subsequence. Again, without loss of generality, let us assume that

Aũ_n → ũ_0.  (8.12)

However, since ũ_n = Aũ_n − v_n,

ũ_n → ũ_0, since v_n → θ,

and consequently

Aũ_n → Aũ_0.  (8.13)

From (8.12) and (8.13) it follows that

Aũ_0 = ũ_0, that is, ũ_0 ∈ N.

However, ũ_n − ũ_0 is then also a solution of Pu = v_n, and because of
the minimality of the norm of the solution ũ_n it follows that
‖ũ_n − ũ_0‖ ≥ ‖ũ_n‖ = 1, contradicting the convergence of {ũ_n} to
ũ_0.
Thus ‖ũ‖/‖v‖ is bounded, and if m = inf{‖v‖/‖ũ‖}, the inequality
(8.8) is proved.
Now suppose we are given a sequence v_n ∈ L convergent to v_0. Passing
to a subsequence {v_{n_p}}, we can assume that

‖v_{n_{p+1}} − v_{n_p}‖ < 1/2^{n_p}.

Let u_{n_0} be a minimal solution of the equation Pu = v_{n_1} and
u_{n_p}, p = 1, 2, ..., a minimal solution of the equation
Pu = v_{n_{p+1}} − v_{n_p}.
Then

m‖u_{n_p}‖ ≤ ‖v_{n_{p+1}} − v_{n_p}‖ < 1/2^{n_p}.

This estimate yields that Σ_{p=0}^∞ u_{n_p} converges, and if ũ is the
sum of the series, then

Pũ = P(lim_{k→∞} Σ_{p=0}^k u_{n_p}) = lim_{k→∞} (Pu_{n_0} + Σ_{p=1}^k Pu_{n_p})
   = lim_{k→∞} (v_{n_1} + Σ_{p=1}^k (v_{n_{p+1}} − v_{n_p}))
   = lim_{k→∞} v_{n_{k+1}} = v_0,

exhibiting v_0 ∈ L. Hence L is closed.
8.3.5
Theorem
The equation (8.4) is solvable for given v ∈ E, a Banach space, if and
only if f(v) = 0 for every linear functional f such that

A*f − f = θ.  (8.14)

Proof: Suppose that the equation Au − u = v is solvable, that is, v is
expressible in the form v = Au_0 − u_0 for some u_0 ∈ E. Let f be any
linear functional satisfying A*f − f = θ. Then

f(v) = f(Au_0 − u_0) = f(Au_0) − f(u_0) = A*f(u_0) − f(u_0)
     = (A*f − f)(u_0) = 0.

Next we have to show that every v satisfying the hypothesis of the
theorem lies in L = P(E). Let us suppose v ∉ L. Since L is closed, v
lies at a distance d > 0 from L, and by theorem 5.1.5 there exists a
linear functional f_0 such that f_0(v) = 1 and f_0(z) = 0 for every
z ∈ L. Hence f_0(Au − u) = (A*f_0 − f_0)(u) = 0 for all u ∈ E, that is,
A*f_0 − f_0 = θ. This yields a contradiction: on the one hand, by
construction, f_0(v) = 1, whereas on the other hand the hypothesis now
gives f_0(v) = 0. Hence v ∈ L, proving the sufficiency.
Compact Operators on Normed Linear Spaces
8.3.6 Remark
An equation Pu = v with the property that it has a solution u if f(v) = 0 for every f satisfying P*f = θ is said to be normally solvable. The essence of theorem 8.3.5 is that the closedness of L = P(E) is a sufficient condition for Pu = v to be normally solvable.
8.3.7 Corollary
If the conjugate homogeneous equation A*f − f = 0 has only a trivial solution, then the equation Au − u = v has a solution for any right-hand side.
8.3.8 Theorem
In order that equation (8.6) be solvable for g ∈ E* given, it is necessary and sufficient that g(u) = 0 for every u ∈ E such that
Au − u = θ.   (8.15)
Proof: To prove that the condition is necessary, we note that
g(u) = (A*f − f)(u) = f(Au − u) = 0.   (8.16)
For proving that the condition is sufficient we proceed as follows. Let us define the functional f0(v) on the subspace L by means of the equality f0(v) = g(u), u being one of the preimages of the element v (i.e., u ∈ P⁻¹v) under the mapping P. The functional f0 satisfying the hypothesis of the theorem is uniquely defined. For if u′ is another preimage of the same element v, then
Au − u = Au′ − u′, i.e., A(u − u′) − (u − u′) = 0,
whence g(u − u′) = 0, i.e., g(u) = g(u′).
If u1 and u2 are preimages of elements of L, we have
g(u1 + u2) = (A*f − f)(u1 + u2) = f(A(u1 + u2) − (u1 + u2)) = f((Au1 − u1) + (Au2 − u2)).
Since g ∈ E*, g(u1 + u2) = g(u1) + g(u2), and f((Au1 − u1) + (Au2 − u2)) = f(Au1 − u1) + f(Au2 − u2). This shows that f0 is additive and homogeneous. To prove the boundedness of f0 we proceed as follows. We can show, as in lemma 8.3.4, that the inequality m‖u‖ ≤ ‖v‖ is satisfied for at least one of the preimages u of the element v. Therefore,
|f0(v)| = |g(u)| ≤ ‖g‖ ‖u‖ ≤ (1/m)‖g‖ ‖v‖,
and the boundedness of f0 is proved. We can extend f0 by the Hahn–Banach theorem 5.1.3 to the entire space E to obtain a linear functional f such that
f(Au − u) = f(v) = f0(v) = g(u), or (A*f − f)(u) = g(u),
so that f is a solution of (8.6).
8.3.9 Corollary
If the equation Au − u = θ has only the null solution u = θ, then the equation A*f − f = g is solvable for any g on the right-hand side.
We next want to show that the homogeneous and non-homogeneous equations having solutions in the same spaces are also closely related.
8.3.10 Theorem
In order that the equation
Au − u = v   (8.4)
be solvable for every v, where A is a compact operator mapping a Banach space E into itself, it is necessary and sufficient that the corresponding homogeneous equation
Au − u = θ   (8.15)
have only the trivial solution u = θ. In this case, the solution of equation (8.4) is uniquely defined, and the operator T = A − I has a bounded inverse.
Proof: We first prove that the condition is necessary. Let us denote by N_k the null space of the operator T^k. It is clear that T^k u = θ implies T^{k+1} u = θ, that is, N_k ⊆ N_{k+1}.
Let the equation Au − u = v be solvable for every v, and let us assume that the homogeneous equation Au − u = θ has a nontrivial solution u1. Let u2 be a solution of the equation Au − u = u1, and in general let u_{k+1} be a solution of the equation Au − u = u_k, k = 1, 2, 3, . . .. We have T u_k = u_{k−1}, T² u_k = u_{k−2}, . . ., T^{k−1} u_k = u1 ≠ θ, whereas T^k u_k = T u1 = θ. Hence u_k ∈ N_k and u_k ∉ N_{k−1}, that is, each subspace N_{k−1} is a proper subspace of N_k. Then, by Riesz's lemma 2.3.7, there is in the subspace N_k an element v_k with norm 1 such that ‖v_k − u‖ ≥ 1/2 for every u ∈ N_{k−1}. Consider the sequence {A v_k}, which is compact since ‖v_k‖ = 1 (i.e., {v_k} is bounded) and A is a compact operator. On the other hand, let v_p and v_q be two such elements with p > q. Since
T^{p−1}(v_q − T v_p + T v_q) = T^{p−1} v_q − T^p v_p + T^p v_q = θ,
noting that p − 1 ≥ q, we have v_q − T v_p + T v_q ∈ N_{p−1}, and hence
‖A v_p − A v_q‖ = ‖v_p − (v_q − T v_p + T v_q)‖ ≥ 1/2,
so {A v_k} contains no convergent subsequence. Thus a contradiction arises from the assumption that equation (8.4) is solvable for every v in the presence of a nontrivial solution of the equation T u = θ. This proves the necessary part. Next we show that the condition is sufficient.
Suppose that the equation T u = θ has only the trivial solution. Then, by corollary 8.3.9, the equation
A*f − f = g   (8.6)
is solvable for any right side. Since A* is also a compact operator and E* a Banach space, we can apply the necessary part of the theorem just proved to equation (8.6). Hence the equation
A*f − f = θ   (8.14)
has only the trivial solution. But then, by corollary 8.3.7, equation (8.4) has a solution for every v, and it is proved that the condition is sufficient.
Since by the hypothesis of the theorem equation (8.4) has a unique solution, the inverse T⁻¹ of T (i.e., of (A − I)) exists, and T⁻¹ = (A − I)⁻¹. Because of the uniqueness property, the unique solution is at the same time minimal, and hence
m‖(A − I)⁻¹ v‖ ≤ ‖v‖.
8.3.11 Theorem
Let us consider the pair of equations
Au − u = θ   (8.15)
and
A*f − f = θ   (8.14)
where A and A* are compact operators mapping respectively the Banach space E into itself and the Banach space E* into itself. Then the above pair of equations have the same number of linearly independent solutions.
Proof: Let u1, u2, . . ., un be a basis of the subspace N of solutions of equation (8.15). Similarly, let f1, f2, . . ., fm be a basis of the subspace of solutions of equation (8.14).
Let us construct a system of functionals φ1, φ2, . . ., φn biorthogonal to u1, u2, . . ., un, that is, such that
φi(uj) = δij, i, j = 1, 2, . . ., n.
Let us also construct a system of elements w1, w2, . . ., wm biorthogonal to f1, f2, . . ., fm.
Let us assume n < m. We consider the operator V given by
V u = Au + Σ_{i=1}^{n} φi(u) wi.
Since A is a compact operator and the right-hand side of V u contains a finite number of terms, V is a compact operator. We next want to show that the equation V u − u = θ has only a trivial solution.
Let u0 be a solution of V u − u = θ. Then
f_k(V u0 − u0) = 0, or f_k(Au0 − u0 + Σ_{i=1}^{n} φi(u0) wi) = 0,
or
A*f_k(u0) − f_k(u0) + Σ_{i=1}^{n} φi(u0) f_k(wi) = 0,
or
(A*f_k − f_k)(u0) + Σ_{i=1}^{n} φi(u0) f_k(wi) = 0.
Since {f_i} and {w_i} are biorthogonal to each other, we have from the above equation
(A*f_k − f_k)(u0) + φ_k(u0) = 0.
Since f_k belongs to a basis of the subspace of solutions of equation (8.14), A*f_k − f_k = θ. Hence
φ_k(u0) = 0, k = 1, 2, . . ., n (n < m).
Hence we have V u0 = Au0, or Au0 − u0 = V u0 − u0 = θ. Since u0 ∈ N and {u_i} is a basis of N,
u0 = Σ_{i=1}^{n} αi ui.
However, φ_j(u0) = Σ_{i=1}^{n} αi φ_j(ui) = α_j. Since φ_j(u0) = 0 for j = 1, 2, . . ., n, we have α_j = 0. Hence u0 = θ.
Since the equation V u − u = θ has only a trivial solution, the equation V u − u = v is solvable for any v, and in particular for v = w_{n+1}. Let u* be a solution of this equation. Then we can write
f_{n+1}(w_{n+1}) = f_{n+1}(Au* − u* + Σ_{i=1}^{n} φi(u*) wi) = (A*f_{n+1} − f_{n+1})(u*) + Σ_{i=1}^{n} φi(u*) f_{n+1}(wi) = 0,
whereas on the other hand f_{n+1}(w_{n+1}) = 1. The contradiction obtained proves the inequality n < m to be impossible.
Let us assume, conversely, that m < n. Consider in the space E* the operator
V*f = A*f + Σ_{i=1}^{m} f(wi) φi.   (8.16)
This operator is adjoint to the operator V.
It is to be shown that the equation V*f − f = θ has only a trivial solution. For all k = 1, 2, . . ., n, taking note of the biorthogonality of {φi} and {ui},
(V*f − f)(u_k) = (A*f − f)(u_k) + Σ_{i=1}^{m} f(wi) φi(u_k) = f(Au_k − u_k) + f(w_k) = f(w_k)   (8.17)
since {u_k} is one of the bases of the subspace of solutions of (8.15). Thus, if f0 is a solution of the equation V*f − f = θ, then from (8.17) it follows that f0(w_k) = 0, k = 1, 2, . . ., m.
Hence (8.16) yields V*f0 = A*f0, and
θ = V*f0 − f0 = A*f0 − f0,
i.e., f0 is a solution of A*f − f = 0. However,
f0 = Σ_{i=1}^{m} βi fi = Σ_{i=1}^{m} f0(wi) fi = θ,
since f0(wi) = 0, i = 1, 2, . . ., m.
Since V* is a compact operator, by theorem 8.3.10 the equation V*f − f = g has a solution for any g, particularly for g = φ_{m+1}. Therefore, if f* is a solution of the above equation, we have
V*f* − f* = φ_{m+1}.
Therefore,
φ_{m+1}(u_{m+1}) = V*f*(u_{m+1}) − f*(u_{m+1}) = (A*f* − f*)(u_{m+1}) + Σ_{i=1}^{m} f*(wi) φi(u_{m+1}) = f*(Au_{m+1} − u_{m+1}) = 0.
On the other hand, we have by construction φ_{m+1}(u_{m+1}) = 1. The contradiction obtained proves the inequality m < n to be impossible. Thus m = n.
In what follows, we observe that if we combine theorems 8.3.5, 8.3.8, 8.3.10 and 8.3.11, we obtain a theorem which generalizes the famous Fredholm theorem for linear integral equations to any linear equation with a compact operator.
8.3.12 Theorem
Let us consider the equations
Au − u = v   (8.4)
and
A*f − f = g   (8.5)
where A and A* are compact operators mapping respectively the Banach spaces E and E* into themselves. Then either equations (8.4) and (8.5) have a solution for any element on the right side, and in this case the homogeneous equations
Au − u = θ   (8.13)
A*f − f = θ   (8.14)
have only a trivial solution, or the homogeneous equations have the same finite number of linearly independent solutions u1, u2, . . ., un; f1, f2, . . ., fn. In the latter case equation (8.4) will have a solution if and only if
fi(v) = 0 (respectively, g(ui) = 0 for (8.5)), i = 1, 2, . . ., n.
The general solution of equation (8.4) then takes the form
u = u0 + Σ_{i=1}^{n} ai ui,
where u0 is any solution of equation (8.4) and a1, a2, . . ., an are arbitrary constants. Correspondingly, the general solution of equation (8.5) has the form
f = f0 + Σ_{i=1}^{n} bi fi,
where f0 is any solution of equation (8.5) and b1, b2, . . ., bn are arbitrary constants.
We next consider an equation containing a parameter:
Au − λu = v, λ ≠ 0.   (8.18)
Since the equation can be expressed in the form (1/λ)Au − u = (1/λ)v, and (1/λ)A is compact (completely continuous) together with A, the theorems proved for equation (8.4) remain valid for equation (8.18).
Theorem 8.3.10 implies that for a given λ ≠ 0, either the equation Au − λu = v is solvable for any element on the right-hand side, or the homogeneous equation Au − λu = θ has a nontrivial solution. Hence every value of the parameter λ ≠ 0 is either regular or an eigenvalue, and the nonzero spectrum of the operator A consists of eigenvalues only.
8.3.13 Theorem
If A is a compact operator, then its spectrum consists of a finite or countable point set. All eigenvalues are located in the interval [−‖A‖, ‖A‖], and in the case of a countable spectrum these have only one limit point, λ = 0.
Proof: Let us consider the operator T_λ = A − λI. Now, for λ ≠ 0, T_λ = −λ(I − (1/λ)A), and by theorem 4.7.12 the operator I − (1/λ)A, and hence T_λ, has an inverse when (1/|λ|)‖A‖ < 1, i.e., the spectrum of the operator A lies in [−‖A‖, ‖A‖]. Let 0 < m < ‖A‖.
For a conclusive proof it will suffice to show that there can exist only a finite number of eigenvalues λ such that |λ| ≥ m. If that be not true, it is possible to select a sequence λ1, λ2, . . ., λn, . . . of distinct eigenvalues with |λi| ≥ m. Let u1, u2, . . ., un, . . . be a sequence of eigenvectors corresponding to these eigenvalues, so that
Aun = λn un.
It is required to show that the elements u1, u2, . . ., uk are linearly independent for every k. For k = 1 this is trivial. Suppose that u1, u2, . . ., uk are linearly independent, and let us assume that
u_{k+1} = Σ_{i=1}^{k} ci ui.   (8.19)
Then we have
λ_{k+1} u_{k+1} = A u_{k+1} = Σ_{i=1}^{k} ci A ui = Σ_{i=1}^{k} λi ci ui.   (8.20)
From (8.19) and (8.20) it follows (since λ_{k+1} ≠ 0) that
Σ_{i=1}^{k} (1 − λi/λ_{k+1}) ci ui = 0.
However, since 1 − λi/λ_{k+1} ≠ 0 and u1, u2, . . ., uk are linearly independent, this forces every ci = 0, and hence u_{k+1} = θ, which is impossible for an eigenvector. Hence u1, u2, . . ., u_{k+1} are linearly independent, and the distinct eigenvalues with |λ| ≥ m are finite in number. For the proof in the case of a countable spectrum see theorem 8.2.7.
8.4 Approximate Solutions
In the last section we have seen that, given A, a linear compact operator mapping a Banach space E into itself, if the homogeneous equation u − Au = θ has only a trivial solution, then the equation u − Au = v has a unique solution.
In this section we consider the question of finding approximations to the unique solution. We consider here operators of finite rank, i.e., operators having finite-dimensional range. The process of finding such an approximate solution has a deep relevance. In numerical analysis, in case we cannot find the solution of an equation in closed form, we find an approximation to the operator equation, so that the approximations can be reduced to finite-dimensional equations. To make the analysis complete, it is imperative in this case that the approximate operator equations have a unique solution and that this solution tends to the exact solution of the original equation in the limit. Thus, if A is a bounded linear operator mapping E into E and if Ã is an approximation to A and v0 ∈ E is an approximation to v, then the element u0 ∈ E satisfying u0 − Ãu0 = v0 is a close approximation to u satisfying u − Au = v.
8.4.1 Theorem
Let E be a Banach space and A be a compact operator on E, such that x = θ is the only solution of x − Ax = θ. Then (I − A) is invertible. Let the bounded linear operator Ã on E satisfy
β = ‖(Ã − A)(I − A)⁻¹‖ < 1.
Then for given v, v0 ∈ E, there are unique u, u0 ∈ E such that
u − Au = v, u0 − Ãu0 = v0,
and
‖u − u0‖ ≤ (‖(I − A)⁻¹‖/(1 − β)) (β‖v‖ + ‖v − v0‖).
Proof: Since A is compact and I − A is one-to-one, it follows from theorem 8.2.5(a) that (I − A) is invertible. As E is a Banach space and
‖[(I − Ã) − (I − A)](I − A)⁻¹‖ = β < 1,
it follows from theorem 4.7.12 that (I − Ã) is invertible, and
‖(I − Ã)⁻¹‖ ≤ ‖(I − A)⁻¹‖/(1 − β), ‖(I − A)⁻¹ − (I − Ã)⁻¹‖ ≤ β‖(I − A)⁻¹‖/(1 − β).
Let v, v0 ∈ E. Since I − A and I − Ã are invertible, there are unique u, u0 ∈ E such that
u − Au = v and u0 − Ãu0 = v0.
Also,
u − u0 = (I − A)⁻¹v − (I − Ã)⁻¹v0 = [(I − A)⁻¹ − (I − Ã)⁻¹]v + (I − Ã)⁻¹(v − v0).
Hence,
‖u − u0‖ ≤ (β‖(I − A)⁻¹‖/(1 − β))‖v‖ + (‖(I − A)⁻¹‖/(1 − β))‖v − v0‖.
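The bound of theorem 8.4.1 can be tried out numerically. Below is a minimal sketch with made-up data: E = ℝ² with the max norm is a Banach space, every operator on it is compact, and for diagonal A and Ã every norm in the bound can be computed by hand.

```python
# Theorem 8.4.1 on E = R^2 with the max norm (all data below is a made-up
# example): A = diag(0.5, 0.25), A~ = diag(0.51, 0.25), and we check that
# ||u - u0|| <= ||(I-A)^-1|| / (1 - beta) * (beta ||v|| + ||v - v0||).

def solve_diag(a, b, v):
    """Solve (I - diag(a, b)) u = v."""
    return [v[0] / (1 - a), v[1] / (1 - b)]

def max_norm(x):
    return max(abs(t) for t in x)

a, a_t, b = 0.5, 0.51, 0.25
v, v0 = [1.0, 1.0], [1.1, 1.0]

u = solve_diag(a, b, v)            # u - Au = v
u0 = solve_diag(a_t, b, v0)        # u0 - A~u0 = v0

inv_norm = max(1 / (1 - a), 1 / (1 - b))   # ||(I - A)^-1|| in the max norm
beta = abs(a_t - a) / (1 - a)              # ||(A~ - A)(I - A)^-1||
bound = inv_norm / (1 - beta) * (beta * max_norm(v)
                                 + max_norm([v[0] - v0[0], v[1] - v0[1]]))
err = max_norm([u[0] - u0[0], u[1] - u0[1]])
print(err <= bound + 1e-12)  # True
```

For this diagonal example the bound is in fact attained, since the perturbation and the data difference act on the same coordinate.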
Compact Operators on Normed Linear Spaces
313
can be constructed. We would
We would next show how the operator A
is an operator of nite rank, then the solution of the
also show that if A
0 = v0 can be reduced to the solution of a nite system of
equation u0 Au
linear equations which can be solved by standard methods. Next, when the
operator A is compact, we can nd several ways of constructing a bounded
of nite rank such that A A
is arbitrarily small.
linear operator A
8.4.2 Theorem
Let Ã be an operator of finite rank on a normed linear space E over K (ℝ or ℂ), given by
Ãu = f1(u)u1 + · · · + fm(u)um, u ∈ E,
where u1, u2, . . ., um are in E, and f1, f2, . . ., fm are linear functionals on E. Let

M = [ f1(u1) · · · f1(um) ]
    [   ⋮             ⋮   ]
    [ fm(u1) · · · fm(um) ]

(a) Consider v0 ∈ E and let ṽ = (f1(v0), f2(v0), . . ., fm(v0))ᵀ. Then
u0 − Ãu0 = v0 and ũ = (f1(u0), . . ., fm(u0))ᵀ
if and only if
ũ − Mũ = ṽ and u0 = v0 + ũ1u1 + ũ2u2 + · · · + ũmum,
where ũi, i = 1, . . ., m, is the i-th component of ũ.
(b) Let 0 ≠ λ ∈ K (ℝ or ℂ). Then λ is an eigenvalue of Ã if and only if λ is an eigenvalue of M. Furthermore, if ũ (resp. u0) is an eigenvector of M (resp. Ã) corresponding to λ, then u0 = ũ1u1 + ũ2u2 + · · · + ũmum (resp. ũ = (f1(u0), . . ., fm(u0))ᵀ) is an eigenvector of Ã (resp. M) corresponding to λ.
Proof: Let u0 − Ãu0 = v0 and ũ = (f1(u0), . . ., fm(u0))ᵀ. Then for i = 1, 2, . . ., m,
(Mũ)(i) = fi(u1)f1(u0) + · · · + fi(um)fm(u0) = fi(f1(u0)u1 + · · · + fm(u0)um) = fi(Ãu0) = fi(u0 − v0) = fi(u0) − fi(v0) = ũ(i) − ṽ(i).
Hence ũ − Mũ = ṽ.
Also,
u0 = v0 + Ãu0 = v0 + f1(u0)u1 + · · · + fm(u0)um = v0 + ũ1u1 + · · · + ũmum.
Conversely, let ũ − Mũ = ṽ and u0 = v0 + ũ1u1 + · · · + ũmum. Then
Ãu0 = f1(u0)u1 + · · · + fm(u0)um
 = [f1(v0) + Σ_{j=1}^{m} ũj f1(uj)]u1 + · · · + [fm(v0) + Σ_{j=1}^{m} ũj fm(uj)]um
 = [ṽ(1) + (Mũ)(1)]u1 + · · · + [ṽ(m) + (Mũ)(m)]um
 = ũ1u1 + · · · + ũmum = u0 − v0.
Also, for i = 1, 2, . . ., m,
ũ(i) = ṽ(i) + (Mũ)(i) = ṽ(i) + fi(u1)ũ(1) + fi(u2)ũ(2) + · · · + fi(um)ũ(m)
 = ṽ(i) + fi(ũ(1)u1 + ũ(2)u2 + · · · + ũ(m)um)
 = ṽ(i) + fi(u0 − v0) = fi(u0),
i.e., ũ = (f1(u0), . . ., fm(u0))ᵀ.
(b) Since λ ≠ 0, let μ = 1/λ. Replacing Ã by μÃ and letting v0 = θ in (a) above, we see that
u0 − μÃu0 = θ with u0 ≠ θ, and u0 = ũ(1)u1 + · · · + ũ(m)um,
if and only if ũ − μMũ = θ with ũ ≠ θ. Hence Ãu0 = λu0 with u0 ≠ θ if and only if Mũ = λũ with ũ ≠ θ. Thus, λ is an eigenvalue of Ã if and only if λ is an eigenvalue of M. Also, the eigenvectors of Ã and M corresponding to λ are related by
ũ = (f1(u0), . . ., fm(u0))ᵀ and u0 = ũ(1)u1 + ũ(2)u2 + · · · + ũ(m)um.
Letting λ = 1 in (b) above, we see that Ãu = u has a nonzero solution in E if and only if Mũ = ũ has a nonzero solution in ℝᵐ (resp. ℂᵐ). Also, for a given v0 ∈ E, the general solution of u − Ãu = v0 is given by u = v0 + ũ(1)u1 + · · · + ũ(m)um, where ũ = (ũ(1), ũ(2), . . ., ũ(m))ᵀ is the general solution of ũ − Mũ = (f1(v0), . . ., fm(v0))ᵀ. Thus, the problem of solving the operator equation u − Ãu = v0 is reduced to solving the matrix equation
ũ − Mũ = ṽ, where ṽ = (f1(v0), . . ., fm(v0))ᵀ.
We next describe some methods of approximating a compact operator by bounded operators of finite rank. First, we describe some methods related to projections.
8.4.3 Theorem
Let E be a Banach space and A be a compact operator on E. For n = 1, 2, . . ., let Pn : E → E be a bounded projection of finite rank, and set
A_n^P = PnA, A_n^S = APn, A_n^G = PnAPn.
If Pnu → u in E for every u ∈ E, then ‖A_n^P − A‖ → 0. If, in addition, Pn*f → f in E* for every f ∈ E*, then ‖A_n^S − A‖ → 0 and ‖A_n^G − A‖ → 0.
Proof: Let Pnu → u in E for every u in E. Then it follows that A_n^P u → Au in E for every u ∈ E. Since A is a compact linear operator mapping E → E, the set G = {Au : u ∈ E, ‖u‖ ≤ 1} is totally bounded. As E is a Banach space, we show below that {Pnv} converges to v uniformly on G. Since G is totally bounded, given ε > 0, there are v1, . . ., vm in G such that
G ⊆ B(v1, ε) ∪ B(v2, ε) ∪ · · · ∪ B(vm, ε).
Now, Pnvj → vj as n → ∞ for each j = 1, 2, . . ., m. Find n0 such that ‖Pnvj − vj‖ < ε for all n ≥ n0 and j = 1, 2, . . ., m. Let v ∈ G, and choose vj such that ‖v − vj‖ < ε. Then, for all n ≥ n0, we have
‖Pnv − v‖ ≤ ‖Pn(v − vj)‖ + ‖Pnvj − vj‖ + ‖vj − v‖ ≤ (‖Pn‖ + 1)‖v − vj‖ + ‖Pnvj − vj‖ ≤ (sup_n ‖Pn‖ + 2)ε,
the sequence {‖Pn‖} being bounded by theorem 4.5.7. Thus Pnv → v uniformly on G. Hence,
‖A_n^P − A‖ = ‖(Pn − I)A‖ = sup_{‖v‖≤1} ‖(Pn − I)Av‖ → 0 as n → ∞.
Let us next assume that Pn*f → f in E* for every f ∈ E* as well. By theorem 8.1.12, A* is a compact operator on E* and
(A_n^S)* = (APn)* = Pn*A*.
Replacing A by A* and Pn by Pn*, and recalling theorem 6.1.5(ii), we see that
‖A_n^S − A‖ = ‖(A_n^S − A)*‖ = ‖Pn*A* − A*‖ → 0, as before.
Also,
‖A_n^G − A‖ = ‖PnAPn − PnA + PnA − A‖ ≤ ‖Pn‖ ‖APn − A‖ + ‖PnA − A‖ = ‖Pn‖ ‖A_n^S − A‖ + ‖A_n^P − A‖,
which tends to zero as n → ∞, since the sequence {‖Pn‖} is bounded by theorem 4.5.7.
8.4.4 Remark
Definitions of A_n^P, A_n^S, A_n^G:
(i) A_n^P is called the projection of A on the n-dimensional subspace and is expressed as A_n^P = PnA.
(ii) A_n^S = APn is called the Sloan projection, in the name of the mathematician Sloan.
(iii) A_n^G = PnAPn is called the Galerkin projection, in the name of the mathematician Galerkin.
8.4.5 Examples of projections
We next describe several ways of constructing bounded projections Pn of finite rank such that Pnx → x as n → ∞.
1. Truncation of a Schauder expansion
Let E be a Banach space with a Schauder basis {e1, e2, . . .}. Let f1, f2, . . . be the corresponding coefficient functionals. For n = 1, 2, . . . define
Pnu = Σ_{k=1}^{n} fk(u)ek, u ∈ E.
Now each fk ∈ E*, and hence each Pn is a bounded operator from E to E. Also, Pn² = Pn, and each Pn is of finite rank. The very definition of the Schauder basis implies that Pnu → u in E for every u ∈ E. Hence ‖A − A_n^P‖ → 0 if A is a compact operator mapping E → E.
2. Projection of an element in a Hilbert space
Let H be a separable Hilbert space and {u1, u2, . . .} be an orthonormal basis for H [see 3.8.8]. Then for n = 1, 2, . . .,
Pnu = Σ_{k=1}^{n} ⟨u, uk⟩uk, u ∈ H,
where ⟨·, ·⟩ is the inner product on H. Note that each Pnu is obtained by truncating the Fourier expansion of u ∈ H [see 3.8.6]. Since H* can be identified with H (Note 5.6.1) and Pn* can be identified with Pn, we obtain ‖A − A_n^S‖ → 0 and ‖A − A_n^G‖ → 0, in addition to ‖A − A_n^P‖ → 0, as n → ∞.
Piecewise linear interpolation
Let E = C([a, b]) with the sup norm. For n = 1, 2, . . ., consider n nodes t_1^{(n)}, t_2^{(n)}, . . ., t_n^{(n)} in [a, b], i.e.,
a = t_0^{(n)} ≤ t_1^{(n)} < · · · < t_n^{(n)} ≤ t_{n+1}^{(n)} = b.
For j = 1, 2, . . ., n, let u_j^{(n)} ∈ C([a, b]) be such that
(i) u_j^{(n)}(t_i^{(n)}) = δij, i = 1, . . ., n;
(ii) u_1^{(n)}(a) = 1, u_j^{(n)}(a) = 0 for j = 2, . . ., n, and u_n^{(n)}(b) = 1, u_j^{(n)}(b) = 0 for j = 1, 2, . . ., n − 1;
(iii) u_j^{(n)} is linear on each of the subintervals [t_k^{(n)}, t_{k+1}^{(n)}], k = 0, 1, 2, . . ., n.
The functions u_1^{(n)}, u_2^{(n)}, . . ., u_n^{(n)} are known as the hat functions because of the shapes of their graphs. Let t ∈ [a, b]. Then u_j^{(n)}(t) ≥ 0 for all j = 1, 2, . . ., n. If t ∈ [t_k^{(n)}, t_{k+1}^{(n)}], then u_k^{(n)}(t) + u_{k+1}^{(n)}(t) = 1 and u_j^{(n)}(t) = 0 for all j ≠ k, k + 1. Thus u_1^{(n)}(t) + u_2^{(n)}(t) + · · · + u_n^{(n)}(t) = 1.
For x ∈ C([a, b]), define
Pn(x) = Σ_{j=1}^{n} x(t_j^{(n)}) u_j^{(n)}.
Then Pn is called a piecewise linear interpolatory projection. Let
h_n = max{t_{j+1}^{(n)} − t_j^{(n)} : j = 0, 1, 2, . . ., n}
denote the mesh of the partition of [a, b] by the given nodes. We show that Pnx → x in C([a, b]), provided h_n → 0 as n → ∞. Let us fix x ∈ C([a, b]) and let ε > 0. By the uniform continuity of x on [a, b], there is some δ > 0 such that |x(s) − x(t)| < ε whenever |s − t| < δ. Let us choose n0 such that h_n < δ for all n ≥ n0. Consider n ≥ n0 and t ∈ [a, b]. If u_j^{(n)}(t) ≠ 0, then t ∈ [t_{j−1}^{(n)}, t_{j+1}^{(n)}], so that
|t_j^{(n)} − t| ≤ h_n < δ and |x(t_j^{(n)}) − x(t)| < ε.
Hence,
|Pnx(t) − x(t)| = |Σ_{j=1}^{n} (x(t_j^{(n)}) − x(t)) u_j^{(n)}(t)| ≤ Σ_{j=1}^{n} |x(t_j^{(n)}) − x(t)| u_j^{(n)}(t) ≤ ε Σ_{j=1}^{n} u_j^{(n)}(t) = ε.
Thus ‖Pnx − x‖ → 0, provided h_n → 0.
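The convergence ‖Pnx − x‖ → 0 is easy to observe numerically. The sketch below uses equally spaced nodes and a sample function of our own choosing; it evaluates the piecewise linear interpolant directly on each subinterval, which agrees with the hat-function formula above.

```python
# The interpolatory projection Pn on C([0, 1]) with equally spaced nodes:
# Pn x agrees with x at the nodes and is linear in between. The sup-norm
# error is measured on a fine sample grid; x(t) = t^2 is a sample function.

def interp(nodes, values, t):
    """Evaluate the piecewise linear interpolant at t."""
    for k in range(len(nodes) - 1):
        if nodes[k] <= t <= nodes[k + 1]:
            w = (t - nodes[k]) / (nodes[k + 1] - nodes[k])
            return (1 - w) * values[k] + w * values[k + 1]
    raise ValueError("t lies outside the interpolation interval")

def sup_error(x, n, samples=1000):
    """Approximate ||Pn x - x|| for the uniform partition into n subintervals."""
    nodes = [j / n for j in range(n + 1)]
    values = [x(t) for t in nodes]
    return max(abs(interp(nodes, values, j / samples) - x(j / samples))
               for j in range(samples + 1))

x = lambda t: t * t
e10, e100 = sup_error(x, 10), sup_error(x, 100)
print(e10 > e100)  # True: the error shrinks as the mesh h_n decreases
```

For this smooth x the observed error behaves like h_n²/8, consistent with refining the mesh by a factor of 10 reducing the error by roughly a factor of 100.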
3. Linear integral equations with degenerate kernels
Let E denote either C([a, b]) or L2([a, b]). We consider the integral equation
∫_a^b k(t, s)x(s)ds = y(t).   (8.21)
The kernel k(t, s) is said to be degenerate if
k(t, s) = Σ_{i=1}^{m} ai(t)bi(s), t, s ∈ [a, b].   (8.22)
Thus the kernel can be expressed as the sum of products of functions, one depending exclusively on t and the other depending exclusively on s; ai and bi belong to E, i = 1, 2, . . ., m.
Thus, equation (8.21) can be reduced to the form
∫_a^b k(t, s)x(s)ds = ∫_a^b Σ_{i=1}^{m} ai(t)bi(s)x(s)ds = y(t).
Hence,
Σ_{i=1}^{m} ai(t) [∫_a^b bi(s)x(s)ds] = y(t) for t ∈ [a, b].
We write
Ãx(t) = Σ_{i=1}^{m} [∫_a^b bi(s)x(s)ds] ai(t).
We note that Ã is a bounded linear operator from E to E. Also, Ã is of finite rank, because R(Ã) ⊆ span{a1(t), . . ., am(t)}.
8.4.6 Theorem
Let E = C([a, b]) (resp. L2([a, b])) and k(t, s) ∈ C([a, b] × [a, b]). Let (kn(·, ·)) be a sequence of degenerate kernels in C([a, b] × [a, b]) (respectively, L2([a, b] × [a, b])) such that ‖k − kn‖∞ → 0 (resp. ‖k − kn‖2 → 0). If A and A_n^D are the Fredholm integral operators with kernels k(·, ·) and kn(·, ·) respectively, then ‖A − A_n^D‖ → 0, where ‖·‖ denotes the operator norm on E.
Proof: Let E = C([a, b]). Then
‖(A − A_n^D)x‖∞ = ‖∫_a^b (k − kn)(t, s)x(s)ds‖∞ ≤ (b − a)‖k − kn‖∞ ‖x‖∞.
Hence,
‖A − A_n^D‖ ≤ (b − a)‖k − kn‖∞ → 0 as n → ∞.
Next, let E = L2([a, b]). Then
‖(A − A_n^D)x‖2 = ‖∫_a^b (k − kn)(t, s)x(s)ds‖2 ≤ [∫_a^b ∫_a^b (k(t, s) − kn(t, s))² ds dt]^{1/2} [∫_a^b x²(s)ds]^{1/2} = ‖k − kn‖2 ‖x‖2 [see example 8.1.10].
Hence,
‖A − A_n^D‖ ≤ ‖k − kn‖2 → 0 as n → ∞.
4. Truncation of a Fourier expansion
Let k(·, ·) ∈ L2([a, b] × [a, b]) and {e1, e2, . . .} be an orthonormal basis for L2([a, b]). For i, j = 1, 2, . . ., let wi,j(t, s) = ei(t)ej(s), t, s ∈ [a, b]. Then {wi,j : i, j = 1, 2, . . .} is an orthonormal basis for L2([a, b] × [a, b]). Then by 3.8.6,
k = Σ_{i,j} ⟨k, wi,j⟩wi,j.
For n = 1, 2, . . . and s, t ∈ [a, b], let
kn(t, s) = Σ_{i,j=1}^{n} ⟨k, wi,j⟩wi,j(t, s) = Σ_{i,j=1}^{n} ⟨k, wi,j⟩ei(t)ej(s),
where
⟨k, wi,j⟩ = ∫_a^b ∫_a^b k(t, s)ei(t)ej(s) dt ds, i, j = 1, 2, . . ..
Thus kn(·, ·) is a degenerate kernel and ‖k − kn‖2 → 0 as n → ∞.
8.4.7 Examples
Let us consider the infinite homogeneous system of equations
xi − Σ_{j=1}^{∞} ai,j xj = 0, i = 1, 2, . . .,   (8.23)
in the denumerable set of unknowns {xi}, where the coefficients ai,j satisfy
Σ_{i,j=1}^{∞} |ai,j|² < ∞.
Let the only square-summable solution of (8.23) be zero. Let
A = (aij)_{i,j=1}^{∞}, X = (x1, x2, . . ., xn, . . .)ᵀ ∈ l2, Y = (y1, y2, . . ., yn, . . .)ᵀ ∈ l2.   (8.24)
Consider the infinite-dimensional equation
X − AX = Y.   (8.25)
If we now truncate the equation to an n-dimensional subspace, then (8.25) reduces to
Xn − AnXn = Yn,   (8.26)
where
An = (aij)_{i,j=1}^{n}, Xn = (x1, . . ., xn)ᵀ, Yn = (y1, y2, . . ., yn)ᵀ.
Since the homogeneous equation (8.23) has only the zero solution, the equation (8.25) has a unique solution. If we let Xn^{(i)} = 0 for i = n + 1, . . ., then the sequence {Xn} converges in l2 to the unique solution X = (x̄1, x̄2, . . ., x̄n, . . .)ᵀ of the denumerable system given by
xi − Σ_{j=1}^{∞} aij xj = yi, i = 1, 2, . . ..   (8.27)
In fact,
‖X − Xn‖ ≤ (‖(I − A)⁻¹‖/(1 − βn)) (βn‖Y‖2 + ‖Y − Yn‖2),
provided
βn = ‖A − An‖ < 1/‖(I − A)⁻¹‖,
where
AX = (Σ_{j=1}^{∞} aij xj)_{i=1}^{∞}, X ∈ l2,
and AnX has components Σ_{j=1}^{n} aij xj for i = 1, 2, . . ., n, and 0 for i > n.
These results follow from theorem 8.4.1 and theorem 8.4.3 if we note that A is a compact operator and that, if
PnX = (x1, . . ., xn, 0, 0, . . .)ᵀ, X ∈ l2,
then An = PnAPn. Since Pn is obtained by truncating the Fourier series of X ∈ l2, we see that PnX → X for every X in l2 and Pn*f → f for every f in (l2)*. Hence ‖A − A_n^G‖2 → 0.
We note that (x1, . . ., xn)ᵀ is a solution of the system (8.26) if and only if (x1, . . ., xn, 0, 0, . . .)ᵀ is a solution of the system
X − AnX = (y1, . . ., yn, 0, 0, . . .)ᵀ, X ∈ l2.
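A concrete instance of this example (our own choice of matrix): for the rank-one coefficients aij = 2^{−i}2^{−j} we have Σ|aij|² = 1/9 < ∞, the homogeneous system has only the zero solution, and the truncated solutions converge rapidly to the exact solution of the infinite system.

```python
# Example 8.4.7 with the rank-one coefficients a_ij = 2**-i * 2**-j, for
# which sum |a_ij|^2 = 1/9 < 1, so I - A is invertible on l2. The rank-one
# structure lets us solve the truncated n x n system in closed form.

def solve_truncated(n, y):
    """Solve x_i - sum_{j<=n} 2**-i * 2**-j * x_j = y_i, i = 1, ..., n."""
    t = sum(2.0 ** (-j) * y[j - 1] for j in range(1, n + 1))
    r = sum(4.0 ** (-j) for j in range(1, n + 1))
    s = t / (1 - r)                    # s = sum_j 2**-j x_j
    return [y[i - 1] + 2.0 ** (-i) * s for i in range(1, n + 1)]

y = [1.0] + [0.0] * 99                 # Y = (1, 0, 0, ...) truncated
x10 = solve_truncated(10, y)
x50 = solve_truncated(50, y)

# The infinite system has the exact solution x_i = y_i + (3/4) * 2**-i.
exact = [y[i - 1] + 0.75 * 2.0 ** (-i) for i in range(1, 51)]
err = max(abs(x50[i] - exact[i]) for i in range(50))
print(err < 1e-12)  # True
```

Here βn = ‖A − An‖ decays like 2^{−n}, so the error bound of theorem 8.4.1 shrinks geometrically with the truncation size.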
Problems [8.3 and 8.4]
1. Let Ex = C([0, 1]) and define A : Ex → Ex by Ax(t) = u(t)x(t), where u(t) ∈ Ex is fixed. Find σ(A) and show that it is closed. (Hint: the spectrum is the range of u(t), a continuous function defined on the compact set [0, 1].)
2. Let A be a compact linear operator mapping Ex into Ex and λ ≠ 0. Then show that A − λI is injective if and only if it is surjective.
3. If λ = λi for some i, show that the range of λ − T consists of all vectors orthogonal to the eigenspace corresponding to λi, where T is self-adjoint and compact. For such a vector f show that the general solution of (λ − T)u = f is
u = (1/λ)f + (1/λ) Σ_{λj ≠ λ} (λj⟨f, uj⟩/(λ − λj)) uj + g,
where g is an arbitrary element of the eigenspace corresponding to λi [see Taylor [55], ch. VI].
4. If A maps a normed linear space Ex into Ey and A is compact, then show that R(A) is separable.
(Hint: R(A) = ∪_{n=1}^{∞} A(Zn), where Zn = {x : ‖x‖ ≤ n}. Since A is compact, it may be seen that every infinite subset of A(Zn) has a limit point in Ey. Consequently, for each positive integer m there is a finite set of points in A(Zn) such that the balls of radius 1/m with centres at these points cover A(Zn).)
5. Show that the operator C, such that
Cu = −d²u/dx²,
subject to the boundary conditions u′(0) = u′(1) = 0, is positive but not positive-definite.
(Hint: If ⟨u, v⟩ = ∫_0^1 u(x)v(x)dx, show that ⟨Cu, u⟩ ≥ 0. If u ≡ 1, then ⟨Cu, u⟩ = 0 [see 9.5].)
6. Find the eigenvalues and eigenvectors of the operator C given in problem 5.
(Hint: Take u(x) = Σ Cn cos nπx, where ∫_0^1 cos nπx cos mπx dx = 0 for n ≠ m.)
7. Suppose Ex and Ey are infinite-dimensional normed linear spaces. Show that if A : Ex → Ey is a surjective bounded linear operator, then A is not a compact operator.
8. Let P0 = (1/(ij)), i, j = 1, 2, . . ., and v0 ∈ l2. Then show that the unique u0 ∈ l2 satisfying u0 − P0u0 = v0 is given by
u0 = v0 + (6/(6 − π²)) (Σ_{j=1}^{∞} v0,j/j) (1, 1/2, 1/3, . . .)ᵀ.
(Hint: P0u0 = (Σ_{j=1}^{∞} u0,j/j) (1, 1/2, 1/3, . . .)ᵀ for u0 ∈ l2, and Σ_{j=1}^{∞} 1/j² = π²/6.)
9. Let Ex = C([0, 1]), k(·, ·) ∈ C([0, 1] × [0, 1]) and P be the Fredholm integral operator with kernel k(·, ·).
(a) If |λ| < 1/‖k‖∞, then for every v ∈ C([0, 1]), show that there is a unique u ∈ C([0, 1]) such that u − λPu = v. Further, let
un(s) = v(s) + Σ_{j=1}^{n} λ^j ∫_0^1 k^{(j)}(s, t)v(t)dt, s ∈ [0, 1],
where k^{(j)}(·, ·) is the j-th iterated kernel; then show that
‖un − u‖ ≤ ‖v‖ |λ|^{n+1}‖k‖∞^{n+1}/(1 − |λ|‖k‖∞).
(b) If k(s, t) = 0 for all s ≤ t and 0 ≠ λ ∈ K (ℝ or ℂ), then show that
‖un − u‖ ≤ ‖v‖ Σ_{j=n+1}^{∞} |λ|^j ‖k‖∞^j / j!.
CHAPTER 9
ELEMENTS OF SPECTRAL THEORY OF SELF-ADJOINT OPERATORS IN HILBERT SPACES
A Hilbert space has some special properties: it is self-conjugate as well as an inner product space. Besides adjoint operators, this has given rise to self-adjoint operators, which have immense applications in analysis and theoretical physics. This chapter is devoted to a study of self-adjoint bounded linear operators.
9.1 Adjoint Operators
In 5.6.14 we defined adjoint operators in a Banach space. In a Hilbert space we can use the inner product to obtain an adjoint to a given linear operator. Let H be a Hilbert space and A a bounded linear operator defined on H with range in the same space. Let us consider the functional
f_y(x) = ⟨Ax, y⟩, y ∈ H.   (9.1)
As a bounded linear functional on the Hilbert space, f_y(x) can always be written in the form f_y(x) = ⟨x, y*⟩, where y* is some element in H. Let us suppose that there exists another element z* ∈ H such that f_y(x) = ⟨x, z*⟩. Then we have ⟨x, y* − z*⟩ = 0. Since x is arbitrary and x ⊥ (y* − z*), we have y* = z*. Thus y* ∈ H can be uniquely associated with the element y identifying the functional f_y. Thus we can find a correspondence between y and y* given by y* = A*y. A* is an operator defined on H with range in H. The operator A* is associated with A by
⟨Ax, y⟩ = ⟨x, A*y⟩   (9.2)
and is called the adjoint operator of A. If A* were not unique, let us suppose
⟨Ax, y⟩ = ⟨x, A*y⟩ = ⟨x, A1*y⟩.
Hence ⟨x, (A* − A1*)y⟩ = 0, i.e., x ⊥ (A* − A1*)y. Since x is arbitrary, (A* − A1*)y = 0. Again, the result is true for any y, implying that A* = A1*.
It can be easily seen that the definition of the adjoint operator derived here formally coincides with the definition given in 5.6.14 for the case of Banach spaces. It can be easily proved that the theorems on adjoint operators for Banach spaces developed in 5.6 remain valid in a complex Hilbert space too.
Note 9.1.1. In 5.6.18, we defined the adjoint of an unbounded linear operator in a space Ex. Let H be a Hilbert space. Let A be a linear operator (possibly unbounded) with domain D(A) everywhere dense in H. If the scalar product ⟨Ax, y⟩ for a given fixed y and every x ∈ D(A) can be represented in the form
⟨Ax, y⟩ = ⟨x, y*⟩,
then y belongs to the domain D(A*) of the operator adjoint to A. The adjoint operator A* itself is thus defined by
A*y = y*.
It can be argued, as above, that y* is unique and A* is a linear operator. Here, y ∈ D(A*).
9.1.1 Lemma
Given a complex Hilbert space H, the operator A* adjoint to a bounded linear operator A is bounded, and ‖A‖ = ‖A*‖.
Proof: ‖Ax‖² = ⟨Ax, Ax⟩ = ⟨x, A*Ax⟩ ≤ ‖x‖ ‖A*Ax‖ ≤ ‖x‖ ‖A*‖ ‖Ax‖.
Hence,
sup_{x≠θ} ‖Ax‖/‖x‖ ≤ ‖A*‖, i.e., ‖A‖ ≤ ‖A*‖.
Similarly, considering ‖A*x‖², we can show that ‖A*‖ ≤ ‖A‖. Hence ‖A‖ = ‖A*‖, showing that A* is bounded.
9.1.2 Lemma
In H, A** = A.
Proof: The operator adjoint to A* is denoted by A**. We have, for x, y ∈ H,
⟨y, A**x⟩ = ⟨A*y, x⟩ = conjugate of ⟨x, A*y⟩ = conjugate of ⟨Ax, y⟩ = ⟨y, Ax⟩.
Therefore ⟨y, A**x⟩ = ⟨y, Ax⟩ for all x, y ∈ H, showing that A** = A.
Elements of Spectral Theory of Self-Adjoint Operators. . .
9.1.3 Remark
(i) A*** = A*.
(ii) For A and B bounded linear operators, (A + B)* = A* + B*.
(iii) (λA)* = λ̄A*.
(iv) (AB)* = B*A*.
(v) If A has an inverse A⁻¹, then A* has an inverse and (A*)⁻¹ = (A⁻¹)*.
Proof: (i) A*** = (A**)* = A*, using lemma 9.1.2.
(ii) For x, y ∈ H,
⟨(A + B)*x, y⟩ = ⟨x, (A + B)y⟩ = ⟨x, Ay⟩ + ⟨x, By⟩ = ⟨A*x, y⟩ + ⟨B*x, y⟩,
or
⟨[(A + B)* − A* − B*]x, y⟩ = 0.
Since x and y are arbitrary, (A + B)* = A* + B*.
(iii) ⟨λAx, y⟩ = λ⟨Ax, y⟩ = λ⟨x, A*y⟩ = ⟨x, λ̄A*y⟩ for x, y ∈ H. Hence (λA)* = λ̄A*.
(iv) ⟨ABx, y⟩ = ⟨Bx, A*y⟩ = ⟨x, B*A*y⟩ = ⟨x, (AB)*y⟩, where x, y ∈ H. Hence (AB)* = B*A*.
(v) Let A mapping H into H have an inverse A⁻¹. For x, y ∈ H,
⟨(A⁻¹)*A*x, y⟩ = ⟨A*x, A⁻¹y⟩ = ⟨x, AA⁻¹y⟩ = ⟨x, y⟩.
Thus,
(A⁻¹)*A* = I.   (9.3)
Again,
⟨A*(A⁻¹)*x, y⟩ = ⟨(A⁻¹)*x, Ay⟩ = ⟨x, A⁻¹Ay⟩ = ⟨x, y⟩.
Thus,
A*(A⁻¹)* = I.   (9.4)
Hence it follows from (9.3) and (9.4) that (A*)⁻¹ = (A⁻¹)*.
9.2 Self-Adjoint Operators
9.2.1 Self-adjoint operators
A bounded linear operator A is said to be self-adjoint if it is equal to its adjoint, i.e., A = A*. Self-adjoint operators on a Hilbert space H are also called Hermitian.
Note 9.2.1. A linear (not necessarily bounded) operator A with domain D(A) dense in H is said to be symmetric if, for all x, y ∈ D(A), the equality
⟨Ax, y⟩ = ⟨x, Ay⟩
holds. If A is symmetric, it follows from Note 9.1.1 that
y ∈ D(A) ⟹ y ∈ D(A*).
Hence D(A) ⊆ D(A*). In other words, A ⊆ A*, i.e., A* is an extension of A. For A bounded, D(A) = D(A*) = H. If A = A* and D(A) is dense in H, A is called self-adjoint.
9.2.2
Examples
1. In an n-dimensional complex Euclidean space, a linear operator A can be identified with a matrix $(a_{ij})$ with complex numbers as elements. The operator adjoint to $A = (a_{ij})$ is $A^* = (\bar a_{ji})$. A self-adjoint operator is represented by a Hermitian matrix, i.e., one with $a_{ij} = \bar a_{ji}$.
If $(a_{ij})$ is real, then a Hermitian matrix becomes a symmetric matrix.
2. Adjoint operator corresponding to a Fredholm operator in $L_2([0,1])$.
If $Tf = g(s) = \int_0^1 k(s,t)f(t)\,dt$ (5.6.15), the kernel of the adjoint operator $T^*$ in complex $L_2([0,1])$ is $\overline{k(t,s)}$.
T is self-adjoint if $k(s,t) = \overline{k(t,s)}$.
3. In $L_2([0,1])$ let the operator A be given by $Ax = tx(t)$ for every function $x(t) \in L_2([0,1])$. It can be seen that A is self-adjoint.
9.2.3
Remark
Given A, a self-adjoint operator, then
(i) $\lambda A$ is self-adjoint, where $\lambda$ is real.
(ii) $A + B$ is self-adjoint if A and B are self-adjoint.
(iii) AB is self-adjoint if A and B are self-adjoint and $AB = BA$.
(iv) If $A_n \to A$ in the sense of norm convergence in the space of operators and all $A_n$ are self-adjoint, then A is also self-adjoint.
Proof: For (i)–(iii) see 9.1.3 (ii)–(iv).
(iv) Let $A_n \to A$ as $n \to \infty$ in the space of bounded linear operators.
Using 5.6.16, $\|A_n^* - A^*\| = \|A_n - A\|$.
Since $A_n \to A$ as $n \to \infty$, $\lim_{n\to\infty} A_n^* = A^*$. Since each $A_n$ is self-adjoint, $A_n^* = A_n$.
Thus, $A_n$ tends to both A and $A^*$ as $n \to \infty$.
Hence, $A = A^*$.
9.2.4 Definition: bilinear Hermitian form
A functional is said to be a bilinear form if it is a functional of two vectors and is linear in each of them.
Bilinear Hermitian Form
Let us consider $\langle Ax, y\rangle$, where A is self-adjoint.
Now, $\langle A(x_1 + x_2), y\rangle = \langle Ax_1, y\rangle + \langle Ax_2, y\rangle$.
Moreover, $\langle Ax, y\rangle = \langle x, Ay\rangle$.
Thus, $\langle Ax, y\rangle$ is a bilinear functional.
If A is self-adjoint, then we denote $\langle Ax, y\rangle$ by $A(x, y)$. Thus, $A(x, y) = \overline{A(y, x)}$.
This form is bounded in the sense that $|A(x, y)| \le C_A\|x\|\,\|y\|$, where $C_A$ is some constant.
9.2.5
Lemma
Every self-adjoint operator A generates a bounded bilinear Hermitian form
$$A(x, y) = \langle Ax, y\rangle = \langle x, Ay\rangle.$$
Conversely, if a bounded bilinear Hermitian form $A(x, y)$ is given, then it generates some self-adjoint operator A satisfying the equality
$$A(x, y) = \langle Ax, y\rangle. \qquad (9.5)$$
Proof: The first part follows from the definition in 9.2.4. Let us consider the bilinear Hermitian form given by (9.5). Let us keep y fixed in $A(x, y)$ and obtain a bounded linear functional of x. Consequently, $A(x, y) = \langle x, y^*\rangle$, where $y^*$ is a uniquely defined element. Thus we get an operator A, defined by $Ay = y^*$, such that $\langle x, Ay\rangle = A(x, y)$.
Now, $\langle x, A(y_1 + y_2)\rangle = A(x, y_1 + y_2) = A(x, y_1) + A(x, y_2) = \langle x, Ay_1\rangle + \langle x, Ay_2\rangle$.
Thus A is linear. Moreover, $|\langle x, Ay\rangle| = |A(x, y)| \le C_A\|x\|\,\|y\|$.
Putting $x = Ay$, we get from the above $\langle Ay, Ay\rangle \le C_A\|Ay\|\,\|y\|$, or $\|Ay\| \le C_A\|y\|$.
Hence $\|A\| \le C_A$, showing that A is bounded. To prove self-adjointness of A, we note that for $x, y \in H$ we have $\langle x, Ay\rangle = A(x, y) = \overline{A(y, x)} = \overline{\langle y, Ax\rangle} = \langle Ax, y\rangle$, implying that $A^* = A$ and $A(x, y) = \langle Ax, y\rangle$.
9.3
Quadratic Form
9.3.1
Definition: quadratic form
A bilinear Hermitian form $A(x, y)$ given by (9.5), taken with $y = x$, i.e., $A(x, x)$, is said to be a quadratic form.
Further, (i) $A(x, x)$ is real, since $A(x, y) = \overline{A(y, x)}$.
(ii) $A(x + y, x + y) = A(x, x) + A(x, y) + A(y, x) + A(y, y)$.
9.3.2
Lemma
Every bilinear Hermitian form $A(x, y)$ is uniquely determined by the corresponding quadratic form:
$$A(x, y) = \frac{1}{4}\{[A(x_1, x_1) - A(x_2, x_2)] + i[A(x_3, x_3) - A(x_4, x_4)]\}$$
where $x_1 = x + y$, $x_2 = x - y$, $x_3 = x + iy$, $x_4 = x - iy$.
The quadratic form $A(x, x)$ is bounded, that is, $|A(x, x)| \le C_A\|x\|^2$, if and only if the corresponding bilinear Hermitian form is bounded. Moreover, $\|A\| = \max(|m|, |M|) = \sup_{\|x\|=1}|\langle Ax, x\rangle|$, where m and M are defined below.
Proof: Let $m = \inf_{\|x\|=1}\langle Ax, x\rangle$ and $M = \sup_{\|x\|=1}\langle Ax, x\rangle$. The numbers m and M are called the greatest lower bound and the least upper bound, respectively, of the self-adjoint operator A.
Let $\|x\| = 1$. Then
$$|\langle Ax, x\rangle| \le \|Ax\|\,\|x\| \le \|A\|\,\|x\|^2 = \|A\|,$$
and, consequently, $C_A = \sup_{\|x\|=1}|\langle Ax, x\rangle| \le \|A\|$. \qquad (9.6)
On the other hand, for every $y \in H$, we have $|\langle Ay, y\rangle| \le C_A\|y\|^2$.
Let z be any element in H different from zero. We put
$$t = \left(\frac{\|Az\|}{\|z\|}\right)^{1/2} \quad\text{and}\quad u = \frac{1}{t}Az;$$
we get
$$\|Az\|^2 = \langle Az, Az\rangle = \langle A(tz), u\rangle = \frac{1}{4}\{\langle A(tz+u), tz+u\rangle - \langle A(tz-u), tz-u\rangle\}$$
$$\le \frac{1}{4}C_A[\|tz+u\|^2 + \|tz-u\|^2] = \frac{1}{2}C_A[\|tz\|^2 + \|u\|^2] = \frac{1}{2}C_A\left[t^2\|z\|^2 + \frac{1}{t^2}\|Az\|^2\right] = \frac{1}{2}C_A[\|Az\|\,\|z\| + \|z\|\,\|Az\|] = C_A\|Az\|\,\|z\|.$$
Hence, $\|Az\| \le C_A\|z\|$. Therefore,
$$\|A\| = \sup_{z\ne\theta}\frac{\|Az\|}{\|z\|} \le C_A = \sup_{\|x\|=1}|\langle Ax, x\rangle|. \qquad (9.7)$$
It follows from (9.6) and (9.7) that
$$\|A\| = \max\{|m|, |M|\} = \sup_{\|x\|=1}|\langle Ax, x\rangle|.$$
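In finite dimensions the identity $\|A\| = \sup_{\|x\|=1}|\langle Ax, x\rangle|$ can be checked directly. The sketch below (plain Python; the matrix is an arbitrary illustrative choice) scans unit vectors $x = (\cos t, \sin t)$ for the symmetric matrix [[2, 1], [1, 2]], whose norm equals its largest eigenvalue, 3.

```python
import math

# For A = [[2, 1], [1, 2]] (eigenvalues 1 and 3, so ||A|| = 3), the
# quadratic form on the unit circle is <Ax, x> = 2 + sin(2t); its
# supremum over unit vectors is 3, matching the operator norm.
A = [[2.0, 1.0], [1.0, 2.0]]

def quad_form(t):
    x = (math.cos(t), math.sin(t))
    Ax = (A[0][0]*x[0] + A[0][1]*x[1], A[1][0]*x[0] + A[1][1]*x[1])
    return Ax[0]*x[0] + Ax[1]*x[1]

sup_q = max(abs(quad_form(2*math.pi*k/10000)) for k in range(10000))
print(round(sup_q, 4))  # 3.0 (= ||A||), up to the grid resolution
```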
9.4
Unitary Operators, Projection Operators
In this section we study some well-behaved bounded linear operators in a Hilbert space which commute with their adjoints.
9.4.1
Definitions: normal and unitary operators
(i) Normal operator: Let A be a bounded linear operator mapping a Hilbert space H into itself. A is called a normal operator if $A^*A = AA^*$.
(ii) Unitary operator: A bounded linear operator A mapping a Hilbert space into itself is called unitary if $AA^* = I = A^*A$. Hence A has an inverse and $A^{-1} = A^*$.
9.4.2
Example 1
Let $H = \mathbb{C}^2$ and $x = (x_1, x_2)^T$. Define
$$Ax = \begin{pmatrix}x_1 - x_2\\ x_1 + x_2\end{pmatrix}, \qquad A^*x = \begin{pmatrix}x_1 + x_2\\ -x_1 + x_2\end{pmatrix}.$$
Then
$$A^*Ax = \begin{pmatrix}2x_1\\ 2x_2\end{pmatrix} = AA^*x.$$
Thus A is a normal operator.
9.4.3
Example 2
Let
$$A = \begin{pmatrix}\cos\theta & -\sin\theta\\ \sin\theta & \cos\theta\end{pmatrix}.$$
Then
$$A^* = \begin{pmatrix}\cos\theta & \sin\theta\\ -\sin\theta & \cos\theta\end{pmatrix}.$$
Therefore,
$$AA^* = A^*A = \begin{pmatrix}1 & 0\\ 0 & 1\end{pmatrix}.$$
Thus A is unitary.
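A quick numerical confirmation of example 9.4.3 (plain Python; the angle is an arbitrary choice): the rotation matrix is real, so $A^* = A^T$, and both products $AA^*$ and $A^*A$ reduce to the identity.

```python
import math

# The rotation matrix A(theta) satisfies A A^T = A^T A = I,
# hence is unitary (real orthogonal).
theta = 0.7
A = [[math.cos(theta), -math.sin(theta)],
     [math.sin(theta),  math.cos(theta)]]

def matmul(M, N):
    return [[sum(M[i][k]*N[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

def transpose(M):
    return [[M[j][i] for j in range(2)] for i in range(2)]

P = matmul(A, transpose(A))   # A A*  (real case: A A^T)
Q = matmul(transpose(A), A)   # A* A
I = [[1.0, 0.0], [0.0, 1.0]]
ok = all(abs(P[i][j] - I[i][j]) < 1e-12 and abs(Q[i][j] - I[i][j]) < 1e-12
         for i in range(2) for j in range(2))
print(ok)  # True
```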
9.4.4
Remark
(i) If A is unitary or self-adjoint, then A is normal.
(ii) The converse is not always true.
(iii) The operator A in example 9.4.2, although normal, is not unitary.
(iv) The operator A in example 9.4.3 is unitary and hence necessarily normal.
9.4.5
Remark
If B is a normal operator and C is a bounded operator such that $C^*C = I$, then the operator $A = CBC^*$ is normal.
For, $A^* = CB^*C^*$.
Now, $AA^* = CBC^*CB^*C^* = CBB^*C^*$.
Again, $A^*A = CB^*C^*CBC^* = CB^*BC^*$. Since $BB^* = B^*B$, $AA^* = A^*A$.
9.4.6
Example
Let $H = l_2$ and, for $x = (x_1, x_2, \ldots)^T$ in H, let $Cx = (0, x_1, x_2, \ldots)^T$.
Then
$$C^*x = (x_2, x_3, \ldots)^T \quad\text{for } x \in H.$$
Hence $C^*Cx = (x_1, x_2, \ldots)^T = x$ and $CC^*x = (0, x_2, x_3, \ldots)^T$ for all $x \in H$; in matrix form,
$$C = \begin{pmatrix}0 & 0 & 0 & \cdots\\ 1 & 0 & 0 & \cdots\\ 0 & 1 & 0 & \cdots\\ \vdots & & \ddots & \end{pmatrix}, \qquad C^* = \begin{pmatrix}0 & 1 & 0 & \cdots\\ 0 & 0 & 1 & \cdots\\ \vdots & & & \ddots\end{pmatrix}.$$
Thus $C^*C = I$ but $CC^* \ne I$.

9.4.7 Definition
Given a linear operator A and a unitary operator U, the operator $B = UAU^{-1} = UAU^*$ is called an operator unitarily equivalent to A.
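The behaviour of the one-sided shift C of example 9.4.6 can be seen on a truncated sequence (plain Python sketch; the window is padded with a trailing zero so the truncation is faithful): $C^*C$ restores x, while $CC^*$ destroys the first coordinate.

```python
def C(x):
    # Cx = (0, x1, x2, ...): shift right, dropping the last (zero) entry
    return [0.0] + x[:-1]

def C_star(x):
    # C*x = (x2, x3, ...): shift left, padding with zero
    return x[1:] + [0.0]

x = [1.0, 2.0, 3.0, 0.0]   # trailing zero keeps the truncation faithful
print(C_star(C(x)))         # [1.0, 2.0, 3.0, 0.0] -> C*C x = x
print(C(C_star(x)))         # [0.0, 2.0, 3.0, 0.0] -> CC* x != x in general
```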
9.4.8
Projection Operator
Let H be a Hilbert space and L a closed subspace of H. Then, by the orthogonal projection theorem, every $x \in H$ can be uniquely represented as $x = y + z$ with $y \in L$ and $z \in L^\perp$.
We then set $Px = y$ and $(I - P)x = z$.
This motivates us to define the projection operator; see 3.6.2.
9.4.9
Theorem
P is a self-adjoint operator with norm equal to one, and P satisfies $P^2 = P$.
We first show that P is a linear operator.
Let $x_1 = y_1 + z_1$, $y_1 \in L$, $z_1 \in L^\perp$, and $x_2 = y_2 + z_2$, $y_2 \in L$, $z_2 \in L^\perp$.
Now, $y_1 = Px_1$, $y_2 = Px_2$.
Since $x_1 + x_2 = [y_1 + y_2] + [z_1 + z_2]$, therefore,
$$P(x_1 + x_2) = y_1 + y_2 = Px_1 + Px_2, \qquad P(\lambda x_1) = \lambda y_1 = \lambda Px_1, \quad \lambda \in \mathbb{C}\ (\mathbb{R}).$$
Hence P is linear.
Since $y \perp z$, we have $\|x\|^2 = \|y + z\|^2 = \langle y + z, y + z\rangle = \langle y, y\rangle + \langle z, y\rangle + \langle y, z\rangle + \langle z, z\rangle = \|y\|^2 + \|z\|^2$.
Thus $\|y\|^2 = \|Px\|^2 \le \|x\|^2$, i.e., $\|Px\| \le \|x\|$ for every x.
Hence, $\|P\| \le 1$.
Since for $x \in L$, $Px = x$ and consequently $\|Px\| = \|x\|$, it follows that $\|P\| = 1$.
Next we want to show that P is a self-adjoint operator. Let $x_1 = y_1 + z_1$ and $x_2 = y_2 + z_2$.
We have $Px_1 = y_1$, $Px_2 = y_2$.
Therefore, $\langle Px_1, x_2\rangle = \langle y_1, y_2 + z_2\rangle = \langle y_1, y_2\rangle$.
Similarly, $\langle x_1, Px_2\rangle = \langle y_1 + z_1, y_2\rangle = \langle y_1, y_2\rangle$.
Consequently, $\langle Px_1, x_2\rangle = \langle x_1, Px_2\rangle$.
Hence, P is self-adjoint.
Since $x = y + z$, with $y \in L$ and $z \in L^\perp$, $Px \in L$ for every $x \in H$.
Hence, $P^2x = P(Px) = Px$ for every $x \in H$.
Hence, a projection P in a Hilbert space satisfies $P^2 = P$.
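Theorem 9.4.9 can be illustrated numerically for the projection onto a line in $\mathbb{R}^3$ (plain Python; the direction and test vectors are arbitrary illustrative choices).

```python
import math

# Orthogonal projection onto the line L spanned by a unit vector e:
# Px = <x, e> e.  Check P^2 = P and <Px, y> = <x, Py>.
e = [1.0/math.sqrt(3.0)] * 3

def inner(u, v):
    return sum(a*b for a, b in zip(u, v))

def P(x):
    c = inner(x, e)
    return [c*ei for ei in e]

x = [1.0, -2.0, 4.0]
y = [0.5, 3.0, -1.0]

p2 = max(abs(a - b) for a, b in zip(P(P(x)), P(x)))   # idempotency defect
sym = abs(inner(P(x), y) - inner(x, P(y)))            # self-adjointness defect
print(p2 < 1e-12, sym < 1e-12)  # True True
```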
9.4.10
Theorem
Every self-adjoint operator P satisfying $P^2 = P$ is an orthogonal projection onto some subspace L of the Hilbert space H.
Proof: Let L be the set of all elements $y = Px$, x being any element of H. Now, if $y_1 = Px_1 \in L$ and $y_2 = Px_2 \in L$ for $x_1, x_2 \in H$, then $y_1 + y_2 = Px_1 + Px_2 = P(x_1 + x_2) \in L$. Similarly, $\lambda y = \lambda Px = P(\lambda x) \in L$ for $\lambda \in \mathbb{C}\ (\mathbb{R})$. Hence, L is a linear subspace. Now, let $y_n \to y_0$, $y_n \in L$, say $y_n = Px_n$ with $x_n \in H$. Now $Py_n = P^2x_n = Px_n = y_n$.
Since P is continuous, $y_n \to y_0 \Rightarrow Py_n \to Py_0$. However, since $Py_n = y_n$, $y_n \to Py_0$. Consequently, $y_0 = Py_0$, i.e., $y_0 \in L$, showing that L is closed. Finally, $x - Px \perp Px$, since P is self-adjoint and $P^2 = P$, as is shown below:
$$\langle x - Px, Px\rangle = \langle Px - P^2x, x\rangle = 0.$$
Thus, it follows from the definition of L that P is the projection of H onto this subspace. Moreover, corresponding to an element $x \in H$, $Px \in L$ is unique. For if otherwise, let $P_1x$ be another projection of x on L. But that violates the orthogonal projection theorem [see 3.5].
9.4.11
Remark
(i) L mentioned above consists of all elements x satisfying $Px = x$.
(ii) By the orthogonal projection theorem, we can write $x = y + z$, where $y \in L$, $z \in L^\perp$ and $x \in H$. If we write $y = Px$, then $z = (I - P)x$. Thus $(I - P)$ is a projection onto $L^\perp$.
Moreover, $(I - P)^2 = I - 2P + P^2 = I - 2P + P = I - P$, and
$$\langle (I - P)x, y\rangle = \langle x, y\rangle - \langle Px, y\rangle = \langle x, y\rangle - \langle x, Py\rangle = \langle x, (I - P)y\rangle \text{ for all } x, y \in H,$$
so $(I - P)^* = I - P$.
Thus $(I - P)$ is a projection operator.
9.4.12
Theorem
For the projections $P_1$ and $P_2$ to be orthogonal, it is necessary and sufficient that the corresponding subspaces $L_1$ and $L_2$ are orthogonal.
Let $y_1 = P_1x$ and $y_2 = P_2x$, $x \in H$. Let $P_1$ be orthogonal to $P_2$, i.e., $P_1P_2 = 0$. Then $\langle y_1, y_2\rangle = \langle P_1x, P_2x\rangle = \langle x, P_1P_2x\rangle = 0$.
Since $y_1$ is any element of $L_1$ and $y_2$ is any element of $L_2$, we conclude that $L_1 \perp L_2$. Conversely, let $L_1 \perp L_2$. Then for $y_1 \in L_1$ and $y_2 \in L_2$ we have $\langle y_1, y_2\rangle = 0$, or $\langle P_1x, P_2x\rangle = \langle x, P_1P_2x\rangle = 0$, showing that $P_1 \perp P_2$.
9.4.13
Lemma
The necessary and sufficient condition that the sum of two projection operators $P_{L_1}$ and $P_{L_2}$ be a projection operator is that $P_{L_1}$ and $P_{L_2}$ be mutually orthogonal. In this case $P_{L_1} + P_{L_2} = P_{L_1+L_2}$.
Proof: Let $P_{L_1} + P_{L_2}$ be a projection operator P.
Then
$$(P_{L_1} + P_{L_2})^2 = P_{L_1} + P_{L_2}.$$
Therefore $P_{L_1}^2 + P_{L_1}P_{L_2} + P_{L_2}P_{L_1} + P_{L_2}^2 = P_{L_1} + P_{L_2}$.
Hence
$$P_{L_1}P_{L_2} + P_{L_2}P_{L_1} = 0.$$
Multiplying the above equation on the left by $P_{L_1}$, we have $P_{L_1}P_{L_2} + P_{L_1}P_{L_2}P_{L_1} = 0$; multiplying it on the right by $P_{L_1}$, we have $P_{L_1}P_{L_2}P_{L_1} + P_{L_2}P_{L_1} = 0$. Hence $P_{L_1}P_{L_2} = P_{L_2}P_{L_1} = -P_{L_1}P_{L_2}P_{L_1}$; substituting into the displayed equation gives $P_{L_1}P_{L_2}P_{L_1} = 0$, whence $P_{L_2}P_{L_1} = 0$, i.e., $P_{L_2} \perp P_{L_1}$.
Next, let us suppose that $P_{L_1}P_{L_2} = 0$ (and hence also $P_{L_2}P_{L_1} = (P_{L_1}P_{L_2})^* = 0$).
Then
$$(P_{L_1} + P_{L_2})^2 = P_{L_1}^2 + P_{L_1}P_{L_2} + P_{L_2}P_{L_1} + P_{L_2}^2 = P_{L_1}^2 + P_{L_2}^2 = P_{L_1} + P_{L_2}.$$
Thus $P_{L_1} + P_{L_2}$ is a projection operator P.
Since $P_{L_1} \perp P_{L_2}$, $L_1 \perp L_2$, and, if $x \in H$,
$$Px = P_{L_1}x + P_{L_2}x = x_1 + x_2 \text{ with } x_1 + x_2 \in L_1 + L_2. \qquad (9.8)$$
Further, if $x = x_1 + x_2$ is an element of $L_1 + L_2$, then
$$x = x_1 + x_2 = P_{L_1}(x_1 + x_2) + P_{L_2}(x_1 + x_2) = Px, \qquad (9.9)$$
since $P_{L_1}x_2 = 0$ and $P_{L_2}x_1 = 0$.
It follows from (9.8) and (9.9) that P maps H onto $L_1 + L_2$ and fixes every element of $L_1 + L_2$. Hence $P = P_{L_1+L_2}$.
9.4.14
Lemma
The necessary and sufficient condition for the product of two projections $P_{L_1}$ and $P_{L_2}$ to be a projection is that the projection operators commute, i.e., $P_{L_1}P_{L_2} = P_{L_2}P_{L_1}$. In this case $P_{L_1}P_{L_2} = P_{L_1\cap L_2}$.
Proof: If $P = P_{L_1}P_{L_2}$ is a projection, it is self-adjoint, and we have
$$P_{L_1}P_{L_2} = (P_{L_1}P_{L_2})^* = P_{L_2}^*P_{L_1}^* = P_{L_2}P_{L_1},$$
taking note that $P_{L_1}$ and $P_{L_2}$ are self-adjoint.
Hence $P_{L_1}$ commutes with $P_{L_2}$.
Conversely, if $P_{L_1}P_{L_2} = P_{L_2}P_{L_1}$, then
$$P^* = (P_{L_1}P_{L_2})^* = P_{L_2}^*P_{L_1}^* = P_{L_2}P_{L_1} = P_{L_1}P_{L_2} = P.$$
Thus P is self-adjoint.
Furthermore, $(P_{L_1}P_{L_2})^2 = P_{L_1}P_{L_2}P_{L_1}P_{L_2} = P_{L_1}P_{L_1}P_{L_2}P_{L_2} = P_{L_1}^2P_{L_2}^2 = P_{L_1}P_{L_2}$.
Hence $P = P_{L_1}P_{L_2}$ is a projection.
Let $x \in H$ be arbitrary. Then $Px = P_{L_1}P_{L_2}x = P_{L_2}P_{L_1}x$.
Thus Px belongs to $L_1$ and to $L_2$, that is, to $L_1 \cap L_2$.
Now, let $y \in L_1 \cap L_2$. Then $Py = P_{L_1}(P_{L_2}y) = P_{L_1}y = y$.
Thus P is the projection onto $L_1 \cap L_2$. This proves the lemma.
9.4.15
Denition
The projection P2 is said to be a part of the projection P1 , if
P1 P2 = P2 .
9.4.16
Remark
(i) $P_1P_2 = P_2 \Rightarrow (P_1P_2)^* = P_2^*$, i.e., $P_2P_1 = P_2$.
(ii) $P_{L_2}$ is a part of $P_{L_1}$ if and only if $L_2$ is a subspace of $L_1$.
9.4.17
Theorem
The necessary and sufficient condition for a projection operator $P_{L_2}$ to be a part of the projection operator $P_{L_1}$ is that the inequality $\|P_{L_2}x\| \le \|P_{L_1}x\|$ be satisfied for all $x \in H$.
Proof: $P_{L_2}P_{L_1} = P_{L_2}$ yields
$$\|P_{L_2}x\| = \|P_{L_2}P_{L_1}x\| \le \|P_{L_2}\|\,\|P_{L_1}x\| \le \|P_{L_1}x\|. \qquad (9.10)$$
Conversely, if (9.10) be true, then for every $x \in L_2$,
$$\|P_{L_1}x\| \ge \|P_{L_2}x\| = \|x\|, \text{ and since } \|P_{L_1}x\| \le \|x\|,$$
we have $\|P_{L_1}x\| = \|x\|$.
Therefore, $\|P_{H\ominus L_1}x\|^2 = \|x\|^2 - \|P_{L_1}x\|^2 = 0$ and hence $x \in L_1$, i.e., $L_2 \subseteq L_1$.
Therefore, $P_{L_2}x \in L_2 \subseteq L_1$ for every $x \in H$, which implies that
$$P_{L_1}P_{L_2}x = P_{L_2}x, \text{ i.e., } P_{L_1}P_{L_2} = P_{L_2}.$$
9.4.18
Theorem
The difference $P_1 - P_2$ of two projections is a projection operator if and only if $P_2$ is a part of $P_1$. In this case, $L_{P_1-P_2}$ is the orthogonal complement of $L_{P_2}$ in $L_{P_1}$.
Proof: If $P_1 - P_2$ is a projection operator, then so is $I - (P_1 - P_2) = (I - P_1) + P_2$.
Then, by lemma 9.4.13, $(I - P_1)$ and $P_2$ are mutually orthogonal, i.e., $(I - P_1)P_2 = 0$, i.e., $P_1P_2 = P_2$, showing that $P_2$ is a part of $P_1$.
Conversely, let $P_2$ be a part of $P_1$, i.e., $P_1P_2 = P_2$, or $(I - P_1)P_2 = 0$, i.e., $I - P_1$ is orthogonal to $P_2$.
Therefore, by lemma 9.4.13, $(I - P_1) + P_2$ is a projection operator, and $I - [(I - P_1) + P_2] = P_1 - P_2$ is also a projection operator. The condition $P_1P_2 = P_2$ implies that $P_1 - P_2$ and $P_2$ are orthogonal. Then, because of lemma 9.4.13, $L_{P_1} = L_{P_1-P_2} + L_{P_2}$.
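Lemma 9.4.13 in its simplest finite-dimensional form can be seen concretely (plain Python sketch; the vector is an arbitrary illustrative choice): the coordinate projections onto two orthogonal axes of $\mathbb{R}^3$ add up to the projection onto the plane they span.

```python
# P1 projects onto span{e1}, P2 onto span{e2}; L1 and L2 are orthogonal,
# so P1 + P2 is again a projection, namely onto the plane L1 + L2.

def P1(x): return [x[0], 0.0, 0.0]
def P2(x): return [0.0, x[1], 0.0]
def S(x):  return [a + b for a, b in zip(P1(x), P2(x))]  # P1 + P2

x = [3.0, -1.0, 5.0]
print(S(S(x)) == S(x))          # True: (P1 + P2)^2 = P1 + P2
print(P1(P2(x)) == [0.0]*3)     # True: P1 P2 = 0 (mutual orthogonality)
print(S(x))                     # [3.0, -1.0, 0.0] = projection onto the plane
```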
9.5
Positive Operators, Square Roots of a
Positive Operator
9.5.1 Definition: nonnegative operator, positive operator
A self-adjoint operator A in H over $\mathbb{C}\ (\mathbb{R})$ is said to be nonnegative if $\langle Ax, x\rangle \ge 0$ for all $x \in H$; this is denoted $A \ge 0$. A self-adjoint operator A in H over $\mathbb{C}\ (\mathbb{R})$ is said to be positive if $\langle Ax, x\rangle \ge 0$ for all $x \in H$ and $\langle Ax, x\rangle \ne 0$ for at least one x, and this is written $A > 0$.
9.5.2
Definition: greater and smaller operators
If A and B are self-adjoint operators and $A - B$ is positive, i.e., $A - B > 0$, then A is said to be greater than B, or B smaller than A, and this is expressed as $A > B$.
9.5.3
Remark
The relation $\ge$ on the set of self-adjoint operators on H is a partial order. The relation is
(i) reflexive, i.e., $A \ge A$;
(ii) transitive, i.e., $A \ge B$ and $B \ge C \Rightarrow A \ge C$;
(iii) antisymmetric, i.e., $A \ge B$ and $B \ge A \Rightarrow A = B$.
(iv) $A \ge B$, $C \ge D \Rightarrow A + C \ge B + D$.
(v) For any A, $AA^*$ and $A^*A$ are nonnegative:
for $x \in H$, $\langle AA^*x, x\rangle = \langle A^*x, A^*x\rangle = \|A^*x\|^2 \ge 0$ and
$\langle A^*Ax, x\rangle = \langle Ax, Ax\rangle = \|Ax\|^2 \ge 0$.
(vi) If $A \ge 0$ and $A^{-1}$ exists, then $A^{-1} > 0$. $A \ge 0 \Rightarrow \langle Ax, x\rangle \ge 0$.
Now, $A^{-1}$ exists $\Rightarrow$ ($Ax = \theta \Rightarrow x = \theta$).
Hence $\langle A^{-1}x, x\rangle = 0 \Rightarrow x = \theta$; for if x is nonnull, $A^{-1}x$ is nonnull and $\langle A^{-1}x, x\rangle > 0$. Thus $\langle A^{-1}x, x\rangle > 0$ for nonnull x.
(vii) If A and B are positive operators and the composition AB exists, then AB may not be a positive operator. For example, let $H = \mathbb{R}^2\ (\mathbb{C}^2)$, and
$$A(x_1, x_2)^T = (x_1 + x_2,\ x_1 + 2x_2)^T, \qquad B(x_1, x_2)^T = (x_1 + x_2,\ x_1 + x_2)^T.$$
Then
$$AB(x_1, x_2)^T = (2x_1 + 2x_2,\ 3x_1 + 3x_2)^T, \qquad BA(x_1, x_2)^T = (2x_1 + 3x_2,\ 2x_1 + 3x_2)^T$$
for all $(x_1, x_2)^T \in \mathbb{R}^2\ (\mathbb{C}^2)$.
AB is not a positive operator since AB is not self-adjoint.
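The 2×2 counterexample in (vii) can be verified directly (plain Python):

```python
# A = [[1, 1], [1, 2]] and B = [[1, 1], [1, 1]] are both nonnegative
# symmetric matrices, yet their product AB is not symmetric, hence AB
# is not a positive operator.

def matmul(M, N):
    return [[sum(M[i][k]*N[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

A = [[1, 1], [1, 2]]
B = [[1, 1], [1, 1]]
AB = matmul(A, B)
print(AB)                      # [[2, 2], [3, 3]]
print(AB[0][1] == AB[1][0])    # False: AB is not self-adjoint
```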
9.5.4
Example
Let us consider the symmetric operator B given by
$$Bu = -\frac{d^2u}{dx^2},$$
the functions u(x) being subject to the boundary conditions $u(0) = u(1) = 0$, the field being the segment $0 < x < 1$:
$$D(B) = \{u(x) : u(x) \in C^2(0,1),\ u(0) = u(1) = 0\}.$$
Take $H = L_2([0,1])$. Then, for all $u, v \in D(B)$, integrating by parts (the boundary terms vanish),
$$\langle Bu, v\rangle = -\int_0^1 \frac{d^2u}{dx^2}\,\overline{v(x)}\,dx = \int_0^1 \frac{du}{dx}\,\overline{\frac{dv}{dx}}\,dx = -\int_0^1 u(x)\,\overline{\frac{d^2v}{dx^2}}\,dx = \langle u, Bv\rangle.$$
Hence, B is symmetric. Moreover,
$$\langle Bu, u\rangle = -\left[\frac{du}{dx}\,\bar u\right]_{x=0}^{x=1} + \int_0^1 \left|\frac{du}{dx}\right|^2 dx = \int_0^1 \left|\frac{du}{dx}\right|^2 dx \ge 0.$$
9.5.5
Theorem
If two positive self-adjoint operators A and B commute, then their product is also a positive operator.
Proof: Let us put
$$A_1 = \frac{A}{\|A\|},\quad A_2 = A_1 - A_1^2,\ \ldots,\ A_{n+1} = A_n - A_n^2,\ \ldots$$
and show that
$$0 \le A_n \le I \text{ for every } n. \qquad (9.11)$$
The above is true for $n = 1$. Let us suppose that (9.11) is true for $n = k$.
Then $\langle A_k^2(I - A_k)x, x\rangle = \langle (I - A_k)A_kx, A_kx\rangle \ge 0$, since $(I - A_k)$ is a nonnegative operator. Hence $A_k^2(I - A_k) \ge 0$.
Analogously, $A_k(I - A_k)^2 \ge 0$.
Hence,
$$A_{k+1} = A_k^2(I - A_k) + A_k(I - A_k)^2 \ge 0$$
and
$$I - A_{k+1} = (I - A_k) + A_k^2 \ge 0.$$
Consequently, (9.11) holds for $n = k + 1$.
Moreover,
$$A_1 = A_1^2 + A_2 = A_1^2 + A_2^2 + A_3 = \cdots = A_1^2 + A_2^2 + \cdots + A_n^2 + A_{n+1},$$
whence
$$\sum_{k=1}^n A_k^2 = A_1 - A_{n+1} \le A_1, \text{ since } A_{n+1} \ge 0,$$
that is,
$$\sum_{k=1}^n \langle A_kx, A_kx\rangle \le \langle A_1x, x\rangle.$$
Consequently, the series $\sum_{k=1}^\infty \|A_kx\|^2$ converges, and $\|A_kx\| \to 0$ as $k \to \infty$.
Hence,
$$\left(\sum_{k=1}^n A_k^2\right)x = A_1x - A_{n+1}x \to A_1x \text{ as } n \to \infty. \qquad (9.12)$$
Since B commutes with A, it commutes with $A_1$, and
$$BA_2 = B(A_1 - A_1^2) = BA_1 - BA_1^2 = A_1B - A_1^2B = (A_1 - A_1^2)B,$$
i.e., B commutes with $A_2$.
Let B commute with $A_k$, $k = 1, 2, \ldots, n$. Then
$$BA_{n+1} = B(A_n - A_n^2) = A_nB - A_n^2B = (A_n - A_n^2)B = A_{n+1}B.$$
Hence B commutes with $A_k$, $k = 1, 2, \ldots, n, \ldots$.
Using (9.12),
$$\langle ABx, x\rangle = \|A\|\,\langle BA_1x, x\rangle = \|A\|\lim_{n\to\infty}\sum_{k=1}^n \langle BA_k^2x, x\rangle = \|A\|\lim_{n\to\infty}\sum_{k=1}^n \langle BA_kx, A_kx\rangle \ge 0,$$
since $B \ge 0$. Hence AB is a positive operator.
9.5.6
Theorem
If $\{A_n\}$ is a monotone increasing sequence of mutually commuting self-adjoint operators, bounded above by a self-adjoint operator B commuting with all the $A_n$:
$$A_1 \le A_2 \le \cdots \le A_n \le \cdots \le B, \qquad (9.13)$$
then the sequence $\{A_n\}$ converges pointwise to a self-adjoint operator A, and $A \le B$.
Proof: Let $C_n = B - A_n$. Since B and $A_n$ for all n are self-adjoint operators, $\{C_n\}$ is a sequence of self-adjoint operators. $\langle C_nx, x\rangle = \langle (B - A_n)x, x\rangle \ge 0$ because of (9.13). Hence, $C_n$ is a nonnegative operator.
Moreover,
$$C_nC_m = (B - A_n)(B - A_m) = B^2 - BA_m - A_nB + A_nA_m = B^2 - A_mB - BA_n + A_mA_n = (B - A_m)(B - A_n),$$
since B commutes with $A_n$ for all n and $A_n$, $A_m$ commute. Hence, $C_nC_m = C_mC_n$. Moreover, $\{C_n\}$ forms a monotone decreasing sequence.
Consequently, for $m < n$, the operators $(C_m - C_n)C_m$ and $C_n(C_m - C_n)$ are also positive. Moreover,
$$\langle C_m^2x, x\rangle \ge \langle C_mC_nx, x\rangle \ge \langle C_n^2x, x\rangle \ge 0.$$
This implies that the monotone decreasing nonnegative numerical sequence $\{\langle C_n^2x, x\rangle\}$ has a limit. Hence, it follows from the above inequality that $\{\langle C_mC_nx, x\rangle\}$ also tends to the same limit as $n, m \to \infty$. Therefore,
$$\|C_mx - C_nx\|^2 = \langle (C_m - C_n)^2x, x\rangle = \langle C_m^2x, x\rangle - 2\langle C_mC_nx, x\rangle + \langle C_n^2x, x\rangle \to 0 \text{ as } n, m \to \infty.$$
Thus, the sequence $\{C_nx\}$, and thereby also $\{A_nx\}$, converges to some limit Ax for arbitrary x, that is, $Ax = \lim_{n\to\infty} A_nx$. Hence A is a self-adjoint operator, satisfying $A \le B$.
9.5.7
Remark
If, in theorem 9.5.6, the inequality (9.13) is replaced by $A_1 \ge A_2 \ge \cdots \ge A_n \ge \cdots \ge B$ and the other conditions remain unchanged, then the conclusion of theorem 9.5.6 remains unchanged, except that $A \ge B$.
9.5.8
Square roots of nonnegative operators
9.5.9
Definition: square root
The self-adjoint operator B is called a square root of the nonnegative operator A if $B^2 = A$.
9.5.10
Theorem
There exists a unique positive square root B of every positive self-adjoint operator A; it commutes with every operator commuting with A.
Proof: Without loss of generality, it can be assumed that $A \le I$. Let us put $B_0 = 0$, and
$$B_{n+1} = B_n + \frac{1}{2}(A - B_n^2), \quad n = 0, 1, 2, \ldots \qquad (9.14)$$
Suppose that $B_k$ is self-adjoint, positive, and commutes with every operator commuting with A, for $k = 1, 2, \ldots, n$.
Then
$$\langle B_{n+1}x, x\rangle = \left\langle\left[B_n + \frac{1}{2}(A - B_n^2)\right]x, x\right\rangle = \langle B_nx, x\rangle + \frac{1}{2}\langle Ax, x\rangle - \frac{1}{2}\langle B_n^2x, x\rangle = \langle x, B_{n+1}x\rangle, \qquad (9.15)$$
so $B_{n+1}$ is self-adjoint. Now,
$$I - B_{n+1} = \frac{1}{2}(I - B_n)^2 + \frac{1}{2}(I - A) \qquad (9.16)$$
and
$$B_{n+1} - B_n = \frac{1}{2}[(I - B_{n-1}) + (I - B_n)](B_n - B_{n-1}), \qquad (9.17)$$
with $B_0 = 0$, $B_1 = \frac{1}{2}A \le I$; also $B_1 \ge 0$.
Therefore, it follows from (9.16) and (9.17) that $B_n \le I$ and $B_n \le B_{n+1}$ for all n. Thus it follows from (9.15) that $B_{n+1}$ is self-adjoint, positive, and commutes with every operator commuting with A. Thus $\{B_n\}$ is a monotone increasing sequence bounded above. This sequence converges pointwise to some self-adjoint positive operator B. Taking limits in (9.14), we have
$$B = B + \frac{1}{2}(A - B^2), \text{ that is, } B^2 = A.$$
Finally, B commutes with every operator that commutes with A; this is because each $B_n$ possesses the above property. Thus B is a positive square root of A. If B is not unique, let $B_1$ be another positive square root of A.
Then $B^2 - B_1^2 = 0$.
Therefore, $\langle (B^2 - B_1^2)x, y\rangle = 0$, or $\langle (B + B_1)(B - B_1)x, y\rangle = 0$.
Let us take $y = (B - B_1)x$.
Then
$$0 = \langle (B + B_1)y, y\rangle = \langle By, y\rangle + \langle B_1y, y\rangle.$$
Since B and $B_1$ are positive, $\langle By, y\rangle = \langle B_1y, y\rangle = 0$.
However, since the roots are positive, we have $B = C^2$ where C is a self-adjoint operator. Since
$$\|Cy\|^2 = \langle C^2y, y\rangle = \langle By, y\rangle = 0, \text{ hence } Cy = 0.$$
Consequently, $By = C(Cy) = 0$ and analogously $B_1y = 0$.
However, then, $\|B_1x - Bx\|^2 = \langle (B - B_1)^2x, x\rangle = \langle (B - B_1)y, x\rangle = 0$, that is, $Bx = B_1x$ for every $x \in H$, and the uniqueness of the square root is proved.
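The iteration (9.14) is easy to run numerically. A minimal sketch (plain Python) for an illustrative 2×2 symmetric matrix with $\|A\| \le 1$; the iterates converge to the positive square root.

```python
# Iteration (9.14): B_{n+1} = B_n + (A - B_n^2)/2, starting from B_0 = 0.
# A is an arbitrary symmetric 2x2 matrix with norm <= 1.

def matmul(M, N):
    return [[sum(M[i][k]*N[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

A = [[0.5, 0.2], [0.2, 0.5]]
B = [[0.0, 0.0], [0.0, 0.0]]
for _ in range(200):
    B2 = matmul(B, B)
    B = [[B[i][j] + 0.5*(A[i][j] - B2[i][j]) for j in range(2)]
         for i in range(2)]

B2 = matmul(B, B)
err = max(abs(B2[i][j] - A[i][j]) for i in range(2) for j in range(2))
print(err < 1e-8)  # True: B^2 reproduces A
```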
9.5.11 Example
Let $H = L_2([0,1])$. Let the operator A be defined by $Ax(t) = tx(t)$, $x(t) \in L_2([0,1])$. Then $\|Ax\|^2 = \langle Ax, Ax\rangle = \int_0^1 t^2|x(t)|^2\,dt \le \|x\|^2$.
Hence, A is bounded. Moreover,
$$\langle Ax, x\rangle = \int_0^1 tx(t)\overline{x(t)}\,dt = \int_0^1 t|x(t)|^2\,dt = \int_0^1 (\sqrt{t}\,x(t))\overline{(\sqrt{t}\,x(t))}\,dt = \langle Bx, Bx\rangle \ge 0,$$
where $Bx(t) = +\sqrt{t}\,x(t)$.
Problems
1. Suppose A is linear and maps a complex Hilbert space H into itself. Then, if $\langle Ax, x\rangle = 0$ for each $x \in H$, show that $A = 0$.
2. Let $\{u_1, u_2, \ldots\}$ be an orthonormal system in a Hilbert space H, $T \in B(H \to H)$ and $a_{i,j} = \langle Tu_j, u_i\rangle$, $i, j = 1, 2, \ldots$. Then show that the matrix $\{a_{i,j}\}$ defines a bounded linear operator Q on H with respect to $u_1, u_2, u_3, \ldots$. Show further that $Q = PTP$, where
$$Px = \sum_j \langle x, u_j\rangle u_j, \quad x \in H.$$
If $u_1, u_2, u_3, \ldots$ constitute an orthonormal basis for H, then prove that $Q = T$.
3. Let P and Q denote Fredholm integral operators on $H = L_2([a,b])$ with kernels $p(\cdot,\cdot)$ and $q(\cdot,\cdot)$ in $L_2([a,b] \times [a,b])$, respectively. Then show that $P = Q$ if and only if $p(\cdot,\cdot)$ and $q(\cdot,\cdot)$ are equal almost everywhere on $[a,b] \times [a,b]$. Further, show that PQ is a Fredholm integral operator with kernel
$$p * q(s, t) = \int_a^b p(s, u)q(u, t)\,du, \quad (s, t) \in [a,b] \times [a,b],$$
and that $\|PQ\| \le \|p * q(s, t)\|_2 \le \|p\|_2\|q\|_2$.
(Hint: To find PQ use Fubini's theorem (sec. 10.5).)
4. Consider the shift operator A and a multiplication operator B on $l_2$ such that
$$Ax(n) = \begin{cases}0 & \text{if } n = 0\\ x(n-1) & \text{if } n \ge 1,\end{cases} \qquad Bx(n) = (n+1)^{-1}x(n) \text{ if } n \ge 0.$$
Put $C = AB$. Show that C is a compact operator which has no eigenvalue and whose spectrum consists of exactly one point.
5. Let $\lambda_1 \ge \lambda_2 \ge \cdots \ge \lambda_n$ be the first n consecutive eigenvalues of a self-adjoint coercive operator A mapping a Hilbert space H into itself, and let $u_1, u_2, \ldots, u_n$ be the corresponding orthonormal eigenfunctions. Let there exist a function $u = u_{n+1} \ne 0$ which maximizes the functional
$$\frac{\langle Au, u\rangle}{\langle u, u\rangle}, \quad u \in D_A \subseteq H,$$
under the supplementary conditions $\langle u, u_1\rangle = 0$, $\langle u, u_2\rangle = 0$, …, $\langle u, u_n\rangle = 0$.
Then show that $u_{n+1}$ is the eigenfunction corresponding to the eigenvalue
$$\lambda_{n+1} = \frac{\langle Au_{n+1}, u_{n+1}\rangle}{\langle u_{n+1}, u_{n+1}\rangle}.$$
Show further that $\lambda_n \ge \lambda_{n+1}$.
(Hint: A symmetric nonnegative operator A is said to be coercive if there exists a number $\gamma > 0$ such that $\langle Au, u\rangle \ge \gamma\langle u, u\rangle$ for all $u \in D(A)$.)
6. Let D(A) be the subspace of the Hilbert space $H^1([0,1])$ of functions u(x) with continuous first derivatives on [0,1], with $u(0) = u(1) = 0$, and $A = -\dfrac{d^2}{dx^2}$.
Find the adjoint $A^*$ and show that A is symmetric.
(Hint: $\langle u, v\rangle = \int_0^1 u(x)\overline{v(x)}\,dx$ for all $u, v \in H^2$.)
7. (Elastic bending of a clamped beam) Let
$$Au = \frac{d^2}{dx^2}\left(b(x)\frac{d^2u}{dx^2}\right) + ku = f(x), \quad k > 0,\ 0 < x < L,$$
subject to the boundary conditions
$$u = \frac{du}{dx} = 0 \text{ at } x = 0 \text{ and } x = L.$$
Show that the operator A is symmetric on its domain.
8. For $x \in L_2[0, \infty[$, consider
$$U_1(x)u = \sqrt{\frac{2}{\pi}}\,\frac{d}{du}\int_0^\infty \frac{\sin us}{s}\,x(s)\,ds \quad \text{[see ch. 10]}$$
$$U_2(x)u = \sqrt{\frac{2}{\pi}}\,\frac{d}{du}\int_0^\infty \frac{1 - \cos us}{s}\,x(s)\,ds.$$
Show that
(i) $U_1(x)u$ and $U_2(x)u$ are well-defined for almost all $u \in [0, \infty)$;
(ii) $U_1(x), U_2(x) \in L_2[0, \infty[$;
(iii) the mappings $U_1$ and $U_2$ are bounded operators on $L_2[0, \infty[$, which are self-adjoint and unitary.
9. Let $A \in B(H \to H)$ be self-adjoint. Then show that
(i) $A^2 \ge 0$ and $-\|A\|I \le A \le \|A\|I$;
(ii) if $A^2 \le A$, then $0 \le A \le I$.
9.6
Spectrum of Self-Adjoint Operators
Let us consider the operator $A_\lambda = A - \lambda I$, where A is self-adjoint and $\lambda$ a complex number.
In sec. 4.7.17 we defined a resolvent operator and regular values of an operator. By theorem 4.7.13, if $\|(1/\lambda)A\| < 1$ (that is, if $|\lambda| > \|A\|$), then $\lambda$ is a regular value of A, and consequently the entire spectrum of A lies inside and on the boundary of the disk $|\lambda| \le \|A\|$. This is true for arbitrary bounded linear operators acting in a Banach space. For a self-adjoint operator defined on a Hilbert space, the part of the plane containing the spectrum of the operator is indicated more precisely below.
9.6.1
Lemma
Let A be a self-adjoint linear operator in a Hilbert space over $\mathbb{C}\ (\mathbb{R})$. Then all of its eigenvalues are real.
Let $x \ne \theta$ be an eigenvector of A and $\lambda$ the corresponding eigenvalue. Then,
$$Ax = \lambda x.$$
Premultiplying both sides by $x^*$, we have
$$x^*Ax = \lambda x^*x. \qquad (9.18)$$
Taking the adjoint of both sides, we have
$$x^*A^*x = \bar\lambda x^*x. \qquad (9.19)$$
From (9.18) and (9.19) it follows that
$$x^*Ax = \lambda x^*x = \bar\lambda x^*x,$$
showing that $\lambda = \bar\lambda$, i.e., $\lambda$ is real.
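Both the reality of the eigenvalues and the orthogonality of eigenvectors belonging to distinct eigenvalues can be checked for a concrete 2×2 Hermitian matrix (plain Python; the matrix is an arbitrary illustrative choice).

```python
import cmath

# Hermitian matrix A = [[2, 1-1j], [1+1j, 3]]; eigenvalues from the
# characteristic polynomial t^2 - tr(A) t + det(A) = 0.
a, b = 2 + 0j, 1 - 1j
c, d = 1 + 1j, 3 + 0j
tr, det = a + d, a*d - b*c
disc = cmath.sqrt(tr*tr - 4*det)
l1, l2 = (tr + disc) / 2, (tr - disc) / 2
print(abs(l1.imag) < 1e-12 and abs(l2.imag) < 1e-12)  # True: real spectrum

# An eigenvector for eigenvalue l is (b, l - a), since it solves (A - lI)v = 0.
v1 = (b, l1 - a)
v2 = (b, l2 - a)
ip = v1[0]*v2[0].conjugate() + v1[1]*v2[1].conjugate()
print(abs(ip) < 1e-12)  # True: eigenvectors for distinct eigenvalues are orthogonal
```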
9.6.2
Lemma
Eigenvectors belonging to different eigenvalues of a self-adjoint operator in a Hilbert space H over $\mathbb{C}\ (\mathbb{R})$ are orthogonal.
Let $x_1$, $x_2$ be two eigenvectors of a self-adjoint operator corresponding to different eigenvalues $\lambda_1$ and $\lambda_2$.
Then we have
$$Ax_1 = \lambda_1x_1, \qquad (9.20)$$
$$Ax_2 = \lambda_2x_2. \qquad (9.21)$$
Premultiplying (9.20) by $x_2^*$ and (9.21) by $x_1^*$, we have
$$x_2^*Ax_1 = \lambda_1x_2^*x_1, \qquad x_1^*Ax_2 = \lambda_2x_1^*x_2.$$
Since A is self-adjoint, $\lambda_1$, $\lambda_2$ are real, and $(x_2^*Ax_1)^* = x_1^*Ax_2$, so that $\lambda_1x_1^*x_2 = \lambda_2x_1^*x_2$.
Therefore, $(\lambda_1 - \lambda_2)x_1^*x_2 = 0$.
Since $\lambda_1 \ne \lambda_2$, $x_1^*x_2 = 0$, i.e., $x_1 \perp x_2$.

9.6.3
Theorem
For the point $\lambda$ to be a regular value of the self-adjoint operator A, it is necessary and sufficient that there is a positive constant C such that
$$\|(A - \lambda I)x\| = \|A_\lambda x\| = \|Ax - \lambda x\| \ge C\|x\| \qquad (9.22)$$
for every $x \in H$ over $\mathbb{C}\ (\mathbb{R})$.
Proof: Suppose that $R_\lambda = A_\lambda^{-1}$ is bounded and $\|R_\lambda\| = K$. For every $x \in H$, we have $\|x\| = \|R_\lambda A_\lambda x\| \le K\|A_\lambda x\|$, whence $\|A_\lambda x\| \ge (1/K)\|x\|$, proving the condition is necessary.
We next want to show that the condition is sufficient. Let $y = Ax - \lambda x$ and let x run through H. Then y runs through some linear subspace L. By (9.22) there is a one-to-one correspondence between x and y. For if $x_1$ and $x_2$ correspond to the same element y,
we have $A(x_1 - x_2) - \lambda(x_1 - x_2) = 0$,
whence
$$\|x_1 - x_2\| \le \frac{1}{C}\|A_\lambda(x_1 - x_2)\| = 0. \qquad (9.23)$$
We next show that L is everywhere dense in H. If it were not so, then there would exist a nonnull element $x_0 \in H$ such that $\langle x_0, y\rangle = 0$ for every $y \in L$. Hence $\langle x_0, Ax - \lambda x\rangle = 0$ for every $x \in H$. In other words, $\langle Ax_0 - \bar\lambda x_0, x\rangle = 0$, A being self-adjoint, for nonzero $x_0$ and for every $x \in H$. It then follows that
$$Ax_0 - \bar\lambda x_0 = 0.$$
The above equality is impossible for complex (non-real) $\lambda$, because the eigenvalues of a self-adjoint operator A are real. If $\lambda$ is real, i.e., $\lambda = \bar\lambda$, then we have from (9.22)
$$\|x_0\| \le (1/C)\|Ax_0 - \lambda x_0\| = 0,$$
contradicting $x_0 \ne \theta$.
Next, let $\{y_n\} \subseteq L$, $y_n = A_\lambda x_n$, and $\{y_n\} \to y_0$.
By (9.22),
$$\|x_n - x_m\| \le \frac{1}{C}\|A_\lambda x_n - A_\lambda x_m\| = \frac{1}{C}\|y_n - y_m\|.$$
$\{y_n\}$ is a Cauchy sequence, and hence $\|y_n - y_m\| \to 0$ as $n, m \to \infty$.
However, then $\|x_n - x_m\| \to 0$ as $n, m \to \infty$. Since H is a complete space, there exists a limit for $\{x_n\}$: $x = \lim_n x_n$. Moreover,
$$A_\lambda x = \lim_n A_\lambda x_n = \lim_n y_n = y_0, \text{ i.e., } y_0 \in L.$$
Thus L is a closed subspace everywhere dense in H, i.e., $L = H$. In addition, since the correspondence $y = A_\lambda x$ is one-to-one, there exists an inverse operator $x = A_\lambda^{-1}y = R_\lambda y$ defined on the entire space H. Inequality (9.22) yields
$$\|R_\lambda y\| = \|x\| \le \frac{1}{C}\|A_\lambda x\| = \frac{1}{C}\|y\|,$$
i.e., $R_\lambda$ is a bounded operator and $\|R_\lambda\| \le \dfrac{1}{C}$.

9.6.4 Corollary
The point $\lambda$ belongs to the spectrum of a self-adjoint operator A if and only if there exists a sequence $\{x_n\}$ such that
$$\|Ax_n - \lambda x_n\| \le C_n\|x_n\|, \quad C_n \to 0 \text{ as } n \to \infty. \qquad (9.24)$$
If we take $\|x_n\| = 1$, then (9.24) yields
$$\|Ax_n - \lambda x_n\| \to 0, \quad \|x_n\| = 1. \qquad (9.25)$$

9.6.5
Theorem
The spectrum of a self-adjoint operator A lies entirely on the segment [m, M] of the real axis, where
$$M = \sup_{\|x\|=1}\langle Ax, x\rangle \quad\text{and}\quad m = \inf_{\|x\|=1}\langle Ax, x\rangle.$$
Proof: Since A is self-adjoint,
$$\langle Ax, x\rangle = \langle x, Ax\rangle = \overline{\langle Ax, x\rangle}, \text{ i.e., } \langle Ax, x\rangle \text{ is real.}$$
Also,
$$\frac{|\langle Ax, x\rangle|}{\|x\|^2} \le \frac{\|Ax\|\,\|x\|}{\|x\|^2} \le \|A\|.$$
Then, $C_A = \sup_{\|x\|=1}|\langle Ax, x\rangle| \le \|A\|$. On the other hand, for every $y \in H$ it follows from lemma 9.3.2 that $|\langle Ay, y\rangle| \le C_A\|y\|^2$.
Let
$$m = \inf_{x\ne\theta}\frac{\langle Ax, x\rangle}{\|x\|^2}, \quad M = \sup_{x\ne\theta}\frac{\langle Ax, x\rangle}{\|x\|^2}, \qquad (9.26)$$
i.e.,
$$m \le \frac{\langle Ax, x\rangle}{\|x\|^2} \le M.$$
Let $\lambda_1$ be any eigenvalue of A and $x_1$ the corresponding eigenvector. Then $Ax_1 = \lambda_1x_1$ and $m \le \lambda_1 \le M$.
Now if $y = A_\lambda x = Ax - \lambda x$, then $\langle y, x\rangle = \langle Ax, x\rangle - \lambda\langle x, x\rangle$ and $\langle x, y\rangle = \overline{\langle y, x\rangle} = \langle Ax, x\rangle - \bar\lambda\langle x, x\rangle$.
Hence, $\langle x, y\rangle - \langle y, x\rangle = (\lambda - \bar\lambda)\langle x, x\rangle = 2i\beta\|x\|^2$, where $\lambda = \alpha + i\beta$,
or,
$$2|\beta|\,\|x\|^2 = |\langle x, y\rangle - \langle y, x\rangle| \le |\langle x, y\rangle| + |\langle y, x\rangle| \le 2\|x\|\,\|y\|,$$
and therefore $\|y\| \ge |\beta|\,\|x\|$, that is, $\|A_\lambda x\| \ge |\beta|\,\|x\|$. \qquad (9.27)
If $\beta \ne 0$, it follows from theorem 9.6.3 that $\lambda = \alpha + i\beta$ with $\beta \ne 0$ is a regular value of the self-adjoint operator A.
In view of the above result, we can say that the spectrum can lie only on the real axis. We next want to show that if $\lambda$ lies outside [m, M] on the real line, then it is a regular value.
For example, if $\lambda > M$, then $\lambda = M + k$ with $k > 0$.
We have $\langle A_\lambda x, x\rangle = \langle Ax, x\rangle - \lambda\langle x, x\rangle \le M\langle x, x\rangle - \lambda\langle x, x\rangle = -k\|x\|^2$,
whence
$$|\langle A_\lambda x, x\rangle| \ge k\|x\|^2.$$
On the other hand, $|\langle A_\lambda x, x\rangle| \le \|A_\lambda x\|\,\|x\|$. Thus $\|A_\lambda x\| \ge k\|x\|$, showing that $\lambda$ is regular. Similar arguments can be put forward if $\lambda < m$.
9.6.6
Theorem
The numbers M and m belong to the spectrum of A.
Proof: If A is replaced by $A_\lambda = A - \lambda I$, then the spectrum is shifted by $\lambda$ to the left, and M and m change to $M - \lambda$ and $m - \lambda$ respectively.
Thus, without loss of generality, it can be assumed that $0 \le m \le M$. Then $M = \|A\|$ [see lemma 9.3.2].
We next want to show that M is in the spectrum. Since $M = \sup_{\|x\|=1}\langle Ax, x\rangle$, we can consider a sequence $\{x_n\}$ with $\|x_n\| = 1$ such that
$$\langle Ax_n, x_n\rangle = M - \varepsilon_n, \quad \varepsilon_n \ge 0,\ \varepsilon_n \to 0 \text{ as } n \to \infty.$$
Further, $\|Ax_n\| \le \|A\|\,\|x_n\| = \|A\| = M$.
Therefore,
$$\|Ax_n - Mx_n\|^2 = \langle Ax_n - Mx_n, Ax_n - Mx_n\rangle = \|Ax_n\|^2 - 2M\langle Ax_n, x_n\rangle + M^2\|x_n\|^2$$
$$\le M^2 - 2M(M - \varepsilon_n) + M^2 = 2M\varepsilon_n.$$
Hence,
$$\|Ax_n - Mx_n\| \le \sqrt{2M\varepsilon_n} \to 0 \text{ as } n \to \infty, \quad \|x_n\| = 1.$$
Using corollary 9.6.4, we can conclude from the above that M belongs to the spectrum. Similarly, we can prove that m belongs to the spectrum.
9.6.7
Examples
1. If A is the identity operator I, then the spectrum consists of the single eigenvalue $\lambda = 1$, for which the corresponding eigenspace is $H_1 = H$.
$$R_\lambda = \frac{1}{1 - \lambda}I \text{ is a bounded operator for } \lambda \ne 1.$$
2. The operator $A : L_2([0,1]) \to L_2([0,1])$ is defined by $Ax = tx(t)$, $0 \le t \le 1$.
Example 9.5.11 shows that A is a nonnegative operator. Here $m = 0$ and $M \le 1$. Let us show that all the points of the segment [0,1] belong to the spectrum of A, implying that $M = 1$.
Let $0 \le \lambda \le 1$ and $\varepsilon > 0$, and consider an interval $[\lambda, \lambda + \varepsilon]$ or $[\lambda - \varepsilon, \lambda]$ lying in [0,1]; say the former. Define
$$x_\varepsilon(t) = \begin{cases}1/\sqrt{\varepsilon} & \text{for } t \in [\lambda, \lambda + \varepsilon]\\ 0 & \text{for } t \notin [\lambda, \lambda + \varepsilon].\end{cases}$$
Since
$$\int_0^1 x_\varepsilon^2(t)\,dt = \int_\lambda^{\lambda+\varepsilon}\frac{1}{\varepsilon}\,dt = 1,$$
$x_\varepsilon(t) \in L_2([0,1])$ and $\|x_\varepsilon\| = 1$.
Furthermore, $A_\lambda x_\varepsilon(t) = (t - \lambda)x_\varepsilon(t)$.
Therefore,
$$\|A_\lambda x_\varepsilon\|^2 = \frac{1}{\varepsilon}\int_\lambda^{\lambda+\varepsilon}(t - \lambda)^2\,dt = \frac{\varepsilon^2}{3}.$$
We have $\|A_\lambda x_\varepsilon\| \to 0$ as $\varepsilon \to 0$. Consequently, by corollary 9.6.4, every $\lambda$ with $0 \le \lambda \le 1$ is in the spectrum.
At the same time, the operator has no eigenvalues. In fact, $A_\lambda x(t) = (t - \lambda)x(t)$.
If $A_\lambda x(t) = 0$, then $(t - \lambda)x(t) = 0$ almost everywhere on [0,1], and thus x(t) is also equal to zero almost everywhere.
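The computation $\|A_\lambda x_\varepsilon\| = \varepsilon/\sqrt{3}$ can be reproduced by numerical integration (plain Python sketch; the value of $\varepsilon$ is arbitrary):

```python
import math

def norm_A_lambda_x(eps, n=100000):
    # Midpoint-rule approximation of the integral of (t - lam)^2 x_eps(t)^2
    # over [lam, lam + eps]; substituting u = t - lam makes the result
    # independent of lam.
    h = eps / n
    s = sum(((k + 0.5) * h) ** 2 / eps * h for k in range(n))
    return math.sqrt(s)

eps = 0.01
approx = norm_A_lambda_x(eps)
exact = eps / math.sqrt(3)
print(abs(approx - exact) < 1e-9)  # True: ||A_lambda x_eps|| = eps/sqrt(3) -> 0
```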
9.7
Invariant Subspaces
9.7.1
Definition: invariant subspace
A subspace L of H is called invariant under an operator A if $x \in L \Rightarrow Ax \in L$.
9.7.2
Example
Let $\lambda$ be an eigenvalue of A, and $N_\lambda$ the collection of eigenvectors corresponding to this eigenvalue, together with the zero vector.
Since $Ax = \lambda x$, $x \in N_\lambda \Rightarrow Ax \in N_\lambda$. Hence $N_\lambda$ is an invariant subspace.
9.7.3
Remark
If the subspace L is invariant under A, we say that L reduces the
operator A.
9.7.4
Lemma
For self-adjoint A, the invariance of L implies the invariance of its orthogonal complement $M = H \ominus L$.
Let $x \in M$, implying $\langle x, y\rangle = 0$ for every $y \in L$. However, $Ay \in L$ for $y \in L$, so $\langle x, Ay\rangle = 0$, i.e., $\langle Ax, y\rangle = 0$ for every $y \in L$. Hence $Ax \perp L$, which implies that M is invariant under A.
Let $G_\lambda$ denote the range of the operator $A_\lambda$, i.e., the collection of all elements of the form $y = Ax - \lambda x$, $\lambda$ an eigenvalue. We want to show that $H = \overline{G}_\lambda \oplus N_\lambda$. Let $y \in G_\lambda$, $u \in N_\lambda$;
then
$$\langle y, u\rangle = \langle Ax - \lambda x, u\rangle = \langle x, Au - \lambda u\rangle = \langle x, 0\rangle = 0,$$
$\lambda$ being real. Consequently, $G_\lambda \perp N_\lambda$. If $y \in \overline{G}_\lambda$,
then
$$y = \lim y_n, \text{ where } y_n \in G_\lambda,\ \langle y_n, u\rangle = 0,$$
so $\langle y, u\rangle = \lim_{n\to\infty}\langle y_n, u\rangle = 0$.
Consequently, $\overline{G}_\lambda \perp N_\lambda$.
Now, let $\langle y, u\rangle = 0$ for every $y \in G_\lambda$. For any $x \in H$,
$$0 = \langle Ax - \lambda x, u\rangle = \langle x, Au - \lambda u\rangle \Rightarrow Au = \lambda u,$$
since x is arbitrary. Therefore, $u \in N_\lambda$.
Consequently, $N_\lambda = H \ominus G_\lambda = H \ominus \overline{G}_\lambda$.
9.7.5
Lemma
$\overline{G}_\lambda$ is an invariant subspace under a self-adjoint operator A, where $G_\lambda$ stands for the range of the operator $A_\lambda$.
Proof: Let N denote the orthogonal sum of all the subspaces $N_\lambda$, i.e., the closed linear span of all the eigenvectors of the operator A. If H is separable, then it is possible to construct in every $N_\lambda$ a finite or countable orthonormal system of eigenvectors which spans $N_\lambda$ for a particular $\lambda$. Since the eigenvectors belonging to distinct $N_\lambda$ are orthogonal, by combining these systems we obtain an orthonormal system of eigenvectors $\{e_n\}$ contained completely in the span N.
The operator A defines in an invariant subspace L an operator $A_L \in B(L \to L)$; namely, $A_Lx = Ax$ for $x \in L$. It can easily be seen that $A_L$ is also a self-adjoint operator.
9.7.6
Lemma
If the invariant subspaces L and M are orthogonal complements of each other, then the spectrum of A is the set-theoretic union of the spectra of the operators A_L and A_M.
Proof: Let λ belong to the spectrum of A_L (or A_M). Then there is a sequence of elements {x_n} ⊂ L (or M) such that ‖x_n‖ = 1 and ‖(A_L − λI)x_n‖ → 0 (‖(A_M − λI)x_n‖ → 0). However, ‖(A_L − λI)x_n‖ = ‖(A − λI)x_n‖ (respectively ‖(A_M − λI)x_n‖ = ‖(A − λI)x_n‖). Hence, λ belongs to the spectrum of A.
Now, let λ belong to the spectrum of neither A_L nor A_M. Then there is a positive number C such that ‖(A − λI)y‖ = ‖(A_L − λI)y‖ ≥ C‖y‖ and ‖(A_M − λI)z‖ ≥ C‖z‖ for any y ∈ L and z ∈ M. However, every x ∈ H has the form x = y + z with y ∈ L and z ∈ M, and ‖x‖² = ‖y‖² + ‖z‖². Hence,
‖(A − λI)x‖ = ‖(A − λI)y + (A − λI)z‖ = (‖(A − λI)y‖² + ‖(A − λI)z‖²)^{1/2}
≥ C(‖y‖² + ‖z‖²)^{1/2} = C‖x‖.
Thus λ is not in the spectrum of A.
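For matrices this lemma can be checked directly: a hermitian matrix that is block-diagonal with respect to H = L ⊕ M acts as A_L and A_M on the two summands, and its eigenvalue set is the union of the blocks' eigenvalue sets. A minimal numerical sketch (the block entries below are arbitrary illustrative choices):

```python
import numpy as np

# Two hermitian blocks: A restricted to L and to its orthogonal complement M.
AL = np.array([[2.0, 1.0], [1.0, 2.0]])    # eigenvalues 1 and 3
AM = np.array([[5.0]])                     # eigenvalue 5

# A acts block-diagonally on H = L (+) M.
A = np.block([[AL, np.zeros((2, 1))], [np.zeros((1, 2)), AM]])

spec_A = set(np.round(np.linalg.eigvalsh(A), 10))
spec_union = set(np.round(np.linalg.eigvalsh(AL), 10)) | set(np.round(np.linalg.eigvalsh(AM), 10))
print(spec_A == spec_union)  # sigma(A) = sigma(A_L) U sigma(A_M)
```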
9.8
Continuous Spectra and Point Spectra
It has already been shown that a Hilbert space H can be represented as
the orthogonal sum of two spaces, N , a closed linear hull of the set of all
eigenvectors of a selfadjoint operator A, and its orthogonal complement G.
Thus H = N ⊕ G.
9.8.1
Definition: discrete or point spectrum
The spectrum of A_N is called the discrete or point spectrum of A if N is the closed linear hull of all eigenvectors of the self-adjoint operator A.
9.8.2
Definition: continuous spectrum
The spectrum of the operator A_G is called the continuous spectrum of A if G is the orthogonal complement of N in H.
9.8.3
Remark
(i) If N = H, then A has no continuous spectrum and A has a pure point spectrum. This happens in the case of compact operators.
(ii) If H = G, then A has no eigenvalues and the operator A has a
purely continuous spectrum. The operator in example 2 of section
9.6.7 has a purely continuous spectrum.
9.8.4
Spectral radius
Let A be a bounded linear operator mapping a Banach space E_x into itself. The spectral radius of A is denoted by r_σ(A) and is defined as
r_σ(A) = sup{|λ| : λ ∈ σ(A)}.
Thus, all the eigenvalues of the operator A lie within the disc with the origin as centre and r_σ(A) as radius.
9.8.5
Remark
Knowledge of the spectral radius is very useful in numerical analysis. We next find the value of the spectral radius in terms of the norm of the operator A.
9.8.6
Theorem
Let E_x be a complex Banach space and let A ∈ (E_x → E_x). Then
r_σ(A) = lim_{n→∞} ‖A^n‖^{1/n}.
Proof: Note that for any 0 ≠ λ ∈ ℂ, we have the factorization
A^n − λ^n I = (A − λI)p(A) = p(A)(A − λI),
where p(A) is a polynomial in A. It follows from the above that if A^n − λ^n I has a bounded inverse in E_x, then A − λI has a bounded inverse in E_x. Therefore, λ^n ∈ ρ(A^n) implies λ ∈ ρ(A),
and so, λ ∈ σ(A) implies λ^n ∈ σ(A^n).   (9.28)
Hence, if λ ∈ σ(A), then |λ^n| ≤ ‖A^n‖ (by (9.28) and lemma 9.3.2), i.e.,
|λ| ≤ ‖A^n‖^{1/n} for λ ∈ σ(A).
Hence, r_σ(A) = sup{|λ| : λ ∈ σ(A)} ≤ ‖A^n‖^{1/n}.
This gives r_σ(A) ≤ lim inf_{n→∞} ‖A^n‖^{1/n}.   (9.29)
Further, in view of theorem 4.7.21, the resolvent operator is represented by
R_λ(A) = −∑_{k=0}^∞ λ^{−k−1} A^k, |λ| > ‖A‖.
Also, we have Ax = λx, where λ is an eigenvalue and x the corresponding eigenvector. Therefore,
|λ| ‖x‖ = ‖Ax‖ ≤ ‖A‖ ‖x‖,
or |λ| ≤ ‖A‖ for any eigenvalue λ.
Hence, r_σ(A) ≤ ‖A‖. Also, R_λ(A) is analytic at every point λ ∈ ρ(A). Let x ∈ E_x and f ∈ E_x*. Then the function
g(λ) = f(R_λ(A)x) = −∑_{n=0}^∞ λ^{−n−1} f(A^n x)
is analytic for |λ| > r_σ(A). Hence the singularities of the function g all lie in the disc {λ : |λ| ≤ r_σ(A)}. Therefore, for |λ| > r_σ(A) the series converges, and {f(λ^{−n} A^n x)} forms a bounded sequence. Since this is true for every f ∈ E_x*, an application of the uniform boundedness principle (theorem 4.5.6) shows that the elements λ^{−n} A^n form a bounded sequence in (E_x → E_x). Thus,
‖λ^{−n} A^n‖ ≤ M < ∞
for some positive constant M (depending on λ).
Hence, ‖A^n‖^{1/n} ≤ M^{1/n} |λ|, so lim sup_{n→∞} ‖A^n‖^{1/n} ≤ |λ|.
Since λ is arbitrary with |λ| > r_σ(A), it follows that
lim sup_{n→∞} ‖A^n‖^{1/n} ≤ r_σ(A).   (9.30)
It follows from (9.29) and (9.30) that lim_{n→∞} ‖A^n‖^{1/n} = r_σ(A).
9.8.7
Remark
The above result was proved by I. Gelfand [19].
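Gelfand's formula can be watched converging numerically. For a non-normal matrix the operator norm overestimates the spectral radius at every finite n, yet ‖A^n‖^{1/n} still settles toward r_σ(A). A sketch with an arbitrary illustrative 2×2 matrix:

```python
import numpy as np

# Non-normal matrix: the norm "sees" the off-diagonal coupling, the spectrum does not.
A = np.array([[0.5, 1.0], [0.0, 0.5]])
r = max(abs(np.linalg.eigvals(A)))           # spectral radius = 0.5

An = np.eye(2)
for n in range(1, 60):
    An = An @ A
    gelfand = np.linalg.norm(An, 2) ** (1.0 / n)   # ||A^n||^(1/n)

print(round(gelfand, 3), r)   # the sequence approaches r_sigma(A) = 0.5 from above
```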
9.8.8
Operator with a pure point spectrum
9.8.9
Theorem
Let A be a self-adjoint operator in a complex Hilbert space and let A have a pure point spectrum. Then the resolvent operator R_λ = (A − λI)^{−1} can be expressed as
R_λ = ∑_n 1/(λ_n − λ) P_n.
Proof: In this case, N = H, and therefore there exists a closed orthonormal system of eigenvectors {e_n} such that
A e_n = λ_n e_n,   (9.31)
where λ_n is the corresponding eigenvalue.
Every x ∈ H can be written as
x = ∑_{n=1}^∞ c_n e_n,   (9.32)
where the Fourier coefficients c_n are given by
c_n = ⟨x, e_n⟩.   (9.33)
The projection operator P_n is given by
P_n x = ⟨x, e_n⟩ e_n = c_n e_n;   (9.34)
P_n denotes the projection along e_n. The series (9.32) can be written as
x = Ix = ∑_n P_n x, or in the form I = ∑_n P_n.   (9.35)
We know that P_n P_m = 0 for m ≠ n. By (9.31) and (9.35),
Ax = ∑_n c_n A e_n = ∑_n λ_n P_n x.   (9.36)
We can write A in operator form: (9.36) yields
A = ∑_n λ_n P_n.   (9.37)
Thus,
⟨Ax, x⟩ = ⟨∑_n λ_n c_n e_n, ∑_m c_m e_m⟩ = ∑_n λ_n |c_n|².   (9.38)
Thus the quadratic form ⟨Ax, x⟩ can be reduced to a sum of squares. Using (9.34), (9.38) can be written as
⟨Ax, x⟩ = ∑_n λ_n ⟨P_n x, x⟩.   (9.39)
If λ does not belong to the closed set {λ_n} of eigenvalues, then there is a d > 0 such that |λ − λ_n| > d. We have
A_λ x = (A − λI)x = ∑_n (λ_n − λ) P_n x.   (9.40)
Since A_λ has an inverse and P_n commutes with A_λ^{−1}, we have
x = ∑_n (λ_n − λ) P_n A_λ^{−1} x.
Premultiplying with P_m, we have
P_m x = (λ_m − λ) P_m A_λ^{−1} x.
Hence
R_λ x = A_λ^{−1} x = ∑_n 1/(λ_n − λ) P_n x.   (9.41)
Since P_n x = c_n e_n,
R_λ x = ∑_n c_n/(λ_n − λ) e_n.   (9.42)
Since |λ_n − λ| > d,
‖R_λ x‖ = (∑_n |c_n|²/|λ_n − λ|²)^{1/2} ≤ (1/d)(∑_n |c_n|²)^{1/2} = ‖x‖/d,
or ‖R_λ‖ ≤ 1/d.
Consequently, λ does not belong to the spectrum. Now it is possible to write (9.42) in the form
R_λ = ∑_n 1/(λ_n − λ) P_n.   (9.43)
9.8.10
Remark
For n-dimensional symmetric (hermitian) matrices we have similar expressions for R_λ, with the only difference that for n-dimensional matrices the sum is finite.
Hilbert demonstrated that the class of operators with a pure point spectrum contains the class of compact operators.
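For a hermitian matrix the sum in (9.43) is finite, so the expansion can be checked against a direct inversion. A small numerical sketch (the matrix and the point λ are arbitrary illustrative choices):

```python
import numpy as np

A = np.array([[2.0, 1.0], [1.0, 2.0]])       # hermitian, eigenvalues 1 and 3
lam = 0.25                                   # a point outside the spectrum
vals, vecs = np.linalg.eigh(A)

# R_lambda = sum_n P_n / (lambda_n - lambda), with P_n = e_n e_n^T.
R = sum(np.outer(vecs[:, n], vecs[:, n]) / (vals[n] - lam) for n in range(len(vals)))

R_direct = np.linalg.inv(A - lam * np.eye(2))
print(np.allclose(R, R_direct))  # the finite spectral sum reproduces the resolvent
```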
Problems
1. Let A : H → H be a coercive operator (see 9.5, problem 5).
(i) Show that [u, v] = ⟨Au, v⟩, u, v ∈ D(A), defines a scalar product in H. If [u_n − u, u_n − u] → 0 as n → ∞, u_n is said to tend to u in energy, and the above scalar product is called the energy product.
(ii) If {φ_n} is a system of eigenfunctions of the operator A and λ_n the corresponding eigenvalues, show that the solution u₀ of the equation Au = f can be written in the form
u₀ = ∑_{n=1}^∞ (⟨f, φ_n⟩/λ_n) φ_n.
(Hint: Note that ⟨Aφ_n, φ_n⟩ = [φ_n, φ_n] = λ_n and ⟨Aφ_n, φ_m⟩ = [φ_n, φ_m] = 0 for n ≠ m; {φ_n} is a system of functions which is orthogonal in energy and complete in energy.)
2. Let A be a compact operator on H. If {u_n} is an infinite orthonormal sequence in H, then show that Au_n → 0 as n → ∞. In particular, if a matrix {a_{i,j}} defines a compact operator on l², and
σ_j = ∑_{i=1}^∞ |a_{i,j}|² and τ_i = ∑_{j=1}^∞ |a_{i,j}|²,
show that σ_j → 0 as j → ∞ and τ_i → 0 as i → ∞.
3. Let A ∈ (H → H), where H is a Hilbert space. Then show that
(i) A is normal if and only if ‖Ax‖ = ‖A*x‖ for every x ∈ H;
(ii) if A is normal, then N(A) = N(A*) = R(A)^⊥.
4. Let P ∈ (H → H) be normal, H being a Hilbert space over ℂ.
(i) Let X be the set of eigenvectors of P and Y the closure of the span of X. Then show that Y and Y^⊥ are closed invariant subspaces for P.
(Hint: Show that σ(P) is a closed and bounded subset of ℂ and that σ(P) = σ_e(P) ∪ {λ : λ̄ ∈ σ_a(P*)}.)
5. Let A ∈ (H → H) be self-adjoint, where H is a Hilbert space over ℂ. Then show that its Cayley transform
T(A) = (A − iI)(A + iI)^{−1}
is unitary and 1 is not an eigenvalue of T(A).
6. Let a be a nonzero vector, v a unit vector, and σ = ‖a‖. Define
μ = [2σ(σ − vᵀa)]^{1/2} and u = (1/μ)(a − σv).
(i) Show that u is a unit vector and that (I − 2uuᵀ)a = σv.
(ii) If v₁ and v₂ are vectors and μ is a constant, show that det(I − μ v₁v₂ᵀ) = 1 − μ v₁ᵀv₂, and that det(I − 2uuᵀ) = −1.
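Problem 6 is the standard Householder reflection, and the identities can be spot-checked numerically. A sketch (the random vectors are arbitrary, and μ is the normalization assumed in the problem statement):

```python
import numpy as np

rng = np.random.default_rng(0)
a = rng.standard_normal(4)
v = rng.standard_normal(4)
v /= np.linalg.norm(v)                        # unit vector
sigma = np.linalg.norm(a)

mu = np.sqrt(2.0 * sigma * (sigma - v @ a))   # mu**2 = ||a - sigma*v||**2
u = (a - sigma * v) / mu

H = np.eye(4) - 2.0 * np.outer(u, u)          # Householder reflector

print(np.isclose(np.linalg.norm(u), 1.0))     # u is a unit vector
print(np.allclose(H @ a, sigma * v))          # reflects a onto sigma * v
print(np.isclose(np.linalg.det(H), -1.0))     # reflections have determinant -1
```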
7. Let x(t) ∈ C([a, b]) and K(s, t) ∈ C([a, b] × [a, b]). If
Ax(s) = ∫_a^s K(s, t)x(t)dt,
show that ‖A^n‖ ≤ K^n (b − a)^n / n!, where K = max_{a≤s,t≤b} |K(s, t)|.
(Hint: For finding A², use Fubini's theorem (sec. 10.5).)
8. Let A denote a Fredholm integral operator on L²([a, b]) with kernel K(·,·) ∈ L²([a, b] × [a, b]). Then show that
(i) A is self-adjoint if and only if K(t, s) = K̄(s, t) for almost all (s, t) in [a, b] × [a, b];
(ii) A is normal if and only if
∫_a^b K̄(u, s)K(u, t)dm(u) = ∫_a^b K(s, u)K̄(t, u)dm(u)
for almost all (s, t) in [a, b] × [a, b].
(Hint: Use Fubini's theorem (sec. 10.5).)
9. Fix m ∈ ℂ. For (x₁, x₂) ∈ ℂ², define
A(x₁, x₂) = (mx₁ + x₂, mx₂).
Then show that
‖A‖ = [(2|m|² + 1 + √(4|m|² + 1))/2]^{1/2},
while σ(A) = {m}, so that r_σ(A) = |m| < ‖A‖.
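Problem 9 exhibits the gap between the norm and the spectral radius concretely; the claimed closed form for ‖A‖ can be checked numerically for any particular m (m = 1.5 below is an arbitrary choice):

```python
import numpy as np

m = 1.5
A = np.array([[m, 1.0], [0.0, m]])      # A(x1, x2) = (m*x1 + x2, m*x2)

norm_A = np.linalg.norm(A, 2)
formula = np.sqrt((2 * m**2 + 1 + np.sqrt(4 * m**2 + 1)) / 2)
r_sigma = max(abs(np.linalg.eigvals(A)))

print(np.isclose(norm_A, formula))      # closed form for ||A||
print(np.isclose(r_sigma, m))           # sigma(A) = {m}
print(r_sigma < norm_A)                 # strict gap: A is not normal
```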
10. Let A ∈ (H → H) be normal. Let X be a set of eigenvectors of A, and let Y denote the closure of the span of X. Then show that Y and Y^⊥ are closed invariant subspaces of A.
11. Let A ∈ (H → H), H a Hilbert space. Then show that
(i) λ ∈ σ(A) if and only if λ̄ ∈ σ(A*);
(ii) σ_e(A) ⊆ ω(A), and σ(A) is contained in the closure of ω(A).
Here ω(A) is defined as ω(A) = {⟨Ax, x⟩ : x ∈ H, ‖x‖ = 1}; ω(A) is known as the numerical range of A.
CHAPTER 10
MEASURE AND
INTEGRATION IN Lp
SPACES
In this chapter we discuss the theory of Lebesgue measure and p-integrable functions on ℝ. Spaces of these functions provide some of the most concrete and useful examples of many theorems in functional analysis. This theory will be utilized to study some elements of Fourier series and Fourier integrals. Before we introduce the Lebesgue theory in a proper fashion, we point out some of the lacunae of the Riemann theory which prompted a new line of thinking.
10.1
The Lebesgue Measure on ℝ
Before we introduce Lebesgue measure and associated concepts we present
some examples.
10.1.1
Examples
1. Let X be the set of continuous functions defined on a closed interval [a, b]. Let
ρ(x, y) = ∫_a^b |x(t) − y(t)|dt.   (10.1)
(X, ρ) is a metric space. But it is not complete [see example in note 1.4.11]. Let
x_n(t) = 0 if a ≤ t ≤ c,
x_n(t) = n(t − c) if c ≤ t ≤ c + 1/n,
x_n(t) = 1 if c + 1/n ≤ t ≤ b.
{x_n} is a Cauchy sequence. If x_n → x as n → ∞, then x(t) = 0 for t ∈ [a, c) and x(t) = 1 for t ∈ (c, b] (see note 1.4.11), so the limit is not continuous.
The above example shows that ∫_a^b |x(t) − y(t)|dt can be used as a metric on a wider class of functions, namely the class of absolutely integrable functions.
2. Consider a sequence {f_n(t)} of functions defined by
f_n(t) = lim_{m→∞} [cos(n! πt)]^{2m}.   (10.2)
Thus, f_n(t) = 1 if t = k/n!, k = 0, 1, 2, . . ., n!, and f_n(t) = 0 otherwise.
Define ρ(f_n, f_m) = ∫₀¹ |f_n(t) − f_m(t)|dt.
Now, for any values of n and m, at the common points where f_n and f_m both take the value 1, their difference is zero. At the remaining points, numbering at most (n! + 1) + (m! + 1), the value |f_n(t) − f_m(t)| is equal to 1. But the number of such points is finite, and hence f_n − f_m ≠ 0 at only a finite number of points. Thus ρ(f_n, f_m) = 0.
Hence {f_n} is a Cauchy sequence, and {f_n(t)} tends to a function f(t) such that
f(t) = 1 at all rational points in 0 ≤ t ≤ 1,
f(t) = 0 at irrational points in (0, 1).   (10.3)
Therefore, if we consider the integration in the Riemann sense, the integral ∫₀¹ f(t)dt does not exist. Hence the space is not complete. We shall show later that if the integration is taken in the Lebesgue sense, then the integral exists.
3. Let us define a sequence {Θ_n} of sets as follows:
Θ₀ = [0, 1];
Θ₁ = Θ₀ with the middle open interval of length 1/4 removed;
Θ₂ = Θ₁ with the middle open intervals of the component intervals of Θ₁ removed, each of length 1/4².
Then by induction, once we have defined Θ_n, consisting of 2^n disjoint closed intervals of equal length, let Θ_{n+1} = Θ_n with the middle open intervals of the component intervals of Θ_n removed, each of length 1/4^{n+1}.
For each n = 1, 2, . . ., the sum of the lengths of the component intervals of Θ_n is given by
m(Θ_n) = 1 − ∑_{i=0}^{n−1} 2^i · 1/4^{i+1} = 1/2 + 1/2^{n+1}.   (10.4)
For every n = 1, 2, . . ., let x_n be defined by
x_n(t) = 1 if t ∈ Θ_n, x_n(t) = 0 if t ∉ Θ_n.   (10.5)
It may be seen that {x_n} is a Cauchy sequence. Let m > n. Then
ρ(x_m, x_n) = ∫₀¹ |x_m(t) − x_n(t)|dt = m(Θ_n) − m(Θ_m) = 1/2^{n+1} − 1/2^{m+1}.
We shall show that {x_n} does not converge to any Riemann integrable function. Let us suppose that there exists a Riemann integrable function x such that
lim_{n→∞} ∫₀¹ |x(t) − x_n(t)|dt = 0.   (10.6)
Let J₁ be the open interval removed in forming Θ₁; J₂, J₃ the open intervals removed in forming Θ₂, etc. For each l = 1, 2, . . . there is an N so that n > N implies x_n(t) = 0 for t ∈ J_l. It follows that x is equivalent to a function which is identically zero on
V = ∪_{l=1}^∞ J_l.
But the lower Riemann integral of such a function is zero. Since x is integrable,
∫₀¹ x(t)dt = 0.
But (10.4) yields that
∫₀¹ x_n(t)dt > 1/2.
This contradicts (10.6).
Thus, the space of absolutely integrable functions when integration is
used in the Riemann sense is not complete. This may be regarded as a major
defect of the Riemann integration. The denition of Lebesgue integration
overcomes this defect and other defects of the Riemann integration.
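The measure computation (10.4) for the sets Θ_n can be reproduced by carrying out the interval removals explicitly. A sketch in exact rational arithmetic:

```python
from fractions import Fraction

def theta_measure(n):
    """Total length of Theta_n: start from [0,1]; at step k (k = 1..n) remove the
    middle open interval of length 1/4**k from each of the 2**(k-1) components."""
    intervals = [(Fraction(0), Fraction(1))]
    for k in range(1, n + 1):
        gap = Fraction(1, 4**k)
        new = []
        for a, b in intervals:
            mid = (a + b) / 2
            new.append((a, mid - gap / 2))
            new.append((mid + gap / 2, b))
        intervals = new
    return sum(b - a for a, b in intervals)

for n in range(1, 6):
    assert theta_measure(n) == Fraction(1, 2) + Fraction(1, 2**(n + 1))
print("m(Theta_n) = 1/2 + 1/2**(n+1) verified for n = 1..5")
```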
10.1.2
Remark
In example 2 (10.1.1), it may be seen that {f_n} → 1 at all rational points in [0, 1] and → 0 at all irrational points in [0, 1].
It is known that the set of rational points in [0, 1] can be put into one-to-one correspondence with the set of positive integers, i.e., the set of natural numbers [see Simmons [53]]. Hence the set of rational numbers in [0, 1] forms a countable set. Thus, the set of rational numbers in [0, 1] can be written as a sequence {r₁, r₂, r₃, . . .}.
Let ε be any positive real number. Suppose we put an open interval of width ε about the first rational number r₁, an interval of width ε/2 about r₂, and so on; about r_n we put an open interval of width ε/2^{n−1}. Then we have an open interval of some positive width about every rational number in [0, 1]. The sum of the widths of these open intervals is
ε + ε/2 + ε/2² + · · · + ε/2^n + · · · = 2ε.
We conclude from all this that all the rational numbers in [0, 1] can be covered with open intervals, the sum of whose lengths is an arbitrarily small positive number.
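The covering argument is just a geometric series; a tiny sketch in exact arithmetic showing that the total length of the first N covering intervals stays below 2ε (the value of ε is an arbitrary choice):

```python
from fractions import Fraction

eps = Fraction(1, 1000)
# The interval about r_n has width eps / 2**(n-1); partial sums of the widths
# form a geometric series bounded by 2 * eps.
partial = [sum(eps / 2**(n - 1) for n in range(1, N + 1)) for N in (1, 5, 50)]
assert all(s < 2 * eps for s in partial)
print([float(s) for s in partial], float(2 * eps))
```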
We say that the Lebesgue measure of the set R of rational numbers in [0, 1] is m(R) = 0. This means that the greatest lower bound of the total lengths of a set of open intervals covering the rational numbers is zero. The Lebesgue measure of the entire interval [0, 1] is m([0, 1]) = 1. This is because the greatest lower bound of the total length of any set of open intervals covering the whole set [0, 1] is 1.
Now if we remove the rational numbers in [0, 1] from [0, 1] we are left
with the set of irrational numbers the Lebesgue measure of which is 1.
Thus, if we delete from [0, 1] the set M of rational numbers, whose Lebesgue measure is zero, we can find L∫₀¹ f(t)dt in example 2 above: since f vanishes off the measure-zero set M,
L∫_{[0,1]−M} f(t)dt = L∫₀¹ f(t)dt = 0,
where L∫ denotes integration in the Lebesgue sense. The above discussion may be treated as a prelude to a more formal treatment ahead.
10.1.3
The Lebesgue outer measure of a set E ⊆ ℝ
The Lebesgue outer measure of a set E is denoted by m*(E) and is defined as
m*(E) = glb{∑_{n=1}^∞ l(I_n) : E ⊆ ∪_{n=1}^∞ I_n},
where each I_n is an open interval in ℝ and l(I_n) denotes the length of the interval I_n.
10.1.4
Simple results
(i) m*(∅) = 0.
(ii) m*(A) ≥ 0 for all A ⊆ ℝ.
(iii) m*(A₁) ≤ m*(A₂) for A₁ ⊆ A₂.
(iv) m*(∪_{n=1}^∞ A_n) ≤ ∑_{n=1}^∞ m*(A_n) for all subsets A₁, A₂, . . ., A_n, . . . .
(v) m*(I) = l(I) for any interval I.
(vi) Even when A₁, A₂, . . ., A_n, . . . are pairwise disjoint subsets of ℝ, we may not have
m*(∪_{n=1}^∞ A_n) = ∑_{n=1}^∞ m*(A_n).
10.1.5
Definition: Lebesgue measurable set, Lebesgue measure of such a set
A set S ⊆ ℝ is said to be Lebesgue measurable if
m*(A) = m*(A ∩ S) + m*(A ∩ S^C) for every A ⊆ ℝ.
Since we always have m*(A) ≤ m*(A ∩ S) + m*(A ∩ S^C), we see that S is measurable if and only if for each A we have
m*(A) ≥ m*(A ∩ S) + m*(A ∩ S^C).
10.1.6
Remark
(i) Since the definition of measurability is symmetric in S and S^C, S^C is measurable whenever S is.
(ii) ∅ and the set ℝ of all real numbers are measurable.
10.1.7
Lemma
If m*(S) = 0, then S is measurable.
Proof: Let A be any set. Then A ∩ S ⊆ S, and so m*(A ∩ S) ≤ m*(S) = 0. Also A ⊇ A ∩ S^C.
Hence, m*(A) ≥ m*(A ∩ S^C) = m*(A ∩ S) + m*(A ∩ S^C).
But m*(A) ≤ m*(A ∩ S) + m*(A ∩ S^C). Hence S is measurable.
10.1.8
Lemma
If S₁ and S₂ are measurable, so is S₁ ∪ S₂.
Proof: Let A be any set. Since S₂ is measurable, we have
m*(A ∩ S₁^C) = m*(A ∩ S₁^C ∩ S₂) + m*(A ∩ S₁^C ∩ S₂^C).
Since A ∩ (S₁ ∪ S₂) = (A ∩ S₁) ∪ (A ∩ S₂ ∩ S₁^C), we have
m*(A ∩ (S₁ ∪ S₂)) ≤ m*(A ∩ S₁) + m*(A ∩ S₂ ∩ S₁^C).
Thus, m*(A ∩ (S₁ ∪ S₂)) + m*(A ∩ S₁^C ∩ S₂^C)
≤ m*(A ∩ S₁) + m*(A ∩ S₂ ∩ S₁^C) + m*(A ∩ S₁^C ∩ S₂^C)
= m*(A ∩ S₁) + m*(A ∩ S₁^C) = m*(A),
since (S₁ ∪ S₂)^C = S₁^C ∩ S₂^C. Hence S₁ ∪ S₂ is measurable, since the above inequality is valid for every set A ⊆ ℝ, where S^C denotes the complement of S in ℝ. If S is measurable, then m*(S) is called the Lebesgue measure of S and is denoted simply by m(S).
10.1.9
Remark
(i) ∅ and ℝ are measurable subsets.
(ii) The complements and countable unions of measurable sets are measurable.
10.1.10
Lemma
Let A be any set and S₁, S₂, . . ., S_n a finite sequence of disjoint measurable sets. Then
m*(A ∩ ∪_{i=1}^n S_i) = ∑_{i=1}^n m*(A ∩ S_i).
Proof: The lemma can be proved by making an appeal to induction on n. It is true for n = 1. Let us next assume that the lemma is true for m = n − 1 sets S_i. Since the S_i are disjoint sets, we have
A ∩ (∪_{i=1}^n S_i) ∩ S_n = A ∩ S_n   (10.7)
and
A ∩ (∪_{i=1}^n S_i) ∩ S_n^C = A ∩ (∪_{i=1}^{n−1} S_i).   (10.8)
Hence the measurability of S_n implies
m*(A ∩ ∪_{i=1}^n S_i) = m*(A ∩ (∪_{i=1}^n S_i) ∩ S_n) + m*(A ∩ (∪_{i=1}^n S_i) ∩ S_n^C).
Using (10.7) and (10.8), we have
m*(A ∩ ∪_{i=1}^n S_i) = m*(A ∩ S_n) + m*(A ∩ ∪_{i=1}^{n−1} S_i)
= m*(A ∩ S_n) + ∑_{i=1}^{n−1} m*(A ∩ S_i).   (10.9)
(10.9) is true since by assumption the lemma is true for m = n − 1. Thus the lemma is true for m = n, and the induction is complete.
10.1.11
Remark [see Royden [47]]
It can be proved that the Lebesgue measure m is countably additive on measurable sets, i.e., if S₁, S₂, . . . are pairwise disjoint measurable sets, then
m(∪_{n=1}^∞ S_n) = ∑_{n=1}^∞ m(S_n).
10.2
Measurable and Simple Functions
10.2.1
Definition: Lebesgue measurable function
An extended real-valued function f on ℝ is said to be Lebesgue measurable if f^{−1}(S) is a measurable subset of ℝ for every open subset S of ℝ, and if the subsets f^{−1}(∞) and f^{−1}(−∞) of ℝ are measurable.
10.2.2
Definition: complex-valued Lebesgue measurable function
A complex-valued function f on ℝ is said to be Lebesgue measurable if the real and imaginary parts Re f and Im f are both measurable.
10.2.3
Lemma
Let f be an extended real-valued function whose domain is measurable. Then the following statements are equivalent:
(i) For each real number α, the set {x : f(x) > α} is measurable.
(ii) For each real number α, the set {x : f(x) ≥ α} is measurable.
(iii) For each real number α, the set {x : f(x) < α} is measurable.
(iv) For each real number α, the set {x : f(x) ≤ α} is measurable.
If (i)–(iv) are true, then
(v) for each extended real number α, the set {x : f(x) = α} is measurable.
Proof: Let the domain of f be D, which is measurable.
(i) ⟹ (iv): {x : f(x) ≤ α} = D − {x : f(x) > α}, and the difference of two measurable sets is measurable. Similarly (iv) ⟹ (i), because {x : f(x) > α} = D − {x : f(x) ≤ α}.
Next, (ii) ⟹ (iii), since {x : f(x) < α} = D − {x : f(x) ≥ α}; and (iii) ⟹ (ii) by similar arguments.
(ii) ⟹ (i), since
{x : f(x) > α} = ∪_{n=1}^∞ {x : f(x) ≥ α + 1/n},
and the union of a sequence of measurable sets is measurable.
(i) ⟹ (ii), since
{x : f(x) ≥ α} = ∩_{n=1}^∞ {x : f(x) > α − 1/n},
and the intersection of a sequence of measurable sets is measurable.
Thus the first four statements are equivalent.
If α is a real number,
{x : f(x) = α} = {x : f(x) ≥ α} ∩ {x : f(x) ≤ α},
and so (ii) and (iv) ⟹ (v) for α real. Since
{x : f(x) = ∞} = ∩_{n=1}^∞ {x : f(x) ≥ n},
(ii) ⟹ (v) for α = ∞. Similarly, (iv) ⟹ (v) for α = −∞, and we have (ii) and (iv) ⟹ (v).
10.2.4
Remark
(i) It may be noted that an extended real-valued function f is (Lebesgue) measurable if its domain is measurable and if it satisfies one of the first four statements of lemma 10.2.3.
(ii) A continuous function (with a measurable domain) is measurable, because the preimage of any open set in ℝ is an open set.
(iii) Each step function is measurable.
10.2.5
Lemma
Let K be a constant and f₁ and f₂ two measurable real-valued functions defined on the same domain. Then the functions f₁ + K, Kf₁, f₁ + f₂, f₂ − f₁ and f₁f₂ are measurable.
Proof: We have {x : f₁(x) + K < α} = {x : f₁(x) < α − K}. Therefore, by condition (iii) of lemma 10.2.3, since f₁ is measurable, f₁ + K is measurable.
If f₁(x) + f₂(x) < α, then f₁(x) < α − f₂(x), and by the corollary to the axiom of Archimedes [see Royden [47]] there is a rational number between any two real numbers. Hence there is a rational number p such that f₁(x) < p < α − f₂(x). Hence,
{x : f₁(x) + f₂(x) < α} = ∪_p ({x : f₁(x) < p} ∩ {x : f₂(x) < α − p}).
Since the rational numbers are countable, this set is measurable, and so f₁ + f₂ is measurable.
Since −f₂ = (−1)f₂ is measurable when f₂ is measurable, f₁ − f₂ is measurable.
Now, {x : f²(x) > α} = {x : f(x) > √α} ∪ {x : f(x) < −√α} for α ≥ 0, and if α < 0,
{x : f²(x) > α} = D,
where D is the domain of f. Hence f² is measurable. Moreover,
f₁f₂ = ½[(f₁ + f₂)² − f₁² − f₂²].
Given f₁, f₂ measurable, (f₁ + f₂)², f₁² and f₂² are measurable functions. Hence f₁f₂ is a measurable function.
10.2.6
Remark
Given f₁ and f₂ measurable,
(i) max{f₁, f₂} is measurable;
(ii) min{f₁, f₂} is measurable;
(iii) |f₁| and |f₂| are measurable.
10.2.7
Theorem
Let {f_n} be a sequence of measurable functions (with the same domain of definition). Then the functions max{f₁, . . ., f_n} and min{f₁, . . ., f_n}, sup_n f_n, inf_n f_n, inf_n sup_{k≥n} f_k and sup_n inf_{k≥n} f_k are all measurable.
Proof: If q is defined by q(x) = max{f₁(x), f₂(x), . . ., f_n(x)}, then
{x : q(x) > α} = ∪_{i=1}^n {x : f_i(x) > α}.
Since each f_i is measurable, q is measurable.
Similarly, if p is defined by p(x) = sup_n f_n(x), then {x : p(x) > α} = ∪_{n=1}^∞ {x : f_n(x) > α}, and so p is measurable. Similar arguments can be put forward for inf.
10.2.8
Remark
If {f_n} is a sequence of measurable functions such that f_n(x) → f(x) for each x, then f is measurable.
10.2.9
Almost everywhere (a.e.)
If f and g are measurable functions, f is said to be equal to g almost everywhere (abbreviated as a.e.) on a measurable set S if
m{x ∈ S : f(x) ≠ g(x)} = 0.
10.2.10
Characteristic function of a set E, simple function
If we refer to (10.3) in example 2 of 10.1, we see that for
f(x) = 1 at all rational points in [0, 1], f(x) = 0 at all irrational points in [0, 1],
the upper Riemann integral is R̄∫₀¹ f(x)dx = 1 while the lower Riemann integral is R_∫₀¹ f(x)dx = 0.
Thus f is not Riemann integrable. This has led to the introduction of a function which is 1 on a measurable set and is zero elsewhere. Such a function is integrable and has as its integral the measure of the set.
Definition: characteristic function of E
The function χ_E defined by
χ_E(x) = 1 if x ∈ E, χ_E(x) = 0 if x ∉ E
is called the characteristic function of E. The characteristic function χ_E is measurable if and only if E is measurable.
Definition: simple function
A simple function is a scalar-valued function on ℝ whose range is finite. If a₁, a₂, . . ., a_n are the distinct values of such a function φ, then
φ(x) = ∑_{i=1}^n a_i χ_{E_i}(x)   (10.10)
is called a simple function if the sets E_i are measurable, where E_i is given by
E_i = {x ∈ ℝ : φ(x) = a_i}.
10.2.11
Remark
(i) The representation for φ is not unique.
(ii) The function φ is simple if and only if it is measurable and assumes only a finite number of values.
(iii) The representation (10.10) is called the canonical representation, and it is characterised by the fact that the E_i are disjoint and the a_i distinct and nonzero.
10.2.12
Example
Let f : ℝ → [0, ∞[. Consider the simple functions, for n = 1, 2, . . .,
φ_n(x) = (i − 1)/2^n if (i − 1)/2^n ≤ f(x) < i/2^n, for i = 1, 2, . . ., n2^n,
φ_n(x) = n if f(x) ≥ n.
Then 0 ≤ φ₁(x) ≤ φ₂(x) ≤ · · · ≤ f(x) and φ_n(x) → f(x) for each x ∈ ℝ. If f is bounded, the sequence {φ_n} converges to f uniformly on ℝ.
If f : ℝ → [−∞, ∞], then we consider f = f⁺ − f⁻, where f⁺ = max{f, 0} and f⁻ = −min{f, 0}, so that f⁺ ≥ 0 and f⁻ ≥ 0.
Thus, there exists a sequence of simple functions which converges to f at every point of ℝ. It may be noted that if f is measurable, each of the simple functions φ_n is measurable.
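The dyadic approximation of 10.2.12 is easy to implement and test. A sketch for f(x) = x² (an arbitrary nonnegative choice), checking monotonicity, the bound φ_n ≤ f, and the 2^{−n} error on the set where f stays below the cap:

```python
import numpy as np

def phi(f_vals, n):
    """phi_n = (i-1)/2**n on the set where (i-1)/2**n <= f < i/2**n, capped at n."""
    return np.minimum(np.floor(f_vals * 2**n) / 2**n, n)

x = np.linspace(0.0, 3.0, 1001)
f = x**2

prev = np.zeros_like(f)
for n in range(1, 8):
    cur = phi(f, n)
    assert np.all(prev <= cur) and np.all(cur <= f)   # 0 <= phi_1 <= phi_2 <= ... <= f
    prev = cur

# Where f < n, the approximation error is below 2**-n.
mask = f < 7
err = float(np.max((f - phi(f, 7))[mask]))
print(err < 2**-7)
```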
10.2.13
The Lebesgue integral
If φ vanishes outside a set of finite measure, we define the integral of φ by
∫φ(x)dm(x) = ∑_{i=1}^n a_i m(E_i)   (10.11)
when φ has the canonical representation φ = ∑_{i=1}^n a_i χ_{E_i}.
i=1
10.2.14
Lemma
n
ai Ei , with Ei Ej = for i = j. Suppose each set Ei is a
Let =
i=1
measurable set of nite measure. Then
n
dm =
ai m(Ei ).
i=1
Proof: The set Aa = {x : (x) = a} =
*
ai =a
Ei
Measure and Integration in LP Spaces
Hence am(Aa ) =
365
ai m(Ei ) by the additivity of m, and hence
ai =a
(x)dm(x) =
a m(Aa ) =
n
ai m(Ei ).
i=1
10.2.15
Theorem
Let φ and ψ be simple functions which vanish outside a set of finite measure. Then
∫(aφ + bψ)dm = a∫φ dm + b∫ψ dm,
and if φ ≥ ψ a.e., then
∫φ dm ≥ ∫ψ dm.
Proof: Let {E_i} and {E_i′} be the sets occurring in the canonical representations of φ and ψ, and let E₀ and E₀′ be the sets where φ and ψ are zero. Then the sets F_k obtained by taking all the intersections E_i ∩ E_j′ form a finite disjoint collection of measurable sets, and we may write
φ = ∑_{i=1}^N a_i χ_{F_i} and ψ = ∑_{i=1}^N b_i χ_{F_i},
so that aφ + bψ = ∑_{i=1}^N (a a_i + b b_i) χ_{F_i}.
Hence, using lemma 10.2.14,
∫(aφ + bψ)dm = a∫φ dm + b∫ψ dm.
Again, φ ≥ ψ a.e. implies φ − ψ ≥ 0 a.e., which implies ∫(φ − ψ)dm ≥ 0,
since the integral of a simple function which is greater than or equal to zero a.e. is nonnegative. Hence the first part of the theorem yields ∫φ dm ≥ ∫ψ dm.
10.2.16
The Lebesgue integral of a bounded function over a set of finite measure
Let f be a bounded real-valued function and E a measurable set of finite measure. Keeping in mind the case of the Riemann integral, we consider, for simple functions φ and ψ, the numbers
inf_{ψ ≥ f} ∫_E ψ dm   (10.12)
and
sup_{φ ≤ f} ∫_E φ dm.   (10.13)
It can be proved that if f is bounded on a measurable set E with m(E) finite, then the quantities (10.12) and (10.13), where φ and ψ are simple functions, are equal if and only if f is measurable [see Royden [47]].
10.2.17
Definition: Lebesgue integral of f
If f is a bounded measurable function defined on a measurable set E with m(E) finite, the Lebesgue integral of f over E is defined as
∫_E f(x)dm = inf ∫_E ψ(x)dm   (10.14)
over all simple functions ψ ≥ f.
Note 10.2.1. If f is a bounded function defined on [0, 1] and f is Riemann integrable on [0, 1], then it is measurable and
R∫₀¹ f(x)dx = ∫₀¹ f(x)dm.
10.2.18
Definition
If f is a complex-valued measurable function over ℝ, then we define
∫f dm = ∫Re f dm + i ∫Im f dm
whenever ∫Re f dm and ∫Im f dm are well-defined.
10.2.19
Definition
If f is a measurable function on ℝ and ∫|f| dm < ∞, we say f is an integrable function on ℝ.
In what follows we state without proof some important convergence theorems.
10.2.20
Theorem
Let {f_n} be a sequence of measurable functions on a measurable subset E of ℝ.
(a) Monotone convergence theorem: If 0 ≤ f₁(x) ≤ f₂(x) ≤ · · · and f_n(x) → f(x) for all x ∈ E, then
∫_E f_n dm → ∫_E f dm.   (10.15)
(b) Dominated convergence theorem: If |f_n(x)| ≤ g(x) for all n = 1, 2, . . . and x ∈ E, where g is an integrable function on E, and if f_n(x) → f(x) as n → ∞ for all x ∈ E, then f_n, f are integrable on E and
∫_E f_n(x)dm → ∫_E f(x)dm.   (10.16)
If, in particular, m(E) < ∞ and |f_n(x)| ≤ K for all n = 1, 2, . . ., x ∈ E and some K > 0, then the result in 10.2.20(b) is known as the bounded convergence theorem.
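The bounded convergence theorem can be illustrated with f_n(x) = x^n on [0, 1]: here |f_n| ≤ 1 and f_n → 0 a.e., so the integrals must fall to 0. A numerical sketch using grid averages as stand-ins for the integrals:

```python
import numpy as np

x = np.linspace(0.0, 1.0, 200001)
# The mean over a uniform grid approximates the integral over [0, 1].
vals = {n: float(np.mean(x**n)) for n in (1, 5, 25, 125)}
print(vals)   # roughly 1/(n+1), decreasing toward the integral of the limit, 0
```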
Note 10.2.2. If f₁ and f₂ are integrable functions on E, then
∫_E (f₁ + f₂)dm = ∫_E f₁ dm + ∫_E f₂ dm.
Proof: The above result is true when f₁ and f₂ are simple measurable functions defined on E [see theorem 10.2.15]. We write f₁ = f₁⁺ − f₁⁻, where f₁⁺ = max{f₁, 0} and f₁⁻ = −min{f₁, 0}. Similarly we take f₂ = f₂⁺ − f₂⁻, where f₂⁺ and f₂⁻ have meanings analogous to those of f₁⁺, f₁⁻. It may be noted that f₁⁺, f₁⁻, f₂⁺, f₂⁻ are nonnegative functions.
We now approximate f₁⁺, f₁⁻, f₂⁺, f₂⁻ by nondecreasing sequences of simple measurable functions and apply the monotone convergence theorem (10.2.20(a)).
10.3
Calculus with the Lebesgue Measure
Let E = [a, b], a finite closed interval in ℝ. We first recapitulate a few definitions pertaining to the Riemann integral. Let f be a bounded real-valued function defined on the interval [a, b], and let a = α₀ < α₁ < · · · < α_n = b be a subdivision of [a, b]. Then for each subdivision we can define the sums
S = ∑_{i=1}^n (α_i − α_{i−1})M_i and s = ∑_{i=1}^n (α_i − α_{i−1})m_i,
where M_i = sup_{α_{i−1} < x ≤ α_i} f(x) and m_i = inf_{α_{i−1} < x ≤ α_i} f(x).
We then define the upper Riemann integral of f by
R̄∫_a^b f(x)dx = inf S,   (10.17)
with the infimum taken over all possible subdivisions of [a, b]. Similarly, we define the lower integral
R_∫_a^b f(x)dx = sup s.   (10.18)
The upper integral is always at least as large as the lower integral, and if the two are equal we say f is Riemann integrable and call the common value the Riemann integral of f. We shall denote it by
R∫_a^b f(x)dx.   (10.19)
For the definition of the Lebesgue integral see 10.2.16.
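The upper and lower sums S and s can be computed directly; for f(x) = x² on [0, 1] both close in on the Riemann integral 1/3 as the partition refines. A sketch (endpoint sampling is exact here because f is monotone on the interval):

```python
import numpy as np

def darboux(f, a, b, n):
    """Upper and lower sums of f over n equal subintervals of [a, b],
    using endpoint values (exact for monotone f)."""
    t = np.linspace(a, b, n + 1)
    y = f(t)
    widths = np.diff(t)
    hi = np.maximum(y[:-1], y[1:])   # sup on each subinterval
    lo = np.minimum(y[:-1], y[1:])   # inf on each subinterval
    return float(np.sum(widths * hi)), float(np.sum(widths * lo))

S, s = darboux(np.square, 0.0, 1.0, 1000)
print(S, s)   # both near 1/3
```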
10.3.1
Remark
Consider a real-valued bounded function f on [a, b]. f is Riemann integrable on [a, b] if and only if the set of discontinuities of f on [a, b] is of (Lebesgue) measure zero. In that case, f is Lebesgue integrable on [a, b], and the integral R∫_a^b f(x)dx is equal to the integral ∫_a^b f(x)dm [see Rudin [48]].
10.4
The Fundamental Theorem for Riemann Integration
A real-valued function F is differentiable on [a, b] and its derivative f is continuous on [a, b] if and only if
F(x) = F(a) + ∫_a^x f(s)ds, a ≤ x ≤ b,
for some continuous function f on [a, b]. In that case F′(x) = f(x) for all x ∈ [a, b]. For a proof see Rudin [48].
10.4.1
Absolutely continuous function
A real-valued function F on [a, b] is said to be absolutely continuous on [a, b] if for every ε > 0 there is some δ > 0 such that
∑_{i=1}^n |F(x_i) − F(y_i)| < ε
whenever a ≤ y₁ < x₁ ≤ · · · ≤ y_n < x_n ≤ b and ∑_{i=1}^n (x_i − y_i) < δ.
10.4.2
Remark
(i) Every absolutely continuous function is uniformly continuous on [a, b].
(ii) If F is differentiable on [a, b] and its derivative F′ is bounded on [a, b], then F is absolutely continuous, by the mean value theorem.
10.5
The Fundamental Theorem for Lebesgue Integration
A real-valued function F is absolutely continuous on [a, b] if and only if
F(x) = F(a) + ∫_{[a,x]} f dm, a ≤ x ≤ b,
for some (Lebesgue) integrable function f on [a, b]. In that case F′(x) = f(x) for almost all x ∈ [a, b] [see Royden [47]].
10.5.1
Total variation, bounded variation
Let f : [a, b] → ℂ be a function. Then the (total) variation Var(f) of f over [a, b] is defined as
Var(f) = sup{∑_{i=1}^n |f(t_i) − f(t_{i−1})| : P = {t₀, t₁, . . ., t_n}},
where a = t₀ < t₁ < · · · < t_n = b is a partition of [a, b]. The supremum is taken over all partitions of [a, b]. If Var(f) < ∞ holds, f is said to be a function of bounded variation.
10.5.2
Remark
(i) An absolutely continuous function on [a, b] is of bounded variation on [a, b].
(ii) If f is of bounded variation on [a, b], then f′(x) exists for almost all x ∈ [a, b] and f′ is (Lebesgue) integrable on [a, b] [see Royden [47]].
(iii) A function of bounded variation on [a, b] need not be continuous on [a, b]. For example, the characteristic function of the set [0, 1/3] is of bounded variation on [0, 1], but it is not continuous on [0, 1].
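The variation over a fixed partition is a one-line computation, and refining the partition approaches Var(f) from below. A sketch checking two examples: f(x) = x² on [−1, 1], which decreases by 1 and then increases by 1 (so Var = 2), and the single unit jump of χ_{[0,1/3]} on [0, 1] (Var = 1):

```python
import numpy as np

def variation(f, a, b, n=10000):
    """Variation of f over the uniform partition of [a, b] with n pieces;
    a lower bound for (and here equal to) Var(f)."""
    t = np.linspace(a, b, n + 1)
    return float(np.sum(np.abs(np.diff(f(t)))))

tv1 = variation(np.square, -1.0, 1.0)                     # near 2.0
tv2 = variation(lambda x: (x <= 1.0 / 3.0).astype(float), 0.0, 1.0)  # 1.0
print(tv1, tv2)
```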
Although our discussion is confined to Lebesgue measure on ℝ, we sometimes need Lebesgue measure on ℝ² to apply some results. The Lebesgue measure on ℝ² generalizes the idea of the area of a rectangle, while the Lebesgue measure on ℝ generalizes the idea of the length of an interval.
10.5.3
Theorem (Fubini and Tonelli) [see Limaye [33]]
Let m × m denote the Lebesgue measure on ℝ², and let K(·, ·) be a real- or complex-valued measurable function on [a, b] × [c, d]. If either
∫_{[a,b]×[c,d]} |K(s, t)|d(m × m)(s, t) < ∞
or K(s, t) ≥ 0 for all (s, t) ∈ [a, b] × [c, d], then ∫_c^d K(s, t)dm(t) exists for almost every s ∈ [a, b] and ∫_a^b K(s, t)dm(s) exists for almost every t ∈ [c, d].
The functions defined by these integrals are integrable on [a, b] and [c, d] respectively. Moreover,
∫_{[a,b]×[c,d]} K(s, t)d(m × m)(s, t) = ∫_a^b (∫_c^d K(s, t)dm(t))dm(s) = ∫_c^d (∫_a^b K(s, t)dm(s))dm(t).
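The conclusion that the two iterated integrals agree can be illustrated numerically. A sketch using grid sums on [0, 1] × [0, 1] for the arbitrary kernel K(s, t) = s·t² (exact double integral 1/6):

```python
import numpy as np

N = 2000
s = np.linspace(0.0, 1.0, N + 1)
t = np.linspace(0.0, 1.0, N + 1)
K = np.outer(s, t**2)          # K(s, t) = s * t**2
ds = dt = 1.0 / N

I1 = float(np.sum(np.sum(K, axis=1) * dt) * ds)   # integrate in t first, then s
I2 = float(np.sum(np.sum(K, axis=0) * ds) * dt)   # integrate in s first, then t
print(I1, I2)   # both orders agree, near 1/6
```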
Lp Spaces and Completeness
10.6
We rst recapitulate the two important inequalities, namely Holders
inequality and Minkowskis inequality, before taking up the case of pth
power Lebesgue integrable functions dened on a measurable set E.
10.6.1 Theorem (Hölder's inequality) [see 1.4.3]
If $p > 1$ and $q$ is defined by $\frac{1}{p} + \frac{1}{q} = 1$, then the following inequalities
hold true:
(H 1) For complex numbers $x_1, x_2, \ldots, x_n$, $y_1, y_2, \ldots, y_n$,
$$\sum_{i=1}^{n} |x_i y_i| \leq \Big(\sum_{i=1}^{n} |x_i|^p\Big)^{1/p} \Big(\sum_{i=1}^{n} |y_i|^q\Big)^{1/q}.$$
(H 2) In case $x \in l^p$, i.e., $x$ is $p$th power summable, and $y \in l^q$, where $p$ and $q$ are
defined as above and $x = \{x_i\}$, $y = \{y_i\}$, we have
$$\sum_{i=1}^{\infty} |x_i y_i| \leq \Big(\sum_{i=1}^{\infty} |x_i|^p\Big)^{1/p} \Big(\sum_{i=1}^{\infty} |y_i|^q\Big)^{1/q}.$$
This inequality is known as Hölder's inequality for sums.
(H 3) If $f(x) \in L^p(]0, 1[)$, i.e., $f$ is $p$th power integrable, and $g(x) \in L^q(]0, 1[)$, i.e.,
$g$ is $q$th power integrable, where $p$ and $q$ are defined as above, then
$$\int_0^1 |f(x) g(x)|\, dx \leq \Big(\int_0^1 |f(x)|^p\, dx\Big)^{1/p} \Big(\int_0^1 |g(x)|^q\, dx\Big)^{1/q}.$$
The above inequality is known as Hölder's inequality for integrals.
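The finite-sum form (H 1) can be spot-checked numerically. The sketch below is our own illustration (the helper name `holder_rhs` and the test data are arbitrary): for fixed sequences, the left-hand side never exceeds the product of norms for any conjugate pair $(p, q)$.

```python
def holder_rhs(x, y, p):
    # Right-hand side of (H 1): ||x||_p * ||y||_q with q = p/(p-1).
    q = p / (p - 1)
    return (sum(abs(v) ** p for v in x) ** (1 / p)
            * sum(abs(v) ** q for v in y) ** (1 / q))

x = [1.0, -2.0, 3.0, 0.5]
y = [2.0, 1.0, -1.0, 4.0]
lhs = sum(abs(a * b) for a, b in zip(x, y))
for p in (1.5, 2.0, 3.0):
    # the inequality holds for every conjugate pair tried
    assert lhs <= holder_rhs(x, y, p)
print(lhs)  # 9.0
```

The case $p = q = 2$ is the familiar Cauchy-Schwarz inequality.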
Measure and Integration in Lp Spaces
10.6.2 Theorem (Minkowski's inequality) [see 1.4.4]
(M 1) If $p \geq 1$, then
$$\Big(\sum_{i=1}^{n} |x_i + y_i|^p\Big)^{1/p} \leq \Big(\sum_{i=1}^{n} |x_i|^p\Big)^{1/p} + \Big(\sum_{i=1}^{n} |y_i|^p\Big)^{1/p}$$
for complex numbers $x_1, x_2, \ldots, x_n$, $y_1, y_2, \ldots, y_n$.
(M 2) If $p \geq 1$ and $x = \{x_i\} \in l^p$, $y = \{y_i\} \in l^p$, i.e., both are $p$th power
summable, then
$$\Big(\sum_{i=1}^{\infty} |x_i + y_i|^p\Big)^{1/p} \leq \Big(\sum_{i=1}^{\infty} |x_i|^p\Big)^{1/p} + \Big(\sum_{i=1}^{\infty} |y_i|^p\Big)^{1/p}.$$
(M 3) If $f(x)$ and $g(x)$ belong to $L^p(0, 1)$, then
$$\Big(\int_0^1 |f(x) + g(x)|^p\, dx\Big)^{1/p} \leq \Big(\int_0^1 |f(x)|^p\, dx\Big)^{1/p} + \Big(\int_0^1 |g(x)|^p\, dx\Big)^{1/p}.$$
We next consider $E$ to be a measurable subset of $\mathbb{R}$ and $1 \leq p < \infty$.
Let $f$ be a measurable function on $E$ that is $p$th power integrable
on $E$, i.e., $|f|^p$ is integrable on $E$. Then the following inequalities
hold true.
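Minkowski's inequality for sums can be checked the same way. The sketch below is our own illustration (names and data are arbitrary): `p_norm` computes the $l^p$ norm of a finite sequence, and the triangle inequality holds for each $p \geq 1$ tried.

```python
def p_norm(x, p):
    # l^p norm of a finite sequence
    return sum(abs(v) ** p for v in x) ** (1 / p)

x = [1.0, -2.0, 3.0]
y = [0.5, 4.0, -1.0]
for p in (1.0, 2.0, 3.5):
    lhs = p_norm([a + b for a, b in zip(x, y)], p)
    # (M 1): the p-norm of the sum never exceeds the sum of p-norms
    assert lhs <= p_norm(x, p) + p_norm(y, p) + 1e-12
print(p_norm([3.0, 4.0], 2.0))  # 5.0, the familiar Euclidean case
```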
10.6.3 Theorem
Let $f$ and $g$ be measurable functions on $E$.
(a) Hölder's inequality: Let $1 < p < \infty$ and $\frac{1}{p} + \frac{1}{q} = 1$. Then
$$\int_E |fg|\, dm \leq \Big(\int_E |f|^p\, dm\Big)^{1/p} \Big(\int_E |g|^q\, dm\Big)^{1/q}.$$
(b) Minkowski's inequality: Let $1 \leq p < \infty$.
Assume that $m(f^{-1}(\infty) \cap g^{-1}(-\infty)) = 0 = m(f^{-1}(-\infty) \cap g^{-1}(\infty))$,
so that $f + g$ is defined almost everywhere on $E$. Then
$$\Big(\int_E |f + g|^p\, dm\Big)^{1/p} \leq \Big(\int_E |f|^p\, dm\Big)^{1/p} + \Big(\int_E |g|^p\, dm\Big)^{1/p}.$$
Proof: (a) Since $f$ and $g$ are measurable functions on $E$, $fg$ is measurable
on $E$ (Sec. 10.2.5) and hence $\int_E |fg|\, dm$ is well-defined. Let
$$a = \Big(\int_E |f|^p\, dm\Big)^{1/p} \quad \text{and} \quad b = \Big(\int_E |g|^q\, dm\Big)^{1/q}.$$
If $a = 0$ or $b = 0$, then $fg = 0$ almost everywhere on $E$ and hence
$\int_E |fg|\, dm = 0$. Therefore the inequality holds. If $a = \infty$ or $b = \infty$, then
the inequality obviously holds. Next we consider $0 < a, b < \infty$. We replace
$x_i$ by $f$ and $y_i$ by $g$, and the summation from $i = 1$ to $n$ by the integral over
$E$ with respect to Lebesgue measure, in (H 1) of theorem 10.6.1. Then the
proof of theorem 10.6.3 is obtained by putting forward arguments exactly
as in the proof of theorem 10.6.1.
(b) Since $f$ and $g$ are measurable functions on $E$, $f + g$ is a measurable
function on $E$ (sec. 10.2.5).
Moreover, since $f$ and $g$ are each $p$-integrable,
$f + g$ is $p$-integrable, i.e., $\int_E |f + g|^p\, dm$ is well-defined. The proof proceeds
exactly as in (M 1) of theorem 10.6.2.
10.6.4 Definition: $f \sim g$, metric in $L^p(E)$
$f \sim g$: For measurable functions $f$ and $g$ on $E$, a measurable set, we
write $f \sim g$ if $f = g$ almost everywhere on $E$.
It may be noted that $\sim$ is an equivalence relation on the set of
measurable functions on $E$.
Metric: Let $1 \leq p < \infty$. For any $p$-integrable functions $f$ and $g$ on $E$,
define
$$\rho_p(f, g) = \Big(\int_E |f - g|^p\, dm\Big)^{1/p}.$$
Note that $m(\{x : |f(x)| = \infty\}) = 0 = m(\{x : |g(x)| = \infty\})$ since
$\int_E |f(x)|^p\, dm < \infty$ and $\int_E |g(x)|^p\, dm < \infty$. Hence
(i) $\rho_p(f, g)$ is well-defined and non-negative;
(ii) $\rho_p(f, g) = \rho_p(g, f)$ (symmetry);
(iii) by Minkowski's inequality,
$$\Big(\int_E |f - g|^p\, dm\Big)^{1/p} \leq \Big(\int_E |f - h|^p\, dm\Big)^{1/p} + \Big(\int_E |h - g|^p\, dm\Big)^{1/p},$$
where $h$ is a $p$-integrable function on $E$.
Hence $\rho_p(f, g) \leq \rho_p(f, h) + \rho_p(h, g)$.
Thus the triangle inequality is fulfilled.
$L^p(E)$: Let $L^p(E)$ denote the set of all equivalence classes of $p$-integrable
functions. $\rho_p$ induces a metric on $L^p(E)$.
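For $E = [0, 1]$ the metric $\rho_p$ can be approximated by a Riemann sum. The sketch below is our own illustration (the midpoint-rule integrator and the particular functions are arbitrary choices); it checks the triangle inequality of step (iii) numerically.

```python
def rho_p(f, g, p, n=2000):
    # (∫_0^1 |f - g|^p dm)^(1/p), approximated by a midpoint Riemann sum
    h = 1.0 / n
    s = sum(abs(f((i + 0.5) * h) - g((i + 0.5) * h)) ** p for i in range(n))
    return (s * h) ** (1 / p)

f = lambda t: t
g = lambda t: t * t
mid = lambda t: 0.5
for p in (1.0, 2.0, 4.0):
    # triangle inequality: rho_p(f, g) <= rho_p(f, mid) + rho_p(mid, g)
    assert rho_p(f, g, p) <= rho_p(f, mid, p) + rho_p(mid, g, p) + 1e-9
print(rho_p(f, lambda t: 0.0, 2.0))  # ≈ 1/sqrt(3) ≈ 0.577
```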
10.6.5 Definition: essentially bounded, essential supremum
Let $p = \infty$. A measurable function $f$ is said to be essentially bounded
on $E$ if there exists some $\lambda > 0$ such that $m\{x \in E : |f(x)| \geq \lambda\} = 0$, and
$\lambda$ is called an essential bound for $|f|$ on $E$.
Essential supremum
If $\mu = \inf\{\lambda : \lambda \text{ an essential bound for } |f| \text{ on } E\}$, then $\mu$ is itself an
essential bound for $|f|$ on $E$. Such a $\mu$ is called the essential supremum
of $|f|$ on $E$ and will be denoted by $\operatorname{ess\,sup}_E |f|$.
Let us consider $f(x) = n$ at $x = \frac{1}{n}$, $n = 1, 2, \ldots$, and $f(x) = x$ otherwise,
$0 < x \leq 1$. Then $f$ is essentially bounded and $\operatorname{ess\,sup}_E |f| = 1$, although
$f$ is unbounded in the ordinary sense.
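The gap between the supremum and the essential supremum in this example can be seen on a grid. This is a rough illustration of ours (a finite grid cannot truly represent a measure-zero set): grid points that hit the spikes at $1/n$ inflate the pointwise supremum, but they form a negligible fraction of $[0, 1]$, and discarding them recovers the value 1.

```python
def f(t):
    # f(x) = n at x = 1/n (here n = 1, ..., 99) and f(x) = x otherwise
    for n in range(1, 100):
        if abs(t - 1 / n) < 1e-12:
            return float(n)
    return t

pts = [i / 1000 for i in range(1, 1001)]
values = [f(t) for t in pts]
sup_f = max(values)                        # large: some grid points hit spikes
bulk = max(v for v in values if v <= 1.0)  # ignoring the spike values: 1.0
print(sup_f, bulk)
```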
If $f$ and $g$ are essentially bounded functions on $E$, then it can be seen
that
$$\operatorname{ess\,sup}_E |f + g| \leq \operatorname{ess\,sup}_E |f| + \operatorname{ess\,sup}_E |g|.$$
$L^\infty(E)$: Let $L^\infty(E)$ denote the set of all equivalence classes of essentially
bounded functions on $E$ under the equivalence relation $\sim$.
Then $\rho_\infty(f, g) = \operatorname{ess\,sup}_E |f - g|$, $f, g \in L^\infty(E)$, induces a metric on $L^\infty(E)$.
10.6.6 Theorem: For $1 \leq p \leq \infty$, the metric space $L^p(E)$ is
complete.
Proof: For $1 \leq p < \infty$, let $\{f_n\}$ be a Cauchy sequence in $L^p(E)$, where
$E$ is a measurable set.
To prove the space $L^p(E)$ to be complete, it is sufficient if we can show
that a subsequence of $\{f_n\}$ converges to some point in $L^p(E)$. Hence, by
passing to a subsequence if necessary, we may assume that
$$\rho_p(f_{n+1}, f_n) \leq \frac{1}{2^n}, \quad n = 1, 2, \ldots$$
Let $f_0 = 0$ and for $x \in E$ and $n = 1, 2, \ldots$ denote
$$g_n(x) = \sum_{i=0}^{n} |f_{i+1}(x) - f_i(x)| \quad \text{and} \quad g(x) = \sum_{i=0}^{\infty} |f_{i+1}(x) - f_i(x)|.$$
By Minkowski's inequality,
$$\Big(\int_E g_n^p\, dm\Big)^{1/p} \leq \sum_{i=0}^{n} \Big(\int_E |f_{i+1} - f_i|^p\, dm\Big)^{1/p} = \sum_{i=0}^{n} \rho_p(f_{i+1}, f_i) \leq \rho_p(f_1, f_0) + \sum_{i=1}^{n} \frac{1}{2^i}.$$
If we apply the monotone convergence theorem (10.2.18(a)) to the above
inequality, we obtain
$$\Big(\int_E g^p\, dm\Big)^{1/p} = \lim_{n \to \infty} \Big(\int_E g_n^p\, dm\Big)^{1/p} \leq \rho_p(f_1, f_0) + 1 < \infty.$$
Hence the function $g$ is finite almost everywhere on $E$. Now the series
$\sum_{i=0}^{\infty} [f_{i+1}(x) - f_i(x)]$ is absolutely convergent, and hence summable, for almost
all $x \in E$. For such $x \in E$, we let
$$f(x) = \sum_{i=0}^{\infty} [f_{i+1}(x) - f_i(x)].$$
Since $f_n(x) = \sum_{i=0}^{n-1} [f_{i+1}(x) - f_i(x)]$ for all $x \in E$, we have
$$\lim_{n \to \infty} f_n(x) = f(x) \quad \text{and} \quad |f_n(x)| \leq \sum_{i=0}^{n-1} |f_{i+1}(x) - f_i(x)| \leq g(x)$$
for almost all $x \in E$. Since the function $g^p$ is integrable, the dominated
convergence theorem (10.2.20(b)) yields
$$\int_E |f|^p\, dm = \lim_{n \to \infty} \int_E |f_n|^p\, dm \leq \int_E g^p\, dm < \infty.$$
Hence $f \in L^p(E)$. By Minkowski's inequality, $|f| + g \in L^p(E)$ and
$|f - f_n|^p \leq (|f| + g)^p$ for all $n = 1, 2, \ldots$ Again the dominated convergence
theorem (10.2.20(b)) yields
$$\rho_p(f_n, f) = \Big(\int_E |f_n - f|^p\, dm\Big)^{1/p} \to 0 \text{ as } n \to \infty.$$
Thus the sequence $\{f_n\}$ converges to $f$ in $L^p(E)$, showing that the
metric space is complete.
Case $p = \infty$: Let us consider a Cauchy sequence $\{f_n\}$ in $L^\infty(E)$.
Let $M_j = \{x \in E : |f_j(x)| > \operatorname{ess\,sup}_E |f_j|\}$. Thus, except on the set $M_j$,
$f_j(x)$ is bounded.
Moreover, let $N_{m,n} = \{x \in E : |f_m(x) - f_n(x)| > \operatorname{ess\,sup}_E |f_m - f_n|\}$.
Thus, except on the set $N_{m,n}$, $f_m(x) - f_n(x)$ is bounded.
Let $G$ be the union of all $M_j$ and $N_{m,n}$. Then $m(G) = 0$, and the
sequence $\{f_n\}$ converges uniformly to a bounded function $f$ on the
complement of $G$ in $E$. Hence $f \in L^\infty(E)$ and $\rho_\infty(f_n, f) \to 0$ as $n \to \infty$.
Thus the sequence $\{f_n\}$ converges in $L^\infty(E)$, showing that the metric
space $L^\infty(E)$ is complete.
10.6.7 The general form of linear functionals on $L^p([a, b])$
Let us consider an arbitrary bounded linear functional $f(x)$ defined on $L^p[0, 1]$
$(p > 1)$.
Let
$$u_t(s) = \begin{cases} 1 & \text{for } 0 \leq s \leq t \\ 0 & \text{for } t < s \leq 1 \end{cases} \tag{10.20}$$
Let $h(t) = f(u_t(\cdot))$. We first show that $h(t)$ is an absolutely continuous
function. To this end, we take $\Delta_i = (s_i, t_i)$, $i = 1, 2, \ldots, n$, to be an arbitrary
system of non-overlapping intervals in $[0, 1]$.
Let
$$\theta_i = \operatorname{sign}(h(t_i) - h(s_i)).$$
Then
$$\sum_{i=1}^{n} |h(t_i) - h(s_i)| = f\Big(\sum_{i=1}^{n} \theta_i [u_{t_i}(\cdot) - u_{s_i}(\cdot)]\Big)$$
$$\leq \|f\| \Big\| \sum_{i=1}^{n} \theta_i [u_{t_i}(\cdot) - u_{s_i}(\cdot)] \Big\|
= \|f\| \Big( \int_0^1 \Big| \sum_{i=1}^{n} \theta_i [u_{t_i}(s) - u_{s_i}(s)] \Big|^p ds \Big)^{1/p}
= \|f\| \Big( \sum_{i=1}^{n} m(\Delta_i) \Big)^{1/p}.$$
Hence $h(t)$ is absolutely continuous. Thus $h(t)$ has an a.e.-defined Lebesgue
integrable derivative and is equal to the Lebesgue integral of this derivative.
Let $h'(t) = \varphi(t)$, so that
$$h(t) - h(0) = \int_0^t \varphi(s)\, ds.$$
Now $h(0) = f[u_0(\cdot)] = 0$, since $u_0(\cdot) \equiv 0$ is the null element of $L^p[0, 1]$.
We have
$$h(t) = \int_0^t \varphi(s)\, ds.$$
It follows from (10.20) that
$$f[u_t(\cdot)] = h(t) = \int_0^t \varphi(s)\, ds = \int_0^t u_t(s)\varphi(s)\, ds + \int_t^1 u_t(s)\varphi(s)\, ds = \int_0^1 u_t(s)\varphi(s)\, ds.$$
Since $f$ is a linear functional, we have
$$f(u_{\frac{k}{n}}(\cdot)) - f(u_{\frac{k-1}{n}}(\cdot)) = \int_0^1 \big[ u_{\frac{k}{n}}(s) - u_{\frac{k-1}{n}}(s) \big] \varphi(s)\, ds.$$
If $v_n(s) = \sum_{k=1}^{n} C_k \big[ u_{\frac{k}{n}}(s) - u_{\frac{k-1}{n}}(s) \big]$, then
$$f(v_n) = \int_0^1 v_n(s)\varphi(s)\, ds.$$
Let x(t) be an arbitrary, bounded and measurable function. Then there
exists a sequence of step functions {vm (t)}, such that vm (t) x(t) a.e.
as m , where {vm (t)} can be assumed to be uniformly bounded.
By the Lebesgue dominated convergence theorem (10.2.20(b)), we get
$$\lim_{m \to \infty} f(v_m) = \lim_{m \to \infty} \int_0^1 v_m(t)\varphi(t)\, dt = \int_0^1 \lim_{m \to \infty} v_m(t)\varphi(t)\, dt = \int_0^1 x(t)\varphi(t)\, dt.$$
Since, on the other hand, $v_m(t) \to x(t)$ a.e. and $\{v_m(t)\}$ is uniformly
bounded, it follows that
$$\|v_m - x\|_p = \Big(\int_0^1 |v_m(t) - x(t)|^p\, dt\Big)^{1/p} \to 0 \text{ as } m \to \infty.$$
Therefore $f(v_m) \to f(x)$ and consequently
$$f(x) = \int_0^1 x(t)\varphi(t)\, dt.$$
Consider now the function $x_n(t)$ defined as follows:
$$x_n(t) = \begin{cases} |\varphi(t)|^{q-1} \operatorname{sgn} \varphi(t) & \text{if } |\varphi(t)| \leq n \\ 0 & \text{if } |\varphi(t)| > n \end{cases}$$
where $q$ is conjugate to $p$, i.e., $\frac{1}{p} + \frac{1}{q} = 1$. The function $x_n(t)$ is bounded and
measurable.
Therefore,
$$f(x_n) = \int_0^1 x_n(t)\varphi(t)\, dt = \int_0^1 |x_n(t)| |\varphi(t)|\, dt \leq \|f\| \|x_n\|_p.$$
On the other hand, $|x_n(t)|^p = |\varphi(t)|^{(q-1)p} = |\varphi(t)|^q = |x_n(t)| |\varphi(t)|$
wherever $|\varphi(t)| \leq n$, and $x_n(t) = 0$ elsewhere, so that
$$f(x_n) = \int_0^1 |x_n(t)|^p\, dt \leq \|f\| \|x_n\|_p = \|f\| \Big(\int_0^1 |x_n(t)|^p\, dt\Big)^{1/p}.$$
Therefore,
$$\Big(\int_0^1 |x_n(t)|^p\, dt\Big)^{1/q} \leq \|f\| \tag{10.21}$$
Now $\varphi(t)$ is Lebesgue integrable and becomes infinite only on a set of
measure zero. Hence
$$|x_n(t)| \to |\varphi(t)|^{q-1} \text{ a.e. on } [0, 1].$$
Therefore, by the dominated convergence theorem (10.2.20(b)), passing to the
limit as $n \to \infty$ in (10.21) and using $(q - 1)p = q$ gives
$$\Big(\int_0^1 |\varphi(t)|^q\, dt\Big)^{1/q} \leq \|f\|, \quad \text{i.e., } \varphi(t) \in L^q[0, 1]. \tag{10.22}$$
Now, let $x(t)$ be any function in $L^p[0, 1]$. Then $\int_0^1 x(t)\varphi(t)\, dt$ exists.
Furthermore, there exists a sequence $\{x_m(t)\}$ of bounded functions such
that
$$\int_0^1 |x(t) - x_m(t)|^p\, dt \to 0 \text{ as } m \to \infty.$$
Therefore,
$$\Big| \int_0^1 x_m(t)\varphi(t)\, dt - \int_0^1 x(t)\varphi(t)\, dt \Big| \leq \Big(\int_0^1 |x_m(t) - x(t)|^p\, dt\Big)^{1/p} \Big(\int_0^1 |\varphi(t)|^q\, dt\Big)^{1/q}.$$
The above inequality is obtained by an appeal to Hölder's inequality, using
the fact that $x_m(t) - x(t) \in L^p[0, 1]$ and $\varphi(t) \in L^q([0, 1])$.
Since the $\{x_m(t)\}$ are bounded and measurable functions in
$L^p([0, 1])$ and $\varphi(t) \in L^q([0, 1])$,
$$\int_0^1 x_m(t)\varphi(t)\, dt = f(x_m).$$
Hence $f(x_m) \to \int_0^1 x(t)\varphi(t)\, dt$ as $m \to \infty$.
On the other hand, $f(x_m) \to f(x)$. It then follows that
$$f(x) = \int_0^1 x(t)\varphi(t)\, dt \tag{10.23}$$
Thus every bounded linear functional defined on $L^p([0, 1])$ can be represented
in the form (10.23).
Conversely, if $\varphi(t)$ is an arbitrary function belonging to $L^q([0, 1])$, then
$g(x) = \int_0^1 x(t)\varphi(t)\, dt$ can be shown to be a bounded linear functional defined
on $L^p([0, 1])$.
If $g(x_1) = \int_0^1 x_1(t)\varphi(t)\, dt$ and $g(x_2) = \int_0^1 x_2(t)\varphi(t)\, dt$, then
$$g(x_1 + x_2) = \int_0^1 (x_1(t) + x_2(t))\varphi(t)\, dt = \int_0^1 x_1(t)\varphi(t)\, dt + \int_0^1 x_2(t)\varphi(t)\, dt = g(x_1) + g(x_2).$$
Moreover,
$$|g(x)| \leq \Big(\int_0^1 |x(t)|^p\, dt\Big)^{1/p} \Big(\int_0^1 |\varphi(t)|^q\, dt\Big)^{1/q} < \infty,$$
showing that $g$ is bounded on $L^p([0, 1])$.
Thus $g$ is additive and homogeneous, i.e., linear, and bounded.
The norm of the functional $f$ given by (10.23) can be determined in
terms of $\varphi(t)$.
It follows from (10.23), with the use of Hölder's inequality (1.4.3), that
$$|f(x)| = \Big| \int_0^1 x(t)\varphi(t)\, dt \Big| \leq \Big(\int_0^1 |x(t)|^p\, dt\Big)^{1/p} \Big(\int_0^1 |\varphi(t)|^q\, dt\Big)^{1/q} = \|x\|_p \|\varphi\|_q.$$
Hence
$$\|f\| \leq \|\varphi\|_q \tag{10.24}$$
It follows from (10.22) and (10.24) that
$$\|f\| = \|\varphi\|_q = \Big(\int_0^1 |\varphi(t)|^q\, dt\Big)^{1/q}.$$
10.7 $L^p$ Convergence of Fourier Series
In 3.7.8 we have seen that in a Hilbert space $H$, if $\{e_i\}$ is a complete
orthonormal system, then every $x \in H$ can be written as
$$x = \sum_{i=0}^{\infty} c_i e_i \tag{10.25}$$
where
$$c_i = \langle x, e_i \rangle, \quad i = 0, 1, 2, \ldots \tag{10.26}$$
i.e., the series (10.25) converges. The $c_i$ given by (10.26) are called Fourier
coefficients.
In particular, if $H = L^2([-\pi, \pi])$ and
$$e_n(t) = \frac{e^{int}}{\sqrt{2\pi}}, \quad n = 0, \pm 1, \pm 2, \ldots \tag{10.27}$$
then any $x(t) \in L^2[-\pi, \pi]$ can be written uniquely as
$$x(t) = \frac{c_0 - i d_0}{\sqrt{2\pi}} + \sum_{n=1}^{\infty} \frac{2 c_n \cos nt}{\sqrt{2\pi}} + \sum_{n=1}^{\infty} \frac{2 d_n \sin nt}{\sqrt{2\pi}} \tag{10.28}$$
where
$$c_n = \frac{1}{\sqrt{2\pi}} \int_{-\pi}^{\pi} x(t) \cos nt\, dt \tag{10.29}$$
$$d_n = \frac{1}{\sqrt{2\pi}} \int_{-\pi}^{\pi} x(t) \sin nt\, dt \tag{10.30}$$
Let us consider the more general problem of representing any integrable
function of period $2\pi$ on $\mathbb{R}$ in terms of the special $2\pi$-periodic functions
$\frac{e^{int}}{\sqrt{2\pi}}$, $n = 0, \pm 1, \pm 2, \ldots$. Let $x \in L^p([-\pi, \pi])$.
For $n = 0, \pm 1, \pm 2, \ldots$ the $n$th Fourier coefficient of $x$ is defined by
$$c_n = \frac{1}{\sqrt{2\pi}} \int_{-\pi}^{\pi} x(t) e^{-int}\, dm(t) \tag{10.31}$$
and the formal series
$$\frac{1}{\sqrt{2\pi}} \sum_{n=-\infty}^{\infty} c_n e^{int} \tag{10.32}$$
is called the Fourier series of $x$.
For $n = 0, 1, 2, \ldots$ consider the $n$th partial sum
$$s_n(t) = \frac{1}{\sqrt{2\pi}} \sum_{k=-n}^{n} c_k e^{ikt}, \quad t \in [-\pi, \pi].$$
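The coefficients (10.31) and the partial sums are easy to compute numerically. The sketch below is our own illustration (the midpoint integrator and the choice $x(t) = t$ are arbitrary); it shows the $L^2$ error $\|x - s_n\|_2$ shrinking as $n$ grows, i.e., the Hilbert space case $p = 2$ of the convergence discussed here.

```python
import cmath
import math

PI = math.pi

def integral(fun, m=2000):
    # midpoint Riemann sum on [-π, π]
    h = 2 * PI / m
    return sum(fun(-PI + (j + 0.5) * h) for j in range(m)) * h

x = lambda t: t  # the function to expand

def coeff(k):
    # c_k = (1/√(2π)) ∫ x(t) e^{-ikt} dm(t), as in (10.31)
    return integral(lambda t: x(t) * cmath.exp(-1j * k * t)) / math.sqrt(2 * PI)

def partial_sum(n):
    cs = {k: coeff(k) for k in range(-n, n + 1)}
    return lambda t: sum(cs[k] * cmath.exp(1j * k * t)
                         for k in cs).real / math.sqrt(2 * PI)

def l2_error(n):
    s = partial_sum(n)
    return math.sqrt(integral(lambda t: abs(x(t) - s(t)) ** 2))

print(l2_error(2), l2_error(8))  # the error decreases as n grows
```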
10.7.1 Remark
(i) Kolmogoroff [29] gave an example of a function $x$ in $L^1([-\pi, \pi])$ such
that the corresponding sequence $\{s_n(t)\}$ diverges for each $t \in [-\pi, \pi]$.
(ii) If $x \in L^p([-\pi, \pi])$ for some $p > 1$, then $\{s_n(t)\}$ converges for almost
all $t \in [-\pi, \pi]$ (see Carleson [10]).
A relevant theorem in this connection is the following.
10.7.2 Theorem
In order that a sequence $\{x_n(t)\} \subset L^p([0, 1])$ converges weakly to
$x(t) \in L^p([0, 1])$, it is necessary and sufficient that
(i) the sequence $\{\|x_n\|\}$ is bounded,
(ii) $\int_0^t x_n(s)\, ds \to \int_0^t x(s)\, ds$ for every $t \in [0, 1]$.
Proof: The assumption (i) is the same as that of theorem 6.3.22.
Therefore, we examine assumption (ii).
For this purpose, let us define
$$\psi_s(t) = \begin{cases} 1 & \text{for } 0 \leq t \leq s \\ 0 & \text{for } s < t \leq 1 \end{cases}$$
Then the sums
$$\sum_{i=1}^{n} c_i [\psi_{s_i}(t) - \psi_{s_{i-1}}(t)],$$
where $0 = s_0 < s_1 < \cdots < s_{n-1} < s_n = 1$, are everywhere dense in
$L^q([0, 1]) = (L^p([0, 1]))^*$.
Hence, in order that $x_n(t) \xrightarrow{w} x(t)$, it is necessary and sufficient that
assumption (i) is satisfied and that
$$\int_0^1 x_n(t)\psi_s(t)\, dt \to \int_0^1 x(t)\psi_s(t)\, dt,$$
or
$$\int_0^s x_n(t)\, dt \to \int_0^s x(t)\, dt,$$
as $n \to \infty$ and for every $s \in [0, 1]$.
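A standard concrete instance of this criterion (our illustration, not the book's): $x_n(t) = \sin(2\pi n t)$ on $[0, 1]$. The $L^p$ norms are constant, so condition (i) holds, and the primitives $\int_0^t x_n(s)\, ds = (1 - \cos 2\pi n t)/(2\pi n)$ tend to 0, so condition (ii) holds with limit $x = 0$. Hence $x_n \rightharpoonup 0$ weakly, although $\|x_n\|_p$ does not tend to 0, so there is no strong convergence.

```python
import math

def x_n(n):
    return lambda t: math.sin(2 * math.pi * n * t)

def integral(fun, a, b, m=4000):
    # midpoint Riemann sum on [a, b]
    h = (b - a) / m
    return sum(fun(a + (j + 0.5) * h) for j in range(m)) * h

p = 2.0
norms = [integral(lambda t, f=x_n(n): abs(f(t)) ** p, 0.0, 1.0) ** (1 / p)
         for n in (1, 5, 25)]
prims = [integral(x_n(n), 0.0, 0.37) for n in (1, 5, 25)]
print(norms)  # all ≈ 1/√2: bounded, but not tending to 0
print(prims)  # the primitives at t = 0.37 shrink toward 0
```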
10.7.3 Remark
Therefore, if $\{s_n(t)\} \subset L^p([0, 1])$ fulfils the conditions of theorem
10.7.2, then $s_n(t) \xrightarrow{w} s(t)$ as $n \to \infty$.
CHAPTER 11
UNBOUNDED LINEAR
OPERATORS
In 4.2.3 we defined a bounded linear operator in the setting of two normed
linear spaces $E_x$ and $E_y$ and studied several interesting properties of
bounded linear operators. If the operator fails to be bounded, we call it
an unbounded linear operator.
The class of unbounded linear operators includes a rich class of operators,
notably the class of differential operators. In 4.2.11 we gave an example
of an unbounded differential operator. There are usually two different
approaches to treating a differential operator in the usual function space
setting. The first is to define a new topology on the space so that the
differential operators are continuous on a non-normable topological linear
space. This is known as L. Schwartz's theory of distributions (Schwartz, L
[52]). The other approach is to retain the Banach space structure while
developing and applying the general theory of unbounded linear operators
(Browder, F [9]). We will use the second approach. We have already
introduced closed operators in Chapter 7. The linear differential operators
are usually closed operators, or at least have closed linear extensions.
Closed linear operators and continuous linear operators have some common
features, in that many theorems which hold true for continuous linear
operators are also true for closed linear operators. In this chapter we point
out some salient features of the class of unbounded linear operators.
11.1 Definition: An Unbounded Linear Operator
11.1.1
Let $E_x$ and $E_y$ be two normed linear spaces and let
$A : D(A) \subseteq E_x \to R(A) \subseteq E_y$ be a linear operator, where $D(A)$ and
$R(A)$ stand for the domain and range of $A$, respectively.
If $A$ does not fulfil the condition
$$\|Ax\|_{E_y} \leq K \|x\|_{E_x} \text{ for all } x \in D(A) \tag{11.1}$$
where $K$ is a constant (4.2.3), the operator is unbounded.
See 4.2.11 for an example of an unbounded linear operator.
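A minimal concrete sketch of unboundedness (our own; the book's example in 4.2.11 is a differential operator, while here we use the simpler diagonal operator $A\{x_k\} = \{k x_k\}$ on finitely supported sequences in $l^2$): no single constant $K$ can satisfy (11.1), because the unit vectors $e_n$ give $\|A e_n\| / \|e_n\| = n$.

```python
def A(x):
    # diagonal operator: multiply the k-th coordinate by k (k = 1, 2, ...)
    return [k * v for k, v in enumerate(x, start=1)]

def l2_norm(x):
    return sum(v * v for v in x) ** 0.5

def e(n, length):
    # n-th unit vector, represented with finite support
    return [1.0 if k == n else 0.0 for k in range(1, length + 1)]

ratios = [l2_norm(A(e(n, 50))) / l2_norm(e(n, 50)) for n in (1, 10, 50)]
print(ratios)  # [1.0, 10.0, 50.0] — the quotient grows without bound
```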
11.1.2 Theorem
Let $A$ be a linear operator with domain $E_x$ and range in $E_y$. The
following statements are equivalent:
(i) $A$ is continuous at a point;
(ii) $A$ is uniformly continuous on $E_x$;
(iii) $A$ is bounded, i.e., there exists a constant $K$ such that (11.1) holds
true for all $x \in E_x$ [see theorem 4.1.5 and theorem 4.2.4].
11.2 States of a Linear Operator
Definition: A state diagram is a table for keeping track of the relations between
the ranges and the inverses of the linear operators $A$ and $A'$. This diagram
was constructed by S. Goldberg ([21]). In what follows, $A : E_x \to E_y$, $E_x$
and $E_y$ being normed linear (Banach) spaces.
We can classify the range of an operator into three types:
I. $R(A) = E_y$;
II. $\overline{R(A)} = E_y$ but $R(A) \neq E_y$;
III. $\overline{R(A)} \neq E_y$.
Similarly, $A^{-1}$ may be of the following types:
(a) $A^{-1}$ exists and is continuous, and hence bounded;
(b) $A^{-1}$ exists but is not continuous;
(c) $A$ has no inverse.
If $R(A) = E_y$, we say $A$ is in state I, or that $A$ is surjective, written
$A \in \mathrm{I}$. Similarly, we say $A$ is in state II, written $A \in \mathrm{II}$, if $\overline{R(A)} = E_y$ but
$R(A) \neq E_y$. Listed below are some theorems that show the impossibility
of certain states for $(A, A')$. If $A$ fulfils conditions I and (b), then $A$ will
be said to belong to $\mathrm{I}_b$; $A \in \mathrm{II}_c$ has a similar meaning.
Now $(A, A')$ will be said to be in state $(\mathrm{I}_b, \mathrm{II}_c)$ if $A \in \mathrm{I}_b$ and $A' \in \mathrm{II}_c$.
11.2.1 Theorem
If $A'$ has a bounded inverse, then $R(A')$ is closed.
Proof: Let $\{f_p\} \subset D(A')$ and let $A' f_p \to g \in E_x^*$. Since $A'$ has a
bounded inverse, there exists an $m > 0$ such that
$$\|A' f_p - A' f_q\| \geq m \|f_p - f_q\|.$$
Thus $\{f_p\}$ is a Cauchy sequence, which converges to some $f$ in the
Banach space $E_y^*$. Since $A'$ is closed, $f$ is in $D(A')$ and $A' f = g$. Hence
$R(A')$ is closed.
11.2.2 Remark
The above theorem shows that if $A' \in \mathrm{II}$ then $A'$ cannot belong to state (a),
i.e., $A' \notin \mathrm{II}_a$.
11.2.3 Definition: orthogonal complements $K^\perp$, $^\perp C$
In 5.1.1 we have defined the conjugate space $E_x^*$ of a Banach space $E_x$. Let $A$
map a Banach space $E_x$ into a Banach space $E_y$. In 5.6 the adjoint $A'$
of an operator $A$, mapping $E_y^* \to E_x^*$, was introduced. We next introduce
the notion of the orthogonal complement of a set in a Banach space.
The orthogonal complement of a set $K \subseteq E_x$ is denoted by $K^\perp$ and
is defined as
$$K^\perp = \{f \in E_x^* : f(x) = 0, \ x \in K\} \tag{11.2}$$
Orthogonal: A set $K \subseteq E_x$ is said to be orthogonal to a set $F \subseteq E_x^*$
if $f(k) = 0$ for $f \in F$ and $k \in K$.
Thus $K^\perp$ is called an orthogonal complement of $K$ because $K^\perp$ is
orthogonal to $K$.
Even if $K$ is not closed, $K^\perp$ is a closed subspace of $E_x^*$.
$^\perp C$: If $C$ is a subset of $E_x^*$, the orthogonal complement $^\perp C$ of $C$ in $E_x$
is defined by
$$^\perp C = \{x : x \in E_x, \ F_x(f) = 0 \ \forall f \in C\} \tag{11.3}$$
For the notion of $F_x$ see theorem 5.6.5.
11.2.4 Remarks
$K^\perp$ and $^\perp C$ are closed subspaces of $E_x^*$ and $E_x$ respectively. Also
$K^\perp = (\overline{K})^\perp$ and $^\perp C = {}^\perp(\overline{C})$.
11.2.5 Theorem
If $L$ is a closed subspace of $E_x$, then $^\perp(L^\perp) = L$.
Proof: Let $\{x_n\} \subset L$ be convergent with $\lim_n x_n = x \in \overline{L} = L$, and let
$\{f_m\} \subset L^\perp$ be convergent; $L^\perp$ being closed, $\lim_m f_m = f \in L^\perp$.
Now,
$$|f_m(x_n) - f(x)| \leq |f_m(x_n) - f(x_n)| + |f(x_n) - f(x)|,$$
and $\lim_m f_m(x_n) = f(x_n)$. Since $x_n \in L$ and $f \in L^\perp$, $f(x_n) = 0$,
whence $f(x) = 0$ for every $f \in L^\perp$.
Thus $f_m(x_n) - f(x) \to 0$ as $m, n \to \infty$.
Hence $x \in {}^\perp(L^\perp)$, so that $L \subseteq {}^\perp(L^\perp)$.
On the other hand, if $x \notin L$, then by the Hahn-Banach theorem there exists
$f \in E_x^*$ with $f(x) \neq 0$ and $f = 0$ on $L$, i.e., $f \in L^\perp$; hence $x \notin {}^\perp(L^\perp)$.
Thus $^\perp(L^\perp) = L$.
11.2.6 Theorem
If $M$ is a subspace of $E_x^*$, then $(^\perp M)^\perp \supseteq \overline{M}$. If $E_x$ is reflexive, then
$(^\perp M)^\perp = \overline{M}$.
Proof: Let $\{f_n\} \subset M \subseteq E_x^*$ be a convergent sequence with $\|f_n - f\|_{E_x^*} \to 0$
as $n \to \infty$, so that $f \in \overline{M}$.
Now,
$$^\perp M = \{x : x \in E_x, \ F_x(f) = 0 \ \forall f \in M\}.$$
Since $f_n \in M$, for $x \in {}^\perp M$ we have $F_x(f_n) = 0$;
hence $f_n(x) = F_x(f_n) = 0$ for $x \in {}^\perp M$ [see 5.6.5].
Thus, for $x \in {}^\perp M$,
$$|f_n(x) - f(x)| \leq \|f_n - f\| \|x\| \to 0 \text{ as } n \to \infty,$$
so $f(x) = 0$ for $x \in {}^\perp M$. Thus $f \in (^\perp M)^\perp$.
Hence $f \in \overline{M} \Rightarrow f \in (^\perp M)^\perp$, proving that $(^\perp M)^\perp \supseteq \overline{M}$.
For the second part see Remark 11.2.7.
11.2.7 Remark
If $E_x$ is reflexive, i.e., $E_x^{**} = E_x$, then $(^\perp M)^\perp = \overline{M}$.
11.2.8 Definition: domain of $A'$
The domain of $A'$ is defined as
$$D(A') = \{\eta : \eta \in E_y^*, \ \eta A \text{ is continuous on } D(A)\}.$$
For $\eta \in D(A')$, let $A'$ be the operator which takes $\eta \in D(A')$ to $\overline{\eta A}$,
where $\overline{\eta A}$ is the unique continuous linear extension of $\eta A$ to all of $E_x$.
11.2.9 Remark
(i) $D(A')$ is a subspace of $E_y^*$ and $A'$ is linear.
(ii) $A'\eta$ is taken to be $\overline{\eta A}$ rather than $\eta A$ in order that $R(A')$ be
contained in $E_x^*$ [see 5.6.15].
11.2.10 Theorem
(i) $\overline{R(A)}^\perp = R(A)^\perp = N(A')$.
(ii) $\overline{R(A)} = {}^\perp N(A')$.
In particular, $A$ has a dense range if and only if $A'$ is one-to-one.
Proof: (i) $\overline{R(A)}$ is a closed subspace of $E_y$. Hence $\overline{R(A)}^\perp = R(A)^\perp$.
Now,
$$R(A)^\perp = \{\eta : \eta \in E_y^*, \ \eta(v) = 0, \ v = Au \in R(A)\},$$
and $\eta(v) = \eta(Au) = (A'\eta)(u)$.
Therefore, if $\eta \in R(A)^\perp$ then $\eta \in N(A')$.
Hence
$$R(A)^\perp \subseteq N(A') \tag{11.4}$$
On the other hand, $N(A') = \{\eta : \eta \in D(A') \subseteq E_y^*, \ A'\eta = 0\}$.
Now if $A'\eta = 0$,
then $\eta(Au) = 0$ for all $u \in D(A)$, i.e., $\eta \in R(A)^\perp$.
Thus
$$N(A') \subseteq R(A)^\perp \tag{11.5}$$
(11.4) and (11.5) together imply that $R(A)^\perp = N(A')$.
(ii) It follows from theorem 11.2.5 that
$$^\perp(\overline{R(A)}^\perp) = \overline{R(A)}.$$
Again, (i) of this theorem yields $\overline{R(A)}^\perp = N(A')$.
Hence
$$^\perp N(A') = {}^\perp(\overline{R(A)}^\perp) = \overline{R(A)}.$$
If $R(A)$ is dense in $E_y$, we have
$$\overline{R(A)} = E_y = {}^\perp N(A')
= \{y : y \in E_y, \ G_y(\eta) = \eta(y) = 0 \text{ whenever } A'\eta = 0\}.$$
Thus $A'\eta = 0 \Rightarrow \eta = 0$, showing that $A'$ is one-to-one.
11.2.11 Theorem
If $A$ and $A'$ each have an inverse, then $(A^{-1})' = (A')^{-1}$.
Proof: By theorem 11.2.10, $D(A^{-1}) = R(A)$ is dense in $E_y$. Hence
$(A^{-1})'$ is defined. Suppose $f \in D((A')^{-1}) = R(A')$. Then there exists an
$\eta \in D(A')$ such that $A'\eta = f$. To show $f \in D((A^{-1})')$ we need to prove
that $f(A^{-1})$ is continuous on $R(A)$. Now,
$$f(A^{-1}(Ax)) = f(x) = (A'\eta)(x) = \eta(Ax), \quad x \in D(A).$$
Thus $(A^{-1})' f = \eta$ on $R(A)$, and since $R(A)$ is dense in $E_y$, it follows that
$(A^{-1})' = (A')^{-1}$ on $D((A')^{-1})$. It remains
to prove that $D((A^{-1})') \subseteq D((A')^{-1})$.
Let us next suppose that $\eta \in D((A^{-1})')$. We want to show that
$\eta \in D((A')^{-1}) = R(A')$. For that we show that there exists an element
$v \in D(A')$ such that $A'v = \eta$, or equivalently $vA = \eta$ on $D(A)$.
Keeping in mind the definition of $D(A')$, we define $v$ as the continuous
linear extension of $\eta A^{-1}$ to all of $E_y$, thereby obtaining $A'v = \eta$. Thus
$D((A^{-1})') \subseteq D((A')^{-1})$.
11.3 Definition: Strictly Singular Operators
The concept of a strictly singular operator was first introduced by T.
Kato [28] in connection with the development of perturbation theory. He
has shown that there are many properties common to $A$ and $A + B$,
where $B$ is strictly singular. In what follows we take $E_x$ and $E_y$ to be two
normed linear spaces.
Definition: strictly singular operator
Let $B$ be a bounded linear operator with domain in $E_x$ and range in $E_y$.
$B$ is called strictly singular if it does not have a bounded inverse on any
infinite dimensional subspace contained in its domain.
11.3.1 Example
The most important examples of strictly singular operators are compact
operators. These play a significant role in the study of differential and
integral equations.
In cases where the normed linear spaces are not assumed complete, it
is convenient to also consider precompact operators.
11.3.2 Definition: precompact operator
Let $A$ be a linear operator mapping $E_x$ into $E_y$.
If $A(B(0, 1))$ is totally bounded in $E_y$, then $A$ is called precompact.
11.3.3 Theorem
Every precompact operator is strictly singular.
Proof: Let $B$ be a precompact operator with domain in $E_x$ and range in
$E_y$. $B$ is bounded, since a totally bounded set is bounded. Let us assume
that $B$ has a bounded inverse on a subspace $M \subseteq D(B)$. If $B_M(0, 1)$ is the
unit ball in $M$, then $B[B_M(0, 1)]$ is totally bounded. Since $B$ has a bounded
inverse on $M$, it follows that $B_M(0, 1)$ is totally bounded in $M$. Since the unit
ball in $M$ is totally bounded, it has a finite $\varepsilon$-net, i.e., there is a finite number
of points $x_1, x_2, \ldots, x_k$ in the unit ball of $M$ such that for every $x \in B_M(0, 1)$
there is an $x_i$ such that
$$\|x - x_i\| < 1 \tag{11.6}$$
We assert that the finite dimensional space $N$ spanned by $x_1, x_2, \ldots, x_k$
is $M$.
Suppose this assertion is false. Since $N$ is then a finite dimensional proper
subspace of the normed linear space $M$, we show that there exists an
element in the unit ball of $M$ whose distance from $N$ is 1. Let $z$ be a point
in $M$ but not in $N$. Then there exists a sequence $\{m_k\}$ in $N$ such that
$$\|z - m_k\| \to d(z, N).$$
Since $N$ is finite dimensional and $\{m_k\}$ is bounded, $\{m_k\}$ must have a
convergent subsequence $\{m_{k_p}\}$ (1.6.19), which converges to some $m \in N$. Hence,
$$\|z - m\| = \lim_{p} \|z - m_{k_p}\| = d(z, N) = d(z - m, N).$$
Since $z - m \neq 0$,
$$1 = \Big\| \frac{z - m}{\|z - m\|} \Big\| = d\Big( \frac{z - m}{\|z - m\|}, N \Big).$$
This contradicts (11.6).
Hence $M$ is finite dimensional. Therefore, $B$ does not have a bounded inverse
on an infinite dimensional subspace.
11.3.4 Definition: finite deficiency
A subspace $L$ of a vector space $E$ is said to have finite deficiency in
$E$ if the dimension of $E/L$ is finite. This is written as
$$\dim E/L < \infty.$$
Even though $L$ may not be contained in $D(A)$, the restriction of $A$ to $L$ will
mean the restriction of $A$ to $L \cap D(A)$.
11.3.5 Theorem
Let $A$ be a linear operator from a subspace of $E_x$ into $E_y$. Assume
that $A$ does not have a bounded inverse when restricted to any closed
subspace having finite deficiency in $E_x$. Then, given an arbitrarily small
number $\varepsilon > 0$, there exists an infinite dimensional subspace $L(\varepsilon)$ contained
in $D(A)$, such that $A$ restricted to $L(\varepsilon)$ is precompact and has norm not
exceeding $\varepsilon$.
Proof: Since $A$ does not have a bounded inverse on any closed subspace of
$E_x$ having finite deficiency, we cannot find an $m > 0$ such that $\|Ax\| \geq m\|x\|$
on such a subspace. Therefore, there is no loss of generality in assuming
that there exists an $x_1 \in E_x$ such that $\|x_1\| = 1$ and $\|Ax_1\| < \frac{\varepsilon}{3}$. There
is an $f_1 \in E_x^*$ such that $\|f_1\| = 1$ and $f_1(x_1) = \|x_1\| = 1$. Since $N(f_1)$
has deficiency 1 in $E_x$, there exists an element $x_2 \in N(f_1)$ such that
$\|x_2\| = 1$ and $\|Ax_2\| \leq \frac{\varepsilon}{3^2}$. There exists an $f_2 \in E_x^*$ such that $\|f_2\| = 1$
and $f_2(x_2) = \|x_2\| = 1$. Since $N(f_1) \cap N(f_2)$ has finite deficiency in $E_x$,
there exists an $x_3 \in N(f_1) \cap N(f_2)$ such that $\|x_3\| = 1$ and $\|Ax_3\| \leq \frac{\varepsilon}{3^3}$.
Hence, by induction, we can construct sequences $\{x_k\}$ and $\{f_k\}$ having the
following properties:
$$\|x_k\| = \|f_k\| = f_k(x_k) = 1, \quad \|Ax_k\| < \frac{\varepsilon}{3^k}, \quad 1 \leq k < \infty \tag{11.7}$$
$$x_k \in \bigcap_{i=1}^{k-1} N(f_i), \text{ or equivalently, } f_i(x_k) = 0 \text{ for } 1 \leq i < k \tag{11.8}$$
We next show that $\{x_k\}$ is linearly independent.
If that is not so, we can find $\alpha_1, \alpha_2, \ldots, \alpha_k, \ldots$, not all zero, such that
$$\alpha_1 x_1 + \alpha_2 x_2 + \cdots + \alpha_k x_k + \cdots = 0,$$
whence
$$\alpha_1 f_i(x_1) + \alpha_2 f_i(x_2) + \cdots + \alpha_k f_i(x_k) + \cdots = 0.$$
Using (11.7) and (11.8), and applying $f_1, f_2, \ldots$ in turn, we get $\alpha_i = 0$ for
$i = 1, 2, \ldots$
Hence $\{x_k\}$ is a linearly independent set. Let $L = \operatorname{span}\{x_1, x_2, \ldots\}$.
$L$ is an infinite dimensional subspace of $D(A)$. It will now be shown that
the restriction $A_L$ of $A$ to $L$ has norm not exceeding $\varepsilon$.
Suppose $x = \sum_{i=1}^{l} \alpha_i x_i$. Then from (11.7) and (11.8),
$$|\alpha_1| = |f_1(x)| \leq \|f_1\| \|x\| = \|x\|.$$
We next want to establish that
$$|\alpha_k| \leq 2^{k-1} \|x\|, \quad 1 \leq k \leq l \tag{11.9}$$
Let us suppose that (11.9) is true for $k \leq j < l$.
Then we get from (11.7) and (11.8) that
$$f_{j+1}(x) = \sum_{i=1}^{j} \alpha_i f_{j+1}(x_i) + \alpha_{j+1} \tag{11.10}$$
Hence, by (11.10) and the induction hypothesis,
$$|\alpha_{j+1}| \leq |f_{j+1}(x)| + \sum_{i=1}^{j} |\alpha_i| |f_{j+1}(x_i)|
\leq \|x\| + \sum_{i=1}^{j} 2^{i-1} \|x\| = 2^j \|x\|.$$
Hence (11.9) is true by induction.
Thus,
$$\|Ax\| \leq \sum_{i=1}^{l} |\alpha_i| \|Ax_i\| \leq \sum_{i=1}^{l} 2^{i-1} \frac{\varepsilon}{3^i} \|x\| \leq \varepsilon \|x\|.$$
Hence $\|A_L\| \leq \varepsilon$.
To prove that $A_L$ is precompact, we show that $A_L$ is the limit
in $B(L \to E_y)$ of a sequence of precompact operators [see 8.1.9]. For each
positive integer $n$, we define $A^L_n : L \to E_y$ to be $A$ on $\operatorname{span}\{x_1, \ldots, x_n\}$
and $0$ on $\operatorname{span}\{x_{n+1}, x_{n+2}, \ldots\}$. It may be noted that $A^L_n$ is linear and has
finite dimensional range. Moreover, $A^L_n$ is bounded on $L$, for if $x = \sum_{i=1}^{n+k} \alpha_i x_i$,
then by (11.7) and (11.9)
$$\|A^L_n x\| \leq \sum_{i=1}^{n} |\alpha_i| \|Ax_i\| \leq \sum_{i=1}^{n} 2^{i-1} \frac{\varepsilon}{3^i} \|x\|.$$
Thus $A^L_n$ is bounded and has finite dimensional range.
Hence $A^L_n$ is precompact.
Since
$$\|A_L x - A^L_n x\| \leq \sum_{i=n+1}^{\infty} |\alpha_i| \|Ax_i\| \leq \|x\| \sum_{i=n+1}^{\infty} 2^{i-1} \frac{\varepsilon}{3^i} \to 0 \text{ as } n \to \infty,$$
it follows that $A^L_n$ converges to $A_L$ in $B(L \to E_y)$. Hence $A_L$ is
precompact [see 8.1.9].
11.4 Relationship between Strictly Singular and Compact Operators
The following theorem reveals the connection between the class of strictly
singular operators and the class of compact operators.
11.4.1 Theorem
Suppose $B \in B(E_x \to E_y)$. The following statements are equivalent:
(a) $B$ is strictly singular.
(b) For every infinite dimensional subspace $L \subseteq E_x$, there exists an
infinite dimensional subspace $M \subseteq L$ such that $B$ is precompact on $M$.
(c) Given $\varepsilon > 0$, an arbitrarily small positive number, and given an infinite
dimensional subspace $L$ of $E_x$, there exists an infinite dimensional subspace
$M \subseteq L$ such that $B$ restricted to $M$ has norm not exceeding $\varepsilon$.
Proof: We first show that (a) implies (b). Let us suppose that $B$ is strictly
singular and $L$ is an infinite dimensional subspace of $E_x$. Then $B_L$, the
restriction of $B$ to $L$, is strictly singular. Therefore, $B_L$ does not have a
bounded inverse on any closed subspace of finite deficiency in $L$. Hence, by
theorem 11.3.5, $B$ is precompact on some infinite dimensional subspace
$M \subseteq L$.
Next, we show that (b) $\Rightarrow$ (c). If (b) is true, then we assert that $B$ does
not have a bounded inverse on any infinite dimensional subspace having
finite deficiency in $L$. If that were not so, then $B$ would be precompact and
would have a bounded inverse at the same time on some infinite dimensional
subspace, which violates the conclusion of theorem 11.3.3. Hence (c) follows
by applying theorem 11.3.5 to $B_L$.
Finally, we show that (c) $\Rightarrow$ (a). If $B$ were not strictly singular, there
would be an infinite dimensional subspace $L$ on which $\|B_L x\| \geq m\|x\|$ for some
finite $m > 0$. Taking $\varepsilon < m$ in (c) yields an infinite dimensional subspace
$M \subseteq L$ with $\|B_M\| \leq \varepsilon < m$, a contradiction. Hence $B$ does not have a
bounded inverse on any infinite dimensional subspace, and thus $B$ is strictly
singular.
11.5 Perturbation by Bounded Operators
Suppose we want to study the properties of a given operator $A$ mapping a
normed linear space $E_x$ into a normed linear space $E_y$. If the operator
$A$ turns out to be involved, then $A$ is written as $T + V$, where $T$
is a relatively simple operator and $V$ is such that knowledge about the
properties of $T$ is enough to gain information about the corresponding
properties of $A$. For example, if $A$ is a differential operator $\sum_{k=s}^{n} a_k D^k$, $T$ is
chosen as $a_n D^n$ and $V$ is the remaining lower order terms. The concern
of perturbation theory is to find the conditions that $V$ should fulfil so
that the properties of $T$ can help us determine the properties of $A$.
In what follows, $V$ is a linear operator whose domain is a subspace of $E_x$
and whose range is a subspace of $E_y$.
11.5.1 Definition: kernel index of $A$, deficiency index of $A$ and
index of $A$
Kernel index of $A$: The dimension of $N(A)$ will be defined as the kernel
index of $A$ and will be denoted by $\alpha(A)$.
Deficiency index of $A$: The deficiency of $R(A)$ in $E_y$, written $\beta(A)$, will
be called the deficiency index of $A$.
Then $\alpha(A)$ and $\beta(A)$ will each be either a non-negative integer or $\infty$.
Index of $A$: If $\alpha(A)$ and $\beta(A)$ are not both infinite, we say $A$ has an index.
The index $\kappa(A)$ is defined by $\kappa(A) = \alpha(A) - \beta(A)$.
It is understood, as in the extended real number system, that if $p$ is any real
number,
$$p - \infty = -\infty \quad \text{and} \quad \infty - p = \infty.$$
11.5.2 Examples
1. Let $E_x = L^p([a, b])$ and $E_y = L^q([a, b])$, where $1 \leq p, q < \infty$.
Let us define $A$ as follows:
$$D(A) = \{u : u^{(n-1)} \text{ exists and is absolutely continuous on } [a, b], \ u^{(n)} \in E_y\},$$
$$Au = u^{(n)},$$
where $u^{(n)}$ stands for the $n$th derivative of $u$ on $[a, b]$. It may
be recalled that an absolutely continuous function is differentiable almost
everywhere (10.5). Here $N(A)$ is the space of polynomials of degree at
most $(n - 1)$. Hence $\alpha(A) = n$, $\beta(A) = 0$.
2. Let $E_x = E_y = l^p$, $1 \leq p \leq \infty$. Let $\{\lambda_\nu\}$ be a bounded sequence of
numbers and let $A$ be defined on all of $E_x$ by $A(\{x_\nu\}) = \{\lambda_\nu x_\nu\}$.
Then $\alpha(A)$ is the number of the $\lambda_\nu$ which are $0$; $\beta(A) = 0$ if $\{1/\lambda_\nu\}$ is a
bounded sequence; and $\beta(A) = \infty$ if infinitely many of the $\lambda_\nu$ are $0$.
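Example 2 can be made concrete in a finite truncation. The sketch below is our own (the particular diagonal entries are arbitrary): for a diagonal operator, the kernel index $\alpha(A)$ counts the zero entries, and since the range omits exactly those coordinates, the same count gives the deficiency $\beta(A)$ in the truncated picture, so the index $\kappa(A) = \alpha(A) - \beta(A)$ is 0 here.

```python
lam = [2.0, 0.0, 1.0, 0.0, 3.0]  # arbitrary diagonal entries, two of them zero

def A(x):
    return [l * v for l, v in zip(lam, x)]

zero_slots = [k for k, l in enumerate(lam) if l == 0.0]
alpha = len(zero_slots)  # kernel index: dim N(A)

# the basis vectors at the zero slots are annihilated by A ...
for k in zero_slots:
    e = [1.0 if j == k else 0.0 for j in range(len(lam))]
    assert A(e) == [0.0] * len(lam)

# ... and every vector in R(A) vanishes on those slots, so their count
# measures the deficiency of R(A)
beta = len(zero_slots)
print(alpha, beta, alpha - beta)  # 2 2 0
```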
11.5.3 Lemma
Let $L$ and $M$ be subspaces of $E_x$ with $\dim L > \dim M$ (thus $\dim M < \infty$).
Then there exists an $l \neq 0$ in $L$ such that
$$\|l\| = \operatorname{dist}(l, M).$$
Note 11.5.1 This lemma does not hold if $\dim L = \dim M < \infty$.
For example, take $E_x = \mathbb{R}^2$ and $L$ and $M$ two lines through the origin
which are not perpendicular to each other.
If $E_x$ is a Hilbert space $H$, the lemma has the following easy proof.
Proof: First we show that $\dim M = \dim(H/M^\perp)$, where $H/M^\perp$ stands for
the quotient space.
Consider the mapping $x \mapsto x + M^\perp$ of $M$ into $H/M^\perp$. For $x, y \in M$,
$x + y \mapsto (x + M^\perp) + (y + M^\perp) = x + y + M^\perp$, and for any scalar $\alpha$,
$\alpha x \mapsto \alpha(x + M^\perp) = \alpha x + M^\perp$; the map is linear, one-to-one and onto.
Thus there is an isomorphism between $M$ and $H/M^\perp$.
Let us assume that $L \cap M^\perp = \{0\}$. Then the quotient map restricted to
$L$ is one-to-one, so $\dim L \leq \dim(H/M^\perp) = \dim M$. This contradicts the
hypothesis that $\dim L > \dim M$; hence $L \cap M^\perp \neq \{0\}$.
Let $x \in L \cap M^\perp$ with $x \neq 0$. Then
$$\|x - m\|^2 = \|x\|^2 + \|m\|^2 \geq \|x\|^2 \text{ for } m \in M.$$
Thus $d(x, M) = \|x\|$.
11.5.4 Definition: minimum modulus of $A$
Let $N(A)$, the null manifold of $A$, be closed. The minimum modulus of
$A$ is written $\gamma(A)$ and is defined by
$$\gamma(A) = \inf_{x \in D(A)} \frac{\|Ax\|}{d(x, N(A))} \tag{11.11}$$
Denition
The onetoone operator A of A induced by A is the operator from
D(A)/N (A) into Ey dened by
= Ax,
A[x]
392
A First Course in Functional Analysis
where the coset [x] denotes the set of elements equivalent to x and
belongs to D(A)/N (A).
A is onetoone and linear with same range as that of A. We next state
without proof the following theorem.
11.5.6 Theorem (Goldberg [21])
Let $N(A)$ be closed and let $D(A)$ be dense in $E_x$. If $\gamma(A) > 0$, then
$\gamma(A) = \gamma(A')$ and $A'$ has a closed range.
11.5.7 Theorem
Suppose $\gamma(A) > 0$. Let $V$ be bounded with $D(V) \supseteq D(A)$. If
$\|V\| < \gamma(A)$, then
(a) $\alpha(A + V) \leq \alpha(A)$;
(b) $\dim E_y / \overline{R(A + V)} \leq \dim E_y / \overline{R(A)}$.
Proof: (a) Let $x \neq \theta$ be in $N(A + V)$ and let $\|V\| < \gamma = \gamma(A)$.
Since $x \in N(A + V)$, $Ax + Vx = \theta$, i.e., $Ax = -Vx$. Hence
$$\gamma\, d(x, N(A)) \leq \|Ax\| = \|Vx\| \leq \|V\| \|x\| < \gamma \|x\|,$$
where $d(x, N(A)) = \|[x]\|$, $[x] \in E_x / N(A)$. Thus $\|x\| > d(x, N(A))$ for every
nonzero $x \in N(A + V)$.
Therefore, by lemma 11.5.3, the dimension of $N(A + V)$ is at most the
dimension of $N(A)$, i.e., $\alpha(A + V) \leq \alpha(A)$.
(b) Let E_{x1} be the closure of D(A) and let V₁ be V restricted to D(A).
Let us consider A and V₁ as operators with domain dense in E_{x1}. The
conjugate A′ then has its domain in E′_y and its range in E′_{x1}.
Therefore, by theorem 11.5.6,
γ(A′) = γ(A) > ‖V‖ ≥ ‖V₁‖ = ‖V₁′‖.
We next show that
(E_y/R(A + V₁))′ is isometrically isomorphic to (R(A + V₁))⊥.
For g ∈ (E_y/R(A + V₁))′, define the map W by
(Wg)(y) = g[y], y ∈ E_y.
We observe |(Wg)(y)| = |g[y]| ≤ ‖g‖ ‖[y]‖ ≤ ‖g‖ ‖y‖, y ∈ E_y,
and
(Wg)(m) = g[m] = 0, m ∈ R(A + V₁).
Thus, Wg is in (R(A + V₁))⊥
with
‖Wg‖ ≤ ‖g‖.
Since
|g[y]| = |(Wg)(z)| ≤ ‖Wg‖ ‖z‖, z ∈ [y],
it follows that |g[y]| ≤ ‖Wg‖ ‖[y]‖.
(11.12)
Unbounded Linear Operators
Thus,
‖g‖ ≤ ‖Wg‖.
(11.13)
(11.13), together with ‖Wg‖ ≤ ‖g‖, proves that W is an isometry.
Given f ∈ (R(A + V₁))⊥, let g be the linear functional on E_y/R(A + V₁)
defined by g[y] = f(y).
Now,
|g[y]| = |f(z)| ≤ ‖f‖ ‖z‖, z ∈ [y].
It follows that |g[y]| ≤ ‖f‖ ‖[y]‖. Hence, g is in (E_y/R(A + V₁))′.
Furthermore, Wg = f, proving that R(W) = (R(A + V₁))⊥.
Thus,
(E_y/R(A + V₁))′ ≅ (R(A + V₁))⊥
(11.14)
Hence, it follows from theorem 11.2.10, definition 11.5.1 and (11.14) that
dim(E_y/R(A + V₁)) = dim((R(A + V₁))⊥)
= dim(N(A′ + V₁′)) = α(A′ + V₁′)
≤ α(A′) = dim(E_y/R(A)).
Since A + V agrees with A + V₁ on D(A), R(A + V) = R(A + V₁), and the
result follows.
11.6
Perturbation by Strictly Singular Operators
11.6.1
Definition: normally solvable
A closed linear operator with closed range is called normally
solvable.
11.6.2
Theorem
Let E_x and E_y be complete. If A is closed but R(A) is not closed, then
for each ε > 0 there exists an infinite-dimensional closed subspace L(ε)
contained in D(A), such that A restricted to L(ε) is compact with norm
not exceeding ε, an arbitrarily small number.
Proof: Let U be a closed subspace having finite deficiency in E_x. Assume
that A has a bounded inverse on U. Let Ax_n → y, x_n ∈ U. Since A has a
bounded inverse on U, {x_n} is a Cauchy sequence and therefore converges
to x in the Banach space U. Moreover, A being closed, x ∈ D(A) and
Ax = y. Thus, AU is closed. By hypothesis, there exists a finite
dimensional subspace N of E_x such that E_x = U + N.
Hence AE_x = AU + AN ⊆ E_y. Thus AU is a closed subspace and AN
is a finite dimensional subspace of E_y. Define a linear map B from E_y
onto E_y/AU by By = [y]. Since ‖By‖ = ‖[y]‖ ≤ ‖y‖, B is continuous.
Moreover, the linearity of B and the finite dimensionality of AN imply
the finite dimensionality of B(AN). Now a finite dimensional subspace of
a normed linear space is complete and hence closed. Since B is continuous,
B⁻¹(B(AN)) = AU + AN is closed (B⁻¹ is used in the set-theoretic sense). Thus
AE_x is closed. But this contradicts the hypothesis that R(A) is not closed.
Therefore, A does not have a bounded inverse on U. Hence there exists
an L = L(ε) with the properties described in theorem 11.3.4. Since E_y is
complete and A is closed and bounded on L, it follows that L̄ is contained
in D(A). Moreover, ‖A_L‖ ≤ ε and A(B̄_L) is contained in the closure of
A(B_L), where B_L and B̄_L are the unit balls in L and L̄ respectively and
A_L is the restriction of A to L̄. The precompactness of A(B_L) and the
completeness of E_y imply that its closure is compact. Thus, A_L is compact.
11.6.3
Theorem
Suppose that A₁ is a linear extension of A such that
dim(D(A₁)/D(A)) = n < ∞.
(a) If A is closed, then A₁ is closed.
(b) If A has a closed range, then A₁ has a closed range.
Proof: (a) By hypothesis, D(A₁) = D(A) + N, where N is a finite
dimensional subspace. Hence, G(A₁) = G(A) + H, where G(A₁) and G(A)
are the graphs of A₁ and A respectively and H = {(n, A₁n) : n ∈ N}.
Thus, if G(A) is closed, then G(A₁) is closed since H is finite
dimensional.
(b) R(A₁) = R(A) + A₁N; A₁N is finite dimensional and hence closed.
Also R(A) is given to be closed. Hence, R(A₁) is closed.
11.6.4
Theorem
Let E_x and E_y be complete and let A be normally solvable. If L is a
subspace (not necessarily closed) of E_x such that L + N(A) is closed, then
AL is closed. In particular, if L is closed and N(A) is finite dimensional,
then AL is closed.
Proof: Let A₁ be the operator A restricted to D(A) ∩ (L + N(A)). Then A₁
is closed and N(A₁) = N(A).
Hence γ(A₁) ≥ γ(A) > 0. Therefore, A₁ has a closed range, i.e.,
AL = A₁(L + N(A)) is closed.
If V is strictly singular with no restriction on its norm, then we get an
important stability theorem due to Kato (Goldberg [21]).
11.6.5
Theorem
Let E_x and E_y be complete and let A be normally solvable with
α(A) < ∞. If V is strictly singular and D(A) ⊆ D(V), then
(a) A + V is normally solvable
(b) κ(A + V) = κ(A)
(c) α(A + λV) and β(A + λV) have constant values p₁ and p₂,
respectively, except perhaps for isolated points λ. At the isolated points,
p₁ < α(A + λV) < ∞
and
β(A + λV) > p₂.
Proof: (a) Since α(A) < ∞, i.e., the null space of A is finite dimensional
and hence closed, there exists a closed subspace L of E_x such that E_x = L ⊕ N(A).
Let A_L be the operator A restricted to L ∩ D(A). Then A being closed, A_L
is closed with R(A_L) = R(A). Let us suppose that A + V does not have a
closed range. Now A + V is an extension of A_L + V. Then it follows from
theorem 11.6.3(b) that A_L + V does not have a closed range. Moreover,
11.6.3(a) yields that A_L + V is closed since A_L is closed. Thus, A_L + V is a
closed operator but its range is not closed. It follows from theorem 11.6.2
that there exists a closed infinite-dimensional subspace L₀ contained in
D(A_L) = D(A_L + V) such that
‖(A_L + V)x‖ < (γ(A_L)/2)‖x‖, x ∈ L₀    (11.15)
Thus, since A_L is one-to-one, it follows for all x in L₀ that
‖Vx‖ ≥ ‖A_L x‖ − ‖(A_L + V)x‖ ≥ γ(A_L)‖x‖ − (γ(A_L)/2)‖x‖ = (γ(A_L)/2)‖x‖.
The above shows that V has a bounded inverse on the infinite dimensional
space L₀. This, however, contradicts the hypothesis that V is strictly
singular; hence R(A + V) is closed. We next show that α(A + V) < ∞.
There exists a closed subspace M₁ such that
N(A + V) = (N(A + V) ∩ N(A)) ⊕ M₁    (11.16)
Let A₁ be the operator A restricted to M₁. Since N(A) is finite dimensional,
i.e., closed, N(A) + M₁ is closed and A is normally solvable. Hence, by
theorem 11.6.4, AM₁ is closed. Thus, R(A₁) = AM₁ is closed. Moreover, A₁
is one-to-one. Hence, its inverse is bounded. Since M₁ ⊆ N(A + V), V = −A₁
on M₁, and V is strictly singular, M₁ must be finite dimensional.
Therefore, (11.16) implies that N(A + V) is finite dimensional.
(b) We have shown above that for all scalars λ, A + λV is normally
solvable and α(A + λV) < ∞.
Let I denote the closed interval [0, 1] and let Z be the set of integers
together with the ideal elements +∞ and −∞. Let us define φ : I → Z
by φ(λ) = κ(A + λV). Let I have the usual topology and let Z have the
discrete topology, i.e., points are open sets. To prove (b) it suffices to show
that φ is continuous. If φ is continuous, then φ(I) is a connected set which
therefore consists of only one point. In particular,
κ(A) = φ(0) = φ(1) = κ(A + V)    (11.17)
In order to show the continuity of φ, we first prove that
κ(A + λV) = κ(A) for ‖λV‖ sufficiently small.
We refer to (a) and note that A_L is closed, one-to-one and R(A_L) =
R(A). Hence, A_L has a bounded inverse. Then, by theorem 11.5.7, we have
α(A_L + λV) = α(A_L) = 0    (11.18)
Since A_L has a bounded inverse,
dim(E_y/R(A_L + λV)) = dim(E_y/R(A_L)) provided ‖λV‖ < γ(A_L).
Hence, β(A_L + λV) = β(A_L).    (11.19)
Now, D(A + λV) = D(A) and D(A_L + λV) = D(A_L) = D(A) ∩ L.
Thus, D(A) = D(A_L) ⊕ N(A), so that
dim(D(A)/D(A_L)) = dim(N(A)) = α(A).
Hence, κ(A) = κ(A_L) + α(A) = κ(A_L + λV) + α(A) = κ(A + λV),
where ‖λV‖ < γ(A_L).    (11.20)
Hence, the continuity of φ is established and (11.17) is true.
(c) For the proof, see Goldberg [21].
11.7
Perturbation in a Hilbert Space and Applications
11.7.1 A linear operator A defined in a Hilbert space is said to be
symmetric if it is contained in its adjoint A∗ and is called self-adjoint if
A = A∗.
11.7.2
Definition: coercive operator
A symmetric operator A with domain D(A) dense in a Hilbert space H
is said to be coercive if there exists a γ > 0 such that
⟨Au, u⟩ ≥ γ⟨u, u⟩ ∀ u ∈ D(A)    (11.21)
11.7.3
Definition: scalar product [ , ]
Let A be a coercive linear operator with domain D(A) dense in a Hilbert
space H. Let A satisfy (11.21) ∀ u ∈ D(A).
We define [u, v] = ⟨Au, v⟩.
Since A is symmetric, [u, v] = [v, u] ∀ u, v ∈ D(A), and
0 ≤ [u, u] = ‖u‖²_A    (11.22)
It may be seen that [ , ] defines a new inner product in D(A). We
complete D(A) w.r.t. the new product and call the resulting Hilbert space
H_A, where ‖ ‖_A defined by (11.22) is the norm of H_A.
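A finite-dimensional sketch of definitions 11.7.2–11.7.3: the standard three-point finite-difference matrix for −d²/dx² with Dirichlet conditions (an assumed discretization chosen for illustration, not from the text) is symmetric and positive definite, hence coercive with γ equal to its smallest eigenvalue, and [u, v] = ⟨Au, v⟩ behaves as an inner product:

```python
import numpy as np

# Discrete Dirichlet Laplacian A = -d^2/dx^2 on (0, 1) with n interior
# points: a symmetric positive definite (hence coercive) matrix.
n = 50
h = 1.0 / (n + 1)
A = (np.diag(2.0 * np.ones(n)) - np.diag(np.ones(n - 1), 1)
     - np.diag(np.ones(n - 1), -1)) / h**2

gamma = np.linalg.eigvalsh(A)[0]        # smallest eigenvalue, roughly pi^2

def energy(u, v):
    """The new inner product [u, v] = <Au, v> of definition 11.7.3."""
    return (A @ u) @ v

rng = np.random.default_rng(1)
u, v = rng.standard_normal(n), rng.standard_normal(n)
print(np.isclose(energy(u, v), energy(v, u)))   # True: symmetry [u,v] = [v,u]
print(energy(u, u) >= gamma * (u @ u) - 1e-8)   # True: coercivity (11.21)
```

Completing the interior-point vectors in this energy norm is the discrete analogue of forming H_A.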
11.7.4
Perturbation
Our concern is to solve a complicated differential equation
A₁u = f, u ∈ D(A₁)    (11.23)
where A₁ is coercive in the Hilbert space H.
We often replace the above equation by a simpler equation of the form
A2 u = f
(11.24)
The question that may arise is to what extent the replacement is
justified; in other words, we are to determine how close the solution of
equation (11.24) is to the solution of equation (11.23). For this, let us assume
that both the operators A₁ and A₂ are symmetric and coercive on their
respective domains D(A₁) and D(A₂). We complete D(A₁) and D(A₂)
w.r.t. the products [u, u]_{A₁}, u ∈ D(A₁) ⊆ H and [v, v]_{A₂}, v ∈ D(A₂) ⊆ H.
We call the Hilbert spaces so generated H_{A₁} and H_{A₂} respectively.
Let ‖u‖²_{A₁} = ⟨A₁u, u⟩ ≥ γ₁⟨u, u⟩, i.e., ⟨u, u⟩ ≤ γ₁⁻¹‖u‖²_{A₁}, u ∈ H_{A₁}    (11.25)
Also, let
‖v‖²_{A₂} = ⟨A₂v, v⟩ ≥ γ₂⟨v, v⟩, i.e., ⟨v, v⟩ ≤ γ₂⁻¹‖v‖²_{A₂}, v ∈ H_{A₂}    (11.26)
11.7.5
Theorem
Let the symmetric and coercive operators A₁ and A₂ fulfil respectively
the inequalities (11.25) and (11.26). Moreover, let H_{A₁} and H_{A₂} coincide
and be each separable. If u⁰ and v⁰ are the solutions of equations (11.23)
and (11.24), then there exists some constant η such that
‖v⁰ − u⁰‖_{A₂} ≤ η‖v⁰‖_{A₂}    (11.27)
Before we prove theorem 11.7.5, a lemma is proved.
11.7.6
Lemma
Let A₁ be a symmetric coercive operator fulfilling condition (11.25). Let
ΔA₁ be a symmetric non-negative bounded linear operator satisfying the
condition
0 ≤ ⟨ΔA₁u, u⟩ ≤ γ₃⟨u, u⟩, γ₃ > 0, u ∈ D(A₁) ⊆ D(ΔA₁)    (11.28)
Let γ₁ > γ₃ and γ₁ − γ₃ = γ₂.
Then A₂ = A₁ − ΔA₁ is a symmetric linear coercive operator satisfying
the condition (11.26).
Proof: ⟨A₂u, u⟩ = ⟨(A₁ − ΔA₁)u, u⟩ = ⟨A₁u, u⟩ − ⟨ΔA₁u, u⟩
≥ (γ₁ − γ₃)⟨u, u⟩ = γ₂⟨u, u⟩ ∀ u ∈ D(A₂).
Moreover, since A₁ and ΔA₁ are symmetric linear operators, A₂ = A₁ −
ΔA₁ is symmetric and coercive.
Proof (th. 11.7.5): H_{A₁} being a Hilbert space, we can define
‖u‖_{A₁} = [u, u]^{1/2}_{A₁} = ⟨A₁u, u⟩^{1/2}, u ∈ D(A₁) ⊆ H_{A₁}.
Similarly, ‖u‖_{A₂} = [u, u]^{1/2}_{A₂} = ⟨A₂u, u⟩^{1/2}, u ∈ D(A₂) ⊆ H_{A₂}.
H_{A₁} and H_{A₂}, being separable, are isomorphic.
It follows from (11.28) that
⟨(A₁ − A₂)u, u⟩ ≤ γ₃⟨u, u⟩ ≤ (γ₃/γ₂)‖u‖²_{A₂},
or
‖u‖²_{A₁} ≤ ‖u‖²_{A₂} + γ₃⟨u, u⟩ ≤ (1 + γ₃/γ₂)‖u‖²_{A₂},
or
‖u‖_{A₁} ≤ (1 + γ₃/γ₂)^{1/2}‖u‖_{A₂}    (11.29)
Again, since ΔA₁ is non-negative, we have
‖u‖²_{A₁} ≥ ‖u‖²_{A₂}    (11.30)
Hence, we can find positive constants μ₁, μ₂, such that
μ₁‖u‖²_{A₂} ≤ ‖u‖²_{A₁} ≤ μ₂‖u‖²_{A₂} ∀ u ∈ H_{A₁} = H_{A₂}    (11.31)
If u⁰ and v⁰ are the respective unique solutions of (11.23) and (11.24),
then the inequality
‖v⁰ − u⁰‖_{A₂} ≤ η‖v⁰‖_{A₂}    (11.32)
holds, in which the constant η is defined by the formula
η = max{|1 − 1/μ₁|, |1 − 1/μ₂|}    (11.33)
Formulas (11.32) and (11.33) solve the problem (11.23) approximately with
an estimation of the error involved.
11.7.7
Example
1. For error estimates due to perturbation of a second order elliptic
differential equation, see Mikhlin [36].
2. In the theory of small vibrations and in many problems of quantum
mechanics it is important to determine how the eigenvalues and the
eigenvectors of a quadratic form K(x, x) = Σ_{i,j=1}^{n} b_{ij} x_i x_j are changed if
both the form K(x, x) and the unit form E(x, x) are altered. Perturbation
theory is applied in this case. See Courant and Hilbert [15].
3. In what follows, we consider a differential equation where the perturbation
method is used. We consider the differential equation
d²y/dx² + (1 + x²)y + 1 = 0, y(±1) = 0    (11.34)
We consider the perturbed equation
d²y/dx² + (1 + εx²)y + 1 = 0, y(±1) = 0    (11.35)
For ε = 0, the equation d²y₀/dx² + y₀ + 1 = 0, y₀(±1) = 0 has the solution
y₀ = (cos x / cos 1) − 1.
Let y(x, ε) be the solution of the equation (11.35) and we expand y(x, ε)
in terms of ε:
y(x, ε) = y₀(x) + εy₁(x) + ε²y₂(x) + ⋯    (11.36)
Substituting the power series (11.36) for y in the differential equation,
we obtain
Σ_{n=0}^{∞} εⁿ(yₙ″ + yₙ) + Σ_{n=1}^{∞} εⁿ x²yₙ₋₁ + 1 = 0,
and since the coefficients of the powers of ε must vanish, we have
yₙ″ + yₙ + x²yₙ₋₁ = 0, n ≥ 1    (11.37)
with the boundary conditions
yₙ(±1) = 0    (11.38)
Thus, we have a sequence of boundary value problems of the type (11.37)
subject to (11.38) from which y₁, y₂, y₃, . . . etc., can be found [see Collatz
[14]].
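The sequence of problems (11.37)–(11.38) can be checked numerically. The sketch below (grid size and the value of ε are illustrative choices, not from the text) solves the perturbed equation by finite differences and verifies that adding the first correction y₁ improves on y₀ alone:

```python
import numpy as np

# Solve y'' + (1 + eps*x^2) y + 1 = 0, y(-1) = y(1) = 0 by finite
# differences, and compare with the perturbation series y0 + eps*y1,
# where y0'' + y0 + 1 = 0 and y1'' + y1 + x^2*y0 = 0 (type (11.37)).
n = 400
x = np.linspace(-1.0, 1.0, n + 2)[1:-1]           # interior grid points
h = x[1] - x[0]
D2 = (np.diag(-2.0 * np.ones(n)) + np.diag(np.ones(n - 1), 1)
      + np.diag(np.ones(n - 1), -1)) / h**2

eps = 1e-2
y_exact = np.linalg.solve(D2 + np.diag(1.0 + eps * x**2), -np.ones(n))
y0 = np.linalg.solve(D2 + np.eye(n), -np.ones(n))
y1 = np.linalg.solve(D2 + np.eye(n), -(x**2) * y0)  # forcing term -x^2*y0
err0 = np.max(np.abs(y_exact - y0))                 # O(eps)
err1 = np.max(np.abs(y_exact - (y0 + eps * y1)))    # O(eps^2)
print(err1 < err0)                                  # True
```

Halving ε should roughly halve err0 but quarter err1, which is a further consistency check on the series.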
CHAPTER 12
THE HAHN-BANACH THEOREM AND OPTIMIZATION PROBLEMS
It was mentioned at the outset that we put emphasis both on the theory and
on its applications. In this chapter, we outline some of the applications of
the Hahn-Banach theorem to optimization problems. The Hahn-Banach
theorem is the most important theorem about the structure of linear
continuous functionals on normed linear spaces. In terms of geometry, the
Hahn-Banach theorem guarantees the separation of convex sets in normed
linear spaces by hyperplanes. This separation theorem is crucial to the
investigation into the existence of an optimum of an optimization problem.
12.1
The Separation of a Convex Set
In what follows, we state a theorem which asserts the existence of a
hyperplane separating two disjoint convex sets in a normed linear space.
12.1.1
Theorem (Hahn-Banach separation theorem)
Let E be a normed linear space and X₁, X₂ be two nonempty disjoint
convex sets, with X₁ being an open set. Then there exists a functional
f ∈ E′ and a real number α such that
X₁ ⊆ {x ∈ E : Re f(x) < α}, X₂ ⊆ {x ∈ E : Re f(x) ≥ α}.
For proof see 5.2.10.
The following theorems are in the setting of ℝⁿ.
12.1.2
Theorem (intersection)
In the space ℝⁿ, let X₁, X₂, . . . , X_m be compact convex sets, whose
union is a convex set. If the intersection of any (m − 1) of them is non-empty, then the intersection of all the X_j is non-empty.
Proof: We shall first prove the theorem for m = 2.
Let X₁ and X₂ be nonempty compact convex sets, such that X₁ ∪ X₂
is convex. Let X₁ and X₂ be disjoint; then there is a plane P which
separates them strictly [see Fig. 12(a): the plane P between X₁ and X₂].
Since there exist points of X₁ ∪ X₂ on both sides of P, and since X₁ ∪ X₂ is
convex, there exist points of X₁ ∪ X₂ on P. But this is impossible
since P separates X₁ and X₂ strictly. Hence X₁ ∩ X₂ ≠ ∅.
Let us next suppose that the result is true for m = r convex sets. We shall
prove this implies that the result holds for m = r + 1 convex sets
X₁, X₂, . . . , X_{r+1}.
Put X = ∩_{j=1}^{r} X_j. Then X ≠ ∅ by our premise. Now, X ≠ ∅,
X_{r+1} ≠ ∅. Suppose the two sets are disjoint. Then there exists a plane P
which separates them strictly. Writing X′_j = X_j ∩ P, we have
∪_{j=1}^{r} X′_j = ∪_{j=1}^{r} (X_j ∩ P) ∪ (X_{r+1} ∩ P)
= P ∩ ∪_{j=1}^{r+1} X_j.
Therefore, the union of the sets X′₁, X′₂, . . . , X′_r is convex. Also, the
intersection of any (r − 1) of X₁, X₂, . . . , X_r meets X and X_{r+1}, and hence,
being convex, meets P. Therefore, the intersection of any (r − 1) of
X′₁, X′₂, . . . , X′_r is not empty. Hence, by the case m = r, we should have
∩_{j=1}^{r} X′_j ≠ ∅. But ∩_{j=1}^{r} X′_j = X ∩ P = ∅, contradicting the fact that P
is a hyperplane which separates X and X_{r+1} strictly.
It follows that X ∩ X_{r+1} ≠ ∅ and so the result holds for m = r + 1.
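In ℝ¹ the theorem can be seen very concretely: compact convex sets are closed intervals, and for m = 3 the hypothesis that any m − 1 = 2 of them intersect already forces a common point (in dimension one the union-convexity assumption is not even needed). A small sketch with illustrative data:

```python
# Compact convex sets in R^1 are closed intervals.  If every pair of the
# three intervals meets, they share a common point: the maximum of the
# left endpoints does not exceed the minimum of the right endpoints.
intervals = [(0.0, 2.0), (1.0, 3.0), (1.5, 2.5)]

def pairwise_meet(ivs):
    return all(max(a1, a2) <= min(b1, b2)
               for i, (a1, b1) in enumerate(ivs)
               for (a2, b2) in ivs[i + 1:])

lo = max(a for a, _ in intervals)
hi = min(b for _, b in intervals)
print(pairwise_meet(intervals), lo <= hi)   # True True: a common point exists
```

Any point of [lo, hi] (here [1.5, 2.0]) lies in all three intervals at once.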
12.1.3
Theorem
A closed convex set is equal to the intersection of the half-spaces which
contain it.
Proof: Let X be a closed convex set and let A be the intersection of
the half-spaces which contain it. If E_f = {x : x ∈ ℝⁿ, f(x) = α} is
a hyperplane in ℝⁿ, then H_f = {x : x ∈ ℝⁿ, f(x) ≥ α} is called a
half-space.
If x₀ ∉ X, then {x₀} is a compact convex set not meeting X. Therefore,
there exists a plane E_f separating {x₀} and the set X s.t.
f(x₀) < inf_{x∈X} f(x).
We thus have H_f ⊇ X and x₀ ∉ H_f, on taking α = inf_{x∈X} f(x).
Consequently, x₀ does not belong to the intersection of the half-spaces
H_f containing X, i.e., x₀ ∉ A. Hence, A ⊆ X and since X ⊆ A, A = X.
12.1.4
Definition: plane of support of X
Let X be any set in ℝⁿ. A plane E_f containing at least one point of X
and s.t. all points of X are on one side of E_f is called a plane of support
or supporting hyperplane of X.
12.1.5
Observation
If X is compact, then for any linear functional f which is not identically
zero, there exists a plane of support having equation f(x) = α (it is
sufficient to take α = min_{x∈X} f(x)).
12.1.6
Plane of support theorem
If X is a compact nonempty convex set, it admits of an extreme point;
in fact, every plane of support contains an extreme point of X.
Proof: (i) The theorem is true in ℝ; for, a compact convex set in ℝ is a
closed segment [α, β] and contains two extreme points α and β; the planes
of support are {x : x ∈ ℝ, x = α} and {x : x ∈ ℝ, x = β}.
(ii) Suppose that the theorem holds for ℝʳ. We shall prove that it holds
in ℝ^{r+1}. Let X be a compact convex set in ℝ^{r+1} and let E_f be a plane of
support. The intersection E_f ∩ X is a nonempty closed convex set; since
E_f ∩ X is contained in the compact set X, it is also a compact set. The set
E_f ∩ X can be regarded as a compact set in ℝʳ and so by hypothesis, it
admits of an extreme point x₀. Let [x₁, x₂] be a line segment of centre x₀
with x₁ ≠ x₀ and x₂ ≠ x₀. Since x₀ is an extreme point of E_f ∩ X, we have
[x₁, x₂] ⊄ E_f ∩ X. Therefore, if x₁ and x₂ ∈ X, we have x₁, x₂ ∉ E_f and
hence x₁, x₂ are separated by E_f; but this contradicts the definition of E_f
as a plane of support of X. It follows that there is no segment [x₁, x₂] of
centre x₀ contained in X and so x₀ is an extreme point of X; by definition
x₀ is in E_f.
Thus, if the theorem holds for n = r, it holds for n = r + 1. But we
have seen that it holds for n = 1. Hence, by induction, the theorem is true
for all n ≥ 1.
12.2
Minimum Norm Problem and the Duality Theory
Let E be a real normed linear space and X be a linear subspace of E.
12.2.1
Definition: primal problem
Given u₀ ∈ E, to find ū ∈ X s.t.
inf_{u∈X} ‖u₀ − u‖ = α    (12.1)
12.2.2
Definition: dual problem
To find
f ∈ X⊥, ‖f‖ ≤ 1,    (12.2)
s.t.
sup f(u₀) = β    (12.3)
where
X⊥ = {f ∈ E′ : f(u) = 0 ∀ u ∈ X}    (12.4)
Theorem: minimal norm problem on the normed space E
Let X be a linear subspace of the real normed space E. Let u₀ ∈ E.
Then the following results are true:
(i) Extremal values: α = β.
(ii) Dual problem: The dual problem (12.3) has a solution f̂.
(iii) Primal problem: Let f̂ be a fixed solution of the dual problem
(12.3). Then the point ū ∈ X is a solution of the primal problem (12.1) if
and only if
f̂(u₀ − ū) = ‖u₀ − ū‖    (12.5)
12.2.4
Lemma
If dim X < ∞, then the primal problem (12.1) always has a solution.
Let v ∈ X and f ∈ X⊥ with ‖f‖ ≤ 1. From (i) we then obtain the two-sided error estimate for the minimum value α:
f(u₀) ≤ α ≤ ‖v − u₀‖.
Proof of theorem: (i) and (ii) Since α is the infimum in (12.1), for each
ε > 0 there is a point u ∈ X such that
‖u − u₀‖ ≤ α + ε.
Thus, for all f ∈ X⊥ with ‖f‖ ≤ 1,
f(u₀) = f(u₀ − u) ≤ ‖f‖ ‖u₀ − u‖ ≤ α + ε.
Hence β ≤ α + ε for all ε > 0, that is, β ≤ α. Let α > 0. Now theorem
5.1.4 yields that there is a functional f̂ ∈ X⊥ with ‖f̂‖ = 1 such that
f̂(u₀) = α    (12.6)
Along with β ≤ α, this implies α = β.
If α = 0, then (12.6) holds with f̂ = 0 and hence we again have α = β.
(iii) This follows from α = β and f̂(u) = 0 ∀ u ∈ X.
Proof of lemma: Since θ ∈ X, α ≤ ‖u₀‖. Thus problem (12.1) is
equivalent to the finite-dimensional minimum problem
min_{u∈X₀} Z = ‖u − u₀‖    (12.7)
where the set X₀ = {u ∈ X : ‖u − u₀‖ ≤ ‖u₀‖} is compact. By the Bolzano-Weierstrass theorem (1.6.19) this problem has a solution.
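In a Euclidean space the primal-dual pair (12.1)–(12.3) can be verified by least squares: the distance from u₀ to the subspace X is the norm of the residual of the orthogonal projection, and the normalized residual is a dual solution. A sketch with arbitrary illustrative data:

```python
import numpy as np

# Primal problem (12.1): the distance alpha from u0 to X = column space
# of B.  Dual problem (12.2)-(12.3): maximize f(u0) over functionals of
# norm <= 1 vanishing on X.  In Euclidean R^n the normalized residual of
# the least-squares projection solves the dual, and alpha = beta.
rng = np.random.default_rng(2)
B = rng.standard_normal((5, 2))          # X = column space of B
u0 = rng.standard_normal(5)

coef, *_ = np.linalg.lstsq(B, u0, rcond=None)
residual = u0 - B @ coef
alpha = np.linalg.norm(residual)         # primal value: d(u0, X)

f = residual / alpha                     # candidate dual solution, ||f|| = 1
print(np.allclose(f @ B, 0.0))           # True: f vanishes on X (f in X-perp)
print(np.isclose(f @ u0, alpha))         # True: dual value equals alpha
```

The equality f(u₀ − ū) = ‖u₀ − ū‖ of (12.5) is exactly what the normalized residual achieves at the projection ū = B·coef.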
12.2.5
Minimum norm problems on the dual space E′
Let us consider the modified primal problem:
To find f ∈ X⊥ s.t.
inf_{f∈X⊥} Z = ‖f − f₀‖ = α    (12.8)
along with the dual problem:
To find u ∈ X, ‖u‖ ≤ 1, s.t.
sup z = f₀(u) = β    (12.9)
where
X⊥ = {f ∈ E′ : f(u) = 0 for all u ∈ X}.
Thus, the primal problem (12.8) refers to the dual space E′, whereas
the dual problem (12.9) refers to the original space E.
12.2.6
Theorem
Let X be a linear subspace of the real normed linear space E. Given
f₀ ∈ E′, the following results hold good.
(a) Extremal values: α = β.
(b) Primal problem: The primal problem (12.8) has a solution f̂.
(c) Dual problem: Let f̂ be a fixed solution of the primal problem
(12.8). Then the point u ∈ X with ‖u‖ ≤ 1 is a solution of the dual
problem (12.9) if and only if
(f₀ − f̂)(u) = ‖f₀ − f̂‖    (12.10)
Proof: (i) For all f ∈ X⊥,
‖f − f₀‖ = sup_{‖u‖≤1} |f(u) − f₀(u)| ≥ sup_{‖u‖≤1, u∈X} |f₀(u)| ≥ β,
since f(u) = 0 for all u ∈ X. Hence, α ≥ β.
Let f_r : X → ℝ be the restriction of f₀ : E → ℝ to X.
Then ‖f_r‖ = sup_{‖u‖≤1, u∈X} f₀(u) = β.
By the Hahn-Banach theorem (theorem 5.1.3) there exists an extension
F : E → ℝ of f_r with ‖F‖ = ‖f_r‖.
This implies g := f₀ − F = 0 on X, that is, g ∈ X⊥. Since α ≥ β and
‖g − f₀‖ = ‖F‖ = ‖f_r‖ = β, g ∈ X⊥,
we get α = β. Hence, (12.8) has a solution f̂ = g.
(ii) This follows from α = β together with f̂(u) = 0 ∀ u ∈ X.
12.2.7
Definition: δ_x
Let −∞ < a ≤ x ≤ b < ∞. Set δ_x(u) := u(x) for all u ∈ C([a, b]).
Obviously, δ_x ∈ C([a, b])′ and ‖δ_x‖ = 1.
12.2.8
Lemma
Let f ∈ C([a, b])′ be such that f ≠ 0.
Suppose that f(u) = ‖f‖ ‖u‖, where ‖u‖ = max_{a≤x≤b} |u(x)| and u : [a, b] →
ℝ is a continuous function, such that |u(x)| achieves its maximum at
precisely N points of [a, b] denoted by x₁, x₂, . . . , x_N. Then there exist
real numbers α₁, α₂, . . . , α_N, such that
f = α₁δ_{x₁} + ⋯ + α_N δ_{x_N}
and
|α₁| + |α₂| + ⋯ + |α_N| = ‖f‖.
Proof: By the Riesz representation theorem (5.3.3) there exists a function
h : [a, b] → ℝ of bounded variation, such that
f(u) = ∫_a^b u(x)dh(x) for all u ∈ C([a, b])
and
Var(h) = ‖f‖,
where Var(h) stands for the total variation of h on the interval [a, b]. We
assume that h(a) = 0. For simplicity, we take N = 1, u(x₁) = ‖u‖ and
a < x₁ < b.
Let J = [a, b] − [x₁ − ε, x₁ + ε] for fixed ε > 0, and let Var_J(h) denote
the total variation of h on J. Then
Var_J(h) + |h(x₁ + ε) − h(x₁ − ε)| ≤ Var(h)    (12.11)
Case I: Let Var_J(h) = 0 for all ε > 0. Then, by (12.11), h is a step function
of the following form:
h(x) = 0 if a ≤ x < x₁, h(x) = Var(h) if x₁ < x ≤ b,
and by the relation
f(u) = ∫_a^b u(x)dh(x) = u(x₁)Var(h)
for all u ∈ C([a, b]).
Hence, f = Var(h)δ_{x₁}.
Case II: Let Var_J(h) > 0 for some ε > 0. We want to show that this is
impossible. By the mean value theorem, there is a point ξ ∈ [x₁ − ε, x₁ + ε]
such that
f(u) = ∫_J u(x)dh(x) + ∫_{x₁−ε}^{x₁+ε} u(x)dh(x)
≤ max_{x∈J} |u(x)| Var_J(h) + u(ξ)(h(x₁ + ε) − h(x₁ − ε)).
Since |u(x)| achieves its maximum exactly at the point x₁, we get
max_{x∈J} |u(x)| < ‖u‖.
Thus, it follows from (12.11) that
f(u) < ‖u‖ Var(h).
Hence, f(u) < ‖u‖ ‖f‖. This is a contradiction.
For N > 1 we use a similar argument.
12.3
Application to Chebyshev Approximation
It is convenient in practice to approximate a continuous function by a
polynomial for various reasons. Let u₀ be a continuous function on a
compact interval [a, b] ⊂ ℝ. Let us consider the following approximation
problem:
max_{a≤x≤b} |u₀(x) − u(x)| = min!, u ∈ 𝒫    (12.11)
where 𝒫 denotes the set of real polynomials of degree ≤ N for fixed N ≥ 1.
Problem (12.11) corresponds to the famous Chebyshev approximation of
the function u₀ by polynomials.
12.3.1
Theorem
Problem (12.11) has a solution. If p(x) is a solution of (12.11), then
|u₀(x) − p(x)| achieves its maximum at at least N + 2 points of [a, b].
Proof: Let E = C([a, b]) and ‖v‖ = max_{a≤x≤b} |v(x)|. Then (12.11) can be
written as
min Z = ‖u₀ − p‖, p ∈ 𝒫    (12.12)
Since dim 𝒫 < ∞, i.e., finite, the problem (12.12) has a solution by 12.2.4.
If u₀ ∈ 𝒫, then (12.12) is immediately true. Let us assume that u₀ ∉ 𝒫.
Let p be a solution of (12.12). Then, since u₀ ∉ 𝒫 and p ∈ 𝒫, we have
‖u₀ − p‖ > 0. By the duality theory from theorem 12.2.3 there exists a
functional f ∈ C([a, b])′ with ‖f‖ = 1 such that
f(u₀ − p) = ‖u₀ − p‖    (12.13)
along with
f(p̃) = 0 ∀ p̃ ∈ 𝒫    (12.14)
Let us suppose that |u₀(x) − p(x)| achieves its maximum on [a, b] at
precisely the points x₁, x₂, . . . , x_M where 1 ≤ M < N + 2. It follows
from (12.13) and lemma 12.2.8 that there are real numbers α₁, α₂, . . . , α_M
with |α₁| + ⋯ + |α_M| = 1, such that
f(u) = α₁u(x₁) + α₂u(x₂) + ⋯ + α_M u(x_M), u ∈ C([a, b]).
Assume that α_M ≠ 0. Let us choose a real polynomial p̃(x) of degree
≤ N, such that
p̃(x₁) = p̃(x₂) = ⋯ = p̃(x_{M−1}) = 0 and p̃(x_M) ≠ 0.
This is possible since M − 1 ≤ N. Then, p̃ ∈ 𝒫 and f(p̃) ≠ 0, contradicting
(12.14).
12.4
Application to Optimal Control Problems
We want to study the motion of a vertically ascending rocket that reaches
a given altitude h with minimum fuel expenditure [see Fig. 12(b)].
The mathematical model for the system is given by
d²x/dt² = u(t) − g    (12.15)
where x is the height of the rocket above the ground level, g is the
acceleration due to gravity, and u(t) is the thrust exerted by the rocket
[Fig. 12(b): the rocket ascending vertically above the earth].
Let h be the height attained at time T. Then the initial and boundary
conditions of the equation (12.15) are
x(0) = x′(0) = 0, x(T) = h    (12.16)
We neglect the loss of mass by the burning of fuel.
Let us measure the fuel expenditure during the time interval
[0, T] through the integral ∫₀ᵀ u(t)dt over the rocket thrust. Let T > 0
be fixed. Then the minimal fuel expenditure β(T) during the time interval
[0, T] is given by a solution of the following minimum problem
min_u ∫₀ᵀ u(t)dt = β(T)    (12.17)
where we vary u over all the integrable functions u : [0, T] → ℝ. Integrating
(12.15) we get
x′(t) = ∫₀ᵗ u(s)ds − gt.
Integrating further,
x(t) = ∫₀ᵗ dτ ∫₀^τ u(s)ds − (1/2)gt²
= ∫₀ᵗ (t − s)u(s)ds − (1/2)gt²    [see 4.7.16 Ex. 2]
Thus, h = ∫₀ᵀ (T − s)u(s)ds − (1/2)gT².    (12.18)
Thus for given h > 0, we have to determine the optimal thrust u(·) and the
final time T as a solution of (12.17).
This formulation has the following shortcoming. If we consider only
classical force functions u, then an impulse at time t of the form u = δ_t is
excluded.
However, such types of thrusts are of importance. Therefore, we
consider the generalized problem for functionals:
(a) For a fixed altitude h and fixed final time T > 0, we are looking for
a solution U of the following minimum problem:
min ‖U‖ = β(T), U ∈ C([0, T])′    (12.19)
along with the side condition
h = U(w) − gT²/2    (12.20)
where we write w(t) = T − t.
(b) We determine the final time T in such a way that
β(T) = min!    (12.21)
It may be noted that (12.19) generalizes (12.17). In fact, if the functional
U ∈ C([0, T])′ has the following special form:
U(v) = ∫₀ᵀ v(t)u(t)dt for all v ∈ C([0, T]),
where the fixed function u : [0, T] → ℝ is continuous, then we show that
‖U‖ = ∫₀ᵀ |u(t)|dt    (12.22)
Let us define h(t) = ∫₀ᵗ u(s)ds for all t ∈ [0, T].
Then,
U(w) = ∫₀ᵀ w(t)dh(t) for all w ∈ C([0, T])
and
‖U‖ = Var(h), by 12.2.8    (12.23)
For 0 = t₀ < t₁ < ⋯ < tₙ = T, a partition of the interval [0, T],
Σ_{j=1}^{n} |h(t_j) − h(t_{j−1})| = Σ_{j=1}^{n} |∫_{t_{j−1}}^{t_j} u(t)dt| ≤ ∫₀ᵀ |u(t)|dt.
Hence, Var(h) ≤ ∫₀ᵀ |u(t)|dt.
By the mean value theorem,
Σ_{j=1}^{n} |h(t_j) − h(t_{j−1})| = Σ_{j=1}^{n} |u(ξ_j)|(t_j − t_{j−1}), where t_{j−1} ≤ ξ_j ≤ t_j.
Making the partition arbitrarily fine as n → ∞, we get
Σ_{j=1}^{n} |u(ξ_j)|(t_j − t_{j−1}) → ∫₀ᵀ |u(t)|dt as n → ∞,
and hence
Var(h) = ∫₀ᵀ |u(t)|dt    (12.24)
Thus, (12.23) and (12.24) yield
‖U‖ = ∫₀ᵀ |u(t)|dt.
Thus (12.22) is true.
Theorem 12.4.1 Problem (a), (b) has the following solution:
U = Tδ₀
and
T = (2h)^{1/2},
with the minimal fuel expenditure ‖U‖ = T [see Zeidler, E [56]].
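The closed-form solution of theorem 12.4.1 (with g = 1, as its formulas suggest) can be checked within the family of impulsive thrusts U = c·δ₀: the side condition (12.20) then gives c(T) = h/T + T/2, and minimizing over T recovers T = (2h)^{1/2} with fuel c = T. A sketch (h = 8 is an arbitrary illustrative value):

```python
import numpy as np

# Impulse U = c*delta_0 reaches height h at time T (with g = 1) when
# h = c*T - T^2/2, so the fuel cost is c(T) = h/T + T/2.  Minimizing
# over T gives T = sqrt(2h) and fuel c = T, as in theorem 12.4.1.
h = 8.0
T = np.linspace(0.5, 10.0, 100001)
fuel = h / T + T / 2.0
i = np.argmin(fuel)
print(T[i], fuel[i])    # both close to sqrt(2h) = 4
```

This only verifies optimality within the impulse family; the full statement, that no other functional U does better, is the content of the theorem.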
CHAPTER 13
VARIATIONAL PROBLEMS
13.1
Minimization of Functionals in a
Normed Linear Space
In this chapter we first introduce a variational problem. The purpose is to
explore the conditions under which a given functional in a normed linear
space admits of an optimum. Many differential equations arising out of
problems of physics or of mathematical physics are difficult to solve. In
such a case, a functional is built up out of the given equations and is
minimized.
Let H be a Hilbert space and A be a symmetric linear operator with
domain D(A) dense in H. Then ⟨Au, u⟩, for all u ∈ D(A), is a symmetric
bilinear functional and is denoted by a(u, u); a(u, u) is a quadratic
functional (9.3.1). Let L(u) be a linear functional. The minimization
problem can be stated as
Min_{u∈H} J(u) = (1/2)a(u, u) − L(u)    (13.1)
In general, we consider a vector space E and U an open set of E. Let
J : u ∈ U ⊆ E → ℝ. Then the minimization problem is
Min_{u∈U} J(u)    (13.2)
13.2
Gâteaux Derivative
Let E₁ and E₂ be normed linear spaces and P : U ⊆ E₁ → E₂ be a
mapping of an open subset U of E₁ into E₂. We shall call a vector ξ ∈ E₁,
ξ ≠ 0, a direction in E₁.
13.2.1
Definition
The mapping P is said to be differentiable in the sense of Gâteaux,
or simply G-differentiable, at a point u ∈ U in the direction ξ if the
difference quotient (P(u + tξ) − P(u))/t has a limit P′(u, ξ) in E₂ as t → 0 in ℝ.
The (unique) limit P′(u, ξ) is called the Gâteaux derivative of P at u
in the direction ξ.
P is said to be G-differentiable in a direction ξ in a subset of U if it is
G-differentiable at every point of the subset in the direction ξ.
13.2.2
Remark
The operator ξ ∈ E₁ ↦ P′(u, ξ) ∈ E₂ is homogeneous.
For P′(u, λξ) = lim_{t→0} (P(u + tλξ) − P(u))/t = λP′(u, ξ) for λ > 0.
13.2.3
Remark
The operator ξ ↦ P′(u, ξ) is not, in general, linear.
Example 1. Let f : ℝ² → ℝ be defined by
f(x₁, x₂) = 0 if (x₁, x₂) = (0, 0),
f(x₁, x₂) = x₁⁵/((x₁ − x₂)² + x₁⁴) if (x₁, x₂) ≠ (0, 0).
If u = (0, 0) ∈ ℝ² and the direction ξ = (h₁, h₂) ∈ ℝ² (ξ ≠ 0), we have
(f(th₁, th₂) − f(0, 0))/t = t²h₁⁵/((h₁ − h₂)² + t²h₁⁴),
which has a limit as t → 0, and we have
f′(u, ξ) = f′((0, 0), (h₁, h₂)) = 0 if h₁ ≠ h₂, and = h₁ if h₁ = h₂.
It can be easily verified that f is G-differentiable in ℝ².
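The limits claimed in example 1 can be confirmed numerically; a minimal sketch (the chosen directions are illustrative):

```python
# Numerical check of example 1: f(x1, x2) = x1^5/((x1 - x2)^2 + x1^4)
# away from the origin, f(0, 0) = 0.  The difference quotient
# (f(t*h1, t*h2) - f(0, 0))/t tends to 0 when h1 != h2 and to h1 when
# h1 = h2, so the Gateaux derivative exists but is not linear in the
# direction (h1, h2).
def f(x1, x2):
    return 0.0 if (x1, x2) == (0.0, 0.0) else x1**5 / ((x1 - x2)**2 + x1**4)

def dq(h1, h2, t):
    return f(t * h1, t * h2) / t          # f(0, 0) = 0

for t in (1e-2, 1e-4, 1e-6):
    print(dq(1.0, 2.0, t), dq(3.0, 3.0, t))   # tends to 0, and to 3
```

Along h₁ = h₂ the quotient is exactly t·h₁/t = h₁ for every t, which is why the second column is constant.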
Example 2. Let Ω be an open set in ℝⁿ and E = L_p(Ω), p > 1.
Suppose f : ℝ → ℝ, t ↦ f(t), is a continuously differentiable function,
such that (i) |f(t)| ≤ K|t|^p and (ii) |f′(t)| ≤ K|t|^{p−1} for some constant
K > 0. Then
J(u) = ∫_Ω f(u(x))dx    (13.3)
defines a functional J on L_p(Ω) = E which is G-differentiable everywhere
in all directions, and we have
J′(u, φ) = ∫_Ω f′(u(x))φ(x)dx    (13.4)
We first show that the RHS of (13.4) exists.
Since u ∈ L_p(Ω) and since f satisfies (i), we have
|J(u)| ≤ ∫_Ω |f(u(x))|dx ≤ K ∫_Ω |u|^p dx < ∞,
which means that J is well-defined on L_p(Ω).
On the other hand, for any u ∈ L_p(Ω), since f satisfies (ii), f′(u) ∈
L_q(Ω) where 1/p + 1/q = 1. This is because
∫_Ω |f′(u)|^q dx ≤ K^q ∫_Ω |u|^{(p−1)q} dx = K^q ∫_Ω |u|^p dx < ∞.
Thus for any u, φ ∈ L_p(Ω), we have by using Hölder's inequality
(theorem 1.4.3)
∫_Ω |f′(u)φ|dx ≤ ‖f′(u)‖_{L_q(Ω)} ‖φ‖_{L_p(Ω)} ≤ K‖u‖^{p/q}_{L_p(Ω)} ‖φ‖_{L_p(Ω)} < ∞.
This proves the existence of the RHS of (13.4).
If t ∈ ℝ, we define g : [0, 1] → ℝ by setting
g(λ) = f(u + λtφ).
Then g is continuously differentiable in ]0, 1[ and
g(1) − g(0) = ∫₀¹ g′(λ)dλ = tφ(x) ∫₀¹ f′(u + λtφ)dλ,
so that
(J(u + tφ) − J(u))/t = ∫_Ω φ(x) ∫₀¹ f′(u(x) + λtφ(x))dλ dx.
Now,
∫_Ω |φ(x)f′(u(x) + λtφ(x))|dx ≤ K ∫_Ω |φ(x)| |u(x) + λtφ(x)|^{p−1}dx
≤ K (∫_Ω |φ(x)|^p dx)^{1/p} (∫_Ω |u(x) + λtφ(x)|^{(p−1)q}dx)^{1/q} < ∞.
Hence, φ(x)f′(u(x) + λtφ(x)) ∈ L₁(Ω × [0, 1]),
and by Fubini's theorem (10.5.3)
(J(u + tφ) − J(u))/t = ∫₀¹ dλ ∫_Ω φ(x)f′(u(x) + λtφ(x))dx.
The continuity of f′ implies that f′(u + λtφ) → f′(u) as t → 0 (and
hence as λt → 0) uniformly for λ ∈ ]0, 1[.
Moreover, the condition (ii) and the triangle inequality yield, for |t| ≤ 1,
|φ(x)f′(u(x) + λtφ(x))| ≤ K|φ(x)|(|u(x)| + |φ(x)|)^{p−1}    (13.5)
The RHS of (13.5) is integrable by Hölder's inequality [see 1.4.3]. Then
by the dominated convergence theorem (10.2.20(b)) we conclude
J′(u, φ) = ∫_Ω f′(u)φ dx.
13.2.4
Definition
An operator P : U ⊆ E₁ → E₂ (U being an open set in E₁) is said
to be twice differentiable in the sense of Gâteaux at a point u ∈ U
in the directions ξ, η (ξ, η ∈ E₁, ξ ≠ 0, η ≠ 0 given) if the operator
u ↦ P′(u, ξ) : U ⊆ E₁ → E₂ is once G-differentiable at u in the
direction η. The G-derivative of u ↦ P′(u, ξ) is called the second
G-derivative of P and is denoted by P″(u, ξ, η) ∈ E₂, i.e.,
P″(u, ξ, η) = lim_{t→0} (P′(u + tη, ξ) − P′(u, ξ))/t    (13.6)
13.2.5
Gradient
Let J : U ⊆ E₁ → ℝ be a functional on an open set of a normed
linear space E₁ which is once G-differentiable at a point u ∈ U. If the
functional ξ ↦ J′(u, ξ) is continuous linear on E₁, then there exists a
(unique) element G(u) ∈ E′₁ (6.1), such that
J′(u, ξ) = G(u)(ξ) for all ξ ∈ E₁.
Similarly, if J is twice G-differentiable at a point u ∈ U, and if the form
(ξ, η) ↦ J″(u, ξ, η) is a bilinear (bi)continuous form on E₁ × E₁, then
there exists a (unique) element H(u) ∈ (E₁ × E₁)′ such that
J″(u, ξ, η) = H(u)(ξ, η), (ξ, η) ∈ E₁ × E₁    (13.7)
13.2.6
Definitions: gradient, Hessian
Gradient: G(u) ∈ E′₁ is called the gradient of J at u.
Hessian: H(u) ∈ (E₁ × E₁)′ is called the Hessian of J at u.
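For the quadratic functional (13.1) in finite dimensions, the Gâteaux derivative is linear in the direction and the gradient of 13.2.6 is G(u) = Au − b (a standard computation; the matrix and vectors below are illustrative assumptions):

```python
import numpy as np

# For J(u) = (1/2)<Au, u> - <b, u> with A symmetric, the Gateaux
# derivative in the direction xi is J'(u, xi) = <Au - b, xi>, so the
# gradient is G(u) = Au - b, and the Hessian is (xi, eta) -> <A xi, eta>.
rng = np.random.default_rng(3)
M = rng.standard_normal((4, 4))
A = M + M.T                              # symmetric matrix
b = rng.standard_normal(4)

def J(u):
    return 0.5 * (A @ u) @ u - b @ u

u, xi = rng.standard_normal(4), rng.standard_normal(4)
t = 1e-6
quotient = (J(u + t * xi) - J(u)) / t    # difference quotient of 13.2.1
grad = A @ u - b
print(abs(quotient - grad @ xi) < 1e-4)  # True: matches G(u)(xi) up to O(t)
```

The leftover term in the quotient is (t/2)⟨Aξ, ξ⟩, which is exactly the Hessian form showing up at second order.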
13.2.7
Mean value theorem
Let J be a functional as in 13.1. Let us assume that {u + tξ, t ∈ [0, 1]}
is contained in U. Let the function g : [0, 1] → ℝ be defined as
t ↦ g(t) = J(u + tξ).
Let J′(u + tξ, ξ) exist. Then
lim_{λ→0} (g(t + λ) − g(t))/λ = lim_{λ→0} (J(u + (t + λ)ξ) − J(u + tξ))/λ = g′(t),
showing g is once differentiable in [0, 1].
Thus, g′(t) = J′(u + tξ, ξ)    (13.8)
Similarly, if J″(u + tξ, ξ, ξ) exists and J is twice differentiable, then
g″(t) = J″(u + tξ; ξ, ξ)    (13.9)
13.2.8
Lemma
Let J be as in 13.1. Let u ∈ U, ξ ∈ E₁ be given. If {u + tξ : t ∈ [0, 1]} ⊆ U
and J is once G-differentiable on this set in the direction ξ, then there exists
a t₀ ∈ ]0, 1[, such that
J(u + ξ) = J(u) + J′(u + t₀ξ, ξ)    (13.10)
Proof: g : [0, 1] → ℝ and g is differentiable in ]0, 1[. The classical mean
value theorem yields:
g(1) = g(0) + 1 · g′(t₀), t₀ ∈ ]0, 1[    (13.11)
Using the definition of g(t) and (13.8), (13.10) follows from (13.11).
13.2.9
Lemma
Let U and J be as in lemma 13.2.8. If J is twice G-differentiable on the
set {u + tξ; t ∈ [0, 1]} in the directions ξ, ξ, then there exists a t₀ ∈ ]0, 1[
such that
J(u + ξ) = J(u) + J′(u, ξ) + (1/2)J″(u + t₀ξ, ξ, ξ)    (13.12)
Proof: The classical Taylor's theorem applied to g on ]0, 1[ yields
g(1) = g(0) + 1 · g′(0) + (1²/2!) g″(t₀)    (13.13)
Using the definition of g(t), (13.8) and (13.9), (13.12) follows from
(13.13).
13.2.10  Lemma

Let $E_1$ and $E_2$ be two normed linear spaces, $U$ an open subset of
$E_1$, and let $\varphi \in E_1$ be given. If the set $\{u + t\varphi : t \in [0, 1]\} \subset U$ and
$P : U \subset E_1 \to E_2$ is a mapping which is G-differentiable everywhere on
the set $\{u + t\varphi : t \in [0, 1]\}$ in the direction $\varphi$, then, for any $h \in E_2^*$, there
exists a $t_h \in \,]0, 1[$, such that
\[
h(P(u + \varphi)) = h(P(u)) + h(P'(u + t_h\varphi, \varphi)).
\tag{13.14}
\]
Proof: Let $g : [0, 1] \to \mathbb{R}$ be defined as
\[
t \mapsto g(t) = h(P(u + t\varphi)),
\tag{13.15}
\]
where $h \in E_2^*$. Then $g'(t)$ exists in $]0, 1[$ and
\[
g'(t) = \lim_{t' \to 0} \frac{g(t + t') - g(t)}{t'}
      = \lim_{t' \to 0} \frac{h(P(u + (t + t')\varphi)) - h(P(u + t\varphi))}{t'}
      = h(P'(u + t\varphi, \varphi)) \quad \text{for } t \in \,]0, 1[,
\]
since $h$ is a continuous linear functional defined on $E_2$.
Now, (13.14) follows immediately on applying the classical mean value
theorem to the function $g$.
Variational Problems

13.2.11  Theorem
Let $E_1$, $E_2$, $u$, $\varphi$ and $U$ be as above. If $P : U \subset E_1 \to E_2$ is
G-differentiable in the set $\{u + t\varphi : t \in [0, 1]\}$ in the direction $\varphi$, then there
exists a $t_0 \in \,]0, 1[$, such that
\[
\|P(u + \varphi) - P(u)\|_{E_2} \le \|P'(u + t_0\varphi, \varphi)\|.
\tag{13.16}
\]
Proof: Let $v = P(u + \varphi) - P(u)$; then $v \in E_2$. By Theorem 5.1.4, which is a
consequence of the Hahn--Banach theorem, we can find a functional $h \in E_2^*$
such that
\[
\|h\| = 1, \quad h(v) = h(P(u + \varphi) - P(u)) = \|P(u + \varphi) - P(u)\|.
\]
Since $P$ satisfies the assumptions of Lemma 13.2.10, it follows that there
exists $t_0 = t_h \in \,]0, 1[$, such that
\[
\|P(u + \varphi) - P(u)\| = h(P(u + \varphi) - P(u)) = h(P'(u + t_0\varphi, \varphi))
\le \|h\| \, \|P'(u + t_0\varphi, \varphi)\| = \|P'(u + t_0\varphi, \varphi)\|.
\]
13.2.12  Convexity and Gâteaux differentiability

Earlier we saw that a subset $U$ of a vector space $E$ is convex if
\[
u, v \in U \implies \lambda u + (1 - \lambda)v \in U, \quad 0 \le \lambda \le 1.
\]
13.2.13  Definition

A functional $J : U \subset E_1 \to \mathbb{R}$ on a convex set $U$ of a vector space $E_1$
is said to be convex if
\[
J((1 - \lambda)u + \lambda v) \le (1 - \lambda)J(u) + \lambda J(v)
\tag{13.17}
\]
for all $u, v \in U$ and $\lambda \in [0, 1]$.
$J$ is said to be strictly convex if strict inequality holds for all $u, v \in U$
with $u \ne v$ and $\lambda \in \,]0, 1[$.
13.2.14  Theorem

If a functional $J : U \subset E_1 \to \mathbb{R}$ on a convex set $U$ is G-differentiable
everywhere in $U$ in all directions, then
(i) $J$ is convex if and only if
\[
J(v) \ge J(u) + J'(u, v - u) \quad \text{for all } u, v \in U;
\tag{13.18}
\]
(ii) $J$ is strictly convex if and only if
\[
J(v) > J(u) + J'(u, v - u) \quad \text{for all } u, v \in U \text{ with } u \ne v.
\tag{13.19}
\]
Proof: $J$ convex $\implies J(v) - J(u) \ge \dfrac{J(u + \lambda(v - u)) - J(u)}{\lambda}$, $\lambda \in \,]0, 1]$.
Since $J'(u, v - u)$ exists, proceeding to the limit as $\lambda \to 0$ on the RHS,
we get
\[
J(v) - J(u) \ge J'(u, v - u),
\]
which is inequality (13.18).
For proving the converse, we note that (13.18) yields
\[
J(u) \ge J(u + \lambda(v - u)) + J'(u + \lambda(v - u), u - (u + \lambda(v - u))),
\]
or,
\[
J(u) \ge J(u + \lambda(v - u)) - \lambda J'(u + \lambda(v - u), v - u),
\tag{13.20}
\]
by the homogeneity of the mapping $J'(u, \cdot)$.
Similarly we can write
\[
J(v) \ge J(u + \lambda(v - u)) + J'(u + \lambda(v - u), v - (u + \lambda(v - u))),
\]
or,
\[
J(v) \ge J(u + \lambda(v - u)) + (1 - \lambda)J'(u + \lambda(v - u), v - u).
\tag{13.21}
\]
Multiplying (13.20) by $(1 - \lambda)$ and (13.21) by $\lambda$ and adding, we get
back (13.17) for any $u, v \in U \subset E_1$.
If $J$ is strictly convex, we can write
\[
J(v) - J(u) > \frac{1}{\lambda}\left[J(u + \lambda(v - u)) - J(u)\right].
\tag{13.22}
\]
On the other hand, the mean value theorem yields
\[
J(u + \lambda(v - u)) = J(u) + J'(u + \theta_0(v - u), \lambda(v - u)), \quad 0 < \theta_0 < \lambda, \ \lambda \in \,]0, 1[,
\]
or,
\[
J(u + \lambda(v - u)) - J(u) = \lambda J'(u + \theta_0(v - u), v - u).
\tag{13.23}
\]
Using (13.23), (13.22) reduces to (13.19).
The converse can be proved exactly in the same way as in the first part.
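Criterion (13.18) can be illustrated numerically. The functional $J(v) = \|v\|^2$ on $\mathbb{R}^3$ below is our illustrative choice; its G-derivative is $J'(u, w) = 2\langle u, w\rangle$:

```python
import random

random.seed(0)

def J(v):                          # J(v) = ||v||^2, convex on R^3
    return sum(x*x for x in v)

def J_prime(u, w):                 # Gateaux derivative: J'(u, w) = 2 <u, w>
    return 2.0*sum(a*b for a, b in zip(u, w))

for _ in range(1000):
    u = [random.gauss(0, 1) for _ in range(3)]
    v = [random.gauss(0, 1) for _ in range(3)]
    d = [b - a for a, b in zip(u, v)]
    # inequality (13.18): J(v) >= J(u) + J'(u, v - u)
    assert J(v) >= J(u) + J_prime(u, d) - 1e-12
print("inequality (13.18) holds for J(v) = ||v||^2")
```

Here the gap $J(v) - J(u) - J'(u, v - u) = \|v - u\|^2$ is nonnegative, exactly as the theorem predicts for a convex functional.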
13.2.15  Weak lower semicontinuity

Definition: weak lower semicontinuity
Let $E$ be a normed linear space.
A functional $J : E \to \mathbb{R}$ is said to be weakly lower semicontinuous
if, for every sequence $v_n \rightharpoonup u$ in $E$ (6.3.9), we have
\[
\liminf J(v_n) \ge J(u).
\tag{13.24}
\]

13.2.16  Theorem

If a functional $J : E \to \mathbb{R}$ is convex and admits of a gradient $G(u) \in E^*$
at every $u \in E$, then $J$ is weakly lower semicontinuous.
Proof: Let $\{v_n\}$ be a sequence in $E$, such that $v_n \rightharpoonup u$ in $E$. Then
$G(u)(v_n - u) \to 0$ as $n \to \infty$, since $\{v_n\}$ is weakly convergent to $u$. On
the other hand, since $J$ is convex, we have by Theorem 13.2.14,
\[
J(v_n) \ge J(u) + G(u)(v_n - u).
\]
On taking limits we have
\[
\liminf J(v_n) \ge J(u).
\]
13.2.17  Theorem

If a functional $J : U \subset E \to \mathbb{R}$ on an open convex set $U$ of a normed
linear space $E$ is twice G-differentiable everywhere in $U$ in all directions,
and if the form $(\varphi, \varphi) \mapsto J''(u, \varphi, \varphi)$ is nonnegative, i.e., if $J''(u, \varphi, \varphi) \ge 0$
for all $u \in U$ and $\varphi \in E$ with $\varphi \ne 0$, then $J$ is convex.
If the form $(\varphi, \varphi) \mapsto J''(u, \varphi, \varphi)$ is positive, i.e., if $J''(u, \varphi, \varphi) > 0$ for
all $u \in U$ and $\varphi \in E$ with $\varphi \ne 0$, then $J$ is strictly convex.
Proof: Since $U$ is convex, the set $\{u + \lambda(v - u), \lambda \in [0, 1]\}$ is contained in
$U$ whenever $u, v \in U$. Then, by Taylor's theorem [see Céa [11]], we have,
with $\varphi = v - u$,
\[
J(v) = J(u) + J'(u, v - u) + \frac{1}{2} J''(u + \theta_0(v - u), v - u, v - u)
\tag{13.25}
\]
for some $\theta_0 \in \,]0, 1[$. Then the nonnegativity of $J''$ implies
\[
J(v) \ge J(u) + J'(u, v - u),
\]
from which the convexity of $J$ follows by Theorem 13.2.14. Similarly, the strict
convexity of $J$ follows from the positivity of $J''(u, \cdot, \cdot)$.
13.2.18  Theorem

If a functional $J : E \to \mathbb{R}$ is twice G-differentiable everywhere in $E$ in
all directions and satisfies
(a) $J$ has a gradient $G(u) \in E^*$ at all points $u \in E$,
(b) $(\varphi, \varphi) \mapsto J''(u, \varphi, \varphi)$ is nonnegative, i.e., $J''(u; \varphi, \varphi) \ge 0$ for all
$u, \varphi \in E$ with $\varphi \ne 0$,
then $J$ is weakly lower semicontinuous.
Proof: By Theorem 13.2.17, the condition (b) implies that $J$ is convex.
The conditions of Theorem 13.2.16 being then fulfilled, $J$ is weakly lower
semicontinuous.
13.3  Fréchet Derivative

Let $E_1$ and $E_2$ be two normed linear spaces.

13.3.1  Definition: Fréchet derivative

A mapping $P : U \subset E_1 \to E_2$ from an open set $U$ in $E_1$ to $E_2$ is
said to be Fréchet differentiable, or simply F-differentiable, at a point
$u \in U$ if there exists a bounded linear operator $P'(u) : E_1 \to E_2$ such that
\[
\lim_{\|\varphi\| \to 0} \frac{\|P(u + \varphi) - P(u) - P'(u)\varphi\|}{\|\varphi\|} = 0.
\tag{13.26}
\]
Clearly, $P'(u)$, if it exists, is unique and is called the Fréchet
derivative of $P$ at $u$.
13.3.2  Examples

1. Let $f$ be a function defined on an open set $U \subset \mathbb{R}^2$, $f : U \to \mathbb{R}$. Then
$f$ is F-differentiable if it is once differentiable in the usual sense.
Let $u = (u_1, u_2)^T$, $\varphi = (\varphi_1, \varphi_2)^T$. Then
\[
f(u + \varphi) - f(u) = f(u_1 + \varphi_1, u_2 + \varphi_2) - f(u_1, u_2)
= \frac{\partial f}{\partial u_1}\varphi_1 + \frac{\partial f}{\partial u_2}\varphi_2 + O(\|\varphi\|^2),
\]
where $O(\|\varphi\|^2)$ denotes terms of order $\|\varphi\|^2$ and of higher orders.
Therefore,
\[
\lim_{\|\varphi\| \to 0} \frac{|f(u + \varphi) - f(u) - f'(u)\varphi|}{\|\varphi\|}
= \lim_{\|\varphi\| \to 0} \frac{\left|f(u + \varphi) - f(u) - \frac{\partial f}{\partial u_1}\varphi_1 - \frac{\partial f}{\partial u_2}\varphi_2\right|}{\|\varphi\|} = 0.
\]
Hence,
\[
f'(u) = (\operatorname{grad} f)^T = \left(\frac{\partial f}{\partial u_1}, \frac{\partial f}{\partial u_2}\right).
\]
2. Let $(u, v) \mapsto a(u, v)$ be a symmetric bilinear form on a Hilbert space
$H$ and $v \mapsto L(v)$ a linear form on $H$. Let us define $J : H \to \mathbb{R}$ (sec
13.1) by
\[
J(v) = \frac{1}{2} a(v, v) - L(v) \quad \text{for } v \in H.
\]
Then
\[
J(v + \varphi) - J(v) = \frac{1}{2} a(v + \varphi, v + \varphi) - L(v + \varphi) - \frac{1}{2} a(v, v) + L(v)
\]
\[
= \frac{1}{2} a(v, \varphi) + \frac{1}{2} a(\varphi, v) + \frac{1}{2} a(\varphi, \varphi) - L(\varphi)
= a(v, \varphi) - L(\varphi) + \frac{1}{2} a(\varphi, \varphi),
\]
using the symmetry of $a(\cdot, \cdot)$. Hence
\[
\lim_{\|\varphi\| \to 0} \frac{|J(v + \varphi) - J(v) - J'(v)\varphi|}{\|\varphi\|}
= \lim_{\|\varphi\| \to 0} \frac{\left|a(v, \varphi) - L(\varphi) - J'(v)\varphi + \frac{1}{2} a(\varphi, \varphi)\right|}{\|\varphi\|}.
\tag{13.27}
\]
Let us suppose that
(i) $a(\cdot, \cdot)$ is bicontinuous: there exists a constant $K > 0$ such that
$|a(u, v)| \le K\|u\| \, \|v\|$ for all $u, v \in H$;
(ii) $a(\cdot, \cdot)$ is coercive (11.7.2), i.e.,
$a(v, v) \ge \alpha\|v\|_H^2$ for all $v \in H$;
(iii) $L$ is bounded, i.e., there exists a constant $M$, such that
$|L(v)| \le M\|v\|$ for all $v \in H$.
Using condition (i), it follows from (13.27) that
\[
\lim_{\|\varphi\| \to 0} \frac{|J(v + \varphi) - J(v) - J'(v)\varphi|}{\|\varphi\|}
\le \lim_{\|\varphi\| \to 0} \left[\frac{|a(v, \varphi) - L(\varphi) - J'(v)\varphi|}{\|\varphi\|} + \frac{K\|\varphi\|}{2}\right].
\]
The limit will be zero if $J'(v)\varphi = a(v, \varphi) - L(\varphi)$.
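A finite-dimensional sketch of this computation: the symmetric positive definite matrix $A$ and the vector $b$ below stand in for $a(\cdot,\cdot)$ and $L$ and are our own illustrative choices, not data from the text.

```python
import numpy as np

rng = np.random.default_rng(1)
M = rng.normal(size=(4, 4))
A = M @ M.T + 4.0*np.eye(4)          # symmetric positive definite
b = rng.normal(size=4)

def a(u, v): return float(u @ A @ v)  # symmetric bilinear form
def L(v):    return float(b @ v)      # bounded linear form
def J(v):    return 0.5*a(v, v) - L(v)

v, phi = rng.normal(size=4), rng.normal(size=4)

# exact identity derived above:
#   J(v + phi) - J(v) = a(v, phi) - L(phi) + (1/2) a(phi, phi)
lhs = J(v + phi) - J(v)
rhs = a(v, phi) - L(phi) + 0.5*a(phi, phi)
assert abs(lhs - rhs) < 1e-10

# the remainder (1/2) a(phi, phi) is O(||phi||^2), so the Frechet
# derivative is J'(v)phi = a(v, phi) - L(phi):
for t in (1.0, 1e-3, 1e-6):
    p = t*phi
    rem = abs(J(v + p) - J(v) - (a(v, p) - L(p))) / np.linalg.norm(p)
    assert rem <= 0.5*np.linalg.norm(A, 2)*np.linalg.norm(p) + 1e-9
print("J'(v)phi = a(v, phi) - L(phi) verified")
```

The remainder divided by $\|\varphi\|$ is bounded by $\tfrac{K}{2}\|\varphi\|$ (with $K$ the spectral norm of $A$), so it vanishes in the limit, as in the argument above.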
13.3.3  Remark

If an operator $P : U \subset E_1 \to E_2$, where $E_1$ and $E_2$ are normed linear
spaces, is F-differentiable, then it is also G-differentiable and its G-derivative
coincides with the F-derivative.
Proof: If $P$ has an F-derivative $P'(u)$ at $u \in U$, then
\[
\lim_{\|\varphi\| \to 0} \frac{\|P(u + \varphi) - P(u) - P'(u)\varphi\|}{\|\varphi\|} = 0.
\]
Since $\varphi \ne 0$, we put $\varphi = te$, where $t = \|\varphi\|$ and $e$ is a unit vector in the
direction of $\varphi$. The above limit yields
\[
\lim_{t \to 0} \frac{\|P(u + te) - P(u) - tP'(u)e\|}{t} = 0.
\]
The above shows that $P$ is G-differentiable at $u$ and $P'(u)$ is also the
G-derivative.
13.3.4  Remark

The converse is not always true.
Example 1 in 13.2.3 has a G-derivative at $(0, 0)$, but is not F-differentiable.
13.4  Equivalence of the Minimizing Problem for Solving a Variational Inequality

13.4.1  Definition

A functional $J : U \subset E \to \mathbb{R}$, where $U$ is an open set in a normed
linear space $E$, is said to have a local minimum at a point $u \in U$ if there is
a neighbourhood $N_u$ of $u$ in $E$ such that
\[
J(u) \le J(v) \quad \text{for all } v \in U \cap N_u.
\tag{13.28}
\]

13.4.2  Definition

A functional $J$ on $U$ is said to have a global minimum in $U$ if there exists
a $u \in U$, such that
\[
J(u) \le J(v) \quad \text{for all } v \in U.
\tag{13.29}
\]
13.4.3  Theorem

Suppose $E$, $U$ and $J : U \to \mathbb{R}$ fulfil the following conditions:
1. $E$ is a reflexive Banach space;
2. $U$ is weakly closed;
3. $U$ is bounded; and
4. $J : U \subset E \to \mathbb{R}$ is weakly lower semicontinuous.
Then $J$ has a global minimum in $U$.
Proof: Let $m$ denote $\inf_{v \in U} J(v)$. If $\{v_n\}$ is a minimizing sequence for $J$, i.e.,
\[
m = \inf_{v \in U} J(v) = \lim_{n \to \infty} J(v_n),
\]
then, by the boundedness of $U$ (from (3)), $\{v_n\}$ is a bounded sequence in
$E$, i.e., there exists a constant $k > 0$, such that $\|v_n\| < k$ for all $n$.
By the reflexivity of $E$, this bounded sequence is weakly compact [see
Theorem 6.4.4]. Hence $\{v_n\}$ contains a weakly convergent subsequence, i.e.,
a sequence $\{v_{n_p}\}$, such that $v_{n_p} \rightharpoonup u \in E$ as $p \to \infty$. $U$ being weakly closed,
$u \in U$. Finally, since $v_{n_p} \rightharpoonup u$ and $J$ is weakly lower semicontinuous,
\[
J(u) \le \liminf_{p \to \infty} J(v_{n_p}) = \lim_{p \to \infty} J(v_{n_p}) = m \le J(v)
\]
for all $v \in U$.
13.4.4  Theorem

If $E$, $U$ and $J$ satisfy the conditions (1), (2), (4) and $J$ satisfies the
condition
(5) $\displaystyle\lim_{\|v\| \to \infty} J(v) = +\infty$,
then $J$ admits of a global minimum in $U$.
Proof: Let $z \in U$ be arbitrarily fixed, and consider the subset $U^0$ of $U$
given by
\[
U^0 = \{v : v \in U \text{ and } J(v) \le J(z)\}.
\]
The existence of a minimum in $U^0$ ensures the existence of a
minimum in $U$.
We show that $U^0$ satisfies the conditions (2) and (3). If $U^0$ is not
bounded, we can find a sequence $v_n \in U^0$, such that $\|v_n\| \to +\infty$. Then
condition (5) yields $J(v_n) \to +\infty$, which is impossible since $v_n \in U^0 \implies
J(v_n) \le J(z)$. Hence $U^0$ is bounded.
To show that $U^0$ is weakly closed, let $\{u_n\} \subset U^0$ be a sequence such that
$u_n \rightharpoonup u$ in $E$. Since $U$ is weakly closed, $u \in U$. Since $u_n \in U^0$, we have
$J(u_n) \le J(z)$, and since $J$ is weakly lower semicontinuous, $u_n \rightharpoonup u$ in
$E$ implies that
\[
J(u) \le \liminf J(u_n) \le J(z),
\]
proving that $u \in U^0$; thus $U^0$ is weakly closed. Now $U^0$ and $J$ satisfy all the
conditions of Theorem 13.4.3; hence $J$ has a global minimum in $U^0$ and
hence in $U$.
13.4.5  Theorem

Let $J : E \to \mathbb{R}$ be a functional on $E$ and $U$ a subset of $E$ satisfying the
following conditions:
1. $E$ is a reflexive Banach space;
2. $J$ has a gradient $G(u) \in E^*$ everywhere in $U$;
3. $J$ is twice G-differentiable in all directions $\varphi, \psi \in E$ and satisfies the
condition
\[
J''(u, \varphi, \varphi) \ge \|\varphi\| \, \mathcal{E}(\|\varphi\|) \quad \text{for all } \varphi \in E,
\]
where $\mathcal{E}(t)$ is a function on $\{t \in \mathbb{R} : t \ge 0\}$ such that $\mathcal{E}(t) \ge 0$ and
$\displaystyle\lim_{t \to +\infty} \mathcal{E}(t) = +\infty$;
4. $U$ is a closed convex set.
Then there exists at least one minimum $u \in U$ of $J$. Furthermore, if,
in condition (3),
5. $\mathcal{E}(t) > 0$ for $t > 0$
is satisfied by $\mathcal{E}$, then there exists a unique minimum of $J$ in $U$.
Proof: First of all, by condition (3), $J''(u, \varphi, \varphi) \ge 0$, and hence, by Taylor's
formula (13.25),
\[
J(v) = J(u) + J'(u, v - u) + \frac{1}{2} J''(u + \theta_0(v - u), v - u, v - u), \quad 0 < \theta_0 < 1.
\]
Keeping in mind that $J''(u + \theta_0(v - u), v - u, v - u) \ge 0$ for $0 < \theta_0 < 1$,
we have
\[
J(v) \ge J(u) + J'(u, v - u).
\tag{13.30}
\]
Application of Theorem 13.2.17 asserts the convexity of $J$. Similarly,
condition (5) implies that $J$ is strictly convex by Theorem 13.2.17. Then,
by conditions (2) and (3), we conclude from Theorem 13.2.16 that $J$ is
weakly lower semicontinuous.
We next show that $J(v) \to +\infty$ as $\|v\| \to +\infty$.
For this, let $z \in U$ be arbitrarily fixed. Then, because of conditions (2)
and (3), we can apply Taylor's formula (13.25) to get, for $v \in E$,
\[
J(v) = J(z) + G(z)(v - z) + \frac{1}{2} J''(z + \theta_0(v - z), v - z, v - z)
\tag{13.31}
\]
for some $\theta_0 \in \,]0, 1[$. Now,
\[
|G(z)(v - z)| \le \|G(z)\| \, \|v - z\|,
\tag{13.32}
\]
and condition (3) yields
\[
J''(z + \theta_0(v - z), v - z, v - z) \ge \|v - z\| \, \mathcal{E}(\|v - z\|).
\tag{13.33}
\]
Using (13.32) and (13.33), (13.31) reduces to
\[
J(v) \ge J(z) + \|v - z\| \left[\frac{1}{2}\mathcal{E}(\|v - z\|) - \|G(z)\|\right].
\]
Here, since $z \in U$ is fixed, as $\|v\| \to +\infty$ we have $\|v - z\| \to +\infty$,
$J(z)$ and $\|G(z)\|$ are constants, and $\mathcal{E}(\|v - z\|) \to +\infty$ by condition (3).
Thus, $J(v) \to +\infty$ as $\|v\| \to \infty$. The theorem thus follows by virtue
of Theorem 13.4.4.
13.4.6  Theorem

Suppose $U$ is a convex subset of a Banach space $E$ and $J : U \subset E \to \mathbb{R}$
is a G-differentiable (in all directions) convex functional.
Then $u \in U$ is a minimum for $J$ (i.e., $J(u) \le J(v)$ for all $v \in U$) if,
and only if, $u \in U$ and $J'(u, v - u) \ge 0$ for all $v \in U$.
Proof: Let $u \in U$ be a minimum for $J$. Then, since $U$ is convex,
$u, v \in U \implies u + \lambda_n(v - u) \in U$, where $\lambda_n \to 0^+$, for each $n$. Hence
\[
J(u) \le J(u + \lambda_n(v - u)).
\]
Therefore,
\[
\lim_{\lambda_n \to 0^+} \frac{J(u + \lambda_n(v - u)) - J(u)}{\lambda_n} \ge 0,
\]
i.e., $J'(u, v - u) \ge 0$ for any $v \in U$.
Conversely, since $J$ is convex and G-differentiable, by condition (i) of
Theorem 13.2.14, we have
\[
J(v) \ge J(u) + J'(u, v - u) \quad \text{for any } v \in U.
\]
Now, using the assumption that $J'(u, v - u) \ge 0$, we get
\[
J(v) \ge J(u) \quad \text{for all } v \in U.
\]
In what follows we refer to the problem posed in (13.1).
13.4.7  Problem (PI) and problem (PII) and their equivalence

Let $K$ be a closed convex set of a normed linear space $E$.
Problem (PI): To find $u \in K$ such that
\[
J(u) \le J(v) \quad \text{for all } v \in K.
\]
Here
\[
J(v) = \frac{1}{2} a(v, v) - L(v)
\]
implies
(i) $J'(v, \varphi) = a(v, \varphi) - L(\varphi)$, and
(ii) $J''(v; \varphi, \psi) = a(\varphi, \psi)$.
The coercivity of $a(\cdot, \cdot)$ implies that
\[
J''(v; \varphi, \varphi) = a(\varphi, \varphi) \ge \alpha\|\varphi\|^2.
\]
If we choose $\mathcal{E}(t) = \alpha t$, then all the assumptions of Theorem 13.4.5 are
fulfilled by $E$, $J$ and $K$, so that problem (PI) has a unique solution.
Also, by Theorem 13.4.6, the problem (PI) is equivalent to
(PII): To find
\[
u \in K; \quad a(u, v - u) \ge L(v - u) \quad \text{for all } v \in K.
\tag{13.34}
\]
We thus obtain the following theorem:
13.4.8  Theorem

(1) There exists a unique solution $u \in K$ of the problem (PI).
(2) Problem (PI) is equivalent to problem (PII). The problem (PII)
is called a variational inequality associated with the closed convex set $K$
and the bilinear form $a(\cdot, \cdot)$.
Theorem 13.4.8 was generalized by Stampacchia (Céa [11]) to the
nonsymmetric case. This generalizes and uses the classical Lax--Milgram
theorem [see Reddy [45]]. We state without proof the theorem due to
Stampacchia.
13.4.9  Theorem (Stampacchia)

Let $K$ be a closed convex subset of a Hilbert space $H$ and $a(\cdot, \cdot)$ be a
bilinear bicontinuous coercive form (sec 11.7.2) on $H$. Then, for any given
$L \in H^*$, the variational inequality (13.34) has a unique solution $u \in K$.
For a proof, see Céa [11].
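A concrete finite-dimensional illustration of (13.34): the choice $H = \mathbb{R}^5$, $a(u, v) = \langle u, v\rangle$, $L(v) = \langle f, v\rangle$ and the cone $K$ below are ours, not the text's. In this case the variational inequality reads $\langle u - f, v - u\rangle \ge 0$ for all $v \in K$, whose unique solution is the projection of $f$ onto $K$:

```python
import numpy as np

rng = np.random.default_rng(2)

# K = closed convex cone {v : v_i >= 0} in R^5; the solution of
#   <u, v - u> >= <f, v - u>  for all v in K
# is the projection of f onto K, i.e. u_i = max(f_i, 0).
f = rng.normal(size=5)
u = np.maximum(f, 0.0)

assert np.all(u >= 0)                      # u lies in K
for _ in range(1000):
    v = np.abs(rng.normal(size=5))         # arbitrary element of K
    assert (u - f) @ (v - u) >= -1e-12     # variational inequality (13.34)
print("u = max(f, 0) solves the variational inequality on the cone K")
```

Componentwise: where $f_i \ge 0$ the term vanishes, and where $f_i < 0$ it equals $(-f_i)v_i \ge 0$, so the inequality holds for every $v \in K$.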
13.5  Distributions

13.5.1  Definition: support

The support of a function $f(x)$, $x \in \mathbb{R}^n$, is defined as the closure
of the set of points in $\mathbb{R}^n$ at which $f$ is nonzero.

13.5.2  Definition: smooth function

A function $\varphi : \mathbb{R}^n \to \mathbb{R}$ is said to be smooth or infinitely differentiable
if its derivatives of all orders exist and are continuous.

13.5.3  Definition: $C_0^\infty(\Omega)$

The set of all smooth functions with compact support in $\Omega \subset \mathbb{R}^n$ is
denoted by $C_0^\infty(\Omega)$.

13.5.4  Definition: test function

A test function is a smooth function with compact support,
$\varphi \in C_0^\infty(\Omega)$.

13.5.5  Definition: generalized derivative

A function $u$ is said to have the $\alpha$th generalized derivative
$D^\alpha u$, $1 \le |\alpha| \le m$, if the following relation (generalized Green's
formula) holds:
\[
\int_\Omega (D^\alpha u)\varphi \, dx = (-1)^{|\alpha|} \int_\Omega u \, D^\alpha\varphi \, dx \quad \text{for every } \varphi \in C_0^\infty(\Omega).
\tag{13.35}
\]
For $u \in C^m(\Omega)$, the generalized derivatives coincide with the derivatives in the
ordinary (classical) sense.
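A standard illustration of (13.35): $u(x) = |x|$ is not classically differentiable at $0$, yet its generalized derivative is $\operatorname{sign}(x)$. The test function below merely vanishes at the endpoints of $[-1, 1]$ (enough for the boundary terms to drop numerically) and is our illustrative choice:

```python
import numpy as np

# u(x) = |x| has generalized derivative Du(x) = sign(x):
#   \int u phi' dx = - \int sign(x) phi dx.
def phi(x):      return (1 - x**2)**2 * (x + 2)
def phi_dash(x): return -4*x*(1 - x**2)*(x + 2) + (1 - x**2)**2

n = 200000
x = -1 + (np.arange(n) + 0.5) * (2.0 / n)      # midpoint rule on [-1, 1]
w = 2.0 / n
lhs = np.sum(np.abs(x) * phi_dash(x)) * w      #  \int |x| phi'(x) dx
rhs = -np.sum(np.sign(x) * phi(x)) * w         # -\int sign(x) phi(x) dx
assert abs(lhs - rhs) < 1e-6
print("weak derivative of |x| is sign(x):", lhs, "=", rhs)
```

Both quadratures agree to high accuracy, matching (13.35) with $|\alpha| = 1$.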
13.5.6  Definition: distribution

A set of test functions $\{\varphi_n\}$ is said to converge to a test function $\varphi_0$
in $C_0^\infty(\Omega)$ if there is a bounded set $\Omega_0 \subset \Omega$ containing the supports of
$\varphi_0, \varphi_1, \varphi_2, \ldots$ and if $\varphi_n$ and all its generalized derivatives converge to $\varphi_0$
and its derivatives respectively. A functional $f$ on $C_0^\infty(\Omega)$ is continuous if
it maps every convergent sequence in $C_0^\infty(\Omega)$ into a convergent sequence in
$\mathbb{R}$, i.e., if $f(\varphi_n) \to f(\varphi_0)$ whenever $\varphi_n \to \varphi_0$ in $C_0^\infty(\Omega)$.
A continuous linear functional on $C_0^\infty(\Omega)$ is called a distribution or
generalized function.
Example: An example of a distribution is provided by the delta
distribution $\delta$, defined by
\[
\int \delta(x)\varphi(x) \, dx = \varphi(0) \quad \text{for all } \varphi \in C_0^\infty(\Omega).
\tag{13.36}
\]
Addition and scalar multiplication of distributions:
If $f$ and $g$ are distributions and $\alpha, \beta$ are scalars, then $\alpha f + \beta g$ is the
distribution defined by $(\alpha f + \beta g)(\varphi) = \alpha f(\varphi) + \beta g(\varphi)$.
13.6  Sobolev Space

$C^m(\Omega)$ is an inner product space with respect to the inner product
\[
\langle u, v\rangle_m = \sum_{|\alpha| \le m} \int_\Omega D^\alpha u \, D^\alpha v \, d\Omega,
\tag{13.37}
\]
where $u$ and $v$, along with their derivatives up to order $m$, are square integrable
in the Lebesgue sense [see chapter 10], i.e.,
\[
\int_\Omega |D^\alpha u|^2 \, d\Omega < \infty \quad \text{for all } |\alpha| \le m.
\]
But $C^m(\Omega)$ is not complete with respect to the norm generated by this
inner product. The space $C^m(\Omega)$ can be completed by adding the limit
points of all Cauchy sequences in $C^m(\Omega)$. It turns out that those limit
points are distributions.
We can thus introduce the Sobolev space $H^1(\Omega)$ as follows:
\[
H^1(\Omega) = \left\{v : v \in L^2(\Omega), \ \frac{\partial v}{\partial x_j} \in L^2(\Omega), \ j = 1, 2, \ldots, n\right\},
\tag{13.38}
\]
where the $\frac{\partial v}{\partial x_j}$ are taken in the sense of distributions, i.e.,
$D_j v = \frac{\partial v}{\partial x_j}$ satisfies
\[
\int_\Omega (D_j v)\varphi \, dx = -\int_\Omega v \, D_j\varphi \, dx \quad \text{for all } \varphi \in \mathcal{D}(\Omega),
\tag{13.39}
\]
where $\mathcal{D}(\Omega)$ denotes the space of all $C^\infty$ functions with compact support
on $\Omega$. $H^1(\Omega)$ is provided with the inner product
\[
\langle u, v\rangle = \langle u, v\rangle_{L^2(\Omega)} + \sum_{j=1}^{n} \langle D_j u, D_j v\rangle_{L^2(\Omega)}
\tag{13.40}
\]
\[
= \int_\Omega \left(uv + \sum_{j=1}^{n} (D_j u)(D_j v)\right) dx,
\tag{13.41}
\]
for which it becomes a Hilbert space.
13.6.1  Remark

$\mathcal{D}(\Omega) \subset C^1(\bar{\Omega}) \subset H^1(\Omega)$.
We introduce the space
\[
H_0^1(\Omega) = \text{the closure of } \mathcal{D}(\Omega) \text{ in } H^1(\Omega).
\tag{13.42}
\]
We state without proof some well-known theorems.

13.6.2  Theorem of density

If $\Gamma$, the boundary of $\Omega$, is regular (for instance, $\Gamma$ is a $C^1$-manifold
(respectively a $C^\infty$-manifold) of dimension $n - 1$), then $C^1(\bar{\Omega})$
(respectively $C^\infty(\bar{\Omega})$) is dense in $H^1(\Omega)$.

13.6.3  Theorem of trace

If $\Gamma$ is regular, then the linear mapping $v \mapsto v|_\Gamma$ of $C^1(\bar{\Omega}) \to C^1(\Gamma)$
(respectively of $C^\infty(\bar{\Omega}) \to C^\infty(\Gamma)$) extends to a continuous linear map of
$H^1(\Omega)$ into $L^2(\Gamma)$, denoted by $\gamma$, and for any $v \in H^1(\Omega)$, $\gamma v$ is called
the trace of $v$ on $\Gamma$.
Moreover, $H_0^1(\Omega) = \{v : v \in H^1(\Omega), \ \gamma v = 0\}$.
13.6.4  Green's formula for Sobolev spaces

Let $\Omega$ be a bounded open set with sufficiently regular boundary $\Gamma$; then
there exists a unique outer normal vector $n(x)$ at each $x \in \Gamma$. We define
the operator of exterior normal derivation formally by
\[
\frac{\partial}{\partial n} = \sum_{j=1}^{n} n_j(x) D_j.
\tag{13.43}
\]
Now, if $u, v \in C^1(\bar{\Omega})$, then, by the classical Green's formula [see Mikhlin
[36]], we have
\[
\int_\Omega (D_j u)v \, dx = -\int_\Omega u(D_j v) \, dx + \int_\Gamma uv \, n_j \, d\Gamma,
\]
where $d\Gamma$ is the area element on $\Gamma$.
This formula remains valid also if $u, v \in H^1(\Omega)$, in view of the trace
theorem and the density theorem.
Next, if $u, v \in C^2(\bar{\Omega})$, then applying the above formula to $D_j u$ and $v$,
and summing over $j = 1, 2, \ldots, n$, we get
\[
\sum_{j=1}^{n} \langle D_j u, D_j v\rangle_{L^2(\Omega)} = -\int_\Omega (\Delta u)v \, dx + \int_\Gamma \frac{\partial u}{\partial n}\, v \, d\Gamma,
\tag{13.44}
\]
i.e.,
\[
\sum_{j=1}^{n} \langle D_j u, D_j v\rangle = -\int_\Omega \Big(\sum_{j=1}^{n} (D_j)^2 u\Big)v \, dx + \int_\Gamma \frac{\partial u}{\partial n}\, v \, d\Gamma.
\tag{13.45}
\]
13.6.5  Remark

(i) $u \in H^2(\Omega) \implies \Delta u \in L^2(\Omega)$.
(ii) Since $D_j u \in H^1(\Omega)$, by the trace theorem (13.6.3), $\gamma(D_j u)$ exists and
belongs to $L^2(\Gamma)$, so that
\[
\frac{\partial u}{\partial n} = \sum_{j=1}^{n} n_j \, \gamma(D_j u) \in L^2(\Gamma).
\]
Hence, by using the density and trace theorems (13.6.2 and 13.6.3),
the formula (13.44) remains valid for $u \in H^2(\Omega)$ and $v \in H^1(\Omega)$.
13.6.6  Weak (or variational) formulation of BVPs

Example 1. Let $\Gamma = \Gamma_1 \cup \Gamma_2$, where the $\Gamma_j$ are open subsets of $\Gamma$ such that
$\Gamma_1 \cap \Gamma_2 = \emptyset$.
Consider the space
\[
E = \{v : v \in H^1(\Omega); \ \gamma v = 0 \text{ on } \Gamma_1\}.
\tag{13.46}
\]
$E$ is clearly a closed subspace of $H^1(\Omega)$ and is provided with the inner
product induced from that in $H^1(\Omega)$; hence it is a Hilbert space.
Moreover,
\[
H_0^1(\Omega) \subset E \subset H^1(\Omega),
\tag{13.47}
\]
and the inclusions are continuous and linear. If $f \in L^2(\Omega)$, we consider the
functional
\[
J(v) = \frac{1}{2}\langle v, v\rangle - \langle f, v\rangle_{L^2(\Omega)},
\tag{13.48}
\]
i.e., $a(u, v) = \langle u, v\rangle$ and $L(v) = \langle f, v\rangle$.
Then $a(\cdot, \cdot)$ is bilinear, bicontinuous and coercive:
\[
|a(u, v)| \le \|u\|_E \|v\|_E = \|u\|_{H^1(\Omega)} \|v\|_{H^1(\Omega)} \quad \text{for } u, v \in E,
\]
\[
a(v, v) = \|v\|^2_{H^1(\Omega)} \quad \text{for } v \in E,
\]
\[
|L(v)| \le \|f\|_{L^2(\Omega)} \|v\|_{L^2(\Omega)} \le \|f\|_{L^2(\Omega)} \|v\|_{H^1(\Omega)} \quad \text{for } v \in E.
\]
Then the problems (PI) and (PII) respectively become:
\[
\text{(PIII)} \quad \text{To find } u \in E: \ J(u) \le J(v) \ \text{for all } v \in E;
\tag{13.49}
\]
\[
\text{(PIV)} \quad \text{To find } u \in E: \ \langle u, \varphi\rangle = \langle f, \varphi\rangle_{L^2(\Omega)} \ \text{for all } \varphi \in E.
\tag{13.50}
\]
Theorem 13.4.8 asserts that these two equivalent problems have unique
solutions.
The problem (PIV) is the weak (or variational) formulation of the
(i) Dirichlet problem (if $\Gamma_2 = \emptyset$),
(ii) Neumann problem (if $\Gamma_1 = \emptyset$),
(iii) mixed boundary problem in the general case.
13.6.7  Equivalence of problem (PIV) to the corresponding
classical problems

Suppose $u \in C^2(\bar{\Omega}) \cap E$ and $v \in C^1(\bar{\Omega}) \cap E$.
Using Green's formula (13.45),
\[
a(u, v) = \langle u, v\rangle = \int_\Omega (-\Delta u + u)v \, dx + \int_\Gamma \frac{\partial u}{\partial n}\, v \, d\Gamma = \int_\Omega f v \, dx,
\]
i.e.,
\[
\int_\Omega (-\Delta u + u - f)v \, dx + \int_\Gamma \frac{\partial u}{\partial n}\, v \, d\Gamma = 0.
\tag{13.51}
\]
We note that this formula remains valid if $u \in H^2(\Omega) \cap E$, for any $v \in E$.
We first choose $v \in \mathcal{D}(\Omega) \subset E$; then the boundary integral vanishes, so
that we get
\[
\int_\Omega (-\Delta u + u - f)v \, dx = 0 \quad \text{for all } v \in \mathcal{D}(\Omega).
\]
Since $\mathcal{D}(\Omega)$ is dense in $L^2(\Omega)$, this implies that (if $u \in H^2(\Omega)$) $u$ is a
solution of the differential equation
\[
-\Delta u + u - f = 0 \ \text{in } \Omega \quad (\text{in the sense of } L^2(\Omega)).
\]
More generally, without the strong regularity assumption as above, $u$ is
a solution of the differential equation
\[
-\Delta u + u - f = 0 \quad \text{in the sense of distributions in } \Omega.
\tag{13.52}
\]
Next we choose $v \in E$ arbitrary. Since $u$ satisfies (13.52) in $\Omega$, we find
from (13.51) that
\[
\int_\Gamma \frac{\partial u}{\partial n}\, v \, d\Gamma = 0 \quad \text{for all } v \in E,
\]
which means that $\frac{\partial u}{\partial n} = 0$ on $\Gamma_2$ in some generalized sense. In fact, by the
trace theorem, $\gamma v \in H^{1/2}(\Gamma)$ and hence $\frac{\partial u}{\partial n} = 0$ in $H^{1/2}(\Gamma_2)$ [see Lions
and Magenes [34]]. Thus, if the problem (PIV) has a regular solution, then
it is the solution of the classical problem
\[
-\Delta u + u - f = 0 \ \text{in } \Omega, \qquad
u = 0 \ \text{on } \Gamma_1, \qquad
\frac{\partial u}{\partial n} = 0 \ \text{on } \Gamma_2.
\tag{13.53}
\]
13.6.8  Remark

The variational formulation (PIV) is very much used in the Finite
Element Method.
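A one-dimensional sketch of this use: piecewise-linear (P1) finite elements for the weak problem of finding $u \in H_0^1(0,1)$ with $\int_0^1 (u'v' + uv)\,dx = \int_0^1 fv\,dx$ for all $v$, i.e. $-u'' + u = f$, $u(0) = u(1) = 0$. The manufactured right-hand side below (so that the exact solution is $\sin \pi x$) is our own illustrative choice:

```python
import numpy as np

# With f = (pi^2 + 1) sin(pi x), the exact solution is u(x) = sin(pi x).
n = 200                        # number of elements
h = 1.0 / n
x = np.linspace(0.0, 1.0, n + 1)

# tridiagonal entries of \int (u'v' + uv) dx for the hat-function basis
main = 2.0/h + 4.0*h/6.0
off  = -1.0/h + h/6.0
A = (np.diag(np.full(n - 1, main))
     + np.diag(np.full(n - 2, off), 1)
     + np.diag(np.full(n - 2, off), -1))

f = (np.pi**2 + 1) * np.sin(np.pi * x)
# load vector: mass matrix applied to nodal values of f (O(h^2) accurate)
b = h/6.0 * (f[:-2] + 4.0*f[1:-1] + f[2:])

u = np.zeros(n + 1)
u[1:-1] = np.linalg.solve(A, b)   # enforce u(0) = u(1) = 0

err = np.max(np.abs(u - np.sin(np.pi * x)))
assert err < 1e-3
print("max FEM error:", err)
```

The Galerkin system is exactly the discrete counterpart of (PIV): $a(u_h, \varphi_i) = \langle f, \varphi_i\rangle$ for every basis hat function $\varphi_i$.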
CHAPTER 14

THE WAVELET ANALYSIS
14.1  An Introduction to Wavelet Analysis

The concept of the wavelet was first introduced around 1980. It came out
as a synthesis of ideas borrowed from disciplines including mathematics
(Calderón--Zygmund operators and Littlewood--Paley theory), physics
(the coherent-states formalism in quantum mechanics and the
renormalization group) and engineering (quadrature mirror filters, subband
coding in signal processing and pyramidal algorithms in image processing)
(Debnath [17]).
Wavelet analysis provides a systematic new way to represent and analyze
multiscale structures. The special feature of wavelet analysis is to
generalize and expand the representations of functions by orthogonal
bases to infinite domains. For this purpose, compactly supported
[see 13.5] basis functions are used, and a linear combination of them
represents the function. These are the kinds of functions that are realized
by physical devices.
There are many areas in which wavelets play an important role, for
example:
(i) efficient algorithms for representing functions in terms of a wavelet
basis;
(ii) compression algorithms based on the wavelet expansion representation
that concentrate most of the energy of a signal in a few coefficients
(Resnikoff and Wells [46]).
14.2  The Scalable Structure of Information

14.2.1  Good approximations

Every measurement, be it with the naked eye or by a sophisticated
instrument, is at best very accurate, in other words approximate. Even
a computer can record only a finite number of decimal places, that is, a
rational number. Heisenberg's uncertainty principle corroborates this
type of limitation.
The onus on the technologist is thus to make a measurement or a
representation as accurate as possible. In the case of speech transmission,
codes are used to transmit, and at the destination the codes are decoded.
Any transmitted signal is sampled at a number of uniformly spaced times.
The sample measurements can be used to construct a Fourier series
expansion of the signal, which will also interpolate values for unmeasured
instants. But the drawback of the Fourier series is that it cannot take
care of local phenomena, for example, abrupt transitions. To overcome
this difficulty, compactly supported [see 13.5] wavelets are used. A simple
example is to consider a time series that describes a quantity that is zero
for a long time, ramps up linearly to a maximum value and falls instantly
to zero, where it remains thereafter (fig. 14.1).

Fig. 14.1  Continuous ramp transient

Here, wavelet series approximate abrupt transitions much more
accurately than Fourier series [see Resnikoff and Wells [46]]. Wavelet
series expansion is less expensive too.
14.2.2  Special features of wavelet series

That wavelet analysis provides good approximations for transient or
localized phenomena is due to the following:
(a) compact support,
(b) orthogonality of the basis functions,
(c) multiresolution representation.

14.2.3  Compact support

Each term in a wavelet series has compact support [13.5]. As a
result, however short an interval is, there is a basis function whose support
is contained within that interval. Hence, a compactly supported wavelet
basis function can capture local phenomena and is not affected by properties
of the data far away from the area of interest.

14.2.4  Orthogonality

The terms in a wavelet series are orthogonal to one another, just like
the terms in a Fourier series. This means that the information carried by
one term is independent of the information carried by any other.

14.2.5  Multiresolution representation

Multiresolution representation describes what is called a hierarchical
structure. Hierarchical structures classify information into several
categories called levels or scales, so that the higher a level is in the
hierarchy, the fewer members it has. This hierarchy is prevalent
in the social and political organization of a country. Biological sensory
systems, such as vision, also have this hierarchy built in. The human
vision system provides wide-aperture detection (so that events can be
detected early) and high-resolution detection (so that the detailed structure
of the visual event can be seen). Thus, a multiresolution or scalable
mathematical representation provides a simpler or more efficient
representation than the usual mathematical representation.
14.2.6  Functions and their representations

Representation of continuous functions
Suppose we consider a function of a real variable, namely,
\[
f(x) = \cos x, \quad x \in \mathbb{R},
\]
where $\mathbb{R}$ denotes the continuum of real numbers. For each $x \in \mathbb{R}$, we
have a definite value for $\cos x$. Since $\cos x$ is periodic, all of its values are
determined by its values on $[0, 2\pi[$. The question arises as to how best we
can represent the value of the function $\cos x$ at any point $x \in [0, 2\pi[$. There
is an uncountable number of points in $[0, 2\pi[$ and an uncountable number
of values of $\cos x$, as $x$ varies. If we represent
\[
\cos x = \sum_{n=0}^{\infty} (-1)^n \frac{x^{2n}}{(2n)!}
\tag{14.1}
\]
for any $x \in \mathbb{R}$, we see that the sequence of numbers
\[
1, 0, -\frac{1}{2!}, 0, \frac{1}{4!}, 0, -\frac{1}{6!}, 0, \ldots,
\tag{14.2}
\]
which is countable, along with the sequence of power functions
\[
x^0 = 1, \ x, \ x^2, \ x^3, \ \ldots, \ x^n, \ \ldots
\]
is sufficient information to determine $\cos x$ at any point $x$. Thus we
represent the uncountable number of values of $\cos x$ in terms of the
countable discrete sequence (14.2), whose entries are the coefficients of a
power series representation for $\cos x$. This is the basic technique in
representing a function or a class of functions in terms of more elementary
or more easily computed functions.
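The claim that the countable sequence (14.2) determines $\cos x$ everywhere can be checked directly by summing partial sums of (14.1); the truncation point $N = 30$ is an illustrative choice:

```python
import math

# Partial sums of (14.1): cos x = sum_{n >= 0} (-1)^n x^{2n} / (2n)!
def cos_partial(x, N):
    return sum((-1)**n * x**(2*n) / math.factorial(2*n) for n in range(N + 1))

for x in (0.0, 1.0, 2.5, 6.0):
    assert abs(cos_partial(x, 30) - math.cos(x)) < 1e-12
print("countably many coefficients recover cos x")
```

The series converges for every real $x$, so a short prefix of the coefficient sequence already reproduces $\cos x$ to machine precision on the interval tested.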
14.2.7  Fourier series and the Fourier transform

The concept of a Fourier series was introduced in 3.7.8.

14.2.8  Definition: discrete Fourier transform

We note that if $f$ is an integrable periodic function of period 1, then
the Fourier series of $f$ is given by
\[
f(x) = \sum_n c_n e^{2\pi i n x}
\tag{14.3}
\]
with Fourier coefficients $\{c_n\}$ [see 3.7.8] given by
\[
c_n = \int_0^1 f(x) e^{-2\pi i n x} \, dx.
\tag{14.4}
\]
Suppose that $\{c_n\}$ is a given discrete sequence of complex numbers in
$l^2(\mathbb{Z})$, that is, $\sum |c_n|^2 < \infty$; then we define the Fourier transform of the
sequence $f = \{c_n\}$ to be the Fourier series [see (14.3)]
\[
\hat{f}(\xi) = \sum_n c_n e^{2\pi i n \xi},
\]
which is a periodic function.
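Formula (14.4) can be evaluated numerically. The trigonometric polynomial below and its known coefficients are illustrative choices of ours; a Riemann sum recovers them by discrete orthogonality:

```python
import numpy as np

# f(x) = 3 + 2 cos(2 pi x) + 4 sin(4 pi x), with exact coefficients
# c_0 = 3, c_{+-1} = 1, c_2 = -2i, c_{-2} = 2i, all others 0.
N = 1024
x = np.arange(N) / N
f = 3 + 2*np.cos(2*np.pi*x) + 4*np.sin(4*np.pi*x)

def c(n):
    # Riemann sum for c_n = \int_0^1 f(x) e^{-2 pi i n x} dx  (eq. 14.4)
    return np.mean(f * np.exp(-2j*np.pi*n*x))

assert abs(c(0) - 3) < 1e-10
assert abs(c(1) - 1) < 1e-10 and abs(c(-1) - 1) < 1e-10
assert abs(c(2) + 2j) < 1e-10 and abs(c(-2) - 2j) < 1e-10
assert abs(c(3)) < 1e-10
print("Riemann sums recover the Fourier coefficients")
```

For a band-limited $f$ the sum with $N$ samples is exact up to rounding, which is why the agreement is to near machine precision.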
14.2.9  Inverse Fourier transform of a discrete function

The inverse of this Fourier transform is the mapping from a periodic
function (14.3) to its Fourier coefficients (14.4).

14.2.10  Continuous Fourier transform

If $f \in L^2(\mathbb{R})$, then the Fourier transform of $f$ is given by
\[
\hat{f}(\xi) = \int f(x) e^{-2\pi i \xi x} \, dx,
\tag{14.5}
\]
with the inverse Fourier transform given by
\[
f(x) = \int \hat{f}(\xi) e^{2\pi i \xi x} \, d\xi,
\tag{14.6}
\]
where both formulas have to be taken in a suitable limiting sense, but
for nicely behaved functions that decrease sufficiently rapidly, the formulas
hold (Resnikoff and Wells [46]).
14.3  Algebra and Geometry of Wavelet Matrices

A wavelet matrix is a generalization of unitary matrices [see 9.4.1] to a
larger class of rectangular matrices. Each wavelet matrix contains the basic
information to define an associated wavelet system. Let $\mathbb{F}$ be a subfield of
the field $\mathbb{C}$ of complex numbers. $\mathbb{F}$ could be the rational numbers $\mathbb{Q}$, the
real numbers $\mathbb{R}$ or the field $\mathbb{C}$ itself.
Consider an array $A = (a_r^s)$, consisting of $m$ rows of possibly infinite
vectors, of the form
\[
A = \begin{pmatrix}
\cdots & a_{-1}^0 & a_0^0 & a_1^0 & a_2^0 & \cdots \\
\cdots & a_{-1}^1 & a_0^1 & a_1^1 & a_2^1 & \cdots \\
 & \vdots & \vdots & \vdots & \vdots & \\
\cdots & a_{-1}^{m-1} & a_0^{m-1} & a_1^{m-1} & a_2^{m-1} & \cdots
\end{pmatrix}
\tag{14.7}
\]
In the above, $a_r^s$ is an element of $\mathbb{F} \subset \mathbb{C}$ and $m \ge 2$. We call such an
array $A$ a matrix even though the number of columns may not be finite.
Define submatrices $A_p$ of $A$ of size $m \times m$ in the following manner:
\[
A_p = (a_{pm+q}^s), \quad q = 0, 1, \ldots, m - 1, \ s = 0, 1, \ldots, m - 1,
\tag{14.8}
\]
for $p$ an integer. Thus, $A$ can be expressed in terms of submatrices in the
form
\[
A = (\ldots, A_{-1}, A_0, A_1, \ldots),
\tag{14.9}
\]
where
\[
A_p = \begin{pmatrix}
a_{pm}^0 & a_{pm+1}^0 & \cdots & a_{pm+m-1}^0 \\
\vdots & \vdots & & \vdots \\
a_{pm}^{m-1} & a_{pm+1}^{m-1} & \cdots & a_{pm+m-1}^{m-1}
\end{pmatrix}.
\]
From the matrix $A$, a power series of the following form is constructed:
\[
A(z) = \sum_{p=-\infty}^{\infty} A_p z^p.
\tag{14.10}
\]
We call the above series the Laurent series of the matrix $A$.
Thus, $A(z)$ is a Laurent series with matrix coefficients. We can write
$A(z)$ as an $m \times m$ matrix with Laurent series entries:
\[
A(z) = \begin{pmatrix}
\sum_j a_{mj}^0 z^j & \sum_j a_{mj+1}^0 z^j & \cdots & \sum_j a_{mj+m-1}^0 z^j \\
\vdots & \vdots & & \vdots \\
\sum_j a_{mj}^{m-1} z^j & \sum_j a_{mj+1}^{m-1} z^j & \cdots & \sum_j a_{mj+m-1}^{m-1} z^j
\end{pmatrix}
\tag{14.11}
\]
(14.10) and (14.11) will both be referred to as the Laurent series
representation $A(z)$ of the matrix $A$.
14.3.1  Definition: genus of the Laurent series $A(z)$

Suppose $A(z)$ has a finite number of nonzero matrix coefficients, i.e.,
\[
A(z) = \sum_{p=n_1}^{n_2} A_p z^p,
\tag{14.12}
\]
where we assume that $A_{n_1}$ and $A_{n_2}$ are both nonzero matrices. Then
\[
g = n_2 - n_1 + 1,
\tag{14.13}
\]
i.e., the number of terms in the series (14.12), is called the genus of the
Laurent series $A(z)$ and of the matrix $A$.
14.3.2  Definition: adjoint $\tilde{A}(z)$ of the Laurent series $A(z)$

Let
\[
\tilde{A}(z) = A^*(z^{-1}) = \sum_p A_p^* z^{-p}.
\tag{14.14}
\]
$\tilde{A}(z)$ is called the adjoint of the Laurent matrix $A(z)$.
In the above, $A_p^* = \bar{A}_p^T$ is the Hermitian conjugate of the $m \times m$
matrix $A_p$.
14.3.3  Definition: the wavelet matrix

The matrix $A$, as defined in (14.7), is said to be a wavelet matrix of
rank $m$ if
\[
(1) \quad A(z)\tilde{A}(z) = mI,
\tag{14.15}
\]
\[
(2) \quad \sum_{j=-\infty}^{\infty} a_j^s = m \, \delta_{s,0}, \quad 0 \le s \le m - 1,
\tag{14.16}
\]
where $\delta_{s,0} = 1$ for $s = 0$ and is zero otherwise.
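Conditions (14.15)–(14.16) can be checked mechanically. The helper below (our own sketch; the function name and block representation are not from the text) takes the finitely many nonzero submatrices $A_p$ of (14.9) and compares the Laurent coefficients of $A(z)\tilde{A}(z)$ with $mI$:

```python
import numpy as np

def is_wavelet_matrix(blocks, m):
    """blocks = [A_0, ..., A_{g-1}]: the m x m submatrices of A (eq. 14.9)."""
    g = len(blocks)
    # quadratic condition (14.15): the coefficient of z^k in A(z)Ã(z)
    # is sum_p A_p (A_{p-k})^*, and must equal mI for k = 0, 0 otherwise
    for k in range(-(g - 1), g):
        coeff = sum(blocks[p] @ blocks[p - k].conj().T
                    for p in range(max(0, k), min(g, g + k)))
        target = m*np.eye(m) if k == 0 else np.zeros((m, m))
        if not np.allclose(coeff, target):
            return False
    # linear condition (14.16): row sums are (m, 0, ..., 0)
    row_sums = sum(blocks).sum(axis=1)
    expected = np.zeros(m)
    expected[0] = m
    return np.allclose(row_sums, expected)

haar = [np.array([[1.0, 1.0], [1.0, -1.0]])]   # genus 1, rank 2
assert is_wavelet_matrix(haar, 2)

identity = [np.eye(2)]       # fails (14.15): I I* = I, not 2I
assert not is_wavelet_matrix(identity, 2)
print("Haar rank-2 matrix satisfies (14.15) and (14.16)")
```

For genus 1 the quadratic condition reduces to $A_0 A_0^* = mI$, which is exactly the Haar case treated in 14.3.7 below.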
14.3.4  Lemma: a wavelet matrix with m rows has rank m

Let $A$ be a wavelet matrix with $m$ rows and an infinite number of
columns. Let
\[
A(1) = \begin{pmatrix} a^0 \\ a^1 \\ \vdots \\ a^{m-1} \end{pmatrix},
\]
where $a^i$ stands for the $i$th row of $A(1) = \sum_p A_p$.
If two rows, say the second and the third, were multiples of each other,
we could write $a^2 = \lambda a^1$, where $\lambda$ is a scalar. In that case two rows of
$A(1)$ would be multiples of each other, and the determinant of $A(1)$ would
be zero. This contradicts (14.15), which at $z = 1$ gives $A(1)\tilde{A}(1) = mI$,
so that $A(1)$ is nonsingular.
14.3.5  Definition: wavelet space $WM(m, g; \mathbb{F})$

The wavelet space $WM(m, g; \mathbb{F})$ denotes the set of all wavelet
matrices of rank $m$ and genus $g$ with coefficients in the field $\mathbb{F}$.
Quadratic orthogonality relations for the rows of $A$:
Comparison of coefficients of corresponding powers of $z$ in (14.15) yields
\[
\sum_j a_{j+mp}^s \, \overline{a_{j+mp'}^{s'}} = m \, \delta_{s,s'} \, \delta_{p,p'}.
\tag{14.17}
\]
We will refer to (14.15) and (14.16), or equivalently (14.17) and
(14.16), as the quadratic and linear conditions defining a wavelet matrix,
respectively.
Scaling vector: the vector $a^0$ is called the scaling vector.
Wavelet vector: $a^s$ for $0 < s < m$ is called a wavelet vector.
14.3.6  Remark

1. The quadratic condition (14.17) asserts that the rows of a wavelet
matrix have squared length equal to $m$ and that they are pairwise
orthogonal, as are their shifts by arbitrary multiples of $m$.
2. The linear condition (14.16) implies that the sum of the components
of the scaling vector is equal to the rank of $A$.
3. The sum of the components of each of the wavelet vectors is zero.
14.3.7  Examples

1. Haar matrix of rank 2
Let
\[
A_1 = \begin{pmatrix} 1 & 1 \\ 1 & -1 \end{pmatrix}.
\]
Here $A_1 A_1^T = 2I$.
The sum of the elements of the first row $= 2 =$ rank of $A_1$;
the sum of the elements of the second row $= 0$.
Hence, $A_1$ is a wavelet matrix of rank 2.
Similarly,
\[
A_2 = \begin{pmatrix} 1 & 1 \\ -1 & 1 \end{pmatrix}
\]
satisfies $A_2 A_2^T = 2I$ and fulfils conditions (14.15) and (14.16).
Hence, $A_2$ is a wavelet matrix of rank 2 too.
The general complex Haar wavelet matrix of rank 2 has the form
\[
\begin{pmatrix} 1 & 1 \\ e^{i\theta} & -e^{i\theta} \end{pmatrix}, \quad \theta \in \mathbb{R}.
\]
2. Daubechies wavelet matrix of rank 2 and genus 2
Let
\[
D_2 = \frac{1}{4}\begin{pmatrix}
1 + \sqrt{3} & 3 + \sqrt{3} & 3 - \sqrt{3} & 1 - \sqrt{3} \\
1 - \sqrt{3} & \sqrt{3} - 3 & 3 + \sqrt{3} & -1 - \sqrt{3}
\end{pmatrix}.
\tag{14.18}
\]
Then $D_2 D_2^T = 2I$.
The sum of the elements of the first row $= 2 =$ rank of $D_2$;
the sum of the elements of the wavelet vector $= 0$.
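The quadratic and linear conditions can be verified numerically for $D_2$; the sign pattern of the second row follows the reconstruction of (14.18) above:

```python
import numpy as np

s3 = np.sqrt(3.0)
D2 = np.array([[1 + s3,  3 + s3, 3 - s3,  1 - s3],
               [1 - s3,  s3 - 3, 3 + s3, -1 - s3]]) / 4.0
a0, a1 = D2[0], D2[1]     # scaling vector and wavelet vector

assert np.isclose(a0 @ a0, 2) and np.isclose(a1 @ a1, 2)  # squared length m = 2
assert np.isclose(a0 @ a1, 0)                             # rows orthogonal
# orthogonality under shifts by the rank m = 2 (eq. 14.17):
assert np.isclose(a0[:2] @ a0[2:], 0) and np.isclose(a1[:2] @ a1[2:], 0)
assert np.isclose(a0[:2] @ a1[2:], 0) and np.isclose(a1[:2] @ a0[2:], 0)
# linear condition (14.16): scaling row sums to m, wavelet row to 0
assert np.isclose(a0.sum(), 2) and np.isclose(a1.sum(), 0)
print("D2 satisfies the quadratic and linear wavelet-matrix conditions")
```

The shifted inner products are the genus-2 instances of (14.17) with $p \ne p'$; for genus 2 these four products are the only nontrivial shifts.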
14.3.8  Definition: Haar wavelet matrices

The Haar wavelet matrices of rank $m$ are denoted by $H(m; \mathbb{F})$ and are
defined by
\[
H(m; \mathbb{F}) = WM(m, 1; \mathbb{F}).
\tag{14.19}
\]
Thus, a Haar wavelet matrix is a wavelet matrix of genus 1.
14.3.9  The canonical Haar matrix

In what follows, we provide a characterization of the Haar wavelet
matrix.

14.3.10  Theorem

An $m \times m$ complex matrix $H$ is a Haar wavelet matrix if and only if
\[
H = \begin{pmatrix} 1 & 0 \\ 0 & U \end{pmatrix} H_0,
\tag{14.20}
\]
where $U \in \mathrm{U}(m - 1)$ is a unitary matrix and $H_0$ is the canonical Haar
matrix of rank $m$: the row numbered $j = 0$ of $H_0$ consists of all ones, and
the row numbered $j$, for $j = 1, \ldots, m - 1$, with $s = m - j$, is
\[
\Big(\underbrace{0, \ldots, 0}_{j - 1}, \ -s\sqrt{\tfrac{m}{s^2 + s}}, \ \underbrace{\sqrt{\tfrac{m}{s^2 + s}}, \ldots, \sqrt{\tfrac{m}{s^2 + s}}}_{s}\Big),
\tag{14.21}
\]
where $s = m - j$ and $j = 0, 1, \ldots, m - 1$ are the row numbers of the
matrix.
In the above, $\mathrm{U}(m - 1)$ is the group of $(m - 1) \times (m - 1)$ complex
matrices $U$, such that $U^*U = I$.
Before we prove the theorem, we prove the following lemma.
14.3.11 Lemma
If H = (h_sr) is a Haar wavelet matrix, then

h_r := h_{0r} = 1 for 0 ≤ r < m     (14.22)
438
A First Course in Functional Analysis
Proof: From (14.16) and (14.15), we have

Σ_{j=0}^{m-1} h_j h̄_j = m  and  Σ_{j=0}^{m-1} h_j = m.

It follows that Σ_{j=0}^{m-1} h̄_j = m.
Now,

Σ_{j=0}^{m-1} |h_j - 1|² = Σ_{j=0}^{m-1} (h_j h̄_j - h_j - h̄_j + 1)
                         = Σ_{j=0}^{m-1} h_j h̄_j - Σ_{j=0}^{m-1} h_j - Σ_{j=0}^{m-1} h̄_j + m
                         = m - m - m + m
                         = 0

which implies that h_j = 1 for j = 0, . . . , m-1.
Proof of theorem 14.3.10
We have seen that the elements of the first row of a Haar matrix are all equal to 1. For the remaining m-1 rows, we proceed as follows.

( 1  0 ) H  is a Haar matrix whenever H is a Haar matrix and U ∈ U(m-1).
( 0  U )

Hence the action of U can be employed to develop a canonical form for H. The first step rotates the last row of H so that its first (m-2) entries are zero. Since the rows of a Haar matrix are pairwise orthogonal and of length equal to √m, the orthogonality of the first and last rows implies that the last row can be normalized to have the form

(0, 0, 0, . . . , √(m/2), -√(m/2)).

Using the same argument for the preceding rows, the result can be obtained.
14.3.12 Remarks
I. If H_1, H_2 ∈ H(m; ℂ) are two Haar matrices, then there exists a unitary matrix U ∈ U(m-1), such that

H_1 = ( 1  0 ) H_2.
      ( 0  U )

II. If A is a real wavelet matrix, that is, if a_sj ∈ ℝ, then A is a Haar matrix if and only if

A = ( 1  0 ) H_0     (14.23)
    ( 0  O )

where O ∈ O(m-1) is an orthogonal matrix and H_0 is the canonical Haar matrix of rank m.
14.4 One-Dimensional Wavelet Systems
In this section, we introduce the basic scaling and wavelet functions of wavelet analysis. The principal result is that for any wavelet matrix A ∈ WM(m, g; ℂ), there is a scaling function φ(x) and (m-1) wavelet functions ψ¹(x), . . . , ψ^{m-1}(x) which satisfy specific scaling relations defined in terms of the wavelet matrix A. These functions are all compactly supported and square-integrable.
14.4.1 The scaling equation
Let A ∈ WM(m, g; ℂ) be a wavelet matrix and consider the functional difference equation

φ(x) = Σ_{j=0}^{mg-1} a_{0j} φ(mx - j)     (14.24)

This equation is called the scaling equation associated with the wavelet matrix A = (a_sj).
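A solution of the scaling equation can be approached iteratively: starting from the box function, one applies the right-hand side of (14.24) repeatedly (the so-called cascade iteration; a sketch of ours, not the book's construction). For the Haar scaling vector a_0 = (1, 1) and m = 2, the box function is itself a fixed point, so the iteration leaves it unchanged.

```python
def cascade(a0, m, steps=10, grid=64):
    """Iterate phi_{k+1}(x) = sum_j a0[j] * phi_k(m x - j) on a grid over [0, 1[."""
    phi = [1.0] * grid  # start from the box function on [0, 1[
    xs = [i / grid for i in range(grid)]
    for _ in range(steps):
        new = []
        for x in xs:
            v = 0.0
            for j, a in enumerate(a0):
                y = m * x - j
                if 0 <= y < 1:
                    # piecewise-constant interpolation of phi on [0, 1[
                    v += a * phi[int(y * grid)]
            new.append(v)
        phi = new
    return phi

phi = cascade([1.0, 1.0], 2)
print(min(phi), max(phi))  # the box function is a fixed point: 1.0 1.0
```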
14.4.2 The scaling function
If φ ∈ L²(ℝ) is a solution of the equation (14.24), then φ is called a scaling function.

It may be noted that

( I  0 ) H
( 0  U )

is a Haar matrix whenever H is a Haar matrix and U belongs to the group of (m-1) × (m-1) complex matrices with U*U = I. Hence the action of U can be employed to develop a canonical form for H. Let the first step be to rotate the last row of H so that the first (m-2) elements of the row are zeroes. Let the last two elements be α and β respectively. Then α + β = 0 by (14.16) and α² + β² = m by (14.15). Hence, β = -α and α² = m/2. Therefore, the last row is

(0, 0, . . . , 0, √(m/2), -√(m/2)).

The next step would be to rotate the matrix so that only the last three elements in the (m-2)th row are nonzero. If these elements are α, β, γ, then

α + β + γ = 0
α² + β² + γ² = m.

If we take β = γ, then α + 2β = 0, so that

α² + 2β² = 6β² = m,

i.e., β = γ = -√(m/6), α = 2√(m/6).
Hence, the last but one row is

(0, 0, . . . , 0, 2√(m/6), -√(m/6), -√(m/6)).

Similarly, let the rotation yield for the (m-s)th row all the first (m-s-1) elements as zeroes and the last (s+1) elements as nonzeroes. Let these be α_1, α_2, . . . , α_s, α_{s+1}. Then

Σ_i α_i = 0,   Σ_i α_i² = m.

Taking α_2 = α_3 = · · · = α_s = α_{s+1}, we have α_1 = -sα_2 and

s²α_2² + sα_2² = m,   or, α_2² = m/(s² + s).

Hence, the (m-s)th row is

(0, 0, 0, . . . , 0, s√(m/(s²+s)), -√(m/(s²+s)), . . . , -√(m/(s²+s))).

Thus we get the expression for H_0 [see (14.21)].

14.4.3 The wavelet function
If φ is a scaling function for the wavelet matrix A, then the wavelet functions {ψ¹, ψ², . . . , ψ^{m-1}} associated with the matrix A and the scaling function φ are defined by the formula

ψ^s(x) = Σ_{j=0}^{mg-1} a_sj φ(mx - j)     (14.25)
14.4.4 Theorem
Let A ∈ WM(m, g; ℂ) be a wavelet matrix. Then, there exists a unique φ ∈ L²(ℝ), such that
(i) φ satisfies (14.24)
(ii) ∫ φ(x) dx = 1
(iii) supp φ ⊆ [0, (g-1) m/(m-1) + 1].
For proof see Resnikoff and Wells [46].
In what follows we give some examples and develop the notion of the wavelet system associated with the wavelet matrix A.
14.4.5 Examples
1. The Haar functions. If

H_0 = ( 1   1 )
      ( 1  -1 )

is the canonical Haar wavelet matrix of rank 2, then the scaling function satisfies the equation

φ(x) = φ(2x) + φ(2x - 1)     (14.26)

Hence, φ(x) = χ_[0,1[(x), where χ_K, the characteristic function of a subset K [see 10.2.10], is a solution of the equation (14.26) [see figure 14.2(b)].
The wavelet function ψ(x) = φ(2x) - φ(2x - 1), where φ(x) = χ_[0,1[(x). For the graph, see figure 14.2(c).
Fig. 14.2(b) The Haar scaling function for rank 2
Fig. 14.2(c) The Haar wavelet function for rank 2
2. Daubechies wavelets for rank 2 and genus 2. Let

D_2 = (1/4) (  1+√3   3+√3    3-√3   1-√3 )
            ( -1+√3   3-√3   -3-√3   1+√3 )

This is a wavelet matrix of rank 2 and genus 2, discovered by Daubechies. For graphs of the corresponding scaling and wavelet functions, see Resnikoff and Wells [46]. The common support of φ and ψ is [0, 3].
We end by stating a theorem due to Lawton [46].
14.4.6 Theorem
Let A ∈ WM(m, g; ℂ). Let W(A) be the wavelet system associated with A and let f ∈ L²(ℝ) [see 3.1.3]. Then, there exists an L²-convergent expansion

f(x) = Σ_{j=-∞}^{∞} c_j φ_j(x) + Σ_{s=1}^{m-1} Σ_{i=0}^{∞} Σ_{j=-∞}^{∞} d^s_{ij} ψ^s_{ij}(x)     (14.27)

where the coefficients are given by

c_j = ∫ f(x) φ_j(x) dx     (14.28)

d^s_{ij} = ∫ f(x) ψ^s_{ij}(x) dx.     (14.29)

For proof, see Resnikoff and Wells [46].
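A discrete analogue of the expansion (14.27) for the rank-2 Haar system can be sketched as follows (our own construction, not the book's): one level of the discrete Haar transform splits samples into averages (scaling coefficients) and differences (wavelet coefficients), and the inverse step recovers the samples exactly, mirroring the L²-convergent expansion.

```python
def haar_step(f):
    """One level of the discrete Haar transform: averages and differences."""
    c = [(f[2 * i] + f[2 * i + 1]) / 2 for i in range(len(f) // 2)]
    d = [(f[2 * i] - f[2 * i + 1]) / 2 for i in range(len(f) // 2)]
    return c, d

def haar_inverse(c, d):
    """Reconstruct the samples from averages and differences."""
    out = []
    for ci, di in zip(c, d):
        out.extend([ci + di, ci - di])
    return out

f = [4.0, 2.0, 5.0, 5.0, 1.0, 3.0, 0.0, 2.0]
c, d = haar_step(f)
print(haar_inverse(c, d) == f)  # True: the expansion reproduces f
```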
14.4.7 Remark
1. For most wavelet matrices, the wavelet system W(A) will be a complete orthonormal system and an orthonormal basis for L²(ℝ).
2. However, for some wavelet matrices the system W(A) is not orthonormal, and yet theorem 14.4.6 still holds.
CHAPTER 15
DYNAMICAL SYSTEMS
15.1 A Dynamical System and Its Properties
Let us consider a first order o.d.e. (ordinary differential equation) of the form

ẋ = dx/dt = sin x     (15.1)

The solution of the above equation is

t = log [ (cosec x_0 + cot x_0) / (cosec x + cot x) ]     (15.2)

where x(t)|_{t=0} = x_0.
From (15.2), we can find the values of x for different values of t. But the determination of the values of x for different values of t is quite difficult. On the other hand, we can get a lot of information about the solution by plotting ẋ against x, i.e., the graph of ẋ = sin x.

Fig. 15.1

Let us suppose x_0 = π/4. Then the graph in figure 15.1 describes the qualitative features of the solution x(t) for all t > 0. We think of t as time, x as the position of an imaginary particle moving along the real line, and ẋ as the velocity of that particle.
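The qualitative picture can be checked numerically; the following sketch (ours, not the book's) integrates ẋ = sin x from x_0 = π/4 by Euler's method. Starting between 0 and π, where sin x > 0, the solution increases monotonically and approaches the fixed point x = π.

```python
import math

# Euler integration of x' = sin x from x0 = pi/4 over t in [0, 50].
x, dt = math.pi / 4, 0.01
for _ in range(5000):
    x += dt * math.sin(x)

print(abs(x - math.pi) < 1e-3)  # True: the solution approaches x = pi
```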
Then the differential equation ẋ = sin x represents a vector field on the line. The arrows indicate the directions of the corresponding velocity vector at each x. The arrows point to the right when ẋ > 0 and to the left when ẋ < 0. Here ẋ > 0 for π/2 > x > 0 and ẋ < 0 for x < 0; at x > π, ẋ < 0, and for π/2 < x < π, ẋ > 0. At points where ẋ = 0, there is no flow. Such points are therefore called fixed points. This type of study of o.d.e.s to obtain the qualitative properties of the solution was pioneered by H. Poincaré in the late 19th century [6]. What emerged was the theory of dynamical systems. This may be treated as a special topic of the theory of o.d.e.s. Poincaré, followed by I. Bendixson [Bhatia and Szegő [6]], studied topological properties of solutions of autonomous o.d.e.s in the plane. Almost simultaneously with Poincaré, A.M. Lyapunov [32] developed his theory of stability of a motion (solution) for a system of n first order o.d.e.s. He defined, in a precise form, the notions of stability, asymptotic stability and instability, and developed a method for the analysis of the stability properties of a given solution of an o.d.e. But all his analysis was strictly in a local setting. On the other hand, Poincaré studied the global properties of differential equations in a plane.
As pointed out in the example above, Poincaré introduced the concept of a trajectory, i.e., a curve in the (x, ẋ) plane, parametrized by the time variable t, which can be found by eliminating the variable t from the given equations, thus reducing these to first order differential equations connecting x and ẋ.
In this way, Poincaré set up a convenient geometric framework in which to study the qualitative behaviour of planar differential equations. He was not interested in the integration of particular types of equations, but in classifying all possible behaviours of the class of all second order differential equations. Great impetus to the theory of dynamical systems came from the work of G.D. Birkhoff [7]. There are many other authors who contributed to a large extent to this qualitative theory of differential equations. In this chapter we give exposure to the basic elements of a dynamical system.
15.1.1 Definition: dynamical system
Let X denote a metric space with metric ρ. A dynamical system on X is the triplet (X, ℝ, π), where π is a map from the product space X × ℝ into the space X satisfying the following axioms:
(i) π(x, 0) = x for every x ∈ X (identity axiom)
(ii) π(π(x, t_1), t_2) = π(x, t_1 + t_2) for every x ∈ X and t_1, t_2 in ℝ (group axiom)
(iii) π is continuous (continuity axiom).
Given a dynamical system on X, the space X and the map π are respectively called the phase space and the phase map (of the dynamical system). Unless otherwise stated, X is assumed to be given.
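As an assumed example (not from the book), the three axioms of 15.1.1 can be checked numerically for the flow of ẋ = sin x, with the phase map π(x, t) approximated by a small Euler integrator; the function name `flow` is our own.

```python
import math

def flow(x, t, dt=1e-4):
    """Approximate phase map pi(x, t) for x' = sin x (t >= 0)."""
    steps = int(round(t / dt))
    for _ in range(steps):
        x += dt * math.sin(x)
    return x

x0 = 0.7
print(flow(x0, 0.0) == x0)                                   # identity axiom
print(abs(flow(flow(x0, 1.0), 2.0) - flow(x0, 3.0)) < 1e-3)  # group axiom
```

Because the step size divides the time intervals evenly, composing the approximate flow over times 1 and 2 performs exactly the same iteration as flowing over time 3, so the group axiom holds here to within floating-point error.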
15.1.2 Example: ordinary autonomous differential systems
The differential system

dx/dt = ẋ = f(x, t)     (15.3)

is called an autonomous system if the RHS in (15.3) does not contain t explicitly.
We consider the equation

dx/dt = ẋ = f(x)     (15.4)

where f : ℝⁿ → ℝⁿ is continuous and, moreover, let us assume that for each x ∈ ℝⁿ there is a unique solution φ(t, x) which is defined on ℝ and satisfies φ(0, x) = x. Then, following Coddington and Levinson ([13], chapters 1 and 2), it can be said that the uniqueness of the solution implies that

φ(t_1, φ(t_2, x)) = φ(t_1 + t_2, x)     (15.5)

φ being considered as a function from ℝ × ℝⁿ into ℝⁿ. Moreover, φ is continuous in its arguments [see section 4, chapter II, Coddington and Levinson [13]].
We assume that f satisfies the global Lipschitz condition, i.e.,

‖f(x_1) - f(x_2)‖ ≤ M ‖x_1 - x_2‖

for all x_1, x_2 ∈ ℝⁿ and a given positive number M, so that the above conditions on solutions of (15.4) are obtained. We next want to show that the map π : ℝⁿ × ℝ → ℝⁿ such that π(x, t) = φ(t, x) defines a dynamical system on ℝⁿ.
For that, we note that

π(x, 0) = φ(0, x) = x
π(π(x, t_1), t_2) = φ(t_2, φ(t_1, x)) = φ(t_1 + t_2, x) = π(x, t_1 + t_2)

Moreover, π(x, t) is continuous in its arguments. Thus all the axioms (i), (ii) and (iii) of 15.1.1 are fulfilled. Hence π(x, t) is a phase map.
Example 2: ordinary autonomous differential systems
Let us consider the system

dx/dt = ẋ = F(x)     (15.6)

where F : D → ℝⁿ is a continuous function on an open set D ⊆ ℝⁿ and for each x ∈ D (15.6) has a unique solution φ(t, x), φ(0, x) = x, defined on a maximal interval (a(x), b(x)), -∞ ≤ a(x) < 0 < b(x) ≤ +∞. For each x ∈ D, define γ⁺(x) = {φ(t, x) : 0 ≤ t < b(x)} and γ⁻(x) = {φ(t, x) : a(x) < t ≤ 0}. γ⁺(x) and γ⁻(x) are respectively called the positive and the negative trajectories through the point x ∈ D.
We will show that to each system (15.6), there corresponds a system

dx/dt = ẋ = F_1(x), x ∈ ℝⁿ     (15.7)

where F_1 : D → ℝⁿ is such that (15.7) defines a dynamical system on D with the property that for each x ∈ D the systems (15.6) and (15.7) have the same positive and the same negative trajectories.
If D = ℝⁿ, then given the equation (15.4), we set

dx/dt = ẋ(t) = F_1(x) = F(x) / (1 + ‖F(x)‖)     (15.8)

where ‖·‖ is the Euclidean norm. If D ≠ ℝⁿ, then the boundary ∂D of D is nonempty and closed.
We next consider the system

dx/dt = ẋ(t) = F_1(x) = [F(x) / (1 + ‖F(x)‖)] · [ρ(x, ∂D) / (1 + ρ(x, ∂D))]     (15.9)

where ρ(x, ∂D) = inf{‖x - y‖ : y ∈ ∂D}.
In other words, ρ(x, ∂D) is the distance of x from ∂D.
Since f satisfies the Lipschitz condition, equation (15.4) has a unique solution. Now,

‖F_1(x) - F_1(y)‖ = ‖ F(x)/(1 + ‖F(x)‖) - F(y)/(1 + ‖F(y)‖) ‖
                  ≤ [1 / ((1 + ‖F(x)‖)(1 + ‖F(y)‖))] [ ‖F(x) - F(y)‖ + ‖ ‖F(y)‖ F(x) - ‖F(x)‖ F(y) ‖ ]
                  < K ‖x - y‖ + ‖e(x) - e(y)‖

where e(x) is a vector of unit norm in the direction of F(x).
Assuming ‖F(x)‖ > m > 0, we have

‖F_1(x) - F_1(y)‖ < (K + k) ‖x - y‖

where ‖e(x) - e(y)‖ ≤ k ‖x - y‖.
Thus, F_1 satisfies the global Lipschitz condition. Thus, (15.8) defines a dynamical system. Similarly, (15.9) also defines a dynamical system. (15.8) and (15.9) have the same positive and negative trajectories as (15.4).
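The point of the rescaled field (15.8) is that it changes only the speed of traversal, not the trajectories. The following numerical illustration (ours, not the book's) takes F(x) = sin x on D = ℝ: both the original and the rescaled flow, started at the same point, follow the same orbit and approach the fixed point π.

```python
import math

def integrate(field, x, t, dt=0.01):
    """Euler integration of x' = field(x) over time t."""
    for _ in range(int(t / dt)):
        x += dt * field(x)
    return x

F = math.sin
F1 = lambda x: F(x) / (1.0 + abs(F(x)))  # the rescaled field of (15.8)

a = integrate(F, 0.5, 60.0)
b = integrate(F1, 0.5, 120.0)  # slower field, so integrate longer
print(abs(a - math.pi) < 1e-3, abs(b - math.pi) < 1e-3)  # True True
```

Since 1 + |F(x)| ≤ 2 here, the rescaled flow is at most twice as slow, which is why doubling the integration time suffices.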
15.2 Homeomorphism, Diffeomorphism, Riemannian Manifold
15.2.1 Definition: homeomorphism
See 1.6.5.
15.2.2 Definition: manifold
A topological space Y [see 1.5.1] is called an n-dimensional manifold (or n-manifold) if every point of Y has a neighbourhood homeomorphic to an open subset of ℝⁿ. Since one can clearly take this subset to be an open ball, and since an open ball in ℝⁿ is homeomorphic to ℝⁿ itself, the condition for a space to be a manifold can also be expressed by saying that every point has a neighbourhood homeomorphic to ℝⁿ.
15.2.3 Remark
A homeomorphism from an open subset V of Y to an open subset of ℝⁿ allows one to transfer the cartesian coordinate system of ℝⁿ to V. This gives a local coordinate system or chart on Y [see Schwartz [51]].
15.2.4 Differentiability, C^r, C^0 maps
Let U be an open subset of ℝⁿ. For the definition of differentiability of J : U → ℝ see 13.2 and 13.3.
C^r: J is said to belong to class C^r if J is r times continuously differentiable on the open set U ⊆ ℝⁿ.
C^r map: Let F : U ⊆ ℝⁿ → V ⊆ ℝᵐ. For the definition of differentiability of F and the concrete form of the derivative see Ortega and Rheinboldt [42].
If (x_1, . . . , x_n)^T ∈ U and (y_1, y_2, . . . , y_m)^T ∈ V then

y_i = F_i(x_1, x_2, . . . , x_n), i = 1, 2, . . . , m     (15.10)

The map F is called a C^r map if each F_i is r times continuously differentiable for some 1 ≤ r ≤ ∞.
Smooth: F : U ⊆ ℝⁿ → ℝᵐ is said to be smooth if it is a C^∞ map.
C^0 map: Maps that are continuous but not differentiable will be referred to as C^0 maps.
15.2.5 Definition: diffeomorphism
F : U ⊆ ℝⁿ → ℝᵐ is said to be a diffeomorphism if it fulfils the following conditions:
(i) F is a bijection (one-to-one and onto) [see 1.2.3].
(ii) Both F and F⁻¹ are differentiable mappings.
F is said to be a C^k diffeomorphism if both F and F⁻¹ are C^k maps.
15.2.6 Remark
Note that G : U → V is a diffeomorphism if and only if m = n and the matrix of partial derivatives

G'(x_1, . . . , x_n) = ( ∂G_i/∂x_j ),  i, j = 1, . . . , n

is nonsingular at every x ∈ U.
Example: Let G(x, y) = (exp y, exp x)^T with U = ℝ² and V = {(x, y) : x > 0, y > 0}.

G'(x, y) = ( 0      exp y )
           ( exp x  0     ),   det G'(x, y) = -exp(x + y) ≠ 0 for each (x, y) ∈ ℝ².

Thus, G is a diffeomorphism.
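The nonsingularity of the Jacobian can be confirmed numerically; the following sketch (our own check, not the book's) approximates the Jacobian determinant of G(x, y) = (exp y, exp x) by central differences and compares it with the exact value -exp(x + y).

```python
import math

def G(x, y):
    return (math.exp(y), math.exp(x))

def jac_det(f, x, y, h=1e-5):
    """Jacobian determinant of f at (x, y) by central differences."""
    fx = [(a - b) / (2 * h) for a, b in zip(f(x + h, y), f(x - h, y))]
    fy = [(a - b) / (2 * h) for a, b in zip(f(x, y + h), f(x, y - h))]
    return fx[0] * fy[1] - fy[0] * fx[1]

x, y = 0.3, -0.7
print(abs(jac_det(G, x, y) + math.exp(x + y)) < 1e-6)  # True: det = -exp(x+y)
```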
15.2.7 Two types of dynamical systems
We note that in a dynamical system the state changes with time t. The two types of dynamical system encountered in practice are as follows:
(i) x_{t+1} = G(x_t), t ∈ ℤ or ℕ     (15.11)
Such a system is called a discrete system.
(ii) When t is continuous, the dynamics are usually described by a differential equation,

dx/dt = ẋ = F(x)     (15.12)

In (15.11), x represents the state of the system and takes values in the state space or phase space X [see 15.1.1]. Sometimes the phase space is the Euclidean space or a subspace of it. But it can also be a non-Euclidean structure such as a circle, a sphere, a torus or some other differential manifold.
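A minimal sketch (assumed example, not from the book) of a discrete system of type (15.11) is the logistic map G(x) = r x (1 - x). For r = 2.5 the orbit settles on the fixed point x* = 1 - 1/r = 0.6.

```python
def orbit(G, x0, n):
    """The first n iterates of the discrete system x_{t+1} = G(x_t)."""
    xs = [x0]
    for _ in range(n):
        xs.append(G(xs[-1]))
    return xs

G = lambda x: 2.5 * x * (1.0 - x)
xs = orbit(G, 0.1, 100)
print(abs(xs[-1] - 0.6) < 1e-9)  # True: the orbit converges to x* = 0.6
```

Convergence here follows because |G'(0.6)| = 0.5 < 1, so the fixed point is attracting.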
15.2.8 Advantages of taking the phase space as a differential manifold
If the phase space X is Euclidean, then it is easy to analyse. But if the phase space X is non-Euclidean yet a differential manifold, there is also an advantage. This is because a differential manifold is locally Euclidean, and this allows us to extend the idea of differentiability to functions defined on it. If Y is a manifold of dimension n, then for any x ∈ Y we can find a neighbourhood N_x ⊆ Y containing x and a homeomorphism h : N_x → ℝⁿ which maps N_x onto a neighbourhood of h(x) ∈ ℝⁿ. Since we can define coordinates in U = h(N_x) ⊆ ℝⁿ (the coordinate curves of which can be mapped back onto N_x), we can think of h as defining local coordinates on the patch N_x of Y [see figure 15.2].
The pair (U, h) is called a chart and we can use it to define differentiability on N_x. Let us assume that G : N_x → N_x. Then G induces a mapping G̃ = h ∘ G ∘ h⁻¹ : U → U [see figure 15.3]. We say G is a C^k map on N_x if G̃ is a C^k map on U.

Fig. 15.2 Cylinder: example of a differential manifold
Fig. 15.3 The induced map G̃ = h ∘ G ∘ h⁻¹
15.3 Stable Points, Periodic Points and Critical Points
Let Y be a differential manifold and G : Y → Y be a diffeomorphism. For x ∈ Y, the iteration (15.11) generates a sequence {G^k(x)}. The distinct points of the sequence define the orbit or trajectory of x under G. More generally, the orbit of x under G is {G^m(x) : m ∈ ℤ}. For m ∈ ℤ⁺, G^m is the composition of G with itself m times. Since G is a diffeomorphism, G⁻¹ exists and G^{-m} = (G⁻¹)^m; G⁰ = Id_Y, the identity map on Y. Thus, the orbit of x is an infinite (on both sides) sequence of distinct points of Y.
15.3.1 Definition: fixed point
A point x* ∈ Y is called a fixed point of G if G^m(x*) = x* for all m ∈ ℤ.
15.3.2 Example 1
To find the fixed points for ẋ = F(x), where F(x) = x² - 1.
Now, x(t + 1) - x(t) ≈ F(x(t)). If x* is a fixed point of G, then

x* - x* = F(x*), i.e., F(x*) = 0, i.e., x* = ±1.
It may be noted that at the fixed points x* = ±1, there is no flow.
15.3.3 Definition: periodic points
A point x* ∈ Y is said to be a periodic point of G if G^p(x*) = x* for some integer p ≥ 1.
The least value of p for which the definition of a periodic point holds is called the period of the point x*, and the orbit of x*,

{x*, G(x*), G²(x*), . . . , G^{p-1}(x*)}     (15.13)

is said to be a periodic orbit of period p or a p-cycle of G.
15.3.4 Remark
1. A fixed point is a periodic point of period one.
2. Since (G^p)^q(x*) = x* for all q ∈ ℤ for a periodic point x* of G with period p, x* is a fixed point of G^p.
3. If x* is a periodic point of period p for G, then all other points in the orbit of x* are periodic points of period p of G. For if G^p(x*) = x*, then G^p(G^i(x*)) = G^i(G^p(x*)) = G^i(x*), i = 0, 1, 2, . . . , p - 1.
15.3.5 Definition: stability according to Lyapunov (stable point)
A fixed point x* is said to be stable if, for every neighbourhood N_{x*} of x*, there is a neighbourhood Ñ_{x*} ⊆ N_{x*} of x* such that if x ∈ Ñ_{x*} then G^m(x) ∈ N_{x*} for all m > 0. The above definition implies that iterates of points near a stable fixed point remain near it for m ∈ ℤ⁺. This is in conformity with the definition of a stable equilibrium point for a moving particle.
15.3.6 Remark
1. If a fixed point x* is stable and lim_{m→∞} G^m(x) = x* for all x in some neighbourhood of x*, then the fixed point is said to be asymptotically stable.
2. Unstable point. A fixed point x* is said to be unstable if it is not stable, i.e., there is a neighbourhood N_{x*} of x* such that for every neighbourhood Ñ_{x*} ⊆ N_{x*} we have G^m(x) ∉ N_{x*} for some x ∈ Ñ_{x*} and some m > 0.
15.3.7 Example
We refer to the example in 15.3.2. To determine stability, we plot x² - 1 and then sketch the vector field [see figure 15.4]. The flow is to the right where x² - 1 > 0 and to the left where x² - 1 < 0. Thus, x* = -1 is stable and x* = 1 is unstable.
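The sign argument can be automated; the following sketch (ours, not the book's) classifies a fixed point of ẋ = F(x) on the line by the flow direction on either side, as in figure 15.4 for F(x) = x² - 1.

```python
def classify(F, xstar, h=1e-3):
    """Classify a fixed point of x' = F(x) by the nearby flow directions."""
    left, right = F(xstar - h), F(xstar + h)
    if left > 0 > right:
        return "stable"    # flow points toward x* from both sides
    if left < 0 < right:
        return "unstable"  # flow points away from x* on both sides
    return "semi-stable"

F = lambda x: x**2 - 1
print(classify(F, -1.0), classify(F, 1.0))  # stable unstable
```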
Fig. 15.4
We end this section with an important theorem.
15.4 Existence, Uniqueness and Topological Consequences
15.4.1 Theorem: existence and uniqueness
Consider ẋ = F(x), where x ∈ ℝⁿ and F : U → ℝⁿ is C¹, U being an open connected set in ℝⁿ. Then, for x_0 ∈ U, the initial value problem has a solution x(t) on some time interval about t = 0 and the solution is unique. (Strogatz [54])
15.4.2 Remark
1. This theorem has deep implications: different trajectories do not intersect. For if two trajectories intersected at some point in the phase space, then starting from the crossing point we would get two solutions along the two trajectories. This contradicts the uniqueness of the solution.
2. In two-dimensional phase spaces, let us consider a closed orbit C in the phase plane. Then any trajectory starting inside C will always lie within C. If there are fixed points inside C, then the trajectory may approach one of them. But if there are no fixed points inside C, then by intuition we can say that the trajectory cannot wander inside the orbit endlessly. This is supported by the following famous theorem.
Poincaré-Bendixson theorem: If a trajectory is confined to a closed, bounded region and there are no fixed points in the region, then the trajectory must eventually approach a closed orbit (Arrowsmith and Place [3]).
15.5 Bifurcation Points and Some Results
Bifurcation phenomena are the outcome of the presence of a parameter in a dynamical system. A physical example may stimulate the study of bifurcation theory. Suppose a body is resting on a vertical iron pillar. If the weight is gradually increased, a stage may come when the pillar becomes unstable and buckles. Here the weight plays the role of a control parameter, and the deflection of the pillar from the vertical plays the role of the dynamical variable x. The bifurcation of fixed points for flows on the line occurs in several physical phenomena, such as the onset of coherent radiation in a laser and the outbreak of an insect population [see Strogatz [54]]. Against the above backdrop, we can give the formal definition of bifurcation as follows. Let F : ℝᵐ × ℝⁿ → ℝⁿ (G : ℝᵐ × ℝⁿ → ℝⁿ) be an m-parameter, C^r family of vector fields (diffeomorphisms) on ℝⁿ, i.e., (μ, x) ↦ F(μ, x) (G(μ, x)), μ ∈ ℝᵐ, x ∈ ℝⁿ. The family F (G) is said to have a bifurcation point at μ = μ* if, in every neighbourhood of μ*, there exist values of μ such that the corresponding vector fields F(μ, ·) = F_μ(·) (diffeomorphisms G(μ, ·) = G_μ(·)) show topologically distinct behaviour. For details see Arrowsmith and Place [3]. We provide here some examples.
Example 1. F(x, μ) = μ - x²     (15.14)
Let us sketch the phase portraits for ẋ = F(x, μ), with (μ, x) near (0, 0). Since our study is confined to a neighbourhood of (0, 0), the bifurcation study is local in nature [see figure 15.5].

Fig. 15.5

For μ < 0, the equation (15.14) yields ẋ < 0 for all x. When μ = 0 there is a non-hyperbolic fixed point at x = 0, but ẋ < 0 for all x ≠ 0 (Arrowsmith and Place [3]). For μ > 0, μ - x² = 0 gives x = ±μ^{1/2}; for x > μ^{1/2}, ẋ < 0, and for -μ^{1/2} < x < μ^{1/2}, ẋ > 0. Hence, x = μ^{1/2} is a stable fixed point. On the other hand, x = -μ^{1/2} is an unstable fixed point.
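The saddle-node picture of (15.14) can be summarized computationally; this numerical sketch (ours, not the book's) lists the fixed points of F(x, μ) = μ - x² as μ passes through 0, annotating stability via F'(x) = -2x.

```python
import math

def fixed_points(mu):
    """Fixed points of x' = mu - x**2 as mu crosses the bifurcation value 0."""
    if mu < 0:
        return []          # no fixed points
    if mu == 0:
        return [0.0]       # one non-hyperbolic fixed point
    r = math.sqrt(mu)
    return [r, -r]         # F'(x) = -2x: stable at +r, unstable at -r

print(fixed_points(-1.0), fixed_points(0.0), fixed_points(4.0))
# [] [0.0] [2.0, -2.0]
```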
Example 2. F(μ, x) = μx - x²     (15.15)
If μ > 0, x = μ is stable and x = 0 is unstable. The stabilities are reversed when μ < 0. At μ = 0, there is one singularity at x = 0 and ẋ < 0 for all x ≠ 0. This leads to a bifurcation as depicted below [see figure 15.6].

Fig. 15.6
List of Symbols

∅ (phi) : Null set
∈ : Belongs to
A ⊆ B : A is a subset of B
B ⊇ A : B is a superset of A
P(A) : Power set of A
ℤ : Set of all integers
ℚ : Set of all rational numbers
N = {1, 2, . . . , n, . . .} : The set of natural numbers
Set of all polynomials
ℝⁿ : n-dimensional real Euclidean space
]a, b[ : Open interval with endpoints a and b
[a, b] : Closed interval containing a and b
Class of all denumerable sets
ℝ : Set of real numbers
ℂ : Set of complex numbers
A ∪ B : Union of sets A and B
A + B : The sum of sets A and B
A ∩ B : The intersection of A and B
A × B : The product of A and B
A \ B : The set of elements in A which are not elements of B
Aᶜ : The complement of the set A
g.l.b (infimum) : Greatest lower bound
l.u.b (supremum) : Least upper bound
X × Y : Cartesian product of sets X and Y
(x, y) : Ordered pair of elements x and y
Space of m × n matrices
A = {a_ij} : A, a matrix
Sequence space
C([a, b]) : Space of functions continuous in [a, b]
l_p : pth power summable space
l₂ : Hilbert sequence space
L_p([a, b]) : Lebesgue pth integrable functions
P_n([a, b]) : Space of real polynomials of order n defined on [a, b]
X₁ ⊕ X₂ : Direct sum of subspaces
X/Y : Quotient space of a linear space X by a subspace Y
codim Y : Codimension of Y in X
dim Y : Dimension of Y in X
ρ(x, y) : Distance between x and y
(X, ρ) : Metric space where X is a set and ρ a metric
M([a, b]) : Space of bounded real functions
Space of convergent numerical sequences
Space of bounded numerical sequences
Space of not necessarily bounded sequences
⊂ : Set inclusion
D(A) : Diameter of a set A
D(A, B) : Distance between two sets A and B
D(x, A) : Distance of a point x from a set A
B(x₀, r) : Open ball with centre x₀ and radius r
B̄(x₀, r) : Closed ball with centre x₀ and radius r
S(x₀, r) : Sphere with centre x₀ and radius r
Kᶜ : The complement of the set K
N_{x₀} : Neighbourhood of x₀
Ā : Closure of the set A
ℂⁿ : n-dimensional complex space
S([a, b]) : Space of continuous real valued functions on [a, b] with the metric ρ(x, y) = ∫_a^b |x(t) - y(t)| dt
B_x : Basic F-neighbourhood of x
D(A) : Derived set of A
Int A : Interior of A
∩_{i=1}^{n} F_i : Intersection of the sets F_i for i = 1, 2, . . . , n
Inf : Infimum
Sup : Supremum
‖·‖ : Norm
x_m → x : {x_m} tends to x
lim sup : Limit supremum
lim inf : Limit infimum
‖·‖₁ : l₁ norm
‖·‖₂ : l₂ norm
‖·‖_∞ : l_∞ norm
Σ_{n=1}^{∞} x_n : Summation of x_n for n = 1, 2, . . .
Σ_{n=1}^{m} x_n : Summation of x_n for n = 1, 2, . . . , m (finite)
l_p^{(n)} : n-dimensional pth summable space
X⁰ : The interior of the set X
c₀ : Space of all sequences converging to 0
‖x + L‖_q : Quotient norm of x + L
Span L : Set of linear combinations of elements in L
σ_a(A) : Approximate eigenspectrum of an operator A
r_σ(A) : Spectral radius of an operator A
Aᵀ : Transpose of a matrix A
D(A) : Domain of an operator A
R(A) : Range of an operator A
N(A) : Null space of an operator A
BV([a, b]) : Space of scalar-valued functions of bounded variation on [a, b]
NBV([a, b]) : Space of scalar-valued normalized functions of bounded variation on [a, b]
Δ : Forward difference operator
x_n →_w x : {x_n} is weakly convergent to x
⟨x, y⟩ : Inner product of x and y
x ⊥ y : x is orthogonal to y
E ⊥ F : E is orthogonal to F
M* : Conjugate transpose of a matrix M
A ≥ 0 : A is a non-negative operator
w(A) : Numerical range of an operator A
m(E) : Lebesgue measure of a subset E of ℝ
∫_E x dm : Lebesgue integral of a function x over a set E
Var(x) : Total variation of a scalar valued function x
ess sup_E x : Essential supremum of a function x over a set E
L_∞(E) : The set of all equivalence classes of essentially bounded functions on E
Span L : Closure of the span of L
M^⊥ : Set orthogonal to M
M^⊥⊥ : Set orthogonal to M^⊥
B(E_x, E_y) : Space of all bounded linear operators mapping n.l.s. E_x into n.l.s. E_y
H : Hilbert space
A⁻¹ : Inverse of an operator A
A* : Adjoint of an operator A
Ā : Closure of A
A_λ : Operator of the form A - λI
‖A‖ : Norm of A
A(x, x) : Quadratic Hermitian form
A(x, y) : Bilinear Hermitian form
C^k([0, 1]) : Space of continuous functions x(t) on [0, 1] having derivatives to within kth order
E_x* : Conjugate (dual) of E_x
E_x** : Conjugate (dual) of E_x*
Canonical embedding of E_x into E_x**
χ_E : Characteristic function of a set E
δ_ij : Kronecker delta
sgn z : Signum of z
{e₁, e₂, . . .} : Standard Schauder basis for ℝⁿ or ℂⁿ or l_p, 1 ≤ p < ∞
Gr(F) : Graph of a map F
I : Identity operator
ρ(A) : Resolvent set of an operator A
σ(A) : Spectrum of an operator A
σ_e(A) : Eigenspectrum of an operator A
Abbreviations

BVP : Boundary value problems
LHS : Left hand side
ODE : Ordinary differential equations
RHS : Right hand side
s.t. : Such that
WRT : With respect to
a.e. : Almost everywhere
Bibliography
[1] Aliprantis, C.D. and Burkinshaw, O. (2000) : Principles of Real
Analysis, Harcourt Asia Pte Ltd, Englewood Clis, N.J.
[2] Anselone, P.M. (1971) :
Collectively
Approximation Theory, PrenticeHall.
Compact
Operator
[3] Arrowsmith, D.K. and Place, C.M. (1994) : An Introduction to
Dynamical Systems, Cambridge University Press, Cambridge.
[4] Bachman, G. and Narici, L. (1966) : Functional Analysis, Academic
Press, New York.
[5] Banach, S. (1932) : Theories des operations lineaires, Monografje
Matematyczne, Warsaw.
[6] Bhatia, N.P. and Szeg
o, G.P. (1970) : Stability Theory of Dynamical
Systems, SpringerVerlag, New York.
[7] Birko, G.D. (1927) : Dynamical Systems (Amer. Math. Soc.
Colloquium Publications Vol. 9), New York.
[8] Bohnenblust, H.F. and Sobczyk, A. (1938) : Extension of Functionals
on Complex Linear Spaces, Bull. Amer. Math. Soc. 44, 913.
[9] Browder, F.E. (1961) : On the Spectral Theory of Elliptic Dierential
Operators I, Math. Ann., Vol. 142, 22130.
[10] Carleson, A. (1966) : On the Convergence and Growth of Partial
Sums of Fourier series, Acta Math. 116, 1357.
[11] Cea, J. (1978) : Lectures on OptimizationTheory and Algorithms,
Tata Institute of Fundamental Reseach, Narosa Publishing House,
New Delhi.
[12] Clarkson, J.A. (1936) : Uniformly Convex Spaces, Trans. Amer.
Math. Soc. 40, 396414.
[13] Coddington, E.A., Levinson, N. (1955) : Theory of Ordinary
Dierential Equations, McGrawHill Book Company, New York.
459
460
A First Course in Functional Analysis
[14] Collatz, L. (1966) :
Functional Analysis
Mathematices, Academic Press, New York.
and
Numerical
[15] Courant, R. and Hilbert, D. (1953) : Methods of Mathematical
Physics, Interscience, New York.
[16] Daubechies, I. (1988) : Orthonormal Bases of Compactly Supported
Wavelets. Commun. Pure App. Math., 41; 90996.
[17] Debnath, L. (1998) : Wavelet Transforms and their Applications,
PinsaA, 64, A, 6.
[18] Dieudo
nne, J. (1969) : Foundations of Modern Analysis, Academic
Press, New York.
[19] Gelfand, I. (1941) : Normiert Ringe, Mat. Sbornik, N.S. 9 (51), 324.
[20] Goman, C and Pedrick, G. (1974) : First Course in Functional
Analysis, PrenticeHall of India Private Ltd., New Delhi.
[21] Goldberg, S. (1966) :
Unbounded Linear Operators with
Applications, McGraw Hill Book Company, New York.
[22] Haar, L. (1910) : Zur Theorie der Orthogonalen Functionensysteme,
Math. Ann. 69, 331371.
[23] Hahn, H. (1927) : Uber
Lineare Gheichungssysteme in Linearen
Ra
umen, Journal Reine Angew Math. 157, 21429.
[24] Hilbert, D. (1912) : Grundz
uge Einerallgemeinen Theorie der
Linearen Integralgleichungen, Repr. 1953, New York.
[25] Jain, P.K., Ahuja, O.P. and Ahmed, K. (1997) : Functional Analysis,
New Age International (P) Limited, New Delhi.
[26] James, R.C. (1950) : Bases and Reexively in Banach spaces, Ann.
Math., 52, 51827.
[27] Kantorovich, L.V. (1948) : Functional Analysis and Applied
Mathematics, (Russian) Uspekhi Matem. Nauk, 3, 6, 89185.
[28] Kato, T. (1958) : Perturbation Theory for Nullity, Deciency and
Other Quantities of Linear Operators, J. Analyse Math. vol. 6, pp.
273322.
[29] Kolmogoro, A. and Fomin, S. (1954) : Elements of the Theory of
Functions and Functional Analysis, Izdatb Moscow Univ., Moscow;
transl. by L. Boron. Grayrock Press, Rochester, New York, 1957.
[30] Kreyszig, E. (1978) : Introductory Functional Analysis with
Applications, John Wiley & Sons. New York.
Bibliography
461
[31] Lahiri, B.K. (1982) : Elements of Functional Analysis, World Press,
Kolkata.
[32] Liapunov, A.M. (1966) : Stability of Motion (English translation),
Academic Press, New York.
[33] Limaye, B.V. (1996) : Functional Analysis, New Age International
Ltd., New Delhi.
[34] Lions, J.L. and Magenes, E. (1972) : Nonhomogeneous Boundary
Value Problems, vol. I, SpringerVerlag, Berlin.
[35] Lusternik, L.A. and Sobolev, V.J. (1985) : Elements of Functional
Analysis, Hindusthan Publishing Corporation, New Delhi.
[36] Mikhlin, S. (1964) : Variational Methods in Mathematical Physics,
Pergamon Press, New York.
[37] Mikhlin, S. (1965) : The Problem of the Minimum of a Quadratic
Functional, Holdenday, San Francisco.
[38] Manseld, M.J. (1963) :
Introduction to Topology,
Educational Publishing Inc., New York.
Litton
[39] Nair, M.T. (2002) : Functional Analysis, PrenticeHall of India
Private Limited, New Delhi.
[40] Natanson, I.P. (1955) : Konstruktive Funktionentheories (translated
from Russian), Akademic Verlag, Berlin.
[41] Neumann, J. Von. (1927) : Mathematische Begr
undung der
Quantenmechanik. Nachr. Ges.Wiss. Gotingen. Math. Phys. Kl 137.
[42] Ortega, J.M. and Rheinboldt, W.C. (1970) : Iterative Solution of
Nonlinear Equations in Several Variables, Academic Press, New York.
[43] Ralston, A. (1965) : A First Course in Numerical Analysis, McGrawHill Book Company, New York.
[44] Rall, L.B. (1962) : Computational Solution of Nonlinear Operator
Equations, John Wiley & Sons, New York.
[45] Reddy, J.N. (1986) : Applied Functional Analysis and Variational
Methods in Engineering, McGrawHill Book Company, New York.
[46] Resniko, H.L. and Wells, O Jr. (1998) : Wavelet Analysis, Springer,
New York.
[47] Royden, H.L. (1988) : Real Analysis, Macmillan, New York.
[48] Rudin, W. (1976) : Principles of Mathematical of Analysis, 3rd. ed.,
McGrawHill Book Company, New York.
A First Course in Functional Analysis
[49] Schauder, J. (1930) : Über lineare, vollstetige Funktionaloperationen, Studia Math. 2, 16.
[50] Schmidt, E. (1908) : Über die Auflösung linearer Gleichungen mit unendlich vielen Unbekannten, Rend. Circ. Mat. Palermo 25, 53–77.
[51] Schwartz, A.S. (1996) : Topology for Physicists, Springer-Verlag, Berlin.
[52] Schwartz, L. (1951) : Théorie des Distributions, vols. I and II, Hermann & Cie, Paris.
[53] Simmons, G.F. (1963) : Introduction to Topology and Modern Analysis, McGraw-Hill Book Company, Tokyo.
[54] Strogatz, S.H. (2007) : Nonlinear Dynamics and Chaos, Levant
Books, Kolkata.
[55] Taylor, A.E. (1958) : Introduction to Functional Analysis, John Wiley
& Sons, New York.
[56] Zeidler, E. (1995) : Applied Functional Analysis: Main Principles and their Applications, Springer-Verlag, Berlin.
Index
A
absolutely continuous function 368
absorbing set 190
acute 100
almost everywhere (a.e.) 363
approximate eigenspectrum 291
approximate eigenvalue 291
approximate solutions 311
asymptotically stable 450
B
ball
closed 25
open 25
Banach and Steinhaus theorem 155
base at a point 49
base of a topology 49
bases for a topology 48
basis
Hamel 8
Schauder 176
Bernstein polynomial 234
Bessel inequality 122
best approximation 261
bifurcation point 452
bilinear hermitian form 327
biorthogonal sequence 176
Bolzano–Weierstrass theorem 55
boundary 51
bounded below 4
bounded convergence theorem 367
bounded inverse theorem 276
bounded linear operators 150
bounded variation 195, 369
bounded
C
canonical or natural embedding 212
cardinal numbers 2
cartesian product 4
category
first 33
second 33
Cauchy sequence 28
Cauchy–Bunyakovsky–Schwartz inequality 18
characteristic function of a set E 363
characteristic function 197
characteristic value 173
Chebyshev approximation 406
closed complement 272
closed graph theorem 268
closed orthonormal system 123
closure 25, 50
compact support 424
compact metric space 54
compactness 51
comparable norm 276
complete orthonormal system 122
completeness of
C([a, b]) 29
lp 31
Rn 29
Cn 29
completeness 27
conjugate (dual)
C([a, b]) 228
continuity 51
in a metric space 52
of a linear operator mapping Ex into Ey 137
on topological spaces 52
continuous Fourier transform 433
contraction mapping principle 36
convergence in norm 61
convergence 27
strong 244
weak 240, 244
weak* 240
covering 53
D
deficiency index of A 390
definite integral 148
dense set 32
everywhere 32
nowhere 32
diffeomorphism 447
difference of two sets 3
n-dimensional manifold 447
direct sum 12, 107
discrete Fourier transform 433
distance between sets 24
distance of a point from a set 24
distribution 424
dot product 148
dual problem 403
dual space of l1 207
dual
algebraic 213
conjugate 213
topological 213
dynamical system 443
E
eigenvalue 172
eigenvector 172
eigenspectrum 291
energy product 351
equation
homogeneous 172
linear integral 317
equicontinuous 56
equivalent norms 80
essential supremum 372
essentially bounded 372
everywhere dense 47
exterior 51
F
finite basis 9
finite deficiency 387
finite intersection property 53
fixed point 449
Fourier coefficients 248
Fourier series 121
Fredholm alternative 301
Fréchet derivative 417
Fubini's theorem 369
function 5
bijective 6
domain 5
range 5
G
G-differentiable 411
general form of linear functionals in a Hilbert space 204
generalized derivative 424
generalized function 425
genus of the Laurent series 435
gradient 413
Gram–Schmidt orthonormalization process 113
graph 267
greatest lower bound (g.l.b.) 4
Green's formula for Sobolev spaces 426
Gâteaux derivative 410
H
Haar wavelet matrices 437
Hahn–Banach separation theorem 190
Hahn–Banach theorem
using sublinear functional 185
half-space 401
Heine–Borel theorem 55
Hermite polynomials 119
hessian 413
Hilbert space 91, 204
homeomorphism 52, 447
hyperplane 149, 400
hyperspace 188
Hölder's inequality 16
I
induced metric 61
inequality
inner product 92
integrable function 366
integral of
Riemann–Stieltjes 195
interior 50, 51
intersection of two sets 3
invariant subspace 345
inverse Fourier transform 433
isometric mapping 35
isometric spaces 35
isomorphism 128
K
kernel index 390
kernel
degenerate 317
L
Lagrange's interpolation polynomial 159
Laguerre polynomials 117
Lebesgue integral 364
Lebesgue measurable function 360
Lebesgue measurable set 358
Lebesgue measure 359
Lebesgue outer measure 357
Legendre polynomials 116
Lindelöf's theorem 49
linear combination of vectors 9
linear functionals 147
on s 201
on Lp([a, b]) 374
on the n-dimensional Euclidean space Rn 200
linear independence 112
linear operator
left inverse of a 165
null space of a 165
right inverse of a 165
unbounded 217
linearly dependent 9
linearly independent 9
Lipschitz condition 39
M
mappings
into 5
onto (surjective) 5
one-to-one (injective) 5
bijective 5
continuous 52
homeomorphism 52
projection 271
Cr, C0 maps 447
metric completion 35
Minkowski functional 190
Minkowski's inequality 19
moment problem of Hausdorff 231
multiresolution representation 432
N
natural embedding 212
neighbourhood of a point 25
normalized function of bounded variation 229
normally solvable 393
norm 58
O
obtuse 100
open covering 53
open mapping 272
open mapping theorem 272
operator 130
adjoint 217, 221, 323
bounded linear 150
closed linear 267, 269
coercive 396
compact linear 283
completely continuous 283
differential 143, 269
domain of an 165
forward difference 233
function 136
identity 143
integral 143
linear 130
nonnegative 334
normal 329
norm 141
positive 334
precompact 386
projection 109
resolvent 172, 348
self-adjoint 326
shift 233
spectrum of a compact 290
strictly singular 386
stronger 334
smaller 334
inverse 166
unbounded linear 382
unitary 329
zero 143
ordinary autonomous differential systems 445
orthogonal complements 107, 383
in a Banach space 383
orthogonal projection theorem 102
orthogonal set 112
orthogonal 100
orthonormal basis 124
orthonormal set 112
P
parallelogram law 95
partially ordered set 4
partition 195
periodic points 449
piecewise linear interpolations 317
plane of support 402
point
contact 50
interior 25
isolated 50
limit 25, 50
pointwise convergence 154
polarization identity 97, 99
polynomials
Hermite 117
Laguerre 119
Legendre 115
primal problem 403
product
inner 92
scalar 92
projection of finite rank 315
Q
quadratic form 327
quadrature formula 242
quotient spaces 85
R
relatively compact 54
restriction of a mapping 222
Riemann–Stieltjes integral 195
Riesz's lemma 83
Riemannian manifold 447
S
scaling equation 439
Schauder theorem 287
second G-derivative 413
self-conjugate space 210
sequence space 7
sequence
strongly convergent 284
weakly convergent 284
sequentially compact 54
set
closed 25
countable 2
denumerable 2
derived 50
diameter 24
empty (or void or null) 2
enumerable 2
finite 2
inclusion 1
open 25
power 2
resolvent 291
uncountable 2
universal 3
simple function 363
smooth function 424
Sobolev space 425
space
l2 7
lp 7
Lp [a, b] 7
n-dimensional Euclidean 92
Banach 61
compact topological 53
conjugate (or dual) 179, 206, 221
discrete metric 16
Euclidean 7
Hilbert 91
inner product 91
linear 6
metric 13
normed linear 91
null 165
quasi-metric 41
reflexive 210
separable 47, 51
sequence 15, 17
unitary 7, 93
spaces
complete 23
isometric 35
non-metrizable 23
topological 44
spectral radius 348
spectrum of self-adjoint operators 341
spectrum 173
continuous 347
discrete or point 347
pure point 347
sphere 25
square roots of nonnegative operators 337
stable points 449
stable 450
strictly convex 256
strictly normed 263
stronger norm 276
subcovering 53
sublinear functional 184
subspace 8, 100
closed 101
summable sequence 68
supporting hyperplane 149, 402
support 424
T
test function 424
the canonical embedding 212
the general form of linear functionals on lp 202
the limit of a convergent sequence 28
the space of bounded linear operators 150
the space of operators 135
theorem (ArzelaAscoli) 56
theorem (Banach and Steinhaus) 155
theorem (Eberlein) 255
theorem (Helly) 253
theorem (Milman) 258
theorem (Pythagorean) 102
theorem (Schauder) 287
theorem of density 426
theorem of trace 426
theorem
(Fubini and Tonelli) 369
bounded convergence 367
bounded inverse 276
closed graph 268
dominated convergence 367
Hahn–Banach theorem 179
Hahn–Banach (generalized) 186
monotone convergence 367
open mapping 273
plane of support 402
two-norm 277
topology 45
discrete 46
indiscrete 46
lower limit 46
upper limit 46
usual 46
totally bounded 54
translation invariance 66
truncation
of a Fourier expansion 318
U
uniform boundedness principle 154, 155
uniform convexity 255
uniform operator convergence 154
uniformly bounded 56
V
(total) variation 195
W
wavelet analysis 430
wavelet function 440
wavelet matrix 435
wavelet space 436
weak convergence
lp 249
Hilbert spaces 249
weak lower semicontinuity 416
weakly bounded 420
weakly closed 420
weakly lower semicontinuous 420
Weierstrass approximation theorem 43, 233
Weierstrass existence theorem 265
Z
Zermelo's axiom of choice 4
Zermelo's theorem 4
Zorn's lemma 4