
PEOPLE’S DEMOCRATIC REPUBLIC OF ALGERIA

Ministry of Higher Education and Scientific Research

Higher Teacher Training School of Oran

Department of Exact Sciences


Speciality: Mathematics

Dissertation Submitted in Partial Fulfillment of the Secondary School Teacher Degree in Mathematics

ENTITLED:

The Numerical Range of Operator

Presented by:
• Amina Menasra

In front of the jury composed of:

President: Dr. Heirech Mohamed
Examiner: Dr. Belabed Fatima Zohra
Supervisor: Mr. Derkoui Rafik

2020/2021
Thanks

First, we thank Allah, because without Him we would not achieve anything.

Special thanks to the supervisor Mr. Derkoui Rafik for his encouragement, his great help and his big efforts in completing this graduation memoire in the best way.

Thanks and appreciation to the jury members Dr. Herireche Mohamed and Dr. Belabed Fatima Zohra, who accepted to discuss my graduation memoire.

My thanks go out to my family, who supported me and made me who I am now; to my friends also, Maissa, Samah, Imene, Fatima, Sabrina, and all the people I met at ENS Oran.

I also thank all my teachers who believed in me; without them I would not have succeeded.
Dedication

To my dear family .....

To my dear friends .....

To my dear teachers throughout my career .....

Especially for my supervisor Dr. Derkoui Rafik .......

Table of contents

1 Hilbert space 7
1.1 Inner product space . . . . . . . . . . . . . . . . . . . . . . 7
1.2 Hilbert space . . . . . . . . . . . . . . . . . . . . . . . . . 13
1.3 Orthogonal set . . . . . . . . . . . . . . . . . . . . . . . . . 17
1.4 Orthonormal set . . . . . . . . . . . . . . . . . . . . . . . . 22
1.5 Orthonormal basis set . . . . . . . . . . . . . . . . . . . . . 25

2 Bounded linear operators 26
2.1 Linear Operator . . . . . . . . . . . . . . . . . . . . . . . . 26
2.2 Bounded linear operator . . . . . . . . . . . . . . . . . . . 28
2.3 The algebra of operators . . . . . . . . . . . . . . . . . . . 31
2.4 Sesquilinear forms . . . . . . . . . . . . . . . . . . . . . . . 34
2.5 The adjoint operator . . . . . . . . . . . . . . . . . . . . . 38
2.6 Some Special Classes of Operators . . . . . . . . . . . . . . 43
2.7 Normal, Unitary and Isometric Operators . . . . . . . . . . 46
2.8 Compact operator . . . . . . . . . . . . . . . . . . . . . . . 49

3 Spectral Theory 50
3.1 Spectral notion . . . . . . . . . . . . . . . . . . . . . . . . 50
3.2 Resolvent Equation and Spectral Radius . . . . . . . . . . 52
3.3 Spectral Mapping Theorem for Polynomials . . . . . . . . . 53
3.4 Spectrum of Various Classes of Operators . . . . . . . . . . 55

4 Pseudospectrum of a matrix 56
4.1 Pseudospectra of matrices . . . . . . . . . . . . . . . . . . 57
4.2 Singular values . . . . . . . . . . . . . . . . . . . . . . . . 59
4.3 Equivalence of pseudospectrum . . . . . . . . . . . . . . . 60
4.4 The pseudospectrum of diagonal matrices . . . . . . . . . . 63
4.5 Properties of pseudospectrum . . . . . . . . . . . . . . . . 63
4.6 The condition number, the spectral abscissa and the spectral radius . . . 66
4.7 The pseudoprojection . . . . . . . . . . . . . . . . . . . . . 68
4.8 Matrix function . . . . . . . . . . . . . . . . . . . . . . . . 69
4.9 Circulant matrix (example) . . . . . . . . . . . . . . . . . . 71

5 The numerical range of matrix 73
5.1 The numerical range of matrix . . . . . . . . . . . . . . . . 73
5.2 The numerical abscissa . . . . . . . . . . . . . . . . . . . . 76
5.3 The numerical radius . . . . . . . . . . . . . . . . . . . . . 78
5.4 The relation between spectrum and the numerical range . . 78
5.5 The relation between pseudospectrum and the numerical range of matrix . . . 79
5.6 The numerical range of self-adjoint, normaloid and spectraloid matrix . . . 81
5.7 Matrix pseudo-commuting . . . . . . . . . . . . . . . . . . 81
5.8 Generalized spectrum and numerical range of the matrix of the Lorentzian oscillator group of dimension four . . . 82
5.8.1 Oscillator group of dimension 4 . . . . . . . . . . . 83
5.8.2 Eigenvalues and pseudo-spectrum of the matrix Aa . . . 86
5.8.3 Numerical range of the matrix Aa . . . . . . . . . . 87
Introduction
The domain of functional analysis forms an important part of applied mathematics, through results such as operator equations, the spectrum of operators, and the field of values. For this reason we chose to speak about this latter in our study.

The eigenvalue was one of the most important tools in understanding and solving linear equations, and the appearance of spectral theory, with the investigation of localized vibrations of a variety of different objects, allowed many problems of mathematics and physics to be solved.

Hilbert was the first to coin the term "eigenvalue" and to call the set of eigenvalues "the spectrum". His research laid the foundations of spectral theory and functional analysis.

But spectral objects can change under small perturbations; in addition, studying the behavior of a non-normal operator through its spectrum alone was neither sufficient nor evident. That is what led Trefethen in 1990 to the concept of the "pseudospectrum", which he applied to plenty of highly interesting problems.

In 1918, Toeplitz introduced the field of values (the numerical range) of a matrix, and it was generalized with time to the numerical range of operators. This latter plays a main role in studying matrices, polynomials, norm inequalities, perturbation theory, numerical analysis...

Our work is divided into five chapters. In the first one, we recall some basic concepts and definitions of Hilbert space, the space on which the study is based.

In the second chapter, we focus on bounded linear operators and some of their special classes, sesquilinear forms, and the algebra of operators.

In the third chapter, we study the spectral notion, the resolvent equation, the spectral radius and the spectrum of various classes of operators.

The next chapter is about the pseudospectrum: we give the six definitions of the pseudospectrum and the singular values, in addition to the pseudospectrum of diagonal matrices and other properties.

At last, we come to the essential subject, "the numerical range of a matrix". We define this latter, giving examples and some properties, and we also discuss the relation between the spectrum, the pseudospectrum and the numerical range.

We end our memoire with the generalized spectrum and numerical range of the matrix of the Lorentzian oscillator group of dimension four, which is the subject of a new article published by Dr. Rafik Derkoui and Abderrahmane Smail in 2020.

Chapter 1

Hilbert space

Before defining Hilbert spaces, we need some background in linear algebra and real analysis. We begin our study with some notions that are fundamental to the field of functional analysis: vector spaces, normed spaces, and inner product spaces.

1.1 Inner product space

Definition 1.1.1 (Linear space) A linear space is a set X with an associated scalar field F (either R or C), equipped with the following operations:
1) Vector addition, which takes each pair of elements x and y of X to another element x + y of X:

X × X → X, (x, y) → x + y,

2) Scalar multiplication, which takes each scalar λ ∈ F and each element x ∈ X to another element λx of X:

F × X → X, (λ, x) → λx,

such that the following conditions are satisfied:
1) Vector addition is commutative: ∀x, y ∈ X,

x + y = y + x,

2) Vector addition is associative: ∀x, y, z ∈ X,

(x + y) + z = x + (y + z),

3) Existence of an additive identity 0 and of additive inverses: ∀x ∈ X, ∃(−x) ∈ X such that

−x + x = 0,

4) Scalar multiplication is associative: ∀u, λ ∈ F, ∀x ∈ X,

(λu)x = λ(ux),

5) Scalar multiplication distributes over scalar addition: ∀x ∈ X, ∀u, λ ∈ F,

(λ + u)x = λx + ux,

6) The scalar multiplicative identity acts trivially: ∀x ∈ X,

1.x = x,

7) Scalar multiplication distributes over vector addition: ∀x, y ∈ X, ∀λ ∈ F,

λ.(x + y) = λ.x + λ.y.

Remark 1.1.2 From now on, the study will specifically consider complex linear spaces.

Definition 1.1.3 A complex normed space (L, ∥·∥) is a linear vector space with a function ∥·∥ : L → R, called a norm, that satisfies the properties:
a. Positivity: for all x ∈ L,

∥x∥ ≥ 0,

b. Nondegeneracy:

∥x∥ = 0 if and only if x = 0,

c. Homogeneity: for all x ∈ L and λ ∈ C,

∥λx∥ = |λ| ∥x∥,

d. Triangle inequality: for all x, y ∈ L,

∥x + y∥ ≤ ∥x∥ + ∥y∥.

Definition 1.1.4 An inner product space (or pre-Hilbert space) is a linear space with a function ⟨·, ·⟩ : V × V → C, called an inner product, provided that for all x, y, z ∈ V and λ, µ ∈ C:
a. Linearity in the second argument:

⟨x, λy + µz⟩ = λ⟨x, y⟩ + µ⟨x, z⟩,

b. Hermitian symmetry:

⟨y, x⟩ = \overline{⟨x, y⟩},

c. Positivity:

⟨x, x⟩ ≥ 0,

d. Nondegeneracy:

⟨x, x⟩ = 0 if and only if x = 0.

Example 1.1.5 Let X = Fⁿ. We define ⟨·, ·⟩ : X × X → F by

⟨x, y⟩ = Σ_{i=1}^{n} x̄ᵢ yᵢ,

for x = (x₁, ..., xₙ), y = (y₁, ..., yₙ); this is an inner product.
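As a quick numerical illustration, this inner product can be checked against properties (b) and (c) of Definition 1.1.4; the sketch below uses Python/NumPy (np.vdot conjugates its first argument, which matches the convention ⟨x, y⟩ = Σ x̄ᵢyᵢ).

```python
import numpy as np

# Check the inner product <x, y> = sum_i conj(x_i) * y_i on C^3.
# np.vdot conjugates its first argument, so np.vdot(x, y) matches it exactly.
x = np.array([1 + 2j, 0.5j, -3.0])
y = np.array([2 - 1j, 1.0, 1j])

ip = lambda u, v: np.vdot(u, v)                  # <u, v>

print(ip(x, y))                                   # the value <x, y>
print(np.isclose(ip(y, x), np.conj(ip(x, y))))    # Hermitian symmetry (b)
print(ip(x, x).real >= 0)                         # positivity (c)
```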

Remark 1.1.6 ⋆ From (a) and (b), it follows that ⟨·, ·⟩ is conjugate-linear in the first argument (the form is thus sesquilinear), meaning that

⟨λx + µy, z⟩ = λ̄⟨x, z⟩ + µ̄⟨y, z⟩.

Definition 1.1.7 (Schwarz inequality) Let (V, ⟨·, ·⟩) be an inner product space. Then for all x, y ∈ V,

|⟨x; y⟩| ≤ ∥x∥ ∥y∥,

with equality if and only if x and y are linearly dependent.

proof. The result is obvious if

⟨y, x⟩ = 0.

Suppose

δ = ⟨y, x⟩ ≠ 0;

then x ≠ 0 and y ≠ 0. Set

z = δ|δ|⁻¹ y,

so that

⟨z, x⟩ = δ̄|δ|⁻¹⟨y, x⟩ = δ̄δ|δ|⁻¹ = |δ| = |⟨y, x⟩| > 0.

Let

v = x∥x∥⁻¹ and w = z∥z∥⁻¹.

Then ∥w∥ = ∥v∥ = 1 and ⟨w, v⟩ ≥ 0, since ⟨z, x⟩ ≥ 0. From

0 ≤ ∥v − w∥² = ∥v∥² + ∥w∥² − 2Re⟨v, w⟩ = 2 − 2Re⟨v, w⟩,

it follows that Re⟨v, w⟩ ≤ 1, and since ⟨w, v⟩ is real and nonnegative,

⟨w, v⟩ ≤ 1.

So, noting that ∥z∥ = ∥y∥,

|⟨y, x⟩| = ⟨z, x⟩ = ∥x∥ ∥z∥ ⟨w, v⟩ ≤ ∥x∥ ∥z∥ = ∥x∥ ∥y∥. □

Definition 1.1.8 An inner product space (V, ⟨·; ·⟩) is a normed linear space with the norm

∥x∥ = √⟨x, x⟩, ∀x ∈ V.

proof. ⋆ The positivity and nondegeneracy properties are immediately verified. For the homogeneity property: a simple consequence of Hermitian symmetry (⟨y, x⟩ = \overline{⟨x, y⟩}) and of the linearity of the inner product in its second argument (⟨x, λy⟩ = λ⟨x, y⟩, whenever x, y ∈ V and λ ∈ C) is that

∥λx∥ = √⟨λx, λx⟩ = √(λλ̄) √⟨x, x⟩ = |λ| ∥x∥.

⋆ Using the distributivity of the inner product, we see that for x, y ∈ V,

∥x + y∥² = ⟨x, x⟩ + ⟨x, y⟩ + ⟨y, x⟩ + ⟨y, y⟩,

and according to the Schwarz inequality,

∥x + y∥² ≤ ∥x∥² + ∥y∥² + 2∥x∥∥y∥ = (∥x∥ + ∥y∥)²,

so

∥x + y∥ ≤ ∥x∥ + ∥y∥.

We conclude that ∥·∥ is a norm. □

Theorem 1.1.9 (Parallelogram law) Let X be an inner product space. Then for all x, y ∈ X,

∥x + y∥² + ∥x − y∥² = 2(∥x∥² + ∥y∥²).

proof. Let x, y ∈ X. Then

∥x + y∥² + ∥x − y∥² = ⟨x + y; x + y⟩ + ⟨x − y; x − y⟩
= ⟨x; x⟩ + ⟨y; x⟩ + ⟨x; y⟩ + ⟨y; y⟩ + ⟨x; x⟩ − ⟨y; x⟩ − ⟨x; y⟩ + ⟨y; y⟩
= 2(⟨x; x⟩ + ⟨y; y⟩) = 2(∥x∥² + ∥y∥²). □

Theorem 1.1.10 (Polarization identity) Let X be an inner product space. Then for all x, y ∈ X,

⟨x; y⟩ = ¼ ( ∥x + y∥² − ∥x − y∥² − i∥x + iy∥² + i∥x − iy∥² ).

proof. Expanding out the implied inner products, one shows easily that

∥x + y∥² − ∥x − y∥² = 4Re⟨x; y⟩,

and

−i∥x + iy∥² + i∥x − iy∥² = 4i Im⟨x; y⟩. □

Theorem 1.1.11 (Continuity of the Inner Product) Let X be an inner product space with induced norm ∥·∥. Then

⟨·; ·⟩ : X × X → C

is continuous.

proof. Since C and X × X are metric spaces, it suffices to show sequential continuity. Suppose xₙ → x and yₙ → y. Then, by the Schwarz inequality,

|⟨yₙ; xₙ⟩ − ⟨y; x⟩| = |⟨yₙ; xₙ − x⟩ + ⟨yₙ − y; x⟩| ≤ ∥xₙ − x∥∥yₙ∥ + ∥x∥∥yₙ − y∥ → 0,

where we used that the convergent sequence {yₙ} is bounded. □

1.2 Hilbert space

Definition 1.2.1 A metric space (X; d) is a set X together with an


assigned metric function d : X × X → R that has the following properties;
∀x, yϵX
1)Positive:
d(x, y) ≥ 0
,
2)Non-degenerate:

d(x, y) = 0 if and only if x = y,

3)Symmetric:
d(x, y) = d(y, x),
4)Triangle inequality:

d(x, z) ≤ d(x, y) + d(y, z).

Definition 1.2.2 Let X be a metric space and let {xₙ} be a sequence of points of X.
1) We say that {xₙ} is a Cauchy sequence if for every ε > 0 there exists an N ∈ N so that

i, j ≥ N =⇒ d(xᵢ, xⱼ) < ε,

2) We say that {xₙ} converges to a point x ∈ X if

lim_{n→+∞} d(xₙ, x) = 0.

Definition 1.2.3 (Complete metric space) A metric space X is said to be complete if every Cauchy sequence in X converges to a point in X.

Definition 1.2.4 A normed vector space is called a Banach space if every Cauchy sequence converges.

Definition 1.2.5 A Hilbert space is a complete inner product space (equivalently, a Banach space with respect to the norm

∥x∥ = √⟨x, x⟩ ).

Examples 1.2.6 ⋆ Any finite-dimensional inner product space is a Hilbert space.
⋆ L²(A), for any measurable A ⊂ Rⁿ, with inner product

⟨g, f⟩ = ∫_A \overline{g(x)} f(x) dx.

⋆ l² = {(x₁, x₂, x₃, ...) : xₖ ∈ C, Σ_{k=1}^{∞} |xₖ|² < ∞}, with

⟨y, x⟩ = Σ_{k=1}^{∞} ȳₖ xₖ.

⋆ The space X = C[−1, 1] with the inner product ⟨·; ·⟩ : X × X → F defined by

⟨f; g⟩ = ∫_{−1}^{1} \overline{f(x)} g(x) dx

for f, g ∈ X is not a Hilbert space (it is not complete).

Example 1.2.7 The space X = lᵖ (p ≠ 2) is not a Hilbert space; it suffices to prove that the parallelogram law fails. Let x = (1, 0, 0, 0, ...), y = (0, 1, 0, 0, ...); then

∥x∥ = ∥y∥ = 1 and ∥x + y∥ = ∥x − y∥ = 2^{1/p},

so

∥x + y∥² + ∥x − y∥² = 2 · 2^{2/p} ≠ 2∥x∥² + 2∥y∥² = 4, unless p = 2.
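A quick numerical check of this failure, sketched in Python/NumPy for finitely supported sequences:

```python
import numpy as np

# Check that the parallelogram law ||x+y||^2 + ||x-y||^2 = 2||x||^2 + 2||y||^2
# fails in l^p for p != 2, with x = e_1 and y = e_2.
x = np.array([1.0, 0.0, 0.0, 0.0])
y = np.array([0.0, 1.0, 0.0, 0.0])

for p in [1, 2, 3]:
    norm = lambda v: np.sum(np.abs(v) ** p) ** (1.0 / p)
    lhs = norm(x + y) ** 2 + norm(x - y) ** 2   # equals 2 * 2^(2/p)
    rhs = 2 * norm(x) ** 2 + 2 * norm(y) ** 2   # equals 4
    print(p, lhs, rhs, np.isclose(lhs, rhs))    # equal only for p = 2
```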

Definition 1.2.8 (Convex set) A subset A of a vector space X is called a convex set if for all x, y ∈ A and all t ∈ [0, 1],

(1 − t)x + ty ∈ A.

Theorem 1.2.9 If A is a closed convex subset of a Hilbert space X, then A contains a unique element of smallest norm.

proof. Let

d = inf{∥x∥ : x ∈ A}.

Using the definition of d, there is a sequence {xₙ} in A such that

∥xₙ∥ → d.

We now show that {xₙ} is a Cauchy sequence, i.e. that ∥xₙ − xₘ∥ → 0 when n, m → ∞. A is convex, so for all n, m,

½(xₙ + xₘ) ∈ A, hence ∥½(xₙ + xₘ)∥ ≥ d,

that is,

∥xₙ + xₘ∥ ≥ 2d.

Then, by the parallelogram law ∥xₙ + xₘ∥² + ∥xₙ − xₘ∥² = 2∥xₙ∥² + 2∥xₘ∥², we get

∥xₙ − xₘ∥² = −∥xₙ + xₘ∥² + 2∥xₙ∥² + 2∥xₘ∥² ≤ 2∥xₙ∥² + 2∥xₘ∥² − 4d².

Now, as ∥xₙ∥ → d and ∥xₘ∥ → d when m, n → ∞, we obtain ∥xₙ − xₘ∥ → 0, so {xₙ} is a Cauchy sequence. As X is complete, there is x ∈ X such that xₙ → x; and as the sequence {xₙ} lies in A and A is closed (A = Ā), we have

x ∈ A.

Since xₙ → x, we have ∥xₙ∥ → ∥x∥, and therefore ∥x∥ = d.

Now we prove that x is unique. Let x, y ∈ A with ∥x∥ = d, ∥y∥ = d. As A is convex,

½(x + y) ∈ A, so ∥x + y∥ ≥ 2d.

Then ∥x − y∥² ≤ 0, because

∥x − y∥² = −∥x + y∥² + 2∥x∥² + 2∥y∥² ≤ −4d² + 2d² + 2d² = 0,

and as ∥x − y∥² ≥ 0, we get ∥x − y∥² = 0, hence

x = y. □

Corollary 1.2.10 Let A be a nonempty closed and convex set in a Hilbert space and let x₀ ∉ A. Then there is a unique element a ∈ A such that

∥x₀ − a∥ = d(x₀, A).

Definition 1.2.11 Two Hilbert spaces H₁, H₂ are called isomorphic if there exists a bijective linear mapping φ : H₁ → H₂ such that

⟨φ(x), φ(y)⟩ = ⟨x, y⟩ for all x, y ∈ H₁;

φ is called an isometric isomorphism.

Theorem 1.2.12 Let X, Y be Hilbert spaces over the field F and let f : X → Y be a linear map. Then the following two statements are equivalent:
1) ∥f(x)∥ = ∥x∥ for all x ∈ X.
2) ⟨f(x), f(y)⟩ = ⟨x, y⟩ for all x, y ∈ X.

proof. 1 → 2: Let x, y ∈ X and λ ∈ F. By 1),

∥f(x) + λf(y)∥ = ∥f(x + λy)∥ = ∥x + λy∥.

Expanding both sides,

∥x + λy∥² = ∥x∥² + 2Re(λ⟨x, y⟩) + |λ|²∥y∥²,

and

∥f(x) + λf(y)∥² = ∥f(x)∥² + 2Re(λ⟨f(x), f(y)⟩) + |λ|²∥f(y)∥².

Since ∥f(x) + λf(y)∥² = ∥x + λy∥², ∥f(x)∥ = ∥x∥ and ∥f(y)∥ = ∥y∥, we get

Re(λ⟨f(x), f(y)⟩) = Re(λ⟨x, y⟩) for every λ.

If F = R we take λ = 1; if F = C we take λ = 1 and λ = i. We get that ⟨x, y⟩ and ⟨f(x), f(y)⟩ have the same real part and the same imaginary part, so

⟨f(x), f(y)⟩ = ⟨x, y⟩.

2 → 1: Let x ∈ X. Using 2),

⟨f(x), f(x)⟩ = ⟨x, x⟩ ⟹ ∥f(x)∥² = ∥x∥² ⟹ ∥f(x)∥ = ∥x∥ for all x ∈ X. □

1.3 Orthogonal set


Definition 1.3.1 A subset M of a Hilbert space H is a subspace if it is closed under the operation of forming linear combinations: for all x, y ∈ M and all scalars C₁, C₂, the element C₁x + C₂y belongs to M. The subspace M is said to be closed if it contains all its limit points; this means that every sequence of elements of M that is Cauchy for the H-norm converges to an element of M.

Example 1.3.2 Every finite-dimensional subspace of a Hilbert space H is closed.

Definition 1.3.3 Let H be a Hilbert space whose inner product we denote by ⟨·, ·⟩; the inner product structure allows us to introduce the concept of orthogonality. Let x, y be vectors of H. If ⟨y; x⟩ = 0 we say that x and y are orthogonal, and write x ⊥ y. We say that two subsets A and B are orthogonal, written A ⊥ B, if x ⊥ y for every x ∈ A and y ∈ B. The orthogonal complement A⊥ of a subset A is the set of vectors orthogonal to A:

A⊥ = {x ∈ H : x ⊥ y for all y ∈ A}.

Notation 1.3.4 ∗ x ⊥ y ⟺ y ⊥ x.
∗ x ⊥ x ⟺ x = 0.
∗ If x ⊥ y, then x ⊥ λy.

proof. If a vector x is orthogonal to the vectors x₁, x₂, ..., xₙ of the Hilbert space, then it is orthogonal to any linear combination of them. Indeed, x ⊥ xₖ for all k = 1, 2, ..., n means

⟨x, xₖ⟩ = 0,

so for

z = Σ_{i=1}^{n} λᵢxᵢ, with λᵢ ∈ F for all i = 1, ..., n,

we get ⟨x, z⟩ = Σ_{i=1}^{n} λᵢ⟨x, xᵢ⟩ = 0. □

Theorem 1.3.5 (Pythagorean theorem) Let (x₁, ..., xₙ) be an orthogonal family in H. Then for i ≠ k,

∥xᵢ + xₖ∥² = ∥xᵢ∥² + ∥xₖ∥².

proof. If x ⊥ y, then

∥x + y∥² = ∥x∥² + ∥y∥² + 2Re⟨x; y⟩ = ∥x∥² + ∥y∥². □

Theorem 1.3.6 Let {xₙ} be an orthogonal sequence in H. Then Σ_{k=0}^{∞} xₖ converges if and only if Σ_{k=0}^{∞} ∥xₖ∥² converges.

proof. Note that the convergence of a series Σ_{k=0}^{∞} xₖ of elements xₖ of a Hilbert space H is defined to be the limit of the partial sums sₙ = Σ_{k=0}^{n} xₖ. In particular, the Cauchy criterion applies, since H is complete: the series converges if and only if for every ε > 0 there exists n₀ ∈ N such that m > n ≥ n₀ imply

∥Σ_{k=n+1}^{m} xₖ∥ ≤ ε.

By the Pythagorean theorem,

∥Σ_{k=n+1}^{m} xₖ∥² = Σ_{k=n+1}^{m} ∥xₖ∥².

Hence Σ_{k=0}^{∞} xₖ converges if and only if the partial sums of Σ_{k=0}^{∞} ∥xₖ∥² satisfy the Cauchy criterion, i.e. if and only if Σ_{k=0}^{∞} ∥xₖ∥² converges. □

Notation 1.3.7 If M, N are subspaces of a Hilbert space with M ⊥ N, then

M ∩ N = {0}.

Theorem 1.3.8 Let X be a Hilbert space. Then:
1) {0}⊥ = X.
2) X⊥ = {0}.

Theorem 1.3.9 The orthogonal complement of a subset of a Hilbert space is a closed linear subspace.

proof. Let H be a Hilbert space and A a subset of H. If y, z ∈ A⊥ and λ, µ ∈ C, then the linearity of the inner product implies that

⟨x; λy + µz⟩ = λ⟨x; y⟩ + µ⟨x; z⟩ = 0

for all x ∈ A; therefore λy + µz ∈ A⊥, and A⊥ is a linear subspace.

To show that A⊥ is closed, we need to show that if (yₙ) is a convergent sequence in A⊥, then its limit y also belongs to A⊥. Let x ∈ A; the inner product is continuous, therefore

⟨x; y⟩ = ⟨x; lim_{n→∞} yₙ⟩ = lim_{n→∞} ⟨x; yₙ⟩ = 0,

hence y ∈ A⊥, so the subspace A⊥ is closed. □

Theorem 1.3.10 Let M be a closed linear subspace of a Hilbert space H. Then:

A. For each x ∈ H there is a unique closest point y ∈ M, such that

∥x − y∥ = min{∥x − z∥ : z ∈ M}.

B. The point y ∈ M closest to x ∈ H is the unique element of M with the property that

(x − y) ⊥ M.

proof. Let d be the distance from x to M:

d = inf{∥x − z∥ : z ∈ M}.

First, we prove there is a closest point y ∈ M at which this infimum is attained, meaning that

∥x − y∥ = d.

From the definition of d, there is a sequence of elements yₙ ∈ M such that

lim_{n→∞} ∥x − yₙ∥ = d;

thus for all ε > 0 there is an N such that

∥x − yₙ∥ ≤ d + ε when n ≥ N.

We show that the sequence {yₙ} is Cauchy. From the parallelogram law we have

∥yₘ − yₙ∥² + ∥2x − yₘ − yₙ∥² = 2∥x − yₙ∥² + 2∥x − yₘ∥².

Since (yₘ + yₙ)/2 ∈ M, we have

∥x − (yₘ + yₙ)/2∥ ≥ d, i.e. ∥2x − yₘ − yₙ∥² ≥ 4d².

Combining these relations, we get for all m, n ≥ N:

∥yₘ − yₙ∥² = −∥2x − yₘ − yₙ∥² + 2∥x − yₙ∥² + 2∥x − yₘ∥² ≤ 4(d + ε)² − 4d² = 4ε(2d + ε).

Therefore {yₙ} is Cauchy. Since a Hilbert space is complete, there is y such that yₙ → y, and since M is closed, we have y ∈ M. The norm is continuous, so

∥x − y∥ = lim_{n→∞} ∥x − yₙ∥ = d.

Second, we prove the uniqueness of a vector y ∈ M that minimizes the distance: suppose ∥x − y∥ = d and ∥x − y′∥ = d. The parallelogram law implies that

∥y − y′∥² = 2∥x − y∥² + 2∥x − y′∥² − ∥2x − y − y′∥².

Since (y + y′)/2 ∈ M, ∥x − (y + y′)/2∥ ≥ d, so

∥y − y′∥² ≤ 4d² − 4∥x − (y + y′)/2∥² ≤ 0;

therefore

∥y − y′∥ = 0 and y = y′.

Third, we show that the unique y ∈ M found above satisfies the condition that the vector x − y is orthogonal to M. Since y minimizes the distance to x, we have for every λ ∈ C and z ∈ M that ∥x − y − λz∥² ≥ ∥x − y∥², which expands to

2Re(λ⟨x − y, z⟩) ≤ |λ|²∥z∥².

Write

⟨x − y, z⟩ = |⟨x − y, z⟩| e^{iθ};

choosing λ = εe^{−iθ} with ε > 0 and dividing by ε, we get

2|⟨x − y, z⟩| ≤ ε∥z∥².

Taking the limit as ε → 0, we find that

⟨x − y, z⟩ = 0 for every z ∈ M,

so

(x − y) ⊥ M.

Finally, we show that y is the only element of M such that (x − y) ⊥ M. Suppose y′ is another such element of M. Then y − y′ ∈ M, and for any z ∈ M we have

⟨z, y − y′⟩ = ⟨z, x − y′⟩ − ⟨z, x − y⟩ = 0.

In particular we may take

z = y − y′;

then ∥y − y′∥² = 0, hence

y = y′. □

Definition 1.3.11 If M and N are closed subspaces of a Hilbert space, we define the orthogonal direct sum (or simply sum) M ⊕ N by:

M ⊕ N = {y + z : y ∈ M, z ∈ N}.

We may also define the orthogonal direct sum of two Hilbert spaces which are not subspaces of the same space.

Remark 1.3.12 If M is a closed subspace, then any x ∈ H may be uniquely represented as x = y + z, where y ∈ M is the best approximation to x and z ⊥ M; we then have the following corollary.

Corollary 1.3.13 If M is a closed subspace of a Hilbert space H, then

M ⊕ M⊥ = H.

Thus every closed subspace M of a Hilbert space has a closed complementary subspace M⊥.

1.4 Orthonormal set

Definition 1.4.1 (orthonormal set) Let A be a subset of a pre-Hilbert space. A is an orthonormal set if it is orthogonal and ∥x∥ = 1 for all x ∈ A; that is, for all x, y ∈ A,

⟨x; y⟩ = 0 if x ≠ y, and ⟨x; y⟩ = 1 if x = y.

For xₙ ∈ H, {xₙ} is called an orthogonal sequence if xₙ ⊥ xₘ for all m, n ∈ N with m ≠ n, and it is called an orthonormal sequence if moreover

⟨xₙ; xₘ⟩ = 0 if n ≠ m, and ⟨xₙ; xₘ⟩ = 1 if n = m.

Proposition 1.4.2 Let A be an orthonormal set in a Hilbert space X. Then A is linearly independent.

proof. Let {x₁, x₂, ..., xₙ} be a finite subset of A and suppose

Σ_{i=1}^{n} λᵢxᵢ = 0;

then

∥Σ_{i=1}^{n} λᵢxᵢ∥² = 0.

As xᵢ ⊥ xⱼ for i ≠ j, this gives Σ_{i=1}^{n} |λᵢ|² ∥xᵢ∥² = 0, and since xᵢ ≠ 0, ∥xᵢ∥ > 0 for all i, we conclude λᵢ = 0 for all i. □

Theorem 1.4.3 Let {x₁, x₂, ..., xₙ} be orthonormal vectors in a Hilbert space X. Then for all x ∈ X:
1) ∥x − Σ_{i=1}^{n} ⟨x; xᵢ⟩xᵢ∥² = ∥x∥² − Σ_{i=1}^{n} |⟨x; xᵢ⟩|².
2) ∥x∥² ≥ Σ_{i=1}^{n} |⟨x; xᵢ⟩|² (Bessel's inequality).
3) (x − Σ_{i=1}^{n} ⟨x; xᵢ⟩xᵢ) ⊥ xⱼ for each j ∈ {1, ..., n}.

proof. Set λᵢ = ⟨x, xᵢ⟩.
1.

∥x − Σ_{i=1}^{n} λᵢxᵢ∥² = ⟨x − Σ_{i=1}^{n} λᵢxᵢ, x − Σ_{i=1}^{n} λᵢxᵢ⟩
= ∥x∥² + ∥Σ_{i=1}^{n} λᵢxᵢ∥² − 2Re Σ_{i=1}^{n} λ̄ᵢ⟨x, xᵢ⟩
= ∥x∥² + Σ_{i=1}^{n} |λᵢ|² − 2Σ_{i=1}^{n} |λᵢ|²
= ∥x∥² − Σ_{i=1}^{n} |λᵢ|².

2. Since the left-hand side of 1) is nonnegative,

∥x − Σ_{i=1}^{n} ⟨x; xᵢ⟩xᵢ∥² ≥ 0 ⇒ ∥x∥² ≥ Σ_{i=1}^{n} |⟨x; xᵢ⟩|².

3.

⟨x − Σ_{i=1}^{n} ⟨x; xᵢ⟩xᵢ, xⱼ⟩ = ⟨x, xⱼ⟩ − Σ_{i=1}^{n} ⟨x; xᵢ⟩⟨xᵢ, xⱼ⟩.

As ⟨xᵢ, xⱼ⟩ = 0 for i ≠ j and ⟨xᵢ, xⱼ⟩ = 1 for i = j, this equals

⟨x, xⱼ⟩ − ⟨x, xⱼ⟩ = 0,

so (x − Σ_{i=1}^{n} ⟨x; xᵢ⟩xᵢ) ⊥ xⱼ. □

Corollary 1.4.4 Let {xₙ} be an orthonormal set in a Hilbert space X. Then for all x ∈ X,

∥x∥² ≥ Σ_{i=1}^{∞} |⟨x; xᵢ⟩|².
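Bessel's inequality is easy to illustrate numerically; the Python/NumPy sketch below builds an orthonormal family in C⁵ from a QR factorisation (a convenient construction chosen for this example) and checks the inequality.

```python
import numpy as np

# Bessel's inequality in C^5: for an orthonormal family {e_1, e_2, e_3},
# ||x||^2 >= sum_i |<e_i, x>|^2.
rng = np.random.default_rng(0)
# QR factorisation of a random complex matrix gives orthonormal columns.
Q, _ = np.linalg.qr(rng.standard_normal((5, 3)) + 1j * rng.standard_normal((5, 3)))
x = rng.standard_normal(5) + 1j * rng.standard_normal(5)

coeffs = Q.conj().T @ x     # the Fourier coefficients <e_i, x>
print(np.linalg.norm(x) ** 2 >= np.sum(np.abs(coeffs) ** 2))   # True
```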

Theorem 1.4.5 If {xₙ} is an orthonormal sequence in a Hilbert space X and {λₙ} is a sequence of scalars such that Σ_{i=1}^{∞} |λᵢ|² < ∞, define

yₙ = Σ_{i=1}^{n} λᵢxᵢ.

Then {yₙ} is a Cauchy sequence, and it converges.

proof. yₙ = Σ_{i=1}^{n} λᵢxᵢ, yₘ = Σ_{i=1}^{m} λᵢxᵢ. If m > n, write m = n + k with k ∈ Z⁺; then

yₘ − yₙ = Σ_{i=n+1}^{n+k} λᵢxᵢ,

so that

∥yₘ − yₙ∥² = ∥Σ_{i=n+1}^{n+k} λᵢxᵢ∥² = Σ_{i=n+1}^{n+k} |λᵢ|² ∥xᵢ∥² = Σ_{i=n+1}^{n+k} |λᵢ|² → 0 as n → ∞,

by the convergence of Σ_{i=1}^{∞} |λᵢ|². As X is complete and {yₙ} is a Cauchy sequence, {yₙ} converges. □

Definition 1.4.6 Let {xₙ} be a sequence in a pre-Hilbert space and suppose yₙ = Σ_{i=1}^{n} xᵢ converges to x; we then write

x = Σ_{i=1}^{∞} xᵢ.

Theorem 1.4.7 Let {xₙ} be an orthonormal sequence in a pre-Hilbert space and let x = Σ_{i=1}^{∞} λᵢxᵢ, y = Σ_{i=1}^{∞} µᵢxᵢ. Then:
1) ⟨x; y⟩ = Σ_{i=1}^{∞} λᵢµ̄ᵢ.
2) ⟨x; xₖ⟩ = λₖ.
3) ∥x∥² = Σ_{i=1}^{∞} |λᵢ|² = Σ_{i=1}^{∞} |⟨x, xᵢ⟩|².

proof. 1) Let sₙ = Σ_{i=1}^{n} λᵢxᵢ and tₙ = Σ_{i=1}^{n} µᵢxᵢ, so that sₙ → x and tₙ → y. By the continuity of the inner product, ⟨sₙ, tₙ⟩ → ⟨x, y⟩, and by orthonormality

⟨sₙ, tₙ⟩ = ⟨Σ_{i=1}^{n} λᵢxᵢ, Σ_{i=1}^{n} µᵢxᵢ⟩ = Σ_{i=1}^{n} λᵢµ̄ᵢ.

2) Taking µₖ = 1 and µⱼ = 0 for j ≠ k in 1), we get

⟨x; xₖ⟩ = λₖ.

3) For x ∈ X,

∥x∥² = ⟨x; x⟩ = Σ_{i=1}^{∞} λᵢλ̄ᵢ = Σ_{i=1}^{∞} |λᵢ|² = Σ_{i=1}^{∞} |⟨x, xᵢ⟩|². □

1.5 Orthonormal basis set

Definition 1.5.1 (total set) Let A be a subset of a pre-Hilbert space. A is called a total set if A⊥ = {0}. Specifically, a sequence {xₙ} is called a total sequence if "x ⊥ xₙ for all n" implies x = 0.

Example 1.5.2 The set A = {e₁, e₂, ..., eₙ, ...}, where eₙ = (0, ..., 0, 1, 0, ...) has a 1 in the n-th place and 0 elsewhere, is a total set in l².

Definition 1.5.3 Let A be a subset of a Hilbert space X. We call A an orthonormal basis of X if A is orthonormal and total.

Theorem 1.5.4 Let X be a Hilbert space. Every orthonormal set in X is contained in a maximal orthonormal set.

Corollary 1.5.5 Every Hilbert space has a maximal orthonormal set.

Definition 1.5.6 A part D of H is called dense in H if:

∀h ∈ H, ∀ε > 0, ∃f ∈ D such that ∥f − h∥ ≤ ε.

Definition 1.5.7 A Hilbert space is separable if it has a countable dense subset; that is, if there exists a sequence {fₖ}_{k=1}^{∞} of elements of H such that

∀h ∈ H, ∀ε > 0, ∃fₖ such that ∥h − fₖ∥ < ε.

Definition 1.5.8 A Hilbert space X is separable if it has a countable total orthonormal system.

Chapter 2

Bounded linear operators

2.1 Linear Operator

Definition 2.1.1 Let X and Y be normed spaces and let M be a subset of X. If each element x ∈ M can be associated with a uniquely determined element y ∈ Y, then we say that we have an operator (mapping) from M to Y, and we write:

T : M → Y; x → T(x),

*We call M, the set where T is defined, the domain D(T) of T.
*If D(T) = X, in this case we write

T : X → Y, x → T(x),

*We call

R(T) = {y ∈ Y : ∃x ∈ D(T) : y = T(x)}

the set of values (range) of T.
*And we denote

N(T) = {x ∈ D(T) : T(x) = θ}

the null space of the operator.

Definition 2.1.2 We say that

T : D(T) → Y

is:
1) Linear if

T(λ₁x₁ + λ₂x₂) = λ₁T(x₁) + λ₂T(x₂), ∀x₁, x₂ ∈ D(T), ∀λ₁, λ₂ ∈ K;

2) Bounded if there is a constant C > 0 such that:

∥Tx∥_Y ≤ C∥x∥_X. (*)

If no such constant exists, we say that the operator is unbounded.
3) Continuous at the point x₀ ∈ D(T) if, for every sequence {xₙ}_{n=1}^{∞} from D(T) such that xₙ → x₀ (convergence in X), we have Txₙ → Tx₀ (convergence in Y).
4) Continuous if it is continuous at each x ∈ D(T).
5) If the linear operator T gives a one-to-one map,

(x₁ ≠ x₂) ⟹ (Tx₁ ≠ Tx₂), or equivalently, (Tx₁ = Tx₂) ⟹ (x₁ = x₂),

of D(T) onto ran(T), then the inverse map T⁻¹ gives a linear operator on ran(T) onto D(T):

T⁻¹Tx = x for x ∈ D(T) and TT⁻¹y = y for y ∈ ran(T).

T⁻¹ is called the inverse of T (or the inverse operator).
The following proposition is an easy consequence of the linearity of T.

Proposition 2.1.3 A linear operator T admits an inverse T⁻¹ if, and only if, Tx = 0 implies x = 0.

proof. Suppose Tx = 0 implies x = 0. Let Tx₁ = Tx₂; since T is linear,

T(x₂ − x₁) = Tx₂ − Tx₁ = 0,

so that x₁ = x₂ by hypothesis, and T⁻¹ exists.
Conversely, if T⁻¹ exists, then Tx₂ = Tx₁ implies x₁ = x₂. Let Tx = 0; since T is linear, T0 = 0 = Tx, so that x = 0. □

Definition 2.1.4 Let T₁ and T₂ be linear operators with domains D(T₁) and D(T₂) both contained in a linear space X, and ranges R(T₁) and R(T₂) both contained in a linear space Y. Then T₁ = T₂ if, and only if, D(T₁) = D(T₂) and T₁x = T₂x for all x ∈ D(T₁) = D(T₂). If D(T₁) ⊆ D(T₂) and T₁x = T₂x for all x ∈ D(T₁), then T₂ is called an extension of T₁ and T₁ is a restriction of T₂. We shall write T₁ ⊆ T₂.

Proposition 2.1.5 Let T : X → Y and S : Y → Z be bijective linear


operators, where X, Y, Z are linear spaces over the same scalar field F . Then,
the inverse (ST )−1 : Z → X of the product (composition) of S and T exists
and satisfies
(ST )−1 = T −1 S −1 .

2.2 Bounded linear operator


Definition 2.2.1 Let V, W be normed vector spaces. A linear transformation (operator)

T : V → W

is bounded if there is a constant C such that

∥Tx∥_W ≤ C∥x∥_V (*)

for all x ∈ V.

Remark 2.2.2 Using the linearity of T and the homogeneity of the norm in W, we see that

∥T(x/∥x∥_V)∥_W = ∥T(x)∥_W / ∥x∥_V.

Hence T is bounded, satisfying (*), if and only if

sup_{∥x∥_V = 1} ∥T(x)∥_W ≤ C.

Theorem 2.2.3 Let V, W be normed vector spaces and let

T : V → W

be a linear transformation. The following statements are equivalent:
1) T is a bounded linear transformation.
2) T is continuous everywhere in V.
3) T is continuous at 0 in V.

proof. 1 → 2: Let C be as in the definition of a bounded linear transformation. By linearity of T we have

∥Tv − Tu∥_W = ∥T(v − u)∥_W ≤ C∥v − u∥_V,

which implies 2).
2 → 3 is trivial.
3 → 1: If T is continuous at 0, there exists δ > 0 such that for all v ∈ V with ∥v∥ ≤ δ we have ∥Tv∥ ≤ 1. Now let x ∈ V, x ≠ 0. Then

∥δx/(2∥x∥_V)∥_V = δ/2 ≤ δ,

hence

∥T(δx/(2∥x∥_V))∥_W ≤ 1.

But by the linearity of T and the homogeneity of the norm we get:

1 ≥ ∥T(δx/(2∥x∥_V))∥_W = (δ/(2∥x∥_V)) ∥Tx∥_W,

and therefore ∥Tx∥_W ≤ C∥x∥_V with C = 2/δ. □

Notation 2.2.4 If T : V → W is linear, one often writes Tx for T(x).

Definition 2.2.5 We denote by L(V; W) the set of all bounded linear transformations T : V → W; L(V; W) forms a vector space: S + T is the transformation with

(S + T)(x) = S(x) + T(x),

and cT is the operator x → cT(x). On L(V; W) we define the operator norm (depending on the norms on V and W) by

∥T∥_{L(V;W)} = ∥T∥_op = sup_{v≠0} ∥Tv∥_W / ∥v∥_V.

Lemma 2.2.6 Let V and W be normed spaces. If V is finite dimensional, then all linear transformations from V to W are bounded.

proof. Let {v₁, ..., vₙ} be a basis of V. Then for

v = Σ_{j=1}^{n} aⱼvⱼ

we have

∥Tv∥_W = ∥Σ_{j=1}^{n} aⱼT(vⱼ)∥_W ≤ Σ_{j=1}^{n} |aⱼ| ∥Tvⱼ∥_W ≤ (Σ_{j=1}^{n} ∥Tvⱼ∥_W) max_{k=1,...,n} |aₖ|.

The expression max_{k=1,...,n} |aₖ| defines a norm on V. Since all norms on V are equivalent, there is a constant C₁ such that

max_{j=1,...,n} |aⱼ| ≤ C₁ ∥Σ_{j=1}^{n} aⱼvⱼ∥_V

for all choices of a₁, ..., aₙ. Thus we get

∥Tv∥_W ≤ C∥v∥_V

for all v ∈ V, where the constant C is given by

C = C₁ Σ_{j=1}^{n} ∥Tvⱼ∥_W. □

2.3 The algebra of operators

Definition 2.3.1 An algebra A over a field F is a vector space over F such that to each ordered pair of elements x, y ∈ A a unique product xy ∈ A is defined, with the properties:
1) (xy)z = x(yz);
2) x(y + z) = xy + xz;
3) (x + y)z = xz + yz;
4) α(xy) = (αx)y = x(αy);
for all x, y, z ∈ A and α ∈ F. Depending on whether F is R or C, A is called a real or complex algebra. A is said to be commutative if the multiplication is commutative, that is, for all x, y ∈ A,

xy = yx.

A is called an algebra with identity if it contains an element e ≠ 0 such that for all x ∈ A we have xe = ex = x; the element e is called an identity. If A has an identity, it is unique. It may be noted that F and B(H) are algebras with identity.

Definition 2.3.2 A normed algebra is a normed space which is an algebra


such that for all x, y ∈ A

∥xy∥ ≤ ∥x∥ ∥y∥ ,


and if A has an identity e,

∥e∥ = 1.
A Banach algebra is a normed algebra which is complete considered as a
normed space.

Theorem 2.3.3 (B(H), ∥·∥), where ∥T∥ = sup{∥Tx∥ : ∥x∥ ≤ 1} for T ∈ B(H), is a Banach algebra with identity, provided that H ≠ {0}.

proof. Since

∥(ST)(x)∥ = ∥S(Tx)∥ ≤ ∥S∥∥Tx∥ ≤ ∥S∥∥T∥∥x∥, S, T ∈ B(H),

it follows that

∥ST∥ ≤ ∥S∥∥T∥.

B(H) is a Banach space. The operator I is the identity and satisfies ∥I∥ = 1 when H ≠ {0}. □

Definition 2.3.4 A sequence {Tn }n≥1 in B(H) converges to T ∈ B(H) in


the uniform operator norm if limn ∥Tn − T ∥ = 0.

Remark 2.3.5 There are two further types of convergence: strong and weak convergence.

Definition 2.3.6 A sequence {Tₙ}_{n≥1} in B(H) converges strongly to T ∈ B(H) if, for each x ∈ H, limₙ ∥Tₙx − Tx∥ = 0. A sequence {Tₙ}_{n≥1} in B(H) converges weakly to T ∈ B(H) if, for each x, y ∈ H, limₙ (Tₙx, y) = (Tx, y).

Definition 2.3.7 Let T ∈ B(H).T is said to be invertible in B(H) if it has


a set theoretic inverse T −1 and T −1 ∈ B(H).

Remark 2.3.8 The following fundamental proposition will be used to show


that the collection of invertible elements in B(H) is an open set and inversion
is continuous in the uniform operator norm.

Proposition 2.3.9 If T ∈ B(H) and ∥I − T∥ < 1, then T is invertible and

T⁻¹ = Σ_{k=0}^{∞} (I − T)ᵏ,

where convergence takes place in the uniform operator norm. Moreover,

∥T⁻¹∥ ≤ 1 / (1 − ∥I − T∥).

proof. Set η = ∥I − T∥ < 1. Then for n > m we have

∥Σ_{k=0}^{n} (I − T)ᵏ − Σ_{k=0}^{m} (I − T)ᵏ∥ = ∥Σ_{k=m+1}^{n} (I − T)ᵏ∥
≤ Σ_{k=m+1}^{n} ∥(I − T)ᵏ∥
≤ Σ_{k=m+1}^{n} ηᵏ ≤ η^{m+1} / (1 − η).

The sequence of partial sums {Σ_{k=0}^{n} (I − T)ᵏ}_{n≥1} is Cauchy. If S = Σ_{k=0}^{∞} (I − T)ᵏ, then

TS = [I − (I − T)] (Σ_{k=0}^{∞} (I − T)ᵏ)
= limₙ [I − (I − T)] (Σ_{k=0}^{n} (I − T)ᵏ)
= limₙ [I − (I − T)^{n+1}] = I,

since limₙ ∥(I − T)ⁿ∥ = 0. Similarly ST = I, so that T is invertible with T⁻¹ = S. Moreover,

∥S∥ = limₙ ∥Σ_{k=0}^{n} (I − T)ᵏ∥ ≤ limₙ Σ_{k=0}^{n} ∥(I − T)ᵏ∥ ≤ 1 / (1 − ∥I − T∥). □
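The Neumann series above is easy to test numerically; the following Python/NumPy sketch (an illustration only, with a randomly generated matrix chosen so that ∥I − T∥ < 1) sums the series and compares it with the inverse.

```python
import numpy as np

# Neumann series T^{-1} = sum_k (I - T)^k, valid when ||I - T|| < 1
# (here in the spectral norm).
rng = np.random.default_rng(1)
n = 4
T = np.eye(n) + 0.3 * rng.standard_normal((n, n)) / n   # keeps ||I - T|| small

assert np.linalg.norm(np.eye(n) - T, 2) < 1

S = np.zeros((n, n))
term = np.eye(n)
for _ in range(200):                 # partial sums of sum_k (I - T)^k
    S += term
    term = term @ (np.eye(n) - T)

print(np.allclose(S, np.linalg.inv(T)))   # True: the series converges to T^{-1}
```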

Theorem 2.3.10 An operator which is bounded below (i.e., such that ∥Tx∥ ≥ α∥x∥ for some α > 0 and all x ∈ H) is clearly injective.
Theorem 2.3.11 An operator T ∈ B(H) is invertible if, and only if, it is
bounded below and has dense range.

proof. If T is invertible, then the range of T is H and is therefore dense. Moreover,

∥Tx∥ ≥ (1/∥T⁻¹∥) ∥T⁻¹Tx∥ = (1/∥T⁻¹∥) ∥x∥, x ∈ H,

and therefore T is bounded below. Conversely, if T is bounded below, there exists an α > 0 such that ∥Tx∥ ≥ α∥x∥ for all x ∈ H. Hence, if (Txₙ)_{n≥1} is a Cauchy sequence in H, then the inequality

∥xₙ − xₘ∥ ≤ (1/α) ∥Txₙ − Txₘ∥

implies (xₙ)_{n≥1} is a Cauchy sequence in H. Let x = limₙ xₙ. Then x ∈ H and Tx = limₙ Txₙ; hence ran(T) is closed. Since ran(T) is dense in H, it follows that ran(T) = H. As T is bounded below, it is injective, so T⁻¹ is well defined on ran(T) = H. Moreover, if y = Tx, then

∥T⁻¹y∥ = ∥x∥ ≤ (1/α) ∥Tx∥ = (1/α) ∥y∥,

so T⁻¹ is bounded. □

2.4 Sesquilinear forms


In this section, a new kind of functional, a sesquilinear functional or sesquilinear form, will be introduced. On the pattern of linear functionals, the notion of bounded sesquilinear functionals is studied, and a characterisation of such functionals is provided.

Definition 2.4.1 Let X be a vector space over C. A sesquilinear form on X is a mapping B from X × X into the complex plane C with the following properties:
1)

B(x₁ + x₂, y) = B(x₁, y) + B(x₂, y).

2)

B(x, y₁ + y₂) = B(x, y₁) + B(x, y₂).

3)

B(αx, y) = αB(x, y).

4)

B(x, βy) = β̄B(x, y).

for all x, x₁, x₂, y, y₁, y₂ in X and all scalars α, β in C. Thus, B is linear in the first argument and conjugate-linear in the second argument. If X is a real vector space, then

B(x, βy) = βB(x, y),

and B is called bilinear, since it is linear in each of the two arguments.

Definition 2.4.2 A Hermitian form B on a complex vector space X is a mapping from X × X into the complex plane C satisfying, in addition, the property:

B(x, y) = \overline{B(y, x)}.

Theorem 2.4.3 Let B be a nonnegative sesquilinear form on the complex vector space X. Then

|B(x, y)|² ≤ B(x, x)B(y, y) for all x, y ∈ X.

proof. If B(x, y) = 0, the inequality is, of course, true. Suppose B(x, y) ≠ 0. Then for arbitrary complex numbers α, β we have, since B is nonnegative,

0 ≤ B(αx + βy, αx + βy) = αᾱB(x, x) + αβ̄B(x, y) + ᾱβB(y, x) + ββ̄B(y, y).

Now let α = t be real and set β = B(x, y)/|B(x, y)|. Using B(y, x) = \overline{B(x, y)} (which holds for nonnegative forms), we get

β̄B(x, y) = |B(x, y)|, βB(y, x) = |B(x, y)| and ββ̄ = 1.

Hence,

0 ≤ t²B(x, x) + 2t|B(x, y)| + B(y, y)

for an arbitrary real number t. Thus, the discriminant

4|B(x, y)|² − 4B(x, x)B(y, y) ≤ 0,

which completes the proof. □

Definition 2.4.4 Let H be a Hilbert space. The sesquilinear form B is said to be bounded if there exists some positive constant M such that

|B(x, y)| ≤ M∥x∥∥y∥ for all x, y ∈ H.

The norm of B is defined by:

∥B∥ = sup_{∥x∥=∥y∥=1} |B(x, y)| = sup_{x≠0, y≠0} |B(x, y)| / (∥x∥∥y∥).

Theorem 2.4.5 Let H be a Hilbert space and B(·, ·) : H × H → C be a bounded sesquilinear form. Then B has a representation

B(x, y) = (Sx, y),

where S : H → H is a bounded linear operator. S is uniquely determined by B and has norm

∥S∥ = ∥B∥.

proof. For fixed x, the expression \overline{B(x, y)} defines a linear functional in y whose domain is H. Then, the Theorem of F. Riesz yields an element z ∈ H such that

\overline{B(x, y)} = (y, z),

hence,

B(x, y) = (z, y).

Here, z is unique but, of course, depends on x ∈ H. Define the mapping S : H → H by Sx = z, x ∈ H. Then

B(x, y) = (Sx, y).

S is linear: since

B(α₁x₁ + α₂x₂, y) = α₁B(x₁, y) + α₂B(x₂, y),

we get

(S(α₁x₁ + α₂x₂) − α₁Sx₁ − α₂Sx₂, y) = 0, y ∈ H.

Since y is arbitrary,

S(α₁x₁ + α₂x₂) = α₁Sx₁ + α₂Sx₂,

so that S is a linear operator. The domain of the operator S is the whole of H. Furthermore, since

|(Sx, y)| ≤ ∥Sx∥∥y∥,

we have

∥B∥ = sup_{x≠0, y≠0} |B(x, y)| / (∥x∥∥y∥)
= sup_{x≠0, y≠0} |(Sx, y)| / (∥x∥∥y∥)
≤ sup_{x≠0} ∥Sx∥ / ∥x∥
= ∥S∥.

On the other hand,

∥B∥ = sup_{x≠0, y≠0} |(Sx, y)| / (∥x∥∥y∥)
≥ sup_{x≠0, Sx≠0} |(Sx, Sx)| / (∥x∥∥Sx∥)
= sup_{x≠0} ∥Sx∥ / ∥x∥
= ∥S∥.

It remains to check that S is unique. Suppose there is a linear operator T : H → H such that for all x, y ∈ H we have

B(x, y) = (Sx, y) = (Tx, y).

It then follows that

((S − T)x, y) = 0, for all x, y ∈ H.

Setting y = (S − T)x, we obtain ∥(S − T)x∥ = 0, that is, Sx = Tx for each x ∈ H. Consequently,

S = T. □

Theorem 2.4.6 If a complex scalar function B(·, ·) : H × H → C, where H denotes a Hilbert space, satisfies the following conditions:
1)

B(x₁ + x₂, y) = B(x₁, y) + B(x₂, y).

2)

B(x, y₁ + y₂) = B(x, y₁) + B(x, y₂).

3)

B(αx, y) = αB(x, y).

4)

B(x, βy) = β̄B(x, y).

5)

|B(x, x)| ≤ M∥x∥².

6)

|B(x, y)| = |B(y, x)|.

where M is a constant, x, x₁, x₂, y, y₁, y₂ are arbitrary elements of H and α, β are scalars, then B is a bounded sesquilinear functional with ∥B∥ ≤ M.

Corollary 2.4.7 If the bounded sesquilinear functional B satisfies the condition:

|B(x, y)| = |B(y, x)|, x, y ∈ H,

then,

∥B∥ = sup_{x≠0} |B(x, x)| / ∥x∥²;

on the other hand,

sup_{x≠0} |B(x, x)| / ∥x∥² ≤ sup_{x≠0, y≠0} |B(x, y)| / (∥x∥∥y∥) = ∥B∥.

Corollary 2.4.8 If H is a Hilbert space, the norm of a Hermitian bounded sesquilinear form B(·, ·) : H × H → C is given by the formula:

∥B∥ = sup_{x≠0} |B(x, x)| / ∥x∥².

2.5 The adjoint operator


The study of sesquilinear forms on a Hilbert space H yields rich dividends. The algebra B(H) of bounded linear operators on H admits a canonical bijection T → T* possessing pleasant algebraic properties. Moreover, many properties of T can be studied through the operator T*. It also helps us to study three important classes of operators, namely self-adjoint, unitary and normal operators. These classes have been studied extensively, because they play an important role in various applications.

Definition 2.5.1 Let T be a bounded linear operator on a Hilbert space H. Then the Hilbert space adjoint T* of T is the operator

T* : H → H,

such that for all x, y ∈ H,

⟨Tx, y⟩ = (x, T*y).

The Hilbert space adjoint T* of T in this definition exists, is unique, and is a bounded linear operator with norm

∥T*∥ = ∥T∥.

proof. The formula

B(y, x) = (y, Tx), x, y ∈ H,

defines a bounded sesquilinear form on H × H, because the inner product is a sesquilinear form and T is a bounded linear operator. Indeed, for y₁, y₂, x₁, x₂ in H and scalars α, β,

B(αy₁ + βy₂, x) = (αy₁ + βy₂, Tx)
= α(y₁, Tx) + β(y₂, Tx)
= αB(y₁, x) + βB(y₂, x),

and,

B(y, αx₁ + βx₂) = (y, T(αx₁ + βx₂))
= (y, αTx₁ + βTx₂)
= ᾱ(y, Tx₁) + β̄(y, Tx₂)
= ᾱB(y, x₁) + β̄B(y, x₂).

Moreover, B is bounded:

|B(y, x)| = |(y, Tx)| ≤ ∥y∥∥Tx∥ ≤ ∥T∥∥y∥∥x∥,

which implies ∥B∥ ≤ ∥T∥. Also,

∥B∥ = sup_{x≠0, y≠0} |(y, Tx)| / (∥y∥∥x∥)
≥ sup_{x≠0, Tx≠0} |(Tx, Tx)| / (∥Tx∥∥x∥)
= sup_{x≠0} ∥Tx∥ / ∥x∥ = ∥T∥.

We conclude that

∥B∥ = ∥T∥.

From the representation theorem for bounded sesquilinear forms (Theorem 2.4.5), we have

B(y, x) = (T*y, x),

where we have written T* for the operator S of that theorem. The operator T* : H → H is a uniquely defined bounded linear operator with norm

∥T*∥ = ∥B∥ = ∥T∥.

Then we note that

(y, Tx) = (T*y, x),

and, taking conjugates, we conclude

(Tx, y) = (x, T*y). □
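In Cⁿ with the standard inner product, the adjoint is the conjugate transpose; the Python/NumPy sketch below (an illustration, using the chapter's convention (u, v) = Σ uᵢv̄ᵢ) checks the defining relation (Tx, y) = (x, T*y) and the equality ∥T∥ = ∥T*∥.

```python
import numpy as np

# Adjoint of a matrix T on C^4: T* is the conjugate transpose.
rng = np.random.default_rng(2)
T = rng.standard_normal((4, 4)) + 1j * rng.standard_normal((4, 4))
x = rng.standard_normal(4) + 1j * rng.standard_normal(4)
y = rng.standard_normal(4) + 1j * rng.standard_normal(4)

Tstar = T.conj().T
# (u, v) = sum_i u_i * conj(v_i); np.vdot conjugates its first argument.
ip = lambda u, v: np.vdot(v, u)

print(np.isclose(ip(T @ x, y), ip(x, Tstar @ y)))                  # (Tx,y) = (x,T*y)
print(np.isclose(np.linalg.norm(T, 2), np.linalg.norm(Tstar, 2)))  # ||T|| = ||T*||
```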

Theorem 2.5.2 If S, T ∈ B(H) and α is a scalar, then
1)

(αS + T)* = ᾱS* + T*.

2)

(TS)* = S*T*.

3)

(S*)* = S.

4) If S is invertible in B(H) and S⁻¹ is its inverse, then S* is invertible and

(S*)⁻¹ = (S⁻¹)*.

5)

∥S*S∥ = ∥SS*∥ = ∥S∥².

6) S*S = 0 if and only if S = 0.

proof. 1) By definition of the adjoint, for all x, y ∈ H,

(x, (αS + T)*y) = ((αS + T)x, y)
= α(Sx, y) + (Tx, y)
= (x, ᾱS*y) + (x, T*y)
= (x, (ᾱS* + T*)y),

hence (αS + T)* = ᾱS* + T*.

2) For x, y ∈ H,

(x, (ST)*y) = (ST(x), y)
= (Tx, S*y)
= (x, T*S*y),

hence (ST)* = T*S*.

3) For x, y ∈ H,

(x, (S*)*y) = (S*x, y) = (x, Sy),

hence (S*)* = S.

4) If I denotes the identity operator in B(H), then I* = I. Indeed, for x, y ∈ H,

(x, I*y) = (Ix, y) = (x, y) = (x, Iy),

hence I*y = Iy for all y ∈ H, which implies I = I*. Suppose S is an invertible element of B(H). Then SS⁻¹ = S⁻¹S = I. We have (S⁻¹S)* = S*(S⁻¹)* = I*. Since I = I*, we get S*(S⁻¹)* = I. Similarly, (S⁻¹)*S* = I. Hence (S*)⁻¹ = (S⁻¹)*.

5) For every x with ∥x∥ ≤ 1, the Cauchy–Schwarz inequality gives

∥Sx∥² = (Sx, Sx) = (S*Sx, x) ≤ ∥S*S∥∥x∥² ≤ ∥S*S∥,

hence ∥S∥² ≤ ∥S*S∥. Conversely, ∥S*S∥ ≤ ∥S*∥∥S∥ = ∥S∥². Thus ∥S*S∥ = ∥S∥², and applying this to S* gives ∥SS*∥ = ∥S*∥² = ∥S∥².

6) It is an immediate consequence of 5). □

Definition 2.5.3 Let A be an algebra over C. A mapping a → a* of A into itself is called an involution if, for all a, b ∈ A and all α ∈ C,
1)

a** = a.

2)

(a + b)* = a* + b*.

3)

(αa)* = ᾱa*.

4)

(ab)* = b*a*.

An algebra with an involution is called a *-algebra. A normed algebra with an involution is called a normed *-algebra. A Banach algebra A with an involution satisfying ∥aa*∥ = ∥a∥² is called a C*-algebra.

Theorem 2.5.4 If T ∈ B(H), then Ker(T) = Ker(T*T) = [ran(T*)]⊥ and [Ker(T)]⊥ = \overline{ran(T*)}.

Theorem 2.5.5 If T ∈ B(H) is such that T and T* are both bounded below, then T is invertible.

proof. If T* is bounded below, then Ker(T*) = {0}. Using the last theorem, [ran(T)]⊥ = Ker(T*) = {0}, which implies

\overline{ran(T)} = ({0})⊥ = H;

thus ran(T) is dense in H. Since T is also bounded below, T is invertible by Theorem 2.3.11. □

2.6 Some Special Classes of Operators

Definition 2.6.1 If T ∈ B(H) then,


1)T is Hermitian or self-adjoint if

T = T ∗.

2)T is unitary if T is bijective and

T ∗ = T −1 .

3)T is normal if
T T ∗ = T ∗ T.

Remark 2.6.2 In the analogy between the adjoint and the conjugate, Hermitian operators become the analogues of real numbers, unitaries are the analogues of complex numbers of absolute value 1, and normal operators are the true analogues of complex numbers. Note that:

T = (T + T*)/2 + i (T − T*)/(2i),

where (T + T*)/2 and (T − T*)/(2i) are self-adjoint, and

T* = (T + T*)/2 − i (T − T*)/(2i).

The operators (T + T*)/2 and (T − T*)/(2i) are called the real and imaginary parts of T.

If T is self-adjoint or unitary, then T is normal. However, a normal operator need not be self-adjoint or unitary. First note that I, the identity operator in B(H), is self-adjoint. Take T = 2iI; then

T* = −2iI,

so

TT* = 4I = T*T,

but

T* ≠ T

and

T⁻¹ = −(1/2) iI ≠ T*.

Theorem 2.6.3 Let T ∈ B(H)Then,


a)If T is self-adjoint, (T x, x) is real for all x ∈ H.
b)If H is a complex Hilbert space and (T x, x) is real for all x ∈ H,the
operatorT is self-adjoint.

Remark 2.6.4 Part (b) of the preceding proposition is false if it is only assumed that H is a real Hilbert space. For example, if

T = (  0  1
      −1  0 )

on R², then (Tx, x) = 0 for all x ∈ R². However,

T* = ( 0 −1
       1  0 ) ≠ T.

Theorem 2.6.5 Let {Tₙ}_{n≥1} be a sequence of bounded self-adjoint linear operators on a Hilbert space H. Suppose {Tₙ}_{n≥1} converges in the uniform norm, say limₙ Tₙ = T, i.e. limₙ ∥Tₙ − T∥ = 0. Then the limit operator T is a bounded self-adjoint operator on H.

proof. Clearly, T is a bounded linear operator. It is enough to show that T* = T. We have

∥(Tₙ)* − T*∥ = ∥(Tₙ − T)*∥ = ∥Tₙ − T∥ → 0.

Therefore,

T* = limₙ (Tₙ)* = limₙ Tₙ = T. □

The following result is important for the discussion of "spectral theory".

Theorem 2.6.6 If T ∈ B(H) is self-adjoint,

∥T∥ = sup{|(Tx, x)| : ∥x∥ ≤ 1} = sup{|(Tx, x)| : ∥x∥ = 1}.

proof. Define B(x, y) = (Tx, y), x, y ∈ H. B is a bounded sesquilinear form with ∥B∥ = ∥T∥. Since

B(y, x) = (Ty, x) = (y, Tx) = \overline{(Tx, y)} = \overline{B(x, y)},

B is Hermitian. Hence, by Corollary 2.4.8,

∥B∥ = sup{|(Tx, x)| / ∥x∥² : x ∈ H, x ≠ 0} = sup{|(Tx, x)| : x ∈ H, ∥x∥ = 1}. □
Corollary 2.6.7 If T ∈ B(H)is such that T = T ∗ and (T x, x) = 0 for all


x ∈ H then T = 0.

Proposition 2.6.8 If H is a complex Hilbert space and T ∈ B(H) is such that (Tx, x) = 0 for all x ∈ H, then T = 0.

proof. For all x, y ∈ H the following polarization identity is easily verified:

(Tx, y) = ¼ [(T(x + y), x + y) − (T(x − y), x − y) + i(T(x + iy), x + iy) − i(T(x − iy), x − iy)].

Since (Tx, x) = 0 for all x ∈ H, it follows that (Tx, y) = 0 for all x, y ∈ H. Setting y = Tx, we obtain ∥Tx∥² = 0, that is,

Tx = 0 for all x ∈ H.

Consequently, T = 0. □

Definition 2.6.9 Let T ∈ B(H) be such that T* = T. If for each x ∈ H, (Tx, x) ≥ 0, we say that T is positive semidefinite (or positive). If (Tx, x) > 0 for every nonzero x ∈ H, we say that T is positive definite (strictly positive).

Definition 2.6.10 Let {Tn }n≥1 be a sequence of bounded linear self-adjoint


operators defined in a Hilbert space H.Tn ∈ B(H), n = 1, 2, ....The sequence
{Tn }n≥1 is said to be increasing(resp. decreasing) if T1 ≤ T2 ≤ .....(resp.T1 ≥
T2 ≥ ....).

Theorem 2.6.11 Let {Tₙ}_{n≥1} be an increasing sequence of bounded linear self-adjoint operators on a Hilbert space H that is bounded from above, that is,

T₁ ≤ T₂ ≤ .... ≤ αI,

where α is a real number. Then {Tₙ}_{n≥1} is strongly convergent.

Theorem 2.6.12 If T ∈ B(H) is self-adjoint and n ∈ N then


∥T n ∥ = ∥T ∥n .

2.7 Normal, Unitary and Isometric Operators
The true analogues of complex numbers are the normal operators. The fol-
lowing Theorem gives a characterisation of these operators.

Theorem 2.7.1 If T ∈ B(H), the following are equivalent:
1) T is normal;
2) ∥Tx∥ = ∥T*x∥ for all x ∈ H.
If H is a complex Hilbert space, then these statements are also equivalent to:
3) The real and imaginary parts of T commute;

T₁T₂ = T₂T₁, where T₁ = (T + T*)/2 and T₂ = (T − T*)/(2i).

proof. If x ∈ H, then

∥Tx∥² − ∥T*x∥² = (Tx, Tx) − (T*x, T*x)
= (T*Tx, x) − (TT*x, x)
= ((T*T − TT*)x, x).

Since T*T − TT* is Hermitian, it follows (by Corollary 2.6.7) that 1) and 2) are equivalent. We next show that 1) and 3) are equivalent:

T*T = (T₁ − iT₂)(T₁ + iT₂) = (T₁)² + i(T₁T₂ − T₂T₁) + (T₂)²,

TT* = (T₁ + iT₂)(T₁ − iT₂) = (T₁)² + i(T₂T₁ − T₁T₂) + (T₂)².

Hence, T*T = TT* if, and only if, T₂T₁ = T₁T₂. □
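The equivalence 1) ⟺ 2) can be observed numerically; the Python/NumPy sketch below builds a normal matrix of the form T = QDQ* (Q unitary, D diagonal, a construction chosen just for this example) and checks ∥Tx∥ = ∥T*x∥ on a random vector.

```python
import numpy as np

# A normal matrix in C^3: T = Q D Q* with Q unitary and D diagonal.
rng = np.random.default_rng(3)
Q, _ = np.linalg.qr(rng.standard_normal((3, 3)) + 1j * rng.standard_normal((3, 3)))
D = np.diag(rng.standard_normal(3) + 1j * rng.standard_normal(3))
T = Q @ D @ Q.conj().T

print(np.allclose(T @ T.conj().T, T.conj().T @ T))       # T T* = T* T
x = rng.standard_normal(3) + 1j * rng.standard_normal(3)
print(np.isclose(np.linalg.norm(T @ x), np.linalg.norm(T.conj().T @ x)))
```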

Theorem 2.7.2 Let T ∈ B(H) satisfy

T*T = TT*.

Then,

∥Tᵏ∥ = ∥T∥ᵏ for all k ∈ N.

Definition 2.7.3 For any T ∈ B(H),

ρ(T) = sup{|(Tx, x)| : ∥x∥ = 1}.

Theorem 2.7.4 If T ∈ B(H) is a normal operator, then

∥T ∥ = sup {|(T x, x)| : ∥x∥ = 1} .

proof. From the definition of ρ, the Cauchy–Schwarz inequality and the definition of the norm, it follows that

ρ(T) = sup{|(Tx, x)| : ∥x∥ = 1}
≤ sup{∥Tx∥∥x∥ : ∥x∥ = 1}
= sup{∥Tx∥ : ∥x∥ = 1}
= ∥T∥.

Since T is normal, we have

∥T∥ᵖ = ∥Tᵖ∥ for p = 2ⁿ, n = 1, 2, 3, ...

So, using the general bound ∥A∥ ≤ 2ρ(A) together with the power inequality ρ(Tᵖ) ≤ ρ(T)ᵖ (which can be proved by induction),

∥T∥ = ∥Tᵖ∥^{1/p} ≤ (2ρ(Tᵖ))^{1/p} ≤ 2^{1/p} ρ(T).

When p → ∞, we get:

∥T∥ ≤ ρ(T).

Thus,

∥T∥ = ρ(T). □

Corollary 2.7.5 Let T ∈ B(H) be self-adjoint. Then,

∥T ∥ = sup {|(T x, x)| : ∥x∥ = 1} .


proof. Every self-adjoint operator T ∈ B(H) is normal. □

Proposition 2.7.6 Let H be a complex Hilbert space and T ∈ B(H). Then the following are equivalent:
a) T is an isometry;
b)

T*T = I.

c)

(Tx, Ty) = (x, y) for all x, y ∈ H.

proof. (a) implies (b): Since ∥Tx∥ = ∥x∥ for all x ∈ H, we have

(T*Tx, x) = (Tx, Tx) = ∥Tx∥² = ∥x∥² = (x, x),

so ((T*T − I)x, x) = 0 for all x ∈ H; since T*T − I is self-adjoint, this implies

T*T = I.

(b) implies (c):

(Tx, Ty) = (T*Tx, y) = (x, y).

(c) implies (a): This follows on taking

y = x. □

Theorem 2.7.7 The range ran(T) of an isometric operator T defined on a complex Hilbert space H is a closed linear subspace of H.

Definition 2.7.8 Let S and T be bounded linear operators on a Hilbert space


H. The operator S is said to be unitarily equivalent to T if there exists a
unitary operator U on H such that

S = U T U −1 = U T U ∗ .

2.8 Compact operator

Definition 2.8.1 Let X and Y be normed linear spaces. A linear operator T : X → Y is called a compact operator if for any bounded sequence {xₙ} in X, the sequence {Txₙ} in Y contains a convergent subsequence.

Example 2.8.2 If E has finite dimension, the identity operator on E is a compact operator.

Chapter 3

Spectral Theory

As noted earlier, if H is a complex Hilbert space, B(H) is a C*-algebra with identity. In this chapter we shall study the invertibility of the operators λI − T, where T ∈ B(H), I is the identity operator and λ ∈ C. The study of the distribution of the values of λ for which λI − T does not have an inverse is called 'spectral theory' for the operator, and the complement of the set

{λ ∈ C : λI − T is invertible in B(H)},

called the 'spectrum' of the operator T, is important in classifying the operators on Hilbert space.

3.1 Spectral notion


Definition 3.1.1 The spectrum of an operator T ∈ B(H) is the set

σ(T) = {λ ∈ C : λI − T is not invertible in B(H)},

and the resolvent set of T is the set

p(T) = {λ ∈ C : λI − T is invertible in B(H)}.

For λ₀ ∈ p(T), (λ₀I − T)⁻¹ is called the resolvent at λ₀ and is denoted by R(λ₀, T). Further, the spectral radius of T is defined by

r(T) = sup{|λ| : λ ∈ σ(T)}.

Examples 3.1.2 (i) Let

T = I = ( 1 0 0
          0 1 0
          0 0 1 ).

We check for

σ(T) = {λ ∈ C : λI − I is not invertible in B(H)}.

If λ ∈ σ(T), then

det(λI − I) = det ( λ−1  0    0
                    0    λ−1  0
                    0    0    λ−1 ) = (λ − 1)³ = 0,

hence

(λ − 1)³ = 0 ⟹ λ − 1 = 0 ⟹ λ = 1.

Then,

σ(T) = {1}.

(ii) For an n × n matrix T, λI − T is not invertible if and only if det(λI − T) = 0. Thus, in the finite-dimensional case, σ(T) is just the set of eigenvalues of T (since det(λI − T) is an nth-degree polynomial whose roots are the eigenvalues of T).

Remark 3.1.3 Recall that λI − T fails to be invertible if either ran(λI − T) ≠ H or ker(λI − T) ≠ {0}.

Definition 3.1.4 a) The point spectrum (eigenspectrum, set of eigenvalues) of T ∈ B(H) is defined to be the set

σp(T) = {λ ∈ C : ker(λI − T) ≠ {0}};

in other words, there is a nonzero vector x in H such that (λI − T)x = 0, i.e. λI − T is not injective, i.e. there is

x ∈ H, (x ≠ 0) : Tx = λx.

b) The continuous spectrum σc(T) is the set

σc(T) = {λ ∈ C : λI − T is injective and ran(λI − T) is dense in H, but (λI − T)⁻¹ is not bounded}.

c) If ran(λI − T) is not dense in H, λ is said to belong to the compression spectrum σcom(T) of T; if λI − T is not bounded below, λ is said to belong to the approximate point spectrum σap(T) of T. In other words,

σcom(T) = {λ ∈ C : ran(λI − T) is not dense in H},

σap(T) = {λ ∈ C : there is a sequence {xₙ}_{n≥1} such that ∥xₙ∥ = 1 for every n and ∥(λI − T)xₙ∥ → 0}.

Remark 3.1.5 In finite dimension,

σp(T) = σ(T).

3.2 Resolvent Equation and Spectral Radius

Theorem 3.2.1 (The resolvent equation) For λ, µ ∈ p(T ),

R(λ, T ) − R(µ, T ) = −(λ − µ)R(λ, T )R(µ, T ).

proof. We have

R(λ, T) − R(µ, T) = (λI − T)⁻¹ − (µI − T)⁻¹
= (λI − T)⁻¹ [(µI − T) − (λI − T)] (µI − T)⁻¹
= −(λ − µ)R(λ, T)R(µ, T). □

Theorem 3.2.2 Let T ∈ B(H). The resolvent set p(T) of T is open, and the map λ → R(λ, T) = (λI − T)⁻¹ from p(T) ⊆ C to B(H) is strongly holomorphic and vanishes at ∞. For each x, y ∈ H, the map λ → (R(λ, T)x, y) = ((λI − T)⁻¹x, y) ∈ C is holomorphic on p(T), vanishing at ∞.

Corollary 3.2.3 For T ∈ B(H), σ(T ) = C\p(T ) is a closed subset of C.

Definition 3.2.4 Recall that the spectral radius of an operator T ∈ B(H) is


defined to be
r(T ) = sup {|λ| : λ ∈ σ(T )} .

Theorem 3.2.5 Let T ∈ B(H), where H ≠ {0}. If |λ| > ∥T∥, then λ ∈ p(T) and

R(λ, T) = (λI − T)⁻¹ = Σ_{n=0}^{∞} λ^{−n−1} Tⁿ,

where convergence takes place in the uniform operator norm. Also, the spectrum σ(T) of T is a nonempty compact subset which lies in {λ ∈ C : |λ| ≤ ∥T∥}. In particular, there exists λ ∈ σ(T) such that

|λ| = r(T).

Theorem 3.2.6 (Gelfand's formula) For any T ∈ B(H) the limit

lim_{n→∞} ∥Tⁿ∥^{1/n}

exists and equals r(T).

Lemma 3.2.7 For T ∈ B(H), lim_{n→∞} ∥Tⁿ∥^{1/n} exists and equals inf_n ∥Tⁿ∥^{1/n}. Moreover,

0 ≤ inf_n ∥Tⁿ∥^{1/n} ≤ ∥T∥.
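Gelfand's formula lends itself to a quick numerical experiment; the Python/NumPy sketch below (an illustration only, for a random matrix and the spectral norm) compares ∥Tⁿ∥^{1/n} with the spectral radius.

```python
import numpy as np

# Gelfand's formula: ||T^n||^(1/n) -> r(T) as n -> infinity.
rng = np.random.default_rng(4)
T = rng.standard_normal((5, 5))

r = np.max(np.abs(np.linalg.eigvals(T)))          # spectral radius r(T)
for n in [1, 2, 4, 8, 16, 32, 64]:
    a = np.linalg.norm(np.linalg.matrix_power(T, n), 2) ** (1.0 / n)
    print(n, a)                                    # approaches r(T) as n grows
print("r(T) =", r)
```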

Remark 3.2.8 If T ∈ B(H) is such that T*T = TT*, i.e. for a normal operator T,

r(T) = ∥T∥.
3.3 Spectral Mapping Theorem for Polynomials

Theorem 3.3.1 (Spectral Mapping) Let H be a Hilbert space and T ∈ B(H). Then:
a)

σ(T*) = {λ̄ : λ ∈ σ(T)};

b) if T is invertible, then

σ(T⁻¹) = {λ⁻¹ : λ ∈ σ(T)}.

proof. 1) Let x, y ∈ H. Then

⟨(T − λI)x, y⟩ = ⟨Tx, y⟩ − ⟨λx, y⟩
= ⟨x, T*y⟩ − ⟨x, λ̄y⟩
= ⟨x, (T* − λ̄I)y⟩,

so (T − λI)* = T* − λ̄I. An operator is invertible if and only if its adjoint is invertible; hence T − λI is not invertible if and only if T* − λ̄I is not invertible, that is,

λ ∈ σ(T) ⟺ λ̄ ∈ σ(T*).

2) If T is invertible, then T − 0I is invertible, so 0 ∉ σ(T); likewise 0 ∉ σ(T⁻¹). For λ ≠ 0 we have the identity

T⁻¹ − λ⁻¹I = (λI − T) λ⁻¹T⁻¹,

so T⁻¹ − λ⁻¹I is invertible if and only if λI − T is invertible. Hence

λ⁻¹ ∈ σ(T⁻¹) ⟺ λ ∈ σ(T),

and therefore

σ(T⁻¹) = {λ⁻¹ : λ ∈ σ(T)}. □

Proposition 3.3.2 Let T ∈ B(H). Then:
a)

σp(T*) = {λ̄ : λ ∈ σcom(T)}.

b)

σ(T*) = σap(T*) ∪ {λ̄ : λ ∈ σap(T)}.

Proposition 3.3.3 Let T ∈ B(H). Then, is σap a closed subset of C.

3.4 Spectrum of Various Classes of Operators


Theorem 3.4.1 Every point in the spectrum of a normal operator is an
approximate eigenvalue.

Theorem 3.4.2 The spectrum of every self-adjoint operator T ∈ B(H) is


a subset of R. In particular, the eigenvalues of T , if any, are real. Further-
more, if T is a positive operator, then the spectrum of T is nonnegative, and
eigenvalues, if any, are also nonnegative.

Theorem 3.4.3 Let B(H) denote the algebra of bounded linear operators on a complex Hilbert space H. Suppose T ∈ B(H) satisfies the equality TT* = T*T, i.e. T is normal. Then:
a)

σp(T*) = {λ̄ : λ ∈ σp(T)}.

b) Eigenvectors corresponding to distinct eigenvalues, if any, are orthogonal.

Chapter 4

Pseudospectrum of a matrix

The study of eigenvalues appears in different scientific fields: we can cite for example physics, spectral theory, the stability of dynamical systems, electricity, and quantum mechanics. The computation and the localization of eigenvalues in the complex plane is a delicate matter.

The case of a normal matrix is easy to handle, but this is not so for a non-normal matrix. This is why we ask about the importance of the non-normal character of a matrix and the consequences it can cause. To illustrate this, we consider the matrix A of MATLAB's gallery(5):

A = (   −9     11    −21     63    −252
        70    −69    141   −421    1684
      −575    575  −1149   3451  −13801
      3891  −3891   7782 −23345   93365
      1024  −1024   2048  −6144   24572 )

The characteristic polynomial of A is

P(z) = z⁵,

so the spectrum of A is

σ(A) = {0}.

But if we compute the eigenvalues of A with MATLAB, we obtain five eigenvalues

zₖ ≈ 0.0407 e^{2πik/5}, k ∈ {0, ..., 4}.

The spectrum of A computed with MATLAB is thus totally different from the theoretical spectrum of A.
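This experiment can be reproduced outside MATLAB as well; the following Python/NumPy sketch uses the same gallery(5) matrix (entries as restored above) and exhibits the spurious computed eigenvalues of modulus about 0.04.

```python
import numpy as np

# The gallery(5) matrix: its characteristic polynomial is z^5, so the exact
# spectrum is {0}, yet floating-point eigenvalues land on a circle of radius
# roughly 0.04 -- a symptom of extreme non-normality.
A = np.array([
    [   -9,    11,    -21,     63,   -252],
    [   70,   -69,    141,   -421,   1684],
    [ -575,   575,  -1149,   3451, -13801],
    [ 3891, -3891,   7782, -23345,  93365],
    [ 1024, -1024,   2048,  -6144,  24572]], dtype=float)

w = np.linalg.eigvals(A)
print(w)           # five values close to 0.04 * exp(2*pi*1j*k/5)
print(np.abs(w))   # moduli near 0.04 instead of the exact value 0
```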
Another problem posed in the computation of eigenvalues is their sensitivity to perturbations. We consider the matrix B = (b_ij) defined by

B = PDP⁻¹,

where D = diag(0.999, 0.99, 0.9, 0.5, 0.1) and P = (p_ij) with

p_ij = 1/(i + j).

The spectrum of B is strictly included in the unit disk, i.e. ρ(B) < 1, where ρ(B) is the spectral radius of the matrix B; the matrix is therefore stable. We perturb the entry b₁₅ by inserting a relative error of 2·10⁻⁴. The eigenvalues of the perturbed matrix B̃ are:

λ₁ = 12.36, λ₂ = −9.423, λ₃ = −1.503, λ₄ = 1.057, λ₅ = 1.001.

The matrix B̃ is not stable; we thus see the effect of that small perturbation on the stability of the matrix.
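The following Python/NumPy sketch reproduces this perturbation experiment under the stated assumptions (indices of p_ij starting at 1, the perturbed entry being b₁₅); the eigenvalues of the perturbed matrix leave the unit disk.

```python
import numpy as np

# B = P D P^{-1} with p_ij = 1/(i+j), indices i, j = 1..5.
n = 5
P = np.array([[1.0 / (i + j) for j in range(1, n + 1)] for i in range(1, n + 1)])
D = np.diag([0.999, 0.99, 0.9, 0.5, 0.1])
B = P @ D @ np.linalg.inv(P)

print(np.max(np.abs(np.linalg.eigvals(B))))   # spectral radius < 1: stable

Bt = B.copy()
Bt[0, 4] *= 1 + 2e-4                          # relative perturbation of b_15
print(np.linalg.eigvals(Bt))                  # eigenvalues move far outside
```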

4.1 Pseudospectra of matrices


We shall generally let A denote a matrix in C^{n×n}. We can motivate the idea of pseudospectra by what is observed throughout applied mathematics: the question "is A singular?" is not robust, because under an arbitrarily small perturbation the answer may change; it is better to ask "is ∥A⁻¹∥ large?". Now, to define an eigenvalue we need the condition of matrix singularity: to ask "is z an eigenvalue of A?" is the same as to ask "is z − A singular?". Therefore the property of being an eigenvalue of a matrix is not robust either, and it is better to ask "is ∥(z − A)⁻¹∥ large?".
Definition 4.1.1 (via the norm of the resolvent) Let A ∈ C^{n×n} and ε > 0. The ε-pseudospectrum σε of A is:

σε = {z ∈ C : ∥(z − A)⁻¹∥ > ε⁻¹}.

In words, the ε-pseudospectrum is the open subset of the complex plane on which the norm of the resolvent exceeds ε⁻¹.

Definition 4.1.2 (via perturbation theory) Let A ∈ C^{n×n} and ε > 0. The ε-pseudospectrum σε of A is:

σε = {z ∈ C : z ∈ σ(A + E) for some E ∈ C^{n×n} with ∥E∥ < ε}.

In words, the ε-pseudospectrum is the set of numbers that are eigenvalues of some perturbed matrix A + E with ∥E∥ < ε.

Definition 4.1.3 (via pseudoeigenvectors and pseudoeigenvalues) Let A ∈ C^{n×n} and ε > 0. The ε-pseudospectrum σε of A is:

σε = {z ∈ C : ∥(z − A)v∥ < ε for some v ∈ Cⁿ with ∥v∥ = 1}.

Here v and z are called a pseudoeigenvector and a pseudoeigenvalue, respectively. In words, the ε-pseudospectrum is the set of the pseudoeigenvalues.

Remark 4.1.4 Let A ∈ C^{n×n}. If ∥·∥ = ∥·∥₂, then the norm of a matrix is its largest singular value, and the norm of its inverse is the inverse of its smallest singular value. In particular,

∥(zI − A)⁻¹∥₂ = [s_min(zI − A)]⁻¹,

where s_min(zI − A) denotes the smallest singular value of zI − A; this yields the fourth definition.

Definition 4.1.5 (singular values) Let A ∈ C^{n×n} and ϵ > 0. The ϵ-pseudospectrum σϵ(A) of A is

σϵ(A) = { z ∈ C : s_min(zI − A) ≤ ϵ }.
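Definition 4.1.5 is the one used in practice: one evaluates s_min(zI − A) on a grid of points z and draws the level curve s_min = ϵ. A minimal sketch in Python/NumPy (the function name, grid bounds and resolution are illustrative choices, not part of the theory):

    import numpy as np

    def smin_grid(A, re_pts, im_pts):
        """Smallest singular value of zI - A at each grid point z."""
        n = A.shape[0]
        S = np.empty((len(im_pts), len(re_pts)))
        for k, y in enumerate(im_pts):
            for j, x in enumerate(re_pts):
                z = x + 1j * y
                S[k, j] = np.linalg.svd(z * np.eye(n) - A,
                                        compute_uv=False)[-1]
        return S

The region { S ≤ ϵ } then approximates σϵ(A); for instance, matplotlib's contour at level ϵ plots its boundary.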

Definition 4.1.6 Let A ∈ C^{n×n} and ϵ > 0. The ϵ-pseudospectrum σϵ(A) of A is

σϵ(A) = { z ∈ C : z ∈ σ(A + ϵv₂v₁*) for some v₁, v₂ ∈ Cⁿ with ∥v₁∥ ≤ 1, ∥v₂∥ ≤ 1 }.

Definition 4.1.7 Let A ∈ C^{n×n} and ϵ > 0. The ϵ-pseudospectrum σϵ(A) of A is

σϵ(A) = { z ∈ C : z ∈ σ(A − (Av − zv)v*) for some v ∈ Cⁿ with ∥Av − zv∥ ≤ ϵ, ∥v∥ ≤ 1 }.

Notation 4.1.8 σ0 is the spectrum of A.

4.2 Singular values

Definition 4.2.1 Let A = (a_ij) ∈ C^{n×n} and let α_k ∈ R₊, k ∈ {1, . . . , p}, be the numbers such that

σ(AA*) = {α₁, α₂, . . . , α_p},

with p ≤ n. Then the singular values of A are the numbers √α₁, √α₂, . . . , √α_p.

Proposition 4.2.2 Let A ∈ C^{n×n}. Then the spectrum of AA* is equal to the spectrum of A*A.

Definition 4.2.3 (matrix norms) Let A = (a_ij) ∈ C^{n×n}. Then we have

∥A∥₁ = max_{1≤j≤n} Σ_{i=1}^{n} |a_ij|,

and

∥A∥_∞ = max_{1≤i≤n} Σ_{j=1}^{n} |a_ij|.

Definition 4.2.4 (the spectral norm (2-norm)) Let A ∈ C^{n×n}. The spectral norm is defined by

∥A∥₂ = max_{∥x∥₂=1} ∥Ax∥₂.

Proposition 4.2.5 The spectral norm of A is the largest singular value of A; that is, ∥A∥₂ = √α, where α is the largest eigenvalue of AA*.

proof. ∥A∥₂² = max_{∥x∥₂=1} x*(A*A)x = α, where α is the largest eigenvalue of A*A, which by Proposition 4.2.2 is also the largest eigenvalue of AA*. □
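A quick numerical sanity check of this proposition in Python/NumPy, on a random complex matrix:

    import numpy as np

    rng = np.random.default_rng(0)
    A = rng.standard_normal((4, 4)) + 1j * rng.standard_normal((4, 4))

    s_max = np.linalg.svd(A, compute_uv=False)[0]       # largest singular value
    alpha = np.linalg.eigvalsh(A.conj().T @ A)[-1]      # largest eigenvalue of A*A
    print(np.isclose(s_max, np.sqrt(alpha)))            # True
    print(np.isclose(s_max, np.linalg.norm(A, 2)))      # True: this is ||A||_2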

Lemma 4.2.6 Let A ∈ C^{n×n}, ϵ > 0, and v ∈ Cⁿ with ∥v∥ = 1. If 0 ≠ ∥(A − zI)v∥ ≤ ϵ, then there exist ϵ′ with 0 < ϵ′ ≤ ϵ and u ∈ Cⁿ with ∥u∥ = 1 such that

(A − zI)v = ϵ′u.

proof. Since (A − zI)v ≠ 0, set w = (A − zI)v. We can write

(A − zI)v = ∥w∥ (w/∥w∥) = ϵ′u,

with

ϵ′ = ∥w∥ and u = w/∥w∥,

so that ∥u∥ = 1. In addition, since ∥(A − zI)v∥ ≤ ϵ, we get ∥ϵ′u∥ ≤ ϵ, hence ϵ′∥u∥ ≤ ϵ, and therefore ϵ′ ≤ ϵ. □

4.3 Equivalence of pseudospectrum

Theorem 4.3.1 The definitions of the pseudospectrum are equivalent.

proof. If z ∈ σ(A), all the definitions contain z (take E = 0 and v an eigenvector), so the equivalence is evident. We therefore suppose z ∉ σ(A). We number the definitions 1–6 in the order 4.1.1, 4.1.2, 4.1.3, 4.1.5, 4.1.6, 4.1.7 and prove 1 ⟹ 3, 3 ⟹ 6, 6 ⟹ 5, 5 ⟹ 2, 2 ⟹ 1, together with 1 ⟺ 4.

1 ⟹ 3. Suppose z ∈ C is such that ∥(zI − A)⁻¹∥ ≥ ϵ⁻¹. Since the unit sphere {∥w∥ = 1} is compact, the supremum

∥(zI − A)⁻¹∥ = sup_{∥w∥=1} ∥(zI − A)⁻¹w∥

is attained at some w₀ with ∥w₀∥ = 1. Set w = (zI − A)⁻¹w₀, so that w₀ = (zI − A)w and

ϵ⁻¹ ≤ ∥(zI − A)⁻¹∥ = ∥(zI − A)⁻¹w₀∥ / ∥w₀∥ = ∥w∥ / ∥(zI − A)w∥.

Hence ∥(zI − A)w∥ / ∥w∥ ≤ ϵ, that is, ∥(zI − A)v∥ ≤ ϵ with v = w/∥w∥ and ∥v∥ = 1.

3 ⟹ 6. If ∥(zI − A)v∥ ≤ ϵ with ∥v∥ = 1, then

(A − (Av − zv)v*)v = Av − (Av − zv) = zv,

so z ∈ σ(A − (Av − zv)v*) with ∥Av − zv∥ ≤ ϵ and ∥v∥ ≤ 1.

6 ⟹ 5. Suppose z ∈ σ(A − (Av − zv)v*) where ∥v∥ ≤ 1 and ∥Av − zv∥ ≤ ϵ. Writing −(Av − zv) = ϵv₂ with ∥v₂∥ ≤ 1, we get z ∈ σ(A + ϵv₂v*) with ∥v∥ ≤ 1 and ∥v₂∥ ≤ 1.

5 ⟹ 2. The perturbation E = ϵv₂v₁* satisfies ∥E∥ ≤ ϵ∥v₂∥∥v₁∥ ≤ ϵ, so z ∈ σ(A + E) with ∥E∥ ≤ ϵ.

2 ⟹ 1. Suppose z ∈ σ(A + E) with ∥E∥ ≤ ϵ; then (A + E)v = zv for some v ≠ 0. Dividing by ∥v∥, we get (A + E)w = zw with w = v/∥v∥, which implies

Ew = (zI − A)w, ∥w∥ = 1,

and therefore w = (zI − A)⁻¹Ew. Then

1 = ∥w∥ = ∥(zI − A)⁻¹Ew∥ ≤ ∥(zI − A)⁻¹∥ ∥E∥ ∥w∥ ≤ ∥(zI − A)⁻¹∥ ϵ,

so ∥(zI − A)⁻¹∥ ≥ ϵ⁻¹.

1 ⟺ 4. We have

∥(zI − A)⁻¹∥ = [s_min(zI − A)]⁻¹ ≥ ϵ⁻¹ ⟺ s_min(zI − A) ≤ ϵ. □

4.4 The pseudospectrum of a normal matrix

Definition 4.4.1 (Schur decomposition) For every A ∈ C^{n×n} there exist a unitary matrix U and an upper triangular matrix T such that A = UTU*. This factorization is called the Schur decomposition; the main diagonal of T contains the eigenvalues of A.

Theorem 4.4.2 Let A ∈ C^{n×n}. A is a normal matrix if and only if there exist a unitary matrix U and a diagonal matrix D such that A = UDU*. We then say that A is unitarily diagonalizable.

proof. By the Schur decomposition, A = UTU* with U unitary and T upper triangular. Then

AA* = (UTU*)(UTU*)* = UTT*U*,
A*A = (UTU*)*(UTU*) = UT*TU*.

Hence AA* = A*A if and only if TT* = T*T. Comparing the diagonal entries of TT* and T*T row by row shows that a normal upper triangular matrix is diagonal; thus A is normal if and only if T is diagonal, that is, A = UDU* with D = T diagonal and U unitary. □

Theorem 4.4.3 Let ϵ > 0 and let A be a normal matrix. Then the pseudospectrum of A is the union of the closed disks of radius ϵ centered at the eigenvalues of A.
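Equivalently, for normal A we have s_min(zI − A) = dist(z, σ(A)), which the following Python/NumPy sketch verifies on a randomly built normal matrix (eigenvalues and test point z are arbitrary choices):

    import numpy as np

    rng = np.random.default_rng(1)
    lam = rng.standard_normal(4) + 1j * rng.standard_normal(4)   # prescribed eigenvalues
    Q, _ = np.linalg.qr(rng.standard_normal((4, 4))
                        + 1j * rng.standard_normal((4, 4)))
    A = Q @ np.diag(lam) @ Q.conj().T                            # normal by construction

    z = 0.3 + 0.2j
    smin = np.linalg.svd(z * np.eye(4) - A, compute_uv=False)[-1]
    print(np.isclose(smin, min(abs(z - lam))))                   # True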

4.5 Properties of the pseudospectrum

Theorem 4.5.1 Let A ∈ C^{n×n} and ϵ ≥ 0. Then:
1) if 0 ≤ ϵ₁ ≤ ϵ₂, then σϵ₁(A) ⊆ σϵ₂(A);
2) σϵ(A₁ ⊕ A₂) = σϵ(A₁) ∪ σϵ(A₂), where A₁ ⊕ A₂ = ( A₁ 0 ; 0 A₂ );
3) σϵ(A + F) ⊆ σ_{ϵ+∥F∥}(A) for every F ∈ C^{n×n};
4) σϵ₁(A) + Δϵ₂ ⊆ σ_{ϵ₁+ϵ₂}(A), where Δϵ₂ is the closed disk of radius ϵ₂ centered at 0.

proof. 1) Let z ∈ σϵ₁(A); then s_min(zI − A) ≤ ϵ₁ ≤ ϵ₂, so z ∈ σϵ₂(A).
2) A₁ ⊕ A₂ is block diagonal, so the singular values of zI − (A₁ ⊕ A₂) are exactly the singular values of the blocks zI − A_i, i = 1, 2; using the fourth definition of the pseudospectrum, we get the result.
3) Let z ∈ σϵ(A + F); then z ∈ σ(A + F + E) for some ∥E∥ ≤ ϵ. Since ∥F + E∥ ≤ ∥F∥ + ∥E∥ ≤ ϵ + ∥F∥, we get z ∈ σ_{ϵ+∥F∥}(A).
4) Let z ∈ σϵ₁(A) and δ ∈ Δϵ₂. Then z ∈ σ(A + E₁) with ∥E₁∥ ≤ ϵ₁, hence z + δ ∈ σ(A + E₁ + δI) with ∥E₁ + δI∥ ≤ ϵ₁ + ϵ₂, so z + δ ∈ σ_{ϵ₁+ϵ₂}(A). □

Theorem 4.5.2 Let A ∈ C^{n×n} and let ϵ₁, ϵ₂ be two positive numbers. Let z₁, z₂ be the complex numbers corresponding to two points p₁ ∈ σϵ₁(A) and p₂ ∈ σϵ₂(A). Then the midpoint of the segment [p₁, p₂] is a point of σ_{(ϵ₁+ϵ₂+|z₁−z₂|)/2}(A).

proof. Let q be the midpoint of the segment [p₁, p₂] and let z = (z₁ + z₂)/2 be the corresponding complex number; assume without loss of generality ϵ₁ ≤ ϵ₂. Since |z − z₁| = |z₁ − z₂|/2, property 4) of Theorem 4.5.1 gives

z ∈ σϵ₁(A) + Δ_{|z₁−z₂|/2} ⊆ σ_{ϵ₁+|z₁−z₂|/2}(A),

and since ϵ₁ + |z₁ − z₂|/2 ≤ (ϵ₁ + ϵ₂ + |z₁ − z₂|)/2, property 1) gives z ∈ σ_{(ϵ₁+ϵ₂+|z₁−z₂|)/2}(A). □

Theorem 4.5.3 Let A ∈ C^{n×n} and ϵ ≥ 0. The pseudospectrum of A is the set of z ∈ C such that ∥u*(zI − A)∥ ≤ ϵ for some u ∈ Cⁿ with ∥u∥ = 1.

proof. Let z ∈ σϵ(A) and let v be a left eigenvector of the matrix A + E corresponding to z, with ∥E∥ ≤ ϵ. Then v*(A + E) = zv*, so

(v*/∥v∥)(zI − A) = (v*/∥v∥)E,

which implies

∥u*(zI − A)∥ ≤ ∥E∥ ≤ ϵ,

with u = v/∥v∥, ∥u∥ = 1.
Conversely, suppose ∥u*(zI − A)∥ ≤ ϵ for some u with ∥u∥ = 1. Then there are η with 0 ≤ η ≤ ϵ and φ ∈ Cⁿ with ∥φ∥ = 1 such that u*(zI − A) = ηφ*. Choose E = ηuφ* ∈ C^{n×n}; then

∥E∥ = ∥ηuφ*∥ ≤ η∥u∥∥φ*∥ = η ≤ ϵ,

and

u*E = ηφ* = u*(zI − A),

which implies u*(A + E) = zu*. Hence z ∈ σ(A + E), so z ∈ σϵ(A). □

4.6 The condition number, the spectral abscissa and the spectral radius

Let V be a matrix of eigenvectors of a diagonalizable matrix A ∈ C^{n×n} and let κ(V) = ∥V∥∥V⁻¹∥ be its condition number (with κ(V) = ∞ if A is not diagonalizable). Then

e^{tα(A)} ≤ ∥e^{tA}∥ ≤ κ(V) e^{tα(A)} for t ≥ 0,

where

α(A) = sup_{z∈σ(A)} Re z

is the spectral abscissa of A. We also have the bounds for the powers Aⁿ:

ρ(A)ⁿ ≤ ∥Aⁿ∥ ≤ κ(V) ρ(A)ⁿ,

where

ρ(A) = sup_{z∈σ(A)} |z|

is the spectral radius of A. More generally, the bounds on ∥f(A)∥ are given by

∥f∥_{σ(A)} ≤ ∥f(A)∥ ≤ κ(V) ∥f∥_{σ(A)},

where

∥f∥_{σ(A)} = sup_{z∈σ(A)} |f(z)|,

and f is an arbitrary analytic function in a neighborhood of the spectrum σ(A).

Theorem 4.6.1 Let A ∈ C^{n×n} and ϵ ≥ 0. Then:
1) if z ∈ σϵ(e^{tA}), t ≥ 0, and ∥e^{tA}∥ ≤ |z| (resp. ∥e^{tA}∥ ≥ |z|), then

κ(V) Σ_{n=0}^{∞} e^{tnα(A)}/|z|^{n+1} ≥ ϵ⁻¹   (resp. κ(V) Σ_{n=0}^{∞} |z|ⁿ/e^{t(n+1)α(A)} ≥ ϵ⁻¹);

2) if z ∈ σϵ(A) and ∥A∥ ≤ |z| (resp. ∥A∥ ≥ |z|), then

κ(V) Σ_{n=0}^{∞} ρ(A)ⁿ/|z|^{n+1} ≥ ϵ⁻¹   (resp. κ(V) Σ_{n=0}^{∞} |z|ⁿ/ρ(A)^{n+1} ≥ ϵ⁻¹);

3) if z ∈ σϵ(f(A)) and ∥f(A)∥ ≤ |z| (resp. ∥f(A)∥ ≥ |z|), then

κ(V) Σ_{n=0}^{∞} ∥fⁿ∥_{σ(A)}/|z|^{n+1} ≥ ϵ⁻¹   (resp. κ(V) Σ_{n=0}^{∞} |z|ⁿ/∥f^{n+1}∥_{σ(A)} ≥ ϵ⁻¹).

proof. If z ∈ σϵ(B) for a matrix B, then ϵ⁻¹ ≤ ∥(zI − B)⁻¹∥; expanding the resolvent in the Neumann series (zI − B)⁻¹ = Σ_{n=0}^{∞} Bⁿ/z^{n+1} when ∥B∥ ≤ |z| (and in the analogous series in powers of z when ∥B∥ ≥ |z|), we get

Σ_{n=0}^{∞} ∥Bⁿ∥/|z|^{n+1} ≥ ϵ⁻¹ if ∥B∥ ≤ |z|,   Σ_{n=0}^{∞} |z|ⁿ ∥B^{−(n+1)}∥ ≥ ϵ⁻¹ if ∥B∥ ≥ |z|.

Using the bounds from the beginning of this section on the powers of B for B = e^{tA}, B = A and B = f(A), we get the result. □

Notation 4.6.2 If A is a normal matrix, then σϵ(A) = σ(A) + Δϵ, where Δϵ is the closed disk of radius ϵ centered at 0. In particular,

σϵ(γI) = D(γ, ϵ), γ ∈ C,

where D(γ, ϵ) is the closed disk of radius ϵ and center γ.

Proposition 4.6.3 Let A ∈ C^{n×n} and ϵ ≥ 0. There exists αₙ ∈ C such that, for every ϵ ≥ 0,

σϵ(A) ⊆ σ_{rϵ}(αₙIₙ),

where Iₙ is the n × n identity matrix and rϵ is defined in the proof below.

proof. Let z_k ∈ ∂σϵ(A), k ∈ {1, 2, . . . , n}, where ∂σϵ(A) is the boundary of σϵ(A). We choose αₙ to be the barycenter of the system {z_k} and set

rϵ = sup_{z_k∈∂σϵ(A)} |αₙ − z_k|.

Since αₙIₙ is normal, it is enough to prove σϵ(A) ⊆ D(αₙ, rϵ), where D(αₙ, rϵ) is the closed disk of radius rϵ and center αₙ. If z ∈ σϵ(A), then |αₙ − z| ≤ rϵ, hence z ∈ D(αₙ, rϵ): every point of the pseudospectrum of A is contained in the interior or on the boundary of the closed disk D(αₙ, rϵ). □

Remark 4.6.4 The point of this proposition is not to prove the existence of a disk containing the pseudospectrum, which is guaranteed by the compactness of the pseudospectrum, but to define one explicitly.

Theorem 4.6.5 D(αₙ, rϵ) is the smallest disk centered at αₙ which contains σϵ(A).

proof. Since αₙ is the barycenter, it is unique. Suppose there is another disk D(αₙ, r̃ϵ), different from D(αₙ, rϵ), which is the smallest disk centered at αₙ containing σϵ(A); then r̃ϵ < rϵ. There exists at least one z_k ∈ ∂σϵ(A) such that |αₙ − z_k| = rϵ; on the other hand, |αₙ − z_k| ≤ r̃ϵ. This gives rϵ ≤ r̃ϵ, a contradiction. Hence D(αₙ, rϵ) is the smallest disk centered at αₙ which contains σϵ(A). □

4.7 The pseudoprojection

P(C) denotes the set of subsets of the complex plane. We define

λS = {λs : s ∈ S}, λ ∈ R, S ∈ P(C).

Definition 4.7.1 A map f : P(C) → P(C) is called a pseudoprojection if it satisfies:
1)
f(P₁) ∪ f(P₂) ⊆ f(P₁ ∪ P₂);
2)
f(λP) = λf(P);
3)
f² = f, where f² = f ∘ f.

Definition 4.7.2 Let ℑ(C) be the subset of P(C) given by

ℑ(C) = { σϵ(A) : ϵ ≥ 0 and A ∈ C^{n×n} }.

We define the map ⟨·⟩ : ℑ(C) → ℑ(C) by

⟨σϵ(A)⟩ = D(αₙ, rϵ).

Example 4.7.3 ⟨σϵ(γI)⟩ = D(γ, rϵ), γ ∈ C.

Theorem 4.7.4 ⟨·⟩ is a pseudoprojection.

proof. Let A, B ∈ C^{n×n} and ϵ₁ ≥ 0, ϵ₂ ≥ 0. Then:
1)

⟨σϵ₁(A)⟩ ∪ ⟨σϵ₂(B)⟩ = D(α₁, rϵ₁) ∪ D(α₂, rϵ₂) ⊆ D(α₃, rϵ₃) = ⟨σϵ₁(A) ∪ σϵ₂(B)⟩,

where

α₃ = (α₁ + α₂)/2,   rϵ₃ = sup( sup_{z_k∈σϵ₁(A)} |α₃ − z_k|, sup_{z_k∈σϵ₂(B)} |α₃ − z_k| ).

2)
⟨λσϵ(A)⟩ = D(λα, |λ|rϵ) = λD(α, rϵ) = λ⟨σϵ(A)⟩, λ ∈ R.
3)
⟨⟨σϵ₁(A)⟩⟩ = ⟨D(α, rϵ₁)⟩ = ⟨σ_{rϵ₁}(αI)⟩ = D(α, rϵ₁) = ⟨σϵ₁(A)⟩. □

4.8 Matrix functions

Definition 4.8.1 Let λ₁, . . . , λₙ be the eigenvalues of a matrix A ∈ C^{n×n}. We define the matrix f(A) by

f(A) = (1/2πi) ∮_Γ f(z)(zI − A)⁻¹ dz,

where Γ consists of a finite number of closed simple curves Γ_k such that each λ_k is contained inside Γ_k.
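For a diagonalizable matrix this contour integral reduces to applying f to the eigenvalues: if A = VΛV⁻¹, then f(A) = V f(Λ) V⁻¹. A small Python/NumPy check against a truncated Taylor series of the exponential (the random matrix and truncation order are illustrative choices):

    import numpy as np

    rng = np.random.default_rng(3)
    A = rng.standard_normal((3, 3))

    lam, V = np.linalg.eig(A)
    fA = (V * np.exp(lam)) @ np.linalg.inv(V)   # f(A) = V f(Lambda) V^{-1}

    # compare with a truncated Taylor series of exp(A)
    S, term = np.eye(3), np.eye(3)
    for k in range(1, 30):
        term = term @ A / k
        S += term
    print(np.allclose(fA, S))                   # True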

Theorem 4.8.2 Let ϵ > 0 and let f be an arbitrary function analytic on a neighborhood of D(αₙ, rϵ). Then

∥f(A)∥ ≤ (rϵ/ϵ) sup_{z∈σϵ(A)} |f(z)|.

proof. We have

∥f(A)∥ = ∥(1/2πi) ∮_{∂D(αₙ,rϵ)} f(z)(zI − A)⁻¹ dz∥

≤ (1/2π) ∮_{∂D(αₙ,rϵ)} |f(z)| ∥(zI − A)⁻¹∥ |dz|

≤ (1/2πϵ) ∮_{∂D(αₙ,rϵ)} |f(z)| |dz|   (since ∥(zI − A)⁻¹∥ ≤ ϵ⁻¹ on ∂D(αₙ, rϵ))

≤ (1/2πϵ) sup_{z∈σϵ(A)} |f(z)| ∮_{∂D(αₙ,rϵ)} |dz|

= (rϵ/ϵ) sup_{z∈σϵ(A)} |f(z)|. □

Definition 4.8.3 Let A ∈ C^{n×n}, ϵ ≥ 0. We define the pseudospectral abscissa of A by

αϵ(A) = sup_{z∈σϵ(A)} Re z.

Theorem 4.8.4 Let A ∈ C^{n×n}, ϵ ≥ 0. Then

∥e^{tA}∥ ≤ (rϵ/ϵ) e^{tαϵ(A)}.

Definition 4.8.5 Let A ∈ C^{n×n}, ϵ ≥ 0. We define the pseudospectral radius of A by

ρϵ(A) = sup_{z∈σϵ(A)} |z|.

Theorem 4.8.6 Let A ∈ C^{n×n}, ϵ ≥ 0. Then

∥Aᵏ∥ ≤ (rϵ/ϵ) ρϵ(A)ᵏ, k ∈ N.

4.9 Circulant matrices (example)

Definition 4.9.1 A circulant matrix is a square matrix in which each row vector is rotated one element to the right relative to the preceding row vector:

    ( c₀       c₁       c₂   · · ·  c_{n−1} )
    ( c_{n−1}  c₀       c₁   · · ·  c_{n−2} )
C = ( c_{n−2}  c_{n−1}  c₀   · · ·  c_{n−3} )
    (   ⋮                ⋱            ⋮     )
    ( c₁       c₂      · · · c_{n−1}  c₀    )

where the coefficients are complex numbers. A circulant matrix is a special case of a Toeplitz matrix, and we write C = circ(c₀, c₁, . . . , c_{n−1}).

Example 4.9.2 Let A = circ(1, 2, 3); then


 
1 2 3
A= 3 1 2 
2 3 1

Proposition 4.9.3 Let A and B be two circulant matrices. Then:
1) A + B is a circulant matrix;
2) AB is a circulant matrix;
3) A* is a circulant matrix.

Proposition 4.9.4 (the spectrum of a circulant matrix) Let C be a circulant matrix and let p be its representing polynomial. Then its spectrum is

σ(C) = { p(1), p(ω), p(ω²), . . . , p(ω^{n−1}) },

where ω is a primitive n-th root of unity. For example, let

    ( 1 0 2 3 )
G = ( 3 1 0 2 )
    ( 2 3 1 0 )
    ( 0 2 3 1 )

The representing polynomial of G is

p(z) = 1 + 2z² + 3z³,

so the spectrum is

σ(G) = {p(1), p(i), p(−1), p(−i)} = {6, −1 − 3i, 0, −1 + 3i}.
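This proposition is exactly what the discrete Fourier transform computes: the eigenvalues of circ(c₀, . . . , c_{n−1}) are the values of the representing polynomial at the n-th roots of unity. A Python/NumPy check of the 4 × 4 example above:

    import numpy as np

    c = np.array([1, 0, 2, 3])                 # first row: G = circ(1, 0, 2, 3)
    n = len(c)
    G = np.array([np.roll(c, k) for k in range(n)])

    w = np.exp(2j * np.pi * np.arange(n) / n)  # roots of unity: 1, i, -1, -i
    p_vals = np.array([np.polyval(c[::-1], wk) for wk in w])

    print(np.sort_complex(np.linalg.eigvals(G)))
    print(np.sort_complex(p_vals))             # same multiset: {6, -1-3i, 0, -1+3i}

(np.polyval takes coefficients from the highest degree down, hence the reversal c[::-1].)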

Remark 4.9.5 A circulant matrix is a normal matrix, so its eigenvalues are not sensitive to perturbations.
Chapter 5

The numerical range of matrix

5.1 The numerical range of matrix

Definition 5.1.1 The numerical range of an operator T is the subset of the complex numbers C given by

W(T) = { ⟨Tx, x⟩ : x ∈ H, ∥x∥ = 1 }.

Let A be an n × n complex matrix. Then the numerical range of A, W(A), is defined to be

W(A) = { x*Ax / x*x : x ∈ Cⁿ, x ≠ 0 },

where x* denotes the conjugate transpose of the vector x.
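Numerically, the boundary of W(A) can be traced by a standard rotation trick: for each angle θ, the largest eigenvalue of the Hermitian part of e^{iθ}A gives a supporting line of the convex set W(A), and the corresponding eigenvector gives a boundary point. A minimal Python/NumPy sketch (the function name and the number of angles are illustrative choices):

    import numpy as np

    def numerical_range_boundary(A, m=360):
        """Boundary points of W(A) via supporting lines."""
        pts = []
        for theta in np.linspace(0.0, 2 * np.pi, m, endpoint=False):
            R = np.exp(1j * theta) * A
            H = (R + R.conj().T) / 2          # Hermitian part of e^{i theta} A
            w, V = np.linalg.eigh(H)          # eigenvalues in ascending order
            x = V[:, -1]                      # unit eigenvector of the largest one
            pts.append(x.conj() @ A @ x)      # a boundary point of W(A)
        return np.array(pts)

Plotting these points for the 2 × 2 matrix T of the example below traces the circle |z| = 1/2 found there.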

Example 5.1.2 Let T be the operator on C² defined by

T = ( 0 1 )
    ( 0 0 )

If x = (f, g), then ∥x∥² = |f|² + |g|² = 1. We have Tx = (g, 0) and ⟨Tx, x⟩ = gf̄.
Notice that

|⟨Tx, x⟩| = |g||f| ≤ (1/2)(|f|² + |g|²) ≤ 1/2.

Then

W(T) ⊂ { z : |z| ≤ 1/2 }.

Conversely, let z = re^{iθ} where 0 ≤ r ≤ 1/2. If we choose x = (cos α, e^{iθ} sin α), where sin 2α = 2r ≤ 1 and 0 ≤ α ≤ π/4, we see that ⟨Tx, x⟩ = e^{iθ} sin α cos α = re^{iθ}. Then

W(T) = { z : |z| ≤ 1/2 }.
2
Let T be the unilateral shift on H, the Hilbert space l² of square-summable sequences. For any f = (f₁, f₂, f₃, . . .) ∈ H with ∥f∥ = 1, we have Tf = (f₂, f₃, . . .) and hence

⟨Tf, f⟩ = f̄₁f₂ + f̄₂f₃ + f̄₃f₄ + · · ·,

with

|f₁|² + |f₂|² + · · · = 1.

Notice that

|⟨Tf, f⟩| ≤ |f₁||f₂| + |f₂||f₃| + · · · ≤ (1/2)[|f₁|² + 2|f₂|² + 2|f₃|² + · · ·] = (1/2)[2 − |f₁|²].

Thus |⟨Tf, f⟩| < 1 if |f₁| ≠ 0. If |f₁| = 0 and f contains a finite number of nonzero entries, we can show in the same way that |⟨Tf, f⟩| < 1 by considering the minimal natural number n for which fₙ ≠ 0. Thus W(T) is contained in the open unit disk

{ z : |z| < 1 }.

We now show that it is in fact the whole open unit disk. Let z = re^{iθ}, 0 ≤ r < 1, be any point of this disk. Consider

f = (√(1 − r²), r√(1 − r²) e^{−iθ}, r²√(1 − r²) e^{−2iθ}, . . .).

Observe that

∥f∥² = (1 − r²) + r²(1 − r²) + r⁴(1 − r²) + · · · = 1,

so

⟨Tf, f⟩ = re^{iθ}.
Let the transformation A : C² → C² be represented by

A = ( r  b )
    ( 0 −r ),  r ∈ R, b ∈ C.

Let (f, g) be a unit vector in C², f = e^{iα} cos θ, g = e^{iβ} sin θ, with θ ∈ [0, π/2] and α, β ∈ [0, 2π]. Then we have

Af = (re^{iα} cos θ + be^{iβ} sin θ, −re^{iβ} sin θ),

and

⟨Af, f⟩ = r(cos²θ − sin²θ) + be^{i(β−α)} sin θ cos θ = x + iy,

with

x = r cos 2θ + (|b|/2) sin 2θ cos(β − α + γ),
y = (|b|/2) sin 2θ sin(β − α + γ),
γ = arg b,

so that

(x − r cos 2θ)² + y² = (|b|²/4) sin² 2θ.

This is a family of circles and we can now obtain their union. Rewriting this last expression as

(x − r cos φ)² + y² = (|b|²/4) sin² φ, with 0 ≤ φ ≤ π,

and eliminating φ between the last two equations, one obtains

x²/(r² + |b|²/4) + y²/(|b|²/4) = 1.

This is an ellipse centered at 0, with minor axis |b| and major axis √(4r² + |b|²).

Proposition 5.1.4 Let A, B ∈ C^{n×n}, γ ∈ C, α, β ∈ C, and let U be a unitary matrix. Then:
1)
W(A + B) ⊆ W(A) + W(B);
2)
W(γA) = γW(A);
3)
W(αA + βI) = αW(A) + β;
4)
W(A*) = { z̄ : z ∈ W(A) };
5)
W(U*AU) = W(A).

Proposition 5.1.5 Let A ∈ C^{n×n} and define H = (1/2)(A + A*) and S = (1/2)(A − A*), so that H is Hermitian and S is skew-Hermitian. Then

W(A) ⊆ W(H) + W(S), with Re W(A) = W(H) and i Im W(A) = W(S).

proof. We have A = H + S. Let x ∈ Cⁿ with ∥x∥ = 1. Then

x*Hx = (1/2) x*(A + A*)x = (1/2)[x*Ax + conj(x*Ax)] = Re(x*Ax),

which is real, while x*Sx = i Im(x*Ax) is purely imaginary. Hence every z = x*Ax ∈ W(A) decomposes as z = x*Hx + x*Sx with x*Hx ∈ W(H) and x*Sx ∈ W(S); in other words, Re W(A) = W(H), i Im W(A) = W(S), and W(A) ⊆ W(H) + W(S). □

5.2 The numerical abscissa

Definition 5.2.1 Let A ∈ C^{n×n}. We define the (positive) numerical abscissa of A as the number

W⁺(A) = sup_{z∈W(A)} Re z,

where W(A) is the numerical range. In a Hilbert space, the positive numerical abscissa is given by

W⁺(A) = sup λ, where λ ranges over the eigenvalues of (A + A*)/2.

Then

W⁺(A) = W⁺(A*).

Definition 5.2.2 Let A ∈ C^{n×n}. We define the negative abscissa of A as the number

W⁻(A) = inf λ, where λ ranges over the eigenvalues of the matrix (A + A*)/2.

Proposition 5.2.3 Let A ∈ C^{n×n}. Then

W⁻(A) = inf_{z∈W(A)} Re z.

proof. Let λ range over the eigenvalues of (A + A*)/2; then

W⁻(A) = inf λ = inf_{∥x∥=1} ⟨x, ((A + A*)/2) x⟩ = inf_{∥x∥=1} Re(x*Ax) = inf_{z∈W(A)} Re z. □

Theorem 5.2.4 Let A ∈ C^{n×n}. Then

W⁻(A) ≤ α(A) ≤ W⁺(A).

Theorem 5.2.5 The numerical range of a matrix A is a nonempty, bounded and convex set.

Theorem 5.2.6 Let A ∈ C^{n×n}. If H is finite dimensional, then W(A) is compact.

Theorem 5.2.7 Let A ∈ C^{n×n}. If 0 ∈ W(A), then the numerical range W(A) is closed.

Lemma 5.2.8 Let A be a matrix on a two-dimensional space. Then W(A) is an ellipse (possibly degenerate) whose foci are the eigenvalues of A.

5.3 The numerical radius

Definition 5.3.1 The numerical radius r(A) of a matrix A ∈ C^{n×n} is given by

r(A) = sup { |z| : z ∈ W(A) }.

Notice that, for ∥x∥ = 1,

|⟨Ax, x⟩| ≤ ∥A∥ ∥x∥², so that r(A) ≤ ∥A∥.

Theorem 5.3.2 For A ∈ C^{n×n} we have

r(A) ≤ ∥A∥ ≤ 2r(A).
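A numerical check of these inequalities: the numerical radius can be computed, to grid accuracy, from the support-function identity r(A) = max_θ λ_max((e^{iθ}A + e^{−iθ}A*)/2). A Python/NumPy sketch (the random test matrix and the number of angles are illustrative choices):

    import numpy as np

    def numerical_radius(A, m=720):
        """r(A) = max over theta of the largest eigenvalue of the
        Hermitian part of e^{i theta} A (support function of W(A))."""
        thetas = np.linspace(0.0, 2 * np.pi, m, endpoint=False)
        return max(np.linalg.eigvalsh(
            (np.exp(1j * t) * A + np.exp(-1j * t) * A.conj().T) / 2)[-1]
            for t in thetas)

    rng = np.random.default_rng(2)
    A = rng.standard_normal((5, 5))
    r, nrm = numerical_radius(A), np.linalg.norm(A, 2)
    print(r <= nrm + 1e-9, nrm <= 2 * r + 1e-9)   # True True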

5.4 The relation between the spectrum and the numerical range

Proposition 5.4.1 Let A ∈ C^{n×n}. Then

σ(A) ⊂ W(A).

proof. Let λ ∈ σ(A) and let x be a corresponding eigenvector with ∥x∥ = 1. Then Ax = λx, so ⟨(A − λ)x, x⟩ = 0, that is, ⟨Ax, x⟩ = λ, hence λ ∈ W(A). □

Theorem 5.4.2 (spectral inclusion) The spectrum of a matrix A ∈ C^{n×n} is contained in the closure of its numerical range, so that

σ(A) ⊂ cl W(A).

proof. The boundary of the spectrum is contained in the approximate point spectrum σ_app(A), and since W(A) is convex, it suffices to show that σ_app(A) ⊂ cl W(A). Consider λ ∈ σ_app(A) and a sequence {fₙ} of unit vectors with

∥(A − λI)fₙ∥ → 0.

Then, by the Schwarz inequality,

|⟨(A − λI)fₙ, fₙ⟩| ≤ ∥(A − λI)fₙ∥ → 0,

thus

⟨Afₙ, fₙ⟩ → λ,

so λ ∈ cl W(A). □

5.5 The relation between the pseudospectrum and the numerical range of a matrix

Theorem 5.5.1 Let A ∈ C^{n×n}. Then

σϵ(A) ⊆ W(A) + Δϵ,

where Δϵ is the closed disk of center 0 and radius ϵ.

proof. Let z ∈ σϵ(A); then there exists E with ∥E∥ ≤ ϵ such that z ∈ σ(A + E), and hence v ≠ 0 with (A + E)v = zv. We can therefore write

z = v*(A + E)v / v*v = v*Av/v*v + v*Ev/v*v,

where v*Av/v*v ∈ W(A) and |v*Ev/v*v| ≤ ∥E∥ ≤ ϵ. □

Theorem 5.5.2 Let A ∈ C^{n×n}, u ∈ Cⁿ with ∥u∥ = 1, and ϵ ≥ 0. If z ∈ σϵ(A), then there exist w ∈ Cⁿ and E with ∥E∥ ≤ ϵ such that

W(A + E) ∩ W(zI + wu*) ≠ ∅.

proof. z ∈ σϵ(A) implies (A + E)u = zu + w for some ∥E∥ ≤ ϵ and some residual w ∈ Cⁿ, so u*(A + E)u = u*(zu + w) = z + u*w. Since

W(A + E) = { x*(A + E)x : ∥x∥ = 1 },

taking x = u gives z + u*w ∈ W(A + E). Moreover,

W(zI + wu*) = { y*(zI + wu*)y : ∥y∥ = 1 } = { z + (y*w)(u*y) : ∥y∥ = 1 },

and taking y = u gives z + u*w ∈ W(zI + wu*). Hence

W(A + E) ∩ W(zI + wu*) ≠ ∅. □

Proposition 5.5.3 Let A ∈ C^{n×n}, ϵ ≥ 0 and E ∈ C^{n×n} with ∥E∥ ≤ ϵ. Then

W(A + E) ⊆ W(A) + Δϵ.

proof. We have W(A + E) ⊆ W(A) + W(E). Suppose z ∈ W(E); then z = x*Ex for some x with ∥x∥ = 1, so

|z| ≤ ∥E∥ ≤ ϵ,

that is, z ∈ Δϵ. Hence W(E) ⊆ Δϵ, and therefore

W(A + E) ⊆ W(A) + Δϵ. □

5.6 The numerical range of self-adjoint, normaloid and spectraloid matrices

Theorem 5.6.1 If A ∈ C^{n×n} is self-adjoint, then W(A) is real.

Definition 5.6.2 Let H be the Hilbert space and A ∈ C^{n×n}. Then:
1) A is normaloid if
r(A) = ∥A∥;
2) A is spectraloid if
ρ(A) = r(A).

Lemma 5.6.3 A is spectraloid if and only if

r(Aⁿ) = r(A)ⁿ for all n ∈ N.

proof. If A is spectraloid, then using the power inequality r(Aⁿ) ≤ r(A)ⁿ and ρ ≤ r we get

r(A)ⁿ = ρ(A)ⁿ = ρ(Aⁿ) ≤ r(Aⁿ) ≤ r(A)ⁿ,

so r(Aⁿ) = r(A)ⁿ. Conversely, if r(Aⁿ) = r(A)ⁿ for all n, then r(A) = r(Aⁿ)^{1/n} ≤ ∥Aⁿ∥^{1/n} → ρ(A), hence r(A) ≤ ρ(A) ≤ r(A), and A is spectraloid. □

5.7 Pseudo-commuting matrices

Definition 5.7.1 Let A, B ∈ C^{n×n}. We say that A and B commute if AB = BA, or equivalently if their commutator

[A, B] = AB − BA

vanishes.

Proposition 5.7.2 Let A, B ∈ C^{n×n}. Then:
1)
[A, B] = −[B, A];
2)
[A, B]* = [B*, A*];
3)
[λA, B] = [A, λB] = λ[A, B], λ ∈ C.

Definition 5.7.3 Let A, B ∈ C^{n×n} and ϵ ≥ 0. A and B are said to be ϵ-pseudo-commuting if and only if

∥[A, B]∥ ≤ ϵ.

Lemma 5.7.4 Let A ∈ C^{n×n} and u, v ∈ Cⁿ with ∥u∥ = ∥v∥ = 1. Then A and vu* pseudo-commute, since

∥[A, vu*]∥ ≤ ∥Avu*∥ + ∥vu*A∥ ≤ 2∥A∥.

Theorem 5.7.5 Let A ∈ C^{n×n}, u ∈ Cⁿ with ∥u∥ = 1. If z ∈ σϵ(A), then there exist w ∈ Cⁿ and v ∈ Cⁿ with ∥v∥ = 1 such that

W(A + (1/2)[A, vu*]) ∩ W(zI + wu*) ≠ ∅.

proof. This follows from Theorem 5.5.2 by taking E = (1/2)[A, vu*] with v chosen so that

∥E∥ = (1/2)∥[A, vu*]∥ ≤ ϵ. □

5.8 Generalized spectrum and numerical range of a matrix of the Lorentzian oscillator group of dimension four

Connected Lie groups that admit a bi-invariant Lorentzian metric were determined by the first of the authors in [?]. Among them, those that are solvable, non-commutative, and simply connected are called oscillator groups. These groups have many properties useful both in geometry and physics.
We study here the geometry of these groups and their lattices, i.e. their co-compact discrete subgroups. If G is an oscillator group, its lattices determine compact homogeneous Lorentz manifolds on which G acts by isometries.
Let H_{2k+1} = R × Cᵏ be the Heisenberg group and let λ = (λ₁, λ₂, . . . , λ_k) be k strictly positive real numbers. Let the additive group R act on H_{2k+1} by the action

ρ(t)(u, (z_j)) = (u, (e^{iλ_j t} z_j)).

The group G_k(λ), a semi-direct product of R by H_{2k+1} via ρ, can be equipped with a bi-invariant Lorentz metric. Here is how it is built:

g = R × R × R^{2k}

is the tangent space at the origin. We extend the usual scalar product of R^{2k} to a symmetric bilinear form on g such that the plane R × R is hyperbolic and orthogonal to R^{2k}. This form defines a left-invariant Lorentz metric on G_k(λ); it is also right-invariant because the adjoint operators on g are antisymmetric [?].
The groups G_k(λ) are characterized [?] by:

Theorem 5.8.1 The groups G_k(λ) are the only simply connected, solvable and non-commutative Lie groups which admit a bi-invariant Lorentz metric.
5.8.1 The oscillator group of dimension 4

Let λ > 0. Let Gλ be the Lie group with underlying manifold R⁴ = R × C × R and product

(x₁, z₁, y₁) · (x₂, z₂, y₂) = ( x₁ + x₂ + (1/2) Im(z̄₁ e^{iy₁} z₂), z₁ + e^{iy₁} z₂, y₁ + y₂ ),

where (x₁, z₁, y₁), (x₂, z₂, y₂) ∈ R × C × R. Gλ is a simply connected, solvable and non-commutative Lie group of dimension 4, called the oscillator group.
The oscillator group of dimension 4 does not admit a flat invariant metric, but it is a simply connected, solvable, non-commutative Lie group which admits a bi-invariant Lorentzian metric (i.e. invariant under both left and right translations).
The Lie algebra of the oscillator group, denoted gλ, is called the oscillator algebra. gλ is defined as the real Lie algebra generated by four left-invariant vector fields denoted X, Y, P, Q, whose Lie brackets are given by

(5.1) [X, Y] = P, [Q, X] = λY, [Q, Y] = −λX.
The vector fields X, Y, P, Q verifying (5.1) can be represented in matrix form as follows (see [?] and [?]):
   
0 0 1 0 0 −1 0 0
 0 0 0 1   0 0 0 0 
X=  
 0 0 0 0 , Y =  0 0 0 1 ,

0 0 0 0 0 0 0 0
   
0 0 0 2 0 0 0 0
 0 0 0 0   
P =  , Q =  0 0 −λ 0  .
 0 0 0 0   0 λ 0 0 
0 0 0 0 0 0 0 0
Then the oscillator group of dimension 4 can be seen as a subgroup of GL(4, R) (see [?]):

Gλ = { Mλ(x₁, x₂, x₃, x₄) ∈ GL(4, R) | x₁, x₂, x₃, x₄ ∈ R },

whose elements are

Mλ(x_i) = exp(x₁P) exp(x₂X) exp(x₃Y) exp(x₄Q),
that is,
 
1 x2 sin(λx4 ) − x3 cos(λx4 ) x2 cos(λx4 ) + x3 sin(λx4 ) 2x1 + x2 x3
 0 cos(λx4 ) − sin(λx4 ) x2 
Mλ (xi ) = 
 0
.

sin(λx4 ) cos(λx4 ) x3
0 0 0 1
In particular, Mλ is a diffeomorphism between Gλ and R³ × R/(2π/λ)Z.

In the sequel we denote by ∂_i = ∂/∂x_i the coordinate vector fields in the local coordinate chart (x₁, x₂, x₃, x₄); in matrix notation, ∂_i corresponds to ∂Mλ/∂x_i (x₁, x₂, x₃, x₄).
Now let {e₁, e₂, e₃, e₄} be a basis of left-invariant vector fields on Gλ such that (e_j)_I = (∂_j)_I for j ∈ {1, 2, 3, 4}, where I = Mλ(0, 0, 0, 2kπ/λ), for an integer k, is the identity matrix. Then ([?], [?])
0 0 0 2
 0 0 0 0 
(e1 )Mλ (x1 ,x2 ,x3 ,x4 ) = 
 0 0 0 0 ,

0 0 0 0
 
0 0 1 x2 sin(λx4 ) − x3 cos(λx4 )
 0 0 0 cos(λx4 ) 
(e2 )Mλ (x1 ,x2 ,x3 ,x4 ) = 
 0 0 0
,

sin(λx4 )
0 0 0 0
 
0 −1 0 x2 cos(λx4 ) + x3 sin(λx4 )
 0 0 0 − sin(λx4 ) 
(e3 )Mλ (x1 ,x2 ,x3 ,x4 ) = 
 0 0 0
,

cos(λx4 )
0 0 0 0
 
0 x2 cos(λx4 ) + x3 sin(λx4 ) −x2 sin(λx4 ) + x3 cos(λx4 ) 0
 0 − sin(λx4 ) − cos(λx4 ) 0 
(e4 )Mλ (x1 ,x2 ,x3 ,x4 ) = λ 
 0
.
cos(λx4 ) − sin(λx4 ) 0 
0 0 0 0
Then the basis of left-invariant vector fields on Gλ is given by

e₁ = ∂₁,
e₂ = −x₃ cos(λx₄)∂₁ + cos(λx₄)∂₂ + sin(λx₄)∂₃,        (5.2)
e₃ = x₃ sin(λx₄)∂₁ − sin(λx₄)∂₂ + cos(λx₄)∂₃,
e₄ = ∂₄.

Using (5.2), a direct computation shows that the only nonzero Lie brackets [e_i, e_j] are given by

(5.3) [e₂, e₃] = e₁, [e₂, e₄] = −λe₃, [e₃, e₄] = λe₂.

Comparing (5.3) with (5.1), we see that the Lie algebra of Gλ coincides with the oscillator algebra if we set X = e₂, Y = e₃, P = e₁ and Q = e₄. So Gλ is the oscillator group.

Remark 5.8.2 Such a representation of the oscillator group G₁ as a matrix subgroup of GL(4, R) was first introduced by Streater in the important article [?]; the case λ ≠ 1 was studied by Calvaruso and Van der Veken [?].

We now consider on Gλ a one-parameter family of left-invariant Lorentzian metrics g_a. With respect to the coordinates (x₁, x₂, x₃, x₄), the metric g_a is explicitly given by

g_a = a dx₁² + 2ax₃ dx₁dx₂ + (1 + ax₃²) dx₂² + dx₃² + 2 dx₁dx₄ + 2x₃ dx₂dx₄ + a dx₄²,

with −1 < a < 1.
Note that for a = 0 and λ = 1 we recover the bi-invariant metric on the oscillator group G₁ [?]; in all other cases, g_a is only left-invariant. The matrix of the metric g_a is given by
 
a ax3 0 1
 ax3 1 + ax23 0 x3 
Aa =  0
.
0 1 0 
1 x3 0 a

5.8.2 Eigenvalues and pseudospectrum of the matrix Aa

Proposition 5.8.3 The eigenvalues of the matrix Aa are:

λ₁ = 1,
λ₂ = (2/3)a + (1/3)ax₃² + 1/3 − (1/2)(S − C/S) − (√3/2) i (S + C/S),
λ₃ = λ̄₂,
λ₄ = (2/3)a + (1/3)ax₃² + 1/3 + S − C/S,

with

S = ∛( M + √N − 8/27 ),

and

M = (2/9)a + (1/9)a² − (1/27)a³ + (1/6)x₃² + (11/18)ax₃² + (1/6)ax₃⁴
  − (1/18)a²x₃² − (1/18)a³x₃² + (1/9)a²x₃⁴ + (1/18)a³x₃⁴ + (1/27)a³x₃⁶,

N = (4/27)a³ − (4/27)a² − (1/27)a⁴ − (8/27)x₃² − (13/108)x₃⁴ − (1/27)x₃⁶ − (2/9)ax₃² − (1/54)ax₃⁴ − (1/54)ax₃⁶ + (7/27)a²x₃²
  + (4/27)a³x₃² + (7/36)a²x₃⁴ − (1/9)a⁴x₃² + (1/18)a²x₃⁶ − (11/108)a⁴x₃⁴ + (1/27)a³x₃⁶ + (1/54)a⁵x₃⁴ − (1/108)a²x₃⁸
  − (1/108)a⁶x₃⁴ − (1/54)a⁵x₃⁶ + (1/54)a⁴x₃⁸ − (1/54)a⁶x₃⁶ − (1/108)a⁶x₃⁸,

C = (2/9)a − (1/9)a² − (1/3)x₃² − (2/9)ax₃² − (1/9)a²x₃² − (1/9)a²x₃⁴ − 4/9.
proof. We have

det(Aa − λI₄) = (1 − λ)(−λ³ + Lλ² + Kλ + (a² − 1)),

with

L = 1 + 2a + ax₃²,
K = −a² − 2a − a²x₃² + x₃² + 1,

so det(Aa − λI₄) = 0 if and only if either λ₁ = 1 or

−λ³ + Lλ² + Kλ + (a² − 1) = 0.

Following the CARDAN method, we reduce this cubic to

z³ + pz + q = 0,

by the substitution

(5.4) z = λ − L/3, z ∈ C,

with

p = −(L²/3 + K) = −(1/3)(4 + a²x₃⁴ + 2ax₃² + a² − 2a + a²x₃² + 3x₃²),

q = −(1/27)(−16 + 2a³x₃⁶ + 6a²x₃⁴ + 33ax₃² − 2a³ + 6a² − 3a³x₃² + 12a + 3a³x₃⁴ − 3a²x₃² + 9x₃²).
Then the CARDAN method says that the three solutions are

z_k = jᵏ ∛( (1/2)(−q + √(−Δ/27)) ) + j⁻ᵏ ∛( (1/2)(−q − √(−Δ/27)) ), 0 ≤ k ≤ 2,

where

Δ = −4p³ − 27q²,
j = e^{i2π/3}.

So, according to (5.4), we find

λ_k = z_k + L/3, 0 ≤ k ≤ 2. □
Pseudospectrum of Aa: since Aa is real symmetric, it is normal; therefore its pseudospectrum, denoted Λϵ(Aa), is given by

Λϵ(Aa) = { z ∈ C : |z − λ_i| ≤ ϵ for some i ∈ {1, . . . , 4} }.
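A quick numerical check of Proposition 5.8.3 for sample parameter values (the choices of a and x₃ below are arbitrary): λ = 1 is always an eigenvalue (the third row of Aa − I vanishes), and since Aa is real symmetric, all four eigenvalues are real.

    import numpy as np

    a, x3 = 0.3, 0.7          # illustrative parameter values, |a| < 1
    Aa = np.array([
        [a,    a*x3,         0.0, 1.0],
        [a*x3, 1 + a*x3**2,  0.0, x3 ],
        [0.0,  0.0,          1.0, 0.0],
        [1.0,  x3,           0.0, a  ]])

    w = np.linalg.eigvalsh(Aa)          # real eigenvalues of the symmetric matrix
    print(w)                            # one of them equals 1
    print(np.isclose(w, 1.0).any())     # True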

5.8.3 Numerical range of the matrix Aa

Proposition 5.8.4 The numerical range of the matrix Aa satisfies the following relation:

x*Aa x / x*x ≤ (1 + |a|)(1 + |x₃|) + ax₃².

proof. We have

W(Aa) = { x*Aa x / x*x : x ∈ C⁴, x ≠ 0 }.

We put x = (z₁, z₂, z₃, z₄)ᵀ, with z_i = r_i e^{iθ_i}. We have

x*Aa x = a|z₁|² + a|z₄|² + |z₂|² + |z₃|² + ax₃(z̄₁z₂ + z̄₂z₁) + x₃(z̄₂z₄ + z̄₄z₂) + (z̄₁z₄ + z̄₄z₁) + a|z₂|²x₃²,
so,

x*Aa x / x*x = 1 + (a − 1)(|z₁|² + |z₄|²)/Σ|z_i|² + ax₃(z̄₁z₂ + z̄₂z₁)/Σ|z_i|² + x₃(z̄₂z₄ + z̄₄z₂)/Σ|z_i|² + (z̄₁z₄ + z̄₄z₁)/Σ|z_i|² + ax₃²|z₂|²/Σ|z_i|²,

where Σ|z_i|² = Σ_{i=1}^{4} |z_i|². We have

(5.5) |z_j|² / Σ_{i=1}^{4} |z_i|² ≤ 1, ∀j ∈ {1, . . . , 4},

and

(5.6) (z̄_i z_j + z̄_j z_i) / Σ_{i=1}^{4} |z_i|² ≤ 1, ∀i, j ∈ {1, . . . , 4}.

So from (5.5) and (5.6), and since (a − 1)(|z₁|² + |z₄|²) ≤ 0 for a < 1, we find

x*Aa x / x*x ≤ 1 + |ax₃| + |x₃| + |a| + ax₃².

This is what had to be proven. □

Example 5.8.5 1) For a = 0 and x₃ = 0,

        ( 0 0 0 1 )
A₀⁰ =   ( 0 1 0 0 )
        ( 0 0 1 0 )
        ( 1 0 0 0 )

so

g₀⁰(x*, x)/x*x = 1 − [r₁² + r₄² − 2r₁r₄ cos(θ₁ − θ₄)] / (r₁² + r₂² + r₃² + r₄²) ≤ 1.

We now show that g₀⁰(x*, x)/x*x = 1 is attained; we need

[r₁² + r₄² − 2r₁r₄ cos(θ₁ − θ₄)] / (r₁² + r₂² + r₃² + r₄²) = 0,

which holds for r₁ = r₄, θ₁ − θ₄ = 2kπ (k ∈ N) and r₂ = r₃; hence 1 ∈ W(A₀⁰).
On the other hand, we have

−[r₁² + r₄² − 2r₁r₄ cos(θ₁ − θ₄)] / (r₁² + r₂² + r₃² + r₄²) ≥ −2;

to prove this, we suppose the contrary and show that it cannot hold. That would mean

[r₁² + r₄² − 2r₁r₄ cos(θ₁ − θ₄)] / (r₁² + r₂² + r₃² + r₄²) ≥ 2,

that is,

2r₂² + 2r₃² + r₁² + r₄² + 2r₁r₄ cos(θ₁ − θ₄) ≤ 0,

i.e.

2r₂² + 2r₃² + (r₁ cos θ₁ + r₄ cos θ₄)² + (r₁ sin θ₁ + r₄ sin θ₄)² ≤ 0 (which never occurs).

So

g₀⁰(x*, x)/x*x ≥ −1,

and for r₁ = r₄, θ₁ − θ₄ = (2k + 1)π and r₂ = r₃ = 0 (k ∈ N) we have g₀⁰(x*, x)/x*x = −1. So W(A₀⁰) = [−1, 1].
2) For a = 0 and x₃ = 0.5,

           ( 0 0   0 1   )
A₀^{0.5} = ( 0 1   0 0.5 )
           ( 0 0   1 0   )
           ( 1 0.5 0 0   )

so

g₀^{0.5}(x*, x)/x*x = g₀⁰(x*, x)/x*x + r₂r₄ cos(θ₂ − θ₄)/(r₁² + r₂² + r₃² + r₄²).

We have

r₂r₄ cos(θ₂ − θ₄)/(r₁² + r₂² + r₃² + r₄²) ≤ 1/2;

to prove it, we suppose the contrary,

r₂r₄ cos(θ₂ − θ₄)/(r₁² + r₂² + r₃² + r₄²) ≥ 1/2,

which gives

r₁² + r₂² + r₃² + r₄² − 2r₂r₄ cos(θ₂ − θ₄) ≤ 0,

that is,

r₁² + r₃² + (r₂ cos θ₂ − r₄ cos θ₄)² + (r₂ sin θ₂ − r₄ sin θ₄)² ≤ 0,

which is impossible since x ≠ 0. Hence g₀^{0.5}(x*, x)/x*x ≤ 1 + 1/2 = 3/2.
Likewise,

g₀^{0.5}(x*, x)/x*x ≥ −5/4;

to prove it, we show that the contrary cannot hold. Supposing

g₀^{0.5}(x*, x)/x*x ≤ −5/4

means

(9/4)(r₁² + r₂² + r₃² + r₄²) − r₁² − r₄² + 2r₁r₄ cos(θ₁ − θ₄) + r₂r₄ cos(θ₂ − θ₄) ≤ 0.

Multiplying by 4 and using 8r₁r₄ ≤ 5r₁² + (16/5)r₄² and 4r₂r₄ ≤ 9r₂² + (4/9)r₄², the left-hand side is bounded below by

9r₃² + (61/45)r₄² ≥ 0,

so equality is forced throughout, which gives r₁ = r₂ = r₃ = r₄ = 0; this is impossible since x ≠ 0.
So we conclude

−5/4 ≤ g₀^{0.5}(x*, x)/x*x ≤ 3/2,

but −5/4 and 3/2 do not belong to W(A₀^{0.5}); let us prove it:
If 3/2 ∈ W(A₀^{0.5}), then

g₀⁰(x*, x)/x*x + r₂r₄ cos(θ₂ − θ₄)/(r₁² + r₂² + r₃² + r₄²) = 3/2,

so

1 − [r₁² + r₄² − 2r₁r₄ cos(θ₁ − θ₄)]/(r₁² + r₂² + r₃² + r₄²) + r₂r₄ cos(θ₂ − θ₄)/(r₁² + r₂² + r₃² + r₄²) = 3/2,

then

[−r₁² − r₄² + 2r₁r₄ cos(θ₁ − θ₄) + r₂r₄ cos(θ₂ − θ₄)]/(r₁² + r₂² + r₃² + r₄²) = 1/2.

So we get

2[(r₁ cos θ₁ − r₄ cos θ₄)² + (r₁ sin θ₁ − r₄ sin θ₄)²] + [(r₂ cos θ₂ − r₄ cos θ₄)² + (r₂ sin θ₂ − r₄ sin θ₄)²] + r₁² + r₃² = 0.

The only way to satisfy this equation is r₁ = r₂ = r₃ = r₄ = 0, which is impossible since x ≠ 0. Hence 3/2 ∉ W(A₀^{0.5}).
With the same method we prove that −5/4 ∉ W(A₀^{0.5}); it amounts to showing that

g₀⁰(x*, x)/x*x + r₂r₄ cos(θ₂ − θ₄)/(r₁² + r₂² + r₃² + r₄²) = −5/4

is impossible. Indeed, it gives

[−r₁² − r₄² + 2r₁r₄ cos(θ₁ − θ₄) + r₂r₄ cos(θ₂ − θ₄)]/(r₁² + r₂² + r₃² + r₄²) = −9/4,

and regrouping into a sum of squares as above again forces r₁ = r₂ = r₃ = r₄ = 0, which is impossible since x ≠ 0. Hence −5/4 ∉ W(A₀^{0.5}), and we get

W(A₀^{0.5}) ⊂ ]−5/4, 3/2[.

Conclusion
Our memoir dealt with one of the most important and newest topics in functional analysis. It focused on several aspects, such as spectral theory, which contributes to the solution of linear equations as well as to many mathematical and physical problems. We also examined the pseudospectrum, which plays a great role in understanding the perturbations of spectral objects.

Then we came to the heart of our study, "the numerical range of an operator", and more precisely the numerical range of matrices. The latter has contributed to many mathematical disciplines, such as operator theory and matrix polynomials, with applications to various areas including C*-algebras.

We ended our work with the article published by Dr DERKOUI RAFIK and ABDERRAHMANE SMAIL in 2020 as an example of computing the spectrum and the numerical range; it concerned the Lorentzian oscillator matrix of dimension four. The subject of the numerical range of operators remains open for scientific research in all its aspects, algebraic, geometric, and beyond.

Bibliography

[1] M. Boucetta, A. Medina, Solutions of the Yang-Baxter equations on orthogonal groups: the case of oscillator groups, J. Geom. Phys. 61 (2011), 2309–2320.

[2] S. Bromberg, A. Medina, Geometry of oscillator groups and locally symmetric manifolds, Geom. Dedicata 106 (2004), 97–111.

[3] G. Calvaruso, J. Van der Veken, Totally geodesic and parallel hypersurfaces of four-dimensional oscillator groups, Results Math. 64 (2013), 135–153.

[4] G. Calvaruso, A. Zaeim, On the symmetries of the Lorentzian oscillator group, Collect. Math. 68 (2017), 51. doi:10.1007/s13348-016-0173-3.

[5] R. Duran Diaz, P.M. Gadea, J.A. Oubiña, Reductive decompositions and Einstein-Yang-Mills equations associated to the oscillator group, J. Math. Phys. 40 (1999), 3490–3498.

[6] R. Duran Diaz, P.M. Gadea, J.A. Oubiña, The oscillator group as a homogeneous spacetime, Lib. Math. 19 (1999), 9–18.

[7] P.M. Gadea, J.A. Oubiña, Homogeneous Lorentzian structures on the oscillator groups, Arch. Math. 73 (1999), 311–320.

[8] K.E. Gustafson, D.K.M. Rao, Numerical Range, Springer, New York, 1997.

[9] R.A. Horn, C.R. Johnson, Topics in Matrix Analysis, Cambridge University Press, Cambridge, 1991.

[10] V. Khiem Ngo, An Approach of Eigenvalue Perturbation Theory, Appl. Num. Anal. Comp. Math. 2 (2005), No. 1, pp. 108–125.

[11] A.V. Levichev, Chronogeometry of an electromagnetic wave given by a bi-invariant metric on the oscillator group, Siberian Math. J. 27 (1986), 237–245.

[12] A. Medina, Groupes de Lie munis de métriques bi-invariantes, Tohoku Math. J. 37 (1985), 405–421.

[13] A. Medina, P. Revoy, Les groupes oscillateurs et leurs réseaux, Manuscripta Math. 52 (1985), 81–95.

[14] A. Medina, Groupes de Lie munis de pseudo-métriques de Riemann bi-invariantes, Séminaire Géom. Diff., 1981–1982, Montpellier.

[15] J. Milnor, Curvatures of left invariant metrics on Lie groups, Adv. in Math. 21 (1976), 283–329.

[16] D. Müller, F. Ricci, On the Laplace-Beltrami operator on the oscillator group, J. Reine Angew. Math. 390 (1988), 193–207.

[17] T. Nomura, The Paley-Wiener theorem for the oscillator group, J. Math. Kyoto Univ. 22 (1982/83), 71–96.

[18] B. O'Neill, Semi-Riemannian Geometry, Academic Press, New York, 1983.

[19] L. Reichel, L.N. Trefethen, Eigenvalues and pseudo-eigenvalues of Toeplitz matrices, Lin. Alg. Applics. 162–164 (1992), pp. 153–185.

[20] R.F. Streater, The representations of the oscillator group, Commun. Math. Phys. 4 (1967), 217–236.

[21] L.N. Trefethen, M. Embree, Spectra and Pseudospectra: The Behavior of Non-Normal Matrices and Operators, Princeton University Press, Princeton, 2005.

[22] L.N. Trefethen, Pseudospectra of matrices, in Numerical Analysis 1991, D.F. Griffiths and G.A. Watson, eds., Longman Scientific and Technical, Harlow, UK, 1992, pp. 234–266.

[23] C. Van Loan, A study of the matrix exponential, Numerical Analysis Report No. 10, University of Manchester, UK, August 1975. Reissued as MIMS EPrint, Manchester Institute for Mathematical Sciences, The University of Manchester, UK, November 2006.