
# Elements of Linear Algebra.

Course notes
September 28, 2015

Cristian Neculaescu
Bucharest University of Economic Studies, Faculty of Cybernetics, Department of
Mathematics, Cybernetics Building (Calea Dorobanți 15-17, sector 1), room 2625. Office
hours: to be announced.
E-mail address: algecon2011@gmail.com
URL: roedu4you.ro, math4you section
Dedicated to the memory of S. Bach.

2000 Mathematics Subject Classification. Primary 05C38, 15A15; Secondary 05A15, 15A18

## The Author thanks V. Exalted.

Abstract. Replace this text with your own abstract.

Contents

Preface

Part 1. Introduction to Linear Algebra

Prerequisites

Chapter 1. Vector Spaces and Vector Subspaces
1.1. Introductory definitions
Terminological Differences
Changing the scalar multiplication may change the properties of the vector space
1.4. Exercises

2.6.3. Examples

Chapter 3. Inner Product Vector Spaces
3.1. Orthogonality
3.6. Vector Spaces over Complex Numbers

Chapter 4. Affine Spaces
4.1. Definitions
4.1.1. Properties

Part 2. Linear Algebra Software Products

Chapter 5. Geogebra

Chapter 6. CARMetal

Part 3. Appendices

Chapter 7. Reviews
Binary Logic
7.2. Sets
7.2.2. Relations
7.2.3. Functions
7.5. General Matrix Topics
7.6. Determinants

Bibliography

Preface

Part 1

## Introduction to Linear Algebra

Prerequisites
These notes are a continuation of the topics learned at the high school level. In particular, the following topics are supposed to be known:
- Elements of Set Theory
- Important Sets of Numbers: N, Z, Q, R, C (and their basic properties)
- Elements of Binary Logic
- Polynomials
- Exponents
- Rational and Radical Expressions
- Functions: definition, injectivity (one-to-one functions), surjectivity (onto functions), bijective (one-to-one and onto) and invertible functions
- Elementary functions:
  - Linear functions
  - Quadratic functions
  - Polynomial functions
  - Rational functions
  - Exponential functions
  - Logarithmic functions
  - Trigonometric and Inverse Trigonometric functions
  - Graphs of Elementary functions
- Complex numbers: modulus and argument, trigonometric form, De Moivre's theorem, n-th roots of complex numbers
- The Principle of Mathematical Induction
- Basic Discrete Mathematics: Permutations and Combinations, the Binomial Theorem
- 2D Analytic Geometry: equation of a line, slope, the Circle, the Ellipse, the Hyperbola, the Parabola
- Linear and Nonlinear Equations and Inequalities
- Basic Linear Algebra: Matrix Algebra, rank, Determinants (of an arbitrary dimension), inverse, Systems of Linear Equations, Cramer's Rule, Rouché-Capelli [Kronecker-Capelli] Theorem
- Basic Abstract Algebra: operation (law), closure, monoids, groups, rings, fields, morphisms.
0.0.1. Exercise. Show that in a monoid the neutral element, if it exists, is unique.

0.0.2. Solution. Consider a monoid $(M, \cdot)$ and suppose there are two neutral elements with respect to "$\cdot$", denoted by $e$ and $f$. Then (by the definition of the neutral element):
(1) $\forall x \in M$, $x \cdot e = e \cdot x = x$, and
(2) $\forall x \in M$, $x \cdot f = f \cdot x = x$.
By replacing $x = f$ in (1) and $x = e$ in (2) we get $f \cdot e = e \cdot f = f$ and $e \cdot f = f \cdot e = e$, so $e = f$: the neutral element, if it exists, is unique.

0.0.3. Exercise. Show that in a monoid with neutral element, if an element is symmetrizable then its symmetric element is unique.

0.0.4. Solution. Consider a monoid $(M, \cdot)$ with neutral element denoted by $e$ and a symmetrizable element denoted by $x_0$. By the definition of the monoid, "$\cdot$" is associative.
Suppose there are two symmetric elements, denoted by $x''$ and $x'''$. Then (by the definition of the symmetric element):
$x_0 \cdot x'' = x'' \cdot x_0 = e$, and
$x_0 \cdot x''' = x''' \cdot x_0 = e$.
Then $x'' = x'' \cdot e = x'' \cdot (x_0 \cdot x''') = (x'' \cdot x_0) \cdot x''' = e \cdot x''' = x'''$. So the symmetric element is unique.
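Both uniqueness arguments can be checked mechanically on any concrete finite monoid; a brute-force sketch (the choice of $(\mathbb{Z}_6, \cdot \bmod 6)$ is an illustration, not part of the exercise):

```python
# Brute-force check of the two uniqueness results on a finite monoid:
# (Z6, * mod 6). The element 1 is neutral; only some elements are invertible.
M = range(6)
op = lambda a, b: (a * b) % 6

# All neutral elements: e with x*e == e*x == x for every x in M.
neutrals = [e for e in M if all(op(x, e) == x == op(e, x) for x in M)]
print(neutrals)  # exactly one neutral element: [1]

# For each x, collect all its symmetrics (inverses); there is at most one.
e = neutrals[0]
for x in M:
    inverses = [y for y in M if op(x, y) == e == op(y, x)]
    assert len(inverses) <= 1
```

The same loop run over any other finite operation table would confirm the two propositions in that monoid as well.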

CHAPTER 1

## Vector Spaces and Vector Subspaces

1.1. Introductory definitions

1.1.1. Definition. (vector space) Given:
The sets:
- $V \neq \emptyset$ (the elements of $V$ are called vectors)¹;
- $K \neq \emptyset$ (the elements of $K$ are called scalars);
The functions:
- $+_K(\cdot, \cdot): K \times K \to K$. The function $+_K$ is a composition law over $K$ called scalar addition and will be denoted shortly by "+";
- $\cdot_K(\cdot, \cdot): K \times K \to K$. The function $\cdot_K$ is a composition law over $K$ called scalar multiplication and will be denoted shortly by "$\cdot$";
- $+_V(\cdot, \cdot): V \times V \to V$. The function $+_V$ is a composition law over $V$ called vector addition and will be denoted shortly by "+";
- $\cdot(\cdot, \cdot): K \times V \to V$. The function $\cdot$ is a composition law over $V$ called multiplication of vectors with scalars and will be denoted by "$\cdot$". For each fixed $\alpha \in K$, the partial operation $V \ni x \mapsto \alpha \cdot x \in V$ may be called homothety (dilatation or dilation) of parameter $\alpha$ [changing this operation may lead to changing the specifics of the vector space; see the second subsection].
The pair $(V, K)$ (together with the above operations) is called a vector space if the following conditions are met:
(1) $(K, +_K, \cdot_K)$ is a commutative field;
(2) $(V, +_V)$ is an Abelian group;
(3) $\forall a, b \in K$, $\forall x, y \in V$, we have:
(a) $(a +_K b) \cdot x = a \cdot x +_V b \cdot x$ (distributivity of "$\cdot$" with respect to "$+_K$");
(b) $a \cdot (x +_V y) = a \cdot x +_V a \cdot y$ (distributivity of "$\cdot$" with respect to "$+_V$");
(c) $a \cdot (b \cdot x) = (a \cdot_K b) \cdot x$ (mixed associativity);
(d) $1_K \cdot x = x$.

¹ In the strict sense, the elements of $V$ are position vectors. The names vector, position vector, point, should be used with care. At the end of this section a subsection on this topic will be included.

1.1.2. Remark. The distinction between different operations will be made from context:
- The element $0_V$ is the neutral element with respect to vector addition and will be denoted by $0$;
- The element $0_K$ is the neutral element with respect to scalar addition and will be denoted by $0$;
- The element $1_K$ is the neutral element with respect to scalar multiplication and will be denoted by $1$.
Notation conventions:
- vectors will be denoted by small latin letters ($a$, $b$, $c$, $u$, $v$, $w$, $x$, $y$, $z$, etc.),
- scalars will be denoted by small greek letters ($\alpha$, $\beta$, $\gamma$, $\lambda$, etc.),
- sets will be denoted by "doubled" (blackboard bold font) capital letters (A, B, C, D, K, L, M, N, R, X, V, etc.),
- all the above may carry subscripts or superscripts ($\alpha_0$, $x^{(2)}$), depending on the context.

The following are examples of common situations in which vector spaces are used. The 2D and 3D examples are perhaps among the simplest, but simple is not always good: this simplicity has both good and bad aspects, in that these examples offer geometrical intuition, which is helpful in two and three dimensions but devastating in arbitrary dimensions. Moreover, even in two or three dimensions the pictures may be misleading, because they suggest a certain way of measuring distances, which is not always appropriate.
1.1.3. Example (A beautiful business situation). As the first example in which vector spaces are present I choose a beautiful old problem (in fact, a set of two problems) [found in , page 60, problems 41 and 42].
The prices for bushels of wheat and rye are $p_w$ and $p_r$. The market demands for bushels of wheat and rye are $D_w = 4 - 10p_w + 7p_r$ and $D_r = 3 + 7p_w - 5p_r$. The market supplies of bushels of wheat and rye are $S_w = 7 + p_w - p_r$ and $S_r = -27 - p_w + 2p_r$.
1. Find the equilibrium prices (the prices which equate demand and supply for both goods).
A tax $t_w$ per bushel is imposed on wheat producers and a tax $t_r$ per bushel is imposed on rye producers.
2. Find the new prices as functions of the taxes.
3. Find the increase of prices due to taxes.
4. Show that a tax on wheat alone reduces both prices.
5. Show that a tax on rye alone increases both prices, with the increase in the rye price being greater than the tax on rye.

1.1.4. Solution. Equilibrium takes place when demand meets supply:
$$\begin{cases} D_w = S_w \\ D_r = S_r \end{cases} \iff \begin{cases} 4 - 10p_w + 7p_r = 7 + p_w - p_r \\ 3 + 7p_w - 5p_r = -27 - p_w + 2p_r \end{cases}$$
with solution $p_w^0 = \frac{219}{13} \approx 16.84615385$ and $p_r^0 = \frac{306}{13} \approx 23.53846154$.
So 1. the equilibrium prices are $p_w^0 = \frac{219}{13}$ and $p_r^0 = \frac{306}{13}$. [At these equilibrium prices, the demand and supply are equal, namely $\frac{4}{13}$ (for wheat) and $\frac{42}{13}$ (for rye).]
When taxes are imposed on producers, the supply of both goods will take place at prices lowered by the taxes:
- the demand remains unchanged: $D_w = 4 - 10p_w + 7p_r$ and $D_r = 3 + 7p_w - 5p_r$,
- while the supply takes place at prices lowered by the taxes: $S_w = 7 + (p_w - t_w) - (p_r - t_r)$ and $S_r = -27 - (p_w - t_w) + 2(p_r - t_r)$.
Equilibrium again happens when demand meets supply:
$$\begin{cases} D_w = S_w \\ D_r = S_r \end{cases} \iff \begin{cases} 4 - 10p_w + 7p_r = 7 + (p_w - t_w) - (p_r - t_r) \\ 3 + 7p_w - 5p_r = -27 - (p_w - t_w) + 2(p_r - t_r) \end{cases}$$
with solution:
$$\begin{cases} p_w = \frac{9}{13}t_r - \frac{1}{13}t_w + \frac{219}{13} \\ p_r = \frac{14}{13}t_r - \frac{3}{13}t_w + \frac{306}{13} \end{cases}$$
2. The new prices as functions of the taxes:
$$\begin{bmatrix} p_w \\ p_r \end{bmatrix} = \begin{bmatrix} \frac{219}{13} \\ \frac{306}{13} \end{bmatrix} + \left( t_r \begin{bmatrix} \frac{9}{13} \\ \frac{14}{13} \end{bmatrix} - t_w \begin{bmatrix} \frac{1}{13} \\ \frac{3}{13} \end{bmatrix} \right)$$
3. The increase of prices due to taxes is:
$$t_r \begin{bmatrix} \frac{9}{13} \\ \frac{14}{13} \end{bmatrix} - t_w \begin{bmatrix} \frac{1}{13} \\ \frac{3}{13} \end{bmatrix}$$
4. A tax on wheat alone means that $t_r = 0$; the prices in this situation are
$$\begin{bmatrix} p_w \\ p_r \end{bmatrix} = \begin{bmatrix} \frac{219}{13} \\ \frac{306}{13} \end{bmatrix} - t_w \begin{bmatrix} \frac{1}{13} \\ \frac{3}{13} \end{bmatrix}$$
so a tax on wheat alone reduces both prices.
5. A tax on rye alone means that $t_w = 0$; the prices in this situation are
$$\begin{bmatrix} p_w \\ p_r \end{bmatrix} = \begin{bmatrix} \frac{219}{13} \\ \frac{306}{13} \end{bmatrix} + t_r \begin{bmatrix} \frac{9}{13} \\ \frac{14}{13} \end{bmatrix}$$
so a tax on rye alone increases both prices. Moreover, the rise in the rye price is $\frac{14}{13}t_r$, which is bigger than the tax on rye.
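The solution can be reproduced numerically with Cramer's rule (listed among the prerequisites), after rearranging the two equilibrium equations into the form $11p_w - 8p_r = t_w - t_r - 3$ and $8p_w - 7p_r = t_w - 2t_r - 30$; a minimal sketch with exact rational arithmetic:

```python
from fractions import Fraction as F

def equilibrium(tw=F(0), tr=F(0)):
    # Demand = supply (producers receive the price minus the tax) rearranges to:
    #   11*pw - 8*pr = tw - tr - 3
    #    8*pw - 7*pr = tw - 2*tr - 30
    a, b, c1 = F(11), F(-8), tw - tr - 3
    c, d, c2 = F(8), F(-7), tw - 2 * tr - 30
    det = a * d - b * c                  # = -13
    pw = (c1 * d - b * c2) / det        # Cramer's rule
    pr = (a * c2 - c1 * c) / det
    return pw, pr

print(equilibrium())            # pw = 219/13, pr = 306/13
print(equilibrium(tw=F(1)))     # wheat tax: both prices drop (218/13, 303/13)
print(equilibrium(tr=F(1)))     # rye tax: both rise; rye rises by 14/13 > 1
```

Using `Fraction` keeps the answers as exact thirteenths, matching the hand computation.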

1.1.5. Example. A farmer has 45 ha to grow wheat and rye. She may sell a maximum amount of 140 tons of wheat and 120 tons of rye. She produces 5 tons of wheat and 4 tons of rye for each hectare and she may sell the production for 30 €/ton (wheat) and 50 €/ton (rye). She needs 6 labor hours (wheat) and 10 labor hours (rye) for each hectare to harvest the crop, for which she pays 10 €/lh and which is no more than 350 lh. Find the maximum profit she may obtain and the necessary strategy for obtaining it.
Denote:
$x_w$ = hectares used for wheat, $x_r$ = hectares used for rye;
profit: $30 \cdot 5 \cdot x_w + 50 \cdot 4 \cdot x_r - 10 \cdot 6 \cdot x_w - 10 \cdot 10 \cdot x_r$;
constraints: $x_w + x_r \leq 45$, $5x_w \leq 140$, $4x_r \leq 120$, $6x_w + 10x_r \leq 350$, $x_w, x_r \geq 0$.
$\Rightarrow$ the problem:
maximize: $90x_w + 100x_r$
subject to:
$$\begin{cases} x_w + x_r \leq 45 \\ x_w \leq 28 \\ x_r \leq 30 \\ 3x_w + 5x_r \leq 175 \\ x_w, x_r \geq 0 \end{cases}$$
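Since the feasible region is a polygon and the profit is linear, the maximum is attained at a vertex; a brute-force sketch that enumerates the intersections of pairs of constraint lines (an illustration only, not the simplex method such problems are usually solved with):

```python
from fractions import Fraction as F
from itertools import combinations

# Constraints a*xw + b*xr <= c, stored as (a, b, c).
cons = [(1, 1, 45), (1, 0, 28), (0, 1, 30), (3, 5, 175),
        (-1, 0, 0), (0, -1, 0)]          # xw >= 0, xr >= 0
profit = lambda xw, xr: 90 * xw + 100 * xr
feasible = lambda xw, xr: all(a * xw + b * xr <= c for a, b, c in cons)

best = None
for (a1, b1, c1), (a2, b2, c2) in combinations(cons, 2):
    det = a1 * b2 - a2 * b1
    if det == 0:
        continue                          # parallel constraint lines
    xw = F(c1 * b2 - c2 * b1, det)        # intersection of the two lines
    xr = F(a1 * c2 - a2 * c1, det)
    if feasible(xw, xr) and (best is None or profit(xw, xr) > best[0]):
        best = (profit(xw, xr), xw, xr)

print(best)   # best profit 4250 at xw = 25 ha, xr = 20 ha
```

The optimal strategy is 25 ha of wheat and 20 ha of rye, for a maximum profit of 4250 €.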

## 1.1.6. Example (2D). Consider the following objects:

- the vector space $(\mathbb{R}^2, \mathbb{R})$ (the Euclidean Plane); within this environment, a vector is identified with the position vector of the corresponding point; for example, the vector $\begin{bmatrix} 1 \\ 2 \end{bmatrix}$ is represented graphically by the position vector with the origin at the point $O(0, 0)$ and with the edge (or terminal point) at the point $P_1(1, 2)$;
- the vectors $v_1 = \begin{bmatrix} 1 \\ 2 \end{bmatrix}$, $v_2 = \begin{bmatrix} 3 \\ 1 \end{bmatrix}$, $v_1, v_2 \in \mathbb{R}^2$;
- the sum $v_1 + v_2 = \begin{bmatrix} 1 \\ 2 \end{bmatrix} + \begin{bmatrix} 3 \\ 1 \end{bmatrix} = \begin{bmatrix} 4 \\ 3 \end{bmatrix} = v_3$;
- the opposite vector of $v_2$: $-v_2 = v_4 = \begin{bmatrix} -3 \\ -1 \end{bmatrix}$;
- the multiplication of a vector with a scalar: $v_6 = 2.6\,v_1$;
- the subtraction of two vectors, which is the addition with the opposite of the second vector: $v_1 - v_2 = v_1 + (-v_2) = \begin{bmatrix} 1 \\ 2 \end{bmatrix} - \begin{bmatrix} 3 \\ 1 \end{bmatrix} = \begin{bmatrix} -2 \\ 1 \end{bmatrix} = v_5$.

[Picture 1.1.1: Vectors and operations in 2D]
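The 2D operations above are plain componentwise arithmetic; a quick check in Python:

```python
# Componentwise vector operations in R^2, mirroring the example.
add = lambda u, v: (u[0] + v[0], u[1] + v[1])
smul = lambda a, v: (a * v[0], a * v[1])

v1, v2 = (1, 2), (3, 1)
assert add(v1, v2) == (4, 3)                 # v3 = v1 + v2
assert smul(-1, v2) == (-3, -1)              # v4 = -v2
assert add(v1, smul(-1, v2)) == (-2, 1)      # v5 = v1 - v2
print(smul(2.6, v1))                         # v6 = 2.6 * v1
```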

## 1.1.7. Example (3D). Consider the following objects:

- the vector space $(\mathbb{R}^3, \mathbb{R})$ (the 3D Euclidean space); within this environment, a vector is identified by the position vector of the corresponding point; for example, the vector $\begin{bmatrix} 1 \\ 2 \\ 1 \end{bmatrix}$ is represented by the position vector starting at $O(0, 0, 0)$ and ending at $P_1(1, 2, 1)$;
- the vectors $v_1 = \begin{bmatrix} 1 \\ 2 \\ 1 \end{bmatrix}$, $v_2 = \begin{bmatrix} 3 \\ 1 \\ 2 \end{bmatrix}$, $v_1, v_2 \in \mathbb{R}^3$;
- the sum $v_1 + v_2 = \begin{bmatrix} 1 \\ 2 \\ 1 \end{bmatrix} + \begin{bmatrix} 3 \\ 1 \\ 2 \end{bmatrix} = \begin{bmatrix} 4 \\ 3 \\ 3 \end{bmatrix} = v_3$;
- the opposite vector of $v_2$: $-v_2 = v_4 = \begin{bmatrix} -3 \\ -1 \\ -2 \end{bmatrix}$;
- the multiplication of a vector by a scalar: $v_6 = 2.6\,v_1$;
- the subtraction $v_1 - v_2 = v_1 + (-v_2) = \begin{bmatrix} 1 \\ 2 \\ 1 \end{bmatrix} - \begin{bmatrix} 3 \\ 1 \\ 2 \end{bmatrix} = \begin{bmatrix} -2 \\ 1 \\ -1 \end{bmatrix} = v_5$.

[Picture 1.1.2: Vectors and operations in 3D]

1.1.8. Remark. There are various similarities and differences between 2D and 3D:
- The points and the position vectors are represented by ordered lists of numbers.
- There is a certain ambiguity regarding the notions "point", "vector", "position vector". In order to clear these ambiguities we have to cover another chapter, named "Affine Geometry" [see, for example, ]. For now it is enough to observe that in an environment called "vector space", a "vector" which geometrically would have as origin some point other than the origin of the coordinate system simply doesn't exist.

- Ambiguities may be found not only in the common language, but also in the scientific language: for example, the expression "$x = 0$" in $\mathbb{R}^1$ means the point with coordinates $(0)$, in $\mathbb{R}^2$ it means a line (the $Oy$ axis), in $\mathbb{R}^3$ it means a plane (the $yOz$ plane), and so on.
- A 2D line may be represented as "the set of all the solutions $(x_0, y_0)$ of the equation $ax + by + c = 0$" [with not all of $a$, $b$, $c$ null]. This representation may be considered convenient for 2D, but in a 3D environment the similar representation "the set of all the solutions $(x_0, y_0, z_0)$ of the equation $ax + by + cz + d = 0$" [with not all of $a$, $b$, $c$, $d$ null] represents a plane and not a line; a line could be viewed as an intersection of two planes (algebraically, as the set of all solutions of a system of two equations with three variables), and this could be generalized to $n$ dimensions (as the set of all solutions of a system of $n - 1$ equations with $n$ variables), but this is not that convenient anymore.
- An alternative convenient representation is the parametric representation:
Consider the points $P_1$ and $P_3$ with the corresponding position vectors $v_1$ and $v_3$. The line between $P_1$ and $P_3$ is the set of all points corresponding to the position vectors
$$v(\lambda) = (1 - \lambda)\,v_1 + \lambda\,v_3 = v_1 + \lambda(v_3 - v_1), \quad \lambda \in \mathbb{R}.$$
Consider for example the points $P_1(1, 2)$ and $P_3(4, 3)$; the line between them has the equation:
$$\frac{x - 1}{4 - 1} = \frac{y - 2}{3 - 2} \ \Rightarrow\ x - 1 = 3(y - 2) \ \Rightarrow\ x - 3y + 5 = 0;$$
observe that "$x - 3y + 5 = 0$" is a linear system with one equation and two variables, with the solution
$$\begin{cases} x = 3t - 5 \\ y = t \end{cases}, \quad t \in \mathbb{R}.$$
The position vectors of the points are $v_1 = \begin{bmatrix} 1 \\ 2 \end{bmatrix}$ and $v_3 = \begin{bmatrix} 4 \\ 3 \end{bmatrix}$; the points of the line $P_1P_3$ are exactly those points with position vectors given by
$$v(\lambda) = v_1 + \lambda v_2 \ [= v_1 + \lambda(v_3 - v_1)] \ [= (1 - \lambda)v_1 + \lambda v_3] = \begin{bmatrix} 1 \\ 2 \end{bmatrix} + \lambda\left(\begin{bmatrix} 4 \\ 3 \end{bmatrix} - \begin{bmatrix} 1 \\ 2 \end{bmatrix}\right) = \begin{bmatrix} 3\lambda + 1 \\ \lambda + 2 \end{bmatrix}, \quad \lambda \in \mathbb{R},$$
which is just another way to describe the general solution $\begin{cases} x = 3t - 5 \\ y = t \end{cases}$, $t \in \mathbb{R}$:
$$t = \lambda + 2 \ \Rightarrow\ y = t = \lambda + 2, \quad x = 3t - 5 = 3(\lambda + 2) - 5 = 3\lambda + 1.$$
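The agreement between the parametric form $v(\lambda) = (3\lambda + 1, \lambda + 2)$ and the Cartesian equation $x - 3y + 5 = 0$ can be verified directly:

```python
from fractions import Fraction as F

v1, v3 = (1, 2), (4, 3)
v2 = (v3[0] - v1[0], v3[1] - v1[1])          # direction vector (3, 1)

def v(lam):
    # v(lambda) = v1 + lambda*(v3 - v1) = (3*lambda + 1, lambda + 2)
    return (v1[0] + lam * v2[0], v1[1] + lam * v2[1])

for lam in [F(-2), F(-1, 3), F(0), F(1), F(5, 2)]:
    x, y = v(lam)
    assert x - 3 * y + 5 == 0                # every v(lambda) lies on the line

print(v(F(-1, 3)), v(F(-2)))   # y-intercept (0, 5/3) and x-intercept (-5, 0)
```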

[Picture 1.1.3: Points on a 2D line — several points of the form $v(\lambda) = v_1 + \lambda v_2$, among them $P_3$ with $v_1 + v_2 = v(1)$, the $y$-intercept point "$(P_1P_3) \cap Oy$" (the point on the line with null first coordinate, $v(-1/3)$) and the $x$-intercept point "$(P_1P_3) \cap Ox$" (the point on the line with null second coordinate, $v(-2)$).]

In the pictures 1.1.1 and 1.1.2, the fact that the points $P_3$, $P_1$ and $P_5$ are collinear is no coincidence. These points correspond to the vectors $v_3 = v_1 + 1 \cdot v_2$, $v_1 = v_1 + 0 \cdot v_2$, $v_5 = v_1 + (-1) \cdot v_2$, which are all of the form $v_1 + \lambda v_2$, so they are on the same line (their edges are collinear).

## 1.1.9. Example. Consider the set

$$\mathbb{R}^n = \{(x_1, x_2, \ldots, x_n);\ x_i \in \mathbb{R},\ i = \overline{1, n}\}.$$
The set $\mathbb{R}^n$ is a real vector space with respect to the "componentwise operations":
Addition:
$$(x_1, x_2, \ldots, x_n) + (y_1, y_2, \ldots, y_n) \overset{\text{Def}}{=} (x_1 + y_1, x_2 + y_2, \ldots, x_n + y_n), \quad \forall (x_1, \ldots, x_n), (y_1, \ldots, y_n) \in \mathbb{R}^n$$
; xn + yn ) ;

(the "+" sign on the left-hand side refers to $\mathbb{R}^n$ addition while the "+" signs on the right-hand side refer to $\mathbb{R}$ addition; we use the same graphical sign for different operations and the distinction should be made from the context).
Multiplication of a vector by a scalar:
$$\alpha\,(x_1, x_2, \ldots, x_n) \overset{\text{Def}}{=} (\alpha x_1, \alpha x_2, \ldots, \alpha x_n), \quad \forall \alpha \in \mathbb{R}.$$
$(\mathbb{R}, +, \cdot)$ is a commutative field; $(\mathbb{R}^n, +)$ is an Abelian group (componentwise addition is associative, commutative, has a neutral element and each element has a symmetric, because of the similar properties of $\mathbb{R}$ addition); the neutral element is $0_{\mathbb{R}^n} = (0, 0, \ldots, 0)$, while the symmetric vector of $(x_1, x_2, \ldots, x_n)$ is $(-x_1, -x_2, \ldots, -x_n)$ (the opposite vector). Moreover, where the operations meet we have the distributivity properties:
$$(\alpha + \beta)(x_1, \ldots, x_n) = ((\alpha + \beta)x_1, \ldots, (\alpha + \beta)x_n) = (\alpha x_1 + \beta x_1, \ldots, \alpha x_n + \beta x_n) =$$
$$= (\alpha x_1, \ldots, \alpha x_n) + (\beta x_1, \ldots, \beta x_n) = \alpha(x_1, \ldots, x_n) + \beta(x_1, \ldots, x_n);$$
$$\alpha((x_1, \ldots, x_n) + (y_1, \ldots, y_n)) = \alpha(x_1 + y_1, \ldots, x_n + y_n) = (\alpha(x_1 + y_1), \ldots, \alpha(x_n + y_n)) =$$
$$= (\alpha x_1 + \alpha y_1, \ldots, \alpha x_n + \alpha y_n) = \alpha(x_1, \ldots, x_n) + \alpha(y_1, \ldots, y_n);$$
$$(\alpha\beta)(x_1, \ldots, x_n) = ((\alpha\beta)x_1, \ldots, (\alpha\beta)x_n) = (\alpha(\beta x_1), \ldots, \alpha(\beta x_n)) = \alpha(\beta(x_1, \ldots, x_n)).$$
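The distributivity chains above can be spot-checked on concrete tuples:

```python
# Componentwise operations on R^n, as in the definition.
add = lambda x, y: tuple(a + b for a, b in zip(x, y))
smul = lambda lam, x: tuple(lam * a for a in x)

x, y = (1, -2, 3), (4, 0, -1)
alpha, beta = 2, -3

# (alpha + beta) x = alpha x + beta x
assert smul(alpha + beta, x) == add(smul(alpha, x), smul(beta, x))
# alpha (x + y) = alpha x + alpha y
assert smul(alpha, add(x, y)) == add(smul(alpha, x), smul(alpha, y))
# (alpha beta) x = alpha (beta x)   and   1 * x = x
assert smul(alpha * beta, x) == smul(alpha, smul(beta, x))
assert smul(1, x) == x
# neutral element and opposite vector
zero = (0, 0, 0)
assert add(x, zero) == x and add(x, smul(-1, x)) == zero
```

Of course, passing such checks on samples is not a proof; the proof is the componentwise argument given above.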

1.1.10. Example (Vector spaces over finite fields). The last example mentions a special topic which is the beginning of "Coding Theory" and uses Chapters 3 and 4 from . Theorem T. 3.3.3, page 26, proves that for each prime $p$ and for each positive integer $n$ there is a unique finite field with $p^n$ elements (and characteristic $p$), denoted by $\mathbb{F}_{p^n}$. If $\emptyset \neq V$ is such that $(V, \mathbb{F}_{p^n})$ is a vector space, then it is a vector space over a finite field, which is a framework used in "Coding Theory" and "Cryptography". The finite field may be, for example, $(\mathbb{Z}_2, +_2, \cdot_2)$ ($\mathbb{Z}_2 = \{0, 1\}$ and the laws are addition and multiplication mod 2) and the vector space may be $(\mathbb{Z}_2^k, \mathbb{Z}_2)$, which is the vector space of all the linear codes of length $k$ over $\mathbb{Z}_2$.
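Over $\mathbb{Z}_2$ the vector operations reduce to componentwise arithmetic mod 2 (addition is XOR); a tiny sketch for code words of length 4 (the length is an arbitrary choice for illustration):

```python
# Vectors in Z2^k: addition is componentwise mod 2 (XOR);
# the only scalars are 0 and 1.
add2 = lambda x, y: tuple((a + b) % 2 for a, b in zip(x, y))
smul2 = lambda lam, x: tuple((lam * a) % 2 for a in x)

u, v = (1, 0, 1, 1), (0, 1, 1, 0)
print(add2(u, v))                    # (1, 1, 0, 1)
assert add2(u, u) == (0, 0, 0, 0)    # every vector is its own opposite
assert smul2(1, v) == v and smul2(0, v) == (0, 0, 0, 0)
```

Note the peculiarity of characteristic 2: $x + x = 0$ for every vector, so subtraction coincides with addition.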

1.1.11. Definition. (vector subspace) Consider a vector space $(V, K)$ and $S \subseteq V$ a subset of vectors. $S$ is called a vector subspace of $(V, K)$ if:
(1) $\forall x, y \in S$: $x + y \in S$;
(2) $\forall \alpha \in K$, $\forall x \in S$: $\alpha x \in S$.

## 1.1.12. Example. The set $\{\alpha(1, 1, 2);\ \alpha \in \mathbb{R}\}$ is a vector subspace of $(\mathbb{R}^3, \mathbb{R})$.

1.1.13. Example. The set $\{(1, 1, 1) + \alpha(1, 1, 2);\ \alpha \in \mathbb{R}\}$ is not a vector subspace of $(\mathbb{R}^3, \mathbb{R})$.

1.1.14. Example. The set $S = \{(x_1, x_2, x_3) \in \mathbb{R}^3;\ x_2 = 0\}$ is a vector subspace of $(\mathbb{R}^3, \mathbb{R})$.
Consider $x, y \in S$ $\Rightarrow$ $x = (x_1, 0, x_3)$ and $y = (y_1, 0, y_3)$;
we have $x + y = (x_1, 0, x_3) + (y_1, 0, y_3) = (x_1 + y_1, 0, x_3 + y_3) \in S$,
and for $\alpha \in \mathbb{R}$, $\alpha x = \alpha(x_1, 0, x_3) = (\alpha x_1, 0, \alpha x_3) \in S$.
From the definition of the subspace it follows that $S$ is a subspace of $(\mathbb{R}^3, \mathbb{R})$. The set may be described in the following way:
$(x_1, 0, x_3) = (x_1, 0, 0) + (0, 0, x_3) = x_1(1, 0, 0) + x_3(0, 0, 1)$, so $S = \{\alpha(1, 0, 0) + \beta(0, 0, 1);\ \alpha, \beta \in \mathbb{R}\}$.
1.1.15. Definition. (Linear combination) For $p \in \mathbb{N}^*$, $i = \overline{1, p}$, $x_i \in V$ and $\alpha_i \in K$, the vector $x = \sum_{i=1}^p \alpha_i x_i$ is called the linear combination of the vectors $x_i$ with the scalars $\alpha_i$.
If, moreover, the scalars field is $K = \mathbb{R}$, with $\alpha_i \geq 0$, $\forall i = \overline{1, p}$, and $\sum_{i=1}^p \alpha_i = 1$, then the linear combination is called a convex combination.

1.1.16. Example. $2(1, 0, 0) + 3(0, 1, 0)$ is a linear combination in $\mathbb{R}^3$; its value is $(2, 3, 0)$.
Remark: in matrix form,
$$2\begin{bmatrix} 1 \\ 0 \\ 0 \end{bmatrix} + 3\begin{bmatrix} 0 \\ 1 \\ 0 \end{bmatrix} = \begin{bmatrix} 2 \\ 3 \\ 0 \end{bmatrix} = \begin{bmatrix} 1 & 0 \\ 0 & 1 \\ 0 & 0 \end{bmatrix}\begin{bmatrix} 2 \\ 3 \end{bmatrix}$$
## 1.1.17. Example. A 3D picture which contains the following objects:

- the points $O(0, 0, 0)$, $P_1(1, 1, 0)$, $P_2(0, 1, 1)$, $P_3(1, 0, 1)$;
- the vectors $v_1 = \vec{OP_1}$, $v_2 = \vec{OP_2}$, $v_3 = \vec{OP_3}$;
- the linear combinations $v_4 = 2v_1 + 2v_2$, $v_5 = 2v_1 + 2v_3$, $v_6 = 2v_2 + 2v_3$;
- the planes (with different colors): $p_1$ (the set of all linear combinations of $v_1$ and $v_2$), $p_2$ (the set of all linear combinations of $v_1$ and $v_3$), $p_3$ (the set of all linear combinations of $v_2$ and $v_3$);
- the lines: $l_1$: $(0, 0, 0) + \lambda(1, 1, 0)$, $l_2$: $(0, 0, 0) + \lambda(0, 1, 1)$, $l_3$: $(0, 0, 0) + \lambda(1, 0, 1)$.

1.1.18. Example. Several special linear combinations:
- the sum of two vectors: $v_1 + v_2 = 1 \cdot v_1 + 1 \cdot v_2$;
- the difference of two vectors: $v_1 - v_2 = 1 \cdot v_1 - 1 \cdot v_2$;
- the null vector: $0 \cdot v_1 + 0 \cdot v_2$;
- the vectors which are "similar" with the vector $v$: $\lambda v$ (dilatations or contractions of the original vector);
- linear combinations with a single vector: $\lambda v$ [a line through the origin];
- with two vectors: [a plane through the origin];
- with three vectors: [a 3-dimensional vector subspace].

1.1.19. Definition. (Linear covering) If $A \subseteq V$ is a set of vectors, the set
$$\operatorname{span}_K(A) \overset{\text{Def}}{=} \left\{ \sum_{i=1}^p \alpha_i x_i;\ p \in \mathbb{N}^*,\ \alpha_i \in K,\ x_i \in A,\ i = \overline{1, p} \right\}$$
of all the possible linear combinations with vectors from $A$ is the linear covering of $A$ (or the set spanned by $A$) (or the set generated by $A$).

1.1.20. Example. In $(\mathbb{R}_2[X], \mathbb{R})$, for $A = \{1, X\}$,
$$\operatorname{span}_{\mathbb{R}} A = \{a \cdot 1 + b \cdot X;\ a, b \in \mathbb{R}\}$$
is the set of all linear combinations with the polynomials $1$ and $X$, and it is the set $\mathbb{R}_1[X]$.
1.1.21. Remark. The set $\operatorname{span}_K(A)$ depends on the set $K$ of scalars: $\mathbb{R}^3$ is a vector space over each of the fields $\mathbb{R}$ and $\mathbb{Q}$, but the vector spaces $(\mathbb{R}^3, \mathbb{R})$ and $(\mathbb{R}^3, \mathbb{Q})$ behave differently; for example, the same set of vectors spans differently over $\mathbb{R}$ and over $\mathbb{Q}$:
$$(\sqrt{3}, 1, 0) \in \operatorname{span}_{\mathbb{R}}(\{(1, 0, 0), (0, 1, 0)\}),$$
$$(\sqrt{3}, 1, 0) \notin \operatorname{span}_{\mathbb{Q}}(\{(1, 0, 0), (0, 1, 0)\}).$$
1.1.22. Remark. Consider a finite set of vectors $\{x_i;\ i = \overline{1, p}\}$ and the vector equation $\sum_{i=1}^p \alpha_i x_i = 0$ (with the unknowns $\alpha_1, \ldots, \alpha_p$). This equation is always compatible, as it always has the solution $\alpha_i = 0$, $i = \overline{1, p}$ (the null solution). Depending on the set of vectors, the null solution may be unique or not. The following notions will distinguish between these two cases.
1.1.23. Definition. (Linear dependence and independence) A finite set of vectors $\{x_i;\ i = \overline{1, p}\}$ is called linearly dependent if at least one vector is a linear combination of the other vectors. The situation where the set is not linearly dependent is called linearly independent.
$\{x_i;\ i = \overline{1, p}\}$ linearly dependent: $\exists i_0 \in \overline{1, p}$, $\exists \alpha_i \in K$, $i \in \{1, \ldots, p\} \setminus \{i_0\}$, such that $x_{i_0} = \sum_{i=1, i \neq i_0}^p \alpha_i x_i$.
Equivalent: $\exists i_0 \in \overline{1, p}$, $x_{i_0} \in \operatorname{span}_K(\{x_i;\ i = \overline{1, p}\} \setminus \{x_{i_0}\})$.
Equivalent: the vector equation $\sum_{i=1}^p \alpha_i x_i = 0$ has a solution other than the null solution.
$\{x_i;\ i = \overline{1, p}\}$ linearly independent: $\forall i_0 \in \overline{1, p}$, $x_{i_0} \notin \operatorname{span}_K(\{x_i;\ i = \overline{1, p}\} \setminus \{x_{i_0}\})$.
Equivalent: the vector equation $\sum_{i=1}^p \alpha_i x_i = 0$ does not have any solution other than the null solution.

1.1.24. Example. The set of $\mathbb{R}^4$ vectors $\{(1, 2, 2, 1), (1, 3, 4, 0), (2, 5, 6, 1)\}$ is linearly dependent because $(1, 2, 2, 1) + (1, 3, 4, 0) = (2, 5, 6, 1)$.
1.1.25. Example. The set of $\mathbb{R}^4$ vectors $\{(1, 2, 2, 1), (1, 3, 4, 0), (3, 5, 2, 1)\}$ is linearly independent because, when considering the vector equation
$$\alpha_1(1, 2, 2, 1) + \alpha_2(1, 3, 4, 0) + \alpha_3(3, 5, 2, 1) = (0, 0, 0, 0),$$
it translates into the linear homogeneous system
$$\begin{cases} \alpha_1 + \alpha_2 + 3\alpha_3 = 0 \\ 2\alpha_1 + 3\alpha_2 + 5\alpha_3 = 0 \\ 2\alpha_1 + 4\alpha_2 + 2\alpha_3 = 0 \\ \alpha_1 + 0 \cdot \alpha_2 + \alpha_3 = 0 \end{cases}$$
which has only the null solution.
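Deciding linear dependence amounts to computing the rank of the matrix whose rows are the given vectors: the set is independent exactly when the rank equals the number of vectors. A small Gaussian-elimination sketch over the rationals (exact arithmetic, no rounding):

```python
from fractions import Fraction as F

def rank(rows):
    # Gaussian elimination over Q; the rank is the number of pivots found.
    m = [[F(a) for a in row] for row in rows]
    r, col = 0, 0
    while r < len(m) and col < len(m[0]):
        piv = next((i for i in range(r, len(m)) if m[i][col] != 0), None)
        if piv is None:
            col += 1
            continue
        m[r], m[piv] = m[piv], m[r]
        for i in range(len(m)):
            if i != r and m[i][col] != 0:
                f = m[i][col] / m[r][col]
                m[i] = [a - f * b for a, b in zip(m[i], m[r])]
        r, col = r + 1, col + 1
    return r

dep = [(1, 2, 2, 1), (1, 3, 4, 0), (2, 5, 6, 1)]
ind = [(1, 2, 2, 1), (1, 3, 4, 0), (3, 5, 2, 1)]
print(rank(dep), rank(ind))   # 2 3 -> dependent, independent
```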

1.1.26. Remark. (Equivalent definitions for linear independence) A finite set of vectors $\{x_i;\ i = \overline{1, p}\}$ is linearly independent if and only if one of the following statements is true:
(1) The vector equation $\sum_{i=1}^p \alpha_i x_i = 0$ has only the null solution (the null vector is a linear combination only with null scalars).
(2) $\forall \alpha_i \in K$, $i = \overline{1, p}$: $\left(\exists i_0 \in \{1, \ldots, p\},\ \alpha_{i_0} \neq 0\right) \Rightarrow \sum_{i=1}^p \alpha_i x_i \neq 0$ (each linear combination with at least one nonnull scalar is nonnull).
Proof. The two statements are connected by the result from Logic which states $(p \to q) \Leftrightarrow (\bar{q} \to \bar{p})$ (see 7.0.21, page 202).

1.1.27. Remark. (Procedure to study the nature of a finite set of vectors)
Consider the finite vector set $\{x_i;\ i = \overline{1, p}\}$ in the vector space $(V, K)$.
Step 1. Consider the vector equation $\sum_{i=1}^p \alpha_i x_i = 0$, with the unknowns $\alpha_1, \ldots, \alpha_p$.
Step 2. Solve this equation (usually by passing to an equivalent linear homogeneous system) and obtain its complete solution.
Step 3. Depending on the findings from Step 2, draw the conclusion:
- [the null solution is unique] the vector set is linearly independent;
- [the null solution is not unique] the vector set is linearly dependent; find a linear dependency of the vectors by using a nonnull solution.
1.1.28. Example. In the vector space $(\mathbb{R}^3, \mathbb{R})$, study the nature of the set $\{v_1(m), v_2(m), v_3(m)\}$ with respect to the parameter $m \in \mathbb{R}$, where $v_1(m) = (m, 1, 1)$, $v_2(m) = (1, m, 1)$, $v_3(m) = (1, 1, m)$.
Step 1: Consider the vector equation $\alpha_1 v_1(m) + \alpha_2 v_2(m) + \alpha_3 v_3(m) = 0$, which becomes:
$$\alpha_1(m, 1, 1) + \alpha_2(1, m, 1) + \alpha_3(1, 1, m) = (0, 0, 0),$$
$$(m\alpha_1 + \alpha_2 + \alpha_3,\ \alpha_1 + m\alpha_2 + \alpha_3,\ \alpha_1 + \alpha_2 + m\alpha_3) = (0, 0, 0).$$
We get the linear homogeneous system:
$$\begin{cases} m\alpha_1 + \alpha_2 + \alpha_3 = 0 \\ \alpha_1 + m\alpha_2 + \alpha_3 = 0 \\ \alpha_1 + \alpha_2 + m\alpha_3 = 0 \end{cases}$$
Step 2: Solve the system and obtain the complete solution (dependent on the parameter $m$):
$$S(m) = \begin{cases} \{(0, 0, 0)\}, & m \in \mathbb{R} \setminus \{-2, 1\} \\ \{(-a - b, a, b);\ a, b \in \mathbb{R}\}, & m = 1 \\ \{(a, a, a);\ a \in \mathbb{R}\}, & m = -2 \end{cases}$$
Step 3: Conclusion:
- for $m \in \mathbb{R} \setminus \{-2, 1\}$ the vector set is linearly independent;
- for $m = -2$ the vector set is linearly dependent and a linear dependency is $v_1(-2) + v_2(-2) + v_3(-2) = 0$;
- for $m = 1$ the vector set is linearly dependent and a linear dependency is $-2v_1(1) + v_2(1) + v_3(1) = 0$.
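The critical values $m = 1$ and $m = -2$ are exactly the roots of the determinant of the coefficient matrix, $\det = m^3 - 3m + 2 = (m - 1)^2(m + 2)$; a quick check:

```python
def det3(a):
    # Determinant of a 3x3 matrix by cofactor expansion along the first row.
    (a11, a12, a13), (a21, a22, a23), (a31, a32, a33) = a
    return (a11 * (a22 * a33 - a23 * a32)
            - a12 * (a21 * a33 - a23 * a31)
            + a13 * (a21 * a32 - a22 * a31))

def system_matrix(m):
    # Coefficient matrix of the homogeneous system; a nonnull solution
    # exists exactly when the determinant vanishes.
    return [[m, 1, 1], [1, m, 1], [1, 1, m]]

for m in [-3, -2, -1, 0, 1, 2]:
    assert det3(system_matrix(m)) == (m - 1) ** 2 * (m + 2)
print([m for m in range(-5, 6) if det3(system_matrix(m)) == 0])   # [-2, 1]
```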

In the conclusion we get two types of information:
- a qualitative type (about the nature of the vector set: dependent/independent);
- a quantitative type (for the linearly dependent situation, we get a linear dependency).

1.1.29. Definition. (Set of generators) Consider $(V, K)$ and a vector set $X \subseteq V$. The set of vectors $\{x_i;\ i = \overline{1, p}\}$ is a set of generators for (or generates) the set $X$ when $X \subseteq \operatorname{span}(\{x_i;\ i = \overline{1, p}\})$. If the set $X$ is not specified, we consider that $X = V$ (the set generates the whole vector space).

1.1.30. Example. In the vector space $(\mathbb{R}_2[X], \mathbb{R})$, for $A = \{1, X, X^2\}$,
$$\operatorname{span}_{\mathbb{R}} A = \{a \cdot 1 + b \cdot X + c \cdot X^2;\ a, b, c \in \mathbb{R}\}$$
is the set of all linear combinations with the polynomials $1$, $X$ and $X^2$, so $A$ generates the whole set $\mathbb{R}_2[X]$.

Terminological Differences. The terms "vector", "position vector" and "point", although usually used interchangeably to designate an element of a vector space, in the strict sense have different meanings.
The term "point" refers to an element of the set $V$ and designates a place in space. The set $V$ is regarded as an affine space, and the only accepted operation with points is subtraction, which gives a vector.
The term "vector" refers to an object defined by two points, the first one understood as the origin of the vector and the other one understood as the edge of the vector.
The term "position vector" refers to those vectors for which the origin is the origin (the null point) of the space, and this is the term most suited for vector spaces.

1.1.31. Example. The points $P = (2, 5)$ and $Q = (6, 2)$ are identified with the position vectors $\vec{OP}$ and $\vec{OQ}$.
The operation $P - Q$ has as result not the vector $\vec{QP}$ (with the origin the point $Q$ and the edge $P$, which in vector spaces does not exist) but the vector $\vec{OR}$, with $R = P - Q = (-4, 3)$ (operation with points). Although the vectors $\vec{QP}$ and $\vec{OR}$ may be seen (in other settings) as congruent, they have different origins and only one of them is a position vector (namely $\vec{OR}$); the other one, $\vec{QP}$, is not an element of the vector space.


Changing the scalar multiplication may change the properties of the vector space. Consider the set $\mathbb{R}[X]$ of all fractions of polynomials with real coefficients, $\frac{p(X)}{q(X)}$.
The set $\mathbb{R}[X]$ is a field together with addition and multiplication.
The structure $(\mathbb{R}[X], \mathbb{R}[X])$ is a vector space over itself (and in this case the scalar multiplication is the usual multiplication of fractions of polynomials), with dimension 1 (see the section 1.5 for dimension): a basis is the polynomial $1$, because $v(X) = v(X) \cdot 1$, $\forall v \in \mathbb{R}[X]$.
Consider an alternate "scalar multiplication":
$$\star: \mathbb{R}[X] \times \mathbb{R}[X] \to \mathbb{R}[X], \quad (\alpha(X), v(X)) \mapsto \alpha(X) \star v(X) = \alpha(X^2) \cdot v(X).$$
Because $(\alpha + \beta)(X^2) = \alpha(X^2) + \beta(X^2)$ and $(\alpha\beta)(X^2) = \alpha(X^2)\,\beta(X^2)$, all the axioms are met.
The structure $(\mathbb{R}[X], \mathbb{R}[X])$ with "$\star$" as scalar multiplication:
Consider a vector $v(X) = \frac{p(X)}{q(X)}$ with $p$ and $q$ polynomials. For each polynomial, group the even and odd exponents of $X$:
$$p(X) = \sum_{k=0}^n a_k X^k = \sum_{i=0}^{[\frac{n}{2}]} a_{2i} X^{2i} + \sum_{j=0}^{[\frac{n-1}{2}]} a_{2j+1} X^{2j+1} = p_1(X^2) + p_2(X^2)\,X,$$
where
$$p_1(X) = \sum_{i=0}^{[\frac{n}{2}]} a_{2i} X^i, \quad p_2(X) = \sum_{j=0}^{[\frac{n-1}{2}]} a_{2j+1} X^j,$$
and similarly $q(X) = q_1(X^2) + q_2(X^2)\,X$. Then
$$v(X) = \frac{p(X)}{q(X)} = \frac{p_1(X^2) + p_2(X^2)\,X}{q_1(X^2) + q_2(X^2)\,X} = \frac{(p_1(X^2) + p_2(X^2)\,X)(q_1(X^2) - q_2(X^2)\,X)}{q_1^2(X^2) - q_2^2(X^2)\,X^2} =$$
$$= \frac{p_1(X^2)\,q_1(X^2) - p_2(X^2)\,q_2(X^2)\,X^2}{q_1^2(X^2) - q_2^2(X^2)\,X^2} + \frac{p_2(X^2)\,q_1(X^2) - p_1(X^2)\,q_2(X^2)}{q_1^2(X^2) - q_2^2(X^2)\,X^2}\,X.$$
Observe that the vector $v(X) = \frac{p(X)}{q(X)}$ may be written as a linear combination of the polynomials $1$ and $X$ with the scalars
$$\alpha_1(X) = \frac{p_1(X)\,q_1(X) - p_2(X)\,q_2(X)\,X}{q_1^2(X) - q_2^2(X)\,X} \quad \text{and} \quad \alpha_2(X) = \frac{p_2(X)\,q_1(X) - p_1(X)\,q_2(X)}{q_1^2(X) - q_2^2(X)\,X},$$
meaning that
$$v = \alpha_1 \star 1 + \alpha_2 \star X,$$
which shows that $\{1, X\}$ is a generating set. Since the set $\{1, X\}$ is also linearly independent (exercise), it follows that the structures $(\mathbb{R}[X], \mathbb{R}[X], +, \cdot)$ and $(\mathbb{R}[X], \mathbb{R}[X], +, \star)$ are different, because of the different scalar multiplication: the first has dimension 1, while the second has dimension 2.
[Citation???]
1.1.1. Lengths and Angles. We briefly mention the notions³ connecting the vector space with Euclidean Geometry; later there is an entire dedicated chapter (Chapter 3, page 141).
For two vectors $x, y \in \mathbb{R}^n$, the expression
$$\langle x, y \rangle = x \cdot y = \sum_{i=1}^n x_i y_i$$
is called the scalar (inner, dot) product of the vectors.
The vectors are called perpendicular (orthogonal) [written $x \perp y$] when $\langle x, y \rangle = 0$.
The length of a vector is $\|x\| = \sqrt{\langle x, x \rangle} = \sqrt{\sum_{i=1}^n x_i^2}$.
A vector is called a versor (unit vector) when $\|v\| = 1$. The versor of a nonzero vector $v$ is $\frac{v}{\|v\|}$.
When two vectors are perpendicular, Pythagoras' Theorem takes place: $v \perp w \Rightarrow \|v + w\|^2 = \|v\|^2 + \|w\|^2$.
Proof. $\sum_{i=1}^n v_i w_i = 0 \Rightarrow \|v + w\|^2 = \sum_{i=1}^n (v_i + w_i)^2 = \sum_{i=1}^n v_i^2 + \sum_{i=1}^n w_i^2 + 2\sum_{i=1}^n v_i w_i = \|v\|^2 + \|w\|^2$.

1.1.32. Remark. $\|v - w\|^2 = \sum_{i=1}^n (v_i - w_i)^2 = \sum_{i=1}^n v_i^2 + \sum_{i=1}^n w_i^2 - 2\sum_{i=1}^n v_i w_i$, so for perpendicular vectors $\|v - w\|^2 = \|v\|^2 + \|w\|^2 = \|v + w\|^2$.
[The parallelogram generated by the two vectors is a rectangle, so the two diagonals have the same length.]
For two nonzero vectors, the cosine of the angle between their directions is $\cos(v, w) = \dfrac{\langle v, w \rangle}{\|v\|\,\|w\|}$.

³ It is about the default notions, in the sense that they are considered valid when no other specification was made.
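These formulas translate directly into code; a small sketch:

```python
from math import sqrt, isclose

dot = lambda x, y: sum(a * b for a, b in zip(x, y))
norm = lambda x: sqrt(dot(x, x))

v, w = (3, 0, 4), (0, 5, 0)                 # perpendicular: <v, w> = 0
assert dot(v, w) == 0
s = tuple(a + b for a, b in zip(v, w))
assert isclose(norm(s) ** 2, norm(v) ** 2 + norm(w) ** 2)   # Pythagoras

u = tuple(a / norm(v) for a in v)           # the versor of v
assert isclose(norm(u), 1.0)

cos = lambda x, y: dot(x, y) / (norm(x) * norm(y))
print(cos((1, 0), (1, 1)))                  # about 0.7071 (a 45-degree angle)
```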

## 1.2. Properties of vector spaces

1.2.1. Proposition. (Algebraic rules in a vector space)
(1) $\forall \alpha, \beta \in K$, $\forall x, y \in V$, we have: $\alpha(x - y) = \alpha x - \alpha y$ and $(\alpha - \beta)x = \alpha x - \beta x$;
(2) $\forall \alpha \in K$: $\alpha \cdot 0_V = 0_V$;
(3) $\forall x \in V$: $0_K \cdot x = 0_V$;
(4) $\forall x \in V$: $(-1_K) \cdot x = -x$;
(5) $\alpha \cdot x = 0_V \Rightarrow \alpha = 0_K$ or $x = 0_V$.
Proof. Consider $\alpha, \beta \in K$, $x, y \in V$ arbitrarily chosen.
(1) $\alpha x = \alpha((x - y) + y) = \alpha(x - y) + \alpha y \Rightarrow \alpha(x - y) = \alpha x - \alpha y$; $\alpha x = ((\alpha - \beta) + \beta)x = (\alpha - \beta)x + \beta x \Rightarrow (\alpha - \beta)x = \alpha x - \beta x$.
(2) $\alpha \cdot 0_V = \alpha(x - x) = \alpha x - \alpha x = 0_V$ [alternative: $\alpha \cdot 0_V = \alpha(0_V + 0_V) = \alpha \cdot 0_V + \alpha \cdot 0_V$; adding $-\alpha \cdot 0_V$ in both terms, $\alpha \cdot 0_V = 0_V$].
(3) $0_K \cdot x = (\beta - \beta)x = \beta x - \beta x = 0_V$.
(4) $0_V = 0_K \cdot x = (1_K + (-1_K))x = 1_K \cdot x + (-1_K) \cdot x \Rightarrow (-1_K) \cdot x = -x$.
(5) $\alpha x = 0_V$ and $\alpha \neq 0 \Rightarrow \exists \alpha^{-1} \in K$ and $x = 1_K \cdot x = (\alpha^{-1}\alpha)x = \alpha^{-1}(\alpha x) = \alpha^{-1} \cdot 0_V = 0_V$.
Exercise: Study this result as a theoretical exercise.

1.2.2. Remark (Properties of the linear covering of a set).
- A set $A$ is linearly dependent $\iff$ $\exists v \in A$, $v \in \operatorname{span}(A \setminus \{v\})$.
- A set $A$ is linearly independent $\iff$ $\forall v \in A$, $v \notin \operatorname{span}(A \setminus \{v\})$.
- $\operatorname{span}(\{v\}) = \{\alpha v;\ \alpha \in K\}$.
- $\emptyset \neq A \subseteq V \Rightarrow A \subseteq \operatorname{span}(A)$.
- $\emptyset \neq A \subseteq V \Rightarrow 0 \in \operatorname{span}(A)$.
- $\emptyset \neq A \subseteq V \Rightarrow \operatorname{span}(A) = \operatorname{span}(A \cup \{0\})$.
- $\emptyset \neq A_1 \subseteq A_2 \subseteq V \Rightarrow \operatorname{span}(A_1) \subseteq \operatorname{span}(A_2)$.

1.2.3. Remark. $\operatorname{span}((x_i)_{i=\overline{1,n}})$ is a vector subspace. [In fact, the linear covering of any set, not necessarily finite, is a vector subspace, with a similar argument; with the convention $\operatorname{span}(\emptyset) = \{0\}$, all the possibilities are covered.]
Proof. Consider $v_1, v_2 \in \operatorname{span}((x_i)_{i=\overline{1,n}})$ and $\alpha \in K$ $\Rightarrow$ $\exists \alpha_i^1, \alpha_i^2 \in K$, $i = \overline{1, n}$, such that $v_j = \sum_{i=1}^n \alpha_i^j x_i$, $j = 1, 2$; then $v_1 + v_2 = \sum_{i=1}^n (\alpha_i^1 + \alpha_i^2) x_i \in \operatorname{span}((x_i)_{i=\overline{1,n}})$ and $\alpha v_1 = \sum_{i=1}^n (\alpha \alpha_i^1) x_i \in \operatorname{span}((x_i)_{i=\overline{1,n}})$.

## 1.2.4. Remark. Consider a vector subspace $V_0$ in $(V, K)$. Then

$$\forall n \in \mathbb{N}^*,\ \forall x_i \in V_0,\ \forall \alpha_i \in K,\ i = \overline{1, n}: \quad \sum_{i=1}^n \alpha_i x_i \in V_0$$
(a subspace contains any linear combination of its elements) $(\operatorname{span}(V_0) \subseteq V_0)$.
(A slightly stronger statement actually takes place: if $V_0$ is a subspace, the linear covering of any subset of the subspace is included in the subspace: $\forall V_1 \subseteq V_0$, $\operatorname{span}(V_1) \subseteq V_0$.)
Proof. By induction over $n \in \mathbb{N}^*$: for $n = 1$, from axiom 2 of the definition of a subspace it follows that for any $x_1 \in V_0$ and for any scalar $\alpha_1 \in K$ we have $\alpha_1 x_1 \in V_0$. Assume that $\sum_{i=1}^n \alpha_i x_i \in V_0$ for any $x_i \in V_0$, $\alpha_i \in K$, $i = \overline{1, n}$, and consider $x_i \in V_0$, $\alpha_i \in K$, $i = \overline{1, n + 1}$. Then
$$\sum_{i=1}^{n+1} \alpha_i x_i = \sum_{i=1}^n \alpha_i x_i + \alpha_{n+1} x_{n+1} \in V_0,$$
because both terms belong to $V_0$ and $V_0$ is closed under addition.

## 1.2.5. Remark. span (xi )i=1;n

V0 subspace
V0 (xi )
i=1;n

9
>
>
>
>
>
=
>
>
>
>
>
;

2 K; i = 1; n + 1. Then:

n+1
X
i=1

i xi

2 V0 :

## V0 (the linear covering of a set is the intersection of all

vector subspaces containing the set) (the linear covering of a set is the smallest in the sense of inclusion
subspace which contains the set) (the word "smallest" is used with respect to the inclusion relation;
"smallest" in this context means that any subspace which includes the set also includes the linear covering
of the set).
Proof. span (xi )i=1;n

V0 subspace
V0 (xi )
i=1;n

## V0 because when a subspace includes the set, it also includes

the linear covering of the set (Remark (1.2.4)); the other inclusion follows from Proposition (1.2.3):
is itself a subspace including the set, and so it is within the family of subspaces which
T
are including the set, from where it follows span (xi )i=1;n
V0 because the intersection is

V0
V0

subspace
(xi )i=1;n

## included in each set which is intersected.
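For $K = \mathbb{R}$ the span membership questions above can be tested numerically: a vector $v$ lies in $\mathrm{span}(A)$ exactly when appending $v$ to the columns of $A$ does not increase the rank. A minimal NumPy sketch (the function name, tolerance, and the sample vectors are my own illustration, not from the notes):

```python
import numpy as np

def in_span(A_cols, v, tol=1e-10):
    """True when v is a linear combination of the columns of A_cols."""
    A = np.atleast_2d(np.asarray(A_cols, dtype=float))
    v = np.asarray(v, dtype=float).reshape(-1, 1)
    r = np.linalg.matrix_rank(A, tol=tol)
    r_ext = np.linalg.matrix_rank(np.hstack([A, v]), tol=tol)
    return bool(r == r_ext)   # rank unchanged <=> v in span of the columns

# columns (1,0,1) and (0,1,1) of R^3
A = np.array([[1.0, 0.0],
              [0.0, 1.0],
              [1.0, 1.0]])
print(in_span(A, [2, 3, 5]))   # True: 2*(1,0,1) + 3*(0,1,1) = (2,3,5)
print(in_span(A, [0, 0, 1]))   # False
```

The same rank comparison is the numerical counterpart of the dependence/independence criteria in Remark 1.2.2.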

1.2.6. Remark. $A_1 \subseteq A_2 \Rightarrow \mathrm{span}(A_1) \subseteq \mathrm{span}(A_2)$.

1.2.7. Remark. $\mathrm{span}(\mathrm{span}(A)) = \mathrm{span}(A)$.

1.2.8. Remark (Exchange Property). $v \in \mathrm{span}(A \cup \{w\}) \setminus \mathrm{span}(A) \Rightarrow w \in \mathrm{span}(A \cup \{v\})$ [if a vector $v$ is a linear combination of vectors from $A$ together with $w$, but not of vectors from $A$ only, then $w$ is also a linear combination of $A$ together with $v$].

Proof. $v \in \mathrm{span}(A \cup \{w\}) \setminus \mathrm{span}(A) \Rightarrow v \in \mathrm{span}(A \cup \{w\})$ and $v \notin \mathrm{span}(A)$. Then $\exists \alpha \in K$, $\exists \alpha_i \in K$, $\exists v_i \in A$, $i = \overline{1,n}$, such that

$$v = \alpha w + \sum_{i=1}^n \alpha_i v_i.$$

The scalar $\alpha$ cannot be zero: if $\alpha = 0$ then $v = \sum_{i=1}^n \alpha_i v_i \in \mathrm{span}(A)$, which is a contradiction. By dividing with $\alpha$, we get

$$w = \frac{1}{\alpha} v - \sum_{i=1}^n \frac{\alpha_i}{\alpha} v_i \in \mathrm{span}(A \cup \{v\}). \quad \square$$

## 1.3. Vector Spaces Examples

1.3.1. Example. The space $K^n$ of all the finite sequences with $n$ components from the field $K$. The elements of the set $K^n$ are $(x_1, x_2, \dots, x_n)$, with $x_i \in K$, $i = \overline{1,n}$.

Further examples are obtained for $K = \mathbb{R}$, $K = \mathbb{C}$, $K = \mathbb{Q}$ and for finite fields.

1.3.2. Example. Consider a set $X \neq \emptyset$ and the set of all functions with $X$ as a definition domain and with finitely many nonzero real values, $\mathcal{F}(X) = \{f(\cdot) : X \to \mathbb{R};\ f^{-1}(\mathbb{R} \setminus \{0\}) \text{ is finite}\}$. The set $\mathcal{F}(X)$ together with the usual algebraic operations (with functions) is a vector space. [Interesting particular cases: $X = \{1, \dots, n\}$; $X = \mathbb{N}$.]

1.3.3. Example. The set of matrices with elements from a field $K$ and with $m$ lines and $n$ columns. The set is denoted by $\mathcal{M}_{m \times n}(K)$, the vectors are matrices, vector addition is the matrix addition and the multiplication by a scalar is the multiplication of a matrix by a scalar.

1.3.4. Example. The set of all real sequences. The set is denoted by $\mathbb{R}^{\mathbb{N}}$, a vector is a sequence of real numbers, understood as the ordered and infinite succession of the elements of the sequence. Vector addition is the componentwise addition of the sequences,

$$(a_n)_{n \in \mathbb{N}} + (b_n)_{n \in \mathbb{N}} = (a_n + b_n)_{n \in \mathbb{N}}$$

(a new sequence, whose terms are obtained by adding the corresponding terms of the two sequences), and the multiplication of a vector by a scalar is

$$\alpha (a_n)_{n \in \mathbb{N}} = (\alpha a_n)_{n \in \mathbb{N}}.$$
1.3.5. Example. The set of sequences $(a_n)_{n \in \mathbb{N}}$ for which the series $\sum_{n=0}^{\infty} a_n^2$ converges [this means that the limit $\lim\limits_{n \to \infty} \sum_{k=0}^{n} a_k^2$ exists and it is finite], together with the usual sequence addition and multiplication by a scalar.

1.3.6. Example. The set of all Cauchy rational sequences (a sequence $(a_n)_{n \in \mathbb{N}} \subseteq \mathbb{Q}$ is called a Cauchy sequence when $\forall \varepsilon > 0$, $\exists n_\varepsilon \in \mathbb{N}$, $\forall n, m \geq n_\varepsilon$, $|a_n - a_m| < \varepsilon$), with the usual sequence operations, is a vector space.


1.3.7. Example. The set of all real polynomials in the unknown $t$, denoted by $\mathbb{R}[t]$. When $p(t) = a_0 + a_1 t + \dots + a_n t^n$ and $q(t) = b_0 + b_1 t + \dots + b_n t^n$, with

$$p(t) + q(t) = (a_0 + b_0) + (a_1 + b_1) t + \dots + (a_n + b_n) t^n$$
$$\alpha p(t) = \alpha a_0 + \alpha a_1 t + \dots + \alpha a_n t^n$$
$$0 = 0 \text{ (the null polynomial)}$$

we obtain a vector space structure (when adding two polynomials with different degrees, the polynomial with the lowest degree is completed with null coefficients up to the bigger degree; the null polynomial is considered to have degree $-1$).

1.3.8. Example. The set of all real polynomials in the unknown $t$ and with degree (in $t$) at most $n$, denoted by $\mathbb{R}_n[t]$, with the usual polynomial operations.

1.3.9. Example. The set of all functions $f(\cdot) : \mathbb{R} \to \mathbb{R}$ of class $C^1$ which are solutions of the differential equation $f'(t) + a f(t) = 0$, $\forall t \in \mathbb{R}$, together with the usual function operations.

1.3.10. Example. The set of real functions which are indefinitely differentiable, $\mathcal{D}^{\infty}(\mathbb{R}, \mathbb{R})$.

1.3.11. Example. The set of all real functions with domain $[a, b]$ and codomain $\mathbb{R}$, denoted by $\mathcal{F}([a, b], \mathbb{R})$.

1.3.12. Example. The set of all Lipschitzian functions from $\mathcal{F}([a, b], \mathbb{R})$ (functions $f(\cdot) : [a, b] \to \mathbb{R}$ such that $\exists k_f > 0$ with $|f(x) - f(y)| \leq k_f |x - y|$, $\forall x, y \in [a, b]$), denoted by $\mathcal{L}([a, b], \mathbb{R})$.

1.4. Exercises

1.4.1. Example. The set of vectors $\{(1,1,1), (1,2,3), (3,2,1)\}$ does not generate the vector space $(\mathbb{R}^3, \mathbb{R})$: since $(3,2,1) = 4 \cdot (1,1,1) - (1,2,3)$, the set is linear dependent and its span is a proper subspace of $\mathbb{R}^3$.
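Whether a set of three vectors generates $\mathbb{R}^3$ can be decided by the rank (or the determinant) of the matrix having the vectors as columns: rank 3 means generating. A quick NumPy check of Example 1.4.1 (a sketch, not part of the original notes):

```python
import numpy as np

# the three vectors of Example 1.4.1, placed as columns
M = np.array([[1.0, 1.0, 3.0],
              [1.0, 2.0, 2.0],
              [1.0, 3.0, 1.0]])

rank = int(np.linalg.matrix_rank(M))
print(rank)   # 2: the three vectors only span a plane of R^3
# the dependency: (3,2,1) = 4*(1,1,1) - 1*(1,2,3)
print(np.allclose(4 * M[:, 0] - M[:, 1], M[:, 2]))   # True
```

The determinant test gives the same verdict: `np.linalg.det(M)` is 0, so the columns cannot form a basis.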
1.4.2. Exercise. Consider a real vector space $(V, \mathbb{R})$ and the operations

$$+ : (V \times V) \times (V \times V) \to V \times V,$$

defined by

$$(x_1, y_1) + (x_2, y_2) = (x_1 + x_2, y_1 + y_2),$$

and

$$\cdot : \mathbb{C} \times (V \times V) \to (V \times V),$$

defined by

$$(\alpha + i \beta) \cdot (x, y) = (\alpha x - \beta y, \beta x + \alpha y).$$

Show that $(V \times V, \mathbb{C})$ with the above operations is a complex vector space (this vector space is called the complexification of $V$).

1.4.3. Exercise. Show that the set of the nondifferentiable functions over $[a, b]$ is not a vector space.

1.4.4. Exercise. Show that the union of two vector subspaces is generally not a subspace.

1.4.5. Exercise. Show that the set $V_0 = \{x \in \mathbb{R}^n;\ Ax = 0\}$ is a vector subspace, where $A \in \mathcal{M}_{m \times n}(\mathbb{R})$.

1.4.6. Exercise. Consider the subspaces $A = \{(0, a, b, 0);\ a, b \in \mathbb{R}\}$, $B = \{(0, 0, a, b);\ a, b \in \mathbb{R}\}$. Determine $A + B$.

1.4.7. Exercise. Show that if the linear operator $U(\cdot) : \mathbb{R}^n \to \mathbb{R}^n$ satisfies $U^2(\cdot) + U(\cdot) + I(\cdot) = O(\cdot)$, then $U(\cdot)$ is bijective.
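The subspace of Exercise 1.4.5 is the null space of $A$; numerically, a basis for it can be read off the singular value decomposition, and closure under linear combinations can then be spot-checked. A sketch (the helper name and the sample matrix $A$ are my own illustration):

```python
import numpy as np

def null_space_basis(A, tol=1e-10):
    """Basis (as columns) for V0 = {x : A x = 0}, read off the SVD:
    the right singular vectors whose singular value is (numerically) zero."""
    A = np.atleast_2d(np.asarray(A, dtype=float))
    _, s, vt = np.linalg.svd(A)
    rank = int(np.sum(s > tol))
    return vt[rank:].T

A = np.array([[1.0, 2.0, 3.0],
              [2.0, 4.0, 6.0]])     # rank 1, so V0 has dimension 3 - 1 = 2
N = null_space_basis(A)
print(N.shape[1])                   # 2 basis vectors
# closure: any combination of null-space vectors stays in V0
x = 2.5 * N[:, 0] - 1.5 * N[:, 1]
print(np.allclose(A @ x, 0))        # True
```

The dimension count `n - rank(A)` shown here is the rank-nullity relation, used informally as a sanity check.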
1.5. Representation in Vector Spaces

1.5.1. Proposition (Equivalent definitions for a basis). Consider a finite set of vectors denoted by $B$ from the vector space $(V, K)$. The following statements are equivalent:

(1) $B$ is linear independent and it is maximal with this property (in the sense that $\forall v \in V \setminus B$, the set $B \cup \{v\}$ is not linear independent anymore);

(2) $B$ generates $V$ and it is minimal with this property (in the sense that $\forall v \in B$, the set $B \setminus \{v\}$ doesn't generate $V$ anymore);

(3) $B$ generates $V$ and it is linear independent.
Proof. The proof will follow the steps: (1))(2))(3))(1).
(1))(2)
Assume B is linear independent and maximal (with this property); we prove by contradiction that the
set B generates V:
Assume that B is not generating V; then 9v0 2 V n span (B) and the new set B1 = B [ fv0 g is linear
independent (because otherwise v0 would be a linear combination with B elements) and strictly includes
B, which is a contradiction with the maximality of B (with respect to the linear independence property).
The contradiction originates from the hypothesis that B doesn't generate V, so, by contradiction, we get
that B generates V.
We prove also by contradiction that B is minimal (as a generating set for V)
Assume by contradiction that B, as a generating set of V, is not minimal. Then there is at least one element $v_1 \in B$ such that $B \setminus \{v_1\}$ still generates V (at least one element may be removed from B without affecting the generating property); since $v_1$ is a linear combination of $B \setminus \{v_1\}$, it follows that the initial set B is not linear independent, which is a contradiction with the hypothesis. Since the contradiction originates from the hypothesis that B as a generating set is not minimal, it follows that B is minimal (as a generating set of V).


(2)$\Rightarrow$(3)

Since $B$ generates $V$, suppose by contradiction that the set $B$ is not linear independent: $\exists v_2 \in B$ such that $v_2 \in \mathrm{span}(B \setminus \{v_2\})$; then the set $B \setminus \{v_2\} \subsetneq B$ still holds the generating property (because each vector of the space is a linear combination with vectors from $B$; if $v_2$ participates in this linear combination, by replacing it with the linear combination from $B \setminus \{v_2\}$ we get a new linear combination only with vectors from $B \setminus \{v_2\}$), which is a contradiction with the minimality of $B$.

(3)$\Rightarrow$(1)

Since $B$ is linear independent, if $B$ were not maximal then there would be a vector $v' \in V \setminus B$ such that $B \cup \{v'\}$ is still linear independent. This means that $v' \notin \mathrm{span}(B)$, which contradicts that $B$ generates $V$. □
1.5.2. Definition. In a vector space $(V, K)$ a set $B \subseteq V$ is called a basis when $B$ satisfies one of the equivalent statements from Proposition 1.5.1, page 26. A basis for which the order of the vectors also matters is called an ordered basis.

1.5.3. Definition. A vector space with at least a basis which is a finite set is called a vector space of finite type. A vector space which has no finite basis is called a vector space of infinite type.

1.5.4. Theorem (The sufficiency of maximality in a generating set). If the finite set $S$ generates $V$ and the set $B_0 \subseteq S$ is linear independent, then there is a set $B$ such that $B_0 \subseteq B \subseteq S$ and $B$ is a basis for $V$.

(The proof will show that when $S$ is finite and generates $V$, the maximality of $B$ in $S$ as a linear independent set is enough for the maximality of $B$ in $V$ as a linear independent set.)
Proof. We inductively obtain a set $B$ with the properties:

a) the set $B$ is linear independent,

b) $B_0 \subseteq B \subseteq S$,

c) the set $B$ is maximal in $S$ with the above properties.

The procedure for obtaining the set $B$:

0. Initially, $B \leftarrow B_0$ (the set $B_0$ is linear independent and it may or may not be maximal in $S$ with respect to this property).

1.1. If $B$ is maximal in $S$, then the procedure stops.

1.2. If $B$ is not maximal in $S$, then there is a vector $x \in S \setminus B$ which may be added to $B$ without losing the linear independence property.

2. Consider the new set $B \leftarrow B \cup \{x\}$ and repeat from step 1.

Because of the finitude of the set $S$, the procedure stops in a finite number of steps.

From the procedure we get a set $B$ satisfying the properties a), b), c) and which is not necessarily unique, but satisfies the following maximality property in $S$: if $B_1$ is linear independent and $B \subseteq B_1 \subseteq S$, then $B = B_1$.

From the procedure for obtaining $B$ it also follows that $|B| \leq |S|$ (the number of elements of $B$ is not bigger than the number of elements of $S$).

The set $B$ is a basis for $V$ (the set $B$ is maximal in $V$ with respect to the linear independence property):

Assume by contradiction that $B$ is not a basis; then there is a vector $x^0 \in V$ such that $x^0 \notin \mathrm{span}(B)$; since $S$ generates $V$ we have $x^0 \in \mathrm{span}(S)$, which means that there are $k$ elements $y_1, \dots, y_k$ from $S$ and $k$ scalars $\alpha_1, \dots, \alpha_k$ such that $x^0 = \sum_{i=1}^k \alpha_i y_i$. If each element $y_i$, $i = \overline{1,k}$, were in $\mathrm{span}(B)$, then $x^0 \in \mathrm{span}(B)$ also; so there is at least one element among $y_i \in S$, $i = \overline{1,k}$, which is not in $\mathrm{span}(B)$; denote it by $y$. Then $B \cup \{y\}$ is linear independent and included in $S$, which is a contradiction with the maximality in $S$ of the set $B$. □

1.5.5. Corollary. From any finite set which generates the space, a basis may be extracted.

Proof. Consider a finite set $S$ generating the space $V$. In the set $S$ there is at least one nonzero vector, denoted by $v^0$ (if all the vectors were zero, then $S$ would not generate $V$). Then $\{v^0\}$ is a linear independent set with $\{v^0\} \subseteq S$, and we apply the previous result to obtain a set $B$ which is a basis for $V$ such that $\{v^0\} \subseteq B \subseteq S$. □
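For vectors in $\mathbb{R}^n$ the procedure in the proof of Theorem 1.5.4 can be run literally: scan the generating set and keep a vector exactly when it enlarges the rank of the set kept so far. A NumPy sketch (the function name and the sample set $S$ are illustrative):

```python
import numpy as np

def extract_basis(S):
    """Greedy procedure of Theorem 1.5.4: from a finite list S of vectors
    in R^n, keep a linear independent subset with the same span."""
    B = []
    for v in S:
        v = np.asarray(v, dtype=float)
        candidate = np.column_stack(B + [v])
        # keep v only when it is not already in span(B)
        if np.linalg.matrix_rank(candidate) == len(B) + 1:
            B.append(v)
    return B

S = [np.array([1.0, 1.0, 1.0]),
     np.array([2.0, 2.0, 2.0]),    # dependent on the first vector
     np.array([1.0, 2.0, 3.0]),
     np.array([3.0, 2.0, 1.0])]    # = 4*(1,1,1) - (1,2,3), dependent
B = extract_basis(S)
print(len(B))   # 2: a basis of span(S), extracted from S
```

Each accepted vector raises the rank by one, so the loop terminates with a maximal independent subset of $S$, exactly as in the proof.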

1.5.6. Theorem (The Steinitz exchange lemma). Consider a linear independent set $B = \{v_1, \dots, v_r\}$ and a generating set $S = \{u_1, \dots, u_n\}$, both in $V$. Then $r \leq n$ and, maybe with a reordering of the vectors, $\{v_1, \dots, v_r, u_{r+1}, \dots, u_n\}$ generates $V$. (Any set of linear independent vectors has at most as many vectors as any generating set, and the linear independent set may replace the same number of vectors from the generating set while preserving the generating property.)

Proof. Induction over the number of linear independent vectors, $j = \overline{1, r}$.

For $j = 1$: $\exists \alpha_i \in K$, $i = \overline{1,n}$, such that

$$v_1 = \sum_{i=1}^n \alpha_i u_i;$$

if all the scalars $\alpha_i$ were zero, then $v_1$ would be zero (a contradiction with the linear independence property of the set $\{v_1, \dots, v_r\}$); so at least one scalar is nonzero and, maybe with a reordering of the vectors from the generating set, it may be assumed that $\alpha_1 \neq 0$; then $u_1$ may be written as a linear combination of the vectors $v_1, u_2, \dots, u_n$:

$$(1.5.1) \qquad u_1 = \frac{1}{\alpha_1} v_1 - \sum_{i=2}^n \frac{\alpha_i}{\alpha_1} u_i.$$

Consider $v \in V$ arbitrary; $\exists \beta_i \in K$, $i = \overline{1,n}$, such that $v = \sum_{i=1}^n \beta_i u_i$; but from (1.5.1) it follows:

$$v = \beta_1 u_1 + \sum_{i=2}^n \beta_i u_i = \beta_1 \left( \frac{1}{\alpha_1} v_1 - \sum_{i=2}^n \frac{\alpha_i}{\alpha_1} u_i \right) + \sum_{i=2}^n \beta_i u_i = \frac{\beta_1}{\alpha_1} v_1 + \sum_{i=2}^n \left( \beta_i - \frac{\beta_1 \alpha_i}{\alpha_1} \right) u_i,$$

so $\{v_1, u_2, \dots, u_n\}$ generates $V$.

Assume now that, maybe after a reordering, $\{v_1, \dots, v_{r-1}, u_r, \dots, u_n\}$ generates $V$; we prove the statement for the next vector $v_r$. Since $\{v_1, \dots, v_{r-1}, u_r, \dots, u_n\}$ generates $V$, $\exists \alpha_i \in K$ such that

$$v_r = \sum_{i=1}^{r-1} \alpha_i v_i + \sum_{i=r}^{n} \alpha_i u_i;$$

if all the scalars $\alpha_i$, $i = \overline{r,n}$, were zero, then $v_r = \sum_{i=1}^{r-1} \alpha_i v_i$ would contradict the linear independence property of the set $\{v_1, \dots, v_r\}$; so $\exists i_0 \in \{r, \dots, n\}$ such that $\alpha_{i_0} \neq 0$ and, maybe with a reordering, it may be assumed that $\alpha_r \neq 0$; then $u_r$ may be written as a linear combination of the vectors $\{v_1, \dots, v_r, u_{r+1}, \dots, u_n\}$:

$$(1.5.2) \qquad u_r = \frac{1}{\alpha_r} v_r - \sum_{i=1}^{r-1} \frac{\alpha_i}{\alpha_r} v_i - \sum_{i=r+1}^{n} \frac{\alpha_i}{\alpha_r} u_i.$$

Because $\{v_1, \dots, v_{r-1}, u_r, \dots, u_n\}$ generates $V$, from (1.5.2) it follows that $\{v_1, \dots, v_r, u_{r+1}, \dots, u_n\}$ also generates $V$: $\forall v \in V$, $\exists \beta_i \in K$, $i = \overline{1,n}$, such that

$$v = \sum_{i=1}^{r-1} \beta_i v_i + \beta_r u_r + \sum_{i=r+1}^{n} \beta_i u_i = \sum_{i=1}^{r-1} \left( \beta_i - \frac{\beta_r \alpha_i}{\alpha_r} \right) v_i + \frac{\beta_r}{\alpha_r} v_r + \sum_{i=r+1}^{n} \left( \beta_i - \frac{\beta_r \alpha_i}{\alpha_r} \right) u_i,$$

which means that each vector from $V$ is a linear combination of the set $\{v_1, \dots, v_r, u_{r+1}, \dots, u_n\}$ (the set generates $V$).

It remains to show that $r \leq n$. If $r > n$, then after $n$ exchange steps the vectors of the linear independent set $B$ have replaced $n$ vectors (which means all the vectors) from the generating set $S$, so the set $\{v_1, \dots, v_n\}$ is linear independent and generating, which means it is also maximal with respect to the linear independence property, a contradiction with the existence of another vector $v_{n+1}$ such that the set $\{v_1, \dots, v_n, v_{n+1}\}$ is linear independent. So $r \leq n$. □
1.5.7. Corollary. In a finite-type vector space, any linear independent set may be completed up to a basis.

Proof. Because the space is of finite type, we have a finite generating set; by using the Steinitz exchange lemma, we may replace some vectors from the generating set with the vectors of the linear independent set, while preserving the generating property. Now we have a linear independent set included in a finite generating set, so we are in position to apply Theorem 1.5.4 to obtain a basis which contains the initial linear independent set (and which may be considered as "completing" the initial independent set up to a basis). □

1.5.8. Corollary. In a finite-type vector space, the number of vectors of any linear independent set is not bigger than the number of vectors of any generating set.

Proof. From the Steinitz exchange lemma. □

1.5.9. Corollary. In a finite-type vector space, any two bases have the same number of elements.

Proof. Both bases may be viewed as linear independent sets and as generating sets, so each cardinality bounds the other. □

1.5.10. Definition. The number of vectors from any basis of a finite-type vector space is called the dimension of the vector space $(V, K)$ and is denoted by $\dim_K V$.
1.5.11. Proposition. Given a basis in a finite-type vector space, any vector is a linear combination of the basis. Moreover, the scalars participating in the linear combination are uniquely determined by the basis.

Proof. Consider a basis $B = \{u_1, \dots, u_n\}$ and $v \in V$.

Because $B$ generates $V$, the vector $v$ is a linear combination of $B$ (which proves the existence of the scalars).

Moreover, the scalars are unique: assume two linear combinations, $\exists \alpha_i, \beta_i \in K$, $i = \overline{1,n}$,

$$v = \sum_{i=1}^n \alpha_i u_i = \sum_{i=1}^n \beta_i u_i.$$

Then $\sum_{i=1}^n (\alpha_i - \beta_i) u_i = 0$, and because $B$ is a linear independent set it follows that $\alpha_i = \beta_i$, $\forall i = \overline{1,n}$, so the scalars are uniquely determined by the basis. □

1.5.12. Definition. Given the vector space $(V, K)$, the basis (ordered basis) $B = \{u_1, \dots, u_n\}$ and the vector $v \in V$, the scalars from the previous result are called the coordinates (the representation) of the vector $v$ in the basis $B$.

This will be denoted with $[v]_B$ and the coordinates will be considered (by convention) as columns:

$$[v]_B = \begin{bmatrix} \alpha_1 \\ \alpha_2 \\ \vdots \\ \alpha_n \end{bmatrix} \quad \text{[matrix-form representation of } v \text{ in } B\text{]},$$

$$v = \sum_{i=1}^n \alpha_i u_i \quad \text{[vector-form representation of } v \text{ in } B\text{]}.$$

1.5.13. Remark (Canonical bases). Each vector space has an infinity of possible bases. As a convention, for commonly used vector spaces, if there is no mention about the basis in which a representation takes place, then it is assumed that the representation is done in some (conventionally) special basis, usually called the standard basis or the canonical basis.

[The standard basis for $\mathbb{R}^n$] The set $E = \{e_1, \dots, e_n\}$, where the vectors $e_j$, $j = \overline{1,n}$, have each $n$ components, $e_1 = (1, 0, \dots, 0)$, $e_2 = (0, 1, 0, \dots, 0)$, ..., $e_n = (0, \dots, 0, 1)$; in general, $e_j = (\delta_{1j}, \dots, \delta_{jj}, \dots, \delta_{nj})$, $j = \overline{1,n}$ ($\delta_{ij}$ is Kronecker's symbol).

[The standard basis for $\mathbb{R}^2$] The set $E = \{e_1, e_2\} = \{(1, 0), (0, 1)\}$.

[The standard basis for $\mathbb{R}^3$] The set $E = \{e_1, e_2, e_3\} = \{(1, 0, 0), (0, 1, 0), (0, 0, 1)\}$.

[The standard basis for $\mathbb{R}_n[t]$] [the set of all polynomials with real coefficients and degree at most $n$ in the unknown $t$] The set $E = \{1, t^1, \dots, t^n\}$.

[The standard basis for $\mathbb{R}[t]$] [the set of all real polynomials in the unknown $t$] The set $E = \{1, t^1, \dots, t^n, \dots\}$.

[The standard basis for $\mathcal{M}_{m \times n}(\mathbb{R})$] [the set of all matrices with $m$ lines and $n$ columns, with real entries] The set $E = \{E_{11}, \dots, E_{mn}\}$, where $E_{i_0 j_0} \in \mathcal{M}_{m \times n}(\mathbb{R})$ are the matrices with the general term

$$a_{ij} = \begin{cases} 1, & (i, j) = (i_0, j_0) \\ 0, & (i, j) \neq (i_0, j_0). \end{cases}$$

[The standard basis for $\mathcal{M}_{3 \times 2}(\mathbb{R})$] The set $E = \{E_{11}, E_{12}, E_{21}, E_{22}, E_{31}, E_{32}\}$ with the matrices

$$E_{11} = \begin{bmatrix} 1 & 0 \\ 0 & 0 \\ 0 & 0 \end{bmatrix},\ E_{12} = \begin{bmatrix} 0 & 1 \\ 0 & 0 \\ 0 & 0 \end{bmatrix},\ E_{21} = \begin{bmatrix} 0 & 0 \\ 1 & 0 \\ 0 & 0 \end{bmatrix},\ E_{22} = \begin{bmatrix} 0 & 0 \\ 0 & 1 \\ 0 & 0 \end{bmatrix},\ E_{31} = \begin{bmatrix} 0 & 0 \\ 0 & 0 \\ 1 & 0 \end{bmatrix},\ E_{32} = \begin{bmatrix} 0 & 0 \\ 0 & 0 \\ 0 & 1 \end{bmatrix}.$$

1.5.14. Remark. Consider the vector space $(V, K)$, a fixed ordered basis $B = (e_1, \dots, e_n) \subseteq V$, the vectors $u = \sum_{i=1}^n \gamma_i e_i$, $u_1 = \sum_{i=1}^n \mu_i e_i$, $u_2 = \sum_{i=1}^n \nu_i e_i \in V$ and the scalars $\alpha, \beta \in K$. Then

$$u = \alpha u_1 + \beta u_2$$

takes place if and only if the corresponding matrix relation between the representations of the vectors takes place:

$$[u]_B = \alpha [u_1]_B + \beta [u_2]_B.$$

Proof. The coordinates of the vectors in $B$ are:

$$[u]_B = \begin{bmatrix} \gamma_1 \\ \vdots \\ \gamma_n \end{bmatrix} \iff u = \sum_{i=1}^n \gamma_i e_i, \quad [u_1]_B = \begin{bmatrix} \mu_1 \\ \vdots \\ \mu_n \end{bmatrix} \iff u_1 = \sum_{i=1}^n \mu_i e_i, \quad [u_2]_B = \begin{bmatrix} \nu_1 \\ \vdots \\ \nu_n \end{bmatrix} \iff u_2 = \sum_{i=1}^n \nu_i e_i.$$

Then:

$$u = \alpha u_1 + \beta u_2 \iff \sum_{i=1}^n \gamma_i e_i = \alpha \sum_{i=1}^n \mu_i e_i + \beta \sum_{i=1}^n \nu_i e_i = \sum_{i=1}^n (\alpha \mu_i + \beta \nu_i) e_i \iff$$

$$\iff \gamma_i = \alpha \mu_i + \beta \nu_i,\ \forall i = \overline{1,n} \text{ (by the unicity of the representation in a basis)} \iff$$

$$\iff \begin{bmatrix} \gamma_1 \\ \vdots \\ \gamma_n \end{bmatrix} = \begin{bmatrix} \alpha \mu_1 + \beta \nu_1 \\ \vdots \\ \alpha \mu_n + \beta \nu_n \end{bmatrix} = \alpha \begin{bmatrix} \mu_1 \\ \vdots \\ \mu_n \end{bmatrix} + \beta \begin{bmatrix} \nu_1 \\ \vdots \\ \nu_n \end{bmatrix} \iff [u]_B = \alpha [u_1]_B + \beta [u_2]_B. \quad \square$$
## 1.6. Representation of Vectors

Consider a finite-type vector space $V$ over the field $K$ and the ordered basis $E = (e_1, \dots, e_n) \subseteq V$. The coordinates of the vectors $e_i$ with respect to $E$ are:

$$[e_1]_E = \begin{bmatrix} 1 \\ 0 \\ 0 \\ \vdots \\ 0 \end{bmatrix},\quad [e_2]_E = \begin{bmatrix} 0 \\ 1 \\ 0 \\ \vdots \\ 0 \end{bmatrix},\quad \dots,\quad [e_n]_E = \begin{bmatrix} 0 \\ 0 \\ \vdots \\ 0 \\ 1 \end{bmatrix}.$$

Consider a set of vectors $B_1 = \{v_1, v_2, \dots, v_m\} \subseteq V$, each vector represented with respect to $E$:

vector form: $v_j = \sum\limits_{i=1}^n \alpha_{ij} e_i$, $\forall j = \overline{1,m}$;

matrix form:

$$[v_1]_E = \begin{bmatrix} \alpha_{11} \\ \alpha_{21} \\ \alpha_{31} \\ \vdots \\ \alpha_{n1} \end{bmatrix},\quad [v_2]_E = \begin{bmatrix} \alpha_{12} \\ \alpha_{22} \\ \alpha_{32} \\ \vdots \\ \alpha_{n2} \end{bmatrix},\quad \dots,\quad [v_m]_E = \begin{bmatrix} \alpha_{1m} \\ \alpha_{2m} \\ \alpha_{3m} \\ \vdots \\ \alpha_{nm} \end{bmatrix}.$$

Denote by $[M(B_1)]_E$ ($\in \mathcal{M}_{n \times m}(K)$) the matrix with the coordinates of $v_j$ as column $j$:

$$[M(B_1)]_E = \begin{bmatrix} \alpha_{11} & \alpha_{12} & \cdots & \alpha_{1m} \\ \alpha_{21} & \alpha_{22} & \cdots & \alpha_{2m} \\ \alpha_{31} & \alpha_{32} & \cdots & \alpha_{3m} \\ \vdots & \vdots & \ddots & \vdots \\ \alpha_{n1} & \alpha_{n2} & \cdots & \alpha_{nm} \end{bmatrix}.$$
The following properties of the set $B_1$ may be studied with this matrix:

The vectors are considered as $K^n$ elements (given by the number of lines).

The number of vectors is $m$ (the number of columns).

The set $B_1$ is a linear independent set if and only if $\mathrm{rank}\, [M(B_1)]_E = m$ (a consequence is: the set $B_1$ cannot be linear independent when its number of vectors is bigger than the dimension of the environment); [the matrix rank cannot be bigger than $m$; the matrix rank is smaller than $m$ if and only if (at least) a column is a linear combination of the others, which means that the set $B_1$ is linear dependent].

The set $B_1$ generates the environment $V$ if and only if $\mathrm{rank}\, [M(B_1)]_E = n$ (a consequence is: the set $B_1$ cannot generate $V$ when the number of vectors is smaller than the environment dimension); [the set $B_1$ generates $V$ if and only if the nonhomogeneous linear system $[M(B_1)]_E \, x = [v]_E$ is a compatible system (has at least a solution) for each possible vector $v$; when $\mathrm{rank}\, [M(B_1)]_E < n$, the set $B_1$ may be completed with at least a certain additional vector $v$ such that the rank of the matrix attached to the new set is strictly bigger than the rank of the old matrix, and so for such a vector $v$ the initial linear system is incompatible].

The set $B_1$ is a basis if and only if $n = m$ and $\mathrm{rank}\, [M(B_1)]_E = n$ [from the previous remarks].

When the set $B_1$ is an ordered basis, any vector from the environment may be represented in both perspectives: the initial perspective $E$ and the new perspective $B_1$.
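For $K = \mathbb{R}$ the three rank tests above are immediate to run on the matrix $[M(B_1)]_E$. A small NumPy sketch (the function name and the sample vectors are my own illustration):

```python
import numpy as np

def classify(M):
    """M holds the coordinates of the vectors of B1 as columns
    (the matrix [M(B1)]_E); apply the three rank tests."""
    n, m = M.shape
    r = int(np.linalg.matrix_rank(M))
    return {"independent": r == m,    # rank = number of vectors
            "generating":  r == n,    # rank = dimension of the space
            "basis":       r == n and n == m}

# three vectors of R^3, written as columns
M = np.array([[1.0, 0.0, 1.0],
              [1.0, 1.0, 0.0],
              [0.0, 1.0, 1.0]])
res = classify(M)
print(res)   # all three tests succeed for this choice
```

Here `np.linalg.matrix_rank` plays the role of the rank computation done by hand with determinants or elimination.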

In the following we will find the connections between the two perspectives.

The representation of the new basis $B_1$ (now with $m = n$) with respect to the old basis $E$ is (in matrix form):

$$[M(B_1)]_E = \begin{bmatrix} \alpha_{11} & \alpha_{12} & \cdots & \alpha_{1n} \\ \alpha_{21} & \alpha_{22} & \cdots & \alpha_{2n} \\ \alpha_{31} & \alpha_{32} & \cdots & \alpha_{3n} \\ \vdots & \vdots & \ddots & \vdots \\ \alpha_{n1} & \alpha_{n2} & \cdots & \alpha_{nn} \end{bmatrix}.$$

This matrix has as columns the representations in the old basis of the new basis vectors.
In the new basis, the vectors of the new basis $B_1$ will be represented as:

$$[v_1]_{B_1} = \begin{bmatrix} 1 \\ 0 \\ 0 \\ \vdots \\ 0 \end{bmatrix},\quad [v_2]_{B_1} = \begin{bmatrix} 0 \\ 1 \\ 0 \\ \vdots \\ 0 \end{bmatrix},\quad \dots,\quad [v_n]_{B_1} = \begin{bmatrix} 0 \\ 0 \\ \vdots \\ 0 \\ 1 \end{bmatrix},$$

while the vectors of the old basis $E$ will be represented as

$$[e_1]_{B_1} = \begin{bmatrix} \alpha'_{11} \\ \alpha'_{21} \\ \alpha'_{31} \\ \vdots \\ \alpha'_{n1} \end{bmatrix},\quad [e_2]_{B_1} = \begin{bmatrix} \alpha'_{12} \\ \alpha'_{22} \\ \alpha'_{32} \\ \vdots \\ \alpha'_{n2} \end{bmatrix},\quad \dots,\quad [e_n]_{B_1} = \begin{bmatrix} \alpha'_{1n} \\ \alpha'_{2n} \\ \alpha'_{3n} \\ \vdots \\ \alpha'_{nn} \end{bmatrix}, \qquad e_i = \sum_{j=1}^n \alpha'_{ji} v_j.$$

Denote by $[M(E)]_{B_1}$ the matrix with columns given by the coordinates of the old basis $E$ in the new basis $B_1$:

$$[M(E)]_{B_1} = \begin{bmatrix} \alpha'_{11} & \alpha'_{12} & \cdots & \alpha'_{1n} \\ \alpha'_{21} & \alpha'_{22} & \cdots & \alpha'_{2n} \\ \alpha'_{31} & \alpha'_{32} & \cdots & \alpha'_{3n} \\ \vdots & \vdots & \ddots & \vdots \\ \alpha'_{n1} & \alpha'_{n2} & \cdots & \alpha'_{nn} \end{bmatrix}.$$

Consider an arbitrary vector $x$, represented in the old basis $E$ with coordinates

$$[x]_E = \begin{bmatrix} x_1 \\ x_2 \\ x_3 \\ \vdots \\ x_n \end{bmatrix},$$

and in the new basis $B_1$ with coordinates

$$[x]_{B_1} = \begin{bmatrix} x'_1 \\ x'_2 \\ x'_3 \\ \vdots \\ x'_n \end{bmatrix}.$$

Then:

$$x = \sum_{j=1}^n x'_j v_j = \sum_{j=1}^n x'_j \left( \sum_{i=1}^n \alpha_{ij} e_i \right) = \sum_{j=1}^n \sum_{i=1}^n x'_j \alpha_{ij} e_i = \sum_{i=1}^n \left( \sum_{j=1}^n \alpha_{ij} x'_j \right) e_i;$$

but $x = \sum_{i=1}^n x_i e_i$, so that $x_i = \sum_{j=1}^n \alpha_{ij} x'_j$, $\forall i = \overline{1,n}$ (from the unicity of representation in a basis). The matrix form:

$$\begin{bmatrix} x_1 \\ x_2 \\ x_3 \\ \vdots \\ x_n \end{bmatrix} = \begin{bmatrix} \sum_{j=1}^n \alpha_{1j} x'_j \\ \sum_{j=1}^n \alpha_{2j} x'_j \\ \sum_{j=1}^n \alpha_{3j} x'_j \\ \vdots \\ \sum_{j=1}^n \alpha_{nj} x'_j \end{bmatrix} = \begin{bmatrix} \alpha_{11} & \alpha_{12} & \alpha_{13} & \cdots & \alpha_{1n} \\ \alpha_{21} & \alpha_{22} & \alpha_{23} & \cdots & \alpha_{2n} \\ \alpha_{31} & \alpha_{32} & \alpha_{33} & \cdots & \alpha_{3n} \\ \vdots & \vdots & \vdots & \ddots & \vdots \\ \alpha_{n1} & \alpha_{n2} & \alpha_{n3} & \cdots & \alpha_{nn} \end{bmatrix} \begin{bmatrix} x'_1 \\ x'_2 \\ x'_3 \\ \vdots \\ x'_n \end{bmatrix},$$

and so we obtain:

$$[x]_E = [M(B_1)]_E \, [x]_{B_1}.$$

(The matrix $[M(B_1)]_E$ is the change-of-basis matrix from $B_1$ to $E$.) The change-of-basis matrix from the new basis to the old basis has as columns the coordinates in the old basis of the vectors from the new basis. In a similar way we have the connection

$$[x]_{B_1} = [M(E)]_{B_1} \, [x]_E$$

(the matrix $[M(E)]_{B_1}$ is the change-of-basis matrix from $E$ to $B_1$), which means that for any vector $x$ we have the equality

$$[M(E)]_{B_1} \, [x]_E = ([M(B_1)]_E)^{-1} \, [x]_E,$$

which means that the matrices are equal: $[M(E)]_{B_1} = [M(B_1)]_E^{-1}$.

A diagram of the matrix connections between two bases:

$$B_1 \xrightarrow{\ [M(B_1)]_E\ } E, \qquad E \xrightarrow{\ [M(B_1)]_E^{-1}\ } B_1.$$

Given two ordered bases $B_1$ and $B_2$, both represented in the initial ordered basis $E$, passing from $B_1$ to $B_2$ is accomplished by pivoting on $E$: by using the previous formulas, we get

$$[x]_{B_1} = [M(B_1)]_E^{-1} \, [x]_E, \qquad [x]_{B_2} = [M(B_2)]_E^{-1} \, [x]_E,$$
$$[x]_E = [M(B_1)]_E \, [x]_{B_1}, \qquad [x]_E = [M(B_2)]_E \, [x]_{B_2},$$

from where we obtain the relation:

$$[x]_{B_2} = [M(B_2)]_E^{-1} \, [M(B_1)]_E \, [x]_{B_1}.$$

A diagram of the matrix connections between two non-initial bases,

$$B_1 \xrightarrow{\ [M(B_1)]_E\ } E \xrightarrow{\ [M(B_2)]_E^{-1}\ } B_2,$$

both split by the initial basis and direct: $B_1 \xrightarrow{\ [M(B_2)]_E^{-1} [M(B_1)]_E\ } B_2$.
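The two change-of-basis formulas can be checked numerically; the bases of $\mathbb{R}^2$ below are my own illustrative choice:

```python
import numpy as np

# columns = basis vectors expressed in the initial basis E
M1 = np.column_stack([[1.0, 1.0], [1.0, -1.0]])    # [M(B1)]_E
M2 = np.column_stack([[2.0, 0.0], [1.0, 1.0]])     # [M(B2)]_E

x_B1 = np.array([3.0, 2.0])          # coordinates of some x in B1
x_E = M1 @ x_B1                      # [x]_E  = [M(B1)]_E [x]_B1
x_B2 = np.linalg.solve(M2, x_E)      # [x]_B2 = [M(B2)]_E^{-1} [x]_E
print(x_E)     # [5. 1.]
print(x_B2)    # [2. 1.]
```

Using `np.linalg.solve` instead of explicitly inverting `M2` is the usual numerical practice; it computes the same $[M(B_2)]_E^{-1} [x]_E$ more stably.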
A practical procedure to organize the necessary calculations is given by the Gaussian elimination procedure. The Gaussian elimination procedure generalizes the row reduction procedure for solving linear systems. Given a linear system, the procedure chooses an equation and an unknown (which has a nonzero coefficient in the chosen equation) and eliminates that unknown from all the other equations, by using appropriate equation multiplications and equation additions (elementary transformations). The procedure uses the compatibilities between the equality relation and the vector space operations, which are:

Multiplication with the same scalar of both terms of an equality preserves the equality.

Addition of the same object in both terms of an equality preserves the equality.

Replacing an object with an equal quantity preserves the equality.

[While these statements may seem superfluous, a word of warning has to be said about using them in a computational/numerical framework: it may happen that the replacement of the exact and symbolic objects from the statements with their numerical counterparts has undesired consequences, and the statements may not hold.]

These principles may be used to formulate a systematic procedure for finding the general solution of a linear system, and we will describe this procedure.
Consider the system:

$$\begin{cases} a_{11} x_1 + a_{12} x_2 + \dots + a_{1j} x_j + \dots + a_{1m} x_m = b_1 \\ a_{21} x_1 + a_{22} x_2 + \dots + a_{2j} x_j + \dots + a_{2m} x_m = b_2 \\ \qquad \vdots \\ a_{i1} x_1 + a_{i2} x_2 + \dots + a_{ij} x_j + \dots + a_{im} x_m = b_i \\ \qquad \vdots \\ a_{n1} x_1 + a_{n2} x_2 + \dots + a_{nj} x_j + \dots + a_{nm} x_m = b_n \end{cases}$$

in which the coefficient $a_{ij}$ is nonzero. We want to eliminate the unknown $x_j$ from all the equations except the $i$-th equation (in other words, the coefficients of $x_j$ will become zero in all equations except the equation $i$) and for equation $i$ we want the coefficient to become 1. In order to obtain these goals, we will perform the following operations:

divide the equation $i$ with $a_{ij}$ and replace the equation $i$ with the result;

for each equation $k = \overline{1,n}$, $k \neq i$, add the equation $k$ with the equation $i$ multiplied by $(-a_{kj})$ and replace the equation $k$ with the result.

After these row operations, the new system will have the unknown $x_j$ only within the $i$-th equation.
The operations are systematized with the following rules: the element $a_{ij}$ is called PIVOT and the new coefficients will be obtained by using THE PIVOT RULE:

(1) The places $(l, j)$, $l = \overline{1,n}$, $l \neq i$ (the pivot column) become zero, while the place $(i, j)$ (the pivot place) becomes 1.

(2) The places $(i, k)$, $k = \overline{1,m}$, $k \neq j$ (the pivot line) become $\dfrac{a_{ik}}{a_{ij}}$ (the pivot line is divided by the pivot).

(3) The other places (neither on the line nor on the column of the pivot), $(k, l)$, $k = \overline{1,n}$, $k \neq i$, $l = \overline{1,m}$, $l \neq j$, are calculated by using THE RECTANGLE RULE:

              column j    column l
    line i:     a_ij        a_il
    line k:     a_kj        a_kl

For each place $(k, l)$, consider the rectangle with corners $(i, j)$, $(i, l)$, $(k, j)$, $(k, l)$ (specific for each place $(k, l)$); the new value for $a_{kl}$ is obtained by the formula: the product on the pivot diagonal minus the product on the other diagonal, everything divided with the pivot:

$$a'_{kl} = \frac{a_{ij} a_{kl} - a_{kj} a_{il}}{a_{ij}},$$

where $a'_{kl}$ stands for the new value of the place $(k, l)$.
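The pivot rule admits a direct implementation on the augmented table; the sketch below (function name and array layout are my own choices, with the last column holding $b$) applies it to system (1) of Example 1.6.1:

```python
import numpy as np

def pivot(T, i, j):
    """Apply the pivot rule on the nonzero entry T[i, j]: the pivot line
    is divided by the pivot, the pivot column becomes a unit column, and
    every other place follows the rectangle rule."""
    T = T.astype(float)
    p = T[i, j]
    if p == 0:
        raise ValueError("pivot must be nonzero")
    new = (p * T - np.outer(T[:, j], T[i, :])) / p   # rectangle rule
    new[i, :] = T[i, :] / p                          # pivot line
    new[:, j] = 0.0                                  # pivot column
    new[i, j] = 1.0                                  # pivot place
    return new

# augmented table [A | b] of system (1) from Example 1.6.1
T = np.array([[4.0, 3.0, 3.0, 14.0],
              [3.0, 2.0, 5.0, 13.0],
              [2.0, 1.0, 8.0, 13.0]])
T = pivot(T, 0, 0)
T = pivot(T, 1, 1)
T = pivot(T, 2, 2)
print(T[:, 3])   # [2. 1. 1.], the solution of the system
```

The vectorized line computes $(a_{ij} a_{kl} - a_{kj} a_{il}) / a_{ij}$ for every place at once; the pivot line and column are then overwritten according to rules (1) and (2).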

All the calculations are placed in a table and, from a vector space perspective, they have the following meaning:

The first column from the left holds, for each step, the names of the current basis (representation system), including the order of the basis vectors, from up (the first one) until down (the last one)⁴.

The first line represents the names of the included vectors⁵.

The (numerical) columns are the coordinates of the vectors (with names from the first line) in the current basis (given by the first column).

The first table represents the initial representation, usually done in the standard basis $E$.

The following tables give the coordinates in intermediary bases.

The last table gives the representations in the final basis.

The choice of the pivot $a_{ij}$ means that the new basis is obtained by replacing, in the old basis, the vector from the line $i$ with the vector from the column $j$.

The initial table (with all the details; the chosen pivot $a_{ij}$ is marked with #):

           v_1  ...  v_j   ...  v_m  | e_1 ... e_i ... e_n |  b
    e_1 | a_11  ...  a_1j  ...  a_1m |  1  ...  0  ...  0  | b_1
    e_2 | a_21  ...  a_2j  ...  a_2m |  0  ...  0  ...  0  | b_2
     .  |   .         .          .   |  .       .      .   |  .
    e_i | a_i1  ...  a_ij# ...  a_im |  0  ...  1  ...  0  | b_i
     .  |   .         .          .   |  .       .      .   |  .
    e_n | a_n1  ...  a_nj  ...  a_nm |  0  ...  0  ...  1  | b_n

⁴ This column is a metadata column: it gives information about the names of the current basis vectors and their order.

⁵ This line is a metadata line which gives information about the names of the vectors.

40

## The second table, obtained with the pivot aij

v1

vj

vm

e1

e1

a011

a01m

e2
..
.

a021
..
.

0
..
.

a02m
..
.

0
..
.

ei

a0(i

1)1

a0(i

1)m

ei

ai1
aij

aim
aij

ei+1
..
.

a0(i+1)1
..
.

0
..
.

a0(i+1)m
..
.

0
..
.

en

a0n1

a0nm

a1j
aij

b01

a2j
aij

0
..
.

b02
..
.

..
.
a(i 1)j
aij

vj

en

1
aij
a(i+1)j
aij

..
.
anj
aij

b0i

bi
aij

0
..
.

b0i+1
..
.

b0n

The procedure may be used to find the complete solution of a nonhomogeneous linear system

$$A x = b$$

in the following way:

(1) Write the initial table.

(2) Apply the pivot rule until it is not possible anymore (once a pivot has been chosen on a line, don't choose another pivot on that line anymore). With maybe a renaming of the columns or a reordering of the lines, the final table will be the following:

$$\left[ \begin{array}{c|c|c} I & A'_{12} & b'_1 \\ \hline 0 & 0 & b'_2 \end{array} \right]$$

(3) In the final table:

the unknowns corresponding to the columns of the identity matrix $I$ (the columns with previous pivots) are called the main unknowns, while the remaining unknowns are called the secondary unknowns;

the equations (lines) corresponding to the identity matrix $I$ (the lines with previous pivots) are called the main equations, while the other equations are called the secondary equations; the secondary equations all have the left-hand side with all coefficients zero, because otherwise some other pivots would be possible to choose (the table would not be the last one).

(4) The last table may be written in a system form like this:

$$\begin{bmatrix} I & A'_{12} \\ 0 & 0 \end{bmatrix} \begin{bmatrix} x_P \\ x_S \end{bmatrix} = \begin{bmatrix} b'_1 \\ b'_2 \end{bmatrix},$$

which means

$$\begin{cases} I \, x_P + A'_{12} \, x_S = b'_1 & \text{[main equations]} \\ 0 \cdot x_P + 0 \cdot x_S = b'_2 & \text{[secondary equations]} \end{cases}$$

(5) The secondary equations are used to decide the compatibility of the system:

$$\begin{cases} b'_2 = 0 \Rightarrow \text{compatible system} \\ b'_2 \neq 0 \Rightarrow \text{incompatible system} \end{cases}$$

after which the secondary equations become redundant and they may be discarded.

(6) When the system is compatible, the secondary unknowns are considered parameters, while the new system, viewed as

$$x_P = b'_1 - A'_{12} \, x_S,$$

becomes a Cramer system: for each possible value of the parameters, the system has a unique solution, which is already present in the way the system is written.
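Steps (1)-(6) can be sketched as a small routine on the augmented matrix: pivots are chosen greedily, each line is used at most once, and the secondary equations decide compatibility. The function name and tolerance are my own choices; the example runs system (2) of Example 1.6.1:

```python
import numpy as np

def solve_complete(A, b, tol=1e-10):
    """Gauss-Jordan on [A | b].  Returns (status, piv_cols, T), where T
    is the final table; 'incompatible' when some secondary equation has
    a nonzero right-hand side."""
    A = np.asarray(A, dtype=float)
    b = np.asarray(b, dtype=float).reshape(-1)
    T = np.hstack([A, b[:, None]])
    n, m = A.shape
    used_rows, piv_cols = [], []
    for j in range(m):                     # at most one pivot per column
        rows = [i for i in range(n)
                if i not in used_rows and abs(T[i, j]) > tol]
        if not rows:
            continue
        i = rows[0]
        T[i] = T[i] / T[i, j]              # pivot line divided by the pivot
        for k in range(n):                 # annihilate the rest of the column
            if k != i:
                T[k] = T[k] - T[k, j] * T[i]
        used_rows.append(i)
        piv_cols.append(j)
    secondary = [i for i in range(n) if i not in used_rows]
    if any(abs(T[i, -1]) > tol for i in secondary):
        return "incompatible", piv_cols, T
    return "compatible", piv_cols, T

# system (2) of Example 1.6.1: compatible, with one secondary unknown
A = [[4, 3, 3], [3, 2, 1], [11, 8, 7]]
b = [6, 8, 20]
status, piv_cols, T = solve_complete(A, b)
print(status, piv_cols)   # compatible [0, 1]
```

The final table carries the rows $x_1 - 3 x_3 = 12$ and $x_2 + 5 x_3 = -14$, i.e. the parametric solution $(3\alpha + 12, -5\alpha - 14, \alpha)$ stated in the example.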

1.6.1. Example. Solve the systems:

(1) $\begin{cases} 4x_1 + 3x_2 + 3x_3 = 14 \\ 3x_1 + 2x_2 + 5x_3 = 13 \\ 2x_1 + x_2 + 8x_3 = 13 \end{cases}$ (the solution is $\begin{bmatrix} 2 \\ 1 \\ 1 \end{bmatrix}$);

(2) $\begin{cases} 4x_1 + 3x_2 + 3x_3 = 6 \\ 3x_1 + 2x_2 + x_3 = 8 \\ 11x_1 + 8x_2 + 7x_3 = 20 \end{cases}$ (the solution is $\begin{bmatrix} 3\alpha + 12 \\ -5\alpha - 14 \\ \alpha \end{bmatrix}$, $\alpha \in \mathbb{R}$);

(3) $\begin{cases} x_1 + x_2 + x_3 = 10 \\ 2x_1 + x_2 + x_3 = 16 \\ \quad \vdots \end{cases}$ (no solution).

1.6.2. Solution.


## 1. VECTOR SPACES AND VECTOR SUBSPACES

(1) The pivot tables are:

x1     x2      x3     |  b
4*     3       3      |  14
3      2       5      |  13
2      1       8      |  13

1      3/4     3/4    |  7/2
0      -1/4*   11/4   |  5/2
0      -1/2    13/2   |  6

1      0       9      |  11
0      1       -11    |  -10
0      0       1*     |  1

1      0       0      |  2
0      1       0      |  1
0      0       1      |  1

The solution may be read from the column b of the last table: (x1, x2, x3) = (2, 1, 1).
In matrix form, the operations for each pivot are the following (by using elementary matrices):
In matrix form, the operations for each pivot are the following (by using elementary matrices):
E1 = [ 1/4 0 0 ; -3/4 1 0 ; -1/2 0 1 ],
E1·[A|b] = [ 1 3/4 3/4 7/2 ; 0 -1/4 11/4 5/2 ; 0 -1/2 13/2 6 ];
E2 = [ 1 3 0 ; 0 -4 0 ; 0 -2 1 ],
E2·E1·[A|b] = [ 1 0 9 11 ; 0 1 -11 -10 ; 0 0 1 1 ];
E3 = [ 1 0 -9 ; 0 1 11 ; 0 0 1 ],
E3·E2·E1·[A|b] = [ 1 0 0 2 ; 0 1 0 1 ; 0 0 1 1 ].


The matrix identity from the initial matrix [ 4 3 3 14 ; 3 2 5 13 ; 2 1 8 13 ] up to the final matrix [ 1 0 0 2 ; 0 1 0 1 ; 0 0 1 1 ] (by using elementary matrices) is
E3·E2·E1·[A|b] = [I | x], with
E3·E2·E1 = [ -11 21 -9 ; 14 -26 11 ; 1 -2 1 ] = A⁻¹
(indeed, [ -11 21 -9 ; 14 -26 11 ; 1 -2 1 ]·[ 4 3 3 ; 3 2 5 ; 2 1 8 ] = I3), and in the form:
[ 4 3 3 ; 3 2 5 ; 2 1 8 ]·[ 1 0 0 2 ; 0 1 0 1 ; 0 0 1 1 ] = [ 4 3 3 14 ; 3 2 5 13 ; 2 1 8 13 ]
[this is a factorization also known as the LU decomposition, see [?], Section 2.6, page 83].


The extended form of the tables (also tracking the standard basis columns):

     |  x1     x2      x3     |  b    |  e1     e2     e3
e1   |  4*     3       3      |  14   |  1      0      0
e2   |  3      2       5      |  13   |  0      1      0
e3   |  2      1       8      |  13   |  0      0      1

x1   |  1      3/4     3/4    |  7/2  |  1/4    0      0
e2   |  0      -1/4*   11/4   |  5/2  |  -3/4   1      0
e3   |  0      -1/2    13/2   |  6    |  -1/2   0      1

x1   |  1      0       9      |  11   |  -2     3      0
x2   |  0      1       -11    |  -10  |  3      -4     0
e3   |  0      0       1*     |  1    |  1      -2     1

x1   |  1      0       0      |  2    |  -11    21     -9
x2   |  0      1       0      |  1    |  14     -26    11
x3   |  0      0       1      |  1    |  1      -2     1

In matrix form, the operations for each pivot are (by using the same elementary matrices):
E1·[A | b | I3] = [ 1 3/4 3/4 7/2 1/4 0 0 ; 0 -1/4 11/4 5/2 -3/4 1 0 ; 0 -1/2 13/2 6 -1/2 0 1 ],
E2·E1·[A | b | I3] = [ 1 0 9 11 -2 3 0 ; 0 1 -11 -10 3 -4 0 ; 0 0 1 1 1 -2 1 ],
E3·E2·E1·[A | b | I3] = [ 1 0 0 2 -11 21 -9 ; 0 1 0 1 14 -26 11 ; 0 0 1 1 1 -2 1 ].


The matrix identity starting from the initial matrix [ 4 3 3 14 1 0 0 ; 3 2 5 13 0 1 0 ; 2 1 8 13 0 0 1 ] and ending with the final matrix [ 1 0 0 2 -11 21 -9 ; 0 1 0 1 14 -26 11 ; 0 0 1 1 1 -2 1 ] (by using elementary matrices) is
E3·E2·E1·[A | b | I3] = [I3 | x | A⁻¹],
with E3·E2·E1 = [ -11 21 -9 ; 14 -26 11 ; 1 -2 1 ] = A⁻¹;
the corresponding LU-type factorization is
[ 4 3 3 ; 3 2 5 ; 2 1 8 ]·[ 1 0 0 2 -11 21 -9 ; 0 1 0 1 14 -26 11 ; 0 0 1 1 1 -2 1 ] = [ 4 3 3 14 1 0 0 ; 3 2 5 13 0 1 0 ; 2 1 8 13 0 0 1 ].
Various verifications from the table:
[ -11 21 -9 ; 14 -26 11 ; 1 -2 1 ]·[ 4 3 3 ; 3 2 5 ; 2 1 8 ] = [ 1 0 0 ; 0 1 0 ; 0 0 1 ];
[ 4 3 3 ; 3 2 5 ; 2 1 8 ]·(2, 1, 1)ᵀ = (14, 13, 13)ᵀ;
[ 4 3 3 ; 3 2 5 ; 2 1 8 ]·(-11, 14, 1)ᵀ = (1, 0, 0)ᵀ;
[ 4 3 3 ; 3 2 5 ; 2 1 8 ]·(21, -26, -2)ᵀ = (0, 1, 0)ᵀ;
[ 4 3 3 ; 3 2 5 ; 2 1 8 ]·(-9, 11, 1)ᵀ = (0, 0, 1)ᵀ
(the columns of A⁻¹ are sent by A to the standard basis vectors).
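These verifications can also be reproduced mechanically; a small sketch (not from the notes; the matrices are exactly the ones read from the tables above):

```python
A = [[4, 3, 3], [3, 2, 5], [2, 1, 8]]
A_inv = [[-11, 21, -9], [14, -26, 11], [1, -2, 1]]  # read from the final table

def matmul(X, Y):
    # textbook row-by-column product
    return [[sum(a * b for a, b in zip(row, col)) for col in zip(*Y)] for row in X]

I3 = [[1, 0, 0], [0, 1, 0], [0, 0, 1]]
print(matmul(A_inv, A) == I3)       # → True
print(matmul(A, [[2], [1], [1]]))   # → [[14], [13], [13]]
```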

(2) The pivot method for this system has the following form:

x1     x2      x3     |  b
4*     3       3      |  6
3      2       1      |  8
11     8       7      |  20

1      3/4     3/4    |  3/2
0      -1/4*   -5/4   |  7/2
0      -1/4    -5/4   |  7/2

1      0       -3     |  12
0      1       5      |  -14
0      0       0      |  0

The procedure cannot be continued, since the last line of the last table (corresponding to the unknowns) is zero. Because the entry of that line in the column b is also zero, the system is compatible and undetermined (it has 2 main unknowns and 1 secondary unknown).
The complete solution of the system is
{ (x1, x2, x3) = (12 + 3α, -14 - 5α, α) ; α ∈ R }.

In matrix form, the operations for each pivot are the following (by using elementary matrices):
E1 = [ 1/4 0 0 ; -3/4 1 0 ; -11/4 0 1 ],
E1·[A|b] = [ 1 3/4 3/4 3/2 ; 0 -1/4 -5/4 7/2 ; 0 -1/4 -5/4 7/2 ];
E2 = [ 1 3 0 ; 0 -4 0 ; 0 -1 1 ],
E2·E1·[A|b] = [ 1 0 -3 12 ; 0 1 5 -14 ; 0 0 0 0 ].
The matrix identity from the initial matrix up to the final matrix (by using elementary matrices) is:
E2·E1·[ 4 3 3 6 ; 3 2 1 8 ; 11 8 7 20 ] = [ 1 0 -3 12 ; 0 1 5 -14 ; 0 0 0 0 ], with
E2·E1 = [ -2 3 0 ; 3 -4 0 ; -2 -1 1 ];
the attached LU-type decomposition is:
[ 4 3 0 ; 3 2 0 ; 11 8 1 ]·[ 1 0 -3 12 ; 0 1 5 -14 ; 0 0 0 0 ] = [ 4 3 3 6 ; 3 2 1 8 ; 11 8 7 20 ].

(3) The pivot method applied to this system is:


x1     x2     x3     |  b
1*     1      1      |  10
2      1      1      |  16
3      2      2      |  24

1      1      1      |  10
0      -1*    -1     |  -4
0      -1     -1     |  -6

1      0      0      |  6
0      1      1      |  4
0      0      0      |  -2

The last (secondary) equation reads 0 = -2, so the system is incompatible (no solution).
In matrix form, the operations (with elementary matrices) are:
E1 = [ 1 0 0 ; -2 1 0 ; -3 0 1 ],
E1·[A|b] = [ 1 1 1 10 ; 0 -1 -1 -4 ; 0 -1 -1 -6 ];
E2 = [ 1 1 0 ; 0 -1 0 ; 0 -1 1 ],
E2·E1·[A|b] = [ 1 0 0 6 ; 0 1 1 4 ; 0 0 0 -2 ].
The matrix identity from the initial matrix up to the final matrix is:
E2·E1·[ 1 1 1 10 ; 2 1 1 16 ; 3 2 2 24 ] = [ 1 0 0 6 ; 0 1 1 4 ; 0 0 0 -2 ], with
E2·E1 = [ -1 1 0 ; 2 -1 0 ; -1 -1 1 ];
the attached LU-type decomposition is:
[ 1 1 0 ; 2 1 0 ; 3 2 1 ]·[ 1 0 0 6 ; 0 1 1 4 ; 0 0 0 -2 ] = [ 1 1 1 10 ; 2 1 1 16 ; 3 2 2 24 ].
The matrix inverse may be obtained by using the same procedure, with the corresponding tables in the form:
[ A | I ] → [ I | A⁻¹ ]

1.6.3. Example. Find the inverse of the matrix A = [ 2 3 1 ; 1 2 1 ; 1 1 2 ].

1.6.4. Solution. The pivot table is the following:

2*     3       1      |  1      0      0
1      2       1      |  0      1      0
1      1       2      |  0      0      1

1      3/2     1/2    |  1/2    0      0
0      1/2*    1/2    |  -1/2   1      0
0      -1/2    3/2    |  -1/2   0      1

1      0       -1     |  2      -3     0
0      1       1      |  -1     2      0
0      0       2*     |  -1     1      1

1      0       0      |  3/2    -5/2   1/2
0      1       0      |  -1/2   3/2    -1/2
0      0       1      |  -1/2   1/2    1/2

From the last three columns and the last three lines we read the inverse matrix:
[ 2 3 1 ; 1 2 1 ; 1 1 2 ]⁻¹ = [ 3/2 -5/2 1/2 ; -1/2 3/2 -1/2 ; -1/2 1/2 1/2 ]
In matrix form, the pivot operations are (with elementary matrices):
E1 = [ 1/2 0 0 ; -1/2 1 0 ; -1/2 0 1 ],
E1·[A | I] = [ 1 3/2 1/2 1/2 0 0 ; 0 1/2 1/2 -1/2 1 0 ; 0 -1/2 3/2 -1/2 0 1 ];
E2 = [ 1 -3 0 ; 0 2 0 ; 0 1 1 ],
E2·E1·[A | I] = [ 1 0 -1 2 -3 0 ; 0 1 1 -1 2 0 ; 0 0 2 -1 1 1 ];
E3 = [ 1 0 1/2 ; 0 1 -1/2 ; 0 0 1/2 ],
E3·E2·E1·[A | I] = [ 1 0 0 3/2 -5/2 1/2 ; 0 1 0 -1/2 3/2 -1/2 ; 0 0 1 -1/2 1/2 1/2 ] = [ I | A⁻¹ ].
The LU decomposition in this situation is:
[ 2 3 1 ; 1 2 1 ; 1 1 2 ] = [ 1 0 0 ; 1/2 1 0 ; 1/2 -1 1 ]·[ 2 3 1 ; 0 1/2 1/2 ; 0 0 2 ].
Verification:
[ 2 3 1 ; 1 2 1 ; 1 1 2 ]·[ 3/2 -5/2 1/2 ; -1/2 3/2 -1/2 ; -1/2 1/2 1/2 ] = [ 1 0 0 ; 0 1 0 ; 0 0 1 ].
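The inversion procedure of Example 1.6.3 can be mirrored in code; a minimal sketch (our own helper, applying the pivot rule to the table [A | I]):

```python
from fractions import Fraction

def invert(A):
    """Invert A by pivoting on the augmented table [A | I]."""
    n = len(A)
    rows = [[Fraction(x) for x in A[i]] + [Fraction(int(i == j)) for j in range(n)]
            for i in range(n)]
    for col in range(n):
        r = next(i for i in range(col, n) if rows[i][col] != 0)
        rows[col], rows[r] = rows[r], rows[col]
        piv = rows[col][col]
        rows[col] = [x / piv for x in rows[col]]
        for i in range(n):
            if i != col and rows[i][col] != 0:
                f = rows[i][col]
                rows[i] = [a - f * b for a, b in zip(rows[i], rows[col])]
    return [r[n:] for r in rows]    # the last n columns hold the inverse

A_inv = invert([[2, 3, 1], [1, 2, 1], [1, 1, 2]])
print(A_inv[0])  # → [Fraction(3, 2), Fraction(-5, 2), Fraction(1, 2)]
```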

By using the pivot method and similar tables, most of the required Linear Algebra calculations may be performed (by hand, or with a computer), at the same time keeping track of the interpretations in terms of bases. The theoretical framework is given by:

1.6.5. Theorem. (The substitution Lemma) Consider a finite-type vector space (V, K), V1 the subspace generated by the ordered linearly independent set B = (e1, ..., em), and v = Σ_{i=1..m} αi·ei ∈ V1. If αj ≠ 0, then B' = (e1, ..., e_{j-1}, v, e_{j+1}, ..., em) is also an ordered linearly independent set which generates V1, and the connection between the old coordinates [x]_B = (β1, ..., βm)ᵀ and the new coordinates [x]_{B'} = (β'1, ..., β'm)ᵀ of an arbitrary vector x ∈ V1 is given by:
β'j = βj / αj,   β'i = βi - αi·βj / αj (for i ≠ j).
Proof. As αj ≠ 0, from v = Σ_{i=1..m} αi·ei we get
ej = (1/αj)·v - Σ_{i≠j} (αi/αj)·ei
(so any linear combination involving ej may be replaced by a linear combination from B').
Hence B' generates V1; because it has the same number of vectors as B, which is minimal as a generating set, B' is a basis of V1.
Alternative proof: let γ1, ..., γm be scalars such that γj·v + Σ_{i≠j} γi·ei = 0. Replacing v, we get
γj·αj·ej + Σ_{i≠j} (γi + γj·αi)·ei = 0;
since B is linearly independent, γj·αj = 0 and (as αj ≠ 0) γj = 0; then γi + γj·αi = γi = 0 for all i ≠ j, so B' is linearly independent, and therefore a basis of V1.
Moreover,
x = Σ_{i=1..m} βi·ei = βj·ej + Σ_{i≠j} βi·ei = βj·((1/αj)·v - Σ_{i≠j} (αi/αj)·ei) + Σ_{i≠j} βi·ei = (βj/αj)·v + Σ_{i≠j} (βi - αi·βj/αj)·ei,
which gives the stated formulas for the new coordinates. □

1.6.6. Example. Study the nature of the vector set {v1, v2, v3, v4}, with vectors:
v1 = (1, 0, -1)ᵀ, v2 = (2, 1, 3)ᵀ, v3 = (1, 1, 1)ᵀ, v4 = (-1, 2, 0)ᵀ.
1.6.7. Solution. Consider the vector equation λ1·v1 + λ2·v2 + λ3·v3 + λ4·v4 = 0, with unknowns λ1, λ2, λ3, λ4 satisfying it.
By replacing in the vector equation the coordinates of the vectors (in the standard basis) we get:
λ1·[v1]E + λ2·[v2]E + λ3·[v3]E + λ4·[v4]E = [0]E ⇒
λ1·(1, 0, -1)ᵀ + λ2·(2, 1, 3)ᵀ + λ3·(1, 1, 1)ᵀ + λ4·(-1, 2, 0)ᵀ = (0, 0, 0)ᵀ,
which is the same as the linear homogeneous system (with unknowns λ1, λ2, λ3, λ4):
λ1 + 2λ2 + λ3 - λ4 = 0
λ2 + λ3 + 2λ4 = 0
-λ1 + 3λ2 + λ3 = 0
Use the pivot method to solve the system and to keep the bases interpretations:
     |  v1     v2     v3     v4     |  e1     e2     e3
e1   |  1*     2      1      -1     |  1      0      0
e2   |  0      1      1      2      |  0      1      0
e3   |  -1     3      1      0      |  0      0      1

v1   |  1      2      1      -1     |  1      0      0
e2   |  0      1*     1      2      |  0      1      0
e3   |  0      5      2      -1     |  1      0      1

v1   |  1      0      -1     -5     |  1      -2     0
v2   |  0      1      1      2      |  0      1      0
e3   |  0      0      -3*    -11    |  1      -5     1

v1   |  1      0      0      -4/3   |  2/3    -1/3   -1/3
v2   |  0      1      0      -5/3   |  1/3    -2/3   1/3
v3   |  0      0      1      11/3   |  -1/3   5/3    -1/3
In matrix form, the pivot operations are the following (with elementary matrices):
E1 = [ 1 0 0 ; 0 1 0 ; 1 0 1 ],
E1·[A | I] = [ 1 2 1 -1 1 0 0 ; 0 1 1 2 0 1 0 ; 0 5 2 -1 1 0 1 ];
E2 = [ 1 -2 0 ; 0 1 0 ; 0 -5 1 ],
E2·E1·[A | I] = [ 1 0 -1 -5 1 -2 0 ; 0 1 1 2 0 1 0 ; 0 0 -3 -11 1 -5 1 ];
E3 = [ 1 0 -1/3 ; 0 1 1/3 ; 0 0 -1/3 ],
E3·E2·E1·[A | I] = [ 1 0 0 -4/3 2/3 -1/3 -1/3 ; 0 1 0 -5/3 1/3 -2/3 1/3 ; 0 0 1 11/3 -1/3 5/3 -1/3 ].
Verification:
[ 2/3 -1/3 -1/3 ; 1/3 -2/3 1/3 ; -1/3 5/3 -1/3 ]·[ 1 2 1 ; 0 1 1 ; -1 3 1 ] = [ 1 0 0 ; 0 1 0 ; 0 0 1 ].
Conclusion: the set {v1, v2, v3, v4} is linearly dependent (from the last table, λ1 = (4/3)λ4, λ2 = (5/3)λ4, λ3 = -(11/3)λ4; for λ4 = 3 this gives 4·v1 + 5·v2 - 11·v3 + 3·v4 = 0), with the subset {v1, v2, v3} linearly independent.
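Assuming the vectors read v1 = (1, 0, -1), v2 = (2, 1, 3), v3 = (1, 1, 1), v4 = (-1, 2, 0) (as above), the dependence relation can be checked directly; a small sketch (not from the notes):

```python
v1, v2, v3, v4 = (1, 0, -1), (2, 1, 3), (1, 1, 1), (-1, 2, 0)

def comb(coeffs, vectors):
    # linear combination, computed coordinate by coordinate
    return tuple(sum(c * v[i] for c, v in zip(coeffs, vectors)) for i in range(3))

# the integer dependence relation 4*v1 + 5*v2 - 11*v3 + 3*v4 = 0
print(comb((4, 5, -11, 3), (v1, v2, v3, v4)))  # → (0, 0, 0)
```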
1.6.8. Example. Consider the vectors:
v1 = (1, 2, 1)ᵀ, v2(m) = (1, m, 1)ᵀ, v3(m) = (m, 0, 1)ᵀ, m ∈ R.
Discuss the nature of the set {v1, v2, v3} with respect to the parameter m.
1.6.9. Solution. Consider the system (with unknowns λ1, λ2, λ3, and with the parameter m):
λ1·v1 + λ2·v2(m) + λ3·v3(m) = 0.
1.6.10. Remark. The current presentation of the pivot method is far from complete:
the situation when the pivot cannot be taken from the main diagonal has not been covered;
numerical aspects (and approximate results) of the procedure are not covered;
obtaining the results by using software products, and big/huge examples, are not covered;
other applications (in other disciplines) are not covered.
We hope that other texts will be able to cover these aspects.
1.6.11. Example (Economic Theory Application, adapted from [?], page 108, Example 1). An American firm has a gross profit in amount of 100000 USD. The firm accepts to donate 10% of the net profit to The Red Cross. The firm has to pay a state tax of 5% of the profit (after the donation) and a federal tax of 40% of the profit (after the donation and after the state tax).
Which are the taxes, and how much does the firm donate?
What is the real value of the donation?
1.6.12. Solution. Denote by D the donation, by S the state tax and by F the federal tax.
The net profit is 100000 - S - F;
D = (1/10)·(100000 - S - F) ⇒ 10D + S + F = 100000;
S = (5/100)·(100000 - D) ⇒ D + 20S = 100000;
F = (40/100)·(100000 - D - S) ⇒ 2D + 2S + 5F = 200000.
Solve the linear system:
{ 10D + S + F = 100000 ; D + 20S = 100000 ; 2D + 2S + 5F = 200000 };
the solution is:
F = 11400000/319 ≈ 35736.68;
S = 1500000/319 ≈ 4702.19;
D = 1900000/319 ≈ 5956.11.
When there is no donation, the taxes would be bigger:
D = 0 ⇒ S = (5/100)·100000 = 5000, and F = (40/100)·(100000 - 5000) = 38000.
The difference between the taxes without donation and the taxes with donation is:
5000 + 38000 - (1500000/319 + 11400000/319) = 817000/319 ≈ 2561.13.
The real value of the donation is the difference between the donation made and the tax excess when the donation is absent:
1900000/319 - 817000/319 = 1083000/319 ≈ 3394.98.
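The exact fractions above can be recovered with a few lines of code; a sketch (the elimination helper is our own, not part of the notes):

```python
from fractions import Fraction

# augmented system from 1.6.12: unknowns D, S, F
M = [[10, 1, 1, 100000],
     [1, 20, 0, 100000],
     [2, 2, 5, 200000]]

def solve3(M):
    rows = [[Fraction(x) for x in r] for r in M]
    for col in range(3):
        r = next(i for i in range(col, 3) if rows[i][col] != 0)
        rows[col], rows[r] = rows[r], rows[col]
        piv = rows[col][col]
        rows[col] = [x / piv for x in rows[col]]
        for i in range(3):
            if i != col and rows[i][col] != 0:
                f = rows[i][col]
                rows[i] = [a - f * b for a, b in zip(rows[i], rows[col])]
    return [r[3] for r in rows]

D, S, F = solve3(M)
print(D, S, F)  # → 1900000/319 1500000/319 11400000/319
```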


## 1.7. Operations with Subspaces

1.7.1. Proposition (Prop. 5.11, [?]). If (V, K) is a finite-type vector space and V0 ⊆ V is a vector subspace, then:
(1) (V0, K) is also of finite type;
(2) for each basis B0 of V0, there is a basis B of V such that B0 ⊆ B;
(3) dim V0 ≤ dim V;
(4) dim V0 = dim V ⇔ V0 = V.
1.7.2. Proposition. The intersection of any family of subspaces is a subspace (and a nonvoid set).
Proof. If (V, K) is a vector space and (Vi)_{i∈I} are vector subspaces in (V, K), then for V0 := ∩_{i∈I} Vi we have:
0. ∀i ∈ I, 0 ∈ Vi ⇒ 0 ∈ V0 (so V0 ≠ ∅);
1. x, y ∈ V0 ⇒ ∀i ∈ I, x, y ∈ Vi ⇒ ∀i ∈ I, x + y ∈ Vi ⇒ x + y ∈ V0;
2. x ∈ V0, α ∈ K ⇒ ∀i ∈ I, x ∈ Vi and α ∈ K ⇒ ∀i ∈ I, α·x ∈ Vi ⇒ α·x ∈ V0. □
1.7.3. Example. Consider the vectors v1 = (2, 2, 2), v2 = (2, 2, 1), v3 = (-3, 3, 2), v4 = (2, 5, 1) and the subspaces V1 = span({v1, v2}), V2 = span({v3, v4}).
Describe the intersection V1 ∩ V2.
1.7.4. Solution. v ∈ V1 ∩ V2 ⇒ v is simultaneously a linear combination of v1, v2 and of v3, v4, so we get the vector relation:
v = λ1·v1 + λ2·v2 = λ3·v3 + λ4·v4,
which written in the standard basis is:
λ1·[v1]E + λ2·[v2]E = λ3·[v3]E + λ4·[v4]E.
[v2 ]E = 3 [v3 ]E +
2 3
2
2
6 7
6
6 7
6
By replacing [v1 ]E = 6 2 7, [v2 ]E = 6
4 5
4
2
we obtain the system:
8
>
2 1+2 2 = 3 3+2 4
>
>
<
2 1 + 2 2 = 3 3 + 5 4 , with the
>
>
>
:
2 1
2 = 2 3+ 4
1

[v1 ]E +

3 v3

We get [v]E =

[v1 ]E +

[v2 ]E =

7
12

[v4 ]E .
3
2
2
3
7
6
7
6
2 7, [v3 ]E = 6 3
5
4
1
2

solution:

2
6 7
6 7
6 2 7+
4 5
2

8
>
>
>
>
>
>
>
<
>
>
>
>
>
>
>
:

7
6

7
12

7
6

1
2

2 R:
2
3
2
6
6
7
6
6
7
6 2 7= 6
6
4
5
4
1
2

7
6 7
7
6 7
7, [v4 ]E = 6 5 7,
5
4 5
1

7
2
7
2
0

3
7
7
7
7
5

56

(2; 5; 1) =

7 7
; ;0
2 2

7
12

7 7
; ;0
2 2

or v =

1
2

( 3; 3; 2)+

## The intersection of the two subspaces is:

V1 \ V2 =

7 7
; ;0 ;
2 2

2R

= f (1; 1; 0) ;

2 Rg :
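A quick cross-check (our own, not from the notes): both generators of V1 satisfy the plane equation -x + y = 0, both generators of V2 satisfy -x + y - 3z = 0, and the vector found in the intersection satisfies both:

```python
from fractions import Fraction

v1, v2, v3, v4 = (2, 2, 2), (2, 2, 1), (-3, 3, 2), (2, 5, 1)
p1 = lambda p: -p[0] + p[1]              # V1 lies in the plane -x + y = 0
p2 = lambda p: -p[0] + p[1] - 3 * p[2]   # V2 lies in the plane -x + y - 3z = 0
v = (Fraction(7, 2), Fraction(7, 2), 0)  # vector from the intersection

print(all(p1(u) == 0 for u in (v1, v2, v)))  # → True
print(all(p2(u) == 0 for u in (v3, v4, v)))  # → True
```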

Remarks:
The system { 2λ1 + 2λ2 = -3 ; 2λ1 + 2λ2 = 3 ; 2λ1 + λ2 = 2 } has no solution, so that v3 ∉ V1.
The system { 2λ1 + 2λ2 = 2 ; 2λ1 + 2λ2 = 5 ; 2λ1 + λ2 = 1 } has no solution, so that v4 ∉ V1.
The system { 2 = -3λ3 + 2λ4 ; 2 = 3λ3 + 5λ4 ; 2 = 2λ3 + λ4 } has no solution, so that v1 ∉ V2.
The system { 2 = -3λ3 + 2λ4 ; 2 = 3λ3 + 5λ4 ; 1 = 2λ3 + λ4 } has no solution, so that v2 ∉ V2.
Because there is no basis mentioned, we assume that the initial representation is done by using the standard ordered basis E = (e1, e2, e3), where e1 = (1, 0, 0), e2 = (0, 1, 0), e3 = (0, 0, 1).
In the ordered basis E the objects are represented as follows:
[e1]E = (1, 0, 0)ᵀ, [e2]E = (0, 1, 0)ᵀ, [e3]E = (0, 0, 1)ᵀ,
[v1]E = (2, 2, 2)ᵀ, [v2]E = (2, 2, 1)ᵀ, [v3]E = (-3, 3, 2)ᵀ, [v4]E = (2, 5, 1)ᵀ, [v]E = (7/2, 7/2, 0)ᵀ.
3
The set B1 = (v1, v2) is linearly independent (because the matrix [ 2 2 ; 2 2 ; 2 1 ] has rank 2) and generates V1, so that B1 is a basis for V1; the dimension of V1 is 2, and the objects within V1 are represented like this:
[v1]B1 = (1, 0)ᵀ, [v2]B1 = (0, 1)ᵀ, while [v3]B1 and [v4]B1 do not exist; [v]B1 = (-7/4, 7/2)ᵀ (the vector v may be represented in B1 because it is from the intersection).
The set B2 = (v3, v4) is a basis in V2, the dimension of V2 is 2, and the objects from V2 are represented as:
[v3]B2 = (1, 0)ᵀ, [v4]B2 = (0, 1)ᵀ, while [v1]B2 and [v2]B2 do not exist; [v]B2 = (-1/2, 1)ᵀ (the vector v may be represented in B2 because it is from the intersection).
In the picture there are the following objects (in terms of both Linear Algebra and Analytic Geometry):
The points: O(0, 0, 0), P1(2, 2, 2), P2(2, 2, 1), P3(-3, 3, 2), P4(2, 5, 1);
The position vectors: OP1, OP2, OP3, OP4 (the vectors v1, v2, v3, v4);
The planes: (PL1): -x + y = 0 (the subspace V1); (PL2): -x + y - 3z = 0 (the subspace V2);
The intersecting line of the planes (PL1) and (PL2), (PL12): x = (0, 0, 0) + s·(1, 1, 0) (the subspace V1 ∩ V2 = { α·(1, 1, 0) ; α ∈ R }, with dimension 1). The picture was obtained with the software product [?].

1.7.5. Definition. Given a family of subspaces (Vi)_{i=1..k}, the set
Σ_{i=1..k} Vi := { Σ_{i=1..k} vi ; vi ∈ Vi, ∀i = 1..k }
is called the sum of the subspaces.

1.7.6. Proposition. The sum of subspaces is a subspace.
Proof. For xj ∈ Σ_{i=1..k} Vi (j = 1, 2), write xj = Σ_{i=1..k} vi^j with vi^j ∈ Vi; then
x1 + x2 = Σ_{i=1..k} (vi^1 + vi^2) ∈ Σ_{i=1..k} Vi.
Similarly, for α ∈ K, α·x1 = Σ_{i=1..k} (α·vi^1) ∈ Σ_{i=1..k} Vi. □

1.7.7. Proposition. The sum of subspaces is the span of their union,
Σ_{i=1..k} Vi = span(∪_{i=1..k} Vi),
and it is the smallest subspace which contains the union (in terms of inclusion).
Proof. Consider x ∈ Σ_{i=1..k} Vi ⇒ ∀i, ∃vi ∈ Vi such that x = Σ_{i=1..k} vi ⇒ x ∈ span(∪_{i=1..k} Vi).
Conversely, consider x ∈ span(∪_{i=1..k} Vi) ⇒ ∃m ∈ N*, ∃αj ∈ K, j = 1..m, ∃vj ∈ ∪_{i=1..k} Vi, j = 1..m, such that x = Σ_{j=1..m} αj·vj.
For each j = 1..m, ∃ij, vj = u_{ij} ∈ V_{ij}, and then αj·u_{ij} ∈ V_{ij}, so x = Σ_{j=1..m} αj·u_{ij} ∈ Σ_{i=1..k} Vi [for two or more different indices j1, j2 it may happen that the indices i_{j1} and i_{j2} are the same, and in this situation αj1·u_{ij1} + αj2·u_{ij2} ∈ V_{ij1}]. □

1.7.8. Proposition. For any family of subspaces (Vi)_{i=1..k},
max_{i=1..k} (dim Vi) ≤ dim(Σ_{i=1..k} Vi) ≤ Σ_{i=1..k} dim(Vi).
Proof. For the first inequality we have: ∀i = 1..k, Vi ⊆ Σ_{i=1..k} Vi ⇒ ∀i = 1..k, dim Vi ≤ dim(Σ Vi) ⇒ max_{i=1..k}(dim Vi) ≤ dim(Σ Vi).
For the second inequality, consider a basis for each space and their union; the union generates the sum Σ Vi and the number of its vectors is at most Σ_{i=1..k} dim(Vi); because any basis has at most as many vectors as a generating set, we get dim(Σ Vi) ≤ Σ_{i=1..k} dim(Vi). □

1.7.9. Proposition. (Equivalent definitions for the direct sum of two subspaces) Consider two subspaces V1 and V2, and their sum V1 + V2, in the vector space (V, K). The following statements are equivalent:
(1) ∀v ∈ V1 + V2, ∃!v1 ∈ V1, ∃!v2 ∈ V2, v = v1 + v2 [each vector admits a unique decomposition as a sum between a vector from V1 and a vector from V2];
(2) V1 ∩ V2 = {0V} [the intersection of the two subspaces is the null subspace];
(3) for any basis B1 of V1 and any basis B2 of V2, the set B1 ∪ B2 is a basis for V1 + V2;
(4) dim(V1 + V2) = dim V1 + dim V2 [the dimension of the sum subspace equals the sum of the dimensions of the component subspaces].
Proof. We will prove the equivalences 1. ⇔ 2., 2. ⇔ 3., 3. ⇔ 4.
1. ⇒ 2. By contradiction: non(V1 ∩ V2 = {0V}) means ∃x ∈ V1 ∩ V2, x ≠ 0V.
Consider v ∈ V1 + V2 and x ∈ V1 ∩ V2, x ≠ 0V.
v ∈ V1 + V2 ⇒ ∃v1 ∈ V1, v2 ∈ V2, such that v = v1 + v2.
But since x ∈ V1 ∩ V2, v = (v1 - x) + (v2 + x) is another decomposition for v, which is distinct because x ≠ 0V.
This means: (∃x ∈ V1 ∩ V2, x ≠ 0V) ⇒ (v = v1 + v2 = (v1 - x) + (v2 + x)) [the decomposition is not unique].
So non(2.) ⇒ non(1.), i.e. 1. ⇒ 2.
2. ⇒ 1. By contradiction: if the decomposition is not unique, then ∃u1, v1 ∈ V1, ∃u2, v2 ∈ V2, such that u1 ≠ v1 or u2 ≠ v2, and v = v1 + v2 = u1 + u2.
Then 0V ≠ v1 - u1 = u2 - v2 ∈ V1 ∩ V2 ⇒ (∃x (= v1 - u1 = u2 - v2) ∈ V1 ∩ V2, x ≠ 0V).
So non(1.) ⇒ non(2.), i.e. 2. ⇒ 1.
2. ⇔ 3. Consider a basis (ei)_{i=1..k1} of V1 and a basis (fj)_{j=1..k2} of V2.
"⇒" Assume that V1 ∩ V2 = {0V}.
The union set (ei)_{i=1..k1} ∪ (fj)_{j=1..k2} is a basis for V1 + V2:
it generates V1 + V2, because span((ei)_{i=1..k1}) = V1 and span((fj)_{j=1..k2}) = V2 ⇒ span((ei)_{i=1..k1} ∪ (fj)_{j=1..k2}) = V1 + V2;
it is linearly independent: consider a null linear combination
Σ_{i=1..k1} αi·ei + Σ_{j=1..k2} βj·fj = 0 ⇒

⇒ Σ_{i=1..k1} αi·ei = -Σ_{j=1..k2} βj·fj ∈ V1 ∩ V2 = {0V}
⇒ Σ_{i=1..k1} αi·ei = 0V ⇒ ∀i = 1..k1, αi = 0, and
⇒ Σ_{j=1..k2} βj·fj = 0V ⇒ ∀j = 1..k2, βj = 0.
Thus the set (ei)_{i=1..k1} ∪ (fj)_{j=1..k2} is linearly independent.
"⇐" Assume that for any basis B1 of V1 and for any basis B2 of V2, the set B1 ∪ B2 is a basis for V1 + V2.
Assume by contradiction that there is x ∈ V1 ∩ V2, x ≠ 0V.
Then the nonzero vector x has two distinct representations: one in B1 and another one in B2, which is a contradiction with the linear independence of the set B1 ∪ B2.
3. ⇔ 4.
"⇒" When statement 3. holds, B1 ∩ B2 = ∅: otherwise, if v ∈ B1 ∩ B2 then, by taking the new set B1' = (B1 \ {v}) ∪ {2v} [by replacing in B1 the vector v with the vector 2v], the set B1' is another basis for V1; the sets B1 ∪ B2 and B1' ∪ B2 cannot have the same number of elements while they are both bases of V1 + V2, which is a contradiction.
So |B1 ∪ B2| = |B1| + |B2|, which means that dim(V1 + V2) = dim V1 + dim V2.
"⇐" Assume that dim(V1 + V2) = dim V1 + dim V2, and consider a basis B1 for V1 and a basis B2 for V2.
Since span(B1) = V1 and span(B2) = V2, span(B1 ∪ B2) = V1 + V2 [the set B1 ∪ B2 generates V1 + V2].
If B1 ∩ B2 ≠ ∅, then |B1 ∪ B2| < |B1| + |B2| ⇒ dim(V1 + V2) < dim V1 + dim V2, a contradiction; so B1 ∩ B2 = ∅.
If B1 ∩ B2 = ∅ but the set B1 ∪ B2 is not linearly independent, then dim(V1 + V2) < |B1 ∪ B2| = |B1| + |B2|, which is again a contradiction.
So for any basis B1 of V1 and for any basis B2 of V2, the set B1 ∪ B2 is a basis for V1 + V2. □
1.7.10. Definition. The sum V1 + V2 of two subspaces V1 and V2 is called a direct sum when any condition from Proposition 1.7.9 takes place. The direct sum of two subspaces is denoted by V1 ⊕ V2.
[The direct sum concept may be seen as a generalization of linear independence; the direct sum may be seen as the linear independence of a set of subspaces.]
1.7.11. Definition. Consider the subspace V2 in (V, K). If V1 is another subspace of (V, K) such that V = V1 ⊕ V2, then V1 is called a complement⁶ of V2 in V.
⁶The terminology used is different for the various existing schools. The Anglo-Saxon school uses the term "complementary subspaces", while the French school uses the term "sous-espaces supplémentaires"; moreover, the differences remain over the


1.7.12. Theorem. For every subspace V1 of (V, K) there is a subspace V2 of (V, K) such that V = V1 ⊕ V2. [Each subspace has at least one complement]

Proof. Consider a basis (ei)_{i=1..k1} of V1; since this basis is a linearly independent set of V, [by Corollary 1.5.7, page 30] it may be completed up to a basis of V with some vectors denoted by (fj)_{j=1..k2}. Then the set (ei)_{i=1..k1} ∪ (fj)_{j=1..k2} is a basis for V, and the set V2 = span((fj)_{j=1..k2}) is a subspace and a complement of V1 in V:
x ∈ V1 ∩ V2 ⇒ the vector x may be represented in both sets as linear combinations,
x = Σ_{i=1..k1} αi·ei = Σ_{j=1..k2} βj·fj, so that Σ_{i=1..k1} αi·ei - Σ_{j=1..k2} βj·fj = 0,
and since the above expression is a null linear combination of the set (ei)_{i=1..k1} ∪ (fj)_{j=1..k2}, which is a basis, all the scalars are zero; this means that any element of the intersection is null, which in turn means that the sum V1 + V2 (= V) is direct. □
1.7.13. Remark. It may be observed from the proof that, since a linearly independent set may be completed in many ways up to a basis, the complement (of a proper subspace) is not unique⁷.
1.7.14. Example. If V1 = { α·(1, 0) ; α ∈ R }, then the set {(1, 0)} may be completed up to a basis in R² with each of the vectors (0, 1), (1, 1), (1, -1), so that each of the subspaces V2 = { α·(0, 1) ; α ∈ R }, V3 = { α·(1, 1) ; α ∈ R }, V4 = { α·(1, -1) ; α ∈ R } is a complement of V1 in R²: R² = V1 ⊕ V2 = V1 ⊕ V3 = V1 ⊕ V4.

1.7.15. Remark. If V1 is a subspace in V such that dim V1 = k and dim V = n, then any complement of V1 in V has dimension n - k. The dimension of the complement is also called the codimension of V1.

1.7.16. Theorem. [Equivalent definitions for the direct sum of more than two subspaces]
Consider in (V, K) k subspaces (Vi)_{i=1..k} such that V = Σ_{i=1..k} Vi. The following statements are equivalent:
(1) ∀v ∈ V, ∀i = 1..k, ∃!vi ∈ Vi such that v = Σ_{i=1..k} vi;
(2) ∀j = 1..k, Vj ∩ (Σ_{i=1..k, i≠j} Vi) = {0V};
(3) for i = 1..k, for each basis Bi in Vi, the set ∪_{i=1..k} Bi is linearly independent;
(4) Σ_{i=1..k} dim Vi = dim V.

translations (an English text translated from French uses the term "supplementary subspaces"). When we consider other schools (such as the Russian school or the German school) and related translations, we find a certain ambiguity even in the same language.
⁷The vector space (V, K) has two improper subspaces: {0} (the null subspace) and V (the whole space). Each improper subspace has a unique complement, namely the other improper subspace.


Proof. The following implications will be proved: 1 ⇒ 2, 2 ⇒ 1, 2 ⇒ 3, 3 ⇒ 2, 3 ⇒ 4 and 4 ⇒ 3.
1 ⇒ 2: Suppose by contradiction that there is an index j ∈ {1, ..., k} such that Vj ∩ (Σ_{i≠j} Vi) ≠ {0}.
Then ∃x ∈ (Vj ∩ Σ_{i≠j} Vi) \ {0} ⇒ x ∈ Vj and x ∈ Σ_{i≠j} Vi and x ≠ 0.
Then x = Σ_{i≠j} vi', vi' ∈ Vi, i ≠ j, and for an arbitrary vector v = Σ_{i=1..k} vi ∈ Σ_{i=1..k} Vi we have:
v = Σ_{i=1..k} vi = (Σ_{i≠j} vi) + vj = (Σ_{i≠j} vi - x) + (vj + x) = Σ_{i≠j} (vi - vi') + (vj + x),
which is another decomposition for v, distinct from the first one because x ≠ 0; this is a contradiction with the uniqueness of the decomposition.
2 ⇒ 1: If an element has two decompositions, Σ_{i=1..k} vi = Σ_{i=1..k} vi', then, for each j,
vj - vj' = Σ_{i≠j} (vi' - vi) ∈ Vj ∩ (Σ_{i≠j} Vi);
if vj - vj' ≠ 0 for some j, then Vj ∩ (Σ_{i≠j} Vi) ≠ {0V}, which is a contradiction.
It follows that ∀j = 1..k, vj - vj' = 0, which means that the two decompositions are identical.

2 ⇒ 3: Consider a basis Bi for each subspace Vi. The union of these bases ∪_{i=1..k} Bi generates the sum; using condition 2. we also get the linear independence of the union:
if ∪_{i=1..k} Bi is not independent, then there is an index j and a nonzero vector in Vj which is a sum of vectors from the other subspaces, which means Vj ∩ (Σ_{i≠j} Vi) ≠ {0V}, a contradiction.
It follows that the set ∪_{i=1..k} Bi is independent, which is 3.

3 ⇒ 2: Consider a basis Bj = (e_i^j)_{i=1..kj} for each subspace Vj. The union of these bases ∪_{j=1..k} Bj is assumed to be linearly independent, and it also generates the sum.
If there is an index j0 such that Vj0 ∩ (Σ_{j≠j0} Vj) ≠ {0}, then there is a vector x0 such that 0 ≠ x0 ∈ Vj0 ∩ (Σ_{j≠j0} Vj), and the representation of a certain arbitrary vector v in terms of ∪_{j=1..k} Bj may be modified:
x0 = Σ_{i=1..kj0} α_i^{j0}·e_i^{j0} = Σ_{j≠j0} Σ_{i=1..kj} α_i^j·e_i^j ⇒
v = Σ_{j=1..k} Σ_{i=1..kj} β_i^j·e_i^j = Σ_{i=1..kj0} (β_i^{j0} - α_i^{j0})·e_i^{j0} + Σ_{j≠j0} Σ_{i=1..kj} (β_i^j + α_i^j)·e_i^j,
which gives two distinct representations of v, a contradiction with the linear independence of ∪_{j=1..k} Bj.

3 ⇒ 4: If for i = 1..k and each basis Bi in Vi the set ∪_{i=1..k} Bi is linearly independent, then
∀j = 1..k, Bj ∩ (∪_{i≠j} Bi) = ∅,
because otherwise, by altering the basis Bj (so that the above condition is satisfied), we would be able to obtain alternative unions which are linearly independent but with different numbers of elements, which is a contradiction with the fact that each basis has the same number of elements.
It follows that all the possible intersections of the sets Bi are void, which means that
|∪_{i=1..k} Bi| = Σ_{i=1..k} |Bi| ⇒ dim(Σ_{i=1..k} Vi) = Σ_{i=1..k} dim Vi.

4 ⇒ 3: The union ∪_{i=1..k} Bi generates Σ_{i=1..k} Vi; if the set ∪_{i=1..k} Bi is not linearly independent, then
dim(Σ_{i=1..k} Vi) < Σ_{i=1..k} |Bi| = Σ_{i=1..k} dim Vi,
which contradicts the assumption. □

1.7.17. Definition. The sum of a family of subspaces (Vi)_{i=1..k} is called direct when any condition from Theorem 1.7.16 is satisfied.
The notation used for this special summation is ⊕_{i=1..k} Vi (and it means that the sum Σ_{i=1..k} Vi of the subspaces is direct).

1.7.18. Theorem. (The Grassmann Formula) For any two subspaces V1 and V2 of (V, K) we have:
dim V1 + dim V2 = dim(V1 + V2) + dim(V1 ∩ V2).
Proof. Consider:
V1' a complement of V1 ∩ V2 in V1: V1 = (V1 ∩ V2) ⊕ V1';
V2' a complement of V1 ∩ V2 in V2: V2 = (V1 ∩ V2) ⊕ V2'.
It follows (since V1' ⊆ V1 and V2' ⊆ V2):
V1' ∩ V2 = V1 ∩ V1' ∩ V2 = (V1 ∩ V2) ∩ V1' = {0};
V2' ∩ V1 = V2 ∩ V2' ∩ V1 = (V1 ∩ V2) ∩ V2' = {0}.
We show that the sum V1 + V2 = (V1 ∩ V2) + V1' + V2' is direct, i.e. equal to (V1 ∩ V2) ⊕ V1' ⊕ V2', by using Theorem 1.7.16.
We have to prove the relations:
(V1 ∩ V2) ∩ (V1' + V2') = {0}:
x ∈ (V1 ∩ V2) ∩ (V1' + V2') ⇒ x ∈ V1, x ∈ V2, x = u1 + u2, ui ∈ Vi' ⇒ u1 ∈ V1' ⊆ V1 and u1 = x - u2 ∈ V2 ⇒ u1 ∈ V1 ∩ V2 ∩ V1' = {0} ⇒ u1 = 0; similarly, u2 = 0, and so x = 0;
V1' ∩ ((V1 ∩ V2) + V2') ⊆ V1' ∩ V2 = {0};
V2' ∩ ((V1 ∩ V2) + V1') ⊆ V2' ∩ V1 = {0}.
So the sum (V1 ∩ V2) + V1' + V2' is direct, which means that the following relation between dimensions takes place:
dim(V1 + V2) = dim(V1 ∩ V2) + dim V1' + dim V2';
because dim Vi' = dim Vi - dim(V1 ∩ V2), it follows that
dim(V1 + V2) = dim V1 + dim V2 - dim(V1 ∩ V2). □
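The formula can be checked numerically on the subspaces of Example 1.7.3; the rank helper below is our own, performing the same pivot reduction as in Section 1.6:

```python
from fractions import Fraction

def rank(vectors):
    """Rank of the list of row vectors, by exact pivot reduction."""
    rows = [[Fraction(x) for x in v] for v in vectors]
    r = 0
    for col in range(len(rows[0])):
        piv = next((i for i in range(r, len(rows)) if rows[i][col] != 0), None)
        if piv is None:
            continue
        rows[r], rows[piv] = rows[piv], rows[r]
        p = rows[r][col]
        rows[r] = [x / p for x in rows[r]]
        for i in range(len(rows)):
            if i != r and rows[i][col] != 0:
                f = rows[i][col]
                rows[i] = [a - f * b for a, b in zip(rows[i], rows[r])]
        r += 1
    return r

V1 = [(2, 2, 2), (2, 2, 1)]      # generators from Example 1.7.3
V2 = [(-3, 3, 2), (2, 5, 1)]
d1, d2 = rank(V1), rank(V2)
d_sum = rank(V1 + V2)            # dim(V1 + V2): rank of all four generators
print(d1 + d2 == d_sum + 1)      # dim(V1 ∩ V2) = 1, so the formula holds → True
```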

## 1.8. The Lattice of Subspaces

Consider a vector space (V; K), with dim V =n, and the set of all its vector subspaces, denoted by
SL (V)

P (V).

Consider on SL(V) the operations:
Intersection "∩"; properties of the intersection:
[SL(V) is a closed part of P(V) with respect to intersection] ∀W1, W2 ∈ SL(V): W1 ∩ W2 ∈ SL(V)
[idempotency of intersection] ∀W ∈ SL(V): W ∩ W = W
[commutativity of intersection] ∀W1, W2 ∈ SL(V): W1 ∩ W2 = W2 ∩ W1
[associativity of intersection] ∀W1, W2, W3 ∈ SL(V): (W1 ∩ W2) ∩ W3 = W1 ∩ (W2 ∩ W3)
[neutral element of intersection, V] ∀W ∈ SL(V): V ∩ W = W
[first element, {0V}] ∀W ∈ SL(V): {0V} ∩ W = {0V}
[there is no inverse with respect to intersection]
Sum "+"; properties of the sum:
[SL(V) is a stable part of P(V) with respect to sum] ∀W1, W2 ∈ SL(V): W1 + W2 ∈ SL(V)
[idempotency of sum] ∀W ∈ SL(V): W + W = W
[commutativity of sum] ∀W1, W2 ∈ SL(V): W1 + W2 = W2 + W1
[associativity of sum] ∀W1, W2, W3 ∈ SL(V): (W1 + W2) + W3 = W1 + (W2 + W3)
[neutral element of sum, {0V}] ∀W ∈ SL(V): {0V} + W = W
[last element, V] ∀W ∈ SL(V): V + W = V
[there is no inverse element with respect to sum]
Properties when the intersection and the sum meet:
[absorption] W0 ∩ (W0 + W) = W0 + (W0 ∩ W) = W0
[the operations are not distributive (one with respect to the other)]
∃W1, W2, W3 ∈ SL(V): W1 ∩ (W2 + W3) ≠ (W1 ∩ W2) + (W1 ∩ W3);
for example, for 3 distinct lines passing through the origin and contained in the same plane,
W1 ∩ (W2 + W3) = W1 and (W1 ∩ W2) + (W1 ∩ W3) = {0V};
∃W1, W2, W3 ∈ SL(V): W1 + (W2 ∩ W3) ≠ (W1 + W2) ∩ (W1 + W3);
for the same example, W1 + (W2 ∩ W3) = W1 and (W1 + W2) ∩ (W1 + W3) = W1 + W2 = W1 + W3 = the containing plane.
[each element has a complement, which is not unique] ∀W ∈ SL(V), ∃W′ ∈ SL(V): W + W′ = V and W ∩ W′ = {0V}.
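The three-lines counterexample above can be verified numerically. In the NumPy sketch below (the helper names and the particular lines are illustrative choices), dimensions are computed from matrix ranks, and the intersection dimension is obtained via the Grassmann formula of Theorem 1.7.18.

```python
import numpy as np

def dim_sum(*bases):
    # dimension of a sum of subspaces: rank of the concatenated bases
    return np.linalg.matrix_rank(np.hstack(bases))

def dim_inter(A, B):
    # dim(span A ∩ span B) via the Grassmann formula
    return (np.linalg.matrix_rank(A) + np.linalg.matrix_rank(B)
            - dim_sum(A, B))

# three distinct lines through the origin, all inside the plane z = 0
W1 = np.array([[1.], [0.], [0.]])
W2 = np.array([[0.], [1.], [0.]])
W3 = np.array([[1.], [1.], [0.]])

left = dim_inter(W1, np.hstack([W2, W3]))       # dim(W1 ∩ (W2 + W3)) = 1
right = dim_inter(W1, W2) + dim_inter(W1, W3)   # dim{0} + dim{0} = 0
assert left == 1 and right == 0                 # distributivity fails
```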
Consider on SL(V) the relation "⊆" [inclusion].
Inclusion has the following properties:
[reflexivity] ∀W ∈ SL(V): W ⊆ W
[antisymmetry] W1 ⊆ W2 and W2 ⊆ W1 ⇒ W1 = W2
[transitivity] W1 ⊆ W2 and W2 ⊆ W3 ⇒ W1 ⊆ W3
[inclusion is not a total order: ¬(W1 ⊆ W2) does not imply W2 ⊆ W1]
Properties when inclusion meets intersection and sum:
[the consistency of inclusion with respect to intersection] W1 ⊆ W2 ⟺ W1 ∩ W2 = W1
[the consistency of inclusion with respect to sum] W1 ⊆ W2 ⟺ W1 + W2 = W2
∀W0 ∈ SL(V), the function W ↦ W0 ∩ W is increasing (non-strictly) (isotone) [when W1 and W2 are comparable and W1 ⊆ W2, then, for each W0, W1 ∩ W0 and W2 ∩ W0 are also comparable and the inclusion relation remains the same: W1 ∩ W0 ⊆ W2 ∩ W0]
∀W0 ∈ SL(V), the function W ↦ W0 + W is increasing (isotone) [if W1 and W2 are comparable and W1 ⊆ W2, then, for each W0, W1 + W0 and W2 + W0 are also comparable and the inclusion relation remains the same: W1 + W0 ⊆ W2 + W0]
[distributivity inclusions, which may be strict, because of the lack of distributivity]
(W1 ∩ W2) + (W1 ∩ W3) ⊆ W1 ∩ (W2 + W3)

W1 + (W2 ∩ W3) ⊆ (W1 + W2) ∩ (W1 + W3)
[modularity] When W1 ⊆ W3, we have: W1 + (W2 ∩ W3) = (W1 + W2) ∩ W3.

Proof:
"⊆":
W1 ⊆ W3 and W1 ⊆ W1 + W2 ⇒ W1 ⊆ W3 ∩ (W1 + W2);
W2 ∩ W3 ⊆ W3 and W2 ∩ W3 ⊆ W2 ⊆ W1 + W2 ⇒ W2 ∩ W3 ⊆ W3 ∩ (W1 + W2);
together, W1 + (W2 ∩ W3) ⊆ W3 ∩ (W1 + W2).
"⊇":
x ∈ W3 ∩ (W1 + W2) ⇒ x ∈ W3 and x ∈ W1 + W2 ⇒ x ∈ W3 and ∃x1 ∈ W1, ∃x2 ∈ W2, x = x1 + x2 ⇒
x2 ∈ W2 ∩ W3 (*) and x1 ∈ W1 ⇒ x = x1 + x2 ∈ W1 + (W2 ∩ W3).
(*) because x2 ∈ W2 and x2 = x − x1 ∈ W3, since x ∈ W3 and x1 ∈ W1 ⊆ W3.
Def: A subset of SL(V) is called a chain when it is totally ordered [it doesn't contain incomparable elements]; when the chain has a finite number k of elements, the number k − 1 is called the length of the chain.
Def: For an element W ∈ SL(V), the supremum of the lengths of all the chains between {0V} and W is called the height of W. The elements with height 1 are called atoms or points.
Remark: Any subset of a chain is also a chain.
Remark: Any chain from SL(V) is of finite length, of at most the dimension of the embedding vector space.
Remark: Any chain from SL(V) with length k is isomorphic with the set {1, …, k + 1} [there is a bijection φ between the chain and the set {1, …, k + 1} such that i ≤ j ⟺ φ(i) ⊆ φ(j)].
Def: If W1 ⊆ W2, the set [W1, W2] = {W ∈ SL(V); W1 ⊆ W ⊆ W2} is called the interval between W1 and W2. When the interval [W1, W2] = {W1, W2} [it doesn't contain intermediary elements] the interval is called prime and we say that W2 covers W1.

Remark: SL(V) is an interval: SL(V) = [{0V}, V].
Remark: An interval may have an infinite number of elements [for example, when W is a plane, the interval [{0V}, W] contains all the lines of the plane passing through the origin].
Remark: The elements of an interval are not necessarily comparable [the previous example, or [{0V}, V] = SL(V)].
Def: Two intervals are called transposed when they may be written as [W1 ∩ W2, W1] and [W2, W1 + W2], for a suitable choice of the subspaces W1 and W2.
Def: Two intervals are called projective when they are connected by a finite sequence of intervals, two by two transposed.
Remark: The dimension (as a vector subspace) of an element from SL(V) is a function d(·): SL(V) → {0, 1, …, n} with the properties:
* W1 ⊆ W2 ⇒ d(W1) ≤ d(W2);
* when W2 is immediate superior to W1 (covers W1), d(W2) = d(W1) + 1.
The function d(·) induces on SL(V) a graded partially ordered structure, which satisfies the Jordan–Dedekind condition: all the maximal chains with the same endpoints have the same finite length.
Def: A function v(·): SL(V) → R is called a valuation when v(W1) + v(W2) = v(W1 + W2) + v(W1 ∩ W2). The valuation is called isotone [increasing] when W1 ⊆ W2 ⇒ v(W1) ≤ v(W2), and positive [strictly increasing] when W1 ⊊ W2 ⇒ v(W1) < v(W2).
Remark: Given a strictly increasing valuation v(·), the value of an interval [W1, W2] is defined as v([W1, W2]) = v(W2) − v(W1).
Remark: d(·) is a strictly increasing valuation.

The set SL(V) has an infinite number of elements (for example, an infinity of 1-dimensional vector subspaces).
Remark: When W1 6= W2 and they are both immediate superior to W0 , then W1 + W2 is immediate
superior for both W1 and W2 . Dually, when W0 is immediate superior for both W1 and W2 , then both
W1 and W2 are immediate superior for W1 \ W2 .
Remark: The functions φ_{W1}(·): [W2, W1 + W2] → [W1 ∩ W2, W1] defined by φ_{W1}(W) = W ∩ W1 and ψ_{W2}(·): [W1 ∩ W2, W1] → [W2, W1 + W2] defined by ψ_{W2}(U) = U + W2 are isomorphisms which are inverse to each other. Moreover, the intervals [W1 ∩ W2, W1] and [W1, W1 + W2] are transported into transposed isomorphic intervals by the function ψ_{W2}(·), respectively by φ_{W1}(·).

Remark: The projective intervals are isomorphic.
Remark: Any subspace W is the sum of the lines passing through the origin which are included in W.
Remark: The sublattice generated by two subspaces W1 and W2 is {W1, W2, W1 ∩ W2, W1 + W2}.

Remark: The sublattice generated by three subspaces
Remark: When two subspaces W1 and W2 are comparable (in the sense that W1 ⊆ W2 or W2 ⊆ W1), and there is a subspace W0 such that W0 ∩ W1 = W0 ∩ W2 and W0 + W1 = W0 + W2, then W1 = W2.

Remark: For each interval [W1, W2] ⊆ SL(V) and for each element of the interval there is a complement with respect to the interval: for W ∈ [W1, W2], if W0 is a complement of W in V, then the complement of W with respect to [W1, W2] is the subspace (W0 ∩ W2) + W1.
Remark: Given a strictly increasing valuation v ( ), no interval can be projective with respect to a
proper subinterval.
Remark: Given a strictly increasing valuation v ( ), all the projective intervals have the same value.
Remark: Any valuation associates a unique value for each class of prime projective intervals.
Remark: If p (W) is the number of prime intervals of a chain between f0V g and W, then p ( ) is a
valuation.
Remark: Any linear combination of valuations is a valuation.
Remark: The structure of a valuation: v(W) = v({0V}) + Σ_P λ_P · p_P(W), where, for each class P of prime projective intervals, λ_P is a value attached to the class and p_P(W) is the number of prime intervals of that class in any maximal chain between {0V} and W.

CHAPTER 2

Linear Transformations
2.0.1. Definition. (Linear Transformation) If (V1, K) and (V2, K) are vector spaces (over the same scalar field), a function U(·): V1 → V2 is called a vector space morphism (or linear transformation) if:
(1) U(x + y) = U(x) + U(y), ∀x, y ∈ V1 (U(·) is a group morphism (additivity));
(2) U(αx) = αU(x), ∀x ∈ V1, ∀α ∈ K (U(·) is homogeneous).
[additivity and homogeneity together are also called linearity]
The set of all vector space morphisms between (V1, K) and (V2, K) is denoted by L_K(V1, V2).
2.0.2. Example. Consider the vector spaces (R³, R) and (R2[X], R) (Rn[X] is the set of all polynomials of degree at most n, with the unknown denoted by X and with real coefficients).
The function U(·): R2[X] → R³ defined by U(P) = x_P ∈ R³ (to a polynomial P(X) = aX² + bX + c ∈ R2[X] we attach the vector x_P = (a, b, c)) is a vector space morphism.
The operations in (R2[X], R) are (the definitions should be known from high school):
P(·) + Q(·) = (P + Q)(·), where (P + Q)(X) = P(X) + Q(X);
α·P(·) = (αP)(·), where (αP)(X) = αP(X).
P(X) = aX² + bX + c ∈ R2[X] ⇒ U(P) = (a, b, c) ∈ R³;
Q(X) = a1X² + b1X + c1 ∈ R2[X] ⇒ U(Q) = (a1, b1, c1) ∈ R³.
(P + Q)(·) is defined by:
(P + Q)(X) = (a + a1)X² + (b + b1)X + (c + c1) ∈ R2[X] ⇒
⇒ U(P + Q) = (a + a1, b + b1, c + c1) = (a, b, c) + (a1, b1, c1) = U(P) + U(Q).
(αP)(·) is defined by:
(αP)(X) = αaX² + αbX + αc ⇒
⇒ U(αP) = (αa, αb, αc) = α(a, b, c) = αU(P).
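The verification of this example can be sketched numerically. In the NumPy fragment below (an illustrative encoding, not from the notes), a polynomial aX² + bX + c is stored as its coefficient triple, so U is essentially the identity on coordinates; additivity and homogeneity then hold by direct computation.

```python
import numpy as np

def U(p):
    # U maps aX^2 + bX + c, stored as the coefficient triple (a, b, c),
    # to the vector (a, b, c) in R^3
    return np.asarray(p, dtype=float)

P = (1.0, 2.0, 3.0)    # X^2 + 2X + 3
Q = (0.0, -1.0, 5.0)   # -X + 5
alpha = 4.0

# additivity: U(P + Q) = U(P) + U(Q)
assert np.allclose(U(np.add(P, Q)), U(P) + U(Q))
# homogeneity: U(alpha * P) = alpha * U(P)
assert np.allclose(U(np.multiply(alpha, P)), alpha * U(P))
```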

2.0.3. Definition. A linear transformation between two vector spaces which is bijective is called an isomorphism.
2.0.4. Definition. (Isomorphic spaces) Two vector spaces are called isomorphic if between them there is an isomorphism. We denote this situation by (V1, K) ≅ (V2, K).


2.0.5. Remark. Intuitively speaking, when two vector spaces are isomorphic, the vector space algebraic structure does not distinguish between them. Still, the spaces may be different and may be distinguished from other perspectives, such as other abstract structures and/or other non-mathematical reasons.
An example of this type of situation is the study of the vector spaces (R², R) and (C, R).

2.0.6. Proposition. Any finite-type vector space (V, K) with dim_K V = n is isomorphic with the vector space (Kⁿ, K) (in the sense of Definition 2.0.4).

Proof. Consider a finite-type vector space (V, K) and B = (e1, …, en) a basis in (V, K).
Consider the function φ(·): V → Kⁿ defined by φ(v) = [v]_B.
The function φ(·) is linear:
[v1]_B = (α1, α2, …, αn)ᵀ, v1 = ∑_{i=1}^{n} αi ei; [v2]_B = (α1′, α2′, …, αn′)ᵀ, v2 = ∑_{i=1}^{n} αi′ ei;
v1 + v2 = ∑_{i=1}^{n} (αi + αi′) ei ⇒ [v1 + v2]_B = (α1 + α1′, …, αn + αn′)ᵀ = [v1]_B + [v2]_B,
so φ(v1 + v2) = φ(v1) + φ(v2) (from the unicity of the representation in a basis).
Similarly, for α ∈ K we have αv1 = ∑_{i=1}^{n} (ααi) ei ⇒ [αv1]_B = (αα1, …, ααn)ᵀ = α[v1]_B = αφ(v1), which means that the function is linear.
The function is bijective: the injectivity results from the linear independence property of the basis (the unicity of the coordinates), while the surjectivity results from the generating property of the basis.

2.0.7. Exercise. Show that the morphism from the previous example is bijective.


2.0.8. Definition. (Linear functional) Any linear transformation between the vector spaces (V, K) and (K, K) is called a linear functional over (V, K) (any element of the set L_K(V, K)). The set L_K(V, K) is also denoted by V′ (= L_K(V, K)) and is called the algebraic dual of the vector space (V, K).

## 2.1. Examples of Linear Transformations

2.1.1. Example. The derivation operator, over the vector space of all infinitely differentiable functions of a single variable: D(·): C^∞(R) → C^∞(R), D(f(·)) = f′(·).
The derivation operator is linear: D((f + g)(·)) = (f + g)′(·) = f′(·) + g′(·); D((αf)(·)) = (αf)′(·) = αf′(·).
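The linearity of the derivation operator can be illustrated on polynomials, which NumPy differentiates exactly; the choice of f, g and α below is arbitrary.

```python
import numpy as np

# polynomials as coefficient arrays, highest degree first
f = np.array([1.0, 0.0, -2.0])   # x^2 - 2
g = np.array([3.0, 1.0])         # 3x + 1
alpha = 5.0

# D(f + g) = D(f) + D(g)
lhs = np.polyder(np.polyadd(f, g))
rhs = np.polyadd(np.polyder(f), np.polyder(g))
assert np.allclose(lhs, rhs)

# D(alpha * f) = alpha * D(f)
assert np.allclose(np.polyder(alpha * f), alpha * np.polyder(f))
```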

2.1.2. Example. The integration operator, defined over the vector space of all integrable functions: I(·): F → F, I(f(·))(t) = ∫_a^t f(τ) dτ.
The operator is linear because of the properties of the integral.

2.1.3. Example. Consider the vector spaces (R³, R) and (R2[X], R) (Rn[X] is the set of all polynomials of degree at most n, in the unknown X and with real coefficients).
The function U(·): R2[X] → R³ defined by U(P(·)) = x_{P(·)} ∈ R³ (to a polynomial P(X) = aX² + bX + c ∈ R2[X] we attach the vector x_{P(·)} = (a, b, c)) is a morphism of vector spaces.


## 2.2. Properties of Linear Transformations

2.2.1. Proposition. (Properties of linear transformations) Consider two vector spaces over the same field of scalars (V1, K), (V2, K) and a function U(·): V1 → V2. Then:
(1) U(·) is linear ⟺ ∀α, β ∈ K, ∀x1, x2 ∈ V1: U(αx1 + βx2) = αU(x1) + βU(x2).
(2) [Prop. 6.1, page 91] U(·) is linear ⟺ G_{U(·)} = {(x, U(x)); x ∈ V1} is a vector subspace in (V1 × V2, K).
(3) U(·) is linear ⇒
(a) U(0_{V1}) = 0_{V2};
(b) for every vector subspace V1′ ⊆ V1, U(V1′) is a vector subspace in V2;
(c) for every vector subspace V2′ ⊆ V2, U⁻¹(V2′) = {v ∈ V1; U(v) ∈ V2′} is a vector subspace in V1.
Note: for points 3.b and 3.c we use the direct image of a set by a function and the preimage of a set by a function; see Definition 7.2.18 and Remark 7.2.19.

Proof. Let U(·): V1 → V2.
(1) "⇒" When U(·) is a linear transformation, and x1, x2 ∈ V1 and α, β ∈ K, we have:
U(αx1 + βx2) = [additivity] U(αx1) + U(βx2) = [homogeneity] αU(x1) + βU(x2).
"⇐" For α = β = 1 we obtain additivity, while for β = 0 we obtain homogeneity, which means that the function U(·) is a linear transformation.
(2) When (V1, K) and (V2, K) are two vector spaces over the same field of scalars, the set V1 × V2 may be viewed as a vector space (the product vector space) with the operations:
(v1, v2) + (w1, w2) = (v1 + w1, v2 + w2) (the addition on place i is the addition of the vector space Vi);
α(v1, v2) = (αv1, αv2) (the multiplication with a scalar on place i is the multiplication with a scalar of the vector space Vi).
Because of the vector space properties of the structures (V1, K) and (V2, K), the structure (V1 × V2, K) is also a vector space.

"⇒" Assume U(·) is linear and choose two vectors w1, w2 ∈ G_{U(·)}. Then there are vectors v1, v2 ∈ V1 such that wi = (vi, U(vi)), i = 1, 2. Then
w1 + w2 = (v1, U(v1)) + (v2, U(v2)) = (v1 + v2, U(v1) + U(v2)) = [additivity] (v1 + v2, U(v1 + v2)) ∈ G_{U(·)}.
Also, for α ∈ K,
αw1 = α(v1, U(v1)) = (αv1, αU(v1)) = [homogeneity] (αv1, U(αv1)) ∈ G_{U(·)}.
"⇐" Assume that G_{U(·)} is a vector subspace in (V1 × V2, K) and consider two vectors v1, v2 ∈ V1.
Then all the pairs (v1, U(v1)), (v2, U(v2)), (v1 + v2, U(v1 + v2)) belong to G_{U(·)}, which is a subspace, so that (v1, U(v1)) + (v2, U(v2)) = (v1 + v2, U(v1) + U(v2)) ∈ G_{U(·)}.
Since the set G_{U(·)} is the graph of a function,
(v1 + v2, U(v1) + U(v2)) ∈ G_{U(·)} and (v1 + v2, U(v1 + v2)) ∈ G_{U(·)} ⇒ U(v1) + U(v2) = U(v1 + v2)
[because otherwise there would be an element v1 + v2 for which the function would have two images, which contradicts the definition of a function, see 7.2.16, page 207].
In a similar way, if (v1, U(v1)) ∈ G_{U(·)}, then also (αv1, αU(v1)) ∈ G_{U(·)}, and by the same argument as above we get U(αv1) = αU(v1).

(3) When U(·) is linear:
(a) U(0_{V1}) = U(x − x) = U(x) − U(x) = 0_{V2}.
(b) Consider α, β ∈ K, y1, y2 ∈ U(V1′) ⇒ ∃x1, x2 ∈ V1′ such that U(xi) = yi;
V1′ subspace ⇒ αx1 + βx2 ∈ V1′; U(·) linear ⇒ U(αx1 + βx2) = αU(x1) + βU(x2) = αy1 + βy2 ∈ U(V1′).
(c) Consider α, β ∈ K, x1, x2 ∈ U⁻¹(V2′);
V2′ subspace ⇒ V2′ ∋ αU(x1) + βU(x2) = U(αx1 + βx2) ⇒ αx1 + βx2 ∈ U⁻¹(V2′).

2.2.2. Definition. The codomain subspace U(V1) is called the image of the linear transformation and is denoted by Im U(·); its dimension is also called the rank of the linear transformation and is denoted by dim Im U(·) = rank U(·).
2.2.3. Definition. The domain subspace U⁻¹({0_{V2}}) is called the kernel of the linear transformation and is denoted by ker U(·); its dimension is also called the nullity of the linear transformation and is denoted by dim ker U(·) = null U(·).
2.2.4. Theorem (The rank–nullity theorem for linear transformations). Consider two vector spaces (V1, K) and (V2, K), with V1 of finite type. For a linear transformation U(·): V1 → V2, we have:
dim V1 = dim (ker U(·)) + dim (Im U(·)).

Proof. Let B1 = {u1, …, uk} ⊆ V1 be a set such that {w1 = U(u1), …, wk = U(uk)} is a basis for Im U(·) ⊆ V2, and let B2 = {v1, …, vp} be a basis for ker U(·) ⊆ V1.
Then the set B = B1 ∪ B2 is a basis for V1:
[observe that B1 ∩ B2 = ∅, since v ∈ B1 ∩ B2 ⇒ U(v) = 0, and the null vector cannot be a vector in a basis]
Consider x ∈ V1 ⇒ U(x) ∈ Im U(·) ⇒ ∃α1, …, αk ∈ K such that U(x) = ∑_{i=1}^{k} αi wi = ∑_{i=1}^{k} αi U(ui) = U(∑_{i=1}^{k} αi ui).
Denote u = ∑_{i=1}^{k} αi ui ∈ V1 and consider the decomposition x = (x − u) + u.
Then U(x − u) = U(x) − U(u) = U(x) − U(x) = 0, so that x − u ∈ ker U(·) ⇒ ∃β1, …, βp ∈ K such that x − u = ∑_{j=1}^{p} βj vj.
We get that x = ∑_{j=1}^{p} βj vj + ∑_{i=1}^{k} αi ui, so that V1 = span B.
The representation is unique: if x = ∑_{j=1}^{p} βj vj + ∑_{i=1}^{k} αi ui = ∑_{j=1}^{p} βj′ vj + ∑_{i=1}^{k} αi′ ui, then
∑_{i=1}^{k} (αi′ − αi) ui = ∑_{j=1}^{p} (βj − βj′) vj ⇒ 0 = U(∑_{j=1}^{p} (βj − βj′) vj) = U(∑_{i=1}^{k} (αi′ − αi) ui) = ∑_{i=1}^{k} (αi′ − αi) wi ⇒ αi′ = αi, ∀i = 1, …, k [since {w1, …, wk} is a basis] ⇒ ∑_{j=1}^{p} (βj − βj′) vj = 0 ⇒ βj′ = βj, ∀j = 1, …, p,
so that the representation of the vector x is unique and B is a basis for V1.
Because k = dim Im U(·), p = dim ker U(·) and |B| = k + p, we get the result.
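For maps between coordinate spaces the theorem can be observed numerically: in the sketch below (the matrix is an illustrative choice), the rank is computed with `matrix_rank`, while the kernel dimension is read off the singular value decomposition, so the two numbers are obtained independently before being compared with dim V1.

```python
import numpy as np

# U: R^3 -> R^2 with matrix A (the rows are proportional, so rank 1)
A = np.array([[1., 2., 3.],
              [2., 4., 6.]])

rank = np.linalg.matrix_rank(A)                 # dim Im U = 1

# kernel dimension read off the SVD: count the vanishing singular values,
# plus the columns beyond the number of singular values
_, s, _ = np.linalg.svd(A)
tol = max(A.shape) * np.finfo(float).eps * s[0]
nullity = int(np.sum(s <= tol)) + (A.shape[1] - len(s))   # dim ker U = 2

assert rank == 1 and nullity == 2
assert rank + nullity == A.shape[1]             # dim V1 = rank + nullity
```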
2.2.5. Remark. When between the spaces V1 and V2 there is a bijective linear transformation, the
dimensions of the spaces are equal.
Proof. When the linear function U(·): V1 → V2 is bijective, then ker U(·) = {0} (from injectivity) and Im U(·) = V2 (from surjectivity); from the previous Theorem 2.2.4 we get that
dim V1 = dim (ker U(·)) + dim (Im U(·)) = dim V2.
The next two propositions show that inversion (when possible) and composition (when allowed) preserve the linearity of the involved functions.
2.2.6. Proposition. Consider two vector spaces (V1, K), (V2, K) over the same field and a linear bijective transformation U(·): V1 → V2.
Then the inverse function U⁻¹(·): V2 → V1 is also linear (the inverse of a bijective linear transformation is linear).

Proof. We know that U(·) satisfies
U(u + v) = U(u) + U(v), and U(αu) = αU(u).
Consider x, y ∈ V2 and U⁻¹(x) = u, U⁻¹(y) = v; then U(u) = x, U(v) = y and U(u + v) = x + y
⇒ U⁻¹(x + y) = u + v = U⁻¹(x) + U⁻¹(y).
For α ∈ K, x ∈ V2, U⁻¹(x) = v ⇒ U(v) = x and
αU(v) = U(αv) ⇒ U(αv) = αx ⇒ αv = U⁻¹(αx) ⇒ U⁻¹(αx) = αU⁻¹(x).

2.2.7. Proposition. Consider three vector spaces over the same field and the linear transformations U1(·): V1 → V2, U2(·): V2 → V3.
Then the function U(·): V1 → V3 defined by U(v) = U2(U1(v)), ∀v ∈ V1, is also linear (composition preserves the linearity of functions).
Proof. Let v1, v2 ∈ V1; we have
U(v1 + v2) = U2(U1(v1 + v2)) = [additivity of U1(·)] U2(U1(v1) + U1(v2)) = [additivity of U2(·)] U2(U1(v1)) + U2(U1(v2)) = U(v1) + U(v2).

Homogeneity is proven similarly.
2.2.8. Remark. When the domain and the codomain of the linear transformation are the same, U(·): V → V, we may talk about the powers of the linear operator U(·):
U²(·) = (U ∘ U)(·), …, Uⁿ(·) = (U ∘ ⋯ ∘ U)(·) (n times).
The identity on V, I_V(·): V → V, I_V(v) = v, is a linear operator.

2.2.9. Remark. When p(x) = ∑_{k=0}^{n} a_k x^k is an arbitrary polynomial (with real or complex coefficients), we may talk about the polynomial operator, which is a new operator (from Proposition 2.2.7) of the form p(U(·)) = ∑_{k=0}^{n} a_k U^k(·), where U⁰(·) = I_V(·) (the identity operator on V).

2.2.10. Remark. When p(·) and q(·) are two polynomials, the composition of the two attached operator polynomials is commutative: p(U(·)) ∘ q(U(·)) = q(U(·)) ∘ p(U(·)).
This means that, while in general function composition is not commutative, in the special case of polynomials in the same operator the composition is commutative. The proof relies on the fact that any two powers of the same operator commute, U^k ∘ U^j(·) = U^j ∘ U^k(·) = U^{k+j}(·), and is left as an exercise.
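The commutation of polynomial operators can be checked on matrices. In the sketch below, the helper `poly_of_operator` (a hypothetical name) evaluates p(U) = Σ a_k U^k by accumulating powers of U; the operator U and the polynomials are illustrative choices.

```python
import numpy as np

def poly_of_operator(coeffs, U):
    # p(U) = sum_k a_k U^k, with U^0 = identity; coeffs = (a_0, a_1, ...)
    n = U.shape[0]
    result = np.zeros_like(U, dtype=float)
    power = np.eye(n)
    for a in coeffs:
        result += a * power
        power = power @ U
    return result

U = np.array([[0., 1.], [2., 3.]])      # an arbitrary operator on R^2
p = poly_of_operator([1., -2., 1.], U)  # p(U) = I - 2U + U^2
q = poly_of_operator([3., 5.], U)       # q(U) = 3I + 5U

# polynomials in the same operator commute under composition
assert np.allclose(p @ q, q @ p)
```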
2.2.11. Remark. The relation ≅ (Definition 2.0.4) is an equivalence relation between vector spaces over the same field (it is reflexive, symmetric and transitive) (the relation is defined over a set of vector spaces and it establishes equivalence classes).

Proof. Reflexivity follows from the fact that the identity operator I_{V1}(·): V1 → V1, I_{V1}(v) = v, is linear and bijective, so V1 ≅ V1.
The symmetry follows from Proposition 2.2.6, because when V1 ≅ V2, then ∃U(·): V1 → V2 linear bijective ⇒ U⁻¹(·): V2 → V1 linear bijective ⇒ V2 ≅ V1.
Transitivity follows from Proposition 2.2.7, because when V1 ≅ V2 and V2 ≅ V3 then there are isomorphisms U(·): V1 → V2 and V(·): V2 → V3, and the new function (V ∘ U)(·): V1 → V3 preserves by composition both the linearity and the bijectivity properties, so that V1 ≅ V3.

2.2.12. Remark. The set L_K(V1, V2) together with the usual algebraic operations with functions,
(U1(·) + U2(·))(x) = U1(x) + U2(x),
(αU1(·))(x) = αU1(x),
has a vector space structure over the field K (in particular, the algebraic dual (Definition 2.0.8) is a vector space over the field K).
Proof. (L_K(V1, V2), +) has a group structure (from the properties of addition over V2), with neutral element the null operator O(·): V1 → V2, O(v) ≡ 0.

## 2.3. Representations of Linear Transformations

2.3.1. Remark. Consider a linear transformation U(·): V1 → V2 and an ordered basis Bd = (e1, …, en) for the domain V1.
Then the function U(·) is uniquely determined by its values on the basis, (U(e1), …, U(en)) (any linear transformation is uniquely determined by its values on a basis of the domain).
The proof relies on the remark that x ∈ V1 ⇒ ∃αi ∈ K, x = ∑_{i=1}^{n} αi ei ⇒ U(x) = U(∑_{i=1}^{n} αi ei) = ∑_{i=1}^{n} αi U(ei).

Consider a linear transformation U(·): V1 → V2 between two finite-type vector spaces over the same field K.
Assume dim V1 = n, dim V2 = m and fix the ordered bases Bd = (e1, …, en) in V1 and Bc = (f1, …, fm) in V2.
Consider the representations in the codomain of the images of the vectors of the basis from the domain:
U(ej) = ∑_{i=1}^{m} aij fi ∈ V2, j = 1, …, n (vector representation)
[U(ej)]_{Bc} = (a1j, a2j, …, amj)ᵀ (coordinate representation)
The matrix which has as columns [U(ej)]_{Bc}, namely [U(·)]^{Bd}_{Bc} = [aij]_{i=1,…,m; j=1,…,n} = ([U(e1)]_{Bc} ⋯ [U(ej)]_{Bc} ⋯ [U(en)]_{Bc}), is called the matrix associated with U(·) in the bases Bd and Bc.
Conversely, for each matrix A ∈ M_{m×n}(R) and for each possible choice of ordered bases in both the domain and the codomain there is a unique associated linear transformation, defined by the formula [U(x)]_{Bc} = A[x]_{Bd}. In other words, the function U(·): Rⁿ → Rᵐ defined as U(x) = Ax is a linear transformation.
When x ∈ V1 with coordinates in Bd given by [x]_{Bd} = (x1, …, xn)ᵀ, then
U(x) = U(∑_{j=1}^{n} xj ej) = ∑_{j=1}^{n} xj U(ej) = ∑_{j=1}^{n} xj ∑_{i=1}^{m} aij fi = ∑_{i=1}^{m} (∑_{j=1}^{n} aij xj) fi,
therefore the representation of U(x) in Bc is
[U(x)]_{Bc} = (∑_{j=1}^{n} a1j xj, ∑_{j=1}^{n} a2j xj, …, ∑_{j=1}^{n} amj xj)ᵀ = [aij]_{i=1,…,m; j=1,…,n} (x1, …, xn)ᵀ = ([U(e1)]_{Bc} ⋯ [U(en)]_{Bc}) [x]_{Bd};
[U(x)]_{Bc} = [U(·)]^{Bd}_{Bc} [x]_{Bd}.
The representation matrix [U(·)]^{Bd}_{Bc} is unique because for two matrices A and B the following happens: if ∀x ∈ Rⁿ, Ax = Bx, then A = B.
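The construction above can be sketched in NumPy: the associated matrix is assembled column by column from the images of the (standard) basis vectors, after which [U(x)] = A[x] holds for every x. The map U below is an illustrative choice, not from the notes.

```python
import numpy as np

# a linear map U: R^3 -> R^2 (an illustrative choice)
def U(x):
    return np.array([x[0] + 2 * x[1], 3 * x[2] - x[0]])

# columns of the associated matrix = images of the standard basis vectors
n = 3
A = np.column_stack([U(e) for e in np.eye(n)])

# [U(x)] = A [x] for an arbitrary x
x = np.array([1.0, -2.0, 0.5])
assert np.allclose(U(x), A @ x)
```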

2.3.2. Proposition. The linear transformation U ( ) is bijective if and only if its attached matrix (for a
certain choice of bases) is invertible.


Proof. "⇒" When U(·) is bijective, from Remark 2.2.5 the two spaces have equal dimensions (so that the matrix is square) and for each y ∈ V2 the system Ax = y has a unique solution (the unicity comes from injectivity, while the existence comes from surjectivity).
Suppose, by contradiction, that the matrix is not invertible; then det A = 0, which means that the columns of the matrix are linearly dependent. A nontrivial vanishing linear combination of the columns means, in matrix form, a nonzero solution of the system Ax = 0, which contradicts the unicity of the null solution. In conclusion, the matrix A⁻¹ exists.
"⇐" When the matrix A is invertible then, keeping the same basis for each space, the function V(·): V2 → V1 defined by V(y) = A⁻¹y is exactly the inverse function of U(·), because U(V(y)) = A(A⁻¹y) = y = 1_{V2}(y) and V(U(x)) = A⁻¹(Ax) = x = 1_{V1}(x),
which means that U(·) is invertible, and so it is bijective.

2.3.3. Remark.
(1) The rank of the linear transformation (Def. 2.2.2, page 73) equals the rank of the matrix representing the linear transformation (in a certain choice of bases for both the domain and the codomain).
(2) A linear transformation is injective if and only if its rank equals the dimension of the domain.
(3) A linear transformation is surjective if and only if its rank equals the dimension of the codomain.

2.3.4. Remark. When the basis in V1 changes from Bd to Bd′ and the basis in V2 changes from Bc to Bc′, the representation of the linear transformation changes in the following way:
[U(x)]_{Bc} = [U(·)]^{Bd}_{Bc} [x]_{Bd}; [U(x)]_{Bc′} = [U(·)]^{Bd′}_{Bc′} [x]_{Bd′};
[x]_{Bd′} = [M(Bd)]_{Bd′} [x]_{Bd}; [y]_{Bc′} = [M(Bc)]_{Bc′} [y]_{Bc};
and so
[U(x)]_{Bc′} = [M(Bc)]_{Bc′} [U(x)]_{Bc} = [M(Bc)]_{Bc′} [U(·)]^{Bd}_{Bc} [x]_{Bd} = [M(Bc)]_{Bc′} [U(·)]^{Bd}_{Bc} ([M(Bd)]_{Bd′})⁻¹ [x]_{Bd′},
which means that [U(·)]^{Bd′}_{Bc′} = [M(Bc)]_{Bc′} [U(·)]^{Bd}_{Bc} ([M(Bd)]_{Bd′})⁻¹.
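The change-of-basis rule can be verified numerically. In the sketch below (the change-of-basis matrices are arbitrary invertible matrices, illustrative choices written in the notation of this remark), the transformed matrix acts on new coordinates exactly as the original matrix acts on old ones.

```python
import numpy as np

rng = np.random.default_rng(1)
A = rng.standard_normal((2, 3))      # [U] in the bases Bd, Bc

# arbitrary invertible change-of-basis matrices
Md = np.array([[1., 1., 0.], [0., 1., 1.], [0., 0., 1.]])  # [M(Bd)]_{Bd'}
Mc = np.array([[2., 1.], [1., 1.]])                        # [M(Bc)]_{Bc'}

A_new = Mc @ A @ np.linalg.inv(Md)   # [U] in the bases Bd', Bc'

# the two representations act consistently on coordinates
x_old = np.array([1.0, 2.0, -1.0])   # [x] in Bd
x_new = Md @ x_old                   # [x] in Bd'
assert np.allclose(A_new @ x_new, Mc @ (A @ x_old))
```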

2.3.1. Representations of Linear Functionals. Given a basis Bd = {v1, …, vn} in (V, K) and a basis Bc = {w} in (K, K), any linear functional f(·): V → K (which is a particular type of linear transformation) admits a unique representation:
f(v) = f(∑_{i=1}^{n} αi vi) = ∑_{i=1}^{n} αi f(vi),
so that the value of the linear functional at v is uniquely determined by the values of the linear functional at the basis vectors and by the coordinates of the vector. In matrix form:
[f(v)]_{Bc} = (f(v1) f(v2) ⋯ f(vn)) (α1, α2, …, αn)ᵀ = (f(v1) f(v2) ⋯ f(vn)) [v]_{Bd}.
Moreover, when for a fixed basis E in V with dim V = n we denote by pr_i(v) the i-th coordinate of the vector, then for each i the function pr_i(·) is a linear functional and the set of linear functionals (pr_i(·))_{i=1,…,n} is a basis in the dual space V′ [called the dual basis of E in V]. It follows that for finite-type vector spaces the dual space is isomorphic with the space: V ≅ V′.

## 2.4. The Factor Vector Space

2.4.1. Factor Space attached to a Subspace.
2.4.1. Remark. Consider a vector space (V, K) and V0 a vector subspace. Consider on V the relation
u ∼ v (mod V0) ⟺ (Def) u − v ∈ V0.
This is an equivalence relation on V.
Proof. Reflexivity: ∀v ∈ V, v ∼ v (mod V0), because v − v = 0 ∈ V0.
Symmetry: u ∼ v (mod V0) ⇒ u − v ∈ V0 ⇒ v − u = −(u − v) ∈ V0 ⇒ v ∼ u (mod V0).
Transitivity: u ∼ v (mod V0) and v ∼ w (mod V0) ⇒ u − v and v − w ∈ V0 ⇒ u − w = (u − v) + (v − w) ∈ V0 ⇒ u ∼ w (mod V0).

The equivalence relation "∼ (mod V0)" generates on V equivalence classes with respect to V0 (mod V0); they will be denoted by x̂ (mod V0); x̂ is the set of all elements of V which are equivalent with x mod V0:
x̂ (mod V0) = {v ∈ V; x ∼ v (mod V0)} = x + V0 = {x + v0; v0 ∈ V0}.
The dimension of an equivalence class is considered, by definition, to be equal with the dimension of V0.
Two equivalence classes may only be identical or disjoint:
Proof. ∅ ≠ x̂ ∩ ŷ ⟺ ∃z0 ∈ x̂ ∩ ŷ ⟺ ∃v0, u0 ∈ V0, z0 = x + v0 = y + u0.


Let v ∈ x̂ ⇒ ∃v1 ∈ V0, v = x + v1 ⇒ v = y + (u0 − v0 + v1) ∈ ŷ ⇒ x̂ ⊆ ŷ.
Similarly, we also get ŷ ⊆ x̂, so that two equivalence classes which are not disjoint are identical.

The set of all equivalence classes mod V0 is a partition of the set V:
Proof. x ∈ V ⇒ x ∈ x̂ [each element belongs to at least one equivalence class mod V0 and, as two distinct classes are disjoint, x belongs to exactly one class].

2.4.2. Definition. The set of all equivalence classes is called the factor set mod V0; this set is denoted by
V/V0 = {x̂ (mod V0); x ∈ V}
(it is a set of equivalence classes, so it is a set of sets).
2.4.3. Remark. For each fixed x ∈ V, the function τ(·): V0 → x̂ (mod V0) defined by τ(v) = x + v is a bijection.
Proof. Injectivity: let v1, v2 ∈ V0 such that τ(v1) = τ(v2) ⇒ x + v1 = x + v2 ⇒ v1 = v2.
Surjectivity: y ∈ x̂ (mod V0) ⇒ ∃v_y ∈ V0, y = x + v_y ⇒ τ(v_y) = x + v_y = y.

2.4.4. Proposition. With the elements of the set V/V0 we may define vector space operations:
Addition mod V0: (x̂ (mod V0)) + (ŷ (mod V0)) = (x + y)^ (mod V0);
Multiplication with a scalar mod V0: α(x̂ (mod V0)) = (αx)^ (mod V0).
Proof. The addition x̂ + ŷ = (x + y)^ is well defined (it doesn't depend on the representatives) because, when x̂ = x̂1 and ŷ = ŷ1, then x − x1 and y − y1 ∈ V0 ⇒ (x + y) − (x1 + y1) ∈ V0, so that (x + y)^ = (x1 + y1)^.
Associativity is a consequence of the associativity of the operation on V:
(x̂ + ŷ) + ẑ = (x + y)^ + ẑ = ((x + y) + z)^ = (x + (y + z))^ = x̂ + (y + z)^ = x̂ + (ŷ + ẑ);
the neutral element is 0̂ (= V0) and the opposed vector is −x̂ = (−x)^.
Multiplication with a scalar: αx̂ = (αx)^ is a well defined operation, because when x̂ = x̂1 then x − x1 ∈ V0, and so α(x − x1) ∈ V0, i.e. (αx)^ = (αx1)^.
Moreover, we have the distributivity properties:
(α + β)x̂ = ((α + β)x)^ = (αx + βx)^ = (αx)^ + (βx)^ = αx̂ + βx̂;
α(x̂ + ŷ) = (α(x + y))^ = (αx + αy)^ = (αx)^ + (αy)^ = αx̂ + αŷ;
α(βx̂) = α(βx)^ = ((αβ)x)^ = (αβ)x̂; 1·x̂ = (1·x)^ = x̂.
So (V/V0, K) has a vector space structure (together with the operations between classes defined above).

2.4.5. Remark. The function π(·): V → V/V0 given by π(x) = x̂ is a (surjective) vector space morphism and ker π(·) = V0.
Proof. π(x + y) = (x + y)^ = x̂ + ŷ = π(x) + π(y); π(αx) = (αx)^ = αx̂ = απ(x), so the function is a morphism.
x0 ∈ V0 ⇒ π(x0) = x̂0 = 0̂ ⇒ V0 ⊆ ker π(·) = {x ∈ V; π(x) = 0̂ ∈ V/V0}.
π(x) = 0̂ ⇒ x̂ = 0̂ ⇒ x ∈ V0, so ker π(·) = V0.

2.4.6. Definition. The vector space V/V0 is called the factor space of V with respect to V0.
2.4.7. Theorem. (The dimension of the factor space) Consider a finite-type vector space (V, K) and a subspace V0. Then
dim (V/V0) = dim V − dim V0.
Proof. Choose a basis x1, …, xk in V0 and complete it up to a basis x1, …, xk, y1, …, yr in V.
Consider the set ŷ1, …, ŷr in V/V0;
because x1, …, xk, y1, …, yr is a basis in V, we have: ∀v ∈ V, ∃αi, βj ∈ K such that v = ∑_{i=1}^{k} αi xi + ∑_{j=1}^{r} βj yj, which means v̂ = ∑_{j=1}^{r} βj ŷj (since ∑_{i=1}^{k} αi xi ∈ V0), and so ŷ1, …, ŷr generate V/V0.
Consider ∑_{j=1}^{r} βj ŷj = 0̂ ⇒ v0 := ∑_{j=1}^{r} βj yj ∈ V0;
if 0 ≠ v0 ∈ V0, since v0 may be represented as a linear combination of the xi, it would follow that the set x1, …, xk, y1, …, yr, while a basis, would accept a vanishing linear combination with nonzero scalars, which is a contradiction;
so we get that v0 = 0 and, as y1, …, yr are linearly independent in V, we get that all the scalars βj are zero.
This means that ŷ1, …, ŷr is a basis for the factor space V/V0 and dim V/V0 = r, which means dim V/V0 = dim V − dim V0.
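The factor-space construction can be illustrated numerically: two vectors represent the same class mod V0 exactly when their difference lies in V0, which is a rank test. The subspace V0 ⊆ R³ below is an illustrative choice.

```python
import numpy as np

# V0 = span{(1,0,0), (0,1,0)} inside R^3, basis stored as columns
B0 = np.array([[1., 0.], [0., 1.], [0., 0.]])

def equivalent(x, y):
    # x ~ y (mod V0) iff x - y ∈ V0, i.e. adjoining x - y to a basis of
    # V0 does not raise the rank
    r0 = np.linalg.matrix_rank(B0)
    return np.linalg.matrix_rank(np.column_stack([B0, x - y])) == r0

x = np.array([1.0, 2.0, 5.0])
assert equivalent(x, x + np.array([3.0, -1.0, 0.0]))     # differ inside V0
assert not equivalent(x, x + np.array([0.0, 0.0, 1.0]))  # differ outside V0

# dim(V/V0) = dim V - dim V0 = 3 - 2 = 1
assert 3 - np.linalg.matrix_rank(B0) == 1
```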

## 2.5. The Isomorphism Theorems

2.5.1. Theorem. (The First Isomorphism Theorem) Consider two vector spaces over the same field (V1, K) and (V2, K) and a linear transformation U(·): V1 → V2. Then the vector spaces V1/ker U(·) and Im U(·) are isomorphic.
Proof. Define the function Ũ(·): V1/ker U(·) → Im U(·) by Ũ(x̂) = U(x);
Ũ(·) is well defined (the definition doesn't depend on the representatives) because when x̂ = ŷ then x − y ∈ ker U(·), which means U(x − y) = 0 ⟺ U(x) = U(y).
Ũ(·) is a morphism:
additivity: Ũ(x̂1 + x̂2) = Ũ((x1 + x2)^) = U(x1 + x2) = U(x1) + U(x2) = Ũ(x̂1) + Ũ(x̂2);
homogeneity: Ũ(αx̂) = Ũ((αx)^) = U(αx) = αU(x) = αŨ(x̂).
surjectivity of Ũ(·): y ∈ Im U(·) ⇒ ∃x_y ∈ V1, U(x_y) = y ⇒ Ũ(x̂_y) = y.
injectivity of Ũ(·): Ũ(x̂1) = Ũ(x̂2) ⇒ U(x1) = U(x2) ⇒ x1 − x2 ∈ ker U(·) ⇒ x̂1 = x̂2.
So Ũ(·) is a vector space isomorphism.

2.5.1. Projection Operators.

2.5.2. Definition. Consider a vector space (V, K) and two vector subspaces V₁ and V₂ such that V = V₁ ⊕ V₂. Then, for each x ∈ V, ∃! x₁ ∈ V₁ and ∃! x₂ ∈ V₂ such that x = x₁ + x₂.

The vector x₁ is called the projection of x on V₁ in the direction V₂.

2.5.3. Definition. A linear transformation p(·) : V → V is called a projection when p(p(x)) = p(x), ∀x ∈ V [the idempotency property].

2.5.4. Theorem. For a projection p(·) we have: V = p(V) ⊕ ker p(·).

Proof. Let v ∈ V ⇒ v = p(v) + (v − p(v)); since

p(v − p(v)) = p(v) − p²(v) = p(v) − p(v) = 0,

we have v − p(v) ∈ ker(p(·)) and V = p(V) + ker p(·).

The sum is direct because v ∈ p(V) ∩ ker p(·) ⇒ 0 = p(v) and ∃u, v = p(u) ⇒ 0 = p(v) = p²(u) = p(u) = v, so p(V) ∩ ker p(·) = {0}.
2.5.5. Proposition. If p(·) : V → V is a projection, then (1_V − p)(·) : V → V is also a projection [indeed, (1_V − p)² = 1_V − 2p + p² = 1_V − p]. Moreover, ker(1_V − p)(·) = Im p(·) and Im(1_V − p)(·) = ker p(·).

Proof. x ∈ ker(1_V − p)(·) ⇒ x − p(x) = 0 ⇒ x = p(x) ∈ Im p(·).

x ∈ Im p(·) ⇒ ∃y, x = p(y) ⇒ p(x) = p(p(y)) = p(y) = x ⇒ x − p(x) = 0 ⇒ x ∈ ker(1_V − p)(·).

x ∈ Im(1_V − p)(·) ⇒ ∃y, x = (1_V − p)(y) = y − p(y) ⇒ p(x) = p(y) − p(p(y)) = p(y) − p(y) = 0 ⇒ x ∈ ker p(·).

x ∈ ker p(·) ⇒ p(x) = 0 ⇒ x = x − p(x) = (1_V − p)(x) ⇒ x ∈ Im(1_V − p)(·).

2.5.6. Proposition. Consider a vector space (V, K) and a subspace V₀. Any complement of V₀ in V (i.e. a subspace V₁ such that V₀ ⊕ V₁ = V) is isomorphic with V/V₀; moreover, any two complements are isomorphic.
2.5.2. Other Isomorphism Theorems.

2.5.7. Theorem. (The Second Isomorphism Theorem) Consider a vector space (V, K) and V₁ and V₂ subspaces in V. Then the vector spaces V₁/(V₁ ∩ V₂) and (V₁ + V₂)/V₂ are isomorphic.

Proof. Define the function φ(·) : V₁ → (V₁ + V₂)/V₂ by φ(x) = x̂ = x + V₂.

The function is a morphism:
φ(x + y) = (x + y)^ = x̂ + ŷ = φ(x) + φ(y), and
φ(αx) = (αx)^ = αx̂ = αφ(x).

Kernel: x ∈ ker φ(·) ⟺ x ∈ V₁ and x̂ = 0̂ (⟺ x ∈ V₂), so that ker φ(·) = V₁ ∩ V₂.

Surjectivity: consider y ∈ (V₁ + V₂)/V₂ ⇒ ∃x₁ ∈ V₁, ∃x₂ ∈ V₂, y = x₁ + x₂ + V₂ = x₁ + V₂ = x̂₁ ⇒ φ(x₁) = y ⇒ the function is surjective.

From Theorem 2.5.1, page 81, it follows that the spaces V₁/ker φ(·) and Im φ(·) are isomorphic, which means that the vector spaces V₁/(V₁ ∩ V₂) and (V₁ + V₂)/V₂ are isomorphic.
2.5.8. Corollary. Consider a vector space (V, K) and two subspaces V₁ and V₂. Then

dim V₁ + dim V₂ = dim(V₁ + V₂) + dim(V₁ ∩ V₂).

Proof. From the previous theorem we know that V₁/(V₁ ∩ V₂) and (V₁ + V₂)/V₂ are isomorphic, so they have the same dimension:

dim(V₁/(V₁ ∩ V₂)) = dim((V₁ + V₂)/V₂),

and from Theorem 2.4.7, page 81, we know that

dim(V₁/(V₁ ∩ V₂)) = dim V₁ − dim(V₁ ∩ V₂)

and

dim((V₁ + V₂)/V₂) = dim(V₁ + V₂) − dim V₂,

so we get

dim V₁ − dim(V₁ ∩ V₂) = dim(V₁ + V₂) − dim V₂.

2.5.9. Theorem. (The Third Isomorphism Theorem) Consider a vector space (V, K) and V₁ and V₂ subspaces in V such that V ⊇ V₁ ⊇ V₂. Then the vector spaces (V/V₂)/(V₁/V₂) and V/V₁ are isomorphic.

Proof. Denote by x̂ = x + V₂ the class of x with respect to V₂ (that is, an element of V/V₂) and by x̃ = x + V₁ the class of x with respect to V₁ (that is, an element of V/V₁); define the function φ(·) : V/V₂ → V/V₁ by φ(x̂) = x̃.

The function φ(·) is well defined, because x̂₁ = x̂₂ ⇒ x₁ − x₂ ∈ V₂ ⇒ x₁ − x₂ ∈ V₁ ⇒ x̃₁ = x̃₂.

The function φ(·) is a morphism because φ(x̂₁ + x̂₂) = φ((x₁ + x₂)^) = (x₁ + x₂)~ = x̃₁ + x̃₂ = φ(x̂₁) + φ(x̂₂) and φ(αx̂) = φ((αx)^) = (αx)~ = αx̃ = αφ(x̂).

Im φ(·) = V/V₁ (the function is surjective), while its kernel is V₁/V₂:

x̂ ∈ ker φ(·) ⟺ φ(x̂) = 0̃ ⟺ x̃ = 0̃ ⟺ x ∈ V₁,

so x̂ ∈ ker φ(·) ⟺ x̂ = x + V₂ with x ∈ V₁, which means x̂ ∈ V₁/V₂ ⊆ V/V₂.

Apply Theorem 2.5.1, page 81 to obtain that (V/V₂)/ker φ(·) and Im φ(·) are isomorphic, which means that (V/V₂)/(V₁/V₂) and V/V₁ are isomorphic.

2.5.10. Corollary. Consider two finite-type vector spaces (V₁, K), (V₂, K) and a morphism U(·) : V₁ → V₂ between them. Then:
(1) dim V₁ = dim(ker U(·)) + dim(Im U(·));
(2) U(·) injective ⟺ dim V₁ = dim(Im U(·));
(3) U(·) surjective ⟺ dim V₂ = dim(Im U(·)).

Proof. 1. Use Theorem 2.5.1 and Theorem 2.4.7.

The vector spaces V₁/ker U(·) and Im U(·) are isomorphic, so their dimensions are equal: dim(V₁/ker U(·)) = dim(Im U(·)).

Because dim(V₁/ker U(·)) = dim V₁ − dim(ker U(·)), we get:

dim V₁ − dim(ker U(·)) = dim(Im U(·)) ⇒ dim V₁ = dim(ker U(·)) + dim(Im U(·)).

2. U(·) injective ⟺ ker U(·) = {0} ⟺ dim(ker U(·)) = 0 ⟺ dim V₁ = dim(Im U(·)).

3. U(·) surjective ⟺ Im U(·) = V₂ ⟺ dim V₂ = dim(Im U(·)) (the last equivalence takes place because Im U(·) is a subspace of V₂; when their dimensions are equal and the spaces are of finite type, they are equal as sets).
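The rank–nullity identity in item (1) can be checked mechanically. The sketch below (plain Python with exact rational arithmetic; the sample matrix and its kernel vectors are illustrative choices, not taken from the notes) exhibits a map U : R⁴ → R³ of rank 2 together with two independent kernel vectors, so dim ker U(·) + dim Im U(·) = 2 + 2 = 4 = dim V₁.

```python
from fractions import Fraction

def rank(mat):
    """Rank via Gaussian elimination over the rationals (exact arithmetic)."""
    m = [[Fraction(x) for x in row] for row in mat]
    rows, cols, r = len(m), len(m[0]), 0
    for c in range(cols):
        piv = next((i for i in range(r, rows) if m[i][c] != 0), None)
        if piv is None:
            continue
        m[r], m[piv] = m[piv], m[r]
        for i in range(rows):
            if i != r and m[i][c] != 0:
                f = m[i][c] / m[r][c]
                m[i] = [a - f * b for a, b in zip(m[i], m[r])]
        r += 1
    return r

def apply(M, v):
    return [sum(M[i][j] * v[j] for j in range(len(v))) for i in range(len(M))]

# U : R^4 -> R^3 (third row = row1 + row2, so dim Im U = rank U = 2).
U = [[1, 2, 0, 1],
     [0, 1, 1, 1],
     [1, 3, 1, 2]]
k1, k2 = [2, -1, 1, 0], [1, -1, 0, 1]   # two independent kernel vectors

assert apply(U, k1) == [0, 0, 0] and apply(U, k2) == [0, 0, 0]
assert rank(U) == 2 and rank([k1, k2]) == 2
# Corollary 2.5.10 (1): dim V1 = dim ker U + dim Im U = 2 + 2 = 4.
assert len(U[0]) == 2 + rank(U)
```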
2.5.11. Theorem. (Sard's Quotient Theorem, finite-dimensional form) Consider three vector spaces X, Y, Z over the same field K and the linear transformations S(·) : X → Y and T(·) : X → Z.

If S(·) is surjective and ker S(·) ⊆ ker T(·), then there is a unique linear transformation R(·) : Y → Z such that T = R ∘ S.

Proof. Denote by x̂ the class of x in X/ker S(·) and by x̃ the class of x in X/ker T(·). Consider the linear transformations Ŝ(·) : X/ker S(·) → Y and T̃(·) : X/ker T(·) → Z, defined by:

Ŝ(x̂) = S(x) and T̃(x̃) = T(x).

The functions Ŝ(·) and T̃(·) are well defined (Exercise!).
The functions Ŝ(·) and T̃(·) are linear transformations (Exercise!).
The functions Ŝ(·) and T̃(·) are injective (Exercise!).
Ŝ(·) is surjective (because S(·) is surjective) (Exercise!).
⇒ Ŝ(·) is bijective ⇒ there is an isomorphism Ŝ⁻¹(·) : Y → X/ker S(·).

Consider the function P(·) : X/ker S(·) → X/ker T(·), defined by P(x̂) = x̃ (this is meaningful precisely because ker S(·) ⊆ ker T(·)).
P(·) is well defined (Exercise!).
P(·) is a linear transformation (Exercise!).

Define R(·) : Y → Z by R(·) = (T̃ ∘ P ∘ Ŝ⁻¹)(·). Then

(R ∘ S)(x) = R(S(x)) = (T̃ ∘ P ∘ Ŝ⁻¹)(S(x)) = T̃(P(x̂)) = T̃(x̃) = T(x).

For the uniqueness: if R₁ ∘ S = R₂ ∘ S = T, then R₁ and R₂ coincide on Im S(·) = Y (because S(·) is surjective), so R₁ = R₂.

2.5.12. Remark. Conversely, if X, Y, Z are vector spaces over the same field of scalars K and S(·) : X → Y and T(·) : X → Z are linear transformations such that there is another linear transformation R(·) : Y → Z with T = R ∘ S, then it happens that ker S(·) ⊆ ker T(·) (Exercise!).

## 2.6. Introduction to Spectral Theory

2.6.1. Definition. Consider a vector space (V, K), a subspace V₀ ⊆ V and a linear operator U(·) : V → V. The subspace V₀ is called invariant with respect to U(·) (or under U(·), or U(·)-invariant) if

U(V₀) ⊆ V₀.

2.6.2. Remark. When a linear operator over an n-dimensional vector space has an m-dimensional invariant subspace, the matrix corresponding to a basis in which the first m vectors form a basis for the invariant subspace has the block form

[ A₁₁  A₁₂ ]
[  0   A₂₂ ]  ∈ M_{n,n}   (that is, A₂₁ = 0 ∈ M_{n−m,m}).

[When the basis is such that the sub-basis occupies the last places, the matrix will instead have A₁₂ = 0.]

Proof.¹ Consider a vector space (V, K) with dimension dim_K V = n and V₀ a subspace in V, with dim_K V₀ = m.

Choose a basis of V such that its first m vectors form a basis for V₀: start with a basis B₀ = {v₁, v₂, …, v_m} of V₀ and a vector v_{m+1} ∈ V \ V₀; then {v₁, …, v_m, v_{m+1}} is a linearly independent set (by contradiction: if the set were linearly dependent, then v_{m+1} would be a linear combination of B₀, a contradiction with v_{m+1} ∈ V \ V₀).

Repeat the procedure a finite number of times (n − m times) to get the basis B = {v₁, v₂, …, v_m, v_{m+1}, …, v_n} = B₀ ∪ {v_{m+1}, …, v_n} for V, in which B₀ = {v₁, …, v_m} is a basis for V₀.

In this basis, the vectors of the subspace V₀ are represented as columns with zeros on the last n − m positions.

Consider a linear operator U(·) : V → V for which U(V₀) ⊆ V₀, and let the basis B, with the properties above, be fixed in V, considered both as the domain and as the codomain.

The matrix which represents U(·) has as columns [U(v_j)]_B: U(v₁), U(v₂), …, U(v_n) are the images by the linear operator of the basis vectors, and they are represented in the codomain with respect to the same basis.

Since V₀ is invariant with respect to U(·), the images of the set B₀ remain in V₀: {U(v₁), U(v₂), …, U(v_m)} ⊆ V₀, so

U(v_j) = Σᵢ₌₁..m a_{ij} v_i,   [U(v_j)]_B = (a_{1j}, a_{2j}, …, a_{mj}, 0, …, 0)ᵀ,   ∀j = 1, m.

The matrix of U(·) therefore has the form

[ a_{11}  …  a_{1m}  a_{1,m+1}    …  a_{1n}    ]
[   ⋮          ⋮        ⋮                ⋮      ]
[ a_{m1}  …  a_{mm}  a_{m,m+1}    …  a_{mn}    ]
[   0     …    0     a_{m+1,m+1}  …  a_{m+1,n} ]
[   ⋮          ⋮        ⋮                ⋮      ]
[   0     …    0     a_{n,m+1}    …  a_{nn}    ]

which is of the form

[ A₁₁  A₁₂ ]
[  0   A₂₂ ]

with A₁₁ ∈ M_{m,m}, A₁₂ ∈ M_{m,n−m}, 0 ∈ M_{n−m,m}, A₂₂ ∈ M_{n−m,n−m}.

¹Proof completed after a question from Alexandru Grădinaru, 2013–2014.
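As a small sanity check of this remark, the sketch below (plain Python; the operator and the basis are illustrative choices, not from the notes) represents an operator with the invariant subspace V₀ = span{(1,1,0)} in a basis whose first vector spans V₀, and verifies that the block below the first row in the first column (here m = 1) is zero.

```python
def matmul(X, Y):
    return [[sum(X[i][k] * Y[k][j] for k in range(len(Y)))
             for j in range(len(Y[0]))] for i in range(len(X))]

# A (in the standard basis) leaves V0 = span{(1,1,0)} invariant:
# A(1,1,0) = (3,3,0) = 3*(1,1,0).
A = [[1, 2, 0],
     [2, 1, 0],
     [0, 0, 3]]

# Basis B whose first vector spans V0 (columns of T), and its inverse.
T      = [[1, 0, 0], [1, 1, 0], [0, 0, 1]]
T_inv  = [[1, 0, 0], [-1, 1, 0], [0, 0, 1]]
ID3    = [[1, 0, 0], [0, 1, 0], [0, 0, 1]]
assert matmul(T, T_inv) == ID3

B = matmul(T_inv, matmul(A, T))   # representation of the operator in basis B
# m = 1, so the block A21 (below the first row, in the first column) is zero:
assert B[1][0] == 0 and B[2][0] == 0
```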

2.6.3. Remark. When the space may be represented as a direct sum of U(·)-invariant subspaces, the matrix representation of U(·) in a suitable basis (a basis obtained as a union of bases of the invariant subspaces) has the pseudodiagonal form

[ A₁₁   0   …   0  ]
[  0   A₂₂  …   0  ]
[  ⋮         ⋱   ⋮  ]
[  0    0   …  A_kk ]

Proof. For two subspaces, consider V = V₁ ⊕ V₂ with U(V₁) ⊆ V₁ and U(V₂) ⊆ V₂, and consider a basis attached to this direct sum: f₁, …, f_{k₁}, g₁, …, g_{k₂}; then the last k₂ coordinates of U(f_j) are zero (because the vectors U(f_j) are representable only on V₁), while the first k₁ coordinates of U(g_j) are zero (because the vectors U(g_j) are representable only on V₂).

We get the pseudodiagonal matrix form

[ A₁₁   0  ]
[  0   A₂₂ ].

For more invariant subspaces the pseudodiagonal form is obtained by induction.
2.6.4. Remark (*). When V₀ is U(·)-invariant, it is possible for a complement of V₀ not to be U(·)-invariant; moreover, it is possible that every complement of V₀ fails to be U(·)-invariant.

2.6.5. Definition. The U(·)-invariant one-dimensional subspaces are also called U(·)-invariant directions. Any nonzero vector of an U(·)-invariant direction is called an eigenvector.

2.6.6. Remark. The vectors x and U(x) are collinear if and only if there is a scalar λ such that

U(x) = λx.

The scalar λ is called the eigenvalue corresponding to the eigenvector x.

2.6.7. Remark. The eigenvalue corresponding to an eigenvector is unique.

Proof. When U(x) = λ₁x and U(x) = λ₂x with λ₁ ≠ λ₂, then (λ₁ − λ₂)x = 0 ⇒ x = 0, which is a contradiction with the requirement that the eigenvector be nonzero.

2.6.8. Remark. When an eigenvector x corresponds to an eigenvalue λ, any vector αx with α ≠ 0 is another eigenvector corresponding to the same eigenvalue.

Proof. When U(x) = λx, by multiplying with α we get αU(x) = αλx ⇒ U(αx) = λ(αx), so that the vector αx is also an eigenvector of the eigenvalue λ.

2.6.9. Remark. Consider a set of eigenvectors, each of them corresponding to the same linear operator but to a different eigenvalue. Then the set is linearly independent.

Proof. Consider the linear operator U(·) and the eigenvectors v₁, …, vₘ, corresponding to the distinct eigenvalues λ₁, …, λₘ.

For m = 2: when α₁v₁ + α₂v₂ = 0, then by applying the operator we get

0 = U(0) = U(α₁v₁ + α₂v₂) = α₁U(v₁) + α₂U(v₂) = α₁λ₁v₁ + α₂λ₂v₂.

For λ₁ = 0 (so that λ₂ ≠ 0), we get:

α₁v₁ + α₂v₂ = 0 and α₂λ₂v₂ = 0; from λ₂ ≠ 0 and v₂ ≠ 0 it follows that α₂ = 0, and then α₁v₁ = 0 with v₁ ≠ 0 gives α₁ = 0.

For λ₁ ≠ 0, we get:

α₁v₁ + α₂v₂ = 0 and α₁λ₁v₁ + α₂λ₂v₂ = 0; multiplying the first relation by λ₁ and subtracting, α₂(λ₂ − λ₁)v₂ = 0, and since λ₂ ≠ λ₁ and v₂ ≠ 0 we get α₂ = 0, then α₁ = 0; a nonzero scalar would give a contradiction.

For arbitrary m, proceed by induction: suppose the statement holds for m − 1 eigenvectors and let α₁v₁ + ⋯ + αₘvₘ = 0. Applying U(·) gives α₁λ₁v₁ + ⋯ + αₘλₘvₘ = 0; multiplying the first relation by λₘ and subtracting, we obtain

α₁(λ₁ − λₘ)v₁ + ⋯ + αₘ₋₁(λₘ₋₁ − λₘ)vₘ₋₁ = 0,

a null linear combination of m − 1 eigenvectors corresponding to distinct eigenvalues; we may apply the induction hypothesis to get that ∀j = 1, m−1, αⱼ(λⱼ − λₘ) = 0, and since λⱼ ≠ λₘ, we get αⱼ = 0; finally αₘvₘ = 0 with vₘ ≠ 0 gives αₘ = 0.
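A minimal numerical instance of this remark (the 2×2 matrix is an illustrative choice, not from the notes): two eigenvectors for the distinct eigenvalues 2 and 3 are checked to be linearly independent via a nonzero determinant.

```python
# A has distinct eigenvalues 2 and 3, with eigenvectors v1 and v2.
A = [[2, 1],
     [0, 3]]
v1, v2 = [1, 0], [1, 1]

def apply(M, v):
    return [sum(M[i][j] * v[j] for j in range(len(v))) for i in range(len(M))]

assert apply(A, v1) == [2 * x for x in v1]   # U(v1) = 2 v1
assert apply(A, v2) == [3 * x for x in v2]   # U(v2) = 3 v2

# Independence: the determinant of the matrix with columns v1, v2 is nonzero.
det = v1[0] * v2[1] - v2[0] * v1[1]
assert det != 0
```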

2.6.10. Remark. Any linear operator over a finite-type vector space has a number of distinct eigenvalues at most equal to the dimension of the domain.

Proof. For each distinct eigenvalue we have at least one eigenvector, and since their set is linearly independent, the number of vectors in the set cannot exceed the dimension of the vector space.

2.6.11. Remark. Several eigenvectors may correspond to the same eigenvalue and their set may still be linearly independent.

2.6.12. Remark. Consider a fixed eigenvalue λ and denote by V_λ the set of all eigenvectors attached to λ, together with the null vector (the eigenset of λ). Then V_λ is a vector subspace.

Proof. U(x₁) = λx₁ and U(x₂) = λx₂ ⇒ U(α₁x₁ + α₂x₂) = α₁U(x₁) + α₂U(x₂) = λ(α₁x₁ + α₂x₂), so any linear combination of eigenvectors is another eigenvector corresponding to the same eigenvalue (or the null vector).

2.6.13. Definition. The dimension dim V_λ of the eigenset is called the geometric multiplicity of λ.

2.6.14. Remark. The eigenset V_λ is the kernel of the linear operator N(·) : V → V, N(v) = U(v) − λv.

Proof. V_λ = {v ∈ V; U(v) = λv} = {v ∈ V; U(v) − λv = 0} = {v ∈ V; N(v) = 0} = N⁻¹({0}) = ker N(·).

2.6.15. Remark. For the linear operator U(·) : V → V consider the same basis B both in the domain and in the codomain. When we write the relation U(v) = λv in matrix form, in terms of the B-representation in which the linear operator is represented by the matrix A, we get

A[v] = λ[v] ⇒ A[v] = λIₙ[v] ⇒ (A − λIₙ)[v] = 0,

which is a homogeneous system with the coordinates of the vector v as unknowns and the scalar λ as parameter. The necessary and sufficient condition for the system to admit nonzero solutions is that the determinant of the system matrix be zero, that is,

det(A − λIₙ) = 0.

2.6.16. Definition. 1. The equation det(A − λIₙ) = 0 (in the unknown λ) is called the characteristic equation of the linear operator U(·) / matrix A.

2. The polynomial λ ↦ det(A − λIₙ) is called the characteristic polynomial of the linear operator U(·) / matrix A (it is a polynomial with the degree equal to the dimension of the vector space).

3. The roots of the characteristic polynomial are called the eigenvalues of the linear operator / matrix.

4. The multiplicity of an eigenvalue as a root of the characteristic polynomial is called the algebraic multiplicity of the eigenvalue.

5. The set of all distinct roots of the characteristic polynomial is called the spectrum of the linear operator and is denoted by σ(A) or σ(U(·)).

2.6.17. Remark. An equivalent form is det(λIₙ − A) = (−1)ⁿ det(A − λIₙ).

2.6.18. Remark. The polynomial λ ↦ det(A − λI) has the following structure:

det(A − λI) = (−1)ⁿλⁿ + (−1)ⁿ⁻¹ Tr(A) λⁿ⁻¹ + ⋯ + det(A).

2.6.19. Remark. The characteristic polynomial does not depend on the basis in which the linear operator is represented (the basis in which the linear operator is represented by the matrix A).

Proof. When a linear operator is represented in two different bases by the matrices A and B, these are related by B = T⁻¹AT, where T is the change-of-basis matrix; we have

det(T⁻¹AT − λI) = det(T⁻¹AT − λT⁻¹T) = det(T⁻¹(A − λI)T) = det T⁻¹ · det(A − λI) · det T = det(A − λI).

2.6.20. Example. 1. When the matrix representing the linear operator is diagonal, A = diag(d₁, …, dₙ), the characteristic polynomial is λ ↦ ∏ᵢ₌₁ⁿ (dᵢ − λ).

2. When the matrix representing the linear operator is upper semitriangular (which means that the elements below the main diagonal are zero: a_{ij} = 0, ∀i, j with i > j), then the characteristic polynomial is λ ↦ ∏ᵢ₌₁ⁿ (a_{ii} − λ) (a_{ii} are the elements of the main diagonal).

3. When the matrix representing the linear operator has a pseudodiagonal form (Remark 2.6.3),

A = [ A₁₁   0   …   0  ]
    [  0   A₂₂  …   0  ]
    [  ⋮         ⋱   ⋮  ]
    [  0    0   …  A_pp ]

with A_ii ∈ M_{k_i,k_i} and Σᵢ₌₁ᵖ k_i = n, then det(A − λIₙ) = ∏ᵢ₌₁ᵖ det(A_ii − λI_{k_i}) (with the identity matrices of the corresponding dimensions): the characteristic polynomial is the product of the characteristic polynomials corresponding to the linear operators attached to the submatrices of the pseudodiagonal.
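The triangular case (part 2) and the coefficient structure of Remark 2.6.18 can be checked together on a small example (an arbitrary upper-triangular 3×3 matrix, not from the notes; plain Python, exact integer arithmetic):

```python
def det3(M):
    return (M[0][0] * (M[1][1] * M[2][2] - M[1][2] * M[2][1])
          - M[0][1] * (M[1][0] * M[2][2] - M[1][2] * M[2][0])
          + M[0][2] * (M[1][0] * M[2][1] - M[1][1] * M[2][0]))

def P(A, lam):
    """Characteristic polynomial value det(A - lam*I) for a 3x3 matrix."""
    return det3([[A[i][j] - (lam if i == j else 0) for j in range(3)]
                 for i in range(3)])

A = [[1, 4, 5],
     [0, 2, 6],
     [0, 0, 3]]      # upper triangular with diagonal 1, 2, 3

# Part 2 of the example: for a triangular matrix, P(lam) = prod(a_ii - lam).
for lam in range(-3, 4):
    assert P(A, lam) == (1 - lam) * (2 - lam) * (3 - lam)

# Remark 2.6.18: P(lam) = -lam^3 + Tr(A) lam^2 + ... + det(A).
trace = A[0][0] + A[1][1] + A[2][2]
assert P(A, 0) == det3(A)                            # constant term = det(A)
assert (P(A, 1) + P(A, -1)) // 2 - P(A, 0) == trace  # lam^2 coefficient = Tr(A)
```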

2.6.21. Remark. We consider mainly linear operators over the field of scalars R. Solving the characteristic equation may lead to the following possible situations:

(1) All the roots belong to R and they are distinct.
In this case, there is a basis of eigenvectors corresponding to the eigenvalues, all the eigensets have dimension 1, and the matrix of the linear operator in this basis (for both the domain and the codomain) has diagonal form, with the eigenvalues on the main diagonal, in the order given by the order of the eigenvectors in the basis.

(2) All the roots belong to R but some of them are not distinct.
In this case, for a given eigenvalue with (algebraic) multiplicity greater than 1, the question is whether the geometric multiplicity of the eigenvalue equals the algebraic multiplicity of the eigenvalue. There are two subcases:
(a) For each eigenvalue, the geometric and the algebraic multiplicities are equal.
(b) For at least one eigenvalue, the geometric and the algebraic multiplicities are not equal.

(3) Some of the eigenvalues are not from R (they are complex, from C).

Even if the linear operator is represented by a real matrix, some of the eigenvalues may be complex, and in this case their associated eigenvectors will also have complex coordinates.

To overcome this difficulty we will follow an indirect approach: even if the linear operator is initially considered over the vector space (Rⁿ, R), it will be considered from the beginning to be defined over the complex vector space (Cⁿ, C); in this context the (complex) Jordan canonical form will be obtained, after which a certain decomplexification procedure will be applied to obtain the real counterpart of the Jordan canonical form.

Consider a finite-dimensional complex vector space (V, C) and a linear operator U(·) : V → V, represented by the matrix A in a certain fixed basis and with the characteristic equation

0 = det(A − λIₙ) = ∏ᵢ₌₁ᵏ (λᵢ − λ)^{nᵢ}

(where some of the eigenvalues λᵢ may be complex numbers).

2.6.22. Definition. The linear operator U(·) : V → V is called diagonalizable when there is a basis of the vector space V in which the attached matrix is diagonal.

2.6.23. Remark. The linear operator is diagonalizable ⟺ there is an invertible matrix P (which is the change-of-basis matrix from the old basis to the new basis) such that the matrix P⁻¹AP is diagonal.

2.6.24. Theorem. The linear operator U(·) : V → V is diagonalizable ⟺ there is a basis of V which consists only of eigenvectors of U(·).
Proof. ")"
Suppose that the representation of U ( ) in the basis B = fv1 ;

## codomain) is denoted by D and it is a diagonal matrix:

2
3
d1 0 0
6
7
6
7
D = 6 0 . . . 0 7.
4
5
0 0 dn
2
3

6 1k 7
6 . 7
In coordinates, [vk ]B = 6 .. 7 (the column k of the identity matrix, 1 on the line k and 0 for the rest
4
5
nk

## of the lines); then

2

d1

0
..

32

7 6 1k 7
6
76 . 7
6
[U (vk )]B = 6 0
. 0 7 6 .. 7 = dk [vk ]B
54
5
4
0 0 dn
nk
so that [U (vk )]B = dk [vk ]B , which means that vk is an eigenvector corresponding to the eigenvalue

dk .
This means that each vector of the basis is an eigenvector while the elements of the main diagonal of
the matrix D are eigenvalues of U ( ).
"("
When the basis B = fv1 ;

; vn g has only vectors which are eigenvectors for U ( ), then for each

## k = 1; n, U (vk ) = k vk and because B is a basis we may write this in matrix form:

2
3
2
3
6 1k 7
6
6 . 7
6
[vk ]B = 6 .. 7 and [U (vk )]B = 6
4
5
4
nk

## For an arbitrary vector v =

n
P

k=1

U (v) = U

n
P

k=1

k vk

n
P

k=1

kU

k 1k

..
.

k nk

7
7
7.
5

6 1 7
6 .. 7
k vk (represented in the basis B), [v]B = 6 . 7 and
4
5
n

(vk ) =

n
P

k=1

k k vk ,

so that

## 2.6. INTRODUCTION TO SPECTRAL THEORY

1 1

0
...

32

0
...

93

7
76
7 6
7 6
7
7 6 .. 7 6
7 6
=
7
6
7
6
7=6 0
0
0 7 [v]B ,
0
.
5
54
5 4
5 4
0 0
0 0
n
n
n
n n
which means that the matrix attached to U ( ) in the basis B is diagonal.

6
6
[U (v)]B = 6
4

..
.

In other words, if we may nd a basis of eigenvectors, then in this basis the linear operator has a
diagonal matrix with eigenvalues on the diagonal.

2.6.25. Example. The linear operator represented in the standard basis by the matrix

A = [  4   0   1 ]
    [ −1  −6  −2 ]
    [  5   0   0 ]

has:

the characteristic polynomial: λ ↦ det(A − λI₃) = −λ³ − 2λ² + 29λ + 30;

the characteristic equation: −λ³ − 2λ² + 29λ + 30 = 0;

the roots of the characteristic equation (the eigenvalues): λ₁ = 5, λ₂ = −6, λ₃ = −1;

the eigenvectors attached to the eigenvalue λ₁ = 5 are the nonzero solutions of the system

[ −1   0    1 ] [ξ₁]   [0]
[ −1  −11  −2 ] [ξ₂] = [0]
[  5   0   −5 ] [ξ₃]   [0],

which means

(−1)ξ₁ + 0·ξ₂ + 1·ξ₃ = 0
(−1)ξ₁ + (−11)ξ₂ + (−2)ξ₃ = 0
5ξ₁ + 0·ξ₂ + (−5)ξ₃ = 0

⇒ ξ₁ = α, ξ₂ = −(3/11)α, ξ₃ = α, so V_{λ₁} = { α(1, −3/11, 1)ᵀ; α ∈ R };

the eigenvectors attached to the eigenvalue λ₂ = −6: V_{λ₂} = { α(0, 1, 0)ᵀ; α ∈ R };

the eigenvectors attached to the eigenvalue λ₃ = −1: V_{λ₃} = { α(−1/5, −9/25, 1)ᵀ; α ∈ R }.

We know from 2.6.9 that eigenvectors corresponding to distinct eigenvalues are linearly independent, so that if we choose an eigenvector for each eigenvalue we get 3 eigenvectors which form a linearly independent set; as it is also maximal (the embedding space has dimension 3), it is a basis.

If the basis is {(1, −3/11, 1)ᵀ, (0, 1, 0)ᵀ, (−1/5, −9/25, 1)ᵀ}, the change-of-basis matrix (the matrix with the vectors as columns) is

T = [  1      0  −1/5  ]
    [ −3/11   1  −9/25 ]
    [  1      0   1    ],

its inverse is

T⁻¹ = [  5/6   0   1/6  ]
      [ −4/55  1  19/55 ]
      [ −5/6   0   5/6  ],

and

T⁻¹AT = diag(5, −6, −1),

which is a diagonal form.

When the eigenvectors are placed in a different order, B = {(−1/5, −9/25, 1)ᵀ, (1, −3/11, 1)ᵀ, (0, 1, 0)ᵀ}, then

T = [ −1/5    1   0 ]
    [ −9/25  −3/11 1 ]
    [  1      1   0 ],

T⁻¹ = [ −5/6   0   5/6  ]
      [  5/6   0   1/6  ]
      [ −4/55  1  19/55 ]

and

T⁻¹AT = diag(−1, 5, −6)

(also a diagonal form, but on the diagonal the order of the eigenvalues is changed).

If we choose different eigenvectors, B = {(11, −3, 11)ᵀ, (0, 5, 0)ᵀ, (5, 9, −25)ᵀ}, then

T = [ 11  0    5 ]
    [ −3  5    9 ]
    [ 11  0  −25 ],

T⁻¹ = [ 5/66    0    1/66   ]
      [ −4/275  1/5  19/275 ]
      [ 1/30    0   −1/30   ]

and

T⁻¹AT = diag(5, −6, −1)

(the same diagonal form).
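The first change of basis in this example can be replayed mechanically. The sketch below (plain Python with exact fractions) verifies T·T⁻¹ = I and the diagonal form T⁻¹AT = diag(5, −6, −1); note that the minus signs in A and T were recovered from the eigendata of the garbled original, so treat them as a reconstruction.

```python
from fractions import Fraction as F

def matmul(X, Y):
    return [[sum(X[i][k] * Y[k][j] for k in range(len(Y)))
             for j in range(len(Y[0]))] for i in range(len(X))]

# The matrix of Example 2.6.25 (signs reconstructed from its eigendata).
A = [[4, 0, 1], [-1, -6, -2], [5, 0, 0]]

# Eigenvector basis as columns, for lambda = 5, -6, -1, and its inverse.
T = [[1,         0, F(-1, 5)],
     [F(-3, 11), 1, F(-9, 25)],
     [1,         0, 1]]
T_inv = [[F(5, 6),   0, F(1, 6)],
         [F(-4, 55), 1, F(19, 55)],
         [F(-5, 6),  0, F(5, 6)]]

ID3 = [[1, 0, 0], [0, 1, 0], [0, 0, 1]]
assert matmul(T, T_inv) == ID3

D = matmul(T_inv, matmul(A, T))
assert D == [[5, 0, 0], [0, -6, 0], [0, 0, -1]]   # diag(5, -6, -1)
```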

2.6.26. Theorem. (The Hamilton–Cayley Theorem) Consider a linear operator U(·) which is represented by the matrix A in a certain basis, with the characteristic polynomial P(λ) = det(A − λI). Then P(A) = 0 ∈ Mₙ.

Proof. For λ outside the spectrum, consider the inverse matrix:

(A − λI)⁻¹ = (1 / det(A − λI)) · B_λ = (1 / P(λ)) · B_λ,

where B_λ is the matrix with elements A_{ji} (the cofactors, i.e. the algebraic complements, of the matrix (A − λI)), which are polynomials of degree at most n − 1 in λ:

B_λ = B₀ + λB₁ + ⋯ + λⁿ⁻¹Bₙ₋₁.

From the identity above,

P(λ)·I = (A − λI)·B_λ,

and we rewrite this relation with respect to the increasing powers of λ; with the notation P(λ) = a₀ + a₁λ + ⋯ + aₙλⁿ, the previous equality becomes

(a₀ + a₁λ + ⋯ + aₙλⁿ)·I = (A − λI)(B₀ + λB₁ + ⋯ + λⁿ⁻¹Bₙ₋₁),

where we multiply out the right-hand term and organize it by the increasing powers of λ, to obtain:

a₀I + a₁λI + ⋯ + aₙλⁿI = AB₀ + (AB₁ − B₀)λ + (AB₂ − B₁)λ² + ⋯ + (ABₙ₋₁ − Bₙ₋₂)λⁿ⁻¹ + (−Bₙ₋₁)λⁿ.

Because this is an equality of two polynomials, their corresponding coefficients should be equal. We get the equalities:

a₀I = AB₀
a₁I = AB₁ − B₀       | · A (to the left)
a₂I = AB₂ − B₁       | · A² (to the left)
⋯
aₙ₋₁I = ABₙ₋₁ − Bₙ₋₂  | · Aⁿ⁻¹ (to the left)
aₙI = −Bₙ₋₁           | · Aⁿ (to the left)

We multiply conveniently to the left, as indicated, in order to obtain the cancellation of the terms on the right, and we get:

a₀I = AB₀
a₁A = A²B₁ − AB₀
a₂A² = A³B₂ − A²B₁
⋯
aₙ₋₁Aⁿ⁻¹ = AⁿBₙ₋₁ − Aⁿ⁻¹Bₙ₋₂
aₙAⁿ = −AⁿBₙ₋₁

from where by addition it follows

a₀I + a₁A + a₂A² + ⋯ + aₙ₋₁Aⁿ⁻¹ + aₙAⁿ = 0,

which means P(A) = 0 (the null matrix) and, correspondingly, P(U(·)) = O(·) (the null operator).
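A direct check of the Hamilton–Cayley theorem on an arbitrary 2×2 matrix (not from the notes): for n = 2 the characteristic polynomial is P(λ) = λ² − Tr(A)λ + det(A), and P(A) evaluates to the null matrix.

```python
def matmul(X, Y):
    return [[sum(X[i][k] * Y[k][j] for k in range(2))
             for j in range(2)] for i in range(2)]

A = [[1, 2], [3, 4]]               # an arbitrary 2x2 example
trace, det = 1 + 4, 1 * 4 - 2 * 3  # P(lam) = lam^2 - Tr(A) lam + det(A)

A2 = matmul(A, A)
P_of_A = [[A2[i][j] - trace * A[i][j] + (det if i == j else 0)
           for j in range(2)] for i in range(2)]
assert P_of_A == [[0, 0], [0, 0]]  # Hamilton-Cayley: P(A) = 0
```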
2.6.27. Remark. The set Mₙ(R) of the square matrices of dimension n is a vector space of dimension n², so that for each matrix A ∈ Mₙ(R), span{Aᵏ; k ∈ N} has dimension at most n². The Hamilton–Cayley Theorem shows that in fact dim span{Aᵏ; k ∈ N} ≤ n, because Aⁿ (and hence every higher power) is a linear combination of I, A, …, Aⁿ⁻¹. The least number s for which the powers A⁰, A¹, …, Aˢ are linearly dependent gives a dependence of the form Σₖ₌₀ˢ αₖAᵏ = 0 and corresponds to a certain polynomial m(λ) = Σₖ₌₀ˢ αₖλᵏ, which is called the minimal polynomial of the matrix A.
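A minimal illustration of the gap between the characteristic and the minimal polynomial (the matrix is an arbitrary example, not from the notes): A below has characteristic polynomial (2 − λ)³ of degree 3, but it is already annihilated by the degree-2 polynomial (λ − 2)².

```python
def matmul(X, Y):
    n = len(X)
    return [[sum(X[i][k] * Y[k][j] for k in range(n))
             for j in range(n)] for i in range(n)]

# Characteristic polynomial (2 - lam)^3, minimal polynomial (lam - 2)^2.
A = [[2, 1, 0],
     [0, 2, 0],
     [0, 0, 2]]
N = [[A[i][j] - (2 if i == j else 0) for j in range(3)]
     for i in range(3)]          # N = A - 2I

Z = [[0] * 3 for _ in range(3)]
assert N != Z                    # (lam - 2)^1 does not annihilate A
assert matmul(N, N) == Z         # (lam - 2)^2 does: degree 2 < n = 3
```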

2.6.28. Remark. (The greatest common divisor of several polynomials) A certain property which will be used for the Jordan canonical form is the property 2.6.3, which may be seen in particular forms in 2.6.1 and in 2.6.2.

For two polynomials p₁(x) and p₂(x) (with real or complex coefficients), their greatest common divisor is a new polynomial d(x) (with coefficients from the same field) which divides the two polynomials and, with this property, has maximal degree (there is no other polynomial of bigger degree which divides both polynomials). The existence and the finding of the polynomial d(x) may be obtained by factorization (which is a procedure limited by the impossibility of effectively finding the factorization of a polynomial) or by using the Euclid algorithm for finding the greatest common divisor for polynomials, as the last nonzero remainder; the unicity of d(x) may be obtained with the extra condition that the polynomial should have leading coefficient 1². Some of the properties of the polynomial d(x) = gcd(p₁(·), p₂(·)) are:

if d₁(x) | p₁(x) and d₁(x) | p₂(x), then d₁(x) | d(x) (if a polynomial d₁(·) divides both p₁(·) and p₂(·), then d₁(·) also divides their greatest common divisor d(·));

gcd(p₁(·), p₂(·)) = gcd(p₂(·), p₁(·)) (in the Euclid algorithm, it doesn't matter which polynomial is the first);

if gcd(p₁(·), q(·)) = 1, then gcd(p₁(·), p₂(·)) = gcd(p₁(·), p₂(·)q(·));

d(·) is the least-degree polynomial with the property: there are polynomials h₁(·) and h₂(·) such that

(2.6.1) d(·) = h₁(·)p₁(·) + h₂(·)p₂(·);

for three or more polynomials, the greatest common divisor is defined recursively:

gcd(p₁(·), p₂(·), p₃(·)) = gcd(gcd(p₁(·), p₂(·)), p₃(·)),

while the property 2.6.1 may be extended for three polynomials in the following way:

(2.6.2) gcd(p₁(·), p₂(·), p₃(·)) = gcd(gcd(p₁(·), p₂(·)), p₃(·)) = h₁′(·)·gcd(p₁(·), p₂(·)) + h₃′(·)·p₃(·) = h₁′(·)(h₁″(·)p₁(·) + h₂″(·)p₂(·)) + h₃′(·)p₃(·) = h₁(·)p₁(·) + h₂(·)p₂(·) + h₃(·)p₃(·);

for a certain number k of polynomials, the greatest common divisor is defined similarly:

gcd(p₁(·), p₂(·), …, pₖ(·)) = gcd(gcd(p₁(·), …, pₖ₋₁(·)), pₖ(·)),

while the property 2.6.2 may be extended for k polynomials in the following way: there are polynomials hᵢ(·), i = 1, k, such that

(2.6.3) gcd(p₁(·), p₂(·), …, pₖ(·)) = Σᵢ₌₁ᵏ hᵢ(·)pᵢ(·).

²The leading coefficient is the coefficient of the monomial with the greatest degree of the polynomial. Such polynomials are also called monic polynomials.
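The Euclid algorithm mentioned above can be sketched directly for polynomials with rational coefficients (coefficient lists, highest degree first; the sample polynomials are illustrative choices, not from the notes):

```python
from fractions import Fraction

def polyrem(num, den):
    """Remainder of polynomial division; coefficients highest-degree first."""
    num = [Fraction(c) for c in num]
    den = [Fraction(c) for c in den]
    while len(num) >= len(den) and any(num):
        f = num[0] / den[0]
        for i in range(len(den)):
            num[i] -= f * den[i]
        num.pop(0)               # the leading coefficient is now zero
    while num and num[0] == 0:   # strip leading zeros of the remainder
        num.pop(0)
    return num

def poly_gcd(p, q):
    """Euclid's algorithm: the gcd is the last nonzero remainder, made monic."""
    while q:
        p, q = q, polyrem(p, q)
    return [c / p[0] for c in p]

p1 = [1, -3, 2]   # (x - 1)(x - 2)
p2 = [1, -4, 3]   # (x - 1)(x - 3)
assert poly_gcd(p1, p2) == [1, -1]   # gcd = x - 1, monic
```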

2.6.29. Definition. A linear operator N(·) is called nilpotent when there is r ∈ N such that Nʳ(·) = O(·) (the null operator).

2.6.30. Remark. If N(·) is nilpotent, f ≠ 0 is a nonzero vector and k + 1 = min{s; Nˢ(f) = 0}, then the set {f, N(f), …, Nᵏ(f)} is linearly independent.

Proof. Suppose Σᵢ₌₀ᵏ αᵢNⁱ(f) = 0. Applying Nᵏ(·) to this equality kills every term except the first (since Nᵏ⁺ⁱ(f) = 0 for i ≥ 1), so α₀Nᵏ(f) = 0 and, as Nᵏ(f) ≠ 0, α₀ = 0; in a similar way, by applying Nᵏ⁻ʲ(·) successively for j = 1, k, we get that αⱼ = 0 for all j.
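A concrete instance of this remark (the shift operator on R³, a standard illustrative example): for f = e₃ we have N³(f) = 0, so k + 1 = 3, and {f, N(f), N²(f)} is linearly independent.

```python
def apply(M, v):
    return [sum(M[i][j] * v[j] for j in range(3)) for i in range(3)]

# Nilpotent shift operator: N(e3) = e2, N(e2) = e1, N(e1) = 0, so N^3 = O.
N = [[0, 1, 0],
     [0, 0, 1],
     [0, 0, 0]]
f = [0, 0, 1]
v1, v2 = apply(N, f), apply(N, apply(N, f))
assert apply(N, v2) == [0, 0, 0]    # N^3(f) = 0, so k + 1 = 3

# {f, N(f), N^2(f)} is the standard basis (up to order): independent.
def det3(c1, c2, c3):
    M = [c1, c2, c3]
    return (M[0][0] * (M[1][1] * M[2][2] - M[1][2] * M[2][1])
          - M[0][1] * (M[1][0] * M[2][2] - M[1][2] * M[2][0])
          + M[0][2] * (M[1][0] * M[2][1] - M[1][1] * M[2][0]))
assert det3(f, v1, v2) != 0
```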

2.6.31. Theorem. (Reducing a complex linear operator to nilpotent linear operators) Consider a linear operator U(·) : Cⁿ → Cⁿ with the characteristic polynomial P(λ) = ∏ⱼ₌₁ᵏ (λ − λⱼ)^{nⱼ}.

For each j = 1, k, consider the set V_{λⱼ} = ker((U − λⱼI)^{nⱼ}(·)). Then:

(1) For each j = 1, k, V_{λⱼ} is an U(·)-invariant subspace.
(2) V = ⊕ⱼ₌₁ᵏ V_{λⱼ}.
(3) The restriction Uⱼ(·) : V_{λⱼ} → V_{λⱼ}, defined by Uⱼ(x) = U(x), ∀x ∈ V_{λⱼ}, has the structure Uⱼ(·) = (Nⱼ + λⱼI)(·), with Nⱼ(·) nilpotent.

Proof.

(1) Take v ∈ V_{λⱼ} = ker((U − λⱼIₙ)^{nⱼ}(·)); then (U − λⱼI)^{nⱼ}(v) = 0.

We have to prove that U(v) ∈ V_{λⱼ}, which means (U − λⱼI)^{nⱼ}(U(v)) = 0:

(U − λⱼI)^{nⱼ}(U(v)) = (U − λⱼI)^{nⱼ}((U − λⱼI)(v) + λⱼv) = (U − λⱼI)((U − λⱼI)^{nⱼ}(v)) + λⱼ(U − λⱼI)^{nⱼ}(v) = (U − λⱼI)(0) + λⱼ·0 = 0 ⇒ U(v) ∈ V_{λⱼ},

so the subspace V_{λⱼ} is U(·)-invariant.

(2) We have P(λ) = ∏ⱼ₌₁ᵏ (λ − λⱼ)^{nⱼ}, and for j = 1, k we consider the polynomials Pⱼ(λ) and Qⱼ(λ), where

Pⱼ(λ) = (λ − λⱼ)^{nⱼ} and Qⱼ(λ) = ∏ᵢ₌₁,ᵢ≠ⱼᵏ (λ − λᵢ)^{nᵢ} = P(λ)/Pⱼ(λ).

We have gcd(Q₁(·), …, Qₖ(·)) = 1, so from Remark 2.6.28 and the property 2.6.3 there are some polynomials hⱼ(·) such that Σⱼ₌₁ᵏ hⱼ(λ)Qⱼ(λ) = 1.

Moreover, we have P(λ) = Pⱼ(λ)Qⱼ(λ), ∀j = 1, k; we write these relations for the attached linear operator polynomials and we use the Hamilton–Cayley Theorem:

O(·) = P(U(·)) = Pⱼ(U(·)) ∘ Qⱼ(U(·)) ⇒ Pⱼ(U(·))((Qⱼ(U(·)))(v)) = 0, ∀v ∈ V ⇒ (Qⱼ(U(·)))(v) ∈ V_{λⱼ}, ∀v ∈ V,

and Σⱼ₌₁ᵏ hⱼ(U(·)) ∘ Qⱼ(U(·)) = I(·), which means, taking into account (from the Remark 2.2.10) that hⱼ(U(·)) ∘ Qⱼ(U(·)) = Qⱼ(U(·)) ∘ hⱼ(U(·)),

Σⱼ₌₁ᵏ hⱼ(U(·))((Qⱼ(U(·)))(v)) = Σⱼ₌₁ᵏ (Qⱼ(U(·)))((hⱼ(U(·)))(v)) = v, ∀v ∈ V;

because (Qⱼ(U(·)))(w) ∈ V_{λⱼ} for any w and V_{λⱼ} is U(·)-invariant, we get that (Qⱼ(U(·)))((hⱼ(U(·)))(v)) ∈ V_{λⱼ}, so any vector may be written as a sum of vectors from V_{λⱼ}, j = 1, k, and so we have Σⱼ₌₁ᵏ V_{λⱼ} = V.

For the directness of the sum, take vⱼ ∈ V_{λⱼ}, j = 1, k, such that Σⱼ₌₁ᵏ vⱼ = 0; then we have Pᵢ(U(·))(vᵢ) = 0 and

Qᵢ(U(·))(vⱼ) = 0, ∀j = 1, k, j ≠ i ⇒ Qᵢ(U(·))(vᵢ) = −Qᵢ(U(·))(Σₗ₌₁,ₗ≠ᵢᵏ vₗ) = 0;

since the polynomials Pᵢ(·) and Qᵢ(·) are relatively prime, from 2.6.1 we get the existence of the polynomials R₁(·) and R₂(·) such that R₁(λ)Pᵢ(λ) + R₂(λ)Qᵢ(λ) = 1, which means, by passing to linear operator polynomials, that

vᵢ = R₁(U(·))((Pᵢ(U(·)))(vᵢ)) + R₂(U(·))((Qᵢ(U(·)))(vᵢ)) = 0;

we get that the decomposition is unique and the sum is direct.

(3) Consider the linear operator Uⱼ(·) : V_{λⱼ} → V_{λⱼ}, Uⱼ(v) = U(v), ∀v ∈ V_{λⱼ} (the restriction of U(·) over V_{λⱼ}).

Because V_{λⱼ} = ker((U − λⱼI)^{nⱼ}(·)), we have (U − λⱼI)^{nⱼ}(v) = 0 for every v ∈ V_{λⱼ}, so the operator Nⱼ(·) = (Uⱼ − λⱼI)(·) satisfies Nⱼ^{nⱼ}(·) = O(·) on V_{λⱼ}, i.e. Nⱼ(·) is nilpotent; writing Uⱼ(·) = (Uⱼ − λⱼI)(·) + λⱼI(·) = Nⱼ(·) + λⱼI(·), we get the structure from the statement.
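The decomposition in this theorem can be observed on a small example (an arbitrary 3×3 matrix with P(λ) = (2 − λ)²(3 − λ), not from the notes; plain Python): the generalized eigenspaces V_{λ₁} = ker(A − 2I)² and V_{λ₂} = ker(A − 3I) have dimensions 2 and 1, their sum is the whole space, and V_{λ₁} is U(·)-invariant.

```python
def matmul(X, Y):
    n = len(X)
    return [[sum(X[i][k] * Y[k][j] for k in range(n))
             for j in range(n)] for i in range(n)]

def apply(M, v):
    return [sum(M[i][j] * v[j] for j in range(len(v))) for i in range(len(M))]

# P(lam) = (2 - lam)^2 (3 - lam): lambda1 = 2 with n1 = 2, lambda2 = 3 with n2 = 1.
A = [[2, 1, 0],
     [0, 2, 0],
     [0, 0, 3]]
S1 = [[A[i][j] - (2 if i == j else 0) for j in range(3)] for i in range(3)]  # A - 2I
S2 = [[A[i][j] - (3 if i == j else 0) for j in range(3)] for i in range(3)]  # A - 3I
S1sq = matmul(S1, S1)

e1, e2, e3 = [1, 0, 0], [0, 1, 0], [0, 0, 1]
# V_2 = ker (A - 2I)^2 contains e1, e2 ; V_3 = ker (A - 3I) contains e3:
assert apply(S1sq, e1) == [0, 0, 0] and apply(S1sq, e2) == [0, 0, 0]
assert apply(S2, e3) == [0, 0, 0]
# Invariance of V_2: U(e2) = e1 + 2 e2 stays in ker (A - 2I)^2.
assert apply(S1sq, apply(A, e2)) == [0, 0, 0]
# The dimensions add up: 2 + 1 = 3 = dim V, and {e1, e2, e3} is a basis.
```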

2.6.32. Theorem. (The Jordan canonical form for nilpotent operators) Consider a nilpotent
linear operator N ( ) : V ! V over a nitetype vector space V. Then there is a basis for which the matrix
representation of N ( ) is a Jordan block attached to the zero eigenvalue.
Proof. N(·) nilpotent ⇒ ∃! r ∈ N such that ker N^r(·) = V and ker N^{r−1}(·) ≠ V; that is,

r = min{ k : dim ker N^k(·) = dim V = n }

(the existence comes from the nilpotency of N(·), while the unicity is a result of r being the smallest such exponent); moreover, we have x ∈ ker N^k(·) ⇒ N^k(x) = 0 ⇒ N(N^k(x)) = 0 ⇒ x ∈ ker N^{k+1}(·), so that the kernels of the successive powers of N(·) are forming a chain, which is finite (because of the finite dimension of V) and equal with V from the exponent r above; the resulting chain is:

{0} = ker N^0(·) ⊆ ker N^1(·) ⊆ ker N^2(·) ⊆ ... ⊆ ker N^{r−1}(·) ⊊ ker N^r(·) = V.

Consider their dimensions, denoted by m_k = dim ker N^k(·), so that for k = 0..r we have

0 = m_0 ≤ m_1 ≤ m_2 ≤ ... ≤ m_{r−1} < m_r = n.

For each k = 1..r, we have ker N^k(·) = ker N^{k−1}(·) ⊕ Q_k for a chosen complement Q_k, whose dimension is q_k = m_k − m_{k−1}. In particular,

V = ker N^r(·) = ker N^{r−1}(·) ⊕ Q_r,  with dim Q_r = q_r = m_r − m_{r−1} =: p_1.

Choose a basis f_1, ..., f_{p_1} of Q_r, so that V = ker N^{r−1}(·) ⊕ span{f_1, ..., f_{p_1}}. Moreover, Σ_{i=1}^{p_1} α_i f_i ∈ ker N^{r−1}(·) ⇒ α_i = 0, ∀i = 1..p_1 (because of the direct sum). We have the decomposition

V = ker N^{r−1}(·) ⊕ span{f_1, ..., f_{p_1}}.

By applying N(·), we get that the vectors N(f_1), ..., N(f_{p_1}) belong to ker N^{r−1}(·) (because f_i ∈ ker N^r(·) ⇒ N^r(f_i) = 0 ⇒ N^{r−1}(N(f_i)) = 0). Moreover, if any linear combination of these vectors would belong to ker N^{r−2}(·), then

N^{r−2}( Σ α_i N(f_i) ) = N^{r−1}( Σ α_i f_i ) = 0,

from where Σ α_i f_i ∈ ker N^{r−1}(·) and we get α_i = 0; this means that the set {N(f_1), ..., N(f_{p_1})} is linearly independent and contained in ker N^{r−1}(·) \ ker N^{r−2}(·), so it may be included in Q_{r−1}. Because dim Q_{r−1} = q_{r−1} = m_{r−1} − m_{r−2} ≥ q_r = p_1, complete the linearly independent set {N(f_1), ..., N(f_{p_1})} in Q_{r−1} up to a basis of Q_{r−1} with new vectors f_{p_1+1}, ..., f_{p_1+p_2}, where p_2 = q_{r−1} − p_1; then {N(f_1), ..., N(f_{p_1}), f_{p_1+1}, ..., f_{p_1+p_2}} is a basis in Q_{r−1} and

V = ker N^r(·) = span{f_1, ..., f_{p_1}} ⊕ Q_{r−1} ⊕ ker N^{r−2}(·)
  = span{f_1, ..., f_{p_1}} ⊕ span{N(f_1), ..., N(f_{p_1}), f_{p_1+1}, ..., f_{p_1+p_2}} ⊕ ker N^{r−2}(·).

Apply N(·) to the set {N(f_1), ..., N(f_{p_1}), f_{p_1+1}, ..., f_{p_1+p_2}} to get the set {N²(f_1), ..., N²(f_{p_1}), N(f_{p_1+1}), ..., N(f_{p_1+p_2})}, which is a linearly independent set (with similar arguments as above) in Q_{r−2}. Moreover, q_{r−2} ≥ q_{r−1} ≥ q_r, and denoting p_3 = q_{r−2} − q_{r−1} we obtain the structure:

f_1, ..., f_{p_1}                                                             (basis in Q_r);
N(f_1), ..., N(f_{p_1}), f_{p_1+1}, ..., f_{p_1+p_2}                          (basis in Q_{r−1});
N²(f_1), ..., N²(f_{p_1}), N(f_{p_1+1}), ..., N(f_{p_1+p_2}), f_{p_1+p_2+1}, ..., f_{p_1+p_2+p_3}   (basis in Q_{r−2});
.................................................................................
N^{r−1}(f_1), ..., N^{r−1}(f_{p_1}), N^{r−2}(f_{p_1+1}), ..., N^{r−2}(f_{p_1+p_2}), ..., f_{p_1+...+p_{r−1}+1}, ..., f_{p_1+...+p_{r−1}+p_r}   (basis in Q_1).

The last line has only eigenvectors, all attached to the zero eigenvalue.

Each column of the table is a linearly independent set which determines an N(·)-invariant subspace; the first p_1 subspaces have dimension r; the next p_2 subspaces have dimension r − 1, and so on; the last p_r subspaces have dimension 1. The entire space V is a direct sum of the subspaces on the columns; for the first column, choose as basis the set

N^{r−1}(f_1), N^{r−2}(f_1), ..., N(f_1), f_1 (in this order).

In this basis the restriction of N(·) is given by the values of N(·) at the vectors forming the basis:

f_1 ∈ V = ker N^r(·) ⇒ N(N^{r−1}(f_1)) = N^r(f_1) = 0, with coordinates (0, 0, ..., 0)^T;
N(N^{r−2}(f_1)) = N^{r−1}(f_1), with coordinates (1, 0, ..., 0)^T;
..........................................................
N(f_1) = N(f_1), with coordinates (0, ..., 0, 1, 0)^T;

so on this invariant subspace

             [ 0 1 0 ... 0 ]
             [ 0 0 1 ... 0 ]
N|_inv(x) =  [ . . .  .  . ] x =: J_0(r) x;
             [ 0 0 0 ... 1 ]
             [ 0 0 0 ... 0 ]

finally, we get for the matrix representation of N(·) the following structure:

diag( J_0(r), ..., J_0(r),  J_0(r−1), ..., J_0(r−1),  ...,  J_0(1), ..., J_0(1) ),

a block-diagonal matrix with p_1 cells J_0(r) of order r, p_2 cells J_0(r − 1) of order r − 1, and so on, down to p_r cells J_0(1) of order 1.
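The combinatorics of the proof (the dimensions m_k, the differences q_k, and the cell counts p_k) can be checked numerically; a sketch with numpy, where the test matrix is a hypothetical example already in the canonical form J_0(3) ⊕ J_0(1):

```python
import numpy as np

def jordan_cell_counts(N):
    """For a nilpotent matrix N, return {cell size: number of Jordan cells},
    computed from m_k = dim ker N^k = n - rank(N^k), as in the proof above."""
    n = N.shape[0]
    m = [0]                                   # m_0 = 0
    P = np.eye(n)
    while m[-1] < n:                          # stops because N is nilpotent
        P = P @ N
        m.append(n - np.linalg.matrix_rank(P))
    r = len(m) - 1                            # nilpotency index
    q = [m[k] - m[k - 1] for k in range(1, r + 1)]   # q_1, ..., q_r
    q.append(0)
    # the number of cells of order k is q_k - q_{k+1}
    return {k: q[k - 1] - q[k] for k in range(1, r + 1) if q[k - 1] - q[k] > 0}

# J_0(3) + J_0(1): one cell of order 3, one cell of order 1
N = np.zeros((4, 4))
N[0, 1] = N[1, 2] = 1
print(jordan_cell_counts(N))   # {1: 1, 3: 1}
```

The dictionary read off here is exactly the block structure the theorem guarantees: one cell of order 3 and one of order 1.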

2.6.33. Definition. An r × r matrix of the form

         [ λ 1 0 ... 0 ]
         [ 0 λ 1 ... 0 ]
J_λ(r) = [ . . .  .  . ]
         [ 0 0 ... λ 1 ]
         [ 0 0 ... 0 λ ]

(λ on the diagonal, 1 on the superdiagonal) is called a Jordan cell of order r attached to the scalar λ.

2.6.34. Definition. A Jordan block attached to the scalar λ and the orders r_1, ..., r_k is a matrix of dimension (r_1 + ... + r_k) × (r_1 + ... + r_k) with the form

J_λ(r_1, ..., r_k) = diag( J_λ(r_1), J_λ(r_2), ..., J_λ(r_k) ).

2.6.35. Definition. A Jordan matrix attached to the scalars (λ_1, ..., λ_s) and the orders r_1^i, ..., r_{k_i}^i, i = 1..s, is a matrix with the form

J = diag( J_{λ_1}(r_1^1, ..., r_{k_1}^1), J_{λ_2}(r_1^2, ..., r_{k_2}^2), ..., J_{λ_s}(r_1^s, ..., r_{k_s}^s) ).
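The three definitions translate directly into code; a minimal sketch that assembles a Jordan matrix from (scalar, orders) pairs (the helper names `jordan_cell` and `jordan_matrix` are ad-hoc choices, not a library API):

```python
import numpy as np

def jordan_cell(lam, r):
    """J_lambda(r): lambda on the diagonal, 1 on the superdiagonal."""
    return lam * np.eye(r) + np.diag(np.ones(r - 1), k=1)

def jordan_matrix(blocks):
    """Jordan matrix from pairs (lambda, [r_1, ..., r_k]): a block-diagonal
    arrangement of the Jordan cells, as in Definitions 2.6.33-2.6.35."""
    cells = [jordan_cell(lam, r) for lam, orders in blocks for r in orders]
    n = sum(c.shape[0] for c in cells)
    J = np.zeros((n, n))
    i = 0
    for c in cells:
        r = c.shape[0]
        J[i:i + r, i:i + r] = c
        i += r
    return J

# J_2(2, 1) followed by J_{-1}(1): a 4x4 Jordan matrix
J = jordan_matrix([(2.0, [2, 1]), (-1.0, [1])])
print(J)
```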

2.6.1. Decomplexification of the Complex Jordan Canonical Form. When the field of scalars is real, the eigenvalues are, generally speaking, complex numbers, and so they don't belong to the field of scalars; moreover, the attached eigenvectors have complex coordinates and thus they don't belong to the vector space. The following procedure obtains a pseudodiagonal form for this situation.

Consider the space (R^n, R) and a linear operator U(·) : R^n → R^n with the representation (in the standard basis) given by the matrix A [so that [U(x)]_E = A[x]_E]. The characteristic polynomial P(λ) = det(A − λI) has real coefficients, so if λ = α + iβ ∈ C \ R is an eigenvalue of multiplicity m, then λ̄ = α − iβ is also an eigenvalue with the same multiplicity (because when a polynomial with real coefficients has a complex root, it also has as root its complex conjugate).

The same matrix A defines a new linear operator over C^n given by [U(x)]_E = A[x]_E. Over the field C this linear operator admits a Jordan basis in which for the complex eigenvalue λ we have m corresponding basis vectors (some of them are eigenvectors) with complex coordinates, denoted by f_1, ..., f_m; then their complex conjugates f̄_1, ..., f̄_m are the corresponding vectors for the eigenvalue λ̄:

If v is an eigenvector for A, Av = λv, then A(Re v + i Im v) = (Re λ + i Im λ)(Re v + i Im v) ⇒ Av̄ = λ̄v̄ (since the matrix A has real components), so that v̄ is an eigenvector for λ̄.

The Jordan basis corresponding to the eigenvalue λ splits into q chains:

A f_1^1 = λ f_1^1; ... ; A f_1^q = λ f_1^q
A f_2^1 = f_1^1 + λ f_2^1; ...
.........................................................
A f_{n_1}^1 = f_{n_1−1}^1 + λ f_{n_1}^1; ... ; A f_{n_q}^q = f_{n_q−1}^q + λ f_{n_q}^q.

The vectors f̄_j^k are linearly independent and they form the corresponding part of the Jordan basis for the eigenvalue λ̄.

Starting from the vectors attached to these two complex conjugate eigenvalues we may build a basis with real coordinates by replacing each pair of complex conjugate vectors f_j^k, f̄_j^k with the pair of real vectors

g_j^k = (1/2)(f_j^k + f̄_j^k) = Re f_j^k,   h_j^k = (1/(2i))(f_j^k − f̄_j^k) = Im f_j^k,

and from the relations A f_j^k = f_{j−1}^k + λ f_j^k, A f̄_j^k = f̄_{j−1}^k + λ̄ f̄_j^k together with

λf + λ̄f̄ = (Re λ + i Im λ)(Re f + i Im f) + (Re λ − i Im λ)(Re f − i Im f)
         = 2 Re λ · Re f − 2 Im λ · Im f = 2 (Re λ · g − Im λ · h),
λf − λ̄f̄ = (Re λ + i Im λ)(Re f + i Im f) − (Re λ − i Im λ)(Re f − i Im f)
         = 2i Re λ · Im f + 2i Im λ · Re f,

it follows that

A g_j^k = g_{j−1}^k + Re λ · g_j^k − Im λ · h_j^k,
A h_j^k = h_{j−1}^k + Im λ · g_j^k + Re λ · h_j^k.

so that the replacements for each pair of complex conjugate cells (for λ = α + iβ) are: each pair of 1 × 1 cells (λ), (λ̄) is replaced by the real 2 × 2 cell

D = [  α  β ]
    [ -β  α ];

each pair of 2 × 2 cells J_λ(2), J_λ̄(2) is replaced by the real 4 × 4 block

[  α  β  1  0 ]
[ -β  α  0  1 ]
[  0  0  α  β ]
[  0  0 -β  α ];

each pair of 3 × 3 cells J_λ(3), J_λ̄(3) is replaced by the real 6 × 6 block

[  α  β  1  0  0  0 ]
[ -β  α  0  1  0  0 ]
[  0  0  α  β  1  0 ]
[  0  0 -β  α  0  1 ]
[  0  0  0  0  α  β ]
[  0  0  0  0 -β  α ];

..................................

and, in general, a pair of conjugate cells of order n is replaced by the real 2n × 2n block having the 2 × 2 cell D on the block diagonal and I_2 on the block superdiagonal.
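The replacement is faithful in the following sense: the real 2 × 2 cell built from α = Re λ and β = Im λ has exactly the eigenvalues α ± iβ. A quick numerical sketch (α and β below are arbitrary sample values):

```python
import numpy as np

alpha, beta = 3.0, 2.0            # sample Re(lambda), Im(lambda)
D = np.array([[alpha, beta],
              [-beta, alpha]])    # real cell replacing the pair (lambda, conj(lambda))

eig = np.linalg.eigvals(D)
# the eigenvalues of D are alpha - i*beta and alpha + i*beta
assert np.allclose(sorted(eig, key=lambda z: z.imag),
                   [alpha - 1j * beta, alpha + 1j * beta])
print(eig)
```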

The procedure for obtaining the Jordan canonical form of a linear operator U(·) : V → V, dim V = n, is the following:

(1) Find the matrix A of the linear operator (in some basis).
(2) Solve (in C) the characteristic equation det(A − λI) = 0 and find the eigenvalues λ_j ∈ C, j = 1..k, with their multiplicity orders n_j, Σ_{j=1}^k n_j = n.
(3) For each eigenvalue λ_j (with multiplicity n_j) obtained at step 2, find the subspace V_{λ_j} = ker (U − λ_j I)^{n_j}(·).
(4) Find the restriction U_j(·) of the linear operator U(·) over the set V_{λ_j}: U_j(·) : V_{λ_j} → V_{λ_j}, U_j(x) = U(x), ∀x ∈ V_{λ_j}.
(5) Find the nilpotent linear operator N_j(·) : V_{λ_j} → V_{λ_j}, N_j(x) = (U_j − λ_j I)(x), ∀x ∈ V_{λ_j}.
(6) Find the chain of kernels

{0} = ker N_j^0(·) ⊆ ker N_j(·) ⊆ ker N_j^2(·) ⊆ ... ⊆ ker N_j^{r_j−1}(·) ⊊ ker N_j^{r_j}(·) = V_{λ_j};

find r_j = min{k : dim ker N_j^k(·) = dim V_{λ_j} = n_j}, and for each k = 1..r_j consider the decomposition ker N_j^k(·) = ker N_j^{k−1}(·) ⊕ Q_k^j.
(7) Find m_k^j = dim ker N_j^k(·), k = 0..r_j.
(8) Find q_k^j = dim Q_k^j = m_k^j − m_{k−1}^j, k = 1..r_j.
(9) Find

p_1^j = m_{r_j}^j − m_{r_j−1}^j = q_{r_j}^j,
p_2^j = q_{r_j−1}^j − p_1^j,
p_3^j = q_{r_j−2}^j − q_{r_j−1}^j = q_{r_j−2}^j − p_1^j − p_2^j,
and so on.

(10) Choose a basis in Q_{r_j}^j, denoted by f_1^j, ..., f_{p_1^j}^j.
(11) Find the vectors N_j(f_1^j), ..., N_j(f_{p_1^j}^j) and complete them up to a basis in Q_{r_j−1}^j with the vectors f_{p_1^j+1}^j, ..., f_{p_1^j+p_2^j}^j.
(12) Continue this procedure until the following structure is obtained:

f_1^j, ..., f_{p_1^j}^j                                              (basis in Q_{r_j}^j);
N_j(f_1^j), ..., N_j(f_{p_1^j}^j), f_{p_1^j+1}^j, ..., f_{p_1^j+p_2^j}^j     (basis in Q_{r_j−1}^j);
N_j^2(f_1^j), ..., N_j^2(f_{p_1^j}^j), N_j(f_{p_1^j+1}^j), ..., N_j(f_{p_1^j+p_2^j}^j), f_{p_1^j+p_2^j+1}^j, ..., f_{p_1^j+p_2^j+p_3^j}^j   (basis in Q_{r_j−2}^j);
.................................................................................
N_j^{r_j−1}(f_1^j), ..., N_j^{r_j−1}(f_{p_1^j}^j), N_j^{r_j−2}(f_{p_1^j+1}^j), ..., N_j^{r_j−2}(f_{p_1^j+p_2^j}^j), ..., f^j_{p_1^j+...+p_{r_j−1}^j+1}, ..., f^j_{p_1^j+...+p_{r_j}^j}   (basis in Q_1^j).

The Jordan basis is obtained by ordering the above vectors in the following way: choose the vectors on the columns, from below to above (for each column, from the bigger exponent to the smaller exponent); for example, for the first column:

N_j^{r_j−1}(f_1^j), N_j^{r_j−2}(f_1^j), ..., N_j(f_1^j), f_1^j.

(13) Obtain in this basis a matrix for the linear operator which has p_1^j cells of order r_j, p_2^j cells of order r_j − 1, and so on.
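The thirteen steps can be cross-checked with a computer algebra system: sympy's `Matrix.jordan_form` returns a basis-change matrix P and a Jordan form J with A = P J P^{-1}. A sketch on a matrix built from a known Jordan form (the particular J and P below are arbitrary test choices):

```python
from sympy import Matrix

# Build A = P*J*P^{-1} from a known Jordan form J and an invertible P.
J = Matrix([[2, 1, 0],
            [0, 2, 0],
            [0, 0, 1]])
P = Matrix([[1, 1, 0],
            [0, 1, 1],
            [0, 0, 1]])
A = P * J * P.inv()

P2, J2 = A.jordan_form()        # recovers a Jordan form of A

assert P2 * J2 * P2.inv() == A              # A = P2 * J2 * P2^{-1}
assert sorted(J2.diagonal()) == [1, 2, 2]   # same eigenvalues, same multiplicities
print(J2)
```

The recovered J2 agrees with J up to the ordering of the blocks, which is why the test checks the similarity relation and the diagonal rather than exact equality.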

2.6.3. Examples.

2.6.36. Example (One eigenvalue with multiplicity 4). Consider U(·) : R^4 → R^4 with the matrix in the standard basis given by

A = [  1 -1  1  0 ]
    [  1  3  0 -1 ]
    [ -1  0 -1  1 ]
    [  0  1 -1  1 ];

determine the Jordan canonical form and a Jordan basis.

Step 1: the matrix is A above.

Step 2: det(A − λI_4) = (λ − 1)^4 = 0 ⇒ λ = 1, n_1 = 4.

Step 3: the dimension of the eigenset attached to the eigenvalue λ = 1. Consider the linear operator (U − 1·1_{R^4})(·); its matrix is

B := A − I_4 = [  0 -1  1  0 ]
               [  1  2  0 -1 ]
               [ -1  0 -2  1 ]
               [  0  1 -1  0 ],

with

B^2 = (A − I_4)^2 = [ -2 -2 -2  2 ]
                    [  2  2  2 -2 ]
                    [  2  2  2 -2 ]
                    [  2  2  2 -2 ],

B^3 = (A − I_4)^3 = 0 (the zero matrix), and therefore B^4 = (A − I_4)^4 = 0 as well. So

V_{λ=1} = ker (U − 1·I)^{n_1}(·) = ker (U − 1_{R^4})^4(·) = R^4.

Step 4: the restriction U_1(·) of U(·) over V_{λ=1}, U_1(·) : V_{λ=1} → V_{λ=1}, U_1(x) = U(x), ∀x ∈ V_{λ=1}, is U_1(·) : R^4 → R^4, U_1(x) = U(x) (U_1(·) is identical with U(·)).

Step 5: the nilpotent linear operator N_1(·) : R^4 → R^4, defined by N_1(x) = (U_1 − 1·I_4)(x), has the matrix B above.

Step 6: find min{k : dim ker N_1^k(·) = n_1 = 4}, from where we get r_1 = 3. The chain of kernels is

{0} = ker N_1^0(·) ⊊ ker N_1(·) ⊊ ker N_1^2(·) ⊊ ker N_1^3(·) = V_{λ=1} = R^4.

ker N_1(·) is the set of solutions of the system B(x_1, x_2, x_3, x_4)^T = 0, which means all the vectors of the form (−2a + b, a, a, b)^T;
ker N_1^2(·) is the set of solutions of the system B^2(x_1, x_2, x_3, x_4)^T = 0 (a single independent equation, x_1 + x_2 + x_3 − x_4 = 0), which means all the vectors of the form (−a − b + c, a, b, c)^T.

Steps 7-8: the kernels, their dimensions m_k^1 = dim ker N_1^k(·), and the differences q_k^1 = m_k^1 − m_{k−1}^1:

ker N_1^0(·) = {0}:                                                   m_0^1 = 0;
ker N_1(·)   = span{ (−2,1,1,0)^T, (1,0,0,1)^T }:                     m_1^1 = 2,  q_1^1 = 2;
ker N_1^2(·) = span{ (−1,1,0,0)^T, (−1,0,1,0)^T, (1,0,0,1)^T }:       m_2^1 = 3,  q_2^1 = 1;
ker N_1^3(·) = R^4:                                                   m_3^1 = 4,  q_3^1 = 1.

Step 9:
p_1^1 = q_3^1 = 1 ⇒ the Jordan matrix has 1 cell of order r_1 = 3;
p_2^1 = q_2^1 − q_3^1 = 0 ⇒ the Jordan matrix has 0 cells of order r_1 − 1 = 2;
p_3^1 = q_1^1 − q_2^1 = 2 − 1 = 1 ⇒ the Jordan matrix has 1 cell of order 1.

So the Jordan matrix is

J = [ 1 1 0 | 0 ]
    [ 0 1 1 | 0 ]
    [ 0 0 1 | 0 ]
    [-------|---]
    [ 0 0 0 | 1 ].

Step 10 [Finding the Jordan basis]: for each k = 1..r_1 = 3 consider the decomposition ker N_1^k(·) = ker N_1^{k−1}(·) ⊕ Q_k^1 and find the subspaces Q_k^1 (of dimensions q_1^1 = 2, q_2^1 = 1, q_3^1 = 1).

ker N_1^3(·) = R^4 = ker N_1^2(·) ⊕ Q_3^1 ⇒ complete the basis of ker N_1^2(·) up to a basis of R^4; the completion may be done with any vector from R^4 \ ker N_1^2(·), for example with the vector v_1 = (1, 0, 0, 0)^T. The set {(−1,1,0,0)^T, (−1,0,1,0)^T, (1,0,0,1)^T, (1,0,0,0)^T} is linearly independent, so it is a basis of R^4 = ker N_1^3(·) in which the first 3 vectors are a basis for ker N_1^2(·), and Q_3^1 = span{v_1}.

v_1 ∈ R^4 = ker N_1^3(·) ⇒ N_1^3(v_1) = 0; v_1 ∉ ker N_1^2(·) ⇒ N_1^2(v_1) ≠ 0. From N_1^3(v_1) = 0 we get N_1(N_1^2(v_1)) = 0, so N_1^2(v_1) ∈ ker N_1(·), and likewise N_1(v_1) ∈ ker N_1^2(·) \ ker N_1(·). The vectors

N_1(v_1) = Bv_1 = (0, 1, −1, 0)^T,   N_1^2(v_1) = B^2 v_1 = (−2, 2, 2, 2)^T,

together with v_1, give the chain for the cell of order 3.

To complete the basis, another vector has to be chosen to correspond to the cell of order 1, which means from ker N_1(·), and it should be linearly independent from the vector already obtained in ker N_1(·), which means from N_1^2(v_1). Since

N_1^2(v_1) = (−2, 2, 2, 2)^T = 2·(−2, 1, 1, 0)^T + 2·(1, 0, 0, 1)^T ∈ ker N_1(·),

choose v_2 = (1, 0, 0, 1)^T.

The Jordan basis is J = {N_1^2(v_1), N_1(v_1), v_1, v_2} = {(−2,2,2,2)^T, (0,1,−1,0)^T, (1,0,0,0)^T, (1,0,0,1)^T}.

The matrix of N_1(·) in this basis has as columns the representations of the images through N_1(·) of the basis vectors:

N_1(N_1^2(v_1)) = 0          ⇒ [N_1(N_1^2(v_1))]_J = (0,0,0,0)^T;
N_1(N_1(v_1)) = N_1^2(v_1)   ⇒ [N_1(N_1(v_1))]_J = (1,0,0,0)^T;
N_1(v_1) = N_1(v_1)          ⇒ [N_1(v_1)]_J = (0,1,0,0)^T;
N_1(v_2) = 0 = 0·v_2         ⇒ [N_1(v_2)]_J = (0,0,0,0)^T.

So the matrix of N_1(·) in the basis J is

[ 0 1 0 0 ]
[ 0 0 1 0 ]
[ 0 0 0 0 ]
[ 0 0 0 0 ].

The connection between the nilpotent linear operator and the restriction of the original linear operator (in this case, exactly the original linear operator) is N_1(x) = (U_1 − 1·I_4)(x) ⇒ U_1(x) = (N_1 + 1·I_4)(x), so the matrix of the original linear operator in J is

[ 0 1 0 0 ]         [ 1 1 0 0 ]
[ 0 0 1 0 ]  + I_4 = [ 0 1 1 0 ]
[ 0 0 0 0 ]         [ 0 0 1 0 ]
[ 0 0 0 0 ]         [ 0 0 0 1 ].

The change-of-basis matrix from the standard basis to the basis J is

C = [ -2  0  1  1 ]        C^{-1} = [ 0  1/4  1/4  0 ]
    [  2  1  0  0 ]                 [ 0  1/2 -1/2  0 ]
    [  2 -1  0  0 ]                 [ 1   1    1  -1 ]
    [  2  0  0  1 ],                [ 0 -1/2 -1/2  1 ],

and the Jordan canonical form is

J = C^{-1} A C = [ 1 1 0 0 ]
                 [ 0 1 1 0 ]
                 [ 0 0 1 0 ]
                 [ 0 0 0 1 ]

[the Jordan decomposition of the initial matrix: A = C J C^{-1}]. We observe the structure of the Jordan cells: one cell of order 3 and one cell of order 1, both attached to λ = 1.
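The arithmetic of this example can be double-checked numerically; a sketch with numpy restating the example's matrices A, C, and J:

```python
import numpy as np

A = np.array([[ 1, -1,  1,  0],
              [ 1,  3,  0, -1],
              [-1,  0, -1,  1],
              [ 0,  1, -1,  1]])
# Jordan basis as columns: N1^2(v1), N1(v1), v1, v2
C = np.array([[-2,  0,  1,  1],
              [ 2,  1,  0,  0],
              [ 2, -1,  0,  0],
              [ 2,  0,  0,  1]])
J = np.array([[1, 1, 0, 0],
              [0, 1, 1, 0],
              [0, 0, 1, 0],
              [0, 0, 0, 1]])

B = A - np.eye(4)
assert np.allclose(np.linalg.matrix_power(B, 3), 0)   # nilpotent of index 3
assert np.allclose(np.linalg.inv(C) @ A @ C, J)       # C^{-1} A C = J
print("ok")
```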

2.6.37. Example. Use the previous Jordan decomposition to solve the linear homogeneous ordinary differential equations system

x_1' = x_1 − x_2 + x_3
x_2' = x_1 + 3x_2 − x_4
x_3' = −x_1 − x_3 + x_4
x_4' = x_2 − x_3 + x_4.

The matrix form of the system is x' = Ax, with A the matrix of the previous example. Replace the matrix with its Jordan decomposition (from the previous example), A = C J C^{-1}:

x' = C J C^{-1} x;

multiply the equality with C^{-1} to the left: C^{-1}x' = J C^{-1}x. Apply the change of variables y = C^{-1}x (so that y' = C^{-1}x'); the system in the new variables y is y' = Jy:

y_1' = y_1 + y_2
y_2' = y_2 + y_3
y_3' = y_3
y_4' = y_4.

Solve the system in y (from the bottom equation up):

y_4(t) = c_4 e^t,  y_3(t) = c_3 e^t,  y_2(t) = (c_3 t + c_2) e^t,  y_1(t) = (c_3 t^2/2 + c_2 t + c_1) e^t,

that is,

[ y_1(t) ]   [ e^t  t e^t  t^2 e^t / 2  0   ] [ c_1 ]
[ y_2(t) ] = [ 0    e^t    t e^t        0   ] [ c_2 ]
[ y_3(t) ]   [ 0    0      e^t          0   ] [ c_3 ]
[ y_4(t) ]   [ 0    0      0            e^t ] [ c_4 ].

Return to the original variables, x = Cy:

x_1(t) = c_4 e^t − 2c_1 e^t + c_3 (e^t − t^2 e^t) − 2t c_2 e^t
x_2(t) = c_3 (t^2 e^t + t e^t) + c_2 (e^t + 2t e^t) + 2c_1 e^t
x_3(t) = c_3 (t^2 e^t − t e^t) − c_2 (e^t − 2t e^t) + 2c_1 e^t
x_4(t) = 2c_1 e^t + c_4 e^t + 2t c_2 e^t + t^2 c_3 e^t.
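The closed-form solution agrees with the matrix exponential: e^{At} = C e^{Jt} C^{-1}, and for the Jordan matrix J = J_1(3) ⊕ J_1(1) the exponential e^{Jt} is exactly the upper-triangular polynomial matrix used above. A numerical sketch with scipy, restating A and C from the examples above:

```python
import numpy as np
from scipy.linalg import expm

A = np.array([[ 1, -1,  1,  0],
              [ 1,  3,  0, -1],
              [-1,  0, -1,  1],
              [ 0,  1, -1,  1]])
C = np.array([[-2,  0,  1,  1],
              [ 2,  1,  0,  0],
              [ 2, -1,  0,  0],
              [ 2,  0,  0,  1]])

t = 0.7
# exp(Jt) for J = J_1(3) + J_1(1): e^t times the polynomial part
eJt = np.exp(t) * np.array([[1, t, t**2 / 2, 0],
                            [0, 1, t,        0],
                            [0, 0, 1,        0],
                            [0, 0, 0,        1]])

assert np.allclose(expm(A * t), C @ eJt @ np.linalg.inv(C))
print("ok")
```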

2.6.38. Example (Two eigenvalues with multiplicity 2) [Problem 2.23]. Consider U(·) : R^4 → R^4 with the matrix representation in the standard basis

A = [ 0 2 0 -1 ]
    [ 1 0 0  0 ]
    [ 0 1 0  0 ]
    [ 0 0 1  0 ];

determine the Jordan canonical form and a Jordan basis.

Step 1: the matrix is A above.

Step 2: det(A − λI_4) = λ^4 − 2λ^2 + 1 = (λ − 1)^2 (λ + 1)^2 = 0 ⇒ λ_1 = 1, n_1 = 2 and λ_2 = −1, n_2 = 2.

Step 3: find the dimension of the eigenset for λ_1 = 1. Consider the linear operator (U − 1·1_{R^4})^2(·); its matrix is (A − I_4)^2, where

A − I_4 = [ -1  2  0 -1 ]
          [  1 -1  0  0 ]
          [  0  1 -1  0 ]
          [  0  0  1 -1 ]  =: B (the matrix has rank 3),

(A − I_4)^2 = [  3 -4 -1  2 ]
              [ -2  3  0 -1 ]
              [  1 -2  1  0 ]
              [  0  1 -2  1 ]  = B^2 (the matrix has rank 2; by raising at other powers, the rank of the matrix B^k will stay the same).

The set V_{λ_1} = ker (U − λ_1 I)^{n_1}(·) = ker (U − 1_{R^4})^2(·) is the set of solutions of the system B^2 x = 0, meaning x_1 = 3a − 2b, x_2 = 2a − b, x_3 = a, x_4 = b; with v_1 = (3, 2, 1, 0)^T and v_2 = (2, 1, 0, −1)^T we get V_{λ_1} = span(v_1, v_2), while the set B_1 = {v_1, v_2} is a basis for V_{λ_1}.

Step 4: determine the restriction U_1(·) of U(·) over V_{λ_1}, U_1(·) : V_{λ_1} → V_{λ_1}, U_1(x) = U(x), ∀x ∈ V_{λ_1}:

U(v_1) = Av_1 = (4, 3, 2, 1)^T = 2v_1 + (−1)v_2 ⇒ [U(v_1)]_{B_1} = (2, −1)^T (the scalars 2 and −1 may be found from the system U(v_1) = a v_1 + b v_2);
U(v_2) = Av_2 = (3, 2, 1, 0)^T = 1·v_1 + 0·v_2 ⇒ [U(v_2)]_{B_1} = (1, 0)^T (the scalars 1 and 0 may be found from the system U(v_2) = a v_1 + b v_2).

We get U_1(·) : V_{λ_1} → V_{λ_1}, [U_1(x)]_{B_1} = [ 2 1 ; -1 0 ] [x]_{B_1} in the basis B_1 = {v_1, v_2} of V_{λ_1}.

Step 5: the nilpotent linear operator N_1(·) : V_{λ_1} → V_{λ_1} (attached to the eigenvalue λ_1) is the restriction of (U − λ_1 I_4)(·) over V_{λ_1}; in the basis B_1, N_1(·) has the matrix

[ 2  1 ]   [ 1 0 ]   [  1  1 ]
[ -1 0 ] − [ 0 1 ] = [ -1 -1 ].

Remark that the matrix is nilpotent: [ 1 1 ; -1 -1 ]^2 = [ 0 0 ; 0 0 ].

Step 6: find the chain of kernels {0} = ker N_1^0(·) ⊊ ker N_1(·) ⊊ ker N_1^2(·) = V_{λ_1} (in B_1-coordinates):

N_1^0(·) = I_2:  kernel {0},                     m_0^1 = 0;
N_1(·):          kernel span{ (1, −1)^T },       m_1^1 = 1;
N_1^2(·) = 0:    kernel span{ (1,0)^T, (0,1)^T },  m_2^1 = 2.

The kernel of N_1(·) is the set of all solutions of the system [ 1 1 ; -1 -1 ] (x_1, x_2)^T = 0, i.e. x_2 = −x_1. We get r_1 = 2 and for each k = 1..r_1 consider the decomposition ker N_1^k(·) = ker N_1^{k−1}(·) ⊕ Q_k^1.

Complete the set {(1, −1)^T} (which is a basis for ker N_1(·)) up to a basis for ker N_1^2(·) = R^2, for example with the vector [u_2]_{B_1} = (1, 0)^T (any vector from ker N_1^2(·) \ ker N_1(·) may be chosen). Then

[u_1]_{B_1} = [N_1(u_2)]_{B_1} = [ 1 1 ; -1 -1 ] (1, 0)^T = (1, −1)^T ∈ ker N_1(·).

Remark: if the chosen vector would be, for example, (1, 1)^T, then N_1((1, 1)^T) = (2, −2)^T, which is from ker N_1(·), but not exactly the same vector (1, −1)^T — a certain linear combination.

The Jordan basis in V_{λ_1} for N_1(·) is {u_1, u_2}, where N_1(u_1) = 0 and N_1(u_2) = u_1, so in this basis [N_1(x)] = [ 0 1 ; 0 0 ] [x]; this means {u_1, u_2} is the Jordan basis for N_1(·) while [ 0 1 ; 0 0 ] is the Jordan canonical form for N_1(·).

Repeat the previous steps for the next eigenvalue, λ_2 = −1, with multiplicity n_2 = 2:

Step 3: consider (U − λ_2 1_{R^4})^2(·) = (U + 1_{R^4})^2(·); its matrix is (A + I_4)^2, where

A + I_4 = [ 1 2 0 -1 ]
          [ 1 1 0  0 ]
          [ 0 1 1  0 ]
          [ 0 0 1  1 ]  (rank 3),

(A + I_4)^2 = [ 3 4 -1 -2 ]
              [ 2 3  0 -1 ]
              [ 1 2  1  0 ]
              [ 0 1  2  1 ]  (rank 2; if we continue raising to successive powers, the corresponding rank remains the same).

The set V_{λ_2} = ker (U + 1_{R^4})^2(·) is the set of all solutions of the system (A + I_4)^2 x = 0, i.e. x_1 = 3a + 2b, x_2 = −2a − b, x_3 = a, x_4 = b; with v_3 = (3, −2, 1, 0)^T and v_4 = (2, −1, 0, 1)^T we get V_{λ_2} = span(v_3, v_4); the set B_2 = {v_3, v_4} is a basis in V_{λ_2}.

Step 4: find the restriction U_2(·) of U(·) over V_{λ_2}:

U(v_3) = Av_3 = (−4, 3, −2, 1)^T = (−2)v_3 + 1·v_4 ⇒ [U(v_3)]_{B_2} = (−2, 1)^T;
U(v_4) = Av_4 = (−3, 2, −1, 0)^T = (−1)v_3 + 0·v_4 ⇒ [U(v_4)]_{B_2} = (−1, 0)^T.

We get U_2(·) : V_{λ_2} → V_{λ_2}, [U_2(x)]_{B_2} = [ -2 -1 ; 1 0 ] [x]_{B_2}.

Step 5: the nilpotent linear operator N_2(·) (attached to λ_2) is the restriction of (U − λ_2 I_4)(·) over V_{λ_2}; in B_2 its matrix is

[ -2 -1 ]   [ 1 0 ]   [ -1 -1 ]
[  1  0 ] + [ 0 1 ] = [  1  1 ],

which is nilpotent: [ -1 -1 ; 1 1 ]^2 = 0.

Step 6: as before, ker N_2(·) = span{(1, −1)^T} (m_1^2 = 1) and ker N_2^2(·) = R^2 (m_2^2 = 2), so r_2 = 2. Complete the set {(1, −1)^T} up to a basis of ker N_2^2(·) = R^2, for example with the vector [u_4]_{B_2} = (1, 0)^T; then

[u_3]_{B_2} = [N_2(u_4)]_{B_2} = [ -1 -1 ; 1 1 ] (1, 0)^T = (−1, 1)^T ∈ ker N_2(·).

The Jordan basis in V_{λ_2} for N_2(·) is {u_3, u_4}, with N_2(u_3) = 0 and N_2(u_4) = u_3, so the Jordan canonical form for N_2(·) is [ 0 1 ; 0 0 ].

Assemble all the obtained results in the initial basis of R^4 for both eigenvalues. The Jordan basis of the initial linear operator U(·) is:

u_1 = 1·v_1 − 1·v_2 = (3,2,1,0)^T − (2,1,0,−1)^T = (1, 1, 1, 1)^T
u_2 = 1·v_1 + 0·v_2 = (3, 2, 1, 0)^T
u_3 = (−1)·v_3 + 1·v_4 = −(3,−2,1,0)^T + (2,−1,0,1)^T = (−1, 1, −1, 1)^T
u_4 = 1·v_3 + 0·v_4 = (3, −2, 1, 0)^T.

The change-of-basis matrix is

B = [ 1 3 -1  3 ]                        [ -1  0  3  2 ]
    [ 1 2  1 -2 ]      B^{-1} = (1/4) ·  [  1  1 -1 -1 ]
    [ 1 1 -1  1 ]                        [  1  0 -3  2 ]
    [ 1 0  1  0 ],                       [  1 -1 -1  1 ].

Verification — the Jordan decomposition of the initial matrix:

B^{-1} A B = J = [ 1 1 |  0 0 ]
                 [ 0 1 |  0 0 ]
                 [-----|------]
                 [ 0 0 | -1 1 ]
                 [ 0 0 |  0 -1 ],

i.e. A = B J B^{-1}: one Jordan cell of order 2 for λ_1 = 1 and one Jordan cell of order 2 for λ_2 = −1.
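A numerical check of this example (A, B, J restated from above; note that A is the companion matrix of λ^4 − 2λ^2 + 1, which is why the characteristic polynomial factors so neatly):

```python
import numpy as np

A = np.array([[0, 2, 0, -1],
              [1, 0, 0,  0],
              [0, 1, 0,  0],
              [0, 0, 1,  0]])
B = np.array([[1, 3, -1,  3],
              [1, 2,  1, -2],
              [1, 1, -1,  1],
              [1, 0,  1,  0]])
J = np.array([[1, 1,  0,  0],
              [0, 1,  0,  0],
              [0, 0, -1,  1],
              [0, 0,  0, -1]])

# characteristic polynomial: lambda^4 - 2 lambda^2 + 1
assert np.allclose(np.poly(A), [1, 0, -2, 0, 1])
assert np.allclose(B @ J @ np.linalg.inv(B), A)   # A = B J B^{-1}
print("ok")
```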

2.6.39. Example. Usage of the previous decomposition for solving the attached linear homogeneous
system of ordinary dierential
equations:
8
>
>
x_ 1 = 2x2 x4
>
>
>
>
>
< x_ 2 = x1
Consider the system
.
>
>
x
_
=
x
>
3
2
>
>
>
>
: x_ 4 = x3
Use the
A = BJB 1 , where:
2 Jordan decomposition
3
6 0 2 0
6
6 1 0 0
6
A=6
6 0 1 0
6
4
0 0 1
2
1
6 1 3
6
6 1 2 1
6
B=6
6 1 1
1
6
4
1 0 1

1 7
7
0 7
7
7,
0 7
7
5
0
3
3 7
7
2 7
7
7, B
1 7
7
5
0

6
6
6
6
=6
6
6
4

1
4
1
4
1
4
1

0
1
4
0
1

3
4
1
4
3
4
1

1
2
1
4
1
2
1

3
7
7
7
7
7
7
7
5

122

2. LINEAR TRANSFORMATIONS

0 7
6 1 1 0
7
6
7
6 0 1 0
0
7
6
J =6
7
6 0 0
1 1 7
7
6
5
4
0 0 0
1
to solve the system.
2
3 2
32
3
1 7 6 x1 7
6 x_ 1 7 6 0 2 0
6
7 6
76
7
6 x_ 2 7 6 1 0 0 0 7 6 x2 7
6
7 6
76
7
The matrix form: 6
7=6
76
7 ) use the Jordan decomposition:
6 x_ 7 6 0 1 0 0 7 6 x 7
6 3 7 6
76 3 7
4
5 4
54
5
x_ 4
0 0 1 0
x4
3
2
3 2
32
32 1
3
1 32
0
1 3 76 1 1 0
0 76 4
6 x_ 1 7 6 1 3
4
2 7 6 x1 7
6
7
6
7 6
76
76 1
1
1
1 7
7 6 x2 7
Multiply by C⁻¹ to the left. Change of variables: [y] = C⁻¹[x], i.e. [x] = C[y], where the columns of C are the Jordan basis:

C = [[1, 3, −1, 3], [1, 2, 1, −2], [1, 1, −1, 1], [1, 0, 1, 0]]

⇒ [y]′ = C⁻¹[x]′, so the initial system [x]′ = A[x] becomes:

(y₁′, y₂′, y₃′, y₄′)ᵀ = (C⁻¹AC)(y₁, y₂, y₃, y₄)ᵀ = [[1, 1, 0, 0], [0, 1, 0, 0], [0, 0, −1, 1], [0, 0, 0, −1]] · (y₁, y₂, y₃, y₄)ᵀ = (y₁ + y₂, y₂, y₄ − y₃, −y₄)ᵀ

⇒ { y₁′ = y₁ + y₂;  y₂′ = y₂;  y₃′ = y₄ − y₃;  y₄′ = −y₄ }

⇒ decoupling over the eigenvalues: (y₁′, y₂′) = (y₁ + y₂, y₂) and (y₃′, y₄′) = (y₄ − y₃, −y₄).

Solve the systems:
(y₁′, y₂′) = (y₁ + y₂, y₂) ⇒ y₂ = k₂eᵗ, y₁′ = y₁ + y₂ = y₁ + k₂eᵗ | · e⁻ᵗ ⇒ y₁′e⁻ᵗ − y₁e⁻ᵗ = k₂ ⇒ (y₁e⁻ᵗ)′ = k₂ ⇒ y₁e⁻ᵗ = k₂t + k₁ ⇒ y₁ = (k₂t + k₁)eᵗ;
(y₃′, y₄′) = (y₄ − y₃, −y₄) ⇒ y₄ = k₄e⁻ᵗ ⇒ y₃′ = −y₃ + k₄e⁻ᵗ | · eᵗ ⇒ y₃′eᵗ + y₃eᵗ = k₄ ⇒ (y₃eᵗ)′ = k₄ ⇒ y₃eᵗ = k₄t + k₃ ⇒ y₃ = (k₄t + k₃)e⁻ᵗ.

Obtain the matrix form of the solution:

(y₁, y₂, y₃, y₄)ᵀ = ((k₂t + k₁)eᵗ, k₂eᵗ, (k₄t + k₃)e⁻ᵗ, k₄e⁻ᵗ)ᵀ = [[eᵗ, teᵗ, 0, 0], [0, eᵗ, 0, 0], [0, 0, e⁻ᵗ, te⁻ᵗ], [0, 0, 0, e⁻ᵗ]] · (k₁, k₂, k₃, k₄)ᵀ.

Because [x] = C[y], we get the solution:

(x₁, x₂, x₃, x₄)ᵀ = C · [[eᵗ, teᵗ, 0, 0], [0, eᵗ, 0, 0], [0, 0, e⁻ᵗ, te⁻ᵗ], [0, 0, 0, e⁻ᵗ]] · (k₁, k₂, k₃, k₄)ᵀ =

= ( k₁eᵗ + k₂(3eᵗ + teᵗ) − k₃e⁻ᵗ + k₄(3e⁻ᵗ − te⁻ᵗ),
    k₁eᵗ + k₂(2eᵗ + teᵗ) + k₃e⁻ᵗ − k₄(2e⁻ᵗ − te⁻ᵗ),
    k₁eᵗ + k₂(eᵗ + teᵗ) − k₃e⁻ᵗ + k₄(e⁻ᵗ − te⁻ᵗ),
    k₁eᵗ + k₃e⁻ᵗ + tk₂eᵗ + tk₄e⁻ᵗ )ᵀ.
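Both decoupled 2×2 blocks are solved by the same computation, so the closed form can be spot-checked against the matrix exponential y(t) = e^{tJ} y(0). A minimal numpy sketch (the constants k1, k2 and the time t are arbitrary test values chosen here, not data from the example):

```python
import numpy as np

def expm_series(M, terms=60):
    """Matrix exponential via its truncated power series (adequate for small t*M)."""
    result = np.eye(M.shape[0])
    term = np.eye(M.shape[0])
    for n in range(1, terms):
        term = term @ M / n          # term is now M^n / n!
        result = result + term
    return result

J = np.array([[1.0, 1.0], [0.0, 1.0]])   # Jordan block for the eigenvalue 1
k1, k2 = 0.7, -1.3                       # arbitrary integration constants
t = 0.9

# y(t) = e^{tJ} y(0), with y(0) = (k1, k2)
y_numeric = expm_series(t * J) @ np.array([k1, k2])

# closed form obtained above: y1 = (k2*t + k1) e^t, y2 = k2 e^t
y_closed = np.array([(k2 * t + k1) * np.exp(t), k2 * np.exp(t)])
ok = np.allclose(y_numeric, y_closed)
```

The same check works for the block with eigenvalue −1 after replacing J accordingly.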

2. LINEAR TRANSFORMATIONS

2.6.40. Example (Two eigenvalues, one with multiplicity 2). Consider the linear operator U(·) given in the standard basis by the matrix

A = [[1, −1, 1], [−1, 0, 1], [−2, −3, 4]] ∈ M₃ₓ₃(R).

Find the Jordan canonical form and a Jordan basis.

The eigenvalues are the solutions of the characteristic equation: det(A − λI₃) = 0 ⇒ (λ − 1)(λ − 2)² = 0.

For λ₁ = 1, the eigenset is V₁ = {x ∈ R³; (A − I₃)x = 0} = {α(0, 1, 1); α ∈ R}.
For λ₂ = 2, the algebraic dimension of the eigenvalue is 2 while the geometric dimension is 1, so that the linear operator is not diagonalizable.

Consider the linear operator N₂(·) with the matrix in the standard basis B₂ = A − 2I₃ = [[−1, −1, 1], [−1, −2, 1], [−2, −3, 2]]. The kernel of N₂(·) is ker N₂(·) = V₂ = {α(1, 0, 1); α ∈ R}.

The linear operator N₂²(·) has the matrix B₂² = [[0, 0, 0], [1, 2, −1], [1, 2, −1]] and the kernel ker N₂²(·) = {α(1, 0, 1) + β(0, 1, 2); α, β ∈ R}.

The linear operator N₂³(·) has the matrix B₂³ = B₂²B₂ = [[0, 0, 0], [−1, −2, 1], [−1, −2, 1]] and its kernel is ker N₂³(·) = ker N₂²(·), so that the chain of kernels

{0} ⊊ ker N₂(·) ⊊ ker N₂²(·) = ker N₂³(·)

stops (stabilizes) at the exponent 2.

Choose a complement Q₂² of ker N₂(·) with respect to ker N₂²(·) = ker N₂(·) ⊕ Q₂² and a vector of a basis over Q₂², for example v′ = (0, 1, 2).

Then N₂(v′) = [[−1, −1, 1], [−1, −2, 1], [−2, −3, 2]] · (0, 1, 2)ᵀ = (1, 0, 1)ᵀ ∈ ker N₂(·).

The Jordan basis is {(0, 1, 1), (1, 0, 1), (0, 1, 2)}; the change-of-basis matrices are

C = [[0, 1, 0], [1, 0, 1], [1, 1, 2]] and C⁻¹ = [[1, 2, −1], [1, 0, 0], [−1, −1, 1]].

The Jordan canonical form is

C⁻¹AC = [[1, 2, −1], [1, 0, 0], [−1, −1, 1]] · [[1, −1, 1], [−1, 0, 1], [−2, −3, 4]] · [[0, 1, 0], [1, 0, 1], [1, 1, 2]] = [[1, 0, 0], [0, 2, 1], [0, 0, 2]].

In the Jordan basis, the matrix is a Jordan matrix with two blocks:

[ 1 | 0 0 ]
[ 0 | 2 1 ]
[ 0 | 0 2 ]

Remarks:
• the linear operator N₂(·) has the matrix [[−1, −1, 1], [−1, −2, 1], [−2, −3, 2]];
• the linear operator N₂(·) is not nilpotent;
• the kernel ker N₂²(·) is N₂(·)-invariant and has a basis {(1, 0, 1), (0, 1, 2)}, because:
  B₂ · (1, 0, 1)ᵀ = (0, 0, 0)ᵀ = 0 · (1, 0, 1) + 0 · (0, 1, 2) and
  B₂ · (0, 1, 2)ᵀ = (1, 0, 1)ᵀ = 1 · (1, 0, 1) + 0 · (0, 1, 2).
• The restriction of the linear operator N₂(·) over ker N₂²(·), N₂ʳ(·): ker N₂²(·) → ker N₂²(·), has the matrix [[0, 1], [0, 0]] for the basis {(1, 0, 1), (0, 1, 2)}.
• The linear operator N₂ʳ(·) is nilpotent, because [[0, 1], [0, 0]]² = [[0, 0], [0, 0]].
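The similarity computation C⁻¹AC can be confirmed numerically. The matrices A and C below are this text's reconstruction of the example's data (an assumption made here, consistent with the kernels and the Jordan basis used in the example), so the check validates only their internal consistency:

```python
import numpy as np

A = np.array([[ 1.0, -1.0, 1.0],
              [-1.0,  0.0, 1.0],
              [-2.0, -3.0, 4.0]])
C = np.array([[0.0, 1.0, 0.0],   # columns: (0,1,1), (1,0,1), (0,1,2)
              [1.0, 0.0, 1.0],
              [1.0, 1.0, 2.0]])

J = np.linalg.inv(C) @ A @ C     # expected: blocks [1] and [[2,1],[0,2]]
expected = np.array([[1.0, 0.0, 0.0],
                     [0.0, 2.0, 1.0],
                     [0.0, 0.0, 2.0]])
ok = np.allclose(J, expected)
```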

## 2.7. Bilinear and Quadratic Forms

2.7.1. Definition. Consider (V₁, K) and (V₂, K) two vector spaces over the same field of scalars. A function B(·,·): V₁ × V₂ → K which is linear in each variable (i.e. B(αx + βy, z) = αB(x, z) + βB(y, z) and B(x, αy + βz) = αB(x, y) + βB(x, z), for all scalars α, β and all vectors x, y, z) is called a bilinear functional. When V₁ = V₂ = V and B(x, y) = B(y, x) for all x, y ∈ V, the bilinear functional is called symmetric.

2.7.2. Definition. Consider the bases Eₖ = {e₁ᵏ, …, e_{nₖ}ᵏ} for each space (Vₖ, K), k = 1, 2, and B(·,·): V₁ × V₂ → K a bilinear functional. The matrix

A_B(E₁, E₂) = (B(eᵢ¹, eⱼ²))_{i=1,n₁; j=1,n₂}

is called the matrix of the bilinear functional corresponding to the bases E₁, E₂.

2.7.3. Proposition. Consider the bases Eₖ = {e₁ᵏ, …, e_{nₖ}ᵏ}. A bilinear functional B(·,·): V₁ × V₂ → K is uniquely and completely determined by the associated matrix A_B(E₁, E₂).
Moreover, we have the matrix representation B(x, y) = [x]ᵀ_{E₁} A_B(E₁, E₂) [y]_{E₂}.

Proof. The vectors x ∈ V₁ and y ∈ V₂ are uniquely represented in the fixed bases as [x]_{E₁} = (α₁, …, α_{n₁})ᵀ and [y]_{E₂} = (β₁, …, β_{n₂})ᵀ, and from the linearity in each variable we get:

B(x, y) = B( Σ_{i=1}^{n₁} αᵢeᵢ¹, Σ_{j=1}^{n₂} βⱼeⱼ² ) = Σ_{i=1}^{n₁} Σ_{j=1}^{n₂} αᵢβⱼ B(eᵢ¹, eⱼ²) = [x]ᵀ_{E₁} A_B(E₁, E₂) [y]_{E₂}.

The alternative representations are obviously dependent on the chosen bases. The unicity of the representation comes from the unicity of the coordinates of a vector in a basis. □
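The identity just proved — the double sum equals the "sandwich" [x]ᵀA[y] — is easy to sanity-check numerically; the data below are random stand-ins for the coordinates and for the matrix of some bilinear functional:

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.normal(size=(3, 4))          # matrix of B in the bases E1 (dim 3), E2 (dim 4)
x = rng.normal(size=3)               # coordinates [x]_{E1}
y = rng.normal(size=4)               # coordinates [y]_{E2}

# B(x, y) from the double sum: sum_i sum_j alpha_i beta_j B(e_i, e_j)
B_sum = sum(x[i] * y[j] * A[i, j] for i in range(3) for j in range(4))
# ... and from the matrix representation [x]^T A [y]
B_mat = x @ A @ y
ok = np.isclose(B_sum, B_mat)
```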


2.7.4. Remark. The bilinear functional may be viewed as a composition between the linear transformation U(·): V₂ → K^{n₁} defined by U(y) = A_B(E₁, E₂)[y]_{E₂} and the linear functional f_a(·): V₁ → K with fixed a ∈ K^{n₁} (f_a(·) ∈ (V₁)′) defined by f_a(x) = [x]ᵀ_{E₁} a: we have B(x, y) = f_{U(y)}(x).
2.7.5. Proposition. Consider the vector spaces (Vₖ, K), k = 1, 2, and a bilinear functional B(·,·): V₁ × V₂ → K. For the bases Eₖ (old basis) and Fₖ (new basis) over Vₖ, with k = 1, 2, we have

A_B(F₁, F₂) = (M(F₁))ᵀ_{E₁} A_B(E₁, E₂) (M(F₂))_{E₂}.

The matrix (M(Fₖ))_{Eₖ} denotes the change-of-basis matrix (its columns are the representations in the old basis of the vectors of the new basis), k = 1, 2.

Proof. The connection between the coordinates of a vector represented in the old basis and in the new basis is given by
[x]_{E₁} = (M(F₁))_{E₁}[x]_{F₁} for x ∈ V₁ and
[y]_{E₂} = (M(F₂))_{E₂}[y]_{F₂} for y ∈ V₂.
Then B(x, y) = [x]ᵀ_{E₁} A_B(E₁, E₂) [y]_{E₂} =
= ((M(F₁))_{E₁}[x]_{F₁})ᵀ A_B(E₁, E₂) (M(F₂))_{E₂}[y]_{F₂} =
= [x]ᵀ_{F₁} (M(F₁))ᵀ_{E₁} A_B(E₁, E₂) (M(F₂))_{E₂} [y]_{F₂}.
By using the unicity of the representation B(x, y) = [x]ᵀ_{F₁} A_B(F₁, F₂) [y]_{F₂} we obtain the required relation. □
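The change-of-basis rule can be illustrated numerically: the value of B must not change when both arguments and the matrix are rewritten in the new bases, and the rank of the matrix is preserved. A small numpy sketch with random data:

```python
import numpy as np

rng = np.random.default_rng(1)
A_E = rng.normal(size=(3, 3))        # matrix of B in the old bases
M = rng.normal(size=(3, 3))          # change-of-basis matrix (invertible with probability 1)
A_F = M.T @ A_E @ M                  # matrix of B in the new bases

xF = rng.normal(size=3)
yF = rng.normal(size=3)
# the same value of B computed in either pair of bases:
old = (M @ xF) @ A_E @ (M @ yF)      # [x]_E = M [x]_F, [y]_E = M [y]_F
new = xF @ A_F @ yF
ok = np.isclose(old, new)
rank_invariant = np.linalg.matrix_rank(A_E) == np.linalg.matrix_rank(A_F)
```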
2.7.6. Remark. The rank of the matrix representing the bilinear functional does not depend on the bases (because changing the representation matrix is done by using only invertible matrices).

2.7.7. Remark. The matrix A_B(E, E) representing a symmetric bilinear functional B(·,·): V × V → K in a certain basis E is a symmetric matrix [meaning A_B(E, E) = (A_B(E, E))ᵗ — the matrix equals its transpose].
The converse statement is also true: if the representing matrix of a bilinear functional is symmetric, then the bilinear functional is also symmetric.

2.7.8. Remark. We may attach to any bilinear functional B(·,·): V × V → K a symmetric bilinear functional B_s(·,·): V × V → K by B_s(x, y) = ½[B(x, y) + B(y, x)].

2.7.9. Definition. For a symmetric bilinear functional, the following set is called the kernel of the functional:
ker B(·,·) = {x ∈ V; B(x, y) = 0, ∀y ∈ V}.
When ker B(·,·) = {0} the bilinear functional is called nondegenerate.

2.7.10. Remark. The kernel of a symmetric bilinear functional is a vector subspace.

2.7.11. Proposition. A symmetric bilinear functional is nondegenerate if and only if its attached matrix is invertible.

Proof. Consider two arbitrary bases E₁, E₂ and a symmetric bilinear functional B(·,·) over V. Then B(x, y) = [x]ᵀ_{E₁} A_B(E₁, E₂) [y]_{E₂}.
The matrix A_B(E₁, E₂) is invertible if and only if the system [x]ᵀ_{E₁} A_B(E₁, E₂) = 0ᵀ has only the null solution (the system is a Cramer system with only the null solution).
If x₀ ∈ V is a solution of the system [x₀]ᵀ_{E₁} A_B(E₁, E₂) = 0ᵀ, then B(x₀, y) = 0, ∀y ∈ V, so that x₀ ∈ ker B(·,·).
Conversely, if x₀ ∈ ker B(·,·), by using particular values for y we get that x₀ is a solution of the system [x]ᵀ_{E₁} A_B(E₁, E₂) = 0ᵀ.
This means that ker B(·,·) = {x ∈ V; [x]ᵀ_{E₁} A_B(E₁, E₂) = 0ᵀ}, which concludes the proof. □

2.7.12. Definition. A function Q(·): V → K, defined over the vector space (V, K), for which there is a symmetric bilinear functional B(·,·): V × V → K such that Q(x) = B(x, x) for all x ∈ V, is called a quadratic form.
When given Q(·), the function B(x, y) = ½[Q(x + y) − Q(x) − Q(y)], with B(·,·): V × V → K, is a symmetric bilinear functional.

2.7.13. Remark. The association quadratic form ↔ symmetric bilinear functional given above is one-to-one.
The matrix form for quadratic forms is similar: Q(x) = [x]ᵀ_E A_Q(E) [x]_E = Σ_{i,j=1}^n B(eᵢ, eⱼ) xᵢxⱼ, where A_Q(E) = A_B(E, E), which is symmetric.

2.7.14. Definition. Consider a quadratic form Q(·): V → R.
(1) Q(·) is called positive definite when Q(x) > 0, ∀x ∈ V, x ≠ 0.
(2) Q(·) is called negative definite when Q(x) < 0, ∀x ∈ V, x ≠ 0.
(3) Q(·) is called positive semidefinite when Q(x) ≥ 0, ∀x ∈ V.
(4) Q(·) is called negative semidefinite when Q(x) ≤ 0, ∀x ∈ V.
(5) Q(·) is called indefinite when ∃x, y ∈ V, Q(x) > 0 and Q(y) < 0.
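For a concrete symmetric matrix A_Q, the classes above can be decided from the signs of its eigenvalues (the spectral criterion, which also underlies the eigenvalue-based procedure presented later in this section). A hedged sketch:

```python
import numpy as np

def definiteness(A, tol=1e-10):
    """Classify the quadratic form x^T A x (A symmetric) by eigenvalue signs."""
    w = np.linalg.eigvalsh(A)
    if np.all(w > tol):
        return "positive definite"
    if np.all(w < -tol):
        return "negative definite"
    if np.all(w >= -tol):
        return "positive semidefinite"
    if np.all(w <= tol):
        return "negative semidefinite"
    return "indefinite"

saddle = definiteness(np.diag([1.0, -1.0]))          # Q = x^2 - y^2
cylinder = definiteness(np.diag([1.0, 0.0]))         # Q = x^2
ellipsoid = definiteness(np.diag([1.0, 1/4, 1/9]))   # Q = x^2 + y^2/4 + z^2/9
```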

50
40
30

20
10

-4

-2 0
0 0
2

-4

-2
2

x4

-4

-2

0
4 20 0 -2 -42
-10

-20

z
-30
-40
-50

(x; y) 7!

x2

y2

20
10

-4z -2 0
0 02
2
y -10 x

-2

-4

-20

20

10

-4

-2 0
0 0
2
2

-2

-4

4x

## parabolic cylinder (x; y) 7! x2

y2

129

130

2. LINEAR TRANSFORMATIONS

-4

-4
-2

-2
0
0 0
2
2
-10
x4
z4 y
-20

(x; y) 7!

x2

10
-4

-5

z 00 0
2
-10
4y

ellipsoid

-2

x5

x2 y 2 z 2
+
+
=9
12 22 32

## The previous shape is given by the 3dimensional quadratic form

all its coe cients are positive
x2 y 2 z 2
[this means that 2 + 2 + 2
1
2
3

x2 y 2 z 2
+ +
which is positive, since
12 22 32

10
-10

-10

z
y

10

0
0 0
-10

10

x2 y 2
+
12 22

z2
=9
32

## 2.7. BILINEAR AND QUADRATIC FORMS

131

10
-10

-10
0
0 0

z
y

-10

10

10

elliptic hyperboloid

x2
12

y2
22

z2
=9
32

When we manage to find a certain basis F over V for which the matrix A_Q(F) representing the quadratic form Q(·) is a diagonal matrix, then we say that the quadratic form is reduced to its canonical form.
The previous characteristics are obtained from the canonical form, by considering the signs of the elements on the diagonal of the matrix A_Q(F).
One problem is the existence of a basis under which the matrix is diagonal; such a basis always exists. Another problem is finding such a basis effectively.
There are several methods, of which we will present three.
The first one is called "the Gauss procedure" and manipulates the algebraic form Q(x) = Σ_{i,j=1}^n a_{ij} ξᵢξⱼ, while the second one manipulates the matrix form Q(x) = [x]ᵀ_E A_Q(E) [x]_E and is called "the Jacobi procedure". The third one is called "the eigenvalues/eigenvectors procedure".
2.7.15. Theorem. (The Gauss procedure for finding a canonical basis) Consider a quadratic form Q(·): V → R,

Q(x) = Σ_{i,j=1}^n a_{ij} ξᵢξⱼ = [x]ᵀ_E A_Q(E) [x]_E,

where [x]_E = (ξ₁, …, ξₙ)ᵀ, dim V = n and A_Q(E) = (a_{ij})_{i,j=1,n}.
Then there is a basis F over V such that the representation matrix of the quadratic form is diagonal.

Proof. Proof by induction with respect to the dimension of the vector space V.
When dim V = 1 the quadratic form is in canonical form, with F = E.
Consider the statement true for dim V = k − 1 and take dim V = k. Two cases are possible:
(1) ∃i ∈ {1, …, k}, a_{ii} ≠ 0 [there is a square term];
(2) ∀i ∈ {1, …, k}, a_{ii} = 0 [no square term].
For the second case, there are two subcases: either the quadratic form is null or not, in which case ∃i₀, j₀ ∈ {1, …, k}, i₀ ≠ j₀, such that a_{i₀j₀} ≠ 0.
For the first subcase the matrix A_Q(E) is null, the quadratic form is in canonical form, while F = E.
The second subcase reduces to the first case by using the coordinate transformation:

ξ_{i₀} = η_{i₀} + η_{j₀},  ξ_{j₀} = η_{i₀} − η_{j₀},  ξᵢ = ηᵢ, ∀i ∈ {1, …, k} \ {i₀, j₀},

which in matrix form is [x]_E = (M(E₁))_E [x]_{E₁}.
The matrix (M(E₁))_E = (δ_{ij}) has the following entries:

δ_{ij} = 1, for i = j, j ≠ j₀ [value 1 on the main diagonal, except the place (j₀, j₀)];
δ_{j₀j₀} = −1 [value −1 at the place (j₀, j₀)];
δ_{i₀j₀} = δ_{j₀i₀} = 1 [value 1 on the places (i₀, j₀) and (j₀, i₀)];
δ_{ij} = 0 on the other places.

Since the determinant of the matrix is −2 ≠ 0, it corresponds to a change of basis. The new basis, called E₁, where the coordinates of x are [x]_{E₁} = (η₁, …, ηₖ)ᵀ, is defined by the matrix (M(E₁))_E.
Apply the transformation to get the coefficients in terms of the η's; the whole point of the transformation is to produce a square term for η_{i₀}:

Q(x) = Σ_{i,j=1}^k a_{ij} ξᵢξⱼ = Σ_{i,j=1}^k a′_{ij} ηᵢηⱼ, with a′_{i₀i₀} = 2a_{i₀j₀} ≠ 0.

By using the above transformation the second case is reduced to the first case, with the basis E₁ replacing E (there are some other ways of achieving this goal).
For the first case denote by i₀ one of the indexes for which the coefficient a_{i₀i₀} ≠ 0.
Then Q(x) = Σ_{i,j=1}^k a_{ij} ξᵢξⱼ =

[separate the ξ²_{i₀} term, then form a perfect square, such that ξ²_{i₀} is the first term, the first parenthesis is the middle term, while the remainder doesn't contain ξ_{i₀}]

= a_{i₀i₀} ξ²_{i₀} + ξ_{i₀} ( Σ_{j=1, j≠i₀}^k a_{ji₀} ξⱼ + Σ_{j=1, j≠i₀}^k a_{i₀j} ξⱼ ) + Σ_{i,j=1; i,j≠i₀}^k a_{ij} ξᵢξⱼ =

= a_{i₀i₀} ( ξ_{i₀} + Σ_{j=1, j≠i₀}^k (a_{i₀j}/a_{i₀i₀}) ξⱼ )² + Σ_{i,j=1; i,j≠i₀}^k ( a_{ij} − a_{i₀i}a_{i₀j}/a_{i₀i₀} ) ξᵢξⱼ.

Consider the coordinates transformation:

η_{i₀} = ξ_{i₀} + Σ_{j=1, j≠i₀}^k (a_{i₀j}/a_{i₀i₀}) ξⱼ,  ηᵢ = ξᵢ, for i ≠ i₀

(the determinant of the matrix of the transformation is nonzero, so the transformation is a change of basis).
Then the quadratic form is

Q(x) = a_{i₀i₀} η²_{i₀} + Σ_{i,j=1; i,j≠i₀}^k a′_{ij} ηᵢηⱼ,

which means that in the new basis the matrix attached to the quadratic form has the value a_{i₀i₀} at the place (i₀, i₀) and the value 0 at all the other places on the line and the column i₀.
The corresponding change of basis [x]_E = (M(E₁))_E [x]_{E₁} is defined by the matrix (M(E₁))_E which has the value 1 on the main diagonal, the entries −a_{i₀1}/a_{i₀i₀}, −a_{i₀2}/a_{i₀i₀}, …, −a_{i₀k}/a_{i₀i₀} on the row i₀ (outside the diagonal) and the value 0 on the other places.
The space V is decomposed in a direct sum of two subspaces, the first corresponding to the coordinate η_{i₀} (1-dimensional) and the second corresponding to the other coordinates ((k − 1)-dimensional).
According to the induction hypothesis, since the subspace span((eᵢ¹)_{i=1,k; i≠i₀}) has dimension k − 1, there is a basis (fᵢ)_{i=1,k; i≠i₀} such that Σ_{i,j=1; i,j≠i₀}^k a′_{ij} ηᵢηⱼ = Σ_{i=1, i≠i₀}^k a″_{ii} ηᵢ².
Choose f_{i₀} = e¹_{i₀} and F = (fᵢ)_{i=1,k}.
It follows that

Q(x) = Σ_{i=1}^k a″_{ii} ηᵢ² = [x]ᵀ_F A_Q(F) [x]_F,

with [x]_F = (η₁, …, ηₖ)ᵀ, while A_Q(F) is a diagonal matrix. □
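The induction step (one completed square per nonzero pivot) can be carried out mechanically. The sketch below performs the symmetric (congruence) reduction assuming every pivot met along the way is nonzero (the extra substitution for the no-square-term case is omitted) and checks that MᵀAM comes out diagonal:

```python
import numpy as np

def gauss_reduce(A):
    """Congruence-diagonalize a symmetric matrix with nonzero pivots:
    returns M with M.T @ A @ M diagonal (one completing-the-square step per pivot)."""
    A = A.astype(float).copy()
    n = A.shape[0]
    M = np.eye(n)
    for i in range(n):
        if abs(A[i, i]) < 1e-12:
            raise ValueError("zero pivot; the no-square-term substitution is not implemented here")
        S = np.eye(n)
        S[i, i + 1:] = -A[i, i + 1:] / A[i, i]   # xi_i = eta_i - sum_j (a_ij/a_ii) eta_j
        A = S.T @ A @ S
        M = M @ S
    return M

A = np.array([[1.0, 2.0, 0.0],
              [2.0, 6.0, 0.0],
              [0.0, 0.0, 3.0]])   # Q = x1^2 + 6 x2^2 + 3 x3^2 + 4 x1 x2
M = gauss_reduce(A)
D = M.T @ A @ M
off = D - np.diag(np.diag(D))     # should be (numerically) zero
```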

2.7.16. Example. Discuss (with respect to the parameter α) the nature of the quadratic form Q(x) = ξ₁² + 6ξ₂² + 3ξ₃² + 4ξ₁ξ₂ + 6αξ₁ξ₃.

2.7.17. Solution. Use the Gauss method to obtain the canonical form:

Q(x) = ξ₁² + 2ξ₁(2ξ₂ + 3αξ₃) + 6ξ₂² + 3ξ₃² =
= (ξ₁ + 2ξ₂ + 3αξ₃)² − (2ξ₂ + 3αξ₃)² + 6ξ₂² + 3ξ₃² =
= (ξ₁ + 2ξ₂ + 3αξ₃)² + 2ξ₂² − 12αξ₂ξ₃ + (3 − 9α²)ξ₃² =
= (ξ₁ + 2ξ₂ + 3αξ₃)² + 2(ξ₂ − 3αξ₃)² − 18α²ξ₃² + (3 − 9α²)ξ₃² =
= (ξ₁ + 2ξ₂ + 3αξ₃)² + 2(ξ₂ − 3αξ₃)² + 3(1 − 9α²)ξ₃² = η₁² + 2η₂² + 3(1 − 9α²)η₃².

The sign of the coefficient 3(1 − 9α²) decides the nature of Q(·):

α          | (−∞, −1/3) | −1/3 | (−1/3, 1/3) | 1/3 | (1/3, +∞)
3(1 − 9α²) |   − − −    |  0   |   + + +     |  0  |   − − −
Q(x)       | indefinite | positive semidefinite | positive definite | positive semidefinite | indefinite

The change of basis is given by:

η₁ = ξ₁ + 2ξ₂ + 3αξ₃, η₂ = ξ₂ − 3αξ₃, η₃ = ξ₃ ⟺ ξ₁ = η₁ − 2η₂ − 9αη₃, ξ₂ = η₂ + 3αη₃, ξ₃ = η₃,

or [x]_E = (M(F))_E [x]_F, where [x]_E = (ξ₁, ξ₂, ξ₃)ᵀ, [x]_F = (η₁, η₂, η₃)ᵀ and

(M(F))_E = [[1, −2, −9α], [0, 1, 3α], [0, 0, 1]],

because the determinant of (M(F))_E is nonzero. The change of basis is valid for all α.
The basis E (the initial basis) is E = {e₁, e₂, e₃} while the basis F (the new basis) is F = {f₁, f₂, f₃}, where f₁ = e₁, f₂ = −2e₁ + e₂, f₃ = −9αe₁ + 3αe₂ + e₃.
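The canonical form is an identity in ξ and α, so it can be spot-checked on random vectors for several values of the parameter (including α = ±1/3); the values of α and the random vectors below are arbitrary test data:

```python
import numpy as np

def Q(x, a):
    """Q(x) = x1^2 + 6 x2^2 + 3 x3^2 + 4 x1 x2 + 6 a x1 x3 (the example's form)."""
    x1, x2, x3 = x
    return x1**2 + 6*x2**2 + 3*x3**2 + 4*x1*x2 + 6*a*x1*x3

def Q_canonical(x, a):
    """Canonical form produced by the Gauss procedure above."""
    x1, x2, x3 = x
    e1 = x1 + 2*x2 + 3*a*x3     # eta_1
    e2 = x2 - 3*a*x3            # eta_2
    return e1**2 + 2*e2**2 + 3*(1 - 9*a**2)*x3**2

rng = np.random.default_rng(2)
ok = all(np.isclose(Q(x, a), Q_canonical(x, a))
         for a in (-1.0, -1/3, 0.0, 0.2, 1/3, 2.0)
         for x in rng.normal(size=(5, 3)))
```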

2.7.18. Theorem. (The Jacobi procedure for reduction to the canonical form) Let E = {e₁, …, eₙ} be the initial basis in V, [x]_E = (x₁, …, xₙ)ᵀ, and let Q(·): V → R,

Q(x) = Σ_{i,j=1}^n a_{ij} xᵢxⱼ = [x]ᵀ_E A_E [x]_E

be a quadratic functional, where A_E = (a_{ij})_{i,j=1,n} is the matrix of the quadratic functional. If the determinants Δ₀ = 1 and Δₖ = det (a_{ij})_{i,j=1,k}, for k = 1, n, are all nonzero, then there is a basis F of V in which

Q(x) = Σ_{k=1}^n (Δ_{k−1}/Δₖ) ηₖ²,

where [x]_F = (η₁, …, ηₙ)ᵀ.

Proof. Let B(·,·) be the symmetric bilinear functional attached to the quadratic functional Q(·), B(x, y) = ½[Q(x + y) − Q(x) − Q(y)] = [x]ᵀ_E A_E [y]_E.
We have: a_{ij} = B(eᵢ, eⱼ) = B(eⱼ, eᵢ).
We look for a basis F = (f₁, …, fₙ) such that for each k ∈ {1, …, n}, fₖ = Σ_{j=1}^k β_{jk} eⱼ. Expanded, the required form is (the attached matrix is triangular):

f₁ = β₁₁e₁;  f₂ = β₁₂e₁ + β₂₂e₂;  f₃ = β₁₃e₁ + β₂₃e₂ + β₃₃e₃;  …;  fₙ = Σ_{j=1}^n β_{jn} eⱼ.

To obtain the canonical form, fix the conditions:

B(eᵢ, fₖ) = 0, for i = 1, …, k − 1, and B(eₖ, fₖ) = 1.

Under these conditions Q(x) = B(x, x) = [x]ᵀ_F A_F [x]_F = Σ_{k=1}^n β_{kk} ηₖ², because

A_F = (M(F))ᵀ_E A_E (M(F))_E = diag(β₁₁, β₂₂, …, βₙₙ):

for l < k we have B(f_l, fₖ) = Σ_{j=1}^l β_{jl} B(eⱼ, fₖ) = 0, while B(fₖ, fₖ) = Σ_{j=1}^k β_{jk} B(eⱼ, fₖ) = β_{kk} B(eₖ, fₖ) = β_{kk}.
For k = 1: B(e₁, f₁) = 1 ⟺ β₁₁ B(e₁, e₁) = 1 ⟹ β₁₁ = 1/a₁₁ = Δ₀/Δ₁ [we used B(e₁, e₁) = a₁₁ = Δ₁ ≠ 0].
For k = 2: B(e₁, f₂) = 0 and B(e₂, f₂) = 1 ⟺ β₁₂ B(e₁, e₁) + β₂₂ B(e₁, e₂) = 0 and β₁₂ B(e₂, e₁) + β₂₂ B(e₂, e₂) = 1; the determinant of this system is Δ₂ ≠ 0, so it is a Cramer system.
In general, the system of order k is:

B(eᵢ, fₖ) = 0, i = 1, …, k − 1, and B(eₖ, fₖ) = 1 ⟺
⟺ B(eᵢ, Σ_{j=1}^k β_{jk} eⱼ) = 0, i = 1, …, k − 1, and B(eₖ, Σ_{j=1}^k β_{jk} eⱼ) = 1 ⟺
⟺ Σ_{j=1}^k β_{jk} B(eᵢ, eⱼ) = 0, i = 1, …, k − 1, and Σ_{j=1}^k β_{jk} B(eₖ, eⱼ) = 1.

The system has the determinant Δₖ ≠ 0, so the coefficients β_{ik} exist and are unique.
The unknown β_{kk} is obtained from Cramer's formulas as the ratio of two determinants: the denominator is Δₖ, while the numerator is the determinant obtained from Δₖ by replacing the last column with the column of the free terms, i.e. the column (0, …, 0, 1)ᵀ of dimension k. By expanding along this last column, the numerator is Δ_{k−1}, so that β_{kk} = Δ_{k−1}/Δₖ. □
2.7.19. Example. Discuss, with respect to the parameter α, the nature of the quadratic functional Q(x) = x₁² + 6x₂² + 3x₃² + 4x₁x₂ + 6αx₁x₃.

2.7.20. Solution. The matrix of the quadratic functional is A = [[1, 2, 3α], [2, 6, 0], [3α, 0, 3]]. The attached bilinear functional is B(x, y) = ½[Q(x + y) − Q(x) − Q(y)] = xᵗAy [the matrix form] = x₁y₁ + 6x₂y₂ + 3x₃y₃ + 2x₁y₂ + 2x₂y₁ + 3αx₁y₃ + 3αx₃y₁ [the algebraic form]. The initial basis is the standard one. We look for the new basis of the form

f₁ = β₁₁e₁;  f₂ = β₁₂e₁ + β₂₂e₂;  f₃ = β₁₃e₁ + β₂₃e₂ + β₃₃e₃,

such that for each k = 1, 2, 3 the system B(eᵢ, fₖ) = 0, i = 1, …, k − 1; B(eₖ, fₖ) = 1 is satisfied. The quadratic functional is reduced to the canonical form using the Jacobi procedure.
The determinants are Δ₀ = 1, Δ₁ = 1, Δ₂ = det[[1, 2], [2, 6]] = 2, Δ₃ = det A = 6(1 − 9α²). For α = ±1/3 the method cannot be applied. For α ≠ ±1/3:

Q(x) = (Δ₀/Δ₁)η₁² + (Δ₁/Δ₂)η₂² + (Δ₂/Δ₃)η₃² = η₁² + ½η₂² + (1/(3(1 − 9α²)))η₃².

The entries of (M(F))_E = [[β₁₁, β₁₂, β₁₃], [0, β₂₂, β₂₃], [0, 0, β₃₃]] must satisfy the following conditions:
(1) β₁₁ = 1, with the solution β₁₁ = 1.
(2) β₁₂ + 2β₂₂ = 0 and 2β₁₂ + 6β₂₂ = 1, with the solution β₁₂ = −1, β₂₂ = 1/2.
(3) β₁₃ + 2β₂₃ + 3αβ₃₃ = 0, 2β₁₃ + 6β₂₃ = 0 and 3αβ₁₃ + 3β₃₃ = 1, with the solution β₁₃ = −3α/(1 − 9α²), β₂₃ = α/(1 − 9α²), β₃₃ = 1/(3(1 − 9α²)).
We choose f₁ = e₁, f₂ = −e₁ + ½e₂, f₃ = −(3α/(1 − 9α²))e₁ + (α/(1 − 9α²))e₂ + (1/(3(1 − 9α²)))e₃.
We denoted by E = (e₁, e₂, e₃) and F = (f₁, f₂, f₃) the initial basis and the basis corresponding to the canonical form, respectively. The nature of the quadratic form is stated in the following table:

α    | (−∞, −1/3) | −1/3 | (−1/3, 1/3)       | 1/3 | (1/3, +∞)
Q(x) | indefinite |  ?   | positive definite |  ?  | indefinite

The complete study also includes the decision for the values α = ±1/3, which is taken using another method (for example, Gauss).
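For α = 0 the computed basis specializes to f₁ = e₁, f₂ = −e₁ + e₂/2, f₃ = e₃/3, and the Jacobi coefficients Δₖ₋₁/Δₖ can be verified numerically (α = 0 is chosen here only to keep the numbers simple):

```python
import numpy as np

# the example's matrix with alpha = 0
A = np.array([[1.0, 2.0, 0.0],
              [2.0, 6.0, 0.0],
              [0.0, 0.0, 3.0]])

# leading principal minors Delta_0, ..., Delta_3
deltas = [1.0] + [np.linalg.det(A[:k, :k]) for k in (1, 2, 3)]

# triangular change of basis: f1 = e1, f2 = -e1 + e2/2, f3 = e3/3
M = np.array([[1.0, -1.0, 0.0],
              [0.0,  0.5, 0.0],
              [0.0,  0.0, 1.0/3.0]])

D = M.T @ A @ M
jacobi_coeffs = [deltas[k - 1] / deltas[k] for k in (1, 2, 3)]
ok = np.allclose(D, np.diag(jacobi_coeffs))
```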

2.7.21. Theorem. (Sylvester's inertia theorem) The number of strictly positive, strictly negative and null coefficients in the canonical form of a quadratic functional does not depend on the method used for reduction to the canonical form.

Proof. Let

Q(x) = Σ_{i=1}^{p₁} αᵢ⁺ ξᵢ² − Σ_{j=1}^{q₁} βⱼ⁺ ξ²_{p₁+j} = Σ_{i=1}^{p₂} γᵢ⁺ ηᵢ² − Σ_{j=1}^{q₂} δⱼ⁺ η²_{p₂+j}

be two canonical-form writings of the quadratic functional, in the bases F₁ = {f₁¹, …, fₙ¹} and F₂ = {f₁², …, fₙ²}, where all the coefficients αᵢ⁺, βⱼ⁺, γᵢ⁺, δⱼ⁺ are strictly positive: in each writing the first pⱼ coefficients are strictly positive, the following qⱼ coefficients are strictly negative and the last n − (pⱼ + qⱼ) coefficients are null. We imposed these conditions only to ease the writing; they may easily be lifted [by renumbering the bases].
Consider the subspaces V₁ = span(f₁¹, …, f¹_{p₁}) and V₂ = span(f²_{p₂+1}, …, fₙ²), and let v ∈ V₁ ∩ V₂.
Since v ∈ V₁, [v]_{F₁} = (ξ₁, …, ξ_{p₁}, 0, …, 0)ᵀ, so Q(v) = Σ_{i=1}^{p₁} αᵢ⁺ ξᵢ² ≥ 0;
since v ∈ V₂, [v]_{F₂} = (0, …, 0, η_{p₂+1}, …, ηₙ)ᵀ, so Q(v) = −Σ_{j=1}^{q₂} δⱼ⁺ η²_{p₂+j} ≤ 0 (the null coefficients contribute nothing).
From αᵢ⁺ > 0, ∀i = 1, p₁, and δⱼ⁺ > 0, ∀j = 1, q₂, it follows that Q(v) = 0 and all the coordinates of v vanish, so v = 0, i.e. V₁ ∩ V₂ = {0}.
Then dim V₁ + dim V₂ ≤ n, i.e. p₁ + (n − p₂) ≤ n ⟹ p₁ ≤ p₂; analogously it follows that p₂ ≤ p₁, which means that p₁ = p₂. The equality q₁ = q₂ is proved analogously. □
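Sylvester's theorem can be illustrated numerically: any congruence MᵀAM with M invertible preserves the counts of positive, negative and zero diagonal coefficients (the inertia). A sketch with a sample symmetric matrix and a random congruence:

```python
import numpy as np

def signature(d, tol=1e-9):
    """Counts of (positive, negative, zero) values in a list of real numbers."""
    d = np.asarray(d)
    return (int(np.sum(d > tol)), int(np.sum(d < -tol)), int(np.sum(np.abs(d) <= tol)))

A = np.array([[1.0, 2.0, 0.0],
              [2.0, 6.0, 3.0],
              [0.0, 3.0, -1.0]])

# inertia from orthogonal diagonalization (eigenvalue signs)
sig_eig = signature(np.linalg.eigvalsh(A))

# inertia after an arbitrary invertible congruence M^T A M:
# the eigenvalues change, but their sign counts must not
rng = np.random.default_rng(3)
M = rng.normal(size=(3, 3))
sig_cong = signature(np.linalg.eigvalsh(M.T @ A @ M))
```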

2.7.22. Remark. When the real matrix A is symmetric (A = Aᵗ), its eigenvalues are real, so its eigenvectors also have real coordinates.

Proof. Consider an eigenvalue λ ∈ C and an associated eigenvector u (with possibly complex coordinates).
If u = (z₁, …, zₙ)ᵀ ∈ Cⁿ, then ū = (z̄₁, …, z̄ₙ)ᵀ (the vector corresponding to the complex conjugates of the coordinates of u) and ūᵗ = (z̄₁ ⋯ z̄ₙ), while ūᵗu = z₁z̄₁ + ⋯ + zₙz̄ₙ, which is a real positive number.
We have Au = λu.
From Au = λu, by transposing, we get (Au)ᵗ = (λu)ᵗ ⟹ uᵗAᵗ = λuᵗ ⟹ uᵗA = λuᵗ.
From uᵗA = λuᵗ, by complex conjugation, we get ūᵗA = λ̄ūᵗ [the entries of A are real].
We get:
ūᵗAu = (ūᵗA)u = λ̄ūᵗu and ūᵗAu = ūᵗ(Au) = ūᵗ(λu) = λūᵗu,
so that λ̄ūᵗu = λūᵗu.
As ūᵗu is a strictly positive real number, we get λ = λ̄, i.e. λ ∈ R.
As det(A − λI) = 0 is a polynomial equation of degree n with real coefficients, the matrix A has n real eigenvalues (not necessarily distinct).
Because (A − λI)u = 0 is a linear homogeneous system with unknown u and all coefficients real numbers, its solutions are also real, which means that the coordinates of all the eigenvectors are real. □
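A quick numerical illustration on a random symmetric matrix: a generic (complex-capable) eigenvalue routine finds no imaginary parts.

```python
import numpy as np

rng = np.random.default_rng(4)
B = rng.normal(size=(4, 4))
A = (B + B.T) / 2                          # a random real symmetric matrix

w = np.linalg.eigvals(A)                   # generic eigenvalue routine
max_imag = float(np.max(np.abs(w.imag)))   # imaginary parts vanish for symmetric A
```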

CHAPTER 3

## Inner Product Vector Spaces

3.0.23. Definition. Consider a real vector space (V, R). A function ⟨·,·⟩: V × V → R is called a (real) scalar product when it satisfies the following properties:
ps1: [symmetry] ∀x, y ∈ V, ⟨x, y⟩ = ⟨y, x⟩;
ps2: [additivity] ∀x, y, z ∈ V, ⟨x + y, z⟩ = ⟨x, z⟩ + ⟨y, z⟩;
ps3: [homogeneity] ∀x, y ∈ V, ∀α ∈ R, ⟨αx, y⟩ = α⟨x, y⟩;
ps4: [strict positive definiteness] ∀x ∈ V, ⟨x, x⟩ ≥ 0 and ⟨x, x⟩ = 0 ⟺ x = 0.
A real vector space (V, R) together with a scalar product defined over it is called a Euclidean space.
In this context, we have the following notions:
• the length of a vector x ∈ V is the real number ‖x‖ = √⟨x, x⟩;
• the measure of the angle between two vectors x, y ∈ V is the real number ∠(x, y) ∈ [0, π] defined by cos ∠(x, y) = ⟨x, y⟩ / (‖x‖ · ‖y‖);
• two vectors x, y ∈ V are called orthogonal when ⟨x, y⟩ = 0, and we denote this situation by x ⊥ y.

3.0.24. Example. Over Rⁿ the standard scalar product is ⟨x, y⟩ = Σ_{i=1}^n xᵢyᵢ = [x]ᵀ_E [y]_E = [y]ᵀ_E [x]_E.
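The notions just defined specialize, for the standard scalar product on R³, to the familiar formulas; a small numerical illustration (the vectors are arbitrary test values):

```python
import numpy as np

x = np.array([3.0, 0.0, 4.0])
y = np.array([0.0, 2.0, 0.0])

dot = float(x @ y)                     # <x, y> = sum_i x_i y_i
length_x = float(np.sqrt(x @ x))       # ||x|| = sqrt(<x, x>)
cos_angle = dot / (np.sqrt(x @ x) * np.sqrt(y @ y))
orthogonal = abs(dot) < 1e-12          # x ⊥ y  iff  <x, y> = 0
```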

3.0.25. Example. For the vector space F(X) with X finite, the function ⟨f(·), g(·)⟩ = Σ_{x∈X} f(x)g(x) is a scalar product [since X is finite, the sum is also finite and so there are no existence problems].
3.0.26. Remark. When V is a real vector space with an independent set of at least two vectors, then:
∀x, y ∈ V, ∃w_{x,y} ∈ V such that ⟨w_{x,y}, w_{x,y}⟩ = 1 and ⟨w_{x,y}, x − y⟩ = 0.
[When the real vector space has dimension at least two, for each two (noncollinear) vectors there is at least a vector orthogonal to their difference.]

Proof. Consider the independent set {a, b} ⊆ V, and the vectors x, y ∈ V.
When x = y, the vector w = a/√⟨a, a⟩ satisfies the statement.
When x ≠ y, consider the set {α(x − y); α ∈ R} = span(x − y), and z ∈ V \ span(x − y) (such a vector exists since span(x − y) ≠ V).
Find α such that ⟨z + α(x − y), x − y⟩ = 0:
⟨z + α(x − y), x − y⟩ = 0 ⟺ ⟨z, x − y⟩ + α⟨x − y, x − y⟩ = 0 ⟺ α = −⟨z, x − y⟩ / ⟨x − y, x − y⟩,
so the vector
v = z − (⟨z, x − y⟩ / ‖x − y‖²)(x − y)
satisfies ⟨v, x − y⟩ = 0.
Moreover v ≠ 0 (because otherwise z would belong to span(x − y)), and w = v/‖v‖ satisfies the statement:
⟨w, w⟩ = ⟨v, v⟩ / ‖v‖² = 1 and
⟨w, x − y⟩ = (1/‖v‖)⟨v, x − y⟩ = (1/‖v‖)(⟨z, x − y⟩ − (⟨z, x − y⟩/‖x − y‖²)⟨x − y, x − y⟩) = 0. □
3.0.27. Remark. A real scalar product over (V, R) is a symmetric bilinear functional such that the attached quadratic form is strictly positive definite. For a fixed real vector space such a real scalar product may be chosen in different ways, leading to different geometric measurements, such as the length of a vector, the angle between two vectors, the distance between two vectors, and others.
3.0.28. Definition. Let V be a complex vector space. A function ⟨·,·⟩: V × V → C is called a complex scalar product when:
ps1: [Hermitian symmetry] ∀x, y ∈ V, ⟨x, y⟩ = conj(⟨y, x⟩);
ps2: [additivity in the first variable] ∀x, y, z ∈ V, ⟨x + y, z⟩ = ⟨x, z⟩ + ⟨y, z⟩;
ps3: [homogeneity in the first variable] ∀x, y ∈ V, ∀α ∈ C, ⟨αx, y⟩ = α⟨x, y⟩;
ps4: [positive definiteness and nondegeneracy] ∀x ∈ V, ⟨x, x⟩ ≥ 0 and ⟨x, x⟩ = 0 ⟺ x = 0.
If a complex scalar product has been defined on the complex vector space V, then we say that V is a unitary space.

3.0.29. Example. Over Cⁿ the standard scalar product is ⟨x, y⟩ = Σ_{i=1}^n xᵢȳᵢ = y*x, where u* = ūᵀ denotes the adjoint (the Hermitian adjoint¹). [In fact, ⟨x, y⟩ = ([y]_E)* [x]_E.]

3.0.30. Remark. If over C² one were to use the real scalar product ⟨x, y⟩ = x₁y₁ + x₂y₂ [without complex conjugation], then nonzero vectors of null length would be obtained: ⟨(1, i), (1, i)⟩ = 1 − 1 = 0.

3.0.31. Example. Over the vector space M_{m×n}(K), with K = R or C, one may define the Frobenius scalar product: ⟨A, B⟩ = Tr(B*A) = Σ_{l=1}^m Σ_{k=1}^n a_{lk} b̄_{lk} [see Definition 7.7.5 and the proof at Remark 7.7.6].
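The role of conjugation (for the vector (1, i) from the remark above) can be illustrated numerically:

```python
import numpy as np

x = np.array([1.0, 1.0j])         # the vector (1, i) in C^2

# standard complex scalar product on C^2: <u, v> = sum_i u_i conj(v_i)
herm = np.sum(x * np.conj(x))     # = 1*1 + i*(-i) = 2, a positive real number
# the "real formula" without conjugation, which fails as a scalar product on C^2:
naive = np.sum(x * x)             # = 1 + i^2 = 0, although x is nonzero
```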

3.0.32. Remark. From the symmetry and the additivity of the real scalar product one obtains: ⟨x + y, x + y⟩ − ⟨x, x⟩ − ⟨y, y⟩ = 2⟨x, y⟩ [or ⟨x + y, x + y⟩ = ⟨x, x⟩ + 2⟨x, y⟩ + ⟨y, y⟩].
From the Hermitian symmetry and the additivity of the complex scalar product one obtains: ⟨x + y, x + y⟩ − ⟨x, x⟩ − ⟨y, y⟩ = ⟨x, y⟩ + conj(⟨x, y⟩) = 2 Re⟨x, y⟩.

Proof. For the real scalar product:
⟨x + y, x + y⟩ − ⟨x, x⟩ − ⟨y, y⟩ = ⟨x, x + y⟩ + ⟨y, x + y⟩ − ⟨x, x⟩ − ⟨y, y⟩ = ⟨x + y, x⟩ + ⟨x + y, y⟩ − ⟨x, x⟩ − ⟨y, y⟩ =
= ⟨x, x⟩ + ⟨y, x⟩ + ⟨x, y⟩ + ⟨y, y⟩ − ⟨x, x⟩ − ⟨y, y⟩ = 2⟨x, y⟩.
For the complex scalar product:
⟨x + y, x + y⟩ − ⟨x, x⟩ − ⟨y, y⟩ = ⟨x, x⟩ + ⟨y, x⟩ + ⟨x, y⟩ + ⟨y, y⟩ − ⟨x, x⟩ − ⟨y, y⟩ = conj(⟨x, y⟩) + ⟨x, y⟩ = 2 Re⟨x, y⟩. □

¹The phrase "adjoint matrix" may refer to two situations which have no connection with each other: "adjoint" in the sense of "transpose of the matrix of complex conjugates", or "the transpose of the matrix of cofactors" (which appears when inverting a square matrix) (see also Definitions 7.5.5 and 7.5.6). It is an unfortunate situation in which the same name is used for two distinct notions, and the distinction has to be made from the context.

No matter whether the vector space is a Euclidean space or a unitary space:
• the length of a vector x ∈ V is the real number ‖x‖ = √⟨x, x⟩. From 3.0.32 one obtains the relations:
[for a real scalar product] ‖x + y‖² = ‖x‖² + 2⟨x, y⟩ + ‖y‖² and ‖x − y‖² = ‖x‖² − 2⟨x, y⟩ + ‖y‖²;
[for a complex scalar product] ‖x + y‖² = ‖x‖² + 2 Re(⟨x, y⟩) + ‖y‖² and ‖x − y‖² = ‖x‖² − 2 Re(⟨x, y⟩) + ‖y‖²;
• two vectors x, y ∈ V are called orthogonal when ⟨x, y⟩ = 0, and we denote this fact by x ⊥ y.

3.0.33. Remark. In the second variable, the complex scalar product has the properties:
⟨x, y + z⟩ = conj(⟨y + z, x⟩) = conj(⟨y, x⟩ + ⟨z, x⟩) = conj(⟨y, x⟩) + conj(⟨z, x⟩) = ⟨x, y⟩ + ⟨x, z⟩ [additivity];
⟨x, αy⟩ = conj(⟨αy, x⟩) = conj(α⟨y, x⟩) = ᾱ conj(⟨y, x⟩) = ᾱ⟨x, y⟩ [conjugate homogeneity].
This means that for each x ∈ V, the function y ↦ ⟨x, y⟩ is additive and conjugate homogeneous [it is called conjugate linear]. The complex scalar product is linear in the first variable and conjugate linear in the second variable [such a function is also called a sesquilinear functional²]. A complex scalar product on (V, C) is any sesquilinear functional whose attached Hermitian quadratic functional is strictly positive definite. On a fixed complex vector space, such a sesquilinear functional may be chosen in several ways; the resulting geometric measurements will depend on this choice, so the length of a vector or the distance between two vectors will not be defined univocally.

3.0.34. Definition. Two vector spaces (over the same field) with scalar products, (V₁, ⟨·,·⟩₁) and (V₂, ⟨·,·⟩₂), are called isometrically isomorphic if there is a bijection U(·): V₁ → V₂ with the properties: 1. U(·) ∈ L(V₁, V₂) (U(·) is a morphism of vector spaces); 2. ⟨x, y⟩₁ = ⟨U(x), U(y)⟩₂ (it is an isometry), or 2′. ⟨x, x⟩₁ = ⟨U(x), U(x)⟩₂ (condition 2′. may replace condition 2. because of Remark 3.0.32).

3.0.35. Remark. (⟨x, y⟩₁ = ⟨U(x), U(y)⟩₂) ⟺ (⟨x, x⟩₁ = ⟨U(x), U(x)⟩₂)

²The prefix "sesqui-" means "one and a half".

## Proof. ")" se consider

ax=y
"(" se foloseste hx + y; x + yi

hx; xi

hy; yi = 2 hx; yi

## 3.0.36. Remark. Exist

a spatii vectoriale X si exist
a produse scalare denite pe X h ; i1 si h ; i2 astfel
nct structurile (X; h ; i1 ) si (X; h ; i2 ) nu sunt izomorfe [ca o conditie necesar
a, spatiul vectorial X trebuie
s
a e de tip innit] [J. Rtz]
3.0.37. Remark. ntrun spatiu vectorial cu produs scalar (real sau complex) 8x 2 V, hu; xi = hv; xi )
u = v.
Proof. Se alege x = u

v = 0 () u = v.
3
32
2
x
0 1
5 4 1 5 are propri3.0.38. Remark. n R2 cu produsul scalar standard, operatorul U (x) = 4
x2
1 0
etatea: hU (x) ; xi = h(x2 ; x1 ) ; (x1 ; x2 )i = 0.
3.0.39. Theorem.

v si se obtine hu

v; u

vi = 0 () u

(1) ntrun spatiu vectorial cu produs scalar (real sau complex), pentru un op-

## erator U ( ) : V ! V, 8x; y 2 V, hU (x) ; yi = 0 ) U ( ) = O ( ) [operatorul nul].

(2) ntrun spatiu vectorial cu produs scalar complex, pentru un operator U ( ) : V ! V, 8v 2 V,
hU (v) ; vi = 0 ) U ( ) = O ( ) [operatorul nul].
(3) Proprietatea 2. n general nu are loc pe un spatiu vectorial cu produs scalar real.
Proof. 1. Se foloseste Obs. 3.0.37: 8x; y 2 V, hU (x) ; yi = 0 ) U (x) = 0; 8x 2 V ) U ( ) = O ( )
2. Fie v = x + y, cu x; y 2 V si

2 C. Atunci:

## 0 = hU (v) ; vi = hU ( x + y) ; x + yi = h U (x) + U (y) ; x + yi =

= h U (x) ; xi + h U (x) ; yi + hU (y) ; xi + hU (y) ; yi = j j2 hU (x) ; xi + hU (x) ; yi + hU (y) ; xi +
hU (y) ; yi =
=

hU (x) ; yi +

hU (y) ; xi

pentru

= 1, hU (x) ; yi + hU (y) ; xi = 0

pentru

= i, i hU (x) ; yi

i hU (y) ; xi = 0 ) hU (x) ; yi

hU (y) ; xi = 0

## ) 8x; y 2 V, hU (x) ; yi = 0 si din punctul 1. ) U ( ) = O ( ).

3. Un exemplu este cel de mai sus.
3.0.40. Proposition. If $\mathbb{V}$ is a Euclidean space or a unitary space, then:
(1) $|\langle x,y\rangle|\le\|x\|\,\|y\|$, $\forall x,y\in\mathbb{V}$ (the Cauchy–Buniakovski–Schwarz inequality), with equality $\iff$ the vectors $x$ and $y$ are linearly dependent (i.e. collinear);
(2) the function $\|\cdot\|:\mathbb{V}\to[0,\infty)$, $\|x\|=\sqrt{\langle x,x\rangle}$, has the properties:
(a) $\|x\|\ge0$, $\forall x\in\mathbb{V}$; $\|x\|=0\iff x=0$;
(b) $\|\alpha x\|=|\alpha|\,\|x\|$, $\forall x\in\mathbb{V}$, $\forall\alpha$ scalar;
(c) $\|x+y\|\le\|x\|+\|y\|$, $\forall x,y\in\mathbb{V}$ (the triangle inequality)
[any function with these properties is called a norm on $\mathbb{V}$];
(3) $\|x-y\|^2=\|x\|^2+\|y\|^2-2\operatorname{Re}\langle x,y\rangle$, and for $x\perp y$ we have $\|x+y\|^2=\|x\|^2+\|y\|^2$ (Pythagoras' theorem).

Proof. (1) The Cauchy–Buniakovski inequality obviously holds when $\langle x,y\rangle=0$.
For a Euclidean space: $\langle x+\lambda y,x+\lambda y\rangle\ge0$, $\forall\lambda\in\mathbb{R}$, $\forall x,y\in\mathbb{V}$, i.e. $\langle x,x\rangle+2\lambda\langle x,y\rangle+\lambda^2\langle y,y\rangle\ge0$, so the discriminant satisfies
$$\Delta=4\langle x,y\rangle^2-4\langle x,x\rangle\langle y,y\rangle\le0,\quad\forall x,y\in\mathbb{V},$$
which is the stated inequality. Observe that if $x_0$ and $y_0$ are such that $\Delta=0$, then the quadratic equation in $\lambda$ has a double root $\lambda_0$, so $\langle x_0,x_0\rangle+2\lambda_0\langle x_0,y_0\rangle+\lambda_0^2\langle y_0,y_0\rangle=\langle x_0+\lambda_0y_0,x_0+\lambda_0y_0\rangle=0$, hence $x_0+\lambda_0y_0=0$, i.e. the vectors $x_0$ and $y_0$ are linearly dependent.
For a unitary space: $\langle x+\lambda y,x+\lambda y\rangle\ge0$, $\forall\lambda\in\mathbb{C}$, $\forall x,y\in\mathbb{V}$, i.e. $\langle x,x\rangle+\bar\lambda\langle x,y\rangle+\lambda\overline{\langle x,y\rangle}+|\lambda|^2\langle y,y\rangle\ge0$. Choose $\lambda=t\dfrac{\langle x,y\rangle}{|\langle x,y\rangle|}$, with $t\in\mathbb{R}$ arbitrary; then $\langle x,x\rangle+2t|\langle x,y\rangle|+t^2\langle y,y\rangle\ge0$, $\forall t\in\mathbb{R}$, so
$$\Delta=4|\langle x,y\rangle|^2-4\langle x,x\rangle\langle y,y\rangle\le0,\quad\forall x,y\in\mathbb{V}.$$
Observe that if $x_0$ and $y_0$ are such that $\Delta=0$, then the quadratic equation in $t$ has a double root $t_0$, and for $\lambda_0=t_0\dfrac{\langle x_0,y_0\rangle}{|\langle x_0,y_0\rangle|}$ one gets $\langle x_0+\lambda_0y_0,x_0+\lambda_0y_0\rangle=0$, hence $x_0+\lambda_0y_0=0$, i.e. the vectors $x_0$ and $y_0$ are linearly dependent.
(2) The quadratic functional attached to the bilinear functional which defines the inner product is $\|x\|=\sqrt{\langle x,x\rangle}$. The properties to verify are (a), (b), (c) above; the first two are immediate. Using $\langle x,y\rangle+\langle y,x\rangle=2\operatorname{Re}\langle x,y\rangle\le2|\langle x,y\rangle|\le2\|x\|\,\|y\|$, the triangle inequality follows:
$$\|x+y\|^2=\langle x+y,x+y\rangle=\|x\|^2+\|y\|^2+2\operatorname{Re}\langle x,y\rangle\le\|x\|^2+\|y\|^2+2\|x\|\,\|y\|=(\|x\|+\|y\|)^2.$$
(3) Direct computation. $\square$

3.0.41. Remark. From point (1) it follows that, for nonzero $x,y$ in a Euclidean space, $\dfrac{\langle x,y\rangle}{\|x\|\,\|y\|}\in[-1,1]$.

3.0.42. Remark (Relations in a parallelogram). $\cos(x,y)=\dfrac{\langle x,y\rangle}{\|x\|\,\|y\|}$, and $\|x-y\|^2=\langle x-y,x-y\rangle=\|x\|^2-2\langle x,y\rangle+\|y\|^2$, so
$$\|x-y\|^2=\|x\|^2+\|y\|^2-2\|x\|\,\|y\|\cos(x,y)$$
(the law of cosines).

3.0.43. Remark. The parallelogram identity holds:
$$\|x+y\|^2+\|x-y\|^2=2\left(\|x\|^2+\|y\|^2\right),\quad\forall x,y\in\mathbb{V}.$$
Proof. $\|x+y\|^2+\|x-y\|^2=\langle x+y,x+y\rangle+\langle x-y,x-y\rangle=2\|x\|^2+2\|y\|^2$. $\square$

3.0.44. Remark. The parallelogram formed by the vectors $x$, $y$ is a rectangle $\iff\|x+y\|=\|x-y\|$.
Proof. $\|x+y\|=\|x-y\|\iff\|x+y\|^2=\|x-y\|^2\iff\|x\|^2+2\langle x,y\rangle+\|y\|^2=\|x\|^2-2\langle x,y\rangle+\|y\|^2\iff4\langle x,y\rangle=0\iff x\perp y$. $\square$
3.0.45. Remark. For $p\in[1,\infty)$, the functions $\|x\|_p=\left(\sum_{k=1}^n|x_k|^p\right)^{1/p}$ are norms. If $p\neq2$, the norm $\|\cdot\|_p$ does not come from an inner product (there is no inner product whose attached norm is $\|\cdot\|_p$ with $p\neq2$). [The proof relies on the observation that any norm which comes from an inner product satisfies the parallelogram identity; the norms $\|\cdot\|_p$ with $p\neq2$ do not satisfy the parallelogram identity — proof by counterexamples.] A stronger statement holds: a norm comes from an inner product if and only if it satisfies the parallelogram identity.
3.0.46. Remark (Polarization identities; reconstruction of the inner product from the norm). If $\mathbb{V}$ is a real normed vector space and the norm comes from an inner product, then the (real) inner product the norm comes from is:
$$\langle x,y\rangle=\frac14\left(\|x+y\|^2-\|x-y\|^2\right).$$
If $\mathbb{V}$ is a complex normed vector space and the norm comes from an inner product, then the (complex) inner product the norm comes from is:
$$\langle x,y\rangle=\frac14\left(\|x+y\|^2-\|x-y\|^2+i\,\|x+iy\|^2-i\,\|x-iy\|^2\right).$$

3.0.47. Remark. If $x,y\neq0$, then the vectors $x+y$ and $x-y$ are orthogonal $\iff\|x\|=\|y\|$.
Proof. $(x+y)\perp(x-y)\iff\langle x+y,x-y\rangle=0\iff\|x\|^2-\|y\|^2=0$ (in the real case). $\square$

3.0.48. Remark (The Gram determinant is the squared area of the parallelogram formed by the vectors).
$$\begin{vmatrix}\langle x,x\rangle&\langle x,y\rangle\\\langle y,x\rangle&\langle y,y\rangle\end{vmatrix}=\|x\|^2\,\|y\|^2\sin^2(x,y).$$
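The identities above are easy to check numerically. A minimal NumPy sketch (vector values are illustrative); the last check shows why the 1-norm cannot come from an inner product:

```python
import numpy as np

rng = np.random.default_rng(0)
x, y = rng.normal(size=5), rng.normal(size=5)

norm = np.linalg.norm   # Euclidean norm, which comes from the dot product

# real polarization identity: <x,y> = (||x+y||^2 - ||x-y||^2) / 4
lhs = x @ y
rhs = (norm(x + y) ** 2 - norm(x - y) ** 2) / 4
assert np.isclose(lhs, rhs)

# parallelogram identity: ||x+y||^2 + ||x-y||^2 = 2(||x||^2 + ||y||^2)
assert np.isclose(norm(x + y) ** 2 + norm(x - y) ** 2,
                  2 * (norm(x) ** 2 + norm(y) ** 2))

# the 1-norm fails the parallelogram identity for e_1, e_2 (8 vs. 4),
# so it cannot come from an inner product (Remark 3.0.45)
a, b = np.array([1.0, 0.0]), np.array([0.0, 1.0])
one = lambda v: np.abs(v).sum()
assert not np.isclose(one(a + b) ** 2 + one(a - b) ** 2,
                      2 * (one(a) ** 2 + one(b) ** 2))
```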

3.0.49. Definition. For any two vectors, the distance between them is the length of their difference:
$$d(x,y)=\|x-y\|.$$

3.0.50. Remark. The distance between two vectors has the properties:
(1) $d(x,y)\ge0$; $d(x,y)=0\iff x=y$;
(2) $d(x,y)=d(y,x)$;
(3) $d(x,y)\le d(x,z)+d(z,y)$ (the triangle inequality).

3.0.51. Remark (A characterization of the midpoint/mean). If $x,y,m\in\mathbb{V}$ satisfy
$$\|m-x\|=\|y-m\|=\frac12\|x-y\|,$$
then $m=\dfrac{x+y}2$.
Proof. Apply the parallelogram identity to the vectors $x-m$ and $m-y$:
$$\|(x-m)+(m-y)\|^2+\|(x-m)-(m-y)\|^2=2\left(\|x-m\|^2+\|m-y\|^2\right)=\|x-y\|^2.$$
Since $\|(x-m)+(m-y)\|^2=\|x-y\|^2$, it follows that $\|(x-m)-(m-y)\|^2=0$, so $x-m=m-y$, i.e. $m=\dfrac{x+y}2$. $\square$

3.1. Orthogonality

We restrict the presentation to Euclidean spaces.

3.1.1. Remark. If the vectors $(x_i)_{i=\overline{1,k}}$ are pairwise orthogonal, then the generalized Pythagoras theorem holds:
$$\left\|\sum_{i=1}^kx_i\right\|^2=\sum_{i=1}^k\|x_i\|^2.$$
Proof. $\left\|\sum\limits_{i=1}^kx_i\right\|^2=\left\langle\sum\limits_{i=1}^kx_i,\sum\limits_{j=1}^kx_j\right\rangle=\sum\limits_{i=1}^k\sum\limits_{j=1}^k\langle x_i,x_j\rangle=\sum\limits_{i=1}^k\langle x_i,x_i\rangle=\sum\limits_{i=1}^k\|x_i\|^2$. $\square$

3.1.2. Remark (A variant of the generalized Pythagoras theorem). If the vectors $(x_i)_{i=\overline{1,k}}$ are pairwise orthogonal and $\alpha_1,\dots,\alpha_k$ are scalars, then:
$$\left\|\sum_{i=1}^k\alpha_ix_i\right\|^2=\sum_{i=1}^k|\alpha_i|^2\|x_i\|^2.$$

3.1.3. Remark. Any family of nonzero, pairwise orthogonal vectors is linearly independent.

3.1.4. Definition. A family of vectors $\{v_i\}_{i\in I}$ of a Euclidean space is called an orthogonal family if $\langle v_i,v_j\rangle=0$, $\forall i\neq j$, $i,j\in I$. If, in addition, $\|v_i\|=1$, $\forall i\in I$, the family is called orthonormal. A basis is called orthonormal if it is an orthonormal family.

3.1.5. Remark. The coordinates of a vector in an orthogonal basis satisfy the relation $x_k=\dfrac{\langle x,v_k\rangle}{\|v_k\|^2}$; the coordinates of a vector in an orthonormal basis satisfy the relation $x_k=\langle x,v_k\rangle$.

3.1.6. Definition. Two subsets $A$, $B$ of $\mathbb{V}$ are called orthogonal, denoted $A\perp B$, if $x\perp y$ for all $x\in A$, $y\in B$. The orthogonal complement of a set of vectors $A$ is the set
$$A^\perp=\bigcup_{B\perp A}B=\{v\in\mathbb{V}:\langle v,x\rangle=0,\ \forall x\in A\}.$$

3.1.7. Proposition. In a Euclidean space $\mathbb{V}$ the following statements are true:
(1) $A\perp B$ and $x\in A\cap B$ $\Rightarrow$ $x=0$;
(2) $\operatorname{span}\{a_i\mid i\in I\}\perp B\iff a_i\perp y,\ \forall i\in I,\ \forall y\in B$;
(3) $A^\perp$ is a vector subspace and $A\perp A^\perp$;
(4) if $A$ is a vector subspace of $\mathbb{V}$, then $\dim A^\perp=\dim\mathbb{V}-\dim A$ and $\mathbb{V}=A\oplus A^\perp$;
(5) if $A$ and $B$ are vector subspaces of $\mathbb{V}$ for which $\dim A+\dim B=\dim\mathbb{V}$ and $A\perp B$, then $\mathbb{V}=A\oplus B$, $B=A^\perp$ and $A=B^\perp$;
(6) if $A$ is a vector subspace of $\mathbb{V}$, then $\left(A^\perp\right)^\perp=A$; in general, $\left(A^\perp\right)^\perp=\operatorname{span}(A)$;
(7) $A\subseteq B\Rightarrow B^\perp\subseteq A^\perp$, and $A^\perp=(\operatorname{span}(A))^\perp$.

Proof.
(1) $x\in A\cap B\Rightarrow\langle x,x\rangle=0\Rightarrow x=0$.
(2) "$\Leftarrow$": let $x\in\operatorname{span}\{a_i\mid i\in I\}$ be arbitrary. Then $x=\sum_{j\in J}\alpha_ja_j$ (where $J\subseteq I$ is a finite set of indices) and $\langle x,y\rangle=\sum_{j\in J}\alpha_j\langle a_j,y\rangle=0$, $\forall y\in B$. "$\Rightarrow$": obtained by particularizing the vector taken from the span.
(3) Let $x=\sum_{j\in J_1}\beta_jb_j\in A^\perp$ and $y=\sum_{j\in J_2}\gamma_jb_j\in A^\perp$ (finite sums), where for each $a\in A$ we have $\langle a,b_j\rangle=0$ for all $j\in J_1$ and for all $j\in J_2$. It follows that $x+y\in A^\perp$ and $\lambda x=\sum_{j\in J_1}(\lambda\beta_j)b_j\in A^\perp$ for every scalar $\lambda\in\mathbb{R}$, so $A^\perp$ is a vector subspace; $A\perp A^\perp$ holds by definition.
(4) Let $(e_1,e_2,\dots,e_p)$ be a basis of the vector subspace $A$, which we complete to a basis $E=(e_1,e_2,\dots,e_m)$ of the vector space $\mathbb{V}$. Let $y\in A^\perp\subseteq\mathbb{V}$ be arbitrary and let $y=\sum_{i=1}^m\alpha_ie_i$ be the decomposition of $y$ in the basis $E$. The system
$$\langle e_1,y\rangle=\langle e_2,y\rangle=\dots=\langle e_p,y\rangle=0,$$
which characterizes the relation $y\in A^\perp$ according to the second point of the proposition, is written
$$\begin{cases}\langle e_1,e_1\rangle\alpha_1+\langle e_1,e_2\rangle\alpha_2+\dots+\langle e_1,e_p\rangle\alpha_p+\dots+\langle e_1,e_m\rangle\alpha_m=0\\\langle e_2,e_1\rangle\alpha_1+\langle e_2,e_2\rangle\alpha_2+\dots+\langle e_2,e_p\rangle\alpha_p+\dots+\langle e_2,e_m\rangle\alpha_m=0\\\qquad\dots\\\langle e_p,e_1\rangle\alpha_1+\langle e_p,e_2\rangle\alpha_2+\dots+\langle e_p,e_p\rangle\alpha_p+\dots+\langle e_p,e_m\rangle\alpha_m=0\end{cases}$$
Since the matrix of the bilinear functional defining the inner product, associated with the basis $\{e_1,e_2,\dots,e_p\}$, is nonsingular, the system is compatible with $m-p$ degrees of indeterminacy. We thus have $\dim A^\perp=m-p=\dim\mathbb{V}-\dim A$.
(5) Using the first statement we have $A\cap B=\{0\}$, and from $\dim A+\dim B=\dim\mathbb{V}$ it follows that $\mathbb{V}=A\oplus B$. Since $B$ is a vector subspace of $A^\perp$ with $\dim B=\dim\mathbb{V}-\dim A=\dim A^\perp$, it follows that $B=A^\perp$.
(6) A consequence of the preceding points.
(7) A consequence of the preceding points. $\square$

3.1.8. Remark. $x\in A$ and $x\perp A$ $\Rightarrow\langle x,x\rangle=0\Rightarrow x=0$.

3.1.9. Remark. Obviously $A^\perp=\{x:x\perp A\}$.

3.1.10. Remark. 1. Two vector subspaces are orthogonal if and only if each vector of a basis of the first subspace is orthogonal to each vector of a basis of the second subspace. 2. The sum of two orthogonal vector subspaces is direct; the sum of an arbitrary family of pairwise orthogonal vector subspaces is direct. 3. An orthogonal system of vectors which does not contain the null vector consists of linearly independent vectors. The dimension $\dim\mathbb{V}$ is the maximal number of nonzero, pairwise orthogonal vectors.
3.1.11. Theorem (The Gram–Schmidt orthogonalization theorem). Let $\mathbb{V}$ be a Euclidean space with $\dim\mathbb{V}=n$. Let $E=(v_i)_{i=\overline{1,n}}$ be a basis and let $V_k=\operatorname{span}(v_1,\dots,v_k)$. There is a basis $F=(f_i)_{i=\overline{1,n}}$ with the properties:
(1) $\forall k\in\{1,\dots,n\}$, $\operatorname{span}(f_1,\dots,f_k)=V_k$;
(2) $\forall k\in\{1,\dots,n-1\}$, $f_{k+1}\perp V_k$.

Proof. Take $f_1=v_1$; $f_2$ is a vector of $V_2$ which is orthogonal to $f_1$: with $f_2=\alpha_1^2f_1+v_2$, the condition $\langle f_1,f_2\rangle=0$ gives $\alpha_1^2\langle f_1,f_1\rangle+\langle f_1,v_2\rangle=0$, so $\alpha_1^2=-\dfrac{\langle f_1,v_2\rangle}{\langle f_1,f_1\rangle}$ and
$$f_2=v_2-\frac{\langle f_1,v_2\rangle}{\|f_1\|^2}f_1=v_2-\frac{\langle v_1,v_2\rangle}{\|v_1\|^2}v_1$$
[since $v_1$ is a vector of the basis $E$, it is nonzero, so its norm is nonzero].
$f_3$ is a vector of $V_3$ orthogonal to $f_1$ and to $f_2$: with $f_3=\alpha_1^3f_1+\alpha_2^3f_2+v_3$, the conditions $\langle f_1,f_3\rangle=0$, $\langle f_2,f_3\rangle=0$ become
$$\begin{cases}\langle f_1,f_1\rangle\alpha_1^3+\underbrace{\langle f_1,f_2\rangle}_{=0}\alpha_2^3+\langle f_1,v_3\rangle=0\\\underbrace{\langle f_2,f_1\rangle}_{=0}\alpha_1^3+\langle f_2,f_2\rangle\alpha_2^3+\langle f_2,v_3\rangle=0\end{cases}\ \Rightarrow\ \begin{cases}\alpha_1^3=-\dfrac{\langle f_1,v_3\rangle}{\|f_1\|^2}\\[2mm]\alpha_2^3=-\dfrac{\langle f_2,v_3\rangle}{\|f_2\|^2}\end{cases}$$
By induction, all the vectors $f_i$, $i=\overline{1,n}$, are found in the same way. Moreover, since for each $i$ the vectors $f_i$ and $v_i$ are in the position of perpendicular, respectively oblique, with respect to the subspace spanned by $v_1,\dots,v_{i-1}$, it follows that $\|f_i\|\le\|v_i\|$.

The induction is organized by the dimension of the space. If $E=(e)$ is a basis of the Euclidean space $\mathbb{V}$, with $e\neq0$, then obviously $F=E$ is an orthogonal system. For arbitrary $k\in\{1,\dots,n-1\}$: if $E=(e_1,e_2,\dots,e_{k+1})$ is a basis of the Euclidean space $\mathbb{V}$, we show that there is an orthogonal basis $F=\{f_1,f_2,\dots,f_{k+1}\}$ of the space $\mathbb{V}$. By the induction hypothesis there is an orthogonal system $\{f_1,f_2,\dots,f_k\}$, consisting of nonzero vectors, for which $\operatorname{span}\{e_1,e_2,\dots,e_k\}=\operatorname{span}\{f_1,f_2,\dots,f_k\}$.
Let $f_{k+1}=e_{k+1}-\sum_{j=1}^k\varepsilon_jf_j$, where the scalars $\varepsilon_1,\dots,\varepsilon_k$ are determined from the conditions $\langle f_{k+1},f_j\rangle=0$, $\forall j=\overline{1,k}$. We obtain
$$f_{k+1}=e_{k+1}-\sum_{j=1}^k\frac{\langle e_{k+1},f_j\rangle}{\langle f_j,f_j\rangle}f_j.$$
Obviously $F=\{f_1,f_2,\dots,f_{k+1}\}$ is an orthogonal system consisting of nonzero vectors, hence $F$ is linearly independent. The family $F$ is also a generating system for $\mathbb{V}$ ($\operatorname{span}(F)=\mathbb{V}=\operatorname{span}(E)$). Moreover, since for each $i\in\{2,\dots,n\}$ the vectors $f_i$ and $e_i$ are in the position of perpendicular, respectively oblique, with respect to the subspace spanned by $e_1,\dots,e_{i-1}$, it follows that $\|f_i\|\le\|e_i\|$. $\square$

3.1.12. Remark. We have proved that every Euclidean space admits an orthogonal basis, a basis which can be made orthonormal (by dividing each vector by its norm).
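The recursion in the proof translates directly into code. A minimal NumPy sketch for the standard dot product (function and variable names are illustrative):

```python
import numpy as np

def gram_schmidt(vectors):
    """Orthogonalize linearly independent vectors:
    f_{k+1} = e_{k+1} - sum_j <e_{k+1}, f_j> / <f_j, f_j> * f_j."""
    basis = []
    for v in vectors:
        f = v - sum(((v @ b) / (b @ b)) * b for b in basis)
        basis.append(f)
    return basis

E = [np.array([1.0, 1.0, 0.0]),
     np.array([1.0, 0.0, 1.0]),
     np.array([0.0, 1.0, 1.0])]
F = gram_schmidt(E)

# pairwise orthogonality
for i in range(3):
    for j in range(i + 1, 3):
        assert abs(F[i] @ F[j]) < 1e-12
# the perpendicular is not longer than the oblique: ||f_i|| <= ||e_i||
assert all(np.linalg.norm(f) <= np.linalg.norm(e) + 1e-12 for e, f in zip(E, F))
```

`numpy.linalg.qr` performs the normalized variant of the same computation.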

3.2. The Projection of a Vector over a Subspace

Let $V_0$ be a vector subspace, let $f_1,\dots,f_k$ be a basis of $V_0$, denoted $B_0$, and let $v$ be an arbitrary vector, in general not belonging to $V_0$. The vector $v$ decomposes into a vector of $V_0$, which is "the orthogonal projection of $v$ onto $V_0$", denoted $\Pr_{V_0}v$, and a vector orthogonal to $V_0$, which is the perpendicular from $v$ onto $V_0$, namely $v-\Pr_{V_0}v$. We want to find this decomposition effectively. The conditions are:
$$\begin{cases}v-\Pr_{V_0}v\perp V_0\\\Pr_{V_0}v\in V_0\end{cases}\iff\begin{cases}\left\langle v-\Pr_{V_0}v,\,f_j\right\rangle=0,\ \forall j=\overline{1,k}\\\Pr_{V_0}v=\sum_{i=1}^k\alpha_if_i.\end{cases}$$
The coefficients $\alpha_i$ are found from the condition that the vector $v-\Pr_{V_0}v$ be orthogonal to $V_0$, i.e. to each of the $f_j$:
$$\left\langle v-\sum_{i=1}^k\alpha_if_i,\,f_j\right\rangle=0,\ \forall j=\overline{1,k}\ \Rightarrow\ \sum_{i=1}^k\alpha_i\langle f_i,f_j\rangle=\langle v,f_j\rangle,\ \forall j=\overline{1,k}.$$
The last relation is a nonhomogeneous linear system in the unknowns $\alpha_1,\dots,\alpha_k$, whose matrix is
$$\begin{pmatrix}\langle f_1,f_1\rangle&\langle f_1,f_2\rangle&\dots&\langle f_1,f_k\rangle\\\langle f_2,f_1\rangle&\langle f_2,f_2\rangle&\dots&\langle f_2,f_k\rangle\\\vdots&\vdots&\ddots&\vdots\\\langle f_k,f_1\rangle&\langle f_k,f_2\rangle&\dots&\langle f_k,f_k\rangle\end{pmatrix}.$$
This matrix is nonsingular because it represents, in the basis $f_1,\dots,f_k$, the matrix of the restriction to the subspace $V_0$ of the positive definite functional which determines the inner product. Hence the system is compatible and determined, and admits a unique solution $(\hat\alpha_1,\dots,\hat\alpha_k)^T$, which is exactly the representation, in the basis $f_1,\dots,f_k$ (of the vector subspace $V_0$, of dimension $k$), of the orthogonal projection of the vector $v$ onto the subspace $V_0$:
$$\left[\Pr_{V_0}v\right]_{B_0}=\begin{pmatrix}\hat\alpha_1\\\vdots\\\hat\alpha_k\end{pmatrix}\ (\in V_0),\qquad\left[\Pr_{V_0}v\right]_E=\sum_{i=1}^k\hat\alpha_i\,[f_i]_E\ (\in V_0\subseteq\mathbb{V}).$$
The vector $v-\Pr_{V_0}v$ is represented as:
$$\left[v-\Pr_{V_0}v\right]_E=[v]_E-\sum_{i=1}^k\hat\alpha_i\,[f_i]_E.$$

3.2.1. Remark. Pythagoras' theorem holds:
$$\|v\|^2=\left\|\Pr_{V_0}v\right\|^2+\left\|v-\Pr_{V_0}v\right\|^2,$$
because the vectors $\Pr_{V_0}v$ and $v-\Pr_{V_0}v$ are perpendicular.

3.2.2. Remark. The length of the perpendicular is less than or equal to the length of any oblique.
Proof. Any oblique is of the form $v-v_0$, with $v_0\in V_0$. Since $v-\Pr_{V_0}v$ is orthogonal to $V_0$, it is orthogonal to every vector of $V_0$, in particular to $\Pr_{V_0}v-v_0\in V_0$. Then
$$v-v_0=\left(v-\Pr_{V_0}v\right)+\left(\Pr_{V_0}v-v_0\right),$$
and, by Pythagoras' theorem,
$$\|v-v_0\|^2=\left\|v-\Pr_{V_0}v\right\|^2+\left\|\Pr_{V_0}v-v_0\right\|^2\ge\left\|v-\Pr_{V_0}v\right\|^2,$$
i.e. the length of any oblique is at least the length of the perpendicular. $\square$

3.2.3. Theorem. For each $v\in\mathbb{V}$ and for each subspace $V_0$ of $\mathbb{V}$, the function $f(\cdot):V_0\to\mathbb{R}$ defined by $f(u)=\|u-v\|$ has a unique minimum on $V_0$.
Proof. We evaluate $\min_{u\in V_0}f(u)$. If $v\in V_0$, then the minimum is zero, it is attained exactly at $v$, and it is unique. For $v\notin V_0$ and $u\in V_0$, $u-v$ is an oblique from $v$, so it is longer than the perpendicular from $v$:
$$\|u-v\|\ge\left\|v-\Pr_{V_0}v\right\|,\quad\forall u\in V_0,$$
so
$$\min_{u\in V_0}f(u)=f\left(\Pr_{V_0}v\right)=\left\|v-\Pr_{V_0}v\right\|,$$
i.e. for $v$ fixed and $u\in V_0$, the minimal value of $\|u-v\|$ is the length of the perpendicular from $v$ onto $V_0$, and it is attained at a single point of $V_0$, which is exactly the foot of the perpendicular from $v$ onto $V_0$. The number $d(v,V_0)=\left\|v-\Pr_{V_0}v\right\|$ is called the distance from $v$ to $V_0$. $\square$
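The Gram system above can be solved numerically. A minimal NumPy sketch for the standard dot product (names and data are illustrative):

```python
import numpy as np

def project(v, basis):
    """Orthogonal projection of v onto span(basis): solve G @ alpha = b,
    where G[i, j] = <f_i, f_j> (the Gram matrix) and b[j] = <v, f_j>."""
    F = np.column_stack(basis)       # columns f_1, ..., f_k
    G = F.T @ F                      # Gram matrix, nonsingular for a basis
    b = F.T @ v
    alpha = np.linalg.solve(G, b)
    return F @ alpha

f1, f2 = np.array([1.0, 0.0, 1.0]), np.array([0.0, 2.0, 1.0])
v = np.array([3.0, 1.0, 4.0])
p = project(v, [f1, f2])

r = v - p                            # the perpendicular
assert abs(r @ f1) < 1e-12 and abs(r @ f2) < 1e-12   # r is orthogonal to V0
# Pythagoras: ||v||^2 = ||p||^2 + ||v - p||^2
assert np.isclose(v @ v, p @ p + r @ r)
```

With an orthonormal basis the Gram matrix is the identity and $\alpha_j=\langle v,f_j\rangle$, recovering Remark 3.1.5.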

3.3. The Least Square Method

For the presentation of this method we adopt a terminology coming from Statistics / Biostatistics / Econometrics, pointing out the connections with Linear Algebra.

Notation:
- $Y$ — the "response/dependent/predicted" variable $[n\times1]$;
- $X=\begin{bmatrix}X_1&X_2&\dots&X_p\end{bmatrix}$ — the matrix of "observed data / explanatory variables" $[n\times p]$ ($X_j$ is column $j$ of the matrix $X$);
- $p$ — the number of explanatory variables [columns of the matrix $X$];
- $X_j$ — a column [it contains $n$ observations of the same variable];
- $n$ — the number of observations;
- $X_i$ — row $i$ of $X$ [observation $i$];
- $x_{ij}$ — observation $i$ of the explanatory variable $j$;
- $\beta=(\beta_0,\beta_1,\dots,\beta_p)^T$ — the unknown parameters; $\hat\beta=(\hat\beta_0,\hat\beta_1,\dots,\hat\beta_p)^T$ — the estimates, with fitted values $\hat y_i=X_i\hat\beta$;
- $\varepsilon$ — a random error which represents the discrepancy of the approximation:
$$\varepsilon=Y-X\beta=\begin{bmatrix}\varepsilon_1\\\vdots\\\varepsilon_n\end{bmatrix},\quad\varepsilon_i=y_i-X_i\beta;\qquad\hat\varepsilon_i=y_i-X_i\hat\beta\ \text{(the residuals)}.$$

The data are presented in tabular form:

Observation number | Response | Free term | $X_1$ | $X_2$ | $\dots$ | $X_p$
1 | $y_1$ | 1 | $x_{11}$ | $x_{12}$ | $\dots$ | $x_{1p}$
2 | $y_2$ | 1 | $x_{21}$ | $x_{22}$ | $\dots$ | $x_{2p}$
3 | $y_3$ | 1 | $x_{31}$ | $x_{32}$ | $\dots$ | $x_{3p}$
$\vdots$ | $\vdots$ | $\vdots$ | $\vdots$ | $\vdots$ | | $\vdots$
$n$ | $y_n$ | 1 | $x_{n1}$ | $x_{n2}$ | $\dots$ | $x_{np}$

One tries to explain the observed response in terms of the observed predictors³; for this, a matrix $X$ [described above] which contains the observations is used. Although the rows and the columns of the matrix $X$ are vectors, their meanings are distinct, so the same algebraic operations will mean different things.

³ Note the difference between "what is observed to happen" and "what happens": it is a conceptual difference which, it seems, was included in modeling for the first time within Quantum Mechanics [the difference between reality and the observation of reality].

3.3.1. Operations with Columns. The columns $X_j=\begin{bmatrix}x_{1j}&x_{2j}&x_{3j}&\dots&x_{nj}\end{bmatrix}^T$ contain, for each $j$, different instances of the same object (different values of the same type, measured in the same way); they are different realizations of the same explanatory variable $j$. The columns may be regarded as imperfect observations of the same "perfect value" (which may or may not be unique).

The matrix of observations is a "row of columns":
$$X=\begin{bmatrix}1&x_{11}&x_{12}&\dots&x_{1p}\\1&x_{21}&x_{22}&\dots&x_{2p}\\1&x_{31}&x_{32}&\dots&x_{3p}\\\vdots&\vdots&\vdots&\ddots&\vdots\\1&x_{n1}&x_{n2}&\dots&x_{np}\end{bmatrix}=\begin{bmatrix}X_0&X_1&X_2&\dots&X_p\end{bmatrix},\qquad X_0=\begin{bmatrix}1\\\vdots\\1\end{bmatrix}\overset{\text{Not.}}{=}\mathbf{i};$$
$$X^T=\begin{bmatrix}X_0^T\\X_1^T\\\vdots\\X_p^T\end{bmatrix},\qquad X^TX=\begin{bmatrix}X_0^T\\X_1^T\\\vdots\\X_p^T\end{bmatrix}\begin{bmatrix}X_0&X_1&X_2&\dots&X_p\end{bmatrix}=\left[X_{j_1}^TX_{j_2}\right]_{j_1,j_2=\overline{0,p}}.$$
Conceptually, as long as $Y$ is to be "explained" in terms of the $X_j$ [in fact, $Y$ will be projected onto the subspace spanned by the $X_j$], the vectors $Y$ and $X_j$ must belong to the same space [of dimension (greater than or) equal to the number of observations, $n$]. First of all, there must be more observations than explanatory variables: $p+1<n$. Denote by $S_c$ the "column space"; $Y,X_j\in S_c$, $j=\overline{0,p}$, and $\operatorname{span}\{X_j:j=\overline{0,p}\}\subseteq S_c$.

Properties of the $n$-dimensional vector $\mathbf{i}$:
$$\mathbf{i}^T\mathbf{i}=\begin{bmatrix}1&\dots&1\end{bmatrix}\begin{bmatrix}1\\\vdots\\1\end{bmatrix}=n\ \text{[a scalar]};\qquad\mathbf{i}\,\mathbf{i}^T=\begin{bmatrix}1&\dots&1\\\vdots&\ddots&\vdots\\1&\dots&1\end{bmatrix}\ \text{[the square matrix of dimension $n$ with all entries 1]};$$
$$\bar X_j=\frac1n\sum_{i=1}^nx_{ij}\ \text{[a scalar, the mean of the vector $X_j$]};\qquad\sum_{i=1}^nx_{ij}=\mathbf{i}^TX_j=n\bar X_j\ \Rightarrow\ \bar X_j=\frac1n\,\mathbf{i}^TX_j;$$
$$\frac1n\,\mathbf{i}\,\mathbf{i}^TX_j=\mathbf{i}\,\bar X_j=\begin{bmatrix}\bar X_j\\\vdots\\\bar X_j\end{bmatrix}\ \text{[the $n$-dimensional vector with the scalar $\bar X_j$ in each component]};$$
$$\left(I_n-\frac1n\,\mathbf{i}\,\mathbf{i}^T\right)X_j=X_j-\mathbf{i}\,\bar X_j=\begin{bmatrix}x_{1j}-\bar X_j\\x_{2j}-\bar X_j\\x_{3j}-\bar X_j\\\vdots\\x_{nj}-\bar X_j\end{bmatrix}=M^0X_j\ \text{[the $n$-dimensional vector of the deviations from the mean of the vector $X_j$]}$$
[it may also be written $X_j-\bar X_j$, if one uses a convention employed in various programming languages, namely that adding a scalar to a vector means adding the scalar to each component of the vector].
The matrix $M^0=I_n-\dfrac1n\,\mathbf{i}\,\mathbf{i}^T$ transforms a vector into a new vector whose components are the old components minus the mean of the old components [the deviation-from-the-mean operator].

Properties of the operations with $\mathbf{i}$ and with $M^0$ ($M^0$ is used especially in computing sums of squared deviations):
- $M^0=\begin{bmatrix}1-\frac1n&\dots&-\frac1n\\\vdots&\ddots&\vdots\\-\frac1n&\dots&1-\frac1n\end{bmatrix}$ [the diagonal entries are $1-\frac1n$, and off the diagonal the entries are $-\frac1n$];
- $\det M^0=0$ [for instance, add all the rows to row 1: row 1 becomes null], so $\operatorname{rank}(M^0)<n$;
- $M^0=(M^0)^T$ [$M^0$ is a symmetric matrix];
- $M^0\,\mathbf{i}=0$ [the $n$-dimensional null vector]. Proof: $M^0\,\mathbf{i}=\left(I_n-\frac1n\,\mathbf{i}\,\mathbf{i}^T\right)\mathbf{i}=\mathbf{i}-\frac1n\,\mathbf{i}\,(\mathbf{i}^T\mathbf{i})=\mathbf{i}-\frac1n\,\mathbf{i}\,n=\mathbf{i}-\mathbf{i}=0$;
- $\mathbf{i}^TM^0=0^T$, so the scalar zero $0=0^TX_j=\mathbf{i}^TM^0X_j=\mathbf{i}^T(M^0X_j)$ [the sum of the deviations from the mean is zero].

- $M^0M^0=M^0$ [$M^0$ is idempotent]. Proof:
$$M^0M^0=\left(I_n-\frac1n\,\mathbf{i}\,\mathbf{i}^T\right)\left(I_n-\frac1n\,\mathbf{i}\,\mathbf{i}^T\right)=I_n-\frac1n\,\mathbf{i}\,\mathbf{i}^T-\frac1n\,\mathbf{i}\,\mathbf{i}^T+\frac1{n^2}\,\mathbf{i}\,\underbrace{(\mathbf{i}^T\mathbf{i})}_{=n}\,\mathbf{i}^T=I_n-\frac2n\,\mathbf{i}\,\mathbf{i}^T+\frac1n\,\mathbf{i}\,\mathbf{i}^T=M^0.$$
- For column $j$: $\sum_{i=1}^n\left(x_{ij}-\bar X_j\right)^2=X_j^TM^0X_j$. Proof:
$$\sum_{i=1}^n\left(x_{ij}-\bar X_j\right)^2=\left(M^0X_j\right)^T\left(M^0X_j\right)=X_j^T\left(M^0\right)^TM^0X_j=X_j^TM^0M^0X_j=X_j^TM^0X_j.$$
- For two distinct columns $j_1$ and $j_2$: $\sum_{i=1}^n\left(x_{ij_1}-\bar X_{j_1}\right)\left(x_{ij_2}-\bar X_{j_2}\right)=X_{j_1}^TM^0X_{j_2}$. Proof:
$$\sum_{i=1}^n\left(x_{ij_1}-\bar X_{j_1}\right)\left(x_{ij_2}-\bar X_{j_2}\right)=\left(M^0X_{j_1}\right)^T\left(M^0X_{j_2}\right)=X_{j_1}^TM^0M^0X_{j_2}=X_{j_1}^TM^0X_{j_2}.$$
- The matrix of observations premultiplied by $M^0$ gives a new matrix in which the mean of each column is subtracted from that column:
$$M^0X=M^0\begin{bmatrix}X_1&X_2&\dots&X_p\end{bmatrix}=\begin{bmatrix}M^0X_1&M^0X_2&\dots&M^0X_p\end{bmatrix}=\begin{bmatrix}X_1-\bar X_1&X_2-\bar X_2&\dots&X_p-\bar X_p\end{bmatrix}.$$
- Premultiplication by $M^0$ lowers the initial rank of the matrix $X$, since $\operatorname{rank}(M^0X)\le\min\left(\operatorname{rank}(M^0),\operatorname{rank}(X)\right)$.
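The properties of $M^0$ are easy to verify numerically; a minimal NumPy sketch (the data column is made up):

```python
import numpy as np

n = 5
i = np.ones((n, 1))                      # the column vector of ones
M0 = np.eye(n) - (i @ i.T) / n           # the deviation-from-the-mean operator

Xj = np.array([[2.0], [4.0], [6.0], [8.0], [10.0]])   # one observed column

assert np.allclose(M0, M0.T)             # symmetric
assert np.allclose(M0 @ M0, M0)          # idempotent
assert np.allclose(M0 @ i, 0)            # M0 i = 0
assert np.isclose(np.linalg.det(M0), 0)  # singular, rank < n

dev = M0 @ Xj                            # deviations from the mean
assert np.allclose(dev, Xj - Xj.mean())
assert np.isclose(dev.sum(), 0)          # deviations sum to zero
# sum of squared deviations as a quadratic form in M0
assert np.isclose(Xj.T @ M0 @ Xj, ((Xj - Xj.mean()) ** 2).sum())
```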

3.3.2. Operations with Lines. The rows are $X_i=\begin{bmatrix}1&x_{i1}&x_{i2}&\dots&x_{ip}\end{bmatrix}$ (they may be thought of as each representing an individual). The rows $X_i$, $i=\overline{1,n}$, belong to a space of dimension $p+1$ (of individuals) ($n$ individuals of dimension $p+1$), the "row space", denoted $S_l$.
$$X_i^T=\begin{bmatrix}1\\x_{i1}\\\vdots\\x_{ip}\end{bmatrix},\qquad X_i^TX_i=\begin{bmatrix}1\\x_{i1}\\x_{i2}\\\vdots\\x_{ip}\end{bmatrix}\begin{bmatrix}1&x_{i1}&x_{i2}&\dots&x_{ip}\end{bmatrix}=\begin{bmatrix}1&x_{i1}&x_{i2}&\dots&x_{ip}\\x_{i1}&x_{i1}^2&x_{i1}x_{i2}&\dots&x_{i1}x_{ip}\\x_{i2}&x_{i2}x_{i1}&x_{i2}^2&\dots&x_{i2}x_{ip}\\\vdots&\vdots&\vdots&\ddots&\vdots\\x_{ip}&x_{ip}x_{i1}&x_{ip}x_{i2}&\dots&x_{ip}^2\end{bmatrix}.$$
The matrix of observations is a "column of rows":
$$X=\begin{bmatrix}1&x_{11}&x_{12}&\dots&x_{1p}\\1&x_{21}&x_{22}&\dots&x_{2p}\\1&x_{31}&x_{32}&\dots&x_{3p}\\\vdots&\vdots&\vdots&\ddots&\vdots\\1&x_{n1}&x_{n2}&\dots&x_{np}\end{bmatrix}=\begin{bmatrix}X_1\\X_2\\\vdots\\X_n\end{bmatrix};\qquad X^T=\begin{bmatrix}X_1^T&X_2^T&\dots&X_n^T\end{bmatrix};$$
$$X^TX=\begin{bmatrix}X_1^T&X_2^T&\dots&X_n^T\end{bmatrix}\begin{bmatrix}X_1\\X_2\\\vdots\\X_n\end{bmatrix}=\sum_{i=1}^nX_i^TX_i.$$
$\dim\left(\operatorname{span}\{X_j:j=\overline{0,p}\}\right)$ [in $S_c$] $=\dim\left(\operatorname{span}\{X_i:i=\overline{1,n}\}\right)$ [in $S_l$].
A complete observation is the row $O_i=\begin{bmatrix}y_i&X_i\end{bmatrix}$.
$$Y=\begin{bmatrix}y_1\\y_2\\y_3\\\vdots\\y_n\end{bmatrix},\qquad\beta=\begin{bmatrix}\beta_0\\\beta_1\\\beta_2\\\vdots\\\beta_p\end{bmatrix},\qquad\varepsilon=\begin{bmatrix}\varepsilon_1\\\varepsilon_2\\\varepsilon_3\\\vdots\\\varepsilon_n\end{bmatrix},\quad\text{and:}$$
$$Y=\beta_0\,\mathbf{i}+\beta_1X_1+\beta_2X_2+\dots+\beta_pX_p+\varepsilon=\begin{bmatrix}\mathbf{i}&X_1&X_2&\dots&X_p\end{bmatrix}\beta+\varepsilon=X\beta+\varepsilon=\begin{bmatrix}X_1\beta\\X_2\beta\\\vdots\\X_n\beta\end{bmatrix}+\varepsilon.$$
$Y-X\beta=\varepsilon$, which can also be written blockwise:
$$\begin{bmatrix}Y&X\end{bmatrix}\begin{bmatrix}1\\-\beta\end{bmatrix}=\varepsilon,\qquad\text{row by row: }\varepsilon_i=O_i\begin{bmatrix}1\\-\beta\end{bmatrix},\quad\begin{bmatrix}Y&X\end{bmatrix}=[O_i]_{i=\overline{1,n}}.$$
When the model is viewed from the perspective of the columns, a distinct unit of measure is obtained for each $\beta_j$, so the interpretation of the coefficients $\beta_j$ as plain scalars is forced (they are interpreted as weights in a linear combination).
The sum of squared errors is
$$E(\beta)=(Y-X\beta)^T(Y-X\beta)=Y^TY-Y^TX\beta-\beta^TX^TY+\beta^TX^TX\beta=Y^TY-2Y^TX\beta+\beta^TX^TX\beta,$$
since the product $\beta^TX^TY$ has dimension $1\times1$, so it is equal to its transpose: $\beta^TX^TY=Y^TX\beta$.
The minimization problem $\min_\beta E(\beta)=E(\hat\beta)$ is solved ($\hat\beta$ being the solution of the problem): $\hat\beta$ is such that $e=Y-X\hat\beta\perp X$ [the residual vector is orthogonal to each column of $X$]:
$$Y-X\hat\beta\perp X\iff X^T\left(Y-X\hat\beta\right)=0\iff X^TY=X^TX\,\hat\beta$$
[the normal equations], so
$$\hat\beta=\left(X^TX\right)^{-1}X^TY\qquad\text{(under the hypothesis that the matrix }X^TX\text{ is invertible).}$$
An update formula [cf. formula (A-66)]: to the $n$ existing cases a new case (a new row $X_{(n+1)}$) is added; then
$$\left(X^TX+X_{(n+1)}^TX_{(n+1)}\right)^{-1}=\left(X^TX\right)^{-1}-\frac1{1+X_{(n+1)}\left(X^TX\right)^{-1}X_{(n+1)}^T}\left(X^TX\right)^{-1}X_{(n+1)}^TX_{(n+1)}\left(X^TX\right)^{-1}.$$
Ranks: $\operatorname{rank}(AB)\le\min\left(\operatorname{rank}(A),\operatorname{rank}(B)\right)$ and $\operatorname{rank}\left(X^TX\right)=\operatorname{rank}\left(XX^T\right)=\operatorname{rank}(X)$.
The structure of the $(p+1)\times(p+1)$ matrix $X^TX$:
$$X^TX=\begin{bmatrix}\mathbf{i}^T\mathbf{i}&\mathbf{i}^TX_1&\mathbf{i}^TX_2&\dots&\mathbf{i}^TX_p\\X_1^T\mathbf{i}&X_1^TX_1&X_1^TX_2&\dots&X_1^TX_p\\X_2^T\mathbf{i}&X_2^TX_1&X_2^TX_2&\dots&X_2^TX_p\\\vdots&\vdots&\vdots&\ddots&\vdots\\X_p^T\mathbf{i}&X_p^TX_1&X_p^TX_2&\dots&X_p^TX_p\end{bmatrix}=\begin{bmatrix}n&\sum_ix_{i1}&\sum_ix_{i2}&\dots&\sum_ix_{ip}\\\sum_ix_{i1}&\sum_ix_{i1}^2&\sum_ix_{i1}x_{i2}&\dots&\sum_ix_{i1}x_{ip}\\\sum_ix_{i2}&\sum_ix_{i2}x_{i1}&\sum_ix_{i2}^2&\dots&\sum_ix_{i2}x_{ip}\\\vdots&\vdots&\vdots&\ddots&\vdots\\\sum_ix_{ip}&\sum_ix_{ip}x_{i1}&\sum_ix_{ip}x_{i2}&\dots&\sum_ix_{ip}^2\end{bmatrix}.$$
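The update formula is the Sherman–Morrison identity applied to $X^TX$; it can be verified numerically (a sketch with made-up data):

```python
import numpy as np

rng = np.random.default_rng(1)
X = rng.normal(size=(10, 3))    # n = 10 existing cases, p + 1 = 3
c = rng.normal(size=(1, 3))     # the new row X_(n+1)

A_inv = np.linalg.inv(X.T @ X)
# rank-one update of the inverse: (X'X + c'c)^{-1}
updated = A_inv - (A_inv @ c.T @ c @ A_inv) / (1.0 + c @ A_inv @ c.T)

direct = np.linalg.inv(X.T @ X + c.T @ c)
assert np.allclose(updated, direct)
```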

Example. Nine yearly observations (1972–1980) are used; the explanatory variables are Year, Consumption and GNP, and the explained variable $Y$ is the Discount Rate:
$$X=\begin{bmatrix}1972&737.1&1185.9\\1973&812.0&1326.4\\1974&808.1&1434.2\\1975&976.4&1549.2\\1976&1084.3&1718.0\\1977&1204.4&1918.3\\1978&1346.5&2163.9\\1979&1507.2&2417.8\\1980&1667.2&2633.1\end{bmatrix},\qquad Y=\begin{bmatrix}4.50\\6.44\\7.83\\6.25\\5.50\\5.46\\7.46\\10.28\\11.77\end{bmatrix}.$$
$$X^TX=\begin{bmatrix}35141244&2.005\times10^7&3.2312\times10^7\\2.005\times10^7&1.2300\times10^7&1.9744\times10^7\\3.2312\times10^7&1.9744\times10^7&3.1715\times10^7\end{bmatrix},\qquad\det\left(X^TX\right)=4.6064\times10^{17}\neq0\ \text{(invertible)},$$
$$\left(X^TX\right)^{-1}=\begin{bmatrix}5.8389\times10^{-7}&4.5206\times10^{-6}&-3.4091\times10^{-6}\\4.5206\times10^{-6}&1.5291\times10^{-4}&-9.9802\times10^{-5}\\-3.4091\times10^{-6}&-9.9802\times10^{-5}&6.5636\times10^{-5}\end{bmatrix},$$
$$\hat\beta=\left(X^TX\right)^{-1}X^TY=\begin{bmatrix}-6.9004\times10^{-4}\\-2.9435\times10^{-2}\\2.2791\times10^{-2}\end{bmatrix}.$$
Meaning: the explained variable (Discount Rate) is explained using the explanatory variables Year, Consumption, GNP; one obtains the "explanation":
$$\text{DiscountRate}=\left(-6.9004\times10^{-4}\right)\cdot\text{Year}+\left(-2.9435\times10^{-2}\right)\cdot\text{Consumption}+\left(2.2791\times10^{-2}\right)\cdot\text{GNP}.$$
[The economic adequacy, the quality of the explanation, etc., are not discussed here.]
3.4. Special Types of Operators

3.4.1. Projection Operators.

3.4.1. Definition. Let $\mathbb{V}=V_1\oplus V_2$, so that each $x\in\mathbb{V}$ decomposes uniquely as $x=x_1+x_2$, with $x_1\in V_1$ and $x_2\in V_2$. The vector $x_1$ is called the projection of $x$ onto $V_1$ in the direction of $V_2$.

3.4.2. Definition. A linear operator $p(\cdot):\mathbb{V}\to\mathbb{V}$ is called a projection (projector) if $p(p(x))=p(x)$, $\forall x\in\mathbb{V}$ [the idempotence property].

3.4.3. Theorem. For an operator $p(\cdot)$ which is a projection: $\mathbb{V}=p(\mathbb{V})\oplus\ker p(\cdot)$.
Proof. Let $v\in\mathbb{V}$; then $v=p(v)+(v-p(v))$ and $p(v-p(v))=p(v)-p^2(v)=p(v)-p(v)=0$, so $v-p(v)\in\ker p(\cdot)$; therefore $\mathbb{V}=p(\mathbb{V})+\ker p(\cdot)$. The sum is direct because $v\in p(\mathbb{V})\cap\ker p(\cdot)$ implies $0=p(v)$ and $\exists u$, $v=p(u)$, so $0=p(v)=p^2(u)=p(u)=v$; hence $p(\mathbb{V})\cap\ker p(\cdot)=\{0\}$. $\square$

3.4.4. Proposition. Let $p(\cdot):\mathbb{V}\to\mathbb{V}$ be a projection operator. Then:
(1) $x\in\operatorname{Im}p(\cdot)=p(\mathbb{V})\iff x$ is null or is an eigenvector corresponding to the eigenvalue 1;
(2) $x\in\ker p(\cdot)\iff x$ is null or is an eigenvector corresponding to the eigenvalue 0.
[The spectral structure of a projection is: the eigenvalues are only 0 and 1 (with various degrees of multiplicity), and the eigenvectors are the vectors of the image of the operator (for the eigenvalue 1) and of the kernel (for the eigenvalue 0).]
Proof. From the previous theorem it is known that $\mathbb{V}=p(\mathbb{V})\oplus\ker p(\cdot)$.
$x=0\Rightarrow p(0)=0\Rightarrow0\in p(\mathbb{V})$. If $0\neq x\in p(\mathbb{V})$, then $x=p(u)$ for some $u$, so $p(x)=p(p(u))=p(u)=x$, i.e. $x$ is an eigenvector attached to the eigenvalue 1; conversely, $p(x)=x$ implies $x\in p(\mathbb{V})$.
If $x\neq0$ and $p(x)=0$, then $x\in\ker p(\cdot)$ and $p(x)=0=0\cdot x$, so $x$ is an eigenvector attached to the eigenvalue 0. $\square$

3.4.5. Proposition. If $p(\cdot):\mathbb{V}\to\mathbb{V}$ is a projection, then $(1_\mathbb{V}-p)(\cdot):\mathbb{V}\to\mathbb{V}$ is also a projection. Moreover, $\ker(1_\mathbb{V}-p)(\cdot)=\operatorname{Im}p(\cdot)$ and $\operatorname{Im}(1_\mathbb{V}-p)(\cdot)=\ker p(\cdot)$.
Proof. $(1_\mathbb{V}-p)^2=1_\mathbb{V}-2p+p^2=1_\mathbb{V}-p$, so $(1_\mathbb{V}-p)(\cdot)$ is a projection.
$x\in\ker(1_\mathbb{V}-p)(\cdot)\Rightarrow x-p(x)=0\Rightarrow x=p(x)\in\operatorname{Im}p(\cdot)$.
$x\in\operatorname{Im}(1_\mathbb{V}-p)(\cdot)\Rightarrow\exists y$, $x=(1_\mathbb{V}-p)(y)=y-p(y)\Rightarrow p(x)=p(y)-p(p(y))=p(y)-p(y)=0\Rightarrow x\in\ker p(\cdot)$.
$x\in\ker p(\cdot)\Rightarrow p(x)=0\Rightarrow x=x-p(x)=(1_\mathbb{V}-p)(x)\Rightarrow x\in\operatorname{Im}(1_\mathbb{V}-p)(\cdot)$. $\square$

3.4.6. Definition. A projection is called orthogonal if, in addition, $p(\mathbb{V})\perp\ker p(\cdot)$.

3.4.7. Example. For $\alpha\in\mathbb{R}$ fixed, the operator $p(\cdot):\mathbb{R}^2\to\mathbb{R}^2$ defined by
$$p(x)=\begin{bmatrix}0&0\\\alpha&1\end{bmatrix}\begin{bmatrix}x_1\\x_2\end{bmatrix}=\begin{bmatrix}0\\\alpha x_1+x_2\end{bmatrix}$$
is a projection, since
$$p(p(x))=\begin{bmatrix}0&0\\\alpha&1\end{bmatrix}\begin{bmatrix}0&0\\\alpha&1\end{bmatrix}\begin{bmatrix}x_1\\x_2\end{bmatrix}=\begin{bmatrix}0&0\\\alpha&1\end{bmatrix}\begin{bmatrix}x_1\\x_2\end{bmatrix}=p(x).$$
The kernel: $\begin{bmatrix}0&0\\\alpha&1\end{bmatrix}\begin{bmatrix}x_1\\x_2\end{bmatrix}=\begin{bmatrix}0\\0\end{bmatrix}\Rightarrow\alpha x_1+x_2=0\Rightarrow\ker p(\cdot)=\operatorname{span}\{(1,-\alpha)\}$.
The image: $\begin{bmatrix}0&0\\\alpha&1\end{bmatrix}\begin{bmatrix}x_1\\x_2\end{bmatrix}=\begin{bmatrix}y_1\\y_2\end{bmatrix}\Rightarrow y_1=0$ (the compatibility condition) $\Rightarrow\operatorname{Im}p(\cdot)=\operatorname{span}\{(0,1)\}$; moreover, $p((0,\beta))=\begin{bmatrix}0&0\\\alpha&1\end{bmatrix}\begin{bmatrix}0\\\beta\end{bmatrix}=\begin{bmatrix}0\\\beta\end{bmatrix}$ (the projection does not change the vectors of the image). The projection $p(\cdot)$ is orthogonal $\iff\langle(1,-\alpha),(0,1)\rangle=-\alpha=0\iff\alpha=0$.
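A quick numerical check of this example (the value of $\alpha$ is arbitrary, here nonzero):

```python
import numpy as np

alpha = 3.0
P = np.array([[0.0, 0.0],
              [alpha, 1.0]])

assert np.allclose(P @ P, P)                              # idempotent: a projection
assert np.allclose(P @ np.array([1.0, -alpha]), 0)        # (1, -alpha) spans ker p
assert np.allclose(P @ np.array([0.0, 5.0]), [0.0, 5.0])  # image vectors are fixed

# the eigenvalues of a projection are only 0 and 1
vals = np.sort(np.linalg.eigvals(P).real)
assert np.allclose(vals, [0.0, 1.0])

# not an orthogonal projection: the kernel is not orthogonal to the image
assert not np.isclose(np.array([1.0, -alpha]) @ np.array([0.0, 1.0]), 0)
```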

3.4.8. Definition. Let $\mathbb{X}$ and $\mathbb{Y}$ be two finite-dimensional vector spaces with inner products, and let $U(\cdot):\mathbb{X}\to\mathbb{Y}$ be a linear operator between the two structures. The adjoint operator of the operator $U(\cdot)$ is the linear operator $U^*(\cdot):\mathbb{Y}\to\mathbb{X}$ with the property:
$$\langle U(x),y\rangle_\mathbb{Y}=\langle x,U^*(y)\rangle_\mathbb{X},\quad\forall x\in\mathbb{X},\ \forall y\in\mathbb{Y}.$$

3.4.9. Remark. Uniqueness of the adjoint operator: the adjoint operator is unique, because if two operators $U_1^*(\cdot)$ and $U_2^*(\cdot)$ satisfied the relation, one would obtain $\langle x,U_1^*(y)\rangle_\mathbb{X}=\langle x,U_2^*(y)\rangle_\mathbb{X}$, $\forall x\in\mathbb{X}$, $\forall y\in\mathbb{Y}$; from Remark 3.0.37 (page 144) one obtains $U_1^*(y)=U_2^*(y)$, $\forall y\in\mathbb{Y}$, i.e. $U_1^*(\cdot)=U_2^*(\cdot)$. Existence of the adjoint operator: consider orthonormal bases $B_\mathbb{X}$, $B_\mathbb{Y}$ in each of the spaces $\mathbb{X}$, $\mathbb{Y}$, and the matrix $A$ attached to the operator $U(\cdot)$ in these bases. Consider the matrix $A^*=\bar A^T$, which is called the adjoint [or Hermitian-adjoint] matrix of the matrix $A$ [the adjoint matrix is the matrix obtained by transposing the initial matrix and complex-conjugating each entry of the matrix]. The linear operator from $\mathbb{Y}$ to $\mathbb{X}$ which, for the same choices of bases, has the matrix $A^*$ is exactly the adjoint operator [$U^*(\cdot):\mathbb{Y}\to\mathbb{X}$, $[U^*(y)]_{B_\mathbb{X}}=A^*[y]_{B_\mathbb{Y}}$]. When the spaces $\mathbb{X}$ and $\mathbb{Y}$ are real, the adjoint matrix is just the transposed matrix.

Properties of the adjoint matrix $A^*$:
- $O^*=O$ [the adjoint of the null matrix is the null matrix];
- $I^*=I$ [the adjoint of the identity matrix is the identity matrix];
- $(A+B)^*=A^*+B^*$;
- $(\alpha A)^*=\bar\alpha A^*$;
- $(AB)^*=B^*A^*$;
- $(A^*)^*=A$;
- if $A$ is square, then $\det A^*=\overline{\det A}$ [the determinant of the adjoint matrix is the complex conjugate of the determinant of the matrix];
- $\left(A^{-1}\right)^*=\left(A^*\right)^{-1}$ [if $A$ is invertible, then $A^*$ is also invertible, and the inverse of the adjoint is the adjoint of the inverse].

3.4.10. Remark. Properties of the adjoint operator $U^*(\cdot)$:
- $O^*(\cdot)=O(\cdot)$ [the adjoint of the null operator is the null operator];
- $I^*(\cdot)=I(\cdot)$ [for $\mathbb{X}=\mathbb{Y}$, the adjoint of the identity operator is the identity operator];
- $(U_1+U_2)^*(\cdot)=U_1^*(\cdot)+U_2^*(\cdot)$;
- $(\alpha U)^*(\cdot)=\bar\alpha U^*(\cdot)$;
- $(U_1\circ U_2)^*(\cdot)=(U_2^*\circ U_1^*)(\cdot)$;
- $(U^*)^*(\cdot)=U(\cdot)$;
- $\left(U^{-1}\right)^*(\cdot)=\left(U^*\right)^{-1}(\cdot)$ [if $U(\cdot)$ is invertible, then $U^*(\cdot)$ is also invertible, and the inverse of the adjoint is the adjoint of the inverse].

3.4.11. Remark (Relations between the kernels and the images of the operators). 1. $\ker U^*(\cdot)=(\operatorname{Im}U(\cdot))^\perp$; 2. $\ker U(\cdot)=(\operatorname{Im}U^*(\cdot))^\perp$; 3. $\operatorname{Im}U^*(\cdot)=(\ker U(\cdot))^\perp$; 4. $\operatorname{Im}U(\cdot)=(\ker U^*(\cdot))^\perp$.
Proof. Note first that all the sets which appear are vector subspaces of the corresponding spaces: $\ker U(\cdot)$ and $\operatorname{Im}U^*(\cdot)$ are subspaces of $\mathbb{X}$, while $\ker U^*(\cdot)$ and $\operatorname{Im}U(\cdot)$ are subspaces of $\mathbb{Y}$. Note also that, if $\mathbb{X}_0$ is a subspace of $\mathbb{X}$, then $\left(\mathbb{X}_0^\perp\right)^\perp=\mathbb{X}_0$. With these observations, statements 1. and 3. are obtained from one another by orthogonality, and likewise 2. and 4. Using $(U^*)^*(\cdot)=U(\cdot)$, statement 2. is statement 1. for the adjoint operator. It remains to prove statement 1.:
$$y\in(\operatorname{Im}U(\cdot))^\perp\iff y\perp\operatorname{Im}U(\cdot)\iff\forall x\in\mathbb{X},\ \langle y,U(x)\rangle_\mathbb{Y}=0\iff\forall x\in\mathbb{X},\ \langle U^*(y),x\rangle_\mathbb{X}=0\iff U^*(y)=0\iff y\in\ker U^*(\cdot).\ \square$$
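For real spaces with the standard dot product the adjoint matrix is the transpose; a minimal NumPy sketch of the defining property and of statement 1. above (the data are random):

```python
import numpy as np

rng = np.random.default_rng(2)
A = rng.normal(size=(3, 2))     # matrix of U : R^2 -> R^3 in orthonormal bases
A_adj = A.T                     # real case: the adjoint is the transpose

x, y = rng.normal(size=2), rng.normal(size=3)
# defining property: <U(x), y> = <x, U*(y)>
assert np.isclose((A @ x) @ y, x @ (A_adj @ y))

# ker U* = (Im U)^perp: a vector orthogonal to the columns of A is sent to 0 by A*
q, _ = np.linalg.qr(A, mode='complete')
w = q[:, 2]                     # orthogonal to Im U (the column space of A)
assert np.allclose(A_adj @ w, 0)
```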

3.4.12. Exercise. On $\mathbb{R}^3$ consider the bilinear functional
$$\langle(x_1,x_2,x_3),(y_1,y_2,y_3)\rangle_{\mathbb{R}^3}=10x_1y_1+3x_1y_2+3x_2y_1+2x_2y_2+x_2y_3+x_3y_2+x_3y_3.$$
(1) Show that $\langle\cdot,\cdot\rangle_{\mathbb{R}^3}$ defines an inner product on $\mathbb{R}^3$.
(2) Using this inner product, find the lengths of the vectors $x=(1,2,1)$ and $y=(1,1,1)$ and the cosine of the angle between $x$ and $y$.
(3) Find a vector perpendicular to $x$ and to $y$ (with respect to the inner product from point (1)).
(4) Find (with the Gram–Schmidt procedure) an orthonormal basis, starting from the canonical basis.
(5) Let $U(\cdot)\in\mathcal{L}(\mathbb{R}^2,\mathbb{R}^3)$ be defined by $U(x_1,x_2)=(x_1,\,x_1+x_2,\,4x_2-x_1)$. Find the adjoint of the operator $U(\cdot)$, if on $\mathbb{R}^2$ one considers the usual inner product, while on $\mathbb{R}^3$ one considers the inner product from point (1).

3.4.13. Solution.
(1) $\langle(x_1,x_2,x_3),(y_1,y_2,y_3)\rangle_{\mathbb{R}^3} = [x]_{E_3}^T\, G\, [y]_{E_3}$, where
\[ G = \begin{pmatrix} 10 & 3 & 0 \\ 3 & 2 & 1 \\ 0 & 1 & 1 \end{pmatrix}. \]
The matrix defining the inner product is symmetric, and by the Jacobi method $\Delta_0 = 1$, $\Delta_1 = 10$, $\Delta_2 = 11$, $\Delta_3 = 1$; the determinants are all strictly positive, so the attached quadratic functional is strictly positive definite.
(2) $[x]_{E_3} = (1, 2, 1)^T$;
\[ \langle x, x\rangle_{\mathbb{R}^3} = \begin{pmatrix} 1 \\ 2 \\ 1 \end{pmatrix}^T \begin{pmatrix} 10 & 3 & 0 \\ 3 & 2 & 1 \\ 0 & 1 & 1 \end{pmatrix} \begin{pmatrix} 1 \\ 2 \\ 1 \end{pmatrix} = 35 \implies \|x\| = \sqrt{\langle x, x\rangle_{\mathbb{R}^3}} = \sqrt{35}. \]
Similarly, for $[y]_{E_3} = (1, 1, 1)^T$ one obtains $\langle y, y\rangle_{\mathbb{R}^3} = 21$, so $\|y\| = \sqrt{21}$, and $\langle x, y\rangle_{\mathbb{R}^3} = 27$, so
\[ \cos(\widehat{x, y}) = \frac{\langle x, y\rangle_{\mathbb{R}^3}}{\|x\|\,\|y\|} = \frac{27}{\sqrt{35}\,\sqrt{21}} = \frac{27}{7\sqrt{15}}. \]
(3) $\langle x, z\rangle_{\mathbb{R}^3} = 0 \iff 16z_1 + 8z_2 + 3z_3 = 0$ and $\langle y, z\rangle_{\mathbb{R}^3} = 0 \iff 13z_1 + 6z_2 + 2z_3 = 0$. The solution of the system
\[ \begin{cases} 16z_1 + 8z_2 + 3z_3 = 0 \\ 13z_1 + 6z_2 + 2z_3 = 0 \end{cases} \]
is $z_1 = 2\alpha$, $z_2 = -7\alpha$, $z_3 = 8\alpha$, $\alpha \in \mathbb{R}$.
(4) Start from $e_1 = (1,0,0)$, $e_2 = (0,1,0)$, $e_3 = (0,0,1)$. Take $f_1 = e_1$ and $f_2 = \alpha e_1 + e_2$ such that $\langle f_1, f_2\rangle_{\mathbb{R}^3} = 0$, i.e. $\alpha = -\dfrac{\langle e_1, e_2\rangle_{\mathbb{R}^3}}{\langle e_1, e_1\rangle_{\mathbb{R}^3}} = -\dfrac{3}{10}$, so
\[ f_2 = -\tfrac{3}{10}(1,0,0) + (0,1,0) = \left(-\tfrac{3}{10},\ 1,\ 0\right). \]
Take $f_3 = \alpha f_1 + \beta f_2 + e_3$ such that $\langle f_1, f_3\rangle_{\mathbb{R}^3} = 0$ and $\langle f_2, f_3\rangle_{\mathbb{R}^3} = 0$, i.e.
\[ \begin{cases} \alpha\,\langle f_1, f_1\rangle_{\mathbb{R}^3} + \beta\,\langle f_1, f_2\rangle_{\mathbb{R}^3} = -\langle f_1, e_3\rangle_{\mathbb{R}^3} \\ \alpha\,\langle f_2, f_1\rangle_{\mathbb{R}^3} + \beta\,\langle f_2, f_2\rangle_{\mathbb{R}^3} = -\langle f_2, e_3\rangle_{\mathbb{R}^3} \end{cases} \iff \begin{cases} 10\,\alpha = 0 \\ \tfrac{11}{10}\,\beta = -1 \end{cases} \implies \alpha = 0,\ \beta = -\tfrac{10}{11}, \]
since $\langle f_1, f_2\rangle_{\mathbb{R}^3} = 0$, $\langle f_1, e_3\rangle_{\mathbb{R}^3} = 0$, $\langle f_2, f_2\rangle_{\mathbb{R}^3} = \tfrac{11}{10}$ and $\langle f_2, e_3\rangle_{\mathbb{R}^3} = 1$. Therefore
\[ f_3 = -\tfrac{10}{11}\left(-\tfrac{3}{10},\ 1,\ 0\right) + (0,0,1) = \left(\tfrac{3}{11},\ -\tfrac{10}{11},\ 1\right). \]
Normalizing: $\langle f_1, f_1\rangle_{\mathbb{R}^3} = 10 \implies u_1 = \tfrac{1}{\sqrt{10}}(1, 0, 0)$; $\langle f_2, f_2\rangle_{\mathbb{R}^3} = \tfrac{11}{10} \implies u_2 = \sqrt{\tfrac{10}{11}}\left(-\tfrac{3}{10},\ 1,\ 0\right)$; $\langle f_3, f_3\rangle_{\mathbb{R}^3} = \tfrac{1}{11} \implies u_3 = \sqrt{11}\left(\tfrac{3}{11},\ -\tfrac{10}{11},\ 1\right)$.
(5) $[U(x)]_{E_3} = A\,[x]_{E_2}$, with $A = \begin{pmatrix} 1 & 0 \\ 1 & 1 \\ -1 & 4 \end{pmatrix}$.
The adjoint operator is defined by
\[ \langle U(x), y\rangle_{\mathbb{R}^3} = \langle x, U^*(y)\rangle_{\mathbb{R}^2} \iff [U(x)]_{E_3}^T\, G\, [y]_{E_3} = [x]_{E_2}^T\, [U^*(y)]_{E_2} \iff [x]_{E_2}^T\, A^T G\, [y]_{E_3} = [x]_{E_2}^T\, [U^*(y)]_{E_2},\quad \forall x \in \mathbb{R}^2,\ \forall y \in \mathbb{R}^3, \]
so that
\[ [U^*(y)]_{E_2} = A^T G\, [y]_{E_3} = \begin{pmatrix} 1 & 1 & -1 \\ 0 & 1 & 4 \end{pmatrix} \begin{pmatrix} 10 & 3 & 0 \\ 3 & 2 & 1 \\ 0 & 1 & 1 \end{pmatrix} [y]_{E_3} = \begin{pmatrix} 13 & 4 & 0 \\ 3 & 6 & 5 \end{pmatrix} [y]_{E_3} \]
$\implies U^*(y) = (13y_1 + 4y_2,\ 3y_1 + 6y_2 + 5y_3)$.
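As a sanity check of point (5), the sketch below verifies the defining relation $\langle U(x), y\rangle_{\mathbb{R}^3} = \langle x, U^*(y)\rangle_{\mathbb{R}^2}$ on the basis vectors; the Gram matrix `G` and the formulas for `U` and `U_star` are the ones computed above.

```python
# Check <U(x), y>_G = <x, U*(y)> for the maps computed in point (5).

G = [[10, 3, 0], [3, 2, 1], [0, 1, 1]]

def U(x1, x2):
    return (x1, x1 + x2, 4 * x2 - x1)

def U_star(y1, y2, y3):
    return (13 * y1 + 4 * y2, 3 * y1 + 6 * y2 + 5 * y3)

def ip_R3(u, v):   # <u, v> = u^T G v
    return sum(u[i] * G[i][j] * v[j] for i in range(3) for j in range(3))

def ip_R2(u, v):   # the usual inner product on R^2
    return u[0] * v[0] + u[1] * v[1]

for x in [(1, 0), (0, 1)]:
    for y in [(1, 0, 0), (0, 1, 0), (0, 0, 1)]:
        assert ip_R3(U(*x), y) == ip_R2(x, U_star(*y))
```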
3.4.3. The Isometric, Unitary and Orthogonal Operators.
3.4.14. Definition. A linear operator $U(\cdot) : X \to Y$ is called an isometry if
\[ \forall x \in X,\quad \|U(x)\|_Y = \|x\|_X. \]
[it preserves lengths]
3.4.15. Remark. An isometry is an injective function [because $U(x) = 0 \implies \|U(x)\|_Y = 0 \implies \|x\|_X = 0 \implies x = 0$, so the kernel of the operator is $\{0\}$].
3.4.16. Remark. The isometry condition is equivalent to the preservation of the inner product:
\[ \langle U(x_1), U(x_2)\rangle_Y = \langle x_1, x_2\rangle_X,\quad \forall x_1, x_2 \in X. \]
Proof. $\|U(x_1 - x_2)\|_Y = \|x_1 - x_2\|_X \implies$
\[ \langle x_1 - x_2, x_1 - x_2\rangle = \langle U(x_1 - x_2), U(x_1 - x_2)\rangle = \langle U(x_1) - U(x_2), U(x_1) - U(x_2)\rangle \implies \]
\[ \langle x_1, x_1\rangle - \langle x_1, x_2\rangle - \langle x_2, x_1\rangle + \langle x_2, x_2\rangle = \langle U(x_1), U(x_1)\rangle - \langle U(x_1), U(x_2)\rangle - \langle U(x_2), U(x_1)\rangle + \langle U(x_2), U(x_2)\rangle \]
$\implies \langle x_1, x_2\rangle = \langle U(x_1), U(x_2)\rangle$ [so the condition in the definition implies the new condition].
Conversely, the new condition implies the condition in the definition by taking $x_1 = x_2$.
3.4.17. Remark. An operator $U(\cdot) : X \to Y$ is isometric if and only if $U^* \circ U = I_X$.
Proof. $U^* \circ U = I_X \implies \forall x \in X$, $\langle x, x\rangle = \langle (U^* \circ U)(x), x\rangle = \langle U(x), U(x)\rangle \implies \|x\| = \|U(x)\|$.
Conversely, if the operator is isometric, then, by the previous remark, $\langle x_1, x_2\rangle = \langle U(x_1), U(x_2)\rangle = \langle (U^* \circ U)(x_1), x_2\rangle$, $\forall x_1, x_2 \in X \implies (U^* \circ U)(x_1) = x_1$, $\forall x_1 \in X \implies U^* \circ U = I_X$.
3.4.18. Remark. Reformulation: a matrix is isometric if and only if its columns form an orthonormal family.
3.4.19. Remark (The Mazur–Ulam Theorem, , T. 3.1.2, page 76). If a (not necessarily linear) function $f(\cdot) : X \to Y$ satisfies the conditions $f(0) = 0$ and $\|x_1 - x_2\| = \|f(x_1) - f(x_2)\|$ [that is, it preserves distances], $\forall x_1, x_2 \in X$, then $f(\cdot)$ is linear. Moreover, the function $f(\cdot)$ is also an isometry.
Proof. Linearity:
a) Additivity: Let $x_1, x_2 \in X$; from the relation
\[ \left\|\frac{x_1 + x_2}{2} - x_1\right\| = \left\|\frac{x_1 + x_2}{2} - x_2\right\| = \frac{1}{2}\,\|x_1 - x_2\|, \]
using $\|x_1 - x_2\| = \|f(x_1) - f(x_2)\|$, one obtains:
\[ \left\|f\left(\frac{x_1 + x_2}{2}\right) - f(x_1)\right\| = \left\|f\left(\frac{x_1 + x_2}{2}\right) - f(x_2)\right\| = \frac{1}{2}\,\|f(x_1) - f(x_2)\|. \]
From the "characterization of the midpoint" it follows that $f\left(\dfrac{x_1 + x_2}{2}\right) = \dfrac{f(x_1) + f(x_2)}{2}$ (the Jensen functional equation).
For $x_2 = 0$: $f\left(\dfrac{x_1}{2}\right) = \dfrac{1}{2}\,f(x_1)$
$\implies \dfrac{f(x_1) + f(x_2)}{2} = f\left(\dfrac{x_1 + x_2}{2}\right) = \dfrac{1}{2}\,f(x_1 + x_2) \implies f(x_1 + x_2) = f(x_1) + f(x_2)$ (additivity).
b) Homogeneity:
From additivity it follows that $f(\alpha x) = \alpha f(x)$ for every rational $\alpha$.
Let $\alpha \in \mathbb{R}$ and let $(\alpha_n)_{n \in \mathbb{N}}$ be a sequence of rational numbers converging to $\alpha$. Then:
\[ \|f(\alpha x) - \alpha f(x)\| \le \|f(\alpha x) - f(\alpha_n x)\| + \|f(\alpha_n x) - \alpha_n f(x)\| + \|\alpha_n f(x) - \alpha f(x)\| = \|\alpha x - \alpha_n x\| + 0 + |\alpha_n - \alpha|\,\|f(x)\| = |\alpha - \alpha_n|\,\big(\|x\| + \|f(x)\|\big),\quad \forall n \in \mathbb{N}. \]
Since $\alpha_n \to \alpha$ and $\|x\| + \|f(x)\|$ is a fixed number, one obtains $f(\alpha x) = \alpha f(x)$ (homogeneity).
2. From the initial relation with $x_2 = 0$ one obtains $\|x\| = \|f(x)\|$, and since $f(\cdot)$ is a linear operator it follows that it is also an isometry.

3.4.20. Remark. A (not necessarily linear) function $f(\cdot) : X \to X$ which satisfies $\|x_1 - x_2\| = \|f(x_1) - f(x_2)\|$ is also called a rigid motion and has the form $f(x) = f(0) + T(x)$, with $T(\cdot)$ an isometric operator.
3.4.21. Definition. A linear operator $U(\cdot) : X \to Y$ is called unitary if it is isometric and invertible.
If, moreover, the spaces are real, the operator is also called orthogonal.
3.4.22. Remark. An isometry (between finite-dimensional spaces) is a unitary operator if and only if $\dim X = \dim Y$.
Proof. If it is an isometry, it is injective, and since the dimensions of the spaces are equal, it is in fact bijective.
Conversely, if the isometry is invertible, then the dimensions of the spaces must be equal.
3.4.23. Remark. Some properties of unitary operators:
If $U(\cdot)$ is unitary, then $U^{-1}(\cdot) = U^*(\cdot)$.
If $U(\cdot)$ is unitary, then $U^*(\cdot)$ is unitary; unitary operators are the operators that transform an orthonormal basis into another orthonormal basis.
The composition of two unitary operators (when the composition makes sense) is again a unitary operator.
The determinant of the matrix of a unitary operator has complex modulus 1 [$|\det A| = 1$; in general, the determinant of the matrix of a unitary operator is a complex number; if the operator is orthogonal, then the determinant is a real number and $\det A = \pm 1$].

3.4.24. Proposition. If $A = (a_{ij})_{i,j=\overline{1,n}}$ is the matrix of a unitary operator with respect to the canonical basis $(e_i)_{i=\overline{1,n}}$, then
\[ A^T A = A A^T = I_n; \]
the operator is invertible and $A^{-1} = A^T$.
[the columns of the matrix form an orthonormal family]
Proof. From the definition it follows that
\[ \delta_{ij} = \langle \operatorname{col}_i(A), \operatorname{col}_j(A)\rangle = \operatorname{row}_i\!\left(A^T\right) \cdot \operatorname{col}_j(A) \implies A^T A = I_n. \]

3.4.25. Remark. A unitary operator:
$\|U(x)\| = \|x\|$ (preserves the norm);
$\|U(x) - U(y)\| = \|x - y\|$ (preserves the distance);
$\langle U(x), U(y)\rangle = \langle x, y\rangle$ (preserves the angle).


3.4.26. Proposition. Every eigenvalue of a unitary operator is a real or complex number of modulus one [even if the operator is orthogonal, the eigenvalues may be complex].
Proof. For an eigenvector $x \neq 0$ with eigenvalue $\lambda$:
\[ \langle x, x\rangle = \langle U(x), U(x)\rangle = \langle \lambda x, \lambda x\rangle = \lambda\,\overline{\lambda}\,\langle x, x\rangle \implies \lambda\,\overline{\lambda} = 1,\ \text{i.e.}\ |\lambda| = 1. \]

3.4.27. Remark. There is an orthonormal basis formed of eigenvectors in which the matrix is of the type
\[ \begin{pmatrix}
R(\theta_1) & & & \\
 & \ddots & & \\
 & & R(\theta_m) & \\
 & & & \begin{matrix} \pm 1 & & \\ & \ddots & \\ & & \pm 1 \end{matrix}
\end{pmatrix},
\qquad
R(\theta_k) = \begin{pmatrix} \cos\theta_k & -\sin\theta_k \\ \sin\theta_k & \cos\theta_k \end{pmatrix}. \]
Geometrically, this structure means rotations (without homotheties) in the orthogonal planes corresponding to the complex eigenvalues; the directions corresponding to the real eigenvalues, combined two by two, form orthogonal planes in which rotations of $0^\circ$ or $180^\circ$ take place, to which, in the odd-dimensional case, a symmetry or an identity on the last direction is added.
3.4.28. Example. Consider the operator $U_\theta(\cdot) : \mathbb{R}^2 \to \mathbb{R}^2$ given by
\[ U_\theta(x) = \begin{pmatrix} \cos\theta & -\sin\theta \\ \sin\theta & \cos\theta \end{pmatrix} \begin{pmatrix} x_1 \\ x_2 \end{pmatrix}. \]
(1) The operator $U_\theta(\cdot)$ is a rotation in the plane by the angle $\theta$ [counterclockwise]. Particular cases: $U_{\pi/2}(\cdot)$ with matrix $\begin{pmatrix} 0 & -1 \\ 1 & 0 \end{pmatrix}$ [rotation by $90^\circ$], $U_{\pi/4}(\cdot)$ with matrix $\begin{pmatrix} \frac{\sqrt{2}}{2} & -\frac{\sqrt{2}}{2} \\ \frac{\sqrt{2}}{2} & \frac{\sqrt{2}}{2} \end{pmatrix}$ [rotation by $45^\circ$].
$U_\theta(x) = \begin{pmatrix} x_1\cos\theta - x_2\sin\theta \\ x_2\cos\theta + x_1\sin\theta \end{pmatrix}$. The operator is an isometry [it preserves lengths]:
\[ \left\|\begin{pmatrix} x_1\cos\theta - x_2\sin\theta \\ x_2\cos\theta + x_1\sin\theta \end{pmatrix}\right\| = \sqrt{(x_1\cos\theta - x_2\sin\theta)^2 + (x_2\cos\theta + x_1\sin\theta)^2} = \sqrt{x_1^2\cos^2\theta - 2x_1x_2\sin\theta\cos\theta + x_2^2\sin^2\theta + x_2^2\cos^2\theta + 2x_1x_2\sin\theta\cos\theta + x_1^2\sin^2\theta} = \sqrt{x_1^2 + x_2^2} = \|x\|, \]
so $\|U_\theta(x)\| = \|x\|$. The angle between the vector $x$ and the vector $U_\theta(x)$ satisfies
\[ \cos(\widehat{x, U_\theta(x)}) = \frac{\langle x, U_\theta(x)\rangle}{\|x\|\,\|U_\theta(x)\|} = \frac{x_1(x_1\cos\theta - x_2\sin\theta) + x_2(x_2\cos\theta + x_1\sin\theta)}{x_1^2 + x_2^2} = \frac{(x_1^2 + x_2^2)\cos\theta}{x_1^2 + x_2^2} = \cos\theta, \]
so $(\widehat{x, U_\theta(x)}) = \theta$.
(2) The family $\{v_1, v_2\}$ [of vectors formed by the columns of the matrix] is orthonormal: $v_1 = \begin{pmatrix} \cos\theta \\ \sin\theta \end{pmatrix}$, $v_2 = \begin{pmatrix} -\sin\theta \\ \cos\theta \end{pmatrix}$, $\|v_1\| = \|v_2\| = 1$, $\langle v_1, v_2\rangle = -\sin\theta\cos\theta + \sin\theta\cos\theta = 0$.
(3) The operator is invertible [so it is a unitary operator; since the matrix has only real entries, it is an orthogonal operator]:
\[ \begin{pmatrix} \cos\theta & -\sin\theta \\ \sin\theta & \cos\theta \end{pmatrix}^{-1} = \begin{pmatrix} \cos\theta & \sin\theta \\ -\sin\theta & \cos\theta \end{pmatrix} = \begin{pmatrix} \cos(-\theta) & -\sin(-\theta) \\ \sin(-\theta) & \cos(-\theta) \end{pmatrix} \implies U_\theta^{-1}(\cdot) = U_{-\theta}(\cdot). \]
(4) The characteristic polynomial is
\[ P(\lambda) = \det\begin{pmatrix} \cos\theta - \lambda & -\sin\theta \\ \sin\theta & \cos\theta - \lambda \end{pmatrix} = (\cos\theta - \lambda)^2 + \sin^2\theta = \lambda^2 - 2\lambda\cos\theta + 1, \]
with the complex roots $\lambda_1 = \cos\theta + i\sin\theta$, $\lambda_2 = \cos\theta - i\sin\theta$, both of complex modulus 1.
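The computations of this example can be replayed numerically; in the sketch below the angle `t = 0.7` and the vector `x` are arbitrary choices.

```python
# The rotation preserves lengths, the angle between x and U(x) is t,
# and the characteristic roots cos t +- i sin t have modulus 1.
import math

t = 0.7
c, s = math.cos(t), math.sin(t)

def rotate(x1, x2):
    return (c * x1 - s * x2, s * x1 + c * x2)

x = (3.0, -2.0)
y = rotate(*x)
assert math.isclose(math.hypot(*y), math.hypot(*x))      # ||U(x)|| = ||x||

cos_angle = (x[0] * y[0] + x[1] * y[1]) / (math.hypot(*x) * math.hypot(*y))
assert math.isclose(cos_angle, c)                         # the angle is t

l1, l2 = complex(c, s), complex(c, -s)                    # roots of P(l)
assert math.isclose(abs(l1), 1.0) and math.isclose(abs(l2), 1.0)
assert math.isclose((l1 + l2).real, 2 * c)                # sum of roots = 2 cos t
assert math.isclose((l1 * l2).real, 1.0)                  # product of roots = 1
```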
3.4.29. Remark. If an operator $U(\cdot) : \mathbb{R}^2 \to \mathbb{R}^2$ is orthogonal and its matrix has determinant 1, then the operator is a rotation.
Proof. Let $A = \begin{pmatrix} a & b \\ c & d \end{pmatrix}$ be the matrix of the operator.
The determinant must be 1, so $ad - bc = 1$.
The columns must be orthogonal, so $ab + cd = 0$.
The columns must have length 1, so $a^2 + c^2 = 1$ and $b^2 + d^2 = 1$; that is,
\[ A^T A = \begin{pmatrix} a^2 + c^2 & ab + cd \\ ab + cd & b^2 + d^2 \end{pmatrix} = \begin{pmatrix} 1 & 0 \\ 0 & 1 \end{pmatrix}. \]
The inverse matrix is the transpose, so $\begin{pmatrix} a & b \\ c & d \end{pmatrix}^{-1} = \begin{pmatrix} a & c \\ b & d \end{pmatrix}$.
From $\begin{pmatrix} a & b \\ c & d \end{pmatrix}^{-1} = \begin{pmatrix} d & -b \\ -c & a \end{pmatrix}$ one obtains $c = -b$, $a = d \implies A = \begin{pmatrix} a & b \\ -b & a \end{pmatrix}$, with $a^2 + b^2 = 1$. Choose $\theta$ such that $\cos\theta = a$; then $\sin\theta = -b$ and
\[ A = \begin{pmatrix} \cos\theta & -\sin\theta \\ \sin\theta & \cos\theta \end{pmatrix}. \]
If instead $\det A = -1$, one obtains in the same way $a = -d$, $b = c$, so $A = \begin{pmatrix} a & b \\ b & -a \end{pmatrix}$ with $a^2 + b^2 = 1$; with $\cos\theta = a$, $\sin\theta = b$, the general form is $A = \begin{pmatrix} \cos\theta & \sin\theta \\ \sin\theta & -\cos\theta \end{pmatrix}$ (a reflection).

3.4.4. The Normal Operator.
3.4.31. Definition. The linear operator $U(\cdot)$ is called normal if
\[ (U \circ U^*)(\cdot) = (U^* \circ U)(\cdot) \]
(the operator commutes with its adjoint) [for the compositions to make sense, the operator must have the same space as domain and codomain].
3.4.32. Proposition. Every eigenvector of a normal operator, attached to the eigenvalue $\lambda$, is an eigenvector of the adjoint operator, attached to the eigenvalue $\overline{\lambda}$.
Proof. For $x \neq 0$ an eigenvector of the operator [$U(x) = \lambda x$], we have
\[ U(U^*(x)) = U^*(U(x)) = U^*(\lambda x) = \lambda\, U^*(x), \]
which means that $U^*(x)$ is also an eigenvector of the operator $U(\cdot)$, attached to the eigenvalue $\lambda$; that is, the operator $U^*(\cdot)$ maps the eigenvectors of the operator $U(\cdot)$ corresponding to the eigenvalue $\lambda$ again into eigenvectors of the same type.
Moreover, for any two such eigenvectors $x$ and $y$ we have
\[ \langle U^*(x), y\rangle = \langle x, U(y)\rangle = \langle x, \lambda y\rangle = \overline{\lambda}\,\langle x, y\rangle = \langle \overline{\lambda}\, x, y\rangle, \]
so $U^*(x) = \overline{\lambda}\, x$.

3.4.33. Theorem. (The structure of a real normal operator) There is an orthonormal basis of eigenvectors in which the matrix of the operator has the form
\[ \begin{pmatrix}
C_1 & & & \\
 & \ddots & & \\
 & & C_m & \\
 & & & \begin{matrix} \lambda_{m+1} & & \\ & \ddots & \\ & & \lambda_r \end{matrix}
\end{pmatrix},
\qquad
C_j = \begin{pmatrix} \alpha_j & -\beta_j \\ \beta_j & \alpha_j \end{pmatrix}, \]
in which the blocks $C_j$ correspond to the complex eigenvalues $\alpha_j \pm i\beta_j$ and $\lambda_{m+1}, \ldots, \lambda_r$ are the real eigenvalues, the number of occurrences of each eigenvalue being equal to its order of multiplicity.
3.4.34. Remark. A normal operator represents a transformation consisting of rotations with homotheties in $m$ pairwise orthogonal planes and (only) homotheties in the remaining $r - m$ orthogonal directions.

3.4.35. Remark. The matrix
\[ \begin{pmatrix} \alpha_j & -\beta_j \\ \beta_j & \alpha_j \end{pmatrix} = \sqrt{\alpha_j^2 + \beta_j^2}\; \begin{pmatrix} \dfrac{\alpha_j}{\sqrt{\alpha_j^2 + \beta_j^2}} & \dfrac{-\beta_j}{\sqrt{\alpha_j^2 + \beta_j^2}} \\[2ex] \dfrac{\beta_j}{\sqrt{\alpha_j^2 + \beta_j^2}} & \dfrac{\alpha_j}{\sqrt{\alpha_j^2 + \beta_j^2}} \end{pmatrix} \]
represents a homothety of coefficient $\sqrt{\alpha_j^2 + \beta_j^2}$ composed with a rotation.

3.4.5. The Autoadjoint Operator.
3.4.36. Definition. The linear operator $U(\cdot) : V \to V$ is called self-adjoint if
\[ U^*(\cdot) = U(\cdot). \]
[If $V$ is a real vector space, a self-adjoint operator is also called a symmetric operator; if $V$ is a complex vector space, a self-adjoint operator is also called a Hermitian operator]
3.4.37. Proposition. The matrix of a self-adjoint linear operator with respect to an orthonormal basis of a real space is symmetric.

Proof. Let $x = \sum_{i=1}^n x_i e_i$, $y = \sum_{j=1}^n y_j e_j$ and $A = (a_{ij})_{i,j=\overline{1,n}}$, so that $U(e_i) = \sum_{k=1}^n a_{ki} e_k$. Then:
\[ \langle U(x), y\rangle = \left\langle U\!\left(\sum_{i=1}^n x_i e_i\right), \sum_{j=1}^n y_j e_j \right\rangle = \sum_{i=1}^n \sum_{j=1}^n x_i y_j\, \langle U(e_i), e_j\rangle = \sum_{i=1}^n \sum_{j=1}^n x_i y_j \left\langle \sum_{k=1}^n a_{ki} e_k,\ e_j \right\rangle = \sum_{i=1}^n \sum_{j=1}^n x_i y_j\, a_{ji}, \]
and analogously,
\[ \langle x, U(y)\rangle = \left\langle \sum_{i=1}^n x_i e_i,\ U\!\left(\sum_{j=1}^n y_j e_j\right) \right\rangle = \sum_{i=1}^n \sum_{j=1}^n x_i y_j\, \langle e_i, U(e_j)\rangle = \sum_{i=1}^n \sum_{j=1}^n x_i y_j \left\langle e_i,\ \sum_{k=1}^n a_{kj} e_k \right\rangle = \sum_{i=1}^n \sum_{j=1}^n x_i y_j\, a_{ij}. \]
From $\langle U(x), y\rangle = \langle x, U(y)\rangle$, $\forall x, y \in \mathbb{R}^n$, it follows that $a_{ij} = a_{ji}$, i.e. the matrix is symmetric ($A = A^T$).
3.4.38. Theorem. The eigenvalues of a real symmetric operator are real.
Proof. From $Av = \lambda v$, multiplying on the left by $\overline{v}^T$: $\overline{v}^T A v = \lambda\, \overline{v}^T v = \lambda \|v\|^2$.
From $Av = \lambda v$, by complex conjugation, $A\overline{v} = \overline{\lambda}\,\overline{v}$, and then, after multiplying on the left by $v^T$: $v^T A \overline{v} = \overline{\lambda}\, v^T \overline{v} = \overline{\lambda}\, \|v\|^2$.
But $\overline{v}^T A v = \left(\overline{v}^T A v\right)^T = v^T A^T \overline{v} = v^T A \overline{v}$, so $\lambda \|v\|^2 = \overline{\lambda}\, \|v\|^2 \implies \lambda = \overline{\lambda}$, i.e. $\lambda \in \mathbb{R}$.

3.4.39. Theorem. Eigenvectors associated with distinct eigenvalues of a real symmetric operator are pairwise orthogonal.
Proof.
\[ \left.\begin{aligned} Av_1 &= \lambda_1 v_1 \implies v_2^T A v_1 = \lambda_1\, v_2^T v_1 \\ Av_2 &= \lambda_2 v_2 \implies v_1^T A v_2 = \lambda_2\, v_1^T v_2 \\ v_2^T A v_1 &= \left(v_2^T A v_1\right)^T = v_1^T A^T v_2 = v_1^T A v_2,\quad v_2^T v_1 = v_1^T v_2 \\ \lambda_1 &\neq \lambda_2 \end{aligned}\right\} \implies v_1^T v_2 = 0. \]

3.4.40. Remark. There is an orthonormal basis formed of eigenvectors in which the matrix of a symmetric operator is diagonal, with the eigenvalues on the diagonal (because a symmetric operator is a normal operator without complex eigenvalues). Geometrically, a symmetric operator represents homotheties along orthogonal directions.
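A minimal $2\times 2$ illustration of Theorems 3.4.38 and 3.4.39 (real eigenvalues, orthogonal eigenvectors for distinct eigenvalues), worked by hand in plain Python; the symmetric matrix is an arbitrary example.

```python
# For a real symmetric 2x2 matrix A = [[a, b], [b, d]] with b != 0:
# the characteristic polynomial l^2 - tr*l + det has a nonnegative
# discriminant, and the eigenvectors (b, l - a) are orthogonal.
import math

a, b, d = 2.0, 1.0, 3.0
tr, det = a + d, a * d - b * b
disc = tr * tr - 4 * det          # equals (a - d)^2 + 4 b^2 >= 0
assert disc >= 0                  # hence real eigenvalues

l1 = (tr + math.sqrt(disc)) / 2
l2 = (tr - math.sqrt(disc)) / 2
v1 = (b, l1 - a)                  # eigenvector for l1
v2 = (b, l2 - a)                  # eigenvector for l2

for l, v in ((l1, v1), (l2, v2)):     # check A v = l v
    assert math.isclose(a * v[0] + b * v[1], l * v[0])
    assert math.isclose(b * v[0] + d * v[1], l * v[1])
# orthogonality of eigenvectors for distinct eigenvalues
assert math.isclose(v1[0] * v2[0] + v1[1] * v2[1], 0.0, abs_tol=1e-12)
```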
3.4.41. Proposition. On a complex vector space, an operator is Hermitian if and only if $\forall x \in V$, $\langle x, U(x)\rangle \in \mathbb{R}$.
Proof. $\langle x, U(x)\rangle \in \mathbb{R} \iff \langle x, U(x)\rangle = \overline{\langle x, U(x)\rangle} = \langle U(x), x\rangle = \langle x, U^*(x)\rangle \iff U(\cdot) = U^*(\cdot)$
[the last equivalence is obtained by applying Theorem 3.0.39, point 2., page 144]

3.4.6. The Antiautoadjoint Operator.
3.4.42. Definition. The linear operator $U(\cdot)$ is called anti-self-adjoint if
\[ U(\cdot) = -U^*(\cdot). \]
3.4.43. Proposition. An anti-self-adjoint operator has only purely imaginary eigenvalues.
Proof. From $U(\cdot) = -U^*(\cdot)$ it follows that $\lambda = -\overline{\lambda}$, i.e. the real part of the eigenvalue is zero.
3.4.44. Remark. In the real Jordan canonical basis, the cells of the matrix take the special form $\begin{pmatrix} 0 & -\beta_j \\ \beta_j & 0 \end{pmatrix}$, which, from a geometric point of view, means homotheties and rotations of $90^\circ$ in pairwise orthogonal planes.

3.5. The Leontief Model

The model⁴ in this section was developed by Wassily Leontief at the end of the 1920s, and it is the reason Wassily Leontief received the Nobel Prize in Economic Sciences in 1973. Among the alternative names used to signal the use of this model are: "the Input–Output Model", "Input–Output Analysis", "Interindustry Analysis", "the balance of inter-branch linkages", etc. All these names refer to various static and/or dynamic variants that share certain characteristics. In what follows, examples and Linear Algebra results used in some of these variants are presented.

⁴ This section draws heavily on the book , in which many of the details missing from this presentation can be found.

3.5.1. Example (A closed Leontief model). [adapted from ] Three firms (providing carpentry services (T), electrical services (E) and plumbing services (S)) mutually agree to exchange services over a fixed period (10 weeks). The required activity is given in the table:

                weeks of services performed by:
                     T     E     S
    for:   T         2     1     6
           E         4     5     1
           S         4     4     3
    total           10    10    10

For reasons related to legislation and taxes, each firm is required to declare, receive and pay reasonable amounts corresponding to the activities performed. The value/price of one week of activity for each firm should be around 1000 Euro, and the firms agree to adjust the values so that each pays exactly as much as it receives (a three-way barter system, i.e. an exchange in kind: in fact, each firm pays for the services it receives with the service it offers). Denote by $p_T$, $p_E$, $p_S$ the price received by each firm for one week of services. The equilibrium condition requires that the total received by each firm equal the total paid by it. One obtains the system:
\[ \begin{cases} 2p_T + p_E + 6p_S = 10p_T, \\ 4p_T + 5p_E + p_S = 10p_E, \\ 4p_T + 4p_E + 3p_S = 10p_S. \end{cases} \]
The first equation refers to T, and its interpretation is: over the given period, firm T is paid $10p_T$ for its activity (the right-hand side) and has to pay for 2 weeks of activity performed for it by itself at the price $p_T$, 1 week of activity performed by E at the price $p_E$, and 6 weeks of activity performed by S at the price $p_S$, which give a total of $2p_T + p_E + 6p_S$ (the left-hand side). The solution of the system is:
\[ p_T = \tfrac{31}{32}\,p_E, \qquad p_E = p_E, \qquad p_S = \tfrac{9}{8}\,p_E. \]
Among the possible solutions:
\[ p_E = 1000 \implies \begin{cases} p_T = \tfrac{31}{32} \cdot 1000 = 968.75 \\ p_E = 1000 \\ p_S = \tfrac{9}{8} \cdot 1000 = 1125 \end{cases} \qquad\quad p_T = 1000 \implies \begin{cases} p_T = 1000 \\ p_E = 1032.3 \\ p_S = 1161.3 \end{cases} \]
The system can also be written as
\[ \begin{pmatrix} \tfrac{2}{10} & \tfrac{1}{10} & \tfrac{6}{10} \\[0.5ex] \tfrac{4}{10} & \tfrac{5}{10} & \tfrac{1}{10} \\[0.5ex] \tfrac{4}{10} & \tfrac{4}{10} & \tfrac{3}{10} \end{pmatrix} \begin{pmatrix} p_T \\ p_E \\ p_S \end{pmatrix} = \begin{pmatrix} p_T \\ p_E \\ p_S \end{pmatrix}. \]
The matrix $E = \begin{pmatrix} \tfrac{2}{10} & \tfrac{1}{10} & \tfrac{6}{10} \\[0.5ex] \tfrac{4}{10} & \tfrac{5}{10} & \tfrac{1}{10} \\[0.5ex] \tfrac{4}{10} & \tfrac{4}{10} & \tfrac{3}{10} \end{pmatrix}$ was obtained by dividing each column by its sum, and it is an example of an input–output matrix, or exchange matrix.
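The equilibrium prices above can be verified exactly with rational arithmetic; the sketch below checks $E\,p = p$ for the solution with $p_E = 1000$.

```python
# Exact check of the closed-model equilibrium E p = p.
from fractions import Fraction as F

E = [[F(2, 10), F(1, 10), F(6, 10)],
     [F(4, 10), F(5, 10), F(1, 10)],
     [F(4, 10), F(4, 10), F(3, 10)]]
p = [F(31, 32) * 1000, F(1000), F(9, 8) * 1000]   # (968.75, 1000, 1125)

Ep = [sum(E[i][j] * p[j] for j in range(3)) for i in range(3)]
assert Ep == p                                     # the prices are in equilibrium
# each column of E sums to 1 (it is an exchange matrix)
assert all(sum(E[i][j] for i in range(3)) == 1 for j in range(3))
```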

The Leontief model uses observational data referring to a given geographical region (country, region, continent, the world, etc.). The activity of the region is decomposed into distinct industries or industrial activities, which, over a fixed period of time, produce and consume goods. Each industry produces goods by consuming goods produced by the other industries, including itself. This information about the production and consumption of the same list of industries (indexed from 1 to n) is gathered into a table of interindustry transactions. Row $i_0$ refers to industry $i_0$ and describes the decomposition/distribution of the quantity produced by industry $i_0$ among the industries that consume this output, so the entry in position $(i_0, j)$ is the quantity of finished product (output) produced by industry $i_0$ and consumed by industry $j$. Column $j_0$ describes the quantities consumed by industry $j_0$, decomposed/distributed by producing industries. Each industry may consume a part of what it produces, and the corresponding quantities have indices $(i_0, i_0)$.

Source: Fig 1.1, page 3, 

One observes that the output of an industry is first decomposed into two categories, according to the reason for which the respective quantity is purchased: "Final Demand" [consumption that does not lead to the production of other goods] and "Producers as consumers" [consumption that does lead to the production of other goods]. Each of these two categories is then decomposed into further components.
An example of data organization (data for 2005 on the Input–Output linkages between the Japanese regions) can be found at:
http://www.meti.go.jp/english/statistics/tyo/tiikiio/index.html
From the documents available at this address one may note, for example, that the data for 2005 were assembled for presentation in March 2010 (which should be a hint about the complexity and the effort required by such a task in real life).

3.5.2. Definition. A square matrix $A$ with nonnegative entries is called productive if there exists $v^0 \underset{\neq}{>} 0$ such that
\[ v^0 \underset{\neq}{>} A v^0. \]
(The relation $u \underset{\neq}{>} v$ between two vectors $u$ and $v$ will be called the strong inequality between vectors and means that every coordinate of the vector $u$ is strictly greater than the corresponding coordinate of the vector $v$.)

3.5.3. Theorem. The absolute value of any eigenvalue of a productive matrix is strictly less than 1.
Proof. Let $Av = \lambda v$ with $v \neq 0$, so that $\lambda v_i = \sum_{j=1}^n a_{ij} v_j$ for every $i$. Then
\[ |\lambda|\,|v_i| \le \sum_{j=1}^n a_{ij}\, |v_j| = \sum_{j=1}^n a_{ij}\, v_j^0\, \frac{|v_j|}{v_j^0} \le \left(\max_j \frac{|v_j|}{v_j^0}\right) \sum_{j=1}^n a_{ij}\, v_j^0 < \left(\max_j \frac{|v_j|}{v_j^0}\right) v_i^0, \quad \forall i = \overline{1,n}. \]
If, instead of $i$, one chooses the index at which $\frac{|v_i|}{v_i^0}$ attains its maximum, say $r$, the relation becomes:
\[ |\lambda|\, v_r^0\, \frac{|v_r|}{v_r^0} < \frac{|v_r|}{v_r^0}\, v_r^0 \implies |\lambda| < 1. \]

3.5.4. Theorem. If $A$ is productive, then $(I - A)^{-1}$ exists.
Proof.
\[ (I - A)\left(I + A + A^2 + \cdots + A^m\right) = I - A^{m+1}, \]
and from the preceding theorem it follows that $\lim_{m \to \infty} A^{m+1} = 0$ (the null matrix), so, passing to the limit, the relation becomes
\[ (I - A)\left(I + A + A^2 + \cdots + A^m + \cdots\right) = I, \]
hence $(I - A)^{-1} = I + A + A^2 + \cdots + A^m + \cdots$ exists.
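The theorem suggests a numerical check: for a small productive matrix, the partial sums $S_m = I + A + \cdots + A^m$ should satisfy $(I - A)\,S_m \approx I$. In the sketch below, the $2\times 2$ matrix `A` is an arbitrary productive example ($v^0 = (1, 1)$ works, since $Av^0 = (0.5,\ 0.5)$).

```python
# Convergence of the Neumann series I + A + A^2 + ... toward (I - A)^{-1}.

def matmul(X, Y):
    return [[sum(X[i][k] * Y[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

def matadd(X, Y):
    return [[X[i][j] + Y[i][j] for j in range(2)] for i in range(2)]

A = [[0.2, 0.3], [0.1, 0.4]]           # productive: A (1,1) = (0.5, 0.5)
I = [[1.0, 0.0], [0.0, 1.0]]

S, P = I, I                             # S = partial sum, P = current power A^m
for _ in range(200):
    P = matmul(P, A)
    S = matadd(S, P)

ImA = [[1 - A[0][0], -A[0][1]], [-A[1][0], 1 - A[1][1]]]
check = matmul(ImA, S)                  # should be (numerically) the identity
for i in range(2):
    for j in range(2):
        assert abs(check[i][j] - I[i][j]) < 1e-9
```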

3.5.5. Example. Leontief's model is
\[ Ax + y = x, \]
where $A$ is a productive matrix representing the matrix of technological coefficients (the coefficient of the output of good $j$ consumed for producing one unit of good $i$), $x$ is a column vector of production levels, and $y$ is the column vector of final demand. The relation can also be written
\[ y = (I - A)\,x, \]
and from the previous theorems $(I - A)^{-1}$ exists and
\[ x = (I - A)^{-1}\, y = \left(I + A + A^2 + \cdots + A^m + \cdots\right) y = y + Ay + A^2 y + \cdots + A^m y + \cdots, \]
a relation which can be interpreted as follows: to obtain a net final production $y$, the intermediate quantity $Ay$ needed for producing $y$ must be produced; for this, the quantity $A^2 y$ needed for $Ay$ must be produced, and so on. The total production $x$ has been decomposed into the final production $y$ and the intermediate productions $A^m y$ given by the matrices of intermediate consumption $A^m$.
The problem of characterizing productive matrices also admits a converse:
3.5.6. Theorem. If, for the nonnegative square matrix $A$, the matrix $(I - A)^{-1}$ exists and is nonnegative, then $A$ is productive.
Proof. Let $v \underset{\neq}{>} 0$ and set $x = (I - A)^{-1} v \ge 0$; then $x - Ax = v \underset{\neq}{>} 0$, so $x \underset{\neq}{>} Ax$, and moreover $x = Ax + v \ge v \underset{\neq}{>} 0$, so $A$ is productive.
3.5.7. Remark. The transpose of a productive matrix is also productive.
3.5.8. Theorem. (Perron–Frobenius) Let $A$ be a nonnegative real matrix. Then:
(1) If all the entries of the matrix $A$ are strictly positive, then there is an eigenvalue of maximal modulus which is real and strictly positive and to which one can associate an eigenvector with all entries strictly positive;
(2) If $A$ is nonzero, it has a real, strictly positive eigenvalue of maximal modulus, to which one can associate a (nonzero) eigenvector with nonnegative entries.
Proof.
(1) Let $\lambda_M$ be an eigenvalue of maximal modulus and let $v$ be an eigenvector associated with the eigenvalue $\lambda_M$. We have $Av = \lambda_M v$, that is, on coordinates, $\sum_{j=1}^n a_{ij} v_j = \lambda_M v_i$, $\forall i = \overline{1,n}$; hence
\[ |\lambda_M|\,|v_i| = \Big|\sum_{j=1}^n a_{ij} v_j\Big| \le \sum_{j=1}^n |a_{ij}|\,|v_j| = \sum_{j=1}^n a_{ij}\,|v_j|, \quad \forall i = \overline{1,n}. \]
Let $p = (|v_1|, \ldots, |v_n|)^T = |v|$; since $a_{ij} > 0$, the above reads
\[ |\lambda_M|\, p \le A p. \]
Suppose the inequality is strict in some coordinate, i.e. $\exists k \in \{1, \ldots, n\}$ such that $|\lambda_M|\,|v_k| < \sum_{j=1}^n a_{kj}\,|v_j|$, and let $z = (A - |\lambda_M|\, I)\, p \ge 0$, $z \neq 0$. Since $A$ has strictly positive entries, $Az \underset{\neq}{>} 0$, so there is $\varepsilon > 0$ such that $Az \ge \varepsilon\, Ap$; but
\[ Az = A\,(A - |\lambda_M|\, I)\, p = A^2 p - |\lambda_M|\, A p, \]
hence
\[ A^2 p = Az + |\lambda_M|\, Ap \ge \varepsilon\, Ap + |\lambda_M|\, Ap = (\varepsilon + |\lambda_M|)\, Ap. \]
With $B = \dfrac{1}{\varepsilon + |\lambda_M|}\, A$ we obtain $B\,Ap \ge Ap$ and, iterating,
\[ B^k\, Ap \ge Ap, \quad \forall k. \]
From the fact that $\lambda_M$ is an eigenvalue of maximal modulus it follows that all the eigenvalues of $B$ have modulus strictly less than 1, so $\lim_{m \to \infty} B^m = 0$ (the null matrix of order $n$); passing to the limit, it follows that $0 \ge Ap$, in contradiction with $Ap \underset{\neq}{>} 0$. Therefore
\[ |\lambda_M|\, p = A p, \]
so $|\lambda_M|$ itself is an eigenvalue of $A$, real and strictly positive, with the associated eigenvector $p$, whose entries are all strictly positive.
(2) Apply (1) to the matrix $A + \varepsilon U$, with $U$ the matrix of order $n$ having all entries equal to 1, and then pass to the limit as $\varepsilon \to 0^+$.
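Power iteration gives a quick (non-rigorous) illustration of the theorem: for a strictly positive matrix, it converges to a strictly positive dominant eigenvalue and a strictly positive eigenvector. The matrix below is an arbitrary positive example.

```python
# Power iteration on a strictly positive matrix.
import math

A = [[2.0, 1.0], [1.0, 3.0]]

v = [1.0, 1.0]
lam = 0.0
for _ in range(100):
    w = [A[0][0] * v[0] + A[0][1] * v[1],
         A[1][0] * v[0] + A[1][1] * v[1]]
    lam = math.hypot(*w)              # dominant eigenvalue estimate
    v = [w[0] / lam, w[1] / lam]      # normalized eigenvector estimate

assert lam > 0 and v[0] > 0 and v[1] > 0      # positive, as the theorem predicts
# v is (approximately) an eigenvector: A v ~ lam v
assert math.isclose(A[0][0] * v[0] + A[0][1] * v[1], lam * v[0], rel_tol=1e-6)
assert math.isclose(A[1][0] * v[0] + A[1][1] * v[1], lam * v[1], rel_tol=1e-6)
```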

3.6. Vector Spaces over Complex Numbers

The purpose of this section is to summarize the characteristics of finite-dimensional vector spaces over $\mathbb{C}$, represented as $(\mathbb{C}^n, \mathbb{C})$. Throughout this section, the horizontal bar above an object refers to the complex conjugate [or, for enumerations, $\overline{1, n}$ denotes all the numbers from 1 to n].
If $z = (z_1, \ldots, z_n) \in \mathbb{C}^n$, then $\overline{z} = (\overline{z_1}, \ldots, \overline{z_n})$ [the conjugate of a vector is the vector of the conjugates of its components].
If $u, v \in \mathbb{C}^n$, then $\overline{u + v} = \overline{u} + \overline{v}$.
If $\alpha \in \mathbb{C}$ and $z \in \mathbb{C}^n$, then $\overline{\alpha z} = \overline{\alpha}\, \overline{z}$.
The inner product cannot be extended to $\mathbb{C}$ in the same form, because $(1, i) \cdot (1, i) = 1 \cdot 1 + i \cdot i = 0$, so if the same form of inner product as on $\mathbb{R}$ were used, one would obtain nonzero vectors of zero length.
The form used is $\langle u, v\rangle = \sum_{i=1}^n u_i \overline{v_i}$ (a sesquilinear functional), for which:
$\langle u + v, w\rangle = \langle u, w\rangle + \langle v, w\rangle$
$\langle u, v + w\rangle = \langle u, v\rangle + \langle u, w\rangle$
$\langle \alpha u, v\rangle = \alpha\,\langle u, v\rangle$
$\langle u, \alpha v\rangle = \overline{\alpha}\,\langle u, v\rangle$
$\langle u, v\rangle = \overline{\langle v, u\rangle}$
$\|u\|^2 = \langle u, u\rangle = \sum_{i=1}^n u_i \overline{u_i} \in \mathbb{R}_+$ and $\|u\| = 0 \iff u = 0$.
If $A = (a_{ij})_{i=\overline{1,m},\ j=\overline{1,n}}$, then $\overline{A} = (\overline{a_{ij}})_{i=\overline{1,m},\ j=\overline{1,n}}$ [the complex conjugate of a matrix is the matrix whose components are the complex conjugates of the components of the initial matrix].
$\overline{A + B} = \overline{A} + \overline{B}$ [the complex conjugate of a sum is the sum of the complex conjugates]
$\overline{\alpha A} = \overline{\alpha}\, \overline{A}$ [the complex conjugate of the product with a scalar is the product of the conjugate scalar with the complex conjugate of the matrix]
$\overline{\overline{A}} = A$ [the complex conjugate of the complex conjugate is the initial matrix]
$\overline{A^T} = \left(\overline{A}\right)^T$ [the complex conjugate of the transpose is the transpose of the complex conjugate]
$A^* = \overline{A}^T$ is called the adjoint of the matrix $A$.
A matrix is called Hermitian (self-adjoint) if $A = A^*$.
If $A$ is a matrix with real components, then $A^* = A^T$.
$(A + B)^* = A^* + B^*$
$(\alpha A)^* = \overline{\alpha}\, A^*$
$(A^*)^* = A$
$\langle u, v\rangle = u^T \overline{v}$
$\langle Au, v\rangle = \langle u, A^* v\rangle$
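A quick check of the motivating computation above ($(1, i)\cdot(1, i) = 0$) and of the basic sesquilinear rules, in plain Python:

```python
# Why the real bilinear formula fails on C^n, and what the sesquilinear
# form gives instead.

def bilinear(u, v):        # the real bilinear formula, extended verbatim to C^n
    return sum(a * b for a, b in zip(u, v))

def herm(u, v):            # <u, v> = sum_i u_i * conj(v_i)
    return sum(a * b.conjugate() for a, b in zip(u, v))

u, v = (1, 1j), (2, 3)
alpha = 2 - 1j

assert bilinear(u, u) == 0                   # nonzero vector of "zero length"
assert herm(u, u) == 2                       # ||u||^2 = 1 + i*(-i) = 2 > 0
assert herm(u, v) == herm(v, u).conjugate()  # <u, v> = conj(<v, u>)
assert herm(tuple(alpha * a for a in u), v) == alpha * herm(u, v)  # <au, v> = a<u, v>
```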

CHAPTER 4

Affine Spaces
, 
4.1. Definitions
4.1.1. Definition. Let $(\mathbb{V}, \mathbb{K})$ be a vector space. The set $\mathbb{A}$ is called an affine space over $\mathbb{V}$ if there is a function (denoted additively) $+ : \mathbb{A} \times \mathbb{V} \to \mathbb{A}$, $(P, \vec{v}) \mapsto P + \vec{v} \in \mathbb{A}$, with the properties:
(1) $P + \vec{0} = P$
(2) $P + (\vec{v} + \vec{w}) = (P + \vec{v}) + \vec{w}$
(3) $\forall P, Q \in \mathbb{A}$, $\exists!\, \vec{v}_{P,Q} \in \mathbb{V}$, $P + \vec{v}_{P,Q} = Q$ [the vector $\vec{v}_{P,Q}$ is denoted $\overrightarrow{PQ}$]
The affine space is denoted $(\mathbb{A}, \mathbb{V}, \mathbb{K})$. An affine space is a transitive action of the additive group of the vector space.
The dimension of the affine space is, by definition, the dimension of the attached vector space.
Remarks:
The sign "+" may have different meanings:
$\alpha + \beta$: the addition of two scalars from $\mathbb{K}$
$\vec{v} + \vec{w}$: the addition of two vectors from $\mathbb{V}$
$P + \vec{v}$: the function defining the affine structure [the addition of a point and a position vector]
In the context of groups, the functions satisfying (1) and (2) are called "actions", and those also satisfying (3) are called "transitive actions".
The map $\mathbb{A} \ni Q \mapsto \overrightarrow{PQ} \in \mathbb{V}$ is a bijection.
The standard affine space attached to a vector space is $\mathbb{A} = \mathbb{V}$, with vector addition as the affine mapping:
the set $\mathbb{V}$ has elements that play a double role:
as a vector space, an element is a vector;
as an affine space, an element is a point;
the vector $\vec{0}$ is the origin of the affine space [?];
any point $P$ may be seen as the "position vector" $\overrightarrow{OP}$, with $O = \vec{0}$;
the relation $P + \overrightarrow{PQ} = Q$ may be seen as $\overrightarrow{OP} + \overrightarrow{PQ} = \overrightarrow{OQ}$;
for each point $P$, the set of all vectors $\overrightarrow{PQ}$ with origin in $P$ is a vector space.
4.1.2. Definition. [Affine subspace] [linear subvarieties] [linear varieties] If $(\mathbb{V}_0, \mathbb{K})$ is a vector subspace of $(\mathbb{V}, \mathbb{K})$ and $\mathbb{B} \subseteq \mathbb{A}$, then $(\mathbb{B}, \mathbb{V}_0, \mathbb{K})$ is an affine subspace of $(\mathbb{A}, \mathbb{V}, \mathbb{K})$ if: $\forall P \in \mathbb{B}$, $\forall \vec{v} \in \mathbb{V}_0$, $P + \vec{v} \in \mathbb{B}$, and $\forall P, Q \in \mathbb{B}$, $\overrightarrow{PQ} \in \mathbb{V}_0$.
affine straight lines: affine subspaces of dimension 1
affine planes: affine subspaces of dimension 2
affine hyperplanes: affine subspaces of dimension $\dim \mathbb{A} - 1$
Notation: $\forall P \in \mathbb{A}$, $\forall \mathbb{V}_0$ vector subspace of $\mathbb{V}$, $P + [\mathbb{V}_0] = \{Q \in \mathbb{A};\ Q = P + \vec{v},\ \vec{v} \in \mathbb{V}_0\}$ [$\mathbb{B} = P + [\mathbb{V}_0]$ is the linear variety through $P$ directed by $\mathbb{V}_0$] [$\mathbb{V}_0$ is the direction of $\mathbb{B}$]
4.1.3. Definition. [The linear variety generated by a finite number of points]
For $P_1, \ldots, P_n \in \mathbb{A}$, the linear variety generated by the points, denoted by $\langle P_1, \ldots, P_n\rangle$, is the smallest linear variety containing the points.
The dimension of the linear subspace $\left\langle \overrightarrow{P_1P_2}, \ldots, \overrightarrow{P_1P_r} \right\rangle$ is the dimension of the linear variety. When the dimension of $\left\langle \overrightarrow{P_1P_2}, \ldots, \overrightarrow{P_1P_r} \right\rangle$ is maximal, which means $r - 1$, the set of points is called affinely independent.
4.1.4. Definition. [The sum of two linear varieties] $L_1 + L_2$ is the smallest linear variety containing both varieties.
4.1.5. Definition. Two linear varieties $P + [\mathbb{V}_1]$ and $Q + [\mathbb{V}_2]$ are called "parallel" when $\mathbb{V}_1 \subseteq \mathbb{V}_2$ or $\mathbb{V}_2 \subseteq \mathbb{V}_1$.
4.1.6. Definition. [Affine Frames] An affine frame on $(\mathbb{A}, \mathbb{V}, \mathbb{K})$ is a set $\mathcal{R} = \{P; (e_1, \ldots, e_n)\}$, where $P \in \mathbb{A}$ (is called the "origin") and $(e_1, \ldots, e_n)$ is a basis of $\mathbb{V}$.
4.1.7. Definition. [Affine coordinates] Given $(\mathbb{A}, \mathbb{V}, \mathbb{K})$ and $\mathcal{R}$, $\forall Q \in \mathbb{A}$, $\exists!\, q_1, \ldots, q_n \in \mathbb{K}$, $\overrightarrow{PQ} = \sum_{i=1}^n q_i e_i$.
Notation: $[Q]_{\mathcal{R}} = (q_1, \ldots, q_n)$
4.1.8. Definition. Barycenter: Given $r$ points $P_1, P_2, \ldots, P_r$, the barycenter $G$ is:
\[ G = P_1 + \frac{1}{r}\left(\overrightarrow{P_1P_1} + \overrightarrow{P_1P_2} + \cdots + \overrightarrow{P_1P_r}\right). \]
For two points, $G$ is the midpoint: $G = P_1 + \dfrac{1}{2}\left(\overrightarrow{P_1P_1} + \overrightarrow{P_1P_2}\right)$.
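In coordinates (taking $\mathbb{A} = \mathbb{R}^2$ as the standard affine space described above), the barycenter formula can be sketched as follows; the sample points are arbitrary.

```python
# Barycenter G = P1 + (1/r)(P1P1 + P1P2 + ... + P1Pr) in coordinates.
from fractions import Fraction as F

def vec(P, Q):                     # the vector PQ
    return tuple(q - p for p, q in zip(P, Q))

def barycenter(points):
    P1, r = points[0], len(points)
    s = [sum(vec(P1, P)[i] for P in points) for i in range(len(P1))]
    return tuple(P1[i] + s[i] / r for i in range(len(P1)))

P1, P2 = (F(0), F(0)), (F(4), F(2))
assert barycenter([P1, P2]) == (F(2), F(1))     # the midpoint of P1 and P2
# for three points, the barycenter is the centroid of the triangle
assert barycenter([(F(0), F(0)), (F(3), F(0)), (F(0), F(3))]) == (F(1), F(1))
```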
4.1.9. Definition. Collinear points: $P$, $Q$, $R$ are collinear when the linear variety generated by them, $\langle P, Q, R\rangle$, has dimension 1 (it is a straight line); equivalently, the linear subspace $\left\langle \overrightarrow{PQ}, \overrightarrow{PR} \right\rangle$ has dimension 1.
4.1.10. Definition. Simple ratio: Consider three collinear points $A, B, C \in \mathbb{A}$. The simple ratio $(A; B; C)$ is the unique scalar $\lambda$ such that $\overrightarrow{AB} = \lambda\, \overrightarrow{AC}$.

4.1.1. Properties. Prop: [Properties of affine spaces]

(1) $P + \vec v = P + \vec w \Rightarrow \vec v = \vec w$ [we may cancel/reduce points] [for any fixed $P \in \mathbb{A}$, the function $V \ni \vec v \mapsto P + \vec v \in \mathbb{A}$ is injective].
Proof: Denote by $Q = P + \vec v = P + \vec w$. Then both vectors satisfy condition 3, which means that $\vec v$ and $\vec w$ satisfy the defining property of $\overrightarrow{PQ}$. Since the vector $\overrightarrow{PQ}$ is unique, we have $\vec v = \vec w$.

(2) $P + \vec v = Q + \vec v \Rightarrow P = Q$ [we may cancel/reduce vectors] [for any fixed $\vec v \in V$, the function $\mathbb{A} \ni P \mapsto P + \vec v \in \mathbb{A}$ is injective].
Proof: From 2., $P = P + \vec 0 = P + (\vec v - \vec v) = (P + \vec v) + (-\vec v) = (Q + \vec v) + (-\vec v) = Q + (\vec v - \vec v) = Q + \vec 0 = Q$.

(3) $\overrightarrow{PQ} = \vec 0 \iff P = Q$ [the null vector is any vector which has the same origin and endpoint].
Proof: From 3., if $P = Q$ then $P + \overrightarrow{PP} = P$, and since $\overrightarrow{PP}$ is unique and $\vec 0$ also satisfies the relation, $\overrightarrow{PP} = \vec 0$. If $\overrightarrow{PQ} = \vec 0$, then $P + \vec 0 = Q$ and from 1. $P = Q$.

(4) $\overrightarrow{PQ} = -\overrightarrow{QP}$ [the negative vector is the vector with the opposite direction].
Proof: Since $P + \overrightarrow{PQ} = Q$ and $Q + \overrightarrow{QP} = P$, it follows that $P + \overrightarrow{PQ} + \overrightarrow{QP} = P$, which means that $\overrightarrow{PQ} + \overrightarrow{QP} = \vec 0$.

(5) $\overrightarrow{PQ} + \overrightarrow{QR} = \overrightarrow{PR}$ [Chasles' identity] [vector addition satisfies the parallelogram law] [Axiom [A 1] in , page 98].
Proof: $P + \overrightarrow{PQ} = Q$, $Q + \overrightarrow{QR} = R$, $P + \overrightarrow{PR} = R$ $\Rightarrow$ $P + \overrightarrow{PQ} + \overrightarrow{QR} = Q + \overrightarrow{QR} = R$ $\Rightarrow$ $\overrightarrow{PQ} + \overrightarrow{QR} = \overrightarrow{PR}$.

(6) $\forall P \in \mathbb{A}$, $\forall \vec v \in V$, $\exists! Q \in \mathbb{A}$, $\overrightarrow{PQ} = \vec v$ [surjectivity].
Proof: For $Q = P + \vec v$, we have $\overrightarrow{PQ} = \vec v$.

(7) $\overrightarrow{PQ} = \overrightarrow{PR} \Rightarrow Q = R$ [injectivity].
Proof: $P + \overrightarrow{PQ} = Q$, $P + \overrightarrow{PR} = R$; if $\overrightarrow{PQ} = \overrightarrow{PR}$ then $P + \overrightarrow{PQ} = P + \overrightarrow{PR} \Rightarrow Q = R$.

The previous surjectivity and injectivity for the mapping $Q \mapsto \overrightarrow{PQ}$ are the axiom [A 2] in , page 98.

(8) $\overrightarrow{PQ} = \overrightarrow{RS} \Rightarrow \overrightarrow{PR} = \overrightarrow{QS}$ [parallels between parallels are equal].
Proof: $P + \overrightarrow{PQ} = Q$, $R + \overrightarrow{RS} = S$, $P + \overrightarrow{PR} = R$, $Q + \overrightarrow{QS} = S$, so
$\overrightarrow{PR} = \overrightarrow{PQ} + \overrightarrow{QR} = \overrightarrow{QR} + \overrightarrow{RS} = \overrightarrow{QS}$.
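As an illustration (not part of the notes themselves), the properties above can be checked on the standard affine structure of $\mathbb{R}^2$: points and vectors are both 2-tuples, $P + \vec v$ is componentwise addition, and $\overrightarrow{PQ} = Q - P$. A minimal sketch in Python, with the sample points chosen here for illustration:

```python
# Model of the standard affine space: points and vectors are 2-tuples;
# P + v adds componentwise, vec(P, Q) is the vector PQ-> determined by (P, Q).

def add(P, v):
    return (P[0] + v[0], P[1] + v[1])

def vec(P, Q):
    return (Q[0] - P[0], Q[1] - P[1])

P, Q, R = (1.0, 2.0), (4.0, 3.0), (0.0, -1.0)

# Chasles' identity: PQ-> + QR-> = PR->
PQ, QR, PR = vec(P, Q), vec(Q, R), vec(P, R)
assert (PQ[0] + QR[0], PQ[1] + QR[1]) == PR

# PQ-> = -QP->, and P + PQ-> = Q
QP = vec(Q, P)
assert PQ == (-QP[0], -QP[1])
assert add(P, PQ) == Q
```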

Prop: If $(\mathbb{B}; V_0; K)$ is an affine subspace of $(\mathbb{A}; V; K)$, then $\forall P \in \mathbb{B}$, $P + [V_0] = \mathbb{B}$.

Proof:

Prop: Consider $\mathbb{B} = P + [V_0]$. Then:
$Q \in P + [V_0] \iff \overrightarrow{PQ} \in [V_0]$;
$Q \in P + [V_0] \Rightarrow Q + [V_0] = P + [V_0]$;
$Q, R \in P + [V_0] \Rightarrow \overrightarrow{QR} \in [V_0]$.

Prop: For $P_1, \dots, P_r \in \mathbb{A}$, $\langle P_1, \dots, P_r\rangle = P_1 + \left\langle \overrightarrow{P_1P_2}, \dots, \overrightarrow{P_1P_r}\right\rangle$.
Proof: $P_1 + \left\langle \overrightarrow{P_1P_2}, \dots, \overrightarrow{P_1P_r}\right\rangle$ is a linear variety which contains all the points $P_k$ and which is contained in each linear variety which contains the points.
The dimension of $\left\langle \overrightarrow{P_1P_2}, \dots, \overrightarrow{P_1P_r}\right\rangle$ is the dimension of the linear variety; this dimension is maximal when it equals $r - 1$.
P: [Intersection of two linear varieties]
$(P + [V_1]) \cap (Q + [V_2]) \neq \emptyset \iff \overrightarrow{PQ} \in V_1 + V_2$
Proof: $R \in (P + [V_1]) \cap (Q + [V_2]) \iff$
$\iff \exists v_1 \in V_1$ such that $R = P + v_1$ and $\exists v_2 \in V_2$ such that $R = Q + v_2 \iff$
$\iff \overrightarrow{PR} \in V_1$ and $\overrightarrow{QR} \in V_2 \Rightarrow \overrightarrow{PQ} = \overrightarrow{PR} + \overrightarrow{RQ} \in V_1 + V_2$.
Conversely, if $\overrightarrow{PQ} \in V_1 + V_2$ then $\exists v_1 \in V_1$ and $\exists v_2 \in V_2$ such that $\overrightarrow{PQ} = v_1 + v_2 \Rightarrow$
$\Rightarrow Q = P + \overrightarrow{PQ} = P + v_1 + v_2$, so that $P + v_1 = Q + (-v_2) \in (P + [V_1]) \cap (Q + [V_2])$.
P:
$R \in (P + [V_1]) \cap (Q + [V_2]) \Rightarrow (P + [V_1]) \cap (Q + [V_2]) = R + [V_1 \cap V_2]$
Proof: Since $R \in (P + [V_1]) \cap (Q + [V_2])$, we have $P + [V_1] = R + [V_1]$ and $Q + [V_2] = R + [V_2]$, so that
$(P + [V_1]) \cap (Q + [V_2]) = (R + [V_1]) \cap (R + [V_2]) = R + [V_1 \cap V_2]$.
P:
$(P + [V_1]) + (Q + [V_2]) = P + \left[V_1 + V_2 + \left\langle \overrightarrow{PQ}\right\rangle\right]$
Proof:
P: [Grassmann Formulas]
Consider $L_1 = P + [V_1]$, $L_2 = Q + [V_2]$.
If $L_1 \cap L_2 \neq \emptyset$, then $\dim(L_1 + L_2) = \dim L_1 + \dim L_2 - \dim(L_1 \cap L_2)$.
If $L_1 \cap L_2 = \emptyset$, then $\dim(L_1 + L_2) = \dim L_1 + \dim L_2 - \dim(V_1 \cap V_2) + 1$.
Proof:
P:
If $L_1 \parallel L_2$ and $L_1 \cap L_2 \neq \emptyset$, then $L_1 \subseteq L_2$ or $L_2 \subseteq L_1$.

Proof:
P: [Parallels between parallels are equal]
Consider two parallel straight lines $r$ and $s$ cut by two parallel straight lines $r'$ and $s'$. Denote their intersections $A = r' \cap s$, $B = s' \cap s$, $C = r \cap r'$, $D = r \cap s'$. Then $\overrightarrow{AB} = \overrightarrow{CD}$ and $\overrightarrow{AC} = \overrightarrow{BD}$.
Proof:
Change of an affine frame: Consider two frames $\mathcal{R} = \{P; (e_1, \dots, e_n)\}$ and $\mathcal{R}' = \{P'; (f_1, \dots, f_n)\}$, with

$[P']_{\mathcal{R}} = \begin{pmatrix} p_1 \\ \vdots \\ p_n \end{pmatrix}$, $\quad [f_j]_{\mathcal{R}} = \begin{pmatrix} a_{1j} \\ \vdots \\ a_{nj} \end{pmatrix}$,

and, for $X \in \mathbb{A}$, $[X]_{\mathcal{R}} = \begin{pmatrix} x_1 \\ \vdots \\ x_n \end{pmatrix}$, $[X]_{\mathcal{R}'} = \begin{pmatrix} y_1 \\ \vdots \\ y_n \end{pmatrix}$, that is:

$\overrightarrow{PX} = \sum_{i=1}^n x_i e_i$, $\quad \overrightarrow{P'X} = \sum_{j=1}^n y_j f_j$, $\quad \overrightarrow{PP'} = \sum_{i=1}^n p_i e_i$, $\quad f_j = \sum_{i=1}^n a_{ij} e_i$.

Then
$\overrightarrow{P'X} = \sum_{j=1}^n y_j f_j = \sum_{j=1}^n y_j \sum_{i=1}^n a_{ij} e_i = \sum_{i=1}^n \left(\sum_{j=1}^n a_{ij} y_j\right) e_i$
and
$\overrightarrow{P'X} = \overrightarrow{P'P} + \overrightarrow{PX} = -\sum_{i=1}^n p_i e_i + \sum_{i=1}^n x_i e_i = \sum_{i=1}^n (x_i - p_i)\, e_i$
$\Rightarrow \forall i = \overline{1, n}$, $x_i - p_i = \sum_{j=1}^n a_{ij} y_j \Rightarrow$

$\begin{pmatrix} x_1 \\ \vdots \\ x_n \end{pmatrix} = \begin{pmatrix} a_{11} & \cdots & a_{1n} \\ \vdots & & \vdots \\ a_{n1} & \cdots & a_{nn} \end{pmatrix} \begin{pmatrix} y_1 \\ \vdots \\ y_n \end{pmatrix} + \begin{pmatrix} p_1 \\ \vdots \\ p_n \end{pmatrix}$

i.e. $[X]_{\mathcal{R}} = A\,[X]_{\mathcal{R}'} + [P']_{\mathcal{R}}$.
Consider:
an affine space $(\mathbb{A}; V; K)$ with affine frame $\mathcal{R} = \{P; (e_1, \dots, e_n)\}$;
a linear variety $L = Q + [V_0]$, with $Q = (q_1, \dots, q_n)$ and $(v_1, \dots, v_r)$ a basis of $V_0$, and $v_j = \sum_{i=1}^n a_{ij} e_i$, $j = \overline{1, r}$, so that

$[v_j]_{\mathcal{R}} = \begin{pmatrix} a_{1j} \\ \vdots \\ a_{nj} \end{pmatrix}$, $j = \overline{1, r}$.

We have $X = (x_1, \dots, x_n) \in L \iff \exists \lambda_1, \dots, \lambda_r \in K$ s.t. $x_i = q_i + \sum_{j=1}^r \lambda_j a_{ij}$, $\forall i = \overline{1, n}$, i.e.

$[X]_{\mathcal{R}} = [Q]_{\mathcal{R}} + \sum_{j=1}^r \lambda_j\, [v_j]_{\mathcal{R}}$

Equation of a Straight Line:
$[X]_{\mathcal{R}} = [Q]_{\mathcal{R}} + \lambda\, [v]_{\mathcal{R}}$
Lines through two points $A$, $B$:

$[A]_{\mathcal{R}} = \begin{pmatrix} a_1 \\ \vdots \\ a_n \end{pmatrix}$, $\quad [B]_{\mathcal{R}} = \begin{pmatrix} b_1 \\ \vdots \\ b_n \end{pmatrix}$

For $\lambda = 1$, $Q = A$ and $[B]_{\mathcal{R}} = [A]_{\mathcal{R}} + [v]_{\mathcal{R}} \Rightarrow [v]_{\mathcal{R}} = [B]_{\mathcal{R}} - [A]_{\mathcal{R}}$.
So, for the line $[X]_{\mathcal{R}} = [A]_{\mathcal{R}} + \lambda\left([B]_{\mathcal{R}} - [A]_{\mathcal{R}}\right)$, we have:
The segment $AB = \left\{X \in \mathbb{A};\ [X]_{\mathcal{R}} = [A]_{\mathcal{R}} + \lambda([B]_{\mathcal{R}} - [A]_{\mathcal{R}}),\ \lambda \in [0; 1]\right\}$
The line $AB = \left\{X \in \mathbb{A};\ [X]_{\mathcal{R}} = [A]_{\mathcal{R}} + \lambda([B]_{\mathcal{R}} - [A]_{\mathcal{R}}),\ \lambda \in \mathbb{R}\right\}$
The halfline $[AB = \left\{X \in \mathbb{A};\ [X]_{\mathcal{R}} = [A]_{\mathcal{R}} + \lambda([B]_{\mathcal{R}} - [A]_{\mathcal{R}}),\ \lambda \in [0; \infty)\right\}$
The halfline $[BA = \left\{X \in \mathbb{A};\ [X]_{\mathcal{R}} = [A]_{\mathcal{R}} + \lambda([B]_{\mathcal{R}} - [A]_{\mathcal{R}}),\ \lambda \in (-\infty; 1]\right\}$

Ex: $A = \begin{pmatrix} 3 \\ 2 \\ 1 \end{pmatrix}$, $B = \begin{pmatrix} 1 \\ 3 \\ 2 \end{pmatrix}$, so $[v]_{\mathcal{R}} = [B]_{\mathcal{R}} - [A]_{\mathcal{R}} = \begin{pmatrix} -2 \\ 1 \\ 1 \end{pmatrix}$, and the parametric equations of the line $AB$ are
$x = -2\lambda + 3$, $\quad y = \lambda + 2$, $\quad z = \lambda + 1$;
eliminating $\lambda = z - 1$:
$x = 5 - 2z$, $\quad y = z + 1$, $\quad z = z$
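The worked example above can be replayed numerically (same points $A$ and $B$ as in the text); a sketch in Python:

```python
# Parametric equations of the line AB through A = (3,2,1), B = (1,3,2):
# X = A + t(B - A), i.e. x = 3 - 2t, y = 2 + t, z = 1 + t.
A = (3, 2, 1)
B = (1, 3, 2)
v = tuple(b - a for a, b in zip(A, B))   # direction vector B - A = (-2, 1, 1)

def point(t):
    return tuple(a + t * vi for a, vi in zip(A, v))

assert point(0) == A and point(1) == B
# Eliminating t = z - 1 gives x = 5 - 2z and y = z + 1; check on sample points:
for t in range(-3, 4):
    x, y, z = point(t)
    assert x == 5 - 2 * z and y == z + 1
# Segment AB: parameters t in [0, 1]; midpoint at t = 1/2:
assert point(0.5) == (2.0, 2.5, 1.5)
```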
Barycenter: Given $r$ points $P_1, P_2, \dots, P_r$, the barycenter $G$ is:
$G = P_1 + \dfrac{1}{r}\left(\overrightarrow{P_1P_1} + \overrightarrow{P_1P_2} + \dots + \overrightarrow{P_1P_r}\right)$
Remark: $\dfrac{1}{r}\left(\overrightarrow{P_1P_2} + \dots + \overrightarrow{P_1P_r}\right) = \overrightarrow{P_1G} \Rightarrow \overrightarrow{P_1P_2} + \dots + \overrightarrow{P_1P_r} = r\,\overrightarrow{P_1G} \Rightarrow$
$\Rightarrow \overrightarrow{P_1G} + \left(\overrightarrow{P_1G} - \overrightarrow{P_1P_2}\right) + \dots + \left(\overrightarrow{P_1G} - \overrightarrow{P_1P_r}\right) = \vec 0 \Rightarrow$
$\Rightarrow \overrightarrow{P_1G} + \left(\overrightarrow{P_2P_1} + \overrightarrow{P_1G}\right) + \dots + \left(\overrightarrow{P_rP_1} + \overrightarrow{P_1G}\right) = \vec 0 \Rightarrow$
$\Rightarrow \overrightarrow{P_1G} + \overrightarrow{P_2G} + \dots + \overrightarrow{P_rG} = \vec 0$, or $\overrightarrow{GP_1} + \overrightarrow{GP_2} + \dots + \overrightarrow{GP_r} = \vec 0$.
P: The barycenter is the unique point $X$ such that $\overrightarrow{XP_1} + \overrightarrow{XP_2} + \dots + \overrightarrow{XP_r} = \vec 0$.

Proof:
1. $G$ satisfies the relation:
$G + \overrightarrow{GP_j} = P_j$; for $j = \overline{2, r}$, $\overrightarrow{GP_j} = \overrightarrow{GP_1} + \overrightarrow{P_1P_j}$, so
$\overrightarrow{GP_1} + \overrightarrow{GP_2} + \dots + \overrightarrow{GP_r} = \overrightarrow{GP_1} + \left(\overrightarrow{GP_1} + \overrightarrow{P_1P_2}\right) + \dots + \left(\overrightarrow{GP_1} + \overrightarrow{P_1P_r}\right) =$
$= r\,\overrightarrow{GP_1} + \overrightarrow{P_1P_2} + \dots + \overrightarrow{P_1P_r} = r\,\overrightarrow{GP_1} + r\,\overrightarrow{P_1G} = \vec 0$.
2. $G$ is unique:
If $\overrightarrow{P_1G} + \overrightarrow{P_2G} + \dots + \overrightarrow{P_rG} = \vec 0$ and $\overrightarrow{XP_1} + \overrightarrow{XP_2} + \dots + \overrightarrow{XP_r} = \vec 0$, then, by adding them:
$\left(\overrightarrow{XP_1} + \overrightarrow{P_1G}\right) + \left(\overrightarrow{XP_2} + \overrightarrow{P_2G}\right) + \dots + \left(\overrightarrow{XP_r} + \overrightarrow{P_rG}\right) = \vec 0 \Rightarrow$
$\Rightarrow r\,\overrightarrow{XG} = \vec 0 \Rightarrow \overrightarrow{XG} = \vec 0$, so $X = G$.
Remark: we also have $G = P_i + \dfrac{1}{r}\left(\overrightarrow{P_iP_1} + \dots + \overrightarrow{P_iP_r}\right)$ for any $i$.
For two points, $G$ is the midpoint: $G = P_1 + \dfrac{1}{2}\left(\overrightarrow{P_1P_1} + \overrightarrow{P_1P_2}\right)$.
If $[P_j]_{\mathcal{R}} = \begin{pmatrix} p_{1j} \\ \vdots \\ p_{nj} \end{pmatrix}$, then $[G]_{\mathcal{R}} = \begin{pmatrix} g_1 \\ \vdots \\ g_n \end{pmatrix}$, with $g_i = \dfrac{1}{r}\sum_{j=1}^r p_{ij}$.
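The coordinate formula $g_i = \frac{1}{r}\sum_j p_{ij}$ and the characterization $\sum_j \overrightarrow{GP_j} = \vec 0$ can be checked together on a small example (the triangle below is chosen for illustration):

```python
from fractions import Fraction

# Barycenter of r points: coordinates g_i = (1/r) * sum_j p_ij,
# and G is the unique X with XP1-> + ... + XPr-> = 0.
pts = [(0, 0), (3, 0), (0, 3)]        # illustrative triangle in R^2
r = len(pts)
G = tuple(sum(Fraction(p[i]) for p in pts) / r for i in range(2))
assert G == (1, 1)

# Check that the sum of the vectors GP_j-> is the null vector:
s = tuple(sum(p[i] - G[i] for p in pts) for i in range(2))
assert s == (0, 0)
```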
Remark: Consider three collinear points $A$, $B$, $C \in \mathbb{A}$. Then
$\overrightarrow{AB} = (A; B; C)\,\overrightarrow{AC}$ and $B = A + \overrightarrow{AB} = A + (A; B; C)\,\overrightarrow{AC}$.
If $(A; B; C) = \lambda$, then:
$(A; C; B) = \dfrac{1}{\lambda}$, $\quad (B; A; C) = \dfrac{\lambda}{\lambda - 1}$, $\quad (B; C; A) = \dfrac{\lambda - 1}{\lambda}$, $\quad (C; A; B) = \dfrac{1}{1 - \lambda}$, $\quad (C; B; A) = 1 - \lambda$.
The line segment $AB = \left\{X \in \mathbb{A};\ X = A + \lambda\,\overrightarrow{AB},\ \lambda \in [0; 1]\right\}$, and
$C \in AB \iff 0 < (A; C; B) < 1 \iff (C; A; B) < 0$.
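The six permutation values of the simple ratio can be verified with exact rational arithmetic; placing the three points on the rational line (a choice made here for illustration) makes the ratios directly computable:

```python
from fractions import Fraction

# Collinear points on a line: take A = 0, B = lam, C = 1 on the rational line,
# so that AB-> = lam * AC-> and (A;B;C) = lam.
lam = Fraction(2, 5)
A, B, C = Fraction(0), lam, Fraction(1)

def ratio(P, Q, R):
    """Simple ratio (P;Q;R): the scalar t with PQ-> = t * PR->."""
    return (Q - P) / (R - P)

assert ratio(A, B, C) == lam
assert ratio(A, C, B) == 1 / lam
assert ratio(B, A, C) == lam / (lam - 1)
assert ratio(B, C, A) == (lam - 1) / lam
assert ratio(C, A, B) == 1 / (1 - lam)
assert ratio(C, B, A) == 1 - lam
```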
P: Consider three noncollinear points $A$, $B$, $C \in \mathbb{A}$, and $G$ their barycenter (centroid/geometric center). Then the straight line joining $A$ and $G$ meets $BC$ in the midpoint $A'$ of the points $B$, $C$. Moreover, $(A; G; A') = \dfrac{2}{3}$. [$AA'$ is called a "median"]
Proof:
Thales' Theorem: Consider three parallel straight lines $r$, $s$, $t$ that meet two concurrent straight lines in $A$, $A'$ (on $r$), in $B$, $B'$ (on $s$), in $C$, $C'$ (on $t$). Then $(A; B; C) = (A'; B'; C')$.

Proof:
Menelaus' Theorem: If a straight line meets the sides of a triangle $ABC$ at $P$, $Q$, $R$, respectively, then $(P; A; B)\,(Q; B; C)\,(R; C; A) = 1$.
Proof:
Ceva's Theorem: Consider a triangle $ABC$ and three points on its edges, $P_A \in [BC]$, $P_B \in [AC]$, $P_C \in [AB]$. The straight lines $AP_A$, $BP_B$, $CP_C$ are concurrent in a point $P \iff (P_A; B; C)\,(P_B; C; A)\,(P_C; A; B) = -1$.
Proof:
Pappus' Theorem:
Proof:
Desargues' Theorem:
Proof:
?Routh's Theorem:
Proof:
?van Aubel's Theorem:
Proof:
Bib:

Part 2

Linear Algebra Software Products

CHAPTER 5

Geogebra

CHAPTER 6

CARMetal
The CaRMetal software product, version 3.7.5, available at http://db-maths.nuxit.net/CaRMetal/, was used to generate some of the visualizations included in the text.

Part 3

Appendices

CHAPTER 7

Reviews
Binary Logic
Binary mathematical logic deals with operations on logical statements and with the evaluation of their truth value; only logical statements with one of two possible truth values are considered. Although this convention is restrictive, it is not the purpose of the present exposition to cover other situations, such as the logical statements "I am lying" or "This statement is false". The presentation below refers only to those statements to which a truth value can be attached. Commands (imperative statements), questions and exclamations are not included, only declarative statements.

(OPERATIONS WITH LOGICAL STATEMENTS):

(1) LOGICAL NEGATION (NOT):
p | ¬p
0 | 1
1 | 0
Ex: p: "2 + 3 = 5" ⇒ ¬p: "2 + 3 ≠ 5". p: "Ioana has a house" ⇒ ¬p: "Ioana does not have a house".

(2) LOGICAL CONJUNCTION (AND):
p q | p ∧ q
0 0 | 0
0 1 | 0
1 0 | 0
1 1 | 1
Ex: p: "Ioana has a house", q: "Ioana has house insurance" ⇒ p ∧ q: "Ioana has a house and house insurance". p: "2 is a natural number", q: "2 is an integer" ⇒ p ∧ q: "2 is both a natural number and an integer"; in formal language: (2 ∈ N) ∧ (2 ∈ Z) ⇒ 2 ∈ N ∩ Z (2 belongs to the intersection of the sets).

(3) LOGICAL DISJUNCTION (OR) (INCLUSIVE OR):
p q | p ∨ q
0 0 | 0
0 1 | 1
1 0 | 1
1 1 | 1
Ex: p: "Ioana has a house", q: "Ioana has a car" ⇒ p ∨ q: "Ioana has a house or a car".

(4) EXCLUSIVE OR:
p q | p ⊕ q
0 0 | 0
0 1 | 1
1 0 | 1
1 1 | 0
It is more rarely used and can be described with the help of the other connectives. Ex: p: "Ioana has a house", q: "Ioana has a car" ⇒ p ⊕ q: "Ioana has either a house or a car (but not both)".

(5) LOGICAL IMPLICATION (If–Then):
(a) From truth only truth follows.
(b) From falsehood anything follows.
From the definition, the following truth table for logical implication results:
p q | p → q
0 0 | 1
0 1 | 1
1 0 | 0
1 1 | 1
For the structure p → q, the names p: "hypothesis", q: "conclusion" are also used. Ex: p: "Ioana has a house", q: "Ioana has a car" ⇒ p → q: "If Ioana has a house, then she has a car".

(6) LOGICAL EQUIVALENCE (If and only if):
p q | p ↔ q
0 0 | 1
0 1 | 0
1 0 | 0
1 1 | 1
Logical equivalence can be defined using the other connectives: p ↔ q ≡ (p → q) ∧ (q → p). Ex: p: "Ioana has a car", q: "Ioana has a driving licence" ⇒ p ↔ q: "Ioana has a car if and only if she has a driving licence".
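The six truth tables above can be generated mechanically; a minimal sketch in Python, encoding false/true as 0/1:

```python
# Truth tables of the connectives, with 0 = false, 1 = true.
rows = [(p, q) for p in (0, 1) for q in (0, 1)]

NOT = lambda p: 1 - p
AND = lambda p, q: p & q
OR  = lambda p, q: p | q
XOR = lambda p, q: p ^ q
IMP = lambda p, q: OR(NOT(p), q)        # p -> q taken as (not p) or q
IFF = lambda p, q: 1 - (p ^ q)

assert [AND(p, q) for p, q in rows] == [0, 0, 0, 1]
assert [OR(p, q)  for p, q in rows] == [0, 1, 1, 1]
assert [XOR(p, q) for p, q in rows] == [0, 1, 1, 0]
assert [IMP(p, q) for p, q in rows] == [1, 1, 0, 1]
assert [IFF(p, q) for p, q in rows] == [1, 0, 0, 1]
```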

7.0.12. Definition. [Tautology] A logical statement is called a "tautology" if it is true regardless of the truth values of its component statements.
Ex: p: "Ioana has a car" ⇒ p ∨ ¬p: "Ioana has a car or she does not" is a tautology.
7.0.13. Definition. [Equivalent statements] Two logical statements p and q are called "equivalent" if p ↔ q is a tautology.


7.0.14. Definition. [Stronger statements] A logical statement p is said to be "stronger than" a logical statement q if p → q is a tautology and q → p is not a tautology.
Ex: For any two statements p, q, the statement p ∧ q is stronger than the statement q.
Exercise: Decide whether, for the statements "(p ∧ q) → r" and "p → r", it can be claimed that one of them is stronger than the other.
7.0.15. Definition. [Contradiction] A logical statement is called a "contradiction" if it is false regardless of the truth values of its component statements.
Ex: p: "Ioana has a car" ⇒ p ∧ ¬p: "Ioana has a car and she does not" is a contradiction.
All logical statements can be expressed using descriptive statements and the logical connectives ¬, ∨, ∧. The properties of these connectives and the links between them and other logical operations are briefly presented below.
aturile dintre ei si alte operatii logice.
7.0.16. Remark. (PROPERTIES OF THE CONNECTIVES ¬, ∨, ∧) (each can be verified with truth tables):
(1) ¬(¬p) ≡ p (double negation). Proof: truth table.
(2) p ∨ q ≡ q ∨ p (commutativity of disjunction). Proof: truth table.
(3) p ∨ (q ∨ r) ≡ (p ∨ q) ∨ r (associativity of disjunction). Proof: truth table.
(4) p ∨ 1 ≡ 1. Proof: truth table.
(5) p ∨ 0 ≡ p. Proof: truth table.
(6) p ∨ p ≡ p (idempotence of disjunction). Proof: truth table.
(7) p ∧ q ≡ q ∧ p (commutativity of conjunction). Proof: truth table.
(8) p ∧ p ≡ p (idempotence of conjunction). Proof: truth table.
(9) p ∧ (q ∧ r) ≡ (p ∧ q) ∧ r (associativity of conjunction). Proof: truth table.
(10) p ∧ 0 ≡ 0. Proof: truth table.
(11) p ∧ 1 ≡ p. Proof: truth table.
(12) ¬(p ∨ q) ≡ ¬p ∧ ¬q (De Morgan's laws). Proof: truth table. Ex: Find the negation of the logical statement "On vacation I go to the seaside or to the mountains". Denote p: "On vacation I go to the seaside" and q: "On vacation I go to the mountains"; the initial statement is p ∨ q. The negation is ¬(p ∨ q) ≡ ¬p ∧ ¬q, which can be translated as: "On vacation I go neither to the seaside nor to the mountains".
(13) ¬(p ∧ q) ≡ ¬p ∨ ¬q. Proof: truth table.
(14) p ∧ (q ∨ r) ≡ (p ∧ q) ∨ (p ∧ r) (distributivity of conjunction over disjunction). Proof: truth table.
(15) p ∨ (q ∧ r) ≡ (p ∨ q) ∧ (p ∨ r). Proof: truth table.
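The "proof by truth table" suggested for each identity amounts to an exhaustive check over all truth assignments; a minimal sketch in Python of such a checker, applied to a few of the identities above:

```python
from itertools import product

# Exhaustively verify logical identities (truth tables over all rows).
def equivalent(f, g):
    return all(f(p, q, r) == g(p, q, r) for p, q, r in product((0, 1), repeat=3))

NOT = lambda p: 1 - p
# De Morgan's laws:
assert equivalent(lambda p, q, r: NOT(p | q), lambda p, q, r: NOT(p) & NOT(q))
assert equivalent(lambda p, q, r: NOT(p & q), lambda p, q, r: NOT(p) | NOT(q))
# Distributivity:
assert equivalent(lambda p, q, r: p & (q | r), lambda p, q, r: (p & q) | (p & r))
assert equivalent(lambda p, q, r: p | (q & r), lambda p, q, r: (p | q) & (p | r))
# Double negation and idempotence:
assert equivalent(lambda p, q, r: NOT(NOT(p)), lambda p, q, r: p)
assert equivalent(lambda p, q, r: p | p, lambda p, q, r: p)
```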

7.0.17. Theorem. [Reduction of exclusive disjunction to ¬, ∨, ∧]
p ⊕ q ≡ (p ∨ q) ∧ ¬(p ∧ q) ≡ (p ∧ ¬q) ∨ (q ∧ ¬p).
Proof. Truth table.
7.0.18. Theorem. [Reduction of equivalence to implication]
p ↔ q ≡ (p → q) ∧ (q → p).
Proof. Truth table.
7.0.19. Theorem. (Reduction of logical implication to elementary operations)
p → q ≡ ¬p ∨ q.
Proof. By truth table.
7.0.20. Theorem. (Negation of logical implication) ¬(p → q) ≡ p ∧ ¬q.
Proof. ¬(p → q) ≡ ¬(¬p ∨ q) ≡ ¬(¬p) ∧ ¬q ≡ p ∧ ¬q.
Ex: p: "I am hungry", q: "I eat" ⇒ p → q: "If I am hungry, I eat" ⇒ ¬(p → q) ≡ p ∧ ¬q: "I am hungry and I do not eat".
A frequent answer to the question "How do we negate 'If I am hungry, I eat'?" is r: "If I am not hungry, I do not eat."
Note that r ≡ (¬p) → (¬q), and the truth table shows that the two statements say different things:
p q | ¬(p → q) | (¬p) → (¬q)
0 0 |    0     |      1
0 1 |    0     |      0
1 0 |    1     |      1
1 1 |    0     |      1
Note also that r ≡ (¬p) → (¬q) ≡ ¬(¬p) ∨ (¬q) ≡ p ∨ ¬q ≡ q → p, so r says, in fact, the same thing as the statement: "If I eat, I am hungry".
7.0.21. Theorem. (The principle of proof by contradiction) p → q ≡ (¬q) → (¬p).
Proof. p → q ≡ ¬p ∨ q ≡ q ∨ ¬p ≡ ¬(¬q) ∨ ¬p ≡ (¬q) → (¬p).
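The reductions in 7.0.19–7.0.21 and the "wrong negation" discussion can all be checked over the four truth assignments; a short sketch in Python:

```python
from itertools import product

NOT = lambda p: 1 - p
IMP = lambda p, q: NOT(p) | q           # p -> q taken as (not p) or q

for p, q in product((0, 1), repeat=2):
    # Negation of implication: not(p -> q) == p and (not q)
    assert NOT(IMP(p, q)) == (p & NOT(q))
    # Contraposition: p -> q == (not q) -> (not p)
    assert IMP(p, q) == IMP(NOT(q), NOT(p))
    # The frequent wrong "negation" (not p) -> (not q) is just q -> p:
    assert IMP(NOT(p), NOT(q)) == IMP(q, p)
```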


7.1. Predicate Logic

7.1.1. Definition. A logical predicate is any function p(·): D → P (the set of propositions) (any function whose codomain is the set of propositions) (the domain may be regarded as the set of parameters of the predicate).
Logical predicates may also be regarded as reformulations of "if–then" constructions: saying "a square is a rectangle" is the same thing as "if x is a square, then x is a rectangle" or "every square is a rectangle". The formulation "x is a rectangle" is a predicate p(x), where x is a geometric figure.
7.1.2. Definition. The assertion "for every possible value of x, p(x) holds" is formalized as "∀x, p(x)"; the sign "∀" is called the "universal logical quantifier".
7.1.3. Definition. The assertion "there is a possible value of x for which p(x) holds" is formalized as "∃x, p(x)"; the sign "∃" is called the "existential logical quantifier".
7.1.4. Definition. The assertion "there is a unique possible value of x for which p(x) holds" is formalized as "∃!x, p(x)" and has the meaning
∃!x, p(x) ≡ (∃x, p(x)) ∧ (∀x, ∀y, (p(x) ∧ p(y)) → x = y).
Negations of logical statements obtained from predicates with quantifiers:
¬(∀x, p(x)) ≡ ∃x, ¬p(x);
¬(∃x, p(x)) ≡ ∀x, ¬p(x).
7.1.5. Example. The negation of the logical statement "All men are mortal" is: "There exist immortal men", because the structure is ¬(∀x p(x)) ≡ ∃x ¬p(x).
7.1.6. Example. The negation of the logical statement "Every man, if he is angry, shouts" is "there exist angry men who do not shout", because the structure is ¬(∀x (p(x) → q(x))) ≡ ∃x (p(x) ∧ ¬q(x)).
7.1.7. Remark. The logical quantifiers (existential and universal) do not commute:
∀x ∃y_x p(x, y) ≢ ∃y ∀x p(x, y)
(in the first statement y may depend on x).
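On a finite domain, quantifiers become `all`/`any`, and the non-commutativity in 7.1.7 can be exhibited concretely; the domain and predicate below are chosen here for illustration:

```python
# Quantifier order matters: compare "forall x exists y" with "exists y forall x"
# on the finite domain D = {0, 1, 2} for the predicate p(x, y): x + y == 2.
D = range(3)
p = lambda x, y: x + y == 2

forall_exists = all(any(p(x, y) for y in D) for x in D)   # y may depend on x
exists_forall = any(all(p(x, y) for x in D) for y in D)   # one y for every x

assert forall_exists is True      # y = 2 - x works for each x
assert exists_forall is False     # no single y fits all x
```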
7.2. Sets

Among the characteristics of the 19th and 20th centuries in mathematics is the completion of the construction of mathematics as an axiomatic system. One of the findings is that, axiomatically speaking, everything starts from the notion of a set. The notion of set itself cannot be defined. In fact, for any science the question arises: "Which notions does this science use without being able to define them rigorously?" For mathematics, the answer is: the notion of set. In general, the possible answers to this question are consequences of the cycle of theoretical results obtained by Kurt Gödel in the first half of the 20th century on the incompleteness of axiomatic systems. In essence, and without going into further detail, it has been established that any axiomatic system is incomplete or contradictory; this means, among other things, that any non-contradictory science must expect undecidable problems and must have the maturity to overcome them.
7.2.1. Operations with Sets. The operations with sets will be presented for a fixed nonempty set Ω and its subsets. The set with no elements is denoted ∅ (the empty set).
7.2.1. Definition. The basic operations with sets are: union, intersection, complement. The relation between a set and its elements is membership. The basic relation between sets is inclusion.
Union: A ∪ B = {x ∈ Ω; x ∈ A ∨ x ∈ B}
Intersection: A ∩ B = {x ∈ Ω; x ∈ A ∧ x ∈ B}
Complement: ∁A = {x ∈ Ω; x ∉ A}
Inclusion: A ⊆ B ⟺ (∀x ∈ A, x ∈ B)
Equality: A = B ⟺ (A ⊆ B ∧ B ⊆ A)
The power set: P(Ω) = {A; A ⊆ Ω}
7.2.2. Remark. Let A, B, C ∈ P(Ω). By ∁A we mean the complement of the set A with respect to Ω. The operations with sets have the following properties:
(1) A ∪ A = A (idempotence of the union);
(2) A ∪ Ω = Ω;
(3) A ∩ A = A (idempotence of the intersection);
(4) A ∩ Ω = A;
(5) A ∪ ∅ = A (the neutral element property of the union);
(6) A ∩ ∅ = ∅ (the first element property of the intersection);
(7) A ∪ B = B ∪ A (commutativity of the union);
(8) A ∩ B = B ∩ A (commutativity of the intersection);
(9) A ∪ (B ∪ C) = (A ∪ B) ∪ C = A ∪ B ∪ C (associativity of the union);
(10) A ∩ (B ∩ C) = (A ∩ B) ∩ C = A ∩ B ∩ C (associativity of the intersection);
(11) A ∪ (B ∩ C) = (A ∪ B) ∩ (A ∪ C) (distributivity of the union over the intersection);
(12) A ∩ (B ∪ C) = (A ∩ B) ∪ (A ∩ C) (distributivity of the intersection over the union);
(13) A ∪ (A ∩ B) = A;
(14) A ∩ (A ∪ B) = A;
(15) A ∪ ∁A = Ω;
(16) A ∩ ∁A = ∅;
(17) ∁∁A = A;
(18) ∁(A ∪ B) = ∁A ∩ ∁B; ∁(⋃_{i∈I} A_i) = ⋂_{i∈I} ∁A_i;
(19) ∁(A ∩ B) = ∁A ∪ ∁B; ∁(⋂_{i∈I} A_i) = ⋃_{i∈I} ∁A_i;
(20) A ∖ B = A ∩ ∁B (the difference of two sets).
7.2.3. Definition. The sets A and B are called disjoint if A ∩ B = ∅.
7.2.4. Definition. The Cartesian product of two sets A and B is the set:
A × B = {(a, b); a ∈ A, b ∈ B}
7.2.5. Remark. Other properties:
(1) A ∖ (B ∪ C) = (A ∖ B) ∖ C;
(2) A ∖ (B ∩ C) = (A ∖ B) ∪ (A ∖ C);
(3) (A ∪ B) ∖ C = (A ∖ C) ∪ (B ∖ C);
(4) (A ∩ B) ∖ C = A ∩ (B ∖ C) = (A ∖ C) ∩ B;
(5) A × (B ∪ C) = (A × B) ∪ (A × C);
(6) A × (B ∩ C) = (A × B) ∩ (A × C);
(7) A × (B ∖ C) = (A × B) ∖ (A × C).
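Several of the identities above can be checked directly with Python's built-in set type (the universe and the subsets below are chosen for illustration):

```python
# Check set identities on concrete subsets of a universe U.
U = set(range(10))
A, B, C = {1, 2, 3, 4}, {3, 4, 5}, {4, 5, 6, 7}
comp = lambda S: U - S

assert comp(A | B) == comp(A) & comp(B)          # De Morgan
assert comp(A & B) == comp(A) | comp(B)
assert A - B == A & comp(B)                      # difference via complement
assert A - (B | C) == (A - B) - C
assert A - (B & C) == (A - B) | (A - C)
assert (A | B) - C == (A - C) | (B - C)
assert (A & B) - C == A & (B - C) == (A - C) & B
```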

7.2.2. Relations.

7.2.6. Definition. Let X, Y be two sets. A binary relation (correspondence) between the sets X and Y is any triple R = (X, Y, G_R), where G_R is a subset of the Cartesian product, G_R ⊆ X × Y. X is called the domain of definition of the relation, Y is called the codomain of the relation, and G_R is called the graph of the relation. The set
D_R = {x ∈ X; ∃y ∈ Y, (x, y) ∈ G_R}
is called the (effective) domain of the relation R. The set
Im R = {y ∈ Y; ∃x ∈ X, (x, y) ∈ G_R}
is called the image of the relation. The relation R⁻¹ = (Y, X, G_{R⁻¹}) defined by
G_{R⁻¹} = {(y, x); (x, y) ∈ G_R}
is called the inverse relation of R. The relation Δ_X = (X, X, G_{Δ_X}) defined by G_{Δ_X} = {(x, x); x ∈ X} is called the identity relation on X.
7.2.7. Definition. Let X, Y, Z be three sets and consider the relations R₁ = (X, Y, G_{R₁}), R₂ = (Y, Z, G_{R₂}). The relation R = (X, Z, G_R) defined by:
G_R = {(x, z); x ∈ X, z ∈ Z and ∃y ∈ Y s.t. (x, y) ∈ G_{R₁} and (y, z) ∈ G_{R₂}}
is called the composition of the relations R₁ and R₂ and is denoted R₂ ∘ R₁ (R₂ ∘ R₁ = R).
7.2.8. Remark. The composition of relations is associative but not commutative.
7.2.9. Definition. R = (X, X, G_R) is called a preorder relation if it has the properties:
(1) Reflexivity: G_{Δ_X} ⊆ G_R (∀x ∈ X, (x, x) ∈ G_R);
(2) Transitivity: (x, y), (y, z) ∈ G_R ⇒ (x, z) ∈ G_R.
7.2.10. Definition. R = (X, X, G_R) is called an equivalence relation if it is a preorder relation and in addition has the symmetry property: G_{R⁻¹} = G_R. The set x̂ = {y ∈ X; (x, y) ∈ G_R} is called the equivalence class of x with respect to the relation R. The set of equivalence classes (the quotient set of X with respect to R) is denoted X/R.
7.2.11. Remark. If R is an equivalence relation on X and x̂ is the equivalence class of an element, then:
(1) x ∈ x̂, ∀x ∈ X;
(2) (x, y) ∈ R ⟺ x̂ = ŷ;
(3) (x, y) ∉ R ⟺ x̂ ∩ ŷ = ∅.
7.2.12. Definition. R = (X, X, G_R) is called an order (preference) relation if it is a preorder relation and in addition has the antisymmetry property: G_R ∩ G_{R⁻¹} = G_{Δ_X}.
The order relation is called total if ∀x, y ∈ X, (x, y) ∈ G_R or (y, x) ∈ G_R (G_R ∪ G_{R⁻¹} = X × X), and is called partial if it is not total. The pair (X, R) is called an ordered set (by the order relation R). An ordered set is called inductively ordered if every totally ordered subset of it is bounded above.
7.2.13. Remark. (Zorn's Lemma) Every inductively ordered set has a maximal element.
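The notions of equivalence class and quotient set (7.2.10–7.2.11) can be made concrete on a small finite example; congruence modulo 3 is chosen here for illustration:

```python
# An equivalence relation on X = {0,...,8}: congruence modulo 3.
# Build the equivalence classes and the quotient set X/R.
X = set(range(9))
related = lambda x, y: (x - y) % 3 == 0

def cls(x):                              # the class of x: {y in X; (x, y) in G_R}
    return frozenset(y for y in X if related(x, y))

quotient = {cls(x) for x in X}           # X/R: the set of equivalence classes
assert len(quotient) == 3
# The classes partition X: every element lies in its own class,
# and two classes are equal exactly when the elements are related.
for x in X:
    assert x in cls(x)
    for y in X:
        assert (cls(x) == cls(y)) == related(x, y)
```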

7.2.14. Remark. If R is an order relation on X, then R⁻¹ is also an order relation (called the dual order relation of R).
Proof. Reflexivity: x ∈ X ⇒ (x, x) ∈ G_R ⇒ (x, x) ∈ G_{R⁻¹};
Transitivity: Let (x, y) ∈ G_{R⁻¹} and (y, z) ∈ G_{R⁻¹} ⇒ (y, x) ∈ G_R and (z, y) ∈ G_R ⇒ (z, x) ∈ G_R ⇒ (x, z) ∈ G_{R⁻¹};
Antisymmetry: (x, y) ∈ G_{R⁻¹} and (y, x) ∈ G_{R⁻¹} ⇒ (x, y) ∈ G_R and (y, x) ∈ G_R ⇒ x = y.
7.2.15. Definition. Let X be an ordered set and A ∈ P(X).
A is called bounded above (bounded below) if ∃a ∈ X such that (x, a) ∈ G_R ((a, x) ∈ G_R) ∀x ∈ A. The element a is called an upper bound (lower bound) of the set A.
If the set of upper bounds (lower bounds) of A is bounded below (above), then the least upper bound (greatest lower bound) is unique and is called the supremum (infimum) of the set A (denoted sup A, respectively inf A).
a ∈ X is called maximal (minimal) if ∀x ∈ X, (a, x) ∈ R ((x, a) ∈ R) ⇒ x = a.
If A = X is bounded above (below), then the upper bound (lower bound) is unique and is called the last element (first element).
7.2.3. Functions.
7.2.16. Definition. A relation R = (X, Y, G_R) is called of function type (functional) if it has the properties:
(1) ∀x ∈ X, ∃y ∈ Y, (x, y) ∈ G_R;
(2) (x, y₁) ∈ G_R and (x, y₂) ∈ G_R ⇒ y₁ = y₂.
7.2.17. Remark. Two relations are equal if the two triples are equal, that is, if the domains, the codomains and the graphs are equal.
7.2.18. Definition. Let f(·): X → Y be a function. For A ⊆ X, the set
f(A) = {f(x); x ∈ A}
is called the image of the set A through the function f(·); for B ⊆ Y, the set
f⁻¹(B) = {x ∈ X; f(x) ∈ B}
is called the preimage of the set B through the function f(·).
7.2.19. Remark. Let f(·): X → Y be a function. The following statements hold:
(1) ∀A ∈ P(X), ∀B ∈ P(Y): f(A) ⊆ B ⟺ A ⊆ f⁻¹(B)
(2) ∀A ∈ P(X): A ⊆ f⁻¹(f(A)); ∀B ∈ P(Y): f(f⁻¹(B)) ⊆ B
(3) ∀A ∈ P(X), ∀B ∈ P(Y): f(A ∩ f⁻¹(B)) = f(A) ∩ B
(4) ∀(B_i)_{i∈I} ⊆ P(Y): f⁻¹(⋃_{i∈I} B_i) = ⋃_{i∈I} f⁻¹(B_i)
(5) ∀(B_i)_{i∈I} ⊆ P(Y): f⁻¹(⋂_{i∈I} B_i) = ⋂_{i∈I} f⁻¹(B_i)
(6) ∀B ∈ P(Y): f⁻¹(∁B) = ∁f⁻¹(B)
(7) ∀(A_i)_{i∈I} ⊆ P(X): f(⋃_{i∈I} A_i) = ⋃_{i∈I} f(A_i)
(8) ∀(A_i)_{i∈I} ⊆ P(X): f(⋂_{i∈I} A_i) ⊆ ⋂_{i∈I} f(A_i)
Proof. Exercise.
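A key point of 7.2.19 is that the preimage respects union and intersection exactly, while for the image of an intersection only an inclusion holds (strict when f is not injective). This can be checked on a small function, chosen here for illustration:

```python
# Image and preimage on a small, non-injective function f: X -> Y.
X = {0, 1, 2, 3}
f = lambda x: x % 2                       # f(0)=f(2)=0, f(1)=f(3)=1

image    = lambda A: {f(x) for x in A}
preimage = lambda B: {x for x in X if f(x) in B}

A1, A2 = {0, 1}, {2, 3}
B1, B2 = {0}, {1}
# Preimage respects union and intersection exactly:
assert preimage(B1 | B2) == preimage(B1) | preimage(B2)
assert preimage(B1 & B2) == preimage(B1) & preimage(B2)
# Image respects union, but for intersection only an inclusion holds:
assert image(A1 | A2) == image(A1) | image(A2)
assert image(A1 & A2) <= image(A1) & image(A2)
assert image(A1 & A2) != image(A1) & image(A2)   # strict here: f is not injective
```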

7.2.4. Operations with Functions. As a general rule, algebraic operations can be performed with two (or more) functions under certain conditions, and the result of the operation is a new function. The conditions are the following:
The functions must have the same codomain; the algebraic operations which can be performed with functions are the counterparts of the algebraic operations which can be performed with the elements of the common codomain. The result of the operation between functions is a new function whose codomain is the common codomain of the two functions. If the functions have different codomains, the operation cannot be performed (possibly, under specific conditions, a transformation of the codomains can be done first).
If the functions have the same domain of definition, then the result of the operation between functions is a new function whose domain of definition is the common domain of definition of the functions. If the domains of definition are different, the result of the operation can possibly be defined as a function whose domain of definition is the common part of the domains of definition of the functions participating in the operation (the intersection of the domains); if they have no common part, the operation cannot be defined.
Examples:
If f(·), g(·): D → R are two functions with real codomain defined on the same set D, the operations which can be performed with real numbers can be performed with these two functions: addition, multiplication, subtraction, division. The following result functions are obtained:
The sum function s(·): D → R, s(·) := (f + g)(·), defined by s(x) = f(x) + g(x) ∀x ∈ D.
The product function p(·): D → R, p(·) := (f g)(·), defined by p(x) = f(x)g(x) ∀x ∈ D.
The difference function (f − g)(·): D → R, defined by (f − g)(x) := f(x) − g(x) ∀x ∈ D.
The quotient function h(·): D₁ → R, defined by h(x) := f(x)/g(x) ∀x ∈ D₁, on the set D₁ = {x ∈ D | g(x) ≠ 0}.
The existence of an order structure on the common codomain of the functions allows the extension of the order relation to functions: if f(·), g(·): D → R are two functions with real codomain defined on the same set D, then one says that f(·) ≤ g(·) if f(x) ≤ g(x) ∀x ∈ D; one obtains in this way an order relation between functions, which loses some of the initial characteristics of the relation between elements (the new relation is no longer total, in the sense that two functions may happen not to be comparable, even if the elements of the codomain are all comparable). Likewise, the notions of maximum, minimum and modulus extend to functions:
h(·): D → R, h(x) := max(f(x), g(x)) (the maximum of two functions), k(·): D → R, k(x) := min(f(x), g(x)) (the minimum of two functions), |f|(·): D → R, |f|(x) := |f(x)| (the modulus of a function).
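The pointwise constructions above translate directly into code; the sample functions below are chosen for illustration, and note how the quotient's domain shrinks to exclude the zeros of g:

```python
# Pointwise operations on real-valued functions with a common domain D.
D = [-2.0, -1.0, 0.0, 1.0, 2.0]
f = lambda x: x * x
g = lambda x: x + 2

s = lambda x: f(x) + g(x)                 # sum function
p = lambda x: f(x) * g(x)                 # product function
D1 = [x for x in D if g(x) != 0]          # domain of the quotient f/g
h = lambda x: f(x) / g(x)

assert [s(x) for x in D] == [4.0, 2.0, 2.0, 4.0, 8.0]
assert -2.0 not in D1                     # g(-2) = 0 is excluded
assert all(h(x) == f(x) / g(x) for x in D1)
```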
7.3. Usual Sets of Numbers

Set | Definition | Name
N | {1, 2, …, n, …} | natural numbers
Z | {0, 1, −1, 2, −2, …, n, −n, …} | integers
Q | {a/b; a ∈ Z, b ∈ N} | rational numbers
R | will be defined later | real numbers
C | {a + bi; a, b ∈ R} | complex numbers

with N ⊊ Z ⊊ Q ⊊ R ⊊ C; for example −3 ∈ Z ∖ N, 3/2 ∈ Q ∖ Z, √2 ∈ R ∖ Q.
Proof: Suppose, by contradiction, that √2 ∈ Q. Then there exist a, b ∈ N such that √2 = a/b ⇒ 2b² = a² ⇒ a is a multiple of 2 ⇒ a = 2k ⇒ 2b² = 4k² ⇒ b² = 2k² ⇒ b is a multiple of 2 ⇒ the fraction a/b can be simplified by 2. So, if √2 were rational, then it would be a fraction which could be simplified by 2 arbitrarily many times, which is a contradiction.

7.3.1. The Decimal Structure of Numbers. Decimal fractions:
with a finite number of decimals: 1.2
with an infinite number of decimals: simple periodic 0.(3); mixed periodic 0.2(3); non-periodic.
Let a ∈ {0, 1, …, 9}. Then the number x = 0.(a) = a/9.
Proof:
x = 0.(a) = 0.aaaa… = (1/10)·(a.(a)) = (1/10)·(a + 0.(a)) = (1/10)·(a + x) ⇒ 10x = a + x ⇒ 9x = a ⇒ x = a/9.
Analogously, 0.(a₁…a_k) = a₁…a_k / 99…9 (with k nines in the denominator) = a₁…a_k / (10^k − 1).
The irrational numbers are represented as non-periodic decimal fractions: their decimal structure does not repeat.
A few irrational numbers:
π = 3.1415926535897932384626433832795028841971693993751058209749445923078164062862089986280348…
e = 2.71828182845904523536028747135266249775724709369995957496696762772407663035354759457138217…
ln 2 = 0.69314718055994530941723212145817656807550013436025525412068000949339362196969471560586…
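The identity 0.(a) = a/9 and its k-digit generalization can be verified with exact rational arithmetic; a sketch in Python (the concrete periods are chosen for illustration):

```python
from fractions import Fraction

# 0.(a) = a/9: a purely periodic decimal with a one-digit period a.
for a in range(10):
    x = Fraction(a, 9)
    assert 10 * x == a + x                 # the defining equation 10x = a + x

# More generally 0.(a1...ak) = a1...ak / (10^k - 1), e.g. 0.(142857) = 1/7:
assert Fraction(142857, 10**6 - 1) == Fraction(1, 7)
# Mixed periodic example from the text: 0.2(3) = 2/10 + (1/10)*(3/9) = 7/30.
assert Fraction(2, 10) + Fraction(1, 10) * Fraction(3, 9) == Fraction(7, 30)
```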
7.3.2. The Set of Real Numbers.
On the set of rational numbers one considers the set of Cauchy sequences. Two rational Cauchy sequences (p_n)_{n∈N} and (q_n)_{n∈N} are equivalent if the difference sequence (p_n − q_n)_{n∈N} tends to 0: lim_{n→∞} (p_n − q_n) = 0. A real number is an equivalence class formed by all the Cauchy sequences of rational numbers equivalent to a fixed Cauchy sequence. The details of this construction belong to the Analysis course.
The structure (R, +) is an abelian group with null element 0.
The structure (Z, +) is a subgroup of the structure (R, +);
The structure (Z, +) is a cyclic group, in the sense that it is generated by a single element, the element 1 (or −1).
The structure (Z, +) is the smallest subgroup of (R, +) which contains 1.
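A concrete rational Cauchy sequence in the class of √2 can be produced with Newton's iteration (this particular sequence is a choice made here for illustration, not part of the notes):

```python
from fractions import Fraction

# A rational Cauchy sequence converging to sqrt(2) (Newton's iteration);
# the real number sqrt(2) is the equivalence class of such sequences.
p = Fraction(2)
seq = [p]
for _ in range(5):
    p = (p + 2 / p) / 2                    # p_{n+1} = (p_n + 2/p_n) / 2
    seq.append(p)

# Every term is rational, and p_n^2 - 2 shrinks rapidly:
errors = [abs(t * t - 2) for t in seq]
assert all(errors[i + 1] < errors[i] for i in range(len(errors) - 1))
assert errors[-1] < Fraction(1, 10**10)
```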
7.3.3. The Set of Complex Numbers. The information and the results in this subsection are summarized from ¹ and from .
Consider the set R² together with the operations:
addition: (x₁, y₁) +_c (x₂, y₂) = (x₁ + x₂, y₁ + y₂)
multiplication: (x₁, y₁) ·_c (x₂, y₂) = (x₁x₂ − y₁y₂, x₁y₂ + x₂y₁)
[a distinction is made between the operations of addition and multiplication on R, denoted + and ·, and the new operations, denoted +_c and ·_c]
7.3.1. Remark (Properties of the addition and multiplication operations).
(1) Two elements (x₁, y₁) and (x₂, y₂) are equal ⟺ x₁ = x₂ and y₁ = y₂.
(2) The addition +_c:
(a) is commutative,
(b) is associative,
(c) has the element (0, 0) as neutral element,
(d) every element (x, y) has an inverse with respect to addition (an opposite), which is the element (−x, −y).
(3) The multiplication ·_c:
(a) is commutative,
(b) is associative,
(c) has the element (1, 0) as neutral element,
(d) every element z = (x, y) ≠ (0, 0) has an inverse with respect to multiplication, which is the element z⁻¹ = (x/(x² + y²), −y/(x² + y²)).
(4) The multiplication is distributive over the addition.
(5) The structure C = (R², +_c, ·_c) is a commutative field (it is identified as "the field of complex numbers").
(6) The set R × {0} ⊆ R² is a stable part of R² with respect to each of the operations of addition +_c and multiplication ·_c. The structure (R × {0}, +_c, ·_c) is a subfield of the structure (R², +_c, ·_c).
(7) Consider the structures (R, +, ·) (the usual structure of real numbers) and (R × {0}, +_c, ·_c). The function f(·): R → R × {0} defined by f(x) = (x, 0) is bijective and is a field morphism (hence a field isomorphism), that is, it has the properties:
(a) f(x) +_c f(y) = f(x + y),
(b) f(x) ·_c f(y) = f(x · y).
For this reason, the element (x, 0) "is identified" with x ∈ R ["(x, 0) = x"].
(8) The element (0, 1) is called "the imaginary unit" and is denoted by i [(0, 1) = i].²
(9) Note that (y, 0) ·_c (0, 1) = (y·0 − 0·1, y·1 + 0·0) = (0, y) [the element (0, y) is called a "purely imaginary number"].
¹ The work can be consulted both for details of proofs and for examples of high-school exercises (including at olympiad level).
² It seems that the notation i for √−1 was first used by the Swiss mathematician Leonhard Euler in 1777.
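The pair construction can be implemented literally; a minimal sketch in Python verifying i ·_c i = (−1, 0), the inverse formula, and the embedding x ↦ (x, 0):

```python
from fractions import Fraction

# Complex numbers as pairs (x, y) with the operations +c and *c from the text.
def add_c(z, w):
    return (z[0] + w[0], z[1] + w[1])

def mul_c(z, w):
    return (z[0] * w[0] - z[1] * w[1], z[0] * w[1] + z[1] * w[0])

i = (0, 1)
assert mul_c(i, i) == (-1, 0)                  # i *c i = (-1, 0), i.e. i^2 = -1

def inv_c(z):                                  # inverse of (x, y) != (0, 0)
    x, y = z
    d = x * x + y * y
    return (Fraction(x, d), Fraction(-y, d))

z = (3, 4)
assert mul_c(z, inv_c(z)) == (1, 0)            # z *c z^{-1} is the unit (1, 0)
assert mul_c((2, 0), (5, 0)) == (10, 0)        # x -> (x, 0) preserves products
```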

(10) The distinction between the operations on R and the operations on C will be made from context (it will no longer be made through distinct symbols).
(11) A complex number is (x, y) = (x, 0) +_c (0, y) = x +_c i ·_c y (the algebraic form of the complex number):
(a) z = (x, y) = x + iy
(b) The element x is called "the real part of the complex number z" [x = Re(z)]
(c) The element y is called "the imaginary part of the complex number z" [y = Im(z)]
(12) Properties of the imaginary unit i:
(a) i⁰ = 1, i¹ = i,
(b) (0, 1) · (0, 1) = (0·0 − 1·1, 0·1 + 1·0) = (−1, 0) ⇒ i² = −1
(i) one also says that i is a complex solution of the polynomial equation x² + 1 = 0
(ii) one also writes i = √−1, but when using this notation a distinction must be made regarding the sign "√", namely that together with this notation a new concept also appears: the "complex radical", which is different from the "real radical".
(iii) The definition of the real radical is: Given a ∈ R₊, a [real] radical is a number b ∈ R₊ such that b² = a. [uniqueness] Note that for each a ∈ R₊ there cannot exist several distinct real radicals [the real radical is unique], because if a = b₁² = b₂² then (b₁ − b₂)(b₁ + b₂) = 0 and, since the numbers b₁ and b₂ are both real and nonnegative, b₁ + b₂ ≥ 0 with b₁ + b₂ = 0 ⟺ b₁ = b₂ = 0; so if b₁ + b₂ > 0, then b₁ = b₂ is the only possibility.
(c) i³ = −i, i⁴ = 1, i⁻¹ = 1/i = i³ = −i.
(d) For n ∈ N: i^(4n) = 1, i^(4n+1) = i, i^(4n+2) = −1, i^(4n+3) = −i, i^(−n) = (−i)^n.
(13) Properties of the integer powers of complex numbers:
(a) for z = 0 and n ∈ N*, 0^n = 0.
(b) for z ≠ 0: z⁰ = 1, z¹ = z, for n ∈ N*, z^n = z · … · z (n times), z⁻¹ = 1/z, z^(−n) = (z⁻¹)^n = (z^n)⁻¹.
(c) ∀n, m ∈ Z: z^n · z^m = z^(n+m), z^n / z^m = z^(n−m), (z^n)^m = z^(nm), (z₁z₂)^n = z₁^n z₂^n, (z₁/z₂)^n = z₁^n / z₂^n.

7.3.2. Definition. If $z = a + bi$ is a complex number, the number $\overline{z} = a - bi$ is called the complex conjugate of $z$.
7.3.3. Remark (Properties of the complex conjugate).
(1) $\overline{\overline{z}} = z$
(2) $z \in \mathbb{R} \iff \overline{z} = z$
(3) $\forall z \in \mathbb{C}$, $z\,\overline{z} \in \mathbb{R}_+$
(4) $\overline{z_1 + z_2} = \overline{z_1} + \overline{z_2}$
(5) $\overline{z_1 z_2} = \overline{z_1}\,\overline{z_2}$
(6) $\forall z \neq 0$, $\overline{z^{-1}} = \left(\overline{z}\right)^{-1}$
(7) $\forall z_2 \neq 0$, $\overline{\left(\dfrac{z_1}{z_2}\right)} = \dfrac{\overline{z_1}}{\overline{z_2}}$
(8) $\mathrm{Re}(z) = \dfrac{z + \overline{z}}{2}$, $\mathrm{Im}(z) = \dfrac{z - \overline{z}}{2i}$.

7.3.4. Definition. If $z = a + bi$ is a complex number, its modulus is: $|z| = \sqrt{a^2 + b^2}$.
7.3.5. Remark (Properties of the modulus of a complex number).
(1) $|z| \geq 0$, and $|z| = 0 \iff z = 0$
(2) $|z| = |-z| = |\overline{z}|$
(3) $z\,\overline{z} = |z|^2$
(4) $|z_1 z_2| = |z_1|\,|z_2|$
(5) $|z^{-1}| = |z|^{-1}$
(6) $\left|\dfrac{z_1}{z_2}\right| = \dfrac{|z_1|}{|z_2|}$
(7) $\mathrm{Re}(z), \mathrm{Im}(z) \in [-|z|, |z|]$
(8) $|z| \leq |\mathrm{Re}(z)| + |\mathrm{Im}(z)|$ [on the left-hand side the modulus is complex, while on the right-hand side the moduli are real]
(9) $|z_1 + z_2| \leq |z_1| + |z_2|$
(10) $\big||z_1| - |z_2|\big| \leq |z_1 - z_2|$

7.3.6. Definition. For z = a + bi ∈ C, the pair (a, b) is called the geometric image of the complex number z.

7.3.7. Remark (Geometric interpretations).
(1) The plane in which the complex numbers are represented by their geometric images is called the complex plane.
(2) The coordinate axes are called the real axis and the imaginary axis.
(3) The complex number z may be identified with the position vector of the point (a, b).
(4) The modulus |z| is the length of the position vector [it is also called the polar radius of the point (a, b)].
(5) Addition of complex numbers corresponds to addition of position vectors.
(6) Multiplication of a complex number by a real number corresponds to multiplication of the position vector by a scalar.
(7) The angle between the positive real axis and the position vector of the point (a, b) ≠ (0, 0) gives the argument of the (nonzero) complex number. This angle is determined using trigonometric functions and their inverses, and two difficulties must be overcome: the first is related to the periodicity of the trigonometric functions, the second to the domain/codomain of the inverse trigonometric functions [although the tangent function tan(·) is defined on R \ {(2k+1)π/2 ; k ∈ Z}, to be invertible it must be restricted to (-π/2, π/2)].

(a) The first difficulty is resolved by introducing two notions: Arg(z) (the argument of the complex number) and arg(z) (the principal value of the argument of the complex number), related by Arg(z) = {arg(z) + 2kπ ; k ∈ Z}, which is also used in the form Arg(z) = arg(z) + 2kπ, k ∈ Z [the importance of the distinction between Arg(·) and arg(·) will be seen later, when the usual real functions are extended to the complex domain].
(b) The second difficulty is resolved by defining the principal value of the argument carefully, for each important situation: one uses the functions tan(·) : (-π/2, π/2) → R and arctan(·) : R → (-π/2, π/2), and the function arg(·) : C \ {0} → (-π, π] is defined by:

arg(a + bi) =
  arctan(b/a),        a > 0, b ∈ R
  arctan(b/a) + π,    a < 0, b > 0
  arctan(b/a) - π,    a < 0, b < 0
  π/2,                a = 0, b > 0
  -π/2,               a = 0, b < 0
  π,                  a < 0, b = 0.

(8) The modulus r = |z| and the argument φ = arg(z) of a complex number (the pair (r, φ)) are called the polar coordinates of the complex number.

(9) Given the polar coordinates r and φ of the complex number z, the algebraic form z = a + bi is recovered from the formulas:
  a = r cos φ,  b = r sin φ.
(10) The trigonometric form of a complex number z = a + bi is z = |z| (cos φ + i sin φ) [= |z| (cos arg(z) + i sin arg(z))]; among the reasons this alternative form is useful are formulas such as:
(a) z1 = r1 (cos φ1 + i sin φ1), z2 = r2 (cos φ2 + i sin φ2) ⇒ z1 z2 = r1 r2 (cos(φ1 + φ2) + i sin(φ1 + φ2)) [in fact arg(z1 z2) is the unique element of {arg(z1) + arg(z2), arg(z1) + arg(z2) - 2π, arg(z1) + arg(z2) + 2π} ∩ (-π, π]]
(b) [De Moivre's formula]: z = r (cos φ + i sin φ) ⇒ z^n = r^n (cos nφ + i sin nφ)
(c) z1 = r1 (cos φ1 + i sin φ1), z2 = r2 (cos φ2 + i sin φ2) ⇒ z1/z2 = (r1/r2) (cos(φ1 - φ2) + i sin(φ1 - φ2))
(11) Complex radicals:
(a) [The complex roots of order n of a complex number]: z = r (cos φ + i sin φ) ⇒ the n-th roots of z are ⁿ√r (cos((φ + 2kπ)/n) + i sin((φ + 2kπ)/n)), k = 0, …, n-1
(b) [The complex roots of order n of unity]: the n-th roots of 1 are cos(2kπ/n) + i sin(2kπ/n), k = 0, …, n-1
(c) [The complex roots of order 2 of a complex number]: for z = a + bi with b ≠ 0, the square roots are ±( sqrt((|z| + a)/2) + i·sgn(b)·sqrt((|z| - a)/2) ).
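The polar form, De Moivre's formula and the n-th roots can be illustrated with Python's standard cmath module (an illustrative sketch; cmath.polar returns exactly the pair (r, φ) = (|z|, arg z) used above):

```python
import cmath
import math

z = 1 + 1j
r, phi = cmath.polar(z)            # r = |z|, phi = arg(z) in (-pi, pi]
assert abs(r - math.sqrt(2)) < 1e-12 and abs(phi - math.pi / 4) < 1e-12

# De Moivre: z**n has modulus r**n and argument n*phi (mod 2*pi)
n = 5
zn = cmath.rect(r ** n, n * phi)   # r^n (cos n*phi + i sin n*phi)
assert abs(zn - z ** n) < 1e-9

# the n-th roots of z: n-th root of the modulus, arguments (phi + 2*k*pi)/n
roots = [cmath.rect(r ** (1 / n), (phi + 2 * math.pi * k) / n) for k in range(n)]
assert all(abs(w ** n - z) < 1e-9 for w in roots)
```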

7.3.8. Remark. Although the set of complex numbers turns out to be similar to other sets/structures, the similarities are not "perfect", in the sense that, for example, the multiplication of a complex number by a complex number has no (canonical) correspondent in the complex plane [there are interpretations in particular contexts]. This observation should prompt caution whenever representations considered similar to C (such as the set R² or the complex plane) are used.

The exponential form: e^{iφ} = cos φ + i sin φ, so z = |z| e^{iφ}.
7.4. Cardinality of the Usual Sets of Numbers

7.4.1. Definition. A set is called countable if there is a bijection between it and the set of natural numbers. (Equivalent definition: a set is called countable if it can be arranged as a sequence in which every term is reached after a finite number of steps.)

N is a countable set, because it can be arranged in the form
N = {0, 1, 2, …, n, …},
in which every term is reached after a finite number of steps (one builds a sequence containing all the elements of the set, in which the index of every element is finite).

7.4.2. Remark. Bad ways of counting can also be constructed: if the same elements of the sequence are arranged so that first all the even numbers are counted and only then all the odd numbers, the odd numbers are never reached after a finite number of steps; so although the set is countable, this counting procedure is wrong.
Z is a countable set: its elements can be arranged as a sequence as follows:
a0 = 0, a1 = 1, a2 = -1, a3 = 2, a4 = -2, …
Z = {0, 1, -1, 2, -2, …, n, -n, …}
(the negative and the positive integers alternate).

The set Q is countable. An adequate counting procedure is Cantor's: it is clear that if the positive rationals can be counted adequately, the same procedure can be used for counting the negative rationals, and then, using the procedure for counting the integers (alternating positive and negative numbers), an adequate counting of Q is obtained.

Cantor's counting procedure (for the positive rationals):

The positive rationals are organized in a table in which row i is occupied by the fractions with numerator i and column j by the fractions with denominator j; one obtains an array with infinitely many rows and columns, but with finite secondary diagonals:

1/1  1/2  1/3  …  1/n  …
2/1  2/2  2/3  …  2/n  …
3/1  3/2  3/3  …  3/n  …
 …
m/1  m/2  m/3  …  m/n  …

The secondary diagonals are: (1/1); (1/2, 2/1); (1/3, 2/2, 3/1); …

Counting along these secondary diagonals reaches every term after a finite number of steps.
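Cantor's diagonal walk can be sketched in a few lines of Python (illustrative code, not part of the notes; duplicate values such as 2/2 = 1/1 are skipped so that every positive rational is reached exactly once):

```python
from fractions import Fraction

def cantor_positive_rationals(count):
    """Enumerate the positive rationals along the finite anti-diagonals
    numerator + denominator = 2, 3, 4, ..., skipping repeated values."""
    seen, out = set(), []
    s = 2  # numerator + denominator is constant on each diagonal
    while len(out) < count:
        for num in range(1, s):
            q = Fraction(num, s - num)
            if q not in seen:
                seen.add(q)
                out.append(q)
                if len(out) == count:
                    break
        s += 1
    return out

first = cantor_positive_rationals(7)
# the walk starts 1/1, 1/2, 2/1, 1/3, 3/1, 1/4, 2/3, ...
assert first[:4] == [Fraction(1, 1), Fraction(1, 2), Fraction(2, 1), Fraction(1, 3)]
```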
The set R is not countable.

Proof: We show that the interval [0, 1] (⊆ R) is not countable, by contradiction: suppose it were countable. Then there would exist a sequence such that [0, 1] = {x1, x2, …, xn, …}.

We construct a sequence of closed intervals (I_k)_{k∈N} as follows:
1) Split the interval [0, 1] into three closed intervals of equal length, [0, 1] = [0, 1/3] ∪ [1/3, 2/3] ∪ [2/3, 1], and among them choose as I1 an interval that does not contain x1. After this choice, the interval I1 has the properties:
a) I1 = [a1, b1] ⊆ [0, 1] (I1 is a closed interval, included in [0, 1]);
b) b1 - a1 = 1/3;
c) x1 ∉ I1.
2) For each k > 1, apply to the interval I_{k-1} the procedure from step 1) (with x_k in place of x1 and I_{k-1} in place of [0, 1]) to obtain an interval I_k with the properties:
a) I_k = [a_k, b_k] ⊆ I_{k-1} (I_k is a closed interval, included in I_{k-1});
b) b_k - a_k = 1/3^k;
c) x_k ∉ I_k.
One thus obtains a sequence of intervals (I_k)_{k>0} with the properties:
a) all the terms of the sequence are closed intervals included in [0, 1], and the sequence is decreasing (in the sense that I_k ⊆ I_{k-1});
b) b_k - a_k = 1/3^k;
c) x_j ∉ I_k, ∀j = 1, …, k.
From these properties one sees that the endpoints of the intervals form two sequences, (a_k)_{k>0} increasing and (b_k)_{k>0} decreasing, every element of the sequence (a_k)_{k>0} being smaller than every element of the sequence (b_k)_{k>0}. Since the two sequences are monotone and bounded, they converge to limits denoted a and b respectively, and a ≤ b. It follows that the intersection ∩_{k>0} I_k = [a, b] (so, in particular, it is nonempty), and no element of this intersection can be a member of the sequence (x_n)_{n>0} (by the construction of the sequence of intervals). Thus at least one element of [0, 1] has been found which is not a term of the sequence, contradicting the assumption that [0, 1] could be written as a sequence in which every term is reached after a finite number of steps.
It follows that the set R is not countable.
The set of all infinite sequences of 0s and 1s is not countable.

Proof: Suppose the set could be arranged as a sequence s1 = (s1^n)_{n∈N}, s2 = (s2^n)_{n∈N}, … of sequences of 0s and 1s. Construct the sequence s = (s^n)_{n∈N} by: s^n = 0 if s_n^n = 1, and s^n = 1 if s_n^n = 0. The sequence s is a new sequence of 0s and 1s, which differs from every s_n (in position n), a contradiction.

Note: (R, Q) is a vector space of dimension equal to the cardinality of R.
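The diagonal construction in the proof can be sketched in Python for a finite prefix (illustrative code; the "enumeration" below is hypothetical, each listed sequence being given as a function from index to {0, 1}):

```python
def diagonal_sequence(listed, length):
    """Build a 0/1 sequence differing from the n-th listed sequence
    in position n (Cantor's diagonal construction, finite prefix)."""
    return [1 - listed[n](n) for n in range(length)]

# a hypothetical "enumeration": sequence n is constantly n % 2
listed = [(lambda n: (lambda i: n % 2))(n) for n in range(5)]
s = diagonal_sequence(listed, 5)

# s differs from sequence n at position n, so it is not in the list
assert all(s[n] != listed[n](n) for n in range(5))
```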

7.5. General Matrix Topics

7.5.1. Definition. A matrix (pl. matrices) is a function whose domain is a Cartesian product I × J and which attaches to each pair (i, j) ∈ I × J a (mathematical) expression (any rectangular array of mathematical expressions).

7.5.2. Remark. The sets I and J are traditionally viewed as finite sets of indices (they may also be considered infinite; when this is the case, the distinction will be made in context); these functions are represented as tables, but other conventions also exist. By convention, the set I indexes the rows and J indexes the columns. A matrix with m rows and n columns is also called a matrix of type (m, n). Matrices of type (m, 1) or (1, n) are also called (column, respectively row) vectors. Each possible choice (i, j) of row and column indices is also called a place (position, cell) of the matrix; the value found at a place is also called an entry (a distinction must be made between the place (i, j) and the element a_ij occupying the place, that is, between the argument and the value in the codomain attached to the argument). The matrix operations described below do not always make sense for arbitrary mathematical expressions; usually, the various operations are performed only on certain types of matrices, and the difference is made from context.


7.5.3. Definition. A submatrix of a matrix is the restriction of the matrix to a subset of indices: if A = (a_ij)_{i∈I, j∈J} and I0 ⊆ I, J0 ⊆ J, the submatrix is (a_ij)_{i∈I0, j∈J0} (the restriction of the function defining the matrix). Example: A(i|j) is the submatrix obtained from the initial matrix A by removing row i and column j.

Let A = (a_ij)_{i=1,n; j=1,m} be a matrix.

7.5.4. Definition. A^T = (a_ji)_{j=1,m; i=1,n} is called the transpose of A (it is the matrix whose columns are the rows of the matrix A).

7.5.5. Definition. For matrices with complex entries, the Hermitian adjoint (Hermitian transpose, conjugate transpose, adjoint) of a matrix is the matrix obtained from the initial matrix by transposition and complex conjugation of the entries of the initial matrix.

7.5.6. Definition. The adjugate (classical adjoint) of a matrix is the transpose of the matrix of cofactors.

7.5.7. Definition. The cofactor of the place (i, j) of the matrix A is the number A_ij = (-1)^{i+j} det A(i|j). For a nonsingular matrix, A^{-1} = (1/det A) · Adjugate(A).

7.5.8. Definition. I_n = (δ_ij)_{i=1,n; j=1,n} is called the identity matrix; 0_{n,m} = (0)_{i=1,n; j=1,m} is called the null matrix; a square matrix has n = m (the number of rows and of columns is equal); the main diagonal of a square matrix consists of the places (i, i), i = 1, …, n; by extension, the main diagonal of an arbitrary matrix consists of the places (i, i), i = 1, …, min(n, m); a symmetric matrix satisfies A = A^T (it can only be square); a diagonal matrix is (d_i δ_ij)_{i=1,n; j=1,n} (arbitrary entries on the main diagonal, zero elsewhere); an upper (lower) triangular matrix is a square matrix whose entries below (above) the main diagonal are zero; a strictly upper (lower) triangular matrix is a square matrix whose entries below (above) the main diagonal are zero, the main diagonal included. A column orthonormal matrix satisfies A^T A = I. An orthonormal matrix is square and satisfies A^T A = I.

The rank of a matrix is the maximal dimension of a square submatrix of the matrix having nonzero determinant (the maximal number of columns which, viewed as column vectors in a vector space, form a linearly independent system). An inverse matrix of A is a matrix B satisfying the relations AB = BA = I. An invertible (nonsingular) square matrix is a matrix of maximal rank (equivalently, a matrix for which an inverse matrix exists). A full rank matrix is a matrix for which Rank A = min{n, m}. The trace of a square matrix A is the sum of the entries on the main diagonal of the square matrix.
7.5.9. Remark.
(1) A upper triangular ⇒ A^T lower triangular.
(2) If a matrix A ∈ M_{n,n}(R) is strictly upper (lower) triangular, then A^n = 0.

7.5.10. Remark. If the inverse matrix exists, it is unique; it is usually denoted A^{-1}.

Proof. From AB1 = B1 A = I and AB2 = B2 A = I it follows that B1 and B2 have the same dimensions and B1 = B1 I = B1 (A B2) = (B1 A) B2 = I B2 = B2. □

7.5.11. Definition. 1_n is the column matrix of dimension n with all entries equal to 1. e_ij^n is the square matrix of dimension n having the value 1 at the place (i, j) and 0 elsewhere.

7.5.12. Remark. One sees that
e_ij^n e_kl^n = 0 if j ≠ k, and e_ij^n e_kl^n = e_il^n if j = k.

T_ij^n(a) = I_n + a e_ij^n is called an elementary matrix (elementary transformation), for i ≠ j; multiplying a matrix (not necessarily square, of dimension (n, m)) on the left by the elementary matrix T_ij^n(a) yields a new matrix (also of dimension (n, m)) whose rows coincide with the rows of the old matrix, except for row i, which is replaced by the value obtained by adding to the old row i the row j multiplied by a (row_i + a·row_j → row_i). T_ij^n(a) A is the result of the elementary (row) operation: add to row i the row j multiplied by a and write the result in row i (an assignment operation).
7.5.13. Example.

T_24^4(a) = I_4 + a e_24^4 =
[ 1 0 0 0 ]
[ 0 1 0 a ]
[ 0 0 1 0 ]
[ 0 0 0 1 ]

For A = (a_ij) of type (4, 5):

T_24^4(a) A =
[ a11          a12          a13          a14          a15          ]
[ a21 + a·a41  a22 + a·a42  a23 + a·a43  a24 + a·a44  a25 + a·a45  ]
[ a31          a32          a33          a34          a35          ]
[ a41          a42          a43          a44          a45          ]

A T_ij^m(a) is the result of the elementary (column) operation: add to column j the column i multiplied by a. For example,

T_32^4(a) =
[ 1 0 0 0 ]
[ 0 1 0 0 ]
[ 0 a 1 0 ]
[ 0 0 0 1 ],

and, for A = (a_ij) of type (3, 4),

A T_32^4(a) =
[ a11  a12 + a·a13  a13  a14 ]
[ a21  a22 + a·a23  a23  a24 ]
[ a31  a32 + a·a33  a33  a34 ].
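Assuming NumPy is available, the action of the elementary matrices T_ij^n(a) can be checked numerically (a minimal sketch with illustrative names; indices are 0-based, unlike in the notes):

```python
import numpy as np

def T(n, i, j, a):
    """Elementary matrix T_ij^n(a) = I_n + a*e_ij (0-based indices)."""
    E = np.eye(n)
    E[i, j] = a
    return E

A = np.arange(20, dtype=float).reshape(4, 5)

# left multiplication acts on rows: row_1 <- row_1 + a*row_3
B = T(4, 1, 3, 2.0) @ A
expected = A.copy()
expected[1] += 2.0 * A[3]
assert np.allclose(B, expected)

# right multiplication by T_ij acts on columns: col_j <- col_j + a*col_i
C = A[:3, :4] @ T(4, 2, 1, 2.0)
assert np.allclose(C[:, 1], A[:3, 1] + 2.0 * A[:3, 2])
```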


7.5.14. Remark. Properties of elementary matrices:
(1) det T_ij^n(a) = 1,
(2) T_ij^n(a) T_ij^n(b) = I_n + (a + b) e_ij^n = T_ij^n(a + b) (the product of two elementary matrices with the same place is again an elementary matrix) (a closure property),
(3) T_ij^n(0) = I_n is the neutral element, and T_ij^n(a) T_ij^n(-a) = T_ij^n(0) = I_n (T_ij^n(a) is nonsingular, with inverse T_ij^n(-a)); in other words, elementary transformations are reversible,
(4) [[1, 0_{1,n}],[0_{n,1}, T_ij^n(a)]] = T_{i+1,j+1}^{n+1}(a).
7.5.15. Definition. A transformation matrix is any finite product of elementary matrices.

7.5.16. Definition. Q_ij^n, i < j (permutation matrix), is the matrix obtained from the identity matrix I_n by interchanging rows i and j (it may also be viewed as the matrix obtained from the identity matrix by interchanging columns i and j).

7.5.17. Remark. Q_ij^n A is the matrix having the same rows as the initial matrix A, but with rows i and j interchanged.

7.5.18. Remark. A Q_ij^n is the matrix having the same columns as the initial matrix, but with columns i and j interchanged.

7.5.19. Definition. R_ij^n, i < j, is the matrix obtained from the identity matrix I_n by interchanging rows i and j, row j being taken with changed sign (the entry below the main diagonal is -1) (it may also be viewed as the matrix obtained from the identity matrix by interchanging columns i and j, with the entry below the main diagonal equal to -1).

7.5.20. Remark. T_ij^n(-1) T_ji^n(1) T_ij^n(-1) I_n is the matrix obtained from I_n by interchanging rows i and j and changing the sign of the new row i (the interchange is realized only up to a sign: every product of elementary matrices has determinant 1, while det Q_ij^n = -1, so the plain interchange Q_ij^n itself is not a product of elementary matrices). The product performs successively the following operations on the identity matrix:
(1) subtract row j from row i and put the result in row i,
(2) add row i to row j and put the result in row j,
(3) subtract row j from row i and put the result in row i.
7.5.21. Remark. R_ij^n = T_ji^n(-1) T_ij^n(1) T_ji^n(-1) (so R_ij^n is a transformation matrix); that is, the following operations are performed successively on the identity matrix:
(1) subtract row i from row j and put the result in row j,
(2) add row j to row i and put the result in row i,
(3) subtract row i from row j and put the result in row j.

7.5.22. Example.

Q_23^4 =
[ 1 0 0 0 ]
[ 0 0 1 0 ]
[ 0 1 0 0 ]
[ 0 0 0 1 ]

For A = (a_ij) of type (4, 3), Q_23^4 A interchanges rows 2 and 3:

Q_23^4 A =
[ a11 a12 a13 ]
[ a31 a32 a33 ]
[ a21 a22 a23 ]
[ a41 a42 a43 ];

for A of type (4, 4), A Q_23^4 interchanges columns 2 and 3:

A Q_23^4 =
[ a11 a13 a12 a14 ]
[ a21 a23 a22 a24 ]
[ a31 a33 a32 a34 ]
[ a41 a43 a42 a44 ].

R_23^4 =
[ 1  0 0 0 ]
[ 0  0 1 0 ]
[ 0 -1 0 0 ]
[ 0  0 0 1 ];

R_23^4 A interchanges rows 2 and 3 of A and changes the sign of the new row 3 (the old row 2); A R_23^4 interchanges columns 2 and 3 and changes the sign of the column that moves into position 2 (the old column 3).

7.5.23. Remark. For every nonzero column vector a = (a1, a2, …, an)^T ≠ 0 there is a transformation matrix carrying a into e_1^n = (1, 0, …, 0)^T.
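A corresponding NumPy sketch for permutation matrices (illustrative code; indices are 0-based):

```python
import numpy as np

def Q(n, i, j):
    """Permutation matrix Q_ij^n: the identity with rows i and j swapped."""
    P = np.eye(n)
    P[[i, j]] = P[[j, i]]
    return P

A = np.arange(12, dtype=float).reshape(4, 3)
P = Q(4, 1, 2)

# left multiplication swaps rows 1 and 2
B = P @ A
assert np.array_equal(B[1], A[2]) and np.array_equal(B[2], A[1])

# right multiplication (on a matrix with 4 columns) swaps columns 1 and 2
C = A.T @ P
assert np.array_equal(C[:, 1], A.T[:, 2]) and np.array_equal(C[:, 2], A.T[:, 1])

# Q is its own inverse, and det Q = -1 (so it is not a product of T's)
assert np.allclose(P @ P, np.eye(4))
assert np.isclose(np.linalg.det(P), -1.0)
```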

7.5.24. Theorem. Let A = (a_ij)_{i=1,n; j=1,m} be a nonzero matrix. Then there exist r ∈ N ∩ [1, min{n, m}] and a transformation matrix U (of dimension n) such that

U A =
[ 0 … 0 1 b_{1,j1+1} …  …  …  …  …  …  …  … b_{1m} ]
[ 0 … 0 0  …  0 1 b_{2,j2+1} …  …  …  …  … b_{2m} ]
[ 0 … 0 0  …  0 0  …  0 1 b_{3,j3+1} …  … b_{3m} ]
[ …                                              … ]
[ 0 … 0 0  …  0 0  …  0 0  …  0 1 b_{r,jr+1} … b_{rm} ]
[ 0 … 0 0  …  0 0  …  0 0  …  0 0  …  …  …  …  0 ]
[ …                                              … ]
[ 0 … 0 0  …  0 0  …  0 0  …  0 0  …  …  …  …  0 ]

with 1 ≤ j1 < … < jr ≤ m. A matrix of the above form is called an echelon matrix (in fact, a row echelon form of the initial matrix); it is characterized as follows: the first j1 - 1 columns are identically zero; the submatrix formed by the columns j1 + 1 up to j2 - 1 and the rows 2 up to n is a null matrix, etc. (the unit, hence nonzero, entries at the places (1, j1), …, (r, jr) form a pseudo-diagonal (staircase), and the entries to the left of and below this pseudo-diagonal are zero) (the number of zeros at the beginning of each nonzero row increases strictly with the row index).

Proof. By induction on the dimensions of the matrix: assume the result is true for every matrix of type (n′, m′) with n′ < n, m′ < m; let j1 be the first column of the matrix A containing at least one nonzero entry. There is a transformation matrix which brings a nonzero entry to row 1, annihilates all the other entries of column j1, and transforms the nonzero entry into 1. The submatrix obtained by removing row 1 and the columns 1 up to j1 has dimensions (n - 1, m - j1) and, by the induction hypothesis, can be brought to an echelon form. □
7.5.25. Remark. The matrix A and any echelon form of it have the same rank.

7.5.26. Remark. Another echelon-type form is the reduced row echelon form, which has the following properties:
(1) The number of zeros at the beginning of each row increases with the row index.
(2) The first nonzero entry of each row is equal to 1.
(3) Each column containing the first nonzero entry of some row has all its other entries equal to zero.
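The reduction described by the theorem can be sketched as a small Gaussian-elimination routine (an illustrative sketch, assuming NumPy; row interchanges are used for pivoting, and the transformation matrix U itself is not tracked):

```python
import numpy as np

def row_echelon(A, tol=1e-12):
    """Reduce A to a row echelon form using elementary row operations."""
    M = A.astype(float).copy()
    n, m = M.shape
    row = 0
    for col in range(m):
        if row == n:
            break
        # find a pivot in this column, at or below position `row`
        pivot = next((p for p in range(row, n) if abs(M[p, col]) > tol), None)
        if pivot is None:
            continue                      # column is (numerically) zero
        M[[row, pivot]] = M[[pivot, row]] # row interchange
        M[row] /= M[row, col]             # make the pivot equal to 1
        for p in range(row + 1, n):       # annihilate the entries below
            M[p] -= M[p, col] * M[row]
        row += 1
    return M

A = np.array([[0.0, 2, 4], [1, 1, 1], [2, 2, 2]])
R = row_echelon(A)
assert np.allclose(R, [[1, 1, 1], [0, 1, 2], [0, 0, 0]])
```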
7.6. Determinants

| a11 a12 |
| a21 a22 |  =  a11 a22 - a12 a21

| a11 a12 a13 |
| a21 a22 a23 |  =  a11 a22 a33 + a21 a32 a13 + a31 a12 a23 - a13 a22 a31 - a23 a32 a11 - a33 a12 a21
| a31 a32 a33 |

(the rule of Sarrus: the rows
a11 a12 a13
a21 a22 a23
are copied below the determinant, the products along the descending diagonals are added, and the products along the ascending diagonals are subtracted).
7.7. Matrix Operations

Standard matrix operations: addition, subtraction, multiplication of two matrices (in the particular case of multiplying a row matrix by a column matrix, the operation is also called a scalar product), multiplication of a matrix by an expression.

7.7.1. Definition. Matrix product, for matrices compatible with respect to the matrix product, i.e. the number of columns of the first matrix equals the number of rows of the second (commensurate matrices):
A = (a_ij)_{i=1,n; j=1,m}; B = (b_jl)_{j=1,m; l=1,p}; C = AB;
C = (c_il)_{i=1,n; l=1,p}, c_il = Σ_{j=1}^{m} a_ij b_jl, ∀i = 1,n, l = 1,p.

7.7.2. Definition. Scalar multiplication:
A = (a_ij)_{i=1,n; j=1,m}, α ∈ C; C = αA; C = (c_ij)_{i=1,n; j=1,m}, c_ij = α a_ij, ∀i = 1,n, j = 1,m.

7.7.3. Definition. Matrix sum:
A = (a_ij)_{i=1,n; j=1,m}, B = (b_ij)_{i=1,n; j=1,m}; C = A + B;
C = (c_ij)_{i=1,n; j=1,m}, c_ij = a_ij + b_ij, ∀i = 1,n, j = 1,m.

7.7.4. Definition. Transposition:
A = (a_ij)_{i=1,n; j=1,m}; C = A′ = A^T = ᵗA;
C = (c_ij)_{i=1,m; j=1,n}, c_ij = a_ji, ∀i = 1,m, j = 1,n.

7.7.5. Definition. The trace of a square matrix:
Tr(A) = Σ_{i=1}^{n} a_ii.

7.7.6. Remark. If A ∈ M_{n,m}(R) and B ∈ M_{m,n}(R), then Tr(AB) = Tr(BA).

Proof. For A = (a_ij)_{i=1,n; j=1,m} and B = (b_ij)_{i=1,m; j=1,n}:
AB = (c_ij)_{i=1,n; j=1,n} with c_ij = Σ_{k=1}^{m} a_ik b_kj, and BA = (d_ij)_{i=1,m; j=1,m} with d_ij = Σ_{l=1}^{n} b_il a_lj, so
Tr(AB) = Σ_{l=1}^{n} c_ll = Σ_{l=1}^{n} Σ_{k=1}^{m} a_lk b_kl = Σ_{k=1}^{m} Σ_{l=1}^{n} b_kl a_lk = Σ_{k=1}^{m} d_kk = Tr(BA). □
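A quick numerical check of this remark (assuming NumPy; the matrices need not be square, only of compatible types):

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((3, 5))
B = rng.standard_normal((5, 3))

# AB is 3x3 and BA is 5x5, yet the traces coincide
assert np.isclose(np.trace(A @ B), np.trace(B @ A))
```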

7.7.8. Definition. The determinant of a square matrix A is the number
det(A) = Σ_{σ∈S_n} ε(σ) Π_{k=1}^{n} a_{k,σ(k)} = Σ_{k=1}^{n} a_ik γ_ik (the expansion of the determinant along row i) = Σ_{k=1}^{n} a_kj γ_kj (the expansion of the determinant along column j), where:

7.7.9. Definition. the algebraic complement (cofactor) of the place (position) (i, k) is γ_ik = (-1)^{i+k} d_ik;

7.7.10. Definition. the minor of the place (position) (i, k), denoted d_ik, is the determinant of the submatrix obtained by removing row i and column k.

7.7.11. Definition. The permanent of a square matrix A is the number
Permanent(A) = Σ_{σ∈S_n} Π_{k=1}^{n} a_{k,σ(k)}.
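The permutation-sum definitions of the determinant and the permanent translate directly into Python (illustrative code using only the standard library; practical computations use elimination instead, since the sum has n! terms):

```python
from itertools import permutations
from math import prod

def sign(p):
    """Sign of a permutation (a tuple of 0..n-1), via the inversion count."""
    inv = sum(p[i] > p[j] for i in range(len(p)) for j in range(i + 1, len(p)))
    return -1 if inv % 2 else 1

def det(A):
    """Leibniz formula: signed sum of diagonal products over permutations."""
    n = len(A)
    return sum(sign(p) * prod(A[k][p[k]] for k in range(n))
               for p in permutations(range(n)))

def permanent(A):
    """The same sum, without the signs."""
    n = len(A)
    return sum(prod(A[k][p[k]] for k in range(n))
               for p in permutations(range(n)))

A = [[1, 2], [3, 4]]
assert det(A) == 1 * 4 - 2 * 3 == -2
assert permanent(A) == 1 * 4 + 2 * 3 == 10
```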


7.8. Other Matrix Operations

7.8.1. Definition. Element-wise product: A .* B = C, c_ij = a_ij b_ij, ∀i = 1,n, j = 1,m.

7.8.2. Definition. Element-wise division: A ./ B = C, c_ij = a_ij / b_ij, ∀i = 1,n, j = 1,m.

7.8.3. Definition. Logical condition: for a given comparison relation R (for example ≤), A .R B = C, where c_ij = 1 if a_ij R b_ij and c_ij = 0 otherwise, ∀i = 1,n, j = 1,m.

7.9. Partitioned Matrices

7.9.1. Definition. Kronecker product: A ⊗ B = (a_ij B) (the result is a matrix obtained as follows: each place of the matrix A is occupied by the element at the place (i, j) multiplied by the matrix B).

7.9.2. Definition. Direct sum: A ⊕ B = [[A, 0],[0, B]].

7.9.3. Remark (the product of two partitioned matrices). For A = [[A11, A12],[A21, A22]] and B = [[B11, B12],[B21, B22]] (with compatible partitions) we obtain
AB = [[A11 B11 + A12 B21, A11 B12 + A12 B22],[A21 B11 + A22 B21, A21 B12 + A22 B22]].

Proof. For A = (a_ij)_{i=1,n; j=1,m} and B = (b_jk)_{j=1,m; k=1,p}, the product is C = (c_ik)_{i=1,n; k=1,p}, where c_ik = Σ_{j=1}^{m} a_ij b_jk. The partition of the matrix A is:
A11 = (a_ij^11)_{i=1,n1; j=1,m1},  A12 = (a_ij^12)_{i=1,n1; j=m1+1,m},
A21 = (a_ij^21)_{i=n1+1,n; j=1,m1},  A22 = (a_ij^22)_{i=n1+1,n; j=m1+1,m};
the partition of the matrix B is described analogously:
B11 = (b_jk^11)_{j=1,m1; k=1,p1},  B12 = (b_jk^12)_{j=1,m1; k=p1+1,p},
B21 = (b_jk^21)_{j=m1+1,m; k=1,p1},  B22 = (b_jk^22)_{j=m1+1,m; k=p1+1,p}.
From c_ik = Σ_{j=1}^{m} a_ij b_jk = Σ_{j=1}^{m1} a_ij b_jk + Σ_{j=m1+1}^{m} a_ij b_jk one obtains:
(c_ik)_{i=1,n1; k=1,p1} = Σ_{j=1}^{m1} a_ij^11 b_jk^11 + Σ_{j=m1+1}^{m} a_ij^12 b_jk^21 = A11 B11 + A12 B21,
(c_ik)_{i=1,n1; k=p1+1,p} = Σ_{j=1}^{m1} a_ij^11 b_jk^12 + Σ_{j=m1+1}^{m} a_ij^12 b_jk^22 = A11 B12 + A12 B22,
(c_ik)_{i=n1+1,n; k=1,p1} = Σ_{j=1}^{m1} a_ij^21 b_jk^11 + Σ_{j=m1+1}^{m} a_ij^22 b_jk^21 = A21 B11 + A22 B21,
(c_ik)_{i=n1+1,n; k=p1+1,p} = Σ_{j=1}^{m1} a_ij^21 b_jk^12 + Σ_{j=m1+1}^{m} a_ij^22 b_jk^22 = A21 B12 + A22 B22.
We have proved that AB = [[A11 B11 + A12 B21, A11 B12 + A12 B22],[A21 B11 + A22 B21, A21 B12 + A22 B22]]. □

7.9.4. Remark. We have
[[A11, A12],[A21, A22]] · [[A22, -A12],[-A21, A11]] = [[A11 A22 - A12 A21, A12 A11 - A11 A12],[A21 A22 - A22 A21, A22 A11 - A21 A12]].
If A11 and A12 commute, the (1,2) block is zero; if A21 and A22 commute, the (2,1) block is zero.

7.9.5. Remark. det [[A11, 0],[A21, I]] = det A11 and det [[I, 0],[A21, A11]] = det A11.

Proof. Expand the determinant along the columns (respectively the rows) corresponding to the identity block; when these are exhausted, what remains is det A11. □
7.9.6. Remark. det [[A11, 0],[0, A22]] = det A11 · det A22.

Proof. One observes that the relation
[[A11, 0],[0, A22]] = [[A11, 0],[0, I]] · [[I, 0],[0, A22]]
holds, so det [[A11, 0],[0, A22]] = det [[A11, 0],[0, I]] · det [[I, 0],[0, A22]] = det A11 · det A22. □

7.9.7. Remark. det [[A11, 0],[A21, A22]] = det A11 · det A22.

Proof. One observes that
[[A11, 0],[A21, A22]] = [[A11, 0],[A21, I]] · [[I, 0],[0, A22]],
so det [[A11, 0],[A21, A22]] = det [[A11, 0],[A21, I]] · det [[I, 0],[0, A22]] = det A11 · det A22. □

7.9.8. Remark. det [[I, A12],[A21, A22]] = det(A22 - A21 A12).

Proof. One observes that
[[I, A12],[A21, A22]] = [[I, 0],[A21, I]] · [[I, A12],[0, A22 - A21 A12]],
and one obtains det [[I, A12],[A21, A22]] = det [[I, 0],[A21, I]] · det [[I, A12],[0, A22 - A21 A12]] = det(A22 - A21 A12). □
7.9.9. Remark. If A = [[A11, A12],[A21, A22]] and A11 are square and invertible, then det A = det A11 · det(A22 - A21 A11^{-1} A12).

Proof.
[[A11, A12],[A21, A22]] = [[A11, 0],[A21, I]] · [[I, A11^{-1} A12],[0, A22 - A21 A11^{-1} A12]]
⇒ det [[A11, A12],[A21, A22]] = det A11 · det(A22 - A21 A11^{-1} A12). □
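Remark 7.9.9 (the Schur-complement determinant formula) can be verified numerically (assuming NumPy; the partition sizes below are illustrative):

```python
import numpy as np

rng = np.random.default_rng(1)
A = rng.standard_normal((5, 5))
A11, A12 = A[:2, :2], A[:2, 2:]
A21, A22 = A[2:, :2], A[2:, 2:]

# det A = det(A11) * det(A22 - A21 A11^{-1} A12)
schur = A22 - A21 @ np.linalg.inv(A11) @ A12
assert np.isclose(np.linalg.det(A), np.linalg.det(A11) * np.linalg.det(schur))
```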

7.9.10. Remark. There is a transformation matrix which brings [[I_{n1}, B],[C, D]] to the form [[I_{n1}, B],[0, D - CB]].

Proof. Evidently [[I_{n1}, 0],[-C, I_{n2}]] = Π_{i=1}^{n2} Π_{k=1}^{n1} T_{n1+i,k}(-c_{ik}) is a product of elementary matrices, and
[[I_{n1}, 0],[-C, I_{n2}]] · [[I_{n1}, B],[C, D]] = [[I_{n1}, B],[0, D - CB]]. □

7.9.11. Example. For a 4×4 matrix whose first two columns have been brought to the form of the identity in the first two rows,

A =
[ 1    0    a13  a14 ]
[ 0    1    a23  a24 ]
[ a31  a32  a33  a34 ]
[ a41  a42  a43  a44 ],

successive elementary row transformations annihilate the entries a31, a32, a41, a42 below the identity block:
T_31(-a31): row 3 ← row 3 - a31·row 1 gives row 3 = (0, a32, a33 - a13 a31, a34 - a31 a14);
T_32(-a32): row 3 ← row 3 - a32·row 2 gives row 3 = (0, 0, a33 - a13 a31 - a23 a32, a34 - a31 a14 - a32 a24);
T_41(-a41) and T_42(-a42) act on row 4 in the same way. The result is

[ 1  0  a13                        a14                        ]
[ 0  1  a23                        a24                        ]
[ 0  0  a33 - a13 a31 - a23 a32    a34 - a31 a14 - a32 a24    ]
[ 0  0  a43 - a13 a41 - a23 a42    a44 - a14 a41 - a24 a42    ],

and the transformation matrix is the product T_42^4(-a42) T_32^4(-a32) T_41^4(-a41) T_31^4(-a31) (factors acting on different rows commute). In block form, the example reduces [[I_2, B],[C, D]] to [[I_2, B],[0, D - CB]].

7.9.12. Remark. If A = [[A11, A12],[A21, A22]], with A11 and C = A22 - A21 A11^{-1} A12 square and nonsingular, then one obtains
A^{-1} = [[A11^{-1} + A11^{-1} A12 C^{-1} A21 A11^{-1}, -A11^{-1} A12 C^{-1}],[-C^{-1} A21 A11^{-1}, C^{-1}]].

Proof. Let B = [[B11, B12],[B21, B22]] = A^{-1} and let D denote the block matrix on the right-hand side. The equality BA = I is equivalent to the system
B11 A11 + B12 A21 = I,
B11 A12 + B12 A22 = 0,
B21 A11 + B22 A21 = 0,
B21 A12 + B22 A22 = I,
and a direct block computation (using C = A22 - A21 A11^{-1} A12) shows that the blocks of D satisfy these relations, and likewise AD = [[I, 0],[0, I]]. Hence B = A^{-1} = D. □
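The block-inversion formula can be checked numerically (assuming NumPy; the shift by 5I below is only to keep A11 and the Schur complement safely nonsingular):

```python
import numpy as np

rng = np.random.default_rng(2)
A = rng.standard_normal((5, 5)) + 5 * np.eye(5)    # comfortably nonsingular
A11, A12, A21, A22 = A[:2, :2], A[:2, 2:], A[2:, :2], A[2:, 2:]

A11i = np.linalg.inv(A11)
C = A22 - A21 @ A11i @ A12                          # Schur complement of A11
Ci = np.linalg.inv(C)

blockinv = np.block([
    [A11i + A11i @ A12 @ Ci @ A21 @ A11i, -A11i @ A12 @ Ci],
    [-Ci @ A21 @ A11i,                    Ci],
])
assert np.allclose(blockinv, np.linalg.inv(A))
```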
Remark. The transpose of a partitioned matrix is obtained by transposing both the blocks and the block structure:
A = [[A1],[A2]] ⇒ A^T = [A1^T  A2^T];  A = [[A11, A12],[A21, A22]] ⇒ A^T = [[A11^T, A21^T],[A12^T, A22^T]].

7.10. The Generalized (Moore-Penrose) Inverse

7.10.1. Definition. If A ∈ M_{(m,k)}(R), a matrix A^+ ∈ M_{(k,m)}(R) is called a generalized inverse of A in the Moore-Penrose sense if it satisfies the following conditions:
(1) A A^+ A = A
(2) A^+ A A^+ = A^+
(3) A A^+ and A^+ A are symmetric.

7.10.2. Remark. The Moore-Penrose inverse is unique.
Proof. Let B also satisfy (1), (2), (3). Then, using the symmetry of A A^+, A^+ A, A B and B A:
A^+ = A^+ A A^+ = A^+ (A A^+)^T = A^+ (A^+)^T A^T = A^+ (A^+)^T (A B A)^T = A^+ (A^+)^T A^T (A B)^T = A^+ (A A^+)^T (A B)^T = A^+ A A^+ A B = A^+ A B,
and
B = B A B = (B A)^T B = A^T B^T B = (A A^+ A)^T B^T B = A^+ A A^T B^T B = A^+ A (B A)^T B = A^+ A B A B = A^+ A B,
so A^+ = B. □
7.10.3. Remark. Let A ∈ M_{(m,k)}(R); then A^T A ∈ M_{(k,k)}(R) is symmetric and positive semidefinite.

Proof. For A ∈ M_{(m,k)}(R) and x ∈ R^k (a column vector), Ax ∈ R^m (a column vector) and (Ax)^T (Ax) = ⟨Ax, Ax⟩ ≥ 0, i.e. x^T (A^T A) x ≥ 0, so A^T A ∈ M_{(k,k)}(R) is positive semidefinite; since (A^T A)^T = A^T A, the matrix A^T A is also symmetric. □

7.10.4. Remark. If the product of a matrix and its transpose is the null matrix, then the matrix is null.

Proof. The main diagonal of the product A A^T consists of the sums of the squares of the entries of each row of the initial matrix; if A A^T = 0, every such sum is zero, so every entry of A is zero. □
7.10.5. Theorem (Finding the SVD and the generalized inverse). Any matrix A ∈ M_{(m,k)}(R) of rank r can be decomposed as a product A = U D V^T, with U of type (m, r), D of type (r, r) and V^T of type (r, k), where:
(1) D is a diagonal (r, r) matrix with strictly positive, decreasing entries on the main diagonal (d11 ≥ d22 ≥ … ≥ drr > 0);
(2) U and V are matrices whose columns are orthonormal vectors, of dimensions (m, r) and (k, r) respectively (i.e. U^T U = V^T V = I_r).
(3) Moreover, the Moore-Penrose generalized inverse of the matrix A is A^+ = V D^{-1} U^T, where V ∈ M_{(k,r)}, D^{-1} ∈ M_{(r,r)}, U^T ∈ M_{(r,m)}.

a si pozitiv semidenit
a. 9W 2
Proof. Fie A 2 M(m;k) (R); atunci AT A 2 M(k;k) (R) este simetric
(k;m)(m;k)

## M(k;k) (R) astfel nct

W T AT A W
este diagonal
a (admite o baz
a ortonormal
a format
a din vectori proprii n care matricea atasat
is diagonal (moreover, one may assume that the eigenvalues on the main diagonal, which are all nonnegative, are ordered decreasingly), so W^T (A^T A) W = G. Let r be the rank of the matrix A^T A (i.e. A^T A has exactly r strictly positive eigenvalues, the remaining ones being zero); then G = diag(g_ii)_{i=1,k} with g_11 >= ... >= g_rr > 0 = g_{r+1,r+1} = ... = g_kk, so

G = [ G1 0_{(r,k-r)} ; 0_{(k-r,r)} 0_{(k-r,k-r)} ].

Write W = [ W1 W2 ], where W1 is of type (k,r) and W2 of type (k,k-r). From

I_k = W^T W = [ W1^T ; W2^T ] [ W1 W2 ] = [ W1^T W1 , W1^T W2 ; W2^T W1 , W2^T W2 ]

it follows that W1^T W1 = I_r, while from

I_k = W W^T = [ W1 W2 ] [ W1^T ; W2^T ] = W1 W1^T + W2 W2^T

it follows that W1 W1^T = I_k - W2 W2^T. Moreover,

[ W1^T ; W2^T ] (A^T A) [ W1 W2 ] = [ W1^T A^T A W1 , W1^T A^T A W2 ; W2^T A^T A W1 , W2^T A^T A W2 ] = [ G1 0 ; 0 0 ],

so that W1^T (A^T A) W1 = G1, W1^T (A^T A) W2 = 0, W2^T (A^T A) W1 = 0 and W2^T (A^T A) W2 = 0. In particular, W2^T A^T A W2 = (A W2)^T (A W2) = 0 implies A W2 = 0_{(m,k-r)} (if the product of a matrix with its own transpose is the zero matrix, then the matrix itself is the zero matrix).

Define:
D = sqrt(G1) (i.e. D = diag(sqrt(g_ii))_{i=1,r} is the (r,r) diagonal matrix whose entries are strictly positive and decreasing);
V = W1 (of type (k,r));
U = A V D^{-1} (of type (m,r)).

From the computations above it follows that:
V^T V = W1^T W1 = I_r,
U^T U = (A V D^{-1})^T (A V D^{-1}) = (D^{-1})^T V^T A^T A V D^{-1} = D^{-1} W1^T A^T A W1 D^{-1} = D^{-1} G1 D^{-1} = diag( g_ii / (sqrt(g_ii) sqrt(g_ii)) )_{i=1,r} = I_r,
and, since A W2 = 0_{(m,k-r)},
U D V^T = A V D^{-1} D V^T = A W1 W1^T = A (I_k - W2 W2^T) = A - (A W2) W2^T = A - 0_{(m,k)} = A,
which establishes the decomposition.
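The construction above can be replayed numerically. The following sketch (not part of the notes; the matrix A and the tolerance 1e-10 are arbitrary choices made here for illustration) builds W1, G1, D, V, U exactly as in the proof and checks the three conclusions.

```python
import numpy as np

A = np.array([[1.0, 2.0, 0.0],
              [2.0, 4.0, 0.0],
              [0.0, 0.0, 0.0],
              [1.0, 0.0, 1.0]])          # an m x k example, not of full rank
m, k = A.shape

# Eigendecomposition of the symmetric matrix A^T A: columns of W are
# orthonormal eigenvectors; sort the eigenvalues decreasingly as in the text.
g, W = np.linalg.eigh(A.T @ A)
order = np.argsort(g)[::-1]
g, W = g[order], W[:, order]

r = int(np.sum(g > 1e-10))               # rank of A^T A (and of A)
W1 = W[:, :r]                            # eigenvectors of the positive eigenvalues
G1 = np.diag(g[:r])

D = np.sqrt(G1)                          # D = G1^(1/2), diagonal and positive
V = W1
U = A @ V @ np.linalg.inv(D)

# The three properties derived above:
assert np.allclose(V.T @ V, np.eye(r))   # V^T V = I_r
assert np.allclose(U.T @ U, np.eye(r))   # U^T U = I_r
assert np.allclose(U @ D @ V.T, A)       # U D V^T = A  (the decomposition)
```

This is exactly the reduced ("thin") SVD that `numpy.linalg.svd` computes directly.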

The matrix V D^{-1} U^T satisfies:
1) A (V D^{-1} U^T) A = U D V^T V D^{-1} U^T U D V^T = U D I_r D^{-1} I_r D V^T = U D V^T = A;
2) (V D^{-1} U^T) A (V D^{-1} U^T) = V D^{-1} U^T U D V^T V D^{-1} U^T = V D^{-1} I_r D I_r D^{-1} U^T = V D^{-1} U^T;
3) A (V D^{-1} U^T) = U D V^T V D^{-1} U^T = U D I_r D^{-1} U^T = U U^T, and (U U^T)^T = U U^T (the matrix is symmetric);
3') (V D^{-1} U^T) A = V D^{-1} U^T U D V^T = V D^{-1} I_r D V^T = V V^T, and (V V^T)^T = V V^T (the matrix is symmetric).
From 1), 2), 3), 3') it follows that V D^{-1} U^T satisfies the relations defining the Moore-Penrose inverse; since the matrix satisfying these relations is unique, it follows that A^+ = V D^{-1} U^T.
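The four relations can be cross-checked numerically. In this illustrative sketch (the rank-deficient matrix A is an arbitrary example) U, D, V are produced by `numpy.linalg.svd` and A^+ = V D^{-1} U^T is compared against NumPy's own pseudoinverse.

```python
import numpy as np

A = np.array([[1.0, 2.0, 3.0],
              [2.0, 4.0, 6.0],
              [1.0, 0.0, 1.0]])          # rank 2 (second row is twice the first)

U_full, s, Vt = np.linalg.svd(A)
r = int(np.sum(s > 1e-10))
U, D, V = U_full[:, :r], np.diag(s[:r]), Vt[:r, :].T

A_plus = V @ np.linalg.inv(D) @ U.T      # V D^{-1} U^T

assert np.allclose(A @ A_plus @ A, A)                # 1) A A+ A = A
assert np.allclose(A_plus @ A @ A_plus, A_plus)      # 2) A+ A A+ = A+
assert np.allclose((A @ A_plus).T, A @ A_plus)       # 3) A A+ is symmetric
assert np.allclose((A_plus @ A).T, A_plus @ A)       # 3') A+ A is symmetric
assert np.allclose(A_plus, np.linalg.pinv(A))        # agrees with numpy's pinv
```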

7.10.6. Remark. If A is symmetric, then U is a row of eigenvectors of A corresponding to nonzero eigenvalues, so that A^T U = U D_1, with D_1 a diagonal (r,r) matrix of nonzero eigenvalues, in decreasing order. In this case V = A^T U D^{-1}; since the eigenvalues of A and its singular values are identical except possibly for sign, the columns of the matrices U and V are either equal (for positive roots) or of opposite sign (for negative roots). Therefore, if A is positive semidefinite, it has an SVD decomposition A = U D U^T, with U having orthogonal columns and D positive diagonal.

7.10.7. Remark. If A is square and nonsingular, then A^+ = A^{-1} (the Moore-Penrose generalized inverse coincides with the usual inverse of the matrix).

Proof. A square and nonsingular ⇒ ∃A^{-1}. From A A^+ A = A, multiplying by A^{-1} on the right gives A A^+ = I; from A A^+ A = A, multiplying by A^{-1} on the left gives A^+ A = I. Similarly, A^{-1} satisfies all the relations defining the Moore-Penrose inverse; since the matrix satisfying these relations is unique, it follows that A^+ = A^{-1}.
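A one-line numerical illustration of this remark (the nonsingular matrix A is an arbitrary example chosen here):

```python
import numpy as np

A = np.array([[2.0, 1.0],
              [1.0, 1.0]])               # det = 1, so A is nonsingular

# For a nonsingular square matrix the pseudoinverse is the ordinary inverse.
assert np.allclose(np.linalg.pinv(A), np.linalg.inv(A))
```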
7.10.8. Remark. The system of equations A x = y (with A of type (m,k), x of type (k,1), y of type (m,1)) has a solution ⇔ y = A A^+ y; moreover, the flat set of all solutions is the set of vectors x = A^+ y + [I_k - A^+ A] z, ∀z ∈ R^k.

Proof. From A x = y, multiplying by A^+ on the left gives A^+ A x = A^+ y; multiplying by A on the left gives A A^+ A x = A A^+ y ⇒ A x = A A^+ y; so, if the system is consistent, then y = A A^+ y.
Conversely, if y = A A^+ y, then x = A^+ y is a solution of the system: A (A^+ y) = A A^+ y = y, so the system is consistent.
Moreover, for z ∈ R^k one has: A (A^+ y + [I_k - A^+ A] z) = A A^+ y + A [I_k - A^+ A] z = y + A z - A A^+ A z = y + A z - A z = y.
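The solution-set description can be checked numerically. In this illustrative sketch (A, y and the random vectors z are arbitrary choices) every member of the family A^+ y + (I - A^+ A) z solves the system.

```python
import numpy as np

A = np.array([[1.0, 1.0, 0.0],
              [0.0, 1.0, 1.0]])          # 2 x 3, full row rank: every y is attainable
y = np.array([3.0, 5.0])
A_plus = np.linalg.pinv(A)

# consistency criterion: y = A A+ y
assert np.allclose(A @ A_plus @ y, y)

# the whole solution family, for several arbitrary z in R^3
rng = np.random.default_rng(0)
for _ in range(5):
    z = rng.normal(size=3)
    x = A_plus @ y + (np.eye(3) - A_plus @ A) @ z
    assert np.allclose(A @ x, y)
```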

Proof. From A A^+ A = A, multiplying by A^+ on the right gives (A A^+)^2 = A A^+; from A A^+ A = A, multiplying by A^+ on the left gives (A^+ A)^2 = A^+ A (so A A^+ and A^+ A are idempotent).
7.11. Algebraic Structures

7.11.1. Definition. Given a nonempty set M, a composition law on M is any function φ(·,·) : M × M → M [any function that attaches to each pair of elements of M an element of M]. [The function φ(·,·) is also called an algebraic operation or a binary operation.] Composition laws may be denoted by various symbols: x ∘ y, x ⊥ y, x ⋆ y, x + y (additive notation), x · y (multiplicative notation), etc. Any abbreviated statement about a composition law implicitly refers to all the elements of the definition.

7.11.2. Definition. Given a nonempty set M, a subset H of M and a composition law φ(·,·) on M, the set H is called a stable subset of M with respect to φ(·,·) if: ∀x, y ∈ H, φ(x, y) ∈ H. [The result of the operation between two elements of H does not leave the set H.]


7.11.3. Definition. A binary operation ∘ is called associative if: ∀x, y, z ∈ M, (x ∘ y) ∘ z = x ∘ (y ∘ z) [the expression x ∘ y ∘ z has the same result regardless of the order in which the operations are performed].

7.11.4. Definition. A binary operation ∘ is called commutative if ∀x, y ∈ M, x ∘ y = y ∘ x.

7.11.5. Definition. An element e ∈ M is called an identity (neutral) element for the law ∘ if: ∀x ∈ M, x ∘ e = e ∘ x = x.
A law ∘ is said to have an identity element on M if: ∃e ∈ M, ∀x ∈ M, x ∘ e = e ∘ x = x.

7.11.6. Remark. If a composition law has an identity element, then this element is unique. [A law cannot have several identity elements.]

Proof. Suppose that the law ∘ had two identity elements, e and f. Then:
I. [e is an identity element] ∀x ∈ M, x ∘ e = e ∘ x = x, and
II. [f is an identity element] ∀x ∈ M, x ∘ f = f ∘ x = x.
From I., with x = f, one obtains f ∘ e = e ∘ f = f, while from II., with x = e, one obtains e ∘ f = f ∘ e = e.
From the two relations one obtains e = e ∘ f = f, so that if two identity elements existed, they would be equal.
7.11.7. Definition. Let ∘ be an associative law with identity element e on M. An element x ∈ M is called invertible (symmetrizable) with respect to the law ∘ if: ∃x′ ∈ M, x ∘ x′ = x′ ∘ x = e. [x′ is called the symmetric of x with respect to ∘; it depends on x and changes when x changes.]

7.11.8. Remark. The symmetric of an element x with respect to the law ∘, if it exists, is unique. [An element cannot have several symmetrics.]

Proof. By the definition, the law ∘ is associative and has an identity element.
Suppose that x ∈ M had two symmetrics with respect to the law ∘:
x ∘ x′ = x′ ∘ x = e and
x ∘ x″ = x″ ∘ x = e. Then:
x′ = x′ ∘ e = x′ ∘ (x ∘ x″) = (x′ ∘ x) ∘ x″ = e ∘ x″ = x″,
so the two symmetric elements are equal.
7.11.9. Remark. Let x ∈ M be an invertible element. Then its symmetric x′ ∈ M is also invertible, and its symmetric is x: (x′)′ = x.

Proof. If it existed, the element (x′)′ would have to satisfy the relation: (x′)′ ∘ x′ = x′ ∘ (x′)′ = e [I].
From the definition of the invertibility of x it is known that the element x′ exists and satisfies: x ∘ x′ = x′ ∘ x = e [II].
From relation [II] one sees that if, in [I], (x′)′ is replaced by x, the relation is satisfied.
So there exists an element (namely x) which satisfies the definition of the invertibility of x′. Since this element is unique, it follows that (x′)′ = x.
7.11.10. Remark. Let x, y ∈ M be two invertible elements. Then the element x ∘ y ∈ M is also invertible, and its symmetric is (x ∘ y)′ = y′ ∘ x′.

Proof. If the element (x ∘ y) ∈ M were invertible, there would have to exist an element (x ∘ y)′ satisfying the relation:
(x ∘ y) ∘ (x ∘ y)′ = (x ∘ y)′ ∘ (x ∘ y) = e.
If in this relation (x ∘ y)′ is replaced by (y′ ∘ x′), one finds:
(x ∘ y) ∘ (y′ ∘ x′) = ((x ∘ y) ∘ y′) ∘ x′ = (x ∘ (y ∘ y′)) ∘ x′ = (x ∘ e) ∘ x′ = x ∘ x′ = e,
(y′ ∘ x′) ∘ (x ∘ y) = y′ ∘ (x′ ∘ (x ∘ y)) = y′ ∘ ((x′ ∘ x) ∘ y) = y′ ∘ (e ∘ y) = y′ ∘ y = e,
so that the element (y′ ∘ x′) satisfies the relation in the definition of the invertibility of (x ∘ y).
From uniqueness it follows that (x ∘ y)′ = y′ ∘ x′.
7.11.11. Definition. Consider a law ∘ on the set M. The pair (M, ∘) is called a monoid if the law ∘ is associative and has an identity element.

7.11.12. Definition. Consider a law ∘ on the set M. The pair (M, ∘) is called a group if (M, ∘) is a monoid in which every element is invertible.

7.11.13. Remark (Computation rules in a group). In a group (M, ∘) the following rules hold:
(1) x ∘ y = x ∘ z ⇒ y = z [left cancellation];
(2) y ∘ x = z ∘ x ⇒ y = z [right cancellation];
(3) the equation a ∘ x = b has the solution x = a′ ∘ b, which is unique;
(4) the equation x ∘ a = b has the solution x = b ∘ a′, which is unique.
Proof. 1. 2. 3. 4.
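As an illustration of rules (1)-(4), the following sketch (with ad hoc helper names, not from the notes) works in the concrete group (Z_7, + mod 7):

```python
n = 7                                    # the group (Z_7, + mod 7)
elements = range(n)
op = lambda x, y: (x + y) % n            # the composition law
sym = lambda x: (-x) % n                 # the symmetric (inverse) of x

a, b = 3, 5
x = op(sym(a), b)                        # rule (3): x = a' o b solves a o x = b
assert op(a, x) == b

# left cancellation: t -> a o t is injective, so it hits all n elements
assert len({op(a, t) for t in elements}) == n
```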
7.11.14. Definition. Consider a group (M, ∘). A subset ∅ ≠ H ⊆ M is called a subgroup of M if H is a stable subset of M and H contains, together with any element x, its symmetric as well:
(1) ∀x, y ∈ H, x ∘ y ∈ H;
(2) ∀x ∈ H, x′ ∈ H.

7.11.15. Remark. If H is a subgroup of (M, ∘), then:
(1) e ∈ H;
(2) (H, ∘|_H) is a group.
Proof. 1. 2.
7.11.16. Definition. Consider two groups (M, ∘) and (N, ⋆). A function f(·) : M → N is called:
(1) a group morphism (from (M, ∘) to (N, ⋆)) if ∀x, y ∈ M, f(x ∘ y) = f(x) ⋆ f(y) [the function "transports" the result of the operations in the domain into the result of the operations in the codomain];
(2) a group isomorphism if f(·) is a morphism and is bijective [the function "transports one-to-one" the result of the operations in the domain into the result of the operations in the codomain]. If M = N and ∘ = ⋆, then a morphism (isomorphism) is also called an endomorphism (automorphism).

7.11.17. Remark. If f(·) is a morphism from (M, ∘) to (N, ⋆), then:
(1) f(e_M) = e_N [e_M is the identity element of the group (M, ∘) and e_N is the identity element of the group (N, ⋆)];
(2) f(x′) = (f(x))′ [on the left-hand side the symmetric is taken in (M, ∘), while on the right-hand side the symmetric is taken in (N, ⋆)];
(3) if f(·) is an isomorphism from (M, ∘) to (N, ⋆), then f^{-1}(·) is an isomorphism from (N, ⋆) to (M, ∘).
Proof. 1. 2. 3.
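A concrete morphism to experiment with (an illustrative choice, not from the notes): f(x) = x mod 5, a group morphism from (Z, +) onto (Z_5, + mod 5), checked against properties (1) and (2) of Remark 7.11.17.

```python
f = lambda x: x % 5                      # the morphism (Z, +) -> (Z_5, + mod 5)

for x in range(-10, 10):
    for y in range(-10, 10):
        assert f(x + y) == (f(x) + f(y)) % 5   # morphism property

assert f(0) == 0                               # f(e_M) = e_N
for x in range(-10, 10):
    assert f(-x) == (-f(x)) % 5                # f(x') = (f(x))'
```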
7.11.18. Definition. Consider a set A together with two abstract composition laws, denoted additively and multiplicatively. The triple (A, +, ·) is called a ring if:
(1) (A, +) is a commutative (abelian) group;
(2) (A, ·) is a monoid;
(3) ∀x, y, z ∈ A, x · (y + z) = x · y + x · z and (y + z) · x = y · x + z · x [multiplication is distributive on the left and on the right with respect to addition].
[The identity element of the operation denoted additively is denoted 0.] [The identity element of the operation denoted multiplicatively is denoted 1.] [The symmetric of an element x with respect to the operation denoted additively is denoted −x; a − b means a + (−b).] [The symmetric (when it exists) of an element x with respect to the operation denoted multiplicatively is denoted x^{-1} and is called the inverse of x.] [Writing 1/x as a substitute for x^{-1} is avoided, because constructions with fractions are more specific and would reduce the generality of the context.] [No extra parentheses are added to establish the order of operations, by the convention that the operation denoted multiplicatively is performed first.] [The convention xy = x · y is also used.]
The elements of A that are invertible with respect to the operation denoted multiplicatively are also called units of the ring. If the operation · is commutative, the ring (A, +, ·) is called commutative. Two elements x, y ∈ A are called divisors of 0 if: x ≠ 0 and y ≠ 0 and xy = 0. An element 0 ≠ x ∈ A is called a divisor of 0 if ∃y ∈ A, y ≠ 0, such that xy = 0. An element x ∈ A is said not to be a divisor of 0 if ∀y ∈ A, y ≠ 0, xy ≠ 0 [in other words: xy = 0 ⇒ y = 0] [one may cancel by x, even though x is not necessarily invertible]. A ring (A, +, ·) with at least two elements and without divisors of 0 is called an integral domain.
7.11.19. Remark (Computation rules in a ring). In a ring (A, +, ·) the following hold:
(1) x + y = x + z ⇒ y = z and y + x = z + x ⇒ y = z [reduction may be performed on both sides, with any element];
(2) the equations a + x = b and x + a = b each have the unique solution x = b − a;
(3) x0 = 0x = 0;
(4) 1 ≠ 0;
(5) (−x) y = x (−y) = −xy and (−x)(−y) = xy [the rule of signs];
(6) x (y − z) = xy − xz and (y − z) x = yx − zx;
(7) if (A, +, ·) is an integral domain, then: xy = xz and x ≠ 0 ⇒ y = z [one may cancel on the left by a nonzero element, even if it is not invertible]; yx = zx and x ≠ 0 ⇒ y = z [one may cancel on the right by a nonzero element, even if it is not invertible].
7.11.20. Definition. Consider two rings (A, +, ·) and (B, +′, ·′). A function f(·) : A → B is called a ring morphism if:
(1) f(x + y) = f(x) +′ f(y);
(2) f(x · y) = f(x) ·′ f(y);
(3) f(1) = 1′.
If, moreover, the function f(·) is bijective, then f(·) is called an isomorphism.

7.11.21. Remark. If f(·) is a ring morphism, then:
(1) f(0) = 0′;
(2) f(−x) = −f(x);
(3) if x is invertible in (A, +, ·), then f(x) is invertible in (B, +′, ·′) and (f(x))^{-1′} = f(x^{-1}).


7.11.22. Remark. In the case of a ring isomorphism, if B has at least one element which is not a divisor of 0, then axiom (3) is a consequence of axioms (1) and (2) in the definition of a ring morphism.

Proof. Let x′ ∈ B be an element which is not a divisor of 0. Then ∃x ∈ A, x′ = f(x), and:
f(1) ·′ x′ = f(1) ·′ f(x) = f(1 · x) = f(x) = x′ = 1′ ·′ x′ ⇒ f(1) = 1′ [because x′ is not a divisor of 0].
7.11.23. Definition. A ring (K, +, ·) is called a field if 0 ≠ 1 and every nonzero element admits an inverse: ∀x ∈ K, x ≠ 0, ∃x^{-1} ∈ K.

7.11.24. Remark. Fields have no divisors of 0.

7.11.25. Remark. If (K, +, ·) is a field, then (K*, ·) is a group [K* is a notation for the set K \ {0}], called the multiplicative group of the field (K, +, ·).

7.11.26. Definition. Consider two fields (K, +, ·) and (K′, +′, ·′) and a function f(·) : K → K′. The function f(·) is called a field (iso)morphism if it is a (bijective) ring morphism.

7.11.27. Remark. On a set, different composition laws may be considered, with respect to which the algebraic structures obtained (for the same set) may be different.
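An illustrative check that (Z_5, + mod 5, · mod 5) satisfies the field requirements; the choice of the prime p = 5 and the use of Fermat's little theorem to compute inverses are conveniences of this sketch, not part of the text.

```python
p = 5                                    # a prime, so Z_p is a field

# every nonzero element has a multiplicative inverse: x^{-1} = x^(p-2) mod p
for x in range(1, p):
    inv = pow(x, p - 2, p)
    assert (x * inv) % p == 1

# no divisors of 0: a product of nonzero elements is never 0 mod p
assert all((x * y) % p != 0 for x in range(1, p) for y in range(1, p))
```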

## Examples of Exams from Previous Years

8.1. Written paper from 26.11.2014
I. Consider in (R2[t], R) the polynomials:
p1 = t^2 + 2t - 2, p2 = t^2 - 2t, p3 = -t^2 + 1, p4 = t^2 - 4t + 1, p5 = 2t^2 + 1
and the subspaces X1 = span(p2, p3, p4) and X2 = span(p1, p5).

(0.5p) a) Determine a basis and the dimension for X1.
(0.5p) b) Determine a basis and the dimension for X2.
(0.5p) c) Determine a basis and the dimension for X1 + X2.
(1p) d) Determine a basis and the dimension for X1 ∩ X2.
(0.5p) e) Give the general statement of the Dimension Theorem (Grassmann) and verify it for X1 and X2.
(1p) f) Represent a vector of X1 ∩ X2 in a basis for X1, a basis for X2, a basis for X1 + X2 and a basis for X1 ∩ X2.
(0.5p) g) Is the sum X1 + X2 direct? Find two distinct representations for 0 ∈ X1 + X2.
(1p) II. Consider a vector space (V, K) and a linear transformation U(·) : V → V such that U ∘ U = 1_V. Show that U(·) is a bijection.

III. Consider a vector space (V, K) and a linear transformation U(·) : V → V such that U ∘ U = 1_V. Show that:
(0.5p) a) X1 = {x + U(x) ; x ∈ V} is a vector subspace in (V, K).
(0.5p) b) X2 = {x − U(x) ; x ∈ V} is a vector subspace in (V, K).
(1p) c) V = X1 ⊕ X2.
IV. Consider the linear transformation U(·) : R^4 → R^3, given by
U(x1, x2, x3, x4) = (x1 + x2, x1 + x3, x1 + x4).
(0.5p) a) Find a basis and the dimension for ker U(·).
(0.5p) b) Find a basis and the dimension for Im U(·).
(0.5p) c) Give the general statement of the Rank-Nullity Theorem and verify it for U(·).
(1p) V. Consider a vector space (V, K) and two subspaces V1 and V2. Prove that the following statements are equivalent:
1. ∀v ∈ V1 + V2, ∃!v1 ∈ V1, ∃!v2 ∈ V2, v = v1 + v2.
2. V1 ∩ V2 = {0_V}.
(1p) VI. State the definitions of the objects involved and prove that the composition of two linear transformations is a linear transformation.
Note: The grade will be 1p + the points obtained from each exercise (for the quality of the explanations).
8.1.1. Solution.
I. For the basis E = {t^2, t, 1} in (R2[t], R), the representations of the polynomials are:
[p1]_E = (1, 2, -2)^T, [p2]_E = (1, -2, 0)^T, [p3]_E = (-1, 0, 1)^T, [p4]_E = (1, -4, 1)^T, [p5]_E = (2, 0, 1)^T.

a) The determinant |1 -1; -2 0| = -2 ≠ 0, so that the rank of the matrix
[1 -1 1; -2 0 -4; 0 1 1] (columns [p2]_E, [p3]_E, [p4]_E) is 2
⇒ a basis for X1 is {p2, p3} [X1 = span(p2, p3)] and dim X1 = 2.

b) The minor |1 2; 2 0| = -4 ≠ 0 ⇒ the matrix [1 2; 2 0; -2 1] (columns [p1]_E, [p5]_E) has rank 2
⇒ a basis for X2 is {p1, p5} [X2 = span(p1, p5)] and dim X2 = 2.

c) The minor |1 -1 2; -2 0 0; 0 1 1| = -6 ≠ 0 ⇒ the matrix [1 -1 1 2; -2 0 -4 0; 0 1 1 1] (columns [p2]_E, [p3]_E, [p4]_E, [p5]_E) has rank 3
⇒ a basis for X1 + X2 is {p2, p3, p5}, dim(X1 + X2) = 3, X1 + X2 = R2[t].

d) The intersection X1 ∩ X2 is given by the vectors v = α p2 + β p3 = γ p1 + δ p5
⇒ the system: α − β = γ + 2δ, −2α = 2γ, β = −2γ + δ,
with the solution: α ∈ R, β = 2α, γ = −α, δ = 0 (so v = α(p2 + 2 p3) = −α p1)
⇒ a basis for X1 ∩ X2 is {p1}, dim X1 ∩ X2 = 1.

e) 3 = dim(X1 + X2) = dim X1 + dim X2 − dim X1 ∩ X2 = 2 + 2 − 1.

f) p1 ∈ X1 ∩ X2; [p1]_{p1} = (1); [p1]_{p2,p3} = (-1, -2)^T; [p1]_{p1,p5} = (1, 0)^T; [p1]_E = (1, 2, -2)^T.

g) The sum is not direct, because the intersection has dimension 1.
Because p1 ∈ X1 ∩ X2, we have p1 ∈ X1 and p1 ∈ X2, which is a subspace, so −p1 ∈ X2.
So 0 = 0 + 0 = p1 + (−p1) are two different decompositions of 0.

II.
(U ∘ U)(·) = 1_V(·) ⇔ U(U(x)) = x, ∀x ∈ V.
Injectivity: consider x1, x2 ∈ V such that U(x1) = U(x2) ⇒ U(U(x1)) = U(U(x2)) ⇒ x1 = x2.
Surjectivity: consider y ∈ V; then U(U(y)) = y and, with x_y := U(y), we get:
∀y ∈ V, ∃x_y ∈ V [x_y = U(y)] such that U(x_y) = y.
III.
a) y1, y2 ∈ X1 = {x + U(x); x ∈ V} ⇒ ∃x1, x2 ∈ V such that y1 = x1 + U(x1) and y2 = x2 + U(x2).
Then y1 + y2 = x1 + U(x1) + x2 + U(x2) = (x1 + x2) + U(x1 + x2) ∈ X1.
For α ∈ K, α y1 = α(x1 + U(x1)) = (α x1) + U(α x1) ∈ X1.
b) y1, y2 ∈ X2 = {x − U(x); x ∈ V} ⇒ ∃x1, x2 ∈ V such that y1 = x1 − U(x1) and y2 = x2 − U(x2).
Then y1 + y2 = x1 − U(x1) + x2 − U(x2) = (x1 + x2) − U(x1 + x2) ∈ X2.
For α ∈ K, α y1 = α(x1 − U(x1)) = (α x1) − U(α x1) ∈ X2.
c) For x ∈ V, x = (x + U(x))/2 + (x − U(x))/2 ∈ X1 + X2 because:
x + U(x) ∈ X1 (subspace) ⇒ (x + U(x))/2 ∈ X1, and
x − U(x) ∈ X2 (subspace) ⇒ (x − U(x))/2 ∈ X2.
So X1 + X2 = V.
Consider y ∈ X1 ∩ X2 ⇒ ∃x1, x2 ∈ V such that y = x1 + U(x1) = x2 − U(x2) ⇒
U(y) = U(x1 + U(x1)) = U(x1) + U(U(x1)) = U(x1) + x1 = y and
U(y) = U(x2 − U(x2)) = U(x2) − U(U(x2)) = U(x2) − x2 = −y.
We get y = −y ⇒ y = 0.

Since the sum covers the whole space and the intersection is null, X1 ⊕ X2 = V.

IV.
a) The system { x1 + x2 = 0, x1 + x3 = 0, x1 + x4 = 0 } has the solutions x1 = α, x2 = −α, x3 = −α, x4 = −α, α ∈ R, because in the canonical bases
U(x1, x2, x3, x4) = (x1 + x2, x1 + x3, x1 + x4) = [1 1 0 0; 1 0 1 0; 1 0 0 1] (x1, x2, x3, x4)^T.
The set {(1, −1, −1, −1)} is a basis for ker U(·) and the dimension is 1.

b) The matrix [1 1 0 0; 1 0 1 0; 1 0 0 1] has rank 3, so the system { x1 + x2 = y1, x1 + x3 = y2, x1 + x4 = y3 } is compatible for each y1, y2, y3
⇒ Im U(·) = R^3, with a basis given by {(1, 0, 0), (0, 1, 0), (0, 0, 1)} and dimension 3.
c) The dimension of the domain is 4, and the Rank-Nullity Theorem [dim V1 = dim ker U(·) + dim Im U(·)] reads: 4 = 1 + 3.
V.
1. ⇒ 2.: 1.7, page 59
2. ⇒ 1.: 1.7, page 59
VI.
2.2.7, page 75
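A numerical cross-check of a) and b) (illustrative; NumPy's `matrix_rank` stands in for the hand computation): the matrix of U has rank 3, so dim Im U = 3 and dim ker U = 4 − 3 = 1, and (1, −1, −1, −1) spans the kernel.

```python
import numpy as np

# matrix of U(x1,x2,x3,x4) = (x1+x2, x1+x3, x1+x4) in the canonical bases
M = np.array([[1, 1, 0, 0],
              [1, 0, 1, 0],
              [1, 0, 0, 1]], dtype=float)

assert np.linalg.matrix_rank(M) == 3                     # dim Im U = 3
assert np.allclose(M @ np.array([1, -1, -1, -1.0]), 0)   # kernel vector
```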


## 8.2. Classical Solution Topics

Subject I (2 points).
Given the vectors b1 = (1, 2, 3), b2 = (0, 1, 1), b3 = (2, 1, 4), b4 = (m, 0, 2), where m ∈ R, and X = span_R({b2, b3, b4}):
(1p) a) Show that X is a vector subspace in (R^3, R) and determine its dimension.
(1p) b) Using the Gauss-Jordan pivoting method, determine the coordinates of the vector b1 in a frame of (X, R) formed only with the vectors b2, b3, b4, in the situation m = 7/3.
Subject I (2 points).
In the vector space (R^3, R) consider the vectors:
v1 = (1, 2, 2), v2 = (1, 2, 0), v3 = (−1, 0, 1), v4 = (1, 4, 1), v5 = (2, 0, 1) and the subspaces X1 and X2, where X1 is generated by {v2, v3, v4} and X2 is generated by {v1, v5}. Determine:
(0.5p) a) A basis for each of X1 and X2.
(0.5p) b) The subspace Y = X1 ∩ X2 and its dimension.
(1p) c) The subspace X = X1 + X2, its dimension, and verify the dimension theorem (Grassmann).
Subject II (2 points).
Consider the quadratic functional defined on (R^3, R):
V(x) = f(x, x) = 2x1^2 + x3^2 − 4x1x2 − 2x2x3
(0.5p) a) Write the matrix of the quadratic functional with respect to the canonical frame (canonical basis).
(0.5p) b) Determine the polar bilinear functional of the quadratic functional.
(1p) c) Determine the canonical form of the quadratic functional and the nature of the quadratic functional.
Subject II (2 points).
Consider the quadratic functional defined on (R^3, R):
V(x) = f(x, x) = −x1^2 + x3^2 − 2x1x2 − 4x2x3
(0.5p) a) Determine the matrix of the quadratic functional with respect to the canonical frame (canonical basis) of (R^3, R).
(0.5p) b) Determine the polar bilinear functional of the quadratic functional.
(1p) c) Determine the canonical form of the quadratic functional as well as the basis of the canonical form.
Subject III (2 points).
In the vector space (R^3, R) consider X the set of all linear combinations of the vectors v1 = (1, 2, 3), v2 = (−4, 3, 1).
(1p) a) Determine an orthogonal basis of X.
(1p) b) Determine the projection of the vector v = (−2, 3, 1) onto X.

Subject III (2 points).
In the vector space (R^4, R) consider the vectors x = (1, 0, 1, 1), a1 = (3, 0, 1, 1), a2 = (8, 1, 5, 4), a3 = (4, 1, 1, 0) and the subspace X = span({a1, a2, a3}). Determine:
(1p) a) the orthogonal projection of the vector x onto the subspace X;
(1p) b) the orthogonal complement X⊥ of the subspace X and its dimension.


Subject IV (10 × 0.3p = 3 points)
(0.3p) 1) Define the proper subspace (eigenspace) associated with the eigenvalue λ of the linear operator U : X → X.
(0.3p) 1) What does it mean that the quadratic functional V : X → R is negative definite?
(0.3p) 1) Define the distance on the real vector space X.
(0.3p) 1) Define the orthogonal complement of a vector subspace.
(0.3p) 2) In the space (R^3, R) consider the vectors v1 = (−2, 1, 1), v2 = (1, 3, 1), v3 = (−1, 4, 0), v4 = (2, 6, 2). Then:
(a) {v1, v2, v4} is a linearly independent system;
(b) {v1, v4} is a linearly dependent system;
(c) {v2, v3} is a linearly dependent system;
(d) {v1, v2} is a linearly independent system;
(e) {v1, v2, v3, v4} is a frame in (R^3, R).
(0.3p) 3) Let X, Y be linear subspaces of the linear space (R^4, R). Then:
(a) dim(X + Y) = dim X − dim Y + dim(X ∩ Y);
(b) dim(X + Y) = dim X + dim Y + dim(X ∩ Y);
(c) dim X + dim Y = dim(X + Y);
(d) dim X + dim Y = dim(X ∩ Y) + dim(X + Y);
(e) dim X = 2 dim(X + Y) − dim Y.
(0.3p) 3) Let X, Y be linear subspaces of the space (R^2, R). Then:
(a) dim(X ∩ Y) = dim(X + Y) − dim X − dim Y;
(b) dim X + dim Y = dim(X + Y) + dim(X ∩ Y);
(c) dim(X ∩ Y) = dim X + dim Y + dim(X + Y);
(d) dim X + dim Y = dim(X + Y);
(e) 2 dim X = dim(X + Y) − 2 dim Y.

(0.3p) 4) Let U : R^3 → R^3 be a linear operator. Then:
(a) 3 dim Im U = 3 dim ker U;
(d) 3 dim ker U = 3 dim Im U;
(e) dim ker U + dim Im U = 3.

(0.3p) 5) Let U : R^3 → R^3 be a linear operator. Then x ∈ R^3 \ {(0, 0, 0)} is an eigenvector if:
(a) ∃λ ∈ R*, such that U(x) = 2λx;
(b) ∀λ ∈ R, U(λx) = λx;
(c) ∀λ ∈ R, U(2λx + x) = 2λx;
(d) ∀λ ∈ R, U(λx + x) = λx;
(e) ∃λ ∈ R, such that U(2x) ≠ 2λx.
(0.3p) 6) Let U : R^3 → R^3 be a linear operator whose matrix with respect to the canonical frame of (R^3, R) is A. Then:
(a) if A = [2 0 0; 0 2 1; 0 0 2], then U has Jordan canonical form;
(b) if A = [2 1 0; 0 2 0; 0 0 2], then U has Jordan canonical form;
(c) if A = [2 0 3; 0 2 0; 3 0 2], then U has diagonal form;
(d) if A = [2 1 0; 0 2 1; 0 0 2], then U has Jordan canonical form;
(e) if A = [1 0 0; 0 2 0; 0 0 2], then U is not diagonalizable.

From:
Chapter 1 [Vector Spaces]
Chapter 2 [Euclidean Linear Spaces]
Chapter 3 [Linear Operators]

From:
Chapter 2 [Vector Spaces],
Chapter 3 [Linear Maps],
Chapter 4 [Systems of Linear Equations. Eigenvectors and Eigenvalues],
Chapter 6 [Bilinear Forms. Quadratic Forms]

Section 2.3 [Systems of Linear Algebraic Equations]
Section 2.4 [Vector Spaces]
Section 2.5 [Linear Operators]
Section 2.6 [Linear Forms. Quadratic Forms]

From:
Chapter I [Vector Spaces]
Chapter II [Linear Operators (Transformations)]
Chapter III [Bilinear and Quadratic Forms]

From:
Chapter 3 [Vector Spaces]
Chapter 4 [Linear Maps and Matrices]
Chapter 5 [Eigenvalues and Eigenvectors]
Chapter 6 [Euclidean Spaces]
Chapter 7 [Quadratic Forms]


## 8.5. The Structure of the Semester Paper

The semester paper will take place during the seminar and will contain the following types of subjects:
finding the Jordan canonical form and the Jordan basis for an operator;
discussing the nature of a quadratic functional depending on a parameter;
studying a linear functional;
other subjects.

## 8.6. Conditions for Exam

Students must come to the exam with:

(1) an identity document (ID card / passport);
(2) 2 stapled sets of 6 blank, unsigned A4 sheets [80 g/m^2] each (= 12 pages per set);
(3) writing instruments;
(4) an eraser and/or correction fluid.

The following are not allowed:

(1) the use of auxiliary materials;
(2) the use of electronic devices (mobile phones, calculators, ...);
(3) disturbing the proper conduct of the examination and communicating with colleagues.


## 8.7. Old Structure of the Exam

Subject 1 (2 points)
vector space (vector subspace) and properties
the exchange lemma (and the pivoting technique)
sum and intersection of vector subspaces (including Grassmann's theorem)
direct sum of vector subspaces and the complement (supplement) of a vector subspace
Subject 2 (2 points)
the kernel and the image of a linear operator (including the dimension theorem)
eigenvectors and eigenvalues (including related problems)
linear functionals and the algebraic dual of a vector space
quadratic functionals and their reduction to canonical form
Subject 3 (2 points)
orthonormal bases and the Gram-Schmidt algorithm
orthogonality (the orthogonal projection operator, the projection of a vector onto a subspace, the orthogonal complement)
the adjoint of a linear operator and its properties
self-adjoint endomorphisms (including orthogonal projection operators)
Subject 4 (3 points)
definitions or statements of theorems
simple theoretical applications (including proofs of theorems)
an exercise on systems of first-order linear differential equations of dimension at most 2 and related problems (including the Hamilton-Cayley theorem and problems attached to the Jordanization algorithm).

Bibliography
Allen, Roy George Douglas: "Mathematical Analysis for Economists", MacMillan and Co., London, 1938.
Andreescu, Titu; Andrica, Dorin: "Complex Numbers from A to...Z", Birkhäuser, Boston, 2006.
Anton, Howard; Rorres, Chris: "Elementary Linear Algebra - Applications version", Tenth edition, Wiley, 2010.
Bădin, Luiza; Cărpuşcă, Mihaela; Ciurea, Grigore; Şerban, Radu: "Algebră Liniară - Culegere de Probleme", Editura ASE, 1999.
Bellman, Richard: "Introducere în analiza matricială" (translated from English), Editura Tehnică, Bucureşti, 1969. (Original title: "Introduction to Matrix Analysis", McGraw-Hill Book Company, Inc., 1960)
Benz, Walter: "Classical Geometries in Modern Contexts - Geometry of Real Inner Product Spaces", Second Edition, Springer, 2007.
Blair, Peter D.; Miller, Ronald E.: "Input-Output Analysis: Foundations and Extensions", Cambridge University Press, 2009.
Blume, Lawrence; Simon, Carl P.: "Mathematics for Economists", W. W. Norton & Company Inc., 1994.
Bourbaki, N.: "Éléments de mathématique", Acta Sci. Ind., Hermann & Cie, Paris, 1953.
Buşneag, Dumitru; Chirteş, Florentina; Piciu, Dana: "Probleme de Algebră Liniară", Craiova, 2002.
Burlacu, V.; Cenuşă, Gh.; Săcuiu, I.; Toma, M.: "Curs de Matematici", Academia de Studii Economice, Facultatea de Planificare şi Cibernetică Economică, Bucureşti, 1982.
Chiriţă, Stan: "Probleme de Matematici superioare", Editura Didactică şi Pedagogică, Bucureşti, 1989.
Chiţescu, I.: "Spaţii de funcţii", Editura Ştiinţifică şi Enciclopedică, Bucureşti, 1983.
Colojoară, I.: "Analiză matematică", Editura Didactică şi Pedagogică, Bucureşti, 1983.
Crăciun, V. C.: "Exerciţii şi probleme de analiză matematică", Tipografia Universităţii Bucureşti, 1984.
Cristescu, R.: "Analiză funcţională", Editura Ştiinţifică şi Enciclopedică, Bucureşti, 1983.
Drăguşin, C.; Drăguşin, L.; Radu, C.: "Aplicaţii de algebră, geometrie şi matematici speciale", Editura Didactică şi Pedagogică, Bucureşti, 1991.
Glazman, I. M.; Liubici, I. U.: "Analiză liniară pe spaţii finit dimensionale", Editura Ştiinţifică şi Enciclopedică, Bucureşti, 1980.
Golan, Jonathan S.: "The Linear Algebra a Beginning Graduate Student Ought to Know", third edition, Springer, 2012.
Greene, William H.: "Econometric Analysis", sixth edition, Prentice Hall, 2003.
Guerrien, B.: "Algèbre linéaire pour économistes", Economica, Paris, 1991.
Halanay, Aristide; Olaru, Valter Vasile; Turbatu, Stelian: "Analiză Matematică", Editura Didactică şi Pedagogică, Bucureşti, 1983.
Holmes, Richard B.: "Geometric Functional Analysis and its Applications"
Ion D. Ion; Radu, N.: "Algebră", Editura Didactică şi Pedagogică, Bucureşti, 1991.
Kurosh, A.: "Cours d'algèbre supérieure", Éditions MIR, Moscou, 1980.
Leung, Kam-Tim: "Linear Algebra and Geometry", Hong Kong University Press, 1974.
Ling, San; Xing, Chaoping: "Coding Theory - A First Course", Cambridge University Press, 2004.
McFadden, Daniel: Economics 240B (Econometrics) course, Second Half, 2001 (class website, PDF).
Monk, J. D.: "Mathematical Logic", Springer-Verlag, 1976.
Pavel, Matei: "Algebră liniară şi Geometrie analitică - culegere de probleme", UTCB, 2007.
Rădulescu, M.; Rădulescu, S.: "Teoreme şi probleme de Analiză Matematică", Editura Didactică şi Pedagogică, Bucureşti, 1982.
Rockafellar, R. Tyrrell: "Convex Analysis", Princeton University Press, Princeton, New Jersey, 1970.
Roman, Steven: "Advanced Linear Algebra", Third Edition, Springer, 2008.
Saporta, G.; Ştefănescu, M. V.: "Analiza datelor şi informatică - cu aplicaţii la studii de piaţă şi sondaje de opinie", Editura Economică, 1996.
Şabac, I. Gh.: "Matematici speciale", vol. I, II, Editura Didactică şi Pedagogică, Bucureşti, 1981.
Şilov, G. E.: "Analiză matematică (Spaţii finit dimensionale)", Editura Ştiinţifică şi Enciclopedică, Bucureşti, 1983.
Strang, Gilbert: "Introduction to Linear Algebra", Third Edition, Wellesley-Cambridge Press, 2003.
Tarrida, Agustí Reventós: "Affine Maps, Euclidean Motions and Quadrics", Springer, 2011.
Thompson, Anthony C.: "Minkowski Geometry", CUP, 1996.
Treil, Sergei: "Linear Algebra Done Wrong", http://www.math.brown.edu/~treil/papers/LADW/book.pdf, last accessed on 18.12.2011.
Weintraub, Steven H.: "Jordan Canonical Form: Application to Differential Equations", Morgan & Claypool, 2008.
Weintraub, Steven H.: "Jordan Canonical Form: Theory and Practice", Morgan & Claypool, 2009.