
Online Education by

Department of Mathematics
M K Bhavnagar University
For M.Sc. (Mathematics) Sem-2
Paper No.: 108
Linear Algebra
Instructor: Dr. P. I. Andharia
Syllabus: Unit-3
• Eigen Values and Eigen Vectors
• Characteristic Polynomials
• Cayley-Hamilton Theorem
• Minimal Polynomials
• Triangulation, Diagonalization
• Rational Canonical Form, Jordan
Canonical Form
• Inner Product Spaces
Definition: Eigen Values and Eigen Vectors
Let V be a finite dimensional vector space over ℝ and fix a basis
B = {v₁, v₂, … … , vₙ}. Let T : V → V be a linear map [we use the same symbol
T to denote the matrix of T relative to B]. We say a real number λ is an Eigen
value of T if there exists a non-zero vector v ∈ V such that Tv = λv.
Any such non-zero vector v ∈ V with Tv = λv is called an Eigen vector
of T corresponding to the Eigen value λ.
Working Rule to find Eigen Values and corresponding Eigen Vectors:
Suppose λ is an Eigen value of a matrix A and v is a corresponding Eigen
vector. Then, Av = λv ⇒ (A − λI)v = θ ⇒ det(A − λI) = 0
(the determinant vanishes because A − λI sends the non-zero vector v to θ, so it is singular).
The equation det(A − λI) = 0 is called the characteristic equation and
det(A − λI) is called the characteristic polynomial of the matrix A.
Solve the equation det(A − λI) = 0 to find the Eigen values of A.
Note: (1) There may be more than one Eigen vector corresponding to
the same Eigen value.
(2) Eigen value and Eigen vector are also known as characteristic
value and characteristic vector respectively.
Example:
Find the Eigen values and corresponding Eigen vectors of
    ⎛0 0 2⎞
A = ⎜0 2 0⎟.
    ⎝2 0 3⎠
Solution:
The characteristic equation of A is given by det(A − λI) = 0, i.e.
| −λ    0     2  |
|  0   2−λ    0  | = 0.
|  2    0    3−λ |
On expanding along the first row, we get
−λ[(2 − λ)(3 − λ) − 0] + 2[0 − 2(2 − λ)] = 0
⇒ −λ(2 − λ)(3 − λ) + 2(−2)(2 − λ) = 0
⇒ (2 − λ)(λ² − 3λ − 4) = 0
⇒ (2 − λ)(λ − 4)(λ + 1) = 0
⇒ λ = −1, 2, 4 are the Eigen values of A.
To find an Eigen vector corresponding to the Eigen value λ = −1:
Let v = (x, y, z) be the required Eigen vector; then Av = −v gives
(2z, 2y, 2x + 3z) = (−x, −y, −z)
⇒ x + 2z = 0, 3y = 0, 2x + 4z = 0.
Solving these, we get x = −2z, y = 0.
Thus, the required Eigen vector corresponding to λ = −1 is v = (−2, 0, 1) (taking z = 1).
Similarly, to find an Eigen vector corresponding to the Eigen value λ = 2:
Let v = (x, y, z) be the required Eigen vector; then Av = 2v gives
(2z, 2y, 2x + 3z) = (2x, 2y, 2z)
⇒ 2x − 2z = 0, 2x + z = 0.
Solving these, we get x = z and 3x = 0, i.e. x = 0, z = 0, with y free.
Thus, the required Eigen vector corresponding to λ = 2 is v = (0, 1, 0).
Finally, to find an Eigen vector corresponding to the Eigen value λ = 4:
Let v = (x, y, z) be the required Eigen vector; then Av = 4v gives
(2z, 2y, 2x + 3z) = (4x, 4y, 4z)
⇒ 4x − 2z = 0, −2y = 0, 2x − z = 0.
Solving these, we get y = 0, z = 2x.
Thus, the required Eigen vector corresponding to λ = 4 is v = (1, 0, 2) (taking x = 1).
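As a quick numerical cross-check of this example (a sketch that assumes NumPy is available; it is not part of the original notes):

```python
import numpy as np

# The matrix from the worked example.
A = np.array([[0.0, 0.0, 2.0],
              [0.0, 2.0, 0.0],
              [2.0, 0.0, 3.0]])

# Eigen values computed numerically, sorted ascending.
eigenvalues = np.sort(np.linalg.eigvals(A).real)

# Each hand-computed vector v should satisfy A v = lambda v.
for lam, v in [(-1, [-2, 0, 1]), (2, [0, 1, 0]), (4, [1, 0, 2])]:
    v = np.array(v, dtype=float)
    assert np.allclose(A @ v, lam * v)
```

The assertions confirm that (−2, 0, 1), (0, 1, 0) and (1, 0, 2) are Eigen vectors for −1, 2 and 4 respectively.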
Theorem:
Let V be a vector space and let T : V → V be a linear map. For 1 ≤ i ≤ n, let
vᵢ be a nonzero Eigen vector of T corresponding to the Eigen value λᵢ of T, with
λᵢ ≠ λⱼ for i ≠ j, 1 ≤ i, j ≤ n. Then {v₁, v₂, … … , vₙ} is a linearly
independent set.
Proof:
We will prove this result by mathematical induction on n.
Clearly, for n = 1, since v₁ ≠ θ, the set {v₁} is linearly independent.
For n = 2, let v₁, v₂ be two Eigen vectors of T corresponding to Eigen
values λ₁, λ₂ respectively, where λ₁ ≠ λ₂. Then,
Tv₁ = λ₁v₁ and Tv₂ = λ₂v₂.
Assume that v₁, v₂ are linearly dependent; then (since v₂ ≠ θ) ∃ a scalar α such that
v₁ = αv₂.
∴ Tv₁ = λ₁v₁ ⇒ T(αv₂) = λ₁v₁
⇒ α T(v₂) = λ₁v₁ ∵ T is linear
⇒ α λ₂v₂ = λ₁v₁
⇒ λ₂(αv₂) = λ₁v₁
⇒ λ₂v₁ = λ₁v₁
⇒ (λ₂ − λ₁)v₁ = θ
⇒ λ₂ − λ₁ = 0 ∵ v₁ ≠ θ
⇒ λ₂ = λ₁,
which is a contradiction to λ₁ ≠ λ₂.
So our assumption that v₁, v₂ are linearly dependent is wrong.
Hence, v₁, v₂ are linearly independent.
Assume the result for n = k − 1, i.e. if v₁, v₂, … … , vₖ₋₁ are Eigen
vectors of T corresponding to distinct Eigen values λ₁, λ₂, … … , λₖ₋₁
respectively, then {v₁, v₂, … … , vₖ₋₁} is linearly independent.
Now, for n = k, let v₁, v₂, … … , vₖ be Eigen vectors of T corresponding
to distinct Eigen values λ₁, λ₂, … … , λₖ respectively. Then,
Tvᵢ = λᵢvᵢ, 1 ≤ i ≤ k.
Assume that {v₁, v₂, … … , vₖ} is linearly dependent. Since the first k − 1
vectors are linearly independent, ∃ scalars αᵢ, 1 ≤ i ≤ k − 1, not all zero
(because vₖ ≠ θ), such that
vₖ = α₁v₁ + α₂v₂ + … + αₖ₋₁vₖ₋₁.
Now, Tvₖ = λₖvₖ
⇒ T(α₁v₁ + … + αₖ₋₁vₖ₋₁) = λₖ(α₁v₁ + … + αₖ₋₁vₖ₋₁)
⇒ α₁T(v₁) + … + αₖ₋₁T(vₖ₋₁) = λₖα₁v₁ + … + λₖαₖ₋₁vₖ₋₁ ∵ T is linear
⇒ α₁λ₁v₁ + … + αₖ₋₁λₖ₋₁vₖ₋₁ = λₖα₁v₁ + … + λₖαₖ₋₁vₖ₋₁
⇒ α₁(λ₁ − λₖ)v₁ + … + αₖ₋₁(λₖ₋₁ − λₖ)vₖ₋₁ = θ.
But {v₁, v₂, … … , vₖ₋₁} is linearly independent, therefore
αᵢ(λᵢ − λₖ) = 0, ∀ 1 ≤ i ≤ k − 1.
Since the αᵢ are not all zero, say αⱼ ≠ 0, we have λⱼ − λₖ = 0 for some j.
This is a contradiction to λ₁, λ₂, … … , λₖ being distinct.
Hence, our assumption is wrong and {v₁, v₂, … … , vₖ} is linearly
independent. This completes the proof.
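The theorem can be illustrated on the earlier example (a sketch, not from the notes): the Eigen values −1, 2, 4 are distinct, so the three Eigen vectors found above should be linearly independent, i.e. the matrix having them as columns has full rank.

```python
import numpy as np

# Columns are the Eigen vectors for the distinct Eigen values -1, 2, 4.
V = np.column_stack([[-2.0, 0.0, 1.0],
                     [0.0, 1.0, 0.0],
                     [1.0, 0.0, 2.0]])

rank = np.linalg.matrix_rank(V)
assert rank == 3  # full rank -> the set of Eigen vectors is linearly independent
```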
Theorem: Cayley-Hamilton Theorem
Every square matrix satisfies its characteristic equation.
Proof:
Let A be an n × n matrix and let
p(λ) = λⁿ + cₙ₋₁λⁿ⁻¹ + … … … + c₁λ + c₀
be the characteristic polynomial of A (it is convenient here to take
p(λ) = det(λI − A), which is monic and has the same roots as det(A − λI)).
Then we have to prove that
p(A) = Aⁿ + cₙ₋₁Aⁿ⁻¹ + … … … + c₁A + c₀I = 0.
Recall that for any n × n matrix B,
B adj(B) = det(B) I
(for invertible B this is the familiar B⁻¹ = (1/det B) adj(B)).
Take B = λI − A; then
(λI − A) adj(λI − A) = det(λI − A) I = p(λ) I … … (1)
Now, adj(λI − A) is a square matrix whose entries are determinants of
(n − 1)-square submatrices of (λI − A). Hence, adj(λI − A) is a matrix
whose entries are polynomials in λ of degree at most n − 1.
∴ adj(λI − A) = Bₙ₋₁λⁿ⁻¹ + … … … + B₁λ + B₀,
where the Bᵢ are n × n matrices with real entries. So, from (1),
(λI − A)(Bₙ₋₁λⁿ⁻¹ + … … … + B₁λ + B₀)
= (λⁿ + cₙ₋₁λⁿ⁻¹ + … … … + c₁λ + c₀)I.
Comparing the coefficients of like powers of λ, we get
Bₙ₋₁ = I
Bₙ₋₂ − ABₙ₋₁ = cₙ₋₁I
Bₙ₋₃ − ABₙ₋₂ = cₙ₋₂I
… … … … … … … … ..
B₀ − AB₁ = c₁I
−AB₀ = c₀I
Multiplying the first of these equations by Aⁿ, the second by Aⁿ⁻¹, ……,
the second last by A and the last one by I, and adding them, the terms on
the left telescope and cancel, so we get the desired result
Aⁿ + cₙ₋₁Aⁿ⁻¹ + … … … + c₁A + c₀I = 0.
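The theorem can be checked numerically on the matrix from the earlier example (a sketch; NumPy is assumed, and `np.poly` returns the coefficients of det(λI − A) with the leading coefficient first):

```python
import numpy as np

A = np.array([[0.0, 0.0, 2.0],
              [0.0, 2.0, 0.0],
              [2.0, 0.0, 3.0]])

# Coefficients of the characteristic polynomial p(lambda) = det(lambda*I - A).
coeffs = np.poly(A)

# Evaluate p(A) = A^3 + c2*A^2 + c1*A + c0*I by Horner's rule on matrices.
p_of_A = np.zeros_like(A)
for c in coeffs:
    p_of_A = p_of_A @ A + c * np.eye(3)

assert np.allclose(p_of_A, 0.0)  # A satisfies its characteristic equation
```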
Definition:
Two matrices A and B of order n × n are said to be similar matrices if
there exists an invertible matrix P of order n × n such that B = P⁻¹AP.
Theorem:
Similar matrices have the same characteristic polynomial.
Proof:
Let A and B be two similar matrices of order n × n. Then there exists an
invertible n × n matrix P such that B = P⁻¹AP.
Now, we know that the characteristic polynomial of A is det(A − λI) and
that of B is det(B − λI).
det(B − λI) = det(P⁻¹AP − λI)
= det(P⁻¹AP − λP⁻¹P)
= det(P⁻¹AP − P⁻¹(λI)P)
= det(P⁻¹(A − λI)P)
= det(P⁻¹) det(A − λI) det(P)
= det(P)⁻¹ det(A − λI) det(P)
= (1/det(P)) det(P) det(A − λI)
= det(A − λI)
Thus, similar matrices have the same characteristic polynomials.
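A quick numerical illustration of this theorem (a sketch; the matrices here are arbitrary random choices, not from the notes):

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.normal(size=(3, 3))
P = rng.normal(size=(3, 3))
assert abs(np.linalg.det(P)) > 1e-9  # P must be invertible

B = np.linalg.inv(P) @ A @ P  # B is similar to A

# np.poly returns the coefficients of det(lambda*I - M), leading first.
same = np.allclose(np.poly(A), np.poly(B))
assert same  # similar matrices share their characteristic polynomial
```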
Definitions:
(1) A linear transformation T on a vector space V is called triangulable
if there exists a basis for V such that the matrix of T relative to that
basis is an upper triangular matrix.
(2) A linear transformation T on a vector space V is called
diagonalizable if there exists a basis for V such that the matrix of T
relative to that basis is a diagonal matrix.
(3) A polynomial p(λ) = aₙλⁿ + aₙ₋₁λⁿ⁻¹ + … … … + a₁λ + a₀ is
called a monic polynomial if aₙ = 1.
(4) A monic polynomial m(λ) of minimal degree such that m(A) = 0 is
called the minimal polynomial of a matrix A.
(5) The companion matrix to a monic polynomial
p(λ) = λⁿ + aₙ₋₁λⁿ⁻¹ + … … … + a₁λ + a₀
is the n × n square matrix
       ⎛0 0 0 … 0 −a₀  ⎞
       ⎜1 0 0 … 0 −a₁  ⎟
C(p) = ⎜0 1 0 … 0 −a₂  ⎟
       ⎜0 0 1 … 0 −a₃  ⎟
       ⎜⋮ ⋮ ⋮ ⋱ ⋱ ⋮    ⎟
       ⎝0 0 0 … 1 −aₙ₋₁⎠
(6) If A is an m × m matrix and B is an n × n matrix then the direct sum
of A and B, denoted by A⨁B, is the matrix of order (m + n) × (m + n)
given in block form by A⨁B = [A 0; 0 B], where 0 denotes a zero matrix.
(7) (Result) Every n × n matrix A is similar to a direct sum of companion
matrices C(p₁)⨁C(p₂)⨁ … … … ⨁C(pₖ), where the pᵢ(λ) are monic
polynomials with pᵢ(λ) | pᵢ₊₁(λ), 1 ≤ i ≤ k − 1. In this case
p₁(λ), p₂(λ), … … … , pₖ(λ) are called invariant factors of A.
(8) A rational canonical form of an n × n matrix A is a matrix R which is
a direct sum of companion matrices C(p₁)⨁C(p₂)⨁ … … … ⨁C(pₖ),
where the pᵢ(λ) are monic polynomials with pᵢ(λ) | pᵢ₊₁(λ), 1 ≤ i ≤
k − 1. In this case p₁(λ), p₂(λ), … … … , pₖ(λ) are called invariant
factors of A. So, R = C(p₁)⨁C(p₂)⨁ … … … ⨁C(pₖ).
(9) A 1 × 1 Jordan block is a matrix J(λ, 1) = (λ). An n × n Jordan block
is the matrix
          ⎛λ 0 0 … 0 0⎞
          ⎜1 λ 0 … 0 0⎟
J(λ, n) = ⎜0 1 λ … 0 0⎟
          ⎜⋮ ⋮ ⋮ ⋱ ⋱ ⋮⎟
          ⎝0 0 0 … 1 λ⎠
(10) A Jordan canonical form is a direct sum of Jordan blocks.
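The companion-matrix definition can be made concrete (a sketch; the helper name `companion` is my own): build the matrix in the convention shown above, with 1s on the subdiagonal and the negated coefficients in the last column, and check that its characteristic polynomial recovers the original monic polynomial.

```python
import numpy as np

def companion(coeffs):
    """coeffs = [a0, a1, ..., a_{n-1}] of p(x) = x^n + a_{n-1}x^{n-1} + ... + a1*x + a0."""
    n = len(coeffs)
    C = np.zeros((n, n))
    C[1:, :-1] = np.eye(n - 1)      # subdiagonal of 1s
    C[:, -1] = -np.asarray(coeffs)  # last column: -a0, -a1, ..., -a_{n-1}
    return C

# p(x) = (x + 1)(x - 2)(x - 4) = x^3 - 5x^2 + 2x + 8, the characteristic
# polynomial of the example matrix from earlier in the notes.
C = companion([8.0, 2.0, -5.0])
char = np.poly(C)  # coefficients of det(x*I - C), leading coefficient first
assert np.allclose(char, [1.0, -5.0, 2.0, 8.0])
```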
Paper No.- 108
Paper Title: Linear Algebra
Hardik M Pandya
Department of Mathematics,
M. K. Bhavnagar University, Bhavnagar
Unit – 3
 Inner product space
 Norm
 Metric
Definition:
Let 𝑉 be a vector space. A function (or a map) 〈, 〉: 𝑉 ×
𝑉 → ℝ is said to be an inner product on 𝑉 if it satisfies
the following conditions:
For 𝑥, 𝑦, 𝑧 ∈ 𝑉 and 𝛼 ∈ ℝ
(i) 〈𝑥, 𝑥 〉 ≥ 0 and 〈𝑥, 𝑥 〉 = 0 ⟺ 𝑥 = 𝜃𝑉
(ii) 〈𝑥, 𝑦〉 = 〈𝑦, 𝑥 〉
(iii) 〈𝑥 + 𝑦, 𝑧〉 = 〈𝑥, 𝑧〉 + 〈𝑦, 𝑧〉
(iv) 〈𝛼𝑥, 𝑦〉 = 𝛼 〈𝑥, 𝑦〉
An ordered pair (𝑉, 〈, 〉) is called an Inner Product Space.
For simplicity we say, 𝑉 is an inner product space.
Remarks:
[1] Using (ii) and (iii),
〈𝑥, 𝑦 + 𝑧〉 = 〈𝑦 + 𝑧, 𝑥〉 = 〈𝑦, 𝑥〉 + 〈𝑧, 𝑥〉 = 〈𝑥, 𝑦〉 + 〈𝑥, 𝑧〉.
[2] Using (ii) and (iv),
〈𝑥, 𝛼𝑦〉 = 〈𝛼𝑦, 𝑥〉 = 𝛼〈𝑦, 𝑥〉 = 𝛼〈𝑥, 𝑦〉.
Example 1
Let 𝑥 = (𝑥1 , 𝑥2 ), 𝑦 = (𝑦1 , 𝑦2 ) ∈ ℝ2 . Define 〈𝑥, 𝑦〉 =
2𝑥1 (2𝑦1 + 𝑦2 ) + 2𝑥2 (𝑦1 + 𝑦2 ). Check whether 〈, 〉 is an
inner product on ℝ2 ?
Solution:
Let 𝑥 = (𝑥1 , 𝑥2 ), 𝑦 = (𝑦1 , 𝑦2 ), 𝑧 = (𝑧1 , 𝑧2 ) ∈ ℝ2 and
𝛼 ∈ ℝ.
By definition of 〈, 〉 we have
〈𝑥, 𝑦〉 = 2𝑥1 (2𝑦1 + 𝑦2 ) + 2𝑥2 (𝑦1 + 𝑦2 )
(i) 〈𝑥, 𝑥〉 = 2𝑥1(2𝑥1 + 𝑥2) + 2𝑥2(𝑥1 + 𝑥2)
= 4𝑥1² + 4𝑥1𝑥2 + 2𝑥2²
= 4𝑥1² + 4𝑥1𝑥2 + 𝑥2² + 𝑥2²
= (2𝑥1 + 𝑥2)² + 𝑥2²
≥ 0 because perfect squares are non-negative.
and
〈𝑥, 𝑥〉 = 0 ⟺ (2𝑥1 + 𝑥2)² + 𝑥2² = 0 (Using (i))
⟺ 2𝑥1 + 𝑥2 = 0, 𝑥2 = 0
⟺ 𝑥1 = 0, 𝑥2 = 0
⟺ 𝑥 = (0,0) or 𝑥 = 𝜃
(ii) 〈𝑥, 𝑦〉 = 2𝑥1 (2𝑦1 + 𝑦2 ) + 2𝑥2 (𝑦1 + 𝑦2 )
= 4𝑥1 𝑦1 + 2𝑥1 𝑦2 + 2𝑥2 𝑦1 + 2𝑥2 𝑦2
= 4𝑥1 𝑦1 + 2𝑥2 𝑦1 + 2𝑥1 𝑦2 + 2𝑥2 𝑦2
= 2𝑦1 (2𝑥1 + 𝑥2 ) + 2𝑦2 (𝑥1 + 𝑥2 )
= 〈𝑦, 𝑥 〉
(iii) 𝑥 + 𝑦 = (𝑥1 + 𝑦1 , 𝑥2 + 𝑦2 )
∴ 〈𝑥 + 𝑦, 𝑧〉 = 2(𝑥1 + 𝑦1 )(2𝑧1 + 𝑧2 )
+2(𝑥2 + 𝑦2 )(𝑧1 + 𝑧2 )
= 2𝑥1 (2𝑧1 + 𝑧2 ) + 2𝑦1 (2𝑧1 + 𝑧2 ) +
2𝑥2 (𝑧1 + 𝑧2 ) + 2𝑦2 (𝑧1 + 𝑧2 )
= 2𝑥1 (2𝑧1 + 𝑧2 ) + 2𝑥2 (𝑧1 + 𝑧2 ) +
2𝑦1 (2𝑧1 + 𝑧2 ) + 2𝑦2 (𝑧1 + 𝑧2 )
= 〈𝑥, 𝑧〉 + 〈𝑦, 𝑧〉
(iv) 𝛼𝑥 = 𝛼 (𝑥1 , 𝑥2 ) = (𝛼𝑥1 , 𝛼𝑥2 )
∴ 〈𝛼𝑥, 𝑦〉 = 2𝛼𝑥1 (2𝑦1 + 𝑦2 ) + 2𝛼𝑥2 (𝑦1 + 𝑦2 )
= 𝛼 [2𝑥1 (2𝑦1 + 𝑦2 ) + 2𝑥2 (𝑦1 + 𝑦2 )]
= 𝛼 〈𝑥, 𝑦〉
So, 〈, 〉 satisfies all the conditions of inner product.
Hence it is an inner product on ℝ2 .
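A compact way to see why Example 1 works (a sketch, not from the notes): the form can be written as 〈x, y〉 = xᵀMy with the symmetric matrix M below, read off from the expansion 4x₁y₁ + 2x₁y₂ + 2x₂y₁ + 2x₂y₂; symmetry of M gives condition (ii), and positive definiteness gives condition (i).

```python
import numpy as np

M = np.array([[4.0, 2.0],
              [2.0, 2.0]])

symmetric = np.allclose(M, M.T)                        # <x, y> = <y, x>
positive_definite = bool(np.all(np.linalg.eigvalsh(M) > 0))  # <x, x> > 0 for x != 0
assert symmetric and positive_definite
```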
Example 2
Let 𝑥 = (𝑥1 , 𝑥2 ), 𝑦 = (𝑦1 , 𝑦2 ) ∈ ℝ2 . Define 〈𝑥, 𝑦〉 =
𝑥1 (4𝑦1 + 𝑦2 ) + 𝑥2 (3𝑦1 + 2𝑦2 ). Check whether 〈, 〉 is an
inner product on ℝ2 ?
Solution:
Let 𝑥 = (𝑥1 , 𝑥2 ), 𝑦 = (𝑦1 , 𝑦2 ), 𝑧 = (𝑧1 , 𝑧2 ) ∈ ℝ2 and
𝛼 ∈ ℝ.
By definition of 〈, 〉 we have
〈𝑥, 𝑦〉 = 𝑥1 (4𝑦1 + 𝑦2 ) + 𝑥2 (3𝑦1 + 2𝑦2 )
(i) 〈𝑥, 𝑥〉 = 𝑥1(4𝑥1 + 𝑥2) + 𝑥2(3𝑥1 + 2𝑥2)
= 4𝑥1² + 𝑥1𝑥2 + 3𝑥1𝑥2 + 2𝑥2²
= 4𝑥1² + 4𝑥1𝑥2 + 2𝑥2²
= 4𝑥1² + 4𝑥1𝑥2 + 𝑥2² + 𝑥2²
= (2𝑥1 + 𝑥2)² + 𝑥2²
≥ 0
because perfect squares are non-negative.
and
〈𝑥, 𝑥〉 = 0 ⟺ (2𝑥1 + 𝑥2)² + 𝑥2² = 0 (Using (i))
⟺ 2𝑥1 + 𝑥2 = 0, 𝑥2 = 0
⟺ 𝑥1 = 0, 𝑥2 = 0
⟺ 𝑥 = (0,0) or 𝑥 = 𝜃
(ii) 〈𝑥, 𝑦〉 = 𝑥1 (4𝑦1 + 𝑦2 ) + 𝑥2 (3𝑦1 + 2𝑦2 )
= 4𝑥1 𝑦1 + 𝑥1 𝑦2 + 3𝑥2 𝑦1 + 2𝑥2 𝑦2 and
〈𝑦, 𝑥 〉 = 𝑦1 (4𝑥1 + 𝑥2 ) + 𝑦2 (3𝑥1 + 2𝑥2 )
= 4𝑥1 𝑦1 + 𝑥2 𝑦1 + 3𝑥1 𝑦2 + 2𝑥2 𝑦2
= 4𝑥1 𝑦1 + 3𝑥1 𝑦2 + 𝑥2 𝑦1 + 2𝑥2 𝑦2
∴ 〈𝑥, 𝑦〉 ≠ 〈𝑦, 𝑥〉 in general; for instance, 𝑥 = (1, 0), 𝑦 = (0, 1) give
〈𝑥, 𝑦〉 = 1 but 〈𝑦, 𝑥〉 = 3.
So, 〈, 〉 does not satisfy all the conditions of inner
product. Hence it is not an inner product on ℝ2.
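The failure in Example 2 has the same matrix picture as Example 1 (a sketch; the matrix N is read off from x₁(4y₁ + y₂) + x₂(3y₁ + 2y₂)): the form is 〈x, y〉 = xᵀNy, and N is not symmetric, which is exactly why condition (ii) fails.

```python
import numpy as np

N = np.array([[4.0, 1.0],
              [3.0, 2.0]])

x = np.array([1.0, 0.0])
y = np.array([0.0, 1.0])
xy = x @ N @ y  # <x, y> = 1
yx = y @ N @ x  # <y, x> = 3
assert not np.allclose(N, N.T)  # N is not symmetric
assert xy != yx                 # so symmetry of the form fails
```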
Exercise
Let 𝑥 = (𝑥1 , 𝑥2 ), 𝑦 = (𝑦1 , 𝑦2 ) ∈ ℝ2 . In each of the
following case, check whether 〈, 〉 is an inner product on
ℝ2 ?
(a) Define 〈𝑥, 𝑦〉 = 𝑦1 (𝑥1 + 2𝑥2 ) + 𝑦2 (2𝑥1 + 5𝑥2 ).
(b) Define 〈𝑥, 𝑦〉 = 𝑦1 (2𝑥1 + 𝑥2 ) + 𝑦2 (𝑥1 + 𝑥2 ).
Definition:
Let 𝑥 = (𝑥1 , 𝑥2 , … … , 𝑥𝑛 ) and 𝑦 = (𝑦1 , 𝑦2 , … … , 𝑦𝑛 ) be
two vectors in ℝ𝑛 .The dot product of 𝑥 and 𝑦 is denoted
by 𝑥 ∙ 𝑦 and defined as 𝑥 ∙ 𝑦 = ∑𝑛𝑖=1 𝑥𝑖 𝑦𝑖 .
Example 3
The dot product on ℝ𝑛 is an inner product on ℝ𝑛 .
Solution:
Define 〈, 〉 on ℝ𝑛 as 〈𝑥, 𝑦〉 = 𝑥 ∙ 𝑦, ∀𝑥, 𝑦 ∈ ℝ𝑛 .
We shall show that 〈, 〉 is an inner product on ℝ𝑛 .
Let 𝑥 = (𝑥1 , 𝑥2 , … … , 𝑥𝑛 ), 𝑦 = (𝑦1 , 𝑦2 , … … , 𝑦𝑛 ) and
𝑧 = (𝑧1 , 𝑧2 , … … , 𝑧𝑛 ) be vectors of ℝ𝑛 and 𝛼 ∈ ℝ.
(i) 〈𝑥, 𝑥〉 = 𝑥 ∙ 𝑥
= ∑𝑛𝑖=1 𝑥𝑖𝑥𝑖
= ∑𝑛𝑖=1 𝑥𝑖²
= 𝑥1² + 𝑥2² + … … + 𝑥𝑛²
≥ 0
because perfect squares are non-negative.
and
〈𝑥, 𝑥〉 = 0 ⟺ 𝑥1² + 𝑥2² + … … + 𝑥𝑛² = 0 (Using (i))
⟺ 𝑥1 = 0, 𝑥2 = 0, … … … , 𝑥𝑛 = 0
⟺ 𝑥 = (0,0, … … ,0) or 𝑥 = 𝜃
(ii) 〈𝑥, 𝑦〉 = 𝑥 ∙ 𝑦
= ∑𝑛𝑖=1 𝑥𝑖 𝑦𝑖
= ∑𝑛𝑖=1 𝑦𝑖 𝑥𝑖
=𝑦∙𝑥
= 〈𝑦, 𝑥 〉
(iii) 𝑥 + 𝑦 = (𝑥1 + 𝑦1 , 𝑥2 + 𝑦2 , … … , 𝑥𝑛 + 𝑦𝑛 )
∴ 〈𝑥 + 𝑦, 𝑧〉 = ∑𝑛𝑖=1(𝑥𝑖 + 𝑦𝑖 )𝑧𝑖
= (𝑥1 + 𝑦1 )𝑧1 + (𝑥2 + 𝑦2 )𝑧2 + ⋯ + (𝑥𝑛 + 𝑦𝑛 )𝑧𝑛
= 𝑥1 𝑧1 + 𝑦1 𝑧1 + 𝑥2 𝑧2 + 𝑦2 𝑧2 + ⋯ + 𝑥𝑛 𝑧𝑛 + 𝑦𝑛 𝑧𝑛
= (𝑥1 𝑧1 + 𝑥2 𝑧2 + ⋯ + 𝑥𝑛 𝑧𝑛 )
+(𝑦1 𝑧1 + 𝑦2 𝑧2 + ⋯ + 𝑦𝑛 𝑧𝑛 )
= ∑𝑛𝑖=1 𝑥𝑖 𝑧𝑖 + ∑𝑛𝑖=1 𝑦𝑖 𝑧𝑖
= 〈𝑥, 𝑧〉 + 〈𝑦, 𝑧〉
(iv) 𝛼𝑥 = 𝛼 (𝑥1 , 𝑥2 , … … , 𝑥𝑛 ) = (𝛼𝑥1 , 𝛼𝑥2 , … … , 𝛼𝑥𝑛 )
∴ 〈𝛼𝑥, 𝑦〉 = ∑𝑛𝑖=1 𝛼𝑥𝑖 𝑦𝑖
= 𝛼𝑥1 𝑦1 + 𝛼𝑥2 𝑦2 + … … … + 𝛼𝑥𝑛 𝑦𝑛
= 𝛼 (𝑥1 𝑦1 + 𝑥2 𝑦2 + … … … + 𝑥𝑛 𝑦𝑛 )
= 𝛼 ∑𝑛𝑖=1 𝑥𝑖 𝑦𝑖
= 𝛼 〈𝑥, 𝑦〉
So, 〈, 〉 satisfies all the conditions of inner product.
Hence it is an inner product on ℝ𝑛 .
Thus, dot product on ℝ𝑛 is an inner product on ℝ𝑛 .
Note: Now onwards, if not specified, the inner product
on ℝ𝑛 is taken as dot product.
Norm
Definition
Let 𝑉 be an inner product space and 𝑣 ∈ 𝑉. The norm of
𝑣 is denoted by ‖𝑣 ‖ and defined as ‖𝑣 ‖ = √〈𝑣, 𝑣 〉, the
positive square root of the non-negative number 〈𝑣, 𝑣 〉.
Remarks:
[1] ‖𝑣‖² = 〈𝑣, 𝑣〉
[2] The norm of a vector in ℝ𝑛 is the length of the vector.
[3] A vector with norm 1 is called a unit vector.
Example 4
Find norm of the following vectors:
(i) 𝑣 = (4, 0, – 3) ∈ ℝ3
(ii) 𝑤 = (1, – 2, 6, 3) ∈ ℝ4
Solution:
(i) 〈𝑣, 𝑣〉 = 𝑣 ∙ 𝑣 = (4, 0, – 3) ∙ (4, 0, – 3)
= 4² + 0² + (– 3)²
= 16 + 0 + 9 = 25
∴ ‖𝑣 ‖ = √〈𝑣, 𝑣 〉 = √25, the positive square root.
∴ ‖𝑣 ‖ = 5.
(ii) 〈𝑤, 𝑤〉 = 𝑤 ∙ 𝑤
= (1, – 2, 6, 3) ∙ (1, – 2, 6, 3)
= 1² + (– 2)² + 6² + 3²
= 1 + 4 + 36 + 9 = 50
∴ ‖𝑤‖ = √〈𝑤, 𝑤〉 = √50, the positive square root.
∴ ‖𝑤‖ = 5√2.
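Example 4 can be cross-checked with NumPy (a sketch; `np.linalg.norm` computes the Euclidean norm, which is the norm induced by the dot product):

```python
import numpy as np

v = np.array([4.0, 0.0, -3.0])
w = np.array([1.0, -2.0, 6.0, 3.0])

norm_v = np.linalg.norm(v)  # sqrt(16 + 0 + 9) = 5
norm_w = np.linalg.norm(w)  # sqrt(1 + 4 + 36 + 9) = sqrt(50) = 5*sqrt(2)
assert np.isclose(norm_v, 5.0)
assert np.isclose(norm_w, 5.0 * np.sqrt(2.0))
```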
Theorem 1
Let V be an inner product space. Then for every 𝑥 ∈ 𝑉
and 𝛼 ∈ ℝ we have
(i) ‖𝑥 ‖ ≥ 0 and ‖𝑥 ‖ = 0 ⟺ 𝑥 = 𝜃𝑉
(ii) ‖𝛼𝑥 ‖ = |𝛼 |‖𝑥 ‖.
Proof
Let 𝑥 ∈ 𝑉 and 𝛼 ∈ ℝ.
(i) Since the norm is a positive square root of a non-
negative number, it is clear that ‖𝑥 ‖ ≥ 0.
Now ‖𝑥‖ = 0 ⟺ ‖𝑥‖² = 0
⟺ 〈𝑥, 𝑥〉 = 0
⟺ 𝑥 = 𝜃𝑉
(ii) ‖𝛼𝑥‖² = 〈𝛼𝑥, 𝛼𝑥〉
= 𝛼〈𝑥, 𝛼𝑥〉
= 𝛼²〈𝑥, 𝑥〉
= 𝛼²‖𝑥‖²
Taking positive square root on both the sides, we get
‖𝛼𝑥 ‖ = |𝛼 |‖𝑥 ‖.
Remark:
Using result (2) of Theorem 1,
‖𝑥 − 𝑦‖ = ‖(−1)(𝑦 − 𝑥 )‖
= |−1|‖𝑦 − 𝑥 ‖
= ‖𝑦 − 𝑥 ‖.
Theorem 2 Cauchy – Schwarz inequality
Let 𝑉 be an inner product space. If 𝑥, 𝑦 ∈ 𝑉 then
|〈𝑥, 𝑦〉| ≤ ‖𝑥 ‖‖𝑦‖.
Further, equality holds if and only if the set {𝑥, 𝑦} is
linearly dependent.
Proof:
If 𝑥 = 𝜃𝑉 or 𝑦 = 𝜃𝑉 then 〈𝑥, 𝑦〉 = 0 and ‖𝑥 ‖‖𝑦‖ = 0.
∴ |〈𝑥, 𝑦〉| = ‖𝑥 ‖‖𝑦‖
Also one of 𝑥, 𝑦 is 𝜃𝑉 , therefore the set {𝑥, 𝑦} is linearly
dependent.
If 𝑥 ≠ 𝜃𝑉 and 𝑦 ≠ 𝜃𝑉 then we can let
𝑢 = 𝑥/‖𝑥‖ and 𝑣 = 𝑦/‖𝑦‖.
∴ ‖𝑢‖ = 1 and ‖𝑣‖ = 1 (1)
∴ ‖𝑢‖‖𝑣‖ = 1
Now,
〈𝑢 − 𝑣, 𝑢 − 𝑣〉 = 〈𝑢, 𝑢 − 𝑣〉 + 〈−𝑣, 𝑢 − 𝑣〉
= 〈𝑢, 𝑢〉 + 〈𝑢, −𝑣〉 + 〈−𝑣, 𝑢〉 + 〈−𝑣, −𝑣〉
= 〈𝑢, 𝑢〉 − 〈𝑢, 𝑣〉 − 〈𝑣, 𝑢〉 + 〈𝑣, 𝑣〉
= 〈𝑢, 𝑢〉 − 〈𝑢, 𝑣〉 − 〈𝑢, 𝑣〉 + 〈𝑣, 𝑣〉
= ‖𝑢‖² − 2〈𝑢, 𝑣〉 + ‖𝑣‖²
= 2 − 2〈𝑢, 𝑣〉 by using Equation (1)
= 2(1 − 〈𝑢, 𝑣〉) (2)
And we know that 〈𝑢 − 𝑣, 𝑢 − 𝑣 〉 ≥ 0
∴ 2(1 − 〈𝑢, 𝑣 〉) ≥ 0 ⟹ 1 − 〈𝑢, 𝑣 〉 ≥ 0
⟹ 1 ≥ 〈𝑢, 𝑣 〉
⟹ 〈𝑢, 𝑣 〉 ≤ 1 (3)
Similarly,
〈𝑢 + 𝑣, 𝑢 + 𝑣〉 = 〈𝑢, 𝑢〉 + 〈𝑢, 𝑣〉 + 〈𝑣, 𝑢〉 + 〈𝑣, 𝑣〉
= ‖𝑢‖² + 2〈𝑢, 𝑣〉 + ‖𝑣‖²
= 2 + 2〈𝑢, 𝑣〉 by using Equation (1)
= 2(1 + 〈𝑢, 𝑣〉) (4)
And we know that 〈𝑢 + 𝑣, 𝑢 + 𝑣 〉 ≥ 0
∴ 2(1 + 〈𝑢, 𝑣 〉) ≥ 0 ⟹ 1 + 〈𝑢, 𝑣 〉 ≥ 0
⟹ 1 ≥ −〈𝑢, 𝑣 〉
⟹ −〈𝑢, 𝑣 〉 ≤ 1 (5)
From the Equations (3) and (5), we get
|〈𝑢, 𝑣 〉| ≤ 1
∴ |〈𝑥/‖𝑥‖, 𝑦/‖𝑦‖〉| ≤ 1
∴ (1/(‖𝑥‖‖𝑦‖)) |〈𝑥, 𝑦〉| ≤ 1
∴ |〈𝑥, 𝑦〉| ≤ ‖𝑥‖‖𝑦‖
Further, suppose equality holds, that is
|〈𝑥, 𝑦〉| = ‖𝑥 ‖‖𝑦‖
⟺ (1/(‖𝑥‖‖𝑦‖)) |〈𝑥, 𝑦〉| = 1
⟺ |〈𝑥/‖𝑥‖, 𝑦/‖𝑦‖〉| = 1
⟺ |〈𝑢, 𝑣〉| = 1
⟺ 〈𝑢, 𝑣〉 = 1 or −〈𝑢, 𝑣〉 = 1
⟺ 1 − 〈𝑢, 𝑣〉 = 0 or 1 + 〈𝑢, 𝑣〉 = 0
⟺ 2(1 − 〈𝑢, 𝑣〉) = 0 or 2(1 + 〈𝑢, 𝑣〉) = 0
⟺ 〈𝑢 − 𝑣, 𝑢 − 𝑣〉 = 0 or 〈𝑢 + 𝑣, 𝑢 + 𝑣〉 = 0
by using Equations (2) and (4)
⟺ 𝑢 − 𝑣 = 𝜃𝑉 or 𝑢 + 𝑣 = 𝜃𝑉
⟺ 𝑢 = ±𝑣
⟺ 𝑥/‖𝑥‖ = ±𝑦/‖𝑦‖
⟺ 𝑥 = ±(‖𝑥‖/‖𝑦‖)𝑦
⟺ The set {𝑥, 𝑦} is linearly dependent.
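The Cauchy – Schwarz inequality, and its equality case, can be observed numerically (a sketch; the sample vectors are arbitrary):

```python
import numpy as np

rng = np.random.default_rng(1)
x = rng.normal(size=5)
y = rng.normal(size=5)

lhs = abs(np.dot(x, y))
rhs = np.linalg.norm(x) * np.linalg.norm(y)
assert lhs <= rhs + 1e-12  # |<x, y>| <= ||x|| ||y||

# Equality holds exactly when {x, y} is linearly dependent, e.g. y = 3x.
assert np.isclose(abs(np.dot(x, 3 * x)),
                  np.linalg.norm(x) * np.linalg.norm(3 * x))
```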
Theorem 3
Let 𝑉 be an inner product space. Then for every 𝑥, 𝑦 ∈ 𝑉
(i) ‖𝑥 + 𝑦‖ ≤ ‖𝑥 ‖ + ‖𝑦‖ (This is known as triangle
inequality)
(ii) |‖𝑥‖ − ‖𝑦‖| ≤ ‖𝑥 − 𝑦‖
(iii) ‖𝑥 + 𝑦‖² − ‖𝑥 − 𝑦‖² = 4〈𝑥, 𝑦〉
(iv) ‖𝑥 + 𝑦‖² + ‖𝑥 − 𝑦‖² = 2(‖𝑥‖² + ‖𝑦‖²)
(v) ‖𝑥 + 𝑦‖ = ‖𝑥 ‖ + ‖𝑦‖ if and only if the set {𝑥, 𝑦} is
linearly dependent.
Proof:
We know that
‖𝑥 + 𝑦‖² = 〈𝑥 + 𝑦, 𝑥 + 𝑦〉
= 〈𝑥, 𝑥〉 + 〈𝑥, 𝑦〉 + 〈𝑦, 𝑥〉 + 〈𝑦, 𝑦〉
= ‖𝑥‖² + 2〈𝑥, 𝑦〉 + ‖𝑦‖² (1)
And ‖𝑥 − 𝑦‖² = 〈𝑥 − 𝑦, 𝑥 − 𝑦〉
= 〈𝑥, 𝑥〉 − 〈𝑥, 𝑦〉 − 〈𝑦, 𝑥〉 + 〈𝑦, 𝑦〉
= ‖𝑥‖² − 2〈𝑥, 𝑦〉 + ‖𝑦‖² (2)
(i) ‖𝑥 + 𝑦‖ ≤ ‖𝑥‖ + ‖𝑦‖:
‖𝑥 + 𝑦‖² = ‖𝑥‖² + 2〈𝑥, 𝑦〉 + ‖𝑦‖²
by using Equation (1)
≤ ‖𝑥‖² + 2|〈𝑥, 𝑦〉| + ‖𝑦‖²
≤ ‖𝑥‖² + 2‖𝑥‖‖𝑦‖ + ‖𝑦‖²
(Using Cauchy – Schwarz inequality)
= (‖𝑥‖ + ‖𝑦‖)²
∴ ‖𝑥 + 𝑦‖² ≤ (‖𝑥‖ + ‖𝑦‖)²
Taking positive square root on both the sides, we get
‖𝑥 + 𝑦‖ ≤ ‖𝑥‖ + ‖𝑦‖
(ii) |‖𝑥‖ − ‖𝑦‖| ≤ ‖𝑥 − 𝑦‖:
‖𝑥‖ = ‖(𝑥 − 𝑦) + 𝑦‖
≤ ‖𝑥 − 𝑦‖ + ‖𝑦‖ (by Triangle inequality)
∴ ‖𝑥 ‖ − ‖𝑦‖ ≤ ‖𝑥 − 𝑦‖ (3)
‖𝑦‖ = ‖(𝑦 − 𝑥 ) + 𝑥 ‖
≤ ‖𝑦 − 𝑥 ‖ + ‖𝑥 ‖ (by Triangle inequality)
= ‖𝑥 − 𝑦‖ + ‖𝑥 ‖
∴ ‖𝑦‖ ≤ ‖𝑥 − 𝑦‖ + ‖𝑥 ‖
∴ ‖𝑦‖ − ‖𝑥 ‖ ≤ ‖𝑥 − 𝑦‖
∴ −(‖𝑥‖ − ‖𝑦‖) ≤ ‖𝑥 − 𝑦‖ (4)
From the Equations (3) and (4), we get
|‖𝑥 ‖ − ‖𝑦‖| ≤ ‖𝑥 − 𝑦‖
(iii) ‖𝑥 + 𝑦‖² − ‖𝑥 − 𝑦‖² = 4〈𝑥, 𝑦〉:
Using the Equations (1) and (2), we get
‖𝑥 + 𝑦‖² − ‖𝑥 − 𝑦‖² = ‖𝑥‖² + 2〈𝑥, 𝑦〉 + ‖𝑦‖²
− (‖𝑥‖² − 2〈𝑥, 𝑦〉 + ‖𝑦‖²)
= ‖𝑥‖² + 2〈𝑥, 𝑦〉 + ‖𝑦‖²
− ‖𝑥‖² + 2〈𝑥, 𝑦〉 − ‖𝑦‖²
= 4〈𝑥, 𝑦〉
(iv) ‖𝑥 + 𝑦‖² + ‖𝑥 − 𝑦‖² = 2(‖𝑥‖² + ‖𝑦‖²):
Using the Equations (1) and (2), we get
‖𝑥 + 𝑦‖² + ‖𝑥 − 𝑦‖² = ‖𝑥‖² + 2〈𝑥, 𝑦〉 + ‖𝑦‖²
+ ‖𝑥‖² − 2〈𝑥, 𝑦〉 + ‖𝑦‖²
= 2‖𝑥‖² + 2‖𝑦‖²
= 2(‖𝑥‖² + ‖𝑦‖²)
(v) ‖𝑥 + 𝑦‖ = ‖𝑥‖ + ‖𝑦‖ if and only if the set {𝑥, 𝑦}
is linearly dependent:
By the Cauchy – Schwarz inequality we know that
|〈𝑥, 𝑦〉| = ‖𝑥‖‖𝑦‖ if and only if the set {𝑥, 𝑦} is
linearly dependent.
Now, ‖𝑥 + 𝑦‖ = ‖𝑥‖ + ‖𝑦‖
⟺ ‖𝑥 + 𝑦‖² = (‖𝑥‖ + ‖𝑦‖)²
⟺ ‖𝑥‖² + 2〈𝑥, 𝑦〉 + ‖𝑦‖²
= ‖𝑥‖² + 2‖𝑥‖‖𝑦‖ + ‖𝑦‖²
by using Equation (1)
⟺ 〈𝑥, 𝑦〉 = ‖𝑥‖‖𝑦‖
⟺ The set {𝑥, 𝑦} is linearly dependent.
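Identities (iii) and (iv) of Theorem 3 (the polarization and parallelogram identities) can be verified numerically for the dot product (a sketch; the vectors are arbitrary samples):

```python
import numpy as np

rng = np.random.default_rng(2)
x = rng.normal(size=4)
y = rng.normal(size=4)

n = np.linalg.norm
# (iii): ||x+y||^2 - ||x-y||^2 = 4<x, y>
polarization = np.isclose(n(x + y)**2 - n(x - y)**2, 4 * np.dot(x, y))
# (iv): ||x+y||^2 + ||x-y||^2 = 2(||x||^2 + ||y||^2)
parallelogram = np.isclose(n(x + y)**2 + n(x - y)**2, 2 * (n(x)**2 + n(y)**2))
assert polarization and parallelogram
```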
Metric:
Definition:
Let 𝑋 be any set. A function 𝑑: 𝑋 × 𝑋 → ℝ is said to be a
metric on X if it satisfies the following conditions:
For any 𝑥, 𝑦, 𝑧 ∈ 𝑋,
(i) 𝑑 (𝑥, 𝑦) ≥ 0 and 𝑑 (𝑥, 𝑦) = 0 ⟺ 𝑥 = 𝑦
(ii) 𝑑(𝑥, 𝑦) = 𝑑(𝑦, 𝑥)
(iii) 𝑑 (𝑥, 𝑧) ≤ 𝑑(𝑥, 𝑦) + 𝑑(𝑦, 𝑧) (Triangle inequality)
Example 5
Let 𝑉 be an inner product space. If we define
𝑑 (𝑥, 𝑦) = ‖𝑥 − 𝑦‖, ∀𝑥, 𝑦 ∈ 𝑉 then show that 𝑑 is a
metric on 𝑉.
Solution:
Let 𝑥, 𝑦, 𝑧 ∈ 𝑉.
(i) By definition of norm, ‖𝑥 − 𝑦‖ ≥ 0
∴ 𝑑(𝑥, 𝑦) ≥ 0.
Now, 𝑑 (𝑥, 𝑦) = 0 ⟺ ‖𝑥 − 𝑦‖ = 0
⟺ 𝑥 − 𝑦 = 𝜃𝑉
⟺𝑥=𝑦
(ii) 𝑑 (𝑥, 𝑦) = ‖𝑥 − 𝑦‖
= ‖𝑦 − 𝑥 ‖
= 𝑑 (𝑦, 𝑥 )
(iii) 𝑑(𝑥, 𝑧) = ‖𝑥 − 𝑧‖
= ‖(𝑥 − 𝑦) + (𝑦 − 𝑧)‖
≤ ‖𝑥 − 𝑦‖ + ‖𝑦 − 𝑧‖
(by the triangle inequality)
= 𝑑(𝑥, 𝑦) + 𝑑(𝑦, 𝑧)
∴ 𝑑(𝑥, 𝑧) ≤ 𝑑(𝑥, 𝑦) + 𝑑(𝑦, 𝑧)
Example 6
Let 𝑉 be an inner product space. For a metric 𝑑 defined
as 𝑑 (𝑥, 𝑦) = ‖𝑥 − 𝑦‖, ∀𝑥, 𝑦 ∈ 𝑉, show that 𝑑 (𝑥 + 𝑧, 𝑦 +
𝑧) = 𝑑(𝑥, 𝑦).
Solution:
𝑑 (𝑥 + 𝑧, 𝑦 + 𝑧) = ‖(𝑥 + 𝑧) − (𝑦 + 𝑧)‖
= ‖𝑥 + 𝑧 − 𝑦 − 𝑧‖
= ‖𝑥 − 𝑦‖
= 𝑑(𝑥, 𝑦)
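The translation invariance shown in Example 6 can be replayed numerically (a sketch; the vectors are arbitrary choices):

```python
import numpy as np

x = np.array([1.0, 2.0])
y = np.array([-3.0, 0.5])
z = np.array([10.0, -7.0])

d = lambda a, b: np.linalg.norm(a - b)  # the metric induced by the norm
invariant = bool(np.isclose(d(x + z, y + z), d(x, y)))
assert invariant  # d(x + z, y + z) = d(x, y)
```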