
Module 2: Matrices

Dr. T. Phaneendra

December 18, 2017

Contents: Eigenvalues and eigenvectors - Properties of eigenvalues and eigenvectors - Diagonalization - Orthogonal transformation

Eigenvalues and Eigenvectors


Let a system of n linear equations in n unknowns u1, u2, ..., un be expressed in the matrix form as

(A − λI)u = 0, (1)

where λ is a constant and I denotes the unit matrix of order n. The diagonal entries of A − λI are aii − λ, i = 1, 2, ..., n, while the nondiagonal entries are aij for j = 1, 2, ..., n with j ≠ i.
 
A column matrix u = (u1, u2, ..., un)^T which satisfies (1) is called a solution vector, or simply a solution, of the system (1). Obviously, for every value of λ, u0 = O = (0, 0, ..., 0)^T is a solution of (1), and is called its trivial solution vector or zero vector. The nontrivial solution vectors of the system (1) can be obtained from the condition

|A − λI| = 0. (2)

The determinant |A − λI| is called the characteristic polynomial of A.

Since |A − λI| is of nth degree, the condition (2) leads to a polynomial equation of nth degree, called the characteristic equation P(λ) = 0 of A. The roots of P(λ) are called the characteristic roots or eigenvalues of the matrix A. The nonzero solution vectors of (1), corresponding to each eigenvalue, are called the eigenvectors of A.

The set of eigenvalues is called the spectrum of A, and the maximum of the magnitudes of the eigenvalues is called the spectral radius of A.
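
These notions are easy to experiment with numerically. The following minimal sketch is not part of the original notes and assumes Python with NumPy; np.linalg.eig returns the eigenvalues together with a matrix whose columns are corresponding eigenvectors.

import numpy as np

# any square matrix of order n; here the matrix of Example 1 below
A = np.array([[3.0, 2.0, 2.0],
              [1.0, 4.0, 1.0],
              [-2.0, -4.0, -1.0]])

eigenvalues, eigenvectors = np.linalg.eig(A)   # roots of |A - lambda I| = 0, columns = eigenvectors

spectrum = eigenvalues                         # the set of eigenvalues of A
spectral_radius = max(abs(eigenvalues))        # maximum of the magnitudes of the eigenvalues

print("spectrum:", np.round(spectrum, 6))
print("spectral radius:", spectral_radius)

# every column u satisfies A u = lambda u (up to rounding)
for lam, u in zip(eigenvalues, eigenvectors.T):
    assert np.allclose(A @ u, lam * u)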

Let λ be an eigenvalue of a square matrix A of order n. The set of all eigenvectors corresponding to λ, together with the zero vector u0 = O, is a subspace of R^n, and is called the eigenspace Eλ of λ. Thus

Eλ = {O} ∪ {u ≠ O : Au = λu}. (3)

Questions:
1. What is the characteristic equation of a square matrix A?
2. What is the difference between the characteristic polynomial and the
characteristic equation?
3. How many eigenvalues does a square matrix of nth order have?
4. What do you mean by the algebraic and geometric multiplicities of an
eigenvalue?
Example 1. Find the eigenvalues, eigenvectors and the eigenspaces of the matrix A = \begin{bmatrix} 3 & 2 & 2 \\ 1 & 4 & 1 \\ -2 & -4 & -1 \end{bmatrix}.

Solution. The characteristic equation of A is P(λ) = |A − λI| = 0, that is

\begin{vmatrix} 3-\lambda & 2 & 2 \\ 1 & 4-\lambda & 1 \\ -2 & -4 & -1-\lambda \end{vmatrix} = 0. (4)
We employ row and column operations to find the roots of (4). In fact, first apply R1 → R1 + R2 + R3 to (4) to get

\begin{vmatrix} 2-\lambda & 2-\lambda & 2-\lambda \\ 1 & 4-\lambda & 1 \\ -2 & -4 & -1-\lambda \end{vmatrix} = 0

or

(2-\lambda) \begin{vmatrix} 1 & 1 & 1 \\ 1 & 4-\lambda & 1 \\ -2 & -4 & -1-\lambda \end{vmatrix} = 0.

With C2 → C2 − C1 and C3 → C3 − C1, this gives

(2-\lambda) \begin{vmatrix} 1 & 0 & 0 \\ 1 & 3-\lambda & 0 \\ -2 & -2 & 1-\lambda \end{vmatrix} = 0

so that (2 − λ)(3 − λ)(1 − λ) = 0. Therefore, the eigenvalues of A are λ = 3, 2, 1.

Eigenvectors corresponding to λ are the nonzero solutions of the system


(A − λI) u = 0.
Eigenvectors corresponding to λ = 3 are the nonzero solutions of the system
(A − 3I) u = 0, that is
    
\begin{bmatrix} 0 & 2 & 2 \\ 1 & 1 & 1 \\ -2 & -4 & -4 \end{bmatrix} \begin{bmatrix} u_1 \\ u_2 \\ u_3 \end{bmatrix} = \begin{bmatrix} 0 \\ 0 \\ 0 \end{bmatrix},

or

u2 + u3 = 0,
u1 + u2 + u3 = 0,
u1 + 2u2 + 2u3 = 0.

Solving any two of these three equations, say the first two, we get

u1/0 = u2/1 = u3/(−1) = a1.
 
Thus for a1 ≠ 0, u1 = a1 (0, 1, −1)^T are the eigenvectors of A corresponding to λ = 3. Note that e1 = (0, 1, −1)^T is the only linearly independent eigenvector, and all other eigenvectors are scalar multiples of e1. Therefore, the geometric multiplicity of λ = 3 is 1.

The eigenspace of λ = 3 is
      
E3 = {O} ∪ {u ≠ O : Au = 3u} = {O} ∪ {k1 (0, 1, −1)^T : k1 ≠ 0}.

Eigenvectors corresponding to λ = 2 are the nonzero solutions of the system


(A − 2I) u = 0, that is
    
\begin{bmatrix} 1 & 2 & 2 \\ 1 & 2 & 1 \\ -2 & -4 & -3 \end{bmatrix} \begin{bmatrix} u_1 \\ u_2 \\ u_3 \end{bmatrix} = \begin{bmatrix} 0 \\ 0 \\ 0 \end{bmatrix},

or

u1 + 2u2 + 2u3 = 0,
u1 + 2u2 + u3 = 0,
2u1 + 4u2 + 3u3 = 0.


Solving the first two equations,


u1/(−2) = u2/1 = u3/0 = a2.

Thus for a2 ≠ 0, u2 = a2 (−2, 1, 0)^T are the eigenvectors of A corresponding to λ = 2. Note that e2 = (−2, 1, 0)^T is the only linearly independent eigenvector, and all other eigenvectors are scalar multiples of e2. Therefore, the geometric multiplicity of λ = 2 is 1.

The eigenspace of λ = 2 is
      
E2 = {O} ∪ {u ≠ O : Au = 2u} = {O} ∪ {k2 (−2, 1, 0)^T : k2 ≠ 0}.

Eigenvectors corresponding to λ = 1 are the nonzero solutions of the system


(A − I) u = 0, that is
    
\begin{bmatrix} 2 & 2 & 2 \\ 1 & 3 & 1 \\ -2 & -4 & -2 \end{bmatrix} \begin{bmatrix} u_1 \\ u_2 \\ u_3 \end{bmatrix} = \begin{bmatrix} 0 \\ 0 \\ 0 \end{bmatrix},
or

u1 + u2 + u3 = 0,
u1 + 3u2 + u3 = 0,
u1 + 2u2 + u3 = 0.

Solving the first two equations,


u1/(−1) = u2/0 = u3/1 = a3.

Thus for a3 ≠ 0, u3 = a3 (−1, 0, 1)^T are the eigenvectors of A corresponding to λ = 1. Note that e3 = (−1, 0, 1)^T is the only linearly independent eigenvector, and all other eigenvectors are scalar multiples of e3. Therefore, the geometric multiplicity of λ = 1 is 1.

The eigenspace of λ = 1 is


     
E1 = {O} ∪ {u ≠ O : Au = u} = {O} ∪ {k3 (−1, 0, 1)^T : k3 ≠ 0}.
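
As a numerical cross-check of Example 1, here is a small sketch assuming NumPy; the vectors below are the hand-computed eigenvectors, each determined only up to a nonzero scalar multiple.

import numpy as np

A = np.array([[3, 2, 2],
              [1, 4, 1],
              [-2, -4, -1]], dtype=float)

# eigenpairs obtained by hand in Example 1
pairs = {3.0: np.array([0.0, 1.0, -1.0]),
         2.0: np.array([-2.0, 1.0, 0.0]),
         1.0: np.array([-1.0, 0.0, 1.0])}

# the characteristic roots agree with the hand computation ...
assert np.allclose(np.sort(np.linalg.eigvals(A).real), [1.0, 2.0, 3.0])

# ... and each hand-computed vector satisfies A u = lambda u
for lam, u in pairs.items():
    assert np.allclose(A @ u, lam * u)
print("Example 1 verified")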
   

Example 2. Find the eigenvalues, eigenvectors and the eigenspaces of the matrix A = \begin{bmatrix} 1 & 1 & -2 \\ -1 & 2 & 1 \\ 0 & 1 & -1 \end{bmatrix}.

Solution. The characteristic equation of A is P(λ) = |A − λI| = 0, that is

\begin{vmatrix} 1-\lambda & 1 & -2 \\ -1 & 2-\lambda & 1 \\ 0 & 1 & -1-\lambda \end{vmatrix} = 0
Employing the operations C1 → C1 +C3 followed by R1 → R1 −R3 , and then
simplifying, we get (1 + λ)(2 − λ)(1 − λ) = 0 so that the eigenvalues of A are
λ = 2, 1, −1.

Eigenvectors corresponding to λ are the nonzero solutions of the system


(A − λI) u = 0.
 
For λ = 2, the eigenvectors are u1 = a1 (1, 3, 1)^T, where a1 ≠ 0.

For λ = 1, the eigenvectors are u2 = a2 (3, 2, 1)^T, where a2 ≠ 0.

For λ = −1, the eigenvectors are u3 = a3 (1, 0, 1)^T, where a3 ≠ 0.
Example 3. Find the eigenvalues, eigenvectors and the eigenspaces of the matrix A = \begin{bmatrix} 1 & 2 & 1 \\ 6 & -1 & 0 \\ -1 & -2 & -1 \end{bmatrix}.

Solution. The characteristic equation of A is P(λ) = |A − λI| = 0, that is

\begin{vmatrix} 1-\lambda & 2 & 1 \\ 6 & -1-\lambda & 0 \\ -1 & -2 & -1-\lambda \end{vmatrix} = 0
The eigenvalues of A are λ = 3, 0, −4.

Eigenvectors corresponding to λ are the nonzero solutions of the system


(A − λI) u = 0.


 
For λ = 3, the eigenvectors are u1 = a1 (2, 3, −2)^T, where a1 ≠ 0.

For λ = 0, the eigenvectors are given by u2 = a2 (1, 6, −13)^T, where a2 ≠ 0.

For λ = −4, the eigenvectors are given by u3 = a3 (1, −2, −1)^T, where a3 ≠ 0.
Example 4. Find the eigenvalues, eigenvectors and the eigenspaces of the matrix A = \begin{bmatrix} 2 & 1 & 1 \\ 2 & 3 & 2 \\ 3 & 3 & 4 \end{bmatrix}.

Solution. The characteristic equation of A is P(λ) = |A − λI| = 0, that is

\begin{vmatrix} 2-\lambda & 1 & 1 \\ 2 & 3-\lambda & 2 \\ 3 & 3 & 4-\lambda \end{vmatrix} = 0

With the help of the operations C1 → C1 − C3, R3 → R2 − R3 and R1 → R1 + R2 + R3, we get (1 − λ)²(7 − λ) = 0, so that the eigenvalues of A are λ = 7, 1, 1.

Eigenvectors corresponding to λ are the nonzero solutions of the system

(A − λI) u = 0.
 
For λ = 7, the eigenvectors are u1 = a1 (1, 2, 3)^T, where a1 ≠ 0.

The eigenspace of λ = 7 is E7 = {O} ∪ {k1 (1, 2, 3)^T : k1 ≠ 0}.
   

For λ = 1, the eigenvectors are determined by

(A − I) u = 0,

that is     
\begin{bmatrix} 1 & 1 & 1 \\ 2 & 2 & 2 \\ 3 & 3 & 3 \end{bmatrix} \begin{bmatrix} u_1 \\ u_2 \\ u_3 \end{bmatrix} = \begin{bmatrix} 0 \\ 0 \\ 0 \end{bmatrix},
which reduce to the single equation

u1 + u2 + u3 = 0 or u3 = −u1 − u2.


Thus
         
(u1, u2, u3)^T = (u1, u2, −u1 − u2)^T = u1 (1, 0, −1)^T + u2 (0, 1, −1)^T.

Note that e2 = (1, 0, −1)^T and e3 = (0, 1, −1)^T are two linearly independent eigenvectors, and all other eigenvectors are linear combinations of e2 and e3. Therefore, the geometric multiplicity of λ = 1 is 2.

The eigenspace of λ = 1 is
        
 0 [ 1 0 
E1 = 0 k2  0  + k3  1  : k2 6= 0, k3 6= 0 ·
0 −1 −1
   

Example 5. Find the eigenvalues, eigenvectors and the eigenspaces of the matrix A = \begin{bmatrix} 3 & -3 & 2 \\ -1 & 5 & -2 \\ -1 & 3 & 0 \end{bmatrix}.

Solution. The characteristic equation of A is P(λ) = |A − λI| = 0, that is

\begin{vmatrix} 3-\lambda & -3 & 2 \\ -1 & 5-\lambda & -2 \\ -1 & 3 & -\lambda \end{vmatrix} = 0
With the help of the operations R1 → R1 + R2, R3 → R2 − R3 and R2 → R1 + R2, we get (2 − λ)²(4 − λ) = 0, so that the eigenvalues of A are λ = 4, 2, 2.

Eigenvectors corresponding to λ are the nonzero solutions of the system


(A − λI) u = 0.
 
For λ = 4, the eigenvectors are u1 = a1 (−1, 1, 1)^T, where a1 ≠ 0.
For λ = 2, the eigenvectors are determined by
(A − 2I) u = 0,
that is     
\begin{bmatrix} 1 & -3 & 2 \\ -1 & 3 & -2 \\ -1 & 3 & -2 \end{bmatrix} \begin{bmatrix} u_1 \\ u_2 \\ u_3 \end{bmatrix} = \begin{bmatrix} 0 \\ 0 \\ 0 \end{bmatrix},
which reduce to the single equation
u1 − 3u2 + 2u3 = 0 or u1 = 3u2 − 2u3 .


Thus
         
(u1, u2, u3)^T = (3u2 − 2u3, u2, u3)^T = u2 (3, 1, 0)^T + u3 (−2, 0, 1)^T.

Note that e2 = (3, 1, 0)^T and e3 = (−2, 0, 1)^T are two linearly independent eigenvectors, and all other eigenvectors are linear combinations of e2 and e3. Therefore, the geometric multiplicity of λ = 2 is 2.
Example 6. Find the eigenvalues, eigenvectors and the eigenspaces of the matrix A = \begin{bmatrix} 1 & -3 & 3 \\ 3 & -5 & 3 \\ 6 & -6 & 4 \end{bmatrix}.

Solution. The characteristic equation of A is P(λ) = |A − λI| = 0, that is

\begin{vmatrix} 1-\lambda & -3 & 3 \\ 3 & -5-\lambda & 3 \\ 6 & -6 & 4-\lambda \end{vmatrix} = 0

With the help of the operations C1 → C1 + C2, C3 → C2 + C3 and R2 → R1 + R2, we get (2 + λ)²(λ − 4) = 0, so that the eigenvalues of A are λ = 4, −2, −2.

Eigenvectors corresponding to λ are the nonzero solutions of the system

(A − λI) u = 0.
 
For λ = 4, the eigenvectors are u1 = a1 (1, 1, 2)^T, where a1 ≠ 0.
For λ = −2, the eigenvectors are determined by

(A + 2I) u = 0,

that is     
\begin{bmatrix} 3 & -3 & 3 \\ 3 & -3 & 3 \\ 6 & -6 & 6 \end{bmatrix} \begin{bmatrix} u_1 \\ u_2 \\ u_3 \end{bmatrix} = \begin{bmatrix} 0 \\ 0 \\ 0 \end{bmatrix},
which reduce to the single equation

u1 − u2 + u3 = 0 or u3 = u2 − u1.

Thus
         
(u1, u2, u3)^T = (u1, u2, u2 − u1)^T = u1 (1, 0, −1)^T + u2 (0, 1, 1)^T


   
Note that e2 = (1, 0, −1)^T and e3 = (0, 1, 1)^T are two linearly independent eigenvectors, and all other eigenvectors are linear combinations of e2 and e3. Therefore, the geometric multiplicity of λ = −2 is 2.
Example 7. Find the eigenvalues, eigenvectors and the eigenspaces of the matrix A = \begin{bmatrix} -2 & 2 & -3 \\ 2 & 1 & -6 \\ -1 & -2 & 0 \end{bmatrix}.

Solution. The characteristic equation of A is P(λ) = |A − λI| = 0, that is

\begin{vmatrix} -2-\lambda & 2 & -3 \\ 2 & 1-\lambda & -6 \\ -1 & -2 & -\lambda \end{vmatrix} = 0

Using the operations C2 → C2 − 2C1 and C3 → C3 + 3C1, and then simplifying, we get (3 + λ)²(λ − 5) = 0, so that the eigenvalues of A are λ = 5, −3, −3.

Eigenvectors corresponding to λ are the nonzero solutions of the system

(A − λI) u = 0.
 
For λ = 5, the eigenvectors are u1 = a1 (1, 2, −1)^T, where a1 ≠ 0.
For λ = −3, the eigenvectors are given by
     
(u1, u2, u3)^T = u2 (−2, 1, 0)^T + u3 (3, 0, 1)^T.

Note that e2 = (−2, 1, 0)^T and e3 = (3, 0, 1)^T are two linearly independent eigenvectors, and all other eigenvectors are linear combinations of e2 and e3. Therefore, the geometric multiplicity of λ = −3 is 2.
Example 8. Find the eigenvalues, eigenvectors and the eigenspaces of the matrix A = \begin{bmatrix} 4 & 0 & 1 \\ 2 & 3 & 2 \\ 1 & 0 & 4 \end{bmatrix}.

Solution. The characteristic equation of A is P(λ) = |A − λI| = 0, that is

\begin{vmatrix} 4-\lambda & 0 & 1 \\ 2 & 3-\lambda & 2 \\ 1 & 0 & 4-\lambda \end{vmatrix} = 0


Expanding the determinant,


(4 − λ)(3 − λ)(4 − λ) − (3 − λ) = 0 or (3 − λ)²(5 − λ) = 0
The eigenvalues of A are λ = 5, 3, 3.

Eigenvectors corresponding to λ are the nonzero solutions of the system


(A − λI) u = 0.
 
For λ = 5, the eigenvectors are u1 = a1 (1, 2, 1)^T, where a1 ≠ 0.
For λ = 3, the eigenvectors are given by
     
(u1, u2, u3)^T = a2 (−1, 0, 1)^T + a3 (0, 1, 0)^T.
The geometric multiplicity of λ = 3 is 2.
Example 9. Find the eigenvalues, eigenvectors and the eigenspaces of the matrix A = \begin{bmatrix} 1 & -1 & 0 \\ 0 & 1 & 1 \\ 0 & 0 & -2 \end{bmatrix}.

Solution. The characteristic equation of A is P(λ) = |A − λI| = 0, that is

\begin{vmatrix} 1-\lambda & -1 & 0 \\ 0 & 1-\lambda & 1 \\ 0 & 0 & -2-\lambda \end{vmatrix} = 0
The eigenvalues of A are λ = 1, 1, −2.
 
For λ = 1, the eigenvectors are a1 (1, 0, 0)^T, where a1 ≠ 0. The geometric multiplicity of λ = 1 is 1.

For λ = −2, the eigenvectors are a2 (1, 3, −9)^T, where a2 ≠ 0.

Properties of Eigenvalues and Eigenvectors


Property 1. The sum of all the eigenvalues of a square matrix equals its
trace (the sum of the diagonal entries of A). That is, if λ1 , λ2 , ..., λn are the
eigenvalues of a square matrix A = [aij ] of order n, then
λ1 + λ2 + · · · + λn = tr(A) = a11 + a22 + · · · + ann. (5)


Property 2. The product of all the eigenvalues of a square matrix equals


its determinant. That is, if λ1 , λ2 , ..., λn are the eigenvalues of a square
matrix A of order n, then

λ1 × λ2 × ... × λn = det(A). (6)

A square matrix in which all the entries above (respectively, below) the principal diagonal are zero is called a lower triangular matrix (respectively, an upper triangular matrix).
A square matrix in which all the nondiagonal entries are zero, is called a
diagonal matrix.
Property 3. The diagonal entries of a triangular matrix (upper/lower) will
serve as its eigenvalues.
Property 4. Let λ be an eigenvalue of a square matrix A, and u be the
corresponding eigenvector. Then
(a) cλ will be an eigenvalue of cA with the same eigenvector u, c ≠ 0,

(b) λ², λ³, ... will be eigenvalues of A², A³, ... with the same eigenvector u.
Property 5. A square matrix A is invertible if and only if each eigenvalue
of A is nonzero.
Property 6. Let λ be an eigenvalue of an invertible matrix A and u be the
corresponding eigenvector. Then 1/λ will be an eigenvalue of A−1 with the
same eigenvector u.
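
Properties 1, 2, 4 and 6 can be spot-checked numerically. A small sketch assuming NumPy, using the matrix of Example 1:

import numpy as np

A = np.array([[3, 2, 2],
              [1, 4, 1],
              [-2, -4, -1]], dtype=float)

lam = np.linalg.eigvals(A)                      # eigenvalues of A

# Property 1: the eigenvalues add up to the trace
assert np.isclose(lam.sum(), np.trace(A))

# Property 2: the eigenvalues multiply to the determinant
assert np.isclose(lam.prod(), np.linalg.det(A))

# Property 4(b): the squares of the eigenvalues are eigenvalues of A^2
assert np.allclose(np.sort(np.linalg.eigvals(A @ A).real), np.sort(lam.real ** 2))

# Property 6: the reciprocals are eigenvalues of the inverse (A is invertible here)
assert np.allclose(np.sort(np.linalg.eigvals(np.linalg.inv(A)).real), np.sort(1.0 / lam.real))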

Questions:
1. If λ is an eigenvalue of an invertible matrix A, how do you find the
eigenvalues of adj A?

2. What can you say about the eigenvalues of a square matrix and its
transpose?

Theorem 1 (Cayley-Hamilton Theorem). Every square matrix satisfies its


own characteristic equation. Thus if A is a square matrix of order n, and

λ^n + a1 λ^{n−1} + a2 λ^{n−2} + · · · + a_{n−1} λ + a_n = 0

is its characteristic equation, where a1 , a2 , ..., an are constants, then

A^n + a1 A^{n−1} + a2 A^{n−2} + · · · + a_{n−1} A + a_n I = 0. (7)

If A is nonsingular, then a_n ≠ 0, and from (7) the inverse of A is given by

A^{−1} = −(1/a_n) (A^{n−1} + a1 A^{n−2} + a2 A^{n−3} + · · · + a_{n−2} A + a_{n−1} I). (8)
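
Both the theorem and formula (8) can be checked numerically. The sketch below assumes NumPy; np.poly(A) returns the coefficients 1, a1, ..., an of the characteristic polynomial of A, and the matrix chosen is that of Example 10 below.

import numpy as np

A = np.array([[2, -1, 1],
              [-1, 2, -1],
              [1, -1, 2]], dtype=float)
n = A.shape[0]

coeffs = np.poly(A)       # [1, a1, a2, ..., an]

# Cayley-Hamilton: A^n + a1 A^(n-1) + ... + a(n-1) A + an I = 0
CH = sum(c * np.linalg.matrix_power(A, n - k) for k, c in enumerate(coeffs))
assert np.allclose(CH, np.zeros((n, n)))

# formula (8) for the inverse, valid because an != 0 here
a_n = coeffs[-1]
A_inv = -(1.0 / a_n) * sum(c * np.linalg.matrix_power(A, n - 1 - k)
                           for k, c in enumerate(coeffs[:-1]))
assert np.allclose(A_inv @ A, np.eye(n))
print(np.round(4 * A_inv))   # 4 A^{-1} = [[3,1,-1],[1,3,1],[-1,1,3]], as in Example 10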


 
Example 10. Verify the Cayley-Hamilton theorem for A = \begin{bmatrix} 2 & -1 & 1 \\ -1 & 2 & -1 \\ 1 & -1 & 2 \end{bmatrix} and find A^{−1}.

Solution. The characteristic equation of A is P(λ) = |A − λI| = 0, that is

\begin{vmatrix} 2-\lambda & -1 & 1 \\ -1 & 2-\lambda & -1 \\ 1 & -1 & 2-\lambda \end{vmatrix} = 0

Expanding this and simplifying, we get

λ³ − 6λ² + 9λ − 4 = 0. (9)

To verify Cayley-Hamilton theorem, we require

A³ − 6A² + 9A − 4I = 0. (10)

Now
    
A² = \begin{bmatrix} 2 & -1 & 1 \\ -1 & 2 & -1 \\ 1 & -1 & 2 \end{bmatrix} \begin{bmatrix} 2 & -1 & 1 \\ -1 & 2 & -1 \\ 1 & -1 & 2 \end{bmatrix} = \begin{bmatrix} 6 & -5 & 5 \\ -5 & 6 & -5 \\ 5 & -5 & 6 \end{bmatrix},

A³ = A² · A = \begin{bmatrix} 6 & -5 & 5 \\ -5 & 6 & -5 \\ 5 & -5 & 6 \end{bmatrix} \begin{bmatrix} 2 & -1 & 1 \\ -1 & 2 & -1 \\ 1 & -1 & 2 \end{bmatrix} = \begin{bmatrix} 22 & -21 & 21 \\ -21 & 22 & -21 \\ 21 & -21 & 22 \end{bmatrix}.

With these, we see that

A³ − 6A² + 9A − 4I = \begin{bmatrix} 22-36+18-4 & -21+30-9+0 & 21-30+9+0 \\ -21+30-9+0 & 22-36+18-4 & -21+30-9+0 \\ 21-30+9+0 & -21+30-9+0 & 22-36+18-4 \end{bmatrix} = \begin{bmatrix} 0 & 0 & 0 \\ 0 & 0 & 0 \\ 0 & 0 & 0 \end{bmatrix} = 0.

This verifies (10).

Further, multiplying (10) by A^{−1} and solving, A^{−1} = (1/4)(A² − 6A + 9I). Thus

A^{−1} = (1/4) \begin{bmatrix} 9-12+6 & 0+6-5 & 0-6+5 \\ 0+6-5 & 9-12+6 & 0+6-5 \\ 0-6+5 & 0+6-5 & 9-12+6 \end{bmatrix} = (1/4) \begin{bmatrix} 3 & 1 & -1 \\ 1 & 3 & 1 \\ -1 & 1 & 3 \end{bmatrix}.


 
Example 11. Verify the Cayley-Hamilton theorem for A = \begin{bmatrix} 2 & 1 & 1 \\ 0 & 1 & 0 \\ 1 & 1 & 2 \end{bmatrix} and then find A^{−1}.

Solution. The characteristic equation of A is P(λ) = |A − λI| = 0, that is

\begin{vmatrix} 2-\lambda & 1 & 1 \\ 0 & 1-\lambda & 0 \\ 1 & 1 & 2-\lambda \end{vmatrix} = 0

Expanding this and simplifying, we get

λ³ − 5λ² + 7λ − 3 = 0. (11)

Verify Cayley-Hamilton theorem by showing that

A³ − 5A² + 7A − 3I = 0. (12)

Now, multiplying (12) by A^{−1},

A^{−1} = (1/3)(A² − 5A + 7I) = (1/3) \begin{bmatrix} 2 & -1 & -1 \\ 0 & 3 & 0 \\ -1 & -1 & 2 \end{bmatrix}.

Diagonalization
A square matrix whose nondiagonal entries are zero is called a diagonal matrix. For instance, \begin{bmatrix} 2 & 0 & 0 \\ 0 & 3 & 0 \\ 0 & 0 & -1 \end{bmatrix} and \begin{bmatrix} 0 & 0 & 0 \\ 0 & 0 & 0 \\ 0 & 0 & 1 \end{bmatrix} are diagonal matrices.
A square matrix A is said to be diagonalizable if there exists a nonsingular matrix P such that P^{−1}AP is a diagonal matrix D. When such a P exists, we say that P diagonalizes A. The matrix P is called a modal matrix. The procedure of reducing a given square matrix to a diagonal matrix D through a modal matrix P is called diagonalization.
Theorem 2. A square matrix A of order n is diagonalizable if and only if
it has n linearly independent eigenvectors.
Suppose that λ1, λ2, ..., λn are the eigenvalues of A, and let u1, u2, ..., un be the corresponding linearly independent eigenvectors of A. Grouping these into a matrix gives the modal matrix P = [u1 u2 ... un]. Thus

P^{−1}AP = diag(λ1, λ2, ..., λn) = D.

(a) If the eigenvalues are distinct, the corresponding n eigenvectors will


always be linearly independent


(b) If the eigenvalues are not distinct, we have to find an appropriate set
of n linearly independent eigenvectors to produce the modal matrix P
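
This construction can be mirrored numerically. The following sketch assumes NumPy and uses the matrix of Example 13 below: the columns returned by np.linalg.eig serve as a candidate modal matrix P, and P^{-1}AP comes out diagonal precisely when there are n independent eigenvectors.

import numpy as np

A = np.array([[1, 0, -1],
              [1, 2, 1],
              [2, 2, 3]], dtype=float)     # the matrix of Example 13 below
n = A.shape[0]

eigenvalues, P = np.linalg.eig(A)          # columns of P are eigenvectors: a candidate modal matrix

if np.linalg.matrix_rank(P) == n:                       # n linearly independent eigenvectors?
    D = np.linalg.inv(P) @ A @ P                        # P^{-1} A P
    assert np.allclose(D, np.diag(eigenvalues))         # equals diag(lambda_1, ..., lambda_n)
    print("diagonalizable; D has diagonal", np.round(np.diag(D), 6))
else:
    print("fewer than n independent eigenvectors: not diagonalizable")
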
 
Example 12. Examine whether A = \begin{bmatrix} 2 & 1 & 2 \\ 0 & 2 & -1 \\ 0 & 0 & 2 \end{bmatrix} is diagonalizable.

Solution. Since A is an upper triangular matrix, its diagonal elements will
be the eigenvalues of it, namely λ = 2, 2, 2. Thus the algebraic multiplicity
of λ = 2 is 3. We know that the eigenvectors corresponding to an eigenvalue
λ are the nonzero solutions of the system (A − λI) u = 0.

Therefore, the eigenvectors corresponding to λ = 2 are the nonzero solu-


tions of the system (A − 2I) u = 0, that is
    
\begin{bmatrix} 0 & 1 & 2 \\ 0 & 0 & -1 \\ 0 & 0 & 0 \end{bmatrix} \begin{bmatrix} u_1 \\ u_2 \\ u_3 \end{bmatrix} = \begin{bmatrix} 0 \\ 0 \\ 0 \end{bmatrix},
or

u2 + 2u3 = 0, −u3 = 0.

Solving these, we get u2 = 0 = u3 . Thus


   
(u1, u2, u3)^T = (u1, 0, 0)^T = u1 e1,

where e1 = (1, 0, 0)^T and u1 ≠ 0. Note that e1 is the only linearly independent
eigenvector, and all other eigenvectors are scalar multiples of e1 . Therefore,
it is not possible to obtain three linearly independent eigenvectors for A to
form a modal matrix P . Hence A is not diagonalizable.
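
The conclusion of Example 12 can be restated through ranks: the geometric multiplicity of an eigenvalue λ is n − rank(A − λI), the dimension of the solution space of (A − λI)u = 0. A small check of this, assuming NumPy:

import numpy as np

A = np.array([[2, 1, 2],
              [0, 2, -1],
              [0, 0, 2]], dtype=float)
n = A.shape[0]
lam = 2.0                                   # the only eigenvalue; algebraic multiplicity 3

geom_mult = n - np.linalg.matrix_rank(A - lam * np.eye(n))
print(geom_mult)                            # 1: only one independent eigenvector

# A is diagonalizable only if the geometric multiplicities add up to n;
# here 1 < 3, so A is not diagonalizable, as found above.
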
 
Example 13. Examine whether A = \begin{bmatrix} 1 & 0 & -1 \\ 1 & 2 & 1 \\ 2 & 2 & 3 \end{bmatrix} is diagonalizable. If so, find a nonsingular matrix P which diagonalizes A.

Solution. The characteristic equation of A is P(λ) = |A − λI| = 0, that is

\begin{vmatrix} 1-\lambda & 0 & -1 \\ 1 & 2-\lambda & 1 \\ 2 & 2 & 3-\lambda \end{vmatrix} = 0.

The eigenvalues of A are λ = 3, 2, 1. Since the eigenvalues are distinct, A is diagonalizable.


To find the modal matrix P which diagonalizes A, we should find the eigen-
vectors of A.
 
The eigenvectors corresponding to λ = 3 are a1 (−1, 1, 2)^T, the eigenvectors corresponding to λ = 2 are a2 (−2, 1, 2)^T, and the eigenvectors corresponding to λ = 1 are a3 (1, −1, 0)^T, where a1, a2 and a3 are nonzero. Now, we write

e1 = (−1, 1, 2)^T, e2 = (−2, 1, 2)^T, e3 = (1, −1, 0)^T and P = [e1 e2 e3].

Note that e1, e2 and e3 are eigenvectors of distinct eigenvalues and hence are linearly independent. In fact,

|P| = \begin{vmatrix} -1 & -2 & 1 \\ 1 & 1 & -1 \\ 2 & 2 & 0 \end{vmatrix} = 2 ≠ 0.

Thus e1, e2 and e3 are linearly independent.

The modal matrix, which diagonalizes A, is

P = [e1 e2 e3] = \begin{bmatrix} -1 & -2 & 1 \\ 1 & 1 & -1 \\ 2 & 2 & 0 \end{bmatrix}.

Finally, P^{−1}AP = \begin{bmatrix} 3 & 0 & 0 \\ 0 & 2 & 0 \\ 0 & 0 & 1 \end{bmatrix} = diag(3, 2, 1) = D.
 
Example 14. Diagonalize A = \begin{bmatrix} 2 & -2 & 3 \\ 1 & 1 & 1 \\ 1 & 3 & -1 \end{bmatrix} through an appropriate nonsingular matrix P.

Solution. The characteristic equation of A is P(λ) = |A − λI| = 0, that is

\begin{vmatrix} 2-\lambda & -2 & 3 \\ 1 & 1-\lambda & 1 \\ 1 & 3 & -1-\lambda \end{vmatrix} = 0.

The eigenvalues of A are λ = 3, 1, −2. Since the eigenvalues are distinct, A is diagonalizable.


 
The eigenvectors corresponding to λ = 3 are a1 (1, 1, 1)^T, the eigenvectors corresponding to λ = 1 are a2 (−1, 1, 1)^T, and the eigenvectors corresponding to λ = −2 are a3 (11, 1, −14)^T, where a1, a2 and a3 are nonzero. The modal matrix, which diagonalizes A, is

P = \begin{bmatrix} 1 & -1 & 11 \\ 1 & 1 & 1 \\ 1 & 1 & -14 \end{bmatrix}.

Finally, P^{−1}AP = \begin{bmatrix} 3 & 0 & 0 \\ 0 & 1 & 0 \\ 0 & 0 & -2 \end{bmatrix} = diag(3, 1, −2) = D.
 
Example 15. Diagonalize A = \begin{bmatrix} 5 & -4 & 4 \\ 12 & -11 & 12 \\ 4 & -4 & 5 \end{bmatrix} through an appropriate nonsingular matrix P.
Solution. The eigenvalues of A are λ = 1, 1, −3.

  
The eigenvectors corresponding to λ = 1 are a1 (1, 0, −1)^T + a2 (0, 1, 1)^T, where a1, a2 are not both zero, and the eigenvectors corresponding to λ = −3 are a3 (1, 3, 1)^T, where a3 ≠ 0. Let

P = \begin{bmatrix} 1 & 0 & 1 \\ 0 & 1 & 3 \\ -1 & 1 & 1 \end{bmatrix}.

Note that |P| = −1 ≠ 0, and hence the column vectors of P are linearly independent. Hence P serves as a modal matrix, and

P^{−1}AP = \begin{bmatrix} 1 & 0 & 0 \\ 0 & 1 & 0 \\ 0 & 0 & -3 \end{bmatrix} = D.
 
Example 16. Show that A = \begin{bmatrix} 1 & -6 & -4 \\ 0 & 4 & 2 \\ 0 & -6 & -3 \end{bmatrix} is diagonalizable, and find an appropriate nonsingular matrix P that diagonalizes A.
Solution. The eigenvalues of A are λ = 1, 1, 0.

   
The eigenvectors corresponding to λ = 1 are a1 (1, 0, 0)^T + a2 (0, 2, −3)^T, where a1, a2 are not both zero, and the eigenvectors corresponding to λ = 0 are a3 (2, −1, 2)^T, where a3 ≠ 0. Let

P = \begin{bmatrix} 1 & 0 & 2 \\ 0 & 2 & -1 \\ 0 & -3 & 2 \end{bmatrix}.

Note that |P| = 1 ≠ 0, and hence the column vectors of P are linearly independent. Hence P serves as a modal matrix, and P^{−1}AP = \begin{bmatrix} 1 & 0 & 0 \\ 0 & 1 & 0 \\ 0 & 0 & 0 \end{bmatrix} = D.
 
Example 17. Show that A = \begin{bmatrix} 2 & 0 & 2 \\ -1 & 3 & 1 \\ 1 & -1 & 3 \end{bmatrix} is not diagonalizable.

Symmetric and Orthogonal Matrices


A square matrix A is symmetric if A^T = A, where A^T is the transpose of A.
Theorem 3 (Real Eigenvalues). Eigenvalues of a symmetric matrix with
real entries are real.
   
Definition (Orthogonal Column Vectors). Two vectors u = (u1, u2, ..., un)^T and v = (v1, v2, ..., vn)^T in R^n are said to be orthogonal if their inner product u • v is zero, where u • v = u^T v = u1 v1 + u2 v2 + · · · + un vn.
 
Definition (Norm of a Column Vector). The norm of a vector u = (u1, u2, ..., un)^T in R^n is given by

‖u‖ = √(u • u) = √(u^T u) = √(u1² + u2² + · · · + un²).

Dividing a vector u by its norm ‖u‖, we get its normalized form e = u/‖u‖. A set of mutually orthogonal normalized vectors is called an orthonormal set.
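
In NumPy these notions correspond to one-line computations. A small sketch, not part of the original notes:

import numpy as np

u = np.array([1.0, -2.0, 1.0])
v = np.array([1.0, 0.0, -1.0])

inner = u @ v                     # inner product u . v = u^T v
norm_u = np.linalg.norm(u)        # ||u|| = sqrt(u . u)
e = u / norm_u                    # normalized form of u

print(inner)                                  # 0.0, so u and v are orthogonal
print(np.isclose(np.linalg.norm(e), 1.0))     # a normalized vector has norm 1
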
Definition (Orthogonal Matrix). A real square matrix P is said to be orthogonal if P^{−1} = P^T, that is, P P^T = I.


Theorem 4 (Criterion for Orthogonal Matrix). An n × n matrix P is orthogonal if and only if its column vectors form an orthonormal set.

Diagonalization of Symmetric Matrices


An n × n symmetric matrix A with real entries can always be diagonalized through an orthogonal matrix P such that P^{−1}AP is a diagonal matrix D. Thus a symmetric matrix is said to be orthogonally diagonalizable.

Distinct Eigenvalues
We recall the following result:
Theorem 5 (Orthogonal Eigenvectors). Let A be an n × n symmetric ma-
trix. Then the eigenvectors corresponding to distinct eigenvalues are mutually
orthogonal.
Let A be an n × n real symmetric matrix with distinct eigenvalues λ1 , λ2 , ...,
λn . Then, by the above theorem, the corresponding eigenvectors u1 , u2 , ...,
un are mutually orthogonal. Normalize these vectors as e1 , e2 , ..., en . Then
P = [e1 e2 ... en ] is an orthogonal modal matrix. The diagonal form of
A is given by
P^{−1}AP = diag(λ1, λ2, ..., λn) = D.
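
Numerically, this orthogonal diagonalization is exactly what np.linalg.eigh produces for a real symmetric matrix: the columns it returns already form an orthonormal set of eigenvectors. A sketch assuming NumPy, using the matrix of Example 18 below:

import numpy as np

A = np.array([[1, -1, 0],
              [-1, 2, -1],
              [0, -1, 1]], dtype=float)    # real symmetric

eigenvalues, P = np.linalg.eigh(A)         # columns of P: orthonormal eigenvectors

assert np.allclose(P.T @ P, np.eye(3))                    # P is orthogonal, so P^{-1} = P^T
assert np.allclose(P.T @ A @ P, np.diag(eigenvalues))     # P^T A P = D
print(np.round(eigenvalues, 6))                           # 0, 1, 3 (ascending order)
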
 
Example 18. Diagonalize A = \begin{bmatrix} 1 & -1 & 0 \\ -1 & 2 & -1 \\ 0 & -1 & 1 \end{bmatrix} using an appropriate orthogonal matrix P.
Solution. The eigenvalues of A are λ = 3, 1, 0.
 
Eigenvectors corresponding to λ = 3 are a1 (1, −2, 1)^T, where a1 ≠ 0. For a1 = 1, we write u1 = (1, −2, 1)^T. Eigenvectors corresponding to λ = 1 are a2 (1, 0, −1)^T, where a2 ≠ 0. For a2 = 1, we write u2 = (1, 0, −1)^T. Eigenvectors corresponding to λ = 0 are a3 (1, 1, 1)^T, where a3 ≠ 0. For a3 = 1, we write u3 = (1, 1, 1)^T.
Since the eigenvalues of the symmetric matrix A are distinct, the eigenvectors u1, u2, u3 are pairwise orthogonal. In fact, we have


u1 • u2 = (1)(1) + (−2)(0) + (1)(−1) = 0,
u2 • u3 = (1)(1) + (0)(1) + (−1)(1) = 0,
u1 • u3 = (1)(1) + (−2)(1) + (1)(1) = 0.

The normalized forms of u1 , u2 , u3 are respectively


 √   √   √ 
1/ √6 1/ 2 1/√3
e1 = −2/√ 6 , e2 =  0√  , e3 = 1/√3 .
1/ 6 −1/ 2 1/ 3

In other words, e1 , e2 and e3 form an orthonormal set. Write


 √ √ √ 
1/ √6 1/ 2 1/√3
P = [e1 e2 e3 ] = −2/√ 6 0√ 1/√3 .
1/ 6 −1/ 2 1/ 3
 
We see that P is an orthogonal matrix, and P^{−1}AP = \begin{bmatrix} 3 & 0 & 0 \\ 0 & 1 & 0 \\ 0 & 0 & 0 \end{bmatrix}. Thus P is regarded as a modal matrix for diagonalizing the symmetric matrix A.

Eigenvalues not Distinct


Let A be a 3 × 3 real symmetric matrix with eigenvalues λ1 , λ2 and λ3
such that λ2 = λ3 . Suppose that u1 is the eigenvector corresponding to the
eigenvalue λ1 , and u2 and u3 be the linearly independent eigenvectors cor-
responding to the repeated eigenvalue λ2 . Then u1 is orthogonal to u2 and
u3 . However, u2 and u3 are not orthogonal (different linearly independent
eigenvectors corresponding to a repeated eigenvalue are not orthogonal, in
general).

Replacing u3 with a linear combination


v3 = u3 − ((u2 • u3)/‖u2‖²) u2,

we get mutually orthogonal vectors u1 , u2 , v3 . Normalize these vectors as


e1 , e2 , e3 , by dividing with their norms. Thus e1 , e2 , e3 form an orthonormal


set. Hence P = [e1 e2 e3 ] is an orthogonal modal matrix. The diagonal


form of A is given by

P^{−1}AP = diag(λ1, λ2, λ3) = D.
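
The replacement of u3 by v3 is one step of the Gram-Schmidt process. The sketch below, assuming NumPy, carries out the whole recipe for the matrix of Example 19 that follows, starting from the hand-computed eigenvectors.

import numpy as np

A = np.array([[-2, 2, 3],
              [2, 1, 6],
              [3, 6, 6]], dtype=float)

u1 = np.array([1.0, 2.0, 3.0])      # eigenvector for lambda = 11
u2 = np.array([-3.0, 0.0, 1.0])     # eigenvectors for the repeated lambda = -3 ...
u3 = np.array([-2.0, 1.0, 0.0])     # ... which are not orthogonal to each other: u2 . u3 = 6

# Gram-Schmidt step: replace u3 by a vector orthogonal to u2 (still in the eigenspace of -3)
v3 = u3 - (u2 @ u3) / (u2 @ u2) * u2

# normalize and assemble the orthogonal modal matrix
P = np.column_stack([u1 / np.linalg.norm(u1),
                     u2 / np.linalg.norm(u2),
                     v3 / np.linalg.norm(v3)])

assert np.allclose(P.T @ P, np.eye(3))                    # P is orthogonal
assert np.allclose(P.T @ A @ P, np.diag([11, -3, -3]))    # P^{-1} A P = D
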
 
Example 19. Diagonalize A = \begin{bmatrix} -2 & 2 & 3 \\ 2 & 1 & 6 \\ 3 & 6 & 6 \end{bmatrix} using an appropriate orthogonal matrix P.
Solution. The eigenvalues of A are λ = 11, −3, −3.
 
Eigenvectors corresponding to λ = 11 are a1 (1, 2, 3)^T, where a1 ≠ 0. For a1 = 1, we write u1 = (1, 2, 3)^T.
   
Eigenvectors corresponding to λ = −3 are a2 (−3, 0, 1)^T + a3 (−2, 1, 0)^T, where a2, a3 are not both zero. We write u2 = (−3, 0, 1)^T and u3 = (−2, 1, 0)^T.
The eigenvectors u1 , u2 , u3 are linearly independent, u1 •u2 = 0, u1 •u3 = 0.
But u2 • u3 = 6 ≠ 0. Thus u1, u2, u3 are not pairwise orthogonal.

However, we can find a vector which is orthogonal to u2 by the formula:


       
v3 = u3 − ((u2 • u3)/‖u2‖²) u2 = (−2, 1, 0)^T − (6/10)(−3, 0, 1)^T = (−1/5, 1, −3/5)^T.

The normalized forms of u1 , u2 , v3 are respectively


 √   √   √ 
1/√14 −3/ 10 −1/√ 35
e1 = 2/√14 , e2 =  √ 0  , e3 =  5/ 35  .

3/ 14 1/ 10 −3/ 35

In other words, e1 , e2 and e3 form an orthonormal set. Write

P = [e1 e2 e3 ].
 
We see that P is an orthogonal matrix, and P^{−1}AP = \begin{bmatrix} 11 & 0 & 0 \\ 0 & -3 & 0 \\ 0 & 0 & -3 \end{bmatrix}.
Thus P is regarded as a modal matrix for diagonalizing the symmetric ma-
trix A.


 
Example 20. Diagonalize A = \begin{bmatrix} 2 & 2 & 1 \\ 2 & 5 & 2 \\ 1 & 2 & 2 \end{bmatrix} using an appropriate orthogonal matrix P.
Solution. The eigenvalues of A are λ = 7, 1, 1.
 
Eigenvectors corresponding to λ = 7 are a1 (1, 2, 1)^T, where a1 ≠ 0. For a1 = 1, we write u1 = (1, 2, 1)^T.
   
Eigenvectors corresponding to λ = 1 are a2 (1, 0, −1)^T + a3 (2, −1, 0)^T, where a2, a3 are not both zero. We write u2 = (1, 0, −1)^T and u3 = (2, −1, 0)^T.
The eigenvectors u1 , u2 , u3 are linearly independent, u1 •u2 = 0, u1 •u3 = 0.
But u2 • u3 = 2 ≠ 0. Thus u1, u2, u3 are not pairwise orthogonal.

However, we can find a vector which is orthogonal to u2 by the formula:


     
v3 = u3 − ((u2 • u3)/‖u2‖²) u2 = (2, −1, 0)^T − (2/2)(1, 0, −1)^T = (1, −1, 1)^T.

The normalized forms of u1 , u2 , v3 are respectively


 √   √   √ 
1/√6 1/ 2 1/ √3
e1 = 2/√6 , e2 =  0√  , e3 = −1/√ 3 .
1/ 6 −1/ 2 1/ 3

In other words, e1 , e2 and e3 form an orthonormal set. Write

P = [e1 e2 e3 ].
 
We see that P is an orthogonal matrix, and P^{−1}AP = \begin{bmatrix} 7 & 0 & 0 \\ 0 & 1 & 0 \\ 0 & 0 & 1 \end{bmatrix}. Thus P
is regarded as a modal matrix for diagonalizing the symmetric matrix A.
