Eigenvalues To Diagonalization
Dr. T. Phaneendra
(A − λI)u = 0, (1)
where λ is a constant and I denotes the unit matrix of order n. The diagonal entries of A − λI are aii − λ, i = 1, 2, ..., n, while the nondiagonal entries are aij for j = 1, 2, ..., n with j ≠ i.
A column matrix u = (u1, u2, ..., un)^T which satisfies (1) is called a solution vector or simply a solution of the system (1). Obviously, for every value of λ, u0 = O = (0, 0, ..., 0)^T is a solution of (1), and is called its trivial solution vector or zero vector. The nontrivial solution vectors of the system (1) can be obtained from the condition
|A − λI| = 0. (2)
Since |A − λI| is a polynomial of degree n in λ, the condition (2) is a polynomial equation of degree n, called the characteristic equation of A; the polynomial P(λ) = |A − λI| itself is called the characteristic polynomial of A. The roots of P(λ) are called the characteristic roots or eigenvalues of the matrix A. The nonzero solution vectors of (1), corresponding to each eigenvalue, are called the eigenvectors of A.
ADDE(MAT2002), Module 2. Dr. T. Phaneendra, Professor of Mathematics.
The set of eigenvalues is called the spectrum of A, and the maximum of the magnitudes of the eigenvalues is called the spectral radius of A.
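These notions can be checked numerically. The following Python sketch (illustrative, not part of the notes; `det3` and `char_poly` are ad hoc helper names) evaluates |A − λI| at each point of the spectrum of the matrix used in Example 1 below, and computes the spectral radius:

```python
# Illustrative sketch (not part of the notes): evaluate the characteristic
# polynomial P(lam) = |A - lam*I| of a 3x3 matrix at given points, and
# compute the spectral radius from a known spectrum.

def det3(M):
    # Determinant of a 3x3 matrix by cofactor expansion along the first row.
    return (M[0][0] * (M[1][1] * M[2][2] - M[1][2] * M[2][1])
            - M[0][1] * (M[1][0] * M[2][2] - M[1][2] * M[2][0])
            + M[0][2] * (M[1][0] * M[2][1] - M[1][1] * M[2][0]))

def char_poly(A, lam):
    # P(lam) = |A - lam*I|
    M = [[A[i][j] - (lam if i == j else 0) for j in range(3)] for i in range(3)]
    return det3(M)

A = [[3, 2, 2], [1, 4, 1], [-2, -4, -1]]   # matrix of Example 1 below
spectrum = [1, 2, 3]                        # its eigenvalues
values = [char_poly(A, lam) for lam in spectrum]      # each value is 0
spectral_radius = max(abs(lam) for lam in spectrum)   # 3
```

P(λ) vanishes exactly at the eigenvalues, and the spectral radius here is max(|1|, |2|, |3|) = 3.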
Questions:
1. What is the characteristic equation of a square matrix A?
2. What is the difference between the characteristic polynomial and the
characteristic equation?
3. How many eigenvalues does a square matrix of order n have?
4. What do you mean by the algebraic and geometric multiplicities of an
eigenvalue?
Example 1. Find the eigenvalues, eigenvectors and the eigenspaces of the matrix A = [3 2 2; 1 4 1; −2 −4 −1].
Solution. The characteristic equation of A is P (λ) = |A − λI| = 0, that is
|3 − λ 2 2; 1 4 − λ 1; −2 −4 −1 − λ| = 0. (4)
We employ row and column operations to find the roots of (4). First apply R1 → R1 + R2 + R3 to (4) to get
|2 − λ 2 − λ 2 − λ; 1 4 − λ 1; −2 −4 −1 − λ| = 0
or
(2 − λ) |1 1 1; 1 4 − λ 1; −2 −4 −1 − λ| = 0.
With C2 → C2 − C1 and C3 → C3 − C1 , this gives
(2 − λ) |1 0 0; 1 3 − λ 0; −2 −2 1 − λ| = 0,
that is, (2 − λ)(3 − λ)(1 − λ) = 0. Hence the eigenvalues of A are λ = 1, 2, 3.
The eigenspace of λ = 3 is
E3 = {O} ∪ {u ≠ O : Au = 3u} = {O} ∪ {k1 (0, 1, −1)^T : k1 ≠ 0}.
The eigenspace of λ = 2 is
E2 = {O} ∪ {u ≠ O : Au = 2u} = {O} ∪ {k2 (−2, 1, 0)^T : k2 ≠ 0}.
For λ = 1, the system (A − I)u = 0 gives
u1 + u2 + u3 = 0, u1 + 3u2 + u3 = 0, u1 + 2u2 + u3 = 0,
so that u2 = 0 and u3 = −u1. The eigenspace of λ = 1 is
E1 = {O} ∪ {u ≠ O : Au = u} = {O} ∪ {k3 (−1, 0, 1)^T : k3 ≠ 0}.
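A quick way to validate the three eigenpairs of Example 1 is to check Au = λu directly. A minimal Python sketch (illustrative, not from the notes):

```python
# Illustrative sketch: verify the eigenpairs of Example 1 by checking A u = lam u.

def matvec(A, u):
    return [sum(A[i][j] * u[j] for j in range(3)) for i in range(3)]

A = [[3, 2, 2], [1, 4, 1], [-2, -4, -1]]
pairs = {3: [0, 1, -1], 2: [-2, 1, 0], 1: [-1, 0, 1]}   # lam -> eigenvector

checks = all(matvec(A, u) == [lam * x for x in u] for lam, u in pairs.items())
# checks is True
```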
For λ = 3, the eigenvectors are u1 = a1 (2, 3, −2)^T, where a1 ≠ 0.
For λ = 0, the eigenvectors are given by u2 = a2 (1, 6, −13)^T, where a2 ≠ 0.
For λ = −4, the eigenvectors are given by u3 = a3 (1, 6, −13)^T, where a3 ≠ 0.
Example 4. Find the eigenvalues, eigenvectors and the eigenspaces of the matrix A = [2 1 1; 2 3 2; 3 3 4].
Solution. The characteristic equation of A is P (λ) = |A − λI| = 0, that is
|2 − λ 1 1; 2 3 − λ 2; 3 3 4 − λ| = 0,
that is, (λ − 7)(λ − 1)² = 0. Hence the eigenvalues are λ = 7, 1, 1.
The eigenvectors corresponding to an eigenvalue λ are the nonzero solutions of (A − λI)u = 0.
For λ = 7, the eigenvectors are u1 = a1 (1, 2, 3)^T, where a1 ≠ 0. The eigenspace of λ = 7 is
E7 = {O} ∪ {k1 (1, 2, 3)^T : k1 ≠ 0}.
For λ = 1, the eigenvectors are determined by
(A − I) u = 0,
that is
[1 1 1; 2 2 2; 3 3 3] (u1, u2, u3)^T = (0, 0, 0)^T,
which reduces to the single equation
u1 + u2 + u3 = 0, or u3 = −u1 − u2.
Thus
(u1, u2, u3)^T = (u1, u2, −u1 − u2)^T = u1 (1, 0, −1)^T + u2 (0, 1, −1)^T.
Note that e2 = (1, 0, −1)^T and e3 = (0, 1, −1)^T are two linearly independent eigenvectors, and every other eigenvector corresponding to λ = 1 is a linear combination of e2 and e3. Therefore, the geometric multiplicity of λ = 1 is 2.
The eigenspace of λ = 1 is
E1 = {O} ∪ {k2 (1, 0, −1)^T + k3 (0, 1, −1)^T : k2, k3 not both zero}.
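The eigenpairs of Example 4 can be confirmed the same way: u1 must satisfy Au1 = 7u1, and each basis vector of E1 must satisfy Au = u (an illustrative Python sketch):

```python
# Illustrative sketch: check the results of Example 4.

def matvec(A, u):
    return [sum(A[i][j] * u[j] for j in range(3)) for i in range(3)]

A = [[2, 1, 1], [2, 3, 2], [3, 3, 4]]
u1 = [1, 2, 3]      # eigenvector for lam = 7
e2 = [1, 0, -1]     # basis vectors of the eigenspace E1 (lam = 1)
e3 = [0, 1, -1]

ok7 = matvec(A, u1) == [7 * x for x in u1]
ok1 = matvec(A, e2) == e2 and matvec(A, e3) == e3
```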
Thus
(u1, u2, u3)^T = (3u2 − 2u3, u2, u3)^T = u2 (3, 1, 0)^T + u3 (−2, 0, 1)^T.
Note that e2 = (3, 1, 0)^T and e3 = (−2, 0, 1)^T are two linearly independent eigenvectors, and every other eigenvector corresponding to λ = 2 is a linear combination of e2 and e3. Therefore, the geometric multiplicity of λ = 2 is 2.
Example 6. Find the eigenvalues, eigenvectors and the eigenspaces of the matrix A = [1 −3 3; 3 −5 3; 6 −6 4].
Solution. The characteristic equation of A is P (λ) = |A − λI| = 0, that is
|1 − λ −3 3; 3 −5 − λ 3; 6 −6 4 − λ| = 0,
that is, (λ − 4)(λ + 2)² = 0. Hence the eigenvalues are λ = 4, −2, −2.
The eigenvectors corresponding to λ are the nonzero solutions of (A − λI)u = 0.
For λ = 4, the eigenvectors are u1 = a1 (1, 1, 2)^T, where a1 ≠ 0.
For λ = −2, the eigenvectors are determined by
(A + 2I) u = 0,
that is
[3 −3 3; 3 −3 3; 6 −6 6] (u1, u2, u3)^T = (0, 0, 0)^T,
which reduces to the single equation
u1 − u2 + u3 = 0, or u2 = u1 + u3.
Thus
(u1, u2, u3)^T = (u1, u1 + u3, u3)^T = u1 (1, 1, 0)^T + u3 (0, 1, 1)^T.
Note that e2 = (1, 1, 0)^T and e3 = (0, 1, 1)^T are two linearly independent eigenvectors, and every other eigenvector corresponding to λ = −2 is a linear combination of e2 and e3. Therefore, the geometric multiplicity of λ = −2 is 2.
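It is worth confirming directly that (1, 1, 2)^T, (1, 1, 0)^T and (0, 1, 1)^T satisfy Au = λu for λ = 4, −2, −2 respectively (an illustrative Python sketch):

```python
# Illustrative sketch: confirm the eigenpairs of Example 6 directly.

def matvec(A, u):
    return [sum(A[i][j] * u[j] for j in range(3)) for i in range(3)]

A = [[1, -3, 3], [3, -5, 3], [6, -6, 4]]
ok4 = matvec(A, [1, 1, 2]) == [4, 4, 8]        # A u = 4 u for lam = 4
okm2a = matvec(A, [1, 1, 0]) == [-2, -2, 0]    # A u = -2 u
okm2b = matvec(A, [0, 1, 1]) == [0, -2, -2]    # A u = -2 u
```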
Example 7. Find the eigenvalues, eigenvectors and the eigenspaces of the matrix A = [−2 2 −3; 2 1 −6; −1 −2 0].
Solution. The characteristic equation of A is P (λ) = |A − λI| = 0, that is
|−2 − λ 2 −3; 2 1 − λ −6; −1 −2 −λ| = 0,
that is, (λ − 5)(λ + 3)² = 0. Hence the eigenvalues are λ = 5, −3, −3. The eigenvectors corresponding to λ are the nonzero solutions of (A − λI)u = 0.
For λ = 5, the eigenvectors are u1 = a1 (1, 2, −1)^T, where a1 ≠ 0.
For λ = −3, the system (A + 3I)u = 0 reduces to the single equation u1 + 2u2 − 3u3 = 0, so the eigenvectors are given by
(u1, u2, u3)^T = u2 (−2, 1, 0)^T + u3 (3, 0, 1)^T.
Note that e2 = (−2, 1, 0)^T and e3 = (3, 0, 1)^T are two linearly independent eigenvectors, and every other eigenvector corresponding to λ = −3 is a linear combination of e2 and e3. Therefore, the geometric multiplicity of λ = −3 is 2.
Example 8. Find the eigenvalues, eigenvectors and the eigenspaces of the matrix A = [4 0 1; 2 3 2; 1 0 4].
Solution. The characteristic equation of A is P (λ) = |A − λI| = 0, that is
|4 − λ 0 1; 2 3 − λ 2; 1 0 4 − λ| = 0.
Expanding along the second column, (3 − λ)[(4 − λ)² − 1] = 0, so (λ − 3)²(λ − 5) = 0 and the eigenvalues are λ = 3, 3, 5.
A square matrix in which all the entries above (respectively, below) the principal diagonal are zero is called a lower triangular matrix (respectively, an upper triangular matrix).
A square matrix in which all the nondiagonal entries are zero, is called a
diagonal matrix.
Property 3. The diagonal entries of a triangular matrix (upper/lower) will
serve as its eigenvalues.
Property 4. Let λ be an eigenvalue of a square matrix A, and u be the
corresponding eigenvector. Then
(a) cλ will be an eigenvalue of cA with the same eigenvector u, for every scalar c ≠ 0.
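Property 4(a) can be illustrated with the matrix of Example 6, for which Au = 4u when u = (1, 1, 2)^T. The scalar c = 5 below is an arbitrary choice (an illustrative Python sketch):

```python
# Illustrative sketch of Property 4(a): if A u = lam u, then (cA) u = (c lam) u.

def matvec(A, u):
    return [sum(A[i][j] * u[j] for j in range(3)) for i in range(3)]

A = [[1, -3, 3], [3, -5, 3], [6, -6, 4]]   # matrix of Example 6
u = [1, 1, 2]                               # A u = 4 u
c = 5
cA = [[c * entry for entry in row] for row in A]

# (cA) u = (c * 4) u, so c*lam is an eigenvalue of cA with the same eigenvector.
ok = matvec(cA, u) == [c * 4 * x for x in u]
```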
Questions:
1. Let λ be an eigenvalue of an invertible matrix A. How do you find the eigenvalues of adj A?
2. What can you say about the eigenvalues of a square matrix and its
transpose?
Example 10. Verify the Cayley-Hamilton theorem for A = [2 −1 1; −1 2 −1; 1 −1 2] and find A⁻¹.
Solution. The characteristic equation of A is P (λ) = |A − λI| = 0, that is
|2 − λ −1 1; −1 2 − λ −1; 1 −1 2 − λ| = 0,
which expands to
λ³ − 6λ² + 9λ − 4 = 0. (9)
By the Cayley-Hamilton theorem, A satisfies (9), that is,
A³ − 6A² + 9A − 4I = O. (10)
Now
A² = A·A = [6 −5 5; −5 6 −5; 5 −5 6],
A³ = A²·A = [22 −21 21; −21 22 −21; 21 −21 22],
and substituting these into the left-hand side of (10) gives the zero matrix, which verifies the theorem.
Further, multiplying (10) by A⁻¹, we get A² − 6A + 9I − 4A⁻¹ = O, so that
A⁻¹ = (1/4)(A² − 6A + 9I) = (1/4) [3 1 −1; 1 3 1; −1 1 3].
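Both (10) and the resulting inverse can be verified in exact arithmetic. This Python sketch (illustrative; `matmul` is an ad hoc helper) uses the standard-library Fraction type:

```python
# Illustrative sketch: verify (10) and the inverse of Example 10 exactly.
from fractions import Fraction

def matmul(X, Y):
    return [[sum(X[i][k] * Y[k][j] for k in range(3)) for j in range(3)]
            for i in range(3)]

A = [[2, -1, 1], [-1, 2, -1], [1, -1, 2]]
I3 = [[1, 0, 0], [0, 1, 0], [0, 0, 1]]
A2 = matmul(A, A)
A3 = matmul(A2, A)

# Cayley-Hamilton: A^3 - 6A^2 + 9A - 4I = O
CH = [[A3[i][j] - 6 * A2[i][j] + 9 * A[i][j] - 4 * I3[i][j]
       for j in range(3)] for i in range(3)]

# A^{-1} = (1/4)(A^2 - 6A + 9I); its product with A must be I
Ainv = [[Fraction(A2[i][j] - 6 * A[i][j] + 9 * I3[i][j], 4)
         for j in range(3)] for i in range(3)]
product = matmul(A, Ainv)
```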
Example 11. Verify the Cayley-Hamilton theorem for A = [2 1 1; 0 1 0; 1 1 2] and then find A⁻¹.
Solution. The characteristic equation of A is P (λ) = |A − λI| = 0, that is
|2 − λ 1 1; 0 1 − λ 0; 1 1 2 − λ| = 0,
which expands to
λ³ − 5λ² + 7λ − 3 = 0. (11)
By the Cayley-Hamilton theorem,
A³ − 5A² + 7A − 3I = O. (12)
Now, multiplying (12) by A⁻¹ and solving,
A⁻¹ = (1/3)(A² − 5A + 7I) = (1/3) [2 −1 −1; 0 3 0; −1 −1 2].
Diagonalization
A square matrix whose nondiagonal entries are zero is called a diagonal matrix. For instance, [2 0 0; 0 3 0; 0 0 −1] and [0 0 0; 0 0 0; 0 0 1] are diagonal matrices.
A square matrix A is said to be diagonalizable if there exists a nonsingular
matrix P such that P −1 AP is a diagonal matrix D. When such P exists, we
say that P diagonalizes A. The matrix P is called a modal matrix. The procedure of reducing a given square matrix to a diagonal matrix D through a modal matrix P is called diagonalization.
Theorem 2. A square matrix A of order n is diagonalizable if and only if
it has n linearly independent eigenvectors.
Suppose that λ1, λ2, ..., λn are the eigenvalues of A, and u1, u2, ..., un are the corresponding linearly independent eigenvectors of A. Group these into the modal matrix P = [u1 u2 ... un]. Then
P⁻¹AP = diag(λ1, λ2, ..., λn) = D.
(a) If the eigenvalues are distinct, the corresponding eigenvectors are automatically linearly independent, and they directly produce the modal matrix P.
(b) If the eigenvalues are not distinct, we have to find an appropriate set of n linearly independent eigenvectors to produce the modal matrix P.
Example 12. Examine whether A = [2 1 2; 0 2 −1; 0 0 2] is diagonalizable.
Solution. Since A is an upper triangular matrix, its diagonal entries are its eigenvalues, namely λ = 2, 2, 2. Thus the algebraic multiplicity of λ = 2 is 3. We know that the eigenvectors corresponding to an eigenvalue λ are the nonzero solutions of the system (A − λI)u = 0. Here (A − 2I)u = 0 gives
u2 + 2u3 = 0, −u3 = 0,
so u3 = 0, u2 = 0, and the eigenvectors are a1 (1, 0, 0)^T, a1 ≠ 0. The geometric multiplicity of λ = 2 is therefore 1 < 3, so A does not have three linearly independent eigenvectors, and hence A is not diagonalizable.
To find the modal matrix P which diagonalizes A, we should find the eigenvectors of A. The eigenvectors corresponding to λ = 3 are a1 (−1, 1, 2)^T, the eigenvectors corresponding to λ = 2 are a2 (−2, 1, 2)^T, and the eigenvectors corresponding to λ = 1 are a3 (0, 1, −1)^T, where a1, a2 and a3 are nonzero. Now, we write
e1 = (−1, 1, 2)^T, e2 = (−2, 1, 2)^T, e3 = (0, 1, −1)^T and P = [e1 e2 e3].
The eigenvectors corresponding to λ = 3 are a1 (1, 1, 1)^T, the eigenvectors corresponding to λ = 1 are a2 (−1, 1, 1)^T, and the eigenvectors corresponding to λ = −2 are a3 (11, 1, −4)^T, where a1, a2 and a3 are nonzero. The modal matrix, which diagonalizes A, is
P = [1 −1 11; 1 1 1; 1 1 −4].
Finally, P⁻¹AP = [3 0 0; 0 1 0; 0 0 −2] = diag(3, 1, −2) = D.
Example 15. Diagonalize A = [5 −4 4; 12 −11 12; 4 −4 5] through an appropriate nonsingular matrix P.
Solution. The eigenvalues of A are λ = 1, 1, −3.
The eigenvectors corresponding to λ = 1 are a1 (1, 0, −1)^T + a2 (0, 1, 1)^T, where a1, a2 are not both zero, and the eigenvectors corresponding to λ = −3 are a3 (1, 3, 1)^T, where a3 ≠ 0. Let
P = [1 0 1; 0 1 3; −1 1 1].
Note that |P | = −1, and hence the column vectors of P are linearly
indepen-
1 0 0
dent. Hence P serves as a modal matrix. Hence P −1 AP = 0 1 0 =
0 0 −3
D.
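Instead of computing P⁻¹, the identity P⁻¹AP = D can be checked in the equivalent form AP = PD. A Python sketch with the matrices of Example 15 (illustrative, not from the notes):

```python
# Illustrative sketch: confirm that P diagonalizes A in Example 15 via
# A P = P D, which avoids computing P^{-1} explicitly.

def matmul(X, Y):
    return [[sum(X[i][k] * Y[k][j] for k in range(3)) for j in range(3)]
            for i in range(3)]

A = [[5, -4, 4], [12, -11, 12], [4, -4, 5]]
P = [[1, 0, 1], [0, 1, 3], [-1, 1, 1]]
D = [[1, 0, 0], [0, 1, 0], [0, 0, -3]]

ok = matmul(A, P) == matmul(P, D)
```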
Example 16. Show that A = [1 −6 −4; 0 4 2; 0 −6 −3] is diagonalizable, and find an appropriate nonsingular matrix P that diagonalizes A.
Solution. The eigenvalues of A are λ = 1, 1, 0. For λ = 1, the system (A − I)u = 0 reduces to the single equation 3u2 + 2u3 = 0, so the eigenvectors are a1 (1, 0, 0)^T + a2 (0, 2, −3)^T, where a1, a2 are not both zero. For λ = 0, the system Au = 0 gives u1 = −2u2 and u3 = −2u2, so the eigenvectors are a3 (−2, 1, −2)^T, where a3 ≠ 0. Let
P = [1 0 −2; 0 2 1; 0 −3 −2].
Note that |P| = −1 ≠ 0, and hence the column vectors of P are linearly independent. Hence P serves as a modal matrix, and P⁻¹AP = [1 0 0; 0 1 0; 0 0 0] = D.
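The same AP = PD check applies here. This Python sketch (illustrative) uses the eigenvectors (1, 0, 0)^T, (0, 2, −3)^T for λ = 1 and (−2, 1, −2)^T for λ = 0, which can be read off from (A − I)u = 0 and Au = 0 respectively:

```python
# Illustrative sketch: A P = P D check for Example 16.

def matmul(X, Y):
    return [[sum(X[i][k] * Y[k][j] for k in range(3)) for j in range(3)]
            for i in range(3)]

A = [[1, -6, -4], [0, 4, 2], [0, -6, -3]]
P = [[1, 0, -2], [0, 2, 1], [0, -3, -2]]
D = [[1, 0, 0], [0, 1, 0], [0, 0, 0]]

ok = matmul(A, P) == matmul(P, D)
```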
Example 17. Show that A = [2 0 2; −1 3 1; 1 −1 3] is not diagonalizable.
Solution. The characteristic equation |A − λI| = 0 expands to (λ − 2)²(λ − 4) = 0, so the eigenvalues are λ = 2, 2, 4. For λ = 2, the system (A − 2I)u = 0 gives 2u3 = 0 and −u1 + u2 + u3 = 0, so the eigenvectors are a (1, 1, 0)^T, a ≠ 0. The geometric multiplicity of λ = 2 is 1, which is less than its algebraic multiplicity 2. Hence A has only two linearly independent eigenvectors, and is not diagonalizable.
Dividing a vector u by its norm ‖u‖, we get its normalized form e = u/‖u‖. A set of mutually orthogonal normalized vectors is called an orthonormal set.
Definition (Orthogonal Matrix). A real square matrix P is said to be orthogonal if P⁻¹ = P^T, that is, P P^T = I.
Distinct Eigenvalues
We recall the following result:
Theorem 5 (Orthogonal Eigenvectors). Let A be an n × n symmetric matrix. Then the eigenvectors corresponding to distinct eigenvalues are mutually orthogonal.
Let A be an n × n real symmetric matrix with distinct eigenvalues λ1 , λ2 , ...,
λn . Then, by the above theorem, the corresponding eigenvectors u1 , u2 , ...,
un are mutually orthogonal. Normalize these vectors as e1 , e2 , ..., en . Then
P = [e1 e2 ... en ] is an orthogonal modal matrix. The diagonal form of
A is given by
P⁻¹AP = diag(λ1, λ2, ..., λn) = D.
Example 18. Diagonalize A = [1 −1 0; −1 2 −1; 0 −1 1] using an appropriate orthogonal matrix P.
Solution. The eigenvalues of A are λ = 3, 1, 0.
Eigenvectors corresponding to λ = 3 are a1 (1, −2, 1)^T, where a1 ≠ 0. For a1 = 1, we write u1 = (1, −2, 1)^T. Eigenvectors corresponding to λ = 1 are a2 (1, 0, −1)^T, where a2 ≠ 0. For a2 = 1, we write u2 = (1, 0, −1)^T. Eigenvectors corresponding to λ = 0 are a3 (1, 1, 1)^T, where a3 ≠ 0. For a3 = 1, we write u3 = (1, 1, 1)^T.
Since the eigenvalues of the symmetric matrix A are distinct, the eigenvectors u1, u2, u3 are mutually orthogonal. Normalizing them, we take e1 = u1/√6, e2 = u2/√2, e3 = u3/√3 and P = [e1 e2 e3]. Then
P⁻¹AP = diag(3, 1, 0) = D.
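The orthogonal diagonalization of Example 18 can be verified numerically: normalize the eigenvectors, form P, and check P^T P = I and P^T A P = diag(3, 1, 0). A Python sketch in floating point, hence the tolerance (illustrative helper names):

```python
# Illustrative sketch: orthogonal diagonalization check for Example 18.
import math

def matmul(X, Y):
    return [[sum(X[i][k] * Y[k][j] for k in range(3)) for j in range(3)]
            for i in range(3)]

def transpose(M):
    return [list(row) for row in zip(*M)]

def close(M, N, tol=1e-12):
    return all(abs(M[i][j] - N[i][j]) < tol for i in range(3) for j in range(3))

A = [[1, -1, 0], [-1, 2, -1], [0, -1, 1]]
vecs = [[1, -2, 1], [1, 0, -1], [1, 1, 1]]   # eigenvectors for lam = 3, 1, 0
units = [[x / math.sqrt(sum(c * c for c in u)) for x in u] for u in vecs]
P = transpose(units)                          # columns are e1, e2, e3
Pt = transpose(P)

ok_orth = close(matmul(Pt, P), [[1, 0, 0], [0, 1, 0], [0, 0, 1]])
ok_diag = close(matmul(matmul(Pt, A), P), [[3, 0, 0], [0, 1, 0], [0, 0, 0]])
```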
Example 19. Diagonalize A = [−2 2 3; 2 1 6; 3 6 6] using an appropriate orthogonal matrix P.
Solution. The eigenvalues of A are λ = 11, −3, −3.
Eigenvectors corresponding to λ = 11 are a1 (1, 2, 3)^T, where a1 ≠ 0. For a1 = 1, we write u1 = (1, 2, 3)^T.
Eigenvectors corresponding to λ = −3 are a2 (−3, 0, 1)^T + a3 (−2, 1, 0)^T, where a2, a3 are not both zero. We write u2 = (−3, 0, 1)^T and u3 = (−2, 1, 0)^T.
The eigenvectors u1, u2, u3 are linearly independent, with u1 • u2 = 0 and u1 • u3 = 0. But u2 • u3 = 6 ≠ 0, so u1, u2, u3 are not pairwise orthogonal. Replacing u3 by
v3 = u3 − ((u2 • u3)/(u2 • u2)) u2 = (−2, 1, 0)^T − (3/5)(−3, 0, 1)^T = (−1/5, 1, −3/5)^T,
or the proportional vector (−1, 5, −3)^T, we obtain a vector which is still an eigenvector for λ = −3 and is orthogonal to both u1 and u2. Normalizing, e1 = u1/√14, e2 = u2/√10, e3 = (−1, 5, −3)^T/√35, and P = [e1 e2 e3].
We see that P is an orthogonal matrix, and P⁻¹AP = [11 0 0; 0 −3 0; 0 0 −3].
Thus P is regarded as a modal matrix for diagonalizing the symmetric matrix A.
Example 20. Diagonalize A = [2 2 1; 2 5 2; 1 2 2] using an appropriate orthogonal matrix P.
Solution. The eigenvalues of A are λ = 7, 1, 1.
Eigenvectors corresponding to λ = 7 are a1 (1, 2, 1)^T, where a1 ≠ 0. For a1 = 1, we write u1 = (1, 2, 1)^T.
Eigenvectors corresponding to λ = 1 are a2 (1, 0, −1)^T + a3 (2, −1, 0)^T, where a2, a3 are not both zero. We write u2 = (1, 0, −1)^T and u3 = (2, −1, 0)^T.
The eigenvectors u1, u2, u3 are linearly independent, with u1 • u2 = 0 and u1 • u3 = 0. But u2 • u3 = 2 ≠ 0, so u1, u2, u3 are not pairwise orthogonal. Replacing u3 by
v3 = u3 − ((u2 • u3)/(u2 • u2)) u2 = (2, −1, 0)^T − (1, 0, −1)^T = (1, −1, 1)^T,
we obtain an eigenvector for λ = 1 which is orthogonal to both u1 and u2. Normalizing, e1 = u1/√6, e2 = u2/√2, e3 = v3/√3, and P = [e1 e2 e3].
We see that P is an orthogonal matrix, and P⁻¹AP = [7 0 0; 0 1 0; 0 0 1] = D. Thus P is regarded as a modal matrix for diagonalizing the symmetric matrix A.
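The Gram-Schmidt step used for repeated eigenvalues can be sketched in exact arithmetic with the standard-library Fraction type; here it orthogonalizes u3 against u2 inside the eigenspace E1 of Example 20 (illustrative, not part of the notes):

```python
# Illustrative sketch: Gram-Schmidt step inside the eigenspace E1 of Example 20.
from fractions import Fraction

def dot(x, y):
    return sum(a * b for a, b in zip(x, y))

u1 = [1, 2, 1]     # eigenvector for lam = 7
u2 = [1, 0, -1]    # two non-orthogonal eigenvectors for lam = 1
u3 = [2, -1, 0]

coef = Fraction(dot(u3, u2), dot(u2, u2))      # projection coefficient = 1
v3 = [a - coef * b for a, b in zip(u3, u2)]    # = [1, -1, 1]

# u1, u2, v3 are now pairwise orthogonal, and v3 still lies in E1.
ok = dot(u1, u2) == 0 and dot(u1, v3) == 0 and dot(u2, v3) == 0
```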