
**1.1 Eigenvalues and Eigenvectors**

Characteristic Polynomial and Characteristic Equation

Procedure. How to find the eigenvalues? A vector x is an eigenvector if x is nonzero and satisfies Ax = λx ⇒ (A − λI)x = 0 must have nontrivial solutions ⇒ (A − λI) is not invertible, by the theorem on properties of determinants ⇒ det(A − λI) = 0 ⇒ solve det(A − λI) = 0 for λ to find the eigenvalues.

Definition. P(λ) = det(A − λI) is called the characteristic polynomial. det(A − λI) = 0 is called the characteristic equation.

Proposition. A scalar λ is an e.v. of an n × n matrix A if and only if λ satisfies P(λ) = det(A − λI) = 0.

Example. Find the e.v. of

A =
[ 0   1 ]
[ −6  5 ]

Since

A − λI =
[ 0   1 ]   [ λ  0 ]   [ −λ   1     ]
[ −6  5 ] − [ 0  λ ] = [ −6   5 − λ ]

we have the characteristic equation det(A − λI) = −λ(5 − λ) + 6 = λ² − 5λ + 6 = (λ − 2)(λ − 3) = 0. So λ = 2, λ = 3 are eigenvalues of A.
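A small plain-Python check of this example (a sketch added here, not part of the original notes; the helper names are our own):

```python
# For A = [[0, 1], [-6, 5]] the characteristic polynomial is
# det(A - lam*I) = lam^2 - 5*lam + 6, with roots 2 and 3.

A = [[0, 1], [-6, 5]]

def char_poly_2x2(A, lam):
    """Evaluate det(A - lam*I) for a 2x2 matrix A."""
    a, b = A[0]
    c, d = A[1]
    return (a - lam) * (d - lam) - b * c

def mat_vec(A, x):
    """Multiply a 2x2 matrix by a vector."""
    return [A[0][0] * x[0] + A[0][1] * x[1],
            A[1][0] * x[0] + A[1][1] * x[1]]

# lambda = 2 and lambda = 3 are roots of the characteristic polynomial ...
assert char_poly_2x2(A, 2) == 0
assert char_poly_2x2(A, 3) == 0

# ... and each admits a nonzero eigenvector:
# for lambda = 2, x = (1, 2): A x = (2, 4) = 2 x
assert mat_vec(A, [1, 2]) == [2, 4]
# for lambda = 3, x = (1, 3): A x = (3, 9) = 3 x
assert mat_vec(A, [1, 3]) == [3, 9]
```

Both roots of the characteristic equation are confirmed to satisfy Ax = λx with a nonzero x.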

Theorem. Let A be an n × n matrix. Then A is invertible if and only if: a) λ = 0 is not an e.v. of A; or, equivalently, b) det A ≠ 0.

Proof. For b) we have discussed the proof in the section on determinants. For a): (⇒): Let A be invertible ⇒ det A ≠ 0 ⇒ det(A − 0I) ≠ 0 ⇒ λ = 0 is not an e.v. (⇐): Let 0 not be an e.v. of A ⇒ det(A − 0I) ≠ 0 ⇒ det A ≠ 0 ⇒ A is invertible.
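A quick sketch of this equivalence in plain Python (our own example matrices, not from the notes):

```python
# det(A - 0*I) = det(A), so lambda = 0 is an eigenvalue exactly when det A = 0.

def det2(A):
    """Determinant of a 2x2 matrix."""
    return A[0][0] * A[1][1] - A[0][1] * A[1][0]

singular = [[1, 2], [2, 4]]      # rows are proportional, det = 0
invertible = [[0, 1], [-6, 5]]   # det = 6

assert det2(singular) == 0       # 0 IS an eigenvalue -> not invertible
assert det2(invertible) == 6     # 0 is NOT an eigenvalue -> invertible

# an eigenvector for lambda = 0 of the singular matrix: x = (2, -1), A x = 0
assert [1 * 2 + 2 * (-1), 2 * 2 + 4 * (-1)] == [0, 0]
```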

Theorem. The eigenvalues of a triangular matrix are the entries of the main diagonal.


Proof. Recall that the determinant of a triangular matrix is the product of its main diagonal elements. Hence, if

A =
[ a11  a12  …  a1n ]
[ 0    a22  …  a2n ]
[ …    …    …  …   ]
[ 0    0    …  ann ]

then the characteristic equation is

det(A − λI) = det
[ a11 − λ  a12      …  a1n     ]
[ 0        a22 − λ  …  a2n     ]
[ …        …        …  …       ]
[ 0        0        …  ann − λ ]
= (a11 − λ)(a22 − λ) … (ann − λ) = 0

⇒ a11, a22, …, ann are the eigenvalues of A.

Example. Find the eigenvalues of

A =
[ 3  2  3  ]
[ 0  6  10 ]
[ 0  0  2  ]

Solution.

det(A − λI) = det
[ 3 − λ  2      3     ]
[ 0      6 − λ  10    ]
[ 0      0      2 − λ ]

Thus the characteristic equation is (3 − λ)(6 − λ)(2 − λ) = 0 ⇒ the eigenvalues are 3, 6, 2.

Example. Suppose λ is an e.v. of A. Determine an e.v. of A² and of A³. What is an e.v. of Aⁿ?

Solution. Since λ is an e.v. of A ⇒ ∃ a nonzero vector x such that Ax = λx ⇒ AAx = Aλx = λAx = λ²x. Therefore λ² is an e.v. of A². Analogously for A³: we have Ax = λx and A²x = λ²x ⇒ AA²x = A³x = Aλ²x = λ²Ax = λ³x. Thus λ³ is an e.v. of A³. In general, λⁿ is an e.v. of Aⁿ.

**1.2 Similar Matrices**

Definition. An n × n matrix B is called similar to a matrix A if there exists an invertible matrix P such that B = P⁻¹AP.

Theorem. If n × n matrices A and B are similar, then they have the same characteristic polynomial and hence the same eigenvalues.

Proof. If B = P⁻¹AP, then B − λI = P⁻¹AP − λP⁻¹P = P⁻¹(AP − λP) = P⁻¹(A − λI)P. Using the multiplicative property of the determinant, we have det(B − λI) = det(P⁻¹(A − λI)P) = det P⁻¹ det(A − λI) det P = det(A − λI). Hence similar matrices A and B have the same e.v.
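The claim that λⁿ is an e.v. of Aⁿ can be checked numerically; here is a plain-Python sketch (not part of the original notes) reusing A = [[0, 1], [−6, 5]] with eigenvalue λ = 2 and eigenvector x = (1, 2) from the earlier example:

```python
# If A x = lambda x, then applying A repeatedly gives A^n x = lambda^n x.

A = [[0, 1], [-6, 5]]
x = [1, 2]          # eigenvector for lambda = 2

def mat_vec(A, v):
    """Multiply a 2x2 matrix by a vector."""
    return [A[0][0] * v[0] + A[0][1] * v[1],
            A[1][0] * v[0] + A[1][1] * v[1]]

# apply A three times: the result should be 2^3 * x = (8, 16)
v = x
for _ in range(3):
    v = mat_vec(A, v)
assert v == [8, 16]
```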

Theorem (Cayley-Hamilton). (Without proof; try to prove it as an exercise.) If P(λ) = det(A − λI) = λⁿ + c_{n−1}λⁿ⁻¹ + … + c1λ + c0 = 0, then P(A) = Aⁿ + c_{n−1}Aⁿ⁻¹ + … + c1A + c0I = 0.

**1.3 Algebraic and Geometric Multiplicity of Eigenvalues**

Definition. The algebraic multiplicity of an eigenvalue is its multiplicity as a root of the characteristic equation (mult_a(λ)).

Example. Find the characteristic polynomial of

A =
[ 2  0  0  0  ]
[ 5  3  0  0  ]
[ 9  1  3  0  ]
[ 1  2  5  −1 ]

and find the e.v. with their algebraic multiplicities.

Solution. The characteristic equation is

det(A − λI) = det
[ 2 − λ  0      0      0      ]
[ 5      3 − λ  0      0      ]
[ 9      1      3 − λ  0      ]
[ 1      2      5      −1 − λ ]
= (2 − λ)(3 − λ)(3 − λ)(−1 − λ) = 0

Thus the e.v. are λ1 = 2, λ2,3 = 3 and λ4 = −1. The algebraic multiplicity of λ = 3 is 2, i.e. mult_a(3) = 2.
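The Cayley-Hamilton theorem can be verified directly for a 2 × 2 matrix, where the characteristic polynomial is λ² − Tr(A)λ + det(A). A plain-Python sketch (our own helper names, not from the notes):

```python
# Cayley-Hamilton for 2x2: P(A) = A^2 - Tr(A)*A + det(A)*I must be the
# zero matrix. We check it for A = [[0, 1], [-6, 5]].

A = [[0, 1], [-6, 5]]
tr = A[0][0] + A[1][1]                       # trace = 5
det = A[0][0] * A[1][1] - A[0][1] * A[1][0]  # determinant = 6

def mat_mul(X, Y):
    """Multiply two 2x2 matrices."""
    return [[sum(X[i][k] * Y[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

A2 = mat_mul(A, A)
P_of_A = [[A2[i][j] - tr * A[i][j] + det * (1 if i == j else 0)
           for j in range(2)] for i in range(2)]
assert P_of_A == [[0, 0], [0, 0]]   # A satisfies its own characteristic equation
```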

Definition. The eigenspace Eλ consists of the zero vector and all eigenvectors corresponding to an e.v. λ.

Definition. The geometric multiplicity of an e.v. λ is the dimension of the corresponding eigenspace Eλ (mult_g(λ)). Recall that the dimension of a vector space is equal to the maximal number of linearly independent vectors it contains.

Example. Find the e.v. and their algebraic and geometric multiplicity for

A =
[ 0  1  1 ]
[ 1  0  1 ]
[ 1  1  0 ]

Solution. The characteristic equation is

det(A − λI) = det
[ −λ  1   1  ]
[ 1   −λ  1  ]
[ 1   1   −λ ]
= −λ³ + 3λ + 2 = −(λ − 2)(λ + 1)² = 0

So the e.v. are λ1 = 2, λ2,3 = −1. Solving the equation (A − λiI)x = 0 for i = 1, 2, 3 we find that

Eλ=2 = Span{ (1, 1, 1)ᵀ }

Eλ=−1 = Span{ (−1, 0, 1)ᵀ, (0, −1, 1)ᵀ }

Thus mult_a(2) = mult_g(2) = 1 and mult_a(−1) = mult_g(−1) = 2.

Example. Find the e.v. and their algebraic and geometric multiplicity for

A =
[ 0  0  1  ]
[ 1  0  −3 ]
[ 0  1  3  ]

Solution. The characteristic equation is

det(A − λI) = det
[ −λ  0   1     ]
[ 1   −λ  −3    ]
[ 0   1   3 − λ ]
= −λ³ + 3λ² − 3λ + 1 = −(λ − 1)³ = 0

So the e.v. are λ1,2,3 = 1. Solving the equation (A − λiI)x = 0 we find that

Eλ=1 = Span{ (1, −2, 1)ᵀ }

Thus mult_a(1) = 3 and mult_g(1) = 1.
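A quick plain-Python check of the last example (a sketch, not from the notes): the vector v = (1, −2, 1) should satisfy Av = 1 · v.

```python
# Verify that v spans the eigenspace E_{lambda=1} of A, i.e. A v = v.

A = [[0, 0, 1],
     [1, 0, -3],
     [0, 1, 3]]
v = [1, -2, 1]

Av = [sum(A[i][j] * v[j] for j in range(3)) for i in range(3)]
assert Av == v   # v is an eigenvector for lambda = 1
```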

Multiplicity Theorem. For any eigenvalue λi, i = 1, 2, …, n, of an n × n matrix A it holds that mult_g(λi) ≤ mult_a(λi).

Proof. Let λi be an eigenvalue of A. Let Bλi = {v1, …, vm} be a basis of the corresponding eigenspace Eλi, where mult_g(λi) = m. Note that each vj in Bλi is an eigenvector of A corresponding to λi; thus Avj = λivj, j = 1, 2, …, m. Note also that the eigenspace Eλi is only a subspace of the whole n-dimensional space (Eλi ⊂ ℝⁿ or Eλi ⊂ ℂⁿ). Extend Bλi to form a basis B = {v1, …, vm, vm+1, …, vn}. B is now a basis of the whole n-dimensional space (ℝⁿ or ℂⁿ), while Bλi is only a basis of the eigenspace corresponding to the e.v. λi.

Let Q = (v1 | v2 | … | vn) be the matrix whose columns are the vectors v1, …, vm, vm+1, …, vn of the basis B. Since these vectors are linearly independent, the matrix Q is invertible. Notice that vj = Qej, where ej = (0, 0, …, 1, …, 0)ᵀ has a 1 in the j-th position; such a vector ej is called the j-th ort. Now, using the definition of e.v.:

Q⁻¹Avj = Q⁻¹λivj = λiQ⁻¹vj = λiej,  j = 1, …, m.

The matrix Ã = Q⁻¹AQ is similar to the matrix A. Thus

Ã = Q⁻¹AQ = Q⁻¹A(v1 | … | vn) = (λie1 | λie2 | … | λiem | Q⁻¹Avm+1 | … | Q⁻¹Avn) =
[ λiIm  C ]
[ 0     D ]

where Im is the m × m identity matrix.

1 1 0 1 e. Thus multa (λ) = 4. its the only e. Solution.v.Hence using the property of determinant for the block diagonal matrixes (see the assignment 2) ˜ PA (λ) = PA ˜ = det(A − λIn ) = det((λ − λi )Im )det(D − λIn−m ) = (λ − λi )m PD (λ).2.v. To ﬁnd the eigenspace and geometric multiplicity we need to solve the equation (A − 1I )x = 0 and ﬁnd basis 10 . Here In and In−m are n × n.3. ⇒ multg (λ) ≤multa (λ). is λ1. Since the matrix A is upper-triangular. Thus a characteristic polynomial PA (λ) has a root of λi of at least degree m.v.and (n − m) × (n − m)identity matrixes respectively. and eigenspace of A = 0 0 0 0 Deﬁne the algebraic and geometric multiplicities of 0 0 0 0 . 1 0 0 1 Example. Find e.4 = 1. where m = multg (λi ).

for the null space of (A − 1I). We have

A − 1I =
[ 0  0  0  0 ]
[ 0  0  0  0 ]
[ 0  0  0  1 ]
[ 0  0  0  0 ]

Hence the solution is x4 = 0 while x1, x2 and x3 are arbitrary numbers. Thus we can choose 3 linearly independent vectors, for example

{ (1, 0, 0, 0)ᵀ, (0, 1, 0, 0)ᵀ, (0, 0, 1, 0)ᵀ }

Therefore, mult_g(1) = 3. Notice that the eigenspace E1 is a 3-dimensional hyperplane in ℝ⁴. So λ = 1 with mult_a(1) = 4 and mult_g(1) = 3.

Example. Suggest a 4 × 4 matrix with e.v. λ = 1, mult_a(1) = 4 and other possible values of mult_g(1).

Solution. From the Multiplicity Theorem we have the following remaining options: mult_g(λ) = 1, mult_g(λ) = 2 and mult_g(λ) = 4. From the previous example it is easy to see that if

A = I =
[ 1  0  0  0 ]
[ 0  1  0  0 ]
[ 0  0  1  0 ]
[ 0  0  0  1 ]

then mult_a(1) = 4 and A − 1I = 0n. Hence in the solution of (A − 1I)x = 0 all of x1, x2, x3 and x4 are arbitrary numbers. Thus we can choose 4 linearly independent vectors, for example

{ (1, 0, 0, 0)ᵀ, (0, 1, 0, 0)ᵀ, (0, 0, 1, 0)ᵀ, (0, 0, 0, 1)ᵀ }

Therefore, λ = 1 with mult_a(1) = 4 and mult_g(1) = 4. Notice that in this case the eigenspace E1 coincides with ℝ⁴.

To get mult_g(λ) = 2, it is possible to think backward and choose a matrix A such that the null space of (A − 1I) has only 2 linearly independent vectors. For example,

A − 1I =
[ 0  0  1  0 ]
[ 0  0  0  1 ]
[ 0  0  0  0 ]
[ 0  0  0  0 ]
, hence A =
[ 1  0  1  0 ]
[ 0  1  0  1 ]
[ 0  0  1  0 ]
[ 0  0  0  1 ]

This would imply that x3 = 0 and x4 = 0 while x1 and x2 are arbitrary, so mult_g(1) = 2. Notice that replacing the two unit entries above the diagonal with arbitrary nonzero numbers α and β also works.

Exercise. Find a 4 × 4 matrix with e.v. λ = 1, mult_a(1) = 4 and mult_g(1) = 1.

**1.4 Trace of a Matrix**

Definition. The trace of an n × n matrix A is defined to be Tr(A) = Sp(A) = Σ_{i=1..n} aii, i.e. the sum of the diagonal elements. (Tr is English; Sp is German, from "Spur".)

Properties.

• Tr(A) = Tr(Aᵀ)
• Tr(αA) = αTr(A)
• Tr(A + B) = Tr(A) + Tr(B)
• Tr(AB) = Tr(BA)

Proof as an exercise.

Theorem. Let A be an n × n matrix and λ1, λ2, …, λn be its e.v. Then Tr(A) = Σ_{i=1..n} λi and det(A) = Π_{i=1..n} λi.

Proof. Let us for simplicity assume that A is similar to a diagonal matrix D = diag{λ1, λ2, …, λn}. Hence A = P⁻¹DP. From the properties of the trace, Tr(A) = Tr(P⁻¹DP) = Tr(PP⁻¹D) = Tr(D) = Σ_{i=1..n} λi. From the properties of determinants, det(A) = det(P⁻¹DP) = det(P) det(D) det(P⁻¹) = det(D) = Π_{i=1..n} λi.
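A plain-Python sketch of the theorem (added here, not part of the notes), using A = [[0, 1], [−6, 5]] whose eigenvalues 2 and 3 were computed earlier:

```python
# Tr(A) should equal the sum of the eigenvalues, det(A) their product.

A = [[0, 1], [-6, 5]]
eigenvalues = [2, 3]

trace = A[0][0] + A[1][1]
det = A[0][0] * A[1][1] - A[0][1] * A[1][0]

assert trace == sum(eigenvalues)                 # 5 = 2 + 3
assert det == eigenvalues[0] * eigenvalues[1]    # 6 = 2 * 3
```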

Example. Find the eigenvalues of

A =
[ a  a ]
[ a  a ]

without calculation.

Solution. Notice that det(A) = λ1λ2 = 0 and Tr(A) = λ1 + λ2 = 2a ⇒ λ1 = 0 and λ2 = 2a.

**1.5 Diagonalization**

Definition. A matrix is diagonal if all of its nonzero entries are on the main diagonal.

Example 1. The matrices

[ 2  0 ]    [ 1  0  0 ]    [ 1  0  0 ]
[ 0  3 ],   [ 0  1  0 ],   [ 0  2  0 ]
            [ 0  0  1 ]    [ 0  0  3 ]

are diagonal. If a matrix is diagonal, it is trivial to compute D², D³, det D, etc. For example, for

D =
[ 2  0 ]
[ 0  3 ]

we have

D² =
[ 4  0 ]   [ 2²  0  ]
[ 0  9 ] = [ 0   3² ]
, D³ =
[ 2³  0  ]
[ 0   3³ ]

and, analogously, in general

Dᵏ =
[ 2ᵏ  0  ]
[ 0   3ᵏ ]
, ∀k ∈ ℕ.

Example 2. Let A = PDP⁻¹, where D is diagonal. Find a formula for Aᵏ.

Solution. A² = (PDP⁻¹)(PDP⁻¹) = PD²P⁻¹. By induction, if the formula holds for (k − 1), then Aᵏ = (PDP⁻¹)(PDᵏ⁻¹P⁻¹) = PDᵏP⁻¹, ∀k ∈ ℕ.

Definition. An n × n matrix is said to be diagonalizable if A is similar to a diagonal matrix, i.e. ∃ an invertible P such that A = PDP⁻¹, where D is diagonal.

Theorem. An n × n matrix A is diagonalizable iff A has n linearly independent eigenvectors. In fact, A = PDP⁻¹, with D a diagonal matrix, iff the columns of P are n linearly independent eigenvectors of A. In this case, the diagonal entries of D are the e.v. of A corresponding to the eigenvector columns of P.

Proof. Notice that if P is an n × n matrix with columns v1, v2, …, vn, and if D is any diagonal matrix with diagonal entries λ1, λ2, …, λn, then

AP = A(v1 | v2 | … | vn) = (Av1 | Av2 | … | Avn),

while

PD = P diag{λ1, …, λn} = (λ1v1 | λ2v2 | … | λnvn).

(⇒): Given A = PDP⁻¹ ⇒ AP = PD ⇒ (Av1 | Av2 | … | Avn) = (λ1v1 | λ2v2 | … | λnvn) ⇒ Av1 = λ1v1, Av2 = λ2v2, …, Avn = λnvn. Since P is invertible, v1, v2, …, vn are linearly independent nonzero vectors. Hence by definition λ1, λ2, …, λn are e.v. of A and v1, v2, …, vn are corresponding eigenvectors of A.

(⇐): Given n linearly independent eigenvectors v1, v2, …, vn, use them to construct P, and use the n eigenvalues λ1, λ2, …, λn (not necessarily distinct) to construct the diagonal D. Then by the definition of e.v., Av1 = λ1v1, Av2 = λ2v2, …, Avn = λnvn ⇒ (Av1 | Av2 | … | Avn) = (λ1v1 | λ2v2 | … | λnvn) ⇒ AP = PD. Since P is invertible (all v1, v2, …, vn are linearly independent) ⇒ A = PDP⁻¹.

Definition. n linearly independent eigenvectors v1, v2, …, vn form an eigenvector basis in ℝⁿ.

Example. Diagonalize

A =
[ 2   0  0 ]
[ 1   2  1 ]
[ −1  0  1 ]

if possible.

1) Find the eigenvalues of A:

det(A − λI) = det
[ 2 − λ  0      0     ]
[ 1      2 − λ  1     ]
[ −1     0      1 − λ ]
= (2 − λ)²(1 − λ) = 0.

Thus λ1 = 1, λ2,3 = 2 are e.v.

2) Find three linearly independent eigenvectors, if possible. By solving (A − λiI)x = 0, i = 1, 2, 3, we get

v1 = (0, −1, 1)ᵀ

corresponding to λ1 = 1, and

v2 = (0, 1, 0)ᵀ, v3 = (−1, 0, 1)ᵀ

corresponding to λ2,3 = 2. Vectors v1, v2, v3 are clearly linearly independent ⇒ they form a basis.

3) Construct P from v1, v2, v3:

P =
[ 0   0  −1 ]
[ −1  1  0  ]
[ 1   0  1  ]

4) Construct D from the corresponding eigenvalues λ1, λ2, λ3:

D =
[ 1  0  0 ]
[ 0  2  0 ]
[ 0  0  2 ]

5) Check your results by verifying that AP = PD:

AP =
[ 2   0  0 ] [ 0   0  −1 ]   [ 0   0  −2 ]
[ 1   2  1 ] [ −1  1  0  ] = [ −1  2  0  ]
[ −1  0  1 ] [ 1   0  1  ]   [ 1   0  2  ]

PD =
[ 0   0  −1 ] [ 1  0  0 ]   [ 0   0  −2 ]
[ −1  1  0  ] [ 0  2  0 ] = [ −1  2  0  ]
[ 1   0  1  ] [ 0  0  2 ]   [ 1   0  2  ]

Example. Diagonalize

A =
[ 2  4  6 ]
[ 0  2  2 ]
[ 0  0  4 ]

if possible.

Solution. 1) Since A is triangular, it is clear that λ1 = 4, λ2,3 = 2 are the e.v.

2) Solve (A − λiI)x = 0, i = 1, 2, 3. An eigenvector for λ1 = 4 is v = (5, 1, 1)ᵀ. An eigenvector for λ2,3 = 2 is v = (1, 0, 0)ᵀ. Thus the dimension of the eigenspace corresponding to λ2,3 = 2 is 1 (mult_g(2) = 1 while mult_a(2) = 2) ⇒ P would be singular (there are not enough eigenvectors to form a basis for ℝ³) ⇒ A is not diagonalizable.

Example. Diagonalize

A =
[ 5  0  4  ]
[ 0  3  −1 ]
[ 0  0  −2 ]

if possible, and find the eigenbasis.

Solution. Since it is a triangular matrix, the e.v. are λ1 = 5, λ2 = 3, λ3 = −2. For each of λ1, λ2, λ3 we can find a corresponding eigenvector:

v1 = (1, 0, 0)ᵀ, v2 = (0, 1, 0)ᵀ, v3 = (−4/7, 1/5, 1)ᵀ

Clearly v1, v2, v3 are linearly independent ⇒ they form an eigenvector basis in ℝ³ ⇒ P = (v1 | v2 | v3) is invertible ⇒ A is diagonalizable and D = diag(5, 3, −2).

Why are

A =
[ 2   0  0 ]
[ 1   2  1 ]
[ −1  0  1 ]
(λ1 = 1, λ2,3 = 2) and A =
[ 5  0  4  ]
[ 0  3  −1 ]
[ 0  0  −2 ]
(λ1 = 5, λ2 = 3, λ3 = −2)

diagonalizable?

Theorem 1. If v1, v2, …, vr are eigenvectors corresponding to distinct eigenvalues λ1, λ2, …, λr of an n × n matrix A, then v1, v2, …, vr are linearly independent.

Proof. By contradiction: suppose v1, v2, …, vr are linearly dependent. Let (p − 1) be the last index such that vp is a linear combination of the preceding linearly independent vectors (p ≤ n). Then there exist c1, c2, …, cp−1, with ci ≠ 0 for some i, such that

c1v1 + c2v2 + … + cp−1vp−1 = vp. (∗)

Multiply both sides of this equation by A ⇒ c1Av1 + c2Av2 + … + cp−1Avp−1 = Avp. Using the fact that Avi = λivi, we get

c1λ1v1 + c2λ2v2 + … + cp−1λp−1vp−1 = λpvp. (∗∗)

Multiply (∗) by λp and subtract the result from (∗∗):

c1λ1v1 − c1λpv1 + c2λ2v2 − c2λpv2 + … + cp−1λp−1vp−1 − cp−1λpvp−1 = λpvp − λpvp = 0
⇒ c1(λ1 − λp)v1 + c2(λ2 − λp)v2 + … + cp−1(λp−1 − λp)vp−1 = 0.

Notice that by construction v1, v2, …, vp−1 are linearly independent ⇒ by the definition of linear independence

c1(λ1 − λp) = c2(λ2 − λp) = … = cp−1(λp−1 − λp) = 0.

But ci ≠ 0 for some i ⇒ λi − λp = 0 for that i ⇒ contradiction, since by the statement of the Theorem the eigenvalues λ1, λ2, …, λr are distinct.

Corollary. If an n × n matrix A has n distinct eigenvalues λ1, λ2, …, λn ⇒ the eigenvectors v1, v2, …, vn are linearly independent and form an eigenvector basis ⇒ P = (v1 | v2 | … | vn) is invertible and A is diagonalizable.

Diagonalization Theorem 2. An n × n matrix A is diagonalizable iff mult_g(λi) = mult_a(λi), ∀i = 1, 2, …, n, i.e. iff the eigenvectors of A form a basis in ℝⁿ.

Proof. (⇐): Let mult_g(λi) = mult_a(λi), ∀i = 1, 2, …, n ⇒ each eigenspace of dimension ni has ni linearly independent vectors, and

ni = mult_a(λi) with mult_a(λ1) + mult_a(λ2) + … + mult_a(λn) = n ⇒ n1 + n2 + … + nn = n ⇒ there are n linearly independent eigenvectors, which form an eigenvector basis in ℝⁿ ⇒ A is diagonalizable by the Theorem (∗).

(⇒): Let A be diagonalizable. Then by the Theorem (∗) there exist n linearly independent eigenvectors v1, v2, …, vn, which form the matrix P such that A = P⁻¹DP with D diagonal. If all e.v. λ1, λ2, …, λn have algebraic multiplicity 1 (mult_a(λi) = 1, i = 1, 2, …, n), then clearly mult_a(λi) = mult_g(λi) = 1, i = 1, 2, …, n, since the dimension of any nontrivial vector space cannot be less than 1 (the trivial case).

(General case.) Assume for simplicity that there is only one eigenvalue λ*k such that mult_a(λ*k) = p, while all other e.v. λi, i ≠ k, have single algebraic multiplicity (mult_a(λi) = 1). Let B be the set {v1, v2, …, vn}. Since eigenspaces corresponding to different eigenvalues intersect only in the zero vector, an eigenvector vj corresponding to the e.v. λj does not belong to the eigenspace Eλi corresponding to an e.v. λi ≠ λj (verify it at home) ⇒ the eigenvectors v1, v2, …, vn−p corresponding to the λi with i ≠ k do not belong to Eλ*k ⇒ the vectors of the set B \ {v1, v2, …, vn−p} belong to the eigenspace Eλ*k. These vectors are linearly independent by the statement of the Theorem, and

dim B \ {v1, v2, …, vn−p} = dim Eλ*k = p = mult_a(λ*k)

⇒ mult_g(λ*k) = mult_a(λ*k).

Example. Why is

A =
[ 2  0  0 ]
[ 2  6  0 ]
[ 3  2  1 ]

diagonalizable?

Solution. A has 3 distinct e.v.: λ1 = 2, λ2 = 6, λ3 = 1 ⇒ A is diagonalizable by the Corollary.

Example. Diagonalize, if possible,

A =
[ −2  0    0  0 ]
[ 0   −2   0  0 ]
[ 24  −12  2  0 ]
[ 0   0    0  2 ]

Solution. A has two e.v. of mult_a = 2: λ1,2 = −2 and λ3,4 = 2. Solve (A − λiI)x = 0 and find the eigenvectors.

Basis for λ1,2 = −2: v1 = (1, 0, −6, 0)ᵀ, v2 = (0, 1, 3, 0)ᵀ. Thus mult_g(−2) = 2.

Basis for λ3,4 = 2: v3 = (0, 0, 1, 0)ᵀ, v4 = (0, 0, 0, 1)ᵀ. Thus mult_g(2) = 2.

Hence, since mult_a(−2) = mult_g(−2) = 2 and mult_a(2) = mult_g(2) = 2, the matrix A is diagonalizable by the Diagonalization Theorem.

