
Linear Algebra
Exercise set 4 - SOLUTIONS*

Bachelor in Physics, 1st year
University of Bucharest

Eigenvectors/eigenvalues

1. Let R : C³ → C³ be the linear map R(z₁, z₂, z₃) = (z₁, −z₃, z₂). Find its eigenvalues and
the corresponding eigenvectors. Is this linear map diagonalizable? Does the conclusion remain
true for R seen as a map from R³ into R³? Same questions for σx : C² → C², σx(z₁, z₂) = (z₂, z₁).
Solution. With respect to the canonical basis, the matrix of R, seen as a map between
3-dimensional complex vector spaces, is

$$R = \begin{pmatrix} 1 & 0 & 0 \\ 0 & 0 & -1 \\ 0 & 1 & 0 \end{pmatrix}.$$

The eigenvalues are the roots of the characteristic polynomial det(R − λI), that is

$$\begin{vmatrix} 1-\lambda & 0 & 0 \\ 0 & -\lambda & -1 \\ 0 & 1 & -\lambda \end{vmatrix} = 0 \iff (1-\lambda)(\lambda^2+1) = 0,$$
so λ₁ = 1 and λ₂,₃ = ±i. Solving the corresponding linear system

$$\begin{cases} (1-\lambda)z_1 = 0 \\ -\lambda z_2 - z_3 = 0 \\ z_2 - \lambda z_3 = 0 \end{cases}$$

we find that the eigenspaces are spanned by (1, 0, 0), (0, i, 1), and (0, −i, 1), respectively.
In particular they are 1-dimensional, so the algebraic and geometric multiplicities of the
eigenvalues coincide. Therefore R is diagonalizable (we can use [2, 2.2.54] to conclude this).
The conclusion no longer holds if R is restricted to R³, because not all the roots belong to
the field K, which in this case would be R. Notice that the real version of R is a rotation
in R³ of 90° about the Ox₁ axis.
For σx we proceed in the same manner, finding λ₁,₂ = ±1 and that σx is diagonalizable
(both as a map on C² and as a map on R²).
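As a quick numerical cross-check (our addition, not part of the original solution), NumPy recovers the same spectra:

```python
# Numerical cross-check of Exercise 1 (a sketch, not part of the original text).
import numpy as np

R = np.array([[1, 0, 0],
              [0, 0, -1],
              [0, 1, 0]], dtype=complex)
print(np.linalg.eigvals(R))          # expected: 1, i, -i (in some order)

sigma_x = np.array([[0, 1],
                    [1, 0]], dtype=float)
print(np.linalg.eigvals(sigma_x))    # expected: 1, -1
```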

 
2. Let A be a linear operator with associated matrix

$$A = \begin{pmatrix} -5 & 3 \\ 6 & -2 \end{pmatrix}.$$

Show that A is diagonalizable and compute its eigenvalues. Deduce that there exists a matrix B such
that B³ = A.
* Lectures given by Prof. N. Cotfas. Assistant R. Slobodeanu. You can address your questions to
radualexandru.slobodeanu@g.unibuc.ro or to nicolae.cotfas@unibuc.ro

Solution. Computing the characteristic polynomial of A we get

$$P_A(\lambda) = (\lambda - 1)(\lambda + 8).$$

The roots are distinct and belong to R; this allows us to conclude that A is diagonalizable.

♣ If the roots of the characteristic polynomial of the linear operator A on the K-vector
space V are all distinct and all belong to K, then A is diagonalizable¹; cf. [2, 2.2.54].

Therefore there exists an invertible matrix P such that A = P D P⁻¹, where

$$D = \begin{pmatrix} 1 & 0 \\ 0 & -8 \end{pmatrix}.$$

To prove that there exists a matrix B such that B³ = A, observe first that for D the
answer is immediate: M³ = D for

$$M = \begin{pmatrix} 1 & 0 \\ 0 & -2 \end{pmatrix}.$$

Choose B = P M P⁻¹. Then

$$B^3 = P M^3 P^{-1} = P D P^{-1} = A.$$

Notice that the exercise did not ask us to find B explicitly.
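Here is a minimal NumPy sketch of the construction (our addition; it relies only on the diagonalization proved above):

```python
# Build B = P M P^{-1} numerically and check that B^3 = A (a sketch).
import numpy as np

A = np.array([[-5.0, 3.0],
              [6.0, -2.0]])
eigvals, P = np.linalg.eig(A)     # columns of P are eigenvectors of A
M = np.diag(np.cbrt(eigvals))     # real cube roots of the eigenvalues 1 and -8
B = P @ M @ np.linalg.inv(P)
print(np.allclose(np.linalg.matrix_power(B, 3), A))   # True
```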

3. Let m ∈ R and A a linear map on R³ whose matrix in the canonical basis is

$$A = \begin{pmatrix} 1 & 0 & 1 \\ -1 & 2 & 1 \\ 2-m & m-2 & m \end{pmatrix}.$$

(a) Compute the eigenvalues of A; (b) For which values of m is the linear operator A
diagonalizable?; (c) Suppose m = 2. Compute Aᵏ for any k ∈ N.

Solution. (a) Let us compute the characteristic polynomial of A. We have

$$-P_A(\lambda) = \begin{vmatrix} \lambda-1 & 0 & -1 \\ 1 & \lambda-2 & -1 \\ m-2 & 2-m & \lambda-m \end{vmatrix} \overset{C_1+C_2\to C_1}{=} \begin{vmatrix} \lambda-1 & 0 & -1 \\ \lambda-1 & \lambda-2 & -1 \\ 0 & 2-m & \lambda-m \end{vmatrix}$$

$$\overset{L_2-L_1\to L_2}{=} \begin{vmatrix} \lambda-1 & 0 & -1 \\ 0 & \lambda-2 & 0 \\ 0 & 2-m & \lambda-m \end{vmatrix} = (\lambda-1)\begin{vmatrix} \lambda-2 & 0 \\ 2-m & \lambda-m \end{vmatrix} = (\lambda-1)(\lambda-2)(\lambda-m).$$

The eigenvalues of A are λ₁ = 1, λ₂ = 2 and λ₃ = m. In particular, if m = 1 or m = 2, then
A has only two eigenvalues.
¹ Indeed, eigenspaces have dimension at least 1, and they cannot intersect elsewhere than in 0 (check
this!). Since their direct sum cannot exceed dim V, each eigenspace must have dimension exactly 1, equal to the
algebraic multiplicity of the corresponding eigenvalue. Therefore A is diagonalizable according to the standard criterion.

(b) If m ≠ 1 and m ≠ 2, then A is an endomorphism of R³ with 3 distinct real eigenvalues.
Therefore A is diagonalizable by (♣).
If m = 1, the characteristic polynomial is (1 − λ)²(2 − λ), so A is diagonalizable if and
only if the dimension of the eigenspace Vλ₁ associated to the eigenvalue λ₁ = 1 is equal
to 2. Let us check this. The vector u = (x, y, z) is an eigenvector for λ = 1 if

$$Au = u \iff \begin{cases} x+z = x \\ -x+2y+z = y \\ x-y+z = z \end{cases} \iff \begin{cases} z = 0 \\ -x+y+z = 0 \\ x-y = 0 \end{cases} \iff \begin{cases} x \text{ arbitrary} \\ y = x \\ z = 0 \end{cases}$$

Therefore Vλ₁ = span{(1, 1, 0)} and dim Vλ₁ = 1 ≠ 2: the linear operator A is not
diagonalizable.
Suppose now that m = 2. We have this time to determine the dimension of the eigenspace
Vλ₂ associated to the eigenvalue λ₂ = 2. For u = (x, y, z) we have

$$Au = 2u \iff \begin{cases} -x+z = 0 \\ -x+z = 0 \\ 0 = 0 \end{cases} \iff \begin{cases} x \text{ arbitrary} \\ y \text{ arbitrary} \\ z = x \end{cases}$$

Therefore Vλ₂ = span{(1, 0, 1), (0, 1, 0)} and dim Vλ₂ = 2 = the algebraic multiplicity of λ₂.
The linear operator A is diagonalizable.
(c) We begin by diagonalizing A. We have already computed a basis of Vλ₂. As for the
eigenvalue λ₁ = 1 (warning: here m = 2), we have, for a vector u = (x, y, z),

$$Au = u \iff \begin{cases} x+z = x \\ -x+2y+z = y \\ 2z = z \end{cases} \iff \begin{cases} z = 0 \\ -x+y+z = 0 \\ z = 0 \end{cases} \iff \begin{cases} x \text{ arbitrary} \\ y = x \\ z = 0 \end{cases}$$

So a basis of Vλ₁ is given by (1, 1, 0). Denote u = (1, 1, 0), v = (0, 1, 0) and w = (1, 0, 1).
According to the previous discussion, {u, v, w} is a basis of eigenvectors for A, and in this
basis the matrix of A is

$$D = \begin{pmatrix} 1 & 0 & 0 \\ 0 & 2 & 0 \\ 0 & 0 & 2 \end{pmatrix}.$$
Let P be the change of basis matrix from the canonical basis of R³ to the new basis {u, v, w}.
We have

$$P = \begin{pmatrix} 1 & 0 & 1 \\ 1 & 1 & 0 \\ 0 & 0 & 1 \end{pmatrix}$$

and A = P D P⁻¹. By direct computation we also have

$$P^{-1} = \begin{pmatrix} 1 & 0 & -1 \\ -1 & 1 & 1 \\ 0 & 0 & 1 \end{pmatrix}.$$

From A = P D P⁻¹ we easily deduce that Aᵏ = P Dᵏ P⁻¹. But since D is diagonal, we have

$$D^k = \begin{pmatrix} 1 & 0 & 0 \\ 0 & 2^k & 0 \\ 0 & 0 & 2^k \end{pmatrix}.$$

We finally obtain

$$A^k = \begin{pmatrix} 1 & 0 & 2^k-1 \\ 1-2^k & 2^k & 2^k-1 \\ 0 & 0 & 2^k \end{pmatrix}.$$
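The closed form can be cross-checked numerically; the following NumPy sketch (our addition) compares Aᵏ, P Dᵏ P⁻¹ and the formula above for a few values of k:

```python
# Cross-check of the closed form for A^k when m = 2 (a sketch).
import numpy as np

A = np.array([[1.0, 0.0, 1.0],
              [-1.0, 2.0, 1.0],
              [0.0, 0.0, 2.0]])
P = np.array([[1.0, 0.0, 1.0],     # columns: u, v, w
              [1.0, 1.0, 0.0],
              [0.0, 0.0, 1.0]])
for k in range(1, 6):
    Dk = np.diag([1.0, 2.0**k, 2.0**k])
    closed = np.array([[1, 0, 2**k - 1],
                       [1 - 2**k, 2**k, 2**k - 1],
                       [0, 0, 2**k]], dtype=float)
    assert np.allclose(np.linalg.matrix_power(A, k), closed)
    assert np.allclose(P @ Dk @ np.linalg.inv(P), closed)
print("closed form verified for k = 1, ..., 5")
```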

4. (a) Prove that if the matrix A (or the operator A) satisfies a polynomial equation
P(A) = 0, then the eigenvalues λ of A satisfy the same polynomial equation P(λ) = 0.
(b) Let U be the matrix

$$U = \begin{pmatrix} 0 & 1 & 1 & 1 \\ 1 & 0 & 1 & 1 \\ 1 & 1 & 0 & 1 \\ 1 & 1 & 1 & 0 \end{pmatrix}.$$

Compute U² and deduce a simple relation relating U², U and the identity matrix I₄.
Deduce the eigenvalues of U. Diagonalize U.
(c) A projection on a vector space V is a linear operator P : V → V such that P² = P.
What are the eigenvalues of such an operator, assuming P ≠ Id and P not identically 0?
Deduce that V = ker P ⊕ Im P.

Solution. (a) Assume that a₂A² + a₁A + a₀I = 0 is the polynomial equation (the
general case is completely similar), where the aᵢ are some real/complex coefficients. Apply
both sides of this operator identity to an eigenvector v of A corresponding to the
eigenvalue λ. We get a₂A(Av) + a₁Av + a₀v = 0, or equivalently a₂λAv + a₁λv + a₀v = 0,
so (a₂λ² + a₁λ + a₀)v = 0. Since by definition v ≠ 0, we deduce that λ satisfies the same
polynomial equation as A, namely a₂λ² + a₁λ + a₀ = 0.

Warning! This does not imply that all solutions of P(λ) = 0 are eigenvalues
of A.

(b) We easily see that

$$U^2 = \begin{pmatrix} 3 & 2 & 2 & 2 \\ 2 & 3 & 2 & 2 \\ 2 & 2 & 3 & 2 \\ 2 & 2 & 2 & 3 \end{pmatrix}$$

and therefore U² = 2U + 3I₄, where I₄ is the identity matrix. The polynomial X² − 2X − 3
annihilates U (it is even the minimal polynomial of U, since there exists no lower degree
polynomial that annihilates U) and it has the simple roots −1 and 3. We will now check
whether these are also eigenvalues of U or not.

 
If λ₁ = −1 is an eigenvalue, then we must be able to find v = (x, y, z, t)ᵗ such that

$$Uv = -v \iff \begin{cases} y+z+t = -x \\ x+z+t = -y \\ x+y+t = -z \\ x+y+z = -t \end{cases} \iff x+y+z+t = 0 \iff \begin{cases} x = -y-z-t \\ y, z, t \text{ arbitrary} \end{cases}$$
This system is compatible, so λ₁ = −1 is an eigenvalue and its geometric multiplicity is
3: a basis of Vλ₁ is given by the vectors (−1, 1, 0, 0), (−1, 0, 1, 0) and (−1, 0, 0, 1) (check
their linear independence!). Solving Uv = 3v similarly, we find that λ₂ = 3 is an eigenvalue
with eigenspace Vλ₂ = span{(1, 1, 1, 1)}. But eigenvectors corresponding
to distinct eigenvalues are linearly independent ([2, 2.2.53]). Therefore the
set of vectors {(−1, 1, 0, 0), (−1, 0, 1, 0), (−1, 0, 0, 1), (1, 1, 1, 1)} found above is a basis of
R⁴ formed by eigenvectors of U. With respect to this new basis, the associated linear
operator U has a diagonal form.
(c) According to (a), if λ is an eigenvalue of P, then it must satisfy λ² = λ too, so λ = 0
or 1. Notice that at this point this does not imply that λ = 0 and λ = 1 are eigenvalues, but
only that these are the possible eigenvalues: the eigenvalues are to be found among the
solutions of λ² = λ.
We will prove that both values 0 and 1 do actually occur.
Indeed, as P² = P, for any v we have P(Pv) = Pv, which says that Pv is an
eigenvector with eigenvalue 1 whenever v is not in the kernel of P (i.e. Pv ≠ 0), and such
v must exist because P ≢ 0. So λ = 1 is an eigenvalue.
Again, since for any v we have P(Pv) = Pv, we deduce P(Pv − v) = 0, which says that
Pv − v is an eigenvector with eigenvalue 0 whenever Pv − v ≠ 0, and such v must exist as
P is not the identity map. So λ = 0 is an eigenvalue.
By definition V_{λ₁=0} = ker P. Let us remark that V_{λ₂=1} ⊂ Im P (because Pv = v implies
obviously that v ∈ Im P) and Im P ⊂ V_{λ₂=1} (if v = Pu for some u ∈ V, then applying P
to both sides of this equality and using P² = P results in Pv = Pu, so Pv = v). We have
proved that V_{λ₂=1} = Im P. As, in general,

the sum of eigenspaces corresponding to distinct eigenvalues is a direct sum²,

we get that ker P and Im P form a direct sum ker P ⊕ Im P ⊂ V. But, by the Rank Theorem
[B5] in [1], we have dim ker P + dim Im P = dim V. We can conclude that ker P ⊕ Im P = V.
Notice that this can be read as Vλ₁ ⊕ Vλ₂ = V, so, in particular, we have proved that any
projection is a diagonalizable operator!

² Check this by noticing that there cannot exist a common eigenvector v for two eigenvalues λᵢ ≠ λⱼ.
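A short NumPy sketch (our addition) verifying the relation U² = 2U + 3I₄ and the spectrum found above:

```python
# Verify U^2 = 2U + 3 I_4 and the eigenvalues -1 (x3) and 3 (a sketch).
import numpy as np

U = np.ones((4, 4)) - np.eye(4)        # zeros on the diagonal, ones elsewhere
print(np.allclose(U @ U, 2*U + 3*np.eye(4)))   # True

print(np.round(np.linalg.eigvalsh(U), 10))     # expected: -1, -1, -1, 3
```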

5 (Simultaneous diagonalization). Consider two linear operators from C² to C² whose
matrices in the canonical basis are

$$A = \frac{1}{2}\begin{pmatrix} \sqrt{2} & -\frac{1-i}{\sqrt{2}} \\[2pt] -\frac{1+i}{\sqrt{2}} & 0 \end{pmatrix} \quad\text{and}\quad B = \frac{1}{2}\begin{pmatrix} 0 & \frac{1+i}{\sqrt{2}} \\[2pt] -\frac{1-i}{\sqrt{2}} & i\sqrt{2} \end{pmatrix}.$$

Show that the two operators commute (i.e. AB − BA = 0) and that there exists a basis
of common eigenvectors for A and B (we say that the corresponding operators can be
diagonalized simultaneously). Try to generalize this example.

Solution. The commutativity is a straightforward computation of matrix products. Then
we compute a basis of eigenvectors for A, whose eigenvalues are λ₁ = (√3+1)/(2√2),
λ₂ = −(√3−1)/(2√2), and we check that these are also eigenvectors for B (a long but
elementary computation). Such a basis is the following:

$$v = \left(-\tfrac{\sqrt{3}+1}{2}(1-i),\ 1\right), \qquad w = \left(\tfrac{\sqrt{3}-1}{2}(1-i),\ 1\right).$$

The fact that A is diagonalizable is an illustration of the general result (cf. [E18] in [1]):

any Hermitian matrix is diagonalizable.

Let us now observe that the commutativity of two linear operators A and B on the
same vector space V, when applied to an eigenvector v of A with eigenvalue λ, yields

$$A(Bv) - B(Av) = 0 \Rightarrow A(Bv) = \lambda Bv.$$

In other words: if v is an eigenvector of A, then Bv is also an eigenvector of A (for the same
eigenvalue). If the eigenspace Vλ of A is 1-dimensional, then we must have Bv = µv for
some scalar µ, i.e. v is also an eigenvector for B. So one possible generalization of the
exercise is: if A is diagonalizable with 1-dimensional eigenspaces³ and B commutes with
A, then A and B can be simultaneously diagonalized.

Another result of this type involves self-adjoint operators on Hilbert spaces
(see [E20] in [1]), and its ∞-dimensional version is heavily used in Quantum Mechanics.
In our case, A and iB are self-adjoint, so the conclusion can also be obtained
from this general result.
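A NumPy sketch (our addition; A, B, v, w are our transcription of the formulas above, so beware of transcription slips) that checks both claims:

```python
# Check AB = BA and that v, w are common eigenvectors (a sketch).
import numpy as np

s2, s3 = np.sqrt(2), np.sqrt(3)
A = 0.5 * np.array([[s2, -(1 - 1j)/s2],
                    [-(1 + 1j)/s2, 0]])
B = 0.5 * np.array([[0, (1 + 1j)/s2],
                    [-(1 - 1j)/s2, 1j*s2]])
print(np.allclose(A @ B, B @ A))       # True: the operators commute

v = np.array([-(s3 + 1)/2 * (1 - 1j), 1])
w = np.array([ (s3 - 1)/2 * (1 - 1j), 1])
for M in (A, B):
    for u in (v, w):
        Mu = M @ u
        # u is an eigenvector of M iff Mu is proportional to u (here u[1] = 1)
        print(np.allclose(Mu, Mu[1] * u))   # True in all four cases
```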



6 (Jordan form). Consider the matrix

$$A = \begin{pmatrix} 1 & 0 & 0 \\ 0 & 0 & -1 \\ 0 & 1 & 2 \end{pmatrix}.$$

Is this matrix (or the corresponding operator on R³) diagonalizable? Show that there exists
a nonsingular matrix Q such that A = Q B Q⁻¹, where

$$B = \begin{pmatrix} 1 & 0 & 0 \\ 0 & 1 & 1 \\ 0 & 0 & 1 \end{pmatrix}.$$

Solution. The characteristic polynomial of A is P_A(λ) = det(A − λI) = −(λ − 1)³, with λ = 1 its only
root. So A has a single eigenvalue λ = 1 (of algebraic multiplicity 3), and, since A ≠ I₃,
the associated operator A is not diagonalizable.
³ Or, equivalently, with distinct eigenvalues.

Notice that (x, y, z) ∈ Vλ = ker(A − I) ⟺ y + z = 0. We see that the eigenspace Vλ
is of dimension 2, with a basis {u₁, u₂}, where u₁ = (1, 0, 0) and u₂ = (0, 1, −1). We look
for a third vector u₃ such that Au₃ = u₂ + u₃. Put u₃ = (x, y, z). Then

$$Au_3 = u_2 + u_3 \iff \begin{cases} x = x \\ -z = 1+y \\ y+2z = -1+z \end{cases} \iff z = -1-y.$$

Let us choose u₃ = (0, 0, −1). It is easy to check the linear independence of {u₁, u₂, u₃},
which is therefore a basis of R³. With respect to this basis, the matrix associated to the
operator A is exactly B. The matrix Q is the change of basis matrix, whose columns are u₁, u₂, u₃.
The matrix B is called the Jordan form of the linear operator A (it is a block-diagonal
form, see WikiDef).
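For readers who want to double-check the hand computation, SymPy computes Jordan forms directly; a minimal sketch (our addition):

```python
# Confirm the Jordan form of A with SymPy (a sketch).
import sympy as sp

A = sp.Matrix([[1, 0, 0],
               [0, 0, -1],
               [0, 1, 2]])
Q, J = A.jordan_form()        # returns (Q, J) with A = Q J Q^{-1}
sp.pprint(J)                  # the matrix B above, up to the order of the blocks
print((Q * J * Q.inv() - A).applyfunc(sp.simplify).is_zero_matrix)   # True
```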

7. Let V = {P ∈ R[X] | deg P ≤ n} and φ : V → V the linear map defined by φ(P) =
(X² − 1)P″ + 2XP′. Write the associated matrix with respect to the canonical basis,
justify that φ is diagonalizable and find the eigenvalues of φ. The "eigenvectors" are the
celebrated Legendre polynomials, which appear in the wavefunctions of the hydrogen atom.
Can you find the eigenpolynomials corresponding to the three lowest eigenvalues? Hint. You may take n = 3 or 4.
Solution. Let us first compute the action of φ on the canonical basis (we take n = 4):
φ(1) = 0
φ(X) = 2X
φ(X²) = −2 + 6X²
φ(X³) = −6X + 12X³
φ(X⁴) = −12X² + 20X⁴
so the associated matrix is

$$\Phi = \begin{pmatrix} 0 & 0 & -2 & 0 & 0 \\ 0 & 2 & 0 & -6 & 0 \\ 0 & 0 & 6 & 0 & -12 \\ 0 & 0 & 0 & 12 & 0 \\ 0 & 0 & 0 & 0 & 20 \end{pmatrix}.$$

The upper triangular form of the matrix makes the computation of the characteristic
polynomial very easy: P(λ) = −λ(λ − 2)(λ − 6)(λ − 12)(λ − 20). The eigenvalues
λ₀ = 0, λ₁ = 2, . . . , λ₄ = 20 are all distinct and real, so φ is diagonalizable, cf. (♣).
For arbitrary n the argument is similar, and the eigenvalues are

$$\lambda_\ell = \ell(\ell+1), \qquad \ell = 0, 1, \dots, n.$$

To find the eigenpolynomials, we must solve an ordinary differential equation (ODE),
e.g. for λ₂ = 6:

$$(x^2 - 1)P'' + 2xP' = 6P.$$

We first observe that, for each k, the polynomials of degree ≤ k form an invariant subspace
for φ, so we can guess that the solution of the above equation should be of degree 2. Injecting
P(x) = ax² + bx + c into the above ODE and identifying the coefficients, we get P(x) =
3x² − 1, which is an "eigenvector" of φ corresponding to λ₂ = 6.
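A short SymPy sketch (our addition) that builds the matrix of φ for n = 4 and confirms both the eigenvalues ℓ(ℓ + 1) and the eigenpolynomial 3x² − 1:

```python
# Matrix of phi in the basis {1, x, x^2, x^3, x^4} and its spectrum (a sketch).
import sympy as sp

x = sp.symbols('x')
n = 4

def phi(p):
    return sp.expand((x**2 - 1)*sp.diff(p, x, 2) + 2*x*sp.diff(p, x))

# entry (i, j) = coefficient of x^i in phi(x^j)
Phi = sp.Matrix([[phi(x**j).coeff(x, i) for j in range(n + 1)]
                 for i in range(n + 1)])
print(Phi.eigenvals())                # {0: 1, 2: 1, 6: 1, 12: 1, 20: 1}

P2 = 3*x**2 - 1
print(sp.simplify(phi(P2) - 6*P2))    # 0, so P2 is an eigenpolynomial for 6
```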

8. (a) Let u : R → R³ be a (column) vector-valued function and A a constant 3 × 3
matrix. Show that the (system of ordinary) differential equation(s)

$$\frac{du}{dt} = Au \qquad (1)$$

admits solutions of the form u(t) = e^{λt}v for some constant vector v and some scalar λ.
(b) Show that the partial differential equation (called the heat equation)

$$\frac{\partial u}{\partial t} = \frac{\partial^2 u}{\partial x^2} \qquad (2)$$

in the unknown real function u(x, t) = the temperature at point x, at time t, admits solutions
of the form u(x, t) = e^{λt}v(x) for some function v and some λ ∈ R. Characterize v and λ
in the context of eigenvalues/eigenvectors of linear operators (∞-dim case).
Solution. (a) For u(t) = e^{λt}v, Eq. (1) becomes λe^{λt}v = e^{λt}Av, or, after simplification
(the exponential is nowhere vanishing), λv = Av. This says that, in order to have a solution of
this form, it is enough to choose v an eigenvector of A corresponding to an eigenvalue λ.
(b) For u(x, t) = e^{λt}v(x), Eq. (2) becomes λe^{λt}v(x) = e^{λt}v″(x), or, after simplification
(the exponential is nowhere vanishing), λv(x) = v″(x). This says that, in order to have a solution
of this form (a separation-of-variables solution), it is enough to choose v(x) an eigenfunction
of the second derivative operator d²/dx², acting on smooth functions, corresponding to an
eigenvalue λ (e.g. v(x) = cos(√(−λ) x), λ < 0).
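A small numerical illustration of (a) (our addition; the matrix is borrowed from Exercise 3 with m = 2, but any constant matrix would do):

```python
# Check that u(t) = e^{lambda t} v solves u' = A u for an eigenpair (a sketch).
import numpy as np

A = np.array([[1.0, 0.0, 1.0],
              [-1.0, 2.0, 1.0],
              [0.0, 0.0, 2.0]])
vals, vecs = np.linalg.eig(A)
lam, v = vals[0], vecs[:, 0]          # any eigenpair works

u = lambda t: np.exp(lam*t) * v
t, dt = 0.7, 1e-6
dudt = (u(t + dt) - u(t - dt)) / (2*dt)         # numerical derivative du/dt
print(np.allclose(dudt, A @ u(t), atol=1e-4))   # True
```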

9 (Hopf map - continued*). Recall the map defined in Set 3:

$$\varphi(x_1, y_1, x_2, y_2) = \left(x_1^2 + y_1^2 - x_2^2 - y_2^2,\ 2(x_1x_2 + y_1y_2),\ 2(x_1y_2 - x_2y_1)\right). \qquad (3)$$

At an arbitrary point p = (a₁, b₁, a₂, b₂) ≠ 0, you computed the differential Dₚϕ : R⁴ → R³,
which is a linear map with associated matrix given by the Jacobian Jₚ of all partial
derivatives at p. Show that Jₚᵗ Jₚ is diagonalizable and compute the diagonal elements.

Solution. Using the expression of Jₚ from the previous set, we find, by direct multiplication
of the transpose of the Jacobian matrix with itself:

$$J_p^t J_p = 4\begin{pmatrix} a_1^2+a_2^2+b_2^2 & a_1b_1 & -b_1b_2 & a_2b_1 \\ a_1b_1 & b_1^2+a_2^2+b_2^2 & a_1b_2 & -a_1a_2 \\ -b_1b_2 & a_1b_2 & a_1^2+b_1^2+a_2^2 & a_2b_2 \\ a_2b_1 & -a_1a_2 & a_2b_2 & a_1^2+b_1^2+b_2^2 \end{pmatrix}.$$
In order to compute the characteristic polynomial P(λ) = det(Jₚᵗ Jₚ − λI₄) we can either
use software with symbolic computations (Wolfram Mathematica) or employ elementary
row operations, e.g. multiply the 2nd row by b₁/a₁ and add it to the first one:

$$\frac{1}{4^4}P(\lambda) = \begin{vmatrix} (a_1^2+a_2^2+b_2^2)-\frac{\lambda}{4} & a_1b_1 & -b_1b_2 & a_2b_1 \\ a_1b_1 & (b_1^2+a_2^2+b_2^2)-\frac{\lambda}{4} & a_1b_2 & -a_1a_2 \\ -b_1b_2 & a_1b_2 & (a_1^2+b_1^2+a_2^2)-\frac{\lambda}{4} & a_2b_2 \\ a_2b_1 & -a_1a_2 & a_2b_2 & (a_1^2+b_1^2+b_2^2)-\frac{\lambda}{4} \end{vmatrix}$$

$$= \left(a_1^2+b_1^2+a_2^2+b_2^2-\frac{\lambda}{4}\right)\begin{vmatrix} 1 & \frac{b_1}{a_1} & 0 & 0 \\ a_1b_1 & (b_1^2+a_2^2+b_2^2)-\frac{\lambda}{4} & a_1b_2 & -a_1a_2 \\ -b_1b_2 & a_1b_2 & (a_1^2+b_1^2+a_2^2)-\frac{\lambda}{4} & a_2b_2 \\ a_2b_1 & -a_1a_2 & a_2b_2 & (a_1^2+b_1^2+b_2^2)-\frac{\lambda}{4} \end{vmatrix}$$

Here we assumed a₁ ≠ 0 (recall the hypothesis p = (a₁, b₁, a₂, b₂) ≠ 0, so at least one of
the components is not equal to 0; the other three possibilities have a similar treatment).
Continue by making a zero in the (first line, second column) entry, etc. We find in the end

$$P(\lambda) = \lambda\left(\lambda - 4(a_1^2+a_2^2+b_1^2+b_2^2)\right)^3,$$

so the eigenvalues are λ₁ = 0 and λ₂ = 4(a₁² + a₂² + b₁² + b₂²), with algebraic multiplicities 1
and 3, respectively. The corresponding eigenspaces are calculated by solving the system:
    
$$4\begin{pmatrix} a_1^2+a_2^2+b_2^2 & a_1b_1 & -b_1b_2 & a_2b_1 \\ a_1b_1 & b_1^2+a_2^2+b_2^2 & a_1b_2 & -a_1a_2 \\ -b_1b_2 & a_1b_2 & a_1^2+b_1^2+a_2^2 & a_2b_2 \\ a_2b_1 & -a_1a_2 & a_2b_2 & a_1^2+b_1^2+b_2^2 \end{pmatrix}\begin{pmatrix} x_1 \\ y_1 \\ x_2 \\ y_2 \end{pmatrix} = \lambda_i \begin{pmatrix} x_1 \\ y_1 \\ x_2 \\ y_2 \end{pmatrix}.$$
For λ₁ = 0 we begin by multiplying the 2nd equation by b₁/a₁ and adding it to the first one,
which results in x₁ = −(b₁/a₁)y₁, and we continue with similar manipulations. We get

$$V_{\lambda_1} = \operatorname{span}\{(-b_1, a_1, -b_2, a_2)\}.$$

For λ₂ = 4(a₁² + a₂² + b₁² + b₂²) the above system reduces to one single equation:

$$-b_1x_1 + a_1y_1 - b_2x_2 + a_2y_2 = 0,$$

so Vλ₂ is a hyperplane in R⁴ (hence a 3-dimensional space). Since the eigenspaces have
dimensions 1 and 3 respectively, exactly the algebraic multiplicities of the corresponding λᵢ,
we deduce that Jₚᵗ Jₚ is diagonalizable, cf. again [B30] in [1].
Actually, we will see (cf. [E19] in [1]) that

any symmetric matrix is diagonalizable,

so it would be enough to notice that Jₚᵗ Jₚ is symmetric. More generally, any product AᵗA
of the transpose of a matrix with the matrix itself is symmetric.
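A NumPy sketch (our addition; the Jacobian Jₚ below is our transcription of the one computed in Set 3) checking the spectrum at a random point:

```python
# Spectrum of J_p^t J_p at a random p: expect 0 once and 4|p|^2 three times.
import numpy as np

rng = np.random.default_rng(0)
a1, b1, a2, b2 = p = rng.normal(size=4)

J = 2 * np.array([[a1,  b1, -a2, -b2],
                  [a2,  b2,  a1,  b1],
                  [b2, -a2, -b1,  a1]])
vals = np.sort(np.linalg.eigvalsh(J.T @ J))    # symmetric, so eigvalsh applies
print(np.allclose(vals, [0] + 3 * [4 * np.dot(p, p)]))   # True
```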

***

Supplementary reading. Optical resonator stability is directly addressed by eigenvector
and eigenvalue techniques. Read the definition of the ray transfer matrix and the
main examples, then redo the computations in the section Resonator stability in WikiRTM.
For more information see [4].

Answer the same questions as in Exercise 6 for

$$A = \begin{pmatrix} 2 & -1 & -1 \\ 2 & 1 & -2 \\ 3 & -1 & -2 \end{pmatrix} \quad\text{and}\quad B = \begin{pmatrix} -1 & 0 & 0 \\ 0 & 1 & 1 \\ 0 & 0 & 1 \end{pmatrix}.$$

References
[1] Cotfas N., Elements of Linear Algebra & some applications, on-line pdf at link
[2] Cotfas N., Cotfas L.A., Elemente de algebră liniară [Elements of Linear Algebra], Ed. Univ. Bucureşti, 2015, link.
[3] Lipschutz S., Lipson M., Schaum's Outline: Linear Algebra, McGraw-Hill Education, 2018.
[4] Hodgson N., Weber H., Laser Resonators and Beam Propagation, Springer, 2005.
