Linear Algebra
Exercises set 4 - SOLUTIONS
Eigenvectors/values
1. Let R : C^3 → C^3 be the linear map R(z1, z2, z3) = (z1, −z3, z2). Find its eigenvalues and the corresponding eigenvectors. Is this linear map diagonalizable? Does the conclusion remain true for R seen as a map from R^3 to R^3? Same questions for σx : C^2 → C^2, σx(z1, z2) = (z2, z1).
Solution. With respect to the canonical basis, the matrix of R, seen as a map between 3-dimensional complex vector spaces, is

        ( 1  0   0 )
    R = ( 0  0  −1 )
        ( 0  1   0 )

The eigenvalues are the roots of the characteristic polynomial det(R − λI), that is,

    | 1−λ   0    0 |
    |  0   −λ   −1 | = 0  ⇐⇒  (1 − λ)(λ^2 + 1) = 0,
    |  0    1   −λ |
so λ1 = 1 and λ2,3 = ±i. Solving the corresponding linear system

    (1 − λ)z1 = 0
    −λz2 − z3 = 0
    z2 − λz3 = 0,

we find that the eigenspaces are spanned by (1, 0, 0), (0, i, 1), and (0, −i, 1), respectively.
In particular, they are 1-dimensional, so the algebraic and geometric multiplicities of the eigenvalues coincide. Therefore R is diagonalisable (we can use [2, 2.2.54] to conclude this).
This conclusion fails if R is restricted to R^3, because not all the roots of the characteristic polynomial belong to the field K, which in this case is R. Notice that the real version of R is a rotation of R^3 by 90° about the Ox1 axis.
For σx we proceed in the same manner, finding λ1,2 = ±1 and concluding that σx is diagonalizable (both as a map on C^2 and as a map on R^2).
2. Let A be a linear operator with associated matrix

        ( −5   3 )
    A = (  6  −2 )

Show that A is diagonalizable and compute its eigenvalues. Deduce that there exists a matrix B such that B^3 = A.
* Lectures given by Prof. N. Cotfas. Assistant R. Slobodeanu. You can address your questions to radualexandru.slobodeanu@g.unibuc.ro or to nicolae.cotfas@unibuc.ro.
Solution. Computing the characteristic polynomial of A we get

    PA(λ) = (λ − 1)(λ + 8).

The roots are distinct and real; this allows us to conclude that A is diagonalizable: A = P D P^{-1}, where D = diag(1, −8) and P is the change-of-basis matrix to a basis of eigenvectors. To prove that there exists a matrix B such that B^3 = A, note first that for D the answer is immediate: M^3 = D for

        ( 1   0 )
    M = ( 0  −2 )

Choose B = P M P^{-1}. Then

    B^3 = P M^3 P^{-1} = P D P^{-1} = A.
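The construction of the cube root can be checked numerically; the sketch below (assuming NumPy) diagonalizes A, takes cube roots on the diagonal, and conjugates back:

```python
import numpy as np

A = np.array([[-5.0, 3.0],
              [6.0, -2.0]])

# Diagonalize: columns of P are eigenvectors, eigvals should be {1, -8}.
eigvals, P = np.linalg.eig(A)
M = np.diag(np.cbrt(eigvals))        # M^3 = D = diag(eigvals)
B = P @ M @ np.linalg.inv(P)

# B is a real cube root of A.
assert np.allclose(B @ B @ B, A)
```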
3. (a) Compute the eigenvalues of A; (b) For which values of m is the linear operator A diagonalizable? (c) Suppose m = 2. Compute A^k for any k ∈ N.
(b) If m ≠ 1 and m ≠ 2, then A is an endomorphism of R^3 with 3 distinct real eigenvalues.
Therefore A is diagonalizable by (♣).
If m = 1, the characteristic polynomial is (1 − λ)^2 (2 − λ), so A is diagonalizable if and only if the dimension of the eigenspace Vλ1 associated to the eigenvalue λ1 = 1 is equal to 2. Let us check this. The vector u = (x, y, z) is an eigenvector for λ = 1 if

    Au = u  ⇐⇒  { x + z = x,  −x + 2y + z = y,  x − y + z = z }
            ⇐⇒  { z = 0,  −x + y + z = 0,  x − y = 0 }
            ⇐⇒  { x = x (arbitrary),  y = x,  z = 0 }.

Therefore Vλ1 = span{(1, 1, 0)} and dim Vλ1 = 1 ≠ 2: the linear operator A is not diagonalizable.
Suppose now that m = 2. This time we have to determine the dimension of the eigenspace Vλ2 associated to the eigenvalue λ2 = 2. For u = (x, y, z) we have

    Au = 2u  ⇐⇒  { −x + z = 0,  −x + z = 0,  0 = 0 }
             ⇐⇒  { x = x (arbitrary),  y = y (arbitrary),  z = x }.

Therefore Vλ2 = span{(1, 0, 1), (0, 1, 0)} and dim Vλ2 = 2 = the algebraic multiplicity of λ2. The linear operator A is diagonalizable.
(c) We begin by diagonalizing A. We have already computed a basis of Vλ2. As for the eigenvalue λ1 = 1 (Warning! here m = 2), we have, for a vector u = (x, y, z),

    Au = u  ⇐⇒  { x + z = x,  −x + 2y + z = y,  2z = z }
            ⇐⇒  { z = 0,  −x + y + z = 0,  z = 0 }
            ⇐⇒  { x = x (arbitrary),  y = x,  z = 0 }.
So a basis of Vλ1 is given by (1, 1, 0). Denote u = (1, 1, 0), v = (0, 1, 0) and w = (1, 0, 1). According to the previous discussion, {u, v, w} is a basis of eigenvectors for A, and in this basis the matrix of A is

        ( 1  0  0 )
    D = ( 0  2  0 )
        ( 0  0  2 )

Let P be the change-of-basis matrix from the canonical basis of R^3 to the new basis {u, v, w}. We have

        ( 1  0  1 )
    P = ( 1  1  0 )
        ( 0  0  1 )

and A = P D P^{-1}. By direct computation we also have

             (  1  0  −1 )
    P^{-1} = ( −1  1   1 )
             (  0  0   1 )
From A = P D P^{-1} we easily deduce that A^k = P D^k P^{-1}. But since D is diagonal, we have

          ( 1   0    0   )
    D^k = ( 0   2^k  0   )
          ( 0   0    2^k )
We finally obtain

          ( 1        0    2^k − 1 )
    A^k = ( 1 − 2^k  2^k  2^k − 1 )
          ( 0        0    2^k     )
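The closed form for A^k can be verified against repeated matrix multiplication. Since the exercise's own matrix (with the parameter m) did not survive extraction, the sketch below reconstructs A = P D P^{-1} from the data in the solution:

```python
import numpy as np

# Reconstructed operator for m = 2: A = P D P^{-1}, with P and D as in
# the solution (a plausible reading of the garbled source).
P = np.array([[1, 0, 1],
              [1, 1, 0],
              [0, 0, 1]])
D = np.diag([1, 2, 2])
A = P @ D @ np.linalg.inv(P)

def A_power(k):
    """Closed form A^k = P diag(1, 2^k, 2^k) P^{-1} from the solution."""
    return np.array([[1,        0,    2**k - 1],
                     [1 - 2**k, 2**k, 2**k - 1],
                     [0,        0,    2**k    ]])

# Compare against brute-force powers for several k.
for k in range(6):
    assert np.allclose(np.linalg.matrix_power(A, k), A_power(k))
```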
4. (a) Prove that if the matrix A (or the operator A) satisfies a polynomial equation P(A) = 0, then the eigenvalues λ of A satisfy the same polynomial equation P(λ) = 0.
(b) Let U be the matrix

        ( 0  1  1  1 )
    U = ( 1  0  1  1 )
        ( 1  1  0  1 )
        ( 1  1  1  0 )

Compute U^2 and deduce a simple relation relating U^2, U and the identity matrix I4. Deduce the eigenvalues of U. Diagonalize U.
(c) A projection on a vector space V is a linear operator P : V → V such that P^2 = P. What are the eigenvalues of such an operator, assuming P ≠ Id and P not identically 0? Deduce that V = ker P ⊕ Im P.
Warning! This does not imply that all solutions of P(λ) = 0 are eigenvalues
of A.
A direct computation gives U^2 = 3 I4 + 2U. By (a), every eigenvalue of U must then satisfy λ^2 − 2λ − 3 = 0, i.e. λ ∈ {−1, 3}. If λ1 = −1 is an eigenvalue, then we must be able to find v = (x, y, z, t)^t such that

    Uv = −v  ⇐⇒  { y + z + t = −x,  x + z + t = −y,  x + y + t = −z,  x + y + z = −t }

             ⇐⇒  x + y + z + t = 0

             ⇐⇒  { x = −y − z − t,  y = y (arbitrary),  z = z (arbitrary),  t = t (arbitrary) }.
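A short NumPy check (not part of the original solution) of the relation U^2 = 2U + 3 I4 and of the resulting spectrum {−1, 3}:

```python
import numpy as np

U = np.ones((4, 4)) - np.eye(4)   # zeros on the diagonal, ones elsewhere

# The relation deduced in (b): U^2 = 2U + 3*I4.
assert np.allclose(U @ U, 2 * U + 3 * np.eye(4))

# Hence every eigenvalue solves t^2 - 2t - 3 = 0, i.e. t in {-1, 3}.
vals = np.sort(np.linalg.eigvalsh(U))   # U is symmetric
assert np.allclose(vals, [-1, -1, -1, 3])
```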
Thus ker P and Im P form a direct sum, ker P ⊕ Im P ⊂ V. But, by the Rank Theorem [B5] in [1], we have dim ker P + dim Im P = dim V. We conclude that ker P ⊕ Im P = V. Notice that this can be read as Vλ1 ⊕ Vλ2 = V, so, in particular, we have proved that any projection is a diagonalizable operator!
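As an illustration (a hypothetical projection chosen for this sketch, not one taken from the exercise), NumPy confirms that a projection has eigenvalues 0 and 1, and that a kernel vector together with an image basis spans the space:

```python
import numpy as np

# Oblique projection of R^3 onto the xy-plane along the direction (1, 1, 1).
Pm = np.array([[1.0, 0.0, -1.0],
               [0.0, 1.0, -1.0],
               [0.0, 0.0,  0.0]])

assert np.allclose(Pm @ Pm, Pm)            # P^2 = P

vals = np.sort(np.linalg.eigvals(Pm).real)
assert np.allclose(vals, [0, 1, 1])        # eigenvalues are 0 and 1

# V = ker P (+) Im P: a kernel vector plus an image basis form a basis of R^3.
basis = np.column_stack([[1, 1, 1], [1, 0, 0], [0, 1, 0]])
assert abs(np.linalg.det(basis)) > 1e-12
```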
2 Check this by noticing that there cannot exist a common eigenvector v for two distinct eigenvalues λi ≠ λj.
5 (Simultaneous diagonalization). Consider two linear operators from C^2 to C^2 whose matrices A and B in the canonical basis are 2 × 2 complex matrices with entries built from √2, (1 ± i)/√2 and i√2 (the exact entries were lost in extraction).
Show that the two operators commute (i.e. AB − BA = 0) and that there exists a basis
of common eigenvectors for A and B (we say that the corresponding operators can be
diagonalized simultaneously). Try to generalize this example.
The fact that A is diagonalizable is an illustration of the general result (cf. [E18] in [1])
Let us now observe what the commutativity of two linear operators A and B on the same vector space V yields when applied to an eigenvector v of A with eigenvalue λ:

    A(Bv) = B(Av) = B(λv) = λ(Bv),

so Bv again belongs to the eigenspace Vλ; in other words, B preserves the eigenspaces of A.
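Since the original matrices of this exercise were garbled, the sketch below uses a hypothetical commuting pair on C^2 (B chosen as a polynomial in A, so AB − BA = 0 automatically) to illustrate the phenomenon: commuting diagonalizable operators share an eigenbasis.

```python
import numpy as np

A = np.array([[0, 1], [1, 0]], dtype=complex)   # sigma_x
B = np.eye(2, dtype=complex) + 2j * A           # B = I + 2i*A commutes with A

assert np.allclose(A @ B - B @ A, 0)            # the operators commute

# Eigenvectors of A (distinct eigenvalues +1, -1) also diagonalize B.
_, V = np.linalg.eig(A)
for M in (A, B):
    Dm = np.linalg.inv(V) @ M @ V
    assert np.allclose(Dm, np.diag(np.diag(Dm)))  # off-diagonal part ~ 0
```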
6 (Jordan form). Consider the matrix

        ( 1  0   0 )
    A = ( 0  0  −1 )
        ( 0  1   2 )

Is this matrix (or the corresponding operator on R^3) diagonalizable? Show that there exists a nonsingular matrix Q such that A = Q B Q^{-1}, where

        ( 1  0  0 )
    B = ( 0  1  1 )
        ( 0  0  1 )
Solution. The characteristic polynomial of A is PA(λ) = −(1 − λ)^3, with λ = 1 its only root. So A has a single eigenvalue λ = 1 (of algebraic multiplicity 3) and, since A ≠ I3, the associated operator A is not diagonalizable.
3 Or, equivalently, with distinct eigenvalues.
Notice that (x, y, z) ∈ Vλ = ker(A − I) ⇐⇒ y + z = 0. We see that the eigenspace Vλ is of dimension 2, with basis {u1, u2}, where u1 = (1, 0, 0) and u2 = (0, 1, −1). We look for a third vector u3 such that Au3 = u2 + u3. Put u3 = (x, y, z). Then

    Au3 = u2 + u3  ⇐⇒  { x = x,  −z = 1 + y,  y + 2z = −1 + z }  ⇐⇒  z = −1 − y.

Let us choose u3 = (0, 0, −1). It is easy to check the linear independence of {u1, u2, u3}, which is therefore a basis of R^3. With respect to this basis, the matrix associated to the operator A is exactly B. The matrix Q is the corresponding change-of-basis matrix.
The matrix B is called the Jordan form of the linear operator A (it is a block-diagonal
form, see WikiDef).
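The change of basis can be verified numerically; in the sketch below the columns of Q are the vectors u1, u2, u3 found above:

```python
import numpy as np

A = np.array([[1, 0, 0],
              [0, 0, -1],
              [0, 1, 2]])
B = np.array([[1, 0, 0],
              [0, 1, 1],
              [0, 0, 1]])

# Columns of Q are u1 = (1,0,0), u2 = (0,1,-1), u3 = (0,0,-1).
Q = np.column_stack([[1, 0, 0], [0, 1, -1], [0, 0, -1]])

assert np.allclose(A, Q @ B @ np.linalg.inv(Q))   # A = Q B Q^{-1}
```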
8. (a) Let u : R → R^3 be a (column) vector-valued function and A a constant 3 × 3 matrix. Show that the (system of ordinary) differential equation(s)

    du/dt = Au                (1)

admits solutions of the form u(t) = e^{λt} v for some constant vector v and some scalar λ.
(b) Show that the partial differential equation (called the heat equation)

    ∂u/∂t = ∂^2 u/∂x^2        (2)

in the unknown real function u(x, t) = the temperature at point x at time t, admits solutions of the form u(x, t) = e^{λt} v(x) for some function v and some λ ∈ R. Characterize v and λ in the context of eigenvalues/eigenvectors of linear operators (the ∞-dimensional case).
Solution. (a) For u(t) = e^{λt} v, Eq. (1) becomes λ e^{λt} v = e^{λt} Av, or, after simplification (the exponential is nowhere vanishing), λv = Av. Hence, in order to have a solution of this form, it is enough to choose v an eigenvector of A corresponding to an eigenvalue λ.
(b) For u(x, t) = e^{λt} v(x), Eq. (2) becomes λ e^{λt} v(x) = e^{λt} v''(x), or, after simplification (the exponential is nowhere vanishing), λ v(x) = v''(x). Hence, in order to have a solution of this form (separation of variables), it is enough to choose v an eigenfunction of the second-derivative operator d^2/dx^2, acting on smooth functions, corresponding to an eigenvalue λ (e.g. v(x) = cos(√(−λ) x) for λ < 0).
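A finite-difference check (a sketch; λ = −4 and the grid are arbitrary choices) that u(x, t) = e^{λt} cos(√(−λ) x) satisfies the heat equation u_t = u_xx:

```python
import numpy as np

lam = -4.0
x = np.linspace(0.0, np.pi, 401)
h = x[1] - x[0]
t = 0.3

u = np.exp(lam * t) * np.cos(np.sqrt(-lam) * x)
u_t = lam * u                                    # exact time derivative of e^{lam t} v(x)
u_xx = (u[2:] - 2 * u[1:-1] + u[:-2]) / h**2     # 2nd-order central difference

# Both sides of u_t = u_xx agree up to the discretization error.
assert np.max(np.abs(u_xx - u_t[1:-1])) < 1e-3
```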
Here we assumed a1 ≠ 0 (recall the hypothesis p = (a1, b1, a2, b2) ≠ 0, so at least one of the components is nonzero; the other three possibilities have a similar treatment). Continue by making a zero in the (first row, second column) entry, etc. We find in the end:
    P(λ) = λ (λ − 4(a1^2 + a2^2 + b1^2 + b2^2))^3,

so the eigenvalues are λ1 = 0 and λ2 = 4(a1^2 + a2^2 + b1^2 + b2^2), with algebraic multiplicities 1 and 3, respectively. The corresponding eigenspaces are calculated by solving the system:
      (  a1^2+a2^2+b2^2   a1b1             −b1b2            a2b1            ) (x1)        (x1)
    4 (  a1b1             b1^2+a2^2+b2^2   a1b2             −a1a2           ) (y1)  = λi  (y1)
      ( −b1b2             a1b2             a1^2+b1^2+a2^2   a2b2            ) (x2)        (x2)
      (  a2b1             −a1a2            a2b2             a1^2+b1^2+b2^2  ) (y2)        (y2)
For λ1 = 0 we begin by multiplying the 2nd equation by b1/a1 and adding it to the first one, which results in x1 = −(b1/a1) y1, and we continue with similar manipulations. We get
Vλ1 = span {(−b1 , a1 , −b2 , a2 )} .
For λ2 = 4(a1^2 + a2^2 + b1^2 + b2^2) the above system reduces to one single equation:

    −b1 x1 + a1 y1 − b2 x2 + a2 y2 = 0,
so Vλ2 is a hyperplane in R^4 (hence a 3-dimensional space). Since the eigenspaces have dimensions 1 and 3 respectively, exactly the algebraic multiplicities of the corresponding λi, we deduce that J_p^t J_p is diagonalizable, cf. again [B30] in [1].
Actually we will see (cf. [E19] in [1]) that any symmetric matrix is diagonalizable. So it would be enough to notice that J_p^t J_p is symmetric. More generally, any product A^t A of the transpose of a matrix with the matrix itself is symmetric.
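For a concrete (hypothetical) choice of p = (a1, b1, a2, b2), the claimed spectrum {0, 4S, 4S, 4S}, with S = a1^2 + a2^2 + b1^2 + b2^2, can be confirmed numerically:

```python
import numpy as np

# Sample parameters; any p = (a1, b1, a2, b2) != 0 works.
a1, b1, a2, b2 = 1.0, 2.0, -0.5, 1.5
S = a1**2 + a2**2 + b1**2 + b2**2

M = 4 * np.array([
    [a1**2 + a2**2 + b2**2,  a1*b1,                 -b1*b2,                 a2*b1],
    [a1*b1,                  b1**2 + a2**2 + b2**2,  a1*b2,                -a1*a2],
    [-b1*b2,                 a1*b2,                  a1**2 + b1**2 + a2**2, a2*b2],
    [a2*b1,                 -a1*a2,                  a2*b2,                 a1**2 + b1**2 + b2**2],
])

assert np.allclose(M, M.T)                       # symmetric, hence diagonalizable
vals = np.sort(np.linalg.eigvalsh(M))
assert np.allclose(vals, [0, 4*S, 4*S, 4*S])     # eigenvalues 0 (mult. 1) and 4S (mult. 3)

# The kernel is spanned by (-b1, a1, -b2, a2).
v = np.array([-b1, a1, -b2, a2])
assert np.allclose(M @ v, 0)
```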
***
References
[1] N. Cotfas, Elements of Linear Algebra & some applications, on-line pdf at link.
[2] N. Cotfas, L.A. Cotfas, Elemente de algebră liniară, Ed. Univ. Bucureşti, 2015, link.
[3] S. Lipschutz, M. Lipson, Schaum's Outline: Linear Algebra, McGraw-Hill Education, 2018.
[4] N. Hodgson, H. Weber, Laser Resonators and Beam Propagation, Springer, 2005.