
PARTIAL SOLUTIONS TO MIDTERM 2 REVIEW

(Last edited April 7, 2014 at 1:48pm.)

Chapter 5: Eigenvalues and eigenvectors


(1) If A is invertible and 2 is an eigenvalue of A, then $\frac{1}{2}$ is an eigenvalue of $A^{-1}$.
Answer : True.
(2) If 2 is an eigenvalue of A, then −2 is an eigenvalue of −A.
Answer : True.
(3) If A, B are n × n matrices such that 2 is an eigenvalue of A and 3 is an
eigenvalue of B, then 5 is an eigenvalue of A + B.
Answer : False.
(4) Every square matrix (with real entries) has a real eigenvalue.
Answer : False.
(5) Every square matrix (with real entries) has a complex eigenvalue.
Answer : True.
(6) Every matrix is diagonalizable.
Answer : False.
(7) Diagonalizable matrices are invertible.
Answer : False.
(8) Invertible matrices are diagonalizable.
Answer : False.
(9) Eigenvalues are, by definition, nonzero scalars.
Answer : False.
(10) Eigenvectors are, by definition, nonzero vectors.
Answer : True.
(11) If v1 and v2 are eigenvectors of A with eigenvalue λ, then {v1, v2} is linearly independent.
Answer : False. For example, taking v2 = 2v1 gives a linearly dependent pair of eigenvectors.
(12) All upper triangular matrices are diagonalizable.
Answer : False.
(13) A nonzero vector can be contained in two distinct eigenspaces of A corre-
sponding to two distinct eigenvalues.
Answer : False.
(14) The sum of two eigenvectors is an eigenvector.
Answer : False. Let $A = \begin{pmatrix} 1 & 0 \\ 0 & 2 \end{pmatrix}$; then $\begin{pmatrix} 1 \\ 0 \end{pmatrix}$ is an eigenvector of eigenvalue 1 and $\begin{pmatrix} 0 \\ 1 \end{pmatrix}$ is an eigenvector of eigenvalue 2, but their sum $\begin{pmatrix} 1 \\ 1 \end{pmatrix}$ is not an eigenvector of A.
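As a quick numerical check of this counterexample, here is a minimal numpy sketch (the matrix and vectors are the ones from the answer above):

```python
import numpy as np

A = np.array([[1.0, 0.0],
              [0.0, 2.0]])
v1 = np.array([1.0, 0.0])    # eigenvector with eigenvalue 1
v2 = np.array([0.0, 1.0])    # eigenvector with eigenvalue 2

s = v1 + v2                  # the sum (1, 1)
As = A @ s                   # A applied to the sum: (1, 2)
det = As[0] * s[1] - As[1] * s[0]   # zero iff As is parallel to s
print(As, det)               # det = -1.0, so As is not a multiple of s:
                             # the sum is not an eigenvector of A
```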
(15) If A is similar to a diagonalizable matrix B, then A is also diagonalizable.
Answer : True. Since B is diagonalizable, there exists an invertible matrix P and a diagonal matrix D such that $B = PDP^{-1}$. Since A is similar to B, there exists an invertible matrix Q such that $A = QBQ^{-1}$. Then $A = QPDP^{-1}Q^{-1} = (QP)D(QP)^{-1}$. Thus A is diagonalizable.
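A quick numerical sanity check of this argument (a sketch; D, P, Q are arbitrary choices, and random P, Q are invertible with probability 1):

```python
import numpy as np

rng = np.random.default_rng(7)
D = np.diag([1.0, 2.0, 3.0])
P = rng.standard_normal((3, 3))   # generically invertible
Q = rng.standard_normal((3, 3))

B = P @ D @ np.linalg.inv(P)      # B is diagonalizable by construction
A = Q @ B @ np.linalg.inv(Q)      # A is similar to B

QP = Q @ P
print(np.allclose(QP @ D @ np.linalg.inv(QP), A))  # True: A = (QP) D (QP)^{-1}
```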
   
(16) The matrix $\begin{pmatrix} 1 & 2 \\ 0 & 1 \end{pmatrix}$ is similar to $\begin{pmatrix} 1 & 0 \\ 0 & 2 \end{pmatrix}$.
Answer : False. If $\begin{pmatrix} 1 & 2 \\ 0 & 1 \end{pmatrix}$ were similar to $\begin{pmatrix} 1 & 0 \\ 0 & 2 \end{pmatrix}$, then $\begin{pmatrix} 1 & 2 \\ 0 & 1 \end{pmatrix}$ would be diagonalizable (since $\begin{pmatrix} 1 & 0 \\ 0 & 2 \end{pmatrix}$ is a diagonal matrix). However, $\begin{pmatrix} 1 & 2 \\ 0 & 1 \end{pmatrix}$ is not diagonalizable because the multiplicity of the eigenvalue 1 is 2, while $\dim E_1 = 1$.
(17) There exists a 3 × 3 matrix A with real entries such that the eigenvalues of A are 1, 2 + 3i, 4 − i.
Answer : False. The characteristic polynomial of a matrix with real entries is a polynomial with real coefficients, hence its roots are either real or come in conjugate pairs; here the required conjugates 2 − 3i and 4 + i are missing.
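A numerical illustration of the conjugate-pair constraint (a minimal numpy sketch; the random seed and matrix size are arbitrary choices):

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((3, 3))       # a random real 3x3 matrix
eigs = np.linalg.eigvals(A)
print(eigs)
# The spectrum of a real matrix is closed under complex conjugation,
# so a list like 1, 2+3i, 4-i (with unpaired non-real values) cannot occur.
print(np.allclose(np.sort_complex(eigs), np.sort_complex(eigs.conj())))  # True
```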

Chapter 6: Orthogonality and least squares


(1) The length of a nonzero vector is always positive.
Answer : True.
(2) A nonzero vector can be orthogonal to itself.
Answer : False.
(3) A nonzero vector can be contained in both W and $W^\perp$.
Answer : False.
(4) If two vectors are orthogonal, then they are linearly independent.
Answer : False. One of the vectors may be the zero vector. The state-
ment is true (by Theorem 4 in Section 6.2) if we require that the two vectors
are nonzero.
(5) If x is orthogonal to u and v, then it is orthogonal to au + bv for any scalars a, b.
Answer : True. Suppose $\langle x, u \rangle = 0$ and $\langle x, v \rangle = 0$. Then $\langle x, au + bv \rangle = a\langle x, u \rangle + b\langle x, v \rangle = a \cdot 0 + b \cdot 0 = 0$.
(6) There exists a 4 × 5 matrix A whose columns are nonzero and mutually orthogonal.
Answer : False. Suppose there exists such a matrix $A = [u_1 \cdots u_5]$ (with $u_i \in \mathbb{R}^4$). Then $\{u_1, \ldots, u_5\}$ would be a linearly independent set by Theorem 4 of Section 6.2. This contradicts the fact that $\mathbb{R}^4$ has dimension 4.
(7) Let $W = \mathrm{Span}\{u_1, u_2, u_3, u_4, u_5\}$. Then it is possible for W to have an orthonormal basis consisting of 2 vectors.
Answer : True. Example: Let $W = \mathrm{Span}\left\{ \begin{pmatrix} 1 \\ 0 \end{pmatrix}, \begin{pmatrix} 0 \\ 1 \end{pmatrix}, \begin{pmatrix} 3 \\ 4 \end{pmatrix}, \begin{pmatrix} 5 \\ 6 \end{pmatrix}, \begin{pmatrix} 7 \\ 8 \end{pmatrix} \right\}$. Then $\left\{ \begin{pmatrix} 1 \\ 0 \end{pmatrix}, \begin{pmatrix} 0 \\ 1 \end{pmatrix} \right\}$ is an orthonormal basis of W.
(8) If W is a subspace of $\mathbb{R}^n$ and v is any vector in $\mathbb{R}^n$, the projection of v onto W is always contained in W.
Answer : True.
(9) If W is a subspace of $\mathbb{R}^n$ and v is any vector in $\mathbb{R}^n$, then $v - \mathrm{proj}_W v$ is orthogonal to W.
Answer : True.
(10) Let W be a subspace of $\mathbb{R}^n$ and v a vector in $\mathbb{R}^n$. There exists exactly one vector w in W such that v − w is orthogonal to W.
Answer : True.
(11) If $W = \mathrm{Span}\{u_1, \ldots, u_m\} \subset \mathbb{R}^n$, the orthogonal projection of v onto W is
$$\frac{v \cdot u_1}{u_1 \cdot u_1} u_1 + \cdots + \frac{v \cdot u_m}{u_m \cdot u_m} u_m.$$
Answer : False. The given formula holds only when $u_1, \ldots, u_m$ are nonzero and mutually orthogonal.
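To see the formula fail numerically, here is a minimal numpy sketch (the random vectors are an arbitrary choice; generically they are not orthogonal, and the true projection is computed via least squares):

```python
import numpy as np

rng = np.random.default_rng(1)
u1, u2 = rng.standard_normal(3), rng.standard_normal(3)  # generically NOT orthogonal
v = rng.standard_normal(3)

# The formula from (11), applied to a non-orthogonal spanning set:
naive = (v @ u1) / (u1 @ u1) * u1 + (v @ u2) / (u2 @ u2) * u2

# The actual orthogonal projection of v onto W = Span{u1, u2}, via least squares:
U = np.column_stack([u1, u2])
coeffs, *_ = np.linalg.lstsq(U, v, rcond=None)
proj = U @ coeffs

print(np.allclose(naive, proj))  # almost surely False for random u1, u2
```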
(12) The orthogonal projection of y onto u is a scalar multiple of y.
Answer : False. The projection $\frac{y \cdot u}{u \cdot u} u$ is a scalar multiple of u, not of y in general.
(13) Let V and W be subspaces of $\mathbb{R}^n$ such that V is contained in W. For any $b \in \mathbb{R}^n$, we have
$$\mathrm{proj}_V(\mathrm{proj}_W b) = \mathrm{proj}_V b.$$
Answer : True. Let $\dim V = \ell$ and $\dim W = m$ (which implies $\ell \leq m$). Choose a basis $\{u_1, \ldots, u_\ell\}$ of V, then add vectors $\{u_{\ell+1}, \ldots, u_m\}$ so that $\{u_1, \ldots, u_m\}$ is a basis for W. Perform Gram–Schmidt on $\{u_1, \ldots, u_m\}$ to get an orthonormal basis $\{v_1, \ldots, v_m\}$. Notice that, by the Gram–Schmidt process, $\{v_1, \ldots, v_\ell\}$ is an orthogonal basis for V. Then
$$\mathrm{proj}_W b = \frac{b \cdot v_1}{v_1 \cdot v_1} v_1 + \cdots + \frac{b \cdot v_m}{v_m \cdot v_m} v_m.$$
For any $1 \leq i \leq m$, we have
$$(\mathrm{proj}_W b) \cdot v_i = \left( \frac{b \cdot v_1}{v_1 \cdot v_1} v_1 + \cdots + \frac{b \cdot v_m}{v_m \cdot v_m} v_m \right) \cdot v_i = \frac{b \cdot v_1}{v_1 \cdot v_1} (v_1 \cdot v_i) + \cdots + \frac{b \cdot v_m}{v_m \cdot v_m} (v_m \cdot v_i) = \frac{b \cdot v_i}{v_i \cdot v_i} (v_i \cdot v_i) = b \cdot v_i.$$
Thus we have
$$\mathrm{proj}_V(\mathrm{proj}_W b) = \frac{(\mathrm{proj}_W b) \cdot v_1}{v_1 \cdot v_1} v_1 + \cdots + \frac{(\mathrm{proj}_W b) \cdot v_\ell}{v_\ell \cdot v_\ell} v_\ell = \frac{b \cdot v_1}{v_1 \cdot v_1} v_1 + \cdots + \frac{b \cdot v_\ell}{v_\ell \cdot v_\ell} v_\ell = \mathrm{proj}_V b.$$
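A numerical check of this identity (a sketch: the subspaces are built from the Q factor of a QR decomposition, whose columns are orthonormal; sizes and seed are arbitrary):

```python
import numpy as np

rng = np.random.default_rng(2)
Q, _ = np.linalg.qr(rng.standard_normal((5, 3)))  # orthonormal columns; W = Col Q
V = Q[:, :2]                       # the first two columns span V, a subspace of W
b = rng.standard_normal(5)

proj_W_b = Q @ (Q.T @ b)           # projection onto W (orthonormal basis: Q Q^T b)
print(np.allclose(V @ (V.T @ proj_W_b),   # proj_V(proj_W b)
                  V @ (V.T @ b)))         # proj_V b  -> True
```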
(14) The set of all vectors in $\mathbb{R}^n$ orthogonal to one fixed vector is a subspace of $\mathbb{R}^n$.
Answer : True.
(15) If W is a subspace of $\mathbb{R}^n$, then W and $W^\perp$ have no vectors in common.
Answer : False. They have (only) the zero vector in common.
(16) Let 0 be the zero vector in $\mathbb{R}^n$. Then $\{0\}^\perp = \mathbb{R}^n$.
Answer : True. Every vector is orthogonal to the zero vector.
(17) Let W be a subspace of $\mathbb{R}^n$. Then $\dim W + \dim W^\perp = n$.
Answer : True. Let $W = \mathrm{Span}\{u_1, \ldots, u_m\}$ and $A = [u_1 \cdots u_m]$. Then $\dim W = \dim \mathrm{Col}\, A = \dim \mathrm{Row}\, A = \dim \mathrm{Col}\, A^T$. On the other hand, $W^\perp = (\mathrm{Col}\, A)^\perp = \mathrm{Nul}\, A^T$. Thus $\dim W + \dim W^\perp = \dim \mathrm{Col}\, A^T + \dim \mathrm{Nul}\, A^T = n$ (where the last equality follows from the Rank–Nullity Theorem).
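A numerical illustration (a sketch; the rank-3 matrix is an arbitrary construction, and dim Nul $A^T$ is counted as the number of near-zero singular values of $A^T$):

```python
import numpy as np

rng = np.random.default_rng(3)
A = rng.standard_normal((5, 3)) @ rng.standard_normal((3, 5))  # 5x5, rank 3
n = A.shape[0]

dim_W = np.linalg.matrix_rank(A)     # dim W = dim Col A
# dim W-perp = dim Nul A^T = number of (near-)zero singular values of A^T
s = np.linalg.svd(A.T, compute_uv=False)
dim_W_perp = int(np.sum(s < 1e-10))

print(dim_W, dim_W_perp, dim_W + dim_W_perp == n)  # 3 2 True
```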
(18) If $\{v_1, v_2, v_3\}$ is an orthogonal set and if $c_1, c_2, c_3$ are scalars, then $\{c_1 v_1, c_2 v_2, c_3 v_3\}$ is an orthogonal set.
Answer : True. The assumption implies that $v_i \cdot v_j = 0$ if $i \neq j$. Thus $(c_i v_i) \cdot (c_j v_j) = c_i c_j (v_i \cdot v_j) = 0$ if $i \neq j$.
(19) Let A be an m × n matrix such that the ith column is $u_i \in \mathbb{R}^m$. Then the (i, j)th entry of $A^T A$ is $u_i \cdot u_j$.
Answer : True.
(20) If an m × n matrix U has orthogonal columns, then necessarily m ≥ n.
Answer : False. Consider $A = \begin{pmatrix} 0 & 1 & 0 \\ 0 & 0 & 1 \end{pmatrix}$ or $\begin{pmatrix} 0 & 0 & 0 \\ 0 & 0 & 0 \end{pmatrix}$. (By Theorem 4 of Section 6.2, the existence of a column of zeros is the only way a matrix with orthogonal columns may have more columns than rows.)
(21) If an m × n matrix U has orthonormal columns, then necessarily m ≥ n.
Answer : True. Suppose that $U = [u_1 \cdots u_n]$. Each vector $u_i$ is nonzero because it has norm 1. Then by Theorem 4 in Section 6.2, $\{u_1, \ldots, u_n\}$ is a linearly independent set (of vectors in $\mathbb{R}^m$). Since $\mathbb{R}^m$ has dimension m, we must have m ≥ n.
(22) If an m × n matrix U has orthonormal columns, then $U^T U = I_n$.
Answer : True.
(23) If an m × n matrix U has orthonormal columns, then $UU^T = I_m$.
Answer : False.
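Both statements (22) and (23) are easy to check numerically; a sketch (using the Q factor of a QR decomposition as a 5 × 3 matrix with orthonormal columns):

```python
import numpy as np

rng = np.random.default_rng(4)
U, _ = np.linalg.qr(rng.standard_normal((5, 3)))  # 5x3, orthonormal columns

print(np.allclose(U.T @ U, np.eye(3)))  # True:  U^T U = I_n
print(np.allclose(U @ U.T, np.eye(5)))  # False: U U^T projects onto Col U
```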
(24) A square matrix with orthogonal columns is an orthogonal matrix.
Answer : False. The columns must also be unit vectors; that is, they must be orthonormal, not merely orthogonal.
(25) If a square matrix has orthogonal columns, then it also has orthogonal rows.
Answer : False. Consider $A = \begin{pmatrix} 0 & 1 \\ 0 & 1 \end{pmatrix}$.
(26) If a square matrix has orthonormal columns, then it also has orthonormal rows.
Answer : True. If a matrix A has orthonormal columns, then $A^T A = I$. By the Invertible Matrix Theorem, A is invertible and its inverse is $A^T$, so $AA^T = I$. Thus the rows of A are orthonormal.
(27) If W is a subspace of $\mathbb{R}^n$, then $\|\mathrm{proj}_W v\|^2 + \|v - \mathrm{proj}_W v\|^2 = \|v\|^2$.
Answer : True. This is the Pythagorean Theorem, applied to the orthogonal vectors $\mathrm{proj}_W v$ and $v - \mathrm{proj}_W v$.
(28) If $\hat{x}$ is a least squares solution to $Ax = b$, then $A\hat{x} \neq b$.
Answer : False. If $b \in \mathrm{Col}\, A$, then $A\hat{x} = \mathrm{proj}_{\mathrm{Col}\, A}\, b = b$.
(29) If $\hat{x}$ is a least squares solution to $Ax = b$, then $A\hat{x} \in \mathrm{Col}\, A$.
Answer : True. For any vector x, we have $Ax \in \mathrm{Col}\, A$.
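A numpy sketch of both cases in (28) and (29) (the matrix and vectors are arbitrary choices; np.linalg.lstsq computes a least-squares solution):

```python
import numpy as np

rng = np.random.default_rng(5)
A = rng.standard_normal((4, 2))

# Case 1: b in Col A, so the least-squares solution solves Ax = b exactly.
b_in = A @ np.array([1.0, -2.0])
x_hat, *_ = np.linalg.lstsq(A, b_in, rcond=None)
print(np.allclose(A @ x_hat, b_in))   # True

# Case 2: generic b: A x_hat = proj_{Col A} b lies in Col A but differs from b.
b = rng.standard_normal(4)
x_hat, *_ = np.linalg.lstsq(A, b, rcond=None)
print(np.allclose(A @ x_hat, b))      # almost surely False
```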
(30) If A is an orthogonal matrix, then A is diagonalizable.
Answer : False. Some orthogonal matrices with real coefficients, like $A = \begin{pmatrix} 0 & -1 \\ 1 & 0 \end{pmatrix}$, have non-real complex eigenvalues, hence are not diagonalizable with real coefficients (i.e. there does not exist an invertible matrix P with real coefficients and a diagonal matrix D with real coefficients such that $A = PDP^{-1}$).
If all the eigenvalues of an orthogonal matrix A are real, then we can use Exercise 16 on page 337 (combined with the fact that any orthogonal upper-triangular matrix is diagonal) to show that A is diagonalizable with real coefficients.
The same exercise also shows that any unitary matrix A (i.e. one satisfying $\overline{A}^T A = I_n$) can be diagonalized with complex coefficients.
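A quick numerical confirmation for the rotation matrix used above (a minimal sketch):

```python
import numpy as np

A = np.array([[0.0, -1.0],
              [1.0,  0.0]])            # rotation by 90 degrees
print(np.allclose(A.T @ A, np.eye(2)))  # True: A is orthogonal
print(np.linalg.eigvals(A))             # [0.+1.j  0.-1.j]: no real eigenvalues,
                                        # so A cannot be diagonalized over R
```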

Section 7.1: Diagonalization of symmetric matrices


(1) Orthogonally diagonalizable matrices are symmetric.
Answer : True.
(2) Diagonal matrices are symmetric.
Answer : True.
(3) Symmetric matrices are diagonal.
Answer : False.
(4) Diagonalizable matrices are symmetric.
Answer : False.
(5) Symmetric matrices are diagonalizable.
Answer : True if you interpret “symmetric” to mean “real symmetric”.
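A numerical illustration of items (1) and (5) together (a sketch: np.linalg.eigh is numpy's eigensolver for symmetric matrices, and the symmetric test matrix is an arbitrary construction):

```python
import numpy as np

rng = np.random.default_rng(6)
M = rng.standard_normal((4, 4))
A = (M + M.T) / 2                      # a real symmetric matrix

w, P = np.linalg.eigh(A)               # eigensolver for symmetric matrices
print(w.dtype)                              # float64: the eigenvalues are real
print(np.allclose(P.T @ P, np.eye(4)))      # True: eigenvector matrix is orthogonal
print(np.allclose(P @ np.diag(w) @ P.T, A)) # True: A = P D P^T
```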
(6) Orthogonal matrices are symmetric.
Answer : False.
(7) If A is an orthogonal matrix then $\|Ax\| = \|x\|$ for all $x \in \mathbb{R}^n$.
Answer : True.
(8) There exists a symmetric matrix A with eigenvalue i + 1.
Answer : True if you count complex symmetric matrices, but false if we're talking about real symmetric matrices.
(9) All the eigenvalues of a diagonal matrix are real.
Answer : True if we're talking about diagonal matrices with real entries, but false for diagonal matrices with complex entries, since the eigenvalues of a diagonal matrix are exactly its diagonal entries.
 
(10) There exists a rotation matrix (i.e. a matrix of the form $\begin{pmatrix} \cos\theta & -\sin\theta \\ \sin\theta & \cos\theta \end{pmatrix}$ for some angle θ) which is symmetric.
Answer : True. Take θ = nπ for some integer n.
