Section 1.1 Solutions

~x + 2~v + ~x = ~v + (−~x) + ~x
1~x + 2~v + 1~x = 1~v + ~0                by V10 and V5
1~x + 1~x + 2~v = 1~v                     by V3 and V4
1~x + 1~x + 2~v − 2~v = 1~v − 2~v         add −2~v on the right
(1 + 1)~x + (2 − 2)~v = (1 − 2)~v         by V8
2~x + 0~v = (−1)~v                        by operations on R
2~x + ~0 = (−1)~v                         by the proof of V4
2~x = (−1)~v                              by V4
(1/2)[2~x] = (1/2)[(−1)~v]                multiply by 1/2
1~x = (−1/2)~v                            by V7
~x = (−1/2)~v                             by V10
1.1.3 (a) (i) Taking the part along the x2-axis to be the base gives that the length of the base is 2 and the height is 1, so the area is 2. [Figure: the parallelogram sketched in the x1x2-plane.]
(ii) The parallelogram is flat, so the area of the parallelogram is 0. [Figure: the flat parallelogram sketched along a line in the x1x2-plane.]
A = (u1 + v1)(u2 + v2) − (1/2)v1v2 − u2v1 − (1/2)u2u1 − (1/2)v1v2 − u2v1 − (1/2)u1u2
  = u1v2 − u2v1
Section 1.2 Solutions
[Figure: the parallelogram with vertices (0, 0), (u1, u2), (v1, v2), and (u1 + v1, u2 + v2), with sides ~u and ~v and diagonal ~u + ~v.]
NOTE: We may have to take the absolute value of this, if we do not order the vectors to match the
diagram. In Chapter 5 we will see that this quantity is called the determinant of a 2 × 2 matrix.
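The signed-area formula above is easy to sanity-check. The sketch below is ours, not part of the original solutions: `signed_area` computes u1v2 − u2v1 directly, and the sample vectors for part (a)(i) are one assumed choice of base-2, height-1 pair, not taken from the exercise.

```python
# Hedged sketch of the area formula A = u1*v2 - u2*v1 derived above.
def signed_area(u, v):
    """Signed area of the parallelogram spanned by u, v in R^2.

    This is the 2x2 determinant u1*v2 - u2*v1; take abs() for the
    geometric area, as the NOTE above suggests.
    """
    return u[0] * v[1] - u[1] * v[0]

# Part (a)(i): a base of length 2 along an axis and height 1 gives area 2.
# (These particular vectors are an assumed example, not from the text.)
area_i = abs(signed_area((2, 0), (1, 1)))    # 2
# Part (a)(ii): parallel vectors give a flat parallelogram of area 0.
area_ii = abs(signed_area((1, 2), (2, 4)))   # 0
```

Taking the absolute value handles the ordering issue mentioned in the note.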
(d) The set only contains two vectors, so the set represents the two points (1, −3, 1), (−2, 6, −2) in R3 .
(e) Since {(1, 0, −2), (2, 1, −1)} is linearly independent (neither vector is a scalar multiple of the other), we get that the set represents the plane in R3 with vector equation ~x = s(1, 0, −2) + t(2, 1, −1), s, t ∈ R.
(f) The set only contains the zero vector, so it represents the origin in R3. A vector equation is ~x = (0, 0, 0).
(g) The set is linearly independent (verify this) and so the set represents a hyperplane in R4 with vector
equation
~x = t1(1, 0, 1, 1) + t2(1, 0, 2, 1) + t3(3, 1, 0, 0), t1, t2, t3 ∈ R
(h) The set represents the line in R4 with vector equation ~x = t(1, 1, 1, 0), t ∈ R.
1.2.2 (a) Observe that (−1)(1, 2) + 2(1, 3) − (1, 4) = (0, 0). Hence, the set is linearly dependent. Solving for the first vector gives (1, 2) = 2(1, 3) − (1, 4). Note that we could have solved for the second or third vector instead.
(b) Since neither vector is a scalar multiple of the other, the set is linearly independent.
(c) Since neither vector is a scalar multiple of the other, the set is linearly independent.
(d) Observe that 2(2, 3) + (−4, −6) = (0, 0). Hence, the set is linearly dependent. Solving for the second vector gives (−4, −6) = (−2)(2, 3).
(e) The only solution to c(1, 2, 1) = (0, 0, 0) is c = 0, so the set is linearly independent.
(f) Since the set contains the zero vector, it is linearly dependent by Theorem 1.2.3. Solving for the zero vector gives (0, 0, 0) = 0(1, −3, −2) + 0(4, 6, 1).
(g) Observe that 0(1, 1, 0) + 2(1, 2, −1) + (−2, −4, 2) = (0, 0, 0). Hence, the set is linearly dependent. Solving for the second vector gives (1, 2, −1) = 0(1, 1, 0) − (1/2)(−2, −4, 2).
(h) Consider
(0, 0, 0) = c1(1, −2, 1) + c2(2, 3, 4) + c3(0, −1, −2)
Since vectors are equal if and only if their corresponding entries are equal, we get the three equations in three unknowns
c1 + 2c2 = 0
−2c1 + 3c2 − c3 = 0
c1 + 4c2 − 2c3 = 0
The first equation implies that c1 = −2c2 . Substituting this into the second equation we get
7c2 − c3 = 0. Thus, c3 = 7c2 . Substituting these both into the third equation gives −12c2 = 0.
Therefore, the only solution is c1 = c2 = c3 = 0. Hence, the set is linearly independent.
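The elimination argument above can be automated. The following sketch is our own helper (stdlib only, exact rational arithmetic), not part of the original solutions: it tests linear independence by checking that the matrix with the given vectors as columns has full column rank.

```python
from fractions import Fraction

def is_independent(vectors):
    """Return True iff the given vectors (tuples in R^n) are linearly
    independent, i.e. the only solution of c1 v1 + ... + ck vk = 0 is
    c = 0. Implemented as Gaussian elimination over the rationals on
    the matrix whose columns are the vectors."""
    rows = [[Fraction(v[i]) for v in vectors] for i in range(len(vectors[0]))]
    rank = 0
    for col in range(len(vectors)):
        piv = next((r for r in range(rank, len(rows)) if rows[r][col] != 0), None)
        if piv is None:
            continue
        rows[rank], rows[piv] = rows[piv], rows[rank]
        p = rows[rank][col]
        rows[rank] = [x / p for x in rows[rank]]
        for r in range(len(rows)):
            if r != rank and rows[r][col] != 0:
                f = rows[r][col]
                rows[r] = [a - f * b for a, b in zip(rows[r], rows[rank])]
        rank += 1
    return rank == len(vectors)
```

For the set in (h), `is_independent([(1, -2, 1), (2, 3, 4), (0, -1, -2)])` returns True, matching the hand computation.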
(i) Observe that
(−2)(1, 1, 2, 1) + (2, 2, 4, 2) + 0(1, 0, 1, 0) + 0(2, 1, 3, 1) = (0, 0, 0, 0)
Hence, the set is linearly dependent. Solving for the second vector gives
(2, 2, 4, 2) = 2(1, 1, 2, 1) + 0(1, 0, 1, 0) + 0(2, 1, 3, 1)
(j) Since neither vector is a scalar multiple of the other, the set is linearly independent.
1.2.3 (a) Clearly (1, 0) ∉ Span{(3, 2)}. Thus, {(3, 2)} does not span R2 and so is not a basis for R2.
(b) Since neither vector is a scalar multiple of the other, the set is linearly independent. Let ~x = (x1, x2) ∈ R2 and consider
(x1, x2) = c1(2, 3) + c2(1, 0) = (2c1 + c2, 3c1)
Comparing entries we get
x1 = 2c1 + c2
x2 = 3c1
Thus, c1 = x2/3 and c2 = x1 − 2x2/3. Since there is a solution for all ~x ∈ R2, we have that {(2, 3), (1, 0)} spans R2.
Since the set spans R2 and is linearly independent, it is a basis for R2 .
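The coefficient formulas c1 = x2/3 and c2 = x1 − 2x2/3 can be confirmed for sample vectors; the sketch below (the helper and the sample points are our own) rebuilds each ~x exactly over the rationals.

```python
from fractions import Fraction

# Coefficients found in part (b): x = c1*(2,3) + c2*(1,0).
def coeffs(x1, x2):
    c1 = Fraction(x2, 3)
    c2 = Fraction(x1) - 2 * Fraction(x2, 3)
    return c1, c2

# Check that the combination reproduces (x1, x2) at a few test points.
for x1, x2 in [(1, 0), (0, 1), (5, -7)]:
    c1, c2 = coeffs(x1, x2)
    assert (2 * c1 + c2, 3 * c1) == (x1, x2)
```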
(c) Since neither vector is a scalar multiple of the other, the set is linearly independent. Let ~x = (x1, x2) ∈ R2 and consider
(x1, x2) = c1(−1, 1) + c2(1, 3) = (−c1 + c2, c1 + 3c2)
Comparing entries we get
x1 = −c1 + c2
x2 = c1 + 3c2
Solving, we get c2 = (x1 + x2)/4 and c1 = (−3x1 + x2)/4. Since there is a solution for all ~x ∈ R2, we have that {(−1, 1), (1, 3)} spans R2.
Since the set spans R2 and is linearly independent, it is a basis for R2 .
(d) Observe from our work in (c) that
(1/2)(−1, 1) + (1/2)(1, 3) = (0, 2)
Thus, the set is linearly dependent and hence is not a basis.
(e) Since neither vector is a scalar multiple of the other, the set is linearly independent. Let ~x = (x1, x2) ∈ R2 and consider
(x1, x2) = c1(1, 0) + c2(−3, 5) = (c1 − 3c2, 5c2)
Comparing entries we get
x1 = c1 − 3c2
x2 = 5c2
Solving, we get c2 = x2/5 and c1 = x1 + 3x2/5. Since there is a solution for all ~x ∈ R2, we have that {(1, 0), (−3, 5)} spans R2.
Since the set spans R2 and is linearly independent, it is a basis for R2 .
(f) The set contains the zero vector and so is linearly dependent by Theorem 1.2.3. Thus, the set
cannot be a basis.
1.2.4 (a) The set contains the zero vector and so is linearly dependent by Theorem 1.2.3. Thus, the set
cannot be a basis.
(b) Let ~x = (x1, x2, x3) be any vector in R3. Consider the equation
(x1, x2, x3) = c1(−1, 2, −1) + c2(1, 1, 2) = (−c1 + c2, 2c1 + c2, −c1 + 2c2)
(x1, x2, x3) = c1(1, 1, −1) + c2(1, 2, −3) + c3(−1, −1, 2) = (c1 + c2 − c3, c1 + 2c2 − c3, −c1 − 3c2 + 2c3)    (1.2)
c1 + c2 − c3 = x1
c1 + 2c2 − c3 = x2
−c1 − 3c2 + 2c3 = x3
c1 = x1 + x2 + x3
c2 = −x1 + x2
c3 = −x1 + 2x2 + x3
Thus, ~x is in the span of {(1, 1, −1), (1, 2, −3), (−1, −1, 2)}. Hence, the set spans R3.
Moreover, if we let x1 = x2 = x3 = 0 in equation (1.2), we get that the only solution is c1 = c2 =
c3 = 0, so the set is also linearly independent. Hence, it is a basis for R3 .
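The coefficient formulas c1 = x1 + x2 + x3, c2 = −x1 + x2, c3 = −x1 + 2x2 + x3 can be verified mechanically; the test vectors in the sketch below are our own choice.

```python
# Rebuild sample vectors of R^3 from the coefficient formulas above.
v1, v2, v3 = (1, 1, -1), (1, 2, -3), (-1, -1, 2)

for x in [(1, 0, 0), (0, 1, 0), (0, 0, 1), (2, -3, 5)]:
    x1, x2, x3 = x
    c1 = x1 + x2 + x3
    c2 = -x1 + x2
    c3 = -x1 + 2 * x2 + x3
    combo = tuple(c1 * a + c2 * b + c3 * c for a, b, c in zip(v1, v2, v3))
    assert combo == x
```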
1.2.5 There are infinitely many correct answers. One simple choice is {(1, 0, 0), (0, 1, 0)}. Since the vectors in the set are standard basis vectors for R3, we know they are linearly independent and hence the set forms a basis for a hyperplane in R3.
1.2.6 Assume that {~v1 , ~v2 } is linearly independent. For a contradiction, assume without loss of generality that
~v1 is a scalar multiple of ~v2 . Then ~v1 = t~v2 and hence ~v1 − t~v2 = ~0. This contradicts the fact that {~v1 , ~v2 }
is linearly independent since the coefficient of ~v1 is non-zero.
On the other hand, assume that {~v1, ~v2} is linearly dependent. Then there exists c1, c2 ∈ R not both zero such that c1~v1 + c2~v2 = ~0. Without loss of generality assume that c1 ≠ 0. Then ~v1 = −(c2/c1)~v2 and hence ~v1 is a scalar multiple of ~v2.
1.2.7 If ~v1 ∈ Span{~v2, ~v3}, then there exists c1, c2 ∈ R such that ~v1 = c1~v2 + c2~v3. Thus,
(−1)~v1 + c1~v2 + c2~v3 = ~0
where the coefficient of ~v1 is non-zero, so {~v1, ~v2, ~v3} is linearly dependent.
1.2.8 (a) Let ~z = (0, 0, 1) and consider
(0, 0, 1) = c1(1, 1, 0) + c2(0, 1, 2) = (c1, c1 + c2, 2c2)
Comparing entries gives c1 = 0, c1 + c2 = 0, and 2c2 = 1. Since no such c1 and c2 exist, we get that ~z ∉ Span{~x, ~y}. Therefore, {~x, ~y} does not span R3 and hence is not a basis for R3.
(b) Since ~z ∉ Span{~x, ~y} we get that {~x, ~y, ~z} is linearly independent by Theorem 1.2.2.
Consider
(a, b, d) = c1(1, 1, 0) + c2(0, 1, 2) + c3(0, 0, 1) = (c1, c1 + c2, 2c2 + c3)
Comparing entries gives c1 = a, c1 + c2 = b, and 2c2 + c3 = d. Solving for c1, c2, c3 gives
c1 = a, c2 = b − a, c3 = d − 2(b − a)
has solution c1 = a, c2 = b − a, and c3 = 1t (c − 2(b − a)), so that {~x, ~y, t~z} also spans R3 .
(d) Yes, there are infinitely many such vectors w~. One choice is w~ = (1, 0, 0). It is left as an exercise for the reader to repeat the steps in parts (a) and (b) to prove this.
1.2.9 (a) Let ~z = (1, 0, 0) and consider
(1, 0, 0) = c1(1, 2, −1) + c2(0, 1, 2) = (c1, 2c1 + c2, −c1 + 2c2)
Comparing entries gives c1 = 1, 2c1 + c2 = 0, and −c1 + 2c2 = 0. Since no such c1 and c2 exist, we get that ~z ∉ Span{~x, ~y}. Therefore, {~x, ~y} does not span R3 and hence is not a basis for R3.
(b) Since ~z ∉ Span{~x, ~y} we get that {~x, ~y, ~z} is linearly independent by Theorem 1.2.2.
Let ~a = (a, b, d) be any vector in R3 and consider
(a, b, d) = c1(1, 2, −1) + c2(0, 1, 2) + c3(1, 0, 0) = (c1 + c3, 2c1 + c2, −c1 + 2c2)
Comparing entries gives c1 + c3 = a, 2c1 + c2 = b, and −c1 + 2c2 = d. Solving for c1, c2, c3 gives
c1 = (2/5)b − (1/5)d, c2 = (1/5)b + (2/5)d, c3 = a − (2/5)b + (1/5)d
Therefore, every vector in R3 is in Span{~x, ~y,~z}, so {~x, ~y,~z} spans R3 .
Since {~x, ~y,~z} spans R3 and is linearly independent, it is a basis for R3 .
(c) No, if we take t = 0, then the set {~x, ~y, t~z} will contain the zero vector and hence will be linearly dependent by Theorem 1.2.3.
If t ≠ 0, then we can repeat our work in (a) to get that {~x, ~y, t~z} is linearly independent. We can also repeat our work in part (b) to find that
(a, b, c) = d1~x + d2~y + d3(t~z)
has solution d1 = (2/5)b − (1/5)c, d2 = (1/5)b + (2/5)c, and d3 = (1/t)(a − (2/5)b + (1/5)c), so that {~x, ~y, t~z} also spans R3.
(d) Yes, there are infinitely many such vectors w~. One choice is w~ = (0, 0, 1). It is left as an exercise for the reader to repeat the steps in parts (a) and (b) to prove this.
1.2.10 Assume for a contradiction that {~v1, ~v2} is linearly dependent. Then there exists c1, c2 ∈ R with c1, c2 not both zero such that c1~v1 + c2~v2 = ~0. Hence, we have
c1~v1 + c2~v2 + 0~v3 = ~0
with not all coefficients equal to zero, which contradicts the fact that {~v1, ~v2, ~v3} is linearly independent.
1.2.11 To prove this, we will prove that each set is a subset of the other.
Let ~x ∈ Span{~v1, ~v2}. Then there exists c1, c2 ∈ R such that ~x = c1~v1 + c2~v2. Since t ≠ 0 we get
~x = c1~v1 + (c2/t)(t~v2)
so ~x ∈ Span{~v1 , t~v2 }. Thus, Span{~v1 , ~v2 } ⊆ Span{~v1 , t~v2 }.
If ~y ∈ Span{~v1, t~v2}, then there exists d1, d2 ∈ R such that
~y = d1~v1 + d2(t~v2) = d1~v1 + (d2t)~v2
Hence, we also have Span{~v1, t~v2} ⊆ Span{~v1, ~v2}. Therefore, Span{~v1, ~v2} = Span{~v1, t~v2}.
1.2.12 Assume that {~v1, ~v2} is linearly independent and ~v3 ∉ Span{~v1, ~v2}. Consider
c1~v1 + c2~v2 + c3~v3 = ~0
If c3 ≠ 0, then we have ~v3 = −(c1/c3)~v1 − (c2/c3)~v2 which contradicts the fact that ~v3 ∉ Span{~v1, ~v2}. Hence, c3 = 0.
Thus, we have c1~v1 + c2~v2 + c3~v3 = ~0 implies c1~v1 + c2~v2 = ~0. Since {~v1 , ~v2 } is linearly independent, the
only solution to this is c1 = c2 = 0. Therefore, we have shown the only solution to c1~v1 + c2~v2 + c3~v3 = ~0
is c1 = c2 = c3 = 0, so {~v1 , ~v2 , ~v3 } is linearly independent.
1.2.13 We will prove that the sets are subsets of each other.
Let ~x ∈ Span{~a, ~b, ~c}. Then, by definition, there exists f1, f2, f3 ∈ R such that ~x = f1~a + f2~b + f3~c. We
can write this as
~x = (f1/r)(r~a) + (f2/s)(s~b) + (f3/t)(t~c) ∈ Span{r~a, s~b, t~c}
So, Span{~a, ~b, ~c} ⊆ Span{r~a, s~b, t~c}.
On the other hand, let ~y ∈ Span{r~a, s~b, t~c}. Then, by definition, there exists g1 , g2 , g3 ∈ R such that
~y = g1 (r~a) + g2 (s~b) + g3 (t~c) = (g1 r)~a + (g2 s)~b + (g3 t)~c ∈ Span{~a, ~b, ~c}
(c) The statement is true. Let ~x ∈ R2 . Since {~v1 , ~v2 } spans R2 there exists c1 , c2 ∈ R such that
~x = c1~v1 + c2~v2 . Then we get
Span{(1, 0, 0), (0, 1, 0)} ≠ S ∪ T
since ~x = (1, 1, 0) ∈ Span{(1, 0, 0), (0, 1, 0)}, but ~x ∉ S ∪ T since ~x ∉ S and ~x ∉ T.
Section 1.3 Solutions
and
(x1 + y1 ) + (x2 + y2 ) = x1 + x2 + y1 + y2 = x3 + y3
So, ~x + ~y satisfies the condition of S4 , so ~x + ~y ∈ S4 .
Similarly, for any c ∈ R, c~x = (cx1, cx2, cx3) and
cx1 + cx2 = c(x1 + x2) = cx3
so c~x ∈ S4.
Thus, by the Subspace Test, S4 is a subspace of R3 .
To find a basis for S4 we need to find a linearly independent spanning set for S4 . We first find a
spanning set. Every vector ~x ∈ S4 satisfies x1 + x2 = x3 and so has the form
(x1, x2, x3) = (x1, x2, x1 + x2) = x1(1, 0, 1) + x2(0, 1, 1), x1, x2 ∈ R
Thus, B = {(1, 0, 1), (0, 1, 1)} spans S4. Moreover, neither vector is a scalar multiple of the other, so the
set is linearly independent. Thus, B is a basis for S4 and so S4 is a plane in R3 .
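A quick check that the basis vectors really lie in S4 and reproduce every vector of S4; the sketch and its sample points are ours, not part of the original solutions.

```python
# The basis vectors found above satisfy the defining condition x1 + x2 = x3.
basis = [(1, 0, 1), (0, 1, 1)]
for v in basis:
    assert v[0] + v[1] == v[2]

# And x1*(1,0,1) + x2*(0,1,1) rebuilds any (x1, x2, x1 + x2) in S4.
for x1, x2 in [(1, 2), (-3, 4), (0, 0)]:
    combo = tuple(x1 * a + x2 * b for a, b in zip((1, 0, 1), (0, 1, 1)))
    assert combo == (x1, x2, x1 + x2)
```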
(e) By definition, S5 is a non-empty subset of R4.
Let ~x, ~y ∈ S5. Then, ~x = (0, 0, 0, 0) and ~y = (0, 0, 0, 0). Hence
~x + ~y = (0, 0, 0, 0) + (0, 0, 0, 0) = (0, 0, 0, 0) ∈ S5
~x + ~y = ((a1 + a2) − (b1 + b2), (b1 + b2) − (d1 + d2), (a1 + a2) + (b1 + b2) − 2(d1 + d2), (c1 + c2) − (d1 + d2))
Now, consider
(0, 0, 0, 0) = c1(1, 0, 1, 0) + c2(−1, 1, 1, 0) + c3(0, 0, 0, 1) = (c1 − c2, c2, c1 + c2, c3)
Comparing entries gives c1 − c2 = 0, c2 = 0, c1 + c2 = 0, and c3 = 0. Thus, c1 = c2 = c3 = 0 is the only solution, so B = {(1, 0, 1, 0), (−1, 1, 1, 0), (0, 0, 0, 1)} is also linearly independent, and so is a basis for S8.
Therefore, S8 is a hyperplane in R4.
1.3.2 By definition, a plane P in R3 has vector equation ~x = c1~v1 + c2~v2 + ~b, c1 , c2 ∈ R where {~v1 , ~v2 } is linearly
independent.
If P is a subspace of R3 , then ~0 ∈ P and hence P passes through the origin. On the other hand, if P
passes through the origin, then there exists d1 , d2 ∈ R such that
~0 = d1~v1 + d2~v2 + ~b
Hence, ~b = −d1~v1 − d2~v2, so every ~x ∈ P can be written as ~x = c1~v1 + c2~v2 + ~b = (c1 − d1)~v1 + (c2 − d2)~v2.
Since c1 and c2 can be any real numbers, (c1 − d1) and (c2 − d2) can be any real numbers, so P = Span{~v1, ~v2}. Consequently, P is a subspace of R3 by Theorem 1.3.2.
1.3.3 Since S does not contain the zero vector it cannot be a subspace.
1.3.4 (a) Let ~x = (x1, x2, x3) be any vector in R3. Consider the equation
(x1, x2, x3) = c1(1, 3, 1) + c2(2, 0, 1) = (c1 + 2c2, 3c1, c1 + c2)
c1 + 2c2 = x1
3c1 = x2
c1 + c2 = x3
Subtracting the third equation from the first equation gives c2 = x1 − x3 . Substituting this and the
second equation into the third equation we get
(1/3)x2 + x1 = 2x3
Thus, ~x is in the span of {(1, 3, 1), (2, 0, 1)} if and only if (1/3)x2 + x1 = 2x3. Thus, the vector (1, 0, 0) is not in the span, so B does not span R3 and hence is not a basis for R3.
(b) From our work in (a), we have that B is a basis for the subspace
S = {~x ∈ R3 | (1/3)x2 + x1 = 2x3} = Span{(1, 3, 1), (2, 0, 1)}
1.3.5 A set S is a subset of Rn if every element of S is in Rn . For S to be a subspace, it not only has to be a
subset, but it also must be closed under addition and scalar multiplication of vectors (by the Subspace
Test).
1.3.6 (a) Observe that 0 − 2(1) + 2 = 0 and 1 − 2(0) + (−1) = 0, so B ⊂ S1 . Therefore, since S1 is closed
under linear combinations, we have Span B ⊆ S1 .
Let ~x = (x1, x2, x3) ∈ S1. Then, we can find that x1 = 2x2 − x3. Consider
(2x2 − x3, x2, x3) = c1(0, 1, 2) + c2(1, 0, −1) = (c2, c1, 2c1 − c2)
Comparing entries gives c2 = 2x2 − x3 , c1 = x2 , 2c1 − c2 = x3 . Observe that all three equations are
valid for all x1 , x2 , x3 ∈ R (the third equation is consistent with the first two). Thus, every ~x ∈ S1
can be written as a linear combination of the vectors in B. Hence, we also have S1 ⊆ Span B.
Consequently, B spans S1 . Additionally, B is linearly independent since neither vector is a scalar
multiple of the other. Thus, B is a basis for S1 .
(b) Observe that ~x = (1, 0, 2) ∈ S2 since 2 = 2(1) − 3(0). But, ~x ∉ Span B, so B does not span S2 and so is not a basis for S2.
(c) Observe that ~x = (−2, 1, 0) ∈ S3 since (−2) + 2(1) = 0. But, ~x ∉ Span B, so B does not span S3 and so is not a basis for S3.
(d) Observe that 4 = 2(2) and 2 = (−4)(−1/2), so B ⊂ S4 . Thus, since S4 is closed under linear
combinations, we get that Span B ⊆ S4 .
Let ~x = (x1, x2, x3) ∈ S4. Since x2 = −4x3 and x1 = 2x2 = 2(−4x3) = −8x3, we can write ~x in the form
(−8x3, −4x3, x3) = −2x3(4, 2, −1/2)
(x1, x2, 3x1 + x2) = c1(2, 1, 7) + c2(1, −1, 2) = (2c1 + c2, c1 − c2, 7c1 + 2c2)
Comparing entries gives 2c1 + c2 = x1, c1 − c2 = x2, and 3x1 + x2 = 7c1 + 2c2. The first two equations give c1 = (1/3)(x1 + x2) and c2 = (1/3)x1 − (2/3)x2. Observe that we then have
7c1 + 2c2 = (7/3)x1 + (7/3)x2 + (2/3)x1 − (4/3)x2 = 3x1 + x2
So, the third equation is consistent with the first two. Therefore, every ~x ∈ S5 is in Span B.
Consequently, S5 ⊆ Span B. So, Span B = S5 .
Since neither vector in B is a scalar multiple of the other, we get that B is also linearly independent.
Hence, B is a basis for S5 .
1.3.7 By definition, {(1, −1, 3), (−1, 0, 2), (0, 1, −5)} is a spanning set for S.
Observe that (0, 1, −5) = −(1, −1, 3) − (−1, 0, 2). So, by Theorem 1.2.1, B = {(1, −1, 3), (−1, 0, 2)} is also a spanning set for S.
Since neither vector in B is a scalar multiple of the other, we get that B is also linearly independent. Hence, B is a basis for S.
These imply that a = 0 and c = −2b for any b ∈ R. We choose b = 1, so that we have a = 0, b = 1,
c = −2.
We will now prove that this choice does work. First, observe that B is linearly independent since neither
vector is a scalar multiple of the other.
We have picked a, b, c so that B ⊂ S, so Span B ⊆ S.
Let ~x ∈ S. Since the condition on S is x2 − 2x3 = 0, we get that ~x can have the form ~x = (x1, x2, (1/2)x2). Consider
(x1, x2, (1/2)x2) = c1(1, 2, 1) + c2(2, 2, 1) = (c1 + 2c2, 2c1 + 2c2, c1 + c2)
(b) If P had a basis, then there would be a spanning set for P. But, this would contradict Theorem
1.3.2, since P is not a subspace.
(c) If a = b = c = 0, then P is the empty set. If a ≠ 0, then every vector ~x ∈ P has the form
(x1, x2, x3) = (d/a − (b/a)x2 − (c/a)x3, x2, x3) = (d/a, 0, 0) + x2(−b/a, 1, 0) + x3(−c/a, 0, 1), x2, x3 ∈ R
1.4.2 (a) Since the plane passes through (0, 0, 0) we get that a scalar equation is
(b) For two planes to be parallel, their normal vectors must be scalar multiples of each other. Hence,
a normal vector for the required plane is ~n = (3, 2, −1). Thus, a scalar equation of the plane is
Since the plane passes through the origin, we get that a scalar equation is
2x1 − x2 + x3 = 0
(d) We have (3, 3, 3) × (2, 1, −1) = (−6, 9, −3). Any non-zero scalar multiple of this will be a normal vector for the plane. We pick ~n = (−2, 3, −1). Since the plane passes through the origin, we get that a scalar equation is
−2x1 + 3x2 − x3 = 0
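The cross-product computations in these parts can be reproduced with a small helper; the function below is ours (it implements the standard R3 cross product), not taken from the text.

```python
def cross(a, b):
    """Standard cross product in R^3. The result is orthogonal to both
    inputs, so it serves as a normal vector for the plane they span."""
    return (a[1] * b[2] - a[2] * b[1],
            a[2] * b[0] - a[0] * b[2],
            a[0] * b[1] - a[1] * b[0])

def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

# The computation in (d): (3,3,3) x (2,1,-1) = (-6, 9, -3).
n = cross((3, 3, 3), (2, 1, -1))
```

Dotting `n` with either input returns 0, confirming it is a valid normal vector.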
Since the plane passes through the origin, we get that a scalar equation is
(b) We have
(1, 2, −1) · (3, −2, −1) = 0,   (1, 2, −1) · (2, 1, 4) = 0,   (3, −2, −1) · (2, 1, 4) = 0
Hence, the set is orthogonal.
(c) We have
(0, 0, 0) · (1, 0, 1) = 0,   (0, 0, 0) · (0, 1, 0) = 0,   (1, 0, 1) · (0, 1, 0) = 0
Hence, the set is orthogonal.
Section 1.4 Solutions
(d) Direct computation shows that the dot product of each pair of vectors in the set is 0. Hence, the set is orthogonal.
1.4.4 We have
(1/√3, 1/√3, 1/√3) · (1/√2, 0, −1/√2) = 0
(1/√3, 1/√3, 1/√3) · (1/√6, −2/√6, 1/√6) = 0
(1/√2, 0, −1/√2) · (1/√6, −2/√6, 1/√6) = 0
Hence, the set is orthogonal. Also,
‖(1/√3, 1/√3, 1/√3)‖ = √(1/3 + 1/3 + 1/3) = 1
‖(1/√2, 0, −1/√2)‖ = √(1/2 + 0 + 1/2) = 1
‖(1/√6, −2/√6, 1/√6)‖ = √(1/6 + 4/6 + 1/6) = 1
Hence, the set is orthonormal.
Similarly,
0 = ~v2 · ~0 = ~v2 · (c1~v1 + c2~v2) = c1~v2 · ~v1 + c2~v2 · ~v2 = 0 + c2‖~v2‖² = c2
Thus, c1~v1 + c2~v2 = ~0 implies c1 = c2 = 0, so {~v1 , ~v2 } is linearly independent.
1.4.8 We have
~y · ~n = (c1~v + c2w~) · ~n = c1~v · ~n + c2w~ · ~n = 0 + 0 = 0
1.4.10 By definition, the set S of all vectors orthogonal to ~x = (1, 1, 1) is a subset of R3. Moreover, since ~0 · ~x = 0, we have ~0 ∈ S, so S is non-empty. Let ~y, ~z ∈ S. Then
~x · (~y + ~z) = ~x · ~y + ~x · ~z = 0 + 0 = 0
and
~x · (t~y) = t(~x · ~y) = 0
for any t ∈ R. Thus, ~y + ~z ∈ S and t~y ∈ S, so S is a subspace of R3 .
1.4.11 Let ~x ∈ Rn . Then,
~0 · ~x = (0, . . . , 0) · (x1, . . . , xn) = 0x1 + · · · + 0xn = 0
as required.
1.4.12 (a) The statement is false. If ~x = (1, 0, 0, 1), ~y = (0, 1, 1, 0), and ~z = (1, 1, −1, 0), then ~x · ~y = 0, ~y · ~z = 0, but ~x · ~z = 1.
(b) The statement is false. If ~x = (0, 0) and ~y = (1, 0), then ~x · ~y = 0, but {~x, ~y} is linearly dependent since it contains the zero vector.
(c) The statement is true. We have
~x × (~y × ~z) = (1, 0, 0) × (0, 0, 0) = (0, 0, 0)
But,
(~x × ~y) × ~z = (0, 0, 1) × (1, 0, 0) = (0, 1, 0)
(f) The statement is true. If ~x · ~n = ~b · ~n is a scalar equation of the plane, then multiplying both sides by t ≠ 0 gives another scalar equation for the plane, ~x · (t~n) = ~b · (t~n). Thus, t~n is also a normal vector for the plane.
Section 1.5 Solutions
(b) We have
proj~v ~u = (~u · ~v / ‖~v‖²)~v = (9/10)(1, 3) = (9/10, 27/10)
perp~v ~u = ~u − proj~v ~u = (3, 2) − (9/10, 27/10) = (21/10, −7/10)
(c) We have
proj~v(~u) = (~u · ~v / ‖~v‖²)~v = (0/13)(2, −3) = (0, 0)
perp~v(~u) = ~u − proj~v(~u) = (6, 4) − (0, 0) = (6, 4)
(d) We have
proj~v(~u) = (~u · ~v / ‖~v‖²)~v = (9/13)(3, 2) = (27/13, 18/13)
perp~v(~u) = ~u − proj~v(~u) = (1, 3) − (27/13, 18/13) = (−14/13, 21/13)
(e) We have
proj~v(~u) = (~u · ~v / ‖~v‖²)~v = (3/1)(1, 0, 0) = (3, 0, 0)
perp~v(~u) = ~u − proj~v(~u) = (3, 2, 4) − (3, 0, 0) = (0, 2, 4)
(f) We have
proj~v(~u) = (~u · ~v / ‖~v‖²)~v = (9/3)(1, 1, 1) = (3, 3, 3)
perp~v(~u) = ~u − proj~v(~u) = (3, 2, 4) − (3, 3, 3) = (0, −1, 1)
(g) We have
proj~v(~u) = (~u · ~v / ‖~v‖²)~v = (8/2)(1, 0, 0, 1) = (4, 0, 0, 4)
perp~v(~u) = ~u − proj~v(~u) = (2, 5, −6, 6) − (4, 0, 0, 4) = (−2, 5, −6, 2)
(h) We have
proj~v(~u) = (~u · ~v / ‖~v‖²)~v = (8/34)(1, 4, 4, −1) = (4/17, 16/17, 16/17, −4/17)
perp~v(~u) = ~u − proj~v(~u) = (1, 1, 1, 1) − (4/17, 16/17, 16/17, −4/17) = (13/17, 1/17, 1/17, 21/17)
(i) We have
proj~v(~u) = (~u · ~v / ‖~v‖²)~v = (4/6)(1, 2, 0, 1) = (2/3, 4/3, 0, 2/3)
perp~v(~u) = ~u − proj~v(~u) = (3, 1, 1, −1) − (2/3, 4/3, 0, 2/3) = (7/3, −1/3, 1, −5/3)
(j) We have
proj~v(~u) = (~u · ~v / ‖~v‖²)~v = (−8/24)(−2, −4, 0, −2) = (2/3, 4/3, 0, 2/3)
perp~v(~u) = ~u − proj~v(~u) = (3, 1, 1, −1) − (2/3, 4/3, 0, 2/3) = (7/3, −1/3, 1, −5/3)
1.5.2 (a) A normal vector for the plane P is ~n = (3, −2, 3). Thus,
projP(~v) = perp~n(~v) = ~v − (~v · ~n / ‖~n‖²)~n = (1, 0, −1) − (0/22)(3, −2, 3) = (1, 0, −1)
(b) A normal vector for the plane P is ~n = (1, 2, 2). Thus,
projP(~v) = perp~n(~v) = ~v − (~v · ~n / ‖~n‖²)~n = (−1, 2, 1) − (5/9)(1, 2, 2) = (−14/9, 8/9, −1/9)
perpP(~v) = ~v − projP(~v) = (−1, 2, 1) − (−14/9, 8/9, −1/9) = (5/9)(1, 2, 2)
(c) Observe that (0, −1, 1) × (0, 3, 1) = (−4, 0, 0). Hence, a normal vector for the plane P is ~n = (1, 0, 0). Thus,
projP(~v) = perp~n(~v) = ~v − (~v · ~n / ‖~n‖²)~n = (3, −4, 1) − (3/1)(1, 0, 0) = (0, −4, 1)
perpP(~v) = ~v − projP(~v) = (3, −4, 1) − (0, −4, 1) = (3, 0, 0)
(d) A normal vector for the plane P is ~n = (1, −2, 1) × (−2, 1, −2) = (3, 0, −3). Hence,
projP(~v) = perp~n(~v) = ~v − (~v · ~n / ‖~n‖²)~n = (1, 1, 2) + (3/18)(3, 0, −3) = (3/2, 1, 3/2)
perpP(~v) = ~v − projP(~v) = (1, 1, 2) − (3/2, 1, 3/2) = (−1/2, 0, 1/2)
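The plane projections above all use projP(~v) = ~v − (~v · ~n/‖~n‖²)~n. A rational-arithmetic sketch (the function name is ours), checked against part (b):

```python
from fractions import Fraction

def proj_plane(v, n):
    """Project v onto the plane through the origin with normal n:
    proj_P(v) = perp_n(v) = v - (v.n / ||n||^2) n."""
    t = Fraction(sum(a * b for a, b in zip(v, n)), sum(b * b for b in n))
    return tuple(Fraction(a) - t * b for a, b in zip(v, n))

# Part (b): v = (-1, 2, 1), n = (1, 2, 2) gives (-14/9, 8/9, -1/9).
p = proj_plane((-1, 2, 1), (1, 2, 2))
```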
1.5.3 A normal vector for the plane P is ~n = (1, 1, 1) × (1, 0, −1) = (−1, 2, −1).
(a) The projection of ~x = (1, 2, 3) onto P is
projP(~x) = perp~n(~x) = ~x − (~x · ~n / ‖~n‖²)~n = (1, 2, 3) − (0/6)(−1, 2, −1) = (1, 2, 3)
(b) The projection of ~x = (2, 1, 0) onto P is
projP(~x) = perp~n(~x) = ~x − (~x · ~n / ‖~n‖²)~n = (2, 1, 0) − (0/6)(−1, 2, −1) = (2, 1, 0)
(c) The projection of ~x = (1, −2, 1) onto P is
projP(~x) = perp~n(~x) = ~x − (~x · ~n / ‖~n‖²)~n = (1, −2, 1) − (−6/6)(−1, 2, −1) = (0, 0, 0)
(d) The projection of ~x = (2, 3, 3) onto P is
projP(~x) = perp~n(~x) = ~x − (~x · ~n / ‖~n‖²)~n = (2, 3, 3) − (1/6)(−1, 2, −1) = (13/6, 8/3, 19/6)
1.5.4 A normal vector for the plane P is ~n = (3, 3, −1) × (2, 3, −1) = (0, 1, 3).
proj~v(~x + ~y) = ((~x + ~y) · ~v / ‖~v‖²)~v
             = ((~x · ~v + ~y · ~v) / ‖~v‖²)~v
             = (~x · ~v / ‖~v‖²)~v + (~y · ~v / ‖~v‖²)~v
             = proj~v(~x) + proj~v(~y)
(b) We have
proj~v(s~x) = ((s~x) · ~v / ‖~v‖²)~v = s(~x · ~v / ‖~v‖²)~v = s proj~v(~x)
(c) We have
proj~v(proj~v(~x)) = (proj~v(~x) · ~v / ‖~v‖²)~v
                = ((((~x · ~v / ‖~v‖²)~v) · ~v) / ‖~v‖²)~v
                = (~x · ~v / ‖~v‖²)((~v · ~v) / ‖~v‖²)~v
                = (~x · ~v / ‖~v‖²)~v
                = proj~v(~x)
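The idempotence shown in (c) can be confirmed numerically; the sketch reuses a small exact projection helper (ours) and a sample pair of vectors (also ours).

```python
from fractions import Fraction

def proj(x, v):
    """proj_v(x) = (x.v / ||v||^2) v, exact over Q."""
    t = Fraction(sum(a * b for a, b in zip(x, v)), sum(b * b for b in v))
    return tuple(t * b for b in v)

# proj_v(proj_v(x)) == proj_v(x) for a sample x and v.
x, v = (3, 1, 1), (1, 2, 2)
once = proj(x, v)
twice = proj(once, v)
assert once == twice
```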
(c) The statement is false. If ~x is orthogonal to ~v, then proj~v (~x) = ~0 and hence {proj~v (~x), perp~v (~x)} is
linearly dependent.
1.5.7 (a) We know that {~v1 , ~v2 } is linearly independent as otherwise Span{~v1 , ~v2 } would not be a plane.
(b) Consider
c1~v1 + c2~v2 + c3 proj~n (~x) = ~0
If c3 ≠ 0, then we have
(~x · ~n / ‖~n‖²)~n = −(c1/c3)~v1 − (c2/c3)~v2
Then,
~x · ~n = (~x · ~n / ‖~n‖²)(~n · ~n) = ((~x · ~n / ‖~n‖²)~n) · ~n = (−(1/c3)(c1~v1 + c2~v2)) · ~n = 0
since ~n is the normal vector for P and ~v1 , ~v2 ∈ P. Hence, ~x is orthogonal to ~n. So, by Theorem
1.4.6, we get that ~x ∈ P which is a contradiction. Therefore, c3 = 0.
Hence, we have that c1~v1 + c2~v2 = ~0. This implies that c1 = c2 = 0 since {~v1 , ~v2 } is linearly
independent. Thus, {~v1 , ~v2 , proj~n (~x)} is linearly independent.
(c) Since we have already proven the set is linearly independent, we just need to prove that it spans
R3. Let ~y be any vector in R3. Then, by definition of projection and perpendicular, we have
~y = projP(~y) + perpP(~y) = projP(~y) + proj~n(~y)
Moreover, projP(~y) ∈ P, so there exists d1, d2 ∈ R such that projP(~y) = d1~v1 + d2~v2. Therefore, we have
~y = d1~v1 + d2~v2 + (~y · ~n / ‖~n‖²)~n
Hence, R3 ⊆ Span{~v1 , ~v2 , ~n}. Finally, since ~v1 , ~v2 , ~n ∈ R3 we have that Span{~v1 , ~v2 , ~n} ⊆ R3 by
Theorem 1.1.1. Consequently, R3 = Span{~v1 , ~v2 , ~n}.
since ‖projd~(~p) − ~x‖² > 0. Hence, ‖~p − ~x‖ > ‖~p − projd~(~p)‖ as required.
(b) It says that projd~(~p) corresponds to the point on L that is closest to P.
(c) ‖perpd~(~x)‖ is the distance from P to the line L.
1.5.9 (a) Consider ~0 = c1~u + c2~v. Then, we have
~u · ~0 = ~u · (c1~u + c2~v)
0 = c1 (~u · ~u) + c2 (~u · ~v)
0 = c1 (1) + c2 (0)
0 = c1
Similarly, we have
~v · ~0 = ~v · (c1~u + c2~v) = c1(~v · ~u) + c2(~v · ~v) = c2
Thus, c1 = c2 = 0, so {~u, ~v} is linearly independent.
We have
projP(~x) = perp~n(~x) = ~x − (~x · ~n / ‖~n‖²)~n
Writing ~x − (~x · ~n / ‖~n‖²)~n = d1~u + d2~v and taking the dot product with ~u gives
~u · (~x − (~x · ~n / ‖~n‖²)~n) = ~u · (d1~u + d2~v)
~u · ~x − (~x · ~n / ‖~n‖²)(~u · ~n) = d1(~u · ~u) + d2(~u · ~v)
~u · ~x = d1
since ~u · ~n = 0, ~u · ~u = 1, and ~u · ~v = 0. Moreover,
proj~u(~x) = (~x · ~u / ‖~u‖²)~u = (~u · ~x)~u = d1~u
proj~v(~x) = (~x · ~v / ‖~v‖²)~v = (~v · ~x)~v = d2~v
Hence,
perp~u×~v(~x) = d1~u + d2~v = proj~u(~x) + proj~v(~x)
Chapter 2 Solutions
(x1, x2, x3) = (s + 1, 2s, s + 1)
Since the third equation is not satisfied, ~x is not a solution of the system.
Section 2.1 Solutions
(c) We have
2.1.4 (a) TRUE. Since we can write the equation as x1 + 3x2 + x3 = 0, it is linear.
(b) FALSE. Since it contains the square of a variable, it is not a linear equation.
(c) FALSE. The system x1 = 1, x2 = 1, x3 = 1 has solution (1, 1, 1), but clearly (2, 2, 2) is not a solution.
(d) FALSE. The system x1 + x2 = 0 has more unknowns than equations, but it has the solution (−1, 1).
2.1.5 (a) Since the right hand side of all the equations is 0, the system consists of hyperplanes which all pass through the origin. Since the origin lies on all of the hyperplanes, it is a solution. Therefore, the system is consistent.
(b) The i-th equation of the system has the form
ai1 x1 + · · · + ain xn = 0
Hence, the rank of the coefficient matrix and the rank of the augmented matrix are 2.
iii. x3 is a free variable, so let x3 = t ∈ R. Then x1 = 3 and x2 + x3 = −1, so x2 = −1 − t.
Therefore, all solutions have the form
~x = (x1, x2, x3) = (3, −1 − t, t) = (3, −1, 0) + t(0, −1, 1)
iv. The system is a pair of planes in R3 which intersect in a line passing through (3, −1, 0).
(b) i. [2 2 −3; 1 1 1], [2 2 −3 | 3; 1 1 1 | 9]
ii.
[2 2 −3 | 3; 1 1 1 | 9]  R1 ↔ R2  ~  [1 1 1 | 9; 2 2 −3 | 3]  R2 − 2R1  ~
[1 1 1 | 9; 0 0 −5 | −15]  −(1/5)R2  ~  [1 1 1 | 9; 0 0 1 | 3]  R1 − R2  ~  [1 1 0 | 6; 0 0 1 | 3]
Hence, the rank of the coefficient matrix and the rank of the augmented matrix are 2.
iii. x2 is a free variable, so let x2 = t ∈ R. Then x1 + x2 = 6, so x1 = 6 − t, and x3 = 3. Therefore,
all solutions have the form
~x = (x1, x2, x3) = (6 − t, t, 3) = (6, 0, 3) + t(−1, 1, 0)
iv. The system is a pair of planes in R3 which intersect in a line passing through (6, 0, 3).
(c) i. [2 1 −1; 2 −1 −3; 3 2 −1], [2 1 −1 | 4; 2 −1 −3 | −1; 3 2 −1 | 8]
Section 2.2 Solutions
ii.
[2 1 −1 | 4; 2 −1 −3 | −1; 3 2 −1 | 8]  R1 ↔ R3  ~  [3 2 −1 | 8; 2 −1 −3 | −1; 2 1 −1 | 4]  R1 − R3  ~
[1 1 0 | 4; 2 −1 −3 | −1; 2 1 −1 | 4]  R2 − 2R1, R3 − 2R1  ~  [1 1 0 | 4; 0 −3 −3 | −9; 0 −1 −1 | −4]  −(1/3)R2  ~
[1 1 0 | 4; 0 1 1 | 3; 0 −1 −1 | −4]  R1 − R2, R3 + R2  ~  [1 0 −1 | 1; 0 1 1 | 3; 0 0 0 | −1]  R1 + R3, R2 + 3R3  ~
[1 0 −1 | 0; 0 1 1 | 0; 0 0 0 | −1]  (−1)R3  ~  [1 0 −1 | 0; 0 1 1 | 0; 0 0 0 | 1]
Hence, the rank of the coefficient matrix is 2, and the rank of the augmented matrix is 3.
iii. Since the rank of the augmented matrix is greater than the rank of the coefficient matrix, the
system is inconsistent.
iv. The system is a set of three planes in R3 which have no common point of intersection.
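The rank comparison used in parts ii and iii can be computed with a small Gaussian-elimination helper. The helper below is ours (stdlib only, exact rationals); for the system in (c) it confirms rank 2 for the coefficient matrix and rank 3 for the augmented matrix, hence inconsistency.

```python
from fractions import Fraction

def rank(matrix):
    """Rank of a matrix, via row reduction over the rationals."""
    rows = [[Fraction(x) for x in row] for row in matrix]
    r = 0
    for c in range(len(rows[0])):
        piv = next((i for i in range(r, len(rows)) if rows[i][c] != 0), None)
        if piv is None:
            continue
        rows[r], rows[piv] = rows[piv], rows[r]
        p = rows[r][c]
        rows[r] = [x / p for x in rows[r]]
        for i in range(len(rows)):
            if i != r and rows[i][c] != 0:
                f = rows[i][c]
                rows[i] = [a - f * b for a, b in zip(rows[i], rows[r])]
        r += 1
    return r

A = [[2, 1, -1], [2, -1, -3], [3, 2, -1]]             # coefficient matrix of (c)
Ab = [[2, 1, -1, 4], [2, -1, -3, -1], [3, 2, -1, 8]]  # augmented matrix of (c)
# rank(A) == 2 < rank(Ab) == 3, so the system is inconsistent.
```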
(d) i. [1 2 0; 1 3 −1; −3 −8 2], [1 2 0 | 3; 1 3 −1 | 4; −3 −8 2 | −11]
ii.
[1 2 0 | 3; 1 3 −1 | 4; −3 −8 2 | −11]  R2 − R1, R3 + 3R1  ~  [1 2 0 | 3; 0 1 −1 | 1; 0 −2 2 | −2]  R1 − 2R2, R3 + 2R2  ~  [1 0 2 | 1; 0 1 −1 | 1; 0 0 0 | 0]
Hence, the rank of the coefficient matrix and the rank of the augmented matrix are 2.
iii. x3 is a free variable, so let x3 = t ∈ R. Then x1 + 2x3 = 1 and x2 − x3 = 1, so x1 = 1 − 2t, and
x2 = 1 + t. Therefore, all solutions have the form
~x = (x1, x2, x3) = (1 − 2t, 1 + t, t) = (1, 1, 0) + t(−2, 1, 1)
iv. The system is three planes in R3 which intersect in a line passing through (1, 1, 0).
(e) i. [1 3 −1; 2 1 −2], [1 3 −1 | 9; 2 1 −2 | −2]
ii.
[1 3 −1 | 9; 2 1 −2 | −2]  R2 − 2R1  ~  [1 3 −1 | 9; 0 −5 0 | −20]  −(1/5)R2  ~
[1 3 −1 | 9; 0 1 0 | 4]  R1 − 3R2  ~  [1 0 −1 | −3; 0 1 0 | 4]
Hence, the rank of the coefficient matrix and the rank of the augmented matrix are 2.
iv. The system is a pair of planes in R3 which intersect in a line passing through (−3, 4, 0).
(f) i. [2 −2 2 1; 3 −3 3 4], [2 −2 2 1 | −4; 3 −3 3 4 | 9]
ii.
[2 −2 2 1 | −4; 3 −3 3 4 | 9]  (1/2)R1  ~  [1 −1 1 1/2 | −2; 3 −3 3 4 | 9]  R2 − 3R1  ~
[1 −1 1 1/2 | −2; 0 0 0 5/2 | 15]  (2/5)R2  ~  [1 −1 1 1/2 | −2; 0 0 0 1 | 6]  R1 − (1/2)R2  ~  [1 −1 1 0 | −5; 0 0 0 1 | 6]
Hence, the rank of the coefficient matrix and the rank of the augmented matrix are 2.
iii. x2 and x3 are free variables, so let x2 = s ∈ R and x3 = t ∈ R. Then x4 = 6 and x1 − x2 + x3 = −5, so x1 = −5 + s − t. Therefore, all solutions have the form
~x = (−5 + s − t, s, t, 6) = (−5, 0, 0, 6) + s(1, 1, 0, 0) + t(−1, 0, 1, 0)
iv. The system is a pair of hyperplanes in R4 which intersect in a plane passing through (−5, 0, 0, 6).
(g) i. [1 1 1 6; 0 2 −4 4; 2 0 4 5; −1 2 −3 4], [1 1 1 6 | 5; 0 2 −4 4 | −4; 2 0 4 5 | 4; −1 2 −3 4 | 9]
ii.
[1 1 1 6 | 5; 0 2 −4 4 | −4; 2 0 4 5 | 4; −1 2 −3 4 | 9]  R3 − 2R1, R4 + R1  ~
[1 1 1 6 | 5; 0 2 −4 4 | −4; 0 −2 2 −7 | −6; 0 3 −2 10 | 14]  (1/2)R2  ~
[1 1 1 6 | 5; 0 1 −2 2 | −2; 0 −2 2 −7 | −6; 0 3 −2 10 | 14]  R1 − R2, R3 + 2R2, R4 − 3R2  ~
[1 0 3 4 | 7; 0 1 −2 2 | −2; 0 0 −2 −3 | −10; 0 0 4 4 | 20]  (−1)R3, (1/4)R4  ~
[1 0 3 4 | 7; 0 1 −2 2 | −2; 0 0 2 3 | 10; 0 0 1 1 | 5]  R3 ↔ R4  ~
[1 0 3 4 | 7; 0 1 −2 2 | −2; 0 0 1 1 | 5; 0 0 2 3 | 10]  R1 − 3R3, R2 + 2R3, R4 − 2R3  ~
[1 0 0 1 | −8; 0 1 0 4 | 8; 0 0 1 1 | 5; 0 0 0 1 | 0]  R1 − R4, R2 − 4R4, R3 − R4  ~
[1 0 0 0 | −8; 0 1 0 0 | 8; 0 0 1 0 | 5; 0 0 0 1 | 0]
Hence, the rank of the coefficient matrix and the rank of the augmented matrix are 4.
iii. The solution is x1 = −8, x2 = 8, x3 = 5, x4 = 0.
iv. The system is four hyperplanes in R4 which intersect only at the point (−8, 8, 5, 0).
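The unique solution found in (g) is easy to verify by substituting it back into the original augmented system:

```python
# Original system from (g): A x = b, solution x = (-8, 8, 5, 0).
A = [[1, 1, 1, 6], [0, 2, -4, 4], [2, 0, 4, 5], [-1, 2, -3, 4]]
b = [5, -4, 4, 9]
x = [-8, 8, 5, 0]

# Each entry of the residual A x - b should be zero.
residual = [sum(a * c for a, c in zip(row, x)) - bi for row, bi in zip(A, b)]
```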
(h) i. [1 2 1 2; 2 1 2 1; 0 2 1 1; 1 1 2 0], [1 2 1 2 | −2; 2 1 2 1 | 2; 0 2 1 1 | 2; 1 1 2 0 | 6]
ii.
[1 2 1 2 | −2; 2 1 2 1 | 2; 0 2 1 1 | 2; 1 1 2 0 | 6]  R2 − 2R1, R4 − R1  ~
[1 2 1 2 | −2; 0 −3 0 −3 | 6; 0 2 1 1 | 2; 0 −1 1 −2 | 8]  −(1/3)R2  ~
[1 2 1 2 | −2; 0 1 0 1 | −2; 0 2 1 1 | 2; 0 −1 1 −2 | 8]  R1 − 2R2, R3 − 2R2, R4 + R2  ~
[1 0 1 0 | 2; 0 1 0 1 | −2; 0 0 1 −1 | 6; 0 0 1 −1 | 6]  R1 − R3, R4 − R3  ~
[1 0 0 1 | −4; 0 1 0 1 | −2; 0 0 1 −1 | 6; 0 0 0 0 | 0]
Hence, the rank of the coefficient matrix and the rank of the augmented matrix are 3.
iii. x4 is a free variable, so let x4 = t ∈ R. Then x1 + x4 = −4, x2 + x4 = −2, and x3 − x4 = 6. Thus, x1 = −4 − t, x2 = −2 − t, and x3 = 6 + t. Therefore, all solutions have the form
~x = (−4 − t, −2 − t, 6 + t, t) = (−4, −2, 6, 0) + t(−1, −1, 1, 1)
iv. The system is four hyperplanes in R4 which intersect in a line passing through (−4, −2, 6, 0).
(i) i. [0 1 −2; 2 2 3; 1 2 4; 2 1 −1], [0 1 −2 | 3; 2 2 3 | 1; 1 2 4 | −1; 2 1 −1 | 1]
ii.
[0 1 −2 | 3; 2 2 3 | 1; 1 2 4 | −1; 2 1 −1 | 1]  R1 ↔ R3  ~
[1 2 4 | −1; 2 2 3 | 1; 0 1 −2 | 3; 2 1 −1 | 1]  R2 − 2R1, R4 − 2R1  ~
[1 2 4 | −1; 0 −2 −5 | 3; 0 1 −2 | 3; 0 −3 −9 | 3]  R2 ↔ R3  ~
[1 2 4 | −1; 0 1 −2 | 3; 0 −2 −5 | 3; 0 −3 −9 | 3]  R1 − 2R2, R3 + 2R2, R4 + 3R2  ~
[1 0 8 | −7; 0 1 −2 | 3; 0 0 −9 | 9; 0 0 −15 | 12]  −(1/9)R3  ~
[1 0 8 | −7; 0 1 −2 | 3; 0 0 1 | −1; 0 0 −15 | 12]  R1 − 8R3, R2 + 2R3, R4 + 15R3  ~
[1 0 0 | 1; 0 1 0 | 1; 0 0 1 | −1; 0 0 0 | −3]  −(1/3)R4  ~
[1 0 0 | 1; 0 1 0 | 1; 0 0 1 | −1; 0 0 0 | 1]  R1 − R4, R2 − R4, R3 + R4  ~
[1 0 0 | 0; 0 1 0 | 0; 0 0 1 | 0; 0 0 0 | 1]
Hence, the rank of the coefficient matrix is 3, and the rank of the augmented matrix is 4.
iii. Since the rank of the augmented matrix is greater than the rank of the coefficient matrix, the
system is inconsistent.
iv. The system is a set of four planes in R3 which have no common point of intersection.
2.2.2 (a) We need to determine if there exists c1 , c2 ∈ R such that
(1, 3, 1) = c1(1, 2, −1) + c2(3, −4, 2) = (c1 + 3c2, 2c1 − 4c2, −c1 + 2c2)
c1 + 3c2 = 1
2c1 − 4c2 = 3
−c1 + 2c2 = 1
The second row corresponds to the equation 0 = 5. Thus, the system is inconsistent, so ~x < Span B.
Hence, c1 = 2/5 and c2 = 1/5, and so ~x ∈ Span B. We can verify this answer by checking that

    [ 1 ]         [  1 ]         [  3 ]
    [ 0 ] = (2/5) [  2 ] + (1/5) [ -4 ]
    [ 0 ]         [ -1 ]         [  2 ]
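The claimed coefficients are easy to confirm with exact arithmetic:

```python
from fractions import Fraction

# (2/5) v1 + (1/5) v2 should reproduce the target vector (1, 0, 0).
c1, c2 = Fraction(2, 5), Fraction(1, 5)
v1, v2 = [1, 2, -1], [3, -4, 2]
combo = [c1 * a + c2 * b for a, b in zip(v1, v2)]
assert combo == [1, 0, 0]
```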
(c) We need to determine if there exists c1 , c2 , c3 ∈ R such that
    [  1 ]      [ 3 ]      [  2 ]      [ -1 ]   [ 3c1 + 2c2 - c3  ]
    [ -3 ] = c1 [ 1 ] + c2 [  1 ] + c3 [  0 ] = [ c1 + c2         ]
    [ -7 ]      [ 2 ]      [  3 ]      [ -4 ]   [ 2c1 + 3c2 - 4c3 ]
    [ -9 ]      [ 3 ]      [ -4 ]      [  3 ]   [ 3c1 - 4c2 + 3c3 ]
Hence, we need to solve the system of equations
3c1 + 2c2 − c3 = 1
c1 + c2 = −3
2c1 + 3c2 − 4c3 = −7
3c1 − 4c2 + 3c3 = −9
We row reduce the corresponding augmented matrix to get
    [ 3  2 -1 |  1 ]           [ 1  1  0 | -3 ]
    [ 1  1  0 | -3 ] R1 ↔ R2   [ 3  2 -1 |  1 ] R2 - 3R1
    [ 2  3 -4 | -7 ]     ~     [ 2  3 -4 | -7 ] R3 - 2R1   ~
    [ 3 -4  3 | -9 ]           [ 3 -4  3 | -9 ] R4 - 3R1

    [ 1  1  0 | -3 ] R1 + R2   [ 1  0 -1 |   7 ]            [ 1  0 -1 |   7 ]
    [ 0 -1 -1 | 10 ]           [ 0 -1 -1 |  10 ]            [ 0 -1 -1 |  10 ]
    [ 0  1 -4 | -1 ] R3 + R2 ~ [ 0  0 -5 |   9 ]       ~    [ 0  0 -5 |   9 ]
    [ 0 -7  3 |  0 ] R4 - 7R2  [ 0  0 10 | -70 ] R4 + 2R3   [ 0  0  0 | -52 ]
The fourth row corresponds to the equation 0 = −52. Thus, the system is inconsistent, so ~x ∉ Span B.
The third row corresponds to the equation 0 = −55/3. Thus, the system is inconsistent, so ~x ∉ Span B.
2.2.3 (a) Consider
    [ 0 ]      [  1 ]      [ 1 ]      [  1 ]      [ 2 ]
    [ 0 ] = c1 [  2 ] + c2 [ 3 ] + c3 [ -1 ] + c4 [ 1 ]
    [ 0 ]      [ -1 ]      [ 1 ]      [  4 ]      [ 1 ]
Observe that this will give 3 equations in 4 unknowns. Hence, the rank of the matrix is at most 3,
so by the System Rank Theorem, the system must have at least 4 − 3 = 1 free variable. Therefore,
the set is linearly dependent.
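For a concrete confirmation, one nontrivial solution of the underlying 3-equation, 4-unknown homogeneous system is c = (−19, 6, −9, 11); these particular coefficients are not in the original solution, they were found by back-substitution and are one of infinitely many choices.

```python
# -19 v1 + 6 v2 - 9 v3 + 11 v4 = 0 exhibits the linear dependence directly.
vs = [[1, 2, -1], [1, 3, 1], [1, -1, 4], [2, 1, 1]]
cs = [-19, 6, -9, 11]
combo = [sum(c * v[i] for c, v in zip(cs, vs)) for i in range(3)]
assert combo == [0, 0, 0]
```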
(b) Consider
    [ 0 ]      [  1 ]      [  2 ]      [  3 ]   [ c1 + 2c2 + 3c3  ]
    [ 0 ] = c1 [  0 ] + c2 [  1 ] + c3 [  3 ] = [ c2 + 3c3        ]
    [ 0 ]      [ -1 ]      [ -2 ]      [ -1 ]   [ -c1 - 2c2 - c3  ]
This corresponds to the homogeneous system
c1 + 2c2 + 3c3 = 0
c2 + 3c3 = 0
−c1 − 2c2 − c3 = 0
To find the solution space of the system, we row reduce the corresponding coefficient matrix.
    [  1  2  3 ]           [ 1 2 3 ] R1 - 2R2   [ 1 0 -3 ] R1 + 3R3   [ 1 0 0 ]
    [  0  1  3 ]       ~   [ 0 1 3 ]        ~   [ 0 1  3 ] R2 - 3R3 ~ [ 0 1 0 ]
    [ -1 -2 -1 ] R3 + R1   [ 0 0 2 ] (1/2)R3    [ 0 0  1 ]            [ 0 0 1 ]
Hence, the only solution to the system is c1 = c2 = c3 = 0. Therefore, the set is linearly indepen-
dent.
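Anticipating Chapter 5, the same conclusion follows from a nonzero determinant of the matrix whose columns are the three vectors (a cross-check sketch, not the method used above):

```python
def det3(M):
    """Cofactor expansion along the first row of a 3x3 matrix."""
    (a, b, c), (d, e, f), (g, h, i) = M
    return a * (e * i - f * h) - b * (d * i - f * g) + c * (d * h - e * g)

A = [[1, 2, 3], [0, 1, 3], [-1, -2, -1]]   # columns are the three vectors
assert det3(A) == 2   # nonzero, so only the trivial solution exists
```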
(c) Consider
    [ 0 ]      [ 1 ]      [ 4 ]      [ 7 ]   [ c1 + 4c2 + 7c3  ]
    [ 0 ] = c1 [ 2 ] + c2 [ 5 ] + c3 [ 8 ] = [ 2c1 + 5c2 + 8c3 ]
    [ 0 ]      [ 3 ]      [ 6 ]      [ 9 ]   [ 3c1 + 6c2 + 9c3 ]
This corresponds to the homogeneous system
c1 + 4c2 + 7c3 = 0
2c1 + 5c2 + 8c3 = 0
3c1 + 6c2 + 9c3 = 0
To find the solution space of the system, we row reduce the corresponding coefficient matrix.
    [ 1 4 7 ]            [ 1  4   7 ]            [ 1  4  7 ]
    [ 2 5 8 ] R2 - 2R1 ~ [ 0 -3  -6 ]        ~   [ 0 -3 -6 ]
    [ 3 6 9 ] R3 - 3R1   [ 0 -6 -12 ] R3 - 2R2   [ 0  0  0 ]
Hence, we see that there will be a free variable. Consequently, the homogeneous system has infinitely many solutions, and so the set is linearly dependent.
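Here the dependence can even be written down explicitly (this particular relation is an illustration, not part of the original solution):

```python
# v1 - 2 v2 + v3 = 0 for the columns (1,2,3), (4,5,6), (7,8,9).
v1, v2, v3 = [1, 2, 3], [4, 5, 6], [7, 8, 9]
combo = [a - 2 * b + c for a, b, c in zip(v1, v2, v3)]
assert combo == [0, 0, 0]
```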
(d) Consider
    [ 0 ]      [  5 ]      [ -6 ]      [  6 ]   [ 5c1 - 6c2 + 6c3  ]
    [ 0 ] = c1 [ -3 ] + c2 [  3 ] + c3 [ -6 ] = [ -3c1 + 3c2 - 6c3 ]
    [ 0 ]      [  4 ]      [ -4 ]      [  8 ]   [ 4c1 - 4c2 + 8c3  ]
This corresponds to the homogeneous system

    5c1 − 6c2 + 6c3 = 0
    −3c1 + 3c2 − 6c3 = 0
    4c1 − 4c2 + 8c3 = 0
From our work in Problem 2.2.2 (d), we know the rank of the coefficient matrix is 2. Thus, there
is one free variable and so the homogeneous system has infinitely many solutions. Thus, the set is
linearly dependent.
(e) Consider
    [ 0 ]      [ 3 ]      [ 2 ]      [ 1 ]   [ 3c1 + 2c2 + c3  ]
    [ 0 ] = c1 [ 1 ] + c2 [ 1 ] + c3 [ 3 ] = [ c1 + c2 + 3c3   ]
    [ 0 ]      [ 2 ]      [ 3 ]      [ 2 ]   [ 2c1 + 3c2 + 2c3 ]
This corresponds to the homogeneous system
3c1 + 2c2 + c3 = 0
c1 + c2 + 3c3 = 0
2c1 + 3c2 + 2c3 = 0
Thus, we see the rank of the coefficient matrix is 3 and hence the solution is unique. Consequently,
the set is linearly independent.
(f) Consider
    [ 0 ]      [ 3 ]      [  6 ]      [  0 ]   [ 3c1 + 6c2       ]
    [ 0 ] = c1 [ 1 ] + c2 [  4 ] + c3 [  2 ] = [ c1 + 4c2 + 2c3  ]
    [ 0 ]      [ 3 ]      [ -3 ]      [ -1 ]   [ 3c1 - 3c2 - c3  ]
    [ 0 ]      [ 5 ]      [  4 ]      [  8 ]   [ 5c1 + 4c2 + 8c3 ]
This corresponds to the homogeneous system
3c1 + 6c2 = 0
c1 + 4c2 + 2c3 = 0
3c1 − 3c2 − c3 = 0
5c1 + 4c2 + 8c3 = 0
Thus, we see the rank of the coefficient matrix is 3 and hence the solution is unique. Consequently,
the set is linearly independent.
c1 + 2c2 + 4c3 = 0
3c1 − 5c2 + c3 = 0
−2c1 + 3c2 = 0
Hence, the rank of the coefficient matrix is 3. Therefore, the only solution to the homogeneous
system is c1 = c2 = c3 = 0, so the set is linearly independent. Moreover, since the rank equals the number of rows, by the System Rank Theorem (3), the system [ A | ~b ] is consistent for every ~b ∈ R3, so the set also spans R3. Therefore, it is a basis for R3.
(d) Consider
    [ 0 ]      [ 1 ]      [  2 ]      [  1 ]   [ c1 + 2c2 + c3  ]
    [ 0 ] = c1 [ 1 ] + c2 [  1 ] + c3 [  0 ] = [ c1 + c2        ]
    [ 0 ]      [ 1 ]      [ -1 ]      [ -3 ]   [ c1 - c2 - 3c3  ]
c1 + 2c2 + c3 = 0
c1 + c2 = 0
c1 − c2 − 3c3 = 0
Hence, the rank of the coefficient matrix is 3. Therefore, the only solution to the homogeneous
system is c1 = c2 = c3 = 0, so the set is linearly independent. Moreover, since the rank equals the number of rows, by the System Rank Theorem (3), the system [ A | ~b ] is consistent for every ~b ∈ R3, so the set also spans R3. Therefore, it is a basis for R3.
(e) Consider
    [ 0 ]      [  1 ]      [  3 ]      [ 3 ]   [ c1 + 3c2 + 3c3   ]
    [ 0 ] = c1 [  0 ] + c2 [  3 ] + c3 [ 6 ] = [ 3c2 + 6c3        ]
    [ 0 ]      [ -4 ]      [ -5 ]      [ 2 ]   [ -4c1 - 5c2 + 2c3 ]
This gives the homogeneous system
c1 + 3c2 + 3c3 = 0
3c2 + 6c3 = 0
−4c1 − 5c2 + 2c3 = 0
Hence, the rank of the coefficient matrix is 2. Therefore, the homogeneous system has infinitely many solutions and so the set is linearly dependent. Consequently, it is not a basis.
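An explicit dependence relation confirming this (found by solving the homogeneous system; an illustration only) is 3~v1 − 2~v2 + ~v3 = ~0:

```python
v1, v2, v3 = [1, 0, -4], [3, 3, -5], [3, 6, 2]
combo = [3 * a - 2 * b + c for a, b, c in zip(v1, v2, v3)]
assert combo == [0, 0, 0]
```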
(f) Consider
    [ 0 ]      [  3 ]      [ 4 ]      [  2 ]   [ 3c1 + 4c2 + 2c3 ]
    [ 0 ] = c1 [  2 ] + c2 [ 4 ] + c3 [  1 ] = [ 2c1 + 4c2 + c3  ]
    [ 0 ]      [ -3 ]      [ 0 ]      [ -2 ]   [ -3c1 - 2c3      ]
Hence, the rank of the coefficient matrix is 3. Therefore, the only solution to the homogeneous system is c1 = c2 = c3 = 0, so the set is linearly independent. Moreover, since the rank equals the number of rows, by the System Rank Theorem (3), the system [ A | ~b ] is consistent for every ~b ∈ R3, so the set also spans R3. Therefore, it is a basis for R3.
2.2.5 Let {~v1 , . . . , ~vn } be a set of n vectors in Rn and let A = [ ~v1 · · · ~vn ].
(h) The statement is false. Consider the system x1 + x2 + x3 = 1, 2x1 + 2x2 + 2x3 = 2. The system has m = 2 equations and n = 3 variables, but the rank of the corresponding coefficient matrix A is 1 ≠ m.
(i) If [ A | ~b ] is consistent for every ~b ∈ Rm, then rank A = m by Theorem 2.2.5 (3). Since the solution is unique, we have that rank A = n by Theorem 2.2.5 (2). Thus, m = rank A = n.
2.2.10 Assume that B is linearly independent. Then, the only solution to the system [ [ ~v1 · · · ~vk ] | ~0 ] is the trivial solution. Hence, by Theorem 2.2.5, the rank of the coefficient matrix must equal the number of columns, which is k.
On the other hand, if the rank of [ ~v1 · · · ~vk ] is k, then [ [ ~v1 · · · ~vk ] | ~0 ] has a unique solution by Theorem 2.2.5. Thus, B is linearly independent.
2.2.11 (a) We have that {~v1 , . . . , ~vk } spans Rn if and only if the equation c1~v1 + · · · + ck~vk = ~b is consistent for every ~b ∈ Rn. The equation c1~v1 + · · · + ck~vk = ~b is consistent for every ~b ∈ Rn if and only if the corresponding system of n equations in k unknowns [ A | ~b ] is consistent for every ~b ∈ Rn. Finally, by Theorem 2.2.5 (3), the corresponding system [ A | ~b ] is consistent for every ~b ∈ Rn if and only if rank A = n.
(b) If k < n, then the coefficient matrix A of the system c1~v1 + · · · + ck~vk = ~b must have rank A < n
by Theorem 2.2.4. But then, we have Span{~v1 , . . . , ~vk } = Rn and rank A < n which contradicts part
(a). Hence, k ≥ n.
2.2.12 (a) Since ~ui ∈ S and Span{~v1 , . . . , ~vℓ } = S, we can write each ~ui as a linear combination of ~v1 , . . . , ~vℓ . Say,

    ~ui = ai1~v1 + · · · + aiℓ~vℓ ,   1 ≤ i ≤ k

Consider
c1~u1 + · · · + ck~uk = ~0
Substituting, we get
a11 c1 + · · · + ak1 ck = 0
    ...
a1ℓ c1 + · · · + akℓ ck = 0
This homogeneous system of ℓ equations in k unknowns (c1 , . . . , ck ) must have a unique solution as
otherwise, we would contradict the fact that {~u1 , . . . , ~uk } is linearly independent. By Theorem 2.2.5
(2), the rank of the matrix must equal the number of variables k. Therefore, ℓ ≥ k as otherwise there would not be enough rows to contain the k leading ones in the RREF of the coefficient matrix.
(b) If {~v1 , . . . , ~vℓ } is a basis, then Span{~v1 , . . . , ~vℓ } = S. If {~u1 , . . . , ~uk } is a basis, then it is linearly
independent. Hence, by part (a), k ≤ ℓ. Similarly, we also have that {~u1 , . . . , ~uk } spans S and
{~v1 , . . . , ~vℓ } is linearly independent, so ℓ ≤ k. Therefore, k = ℓ.
2.2.13 The i-th equation of the system [ A | ~b ] has the form
ai1 x1 + · · · + ain xn = bi
If ~y is a solution of [ A | ~0 ], then we have
ai1 y1 + · · · + ain yn = 0
If ~z is a solution of [ A | ~b ], then we have
ai1 z1 + · · · + ain zn = bi
ai1 (z1 + cy1 ) + · · · + ain (zn + cyn ) = ai1 z1 + · · · + ain zn + c(ai1 y1 + · · · + ain yn )
= bi + c(0)
= bi

Hence, ~z + c~y satisfies the i-th equation for each i, so it is also a solution of [ A | ~b ].
f1 + f2 = 50
f2 + f3 + f5 = 60
f4 + f5 = 50
f1 − f3 + f4 = 40
(b) We row reduce the augmented matrix corresponding to the system in part (a).
    [ 1 1  0 0 0 |  50 ]           [ 1  1  0 0 0 |  50 ] R1 - R2
    [ 0 1  1 0 1 |  60 ]           [ 0  1  1 0 1 |  60 ]
    [ 0 0  0 1 1 |  50 ]     ~     [ 0  0  0 1 1 |  50 ]        ~
    [ 1 0 -1 1 0 |  40 ] R4 - R1   [ 0 -1 -1 1 0 | -10 ] R4 + R2

    [ 1 0 -1 0 -1 | -10 ]           [ 1 0 -1 0 -1 | -10 ]
    [ 0 1  1 0  1 |  60 ]           [ 0 1  1 0  1 |  60 ]
    [ 0 0  0 1  1 |  50 ]     ~     [ 0 0  0 1  1 |  50 ]
    [ 0 0  0 1  1 |  50 ] R4 - R3   [ 0 0  0 0  0 |   0 ]
f1 − f3 − f5 = −10
f2 + f3 + f5 = 60
f4 + f5 = 50
We see that f3 and f5 are free variables. Thus, we get that the general solution is

    f1 = −10 + s + t,  f2 = 60 − s − t,  f3 = s,  f4 = 50 − t,  f5 = t,   s, t ∈ R
Section 3.1 Solutions
(g) The product doesn’t exist since the number of rows of the second matrix does not equal the number
of columns of the first matrix.
" # 5 −3 " #
−1 2 0 −1 17
(h) 2 7 =
2 1 1 11 −2
−1 −3
    [ 2 ]T [  1 ]             [  1 ]
(i) [ 3 ]  [ -1 ] = [ 2 3 5 ] [ -1 ] = [ 4 ]
    [ 5 ]  [  1 ]             [  1 ]
    [  5 -3 ]              [ -11  7 -3 ]
(j) [  2  7 ] [ -1 2 0 ] = [  12 11  7 ]
    [ -1 -3 ] [  2 1 1 ]   [  -5 -5 -3 ]
    [ 2 ] [  1 ]T   [ 2 ]               [ 2 -2 2 ]
(k) [ 3 ] [ -1 ]  = [ 3 ] [ 1 -1 1 ]  = [ 3 -3 3 ]
    [ 5 ] [  1 ]    [ 5 ]               [ 5 -5 5 ]
    [ 2 ]T [ x1 ]             [ x1 ]
(l) [ 3 ]  [ x2 ] = [ 2 3 5 ] [ x2 ] = [ 2x1 + 3x2 + 5x3 ]
    [ 5 ]  [ x3 ]             [ x3 ]
(b) We have
    tr(A^T B) = sum_{i=1}^{n} (A^T B)_{ii} = sum_{i=1}^{n} sum_{j=1}^{n} (A^T)_{ij} (B)_{ji} = sum_{i=1}^{n} sum_{j=1}^{n} (A)_{ji} (B)_{ji}
" # x1 " #
3 2 −1 4
3.1.4 We take A = , ~x = x2 , and ~b = .
2 −1 5 5
x3
3c1 + 2c2 + c3 = 0
c1 + c2 + c3 = 0
3c1 + 2c2 + c3 = 0
c1 − c2 − 2c3 = 0
t1 + t2 + 2t3 = −1
2t1 + t2 + 3t3 = −4
−2t1 + t3 = 3
−t1 − t2 + t3 = −2
    [  1  1 2 | -1 ]     [ 1 0 0 | -2 ]
    [  2  1 3 | -4 ]     [ 0 1 0 |  3 ]
    [ -2  0 1 |  3 ]  ~  [ 0 0 1 | -1 ]
    [ -1 -1 1 | -2 ]     [ 0 0 0 |  0 ]
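A quick check (a sketch) that t = (−2, 3, −1) really solves the original four equations:

```python
A = [[1, 1, 2], [2, 1, 3], [-2, 0, 1], [-1, -1, 1]]
b = [-1, -4, 3, -2]
t = [-2, 3, -1]
assert [sum(a * x for a, x in zip(row, t)) for row in A] == b
```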
3.1.9 Let A = [ ~a1 · · · ~an ]. By Theorem 3.1.3 we have

    A In = A [ ~e1 · · · ~en ] = [ A~e1 · · · A~en ] = [ ~a1 · · · ~an ] = A
as required.
3.1.11 (a) Since A has a row of zeros, we can write A in block form as

        [ A1   ]
    A = [ ~0^T ]
        [ A2   ]

Thus,

         [ A1   ]     [ A1 B   ]   [ A1 B ]
    AB = [ ~0^T ] B = [ ~0^T B ] = [ ~0^T ]
         [ A2   ]     [ A2 B   ]   [ A2 B ]

so AB also has a row of zeros.
    a^2 + bc = 1    (3.3)
    ab + bd = 0     (3.4)
    ac + cd = 0     (3.5)
    bc + d^2 = 1    (3.6)
The population will always stay at 3000 since we are not losing or gaining population. In particular, from year to year, the fractions of the population of X that stay or move sum to 9/10 + 1/10 = 1, and similarly 1/5 + 4/5 = 1 for the population of Y.
(c) Observe that for any t, we have
" # " #" # " #
xt+1 9/10 1/5 2000 2000
= =
yt+1 1/10 4/5 1000 1000
Therefore, by induction, the population will always be xt = 2000, yt = 1000.
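The fixed point is easy to confirm by iterating the year-to-year update (a sketch using the transition matrix above):

```python
def step(x, y):
    """One year of migration between regions X and Y."""
    return 0.9 * x + 0.2 * y, 0.1 * x + 0.8 * y

x, y = 2000.0, 1000.0
for _ in range(25):
    x, y = step(x, y)
assert abs(x - 2000) < 1e-6 and abs(y - 1000) < 1e-6
assert abs((x + y) - 3000) < 1e-6   # total population is conserved
```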
3.1.14 (a) Let xt be the number of RealWest’s customers, let yt be the number of Newservices customers at
month t. We get

    [ x_{t+1} ]   [ 0.7 0.1 ] [ x_t ]
    [ y_{t+1} ] = [ 0.3 0.9 ] [ y_t ]
(b) We have
" # " #" # " #
x1 0.7 0.1 0.5 0.4
= =
y1 0.3 0.9 0.5 0.6
" # " #" # " #
x2 0.7 0.1 0.4 0.34
= =
y2 0.3 0.9 0.6 0.66
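The two months can be replayed numerically (a sketch of the computation above):

```python
def month(x, y):
    """One month of customer movement between the two companies."""
    return 0.7 * x + 0.1 * y, 0.3 * x + 0.9 * y

x1, y1 = month(0.5, 0.5)
x2, y2 = month(x1, y1)
assert abs(x1 - 0.4) < 1e-12 and abs(y1 - 0.6) < 1e-12
assert abs(x2 - 0.34) < 1e-12 and abs(y2 - 0.66) < 1e-12
```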
(c) We have
Thus, [L] = A.
Section 3.2 Solutions
3.2.2 (a) Since B has 4 columns for B~x to be defined we must have ~x ∈ R4 . Thus, the domain of f is R4 .
Moreover, since B has 3 rows, we get that B~x ∈ R3 . Thus, the codomain of f is R3 .
(b) We have
                     [ 1  2 -3  0 ] [  2 ]   [ -11 ]
    f(2, -2, 3, 1) = [ 2 -1  0  3 ] [ -2 ] = [   9 ]
                     [ 1  0  2 -1 ] [  3 ]   [   7 ]
                                    [  1 ]

                     [ 1  2 -3  0 ] [ -3 ]   [ -13 ]
    f(-3, 1, 4, 2) = [ 2 -1  0  3 ] [  1 ] = [  -1 ]
                     [ 1  0  2 -1 ] [  4 ]   [   3 ]
                                    [  2 ]
(c) We have
    f(~e1) = B~e1 = [ 1 ]    f(~e2) = B~e2 = [  2 ]
                    [ 2 ]                    [ -1 ]
                    [ 1 ]                    [  0 ]

    f(~e3) = B~e3 = [ -3 ]   f(~e4) = B~e4 = [  0 ]
                    [  0 ]                   [  3 ]
                    [  2 ]                   [ -1 ]
Thus, [ f ] = B.
3.2.3 (a) Let ~x, ~y ∈ R3 and s, t ∈ R. Then,
L(s~x + t~y) = L(sx1 + ty1 , sx2 + ty2 , sx3 + ty3 ) = ([sx1 + ty1 ] + [sx2 + ty2 ], 0)
= s(x1 + x2 , 0) + t(y1 + y2 , 0)
= sL(~x) + tL(~y)
Hence,

    [L] = [ L(1, 0, 0) L(0, 1, 0) L(0, 0, 1) ] = [ 1 1 0 ]
                                                 [ 0 0 0 ]
L(s~x + t~y) = L(sx1 + ty1 , sx2 + ty2 , sx3 + ty3 ) = ([sx1 + ty1 ] − [sx2 + ty2 ], [sx2 + ty2 ] + [sx3 + ty3 ])
= s(x1 − x2 , x2 + x3 ) + t(y1 − y2 , y2 + y3 )
= sL(~x) + tL(~y)
Hence,

    [L] = [ L(1, 0, 0) L(0, 1, 0) L(0, 0, 1) ] = [ 1 -1 0 ]
                                                 [ 0  1 1 ]
(c) Observe that L(0, 0, 0) = (1, 0, 0) and L(1, 0, 0) = (1, 1, 0), so L(0, 0, 0) + L(1, 0, 0) = (2, 1, 0). But L((0, 0, 0) + (1, 0, 0)) = L(1, 0, 0) = (1, 1, 0) ≠ (2, 1, 0), so L is not linear.
Hence,

    [L] = [ L(1, 0, 0) L(0, 1, 0) L(0, 0, 1) ] = [ 0 0 0 ]
                                                 [ 0 0 0 ]
                                                 [ 0 0 0 ]
L(s~x + t~y) = L(sx1 + ty1 , sx2 + ty2 , sx3 + ty3 , sx4 + ty4 )
= ([sx4 + ty4 ] − [sx1 + ty1 ], 2[sx2 + ty2 ] + 3[sx3 + ty3 ])
= s(x4 − x1 , 2x2 + 3x3 ) + t(y4 − y1 , 2y2 + 3y3 )
= sL(~x) + tL(~y)
Hence,

    [L] = [ L(1, 0, 0, 0) L(0, 1, 0, 0) L(0, 0, 1, 0) L(0, 0, 0, 1) ] = [ -1 0 0 1 ]
                                                                        [  0 2 3 0 ]
" #
2
3.2.4 Let ~a = .
2
(a) We have
" # " #
~e1 · ~a 2 2 1/2
proj~a (~e1 ) = ~a = =
k~ak2 8 2 1/2
" # " #
~e2 · ~a 2 2 1/2
proj~a (~e2 ) = ~a = =
k~ak2 8 2 1/2
Hence,

    [proj_~a] = [ proj_~a(~e1) proj_~a(~e2) ] = [ 1/2 1/2 ]
                                                [ 1/2 1/2 ]
(b) We have

    proj_~a ([ 3, -1 ]^T) = [ 1/2 1/2 ] [  3 ] = [ 1 ]
                            [ 1/2 1/2 ] [ -1 ]   [ 1 ]

(c) We have

    proj_~a ([ 3, -1 ]^T) = (4/8) [ 2 ] = [ 1 ]
                                  [ 2 ]   [ 1 ]
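Both computations of the projection agree, as a short check confirms (a sketch):

```python
P = [[0.5, 0.5], [0.5, 0.5]]   # standard matrix of proj_a with a = (2, 2)

def matvec(M, v):
    return [sum(m * x for m, x in zip(row, v)) for row in M]

assert matvec(P, [3, -1]) == [1.0, 1.0]
# A projection is idempotent: applying it twice changes nothing.
assert matvec(P, matvec(P, [3, -1])) == [1.0, 1.0]
```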
3.2.5 We have

    refl_P(~e1) = [ 1 ]          [  3 ]   [ -7/11 ]
                  [ 0 ] - (6/11) [  1 ] = [ -6/11 ]
                  [ 0 ]          [ -1 ]   [  6/11 ]

    refl_P(~e2) = [ 0 ]          [  3 ]   [ -6/11 ]
                  [ 1 ] - (2/11) [  1 ] = [  9/11 ]
                  [ 0 ]          [ -1 ]   [  2/11 ]

    refl_P(~e3) = [ 0 ]          [  3 ]   [ 6/11 ]
                  [ 0 ] + (2/11) [  1 ] = [ 2/11 ]
                  [ 1 ]          [ -1 ]   [ 9/11 ]

Hence,

    [refl_P] = [ -7/11 -6/11 6/11 ]
               [ -6/11  9/11 2/11 ]
               [  6/11  2/11 9/11 ]
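Assuming the plane P has normal vector ~n = (3, 1, −1) (inferred from the computations above, so treat it as an assumption), the matrix can be rebuilt from the general formula [refl_P] = I − (2/||~n||^2) ~n ~n^T:

```python
from fractions import Fraction

n = [3, 1, -1]                  # assumed normal vector of the plane P
nn = sum(c * c for c in n)      # ||n||^2 = 11
I3 = [[1, 0, 0], [0, 1, 0], [0, 0, 1]]
R = [[I3[i][j] - Fraction(2 * n[i] * n[j], nn) for j in range(3)]
     for i in range(3)]
expected = [[Fraction(-7, 11), Fraction(-6, 11), Fraction(6, 11)],
            [Fraction(-6, 11), Fraction(9, 11), Fraction(2, 11)],
            [Fraction(6, 11), Fraction(2, 11), Fraction(9, 11)]]
assert R == expected
```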
and
    L(0, 1) = L( (1/2)[(1, 1) - (1, -1)] ) = (1/2) L(1, 1) - (1/2) L(1, -1) = (1/2)(2, 3) - (1/2)(3, 1) = (-1/2, 1)
3.2.14 Let ~u = [ u1, . . . , un ]^T be a unit vector, so ||~u|| = 1. By definition, the i-th column of [proj_~u] is

    proj_~u(~ei) = (~ei · ~u / ||~u||^2) ~u = ui ~u
Thus, B = { [1, -1, 0]^T , [0, 0, 1]^T } spans ker(L) and is clearly linearly independent. Hence, B is a basis for ker(L).
Every vector ~y ∈ Range(L) has the form

    L(~x) = [ x1 + x2 ] = (x1 + x2) [ 1 ] ,   (x1 + x2) ∈ R
            [    0    ]             [ 0 ]

Hence, C = { [1, 0]^T } spans Range(L) and is linearly independent, so it is a basis for Range(L).
ii. We have

    L(1, 0, 0) = [ 1 ]   L(0, 1, 0) = [ 1 ]   L(0, 0, 1) = [ 0 ]
                 [ 0 ]                [ 0 ]                [ 0 ]

Thus, [L] = [ 1 1 0 ]
            [ 0 0 0 ]
iii. By Theorem 3.3.4, B is a basis for Null([L]). By Theorem 3.3.5, C is a basis for Col([L]).
By definition, Row([L]) = Span{ [1, 1, 0]^T , [0, 0, 0]^T }. Thus, { [1, 1, 0]^T } spans Row([L]) and is clearly linearly independent, so it is a basis for Row([L]).
To find a basis for the left nullspace of [L] we solve the homogeneous system [L]^T ~x = ~0. Row reducing the corresponding coefficient matrix gives

    [ 1 0 ]   [ 1 0 ]
    [ 1 0 ] ~ [ 0 0 ]
    [ 0 0 ]   [ 0 0 ]

Thus, x1 = 0 and x2 is free, so { [0, 1]^T } is a basis for Null([L]^T).
Thus, B = { [1, 1, -1]^T } spans ker(L) and is clearly linearly independent. Hence, B is a basis for ker(L).
Every vector ~y ∈ Range(L) has the form
" # " # " # " #
x − x2 1 −1 0
L(~x) = 1 = x1 + x2 + x3 , x1 , x2 , x3 ∈ R
x2 + x3 0 1 1
(" # " #)
1 0
Hence, C = , spans Range(L) and is linearly independent, so it is a basis for
0 1
Range(L).
ii. We have

    L(1, 0, 0) = [ 1 ]   L(0, 1, 0) = [ -1 ]   L(0, 0, 1) = [ 0 ]
                 [ 0 ]                [  1 ]                [ 1 ]

Thus, [L] = [ 1 -1 0 ]
            [ 0  1 1 ]
iii. By Theorem 3.3.4, B is a basis for Null([L]). By Theorem 3.3.5, C is a basis for Col([L]).
By definition, Row([L]) = Span{ [1, -1, 0]^T , [0, 1, 1]^T }. Thus, { [1, -1, 0]^T , [0, 1, 1]^T } spans Row([L]) and is clearly linearly independent, so it is a basis for Row([L]).
To find a basis for the left nullspace of [L] we solve the homogeneous system [L]T ~x = ~0.
Row reducing the corresponding coefficient matrix gives
    [  1 0 ]   [ 1 0 ]
    [ -1 1 ] ~ [ 0 1 ]
    [  0 1 ]   [ 0 0 ]

Thus, we have x1 = x2 = 0, so Null([L]^T) = {~0}. Thus, a basis for Null([L]^T) is the empty set.
(c) i. Observe that for any ~x ∈ R3 , we have
    L(x1, x2, x3) = [ 0 ]
                    [ 0 ]
                    [ 0 ]
Section 3.3 Solutions
So, ker(L) = R3 . Thus, a basis for ker(L) is the standard basis for R3 .
Every vector ~y ∈ Range(L) has the form
    L(~x) = [ 0 ]
            [ 0 ]
            [ 0 ]
Hence, Range(L) = {~0} and so a basis for Range(L) is the empty set.
ii. We have

    L(1, 0, 0) = [ 0 ]   L(0, 1, 0) = [ 0 ]   L(0, 0, 1) = [ 0 ]
                 [ 0 ]                [ 0 ]                [ 0 ]
                 [ 0 ]                [ 0 ]                [ 0 ]

Thus, [L] = [ 0 0 0 ]
            [ 0 0 0 ]
            [ 0 0 0 ]
iii. By Theorem 3.3.4 the standard basis for R3 is a basis for Null([L]). By Theorem 3.3.5 the
empty set is a basis for Col([L]).
By definition, Row([L]) = Span{ [0, 0, 0]^T }. Thus, Row([L]) = {~0} and so a basis for Row([L]) is the empty set.
To find a basis for the left nullspace of [L] we solve the homogeneous system [L]T ~x = ~0.
Clearly, the solution space is R3 . Hence, the standard basis for R3 is a basis for Null([L]T ).
(d) i. If ~x ∈ ker(L), then we have
" # " #
0 x1
= L(x1 , x2 , x3 , x4 ) =
0 x2 + x4
Hence, x1 = 0 and x4 = −x2. Thus, we have

    [ x1 ]   [  0  ]      [  0 ]      [ 0 ]
    [ x2 ] = [  x2 ] = x2 [  1 ] + x3 [ 0 ] ,   x2, x3 ∈ R
    [ x3 ]   [  x3 ]      [  0 ]      [ 1 ]
    [ x4 ]   [ -x2 ]      [ -1 ]      [ 0 ]

Thus, B = { [0, 1, 0, -1]^T , [0, 0, 1, 0]^T } spans ker(L) and is clearly linearly independent. Hence, B is a basis for ker(L).
Every vector ~y ∈ Range(L) has the form
" # " # " #
x1 1 0
L(~x) = = x1 + (x2 + x4 ) , x1 , (x2 + x4 ) ∈ R
x2 + x4 0 1
(" # " #)
1 0
Hence, C = , spans Range(L) and is linearly independent, so it is a basis for
0 1
Range(L).
ii. We have
" # " # " # " #
1 0 0 0
L(1, 0, 0, 0) = L(0, 1, 0, 0) = L(0, 0, 1, 0) = L(0, 0, 0, 1) =
0 1 0 1
" #
1 0 0 0
Thus, [L] = .
0 1 0 1
iii. By Theorem 3.3.4, B is a basis for Null([L]). By Theorem 3.3.5, C is a basis for Col([L]).
By definition, Row([L]) = Span{ [1, 0, 0, 0]^T , [0, 1, 0, 1]^T }. Thus, { [1, 0, 0, 0]^T , [0, 1, 0, 1]^T } spans Row([L]) and is clearly linearly independent. Therefore, it is a basis for Row([L]).
To find a basis for the left nullspace of [L] we solve the homogeneous system [L]T ~x = ~0.
Row reducing the corresponding coefficient matrix gives
    [ 1 0 ]   [ 1 0 ]
    [ 0 1 ] ~ [ 0 1 ]
    [ 0 0 ]   [ 0 0 ]
    [ 0 1 ]   [ 0 0 ]

Thus, we have x1 = x2 = 0, so Null([L]^T) = {~0}. Thus, a basis for Null([L]^T) is the empty set.
(e) i. If ~x ∈ ker(L), then we have

    [ 0 ] = L(x1, x2) = [ x1 ]
    [ 0 ]               [ x2 ]
Hence, x1 = x2 = 0. So, ker(L) = {~0} and so a basis for ker(L) is the empty set.
Every vector ~y ∈ Range(L) has the form
" # " # " #
x1 1 0
L(~x) = = x1 + x2 , x1 , x2 ∈ R
x2 0 1
(" # " #)
1 0
Hence, C = , spans Range(L) and is linearly independent, so it is a basis for
0 1
Range(L).
ii. We have

    L(1, 0) = [ 1 ]   L(0, 1) = [ 0 ]
              [ 0 ]             [ 1 ]

Thus, [L] = [ 1 0 ]
            [ 0 1 ]
iii. By Theorem 3.3.4, the empty set is a basis for Null([L]). By Theorem 3.3.5, C is a basis for Col([L]).
By definition, Row([L]) = Span{ [1, 0]^T , [0, 1]^T }. Thus, { [1, 0]^T , [0, 1]^T } is a basis for Row([L]).
To find a basis for the left nullspace of [L] we solve the homogeneous system [L]T ~x = ~0. We
see that the only solution is ~x = ~0, so Null([L]T ) = {~0}. Thus, a basis for Null([L]T ) is the
empty set.
(f) L(x1 , x2 , x3 , x4 ) = (x4 − x1 , 2x2 + 3x3 )
i. If ~x ∈ ker(L), then we have
" # " #
0 x4 − x1
= L(x1 , x2 , x3 , x4 ) =
0 2x2 + 3x3
Hence, x4 = x1 and x3 = −(2/3)x2. Thus, we have

    [ x1 ]   [   x1   ]      [ 1 ]      [   0  ]
    [ x2 ] = [   x2   ] = x1 [ 0 ] + x2 [   1  ] ,   x1, x2 ∈ R
    [ x3 ]   [ -2x2/3 ]      [ 0 ]      [ -2/3 ]
    [ x4 ]   [   x1   ]      [ 1 ]      [   0  ]

Thus, B = { [1, 0, 0, 1]^T , [0, 1, -2/3, 0]^T } spans ker(L) and is clearly linearly independent. Hence, B is a basis for ker(L).
Every vector ~y ∈ Range(L) has the form
" # " # " #
x4 − x1 1 0
L(~x) = = (x4 − x1 ) + (2x2 + 3x3 )
2x2 + 3x3 0 1
(" # " #)
1 0
Hence, C = , spans Range(L) and is linearly independent, so it is a basis for
0 1
Range(L).
ii. We have

    L(1, 0, 0, 0) = [ -1 ]   L(0, 1, 0, 0) = [ 0 ]   L(0, 0, 1, 0) = [ 0 ]   L(0, 0, 0, 1) = [ 1 ]
                    [  0 ]                   [ 2 ]                   [ 3 ]                   [ 0 ]

Thus, [L] = [ -1 0 0 1 ]
            [  0 2 3 0 ]
iii. By Theorem 3.3.4, B is a basis for Null([L]). By Theorem 3.3.5, C is a basis for Col([L]).
By definition, Row([L]) = Span{ [-1, 0, 0, 1]^T , [0, 2, 3, 0]^T }. Thus, { [-1, 0, 0, 1]^T , [0, 2, 3, 0]^T } spans Row([L]) and is clearly linearly independent. Therefore, it is a basis for Row([L]).
To find a basis for the left nullspace of [L] we solve the homogeneous system [L]T ~x = ~0.
Row reducing the corresponding coefficient matrix gives
    [ -1 0 ]   [ 1 0 ]
    [  0 2 ] ~ [ 0 1 ]
    [  0 3 ]   [ 0 0 ]
    [  1 0 ]   [ 0 0 ]
Thus, we have x1 = x2 = 0, so Null([L]T ) = {~0}. Thus, a basis for Null([L]T ) is the empty
set.
(g) i. If ~x ∈ ker(L), then we have
0 = L(~x) = (3x1 )
Hence, x1 = 0. Therefore, ker(L) = {0}, so a basis for ker(L) is the empty set.
Every vector in the range of L has the form L(~x) = (3x1 ). Hence, {1} spans Range(L) and is
linearly independent, so it is a basis for Range(L).
ii. We have L(1) = 3. Thus, [L] = [3].
iii. By Theorem 3.3.4, the empty set is a basis for Null([L]). By Theorem 3.3.5, {1} is a basis for Col([L]). Now notice that [L]^T = [L]. Hence, the empty set is also a basis for Null([L]^T) and {1} is also a basis for Row([L]).
i. If ~x ∈ ker(L), then
Thus, B = { [0, 1, 0]^T } spans ker(L) and is linearly independent. Hence, it is a basis for ker(L).
Every vector ~y ∈ Range(L) has the form
    L(~x) = [ 2x1 + x3 ]      [ 2 ]      [  1 ]
            [ x1 - x3  ] = x1 [ 1 ] + x3 [ -1 ] ,   x1, x3 ∈ R
            [ x1 + x3  ]      [ 1 ]      [  1 ]

Thus, C = { [2, 1, 1]^T , [1, -1, 1]^T } spans Range(L) and is clearly linearly independent, so it is a basis for Range(L).
ii. We have

    L(1, 0, 0) = [ 2 ]   L(0, 1, 0) = [ 0 ]   L(0, 0, 1) = [  1 ]
                 [ 1 ]                [ 0 ]                [ -1 ]
                 [ 1 ]                [ 0 ]                [  1 ]

Thus, [L] = [ 2 0  1 ]
            [ 1 0 -1 ]
            [ 1 0  1 ]
iii. By Theorem 3.3.4, B is a basis for Null([L]). By Theorem 3.3.5, C is a basis for Col([L]).
By definition, Row([L]) = Span{ [2, 0, 1]^T , [1, 0, -1]^T , [1, 0, 1]^T }. However, observe that

    (1/2) [1, 0, -1]^T + (3/2) [1, 0, 1]^T = [2, 0, 1]^T

Hence, Row([L]) = Span{ [1, 0, -1]^T , [1, 0, 1]^T }. Since { [1, 0, -1]^T , [1, 0, 1]^T } is also clearly linearly independent, it is a basis for Row([L]).
To find a basis for the left nullspace of [L], we solve [L]T ~x = ~0. Row reducing the corre-
sponding coefficient matrix gives
    [ 2  1 1 ]   [ 1 0  2/3 ]
    [ 0  0 0 ] ~ [ 0 1 -1/3 ]
    [ 1 -1 1 ]   [ 0 0    0 ]

Hence, { [-2/3, 1/3, 1]^T } spans the left nullspace of [L] and is clearly linearly independent, so it is a basis for the left nullspace of [L].
(h) i. If ~x ∈ ker(L), then we have
0 = L(~x) = (x1 , 2x1 , −x1 )
Hence, x1 = 0. Therefore, ker(L) = {0}, so a basis for ker(L) is the empty set.
Every vector in the range of L has the form
    L(~x) = [  x1 ]      [  1 ]
            [ 2x1 ] = x1 [  2 ] ,   x1 ∈ R
            [ -x1 ]      [ -1 ]

Hence, C = { [1, 2, -1]^T } spans Range(L) and is linearly independent, so it is a basis for Range(L).

ii. We have L(1) = [1, 2, -1]^T . Thus, [L] = [  1 ]
                                              [  2 ]
                                              [ -1 ]
iii. By Theorem 3.3.4, the empty set is a basis for Null([L]). By Theorem 3.3.5, C is a basis for Col([L]).
We have Row([L]) = Span{1, 2, −1} = Span{1}. Thus, {1} is a basis for Row([L]).
To find a basis for the left nullspace of [L], we solve
0 = [L]T ~x = x1 + 2x2 − x3
Thus, every vector in the left nullspace of [L] has the form
    [ x1 ]   [ -2x2 + x3 ]      [ -2 ]      [ 1 ]
    [ x2 ] = [    x2     ] = x2 [  1 ] + x3 [ 0 ] ,   x2, x3 ∈ R
    [ x3 ]   [    x3     ]      [  0 ]      [ 1 ]

Hence, { [-2, 1, 0]^T , [1, 0, 1]^T } is a basis for Null([L]^T).
i. If ~x ∈ ker(L), then
(0, 0) = L(x1 , x2 ) = (x1 + x2 , 2x1 + 4x2 )
Hence, x1 + x2 = 0 and 2x1 + 4x2 = 0. This implies that x1 = x2 = 0. Therefore, ker(L) = {~0}
and so a basis for ker(L) is the empty set.
Every vector ~y ∈ Range(L) has the form
" # " # " #
x1 + x2 1 1
L(~x) = = x1 + x2 , x1 , x2 ∈ R
2x1 + 4x2 2 4
(" # " #)
1 1
Thus, C = , spans Range(L) and is clearly linearly independent, so it is a basis for
2 4
Range(L).
ii. We have

    L(1, 0) = [ 1 ]   L(0, 1) = [ 1 ]
              [ 2 ]             [ 4 ]

Thus, [L] = [ 1 1 ]
            [ 2 4 ]
iii. By Theorem 3.3.4, the empty set is a basis for Null([L]). By Theorem 3.3.5, C is a basis for Col([L]).
By definition, Row([L]) = Span{ [1, 1]^T , [2, 4]^T }. Hence, { [1, 1]^T , [2, 4]^T } spans Row([L]) and is also clearly linearly independent, so it is a basis for Row([L]).
To find a basis for the left nullspace of [L], we solve [L]T ~x = ~0. Row reducing the corre-
sponding coefficient matrix gives
" # " #
1 2 1 0
∼
1 4 0 1
Hence, the only vector in Null([L]T ) is the zero vector. So, a basis for the left nullspace of
[L] is the empty set.
(" # " #)
1 2
3.3.2 (a) By definition, , spans Col(A). Since it is also linearly independent, it is a basis for
3 4
Col(A).
(" # " #)
1 3
By definition, , spans Row(A). Since it is also linearly independent, it is a basis for
2 4
Row(A).
To find Null(A), we solve A~x = ~0. Row reducing the coefficient matrix of the corresponding
system gives

    [ 1 2 ]   [ 1 0 ]
    [ 3 4 ] ~ [ 0 1 ]

Thus, the only solution is ~x = ~0. Hence, Null(A) = {~0} and so a basis is the empty set.
To find Null(AT ), we solve AT ~x = ~0. Row reducing the coefficient matrix of the corresponding
system gives

    [ 1 3 ]   [ 1 0 ]
    [ 2 4 ] ~ [ 0 1 ]

Thus, the only solution is ~x = ~0. Hence, Null(A^T) = {~0} and so a basis is the empty set.
(" # " # " #) " # " # " #
1 0 1 1 1 0
(b) By definition, Col(B) = Span , , . Since =1 + (−1) , we get that
0 1 −1 −1 0 1
(" # " # " #) (" # " #)
1 0 1 1 0
Col(B) = Span , , = Span ,
0 1 −1 0 1
(" # " #)
1 0
by Theorem 1.2.1. Thus, , is a spanning set for Col(B) and is also linearly independent,
0 1
so it is a basis for Col(B).
By definition, { [1, 0, 1]^T , [0, 1, -1]^T } spans Row(B). Since it is also linearly independent, it is a basis for Row(B).
To find Null(B), we solve B~x = ~0. The coefficient matrix is already in RREF, so we get x1 + x3 = 0
and x2 − x3 = 0. Let x3 = t ∈ R. Then, every vector ~x ∈ Null(B) has the form
    [ x1 ]   [ -x3 ]      [ -1 ]
    [ x2 ] = [  x3 ] = x3 [  1 ]
    [ x3 ]   [  x3 ]      [  1 ]

So, { [-1, 1, 1]^T } spans Null(B) and is clearly linearly independent and hence is a basis for Null(B).
To find Null(BT ), we solve BT ~x = ~0. Row reducing the coefficient matrix of the corresponding
system gives
    [ 1  0 ]   [ 1 0 ]
    [ 0  1 ] ~ [ 0 1 ]
    [ 1 -1 ]   [ 0 0 ]

Thus, the only solution is ~x = ~0. Hence, Null(B^T) = {~0} and so a basis is the empty set.
(c) By definition, Col(C) = Span{ [1, -1, 1]^T , [2, -2, 3]^T , [0, 0, -1]^T }. Observe that

    2 [1, -1, 1]^T - [0, 0, -1]^T = [2, -2, 3]^T

Hence, { [1, -1, 1]^T , [0, 0, -1]^T } also spans Col(C). Since it is also linearly independent, it is a basis for Col(C).
By definition,

    Row(C) = Span{ [1, 2, 0]^T , [-1, -2, 0]^T , [1, 3, -1]^T } = Span{ [1, 2, 0]^T , [1, 3, -1]^T }

Hence, { [1, 2, 0]^T , [1, 3, -1]^T } is a basis for Row(C) since it spans Row(C) and is linearly independent.
To find Null(C), we solve C~x = ~0. We row reduce the corresponding coefficient matrix to get
    [  1  2  0 ]   [ 1 0  2 ]
    [ -1 -2  0 ] ~ [ 0 1 -1 ]
    [  1  3 -1 ]   [ 0 0  0 ]

This gives x1 + 2x3 = 0 and x2 − x3 = 0. Then, every vector ~x ∈ Null(C) has the form

    [ x1 ]   [ -2x3 ]      [ -2 ]
    [ x2 ] = [  x3  ] = x3 [  1 ] ,   x3 ∈ R
    [ x3 ]   [  x3  ]      [  1 ]

So, { [-2, 1, 1]^T } spans Null(C) and is clearly linearly independent and hence is a basis for Null(C).
To find Null(C T ), we solve C T ~x = ~0. Row reducing the coefficient matrix of the corresponding
system gives
    [ 1 -1  1 ]   [ 1 -1 0 ]
    [ 2 -2  3 ] ~ [ 0  0 1 ]
    [ 0  0 -1 ]   [ 0  0 0 ]

So, { [1, 1, 0]^T } spans Null(C^T) and is clearly linearly independent and hence is a basis for Null(C^T).
(d) To find a basis for Col(A), we need to determine which columns of A can be written as linear
combinations of other columns. We consider
    [ 0 ]      [  1 ]      [  2 ]      [  1 ]      [ 1 ]   [ c1 + 2c2 + c3 + c4    ]
    [ 0 ] = c1 [ -2 ] + c2 [ -5 ] + c3 [ -2 ] + c4 [ 1 ] = [ -2c1 - 5c2 - 2c3 + c4 ]
    [ 0 ]      [ -1 ]      [ -3 ]      [ -1 ]      [ 2 ]   [ -c1 - 3c2 - c3 + 2c4  ]
If we ignore the last column and pretend the third column is an augmented part, this shows that the
third column equals 1 times the first column (which is obvious). If we ignore the third column and
pretend the fourth column is an augmented part, we see that 7 times the first column plus -3 times
Hence, we have that { [1, -2, -1]^T , [2, -5, -3]^T } spans Col(A). Since it is also linearly independent (just consider the first two columns in the system above), it is a basis for Col(A).
To find a basis for Row(A), we need to determine which rows of A can be written as linear combinations of the other rows, so we consider the corresponding system for A^T. If we ignore the last column and pretend the third column is an augmented part, this shows that the third column of A^T equals the sum of the first two columns. That is, the third row of A equals the sum of the first two rows. Hence, we have that { [1, 2, 1, 1]^T , [-2, -5, -2, 1]^T } spans Row(A). Since it is also linearly independent (just consider the first two columns in the system above), it is a basis for Row(A).
To find Null(A), we solve A~x = ~0. Observe that the coefficient matrix is A, which we already row reduced above. Hence, we get x1 + x3 + 7x4 = 0 and x2 − 3x4 = 0. Then, every vector ~x ∈ Null(A) has the form

    [ x1 ]   [ -x3 - 7x4 ]      [ -1 ]      [ -7 ]
    [ x2 ] = [    3x4    ] = x3 [  0 ] + x4 [  3 ] ,   x3, x4 ∈ R
    [ x3 ]   [    x3     ]      [  1 ]      [  0 ]
    [ x4 ]   [    x4     ]      [  0 ]      [  1 ]

So, { [-1, 0, 1, 0]^T , [-7, 3, 0, 1]^T } spans Null(A) and is clearly linearly independent and hence is a basis for Null(A).
To find Null(A^T), we solve A^T ~x = ~0. We observe that we row reduced A^T above. We get x1 + x3 = 0 and x2 + x3 = 0. Then, every vector ~x ∈ Null(A^T) has the form

    [ x1 ]   [ -x3 ]      [ -1 ]
    [ x2 ] = [ -x3 ] = x3 [ -1 ] ,   x3 ∈ R
    [ x3 ]   [  x3 ]      [  1 ]

So, { [-1, -1, 1]^T } spans Null(A^T) and is clearly linearly independent and hence is a basis for Null(A^T).
3.3.3 If A~x = ~b has a unique solution, then A~x = ~0 has a unique solution ~x = ~0. Thus, Null(A) = {~0}.
On the other hand, if Null(A) = {~0}, then A~x = ~0 has a unique solution. Thus, by the System Rank
Theorem(2), rank(A) = n and so A~x = ~b has a unique solution.
3.3.4 By definition Range(L) is a subset of Rm and ~0 ∈ Range(L) by Lemma 3.3.1.
Let ~y, ~z ∈ Range(L). Then, there exist ~x, ~w ∈ Rn such that L(~x) = ~y and L(~w) = ~z. We now see that

    L(~x + ~w) = L(~x) + L(~w) = ~y + ~z
and
L(t~x) = tL(~x) = t~y
So, ~y + ~z ∈ Range(L) and t~y ∈ Range(L) for all t ∈ R. Thus, Range(L) is a subspace of Rm by the
Subspace Test.
3.3.5 Consider
c1 L(~v1 ) + · · · + ck L(~vk ) = ~0
Then,
L(c1~v1 + · · · + ck~vk ) = ~0
Therefore, c1~v1 + · · · + ck~vk ∈ ker(L) and hence c1~v1 + · · · + ck~vk = ~0 since ker(L) = {~0}. Thus, c1 = · · · =
ck = 0 since {~v1 , . . . , ~vk } is linearly independent. Hence, {L(~v1 ), . . . , L(~vk )} is linearly independent.
3.3.6 Let [L] be the standard matrix of L. By definition of the standard matrix, we have [L] is an n × n matrix.
Thus, we get that ker(L) = {~0} if and only if
~0 = L(~x) = [L]~x
has a unique solution. By the System Rank Theorem (2), this is true if and only if rank[L] = n. Moreover,
by the System Rank Theorem (3), we have that ~b = [L]~x = L(~x) is consistent for all ~b ∈ Rn if and only
if rank[L] = n. Hence, ker(L) = {~0} if and only if rank[L] = n if and only if Range(L) = Rn .
3.3.7 Let ~y ∈ Col(A). Then, by definition there exists ~x ∈ Rn such that ~y = A~x. Thus,
Thus, Null(A) is the solution space of this homogeneous system which we know is a subspace by
Theorem 2.2.3.
3.3.9 We just need to prove that if B is obtained from A by applying any one of the three elementary row
operations, then Row(A) = Row(B).
Let A have rows ~a1^T, . . . , ~am^T. Then, Row(A) = Span{~a1 , . . . , ~am }.
If B is obtained from A by swapping row i and row j, then B still has the same rows as A (just in a
different order), so
Row(B) = Span{~a1 , . . . , ~am } = Row(A)
3.3.10 (a) If ~b = A~x for some ~x ∈ Rn , then by definition ~b ∈ Col(A). On the other hand, if ~b ∈ Col(A),
then there exists ~x ∈ Rn such that A~x = ~b, so the system is consistent. Hence, we have proven the
statement.
(b) We disprove the statement with a counterexample. Let L : R2 → R2 be defined by L(~x) = ~0. Then, L(1, 1) = ~0 and so [1, 1]^T ∈ ker(L), but [1, 1]^T ≠ ~0.
" #
1 1
(c) We disprove the statement with a counter example. Let A = and let L : R2 → R2 be defined
1 1
(" # " #) " #
2 3 2 3
by L(~x) = A~x. Then, observe that Range(L) = Span , , but [L] , .
2 3 2 3
Hence,

    [L + M] = [ 5 0 ]
              [ 2 1 ]
(b) We have

    (L + M)(x1, x2, x3) = L(x1, x2, x3) + M(x1, x2, x3) = [ x1 + x2 ] + [     -x3      ] = [ x1 + x2 - x3   ]
                                                          [ x1 + x3 ]   [ x1 + x2 + x3 ]   [ 2x1 + x2 + 2x3 ]

Hence,

    [L + M] = [ 1 1 -1 ]
              [ 2 1  2 ]
(c) We have

    (L + M)(x1, x2, x3) = L(x1, x2, x3) + M(x1, x2, x3) = [ 2x1 - x2 ] + [ -2x1 + x2 ] = [ 0 ]
                                                          [ x2 + 3x3 ]   [ -x2 - 3x3 ]   [ 0 ]

Hence,

    [L + M] = [ 0 0 0 ]
              [ 0 0 0 ]
Section 3.4 Solutions
(c) We have
So, Range(M ◦ L) = R2 .
(c) No, it isn’t. If M ◦ L is onto, then for any ~z ∈ R p , there exists ~x ∈ Rn such that ~z = (M ◦ L)(~x) =
M(L(~x)). Hence, ~z ∈ Range(M). So, Range(M) = R p .
3.4.4 Let L, M ∈ L and let c, d ∈ R.
V4 Let ~x, ~y ∈ Rn and s, t ∈ R. Hence,
O(s~x + t~y) = ~0 = sO(~x) + tO(~y)
so, O is linear and hence O ∈ L. For any ~x ∈ Rn we have
(L + O)(~x) = L(~x) + O(~x) = L(~x) + ~0 = L(~x)
Hence, L + O = L.
V5 Let ~x, ~y ∈ Rn and s, t ∈ R. Then,
(−L)(s~x + t~y) = −L(s~x + t~y) = −[sL(~x) + tL(~y)] = s(−L)(~x) + t(−L)(~y)
Hence, (−L) is linear and so (−L) ∈ L. For any ~x ∈ Rn we have
(L + (−L))(~x) = L(~x) + (−L)(~x) = L(~x) − L(~x) = ~0 = O(~x)
Thus, L + (−L) = O
V8 For any ~x ∈ Rn we have
[(c + d)L](~x) = (c + d)L(~x) = cL(~x) + dL(~x) = [cL + dL](~x)
Hence, (c + d)L = cL + dL.
3.4.5 We have
    [ (M ◦ L)(~e1) · · · (M ◦ L)(~en) ] = [ M(L(~e1)) · · · M(L(~en)) ]
                                        = [ [M]L(~e1) · · · [M]L(~en) ]
                                        = [M] [ L(~e1) · · · L(~en) ]
                                        = [M][L]
3.4.6 We want to find all t1 , t2 ∈ R such that
O = t1 L1 + t2 L2
[O] = [t1 L1 + t2 L2 ]
" #
0 0 0
= t1 [L1 ] + t2 [L2 ]
0 0 0
" # " #
2 0 0 1 1 1
= t1 + t2
0 1 −1 0 0 1
" #
2t1 + t2 t2 t2
=
0 t1 −t1 + t2
Hence, we have the system of equations
2t1 + t2 = 0, t2 = 0, t2 = 0, t1 = 0, −t1 + t2 = 0
Therefore, the only solution is t1 = t2 = 0.
O = t1 L1 + t2 L2 + t3 L3
[O] = [t1 L1 + t2 L2 + t3 L3]

    [ 0 0 ] = t1 [L1] + t2 [L2] + t3 [L3]
    [ 0 0 ]
            = t1 [ 1 3 ] + t2 [ 2 5 ] + t3 [  1 1 ]
                 [ 6 1 ]      [ 8 3 ]      [ -2 3 ]
            = [ t1 + 2t2 + t3     3t1 + 5t2 + t3 ]
              [ 6t1 + 8t2 - 2t3   t1 + 3t2 + 3t3 ]
t1 + 2t2 + t3 = 0
3t1 + 5t2 + t3 = 0
6t1 + 8t2 − 2t3 = 0
t1 + 3t2 + 3t3 = 0
[1 2 1; 3 5 1; 6 8 −2; 1 3 3] ∼ [1 0 −3; 0 1 2; 0 0 0; 0 0 0]
Thus, we get
[t1; t2; t3] = [3t3; −2t3; t3] = t3 [3; −2; 1], t3 ∈ R
3.4.8 Define Id : Rn → Rn by Id(~x) = ~x. Then, for any ~x, ~y ∈ Rn and s, t ∈ R, we have
Id(s~x + t~y) = s~x + t~y = s Id(~x) + t Id(~y)
Hence, Id is linear.
For any ~x ∈ Rn we have
(L ◦ Id)(~x) = L(Id(~x)) = L(~x)
and
(Id ◦L)(~x) = Id(L(~x)) = L(~x)
as required.
Chapter 4 Solutions
(e) By definition S5 is a subset of P2 (R) and the zero polynomial z(x) = 0 + 0x + 0x2 ∈ S5 since 0 = 0.
Let a + bx + cx2 , d + ex + f x2 ∈ S5 . Then, a = c and d = f . Hence,
1 + x2 = c1 (1 + x + x2 ) + c2 (−1 + 2x + 2x2 ) + c3 (5 + x + x2 )
= c1 − c2 + 5c3 + (c1 + 2c2 + c3 )x + (c1 + 2c2 + c3 )x2
c1 − c2 + 5c3 = 1
c1 + 2c2 + c3 = 0
c1 + 2c2 + c3 = 1
c1 + 3c3 = 2
c2 + c3 = 5
−c1 + 2c2 − 3c3 = 4
c1 + 2c2 + c3 = 4
c1 + c2 + 2c3 = 0
2c1 + 3c2 + 4c3 = −3
c2 + 3c3 = 3
−c1 − c2 − 5c3 = −6
[1 1 2 | 0; 2 3 4 | −3; 0 1 3 | 3; −1 −1 −5 | −6] ∼ [1 0 0 | −1; 0 1 0 | −3; 0 0 1 | 2; 0 0 0 | 0]
c1 + 2c2 + c3 = 0
c1 + 2c2 − 2c3 = 0
−2c1 + c2 + 3c3 = 0
c1 + 4c3 = 0
Hence, we have the same homogeneous system as in part (c) so this set is also linearly independent.
86 Section 4.1 Solutions
(e) Consider
[0 0; 0 0] = c1 [1 −1; 2 1] + c2 [3 1; 2 1] + c3 [−1 5; 0 1] + c4 [3 1; 2 2] = [c1 + 3c2 − c3 + 3c4, −c1 + c2 + 5c3 + c4; 2c1 + 2c2 + 2c4, c1 + c2 + c3 + 2c4]
This gives the homogeneous system
c1 + 3c2 − c3 + 3c4 = 0
−c1 + c2 + 5c3 + c4 = 0
2c1 + 2c2 + 2c4 = 0
c1 + c2 + c3 + 2c4 = 0
Row reducing the coefficient matrix gives
[1 3 −1 3; −1 1 5 1; 2 2 0 2; 1 1 1 2] ∼ [1 0 0 0; 0 1 0 0; 0 0 1 0; 0 0 0 1]
Hence, the only solution is the trivial solution, so the set is linearly independent.
t1 + 2t2 + t3 = 0
3t1 + 5t2 + t3 = 0
6t1 + 8t2 − 2t3 = 0
t1 + 3t2 + 3t3 = 0
[1 2 1; 3 5 1; 6 8 −2; 1 3 3] ∼ [1 0 −3; 0 1 2; 0 0 0; 0 0 0]
Since there are infinitely many solutions, the set is linearly dependent.
4.1.5 (a) The statement is true. Assume that {~v1 , . . . , ~vℓ } is any subset of B that is linearly dependent. Then
there exists coefficients c1 , . . . , cℓ not all zero such that
c1~v1 + · · · + cℓ~vℓ = ~0
Hence, ~x + ~y ∈ {~0V }. We need to show that s~x = ~0 for all s ∈ R. We have 0~x = ~0 ∈ S by Theorem 4.1.1.
If s ≠ 0, then for any ~z ∈ V we can let ~w = (1/s)~z. Then, we get
~z + s~0 = s~w + s~0 = s(~w + ~0) = s~w = ~z
V3 We have
~s + ~t = {s1 + t1 , s2 + t2 , . . .} = {t1 + s1 , t2 + s2 , . . .} = ~t + ~s
~s + ~0 = {s1 + 0, s2 + 0, . . .} = {s1 , s2 , . . .} = ~s
V8 We have
(a + b)~s = {(a + b)s1 , (a + b)s2 , . . .} = {as1 + bs1 , as2 + bs2 , . . .} = {as1 , as2 , . . .} + {bs1 , bs2 , . . .} = a~s + b~s
V9 We have
a(~s + ~t) = {a(s1 + t1 ), a(s2 + t2 ), . . .} = {as1 + at1 , as2 + at2 , . . .} = {as1 , as2 , . . .} + {at1 , at2 , . . .} = a~s + a~t
V10 We have
1~s = {1s1 , 1s2 , . . .} = {s1 , s2 , . . .} = ~s
Thus, S is a vector space under these operations.
4.1.9 Let V be a vector space. Prove that (−~x) = (−1)~x for all ~x ∈ V. We have
(−1)~x = (−1)~x + ~0 by V4
= (−1)~x + (~x + (−~x)) by V5
= ((−1)~x + ~x) + (−~x) by V2
= ((−1)~x + 1~x) + (−~x) by V10
= (−1 + 1)~x + (−~x) by V8
= 0~x + (−~x) operations on R
= ~0 + (−~x) by Theorem 4.1.1(1)
= (−~x) + ~0 by V3
= (−~x) by V4
4.1.10 We are assuming that there exist c1 , . . . , ck−1 ∈ R such that
c1~v1 + · · · + ci−1~vi−1 + ci~vi+1 + · · · + ck−1~vk = ~vi
Let ~x ∈ Span{~v1 , . . . , ~vk }. Then, there exist d1 , . . . , dk ∈ R such that
~x = d1~v1 + · · · + di−1~vi−1 + di~vi + di+1~vi+1 + · · · + dk~vk
= d1~v1 + · · · + di−1~vi−1 + di (c1~v1 + · · · + ci−1~vi−1 + ci~vi+1 + · · · + ck−1~vk ) + di+1~vi+1 + · · · + dk~vk
= (d1 + di c1 )~v1 + · · · + (di−1 + di ci−1 )~vi−1 + (di+1 + di ci )~vi+1 + · · · + (dk + di ck−1 )~vk
Thus, ~x ∈ Span{~v1 , . . . , ~vi−1 , ~vi+1 , . . . , ~vk }. Hence, Span{~v1 , . . . , ~vk } ⊆ Span{~v1 , . . . , ~vi−1 , ~vi+1 , . . . , ~vk }.
Clearly, we have Span{~v1 , . . . , ~vi−1 , ~vi+1 , . . . , ~vk } ⊆ Span{~v1 , . . . , ~vk } and so
Span{~v1 , . . . , ~vk } = Span{~v1 , . . . , ~vi−1 , ~vi+1 , . . . , ~vk }
as required.
4.1.11 If ~vi ∈ Span{~v1 , . . . , ~vi−1 , ~vi+1 , . . . , ~vk }, then there exist c1 , . . . , ck−1 ∈ R such that
~vi = c1~v1 + · · · + ci−1~vi−1 + ci~vi+1 + · · · + ck−1~vk
Thus,
c1~v1 + · · · + ci−1~vi−1 − ~vi + ci~vi+1 + · · · + ck−1~vk = ~0
Since the coefficient of ~vi is −1 ≠ 0, {~v1 , . . . , ~vk } is linearly dependent.
90 Section 4.2 Solutions
0 = c1 (1 + x2 ) + c2 (1 − x + x3 ) + c3 (2x + x2 − 3x3 ) + c4 (1 + 4x + x2 ) + c5 (1 − x − x2 + x3 )
= (c1 + c2 + c4 + c5 ) + (−c2 + 2c3 + 4c4 − c5 )x + (c1 + c3 + c4 − c5 )x2 + (c2 − 3c3 + c5 )x3
0 = c1 + c2 + c3
0 = c1 + 2c2 + 3c3
0 = c1 − c2 + 2c3
This gives a system of linear equations with the same coefficient matrix as above. So, using the
same row-operations we did above we see that the matrix has a leading one in each row of its
RREF, so by the System Rank Theorem, the system is consistent for all matrices [a b; 0 c], so B
spans U.
Thus, B is a basis for U.
(d) B = {[1 2; 3 4], [5 6; 7 8], [9 0; 1 2]} of M2×2 (R). Let [a b; c d] ∈ M2×2 (R) and consider
[a b; c d] = c1 [1 2; 3 4] + c2 [5 6; 7 8] + c3 [9 0; 1 2] = [c1 + 5c2 + 9c3, 2c1 + 6c2; 3c1 + 7c2 + c3, 4c1 + 8c2 + 2c3]
This gives a system of 4 equations in 3 unknowns. By Theorem 2.2.4, since the rank of the
coefficient matrix is at most 3, we get by the System Rank Theorem that the equation cannot be
consistent for all [a b; c d] ∈ M2×2 (R) and hence B does not span M2×2 (R). Thus, B is not a basis.
4.2.2 (a) Observe that every vector in S1 has the form xp(x) = x(a + bx + cx2 ) = ax + bx2 + cx3 . Hence,
S1 = Span{x, x2 , x3 }. Clearly {x, x2 , x3 } is also linearly independent, and hence it is a basis for S1
and dim S1 = 3.
(b) Observe that every vector in S2 has the form
[a1 a2; 0 −a1] = a1 [1 0; 0 −1] + a2 [0 1; 0 0]
Thus, B = {[1 0; 0 −1], [0 1; 0 0]} spans S2 and is clearly linearly independent. Consequently, B is a
basis for S2 and dim S2 = 2.
(c) Let p(x) = ax2 + bx + c be any vector in S3 . Then p(2) = 0, so a(2)2 + b(2) + c = 0. Thus,
c = −4a − 2b. Hence, every p(x) ∈ S3 has the form
(e) Let p(x) = a + bx + cx2 be any vector in S5 . Then a = c, so every p(x) ∈ S5 has the form
a + bx + ax2 = a(1 + x2 ) + bx
Thus, S5 = Span{1 + x2 , x}. Since {1 + x2 , x} is also clearly linearly independent, it is a basis for
S5 . Thus, dim S5 = 2.
(f) Let A = [a b; c d] ∈ S6 . Then A satisfies AT = A, which implies that
[a b; c d] = [a c; b d]
0 = c1 (1 + x) + c2 (1 + x + x2 ) + c3 (x + 2x2 ) + c4 (x + x2 + x3 ) + c5 (1 + x2 + x3 )
= c1 + c2 + c5 + (c1 + c2 + c3 + c4 )x + (c2 + 2c3 + c4 + c5 )x2 + (c4 + c5 )x3
Row reducing the coefficient matrix of the corresponding homogeneous system gives
[1 1 0 0 1; 1 1 1 1 0; 0 1 2 1 1; 0 0 0 1 1] ∼ [1 0 0 0 −3; 0 1 0 0 4; 0 0 1 0 −2; 0 0 0 1 1]
Thus, treating this as an augmented matrix, we get that 1 + x2 + x3 can be written as a linear combination
of the other vectors. Ignoring the last column shows that {1 + x, 1 + x + x2 , x + 2x2 , x + x2 + x3 }
is linearly independent, and hence a basis for the set it spans. Thus, dim S8 = 4.
(i) Consider
[0; 0; 0] = c1 [1; −3; 2] + c2 [−1; 4; −1] + c3 [1; −1; 4] + c4 [0; 2; 2] + c5 [0; 1; 1] = [c1 − c2 + c3 ; −3c1 + 4c2 − c3 + 2c4 + c5 ; 2c1 − c2 + 4c3 + 2c4 + c5 ]
Row reducing the coefficient matrix of the corresponding homogeneous system gives
[1 −1 1 0 0; −3 4 −1 2 1; 2 −1 4 2 1] ∼ [1 0 3 2 1; 0 1 2 2 1; 0 0 0 0 0]
Thus, we see that the last three vectors can be written as linear combinations of the first two
vectors. Moreover, the first two columns show that the set {[1; −3; 2], [−1; 4; −1]} is linearly independent,
and hence a basis for the set it spans. Thus, dim S9 = 2.
4.2.3 {[1 0 0; 0 0 0], [0 1 0; 0 0 0], [0 0 1; 0 0 0], [0 0 0; 1 0 0], [0 0 0; 0 1 0], [0 0 0; 0 0 1]}
4.2.4 (1) This is Theorem 4.2.1.
(2) Assume {~v1 , . . . , ~vk } spans V where k < n. Then, we can use Theorem 4.1.4 to remove linearly
dependent vectors (if any) to get a basis for V. Thus, we can find a basis for V with fewer than n
vectors which contradicts Theorem 4.2.2.
(3) Assume that {~v1 , . . . , ~vn } is linearly independent, but does not span V. Then, there exists ~v ∈ V
such that ~v ∉ Span{~v1 , . . . , ~vn }. Hence, by Theorem 4.1.5 we have that {~v1 , . . . , ~vn , ~v} is a linearly
independent set of n + 1 vectors in V. But, this contradicts (1). Therefore, {~v1 , . . . , ~vn } also spans
V.
Assume {~v1 , . . . , ~vn } spans V, but is linearly dependent. Then, there exists some vector ~vi ∈
Span{~v1 , . . . , ~vi−1 , ~vi+1 , . . . , ~vn }. Hence, by Theorem 4.1.4 we have that
Span{~v1 , . . . , ~vn } = Span{~v1 , . . . , ~vi−1 , ~vi+1 , . . . , ~vn }
So, V is spanned by n − 1 vectors, which contradicts (2). Thus, {~v1 , . . . , ~vn } is also linearly inde-
pendent.
4.2.5 Since we know that dim P3 (R) = 4, we need to add two vectors to the set {1 + x + x3 , 1 + x2 } so that the
set is still linearly independent. There are a variety of ways of picking such vectors. We observe that
neither x2 nor x can be written as a linear combination of 1 + x + x3 and 1 + x2 , so we will try to prove
that B = {1 + x + x3 , 1 + x2 , x2 , x} is a basis for P3 . Consider
4.2.6 Let ~e1 , ~e2 , ~e3 , ~e4 denote the standard basis vectors for M2×2 (R). Then, clearly {~v1 , ~v2 , ~e1 , ~e2 , ~e3 , ~e4 } spans
M2×2 (R). Consider
~0 = c1~v1 + c2~v2 + c3~e1 + c4~e2 + c5~e3 + c6~e4
Row reducing the corresponding coefficient matrix gives
[1 1 1 0 0 0; 2 2 0 1 0 0; 2 3 0 0 1 0; 1 4 0 0 0 1] ∼ [1 0 0 0 4/5 −3/5; 0 1 0 0 −1/5 2/5; 0 0 1 0 −3/5 1/5; 0 0 0 1 −6/5 2/5]
If we ignore the last two columns, we see that this implies that B = {~v1 , ~v2 , ~e1 , ~e2 } is a linearly indepen-
dent set of 4 vectors in M2×2 (R). Thus, since dim M2×2 (R) = 4 we get that B is linearly independent by
Theorem 4.2.3.
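The linear independence claim in 4.2.6 can be confirmed by a rank computation on the coordinate vectors (relative to the standard basis of M2×2(R)):

```python
import numpy as np

# Check 4.2.6: with v1, v2 written in coordinates relative to the standard
# basis of M2x2(R), the columns v1, v2, e1, e2 are linearly independent.
v1 = np.array([1.0, 2.0, 2.0, 1.0])
v2 = np.array([1.0, 2.0, 3.0, 4.0])
e1 = np.array([1.0, 0.0, 0.0, 0.0])
e2 = np.array([0.0, 1.0, 0.0, 0.0])

B = np.column_stack([v1, v2, e1, e2])
assert np.linalg.matrix_rank(B) == 4   # 4 independent vectors in a 4-dim space
```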
4.2.7 Three vectors in the hyperplane are ~v1 = [1; −1; 0; 0], ~v2 = [0; 2; −1; 0], and ~v3 = [0; 0; 1; −2]. Consider
[0; 0; 0; 0] = c1~v1 + c2~v2 + c3~v3 = [c1 ; −c1 + 2c2 ; −c2 + c3 ; −2c3 ]
The only solution is c1 = c2 = c3 = 0 so the vectors are linearly independent. Since a hyperplane in R4
has dimension 3 and {~v1 , ~v2 , ~v3 } is a linearly independent set of three vectors in the hyperplane, it forms
a basis for the hyperplane.
We need to add a vector ~v4 so that {~v1 , ~v2 , ~v3 , ~v4 } is a basis for R4 . Since ~v4 = [1; 0; 0; 0] does not
satisfy the equation of the hyperplane, it is not in Span{~v1 , ~v2 , ~v3 } and so is not a linear combination
of ~v1 , ~v2 , ~v3 . Thus, {~v1 , ~v2 , ~v3 , ~v4 } is a linearly independent set with four elements in R4 and hence
is a basis for R4 since R4 has dimension 4.
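A quick numerical check of 4.2.7. The hyperplane's equation is not shown in this excerpt, so the normal ~n = (1, 1, 2, 1) below is recovered from the conditions ~n · ~vi = 0 and is an assumption of this sketch:

```python
import numpy as np

# Check 4.2.7. Assumption: the hyperplane is x1 + x2 + 2x3 + x4 = 0, with
# normal n recovered from n . vi = 0 for the three given vectors.
v1 = np.array([1.0, -1.0, 0.0, 0.0])
v2 = np.array([0.0, 2.0, -1.0, 0.0])
v3 = np.array([0.0, 0.0, 1.0, -2.0])
v4 = np.array([1.0, 0.0, 0.0, 0.0])
n = np.array([1.0, 1.0, 2.0, 1.0])

for v in (v1, v2, v3):
    assert np.isclose(n @ v, 0.0)    # v1, v2, v3 lie in the hyperplane
assert not np.isclose(n @ v4, 0.0)   # v4 does not, so it extends the basis
assert np.linalg.matrix_rank(np.column_stack([v1, v2, v3, v4])) == 4
```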
4.2.8 Let B = {~v1 , . . . , ~vk } be a basis for S. Then, B is a linearly independent set of k vectors in V. But,
dim V = dim S = k, so B is also a basis for V. Hence, V = S.
4.2.9 Theorem 4: Since k < n, we have that Span{~v1 , . . . , ~vk } ≠ V. Thus, there exists a vector ~vk+1 ∉
Span{~v1 , . . . , ~vk }. Consider
c1~v1 + · · · + ck~vk + ck+1~vk+1 = ~0 (4.7)
If ck+1 ≠ 0, then
~vk+1 = −(c1 /ck+1 )~v1 − · · · − (ck /ck+1 )~vk
which contradicts ~vk+1 ∉ Span{~v1 , . . . , ~vk }. Thus, ck+1 = 0. Hence, equation (4.7) becomes
c1~v1 + · · · + ck~vk = ~0
Section 4.3 Solutions 95
[1 1 0 | 1 −1; 1 0 1 | −3 0; 0 1 1 | 2 3; −1 1 2 | 3 7] ∼ [1 0 0 | −2 −2; 0 1 0 | 3 1; 0 0 1 | −1 2; 0 0 0 | 0 0]
Hence, for the first system we have c1 = −2, c2 = 3, and c3 = −1, and for the second system we
have d1 = −2, d2 = 1, and d3 = 2. Thus,
[A]B = [−2; 3; −1] and [B]B = [−2; 1; 2]
[1 0 2 | 0 −4; 1 1 0 | 1 1; 1 1 0 | 1 1; 0 1 −1 | 2 4] ∼ [1 0 0 | −2 −2; 0 1 0 | 3 3; 0 0 1 | 1 −1; 0 0 0 | 0 0]
Hence, for the first system we have c1 = −2, c2 = 3, and c3 = 1, and for the second system we
have d1 = −2, d2 = 3, and d3 = −1. Thus,
[A]B = [−2; 3; 1] and [B]B = [−2; 3; −1]
c1 (1 + x + x2 ) + c2 (1 + 3x + 2x2 ) + c3 (4 + x2 ) = −2 + 8x + 5x2
d1 (1 + x + x2 ) + d2 (1 + 3x + 2x2 ) + d3 (4 + x2 ) = −4 + 8x + 4x2
Hence, [p(x)]B = [−14; 13; −5] and [q(x)]B = [−17; 16; −5].
4.3.5 To find the change of coordinates matrix from B-coordinates to C-coordinates, we need to determine the
C-coordinates of the vectors in B. That is, we need to find c1 , c2 , d1 , d2 such that
c1 [2; 1] + c2 [5; 2] = [3; 1] and d1 [2; 1] + d2 [5; 2] = [5; 3]
We row reduce the corresponding doubly augmented matrix to get
[2 5 | 3 5; 1 2 | 1 3] ∼ [1 0 | −1 5; 0 1 | 1 −1]
Thus, C PB = [−1 5; 1 −1].
To find the change of coordinates matrix from C-coordinates to B-coordinates, we need to determine the
B-coordinates of the vectors in C. That is, we need to find c1 , c2 , d1 , d2 such that
c1 [3; 1] + c2 [5; 3] = [2; 1] and d1 [3; 1] + d2 [5; 3] = [5; 2]
We row reduce the corresponding doubly augmented matrix to get
[3 5 | 2 5; 1 3 | 1 2] ∼ [1 0 | 1/4 5/4; 0 1 | 1/4 1/4]
Thus, B PC = [1/4 5/4; 1/4 1/4].
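Both change of coordinates matrices in 4.3.5 can be verified by writing each basis as the columns of a matrix; then C PB = C⁻¹B (an equivalent formulation of the computation above), and the two matrices must be inverses of each other:

```python
import numpy as np

# Check 4.3.5: with the basis vectors as columns, cPb = C^{-1} B, and the
# two change of coordinates matrices are inverses of each other.
B = np.array([[3.0, 5.0], [1.0, 3.0]])   # basis B as columns
C = np.array([[2.0, 5.0], [1.0, 2.0]])   # basis C as columns

cPb = np.linalg.inv(C) @ B
bPc = np.linalg.inv(B) @ C

assert np.allclose(cPb, [[-1.0, 5.0], [1.0, -1.0]])
assert np.allclose(bPc, [[0.25, 1.25], [0.25, 0.25]])
assert np.allclose(cPb @ bPc, np.eye(2))
```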
4.3.6 To find B PC we need to find the B-coordinates of the vectors in C. Thus, we need to solve the systems
b1 (1) + b2 (−1 + x) + b3 (1 − 2x + x2 ) = 1 + x + x2
c1 (1) + c2 (−1 + x) + c3 (1 − 2x + x2 ) = 1 + 3x − x2
d1 (1) + d2 (−1 + x) + d3 (1 − 2x + x2 ) = 1 − x − x2
Similarly, to find C PB we find the C-coordinates of the vectors in B by row reducing the corresponding
triply augmented matrix to get
[1 1 1 | 1 −1 1; 1 3 −1 | 0 1 −2; 1 −1 −1 | 0 0 1] ∼ [1 0 0 | 1/2 −1/2 1; 0 1 0 | 0 1/4 −3/4; 0 0 1 | 1/2 −3/4 3/4]
Thus, C PB = [1/2 −1/2 1; 0 1/4 −3/4; 1/2 −3/4 3/4].
4.3.7 (a) To find S PB we need to find the S-coordinates of the vectors in B. We have
[1 − x + x2 ]S = [1; −1; 1], [1 − 2x + 3x2 ]S = [1; −2; 3], [1 − 2x + 4x2 ]S = [1; −2; 4]
Hence, S PB = [1 1 1; −1 −2 −2; 1 3 4].
To find B PS we need to find the B-coordinates of the vectors in S. Thus, we need to solve the
systems
b1 (1 − x + x2 ) + b2 (1 − 2x + 3x2 ) + b3 (1 − 2x + 4x2 ) = 1
c1 (1 − x + x2 ) + c2 (1 − 2x + 3x2 ) + c3 (1 − 2x + 4x2 ) = x
d1 (1 − x + x2 ) + d2 (1 − 2x + 3x2 ) + d3 (1 − 2x + 4x2 ) = x2
(b) We have
[p(x)]S = S PB [p(x)]B = [1 1 1; −1 −2 −2; 1 3 4][3; 1; −2] = [2; −1; −2]
(d) We have
[r(x)]S = S PB [r(x)]B = [1 1 1; −1 −2 −2; 1 3 4][2; −3; 2] = [1; 0; 1]
Hence,
S PB = [3 −1 1; 0 3 −3; 3 0 1]
To find the change of coordinates matrix from S to B we need to find the coordinates of the vectors in S
with respect to B. We need to find a1 , a2 , a3 , b1 , b2 , b3 , c1 , c2 , c3 such that
[1 0; 0 0] = a1 [3 0; 0 3] + a2 [−1 3; 0 0] + a3 [1 −3; 0 1]
[0 1; 0 0] = b1 [3 0; 0 3] + b2 [−1 3; 0 0] + b3 [1 −3; 0 1]
[0 0; 0 1] = c1 [3 0; 0 3] + c2 [−1 3; 0 0] + c3 [1 −3; 0 1]
Row reducing the augmented matrix to the corresponding system of equations gives
[3 −1 1 | 1 0 0; 0 3 −3 | 0 1 0; 3 0 1 | 0 0 1] ∼ [1 0 0 | 1/3 1/9 0; 0 1 0 | −1 0 1; 0 0 1 | −1 −1/3 1]
Hence,
B PS = [1/3 1/9 0; −1 0 1; −1 −1/3 1]
Thus, we take
B = B PS = [−4/3 8/3 −1; 1/2 −1 1/2; 2/3 −1/3 0]
4.3.10 Consider
~0 = c1 [~v1 ]B + · · · + cn [~vn ]B
Thus, since B is linearly independent, this implies that c1 = · · · = cn = 0. Consequently, {[~v1 ]B , . . . , [~vn ]B }
is a linearly independent set of n vectors in Rn . Hence, by Theorem 4.2.3, it is a basis for Rn .
4.3.11 Observe that [~vi ]B = ~ei . Thus, we have that [~vi ]C = ~ei and so
~vi = 0~w1 + · · · + 0~wi−1 + 1~wi + 0~wi+1 + · · · + 0~wn = ~wi
Thus, ~vi = ~wi for all i.
Chapter 5 Solutions
104 Section 5.1 Solutions
Thus, [1 2 1; 0 −2 4; 3 4 4]−1 = [−4 −2/3 5/3; 2 1/6 −2/3; 1 1/3 −1/3].
Thus, [3 1 −2; 1 2 1; 2 1 1]−1 = [1/10 −3/10 1/2; 1/10 7/10 −1/2; −3/10 −1/10 1/2].
Thus, [1 0 1; 0 2 −3; 0 0 3]−1 = [1 0 −1/3; 0 1/2 1/2; 0 0 1/3].
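The first of the inverses above can be confirmed by checking A A⁻¹ = I:

```python
import numpy as np

# Check the first inverse above: A A^{-1} = I.
A = np.array([[1.0, 2.0, 1.0],
              [0.0, -2.0, 4.0],
              [3.0, 4.0, 4.0]])
A_inv = np.array([[-4.0, -2.0/3.0, 5.0/3.0],
                  [2.0, 1.0/6.0, -2.0/3.0],
                  [1.0, 1.0/3.0, -1.0/3.0]])

assert np.allclose(A @ A_inv, np.eye(3))
assert np.allclose(A_inv, np.linalg.inv(A))
```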
[−1 0 −1 2 | 1 0 0 0; 1 −1 −2 4 | 0 1 0 0; −3 −1 5 2 | 0 0 1 0; −1 −2 4 4 | 0 0 0 1] ∼ [−1 0 −1 2 | 1 0 0 0; 0 −1 −3 6 | 1 1 0 0; 0 0 11 −10 | −4 −1 1 0; 0 0 0 0 | 1 −1 −1 1]
This gives us two systems A~b1 = ~e1 and A~b2 = ~e2 . We solve both by row reducing a double augmented
matrix. We get
[3 1 0 | 1 0; 1 2 0 | 0 1] ∼ [1 0 0 | 2/5 −1/5; 0 1 0 | −1/5 3/5]
Thus, the general solution of the first system is [2/5; −1/5; 0] + t [0; 0; 1], t ∈ R, and the general
solution of the second system is [−1/5; 3/5; 0] + s [0; 0; 1], s ∈ R. Therefore, every right inverse of
A has the form
B = [2/5 −1/5; −1/5 3/5; t s]
5.1.5 We need to find all 3 × 2 matrices B = [~b1 ~b2 ] such that
[~e1 ~e2 ] = AB = [A~b1 A~b2 ]
This gives us two systems A~b1 = ~e1 and A~b2 = ~e2 . We solve both by row reducing a double augmented
matrix. We get
[1 −2 1 | 1 0; 1 −1 1 | 0 1] ∼ [1 0 1 | −1 2; 0 1 0 | −1 1]
Thus, the general solution of the first system is [−1; −1; 0] + t [−1; 0; 1], t ∈ R, and the general solution
of the second system is [2; 1; 0] + s [−1; 0; 1], s ∈ R. Therefore, every right inverse of A has the form
B = [−1 − t, 2 − s; −1, 1; t, s]
5.1.6 Observe that B = [1 −2 1; 1 −1 1]T . Also, observe that if AB = I2 , then (AB)T = I2 and so BT AT = I2 . Thus,
from our work in Problem 5, we have that all left inverses of B are [−1 − t, −1, t; 2 − s, 1, s].
5.1.7 To find all left inverses of B, we use the same trick we did in Problem 6. We will find all right inverses
of BT = [1 0 3; 2 −1 3]. We get
[1 0 3 | 1 0; 2 −1 3 | 0 1] ∼ [1 0 3 | 1 0; 0 1 3 | 2 −1]
Thus, the general solution of the first system is [1; 2; 0] + t [−3; −3; 1], t ∈ R, and the general solution
of the second system is [0; −1; 0] + s [−3; −3; 1], s ∈ R. Therefore, every left inverse of B has the form
[1 − 3t, 2 − 3t, t; −3s, −1 − 3s, s]
5.1.8 (a) Using our work in Example 4, we get that
A−1 = (1/(2(1) − 1(3))) [1 −1; −3 2] = [−1 1; 3 −2]
B−1 = (1/(1(5) − 2(3))) [5 −2; −3 1] = [−5 2; 3 −1]
(b) We have AB = [5 9; 6 11] and hence
(AB)−1 = (1/(5(11) − 9(6))) [11 −9; −6 5] = [11 −9; −6 5] = B−1 A−1
(c) We have 2A = [4 2; 6 2], so
(2A)−1 = (1/(4(2) − 2(6))) [2 −2; −6 4] = [−1/2 1/2; 3/2 −1] = (1/2) A−1
(d) We have AT = [2 3; 1 1], so (AT )−1 = (1/(2(1) − 3(1))) [1 −3; −1 2] = [−1 3; 1 −2] and
AT (AT )−1 = [2 3; 1 1][−1 3; 1 −2] = [1 0; 0 1]
5.1.9 We have
AT (A−1 )T = [A−1 A]T = I T = I
Hence, (AT )−1 = (A−1 )T by Theorem 5.1.4.
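The identity (Aᵀ)⁻¹ = (A⁻¹)ᵀ in 5.1.9 can be spot-checked on any invertible matrix; the matrix below is chosen for this sketch, not taken from the exercise:

```python
import numpy as np

# Numerical spot check of 5.1.9: (A^T)^{-1} = (A^{-1})^T for an arbitrary
# invertible matrix (chosen for this sketch).
A = np.array([[2.0, 1.0, 0.0],
              [1.0, 3.0, 1.0],
              [0.0, 1.0, 4.0]])

assert np.allclose(np.linalg.inv(A.T), np.linalg.inv(A).T)
```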
5.1.10 Consider
c1 A~v1 + · · · + ck A~vk = ~0
Then, we have
A(c1~v1 + · · · + ck~vk ) = ~0
Since A is invertible, the only solution to this equation is
c1~v1 + · · · + ck~vk = A−1~0 = ~0
Hence, we get c1 = · · · = ck = 0, because {~v1 , . . . , ~vk } is linearly independent.
AB~x = A~0 = ~0
Since AB is invertible, this has the unique solution ~x = (AB)−1~0 = ~0. Thus, B is invertible.
Since AB and B−1 are invertible, we have that A = (AB)B−1 is invertible by Theorem 5.1.5(3).
(b) Let C = [1 0 0; 0 1 0] and D = C T . Then C and D are both not invertible since they are not square,
but CD = [1 0; 0 1] is invertible.
5.1.12 Assume that A has a right inverse B = [~b1 · · · ~bm ]. Then, we have
[~e1 · · · ~em ] = Im = AB = A[~b1 · · · ~bm ] = [A~b1 · · · A~bm ]
Hence, A~x = ~y is consistent for all ~y ∈ Rm . Therefore, rank A = m by the System Rank Theorem(3). But
then we must have n ≥ m which is a contradiction.
5.1.13 (a) The statement is false. The 3 × 2 zero matrix cannot have a left inverse.
(b) The statement is true. By the Invertible Matrix Theorem, we get that if Null(AT ) = {~0}, then AT is
invertible. Hence, A is invertible by Theorem 5.1.5.
(c) The statement is false. A = [1 0; 0 1] and B = [−1 0; 0 −1] are both invertible, but A + B = [0 0; 0 0] is
not invertible.
(d) The statement is false. A = [1 0; 0 0] satisfies AA = A, but A is not invertible.
(e) Let ~b ∈ Rn . Consider the system of equations A~x = ~b. Then, we have
AA~x = A~b
~x = (AA)−1 A~b
Thus, A~x = ~b is consistent for all ~b ∈ Rn and hence A is invertible by the Invertible Matrix
Theorem.
(f) The statement is false. If A = [0 0; 0 0] and B = [1 0; 0 1], then AB = O2,2 and B is invertible.
(g) The statement is true. One such basis is {[1 0; 0 1], [1 0; 0 2], [0 1; 1 0], [0 1; 2 0]}.
(h) If A has a column of zeroes, then AT has a row of zeroes and hence rank(AT ) < n. Thus, AT is not
invertible and so A is also not invertible by the Invertible Matrix Theorem.
(i) If A~x = ~0 has a unique solution, then rank(A) = n by Theorem 2.2.5 and so A is invertible and
hence the columns of A form a basis for Rn by the Invertible Matrix Theorem. Thus, Col(A) = Rn ,
so the statement is true.
(j) If rank(AT ) = n, then AT is invertible by the Invertible Matrix theorem, and hence A is also
invertible by the Invertible Matrix Theorem. Thus, rank(A) = n by Theorem 5.1.6.
(k) If A is invertible, then AT is also invertible. Thus, the columns of AT form a basis for Rn and hence
must be linearly independent.
(l) We have that I = −A2 + 2A = A(−A + 2I). Thus, A is invertible by Theorem 5.1.4. In particular
A−1 = −A + 2I.
5.1.14 (a) Let A = [1 1; 1 2] and B = [2 1; 3 2]. Then, A−1 = [2 −1; −1 1] and B−1 = [2 −1; −3 2]. Then, AB = [5 3; 8 5]
and (AB)−1 = [5 −3; −8 5], but A−1 B−1 = [7 −4; −5 3].
(b) If (AB)−1 = A−1 B−1 = (BA)−1 , then we must have AB = BA.
110 Section 5.2 Solutions
Hence,
E1 = [1 0 0; 1 1 0; 0 0 1], E2 = [1 0 0; 0 1 0; −2 0 1], E3 = [1 0 0; 0 1/3 0; 0 0 1], E4 = [1 −2 0; 0 1 0; 0 0 1], E5 = [1 0 0; 0 1 0; 0 3 1]
and E5 E4 E3 E2 E1 A = R. Then
Hence,
E1 = [0 0 1; 0 1 0; 1 0 0], E2 = [1/2 0 0; 0 1 0; 0 0 1], E3 = [1 0 0; 0 −1 0; 0 0 1],
E4 = [1 −1 0; 0 1 0; 0 0 1], E5 = [1 0 0; 0 1 0; 0 −1 1], E6 = [1 0 0; 0 1 −2; 0 0 1]
and E6 E5 E4 E3 E2 E1 A = R. Then
A = E1−1 E2−1 E3−1 E4−1 E5−1 E6−1 R
= [0 0 1; 0 1 0; 1 0 0][2 0 0; 0 1 0; 0 0 1][1 0 0; 0 −1 0; 0 0 1][1 1 0; 0 1 0; 0 0 1][1 0 0; 0 1 0; 0 1 1][1 0 0; 0 1 2; 0 0 1][1 0 0; 0 1 0; 0 0 1]
Hence,
E1 = [0 1 0; 1 0 0; 0 0 1], E2 = [1 0 0; −2 1 0; 0 0 1], E3 = [1 0 0; 0 1 0; −1 0 1], E4 = [1 0 0; 0 −1/3 0; 0 0 1], E5 = [1 0 −2; 0 1 0; 0 0 1]
and E5 E4 E3 E2 E1 A = R. Then
A = E1−1 E2−1 E3−1 E4−1 E5−1 R
= [0 1 0; 1 0 0; 0 0 1][1 0 0; 2 1 0; 0 0 1][1 0 0; 0 1 0; 1 0 1][1 0 0; 0 −3 0; 0 0 1][1 0 2; 0 1 0; 0 0 1][1 2 0; 0 0 1; 0 0 0]
and E8 E7 E6 E5 E4 E3 E2 E1 A = R. Then
Hence,
E1 = [1 0 0; 0 1 0; 1 0 1], E2 = [1 0 0; 0 0 1; 0 1 0], E3 = [1 −2 0; 0 1 0; 0 0 1], E4 = [1 0 7; 0 1 0; 0 0 1], E5 = [1 0 0; 0 1 −4; 0 0 1]
and E5 E4 E3 E2 E1 A = R. Then
Hence,
E1 = [1 0 0; −1 1 0; 0 0 1], E2 = [1 0 0; 0 1 0; −1 0 1], E3 = [1 −1 0; 0 1 0; 0 0 1], E4 = [1 0 0; 0 1 0; 0 0 −1/2],
E5 = [1 0 −4; 0 1 0; 0 0 1], E6 = [1 0 0; 0 1 1; 0 0 1]
and E6 E5 E4 E3 E2 E1 A = I. Hence,
A−1 = E6 E5 E4 E3 E2 E1
and
Hence,
E1 = [0 1; 1 0], E2 = [1 0; 0 1/3], E3 = [1 2; 0 1]
and E3 E2 E1 A = I. Hence,
A−1 = E3 E2 E1
and
A = E1−1 E2−1 E3−1 = [0 1; 1 0][1 0; 0 3][1 −2; 0 1]
Hence,
E1 = [0 1 0; 1 0 0; 0 0 1], E2 = [1 0 0; 0 1/2 0; 0 0 1], E3 = [1 0 0; 0 1 0; 1 0 1], E4 = [1 −4 0; 0 1 0; 0 0 1],
E5 = [1 0 0; 0 1 0; 0 −6 1], E6 = [1 0 0; 0 1 0; 0 0 −1/6], E7 = [1 0 8; 0 1 0; 0 0 1], E8 = [1 0 0; 0 1 −3; 0 0 1]
and E8 E7 E6 E5 E4 E3 E2 E1 A = I. Hence,
A−1 = E8 E7 E6 E5 E4 E3 E2 E1
and
[4 2; 2 2] R1 − R2 ∼ [2 0; 2 2] (1/2)R1 , (1/2)R2 ∼ [1 0; 1 1] R2 − R1 ∼ [1 0; 0 1]
Hence,
E1 = [1 0 0; 2 1 0; 0 0 1], E2 = [1 0 0; 0 1 0; 4 0 1], E3 = [1 0 0; 0 0 1; 0 1 0],
E4 = [1 0 0; 0 1 0; 0 0 −1/4], E5 = [1 0 1; 0 1 0; 0 0 1]
and E5 E4 E3 E2 E1 A = I. Hence,
A−1 = E5 E4 E3 E2 E1
and
(c) We have
[1 2 3; 2 6 5] R2 − 2R1 ∼ [1 2 3; 0 2 −1] R1 − R2 ∼ [1 0 4; 0 2 −1] (1/2)R2 ∼ [1 0 4; 0 1 −1/2]
5.2.7 If A is invertible, then by Corollary 5.2.7, there exists a sequence of elementary matrices E1 , . . . , Ek such
that A = E1−1 · · · Ek−1 . If B is row equivalent to A, then there exists a sequence of elementary matrices
F1 , . . . , Fm such that Fm · · · F1 A = B. Thus,
B = Fm · · · F1 E1−1 · · · Ek−1
det A = |1 a a2 a3 ; 0 b − a b2 − a2 b3 − a3 ; 0 c − a c2 − a2 c3 − a3 ; 0 d − a d2 − a2 d3 − a3 |
= |b − a b2 − a2 b3 − a3 ; c − a c2 − a2 c3 − a3 ; d − a d2 − a2 d3 − a3 |
= (b − a)(c − a)(d − a) |1 b + a b2 + ba + a2 ; 1 c + a c2 + ca + a2 ; 1 d + a d2 + da + a2 |
Section 5.3 Solutions 121
= (b − a)(c − a)(d − a) |1 b + a b2 + ba + a2 ; 0 c − b c2 + ca − b2 − ba; 0 d − b d2 + da − b2 − ba|
= (b − a)(c − a)(d − a) |c − b c2 + ca − b2 − ba; d − b d2 + da − b2 − ba|
= (b − a)(c − a)(d − a)(c − b)(d − b) |1 a + b + c; 1 a + b + d|
= (b − a)(c − a)(d − a)(c − b)(d − b)(d − c)
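The Vandermonde formula just derived can be sanity-checked on sample values (chosen for this sketch): the determinant should equal the product of all pairwise differences:

```python
import numpy as np
from itertools import combinations

# Check the 4x4 Vandermonde determinant formula on sample a, b, c, d:
# det equals the product (b-a)(c-a)(d-a)(c-b)(d-b)(d-c).
vals = [1.0, 2.0, 4.0, 7.0]          # sample values for this sketch
A = np.array([[1.0, v, v**2, v**3] for v in vals])

product = 1.0
for x, y in combinations(vals, 2):   # pairs in order, so each factor is y - x
    product *= (y - x)

assert np.isclose(np.linalg.det(A), product)
```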
det A−1 = 1/ det A
5.3.4 By Theorem 5.3.9 and Corollary 5.3.10 we get
det B = det(P−1 AP) = det P−1 det A det P = (1/ det P) det A det P = det A
5.3.5 (a) We have AAT = AA−1 = I. Hence, by Theorem 5.3.9 and Corollary 5.3.10 we get
Inductive Hypothesis: Assume that the result holds for any (n − 1) × (n − 1) matrix.
Inductive Step: Suppose that B is an n × n matrix obtained from A by swapping two rows. If the i-th
row of A was not swapped, then the cofactors of the i-th row of B are (n − 1) × (n − 1) matrices which
can be obtained from the cofactors of the i-th row of A by swapping the same two rows. Hence, by the
inductive hypothesis, the cofactors Ci∗j of B and Ci j of A satisfy Ci∗j = −Ci j . Hence,
det B = ai1 Ci1∗ + · · · + ain Cin∗ = ai1 (−Ci1 ) + · · · + ain (−Cin ) = −(ai1 Ci1 + · · · + ain Cin ) = − det A
Section 5.4 Solutions 123
6 −6 16
(d) We have adj A = 0 3 −7. Hence,
0 0 2
6 0 0
A(adj A) = 0 6 0 = (det A)I
0 0 6
−1 4 −2
(e) We have adj A = 2 −9 5 . Hence,
3 −11 6
1 0 0
A(adj A) = 0 1 0 = (det A)I
0 0 1
−1 −8 −4
(f) We have adj A = −2 −12 −4. Hence,
−2 −8 −4
4 0 0
A(adj A) = 0 4 0 = (det A)I
0 0 4
5.4.2 (a) We have det A = ad − bc and adj A = [d −b; −c a], thus
A−1 = (1/(ad − bc)) [d −b; −c a]
−1 t + 3 −3
(b) We have det A = −17 − 2t and adj A = 5 2 − 3t −2t − 2. Hence,
−2 −11 −6
−1 t + 3 −3
1
A−1 = 5 2 − 3t −2t − 2
−17 − 2t
−2 −11 −6
4t + 1 −1 −3t − 1
(c) We have det A = 14t + 1 and adj A = −10 14 11 . Hence,
3 + 2t −4 2t − 3
4t + 1 −1 −3t − 1
−1 1
A = −10 14 11
14t + 1
3 + 2t −4 2t − 3
cd 0 −cb
(d) We have det A = acd − b2 c and adj A = 0 ad − b2 0 . Hence,
−cb 0 ac
cd 0 −cb
1
A−1 = 0 ad − b2 0
acd − b2 c
−cb 0 ac
5.4.3 Solve the following systems of linear equations using Cramer's Rule.
(a) The coefficient matrix is A = [3 −1; 4 7], so det A = 21 + 4 = 25. Hence,
x1 = (1/25) |2 −1; 5 7| = 19/25
x2 = (1/25) |3 2; 4 5| = 7/25
Thus, the solution is ~x = [19/25; 7/25].
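Cramer's Rule in (a) can be checked against a direct solve. The right-hand side ~b = (2, 5) is read off from the numerator determinants above and is an assumption of this sketch:

```python
import numpy as np

# Check 5.4.3(a): Cramer's Rule against a direct solve; b = (2, 5) is read
# off from the numerator determinants (an assumption of this sketch).
A = np.array([[3.0, -1.0], [4.0, 7.0]])
b = np.array([2.0, 5.0])

det_A = np.linalg.det(A)
x = np.empty(2)
for i in range(2):
    Ai = A.copy()
    Ai[:, i] = b                      # replace column i with b
    x[i] = np.linalg.det(Ai) / det_A

assert np.allclose(x, [19.0/25.0, 7.0/25.0])
assert np.allclose(x, np.linalg.solve(A, b))
```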
(b) The coefficient matrix is A = [2 1; 3 7], so det A = 14 − 3 = 11. Hence,
x1 = (1/11) |1 1; −2 7| = 9/11
x2 = (1/11) |2 1; 3 −2| = −7/11
Thus, the solution is ~x = [9/11; −7/11].
(c) The coefficient matrix is A = [2 1; 3 7], so det A = 14 − 3 = 11. Hence,
x1 = (1/11) |3 1; 5 7| = 16/11
x2 = (1/11) |2 3; 3 5| = 1/11
Thus, the solution is ~x = [16/11; 1/11].
(d) The coefficient matrix is A = [2 3 1; 1 1 −1; −2 0 2], so det A = 6. Hence,
x1 = (1/6) |1 3 1; −1 1 −1; 1 0 2| = 4/6
x2 = (1/6) |2 1 1; 1 −1 −1; −2 1 2| = −3/6
x3 = (1/6) |2 3 1; 1 1 −1; −2 0 1| = 7/6
Hence, the solution is ~x = [2/3; −1/2; 7/6].
126 Section 5.5 Solutions
(e) The coefficient matrix is A = [5 1 −1; 9 1 −1; 1 −1 5], so det A = −16. Hence,
x1 = (1/(−16)) |4 1 −1; 1 1 −1; 2 −1 5| = 12/(−16)
x2 = (1/(−16)) |5 4 −1; 9 1 −1; 1 2 5| = −166/(−16)
x3 = (1/(−16)) |5 1 4; 9 1 1; 1 −1 2| = −42/(−16)
Thus, the solution is ~x = [−3/4; 83/8; 21/8].
5.5.4 Since adding a multiple of one column to another does not change the determinant, we get
V = |det[~v1 · · · ~vn ]| = |det[~v1 · · · ~vn + t~v1 ]|
Hence,
[L]B = [[L(1, −2)]B [L(2, 1)]B ] = [1 −4; −1 6]
(b) We have
L(1, 1) = [6; 2] = 10 [1; 1] − 4 [1; 2]
L(1, 2) = [10; 1] = 19 [1; 1] − 9 [1; 2]
Hence,
[L]B = [[L(1, 1)]B [L(1, 2)]B ] = [10 19; −4 −9]
(c) We have
L(−3, 5) = [6; −10] = −2 [−3; 5] + 0 [1; −2]
L(1, −2) = [−1; 2] = 0 [−3; 5] − 1 [1; −2]
Hence,
[L]B = [[L(−3, 5)]B [L(1, −2)]B ] = [−2 0; 0 −1]
Section 6.1 Solutions 129
(d) We have
L(1, 1, 1) = [4; 0; 5] = −1 [1; 1; 1] + 5 [1; −1; 0] − 6 [0; −1; −1]
L(1, −1, 0) = [1; −1; −1] = 1 [1; 1; 1] + 0 [1; −1; 0] + 2 [0; −1; −1]
L(0, −1, −1) = [−3; 0; −3] = 0 [1; 1; 1] − 3 [1; −1; 0] + 3 [0; −1; −1]
Hence,
[L]B = [[L(1, 1, 1)]B [L(1, −1, 0)]B [L(0, −1, −1)]B ] = [−1 1 0; 5 0 −3; −6 2 3]
(e) We have
L(2, −1, −1) = [16; −2; 10] = 6 [2; −1; −1] + 4 [1; 1; 1] − 12 [0; 0; −1]
L(1, 1, 1) = [−1; 2; −1] = −1 [2; −1; −1] + 1 [1; 1; 1] + 3 [0; 0; −1]
L(0, 0, −1) = [6; 0; 4] = 2 [2; −1; −1] + 2 [1; 1; 1] − 4 [0; 0; −1]
Hence,
[L]B = [[L(2, −1, −1)]B [L(1, 1, 1)]B [L(0, 0, −1)]B ] = [6 −1 2; 4 1 2; −12 3 −4]
(f) We have
L(0, 1, 0) = [0; 2; 0] = 2 [0; 1; 0] + 0 [2; 0; 1] + 0 [1; 0; 1]
L(2, 0, 1) = [4; 0; 2] = 0 [0; 1; 0] + 2 [2; 0; 1] + 0 [1; 0; 1]
L(1, 0, 1) = [−1; 0; −1] = 0 [0; 1; 0] + 0 [2; 0; 1] − 1 [1; 0; 1]
Hence,
[L]B = [[L(0, 1, 0)]B [L(2, 0, 1)]B [L(1, 0, 1)]B ] = [2 0 0; 0 2 0; 0 0 −1]
(g) We have
L(0, 1, 0) = [0; 2; 0] = 2 [0; 1; 0] + 0 [−1; 0; 1] + 0 [1; 1; 1]
L(−1, 0, 1) = [−1; 0; 1] = 0 [0; 1; 0] + 1 [−1; 0; 1] + 0 [1; 1; 1]
L(1, 1, 1) = [3; 4; 1] = 2 [0; 1; 0] − 1 [−1; 0; 1] + 2 [1; 1; 1]
Hence,
[L]B = [[L(0, 1, 0)]B [L(−1, 0, 1)]B [L(1, 1, 1)]B ] = [2 0 2; 0 1 −1; 0 0 2]
Thus,
L(~x) = 4 [2; 1; 0] + 22 [−1; 0; 1] + 15 [1; 1; 0] = [1; 19; 22]
(b) We have
[L(~y)]B = [L]B [~y]B = [0; −2; 3]
Thus,
L(~y) = 0 [2; 1; 0] − 2 [−1; 0; 1] + 3 [1; 1; 0] = [5; 3; −2]
(c) From our work on similar matrices, we have that [L]B = P−1 [L]P where P is the change of
coordinates matrix from B-coordinates to standard coordinates. Thus,
[L] = P[L]B P−1 = [2 −1 1; 1 0 1; 0 1 0][0 1 0; 0 4 2; 3 3 0][1 −1 1; 0 0 1; −1 2 −1] = [5 −7 6; 3 −3 7; −2 4 2]
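The matrix product above can be verified numerically:

```python
import numpy as np

# Check the computation above: [L] = P [L]_B P^{-1} with the matrices shown.
P = np.array([[2.0, -1.0, 1.0],
              [1.0, 0.0, 1.0],
              [0.0, 1.0, 0.0]])
L_B = np.array([[0.0, 1.0, 0.0],
                [0.0, 4.0, 2.0],
                [3.0, 3.0, 0.0]])

L = P @ L_B @ np.linalg.inv(P)
assert np.allclose(L, [[5.0, -7.0, 6.0],
                       [3.0, -3.0, 7.0],
                       [-2.0, 4.0, 2.0]])
```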
Thus,
L(~x) = 6 [1; 1; 0] + 13 [0; 1; 1] + 2 [1; 0; 1] = [8; 19; 15]
(b) We have
[L(~y)]B = [L]B [~y]B = [−3; 6; 2]
Thus,
L(~y) = −3 [1; 1; 0] + 6 [0; 1; 1] + 2 [1; 0; 1] = [−1; 3; 8]
(c) From our work on similar matrices, we have that [L]B = P−1 [L]P where P is the change of
coordinates matrix from B-coordinates to standard coordinates. Thus,
[L] = P[L]B P−1 = [1 0 1; 1 1 0; 0 1 1][0 0 3; 5 0 −1; 0 2 2][1/2 1/2 −1/2; −1/2 1/2 1/2; 1/2 −1/2 1/2] = [3/2 −3/2 7/2; 7/2 3/2 −3/2; 2 3 −1]
Therefore,
[proj~a ]B = [1 0; 0 0]
(b) For our geometrically natural basis, we include ~b and a vector orthogonal to ~b. We pick
B = {[1; 1], [−1; 1]}. Then we have
proj~b (~b) = ~b = 1 [1; 1] + 0 [−1; 1]
proj~b ([−1; 1]) = ~0 = 0 [1; 1] + 0 [−1; 1]
Hence,
[proj~b ]B = [1 0; 0 0]
(c) For our geometrically natural basis, we include ~b and a vector orthogonal to ~b. We pick
B = {[1; 1], [−1; 1]}. Then we have
perp~b (~b) = ~0 = 0 [1; 1] + 0 [−1; 1]
perp~b ([−1; 1]) = [−1; 1] = 0 [1; 1] + 1 [−1; 1]
Hence,
[perp~b ]B = [0 0; 0 1]
(d) For our geometrically natural basis, we include the normal vector ~n for the plane of reflection and a
basis for the plane. To pick a basis for the plane, we need to pick a set of two linearly independent
vectors which are orthogonal to ~n. Thus, we pick
B = {[1; 0; −1], [0; 1; 0], [1; 0; 1]}
Then we get
refl~n (~n) = −~n = −1 [1; 0; −1] + 0 [0; 1; 0] + 0 [1; 0; 1]
refl~n ([0; 1; 0]) = [0; 1; 0] = 0 [1; 0; −1] + 1 [0; 1; 0] + 0 [1; 0; 1]
refl~n ([1; 0; 1]) = [1; 0; 1] = 0 [1; 0; −1] + 0 [0; 1; 0] + 1 [1; 0; 1]
Hence,
[refl~n ]B = [−1 0 0; 0 1 0; 0 0 1]
Thus,
(PQ)−1 A(PQ) = C
So, A is similar to C.
6.1.6 If AT A is similar to I, then there exists an invertible matrix P such that
P−1 AT AP = I
Hence, by the Invertible Matrix Theorem, we get c1~v1 + · · · + ck~vk = ~0. Thus, c1 = · · · = ck = 0, since
{~v1 , . . . , ~vk } is linearly independent. Therefore C is also linearly independent.
Let ~y ∈ Null(B). Then, we have that
~0 = B~y = P−1 AP~y
134 Section 6.2 Solutions
AP~y = P~0 = ~0
~y = d1 P−1~v1 + · · · + dk P−1~vk
det A = det(PDP−1 ) = det P det D det P−1 = det D det P (1/ det P) = det D = d11 d22 · · · dnn
(d) We have
C(λ) = det(A − λI) = |−3 − λ −2 −2; 2 2 − λ 1; 0 0 1 − λ| = (1 − λ) |−3 − λ −2; 2 2 − λ|
= −(λ − 1)(λ2 + λ − 2) = −(λ − 1)2 (λ + 2)
Hence, the eigenvalues are λ1 = 1 with aλ1 = 2, and λ2 = −2 with aλ2 = 1. Since λ2 has algebraic
multiplicity 1 it also has geometric multiplicity 1 by Theorem 6.2.4. For λ1 we get
A − λ1 I = [−4 −2 −2; 2 1 1; 0 0 0] ∼ [1 1/2 1/2; 0 0 0; 0 0 0]
Thus, a basis for Eλ1 is {[−1; 2; 0], [−1; 0; 2]}. Thus, λ1 has geometric multiplicity 2.
(e) We have
C(λ) = det(A − λI) = |−3 − λ 6 −2; −1 2 − λ −1; 1 −3 −λ|
= |−3 − λ 6 −2; 0 −1 − λ −1 − λ; 1 −3 −λ| = |−3 − λ 8 −2; 0 0 −1 − λ; 1 −3 + λ −λ|
= −(λ + 1)(λ2 − 1) = −(λ + 1)2 (λ − 1)
Hence, the eigenvalues are λ1 = −1 with algebraic multiplicity 2 and λ2 = 1 with algebraic mul-
tiplicity 1. Since λ2 has algebraic multiplicity 1 it also has geometric multiplicity 1 by Theorem
6.2.4. For λ1 we get
A − λ1 I = [−2 6 −2; −1 3 −1; 1 −3 1] ∼ [1 −3 1; 0 0 0; 0 0 0]
Hence, the eigenvalues are λ1 = 2 with aλ1 = 2, and λ2 = 8 with aλ2 = 1. Since λ2 has algebraic
multiplicity 1 it also has geometric multiplicity 1 by Theorem 6.2.4. For λ1 we get
A − λ1 I = [2 2 2; 2 2 2; 2 2 2] ∼ [1 1 1; 0 0 0; 0 0 0]
(j) We have
C(λ) = det(A − λI) = |1 − λ 0 1; 0 2 − λ 1; 0 0 2 − λ| = (1 − λ)(2 − λ)2
Hence, the eigenvalues are λ1 = 1 and λ2 = 2. The algebraic multiplicity of λ1 is 1, so the
geometric multiplicity is also 1 by Theorem 6.2.4. For λ2 = 2, we have
A − λ2 I = [−1 0 1; 0 0 1; 0 0 0] ∼ [1 0 0; 0 0 1; 0 0 0]
proj~v (t~v) = (((t~v) · ~v)/k~vk2 ) ~v = t~v = 1(t~v)
Thus, λ1 = 1 is an eigenvalue with Eλ1 = Span{~v}.
If ~y is any non-zero vector orthogonal to ~v, then we get
proj~v ~y = ~0 = 0~v
Hence, λ2 = 0 is an eigenvalue with Eλ2 = Span{~y}.
6.2.5 If A is invertible and ~v is an eigenvector of A, then there exists λ ∈ R such that A~v = λ~v. Multiplying
both sides on the left by A−1 gives
~v = A−1 (λ~v) = λA−1~v
Since ~v , ~0, this implies that λ , 0. Thus, A−1~v = λ1 ~v. Hence, ~v is also an eigenvector of A−1 with
eigenvalue λ1 .
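A quick numerical illustration of 6.2.5; the matrix here is a hypothetical example chosen for the sketch, not one from the text:

```python
import numpy as np

# Hypothetical invertible matrix with eigenvector (1, 1) for lambda = 3.
A = np.array([[1.0, 2.0],
              [0.0, 3.0]])
v = np.array([1.0, 1.0])

assert np.allclose(A @ v, 3.0 * v)                 # A v = 3 v
assert np.allclose(np.linalg.inv(A) @ v, v / 3.0)  # A^{-1} v = (1/3) v
```
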
" #
a b
6.2.6 Let A = . We have that
c d
" # " # " # " #
1 1 1 1
A =2 and A =3
2 2 3 3
This gives " # " # " # " #
a + 2b 2 a + 3b 3
= and =
c + 2d 4 c + 3d 9
Solving a + 2b = 2 and "a + 3b =
# 3 gives b = 1 and a = 0. Solving c + 2d = 4 and c + 3d = 9 gives d = 5
0 1
and c = −6. Thus, A = .
−6 5
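The matrix found in 6.2.6 can be checked directly against its two required eigenpairs:

```python
import numpy as np

# A should send (1,2) to 2(1,2) and (1,3) to 3(1,3).
A = np.array([[0.0, 1.0],
              [-6.0, 5.0]])

assert np.allclose(A @ np.array([1.0, 2.0]), np.array([2.0, 4.0]))
assert np.allclose(A @ np.array([1.0, 3.0]), np.array([3.0, 9.0]))
```
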
6.2.7 Let A = [0 0 0 1; 0 0 0 0; 0 0 0 0; 0 0 0 0]. Then, observe that C(λ) = λ⁴, so λ1 = 0 is an eigenvalue with aλ1 = 4. However, since rank(A − 0I) = 1, we have by the Solution Theorem that dim Null(A − 0I) = 3 and so gλ1 = 3. Thus, gλ1 < aλ1 as required.
6.2.8 Let A = [1 0 0 0; 0 2 0 0; 0 0 3 0; 0 0 0 4]. Then, observe that
(AB)(5~u + 3~v) = A(5B~u + 3B~v) = A(25~u + 9~v) = 25A~u + 9A~v = 150~u + 90~v = 30(5~u + 3~v)
For ~w = [6; 42] we similarly get
AB~w = AB(c1~u + c2~v) = c1(AB)~u + c2(AB)~v = c1(30~u) + c2(30~v) = 30(c1~u + c2~v) = 30~w
# " " #
1 0 1 0
6.2.10 It does not imply that λµ is an eigenvalue of AB. Let A = and B = . Then λ = 2 is an
0 2 0 1/2
" #
1 0
eigenvalue of A and µ = 1 is an eigenvalue of B, but λµ = 2 is not an eigenvalue of AB = .
0 1
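The counterexample in 6.2.10 is easy to confirm numerically:

```python
import numpy as np

# lam = 2 for A and mu = 1 for B, but AB = I has only the eigenvalue 1,
# so lam*mu = 2 is not an eigenvalue of AB.
A = np.array([[1.0, 0.0], [0.0, 2.0]])
B = np.array([[1.0, 0.0], [0.0, 0.5]])

assert np.any(np.isclose(np.linalg.eigvals(A), 2.0))
assert np.any(np.isclose(np.linalg.eigvals(B), 1.0))
assert np.allclose(A @ B, np.eye(2))
assert not np.any(np.isclose(np.linalg.eigvals(A @ B), 2.0))
```
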
6.2.11 By definition of matrix-vector multiplication, we have that
c
6.2.13 If A and B are similar, then there exists an invertible matrix P such that P−1 AP = B. We get that the
characteristic polynomial for B is
Hence, a basis for Eλ1 is {(−1, 1, 0)^T, (1, 0, 1)^T}.
For λ2 = 3 we have A − λ2I = [2 −1 1; −1 2 1; 1 1 2] ∼ [1 0 1; 0 1 1; 0 0 0].
142 Section 6.3 Solutions
Hence, a basis for Eλ2 is {(−1, −1, 1)^T}.
It follows that A is diagonalized by P = [−1 1 −1; 1 0 −1; 0 1 1]. The resulting diagonal matrix D has the eigenvalues of A as its diagonal entries, so D = [6 0 0; 0 6 0; 0 0 3].
(e) We have
C(λ) = det[1−λ 0 −1; 2 2−λ 1; 4 0 5−λ] = (2 − λ)(λ² − 6λ + 9) = −(λ − 2)(λ − 3)²
Hence, a basis for Eλ2 is {(−1, 0, 2)^T}. Consequently, the matrix is not diagonalizable since gλ2 < aλ2.
(f) We have
C(λ) = det[1−λ 1 2; 2 −λ 1; 1 −1 −1−λ] = det[2−λ 1 2; 2−λ −λ 1; 0 −1 −1−λ]   (C1 → C1 + C2)
     = det[2−λ 1 2; 0 −1−λ −1; 0 −1 −1−λ]   (R2 → R2 − R1)
     = (2 − λ)(λ² + 2λ) = −λ(λ − 2)(λ + 2)
Hence, a basis for Eλ1 is {(−1, −3, 2)^T}.
For λ2 = 2 we have A − λ2I = [−1 1 2; 2 −2 1; 1 −1 −3] ∼ [1 −1 0; 0 0 1; 0 0 0].
Section 6.3 Solutions 143
Hence, a basis for Eλ2 is {(1, 1, 0)^T}.
For λ3 = −2 we have A − λ3I = [3 1 2; 2 2 1; 1 −1 1] ∼ [1 0 3/4; 0 1 −1/4; 0 0 0].
Hence, a basis for Eλ3 is {(−3, 1, 4)^T}.
It follows that A is diagonalized by P = [−1 1 −3; −3 1 1; 2 0 4]. The resulting diagonal matrix D has the eigenvalues of A as its diagonal entries, so D = [0 0 0; 0 2 0; 0 0 −2].
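The diagonalization in part (f) can be verified numerically via the identity AP = PD, taking A to be the matrix read off from the displayed C(λ):

```python
import numpy as np

A = np.array([[1.0, 1.0, 2.0],
              [2.0, 0.0, 1.0],
              [1.0, -1.0, -1.0]])
P = np.array([[-1.0, 1.0, -3.0],
              [-3.0, 1.0, 1.0],
              [ 2.0, 0.0, 4.0]])
D = np.diag([0.0, 2.0, -2.0])

# Columns of P are eigenvectors for the eigenvalues 0, 2, -2.
assert np.allclose(A @ P, P @ D)
```
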
(g) We have
C(λ) = det[1−λ −2 3; 2 6−λ −6; 1 2 −1−λ] = det[2−λ 0 2−λ; 2 6−λ −6; 1 2 −1−λ]   (R1 → R1 + R3)
     = det[2−λ 0 0; 2 6−λ −8; 1 2 −2−λ]   (C3 → C3 − C1)
     = (2 − λ)(λ² − 4λ + 4) = −(λ − 2)³
Hence, a basis for Eλ1 is {(−2, 1, 0)^T, (3, 0, 1)^T}. Thus, gλ1 = 2 < aλ1, so A is not diagonalizable.
(h) We have
C(λ) = det[3−λ 1 1; −4 −2−λ −5; 2 2 5−λ] = det[3−λ 1 0; −4 −2−λ −3+λ; 2 2 3−λ]   (C3 → C3 − C2)
     = det[3−λ 1 0; −2 −λ 0; 2 2 3−λ]   (R2 → R2 + R3)
     = (3 − λ)(λ² − 3λ + 2) = −(λ − 3)(λ − 2)(λ − 1)
Hence, a basis for Eλ1 is {(0, −1, 1)^T}.
For λ2 = 2 we have A − λ2I = [1 1 1; −4 −4 −5; 2 2 3] ∼ [1 1 0; 0 0 1; 0 0 0].
Hence, a basis for Eλ2 is {(−1, 1, 0)^T}.
For λ3 = 1 we have A − λ3I = [2 1 1; −4 −3 −5; 2 2 4] ∼ [1 0 −1; 0 1 3; 0 0 0].
Hence, a basis for Eλ3 is {(1, −3, 1)^T}.
It follows that A is diagonalized by P = [0 −1 1; −1 1 −3; 1 0 1]. The resulting diagonal matrix D has the eigenvalues of A as its diagonal entries, so D = [3 0 0; 0 2 0; 0 0 1].
(i) We have
C(λ) = det[−3−λ −3 5; 13 10−λ −13; 3 2 −1−λ] = det[2−λ −3 5; 0 10−λ −13; 2−λ 2 −1−λ]   (C1 → C1 + C3)
     = det[2−λ −3 5; 0 10−λ −13; 0 5 −6−λ]   (R3 → R3 − R1)
     = (2 − λ)(λ² − 4λ + 5)
Hence, a basis for Eλ1 is {(−3/5, 1, 0)^T, (1, 0, 1)^T}.
For λ2 = 6 we have A − λ2I = [2 6 −10; 5 −5 −5; 5 3 −13] ∼ [1 0 −2; 0 1 −1; 0 0 0].
Hence, a basis for Eλ2 is {(2, 1, 1)^T}.
It follows that A is diagonalized by P = [−3/5 1 2; 1 0 1; 0 1 1]. The resulting diagonal matrix D has the eigenvalues of A as its diagonal entries, so D = [−2 0 0; 0 −2 0; 0 0 6].
(k) We have
C(λ) = det[2−λ 1 0; 1 3−λ 1; 3 1 4−λ] = (2 − λ)(3 − λ)(4 − λ) − 3 + 2λ = −(λ³ − 9λ² + 24λ − 21)
We find that C(λ) has a pair of non-real roots, so A has non-real eigenvalues and hence is not diagonalizable over R.
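The claim in part (k) can be confirmed numerically with the matrix read off from C(λ): exactly one eigenvalue is real and the other two form a complex-conjugate pair.

```python
import numpy as np

A = np.array([[2.0, 1.0, 0.0],
              [1.0, 3.0, 1.0],
              [3.0, 1.0, 4.0]])
eigs = np.linalg.eigvals(A)

# Two eigenvalues have a nonzero imaginary part, so A is not
# diagonalizable over R.
n_nonreal = int(np.sum(np.abs(eigs.imag) > 1e-8))
assert n_nonreal == 2
```
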
(l) We have
C(λ) = det[1−λ 1 1 1; 1 1−λ 1 1; 1 1 1−λ 1; 1 1 1 1−λ]
Subtracting row 1 from each of rows 2, 3, and 4, then adding columns 2, 3, and 4 to column 1 and expanding along the first column, gives
C(λ) = (4 − λ)(−λ)³ = λ³(λ − 4)
Thus, the eigenvalues of A are λ1 = 0 and λ2 = 4.
For λ1 = 0 we have A − λ1I = [1 1 1 1; 1 1 1 1; 1 1 1 1; 1 1 1 1] ∼ [1 1 1 1; 0 0 0 0; 0 0 0 0; 0 0 0 0].
Hence, a basis for Eλ1 is {(−1, 1, 0, 0)^T, (−1, 0, 1, 0)^T, (−1, 0, 0, 1)^T}.
For λ2 = 4 we have A − λ2I = [−3 1 1 1; 1 −3 1 1; 1 1 −3 1; 1 1 1 −3] ∼ [1 0 0 −1; 0 1 0 −1; 0 0 1 −1; 0 0 0 0].
Hence, a basis for Eλ2 is {(1, 1, 1, 1)^T}.
It follows that A is diagonalized by P = [−1 −1 −1 1; 1 0 0 1; 0 1 0 1; 0 0 1 1]. The resulting diagonal matrix D has the eigenvalues of A as its diagonal entries, so D = [0 0 0 0; 0 0 0 0; 0 0 0 0; 0 0 0 4].
" #
3 −1
6.3.2 (a) We have [L] = . Thus, from our work in problem 6.3.1(a), we get that if we take
−1 3
(" # " #) " #
1 −1 2 0
B= , , then [L]B = .
1 1 0 4
" #
1 3
(b) We have [L] = .
4 2
We have
1 − λ 3
C(λ) = = λ2 − 3λ = 10 = (λ − 5)(λ + 2)
4 2 − λ
Thus, the eigenvalues of A are λ1 = 5 and λ2 = −2. Since all the eigenvalues have algebraic
multiplicity 1, we know that [L] is diagonalizable.
" # " #
−4 3 4 −3
For λ1 = 5 we have A − λ1 I = ∼ .
4 −3 0 0
(" #)
3
Hence, a basis for Eλ1 is .
4
" # " #
3 3 1 1
For λ2 = −2 we have A − λ2 I = ∼ .
4 4 0 0
(" #)
−1
Hence, a basis for Eλ2 is .
1
(" # " #) " #
3 −1 5 0
Therefore, we take B = , and our we get [L]B = .
4 1 0 −2
(c) We have [L] = [−2 2 2; −3 3 2; −2 2 2]. We have
C(λ) = det[−2−λ 2 2; −3 3−λ 2; −2 2 2−λ] = det[−2−λ 2 2; −3 3−λ 2; λ 0 −λ]   (R3 → R3 − R1)
     = det[−2−λ 2 −λ; −3 3−λ −1; λ 0 0]   (C3 → C3 + C1)
     = λ(−2 + 3λ − λ²) = −λ(λ − 2)(λ − 1)
Thus, the eigenvalues of A are λ1 = 0, λ2 = 2, and λ3 = 1. Since all the eigenvalues have algebraic multiplicity 1, we know that [L] is diagonalizable.
For λ1 = 0 we have A − λ1I = [−2 2 2; −3 3 2; −2 2 2] ∼ [1 −1 0; 0 0 1; 0 0 0].
Hence, a basis for Eλ1 is {(1, 1, 0)^T}.
For λ2 = 2 we have A − λ2I = [−4 2 2; −3 1 2; −2 2 0] ∼ [1 0 −1; 0 1 −1; 0 0 0].
Hence, a basis for Eλ2 is {(1, 1, 1)^T}.
For λ3 = 1 we have A − λ3I = [−3 2 2; −3 2 2; −2 2 1] ∼ [1 0 −1; 0 1 −1/2; 0 0 0].
Hence, a basis for Eλ3 is {(2, 1, 2)^T}.
Therefore, we take B = {(1, 1, 0)^T, (1, 1, 1)^T, (2, 1, 2)^T} and we get [L]B = [0 0 0; 0 2 0; 0 0 1].
6.3.3 (a) Since A is diagonalizable, there exists an invertible matrix P such that
P⁻¹AP = diag(λ1, . . . , λn)
that is, A is similar to D = diag(λ1, . . . , λn). Hence, by Theorem 6.1.1, we have that
tr A = tr diag(λ1, . . . , λn) = λ1 + · · · + λn
as required.
6.3.6 If A is diagonalizable, then there exists an invertible matrix P and diagonal matrix D such that P−1 AP =
D. Thus,
D = DT = (P−1 AP)T = PT AT (P−1 )T
Let Q = (P−1 )T , then Q−1 = PT and so we have that
Q−1 AT Q = D
0 ≤ (a + d)² − 4(ad − bc) = a² + 2ad + d² − 4ad + 4bc = a² − 2ad + d² + 4bc = (a − d)² + 4bc
as required.
Section 6.4 Solutions 149
" #
0 1
(b) If A = , then (a − d)2 + 4bc = (0 − 0)2 + 4(1)(0) = 0, but A is not diagonalizble since a0 = 2,
0 0
but g0 = 1 < a0 .
If A is the zero matrix, then (a − d)2 + 4bc = 0, but A is diagonalizable since it is already diagonal.
(c) If (a − d)² + 4bc = (2x)² where x ∈ Z, then observe that a − d must be even since (a − d)² = (2x)² − 4bc is even. Hence a + d = (a − d) + 2d is also even, say a + d = 2y where y ∈ Z. Then we have that the eigenvalues of A are
λ = (2y ± √((2x)²))/2 = (2y ± 2|x|)/2 = y ± |x|
which are both integers.
A^100 = PD^100P⁻¹ = [−5 −2; 4 1][2^100 0; 0 (−1)^100] · (1/3)[1 2; −4 −5]
     = (1/3)[−5 · 2^100 + 8, −5 · 2^101 + 10; 2^102 − 4, 2^103 − 5]
6.4.3 We have
C(λ) = det[2−λ 2; −3 −5−λ] = λ² + 3λ − 4 = (λ − 1)(λ + 4)
Thus, the eigenvalues of A are λ1 = 1 and λ2 = −4.
For λ1 = 1 we get
A − λ1I = [1 2; −3 −6] ∼ [1 2; 0 0]
Hence, a basis for Eλ1 is {(−2, 1)^T}.
For λ2 = −4 we get
A − λ2I = [6 2; −3 −1] ∼ [1 1/3; 0 0]
Hence, a basis for Eλ2 is {(−1, 3)^T}.
" # " #
−2 −1 1 0
It follows that A is diagonalized by P = to D = . Thus, we have A = PDP−1 and so
1 3 0 −4
" #" # " #
100 100 −1 −2 −1 1 0 1 3 1
A = PD P =
1 3 0 4100 −5 −1 −2
1 −6 + 4100 −2 + 2 · 4100
" #
=−
5 3 − 3 · 4100 1 − 6 · 4100
6.4.4 We have
C(λ) = det[−2−λ 2; −3 5−λ] = λ² − 3λ − 4 = (λ + 1)(λ − 4)
Thus, the eigenvalues are λ1 = −1 and λ2 = 4.
For λ1 = −1 we have
A − λ1I = [−1 2; −3 6] ∼ [1 −2; 0 0]
Hence, a basis for Eλ1 is {(2, 1)^T}.
For λ2 = 4 we have
A − λ2I = [−6 2; −3 1] ∼ [1 −1/3; 0 0]
Hence, a basis for Eλ2 is {(1, 3)^T}.
"
# " #
2 1 −1 0
It follows that A is diagonalized by P = to D = . Thus, we have A = PDP−1 and so
1 3 0 4
" #" # " #
200 200 −1 2 1 1 0 1 3 −1
A = PD P =
1 3 0 4200 5 −1 2
1 6 − 4200 −2 + 2 · 4200
" #
=
5 3 − 3 · 420 −1 + 6 · 4200
6.4.5 We have
C(λ) = det[−2−λ 1 1; −1 −λ 1; −2 2 1−λ] = det[−1−λ 1 1; −1−λ −λ 1; 0 2 1−λ]   (C1 → C1 + C2)
     = det[−1−λ 1 1; 0 −1−λ 0; 0 2 1−λ]   (R2 → R2 − R1)
     = (−1 − λ)(−1 − λ)(1 − λ) = −(λ + 1)²(λ − 1)
Thus, the eigenvalues are λ1 = −1 and λ2 = 1.
For λ1 = −1 we have
A − λ1I = [−1 1 1; −1 1 1; −2 2 2] ∼ [1 −1 −1; 0 0 0; 0 0 0]
Hence, a basis for Eλ1 is {(1, 1, 0)^T, (1, 0, 1)^T}.
For λ2 = 1 we have
A − λ2I = [−3 1 1; −1 −1 1; −2 2 0] ∼ [1 0 −1/2; 0 1 −1/2; 0 0 0]
Hence, a basis for Eλ2 is {(1, 1, 2)^T}.
It follows that A is diagonalized by P = [1 1 1; 1 0 1; 0 1 2] to D = [−1 0 0; 0 −1 0; 0 0 1]. Thus, we have A = PDP⁻¹ and so
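The diagonalization in 6.4.5 can be spot-checked numerically via AP = PD:

```python
import numpy as np

A = np.array([[-2.0, 1.0, 1.0],
              [-1.0, 0.0, 1.0],
              [-2.0, 2.0, 1.0]])
P = np.array([[1.0, 1.0, 1.0],
              [1.0, 0.0, 1.0],
              [0.0, 1.0, 2.0]])
D = np.diag([-1.0, -1.0, 1.0])

# Columns of P are eigenvectors for eigenvalues -1, -1, 1.
assert np.allclose(A @ P, P @ D)
```
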
6.4.6 If λ is an eigenvalue of A, then there exists a non-zero vector ~v such that A~v = λ~v. We will prove by
induction that An~v = λn~v.
Base Case: n = 1. We have A1~v = λ1~v.
Inductive Hypothesis: Assume that A^k~v = λ^k~v.
Inductive Step: We have
A^{k+1}~v = A(A^k~v) = A(λ^k~v) = λ^k(A~v) = λ^k(λ~v) = λ^{k+1}~v
as required.
6.4.7 Let S = T − 1I. Then, we have
∑_{i=1}^n s_ij = (∑_{i=1}^n t_ij) − 1 = 0
Thus, the column sums of S equal 0. Therefore, det S = 0 since if we add each of the first n − 1 rows of
S to the n-th row using Theorem 5.3.6, we will get a row of zeros in the new matrix.
Hence, 0 = det S = det(T − 1I), so λ = 1 is an eigenvalue of T .
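The argument in 6.4.7 can be illustrated numerically on a hypothetical matrix T whose column sums are all 1 (the entries below are made up for the sketch):

```python
import numpy as np

T = np.array([[0.5, 0.2, 0.1],
              [0.3, 0.7, 0.4],
              [0.2, 0.1, 0.5]])
assert np.allclose(T.sum(axis=0), 1.0)   # column sums are 1

# S = T - I is singular, so lambda = 1 is an eigenvalue of T.
S = T - np.eye(3)
assert abs(np.linalg.det(S)) < 1e-12
assert np.any(np.isclose(np.linalg.eigvals(T).real, 1.0))
```
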
6.4.8 (a) By question 7 we know that λ1 = 1 is an eigenvalue of A. So, by Theorem 6.3.6, the other eigenvalue is λ2 = 9/10 + 4/5 − 1 = 7/10.
For λ1 = 1 we have
A − λ1I = [−1/10 1/5; 1/10 −1/5] ∼ [1 −2; 0 0]
Hence, a basis for Eλ1 is {(2, 1)^T}.
For λ2 = 7/10 we have
A − λ2I = [1/5 1/5; 1/10 1/10] ∼ [1 1; 0 0]
Hence, a basis for Eλ2 is {(−1, 1)^T}.
" # " #
2 −1 1 0
It follows that A is diagonalized by P = to D = 7 . Thus, we have A = PDP−1 and
1 1 0 10
so
# "
2 −1 1 0 1 1 1
" #
t t −1
A = PD P = 7 t
3 −1 2
1 1 0 10
t t
7 7
1 2 + 10 2 − 10
= t t
3 1− 7 1+2 7
10 10
(b) We have
lim_{t→∞} [xt; yt] = lim_{t→∞} A^t[x0; y0] = lim_{t→∞} (1/3)[2 + (7/10)^t, 2 − 2(7/10)^t; 1 − (7/10)^t, 1 + 2(7/10)^t][x0; y0]
= (1/3)[2 2; 1 1][x0; y0] = (1/3)[2x0 + 2y0; x0 + y0] = (1/3)[2T; T]
as required.
" #
0.7 0.1
6.4.9 We have A = . By question 7 we know that λ1 = 1 is an eigenvalue of A. So, by Theorem
0.3 0.9
7 9
6.3.6, the other eigenvalue is λ2 = 10 + 10 − 1 = 35 .
For λ1 = 1 we have
A − λ1I = [−3/10 1/10; 3/10 −1/10] ∼ [1 −1/3; 0 0]
Hence, a basis for Eλ1 is {(1, 3)^T}.
For λ2 = 3/5 we have
A − λ2I = [1/10 1/10; 3/10 3/10] ∼ [1 1; 0 0]
Hence, a basis for Eλ2 is {(−1, 1)^T}.
# " " #
1 −1 1 0
It follows that A is diagonalized by P = to D = . Thus, we have A = PDP−1 and so
3 1 0 53
# "
1 −1 1 0 1 1 1
" #
t t −1
A = PD P = t
3 1 0 53 4 −3 1
t t
1 1 + 3 53 1 − 35
= t t
4 3−3 3 3 + 3
5 5
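Since (3/5)^t → 0, the formula above shows A^t converges to (1/4)[1 1; 3 3]; this is easy to confirm numerically:

```python
import numpy as np

A = np.array([[0.7, 0.1],
              [0.3, 0.9]])
limit = np.array([[0.25, 0.25],
                  [0.75, 0.75]])

# (3/5)^60 is far below the tolerance of allclose, so A^60 is at the limit.
assert np.allclose(np.linalg.matrix_power(A, 60), limit)
```
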
Thus, if T = x0 + y0 + z0 is the total number of cars, then in the long run 3/5 of the cars will be at the airport, 1/5 will be at the train station, and 1/5 will be at the city centre.
" #
0 1
6.4.11 First, recall that 2e have A = . Now, observe that we have
1 1
" # " #
xn n−1 1
=A (6.8)
yn 1
x1 − 3x3 = 0
x2 + 3x3 = 0
x3 is a free variable, so we let x3 = t ∈ R. Then any vector ~x in the nullspace satisfies
[x1; x2; x3] = [3t; −3t; t] = t[3; −3; 1]
Thus, a basis for the nullspace is {(3, −3, 1)^T}.
To find a basis for the left nullspace, we row reduce the transpose of the matrix. We get
[0 1; 1 2; 3 3] ∼ [1 0; 0 1; 0 0]
Thus, the left nullspace is {~0}. Therefore, a basis for the left nullspace is the empty set.
(b) Row reducing we get
[1 2 −1; 2 4 3] ∼ [1 2 0; 0 0 1]
Section 7.1 Solutions 157
Hence, a basis for the row space is {(1, 2, 0)^T, (0, 0, 1)^T}, and a basis for the column space is {(1, 2)^T, (−1, 3)^T}. To find a basis for the nullspace we solve
x1 + 2x2 = 0
x3 = 0
x2 is a free variable, so we let x2 = t ∈ R. Then any vector ~x in the nullspace satisfies
[x1; x2; x3] = [−2t; t; 0] = t[−2; 1; 0]
Thus, a basis for the nullspace is {(−2, 1, 0)^T}.
To find a basis for the left nullspace, we row reduce the transpose of the matrix. We get
[1 2; 2 4; −1 3] ∼ [1 0; 0 1; 0 0]
Thus, the left nullspace is {~0}. Therefore, a basis for the left nullspace is the empty set.
(c) Row reducing we get
[2 −1 5; 3 4 2; 1 1 1] ∼ [1 0 2; 0 1 −1; 0 0 0]
Hence, a basis for the row space is {(1, 0, 2)^T, (0, 1, −1)^T}, and a basis for the column space is {(2, 3, 1)^T, (−1, 4, 1)^T}. To find a basis for the nullspace we solve
x1 + 2x3 = 0
x2 − x3 = 0
x3 is a free variable, so we let x3 = t ∈ R. Then any vector ~x in the nullspace satisfies
[x1; x2; x3] = [−2t; t; t] = t[−2; 1; 1]
Thus, a basis for the nullspace is {(−2, 1, 1)^T}.
To find a basis for the left nullspace, we row reduce the transpose of the matrix. We get
[2 3 1; −1 4 1; 5 2 1] ∼ [1 0 1/11; 0 1 3/11; 0 0 0]
This corresponds to the system
x1 + (1/11)x3 = 0
x2 + (3/11)x3 = 0
Let x3 = t ∈ R. Then every vector ~x in the left nullspace satisfies
[x1; x2; x3] = [−t/11; −3t/11; t] = t[−1/11; −3/11; 1]
Hence, a basis for the left nullspace is {(−1/11, −3/11, 1)^T}.
x1 − 2x2 + 2x4 = 0
x3 + x4 = 0
x2 and x4 are free variables, so we let x2 = s ∈ R and x4 = t ∈ R. Then any vector ~x in the nullspace satisfies
[x1; x2; x3; x4] = [2s − 2t; s; −t; t] = s[2; 1; 0; 0] + t[−2; 0; −1; 1]
Thus, a basis for the nullspace is {(2, 1, 0, 0)^T, (−2, 0, −1, 1)^T}.
To find a basis for the left nullspace, we row reduce the transpose of the matrix. We get
x1 − (5/3)x3 = 0
x2 − (7/3)x3 = 0
x1 + 3x3 = 0
x2 − 2x3 = 0
x4 = 0
x3 is a free variable, so we let x3 = t ∈ R. Then we have any vector ~x in the nullspace satisfies
x1 − 3x4 = 0
x2 = 0
x3 + 10x4 = 0
Also, ~a3 , ~0, since ~e3 is not in the nullspace of A. Thus, a basis for Col(A) is {~a3 }.
7.1.3 Denote the rows of E by ~b1^T, . . . , ~bm^T; that is, E = [~b1^T; · · · ; ~bm^T]. Since R is the RREF of A and rank(A) = r, by definition of RREF we have that the bottom m − r rows of R are all zeros. Hence, we have
~bi^T A = ~0^T,  r + 1 ≤ i ≤ m
~0 = B^T~x = [~b1 · ~x; · · · ; ~bn · ~x]
7.1.7 We need to find a matrix A such that A~x = ~0 if and only if ~x is a linear combination of the columns of A. Observe that we need dim Null(A) = dim Col(A) = rank(A). So, by the Dimension Theorem, we have 2 = rank(A) + dim Null(A) = 2 rank(A). Thus, we need rank(A) = 1. Hence, we can pick A to be of the form A = [x1 x2; 0 0]. It is easy to see that taking A = [0 1; 0 0] gives
Col(A) = Span{(1, 0)^T} = Null(A)
7.1.8 If A and B are similar, then there exists an invertible matrix P such that P−1 AP = B ⇒ P−1 A = BP−1 . If
~x ∈ Null(P−1 A), then P−1 A~x = ~0, so A~x = P~0 = ~0. Thus, ~x ∈ Null(A). On the other hand, if ~y ∈ Null(A),
then P−1 A~y = P−1~0 = ~0. So, Null(P−1 A) = Null(A) and hence by the Dimension Theorem
If ~x ∈ Col(BP−1 ), then there exists ~y ∈ Rn such that ~x = BP−1~y ∈ Col(B). On the other hand, if
~z ∈ Col(B), then there exists ~u ∈ Rn such that ~z = B~u. Since P−1 is invertible, by the Invertible
Matrix Theorem, there exists ~v ∈ Rn such that ~u = P−1~v. Thus, ~z = BP−1~v ∈ Col(BP−1 ). Thus,
Col(B) = Col(BP−1 ), so rank(B) = rank(P−1 B). Thus,
.
7.1.9 (a) If A~x = ~0, then AT A~x = AT ~0 = ~0. Hence, the nullspace of A is a subset of the nullspace of AT A.
On the other hand, consider AT A~x = ~0. Then,
Thus, A~x = ~0. Hence, the nullspace of AT A is also a subset of the nullspace of A, and the result
follows.
(b) Using part (a), we get that dim(Null(AT A)) = dim(Null(A)). Thus, the Dimension Theorem gives
as required.
7.1.10 Let ~x ∈ Col(B). Then, there exists ~y such that B~y = ~x. Now, observe that
Thus, ~x ∈ Null(A). Hence, Col(B) is a subset of Null(A) and thus, since Col(B) is a vector space, Col(B)
is a subspace of Null(A).
7.1.11 (a) A = [1 1; 1 0; 1 2]
(b) B = [1 1 1; 1 0 2]
(c) We first observe that C must have 3 columns. So, let's make C a 1 × 3 matrix. We can take C = [−2 1 1].
(d) Using our work in part (c), we see that we can take D = [−2; 1; 1].
(e) No, there cannot be such a matrix F since we would have dim(Col(F)) = 2, dim(Null(F)) = 2,
and F would have to have 3 columns, but this would contradict the Dimension Theorem.
Chapter 8 Solutions
Thus, T is linear.
(e) Let a1 + b1x + c1x² + d1x³, a2 + b2x + c2x² + d2x³ ∈ P3(R) and s, t ∈ R. Then,
D[s(a1 + b1x + c1x² + d1x³) + t(a2 + b2x + c2x² + d2x³)]
  = D(sa1 + ta2 + (sb1 + tb2)x + (sc1 + tc2)x² + (sd1 + td2)x³)
  = (sb1 + tb2) + 2(sc1 + tc2)x + 3(sd1 + td2)x²
  = s(b1 + 2c1x + 3d1x²) + t(b2 + 2c2x + 3d2x²)
  = sD(a1 + b1x + c1x² + d1x³) + tD(a2 + b2x + c2x² + d2x³)
Section 8.1 Solutions 165
Thus, D is linear.
(f) We have
L([1 0; 0 1]) + L([−1 0; 0 −1]) = 1 + 1 = 2
but
L([1 0; 0 1] + [−1 0; 0 −1]) = L([0 0; 0 0]) = 0
So, L is not linear.
8.1.2 (a) If L is linear, then we have
L(a1 , a2 , a3 ) = a1 L(1, 0, 0) + a2 L(0, 1, 0) + a3 L(0, 0, 1)
= a1 (1 + x) + a2 (1 − x2 ) + a3 (1 + x + x2 )
= (a1 + a2 + a3 )1 + (a1 + a3 )x + (−a2 + a3 )x2
=
..
.
sbn + tcn
b1 c1
(M ◦ L)(~v) = M(L(~v))
Observe that L(~v) ∈ W which is in the domain of M. Hence, we get that M(L(~v)) ∈ Range(M) which is
in U. Thus, the codomain of M ◦ L is U.
Let ~x, ~y ∈ V and s, t ∈ R. Then,
Hence, M ◦ L is linear.
8.1.6 (a) Consider c1~v1 + · · · + ck~vk = ~0. Then, we have
Thus, c1 = · · · = ck = 0 since {L(~v1 ), . . . , L(~vk )} is linearly independent. Thus, {~v1 , . . . , ~vk } must
also be linearly independent.
(b) If L : R2 → R2 is the linear mapping defined by L(~x) = ~0 and {~v1 , ~v2 } = {~e1 , ~e2 } is the standard
basis for R2 , then {~v1 , ~v2 } is linearly independent, but {L(~v1 ), L(~v2 )} is linearly dependent since it
contains the zero vector.
8.1.7 Let L, M ∈ L and t ∈ R.
(a) By definition tL is a mapping with domain V. Also, since W is closed under scalar multiplication,
we have that (tL)(~v) = tL(~v) ∈ W. Thus, the codomain of tL is W. For any ~x, ~y ∈ V and c, d ∈ R
we have
Hence, tL ∈ L.
(b) For any ~v ∈ V we have
[t(L + M)](~v) = t[(L + M)(~v)] = t[L(~v) + M(~v)] = tL(~v) + tM(~v) = [tL + tM](~v)
If [L] is invertible, then there exists a matrix A such that A[L] = I = [L]A. Define M : Rn → Rn by
M(~x) = A~x. Then, for any ~x ∈ Rn we have
and
(L ◦ M)(~x) = L(M(~x)) = [L]A~x = I~x = ~x
Thus, M = L−1 .
" #
−1 2 1
8.1.9 Using our work from problem 7, we know that the standard matrix of L is the inverse of [L] = .
3 −4
" #
−1 4/11 1/11
We have that [L] = . Consequently,
3/11 −2/11
!
4 1 3 2
L−1 (x1 , x2 ) = x1 + x2 , x1 − x2
11 11 11 11
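The inverse computed in 8.1.9 is easy to confirm numerically:

```python
import numpy as np

# Standard matrix of L and the claimed inverse.
L = np.array([[2.0, 1.0],
              [3.0, -4.0]])
L_inv = np.array([[4/11, 1/11],
                  [3/11, -2/11]])

assert np.allclose(np.linalg.inv(L), L_inv)
assert np.allclose(L @ L_inv, np.eye(2))
```
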
Thus, a − b = 0 and b + c = 0. So, a = b = −c. So, every vector a + bx + cx2 ∈ ker(L) has the
form −c − cx + cx2 = c(−1 − x + x2 ). Thus, a basis for ker(L) is {−1 − x + x2 }. Consequently,
nullity(L) = 1 and the Rank-Nullity Theorem gives
Hence, a + b = 0 and a + b + c = 0. Thus, c = 0 and b = −a. So, every vector in ker(L) has the form
[a; b; c] = [a; −a; 0] = a[1; −1; 0]
Section 8.2 Solutions 169
Thus, a basis for ker(L) is {(1, −1, 0)^T}. Thus, nullity(L) = 1 and
Thus, Range(L) = W.
170 Section 8.3 Solutions
8.2.6 (a) Since the rank of a linear mapping is equal to the dimension of the range, we consider the range
of both mappings. Let ~x ∈ Range(M ◦ L). Then there exists ~v ∈ V such that ~x = (M ◦ L)(~v) =
M(L(~v)) ∈ Range(M).
Hence, Range(M ◦ L) is a subset, and hence a subspace, of Range(M). Therefore, dim Range(M ◦
L) ≤ dim Range(M) which implies rank M ◦ L ≤ rank M.
(b) The kernel of L is a subspace of the kernel of M ◦ L, because if L(~x) = ~0, then
(M ◦ L)(~x) = M(L(~x)) = M(~0) = ~0
Therefore
nullity(L) ≤ nullity(M ◦ L)
so
n − nullity(L) ≥ n − nullity(M ◦ L)
Hence rank(L) ≥ rank(M ◦ L), by the Rank-Nullity Theorem.
(c) We have
T([1 1; 0 0]) = [1; 1] = 1[1; 1] + 0[−1; 1]
T([0 1; 1 0]) = [1; 2] = (3/2)[1; 1] + (1/2)[−1; 1]
T([1 0; 0 1]) = [1; 0] = (1/2)[1; 1] − (1/2)[−1; 1]
T([0 0; 1 1]) = [1; 1] = 1[1; 1] + 0[−1; 1]
Hence, C[T]B = [1 3/2 1/2 1; 0 1/2 −1/2 0].
(d) We have
L(1 + x²) = [1; 2] = 0[1; 1] + 1[1; 2]
L(x − x²) = [0; 0] = 0[1; 1] + 0[1; 2]
L(x²) = [0; 1] = (−1)[1; 1] + 1[1; 2]
L(x³) = [0; 1] = (−1)[1; 1] + 1[1; 2]
Hence, C[L]B = [0 0 −1 −1; 1 0 1 1].
(e) We have
(f) We have
L(1) = 0 = 0 + 0x + 0x2
L(x) = 1 = 1 + 0x + 0x2
L(x2 ) = 2x = 0 + 2x + 0x2
Hence, [L]B = [0 1 0; 0 0 2; 0 0 0].
(b) For ease, name the vectors in B as ~v1, ~v2, ~v3, ~v4.
L([1 1; 0 0]) = [1 1; 0 0] = 1~v1 + 0~v2 + 0~v3 + 0~v4
L([1 0; 0 1]) = [1 0; 0 1] = 0~v1 + 1~v2 + 0~v3 + 0~v4
L([1 1; 0 1]) = [1 1; 0 1] = 0~v1 + 0~v2 + 1~v3 + 0~v4
L([0 0; 1 0]) = [0 1; 0 0] = 0~v1 − 1~v2 + 1~v3 + 0~v4
Thus, [L]B = [1 0 0 0; 0 1 0 −1; 0 0 1 1; 0 0 0 0].
(c) We have
T([1 0; 0 1]) = [2 0; 0 3] = 0[1 0; 0 1] + 1[2 0; 0 3]
T([2 0; 0 3]) = [5 0; 0 7] = 1[1 0; 0 1] + 2[2 0; 0 3]
Thus, [L]B = [0 1; 1 2].
(d) We have
Thus,
[L]B = [[L(~v1)]B · · · [L(~vn)]B] = I
Section 8.4 Solutions 173
" # (" #)
1 1 1
8.3.4 Let A = and define L : R2 → R2 by L(~x) = A~x. Clearly we have Range(L) = Span . Let
1 1 1
(" # " #)
1 1
B= , . Then, we get
1 −1
" #! " # " # " #
1 2 1 1
L = =2 +0
1 2 1 −1
" #! " # " # " #
1 0 1 1
L = =0 +0
−1 0 1 −1
" #
2 0
Thus, [L]B = . Clearly, Col([L]B ) , Range(L).
0 0
Linear: Let any two elements of P1(R) be ~a = a0 + a1x and ~b = b0 + b1x, and let s, t ∈ R. Then
L(s~a + t~b) = L(s(a0 + a1x) + t(b0 + b1x)) = L((sa0 + tb0) + (sa1 + tb1)x)
= [sa0 + tb0; sa1 + tb1] = s[a0; a1] + t[b0; b1] = sL(~a) + tL(~b)
Therefore, L is linear.
One-to-one: If a0 + a1x ∈ ker(L), then
[0; 0] = L(a0 + a1x) = [a0; a1]
= L (sa0 + tb0 ) + (sa1 + tb1 )x + (sa2 + tb2 )x2 + (sa3 + tb3 )x3
" # " # " #
sa0 + tb0 sa1 + tb1 a a b b
= = s 0 1 + t 0 1 = sL(~a) + tL(~b)
sa2 + tb2 sa3 + tb3 a2 a3 b2 b3
Therefore, L is linear.
One-to-one: If a0 + a1x + a2x² + a3x³ ∈ ker(L), then
[0 0; 0 0] = L(a0 + a1x + a2x² + a3x³) = [a0 a1; a2 a3]
Therefore L is linear.
One-to-one: Assume ~a ∈ ker(L). Then
[0 0; 0 0] = L((x − 1)(a2x² + a1x + a0)) = [a2 a1; 0 a0]
we define L : S → U by
L(a[1; 1; 0; 0] + b[−1; 0; 0; 1]) = a(1 + x) + bx²
Linear: Let any two elements of S be ~a = a1[1; 1; 0; 0] + b1[−1; 0; 0; 1] and ~b = a2[1; 1; 0; 0] + b2[−1; 0; 0; 1], and let s, t ∈ R. Then
L(s~a + t~b) = L((sa1 + ta2)[1; 1; 0; 0] + (sb1 + tb2)[−1; 0; 0; 1])
= (sa1 + ta2)(1 + x) + (sb1 + tb2)x²
= sa1(1 + x) + ta2(1 + x) + sb1x² + tb2x²
= s[a1(1 + x) + b1x²] + t[a2(1 + x) + b2x²] = sL(~a) + tL(~b)
Therefore L is linear.
One-to-one: Assume L(~a) = L(~b). Then a1 (1 + x) + b1 x2 = a2 (1 + x) + b2 x2 . This gives a1 = b1
and a2 = b2 hence ~a = ~b so L is one-to-one.
Onto: For any a(1 + x) + bx² ∈ U we can pick ~a = a[1; 1; 0; 0] + b[−1; 0; 0; 1] ∈ S so that we have L(~a) = a(1 + x) + bx², hence L is onto.
L[s(b1~v1 + · · · + bn~vn ) + t(c1~v1 + · · · + cn~vn )] = L (sb1 + tc1 )~v1 + · · · + (sbn + tcn )~vn
Therefore, L is linear.
One-to-one: If L(b1~v1 + · · · + bn~vn ) = L(c1~v1 + · · · + cn~vn ), then
b1 + b2 x + b3 x2 + · · · + bn xn−1 = c1 + c2 x + c3 x2 + · · · + cn xn−1
L(c1~v1 + · · · + cn~vn ) = ~0
Hence, c1~v1 + · · · + cn~vn ∈ ker(L). Since L is an isomorphism, it is one-to-one and thus ker(L) = {~0} by
Lemma 8.4.1. Thus, we get c1~v1 + · · · + cn~vn = ~0. This implies that c1 = · · · = cn = 0 since {~v1 , . . . , ~vn }
is linearly independent. Consequently, {L(~v1 ), . . . , L(~vn )} is a linearly independent set of n vectors in an
n-dimensional vector space, and hence is a basis.
8.4.4 (a) Let ~w ∈ W. Since M is onto, there exists a ~u ∈ U such that M(~u) = ~w. Then, since L is onto, there
exists a ~v ∈ V such that L(~v) = ~u. Hence,
(M ◦ L)(~v) = M(L(~v)) = M(~u) = ~w
as required.
(b) Let L : R → R2 be defined by L(x1 ) = (x1 , 0). Clearly, L is not onto. Now, define M : R2 → R by
M(x1 , x2 ) = x1 . Then, we get
~w = (M ◦ L)(~v) = M(L(~v))
F is onto: Let B = {~v1, . . . , ~vn} and let ~y ∈ Col(A). Then, ~y = A~x for some ~x = [x1; . . . ; xn] ∈ Rⁿ. Let ~v = x1~v1 + · · · + xn~vn so that [~v]B = ~x. Then,
(b) Since Range(L) is isomorphic to Col(A), they have the same dimension. Hence,
8.4.6 Since L and M are one-to-one, we have nullity(L) = 0 = nullity(M) by Lemma 8.4.1. Since the
range of a linear mapping is a subspace of its codomain, we have that dim V ≥ dim Range(L) and
dim U ≥ dim Range(M). Thus, by the Rank-Nullity Theorem we get
and
dim U ≥ dim Range(M) = rank(M) + 0 = rank(M) + nullity(M) = dim V
Hence, dim U = dim V and so U and V are isomorphic by Theorem 8.4.2.
8.4.7 (a) We disprove the statement with a counter example. Let U = R2 and V = R2 and L : U → V be the
linear mapping defined by L(~x) = ~0. Then, clearly dim U = dim V, but L is not an isomorphism
since it isn’t one-to-one nor onto.
(b) By definition, we have that dim(Range(L)) = rank(L). Thus, the Rank-Nullity Theorem gives us
that
dim(Range(L)) = dim V − nullity(L)
If dim V < dim W, then dim V − nullity(L) < dim W and hence the range of L cannot be equal to
W. Thus, we have proven the statement is true.
(c) We disprove the statement with a counter example. Let V = R3 and W = R2 and L : V → W be
the linear mapping defined by L(~x) = ~0. Then, clearly dim V > dim W, but L is not onto.
(d) The Rank-Nullity Theorem tells us that
since rank(L) ≤ dim W. Thus, nullity(L) > 0, so ker(L) ≠ {~0} and hence L is not one-to-one by Lemma 8.4.1.
Therefore, we have proven the statement is true.
Chapter 9 Solutions
and 0 = h~x, ~xi = 2x12 + 3x22 + 4x32 if and only if x1 = x2 = x3 = 0. Thus, h~x, ~xi = 0 if and only if
~x = ~0. So, it is positive definite.
h~x, ~yi = 2x1 y1 + 3x2 y2 + 4x3 y3 = 2y1 x1 + 3y2 x2 + 4y3 x3 = h~y, ~xi
Thus, it is symmetric.
⟨s~x + t~y, ~z⟩ = 2(sx1 + ty1)z1 + 3(sx2 + ty2)z2 + 4(sx3 + ty3)z3 = s(2x1z1 + 3x2z2 + 4x3z3) + t(2y1z1 + 3y2z2 + 4y3z3) = s⟨~x, ~z⟩ + t⟨~y, ~z⟩
180 Section 9.1 Solutions
Thus, h~x, ~xi ≥ 0 and h~x, ~xi = 0 if and only if ~x = ~0. So, it is positive definite.
Thus, it is symmetric.
⟨s~x + t~y, ~z⟩ = 2(sx1 + ty1)z1 − (sx1 + ty1)z2 − (sx2 + ty2)z1 + 2(sx2 + ty2)z2 + (sx3 + ty3)z3
= s(2x1z1 − x1z2 − x2z1 + 2x2z2 + x3z3) + t(2y1z1 − y1z2 − y2z1 + 2y2z2 + y3z3) = s⟨~x, ~z⟩ + t⟨~y, ~z⟩
and hp, pi = 0 if and only if p(−2) = p(1) = p(2) = 0. Since p is a polynomial of degree at most
2 with three roots we get that p = 0. Hence, hp, pi = 0 if and only if p = 0.
and hp, pi = 0 if and only if p(−1) = p(0) = p(1) = 0. Thus, hp, pi = 0 if and only if p = 0.
as required.
9.1.9 Let f, g, h ∈ C[−π, π] and a, b ∈ R. We have
Z π
h f, f i = ( f (x))2 dx ≥ 0
−π
182 Section 9.2 Solutions
since ( f (x))2 ≥ 0. Moreover, we have h f, f i = 0 if and only if f (x) = 0 for all x ∈ [−π, π].
⟨f, g⟩ = ∫_{−π}^{π} f(x)g(x) dx = ∫_{−π}^{π} g(x)f(x) dx = ⟨g, f⟩
(c) We have
⟨1 + x, x⟩/‖1 + x‖² = 2/5
⟨−2 + 3x, x⟩/‖−2 + 3x‖² = 6/30 = 1/5
⟨2 − 3x², x⟩/‖2 − 3x²‖² = 0/6 = 0
Hence, [x]B = [2/5; 1/5; 0].
(d) We have
⟨1 + x, x²⟩/‖1 + x‖² = 2/5
⟨−2 + 3x, x²⟩/‖−2 + 3x‖² = −4/30 = −2/15
⟨2 − 3x², x²⟩/‖2 − 3x²‖² = −2/6 = −1/3
Hence, [x²]B = [2/5; −2/15; −1/3].
(e) i. Consider
3 − 7x + x² = c1(1 + x) + c2(−2 + 3x) + c3(2 − 3x²) = (c1 − 2c2 + 2c3) + (c1 + 3c2)x + (−3c3)x²
ii. We have
iii. We have
⟨1 + x, 3 − 7x + x²⟩/‖1 + x‖² = −3/5
⟨−2 + 3x, 3 − 7x + x²⟩/‖−2 + 3x‖² = −64/30 = −32/15
⟨2 − 3x², 3 − 7x + x²⟩/‖2 − 3x²‖² = −2/6 = −1/3
Hence, [3 − 7x + x²]B = [−3/5; −32/15; −1/3].
Therefore, B is an orthogonal set in R3. Since it does not contain the zero vector, it is linearly independent by Theorem 9.2.3. Thus, it is a linearly independent set of 3 vectors in R3 and hence it is an orthogonal basis for R3.
(b) We have
⟨(1, −2, 2)^T, (1, −2, 2)^T⟩ = 1² + (−2)² + 2² = 9
⟨(2, 2, 1)^T, (2, 2, 1)^T⟩ = 2² + 2² + 1² = 9
⟨(−2, 1, 2)^T, (−2, 1, 2)^T⟩ = (−2)² + 1² + 2² = 9
(c) We have
⟨(1, −2, 2)^T, (4, 3, 5)^T⟩ = 1(4) + (−2)(3) + 2(5) = 8
⟨(2, 2, 1)^T, (4, 3, 5)^T⟩ = 2(4) + 2(3) + 1(5) = 19
⟨(−2, 1, 2)^T, (4, 3, 5)^T⟩ = (−2)(4) + 1(3) + 2(5) = 5
(d) We have
⟨(1/3, −2/3, 2/3)^T, (4, 3, 5)^T⟩ = (1/3)(4) + (−2/3)(3) + (2/3)(5) = 8/3
⟨(2/3, 2/3, 1/3)^T, (4, 3, 5)^T⟩ = (2/3)(4) + (2/3)(3) + (1/3)(5) = 19/3
⟨(−2/3, 1/3, 2/3)^T, (4, 3, 5)^T⟩ = (−2/3)(4) + (1/3)(3) + (2/3)(5) = 5/3
Therefore, B is an orthogonal set in R3 under the given inner product. Since it does not contain the zero vector, it is linearly independent by Theorem 9.2.3. Thus, it is a linearly independent set of 3 vectors in R3 and hence it is an orthogonal basis for R3.
(b) We have
⟨(1, 2, −1)^T, (1, 2, −1)^T⟩ = 1² + 2(2)² + (−1)² = 10
⟨(3, −1, −1)^T, (3, −1, −1)^T⟩ = 3² + 2(−1)² + (−1)² = 12
⟨(3, 1, 7)^T, (3, 1, 7)^T⟩ = 3² + 2(1)² + 7² = 60
(c) We have
⟨(1, 2, −1)^T, (1, 1, 1)^T⟩ = 1(1) + 2(2)(1) + (−1)(1) = 4
⟨(3, −1, −1)^T, (1, 1, 1)^T⟩ = 3(1) + 2(−1)(1) + (−1)(1) = 0
⟨(3, 1, 7)^T, (1, 1, 1)^T⟩ = 3(1) + 2(1)(1) + 7(1) = 12
Therefore, by Theorem 9.2.4 we have [~x]B = [4/10; 0/12; 12/60] = [2/5; 0; 1/5].
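The coordinate computation above can be checked in code, assuming the weighted inner product ⟨~x, ~y⟩ = x1y1 + 2x2y2 + x3y3 used in this problem and ~x = (1, 1, 1)^T:

```python
import numpy as np

# Weighted inner product assumed from the computations above.
def ip(x, y):
    return x[0]*y[0] + 2.0*x[1]*y[1] + x[2]*y[2]

B = [np.array([1.0, 2.0, -1.0]),
     np.array([3.0, -1.0, -1.0]),
     np.array([3.0, 1.0, 7.0])]
x = np.array([1.0, 1.0, 1.0])

# Coordinates via <x, v_i> / ||v_i||^2 (Theorem 9.2.4's formula).
coords = [ip(x, v) / ip(v, v) for v in B]
assert np.allclose(coords, [2/5, 0.0, 1/5])

# The coordinates reconstruct x.
assert np.allclose(sum(c * v for c, v in zip(coords, B)), x)
```
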
(d) We have
⟨(1/√10)(1, 2, −1)^T, (1, 1, 1)^T⟩ = 4/√10
⟨(1/√12)(3, −1, −1)^T, (1, 1, 1)^T⟩ = 0
⟨(1/√60)(3, 1, 7)^T, (1, 1, 1)^T⟩ = 12/√60
Therefore, by Corollary 9.2.5 we have [~x]C = [4/√10; 0; 12/√60].
Therefore, it is an orthogonal set. Since it does not contain the zero vector, it is linearly independent by Theorem 9.2.3. By definition, it is also a spanning set, and hence it is an orthogonal basis for S.
(b) We have
⟨[1 1; −1 −1], [1 1; −1 −1]⟩ = 1² + 1² + (−1)² + (−1)² = 4
⟨[1 −1; 1 −1], [1 −1; 1 −1]⟩ = 1² + (−1)² + 1² + (−1)² = 4
⟨[−1 1; 1 −1], [−1 1; 1 −1]⟩ = (−1)² + 1² + 1² + (−1)² = 4
(c) We have
⟨[1/2 1/2; −1/2 −1/2], [−3 6; −2 −1]⟩ = 3
⟨[1/2 −1/2; 1/2 −1/2], [−3 6; −2 −1]⟩ = −5
⟨[−1/2 1/2; 1/2 −1/2], [−3 6; −2 −1]⟩ = 4
Therefore, [~x]C = [3; −5; 4].
This implies that a = 0 and −2a + 2b − 2c = 0 ⇒ b = c, so any polynomial of the form bx + bx² is orthogonal to each polynomial in B. We use x + x², and normalize it to (1/2)(x + x²). This polynomial extends B to an orthonormal basis of P2(R).
9.2.7 We have
‖t~v‖ = √⟨t~v, t~v⟩ = √(t²⟨~v, ~v⟩) = |t|√⟨~v, ~v⟩ = |t|‖~v‖
9.2.8 (a) Since PPT = I we get
9.2.10 We have
But, {~v1 , . . . , ~vm } is orthonormal, so ~vi · ~vi = 1 and ~vi · ~v j = 0 for all i , j. Hence we have QT Q = I as
required.
9.2.11 (a) We have
But, since {~v1 , . . . , ~vn } is orthonormal we have < ~vi , ~vi >= 1 and < ~vi , ~v j >= 0 for i , j, hence we
get
< ~x, ~y >= c1 d1 + · · · + cn dn
(b) Taking ~y = ~x, the result of part (a) gives ‖~x‖² = ⟨~x, ~x⟩ = c1² + · · · + cn².
9.2.12 (a) We have d(~x, ~y) = k~x − ~yk ≥ 0 by Theorem 9.2.1(1).
(b) We have 0 = d(~x, ~y) = k~x − ~yk But, by Theorem 9.2.1(1), we have that k~x − ~yk = 0 if and only if
~x − ~y = ~0 as required.
(c) We have
d(~x, ~y) = ‖~x − ~y‖ = √⟨~x − ~y, ~x − ~y⟩ = √⟨(−1)(~y − ~x), (−1)(~y − ~x)⟩ = √((−1)(−1)⟨~y − ~x, ~y − ~x⟩) = ‖~y − ~x‖ = d(~y, ~x)
d(~x, ~z) = ‖~x − ~z‖ = ‖~x − ~y + ~y − ~z‖ ≤ ‖~x − ~y‖ + ‖~y − ~z‖ = d(~x, ~y) + d(~y, ~z)
(e) We have
d(~x, ~y) = k~x − ~yk = k~x + ~z − ~z − ~yk = k~x + ~z − (~y + ~z)k = d(~x + ~z, ~y + ~z)
d(c~x, c~y) = kc~x − c~yk = kc(~x − ~y)k = |c| k~x − ~yk = |c| d(~x, ~y)
190 Section 9.3 Solutions
~v3 = ~w3 − (⟨~w3, ~v1⟩/‖~v1‖²)~v1 − (⟨~w3, ~v2⟩/‖~v2‖²)~v2
   = [1; 3; 1] − (0/2)[1; 0; −1] − (22/66)[5; 4; 5] = [−2/3; 5/3; −2/3]
So, we take ~v2 = [2; 3; −3]. Finally, we get
~v3 = ~w3 − (⟨~w3, ~v1⟩/‖~v1‖²)~v1 − (⟨~w3, ~v2⟩/‖~v2‖²)~v2
   = [2; 3; −1] − (−2/11)[−3; 1; −1] − (16/22)[2; 3; −3] = [0; 1; 1]
w3 , ~v1 i
h~ w3 , ~v2 i
h~
~v3 = w ~3 − 2
~v1 − ~v2
k~v1 k k~v2 k2
−2
1 1 0
2 −2 2 9 3 0
= − − =
−2 16 2 12 −1 −1
0 2 −1 1
= 2 − x − ((4(−2) + 2 · 2)/(4 · 3 + 2(−2) + 2(−2) + 4))(2x² + x) = 2 − x + (1/2)(2x² + x) = 2 + x² − (1/2)x
So an orthogonal basis is B2 = {x, 2x² + x, 2 + x² − (1/2)x}.
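The projection-and-subtract pattern used throughout this section is the Gram–Schmidt procedure; a generic sketch on coordinate vectors with the standard dot product (the vectors below are hypothetical, chosen only to illustrate):

```python
import numpy as np

def gram_schmidt(vectors):
    basis = []
    for w in vectors:
        v = w.astype(float).copy()
        for u in basis:
            v -= (v @ u) / (u @ u) * u   # remove the component along u
        basis.append(v)
    return basis

vs = [np.array([1.0, 1.0, 0.0]),
      np.array([1.0, 0.0, 1.0]),
      np.array([0.0, 1.0, 1.0])]
ortho = gram_schmidt(vs)

# Every pair in the result is orthogonal.
for i in range(3):
    for j in range(i + 1, 3):
        assert abs(ortho[i] @ ortho[j]) < 1e-12
```

This variant subtracts projections onto the already-orthogonalized vectors one at a time (modified Gram–Schmidt), which agrees with the textbook formula in exact arithmetic and is numerically more stable.
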
9.3.8 (a) Consider
~0 = c1~v1 + c2~v2 + c3~e1 + c4~e2 + c5~e3 + c6~e4
Row reducing the corresponding coefficient matrix gives
 1  3  1  0  0  0        1  0  1/4  0  0   3/4
−1  1  0  1  0  0   ∼    0  1  1/4  0  0  −1/4
−1  1  0  0  1  0        0  0   0   1  0    1
 1 −1  0  0  0  1        0  0   0   0  1    1
So, we have extended {~v1 , ~v2 } to an orthogonal basis {~v1 , ~v2 , ~v3 , ~v4 } for R4 .
Section 9.4 Solutions 193
(c) We first observe that a basis for S3 is {(1, 0, 1)}. Let ~x = (x1, x2, x3) ∈ S3⊥. Then
0 = (1, 0, 1) · (x1, x2, x3) = x1 + x3
Consequently, we have that B = {(−1, 0, 1), (0, 1, 0)} spans S3⊥ and is clearly linearly independent, so B is
a basis for S3⊥.
" #
a1 a2
(d) Let A = ∈ S⊥4 . Then
a3 a4
" # " #
a a2 1 0
0=h 1 , i = a1 + a3
a3 a4 1 0
" # " #
a a2 2 −1
0=h 1 , i = 2a1 − a2 + a3 + 3a4
a3 a4 1 3
Row reducing the coefficient matrix of the homogeneous system gives
" # " #
1 0 1 0 1 0 1 0
∼
2 −1 1 3 0 1 1 −3
Hence, we get the general solution is
(a1, a2, a3, a4) = s(−1, −1, 1, 0) + t(0, 3, 0, 1), s, t ∈ R
(" # " #) (" # " #)
−1 −1 0 3 −1 −1 0 3
Therefore, the orthogonal complement of S4 is S⊥4 = Span , . Since ,
1 0 0 1 1 0 0 1
is also clearly linearly independent, it is a basis for S⊥4 .
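The basis found for S4⊥ in part (d) can be checked numerically: flattening each 2×2 matrix to a vector in R⁴ (an encoding choice of this sketch, which assumes NumPy), the two inner-product conditions become two constraint rows.

```python
import numpy as np

# constraint rows from the two inner products: a1 + a3 = 0 and 2a1 - a2 + a3 + 3a4 = 0
constraints = np.array([[1.0, 0.0, 1.0, 0.0],
                        [2.0, -1.0, 1.0, 3.0]])

# the two basis matrices for the orthogonal complement, flattened to (a1, a2, a3, a4)
b1 = np.array([-1.0, -1.0, 1.0, 0.0])
b2 = np.array([0.0, 3.0, 0.0, 1.0])

# each basis vector satisfies both homogeneous equations
assert np.allclose(constraints @ b1, 0)
assert np.allclose(constraints @ b2, 0)

# and they are linearly independent: the 4x2 matrix they form has rank 2
assert np.linalg.matrix_rank(np.column_stack([b1, b2])) == 2
```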
" #
a a2
(e) Let A = 1 ∈ S⊥5 . Then
a3 a4
" # " #
a1 a2 2 1
0=h , i = 2a1 + a2 + a3 + a4
a3 a4 1 1
" # " #
a a2 −1 1
0=h 1 , i = −a1 + a2 + 3a3 + a4
a3 a4 3 1
Section 9.4 Solutions 195
(b) Denote the vectors in the orthonormal basis from part (a) by ~c1 , ~c2 , and ~c3 respectively. Then we
get
projS (~y) = h~y, ~c1 i~c1 + h~y, ~c2 i~c2 + h~y, ~c3 i~c3
−1 6
1 1
12 1 2 12 1 −1 −12 1 0
= √ √ + + =
6 6 0 2 2 1 12 3 0
1 1 −1 6
(c) We have
perpS(~z) = ~z − projS(~z) = ~z − (h~z, ~c1i~c1 + h~z, ~c2i~c2 + h~z, ~c3i~c3)
−1 1 3/2
1 1 1
0 3 1 2 3 1 −1 −3 1 0 0
= − √ √ + + = −
0 6 6 0 2 2 1 12 3 0 0
2 1 1 −1 2 3/2
−1/2
0
=
0
1/2
"# " # " #
1 0 1 1 2 0
9.4.4 (a) Denote the given basis by ~z1 = , ~z2 = , ~z3 = ~ 1 = ~z1 . Then, we get
. Let w
−1 1 1 1 1 1
" # " # " #
~z2 · w
~1 1 1 1 1 0 2/3 1
~ 2 = ~z2 − projw~ 1 (~z2 ) = ~z2 −
w ~1 =
w − = To simplify calculations
k~
w1 k 2 1 1 3 −1 1 4/3 2/3
" #
2 3
we use w ~2 = instead. Then, we get
4 2
" # " # " # " #
~z3 · w
~1 ~ 2)
(~z3 · w 2 0 2 1 0 10 2 3 8/11 −10/11
~ 3 = z3 −
w ~
w 1 − ~
w 2 = − − =
w1 k2
k~ w2 k2
k~ 1 1 3 −1 1 33 4 2 5/11 −3/11
"#
8 −10
~3 =
We pick w . Then the set {~ ~ 2, w
w1 , w ~ 3 } is an orthogonal basis for S.
5 −3
(b) From our work in (a)
projS(A) = (~x · w~1 / kw~1k²) w~1 + (~x · w~2 / kw~2k²) w~2 + (~x · w~3 / kw~3k²) w~3 = (11/3) w~1 + (19/33) w~2 − (23/198) w~3 = [35/9 13/9; −35/18 93/18]
(d) Since projS(B) = B we have that B ∈ S. Since projS(A) ≠ A, we have A ∉ S.
~v2 = x − x² − (hx − x², 1i / k1k²) 1 = x − x² − (−2/3)(1) = 2/3 + x − x²
projS(1 + x + x²) = (h1 + x + x², 1i / k1k²) 1 + (h1 + x + x², 2 + 3x − 3x²i / k2 + 3x − 3x²k²)(2 + 3x − 3x²)
= (5/3)(1) + (4/24)(2 + 3x − 3x²)
= 2 + (1/2)x − (1/2)x²
perpS(1 + x + x²) = 1 + x + x² − projS(1 + x + x²) = −1 + (1/2)x + (3/2)x²
(c) Since projS and perpS are linear mappings, from our work in (b) we get
projS(2 + 2x + 2x²) = 2 projS(1 + x + x²) = 4 + x − x²
and
perpS(2 + 2x + 2x²) = 2 perpS(1 + x + x²) = −2 + x + 3x²
9.4.6 Let {~v1 , . . . , ~vk } be an orthonormal basis for W. Then, for any ~u, ~v ∈ V and s, t ∈ R, we have
projW(s~u + t~v) = hs~u + t~v, ~v1i~v1 + · · · + hs~u + t~v, ~vki~vk
= s(h~u, ~v1i~v1 + · · · + h~u, ~vki~vk) + t(h~v, ~v1i~v1 + · · · + h~v, ~vki~vk)
= s projW ~u + t projW ~v
~c + ~x + ~b = ~x + ~y
projS⊥(−1 + x + x² − x³) = (h1, −1 + x + x² − x³i / k1k²) 1 = (−4/4)(1) = −1
Therefore,
Then, we have
c1~v1 + · · · + ck~vk = −ck+1 w
~ 1 − · · · − ck+ℓ w
~ℓ
The vector on the left is in U and the vector on the right is in W. But, the only vector that is both in U and
W is the zero vector. Therefore, each ci = 0 and hence {~v1 , . . . , ~vk , w
~ 1, . . . , w
~ ℓ } is linearly independent.
For any ~v ∈ U ⊕ W we have that ~v = ~u + w ~ by definition of U ⊕ W. We can write ~u = c1~v1 + · · · + ck~vk
~ = d1 w
and w ~ 1 + · · · + dℓ w
~ ℓ and hence
~v = c1~v1 + · · · + ck~vk + d1 w
~ 1 + · · · + dℓ w
~ℓ
~x = (AT A)−1 AT ~b
= [3 2; 2 6]−1 [1 1 1; 2 1 −1] (1, 2, 5)
= (1/14) [6 −2; −2 3] (8, −1)
= (25/7, −19/14)
(b) We have A = [1 5; −2 −7; 1 2] and ~b = (1, 1, 0). Hence, the vector ~x that minimizes kA~x − ~bk is
~x = (AT A)−1 AT ~b
= [6 21; 21 78]−1 [1 −2 1; 5 −7 2] (1, 1, 0)
= (1/27) [78 −21; −21 6] (−1, −2)
= (−4/3, 1/3)
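The normal-equations computation in part (b) can be cross-checked with NumPy's least-squares routine (a numerical sketch, not part of the original solutions).

```python
import numpy as np

A = np.array([[1.0, 5.0],
              [-2.0, -7.0],
              [1.0, 2.0]])
b = np.array([1.0, 1.0, 0.0])

# least-squares solution minimizing ||Ax - b||
x, *_ = np.linalg.lstsq(A, b, rcond=None)
assert np.allclose(x, [-4/3, 1/3])

# the explicit normal-equations formula x = (A^T A)^{-1} A^T b agrees
x_normal = np.linalg.solve(A.T @ A, A.T @ b)
assert np.allclose(x, x_normal)
```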
(c) We have A = [2 1; 2 −1; 3 2] and ~b = (4, −1, 8). Hence, the vector ~x that minimizes kA~x − ~bk is
~x = (AT A)−1 AT ~b
= [17 6; 6 6]−1 [2 2 3; 1 −1 2] (4, −1, 8)
= (1/66) [6 −6; −6 17] (30, 21)
= (9/11, 59/22)
Section 9.6 Solutions 201
(d) We have A = [1 2; 1 2; 1 3; 1 3] and ~b = (2, 3, 2, 3). Hence, the vector ~x that minimizes kA~x − ~bk is
~x = (AT A)−1 AT ~b
= [4 10; 10 26]−1 [1 1 1 1; 2 2 3 3] (2, 3, 2, 3)
= (1/4) [26 −10; −10 4] (10, 25)
= (5/2, 0)
(e) We have A = [1 −1 0; 2 −1 1; 2 2 1; 1 −1 0] and ~b = (2, −2, 3, −2). Hence, the vector ~x that minimizes kA~x − ~bk is
~x = (AT A)−1 AT ~b
= [10 0 4; 0 7 1; 4 1 2]−1 [1 2 2 1; −1 −1 2 −1; 0 1 1 0] (2, −2, 3, −2)
= (5/3, 5/3, −11/3)
(f) We have A = [2 −2; −1 1; 3 1; 2 −1] and ~b = (1, 2, 1, 2). Hence, the vector ~x that minimizes kA~x − ~bk is
~x = (AT A)−1 AT ~b
= [18 −4; −4 7]−1 [2 −1 3 2; −2 1 1 −1] (1, 2, 1, 2)
= (1/110) [7 4; 4 18] (7, −1)
= (9/22, 1/11)
(d) In our calculations in (a), (b), and (c), we see that ~y · ~n > ~x · ~n and ~z ∈ P, so ~y is the furthest from
P.
9.6.3 (a) Let X = [1 −1; 1 0; 1 1] and ~y = (−3, 2, 2). Then we get
~a = (X T X)−1 X T ~y = (1/3, 5/2)
9.6.4 (a) Let X = [1 −2 4; 1 −1 1; 1 1 1; 1 2 4] and ~y = (0, −2, 0, 1). Then we get
~a = (X T X)−1 X T ~y = (−3/2, 2/5, 1/2)
So, y = −(3/2) + (2/5)x + (1/2)x².
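The quadratic fit in 9.6.4(a) can be reproduced with a least-squares solve. The data points below are read off the design matrix X and observation vector ~y above (a reconstruction; NumPy is assumed).

```python
import numpy as np

xs = np.array([-2.0, -1.0, 1.0, 2.0])
ys = np.array([0.0, -2.0, 0.0, 1.0])

# columns 1, x, x^2 -- the same design matrix X as in the solution
X = np.column_stack([np.ones_like(xs), xs, xs**2])
a, *_ = np.linalg.lstsq(X, ys, rcond=None)

# coefficients of y = a0 + a1*x + a2*x^2
assert np.allclose(a, [-3/2, 2/5, 1/2])
```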
Chapter 10 Solutions
PT1 AP1 = [−3 4 5; 0 1 4; 0 7 4] = [−3 ~bT; ~0 A1]
where A1 = [1 4; 7 4] and ~b = (4, 5).
Section 10.1 Solutions 205
" # " #
4 −7 8 −3
Our work in part (b) tells us that we can take Q = √1 and get T 1 = .
65 7 4 0 −3
1 0 0
1 ~0 √ √
T
We now take P2 = ~ = 0 4/ 65 −7/ 65. Let P = P1 P2 = P2 , then
0 Q √ √
0 7/ 65 4/ 65
√ √
−3 51/ 65 −8/ 65
~
T
−3 b Q
PT AP = ~
= 0
8 −3
0 T1
0 0 −3
(d) Observe that λ = 7 is an eigenvalue of A with corresponding eigenvector ~v1 = (1, 0, 0). We extend {~v1}
to the basis {(1, 0, 0), (0, 1, 0), (0, 0, 1)} for R3. Thus, we pick P1 = I. Thus, we have
PT1 AP1 = [7 −6 5; 0 1 4; 0 6 3] = [7 ~bT; ~0 A1]
where A1 = [1 4; 6 3] and ~b = (−6, 5).
We have C(λ) = det(A − λI) = λ2 − 4λ − 21 = (λ + 3)(λ − 7). So, λ1 = −3 and
" # " #
4 4 1 1
A + 3I = ∼
6 6 0 0
" #
−1
Hence, a unit eigenvector for λ1 = −3 is ~v1 = √1 . We extend {~v1 } to the orthonormal basis
2 1
" # " # " #
1 −1 1 −3 2
{~v1 , ~v2 } for R2 with ~v2 = √12 . Taking Q = √12 gives QT A1 Q = = T1.
1 1 1 0 7
We now take P2 = [1 ~0T; ~0 Q] = [1 0 0; 0 −1/√2 1/√2; 0 1/√2 1/√2]. Let P = P1P2 = P2, then
PT AP = [7 ~bT Q; ~0 T1] = [7 11/√2 −1/√2; 0 −3 2; 0 0 7]
(e) Observe that λ = 2 is an eigenvalue of A with corresponding eigenvector ~v1 = (0, 1, 0). We extend {~v1}
to the basis {(0, 1, 0), (1, 0, 0), (0, 0, 1)} for R3. Thus, we pick P1 = [0 1 0; 1 0 0; 0 0 1]. Thus, we have
PT1 AP1 = [2 2 1; 0 1 −1; 0 4 5] = [2 ~bT; ~0 A1]
where A1 = [1 −1; 4 5] and ~b = (2, 1).
PT AP = [2 ~bT Q; ~0 T1] = [2 0 √5; 0 3 5; 0 0 3]
is upper triangular.
(g) Observe that λ = 2 is an eigenvalue of A with corresponding eigenvector ~v1 = ~e3. We extend {~v1}
to the basis {~e3, ~e1, ~e2, ~e4} for R4. Thus, we pick P1 = [~e3 ~e1 ~e2 ~e4]. Thus, we have
PT1 AP1 = [2 −1 1 0; 0 2 0 1; 0 0 2 −1; 0 0 0 2]
is upper triangular.
10.1.2 If A is orthogonally similar to B, then there exists an orthogonal matrix P such that PT AP = B. If B is
orthogonally similar to C, then there exists an orthogonal matrix Q such that QT BQ = C. Hence,
B−1 = QT A−1 Q
(b) If A and B are orthogonally similar, then there exists an orthogonal matrix P such that PT AP = B.
Thus, we have
B2 = (PT AP)(PT AP) = PT A(PPT )AP = PT A2 P
Thus, A2 and B2 are also orthogonally similar.
(c) If A and B are orthogonally similar, then there exists an orthogonal matrix P such that PT AP = B.
Thus,
BT = (PT AP)T = PT AT (PT )T = PT AP = B
(d) If A and B are orthogonally similar, then there exists an orthogonal matrix P such that PT AP = B.
Assume that there exists an orthogonal matrix Q such that QT BQ = T is upper triangular. Then,
where PQ is orthogonal since both P and Q are orthogonal. Thus, A is also orthogonally similar
to T .
Hence, λ1 = 1 is an eigenvalue, and by the quadratic formula, we get the other eigenvalues are
λ2 = (1 + √33)/2 and λ3 = (1 − √33)/2.
For λ1 = 1 we get A − λ1I = [−1 2 −2; 2 0 0; −2 0 0] ∼ [1 0 0; 0 1 −1; 0 0 0]
A basis for the eigenspace of λ1 is {(0, 1, 1)}.
Section 10.2 Solutions 209
For λ2 = (1 + √33)/2 we get A − λ2I = [−(1+√33)/2 2 −2; 2 (1−√33)/2 0; −2 0 (1−√33)/2] ∼ [1 0 (−1+√33)/4; 0 1 1; 0 0 0]
A basis for the eigenspace of λ2 is {(1 − √33, −4, 4)}.
For λ3 = (1 − √33)/2 we get A − λ3I = [−(1−√33)/2 2 −2; 2 (1+√33)/2 0; −2 0 (1+√33)/2] ∼ [1 0 (−1−√33)/4; 0 1 1; 0 0 0]
A basis for the eigenspace of λ3 is {(1 + √33, −4, 4)}.
After normalizing, the basis vectors for the eigenspaces form an orthonormal basis for R3. Hence,
P = [0 (1−√33)/√(66−2√33) (1+√33)/√(66+2√33); 1/√2 −4/√(66−2√33) −4/√(66+2√33); 1/√2 4/√(66−2√33) 4/√(66+2√33)]
orthogonally diagonalizes A to PT AP = diag(1, (1 + √33)/2, (1 − √33)/2).
(c) We have C(λ) = det(A − λI) = |4−λ −2; −2 7−λ| = λ² − 11λ + 24 = (λ − 3)(λ − 8)
Hence, the eigenvalues are λ1 = 3 and λ2 = 8.
" # " # (" #)
1 −2 1 −2 2
For λ1 = 3 we get A − λ1 I = ∼ . Thus, a basis for Eλ1 is .
−2 4 0 0 1
" # " # (" #)
−4 −2 1 1/2 −1
For λ2 = 8 we get A − λ2 I = ∼ . Thus, a basis for Eλ2 is .
−2 −1 0 0 2
After normalizing, the basis vectors for the eigenspaces form an orthonormal basis for R2. Hence,
P = [2/√5 −1/√5; 1/√5 2/√5] is orthogonal and PT AP = [3 0; 0 8].
(d) We have
C(λ) = |2−λ −2 −5; −2 −5−λ −2; −5 −2 2−λ| = |7−λ 0 −7+λ; −2 −5−λ −2; −5 −2 2−λ| = |7−λ 0 0; −2 −5−λ −4; −5 −2 −3−λ|
= −(λ − 7)(λ² + 8λ + 7) = −(λ − 7)(λ + 1)(λ + 7)
−1
A basis for the eigenspace of λ1 is 0
1
3 −2 −5 1 0 −1
For λ2 = −1 we get A − λ2 I = −2 −4 −2 ∼ 0 1 1
−5 −2 3 0 0 0
1
A basis for the eigenspace of λ2 is −1 .
1
9 −2 −5 1 0 −1
For λ3 = −7 we get A − λ3 I = −2 2 −2 ∼ 0 1 −2
−5 −2 9 0 0 0
1
A basis for the eigenspace of λ3 is 2.
1
After normalizing, the basis vectors for the eigenspaces form an orthonormal basis for R3. Hence,
P = [−1/√2 1/√3 1/√6; 0 −1/√3 2/√6; 1/√2 1/√3 1/√6] orthogonally diagonalizes A to PT AP = diag(7, −1, −7).
1 − λ −2
(e) We have C(λ) = det(A − λI) = = λ2 + λ − 6 = (λ + 3)(λ − 2)
−2 −2 − λ
Hence, the eigenvalues are λ1 = −3 and λ2 = 2.
" # " # (" #)
4 −2 1 −1/2 1
For λ1 = −3 we get A − λ1 I = ∼ . Thus, a basis for Eλ1 is .
−2 1 0 0 2
" # " # (" #)
−1 −2 1 2 −2
For λ2 = 2 we get A − λ2 I = ∼ . Thus, a basis for Eλ2 is .
−2 −4 0 0 1
After normalizing, the basis vectors for the eigenspaces form an orthonormal basis for R2. Hence,
P = [1/√5 −2/√5; 2/√5 1/√5] is orthogonal and PT AP = [−3 0; 0 2].
(f) We have
C(λ) = |1−λ 0 2; 0 1−λ 4; 2 4 2−λ| = (1 − λ)(λ² − 3λ − 14) + 2(−1)(1 − λ)(2) = −(λ − 1)(λ² − 3λ − 18) = −(λ − 1)(λ − 6)(λ + 3)
After normalizing, the basis vectors for the eigenspaces form an orthonormal basis for R3. Hence,
P = [−1/√2 −1/√6 1/√3; 1/√2 −1/√6 1/√3; 0 2/√6 1/√3] orthogonally diagonalizes A to PT AP = diag(0, 0, 3).
(h) We have
C(λ) = |2−λ 2 4; 2 4−λ 2; 4 2 2−λ| = −(λ + 2)(λ² − 10λ + 16) = −(λ + 2)(λ − 2)(λ − 8)
Thus, {(−1, 1, 0), (1, 1, 2)} is an orthogonal basis for the eigenspace of λ1.
For λ2 = 3 we get A − 3I = [2 −1 1; −1 2 1; 1 1 2] ∼ [1 0 1; 0 1 1; 0 0 0]
A basis for the eigenspace of λ2 is {(−1, −1, 1)}.
After normalizing, the basis vectors for the eigenspaces form an orthonormal basis for R3. Hence,
P = [−1/√2 1/√6 −1/√3; 1/√2 1/√6 −1/√3; 0 2/√6 1/√3] orthogonally diagonalizes A to PT AP = diag(6, 6, 3).
(k) We have
C(λ) = |−λ 1 −1; 1 −λ 1; −1 1 −λ| = (1 − λ)(λ² + λ − 2) = −(λ − 1)²(λ + 2)
(−1, 0, 1) − (−1/2)(1, 1, 0) = (−1/2, 1/2, 1)
Remembering that we need an orthonormal basis, we take ~v1 = (1/√2)(1, 1, 0) and ~v2 = (1/√6)(−1, 1, 2).
Next, we have
A + 2I = [2 1 −1; 1 2 1; −1 1 2] ∼ [1 0 −1; 0 1 1; 0 0 0]
Hence, ~v3 = (1/√3)(1, −1, 1).
Thus, {~v1, ~v2, ~v3} is an orthonormal basis for R3 and hence, taking P = [~v1 ~v2 ~v3] gives
PT AP = diag(1, 1, −2)
(l) We have
C(λ) = |1−λ 1 1 1; 1 1−λ 1 1; 1 1 1−λ 1; 1 1 1 1−λ| = (−λ)²(λ² − 4λ) = λ³(λ − 4)
The eigenvalues are λ1 = 0 and λ2 = 4.
For λ1 = 0 we get A − 0I = [1 1 1 1; 1 1 1 1; 1 1 1 1; 1 1 1 1] ∼ [1 1 1 1; 0 0 0 0; 0 0 0 0; 0 0 0 0]
A basis for the eigenspace of λ1 is {(−1, 1, 0, 0), (−1, 0, 1, 0), (−1, 0, 0, 1)} = {w~1, w~2, w~3}.
However, we need an orthogonal basis for each eigenspace, so we apply the Gram-Schmidt pro-
cedure to this basis.
~v3 = w~3 − (hw~3, ~v1i/k~v1k²)~v1 − (hw~3, ~v2i/k~v2k²)~v2 = (−1/3, −1/3, −1/3, 1)
Thus, we take ~v3 = (−1, −1, −1, 3) and we get that {~v1, ~v2, ~v3} is an orthogonal basis for the eigenspace of λ1.
For λ2 = 4 we get A − 4I = [−3 1 1 1; 1 −3 1 1; 1 1 −3 1; 1 1 1 −3] ∼ [1 0 0 −1; 0 1 0 −1; 0 0 1 −1; 0 0 0 0]
A basis for the eigenspace of λ2 is {(1, 1, 1, 1)}.
After normalizing, the basis vectors for the eigenspaces form an orthonormal basis for R4. Hence,
P = [−1/√2 −1/√6 −1/√12 1/2; 1/√2 −1/√6 −1/√12 1/2; 0 2/√6 −1/√12 1/2; 0 0 3/√12 1/2]
orthogonally diagonalizes A to PT AP = diag(0, 0, 0, 4).
10.2.2 Observe that {~v1, ~v2, ~v3} is an orthogonal set, but we need an orthogonal matrix. So, we normalize the
vectors and take P = [−1/3 −2/√5 2/√45; −2/3 1/√5 4/√45; 2/3 0 5/√45].
We want to find A such that PT AP = diag(−3, 1, 6). Thus,
A = P diag(−3, 1, 6) PT = [1 0 2; 0 1 4; 2 4 2]
10.2.3 If A is orthogonally diagonalizable, then there exists an orthogonal matrix P and diagonal matrix D
such that D = PT AP = P−1 AP. Since A and P are invertible, we get that D is invertible and hence,
10.2.5 Assume that there exists a symmetric matrix B such that A = B2 . Since B is symmetric, there exists an
orthogonal matrix P such that PT BP = diag(λ1 , . . . , λn ). Hence, we have B = PDPT and so
So,
A = Q diag(λ21 , . . . , λ2n )QT = [Q diag(λ1 , . . . , λn )QT ][Q diag(λ1 , . . . , λn )QT ]
Thus, we define B = Q diag(λ1 , . . . , λn )QT . We have that QT BQ = diag(λ1 , . . . , λn ) and so B is symmet-
ric since it is orthogonally diagonalizable.
" √ √ #
1/ √2 1/ √2
10.2.6 (a) The statement is false. The matrix P = is orthogonal, but it is not symmetric,
−1/ 2 1/ 2
so it is not orthogonally diagonalizable.
(b) The statement is false by the result of 4(c).
(c) The statement is true. If A is orthogonally similar to B, then there exists an orthogonal matrix
Q such that QT AQ = B. If B is symmetric, then there exists an orthogonal matrix P such that
PT BP = D. Hence,
D = PT (QT AQ)P = PT QT AQP = (QP)T A(QP)
where QP is orthogonal since a product of orthogonal matrices is orthogonal. Hence, A is orthog-
onally diagonalizable.
(d) If A is symmetric, then it is orthogonally diagonalizable by the Principal Axis Theorem. Thus, A
is diagonalizable and hence gλ = aλ by Corollary 6.3.4.
x1 = −(1/3)y1 − (2/3)y2 + (2/3)y3
x2 = (2/3)y1 − (2/3)y2 − (1/3)y3
x3 = (2/3)y1 + (1/3)y2 + (2/3)y3
4 −1 1
(g) i. A = −1 4 1
1 1 4
ii. We have
C(λ) = −(λ − 5)2 (λ − 2)
Hence, the eigenvalues are λ1 = 5 and λ2 = 2 so the quadratic form and symmetric matrix
are positive definite.
iii. The corresponding diagonal form is Q = 5y21 + 5y22 + 2y23
We find that a matrix which orthogonally diagonalizes A is P = [−1/√2 1/√6 −1/√3; 1/√2 1/√6 −1/√3; 0 2/√6 1/√3].
So, computing ~x = P~y we get that the required change of variables is
x1 = −(1/√2)y1 + (1/√6)y2 − (1/√3)y3
x2 = (1/√2)y1 + (1/√6)y2 − (1/√3)y3
x3 = (2/√6)y2 + (1/√3)y3
−4 1 1
(h) i. A = 1 −4 −1
1 −1 −4
ii. We have
C(λ) = −(λ + 3)2 (λ + 6)
Hence, the eigenvalues are λ1 = −3 and λ2 = −6 so the quadratic form and symmetric matrix
are negative definite.
iii. The corresponding diagonal form is Q = −3y21 − 3y22 − 6y23
Section 10.3 Solutions 221
We find that a matrix which orthogonally diagonalizes A is P = [1/√2 1/√6 −1/√3; 1/√2 −1/√6 1/√3; 0 2/√6 1/√3].
So, computing ~x = P~y we get that the required change of variables is
x1 = (1/√2)y1 + (1/√6)y2 − (1/√3)y3
x2 = (1/√2)y1 − (1/√6)y2 + (1/√3)y3
x3 = (2/√6)y2 + (1/√3)y3
10.3.2 (a) Let λ1 , λ2 be the eigenvalues of A. If det A > 0 then ac − b2 > 0 so a and c must both have the
same sign. Thus, c > 0. We know that det A = λ1 λ2 and λ1 + λ2 = a + c and so λ1 and λ2 must
have the same sign since det A > 0 and we have λ1 + λ2 = a + c > 0 so we must have λ1 and λ2
both positive so Q is positive definite.
(b) Let λ1 , λ2 be the eigenvalues of A. If det A > 0 then ac − b2 > 0 so a and c must both have the
same sign. Thus, c < 0. We know that det A = λ1 λ2 and λ1 + λ2 = a + c and so λ1 and λ2 must
have the same sign since det A > 0 and we have λ1 + λ2 = a + c < 0 so we must have λ1 and λ2
both negative so Q is negative definite.
(c) Let λ1 , λ2 be the eigenvalues of A. If det A < 0 then det A = λ1 λ2 < 0 so λ1 and λ2 must have
different signs thus Q is indefinite.
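The eigenvalue criteria of 10.3.2 can be packaged as a small classifier. This is our own illustrative helper (NumPy assumed; the test matrices are not from the exercise).

```python
import numpy as np

def classify(A, tol=1e-12):
    """Classify the quadratic form x^T A x from the eigenvalues of the symmetric matrix A."""
    evals = np.linalg.eigvalsh(A)
    if np.all(evals > tol):
        return "positive definite"
    if np.all(evals < -tol):
        return "negative definite"
    if np.any(evals > tol) and np.any(evals < -tol):
        return "indefinite"
    return "semidefinite"

assert classify(np.array([[2.0, 1.0], [1.0, 2.0]])) == "positive definite"   # eigenvalues 1, 3
assert classify(np.array([[-2.0, 1.0], [1.0, -2.0]])) == "negative definite" # eigenvalues -3, -1
assert classify(np.array([[0.0, 1.0], [1.0, 0.0]])) == "indefinite"          # eigenvalues -1, 1
```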
10.3.3 If h , i is an inner product on Rn , then it is symmetric and positive definite. Observe that ~yT A~x is a 1 × 1
matrix, and hence it is symmetric. Thus,
~xT A~y = h~x, ~yi = h~y, ~xi = ~yT A~x = (~yT A~x)T = ~xT AT ~y
Hence, ~xT A~y = ~xT AT ~y for all ~x, ~y ∈ Rn. Therefore, A = AT by Theorem 3.1.4. Observe that h~x, ~xi = ~xT A~x > 0
for all ~x ≠ ~0 and h~x, ~xi = 0 if and only if ~x = ~0. Thus, h , i is positive definite.
We have
h~x, ~yi = ~xT A~y = (~xT A~y)T = ~yT AT ~x = ~yT A~x = h~y, ~xi
So, h , i is symmetric.
We have
hs~x + t~y, w~i = (s~x + t~y)T A~w = (s~xT + t~yT)A~w = s~xT A~w + t~yT A~w = sh~x, w~i + th~y, w~i
10.3.4 (a) If A is a positive definite symmetric matrix, then ~xT A~x > 0 for all ~x , ~0. Hence, for 1 ≤ i ≤ n,
we have
aii = ~eTi A~ei > 0
as required.
(b) If A is a positive definite symmetric matrix, then all the eigenvalues of A are positive. Thus, since
the determinant of A is the product of the eigenvalues, we have that det A > 0 and hence A is
invertible.
10.3.5 We first must observe that A + B is symmetric. Since the eigenvalues of A and B are all positive, the
quadratic forms ~xT A~x and ~xT B~x are positive definite. Let ~x , ~0. Then ~xT A~x > 0 and ~xT B~x > 0 , so
~xT (A + B)~x = ~xT A~x + ~xT B~x > 0 , and the quadratic form ~xT (A + B)~x is positive definite. Thus the
eigenvalues of A + B must be positive.
Section 10.4 Solutions 223
" #
1
To sketch x12 + 8x1 x2 + x22 = 6 in the x1 x2 -plane we first draw the y1 -axis in the direction of ~v1 =
1
" #
−1
and the y2 -axis in the direction of ~v2 = . Next, to convert the asymptotes into the x1 x2 -plane,
1
" # " √ √ #" # " #
y1 T 1/ √2 1/ √2 x1 1 x1 + x2
we use the change of variables = P ~x = = √ . So, the
y2 −1/ 2 1/ 2 x2 2 −x1 + x2
asymptotes in the x1 x2 -plane are
r
5
y2 = ± y1
3
r !
1 5 1
√ (−x1 + x2 ) = ± √ (x1 + x2 )
2 3 2
√ √
3(−x1 + x2 ) = ± 5(x1 + x2 )
√ √ √ √
( 3 ∓ 5)x2 = ( 3 ± 5)x1
√ √
3± 5
x2 = √ √ x1
3∓ 5
To sketch 3x12 − 2x1 x2 + 3x22 = 12 in the x1 x2 -plane we first draw the y1 -axis in the direction of
" # " #
−1 1
~v1 = and the y2 -axis in the direction of ~v2 = . Plotting gives
1 1
" #
−4 2
(c) The corresponding symmetric matrix is . We find that the characteristic polynomial is
2 −7
C(λ) = (λ + 8)(λ + 3). So, we have eigenvalues λ1 = −8 and λ2 = −3.
" # " # " #
4 2 1 1/2 −1
For λ1 we get A − λ1 I = ∼ . So, a corresponding eigenvector is ~v1 = .
2 1 0 0 2
" # " # " #
−1 2 1 −2 2
For λ2 we get A − λ2 I = ∼ . So, a corresponding eigenvector is ~v2 = .
2 −4 0 0 1
Thus, we have the ellipse −8y1² − 3y2² = −8 with principal axis ~v1 for y1 and ~v2 for y2.
Plotting this we get the graph in the y1 y2 -plane is
To sketch −4x12 + 4x1 x2 − 7x22 = −8 in the x1 x2 -plane we first draw the y1 -axis in the direction of
" # " #
−1 2
~v1 = and the y2 -axis in the direction of ~v2 = . Plotting gives
2 1
" #
−3 −2
(d) The corresponding symmetric matrix is . We find that the characteristic polynomial is
−2 0
C(λ) = (λ + 4)(λ − 1). So, we have eigenvalues λ1 = 1 and λ2 = −4.
" # " # " #
−4 −2 1 1/2 −1
For λ1 we get A − λ1 I = ∼ . So, a corresponding eigenvector is ~v1 = .
−2 −1 0 0 2
" # " # " #
1 −2 1 −2 2
For λ2 we get A − λ2 I = ∼ . So, a corresponding eigenvector is ~v2 = .
−2 4 0 0 1
" #
−1
To sketch −3x12 −4x1 x2 = 4 in the x1 x2 -plane we first draw the y1 -axis in the direction of ~v1 =
2
" #
2
and the y2 -axis in the direction of ~v2 = . Next, to convert the asymptotes into the x1 x2 -plane,
1
" # " √ √ #" # " #
y1 T −1/√ 5 2/ √5 x1 1 −x1 + 2x2
we use the change of variables = P ~x = = √ . So, the
y2 2/ 5 1/ 5 x2 5 2x1 + x2
asymptotes in the x1 x2 -plane are
y2 = ±(1/2) y1
(1/√5)(2x1 + x2) = ±(1/2) · (1/√5)(−x1 + 2x2)
2(2x1 + x2) = ±(−x1 + 2x2)
(2 ∓ 2)x2 = (−4 ∓ 1)x1
To sketch −x12 + 4x1 x2 + 2x22 = 6 in the x1 x2 -plane we first draw the y1 -axis in the direction of
" # " #
1 −2
~v1 = and the y2 -axis in the direction of ~v2 = . Next, to convert the asymptotes into the x1 x2 -
2 1
" # " √ √ #" # " #
y1 T 1/ √5 2/ √5 x1 1 x1 + 2x2
plane, we use the change of variables = P ~x = = √ . So,
y2 −2/ 5 1/ 5 x2 5 −2x1 + x2
the asymptotes in the x1 x2 -plane are
y2 = ±√(3/2) y1
(1/√5)(−2x1 + x2) = ±√(3/2) · (1/√5)(x1 + 2x2)
√2(−2x1 + x2) = ±√3(x1 + 2x2)
(√2 ∓ 2√3)x2 = (2√2 ± √3)x1
x2 = [(2√2 ± √3)/(√2 ∓ 2√3)] x1
To sketch 4x12 + 4x1 x2 + 4x22 = 12 in the x1 x2 -plane we first draw the y1 -axis in the direction of
" # " #
−1 1
~v1 = and the y2 -axis in the direction of ~v2 = . Plotting gives
1 1
"#
2 −2
(g) The corresponding symmetric matrix is A = . The eigenvalues are λ1 = 3 and λ2 =
−2 −1
" # " #
−2 1
−2 with corresponding eigenvectors ~v1 = , ~v2 = . Thus, A is diagonalized by P =
1 2
" √ √ # " #
−2/√ 5 1/ √5 3 0
to D = . Hence, we have the hyperbola 3y21 − 2y22 = 6 with princi-
1/ 5 2/ 5 0 2
pal axis ~v1 , and ~v2 . Since this is a hyperbola we need to graph the asymptotes. They are when
0 = 3y21 − 2y22 .
Hence, the equations of the asymptotes are y2 = ±√(3/2) y1. Graphing gives the diagram below to the
left. We have
(y1, y2) = PT ~x = (1/√5)[−2 1; 1 2](x1, x2) = (1/√5)(−2x1 + x2, x1 + 2x2)
So, the asymptotes in the x1 x2 -plane are given by
(1/√5)(x1 + 2x2) = ±√(3/2) · (1/√5)(−2x1 + x2)
Solving for x2 gives
x2 = [(−1 ± 2√(3/2))/(2 ± √(3/2))] x1
which is x2 ≈ 0.449x1 and x2 ≈ −4.449x1.
Alternately, we can find that the direction vectors of the asymptotes are
" # " #" # " √ √ #
x1 1 −2 1 1 (−2 + 3/2)/
= P~x = √ √ = √ √5
x2 5 1 2 3/2 (1 + 2 3/2)/ 5
y21 + 5y22 = 5
Graphing this in the y1 y2 -plane gives the diagram below to the left.
The principal axes are ~v1 and ~v2 . So, graphing it in the x1 x2 -plane gives the diagram below to the
right.
Section 10.5 Solutions 231
C(λ) = (λ − 5)(λ − 3)
Hence, the eigenvalues of A are λ1 = 5 and λ2 = 3. Therefore, the maximum is 5 and the minimum
is 3.
" #
3 5
(b) The corresponding symmetric matrix is A = . Hence, we have
5 −3
C(λ) = (λ − √34)(λ + √34)
Hence, the eigenvalues of A are λ1 = √34 and λ2 = −√34. Therefore, the maximum is √34 and
the minimum is −√34.
−1 0 0
(c) The corresponding symmetric matrix is A = 0 8 2 . Hence, we have
0 2 11
Hence, the eigenvalues of A are λ1 = 7, λ2 = −1, and λ3 = 12. Therefore, the maximum is 12 and
the minimum is −1.
3 −2 4
(d) The corresponding symmetric matrix is A = −2 6 2. Hence, we have
4 2 3
C(λ) = (λ − 7)2 (λ + 2)
Hence, the eigenvalues of A are λ1 = 7 and λ2 = −2. Therefore, the maximum is 7 and the
minimum is −2.
2 1 1
(e) The corresponding symmetric matrix is A = 1 2 1. Hence, we have
1 1 2
C(λ) = (λ − 1)2 (λ − 4)
Hence, the eigenvalues of A are λ1 = 1 and λ2 = 4. Therefore, the maximum is 4 and the minimum
is 1.
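The maximum/minimum computations above rest on the fact that the extreme values of ~xT A~x over unit vectors are the extreme eigenvalues. A numerical spot-check for the matrix of part (e), as a sketch assuming NumPy:

```python
import numpy as np

# symmetric matrix from part (e)
A = np.array([[2.0, 1.0, 1.0],
              [1.0, 2.0, 1.0],
              [1.0, 1.0, 2.0]])

evals = np.linalg.eigvalsh(A)
assert np.allclose(sorted(evals), [1.0, 1.0, 4.0])

# x^T A x over random unit vectors always lies between the extreme eigenvalues
rng = np.random.default_rng(0)
for _ in range(100):
    x = rng.standard_normal(3)
    x /= np.linalg.norm(x)
    q = x @ A @ x
    assert evals.min() - 1e-9 <= q <= evals.max() + 1e-9
```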
232 Section 10.6 Solutions
Next compute
~u1 = (1/σ1) A~v1 = (1/√15)[1 3; 2 1; 1 1](2/√13, 3/√13) = (11/√195, 7/√195, 5/√195)
~u2 = (1/σ2) A~v2 = (1/√2)[1 3; 2 1; 1 1](−3/√13, 2/√13) = (3/√26, −4/√26, −1/√26)
We then need to extend {~u1, ~u2} to an orthonormal basis for R3. Since we are in R3, we can use the
cross product. We have
(11, 7, 5) × (3, −4, −1) = (13, 26, −65)
So, we can take ~u3 = (1/√30, 2/√30, −5/√30). Thus, we have U = [11/√195 3/√26 1/√30; 7/√195 −4/√26 2/√30; 5/√195 −1/√26 −5/√30]. Then,
A = UΣV T as required.
" #
12 −6
(e) We have AT A = . The eigenvalues of AT A are (ordered from greatest to least) λ1 = 15
−6 3
and λ2 = 0.
√ # " " √ #
−2/√ 5 1/ √5
Corresponding normalized eigenvectors are ~v1 = for λ1 and ~v2 = for λ2 .
1/ 5 2/ 5
" √ √ #
T −2/√ 5 1/ √5
Hence, A A is orthogonally diagonalized by V = .
1/ 5 2/ 5
√
√ 15 0
The singular values of A are σ1 = 15 and σ2 = 0. Thus, the matrix Σ is Σ = 0 0.
0 0
Next compute
~u1 = (1/σ1) A~v1 = (1/√15)[2 −1; 2 −1; 2 −1](−2/√5, 1/√5) = (−1/√3, −1/√3, −1/√3)
We then need to extend {~u1} to an orthonormal basis for R3. One way of doing this is to find an
orthonormal basis for Null(AT). A basis for Null(AT) is {(−1, 1, 0), (−1, 0, 1)}. To make this an orthogonal
basis, we apply the Gram-Schmidt procedure. We take ~v1 = (−1, 1, 0), and then we get
(−1, 0, 1) − (1/2)(−1, 1, 0) = (−1/2, −1/2, 1)
Thus, we pick ~u2 = (−1/√2, 1/√2, 0) and ~u3 = (−1/√6, −1/√6, 2/√6). Consequently, we have
U = [−1/√3 −1/√2 −1/√6; −1/√3 1/√2 −1/√6; −1/√3 0 2/√6] and A = UΣV T as required.
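The singular values found in part (e) can be confirmed with NumPy's SVD, using the matrix A = [2 −1; 2 −1; 2 −1] whose AT A = [12 −6; −6 3] as above (a numerical sketch, not part of the original solutions).

```python
import numpy as np

A = np.array([[2.0, -1.0],
              [2.0, -1.0],
              [2.0, -1.0]])

U, s, Vt = np.linalg.svd(A)
assert np.allclose(s, [np.sqrt(15), 0.0])

# reassembling U Sigma V^T recovers A
Sigma = np.zeros((3, 2))
Sigma[:2, :2] = np.diag(s)
assert np.allclose(U @ Sigma @ Vt, A)
```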
3 0 0
(f) We have AT A = 0 3 0. The only eigenvalue of AT A is λ1 = 3 with multiplicity 3.
0 0 3
Next compute
~u1 = (1/σ1) A~v1 = (1/√3)(1, 1, −1, 0)
~u2 = (1/σ2) A~v2 = (1/√3)(0, 1, 1, 1)
~u3 = (1/σ3) A~v3 = (1/√3)(1, −1, 0, 1)
We then need to extend {~u1, ~u2, ~u3} to an orthonormal basis for R4. We find that a basis for Null(AT)
is {(−1, 0, −1, 1)}. After normalizing this vector, we take
U = [1/√3 0 1/√3 −1/√3; 1/√3 1/√3 −1/√3 0; −1/√3 1/√3 0 −1/√3; 0 1/√3 1/√3 1/√3] and A = UΣV T as required.
" #
T 4 8
(g) We have A A = . The eigenvalue of AT A are (ordered from greatest to least) λ1 = 20 and
8 16
" √ # " √ #
1/ √5 −2/√ 5
λ2 = 0. Corresponding normalized eigenvectors are ~v1 = for λ1 and ~v2 = for λ2 .
2/ 5 1/ 5
T
Hence, " A√ A is orthogonally √ # diagonalized by
1/ √5 −2/√ 5 √
V= . The singular values of A are σ1 = 20 and σ2 = 1. Thus the matrix Σ is
2/ 5 1/ 5
√
20 0
0 0
Σ = . Next compute
0 0
0 0
1 2 " √ # 1/2
1 1 1 2 1/ √5 1/2
~u1 = A~v1 = √ =
σ1 20 1 2 2/ 5 1/2
1 2 1/2
~u2 = (−1/2, −1/2, 1/2, 1/2), ~u3 = (1/2, −1/2, 1/2, −1/2), ~u4 = (1/2, −1/2, −1/2, 1/2)
Thus, we take
~u1 = (1/σ1) A~v1 = (1/√45)[4 3; 4 2; −2 4](4/5, 3/5) = (5/√45, 22/√1125, 4/√1125)
~u2 = (1/σ2) A~v2 = (1/√20)[4 3; 4 2; −2 4](−3/5, 4/5) = (0, −2/√125, 11/√125)
Thus, we have U = [~u1 ~u2 ~u3]. Then, A = (AT)T = (UΣV T)T = VΣT U T.
10.6.2 (a) We have AT A = [40 0; 0 10]. Thus, the eigenvalues of AT A are λ1 = 40 and λ2 = 10. Hence, the
maximum of kA~xk subject to k~xk = 1 is √40.
" #
T 6 −3
(b) We have A A = . Thus,
−3 14
If σ ≠ 0, then define ~v = (1/σ)AT ~u. Thus, we have AT ~u = σ~v and
AAT ~u = σ2~u
A(σ~v) = σ2~u
A~v = σ~u
Thus, ~u is a left singular vector of A.
If σ = 0, then we have AAT ~u = ~0 and hence
kAT ~uk2 = hAT ~u, AT ~ui = (AT ~u)T (AT ~u) = ~uT AAT ~u = ~uT ~0 = 0
Thus, AT ~u = ~0 and so ~u is a left singular vector of A.
Replacing A by AT in our result above gives that ~v is a right singular vector if and only if ~v is an
eigenvector of AT A.
h i h i
10.6.5 Let U = ~u1 · · · ~um and V = ~v1 · · · ~vn .
By Lemma 10.6.7, {~u1 , . . . , ~ur } forms an orthonormal basis for Col(A).
By the Fundamental Theorem of Linear Algebra, the nullspace of AT is the orthogonal complement of
Col(A). We are given that {~u1, . . . , ~um} forms an orthonormal basis for Rm and hence a basis for the
orthogonal complement of Span{~u1, . . . , ~ur} is B = {~ur+1, . . . , ~um}. Thus B is a basis for Null(AT).
√
By Theorem 10.6.1, we have that kA~vi k = λi . But, since UΣV T is a singular value decomposition of
A, we have that λr+1 , . . . , λn are the zero eigenvalues of AT A. Hence, {~vr+1 , . . . , ~vn } is an orthonormal set
of n − r vectors in the nullspace of A. But, since A has rank r, we know by the Dimension Theorem, that
dim Null(A) = n − r. So {~vr+1 , . . . , ~vn } is a basis for Null(A).
Hence, by the Fundamental Theorem of Linear Algebra we have that the orthogonal complement of
Null(A) is Row(A). Hence {~v1 , . . . , ~vr } forms a basis for Row(A).
10.6.6 (a) We get AT A = [4 2 2; 2 5 1; 2 1 5] which has eigenvalues λ1 = 8, λ2 = 4, and λ3 = 2 and corresponding
unit eigenvectors ~v1 = (1/√3)(1, 1, 1), ~v2 = (0, −1/√2, 1/√2), and ~v3 = (−2/√6, 1/√6, 1/√6). Thus, B = {~v1, ~v2, ~v3}.
(b) We have
~u1 = (1/σ1) A~v1 = (1/√8)[2 1 1; 0 2 0; 0 0 2](1/√3)(1, 1, 1) = (4/√24, 2/√24, 2/√24)
~u2 = (1/σ2) A~v2 = (1/2)[2 1 1; 0 2 0; 0 0 2](0, −1/√2, 1/√2) = (0, −1/√2, 1/√2)
~u3 = (1/σ3) A~v3 = (1/√2)[2 1 1; 0 2 0; 0 0 2](−2/√6, 1/√6, 1/√6) = (−2/√12, 2/√12, 2/√12)
Hence,
C[L]B = diag(√8, 2, √2)
A = UΣV T
= [~u1 · · · ~um] Σ [~v1T ; . . . ; ~vnT]
= [σ1~u1 · · · σr~ur ~0 · · · ~0] [~v1T ; . . . ; ~vnT]
= σ1~u1~v1T + · · · + σr~ur~vrT
since Σ has σ1, . . . , σr on its diagonal and zeros elsewhere.
Chapter 11 Solutions
2iz = 4
z = −2i
(b) We have
1 − z = 1/(1 − 5i)
−z = −1 + (1 + 5i)/26
z = 25/26 − (5/26)i
Section 11.1 Solutions 241
(c) We have
(1 + i)z + (2 + i)z = 2
(3 + 2i)z = 2
z = 2/(3 + 2i)
z = 6/13 − (4/13)i
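The arithmetic in part (c) can be replicated directly with Python's built-in complex numbers (an illustrative check, not part of the original solutions).

```python
# (3 + 2i)z = 2  =>  z = 2/(3 + 2i)
z = 2 / (3 + 2j)

# matches the hand computation z = 6/13 - (4/13)i
assert abs(z - (6/13 - 4j/13)) < 1e-12
```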
11.1.3 (a) Row reducing gives
[1 2+i i 0 | 1+i; i −1+2i 0 2i | −i; 1 2+i 1+i 2i | 2−i] ∼ [1 2+i i 0 | 1+i; 0 0 1 2i | 1−2i; 0 0 1 2i | 1−2i]
∼ [1 2+i 0 2 | −1; 0 0 1 2i | 1−2i; 0 0 0 0 | 0]
Hence, the system is consistent with two parameters. Let z2 = s ∈ C and z4 = t ∈ C. Then, the
general solution is
~z = (−1, 0, 1 − 2i, 0) + s(−2 − i, 1, 0, 0) + t(−2, 0, −2i, 1), s, t ∈ C
(b) Row reducing gives
i 2 −3 − i 1 i 2 −3 − i 1 R1 − iR2
1 + i 2 − 2i −4 i R2 − R1 ∼ 1 −2i −1 + i −1 + i ∼
1
i 2 −3 − 3i 1 + 2i R3 − R1 0 0 −2i 2i 2 iR3
0 0 −2 2+i 0 0 0 i
R1 + 2R3
1 −2i −1 + i −1 +i ∼ 1 −2i −1 + i −1 + i
0 0 1 −1 0 0 1 −1
Hence, the system is consistent with one parameter. Let z3 = t ∈ C. Then, the general solution is
~z = (i, −i, 0) + t(−1 − i, −1, 1), t ∈ C
(e) We have
11.1.5 Let z1 = a + bi and z2 = c + di. If z1 + z2 is a negative real number, then b = −d and a + c < 0. Also, we
have z1 z2 = (ac − bd) + (ad + bc)i is a negative real number, so ad + bc = 0 and ac − bd < 0. Combining
these we get
0 = ad + bc = ad − dc = d(a − c)
so either d = 0 or a = c. If a = c, then ac − bd < 0 implies that a2 + b2 < 0 which is impossible. Hence,
we must have d = 0. But then b = −d = 0 so z1 and z2 are real numbers.
11.1.6 Let z = a + bi. Then a2 + b2 = 1. Hence
1/(1 − z) = 1/(1 − a − bi) = (1 − a + bi)/((1 − a − bi)(1 − a + bi)) = (1 − a + bi)/((1 − a)² + b²)
= (1 − a + bi)/(1 − 2a + a² + b²) = (1 − a + bi)/(1 − 2a + 1)
= (1 − a)/(2 − 2a) + (b/(2 − 2a))i = 1/2 + (b/(2 − 2a))i
Hence, Re(1/(1 − z)) = 1/2.
11.1.7 Let z = a + bi. We will prove this by induction on n. If n = 1, then z = z. Assume that zk = (z)k for
some integer k. Then, by Theorem 5.1.3 part 5,
as required.
244 Section 11.2 Solutions
α~z = (αz1, αz2, αz3) ∈ S1 since i(αz1) = α(iz1) = αz3. Therefore, by the Subspace Test, S1 is a subspace of
C3.
Every ~z ∈ S1 has the form
(z1, z2, iz1) = z1(1, 0, i) + z2(0, 1, 0)
Thus, B = {(1, 0, i), (0, 1, 0)} spans S1 and is clearly linearly independent, so it is a basis for S1. Conse-
quently, dim S1 = 2.
(b) Observe that (1, 0, 0) ∈ S2 since 1(0) = 0 and (0, 1, 0) ∈ S2 since 0(1) = 0. However,
(1, 0, 0) + (0, 1, 0) = (1, 1, 0) ∉ S2 since 1(1) ≠ 0. Consequently, S2 is not closed under addition and
hence is not a subspace.
(c) By definition S3 is a subset of C3 and ~0 ∈ S3 since 0 + 0 + 0 = 0. Let ~z = (z1, z2, z3), ~y = (y1, y2, y3) ∈ S3.
Then, z1 + z2 + z3 = 0 and y1 + y2 + y3 = 0. Hence, ~z + ~y = (z1 + y1, z2 + y2, z3 + y3) ∈ S3 since
(z1 + y1) + (z2 + y2) + (z3 + y3) = z1 + z2 + z3 + y1 + y2 + y3 = 0 + 0 = 0. Similarly, α~z = (αz1, αz2, αz3) ∈ S3
since αz1 + αz2 + αz3 = α(z1 + z2 + z3) = α(0) = 0. Therefore, by the Subspace Test, S3 is a subspace of
C3.
Every ~z ∈ S3 has the form
(z1, z2, −z1 − z2) = z1(1, 0, −1) + z2(0, 1, −1)
Thus, B = {(1, 0, −1), (0, 1, −1)} spans S3 and is clearly linearly independent, so it is a basis for S3. Conse-
quently, dim S3 = 2.
11.2.2 (a) Row reducing the matrix A to its reduced row echelon form R gives
[ 1     i  ]   [1  0]
[1+i  −1+i ] ∼ [0  1]
[−1     i  ]   [0  0]
The non-zero rows of R form a basis for the rowspace of A. Hence, a basis for Row(A) is
{[1; 0], [0; 1]}
The columns of A which correspond to the columns of R with leading ones form a basis for the columnspace of A. So, a basis for Col(A) is
{[1; 1+i; −1], [i; −1+i; i]}
Solving the homogeneous system A~z = ~0, we find that a basis for Null(A) is the empty set.
To find a basis for the left nullspace, we row reduce Aᵀ. We get
[1   1+i  −1]   [1  1+i  0]
[i  −1+i   i] ∼ [0   0   1]
Hence, a basis for Null(Aᵀ) is {[−1−i; 1; 0]}.
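The four-subspace computation in part (a) can be confirmed numerically with NumPy (an added check, not part of the original solution):

```python
import numpy as np

A = np.array([[1, 1j],
              [1+1j, -1+1j],
              [-1, 1j]])

# rank 2 means Row(A) and Col(A) are 2-dimensional and Null(A) = {0}
assert np.linalg.matrix_rank(A) == 2

# left nullspace: y^T A = 0 for y = (-1-i, 1, 0)
y = np.array([-1-1j, 1, 0])
assert np.allclose(y @ A, 0)
```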
(b) Row reducing the matrix B to its reduced row echelon form R gives
[1   1+i  −1]   [1  1+i  0]
[i  −1+i   i] ∼ [0   0   1]
The non-zero rows of R form a basis for the rowspace of B. Hence, a basis for Row(B) is
{[1; 1+i; 0], [0; 0; 1]}
The columns of B which correspond to the columns of R with leading ones form a basis for the columnspace of B. So, a basis for Col(B) is
{[1; i], [−1; i]}
Solving the homogeneous system B~z = ~0, we find that a basis for Null(B) is {[−1−i; 1; 0]}.
The non-zero rows of R form a basis for the rowspace of A. Hence, a basis for Row(A) is
{[1; 0; 1], [0; 1; 1−i]}
The columns of A which correspond to the columns of R with leading ones form a basis for the columnspace of A. So, a basis for Col(A) is
{[1; 0; i], [1+i; 2; 1−i]}
Solving the homogeneous system A~z = ~0, we find that a basis for Null(A) is {[−1; −1+i; 1]}.
11.2.4 (a) Let ~z = [z1; z2; z3], ~y = [y1; y2; y3] ∈ C³ and α, β ∈ C. Then, expanding each component of L(α~z + β~y) and collecting the terms in α and β gives L(α~z + β~y) = αL(~z) + βL(~y). Thus L is linear.
" # " # " #
−i 1+i 1 + 2i
We have L(1, 0, 0) = , L(0, 1, 0) = , and L(0, 0, 1) = . Hence [L] =
−1 + i −2i −3i
" #
−i 1 + i 1 + 2i
.
−1 + i −2i −3i
(b) Row reducing [L] we get
[ −i   1+i  1+2i]   [1  −1+i  0]
[−1+i  −2i   −3i] ∼ [0    0   1]
A basis for ker(L) is given by the general solution of [L]~x = ~0; hence a basis is {[1−i; 1; 0]}.
The range of L is equal to the columnspace of [L]. Thus, a basis for the range of L is {[−i; −1+i], [1+2i; −3i]}.
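The kernel and range found above can be double-checked numerically (an added sketch, not part of the original solution):

```python
import numpy as np

L = np.array([[-1j, 1+1j, 1+2j],
              [-1+1j, -2j, -3j]])

k = np.array([1-1j, 1, 0])        # claimed kernel basis vector
assert np.allclose(L @ k, 0)

# rank 2 means the range is 2-dimensional, spanned by columns 1 and 3
assert np.linalg.matrix_rank(L) == 2
```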
11.2.5 (a) Let ~w, ~z ∈ C² and α, β ∈ C. Then
L(α~z + β~w) = L(αz1 + βw1, αz2 + βw2) = [ (αz1 + βw1) + (1 + i)(αz2 + βw2)  ]
                                         [ (1 + i)(αz1 + βw1) + 2i(αz2 + βw2)]
             = α [ z1 + (1 + i)z2  ] + β [ w1 + (1 + i)w2  ]
                 [ (1 + i)z1 + 2iz2]     [ (1 + i)w1 + 2iw2]
             = αL(~z) + βL(~w)
Hence, L is linear.
" #
z1
(b) If ~z = ∈ ker(L), then
z2
" # " #
0 z1 + (1 + i)z2
= L(~z) =
0 (1 + i)z1 + 2iz2
Hence, we need to solve the homogeneous system of equations
z1 + (1 + i)z2 = 0
(1 + i)z1 + 2iz2 = 0
Hence,
[L]B = [0    0 ]
       [0  1+2i]
Section 11.3 Solutions 249
11.2.6 We have
|1     −1      i  |   |1    0  0|
|1+i   −i      i  | = |1+i  1  1| = 1 · |1  1| = 0
|1−i  −1+2i  1+2i |   |1−i  i  i|       |i  i|
11.2.7 We have
[1   1     2    | 1 0 0]   [1 0 0 | 2+2i  −i  −2+i]
[0  1+i    1    | 0 1 0] ∼ [0 1 0 |  1    −i    i ]
[i  1+i  1+2i   | 0 0 1]   [0 0 1 | −1−i   i   1−i]
Hence,
[1   1     2  ]⁻¹   [ 2+2i  −i  −2+i]
[0  1+i    1  ]   = [  1    −i    i ]
[i  1+i  1+2i ]     [−1−i    i   1−i]
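The inverse computed above is easy to verify numerically (an added check, not part of the original solution):

```python
import numpy as np

A = np.array([[1, 1, 2],
              [0, 1+1j, 1],
              [1j, 1+1j, 1+2j]])
Ainv = np.array([[2+2j, -1j, -2+1j],
                 [1, -1j, 1j],
                 [-1-1j, 1j, 1-1j]])

# A times its claimed inverse should give the identity
assert np.allclose(A @ Ainv, np.eye(3))
assert np.allclose(Ainv, np.linalg.inv(A))
```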
11.2.8 We prove this by induction. If A = [a] is a 1 × 1 matrix, then det \overline{A} = \overline{a} = \overline{det A}.
Assume the result holds for (n − 1) × (n − 1) matrices and consider an n × n matrix A. If we expand det \overline{A} along the first row, we get by definition of the determinant
det \overline{A} = Σ_{i=1}^{n} \overline{a_{1i}} C_{1i}(\overline{A})
where the C_{1i} are the cofactors. But each of these cofactors is the determinant of an (n − 1) × (n − 1) matrix, so we have by our inductive hypothesis that C_{1i}(\overline{A}) = \overline{C_{1i}(A)}. Hence,
det \overline{A} = Σ_{i=1}^{n} \overline{a_{1i}} \, \overline{C_{1i}(A)} = \overline{Σ_{i=1}^{n} a_{1i} C_{1i}(A)} = \overline{det A}
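The identity det(conj(A)) = conj(det A) proved above can be spot-checked on a random complex matrix (an added sketch, not part of the original solution):

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((4, 4)) + 1j * rng.standard_normal((4, 4))

# determinant of the entrywise conjugate equals the conjugate of the determinant
assert np.isclose(np.linalg.det(np.conj(A)), np.conj(np.linalg.det(A)))
```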
Solving λ² + 16 = 0 we find that the eigenvalues are λ1 = 4i and λ2 = −4i. For λ1 = 4i we get
A − 4iI = [3−4i    5  ] ∼ [5  3+4i]
          [ −5   −3−4i]   [0    0 ]
Hence, a basis for Eλ1 is {[3+4i; −5]}. For λ2 = −4i we get
A + 4iI = [3+4i    5  ] ∼ [5  3−4i]
          [ −5   −3+4i]   [0    0 ]
Hence, a basis for Eλ2 is {[3−4i; −5]}.
Therefore, A is diagonalized by P = [3+4i  3−4i; −5  −5] to D = [4i  0; 0  −4i].
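We can confirm this diagonalization numerically, reading A off from the display of A − 4iI above (an added check, not part of the original solution):

```python
import numpy as np

A = np.array([[3, 5], [-5, -3]])          # real matrix with eigenvalues +/- 4i
P = np.array([[3+4j, 3-4j], [-5, -5]])
D = np.diag([4j, -4j])

assert np.allclose(np.linalg.inv(P) @ A @ P, D)
```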
(c) We have
C(λ) = |2−λ   2    −1 |   |−λ   0    λ  |   | 0    0    λ  |
       |−4   1−λ    2 | = |−4  1−λ   2  | = |−2   1−λ   2  | = −λ(λ² − 2λ + 5)
       | 2    2   −1−λ|   | 2   2  −1−λ |   |1−λ   2  −1−λ |
Hence, the eigenvalues are λ1 = 0 and the roots of λ² − 2λ + 5. By the quadratic formula, we get that λ2 = 1 + 2i and λ3 = 1 − 2i. For λ1 = 0 we get
A − 0I = [ 2  2  −1]   [1  0  −1/2]
         [−4  1   2] ∼ [0  1    0 ]
         [ 2  2  −1]   [0  0    0 ]
Hence, a basis for Eλ1 is {[1; 0; 2]}. For λ2 = 1 + 2i we get
A − λ2I = [1−2i   2     −1 ]   [1  0  −1]
          [ −4   −2i     2 ] ∼ [0  1  −i]
          [  2    2   −2−2i]   [0  0   0]
Hence, a basis for Eλ2 is {[1; i; 1]}. For λ3 = 1 − 2i we get
A − λ3I = [1+2i   2     −1 ]   [1  0  −1]
          [ −4    2i     2 ] ∼ [0  1   i]
          [  2    2   −2+2i]   [0  0   0]
Hence, a basis for Eλ3 is {[1; −i; 1]}.
Therefore, A is diagonalized by P = [1  1  1; 0  i  −i; 2  1  1] to D = diag(0, 1+2i, 1−2i).
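As an added numerical check (not part of the original solution), the diagonalization in part (c) can be verified directly:

```python
import numpy as np

A = np.array([[2, 2, -1], [-4, 1, 2], [2, 2, -1]])
P = np.array([[1, 1, 1], [0, 1j, -1j], [2, 1, 1]])
D = np.diag([0, 1+2j, 1-2j])

assert np.allclose(np.linalg.inv(P) @ A @ P, D)
```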
(d) We have
C(λ) = |1−λ    i  | = λ²
       |  i   −1−λ|
Hence, the only eigenvalue is λ = 0, with algebraic multiplicity 2. Since A − 0I = [1  i; i  −1] has rank 1, the geometric multiplicity is 1, so A is not diagonalizable.
(e) Here the eigenvalues are λ1 = 1 and the roots of λ² − 4λ + 5. By the quadratic formula, we get that λ2 = 2 + i and λ3 = 2 − i. For λ1 = 1 we get
A − I = [1   1  −1]   [1  0   0]
        [2   0   0] ∼ [0  1  −1]
        [3  −1   1]   [0  0   0]
Hence, a basis for Eλ1 is {[0; 1; 1]}. For λ2 = 2 + i we get
A − λ2I = [−i    1    −1]   [1  0  −(1+2i)/5]
          [ 2  −1−i    0] ∼ [0  1  −(3+i)/5 ]
          [ 3   −1    −i]   [0  0      0    ]
Hence, a basis for Eλ2 is {[1+2i; 3+i; 5]}. For λ3 = 2 − i we get
A − λ3I = [ i    1    −1]   [1  0  −(1−2i)/5]
          [ 2  −1+i    0] ∼ [0  1  −(3−i)/5 ]
          [ 3   −1     i]   [0  0      0    ]
Hence, a basis for Eλ3 is {[1−2i; 3−i; 5]}.
Therefore, A is diagonalized by P = [0  1+2i  1−2i; 1  3+i  3−i; 1  5  5] to D = diag(1, 2+i, 2−i).
(f) We have
C(λ) = |1+i−λ   1    0 |   |1+i−λ   1      0   |
       |  1    1−λ  −i | = |  0    1−λ  −1−i+λ |
       |  1     0   1−λ|   |  1     0    1−λ   |
     = (1 + i − λ)(1 − λ)² + (−1 − i + λ) = −(λ − (1 + i))[λ² − 2λ + 1 − 1]
     = −(λ − (1 + i))(λ − 2)λ
Hence, a basis for Eλ1 is {[i; 0; 1]}. For λ2 = 2 we get
A − λ2I = [−1+i   1   0]   [1  0   −1  ]
          [  1   −1  −i] ∼ [0  1  −1+i ]
          [  1    0  −1]   [0  0    0  ]
Hence, a basis for Eλ2 is {[1; 1−i; 1]}. For λ3 = 0 we get
A − λ3I = [1+i  1   0]   [1  0    1  ]
          [ 1   1  −i] ∼ [0  1  −1−i ]
          [ 1   0   1]   [0  0    0  ]
Hence, a basis for Eλ3 is {[−1; 1+i; 1]}.
Therefore, A is diagonalized by P = [i  1  −1; 0  1−i  1+i; 1  1  1] to D = diag(1+i, 2, 0).
(g) We have
C(λ) = | 5−λ   −1+i    2i  |   | 1−λ    0    i−iλ |   | 1−λ    0     0  |
       |−2−2i   2−λ   1−i  | = |−2−2i  2−λ   1−i  | = |−2−2i  2−λ  −1+i |
       |  4i   −1−i  −1−λ  |   |  4i   −1−i  −1−λ |   |  4i   −1−i  3−λ |
     = (1 − λ)(λ² − 5λ + 4) = −(λ − 1)²(λ − 4)
Hence, a basis for Eλ1 is {[1−i; 4; 0], [−i; 0; 2]}. For λ2 = 4 we get
A − λ2I = [  1    −1+i   2i ]   [1  0      i   ]
          [−2−2i   −2   1−i ] ∼ [0  1  (1−i)/2 ]
          [  4i   −1−i  −5  ]   [0  0      0   ]
Hence, a basis for Eλ2 is {[−2i; −1+i; 2]}.
Therefore, A is diagonalized by P = [1−i  −i  −2i; 4  0  −1+i; 0  2  2] to D = diag(1, 1, 4).
(h) We have
C(λ) = |−6−3i−λ   −2    −3−2i  |   |  i−λ    −2    −3−2i  |   | i−λ   −2   −3−2i |
       |  10      2−λ     5    | = |   0     2−λ     5    | = |  0    2−λ    5   |
       | 8+6i      3    4+4i−λ |   |−2i+2λ    3    4+4i−λ |   |  0    −1   −2−λ  |
     = (i − λ)(λ² + 1) = −(λ − i)²(λ + i)
Hence, a basis for Eλ1 is {[−1; 0; 2]}.
Observe that if sin θ = 0, then Rθ is diagonal, so we could just take P = I. So, we now assume that sin θ ≠ 0.
For λ1 = cos θ + i sin θ, we get
Rθ − λ1I = [−i sin θ   −sin θ ] ∼ [1  −i]
           [  sin θ   −i sin θ]   [0   0]
We get that an eigenvector of λ1 is [i; 1].
Similarly, for λ2 = cos θ − i sin θ, we get
Rθ − λ2I = [ i sin θ   −sin θ ] ∼ [1  i]
           [  sin θ    i sin θ]   [0  0]
We get that an eigenvector of λ2 is [−i; 1].
Thus, P = [i  −i; 1  1] and D = [cos θ + i sin θ  0; 0  cos θ − i sin θ].
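The general diagonalization of Rθ above can be verified numerically for a sample angle (an added check, not part of the original solution):

```python
import numpy as np

theta = 0.7
R = np.array([[np.cos(theta), -np.sin(theta)],
              [np.sin(theta),  np.cos(theta)]])
P = np.array([[1j, -1j], [1, 1]])
D = np.diag([np.exp(1j*theta), np.exp(-1j*theta)])   # cos(theta) +/- i sin(theta)

assert np.allclose(np.linalg.inv(P) @ R @ P, D)
```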
Section 11.4 Solutions 255
" #
1 0
(b) We have R0 = which is already diagonal, so R0 is diagonalized by P = I as we stated in
0 1
(a).
" √ √ # " #
1/ √2 −1/√ 2 i −i
Rπ/4 = . From our work in a) Rπ/4 is diagonalized by P = to D =
1/ 2 1/ 2 1 1
" √ #
(1 + i)/ 2 0 √
. Indeed, we find that
0 (1 − i)/ 2
" #" √ √ #" # " √ #
−1 1 −i 1 1/ 2 −1/ 2 i −i (1 + i)/ 2 0 √
P Rπ/4 P = √ √ =
2 i 1 1/ 2 1/ 2 1 1 0 (1 − i)/ 2
11.3.5 If A is a 3 × 3 real matrix with a non-real eigenvalue λ, then by Theorem 11.3.1, \overline{λ} is also an eigenvalue of A. Also, by Corollary 11.3.2, we have that A must have a real eigenvalue. Therefore, A has 3 distinct eigenvalues and hence is diagonalizable.
(g) ⟨~w, ~z⟩ = ~w · \overline{~z} = \overline{~z · \overline{~w}} = \overline{⟨~z, ~w⟩} = 5 − 5i
(h) ⟨~u + ~z, 2i~w − i~z⟩ = ⟨[3; 0; 2−2i], [−2; 2+2i; −3−i]⟩ = 3(−2) + 0(2 − 2i) + (2 − 2i)(−3 + i) = −10 + 8i
(b) We have
⟨B, A⟩ = i(1) + 2(−2i) + (1 − i)(i) + 3(1 − i) = 4 − 5i
(c) We have
‖A‖ = √⟨A, A⟩ = √(1(1) + 2i(−2i) + (−i)(i) + (1 + i)(1 − i)) = √8
(d) We have
‖B‖ = √⟨B, B⟩ = √(i(−i) + 2(2) + (1 − i)(1 + i) + 3(3)) = 4
11.4.3 (a) Observe that
⟨[i/√2; 1/√2], [1/√2; −i/√2]⟩ = (i/√2)(1/√2) + (1/√2)(i/√2) = i/2 + i/2 = i ≠ 0
Hence, the columns are not orthogonal, so the matrix is not unitary.
(b) Let
U = [(1+i)/2   (1−i)/√6   (1−i)/√12]
    [  1/2        0        3i/√12  ]
    [  1/2      2i/√6      −i/√12  ]
Then we find that UU* = I, so U is unitary.
(c) The matrix is a real orthogonal matrix, and hence is unitary.
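The unitarity claim in part (b) can be checked numerically; the entries of U used below are as reconstructed from the garbled display above, so they are an assumption of this check (which is not part of the original solution):

```python
import numpy as np

# U as reconstructed from part (b)
U = np.array([[(1+1j)/2, (1-1j)/np.sqrt(6), (1-1j)/np.sqrt(12)],
              [1/2, 0, 3j/np.sqrt(12)],
              [1/2, 2j/np.sqrt(6), -1j/np.sqrt(12)]])

# a matrix is unitary exactly when U U* = I
assert np.allclose(U @ U.conj().T, np.eye(3))
```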
(d) Observe that
‖[0; 0; (1+i)/2]‖ = √(0 + 0 + 2/4) = √(1/2) ≠ 1
so the columns are not all unit vectors, and hence the matrix is not unitary.
11.4.4 Since the columns ~u1, . . . , ~un of U are unit vectors, we have ~uᵢ*~uᵢ = ‖~uᵢ‖² = 1, and
~uᵢ* ~uⱼ = \overline{~uᵢ}ᵀ ~uⱼ = ⟨~uᵢ, ~uⱼ⟩ = 0
whenever i ≠ j. Thus, U*U = I as required.
11.4.5 (a) We have
(c) Note that Span{~z, ~w} = Span{~z, ~v}, where ~v is the vector from part (b), and {~z, ~v} is orthogonal over C, so we have
projS(~u) = (⟨~u, ~z⟩ / ‖~z‖²) ~z + (⟨~u, ~v⟩ / ‖~v‖²) ~v
11.4.6 Denote the vectors in the spanning set by A1, A2, and A3 respectively.
Let B1 = A1. Next, we get
A2 − (⟨A2, B1⟩ / ‖B1‖²) B1 = [i  0] − (i/2) [1  i] = [i/2  1/2]
                             [0  i]         [0  0]   [ 0    i ]
So, we let B2 = [i   1]
                [0  2i]
Next, we get
B3 = A3 − (⟨A3, B1⟩ / ‖B1‖²) B1 − (⟨A3, B2⟩ / ‖B2‖²) B2 = [0  2] − (−2i/2) [1  i] − (4/6) [i   1] = [i/3   1/3]
                                                          [0  i]           [0  0]        [0  2i]   [ 0  −i/3]
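This Gram–Schmidt computation can be replayed numerically. The inputs A1, A2, A3 below are as reconstructed from the garbled displays above, so they are an assumption of this sketch (not part of the original solution); note the code works with the unscaled B2, which spans the same direction as the rescaled B2 used in the text:

```python
import numpy as np

def ip(A, B):
    # matrix inner product <A, B> = sum of A_ij * conj(B_ij)
    return np.sum(A * np.conj(B))

A1 = np.array([[1, 1j], [0, 0]])
A2 = np.array([[1j, 0], [0, 1j]])
A3 = np.array([[0, 2], [0, 1j]])

B1 = A1
B2 = A2 - (ip(A2, B1) / ip(B1, B1)) * B1
B3 = A3 - (ip(A3, B1) / ip(B1, B1)) * B1 - (ip(A3, B2) / ip(B2, B2)) * B2

# the resulting set is orthogonal
assert np.isclose(ip(B2, B1), 0)
assert np.isclose(ip(B3, B1), 0) and np.isclose(ip(B3, B2), 0)
# 2*B2 and B3 match the matrices obtained in the text
assert np.allclose(2 * B2, np.array([[1j, 1], [0, 2j]]))
assert np.allclose(B3, np.array([[1j/3, 1/3], [0, -1j/3]]))
```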
(b) We have
(αA)* = \overline{(αA)}ᵀ = (\overline{α}\,\overline{A})ᵀ = \overline{α}\,\overline{A}ᵀ = \overline{α} A*
(c) We have
(AB)* = \overline{(AB)}ᵀ = (\overline{A}\,\overline{B})ᵀ = \overline{B}ᵀ\,\overline{A}ᵀ = B* A*
(d) For any ~z, ~w ∈ Cⁿ we have
⟨A~z, ~w⟩ = (A~z) · \overline{~w} = (A~z)ᵀ\overline{~w} = ~zᵀAᵀ\overline{~w} = ~zᵀ\overline{\overline{A}ᵀ~w} = ~zᵀ\overline{A*~w} = ~z · \overline{A*~w} = ⟨~z, A*~w⟩
11.4.8 (a) For any ~z, ~w ∈ Cⁿ we have
⟨U~z, U~w⟩ = (U~z)ᵀ\overline{U~w} = ~zᵀUᵀ\overline{U}\,\overline{~w} = ~zᵀ\overline{U*U}\,\overline{~w} = ~zᵀ\overline{~w} = ⟨~z, ~w⟩
(b) Suppose λ is an eigenvalue of U with eigenvector ~v. We get ‖U~v‖² = ⟨U~v, U~v⟩ = ⟨~v, ~v⟩ = ‖~v‖² by part (a). But we also have ‖U~v‖² = ‖λ~v‖² = ⟨λ~v, λ~v⟩ = λ\overline{λ}⟨~v, ~v⟩ = |λ|²‖~v‖², so since ‖~v‖ ≠ 0, we must have |λ| = 1.
" #
i 0
(c) The matrix U = is unitary since U ∗ U = I and the only eigenvalue is i with multiplicity 2.
0 i
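The result of part (b), that every eigenvalue of a unitary matrix has modulus 1, is easy to illustrate numerically (an added sketch, not part of the original solution):

```python
import numpy as np

theta = 1.1
U = np.array([[np.cos(theta), -np.sin(theta)],
              [np.sin(theta),  np.cos(theta)]])   # real orthogonal, hence unitary

# all eigenvalues lie on the unit circle
assert np.allclose(np.abs(np.linalg.eigvals(U)), 1)
```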
258 Section 11.5 Solutions
‖~u + ~v‖² = ⟨~u + ~v, ~u + ~v⟩ = ⟨~u, ~u + ~v⟩ + ⟨~v, ~u + ~v⟩ = ⟨~u, ~u⟩ + ⟨~u, ~v⟩ + ⟨~v, ~u⟩ + ⟨~v, ~v⟩
           = ‖~u‖² + 0 + 0 + ‖~v‖² = ‖~u‖² + ‖~v‖²
The converse is not true. One counterexample: consider V = C with its standard inner product and let ~u = 1 + i and ~v = 1 − i. Then ‖~u + ~v‖² = ‖2‖² = 4 and ‖~u‖² + ‖~v‖² = 2 + 2 = 4, but ⟨~u, ~v⟩ = (1 + i)(1 + i) = 2i ≠ 0.
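The counterexample can be checked in a couple of lines (an added sketch, not part of the original solution):

```python
import math

u, v = 1+1j, 1-1j
assert abs(u + v)**2 == 4                        # |u+v|^2 = 4
assert math.isclose(abs(u)**2 + abs(v)**2, 4)    # |u|^2 + |v|^2 = 4 as well
assert u * v.conjugate() == 2j                   # but <u, v> = 2i, not 0
```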
Thus, T is also skew-Hermitian. Since T is upper triangular, we have that T* is lower triangular. Consequently, T is both upper and lower triangular and hence diagonal.
So, U ∗ AU = T is diagonal as required.
(b) If A is skew-Hermitian, then A* = −A and hence
λ⟨~z, ~z⟩ = ⟨λ~z, ~z⟩ = ⟨A~z, ~z⟩ = ⟨~z, A*~z⟩ = ⟨~z, −A~z⟩ = ⟨~z, −λ~z⟩ = −\overline{λ}⟨~z, ~z⟩
1 = ‖~t1‖ = |t11|
Since t11 ≠ 0, we must have t12 = 0. Then, we get 1 = ‖~t2‖ = |t22|. Next, we have
Therefore, t13 = t23 = 0. Continuing in this way, we get that T is diagonal and all of the diagonal entries satisfy |tii| = 1.
So, U ∗ AU = T is diagonal as required.
(b) If A is unitary, then A∗ = A−1 and so
AA∗ = I = A∗ A
(b) We have C(λ) = λ² − 5λ + 4 = (λ − 4)(λ − 1). Thus, the eigenvalues are λ1 = 4 and λ2 = 1. For λ1 = 4 we get
A − λ1I = [ −2   1+i] ⇒ ~z1 = [1+i]
          [1−i   −1 ]         [ 2 ]
For λ2 = 1 we get
A − λ2I = [ 1   1+i] ⇒ ~z2 = [−1−i]
          [1−i    2]         [  1 ]
Hence we have D = [4  0; 0  1] and
U = [(1+i)/√6  (−1−i)/√3]
    [  2/√6       1/√3  ]
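Reading A off from the display of A − λ1I above, the unitary diagonalization in part (b) can be confirmed numerically (an added check, not part of the original solution):

```python
import numpy as np

A = np.array([[2, 1+1j], [1-1j, 3]])   # Hermitian matrix for part (b)
U = np.array([[(1+1j)/np.sqrt(6), (-1-1j)/np.sqrt(3)],
              [2/np.sqrt(6), 1/np.sqrt(3)]])

assert np.allclose(U.conj().T @ U, np.eye(2))             # U is unitary
assert np.allclose(U.conj().T @ A @ U, np.diag([4, 1]))   # U*AU = D
```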
(c) We have C(λ) = λ² − 8λ + 12 = (λ − 2)(λ − 6). Thus, the eigenvalues are λ1 = 2 and λ2 = 6. For λ1 = 2 we get
A − λ1I = [  3    √2−i] ⇒ ~z1 = [−√2+i]
          [√2+i     1 ]         [  3  ]
For λ2 = 6 we get
A − λ2I = [ −1    √2−i] ⇒ ~z2 = [√2−i]
          [√2+i    −3 ]         [  1 ]
Hence we have D = [2  0; 0  6] and
U = [(−√2+i)/√12  (√2−i)/2]
    [   3/√12        1/2  ]
(d) We have
C(λ) = |1−λ   0    1 |
       | 1   1−λ   0 | = (1 − λ)³ + 1 = −λ³ + 3λ² − 3λ + 2
       | 0    1   1−λ|
Since A is a 3 × 3 real matrix, we know that A must have at least one real eigenvalue. We try the Rational Roots Theorem to see if the real root is rational. The possible rational roots are ±1 and ±2. We find that λ1 = 2 is a root. Then, by the Factor Theorem and polynomial division, we get
Hence, taking U = [~z1 ~z2 ~z3] gives U*AU = diag(2, 1/2 + (√3/2)i, 1/2 − (√3/2)i).
(e) We have C(λ) = −(λ − 2)(λ² − λ − 2) = −(λ − 2)²(λ + 1), so we get eigenvalues λ = 2 and λ = −1. For λ = 2 we get
A − λI = [−1   0  1+i]           [0]         [1+i]
         [ 0   0   0 ] ⇒ ~z1 = [1] , ~z2 = [ 0 ]
         [1−i  0  −2 ]           [0]         [ 1 ]
For λ = −1 we get
A − λI = [ 2   0  1+i]          [1+i]
         [ 0   3   0 ] ⇒ ~z3 = [ 0 ]
         [1−i  0   1 ]          [−2 ]
B∗ B = (A∗ A−1 )∗ (A∗ A−1 ) = (A−1 )∗ AA∗ A−1 = (A−1 )∗ A∗ AA−1 = (AA−1 )∗ (AA−1 ) = I
11.5.7 If AB is Hermitian, then AB = (AB)* = B*A* = BA, since A and B are Hermitian.
A³ + 2A − 2I = O
A³ + 2A = 2I
A((1/2)A² + I) = I
Thus, A⁻¹ = (1/2)A² + I.
11.6.2 By the Cayley-Hamilton Theorem, we have that
A³ − 147A + 686I = O
A³ = 147A − 686I
   = [ 441   −1176  −294]   [686   0    0 ]   [−245   −1176  −294]
     [−1176  −1323  −588] − [ 0   686   0 ] = [−1176  −2009  −588]
     [−294   −588    882]   [ 0    0   686]   [−294   −588    196]
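This Cayley-Hamilton computation can be verified numerically. The matrix A below is reconstructed by dividing the displayed 147A by 147, so it is an assumption of this check (which is not part of the original solution):

```python
import numpy as np

# A reconstructed from the display of 147A above
A = np.array([[3, -8, -2], [-8, -9, -4], [-2, -4, 6]])

lhs = np.linalg.matrix_power(A, 3)
rhs = 147 * A - 686 * np.eye(3)
assert np.allclose(lhs, rhs)                      # Cayley-Hamilton: A^3 = 147A - 686I
assert np.allclose(lhs, np.array([[-245, -1176, -294],
                                  [-1176, -2009, -588],
                                  [-294, -588, 196]]))
```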
11.6.3 Observe that −λ³ + 3λ − 2 = −(λ + 2)(λ − 1)². Hence, one choice is to take
A = [1  0   0]
    [0  1   0]
    [0  0  −2]