CHAPTER 2
Systems of Linear Equations
EXERCISE SET 2.1
1. (a) and {c) are linear. (b) is not linear due to the x1x3 term. {d) is not' linear due to the x}
2
term.
2. (a) a.nd (d) are linear. (b) is not linear because of the xyz t erm. (c) is not linear because of the
3/5
x
1
term.
3. (a) is linear. (b) is linear if k # 0. (c) is linear only if k = 1.
4. (a) is linear. (b) is linear jf m =f: 0. (c) is linear only if m = 1.
5. (a), (d), and (c) are solutions; these sets of values satisfy all three equations. (b) and (c) are not
solutions.
6. (b) , (d), and (e) are solutions; these sets of values satisfy all three equations. (a) and (c) are not
solutions.
7. The tluee lines intersect at the point {1, 0) (see figure). The values x = 1, y = 0 satisfy all three
equations and this is the unique solution of the system.
3.x3y"' 3
:c
The augmented matrix of the system is
l
r231 ~ ~ ~ ] ·
3 ! 3
Add 2 times row 1 to row 2 and add 3 times row 1 to row 3:
[ ~ =! ! ~ ]
Multiply row 2 by  j and add 9 times the new row 2 t.o row 3:
[ ~ ~ ~ ]
From the last row we s ~ that t he system is redundant (reduces to only two equations). From the
second row we see that y = 0 and, from back substitution, it follows t hat x = 1  2y = 1.
22
Exercise Set 2.1
8. The three lines do not intersect in a. common point (see figure). This system has no solution.
)'
The augmented matrix of the system is
and the reduced row echelon form of this matrix (details omitted) is:
The last row corresponds to the equation 0 = 1, so the system is jnconsistent.
23
9. (a) The solution set of the equation 7x  5y = 3 can be described parametrically by (for example)
solving the equation for x in terms of y and then making y into a parameter. This leads to
x ==
3
'*:,5t, y = t , where oo < t < oo.
(b ) The solution set of 3x
1
 5xz + 4x3 = 7 can be described by solving the equation for x1 in
terms of xz and x
3
, then making Xz and x
3
into parameters. This leads to x
1
=
x:z = s, x3 = t , where  oo < s, t < oo.
(c) The solution set of  8x
1
+ 2x
2
 5x
3
+ 6x
4
= 1 can be described by (for example) solving
the equation for x
2
in t.erms of x
1
, and x
4
, then making x
1
, x3, and x
4
into parameters.
This leads to :t
1
= r, x 2 = l±Brt
5
s 
6
t, X3 = s, X4 = t, where <Xl < r, s, t < oo.
(d) The solution set of 3v Bw + 2x y + 4z = 0 can be described by (for example) solving
the equation for y in terms of the other variables, and then making those variables into
parameters. This leads to v = t1, w = t2, x = t3, y = 3tl  8t2 + Zt3 + 4t4, z = t4, where
 oo < t11 t2 , t3, t4 < oo.
10. (a) x = 2 lOt, y = t, where  (X)< t < OO.·
(b) x1 = 3 3s + l2t, x
2
= s, x
3
= t, where oo < s, t < oo.
(c) Xt = r , x2 = s, X3 = t, X4 = 20 4r  2s 3t, where  oo < r, .s, t. < oo.
(d) v = t1, w = t2, x = t1  t2 + 5t3  7t4, y = t3, z = t4, where oo < h, t2, t3, t4 < oo.
11. (a) If the solution set is described by the equations x = 5 + 2t, y = t , then on replacing t by y in
t he first equation we have x = 5 + 2y or x 2y = 5. This is a linear equation with the given
solution set.
(b) The solution set ca.n also be described by solving the equation for y in terms of x. and then
making x jnto a parameter. This leads to the equations x = t, y = !t
24 Chapter2
12. (a) If x1 = 3 + t and x2 = 2t, ther. t = Xt + 3 and so x2 = 2x1 + 6 or 2xl + x2 = 6._ This is
a. linear equation with the given solution set.
(b) The solution set can also be described by solving the equation for X2 in terms of x
1
, and then
making x
1
into a parameter. This leads to the equations x
1
= t, x
2
= 2t + 6.
13. We can find parametric equations fN the line of intersection by (for example) solving the given
equations for x and y in terms of z, then making z into a parameter:
x+y= 3+z} =}
2x + y = 4 3.z
2x+y = 4 3z}
=} x = l4z
x y = 3 z
From the above it follows that y = 3 + z  x = 3 + x  1 + 4z = 2 + Sz, and this leads to the
parametric equations
X = 1  4t, y = 2 + 5t, Z = t
for the line of intersection. The corresponding vector equation is
(x,y,z) = (1,2,0) + t(4,5, 1)
14. We can find parametric equations for the line of intersection by (for example) solving the given
equations for x and y in terms of z, then making z into a parameter:
x + 2y = 1 3z}
3x 2y = 2 z
:::;. 4x = ( 1  3z) + ( 2  z) = 3  4z ::::}
34z
X=
4
From the above it follows hat y =
1
t,
8
z _ This leads to the parametric equations
X :::: i  t, y = ~  t, Z = t
and the corresponding vector equation is
(x, y, z) = ( ~ , l· 0) + t( 1, 1, 1}
15. If k :fi 6, then the equations x y = 3, 2x  2y = k represent nonintersecting parallel lines and so
the system of equations has no solution. If k = 6, the two lines coincide and so there are infinitely
many solutions: x = 3 + t, y = t, where oo < t < oo.
16. No solutions if k:/: 3; infinitely many solutions if k = 3.
[
3 2 ! 1]
17. The augmented matrix of the system is 4 5 l 3 .
18. The augmented matrix is [!
0 2
1 4
1 1
7 3: 2
Exercise Set 2.1
The augmented matrix o! the system is [ ~
2 0  1 1
~ ]
19.
3 2 0  1
3 1 7 0
The augmented matrix is [ ~
0
0 : 11
20.
1 0 12 .
0
1 ! 3 J
21. A system of equations corresponding to the given augmented matrix is: .
2xt = 0
3xl 4x2 = 0
X2 = 1
22. A system of equations corresponding to the given augmented matrix is:
3x
1
 2x
3
= 5
7xl + x2 + 4x3 =  3
 2x2 + X3 = 7
23. A system of equations corresponding to the given augmented ma.tri.x is:
?x1 + 2x2 + x3  3xt = 5
x 1 + 2x2 + 4x3 = 1
24. X}= 7, Xz =  2, X3 = 3, :J:4 = 4
25
25. (a) B is obtained from A by adding 2 times the first row to the second row. A is obtained from
B by adding 2 times the first row to the second row.
(b) B is obtained from A by multiplying the first row by!· A is obtained from B by multiplying
the first row by 2.
26, (a) B is obtained from A by interchanging the first and third rows. A is obtained from B by
interchanging the first and third rows.
(b) B is obtained from A by multiplying the third row by 5. A is obtained from B by multiplying
the third row by }.
27. 2x + 3y + z = 7
2x + y + 3z = 9
4x + 2y + 5z = 16
29. X J y + Z = 12
2x + y + 2z = 5
X  z =  1
31. (a) Jc1 + c2 + 2c3  C4 = 5
cz + 3c3 + 2c4 = 6
c1 + c2 + 5c4 = 5
2ct + c2 + 2c3 """ 5
28. 2x + 3y + 12z = 4
8x + 9y + 6z = 8
6x + 6y + 12z = 7
30. X + y + Z = 3
y+z=lO
y + z = 6
(b ) 3cl + cz + 2c3  C4 = 8
Cz + 3c3 + 2c4 = 3
 c1 + c2 + 5c4 = 2
2c1 + cz + 2c3 = 6
26
32.
(c)
(a)
3cl + c ~ + 2c3  c4:;;;; 4
c2 + 3c3 + 2c, := 4
cl + Cz + Sc4 = 6
2ct + c2 + 2c3 = 2
Ct + C2 + 2c3
=
2
2c
1
 2c3 + 5C4 = 2
 4cl + 2cz c
3
+ 4c4 = 8
+ 2Cl +
CJ
C4 =
0
5ct 
Cz + 3c3 + C4 =
12
(c) c1 + cz + 2c3 = 4
2c1  2c3 + Sc4 =  4
 4cl + 2c2  c3 + 4c• = 2
+ 2c2 + c3  C4 = 0
5c1  c2 + 3c3 + c4 = 24
Chapter 2
(b)
Ct + c2 + 2c3 = 5
2Ct  2c3 + 6c4 = 3
4ct + 2cz C3 + 4c4 = 9
+ 2cz +
CJ
c.&=
4
5Ct  Cz + 3c3 + C4 = 11
DISCUSSION AND DISCOVERY
Dl. (a) There is no common intersection point.
(b) There is exactly one common point of intersection.
(c) The three lines coincide.
D2. A consistent system has at least one solution; moreover, it either has exactly one solution or it
has infinitely many solutions.
If the system has exactly one solution, then there are two possibilities. If the three lines are
all distinct but have a. common point of intersECtion, then any one of the three equations can be
discarded without altering the solution set. On the other hand, if two of the lines coincide, then
one of the corresponding equations can be discarded wi t hout altering the solution set.
If the system has infinitely many solutions, then the three lines coincide. In this case any one
(in fact any two) of the equations can be discarded without altering the solution set.
D3. Yes. If B can be obtained from A by multiplying a row by a. nonzero constant, then A can be
obtained from B by multiplying the same row by the reciprocal of that constant. If B can be
obtained from A by interchanging two rows, then A can be obtained from B by interchanging the
same two rows. Finally, if B can be obtained from A by adding a multiple of a row to another
row, then A can be obtained from B by subtracting the same multiple of that row from the other
row.
D4. If k = l = m = 0, then x = y = 0 is a solution of all three equations and so the system is consistent.
If the system has exactly one solution then the three lines intersect at the origin.
05. The parabolay = ax
2
+ bx + c will pass through the points {1, 1), (2, 4), and (1, 1) if and only if
a+ b+c=l
4a + 2b + c = 4
a  b+c=I
Since there is a unique parabola passing through any three noncollinear points, one would expect
this system to ba.ve exactly one· solution.
Discussion and Discovery 27
D6. The parabola y == ax
2
+ bx + c passes through the points (x1, Yt), (xz, Y2), a.nd (x3, Ys) if and only
if
+ + c = Yl
+ Zbx2 + c = Y2
 bxs + c = Y3
i.e. if and only if a, b, and c satisfy the linear sy5tem whose augmented matrix is
1 Y1]
1 Y2
1 Y3
D7. To say that the equations have the same solution set is the same thing as to say that they represent
the same line. From the first equation the x
1
intercept of the line is x1 = c, and from the second
equation the x
1
intercept is x
1
= d; thus c = d. If the line is vertical then k = l = 0. If the line
is not vertical then from the first equation the slope is m = t, and from the second equation the
slope ism= f; thus k = l. In summary, we conclude that c = d and k = .l; thus the two equations
are identicaL
08. (a) True. If there are n 2 columns, then the first n 1 columns correspond to the coefficients
of the variables that appear in the equations ar1d the last column corresponds to the constants
that appear on the righthand side of the equal sign.
{b) False. Referring to Example 6: The of linear systems appearing in the lefthand
column all have the same solution set, but the corresponding augmented matrices appearing
in the righthand column are all different . ·
(c) False. Multiplying a row of the augmented matrix by zero corresponds to multiplying·· both
sides of the corresponciing equation by zero. But this is equivalent to discarding one of the
equations!
(d) True. If the system is consistent, one can solve for two of the variables in terms of the third
or (if further redundancy is present) for one of the variables in terms of the other two. ln any
case, there is at lca.c:;t one "free" variable that ca.n be made into a parameter in describing
the solution set of the system. Thus if the system is consistent, it will have infinitely many
solutions.
D9. (a) True. A plane in 3space corresponds to a linear equation in three variables. Thus a set of
four planes corresponds to a system of four linear equations in three variables. If there is
enough redundancy in the equations so that the system reduces to a system of two indepen
dent equations, then the solution set will be a line. For example, four vertical planes each
containing the zaxis and intersecting the xyplane in four distinct lines.
( b) False. Interchanging the first two columns corresponds to interchanging the coefficients of
the first two variables. This results in a different system with a differeut solution set. [It
is oka.y to interchange rows since this corresponds to interchanging equations and therefore
does not alter the solution set.]
(c) False. If there Is enough redundancy so that the system reduces to a system of only two
(or fewer) equations, and if these equat ions are consistent, then the original system will be
consistent.
(d) True. Such a system will always have the trivial solution x
1
= x2 = · · · = Xn = 0.
28 Chapter2
EXERCISE SET 2.2
1. The matrice1' (a), (c), and (d) are in reduced row echelon form. The matrix (b) does not satisfy
property 4 of the definition, and the matrix (e) does not satisfy property 2.
2. Tbe mo.trices (c), {d), an(l {e) are in reduced row echelon form. The matrix (a) does not satisfy
property 3 of the definition, and the matrix {b} does not satisfy property 4.
3. The matrices (a) and (b) are in row echelon form. The matrix (c) does not satisfy property 1 or
property 3 of the definition.
4. The matrices (a) and {b) are in row echelon form. The matrix (c) does not satisfy property 2.
5. The matrices (a) and (c) are in reduced row echelon form. The matrix (b) does not satisfy property
3 and thus ls not in row echelon form or reduced row echelon form.
6. The matrix (c) is in reduced row echelon form. The matrix (a) is in row echelon frem but does
not satisfy property 4. The matrix (b) does not satisfy property 3 and thus is not in row echelon
form or reduced row echelon form.
7. The pos:;ible 2 by 2 reduced row echelon forms are [8 g). [8 A]. [6 and [A o] with any real
number substituted for the *.
8. The possible 3 by 3 reduced row echelon forms are
. !] . !
and
:]
0 0 0 0 0 0 0 0 0 . o 0 0
with any real numbers substituted for the *'s.
9. The given matrix corresponds to the system
X1 = 3
Xz 0
X3 = 7
which clearly has the unique solution x
1
= 3, x
2
= 0, X3 = 7.
10. The given matrix corresponds to the system
x1 + 2x2 + Zx4 =  1
X3 + 3x4 = 4
Solving these equations for the leading variables (x
1
and x
3
) in terms of the free variables (x2 and
x4) results in x
1
= 1 2x
2
 2x
4
and x
3
= 4 3x
4
. Thus, by assigning arbitrary values to x2
and x4, the solution set of the system can be represented by the parametric equa.tious
Xt = 1 28 2t, X2 = S, X3 = 4 3t, X4 = t
where  oo < s, t < S>O· The corresponding vector form is
x2, X3, X4) = (1, 0, 4, 0} + s( 2, 1, 0, G)+ t( 2, 0, 3, 1)
Exercl&e Set 2.2
11. The given matrix corresponds to the system
+ 3xs = 2
+ 4xs = 7
X4 + 5xs = 8
29
where the equation corresponding to the zero row has been omitted. Solving these equations
for the leading variables (x1, xs, and x4) in terms of the free variables (x2 and xs) results in
Xt = 2 + 6x2 3xs, X3 = 7 4xs and X4 = 8 5xs. Thus, assigning arbitrary values to x
2
and
xs, the solution set can be represented by the parametric equations
X] = 2 + 6s 3t, X2 = S, X3 = 7 4t, X4 = 8 5t, X5 = t
where oo < s, t < oo. The correspondjng vector form is
12. The given matrix corresponds to the system
x1  3x
2
= 0
X3 = 0
0=1
which is clearly inconsistent since the last equation is not satisfied for any values of x
1
, x
2
, and x
3
.
13. The given matrix corresponds to the system
 7x4 = 8
+ 3x4 = 2
X3 + X4 = 5
Solving these equations for the leading variables in terms of the free variable results in x
1
== 8 +· 7 x
4
,
x2 =o 2 3x1, and x3 = 5  x
4
. Thus, making x
4
into a parameter, the solution set of the system
can be represented by the parametric equations
X1 = 8 + 7t, Xz == 2 3t, X3 = 5 t, X4 = t
where oo < t < oo. The corresponding vector form is
14. The given matrix corresponds to the single equation Xt + 2xz + 2x.t  x
5
= 3 in which x3 does n0t
appear. Solving for x1 in terms of the other variables results in x
1
= 3 2xz 2x4 + X5. Thus,
making x2, x3, x4, and Xs into parameters, the solution set of the equation is given by
Xt ::::: 3  2s  2u + V, X2 = S, X3 = t, X4 = u, X5 = V
where oo < s, t, u, v < :x>. The corresponding (column) vector form is
XI 3 2s 2u+v 3 2 0
r  ~
+. r ~
X2 s 0 1 0
X3
=
0 +s 0 +t 1
+u l !
X4 u 0 0 0
l ~
xs v 0 0 0
3C Chapter2
15. The system of equations corresponding to the given matrix is
x1  3xz + 4x3 = 7
x2 + 2x3 = 2
X3 = 5
Starting with the last equation and W'orking up, it follows t hat Z3 == 5, Xz = 2  2x3 = 2  10 =  8,
and x1 = 7 + 3x
2
 4x3 = 72420 = 37.
Alternate solution via Gauss Jordan (starting from the original matrix and reducing further):
[
1 3
0 1
0 0
4
2
1
Add  2 times row 3 to row 2. Add 4 times row 3 to row 1.
Add 3 times row 2 to row 1.
0
1
0
0 13]
0 8
l 5
0 37]
0  8
1 5
From this we conclude (as before) that x
1
= 37, x2 =  8, and x3 = 5.
16. The system of equations corresponding to the given matrix is
x
1
+ 8x3  S:c4 = 6
x2 + 4x3  9x.: = 3
X3 + X 4 = 2
Starting with the last equation and working up, we have .1:3 = 2  x
4
, x2 = 3  4x3 + 9x4 =
3 4(2 .r4) + 9x4 = 5 + 13x
4
, and x
1
= 6 8xa + 5x4 = 6 8(2  x
4
) + 5x4 = .;::: 10 + 13x4·
Finally, assigning an arbitrary value to x
4
, the solution set can be described by the paramet
ric equations x
1
= 10 + 13t, x
2
=  5 + 13t, x
3
= 2 t, x
4
= t.
Alternate solution via Gauss Jordan (starting from the original matrix and reducing further):
[ ~
0 8 5
~ ]
1 4  9
0 1 l
Add  4 times row 3 to row 2. Add 8 times row 3 to row 1.
[ ~
0 0 13
10]
1 0 13
5
0 1 1 2
From this we conclude (as before) that x
1
= 10 + 13t, x
2
~ 5 + 13t, x
3
= 2  t , x4 = t.
Exercise Set 2.2 31
17. The corresponding system of equations is
x1 + 7xz  2x3  8xs =  3
X3 + x4 + 6xs = 5
X4 + 3xs ,... 9
St<Uting with the last equation aml working up, it follows that x" = 9 3xs, X3 = 5  :t4  6xs =
5 (9  3xs} 6xs = 4 3xs, and Xt =  3 1x2 + 2x3 + 8xs =  3 7x'l + 2( 4 3xs) + 8xs =
11  ?x2 + 2xs. Fina.lly, assigning arbitrary 'Values to xz and :ts, the solut ion set can be described
by
Xt = 11 7s + 2t, Xz = S, X3 =  4 3t, X4 = 9  3t, Xs = t
18. The corresponding system
X1  3x
2
+ 7XJ = 1
:tz + 4x3 = 0
0 = 1
is inconsistent since t here are no values of x
1
, x
2
, and x3 which satisfy the third equation.
10. The corrcspouding system is
x
1
+ x2  Sx3 + 2x4 = 1
X2 + 4x3 = 3
X4 = 2
Starting with the last equation, we have x
4
= 2, x
2
= 3  4x3. X 1 = 1  x2 + 3x3  2x4 =
l  (3  4x
3
) + 3x
3
 2{2) =  6 + 7x
3
. Thus, making x
3
inLo a paramet er , the solution set can be
described by the equat ions
Xl =  6 + 7t , X2 = 3  4t, = t , X
4
= 2
20. The correspondi.ng system is
x1 + 5x3 + 3x4 = 2
x2  2x3 + 4x4 =  7
X3 + X4 = 3
Thus x
4
is a free. and, setting x
4
= t, we have :r:, = 3  t , :r.2 =  7 + 2(3  t)  4t =
 l  6l, a.nd x
1
= 2  5(3  t) 3t =  13 + 2t.
21. Star ting wi th the first equation and working down, we have x1 = 2, xz = Hs  x1) = S(S 2) = l,
and X3 = l(12  J:r1  2:r2) = i (12 6 2} = !
4  2x
1
4 + 2
22. :r1=  l , x2 =
3
= 
3
=2,x3=5 x
1
 4x2=5+18 =  2
23. The augmented matrix of the system is
Add row 1 to row 2. Add 3 times row 1 to row 3.
[
1
1
10
32
Multiply row 2 by  1. Add 10 t imes t he new row 2 to row 3.
Multiply row 3 by  f
2
•
[i
1 2
1  5
0  52
8]
9
114
0 1 2
Add 5 t imes row 3 to row 2. Add 2 times row 3 to row 1.
1 0
!]
1 0
0 1
Add  1 times row 2 to row 1.
0 0
!]
1 0
0 1
Thus the solution is x
1
= 3, x 2 = 1, x3 = 2.
Chapter 2
24. The augmented matrix of the system is
H
2
5
!
Multiply row 1 by Add 2 t imes the new row 1 to row 2. Add 8 times the new row 1 to row 3.
I Q 7 4 l
fl 1 1
lo 7  4 1
Multiply row 2 by Add 7 t imes t he new row 2 to row 3.
l 1
!]
1
4
7
0 0
Add  1 times row 2 to row 1.
0
3
!]
7
1
4
1
0 0
Finally, assigning a n arbitrary value t o the free variable X3, the solution set is represented by the
parametric equations
Exercise Set 2.2
25. The augmented matrix of the system is
3
1 2 1 1]
1 2 2 2
2 4 1 1
0 0 3 3
Add 2 times row 1 to row 2. Add row 1 to row 3. Add 3 times row 1 to row 4.
[
1 1 2 1 1]
0 3 6 0 0
0 1 2 0 0
0 3 6 0 0
Multiply row 2 by Add the new row 2 to row 3. Add 3 times the new row 2 to row 4.
1 1 2 1
0 1 2 0
0 0 0 0
0 0 0 0
Add row 2 to row 1.
0 0
1 1]
1 2 0 0
0 0 0 0
0 0 0 0
33
Thus, setting z = s and w ., t, the solution set of the system is represented by the parametric
equations
x = 1 + t, y = 2s, z = s, w = t
26. The augmented matrix of the system is
6 3 5
Interchange rows 1 and 2. Multiply the new row l by j. Add 6 times the new row 1 to row 3.
[
1 2 1
0 2 3 .2
0 6 9 9
Multiply row 2 by  Add 6 times the new row 2 to row 3.
2 1
1 1
0 0 3
It is now clear from the last row that the system is inconsistent.
34 Chapter 2
27. The augmented matrix of the system is
28.
Multiply row 3 by 2, then add 1 times row l t o row 2 and  3 times row 1 to the new row 3.
The last two rows correspond t o the (incompatible) equations 4 x2 = 3 and 13x2 = 8; thus the
system is inconsistent.
The augmented matrix of the system is
2 1
3 2
1 3 11
4 2 30
and the reduced row echelon form of this matrix is
Thus the system is inconsistent.
29. As an intermediate step in Exercise 2:i, the augmented matrix of the system was reduced to
Starting with the last row and working up, it follows that x3 = 2, x2 = 9 + Sx3 = 9 + 10 = 1,
and x1 = 8  x1  2x3 = 8  J  4 = 3.
30. As an intermediate step in 24, the augmented matrix of the system was reduced to
[
1 1 1
0 1
0 0 0
i]
Starting with the last equation and working up, it follows that x2 =  a.nd :::1 = x2 X3 =
x
3
=
Finally, assigni ng an arbitrary value to x
3
, the solution set can be
described by the parametric equations
E.xerclse Set 2.2
31. As an intermediate step in Exercise 25, the augmented matrix of the syste10 was reduced to
[
1 1 2
0 1  2
0 0 0
0 0 0
 ~  ~ ]
0 0
0 0
35
It follows t hat y = 2z a.nd x = 1 + y 2z +w =  1 + w. Thus, setting z = s and w = t, the
solution set of the system is represented by the parametric equations x = 1 + t , y = 2s, z = s,
W = t.
32. As in Exercise 26, the augmented matrix of the syst em can be reduced to
[
1 2  1
0 1 !
0 0 0
 ~ ]
 1
3
and from this we can immediately conclude that the system has no solution.
33. (a ) There are more unknowns than equat ions in t his homogeneous system. Thus, by Theorem
2.2.3, there at e infinitely many nontrivial solutions.
(b) From back substit ution it is clear that x
1
= Xt = X3 "" 0. This system hns only the trivial
solution.
34 . . (a) There are more unknowns than equations in this homogeneous system: thus there are in
finitely many nont rivial solutions.
(b) The second equation is a multiple of the first. T hus t he system r educes to only one equation
in two unknowns and t here are infinitely many solut ions.
35. The augmented matrix of the homogeneous system is
[
2 1 3 0]
1 2 0 0
0 l 2 0
Interchange rows 1 and 2. Add 2 times the new row 1 to the new row 2.
[
1 2
0  3
0 1
0
3
2
~ ]
Multiply row 2 by  ~ . Add  1 times row 2 to row 3. Multiply the new row 3 by ! ·
[ ~
2 0
1  1
0 1
~ ]
The last row of t his matrix corresponds to x
3
= 0 a..nd, f!·om back substitution, it follows that
xz = x3 = 0 and x
1
=  2x2 = 0. This system bas only the trivial solut ion.
36 <;;napter 2
36. The augmented matrix of the homogeneous system is
37.
[
3 1
5 1
1 1
1 1
Multiply row 2 by 3. Add  5 times row 1 to the new row 2, then multiply this last row 2 by
[
3111 0]
0 4 1 4 0
Let x
3
= 4s, x
4
:::: t. Then, using back substitution, we have 4x
2
=  x
3
 4x4 = 4s  4t and
3:c
1
= x
2
 :c
3
 x
4
= s + t  4s t = 3s. Thus the solution set of the system can be described
by t he parametric equations x1 = s, x2 = s t, X3 = 4s, X4 = t.
The augmented matrix of the homogeneous system is
2 2 4
0
1
3
i 3 2
lnterchange rows l and 2. Add 2 times the new row 1 to row 3.
0 1 3
2 2 4
1 1 8
. · Multiply row 2 by ! Add  1 times the new row 2 to row 3. Mul tiply the new row 3 by
0
1 3
1 1 2
0 0 1
Add 2 times row 3 to row 2. Add 3 times row 3 to row 1.
0
1 0
1 1 0
0 0 l
This is the reduced row echelon form of the matrix. From this we see that y (the third variable)
is a free variable a.nd, on setting y = t, the solution set of the system ca.n be described by the
parametric equations w = t, x =  t, y = t, z = 0.
38. The augmented matrix of the homogeneous system is
[
2 1  3
1 2 3
1 1 4
and the reduced row echelon form of this matrix is
[H H]
Thus has only the trivial solution x = y = z = 0.
Exercise Set 2.2 37
39.
40.
The augmented matrix of this homogeneous system is
u
1 3 2
01
1 4 3
~ j
3 2
1
 3 5
_,,
and the reduced row echelon form of this matrix is
[ ~
0
7 5
~ ]
2
2
1 3 2
0 0 0
0 0 0
Thus, setting w == 2s and x = 2t, the solution set of the system can be described by the parametric
equations u = 7s  5t, v = 6s + 4t, w = 2s, x = 2t.
The augmented matrix of the homogeneous system is
I a 0 I 0
1 4 2 0 0
0  2 2 1
0
2  4 I 1 0
1  2 1 0
ancf the reduced row Echelon form of this matrix is
1 0 0 0 0
0 1 0 0 0
0 0 1 0 0
0 0 0 1 0
0 0 0 0 0
Thus t h ~ system re<.luces to only four equations, and these equations have only the trivial solution
X1 =. X2 = Xj :::: X4 :;;;·: 0.
41. We will solve the system by Gaussian elimination, i.e. hy reducing the augmented matrix of the
system to a rowechelon form: The augmented matrix of the original system is
[ ~
1 3 4
~ ]
0 2 7
3 5
1 4 1
Interchange rows 1 and 2. Add  2 times the new row 1 to the new row 2. Add 3 times the new
row 1 to row 3. Add 2 times the new row 1 to row 4.
{ ~
0 2 7
~ ]
1 7  10
3 7 16
1 8  10
Chapter2
Multiply row 2 by 1. Add 3 times the new row 2 to row 3. Add 1 times the new row 2 to row 4.
0 2 7
1  7 10
0  14 14
0 15
 ~ 2 0
Multiply row 3 by {
4
. Add  15 times the new row 3 to row 4. Multi ply the new row 4 by ! .
0  2 7
1  7 10
0 1 1
0 0 1
~ ]
This is a rowechelon form for the augmented matrix. Ftom the last row we conclude that /
4
= 0,
and from back substitution it follows that I
3
= /
2
= 1
1
= 0 also. This system has onlJ!, the trivial
solution.
42. The augmentcn matrix of the homogeneous system is
43.
1
1
[
 ~  ~ ~  ~
1 1 2 0 1
2 2  1 0 1
and the reduced row echelon form of this matrix is
[
~ ~ ~ ~ ~ ~ ]
0 0 0 1 0 0
0 0 0 0 0 0
~ ]
From this we concl ude that the second and fifth variables are free variables, and that the solution
set of the system can be described by the parametric equations
Z1 =  .s  t , z2 = s, z3 =  t, z4 = o, z ~ = t
The augmented matrix of the system is
[!
2 3
.:2]
1 5
1 14
Add  3 ti mes row 1 to row2. Add 4 times row 1 to row 3.
[
1 2 3 4]
0 7 4  10
0 7  26 a  14
Multiply row 2 by 1. Add tne new row 2 to row 3.
[ ~
2 3
7 4
0 22
4 ]
10
a4
Exercise Set 2.2 39
44.
45.
46.
From the last row we conclude that z = and, from back substitution, it is clear that y and x
are uniquely determined as well. This system has exactly one solution for every value of a. ·
The augmented matrix of the system is
2 1
2 3
2 a
2
 3
Add 2 times t he fi rst row to the second row. Add  1 times the first row to the third row.
(
1 2 1 2]
0 6 1 3
0 0 a
2
 4 a 2
The last row·corresponds to (a
2
 4}z =a 2. If a=  2, thls becomes 0 = 4 and so the system
is inconsistent. If a = 2, the last equation becomes 0 = 0; thus the system reduces to only two
equations and, with z serving as a free variable, bas infinitely many solutions. If a f.: ±2 the system
has a unique solution.
The augmented matrix of the system is
[1
2
n ll
2 a
2
 5
Add 2 times row 1 to row 2.
2
3]
a
2
 9
Jf a= 3, then the last row corresponds to 0 = 0 and the system has infinitely many solutions.
1£ a=  3, the last row corresponds to 0 = 6 and the system is inconsist ent. If a:/; ±3, then
y = :,:_
3
9
=
and, from back substitution, x is uniquely determined as well; the system has
exactly one solution in this case.
The augmented matrix of the is
1 7
7]
3 17
16
2
a
2
+ 1 3a
This reduces to
1 7
7]
1 :3
_"l
0 a
2
 9 3a+9
The last row corresponds to (a
2
 9)z = 3a + 9. If a= 3 thls becomes 0 = 0, and the system will
infinitely many solutions. If a = 3, then the last row corresponds to 0 = 18; the system is
inconsistent. If a 'I ±3, then z = = o.:_
3
and, from back substitution, y and x are uniquely
determined as well; the system has exactly one solut ion.
41. (a) If x + y + z = 1, then 2x + 2y + 2z = 2 f.: 4; thus the system has no solution. The planes
represented by the two equations do not intersect (they are parallel).
40 Chapter 2
(b) lf x + y + z = 0, then 2x + 2y + 2z = 0 alsoi thus the system js redundant and has infinitely
many solutions. Any set of values of the form x = s t, y = s, and z = twill satisfy both
equations. The planes represented by the equations coincide.
48. To rt:<.luce thl' matrix [ ~  ~ ~ ] to reduced r owechelon form without introducing f1actions:
3 4 5
Add 1 times row 1 to row 3. Interchange rows 1 and 3. Add 2 times row 1 to row 3.
[ ~ = ~ _:]
Add  3 times row 2 to row 3. Interchange rows 2 and 3. Add 2 times row 2 to row 3. Multiply
row 3 by 
3
1
7
•
[ ~
3 2]
1  22
0 1
Add 22 times row 3 to row 2. Add 2 times row 3 to row l. Add  3 times row 2 to row 1.
This is the reduced rowechelon form.
49. The system is linear in the variab1es x = sin a, y = cos{3, and z = tany.
2x y + 3z = 3
4x+ 2y 2z = 2
6x  3y + z = 9
We solve the system by per forming the indicated rvw operations on the augmented matrix
[
2 1 3
4 2  2
6 3 1
!]
Add  2 times row 1 t o row 2. Add 3 times row 1 to row 3.
r
20 1 3 31
4  8 4
lo o 8 oj
From this we conclude that tan 1 = z = 0 a.nd, from back substitution, that cos {3 = y = 1 and
sin a= x = 1. Thus a = t. {3 = 1r, and '"Y = 0.
50. Tbis sy11t.em is linear in the variables X = x"J., Y = y
2
, and Z = z
2
, with augmented matrix
[i :
1
2
Exercise Set 2.2 41
The reduced rowechelon form for this matrix is
! H]
It follows from this that X = 1, Y = 3, a.nd Z = 2; thus x = ±1, y = ±..;3, and z = ±JZ.
51. This system is homogeneous with augmented matrix
[
2 ).  1
2 1  ).
2  2
1).
1f). = 1, the augmented matrix js [ i & 8], and the reduced form of this
2  2 (} 0
matrix is (6 g g]. Thus x = y = z = 0, i.e. the system has only the trivial solution.
0 0 1 0
1f .A = 2, the augmented matrix is [ g}, and the reduced rowechelon form of this
2  2  1 0
matrix is Thus the system ha.<l infinitely many solutions: x = !t, y = 0, z = t ,
0 0 0 0
where  co < t < co.
52. The augmented matrix U I reduces to the following form:
1
1
0
2
l
0
a l
a b
c  ab
Thus the syst.em is consistent if and only if c a  b = 0.
53. (a) Starting with the giVfm system and proceeding as directed, we have
O.OOOl x + l.OOOy 1.000
1.000x  l.OOOy 0.000
l .OOOx + 10000y 10000
l .OOOx  l.OOOy 0.000
l.OOO:t + lOOOOy = 10000
 1 OOOOy = 10000
which results in y 1.000 and x 0.000.
(b) If we first interchange rows and then proceed as directed, we have
l.OOOx  l.OOOy = 0.000
O.OOOl x + l.OOOy = 1.000
l .OOOx  l.OOOy = 0.000
l.OOOy = 1.000
\Vhich results in y 1.000 and x 1.000.
42
(c) The exact solutjon is x =
1
4
0:: andy =
The approximate solution without using partial pivoting is
0.00002x + l.OOOy = 1.000
l.OOOx + l.OOOy = 3.000
l.OOOx + 50000y = 50000
l.OOOx + l.OOOy = 3.000
l.OOOx + 50000y = 50000
 50000y = 50000
which results in y 1.000 and x 0.000.
The approximate solution using partial pivoting is
0.00002x + l.OOOy = 1.000
l.OOOx + l.OOOy = 3.000
l.OOOx + LOOOy = 3.000
0.00002x + LOOOy = 1.000
l.OOOx + l.OOOy = 3.000
l.OOOy = 1.000
which results in y 1.000 and x ::::: 2.000.
54. (a) Solving the system a..; directed. we have
0.21x + 0.33y = 0.54
0.70x + 0.24y = 0.94
0.70x + 0.24y = 0.94
0.21x + 0.33y = 0.54
l.OOx + 0.34y = 1.34
0.2lx + 0.33y = 0.54
l.OOx + 0.34y = 1.34
0.26y = 0.26
resulting in y ::::: 1.00 and x :=::: 1.00. The exact solution is x = 1, y = 1.
(b) Solving the system as directed, we have
O.llx1  + 0.20xs = 0.02
O.lOxt + 0.36x2 + 0.45xa = 0.25
0.50xt O.Olx2 + 0.30xs = 0.70
O.SOx1  O.Olx2 + 0.30xs = 0.70
O.IOx1 + 0.36x:J + 0.45xs = 0.25
O.Uz:t  + 0.20xa = 0:02
l.OOx1  0.02x2 + 0.60x3 = 1.40
O.lOx1 + 0.36x:z + 0.45x3 = 0.25
O.ll x1  0.13x2 + 0.20xs = 0.02
Chapter2
Discussion and Discovery 43
0.02x2
+
0.60XJ
1.40
0.36x2
+
0.39x3 0.39
0.13x2
+
0.13x3
=
0.13
0.02x2
+
0.60x3

1.40
l.OOxz
+
i.08x3
1.08
0.13x2
+
0.13x3
=
0.13
O.OZx2
+
0.60x3
=
1.40
l.OOx2
+
1.08XJ 1.08
0.27x3 = 0.27
resulting in x3 ;:::,; 1.00, x2;:::,; 0.00, and x1;:::,;  2.00. The exact solution is x
1
=  2, x
2
= 0,
X3 = 1.
DISCUSSION AND DISCOVERY
Dl. lf the homogeneous system has only the trivial solution, then the nonhomogeneous system will
either be inconsistent or have exactly one solution.
02. (a) All three liues pass through the origin and at least two of t hem do not coincide.
(b) If the system has nontrivial solutions then tbe lines must coincide and pass through the
origin.
D3. (a) Yes. If axo + bvo = 0 then a(kxo) + b(kyo) = k(axo + byo) = 0. Similarly for the other equa
tion.
(b) Yes. If a:r.o + byo "" 0 and ax
1
+ lly1 = 0 then
and similarly for 1.he other equation.
(c) Yes in both cases. These statements are not true for nonhomogenous systems.
D4. The first system may be inconsistent, but the second system always has (at least) the trivial
solution. If the first s y ~ t e m is consistent then the solution sets will be parallel objects (points,
lines, or the entire plane) with the second containing Lhe origin.
D5. (a)
(b)
(c)
06. (a)
(b)
(c)
D7. (a)
(b)
(c)
(d)
At most three (the number of rows in the matrix).
At most five (if B is the zero matrix) . If B is not the zero matrix, then there are at most 4
free variables (5  r where 1· is the number of nonz.ero rows in a. row echelon form) .
At most three (the number of rows in the matrix) .
At most three (the nwnbc. of columns).
At most three (if D is the zero matrix) . U 8 is not the zero matrix, then there are at most
2 free variables (3 r where r is the number of nonzero rows in a. row echelon form).
At most three (the number of columns).
False. For ex<.mple, x + y + z = 0 and x + y + z = 1 are inconsistent.
False. If there is more than one solution then there are infinitel y many solutions.
Fabe. If the system is consistent t hen, since there is at least one free variable, there will be
infinitely many solutions.
T:n..e. A homogeneou:; syi:\tem always has (at least) t he trivial S<llution.
44
Chapter 2
08. (a) True. For 1] can be reduced to either [5
(b) False. The reduce<l row echelon form of a. matrix is unique.
(c) False. The appearance of a row of zeros means that there was some redundancy in the
system. But the remaining equations may be inconsistent, have exM:tly one solution, or have
infinitely many solutions. All of these are possible.
(d) False. ·There may be redundancy in the system. For example, the system consisting of the
equations x + y = 1, 2x + 2y = 2, and 3x + 3y = 3 has infinitely many solutions.
D9. The system is li near in the variables x = sina, y = cos/3, z = tan7, and this system has only the
trivial solution x = y = z = 0. Thus sin a = 0, cos/3 = 0, tany = 0. It follows that a= 0, 11', or
·2rr; /3 = or
3
;; and 1 = 0, rr, or 21r. There are eighteen possible combinations in all. This does
not contradict Theorem 2.1.1 since the equations are not linear in the variables a, f3t 'Y·
WORKING WITH PROOFS
Pl. {a) lf a ;f. 0, then the reduction can be accomplished as follows:
If a = 0, then b =I 0 and c =f 0, so the reduction can be carried out as follows:
(b) If can be reduced to [5 ?] , then the corresponding row operations on the augmented
matr ix r] will reduce it to a matrix of the form [5 ? {] and from this it follows that
the syst.ern has the unique solutjon x = K, y = L.
CHAPTER 3
Matrices and Matrix Algebra
EXERCISE SET 3.1
1. Since two matrices are equal if and only if their corresponding entries are equal, we have a  b = 8,
b + a = 1, 3d+ c = 7, and 2d c = 6. Adding the first two equations we see that 2a = 9; thus
a = and it follows that b = 1  a =  Adding the second two equat!ons we see that 5d = 13;
thus d =
1
i and c = 7  3d = 
2. For the two matrices to be equal, we must have a= 4, d 2c = 3, d + 2c =  1, and a+ b = 2.
From the first and fourth equations, we see immediately that a= 4 and b =  2 a= 6. Adding
the second a.nd third equations we see that 2d = 2; thus d = 1 and c =  l
2
d =  1.
3. (a) A has size 3 x 4, BT has size 4 x 2.
(b) ll32 = 3, 0.23 = 11
(c) tl,J = 3 if aud only if (i,j)  (1, 1), (3, 1), or (3, 2)
(d) c, (A') = Ul (e) , ,(2BT) = [1 2)
4 . (a) B has size 2 x 4. AT has size 4 x 3.
(b) bl2 = i. b21 = 4
(c) b,; = 1 if and only if (i, j) = (2, 2) or (2, 3)
5. (a) A+·2B = H
(b) A  is not defined
(c) 4D  3CT =
=
{d)
D _ DT = [ 1 1] _ [1 3] = [ 0
3 3 1 3  4
(e) G + (2FT) =l +
4 1 3 4 2
'"
(f) (7 A,  B) + E is not defined
55
56
6. (a) 3C + D = (
3
9
°) + [
1
3.  3
= [!
(b)
= [
(c) 40  5DT = [
4
12
[: = [
(d) F _, pT = [
3 2
 !l H
(e) B+(4£T) = H i] + H
""
20 12 20
(f) (7C  D) + B is not defined
7. (a) CD= [l 0][ 1 1]=[1 1]
3  1  3 3 6 0
(b) AE = H
[1 4 2] = [!
1 3 1 5 4 5
( c) FG= =l
3 2 4j 4 1 3 32 9 25
(d) B''P = G ; !]
1
!
(e) naT = H ; = H
(f) G E is not defined
8. (a) GA = H :
r 1 s
(b) FB =
(c) GP = H :
= !)
3 1 1 14 5
=
4 4 0 16 5
H = [::
Chapter 3
Exercise Set 3.1
(d)
t1 1 3
3 101
3 7
(e) E£"
(f) D A is not defined
9. Ax = H
!J HJ l m +J m = HJ
10. Ax = [! :
3  1
=+!} +5 (_;]  2 = [q] .
11. (a)
[::1
12. (a)
;1 [::] n1
13. 5x
1
+ 6x
2
 7x3 == 2
:tl  2x·z + :l.r3 = 0
1
l Xz  I3 = 3
(b) !] [=:] :;: [ :)
1 5  2 X3 2
(b) [! =! = nJ
14. X 1 + X2 + X3 = 2
2Xt + 3X2 = 2
5xl  Jxz  6x3 =  9
15. (ABh3 = r2( A) · c3(B) = (6)(4) + (5)(3) + (4)(5) =59
16. (BAht= r 2(B} · c1 (A) = (0)(3) + (1)(6) + (3)(0) = 6
[6 2
!] = {67
17. (a) r
1
(AB) = rt(A)B = [3
2 7] 0 1
41 41]
7 7
[6 2
!] = {63
(b)
r3(AB) = r 3(A)B = IO
4 9} 0 1
67 57]
7 7
c,(AB) = Ac, (B) =
 2 7
1 r
2
1 n
(c)
5
4 1 = 21
4
9 7 67
57
58
Chapter3
t3 2 7]
18. (a) ' ' (BA) • 1(B)A [6 2 : : = [6 6 70[
(b) r3 (BA) • 3(8).4 [7 7 5[  : ;) = {6.1 41 122[
(c) c2(BA) = Bcz(A) =
7 7
4] [2] [6]
3 5 = 17
5 4 41
19. (a) tr(A) = (A)u + (A)22 + (A)33 = 3 + 5 + 9 = 17
(b) tr(AT) = (AT)ll + (AT)2;: + (AT)33 = 3 + 5 + 9 = 17
(c) tr(AB) = (AB)u + (AB)2
2
+ (AB)33 = 67 + 21 +57 = 145, tr(B) = 6 + 1 + 5 = 12; thus
t.r( AB)  tr(A)tr(B) = 145 (17)(12) = 145204 =  59
20. (a) tr(B)= 6+1+ 5 = 12 (b) tr(BT)=6+ 1+ 5 = 12
(c) tr(BA) = (BA)n + (BA)22 + {BA)33 = 6 + 17 + 122 = 145, tr(A) = 3 + 5 + 9 = 17; thus
tr(BA) tr(B)tr(A) = 145 (12){17} = 145 204 =  59
21. (a) uT' v = {2 3] [:] = 8 + 15 = 7 (b) uvT = [4 5j
(d) v'ru = l4 5J[!] =  8+ 15=7 = uTv
(e ) tr(uvT) = t r(vuT) = u · v = v · u = uTv = vTu = 7
22. (a) uTv = [3 4
5] m = 6 _ 28+ 0 = 22
(b) uvr = [ :] [2
[
6 21 0]
7 0] =  8 28 0
10 35 0
(c) tr(uvT)=628+0 = 22=uTv
(d) 7 OJ [:] =628 + 0 =  22 =uTv
(e) tr(uvT) = tr(vuT) = u · v = v. u = uTv = vTu = 22
23. [k l ll = [k
(k + 1}
2
= 0 if and only if k = 1.
Exercise Set 3.1
24. [2 2 k] [ ~ ~ ~ ] r ~ ]  = 12 2
0 3 1 lk
if and only if k = 2 or k =  10.
59
kJ [4: 3kl = 12 + 2(4 + 3k) + k(6 + k) = k
2
+ 12k +·20 = 0
6+k
25. Let F = (C(DE}. Then, from Theorem 3.1.7, the entry in the ith row and jth column ofF is
the dot product of the ith row of C and the jth column of DE. Thus (F)23 can be computed as
follows:
27. Suppose tha t A ism x nand B is r x s. If AB is defined, then n = r . On the other hand, if BA
is defined, then s = m. T hus A is m x n and B is n x m. It follows that AB is m x m and BA is
n xn.
28. Suppose that A is m x n and B is r x s. If BA is defined, then s = m a.nd BA is r x n. If, in
addition, A(BA) is defined, then n = r . Thus B is ann x m matrix.
29. (a) If the i th row of A is a row of zeros, then (using the row rule) r i{AB) = ri(A)B = OB = 0
and so the ith row of AB is a row of zeros.
(b) If the jth column of B is a column of zeros, then (using the column rule) c_i(AB) = Ac; (B) =
AO = 0 and so the jth column of AB is a column of zeros.
30. (a) If B anu C have the same jth column, then ci(AB) ::= Aci (B) = Acj (C) = cj(AC) and so
AB and AC have the same jth column.
(b) If B and C have the sa.me ith row, then ri (BA) = ri (B)A = r;(C)A = ri(CA) and so BA
and C A have the same ith row.
31. (a) If i ::j: j, t.heu a;
1
h ~ unequal row and column numbers; that is, it is off (a bove or below) the
main diagonal of the matrix [a;iJ; thus the matrix has zeros in all of the positions that are
above or below the main diagonal.
an
0 0 0 0 0
0
a22
0 0 0 0
[aijJ =
0 0 a33 0 0 0
0 0 0 a44 0 0
0 0 0 0
a 55
0
0 0 0 0 0
a 66
(b) If i > j , then the entry Cl;j has row number larger than column number ; that is, it lies below
the main diagonal. Thus laii] has zeros in all of the positions below t he main diagonal.
(c) If i < j, then the entry a;.i has row number smaller than column number; that is, it lies above
the main diagonaL Thus la;j] has zeros in a.ll of the positions above the main diagonal.
(d) If li  il > 1, then either i  j > 1 or i  j < 1; that is, either i > j + 1 or j > i + 1. The
first of these inequal ities says that the entry aij lies below the main diagonal and also be
low t he "subdiagonal" cor>sisting of entries immediately below the diagonal entries. The
60 Chapter 3
second inequality says that the entry aii lies above the diagonal and also above the entries
immediat ely above the diagonal entries. Thus the matrix A has the following form:·
an <lt2
0 0 0 0
a21 a2.2 £l23
0 0 0
A = ia;j} =
0 asz a33 a34 0 0
0 0 a43 a44 a4s 0
0 0 0 as4 ass as6
0 0 0 0
a6s aM
32. (a) The entry a;j = i + j is the sum of the row and column numbers. Thus the matrix is
: .....
(b) The entry aii = ( l)i+i is  1 if i + j is odd and +1 if i + j is even. Thus the matrix is
r
: :
1  1 1
(c) We have a;j = 1 if i = j or i = j ± 1; otherwise a.;i = l. Thus the ent ries on the main
di agonal, and those on the subdiagonals immediately above and below the main diagonal ,
arc all l; whereas the remaining entries are all + 1. The matrL'<. is
1  1  1 1
1 1  1 1
33. The component,s of the matrix product [! : represent the total expenditures for
3 
purchases during each of the first four months of the year. For example, the February expenditures
were (5)($1) + (6)($2) + (0)($3) == $17.
34. (a)
(b)
[
45 + 30 60 + 33 7f:> + 40] [75 93 115]
. . 30+ 21 30 +23 40 +25 51 53 65
T he entnes of the matrnc M + J = = represent t he
12+ 9 65+12 45 + 11 21 77 56
15 + 8 10 + 10 35 + 9 23 50 44
total units sold in ea.ch of the categories during the months of May and June. For example,
t he total number of medium raincoats sold was M
3
2 + J32 = 40 + 10 = 50.
[
15 27 35]
The entries of the matrix M  J = :
represent the difference between May and
1 30 26
June sales in each of the categories. Note that June sales were less than May sales in each
<:Me; thus e ntries represent decreMP.S.
Discussion and Discovery 61
(c) Let x r :]· Then th7 wmponents of M x = ::] [:] = represent the totalnum
l 15 <10 35 90
ber (all sizes) of shirts, jeans, suits, and raincoats sold in May.
(d ) Let y = 11 1 1 1]. Then the components of
[
45 60
30 30
yM=[l 1 1 1)
12 65
15 40
75]
= 1102 195 205J
35
represent the total number of small, medium, and large items sold in May.
(e) The product yMx = [1 1 1 = {1 1 =492 represt>nts the
15 40 35 90
total number if items (all sizes and categories) sold in May.
DiSCUSSION AND DISCOVERY
Dl. If AB has 6 rows and 8 columns, then A must have size 6 x k a nd B must have six k x 8; thus A
has 6 rows and B has 8 columns.
D2. If A = [o o] then AA = [o o] [o o] = (o o].
10' 1010 00
03. Let A = G and B = In the following we illustrate three different methods for computing
the product AB.
Met.'wd 1. Using Definition 3. 1.6. This is the same as what is later referred to as thP. column rule.
Since
we have AB = [:
Method 2. Using Theorem 3.1.7 (the dot product rule}, we have
Method 3. Using the row rule. Since
rt(AB) = rt(A)B = 11 2] [! = [7 Sj and r2(AB) = t2(A)B = [1 lj ""' [4 3]
we have AB = [: .
62 Chapter 3
D4. The matnx A i . is the only 3 x 3 with property. Here is a proof
Since xc, (A) + yc,(A) + zc
3
( A) A[:] , we must have xc, (A)+ yc,(A) + zc,( A) [= for
all x, y, and z. Talting x 1, y = 0, z = 0, it follows that c, (A) = Similarly, c2(A) = [ i]
and c3 (A) = m.
D5. There is no such matrix. Here is a proof (by contradiction):
Suppo"' A ;,. 3 x 3 for which A[:] ["f] for all x, y, •nd z. Then A[:] · on the other
hand, we must have A [:] = A [ =: l =  m . Thus there is no such matrix.
D6. (a) S1 = and S2 = are two square roots of the matrix A=
(b) The matrices S = and S = [±:s· are four different square roots of B =
(c) Not all matrices have square roots. For example, it. is easy to check that the matrix [
has no square root.
07. Yes, the zero matrix A = ha.o:; the property that AB has three equal rows (rows of zeros)
for every 3 x 3 rnatrix B.
DS. Yes, the matrix A = has the property that AB = 28 for every 3 x 3 matrix B.
09. (a) False. For example, if A is 2 x 3 and B is 3 x 2, then AB and BA are both defined.
(b) 'frue. If AB and BA are bot h defined and A ism x n, then B must ben x m; thus AB is
m x m and BA is n x n. If, in addition, AB + BA is then AB and BA must have
the same size, i.e. m = n.
(c) True. From the column rule, c
1
(AB) = Ac;(B). Thus if B has a column of zeros, then AB
will have a column of zeroll.
(d) False. For example, =
(e) True. If A is n x m, then AT ism x n, AAT is n x n, and AT A ism x m. Thus AT A :1.nd
AAT both square matrices, 8Jld so tr(AT A) and tr(AAT) are both defined.
(f) False. If u and v are 1 x n row vectors, then u T v is ann x n matrix.
Exercise S&t 3.2
DlO. The second column of AB is also the sum of the first and third columns:
s
Dll. {a) :L)aikbkj) = a,lblj + a;2b'2j + · · · + aisb.•i
.1.:=1
(b) This sum represents (AB)ij, the ijth entry of the matrix AB.
WORKING WITH PROOFS
63
Pl. Suppose B = is an s x n matrix andy= !Yt yz
yB = fy • c1(B) y · c2(B)
Ys]· Then yB is the 1 X n row vector
Y · Cn(B)]
whose jth component is y · Cj(B). On the other hand, the jth component of the vector
Ytrl (B)+ Y2r2(B) + · · · + Ysr s(B}
is Ytblj + Y2b2j + · · · + Ysbsj = y • Cj(B). Thus Formula (21) is valid.
P2. Since Ax= x
1
c
1
(.t1) + x
2
c
2
(A) + · · · + XnCn(A), the linear system Ax= b js equivalent to
X1C1 (A)+ X2c2(A) + · · · + XnC,l(A) = b
Thus the system Ax == b is consistent if and only if the vector b can be expressed as a linear
combination of the <.:olumn vectors of A.
EXERCISE SET 3.2
flO
4
2] [0
2
31 [10
6
1!]
1. (a) (A+ B)+ C = 0 5 7 + 1 7
4j = 1
12
l 2
. 6
10 3
f.)
9 5 1 19
[j
1
+ [:
5
2] [10
6
1:]
A+ (B r C)= 4 8 6 = 1 12
4 7 2 15 5 1 19
[28
28
[0 2
[10
222
26]
(b)
(AB)C =
31 1 7
=
83 ·67 278
21 36 3 5 87 33 240
u
1
3] [ 18
62
33] [ 10
222
26]
A(BC) = r1
5 7 17
22 = 83 67 278
1 4 11 27
38 87 33 240
(a+ b)C = (3) [!
2
!]
6 9]
(c) 7 21 12
5 9 15 27
[
8
12] [ 0
14
21] [ 0
6
9]
aC+bG = 28 16 + 7 49 28 = 3 21 12
12 20 36 21 35 63 9 15 27
64
Chapter3
(d) a(B  C)= (4) =
1  12  3 4  48  12
[
32  12 20] [ 0  8
aB  aC = 0 4 8  4 28
16  28 24 12 "20
=
36 4  48 12
(r 3 51 r
 2
m
2. (a) a(BC) = (4) 0 1 2 1 7
4  7 6 3 5
[ 18 62
[72
 248
132]
= (4) 7 17 22 = 28 68 88
11  27 38 44  108 152
[32 12 20] [0 2
3] [ 72
 248
!32]
(aB)C = 0 4 8 1 7
4 = 28 68 88
16  28 24 3 5 9 44  108 152
r 8 3 5
1
[ 0 8
12] r72
248 132]
B0C) = 0 1 2 4 28 16 = 28 G8 88
L 4 7 6 12 20 36 44 108 152
[8 5
2][ 2 1 3] [ 20 30
9]
(b) (B + C)A = 1 8
6 0 4 5 =  10 37
()7
7 2 15 2 1 4 16 0 71
[ 26 25 11] [ 6 5 2] [ 20
 30
9]
BA +CA = 4 6 13 +  6
31 54 =  10 37 6i
 4 26 1 12 26 70  lfi 0 71
(AT)T H
0 _T [ 2
1
3]
3. (a ) 4 1 = 0 4
! = A
5 4 2 1
riO
4
r [10
0
( b) (A+ B)r = 0 5 7 =  4 5
L 2
6 10  2 7 10
AT H
0
2] [ 8
0
l
0 2]
4 1 + 3 1
5 6
5 4 5 2 6  2 7 10
(c) . (3C)T
6
T [ 0
3
1:] =(3)
1
!] 3C'
21 12 = 6 21 7
15 27 9 12 27 3 4
Exercise Set 3.2
T =
2r 36 6
20 0]
 31 21
38 36
2 6 3 5 4 6
38 36
4. (b)
H
1 8] T [ 8 1 1]
 6  2 = 1 6  12
 12  3  8  2 3
sr cr =
5
=
2 6 3 4 9  8  2
3
[
18
(d) (BC)'i' """"" 7
ll
62 33
1
T r18
17 22 = 62
27 38  33
1
22 38
cr Br ,...
:1
0 =
4 9  5 2 6 33
22 38
5 (a) lr ( [ : ]) '' 2 +1 +I = 10, and
t r(A') ,, ( [ _ ; m = 2 + 4+4 = 10
(b) , ,(3.4) ,, ( [ :m 6 + 12 + 12 = 30 3tr(;lj
(c) tr(A) 10, tr(B) tr m _! m = 8 +I +6 = 15,
nnd t r(A+ tc ( n "' 25; thustr(A tr(A) +tr(B)"
(d) tr(AB) tr ( ::: 2831 + 36 = 33,
( [
26  25  ul )
and t r(BA) = t.r 4 6 13 = 26 + 6+ I = 33; t.hll.s tr(AB) = tr(HA) .
4 26 1
65
66
6. (c)
(d)
[
6
Lr(A  B) = tr 0 _
6
([
18
t.t (BC) = t r
= 6 + 32 =  5 = 10  15 = tr(A) tr(B).
8 2
= 18 + 17 + 38 = 37,
 27 38 J
Chapter 3
(
[
12  23
and t r(CB) = tr 24 24
60  67
!! ]) 12  24 + 49 = 37. Thus h(BC) l<(C B).
7. (a) A matrix X satisfies the equation tr(B)A + 3X = BC if and only if 3X = BC  tr(B) A, i.e.
3] [ 2 1
4 15 0 4
9  2 1
[
8 3 5] [0 2
3X = 0 1 2 1 7
:1  7 6 3 5
!]
[
18 62 33] [ 30  15
= 7 17 22  0 60
11  42 38 30 15
45] [48
75 = 7
60 41
in w},i ch case we have X = 1 47 53
[
48 47 78]
. . 41  42 22
(b) A matrix X satisfies the equation B +(A+ X)T = C tf a.ud only if
(A +X)T =C  B
A +X = ((A+ X)T)T = (C  8)
7
X= (C B )T A = CT BT  A
78]
53
22
47
47
 42
r
0 1
Thu:; X = :'. 7
4
3] [ 8 0 4] [ 2 1
5   3 1  7  0 4
9  5 2 6  2 1
[!0 2 4}
v  · 1 2 7 .
4 10 1  1
8. (a) X = B  =
2 2
(b) X = CT BT = _!_
10 10 8
92  167 282
9. (a) dPt.(A) = 65= 1
A1 = r 2

l  5
(b)
{At)1 = [3
s
(c)
A'l' = G , det(AT) = 1
(AT)1 = [ 2
 1
= (A l)T
(d)
2A = [
6
10
,det(2A) = 4
4
4 10
2] = [ 2
6 2 5
=
Exercise Set 3.2 67
10. (a) det(B)=8+12=20
s1_ 1 [ 4
20 ·4
(b)
det(B1) = 8 + 12 = _!_
20
2
20
(BI)1 = 20 ( _!_ [2
20 4
•!]) =
3]
4 ::::B
(c)
BT = [ 2
a
:] , det(BT) = 20
T i 1 [4
(B ) = 20 :;
= (IJl)T
(d)
3B = [
6
12
9]
12
, det(3B) = 180
1 1 [ 12
(
3
B) = 180 12
9] 1 [ 4
6 = 60 4
=
1 [10 5]l 1 [ 7 5]
ll. (a) (AB) = 18 7 = 20 18 10
slA• = [_: = ;o
l
(b) (ABC)_ 1 _ [ 70 45] __ 1 [ 79 ·45]
 122 79  40 122 70
4] _!_ [ 4 3] [ 2
2 2 6 20 4 2 5
1] l [ 79 45]
3 = 40 122 70
12
_ (a} (BC)_ 1 .,. [18 11] l = _!_ [ 12 11]
. 16 12 10 16 18
cts1 =! r1 4] _!__ [ 4 3] = __!__ [ 12 111
2 2 6 20 4 2 40 16 18J
(b) (BCD)_
1
= [36 33] I = _1 [ 33]
· 32 36 240 32 aG
ulc1 n1 = r :] [._: = [
14
. X= (C _B)I AB = [5 7] [10 5] = __!__ [88 37]
22 6 4 18  7 11 66  29
68
Chapter3
16. (a)
24]
67
(b)
[
6 4] 1 [3 o] 1 r 6 4]
= 2 1 6 o 2 = sl3 1
19. {a) Given that A
1
= noting that det( A
1
) 13, we have A = (A
1
) ··
1
=
1
1
3
[_!
(b) Given that (7A)
1
= have A = =
( )
, ( T) 1 [3 1] l T [3 
2
1] 
1
__ (l) _
3
1] __ ·.1>],
20. a (,ivc'? that 5A  =
5 2
it fol ows t hat 5A =
5
:; ::;. ._,
and so =  = . A
1 [:.!. · lJT 1 [2 5)
5 5 3 5 ) 3
(b) Given (I+ 2A)
1
= we have I+ ZA =
=
1
1
3
r: and so it follows that
1 ( 1 [5 2] [I OJ) 1 [9 l]
A ::: 2 13 1 1  o l . = 13 2 c '
21. The matrix A = is invertible if and only if det(A) = c
2
 c f. 0, i.e. if and only if c :/:. 0, 1.
22. The matrix A = [  :) is invertible if anJ only if dct(A) = c
2
+ 1 0, i.e. if aud only if c =/= ±L
23. One such example is A = In genera), any matrix of the form [: ; ] .
3 2 3J e J c
24. One such example is A =
In general, any matrix of the form ;].
3 2 0 e  ! 0
25. Let X= [xiJJ . Then AX= I if and only if
3x3
0
'] ['"
X}2 xnl
"
lJ
1 0 X21 X22
X23 j
1
1 1 X31 X32 X33
0
Exercise Set 3.2
i.e. if and only if the entries of X satisfy the following system of equations;
Xn
+ X:z t
+ X22
Xt3 + X23
xz1
! X3 1 =}
+ X32 = 0
+ X:tJ =: 0
=0
=1
=0
=0
= 0
+ X33 = 1
69
This system has the unique solution xu = :t12 = xzz = x23 = X31 = X33::;; ~ and x 13 = x2
1
=
x;12 =  !·Thus A is invertible and A
1
= [! ~ !] ·
'2  2 'l
26. Let X = lx,iJ . Then AX = I if anrl only if
3X3
[
~ ~ ~ ] [:::
2 0 l XJJ
X13] [} 0 ()]
X23 = 0 1 0
x:n 0 0 l
i.e. if and only i i xu = 1, :tu = 0, x13 = 0, :t21 = 0, xn = 1, r23 = 0, 2x11 + X31 = 0, 2x12 + X32 =
0, and 2x
1
3 + X33 = l. This is a very simple syst.em of equations whicb has the solution
Xll = 1, X12 = 0, X13 = 0, X21 = 0, X22 = l , X23 = 0, X31 =  2, X32 = 0, X33 = 1
Thu& A is invertible and A t = [ ~ ~ ~ ]
 2 0 l
28. (a) The mat<ix uvT ~ [:] [I I 3) =  2 2 6 ha;: the property that
[
3 3 9]
4 4 12
 3
2
4
70 Chapter 3
(b) u· Av HJ. ([_! ; [:]) HJ. 50
ATU· V = (U :][;)) HJ HJ 112+63
29. U A is invertible and AB = AG then, using the Associative law (Theorem 3.2.2(a)). we have
30. If A is invertible and AC = 0, then C = IC = (A
1
A)C = A
1
(AC) = A
1
0= 0. Similarly, if C
is invertible and AG = 0, then A= AI= A(CC
1
) = (AC)C
1
= OC
1
= 0.
31. (a) If A= ::::] , then det(A} = cos
2
() + sin
2
B = 1. Thus A is invertible and· :
for every value of ().
A_1 = [cosO
si nO
 si n fJ]
cosO
(b) The given system can be written so
i.e. x' = xcosO ysi n8 andy'= xsinO + ycos£1.
32. (a) If A
2
:o:: I. t hen (1 A)
2
= 1
2
 2A + A
2
= 1  2A +A = I  A; t hus J  A is idempotent.
( b) If .4
2
= / , then (2A  1)(2A I)= 4A
2
 4.4 + !
2
= 4A 4A + I = I ; thus 2A I is invert
ible anci (2A  1)
1
= 2.4 I.
33. (a) If A anc.l B are invertible square matrices of the same size, and if A + B is also invertible,
then
A(A
1
+ s
1
)B(A + B)
1
::: (I+ AB
1
)B(A + B)
1
= (B + A)(A + B)
1
=I
(b) From part( a) it follows that A(A
1
+ B
1
)B;;::. A+ B, and so At + B
1
= A
1
(A+ B)B
1
•
Thus the matrix A
1
+ B
1
is invertible nnd (A
1
+ B
1
) 
1
= B(A B) 
1
.4 .
34. Since uTv = u · v:::: v · u = vru, we have (uvT)
2
= (uvT)(uvT) = u (vTu)vT = (vTu)uvT =
(uT v)uvT. T hus, if uTv =f  1, we have
an( so the matrix A = I+ uvT is invertible with A
1
= I 
vuvT.
Discussion and Discovery 71
35. If A = [: and p(x) = x
2
 (a+ b}x +(ad  be) , then
p(A) =  {a+ b)A + (ad  bc)I = !]
2
 (a+ b) [ + (ad be)
= [a2+bc ab + ba1_[a
2
Jda ab l·db]+(ad  bc)(l 0
1
]
ca + de cb + d
2
J ac + de ad + d
2
0
36. From parts (d) and (e) of Theorem 3.2.12, it follows that
tr(AB  BA) = tr(AB) tr(BA) = 0
for a.ny two n x n matrices A and B. On the other hand, tr(fn) = 1 t 1 + · · · + I ·= n. Thus it is
not possible to have AB  BA= I.,.. .
37. The adj acency matrix A and its square (computed using the rowcolumn rule) are as follows:
0 1 1 1 0 0 0 0 0 0 0 1 3 1
0 0 0 0 1 l 0 0 0 0 0 0 0 2
0 0 0 0 0 l 0 0 0 0 0 0 0
4 s  0 0 0 0 0 1 1
Az =:
0 0 0 0 0 0 1
0
()
0 0 0 0 1 0 0 0 0 0 0 0
0 0 0 0 0 0 1 0 0 0
0 0 0 0
0 0 0 0
(}
0 0 0 0 0 0 0 0 0
The entry in the ijth position of t he mat rix l1'
2
represents t he number of ways of traveling from i
to j wit h one intermediate stop. For example, t here are three such ways of traveling from J to 6
(t hrough 2, 3 or 1) and two such ways of t raveli ng from 2 to 7 (t hrough 5 or 6).
In genern.l t.h€ entry in tjth position of the matrix An represent s the number of ways of traveling
from i t o j with exactly n  1 int.errnediatc stops .
DI SCUSSION AND DISCOVERY
01. (a) Let and B = Then A
2
= B
2
= A2 B
2
= On
the other hand, (A + B)( A G =
(b) (A + B)(A  B) = A
2
 AB + BA  B
2
(c) (A + B )(A  B) = A
2
 8
2
if and only if AB = BA.
D2. If A is any one of the eight matrices A =
;J A
2
= f.:!.
0 0 ±1
D3. (a) If A
2
+ 2A +I= 0, ihcn i =  A
2
 2A = A(A 21) ; thus A is invertible and we have
A 
1
=A 21.
72
(b) If p(A) = anA"+ a
11
_
1
A"
1
+ · · · + a
1
A + aol = 0, where a.o =I 0, then we have
A( a .. An 1  anI A"  2 .. .  al J) = I
ao ao ao
Thus A is invertible with At= !!.li. A"
1
 . ..  !!.!.[.
' oo ao oo
Chapter 3
D4. No. First note that if A
3
is defined then A must be square. Thus if A
3
= AA
2
=I, it follows that
A is invertible with A I = A
2
.
D5. (a) False. (AB)
2
= (AB)(AB) = A(BA)B. If A and B commute then (AB)
2
= A
2
B
2
, but if
B A ¥: AB then this _will not in general be true.
(b) True. Expanding both expressions, we ha.ve (A  B)'J. = A
2
 ABBA + B
2
and
(B A)Z = 8
2
 BA  AB + A
2
; thus {A B)
2
= (B A)
2
•
(c) True. The basic fact (from Theorem 3.2.11) is that (AT)
1
= (A
1
)7', and fr0m this it
follows that (A n)T = ({A") 
1
)T = {(A")T)
1
= ((AT)n)1 = (AT)  n.
(d) False. For example, if A= and B = G then u (AB) = tr = 3, whereas
tr(A)tr(B) = (2)(2) = 4.
(e) False. For example, if B = ·A then A+ B = 0 is not invertible (whether A is invertible or
not).
D6. (a) If A is invertible, then the system Ax= b has a. unique solution for every vector b in
R
3
, namely x = A
1
b. Let x
1
, X:t, and x
3
be the solutions of Ax= e
1
, Ax= e
2
, and
Ax= e3 r espectively, and let B be the matrix having these vectors as its columns; B =
!x, x2 x3}. T hen we have AB = Ajx
1
X 2 x3] = [Ax1 Ax2 Ax3] = !e, ez e3] =I.
Thus A
1
= B = [x1 Xz x3].
(b) From part {a) , the columns of the matrix A
1
are the solutions of Ax= e1. Ax= e
2
, and
Ax= e3 . The augmented matrix of Ax = e
1
is
[i H
a.nd the reduced row echelon form of this matrix is
0
1
0
0 3]
0 1
1 0
D7. The matrices G and n have determinant equal to 1. The matrices
have determinant equal to 1. These six matrices are invertible. The other ten matrices6U
have determinant equal to 0, and thus are: not invertible.
DS. True. If AB = BA, then n• Al = ( AB)
1
= (BA)
1
= A
1
s
1
•
Worki ng with Proofs 73
WORKJNG WITH PROOFS
Pl. We proceed as in the proof of part (b) given in the text. It is clear that the matrices (ab)A and
a(bA) ha.ve the same size. The following shows that corresponding entries are equal:
P2. 'The following shows that corresponding entries on the two sides are equal:
P3. The argument that the matrices A(B C) and AB AC must have the same size is the sa.rne as
in the proof of part (b) given in t he text. The following shows that corresponding column vectors
are equal:
cjiA( B  C)J = Acj(B C) = A(cj(B) Cj(C))
= Ac
1
( B) Aci (C) = Cj (AB) c
1
(AC) = cj (AB  AC)
P1. Th('.se t. hree rn<\t r ices clel\rly ha.ve t he same size. The following shows t hat corresponding column
vectors are equal:
cj ja(BC)] = ac
1
(BC) = a(Bc
1
(C)) = (aB)cj(C) = c; [(aB)CJ
Cj!a(BC)J = acj ( BC) = a(Bc;(C)) = B(acj(C)) = B(cj (aG)) = Cj [B(aC)]
P5. If cA = 0 a.nd c :;f 0 then, using Theorem 3.2.l(c)), we have A = 1A = ( ( D c) A = ~ (cA) = ~ 0 = 0.
P6. (a) Tf A is invertible, then AA
1
=I and A
1
A = ! ; thus A
1
is invertible and (A
1
)
1
=A
(b) . If A is invertible t hen, from Theorem 3.2.8, A2 is invertible and (A
2
)  l = A
1
A 
1
= (A
1
)
2
.
P7. (a)
It t hen follows that A
3
is invertible and (A
3
) · · l = (A
2
) 
1
A
1
= (A ..
1
)
2
A
1
= {A
1
)
3
, et c.
In general, from the remark following Theorem 3.2.8, (An ) 
1
= A
1
• • A
1
A 
1
= (A
1
)n.
AA
1
·"   = · 
[
a b] { 1 [ d bl) l [ad ··· be
· c d \ad .... be c a (t<l  (,c 0
(
1 [ d ... b]) [f'l b 'J 1 [da be
A
1
A = ad  be  c a c d = ad  be 0
(b) Let A = (: ;] where ad  be = 0. Then A is invertible if and only if !.here a.re scalars e, f,
g, and h, such Lhnt
[: !] (; ~ ] = [ ~ ~ ]
i.e. if ami only if the followhg system of equa •. ions is consistent:
ae + bg = 1
ce + dg = 0
af + bh = 0
cf + dh = 1
Multiply t he fi rst equation by d, multi ply t he second equat ion by b, and subtract . This Jea.ds
t o
(da  bc)e = d
74 Chapter3
and from this we conclude that d = 0 is a necessary condition in order for the system to be
consistent. It then follows (since ad be = 0} that be= 0 and so either b = 0 or c = 0. Let
us assume that b = 0 (the case c = 0 can be handled similarly). Then the equations reduce
to
ae = 1
ce = 0
af = 0
cf = 1
and these equations are easily seen to be inconsistent. Why? From ae = 1 we conclude
that e f. 0; and from ce = 0 we conclude t hat c = 0 or e = 0. From this it follows that c
must be equal to 0. But this is inconsistent with c/ = 1! ln summary, we have shown that
if ad be = 0, then the system of equations has no solution and so the matrix. A is not
invertible.
n n n n n n
tr(AB) = L = L = L L btkakt = L(BA)u = tr(BA)
k=: l k=l / "" 1 1=1 k= l 1=1
EXERCISE SET 3.3
1. (a) This matrix is elementary; it is obtained from J2 by adding 5 times the first row to the
second row.
2.
3.
4.
5.
(b) This matrix is not elementary since two row operations are needed to obtain it from I
2
(follow
the one in part (a,) by interchanging the rows).
(c) This matrix js elementary; it is obtained from h by interchanging the first and third rows.
(d) This matrix is not invertible, and therefore not elementary, since it has a row of zeros.
(c) is elementary. {a) , (b) , and (d) are not elementary.
(a) Add 3 times the firs t row to the second row.
{b) Multiply the third ro• .... hy ! .
(c) lnt.erchange the first and fourth rows.
(d) Add times the third row to the first row.
(a) Add 2 times the second row to the first row.
(b) Multiply the first row by  1.
{c) Interchange the first and third rows.
(d) Add 12 times the second row t o t.he fourth row.
f
=
0
(a) (b) 1
0
r 0 OJ =
0 1 0
0 0 !
3
0 0
W'
0 0
0
I
 7
(c)
1 0 1 0
{d)
1 0
=
0 1
0 l 0 1
0 0 0 0 0 0
r
0
l
7
1 0
==
0 1
I\
0 v
EXERCISE SET 3.3
(b) =
0 0 1 0 0 1
(d) j ! r J
7. (a)
:: 1rr1r fmm A by interchanging the first and third rows; thus EA B where
(b) EB =A where E is the same as in part (a).
0
0
1
0
75
(c) :; : A by adding 2 times the first row to the third row; thus EA = c where
2 0 1
(d) EC = A where E is the inverse of the matrix in part (c)
1
i.e. E =
2 0 1
8. (a) E =
(b)C
(d) !
(c) E =
0
0 0]
l 2
0 1
9. Using thr. method of Example 3, we start with the partitioned rr.atrix lA I I] and perfonn row
operations aimed at reducing the left side to I; these same row operations performed simultaneously
on the right ::;ide will produce t.he matrix A J.
[
1 5 I 1 0
1
]
2 20: 0
Add 2 times the first row to the second row.
[
1 5 : 1 0]
0 10 l 2 1
Multipiy the seconci row by
then add 5 times the new second row to the first row.
[
1 0! 2
0 11_! .L
I 5 10
[ 2 _!]
Thus A l = _
1
1 . On the other hano
1
using the formula from Theorem 3. 2. 7
1
ami the fact
5 10
det(A) = 10
1
we have A =
10
=
1
? ·
l 1 [ 21) 5] [ 2 l]
2 1 g iO
76
10.
11.
Chapter 3
1 1 [ 1
A = 14  4
3] ;= [
2  
. 7
l
(a)
(b)
Start with the partitioned matrix [A I I J.
[!
4  1 l1 0
I
0
J I 0 1
I
5  4l0 0
Interchange rows 1 and 2.
0 3l0 1
4 1 : 1 0
I
5  4l0 0
Add  3 times row 1 to row 2. Add  2 times row 1 to row 3.
0 3:o
1
4  10: 1 3
I
5 10: 0  2
Add  1 t imes row 2 to row 3.
0 3l 0 1
4  10: 1 3
I
1 0: 1
1
Add 4 times row 3 to row 2, then interchange rows 2 and 3.
0 3: 0
' l Q I 1
I
0  10: 5
1 0]
__ :
Multiply row 3 by 
1
1
0
, then add 3 times the new row 3 to row 1.
0 : _!l I 10 5
0 : 1 1 1
: _l 7 2
I 2 10 S
1
0
From this we conclude that A is i nvertiblc , and that A
1
 [
Start with the partitioned matrix [A I IJ.
3  4 l 0
4 I 0 1
4 2 9 0 0
11
TO
l

7 '2
10 6
Multiply row 1 by 1. Add  2 times the new row 1 to row 2; add 4 times the new row 1 to
row 3.
[
1 3
0 10
()  10
0
1
0
EXERCISE SET 3.3 77
(c)
Add row 2 to row 3.
[
1 3
0 10
0 0
0
1
1
At this point, since we have obtained a row of on the left side, we conclude that the
matrix .4 is not invertible.
St art with the partitioned matrix !A I Jj.
0 1 : 1 0
I 1 : 0 1
I
1 o:o 0
Add 1 times row 1 to row 3.
fl
0 1 :
1 0
I
0 1 1
j I
I
1 1:  1 0
Add · 1 times row 2 t o row 3, then mult iply the new row 3 by 1.
0 1 1 0
j]
1 1 0 1
0 1
1 l
2 2
Add 1 times row :3 t.o rows 2 and 1.
0 o:
l
l
ll
I
2
2
1
0:  4
1
2
I
I I
0
1 I
I 2 2
. r
f\·, "" t h ;, weconchtdet hnt ,\ ;, iot •«li ble, nod that A·· ' = [;
I
ll
2
I
2
I
'i
12. (a) A l
(b) A
1
= 1  3
[
l 0 0]
[
1 1 Ol
(c) A
1
= 0
0  1 4 0 0 !.
3
13. As in t he inversion al gori t hm, we s tart wit h the po.rti tioncd matrix [A I I J and perform row oper
ations a.irr..;)d at redudng left. side to its reduced row echelon form R. If A is invertible, then
R will be the identity matrix and the matrix produced on the right side will be A  l . In the more
general situation the reduced matrix will have the form [R I B} = [BA I BIJ where the matrix B
on the right has the property that B.1 = R. [Note that B is the product of ejementary matrices
and thus i" always an invertible matrix, whether A is invertible or not .]
[
1 2
0 0
1 2
3[1 0 0]
1 ! 0 l 0
4 : 0 0 1
78
Chapter 3
Add 1 Limes row 1 to row 3.
2 3: 1 0
0
I
0 1 I 1
I
0 1 1 1 0
Add 1 t imes row 2 t o row 3.
2 3: 1 0
0
I
0
1 I 1
I
0 0 : 1  1
Add  3 times row 2 to row l.
2 0: 1 3
I
0 0
} I
1
I
0 0: 1  1
[) 2 0]
u
 3
The reduced row echelon form of A is R = 0 0 1 , and the matrix B = 1 has the
0 0 0  1
property that B A = R.
R =
0
B=H
0
0]
14. 1 0
l
8
0
1
15. 1f c = 0, t.hen t.hc row is a row of zeros, so the matrix is not inverti ble If c f 0, then after
rnult.iplying the fir5t row by 1/ c, we have:
Add I timE''> the tl rst row to t he second row and to the t hird row.
[
] 1
0 c 1
0 0
If c = 1, then t he second and third rows are rows of zeros, and so t he matrix is not invertible. If
c :j: 1, then we can divide the second and third rows by c  1 obtaining
l :]
from this it is clear that the reduced row echelon form is the identity mat ri x. Thus we conclude
that the mat rix is invertible if and only if c :f:. 0, 1.
16. c ¥= 0, ±v2
17. The matrix B is obtained by starting with t he identity matrix and performing the same sequence
of row operation' ; thus B = [! Hl = : :J
EXERCISE SET 3.3 79
18 B
19. 1f any one of the ki's is 0, then the matrix A has a 1.ero row and thus is not invertible. If the kt.'s
are all nonzero, then multiplying the ith row of the matrix [A I IJ by 1/ki for i = 1, 2, 3, 4 and then
reversing the order of the rows yields
20.
0
0
l
k2
0
0
l
k3
0
0
thus A is and A  l is the matrix in the righthand block above.
[ '
0 0
k
l I
0
' p k
k =r 0; A
1
=
1 I 1
p · k1
k
1 \ I
F
p 1?1
21. (a) The identity matrix can be obtained from A by first adding 5 times row 1 to row 3, and then
multiplying row 3 by Thus if E1 = and E2 = ;] , th€n E2E
1
A = I.
(h) A
1
• E
2
E
1
where £
1
and E
2
a.re as in part (a) .
(c) A = E}
1
E2
1
=
22. ( a )
• • :anT::,
[rof .: ::''. ad[t'fg A
0 0 1 0 0 2
then
(b) A I = E2E
1
where £
1
and E2 are as in par t (a).
(c)
23. The identity ma.Lrix can be obtained from A by the following sequence of row operations. The
corresponding matrices and their inverses arc as indicated .
0
0
(1) Interchange rows 1 and 3.
E1=
1
s1_
1
l 
0 0
H
0 ol
0
(2) Add 1 times row 1 to row 2.
£2 =
l
EI_
1
2 
0 0
80
Chapter 3
E, [
0 0]
s;•
0 0]
{3) Add  2 times row 1 to row 3.
1 0
1. 0
2 0 1
0 1
E,
0
Ei'
0
{4) Add row 2 to row 3.
1
1
1
1
Es
0
j]
E;'
0
j]
(5) Multiply row 3 by i·
1 ]
0
0
E, [:
0
:]
0
H
(6) Add row 3 to row 2.
1
E t _
1
() 
0
0
E,
()
Ei' [:
0
(7) Add  2 times row 3 to row 1.
1
1
0
0
Ea =
 1
1
(8) A<ici  1 t imes row 2 to row 1.
l
1
LO
0
0
It follows that A
1
= EaE1E6EsE4E3E2E
1
and A= E}
1
E2
1
Ej
1
Ei'£5'Ei
1
E7
1
E8
1
.
25. The two syst ems have t he same coefficient matrix. If we augment this m:\trix with the two columns
of constants on the right sides, we obtain
[
1 2 1 ! 1 : 0]
1 3 21 3•0
I I
0 1 z: 4 l 1
the reduced row echelon form of this matrix is
! : !:]
0 0 t: o: 4
From this WP. conclude that the first system has the solution x
1
=  9, x
2
= 4, :r
3
= 0; and the
second syst em has the solution 4, x2 = 4, xa = 4.
EXERCISE SET 3.3 81
26. 'The coefficient matrix, augmented with the two colUJnns of constants, is
27.
2 5: 1 : 2]
I I
l  3 I  } I 1
I I
0 1 : 3 :  1
and the reduced row echelon form of this matrix is
[
1 0 0 l  32 ! 11]
0 1 0 I 8 I 2
I I
0 0 1 l 3 :  1
Thus the first system has the solution x
1
= 32, x
2
= 8, x
3
= 3; and the second system has the
solution x
1
= 11, x2 =  2, X3 = 1.
(a) The systems can be. written in matrix form as
2
:] [::]
n1
2
m
3
::0
and 3
:::
1
]
["
:]
3 ']
The inverse of the coefficient matrix A = 3 2 l . Thus the solutions
l 1
are given by
[:
 3
:] [!]
n1 H
Hl
2 and 2
 1  1
b,] [ :
3
' ][  J 0] [9 4]
(b) A
1
!b 1 2 1 3
= : .
1
 1 l 4
28. ( The systems can be wri tten in matrix form us
nod [ J
Tbe inverse of the coefficient matrix
tions are given by
:]
82 Chapter3
29 0 The augmented matrix can be row reduced to l bt
2
b:z) 0 Thus the system is
consistent if and only if b1  = 0, i.e. if and only if b
1
= 2b2.
30.
[
1 2 5: btl
The augmented matrix 4 5 8: b2 can be row reduced to
:1 3 3 I b3
0
2 6 : bl l
3  12l bz 4bl 0 Thus
o 0 I  I> 1 + bz + b3
the system is consistent if and only if b
1
+ bz + b
3
= 0.
31.
[
l 2 llbll [1
The augmented matrix 2 3 2 ! &a can be row reduced to o
4 7 4 , b3 0
2 1 : bl l
1 o ! t 2b1 . Thus the
0 0 I 2bt  b2 + 113 .
system is consistent if and only if 2b1  b2 + b3 = Oo
32. The augmented matrix for t his system can be row reduced to
[
1 1 3 2
0 1 11  5
0 0 0 0
0 0 0 0
and from tohis it foll ows that t he system Ax = b is consistent if and only if t he components of
b sat isfy the equations b
1
+ bs + b
4
= 0 and + b
2
+ b
4
= 0. The general solution of this
homogeneous system can be written as b
1
::: s + t , b2 = 2s + t, bs = s, b4 = t. Thus the original
system Ax b is consistent if and only if the ve•:to' b is of the fonn b [ ;, (
11
l s [i] + t m ·
33. The matrix A can be reduced to a row echelon from R by the following sequence of row operations:
r o
1 7
j]
A= ll
3 3
2  5 1
(1) Intercha.nge rows 1 and 2.
[ :
3 3
j]
1 7
2  5 1
(2) Add 2 times row 1 t o row 3.
r 3
8
:J
0 1 7
0 1 7
(3) Add 1 times row 2 to row 3.
R l:
3 3 8]
l 7 8
0 0 0
It follows from this that R = E
3
E
2
E
1
A where E
1
, E'2, E
3
<lie the elementary matrices correspond
ing to the row operations indicated above. Finally, we have the factorization
A EFG R [! u ! ;] [:
1 1 0 0 0 0
where E = E!
1
• F = Ei
1
, and G = £3
1
•
WORKING WITH PROOFS 83
34. The matrix A is obtained from the identity matrix by aJding a times the first row to the third row
and adding b times t he second row to the third raw. If either a = 0 or b = 0 (or both) the result
is an elementary matrix. On the other hand, if a =I= 0 and b =I= 0, the result is not a.n elementary
matrix since two elementary row operations are required to produce it. Thus A is elen,enta.ry if
and only if ab = 0.
DISCUSSION AND DISCOVERY
Dl. Suppose the matrix A can be reduced to the identity matrix by a known sequence of elementary
row operations, and let E
1
, E
2
, ...• Ek be the corresponding sequence o£ elementary matrices.
Then we have Ek · ··E
2
EtA =I, and so A= E;
1
E2
1
· · ·Ej;
1
l. Thus we can find A by applying
the inverse row operations to I in the reverse order.
D2. There i.'J not. For example, let b = 1 and a"" c = d = 0. Then =
D3. There is no nontrivial solution. From the last equation we ste. that x
4
= 0 and, from back substi
tution, it follows immediately that x
3
= x'2 = x
1
= 0 also. The coefficient matrix is invertible.
D4. (a) The matrix B = [
1
:
0
0
] has the requited property.
0 2
(b) Yes, t he mat rix B = _: works just ns well.
(c) The matrix B must be of size 3 x 2: it is not square and therefore not invert.ible.
D5. (a ) False. Only invertible mat.rkes can be as a produ<:t of elementary matrices.
(b) F'alse. For example t he product =_ cannot be obtained from the identity by
a single elementary row operation.
(c) This row operation is equivalent t.o multiplying the given mat.rix by an elementary
matrix: and, since any elementary matrix is invertible, the product is still invertible.
(d) 'T'r11e. If A is invertible and AR = 0, then B = IB = (A'
1
A)B = A
1
(AB) = A
1
0= 0.
(e) True. If r1. is invt!rtible then the homogeneous system Ax = 0 has only the trivial solution;
otherwise (if A is singular) there are infinitely many solutions.
D6. All of these statements arc true.
D7. No. An iuvertible matrix cannot have a row of zeros. Thus, for A to be invertible we must have
a# 0 and h =I= 0. But (assuming this) if we add d/a times row 1 to row 3, and add e/h times
row 5 to row 3; we have a matrix with a row of zeros in the third row.
WORKlNG WITH PROOFS
Pl. If AB is invertible then, by Theorem 3.3.8, both A and B a.re invertible. Thus, if either A orB is
singular, then the product AB must be singular.
P2. Suppose that A = BC where B is invertible, and that B is reduced to I by a sequence of elementary
row operations corresponding to the elementary matrices E
1
, E2, .. . , Ek· Then I= Ek · · · EzE1B,
and so B
1
= Ek · · · EzE1. From this it follows that Ek · · · E2E
1
A == E,.. · · · E2BC = C; thus the
same sequence of r{)w operations will reduce A to C.
84 Chapter 3
P3. Suppose Ax= 0 iff x = 0. We wish to prove that, for any positive integer k, Akx = 0 iff x = 0.
Our proof is by induction on the exponent k.
Step 1. If k = 1, then A
1
x =Ax= 0 iff x = 0. Thus the statement is true fork= 1.
Step 2 (induction step): Suppose the statement is true fork= j, where j is any fixed integer 2:: 1.
Then A1+
1
x =A) (Ax) """' 0 iff Ax= 0, and this is true iff x = 0. This shows that if the statement
is true fork= j it is also true for k = j + 1.
• ..
These two steps complete the proof by induction.
P4. From Theorem 3.3.7, the system Ax= 0 has only the trivial solution if and only if A is invertible.
But, since B is invertible, A is invertible iff B A is invertible. Thus Ax = 0 has only the trivial
solution iff (BA)x = 0 has only the trivial solution.
P 5. Let e1, ez, .. . , em be the standard unit vectors in Rm. Note that ell e2, ... , em are the rows of
the identity matrix I= lm. Thus, for any m x n matrix A, we have
for i = l, 2, ... , m. Suppose now that E is a.n elementary matrix that is obtained by performing a
single elementary row operation on I. \Ve consider the three types of row operations separately.
Row Interchange. Suppose E is obtained from I by interchanging rows i and j. Then
r
1
(EA) = fi(E)A = eiA = ri(A)
ri(EA) = rj(E)A = e;A = ri(A)
and rk(EA) = rk(E)A = = rk(A) fork =J: i,j. Thus EA is the matrix that is obtained from
A by interchanging rows i a.nd j.
Row Scaling. Suppose E is obtained from I by multiplying row i by a nonzero scalar c. Then
and rk(EA) = r,(E)A = ekA = rk(A) for all k =I= i. Thus EA is the matrix that is obtained from
A by multiplying row i by the scalar c.
Row Replacement. Suppose E is obtained from I by adding c times row ito row j (i :f.=j). Then
ri(EA) == rj(E)A = (ej + cei)A eiA + ceiA = ri(A} + cri(A)
and = rk(E)A = ekA = rk(A) for all k =J:. j. Thus EA is the matrix that is obtained from
A by adding c times row i to row j.
P6. Let A = [: !] , and let B = Then AB = BA if and only if [! :] = [: i.e. if and only if
a= d and b =c. Thus the only matrices c0mmute with Bare those of the form A= !] .
Suppose now that A is a matrix of this type, a.nd let G = Then AC = C A if a.nd only if
[: = [;b
2
ba], and this is true if and only if b = 0. Thus the only 2 x 2 matrices that commute
with both B and C are those of the form A= where oo < a < oo. It is easy to
see that such a matrix will commute with all other 2 x 2 matrices as well.
EXERCISE SET 3.4
85
P7. Every 111 x n matrix A can be tra.nsforrr:ed to reduced row echelon form B by a sequence of elemen
tary row operations. Let E
1
, E2, . . . , be the corresponding sequence of elementary
Then we have B = · · · E2E
1
A = CA where C = E1r: · · · E
2
E
1
is a.n inver tible m:1trix.
P8. Suppose that A is reduced to I= In by a sequence of elementary r ow operations, a.nd that
Et. Fh, .. . , n.re the correspondjng elementary matrices. Then · · · E2E
1
A =I and so
· · · E2E1 = A
1
. Jt foUows that the same sequence of elementary row operations will reduce
the matrix [A I B] to t.he matrix I Ek · · · E2E1B) = [I I A
1
BJ.
EXERCISE SET 3.4
1. (a)
(b)
(c)
2.
(
(b)
(c )
3. (a)
(b)
4. (a)
(b)
5. (a)
(b)
(XJ,X2) = t(l, 1); Xt = t, X2 = t
(Xt
1
X;!, X3) :: t(2, 1,  4); X} = 2t, X2 = t, X3 = 4t
(x,,.:1:2, X3,X4) = t(l, 1,2,3); X1 = t, X2 = t, X3 =  2t, X
4
= 3t
(x1, x2) = t(O,  4); :c1 = 0, x2 = 4t
(.t: , ,:l:<!, :2:3 ) = t(Q,  1, 5) ; XI= 0, X2 = t, X3 = 5t
(Xl
1
X2, X3
1
X4 ) = £(2, 0, 5, _ ,1); Xt = 2t, X2 = 0, X3 = 5t , :t
4
=  4t
(Xt,X2, X3) = s(4, · 4, 2) + t (3,5, 7); Xt = 4s  3t, X2 =tiS + 5t, :t
3
= 2s + 7t ·.
(::tt , 2'2 · X3, X4 ) = s( l , 2, 1,  3) + t(3, 4, 5, 0); X1 = S + 3t , X2 = 2s + 4t, :r
3
= S + 5t, x
4
= 3.'f
(:z:
1
,:r2 X3) ::: s{0, 1,0) +t(2,1, 9); x
1
= 2t, X2 = s+t, X3 = 9t
lXt, x2. ::3, :r4, x:;) = $(1, 5,  1, 4, 2) + t (2, 2. 0, 1,  4); x
1
= s + 2t, x2 = Ss + 2l , x
3
=  s,
X4 = 1s + t, X:; = 2s  4t
u =  2v; Lhus u is in the s ubspace span{v}.
u :f: kv for a ny scalar k ; thus u is not in the subspace s pan{v}.
6. (a) no
{b) yes
7. (a) Two vect ors are linearly dependent if and only if one is a scalar multiple of the other; thus
these vectors 1\re linea.rJy dependent.
(b) These vectors are linearly independent.
8. (a) linear ly independent.
(b) linearly dependent
9. u = 2v  '\v; thus the u, v , w are linearly dependent.
10. u = 4v  2w
11. (a) A line (a 1dilnensio nal subspace) in R
4
that passes through the origin and is parallel to the
vector u = (2,  3, 1, 4) = 4(4, 6, 2, 8}.
( b) A plane (a 2.:!imensional suospace) in R" that passes through the origin and is parallel to
the vectors u = (3,2, 2, 5) and v = (6, 4, 4, 0).
12. (a) A plane in tha t passes through the origin and is parallel to the vectors u = (6, 2, 4,8)
and v = (3, 0. 2,  4).
(h) A line in R
4
that passes through the origin and is parallel to the vector u = (6, 2, 4, 8).
86
13. The augmented matrix of the homogenous system is
H
6 2 5
6 1 3
12 5 18
a.nd the reduced rowechelon form of this matrix is
6 0 11
0 1 8
0 0 0
Thus a general solution of the system can be written in parametric form as
Xt = 6s llt, X2 = s, X3 := 8t, X4 = t
or in vector form as
(Xt. X2
1
x
3
, X4) = S( 6
1
1,0, 0) + t( 11, 0, 8, 1)
Chapter 3
... :
This shows that the solution space is span{ v
1
, v2} where v1 = ( 6, 1, 0, 0) and v2 = ( 11, 0, 8, 1 ).
14. The reduced row ech!:!lon form of the augmented matrix is
[
1 1
0 0
0 0
0 1 1
1 2  3
0 0 0
thus a. general solution of the system is given by
(x
1
, x
2
, x
3
, x
4
, x
5
) = r( l, 1, 0, 0, 0) + s(l , 0,  2, l , 0) + t( l, 0, 3, 0, 1}
15. The augmented matrix of the homogeneous system is
16.
2 1
4 1
and the reduced row echelon form of this matrix is
l
0
[
1 2 0
0 0 1
1
1
1 0]
i 0
Thus a general solution of the system can be written in parametric form as
X
1
= 2r js X2 = T, X3 = .X4 = S, X$= t
or in vector form as
(xt, xz, .x3, X4, xs) 0,0,0) + s( l , 0, 1, 0) + 0, l,O, 1)
The vectors v
1
= ( 2, 1, 0,0, 0), v2 = ( 3. 0, i, 1, 0), and v3 = ( 0, 0, 1) span the solution
sp3ce.
The reduced row echelon form of t!1a augmented matrix is
0 4 0 1
1 1 0 2
0 0 1  3
0 0 0 0
thus a. general solution is given by (xt. x2, X3, x4, xs) = s(4, 1, 1, 0 , O) + t(  1, 2, 0, 3, 1 ).
EXERCISE SET 3.4
17. (a) Vz is a scalar multiple ol· v
1
(v2 = 5vl); thus these two vectors are linearly dependent.
(b) Any set of more than 2 vectors in R
2
is linearly dependent (Theorem 3.4.8).
18. (a) Any set containing the zero vector is linearly dependent.
(b) Any set of more than 3 vectors in R
3
is linearly depenrlcnt (Theorem 3.4.8).
19. (a) Those two vectors arc linearly independent since neither is a scalar multiple of the other.
(b) v
1
is a scalar multiple of vz(v
1
= 3vz)i thus these two vectors are linearly dependent .
87
(c) These three vectors are linearly Independent since the system n : [:] has
only the trivial solution. The coefficient matrix, which is the matrix having the given vectors
as its columns, is invertible.
(d) These four vectors are linearly dependent since any set of more than 3 vectors in R
3
is linearly
dependent.
20. (a) linearly independent
(c) linearly dependent
(b) linearly independent
(d) linearly dependent
21. (a) The matrix having t hese vectors as its columns is invertible. Thus the vectors are linearly
independent; they do not lie in a. plane.
(b) These vectors are linearly dependent (v
1
= 2v
2
 3v3); they lie in a. plane but not on a line.
(c) These vectors line on a line; v
1
= 2v2 and v3 = 3vz.
22. In parts (a) and (b} the vectors are linearly independent; they do not lie in a plane. In part {c)
the vectors are linearly dependent; they lie in a plane but not on a line .
... ...·...,""
23. (a) This vectors is it is closed under scalar and addition:
k(a, 0, 0) = (ka, 0, O) and (a1. a, 0) + (az, 0, 0) = (a1 + az, 0, 0).
(b) This seL of ''cctors is not a subspace; it is not closed under scalar multipllcation.
(c) .. set of vectors is a.· subspace. lf b = a+ c, then . kb = ka + kc. If b1 ;;;· a 1 + c1, and
b2 = nz + cz, then (b1 + bz) = (aJ + a2) + (c1 + cz) .
(d) This set of vectors is not a subspace; it is not closed under addition or scalar multiplication.
24. Sets (a) and (b) are not subspaces. Sets (c) and (d) are suhspaces.
25. The set W consists of all vectors of the form x = a(l,O, 1, 0); thus W = span{v} v =
(l,O, 1, 0) . This corresponds t o a line (i.e. a 1dimensional subspace) through the origin in R
4
.
26. W = 5pan{ v
1
, v
2
} ·.vhcre v
1
::: (1, 0, 2, 0,  1) and v
2
= (0, 1, 0, 3, 0). This corresponds to a. plane
(i.e. a 2dimensional subspace) through the origi n R::..
27. (a)
(b)
7vl  2vz + 3v3 = 0
2 3 7 3 7 2
YJ = 7Y2 · 7V3, Vz = zVl + 2YJ 1 V3 = 3Vl + 3 V2
28. If ,\ = ! , then the given vectors are clearly dependent. Otherwise, consider the matrix having
these vectors as its columns. Are there any other values of..\ for which it is sing\.!lar? To determine
this, it is convenient to start by multiplying all of the rows by 2.
[Y 2: JJ
88 Chapter 3
Assum!ng >. ¥ this matrix can be reduced to the row echelon form
l
0 2+2.>.
which is singular if and only if>.=  1. Thus the gjven vectors are dependent iff >.= ! or>.= 1.
29. (a) Suppose S = { v
1
, v
3
} is a. linearly independent set. Note first that none of these vectors
can be equal to 0 (otherwise S would be linearl y dependent). and so each of the sets {v
1
},
{v2}, and {v3} is linearly independent . Suppose then that Tis a 2element subset of S,
e.g. T = {v
1
, II T is linearly dependent then there are scalars c
1
and c
2
, not both
zero, such that c1 v
1
+ c
2
vz = 0. But, if this were true, then c1 Vt + c2v2 + Ovs = 0 would
be a nontrivial linear relationship among the vectors v'h v2, v 3, and so S would. be linearly
dependent. Thus T = {v
1
, v
2
} is linearly independent. The same argument applies to any
2element subset of S. Thus if S is linearly independent , then each of its nonen:tpty subsets
is linearly independent.
(b) If S = {v
1
, v2, v
3
} is linearly dependent, then there a.re scalars c1o c
2
, and c
3
, not all zero,
such that c
1
v1 + c2v2 + CJVJ = 0. Thus, for any vector v in R", we have c
1
v
1
+ c
2
v
2
+
c3 v3 + Ov = 0 and this is a nontri vial linear relationsbjp among the vectors v
1
, v
2
, vs, v.
This shows that if S = { v
1
, v
2
, v
3
} is linearly dependent then so is T = ( v
1
, v
2
, v
3
, v} for
any v.
30. The arguments used in Exercise 29 can easily be adapted to this more general situation.
31. (u  v ) + ( v  w) + (w  u) = 0; t hus the vectors u  v, v  w, and w  u form a linearly de
pendent set . · ·
32. First not e that the relationship between the vectors u , v and s, g can be written as
= (:)
and so the inverse relationship is given by
[
s] [ 1 0.06] _, [u] 1 [ 1
g = 0.12 1 v = 0.9928  0.12
Parts (a.), (b) , (c) of t he problem can now be answered as follows:
(a) s =
6 9
i
28
( u  0.06v)
(b)
(c)
33. (a) No. 1t is not closed under either addition or scalar multiplication.
(b) P 876 = 0.38c + 0.59m + 0. 73y + 0.07k
P 2t6 = 0.83m + 0.34y + 0.47k
P328 = c + 0.47y + 0.30k
(c) H Pe75 + P2J6) corresponds to the CMYK vector (0.19,0.71 , 0.535, 0.27).
34. (a) k1 = k2 = = (b) k1 = k
2
= · · · = k1 =
(c) The components of x = + tc2 + 4c3 represent the averag.e scores for t he students if the
Test 3 grade (the final exam?) is weighted twice as heavily as Test 1 and Test 2.
DISCUSSION ANP DISCOVERY 89
35. (a) kt = kz = k3 = k4 = (b) k1 = k2 = k3 = k4 = k& = i
(c) The components of x; =
+ ! r
2
+ represent the average total population of Philadel
phia, Bucks, and Delaware counties in each of the sampled years.
n
36. v = L vk
k = l
=DISCUSSION AND DISCOVERY
Dl. (a) Two nonzero vectors will span ·R
2
if and only if t hey a.re do not lie on a line.
(b) Three nonzero vectors . will span R
3
if and only if they do not lie in a
D2. ( a ) Two vectors in Rn will span a plane if and only if they are nonzero and not scalar multiples
of one another.
(b) Two vectors in R" will span a line if and only if they are not both zero and one is a scalar
multiple of the other.
(c) span { u} = span { v} if and only if one of vectors u and v is a. scalar multiple of the other.
D3. (a) Yes. If three nonzero vectors are mutually orthogonal then none of t.hem lit:S in the plane
spanned by the other two; thus t he three are linearly independent.
(b) Suppose the vectors vl> v2, and v
3
are nonzero and mutually orthogonal; thus v, · v; =
llvdl
2
> 0 fori= 1, 2, 3 and v; · vi = 0 for i=/: j . To prove they are linearly independent we
must show that if
C1V1 + C2V2 + C:JVJ = 0
then Ct = c_a = C3 = 0. Tbis follows from the fact that if c 1v
1
+ c 2v2 + C3Y3 = 0, then
C; llv;ll
2
= vi· (c1v1 + c2v2 + c3v3) = V
1
• 0 = 0
for i = 1, 2, 3.
D4. The vectors in t he fi rst figure are linearly independent since none of them lies in the plane spanlled
by the other two (none of them can be expressed as a linear combi nation of the other two). The
vectors in the second figure arc linearly dependent since v
3
= v
1
+ v
2
.
DS. This :set is d ose>d under scalar multi plication, but uot under For example, the vectors
u = (1,2) nnd v = (2,  1) correspond to points in Lhe set, but 11 + v = (1, 1) does not.
06. (a) False. For example, two of the vectors ma.y lie on a line (so one is a scalar multiple of the
other), but the third vector may not lie on this same line and therefore cannot be expressed
a.s a linear combination of the other two.
(b) False. The set of all linear combinations of two vectors can be {0} (if both are 0), a line (if
one is a scalar multiple of the other), or a plane (if they are linearly independent) .
{c) False. For example, v and w might be linearly dependent (scalar multiples of each other).
[But it is true that if {v, w} is a linearly independent set, and if u cannot be expressed as
a linear combination of v and w, then { u, v, w} is a independent set . J
(d) True. See Example 9.
(e) Ttne:. lf c1(kv1)+ cz(kv2 ) + cz(kvz) = 0, then k(c1 v1 +c2v2 + c3v2) = 0. Thus, since k =/:
0, it follows that Ct v
1
+ c
2
v2 + c
3
v
2
:::::: 0 and so c
1
= cz = c
3
= 0.
D7. (a) False. The set { u, ku} is always a linearly dependent set.
(b) False. This statement is true for a homogeneous system (b:::::: 0), but not for a nonhomogen
eous system. !The solution space of a nonhomogeneous linear system is a translated sub
space.J
Chapter 3
(c) True. If W is a subspace, then W is closed under scalar multiplication and addition,
and so spa.n(W} = W. ·
(d) False. Forexample,ifSt = {(I ,O),(O, l}}a.nd S2 = {(1, l) ,(O, l}}.thenspan(SI) = span(S
2
)=
R
2
, but. S1 # S2.
08. Since span(S) is a subspace (already closed under scalar multiplication and addition), we have
span(spa.n(S}) = spa.n(S). ·
WORKING WITH PROOFS
Pl. Let() be the angle between u and w, and let 4> be the angle between v and w. We will show that
9 = ¢. First recall that u · w = llu!lllwll cosO, so u · w = kllwll cos9. Simllarly, v • w.=lllwllcos¢.
On the other hand we have ·
u · w = u · (lu + kv) = l(u · u) + k(u · v) = lk
2
+ k(u • v)
aud ::;o kllwll u. w = lk
2
+ k(u . v) , i.e. l!wll cos e = lk + ( u. v) . Similar calculations show
that llwll cos¢ = (v · u) + kl ; thus Uwll cos B = Uwll cos¢. It follows that cos 9 =cos¢ and () = </>.
P2. If X belongs to WIn w2 and k is a scalar, then kx also belongs to Wt n wl since both Wt and
w2 are subspaces. Similarly, if XI and X2 belong to Wt n W2 , then x, + X2 belongs to WIn w2.
Thus WI n w2 is closed under scalar multiplication and addition, .i.e. wl n w2 is a subspace.
P3. First we show t hat ·w
1
+ W
2
is closed under scalar multiplication: Suppose z = x + y where xis in
W1 andy is in W2. Then, for any scalar k, we have kz = k(x + y) = kx + ky, where kx is in W
1
and ky is in (since w, and w2 are subspaces); thus kz IS in w} + w2. Finally we show that
WI + w2 is closed under addition: Suppose Zl =X) + Yl and Z2 = X2 + Y2 o where X) and X2 are
in W1 and Y• and Y2 are in W2 . Then z
1
+ Z2 == (x1 + y, ) + (x2 + Yz) = (x1 + x2) + (Yl + Y2) .
where X} + X2 is in l t\ and Yl + Y2 is in w2 (since WI and w2 subspaces); thus Zl + Z2 is in
H.\+ w2.
EXERCISE SET 3.5
1. (a) The reduced r,Jw echelon form of the augmented matrix of the homogeneous system is
thus a general sol ut ion is x1
l
0
0
!
Gr (in column vector form)
EXERCISE SET 3.5
(c) From (a) and (b), a general solution of the nonhomogeneous system is gjven by
(d) The reduced row echelon form of the of the nonhomogeneous system is
..   .. 
,,.
2 1
;]
3
3
f
0 0
·.:., 0 0
This solut ion is related to the one in Qa.rt (c) by the change of variable .s
1
= s, t
1
= t + 1.
 ··· r• .. •otNo•··.
2. (o) The <educed <OW echelon ro,m is [;
ll
0 T
2
0 0
0] [Zil o ; thus a general solution is :r
2
= t .
0   ·   :1:3 1
(d) The rnd uced row echelon form is
0 ¥
1 g
o· o
: [:} [1J +t' nv
'··.. .."
3. (a) The reduced row echelon form of the augmented matrix of the give n system is
! 0 .!. ]
3 3
0 1 1
0 0 0
thus a general sol ut ion is given by x
1
= !  1t, x2 = s, X3 = t , X4 = 1; or
91
92 Chapter 3
(b) A general solution of the !s
.· :::....
[
::] ::::: s [!] + ,r;1 \.
' .. X4 0 l 0
and a particular solution of the given nonhomogenous system is x1 = !. x2 = 0, X3 = 0, x
4
= 1.
········· ··•·· : .. o····· ···
4. (a) The reduced row echelon form of the augmented matrix of the given system is
0 _1: 133]
1 9 7
0 0 0
thus a general solution is given by
(b) A general solution of the associated homogeneo'..ls system is
and a particular solution of the nonhomogeneous is x
1
=
1
i, x2 = 0, x3 = 7, X4 = 0.
5. The vector w can be expressed as a linear combination of v
1
, v
2
, and v
3
, if and only if the system
: [::] :::::
1 5 12 c
3
1
is consistent. The reduced row echelon form of the augmented matrix of this system is
0
1
0
0 2]
0 3
1 1
From this we conclude that the system has a. unique solution, and that w = 2v
1
+ 3v
2
+ v3.
6. The vector w can be expressed as a linear combination of v
1
, v
2
, and v
3
, if and only if the system
f
3 8
 2 1
l2 4
EXERCISE SET 3.5 93
is consistent. The reduced row echelon form of the augmented matrix of this system is
·  . . /
,.
'
'
/
0 3
10]
/
/
l 1 6
0 0 0
•,
... ······• .......
•· ..
and from this we conclude that the :;ystem. hilS inftnitcly many given b}: c1 .:;::;.10  ..
c
2
== 6 + t, c
3
= t. Thus.w can be expressed as a linear combination of v1, v2, and V3. fii particutat;
taking t = 0, we have w = 10v
1
 6vz.
7. The vector w is in span{v
1
, v
2
, v
3
, v
4
} if and only if the system
H
1 l
3
0 2
is consistent. The row reduced echelon form of the augmented mo.trix of this system is
!
. . : ..•.. ... . .·· . . . . ..... ··. . . ·::.::_;:,..._
From this we conclude that the system has infinitely many solutions; w is in span{v
1
, vz, v
3
, v
4
}. / i
,......········"· ······
8. The vector w is in span{v
1
, v
2
, v
3
} if and only if the system
is consistent.. The reduced row echelon form of the augmented matrix of this system is
0 2
1
0 0
From this we conclude that the system has infinitely many solutions; thus w is in span{vt, v2, VJ}.
9. (a) The hyperplane al. consists of all vectors x = (x, y) such that a • x = 0, i.e. 2x + 3y = 0. This
corresponds to the line through the origin wit.h parametric equations x = it, y = t.
(b) The hyperplane a..!. consists of all vectors x = (x, y, z) such that a· x = 0, i.e. 4x 5z = 0. This
corresponds to the plane through the origin with parametric equations x = y = s, z = t.
(c) a.i. consists of all vectors x = (xbx
2
, X3, x4) such that a· x = 0, i.e. Xt + 2x2 3x3 + 7x4 = 0.
This is a hyperplane in R
4
with parametric equations Xt = 2r + 3s 7t, x2 = r, X3 = s,
X4 = t.
0. (a) x = 4t, y = t (b) x = s, y = 3s + 6t, z = t
(c) x1 = r + 2s, x
2
= r, x
3
= s, s;
4
= t
94 Chapter 3
11. This system reduces to a single equation, Xt + x2 + x3 = 0. Thus a general solution is given by
x
1
= s t, x
2
= s, x
3
=: t; or (in vector form)
The solution space is twodimensional.
13. The reduced row echelon form of the augmented matrix of the system is
0
1
7
7
19
T
l
7
Thus a general solution is x
1
= I_( s x
2
= + ts + ¥t, X3 = T, x
4
= s, x
5
= t; or
3 19
:1
X1
7
7
X2
2 1
7
7
xs =r
1
+s
0
+t
X4
0 1
X f.>
0 0
The solution space is threedimensional.
15. (a) A genecal x =: t:_y \;t [: l m + f l] + t nl
. .. ' ······..
(b) The solution space corresponds to the plane which passes through the point fP( 1, 0, 0) il.nd is
· ····· · ,. _______ .,.... _____ ·
... ······ ... ······ ...................... ........  .... ......_.. ······ ···· ....... , . ..,.
16. (a) A general solution is x = 1 t, y = f;

(b) The solution corresponds to the line which passes through the point P(l, 0) and is parallel
to the vecto( v = ( 1,1).
17. (a) A vector x = (x, y,z) is orthogonal to a= (1, 1, 1) and b = ( 2, 3,0) if and only if
y z. = 0
g+3y j
(b) The solution space is the line through the to the vectors a and b.
EXERCISE SET 3.5
(c) The reduced row echelon form of the augmented matrix of the system is
· iJ ·
"
and so a by x z = t; or (x, y,z) l).,ote
that the vectof: .. I:!.,. 
 ..
l 8. (a) A vector x = (x,y,z) is orthogonal to a = (  3,2,1) and h= (0, 2, 2) if and only if
 3x + 2y  z =0
 2y 2z = 0
(b) The solution space is th.e llne through .. ori,g!11 .. tha,:t is perpendicular to
(c) ( x, y, z ) = _: 1, 1); note that the vector v = (  1, ·:.__ i, 1) is orthogonal to both a and b. ·
 
.9. {a} A \'ector x = (x
1
,x2,x3,x
4
) is orthogonal to v1 = (1, 1, 2,2) and vz = {5, 4,3,4) if and only if
x 1 + xz + 2x3 + 2x4 = 0
5xl + 4xz + 3x3 1 4Xt = 0
(b) The solution space is the plane (2 dimensional subspace) in R
4
that passes through the origin
a.nd is perpendicular to the vectors v1 and v
2
.
(c) The redUl:ed row Pchelon form of the augmented matrix of t.he system is
l
 5 ·1
7 6
0
and so a. general solution of the system is given by (x,,xz,XJ ,x
4
) = s{5, 7,1,0} + t(4,  6,0, 1) .
Note t hat t he vectors (5,7, 1,0) and (4,6,0, 1) are orthogonal to both v
1
and v2.
:o. ( a ) A vector x = (x
1
,:c
2
,r.
1
,x
4
, is orthogonal to the vectors v
1
= (1, 3, 4,4, 1),
v
2
== (3, 2, 1, 2,  3) , and v3 == (1, 5, 0,  2,  4) if and only if
x1 + 3xz + 4x3 +  Xs = 0
3xl + 2x2 + X3 + 2x4  3xs = 0
x, + 5xz  2x4  4xs = 0
{b) The solution space is the plane (2 dimensional subspace) in R
5
t hat. passes through the origin
and is perpendicular to the vectors v1, vz, a.nd v3.
(c) The reduced row echelon for m of the augmented matrix of the system is
0 0
3 7
5
 IO
1 0
13 _33
 Z5
50
0
31 21
25 50
and so a. general solution of the system is given by
(x1 , x2, x
3
, x1, xs) = s( 1, 0) + t( {o, I)
The ( t W. 1, 0) and ( t
7
0
, 0, 1) are orthogonal to v •• Vz , and v3.
96
Chapter 3
DISCUSSION AND DISCOVERY
Dl. The solution set of Ax = b is a. translated subspace x
0
+ W , where W is the solution space of
Ax= 0.
D2. If vis orthogonal to every row of A, t hen ilv = 0, and so (since A is invertible) v = 0 .
D3. The general solution will have at least 3 free variables. Thus, assuming A i: 0, the solution space
will be of dimension 3, 4, 5, or 6 depending on how much redunda.ocy there is.
D4. (a) True. The solution set of Ax= b is of the form x
0
+ W where W is the solution space of
Ax = O.
(b)
(c)
(d)
(e)
False. For example, the system :r  Y =
0
is inconsistent, but the associated homogeneous
xy=l
system has infinitely many solutions.
True. Each hyperplane corresponds to a single homogeneous linear equation in four variables,
and there must be a least four equations in order to have a unique solution.
'Tme. Every plane in R
3
corrr.:>ponrls to a equation of the form ax + by + cz = d.
False. A vector x is orthogonal to row(A) if and only if x is a solution of the homogenous
system Ax = 0.
WORKING WITH PROOFS
P 1. Suppose that Ax = 0 has only the and t hat Ax = b is consistent. If Xt and Xz
are t wo sol utions of Ax= b, then A(x
1
 x
1
) = Ax
1
 Axz = b  b = 0 anti so Xt  x z = 0, i. e.
x
1
= xz. Thus, if Ax= 0 has only the t rivial solution, t he system Ax = b is either inconsistent
or has exactly one solution.
PZ. Suppose that Ax = 0 has infinitely many solutions and that Ax= b is consistent. Let Xo be
any solution of Ax = b . Then, for any solution w of Ax= 0 , we have A(xo + w) = A.xo + Aw =
b + 0 = b . Thus, if Ax = 0 has infinitely many solutions, the system Ax = b is either inconsistent
or ha.<> infinitely many solutions. Conversely, if Ax= b has at. most. one solution, then Ax= 0 has
only t.he trivial solution.
P3. If x
1
is<:\ solution of Ax = band x2 is a solution of Ax = c, then A(x, + x2) = Ax1 + Axz = b + c;
i.e. x
1
+ X2 is a solution of Ax= h +c. Thus if Ax= band Ax ;= care consistent systems, then
Ax = b + c is also consistent. This argument c2n easily be adapted to prove the following:
Theorem. If Ax = bi is consistent for each j = 1, 2, ... , r and if b = b
1
+ b2 + · · · + b,., then
Ax = b is also consistent .
P4. Since (ka) · x = k(a · x) and k f. 0, it follows that (ka) · x = 0 if and only if a· x = 0. This proves
that. (ka)..L = a..L.
EXERCISE SET 3.6
1. (a) This mat rix has a row of zeros; it is not invertible.
(b) A diagonal matrix with nonzero entries on the
o
0
] 
1
o ol
20 = O!O·
u 0 0 3
EXERCISE SET 3.6
97
0
r rl
0
fl
2. (a}
 2
0 = 0
1
(b) not invertible.
2
0
3 0
0
0
_; l [!
[ 2 1] ' 0 [ 10 2]
3. (a)  1
(b) 4 1  2] =  20  2
0 2 2 5 4 10
2 5 10 10
0
0][ 2 1] 0 [3
0
0][ 10
2]
ro 6] (c)  1
: 2] =
1 0 20  2
=
20 2
0 0 2 10 10 20  20
[ 6
 2
[ 12
5
[  24  10
12]
4. (a)  1  2 (b) 3 10
(c) 3  10
 20
"
15 5 60 20
0
0
11
0
5. (a) 4
{b)
1
(c)
)k
4
0
0
0
(a)
0
1]
(b)
0
[2'
0
3]
6.
I
9
(c) A  k
3k
9
0
0
0
7. A is if and only if x =!= l ,  2, 4 (the diagonal entries must be nonzero).
9.
Apply the invers ion algori t hm (see Section 3.3) to find the inverse of A.
2 3 : 1 0
1 2:0 1
'
0 1 : 0 0
Add 2 t.i 111es row 3 to row 2.
Add ' 3 ti mes row 3 t o row l.
2
0: 1 0
;]
1 o: o 1
I
0
1 : 0 0
Add 2 times row 2 to row 1.
0 0: 1
2
1 ol o 1
I
0 1 : 0 0
Thus the ;nverse of the upper triangular matrix A [ i
23] ['
 2
:r
1
is A
1
=
I
0
0
98
If A=
; } ,lhro A
1
=
r[
0
;]
0 0 0 1
10. 1 1 0  2  1
4 4 1  11 4
u
0
i]
12.
0 8]
11. A = 0 0 4
 1 4 0
13. The matrix A is symmetric if and only if a, b, and c satisfy the following equations:
a  2b + 2c = 3
r2a +b+···· c·,;· {) ..
The augmented matrix of this system is
nnd the reduced row echelon form is
:}.· :
0
1
0
1  13
0
0
Chapter 3
Thus, in order for the matrix A to be symmetric we must have a = 11 , b =  9, and c = 13.
14. There are infinitely ma.ny solutions: a = l + lOt, b = 1t, c = t , d = 0, where  oo < t < oo.
15. 'vVe hnve A l = =  ; t hus A
1
ts symmetnc
[
2 1 J . I 1 [3 }] _ . .
 1 3 5 I 2
16. We have A
1
=  2 1 7 =  13  5 1 ; thus A
1
is symmetric.
[
l  2
1 r45 13 ll l
3  7 4
14
11 1 3
17. AB =
=
0 0  4 0 0 3 0
0  12
18. From Theorem 3.6.3, the factors commute if and only if the product is symmetric. Thus the factors
;n (a) do not commute, whe, eas the ractocs ;n (h) do commute.
19. If A =
] _l
0 0  1 0 0 (1)
5
0 1 .
.......... _ ..
( 1)2 0 0 I
20. If A=
0 0] [(! )
2
4 0 , then A
2
=
3
0
0 l 0
0
EXERCISE SET 3.6
21. The lower tria.ngular matrix A = [! is invertible because of the nonzero diagonal entries.
1 3 1
Thus, from Theorem 3.6.5, AAT ond AT A are also invert ible; furthermore, since these matrices
are symmetric, their inverses are also symmetric by Theorem 3.6.4. .Following are the matric:es
AAT, AT A, and their inverses.
r
6
[ I 3 8]
ATA =
10 (AT A)
1
= 3 10 27
3 8  27 74
[l
3
[ 74
 27
;]
AAT :::::
10
(AAT) 1 =
10
6  3
22. Following are the matrice..<; AAT and AT A, and their inverses.
['i
5
[ I
 3
AT.4 = 5
{AT,4.)1 =
 3 10
2 5  17 30
3
r 35 13
AAT =
10
(AA
1
')• = l13
5
5 5  2
23. The fixed points of t he matrix A == are ihe solut ions of the homogeneous system (1  A)x =
0. The augmented matrix of this system is
[
 1 1
0
o]
 J 1
and from this it is easy to see that the system has the solution x
1
= t, x
2
= t. T hus the
fixed poi nts of A are vectors of t he form x = = t [ _ where  oo < t < oo.
24. All vectors of thcform x = H] = t [:] , whero  oo < t < oo.
25. (a) If A
2
= = Thus A is nilpotent with nilpotency mdex 2,
and the inverse of I A = is (I A)
1
= I+ A=
(b) The nilpotency index of A = I A =:'
is
8 l 0  8 1
100 Chapter 3
26. (a) A= has nilpotency indt>.x 2.
1A=[lO]
1 1
(I  A) t = I + A = [
1 0
]
. 1 1
(b)
[
0 2 1,]
A = o o 3 has nilpotency index 3.
0 0 0
=:]
+ + =
27. (a) If A is invertible and skew symmetric, then (A
1
)T = (AT)
1
= (A)
1
= A
1
; thus A
1
is skew symmetric.
(b) If A and B are skew symmetric, then
(A+ B)T =AT+ BT =A B =(A+ B)
(A B)T =AT BT = A+ B = (A B)
(kA)r =kAT= k( A)'"'"' kA
thus the matrices A+ B, A B, and kA are also skewsyn\met.ric.
28. (a) If A is any square matrix, then (A+ AT)T =AT+ ATT = A'r +A= A+ AT, so A+ AT is
symmetric. Similarly, (A AT)T =(A AT), so A AT is skewsymmetric.
(b) A= HA +.A
1
') AT)
(c) A= = +
29. If H =In 2uuT then we have nr = I'I 2(uuT)T =In 2uuT = H; thus li is symmetric.
30.
T r2 T
['']
AA = [r,
rf
r;,:]
['''!
rlrr
Tl
r1 · r1 :r 1 • r2
'' · rm 1
r1rm
= r2r1
r2rr
[ ''; r,
r2 · rz
f2 • r rn
T T
rm · rl I'm ·rz Tm • I'm J rmr
1
r,.,.r
2
fmfm
31. Using Formula (9), we have tr(Ar A}= lladl
2
+ lla2!1
2
+ · · · + tlanll
2
= 1 + 1 + · · · + 1 = n.
DISCUSSION AND DISCOVERY
Dl. (a)
(b)
[
3an 5an 7al3] [au a12 a13l [3 0 0]
A= 3a21 Sa22 1a23 = a:n a22 a23J
1
0 5 0 = BD
3a::n Saa2 1a33 a31 a32 a33 0 0 7
There are other such factorizations as well. For example, A "" [!::: :::
3a31 an
DISCUSSION AND DISCOVERY
101
D2. {a) A is symmetric since a
31
= j
2
+ i
2
= i
2
+ j
2
for each i, J .
(b) A is not symmetric.since a;;= j i =  (i j) =  a;; for each i, j. In fact, A is skew
symmroetric.
(c) A is symmetric since a,, = 2j + 2i = 2i + 2j = a,i for each i, j.
(d) A i s not symmetric since a
1
, = zp + 2t
3
f:. 2i
2
+ 2j
3
==a.; if i = 1, j = 2.
In general, the matrix A= IJ(i,j)} is symmetric if and on1y if f(j , i} = f(i,j) for each i , j.
03. No. In fact, if A and B a.re commuting skewsymmetric matrices, t hen
(AB)T = BT AT= (D)(A) = BA = AB
and so the product AB is symmetric rather than skewsymmetric.
04. Using Formula {9), we have tr(AT A)= lln1JI
2
+ 1la21J
2
+ · · · + IJa,.jJ
2
. Thus, if ArA = 0, then it
follows that llatll
2
= 1Jazll
2
= · · · = llanll
2
= 0 amlso A= 0.
0 5. (a) lf A = then A
2
= thus A
2
== A if and only if = d; for 1 = 1, Z, and
0 0 d3 0 0
D6.
3, n.nd this i:> t.rue iff di  0 or 1 for ·i = 1, 2, ancl 3. There are a. tOtal of eight such matrices
(3 x 3 matrices whose diagonal entries are either 0 or 1)
(b) There are a t otal of 2n such matrices (n x n. rnatrice.s whose diagonal entries are either 0 or 1).
If A = [d'
0
], then A
2
+ 5A + 6!
2
= 0 if and only if d? + 5£'. + G = 0 for i = 1 and 2; i.e. if and
0 dl
only if d; =  2 or  3 for i = 1 and 2. There are a t.ota.l of fonr such matrices (auy 2 x 2 diagonal
matrix diagonal entries are either  2 or  3) .
D7. If A is both symmetric and skew symmetric, then AT= A and AT =  A; thus A = A and so
A =0.
0 8. In a symmetric matrix, enlries that are symmetr ically positioned across the main diagonal are
equal to each other. Thus a symmetric matrix is completely determined by the entries that lie on
or above the main rliagonal, and entries t hat. appear below the main diagonal a.re duplicates of
entries that appear above the mai n diagonal. Au n x n matrix has n entriP.s on the main diagonal ,
n  l entries on the diagonal just above the maiu diagonal, etc. Thus there are a total of
n(n + 1)
n + (n 1) + · · · + 2 + 1 =
2
entries lhat lie on or above the main diagonal. For a ;;ymmet.ric matrix, t his is the maximum
number of distinct entries the matrix can have.
In a skewsymmetric matrix, the diagonal entries are 0 and entries t hat are symmetrically posi
tioned ac:ross the main diagonal arc the negatives of each other. The maximum nwnber of distinct
entries can be attained by selecting distinct positive entries for the p,.,sit.ions above the 1.0ain
diagonal The entries in the n(n
2

11
positions below the main diagonal will then automatically be
distinct from each other and from the entries on or above the main diagonal. Thus the maximum
number of distinct entries in a skewsymmetric matrix is
n(n 1} n(n I )
':.....:. + + 1 = n(n 1) + 1
2 2
09. If D == then AD = ldt a1 d2a2J where a
1
and a2 are lhe rolumns of A. Thus AD =
I= let e2] (where e1 and are the standard unit vectors in R
2
) if and only if dlaJ = e1
and dza2 = e2. But this is true if and only if d1 i 0, d2 :/; 0, a
1
= J;e
1
, and a
2
= :, Thus
102
Chapter 3
[
_!. 0]
A =
1
where d
1
, d
2
=1 0. Although described here for the case n = 2, it should. be clear
d2 '
that the same argument can be applied to a square matrix of any size. Thus, if .1D = 1, then the
f
..!. 0 0
dt
l
o .l. .. . o
d2
tliagonal entri es d
1
, dz , ... ,d.n of D must be nonzero, and A= : : . . . ; .
0 0 ... ..1.
d,..
DlO. (a) False. If A is not square then A is not invertible; it doesnt matter whether AAT (wh.ich
is always square) is invertible or not. (But if A is square and AAT is invertible, then A is
invertible by Theorem 3.6.5.]
(b) False. For example if A = G and B = !] , then A + B = G is
( c) True. If A is both symmetric and triangular, then A must be a diagonal matrix. Thus
A = :, and so p(A) = : ] is also a. diagonal matrix (both
0 0 Un 0 0 p(d., )
symmetric and triangular).
(d) True. For example, in the 3 x 3 case, we have
(e) True. If Ax= 0 has only the trivial solution, then A is invertible. But if A is invertible then
so is AT {Theorem 3.2.11); thus Ar x =a has only the trivial sol ution.
Dll. (a) False. For =
(b) True. If A is invertible then Ak is invertible, and thus Ak =1 0, for every k = 1, 2, 3, .. . . This
shows that an invertible matrix cannot be nilpotent; equivalently, a nilpotent matrix cannot
be· inverti ble.
(c) True (assuming A i= 0). If A
3
= A, then .4
6
= A, A
9
=A, A
12
=A, . . . . Thus is it not
possible to have A,.. :::: 0 for any positive integer k , since this would imply that = 0 for all
j ?. k.
(d) 1'rue. See Theorem 3.2.11.
(e) False. For example, I is invert)ble but I 1 = 0 is not invertible.
WORKING WITH PROOFS
Pl. If A and 1J are symmetric, then A
1
' = A and BT = B. It follows that
(AT)T = AT
(A+ B)T AT + BT = A+ B
(A  B)T = AT  BT = A B
(kA)T =kAT = kA
th11s t.hc matrices Al' . A+ B, A  B , and kA are also symmetric.
EXERCISE SET 3.7
P 2. Our proof is by induction on the exponent k .
[
d
0
J: o:·] = [d
0
I:
Step 1. We have D
1
= D = ....,
0
] ; thus the statement is true for k = I.
rl l
n
103
Step 2 st ep). Suppose the statement is t rue for k = j, where j is an integer 1. 'rhen
and so the statement is also true for k = j + l.
These two steps complete the proof by induction.
;2 ::: 1 =
P 3. If d•, d2, ... ,d
11
are nonzero, then
0 0 d., Q 0 I 0 0
} , 0 .. . 01
0 }; 0
is inver tible with n' = On the other hand if auy one of
0 0 :i;; .
D=
:, : l
0 0 • · · dn
the diagonal entries is zero, then D has a row of zeros and thus is not. invertible.
P 4. We will show t hat if A is symmetric (i.e. if AT == A), then ( An) T = A
11
for each positive integer .
n. Our proof is by induction on the exponent n.
Step L Since A is symmetric, we have (A
1
) T A = A
1
; thus t he statement is true for
n = 1.
Step 2 (induction step). Suppose the statement is true for n = j, where j is An integer 1. Then
(Ai+l f = (AAi)T = (Ai )T A1' = AJ A= AH
1
and so the statement is also true for n = j + I.
These two steps complete the proof by induction.
P5. If A is invertible, then Theorem 3.2.11 implies AT is !::1vertible; thus t he products AAT and AT A
are invertible as well. On the other hand, if either AAT or AT A is invertible, then Theorem
3.3.8 implies that A is invertihle. It follows that A, AAT, and AT A are all invert ible or all
singular.
EXERCISE SET 3. 7
1. We will solve Ax = b by first soiving Ly = b for y, and t hen solving Ux = y for x.
The system Ly = b is
3yl = 0
2yl + Y2 = 1
from which, by forward substitution, we obtain Yl = 0, Y2 = 1.
104
The system U x = y 1s
x1 2x2 = 0
X2 = 1
Chapter 3
from which, by back substitution, we obtain x
1
= 2, X2 = 1. It is easy to check that this is in fact
the solution of Ax = b .
2. The solution of Ly = b is y
1
= 3, Y2 = The solution of Ux = y (a.nd of Ax= b ) is x
1
=
X
 l
2 '1·
3. We will solve Ax = b by first solving Ly = b for y , and then solving Ux = y for x.
The system Ly = b is
3yx = 3
2yl + 4y2 = 22
 4yl  Y2 + 2y3 = 3
from which, by forward substitution, obta.in Yl = 1, Y2 = 5, YJ =  3.
The system U x = y is
XJ  2X2 ··· X3 = 1
x2 + 2x3 = 5
X3 = 3
from which, by back substitution, we obtain :x:
1
= 2, x
2
= 1, x
3
=  3. It is easy to check t ha.t
this is in fact the solution of Ax = b .
4 . T he solution of Ly = b is Yt = 1, Y2 = 5, Y3 = 11. The solution of Ux = y {and of Ax = b ) is
X  ill X  37 X _ 11
l  14 ' 2  i4) 3  i4.
5. T he matrix A can be reduced to row echelon form by the following operations:
The mult.ipliers associated wit h these operations 1, and
A = LU = [ 2 0] [1 4]
1 3 0 1
is an LUfactorization of A.
To solve the system Ax= b where b = we first solve Ly = b for y , and then solve Ux = y
for x :
The system Ly ;:: b is
from which we obtain y
1
=:=.  1, Y2 :::::  1.
The system U x = y is
2yl =  2
yl + 3y2 =  2
X 1 + 4X2 =  1
X2 =  1
from which we obtain x
1
= 3, x
2
=  1. It is easy to check that thi! is in fact the solution of
Ax. =:::. b.
EXERCISE SET 3.7 105
6. An LUdecomposition of the matrix A= [  : is A= LU = ( :
The solution of Ly = b, ;here b = is Yl = 2, =  1. The solution of Ux = y (and of
Ax = b) is x
1
= 4, x2 = .. 1.
7. The matrix A can be re<luced to row echelon form by the following sequence of operations:
2
2] [ 1
·1
1] ['
 1
A= 2 2 7 0 2 2 7 0  2 7
5 2 1 5 2 0 4
1 1
l  1 7 0 l] [' 1  1
I]
7 0 1  1 = u
[' 1 1]
4 1 0 0 5 0 0 1
The multipliers associ ated wilh these operations arc 4, 0 (for the second row), 1, 4, aud
thus
A = LU = [
 1
0
2
4
is an LUfactorization of A.
To solve the •ystem Ax = b w ha•e b = [ , we fi.s.,olve Ly = b ro, y, and then wive U x = y
for x:
The system Ly = b is
2yl = 4
 2y2 =  2
y! + 4yz + 5y3 = {)
from which we obtain Y1 = 2, Y2 = 1, YJ = 0.
The system Ux = y is
X 1  X2  X3 = 2
xz x 3 = I
X3 = 0
from which we obtain x
1
= 1, .1.'
2
= 1, X3 = 0. Jt is easy to check that this is the solution of
Ax=b.
8. An LUdecomposition of t he matrix A = [! is A = LU = [!
0
1
0 4 26 0 4  2 0 0 l
Th• solution or Ly = b, wh•,. b m , ;., y, = 0, Y> = I, Yl = 0. The solution or U x = y (and or
Ax = b) is Xt = 1, xz = 1, x3 = 0.
106
9.
Chapter 3
The matrix A can be reduced to row echelon form by the following sequence of operations:
0 1
0  1
0
 1
3  2 3  2 3 0
A=
2
 1
1 2 0
1 1
2
0 0 1 0 1 0 1
0 1
0 1
;1
0 1
1 0 l 0 1 0
1 2
0 2 0 1
0 1 0 1 0 1
.•"!.:.·
0 1
01 [1
0  1
1 0
2J 0
l 0
0 1 1 0 0 1
0 0 4 0 0 0
The multipliers associated with these operations are 1, 2, 0 (for the third row), 0, 1, 0,
1, and thus
[1
0 0
0 1
ALU
3 0 1 0
 1 2 0 1
0 1 0 0
is an LUdecomposition of A.
To solve thesystem Ax= b where h = [;],we first solve Ly  b lory, and then solve Ux  y
for x:
The system Ly = b is
yl == 5
2yl + 3yz =  1
Y2 + 2y3 == 3
Y3 + 4y4 = 7
from which we obtain Y1 =  5, Y2 = 3, Y3 = 3, Y4 == 1.
The sy.:>tcm Ux = y is
x
1
 X3 =  5
xz + 2x4 = 3
X3 + X4 = 3
X4 = 1
from which we obtain x
1
= 3, x
2
= 1, x
3
= 2, x
4
= 1. It is easy to check that this is in fact the
solution of Ax = b.
EXERCISE SET 3.7
107
10.
[''
0 0] ['
0 0
j
 2 0
0]
An LUdecomposition of !1 =
12
is A= LU =
4 0 1 3 0
2
0 2 0 1
. The
0 1 4  5 0 1 1 0 0
ro!ution o[ Ly = b, wh"e b = [!lis y, = 4, Y> =  l, y, Y< ,J;. The soluUon of Ux y
(and of Ax= b) is x
1
= 1 , x
2
= X 3 = !, X4 =
1
1
0
.
....,...,_ ..... _..._,
11. Le!, elt e2, the standard unit vectors in R
3
. ,_?, ?.) __ __ c?J'!IDP..?fL2L.
the matriX".A
1
1Sofitaiiiea·E'y solvTng Ax·= ei for x = x;. Using the given LU
wyj oy llrsf'solviiig ej" _then· sqlving U'x=Y;19.!.
x_=;= xi.
(1) The sys tem 0'_:::
= 1
2yl + 4yz = 0
4yl  Y2 + 2yJ = 0
from which we obtain Yt = Y'2 =  Y3 = TI . Then the system Ux = Yt is
X 1  2Xz  X3 = !
•r • 2"'  l
T "3  6
 7
:r.3  il
from which we obtain x 1 =  L xz = 1, x3 = TI · Thus X 1 = [=!]
(2) ComputatioP of xz: The system Ly = e2 is
:1yl = 0
2yl + 4y2 = 1
4y1  Y2 + 2y3 = 0
from which we oblaiu !11 = 0, )/2 ::.:. L Y3 = ! . Then the system U x = Yz is
xa  2x2  X3 = 0
xz + 2:r3 =!
X3 = 4
fmm which we j,., = 0, ., l· Thus x, m
(3} Computation of x
3
: The syst em Ly = e3 is
3yl = 0
2yl + 4y2 == 0
4Yt  Y?. + 2y3 = 1
108
from whieh we obtain y
1
""' 0, y
2
= 0, y
3
= Then t he system Ux = YJ is
x1  2x2  X3 = 0
x2 + 2x3 = 0
X3 =
f•·om whkh w' obtain x,  l , x,  1, x, ! . Thus x, [
Finally, as a result of these computations, we conclude that A
1
= [x1
Chapter 3
!]
0 1 .
l
12. Let e
1
, e
2
, e
3
of! the standard unit vectors in R
3
• The jth column x i of the matrix A  .L obtained
by solving the syst em Ax = ej for x = Xj. Using the given LUdecomposi tion, we do this by .first
solving Ly = e
1
for y = y
1
, and then solving Ux = Yj for x = x;.
The sdution of L. y  c, io y, [ • nd the solution a[ Ux y , is x, r::J·
The sohotion of Ly c, is y
3
[:]. a nd the.olution of Ux = Y2 is x, m.
The solution of Ly e3 is Yl  m, •nd the solution of Vx y3 is x, [
_..2._]
7 14
* t. .
'l l
7 r.i
13. The nta.t l ix il can 1.c reduced to row echelon form by Lhe following sequence of oper ations:
H
1
[;
! ll
1
:]
2
A:.::
0 t
0 2 + 1
t
2
2 2 I 2
1
11 r
I
:]
i 2
lj t 0
1 =U
1 2 0 0
where thfl a.o:;sociated multipliers are 2, 2, 1 (for the leading entry in t he second row) , and
 J. This leads t.o the factorization A = LU where L =
0
If instead we prefer a lower
2 1 l
t.ri<mgular factor t hat has 1s on the main diagonal, this can be achieved by shifting the diagonal
entries of L to a diagonal mat rix D and writing the factorization as
EXERCISE SET 3.7
14. A =
=
= L'DU
2 4 l1 2 2 1 0 0 63 0 0 1
15. (a) This is a permutation matrix; it is obtained by interchanging the rows of J.z.
(b) This is not a. permutation matrix; the second row is not a. row of h.
109
(c) This js a permutation mat rix; it is obtained by reordering the rows of 1
4
(3rd, 2nd, 4th, pt ).
16. (b) is a permutation matrix. (a) and (c) are not permutation matrices.
17. The system b i<equivalent top> p > b where p> P [! :]. Using the given
decomposition, the system p  l A;x = p  l b can be written as
18.
[
1 0
LUx= 1
3 5
We solve this by first solving Ly = plb for y, and then solving Ux = y for x.
The system Ly = p  I b is
Y1 = 1
Y2 = 2
3yl  5y2 + = 5
from which we c•bt..ain y
1
= 1, '!/2 = 2, y
3
= 12. Finally, t he system Ux = y is
x1 + 2x2 + 2x3 = 1
X2 + 4X3 = 2
17x3 = 12
from which we obtaiu Xt = x2 :::::  = g, This is the solution of Ax = b.
The system Ax h iseq uivolent to p 'Ax P ' b whe>·e p ' P [ i
decomposition, the syst.em pl Ax = plb can be wri tten M
(\ 0]
o 1 . Using the given
1 0
The solution of Ly = p  l b is y
1
= 3, y
2
= 0, Y3 = 0. The solution of Ux = y (and of Ax= b ) is
X1 = X2 = 0, X3 = 0.
19. If we interchange rows 2 and 3 of A, then the resulting matrix can be reduced t o row echelon form
without any further row !nterchanges. This is equivalent to first multiplying A on the left by the
corresponding permutation matrix P:
[
3 1
0 2
3 1
1
1
2
:1
J
110 3
The reduct ion of P A to row echelon form proceeds as follows:
[
3 1
PA = 0 2
3  1
This corresponds to t he LUdecomposition
[
3 1
PA = 0 2
3 1
0] [3 0 0] [1
1 = 0 2 0 0
1 3 0 1 0
of t.he matrix PA, or to the following P LU decomposition of A
[
1 0 0] [3 0 0] [1
A = 0 0 1 0 2 0 0 1
0 1 0 3 0 1 0 0
= p 
1
LU
Note that, since p
1
decomposition ca.n also be as A = P LU.
The system Ax = b = ! is equivalent to PAx= Pb = and, using the LUdecornposjtion
obtained above, this can be written as
[
3 o olll · 4 ol [x
1
] [2]
PAx == DUx == 0 2 0 0 1 ! x 2 4 = Pb
3 0 } 0 0 } X3 1
Finally, the solutjon of Ly = Pb is Yt = y
2
= 2, y
3
= 3, and the solution of Ux = y (and of
Ax = b) is x
1
=  x2 = X3 = 3.
20. If we interchange rows 1 and 2 of A, then the resulti ng matrix can be reduced to row echelon form
wi thoul any furt her row interchanges. This is equi valent to first multiplying A on the left by the
corresponding permut at ion matrix P:
The LUdecomposit ion of t he matrix PA is
P A = LU = ;]
2 0 3 0 0 1
a.nd the corresponding FLUdecomposit ion of A is
Since P
1
= P , this <.:<tn also be wri tten as A "" P LU.
DISCUSSION AND DISCOVERY 111
The system Ax = b = [ _ !] is equivalent to PAx = Pb = using the LU decomposition
obtained above, this can be WTitten as
PAx = LUx = 1 ;] [ ::] = [ = Pb
2 Q 3 0 0 1 X3 2
The solution of Ly = Pb is y
1
= 5, Y2 = y
3
= 4; and the solution of Ux = y (and of Ax= b)
is X1 =  16, X2 = 5, X3 = 4.
21. (a) We have n = l 0
5
for the given systern and so, from Table 3. 7.1, the number of gigaflops
required for the forward and backward phases a.re approximately
Grorwa.r<l =
X 10
9
=
X w
9
= s X 10
6
;::::: 670,000
Gbackwt>rd = n
2
X 10
9
= (10
5
)'
2
x 10
9
= 10
Thus if a computer can execute l gigaflop per second it will require approximately 66.67 x 10
4
s for the forward phase, and approximately 10 s for the backward phase.
(b) If n = 10
4
then, from Example 4, the total number of gigaflops required for the forward and
backwo.rd phasE>.s is approximately x 10
3
+ 10
1
;::::: 667. Thus, in order to complete
task in IE'.ss than 0.5 s, a computer must be a.ble to execute at least
= 1334 gigaflops per
second.
22. (a) If A has such au LU decomposition, it ca.n be written us
This yiel ds the syslem of equations
x = a, y = b, w:z; = c, wy + z = d
and, since a =I= 0, this system has the unique solution
x =a, y= b,
(b ) florn the above, we ha\'e
c
711 = ,
a
z=
(ad be)
a
[:
DISCUSSION AND DISCOVERY
01. The rows of Pare obtaine<l by reordering of the rO'I.VS of the identity matrix 1
4
(4th , 3'd, 2nd,
Thus P is a per mutation mn.trix. Multiplication of A (on the left) !;,y P results in (.he corresponding
reordering of the rows of A; thus
3  3
 11 12
PA=
0 7
2 1
CHAPTER 4
Determinants
EXERCISE SET 4.1
1. :1 = (3}(4) (5)(  2) = 12 + 10 = 22
2. 1: = (4)(2) _ (1){8) = s 8 = o
3. ,S
7
1 = ( 5){2)  (7)(7) = 10 + 49 = 59
7 2
4. IV: (J2)(/3) _ (4)(./6) = 3v'6
I
a  3
5. 3
5 I
a  2
= (a 3)(a  2)  (5)( 3) = (a
2
Sa+ 6) + 15 = a
2
 5a + 21
2 7 6
6. 5 1 2 = ( 8  42 + 240) (18 + 140 + 32) = 190 190 = 0
3 R 4
2 1 4
1. 3 5  7 = (  20  7 + 72)  (20 + 8·1 + 6) = 45 110 = 65
l 6 2
1 1 2
8. 3 0 5 = (O 5 + 42)  {0 + 6 + 35) = 37  41 =  4
7 2
3 0 0
9. 2  1 5 = (12 + 0 + 0)  (0 + 135 + 0) =  123
1 9  4
c  4 3
10. 2 1
c2
= {2c l6c
2
+ 6(c l))  (12+ c
3
(c 1)  l6) =  c• +c
3
16c
2
+Be 2
4 c 1 2
11. (a) { 4, 1, 3, 5, 2} is a.n odd permutation interchanges). The signed product is
{b) { 5, 3, 4, 2, 1} is an odd permutation (3 interchanges). The signed product. is atsa.z3a34a.42ast·
(c) { 4, 2, 5, 3, 1} is an odd pArmuta.tion (3 interchanges). The signed product is al4ll22llJS043ll51·
(d) {5, 4, 3, 2, 1} is a.n even permutation (2 interchanges) . The signed product is +ats<124a33a.czas1·
{e) { 1, 2, 3, 4, 5} is an even permutation (O interchanges). The signed product is +au
(f) {1 , 4, 2,3, 5} is an even permutation (2 interchanges) . The signed product is
118
Exercise Set 4.1 119
1 2. (a), (b) , (c), and (d) are even; (e) and {f) are odd.
13. det(A) = (,\ 2)(>. + 4) + 5 = >.
2
+ 2>.  3 = (..\ 1)(>. + 3}. Thus det(A) = 0 if and only if/\= 1
or >.=  3.
14. >. = 1, ,\ = 3, or ,.\ =  2.
15. det(A) = (>.  1)(>. + l }. Thus dct(A} = 0 if and only if A"" 1 or .A=  1.
16. A = 2 or ,\ = 5.
17. We have 1;
1
=_
1
xl = x( l x) + 3 = x
2
+ x + 3, and
18.
19.
20.
21.
1 0 3
2 x 6 = ((x(x  5) + 0 18)  ( 3x 18 + 0) = x
2
 2x
] 3 X 5
Thus t he gi ven equation is if and only if :t
2
+ :r + 3 = .x
2
 2x, i.e. if 2x
2
 3x  3 = 0. The
roots of this quadro.tic equation are x =
3
* {33.
y=3
0
0 0 0
1 0
0
(a) 0 = (1)(1)(1) =  1 (b)
1 2
 1
0
3
=0
0 0
0
'1
0
1 2 3 8
1 2 7  3
(c)
0 I 4 1
= (1)(1)(2) (3) = 6
0
()
2 7
0 0 0 a
1 1 1
2 0 0
(a) 0 2 0 = 2
3
= 8 (b)
0 2 2
= (1 )(2)(3)( 4) = 24
0 0 2
0 0 3
0 0 0 4
0 0 0
{c)
l 2 0 0
= (  3)(2)( 1)(3) = 18
40 10  1 u
100 200  23 3
Mn = 1;
11
4
= 29. c11 = 29
Ml2 =
11
4
= 2l,Ct2 =  21 M13 = 21, = 27
,2
M21 = 1
!I = G21 = 11
M22 =
!I= 13,c22 = 13
M23 = 5, C23 = s
12
M3t = 7 31 _
1
=  19, C31 =  19
M32
31 _
1
=== 19, C32 == 19 M33 == 19, c33 = 19
120
22.
Mll
= 6, cn = 6
M12 = 12,012 = 12 M1s = 3, C1a = 3
M21 =
2, C21 =  2
M22 = 4, Cn = 4 M2a = l , Oza = 1
M31
21 )
6
= O,Ca1 = 0 Ma2 = O,C
3
2 =0 Ms3 = 0, Cas = 0
0 0 3
23. (a) M
13
= 4 1 14 = (0 + 0 + 12)  (12 + 0 + 0) = 0 0
13
= 0
4 1 2
4 1 6
(b) M
23
= 4 1· 14 = (8  56+24)(24+568)=96 0
23
= ·96
4 1 2
4 l 6
(c) Mn= 4 0 14 =(0+ 56 +72)(0+8+168) = 48 0
22
=  48
4 3 2
 1 1 6
(ct) M21= 1 o 14 = {O+I4+18)(0+2  42) = 72 c21=  n
1 3 2
24. (a) M3z =  30, C32 = 30
(c) .M41=  l,C41= l
{b) M44 = 13, 044 = 13
(d) M24 = 0,024 = 0
25. (a) det(A) = (l) Cu + ( 2)C
1
2 + (3)0
13
= (1)(29) + ( 2)(21) + (3)(27) = 152
(b) det(A) = ( I)Cll + (6)C21 + (3)C
31
= (1){29) + (6)(11) + (3)(19) = 152
(c) det(A) == (6)C21 + (7)C22 +(1)0
23
= (6)(11) + (7)(13) + (1)(5) = 152
(d) det.(A) = (2)C12 + (7)C22 + (1)0
32
= (2)( 21) + (7)(13) + {1){19) == 152
(e) det(A) = ( 3)CJI + (l)Ca2 + ( 4)Caa = ( 3)(  19) + ( 1)(19) + (4)( 19) = 152
(f) det(A) = (3)013 + ( l)Czs + (4)0
33
= (3)(27} + (  1)(5) + (4)(19) = 152
26. (a) det(A) = (l)Cll + (l)Ct2 + (2)0
13
= (1)(6) + (1)( 12) + (2)(3) = 0
(b) det(A) = (l)011 + {6) C21 + (3)Ca
1
= (1)(6) + (3)(2) + (0)(0) = 0
(c) det(A) = (3)021 + (3)022 + (6)023 = (3){2) + (3)(4) + (6)(1} = 0
(d) det(A) = (l)Ou + (3)022 + (I)Cs
2
= (1)(12) + {3)(4) + (1)(0) = 0
(e) det(A) = (O)C31 + (l)C
32
+ (4)C
33
= {0)(0) + (1)(0) + (4)(0) = 0
(f) = (2)013 + ( G)02J + {4)033 = (2)(3} + (6)( 1) + (4)(0) = 0
27. Using column 2: det{A} = ;j = (5){15+ 7) =  40
28. Using row 2: det(A) = (4) = (1)( 18) + ( 4)(12) =  66
Chapter 4
DISCUSSION AND DISCOVERY
30. Jet( A) ::::: k
3
 8k
2
 IOk + 95
3 3 5 3
:n. Using column 3: det(A) = ( 3) 2 2 2  (3) 2
2 10 2 4
32. det(A) = 0
33. By expanding along the t hird column, we have
3 5
2  2 = (  3)(128)  (3)( 48) = 240
0
sinO
cosO
cos B
0
I sinB cosO'
sin8 0 = (1)
8
. =sin
2
fJ+cos
2
B= 1
cos smB
s inO cosO sinO+ cosB 1
for all values of 0.
121
34. AB = bf) a.nd BA =
bd ce]. Thus AB = BA if and onJy if ae + bf = bd + ce , and
this is equi valent to t he condition that lb
0
 cl = b(<l  f )  (a  c)e = 0.
e d · f
[
a b] ) 2 [a
2
+be ab +bdl 2 2 2
36. If A = d , then tr(A = a + d, A = d b d
2
1, and tr(A ) =a + 2bc + d . Thus
c co+ c c + J
l I tr( ,1) 1 I 1 I a+ d 1 I 1 2 2 2
2 tr(A2) tr(A) =2 a.2 +Zbc+d2 a +d =2((a+d) (a +2bc+d ) =adbc=det(A).
DISCUSSION AND DISCOVERY
D 1. Lhe product of integers is always t\n integer , each elemf!ntary product is an integer and so
the of the elementa.ry products is an integer
D2. The signed elementary products will all be ±(1)(1) · · · (1) = ±1, with half of t hem equal to +1
and half equal to 1. Thus the determinant. will be zero.
D3. A 3 x 3 matrix A can have as many as six zeros wit hout having det(A) = 0. For example, let. A
be a diag0nal matrix with nonzero diagonal entries.
X y
D4. If we expand along the first row, the equation a
1
bt 1 = 0 becomes
a .. 2 b1 1
and thjs is an equation for t he line through t he points Pt (a
1
, bt) and P2(a2, b2).
D5. If ur = (a
1
,a2, a
3
) and' vT =
then each of the six elementary products of uvT is of
Lhe form (a1b;,)(a2bi?)(a3bj
3
) where {j
1
,iz,]J} is a permutation of {1,2,3}; thus each of the
elementary products is equal to (ata2as)(b1b2b3).
122 Chapter4
WORKING WITH PROOFS
Xt l/1 1 a l7l 1
Pl. If t he three points lie on a vertical line then XJ = x2 = xa = a and we have x2 yz 1 = a yz 1 = 0.
:r:s l/3 1 a !13 1
Thus, without loss of generality, we may assume that the points do not· lie on a vertical line. In
this case the points are collinear if and only if = The latter condition is equivalent
%,q x1 :1!2  Xl
to (Y3  Yt)(x2  xt) = (Y2 Y1)(:r3 x1) wh.ich can be writ ten as:
(X2Y3  X:;Yz)  (XtY3 XJYl) + (XJY2 X2Y1) = 0
On the other hand, expanding along the third column, we have:
X l Yt 1
· Thus the three points are collinear if and only if x
2
v2 1 = 0.
XJ l/3 1
P2. We wish to pwve that for each positive integer n, t here are n! permutations of a set {it , )2, . .. , j .,.}
of n distinct elements. Our proof is by induction on n .
Step 1. It is clear that there is exactly 1 = 1! permutation of the set {j
1
} . Thus the statement is
true for the case n = 1.
Step 2 (induction srep). Suppose that the statement is true for n = k, where k is a fixed integer
1. Let S = {j1, j2, ... ,jk, )k+I } Then a permutation of the setS is for med by first choosing
one of k + 1 positions for the element
and then choosing a permutation for the remaining
k elements in t he remaining Jc positions. There are k + 1 possibilities for the first choice and, by
the hypothesis, k! possibilities for the second choice. Thus there are a. total of (k + l )k! = (k + 1)!
of S . This shows that if the statement is true for n = k it must also be true for
n = k + l. These two steps complete the proof by induetion.
EXERCISE SET 4.2
1. (a) det(A) = = 8 3 = 11
1
2 11
dct{AT) =
3 4
=  8 3=  11
(b)
2. (a)
3. (a)
2 1
det(A) = 1 2
5 3
2
det(AT) = 1
3
3
4 = (24  20  9)  (30  6  24) =  5  0 =  5
6
1 5
2 3 = (24  9 · 20)  (30 6  24) = 5 0 = 5
4 6
det(A) = det{AT) = 2 (b) det(A) = det(AT) = 17
3 1 3
331
0
1
9 22
3
= (3) (!) (2)(2) = 4
0 0  2 12
0 0 0 2
EXERCISE SET 4.2
123
(b)
~ c )
4. (a)
5. (a)
(b)
(c)
(d )
6. (a)
7. (a)
(b)
3 1 9
 1 2 3 = 0 ( ~ t and third columns are proportional)
l 5 3
3
17
4
0 5 1 = (3)(5)( 2) =  30
0 0  2
2 (b) 0 (identical rows) (c) 0 (proportional rows)
d e
f
a b c a b c
g h i = ( 1) g h i = ( 1)(1) d e I =(1)(1)( 6)=  6
a b c d e
1
g h i
3a 3b 3c
(l b c
I a
b c a b c
 d e
 ! = (3)
d e
f
= ( 3) I d
e f = (  12) d e f = ( 1'2)(  6) = 72
4g 4h 4i 4g 4h 4i •1g 4h 4i g h
a+g b I· h 1:+ i a b c
d e
f =
d e f = 6
g h i g h i
3a 3b 3c 3a
3b
3cl a
b c
d e
f .
= d e f = (3) d e f = (3)(t)) = 18
g4d h 4e 1  4/ g h i
I
9
h l
6 (b) 0 (c) 6 (d)  12
,2
det(2A) =
6
:1 =  16 24 = 40
,1
2
2
det(A) = 4
3
2
1 = 4(  4  6) ..,  40
4
 tl 2
<.let(· 2A) = 6  4
2 8
2
(  2)
3
det(A) = (  8) 3
 6
 2
 10
 1
2
4
= (160 + 8  288)  (48  64 + 120) = · ·440  8 =  4t18
3
1 = (  8)((20  1 + 36)  (6 + 8  15)) = (  8)(55 + 1) = 448
5
8. (a) del(4A) =  224 = (4)
2
(14) = (4)
2
det(A)
(b) det(3A)= 63 = (3)
2
(7) = (3)
2
det(A)
9. lf x = 0, the given matrix becomes A ~ [ ~ ~ ~ ] and, since the first and third rows are rropor
o 0 5
tiona!, we have det(A) = 0. If x = 2, the given matrix becomes B = [ ~ i ;] and, since the first
0 0  5
and second rows are proport ional, we have det(B) ::::: 0.
124 Chapter 4
LO. If we replace the first row of the matrix by the sum of the first and second rows, we conclude that
[
b+c b+a] · [a+b+ c b+c+a
det a b c = det a b
1 l 1 1 1
c+b+a]
a = 0
1
since the first and third rows of the latter matrix are proportional .
11. We use the properties of determinants stated in Theorem 4.2.2. Corresponding row operations are
as indicated.
3 6 9 1 2  3
det(A) = 0 0 2 = (3) 0 0  2
A common factor of 3 was
 2 1 5 2 1 5
taken from the first row.
1 2 3
:;:; (3)
0 0 2
the first mw "'''
0 5 ·1
to the third row.
1 2  3
= (3)( 1) 0 5 1
The second and third rows
were interchanged.
0 0 2
= (3)( 1)( 10) = 30
12. det(A) = 5
13. We use the properties of determinants stated in Theorerr1 4.2.2. Corresponding row operations are
as indicated.
1  3 0 l  3
det(A) = 2 4
=
0  2
5 2 2 5  2
3 0

0 2
0 13 2
1  3 0
= (2) 0 1
l
2
0 13 2
1 3 0
= { 2) 0 1
1
 2
0 0
17
2
= (2} en = 17
14. det(A) = 33
0
l
2
2 times the first row was
added to the second row.
5 t imes the first row was
added to the third row.
A factor of · 2 was taken
from the second row.
 13 times the second row
was added to the third row.
EXERCISE SET 4.2 125
15. We use t he properties of determinant s stated in Theorem 4.2.2. Corresponding r ow opera.t.ions are
as indicate'i.
16.
] 7 .
1 2 3 1
det(A) =
5  9 6 3
1 2  6  2
2 8 6 1
1 2  3
0 1 9
=
0 0 3
0 0 108
1 2 3
0 1  9
=
0 0  3
0 0 0
= 39
det(A) = 6
1
0
=
0
0
1
 2
 1
23
1
2
 1
 l 3
2  3
9
0  3
12 0
1
 2
1
 1
5 times row 1 was added to row 2;
row 1 was added to row 3;  2 times
row l was added to row 4.
 12 times r ow 2 was added to row 4.
36 times row 3 was added row 4 _.]
We usc the properties of deter minants stated in T heorem 4.2.2. Corresponding row operations are
as indicated.
0 1 1 1
1 1
1
1
2 2 2
1
1
1
1
0 1 1 1
The first and second rows
det(A) =
2 2 l
= (1)
'2 l 1
0
2 1 l
0
were interchanged.
3 3 3 3 3 3
l 2
0 0
l 2
0 0  :;
3
 3
3
1 l 2 l
= (  1 ) ( ~ )
0 l 1 1
A facto< of l was t ~
'2 I I
3 :l 3
0 from the first row.
I 2
0 0
 3
:i
1 1 2 1
= (  1 ) ( ~ )
0 1 l 1
 ~ times row 1 was added
0
I
 l
2
to row 3; ~ times row 1
3 3
was added to row 4.
0
'2 l
3 3
1 1 2 1
= (1) ( ~ )
0 1 l
l times row 2 was added
0 0
2 1
t o row 3;  1 t.imes row 2
3 3
was added to row 4.
0 0
1 2
 3 3
1 1 2 1
= (1) ( ~ ) (  ~ )
0 1 1
A factor of  ~ was t aken
I
0 0 1
2
from row 3.
0 0
l 2
 3
3
126
18.
19.
1 1
= (1) (
0 1
0 0
0 0
2
1
1
0
1
1
1
1
2
! times row 3 was added
to row 4.
= ( 1) ( ( =
det.{A) =  2
al bt G! + b1 + C1 al
(a) a2 b2 02 + b2 + Cz = 02
03 b3 as+ b3 + C3 .:l3
a1 bt C)
= bz C2
<l3 bJ C:J
a1 + b1 a1  b1 C)
2a
1
(b) az + b2
(12  b'J
C2
=
2a2
a3 + b3 a3  bs CJ
2a
3
Gt a1  b1 Ct
=2 02 02 b2 C2
03 GJ  b3 C3
a1  bl Ct
=2 02  b2 C2
,a3
 hJ CJ
GJ bl Cl
= 2 02 b2 C2
(13 03 CJ
a1 + b1 t + b2t OJ + b3t
bl bt + Ct
bz b2 + C2
bJ b3 + ca
Gl Ot CI
(1.2 b2 C2
(1.3 b3 ca
Add 1 times column 1 to
column 3.
Add 1 times column 2 to
column 3.
J Add column 2 to column 1.
Factor of 2 taken from
column 1.
Add 1 times column 1 to
column 2.
Factor of  1 taken from
column 2.
a 1 + b1t a2 + bzt a3 +
Chapter 4
20. (a) att +bt a3t+b3
(1 t
2
)bl (1  (
2
)b2 {1 t
2
}b3
Add  t times
row 1 to row 2.
Gt + btt
= (1  t
2
) bt
Ct
(1.1 a2
= (1. t2 ) bt b:z
Ct C2
a2 + bzt
b2
C2
GJ
b3
CJ
·a3 + b3t
b3
CJ
Factor of (1  t
2
)
t.Rken from row 2.
Add  t iin:.es
row 2 to row 1.
EXERCISE SET 4.2
al b1 + ta1 c1 + rbt + sa1
la1
(b)
a2 b2 + ta2 c2 + + sa2
= r2
a3 b:1 + ta3 c3 + rb3 + sa3 a a
a 1 b1 C)
 a2 b2 c2
a3 b3 c3
1 X
x2
1 X
x2
21. det{A) = 1 y
y2
= 0 y  x
Y2 _ x2
1 ;;
z 2
0 z x
z2  x2
x2 I
b1
bl
UJ
Ct +rb1 + sa1
C2 + 1'b2 + sa2
C3 + rb3 + sa3
Add t times column 1
to column 2.
Add s times column 1
to column 3.
Add r times col umn 2
to column 3.
1 x x
2
=(yx)(z  x) 0 1 y+x
0 1 z +x
1 X
=(y x)( zx) 0 1
0 0
y + xl = (y  x)(z  x) (z y)
z  y
22. dct(.4) :... (a  b)
3
(a + 3b)
127
23. If we add the first row of A to the second row, the rMult is a lfla.trix B that has two identical row$;
thus dct( A) = dct(.H) = 0.
24. If we add of the first four rows of A t o the last row, t.he result is a matrix B that has a. row of
thus det.(A) = det(B) = 0
(a) \Ve have dct(A) = (k 3)(k ···· 2)  4 = k
2
 5k + 2. Thus, from Theorem 4.2.4, the matrix .4
is invertible if and if lc
2
 5k + 2 f:. 0, i.e. k # 5± fTI .
(b ) We ha..e dE>t.( A) = ( + :1 ..,.. 8 + 8k. Thus, from Theorem 1.2.4, the matri x
A is invertible if and only if 8 + Bk f 0, i.e. k =/' ... 1.
(a) k i: ±2
(a) det(aA) = 3
3
det(A) = (27)(7) = 189
(c) tlct(2A
1
) ''= 2
3
dct{A
1
) = = ¥
{h )
k=L.!
" 1
 1 1 1
(b) det(A ) = det(A) = 7
1 1 1 1
( d) clet((
2
A)t ) det(2A) z3 det(A) = (8){7) = 56
:8. (a) det(· A) = (l)
4
det{A) = (1)(2) =  2
( c) = det(2A) = 2
4
det(A) = 32
&. We have [i
= [ !
2 2 0 0 2 10
2
det(AB) = 6
10
4 5 2
15 6 = 0
22  23 10
(b) d
. ,t  1) 1 1
et.t.
1
= det(A} :;:; 2
(d) det(A
3
) = { det(A))
3
= (  2)
3
= 8
t hus
22  23
4  5
3 9 = 21
3 9
) = 2(6  18) = 24
2 2 2 2
128
Chapter 4
On the other hand, det(A) = (1)(3)( 2) = 6 and det(B) = (2)(1){2) = 4. Thus det(AB) =
det(A) det{B).
30. We have AB = = and so det(AB) = 170. On the other hand,
0 0 2 5 0 l 10 0 2
det(A) = 10 and det(B) = 17. Thus det(AB) = det(A)det(B).
31. The following sequence of row operations reduce A to an upper triangular form:
H
 2 3
2 3
}·
2 3
2 3
1]
9 6 1 9 1 9 1 9 2
2 6 0 3 1 0 0 3 1 0 0 3 1
8 6 12 0 1 0 0 108 23 0 0
0\·' 13
The corresponding LV factorization is
[ !
2 3
'] [ 1
0 0
2
3
1]
9
I)
=
1 0 1  9  2
A=
1 2 0 0
=LU
 G 1  3  1
2 8 6 1 2 12  36 0 0  13
and from this we conclude that det(A) = det(L) det(U) = (1)(39) = 39.
32. The following sequence of row operations reduce A to an upper triangular form:
1 3
I .. l
r
I 3
;]
1
3
_;]
2 'J
2 2 2 2
0 1
l l
;
l 1 1 1
2 '2
4
2 1
2 1 0 1 0 1 2
1 2
1 2
0 1 0 0 6
The corres ponding LUdecomposition is
1 3
l] "'
0 0
l 3
_; ] = LU
2 2
0 1
1
0 l 1
A =
2
2 1 2  1 0 1  2
1 2 3 0 l 1 0 0 6
a.nd from this we conclude that det.(Jt) = dct(L) det(U) :::: (1)(6) = 6.
33. U we add the firs t row of A to the second row, using the identity sin
2
B + cos
2
0 :::: 1, we see that
sin
2
o sin
2
/3 sin
2
1 sin
2
o
det( A) = cos
2
o cos
2
/3 cos
2
1 = 1
1 1 1 1
sin
2
f3 sin
2
1
1 1 = 0
1 1
since the resulti(lg matrix has two identical rows. Thus, from Theorem 4.2.4, A. is not invertible.
34. (a) .A=3or.A=1 (b) .A=6or.A =l (c) >. = 2 or >. = 2
35. (a) Since det(AT) = det{A), we have det{AT A)= det(AT) det(A) = (det(A))
2
= det(A) det(AT) =
det(AAT).
(b} · Since det( AT A)= (det (A))
2
, it follows that det(AT A) = 0 if and only if det(A) = 0. Thus. from
'T'heorem 4.2.4, AT A invP.rt.ihiP. if ano only if A is invertible.
DISCUSSION AND DISCOVERY
36. det(A
1
BA) = det(A I) det(B) dct( A) = dct(B} det(A} = det(B)
37. llxii
2
IIYII
2
 (x. y)
2
== (xi+ T vi+ v5> (XlYl + XzY2 + XsYa)
2
Jx
1
x
2
1
2
+ IXJ X:J I
2
+ lxz :r
3
1
2
= (x1Y2  :r2yd
2
+ (x1y3 X3Y1)
2
+ (x2Y3 X3Y2)
2
IYl Y2 YJ 'YJ Y2 Y3
 2XtY2X2Yl + + xi y§ 2XJY3X3Yl + + 2X2Y3X3Y2 +
38. (a) We: have det =5 4 = 1 and det = 3 6 = 3; thus det(M) =(I){ 3) = 3.
l 2 0 3 0 0
129
(b) We have 2 5 o =
2
1 = 2(5  4) = 2 and 2
 1 3 2
5
 ·3
1 o = (3)(1 )(  4) = 12; th\18 det(M) =
8  4
(2)( 12) =  24.
3 5 1 3 5
121
12
112
39. (a) det(M) =
1
 2 6 2 = (fi + 4) 0 12 12 = (10)(1) = 1080
 4  13
3 5 2 0 4  . 13
det (M) =
2 0
(1){1) ,.,, l
(b) 1 2
jo
()
l
DISCUSSION AND DISCOVERY
Dl. The matrices singular if and only if the corresponding determinants are zero. This leads to
the system of equations
from whir.h it follows t.hat s =
and t =
•
D2. Since det(AB) = det(A) det(B) == det(B) dP. t(A) = det(BA), it i;, always true that det(AB) =
det(BA).
D3. If .4 or B is not invertible then either det(A) = 0 or det(B) = 0 (or both). It follows that det{AB) =
det(A) det(B) = 0; thus AB is not invertible.
130 Chapter 4
D4. For convenience call the given matri'V An· If n = 2 or 3, t hen An can be reduced to the identity
matrix by interchanging the first and last rows. Thus det(An) =  1 if n = 2 or 3. lf = 4 or 5,
then two row interchanges are required to reduce An to the identity (interchange the first and last
rows, then interchange the second and next to last rows). Thus det (An) = + 1 if n = 4 or 5. This
pattern continues and can be summarized as follows:
det(A2k) = =  1 fork = 1,3,5, .. .
det(A2k) = det(Azk+t) = +1 fork= 2, 4,6, .. .
DS. If A is skewsymmetric, then det(A) = det(AT) = det(A) = (l)ndet(A) where n is the size of
A. It follows that if A is a skewsymmetric matrix of odd order, then det(A) = det(A) and so
det(A) = 0.
D6. Let A be an n x n matrix, and let B be the matrix that results when the rows of A are written
in reverse order. Then the matrix B can be reduced to A by a series of row interchanges. If
n = 2 or 3, then only one interchange is needed and so det(B) =  det(A). If n = 4 or 5, then two
interchanges are required and so det(B) = +det( A). This pattern continues;
D7. (a)
(b)
(c)
(d)
DB. (a)
(b)
(c)
(d)
(e)
det(B) =  det(A) for n = 2k or 2k + 1 where k is odd
det(B) = + det(A) for n = 2k or 2k + 1 where k is even
False. For example, if A= I= /2, then det(J +A)= det(2/} = 4, whereas 1 + det{A) = 2.
Thue. F'tom Theorem 4.2.5 it follows that det(A") = (det(A))n for every n = 1, 2, 3, ... .
False. From Theorem 4.2.3(c), we have det(3A) = 3n det(A) where n is the size of A. Thus
the: stu.ternent is false except when n = 1 or riet(A) = 0.
Thue. If det (A) = 0, the matrix is singular and so the system Ax = 0 has infinitely many
solutions.
True. If A is invert ible, then det(A) =F 0. Since det(ABA) = det(A) det(B) det(A) it follows
that if A is invertible and det( ABA) = 0, then det(B) = 0.
'frue. If A ,.,; A  l , then since det( A  l) = det
1
(A) , itiollows that {det( A ))2 == 1 and so det (A) =
±1.
True. If the reduced row echelon form of A has a row of zeros, then A is not invertible.
'frue. Since = det(A), it follows that det(AAT) = det(A) det(AT) = (det{A))
2
:?: 0.
True. If det.(A) f 0 then A is invertible, and an invertible matrix can always be written as
a product of elementary matrices.
D9. If A = A
2
, then det(A) = det(A
2
) = (det(A))
2
and so det(A) = 0 or det(A) = 1. If A= A3, then
det(A) = det(A
3
) = (det(A))
3
and so det(A) = 0 or det(A) = ±1.
DlO. Each elementary product of this matrix must include a factor that comes from the 3 x 3 block of
zeros on the upper right. Thus all of the elementary products are zero. It follows that det(A) = 0,
no matter what values are assigned to the starred quantities.
Dll. This permutation of the columns of an n x n matrix A can be attained via a sequence of n 1
column interchanges which successively move the first column to the right by one position (i.e.
interchange columns 1 and 2, then interchange columns 2 and 3, etc.). Thus the determinant of
the resulting matrix is equal to (1)"
1
det(A).
EXERCISE SET 4.3
WORKING WITH PROOFS
Pl.
If x [] and y = [] then, using cofacto• expansions along the jth w lumn, we have
det(G) = (x1 + Y1)C1; + (xz + Y2)C2; + · · · + (xn +yn)Cn;
131
P2. Suppose A is a square matrix, and B is the matrix that is obtained from A by adding k t imes the
ith row to the jth row. Then, expanding along the jth row ofB, we have
det(B) = (ajl + kao)Gjl + (aj2 + ka;2)Cj2 + · · • + (a;n + ktJ.in)Cjn
= det(A) +kdct(C)
where C is the mat.rix obtained from .t1 by replacing the jth row by a copy of tho ith row. Since
C has two identical rows, it follows that det( C) = 0, and so det( R) = det(A).
EXERCISE SET 4.3
1. The matrix of cofactors from A )s C = ! det(A) = (2)(3) + (5)(3) + (5)( 2) = 1.
&  s 3
Thus adj{A) = lr ! and At = a.dj(A) = 
 2 2 3 2  2  3
I 2
3 3
0 1
0
3. The matrix of cofactors from A is C = 6 4 0 , and det (A) = (2}(2) + (  3)(0) + (5)(0) = 4. Thus
[
2 0 0]
4.
5.
[
2 6
adj(A) = o 4
0 0
adj(A) [
29
I!
4 6 2
·1] 1 1 [2 6 4] I]
G and A
1
=  adj(.4) =  0 4 6 = o 1 .
2
4 4
oo2 1
0 0 2
0
[
0
12 1
 6
12 29
l
12
·2
17
!I 18 9 13 26 13
Xt = 17
::::  = 
X2 = j;
=  =
16 8
16 8
3
;]
132 Chapter 4
 153
6. :r = .:....__ _ _;_ =  = 3
 51
204
y = !..: ,1 = 51 = 4
6 4 1 1 6 1 1  4 6
1 1 2 4 1 2 4  1  1
7. x=
20 2  3
1  4 1
144
2 20 3
61
2 2 20
230 46
= y= =  z= ==
 55
1  4 1
 55
1 4 1
55 11
4  1 2 4  1 2 4 1 2
2 2 3 2 2 3 2 2 3
4  3 1 1 4 1
1 3 4
 2  1 0 2 2 0
2  1 2
0 0 3
30
4 0 3
38
8.
XJ = =  Xz = =
X3 =
1  3
]  11
1 3 1
 11
4 0 0
1  3 1
40
= 
 11
2  1 0 2  1 0
2  1 0
4 0  3 4 0 3
4 0 3
 32  4 2 1  1 32 2 1
1·1 1 7 9 2 14 7 9
11 1 3  1 11 3 1
 .t  2 I  4  2115 1 4 1 4
=  3384 = 8
9. I t=
2
=  =5
Xz =
 I  4 1  423  1  4 2 1  423
2  1 7
9 2 1 7 9
 1
:3  1 1 3 1
I  2  .t 2 1  4
 1  4  32 1  1  4 2  32
2  l 14 9 2 1 7 14
 I 1 ll 1  1 1 3 11
1  2  4  4
=  1269 = 3
1 2 1 4
423
X3 =
 423
x4 =
 423
=  =  1
 423  423
4 2  1 1 2 4  1 1
6 3  1 2 4 6  1 2
12 5  3 4
8
"' 3 4
6 3  2 2
2 3 6 2 2
2
10.
Xt = = = l Iz = ==1
2 2  l 1 2
2 2  1 1 2
4 3  1 2 4 3  1 2
8 5  3 4
8 5  3 4
3 3  2 2 3 3 2 2
EXERCISE SET 4.3 133
2 2 4 1
2 2 1 4
4 3 0 2
4 3
 1
6
8 5 12 4
8 5 3 i2
3 3 6 2  2 3 3 2 6  2
=
2
=  = 1 X4 =
2
::;:  =  1
2
2
1 3 4 1 2 3
2  2  1 3 1 1
4 1 1
21 3
11. x= = =
2 3 4
14 2
1 3  2
 33
12. y= = 
1 2 3
41
1  2 1 3 1 1
3 l 1
1 4  2
13.
[
cosO
The matr ix of cofactors is C =  e
sin9 0]
, and det ( A) = cos
2
(} + sin
2
8 = 1. Thus A is invertible
[
co38  sin(1 0]
and A
1
•. ::: adj(A) = sinO cos @ 0 .
0 0 l
14. Using the identity 1 + tan
2
o = sec
2
a:, we have det(A!) = 1 foro :f. + n1r. Thus, for these values of
15.
n, t.he ma trix A is invertible and A
1
= adj(/1.) = cos
2
o ' · l.an o cos
2
a 0 .
[
tan acos2 a cos
2
a 0 l
0 0 sec
2
a
The coefficienl matrix is 11 = <1 k 2 , and dct(A) = k(k 4).
[
3 3 I]
Thus the system has a unique
2k 2k k
solution if k # 0 and k i 4. In this case the solution is given by:
x =
1 3 1 3 1 l
2 k 2 4 2 2
1 2k k
(k l)(k  6)
2k l k
2(k  1)
 = y =
=
k(k  4) k(k 4) k(k  4) k(k 4)
3 3 1
4 k 2
z=
2k 2k l
k(k 4)
(2k  3)(k  4) 2k  3
k(k 4)
= 
k
3{k
2
 k + 3)
(5k  3}(3k + 2)
16.
r 5' 3
14k  3
X = ..,..,.:
(5k 3)(3k + 2)
y =
3k
z=
5k  3
17. We have det(A) = 1i x
2
y = y(y
2
 x
2
). Thus A is invertible if a.nd only if y =I= 0 a.nd y =I= ±x. The
l [·x·y 0 y2]
formula. for the inverse is A
1
= (
2 2
) y
2
0  xy .
. '!/ Y  X  :ry y2 _ ;r2 ;r2
134
Chapter 4
18. We have det(A) = (s
2
 t
2
)
2
. Thus A is invertible if ;md only if s =/; ±t. The formula for the inverse
is
[
s 0 t 0]
\1 1 0 s 0 t
1
= (s2  t
2
) t 0 s 0 ·
0 t 0 s
19. ]det(A)I =II+ 2] = 3 20. ldet(A)I = ]o 61 = 6
21. ]det(A)J = 1(0 4 + 6) (0 + 4 + 2)] = l41 = 4
22. ]det(A)I = ](1 + 1 + 0) (0 3 + 2}1 = 131 = 3
[33]
23. The parallelogram has the vectors P
1
Pz = (3,2) and P1P4 = (3, 1) as adjacent sides. Let. A=
2 1
.
Then, from Theorem 4.3.5, the area of the parallelogram is ]det(A)] = ]3 61 = 3.
t [2 "o].
24. The parallelogram has the vectors P
1
Pz = (2, 2) and P1P4 = (4, 0) as adjacent sides. Let A ;;;.;
2
Then, front Theorem 4.3.5, the area of the parallelogram is ]det(A)] = ]0 8] = 8.
25. area =
1
.
2
2 0 1
3
1
1 1 =7
1 2 1
1 1
1
26. area 
2
2 2
3 3
27. V = ]det(A)I wlsere A= [
2
2
6 thus V = ]16] = 16.
2 4
28. v = 45
1
1 = 3
1
29. The vectors lie in the same plane if and only if the parallelepiped that they determine is Jegenerate
in the sen.;;e that. it.s "volume" is zero. In this example, we have
1
v = 2
5
0 4 = 16
2 0
and so the \'Cctors do not lie in the same plane.
30. These vectors lie in the same plane.
31. a=± )5(0, 2, 1)
j k
33. U XV= 2
2
3 6 = 36i 24j
3 6
' .
l j k
+ +
32. a= ±fsi(6, 3, 4)
sin f) = l]u X vll = )1296 + 576 = )1872 = 12M
j!u!IJlvll v'4§y49 49 49
34. (a) ABxAC= 1 2 2 = 4i + j  3k
area = ! x ACII = £P
1 1 1
(b) area IIXBII h = thus h = =
EXERCISE SET 4.3
15. (a) v x w = 0
2
j k
2  3 = (14 + 18)i {0 + 6)j + (0 4)k = 3Zi  6j  •lk
6 7
j k
135
{b) U X ( V X W) = 3
32
2 1 = ( 8  6}i  { J2 + 32)j + (18  64)k = 14i 20j 82k
16.
17.
6 4
j k
(c) uxv = 3 2  1 =(6+2)i(9+0)j+(6  0)k =4i + 9j +6k
0 2  3
j k
(uxv)xw =  4 9 6 =(6336)i(28  12)j +( 24  18)k = 27i+40j42k
2 6 7
(a)
(0, 171), ..... 2fJ4)
U X V ~ ~  ~ (a)
I
3
( b) U XV =  2
3
j
4
1
(h) (44,55,22) ( (:) (  8, 3, 8)
k
2 = 18i + 36j 18k = (18, 36, 18) is orthogonal to both u and v.
5
j k
1 5 =  3i + 9j  3k = ( 3, 9, 3) is orthogonal to both u and v.
0 3
i8. (a) u x v = (0,  6,  3)
(b) U X V = (2,6, 12)
vz l) = v xu
U2
.o.
(l
t 12
u x (v + w) .,.,.,
v2 + w:i
= (u x v) + (u x w)
= (ku) x v
136
j
43. (a) u >< v = 1 1
0 3
j
(b) uxv= 2 3
 1 2
44. (a) A = lfu x vlf = 0
+ +
j
45. PtP2 X plp3 =  1  5
2 0
k
.2
1
k
0
2
= 7.ij+ 3k A= ll u x v ii = J49 + 1 + 9 = J59
= 6i +4j +7k
k
2 =  15i + 7j + IOk
3
A= ll u x v lf = J 36 + 16 + 49 = Ji01
(b) A= llu x vii = /IT4
li M X = + 49+ 100 = 4TI
46. A= t II:PQ X = tJll40 = J285
Chapter 4
47. Recall that the dot product distributes across addition, i.e. (a+ d )· u =a· u + d · u . Thus, with
u = b x c, it follows that (a+ d ) · (b x c) =a · (b x c)+ d · (b x c) .
48. Using properties of cross products stated in Theorem 4.3.8, we have
(u + vj x (u  v ) = u x (u  v) + v x ( u v) == ( u x u)+ ( u x (  v) ) + (v x u) + (v x ( v))
= (u x u)  (u x v) + {v x u)  (v x v) = 0  (u x v)  (u x v) 0 =  2(u x v )
49. The vcc: lor AB x AC = 1 3 = 8i + 4j + 4k is perpendicular t o AB aiH.I AC, and thus is per
 t j k .I ) t
 I 3 1
pendic\l lar to the plane determined by t he points A, B, and C.
50. (a) W h
(I
vz 1131 1111 t1311
11
1 vzl) th
e uve v x w = ,  . ; us
W2 W3 W1 W3 W} W 2
(b) lu · (v x w)l is equal to the volume of tlte parallelpiped having the vectors u, v , w as adjacent
edges. A proof can be found in any standard calculus text.
o , and so A
1
= detCA)adj(A) = t 14 7 51. {a) We have adj(A) =
I 0
l] [ 34 21
· 1 1 0
1
(b) The ,.duced .ow echelon fo<mof(A IT];, [:
0
0
0:¥
I
0 I 2
I
t :
_: A
1
=
0 'f  'i
_: ;].
0 7
EXERCISE SET 4.3 137
(c) The method used in (b) requires much less computation.
>2. We have det(AJ.:) = (det(A))k. Thus if A'''= 0 for some k, then det{A) = 0 and so A is not invertible.
53. F:rom Theorem 4.3.9, we know that v x w is orthogonal t.o the plane determined by v and w. Thus
a vector lies in the plane determined by v and w if and only if it is orthogonal to v x w. Therefore,
since u x (v x w) is orthogona.l to v x w, it follows that u x (v x w) lies in the plane determined by
v and w.
>4. Since (u x v) x w = w x (u x v), it follows from the previous exercise that (u x v) x w lies in the
plane determined by u and v.
55. If A is upper triangular, and if j > i, then the submatrix that remains when the ith row and jth
column of A are deleted is upper triangular and has a zero on its main diagonal; thus C;; (the
ijth cofactor of A) must be zero if j > i. It follows that the cofactor matrix C is lower triangular,
and so adj( A) "" or is upper triangular. Thus, if A is invertible and upper triangular. then A_, =
is also upper triangular.
56. If A is lower triangular and invertible, then AT is upper triangular and so (A
1
)T = (A
1
'}
1
is upper
triangular; thus A I is lower triangular.
57. The polynomialp(x) = ax
3
+ bx
2
+ex+ d passes through the points (0, 1}, (1, 1), (2, 1), and (3, 7)
if and only if
d=
a+ b+ c+d=1
Ra + '1b + 2c + d = 1
27 a + 9h + 3c + d = 7
Using Cra.mers Rule, the solution of this system is given by
1 0 0
ll
0
1 1 1 1 1 ·1
1 4 2 1 8 l
7
9 3 12 127
7
:::: 1
b=
0 0 0 1
 12
0 0
1 1 I 1 1 1
8 4 2 l 8 4
27 9 3 1 27 9
0 0 1 I
0 0
l l 1 1 1 1
8 4 ·1 1
8 1
27 9 7 1 12 27 9
c= ==1 d=
12
12 12
Thus the interpolating polynomial is p(x) = x:l  2x
2
 x + 1.
0
1 1
2 1
3 24
=  = 2
0 1 12
1 1
2 1
3
0 1
1
]
2  1
3 71 12
==1
12
138
DISCUSSION AND DISCOVERY
,. ux v
I
I v
Chapter 4
Dl. (a) The vector w = v x (u x v} is orthogonal to both v and u x v;
thus w is orthogonal to v and lies in the plane determined by
u and •t.
(b) Since w is orthogonal to v, we have v • w = 0. On the other hand, u · w = l!ullllwll cos9 =
llullllwll sin(% B) where() is the angle between u and w. It follows that lu · wl is equal to
the area of the parallelogram having u and v as adjacent edges.
D2. No. For example, let u = (1, 0, 0), v = (0; 1, 0), and w = (1, 1, 0). Then u x v = u x w = (0, 0, 1),
but vi= w.
D3. (u · v) x w docs not make sense since the first factor is a scalar rather than a vector.
D4. If either u or v is the zero vector, then u x v = 0. If u and v are nonzero then, from Theorem
4.3.10, we have flux vii= JluiJI!viJsinO where B is the angle between u and v. Thus if u x v = 0,
with u and v not zero, then sin B = 0 and so u and v are parallel.
D5. The associative law of multiplication is not valid for the cross product; that is u x (v x w) is not
in general the same as (u x v) x w. · ·
D6. Let A = [ c (l c)]. Then det(A) = c
2
+ (1  c)
2
= 2c
2
 2c + l f. 0 for all values of c. Thus,
lc c
for every c, the system has a unique solution given by
1
3 (lc)l
4 c 7c 4
x
1
= 'l.c2  2c + 1 = 2c""" 2,.2c_+_l
07. (c) The solution by GaussJordan elimination requires much less computation.
D8. {a) True. As was shown in the proof of Theorem 4.3.3, we have Aadj(A} = det(A)L
(b) False. In addition, the determinant of the coefficient matrix must be nonzero.
(c) True. In fact we have adj(A) = det(A)A
1
and so (adj(A))
1
=
(d) False. For example, if adj(A) =
Ut UJ
(e) True. Both sides "ne equal to v1 l!'J t13 •
W1 W2 W3
WORKING WITH PROOFS
P 1. We have u · v = II ullllvll cos 0 and II u x vII = II ul!ll vI! sin 0; thus tan 0 = II '::.;11 .
P2. The angle between the vectors is () = o: /3; thus u · v = Uullllvll cos( a {3) or cos( a: /3) =
U•V
Uullllvll'
EXERCISE SET 4.4 139
P3. (a) Using properties of cross products from Theorem 4.3.8, we have
(u + kv) x v = (u x v) + (kv x v) = (u x v) + k(v x v) = (u x v) + kO = u x v
(b) Using part (a) of Exercise 5U, we have
u
1
Uz U3 VJ V2 V3
u · {v x w) = Vt v2 V3 Ut 1l2 1l3 = v · {u x w) = (u x w) • v
W} W2 W3 W) W2 W3
P4. If a, b, c, and d all lie in the same plane, then ax b and c x d are both perpendicular to that
plane, and thus parallel to each other. It follows that (ax b) x (c x d} = 0.
P5. Let Q1 = (xt,Yt,l), Q2 = (x2,y2,l), Q3 = (x3,Y3,l), and letT denote the tetrahedron in R
3
·
having the vectors OQ1, OQ2, OQ:.h a..c; adjacent edges. The base of this tetrahedron lies in the
plane z = l and is congruent to the triangle 6P
1
P2h; thus vol(T)"" !area(6P
1
P
2
P
3
). On the
ot.her hand, voi(T) iR equal to timP.s the volume of the p<\rallelepipe(l having OQ
1
, OQ
2
, OQ3,
as adjacent edges and, from part (b) of Exercise 50, the latter is equal to OQ
1
• (OQ:z x OQ3).
Thus
X} Yl
area(6.P1P2P3) = 3voi(T) = · (OQ2 x OQ3) = X2 Y2
'X3 1/3
l
l
1
EXERCISE SET 4.4
1. (a)
(b)
The matrix A= [
1 0
1 has nontrivial fixed points since det(I A)""' 1°
0
1 "'o. The fixed points
1 0 1 1
are the solutions of the system ( l .. A)x = 0, which can be expressed in vector form a..<> x =
where oo < t < oo.
The matrix B . [
0
'] has nontrivial fixed points since det(J  B) = I
1
= o. The fixed
l 0 l 1
points are the solutions of the system (I  B)x = 0, which can be expressed in vector form as
x = t where oo < t < co.
2. (a) No nontrivial fixed points. (b) No nontrivial fixed points.
3. We have Ax= [: :] m = 5x; thus X= [:] ,, an cigcnvedor of A corresponding to the
eigenvalue ..\ = 5.
4. We have Ax = [ =: =;] [:] m thus m ,, an eigenvector of A corresponding to the
eigenvalue >. == 0.
140
5. (a)
(b)
(c)
6. (a)
(b)
(c)
7. (a)
Chapter 4
The characteristic equation of A=[! is det(.XI A) = 1>.=
8
3
1
1 = (,X + 1} = 0.
Thus ..X = 3 and >. =  1 are eigenvalues of A; has algebraic multiplicity 1.
The characteristic equation is I.X =
4
10
).. :
2
1 = (>.  lO)(..l. + 2) + 36 = (>.  4)2 = 0. Thus ,X = 4
is the only it has algebraic multiplicity 2.
The characteristic equation is 1.>.
2
,
0
I = (..l. 2)
2
= 0. Thus ,X= 2 is the only it
1 "'2
has algebraic multiplicity 2.
The characteristic equation is .X
2
 16 = 0. Thus ,X = ±4 are eigenvalues; each has aJgebra.ic
multiplicity 1.
The characteristic equation is >..
2
= 0. Thus ,\ = 0 is the only eigenvalue; it has aJgebra.ic
multiplicity 2.
The characteristic equation is(>.. = 0. Thus >.. = 1 is the only eigen.value; it fi8s algebraic
multiplicity 2.
.>.  ·1 0  1
The characteristic l!quat io!l is 2 .>. 1 o = ).
3
 6).
2
+ 11 .A 6 = (.A  1)(>.  2){>.. 3) =
2 0 A  1
0. Thus .A = 1, >.. = 2, and >. = 3 are eigenvalues; each has .algebraic multiplicity 1.
.A4 5 5
(b) The characteristic equation is .X 1 1 = A
3
 4>.
2
+ 4A =..\(A  2)
2
= 0. Thus A= 0
t 3 ). + 1 ..
and ). = 2 are eigenvalues; >. = 0 has algebraic multiplicity 1, and ). = 2 hFlS multiplicity 2.
).. 3 4 1
(c) The characteristic equation is .x + 2  1 = ).
3
 >..
2
 8>. + 12 = (,X+ 3)(A 2)
2
= 0. Thus
 3 9 ).
A =  3 and ,\ = 2 are eigenvalues; A=  3 has multiplicity 1, and >.. = 2 has multiplicity 2.
8. {a) The characteristic equation is >.
3
+ 2..\
2
+ A=>.(>.+ 1)
2
= 0. Thus >. = 0 is an eigenvalue of
multiplicity 1, and ).. = 1 is an eigenvalue of multiplicity 2.
(b) The characteristic tlltuation is ).
3
 6)..
2
+ 12).,  8 = (A 2)
3
= D; thus,\ = 2 is an eigenvalue of
multiplicity 3.
(c) The chara.ctcrist.ic equation is >..
3
 2A
2
 15>. + 36 = (,\ + 4)(A  3)
2
= 0; t hus A= 4 is an
eigenvalue of multiplicity 1, and ,\ = 3 is an eigenvalue of multiplicity 2.
9. (a) The eigenspace corresponding to >. = 3 is found by solving the system [ (;] = This
yields the general solution x = t, y = 2t; t.hus the eigenspace consists of all vectors of the form
[:J = Geometrically, this is the line y = 2x in the xyplane.
The eigenspace to A =  1 is found by solving the system [ =: (:] = This
yields solution x = 0, y = t; thus the eigenspace consists of all vectors of the form
= t Geometrica.lly, this is the line x = 0 (ya.x.is).
(b) The eigenspace corresponding to .A = 4 is found by solving the system [ =: !] [:] = . This
yields the general solution x = 3t, y = 2t; thus the eigenspace consist of a ll vectors of the form
[:J = Geometrically, this is the line y = 3x
E.XERCISE SET 4.4 141
(c) The eigenspace corresponding to >. = 2 is found by solving the system [
0 0
) (x] = (
0
]. This
1 0 y 0 .
yields the general solution x = 0, y = t; thus the eigenspace consists of all vectors of the form
[:] = Geometrically, t his is the line x = 0.
11). (a) The eigenspace corresponding to>. = 4 consist s of all vectors of the for m (:) = this is
the line y = The eigenspace corresponding to >. =  4 consists of all vectors of the forrri
[:] = t this is t he line y = 
11.
(b) The eigenspace corresponding to>.= 0 consists of all vectors of the form = + this
is the entire xyplane.
(c) The eigenspace corresponding to A. = 1 consists of all vectors of the form (:J = t this is the
line x = 0.
(a)
(b)
(c)
The eigenspace corresponding to >. = 1 is obtained hy solving "
[
 [oool
yields the general solution x == 0, y = t, z = 0; thus the eigenspace consists of all vectors of the
• [ t [:] ; thls cones pond• to a Hne thmugh the o•igin (the yaxis) in R
3
• Slmilad y,
the eigenspace to !. 2 consists of all vecto" of the fo<m [:] = t [ = , and the
eigeosp""e conesponding to !. 3 consists of all vccto" of the fo<m [:] t [ : ]·
The eigenspace corresponding to >. == 0 is found by solving the system
5
f :] =
· i :l l v 0
This yields the general solution x = 5t, y = t , z = :H .. Thus the eigenspace of all vectors
of the fo<m [:] = t [!]; this is the line through the odgin >nd the point. (5, 1, 3).
The eigenspace con ... pondh;g to!.= 2 is found by •olving the system [=I ! :] m = This
yields [:] = s + t [:], which conesponds to a plane thwugh the odgin.
The eigenspace corresponding to>.=  3 is fo:1nd by solving [:] = This yields
3 9 3 z 0
[: ] = t [ , which cones ponds to a Hoe th•ough the odgin. The eigenspa<e to
!. = h found by solvi ng u : ][:] = m This [ l f l] , whichabo con&
sponds to a line through t he origin.
142 Chapter4
12. (a) The cigenspA£e CO<J<SpOnding to A = 0 consists of vectors of the form m = t [ !]. and the
. concsponding to A = 1 consists of vectorn oft he focm [: ] = t [ ]·
(b) The eigenspa.;e conesponding to ). = 2 consists of vectors of the fe<m m = t Li l
(c) The eigenspace cnn esponding to A =  4 consists of vectorn of t he fOTm [:]  t [: ] • and the
eigenspace conc•ponding to A= 3 oonsi•t• of vectOTS of the form m = t [ :] ·
13. ( a ) The c.ha.md.t1ristic polynomial is p(>.) = (>. + 1)(>. 5) . The eigenvalues a.rc >. = ..:_ 1 and>.= 5.
( b) The characteristic polynomial is p(A) = (.A  3) (.X  7)(>. 1). 'The eigenvalues are >.= 3, ). = 7,
and A= 1.
(c) The characteristic polynomial is p(A) = (>. + l)(.X The eigenvalues are.\= s
(with multiplicity 2), ). = 1, and>.=
?0
2
?.]
14. 'l'wo examples are A :::: L v
.o 0 0 1
and B = [
3
0
0
1
u
0
0
1
 1
15. Using t he block diagonal structure, the characteristic polynomial of t he given matrix is
p(),) =
),  2  3
l
0
0
),  6
0
0
0
0
.\+2
1
0
0
 5
>  2
= !).  2 3 If), + 2  5 I
l >. 6  1 >. 2
.::: [{>  2)(>.  6) + 3)[(>. + 2)(>  2) 5J =' " (>
2
 8), + 15)(,\
2
 9) = (> 5)(.),  3)
2
(X + 3)
Thus the eigenvalues are 5, ), = 3 (with multiplicity 2), and), = 3.
16. Using the block triangular structure, the characteristic polynomial of the given matrix is
I A 011 ). + 2 0 I ,
p(>) = I = > 2 (). + 2)(>. 1)
 1). 0 >  1
Thus the eigenvalues of B are ). = 0 (with muitiplicity 2), ,\ =  2, and ), = l.
17 . The characteristic polynomial of A is
>. +1 2
p( A) = det( AI  A) = 1 ).  2
1
2
1 =(.\+ 1)(>. 1)
2
).
EXERCISE SET 4.4 143
thus the eigenvalues are .X = 1 and ..\ = 1 (with multiplicity 2}. The eigenspace correspontJing to
.>. =  1 is obtained by sol.ing the system
whkh yield$ [:] l [  :]. Similarly, the eigenspace co"esponding to !. = I is obtained by sol viug
H : :1m = m
which has the general solution ' = t, y " t  ·'· ' = ' ' or (in vedorform) = ·' [:] + t [1]
The eigenvalues of A.!
5
are). = (  1)
25
= 1 and >. = ( 1)
25
= 1. Correspcnding eigenvectors the
same as above.
18. Thoeigenvalues of A are !. = J , !. = j, !. = 0, end !. = 2 ng eigcnveclo" are [ :j. r:J. .
rll· and [ r"'peotively. The eigenvalues of A
9
are A= (1)' 1, A = (!)' = ,:, . :0)' =
0
0.
and >. == {2)!} == 512. Corresponding eigenvectors arc the same as above.
l9. The characteristic polynomial of A is p()..) = >.
3
 >.
2
 5)..  3 = (..>. 3)().. + 1)
2
; thus the cigeo
are .>.
1
= 3, .A2 = 1, >..3 = l. We bnve det( A) = 3 and tr(A) = 1. Thus det(A) = 3:;
(3)( 1)(1} = .A
1
.A2>..3 and tr(A) = I= (3) + ( 1) + ( 1) =.At+ .X2 + .>.3.
W. The characteristic polynomial of A is p(>..) = >.
3
 6.:\
2
I· 12.>  8 = (.>. 2)
3
; thus the eigenvalues are
At = 2, \2 = 2, A3 = 2. We have det (A) = 8 and tr(A) = (). Thus det(A) = 8 = (2)(2)(2) = ..\.
1
.\2.:\3
and tr(A} = 6 = (2) + (2) + (2) = )q + .:\2 + .\3.
!1. The eigenvalues are .\ = 0 and .>. = 5, with associa.t.ed eigenvectors
( ann GJ respectively. Thus the eigenspaces correspond to t he
perpendicular li nes y = !x andy .::::: 2x.
:2. The eigenvalues are ..\ = 2 and >. = 1, with associated eigenvec
tors [ v;] and [ respectively. Thus the eigenspace::; correspond
to the perpendicular lines y""' andy=  J2x.
144 Chapter 4
23. The inv<l.riant lines, if any, correspond to eigenspaces of the matrix.
(a) The eigenvalues 2 and.\:::::: 3, with associated eigenvectors g] and Thus
the lines y = 2x ami y = x arc invariant under the given matrix.
(b) This matrix }m.':i no rt'lal eigenvalues, so t her e are no invariant lines.
(c) The only eigenvalue is A= 2 (multiplicity 2), with associated eigenvector Thus the line
y = 0 is invariant under the given matrLx..
24. The char acteristic polynomi al of A is p{A) = >..
2
 (b + l )A + (b 6a), so A has the stated eigenvalues
if and only if p( 4) = p(  3) = 0. This leads to the equations
from which we conclude that a = 2l b.,., 0.
6a  4b = 12
6a + 3b = 12
25. T he characteristic polynomial of A i:; p( A) = >.
2
 (b + 3)A + (3b  2a), so A has the s tated eigenvalW!S
if l.l.lld onl y if p(2) = p(5) = 0. This leads to the equations
 2a + b = 2
a+ b=5
from whi ch we conclude t.hat a = 1 and b = 4.
26. The chl'l.r!'l.ctcristic polynomial of A is p(>. ) = (A  3)(A
2
 2Ax + x
2
 4). Note that the second factor
in this polynomial cannot. have a double root (for any value of x) since ( 2x)
2
 4(x
2
 4) = 16 # 0.
Thus t he only possible repeated eigenvalue of A is .> = 3, and this occurs if and only if .\ = 3 is a
root of t.he second factor of p(A), i.e. if and only if 9 6x + x
2
 4 = 0. The roots of this quadratic
equation are x = 1 and :t  5. For t hese values of x, A= 3 is an eigenvalue of multiplicity 2.
27. lf A'l =.f. then A(x ·l tlx) = Ax+ A
2
x :: Ax+ x = x +Ax; thus y = x +Ax is an eigenvector of A
corresponding to A = 1. Sirni lady, z = x · Ax is a.n eigenvector of A corresponding to A = 1.
28. Accorrling to Theorern 4.4.8, the characteristic polynomial of A can be expressed as
where >.1, >.
2
, ... , Ak arc the distinct eigenvalues of A and m
1
+ m
2
+ · · · + mk = n. The constant
t.crrn iu this polynomial is p(O) . On the other har.J, p(O) == det(  A) = (l)ndet(A).
29. (a) lJsinf:! Formulu. (22), the d1atacteristic equation of A is .\
2
+ (a+ d)>..+ (ad  be)= 0. This is n
qundratric equation with discriminant
(a+ d)
2
 4(ad  be) = a
2
+ 2ad + d
2
 4ad + 4bc = (a  d)
2
+ 4bc
Thus the eigenvalues of A are given by .A= d)± J(a d}'l + 4bcj.
{b) If (u d)
2
+ 4/>c > 0 then, from (a) , the characteristic equation has two distinct real roots.
(c) If (a df + 4bc ::o 0 then, from (a), il:l one real eigenval ue {of multiplicity 2).
{d) If {n. dV + 4bc < ()then, from (a), t here a re no real eigenvalues.
EXERCISE SET 4.4 145
30. lf (a d)Z + 4bc > 0, we have two distinct real eigenvalues A
1
and Az. The corresponding eigenvectors
are obtained by solving the homogeneous system ·
a):tt  bx2 = 0
 cx1 + (Ai d)x2 = 0
Since A; is an eigenval ue this system is redundant, and (using the first equation) a general sol ution
is given by
x
1
= t,
Finally, setting t = b. we ;,;ee that [a =b>.J is an eigenvector corresponding to >. = ..\i·
n. If the characteristic polynomial of A is p(.A) = A
2
+ 3>. 4 = (>. l){A + 4) then the eigenvalues of
A are >.1 = 1 and A2 =  4.
(a) From Exercise P3 below, A
1
has eigenvalues .\
1
= 1 and ,\2 =
(b) From (a), together with Theorem 4.4.6, it follows that. A
3
has eigenvalues A
1
"'""(1 )
3
= 1 and
>z= (   o\.
(c) Prom P4 below, A  41 has eigenvalues AJ = 1 4 = 3 and ,\2 =  4  4 =  8.
(d) From P 5 below, 5A has eigenvalues)" = 5 and ..\
2
;::: 20.
( e) From P2( a.) below, t he eigenvalues of <'lA:r + 2I = ( 4A 1 21) T arc the same !'IS those of 4A + 2J;
namely A
1
= 4(1) + 2 = 6 and ..\2 = 4( 4) + 2 = \4.
l2. If ...lx = Ax, where x ? 0, then (Ax) · x = (..\x) · x .,. >.(x · x) == .AIIxll
2
and so A= (i1:
1
?t.
'3. (a) The characteristic polynomial of the matrix C is
>.. 0
()
0 co
 1 A 0 0 Ct
p( ,\} = det ( >. J  C) ""
0 I
,\
0 Cz
0 0 0  1
A+ Cn 1
Add .>. l ime.> the row to the first row, then expand by cofactors a.long the first column:
0
A2
0 0 Co+ CtA
)..'2
0 0 co +Ci A
 1 >. 0 0 Ct
..\ 0
.>.
1
Cz
p(A) =
0
1
0 cz
=
0 0 0  1
..\ + C:n  1
0 0 1
..\+ Cn 1
Add A
2
times t.he secvnd row to the first row, then expand by cofactors along the first column.
0
Az
0 co+ Ct >. + cz>..
2
,\2
0 co + C) A+ C2A
2
 1
).
0 C2
p(.>.) = =
0 0
 1 ..\ + c,._l
0
1
A+ Cnl
146 Chapter4
Continuing in this fashion for n  2 steps, we obtain
= 1,\nl co + Ct A + c2.\
2
+ · · · + Cn.2>.n
2
1
1 >+Cnt
= Co+ CtA + C2 A
2
+ · · · + Cn2A
11

2
+ <'n  1An l + >."
(b) The matrix C = [l
0
0
1
0
0 2]
0 3
hasp(>.) = 2  3>. + .>
2
 5>.
3
+ .>.
4
as its cha.ra.cteristic poly
o 1
1 5
nomial.
DISCUSSION AND DISCOVERY
Dl. (a) The characteristic polynomial p(.>.) has degree 6; thus A is a 6 x 6 matrix.
(b ) Yes. From Theorem 4.4 .12, we have dct{A) = {1)(3)
2
(4)
3
= 576 =/: 0; thus A is invertible.
D2. If A is :;quare mn.trix all of whose entries arc the sl:l mc then det(A) = 0; thus Ax = 0 has nontrivial
solutions and >. = 0 is an eigenvalue of A.
03. Using Formul a (22), the characteristic polynomial of A is p(..X) = ).
2
 4.\ + 4 = (..\ 2)
2
. Thus
). = 2 is the only eigenvalue of A (it has multiplicity 2).
04. The eigenvalues of A (wit h multiplicity) arc 3, 3, and  2, 2,  2 . . Thus, from Theorem 4.4. 12, we
l1ave det(A) = (3)(3)(  2) ( ··2)( 2) =  72 m1d tr(1t) = 3 + 3  2 2 2 = 0.
D5. The matrix A = [a bl j satisfies the condition tr(A) = det(A) if and only if a+ d = ad be. If
c d
d = 1 then this is equation is satisfied if and only if be =  1, e.g., A = lf d f. 1, then the
t
. · · t . r. d ·r 1 1 ·r d· tbc A [
1

1
]
equa 1011 1s sa. 1S1l€ 1 an< on y 1 a = <J ••
1
, e.g.,
1 2
·
06. The characteristic polynomial of A factors a.s p(..X) = (>. 1)(\ + 2)
3
; t hus t he eigenval ues of A are
). = 1 and).=  2. It follo\vS from Theorem 4.4.6 t hat the eigenvalues of A
2
are ). = (J)
2
= 1 and
>. = (  2)
2
= 4. .
D7. {a)
(b)
(c)
(d)
DB. (a)
(b)
False. For example, x = 0 sa.tisfies this condition for any A and .>. . The correct statement is
t.hat if Ax = ..\x for some nonzero vector x, then x is a n eigen\'ector of A.
True. If >. is an eigenva lue of A, then .\
2
is an eigenvalue of A
2
; thus (.>.
2
I  A
1
)x = 0 has
uontrivial sol utions.
False. If A = 0 is an eigenvalue of A, then the system Ax = 0 ha.s nontrivial solutions; thus
A is not invertible anrl so the row vectors and column vectors of A are linearly dependent.
!The statement becomes true if "independent" is replaced by "dependent" .J
False. For example, A= G has eigenva.lues ). = 1 and .\ = 2. !But it is true that a
:;ymmetric mat rix has real eigenvalucs.J
fl OJ . [1 OJ
False. For example, the reduced row echelon form of A = h
2
l S I =
0 1
.
True. We havP. A(x
1
+ x2) = A
1
x
1
+ .>.2x2 and, if ..X1 f. >.2 it can be shown (sine"' x1 and X2
must be linearly indep(:ndent) thal .)qxl + A2Xz f. {J(xl + X2) for any value of a.
WORKING WITH PROOFS 147
(c) '!'rue. The characteristic polynomial of A is a. cubic polynomial, o.nd every cubic polynomial
has at least one real root.
(d) 1Tue. If r(A) = .>." + 1, then det(A) = (l }np(D) = ± 1 =I 0; thus A is invertible.
WORKING WITH PROOFS
Pl. If A = [ac b], t hen A
2
= [
02
+be ab + bdl
2
and tr(A)A = (a+ d) [a b] = [
02
+do ab + d!]; thus
d ca + de cb+ d
2
j c d ac+dc ad+ a
A
2
_ tr (A)A = [be 
0
ad cb _o ad] = ( de
0
t(A) 0 ]
_ det.(A) = det(A)J
nod so p(A)....: A'l  tr(A)A + det(A)J = 0.
P 2. (a) Using previously estabJished properties, we have
11
1
') = det(Afr · AT)= dct((M A)r) = det(M  11)
Thus A a11d AT have t.hc *lame characteristic polynomial.
(b ) The e;gcnvalues are 2 and 3 in each case. The eigenspace of A conesponrling to ). = 2 is ob
tai ned by solving the system r:J t.he eigenspace of AT corresponding ': '
t o >.. = 2 is obto.inf:d by solving [:] Thus the eigenspace of A corresponds to the
line y = 2x, whereas eigenspace of A'r corresponds to 11 = 0. Simtlarly, for A 3, the
eigf'nspacc of A corresponds to x = 0; whereas the eigeuspace of AT corresponds toy=
P3. Suppose that Ax = .Xx where x i 0 and A is invert.ible. Then x = A
1
Ax = A
1
>.x = AA
1
x
and, s1nce ,\ :f; 0 (because .·1 is invertible), it follows that A
1
x = }x. Thus l is an eigenvalue of
.t\  t and x is cormsponding
P4. Suppose tlt;H Ax .\.x whcr<' x :f. 0 . T hen (A  sJ)x = Ax  six = >.x  sx = (A s}x. Thus
A sis an C'i);»nvaluc: nf A  sf and a r.orresponding eigenvector.
P 5. Suppo:;e that, Ax = AX where x :f 0. Then (sA)x = s(Ax} = s(.>.x) = (s>..)x. Thus s.>. is aa eigen
value of sA and x is a. corres ponciing eigenvector.
P 6. If the matrix A = [: :] is symmP.tric, then c :=: b a.nd so (a d)
2
+ 4b<: = (a  d)
2
·+4b
2
.
ln the case that A has a repeated eigenvalue, we must have (a d)
2
+ 4b
2
= 0 and so a= d a.nd
b = 0. T hus t he only sytnructric 2 x 2 matrices wi Lh repeated eigenvalues arc those of the form
A """ al . [;ud1 a. ntatrix has .>. = a as its only eigenvalue, and the corresponding eigenspa<:e is R
2
.
This prov<•s part (a) of Theorem 4.4..\ I.
If (a d)
2
+ 4b
2
> 0, t hen A has two distinct real eigenvalues .>.
1
and ..\2, with corresponding
eigenvectors x
1
and x2, g.iven by:
.>. 1 = +d)+ J(a d)2 + 4b2)
148 Chapter 4
The eigenspaces correspond t o the lines y = m
1
x andy = mzx where m; = a = ~ ' , j = .1, 2. Since
(a >.
1
){a >.
2
) = (H(a d)+ J(a d)2 + 4b2J) (t!(a  d)  J(a  d)2 + 4b2])
= t((a  d)
2
 (a  d)
2
 4b
2
l =  b
2
we have m
1
m
2
=  1; thus the eigenspaces correspond to perpendicular lines. This proves part (b)
of Theorem 4.4.11.
Note. It is not possible to have (a  d)
2
+ 4b
2
< 0; thus the eigenvalues of a 2 x 2 symmetric
mat rix must necessarily be real
P7. Suppose that Ax = .XX a.nd Bx = x. Then we have ABx = A(Bx) = A(x) = >.x and BAx =
B(Ax) = B(>.x) = .XX. Thus >. is an eigenvalue of both AB and BA, and xis a corresponding
eigenvector.
CHAPTER 6
Linear Transformations
EXERCISE SET 6.1
1. (a) TA: R
2
? R
3
; domain = RZ, codomain = R
3
(b) TA: R
3
? R
2
; domain = R
3
, codomain= R
2
(c) TA: R
3
? R?; domain= R
3
, codomain= R
3
3. The domain of Q, the codomain ofT T(l ,  2) = (1,2,3).
4. The domain ofT is R
3
, the codomain ofT is R
2
, o.nd T(0,  1,4) = (2, 2).
6. (a)
[
2
T(x) =
[
2xl + :!.:7 + 4XJ]
= + 5x:z + 7xa
6x1 
(b) T(x) =
7. (a) We have = b !f and only if xis a solution of the linear syst,em
[
1 2 [XJ ] [ 1]
0 l 3 X2 =  1
2 5 3 X;! 3
The reduced row echelon form of the augmented matrix of the above system is
0
6 111
1  3
o o oJ
and it follows that the system ha..o; the general solut.ion x
1
= 1  6t, x2 = 1 + 3t, x
3
= t.
Thus any vee to< of the fo<m X [:J + t nl will have the pwpe<ty that [J
(b) We have TA (x) = b if a nd only if x is a. solution of the linear system
=
2 5  3 XJ 1
165
166
Chapter6
The reduced row echelon form of the augmented matrix of the above system is
0 0 0 1
and from the last row we see that the system is inconsistent. Thus there is no vector x in
R' for which TA(x) l:J·
8. ( a) We have TA(x) = b if and only if xis a solution of the linear system
[
[=:] =
1 2 0 5 xa 4
The reduced row echelon form of the augmented matrix of the system is
0
1
0
2 1
1 2
0 0
and it follows t.ho.t the system has general solution x
1
= 2 2s + L, .r
2
= 3 s 2t,
X3 S, x, (. any v.,.;tor ofthe form X m +s [ =r] + t [  ; ] will have t he p<Operly
that TA(x ) =
(h) We have TA (x ) = b if and only if x is a sol ution of the li near system
[
! [::] =
 1 2 0 5 X3 3
The reduced row er.hclon form of the augmented matrix of t he system is
0 0 0
Thus the system is inconsistent; there is no vector x in R
4
for which TA(x) =
9. (a). (c), and (d) are linear t ransformations. (b) is not linear; neitht!r homogeneous nor
10. (a) and (c) are linear t ransformations. (b) and (d) are not linear; neither homogeneous nor addit ive.
11. (a)_ and (c) are linear transformations. (b) is not linear; neither homogeneous Mr addit ive.
12. (n) nml (c) o.re (b) is not linear ; nPi t her homogene<ms nor add:tive.
exercise Set 6.1 167
13 . This lmnsfonn• lion can wri tten in • [o,m "' [ : : ] [! =: :] l::J ; it is a linear t rans
formation with domair@. and
The domain is R
2
o.nd the codoma in is R:i.
·.( __ __  · • .,.,.J· ..__ _ _   ··  ·. ····· ...
15. This transformation can be written in matrix form as =  l [=:]; it is a linear
1U3 2 4 1 XJ
transformation with domt nd
16 . clomain:ls R
4
land the codomain(is R
2
. !
( ' ···:_) 1.._ ' .
· ... 
17. [TJ "" !T(eJ) T(e
2
)] = 18. [T] = [T(et) T(e:z) T(e3)};:;; [!
4 0 2 2 IJ
19. (a)
(b)
We have T( l , O) = (1, 0) a nrl T(O, 1) = (1, 1); thf' st<lP.d'l.rd matrix is [TJ =
ing the matrix, we have T ( [ = [ [ :] = [:]. This a grees with direct calculation
using t he given formula: T{ 1, 4) = ( ( 1) + 4, 4) = (5, 4).
We have T(l,O,O) = (2, 0,0), T(O, 1,0) = (1, 1,0), and 1'(0,0, 1) = (1,1,0) ; t hus the stan
"G : l U<ingthc matri x wc haveT ([J) [i
[ This usmg the given formula:
l ll [_rJ =
T (2, l ,  3) = (2(2)  ( 1) I (3j, 1 + ( 3), 0) = (0, 2,0)
20. (a) [1'] =
(b) ITl = [
3 0
]
0  5
r([_m [:, Ul
T ( [ = [
21. ( a) The stn.ndard mat.rix of the transformation is IT] = [: i
(b ) If x = ( 1, 2,4) t.hen, using the equations, we have
T(x) = (3(1) + 5{2) (4),4( 1) (2) + (4) , 3(1) + 2(2)  (4)} = (3,  2, 3)
On the other hand, using t he matrix, we luwe
168
Chapter 8
22. (a)
{T] = [2 3 1] .
3 5  1
( b)
T ([m = _:] nl = r:]
23.
(a)
=
(b)
r o 1) [2] = [1]
l1 0 1 2
(c)
=
(d)
[ = GJ
24.
(a)
=
(b)
=
(c)
=
(d)
[  = [ 
25. (a)
[to tJ n = [ t>J [070l
(b)
r:J = r:J
72 72 4 72 4.950
(c)
[  = [ =!J
(d) [>f !] [3] [ >f'+•] [4598]
! :/,f 4  + 2VJ  1.964
'26.
(a)
[ 1] [•] [2J3  [1 946]
4 3 = 2 + 4.598
{b )
=
. .
(c)
(d ) [ l [ 2 + ¥ ]  [ 4598]
:1/ 4 3  2/3 + 1.964
27. (a) [ ! [ l + [19641
! 4 = 3'{3 + 2 4.598.
(b)
[ l 4] rJ [ + JJ] [2482]
4 4 = ¥ + 3 4.299
28. (a)
[•] [ 2 ¥ l [ 4598]
3 = 2v'3 +  l.9G4
[ 1
4] [•] [ 1 ¥ l [ 0.2991 (b)
4
':i
3 = 0.518J

29.
, . [   [ 8 sin 8]
1 he matnx A =
1
corresponds to R
8
= .
8 8
where B =
3
; (135°).
 S i ll COS
30.
' . 72] [cos28 sin29)
rhe matnx A = I I corresponds to He = . where B = i (22.5° ).
 7l SJn 28  COS 28
31. (a) liL
__ [cos 28 sin28] __ [cos
2
8sin
2
8 2sin9cos8]
.
2
wherecosB = a.nd sin8 = •
sm28  cos 28 2sin8cos0 sm .. 8  cos 8 v l +m• '1/JTrn
Thus Hr.= [
1
;:
2
m;r:_
1
] .
{b) p = ( cos
2
8 sin8cos9] = 1 [1 m]
L :Jin9 cotJ 9 oinl(J l+,....'l m m
2
Exercl &e Set 6.1 169
32. (a) We have m == 2; thus H = HL = [! ;J and H ( [!]) = [!] = t [z:] = ·
(b ) We have m = 2; thus P = h = [ ! ] and P ( [ !] ) = 1 [!] = ! ( = [ ::! ] ·
33. (a) We bave m = 3; thus H = HL = fo [: !] = !] and the reflection of x = [: ] about
the line y = 3x is given by H ([:]) = g !] [:] = k =
(b) We have m = 3; t hus P = PL =
!] and P ([:]) = fo :] [:] =
1
1
0
=
34. If Tis defined by the formula. T(x,y} = (0,0), then T(cx, cy) = (0,0) = c(O, O) = cT(x,y) and
T(x1 + X2, Yt + Y2) = (0, 0) = (0, 0) + {0, 0) = T(x1 , Y1) + T(x2 , Y2); thus Tis linear.
If T is defined hy T(x, y).::: (1, 1), then T(2x, 2y) = (1, I) f. 2(1, 1) = 2T(x, y) and T(x
1
+ x
2
,
Yl + Y2) ::..., (1, l) =I (1 , 1) + (1, 1) = T(x
1
, yl) + T(xz, Y2); thus T is neither homogeneous nor ad
Jitive.
36. The gi ven equations can we writt.en in matrix form us r [ :] = [;], and so
r:J = r:J = l 3:
.. . .... ··  . ... .
 · <'' __ ........ _ ... .... ..
Thus t he image of t he line· ·:e + y = 1 corresponds to { s + 4t ) + (3s t) = 1, i.e. t o Lhe line
2s  .:.=: 1. ·  ··· ·
[
1 00
37. (a) [T] = {b) IT] =
0 (iJ 1
(c) [T] = !
38. (a) (b)
.f
170
(c) (d)
X
[
1 0]
39. T(x, y) = (  x, 0); thus IT} =
0 0
.
40. T(x,y) = (y,  x ); thus IT]= [i
DISCUSSION AND DISCOVERY
Dl. (a) False. For example, T(x,y} = {x
2
,y
2
) satisfies T(O) = 0 but is not linear.
(b) True. Such a. transformation is both homogeneous and additive.
(c) True. This is the transformation with standard matrix (T] = [.
Chapter 6
(d) True. The ·z.cro transformation T(x, ·y) o:: (0, 0) is the only linear transformation with this
property.
(e) False. Such a transformation cannot be linear since T(O) = v o f. 0.
D2. The eigenvalues of A are >.= 1 and .>. =  1 with corresponding eigenspaces given (respectively)
by and where oo < t < oo.
D3. From familiar trigonometric identities, we have A = . ·e
[
cos 29 sin 28] __ R .
sin 29 cos 28 "
Thus muJtiplication
by A corresponds to rot ation about the origin through the angle 2fJ.
D4. Jf
A · R _ [cos O .. sin 81 h AT _ [ cos 0 sin 8] _ [cos( 9)
/ 1 · .. o  t en  .. _
sin9 cosO' sin9 cosO sin(8)
= R o· Thus multipli
cation by !1.,. corresponJs to rotation t hrough the angle 0.
D5. Since T(O) :::: x
0
=f:. 0, this transformation is not linear. Geometrically, it corresponds to a. rotation
followed by a translation.
D6. 1f b = 0, theu f is both addi t ive and homogeneous. If b =f:. 0, then j is nei ther additive nor homo
geneous.
D7. Since Tis linear, we have T (x
0
+tv) = T(x
0
) + tT(v). Thus, if T(v) =f:. 0, the image of the line
x = xo +tv is the line y = Yo+ tw where y
0
= T (xo) and w = T(v) . lf T(v) = 0, then the image
of x = xo + tv is the point Yo = T(xo).
EXERCISE SET 6.2
1. ArA = [_:I]
(_: iJ·
[
aQ _
2 1
J [ !Q llJ rl 0
1
[aQ _ll J
2. AT A=
2
!.1
29 29 29
 ·thus A is orthogonal and A
1
=AT=
29 29
ll !Q  £! :o  0 l ' ll .aQ •
' D 'iG 29 zg
Exercise Set 6.2
171
u
 5
 !s
4
5
_ g
25
1] .
16
25
12] [ 4
25 5
5 25
l2 · u
2S 25
I 2 1
76 73
0
=
3 16
5 25
[
1 0 0]
0 1 0 ;
0 0 l
thus A is orthogonal and A l = AT =
5. (a) AT A = rt = thus A is orthogonal. We have det(A) = 1, and
A = [ _ = Ro where B =
3
;. Thus multiplication by A corresponds to CO\tnterclock
wise rotation about the origin through the angle
3
;.
(b) [ J [ ; thus A is o'thogonal. We have det(A) 1, and so
A = [ =He wbere fJ = i · Thus multiplication by A corresponds to ref!ed ion
the line origin through the origin maki ng an angle with the positive x axis.
6. (a)
AT A= [ 4 1J [ 1] = [
1 0
1
]; thus A is orthogonal. We have det (A) = l, and
_v'J * :a l 0
2 < 2 2
[
l v'Jl
A = J 
= Re where () =
(b) rlT A= [ v;l [ Y,}l = (
1 0
] · Lhus A is or thogonal. We have dct(A) = 1, and so
a _ J_ ./3 _! o 1 •
2 2 2
A = 2 2
[
! ./3]
.:£1  !
7. 2
= He where 0 = %·
1. (a)
A = [t ;]
(b)
A = [!
(c)
A =
(d)
8. (a)
A =
(b)
A =
(c)
n
(d) A= [I
2
9. (a) Expansion in the xdirection with factor ::! {b) Contraction wi th factor
(c) Shear in the xdirection with factor 4. (d) Shear in theydirection with factor  4.
10. (a) Compression in the ydirection with factor
(b) Dilation wjt b factor 8.
(c) Shear in the xdirection with factor  3.
(d) Shear in the yclirflct.ion wi th factor 3.
172 Chapter 6
11. The action of T on t he stanciatd unit vectors is e
1
= [ ]t [ t [ J t [ = Te1 . and e2 =
t ; t thus [T] = [Te1 Te2) =
12. The action ofT on t he staudard uni t vectors is e1 = t t t and
e2 = t t t =Te2; thus IT) = [Te1 Te2) = [!
13. The actioo of T on the "anda<d unJt vedo<s is e, = m 4 m ; m = Te, • e, = m
4
m ;
m ='Fe, , and e, = m + m + m = Te,; thus 111 =
14. The action of T on t he standard unit. vectors is e1 = [i] } = T et, e2 =
t hus [T]
0  1
15. ( a) ( b)
y
(c)
16. (a} (b) (c)
y
17. (a)
(b)
iL
(c)
Exercise Set 6.2
18. (a)
19. (a)
9 0  1 s 3
nJ nl
I ' " ·••'' ,., ................ ,._,
20. (a)
21. (a) [TJ(:  ·'
·.. 0 1 0
·,, , .
...... _ .....
[
1 I) (01 ,1]
22. (o) [T] _, o o
0  1
23. (a) tnt [:
(b) IRI =
[
(c) !RI [· f
24. (a) !Ill = [:
[
.iJ
(b) [Rj = _;
[
I
7z
(c) [RJ =:
(b)
{b)
! Hl
[
1 0 OJ r2
1
r2
1
000 1 = 0
0 0 1 3 3
(b)
(b) [T) = [
1 0 0
[
0 0 1]
(b) [Tj = o 1 o
L 0 0
173
(c) !11.
(c) n ! m
(c) ! nJ m
(c) [T) =
0 \) 1
(c) IT]= [!
25. The matrix A is a. rotation matrix since it is orthogonal anrl det (A) == 1. Tbe axis of rotation
;, found by .olv;ng the system (/ A)x = 0, whkh ;s r=J = A geneml solutioo
ofth;s sys<em ;s x = [:J = t thus the matdx A co:,.;; •• :, to• a mt:.;on about the xa>ds.
Choosing the positive orientation and comparing with Table 6.2.6, we see tha.t the angle of rotation
is !I =
174 Chapt&r6
26. The matrix A is a rotation matrix since it is orthogonal and = l. The axis of rotation is
found by solving the s[:sltem A)x = 0 , which is u n = m. A genenJ solution
of t his system is :<. = = t ; thus the axis of rotation is t he line pnssing through the origin
and the point (1, 1, 1). The plane passing through the origin that is perpendicular to this line has
equation x + y + z = 0, and w == ( 1, 1, 0) is a vector in this plane. Writing w in column vector
form, we have
[
·1]
W= l ,
0
Thus the rotation angle 8 relative to the orientation u = w x Aw = (1,1, 1) is detenn.ined by
0
_ wAw _ 1 d Ll _ 2>r(120o)
cos  
2
, an so 17
3
.
27. We have tr(A) = 1, and so Formula (17) reduces to v = Ax + ATx. Taking x = e
1
, this results in
v = Ax+ AT x = {A + Ar)x =
000 = 'j
From this we concl ude that t he xaxis is t.he ax.is of rotat ion. Finally, using Formula (16), the
rotation angle is determined by cosO = = 0, and so 0 =
28. We have tr(A) = 0, and so Formula ( 17) reduces to v = Ax + AT x + x . Ta king x = e1. this results
in
v = Ax+ AT x 1 x = (A 1 + 1 )x = 
llll
Thus the axis of rotation is the line through the origin and the point (1,1,1). Using Formula {16),
the rotation angle is determined by cos()= =  thus 8 =
2
;.
29. (a) We have T, ([:]) = m = thus T, is a lineA< ope<ato< and 1!, =
Similad y, M, = ! and M
3
=
(b) If x= (x , y , z), then T
1
x · (xT
1
x) = (x,O,O) · (O, y,z) = 0; th,;;; T
1
x and xT
1
x are or
thogonal for every vector x in R
3
. Simllarly for T
2
and T3 .
30. (a) We have S(e
1
) = S(l, 0, 0) = {1, 0, 0), S{e
2
) = S(O, 1, 0) = (0, 1, 0), and S(e
3
) = S(O, 0, 1) =
[
I 0 kl
(k, k, l) ; thus the standard matrix for S is [S] = o l k .
0 0 1
DISCUSSION AND DISCOVERY 175
(b) The shear in the xzdirection with factor k is defined by S(x, y, z) = (x + ky,y, z + ky); and
[
lkO] .
its s t andard matrix is lSI = o l o . Similarly, the shear in the yzdirection with factor k
is dcnned by S{x, y, z) = (x , y
0
+ :x
1
, z + kx); anti its standard matrix is lS) =
k 0 1
31. If u = (1, 0,0) then, on substituting a= 1 and b = c = 0 into Formula ( 13) , we have
0 0 l
cos () sinO
sin{) cos £J
The other entries of Table 6.2.6 are obtained similarly.
DISCUSSION AND DISCOVERY
Dl. (a) The unit square is mapped onto th<'! segment 0 $ x s; 2 along the xax.is (y = 0) .
(b) Tho umt square is rnappC(J onto the 0 y 3 along theyaxis (x = 0).
(c) The unit square is mapped onto the rectl'ingle 0 5 x $ 2, 0 5 y S 3.
(d) T he unit square is rotated a hout. the or igin through an angle () ·=  i (30 degrees clockwise) .
D2. I n order for the mat rix to be orthogonal we rnust have
and from this it follows that o. = 0, b =  jg, c = "73 or a. ""' 0, b = r. = 
These are the
only porss ibi lit.ies.
D3. The t. wo columns of t.his matrix a.re or thogonal for any values of a and b. Thus, for t he matrix to
be orthogonal, all t hat is required is t hat the column \·ectors be of length I. T hus a and b must
satisfy (<1 + b}
2
+ (a  b)
2
= 1, or (equiv<\lcntly) 2a
2
+ 21} = L
D4. If A is an orthogonal matrix and Ax :=. >.x . t hen llxll = I! Axll = 11>xll = I.XI II xll. Thus the eigen
values of A (i f any) must be of ah;;olutc L
D5. (a) Vect Qrs parallel to the line y = x will be eigenvectors corresponding to the eigenvalue>.= 1.
Vectors perpendicular to y = x will be eigenvect ors corresponding to the eigenvalue ,\ = 1.
(b) Every nonzero vector is an eigenvector corresponding to the eigenvalue ,X =
DO. The shel.'lr in the x .. dircction with factor  2; thus 1'(x, y) = (x  2y, y) and ITl = n °
D7. From thP. polarization identity, we have x · y = Hllx + yJ]
2
 l!x  yll
2
) = i(l 6  4) = 3.
D8. !f llx + Yll = llx  Yll tho.n the parallelogram having x and y as
adjacent edges has diagonals of equal length a.nd mus t t herefore be
a rectangle.
yx X+Y
X
176 Chapter 6
EXERCISE SET 6.3
1. (a) ker(T) = {(O,y)J ' oo < y < vo} (ya.xis), ran(T) = {(x,O)l oo < x < oo} The
transformation T is neither 11 nor onto.
(b) ker(T) = {(x, 0, 0)1 oo < x <co} (xa. "<is), tan(T) = {{0, y, z)l  oo < y, z < oo} (yzplane).
The transformation T is neither nor ·· ··
(c) ker(T) = { }• ran(T) = R
2
• The transformation Tis both and onto.
(d) ke<(T) = { m} ran(T) = R". The transfonnotion T i• both J.] Md onto.
2. (a) ker(T) = {(x, 0)[  oo < x < oo} (xaxis), ran(T) = {(O,y)l oo < y < oo} (y..,a,xis). The
transformation T is neither 11 nor onto.
(b) ker(T) == {(0, y, 0)1 co< y < oo} (ya.xis), ran(T) = {(x,O, z)l  oo < x,z < oo} (xzplane) .
Tho transformation T is neither 11 nor onto. ·
{c) ker(T) = { ran(T) = R
2
The transformat.ion Tis both 11 nnd ont o.
(d) ker(T} = { }· ran(T) = R
3
. The transformation Tis both 11 and onto.
3. The kernel of the tro.nsformation is the solution set of the system Ax = 0. The augmented matrix
of thi.s system can be reduced to t;hus the solution set consists of all vectors of the form
x = where  oo < t < oo.
4. The kernP.l of the transformation js the solution set of _:  !] = The augmented
matrix of this system can be reduced t.o
vectors of the form x = t [ where oo < t < oo.
2 : 0]
 5 l 0 ; thus the solution set consists of all
0 : o
5. The kernel of the t<aosformatOon is the solut;on set of [:  ; :J [:] = m. The augmented
matrix of this system can he reduced to _: thus the .solution set consists of all vectors
0 0 0 : 0
of the form x = t [ :J where oo < t < oo.
6. The kernel of the t ransformation TA : R
3
t R
4
is the solution set of the linear system
[
: =; :] [::] =
3 4  3 XJ 0 J
Exercise Set 6.3
0
0
0
The augmented matrix ofthis sy, tem can be <educed to [:
consists of aJI vcct.ocs of the form x = t [ il whe<e oo < t: oo
177
! :
3 I
b I
thus the solution set
o:
0 :
7. The kernel of T is equal to the solution space of [ _: = l] l:l = m . The augmented matrix
of this system can be reduced to ! m] ; thus has only the trivial solution and so
0 0 o:o
ker(T) = { } .
8. Thr. of 1' is eqml.! to the solution space of the system = l J. <tugmented
[
l 0 1
1
OJ .
matrix of this system can be redlJced to
0 1
0
i
0
; thus kcr(T) cons1sts of all vectors of the form
x = t [ where · oo < t < oo.
9. (a) The vector b is in the column space of A if and only if the llnE\ar system Ax = b is consistent.
The atJ grnented matrix ()f Ax = b is
H
2
2
8
0
1
l
and the reduced row ochf: lon form of this matrix is
0
L
0
I
6
0
Frorn this we conclude tha.t the system is inconsistent; thus b is not in the column space of A.
(b) ThP. augmented matrix of the system Ax = b is
H
2
2
8
0: 3]
1• 1
l
1: 8
and the reduced row echelon form of this matrix is
0  k
1
0
I
6
0
1] 6
0
178
Chapter6
From this we conclude that the system is con'listent, with general solution
[
4 ltl [4] [ 1]
3 + 3 3 3
X = = + t
Taking t == 0, the vector b can expressed as a linear combination of the column vectors of A
as follows:
= b = + + Oc3(A) = +
8 3 6 3168
10. (a) The vector b is in the column spttee of A if a.nd only if the linear system Ax =
The augmented matrix of Ax = b is
[3 2
1 5
i
2
] 1 1 5 3
I }
I
0 1 1 1
: 0
and the reduced row echelon form of this matrix is
0 1
i 0]
1 1  1 •0
I
0 0 0
: 1
From trus we conclude t hat the system is inconsistent; thus b is not in the column space of A.
(b) The augmented matrix of the system Ax = b is
[
3  2
1 4
0 1
1 5 : 4]
5 3 : (j
1 1 : 1
and reduced row echelon form of t his matrix is
0
1
0
1 1 : 2]
l 1 : l
0 0 : 0
Prow this we conclude that the system is consistent, with general solution
Taking s = t = 0, the vector b can expressed as a linear combi nation of the column vectors
of A as follows:
m b 2c, (A) + lc,{A) + Oc, (A) + lk, (A) 2 [!] + n l
Exercise Set 6.3
179
11. The vector w is in the range of the linear operator T if and only if the linear system
2x y = 3
x +z = 3
y  z=O
is consistent.. The augmented matrix of t.his system can be nxiuced to
0
1
0
0 : 2]
0 : 1
I
1 : 1
Thus the system has a unique sol ution x = (2, 1,1), a.nd we have Tx = T(2, 1, 1) = (3,3, 0) = w.
12. The vector w is in the range of the linear operator T if and only if the linear system
X  y = 1
x +y+z = 2
:t +2z =l
.is consist ent. The augmented matrix of thls system be reduced to
0
1
0
O' I]
I 3
0: 4
1 J
I
I _§
I 3
Thus the system has a unique solution x = G,  and we have T x = T (  =
(1, 2,  1) = w .
13. The operator can be wri tten as [:: J = ( n [:: J, t hus the standard mat rix is A = [
Since df't(A) = 17 :f. 0, the operator is both 1·1 and onto.
14. The operator can be wri tten as [::] = [ ::] ; t h11s t he standard matrix is A = Since
det(A) '" 0, the operu.tor is neither 1l nor onto.
15. The
operator can he written as wz = 2 o " 2:2 : t hus the
[
'WI] [1 3 2] [Xi]
stanrlard ma.trix is A =
16.
:\ 2] 1.113 1 :3 6 ::Z:J
o 4 . Siw;e A) = 0, the operator is neither l  1 nor onto.
3 G
[
'WI] [l 2 3] [XJ]
The operator can be written as 1.112 = 2 5 3 :z:2 ; thus the standard matrix is A =
1.1/3 1 0 8 X3
Since dct(A) =  1 f. 0, the operator is both 11 and onto.
17. The operator can be written as w =TAx, where A= Since det(A) = 0, TA is not onto.
, ·  .
The_ ran_gr. TA consists vedors of w = t (j_:here oo < t < oo. Jn particular,
w = is not in the range of TA .
180
18.
The operator can be written as w =TAX, where A = 5  1 3 . Since det(A) = 0, TA
[
1. 2 1]
4 1 2
onto. The range of T,1 consists of all vectors w = (w
1
, w2, w3) f0r which the linear system
= [::] =
4 1 2 X3 W3
is consistent. The augmented matrix of this system can be row reduced as follows:
[
1  2
5  1
4 1
1: Wt] [1 2 1 : Wt ] [1 2 1 \ W1 ]
3l W2 t 0 9 2 l W2  5wl + 0 9 2 : W2  5w
1
2 : W3 0 9  2 ] W3  4w1 0 0 0 : W3  W2 +. 'tJIH ·
is not
Thus the system is consistent {w .is in the range of 1A} if ar:d only if w
1
 w
2
+ w
3
= 0. In
partiClllar, the vector w = (1, 1,1) is not in the range of TA .
19. (a) Th,, linear t.rl\.nsformation TA : R
2
+ R
3
is onetoone if And only if the linear system Ax = 0
has only the t ri vial solution. The augmented matrix of Ax = 0 is
and the row echelon form of this matrix is
From t his we concl ude that Ax = 0 has only the trivial solution, and so TA IS onetoone.
(b) The augmented matrix of the system Ax == 0 is
[
2 3
0  4
a.nd t he reduced row echeJon form of t his matrix is
o 4 : o
0
J
1
} I
2 :
Fwm this we conclude that .4x 0 has the gene'"! solution x = t [ In p"'ticula<, the
systerr, Ax = 0 has nontrivial solutions and so the transformation TA : R
3
+ R
2
is not one
t.oone.
20. (a) The range of the t ransformation : R
2
: R
3
consists of those vectors w in Jl3 for which
the linear system Ax = w is consistent. The augmented matrix of Ax = w is
[
1 1 ! Wt ]
2 0 : W2
3 4 : W3
Exercise Set 6.3
and this matrix can be row reduced as follows:
[
1 1 : WJ] [1 . : : W) l [} }
2 0 : 'W2 , 0 2 : W2  2WI , 0 2
3 4 : W3 0 1 : W3  3w1 0 2
[
1 1 l
, 0 2:
I
0 o:
181
Wt l
Wz  2Wt
2w3 + w2 8w1
From t his we conclude that Ax = w i..; consistent if and only if  8w1 + w
2
+ 2w., = 0; thus
the transformation TA is not onto.
(b) The range of the transformation TA : R
3
R
2
consists of those vectors win R
2
for which
the 1inear system Ax = w is consistent. The augmented matrix of Ax = w is
2 3 l Wt]
0  4 1 W2
and this· matrix can be r ow reduced as follows:
2 3 : WJ l
2 1 : u.•2 +w1 ..r
F'rorn this we conclude that Ax = w is consistent for every vector w in R
2
; thus TA is onto.
21. (a) The augmented matrix of the system Ax= b can be row reduced as follow:; :
 2
 1
3
b, l [I 2 l 3 : b, l
4 6  2 b2 0 k 8  8 ; 02  2bl
0 3 3 b:J 0 6 6 6 : 03  3bl
['
2  1 3
I
b, ]
I
I
 {bl
, 0
1 l 1
I
I
I
kbz 
0 0 0
Ql
I
It follows that Ax = b is consistent if and only \f ib3   ib = 0, or 6/>1 + 3bz  4b3 = 0.
( b) The range of t ransformation T A consists b o tne form
[ls:+1'] =' [!] +t m
Note. This is just one possibility; jt was obtained by solving for b
1
in t.erms of 02 and b.> and
then making bz and o
3
into parameters.
(c) The augmented matrix of the system Ax ::; 0 can be row reduced to
(l
0
1
0
1 L : 0]
1 1 l 0
0 0 : 0
Thus the kernel of TA (i .e. the solution s pace of Ax = 0) consists of all vectors of the form
X=
[
s tl [1] [1]
s + t 1 1
= s + t
s 1 0
t 0 1
182 Chapter 6
DISCUSSION AND DISCOVERY
DL (a) True. If T is onetoone, then Tx = 0 if and only if x = 0; thus T( u  v) = 0 implies
u  v = 0 and u = v.
(b) True. lf T : Rn + Rn is onto, then (from Theorem 6.3.14) it is onetoone and so the argu
ment given in part (a) applies.
(c) 'ITue. See Theorem 6.3.15.
(d) 'ltue. If TA is not onetoone, then the homogeneous linear system Ax = 0 has infinitely
many nontrivial solutions.
(e) True. The standard matrix of a shear operator T is of the form A = [ or A = . In
either case, we have det(A) =:= 1 'f:. 0 and so T = TA is onetoone.
D2. No. The transformation is not onetoone since T(v) = ax v = 0 for all vectors v tb.at:a.re parallel
to a.
D3. The transformation TA : R"+ Rm .is onto.
D4. No (assuming vo is not a scalar multiple of v). The line x = vo +tv does not pass through the
origin and thus is not a subspace of nn . It. follows from Theorem 6.3.7 that this line Calmot be
flQ ual t o range of a linear operator.
WORKING WITH PROOFS
P l. Jf Bx = 0, then (11B)x = A(Bx) = AO = 0; thus x is in t he nullspace of AB.
EXERCISE SET 6.4
 3
!][:
 2
0] [ 5 1
2!]
1. !Tn
= BA = 0 1 3 = 10  8
2 4 45 25
[:
2
[:
 3
:1] [ 8  3 I]
[TA oToJ = A E3 = 1 0 1 = 5  15  8
2 4 6 7 44  11 45
[ 4
0
3 1] [40
0
20]
2. [Ts oTAJ = BA =
 1 5 0 1 = 12  9 18
2 3  3 6 38 18 43
fTA oTaf = AB =
3 1][ 4
0
.J r
18
22]
0 1 1 5 2 = 10  3 16 .
 3 6 2  3 8 31 58
3. (a) iT!) = [12) =
{b) [72 c T1] = = [! ITt oTz) = = _:]
(c) T2(Tt (x1, x2)) = (3xt + Jx2, 6x1  2x2) T1 (T2(x1t x2)) = (Sx1 + 4x2, x1  4x2}
Exercise Set 6.4 183
0
2
4. {a) !T1] = 1 [T2] =
()
 1
3 0  1
2 0][ 4
0
u
2
{b) [12 o T1] := 0  1 2 l = 3
0 1  1 3 3
{T1 o T3] [
0
2 0]
[ 4 8 0]
1 0 1
=
2 4 1
1
3 0 1  1 2 3
(c) T2{T,(x1,x2,x3)) = (2x2,:r1 +3x2,17:r1 +3x2)
T1 (T2(x1, x2, x3)) = ( 4:rt + 8x2, 2x1  4x2  x3, xi  2x2 + 3x3)
5. (a) The st andard matrix for the rotat1on is A
1
= [  and the standard matrix for the
tion is A2 = [ Thus t he standard matrix for the rotation followed by the reflection is
[
0 1] [0 1] [l OJ
A
2
A
1
= 1 0 1 0 = 0  1
(b) The st andard matrix for the projection followed by the contraction is
(c) The standard matrix for the reflection followed by the dilation is
. [:l OJ [1 OJ [3 01
.il·z A1 = =
0 3 0 1 0 3
6. (a) The st andard matrix for the r.omposition is
A,A, A, [ _J]
( b) The standard matrix for the composition i:s
(c) The composition corresponds to a counterclockwise rotation of 180°. The standard matrix is
[
1 0]
R z<. R1n R_ .n = R 11 7rr r. = Rrr =
3 12 !2 12+J2+3 0  1
7. (a) The standard matrix for the reflection followed by the projection is
184
{b) The standard matrix for the rotation followed by the dilation is
0] [ 0 [ 1 0
o o 1 o = o V2
v'2 72 0  1 0
(c) The matrix for t he projection followed by the reflection is
8. (a) The standard matrix for t he reflect ion followed by the projection is
(b) The st.andard matrix for the rotation followed by the contraction is
[
l 0 0] [' 0 0] .[! 0 0]
A2A1 = 0 0 {! ! = 0 :If
00 3
1
0! ,/3 0.! ,/3
2 2 6 6
(c) The standard matrix for the projection followed by t he reflection is
D. (a) The standard ma.trix for the composition is
(b) T he standard matri x for the composit ion is
10. (a) The standard matrix for the composition is
_fl
16
.l..
16
l
8
Chapter 6
.1.. ]
16
_fl
16
1
Exercise Set 6.4 185
(b) The standard matrix for the composition is
11. (a) We have A= = thus multiplication by A corresponds to expansion by a
factor of 3 in the ydirection and expansion by a factor of 2 in the xdirection.
(b) We have A = = multiplication by A corresponds to reflection about
the line y = x followed by a shear in the xdirection with factor 2.
Note. The factoriut.t.ion of A as a product of elementary matrices is not unique. This is just
one possibility.
(c) We have = multiplicat ion by A corresponds to
reflection about t he xaxis, followed by expansion in the ydi rect.ion by a fad or of 2, expansion
in the xdirect ion by a factor of 4, and reflection about the line y = x.
(d) We havcA= [: :] =
in the xdircction with factor 3, followed hy expans ion in the ydircction with factor 18,
and a shear in t he ydirection with factor 4.
12. (a) We have A=[! ;] = !] multipHcatlon by A corresponds t.o compression
by a factor of in the xdirect.ion and compression by a factor of in t he y<.lirection.
(b) We have = n; thus mul tiplication by A corresponds to a shear in the
:.r.direction with f(l(: t(Jt 2, followed by reflection about t he line y = :r;.
(c) We have A= [ = g thus multiplk:a Lion by A corresponds to
reflection about the line y = x. followed by followed by expans ion in the xdirect ion by a
factor of 2, expansion in the ydirection by a factor of [) , and reflection about t.he xaxis.
(d) We have :] = g multiplicat ion by A corresponds a shear in the
xdircction with factor 4, followed by a shear in theydirection with factor 2.
13. (a) Reflection of R.
2
about the :r axis.
(b) Rotation of R
2
about the origin through an angle of 
(c ) Contraction of R
2
by a factor of
(d) Expansion of R
2
in the ydirection wj th factor 2.
14. (a ) Refl ection about theyaxis.
(b j Rotation about the origin through an angle of
(c) Dilation by a fact.or of 5.
(d) Compression ill xdirect.ion with factor
186 Chapter 6
15. The st andan.l matrix fqr the operator Tis A= [! Since A is invertible, Tis onetoone and
the st andard matrix for r 
1
is A
1
= [ : thus r 
1
(w1, w
2
) = (w1 + 2'W2, tv
1
 w
2
) .
16. The standard matrix for the opP.rator Tis A = Since A is invertible, Tis onetoone and
the standard matrix for yl is A
1
= !l thus r 
1
(wl,w2) = <lwl + 1w2,  1
1
2wl + iw2)·
17. The standard matrix for the operator Tis Since A is invertible, T 1s onet(rone and
the sto.ndn.rd matrix for r  l is A
1
= [ thus r
1
(wl, w2) = (W2,  wl)·
18. The standard matrix for the operator Tis A = Since A is not invertible, Tis not onetoone.
[
1 2 2]
2 1 1 . Since A is invertible, T is
1 1 0
19. The standard mat r ix for the operator T is A =
and thf' standard matrix for r
1
is A t =
1
2 41
2  3 ; t hus the formula for T l is
3 sJ . .
20. The st andard matrix for 7' is A = [ i Since A is not invertible, Tis not onetoone.
21. The st. andarrl matrix for the operator T is A =
[
3 3
2 2
and the standard matrix for T
1
is A I =
I I
2 '.i
4 1]
7 I . Since A is inver t ible, T is one toone
3 0
Ill
=1 ; thus the fo<mula fo< r 
1
is
22. The standa<d mat<lx fo< the ope<ato< TIs A = [; : :J. Since A is inve<tible, TIs onetoone and
r* r5
the standard mat rix for T 
1
is A l = l fi  iJ
I 5
26 T3
7]
26
= ; thus the formula for r
1
is
26
23. (a) lL is easy to see directly (from the geometric definitions) that T
1
o T
2
= 0 = T
2
o T
1
• This
aJ.qn follows fTom = and IT2} lTd= =
Exercise Set 6.4 187
(b) It js easy to see directly that the composition of Tt and T2 (in either order) corresponds to
rotation about the origin through the angle 81 + Bz; thus 11 o Tz = T2 o T
1
. This also follows
(c)
from the computation carried out in Example l.
r
l 0 0 l [cos 0
2
 sin 0
2
We have lTd = lO cosOt sin9t and [12] = sin02 cos02
0 sin £1 1 r.:os&1 0 0
[
cos0
2
 sinBz
[Tx oTz] =ITt} !Tz) = coslh sin Bz cosB1 cos82
sin lh sin Bz sin B1 cos B2
thus
 cos 8
1
sin Bz
cosfh cosBz
sin 8
1
sin 81 sin Ozl
 sin 81 cos Bz
COSO)
and it follows that T
1
o Tz f Tz o T1 .
24 . (a) It is ea..<;y t.o see i.ha.t T
1
oT
2
=I :;: TzoT
1
. This l::llso follows from the fact that
[Td [Tz!= = and lTz]ITJ] = =
(b) We have [Tl] = and [T:d = [::: IIi oTz] = jTJ} [T
2
] = and
[T
2
o T
1
] = [T2} [T
1
) =
0
° l1= [T
1
o 12] . lt follows that T
1
o 1'
2
f Tz oTt .
Sin j
(c) We have =kl ; t.hus fTI] [Tz] === (kJ)[TzJ=kl12] and [TzJ ITl i = ITz]{kl} =
0 0 k
k [T
2
}. Jt follows that 1'
1
o T
2
= T
2
o T
1
= kT2.
[
_.\ f '\!j l
25. Wt:; have JJ,
13
= and J1,
16
 _,; . T int:> the standard matrix for the composition
'l 2 '  1 2
.ill [ !
2  2
1  viJ
2 z
26. We have H:rii = and f/,
18
= T hus t. he standard matrix for the composition is
li H . = 7i] [0 1] _ [ 72
rr /8 7ft4 ..!_ _ 1 l O  _ _!_
V2 72 v'2
27. The image of the unit square is the parallelogram having the
vectors T( e!) = U] and T( ez) = [ as adjacent sides. The area
of this parallelogram is jdet(A}I = ldet = 12 + II= 3.
188
28. The image of the unit square is the parallelogram having the
vectors T(e
1
) = g] T{e
2
) = [!] as adjacent sides. The ar(;a
of this parallelogram is ldct(.A)I = \det !] I = 18  91 = 1.
DISCUSSION AND DISCOVERY
y
7f·
"·
( • 3)
···
")
'f
1/
ChapterS
J
{5. 7)
J
,
I
(3. )

.1:
5
Dl . (a) 'l'rue. If 1't(X) = 0 with x # 0, then T2(T
1
(x)) = T
2
(0) = 0; t hus T2 o is not onetoone.
( b ) False. If T
2
(ran(T
1
)) = R" then T
2
o T,, is onto.
(c) False. If xis in Rn, then Tz(T
1
(x)) = 0 if and only if T,(x) belongs t o the kernel of Tz;
thus if Tt is onetoone and ra.n(TI) n ker(Ti) = {OL then the transformation 72 oTt will be
onet oone.
(d ) True. If ran{T
2
) :F Rk then ra.n(T
2
oT
1
) ran(T
2
) # Rk.
02. We have RfJ =  sinf{J'], H
0
= (
0
1 0
1
], and R;;
1
= Rp = [ Thus
sm "" cos  "  sm cos"'
R H R_
1
_ [cos
2
{3  sin
2
{3 2sinf1cosf3 l  [cos2/3 sin2/3]  H
0 0
f.i ·· 2 sin {3 cos f3 sin
2
f3  cos
2
f3 J  sin 2/3 cos 2/3  i3
and su multiplication by the matrix
1
corresponds to reflection about the line L.
D3. From Example 2, we have He, Hez = R
2
ctJ
2
 Ba) · Since {)= 2(0 it follows t hat Re =
Thus every rotation can be expressed as a composition of two reflections.
CHAPTER 7
Dimension and Structure
EXERCISE SET 7.1
1 . (a)
2. (a)
(b)
{b)
(c)
(c)
5. (a) Any one of the vectors (1
1
2), (! ,1), (  1,  2) forms a basis for the line y = 2x.
(b) Any two of the vectors (1.  1, 0), (2, 0,  1), (0, 2, 1) form a basis for x + y + 2z = 0.
6. (a) Any one of the vectors (1, 3), (2, 6) , (  1 ~ 3) forms a basis for the line x = t
1
y = 3t .
(b) Any two of the vectors (1, 1,3), (1,1
1
2}, ('2, 0, 5) form n. basis for the plane x=t
1
+t
21
y = tl  l:t, z ~ 3t, + 2t2.
7. The augmented matrix of the system is [
3 1 1
:
0
] and the reduced row echelon form of
5 · 1 I  1 1 0
[
1 0 l 0 I 0]
this mat rix is ~ : . Thus a general solution of the system is
0 1
4
1
1
0
x (l" _!s ·· t s t)  "(.!  ! 1 O)+t(O  1 0 1)
 4 VJ 4 J )  tJI 4 I 4 t ' l l \
where oo < s, t < oo. T he solution space is 2dimensional wil.h canonical basis {v
1
,v2_} where
V J :.; (  ~ 1  ! 1 1 ) 0) (l.J1 d V 2 """ ( 0 >  1, 0 I l) •
8. Tho augmented mat,;x of tho system i' [: =: ,; ~ ~ ] nnd t ho ceduced row echelon fo,m of th;s
[
I () :1: 0]
matri x is 0 1 2 • o . Thus the general solution is
I
0 0 OtO
X= ( 3t,  2t , t) ~ = t( 3, 2, 1)
where co< l < oc. The solution space is 1dimcnsional with cnnonic..al basis {( 3, 2, 1)}.
9. The augmented matrix of tl1e ::;ystcm is [ ~  ~  ~  ~ ~ ]
0
0
°] and the reduced row echelon form
l 1  2 0 1 I
'
0 0 I l lt O
[
I 1 0 IJ I :0
of this matrix is ~ ~ ~ ~ ~ : ~ . Thus the general solution is
0 0 0 0 o:o
x = ( s  t , s,t,O,t) = s(1, 1,0,0,0) +t(1,0,  1,0,1)
where oo < .'l, t < oo. The solution space is 2dimensional with canonical basis {v
1
, v'2} where
Vt = (1, 1,0,0,0) and v 2 = ( J, 0,  1,0, 1).
195
t96
10.
Chapter 7
The augmented matrix of the system is r: : ! i and the reduced row echelon form
0 1 0 1 i I 0
. . . 0
[
1
of thJs maLnx 1s
1 0  1 1 I 0
0 0 3  3 : 0]
_ l . Thus t he general solution is
x = ( 3s + t, !t,s,t) = s(3,1, 0,1,0) + t( ! , 1, j, o, 1)
where  oo < 8 , t < oo. The solution space is 2dimensional with canonical basis {vh v
2
} where
Yt = (  3,1, 0,1, 0) and Y2 = (3,  1,1,0, 1).
11. (a) T he hyperplane (1, 2, 3).1 consists of all vectors x = (x, y,z) in R
3
satisfying.: the equation
x + 2y 3z = 0. Using ·y =sand z = t as free variables, the solutions of this·equation can
be written in the form
x = ( 2s + 3t, s , t) = s(  2, l, 0) + t(3, 0, 1)
where oo < s, t < oo. Thus the vectors v
1
= ( 2, 1, 0) and v
2
= (3, 0, 1) form a basis for
the hyperplane.
(b ) The hyperplanC! (2, 1, 4, 1)..!. consists of all vectors x= (x
1
, x
2
, xa, x
4
) in R
4
satisfying t he
equation 2xt  xz + 4x3 + x
4
= 0. Using x1 = r, x3 = s and x
4
= t as free variables, the
solutions of this equation can be written in the form
x = (r, 2r + 4s + t, s, t ) = r(l, 2, 0, 0) + s(O, 4, 1, 0) 't t(O, 1, 0, 1)
where oo < r , s, t < oo. Thus thevcctors v
1
= (1, 2, 0, 0), v
2
"' (0, 4, 1, 0), and v
3
= (0, 1, 0, 1
form a basis for the hyperplane.
1 2. (a) The hyperplane (  2, 1, 4).L consists of all vectors x = (x, y, z) in R
3
satisfying the equation
· 2x + y + 4z = 0. Using x = s and z = t as free variables, the solutions of this equation can
he writ ten in t he form
x = (s, 2s  4t, t ) = s(l , 2,0) + t(O, 4,1)
where  oo < s, t < oo. Thus the vect.ors v
1
= (1, 2, 0) and v 2 = {0,  4, 1) form a. basis for
t.he hyperplane.
(b ) The hyperplane (0,  3, 5, 7) J. consists of all vectors x = {x
1
, x2 , x
3
, X.t) in R4: satisfying t he
equation 3x2 + 5x3 + 7x4 = 0. Using x
1
= r, X3 = s and x
4
= t as free variables, the sol u
tions of this equation can be written in the form
x = (r, + s, t) = r(l , 0, 0, 0) + s(O, j ,1, 0) + t(O, 0, 1)
where  oo < r, 8, t < oo. Thus the vectors v
1
= (1, 0, 0, 0), v
2
= {0, 1, 0), and v 3 = (0, 0,
form a basis for the hyperplane.
DISCUSSION AND DISCOVERY
Dl. (a) Assuming A :1 0, the dimension of t he solution space of Ax = 0 is at most n 1.
(b) A hyperplane in R!' has dimension 5.
(c) The subspaces of R
5
ha.ve dimensions 0, 1, 2, 3, 4, or 5.
(d) The vectors Yt = (1;0,1,0), v
2
=::: (1,1, 0,0), and v
3
= (1,1,1,0) are linearly independent,
t hus they span a 3dimensional subspace of R
4
•
F.XERCISE SET 7.2 197
D2. Yes, they are linearly independent. If we write the vectors in the order
v
4
= (0,0, 0,0, = (0,0,0,1, "', '),v2 = (0,0, 1, ·, • , •), v1 = (l, ·, • , •, • , •)
then, because of positions of the leading ls, it is clear that none of these vectors can be written
as a linear combination of the preceding ones in t.l!c list.
D3. False. Such a set is linearly dependent when set is wrltt.eH in reverse order, some vector
is a linear combination of its predecessors.
D4. The solution space of Ax = 0 has positive dimension if and only H the system has nontri vial
solutions, this occurs if and only if det(A) = 0. Since det(A) = t
2
+ 7t + 12 = (t + 3)(t + 4),
it foll ows t hat the solution space has positive dimension if and only if t = 3 or t = 4. The
solution space bas dimension 1 in each case.
WORKING WITH PROOFS
Pl. (a) If S = {vh v2, ... , vk} is a linearly dependent set in R" then t here C\re scalars Ct , cz, ... ,ck
(not all zero) such that c
1
v
1
+ c
2
v2 + · · · + = 0. lt follows that the setS'= {v
1
, v
2
, .. . ,
vk, w1, .. . , wr} is lillearly dependent since c
1
v
1
+ c2v2 I··· · + C>Yk + Owl+ ·· · + Owr = 0
i::; nuut1 i v i<\l Jepcndeucy rela.tio n i:l.lllung its dements.
(b) If S= {vt, v
2
, . . . ,vJc} is a linearly independent set in R"' then there is no nont rivial de
pendency relation among its elements, i.e., if Ct VJ + c2v2 I····+ = 0 then Ct = c2 =
· · · = == 0. It follows t hat if S' is any nonempty subset of S then S' must. aho be linearly
independent since a nontrivial dependency among its elements would also be a nontrivi al
among t he elements of S.
P2./ If k #·0, then (ka) · x '  k( a x) = (l if and only if a· x = 0: t.lm::; ( k:a )J = a.l
EXERCISE SET 7.2
1. (a) A bMis for R
2
must contain exactly two vectors (three are t.oo many); any of three vectors
in R
2
is linearly dependent.
(b) A basis for R
3
must centum three vect ors (two nre not eno11gh); s<>t. 0f (.wo V<'('t.01'S
cannot span R:
1
(c) Thf: v
1
and v
2
are linc<trly dependPnt (v
2
= 2v
1
).
2. (a) A basis for R
2
must contain exactly two vectQrS (one is not enough).
(b) A basis for R
3
must contain exact.ly three vect.ors (four are t.oo many)
(c) These vectors are linearly (any set of vectors cont<.>ini ng t.he zero vector is dependent.).
3. (a) The vectors v
1
= (2, 1) and v
2
""'(0,3) are linearly independent neither is a scalar m1.tlt iplc
of the other; thus these two \'ect.ors form a hasis for R
2
.
(b) The vector Vz = (7,8,0) is not a scalar multiple of vl = (4, l,O), and V:J:::: (1, 1, 1) is not a
linear combination of v
1
and v
2
since any such linear combination would have 0 in the third
component. Thus v
1
, v
2
, and v
3
are Iincarly independent, anJ it follows that t.hese vectors
from a basis for R
3
.
4. (a) The vectors v
1
= (7, 5) and v
2
= (4, 8) arc linearly independent since neither is a scalar multi ple
of the other; thus these two vectors form a for R
2
.
(b) The vector v2 = (0
1
8,0) is not a. scalar multiple of v
1
= (0, 1, 3), and v
3
= (1,6,0) is not a
linear combi1iation of v
1
and v2 since any such linear combim.t.ioJl would have 0 in the first
component. Thus v
1
, v
2
, and v
3
are linearly il.dependcnt, and it follows that these vectors
from a basis for R
3
.
'
.".
il\
198 Chapter 7
5. (a) The mat r ix having v
1
, v2, v
3
as its column vectors is A = [
4
Sir • .cc det{A) := 5 :f: 0,
 4  2  3
the column vectors are linearly independent and bence form a basis for Rl.
( b ) The matrix having v
1
, v
2
, and v3 as its column vectors is A= [ : Since det(.l.) = 0,
 5  2 8
t he column vectors are linearly dependent and hence do not form a ba.si& for Jl3.
6. (a) The matrix having v
1
, v
2
, and v
3
as its column vectors is A= [ Since det(A) = 0,
8 5 3
the column vectors are linearly dependent and hence do not form a basis for R
3
.
(b) The matrix having v
1
, v
2
, and v
3
as its column vectors is A = ! det(A) =
l l 1
4 =f 0, t he column vectors are linearly independent a..nd hence form a basis for R
3
.
7 (a) An arl•itmry vector x = (:r. , y , z:) in R
1
can be written as
t hus the vectors v
1
, v
2
, v
3
, and v
4
span R
3
. Oo tbe other hand, si nce
the vecl ors v
1
, v2, v3, and v
4
are linea rly dependent. and do not form a basis for R
3
.
(b) ThP ,·ector equat ion v = c
1
v
1
+ c
2
v
2
+ c
3
v
3
+ c
4
v
4
is equivalent to the linear system
8. (a)
[
1 0 0
0 2 0
0 0 3
:J = m
and, = t, n general solution of this system is given by
.. · = .t
=··'· ·.. . · ... .. l ...... . • . ..
where  oo < t < oo. Thus
for a.ny value of t . In par ticular, corresponding to t = 0, t = 1, and t = 6, we have v =
v , + v 2 + v3, v = 4 v
2
+ + v4, and v = 7v
1
+ 4v2 + 3v
3
 6v
4
.
An nrhitra ry \'ector x = (x, y, z} can be written as
x = zv
1
+ (y z)v, + (x  y)v3 + Ov4
r} I J ·
t.hus the vectors v , , v2, v
3
, and v
4
span R
3
• On the other hand, since v
4
= 2v
2
+ v3, the
vt:ct.ors v 1, v 2 , v
3
, v 4 are linearly dependent and do not form a basis for R
3
. .,
..
..
(
l .. .,.),
'
.. . V1 ... ,
·.) I
I \1.,
EXERCISE SET 7.2 199
{b) The vector equation v = c
1
v
1
+ c
2
v 2 + c
3
v
3
+ c4v4 is equivalent to the linear system
The augmented matrix·of this system is
1 3 I 1]
1 0 2 :2
o o ol3
and the reduced row echelon form of this matrix is
[
1 0 0 0: 3]
0 1 0 2:1
Q 0 1 1 I 1
Thm:, c.
1
= t , a general solut ion of the system i:; t;i•:cn by
C1 = 3, cz =: 1 2t , C3 =  1   t,c4 = t
where oo < t < oo. Thu:;
v = 3v
1
· (1 + 2t)vz  (1 1 t)v3 + LV4
for any value of t. For example, corresponding _t9 t = 0, t =  1, and t =  we have v =
3v
1
 v7  v3, v = 3v
1
+ v2 v
4
, and v = 3v, 
9. The vector v
2
= {1, 2,  2) is not a scalar multiple of v
1
= (  1, 2, 3) ; thus S == { v
1
, v
2
} is a linearly
independent set. Let A be the matrix having v
11
v
2
, and e
1
as its columns, i.e.,
[
1 1 1]
A ::: 2  2 0
3  2 0
Then det( A) = 2 ;6 0, and it follow:; from this that S' = { v
1
, Vz, e1} is a linearly indepew.lent set and
hence a basis for R
3
. [Similarly, it can he $hown that {v
1
,vz, e2} is a basis for R
3
. On t.he other
hand, set. {vl> v
2
,e
3
} is linearly dependent and thus not t':. basis for R
3
.J
10. The vector v
2
= (3, 1,  2) is not a scalar multiple of Vt = (1, 1, 0) ; thns S = {v
1
, v 2} is a linearly
independent set . L€t A be the matrix having v
11
v 2, and c
1
as columns, i.e.,
11.
.4 = [ :
0  2 0
Then det(A) = 2 f 0, and it follows ftom this that S' = { vb v2 , e1} is a linearly independent set and
hence a basis for R
3
. [Similarly, i t can be shown t hat {v
1
,v
2
, e2} and {v
1
, v2,e3} are bases for R
3
.]
200 Chapter7
12.
13.
We have v2 = 2v1, a.nd if we vector v2 from the set S then the remaining vectors a.re
linearly independent since· det 2 2 l = 27 =I= 0. Thus S' = { Vt, v
3
, v
4
} is a basis for R
3
•
2 3 4
[
1 1 I]
Since dct o 1 1 = 1 f 0, the vectors VJ. V3 (column vectors of the matrix) form a basis for R
3
.
0 0 1
The vector equation v = c1v1 + c2v2 +c3v3 is equivalent to the linear system
which, from back substitution, has the sol ution c3 = 1, c2 = 5 c3 = 4, c1 = 2 c2  c:v=3. Thus
v = 3vl + 4v2 + v3.
14. Since del =  2 =f. 0, the vect ors v1r v2, va (column vectors of the matrix) form a basis for
0 I J
R
3
. The \'P.ctor cquatioH v ::: c
1
v
1
+ c2v
2
+ c3v3 is equivalent to the linear system
15. (a) Since u · n .. (1)(2) + (2)(0) (  1)(1) = 1 i: 0, the vector u is not orthogonal ton. Thus Vis
not contained in W.
(b) The line Vis parallel to u = (2,  1, 1), and the plane W has normal vector n = (1,3, 1). Since
u·n:o::: (2)0)+(1)(3)+(1}(1) :::0, the vector u is orthogonal ton. Thus Vis contained
il' w.
16. (a) Sim:e u · n = (1) (2) + (1)( l) + (3)(  J) = 0, the vector u is ton. Thus Vis contained
in IV.
(b) T he line V is parallel to u = (J, 2,  5), a.nd the plane W has normal vector n = (3,2, 1). Since
u· n = (1) (3) + (2)(2) + (5)(1) = 2 i: 0, the vector u is not orthogonal ton. Thus. Vis not
contained in W.
17 (a) The vector equation c1 (1, 1, 0) + cz(O, 1, 2) + CJ(2, 1, 3) = (3, 2, 1) is equivalent to the linear
::;ystem with augmented mo.trix
0
1
2
The reduced row echelon form of this rna.t.rix is
0
1
0
0! l;l
Q l _i
I 1i
I
1 I 1
I S
1 hus (3, 2,  1) =
1
5
3
(1, 1, 0)  (0, 1, 2) + t {2, 1, 3) and, by linearity, it follows that
... 1)= \
3
(2,1,1) = !(26, 13, 20)
EXERCISE SET 7.2 201
(b) The vector equation CHI, 1,0} + c
2
(0,1, 2) + c
3
(2, 1,3) = (a,b,c) is equivalent to the linear sys
tem with augmented matrix
1
.. o 2 3 : c
The reduced row echelon form of this matrix is
Thus (a, b, c) 0) + lb + tc){O, 1,2) + ic)(2, 1, 3) and
T(a , b, c) = (! a + 1, 1) + + !c)(l , 0, 2) + 1, 0)
(
7 , 3b +I 1 + 4b 2 + )
= gaT !) SC> ga [;  gC, a C
( c) From lhe formula above, we have {TI = [ !
1
18. (a) 'fhe vector equation c
1
(1, l , l) + c2(l , l, 0} + c3(l, 0, 0) = ( 4, 3, 0) is equivalent to t he linear sys
tern
has solution c
1
= 0, c
2
= 3, = 1. Thus (4, 3, 0) = O(l, l.Jlj.3.(Ll 0) +
hy Jineari t.)', \V€ nave  . ·.
/
T('t 3, o) = o(3, o, I)+ 3(2, 1, 3,  1) + 1(5,  2, 1, .(11, 1, 10, 3)/
 .. .• •. _..?·
( b) The vector eqnatiun c, (1, l, 1) I· c2 ( l, l, 0) + c:l( 1, 0, 0) = (a, b, c) ill equivaleht .to the li near syil
rl I 1] [:t1] [a]
:: =
which ha.:; so!ution x
1
= c, x2 = b  c, X3 =a  b. Thus
(a, b, c) = c(l, 1,1) + (b  c)(l, 1,0) + (a b)(l , O, 0)
and so, by lmearity, we have
T(a, b, c) = c(3, 2, 0, 1) + (b c)(2, 1, 3, 1) + (a  b)(5, ·2, 1, 0)
= (5a  3b+ c,  2a +3b+ c,a + 2b  3c,b + 2c)
(c) F\om the lo<mula above, we have ITI = [j
3 1]
3 l
2 3 .
 1 2
202 Chapter 7
19. Since det ! :] :=  100 0, the vectors v
1
= (1, 7,  5), v 2 = (3, 2, 8), V3 = ( 4, :3, 5) form
 5 8 5 
a basis for R
3
. The vector equation c
1
v
1
+ c
2
v
2
+
= x is equivalent to the linear system with
augmented matrix
[
1 3 4:x]
7  2  3 l y
 5 8 5 : z
and the reduced row echelon form of this matrix is
[
1 0 0 i iox  1
1
doY +
0 1 0
1
 !x  !., + 1z
I 2 4" 4
0 0 1
: 33 23 19
1 50x + 10oY Imiz
Thus a general vector x = (x, y, z) can be expressed in terms of the basis vectors v
1
,
as follows:
X = ( ;0 X  y + 160 Z j V l + (  h + Z) V2 + + '!/  Z )v 3
DISCUSSION AND DISCOVERY
01. (a) True. Any set of more than n vectors in Rn is linearly depe ndent.
(b) True. Any set of less than n vectors cannot be spanning set for Rn.
( c) True. 1f every vector in Rn can be expressed in exactly one way as a linear combina tion of
the vectors in S , then S is a busis for Rn and t hus must contain exact ly n vectors.
(d) True. If Ax = 0 Iws infinitely many solutions. then det(A) ::: 0 and so the row vectors of r1
are linearly dependent.
(e) True. If V £:; W (or W V) and if dim( V) = dim( W), then V = W .
02. No. If s = {vl , Y2,  . . 'Vn} is a linearly dependent set in Rn, then s is not a spanning set for n n;
r. hus it. is not possible to create a basis by formi ug linear combinat ions of the vectors in S .
03. Each such operat()r corresponds to (and is determined by) a permutation of the vectors in t he
basis B . Thus t here a rc a t.otal of n! such operators.
04. Let A be the matrix having the vectors v
1
and v2 as its columns. Then
dct (A) ""' (sin
2
o sin
2
/3)  (cos
2
a  cos
2
{3) =  cos 2o + cos 2{3
and det(A) =/= 0 if and only if cos 2a j cos 2/3, i. e., if and only if o =/= ±/3 + br where k = 0, ± I ,
± 2, .. . . For these values of a and {3, the vectors v
1
and v
2
form a basis for R
2
.
D5. Suppose W is a. subspace of Rn and dim( W) = k. If S = {w
1
, w
2
, .. . ,wj } is a spanning set for
W, then eit her S is a basis for W (in which caseS contains exactly k ve<:: tors) or, from Theorem
7.2.2, a. basis for W can be obtained by removing appropriate vectors from S. Thus the number
of clements in a spanning set must be at least k , and the smallest possible number is k.
WORKING WITH PROOFS
Pl. Let V be a nonzero subspace uf R", and let v
1
be a nonzero vector in V. If dim(V) = 1, then
S = { v
1
} is a ba..<; is for V. Otherwise, from Theorem 7.2.2(bJ, a busis for V can be obtained by
adding appropriate vedors from V to the set S.
WORKING WITH PROOFS 203
P2. Let {v1, Vz, ...
1
vn} be a basis for Rn and, fork any integer between 1 and n, let V = spa.n{v1, v2,
••• 1 vk}· Then S = {vt. v
2
, ... , vk} is a basis for V and so dim{V) = k. The subspace V = {0}
has dimension 0. ·
P3. LetS= {v1o v2, ... , vn}· Since every vector in Rn can be written as a Hnearcombinationofvectors
in S, we have span( S) = Rn. Moreover, from the uniqueness, if Ct v1 + cz v2 + · · · +en v n = 0 then
Ct = Cz =···=en= 0. Thus the vectors Vt. vz, ... , Yn span Rn and are linearly independent, i.e.,
S = {vl, Vz, ...
1
vn} is a basis for Rn.
P4. Since we know that dim(R"} = n, it suffices to show that the vectors T(vt)
1
T(v2), ... ,T(vn) are
)inearly independent. This follows from the fact that if ctT(vl) + CzT(vz) + · · · + c
11
T(v,) = 0
then
T{c1v1 + CzVz + · · · + CnVn) = CtT(vt} + CzT(vz) + · · · + CnT(vn) = 0
and so, since Tis onetoone, we must have c1v1 + czvz + · · · + CnVn = 0. Since VJt v2, ... , Vn are
linearly independent it follows from this that c
1
= c
2
= ···=en= 0. Thus T(v
1
), T(vz), ... , T(v,)
are linearly independent.
P 5. Since B = { v
1
, v2, ... , vn} is a basis for R"', every vector x in Rn can he expressed as 1i linear
combination x = c
1
v
1
+ czv
2
+ · · · + c,. Vn for exactly one choice of scalars c
1
, c ~ h ... , Cn. Thus it
makes sense to define a transformation T: Rn 1 Rn by setting
T(x) = T{ctVl + CzVz + ... + CnYn) = clwl + c2w2 + ... + CnWn
n n
It is easy to check that Tis linear. For example, if x = I: CjVj andy= E d;v
1
, then
j=l j=l
n n n n
T(x + y) = T(l= (c; + d;)vJ) = L (cj + dj)wj = L CjWj + L d;w; = Tx + Ty
j=l j=l j;l j=l
and so T is additive. Finally, the transformation T has the property that Tv; = w; for each
j = 1, ... , n, and it is clear from the defining formula that T is uniquely determined by this
property.
P6. (a) Since { u
1 1
Uz
1
U3} has the correct number of elements, we need only show that the vectors are
linearly independent. Suppose c
1
, C:lJ c
3
are scalars such that c
1
u
1
+ CzUz + C3U3 = 0. Then
(c1 + Cz + c3)v1 + (cz + c3)v2 + c3v3 = 0 and, since { Yt. Vz, v3} is a linearly independent
set, we must have c
1
+ c2 + c3 = c
2
+ c
3
= c
3
= 0. It follows that c
1
= c2 = C3 = 0, and this
shows that Ut, uz, us are linearly independent.
(b) If { v1, v2, ... , Vn} is a basis for R
1
", then so is { u
1
, u2, ... , un} where
... '
P7. Suppose x is an eigenvector of A. Then x :f. 0, and Ax = >.x for some scalar >.. It follows that
span{x,Ax} = spa.n{x,>.x} = span{x} and so span{x,Ax} has dimension 1.
Conv\...sely, suppose tha.t span{ x, Ax} has dimension 1. Then the vectors x f. 0 and Ax are linearly
dependent; thus there exist scaJars CJ a:.d c
2
, not both zero, such that c
1
x + c
2
Ax = 0. We note
further that c2 =I= 0, for if c2 = 0 then since x =I= 0 we would have c
1
= 0 also. Thus Ax = >.x where
>.::;;; cdc2.
P8. SupposeS= {vt, v2, ... , vk} is a basis for V, where V C Wand dim(V) = dim(W). Then Sis
a linearly independent set in W, and it follows that . ~ must be basis for W. Otherwise, from
Theorem 7.2.2, a basis for W could be obtained by adding additional vectors from W to Sand
this would violate the assumption that dim(V) = dim(W). Finally, since S is a. basis for W and
S C V, we must have W = span(S) C V and soW= V.
204
Chapter 7
EXERCISE SET 7.3
1. The orthogonal complement of S = { v
1
, v
2
} is the solution set of the .system
A general solution of this system is given by x = Jt, y = !t, z =tor (.x, y,z) = t( Thus
S.l is the line through the origin that is parallel to the vector ( ! , 1}.
A vector that is orthogonal to both v
1
and v
2
is w = v
1
x Vz = 1 1 .3 = 7i + j + 21<: = ( 7, 1, 2).
[
i j I<]
0 2 1
Note the vector w is parallel to the one obtained in our first solution. Thus SJ. is the line through
the origin that is parallel to { 7, 1, 2) or, equivalently, parallel ,1).
2. The orthogonal complement of S = {v
1
, v
2
} is the solution set of the system
A general solution of this system is given by x = y =1ft, z =tor (x,y,z} = 
1
i
1
1). Thus
S.l. is the line tluough the origin that is parallel to the vector (!,  ¥ , 1).
. . [I j kl
A vector that is orthogonal to both v
1
and v
2
is w = v
1
x v
2
= 2 0  1 = i  llj + 2k = (1,  11 , 2).
1 1 5
Note the vector w is parallel to the one obtained In our first solution. Thus 51.. is the line through
the origin that is parallel to (1, 11,2) or, equivalently, parallel to (j,
3. We have u · v
1
= ( 1)(6) + (1)(2) + (0)(7) ·1 (2)(2) = 0. Similarly, u · v
2
= 0 and u · v3 = 0. Thus u
is orthogonal to any linear combination of the vectors v
1
, v
2
, and v
3
, i,e., u bek>,ngs to the orthogonal
complement of W = spa.n{vl> v2, v3}.
"
4. We have u · v
1
= (0)(4) + (2)(1} + (1)(2) + ( 2)(2) = 0, u · v
2
= 0, and u · v
3
= 0. Thus u is orthog
onal to any linear combination of the vectors vlt v
2
, and v
3
, i.e., u is in the orthogonal complement
ofW.
5; The line y = 2.x corresponds to vectors of the form u = t{1,2), Le., W = span{(l, 2)}. Thus WJ.
corresponds to the line y = or, equivalently, to vectors of the form w = s(2, 1).
6. Here W J. is the line which is normal to the plane x  2y  3z = 0 8Jld pa:3Sea through the origin. A
normal vector to this plane is n = (1,  2, 3); thus parametric equations for WJ. are given by:
X = t , y = 2t, Z =  3t
7. The line W corresponds to S<:alar multiples of t he vedor u = (2, 5, 4); thus a vector w = (x, y, z) is
in WJ. if and only if u · w = 2x 5y + 4z = 0. Parametric equations for this plane are:
X =  2t, y = s, Z = t
' ·
q
EXERCISE SET 7.3 205 't
\ \
I
8. Here W is the line of intersection of the planes x + y + z = 0 and x  y + z = 0. Parametric equatip_ns '\
for this line are x =  tli==9.!! W corresponds to vectors of the ... 1
follows that w = (x,y, z) 1s m w1. if and only if x = z, i.e. if and only if w is of the Torm
w = (r, s, r) = r(l , 0, 1) + s{O,l, 0)
·······
Parametric equations for this plane are given by x = r, y == s, z = r.
Let A be the matrix having the given vectors as its rows. The reduced row echelon form of A is
0 16]
1 19
0 . 0
Thus the vectors w
1
= (1,0, 16) and w2 == (0, 1,  19) form a basis for the row space o( A or, equiv
alently, for the s ubspace W s panned by the given vectors.
We have W..l. = n.1w(A).l = nuii(.A) = null(U}. Thus, from the above, we cpnsists
of all vectors of the i.e., the vector u = (16, 19, 1) forms a basis for w.l..
l 0. Let A bt! •w•L• ix having lhe given vect ors as ils rows. This matrix can be row reduced to
u !
Thus the vect orl; w
1
= (2. 0,  1) and w
2
= (0, 1, 0) form a. basis for the row space of A or, equivalently,
for the subspace W spanned by the given vectors.
We have w.t = row(A) .L = null(A) = null(U). Thus, from the above, we conclude that w.1. consists
of all vectors of the form x = ( 0, t), i.e., the vector u = (!, 0, 1) forms a basis for W .1..
11. Let A be t he matr ix having the given vectors as its rows. The reduced row echelon form of A is
1
ll
0 0
· .:"
1 0
'
'; l
R=
0 . ,.
y •• •
\._,
1
0 0
Thus the vect ors w1 = (1, 0, 0, 0), w
2
= (0,1, 0, 0), and w
3
= (0, 0, 1,1) form a basis for the row space
of A or; equivalently, for the space W spanned by the given vectors.
We have w.t = row(A).l = null(A) = null(R). Thus, from the above, we conclude that W.l consists
of all vect OrS Of the form X = (Q, 0, t, t), i.e., the vector U = (0, 0, 1, 1) formS a basis for W.l.
12. Let A be the matrix having the given vectors as its rows. The reduced row echelon form of A is
[
1 0 0
R= 0 1 0 i
0 0 1 0
0 0 0 0
Thus the vectors w
1
= (1, 0, 0, 0), = (0, 1, 0, i), and w
3
= (0, 0, 1, 0) form a. basis for the row space
of A or, equivalently, for the space W spanned by the given vectors.
We have Wl. = rvw(A).l = null(A) = null(R). Thus, from the above, we conclude that w.t consists
of all vectors of the form x = (0,  0, t) , i.e., tho vector u = (0, 0, 1) foriDB a basis for W .1. •
I
. \
"" \
. ···. .
..i...· ',
,
t'
206
Chapter7
13. Let A be the matrix having the given vectors s.s its rows. The reduced row echelon form of A is
14.
R = ; !
Thus the vectors w
1
= (1,0, 0,2,0), w2 = (0, 1,0,3,0), W3 = (0, 0,1,4, 0) and W4 = (0,0, 0,0,1) form
a basis for the row space of A or, equivalently, for the space W spanned by the given vectoiS.
We have W.J. = row(A)..L = nuli(A) = null(R). Thus, from the above, we conclude that W.l consists
of all vectors of the form x = ( 2t, 3t, 4t, t, 0), i.e., the vector u = ( 2, 3, 4, 1, 0) forms a basis
for W i.
Let A be the matrix having the given vectors as· its rows. The reduced row echelon form of A is
1 0 0 0 243
0 1 0 0 57
R= 0 0 1 0 7
0 0 0 1 2
0 0 0 0 0
Thus the vectors w1 = (1, 0, 0, 0, 243), w2 = (0, 1, 0, 0, 57), W3 = (0, 0, 1, O, 7), W4 = (0, 0, 0, 1, 2)
form a basis for the row space of A or, equivalently, for the space W spanned by the given vectors.
We have Wi = row(A).l = null(A) = null(R). Thus, from the above, we conclude that W.J. consists
of all vectors of the form x = (243t, 57t, 7t,  2t, t}, i.e., the vector u = (243,  57, 7, 2, 1) forms a.
basis for W J..
15. In Exercise 9 we found that the vectoru = (16, 19, 1) forms a basis for W.l; thus W = w.t..t consists of
the set of all vectors which are orthogonal to u, i.e., vectors x = (x, y, z} satisfying 16x + 19y t z = 0.
16. In Exercise 10 we found that the vector u = (!, 0, 1) forms a basis for W J.; thus W = W .J..J. consists
of the set of all vectors which are orthogonal to u, i.e., vectors x = (x, y, z) satisfying x + 2z = 0.
17. In Exercise 11 we found that the vector u = (0, 0, 1, 1) forms a basis for W .l; thus W = W l..l
consists of the set of all vectors which are orthogonal to u, i.e., vectors x = (x
1
, x2, X3, X4) satisfying
X3 + X4 = 0.
18. In Exercise 12 we found that the vector u = 1) forms a basis for w.t; thus W = w.t.L.
consists of the set of all vectors which are orthogonal to u , i.e., vectors x = (x1, x2, x3, x4) satisfying
3xz 2x4 = 0.
19. Solution 1. Let A be the matrix having the given as its columns. Then a (column) vector
b is a linear combination of v1, v2, v3 if and only if the linear system Ax= b is consistent. The
augmented matrix of this system is
[
1 5 7 l bl]
1 4 6! 02
3  4 21 03
and a row echelon form for this matrix is
EXERCISE SET 7.3 207
From this we conclude that b = ( o
1 1
1
b
3
) lies in the space spanned by the given vectors if and only
if 16b
1
+ 19bz + b3 = 0.
Solution 2. The matri x
[
VI ] [ \ 1 3]
v 5 4 4
2
= ,
2
can be row reduced to
_1_::!! __ _
b bt b2 b3
[
l 0 16]
0 1  19
_o __ .s' ___ , and then further
b1 b2 ba
reduced to
From this we. conclude that W = span{ v
1
, v
2
, v3) has dimension 2, and that b = (bt, b2, b3) is in lV
if and only if 16b
1
+ + b3 = 0.
Solution 3. In Exercise 9 we found that the vector u = (16, 19, 1) forms a basis for w1.. Thus
b =
is in W ::: Wl. l. if and only if u · b = 0, i.e., if and only if Hib1 + 19b2 + b3 = 0.
20. Solution l. Let .4 be the ma.trix having the given vect ors ns its columns. Then a (column) vector
b is a .linear combination of v
1
, v
2
, v
3
if and only if the linear system Ax = b is consistent. The
augmented matrix of this system is · · ·
and a row echelon form for this matrix is
Thus a vector b = (b
1
, b2, b3) lies in the space spanned by the given vectors iJ and only if b
1
+ 2b3 = 0.
Solution 2. can be row reduced to and then further
reduced to
[
1 0 _! l
b;
From this we conclude that W = spau{v
1
, v2, v3} has dimension 2, and that b = (bt , b2, b3} is in W
if and only if b1 + 2b3 = 0.
Solution 3. In Exercise 10 we fOIJnd t hat the vector u = n I ul 1) forms a basis for w l.. Thus
b = (bt , b2, b
3
) is in W = W.l..J.. if and only if u · b = 0, i.e., if and only if bt + 2b3 = 0.
208 Chap1er 7
21. Solution 1. Let A be the matrix having the given vectors as its columns. Then a (column) vector b
is a linear combination <>f. v
1
, v
2
, v:l, v 4 if and only if the linear system Ax ::::: b is consistent. The
augmented matrix of this system is
0  2 4
b,]
0 0 2 b2
1 2 1
b3
1 2 1
b4
and a row reduced form for this matrix is
0  2
4: b, ]
z
I
1 1 1 ba
0 2'
2: bt + b2
0 0 0: + b4
Thus a vector b = (b
1
, bz , b
3
) lies in the space spanned by the given vectors if and only if b3 + b4 = 0.
8olution 2. The matrix v
3 !
4 2 1  1

bl 02 b3 b4
can be row reduced to [ : and then
0 0 0 0
·  
0! b2 ba b4
b
further reduced to
l
0
0
0
0
0
()
0
0
0
0
1
0
0
0
1
0
From t his we conclude t hat W = span{vlo v z, v
3
, v
4
} has di mension 3, and t.hat b = (b1.bz,b3) is in
W if and only if  b3 + b." = 0.
Solutwn 3. Tn E..xcrcise 1 J we found t.hat vector u = {0, 0, 1, l ) forms a basis for W .L. Thus
b = (b
1
, b
2
, is in IV = WL
1
if and onl y if u · b = 0, i.e., if and only if  b
3
+ b
4
= 0.
22. Solution 1. Let A be the matrix having the gi ven vectors as its columns. Then a (column) vector
b is a linear combination of these vectors if a.nd only if the linear system Ax = b is consistent. The
augmented mat.rix of this system is
[
]
3 5
b, ]
2 6 4 I b2
I
 2 3 1
I
03
I
3 9 6
I
04
I
and a row echelon form for this matrix is
i ! b3 l
1 9 23 ! 5bl  2b:l
0 0 t:!bl+ i b2
0 0 0 : ! b4  ! b2
I 3 2
Thus b = (b;. b
3
, b
4
) lies in the space spanned by t he given vectors i£ and only )f 2b4 = 0.
EXERCISE SET 7.3 209
23.
24.
[
vl] r 2 4 s 6]
Y2 l} 2  2 J
Solution 2. The matrix ·= can be row reduced to
b bt In b b1
reduced to
1 0 0 0
0 1 0
3
2
0 0 1 0
0 0 0 0

0 0 0
+b4
l 0 0 0
a
0 1 0 2
o o 1 o , and then further
0 0 0 0

From this we conclude that W = span{ v 1, vz, v3, v
4
} has dimension 3, and that b = (bl> bz, b3, b
4
) is
in W if and only if 3b2  2b4 = 0.
Solution 3. In Exercise 12 we found that t he vector u = (0,  0, 1) forms a basis for W .L. Thus b =
{bJ , b2. b3 , is in W ;; w .t.t if and only if u · b = + b
4
= 0, i. e., if and only if 3b2  2b
4
= 0.
The augmented matrix lv
1
v2 V:; v4 I ht l b z l b "J is
··2  ] 0 I 2 I
I I
0  2
l 0 1 2: 4:  2 2
0 1 2  1
I
2:  3 I  1
I
'
1 1 1 1
I
2: 1 1
I
2 3  1 1 ' 5 : 5 0
I
and the reduced row echelon form of this matrix is
0 0 0
1 '
0 0
I
l 0 0 1 : 1 0
0 1 0 l: 2 0
'
0 0 1
l ' 0 0
t
0 0 0 o:
0
From t his we conclude Uwt the vectors h
1
b :t lie in span{v
1
, v 2, v
3
, v
4
}, but bs does not.
The matrix jv
1
Vz '13 v 4 I b 1 I b2 I bsl is
0 1  L 3 •
t
1 1 0 2 : 1: o:2
0 3 2
I
1 '
7 : 1 : 6
I I I
2 1 0 '
'
2 '
I
2 : 4
0 l 1 1 : 1 : 2 : 1
and the reduced row echelon form of this matrix is
1 0 0
0 1 0
0 0 1
0 0 0
0 0 0
0: _! :
1
I 1) 1
0 I § I}
I 5 I
o:
6 t
1  t
t
5 '
1 t
t
1 t
'
0
o: o: 0
0
0
0
0
From this we conclude t hat t he vectors b
1
and b
2
lie in span{v
1
, vz, v
3
, v
4
} , but b
3
does not.
210
Chapter7
25. The reduced row echelon form of the matrix A is
26 .
0 0 0 0 1 0
0 0 0 0 0 1
Thus the vectors r
1
= (1,3, 0,4, 0,0), r2 = (0,0,1,2,0,0), r 3 = (O, O,O,O, l,O), r4 = (0, 0,0,0,0,1),
form a basis for the row space of A. We also conclude from a.n inspection of R that the null space of
A (solutions of Ax= 0) consists of vectors of the form
x = s(  3, 1, 0, 0,0,0) + t{4, 0, 2, 1, 0, 0)
Thus the vectors n
1
= (  3, 1, 0, 0, 0, 0) and n2 = (4, 0, 2, 1, 0, 0) form a basis for the null space of
A. It is easy to check that r; · Dj = 0 for a.ll i, j. Thus row( A) and null(A) are orthogonaf.subspat:es
of R
6
•
The row (>Chf'lon form of the m11.trix .4 is
1 0 0
1
!]
4
0 1 0
1
R=
2 5
0 0 1
t 3
;j
5
0 0 0 0 0
Thus the vectors f ) = (1, 0, 0, n. r 2 = (0, 1, 0, 4. a.nd TJ = (0, 0, 1, }, form a basis for the
row of A \Ve also conclude from nn inspection of R that the null space of A (solutions of
Ax ::: O) consists of vectors of the form
Thus the vect ors n
1
= ( · 1, 0), n
2
=: ( · 0, 1) form a basis for the null s pace of A.
It is easy to che<.:k t.hat r, · n
1
= 0 for all i, j . Thus row(A) and onll(A) are orthogonal subspaces of
R".
27. The vectors r1 = (1,3,0,4,0, 0), r2 = (0,0, 1, 2,0,0), r3 = (0,0,0,0, 1, 0), r
4
= (0,0,0,0,0,1) f<'lrm a.
basis for t he row space of A (see Exercise 25).
28. The vectors r, = ( l , 0, 0, !.  ), r2 = (0, l, 0, V. a..nd r 3 = (0, 0, 1, ! , for m a. basis for t he row
space of A (see Exercise 26).
29. The reduced row echelon forms of A and B are and respectively. Thus the
0 0 0 1
30.
vectors r l :::: (1,0,3, 0), r2 = (0, 1, 2,0) , r 3 = (0,0,0, n form a basis for both of the row spaces. It
follows t.hnt row(A) = row(B).
The reduced row forms of A and B are and respectively. Thus the
00 13 OQI J
0 0 0 0
vectors r
1
:::;: (1,0,0,5), r
2
= = (0,0,1,3) form a basis for of the row spaces. It
follows that ruw(A) = row(B).
DISCUSSION AND DISCOVERY
31. Let B be the matrix havi ng the given vectors as its rows. The reduced row echelon form of B is
R =
0  1
 4
211
and from this it follows that a general solution of Bx= 0 is gi ven by x = s (1, 4, 1,0) 1),
where oo < s, t < oo. T hus the vectors w
1
= (1, 4, 1, 0) and w2 = ( 2, 0, 0, 1) form a ha.sis for the
null space of B, and so the matrix
A = [ 141 0]
2 0 0 1
has the property that null (A) = row{A)J. = null( B).!. = row(B).l.J. =row( B).
32. (a) If A= ... and x = [:], then Ax = 0 if and only if x = y = 0. Thus the null sp•ce of A
corresponds t o point.s on t he zaxis. On the othe: hanrl, t he cokmn sp:.:.cc cf A t.'Onsists of all
vectors that are of the form
which corresponds t.o points on lht} xyplane.
( b) The matrix B = has the specified null space n. nd column space.
0 0 I
DISCUSSION AND DISCOVERY
D 1. (a) False. This statement is t. rue if and only if the nonzero rows of A are linear.ly independent,
i.e. , if Lhcre a.re no additional zero rows in i.tll echel on form. "
( b) True. 1f E is a11 elementary matrix, t hen E is invertible and so EAx = 0 if and only if
Ax= 0.
(c) True. If A ha.s ru.nk n , then there can be no zero rows in an echelon form for A; thus the
red11ced row echelon form is t he ident.it.y matrix.
(d) False. For example, if m = n and A is invert ible, then row(A) = Rn and null(A) = {0}.
(e) True. T his follows from the fact (T heorem 7.3.3) that SJ. is a subspace of Rn.
D2. (a) True. 1f A is invertible, then the rows of A and the columns of A each form a basis for W';
thus row(A) = col(A) = R" .
( b) Fah>e. ln fact, t he opposite is true: lf W is a subspace of V, then V J. is a subspace of W.l.
since every vector that is orthogonal to V will also be orthogonal to W.
(c) False. The specified condition implies that row(A) row( B) from which it follows that
mJll(A) = row(A)J. 2 row(B)L = null(B)
but in general the inclusion can be proper.
(d) False. F.or example A= and B = have the same row space but column
spaces.
212 Chapter 7
(e) True. This is in fact true for any invertible matrix E. The rows of EA are linear combinations
of t.he rows of A; thus it is always true that row(EA) row(A) . If E i:; invertible, then
A = E
1
(EA) and so we also have row{A) row(EA).
D3. If null{ A) is the line 3x 5y = 0, then row(A) = null(A)l. is the line 5x + 3y = 0. Thus each row
of A must be a scalar multiple of the vector {3,5), i.e .• A i'i of the form A= [:ls !Js].
3t  5t
D4. The null space of A corresponds to the kernel of TA, and the column space of A corresponds to
the range of T A .
D5. lf W = a..t. , then w..t. = a.L.L =span{ a} is the !dimensional subspace spanned by the vector a.
D6. (a) If nuU( A) is a line through the origin, then row(A) = null( A )..I. is the plane through the origin
that is perpendicular to that line.
(b) If col(A) is a line through the origin, then null( AT)= col(A)..l is the plane through the origin
thnt is perpendicular to that line.
D7. The first. two matrices arc innrtiblc; thus in each cnse the null spuce is {0}. The mill of the
matri..x A = is the line 3x + y = 0, and the null space of A = is all of R
2
.
DB. (a) Since S has equation y = 3x, Sl. has equation y =  and Sl.J. = S has equation y = 3x.
(b) If S == { ( 1, 2)}, then spa.n(S) has equation y = 2x; thus Sl. has equation y =  x ann Sl. l =
span(S) has equation y = 2x.
09. ·No, this is not possible. The row space of an invertible n x n matrix is all of Rn since its rows
form a ba.sis for R ... On the other hancl , the row space of a singular matrix is a proper subspace
of Rn since its rows are linearly dependent and do not span all of Rn.
WORKING WITH PROOFS
Pl. lt is clear, from the definition of row space, that (a) implies (c). Conversely, suppose that (c)
holds. Then, since <::ach row of A is a linear combination of the rows of B, it follows that any
li ne..a.r c.ombination of the rows of A can be expressed as a linear combination of the rows of B .
This shows that row(A) row{B), and a si1uilar argument shows that row(B) row(A). Thus
(a) holds.
P2. The row vectors of an invertible matrix are linearly independent and so, since there are exactly n
of them, they form a basis for R".
P3. If Pis invertible, then (PA)x = P(Ax) = 0 if and only if Ax= 0 ; thus the matrices PA and A have
the same null space, and so nullity{PA) = nullity( A). From Theorem 7.3.8, it follows that PA and
A. have also have the same row space. Thus rank(PA) = dim(row(PA)) = dirn(row(A)) =rank( A).
P4. From Theorem 7.3.4 we have SJ. = span(S)l., and (span(S)J.)..l = span{S) since spa.n(S) is a
subspace. Thus (Sl. )l. = (span{S)l. )l. = span(S)
[
rJ(A)·cl(A
1
) rl(A)·c:J(A
1
) •• • rt(A)·cn(A
1
)][ .. 1 0 ··· 0]
r'l(A)·ct(A
1
) r 2{A)·c2(A
1
) •• r 2(A)·cn(A
1
) 0 1 · · · 0
P5. WehaveAA
1
= . . . . = . . . . = [Oijl· In par
. . . . .
. . . .
r
11
(A) · C! (A
1
) r n{A) • C2{A
1
) · • · r n(A) · Cn(A
1
) 0 0 ·· · l
ticular, if it= {1, 2, . . . ,k} and j E {k + 1, k + 2,. _., n}, i < j and so ": ( A)· c;(A
1
) = 0.
This shows that the first k rows of A and the last n  k columns of A
1
are orthogonal.
EXERCISE SET 7.4 213
EXERCISE SET 7.4
1. The reduced row echelon form for A is
0 16]
1 19
0 0
Thus rank(A) = 2, and a general solution of Ax= 0 is given by x = ( 16t, 19t, t) = t(16, 19, 1). It
foll ows that nullity(A) = 1. Thus rank( A) + nullity(A) = 2 + 1 = 3 the number of columns of A.
2. T he reduced row echelon form for A .is
0 0
Thus rank(A) = 1, and a general solution of Ax= 0 is given by x = (1t, s, t) = s(O, l ,O) +
It. follows t hat. nullity(A) = 2. Thus rank(A) + nullity(A) = 1 + 2 = 3 t he number of columns of A.
3. The reduced row echelon form for A is
0
1
0
l tl
l :!
7
0 0
Thus rank(A) = 2, and a general solution of Ax= 0 iR given by x <= s( 1, 1, 1, 0) 1).
It follows t hat nullit.y(A) = 2. Thus rank(A) + null ity(A) = 2 + 2 = 4 the number of columns of A.
4. The reduced r ow form for A is
5.
0 1 1..J 2
l
] 0 1 2 ]}
.. ·.. .. ____ ___  . ·
Thus rallk(A) = 2, and a general solution of Ax= 0 is given by
x = r( 1,  1, 1, 0, 0) + s(2,  1, 0, l, 0) + t( 1, · 2, 0, 0, 1)
It follows that n111lity(A) = 3. Thus rank(A) + nullity(A) = 2 + 3 = 5.
The reduced row form for A is
1 0 0 2
4
3
0 0 0
I
 6
0 0 1 0
5
 n
0 0 0 0 0
0 0 0 0 0
Thus rank(A) = 3, and a general solution of Ax= 0 is given by
x ""' s(2,0,0,1,0} + fi,o, 1)
It follows that nul;ti.y(A) = 2. Thus rank(A} + nullHy(A) ""'3 + 2 = 5.
214 Chapter 7
6. The row echelon form for A is
0
1 2 2 4
3 3 3 3
1
1 7 4 l
3
3 3
3
0 0 0 0 0
0 0 0 0 0
Thus rank(A) = 2, and a gene1al solution of Ax = 0 is given by
X = r( s,
0,0, 0,0} + s( j, 0, 1, 0, 0, 0) + t( j, t, 0, 0, 1, 0,0)
+ A. 0,0,0,1,0) + 0, 0, 0, 0,1)
ll follows that nullity( A)= 5. Thus rank{A) + nullity(A) = 2 + 5 = 7.
7. (a) lf A is a 5 x 8 matrix having rank 3, then its nulHty must be 8 3 = 5. Thus there a.re 3 pivot
variables and 5 free parameters in a general solution of Ax = 0.
(b) Tf A is a 7 x 4 matrix having nullity 2, then its rank must be 4  2 = 2. Thus there are 2 pivot
variables and 2 free parameters in a general solution of Ax = 0.
(c) lf A is a 6 x G matrix whose row echelon forms have 2 nonz!'rO rows, then A hn.s rank 2 and
nullity 6 2 = 4 Thus there are 2 pivot variables and ..1 free parameters in a general solution
of Ax  0
8. (a) If A is a 7 x 9 matrix having rank 5, then its nullity must be 9  5 = 4. Thus there are 5 pivot
variables and t free parameters in a general solution of Ax = 0.
(b) If A is an 8 x 6 matrix having nullity 3, then its rank must be 63 = 3 Thu:; there are 3 pivot
variables and 3 free parameters in a general.solution of Ax = 0
(c) If A is a i x 7 matriX ''hose row echelon forms ha\f' 3 nonzero rows then ..1 has rank 3 and
nulhty 7  3 = ·1 Thus there are 3 pivot variables and 4 free paramettrs in a genera] solution
of Ax  0
9. (a) [fA a 5 x 3 rnatnx. then the largest possible value for rank( ... l) is 3 and the smallest possible
value for nulllty(A) is 0
(b) If A is a 3 x 5 :natrix, then the largest possible value for rank(A) is 3 and the smallest possible
valur for null1ty( '\) is 2
(c) If A is a 4 x 4 mn.trix, t hen the largest possible value ior ranK( A) IS 4 and the smallest possible
value for nulltty(A.) is 0.
10. (a) If 4 1s a 6 x I then lhe largest possible value for rank(.4) is 4 and the smallest possible
value fm nulhty(A) is 0.
(b) If A is a. 2 x 6 matrix, then the largest possible value for rank(A) is 2 and the smallest possible
value for nullity(A) is 4
(c) If A 1s a 5 x 5 matrix, then the largest posstble value for rank(A) ts 5 and the smaiJest possible
value for nulltty(A) is 0
11. Let A be the 2 x 4 matnx havmg v
1
and v
2
as its rows Then the reduced row echelon fonn of A is
0
"
3
and so a g<>n('ral solulton of Ax= 0 is given by x = 0) + tn, 0, 1). Thus the vect ors
Vt= (·! , l ,O,O), v2=(0,3,.t,5),
form a basts for R
4
EXERCISE SET 7.4 215
12. Let A be the 3 x 5 mat rix having v
1
, v
2
, and v:l as its row:>. Then the reduced row echelon form of
A is
0 0
19
¥]
 8
0
2:l 15
15
 8
0
43 43
 16
8
and so a general solution of Ax= 0 isx =
1
i , l, 0) + t (
2
1,
1
i, 
0, 1). Thus t he vectors
Vt = (1, 0, 2, 3, 5), v2 = (  2, 3, 3, }, 1), V3 = (4
1
1, 3,0
1
5)
W3 = 1,0), W4 =
1
8
5
,¥,0, 1)
form a basis for R
5
•
13. (a) This matrix is of re.nk 1 with l1 = =
(h) T his matrix is of rank 2.
7] = u vT.
(c) This matdx i< oh ank 1 with A = H : : _:;] +:1[1
1 3 3
1 4. (a) This matrix is of ra.nk 2.
15.
16.
(b) This of '""k 1 wi th A= __ ,: ::] = [ 6 3[ = uv''' .
(c) This matrix is of rank 1 with
;\ = [
3
1
lJ =
6 10 12  6 8] [ 2]
6 10 12 6 8 2
1
 i·Z 3 I
3  .) 6  ·l  1
3 5 6  3 4) = UVT
6 2 2]
9 3 3 . . t .
ts a svmmetnc rna nx.
3 l 1 •
3 1 1
0 0
16 20
 20 25
28 35
is a symmetric matrix.
1 7. The matrix A can be row reduced to upper triangular form as follows:
t l [1 1
1 t+ 0 t 1
1  t
2
0 0
t ]
1 t
(2+ t)(l  t)
If t = 1, then the la.tter has only one nonzero row and so = 1. If t = 2, then there are two
nonzero rows and ra.nk(A) = 2. If t :f: lor  2, then there are three nonzero rows and rank(A) = 3.
216 Chapter 7
18. The matrix A can be row red11ced as follows:
[ ~
3
1]
[i
3
tl r
3
_, l r
3
t ]
A= 6 2
+
6 2 + 0 3 2+3t + 0 3 2+ 3t
1 3 t 3
1 0 3 3t 1 + t
2
0 0 (t l){2t  3)
If t = 1 or t = ~ , then t he third row of the latter matrix is a zero row and rank( A) = 2. For all other
values of t there are no zero rows, and rank(A) = 3.
19. If lhe matrix [:r Y z] has rank 1 then the first row must be a scalar multiple of the second row.
1 X l/
Thus (x.y,z) = t(l,x,y), and sox= t, y = tx = t
2
, z = ty = t
3
.
20. Let A be the 3 x 3 matrix having VI. v
2
, v
3
as ils rows. The reduced row echelon form.of A is
...
thus the vt•ctors w1 = (1, 0, 5) and w 2 = (0, 1, 4) form a basis for W =row( A) = row(R). We have
[
Stl [5]
Ax = 0 tf and only if Rx = 0, and a general solution of the latter is given by x = 4: = t ~ ;
Lhus the vf'r.tor x
1
 (5, 4,1) forms a basis for W.l. = row(A).l.. Finally, dim(W)+dim(W).l. =
'>+I== 3 .
21. The suh.spare W, consisting of all vector!'J of the form x = t(2, 1,  3}, has dimension 1. The subspace
n·.l is the hyperplane cons1$tmg of all vectors x = (x,y,z) whlch are orthogonal to (2,1,3), i.e.,
which satisfy the equation
2x y 3z = 0
A gt•neral !iolution of the latter ts given by
x = (s. 2s 3t, t) = s(1. 2, O) + t(O, 3, 1)
wtwre  oo < t < oo. fhus the vectors (1, 2, 0) and (0, 3, 1) form l\ basis for w.t' and we have
dim(W) + dim(lV.l.) = I + 2 = 3
22. If u and v nre nonzc>ro column vectors and A = uvr, then Ax = (uvT)x = u(vrx) = (v. x) u . Thus
Ax= 0 if Bnd only if v · x = 0. i.e .• if and only if xis orthogonal to v . This shows that ker (T) = v.l. .
Similar!_\, I he range ofT conststs of all vectors of the form Ax= (v · x }u, and so ran(T) =span{ u}.
23. (a) If B 1<:: obtainerl from A by changing only one entry, then A  B has only one nonzero entry
(hence only one nonzero row), and so rank(A B) = 1.
(b) If B is obtained from A by changing only one column (or one row), then A  B has only one
nonzero column (or row) , and so rank{A  B) = 1.
(c) 11 B 1s obtamed from A in the specified manner. Then B  A has rank 1. Thus B  A is of the
form uvT and 8 =A+ uvT.
24. (a) If A= uvr, lhcn A
2
= ( uvT)( uvT) == u (vTu)vT = (v · u) uvT = (u. v )A.
( b ) If Ax= >. x for some nonzero vector x, then A
2
x = >.
2
x. On the other hand, since A
2
= (u · v )A,
we have A
2
x = (u · v )Ax = (u · v)>.x . Thus >.
2
= (u · v)>., a.nd if>. :f nit. follows that>.= u · v .
DISCUSSION AND DISCOVERY 217
(c) If A is any square matrix, then I  A fails to be jnvertible if and only if there is a nonzero .vector
x such tha.t (I  A)x = 0; this is equivalent to saying t hat>. = 1 is an eigenvalue of A. Thus if
A ::::: uvT is a nmk 1 r£tatrix for which I A is invertible, we must have u · v =/= 1 and .4
2
:fA.
DISCUSSION AND DISCOVERY
Dl. (a) True. For example, if A ism x n where m > n (more rows than columns) then the rows of
A form a set of m vectors in Rn and must therefore be linearly dependent. On t.he other
hand, if m < n then the columns of A must be linearly dependent.
(b) False. If the additional row is a li near combination of the existing rows, then the rank will
not be increased.
(c) False. For example, if m = 1 then rank( A)= 1 and nullity( A) = n  1.
(d) True. s·uch a matrix must have rank less than n; thus nullity(A):::::: n  rank(A) == L
(e) False. U Ax= b is inconsistent for some b then A is not invertible and so Ax = 0 hM
nontrivial solutions; thus nulHty(A) ;::: 1.
(f) 11uP.. We must have rank(A) + nullity(A) = 3; thus it is not possible to have rank( A) and
nullity(A) both C' qual t.o 1.
D2. If A is m x n, then AT is n x m and so, by Theorem 7.4.1, we have rank(AT) + nullity( AT)= m.
D3. If A is a 3 x 5 matri."<, then the number of leading l's in the reduced row echelon form is at most
3, und (assuming A =/= 0) the number of parameters in a general solution of Ax= 0 is a.t most 4.
D4. If A is a 5 x 3 matrix, thPn ·rank( A) :::; 3 and so the number of leading l's in the reduced rvw
echelon form of A is at most 3. Assuming A =I 0, we have rank(A) :;::: 1 and so the number of free
parameters in a general solution of Ax = 0 is at most 2.
D5. If A is a 3 x 5 matrix, then the possible values for rank(A) are 0 (if A= 0) , 1, 2, or 3, and the
corresponding values for If A is a 5 X 3 matrix, then the possi ble values
for rank(/\) are 0, 1, 2, or 3, and the corresponding values for nullity(A) are 3, 2, 1, or 0. If A is
a 5 x 5 matrix, then the possible values for rank(A) are 0, 1, 2, 3, 4, or 5, and t.he corresponding
values for nullity(A) are 5, 4, 3, 2, 1, or 0.
D6. Assuming u and v are nonzero vect.ors, the rank of A= uvT is 1; thus t.he nullity is n  1.
D7. Let A be the standard matrix ofT. If ker(T) is a. line through the origi n, then nullity(A) = 1 and
so rank(A) ::: n  1. Tt follows that ran(T) = col (A) has dimension n 1 and thus js a hyperplane
in R".
D8. If r == 2 a.nd s = 1, then rank(A) = 2. Otherwise, either r  2 or s l (or both) is 0, and
rank(A) = 3. Note thnt, since the first and fourt h rows are linearly independent, rank(A) can
never be 1 (or 0).
D9. If,\¥ 0, then the reduced row echelon from of A is
0 0
{]
1 0
0 1
0 0 0
L.
218
and so rank(A) = 3. On the other hand, 1f >. = 0, then the reduced row echelon form is
0
1
0
0
5
2
0
0 il
and so rank(A) = 2. Thus>. = 0 is the value for which the matrix A has lowest rank.
Chapter 7
DlO. Let .4 = and B = Then rank(A) = rank(B) = 1, whereas A
2
= bas rank 1 and
B2 = has rank 0.
Dll. If AB = 0 then rank( A) + rank(B)  n = rank(AB) = 0, and so rank(A) + rank(B} 5 n. Since
rank( B)  n nulHty(B), it follows that rank(A) + n  nullity( B) 5 n; thus rank{A) B).
S1milarly, rank(B) 5 nullity(A).
WORKING WITH PROOFS
Pl. First. we note that. the matrix A= (
011 012 013
] fails to be of rank 2 if and only if one of the rows
021 an 023
is a scalar multiple of the other (which includes the case where one 01 both is a row of zeros).
Thus it is sufficient to prove that the latter is equivalent to the condition that
Suppose that one of the rows of A is a scalar multiple of the other. Then the same 1s true of each
of the 2 >< 2 matrices that appear in ( #) and so each of these det erminants IS equal to zero.
Suppose, conversely, that the cond1t10n ( #) holds. Tlten we have
\\'ithout loss of gPnerality ,,.e may assume that the first row of A is not a row of zeros, and further
t Interchange columns if neces!lary) that. a11 =1: 0 We then have a22 = ( l!.U )at 2 and = ( l!.U )a.l3
an an
Thus and so the second row is a scalar multiple of the first
••II
row
P2. lf A is of rank 1, then A = xyT for some (nonzero) column vect ors x andy. If, in addition, A is
symmetric, Lhen we also have A = AT= (xyT)T = pr. From this it follows that
x(yr x) = (xyr )x = (yxT)x = y(xr x) = yllxll
2
Since x andy are nonzero we have xry = yTx = y · x :/; 0 and x = If xTy > 0, it follows
that A= yxr = y(gy)T == = = uuT IfxTy < 0, then thecor
respondmg formula 1s A =  uur where u =
P3. AB = lc.CA) c1(A)
[
r l (B)
r 2(B}
= Ct(A)r t(B ) + c2(A)r 2(8) + · · · +
and
each of the c
1
( A)r
1
(B) is a rank 1 matrix.
EXERCISE SET 7.5 219
P4. Since the set V U W contains n vectors, it. suffices to show that V U W is a linearly independent
set. Supp?se then tha.t c
1
, c
2
, . •. , and d
1
, d
2
, ••• , dn k are scalars with the property that
CtVJ + CzVz + · · · + CJ.:Vk + d1W1 + dzWz + · · · + dnkWnk = 0
Then the vector n = c1v1 + czvz + · · · + c,.vk = (d1w1 + dzW?. + · · · + dn JcWnk) belongs both
to V = row(A) and to W = null(A) = It follows that n · n = llnll
2
= 0 and son= 0.
Thus we simultaneously have c1 v1 + czVz + · · · + q v,. = 0 and d1 w1 + d2w2 + · · · + dnk Wnk =
0. Since the vectors v1, Vz, . .. , Yk are linearly indepenjent it follows that c
1
= cz = · · · = ck = 0,
and since w1, Wz, ... , Wnk are linearly independent it follows that d1 = dz = · · · = dnk = 0.
This shows that V U W is a linearly independent set.
P5. From the inequality rank(A) + rank( B)  n rank{AB} A) it follows that
n rank(A) n rank(AB} 2n  rank{A)  ra.n.k(B)
which, using Theorem 7.4.1, is the same as
nullity( A) nullity(AB) S nullity(A) +nullity( B)
Similarly, nullity(B) S nullity(AB) nullity(A) + nullity(B).
P6. Suppose .A= 1 + vr ; t '
1
u # 0, and let X= A
1
 A'. Then
Bx (A
T) A l uv1' At) I T Al uvT A1 + uvT Aluvr A t
.=: + uv  A = + uv · A
. uvTA
1
+ u(vTA
1
u)vTA
1
l +vTA
1
u
=l+uv rA
1
 = l + uvTA
1
 uvTA
1
;;;;;[
). A
1 1 A
1 TAt l A1 TA 1
and so B is invertible with B· = X= A  uv =A  uv
1
A l+vT.'I.  u '
Note. If vTA
1
u =  1, then nA lu =(A+ uvT)A
1
u = u + u(vT A
1
u) = u  u = 0; thus
BA
1
is singular and so B is singular.
EXERCISE SET 7.5
1. The <edueed cow .ehelon fonn of A is [j
0  1 2
0 l 1 J
, .... ··
0 7
l 9
0 0
0 0
4. 5]
5  6
0 0
and the reduced row echelon form of
0 0
AT is
0 0 0 0 Thus dim(row(A)) = 2 and dim(col(A) = dim(row(AT)) = 2. It follows that
0 0 0 0
0 0 0 0
diro(null(A)) =52 = 3, and dim(null(AT)) = 4 '2 = 2. Sjnce dim(null(A)) = 3, t here are 3 free
parameters in a general solution of Ax = 0.
2. The reduced row echelon form of A is
l 0 0 1
0 I 0 l
0
0
0
0 0
1 0
0 1
0 0
_ .!§.
1]
2
and the reduced row echelon form
7
2
0
of AT is 0 o 1 0 . Thus dim(row(A)) 3 and di.m(col(A) = dim{row(AT)) = 3. It follows t hat
0 0 0 0
0 0 0 0
dim(null(A)) = 53 == 2, and dim(null(AT)) == 43 = 1. Since dim(nuli(A)) = 2, there are 2 free
parameters in a general solution of Ax = 0.
220 Chapter 7
3. The reduce<! row echelon form of A is [ ~ ~ !] and the reduced row echelon form of AT is [ ~
Thus rank(A) 3 and rnnk(AT) = 3.
0 2
1  1
0 0
4. The reduced •ow echelon form of A is r ~ : ,! :] and the reduced row echelon form of AT is
[ ~ ~ ] . Thusrank(A) = 2 and rank(A;) =
0
2 'o ~
5. (a) dim(row(A)) = dim(col(A)) = rank(A) = 3,
dim(rlllJI(A)) = 3  3 = 0, dim(nuli(Ar)) = 33 = 0.
6.
7.
8.
(b) dim(row(A)) = dim(col(A)) = rank(A) = 2,
dim(null(A)) = 3  2 = 1, dim(null(AT)) = 3 2 = 1.
(c) dim(row(A)) dim(coi(A)) = rank{A) = 1,
dim(null(A)) = 3  I  2, dim(null(AT)) = 3  1 = 2.
(d) dim(row(A)) = dim(coi(A)) = rank(A) = 2,
dim(nuii{A)) = 9  2 = 7, dim(nuii(AT)) = 52 = 3.
(e) <lim(row(A)) = dJm(col(A)) = rank(A) = 2,
(a)
{b)
(c)
(d)
(e)
(a)
(b)
(c)
{d)
(a)
(b)
(c)
(d)
dim(nuii(A)) = 5 2 = 3, dim(nuii(AT)) = 9  2 = 7.
dim(row(A)) = dim(col(A)) = rank(A) = 3.
dim(nuii(A)) = ot  3 = 1, <lim(null(AT)) = 33 = 0.
dtro(row(A)} = dlm(col(A)) = rank(A) = 2,
dim(null(A)} = 3  2 = 1, dirn(nuii(AT)) = 4  2 = 2.
dim(row(A)) = c.llm( coi(A)) = rank(A) = 1,
dim(null (A)) = 3 I = 2, dim(nuli(AT)) = 6  l = 5.
dim(row(A)) = d1m(coi{A)) = rank( A) = 2,
dim(null(A)) = 7 2 = 5, dim{nuli( AT)) =5  2 = 3.
dim(row( A)) = dim(col(A)) = rank(A) = 2,
dim(nuli(A)) = 4 2 = 2, dim(nuii(AT)) = 7  2 = 5.
Since rank(A) = rnnk!A I bJ = 3 the system is consistent. The number of parameters in a general
solution is n  r = 3 3 = 0; i.e., the system has a unique solution.
Since rank(A) t rank[A I bJ the system is inconsistent.
Since rank{ A) = rank[A I bJ = 1 the system is consistent. The number of parameters in a general
solution is n  r = 3  1 = 2.
Since rank(A) = rank[ A I bj = 2 the system is consistent. The number of parameters in a general
soluLion JS n  r = 9 2 = 7.
Since rank{A) t rank[A I b) the system is inconsistent.
Since rank( A) = rank[ A I bl = 2 the system is consistent. The number of parameters in a. general
solution is n  r = 4  2 = 2.
Since rank( A) = rank[A I bJ = 3 the system is consistent. The number of parameters in a general
solution is n  r = 3  3 = 0, i.e. , tLc system has a unique solution.
Since rank( A) = rank[A I bJ = 4 the system is consistent. The number of parh.weters in a. general
solution 1s n  r = 7  4 = 3.
EXERCISE SET 7.5 221
9. (a) This matrix bas full column rank because the two column vectors are not scalar multiples of
each other; it does not full row rank because any three vectors in R
2
ar..; linearly
This matrix does not have full row rank because the 2nd row is a scalar mul'.iple of the 1st row;
it does not have full col umn rank since any three vectors in R
2
are linearly dependent.
(b)
(c)
(d)
This matrix has full row rank because the two row vectors arP. not scalar mult iples of each other,
it. does not have full column rank beeause any three vectors iu R
2
are linearly
This (square) matrix is invertible, it has full row rank and full column rank.
10. (a) Thls matrix has full column rank because the two column vectors are not scalar multiples of
each other; it does not have full row rank because any three vectors in R
2
are linearly dependent.
(b) This matrix has full row rank because the two row vectors are not scalar multiples of each other;
it does not have full column rank because any three vectors in R
2
are linearly dependent.
11.
(c) Thls mat rix does not have full row rank because the 2nd row is a scalar multiple of the lst row;
it does not have full column rank because any three vectors in R
2
are linearly dependent.
(d) This (square) matrix is invertible; it has full row rank and full column rank.
(a)
. [10 2 i 1]
det(ATA) = det l
6
 ="" 149 =f 0 and det(AAT) = dct 2 4  2 = 0; t hus ATA is invE>rt
1
2
·· ll  2 17
ible and AAT is not invertible. This corresponds to the fact (see Exercise 9a) that A has fuJl
column rank but not full row rank.
(b) det(AT A) = det [ : = o and det(AAT) = det. = o; thus neither AT A nor AAT
!0 40 20
IS invertible. This corresponds t.o the fact. that A doeR not have full column rank nor full row
rank.
(c) det(AT A) det [:: ; __ ;_;]: 'l'"1 det(AAT) :] 66,; 0; thus AT A Is not lnmtlble
but A A
1
i&inveitll)fe. This corresponds to the fact that A hu.s does not have full column rank
but. does have full row rank.
(d) "" del(!'>
1
] = 64 :f= 0 and det(AAT) "" det (
4 2
] = 61 =f 0; thus AT A and AAT are
4 ]{) 2 17
hoth invertible. This to the fact that A has full column rank and full row rank.
12. (a) det(AT A) ...,. det [
1
: = 45 =f 0 and det(AAT) = det [ = 0; thus AT A is invertible
· 1  2 I
and AAT is not invertible. Th1s corresponds to the fact (see Exercise lOa) that A has full
column rank but not full row rank
[
4 10 2] g
(b) ciet(AT.1)..,., rlct 10 34 13 = 0 and det(AAT) = del
= 1269 :j: 0; thus AT A is not
2 13 37
invertible but AAT is invertible. This corresponds to t he fact that A does not have full column
rank but does have full row rank.
(c) det(AT A)::; det
= 0 and det(AAT) = det = 0; thus neither AT A nor
60 15 45
AA T is invertible. This corresponds to the fact. that A has does not have full column. rank nor
full row rank.
222 Chapter 1
13.
14.
15.
16.
(d) det(AT A)= dct [
61 41
1 = 1369 =/= 0 and det(AAT) = det
3
3
7
] 1369 =/= 0; thus AT A and AAT
41 soj 1
are both invertible This corresponds to the fact that A has both full column r<illk and full row
rank
[
I 0: b3 l
The augmented matnx of Ax= b can be row reduced to o 1 ! 4b
1
 b
3
• Thus lhe system Ax = b
0 01 bt + b'J
is either inconsistent {if b
1
+ :f:. 0}, or bas exactly one sol ution {if b
1
+ = 0). Note Lhat the latter
includes the case b
1
= b2 = b3 = 0; thus the system Ax = 0 has only the t rivial solut ion.
[
1 4: bt l
The augmented matrix of Ax = b can be reduced to o 3: i>l 2bt . Thus the system Ax = b
0 0 1 b1  2i>l + b3
is t! ither inconsistent (if bt  2b2 + b
3
=I= 0), or has exactly one solution (if b
1
 2b
2
+ 6
3
= 0). The
latter incl udes t he case b
1
= b2 = b
3
= 0; thus the system Ax = 0 has only the trivial sdl'i.ition.
If A = [ then A r A [
6 12
]. It is clear from inspection t hnt the rows of A and .4. T A
 )
2
I 12 24
are multiples of the single vector u = (1,2). Thus row(A) = row(Ar A) is the !dimensional space
consisting of all scalar multiples of u . Similarly, null (A) = null(AT A) is the !dimensional space
consisting of all vectors v in R
2
which are orthogonal to u , i e all V('ctors of the form v = s( 2, 1).
The reduced ·row echelon from of A = [
1 1 1
] is [
0
1 0 7
] and the reduced row echelon form of
2 3 4 I 6
ATA = [ Thusrow(A)=row(ATA)isthe2dimensionalspaceconsisting
7  11 17 0 0 0
of all li near comb1ne.tions of the vect ors u
1
= (1, 0, 7) and u
2
= {0, I ,  6). Thus null(A) =null( AT A)
1s the 1dimtlnsional space consisting of aJI vectors v in R
3
which n.re orthogonal to both u
1
and u 2,
i e, all \'CClor of the form v = s(  7,6, 1)
17. augmented ma.trix of the system Ax = b can be rel!\lo.:ed to
3 bt
0 1
b2  bl
0 0 b3  ·lb2 + 3bl
)
0 0 b4 + 2b,
0 0 b5 8b2 + 7bt
thus the system will be inconsistent unless (b
1
, b
2
, b
3
, b
4
, b
5
) the equations b3 = 3bt 1 4b2,
b,. = 2bt bs = ?b1 + where b
1
can assume any values.
18. The augmented matrix of the system Ax= b can be 1educed to
[
1 2 3 1 :
0  7 8 5 !
0 0 0 0 :
thus the system is consistent if and only if b
3
= b
2
+ b
1
and, in this case, there will be in!'lnitely many
solutions.
WORKJNG WITH PROOFS 223
DISCUSSION AND DISCOVERY
D l. If A is a 7 X 5 matrix wit h rank 3, t hen A'r also has rank 3; t hus dim( row( AT)) = dim( col( AT))= 3
and dim(nuli(AT)) = 7  3 = 4.
02. If A has rank k then, from Theorems 7.5.2 and 7.5.9, we have dim(row(AT A))= rank(A1'A) =
rank(A
1
') = rank(A) = k and dim(row(AAT}) = rank(AAT) = rank( A) = k.
D3. If AT x = 0 has only the trivial solution then, from Theorem 7.5.11, A has full row rank. Thus, if
A is m >< n, we must m and dim(row(A)) = dim(col(A) ) = m.
D4. ·(a)
(b)
{c)
(d}
(e)
(f)
D5. (a)
(b)
\
n6. (a)
(b)
(c)
False. The row space and column space always have t he same dimension.
False. It is always true that rank(A) = ra.nk(AT) , whether A is square or not.
True. Under these assumptions, the system Ax = b is consistent (for any b) and so the
mat riCes A and [A I bJ have the same rank.
True. If an m x n matrix A has full row rank u.nd full column rank, then m = dim(row(A)) =
rank(.4) = dim(col(A)) = n.
True. If A
1
'J\ and AA'T are both inverti ble then, from 'Theorem 7.5.10, A has full column
rank ami full r ow rank; t hus A is square.
True. The rank of a 3 x 3 matrix is 0, 1, 2, or 3 a.nd the corresponding nullity is 3, 2, 1, ·or 0.
The solution$ of the system are given by x = (b s t, s, t) where  oo < s, t < oo. This
docs not violate Theorem 7.5.7(b).
The solutions can be expressed as (b, 0, 0) 1 s(  1,1, O) + t (  1, 0,1), where (b, 0, 0) is a pa.r
t.i cular solution and s(  1, 1, 0) + t(  1, 0, l) is a. general solution of the corresponding homo
geneous system.
If A is 3 x 5 , t.hen the columns of /1 are a. set of five vectors in R
3
and thus linearly
dependent .
Lf A 5 x 3, t. hen t. he rows of A are a set of 5 vectors in R
3
and thus a.re linearly dependent.
Jf A is m x n , with m f n, then eit her t he columns of A are linearly dependent or the rows
of A arc li nearly dependent (or both).
WORKING WITH PROOFS
Pl. Frnm T heorem 7.5.8(a) we have nuli(AT A) = nuii(A) . Thus if A .is m >< n, t hen AT A is n x nand
so rank(ATA) = n  nuility(AT A) = n  nulli ty(A} = rank( A). Similarly, nul i{AAT) = nuli(AT)
a.nd so rank(AAT) ::= m  = m  nullity(AT) ""' ra.nk(Al' ) = rank( A) .
P2. As above, we have rank( AT A) = n  A)= n  nulli ty(A) = rank(A) .
P3, (a) Since null(A1' A) = null(A), we have row(A) = nuB{A).l = null(A' A)l. = row(AT A).
{b) AT A is symmetric, we have col(AT A) = row(AT A) = row(A) = col(AT).
P4. If A is m x n where m < n, then the columns of A form a set of n vectors in Rm and thus are
linearly dependent . Similarly, if m > n, then the rows of A form a set of m vectors in and thus
are li near ly depender.t
224 Chapter 7
P5. If rank(A
2
) = ran
1
'(A) then dim(null(A
2
)) = n rank(A
2
) = n rank( A)= dim(nuii(A}) and,
since null(A) <;;; null(A
2
), it follows that null(A) = null(A
2
) .
Suppose now that y belongs to null( A) n col( A) . Then y = Ax for some x in R" and Ay = 0. Since
A
2
x = Ay = 0, it follows that the vector x belongs to null(A
2
) = null(A) , and soy = Ax= 0. This
shows that null( A) n col( A)= {0}.
P6. First we prove that if A is a nonzero matrix with rank k, then A has at least one invertible k x k
submatrix, and all submatrices of larger size are singular. The proof is organized as suggested.
Step 1. If A is an m x n matrix with rank k, then dim(col(A)) = k and so A has k linearly
independent columns. Let B be the m x k submatrix of A having these vectors as tts columns.
This matrix also has rank k and thus bas k linearly independent rows. Let C be the k x k
submatrix of B having these vectors as its rows. Then Cis an invertible k x k submatrix of A.
Step 2. Suppose D is an r x r submatrix of A with r > k. Then, since dim(coi(A}) = k < r, the
columns of A which contain those of D must be linearly dependent. It follows that bne columns
of D are linearly dependent since a nontrivial linear dependence arx1ong the containing columns
results in a nontrivial linear dependence among the columns of D. Thus D is singular
Conversely, we prove that if the largest invertible submatrix of A is k x k, then A has rank k
Step 1. Let C be an invertible k x k submatrix of A. Then the columns of C are linearly
independent and so the columns of A that contain the columns of C are also linearly independent.
This shows that rank(A) = dim(coi(A)) k.
Step 2. Suppose rank(A) = r > k. Then dim(col(A)) = r, and so A has r linearly independent
columns. Let 8 be the m x r submatrix of A having these vectors as its columns. Then B also
has rank r and thus has r lmearly independent rows. Let C be the submatrix of B having these
vectorc; as lt'l rm\''l Thf'n (' is a nonsingular r x r submatrix of A Thus the a..c;sumption that
rank(A) > k has led to a contradicti on. This, together with Step l, shows that rank(A) = k.
P7. If A is invertible then so is AT. Thus, using the cited exercise and Theorem 7.5 2, we have
rnnk(C P) = rank((C P)T) = rank(Prcr) rnnk(CT) =rank( C)
and from this 1l also follows that nullity(CP) = n rank(G' P) = n rank( C)= nullity( C).
EXERCISE SET 7.6
1. A mw eehelon fmm fo• A ;, 8 = : •!]; thus the fi.st two columns of A are the ph•oL columns
and these column vectors form a. b
0
asis
0
for A row echelon from for AT is C = thus
0 0 0
dim(row(A)) = 2, and the first two rows of A form a basis for row(A)
2. The first column of A forms a basis for col(A), and first row of A forms a basis for row(A).
3. The ma.tnx A can be row reduced to B = thus the first two columns of A are the pivot
0 0 0 0
:l:[r 'I
::::
0
:o::::eed ,0
EXERCISE SET 7.6 225
4. The redur.ed row echelon form of A is B = [i i i i] and the reduced row echelon form of AT
5.
0 =:
is C =
0 0 0 0
. Thus the first two columns of A form a basis for col(A), and the first two
0 0 0 0
0 0 0 0
rows of A form a basis for row(A) .
1 Q 0 2
i
3
0 0 0
I
· 6
The reduced row echelon form of A is B =.
0 0 l 0
and the reduced row echelon form
0 0 0 0 0
0 0 0 0
OJ
() ( Al' is C: =
; ! (;
0
1
] . Thus the fi rst tii <ee column..<> of A form a basis for col(A), and the
0 0 ()
first three rows of A form a hasis for row( A) .
1 3 ·o o
0 0 1 0
6. The reduced row echelon form of A is fJ =
0 0 0 1
and the reduced row echelon form of AT · iS
0 0 0 0
0 0 0 0
0 0 0 0
1'1
:
1
 l . Thus lhe 1st. 3rd, aud •lt.h columns of A form a basis for co!( A), and
0 0
o oJ
c =
0
1
0
2
5
0
4
0 0 t
0 0 0
the 1st, 2nd, and 4th rows of A for m a basis for row(A).
7 . Proceeding as in Example 2, we first form t he matrix ha.¥ing the given vectors ns its columns:
The reduced row echelon form of A is:
4 1]
G 10
6  11
2 3
From this we conclude that all three columns of A are pivot columns; the vectors v
1
, v
2
, v
3
a.re
lineMiy independent and form a basis for the space which they span (the column space of A).
226 Chapter7
8. Let A be the matrix having the given vectors as its columns. reduced row echelon form of A is
R=
0 0 0 0
0 0 0 0
From thls we conclude that {v
1
, v2} is a basis for W = col(A}, and it also follows that v
3
= 2v
1
+ v
2
and v
4
== 2Vt + v2.
9. Proceeding as in Example 2, we first form the matrix having the given vectors as its columns:
A==
0 0 2 2
3 6 0 3
The reduced row echelon form of A is:
From this we conclude that the 1st and 3rd columns of A are the pivot columns; thus {v
1
, v3} is
a basis for W col(A). Since c2(R) == 2c
1
(R) and c4(R) = c
1
(R) + c3(R), we also conclude that
v2 = 2vt and v
1
v
1
+ v
3
10. Let A be t he matrix having the given vectors as its columns. The reduced row echelon form of A is
0 2
1 1
0 0
0 0
0 1]
0 3
1 2
0 0
From this we conclude that the vectors v
1
, v
2
, v
4
form a basis fnr W = coi(A), and that v
3
= 2vt v
2
and = v
1
1 3vz +
11. The m•tnx lA I 1,1 = [:
0  1 : 1
0 '2 I 0
I
0 0 I 0
0
I
0
can be row reduced (and further parmioned) to
=
0 0 0 : 0 0 1
Thus the vectors v
1
= (1, 4,0) and v
2
= (0, 0, 1) form a basis for null(AT).
12. The matrix {A I /3j can be row reduced (and further partitioned) to
=
Thus the vector v
1
= (1, 1, 1) and forms a basis for nuii(AT) .
EXERCISE SET 7.6 227
13. The redu:ed row echelon form of A is R = ! . From this conclude that the first two
lo o o o
columns of A are the pivot columns, and so these two columm; forrn a hasis for col(A) . The fi rst
two rows of R form a basis for row(A) = row(R). We have Ax = 0 if and only if Rx = 0, und the
general solution of the latter is x = (s+4t , 3s+1t, s,t) = s{1,3,1,0) +t(4, 7, 0,1). Thus (1,3,1,0)
and (4, 7, 0, 1) fonn a basis for null(A) .
The reduced row echelon form of the partitioned matrix [A I !4} is
[
1 0  1 4 l 0 0 t
0 1  3 7 I 0 0 0 1
0 0 0 0 : 1 0 _ .!
I 4
0 0 0 0 : 0 1
Thus t he vectors ( 1, 0,   and (0, 1,  form a basis for null( AT).
14. The •educed ww echelon fmm of A ;, R = : :]. From this we conclude t hat the lsl. 2nd,
0 0 0 0
and 4th columns of A are the pivot columns, and so these three columns form a basis for col( A).
The first three rows of R form a basis for row( A) = row( R). We have Ax = 0 if and only if Rx ""' 0,
and the general solution of t he latter is x = s, O) l ,O). l , Orrorms
a basis for null( A). The reduced row echelon form of the part itioned mat rix [A I J
1
j is
l 0 :...1 0 :! 0  .l. 11 2 I 8 40 o
0 1
s
0
1 1 o 1 1
4 I l G 80 5
t }
I 5 5
Thus t.he vector (0, 1, £, t) forms a b&Sis for null (Ar).
15. {a) 'The lst, 3rd, and 4th columns of A are the pi vot columns; thus these vectors form a basis for
col(A).
(b) The fi rst three rows of A form a basis for row( A).
(c) We have Ax = 0 if and only if Rx = 0, i.e., if and only if x is of t he for(ll
X = ( 4t,t,O,O) = t(4,1,0,0)
Thus the vect or u = (4, 1, 0, 0) forms a basis for null(A).
(d) We have AT x = 0 if and only if Cx = 0, i.e. , if and only if x is of the form
x = = + s(O,
Thus t he vectors v
1
= ( LO, O, O, 1) and v
2
= (0, 1, 0) form a basis for nuli(Ar).
16. The reduced ww <ehelon fo•m of A = [:  : is .Ro [ t ]· Thus the columnrow factori>a·
ticn is
A=CR= il
228 Chapter 7
17.
.1nd the rorrespcmding columnrow expansion is
m [1
0
II+ HJ jO
1
[l
2 "] ['
 4
1  4  2 . 0
The reduced row echelon form of A =
8 14 18 Ro = 0
 9
6 10 10 0
row factorization is
A = CR=
and the corresponding columnrow expansion is
0
0
0

3
7
=;]. Thus the column
3
s
3
0
0
DISCUSSION AND DISCOVERY
Dl. (a) 1f A • a 3 J( 5 matrix, then the number of leading l 's an a row echelon form of A is at most
3, the number of parameters m a general solution of Ax = 0 is at most 4 (assummg A¥= 0 ),
I he rank of A is at most 3, the rank of AT is at most 3, and the nullity of AT is at most 2
(assuming tl # 0)
D2.
( b ) If A as a fl"' 3 rnatnx then the number of leading l 's in a row echelon form of A is at most
3, the number of pararnrters in a. general solution of Ax= 0 is at most 2 {a.ssumang A ¥= 0 ),
lhe r1.nk o f A is at most 3. the r ank of AT is at most 3, and the nullity of AT is. al most 4
(nssummg A "# 0)
(c) If A iR a 4 x 4 matrix, then the number of leading 1 's in a row echelon form of A' is at most
I, tbP number of parameters in a general solution of Ax = 0 is at most 3 (assuming A i 0),
l he rnnk of A is at most 4, the rank of AT is at most. ·1 , and the nullity of AT is at most. 3
(assummg A i 0).
(d ) If A is an m x n matrix, then the number of leading l's in a row echelon form of A is at mos t
rn, the n .. mber of parameters in a general solution o f Ax : 0 is at most n  1 (assumi ng
A# 0), the rank of A is at most min{m, n}, the rank of AT is at most min{m, n}, and the
nulli ty of AT is at most m 1 (assuming A# 0).
The pivot columns of a matrix A are those columns correspond tv tht: columns of a row
ec..,Plon form R of A which contain a leading 1. For example,
0 2 0 7
12] [1
2 0 3 0
4  10 6 12 28 . R = 0 0 1 0 0
4 5 6 5  1 0 0 0 0
and ::o the 1st, 3rd, nnd 5th rolumns of A are tbe pivot columns
EXERCISE SET 7.7 229
D3. The vectors v
1
= (4, 0, 1, 4, 5) Mid v
2
= (0, 1, 0, 1) form a bnsis for null( AT).
D4. (a) fa
1
, az, a4} is a. basis for col( A).
(b) aJ = 4a
1
 3a2 and a.o = 6a1 + 7a2 + 2a4.
EXERCISE SET 7.7
1. Using Formula (5) we have projaX =
1
;:.:1{
2
a= (
1
l )(1, 2) = (
1
5
1
,
2
i ).
On the other hand, since sin(} = fs and cos B = 7&, the standard mat rix for the projection is Po =
[
!. . [l 1 [.!l]
i i and we have Pex = i i_ [
6
} = ·¥ ·
2. Using Formula (5) we have proj
8
x = = =
On the other hand, since and the st andard mat rix for the projection is
Pe = [ _ and we have Pex = [_ ·
29 29 19 :.!!)
3. A vector parallel to l is a = (1, 2). Thus, using Formula 5, the projection of x on lis given by
. a·x ( 3)( ) (3 6)
proJaX = llall2 a ·""' 5 1, 2 = 5• 5
On the other hand, since sin 0 = .1,
5
and cos (J =
1
15
, the standard rnatrix for the projection is Pe =
[
1 2] [I [3] v
i ; and we have Pex = i ! = .
5 5 .')
4. A vector pnr11.llel to l is a = (3, ·1). Thus, using Formula. 5, the projection of x on l is g.iven by
. a ·x a)(
3
) ( 33 11
proJ.,X = ll all 2a :.::(IO . • ,··· 1. =  ro,w)
On the other hand, since sin O= vko and cosB = ;?to. the standard matrix for the projection is
Po = [_1 ... 1] and we have Pex""" [_ 11 = r ·
10 10 10 It) 10
5. The vector component of x along a is proj
6
x =
11
':"
1
j
2
a = ! (0, 2, 1) = (0,  ), and the component
orth.:.gona l to a is x proj
8
x = (1, 1, 1) · (0, 0 = (1,
6 · , _ X· a 5 ( l 2 3) .. ( 5 10 1[)) d · _ (Z Q 1) ( 5 10 15) _ ( 23 10 I )
. prOJ
8
X  iiGlr a =
14
, , _ .. Iii• 14• i4 , an x   prOJaX
1
, 
14
, i4• 14  14•  14, ·!4 .
7. The vector component of x along a is projax = {a:lij a = 4, 2, 2) = L {o 
1
1
0
), and the
component ort hogonal to a is x  proj.x = (2, I, 1, 2)  fo,  lo) = ( [o,
8. proj .. x = 1,1,  1):::; e7
2
, ¥),
x projax = (5, 0,  3, 7}  (11, = CZi,
1
i ,
5
7
5
).
. Ia · xl 12  6 + 241 20 20
9. llprOJaxll = = J4 +9 + 36 = J49 = 7
230
11.
Ia · xl 18 10 + 41
nail= J4 + 4 + 16
2 1 J6
v'24=J66
. la·xl 18+62+151 11 .J33
llprOJ nxll = Taf = v16 + 4 + 4 + 9 = J33 = 3
Chapter 7
. a· xl 1353 + 0 11 31 3IJ5i
12. llprOJaXII = Taf = V49 + I + 0 + 1 = J5I =
13. If a = (  1, 5, 2) then, from Theorem 7.7.3, the standard matrix for the orthogonal projection of R
3
onto span{a} is given by
P = ;. aaT = ... 5 11 5 2) =
 5 25 10
[
1] [ 1 5 2]
a a " 2  2 10 4
We note from inspection that P is symmetric. lt is also apparent that P has rank I since each of its
rows is a scalar multiple of a . Finally, it is easy to check that
p2 = _.!..
30 2
10 4
3
2
= _.!..
10 4
30
 2
and so P 1s tdempotent .
14. If a = (7,  2, 2) then the standard matrix for the orthogonal projection of R
3
onto span{ a} is given
by
Hi.
[
7] [ 19
1 T 1 _ 1
P = aa = 2 lr 2 2] =  14
aTa 57 57
2 14
11 11]
4  4
4 4
We note front in!>pection that Pis symmetric. It is also apparent that P has rank 1 si nce each of its
rows is a sc,\lar mult1plc uf a Finally, it is easy to check that P
2
= P and so P is idl.mpot enl.
[
3 ?.] . 16 'l
Let M =  1 u Then i\f
1
M = [
9 1
;] and. from Theorem 7 7 5, the standard matrix for the
I :l
orthogonal prOJCclton of !1.
3
onto W = span { a
1
, a2} is given by
P= AI(MTM)IMT = [! [ 13 9] [3 4
1 3 257  9 26 2 0
[
113 84
1
] = 
1
 84 208
3 257 96 56
96]
56
193
We not e from inspection t.hat lhe matrix Pis symmetric. The reduced row echelon form of P is
!
and from this we conclude that P has rank 2. Finally, it is easy to check that
P' = [ ,m 2!1 [ ,m = 2!1 [
=P
193
and so P ic:: idl'mpotent
EXERCISE SET 7.7 231
16. Let M [  : Then MT M :]and the •tandW"d matrix for the orthogonal
of R
3
onto W = span{ a
1
, a2} is given by
23] [1 2
30 4  2
102 305
From inspection we see that P is symmetr ic. The reduced row echelon form of P is
0
1
0
and from this we conclude that P has ranK. 2. Finally, it is easy to check that P
2
= P.
17. The standard matrix for the orthogonal projection of R
3
onto the xzplane is P = This
0 0 1
agcees with the following computation rn;ing Po, mula (2n Let M [: ; l Then MT M ,
and M(MT M) 
0
MT [: [; :1 :] = :J.
18. The standard matrix for t he orthogonal projection of R
3
onto the yzp]ane is P = This
0 0 I
19. We proceeri as iu Example 6. The general ::oolution of the equation x + y + z = 0 can Le writ ten as
and so the two column vectors on the right form a basis for the plane. If M is the 3 x 2 matrix
these vectors as Jts columns, then MT M = and the standard matrix of the orthogonal
projection onto the plane is
[ 2 l] [1 1 0] =
2 1 2  1 0 1 3
lJ 1  1 2
The orthogonal projection of the vector v on the plane is Pv =
1
 J 1] [ 2] [ 1]
2 1 4 = 1 7 .
 1 2  1  8
232 Chapter 7
20. The general solution of the equation 2x  y + 3z = 0 can be written as
and so the two column vectors on the right form a basis for the plane. 1f M is the 3 x 2 matrix
having these vectors as its columns, then NfT M = [!
and the standard matnx of the
projection onto the plane is
P = M(MT M)
1
MT = 2 3 _!_ [
10

6
] [
1 2 0
] = _!_ 2 13 3
[
1 0] [ 10 2 6]
0
14 6 5 0 3 1 14 6
1  3 5
[
10
The orthogonal projection of the vector von the plane is Pv = ]_ 2
14 6
2 6] [ 2] 1 [ 34]
13 3 4 = 14 53 •
1 li  1 5
21. Let A be the matrix having the given vectors as its columns. The reduced row echel<m form of A is
and from this we conclude that t.he a
1
and a 3 form a basts for the II' spanned b}·
the given vectors (column space of A). Let M be the 4 x 2 matrix havmg a
1
and a
3
as its columns.
Then
6 2 4 6 0 72
[
4 lj
0 2 5] 2  2 = [ 20
4 5
20]
30
and the standard matrix for the orthogonal projection of R
4
onto W ts g1ven by
M(MT M)t MT = [  : _1_ [ 30  20] [4
2 2 1760 20 72 1
4 5
[
89 105
= _1_ 105 135
220  3  15
25 15
 3
 15
31
75
25]
15
 75
185
 6 2
0  2
22. Let A be the matrix having the given vectors as its columns. The reduced row echelon form of A is
EXERCISE SET 7.7 23:,
and from this we conclude that the vectors a
1
and a2 form a basis for the subspace W spanned by the
oiven vectors. Let M be the 4 x 2 matrix havi ng a
1
<md a
2
as its columns. Then Mr M = [
36 24
]
o· · 24 126
and the matrix for t;he orthogonal projection of R
4
onto W is given by
[
5 3]
M(MT A1)l MT = 3 6 _1_ [ 21
1 0 660  4
1 9
4] [5 3
6 3  6
[
153 89 31 37]
1 1] = _.!._ 89 87 13 59
0 9 220 31 13 7 19
 37 59  19 193
23. The reduced row echelon from of the matrix A is R =
system Ax= 0 can be expressed as
0
1 1]
i i ; thus a general solution of the
1 2
r
ts + !t1 [
_!
3
 lt _.! l
x= 2 2 =s 2 +t 2
s J 1 OJ
l 0 I
where the t,wo col umn vectors on the nght hand side form a basis for Lhe solution space. Let D be
t.hP. matrix having these two vectors a.S its columns. Then
1
0
0] [=! i] = 0] = [1 OJ
1 1 0 2 01
0 1
and the standar d matr ix for t he orthogonal projection from R
4
onto the solution space is
ll
'l [t
0  1
2
P IJ(BT8)  ' Br !
_ .!. ? 1
[ :
l
1
0] =
 1
:; [ 0
2
_!
0 1 3
1 2
2
·· 1 0
_:]
0
2
Thus t he orthogonal projection of v ·= {5, 6, 7, 2) on the solution space is given (in column form) hy
0
3
24. The ceduced COW !com of the matdx A is R r:
0
: thus the gcnP.ra.l solution of
lo o 2
thesyst.m Ax = 0 can be expcessed as X = t r ' "' (to avoid lcactions) X = • [ ] · In ot h& words,
the solution space of Ax= 0 js equal to span{a) where a = ( 3, 0, 1, 2). Thus, from The;).lem 7.7.2,
the orthogonal projection of v = (1, 1, 2, 3) on th<! solution space is given by
. a· v 11 ( 15 s 10
proJ
8
v = a = i4 ( 3, 0, 1, 2) = 
14
, 0,
14
,
14
)
234 Chapter7
25. The reduced row echelon form of the A= is R = From
3 2 1 1 0 0 0 0
this we conclude that the first two rows of R form a basis for row(A), and the first two columns of A
form a ba.c:;is for col(A).
Orthogonal proJeCt JOn of R
4
onto row( A): Let B be the 4 x 2 matrix having the first two rows of R
as 1ts columns. ThPn
0 1 3 0 1 11 14
[
1 0]
1  2  4] 1 2 = [14 21]
3  4
and the standard matrix for the orthogonal projection of R
4
onto row(A) is given by
Pr = B(BTB)  tBT = [21 14] [1 0 1 3] = 2_
1 2 35 14 11 0 1 2 4 35 7 8
3  11 7 2
1 7]
8 2
9 11
1l 29
Orthogonal proJCCtJOn of R
3
onto col(A): Let C be the 3 x 2 matnx having the first two columns of
A as its columns. Then
cr c = [t 1 3] = [11 1]
102 32 75
and the standard matnx for the orthogonal projection of R
3
onto col(A) is given by
P, = cccrc)'cr = [i [: = H
26. Tht reduced row echelon form of IS R = ! l From this we conclude that the first two
0 0 0 0 0
rows of R form a basis for row(A), a.nd the first two columns of A form a basis for col(A).
Orthogonal projet..tlon onto row(A): Let B be t.he 5 x 2 mat rix having the first two rows of R as
ItS column'l Then sT B G and the standard matrix for the orthogonal projection of Rl'> onto
row(A) is given by
7 5 2 9  3
 5 7 2 3 9
Pr = B(BTB)I BT = 2_
2 2 4 6 6
24
9 3 6 3 15
3 9 5 3 15
Orthogonal projection onto col(A): Let C be the 1 x 2 matrix having t he fi rst two col umns of A as
its columns. Then ere= [
1
:
2
:] and t he standard matrix for the or t hogonal proj ection of R
4
onto
col(A) is given by
[
237
P. = C(crc)lcr = _I_  73
c 419 13
194
 73
369
1::]
29  46
64  46 203
95
EXERCISE SET 7.7 235
0
5 g .  5
27. The reduced row echelon foxm of the matrix [A I b) is
1 8 I 1]
1  : k . Prom this we
0 0 0 I 0
[ ,! ,! • ,!']
fii:
its columns, and let C = BTB = tl · Then the standard matrix for the orthogonal projection
of R
4
onto row(R) = row(A) is P = BC
1
Br, and the solution of Ax= b which lies in row(A) is
given by
r
1 _.!_ 2
[i]
n1
25 25 25
1 ! R 11
Xrow( A) = Pxo =
25
 25
=
·n 7
2!1
25
'25 25 0
11 1 2
!! 0
25 25 25
1 3 3 1 3
[
0 2 I I 4]
28. The reduced row echelon for m of the matrix [A I b j is !RIc] ::: 0 1 fi fi : . rrom this we
0 0 0 o, 0
conclude that b is onnsht<nt, and that x
0
= [ is one solution. Let B be the4 x 2 matrix
having the fi rst t wo rows of R as its columns, ami let C = BT B = [_  Then t he s tandard
:16 72
matrix for the orthogonAl projection of R
4
onto row(R) =row( A) is p ::: vc
1
BT, and the solution
of Ax = b which lies in row( A) is given by
I ...
20 131
" l r .l I "'l
:!99 299 299  f.<i9 j 299
20 224 124 l 48
299 209
m  3 _ m
Xrow(A) = Pxo =
13
l
32 90
A 0  !M
299 299 2!19 299 299
53 124 25
90 0 !!l
 m
299
 299 299  299
29. In Exercise Hl we found t hat the standard ma.trix for the orthogonal projection of R
3
onto the plane
W with equation x + y + z = 0 is
1
P =
3

 1 . 1 2
Thus the standard matrix for the orthogonal projection onto w.t (the line perpendicular toW) is
1  p =
0 0 1 1 · 1 2
1
1
1
Note that W L is t he !dimensional space (line) by t he vect or a= (1.1, 1) and so the com
putation above is consistent with the formula. given in Theorem 7.7.3.
236 Chapter 7
30. In Exercise 20 we found that the standard matrix for the orthogonal prOJection of R
3
onto the plane
W with equation 2.c  y + 3z = 0 is
[
10
2
14 6
2 6]
13 3
3 5
Thus the standard matrix for the orthogonal projection onto W J. (the line perpendicular to W) !s
I  p = 1
1
4 [
= 1
1
4 :]
0 0 1 6 3 5 6 3 9
31. Let A be the 4 x 2 matnx having the vectors v
1
and v
2
as its columns. Then
!I'TA = [1 2
3 4 1
3
0] !] = [ 14 8]
2 3  1  8 30
0 2
and the sta ndard matrix for the orthogonal projection of R
4
onto the subspace W = spa.n{v
1
, v
2
} is
p = A(Ar A)1 l r = [15 4] fl 2 3
3  1 178 4 7 l3 4  1
0 2
[
51 23
0 1 23 54
2] = 89 28  31
25 20
28
 31
59
5
25]
20
5
14
Thus the orthogonal projection of the vector v = (1, 1, 1,1} onto w.1. is gtven {in column form) by
\ Pv
r:]
ll
[
51
1 23
89 28
25
23 28
54  31
 31 59
20 5
DISCUSSION AND DISCOVERY
25] [1] 1] [127] [38]
20 1 1 l 66 1 23
1: =  89  89
01. (a) The rank of the standarrl matrix for the orthogonal projection of Rn onto a line through the
ongm 1s 1, and onto its orthogonal complement is n  1.
(b) If n 2, then the rank of the standard matrix for the orthogonal projection of Rn onto a
plane through t he ongin is 2, and onto its orthogonal complement is n  2.
02. A 5 x 5 rnatnx Pis thl' standard matrix for an orthogonal projection of R
5
onto some 3dimensional
subspace if and only 1f it is symmetric, idempotent, and has rank 3.
D3. If x1 = projax and x2 = x x1 then llxll
2
== llxdl
2
+ llx2ll
2
and so
ll x2ll
2
= llxll
2
llxtll
2
= llxll
2

2
= llxll
2

Thus q = / ll x ll
2

= ll x2ll where x2 is the vector component of x orthogonal to a.
DISCUSSION AND DISCOVERY 237
D4. If P the standard matrix for the orthogonal projection of Rn on a. subspace W, then P
2
= P
(Pis idempotent) and so p k = P for all k ;:::: 2. In particular, we have pn = P.
D5. (a) TJ"ue. Since projwu belongs toW and projw.1. u belongs to W.L, the two vectors a.re orthog
onal
(b) False. For example, the matrix P = satisfies P
2
;:: P but is not symmetric and there
fore does not correspond to an orthogonal projection.
(c) True. See the proof of Theorem 7.7.7.
(d) True. Since P
2
= P, we also have (J  P)
2
=I  2P + P
2
= I P; thus I P is idempo
tent.
(e) False. In fa.ct, since projcoi(Aj b belongs to col(A). the system Ax= projcol(A)b is always
consistent.
D6. Since (W.L )l. = W (Theorem 7.7.8), it follows that ((Wl.)..l)l. = W.1..
D7.
[
1 l 1]
The matrix A = 1 1 1 is symmetric and has rank 1, but is not the standard
1 l 1
matrix of an
orthogonal projection. Note that A
2
= 3A, so A is not idempotent.
D8. In this case the row space of A is equal to all of Rn. Thus the orthogonal projection of Rn onto
row(A) is the identity transformation, and its matrix is the identity matrix.
D9. Suppose that A is ann x n idempotent matrix, and that>. is nn eigenvalue of A with corresponding
eigenvector x(x f 0). Then A
2
x = A(Ax) = A(.h) = ..\
2
x. On the other hand, since A
2
=A, we
have A
2
x = Ax = .>.x. Since x =/= 0, it follows 'that >.? = >. and so >. = 0 or 1.
DlO. Using calculus: The reduced row echelon form of lA I bj is thus the general solution of
0 0 010
Ax = b is x = (7  3t, 3  t , t} where oo < t < oo. We have
ll xll
2
= {7 3t)
2
+ (3  t)
2
+ t
2
= 58  48t + 1lt
2
and so the solut.ion vector of smallest length corresponds to ft lllxii
2
J = 48 + 22t = 0, i.e., to
t :.=: We conclude that Xrow = (7= ii, 3 H, = (
1
5
p
Using an orthogonal projecti on: The solution Xrow is equal to the orthogonal projection of a.ny
solution of Ax = b, e. g., x = (7, 3, 0}, onto t he row space of A. From the row reduction alluded
to above, we sec that the vectors v
1
= (1, 0, 3) and v
2
== (0, 1, J) form a basis for the row space of
.4. Let. B be the 3 x 2 matrix having these ver,tor as its columns. Then BT B =
and the
standard matrix for the orthogonal projection of R
3
onto W = row(A} is given by
= 1\
3 1 10
Finally, in agreement with the calculus solution, we have
1
Xrow = Px =
11
[;] = f [ !]
3 1 10 0
1
24
238
Chapter 7
Dll. The rows of R Corm a basis for the row space of A, and G = RT has these vectors as its columns.
Thus, from Theorem 7.7.5, G(GTG)
1
GT is the standard matrix for the orthogonal projection of
Rn onto lV = row{A).
WORKING WITH PROOFS
Pl. If x anJ y are vectors in Rn and if a and /3 are scalars, then a· (ax+ /3y) =a( a· x) + /3(a · y).
Thus
a · (ax + /3y) (a · x) (a · y)
T(ax + /3y) = llall
2
a = a:lja112a + = aT(x) + /3T(y)
which shows that T is linear.
P 2. If b = ta, then bTb = b · b = (ta) · (ta) = t
2
a · a = t
2
aTa and (similarly) b bT = t2aaT; thus
1 T 1 2T 1 T
bTb bb = t2aTa t a a aT a aa
P3. Lf'l P hl' a symmetric n x n matrix that is idempotent. and has rank k. Then W = col(?) is a
kdimens•onal subspace of R". We will show that P is the standard matrix for the orthogonal
projecl ion of Rn onto W, i.e, that Px = projwx for all x in Rn . To this end, we first note that
Px belongs to W and that
x = Px + (x Px) = Px t (I P)x
To show that Px = projwx it suffices (from Theorem 7.7.4) to show that (I P)x belongs to
\1' • .1nd since W  col{ A) = ran(P). this IS equtvalem to showmg that Py · (J  P)x = 0 for all
y m R" Finally, since pT = P = P
2
(Pis symmetric and idempotent), we have P(J P) =
P  P
2
 P  P = 0 and so
for evcrv x and y m R" This completes the proof.
EXERCISE SET 7.8
J . First we note that the columns of A are linearly independent since they are not scalar multiples of
each other; thus A has full column rank. It follows from Theorem 7.8.3(b) that the system Ax= b
has a unique least squares solution given by
The least squares error vector is
b Ax =
=
5 4 5
11 8 11
15
and it is easy to chP"k that this vector is in fact orthogonal to each of the columns of A. For example,
(b Ax)· Ct(A) =
1
\ !(6){1) + {27}(2) + (15)(4)] = 0
EXERCISE SEl' 7.8 239
2. The colunms of A are linearly independent and so A has full column rank. Thus the system Ax= b
has a unique least squares solution given by
0 2 1 3  1  2.. 9
l [ [ 21
6] 2 I I] lj 21 [14]
The least squares error vector is
and it is easy ,to.check that this vector is orthogonal to each of the columns of A.
3. From Exercise 1, the least squares solution of Ax= b is x = f1
Ax = 2 3
= 16
[
1 1] [28]
4 5
11 8 11
40
On the other hand, the standard matrix for the orthogona.J projection of R
3
onto col(A) is
p _ A(ATA.)tAT::::.: [21 25]l [ 1 2 4] = _1
25 35 1 3 5 220
4 5 20 90 170
:).nd so we have
projcoi(A)b = Pb =
20 90
[· =
1
1
1
= Ax
170 f> 40
4. From Exercise 2, the least squares solution of Ax = b is x =
2
\ [ t:]; thus
[
2  2] [ tj6]
Ax = 1 .l .!.. [
9
] = ..!:_  5
3 1 21 14 21 13
On the other hand, the standard matrix of the orthogonal projection ont.o col(A) is
and so we have
= 2
1
1
=Ax
17 1 . 13
240 Chapter 7
5. The least squares solutions of Ax = b are obtained by solving the associated normal system AT Ax=
.4Th which IS
Sinc·p the rnntrix on the left is nonsingular, this system has the unique solution
X= [J:1] = [24 8]l [12] = 2_ [ 6 8] [12] =[to]
X2 8 6 8 80 8 24 8 l
The error vector is
and the least squares error is li b  AxiJ =
+
+ (0)
2
= Ji =
6. The least. squares solutions of Ax= b are obtained by solving the normal system AT Ax = ATb which
IS
[
14 42] [Xl] [ 4]
42 126 X2 = 12
This (redundant) system has infinJtely many solutions given by
X= [::J = 3tl = [!J + t r n
The error ve,.lor jc:;
and the least squares error is li b  Axil= + + = J* = :Lfl.
7. The least SCJUf\rf'S solut1ons of Ax= b are obtai;1ed by solving the normal system AT Ax = ATb which
is
The augmented matdx of this system reduces to [:
solutions given by
0
0
1
' 7]
'a
1 : i ; thus there are infinitely many
0' 0
EXERCISE SET 7.8 241
The error vector is
and l hc least squares error is li b  Axil = J(!)2 +
= =
8. The least squares solutions of Ax= bare obtained by solving AT Ax= ATb which is
[
1 [::] =
17 33 50 X3 6
Tbe augmented matrix of this system reduces to
0
1:sl .
1 : tf ; thus there are infinitely many
0 0 I 0
:;olutions given by
x = [:;] [W
1
,=:]
T he error vt.'Ctor is
and t he least squares error is llb  Axll = V( !iY + { fr )2 + { ·fi)2 = ,f1j; =
2
f[I .
9. The hnc"' model fo, the given data b M v = y wh"c M [i il and y = [;] . The least squa<"'
solmion i:> obtained by sobing the norrnal sys tem M Ty which is
[1: =
Since t he matrix on t he left is nonsingular, this system has a. unique solution given by
[
v
1
] [ 4 16J l [IO] 1 [ 37 8] [tO] 1 [3] r 1
3
0]
V2 = 16 74 47 = 20 ··" 8 2 47 ::::: 10 7 =
1
7
0
Thus the least squares st raight line fit to t he given data is y== 
1
3
0
+ I
0
x.
y
4
2
12 3 456 7
I
242 Chapter 7
10. The Hnear model for the given data is Alv y where M [1 and y [!]· The leaSt squares
solution obtajned by solving the normal system AfT Mv = MTy which is
r: r::J =
Since the matrix on the left is nonsingular, tllls system has a unique solution given by
[
Vl] [4 8] l [4] 1 [ 22 8] [4] 1 [16]
V1 = 8 22 9 = 24 8 4 9 = 24 4 = !
Thus the least squares straight line fit to the given data is y = j + !x.
y
2
I 2 3 4
I
11. The quadratk least squar" model for the ghen data is Mv y whe" M and y [!]·
The least squares solution IS obtained by solving the normal system M
7
Mv = MT y which 1s
8 22 62 1Jl 9
r .t 8 22] [l..'t ] [ ']
l22 62 178 V3 27
Since the matrix on the left JS nonsingular, this system has a unique solution given by
= [ : l [ :] = [ J ;] [ :] = [
V3 22 62 1 78 27 ! 1 ! 27 1
6 3 3
Thus the leas1 squares quadratic fit to the given data is y = 1 
1
6
1
x + jx
2
y
I
EXERCISE SET 7.8
12. The quadratic least. squares m.odel for the given data is Atv = y where A1 =
1 0 0
andy=
1 1 1
1 1 lj
1 2 4
The le3st squares snl tti.ion is obtained by solving t he normal system Mr llifv = M'I' y which is
Since the ma.t.rix on the left is nonsingular, t his system has a unique solution given by
[! : [ = : _;] [ == [=:]
113 6 10 18 14 t  2 1 H
Thus the least quadratic fit i:.n the given tl at <' is y = 1  +
13. The model for t.he lea."3t squares cubic fit to the gi ven data is Mv = y where
1 1
r
4.9
1 2 4
s
10.8
M:::::
3
!)
27 y= 27.9
.
4 16 64 60.2
1 5 25 125 113.0
T he normal sysle111 A(r /11 v = lv[T y is
r 5
15 55
225]
n l 2168
1
l
55 225 979 a1 916.0
225 979 4425
a2  4087.4
2'25 979 442:> 2051.1 a3 18822.4
243
and the solution, written in comma delimited form, is (ao, a
1
, a2 ,a3) (5.160, 1.864,0.811, {).775).
14. The model for the least squares cubic fit to the given data is Mv = y where
1 0 0 0 0.9
1 1 1 1 3. 1
M= 1 2 4
8
9.4
1 3 9 27 24.1
1 4 10 64 57.3
15.
The associated normal system MT M v = MT y is
[
5 10
10 30
30 100
100 354
30
100
354
1300
100]
354
1300
4890
[
aol [ 94.8]
a1 323.4
a2 = 1174.4
a3 4396.2
Chapter 7
and the solution, written in comma delimited form, is (ao, a1, a2, as) (0.817, 3.586, 2.171, 1.200).
[
1 :tal [Yal [1 :ral
1 :t2 312 1 1 · · 1
1
2:2 n :
If M = . and y = . , then MT M = [ ] . = [r: !] and
• • • X l %2 • · · :J:n • :z:, L. :t
. . . . . '
1 :tn Yn 1 Xn
MTy = [ 1 1 · · · 1]
X I :1:2 :Z:n
!Ill
Y.
2
= [JY• ] . Thus the normal system can be written as
: L.x,y,
!in
DISCUSSION AND DISCOVERY
Dl. (a) The distance from the point Po= {1, 2, 1) to the plane tv with equation x + y z = 0 is
d = IP)(l) + {1)(2) ; (1)\1)1 = _:__ = 2,'3
/{1)2+(1)2+(1)2 V3 3
and the point in the plane that is closest to Po is Q = ( ). The latter is found by
:+
computing thr
of the vector b = OP
0
onto the plane· ThP column
vectors of the matrix A = form a bnsis for tv and so the orthogonal projection of b
onto IV ag given by
(b) The distance from the point P
0
= (1, 2, 0, 1) to the hyperplane x
1
 x
2
+ 2x
3
 2x
4
= 0 is
d = I( 1 )(1) + ( 1){2) + (2)(0) + ( 2)( 1)1 = _ 1_ = JTij
/(1)2 + (1)2 + (2)2 + Jill 10
and the point in the plane that is closest to Po is Q == (
1
9
0
,
1
2
0
, 
). The latter is found
by computing the orthogonal projection of the vector b = onto the hyperplane:
P'
0
Jwb = ! io [ : :] H ! Ul =
1
1
0
[
DISCUSSION AND DISCOVERY
D2. (a) The vect or in col(A) Lhat is closest to b is projcol( t! )b = A( AT A) 
1
ATb .
(b) The least squares solution of Ax == b is X= (JiT A)  lATh.
(c) The least squares error vector isb A( AT A)
1
A'Tb .
(d) The least squn.res error is li b A(AT A)
1
A
1
' bll .
(e) The stan<iard matrix for the orthogonal projectiou onto col(A) is P = A(A
1
' A)
1
Ar .
245
D3. From Theorem 7.8.4, a vector xis a least squares solution of Ax = b if and only if b  Ax belongs
to coi(A).l . We have A = a.nd b  Ax = = = [ thus
4 5 4 5 s 14 s 14
b  Ax is orthogonal to col( A) if and only if
( 1 )(2) + (2)( 7) + ( 4)( s  14) = 0 ""' (  1 )(2) + (3)(  7) + (5 )( s  14)
·1s  68 = 0 = 5s  93
equations are dearly iucornpatible and so we conclude that, for any value of s, the vector x
is not a least sq11ates solution of Ax = b.
D4. The given data points nearly fall on a straight line; thus it, would be rel\..c;onable to a linear
least squares fit and then use the resulting linear formula y = a + bx to to x = 45.
05. The m<><;el foe t h;s least "''"'"' fit ;, [: il 1:1 : m, •nd t he conespond; ng nonnal system ;s
[
:
[:] Thus squares solution is
'l 36
a l 3
2
11 ;n 
1
... 1
21
[
3]1 [ l [ .1) 9] [1 l [ 5]
LJ = i * =  ¥ =
rP.'l ulting in y =
2
5
1
+ i as the best least squares fit. by a curve of this type.
y
10
6
4
2
I 2 3 4 5 6 7
D6. We have [:] = if and only if Ax + r =b and AT r = 0. Note that AT r = 0 if and only
if r is orthogonal to col(A) . It follows that b  Ax belongs t.o coi{A).L and so, from Theorem 7.8.4,
x is a least squares solution of Ax= b and r = b  Ax is the least squares error vec".nr.
246 Chapter 7
WORKING WITH PROOFS
Pl. If Ax= b is consistent, then b is in the column space of A and any solut ion of Ax = b is also a least
squares solution (since li b  Axil= 0). If, in addition, the columns of A are linearly independent
lhen there is only one solution of Ax = b and, from Theorem 7 8.3, t he least squares solution is
also umque. Thus, in this case, the least squares solution is the same as the exact solution of
Ax = b
P 2. If b is orthogonal to the column space of A, then projcoi(A)b = 0. T hus, since t he columns of A
are linearly independent, we have Ax = projcoi(A) b = 0 if and only if x = 0.
P 3. The least squares solutions of Ax = b are t he solutions of the normal system AT Ax = ATb. From
Theorem 3 5 1, the solution space of the latter is the translated subspace X.+ W where :X is any
least squares solution and W = null(AT A) = nuii(A).
P 4. If w is in W and w :1: projwb then, as in the proof of Theorem 7.8.1, we have l b  wll >
li b projwb ll ; thus projwb is the only best approximation to b from W.
P5. If ao, a 1, a2, ... , am are scalars such t hat aocdM) + a 1c2(M) t a2c3(M ) + .. + amCm+l(M) = 0,
then
ao + n1:ri + + · · · + amxi = 0
for each t = 1, 2, ... , n Thus each x, is a root of the polynomial P(x) = a
0
+ a
1
x + · · + amxm.
But such a polynomial (if not identically zero) can have at most m distinct roots. Thus, if n > m
and if at. least. m + 1 of the numbt>rs x
1
, x
2
, . • , Xn are distinct, then ao = a 1 = a2 = · · = am = 0.
This shows that the column vectors of M are linearly independent
P6. If at least m + I of the numbers :r
1
, :r
2
, ...• :rn are distinct then, fr om Exercise P5, the column
vect ors of M are linearly independent, thus M has full column rank and MT M is invertible.
EXERCISE SET 7.9
1. (a) v, · v2 = (2)(3) t (3)(2) = 12:/; 0, thus the vectors v
1
, v2 do not form an orthogonal set
(b) Vt · v2 = ( 1}(1) t (1}(1) = 0; thus the vectors v
1
, v 2 form an orthogonal set. The correspond
ing Orthonormal SCt IS ql = u::u = = = (32,
(c) We hnve v1 · vz = V t • v3 = v2 · v3 = 0; thus the vectors v
1
, v
2
, v3 form an orthogonal set. The
correspondmgorthonormal set isql = = (7s,O.js), q3 =
(d) Although v1 · v2 Vt · v3 = 0 we have v2 · v3 = (1)(4) + (2)(3) + (5)(0) =  2:/; 0; thus the
vectors v1 , v2, v3 do not form an orthogonal set.
2. (a) .,. · v2 = 0; thus the vectors v
1
, v 2 form an orthogonal set. The corresponding orthonormal set
is q l = ?tJ)
(b) v
1
• v2 :1: 0; thus the vectors v 1o v
2
do not form an orthogonal set.
(c) V t • v2 :/; 0, thus the vectors v
1
, v
2
, v
3
do not form an orthogonal set .
( d ) We have v
1
· v
2
= v
1
• v
3
= v
2
· v
3
= 0; t hus the vectors v
1
, v
2
, v
3
form an orthogonal seL.
The corresponding orthonormal set is q1 = j, = ! .i), q3 = (!,
3. (a) These vectors form an orthonormal set.
(b) These vectors do not form an orthogonal set since v2 · v3 =  '3; :/; 0.
(c) These form an orthogonal set but not orthonormnl set si nce ll v3ll = .j3 :1: 1.
EXERCISE SET 7.9 247
4. (a) Yes.
( b) No; ll v1 1l f. 1 ;,.nd ll v zll f. 1.
(c) No; Vt • vz f. 0. vz · vJ f. 0, llv<dl # l, and ll vJII f. L
6. (a) projwx = (x · v1)v1 + (x · v:a)vz = (l )vt + (2)vz = ( !. !, 4} + (1, 1,  1, 1) =
(b) projwx = (x · vt )vl + (x · vz)vz + (x · v3) v3 = (l )vl + (2)v2 + (O)v3 =
7. (n)
(b)
8. (a)
(b)
13. Using Formula (6), we have P =
r·2
1 2 3
J :i I
2 '2 3
3 3
72
14.
[
J
Using Formula (6), we h.:we P =
V'6
js] [
I 2  76
?J 76 I
275
I
7u
2
3
 I
:.
6
1 5. Using t.hf! matrix fouud in Exen;isc 1:3, the or thogona.l projection of w onto W = SJJan{ v
1
, v
2
} is
Pw =
r
! [ 0] [
.. s _1 2 = !§
9 9 9
 2  3 _L98
 9
On Lhe other hand, using Formula (7) , we have
Chapter 7
16. Using the matrix found in Exercise 13, the orthogonal projection of w onto W = span{v
1
, v
2
} is
On the other hand, u:.mg Formula (7), we have
projww = (w · Yt)Vt + (w · v2)v2 = !, + j, =
17. We have = (7J, 7J, 7J) and = '7s· 76). Thus the standard matrix for the orthog
onal projection of R
3
onto W = span{v
1
, v
2
} is given by
[
1
73
P=
73
1
73
1
76
= [! :
0 0 1
18. We have
11
:!11 = (S, Thus the standard matrix for the orthogonal pro
jection of R
3
onto W span{v
1
, v
2
} is given by
p = 1]
j [l
1
3
2
3
il
19. Using the matrix found in Exerctse 17, the orthogonal projection of w onto W =span{ v
1
, v
2
} is
Pw Hl [ =t]
On t ht> r,t her hand, using Formula (8), we havP
20. Using the matrix found in Exercise 18, the orthogonal projection of w onto W = span{ v
1
, v
2
} is
Pw=
! [ 2] [ OJ
!9 ; =
9
On the other hand, using Formula {8), we have
21. From Exe<Cise 17 we have P [ t t ; thus tcaoe( P) j + j + I 2.
EXERCISE SET 7.9 249
22.
From Excrciso !8 we have P ,. [!
4 '2]
9 9
 ; t hus trace( F ) = .; + = 2.
&
23. We have pT:;.. I' and i t is easy to check that P
2
= P; thus Pis the standard matrix. of an orthogonal
projection. The dimension of the range of the projection is equal t o tr(P ) = +
2
5
1
+ ff = 2.
24. The dimension of the range is equal to tr(P) = 19 + ;fg + = 1.
25. We have pT = P and it is ea.'!y to check that P
2
= P ; thus Pis the standard ma.t.rix of an orthogonal
projection. The dimension of t he range of the projection is equal t o tr(P ) = + t + = 1.
26. The dimension of the range is equal to tr(P) = + + = 2.
27. Let v
1
= wl = (1,  3) and v 2 = w2 Vt = (2, 2) ( 1; )(1,  3) =
(Jj, !> = {:3, I) . Then { v
1
, v2} is an orthogonal basis for R
2
, and the
vectors q1 =
II = ( jfo) and q2 =
= (fro, ftc) forrn an or
thononnal hAS is for R
2
.
28. Let V J = Wt = (1, 0) and V2 = W 'J · v 1 = (3, 5)  n)(l , 0) = (0,  :>) .
Then Q1 =
= (1,0) ami q 2 = ft:!n = {0,  1} form an orthonormal ba:;is
for R
2
29. Let YJ = w, = (1, 1, 1), voz = w 2  = (1, 1,0) J. , 1) = (1, 1, 0}, and
form u.n orthonormal hasls for R
3
.
30. Let. v, , .. WJ = (1, o. 0), Y2 = w 2 u;;u! v l = (3. 7, 2) (f)(l , O, 0) = (0, 7, 2). and
Then { v 1
1
v2, v3} is an orthogonal basis for R
3
1
and the vect ors
s
VJ
q, = llvdl = (l, O, O),
VJ (Q 30 105 )
q
3
= llv
3
U = ' ./11925
1
J uns
form an orthonormal basis for R
3
.
250 Chapter 7
31. Let V l = Wt = (0, 2, 1, 0), V2 = w2 VJ = (1, 1, 0, 0)  ( 1, 0) = (1, i,
and
Then { v
1
, v
2
, v
3
, v
4
} is an orthogonal basis for and the vectors
 YJ  {0 2Vs J5 0)   ( v'3(i  VJo :.&.Q 0)
ql  llvdl  '"5• Sl , q2  llv211 6 I 30 ' IS I '
 VJ  (.{IQ .£@ v'iO)   (v'iS .ill uTI .iil)
q 3
11
v
3
ll 10 ' 10, s ,s • q 4
15• 1s ,  ts • s
form an orthonormal basis for R4.
32. Let VJ = WJ = (1. 2.1. 0). V2 = Wz = (1,1, 2, 0) 2, 1, 0) = ( 0),
and
Then { v
1
, v2, v3, v
4
} is an orthogonal basis for R
4
, and the vectors
form an orthonormal basis for R
4
•
33. The vectors w, = ( 0), w
2
= ( 0), and w
3
= (0, 0, 1) form an "rthonormal basis for
R3.
EXERCISE SET 7.9
251
34. Let A be the 2 x 4 ma.trixhavingthe vectors w
1
= !) and w2 = 7J, as rows.
Then row(A) = spa.n{w
1
, w2}, and uull{A) = spa.n{wt, w2}.1. . A basis for null(A) can be found by
solving the linPnr system Ax = 0 . The reduced row echelon form of the augmented matrix for this
system is
1
0
_l
2
l
2
4 I
.!. : 0]
_! I 0
4 I
and oo a general ootut;on ;, • by [m , [\ 
1
;•] = ·[!] + •[=;] the vootorn w, =
4, 1, 0) and w4 = ( 0, 1) form a basis for span{w1, w2}', and B = {wJ, w2, w3, w
4
} is
a basis for R
4
. Note also t ha t, in adrlition t o being orthogonal to w1 a.nd W'J, the vectors w
3
and
w4 ar<:: orthogonal to each other. Thus B is an orthogonal basis for R4 and application of the
GrarnSchmic.Jt process to t hese vectors amounts to simply normnlizing them. This results in the
orthono110td l>asi::; {qt , Qz, Q:$, q 4} where Q1 = Wt (L Q2 = Wz:::: (7.f, 7:3• 0), Q3 =
11
:;
11
= (js ,)5, :76,0), and q4 =
11
:!g = (dis,  )r
8
, 7rn,o).
35. Note that W3 = w
1
+ Wz . Thus the subspace W spanned by t he given vectors is 2diinensiooal with
basis {w
1
, w2} . Let v1 = Wt = (0, 1, 2) and
w2·V1 ) ( 2)( ) ( 2 1)
v2= w2 ll vdl
2
v1 =(1, 0,l 
5
0,1 ,2 ;;
Then {v
1
. ,·l} i::: an Ol lbogonal basis for\\' , anti the vector!'
V1 · l 2 )
Ut = ll vdl = tO. /S' ,
form an orthonor mal ba.c;is for
36. NoLe that w.1 = w
1
 W z t w
3
• Thus the ::;uhspace \V spanned hy the given vectors :3dimensional
with { w 1 , W ?, w3}. Let v
1
= w
1
= ( 1, 2, 4, 7) . t\nd let.
v ... = \1: 2  = (3 0 4 2) · (2.)(l 2 4 .., , ) ""' ( 1! _J.
.. Jlv
1
jj 2 ' • ' ' · ;o • • • 11 ' :, 1 • 2
w3 • v 1 · v 2 , ( 9 )
llvdl
2
v1 llv
2
jj2 v2 = (2, 2, 7, 3)
70
( 1, 2,1,7)
(
31843) (  41  26
401
14' 7) 7 ' 2
14
(
9876 3768 5891 3032)
= 2005 ) 2005 > 2005 I  200.'5
Then {vt . v2, VJ} is an orthogonal ha.osis for w. and the vectors Uj = n::u = (j.ffl,
,;!1.,. (  41  2 s2 35 ) I ( 9876 3768 5891 3032 )
U
2
= Uv211 = JS'iit4 ' •/5614
1
v
1
5iii4' J5614 ' an( UJ = llvlil = J 155630105' v'l55630t05 ' ,/155630105
1
J l55630105
form an orthonormal basis for W.
37. that u
1
and u 2 are orthonormal vectors. Thus the projectior. vf w onto the subspace
W spanned by these two vectors is given by
WJ = projww = (w · u t) u
1
+ (w · u 2)u 2 = ( 0, + (2)(0, 1,0) = 2,
and the component of w orthogonal to W is
Wz = WWl =(1,2, 3)
252 Chapter 7
38. First we find vD orthonorma.l basis { Q], Q2} for w by applying the GramSchmidt process to {UtI u2}·
Let Vt = U t = (1,0
1
1
1
2)
1
v2 = U2 =
1(1,0,1,2) = (j,l, !,!}, and let
ql = = ( Js, 0, 7s, fs), q2 = = ( Jh. Ji2). Then {qt , Q2} is an orthonormal
basis for H' . and so the orthogonal projection of w = (  1, 2, 6, 0) onto W is given by
and the component of w orthogonal to W is
W2 = W  Wt = ( 1, 2,6, Q)  (
1
= (
1
1
4
9
1
39. If w = (a., b, c), then the vector
is an orthonormal for lhe !dimensional subspace lV spanned by w. Thus, using Formula (6),
the sta.ndard matrix for Lhe orthogonal projection of R
3
onto W is
P = u T u =
2
:
2 2
b Ia b c] =
2
2
ab b
2
be
[
a] [a
2
ab acl
a 1 + c a + + c b .'2
c . a.c c c
DISCUSSION AND DISCOVERY
D 1. If a and b" are nonzero, then u
1
= (1, 0, a) and u2 = (0, 1, b) form a basis for the plane z = ax + by,
ami applKHion of tlw GramSchm1dt process to these vectors yields '\n orthonormal basis { Q1, Q2}
where
02. (a) span{ vt} =span{ wl}, span{ v
1
, v2} = span{w
1
, w2}
1
and spa.n{vJ , v2 , v 3} =span{ Wt, w2, w3
(b) v 3 IS orthogonal to span{wt, w2}.
D3. If the vectors Wt, w2, .. , are linearly dependent, t hen at least one of the vectors in the list is
a linear combination of the previous ones. If w
1
is a linear combination of w
1
, w
2
, ••. , w
1
_
1
then,
when applying the GramSchmidt process at the jth step, the vector v i will be 0.
D4. If A has orthonormal columns, then AAT is the standard matrix for the orthogotia.l projection
onto the column space of A.
D5. (a) col(M) = col(P )
(b) Find an orthonormal basis for col(P) and use these vectors as the columns of the matrix M.
(c) No. Any orthonormal basis for col(P ) can be used t o form t he columns of M .
EXERCISE SET 7.10
253
D 6. (a) True. Any orthonormal set of vectors is linearly iudeptmdent.
(b ) Pa.lse. An orthogona:l set may contain 0. However, it is t r ue t hat any orthogonal 'set of
nonzero vect.ors is linea.rly independent.
(c) False. Strictly speaking, the subspace {0} has no basis, hence no or thonormal basis. However ,
it true that any nonzero subspace has an orthonormal basiS.
(d ) 1'ruc. 'The vect.or q 3 is ort hogonal to the subspace span{w
1
, w2}.
WORKING WITH PROOFS
Pl. If {v1, v2, .. . , is an orthogonal basis for W, then {vt/llvl ll , v2/llv2ll. . .. , is a.n
orthonor mal basis. Thus, using part (a), the orthogonal projection of a vector x on W can be
expressed as
0 ( Yt ) V1 ( V2 ) Y2 ( ) Vk
proJwX = x . llvtll + x. l!v2ll ll v1ll + . .. + x.
PZ. lf A. is symmet.ric and idempot ent, t hen A is the s tandard matrix of an orthogonal projection
operator; namely the ort hogonal projection of R" onto W = col(A). Thus A= UlJT where U is
any 11 x k matrix whose column vectors form an orthonormal basis for tV .
P3. We rnust. prove that v i E span{w
1
, w2 ___ , w,} for each J  1, 2, . .  The proof is by induction on
J
Step 1. Since v
1
= w
1
, we have v
1
E span{w
1
} ; t.hllS t.he stat ement is true for j = L
Step 2 (induction step) . Suppose the st atemeot is t.r ne for integers k which are less t han or
to j, i.e., fork = l , 2, . . . , j . Then
and since v 1 C ,c;pan{ w d, v'/. E span { Wt, w·
2
), . .. 0 a.nd v; (7 s pan{ Wt , w
2
, . .. , Wj } , it follows that
vJ+l C span{ Wt , w2, .. . , wi, wj
1
.1} Thus if the s tatement. is true for each of the integers k =
1, 2, ... , j then it is also true fork= j + 1.
T hesf: two s teps complete the proof by induction.
EXERCISE SET 7.10
1 . T he column vec.t,ors of the mat rix A are w
1
= g} and Wz = _ Applicat ion of t he GramSchmidt
procE'SS to vect.or yields
We have W1 := { w, · q i)q 1 = J5q l and w2 = {w2 · q
1
)q l + (w2 · Q2)Q2 = /5q l + ../5q2. Thus ap
plio.:ation of Form;.:!a (3) yields the following QR·decomposition of A:
A=== fl l i] =
2 3 v'5
[J5 V5j = QR
V5 0 v'5j
254
Chapter 7
2. Application of the GramSchmidt process to the column vectors Wt and w2 of A yields
We have Wt = J2q
1
and w2 = 3J2q
1
+ J3q2. This yields the following QRdecomposition of A:
[
l
[
1 1] 1 2 72 "J! J2 3v'2
A = 0 1 = 0 13 [ l = QR
1 4 1 1 0 J3
72 73
3. Application of the GramSchmidt process to the column vectors w
1
and w 2 of A yields
[
8 l
aJ26
Q2 = s0t
JJ26
We have WJ = (wl · Qt)CJ1  3q, and w2 = (w2 · q, )q l + (wz · Q2)Q2 + "fq z_
the following QRdecomposilion of A:
[
11]
A= = :
8 l
3726 3 .l
[o =QR
3726
1\ppliratton nf thP Grn111Schrnidt process to the column \'ectors w
1
, wz , w3 of A vields
This yields
\Ve have w1 = J2q
1
, w
2
= "lq
1
+ vl1q2, and w 3 = .J2q
1
 1q2 + ¥ Q3· This yields the follow
ing QRdec01npositinn l.lf A:
A
[
1 0 2]
011= 0 73'
1 2 Q I l
72 73'
[v'2
7s 0
Ja 0
v'2
v'3
0
J2]
il
 3
h1
3
=QR
5. Application of the GramSchmidt process to the column vectors w
11
w
2
, w3 of A yields
Q2 =
719
We have Wt = J2q
1
, w2 =
+ and wa = J2ql +
3
'(Pq2 + This yields the
following QRdecomposition of A:
A=
2 1]
I 1 = 72
3 l 0
1
738
1
73'8
6
73'8
J2] =QR
.i!!
19
0
EXERCISE SET 7.10 255
6. Application of the GramSchmidt process to the column vectors w 1, w2, w
3
of 11 yields
We have w
1
= 2ql, w2 =  q1 + qz, and w3 = + + ./iG3· This yields the followi ng QR
·decomposition of A:
2  1
A=
l
 1
r :L [; 1
1 oJ 2 2
0]
[o 1
1 0 0
J] =QR
72
h
[
1 1] = [3_: f3
7 . From 3, we have .4 = 2 1
3
:Jv"itl = R
I J
..r·tu Q
T hus the nnrmnl for
2 J 2 7 LO
3 W'16
3
Ax = b can be expressed as Rx = Qrb, which is:
,J,!l m = Lil
Solving thi.s system hy back substitution yields the least squares solution Xz = x
1
= 
[
1 0 2] [72  [v'2 v"2
8. From Ex.erci$c 1, we have A = 0 1 1  0 ,
16
o v'3
120 jz 73 )6 0 ()
system for Ax = b can be e.xprcssed as Rx = Q1' b, which is:
, ..
v2
[::] =
0
W X3
3 v 6
0
1
J3
2
v·'G
v'2]
 :/f. ·""' QR. Thus the normal
2..f6
:r
Solving this system by back substit ution yields X3 = Xz = x, = 0. Note that, in this example,
the syst.etn Ax = b is consistent and this is its exact solution.
[
1 2 1] · kl [Vi
9. F'rom Excrci11e 5, we have A = 1 1 1 = o
0 31 0 '.;?S8 719 0
41
0
J2]
3'(P = QR. Thus the normal
.ill
19
system for Ax = b can be expressed as Rx = QTb , which is:
w
2
1
v'2
.ill
2
l
 v'38
3
0 'J19
Solving this system by back substitution yields X3 = 16, x2 = 5, x, = 8. Note that, in this exam
ple, the system Ax = b is consistent and is its exact solution.
256 Chapter 7
10. Prom Exercise 6, we have A= ! :] H [:  I ll = QR. Thus the normal system
1 1 0 I l I O
2 72
for .\x  b can be expressed as Rx = Qrb, which is
i] [=:] = [t
0 0 X3 0
1
72
Solving this system by back substitution yields x
3
= 2, x
2
=
1
2
1
, x
1
=
11. The plane 2x  y + 3z = 0 corresponds to a.l. where a= (2, 1, 3). Thus, writing a as a column
vector, the standard matrix for the reflection of R
3
about the plane is
2 T
H = 1  aa =
a r a
= [
0 0 1 6 3 9 7
and the reflectiOn of I he vee lor b = (1, 2, 2) about that plane is given, in column form, by
Hb=
!2. lhevlane.x 1y lz Ocorrespondstoa
1
wherea=(l,l,4). Thus,writingaMacolumnvector,
t.he standard matnx for the reflection of R
3
about the plane tS
[ 1 4] [ 
18 I  4 :;
4  l 16 :1 .: _l
9 9 9
o.nd the retlec:ti•>n of the vector b = (1,0,1) about tltat plane is given, in column form, by
Hb H : Ul
0
[:
 1
:]
u
2
!l
3
13.
2 T
1 1
I
H = l   aa 
=
3
aT a
0 1
2
3
0
2 [  :
 1
u
I
3
14.
2 1
I 1
2
H =1 aa =
=
3
aT a
6 2
4J
0 2
2
3
0 0
OJ r
0 0
0 0
 ·0]
2 T
1 0 0 2 0 1  1
9 2
IT IT II
15. Fi = 1  aa
2 9 6
aT a
0 1
11
 1 1
TI IT IT
0 0 3 3
_..§.
i
7
II II
n
EXERCISE SET7.10
257
=
13 4 2
']
0 0
0] [ 1
2 3 T5 15 5 15
2 1 0 0 2 2 4 6
4 7 _1 4.
16. H l aar =
15 H' 5
15
aT a
0 1 o 10 3 6 9
2 4 1 2
5
ii s
g
0 0
1 1 2 3
2 _.i_ :.! 13
15 15
5
15
17. (a) Let a=vw==(3,,1)(5,0)=(2,4),
Then His the Householder matrix for the reflection about a..l, and Hv = w.
(b) Let a=vw=(3,4)(0,5)=(3,1),
4 •
5
Then H is the Householder matrix for the reflection about a..l, and Hv = w.
(c) Let a= v. w = (3,4) (¥, {/) = (
6
;12, S+l'l). Then the appropriate Householder ma
trix is:
2 T [1 0] 2
H = I aa =  ;::::
aT a 0 1 &0 l7v'2
1725/2] r .1__..
2 l'n
= 1
33+8../2 
2 ../2
18. (a) Let a= v w,., (1, 1) (v'2,0) = (1 v'2, 1). Then the appropriate Householder matrix is:
H = J ..... [1 OJ ___ 2 _ [(1 v'2)
2
1 v'2]
a1'a 0 1 ·12v'2 1/2 I
=
2 2 2
(b) Let v w = (1, 1) {0, /2) = (l, 1 .J2). Then the appropriate Householder matrix is:
H =
1
__ 2 aaT = [1 0] ... __ ._2_ [ 1 1 v'2]
aTa 0 1 42J2 1 v2 (1 /2)
2
= [ot o] _ [¥
l v'2
2
=
2 v'2 J:i
2 2
( <:} Let a = v  w = ( t, 1)  ( Y1) ' 1+2v'3) = e2n. l;/3). Then the appropriate holrlcr
mat.rix is:
= (
0
1 o] _
1 32:/3
2
( 3/'3 )( 12.,13 )]
e.i
13
)
2
3±2/3]
2
.fl
2
19. L;;t w = (llvll, 0, 0) = (3, 0, 0), and a= v w = ( 5, 1, 2). Then
2 T
lJ =I aa =
aT a
[
1 0 0] 2 [ 25 5 10] [
0 1 0   5 1 2 = 3
0 0 1
30
10 2 4
3
1
3
14
15
2
15
l]
11
I5
is the .standard matrix for the Householder reflection of R
3
about a..l, and H v = w.
258 Chapter 7
20. Let w = (llvll, 0, 0) = (3, 0, 0), a= v  w = ( 2, 2, 2). Then
0
lq :
4
4]
H
'2
!l
2
j
H =1   aar = 1 4 4 =
l
aT a
j
0 1 4 4 t
4
3
is the standard matrix for the Householder reflection of R
3
about a.l, and Hv = w.
21. Let v = (1, 1), w = (llvll, 0) = ( J2, 0), and a= v w = (1 /2,  1). Then the Householder re
flection abouL a..L maps v into w. The standard matrix for this reflection is
H = 1  _2_aaT = [1 0] 2 [(1 J2)2
aTa 0 1  4 2/2 1 + v'2
= [ol o] _ 2 + J2 [3 2J2 1 + J2]
1 2 1 + J2 1 J
= l'r
4] = [ .;;  Y{l
_fl _fl
2 2 2
[
:D [ l 2] [.;2
We have HA = _
1 3
=
0
= R and, setting
Q = H 
1
= HT = H, this
y1elds the following QRdecompositioo of the matrix A:
A=[ 1 2]=[ :f
 1 3 _fl
2
X}] [h  ] R
{/ 0 Q
22. Let v = (2, 1 ), w = (llvll, 0) = ( v's, 0), and a = v  w = (2 v's, 1 ). Then the Ho\lseholder reflection
about a.l. nnps v into w The standard matrix for this reflection JS
Now let Q, = iJ] = [: 'i
OJ _ 2 [(2  '' 5)
2
2 v'5l
I 10 4/S 2  ,f5 1 J
=
OJ  l = [¥
1
5 5 s :;
and multiply the given matnx on the left by Q
1
_M
5
Js]=R
_w o 1 o o
s
and, settmg Q = QT = Q,, we have the following QRdecomposition of the matrix A:
y] 0
0] [1
¥ 0
=QR
This ytelds
EXERCISE SET 7.10 259
23. Referring to the construction in Exercise 21, the second entry in the first column of A can be zeroed
24.
out by multiplying by the orthogonal matrix Q
1
as indicated below: ·
1] [v2
3 = 0
5 0
2¥'2 ../2]
0  2J2
4 5
Although the matrix on the ri ght is not upper triangul ar, it can be made so by interchanging the 2nd
and 3rd rows, and this can be achieved by interchanging the corresponding rows of Q
1
• This yields .
[
fl f.
QzA =
2
0 0
_fl
2 2
=
0 0 4 5 0
../2]
=R
and finally, setting Q == Q:;
1
"" Qf, we obtain the following QRdecomposi tion of A:
[
1 2
A =  1  2
0 ·1
ll [ 1
=
0 · v;l [V2
0 _ 1.1'2 0
2
1 0 0
2/2
4
0  2..;'2
= QR
Referring to the construction in Exercise 18 (a) , the second enlry in the first column of A can be
zeroed out by multiplying hy the orthogonal matrix Q
1
a.<> indicated below!
:If .;_; : 0 0 [l 2 1] ¥ ' L.}1
QI
A = {/  Yf : 0 0 1 3  2 o:  :If 3'{2 ....
o o: 16 o  l o r= oi cY o :
o o : o 1 o
0 1
\ ol o , 1 )
· / .
From a similar construction, the third entryin t he second colum'n of Q
1
A zeroed out by
multi plyiug by t he orthogomli matrix Q
2
as indicated below:
__  [v2 ¥ V[l
I r.;  I
0 I ·  y6 I 0 0 ,/2 3 f2
Q
Q
I I 3 .  :!..lL!
·> 1 ; 1 · ' ' 2 2 =
... I · r.:. t
 ' 
,J2 w
_ill
2 2
0
.i§
_ _.&
2 2
0 0 "' v'3
0 0 1
From a Uurd such construction, the fourth entry in the thi rd column of Q2Q
1
A can be zeroed out by
multi plying by the orthogonal matrix Q
3
as indicated below:
sV2  :£1
2 2
v'6
2 2 =
0  ./3
0
\12
0
0
0
_ill
2 2
:& _:L§
2 2 = R
0 2
0 0
Finally, setting Q = Q[QfQT = Q1Q2Q3, we obtain the following QRdccomposition of A:
:£1  .i§ 1  :il
2 6 2 6
::11 1 :il
2 6 2 6
0 1 :il
J 2 6
I fl
2 2
_ _ill
A=
0 0
0
......
260 Chapter 7
25. Since A = QR, the system Ax = b is equivalent to the upper triangular system Rx = QTb whlcb is:
26.
[
v'3  v'3 33] [:ttl [73
0 .../2 0 :t2 = 0
0 0
4
X3 .:i§
7s 3
Solving t his system by back substit ution yields :t3 = 1, :t2 = 1, :t
1
= 1.
(a)
(b)
Since aaTx = a (aTx) = (aTx)a, we have Hx =(I af.aaT)x = /x  af.aaaTx =X e:1T:)a.
Using the formula in pa.rt (a), we have Hx = x (
2
$:)a = (3,4, 1}
1
: (1, 1, 1) = ( 1, Jj).
On the other ha.nd, we have H = I  ;faaaT = j "" [1 : and so
0 0 l 1 1 1 _1 l
a 3 _ 3
DISCUSSION AND DISCOVERY
Dl. The standard matrix for the reflection of R
3
about ef is (as should be expected)
2 [
0 0 1 0 0 0 0 0 1
and for t.he others.
D2. The standard matnx for the reflection of R
2
about t he line y = mx is (taking a = (1, m)) given
by
D3. If s = ±v's3, then llwll = ll vll and tbe Householder reflect1on about. (v w).l mnps v into w
D4. Since llwll = llvll , the Householder reflection about (v  w).l. maps v into w , We have v  w =
(8, 12), and so (v  w).l. is the line 8x + 12y = 0, or y =
D5. Let a = v  w = ( 1, 2, 2) (0, 0, 3) = (1, 2, 1). Then the reflection of R
3
about a.l. maps v into
w, and the plane a .l corresponds to x + 2y z = 0 or z = x + 2y.
WORKING WITH PROOFS
P2. To show that H = I  af;aaT is orthogonal we must show that HT = H
1
• This follows from
T ( 2 T) ( 2 TJ' T 2 T 2 T 4 T T
H H = I  aa I  aa = I   aa  aa + aa aa
aTa aTa aTa aTa (aTa)2
=I .2.. aaT  
2
 aaT + 
4
 aaT =I
aTa aTa aTa
where we ha,·e used the fact that aaT aaT =a( aT a)aT = (aTa)naT.
EXERCISE SET 7.11
261
P 3. One ofthe features of the GramSchmidt process is that span{ q
1
, q 2, . . . , q,} = span { w
1
, w
2
, , . . , Wj}
for ea.ch j == 1, 2, . .. , k. in the expansion
we must have w
3
• CV ::j:. 0, for otherwise w j would be in span { Q1 , Q2, ... , ClJ 1 } =
span{w1 , w2 , ... , w; l } which would mean that { w
1
, w2, . . . , w
1
} is a linearly dependent set.
P4. If A = QR is a QRdecomposition of A, then Q = AR
1
. From this it follows that the columns of
Q belong to the column space of A. In particular, if R
1
= js,j], then from Q = AR
1
it follows
that
c; (Q) = Acj(R
1
) = StjCI(A) + s2jc2(A) + · · · +
for each j = 1, 2, ... , k. Finally, since dim(col(A)) = k and t.he vectors Ct(Q), c2(Q), ... ,ck(Q) are
linearly independent, it follows that they form a basis for col(A).
EXERCISE SET 7.11
1. (a) We have w '"" 3v1 ... 7v
2
; thus (w)B = (3, 7) and [w]B =
(b) h
1 1 I
[
2 a3) [cc2']  [11)' and
T e vect or equation c
1
v
1
+ c
2
v
2
= w is equiva. ent to t 1e inear system
 4
the 1:.olut.ion of this system is c
1
= 1
8
, c2 =
1
3
4
. Thus (w ) B = {fa, f4) and lwl o = (
2. (a) (w)B = ( 2, 5)
(b) (w)n =(I , 1}
3. The vector equation c1 v1 + + c
3
v
3
= w is equivalent to = Solving this sys
o 0 3 C3 3
[
tem by back subi:ititut ion yir.ldsc.
3
= l,c
2
= 2,c
1
= 3. Thus (w)a = (3,2, 1) and [win=
4. The vector equation c1 v
1
r.
2
v
2
+ c
3
v
3
= w is equiva.lent to  :. _; ] = Solving this
3 69C:J 3
system by row reduction yif!k!s c
1
=' 2, c
2
= 0, c
3
= 1. Thus (w)B = (  2, 0, 1).
5. If (u)n = (7,  2, 1), then u = 7v
1
 2v
2
+ = 7(1, 0, 0) · 2(2, 2, 0) + (3, 3, 3) = (6,  1, 3).
6. If (u)B = (8,  5, <1 ), then u = 8v
1
 5v
2
+ 4v
3
= 8(1, 2, 3) 5{ 1, :>, 6) + 4(7,  8, 9) =(56, 41, 30) .
7. (w}B = (w · v 1 , w · v2) = = ..
8. (w)a = (w · v
1
, w · v 'l , w · v
3
)::: (0, 2, 1)
9. (w)£J =(W • Vt W·v
2
W·v
3
)=(....Q... _..2_ _hQ y])
' ' /2' ,/3' v6 2 ' 3 • 6
262 Chapter 7
11. (a) We have u = v1 + = v =  v1 + 4v2 =
1
;).
( b) Using Theoretn 7.11.2: Uu ll = ll(u)all = y'(1)
2
+ (1)
2
= /2, llvll = ll(v)sll = J(1}
2
+ (4)2 =
m. and u. y = (u}s
0
(v)a = (1)(1) + (1)(4) = 3.
Computing directly· llull =
4
= /¥s = /2, llvll = Je;p + e;p = !iii= Jl7.
and u · v = (t)(
+ (!)( ;
6
) = = 3.
12. (a) We have u = 2v1 + v2 + 2v3 = i, v = 3vl + Ov2 2v3 = .!p, 1).
(b) ll u ll = ll(u)sll = j( 2)
2
+ (1)
2
+ (2)2 = 3, ll v ll = ll(v)sll = J(3)
2
+ (0)
2
+ ( 2)2 = Jf3, and
u · v = (u)s · (v )s = {2}(3) + (1)(0) + (2)(2) =  10.
ll u ll = + (D2 + = fj = 3, llv ll = = J¥ = Jf3,
and u · v = !) + ( i)(
1
3
°) + ( =
9
9
° = 10.
13. !lull = ll(u)Dil = J(  1)
2
1 (2)
2
+ (1)2 + (3)2 = Jl5
llvll = li(v)sll = }(0)
2
t (3)2 + (1)
2
+ (5)
2
= J35
Uwll  ll( w)sll = ,fi2)
2
+ ( 4)2 + (3)
2
+ (1)2 = v'30
u v + w 11 = 11 < v > s + < w > s 11 = 11 <2, 1, 4, 6) 11 = .;,..,....< =2=>
2
+:<
2
+:<"'="'s >""":.! = .Jf05
llv wll = li(v)a (w)all = 11(2, 1, 2, 4)11 = y'(2)2 + (1)2 + ( 2)2 + (4)2 = 5
v · w = (v)a · (w)a = (0)(2) + (3)(4) + (1)(3) + (5)(1) = 20
14. !l ull= ll(u)
8
11 = j(0)2 + (0)2 + (1)2 + (1)2 = /2
Hvll = ll(v)o
1
1 j(5)
2
+ (5)2 ..l.. {2)2 + (2)2 =,/58
ll wll = ll(w)all = .j(3)2 + {0)
2
+ (3)
2
+ (0)2 = /18 = 3v'2
ll v + wU = ll(v)a + (w)all = 11(8,5, 5, 2)11 =
+;( = JIT8
ll v  wU = il(v)a (w)all = 11(2, 5, 1, 2)11 = y'(2)'l + (5)2 + (1)2 + (2)2 = v'34
v · w = (viB · (w)a (5)(3) + (5)(0) + (2)(3) 4 (2)(0) = 21
15. Let B  {e
1
,e2} be the standard basis for R
2
, and let B' {v
1
, v
2
} be the basis corresponding to
the .r'y'systcm that is described in Ftgure Ex15. Then Ps· ..... s = [[vi]a [v2Jsl = [
1
and so
0 V'1
Ps.a· (PB'+D)
1
= It follows that x'y'coordinates are relat ed to xycoordinates by
the equations x' = x  y and y' = /2y. In particular:
(a) If (x, y) = (1, L), then (x', y') = (0, /2).
(c) [f (x,y) = (0,1), thPn (x',y') = (1, /2).
(b ) If (x,y) = (1,0), then (x',y') = (1,0).
(d ) If (x, y) = (a, b), then (x', y') = (a b, v'2b)
16. Let B = {i, j } be the standard basis for R
2
, and let B' = {u
1
, u2} be the basis for the x'y'system
as described in Figure Ex16. Then P9• s  (*f
10
] and Pas = (Ps ....
8
)
1
= [
0
]. Thuc;
=  1 73 1
x'y'coordinates are related to xycoordinates by tlre equations x' = and y' = + y. In
particu Jar.
(a) If (x,y) = (v'J, 1), then (x',y') = (2,0}.
( b ) 1f (x,y) = (1.0), then (x',y') =
(c) If (x,y) = (0, L). then (x', y') = (0, I )
(d) lf (x,y)
EXERCISE SET 7.11 263
[
:2  ::J I
(b) The row reduced echelon form of [ B I S'J =
1 4
:
0
[
4 3 ]
IT Ti
l 2 .
n H
(c) Note that
= [
2
1
J] I  .!.. [
4 3
]  P ·
4  li 1 'J.  StB·
0) . [1
IS
1 0
(v2)s) = (
2

3
).
l 1
O I 4
I rr
li _J..
I I I
1]; thus Ps+B ::::
ll
(d) We have w = v
1
 v z; thus [w)s = and lwls = Psts[w}s = G !] =
(e) We have w = 3e1  6e2; thus [w]s [w]D = Ps ,s[w]s = [_!] =
18. (a) ;]
(b)
(c)
(d)
(e)
19. (a)
(b)
{c)
0 8
The malrix [B I SJ :'!:
1 0 B10 0 1
form; Ps_.n .,.,
5 2 1
0
01 ·40
I
(J I 13
I
l, i 5
H) 9]
·.5 ··.1 as its reduced row echelon
2 ·1
lt is easy to chec.k tho.t
I 0 8 5 2 1 0
0 0]
.t o : t.hus ( Po.s)
1
= Ps+B·
0 1
a l  3 has o
I 5] [1
8 I 1 ()
0
0
0 I 2.'1!J1
o :. 77 as its reduceu row echelon form; thus
I I 30
[
[ 40 ) 6 91 [ 5] lr 239]
w = 5et 3e, + e3; thus [wls = 3 and lw] s = Ps ·t H[w)s = 13 5 3  3 = 77 .
l 5 2 1 1 30
(2
4_1
l
1] . [)
The row reduced echelon form of !Bt I B2] =
2
I
!S
 1 I 3 l 0
[11
10
2
g
The row reduced echelon form of [B2I Bd '""
11 2
i:.
I
l t 2 1
[ 0 _§ )
2  ;; .
[ 13
Note that
= ·r: ro (5)
Lt) = Ps,.82·
10
264
Chapter 7
(d ) w = ;i
0
u
1
+
thus [w]B, = [11 and (w]B, = = 11 [11 = [=:].
(e) We ha,·e w  4v1 7v 2i thus (w]s, (w)a, = Pa, .... a, (w)s, = = [11·
[
' 1 •1 2] . (
0
1
(b) The row reduced echelon form of (B2I Bt] =
3 4
:
2 3
IS
1 ; thus Pa,+B, =
0 I 2 5]
1 I  1  3
(c) Note that (Pe,.e.)
1
= = = Pe, .... s, .
(d ) We have w  2u1  u 2; thus [w]B, (w)e, ·= Pe, ... a,(w]s, = [_n =
(e) Wf! have w :3v1  v 2; thus [w]a
2
[w]B
1
= Pa, ... e ,[wJo, = _ =
21. (a)
[
6 2 2 I 3 3 1] [I
The row redured echelon form of [B2l Bt] = 6 6 3: 0 2 6 is o
[
0 4 7 I 3  1 1 0
thus Ps, BJ = H H
u
0
1
0
(b ) lfw=(5,8 5).wehave(w)a
1
=(l, l , l )andso
3
4
17
12
2
3
1J m , HJ
0
I 3
I 4
0
I 4
q 0
3
4
17
u
'2
.)
1 l
i2
.!1 .
1'2 I
2
3
(c) The row reduced e<'helon form of IB2 I w] = [=: is
0 4 715
;
I
o! H ; tHus (w)s,""
0
I !..2]
( ti, g, agrees with the computation in part (b).
22. (a )
The row reduced echelon form of the matdx (B2I Bt] = [
[
I 0 0 : 3 2 [ 3 2 S
0 1 0:2  3 4 ; thus Pa,_.a, = 2 3 4 .
0 0 11.'1 I 6 51 6
(b ) Ifw = 5), we have (w)a
1
= (9, 9,5) and so
I
1 4
I l
I 1
1
2 2
I
I 0 I I 1
1
3 2 I I
EXERCISE SET 7.11 265
(c) The row reduced echeloq form of [B2 I w) = [
3
 ! :] is [o
0
1
·53 21  5
( t,
2
,}, 6), which agrees with the computation in part (b).
0
l
0
I '2
0 •X]
o: ¥ ; t hus =
l : ()
23. The vector equation c1u 1 + c2u2 + C3 ll3 = v1 is equivalent to [* t [::] = the ·
'7J O 76 C
3
tTl
24.
25.
solution of this system is c
1
= ?s• c2 = = Thus (vt) s
1
= ( ,h, Similarly, we
have (v2)s
1
= (v
3
)s
1
=
l). Thus the transition matrix PB
2
4B, is
[ '
1
' l
76
3
372
Ps1+B, = _
1
0
21 4.
2\/'J 3¥"2
H is <'MY to check th<lt
1
2
0
[l 0 0]
:l,i2  0 1 0
.!. 0 0 1
6
Thus PFJ
1
4 Bt is an orthogonal matri x, a.nd !lince = = (Pn?•B,rr , the same is
true of Po, ·•B, .
[
o
0
1} .
(a) We have Vt = (0, 1) = Oe
1
+ le2 and vz = ( l, 0) = le\ t Oe2; thus Po .... s =
1
( b) 1f P = Ps.s = then, since P is orthogonal, we have pT = p
1
= (Ps ..... s)
1
= Ps+D·
Geometrically, this corresponds to the fact that reflection about y = x preserves lengt h and thus
is an orthogonal transformation.
(a) We have Vt = (cos20,sin28) and Vz = (sin20, cos2B); thus Pa ..... s =
sin28]
 cos28 ·
(b) II P = PB .... s then, since P is orthogonal, we have pr = pl = (PB_.s)
1
= Ps ..... b · Geomct
this corresponds to the fact that reflection about the line preserves length a.nd thus is
an orthogonal transformation.
266
27 .
(a) If (x) = (2], lhen = [ sin(34")] (:r] =
y 6 11 sm(
3
;) cos(
3
;} Y 72
(b) If [x:) = [5] ' then [;r] = [cose4" ) sin(
3
: >] [:r:] = r
ll 2 ll sm(
3
4
ft) cos(
3
4
") ll 72
28. (a) If [:] z rJ [:]= rJ =
(b) If r::J = then [:J = rJ = rJ =
29. (a) If [:] = nl, then [::] = [::(;) * nl = l
=  [ = [t].
0 1 z' 0 0 1 3 3
(b)
[
x'] [ 1] [x]
rr = ' lhen ;:
30 (a) {i]· [t : :][:] = : rl nl =
(b) If[::]= Ul· u : !l [::] = u: !l Ul =
31. We have [::] = [ r ;] m and = [! : !] [::] Thus
= [! r ;)[t r m
4
.:.::]
•I
DISCUSSION AND DISCOVERY
[
 .J_ l!]
IS
7 2.! •
R
Chapter 7
02. (a) Let 8 = {v,,v2, v 3}, where v
1
= (1, 1, 0) , v2 = (1,0,2),v3 = (0,2, 1) correspond to the col
umn veclors of the matrix P. Then, from Theorem 1.11.8, Pis the transition matrix from
8 to the standard basis S = {e
1
,e
2
,e
3
}.
(b) rr pis t he transition matrix from s = {e ,,e2,e3} to B = {wl . w2. WJ}, e, = Wt + w2.
e2 = w, + 2w3, a.nd e3 = 2w2 + WJ. Solving these vector equations for w
1
, w
2
, and WJ in
terms of e1, e2, and e3 results in w
1
= + = w2 = %e1  !e2 +
= i· w3 = je, + j e2 + \e3 = !>· Note that w
1
, w2, W3 correspond
to the column vectors of the matrix p .
WORKING WITH PROOFS
267
D3. If 8 f v
1
, v, , v 3} and if [ : ] is l<an,ition matrix from B to the given basis, then
v
1
== (1, 1, 1 ) , V·l = 3(1, 1, 0) + (1, 0, 0) = ( 4, 3, 0) , and v :l = 2(1, 1, 0) + ( 1, 0, 0) = (3, 2, 0).
D4. If lwls = w holds for every w, t hen the transi tion mat rD.. from the standard basis S to the basis
B is
Ps .... B = ([eds I (ez]s l le3]sl = rel l ez les} =In
and soB= S = {ebe2, ... ,en}·
D5. If !x Yls = 0, then {x]s = IY]B and sox= y .
WORKING WITH PROOFS
P l. If c1, Cz, . . . , Ck arc scalars, then (c1 v 1 + czv2 + · · · + = c1 (v t ) s + cz(vz )s + · · · +
Note also that ( v) 8 = 0 if and only if v = 0. It follows that c
1
v
1
+ Cz v2 + · · · + ck v 1: = 0 if and
only if c
1
(v
1
)a I· cz(vz)s + · · · + ck(vk) B = 0. Thus the vectors Y1, v2, . . ,
1
vk arc linearly indc·
pendent if and only if (v
1
) a, (v;!) B, ...
1
(vk).V are linearly independent.
P 2. The vec tors VJ, Vz , . .. ' VI( s pan Rn if and only if every vector v in nn can be expressed as a linear
combination of them, i.e., there exist scalars c
1 1
c2, .. . , Ck SllCh t.hat v = r.1 v
1
+ c
2
v
2
+ · · · +
Since (v)n = c1(v1) B + c2(v2)B + · · · + ck(vk)B and the coordinate mapping v + (v)a is onto,
it follows that the vectors v l, v2,' . . 'Vk span Rn jf and only if (vt)e. (v 2) B,· .. ' (vk)B span nn.
P :l. Since the coorciinate map x + !xJn is onto, we have A{xJs = C!xle for every x in R" if and only
if Ay =:.: Cy for every y in R
11
• Thus, using Theorem 3.4.4, we can conclude that. A = C if and
only if .4[x ]B "' C[xJ a for every x in R
11
•
P4.. Suppose B , ... {ut , u 2, . . . , u
71
} is a basis for [C' . T hen if v = a1 u1 + a2 U·l + · · · + a
71
Un anJ w =
il1 l.I J + l>zu 2 + · · · ' b, u,.. , we have:
V + W = (a1111 + a2 u2 + · · ' + O.n Un) 1 (b1U1 + b2U2 + ' ' · + bnUn)
=(eLl+ bt) Ut + (a2 + b2)u 2 +··· +(a, + bn )Un·
Thus (c:v)s = ca2, .. . , can)= c(a1 , . .. , an.) :::;: c(v)s and (v + w)e =(at + b1, .. . , an + b,.,) =
(al,a21 . .. , an) + (bt , b2, . .. ,bn) = (v)B + (w)s.
EXERCISE SET 8.1
CHAPTER 8
Diagonalization
Thus [x]a = [.];
2
[Tx]a = [:z\:
2
:r
2
), and (T]s = I[Tvt]a [Tv z]a] = we note
that
(Tx]s = rxl2l X2] = [22 ol] [ X2 ] = IT]a[x]a
l X2 X,
which is Formula (7).
Thus [x]s = r i_"'' +
[Tx]o =
!%
1
1 and ITis = IITvda
:r, + ,r, :;:rt
[
I  •
(Tv2JaJ = " .
"5
Finally, we note that
(TxJa
\\"l11ch is Formula (7)
3. Let P = Ps•B where S = {e
1
,e2} is the standard basis. Then P = [[vds (v2JsJ =
P[TJ p l = [I 1] [2 1] [ 0 1] = [1 1] _ [TJ
8
1 0 2 0 1 1 1 1
4. Let P = o where S { e1o ez} is the standard basis. Then P = llv!]s [vz]s] = and
PITJaP
1
=
5. For P\'ery v€:ctor x m R
3
, we have
2] [t
1 ..,.
:>

1
5
8
] [ 1 2] = [l
_ .1 5 2 l 3
5
= [T]
[::] (jx, jx, + jx,) m + H·· + jx, + [!] + (jx, + !•· jx,) m
268
Exercise Set 8. 1
269
ami
Finally, we note that
[
xz  !x3 ]
[TxJH =  4x: + X2
1
 
 2Xt + 2 X3
which is Formula (7).
6. .For every vector x in R
3
, have
X = = Hx,  fz, + !x,) nl + tlx, + + {,x,) m + ( jx, + lx,!x,) nl
and
[T] F = [[Tv,]o [7'v,]o [Tv, ]!,] = [ J
Finally, we note that
which is Formula (7) .
[
y
12
7
8
il
270 Chapter 8
7.
[i
0
;]
P = Ps .... a where 5' is the standard basis. T hen P = [lvt]s
[v2Js [vJ]sJ = and
[;
0
il H
3
t] [l
I
J]
H
1
_;]
2 2
P[TJaP
1
=
1 I
=
1 ={Tj
2 l
1
l l 1 I
0
2
2 2
2
8. Let P  Ps ..... a where S is the standard basis. Then P = [lv
1
]s
Jv,]s lv>lsl = n
3 1] [y
3 2
3 0 _'J.
8
:J n
_ !
8
..1..
12
3
8
j] [0 2
112 = 0 0
J 1 0
 8
= [TJ
 2
9. We have T v l = =II = v 1 and Tv2 = = 1
9
1 vl +
.. 'I I 1' I [()] J [2] 2 r3] 3 I 2 I d r I [1 I] l2 19 I Th
Stmt nr y, v
1
=
1
=IT
1
+IT
4
= nv
1
+ nv
2
an v
2

4
=  rrv
1
+ nv
2
. us
[T]s = anti [T]s· = Since V j = v; + v2 and V?. = v; v2, we have p =
I I TI II II
Ps ... a• = [: _:) and
P[l]sP
1
=
[
11
1] [_..:!..
1 M
fl] = [A
26 I I 2
TI 2 2 IT
32]
=!Tis
II
10. We ha\·e Tvl = [ 156] : 2] + 1798[ ") = _86v + L798v and Tv = [3] 6l v 
82 4S 22 15 1 15 1 2 2 16  90 1 45 2 •
Si1mlarly. = [ = 
3
2
1
GJ 
[ = :) = 
3
2
1
v; 
7
2
s v2 and Tv2 = [ i v; + ;sv;. Thus
l
81i 61 ] [ 31 9]
[Tis = _ and [Tis· = =; S1me v! = 10v1 r 8v2 and v 2 =   ¥ v2, we hnve
'2 2
[
lo [1o 
2
1 (_§§ 3AJ [tS _..!.. ] [¥
P = Pa ... FJ• =
8
_;j ancl P[T is P 
1
=
8
¥J 1!i: h _
1
i = 7i ¥ = [T]s··
11. The .qualion P[T]B p  l = [Tis• is equivalent to [T]a = p
1
[Tis• P. Thus, from Exercise 9, we
have
[Tis
!] [ 1:1
2 11
where, as before, P = Pe.s•.
 []
.!.2 1
11
1] = p• [T]s· P
1
12. The e>quation P[T]sP · I= [Tis· is equivalent to [T]s = P 
1
[Tl"' •P. Thus, from Exercise 10, we
have
[
86
[TJs =
45
61]
90
49
;rs
whf•rt•, ns before. P = Poe··
[
13
 90
 ..!.
45
[10
8
2
Exercise Set 8.1
13. The standard matrix is [T} = [
1
and, from Exercise 9, we have !T}s = [ ft . These
. , o 1  rr u
matrices are related by the equation
[
1
P[T]
8
P
1
=
5
5]
3 _25
11
l!.. l
ll 22
'l 6 5
TI 22
= [1 2] = jTJ
22 0 1
14. The standard matrix is [T] = and, from Exercise 10, we have {T)s = [ _ These
matrices are related by the equation
where P = [vl I v:J
15. (a) For ev€r y x = (:.c,y) in R
2
, we have
a.nd
(b) Ju agreement wit b l·hrmu Ia. (7), we hn.vr.
16. {a) For every x " (x.y,z) in R
3
, we ha.ve
and
\ .. Tx .,;,
[
x + Y + z1 [1] [1] [0]
j +(4z)
I /
( v, ? !
) J •
'··
Chapter 8
272
(b) ln agreement with Formula. (7), we have
17. For every vector x in R'l, we have
x = [::1 = + %xz) + ( {
0
x1 + fo:r2) = + ix2)v1 + (
1
3
0
xl + foxz)v2
and
Finally, in agreement with Formula. (26), we have
18. For every vector x m R
2
, we have
anrl
[
1 + I l
1
1

2
:r1
3
:r2
:tt  X2
t hus lxls = ; l· jTxJs' = , and ITJs•s = I!Tv1J B'
4 Xl 1 ;:t:z 3 '2
z X I + J%2
Finally, in agreement with Formula (26), we have
19. For every vect or x in R
3
, we have
Exercise Set 8.1 273
and
Tx = · = ·x
1
  .. xz  ;;XJ +  x1  x2 +  ;[J
[
3Xt + 2X3] ( 5 1 4 ) r3] (6 3 2 ) [1]
2x
1
_ x
2
r 14 ,
2
1 1<1 1
4
[
4xt +  6 1 1
h [
I
I l 1 [T J [rrt 14X2 'fX3]
t US X B = zXl 
2
xz + '2:2:3 , X B' =
3 2
. 1
l 1 I 7XJf4X2+'f:t3

2
x1 +
2
x:z +
2
x3
ann [T]B'B = r !
Ill 'f
!4]
I .
Iii
Finally, in agreement with Formula (26), we have
20. For every vector x in R
3
, we hnvc
and
· !]
1
• Finally,
lf 2
• =
rl  lx1  .2.. .t2 l [ 7 14'  14
4 l  13
7Xl il.X:.! I· ZJ \.ii
21. (a) [TvdB = and !Tv:da = GJ ·
(b) Tvl = VI  2v2 = [ 2GJ = Md T vz = 3vl +5v2 = + =
(c) For every vector x in R
2
, we have
Thus, using the linearity ofT, it follows t hat
· 1 2 f 5) ( 2 1 ) [ 7]  [
1
l Xt 
Tx==(  . xt+
5
x2)l
0
+ 
5
x1+5x2
11
.., 22x + ll x
5 1 5 2
or, in t he comma delimited form, T (xJ> x2) = (
1
ix1 fjx1 + Jtx2).
(d) Using the f.: mt:la obtained in part (c) , we have T(l , l ) = (lri'  ¥ + ¥) = ( '
5
u,
3
5
3
) .
274 Chapter 8
(c) For every vector x in R
4
, we have
x  4x2 +
1
2
1
x4)v1 +  2.r2 4x3 +
+ ( + !xJ !x4)v3 + + + !xJ
I hu:>, using lht ofT, it follows
Tx  4x2  !x3 +
1
; x4)Tv1 + 2x2 !x3 +
+ + !x3 !x.a)Tv3 + + + !x3
which leads to the following formula forT
(d ) Using Lhe formula obtained in par t (c), we have T(2, 2, 0, 0) = (  31
1
37, 12).
23. 1f T IS the identity operator then, since Te1 = e
1
and Te2 = e2, we have IT]= Similarly,
[T]a IT]s• = On the other hand, Tv1 = = + H[_:] = +
and Tv
2
= [ = {,v} !:v;; thus ITJs•.B =
5I
24. If Tis the identity operator then IT] = IT] B = IT] a• = On the other hand, we have Tv1 =
0 0 l
[
3
] 37 [
2
1 19 [ ?] II [g] 37 1 19 1 1J 1 T _ [Q]  299 1 209 1 229 I
= 36 + 36 : 36 = Jij vl + 36v2 36 v 3, v'2   144vl + mv2.  144v3,
DISCUSSION ANO DISCOVERY
[
37
3
and Tv3  [9] =
173
V. + illv' 
115
v'· thus [T}a• D = .IJ!
 1 . 48 2 48 3• ' 36
4 I I
 36
299
m
'109
m
'229
 m
ill]
48
119
48 .
ll5
48
275
25. Let B = {v
1
, v2, ,.. , v ,t) and B' = {u
1
, u 2, ... , um} be ba.ses for R" and Rm respectively. Then,
if T is the zero transformation, we have
for each i = 1, 2, ... , n. Thus {TJB' ,B = [0 I 0 I · · · I OJ is the zero matrix.
26. There is a. scalar k > 0 such that T(x) = kx for all x in R". Thus, if B = {v
1
, v
2
, •.• , v,} is any
basis for Rn, we have T(v
1
) = kvi for aJl j = 1, 2, . .. , n and so
[T]a =
0 0
27. [T]:
[H !]
0
0
k
=k
1 0
0 1
0 0
28. [TI =
29. We have Tv1 = [ _ = [ = 4vl and Tv2 = = [ = 6vz; thus {Tla =
[ From this we see that the effect of the operator 1' is to stretch the v
1
component of a
vector by a factor of 4 and reverse its direction, and to stretch the v 2 component by a factor of 6.
If the xycoordinat e axeg are rotated 45 degrees clockwise to produce an x'y'coordinate system
who::;e axes are aligned with the directions of the vectors v
1
ttnd v 2, then the effe.t is st.retch
by a fact or of 4 in t.he x' dircction, reflect about the y'axis, and stret ch by a factor of 6 in the
y'direction.
30. Wr. have ami
Tv3 = =  v'3v2  v3. Thus [1']a = o 1 vf:J = 2 0 !  {/ . From lhis we sec that
[
0] [2 0 0] [1 0 0]
 3; 0 J3  1 0 :,(f
t he effect of the operator 1' is to rotate vectors counterclockwise by an angle of 120 degrees about
the v, axis (looking toward the origin from the tip of v!), lbeu stretch by a factor of 2.
DISCUSSION AND DISCOVERY
Dl. Since Tv1 = v2 and Tv2 = vl> the matrix ofT with respect to the basis B = {v
1
,v
2
} is [T]B =
On t he other hand, since e1 =
and e2 = *v2, we have Te1 = iTv
1
= = 2e2 and
Te2 = =
thus the standard matrix forT is [Tj == t].
276 Chapter 8
D2. The !lppropri ate diagram is
c•
c
(T)Bl
03. The approp1 inte diagram is
c• D
D4 (a) I'luc We ha\·e IT1 (x ))e = !Tds s{x)s = {T2Js• e[x)s = !T2(x))s·: thus Tt(x) = T2(x).
( b) l· abe. l·ot example, the zero operator has same rn:ttrJX (t hP ZNO nMtnx) with respect to
any basis for R
2
(c) True If 8 = { v1, v2, , vn} and [T)s =I, then T(v,.J = v1c for each k = 1, 2, . , n and it
follows from this that T(x) = x for all x .
(d ) False For example, h:t B = {e1, e2}. 8' = {ez,ct}, and T(x, y) = (y, x). Them IT]s•. B = / 2
but T not the 1dent1t.y operator.
D 5. Ouc 1cason 1s that lhf tcpresPntation 0f the operator in • orht'r b:L.c;is may more clearly reAect
t. he rir effect. of the operator
WORKING WITH PROOFS
Pl . If x andy are vectors and cis a scalar then, si nce T is linear, we have
c[x)e = [cx]s 7 {T(cx)Ja = [cT(x )Je = c{T(x)]a
[x)o + [Y)s = jx + Y)s t[T(x + y)]s = jT(x) + T(y)Js  {Tfx)]e + {T(y)]s
Thts shows that th mapping {x]s 7 [T(x)Js is linear.
P2. Ifx is tn R" and y ts m Rk, then we ha"e [T1 (x)J a• = [Tds•.s lx)s and IT2(y)J a" = !T2) B".B'IY)s•.
Thu:
IT2('i JIX))]s" = IT2] B", B' IT1 (x)Js• = IT2)B".B'!Tds·.alx]s
and rrolll tIll: It follows thal IT2 0 Tds•·,s = (:t;JB" ,B'!Tds·.B·
EXERCISE SET 8.2 277
P3. If x is a vector in Rn, then [T]s[x]D = [Tx] B = 0 if a.nd only if Tx :::: 0. Thus, if T is oneto
one, it follows that [TJB[X]B = 0 if and only if [x] a = J, i.e., that: [T]a is an invertible matrix.
Furthermore, since [T'"
1
]a[T]B = [T
1
oT}
8
= [J]B = /,we have [T
1
]s = [TaJ
1
.
P4. [TJs = [T(vi) a I T(v2)al ··· I T(vn}s] = [Tle. B
EXERCISE SET 8.2
1. We have tr{A) = 3 and tr(B) = 1; thus A and Bare not similar.
2. We have det(A) = 18 and det(B) = 14; thus A and B are not similar.
3. We have rank( A) = 3 and rank(B) = 2; thus A and B are not similar.
4. We have rank(A) = 1 and rank(B) = 2: thus A and B are tJOt similar.
5. (a) The size of the matrix corresponds to the degree of it.s characteristic polynomial; so in this
case we have a 5 x 5 mat rix. The eigenvalues of the matrix wi Lh t hei r algebraic mu!Liplicities
arc A = 0 (wult.iplicity 1), A= 1 (multiplicity 2), and A= 1 (mult.iplicity 2). The eigenspace
corresponding to A = 0 has dimension 1, and the eigenspaces corresponding to A=  1 or..\= 1
have dimension 1 or 2.
(b) The matrix is 11 x 11 with eigenvalues A =  3 (multiplicity 1 ), A =  1 (multiplicity 3), and
>. = 8 (multiplicity 7). The eigenspace corresponding to >. ""' 3 has dimension 1; the eigenspace
corresponding to ). =  J has dimension 1, 2, or 3; and the eigenspace corresponding to A = 8
have dimension I, 2, 3, 4, 5, 6, 7, or 8.
6. (a) The matrix is 5 x 5 with eigenvalues A= 0 (multiplicity 1), A= 1 (rm1ltiplicity 1), .A= 2
(multiplicity 1), and .A = 3 (multiplicity 2). The eigenspaces corresponding to .A= 0, >. = 1; and
>. =  2 each have dimension J. The eigenspact! corresponding to A = 3 has di mension 1 or 2.
(b) The matrix is 6 x 6 with eigenvalues A= 0 (multiplicity 2), A= 6 (multiplici ty 1) , and .A=
2 (multiplicity 3). The eigenspace corresp(1nding to >. = 6 has dimension 1, the eigP.nspace
corrcspondinr; to A = 0 ha.<> dimension 1 or 2; and the eigenspace corresponding to A = 2 has
dimension 1, 2, (){ 3.
7. Since A is triangular, its characteristic polynomial is p(.A) = (..\ l)(A 1)(.\  2) =(.A  1)
2
(A 2).
Thus the eigelll'alues of !I are .A= 1 and>.= 2, with algebraic multiplicities 2 a nd 1 respectively. The
eigenspace corresponding to A= 1 is the solution space of the system (I A)x = 0, which is
The general solution or this system is x = [i] = t [:] ; thus the eigenspa« is 1dime.'5ional and so
A = 1 has geometric multiplicity 1. The eigenspace corresponding to A = 2 is the solution space of
the system (2/ A)x = 0 which is
278 Chapter 8
The solution space of tbis system ;., x ~ [':] = s [l thus the eigenspace is 1dimensiotlai and so
). = 2 also has geometric multiplicity 1.
8. The eigenvalues of A are ). = l , >. = 3, and >. = 5; each with algebraic multiplicity 1 and geometric
rcultipliclty 1.
9. The characteristic polynomial of A is p(>.) = det(>J A) = (.X 5)
2
(>.  3). Thus the eigenvalues of
A are>. = 5 and >. = 3, with algebraic multiplicities 2 and 1 respectively. The eigenspace correspond
ing to ). = 5 is the solution space of the system (51 A)x = 0, which is
Tbe gene>al solution of this system is x ~ m ~ t [! l ; thus the eigenspace is 1<limensional aud so
). = 5 has geometric multiplicity 1. The eigenspace corresponding to ). = 3 is the solution space of
the system (31  A)x = 0 which is
The solution space of this system is x = [ ~ ~ = s [ ~ ~ ; thus the eigenspace is ]dimensional and so
 2s 2
). = 3 also has geometric multiplicity l .
10. The characleristic poiynomial of A is (). + 1)(.>. 3)
2
. Thus the eigenvalues of A are>.=  1 and
>. = 3, with al gebraic multiplici ties 1 and 2 respectively. The eigenspace corresponding to >. = 1 JS
1dirnensional, u.nd so >. = 1 has geometnc multiplicity 1. The eigenspace corresponding to >. = 3
is Lhe solution space of the system (31 A)x = 0 which is
[ _ ~  ~
~ ] [:;] = [ ~ ]
[
1"' [0]
The general solution of Lhis system is x ~ s ~ J + t ~ ; thus the eigenspace is 2dimensional, and so
). = 3 geometric multiplicity 2.
ll. The characteristic polynomial vf A is p(>.) = >.
3
+ 3>.
2
= >.
2
(>. + 3); thus the eigenvalues are >. = 0
and ..\ =  3, with algebraic multiplicities 2 and 1 respectively. The rank of the matrix
OJ  A = A = [ ~ 1 = ~ ]
 1 1 1
EXERCISE SET 6.2 279
is dearly 1 since each of the rows is a scalar multiple of the lst row. Thus nullity( OJ  A) :=: 3 _1 = 2,
and this is tbe geometric multiplicity of ..\ = 0. On the other hand, the matrix
31 A =
[
 2 1 1]
1 2 1
 1  1 2
0 1]
1 1 . Thus nullity( 31 A) == 3  2 = 1, and
0 0
has rank 2 since its reduced row echelon form is
this is t he geometric multiplicity of >. =  3.
12. The characteris tic polynomial of A is (>. l)(A
2
 2A + 2); thus >. = 1 is the only real eigenvalue of
A. The reduced row echelon form of the matrix li  A= is  thus the rank
 l 4  6 0 (I 0
of 11 · A is 2 and the geomet ric multi plicity of ).. = 1 is nulli ty( ll  A) = 3  2 "" 1.
13. The characteristic polynomia.l of A is p(>.) = .>.
3
 11>.
2
+ 39A  45 = (A 5)(>. 3)
2
; thus the eigen
values are >. = 5 and ,\ = 3, with algebraic mul tipli<.:i ties 1 and 2 respectively. The matrix
5/  A ""'
 1 0 l
has <ank 2, since its •eduC€d <Ow cohelon fo•m is ! Thus nullity(f>l A)  3  2 = l, and
this is the geometric multi plicity of ,\ = 5. On the o t her ha.nd, the matrix
3T A =
 1 0 ]
has rank l since each of the rows is a scalar multiple of the 1st row. Thus nuUity(3I  A) = 3  l = 2,
and this is the geometric multiplicity of,\ ""'3. It follows that A hM 1 + 2 '== 3 line<'lrly independent
eigenvectors and t hus is diagonali<>able.
14 . The characteristic polynomial of A is {A+ 2)(>.  l)
2
; thus the eigenvalues are,\=  2 and A= 1,
with algeb<ak multipHcities I and 2 <espocti vdy. The matrix  21  A  [: : =:] h,. <a.Jll< 2,
and t he matrix 11 A = [ : = has rank 1. Thus > =  2 has 1, and
 1  1 l
...
>. = 1 has geometric multiplicity 2. It follows that A has l + 2 = 3 linearly independent eigenvectors
thus is diagonalizable.
15. The characteristic pc,lynomial of A is p(.>) = .>
2
 3>. + 2 = (>. l) (A 2); thus A ho.s two distinct
eigenvalues,>. = 1 and>.= 2. The eigenspace corresponding to A = 1 is obtained by solving the system
(/  A)x = 0. which is = The general solution of thls system is x = Thus,
I
280 Chapter 8
taking t = 5, we see that ?l = [:] is an eigenvector for >. = 1. Similarly, P2 = [!] is an eigenvector
for A= 2. Finally, the matrix P = [p
1
p
2
] = [! !} has the property that
p1 AP = [ 4 3] [14 12] [4 3] = [l 0]
5 4 20 17 5 4 0 2
16. The characteristic polynomial of A is (.>. 1)(.>. + 1); thus A ha.s two distinct eigenvalues, .>.
1
= 1 and
.>.2 = 1. Corresponding eigenvectors are P1 = and P2 = and the matrix P = !P
1
P2J has
the property that p l AP = =
17. The characteristic polynomial of A is p(>.) = .>.(.>. 1)(,\ 2); thus A has three distinct eigenvalues,
>. = 0, >. = 1, and >. = 2. The eigenspace corresponding to >.. = 0 is obtained by solving· the system
(OJ A)x 0, which is n : =!] [::] The solution of this system is x +:J
Similarly, the genml solution of (I A)x 0 is x = +l, and the gene<al solution of (2!  A)x 0
is x t [:]. Thus the mat,ix P [! :] has the P':perty that
p
1
AP =
 . . [
011011 101 002
18. The characteristic polynomial of A is (A 2)(>.  3)
2
; thus tl.e eigenvalues of A are >. = 2 and >.. = 3.
19.
The vecto' v 1 [i] is an eigenvecto. cwesponding to 2, and v 2 [!] and v3
linearly independent eigenvectors corresponding to .X = 3. The matrix P = (v
1
v
3
J has the
property that, p  t AP = ;:
001003001 003
The characteristic polynomiaJ of A is p(>.) = >.
3
 6>.
2
+ 11>..  6 = (..>.. 1)(>.  2)(.>. 3); thus A has
Lhree distinct eigenvalues >.
1
= 1, .>.2 = 2 and .>.
3
= 3. Corresponding eigenvectors are v
1
=
v, m, and v, [:] Thus A is diagonalizable and P [: : :] has the pmpe<ty that
pIAP = : =
0 1 1 3 1 3 1 3 4 0 0 3
[:].
Note. The diagonalizing matrix P is not unique· it depends on the choice (and the order) of tht:
eigenvectors. This is just one possibility.
EXERCISE SET 8.2 281
20. The characteristic polynomial of A is p(>.) = >..
3
 t >.
2
+ 5>. 2 = (>. 2)(>. 1)
2
; thus A bas two
::::
17 9 f 7) 0
which shows I hat the etgenspacP corresponding to >. = 2 has dunension 1 Simi larly, llw general
solution of (I  A )x 0 is x  s [ il· which shows that the eigenspace conesponding tn A = t also
has dimension 1. It follows that the matrix A is not diagonalizable since it has only two linearly
independent eigenvectors.
21. The characteristic polynomial of A is p(>.) = (>.  5)
3
; thus A has one eigenvalue, >. = 5, which has
algebraic multipUcity 3. The eigenspace corresponding to >.. = 5 is obtained by solving the system
(51 A)x = 0, which is H : [:;] = m The genml solution of this system is X t
which shows that t ht:' eigenspacP has dimension 1, i.P , Lhe eigenvalue has geometric mulliplicity 1 lL
follows that. A is nnt. since thn sum of t.hr gcomelrif multiplicil.it!S vf its is
less than 3
22. The characteristic polynomtaJ nf A is >.
2
(>. 1}; thus the eigenvalues of A are \ = 0, and >. = 1 The
eigenspace corresponding to A 0 hM dimension 2. and the vecto'5 v, = [ v, [:] fotm a
hMis lot this space. The "'""' ' ' = m fotms a ba.<is lot the eigenspace:wespondut< :o = I.
ThusA IS diagonalizable and thr matnx P = lv
1
v2 v 3) has the property that
p  t AP = : u !
23 ThP charactt>risttc polynomtal of A is p( .\) = (>. + 2)
2
t.\ 3)
2
• thus A has two etgcnwtlues, .\ = 2
and .> = 3, each of which has n.lgebraic rnulLiplicity 2. The eigewJpace corresponding L(l ..\ = 2 is
obtained by solving thP. system ( 2/ A)x = 0, whi ch is
The gonctnl solution of this system ts x = r + s whteh shows that the eigenspace has dimenSton
2, i.e., that the eigenvalue >. = 2 has geomeLric multiplicity 2. On the other hand, the general
solution of (31  A )x  0 is x = I •nd so = 3 has geomet de mu I liphcity I It follows that A is
not diagonalizable since the sum of the geometric of its eigenvalues is lt>..ss than 4
282 Chapter 8
The charactenstic polynorrual of A is p(>.) = (>. + 2)2(>. 3)
2
; thus A has two eigenvalues A= 2
au.l 3, each of algebrak multiplici<y 2. The vec<on v,  and v2 = form a for <he
eige "'P"' correspondi og to  2, and <he vectors v,  rrl and v,  [ form a basis for <he
e1genspace corresponding to A= 3. Thus A is diagonalizable and the matrix P = fv
1
v
2
v
3
v
4
J
has the property that
p'AP=
1 1
0 0
0 1
0 0
 =
0 0 0 3 0 0 010 0
1 0 0 0 30 0 01 0
0 0
2
•0
0 3
0 0
25. 1f the matrix A is upper triangular with l 's on the main diagonal, then its characteristic polynomial is
p(>.) = (>. 1 )" and >. = lss the only eigenvalue. Thus, in order for A to be diagonalizable, the system
(1 A)x = 0 m11st haven linearly independent solutions. But, if this is true, then (/ A)x. = 0 for
every vector x in R" and so I  A is the zero matrix, i.e., A =I.
26. If A 1s a 3 x 3 matrix with a threedimensional eigenspace then A has only one eigenvalue, A= A
1
,
whsch IS of geometric rnultJphc1ty 3. In other words, the eigenspace corresponding to A
1
is all of R
3
.
It follows that Ax = >.
1
x fos a.! I x in R
3
, and so A = >.
1
I is a diagonal matrix.
27. If C is similar to A then there is an invertsble matrix P such that C = pl AP It follows that
sf A IS invertible, then C ts invertible since it is a product of invert1ble matnces. Similarly, smce
PCP
1
== A, the invertibihty of C implies the invertibility of A.
28. If P =!PsI P2l · I PnJ. then AP = (Aps I · · · I ApnJ and PD =[At PI I A2P2I · · · I AnPnJ where
\1 \2. . An at c the diagonal entries of D. Thus A.I: Pk for each k = 1, 2, , n, i.e., is an
of an<.l P1r is an eigenvector corresponding to
29. The :>tandard matrix of the linear operator T is A = = and the characteristic polynomial
1  1 2
of A ts p(>.)  >.J + 6A
2
+ 9>. =>.(A+ 3)
2
• Thus the eigenvalues ofT are >. = 0 and A= 3, with
mulllplicltles 1 and 2 respectively. Since>.= 0 has algebraic multiplicity 1, its geometric
snult tplictty 1S also 1. The eigenspace associated with A = 3 is found by solving ( 31  A)x = 0
whst h IS
The gen"' ..t svlu<ion of < hissys<em is x  s [ +t [:] ; <hus  3 has geome<nc mulbplic.ty 2. I<
follows that T IS diagonali1.able since the sum of the geometric multiplicities of its eigenvalues is 3.
30. Th<' standard rnatnx of the operator Tis A= and the characteristic polynomial of A
l 1 0
i; (>. + 2)(>. 1)
2
Thus the eigenvalues ofT are A= 2 and >. = 1, with algebraic 1
DISCUSSION ANIJ DISCOVERY
283
and 2 respedively. Smce >. = 2 has algebraic multiplicity 1, tts geometric multiplicity is also 1.
The eigenspace associated with>.= 1 is found by solving the system (J  A)x = 0 Since the matrix
l  A= [ has rnnk 1, the solution space of(! A)x = 0 is two dimrnsional, i.e.,>.= 1
1  1 l
is an eigenvalue of geometnc multiplicity 2. It follows that T is diagonaltzal>le the sum of lhe
geometric multiplicities of its eigenvalues is 3.
31. If x is a vector in Rn and >. is scalar, then [Tx)a = [T]a[x]a and [>vc]B = .>.[x]a. lL follows that
Tx =Ax if and only if [T}s[x]s = [Tx]B =[Ax:] a = >.[x)a; thus xis an eigenvector ofT corresponding
to >. if and only if [x]s is an eigenvector of [T]
8
corresponding to >..
32. The characteristic polynomial of A is p(>.) = 1>. _:ca >. :_b dl = >.
2
 (a+ d)>.+ (ad  be), and the dis
criminant of thi's quadratic polynomial is (a+ d)
2
 4(ad be) = (a d)
2
+ 4bc.
(a) lf (a  d)
2
+ 4bc > 0, thon p(>.) has two distinct reo.! roots; thus A is since it has
two distinct Pigenvalues.
(b) Tf ( o  d)
2
+ tbc ·· n, t h"n r(>.) has no roots; t hue: A has no rc:1l t'lgf"nvalues and is the
nol diagonahzablP.
DISCUSSION AND DISCOVERY
01.
are not s1mtlar smce rank(A) = 1 and rank(8) = 2
02. (a) 'ITue. We havP A = pI AP where P =I
(b) 1Tue. If A 1s s tnHiar to B and B is similar to C. then there arc mvert1ble matrices P
1
nnd P2 surh that A  P
1

1
BP1 and B = F
2

1
(' P2 . lt follows that A = P
1
1
( P
2
1
C P
2
)P
1
=
(P2P1)
1
C(e'JJ>I ), thus A is sinular to C
(c) True. IfA  P
1
BP,thenA
1
=(P
1
BP) 
1
= P 
1
B
1
(P
1
)
1
p
1
e 'P.
( cl) False. This statement does not guarantee that there are enough linearly independent eigen
vectors. For example, the matrix A= has only one (real) eigenvalue, >. = 1,
0 1 0
wh1ch has mull1pltcity 1, but A is not dJagonalizable
03. (a) False. For examplr, 1  is diagonalizahle
(b) False. For exc.Lmple, 1f p l AP is a diagonal matrix then so IS Q
1
AQ where Q = 2P. The
diagonalizing matrix (if it exists) is not uruque!
(;:) True. Vectors from different eigenspaces correspond to djfferent eigenvalues and are therdore
linearly independent. In the situation described { v
1
, v2, v
3
} is a linearly independent set.
(d ) True. If an invert1ble matrix A is similar to a diagtlnal matrix D, then D must also be m
vertible, thus D has nonzero diagonal entries and o
1
is the diagoncJ matnx whose diagonal
entries are the reciprocals of the corresponding entries of D. Finally, 1f PIS an mvert1ble ma
trix such that p I AP = D, we have p• A
1
P = (P
1
AP) 
1
= o
1
and so A
1
is sintilar
tool
284
(e)
\
04. (a)
(b)
(c)
(d)
05. (a)
(b)
(c)
Chapter 8
1'ru_y The vectors in a basis are linearly independent; thus A bas n linear independent
A is a 6 x 6 matrix.
The cigenspuce corresponding to >. = 1 has dimension 1 The eigt:nspace corresponding to
>. = 3 has dimension l or 2. The eigenspace corresponding to >.= 4 has dtmension 1, 2 or 3.
If A is diagonalizable, then the eigenspaces corresponding to >. = 1, >. = 3, and >. = 4 have
dimensions 1, 2, and 3 respectively.
These vectors must correspond to the eigenvalue >. = 4.
If >.
1
has geometric multiplicity 2 and >.2 has geometric multiplicity 3, then >.3 must have
multiplicity l. Thus the sum of the geometric multiplicities is 6 and so A is diagonalizable.
In this case the matrix is not diagonalizable since the sum of the geometric multiplicities of
t he e1genvalues is less than 6.
The matrix may or may not be diagonalizable. The geomet.ric multiplicity of >.3 .IIWSt be 1 or
2. If the geometric multiplicity of >.3 is 2, then the matrix is diagonalizable. If the geometric
mllll.iplicitv of >1 Is 1, then the matrix is not dia.gonaliza.ble.
WORKING WITH PROOFS
Pl. If A and Bare similar, then there is an invertible matnx P such that A = p  t BP. Thus PA = BP
and so, usira,v the result of t he ci ted Exercise, we have rank( A) = rank(PA) = rank(BP) = rank(B)
and nullity(A) = nullity(PA) nullity(BP) = nnllity( B).
P2. If A and 13 are sumlar lhen t here ts an invert ible matnx P such that A = p  • BP Thus, using
part (e) of Theorem 3 2 12, we have tr(A) = tr(P 
1
BP) == tr(P
1
( IJP)) o..: tr(( BP)P 
1
) = tr(B).
P3. If X f. 0 and Ax = >.x t.hen, since pis invertible and cpl = p  l A, we have
C' P
1
x = p  I Ax = p
1
(,\x) = >.P
1
x
wit h p • x :f 0 T hus p •x is an eigenvector of C corresponding to >..
P4. If A and B ure :m111lar, lhen there is an invertible matrix P such t ha t A = p  l BP. We will prove,
by inducti on, tha t Air P
1
B" P (thus Ak and BJr are similnr) for every positive integer k .
Step l The fact that , \
1
= .1 = p 
1
BP = pl B
1
P is gtven
Step 2. If Ak = p  l BI<P, where k is a fixed integer then we have
A.\:+
1
= AAic = (P
1
BP)(P
1
BkP) = p 
1
B(PP
1
}BI<P = p  tBI<+lp
These two steps compl(· t e the proof by induction
PS If A ts dtagonalt zablc, t hen there is an invertible matrix P and a diagonal matrix D such that
P  • AP = D. We will prove, by induction, that p  tA'" P = Dk for every posttive integer k . Since
D" is diagonal thts shows that A" IS diagonalizable.
Step 1 The fact that p
1
A
1
P = P
1
AP = D = D
1
is given.
Step 2. If p  t Ak P = D" , where k is a fixed integer 1, then we have
p  lAH
1
p = p 
1
AAkP = (P
1
AP)( P
1
A
4
P) = DD" = Dlc+l
These two steps l"omplcte thP proof by induction.
EXERCISE SET 8.3
285
P6. (a ) Let W be the eigenspace corresponding to >o. Choose a basis {u 1, uz, ... , uk} for W, then
extend it to obtain 11. basis B = { Ut, u2, ... , u,., Uk+l• .. , un} for R".
(b) If P = [u1 I u2l I I · · · I un] =[Btl B2j then t.hc product AP has the form
AP = [\out I >.o u2l · · · I I
On the other hand, if Cis ann x n matrix of the form C = t hen PC has the form
where Z = if Z = p l AB
2
, we have AP =PC.
(c) Since AP =PC, we have p  l AP =C. Thus A is similar to C = so A and C
have the·same characteristic polynomjaJ.
(d ) Due to the special block structure of C, its characteristic polynomial of has the form
Thus the algebraic multiplicity of >.
0
as an eigenvalue of C, and of A, is greater than or equal
to k
EXERCISE SET 8.3
1. The charactenslic polynom1'\l of A is p(>.) = ,\
2
 5,\ = >.(>. 5). Thus the eigenvalues of A are).= 0
and ,\ = 5, and each of the cigcnspaces has climension 1.
2. The characteristic polynomial of A 1s p(,\) == >.
3
 27 ...\54== (,\ 6){>. + 3)
2
Thus the eigenvalues of
A are,\ = 6 and .>, = 3. Thf> t>igenspace corresponrling to,\ = C hl\.o:; dimension 1, anrllhe e1genspace
corresponding to ,\ = 3 ha.s rlimPnsion 2.
3. The characteristic polynom1a.l of A is p(>.) = >.
3
 3).
2
= ..\
2
(.X 3). Thus t he eigenvalues of A are
A= 0 and ,\ 3. The eigensptLCt: corresponding to >. = 0 has dillleu.sion 2, and the eigenspace corre
sponding to >.  3 has dimcllSIOn 1
4 . The charactenstic polynomial of A is p(>.) = ,\,
3
 9...\
2
+ 15). 7 (,\ 7)(> 1)
2
Thus the eigen
values of A are >. = 7 and >.  1. The eigenspace cor responding to ,\ = 7 has dimension I, and t he
eigenspace corrPspondmg to >. = l has dimension 2.
5. The gen"al solution or the system (0/  A)x 0 is x r + s [ :], thus the •ectors v
1
[
and v3 [:] form a basts for the eigenspace corresponding to = 0. Similarl y the vector v, = m
forms a basis for t he eigenspace corresponding to>.= 3. Since v 3 is or thogonal to both v
1
and v
2
it
follows that the two eigenspnces are
286 Chapter 8
6. The general wlution or (7f >i )x 0 is T [i} thus the vcctoc v, [:] rom•s a basis foe the
eigenspa<e ro"esponding to >. 7. Similacly the vecto"' v2 [ ]•nd v3 [:] form a basis for
the etgenspace corres ponding to >. = 1. Since v
1
is orthogonal to both v, and v
3
ll fvllows that the
two etgenspaces are orthogonal
7. The characteristic polynomial of A is p(>.) = >.
2
 6>. + 8 = (>. 2)(>.  4); thus the eigenvalues of A
are>.= 2 and >. 4. The vector v
1
= forms a basis for the eigenspace corresponding to>.= 2,
and the vector v2 = forms a basis for the eigenspace corresponding to >. = 4. These vectors are
orthogonal to each other, and the orthogonal matrix P = J = [  has the property
that
PTAP=
72
[3 l]
J2 1 3 12
= [2 0] = D
\/2 0 "
8. The characteristic polynomial of A is (>. 2}(>.  7); thus the eigenvalues of A are >. = 2 and >. = 7.
Correspondmg Pigcnvectors are v
1
= and v
2
= respectively. These vectors are odhogonal to
each other, and the orthogonal matrix P = 1
11
::
11
= hns the property that
pT_.1P = [ ts] [ 6 2] = [2 = D
..!S 2 3 '75 75 0 I
9. The characteristic polynomial of A is p(>.) = .>.
3
+ 6>.
2
 32 = (>. 2j(>. + 4)
2
; thus the eigenvalues of
A 2 and .\ =  4. The g'ncral solu<ion of(21 A)x 0 is T m' and the gcnecal solution
of (If  A)x  0 is x s [l] + t [ Thus <he vedo< v 1 [:] forms a basis foe the eigenspace
corresponding to >.  2, and the vectors v, [ :] and v, r:J form • basis for the eigenspace
corresponding to>.= 4. Application of the GramSchmidt process to {vt} and to {v2, v3} yield
orthonormal bases { u 1} and { u
2
, u
3
} for the eigenspaces, and the orthogonal mat nx
[
?s 73]
P = lut u2 u3) =
...16 0 73
has the property
r
I
76
pT AP = 12
2] [7:
2 76
0 2
76
[2
= 0
73 0
=D
Note. The diagonali?.ing matrix I' is not unique. It depends on the choice of bases for lhe eigenspaces.
This is j ,tst onf' possibilit.y.
EXERCISE SET 8.3
287
10. The characteristac polynomial of A is p(>.) = >.
3
+ 28>.
2
 1175>. 3750 = (>. + 3)(>. 25)(>. +50);
thus the eigenvalues of A are >.
1
= 3, >.
2
= 25, an<.l >.3 = 50. Corresponding eigenvectors are
v, = [!], v, = [ i], and v, = m. These vceto<S "!' mutually orthogonal, and the orthogonal
matrix
li::u] = [ ~
4
:J
 ~
[ V1
v2
0
p = llv1l1
ll v2ll
3
5
has the property that
pTAP = [  ~
0][ 2 0 36] [0
4
~ ]
n
0
 5 ~ ]
g
0
~ 0
3 0 1 0 = 25 =D
:J
0 i 36
0 23 0
3
0
5 5
11. The charactenstic pnlyn(lmial of A is p(>.) = \
3
 2>.
2
= ).
2
(>, 2); t.hus the eigf'nvalues of A are>.= 0
and > = 2. The genoral •oluUon or (OJ  .4)x = 0 is X = T m + s [ :]' and the general soluUon or
( 2/  A )x = 0 is x I [ l] · Thus the vedors v, = m and v, = [: l form a basis £or the eigenspace
""respondin• to I 0. o>nd >he vector v, = [l] £mms a basos £or the eigenspace eonesponding to
>. = 2. These VPctors are mutually orthogonal, and t.he orthogonal matrix
[
Vt
F = llvdl
1
72
I
72
0
~ ]
72
0
hn., the propt>rtv 1 hnl
[
n
 ~
72
~ ~ ] [ ~ ~ ~ ] [ ~
~ 0 0 0 0 1
1
l [ l
V2 I) 0 0
72 = 0 0 0 =D
0 0 0 2
12. The <"haracterrsltr. polynomaal of A JS p(>.) = .>.
3
 6>.
2
+ 9>. = >.(>. 3)
2
; thus the eigenvalues of A
are > = 0 and > = 3 The ve<:tor v, = [:] forms a ha.,is £or the eigenspace corresponding to A= 0,
and the vectors v, = [i ]••d v
3
= [~ ] form a basis £or the eigenspace correspondi ng to A = 3.
Application of GramSchmidt to { vt) and to { v
2
, v3} yield orthonormal bases { u
1
} and { u
21
u
3
} for
the eigenspaces, and the orthogonal matrix
288
has the property that
1
7J
1
"J2
I
tal [ 2  1
0 1 2
2
1 1
76
1]
 1 73
2 ....!_
v3
1

'1.2
I
0
Chapter 8
l l [ l
76 0 0 0
= 0 3 0 =D
2
0 0 3
7G
13. The characteristic polynomial of A is p(.X) = >.
4
 6>.
3
+ 8.X
2
= >.
2
(>.  2)(>.  4); thus the eigenvalues
of A are A 0, A 2, and A = 4. The solution of (01 A)x 0 is x r + s [!], :he
gene,aJ solution of (21  A)x 0 is x t l J, and the genecal solution of (4/ A)x = 0 is ·x u
Thus the ve<to" v, and v
3
fo<m • bO!Sis fo< th<· ci•cnspace coHe;ponding to 0.
v 3 [ ll fo<ms a basi: fo, the eigcn:pace co"esponding to >. 2, and v, [ ll fo,ms a basis
:o:::di:;::o >. 2 rlr the
1 0 0 0
[
0 0 0 1] [3 1 0 0] [0
T 0 0101300 0
pAP= 1 I
72 72 0 0 0 0 0 0 0
o o lo o o o 1
0
0
1
0
.J..
v'l
l
72
0
()
[0 0 0 0]
 0 0 0 0
0  0 0 2 0
0 0 0 0 4
14. The characteristic polynomial of A is p(..\) = >.
4
 1250..\
2
+ 390625 = (>. 25)
2
(..\ + 25)
2
; thus the
eigenvalues of A ace 25, and A= 25. The vecto" v1 = v3 fo<m a basis fo, the
eigenspace COHespondi og to A 25, and the vee to" v 3 v, [ ll fo<m a basis fo< the
eigenspace corresponding to..\ = 25. These four vectors art> mutually orthogonal , and the orthogcnal
matrix P
1
: :
11
1 has the property that
PTAP=
s
0
4
5
0
3
s
0
0
;!
s
0
4
01 [7
24
0 0
0
s
24 0
7 0
0 7
0 24
3
s
4
5
0
0
0
0
3
g
4
s
4
$
3
5
0
0 il
25
0
=
0
0
J
EXERCISE SET 8.3
289
15. The cig<>nvalues of the matrix A = are >
11
= 2 and >..
2
= tl, with corresponding nOrJ:!lalized
eigeuvccturs u
1
= [_ and u
2
= Thus the s pectral Jccompo.'lition of A is
71 7i
l
:l l] = (2) [ [ I
1 3 72 72
16. The eigenvalues of A are >t = 2 and .X2 = 1, wit h corresponding normali zed eigenvect ors u
1
= [
and u 2 = [ Thus the spectral decomposition of A is
17.
18.
:::.:·:,:i• ii :.:·
is
l
3
:2] [js] I [72] [+a] 1
= 2 [;n 76 1.]• [ ,', ;!; o]' 73 ;!;]
=
2
[! ! ll+t t ! =j]
Note fhl' :;pPctral decomposilion Lc; noluni()ue. l t depends on the r hoice of bases for the eigenspaces.
Thic; 1s juc;l one possibility.
A = [
 2
0 36
1
n
OJ + 25 n
1
0
lJ _ 50 m ll
0 3 0 ....: 3 1 {0 1 0
3G 0 23 0
OJ [
" [.
lll
[0 0
0  25
0 25 25 0
25
= 3 0 l
o o] o o
0
0 0
0 _g
0 9 12 0
16
25 25 25 25
[
1] [32] . 19. T he mal nx A has eigenvalues >. = 1 and ). = 2, with corresponding eigenvectors
1
and
Thus the matr ix P = [ :  has the property that p t AP = lJ = [ It follows that
3] [1 OJ [ 2
2 0 1024  1
3] [ 3070 3069]
1  2046 2045
290
Chapter 8
20. The ma•rix A has eigenvalues>.= 2 and>. = 2, with corresponding eigenvectors Thus
the matr;., P = ;) has the property that pl AP = D = It, follows that
AlO = PDIOp 1 = [1 1] [1024 0 ] [7 4] = [1024 0 ]
2 7 0 1024 2  1 0 1024
21. The matrix A has elgenv&lues and >. 1. The ve<:to' [:] fonns a basis fo' !he eigenspace
conesponding  1, and the vectorn m and [!] form • basis fo, the eigenspa<e conesponding
to >. = 1. Thus the matrix P = [: bas the property that pI AP = D = and it
4 1 0 0 0 1
follows that
AlOOO = PDIOOO p  t = [:  =
4 1 0 0 0 1 3 2 3 0 0 1
22. The mat,ix A has e•genvalu., >. 0, >. 1, and >.  1 , with conesponding eigenvedorn [:]. [:] ,
and [!] Thu• P [: ; !] has the pcope,ty that p• AP D ! and it follow• that
A1ooo = potooop1 =
[
6:]
23. (a) The characteristic polynomial of A is p(>.) = >.
3
 6>.
2
+ 12>. 8 Computing successive powers
of A, we have A
2
..,. [ : :1 and A
3
= thus
12 24 16 36 72 44
24
 40
72
6 [ :
44 12 24
4] [3 2
8 + 12 2 2
16 3 6
8/
which shows that A satisfies its characteristic equation, i.e., that p(A} = 0.
{b ) Since A
3
=6A
2
 12A + 8/, we have A
4
= 6A
3
l2A
2
+ 8A = 24A
2
 64A ..._ 4 I
(c) Since A
3
 6A
2
1 12A  81 = 0, we have A(A
2
 6A + 12/) = 81 and A
1
=
6A + 12/).
EXERCISE SET 8.3
291
24. (a) The characteristic polynomial of A is p(>.) = >.
3
 >.
2
 >. + 1. Computmg successive powers of
A, we have A
2
= =I and A
3
= = A, thus
0 0 1 4 0 5
[
5 0 6] [1 0 0] [5 0 6]
.A
2
 A = 3 1 3  0 1 0  3 l 3 =
4 0 5 0 0 1 4 0 5
which shows that. A satisfies its characteristic t>quat1on, i.e., that p(A) = 0.
(b) Since A
3
=A, we have A
4
= AA
3
= AA = A
2
= I .
(c) Since A
2
= I, we have A
1
=A.
l
25 . From Exercise 7 we hnve pT AP = [t [ = = D. Thus A = PDPT and
_ PetD pT = [ *) [e
2
' 0] [ ?z] =
1
[ t e.
4
' l''lt +
 k L 0.:
11
h .J,.
2
e:!Tt'
11
":!'+elt'
"2 .,!} "2 "'2
27 Prom Exercise 9 we have
[ I
I
n
76
J;; 1
pT AP = =f
l
3
72
2 76
I
2
n 'l

rr..
v'3 vJ ../3 \ (>
Thus A= PDP'l' nnd
[ I
1
['''
0
76
/2
e'A
Pe'o J>r =
I
j
e .,,
76
..;2
73 0
36
0
0
()
.... a
[ e" + se"
e2' _ e4t
e2t _ e·H
e
21
+ 5e
41
()
2e2t 2e4t
2e2t 2e"'
28 . Prom Exercise 10 we havP
PTAP = _! 0 0
[
0 1 0] [ 2
i 0 i 36
Thus A= PDPT and
4
g
0
3
s
I
0
72
1
 I
=D
72 ../3
=
0
1
0  I
7J
][I
I
0 7ti
76
0 ?z
I
J2
e4t
I
73
2e
21
 2e"]
2e2' 2e4t
4e2t + 2e 4t
= D
0 50
292
Chapter 8
=
4
5
0
3
5
[ " e'" + 2e'"
25 25
0
_ 12 e25t + 12 esot
25 25
0
 • + " e'"] 25 25
e31
0
0
..!!. e25t + !! e 50t
25 25
[
sln(21f) 0 0 l [0 0 OJ
29. Note that o
0
sin( 
0
4rr) 0 = o o
0
o
0
• Thus, proceeding as in Exercise 27:
sin(  47<) 0
[ l
I
[0
0
0][ to
1
0 0]
76
72
76
sin('rrA) = Psin (7rD)Pr = ts
I
0
1
!,
0 0
72
73 0 0 :;,
72
=
0
1 0
0
0 +a
l
11 0 0
76 73
73
4
;] [oos(t)
0
0 ][ 0
il
5
30. cos("ITA) Pcos(1rD}PT =
0 cos(257r) 0  1
0
3
0 cos(
507r) i
0
5
4
;J n
0
H
1
0] [f.
0
lll
g
25
0 1
0
! = 0
1 0
3
0
0
4 24
0
7
5 5 25 B
31. rr A [! then A' and A
3
Thus A is nilpotent and
0] [0 0 0] [0 0 0] [1 0 0]
0 + I 1 0 0 0 0 0 = 1 1 0
! 210 !flO
2
32. Smc(' .4
3
0 WP have sin(1r A) = sin(O)/ +1rcos(O}A t,1r
2
sin(O)A
2
= 'Tf'A =
cos(1r.A) = J ;
1
A
2
[
T 0 l
[
0 0 0]
rr 0 0
2tr rr 0
and
33. If P is symmetric nnd orthogonal, then pT = P and pT P = I ; thus P
2
= pr P = I If >. is an
eigenvaluE' of P thPn lhere is a nonzero vector x such thal Px = >.x S10ce F
2
= 1 1t follows lhat
..\
2
x = = Jx = x, thus >.
2
= 1 and so >. = ±1
DISCUSSION AND DISCOVERY
Dl. (a) True The malrix AAr is symmetric and hence is orthogonally diagonalizable.
(b) False If A •s dtagonahzable but not symmetric (therefme not orthogonally daagonalizable),
then there as a basis for R" (but not an orthogonal basis) cons1slmg of eJgenvectors of A.
WORKJNG WITH PROOFS
(c) False. An orthogonal matrix need not be symmetric; for ex\\mple A = .
293
{d) '!Tue If A is an invertible orthogonally diagonalizablc mntnX
1
then there is an orthogonal
mat rix P such that pT AP = D where D is a diagonal matnx with nonzero entries {lhe
eigenvalues of A) on the main diagonal. It follows that pr A lp  (PT AP)
1
= D
1
and
t.hat D
1
is a dtagonal matnx with nonzero entries (the reciprocals of the on
the main diagonal Thus the matrix A l is orthogonally diagonalizable.
(e) 1Tue If A is orthogonally cliagonalizable
1
t hen A is symmetnc and thus has r eal eigenvalues.
D2. (a) A = P DPT = [ ; ; ]
0 0 0 7 0
t
72
0
1
7z
72] [3 0 0]
0 = 0 3 4
I 0 4 3
72
(b) No. 'l'he vectors v 2 and v3 correspond to different eigenvalues, but are not orthogonal.
T here fore t hey cannot be eigenvectors of a symmet ric mat rix.
03. Yes. Since A is cll!lgonn.lizable and the eigenspaces are mutually orthogonlll
1
there 1s an orthonormal
basts for R" cons1stmg ol e1genvectors of A. Thus A ts orthogonally d1agouahzablc and therefore
mu11t be symmetric
WORKING WITH PROOFS
Pl. \'\'e first show that if A and C are orthogonally smular, then there orthonormal ha.c:es with
rc .. J • t lv hkb tht:) rt'pn:scnl the same l1ut:ar 'per ll Jr Fur 1!11:. lJUrpose, let T the v1 eratvr
defined by T(x) = Ax fhen A = [T], i.e., A is the matnx of T relative to the standard basis
B = { e,, e2
1 1
en} Smce A and C are orthogonally similar, there 1s an orthogonal matrL< P
such that G=PTAP. Let B' ={v
1
, v
21
.. , vn} where v
1
, v
2
, .• , vn are the column vectors
of P. Then B' is an orthonormal basis for R" , and P = Pa'•B Thus jTJa = PjTJs· PT and
[T]s· = pT[T]sP = pT AP =C. This shows that there exist orthonormal basrs with respect to
which .4 n.nJ C repn:!:>Pnt t.he same linear operator.
ConwrsPly, ll uppost: lhat A = [1']a and C = [TJa• when• T: R"  > Rn IS a linear operator
and B, fJ' n.re h1\SCS for nu. If P = PD'+8 t.hen Pis an ort,hogonnl matrix and C = [T]a· =
pTITlnP pTA P. Thus A and C are orthogonally si1ni ln.r.
P2. Suppose A  c1 Ut U{ I c2u 2uf + · · · + Cn ll nu;,· where {u
1
, u2, ... , un} is an orthonormal basis for
Rn. Sincf> (u
1
uJ)T = u J'
7
' u J = u
1
uf it follov.;s that A'T =A; thus A is symmetnc. Furthermore,
since u{' u
1
= u, · u
1
 o,,
1
\"e have
T\
Au
1
={c1u1uf +c2u2uf ···
2: c, u,u?' u
1
=c
1
u
1
I
for each j = 1
1
2, , n Thus c
1
, c
2
• • .
1
Cn are e1genvalues of A
P3. T he s pect r al decomposition A= >q u
1
uf + A2 u 2ui + ··· r ts equivalent. to A= PDPT
where P = !u tI u 2l · I unJ and D = A21 .. , >n); thus
/(A)= P /(D)PT = Pdiag(f( At). /(>•2), ... , /(An))PT
= /(AI) ulUJf + j(>.2) u2uf + · · · +
294 Chapter 8
P4. (a) Suppose A is a symmetric matnx, and >.o is an eigenvalue of A having geometric multi
plicity J.. Let l\' be the eigenspace corresponding to >.
0
. Choose an orthonormal basis
{ u
1
, u
2
,. , uk} for W, extend it to an orthonormal basis B = { u1, u2, ... , Uk+J, .. . , un}
for R", and let P be the orthogonal matrix having the vectors :>f B as its columns. Then, as
..;hown in Exercise P6(b) of Section 8.2, the product AP can be written asAP=
Since P IS orthogonal, we have
and since pT AP is a symmetric matrix, it follows that X = 0.
(b) Since A is similar to C = has the same characteristic polynomial as C, namely
(>. >.o)" det(>.Ink Y) = (>. >.o)kpy(>.) where py(>.) is the characteristic pofynomial of
Y . We will now prove that py (>.o) :f. 0 and thus that the a lgebraic multiplicity of >.
0
is
exactly k. The proof is by contradiction:
Suppose py (Ao)  0, i.e., that >.o is an eigenvo.lue of the matrix Y. Then there is a
nonzero vector y in R"k such that Yy = ..\oY· Let x = [;] be Lhe vector in R" whose first
k components are 0 and whose last n k components are those of y Then
and sox IS an eigenvector of C corresponding to >.o Since AP = PC, it follows that Px is
an eigenvector of A corresponding to >.o But. note that e
1
, •. , are also eigenvectors of C
to >.o, and that { e
1
• . . , ek x} IS a linearly independent set It follows that
{Pet, .. , Px} 1s a linearly mdependent. set of eigenvectors of A corresponding to >.
0
.
But this implies that the geometric muJtiplkity of >.
0
IS greater than k, a contradiction!
(c) It follows from part (b) that the sum of the d1mensions of the eigenspaces of A is equal ton;
thus A is dJagonalizable F\uthermore, smce A IS symmetric, the e1genspaces corresponrling
to different eigenvaluPs are orthogonal. Thus we can form an orthonormal bas1s for R" by
choosing an orthonormal basis for each of tht> e1genspaces and JOining them together. Since
the surn of the dimensions is n, this will be an orthonormal bns1s consisting of ejgenvectors
uf A. Thus A IS orthogonally diagonalizable.
EXERCISE SET 8.4
1. (a) 3x? + !xJ x2]
] [ 4 3] [X']
X
2
3 9 X2
EXERCISE SET 8.4
295
5. The quadratic form Q = 2xr 1  2x
1
x
2
can be expressed in matrix notation as
The matrix . \ has ds!•n·:alucs ,\
1
= 1 and ,\
2
= 3, wilh corresponding dgt:m'l(tor:> v
1
 GJ am.l
v2 = [ :] respective)) Thus matrix P = orthogonally diagona.IJZcs A, and the change
of \'&riable [:
2
1
) = x  Py [tz  [Y
1
] eliminates the cross product terms in Q·
7z 72 Y2
Note that the in verst: relationship between x aad y is = y = pT x = [ J'i [r
1
]
,. 7i J:'l •
 I
G. rho •• v"' """"'' ,.;, !om• ' "" ' " in Lt ,,, notation M Q 
1
A X w hccc A [: ] ·
The matrix A hns rigcnvahJ<'S ,\
1
= 1, )..
2
= 4, )..
3
= 6, with corresponding (orLhogonnl) eigenvectors
,.,
v' H l ' v, = m . v, [I] Thos the ma:ix p = [  : l octhogonally di .. on al izes A' and
tht' (:hang!' of vn.liablc x = Py eliminates the cross product terms in Q·
Y2
[
l 0
0 ..t
0 0
The given quadiM>e (o, m can be exp<essed in matdx notation as Q = x' Ax whm A = _: :]·
The matrix A has eigenvalues )..
1
= l, )..
2
= 4, )..3 = 7, with cor responding (orthogonal) eigenvectors
v,  nl' v, = [:]' v, = Ul Thus the matdx p = n ! ll d•agonaH..s A,
296 Chapter 8
and the change of variable x = Py eliminates the cross product terms in Q:
. ll 0 OJ lYlj
Q = xTAx = yr(pT AP)y = Y'l Ya} 0 4 0 Y'2 = + 4yi +
0 0 7 Y3
Note that the di agonalizing matrix P is symmetric and so the inverse relationship between x and y
is
= y = pT x = Px = [!
Y3 .
3
:2
3
1
3
2
3
8. The given quadratic form can be expressed as Q = xT Ax where A= [ The matrix
2  4 5 .
A has eigenvalues I and 10. The vec:o" v,  m and v, [ fo<m a basis fo< the
eigenspace w 'espond ing to 1, and v
3
[ _:] fo,ms a basis fo' the eigenspace co,esponding to
v:· ,;,> p::du;: :::::::::::::::::::
A and the change of variable x = Py eliminates the cross product terms in Q:
Q = xT Ax = yT (PT AP)y = !Y1 Y'2 YJ) = + Yi + 10y5
0 0 10 Y3
(b) !x
yj [:J + 17 8) [:J 5 = 0
10. (a) lx
[ 1 l][x]
y] y t 15
8} [:) 3 = 0
(b)
!x
Yl [;
11. (a) Ellipse
12. (a) Ellipse
(b) Hyperbola.
(b) Hyperbola
(c) Parabola
(c) Parabola
(d) Circle
(d) Circle
13. The CGUation can be written in matrix form as xT Ax= 8 where A = [
2

2
]. The ei"envalues of A
  2  1  q:;
are >. 1 = 3 and >.
2
=  2, ·.vith corresponding eigenvectors v
1
= v
2
= respectively. Thus
Lhe matrix P = [ ts] orthogonally diagonAIizes A. Note det(P) = 1, so P is a rotation
Vi 7s
EXERCISE SET 8.4
297
matrix. The equation of lhe conic m the rotated x'y'coordinate system is
which can be written ns 2y'
2
:.,;g'
2
= 8.:. thus. is a.· hyperbola. The a11gle through which the
axes have bePn rotated 1s 9 = tan
1
( 26 6°.
14. The equation can be written m matrix form as xT Ax= 9 where A= : ] . The eigenvalues of A are
At = 3 and .X
2
= 7, with corresponding eigenvectors v1 = and v2 = GJ respectively. Thus the
matrix P = [ orthogonally diagonalizes A. Note that det(P) = 1, soP is a rotation matrix.
The equation of the conic in the rotated x'y'coordinate system is
[x' y'] [::] = 9
whkh ran It<: f· iy'
2
= !1· thns th,. • .1n ThP angle of rotation corresponds
to and !lin 0 = lhus 8 = 45°.
15. The equation can be writ len in matrix form as xT Ax= 15 where A = The etgenvalues of
A are At = 20 and A2 = 5, with corresponding e1genvectors v
1
= and v
2
= [ respectively.
Thus the matrix P n orthogonally diagooallzE>s A Note that det(P)= I , soP is a rotation
matrix The equation of the come in the rotated .x'y'coordinnte system is
wh1ch we can wntf' as ·1r'
2
 y'
2
== 3; thus the conir is a hyperbola The angle through which the
axes havP been rotntcd IS (I = U\0 t ( ::::; 36. !)
0
16. The equation co.n be wrinen in rnalrix form as xr Ax= where A = :] . The eigenvalues of A
nrc .A1 = 4 and >.2 ;;: wit.h cur responding eigenvectors v
1
= [ _ and v2  Thus 1 he matrix
P dmgonalizes .4. Note that dct(P) = l, soP is a rotat1on matnx. The
equat1on of lhc conic in the rotntrrl x'y'coordinate system is
0] [x'] =
;! y' 2
2 •
which we can write as :r'
2
1 3y'
2
= 1; thus thP conic is an ellipse. The angle of rotation corresponds
lo cosO= and sm (} = 7i, thus(}= 45°_
17. (a) The e1genvalucs of A= and A= 1 and .X= 2; thus A is positive defimtP
( b) negative drfinil(' (c) indefinite
(d) positive semidPfimtc (e) negative semidefinite
298
18. (a)
(b)
(d)
The eigenval ues of A =
negative definite '
negative semidefinite
0
] and A = 2 and A = 5; thus A is indefinite.
5
(c) posi tive definjte
(e) positive semidefinite
Chapter 8
19. We have Q = xi+ > 0 for (x 1 . x2) 'f: (0, 0) ; thus Q is positive definite.
20. negative definite
21. We have Q = (x1  x2)
2
> 0 for x1 I x2 and Q = 0 for x1 = x2; thus Q is positive semidefinite.
22. negative semidefinite
23. We have Q = x{  > 0 for x1 of 0, x
2
= 0 and Q < 0 for x
1
= 0, xz ..f 0; thus Q is indefinite.
24. indefinite
25. (a) The eigenvalues of the matrix A = are A= 3 and A = 7; thus A is posi tive definite.
Since lSI = 5 and I
5
= 21 are positive, we reach the same conclusion using Theorem 8.4.5.
2 5
(b) The eigenvalues of A= are >. = 1, >. = 3, and ,\ = 5; thus A is positive definite.
0 0 5 12 } 0
The determinants of the principal submatrices are 121 = = 3, and  l 2 o = 15;
0 0 5
26. ( a)
(b)
thus we reach the same conclusion using Theorem 8.4.5.
I
The eigenvalues of the matrix A = are .A = 1 and >. = 3; thus A is positive definite. Since
121 = '2 and 1
2 1
] = 3 an• positive, we reach t.hc sttme conclusion using Theorem 8. 4.5
,1 2
The eigenvalues of A= [  are >. = 1, .\ = 3, and ,\ = 4; thus A is positive definite.
0 1 3 3 1 0
The determinants of the principal submatrices are 131 = 3, = 5, and  1 2  1 = 12;
0 1 3
thus we reach the conclusion using Theorem 8.4.5.
27. (a) The matrix A h.as eigenvalues >
1
= 3 and >.2 = 7, with corresponding eigenvectors v
1
= g) and
v 2 = ( Thus the matrix P = ( t] orthogonally diagonalizes A, and the matrix
B =
72
7;] = [Y.! + 4
 !':< .il  :11.
v2 2 2
has the property that 8
2
= A.
EXERCISE SET 8.4
299
(b) The matrix A has eigenvnlues >.
1
I, >.
2
3, >., 5, with corresponding ttigcnvecto<S v t m ,
' 2 n l , nnd v, m Thus P [  }nhogonWiy diagonWizea A, and
[
I
72
D=
has I he property that 8
2
= A.
0
v'3
[
0 .J5 0
+
= ! _il
'2 '2
0 0
28 . (a) The n1atrix A has eigenvalues >.
1
= 1 a.nd >.
2
= 3, with corresponding eigenvectors v
1
=
nnd v2 = [:] . Thus lhc matrix P = [% orthogonally dia.gonalizt>.s A, and the matrix
2 9.
B=
72
has the property Lhnl 8
2
= A.
(b ) The "'""" A has •i•cn<•Jtoes \ 1 I, >., 3, >., ' , wu h co,.spondin< eigen"""'"" v 1 [:] ,
[
1] [ '] [:.e j; 1
v 2  , and v 3  : Thus P = ;  orthogonally dia.gonalizes A, and
[7.
I
ton
[j
2
+ fl
I
' V>l
 72 0
7s 6 2
 j
6 T
B  t. 0
v'3
0
I I
\IG
 "J:i 0
..;2
72
= 3
3
I t
I 0
0
1 1 I
§ fl
I
7a
..;'2
73 7J
7:1
7:i
6 2
3
has t.h P property that 8
2
= A.
The q11ad rnt tc form Q = + + + 4x
1
x2  2x
1
x3  2x2x
3
can be in matrix nota
tion as Q  x T Ax where A = [ = :] . The determinants of the principal submatrices of A are
 1  1 k
s 2  1
1
5 21
!SJ = 5.
2 1
= 1, and :! 1  1 = k 2. Thus Q IS positi\·c d••fiuit<' tf and only if k ;> 2.
1 1 Jc
0
3 0 . T he quadrat 1r form can be expressed in matrix notation as Q = xr Ax where A =
u
1 Jc . tl
k 2
3 0 1
Jc = 5 3k
2
2
The del<'rrnlnanls of the pnncipal submatrices of A are J31 = 3, = 3, and
IS  1
Thus Q 1s positive definite i! :1::u only if lkl < V
0
Jc
300 Chapter 8
31. (a) The matrix A has e:genvalues
= 3 and >.
2
= 15, with corresponding eigenvectors vJ =
and vz = [:]. Thus A is positive definite, the matrix P = [ orthogonally diagonalues
.4, 1\n:i the matrix
8
has the property that 8
2
= A.
(b) The LOUdecomposition (pl59160) of the matrix A is
A = !] = [i [; = LDU
and, since L = ur, this can be written as
which is a factorization of the required type.
32. (a) T(x + y) (x + y)T A{x + y) = (xT + yT)A(x + y) = xT Ax+ xT Ay + yT Ax+ yT Ay
= xT Ax+ 2xT Ay + yT Ay = T(x) + 2xT Ay + T(y)
(b) T(cx.) = (cx)T A(cx) = c
2
(xT Ax)= 2T(x)
n n
33. We have (ctXl + czxz + · + CnXn)
2
= L; c'fx? + 2: 2: 2c.c
1
x,x
1
= xr Ax where
t=l t=l J=t+l
[q
c1c2 C1Cn
c,:c2
c
2
c.,
A=
c,cn
C2Cn
c2
n
34. (a) For each t 1, .. . ,n we have (x; i)
2
= 2x,x + x
2
= x? f: x
1
+ (t x
1
)
2
=
J=l
xf  £: x,x
1
+ ( f: + 2 f: f x
1
xk). Thus in the 4uadratic form
J=l r=1 J=l k=
1
+1
s; = 
1
[(x
1
 x)
2
+ (x2 x)
2
+ · · + (xn  x)
2
]
n1
the coefficient of x? is
[1  * + = and the coefficient of x,x
1
for t :f. j is
+ = n(n
2
_
1
>. lt foUows that s; = xT Ax where
l
1 I
n n(n1)  n{n1)
__ 1_
.!.
I
n(n1) n  n(n1)
A=
1 1
l
n{n1)  n(n1) n
DISCUSSION AND DISCOVERY 301
(b) We have si = ":
1
1(x,  i)
2
+ (xz  i)
2
t · • · + (xn :t)
2
J;::: 0, and 0 if a nd only if x
1
=
i, Xz = i, ... ,:en= :i:, i.e., if and only if x
1
= X2 = · ·::::: Xn. Thus s; is posit1ve semJdefinite.
35. (a) The quadratic form Q = jx
2
+
+
+ + + can be expressed in matrix
tation as Q x' Ax whe<e .4 [! !l The mal<ix A has eigenvalues >. l and >. j.
The vectorn v, [!] and v, form a bas;. for the eigenspace corresponding j,
and v
3
[:] forms a hasis for the eigenspace corresponding to >. ! Application of the
GramSchmidt process to { v
1
, v
2
, v
3
} produces orthonormal eigenvec tors {p 1, P2, PJ}, and t he
matrix
P•J = [i
'
76
P = Pt
I
P2 "JS
1
76
73
orthogonally diagonahz.es A. Thus L ange of variable x = Px' converts Q into a quadratic
form m the variables x' (x',y',z') w1thout cross product terms.
From this wP cnnrludc I hat thr equation Q 1 to an ellipsoid with a.xis lengths
2j"i = J6 in thP x' and y' directions, and 2.jf = the z' direction
{b) The matnx A must bP. positive definite.
DISCUSSION AND DI SCOVERY
Dl . (a)
(b)
(c)
(d)
(e)
(f )
0 2. (a)
(b)
(c)
(d )
(e)
False. For cxmnple the matrix A= hn.s r.igcnvalues l and 3; tlmii A 1s mdefi nite.
False. The term 4x
1
.z:'2x3 is not quadratic tn the variables x
1
, x2, XJ.
True. When cxpuudcd, each of t.he terms of lhe res uiLing expression is (of degree
2) in t he variables
True. The eigenvalues of a pos1tJVe definite matnx A are strictly positive, m particular, 0 Is
not an eigenvalue of A and so A is invert.1ble
False For examplr th!C' matnx A= is pos1t1ve sem1definite
True. rr the eigenvalues of. \ are posttive, tlu>n the eigenvalues of  A arc negat1ve
True. When wr itten in matrix form, we have x · x = xT Ax where A = I .
True. Tf A has pos1t1ve eigenvalues, then so does A
1
.
True. See Theorem 8.11 3(a)
True. Both of thE' principal S'lbmatrices of A wi ll have a positive determinant
False. If A = ( :] , then xT = x' + y'< On the other hand, the statement is true if A
is assumed to be symmetric.
302 Chapter 8
(f) False If c > 0 the graph is an ellipse. If c < 0 the graph is empty.
D3. The eigenvalues of A must be pos1lhe and equal to each other; in other words, A must have a
positive eigenvalue of muiLlplicily 2.
WORKING WITH PROOFS
Pl. Rotating the coordinate axes through an angle 8 corresponds to the change of va.ria.ble x = Px!
where P = [:: :::], i.e., x = x' cosO y' sinO andy= x' sin 8 + y' cosO. Substituting these
expressions into the quadratic form ax
2
+ 2bxy + cy
2
leads to Ax'
2
+ Bx'y' + Cy'
2
, where the
coefficient of the cross product term is
B = 2acos0sin 8 + 2b(cos
2
0 sin
2
0) + 2ccos OsinO = (a+ c) sin 20 + 2bc'os20
Thus the resulting quadratic form in the variables x' and y' has no cross product ter):nP.if and only
if (a+ c) si n 28 + 2b cos 20 = 0, or (equivalently) cot 28 =
P2. From the Print1paJ Ax1s Theorem (8.4. 1), there is an orthogonal change of varial>le x = Py for
wh1ch xr Ax= yl'Dy  >.t!Jf 1 where )q and >.2 are the eigenvalues of A. Since ,\
1
and >.2
are nonnegat1ve, it follows that xT Ax 0 for every vector x in Rn.
EXERCISE SET 8.5
l. {a) The first partial derivatives ot fare fz(x , y) = 4y 4x
3
and f
11
(x, y) = 4x  4y
3
To find the
cnt1cal pomts we sl'l l:r and fv equal to zero. This yields the equations y = x3 and x = y3.
1\om Lh1s we conduJe Lhat y = y
9
and so y = 0 or y = ±1 Since x = y
3
the corresponding
values of x are x = 0 and x = ±1 respectively Thus there are three critical points: (0, 0), (1, 1),
and ( 1,1).
(b) The Jl c:.sian matrix IS H(x.y) = [ln((x.y)) /xy((z,y)] = rl
2
:r
2
4
2] Evaluating this matrix
r y fvv z.y) 4  12y
2. (a)
{b)
at thP. cnticf\1 points off yaelds
/l(O,O)
[
0 •l]
4 c I [
12 4]
H(l,l) = 4 12 '
[
 12
H(  1,1) =
4
The eigenvalues of are A= ±4; thus the matrix H(O, 0) is indefinite and so 1 has a
saddle poinl at (0, 0) . The eigenvalues of
are>.= 8 and >. =  16; thus Lhe matrix
H(l,l) = H(  1,  1) is negative definHe and so f has a relative maximum at {1,1) and at
(1,1)
The first partial derivatives off are f%(x, y) = 3x
2
 6y and fv(x, y) = 6x 3y
2
. To find the
points we set 1% and f
11
equal to zero. This yields the equations y = 4x
2
and x = 
•
From Lhis we conclude that y = h
4
and so y = 0 or y = 2. The corresponding values of x are
x = 0 and 4  2 respectavdy. Thus there are two critical points: (0, 0) and ( 2, 2).
The Hessian matrix is H(x,y) = [/u((x,y))
1
1
zv((:z:,y))] = (
6
x 
6
]. The eigenvalues of H(O,O} =
fvz :z;, Y 1111 %, Y 6 611
[ are >. = ±6; this matrix is indefinite and so f has a saddle point at (0, 0) . The
eigenvalues of H( 2, 2) = r
12

6
] are >. = 6 and >. = 18, this matnx is negative definite
6 12
and so f has a relnLive maximum at ( 2, 2)
EXERCISE SET 8.5 305
13. The constraint equation 4x
2
+ 8y
2
= 16 can be rewritten as ( )
2
+ ( )
2
= l. Thus, with the change
of variable (x, y) = (2x', v'2y'), the problem is to find the exvreme values of z = xy = 2v'2x'y' subject
lo xn + y'
2
= 1 Note that. z = xy = 2v'2x'y' can be expressed as z = xrT Ax' where A= [
The of A are >.
1
'= v'2 and>.,= J2, with corresponding (normalized) eigenvectors v
1
=
l and v2 = [ Thus the constrained maximum is z = J2 occurring at (x', y') =
(x,y) = (J2, 1). Similarly, the constrained minimum is z = v'2 occurdng at (x',y') = (72,
or (x,y)= (J2,1)
14. The constraint :r
2
+ 3y
2
= 16 can be rewritten as ( )
2
+ ( 4y)
2
= 1. Thus, setting (x, y) = (4:r',
lhe problem is Lu find the extreme values of z = x
2
+ xy 1 2y
2
= 16x'
2
t + ¥v'
2
subject to
x'
2
+ y'
2
= 1. Note that z = l6x'
2
+ t ¥y'
2
can be expressed as z = xfT Ax' where A =
[
The eigenvalue:. of A are >.1 = 8 with correspondmg eigenvectors v, = [
an:! v·/= [ Normalized are u 1 = u::
11
= [ _ and ll ? = = [ Thus the
constrained m;ucimum is z =
5
3
6
occurring at (x',y') = ±{4, !>or (x, y) = .J:(2J3, 1s>· Similarly,
the constrained mmimum is z =a occurring at (x', y') = 4) or (x, y) = ±{2. 2)
15. The level curve correspondmg to the constramed maximum 1s
the hyp.:!rbola 5x
2
 y
2
= 5; it touches the unit ci rcle at (x, y) =
( ± 1, 0). The curve corre• ponuing to the constrained mini
mum IS the hyperbola 5z•  = 1, it touches the unit Circle al
(:r,y) = (0,±1)
16. !'he lt·v .. l cun·p l'nrrPspondmg to th"' constramed maxirnum 1s the
hype1bola xy 1t touches lhr> un1t circle at (x, y) == 72).
The level curvn corresponding to the constrained minimum is
the hyperbola xy  !; it touches the unit circle at. {x, y) =
±( J2)
'Y "_.!.
2
4
I 2
.ry: 2
4
xy=.!.
1
x.y = _l
1
17. The area of the inscribed rectangle is z = 4xy. where (x, y) is the corner pomt that lies in the first
quadrant. Our problem is to find the maximum value of z = 4xy subject to the constrakts x;::: 0,
y ;::: 0, x
2
+ = 25 The constraint equat1on can be rewritten as x'
2
+ y'
2
= 1 where x = Sx'
and y = y'. In terms of the vanables x' and y', our problem IS to find the maximum value of z =
20x'y' subject to x'
2
+ y'
2
= l , x';::: 0, y';::: 0. Note that z = x'T Ax' where
1
A =
The largest
eigenvalue of A IS >. = 10 with c:orresponding (normalized) eigenvector [ Thus the maximum
area IS z = 10. and th1s occurs whl'n (z',y') = (jz, (x,y) =
306 Chapter 8
18. Our problem is to find the extreme values of z = 4x
2
 4xy + y
2
subject to x
2
+ y
2
= 25. Setting
x = 5x' and y = 5y' this is equivalent to finding the extreme values of z = 100x'
2
 lOOx'y' + 25y'
2
subject to x'
2
t y'
2
= 1 Note that z = xff Ax' where .\ = [ The eigenvalues of A are .\
1
=
125 t'.nd >.2 = 0, wit h corresponding (normalized) eigenvectors v1 = [ and v
2
= [ Thus the
maximum temperature encount ered by the ant is z = 125, and th1s occurs at (x', y') = ±(;h. 75) or
(x, y) = ±( 2.15, VS). The minimum temperature encountered is z = 0, and t his occurs at (x', y') =
or {x,y) = ±(VS,2VS).
DISCUSSION AND DISCOVERY
Dl. (a) We have fz(x, y) = 4x
3
and /,Ax, y ) = 4y
3
; thus f has a critical point at (0, 0). Simifarly,
9:z(x, y) = 4x
3
and g
11
(x, y) =  4y
3
, and so g has a critical point a.t {0, 0). The Hessian
matrices for f and g are HJ(x,y) = [
12
;
2
12
°Y
2
] and H
9
(x,y) = [
12
;
2
respectively.
Since H,(O,O) H
9
(0.0) = second derh·ntl\·e test IS mconclusive in both cases.
(b) It is clear that f has a relative minimum at {0, 0) since /{0, 0) = 0 and f(x, y) = x
4
+ y
4
is
strictly positive at all other points (x,y). ln contrast, we have g(O,O) = 0, g(x,O) = x
4
> 0
for x ::/; 0, and g(O, y) = y
4
< 0 for y f. 0. Thus g has a saddle point at (0, 0).
D2. The eigenvalues of H are A= 6 and.,\= 2. Thus His indefinite and so the crit1cal points
off (if any} nrc s'\dcile pmnts. Starting from fr.Ax y\ (.r y} = 2 and /y:r(:r y) = /ry(xy) = 4
it follows, using partial mlegrat.ion, that. the quadratic form f 1s f(x, y) = :r
2
+ 4xy + y
2
. This
function has one cril!cal point (a saddle) which is located at the origin
D3. lfx is a uni t f>lgenvector corresponding to..\. then q(x) = x1'Ax = xT(.\x) = >.(x Tx ) = >.(1) = >..
WORKING WITH PROOFS
Pl. First. note that, as in 03, we have ur,Au m = m and ur,AuM = M On the otl.er hand, since um
and liM are orthogonal, we have = = M(u;: u M) = M(O) = 0 and ur,Aum =
0. It follows that if Xr j .f
1
_":r., UM, then
T (M e) r (em )
1
(Me ) (em)
xcAXc= Mm umAum+O+O+ J\fm u MAUt.t= M m m + M m M=c
EXERCISE SET 8.6
1. The ch.,.a<tedstic polynomial of A
7
m (t 2 o] is >.
2
(>.  5); thus the eigenvalues
of AT A are At = 5 and A2 = 0, and cr
1
= v'5 is a singular value of A.
2. The eigenvalues of AT A = =
are A1 = liJ and A
2
= 9; thus u
1
= v'i6 = 4 and
u2 = J9 = 3 are smgular values of A.
Exercise Set 8.6 307
3. The eigenvalues of A
7
'A [_ = >.
1
= 5 and >.
2
= 5 (i.e., >. = 5 is an eigeJJ.value
of mulliplicaty 2); thus the 'singular values of A are crt = J5 and cr2 .:: JS.
4 . The eigemalucs of ATA = ('7, = [Jz
v'2]
2
are >. 1 = 4 and >.
2
 1; thus the singular
values of A are cr
1
= J4  2 and cr
2
= v'l = 1. 
5. The only eigenvalue of AT A= = is >. = 2 (multiplicity 2), and the vectors
VJ = and v2 = form an orthonormal basis for the eigenspace (which is all of R
2
). The
singular values of A are crt = ../2 ar:d u2 = .J2. We have Ut = ;, Av1 = (; =
u2 = "\ Av2 ""  = [ This res ults in t.he following singular value decomposition
of A:
A •
[
1 1] = [v2 [l OJ _ UEVT
1 1 72 0 v2 0 I
6. The of AT A [J
0
] [
3 0
] = [
0
9 0
] are )q = 16 and >.
2
= 9, with corresponding
0 ·1 0 4 16
unit cagenvectors v
1
= and v
2
= resperLively The smgular values of A are u
1
= 4 and
a2 = 3 \Vp IHl,·e Ut : <r\ Avt = t12 = .,
1
,Av 2 = =
This results in lhe following singular 'a.l\!e dPCOmposition:
A c =
= UEVT
7. The of A r A = ['
1 0
] (
4 6
] = [
16
arf' .\
1
= 6t und = 1, with correspomling umt
0 I 0 ct 2.J L1 •
eigen\'t.!Liors " 1 = [ l] and v::: = [ The :;mgular values of A arc cr
1
= 8 and
o2 = 2. We hnvc
..L;\v
1
=
= [jg] and u
2
 ..L.4v
2
= ![
4
6
) [•};] =
c l 8 0 tl +. !.,. • (1, 2 0 4 _JA' ..l.
v$ v5 vr> .;:;
This results in the following singular value decomposition:
8. The ('tgcnvn.lucs of AT A [
3 3
] [
3 3
] = (
18
are >.
1
= 36 and >.2 = 0, with corresponding unit
3 3 3 3 18 I,.,
eigenwcLors Vt  anrl "2 = rPspccth·ely. The only smgular value of A is Ut = 6, and
we have u 1 = ; Av1 = [
3 3
] = [+ 1. Thf' vector u2 must be chosen so thaL { u1 , u2}
I 3 J 7i 72J
is an or thonormal bas1s for R
2
, e.g., u
2
= ( This results in the following singular value
decomposition.
A = [3 3] = [6 OJ [ =UEVT
3 3 72 72 0 0  72 72
308 Chapter 8
9. eigenvalues of AT A = = [_: :] are >.
1
= 18 and .A2 = 0, with corre
2 2
spondtl'g uniL etgenvectors v1 = and v2 = respectively The only singular value of
A u
1
= ,/18 = 3,/2, and we have u
1
= ;, Av, =
[ =! _:] [ = Ul Wemus',choose
the u2 and u3 so that { u1, u2, u3} is an o•thono•mal has;s fo• R
3
, e.g., u, = [i]•nd
u, = [ :] . This results io the foUowU.g singuiOI" value deoomposWon'
[
2 2] [
A =  1 1 =
2 2 _.£
3
t 1]
2 ! 0 0
3 3
Note. The singular value decomposition is not unique. It depends on the choice of the (extended)
orthonormal basis for R
3
. This is just one possibility.
10. Theeigenvalu"',"f ATA = [ =: J : J U _; =:]a<e .>1 = 18 and .>1 = .>3 = 0 The
ve<to.v, = [ il is a unit eigenvedo<eonesponding to .1, = 18. The vecto<S [:]and m fom
a basts for the eigenspace corresponding to ,\ = 0, and application of the GramSchmidt process
[
jsj"
to these vector:. yields orthononnc.l eigen,·eci:.ors v2 =  and v
3
= . The only smgular
[
2 1 2] [
value of A is o 1 = v'18 = 3v'2, and we have u, = a\ Av1 =
2 1
_
2
_ t = · We
must choose the vector u
2
so that { u
1
, u
2
} is an orthonormal basis ror R
2
, e.g., u
2
= [ 1 This
in tt[.llVfJ = UEVT
M 3v:>
ll. The eigenvalues of AT A =
 [ = are At = 3 and .A2 = 2, with corresponding
1 I
unit eigenvectors v
1
= and v
2
== •tlspectively. The singular values or A are u
1
= J3 and
Exercise Set 8.6 309
singular value decomposition:
,..
I
[ l [ '
0
']
[J3 0 l
I 0 .fi
76
A= 1 l =
I I
=UEVT
72
"76
1 1 73
1
+s 72
? I
·,
12. The eigenvalues or AT A = :1 [: i] = r;: ::1 are = 64 and = 4, with conespondlng
] 3.
unit eigenvectors v 1 = [ and v
2
= [_ resp,.ctively. The sinf?:nlar valtlf'.s <>fA
and o2 = 2. We have UJ k and = [_ = choose
' 4 0 vf> ...!.. J 4 0 :iS
 oJS
u, = [! l so that { u, , u,, u,} Is an octhonocma\basis roc R
3
. This cesults m the rollowmg singular
value decornposttton
·I 0
v5
I
::s
0
0][80]['2 1]
1 ll 2 .. /, _"J. = l/EVT
0 0 ll ..... $ "
5
6) [.J.. J.J [s
Using the smgular vrdtH' decomposition A = =
0 <l 75 0
0
] [ 3; '
75
]  U E vr found
2 :;a *
in Exercise 7, we have thr following polar de>composition nf A:
14. Using the singular value decowpo.<:ition A = G = = UEVT found
in Exercise 8, we have the following polar decomposition of A:
r6 0
1
[ r
[ = [3 3
1
0
1
= PQ
72 0 0 72 Ji , ,_,:! 72 72 72 3 3 0 1
310 Chapter 8
15. ln Exercise 11 we found the following s:ngular value decomposition·
[
10] [i1
A= 1 1 =
1 1
Since A has rank 2, the corresponding reduced singular decomposition is
A=
and the reduced singular value expansion is
16. Ir: Exercise 10 we found the following singular value decomposition:
A  [2 1 2] = [32 7z] [3J2 0 0] [
2 1 2 72 0 0 0
315
Since A has rank 1, the corresponding reduced singular value decomposition is
and the singular \'alue expansion is
17. The characteristi c polynomial of A is (.>. + 1 )(.>.  3)
2
; thus>. =  1 is an eigenvalue cf multiplicity 1
and >. = 3 is an Pig en value of multiplicity 2. The vector v
1
= [il forms a basis for the eigenspace
ing to >.  1' and the vee toes v' = m and v' [ i l form an (orthogonal) basis for
the etgenspace <'orresponding to>.= 3. Thus the matrix P = [
1
::
1
fvil) orthogonally diag
onalizes A and the eigenvalue decomposition of A is
A=
[
1 2 0] [3i
2 1 0 =
0 0 3 0
0 72] [1 0 0] [72 7, 0]
030 0 01
0 003 72720
Exerc.se Set 8.6
311
18.
The correspondi ng singular value decomposition of A is obtained by sh1ft1ng t he negat1ve sign from
the diagonal factor t o t he second orthogonal factor·
[
l
[
lol l [ ] [
1
1 2 0 72 7i 1 0 0 T2
A= 2 1 0 = 0:30 0
003 01 0 003
I
72
The characteristic polynomial of A is(..\+ 1)(..\ + 2)(..\ 4), t hus t h: ozrA are ..\1 =
.12  2, and .13 4. Corresponding unit eigenvecto" are v, [;,] ,
1
v, m a• v, [ _;,]
The matrix P = [v1 v 2 v3j orthogonally di agonalizes A o.nd lhe eigenvalue <.lec9 n position of A is
A= [
2
0 2] [35
2 0 = 0
0 3 .l,..
v5
[1
0 0  2
0 0 0
'115
0
0
 
The corresponding singular \'alue decomposition of A is obtained by shiflmg the llt!gative signs
from the diagonal factor to the SC'cond vrthogonal factor.
0 2] [ ...
2
'5
2 0 = 0
0 3 !,..
'11'5
0
I 0
0 l.
I !
01
0 '75
=
1
2
19. (a)
!
fnrm n basis for col(.4), u
3
and
1
'i
u, ! form • basis for col(A).L nuii (AT), and vr
for m a bes" for row(A). and v3 [
[
12 0
(b) A = : =:
12 0
10
6
I l
2 2
1 I
2 2
1 I
2 2
I I
2 2
forms n bas1s for row(At:_= nuli(A).
 !l
20. Since A = UEVT and V is orthogonal we have AV = UE. Written in t.olumn vl'ctor form th1s 1s
312
and, since T(x) = Ax, it follows that
0
0
0
0
and [T(v
1
)]
8
,  0 for j k + 1, ... ,n. Thus [TJ
8
•.
8
=E.
.. '
0
0
0
Chapter 8
21. Since A'J'A is positive semidefinite and symmetric its eigenvalues arc nonnegative, and its singular
values c;Lre the same as its nonzero eigenvalues. On the other hand, the singular values of A are the
squnre roots of the nonzero eigenvalues of AT A. Thus the singular values of AT A are the squares
oft hP sing\llar valnc>s of A.
22. lf A  U'i:.VT is a singular value decomposition of A, then
23.
"vhcre D IS the diagonal matrix having the eigenvalues of AAT (squares of the singular values of
A) on its main diagonal Thus ur AATU = D, i.e' u orthogonally diagonahzes AAT.
r fl !] .
We have Q = l
2
;, = [cos e Sin
8
] where e = 330°' lhus multiplication by Q corresponds to
_! .:c.1 sm8 cos8
4 l
rotat1on nhout thP orsgm through an angle ('If 330° The symmetric matrix P has eigenvalues A.= 3
and ,\ = l. \\'llh rorrP.spondmg unil eigenvectors u1 = [ rJ and U2 = [ Thus v = [ul U2] is
a clmgonnJi?,ing mat.nx for P:
[v'3 2] 4] vrpv·=
_ ! :i] 0 J3 ! fl
2 2 2 2
From this we conclude tha.L multiplication by P stretches R
2
by n factor of 3 m the directron of
u1 and by a factor of 1 in the direction vf u 2.
DISCUSSION AND DISCOVERY
01. (a) H A Ul.v·r IS a s1ngular value dccomposiLton of an m x n matrix of rank k, then U has
size m x m, E has size m x n and V bas size n x n.
{b) If A = Ur Er Vt IS a reduced smgular value decomposition of an m x n matnx of rank k,
then U has size m x k, E
1
has size k x k and V has size n x k .
02. If A 1s an invertible matrix, then it.s eigenvalues are nonzero Thus if A = UEVT is a singular
valur decomposition of A, then E is invertible and A
1
= (VT)
1
E
1
U
1
= vE
1
UT. Note also
I hal the clta.gonal "!1lril''i of r;t are the reciprocals of the diagonal entnes of r;, which are the
vnlurs of At. Thus A 
1
= VE
1
UT is a singular value decomposition of A l
Exerclsa Set 8.7
313
D3. If A and B are orthogonally similar matrices, then there is an orthogonal matrix P such that
B = PAPT. Thus, 'fA = (JEVT is a singular value decomposition of A. then
B = PAPT = P(UEVT)pT = (PU)E(PV)r
is a singular value decomposition of B. It follows that Band A have the same smgular values (the
nonzero diagonal entries of E).
D4. If P is the matrix for the or thogonal projection of Rn onto a subspace W of d1mensioo k, then
P
2
= P and the eigenvalues of P are ). = 1 (with multiplicity k) and >. = 0 (with multiplicity
n k). Thus t he singular values of Pare u
1
= 1, u2 = 1, ... , U!t = 1.
EXERCISE SET 8.7
1. We have AT A = [3 4] = 25; thus A+= ( AT A)
1
AT= .}
5
(3 4] = [fs 2i J·
2. WP. have AT A 
1 2
) = [
6 6
), thus the pseudoinversc of A is
3 1 6 11
2 I
A' =(AT A)1 AT= ....L [ Ll 6] [I 1 2] = [i 370
30
6 6 1 3 1 0
s
3. We ha·e AT A  [; :J = [;: thus U>e p"'udotnwse of A ts
+ _ T _ 1 T _ 1 [ 26 321 [7 0 5] _ [ !
A (A A) A  goo 32 74 1 0 5 
5. (a) AA1A = [1]= CJ =A
(b) A+AA+ = (ls /ll] [ts = [1] [I& is]= (ts fs) =A+
(c) AA+ = = is symmetnc; thus (AA+)T = AA+ .
{d) A+ =Ill is svmmetnc; thus (.4+ A)T = A+ A.
_!!.. ]
IS
I
0
0
(e)
The e1genvalues of AAT:; [
9 12
] are >q = 25 and
= 0, with corresponding unit eigen
12 16
vectors Vt = rn and v2 = [ :J respectively. The only singular value of AT is 0'! = 5, and
we have u 1 = o'. AT v
1
= !(3 4] [i] = [1]. This results in the singular value decomposition
,1 T = [3 •J = II I [5 OJ [! ll = UEVT
314
6.
Chapter 8
ThP corresponding reduced singular value decomposition is
AT = (3 4] = fll [5] [l = UlEl vr
and from this we obtam
(AT>+ = "•Ei"•ur = [i] UJ 111 = = (A+>r
(f) The eigenvalues of (A+)T A+ =
are At=
2
1
5
and A2 = 0, with corresponcling unit
eigenvectors v
1
[i] and v2 = respectively. The only singular value of A+ is a
1
= t•
and we have u 1  ct
1
, A+v• = 5[,\ fs) [!] = [1). This results in the singular value decompo
sition
(a)
(b)
(c)
(d)
(e)
The corresponding reduced singular value decomposition is
A+= [
2
3
5
= [1) [t) = u.E. vt
and from thts we obtam
4: 11 .4 =
.4
1
A:\
1
[l
=
[
l
= 5
0
__ ]_
30
2
5
.
30
2
5
1]
[l ;]
=
r 11 [I I]
=A
[i [!
7
7
 30  Jo
2 2
5 s
I
.t l
6
'2!1
3o
I 13
 yg
15
AA' = [; i]
1 z
is symmetric, thus (AA+)T = AA+
1$ E
1] [; i] is 'ymme.,ic; thus W A)T A.
The eigenvalues of AAT = are A
1
= 15, >.
2
= '2, and AJ = 0, with corresponding
3:]
unol eog•nvocto.s v, },;; v, *' Hl· and ., = f,o nl The singular values of
Exercise Set 8.7 315
AT a" u, = Jj5 and = .Ji. Setting u, = :, AT v, = *' [: [':] = *' 1:1 and
u, = :/r v, = [: :J [ !] = f,; [_:].we have
kJu 't"' vr
4  1
726
and from this it foUows t.hat
[
i&  :15]
(C) The eignn.;tlues of (A")rAi = 
1
;
40
arc ,\1 =ft . ..\2 ,\3 = 0, wtth
43 m ns
conespondmg uniteigenw>CIO<S v' = ).;; [':]. v, = ;+. H ]· and VJ = ;;;\; n ]· The sin·
gular values of A ... are Ot =
and 02 = Seltmg
7 • l [ I]
30 _I_ 3 = _L_ [ 3]
v'26 1 \113 2
we have
and fro111 this it follows that
[
5
...lm
( 1
• ) • = r E 1 ur = 11
I I I I .)t%
7
Ji9s
[J}:; 0 l [vp _ip] = =A
!...
0
.J2 :13 713 2 1
v26
7. The only eigenvalue of AT A = !?.5J is >.
1
= 25, with corresponding unit eigenvector v
1
= [ lj. The
only smgular value of A ts a 1 = 5 We have u 1 = ;, Av1 = [!] [Ij = [i], and we choose u2 = [ :1
so that { u 1, u2} is an orthonormal basis for R
2
• This results in the singular value decomposition
A= [!] = [: = UEVT
316 Chapter 8
T he corresponding reduced singular value decomposition is
and from this we obtain
8. The eigenvalues of AT A = [!
are )q = 15 and A2 = 2, with corresponding unit eigenvectors
v1 = Jb [!] and = 7iJ [ ThE' singular val ues of A are cr1 = VIs and <12 = V2. Setting
U t = _!_Avt = [
2
3
] =
cr1 v iS
2 1
j v 13 vl95
7
we have
)]  [v'IS 0 l
3  Ji9s 726 0 J2
1 7 4
:t'9s 725
and it follows that
.4• = V1Ej
1
UT = [ [;.
9. The eigenvalues of AT A = [
14 32
] are )q = 90 and .X
2
= 10, w1th correspondmg unit eigenvectors
32 26
v 1 = and v2 . [_ The smgnla.r values of A are <1
1
= '1/'90 = 3v'i0 and a
2
= JIO. \\'e
7S ..n;
have u, = 3 Jr,; i] = [; ]. u, = :,Av, = ,)rn i] = [_;,] , and we
choose u3 = so that { u
1
, u 2, UJ} is an orthonormal basis for R
3
. This yields the singular value
decomposition
1
71
0
0] [3v'l0 0 l
1 0 v'lO
0 0 0
The corresponding reduced singular value decomposition is
[
l
[
I I l
7 1 72 2
A = 0 0 = 0 0 [
3
ViQ O l [ 7s
5t: I I 0 jjO jw
i) J2 71
Exercise Set 8 7
317
10.
ancl from t h1s we obtain
.,+  \. •
•  I
[
2 1][1 0][1 7s W'iO 72
35?s 0 Jrn
0 [ l
v2 G
0 = t
o  :ro]
0 Jo
The ouly of A I' A = I·H] is A 1 = 41, with correspouJiug unit. v
1
= 11 }. The
only value of A is cr
1
= .J41. We have u 1 = ;, Av
1
= Jh [:] Ill = ( and we choose
u'l = r7fl so that { Uj, U2} is an orthonormal basis for R
2
• This results in the singular value
7oii
decomposition
A= (
4
] = [vf. [J4I]IIJ = ur;vr
5 74i 74i 0
The cora esponding reduced singular value decomposition is
and fm111 th1s we obtain
At VtE}
1
U[= [lJ[7,rr][Jrr
;.)
11. Smcc A = has full column rank, we have .4+ = (ATA)
1
AT: thus
:4+ = rlZJ 3]l [2 1] = _!_ [ ;:; 3] [:! [1
. 3 5 2 1 16 3 5 2 1 .j 1 1
12. Smcc rl has f11ll column rank, we have A+= (AT A)
1
A:r·: thus
[
'0:.! ,2_]
13. The mamx A = :] does not have full column rank, so Formula (3) does nol apply. The
eageuv<llues of AT A = are )q = 4 and .X
2
:::: 0, with correspondtng unit eagenwc tors v
1
=
and v2 ....., The only singular value or .4 is Ot = 2 We have U t = = [: :] =
we ,.hoose Ut that {u
1
,u
2
} is an orthonormal basis for R
2
• Th1s results m
7i
the smgular value decompmntion
A= [I l] = [2 o] [ = ur:vr
1 I 72 72 0 0 72 v"l
The corresponding reduced singular value decomposition is
318 Chapter 8
and fro·n th1s we obtain
A • = v, Et 'ur = [ 1 l 1 [ 7, 7,] = [: i]
14. The m<ltrix A does n:>t have fuU column rank, so Formula (3} d.:>es not apply. By inspection. we
see that a smgular value decompositiOn of A is given by
A = = = UEVT
0 0 0 0 1 0 0
The corresponding reduced singular value decomposition is
cllld fr0111 llt iS WE' 0 htO.in
15. The standard mfltrix for the orthogonal proJectmn of R
3
onto col{A} is
[
1 11 [1 ·f 8] [! A
\ + _ I :\ 6  30 15 _ !
 0 ; _ !  6 30
2 1 5 5 ! _J..
3 15
13
u
16. The standard matrix for the orthogonal projection of R
3
onto col(A) is
A A+ = [_ 0  l =
55 G Q jQ 001
17. The giv<n system can be wd tten in matdx fom• as Ax = b whm A = [ b =. [ _ ]· The
matrix A has thf' fnll owinp; reduced singular value decomposition and pseudoinverse:
A+ V
1
E)
1
Uf = (! f j) =
72 18 9 9
Thus the least squares solut1on of minimwn norm for the system Ax = b is:
Working with Proofs
319
18. The g;ven system can he wdtten as Ax= b whece A= [: and b = [:] . The pseudo;,..,,. of
_!_ [
11

9
] (
1 2 2
] = fii fi,] 0 Thus the least squares solution of
IS 9 9 l 3 l 0 
2
19° Since AT has full column rank, we have (A+)T = (AT)+ = (AAT)
1
A = (14)
1
(I 2 3] = (f.
1
2
4
fi)
and so A+= [H
20. The matrix AT has full column rank and. from Ext:n: t::.c 18, have (AT)+= [
[
0]
T $ I
.4  n : .
!1 l
'" l
&
0
Lhus
I 1
1
2 2
DISCUSSION AND DI SCOVERY
cnJ 1<; an m x n matrix with orthogonal (nonzero) column vectors, then
11
AT=
Note If the columns of A r\re orthonormal, then A+ = AT 0
02. (a) If A = a uv1', then A;. = lvuTo
a
I
Uc:tl
2
0
0
0
0
0
0
I
jjCJj"'
(b ) A • A= ( = v( u: u vr = vvT and = = uur.
0 3 If c IS a nomero scalar, l hen ( cA )"'
0
= i .4 + .
D5o (a) AA+ and A+ A are the standard matrices of orthoJ!'onal projection operators.
320 Chapter 8
(b) Using parts (a) and (b) of Theorem 8.7.2, we have (AA+)(AA+) = A(A+ AA+) = AA+ and
(A+ A)(A+ A)= Jl,+(AA+ A)= A+ A; thus AA,. and A+ A are idempotent.
WORKING WITH PROOFS
PI. If A has rank k, then we have U'{U
1
== V{V
1
= h; thus
.. .
P5. Using P4 and Pl, we have .4+ AA+ =A+ A H A+= A+.
P6. Using P4 and P2, we have (AA+)'r =(A++ A+)T = A++ A+= AA+.
P7. First note that, as in Exercise P2, we have AA+ = (U
1
L:
1
= U
1
U'[. Thus, since
the columns of U
1
form an orthonormal basis for col(A), the matrix AA+ = U
1
UT is the standard
matrix of the orthogonal projection of R" onto col(A).
P8. It follows from Exercise P7 t hat AT(AT)+ is the standard matrix of the orthogonal projection of
R" onto coi(AT) =·row(A). Furthermore, using parts (d), (e), and (f) of Theorem 8.7.2, we have
and so .4+ A is the matrix of the orthogonal projection of R" onto row(A) .
Exercise Set 2.1
23
8. The three lines do not intersect in a. common point (see figure). This system has no solution.
)'
The augmented matrix of the system is
and the reduced row echelon form of this matrix (details omitted) is:
The last row corresponds to the equation 0 = 1, so the system is jnconsistent. 9. (a) The solution set of the equation 7x  5y = 3 can be described parametrically by (for example) solving the equation for x in terms of y and then making y into a parameter. This leads to x == 3'*:,5t, y = t , where oo < t < oo.
(b ) The solution set of 3x 1  5xz + 4x3 = 7 can be described by solving the equation for x1 in terms of xz and x 3 , then making Xz and x 3 into parameters. This leads to x 1 = I..t 5;<~ t, x:z = s, x3 t, where  oo < s, t < oo.
=
(c)
The solution set of  8x 1 + 2x 2  5x 3 + 6x 4 = 1 can be described by (for example) solving the equation for x 2 in t.erms of x 1 , x;~, and x 4 , then making x 1 , x3, and x 4 into parameters. This leads to :t 1 = r, x 2 = l±Brt5 s  6 t, X3 = s, X4 = t, where <Xl < r, s, t < oo.
(d) The solution set of 3v Bw + 2x y + 4z = 0 can be described by (for example) solving the equation for y in terms of the other variables, and then making those variables into parameters. This leads to v = t1, w = t2, x = t3, y = 3tl  8t2 + Zt3 + 4t4, z t4, where  oo < t11 t2 , t3, t4 < oo.
=
10.
(a) (b) (c) (d)
x = 2 lOt, y
= t, where  (X)< t < OO.·
= s, x3 = t, where oo < s, t < oo. Xt = r , x2 = s, X3 = t, X4 = 20 4r  2s 3t, where  oo < r, .s, t. < oo. v = t1, w = t2, x = t1  t2 + 5t3  7t4, y = t3, z = t4, where oo < h, t2, t3, t4 < oo.
x1 = 3 3s + l2t, x 2
11. (a)
If the solution set is described by the equations x = 5 + 2t, y = t , then on replacing t by y in t he first equation we have x = 5 + 2y or x 2y = 5. This is a linear equation with the given solution set.
(b) The solution set ca.n also be described by solving the equation for y in terms of x. and then making x jnto a parameter. This leads to the equations x = t, y = ~
!t
24
Chapter2
12. (a)
If x1 = 3 + t and x2 = 2t, ther. t = Xt + 3 and so x2 a. linear equation with the given solution set.
= 2x1 + 6 or
2xl + x2 = 6._ This is
(b) The solution set can also be described by solving the equation for X2 in terms of x 1 , and then making x 1 into a parameter. This leads to the equations x 1 = t, x 2 = 2t + 6. 13. We can find parametric equations fN the line of intersection by (for example) solving the given equations for x and y in terms of z, then making z into a parameter:
x+y= 2x + y = 4 3.z
3+z}
X
=}
2x+y = x y = 3 z
4 3z}
Z
=}
x = l4z
From the above it follows that y parametric equations
=3+z =1 4t,
x = 3 + x  1 + 4z = 2 + Sz, and this leads to the
y = 2 + 5t,
=t
for the line of intersection. The corresponding vector equation is
(x,y,z)
= (1,2,0) + t(4,5, 1)
14. We can find parametric equations for the line of intersection by (for example) solving the given equations for x and y in terms of z, then making z into a parameter:
x + 2y = 3x 2y =
13z}
2 z
:::;.
4x = ( 1  3z) + (2  z)
=3
4z
::::}
X=
34z
4
From the above it follows hat y
= 1t,8z _ This leads to the parametric equations
X ::::
i  t,
y = ~  t,
Z
=t
and the corresponding vector equation is
(x, y, z) = (~,
l· 0) + t( 1, 1, 1}
15. If k :fi 6, then the equations x y = 3, 2x  2y = k represent nonintersecting parallel lines and so the system of equations has no solution. If k = 6, the two lines coincide and so there are infinitely many solutions: x = 3 + t, y = t, where oo < t < oo.
16.
No solutions if k:/: 3; infinitely many solutions if k
= 3.
3 . 2
17. The augmented matrix of the system is 4 7
32 !l 1] [
5 3:
18. The augmented matrix is
[!
0
2
1
4
1 1
Exercise Set 2.1
25
19. The augmented matrix o! the system is
[~
.
2 3 3
0
2 1
1
0 7
1  1
0
~]
20. The augmented matrix is [
~
0 0 : 1 0 12 0 1 3
11
!J
21. A system of equations corresponding to the given augmented matrix is: .
2xt =0 3xl 4x2 = 0
X2
= 1
22. A system of equations corresponding to the given augmented matrix is:
3x 1 7xl
+
x2
2x 3 + 4x3

= 5 = 3
=
7
 2x2
+
X3
23 . A system of equations corresponding to the given augmented ma.tri.x is:
?x1
2x2 x 1 + 2x2
+
+ x3 + 4x3
3xt = 5 = 1
24. 25.
X}=
7,
Xz
=  2,
X3
= 3, :J:4 = 4
(a) B is obtained from A by adding 2 times the first row to the second row. A is obtained from B by adding 2 times the first row to the second row. (b) B is obtained from A by multiplying the first row by!· A is obtained from B by multiplying
the first row by 2.
26, (a)
B is obtained from A by interchanging the first and third rows. A is obtained from B by interchanging the first and third rows.
(b) B is obtained from A by multiplying the third row by 5. A is obtained from B by multiplying the third row by }.
27.
+ 3y + z = 7 2x + y + 3z = 9 4x + 2y + 5z = 16
2x
X
28.
2x 8x 6x
+ 3y + 12z =
4 + 9y + 6z = 8 + 6y + 12z 7
=
3
29.
J y
2x
X
+
12 y + 2z = 5  z=  1
Z
+
=
30.
X +
y y
+Z =
y+z=lO +z= 6
31. (a)
+ c2 + 2c3  C4 = 5 cz + 3c3 + 2c4 = 6 c1 + c2 + 5c4 = 5 2ct + c 2 + 2c3 """ 5
Jc1
(b )
3cl
+ cz + 2c3  C4 = 8
Cz
 c1
2c1
+ c2
+ 3c3 + 2c4 = + 5c4 =
3 2
6
+ cz + 2c3
=
26
Chapter 2
(c)
+ c~ + 2c3  c4:;;;; 4 c2 + 3c3 + 2c, := 4 cl + Cz + Sc4 = 6 2ct + c2 + 2c3 =2
3cl
Ct
32.
(a)
+
C2
+ 2c3
2c3
2c1  4cl
+ 5C4 = 2 + 2cz c3 + 4c4 = 8 + 2Cl + CJ C4 = 0 5ct  Cz + 3c3 + C4 = 12
c1
=
2
(b )
Ct
+
c2
+ 2c3
 2c3
=
5
2Ct
+ 6c4
4ct
+ 2cz
= 3
9
4 11
C3
+ 4c4 =
c.&=
C4
+ 2cz + CJ5Ct Cz + 3c3 +
=
(c)
+ cz + 2c3
=
4
2c1  4cl
+ 4c• = 2 + 2c2 + c3  C4 = 0 5c1  c2 + 3c3 + c4 = 24
+ 2c2 
 2c3 c3
+ Sc4 =  4
DISCUSSION AND DISCOVERY
Dl.
(a) There is no comm on intersection point. (b) There is exactly one common point of intersection. (c) T he three lines coincide.
D2. A consistent system has at least one solution; moreover, it either has exactly one solution or it has infinitely many solutions. If the system has exactly one solution, then there are two possibilities. If the three lines are all distinct but have a. common point of intersECtion, then any one of the three equations can be discarded without altering the solution set. On the other hand, if two of the lines coincide, then one of the corresponding equations can be discarded without altering the solution set. If the system has infinitely many solutions, then the three lines coincide. In this case any one (in fact any two) of the equations can be discarded without altering the solution set.
D3. Yes. If B can be obtained from A by multiplying a row by a. nonzero constant, then A can be obtained from B by multiplying the same row by the reciprocal of that constant. If B can be obtained from A by interchanging two rows, then A can be obtained from B by interchanging the same two rows. Finally, if B can be obtained from A by adding a multiple of a row to another row, then A can be obtained from B by subtracting the same multiple of that row from the other row.
D4. If k = l = m = 0, then x = y = 0 is a solution of all three equations and so the system is consistent. If the system has exactly one solution then the three lines intersect at the origin.
05. The parabolay = ax2 + bx + c will pass through the points {1, 1), (2, 4), and (1, 1) if and only if
a+ b+c=l 4a + 2b + c = 4
ab+c=I
Since there is a unique parabola passing through any three noncollinear points, one would expect this system to ba.ve exactly one·solution.
a. Interchanging the first two columns corresponds to interchanging the coefficients of the first two variables. If the system is consistent. (a) True. then the original system will be consistent. Y2). Yt). But this is equivalent to discarding one of the equations! (a) True. To say that the equations have the same solution set is the same thing as to say that they represent the same line. and if these equations are consistent. thus the two equations l are identicaL f.Discussion and Discovery 27 D6.e.y to interchange rows since this corresponds to interchanging equations and therefore does not alter the solution set. Such a system will always have the trivial solution x 1 = x2 = ·· · = Xn = 0. but the corresponding augmented matrices appearing in the righthand column are all different . The parabola y == ax 2 + bx + c passes through the points (x1. In summary. If there is enough redundancy in the equations so that the system reduces to a system of two independent equations. then the first n. If there Is enough redundancy so that the system reduces to a system of only two (or fewer) equations. For example. ln any case. ~ 2 columns. · (c) False. From the first equation the x 1intercept of the line is x1 = c. one can solve for two of the variables in terms of the third or (if further redundancy is present) for one of the variables in terms of the other two. it will have infinitely many solutions. Ys) if and only if + b~1 + c = Yl 4ax~ + Zbx2 + c = Y2 ax~ bxs ax~ + c = Y3 i. (d) True. if and only if a. (xz.c:. If there are n (d) True. b. D9. Thus if the system is consistent. This results in a different system with a differeut solution set.n be made into a parameter in describing the solution set of the system. thus c = d.1 columns correspond to the coefficients of the variables that appear in the equations ar1d the last column corresponds to the constants that appear on the righthand side of the equal sign. t. Thus a set of four planes corresponds to a system of four linear equations in three variables. and from the second equation the x 1 intercept is x 1 = d. and c satisfy the linear sy5tem whose augmented matrix is 1 1 Y2 Y1] 1 Y3 D7. . then the solution set will be a line.t one "free" variable that ca. four vertical planes each containing the zaxis and intersecting the xyplane in four distinct lines. {b) False. there is at lca. [It is oka. Multiplying a row of the augmented matrix by zero corresponds to multiplying··both sides of the corresponciing equation by zero. Referring to Example 6: The sequen_c~ of linear systems appearing in the lefthand column all have the same solution set. If the line is not vertical then from the first equation the slope is m = and from the second equation the slope ism= thus k = l.] (c) False. A plane in 3space corresponds to a linear equation in three variables. .nd (x3. ( b) False. we conclude that c = d and k = . If the line is vertical then k = l = 0. 08.
{d). The possible 3 by 3 reduced row echelon forms are [~ ~ ~]. 0. The matrix (c) does not satisfy property 1 or property 3 of the definition. The matrix (b) does not satisfy property 3 and thus ls not in row echelon form or reduced row echelon form. 9. The pos:. t < S>O· The corresponding vector form is (x~.3t.28 Chapter2 EXERCISE SET 2. Tbe mo. The matrix (b) does not satisfy property 3 and thus is not in row echelon form or reduced row echelon form. and (d) are in reduced row echelon form. The matrix (c) is in reduced row echelon form. [6 ~). Thus.2x 4 and x 3 = 4. 4. + Zx4 = 1 4 10.0. 1. x2 = 0. 6.[~ ~ !] . 1) . 3. The given matrix corresponds to the system x1 + 2x2 X3 + 3x4 = Solving these equations for the leading variables (x 1 and x 3) in terms of the free variables (x2 and x4) results in x 1 = 1. 8.2t. The matrix (a) does not satisfy property 3 of the definition. the solution set of the system can be represented by the parametric equa. X3 = 7. [~ ~ ~] . 3. X4 =t where . 0} + s( 2.2x 2 . 4.[~ ! ~] and [~ ~ ~]·[~ ~ ~] [~ ~ ~]·[~o ~ :] .28. The matrices (a) and {b) are in row echelon form.trices (c). (c).oo < s. The matrices (a) and (c) are in reduced row echelon form. G)+ t( 2.ible 2 by 2 reduced row echelon forms are number substituted for the [8 g). X3. and [A o] with any real *. and the matrix (e) does not satisfy property 2. 7.2 1. by assigning arbitrary values to x2 and x4. The given matrix corresponds to the system = 3 0 X3 = 7 which clearly has the unique solution x 1 = 3. [8 A]. 0. The matrix (c) does not satisfy property 2. 5. X3 = 4. an(l {e) are in reduced row echelon form . 0 0 0 0 0 0 0 0 0 0 0 X1 Xz with any real numbers substituted for the *'s.tious Xt = 1. The matrix (a) is in row echelon frem but does not satisfy property 4. X2 = S. The matrices (a) and (b) are in row echelon form. 2. The matrice1' (a). x2 . X4) = (1. and the matrix {b} does not satisfy property 4. The matrix (b) does not satisfy property 4 of the definition.3x4 .
x4. X4 =t where oo < t < oo.2s . and x 3 . assigning arbitrary values to x 2 and xs.3xs. r~ l~ . xs.3t. Thus.7x4 = X3 Solving these equations for the leading variables in terms of the free variable results in x 1 == 8 +· 7x 4 .t .2s.2x4 + X5.4t. t. Solving these equations for the leading variables (x1.x 5 = 3 in which x3 does n0t appear. t < oo. the solution set can be represented by the parametric equations X] =2 + 6s. and x3 = 5 .2u + V.4xs and X4 = 8. the solution set of the equation is given by Xt ::::: 3 . X3 = 5 t. X4 = 8. The given matrix corresponds to the single equation Xt + 2xz + 2x. X5 =V where oo < s. X3 = t. x2 . 13.5t. making x 4 into a parameter. X2 = S. x3.3t.5xs. The correspondjng vector form is 12. Solving for x1 in terms of the other variables results in x 1 = 3. v < :x>.Exercl&e Set 2. X2 = S. the solution set of the system can be represented by the parametric equations X1 = 8 + 7t. Thus. and Xs into parameters. making x2. The given matrix corresponds to the system + 3xs = + 4xs = X4 2 7 + 5xs = 8 where the equation corresponding to the zero row has been omitted. Thus.2 29 11. Xz == 2.2u+v s 3 2 1 0 0 0 0 0 = u v 0 +s 0 0 0 +t 1 0 0 +u xs l! r~ +. x2 =o 2. u. and x4) in terms of the free variables (x2 and xs) results in Xt = 2 + 6x2.x 4. X3 = 7. X5 =t where oo < s.3x1. The corresponding vector form is 14.2xz. X3 = 7. The corresponding (column) vector form is XI X2 X3 X4 3. X4 = u. The given matrix corresponds to the system 8 + 3x4 = 2 + X4 = 5 . The given matrix corresponds to the system x1  3x 2 X3 =0 =0 0=1 which is clearly inconsistent since the last equation is not satisfied for any values of x 1 .
10 = .8(2 . The system of equations corresponding to the given matrix is x1 . x 4 = t.Jordan (starting from the original matrix and reducing further): [~ 0 1 8 5 4 9 l 0 1 ~] 10] 5 2 t .. the solution set can be described by the paramet = = = ric equations x 1 = 10 + 13t.4x3 = 72420 = 37.r4) + 9x4 = 5 + 13x4 .1:3 = 2 .x 4 ) + 5x4 . 2 2x3 = 2 .8. [~ 0 1 0 0 13 0 13 1 1 From this we conclude (as before) that x 1 = 10 + 13t. and x 1 6. x 2 = . it follows t hat Z3 == 5.4(2.9x. 0 1 0 37] 0 1 . 013] 0 l 8 5 Add 3 times row 2 to row 1. 16. x2 = + 8x3 + 4x3 X3 .t. assigning an arbitrary value to x 4 . Add .3C 15.4x3 + 9x4 3.:::10 + 13x4· Finally. Alternate solution via Gauss. and x3 = 5.5 + 13t.8 5 0 From this we conclude (as before) that x 1 = 37.4 times row 3 to row 2.. x 2 ~ 5 + 13t. x 3 = 2. Add 8 times row 3 to row 1.2 times row 3 to row 2. The system of equations corresponding to the given matrix is x1  Chapter2 3xz = 7 x2 + 2x3 = 2 X3 = 5 Xz = + 4x3 Starting with the last equation and W'orking up. and x1 = 7 + 3x 2 . Add 4 times row 3 to row 1. x 3 = 2  .x 4 .8. x4 = t.8xa + 5x4 = 6. we have .S:c4 6 .Jordan (starting from the original matrix and reducing further): 3 0 1 [ 0 0 1 4 2 1 Add .: = 3 + X4 = 2 = x2 Starting with the last equation and working up. Alternate solution via Gauss. x2 = 3 .
3xs.=2. the solut ion set can be described by Xt = 11..6l. it follows that x" = 9.7x'l + 2( 4. The corrcspouding system is x1 + x2 X2 Sx3 + 4x3 + 2x4 = X4 1 = 3 = 2 Starting with the last equa tion.2) = l. making x 3 inLo a paramet er . assigning arbitrary 'Values to xz and :ts.ng system is + 5x3 + 3x4 = 2 2x3 + 4x4 = .x2 + 3x3 .t .J:r1 .6xs = 4. The correspondi. X3 = . X4 =2 20. :r1= . The corresponding system of equations is x1 + 7xz  2x3 X3 X4 .6 + 7t .7 X3 + X4 = 3 Thus x 4 is a free. X2 = 3 . 5 9 St<Uting with the last equation aml working up. we have :r:.13 + 2t. Star ting with the first equation and working down. and x3 which satisfy the third equation.1x2 + 2x3 + 8xs = .6.?x2 + 2xs . vc:~.2x 1 4+2 = . Xs = t 18.4t = 21.3t = .4x3.2x4 l . and Xt = . x1 x2  :r. [~ 1 1 10 . a. we have x1 and X3 = l(12 .l .2 31 17.3t. x 2 = 3 . The corresponding system X1  3x2 :tz + 7XJ = 1 + 4x3 = 0 0 = 1 is inconsistent since t here are no values of x 1 .2:r2) = i (12.3 + x4 + 6xs = + 3xs .l .5(3 .7s + 2t.6 + 7x3 .3.riable and.x3=5 x 1 .7 + 2(3  t) . setting x 4 = t. Xz = S.3.lly.6xs = 5.3xs) + 8xs = 11 .(3 .2 3 3 23..Exercise Set 2. = 3 . 22. X4 =9  3t. Fina.2} = ! = 2.nd x 1 = 2 .x1) = S(S.4x 3 ) + 3x 3 . Thus.3xs .t). we have x 4 = 2. X:~ = t.:t4 .8xs = .. 10.4t. . xz = Hs . x 2 . X3 = 5 .2{2) = . x2 = 4 .4x2=5+18 = .2 = .(9 . the solutio n set can be described by the p<Uame~t:ic equat ions Xl = = . X 1 = 1 . Add 3 times row 1 to row 3. The augmented matrix of the system is Add row 1 to row 2.3xs}.4.
H fl lo IQ 7 1 1 4 OJ~ l 7 . Add 10 t imes t he new row 2 to row 3. Add 8 times the new row 1 to row 3. Add 7 t imes t he new row 2 to row 3. x3 0 0 1 0 0 1 Thus the solution is 24. the solution set is represented by the parametric equations .1 times row 2 to row 1. Add 2 times row 3 to row 1. l 1 7 4 1 0 0 !] [~ 3 0 7 4 1 1 0 0 !] Finally.1. [~ Add . assigning a n arbitrary value to the free variable X3. 2 5 ! The augmented matrix of the system is Multiply row 1 by ~ · Add 2 t imes the new row 1 to row 2.1 times row 2 to row 1. [~ Add .52 9 114 8] f2 • [i 0 ~ ~1 ~] 2 1 0 1 0 0 1 Add 5 times row 3 to row 2.32 Chapter 2 Multiply row 2 by . x 2 = = 2. !] !] [~ 1. [~ Multiply row 3 by  1 1 0 2 5 .4 1 Multiply row 2 by ~. x1 = 3.
Interchange rows 1 and 2. w =t 26. Add 6 times the new row 2 to row 3. 1 0 0 0 1 1 2 1 2 0 0 0 0 0 0 1 0 0 0 ~] 0 0 0 Add row 2 to row 1.~. t. 2 0 1 1] 0 0 0 0 0 0 the solution set of the system is represented by the parametric y x = 1 + t. The augmented matrix of the system is [~63 1 0 [ 0 ~6 ·~3 ~] 5 j.2 9 Multiply row 2 by . Thus. = 2s. Add row 1 to row 3.Exercise Set 2. Add 3 times the new row 2 to row 4. Add 6 times the new row 1 to row 3.2 33 25. setting z equations = s and [~ w .Add the new row 2 to row 3. z = s.. . Multiply the new row l by 2 1 2 3 6 9 ~] . [~ 2 1 ~ 1 0 ~] 1 0 3 It is now clear from the last row that the system is inconsistent. Add 3 times row 1 to row 4. The augmented matrix of the system is [ ~ 0 0 1 3 1 2 2 4 0 0 2 1 2 2 1 1 3 3 1] Add 2 times row 1 to row 2. 11 21 1] [ 3 6 3 0 0 0 0 1 2 6 0 0 0 Multiply row 2 by ~.
X3 = . then add 1 times row l to row 2 and . 29.(~. it follows that x2 = ~ . the solution set can be described by the parametric equations X )=.2x3 = 8 . The augmented matrix of the system is Multiply row 3 by 2.~ x 3 ). The last two rows correspond to the (incompatible) equations system is inconsistent. a. 2 3 1 1 4 2 3 2 I~] 11 30 and the reduced row echelon form of this matrix is As an intermediate step in Exercise 2:i.34 Chapter 2 27. thus the [~ Thus the system is inconsistent.x 2 . The augmented matrix of the system is 4 x2 =3 and 13x2 = 8.4 = 3.~X 3. assigning an arbitrary value to x 3 .nd :::1 = . 30. 24.x 3 = .~x 3 . the augmented matrix of the system was reduced to Starting with the last row and working up.3 times row 1 to the new row 3. As an intermediate step in Exercis~ = 2. x2 = 9 + Sx3 = 9 + 10 = 1.x1 .~. .~t . Finally. 28. the augmented matrix of the system was reduced to 1 0 1 1 1 ~ [ 0 0 0 i] Starting with the last equation and working up. it follows that x3 and x1 = 8 .~.J .
As in Exercise 26. f!·om back s ubstitution. This system bas only the trivial solut ion .2.3. there a t e in finitely many nontrivial solutions. z = s .1 0 0 0 3 and from this we can immediately conclude that the system has no solution..2 35 31. .nd x = 1 + y.1 + w. Thus. 2 1 0 3 [0 1 0 3 2 ~] !· Multiply row 2 by . 34 . .~ . (a ) There a re more unknowns than equat ions in t his homogeneous system. (b) The second equation is a multiple of the first.2z +w = . W = t. Add . the augmented matrix of the syst em can be reduced to 1 2 [ 0 1 . Multiply the new row 3 by [~ 2 1 0 . the augmented matrix of the syste10 was reduced to 1 1 2 0 [0 0 1 2 0 0 0 0 ~ ~] 0 0 0 0 It follows that y = 2z a. (a) There are more unknowns than equations in this homogeneous system: thus there are infinitely many nont rivial solutions. 33. The a ugmented matrix of the homogeneous sys tem is 3 0] 21 0 0 1 2 [0 2 0 l Interchange rows 1 and 2.1 0 1 ~] T he last row of t his matrix corresponds to x 3 = 0 a. Add 2 times the new row 1 to the new row 2. (b) From back s ubstit ution it is clear that x 1 = Xt = X3 "" 0. setting z = s a nd w = t. it follows that xz = x3 = 0 and x 1 = .1 ! ~] . Thus. by Theorem 2.nd. y = 2s.1 times row 2 to row 3. 32.2x2 = 0. T hus t he system r educes to only one equation in two unknowns and t here are infinitely many solut ions. 35. As an intermediate s te p in Exercise 25. This syste m hns only the trivial solution.x erclse Set 2. the solution set of the system is represented by the parametric equations x = 1 + t .E.
Add 2 times the new row 1 to row 3.napter 2 36.1 times the new row 2 to row 3. z = 0.x 3 .4s. x = .1~ lnterchange rows l and 2.3 1 2 3 [ 1 1 4 and the reduced row echelon form of this matrix is [H H] Thus thi~ ~ystem has only the trivial solution x =y =z = . 0 1 1 0 1 0 0 0 l This is the reduced row echelon form of the matrix. on setting y = t. using back substitution. then multiply this last row 2 by ~. [~ [~ 0 1 3 1 1 2 1 0 0 ~] ~] ~] 0. Then. Add 3 times row 3 to row 1.Add . 37. Thus the solution set of the system can be described by t he parametric equations x1 = s.4t and 3:c 1 = x2 . · Multiply row 2 by 0 2 1 1 3 2 4 1 8 !. The augmented matrix of the homogeneous system is 2 1 .nd.4x4 = 4s .t. the solution set of the system ca.t = 3s. 38.36 <. x 4 :::: t. X3 = 4s. x2 = . Add . 3111 [0 4 1 4 Let x 3 = 4s.n be described by the parametric equations w = t. T he augmented matrix of the homogeneous system is 3 1 [5 1 1 1 1 1 ~] 0] 0 Multiply row 2 by 3. From this we see that y (the third variable) is a free variable a.s .t. y = t.x 4 = s + t .. The augmented matrix of the homogeneous system is [_~ 2 0 i 2 1 4 3 2 3 ~] ~] Multiply the new row 3 by . [~ . we have 4x 2 = .:c3 . X4 = t. Add 2 times row 3 to row 2.5 times row 1 to the new row 2.
luces to only four equations.2 times the new row 1 to the new row 2. hy reducing the augmented matrix of the system to a rowechelon form: The augmented matrix of the original system is [~ 1 0 3 2 4 4 3 1 7 5 1 ~] Interchange rows 1 and 2. i.2 37 39. Add 3 times the new row 1 to row 3. X2 0 1 0 0 1 0 0 0 0 1 0 0 0 0 0 0 0 0 0 0 = Xj :::: X4 :. The augmented matrix of the homogeneous system is a 4 1 0 2 I 0 0 . Add .e.2 2 1 2 . 40. Add 2 times the new row 1 to row 4.Exercise Set 2.. 1 5 ~j 0 2 3 0 0 7 2 1 2 0 0 0 0 ~] 0 0 Thus. setting w == 2s and x = 2t.. the solution set of the system can be described by the parametric equations u = 7s . v = 6s + 4t. We will solve the system by Gaussian elimination.2 1 ancf the reduced row Echelon form of this matrix is 0 0 0 1 0 0 0 0 Thus th~ X 1 =.4 I 1 1 .3 _.. w = 2s.5t. and these equations have only the trivial solution 41.10 3 1 16 . The augmented matrix of this homogeneous system is and the reduced row echelon form of this matrix is u [~ I 1 1 3 3 4 2 5 2 3 01 . system re<.·: 0. x = 2t. {~ 0 1 2 7 7 8 7 .10 ~] .
2 7 1 .7 10 0 1 1 0 0 1 ~] 1 1 1 1 This is a rowechelon form for the augmented matrix.14 3 4] 3 Multiply row 2 by 1. Add 4 times row 1 to row 3. This system has onlJ!.1 0 0 ~] z~ and the reduced row echelon form of this matrix is [ Z1 = .s .t. Add . Add tne new row 2 to row 3.the trivial solution. Add 3 times the new row 2 to row 3. Multiply the new row 4 by !. 0 . 43. Ftom the last row we conclude that / 4 = 0.15 times the new row 3 to row 4. and from back substitution it follows that I 3 = / 2 = 11 = 0 also. 0 1 0 2 7 . The augmented matrix of the system is 0 0 ~ ~ ~ ~ ~ ~] 0 1 0 0 0 0 0 0 0 0 From this we conclude that the second and fifth variables are free variables.26 a . Add 1 times the new row 2 to row 4. z4 = o. [~ 2 7 0 4 22 a4 ..14 15 7 10 14 ~20 0 Multiply row 3 by {4 .t. The augmentcn matrix of the homogeneous system is [ ~ ~ ~ ~ 1 2 1 2 2 .10 .Chapter2 Multiply row 2 by 1 . z3 = .3 times row 1 to row2. 2 3 5 =t [! 1 1 14 . and that the solution set of the system can be described by the parametric equations z2= s . 1 [ 2 0 7 0 7 4 . 42.:2] 4 10 ] Add .
2 39 From the last row we conclude that z = ~2~ and.: 4. and the system will ha~e infinitely many solutions. The augmented matrix of the sys t~m is [~ This reduces to 1 7 3 17 2 a2 + 1 1 1 ~ 7] 16 3a [~ 7 :3 a 2  7] _"l 0 9 3a+9 The last row corresponds to (a 2 . from back substitution. 2 a2 5 n ~ ll [~ 2 a 2  9 a~ 3] and the system has infinitely many solutions. bas infinitely many solutions.1 times the first row to the third row. If a= 3 thls becomes 0 = 0. If a = 2. The augmented matrix of the system is [12 Add 2 times row 1 to row 2. from back substitution. If a f.±3. 1 6 1 2 1 ( a 0 2  3 4 a. If a:/.4}z =a.2 The last row·corresponds to (a2 . the last equation becomes 0 = 0. the system has exactly one solution. then 46. thus the system reduces to only two equations and . then z = !~:!:~ = o. If a= . from back substitution.3 ~] 2] Add 2 times t he fi rst row to the second row. thus the system has no solution. Jf a= 3.Exercise Set 2.2.2. it is clear that y and x are uniquely determined as well. then the last row corresponds to 0 =0 y = :. The augmented matrix of the system is [~ 0 0 2 1 2 2 a2 3 . y and x are uniquely determined as well. then the last row corresponds to 0 = 18. the system is inconsistent. the system has exactly one solution in this case. the last row corresponds to 0 = 6 and the system is inconsistent. then 2x + 2y + 2z = 2 f. If a = 3. 41. (a) If x + y + z = 1. Add . · 44. 1£ a= .9)z = 3a + 9. This system has exactly one solution for every value of a. 45 . . The planes represented by the two equations do not intersect (they are parallel).:_39 = 4 ~ 3 and. with z serving as a free variable. thls becomes 0 = 4 and so the system is inconsistent.: ±2 the system has a unique solution. ±3. x is uniquely determined as well.3. If a 'I.:_ 3 and.
2z = 2 6x .317 • [~ 3 . Y = y 2 . Multiply row 3 by . z 2x. Add .22 2] 1 0 1 Add 22 times row 3 to row 2. [~ =~ _:] Add . 3 31 r20 1 .3 times row 2 to row 1. Add 2 times row 2 to row 3. and z = tany. To rt:<. Add 2 times row 1 to row 3. from back substitutio n. 48.3y + =9 We solve the system by per forming the indicated rvw operations on the augmented matrix 2 [6 1 3 3 1 4 2 2 !] = 1 and Add . This is the reduced rowechelon form.y + 3z = 3 4x+ 2y.em is linear in the variables X = x"J.8 4 4 lo o 8 oj From this we conclude that tan 1 = z = 0 a. y = cos{3. The system is linear in the variab1es x = sin a.40 Chapter 2 (b) lf x + y + z = 0. and '"Y = 0.2 times row 1 t o row 2. Interchange rows 1 and 3.luce thl' matrix [~ ~ ~] 3 4 5 to reduced r owechelon form without introducing f1actions: Add 1 times row 1 to row 3. Tbis sy11t.nd. T he planes represented by the equations coincide. that cos {3 = y sin a= x = 1. Add 3 times row 1 to row 3. Thus a = {3 = 1r. t. Add 2 times row 3 to row l. y = s. Interchange rows 2 and 3. Any set of values of the form x = s.t. 50. then 2x + 2y + 2z = 0 alsoi thus the system js redundant and has infinitely many solutions.. with augmented matrix [i : 1 2 .3 times row 2 to row 3. 49. and z = twill satisfy both equations. and Z = z 2 .
thus x = ±1 .000 l .a .).000 \Vhich results in y ~ 1.co < t < co.l. 1 0 y = z i. matrix is (6 0 0 ~ g g Thus x = ].Exercise Set 2.l. (b ) If we first interchange rows and then proceed as directed.000.OOOx . y = ±. 2 [ . .000 10000 0.OOOx . z = t.1 OOOOy which results in y ~ = 10000 = 10000 1. 2 l 1 0 0 a b a c . the system has only the trivial solution. ~ =~ 2 ~ 8].OOOl x + l.OOOy = 1. the augmented matrix js [ i & 2 = 0. (a) Starting with the giVfm system and proceeding as directed.em is consistent if and only if c.OOOy l .OOOy = 0. 52.000 l.000 and x ~ 1.b = 0.e. = 1.1 1 . Y = 3.OOOx . and the reduced rowechelon form of this 0 !t..OOOy = 1. we have l.000 and x ~ 0.000 + 10000y l .1 ~ matrix is [~ ~ ~ ~] · 0 0 0 0 Thus the system ha.l.000.OOOy = 0.000x .OOOlx + l. ~ ~0] 1f)..l.OOOy 1. we have O.3. and the reduced row~echelon form of this 0 g}.A = 2.).OOO:t + lOOOOy .OOOx 1.000 0.<l infinitely many solutions: x = where .000 O. 2 2 2 (} 1). 1f .nd Z = 2.OOOy l. a.ab l row~ec:helon form: Thus the syst. and z = ±JZ. 51. The augmented matrix U~ I ~] reduces to the following 1 [~ 53. the augmented matrix is [ 2 . y = 0. This system is homogeneous with augmented matrix 2.2 41 The reduced rowechelon form for this matrix is [~ ! H ] It follows from this that X = 1.
+ l.34 0.OOOy = 3.OOx1 .70 O.60x3 + 0.ll x 1 .OOOy = l.26y :=::: = 1.33y = 0.OOOx + LOOOy = 3.25 O.54 l.40 = 0.0.OOx resulting in y ::::: 1.70x + 0.94 0.25 0.O.70x 0.0.OOOx + 50000y = 0. ~ 1.lOx1 = 1.OOOx + l.20xs = 0. 1 3x:~ + 0.00002x + L OOOy = 1.OOOx + 50000y = l.SOx1 .2lx + 0. The exact solution is x = 1.45x3 O.02x2 + 0.lOxt + 0.000 + l.34y = 1.24y = 0.54 = 1.000 l.OOOy = l.21x 0.34 0.OOOy = 1.33y = 0.000 and x ::::: 2.00.OOOy = 1.0.13x:~ + 0.36x:J + 0.llx1 .0 .00002x l.42 Chapter2 (c) The exact solutjon is x = 140:: andy = 0.13x2 + 0.20xa = 0:02 l.50000y = 50000 which results in y ~ 1.IOx1 + 0.24y = 0.000 and x ~ The approximate solution using partial pivoting is 0. directed.000 0.02 .25 = 0..30xs = 0.000 l.Olx2 + 0. y = 1.26 l.OOOx + l.O.OOOx + l.50xt. (b) Solving the system as directed.00 and x 0.00002x l.70 O.36x2 + 0.Uz:t .02 O.30xs = 0. (a) Solving the system a.33y + 0.Olx2 + 0.45xs = 0.36x:z + 0. we have O.000 3. we have + 0.OOOy = 3.000.000 l.OOOx !~S~ · The approximate solution without using partial pivoting is + l.000 which results in y 54.45xa = 0.000.20xs O.54 + 0.21x + 0.94 0..000 50000 3.000 50000 .34y = 0.OOOy = 1.OOx + 0.
U 8 is not the zero matrix.08XJ 0. . These sta tements are not true for nonhomogenous systems. or the entire plane) with the second containing Lhe origin. D7.2.OOxz 0.02x2 l.00.39x3 0. (a) All three liues pass through the origin and at least two of t hem do not coincide. lines.08 0.mple. Similarly for the other equation.02x2 0..08x3 0. (b) Yes.40 0. then the nonhomogeneous system will either be inconsistent or have exactly one solution. 02.. then there are at most 4 free variables (5 .40 1.r where 1· is the number of nonz. The first system may be inconsistent.13x2 0.13x3 0. 1.of columns). there will be infinitely many solutions. If B is not the zero matrix.13x3 =  1.00. (b) At most five (if B is the zero matrix) . The exact solution is x 1 = .60x3 i. If axo + bvo = 0 then a(kxo) + b(kyo) = k(axo + byo) = 0.39 0. . If the first sy~ tem is consistent then the solution sets will be parallel objects (points.27 0.r where r is the number of nonzero rows in a. (c) At most three (the number of rows in the matrix) . D4.2. (d ) T:n. DISCUSSION AND DISCOVERY Dl. 06. If a:r.27x3 = = = 1. D5.00.60XJ 0 . + + + + + + + + 0 . (b) False.. (c) At most three (the number of columns).13 1. Yes in both cases. X3 = 1. (a) At most three (the number of rows in the matrix). x + y + z = 0 and x + y + z = 1 are inconsistent.40 1 . (b ) If the system has nontrivial solutions then tbe lines must coincide and pass through the origin. row echelon form).13 0.OZx2 l. (c) Fabe... x2.he other equation. lf the homogeneous system has only the trivial solution. since there is at least on e free variable. For ex<.OOx2 resulting in x3 .13x2 O.ero rows in a. (a) At most three (the nwnbc.e.36x2 0.Discussion and Discovery 43 0. A homogeneou:. (a) False. (b) At most three (if D is the zero matrix) . x 2 = 0. the n there are at most 2 free variables (3.:::. (c) and similarly fo r 1. syi:\tem always has (at least) t he trivial S<llution . row echelon form) . and x1. If the system is consistent t hen. Yes. but the second system always has (at least) the trivial solution.:::.:::.08 0 .60x3 1.o + byo "" 0 and ax 1 + lly1 = 0 then (a) D3. If there is more than one solution then there are infinitely many solutions.
have exM:tly one solution. then b =I 0 and c =f 0. and this system has only the trivial solution x = y = z = 0. y = L. The appea rance of a row of zeros means that there was some redundancy in the system. For example. (d) False. f3t 'Y· WORKING WITH PROOFS Pl. z = tan 7. 2x + 2y = 2. . the system consisting of the equations x + y = 1. so the reduction can be carried out as follows: (b) If [~ ~] can be reduced to matr ix [~ ~ [5 ?] .1 since the equations are not linear in the variables a. or 21r. All of these are possible. The reduce<l row echelon form of a. cos/3 = 0. tany = 0. and 1 = 0. the syst. then the reduction can be accomplished as follows: If a = 0. {a) lf a . y = cos/3.1. But the remaining equations may be inconsistent. The system is linear in the variables x = sina. rr. This does not contradict Theorem 2. It follows that a= 0. or have infinitely many solutions. ·There may be redundancy in the system. 0.44 Chapter 2 08.ern has the unique solutjon x . For example[~ 1] can be reduced to either [5 nor[~ ~J· (b) False. or ·2rr. There are eighteen possible combinations in all. and 3x + 3y = 3 has infinitely many solutions. (a) True. /3 = ~ or 3. (c) False.f. Thus sin a = 0. matrix is unique. D9. then the corresponding row operations on the augmented r] will reduce it to a matrix of the form [5 ? {] and from this it follows that = K. 11'.
[~ ~) = [_ 1 ~ ~~] A . BT has size 4 x 2. (a) (b) A+·2B = H ~l+H ~l=H ~] rl~ 1 ~]. we see immediately that a= 4 and b = . (a) (b) (c) = A has size 3 x 4..2.~. . 3) 5. i 2 . or (3. thus d = 1 and c = 7 . and a+ b = 2. 1). j) = (2. (3. From the firs t and fourth equations.B'~' is not defined 4 D . 3d+ c = 7.(1.c = 6. thus a = ~ and it follows that b = 1 . b + a = 1. (a) B has size 2 x 4. Adding the second a.1 1.a = .nd third equations we see that 2d = 2. b21 = 4 (c) b.1. Adding the first two equation s we see that 2a = 9. For the two matrices to be equal. = 3.j) ll32 . Since two matrices are equal if and only if their corresponding entries are equal.3d = . d + 2c .CHAPTER 3 Matrices and Matrix Algebra EXERCISE SET 3. d. we must have a= 4. .(2BT) = [1 2) 4.3CT = (c) {d) D _ DT =[ 3 1 1_[1 3 0 ] ] = [ 3 1 3 4 (e) (f) G + (2F T ) =l ~ ~] + rl~ ~ r~ 4 1 3 '" 4 2 1~] 55 (7 A.1. 2) or (2. d 3. (A') = Ul (e) .a= 6. 1).b = 8. 2) (d) c. thus d = 1 and c = .l2 = .~. AT has size 4 x 3.J = 3 if a ud only if (i. = 1 if and only if (i. (b) bl2 = i.23 = 11 tl.2c = 3. 0. Adding the second two equat!ons we see that 5d = 13. we have a .B) + E is not defined . and 2d.
= EAT=[~~ 40 . (a) GA = H .56 Chapter 3 6.5DT = [ 4 ~]. p T (e) (f) 7. ~] H_:~ :~] i][~ H ~] [~ ~] [~~ !) : = 3 1 1 14 5 (b) s FB = l~ ~ r1 4 ~] [~ ~] [~ ~] = 4 0 16 5 (c) GP = H: ~l H ~l ~ :~l ~ = [:: .1 3 6 1 1]=[1 0] 3 3 1 5 = 4 5 ~] 1 32 9 25 ( c) FG= [~ ~ ~l [~ ~ ~1 =lr~ ~ ~] 3 2 4j 4 1 3 (d) B'' P = G . B +(4£T) = [: ~:] = [~ ~~] = [.[~ ~ ~] = [ ~ ~ ~J (c) (d) F _.D) + B is not defined (a) CD= [l 3 (b) AE = ~] H [1 4 2] [! ~~ 1 1 0][.!lH "" 20 12 + 20 (7C . ~] [~ ~ = !] =[I~ 1 ! I~] (e) (f) naT = GE is not defined 8.3 ~] [! ~] 3.~ ~ ~l [~ ~ ~ =~J Hi] H~~] [~~ ~~] 12 3 2 . (a) (b) 3 3C + D = ( 9 1 °) + [.
: [ :) X3 2 12. [q] [~ ~ 1 5 .9 15.r3 = 0 l Xz . (a) r 1 (AB) = rt(A)B = [3 [6 2 1 7 1 !] = !] = 21 67 {67 41 41] (b) r3 (AB) = r 3(A)B = IO 4 9} 0 7 [6 2 7 { 63 67 57] (c) c. (B) = [~ 2 5 4 7 4 9 2 1 = 7 1r.] .Jxz  = 2 6x3 = . 5x 1 :tl  7x3 == 2 2x·z + :l.1 [::]~ n1 + 6x 2 1 (b) [! =! ~n::J nJ = 13. (a) [~ ~1 .2 11. Ax = H~ !J HJ ~+~J 3 1 m m HJ +J = +5 (_.1 n .2 10. (ABh3 = r2( A) · c3(B) = (6)(4) + (5)(3) + (4)(5) =59 16. (a) (b) !] [=:] :.(AB) =Ac. X1 2X t + X2 + X3 = 2 + 3X2 5xl .I3 = 3 14.1 57 (d) ATG= [~ ~ a[~~ ~1=[2! t1 1 3 3 3 101 7 (f) D A is not defined (e) E£" ~ [~ ~ ~1 [~ ~) ~ [~~ ~~] l 9.Exercise Set 3. (BAht= r 2(B} · c1 (A) = (0)(3) + (1)(6) + (3)(0) = 6 2 7] 0 7 17. Ax = [! : ~][_i] = +!} [~ ~~ ~] [::1 ~ l~ [~) = .
+ 1}2 = 0 if and only if k = .1 41 122[ (c) c2(BA) = Bcz(A) = [~ 7 ~ 4] [2] = 3 5 7 4 [6] 17 41 19.tr(A)tr(B) = 67 + 21 +57 = 145.8 28 10 35 6 21 0] 0 0 tr(uvT)=628+0 = 22=uTv (d) (e) vru~[2 7 OJ [:] =628 + 0 = . thus 145204 = . u = uTv = vTu = 22 23.8+ 15=7 = uTv (e ) tr(uvT) = t r(vuT) = u · v = v · u = uTv = vTu = 7 22.58 Chapter3 18.204 = . (a) tr(A) = (A)u + (A)22 + (A)33 = 3 + 5 + 9 = 17 (b) tr(AT ) = (AT)ll + (AT)2.59 20. thus tr(BA).tr(B)tr(A) 21. tr(B) = 145.: + (AT)33 = 3 + 5 + 9 = 17 (c) tr(AB) = (AB)u + (AB)2 2 + (AB)33 t. (a) ' ' (BA) ~ • 1(B)A ~ [6 2 4[l~ t3 2 : : 7] : = [6 6 70[ (b) r 3 (BA) ~ • (8). (a) uT'v = 145.) = {6.(12){17} = (b) uvT = 145.22 =uTv tr(uvT) = tr(vuT) = u · v = v.59 = {2 3] [:] = 8 + 15 = 7 [~] [4 5j ~ [~~ ~~] (d) v'ru = l4 5J[!] = . (a) uTv = [3 4 5] m = 6 _ 28+ 0 = 22 (b) uvr = [ :] [2 7 0] = (c) [ .r( AB) . tr(A) = 3 + 5 + 9 = 17.(17)(12) = = 6 + 1 + 5 = 12.4 ~ [7 3 7 5[ [~ 5 . [k (k l ll [~ ~ ~][:] = [k 1. (a) (c) tr(B)= 6+1+ 5 = 12 tr(BA) = (BA)n + (BA)22 (b) tr(BT)=6+ 1+ 5 = 12 + {BA)33 = 6 + 17 + 122 = 145.
then the entry a.Exercise Set 3. then ci(AB) AB and AC have the same jth column. If. (a) If B anu C have the same jth column.1 59 24. then the entry Cl. (b) If the jth column of B is a column of zeros. t. The first of these inequalities says that the entry aij lies below the main diagonal and also below t he "subdiagonal" cor>sisting of entries immediately below the diagonal entries. then (using the row rule) r i{AB) = ri(A )B = OB = 0 and so the ith row of AB is a row of zeros. then n = r . The .iJ. On the other hand.10. Aci (B) = Acj (C ) = cj( AC) and so (b) If B and C have the sa.j has row number larger than column number .i has row number smaller than column number. Let F = (C(DE}.j] has zeros in a. that is. in addition. ::= 30 .j > 1 or i . it lies above the main diagonaL Thus la. then ri (BA) = ri (B)A and C A have the same ith row. T hus A is m x n and B is n x m. from Theorem 3. 31. (a) = r. the entry in the ith row and jth column ofF is the dot product of the ith row of C and the jth column of DE. It follows that AB is m x m and BA is n xn. (c) If i < j.nd BA is r x n. 1 h~ unequal row and column numbers. if BA is defined. it lies below the main diagonal. 29. Suppose that A is m x n and B is r x s. Thus (F)23 can be computed as follows: 27. (B) = A O = 0 and so the jth column of AB is a column of zeros. then (us ing the column rule) c_i(AB) = Ac. (d) If li . that is. it is off (a bove or below) the main diagonal of the matrix [a..7. 25. If AB is defined. then s = m . then s = m a. Thus B is ann x m matrix.il > 1. Thus laii] has zeros in all of the positions below t he main diagonal. Then. that is. Suppose tha t A ism x nand B is r x s.j < 1 . an 0 [aijJ 0 = 0 0 0 0 a22 0 0 0 0 0 0 a33 0 0 0 a44 0 0 0 0 0 0 0 0 0 0 0 0 a 66 0 0 a 55 0 (b) If i > j . If BA is defined. [2 2 k] [~ ~ ~] lk~] = 12 r 0 3 1 2 kJ [4: 3kl = 12 + 2(4 + 3k) + k(6 + k) = k + 12k +·20 2 = 0 6+k if and only if k = 2 or k = . then either i . (a) If the i th row of A is a row of zeros. either i > j + 1 or j > i + 1. 28. A(BA) is defined.1. then n = r . thus the matrix has zeros in all of the positions that are above or below the main diagonal. that is.ll of the positions above the main diagonal.heu a .me ith row.(C )A = ri(CA) and so BA If i ::j: j.
represent t he total units sold in ea.1 ~: : ~:] 1 .. Thus the entries on the main diagonal. The component. 34.i = l. arc all l..s of the matrix product [!~ : ~] [~]. whereas the remaining entries are all + 1. the February expenditures + (6)($2) + (0)($3) == $17. (b) The entries of the matrix M .1 1 1 .1 if i +j is odd and +1 if i +j is even.1 1 (c) We have a. The matrL'<..S.j = 1 if i = j or i = j ± 1.j =i +j is the sum of the row and column numbers.l)i+i is . otherwise a.j} = 0 0 0 0 asz 0 0 0 a33 a43 0 0 0 a4s 0 0 0 0 a44 0 0 as4 0 ass a6s as6 aM 32.r~~l ~ ~ ~~ 3 30+ 21 12+ 9 15 + 8 r =~ =~ ~ ~] 1 . For example. (a) The entry a.ch of the categories during the months of May and June. Thus the matrix is : . . Thus the matrix A has the following form:· an a21 <lt2 0 £l23 a2. Note that June sales were less than May sales in each <:Me.. (b) The entry aii = (.. Thus the matrix is : r ~. the total number of medium raincoats sold was M3 2 + J32 = 40 + 10 = 50.2 0 0 a34 A = ia. .1 1 represent the total expenditures for were (5)($1) purchases during each of the first four months of the year. is 33.60 Chapter 3 second inequality says that the entry aii lies above the diagonal and also above the entries immediately above the diagonal entries. (a) 45 + 30 60 + 33 7f:> + 40] [75 93 115] 30 +23 40 +25 51 53 65 T he entnes of the matrnc M + J = = 65+12 45 + 11 21 77 56 [ 10 + 10 35 + 9 23 50 44 . thus thes~ e ntries represent decreMP. For example.1 1 .J 15 27 35] = : 5 ~ ~: [ 1 30 26 represent the difference between May and June sales in each of the categories. and those on the subdiagonals immediately above and below the main diagonal .
jeans. Since rt(AB) = rt(A)B = 11 we have AB = [: 2] [! ~] = [7 Sj and r2(AB) = t2(A)B = [1 lj [~ ~] ""' [4 3] ~) · .1 . we have Method 3. Met. medium. Using the row rule. If A = [o 0 ' then AA = [o 0 1 0 = (oo]. (d ) Let y 1 1 1]. Since we have AB = [: ~]· Method 2. then A must have size 6 x k a nd B must have six k x 8. DiSCUSSION AND DISCOVERY Dl. thus A has 6 rows and B has 8 columns. Let A = G ~] and B = [~ ~] . If AB has 6 rows and 8 columns. and raincoats sold in May. In the following we illustrate three different methods for computing the product AB.7 (the dot product rule}. suits. This is the same as what is later referred to as thP. 1. o ] o [o o] 00 ] 1 1 03. column rule.Discussion and Discovery 61 (c) Let x ~ r:]· l = 11 Then th7 wmponents of M x = [:~ :~ 15 <10 ::] [:] = 35 [:~~] 90 represent the totalnum ber (all sizes) of shirts. and large items sold in May. Using Definition 3. D2. . (e) The product yMx = [1 1 1 1]~:~ :~ ~] [~] = ~ {1 15 40 35 1 l][~E] 90 =492 represt>nts the total number if items (all sizes and categories) sold in May. Using Theorem 3.6. Then the components of 45 60 yM=[l 1 1 1) 30 :~ = 1102 195 205J 12 65 [ 15 40 35 30 75] represent the total number of small.'wd 1.
From the column rule. c (A) = i] m . True. c 1 (AB) = Ac.~ has no square root. ~1 Not all matrices have square roots. (A) = [~] · Similarly. (c) True.62 Chapter 3 D4. D5. For example. it. Yes. AAT is n x n..e. then AB will have a column of zeroll. 3 x 3 for which A[:] ~ ["f] for all x. the matrix A = [~ ~ ~] has the property that AB = 28 for every 3 x 3 matrix B.(B). [o~ o~ o~l ha. (A) + yc. (a) S1 = [~ ~] and S 2 = [=~ =~] are two square roots of the matrix A= [~ ~]. and z. (e) (f) [~ ~] [~ ~] = [~ ~). y. The matnx A ~ [~ i . then AT ism x n. If.nd AAT both ar~:. Thus AT A :1. . z = 0. in addition. Then A[:] · [~]and. then B must ben x m . the zero matrix A = for every 3 x 3 rnatrix B . Here is a proof (by contradiction): Suppo"' A. is no such matrix. 8Jld so tr(AT A) and tr(AAT) are both defined. For example. square matrices. If u and v are 1 x n row vectors. If AB and BA are bot h defined and A ism x n. we must have A [:] =: l = mThus there . False. it follows that c. If A is n x m . if A is 2 x 3 and B is 3 x 2. y. (b ) 'frue. (b) The matrices S (c) = [±~ ~) and S = [±:s· ~~] are four different square roots of B = [~ ~]. y = 0. For example. 09. •nd z.(A) + zc.. is easy to check that the matrix [. Here is a proof Since xc. AB + BA is defi~d then AB and BA must have the same size. 07.o:. then u T v is ann x n matrix. (A)+ yc. on the other = A [ hand . D6. we must have xc.~] is the only 3 x 3 with th~ property. Yes.( A) ~ 2 [= ~:] [ for all x. the prop erty that AB has three equal rows (rows of zeros) DS. m = n. There is no such matrix. Thus if B has a column of zeros. then AB and BA are both defined. Talting x and c3 (A) = ~ 1. thus AB is m x m and BA is n x n. (d) False.(A) + zc3 ( A) ~ A[:] . and A T A ism x m. i. (a ) False.
(a) (A+ B)+ C = l2 flO 0 4 5 .) 31 [10 4j = 9 = 1 5 6 12 1 1!] 19 19 A+ (B r C)= [j 28 31 21 1 4 4 ~] + [: 5 8 2] [101 6 15 5 7 2 6 12 1 1:] (b) (AB)C = [28 2~ a~] [031 257 36 3] [187 5 4 11 A(BC) = u [! = ~] [10 = 83 87 22 38 222 ·67 33 278 240 26] 278 1 r1 1 62 17 27 33] = [10 21 15 12 27 83 87 222 67 33 26] 240 2 7 5 (c) (a+ b)C = (3) [~ 6 9] !] 9 aC+bG [~ 8 28 12] + [ 7 49 28 = [3 216 0 14 21] 0 16 36 21 12 9] 12 20 35 63 9 15 27 . the jth component of the vector Ytrl (B)+ Y2r2(B) + · · · + Ysr s(B} is Ytblj + Y2b2j + · · · + Ysbsj = y • Cj(B).1. EXERCISE SET 3. P2. Since Ax= x 1 c 1 (.t1) + x 2 c 2 (A) + · · · + XnCn(A). X2c2(A) the linear system Ax= b js equivalent to X1C1 (A)+ + · · · + XnC.2b'2j + · · · + aisb.2 63 DlO. WORKING WITH PROOFS Pl. {a) :L)aikbkj) . Suppose B = ~bij] is an s x n matrix andy= !Yt yz Ys]· Then yB is the 1 Xn row vector yB = fy • c1(B) y · c2(B) Y · Cn(B)] whose jth component is y · Cj(B). the ijth entry of the matrix AB.Exercise S&t 3.:=1 = a.6 2] + [0 7 1 10 3 2 7 f.lblj + a.2 1.l(A) = b Thus the system Ax == b is consistent if and only if the vector b can be expressed as a linear combination of the <. Thus Formula (21) is valid. The second column of AB is also the sum of the first and third columns: s Dll.:olumn vectors of A. On the other hand.•i (b) This sum represents (AB)ij.
64
Chapter3
(d) a(B  C)= (4)
[~~ =~ =~] = [~~ ~: ~!]
1  12  3 4  48  12
aB  aC
=
32  12 20] [ 0 8 0 4 8 4 28 [ 16  28 24 12 "20
36
~~] [~: ~: ~~]
=
4  48 12
2. (a)
a(BC) = (4)
= (4)
(r 3 51 r m [18 62 132 ] [72
2
7 5
0 4
1  7
2 6
1 3
7
17
~] = 22
38
11
 27
28 44
 248 68
88
 108
152
(aB)C =
[32 124 20] [0 27 3] [72 0 8 1
4
=
28
16
 28
24
3
5
9
44
 248 68  108
!32]
88 152 152
B0C) =
r8 3 5 [ 0 8 1 = r72 248 132] 2] 28 G8 88 28 16
0
L 4
(b)
1 7
2 6
1
4 12 20
36
44
108
(B + C)A
=
] 0 ] [8 5 2][ 2 1 3 [ 20 3 9 6 0 4
1 7
8 2
5
=
 10
37
()7
71
15
2
1
4
16
0
] 10 [26 256 11] + [ 6 5 542 [ 20 BA +CA = 4 13 6 31
=
 30
37 0
9 ]
6i 71
4
26
1
12
26
70
 lfi
3.
(a )
(AT)T
T ~ H _ [2
0 4
5
1 4
=
1
4
( b ) (A+ B)r =
riO 0
L2
4
5 6
0
4
r
7
10
0 2
1
3] !
0
5 7 0 1 2
=A
= [10 4
2
~]
10
AT
+ BT ~
H
6
21 15
2 + [3 8 1]
4 5
3
5
(c) .
(3C)T ~ [~
12
27
T[0
=
 [~~ l
6
0 6] 2 5
7 10
2
1
7
4
6 21 9 12
27
1 =(3) [~ :]
3
!]~ 3C'
Exercise Set 3.2
65
2r 36
] =~~ 3~] [~! 20 0
T =
 31 21
6
38
36
2
~ ~] [~3 5~ ~]....., [~:6 ~~ 2~] 6 4
38 36
6  12 2  3
T
4. (b)
(B  W ~
sr cr =
] H18
T
=
[
1 8
81 1]
6  12  2 3
=
(d) (BC)'i' """""
[~ ~ ~]  [~ ~ ~] [~ =~ 1~] 5 2 6 3 4 9  8 3 18 62 33 r18 ~ ~~] [
2
7
ll
17 27
22 38
=
cr Br ,... [~
:1
4
~ ~] [~
9
1
62  33
1
22
38
5
0 2
~] = [=~~ 1~ ~~]
6 33
22 38
5
(a)
tr(~) ~
t r(A')
lr ( [
~ ~
;
: ]) '' 2 +1 +I = 10, and
~ ,, ( [ _ ~ ~
,, ( [
(b)
, ,(3.4)
(c)
tr(A)
~ 10, tr(B)
nnd tr(A+
B) ~
(d) tr(AB)
~
tr (
n~: ~~]) "' r~~ ::: ~m ~
tc (
m ~ ~~ :m ~ ~ m! m _
tr
= 2 + 4+4 = 10
6 + 12
+ 12 = 30
~ 3tr(;lj + B) ~ tr(A ) +tr(B )"
= 8 +I
+6 = 15,
25; thustr(A
2831 + 36
= 33,
= 33; t.hll.s tr(AB) = tr(HA) .
and tr(BA) = t.r
26
4 ( [ 4
 25 6 26
13 1
ul ) =
26 + 6+ I
66
Chapter 3
6
6. (c)
Lr(A  B) = tr
0_
[ 6
18
(d )
t.t (BC)
~ ~] = 6 + 32 =  5 = 10  15 = tr(A) tr(B).
8 2
= tr
([
1
~
~~ ~~1) = 1 8 + 17 + 38 = 37,
 27 38
J
12  23 and tr(CB) = tr 24 24 ( [ 60  67
7. (a)
!!]) ~
1 3 7 5

12  24 + 49 = 37. T hus h(BC)
~
l<(CB).
A matrix X satisfies the equation tr(B)A 3X = 0 1 :1  7
+ 3X = BC if and only if 3X = BC 4 15 0 9 2
 15 60 15
tr(B) A, i.e.
8 3 5 [0 2 3 [ 2 1 ] ] [ !]
2 6
4 1
=
18
7 [ 11
62 17  42
33]
22 38
[ 30
0 30
75 60
45] = [4 8
7 41
47 47  42
78] 53
22
in w},ich case we have X
. .
=~
48 47 78] 1 47 53 [ 4 1  42 22 =C
tf a.ud only if
(b)
A ma trix X satisfies the equation B +(A+ X)T
(A +X)T =C  B
A +X
= ((A+ X)T)T = (C  8)7
X= (C B )T A = C T BT  A
0 Thu:; X
1
= :'.
r
;~
7 4
3  [ 8 0  7  [ 2 1 ~]  [!0 4] 0 4 5] 3 1
1
v ·
2 4 }
2 1 7 .  1
9
 5
2
6
2
1
4
10
8. (a)
X
= B  2:32~ = 2 ~
[3~ 2~~ =~~~] 92
 167 282 {At)1
1
(b) X = CT BT = _!_
10 10
~~
8
9.
(a) dPt.(A) = 65= 1
(b) d ~t( A  1 ) = 1
A1 = r 5 2 l
 ~]
= [3
s
~] = A
~] = (A  l)T
(c) A'l' =
G~] , det(AT) =
6 10
(AT) 1 = [ 2  1
(d ) 2A = [
~] ,det(2A) = 4
(2A)l=~ [
4 4 10
2] = ~2 [ 5 ~] = ~A 1 2 6
Exercise Set 3.2
67
10. (a)
det(B)=8+12=20
= 8
1 s1_ [ 4 20 ·4
~l
(b) det(B1)
202
+ 12
= _!_
20
(BI)1 =
20 (_!_ [2 20 4
20 :;
•!])
=
[~
3]
4 ::::B
(c) BT
=[
a
6
2 :] , det(BT) = 20
(B )
( 3B)
T
i
=
1 [4
~) = (IJl)T
(d) 3B = [
12
9]
12
18
, det(3B)
= 180
1
= 180
1 [ 12 12
9] = 60 [4 ~] 1 4 6
=
~ 8 1
ll. (a)
(AB)
1
=
[10 5]l 1[7 5]
7
= 20 18
10
slA• = 2~ [_:
(b) (ABC)_ 1
_ [
~] [_~ ~) = ;o [~~ 1~]
l

70 122
45] 79
__ 1 [ 79  40 122
·45] 70
cls~A 1 =~[I
2
2
4] _!_ [ 4 3] [ 2 1] = l [ 79 45]
6 20 4 2 5 3 40 122 70
12 _ (a} (BC)_ 1
.
.,.
[18 16 2
11] l
12
= _!_ [
12 10 16
11]
18
= __!__ [
cts1 =!
(b) (BCD)_ 1
·
r12 4] 20 [ 4 3] _!__ 42 6
32
=
40 16
12 111 18J
~} = 2~0 [ ~~ ~!]
= [36 33]  I
36
= _1 [ 240 32
~W
33]
aG
ulc1 n1
~ [~ ~] ~ r ~
:] 2~)
[._:
14 . X=
(C _B)I
AB
=~
22
[5 7] [10 5] = [88
__!__
6
4
18
 7
11
66
 29
37]
68
Chapter3
16. (a)
24]
67
(b)
=[ 2
19. {a) (b) Given that A  1
6 14] 6 [3 o] = sl3 14] 1 o 2 1r 6
1 13
= (~ ~],and noting that det( A 1 ) ~ 13, we have A = (A 1 ) ·· 1 =
[_!
~).
Given that (7A) 1
= [~ ~],we have A = H~ ~~
=
~[~ ~)·
1] 2
1 13
1 __
20. ( a )
, (,ivc'? that ( 5AT)  1 =
1 and so A =  [:.!. 5 5
[3 1] it foll ows t hat 5A ·lJT = [2 5) . 1 5 )
5 2
3 3

T =
[3 5
(l) [~:; _3 __ [~::;. 1]
·.1>], ._,
(b)
Given (I+ 2A)A :::
1
= [~ ~J. we have I+ ZA = (~ ~r
1
=
r ~] :
and so it follows that
1 1 1 1 2 (13 [52]
[I OJ) = 13 [9cl]' 1 o
l . 2
21 . The matrix A =
[~ ~} is invertible if and only if det(A)
= c2

c f. 0, i.e. if and only if c :/:. 0, 1.
22. The matrix A = [ ~  :) is invertible if anJ only if dct(A) = c2
+ 1 ~ 0,
[:
e
i.e. if aud only if c =/= ±L
23. One such example is A =
[~ ~ ~1 In genera), any matrix of the form
3 2 3J
1
~ ;] .
J
c
24. One such example is A = 25. Let X= [xiJJ
[~ ~ ~J. In general, any matrix of the form [~ ~
3 2 0
;].
0
e  !
3x3
. Then AX= I if and only if
[~ '] ['"
0 1 0 1 1
X21
X}2
X22 X32
xnl
X23
X33
j
X31
[~ ~]
"
lJ
1 0
X21 = 0. X13 = 0. if and only if the entries of X satisfy the following system of equations. Then AX = I if anrl only if [~ ~ ~] [::: 2 0 l Xll X13] X23 = [} 0 XJJ x:n 0 1 0 0 ()] 0 l i. :tu = 0.t = [ ~ ~ ~] .2 'l 26.!·Thus A is invertible and A. r23 = 0. X33 = 1 Thu& A is invertible and A . X12 = 0. x13 = 0. 2x11 + X31 = 0..= [ ~ !]· ! '2 = X33::.12 = . 2x12 + X32 0.2 69 i. :t21 = 0. (a) The mat<ix uvT ~ [:][I I 3) 9 = [ 336] ha.Exercise Set 3. X31 = .: the property that 2 2 4 4 12 3 2 4 . Let X = lx. X32 = 0. xn = 1.2 0 l 28. ~ = x2 1 = .iJ 3X3 .1. Xn ! X3 1 =} + X32 + X:tJ =0 =: 0 + X:z t =0 + X22 Xt3 =1 + X23 xz1 =0 =0 + X33 = This system has the unique solution xu 1 =0 1 and x 13 = :t12 = xzz = x23 = X31 x.e.e. X23 = 0.2. X22 = l. and 2x 13 + X33 = l. if and only ii xu = 1. This is a very simple syst.em of equations whicb has the solution = =.
I.2A +A = I . if C 31.1 0 = 0. ([_! = .4.2A + A 2 = 1 ._:~~: ::::] .1 = I . U A is invertible and AB = AG then.A .1 = A.4.4 . If A invertible AC = 0. (a) If A= [.1 =I (b) From part( a) it follows that A(A.70 Chapter 3 (b) u· Av ~ HJ. we have (uvT) 2 (uT v )uvT.4A + I = I .1 )B(A + B).1 (A+ B)B. t h us J .42 = / . (a) If A anc.1 + B.)) HJ ~ [~~] . ~] [:]) ~ HJ.A is idempotent.A) 2 = 12 .1 = B(A ~ B) .1 • Thus the matrix A. then det(A} = cos2 A_ 1 = () + sin 2 B = 1. then C = IC = A)C = A. A+ B. x' 32 . we have = (u vT)(uvT ) = u (vTu)vT = (vTu)uvT = an( so the matrix A = I+ u vT is invertible with A . and if A + B is also invertible. if u Tv =f ..1). ( b) If .2.1 ) . and so A t + B .1 = 2. A(A.1 )B(A + B). = xcosO. and so i. then A= AI= A(CC is Similarly.1.1 is invertible nnd (A. T hus.~~:) [~]. we have 1 30. 1 ) = (AC)C.l B are invertible square matrices of the same size. [~n ~ 6+8+48~ 112+63 50 ATU· V (U ~ :][.1 = (B + A)(A + B).I)= 4A 2 . thus 2A I is invertible anci (2A .1 = OC.l+~ 1 vuvT. then (2 A .1 )B.1 .e.1 + s.1 = 0.ysin8 andy'= xsinO + ycos£1. is invertible and AG = 0.2(a)).1 ::: (I+ AB. then 33. 34. Thus A is invertible and· : [cosO sinO .1 + B.::. Since u Tv = u · v:::: v · u = vru.1 + B .~: .4 + ! 2 = 4A. (a) If A 2 :o:: I.1 )(2A. t hen (1.sin fJ] cosO for every value of (). (b) The given system can be written as[~]= [_:7. HJ ~ and (A ~50 29. using the Associative law (Theorem 3.1 (AC) = A. .
DISCUSSION AND DISCOV ERY 01.B2 8 2 if and only if A B 1 + BA = BA. (a) If A 2 + 2A +I= 0.be) [~ ~] = [a2+bc ab + ba1_[a Jda ab l·db]+(ad ..12.B ) = A 2   .:!. On the other hand. tr(fn ) = 1 t 1 + ·· · + I ·= n .tr(BA) = 0 for a.(a+ b) [ ~ ~] + (ad. thus A is invertible and we have =A .B A= I. 37.2.21) . = B = 2 (~ ~] .B) = A 2 (c) (A + B )(A . D2.B 2 = (_~ ~]· On the other hand.Discussion and Discovery 71 35. (b) (A + B)(A . In genern. (a) Let A=[~ ~J and B = [~ ~] · AB Then A = 2 [~ ~] .J 0 ~ ] . D3.be) . . From parts (d) and (e) of Theorem 3.l t. (A + B)( A B)~ G~] [_~ ~l [~ 0  r=~ ~] .h€ entry in th~ tjth position of the matrix An represents the number of ways of traveling from i to j with exactly n . it follows that tr(AB . For example.ny two n x n matrices A and B. Thus it is not possible to have AB .then A ±1 2 = f.BA) = tr(AB). t here are three such ways of traveling from J to 6 (t hrough 2. then p(A) = A~ .1 int. If A = [: ~] and p(x) = x 2  (a+ b}x +(ad ..errnediatc s tops . ihcn i A1 = . and A2.bc)(l 0] ca + de cb + d2 J ac + de ad + d2 0 1 36.21.. The adjacency matrix A and its square (computed using the rowcolumn rule) are as follows: 0 1 0 0 0 0 0 0 0 () 0 0 0 0 1 1 0 0 0 0 0 1 l 0 0 0 0 l 0 0 0 0 0 s 4 0 0 0 0 0 1 1 0 0 1 0 0 1 (} 0 0 Az =: 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 1 3 1 0 0 2 0 0 0 0 1 0 0 0 0 0 0 0 0 0 The entry in the ijth position of t he mat rix l1'2 represents t he number of ways of traveling from i to j wit h one intermediate stop. .A2 2A = A(A. 3 or 1) and two such ways of t raveling from 2 to 7 (t hrough 5 or 6).{a+ b)A + (ad  bc)I 2 = [~ !] 2 . If A is any one of the eight matrices A = .
A) 2 • (c) True.72 Chapter 3 (b) If p(A) = anA"+ a11 _ 1 A". Thus A is invertible with At= !!. ez e3] =I. If A is invertible. thus {A.n. First note that if A3 is defined then A must be square.1 + · · · + a 1 A + aol = 0.nd the reduced row echelon form of this matrix is [~ D7.1 are the solutions of Ax= e1. No.. Thus A.2 . True.1 = (A.BA . where a.li. Let x 1 . then we have A" . (AB) 2 = (AB)(AB) = A(BA)B.1 b .  A(' a . An ao 1 anI ao oo  al ao J) = I !!. The basic fact (from Theorem 3..B)'J. D5. and Ax= e3 r espectively. For example.2 .AB + A2 . but if B A ¥: AB then this _will not in general be true.1 • . = A 2 . unique solution for every vector b in R3 . [~ ~]. x2 x3}. .11) is that (AT). A n . Thus if A 3 = AA2 =I. and fr0m this it follows that (A . then n•Al = ( AB).ABBA + B 2 and (B. Ax= e 2 . For example. if B = ·A then A+ B = 0 is not invertible (whether A is invertible or not). whereas tr(A)tr(B) = (2)(2) = 4.o =I 0. if A= (e) (~ ~) and B = G ~J then u (AB) = tr ([~ ~J) = 3. The other ten matrices6U have determinant equal to 0.ve (A .!. The matrices 0 1 0 13] 0 1 0 0 [~ ~]. and thus are: not invertible. Expanding both expressions.[ . D6. If AB = BA. False.1 = B = [x1 Xz x3]. Ax= e 2 ..2. ao oo D4.1 = ((AT)n)1 = (AT) . If A and B commute then (AB) 2 = A2 B 2 . (b) True. .1 = (BA)  1 = A .1 )T = {(A")T).1 ~::. then the system Ax= b has a. These six matrices are invertible. The augmented matrix of Ax = e 1 is [i H~] a. and let B be the matrix having these vectors as its columns. it follows that A is invertible with A .n)T = ({A") .!. B = !x. DS. the columns of the matrix A.1 )7'.1 s. and x 3 be the solutions of Ax= e 1 . and [~ ~] have determinant equal to 1. The matrices (~ ~] .. G~]I and [~ n have determinant equal to 1. T hen we have AB = Ajx 1 X 2 x3] = [Ax1 Ax2 Ax3] = !e.I = A 2 . A". we ha.. and Ax= e3 .B) 2 = (B. (d) False. X:t.A)Z = 8 2 . namely x = A. (a) False. (a) (b) From part {a) .
we have A Dc) A = ~ (cA) = ~ 0 = 0.(: . such Lhnt [: !] (.8.2. and h .hree rn< r ices clel\rly ha.C) = A(cj(B).ab]) [f'cl db = ad 1 be [da.(C)) = B(acj(C)) = B(cj (aG)) = C j [B(aC)] = 1A = (( P5..bc)e =.nd c :..1 1 ·" a [c b{ ] d 1 [ \ad . It is clear that the matrices (ab)A and a(bA) ha.Aci (C ) = Cj (AB).C)J = Acj(B. T h('. The following shows t ha t corresponding column \t vectors are equal: cj ja(BC)] = ac1 (BC) = a(Bc1 (C)) Cj!a(BC )J = acj ( BC) = = (aB)cj(C) = c.2.be 'J 0 f. It t hen follows that A 3 is invertible and (A 3 ) · · l = (A2 ) . =0 af + bh = 0 cf + dh = 1 Multiply t he fi rst equation by d . multiply t he second equat ion by b. from Theorem 3. if ami only if the followhg system of equa •ions is consistent: . using Theorem 3. The argument that the matrices A(B. thus A.1 A .. then AA.C) and AB.1 = A. and subtract .A =( 1 [ ad . (b) Let A =.ve the same size. We proceed as in the proof of part (b) given in the text.l(c))..2.1 =A (b ) .1 =I and A .d . Then A is invertible if and only if !. et c.ds to (da . T he following shows that corresponding column vectors are equal: cjiA( B .1 = (A.Working with Proofs 73 WORKJNG WITH PROOFS Pl.1 A . (a) Tf A is invertible.1 • • A. In general.ve t he same s ize.1 = (A. The following shows tha t corresponding entries are equal: P2. be c d bl) = ·a l [ad · · be (t< .c1 (AC) = cj (AB .1 )n. ae + bg ce + dg ~] = [~ ~] = 1 i. This Jea.c l 0 A. (a) · AA.AC ) P1 . [(aB)CJ a(Bc. P6 .1 is invertible and (A.Cj(C)) = Ac1 ( B).1 A. (An ) . 1 ) 2 A.1 = {A..8. A 2 is invertible and (A2 ) .1 ) 3 .rne as in the proof of part (b) given in t he text.re scalars e. If A is invertible t hen. 'The following shows that corresponding entries on the two sides are equal: P3. If cA = 0 a.be .1 ) ..1 A = ! .here a.c d .(. P7.AC must have the same size is the sa.be = 0. from the remark following T heorem 3.] where ad .e.f 0 then.1 ) 2 . g.se t.1 = (A .l = A.
erchange the first and fourth rows.3 1. then the system of equations has no solution and so the matrix.he fourth row.. 0.74 Chapter3 and from this we conclude that d = 0 is a necessary condition in order for the system to be consistent. Let us assume that b = 0 (the case c = 0 can be handled similarly). (c) is elementary. we have shown that if ad. hy ! . Why? From ae = 1 we conclude that e f. (a) Add 2 times the second row to the first row. it is obtained from J2 by adding 5 times the first row to the second row. and (d) are not elementary. {c) Interchange the first and third rows. (c) This matrix js elementary . tr(AB) = L (ABh~c = L L:>~rtbtk = L L btkakt = k =: l k=l / "" 1 1=1 k = l n n n n n n L(BA)u 1=1 = tr(BA) EXERCISE SET 3.7 0 1 I 0 1 0 ! 0 0 3 0 OJ 0 1 0 I\ 0 r [~ == 7 l 0 1 0 v ~] . {a) .. (d) This matrix is not invertible.) by interchanging the rows). 3. But this is inconsistent with c/ = 1! ln summary. 2. (b) . and therefore not elementary. (d) Add ~ times the third row to the first row. (a) Add 3 times the firs t row to the second row.. 4. (d) Add 12 times the second row to t. since it has a row of zeros.1.be = 0} that be= 0 and so either b = 0 or c = 0. Then the equations reduce to ae = 1 ce = 0 af = 0 cf = 1 and these equations are easily seen to be inconsistent. (b) Multiply the first row by .be = 0. (c) lnt. (a) This matrix is elementary. A is not invertible. From this it follows that c must be equal to 0. and from ce = 0 we conclude that c = 0 or e = 0. It then follows (since ad. {b) Multiply the third ro•. 5. it is obtained from h by interchanging the first and third rows. (a) f~ ~r~ = r~ ~J 0 0 1 0 0 1 (b) [~ 0 1 0 0 1 0 0 (c) l~ 0 0 W' = [~ ~] 0 0 1 0 0 l 0 0 {d) [~ ~r r = . (b) This matrix is not elementary since two row operations are needed to obtain it from I 2 (follow the one in p art (a.
2. (a) :: 1rr1r 2 0 1 r~ j !r r~ J ~] ~ B where fmm A by interchanging the first and third rows.e.l 1 = [ g2 iO · ? 1 l] .3 75 (b) [~ ~ ~l1 = [~ ~ ~] 0 0 1 0 0 1 0 0 1 0 (d) 7. 71 ami the fact 1 21) = 10 [ 2 5] th~t det(A) = 10 1 we have A. thus EA (b) EB =A where E is the same as in part (a).he matrix A J. Using thr. 1 [0 5 : 1 0] 10 l 2 1 Multipiy the seconci row by 1~. 1 10 On the other hano 1 using the formula from Theorem 3.ide will produce t. : o[b~~r~lfrom A by adding 2 times the first row to the third row.L ~ Thus A l 2 = [_1 5 _!] . (a) E = ~] r~ 0 0] 0 l 0 2 1 (b)C ~ [~ ~ ~] (c) E = (d) E~[~ ! ~] 9. 1 0! 2 [0 11_! I ~] 10 5 .EXERCISE SET 3. then add 5 times the new second row to the first row. E 1 = [~ ~ ~]· 2 0 1 8.atrix lA I I] and perfonn row operations aimed at reducing the left side to I. 1 [2 5 I 1 0] 20: 0 1 Add 2 times the first row to the second row. method of Example 3. thus EA = c where (d) EC = A where E is the inverse of the matrix in part (c) i. (c) :. these same row operations performed simultaneously on the right ::. we start with the partitioned rr.
2 times the new row 1 to row 2. A 1 = 1 [ 1 14 . TO l 11 10 7 . 0 l 0 1 ' I 1 0 ] ~ I 0 . then interchange rows 2 and 3.2 times row 1 to row 3.10 0 1 0 ~] . add 4 times the new row 1 to row 3.10: 5 : 1 S 2 then add 3 times the new row 3 to row 1. 0 1 0 : ~ I ~ 0 : 1 : _l I _!l 10 . Add . 3:o 4 . [~ [~ Multiply row 3 by 1 . 0 5 . 4 (a) Start with the partitioned matrix [A . 3 10 0 [ () . Add . 10 3l .3 times row 1 to row 2. 11. and that A 1  [ ~.1 . 7 l I I J. 3 4 4 2 I l 0 0 [~ 4 1 9 0 1 0 ~] Multiply row 1 by 1.1 l1 [! Interchange rows 1 and 2.10: I 0 1 3 1 1 0: 1 3: Q Add 4 times row 3 to row 2.1 t imes row 2 to row 3.= [ t ~ ~~~ .4l0 0 0 0 1 Add .~] · '2 6 (b) Start with the partitioned matrix [A I IJ.76 Chapter 3 10.4l0 0 4 J I I I 0 0 1 0 1 ~] ~] ~] ~] __ 1 [~ [~ 3l0 1 : 1 I 5 .4 3] 2 .~ ] 5 1 7 10 2 From this we conclude that A is i nvertiblc .10: 1 I 5 10: 0 0 4 3 2 Add .
.1 = R. f\·. If A is invertible. since we have obtained a row of ~eros on the left side. then mult ip ly the new row 3 by [~ Add 1 times row :3 t.EXERCISE SET 3. [Note that B is the product of ejementary matrices and thus i" always an inve rtible matrix. 2 I 'i ll 1 1 0 (b) A 1 = ~ l [0 1 . [ 1 0 3 10 0 0 0 1 1 At this point.o rows 2 and 1. "" t h . In the more general situation the reduced matrix will have the form [R I B} = [BA I BIJ where the matrix B on the right has the property that B. (c) Start with the partitioned matrix !A I Jj. iot •«li ble .3 77 Add row 2 to row 3..)d at redudng th~ left. 0 I 1 o:o 1 : j I I 1:1 0 1 :0 1 I 0 ~] 0 1 0 fl l~ 0 1 1 1 1: 1 1 1 I 0 1 ~] 1. whether A is invertible or not .\ . we conclude that the matrix .. 3 Ol 0 13.irr.1 0 0] . Add · 1 times row 2 t o row 3. weconchtdet hnt .4 I I I 2 2 2 2 1 l 0 1 I 2 I ll r I ~ 2 I . nod that A·· ' = [.rtitioncd matrix [A I I J and perfo rm row operations a.l .. we s tart wit h the po.3 4 (c) A 1 = [0 ~ ~J !. 0 1 0 1 0 0 1 l 2 1 2 j] I [~ 12. (a) A l 0 1 o: l 0: .4 is not invertible. As in t he inversion algorit hm.] 1 2 0 0 [1 2 1 3[1 0 0] !0 l 0 4:0 0 1 . then R will be the identity m atrix and the matrix pro duced on the right side will be A . [~ Add 1 times row 1 to row 3. side to its reduced row echelon form R.
row is a row of zeros. If c :j: 1. [~ [~ 2 0 0 1 0 1I 0 0 1 1 1 2 I I 3: 0 1 0 0 1 .1 ~] ~] Add 1 t imes row 2 t o row 3.1 [ 0 0 If c = 1. then after rnult. we have: Add I timE''> the tlrst row to t he second row and to the t hird row. 0.1 ~] has the 14. 3: 1 I I I 1 0 0 : 1 Add .78 Chapter 3 Add 1 Limes row 1 to row 3. t. Thus we conclude that the matrix is invertible if and only if c :f:. The matrix B is obtained by starting with t he identity matrix and performing the same sequence of row operation' .iplying the fir5t row by 1/ c.1 ~] The reduced row echelon form of A is R = [) 20] . then we can divide the second and third rows by c . then t he second and third rows are rows of zeros. ] 1 0 c.nd from this it is clear that the reduced row echelon form is the identity matrix. u 3 1 .0.hc fir~t. 1. ±v2 17. thus B = [!Hl = [~ ~][! ~ :J : .hen t.1 obtaining [~ l :] ::~. [~ pro perty that B A 2 0 0 0: } I I 0 I 0: 1 1 3 1 . 1f c = 0. and so t he matrix is not invertible. c ¥= 0. 0 0 1 0 0 0 and the matrix B = = R.3 times row 2 to row l. so the matrix is not invertible If c f. 16 . R= [~ 0 1 0 ~l B=H 0 0 8 0] l 1 ~ 15.
EXERCISE SET 3. 23. 2. th€n is the matrix in the righthand block above. and then multiplying row 3 by ~· Thus if E 1 = E2E1 A = I.: ::''. then the matrix A has a 1. 4 and then reversing the order of the rows yields 0 l k3 k2 0 thus A is inv~rtible and A l 0 0 ~] [~ . .·:~·:d [rof~]A .:o:•::•:: :anT::.ry matrices and their inverses arc as indicated . 20.. £2 = H ~j 0 l 0 ol EI_ 2  [~ ~] [~ ~] 0 1 0 0 1 0 . 1f any one of the ki's is 0. (a) The identity matrix can be obtained from A by first adding 5 times row 1 to row 3. T he corresponding ~l~mcnta. E1= [~ ~] 0 0 1 s1_ l  (2) Add 1 times row 1 to row 2. k ' =r 0. (1) Interchange rows 1 and 3. ~] 0 0 l 19.'s are all nonzero. 3.ad[t'fg ?]'~':h:::::~ ~ ~d A 0 0 1 0 0 2 then (b) (c) A I = E2 E 1 where £ 1 and E2 are as in par t (a). If the kt.ero row and thus is not invertible.k1 p \ k 1 I F 1?1 ~J [~ ~] and E2 = 21. (h) A 1 • E 2 E 1 where £ 1 and E 2 a.3 79 18 B ~ [~ ~. then multiplying the ith row of the matrix [A I IJ by 1/ki for i = 1..] . A 1 = [' k p 0 l 0 0 I p 1 1 k I ·.Lrix can be obtained from A by the following sequence of row operations. The identity ma. (a ) A = E} E2 1 = [~ ~] [~ ~]0 1 ::~t.re as in part (a) .:~:::·.. 1 (c) 22.
80 Chapter 3 {3) Add .2 times row 1 to row 3._ 0 Ei' [: 0 1 0 1 1 0 = EaE1 E6EsE4E3E2E1 and A= E} 1 E2 1Ej 1 Ei'£5'Ei 1E7 1E8 1 . 25 . (7) Add . :r = 0. we obtain 1 2 1!1:0 ] [ z: 1 3 21I 3•0 I 0 1 4 l1 ~d the reduced row echelon form of this matrix is [~ 0 0 ~ ~t:!: !:] o: 4 From this W P. conclude that the first system has the solution x 1 second system has the solution ~ l ~ 4.' 0 0 1 j] (6) Add row 3 to row 2. Ea = L O It follows that A. If we augment this m:\trix with the two columns of cons tants on the right sides. E.9.2 times row 3 to row 1. 2 3 and the . ~ [~ ~] E. = .1 l 0 [~ H ~ ~] ~] EO'~ (~ ~] E () t . x2 = 4. E.1 r~ . ~ [ 2 0 0] ~ 1 0 0 1 s. xa = 4. ~ [~ 0 1 1 ~] 0 1 0 ~ [~ ~ [~ 1 0 ] ~] (5) Multiply row 3 by i· Es ~ [~ 0 1 j] E. 0 0 1 0 1 {4 ) Add row 2 to row 3. ~ [: :] E. 0 () 1 0 (8) A<ici .•~ Ei' [~ 0 0] 1. The two systems have t he same coefficient matrix. x = 4.1 t imes row 2 to row 1.
1 !b 1 b.1 28. I I I I 27.1 ' J 1] [ . ~ ~ nod l H :1m ~ H 2 .1 3 and (b) A. x2 = . x 2 = 8. ( ~i ) The systems can be written in matrix form us [~ ~ ~][. is [~ 2 5: 1: 2] l 3 I I I  0 1 : 1 3 :.EXE RCISE SET 3.3 81 26. x 3 = 3.} [ J Tbe inverse of the coefficient matrix tions are given by A~ [~ ~ :] .2.32 ! 11] 0 1 0 8 2 [0 0 1 l 3 : . 'The coefficient matrix. (a) The systems can be.1 } I I I and the reduced row echelon form of this matrix is 1 0 0 l .] ~ [. and the second system has the solution x 1 = 11. augmented with the two colUJnns of constants.l .: 3 2 . X3 = 1.3 l 4 1 0] = [9 :4] . Thus the solutions '] 2 l 1 [: :] [!] n1 2 .1 Thus the first system has the solution x 1 = 32. written in matrix form as [~ :] [::] 2 3 1 ::0 n1 = and [~ ~][:] m 2 3 ] ::: The inverse of the coefficient matrix A are given by [" :]~A' ~[: ~ 3 ~~ 3 .
b4 = t.5 1 (2) Add 2 times row 1 to row 3. if and only if b1 (~ ~ l bt ~22b:z) 2 3 0 T hus the system is 2~ = = 2b2. = Oo 32. The augmented matrix 1 2 5: 4 5 8: b2 [ :1 3 3 I b3 btl 0 can be row reduced to [1~ 0 . ll 1 2 3 . 4 4 . R l: 3 l 38 ] 8 0 0 0 It follows from this that R = E 3 E 2 E 1 A where E 1 ..b2 system is consistent if and only if 2b1 . b3 0 0 I 2bt . i.12l 6: the system is consistent if and only if b1 31. E 3 <lie the elementary matrices corresponding to the row operations indicated above. 30.nge rows 1 and 2. r 0 j] j] 1 0 1 37 7 8 :J 7 (3) Add 1 times row 2 to row 3.5 7 3 1 [: ~ 3 1 3 7 2 .2~ + b2 + b4 = 0 .][: ~~][~~ 0 0 ~:] 1 0 0 1 E! 1 • F = Ei 1 .I> 1 + bz + b3 bl l 0 Thus 2! 2 1 : 1 o ! ~ t 2b1 bl l .5 0 0 0 [0 0 0 0 0 and from to it foll ows that the system Ax = b is consistent if and only if the components of his b satisfy the equations b1 + bs + b4 = 0 and . we have the factorization A where E = ~ EFG R ~ [! ~ ~] u! . The augmented matrix for this system can be row reduced to 1 1 3 2 0 1 11 . The augmented matrix [ 2 l 2 llbll can be row reduced to [1o &a 3 7 + bz + b3 = 0. Thus the original system Ax ~ b is consistent if and only if the ve•:to' b is of the fonn b ~ [.82 Chapter3 29 0 The augmented matrix consistent if and only if [~ =~: ~] b1  can be row reduced to 0.e. o bz. b2 = 2s + t. bs = s. The matrix A can be reduced to a row echelon from R by the following sequence of row operations: ro A= (1) Intercha. The general solution of this homogeneous system can be written as b1 ::: s + t .b2 + b3 + 113 Thus the .4bl 0 I . Finally.( ~ s [i] + t m · 11 l 33. E'2. and G = £3 1 • .
The matrix A is obtained from the identity matrix by aJding a times the first row to the third row and adding b times t he second row to the third raw. it follows immediately that x 3 = x'2 = x 1 = 0 also. D7. If AB is invertible then. let b = 1 and a"" c = d = 0. the result is not a. 'Jhu~ ~] =_[~ ~} [~ ~) cannot be obtained from the identity by (c) This row operation is equivalent t. from back substitution . · · · E2BC = C.. True. . There i..ible. No. All of these s tatements arc true. The coefficient matrix is invertible. From the last equation we ste.3. we have a matrix with a row of zeros in the third row.. False. If A is invertible and AR = 0.re invertible. E 2 . a#. for A to be invertible we must have =I= 0.'J not. and so A= E.. (d) 'T'r11e.1 A)B = A. then B = IB = (A '. On the other hand. D2.0 and h WORKlNG WITH PROOFS Pl.8. thus the same sequence of r{)w operations will reduce A to C. D5 . Thus we can find A by applying the inverse row operations to I in the reverse order. if a =I= 0 and b =I= 0.enta. Ek · Then I= Ek · · · EzE1B.1 E2 1 · · ·Ej. For example. Then we have Ek · ··E2 EtA =I.o multiplying the given mat. . E2. If r1. Thus A is elen. Yes. t he mat rix B = [~ _: ~1 works just ns well. For example t he product [~ a single elementary row operation. if either A orB is singular. But (assuming this) if we add d/a times row 1 to row 3. 1 l.. An iuvertible matrix cannot have a row of zeros. and so B .WORKING WITH PROOFS 83 34. and let E 1 . If either a = 0 or b = 0 (or both) the result is an elementary matrix. (a) (b) The matrix B 1 = [0 2 : 0 ] 0 has the requited property. Thus. otherwise (if A is singular) there are infinitely many solutions.n elementary matrix since two elementary row operations are required to produce it. (e) D6. both A and B a. Suppose that A = BC where B is invertible. DISCUSSION AND DISCOVERY Dl. There is no nontrivial solution. D4. . since any elementary matrix is invertible.1 0= 0. the product is still invertible. (a ) (b) F'alse. is invt!rtible then the homogeneous system Ax = 0 has only the trivial solution. then the product AB must be singular.1 (AB) = A.rix by an elementary matrix: and. Suppose the matrix A can be reduced to the identity matrix by a known sequence of elementary row operations.• Ek be the corresponding sequence o£ elementary matrices. by Theorem 3. (c) The matrix B must be of size 3 x 2: it is not square and therefore not invert. .rkes can be ~xpresse<. Then A[~ ~] = (~ :::]=I=[~ ~]· D3.ry if and only if ab = 0.. .1 = Ek · · · EzE1. Only invertible mat.l as a produ<:t of elementary matrices. Thus. From this it follows that Ek · · · E2E1 A == E. and add e/h times row 5 to row 3. P2. and that B is reduced to I by a sequence of elementary row operations corresponding to the elementary matrices E 1 . that x 4 = 0 and.
P6.b 2 and this is true if and only if b = 0. i. \Ve consider the three types of row operations separately.. Our proof is by induction on the exponent k. . . Then AB = BA if and only if [! = [~ ~]. we have for i = l. P 5.3. then A 1x =Ax= 0 iff x = 0. Then and rk(EA) = r.. This shows that if the statement is true fork= j it is also true for k = j + 1. If k = 1. a. em be the standard unit vectors in Rm. Step 2 (induction step): Suppose the statement is true fork= j.e. Row Interchange. Thus the only matrices tha~ c0mmute with Bare those of the form A= Then AC (~ !] . . m. j. Thus. Suppose Ax= 0 iff x = 0. Thus the only 2 x 2 matrices that commute ba].7. Suppose E is obtained from I by multiplying row i by a nonzero scalar c. Akx = 0 iff x = 0. From Theorem 3.. . P4. Thus EA is the matrix that is obtained from Row Replacement. Suppose E is obtained from I by interchanging rows i and j. .nd let G = C A if a. =I= i.and let B = [~ ~]. Row Scaling. Then r1(EA) = fi(E)A = eiA = ri(A) ri(EA) = rj(E)A = e.=j). for any positive integer k. Suppose E is obtained from I by adding c times row ito row j (i :f. But. Suppose now that A is a matrix of this type.84 Chapter 3 P3. Thus Ax = 0 has only the trivial solution iff (BA)x = 0 has only the trivial solution. Suppose now that E is a.n elementary matrix that is obtained by performing a single elementary row operation on I. . These two steps complete the proof by induction. ez. It is easy to see that such a matrix will commute with all other 2 x 2 matrices as well. Then A1+1 x =A) (Ax) """' 0 iff Ax= 0. where j is any fixed integer 2:: 1. :] = [: ~. Let e1. Then ri(EA) == rj(E)A = (ej + cei)A ~ eiA + ceiA = ri(A} + cri(A) and r~c(EA) = rk(E)A = ekA = rk(A) for all k =J:. Step 1. [~ ~] =a[~ ~) where oo < with both B and C are those of the form A= a< oo.. We wish to prove that. since B is invertible. Note that ell e2. 2. Thus EA is the matrix that is obtained from A by adding c times row i to row j. Thus EA is the matrix that is obtained from A by interchanging rows i a. for any m x n matrix A.j. em are the rows of the identity matrix I= lm.nd j. A is invertible iff B A is invertible.nd only if [: ~!] = [. . • .. the system Ax= 0 has only the trivial solution if and only if A is invertible.(E)A = ekA = rk(A) for all k A by multiplying row i by the scalar c. if and only if a= d and b =c..A = ri(A) and rk(EA) = rk(E)A = c~eA = rk(A) fork =J: i. Let A = [: !] . and this is true iff x = 0. . Thus the statement is true fork= 1.
x2.:r2 X3) ::: s{0. X2 t t(2. 1.. X3) :: = t(l. .. 1. .4 85 P7. (b) These vectors are linearly independent. (c ) (Xl 1 X2.rJy dependent. 9). x2) (b) (. Lhus u :f: is in the s ubspace span{v}. X2 = t. 4) = 4(4. Jt fo Uows that the same sequence of elementary ro w o perations will reduce the matrix [A I B] to t. . 2'2 · X3. :t4 =  4t 3.EXERCISE SET 3.'\v. Two vectors are linearly dependent if and only if one is a sca la r m ultiple of the other. 0. 4.2t. :t3 = 2s + 7t X1 = S + 3t .2. vec tor~ (b) linearly dependent u = 2v . 0. X2 = s+t. . n. X2 (Xt. 10.1 . (a) (X J . Xt t. E2 . A plane (a 2. 2. (a) A pla ne in R~ tha t passes through the origin and is parallel to the vectors u = (6. Le t E 1 . (a) (b) A line (a 1dilnens io nal subspace) in R 4 that passes thro ugh the origin and is parallel to the vector u = (2. X3. 4. _ . 4. 2.2 w u. thus these vectors 1\re linea. x 1 = 2t. 0). (X t 1 X.:l:<!. 12.n inver tible m:1trix. 2. x 1 = s + 2t. P8. 5) . 2) + t (2.1). 8. .4t u u = . :r3 = S + 5t. x2 = Ss + 2l . 2) + t(3.) = $(1. = .. thus u is not in the subspace s pa n{v}. .X4) = t(l.4). 5) and v = (6. (a) = s(4. x :.4).4 1. (a) (a) no {b) yes 7.4). 7). E~c be the corresponding sequen ce of elementary m~. w are linearly dependent. 5. X4 ) = s( l . X:. Then we have B = E~r: · · · E2E 1A = CA where C = E1r: · · · E 2 E 1 is a. a. 2. :c1 = 0. Fh.!.nsforrr:ed to reduced row echelon form B by a sequence of elementary row o pe rations.0) +t(2. . Xt = 2t.X2) (b) (c) 2.t: . 1. 8}.'f (b) 5. Suppose tha t A is reduced to I= In by a sequence of elementary r ow operations. . 0).~. 2. 4. (a) (b) (:z: 1 . X3) = = 5t = 0 . thus the u = 4v . 4. X3 = 4t (x. X3 = 9t lXt. . Every 111 x n ma trix A can be tra. X2 = t.re the correspondjng elementary matrices.5. Then E~c · · · E2E1 A =I and so E~c · · · E2E1 = A.2v .:1:2. X3 1 X4 ) = £(2 .3t. = 3. . 2. kv fo r a ny scalar k . .. 1).nd that Et. x2 = 4t ( ~"') (x1. X2 = 2s + 4t.s. X3 . 1. :2:3 ) = t(Q .4).1. X} = 2t. X2 =tiS + 5t. 5. 9.1 BJ.2. · 4. x 3 X 4 = 1s + t.trices. :r4.3) + t(3. ::3. .:!ime nsional suospace) in R" that passes through the origin and is parallel to the vecto rs u = (3. Xt (b) 4.he matrix [E~c · · ·~EtA I Ek · · · E2 E1B) = [I I A. (a) (::tt . (h) A line in R 4 tha t passes through the origin and is parallel to the vector u = (6. x 4 ·. ..1. X1 t. 0. 11.X2 . . = = = = X4 = 3t t(O. X3 X3 = 5t .8) and v = (3. = 2s . . l:J~o. 4. = 4s .3. EXERCISE SET 3.1. 6. XI= 0. 8). (a) linear ly independe nt.1. X2 = t. v . 1.3). . 6. 2. . 5.
1. X2 = s. l. 3 .~. X4. . X4 = t ·:.. 1. x2. 0 0 0 0 01 1 0~] 1 2 .~t. 1) span the solution (xt.~. xs) = s(4.86 Chapter 3 13. 1. . 1) The vectors v 1 = (2.0) + s( l . 3. X3.1. 1. 1.: or in vector form as (Xt. 0).0. The augmented matrix of the homogeneous system is 2 4 1 1 l 1 0 1 ~] and the reduced row echelon form of this matrix is 1 2 0 ~ [0 0 1 ~ 10 ] i 0 = . 0. 1) This shows that the solution space is span{ v 1. x 2 .0. 8t. x 4 . 0 . 0. 0 . X2 1 x 3 . general solution of the system is given by (x 1 .. 0. .nd the reduced rowechelon form of this matrix is H [~ 6 0 0 6 2 6 1 12 5 18 5 3 ~] 0 11 1 8 0 0 ~] X3 := Thus a general solution of the system can be written in parametric form as Xt = 6s. 0 . 0 . 0) and v2 = (11. X3 = S.0. 8. 1). X$= t or in vector form as ~ _r_~i. general solution is given by (xt..llt. The reduced row ech!:!lon form of the augmented matrix is 1 1 [ = (6. v2 = (. The augmented matrix of the homogenous system is a. . · ~ .0. xs) sp3ce. 1. and v 3 = (.~S .3 0 0 0 thus a. x4 . 0. 1} 15. 0.O. x 5 ) = r( l.3. 16. 0. X2 = T. 0) + t( l . 0) + s(l . 1. 0. O) + t( . 0) + t(~. ~. 8. The reduced row echelon form of t!1a augmented matrix is [~ 0 1 0 0 4 1 0 0 1 2 1 3 0 0 0 0 ~] thus a. 0. x 3 . x z. v2} where v1 14. 2.l. 0). 0. 0 .X4 Thus a general solution of the system can be written in parametric form as X1 = 2r js ~t. 0) + t( 11.. 0. X4) = S( 61 1. 0.x3.2. 1). i. l .
If b1 .·a 1 + c1. 0. (a) The matrix having t hese vectors as its columns is invertible. Vz = zVl 7 3 + 2YJ 1 V3 7 2 = 3Vl + 3 V2 28. 0.·.4 87 17. (a) 7vl . Sets (a) and (b) are not subspaces. The set W consists of all vectors of the form x = a(l.2vz + 3v3 = 0 2 3 (b) YJ = 7Y2 ·. The coefficient matrix. 1.3v3). it is convenient to start by multiplying all of the rows by 2. In parts (a) and (b} the vectors are linearly independent.e. v 2 } ·.4. (d) This set of vectors is not a subspace."" . plane (i. v 1 = 2v2 and v3 = 3vz. .· subspace.a. 0. is invertible. 25. (b) Any set of more than 2 vectors in R 2 is linearly dependent (Theorem 3.. (a) linearly independent (c) linearly dependent (b) linearly independent (d) linearly dependent 21. (a) Any set containing the zero vector is linearly dependent. lf b = a+ c. 24. consider the m atrix having these vectors as its columns. 0) + (az.EXERCISE SET 3. and b2 = nz + cz.8).. . 1. a 2dimensional subspace) through the origin R::. it is not closed under addition or scalar multiplication. 0) .7V3 . they lie in a. 0. (a) Those two vectors arc linearly independent since neither is a scalar multiple of the other. plane. kb = ka + kc. a 1dimensional s ubs pace) through the origin in R 4 . If . 20.e. it is closed under scalar ~~ltipl~~.~ion and addition: k(a. they do not lie in a plane. (b) v 1 is a scalar multiple of vz(v 1 = 3vz)i thus these two vectors are linearly dependent ... (c) These vectors line on a line..1) and v 2 = (0. 19. 1. (a) Vz is a scalar multiple ol· v 1(v2 = 5vl). 0). 0. 3. a.. (c) These three vectors are linearly Independent since the system n: [:]~] [~] has only the trivial solution . [Y 2: JJ .8). they lie in a plane but not on a line . 27. (b) This seL of ''cctors is not a subspace. Sets (c) and (d) are suhspaces. O) and (a1. 0) = (ka..4.. then (b1 + bz) = (aJ + a2) + (c1 + cz) . 0) = (a1 + az. 22.. This corresponds to a. 0.re v (l. 23. = 26 . Thus the vectors are linearly independent.O . (d) These four vectors are linearly dependent since any set of more than 3 vectors in R 3 is linearly dependent. which is the matrix having the given vectors as its columns.. W = 5pan{v 1 . it is not closed under scalar multipllcation. then . thus these two vectors are linearly dependent. they do not lie in a. 2. Are there any other values of. 0).\ for which it is sing\.O.. plane but not on a line. (c) Thl~ set of vectors is a.!lar? To determine this. (a) This se~~f vectors is ~(~~.. 18. 0. (b) These vectors are linearly dependent (v 1 = 2v2 . thus W = span{v} wh~>. This corresponds to a line (i.\ = then the given vectors are clearly dependent. Otherwise.vhcre v 1 ::: (1. 0). !. In part {c) the vectors are linearly dependent. (b) Any set of more than 3 vectors in R 3 is linearly depenrlcnt (Theorem 3.
tc2 4c3 .59m + 0. (c) of the problem can now be a nswered as follows: (a) s = 6 9i 28 ( u  0. {v 2}. this matrix can be reduced to the row echelon form [~ 29. (b) If S = {v1 . linearly independent set. ~ ~~ l which is singular if and only if>. and so each of the sets {v1 }. then there a. if this were true.9928 1 [ 1 . 30. 0 2+2.12 Parts (a. v 2. 3 1. T hus the gjven vectors are d ependent iff >. g can be written as [~] = [o. v 3 . 0. Suppose then that Tis a 2element subset of S. v 3.>. t hus the vectors u .47y (c) 34.47k + 0. e. we have c 1 v 1 + c2 v 2 + c3 v3 + Ov = 0 and this is a nontrivial linear relationsbjp among the vectors v 1 . (u . This shows that if S = { 1 . v .u form a linearly de·· 32.1.~6] (:) and so the inverse relationship is given by [s] = [0. then each of its nonen:tpty subsets is linearly independent. not both zero.0. for any vector v in R".12u+v) (c) 33. ¥ ~.). and c3 . (b) . v 2.= ! or>. = 0.= .be linearly dependent.9~ 56 (0. But.27). v 3 } is a. and so S would. not all zero. then c1 V t + c2v 2 + Ovs = 0 would be a nontrivial linear relationship among the vectors v'h v2 .e scores for the students if the Test 3 grade (the final exam?) is weighted twice as heavily as Test 1 and Test 2.94v) = P 876 P 2t6 1 2~ u +~!~~v 5 (a) (b) No.535.g. vs. such that c1 v 1 + c 2v z = 0 . and w . H P e75 + P2J6) corresponds to the CMYK vector (a) k1 = k2 = k::~ = ~ (c) (b) k1 = k2 = · · · = k1 =~ The components of x = ~Ct + + represent the averag. ~s+h= l.83m + 0. v:~.88 Chapter 3 Assum!ng >. Thus if S is linearly independent . s uch that c 1 v1 + c2v2 + CJVJ = 0 .07k = 0.9~28(0. v.= 1. [u] . First note that the relationship between the vectors u . v 3 } is linearly dependent.v ) + (v . v:~} .~2 o. 73y + 0.19.38c + 0. v} for v any v . T = {v 1 . 1t is not closed under either addition or scalar multiplication. Thus.0.88u +0. v 3 } is linearly dependent then so is T = (v 1 .re scalars c1o c2 . (a) Suppose S = { v 1 . and {v 3} is linearly independent .w) + (w . The arguments used in Exercise 29 can easily be adapted to this more general situation.u ) penden t set .71 .34y + 0.06v) (b) g=o. Note first that none of these vectors can be equal to 0 (otherwise S would be linearl y dependent). P328 = c + 0.w.v. v 2 . 0. Thus T = {v1 .06] 1 v = 0.12 1 g 0 _. v 2 } is linearly independent. v and s. T h e same argument applies to any 2element subset of S. II T is linearly dependent then there are scalars c1 and c2 .30k (0. v 2 . = 0 .
(a) Two nonzero vectors will span ·R 2 if and only if t hey a. but not for a nonhomogeneous system.ll = vi· (c1v1 + c2v2 2 + c3v3) = V1 • 0 = 0 for i = 1. This :set is d ose>d under scalar multiplication. thus v. (a) kt = kz = k3 = k4 = ~ (b) k1 = k2 = k3 = k4 = k& = (c) The comp onents of x. · vi = 0 for i=/: j . To prove they are linearly independent we must s how that if C1V1 + C2V2 + C:JVJ = 0 then Ct = c_a = C3 = 0. will span R3 if and only if they d o not lie in a plan~. D2. D4. For example. 2. then k(c1 v1 +c2v2 + c3v2) = 0. DS.J . The vectors in the second figure arc linearly dependent since v 3 = v 1 + v 2 .J True. The vectors in the fi rst figure are linearly independent since none of them lies in the plane spanlled by the other two (none of them can be expressed as a linear combination of the other two). D 3. False. since k =/: 0. v = i L k= l n vk = DISCUSSION AND DISCOVERY Dl . then C.s a linear combination of the other two. = ~r 1 + ! r 2 + ~ r:. (b) Suppose the vectors vl> v2 .hem lit:S in the plane spanned by the other two.DISCUSSION ANP DISCOVERY 89 35. 3. (a) False. If three nonzero vectors are mutually orthogonal then none of t. 1) does not. !The solution space of a nonhomogeneous linear system is a translated subspace. Ttne:. [But it is true that if {v. 2. (c) span {u} = span {v} if and only if one of ~be vectors u and v is a. The set {u. False. w} is a lin~~trly independent set . 06. (b) False. This statement is true for a homogeneous system (b:::::: 0). · v. but 11 + v = (1. Bucks. but uot under addi~ion . llv. but the third vector may not lie on this same line and therefore cannot be expressed a. = llvdl 2 > 0 fori= 1. v. then {u. . The set of all linear combin ations of two vectors can be {0} (if both are 0). or a plane (if they are linearly independent) . it follows that Ct v 1 + c2 v2 + c 3 v 2 :::::: 0 and so c1 = cz = c3 = 0. lf c1(kv1)+ cz(kv2 ) + cz(kvz) = 0. D7. See Example 9. thus t he three are linearly independent.re do not lie on a line. For example. (a) (b) {c) (d) (e) False. and v 3 are nonzero and mutually orthogonal. (b) T wo vectors in R" will span a line if and only if they are not both zero and one is a scalar multiple of the other.1) correspond to points in Lhe set. and if u cannot be expressed as a linear combination of v and w. scalar multiple of the other. the vectors u = (1. two of the vectors ma.y lie on a line (so one is a scalar multiple of the other).2) nnd v = (2. ku} is always a linearly dependent set. a line (if one is a scalar multiple of the other). (a) Yes. v and w might be linearly dependent (scalar multiples of each other). Thus. 36. For example. 3 and v. and Delaware counties in each of the sampled years. represent the average total population of Philadelphia. Tbis follows from the fact that if c 1v1 + c 2v2 + C3 Y3 = 0. w} is a linearly independent set. (b) T hree nonzero vectors . ( a ) T wo vectors in Rn will span a plane if and only if they are nonzero and not scalar multiples of one another.
n(W} = W . Since span(S) is a subspace (already closed under scalar multiplication and addition).nd S2 = {(1. Let() be the angle between u and w. l) . thus kz IS in w} + w2. ) + (x2 + Yz) = (x1 + x2) + (Yl + Y2) . P3.e. w = lk 2 + k(u . thus Zl + Z2 is in H. If W is a subspace. Similar calculations show that llwll cos¢ = (v · u) + k l . where kx is in W1 and ky is in W:~ (since w. . l}}a. wll u · w = u · (lu + kv) P2.n(S). v • w.=lllwllcos¢. Forexample. l!wll cos e = lk + ( u. First recall that u · w = llu!lllwll cosO. (a) The reduced r. S1 # S2. Thus WI n w2 is closed under scalar multiplication and addition. wl n w2 is a subspace. v) . Then. but. we have s pan(spa. if XI and X2 belong to Wt n W2 . If X belongs to WIn w2 and k is a scalar. l}}.i. then kx also belongs to Wt n wl since both Wt and w2 are subspaces.(O.5 1. EXERCISE SET 3.Jw echelon form of the augmented matrix of the h omogeneous system is l 0 0 thus a general solut ion is x1 !~ ~] Gr (in column vector form) .thenspan(SI) = span(S2 )= R2 . and w2 are subspaces). It follows that cos 9 =cos¢ and () = </>. where X } + X2 is in lt\ and Yl + Y2 is in w2 (since WI and w2 ar~ subspaces). and let 4> be the angle between v and w . v ) . · (d) False. · WORKING WITH PROOFS Pl.Chapter 3 (c) True. we have kz = k(x + y) = kx + ky. thus Uwll cos B = U cos¢. and so spa.e. for any scalar k . then x. We will show that 9 = ¢. + X2 belongs to WIn w2.n(S}) = spa.O). so u · w kllwll cos9. Then z 1 + Z2 == (x1 + y.ifSt = {(I . i.o kllwll COB()~ u. Similarly. O n the other hand we have · = = l(u · u) + k(u · v) = lk 2 + k(u • v) aud ::. 08. First we show t hat ·w1 + W 2 is closed under scalar multiplication: Suppose z = x + y where xis in W1 andy is in W2.(O. \ + w2. Simllarly. then W is alr~ady closed under scalar multiplication and addition. Finally we show that WI + w2 is closed under addition: Suppose Z l =X) + Yl and Z 2 = X 2 + Y2 o where X) and X2 are in W1 and Y• and Y2 are in W2 .
~atrix of the nonhomogeneous system is  f ·. ··.s1  0 T ll 2.] 1 0 0 1 0 0 thus a general solut ion is given by x1 = ! .. or . X3 = t .. x2 = s. thus a general solution is [Zil = t ~¥] .   ." The reduced row echelon form of the augmented matrix of the give n system is 0 .. (o) The <educed <OW echelon ro. o] ~····· ~ 1 :r2 = s..EXERCISE SET 3.r .• 0 .~s 1t. 0 ~ 2 ···.] T his solution is related to the one in Qa.5 91 (c) From (a) and (b). •otNo•··.rt (c) by the change of variable . X4 = 1.m is [.· ·· · ::~~:··..3!.. [~ 3 0 0 2 3 1 0 0 . a general solution of the nonhomogeneous system is gjven by (d) The reduced row echelon form of the . 0 0   ·   :1:3 (d) T he rnd uced row echelon form is [~ '··. 1 0 1 o· g ¥ o : [:} [1J 3.· .. augmen_t~J.l ). (a) 3 ! +t' nv .:. t = t 1 + 1.
and that w = 2v1 + 3v2 + v3. The vector w can be expressed as a linear combination of v 1 . X4 [!] ... (a) The reduced row echelon form of the augmented matrix of the given system is 0 1 0 thus a general solution is given by _1: 133] 9 7 0 0 (b) A general solution of the associated homogeneo'. The vector w can be expressed as a linear combination of v 1 ..o·····. if and only if the system 2 f3 l2 8 1 4 .r. x2 = 0.~ !s .ls system is and a particular solution of the nonhomogeneous sy~tem is x 1 = 5. ::] ::::: s [ ' . if and only if the system [~ 1 : =~~] [: ] : : [~] 5 12 c3 1 is consistent. v 2 .. unique solution.92 Chapter 3 (b) A general solution of the assoc~~ted n<:miqgenoouS. x3 = 7. x2 = 0. v 2 . and v 3 .1 \... 1 i. 6. x 4 = 1. 4. and v 3 .··· and a particular solution of the given nonhomogenous system is ········· ··•·· = !. + 0 l 0 x1 : . X3 =0. The reduced row echelon form of the augmented matrix of this system is 0 1 0 0 2] 0 1 3 1 From this we conclude that the system has a.· :::. X4 = 0.
consists of all vectors x = (x. The row reduced echelon form of the augmented mo. •·.. ·::. v 4 } if and only if the system [~ · 1 0 .:.w can be expressed as a linear combination of v1.5z = 0. fii particutat... . we have w = 10v 1 .10 . 2x + 3y = 0. . 10] 0 .n~ given b}: c1 .e.. taking t = 0. From this we conclude that the system has infinitely many solutions..6vz.. (a) The hyperplane al. The reduced row echelon form of the augmented matrix of this system is [~ 9.ystem.5 93 is consistent.. v 3 . v2.. / .. z = t x1 = r .3x3 + 7x4 = 0. x4) such that a· x = 0..·:······~. The vector w is in span{v1 .7t. This corresponds to the line through the origin wit. . y = 3s + 6t... y = s. consists of all vectors x = (x.········"· ······ ~~(Is w is in span{v 1 . thus w is in span{vt. . 7.. v 4 }.. X3 = s. : . consists of all vectors x = (xbx 2. y = t + 2s. The vector w is in span{v 1 . 0._ i . it.trix of this system is ! ~ ~ \~] .~..·:::~::···::···· 8... X4 = t.h parametric equations x = y = t. H [~ l 3 2 is consistent. This corresponds to the plane through the origin with parametric equations x = ~t..EXERCISE SET 3. and V3. .' 0 l 0  3 1 0 6 . . c3 = t.. / . and from this we conclude that the :. . . X3.··.!.. v 2 .. (c) a..e. (b) The hyperplane a.4 = t (b) x = s.. v 3 .. hilS inftnitcly many solutio. i....······• .i.3t 1~ c 2 == 6 + t..e. (a) x (c) = 4t. x 3 = s.::. 0 0 2 1 0 From this we conclude that the system has infinitely many solutions.•. This is a hyperplane in R 4 with parametric equations Xt = 2r + 3s.. z) such that a· x = 0. x2 = r. . i. VJ}. y.. z =t. .::_.. Thus. The reduced row echelon form of the augmented matrix of this system is / / ' / •.. v 3 } if and only if the system is consistent.. v2. x 2 = r.·· . v 2 . 4x. i. y) such that a • x = 0. s. vz. Xt + 2x2.:.
94
Chapter 3
11. This system reduces to a single equation, Xt + x2 x 1 = s t, x 2 = s, x 3 =: t; or (in vector form)
+ x3 = 0.
Thus a general solution is given by
The solution space is twodimensional.
13. The reduced row echelon form of the augmented matrix of the system is
[~
Thus a general solution is x 1
0
~ 7
7
~
T
19
l
1
7
= ~rX1
I_( s
~t, x 2
= ~1' + ts + ¥t,
7
19
1
X3
= T,
x4
= s, x 5 = t; or
7
3
X2
7
1
2
xs =r
X4
+s
X f.>
0 0
7 0 1 0
+t
:1
~j
The solution space is threedimensional.
15.
(a)
A genecal
wlutioni~ =: ·=~ t:_y ~s, ~ ~ \;t [: ~ m +
x
. ..
l f l] nl
+t
'
······..
(b) The solution space corresponds to the plane which passes through the point fP( 1, 0, 0) il.nd is
p~:r._a:Herto the·vea:or·s:v1~r...:..·T;Y;·ora.n:a.y·z·;;·c.:::r;·o·;l} . . . ······ ... ······ ...................... ........  .... ......_.. ······ ···· ....... , ...,.
· ····· ·
,._______.,. ._____ ·
16.
(a)
A general solution is x = 1 t, y
(b) The solution ~pace corresponds to the line which passes through the point P(l, 0) and is parallel to the vecto( v = (1,1).
= f; or[:]=[~]+ t[~L~··· 
17. (a)
A vector x = (x, y,z) is orthogonal to a= (1, 1, 1) and b = ( 2, 3,0) if and only if
(b)
g+3y ~o The solution space is the line through the :iii;~~th~tj;·~erpendicu)ar to the vectors a
~~;·~ y ~ z. = 0~,;
j
and b.
EXERCISE SET 3.5
(c) The reduced row echelon form of the augmented matrix of the system is
·
and so a that the vectof:
l8.
( [~~T iJ ·"
gtneraj,..§Ql:ut~giv~ by x = ~=:: ~t, z = t; or (x, y,z) i·;(·=j·:~~ . l).,ote
(a) A vector x = (x,y,z) is orthogonal to a = (  3,2,1) and h= (0, 2, 2) if and only if
 3x
 ==~=!.~. :.t.~~..J orthog~~a.l !9,,_botl!.~_ap<:\ ..
+ 2y z =0  2y 2z = 0
I:!.,.
~

(b) The solution space is th.e llne through th.~ . ori,g!11.. tha,:t is perpendicular to th~ ·~~~t~rs· a.and"~ (c) (x, y, z ) = 'i:{~i~ _: 1, 1); note that the vector v = ( 1, ·:.__ i, 1) is orthogonal to both a and b. ·
==~·
.9.
{a}
A \'ector x

= (x 1 ,x2,x3 ,x4) is orthogonal
5xl
to v1
= (1, 1, 2,2) and vz = {5, 4,3,4) if and only if
x 1 + xz + 2x3
+ 2x4 = 0 + 4x z + 3x3 1 4Xt = 0
(b) The solution space is the plane (2 dimens ional s ubspace) in R4 that passes through the origin a.nd is perpendicular to the vectors v1 and v 2. (c) The redUl:ed row Pchelon form of the augmented matrix of t.he system is
0
l
 5 ·1
7 6
~]
=
and so a. general solution of the system is given by (x ,,xz,XJ ,x4) s{5, 7,1 ,0} + t(4,  6,0, 1) . Note that t he vectors (5,7, 1,0) and (4 ,6,0, 1) are orthogonal to both v 1 and v2.
:o.
(a )
A vector x = (x 1,:c2,r.1,x 4, x~) is orthogonal to the vectors v 1 = (1 , 3, 4,4, 1 ), v 2 == (3, 2, 1, 2,  3) , and v3 == (1, 5, 0,  2,  4) if and only if
3xl
+ 3xz + 4x3 + 4x.:~ + 2x2 + X3 + 2x4 x, + 5xz  2x4
x1

Xs
 3xs  4xs
=0 =0 =0
{b) The solution space is the plane (2 dimensional subspace) in R 5 t hat. passes through the origin and is perpendicular to the vectors v1, v z, a.nd v3. (c) The red uced row echelon fo r m of the augmented matrix of the system is
[~
The ve~tors
0
0
5
13  Z5 31
3
7  IO
1
0
0
25
_33 50 21 50
~]
and so a. general solution of the system is given by
(x1 , x2, x 3 , x 1, xs) = s( ~, ~. ~, 1, 0) + t( {o, ~g,  ~,0, I)
( t W. ~ . 1, 0) and ( t7 , ~~ . ~ , 0, 1) are orthogonal to v •• Vz , and v 3. 0
96
Chapter 3
DISCUSSION AND DISCOVERY
Dl. The solution set of Ax = b is a. translated subspace x 0
+ W , where W
is the solution space of
Ax= 0 .
D2. If vis orthogon al to every row of A, t hen ilv = 0 , and so (since A is invertible) v = 0 .
D3. T he general solution will have at least 3 free variables. Thus, assuming A i: 0, the solution space will be of dimension 3, 4, 5, or 6 depending on how much redunda.ocy there is.
D4.
(a) True. The solution set of Ax= b is of the form x 0 + W where W is the solution space of Ax = O. 0 False. For example, the system :r  Y = is inconsistent, but the associated homogeneous (b)
xy=l
system has infinitely many solutions. (c) True. Each hyperplane corresponds to a single homogeneous linear equation in four variables, and there must be a least four equations in order to have a unique solution. (d) 'Tme . Every pla ne in R3 corrr.:>ponrls to a equation of the form ax + by + cz = d. (e) False. A vector x is orthogonal to row(A) if and only if x is a solution of the homogenous system Ax = 0.
WORKING WITH PROOFS
P 1. Suppose that Ax = 0 has only the trivia.l ·s~lution and t hat Ax = b is consistent. If Xt and Xz are t wo solutions of Ax= b , then A(x 1  x 1 ) = Ax 1  Axz = b  b = 0 anti so Xt  x z = 0 , i.e. x 1 = xz . Thus, if Ax= 0 has only the t rivial solution, t he system Ax = b is either inconsistent or has exactly one solution.
PZ . Suppose that Ax = 0 has infinitely many solutions and that Ax= b is consistent. Let Xo be any solution of Ax = b . Then, for any solution w of Ax= 0 , we have A(xo + w) = A.xo + Aw = b + 0 = b . Thus, if Ax = 0 has infinitely many solutions, the system Ax = b is either inconsistent or ha.<> infinitely many solutions. Conversely, if Ax= b has at. most. one solution, then Ax= 0 has only t.he trivial solution.
P3. If x 1 is<:\ solution of Ax = band x2 is a solution of Ax = c, then A(x, + x2) = Ax1 + Axz = b + c; i.e. x 1 + X2 is a solution of Ax= h +c. Thus if Ax= band Ax ;= care consistent systems, then Ax = b + c is also consistent. This argument c2n easily be adapted to prove the following:
Theorem. If Ax = bi is consistent for each j Ax = b is also consistent. P4 . Since (ka) · x = k(a · x) and k that. (ka)..L a ..L.
= 1, 2, ... , r
and if b = b 1
+ b2 + · · · + b,.,
then
=
f. 0, it follows that (ka) · x = 0 if and only
if a· x = 0. This proves
EXERCISE SET 3.6
1.
(a)
This matrix has a row of zeros; it is not invertible.
(b) A diagonal matrix with nonzero entries on the c1 : ~gonal ;
o 0] 20
u ~
1
=
~1 o o l O!O·
0 0 3
EXERCISE SET 3.6
2.
(a}
3.
(a)
(c)
[~ [~ [~
0
2 0
0
 1
r
0
97
0
3
= rl 0
0
2 0
1
0
~][
2
_; l
2 5
fl [! ~]
4 10
0 1
(b) not invertible.
(b )
[ 2 1 [~' 0 [ 10 2] ]
4 1 2 5  2] =  20 10  2 10 20 10
10
0
 1
0
0 : 1 [~ 2] = [3~ ][ 2 ~] 0 ~
"
0 4
0 10 2] = ro 6 ][  2 ]
0 2 20 2 20  20
0
2
4.
(a)
[ 1 2 6
 20
~]
,~]
(b)
[12 3
15
5
10 5
~l
0
4
1
(c)
[  24  10 3  10 60 20
1~
0 0
12]
5.
(a)
A' ~ [~
A'~ [~
{b)
0
0 9 0
I
A'~ [~
A' ~ [~
0
0 9 0
1 1
(c)
A'~ [~
A k ~
( ~ )k
(J~'J
6. (a)
1 ]
(b)
,~]
(c)
[2'
0
3k
~
0
3 ]
7.
A is invertibl~ if and only if x =!= l ,  2, 4 (the diagonal entries must be nonzero).
9.
Apply the invers ion a lgorit hm (see Section 3.3) to find the inve rse of A .
[~ [~
Add 2 times row 2 to row 1.
2
1 0
3 :1
2:0
1 :0
'
0 1
0
~]
Add 2 t.i111es row 3 to row 2. Add ' 3 times row 3 t o row l.
2
1
0: 1
o:o I
1 :0
0
0 1 0
;]
~]
1 0
[~
0
1
0: 1
olI o
1:0
0
2 1 0
T hus the ;nverse of the upper triangular matrix A
~[
i 23]
~ is A = ~
1
[' :r
2
I
0
98
Chapter 3
10.
If
A= [~
0
1
4
; } ,lhro
A = [~ 1
0 1 4
0 0 1
11. A =
u
r[

1 2  11
0
 1
4
;]
4 0
0 0  1
i]
r2a
12.
] A = [~ 08
0 4
13. The matrix A is symmetric if and only if a, b, and c satisfy the following equations:
a  2b + 2c = 3
+ b+···· c·,;· {) .. 
~  + · c·~ 2
The augmented matrix of this system is
~~
nnd the reduced row echelon form is
:}.· :~~l
0 1 0
0 0
1
[~
14. There are infinitely ma.ny solutions: a = l
 13
~~]
0, whe re  oo
Thus, in order for the m atrix A to be symmetric we must have a = 11 , b =  9, and c = 13.
+ lOt, b = 1t, c = t , d =
< t < oo.
15. 'vVe hnve A  l
2 J = [ 1 1. 3
= [ 2
l  2 1
3 7
I
1 = ; t hus A 5 I 2
[3 }]
_ ts symmetnc . .
1
16. We have A 1
7
3 .~
4
1
1 13 =   13  5
r45
11
14
1 ; 1 3
lll thus A 0  12
1
is s ymmetric.
17. AB =
[~ ~ ~ r~ ~ ~ = [~ 1~ ~~]
0 0
 4
1
0
0
3
1
0
18. From Theorem 3.6.3, the factors commute if and only if the product is symmetric. Thus the factors ;n (a) do not commute, whe,eas the ractocs ;n (h) do commute. ~
19. If A =
r~ ~ ~}· then A~= [(1~s (~)5 ~ (r~ ~ ~1. ~
]
0
0
1
0
0
(1 ) 5
0
1
..........
20. If
A [~ =
00 ] [(! 0)4 0 , then A3
2
_
_l
.
..
2
0
=
0
l
0
( 1)2
~ ]  [~~~]
0
0
I
~f I . 25..3 I 10 ._t_~~!nve.A =:' [~8 1~ ~] is 8  1 .4.] I~] 30 22. AT. All vectors of thcform x = H] = t [:] .6.oo < t < oo.5. The augmented matrix of this system is o ] 0 and from this it is easy to see tha t the system has the gem~ra. Thus. The lower tria. ATA = r [l 6 3 ~ 10 3 6 ~] (AT A) 1 = [ I 3 8 ] 3 8 10 .)1 = 3 [. and their inverses. 24.A) = I+ A= [~ ~)· (b) The nilpotency index of A = [~ ~l 0~] G~~~.<. x2 = t.oo < t < oo. .6 21. Following are the matrice.27 .1 = [ 74 2~ 3 . their inverses are also symmetric by Theorem 3.EXERCISE SET 3.rse. and their inverses. The fixed points of t he matrix A == [~ ~] are ihe solut ions of the homogeneous system (1 . 1 and the inverse of I . A AT and AT A.4. AAT ond AT A are also invertible.l solution x 1 = t. where . since these matrices are symmetric.Following are the matric:es AAT.27 10 27 74 AAT ::::: 10 ~~l (AAT).ngular matrix A = [! ~ ~1 1 3 1 is invertible because of the nonzero diagonal entries. furthermore. AT A.4 = ['i ~] 5 5 2 {AT. T hus the fixed poi nts of A are vectors of t he form x = [_~J = t [_ ~].J 1 1 = 0.1 [. then A2 = [~ ~] [~ ~] (~ ~] =[~ ~J· Thus A is nilpotent with nilpotency mdex 2. from Theorem 3. whero .A = is (I.17 5 AAT = [~ ~] 3 10 5 1 (AA ')• = l13 5 r 35 13 5 2 ~] .A )x 23.s. (a) If A = [~ ~].6.
~ Tl [ ''. then (A+ AT)T =AT+ ATT = A'r +A= A+ AT. r.ric.A1') +~(A. DISCUSSION AND DISCOVERY Dl. and kA are also skewsyn\met.B.100 Chapter 3 26. we have tr(Ar A}= lladl 2 + lla2!1 2 + ··· + tlanll 2 = 1 + 1 + · ·· + 1 = n. so A. rf rlrr r2rr T r. Using Formula (9).. Similarly.B =(A+ B) (A. (b) A= +.BT = A+ B = (A. thus A. IA~ r~ =:] ~] + [~ ~ ~] + [~ ~ ~] [~ ~ ~] = 27. then = (AT). thus li is symmetric. rm · rl rz '' · rm f2 • r rn Tm • I'm 1 J r..:] r1rm = r2r1 rmr1 T ['' ! [ r1 · r1 :r 1 • r2 r2 · . T AA = [''] r2 '~ [r.2(uuT)T =In. (b) If A and B are skew symmetric. 28.1 )T is skew symmetric.2uuT then we have = H. 30.A) .t .1 .x 2..AT is skewsymmetric.AT) HA (c) A= [~ ~) = ~ [~ ~] + ~ [~ ~] nr = I'I.B)T =AT.AT). (a) A= 3a21 Sa22 1a23 3a::n Saa2 1a33 3an 5an 7al3] = [au a:n a31 a12 a22 a32 a13l [3 0 0] a23J a33 1 0 5 0 0 0 7 = BD (b) There are other such factorizations as well.1 = A. If H =In. 1A=[lO] 1 1 (I . then (A.rT 2 fmfm I'm ·rz 31.1 = (A).B) (kA)r =kAT= k( A)'"'"' kA thus the matrices A+ B.1 (A+ B)T =AT+ BT =A. = I + A = [1 1 0 ] 1 (b) A = [0o 3 ] o 2 1. (a) If A is any square matrix. (a) If A is invertible and skew symmetric. (a) A= [~ ~] has nilpotency indt>. so A+ AT is symmetric..AT)T =(A.. A.. (A. For example.2uuT 29. A "" [!::: ::: 3a31 an . 0 0 0 has nilpotency index 3.
+ G = 0 for i = 1 and 2. a 1 J. 2.0.A if and only if d~ = d. In a symmetric matrix..l entries on the diagonal just above the maiu diagonal. Using Formula {9). t hen (AB)T = BT AT= (D)(A) = BA = AB and so the product AB is symmetric rather than s kewsymmetric.A . for 1 = 1...0ain diagonal The entries in the n(n2. j = 2. Thus a symmetric matrix is completely determined by the entries that lie on or above the main rliagonal. ~2· if dlaJ = e1 Thus . if ArA = 0. J . j.l) p. appear below the main diagonal a.rue iff di . enlries that are symmetr ically positioned across the main diagonal are equal to each other. we have tr(AT A)= lln1JI 2 + 1la21J 2 + · · · + follows that llatll 2 = 1Jazll 2 = llanll2 = 0 amlso A= 0. D6. then A 2 + 5A + 6!2 = 0 if and only if d? + 5£'. {a) A is symmetric since a 31 = j 2 + i 2 = i 2 + j 2 ~a. for each i. 03. thus A = A and so A =0.3) .. = zp + 2t3 f:.j) for each i . the matrix A= IJ(i. Thus the maximum number of distinct entries in a skewsymmetric matrix is n(n. Z.. There are a. In fact. and 0 0 d3 0 0 d~ (b) 3. = .re commuting skewsymmetric matrices. .11 positions below the main diagonal will then automatically be distinct from each other and from the entries on or above the main diagonal. i.2 or . the diagonal entries are 0 and entries t hat are symmetrically positioned ac:ross the main diagonal arc the negatives of each other. There are a t. j.sit.(i. Thus AD = e2] (where e1 and e::~ are the standard unit vectors in R 2) if and only and dza2 = e2. Au n x n matrix has n entriP on the main diagonal . and entries t hat.a. (a) lf A = [d~ d~ ~1. then it 0 5. 2j + 2i = 2i + 2j = a.i = .j) = .re duplicates of entries that appear above the main diagonal..ric matrix..nd this i:> t. But this is true if and only if d1 i. rnatrice. Thus. then AT= A and AT = . ancl 3. No. = ··· IJa. then A2= [d~ d~ ~1. t his is the maximum number of distinct entries the matrix can have.s whose diagonal entries are either 0 or 1). 0 8.e 1 . if i = 1.1) + 1 rd~ d~J. (c) A is symmetric since (d) A is not symmetric s ince a 1 . 0.0 or 1 for ·i = 1. If A only if d. i} = f(i.ota.1) + · · · + 2 + 1 = n(n 2 1) entries lhat lie on or above the main diagonal. A is skewsymmroetric. (b) A is not symmetric . if and D7.1} n(n.ymmet.. If D == I= + 1 = n(n. n. for each i.since a. j.ions above the 1. a. and a 2 let = = :.2 or . 04.DISCUSSION AND DISCOVERY 101 D2.3 for i = 1 and 2..j)} is symmetric if and on1y if f(j . etc . d2 :/.. thus A 2==. if A and B a.= j . tOtal of eight such matrices (3 x 3 matrices whose diagonal entries are either 0 or 1) There are a t otal of 2n such matrices (n x n. In fact.. = In general. The maximum nwnber of distinct entries can be attained by selecting distinct positive entries for the n(~.s n . For a .I ) ':.:.l of fonr s uch matrices (auy 2 x 2 diagonal matrix ~hose diagonal entries are either . In a skewsymmetric matrix.. 0 = [d'0 dl ].i for each i. + 2 2 09.e.jJ 2 .. 2i2 + 2j3 ==a. then AD = ldt a1 d2a2J where a 1 and a2 are lhe rolumns of A. Thus there are a total of n + + (n. If A is both symmetric and skew symmetric.
False. if . If A and 1J are symmetric. . and A= l : 0 ] f . ) ( c) True... dz .d. Thus is it not possible to have A. (But if A is square and AAT is invertible. D lO. (a) example[~ ~] [~ ~) = (.. d. since this would imply that A~ = 0 for all j ?. (d) 1'rue.11). For example. then A is invertible by Theorem 3..6. This shows that an invertible matrix cannot be nilpotent. For example if A = G~) and B = [~ !] . . equivalently. A 12 =A.B)T = A+ B = AT  BT = A. .2.then A + B = G~) is symme~r~~: p(d. See Theorem 3. .5. If Ax= 0 has only the trivial solution.. (c) True (assuming A i= 0). in the 3 x 3 case...ich is always square) is invertible or not.!. (d) True. 2. diagonal matrix (both symmetric and triangular). 3.11. WORKING WITH PROOFS Pl.) Un 0 0 is also a. for every k = 1.B . I is invert)ble but I 1 = 0 is not invertible.1D d2 1 0] where d 1 . dt 0 .. For example. .n of D must be nonzero. . . If A 3 = A..2. A 9 =A . :::: 0 for any positive integer k . a nilpotent matrix cannot be· invertible. . A . If A is invertible then Ak is invertible. But if A is invertible then so is AT {Theorem 3.. then A 1 ' = A and BT = B. . Thus. : 0 . . then the tliagonal entries d 1 . . Thus A= [~ :.1.] (b) False.~ ~J (b) True. then A must be a diagonal m atrix. then A is invertible. (a) False. thus Ar x =a has only the trivial sol ution. .. then .l. 0 0 ~].. and kA are also symmetric. If A is not square then A is not invertible.) P(~. (AT )T = AT It follows that (A+ B )T ~ AT + BT (A . ..102 _!..0. d 2 =1 0. Although described here for the case n = 2 .. and thus Ak =1. If A is both symmetric and triangular. d2 0 o o . we have (e) True. . A+ B . (e) False.B (kA )T =kAT th11s t. it doesnt matter whether AAT (wh. k.hc matrices = kA Al' . be clear ' = 1. [ Chapter 3 A = d~ that the same argument can be applied to a square matrix of any size .46 = A. For Dll. . and so p(A) = ['(~. it should.
then D has a row of zeros and thus is not.8 implies that A is invertihle.d11 are nonzero.2. n. Our p roof is by induction on the exponent k . . Suppose the statement is true for n = j. thus t he statement is true for n = 1. Our proof is by induction on the exponent n . }.11 implies AT is !::1vertible. If A is invertible. by forward substitution. 'rhen and so the statement is also true for k = j + l.. and AT A are eith~p all invertible or all singular. invertible.. then ( An)T = A 11 for each positive integer . then Theorem 3. It follows that A . We will solve Ax = b by first soiving Ly = b for y. Step 2 (induc~ion st ep).e. Y2 = 1. ~ = d. d2. . Q 0 d~ I 0 0 is inver tible with n' = }.. . P5. 0 0 ] .. These two steps complete the proof by ind uction.. 1 [~. Step L Since A is symmetric. and t hen solving Ux = y for x. where j is an integer ~ 1.l ~1 0 0 ~. P 3. we have (A 1) T == A'~' = A = A 1 . .. thus t he products AAT and AT A are invertible as well. 01 0 On the other hand if auy one of 0 0 :i.7 1. if AT == A). P 4. If d•. 0 0 . Step 1. then Theorem 3. thus the statement is true for k rl n l = I. AAT. :l [ ~ 0 0 • · · dn [~:..] ~ . We will show t hat if A is symmetric (i... These two steps complete the proof by induction..2::: ~:. Then = (AAi)T = (Ai )T A1' = AJ A= AH 1 and so the statement is also true for n = j + I. Suppose the statement is t rue for k = j. then D= :. The system Ly = b is 3yl =0 2yl + Y2 = 1 from which. the diagonal entries is zero.7 103 P 2. we obtain Y l = 0.. EXERCISE SET 3. where j is (Ai+ l f An integer ~ 1.3.EXERCISE SET 3. We have D = D 1 = d ~o:·] = [d J: I: [ 0 . Step 2 (induction step). if either A AT or A T A is invertible. On the other hand.
we first solve Ly = b 2yl yl = .104 Chapter 3 The system U x =y 1s x1. T he solution of Ly = b is Yt X . Y2 ::::: . The solution of Ux = y {and of A x = b ) is 5.i4) 3 . It is easy to check that this is in fact 2. It is easy to check t ha. T he matrix A can be reduced to row echelon form by the following operations: The mult. We will solve Ax = b by first solving Ly = b for y . 3yx 2yl = 3 + 4y2 Y2 = 22 .t 4 . = 3. and then solve Ux = y To solve the system Ax= b where b for x : The system Ly .2x2 X2 =0 = 1 from which. The solution of Ly l X 2. Y3 = 11. x 3 = .4yl  + 2y3 = 3 from which. Y2 = 5. and then solving Ux The system Ly = b is =y for x . It is easy to check that thi! is in fact the solution of . The system Ux = y is X1 1 = . YJ = . Y2 = ~ The solution of Ux = y (a.2X2 ··· X3 = 1 x2 + 2x3 = 5 X 3 = 3 from which. x 2 = .nd of Ax= b ) is x 1 = ~.ipliers associated wit h these operations are~~ 1.3.1. Y2 = 5. and ~ . = 2. = 1.1 from which we obtain x 1 Ax. The sys tem U x = y is X J .'1· =b is y 1 = 3.ill X 37 X _ 11 l 14 ' 2 .in Yl = 1. x2 = 1. by back substitution. =:::. thus A = LU = [ is an LUfactorization of A. = 2.1. by back s ubstitution. by forward substitution.:: b is = r=~J .1. . b. we obtain x 1 the solution of Ax = b .i4. we obtain :x: 1 this is in fact the solution of Ax = b .2 + 3y2 = + 4X2 = X2 2 from which we obtain y 1 =:=. 1 2 0] [10 4] 3 1 for y . obta. 3.3. X2 = 1.
s. = .1.2y2 = 4 = .1 = 0 1 u The multipliers associated wilh these operations arc thus 4. y. . Jt is easy to check that this is the solution of 8.here b = r~~].1. 7. x3 = 0.: I~] is A= LU = (: ~][~ ~] . y·~ = . X1  X2  X3 xz. Y> = I. An LUdecomposition of the matrix A = Th• solution or Ly [!~ ~~1 0 4 26 is A = LU = [ ! 0 0 4 ~1 [~ . xz = 1. 4 . x2 . .1 ~ 0 2 4 To solve the •ystem Ax for x: =b w ha•e b = [=~] .x 3 = X3 = 2 I = 0 from which we obtain x 1 Ax=b. The solution of Ux The solution of Ly = b. is Yl = 2. 0 (for the second row).' 2 = 1. =y (and of Ax = b) is x 1= 4. 1. The matrix A can be re<luced to row echelon form by the following sequence of operations: A= [_~ 2 2 5 1 l 2] [ 1 1] [' ·1 . Y2 = 1. X 3 = 0. = 1.1 4 2 7 0 2 2 2 7 0 2 0 2 1 5 ~] 7 [~ 1 4 l] [' 7 1 1 1 0 0 I] [' 1 1] . YJ = 0. wh•. ~. y. The solution or Ux = y (and or . aud ~i A = LU = [ is an LUfactorization of A.7 105 6. . An LUdecomposition of the matrix A= [ . Yl = 0. b ~ m. 1..1 7 5 0 0 0 1 .. Ax = b) is Xt = 1. we fi. = 0.2 0 1 0 ~]· l = b.2 y! from which we obtain Y1 The system Ux = y is + 4yz + 5y3 = {) = 2.EXERCISE SET 3. and then wive U x =y The system Ly = b is 2yl .olve Ly =b ro...
and ~. thus ALU [1~ 0 3 . x 4 solution of Ax = b.1 ~ [~ ~1 u 0 1 0 . 0.:>tcm Ux = y is x1 = 3.1 0 0 2 1 0 is an LUdecomposition of A.]. x 3 = 2. x 2 = 1. 0 (for the third row).1 3 2 2 1 0 1 0 0 l 0 0 0 l 2 1 0 1 1 0 0 1 0 0 01 [1 2J ~ 0 1 0 4 1 0 1 0 0 0 0 ~1 ~ [~ .106 Chapter 3 9. Y3 The sy. 2.5. 1 .we first solve Ly  b lory. 0.5 + 2x4 = + X4 = X4 xz X3 3 3 1 = from which we obtain x 1 = 3. == 3 + 4y4 = from which we obtain Y1 = .1 3 0 1 2 0 1 1 0 1 0 0 0 1 1 ~1 ~ ~1 ~ .· The multipliers associated with these operations are 1 . = 1. The matrix A can be reduced to row echelon form by the following sequence of operations: A= [~ 0 0 0 3 1 0 1 2 2 1 [~ [~ 0 1 1 0 1 2 0 1 ~] [~ ~1 [~ 1 ~ 0 1 .•"!.:. and then solve Ux . 1. ~][~ = 1 0 0 0 1 0 ~] To solve thesystem Ax= b where h for x : The system Ly = [.y =b is yl 2yl == 5 + 3yz Y2 1 7 + 2y3 Y3 1. Y2 = 3. ~. ~ . Y4 ==  X3 =. It is easy to check that this is in fact the .
An LUdecomposition of !1 = ['' 12 ~ ~ 2 ~ is 0 1 4 .y ~ej"ror. t  x_=.of x z: T he sys tem Ly + 4y2 Y2 = 1 4y1  + 2y3 = 0 = 0 from which we oblaiu !11 = 0.. (1) Comp~_!a~ion ()f. L Y3 = ! .._... = 1 . = 0. The soluUon of Ux ~ Y< y (and of Ax= b) is x 1 .EXERCISE SET 3.. y...~L:=:J .6 l "3 :r.._.. Y> == l.5 0 0] y. The ~ 0 ro!ution o[ Ly = b. the matriX".. X4 11.~..= xi. ?._ ?._ ]·: The sys tem ~ 0'_::: .!.?fL2L.L xz = 1. ~ m = e3 is 3yl 2yl + 4y2 4Yt . ~ l· Thus x.1 1Sofitaiiiea·E'y solvTng ' tne··syst~tn Ax·= ei for x = x. Le!.3  from which we obtain x 1 = .:sr~ Yj~ ~<t _hen· sqlving U'x=Y.~. )/2 ::. x3 = TI· Thus X 1 = = e2 :1yl 2yl is = 0 [=!] ~ (2) ComputatioP. = (3} Computation of x 3 : The syst em Ly 0 == 0 + 2y3 = 1 .. Tl!_e...7 107 10. 1 10 .A. Using the given LUd~~ositi~& ~ wyj A~ _Lli!~ oy llrsf'solviiig ~I.. ~ ~.2x2 X3 xz + 2:r3 =! X3 = 4 fmm which we obtain ._. elt e2. wh"e b = [lis = ! ~. Y3 = X1  TI.1 ~VI ~ = 1 2yl + 4yz =0 4yl . Y'2 = .19..J. A= LU = [' ~ 0 4 0 0 0 2 0 1 1 j [~ 2 1 0 3 1 0 0 0 0] .. x 2 = = !. X3 4...Y?.(f?.:. Then the system U x = Yz is xa .. .Y2 + 2yJ = 0 from w hich we obtain Yt = ~.~ j. ~ .... Then X3 the system Ux = Yt is 2Xz •r ~2 = ! 7 il T • 2"' . ~e the standard unit vectors in R3 .)__ ~~~ jt~__ c?J'!IDP.~_.
2.c reduced to row echelon form by Lhe followin g sequence of oper ations: A:. e 3 of! the s tandard unit vectors in R 3 • The jth column x i of the matrix A . 2. we conclude that A.sociated multipliers are ~.t l ix il can 1. is y ~ 3 a nd the.. is x. as a result of these computations. ~ .. The sdu tion of L. x. ~ !. This leads t. y 3 = ~· Then t he system Ux = YJ is X3 x1 . ~ [ =~] · ~ 7 'l _.:: H [~ i 1 1 0 2 ~] 2 t 1 11 0 lj t r 0 [.o the factorization A = LU where L = [~ 0 ~1 If instead we prefer a lower 2 1 l t. Let e 1 . e 2 .ri<mgular factor t hat has 1s on the main diagonal. this can be achieved by shifting the diagonal entries of L to a diagonal mat rix D and writing the factorization as .i l * t. x.c. ! ll [~ 2 0 2 2 I + 2 1 1 2 :] t 2 1 0 I :] =U where thfl a._] 14 r. we do this by . io y.1 = [x1 !] 1 .1..J.first solving Ly = e1 for y = y 1 . and then solving Ux = Yj for x = x. ~ m. y . Using the given LUdecomposition. ~ ~l . ~ r::J· The sohotion of Ly ~ c. The solution of Ly ~· e 3 is Yl  m .o:. • nd the solution a[ U x ~ [ [:]. ~ 12. Thus x. y 2 = 0.olution of Ux = Y2 is x. The nta. 1 (for the leading entry in t he second row) .l . •nd the solution of V x ~y 3 is x. 7 13.L ~ obtained by solving the syst em Ax = ej for x = Xj.2. ~ [~1] ~ 0 l Finally.2x2 x2 = 0 + 2x3 = 0 X3 = ~ f•·om whkh w' obtain x.108 Chapter 3 from whieh we obtain y 1 ""' 0. ~ . y . and . .
Y3 = 0. the syst. 17.l b can be written as ~ 3 1 0 1 5 We solve this by first solving Ly = plb for y. y3 x1 = 12. permutation matrix. pt ).x = p . the system p . then the resulting matrix can be reduced t o row echelon form without any further row !nterchanges. it is obtained by interchanging the rows of J. the second row is not a. (b) T his is not a. Using the given 1 0 i decomposition.ain y 1 = 1.'Ax ~ P ' b whe>·e p.' ~ P ~ [ o 0] .7 109 14 .z. X3 = 0. 16.em pl Ax = plb can be written M The solution of Ly = p . y 2 X1 = ~' X2 = 0.EXERCISE S ET 3.I b is Y1 = 1 Y2 = 2 3yl . The system Ax~ b i<equivalent top> Ax ~ p > b where p> ~ P ~ ~ LUx= [ [! :]. '!/2 = 2.5y2 + y:. The solution of Ux = y (and of Ax= b ) is 19. = 0. x:~ = g..~ = 5 from which we c•bt. If we interchange rows 2 and 3 of A. This is equivalent to first multiplying A on the left by the corresponding permutation matrix P: 3 [3 1 1 1 0 2 1 2 :1 J . Using the given decomposition. The system Ly = p . t he system Ux = y is 1 4X3 = 2 17x3 = 12 2x3 = + 2x2 + X2 + from which we obtaiu Xt = i~ . (a) and (c) are not permutation matrices. This is the solution of A x = b. (c) T his js a permutation matrix.l b is y 1 = 3. and then solving Ux = y for x. (b) is a permutation matrix.l A. row of h. 4th. A = r~ ~~ ~~ = r~ ~ ~ r~ ~ ~ r~ ~ 2~ 2 4 l1 1 2 2 1 1 0 0 63 1 0 0 1 1 = L'DU 15. (a) This is a permutation matrix. The system Ax (\ 1 ~ h iseq uivolent to p. it is obtained by reordering the rows of 14 (3rd. Finally.i~. x2 ::::: . 2nd . 18.
he matrix PA. since p. 20. = .=[~zlthis decomposition ca.~ .110 Cha~ter 3 The reduction of P A to row echelon form proceeds as follows: PA = 3 0 [3 1 2 1 This corresponds to t he LUdecomposition 3 0 1 2 PA = 0] [3 0 0] [1 1 = 1 0 2 3 0 [ 3 1 0 1 0 0 of t. .:<tn also be wri tten as A "" P LU. this dccompo~i tiuu <.] 1 a. X3 = 3. This is equivalent to first multiplying A on the left by the corresponding permutat ion matrix P: The LUdecomposition of t he matrix PA is P A = LU = [~ ~ ~1 [~ ~ 2 0 3 0 0 . the solutjon of Ly = Pb is Yt Ax = b ) is x 1 = .n also be '~~~~ ~tlten as A = P LU. then the resulti ng matrix can be reduced to row echelon form withoul any furt her row interchanges. 1 The system Ax =b = ! is equivalent to P Ax= P b = and . using the LUdecornposjtion obtained above. x2 = ~.nd the corresponding FLUdecomposit ion of A is Since P 1 = P .1 LU Note that.~. y 2 = 2. y 3 = 3. this can be written as PA x == DUx == [3 o olll ·4 !ol [x [2] = 1 0 2 0 0 1 x 2] 4 Pb 3 0 } 0 0 } X3 1 Finally. or to the following P LUdecomposition of A A= 1 0 0] [3 0 0] [1 [0 1 0 3 0 1 0 0 0 1 0 2 0 0 ~ 1 0 0~] = ~ p . and the solution of Ux = y (and of If we interchange rows 1 and 2 of A.
P~) .::::: 667 .5 s.000 Gback wt>rd = 10 9 = (105 )'2 10. X3 = 4.11 0 2 .he corresponding reordering of the rows of A. 2nd.] [ ::] = [ 0 0 1 X3 2 ~1 = Pb The solution of Ly = Pb is y 1 = 5. from Example 4.ble to execute at least ~6: = 1334 gigaflops per second.3 12 7 1 . from Table 3.16.r<l = ~n 3 X 109 = ~(10:.VS of the identity matrix 14 (4th . The rows of Pare obtaine<l by reordering of the rO'I. this can be WTitten as PAx = LUx = [~ ~ ~1 [~ 2 Q 3 Y2 1 .1 .trix. ) 3 n2 X X w. since a =I= 0. (a) If A has such au LU decomposition.rd phasE>.re approximately Grorwa. we ha\'e y= b. in order to complete th~ task in IE'.n be written us This yields the syslem of equations x = a. 21. w:z. c wy + z =d and . 22. is X1 = .9 = x s X 106 . and approximately 10 s for the backward phase.s is approximately ~ x 10 3 + 10. y = b. = c. the number of gigaflops required for the forward and backward phases a. 3'd.9 = 10 Thus if a computer can execute l gigaflop per second it will require approximately 66. using the LUdecomposition obtained above.be) a [: !] = [~ ~][~ ~] DISCUSSION AND DISCOVERY 01.DISCUSSION AND DISCOVERY 111 The system Ax = b = [_ !] is equivalent to PAx = Pb = [_~]and. this system has the unique solution x =a.1.67 x 104 s for the forward phase. the total number of gigaflops required for the forward and backwo. (b) If n = 104 then. it ca..::::: 670. Thus P is a per mutation mn. (b ) florn the above. X2 = 5. = ~' y3 = 4. Thus.ss than 0. and the solution of Ux = y (and of Ax= b ) (a) We have n = l 0 5 for the given systern and so. Multiplication of A (on the left) !. a z= (ad. thus PA= 3 . a computer must be a. 7.y P results in (. 711 = .
7 + 72) .8 . 1} is an odd pA rmuta.~2 · {5.2 c. 4 .42ast· {4. 3. The signed product is al4ll22llJS043ll51· {5.1) . The signed product is +ats<124a33a.(12+ c3 (c. 2 1 = .(18 + 140 + 32) = 190. . 3.42 + 240). 1.20 .z3a34a.c• +c3 16c2 +Be.(20 + 8· + 6) = 45. T he signed product is +a 11 a:~!a32 a43ass· 11 8 . 5. The signed product is +au ana33a44a.n odd permutation ~3 interchanges).7 = ( . 4.2) . 5} is an even permutation (O interchanges).110 = 65 1 2 1 8./6) = 3v'6 5. 5} is an even permutation (2 interchanges) .S 7 2 1 = (5){2) .123 9 4 3 10 . 3 7 2 5 2 = (O. 2.3)(a .2 = (a. The signed product.5 + 42)  {0 + 6 + 35) = 37 .n even permutation (2 interchanges) . T he signed product is a 1 4a. (a) {b) (c) (d) {e) (f) {4. 4.CHAPTER 4 Determinants EXERCISE SET 4. 1: (4)(2) _ (1){8) = s. 3. a. is atsa.l)) .3 5 3 a .2) = 12 + 10 = 22 2. IV: ~I = 2 5 3 R (J2)(/3) _ (4)(. 7 6 1 2 = (. 3.(5)( 3) I I = (a2 Sa+ 6) + 15 = a2  5a + 21 6.(0 + 135 + 0) 9.21a. 5.33a4sa.(7)(7) = 10 + 49 = 59 4. 1} is an odd permutation (3 interchanges). c 2 4 4 1 c2 = {2c l6c2 + 6(c . 4. 2.190 = 0 4 4 2 1.czas1· { 1.~~· {1 .4 3 0 .1 1.l6) 2 = . 3 l 1 5 6 1 0 .(5)( . 2} is a. 2.1 11. 1} is a.3.41 = .8 = o 3. 2.1 0 5 = (12 + 0 + 0) .tion (3 interchanges). 2. 1~ :1 = ~I = 7 (3}(4). 3.
c11 = 29 4 = 11 Ml2 = ~~ 11 = 2l. Mn =1 .x) + 3 = x2 + x + 3.\.tic equation are x = 3 {33.1 119 1 2.3 = 0. * + :r + 3 = .Ct2 = . The 18. Thus det(A) = 0 if and only if/\= 1 >.x 2  2x.e. We have = 5. c33 .2 !I= G21 ~11. and (d) are even. det(A) = (>.= .\ 17.1. 1 M22 =~~ !I= =I~ = 13 M23 M3t = 1 2 7 _ 31 = . (c). . 1 0 2 x ] 3 3 6 X = ((x(x 5 5) + 0.21 4 13. C32 == 19 === 1 M33 == 19. (a).1 3 1 7 (b) 0 0 1 2 0 '1 1 2 0 0 =0 3 0 3 8 ~I 7 (c) 0 0 0 2 0 4 2 0 = (1)(1)(2) (3) =6 1 1 0 a 1 0 0 0 2 20. i.19. det(A) = (. (b) .Exercise Set 4. C31 = .2)(>. (a) 0 0 2 0 0 0 2 ~ = 23 = 0 8 (b) 0 0 2 3 0 ~ = (1 )(2)(3)(4) = 4 24 0 0 .\ 1)(>. if 2x 2 3x . .2.18 + 0) = x 2 . (e) and {f) are odd.19 1 M32 _ 31 19.1 .\ = .2x  Thus t he given equation is v<~. A = 2 or .18) ..3)(2)( 1)(3) = 18 100 200 21. 1 1=_1xl = x( l . or >. Thus dct(A} = 0 if and only if A "" 1 or .23 {c) l 40 2 10 0 0 u 3 = ( .2 + 2>.3. = 1.c22 M13 = 21. = 11= 29.lid if and only if :t 2 roots of this quadro. 16. 13.\ = 3. 15. C1 :~ = 27 = 5.1)(>.. or . . C23 = s = 19 M21 . + 4) + 5 = >.1 0 2 I () 0 = (1)(1)(1) = .( 3x. and . 14.3 =(. y=3 1 19. (a) 0 0 0 0 1 . + 3}. + l }.A= .
(a) M 13 = 4 1 14 = (0 + 0 + 12)  (12 + 0 + 0) = 0 0 13 =0 4 1 4 (b) 2 M 23 = 4 4 4 1 6 1· 14 = (8 .1)(5) + (4)(19) = 152 det(A) = (l)Cll + (l)Ct2 + (2)013 = (1)(6) + (1)(.48 4 .12) + (2)(3) = 0 det(A) = (l)011 + {6) C21 + (3)Ca 1 = (1)(6) + (3)(2) + (0)(0) = 0 det(A) = (3)021 + (3)022 + (6)023 = (3){2) + (3)(4) + (6)(1} = 0 det(A) = (l)Ou + (3)022 + (I)Cs2 = (1)(12) + {3)(4) + (1)(0) = 0 det(A) = (O)C31 + (l)C32 + (4)C33 = {0)(0) + (1)(0) + (4)(0) = 0 d~t( A) = (2)013 + ( G )02J + {4)033 = (2)(3} + (6)(.120 Chapter 4 22.1) + (4)(0) = 0 26.M41= .66 . Using row 2: det(A) . C32 = 30 (c) .1 3 1 2 6 (ct) M21= 1 1 o 3 14 = {O+I4+18)(0+2 .024 = 0 25.C41= l {b) M44 = 13.21) + (7)(13) + {1){19) == 152 det(A) = ( 3)CJI + (l)Ca2 + ( 4)Caa = ( 3)( .Ca1 3 =0 Ma2 = Ms3 = 0. (a) (b) (c) (d) (e) (f) 27.2)(21) + (3)(27) = 152 det(A) = ( I)C ll + (6)C21 + (3)C31 = (1){29) + (6)(11) + (3)(19) = 152 det(A) == (6)C 21 + (7)C22 +(1 )023 = (6)(11) + (7)(13) + (1 )(5) = 152 det.2)C12 + (3)013 = (1)(29) + (.56+24)(24+568)=96 1 2 l 6 0 23 = ·96 (c) Mn= 4 0 14 =(0+ 56 +72)(0+8+168) = 48 0 22 = . C21 = 2 M12 =12.l.C3 2 =0 M1s M2a = 3.(A) = (2)C12 + (7)C22 + (1)032 = (2)(. Cn = 4 O. (a) (b) (c) (d) (e) (f) det(A) = (l) Cu + (. 044 = 13 (d) M24 = 0. C1a = = l . Mll M21 =I~ ~~ = 6.19) + ( 1)(19) + (4)( 19) = 152 det(A) = (3)013 + (l)Czs + (4)033 = (3)(27} + (.30.012 = 12 = 4.j = (5){15+ 7) = . (a) M3z = . Oza = 3 1 M22 M31 21 = I~ 6 = ) O. cn = 6 = ~~ ~I= 2.40 = (1)(1~ ~I)+ (4) (1~ ~D = (1)( 18) + (4)(12) = . Cas = 0 0 0 23.42) = 72 2 c21= n 24. Using column 2: det{A} = (5)~=~ 28.
A c 1 tr(A) 2 = [a +be d 2 co+ c c + ab +bdl 2 b d 2 1. and tr(A ) J =a2 + 2bc + d2 . 36. = ±1 .cosO sinO+ cosB 1 for a ll values of 0.2. Thus the determinant. then tr(A ) = a + d. bt) and P2(a2. By expanding along the t hird column.b3 ).3)(128)  (3)( 48) = 240 32. . Thus l tr( .a2.2 +Zbc+ d2 a +d =2 ((a+d) (a +2bc+d ) =ad bc=det(A). the equation a1 a .. =sin 2 fJ+cos 2 B = 1 cos 8 smB = [~ ~c:~ bf) a.)(a2bi?)(a3bj3 ) where {j 1 . det(A) = 0 33. b2). Jet( A) ::::: k 3  8k 2  IOk + 95 3 5 3 2 2 . we have sin8 s inO.(a . If ur = (a1 . X y D4.DISCUSSION AND DISCOVERY 121 30.]J} is a permutation of {1.c)e = 0. 34. AB sinO cosO cos B 0 0 = (1) I sinB cosO' . with half of t hem equal to +1 A 3 x 3 matrix A can have as many as six zeros without having det(A) = 0. If we e xpand a long the first row. will be zero. D3. then each of the six elementary products of uvT is o f Lhe form (a1b. For example.2 bt b1 1 = 0 becomes 1 and thjs is an equation for t he line through t he points Pt (a 1 . Sin e~ Lhe the ~till\ product of integers is always t\n integer . A be a diag0nal matrix with nonzero diagonal entries.~.2 0 = (.iz. 3 5 Using column 3: det(A) = ( 3) 2 2 . D5.(3) 2 2 4 2 10 3 :n. and lb 0  this is equivalent to t he condition that e d · f cl= b(<l . thus each of the elementary products is equal to (ata2as)(b1b2b3).f ) . Thus AB = BA if and onJy if ae + bf = bd + ce .ry products is an integer D2.nd BA = [0 ~ bd~ ce]. each elemf!ntary product is an integer and so of the elementa. I DISCUSSION AND DISCOVERY D 1. The signed ele mentary p roducts will all be ±(1)(1) · · · (1) and half equal to 1 .1) 2 tr(A2) I I= 2 I 1 2 2 a+ d 1 1 2 a.3}. a 3) and ' vT = (b 1 .. If A = [a db] . let.
P2.x 1 = ~. j2. total of (k + l )k! = (k + 1)! p~r mutations of S . we have: Xl Yt v2 l/3 ·Thus the three points are collinear if and only if x2 XJ 1 1 1 = 0.x1) wh. Thus the statement is true for the case n = 1. (a) det(A) 3 1 = det{AT) = 2 3 9 (b) det(A) = d et(AT) = 17 1 3 0 331 3. k! possibilities for the second choice .Yz) . We wish to pwve that for each positive integer n. by the hypothesis. Our proof is by induction on n .2 12 0 = (3) (!) (2)(2) = 4 0 .(30 . .X2Y1) = 0 On the other hand.6 .. EXERCISE SET 4. Let S = {j1. without loss of generality.0 = 5 5 3 2 det(AT) = 1 3 2 3 = (24 .XJYl) + (XJY2.Xl to (Y3 . )2. The latter condition is equivalent ~ :1!2 . .2 1. where k is a fixed integer ~ 1.Yt)(x2 . j .24) = .24) = 5.11 3 (b) det(A) 2 4 6 1 = 5 (24 . There are k + 1 possib ilities for the first choice and . It is clear that there is exactly 1 = 1! permutation of the set {j 1 } .6 . (a) det(A) = ~ ~ ~~ = 8 3 = 11 = 2 1 1 dct{AT) = 2 1 3 4 11 = ..(30..0 = 5 4 6 2. These two steps complete the proof by induetion. Step 1. (a) 0 22 2 0 0 .. we may assume that the points do not· lie on a vertical line.9 · 20) .9) . Suppose that the statement is true for n = k.Then a permutation of the setS is for med by first choosing one of k + 1 positions for the element )~.} of n distinct elements.+ 1 .xt) = (Y2.. If t he three points lie on a vertical line then XJ = x2 = xa = a and we have yz 1 :r:s l/3 1 x2 = a l7l 1 a yz 1 a !13 1 = 0.8 3= . and then choosing a permutation for the remaining k elements in t he remaining Jc positions.X:.ich can be writ ten as: (X2Y3 .q .5 .20 . Thus. Thus there are a. In this case the points are collinear if and only if %. T his shows that if the statement is true for n = k it must also be true for n = k + l.(XtY3. )k+I }.122 Chapter4 WORKING WITH PROOFS Xt l/1 1 Pl. . . .Y1)(:r3. t here are n! permutations of a set {it . Step 2 (induction srep). expanding along the third column.jk.
40 (b) <.] 0 0 5 and.224 = (4) 2 (14) = (4) 2 det(A) (b) det(3A)= 63 = (3) 2 (7) = (3) 2 det(A) 9 .3) d e 4i • 1g 4h = 6 I b c f = (4i a 12) d g b h c e f = (.6) .8)((20 . we have det(A) = 0. we have det(B) ::::: 0. lf x = 0.6)= .1 + 36) . . (a) del(4A) = .4 .12 9 6.tl (c) 6 7.! 4g 4h e h = (3) d 4i 4g 1 i :+ Ia 4h b h c = (. (c) 0 (proportional rows) (a) e f g h i b a a = (.1) g c d b h c a b e h c i e 1 (l = (.6 i (b) 3a 3b 3c .1 l 9 2 3 = 0 5 3 1 (~t and third columns are proportional) ~c) 3 0 0 17 4 1 = (3)(5)(2) 5 0 = .8)(55 + 1) = 448 8.(48 . .16. (a) 2 d 5.1'2)( .64 + 120) = · ·440  8 = .24 = 40 2 6 4 2 8 .EXERCISE SET 4. since the first and third rows are rroporo 0 5 tiona!.15)) 5 = (.2 :1 = .8) 3 2 4 1 = ( . the given matrix becomes A ~ [~ ~ ~] and.6) = 72 (c) a+g d g b I· h a f i = 3c d g e f i 3a (d ) d g4d 3b e 3a 3b = d f.1)(1) b c e f d g I =(1)(1)( .2 123 (b) 3 .4/ e 3cl = (3) d f a b e h c f l h.. (a) det(2A) = . If x = 2. since the first and second rows are proportional. (a) 6 (b) 0 6 .30 (b ) 0 (identical rows) 2 4.(6 + 8 .2) 3 det(A) = ( .1 3 2 4 1 = 4( .4e g h i I = (3)(t)) = 18 (d) . 1 . the given matrix becomes B = [~ i .4t18 2 ..1 (.10 22 det(A) =4 .d e .let(· 2A) = 6 2 = (160 + 8 3 288) .
 0 0 3 2 13 1 0 2 5 times the first row was added to the third row. We use the properties of determinants stated in Theorerr1 4.2. en = 17 14.2. 2 3 5 0 10) 1 = (3)( 1) 0 0 2 The second and third rows were interchanged. 1 :. (3) 0 0 ~times the first mw "''' ~ded to the third row. 11.2.2.2) 0 0 A fac tor of · 2 was taken from the second row.:.2 17 1 . If we replace the first row of the matrix by the sum of the first and second rows.13 times the second row was added to the third row. = (3)( 1)(12. Corresponding row operations are as indicated. det(A) = 30 =5 13. we conclude that b+c a [ 1 ~ +a b l b+a] c 1 det = det · [ c b+c+ a c+b+a] a+b+ a 1 b a =0 1 1 since the first and third rows of the latter matrix are proportional .2) = (2} 0 0 . 1 3 4 0 l det(A) = 2 5 = 2 0 2 5 3 2 . det(A) = 3 0 6 0 1 9 2 1 2 = (3) 0 5 2 2 3 0 2 5 ·1 1 2 0 1 3 2 5 A common factor of 3 was taken from the first row.124 Chapter 4 LO. We use the properties of determinants stated in Theorem 4. 13 2 0 1 3 1 0 2 = {.2 0 l 2 2 times the first row was added to the second row. det(A) = 33 . 0 l 2 3 1 = (. Corresponding row operations are as indicated .
12 times r ow 2 was added to row 4. det(A) = 1 5 1 2 9 2 3 1 3 6 2 .EXERCISE SET 4. ]7.2.6 .1) (~) 0 0 0 0 0 1 0 0 0 3 . Corresponding row operations are as indicated.2.2 6 8 1 2 1 0 0 = 1 0 0 0 2 3 1 5 times row 1 was added to row 2.~ was t aken from row 3.1 108 23 1 2 . = 2 3 1 .imes row 2 was added to row 4. . Corresponding r ow opera.1 t.l 3 '2 2 .l3 = 39 16.ions are as indicate'i.2.3 :i 1 1 3 I 2 0 0 2 l = (1)(~) 0 1 1 3 3 1 l l 0 0 1 . 3 0 0 2 1 l = ( . . l 3 2 1 1 I = (1) (~) ( . We use t he properties of de terminant s stated in T heorem 4. row 1 was added to row 3. 1 1 1 1 l 1 2 1 det(A ) = 2 2 1 l 3 l .t.3 1 12 0 1 = 1 0 0 0 1 0 0 0 3 1 9 . .3 l 3 .:. l t~ .1)(~) 1 1 3 3 I 0 A facto< of was from the first row.2 125 1 5.2 3 .2 0 . det(A) =6 0 1 1 We usc the properties of deter minants stated in T heorem 4.2 times row l was added to row 4. 9 . '2 3 3 2 3 0 0 = (. ] .9 0 . ~ times row 1 was added to row 4.1) 1 0 '2 I 0 2 1 1 1 1 l 2 1 1 0 3 l 3 l l :l I 2 3 2 1 3 0 T he first and second rows were interchanged.3 0 0 36 times row 3 was added row 4 _.3 1 1 1 2 3 2 1 times row 2 was added t o row 3.2. 1 1 1 2 = (.~) 0 0 1 2 2 A factor of .1 .~ times row was added to row 3.
126
Chapter 4
= (1)
(~) ( ~)
1 0
1 1
2 1
1
1 1 2
1 ~ 1
0
0
0
0
times row 3 was added to row 4.
!
0
= ( 1)
(~) ( ~) ( ~) = ~
bl bt + Ct
bz b2 + C2 bJ b3 + ca
18. det.{A)
=
al
 2
19.
(a)
a2
03
bt b2
b3
G!
+ b1 + C1
al
02
.:l3
02
+ b2 + Cz = as+ b3 + C3
bt
C)
C2
Add 1 times column 1 to column 3.
a1
=
(l~
<l3
bz bJ
Add 1 times column 2 to column 3.
C:J
a1 (b)
+ b1 az + b2
a3 + b3
a1  b1 (12  b'J a3  bs
Gt
C) C2 CJ
=
Ct
2a 1 2a2 2a3
Gl(1.2
Ot
CI C2
b2 (1.3 b3
J Add column 2 to column 1.
ca
Factor of 2 taken from column 1.
a1 
b1
=2
02
03
02 b2
GJ 
C2
b3
Ct
C2
C3
a1
 bl  b2
=2
02
,a3
GJ
 hJ
bl
Add 1 times column 1 to column 2 .
CJ Cl C2
= 2
02
(13
b2
03
Factor of  1 taken from column 2.
CJ
a1 + b1 t
20.
a:~
+ b2t
OJ
+ b3t
(a)
att +b t
a2t+~
a3t+b3
a 1 + b1t (1 t 2 )bl
a2
+ bzt
a3
+ ~t
(1  ( 2)b2
{1 t 2 }b3
Add  t times row 1 to row 2.
Gt
= (1  t 2)
+ btt
bt Ct
a2
+ bzt
b2
C2
·a3
+ b3t
b3
CJ
Factor of (1  t 2 ) t.Rken from row 2.
(1.1
a2
GJ
= (1.
t2 ) bt
Ct
b:z b3
C2
Add  t iin:.es row 2 to row 1.
CJ
EXERCISE SET 4.2
127
al
(b)
a2
a3
b1 + ta1 b2 + ta2
b:1
+ ta3
a1
+ rbt + sa1 la1 b1 bl c2 + r~ + sa2 = c3 + rb3 + sa3 a a UJ
c1
Ct
r2
+rb1 + sa1 C2 + 1'b2 + sa2 C3 + rb3 + sa3
Add t times column 1 to column 2.

a2
a3
b1 b2 b3
C)
c2
c3
Add s times column 1 to column 3. Add r times column 2 to column 3.
21.
1 det{A) = 1 1
X
x2
y
;;
x2 1 X 1 x Y2 _ x2 =(yx)(z  x) 0 1 y2 = 0 y  x z2 0 1 0 z x z2  x2 x2
x2
y+x z +x
1 X =( y x)( zx) 0 1
y + = (y  x)(z  x) (z y) 0 0 z y
xl
I
22.
23.
dct(.4) :... (a  b) 3 (a
+ 3b)
If we add the first row of A to the second row, the rMult is a lfla.trix B that h as two identical row$; thus dct( A) = dct(.H) = 0. If we add ~ach of the first four rows of A t o the last row, t.he result is a matrix B that h as a. row of z~ros: thus det.(A) = det(B) = 0
24 .
~5.
(a) \Ve have dct(A) = (k 3)(k ···· 2)  4 = k 2  5k + 2. Thus, from Theorem 4.2.4, the matrix .4 is invertible if and ~)nly if lc2  5k + 2 f:. 0, i.e. k # 5±
fTI .
(b )
We ha..e dE>t.( A)
A is invertible
= ( 1 )~~ ~~ (3)~~ ~~ + (k)~~ :1 . ,. 8 + 8k. if and only if 8 + Bk f 0, i.e. k =/' ... 1.
{h )
k=L.!
" 1
T hus, from Theorem 1.2.4, the matrix
~() .
(a)
k
i: ±2
~7.
(a) det(aA) = 33 det(A)
(c)
tlct(2A 1 )
''=
= (27)(7) = 189
=
(8)( ~)
(b) det(A
 1
1 ) = det(A)
=7
1
23 dct{A 1 ) 1 ( d) clet((2 A)t ) ~ det(2A)
=
¥
= (8){7)
1 1 = 56
. 1 (b) d et.t.1,t  1) = det(A} :;:;
1 z3 det(A)
:8.
(a) det(· A)
= ( l) 4 det{A) =
(1)(2) =  2
2
1
(c) det(2A'~') = det(2A)
= 24 det(A) =32
= [
(d) det(A 3 )
= { det(A)) 3
= ( 2)3 = 8
&.
We have
All~ [i ~ ~] [~ ~ ~] 2 0 0 2
2
10
!
l~ ~!] ; t hus
22  23
det(AB) =
2 6
10
4 15 22
5 2 6 = 0  23 10
4  5 3 9 2 2
= 21
3 9
2 2
) = 2(6  18) = 24
128
Chapter 4
On the other hand, det(A) det(A) det{B). We have AB =
= (1)(3)( 2) =
6 and det(B)
= (2)(1){2) = 4.
Thus det(AB)
=
30.
[~ ~ ~] [~ ~ ~1 = [3~ ~ 1~1 , and so det(AB) = 170. On the other hand,
0 0 2 5 0 l 10 0 2
det(A) = 10 and det(B) = 17. Thus det(AB)
= de t(A)det(B).
2 1 0
0
31. The following sequence of row operations reduce A to an upper triangular form:
The corresponding LV factorization is
H
2
2 9
8
3 6 6 6
~] ~ [~
2 9
2
2
1 0 12
3
9 3 1
0
1
}·[~
0
1
0 0
3 9 3 1
108
~] ~ [~
23
2
1 0 0
3 9
0
0
3
2 1
1]
0\·' 13
3
I)
A=
[!
1 2
G
6
'] 1 ~ = [~
1 2
0 0
1
0
12
8
 36
and from this we conclude that det(A) = det(L) det(U) 32.
= (1)(39) = 39.
2 l
I
~lf~
2 1
3
l
2
3 1  9
3
0
0
2
 1
0
1] =LU
 13
The following sequence of row operations reduce A to an upper triangular form :
[~ ~] ~ [~
1
3
2
I
l
.. l
'J
0 2 1
1 1 2
2
2 1
'2
1
l
2
~l ~ ;~ r
0
2 2
l
1
0 0
1 1
;]
2 l
4
[~
 2 6
1
2
1
2 1
3
_;]
2
6
0
0
1 0
The corres ponding L Udecomposition is
A=
[~
1
0
2
1
3 1 1 2
l] "' [~
3
0
0  1
1
0
~][~
1
3
0 0
2 1 1
_; ] = LU
0
a.nd from this we conclude that det.(Jt) = dct(L) det(U) :::: (1)(6) = 6. 33. U we add the firs t row of A to the second row, using the identity sin 2 B + cos 2 0 :::: 1, we see that sin 2 o det( A) = cos 2 o
1
sin 2 /3 cos 2 /3 1
sin 2 1 sin 2 o 2 cos 1 = 1
sin2 f3
sin 2 1
1
1 1
1 1
=0
since the resulti(lg matrix has two identical rows. Thus, from Theorem 4.2.4, A. is not invertible.
34. (a) .A=3or.A=1
35.
(b) .A=6or.A =l
(c) >. = 2 or >. = 2
Since det(AT) = det{A), we have det{AT A)= det(AT) det(A) = (det(A)) 2 = det(A) det(AT) = det(AAT ). (b} · Since det(AT A)= (d et (A)) 2 , it follows that det(AT A) = 0 if and only if det(A) = 0. Thus. from 'T'heorem 4.2.4, AT A i~ invP.rt.ihiP. if ano only if A is invertible.
(a)
DISCUSSION AND DISCOVERY
129
36. det(A 1 BA)
37.
= det(A I) det(B) dct( A) = det~ll) dct(B } det(A} = det(B)
llxii 2 IIYII 2  (x. y) 2 == (xi+ X~ + x~ )(yf T vi+ v5> (XlYl + XzY2 + XsYa)2
2
2
2
IYl
Jx
1
x
2
1
Y2
+ IXJ
YJ
X:J I
'YJ
+ lxz
Y2
:r
3
1
Y3
= (x1Y2 
:r2yd 2 + (x1y3 X3Y1) 2 + (x2Y3 X3Y2) 2
~ xiy~  2XtY2X2Yl
38.
+ x~y~ + xi y§ 2XJY3X3Yl + x~yf + x~y~ 2X2Y3X3Y2 + X~y~
[~ ~]
= 3 6
3
(a) We: have det
(b)
[~ ~J
=5 4 = 1 and det
= 3;
0
1
thus det(M) =(I){ 3)
= 3.
l 2 0
We have
2 5 o  1 3 2
= {2)~~
2
0
5
1 = 2(5  4) = 2 and
2  ·3
o=
(3)(1 )(  4) = 12;
th\18 det(M) =
8 4
(2)( 12) =  24.
39.
(a)
det(M)
=
12 1
~I
2 0 1 2 () l
3 5 2 6 2 3 5 2
1 3 = (fi + 4) 0 12 0 4
5 12
 . 13
= (10)(1) 112  4
 13
121=
1 080
(b) det(M)
=
~~
jo
I~
~I = (1){1) ,.,, l
DISCUSSION AND DISCOVERY
D l. The matrices ar~ singular if and only if the corresponding determ inants are zero. This leads to the system of equations
from whir.h it follows t.hat s
= 9~
1
9
and t =
1 4
4~
•
it i;, always true that det(AB)
D2. Since det(AB) det(BA).
= det(A) det(B) == det(B) d P.t(A) = det(BA),
= =
D3. If .4 or B is not invertible then either det(A) = 0 or det(B ) det(A) det(B) = 0; thus AB is not invertible.
= 0 (or both).
It follows that det{AB)
130
Chapter 4
D4. For convenience call the given matri'V A n· If n = 2 or 3, then An can be reduced to the identity matrix by interchanging the first and last rows. Thus det(An) =  1 if n = 2 or 3. lf ~ = 4 or 5, then two row interchanges are required to reduce An to the identity (interchange the first and last rows, then interchange the second and next to last rows). Thus det (An) = +1 if n = 4 or 5. This pattern continues and can be summarized as follows:
det(A2k)
= det(A:~k+l ) =  1 det(A2k) = det(Azk+t) = +1
fork = 1,3,5, .. .
fork= 2, 4,6, .. .
DS. If A is skewsymmetric, then det(A) = det(AT) = det(A) = (l)ndet(A) where n is the size of A. It follows that if A is a skewsymmetric matrix of odd order, then det(A) = det(A) and so det(A) = 0. D6. Let A be an n x n matrix, and let B be the matrix that results when the rows of A are written in reverse order. Then the matrix B can be reduced to A by a series of row interchanges. If n = 2 or 3, then only one interchange is needed and so det(B) =  det(A). If n = 4 or 5, then two interchanges are required and so det(B) = +det( A). This pattern continues;
=  det(A) det(B) = + det(A)
det(B)
for n
for n
= 2k or 2k + 1 where k is odd = 2k or 2k + 1 where k is even
D7. (a) False. For example, if A= I= /2, then det(J +A)= det(2/} = 4, whereas 1 + det{A) = 2. (b) Thue. F'tom Theorem 4.2.5 it follows that det(A") = (det(A))n for every n = 1, 2, 3, ... . (c) False. From Theorem 4.2.3(c), we have det(3A) =3n det(A) where n is the size of A. Thus the: stu.ternent is false except when n = 1 or riet(A) = 0 . (d) Thue. If det (A) = 0, the matrix is singular and so the system Ax = 0 has infinitely many solutions.
DB.
(a) True. If A is invertible, then det(A) =F 0. Since det(ABA) = det(A) det(B) det(A) it follows that if A is invertible and det( ABA) = 0, then det(B) = 0. (b) 'frue. If A ,.,; A  l , then since det( A  l) = det1 , itiollows that {det( A ))2 == 1 and so det (A) = (A) ±1. (c) True. If the reduced row echelon form of A has a row of zeros, then A is n ot invertible. (d) 'frue. Since det(A~') = det(A), it follows that det(AAT) = det(A) det(AT) = (det{A)) 2 :?: 0. (e) True. If det.(A) f 0 then A is invertible, and an invertible matrix can always be written as a prod uct of elementary matrices.
D9. If A = A 2 , then det(A) = det(A 2 ) = (det(A)) 2 and so det(A) = 0 or det(A) = 1. If A= A3, then det(A) = det(A 3 ) = (det(A)) 3 and so det(A) = 0 or det(A) = ± 1.
DlO.
Each elementary product of this matrix must include a factor that comes from the 3 x 3 block of zeros on the upper right. Thus all of the elementary products are zero. It follows that det(A) = 0, no matter what values are assigned to the starred quantities.
Dll . This permutation of the columns of an n x n matrix A can be attained via a sequence of n 1 column interchanges which successively move the first column to the right by one position (i.e. interchange columns 1 and 2, then interchange columns 2 and 3, etc.). Thus the determinant of the resulting matrix is equal to (1)" 1 det(A).
Since C has two identical rows. P2. The matrix of cofactors from A is C 2 6 = 2 0 0] . If x ~ [] and y = [ ] then. we have det(G) = (x1 + Y1)C1.1 = .t1 by replacing the jth row by a copy of tho ith row. 16 1 7 9 X2 = j. and B is the matrix that is obtained from A by adding k times the ith row to the jth row. Thus [4 6 0 2 6 4 4 6 = 4 oo2 adj(A) = [ G 0 0 2 o 4 ·1] 1 1 and A. Suppose A is a square matrix.3 1.adj(. using cofacto• expansions along the jth w lumn.2)Cj2 + · · •+ (a. and so det( R) = det(A). !I 26 13 = .3 131 WORKING WITH PROOFS Pl.EXERCISE SET 4.n + ktJ.2 ! ~] and At = a. 0 3 I 3 2 ~] 0 1 3. + · · · + (xn +yn)Cn.] 5.4) = .and det(A) s 3 = (2)(3) + (5)(3) + (5)( 2) = 1.dj(A) = [~ . Xt I! = 17 3 ~1 :::: 18 = ~. The matrix of cofactors from A )s C = [~ & Thus adj{A) = lr~ . EXERCISE SET 4.3)(0) + (5)(0) = 4. 1 3 ~. Then.in )Cjn = det(A) +kdct(C) where C is the mat. it follows that det( C) = 0.0 4 [2 6 4] [ ~o ~1 ~ .=16 8 .~ ~]· 2 3 2 2 3 ! ~]. adj(A) ~ [ ~4~ 29 0 12 6 ~] 8 A' ~ ~adj(A) ~ [ ~~ 12 29 12 0 1 ·2 l . + (xz + Y2)C2. and det (A) = (2}(2) + (. I] 1 0 0 2 4. expanding along the jth row ofB . we have det(B) = (ajl + kao)Gjl + (aj2 + ka.rix obtained from .
1 4 .1 . :r = .1 2 .55 z= 2 4 20 1 1 4 2 ==55 11 230 46 2 2 3 1 4 1 2 3 2 1 2 3 4 .1 .I X3 .1 2 .3 .l 2 .1 ._ I~! ~1 = 153 = 3 1 1~ ~I 6 1 y = !.1 2 4 12 10.1 2 4 2 2 ==1 2 1 2 4 3 2 2 . x= 20 1 4 2 2 3 1 4 .55 y= 2 1 20 3 1 2 4 =  61 .:.4 1·1 1 11 1 .~~.1 .423 x4 = 1 2 2 7 3 1 4 .11 4 Xz 2 0 0 2 38 X3 3 0 = 3 1 3 .1 I~~ ~~1 1 204 .132 Chapter 4 6..1 7.1 2 1 2 2 4 4 6 .32 14 11 .1 2 .1 2 4 2 8 3 Iz = 2 4 8 3 6 "' 6 2 3 5 3 ~.I 4 2 .32 ..1 4 8..=5 .11 1 0 = 4 1 2 4 0 2 0 1 0 . Xt = 6 2 4 5 3 3 .___ _.51 4 1 1 6 1 1 2 1 4 .t :3 2 .423 6 2 3 .1 4 4 1 1 3 1 2 7 9 .l1 = .1 0 3 9.1 144 =. XJ = 0 1 4 0 0 3 ] 2 30 = .2115 = .5 = 4 .1 .l 2 = = l 2 1 2 8 3 3 3 2 3 5 .:.1269 = 3 .= .1 I 1 9 32 14 11 2 7 3 I 4 1 2 7 .1 1 1 = 1 2 4 4 = ..1 =  40 .423 9 1 3 1 2 4 .2 .423 4 423 = .423 4 .1 0 3 2 7 2 4 =.3 2 1 4 1 1 3 .: .1 1 9 1 4 = .1 1 3 2 ...3384 = 8 1 ..t .11 0 3 .32 14 ll 1 4 1 9 1 2 .1 1 2 4 .423 Xz = 1 .2 2 . I t= .
xy .: .4) =k(k.4) x = 1 2k k .. 0 ] l 0 0 14.r2 .1 5 4 6 = .3 133 :r~ = 2 2 4 4 3 0 8 5 12 3 3 6 2 4 1 3 2 2 .= ..r2 y2] .1 •.k + 3) (5k .4) (2k .. t. Thus A is invertible if a.x 2 y = y(y 2 formula.EXERCISE SET 4.4) k(k.4) 2(k . Using the identity 1 + tan 2 o = sec 2 a:.l)(k .:ry y2 _ .an o cos2 a l 0 sec 2 a 15. . we have det(A!) n. and det( A) = cos ] 2 (} + sin2 8 = 1.=14 11.) = 0 1 foro :f.2 1 .4 )..3 = .s~n e c~e ~ sin9 0 . y= 1 3 2 2 3 l 1 cosO 3 1 3 1 1 4 . lc4~ ' ~ r 5 3 X 14k .nd only if y =I= 0 a.1) 3 4 2k 3 k 2k 1 2 l z= k(k.:(5k.2 X4 = 8 3 3 i2 6 3 2 2 2 ::.= 1 2 . for the inverse is A 1 x 2 ).3)(3k + 2)  y= 3{k2 .6) k(k.4) = . Thus A is invertible and A. We have d et(A) = 1i.he m a trix A is invertible and A.k 2k . ~ 2 + n1r. for these values o f 0 0 .1 1 4 1 1 2 4 2 2 4 2 1 3 .. [2k 2k k <1 k Thus the system has a unique i 1 2 4.= k (k .. ::: adj(A) = [ co38 . Thus. x= 2 1 4 3 .0 and k 3 3 2I] .3}(3k + 2) z=5k  3k 3 =I= 17. In this case the solution is given by: 3 k 1 2 (k.nd y l X ±x. and d ct(A) = k(k.sin(1 sinO cos @ 0 .3)(k .2 1 2 12. The matr ix of cofactors is C = [ . '!/ Y  2 ) [·x·y y 2 0 0 .33 =  41 13. The coefficienl matrix is 11 = solution if k #.1 2 2 1 1 3 1 3 21 3 = .3 16. The = (2 .4 ) y= 3 1 4 2 2k l l 2 k k(k . = tan acos2 a [ cos a cos2 o ' · l.1 = adj(/1.
we have v 1 = 2 ~i 0 5 4 = 16 2 and so the \'Cctors do not lie in the same plane. 1) j 32..3k area ~ABC = ! ~~~ x ACII = £P (b) area ~ABC=~ IIXBII h = ~h. The parallelogram has the vectors ~ = (3. 2) and P1P4 = (4.e that. front Theorem 4. In this example.. ~ 24. The vectors lie in the same plane if and only if the parallelepiped that they determine is Jegenerate in the sen. t = (2.s "volume" is zero.. area ~ABC = 1 .134 Chapter 4 18. the area of the parallelogram is ]det(A)] = 1 ]0..61 =6 21. it.. ]det(A)I =II+ 2] = 3 20. 0) [33] 2 1 . Thus A is invertible if . 30.md only if s =/. thus V = ]16] = 16. v = 45 29. the area of the parallelogram is ]det(A)] = ]3. V = ]det(A)I wlsere A= [..4 + 6). [2 "o]. 28. from Theorem 4.t 2) t [ 0 1 \1 0 t t 0] s 0 0 t s 0 0 · s 19. a= ±fsi(6. 1 1 1 25. We have det(A) is = (s 2  t2 ) 2 . 31. Then. 2 Then.61 = 3. 1) as adjacent sides. (a) ' . The formula for the inverse s 1 0 = (s2 .5. Let A .5.3. thus h = ~xp = ~· . 2. A= ~ 23.8] 1 2 1 2 = 8.3 + 2}1 = 131 = 3 Let.3. ldet(A)I = ]o.6 2 2 4 ~ ~1.2) and P1P4 P1 Pz = (3. 0 a=± )5(0. ±t. U XV= 2 2 3 3 = 36i. 2 1 1 =7 1 2 3 1 1 2 26.24j j sin f) = l]u X vll j!u!IJlvll = )1296 + 576 = v'4§y49 )1872 = 12M 49 49 34. These vectors lie in the same plane. 4) k 6 6 33. The parallelogram has the vectors P1 Pz 2 0 1 as adjacent sides. ]det(A)J = 1(0.(0. area ~ABC=  2 =3 3 3 27. ]det(A)I + 2)] = l41 = 4 = ](1 + 1 + 0). l + + ABxAC= 1 1 2 1 k 2 1 = 4i +j . 3.(0 + 4 22.
o.0)k =. 3) is orthogonal to both u and v. . 36..4 9 2 16.{.22) j ( (:) ( ..3) (b ) U X V = (2.82k k (c) uxv = 3 0 2 .3 j k 6 =(6336)i(28 . 2fJ4) (h) (44.J2 + 32)j + (18 .12)j +( .6. 8) 17.{0 + 6)j + (0.3i 3 is orthogonal to both u and v. = (0.18)k = 27i+40j42k 7 (uxv)xw = .EXERCISE SET 4.55..8.8 . 3. u x (v + w) . 6 (a) (0. 9.6}i . 171)..20j.1 =(6+2)i(9+0)j+(6 . 12) U2 vzl) = v xu . ( b) U XV = . (l t 12 v2 + w:i = (u x v) + (u x w) = (ku) x v .2 3 i8.24 .6.. 18) k 5 = .3 135 15.4)k = 3Zi j 6j . .. (a) U XV ~ ~ ~ I k 4 2 3 1 5 j = 18i + 36j 18k = (18. (a) v x w = 0 2 j k 2 . (a) u x v 1 0 + 9j .3 6 7 = (14 + 18)i.14i.3k = ( 3..64)k = ...4i + 9j +6k 2 .•lk k = {b) U X (V X W) = j 3 2 1 32 6 4 (. .
(v x v ) = 0 .~ o . we have (u + v j x (u .(u x v) j + {v x u ) .I AC.I k .v) + v x ( u. We have adj(A) 51. i.]. 1 (b ) The .(u x v) .ow echelon fo<mof(A IT]. 7 0 0 0 . The vcc:lo r AB x A C .1 3 2 A= ll u x v lf = J 36 + 16 + 49 = Ji01 2 (b) A= j 44.e. 48. . 50.15i + 7j + IOk A =~ 46. W1 W3 W} W2 vzl) . with u = b x c . and so A2 1 = detCA)adj(A) 34 = t [14 1 1 21 0 ·1 7 0 ~~J. and thus is per t 3 1 pendic\llar to the plane determined by the points A. (a) We h uve v x w = (I vz W2 W3 1131.0 ) = . (a+ d )· u =a· u + d · u .2(u x v ) = . and C .thus A.1 5 2 0 k 2 3 = . w as adjacent edges. {a) = [~! I l] .8. PtP2 + X = X .= [~ _: 'f .v) ) + (v x u ) + (v x ( v)) = (u x 49. [: 0 0:¥ I 0 t : ~ I I 2 _: ~].1111 t1311 111 .(u x v). li M X plp~ll = ~v'225 + 49+ 100 = tJll40 = 4TI A= t II:PQ i~ll = J285 47. v . Using properties of cross products stated in Theorem 4. (a) A = lfu x + plp3 vlf = 0 llu x vii = /IT4 45. Recall that the dot product distributes across addition.v ) == ( u x u)+ ( u x ( . (a) u >< v = 1 j 1 k .'i . A proof can be found in any standard calculus text.3. B.~ t u ) . it follows that (a+ d ) · (b x c ) =a · (b x c)+ d · (b x c) .2 1 = 7.. thus (b) lu · (v x w )l is equal to the volume of tlte parallelpiped having the vectors u .duced .ij+ 3k k 0 = 6i +4j +7k A= ll u x v ii = J49 + 1 + 9 = J59 0 3 j (b) uxv= 2 .136 Chapter 4 43.. Thus.v ) = u x (u .I 1 3 = 8i + 4j + 4k is perpendic ular t o AB aiH.
9. we know that v x w is orthogonal t. since u x (v x w) is orthogona. >4. Thus if A'''= 0 for some k. thus C. 53.I is lower triangular. Thus. = det~A)adj(A) is also upper triangular. thus A .. and so adj( A) "" or is upper triangular. 7) d= a+ b+ c+d=1 Ra 27 a + '1b + 2c + d = 1 + 9h + 3c + d = 7 Using Cra. then AT is upper triangular and so (A. Therefore.121 4 8 l b= 7 0 0 1 1 8 4 27 9 0 1 2 3 0 1 1 1 2 1 1 = . the solution of this system is given by 1 1 1 7 (~ 0 1 4 0 1 ll 1 0 1 127 ·1 0 1 1 :::: 0 1 8 27 0 l 2 1 9 3 12 1 0 0 1 . 1}.1 is upper triangular.. 56. Since (u x v) x w = w x (u x v). 55. 1). and (3. Thus a vector lies in the plane determined by v and w if and only if it is orthogonal to v x w. It follows that the cofactor matrix C is lower triangular. If A is upper triangular.3 137 (c) >2. then det{A) = 0 and so A is not invertible. 57. The polynomialp(x) = ax 3 if and only if + bx2 +ex+ d passes through the points (0. and if j > i.2x 2 x + 1. it follows that u x (v x w) lies in the plane determined by v and w.EXERCISE SET 4. then A_.l to v x w. The method used in (b) requires much less computation.1 71 ==1 12 12 Thus the interpolating polynomial is p(x) = x:l . 1).:) = (det(A))k. (2. then the submatrix that remains when the ith row and jth column of A are deleted is upper triangular and has a zero on its main diagonal. We have det(AJ. if A is invertible and upper triangular.3. it follows from the previous exercise that (u x v) x w lies in the plane determined by u and v.1)T = (A 1'}. F:rom Theorem 4. (the ijth cofactor of A) must be zero if j > i. If A is lower triangular and invertible.= 2 24 12 I 2 1 l 9 0 l 3 1 1 1 ·1 I 3 0 1 8 c= 27 4 9 1 1 1 8 12 ==1 12 7 12 d=  27 0 1 1 9 12 1 ] 2 3 . (1. .mers Rule.o the plane determined by v and w.
(a) The vector w = v x (u x v} is orthogonal to both v and u x v. Thus. (u · v) x w docs not make sense since the first factor is a scalar rather than a vector. D5../3) P2.c2 2c + 1 07. then sin B = 0 and so u and v are parallel. If u and v are nonzero then. We have u · v U•V = II ullllvll cos 0 and II u x vII = II ul!ll vI! sin 0.4 2 = 2c""". (c) The solution by GaussJordan elimination requires much less computation. 0). Both sides "ne equal to ~]then adj(A) = [~ ~]· Ut U~ UJ t13 • v1 W1 l!'J W2 W3 WORKING WITH PROOFS P 1. from Theorem 4. v = (0. D2. Then det(A) = c2 + (1 (lc)l c  c) 2 = 2c2 .2c + l f. As was shown in the proof of Theorem 4. In addition. It follows that lu · wl is equal to the area of the parallelogram having u and v as adjacent edges. that is u x (v x w) is not in general the same as (u x v) x w. we have Aadj(A} = det(A)L (b) False. Thus if u x v = 0.1 = det~A)A. D4. For example. and w = (1. For example. D3. 1). In fact we have adj(A) = det(A)A. = (1.{3) or cos(a:. If either u or v is the zero vector.. the determinant of the coefficient matrix must be nonzero. 0. . ux v I I v ~X(UX>) =ux w (b) Since w is orthogonal to v. we have flux vii= JluiJI!viJsinO where B is the angle between u and v.B) where() is the angle between u and w. · · D6. {a) True. let u but vi= w. The associative law of multiplication is not valid for the cross product. thus u · v = Uullllvll' = . thus w is orthogonal to v and lies in the plane determined by u and •t.11 .1 and so (adj(A)). 0. 0).138 Chapter 4 DISCUSSION AND DISCOVERY Dl.. 1. The angle between the vectors is () = o:. Let A = [l . we have v • w = 0. (c) True. with u and v not zero. 1.3. 0). On the other hand. then u x v = 0. D8. 0 for all values of c. for every c. thus tan 0 = II '::. u · w = l!ullllwll cos9 = llullllwll sin(%./3.3.10. if A=[~ True.c c (l c c)]. (d) (e) False. Uullllvll cos( a.3. the system has a unique solution given by 3 x1 = 14 7c. Then u x v = (0.2c_+_l 'l. No.
0. c. 3.B) = I 1 l l~ = o. The fixed points a. which can be expressed in vector form as = t [~] where oo < t < co.her hand..Yt. from part (b) of Exercise 50.h a.. (b) No nontrivial fixed points.. (a) The matrix A= [ 1 1 0 0 1has nontrivial fixed points since det(I A)""' 1° 1 0 1 1"'o.<> are the solutions of the system ( l .3.y2.P1P2P3) = 3voi(T) = ~OQ1 · (OQ2 x OQ3) = ~ X} Yl l l X2 Y2 1/3 'X3 1 EXERCISE SET 4. as adjacent edges and.l). The base of this tetrahedron lies in the plane z = l and is congruent to the triangle 6P1 P2h. thus vol(T)"" !area(6P1 P 2 P3 ).l). A)x where oo < t < oo.s the volume of the p<\rallelepipe(l having OQ 1.c. OQ2. We have Ax eigenvalue >. and thus parallel to each other.4 139 P3. which can be expressed in vector form x = t[~] (b) The matrix B .4 1. We have Ax= [: eigenvalue . [ 0 x '] l 0 has nontrivial fixed points since det(J . OQ 2 .. It follows that (ax b) x (c x d} = 0. OQ3.EXERCISE SET 4. the latter is equal to OQ 1 • (OQ:z x OQ3). P5. .. adjacent edges. Let Q1 = (xt. If a. then ax b and c x d are both perpendicular to that plane.8.\ ~ :] m [·~] = 5x.Y3. =[ == 4. Q2 = (x2. OQ:. (a) No nontrivial fixed points. . thus ~ X= [:] . and letT denote the tetrahedron in R3 · having the vectors OQ1. we have u1 Uz v2 U3 V3 VJ V2 1l2 W2 V3 u · {v x w) = Vt W} Ut W) 1l3 W3 = v · {u x w) = (u x w) • v W2 W3 P4. Q3 = (x3. and d all lie in the same plane. 1 The fixed points are the solutions of the system (I . b.] [:] ~ m thus X~ m an eigenvector of A corresponding to the ·~Ox.l). = 0. (a) Using properties of cross products from Theorem 4. we have (u + kv) x v = (u x v) + (kv x v) = (u x v) + k(v x v) = (u x v) + kO = u x v (b) Using part (a) of Exercise 5U. 2. Thus area(6. On the ot. an cigcnvedor of A corresponding to the = 5. voi(T) iR equal to ~ timP. =: ~: =..B)x = 0.
..+ 1) 2 = 0 .A yields the general solution x =!] [:] = [~] .. >.has algebraic multiplicity 2... (c) The characteristic equation is . Thus .X 1 3 5 1 (b) The characteristic equation is ~ = A3  4>.140 Chapter 4 5.\ + 4)(A .\ 8.\ 2 + A=>. This [:J = t[~J . Thus A = .1 is found by solving the system [ yields tlu~ ~eneral solution x = 0.lO)(.value.] = (~].3)(~ + 1} = 0.·1 7. The characteristic equation is >. yields the general solution x 9.X2 . : This [:J = 3t. Geometrically. and . it fi8s algebraic multiplicity 2.A . Thus .x.15>. y = 2t.1 = ). "'2 I= (.8 = (A.1 (a) The characteristic l!quat io!l is 0. and >.X = 2 10 ).. : This thus the eigenspace consists of all vectors of the form [~] = t [~] . (a) The characteristic equation is .\(A  2) 2 = 0. = 1 is an eigenvalue of multiplicity 2.3  >. . Thus >.. 1 0 . 2 ..3  6).2){>. + 2) + 36 = (>. = 2 are eigenvalues.X = ±4 are eigenvalues. has algebraic multiplicity 1.2) 2 1 ). (b) The characteristic equation is >..lu~.. . 0 .l. it has algebraic multiplicity 2.. = 1 is the only eigen. this is the line y = 3x . 3 . y = t. A=  3 has multiplicity 1. Thus . 2 + 11 .X~ 11= (. .. thus. is found by solving the system [ (b) The eigenspace corresponding to . (a) The eigenspace corresponding to >. + 12 = = 0. thus the eigenspace consist of a ll vectors of the form = t(~J.A4 5 .2) 3 = D.algebraic multiplicity 1. each has aJgebra..1)(>. it 6.Geometrically.X. {a) = 2 are eigenvalues.\ = 2 is an eigenvalue of multiplicity 3.ctcris t.1 are eigenvalues of A. >. + 1 = 2 hFlS multiplicity 2. 1f~ = 0. Thus . (c) The characteristic equation is(>.ic multiplicity 1. each has . y = 2t. = 0 has algebraic multiplicity 1. and )...3 and . .2 + 4A =. = 0 is an eigenvalue of multiplicity 1. corrc~ponding = t.. and >. : 1 = 2 4 (>.x + 2 . (b) The characteristic equation is I. Thus A= 0 .>. = A=[! ~] is det(.XI~>ach A) =1>. . Thus .2A 2 .1 0 o A.3) 2 = 0.3 .(>...X = 3 and >. 2 . it has aJgebra.X+ 3)(A.X = 4 is the only cig~nvalue. = 3 is found by solving the system [ ~ ~] (.\ = 0 is the only eigenvalue. this is the line y = 2x in the xyplane.A = 1. 2 = 0..\ = 3 is an eigenvalue of multiplicity 2.1 = ). and ).lly. 3 + 2.. =4 =~) (:] = [~].ic multiplicity 2. 2 + 12). t. 3 4 9 t ). + 36 = (. ). Thus >.8>. (. Geometrica.>. .is). (a) The characteristic equation of Thus .3) = 2 = 3 are eigenvalues. this is the line x = 0 (ya.= 3 8 . t hus A= 4 is an eigenvalue of multiplicity 1. .2) 2 = 0. (b) The characteristic tlltuation is ). and ). = 2. (c) The characteristic equation is 1.16 = 0.3 .ic equation is >.4)2 = 0.X= 2 is the only eigenva..l. = 2 has multiplicity 2.6 = (. (c) The chara.hus the eigenspace consists of all vectors of the form The eigenspace to A = .>. .6)..A.
The eigenspa<e cxmespon~ing to found by solving u ~: :][:] m = This ~elds [ l f l]. which cones ponds to a Hoe th•ough the odgin.3 is fo:1nd by solving [~ =~ ~] 3 9 3 [:] [~] · = z This yields 0 [: ] = t [ !.r~~·s 3 The eigenspace corresponding to >. T he eigenspace corresponding to >.. The eigenspace con.. = t (~J. (c) T he eigenspace corresponding to>. = h ~:] . 1. line x 11. yields the general solution x == 0. z = 0. thus the eigenspace consists of all vectors of the form (x] [:] = t[~]. Geometrically.pondh. Thus the eigenspace by solving the system [~. this t = 1 consists of all vectors of the form = 1 is obta ined hy solving (= :J [~]. y f~•m [~] ~ t [:] .E.= . (a) s [~] + t[~) . (b) The eigenspace corresponding to This yields the general solution of the fo<m [:] = t [!].= 0 consists of all vectors of the form is the entire xyplane. this is (a) The eigenspace corresponding to>. 5 ~] f:] = r~l l :l consist~ of all vectors v 0 this is the line through the odgin >nd the point. (~) = (b) The eigenspace corresponding to>. . y = t. whichabo con& sponds to a line through t he origin .4 141 0 (c) The eigenspace corresponding to >. 3). This 1 0 y 0 . thus the eigenspace consists of all vectors of the • Slmilad y.= [ool '. thls conespond• to a Hne thmugh the o•igin (the yaxis) in R ~ 3 consists of all vccto" of the fo<m [:] ~ t [. == 0 is found 5t. (c) The eigenspace corresponding to A.4 consists of all vectors of the fo rrri [:] = t [_~]. = .= 2 is found by •olving the system [=I !:] m [~]· = This yields [:] = s [~] + t [:]. ~ 2 consists of all vecto" of the fo< m [:] = t [=~] . this is t he line y =  ~x. (5. 11). and the eigeosp""e conesponding to !.:]· >. which conesponds to a plane thwugh the odgin. the eigenspace co~""'Ponding to !. t his is the line x = 0. this is the = 0.X ERCISE SET 4. = 4 consists of a ll vectors of the for m (:) the line y = ~x. z ·i = :H.. = 2 is found by solving the system [ 0 0) = ( ]. yields the general solution x = 0.g to!. x = = t. " [3~ o~ ~1] [~z] . y = t.
2 1 = det( AI . = 3 (with multiplicity 2).t1ristic polynomial is p(>.5J 5 >.:_ 1 and>.1 0 1 u 15. 1) 2 ). + ~)'2().6) + 3)[(>.~ ). (a) The cigenspA£e CO<J<SpOnding to A = 0 consists of vectors of the form m !]. . = l.) = (>. .e conesponding to ).1).= 3. ).] and B = [ ~ 3 0 0 1 0 0 1 . ).ha.2)(>. .. + 2)(>.7)(>.1).A . + 2 . ( b ) The cha racteristic polynomial is p(A) = (.. (a ) The c. = 2 consists of vectors of the fe<m m Ll i = (c) T he eigenspace cnn esponding to A = . .142 Chapter4 12. I. +1 p( A) 2 ). . .. eigenspac~ concsponding to A = 1 consists of vectorn oft he focm [: ] = t [ ~]· t (b) The eigenspa.::: [{>. and ). = t[ and the .2. 2 ().\+2 1 2). and A= 1.. = 7.. The eigenvalues a.1) Thus the eigenvalues of B are ).5) .2  8).1 I= >. = 1. + 1)(>. Using t he block diagonal structure.rc >. + 2)(>.\ = . The characteristic polynomial of A is >. The eigenvalues are. .md.2 I . (c) T he characteristic polynomial is p(A) = (>.2 =' " !). the characteristic polynomial of the given matrix is p(>) = I A 011).l)(. 17 ..4 consists of vectorn of t he fOTm [:] .A~ (>.3) (.= ~ s 14.) 3 ). Using the block triangular structure.\= (with multiplicity 2). 5. = .t [ :] • and the eigenspace conc•ponding to A = 3 oonsi•t• of vectOTS of the form m = t [ :] · 13. 'l'wo examples are A :::: [~0 . = 0 (with muitiplicity 2).). and>.\+ 1)(>.2 l 3 >.6 0 0 0 0 = = 0 0 0 0 . 'The eigenvalues are >. . and)...1 5 >.A) = 1 2 1 =(.X.3)2 (X + 3) T hus the eigenvalues are 16. ). 6 If).X . .= 5.\2  9) = (>. + 2 0 0 >.2 p().o l 2 ?0 0 ~ L v ?. the characteristic polynomial of t he given matrix is ). = 3.5)(.. + 15)(.
>. 1. 1 = 3. == {2)!} == 512. with associa. (3)(. !1..>.. .X = 1 and .. !. The characteristic polynomial of A is p(). 0 0.! 5 are).\. 3 . end !.]and [ ~) respectively. thus the eigenvalues are At = 2. = ( . the eigenspace co"esponding to !. perpendicular li nes y :2. 2 . The eigenvalues are . . The characteristic polynomial of A is p(>. Thus the eigenspaces correspond to t he = !x andy .:\2 + ...EXERCISE SET 4.A 1.1] The eigenvalues of A. The eigenvalues are .5). A ~ :0)' = :j.t.) = >.A2>.>.2)3...X2 + . Thus the eigenspace::. >.) = >.1) + ( 1) =. correspond . The eigenspace correspontJing to . to the perpendicular lines y""' ~x andy= .>.>.::::: 2x..·'· ' = ' ' or (in vedorform) [~] = ·' [:] + t [.A t+ .6. thus the cigeovaluP~<> are . = I is obtained by solviug H: :1m = m which has the general solution ' =t.1) 25 = 1 and >. = 0. >.3. r:J. The eigenvalues of A are A = (1)' ~ 1. + 1) 2 .. Similarly.>. A = (!)' = . 18. . Corresponding eigenvectors arc the same as above. = J .. = 1.3 = (.3)(). T hus det(A) = 8 = (2)(2)(2) = . 9 =2 rll· and and [ ~] Conc~pondi ng eigcnveclo" are [ r"'peotively.\ tors [ = 2 and >.ing the system whkh yield$ [:] ~ l [ .J2x. We have det(A) = 8 and tr(A) = (). Thoeigenvalues of A are !. = . = j.\2.3 .\ ( ~) ann = 0 and .3 and tr(A) = I= (3) + (. = ( 1) 25 = 1. with associated eigenvec v.:\3 and tr(A} = 6 = (2) + (2) + (2) = )q + .>..\ = 1 (with multiplicity 2}. !.1)(1} = .3 = l.:\ 2 I· 12.:].8 = (.\3. = 5.>. y " t ..4 143 thus the eigenvalues are .ed eigenvectors GJ respectively. We bnve det( A) = 3 and tr(A) = 1. . A3 = 2. W. Correspcnding eigenvectors ar~ the same as above. \2 = 2.1 is obtained by sol. l9. Thus det(A) = 3:.A2 = 1 .:..
) = (A . p(O) == det( .= 3. The only eigenvalue is A= 2 (multiplicity 2). For t hese values of x. T he characteristic polynomial of A i:.e. The chl'l.r!'l...crrn iu this polynomial is p(O) .. so A has the s tated eigenvalW!S if l.be)= 0.6a). Note that the second factor in this polynomial cannot. the characteristic polynomial of A can be expressed as where >.':i =x arc invariant under the given matrix. lf A'l =.4 .2a + b = 2 a+ b=5 from which we conclude t. 0.J. tro~"r~ il:l one real eigenvalue {of multiplicity 2).\:::::: 3.tlx) = Ax+ A 2 x ::.4) = 16 # 0.. The roots of this quadratic eq uatio n are x = 1 and :t .144 Chapter 4 23.. from . z = x ·. then.ctcristic polynomial of A is p(>.Ax+ x = x +Ax. (a ). 29.6x + x 2 .riant lines. Sirni la dy.lld o nly if p(2) = p(5) = 0. + 4/>c > 0 If (a.f.. {b) If (u ~ d) (c) {d) If {n.A) = (l)ndet(A) . if any.l.df + 4bc ::o 0 2 then. the d1atacteristic equation of A is . so A has the stated eigenvalues if and o nly if p( 4) = p( .2 . .3) = 0. This matrix no rt'lal eigenvalues. from (a).A= ~[(a+ d)± J(a.\2 + (a+ d)>. 2 .d}'l + 4bcj.\ = 3 is a root of t.(b + l )A + (b. correspond to eigenspaces of the matrix. dV + 4bc < ()then.>. 28.1. Ak arc the distinct eigenvalues of A and m 1 + m 2 + · · · + mk = n. This is n qundratric equation with discriminant (a+ d) 2  4(ad . This leads to the equations 6a .+ (ad .5. Thus t he o nly possible repeated eigenvalue of A is .. 27.be) = a 2 + 2ad + d2  4ad + 4bc = (a . and this occurs if and only if . .4).2Ax + x 2 .4 = 0.4(x 2 . with associated eigenvectors ami y g] and [~)respectively.he second factor of p(A). thus y = x +Ax is an e igenvector of A corresponding to A = 1.(b + 3)A + (3b . then A (x ·l.2a). This leads to the equations . so t her e are no invariant lines. (22).4b = 12 6a + 3b = 12 from which we conclude that a = 2l b. i.Ax is a.hat a = 1 a nd b = 4. t here a re no real eigenvalues. with associated eigenvector [~]· Thus the line y = 0 is invariant unde r the given matrLx. On the other har. 26. >. have a double root (for any value of x) since ( 2x) 2 . The constant t. 2 . The inv<l. A = 3 is an e igenvalue of multiplicity 2.8. The char acteristic polynomia l of A is p{A) = >.3)(A2 . Thus = 2x }m. the characteristic equation has two distinct real roots. from (a) .d) 2 + 4bc Thus the eigenvalues of A are given by . (a) The eigenvalues the lines y (b) (c) are ~ = 2 and. p( A) = >. 25. Accorrling to Theorern 4.n eigenvector of A corresponding to A = 1. if and only if 9. (a) lJsinf:! Formulu. 24 ..
1 0 I () A 0 . = . t he eigenvalues of <'lA:r + 2I = (4A 1. A .41 has eigenvalues AJ From P 5 below. Finally.6. then expand by cofactors a.1 has eigenvalues .::: 20..l){A + 4) then the eigenvalues of A are >..4.1 row to the first row. namely A1 = 4(1 ) + 2 = 6 and .o\. then (Ax) · x A= (i1:1?t. is an eigenvalue this system is redundant. .\+ Cn .\i· n.\2 0 1 co + C A+ C2A 2 ) A+ Cnl = 0 0 .\2 = 4(..ee that [a =b>.1 A2 0 Co+ CtA Ct >._l 0 . we have two distinct real eigenvalues A1 and Az..J is an eigenvector corresponding to >. we .4 145 30. lf (a. (a) From Exercise P 3 below. If . = 5 and .1 1 0 0 1 co +Ci A Cz .4 .1 = 0 0 0 .. setting t = b.4 = . A... where x ? 0 .) below ..1 0 0 co+ Ct >.he secvnd row to the first row.\2 .\ p( ..AIIxll 2 and so l2.) = . (a) The characteristic polynomial of the matrix C is >. = (.. If the characteristic polynomial of A is p(. >.~)3 . A. Prom P4 below.J  C) "" 0 0 0 0 co Ct Cz A+ Cn1 0 Add .A) = A2 + 3>. 1 ).d)x2 = 0 Since A.\} = det ( >.4) + 2 = \4. + cz>.\2 = .~(b) (c) (d) ( e) From (a). p(..long the first column: 0 0 .\ p(A) = 0 cz .\ + C:n .1 = 1 a nd A2 = ..d)Z + 4bc > 0. then expand by cofactors along the first column.\2 = ..8...>. 0 ..(x · x) == .21) T arc the same !'IS those of 4A + 2J. 0 Az ).>.EXERCISE SET 4.'2 0 0 0 .\ + c..( . 5A has eigenvalues)" = 1 4 = 3 and .\x) · x . From P2( a. together with Theorem 4.1 .3 has eigenvalues A1 "'""(1 )3 = 1 and >z=.2 C2 .a):tt .. it follows tha t. and (using the first equation) a general solution is given by x 1 = t.\ 1 = 1 a nd .4 = (>.lx = Ax. The corresponding eigenvectors are obtained by solving the homogeneous system · (A~ .1 0 0 Add A2 times t.>..cx1 bx2 =0 + (Ai. l ime.> the ~econd 0 0 ..4. '3.
n.A1)x = 0 has uontrivial solutions.\x for some nonzero vector x.146 Chapter4 Continuing in this fashion for n .s p(.4.4.. (b) True.2.Thus.4 as its cha. ..hat if A x = .1. >. Thus 04.>. D7.\ 2 + · ·· + Cn.quare m n.2. =1 and .5>. then the ·· . = 1 and). 0 1 X2 (b) True.21 11  >+Cnt = C o+ CtA + C2 A 2 + · · · + Cn2A 2 + <'n .l + >.12. (a) The characteristic polynomial p(.12." (b) The matrix C nomial. 1.6 t hat the eigenvalues of A 2 are ). A= G~] has eigenva.>. {J(xl + X2) for any value of a. A = d·tbc . .g. = [l 0 0 1 0 0 2] hasp(>.= . For example.ymmetric matrix has real eigenvalucs.>.1An..\.72 m1d tr(1t) = 3 + 3 .J (d) False.2) 2 = 4. ·r 1 .ra. 3. r. e.be. If A = 0 is an eigenvalue of A. The eigenvalues of A (wit h multiplicity) arc 3. From Theorem 4. the reduced row echelon form of A must be linearly indep(:ndent) thal .2) 2. then the system Ax = 0 ha.2 it can be shown (sine"' x1 and + A2Xz f. The characteristic polynomial of A fac tors a.cteristic poly 5 DISCUSSION AND DISCOVERY Dl.S 2 I = [1 OJ .t 1S1l€d 1 an< o n 1 1 a = y ·r 1 A ~:: 1 06 .>.>.\ = 2.1)(\ + 2) 3 .X) ). thus A is invertible.) = 2 . !The statement becomes true if "independent" is replaced by "dependent" . D2. (b ) Yes. We havP.\nl 1 co + Ct A + c2. x = 0 sa.)qxl fl = h OJ l. .\2 is an eigenvalue of A 2 . If >.. 2.2 = 0.2x2 and.\) = 1. the characteristic polynomial of A is p(. 2I .X) = (>. (c) False. = ( .X1 f. Using Formul a (22).lues ). we obtain p~. e . = (J) 2 = 1 and >.) has degree 6.. !But it is true tha t a :. = 0. then . + 0 3 o 1 1 .2 . t hus t he eigenvalues of A a re ). {a) False. thus (. A(x 1 + x2) = A1x 1 + . For example.2 s teps. thus A is a 6 x 6 matrix. The matrix A d = [ac d bl satisfies the condition tr(A) j = det(A) [1 . The correct statement is t. we have dct{A ) = {1)(3) 2 (4) 3 = 576 =/: 0. It follo\vS from Theorem 4.. <J •• (~ ~J. if . lf d f.\ + 4 = (. we l1ave det(A) = (3)(3)( .. then x is a n eigen\'ector of A.2.J DB. (a) False. and .2. = 0 is an eigenvalue of A.s nontrivial solutions.1] · 2 if and only if a+ d = ad. is an eigenva lue of A. For example.g.. = 2 is the only eigenvalue of A (it has multiplicity 2).2>.. If = 1 then this is equation is satisfied if and only if be = . 2  has nontrivial 03. thus A is not invertible anrl so the row vectors and column vectors of A are linearly dependent. from Theore m 4. D5.trix all of whose entries arc the sl:l mc then de t(A) solutions and >. equa t 1011 1s sa. .2) ( ··2)( 2) = .4 .3 + .3>. 4. If A is :. thus Ax = 0 = ).tisfies this condition for any A and .>2 .
WOR KING WITH PROOFS
147
'!'rue. The characteristic polynomial of A is a. cubic polynomial, o.nd every cubic polynomial has at least one real root. (d) 1Tue. If r(A) = .>." + 1, then det(A) = ( l }np(D) = ± 1 =I 0; thus A is invertible. (c)
WORKING WITH PROOFS
Pl. If A = [ac
d
b t hen A2 = [02 +be ], ca + de
A2
_
cb+ d 2 j
ab + bdl and tr(A)A
= (a+ d) [a c
b] d
= [02 +do ac+dc
ab + ad+ a
d!]; thus
tr (A)A
= [be 
ad
cb _o
0
ad] =
( de t(A) 0
0 ] _ det.(A) = det(A)J
nod so p(A)....: A'l  tr(A)A
+ det(A)J
= 0.
11)
P 2.
(a)
Using previously estabJished properties, we have
d~t,(AJ 111')
= det(Afr · AT)= dct((M A)r) = det(M 
Thus A a11d AT have t.hc *lame characteristic polynomial. (b ) The e;gcnvalues are 2 and 3 in each case. The eigenspace of A conesponrling to ). tained by solving the system
= 2 is ob'
[_~ ~l r:J~[~];whereas t.he eigenspace of AT corresponding ': (~ ~~]
[:]
t o >..
=
2 is obto.inf:d by solving
~ [~J .
Thus the eigenspace of A corresponds to the
line y = 2x, whereas th~ eigenspace of A'r corresponds to 11 = 0 . Simtlarly, fo r A ~ 3, the eigf'nspacc of A corresponds to x = 0; whereas the eigeuspace of AT corresponds toy= ~x. P3. Suppose that Ax = .Xx where x i 0 and A is invert.ible. Then x = A 1 Ax = A 1 >.x = AA 1 x and, s1nce ,\ :f; 0 (because .·1 is invertible), it follows that A 1 x = }x. Thus is an eigenvalue of .t\  t and x is ~ cormsponding ~~igenvector.
l
P4. Suppose
A
tlt;H
Ax
.\.x whcr<' x :f. 0 . T hen (A  sJ)x
= Ax 
six = >.x  sx
= (A s}x .
Thus
sis an C'i);»nvaluc: nf A  sf and xi~ a r.orresponding eigenvector.
P 5.
Suppo:;e tha t, Ax = AX where x :f 0. Then (sA)x value of sA and x is a. corres ponciing eigenvector.
=
s(Ax}
=s(.>.x) = (s>..)x.
+ 4b<: =
Thus s.>. is a a eigen
P 6. If the ma trix A = [: :] is symmP.tric, then c :=: b a.nd so (a d) 2
(a  d) 2 ·+ 4b2 . 
ln the case that A has a repeated eigenvalue, we must have (a d) 2 + 4b2 = 0 and so a= d a.nd b = 0. T hus t he o nly sytnructric 2 x 2 matrices wi Lh repeated eigenvalues arc those of the form A """ al . [;ud1 a. ntatrix has .>. = a as its only eigenvalue, and the corresponding eigenspa<:e is R 2 . This prov<•s part (a) of Theorem 4.4..\ I.
If (a d) 2 + 4b 2 > 0, t hen A has two distinct real eigenvalues .>. 1 a nd eigenvectors x 1 and x2 , g.iven by:
..\2,
with corresponding
.>.1 = ~ l{a +d)+ J(a d)2
+ 4b2)
148
Chapter 4
The eigenspaces correspond to the lines y
= m 1x
2
andy = mzx where m;
= a=~', j
= .1, 2. Since
(a >. 1 ){a >.2 )
= (H(a d)+ J(a d)2 + 4b2J) (t!(a = t((a d)
2 
d)  J(a  d)2
+ 4b2])
(a  d)

4b
2
l=
b
2
we have m 1 m 2 =  1; thus the eigenspaces correspond to perpendicular lines. This proves part (b) of Theorem 4.4.11. Note. It is not possible to have (a  d) 2 + 4b2 < 0; thus the eigenvalues of a 2 x 2 symmetric matrix must necessarily be real P7. Suppose that Ax = .XX a.nd Bx = x . Then we have ABx = A(Bx) = A(x) = >.x and BAx = B (Ax) = B (>.x) = .XX. Thus >. is an eigenvalue of both AB and BA, and xis a corresponding eigenvector.
CHAPTER 6
Linear Transformations
EXERCISE SET 6.1
1.
(a) TA: R2 (b) TA: R3 (c) TA: R3
? ? ?
R 3 ; domain = RZ, codomain = R 3 R 2 ; domain = R3 , codomain= R2 R?; domain= R3 , codomain= R 3
3. The domain of Q , the codomain ofT
i(!~nd T(l ,  2) = (1,2,3) .
4 . The domain ofT is R3 , the codomain ofT is R 2 , o.nd T (0,  1,4) = (2, 2).
2
6. (a) T(x) =
[
~
=
2xl + :!.:7 + 4X J] [
;~x1
+ 5x:z + 7xa 6x1  x:~
(b) T(x) =
7.
(a) We have T;~(x) = b !f and only if xis a solution of the linear syst,em
1 2 ~)] [2 5 3
0 l 3
[XJ2] = [ 1 ] X 1
X;!
3
The reduced row echelon form of the augmented matrix of the above system is
[~
Thus any vee to< of the fo<m
(b) We have TA (x)
X
0
1
3
6
111
= 1 6t, x2 = 1 + 3t, x 3
o o oJ
and it follows that the system ha..o; the general solut.ion x 1
~ :J + t
[
n l
will have the pwpe<ty that
:~~x)_~ [
J
= t.
= b if a nd only if x
is a. solution of the linear system
[~ ~ ~]l:~] = [~]
2 5 3
165
XJ
1
166
Chapter6
The reduced row echelon form of the augmented matrix of the above system is
[~
R'
8. (a)
~ ~ ~]
0 0 1
0
and from the last row we see that the system is inconsistent. Thus there is no vector x in
for which TA(x)
~ l:J·
if and only if xis a solution of the linear system
We have TA(x)
=b
[1~ ~2 ~ ~] [=:] = [~] 0 5 xa 4
The reduced row echelon form of the augmented matrix of the system is
[~
X3
0 1 0
2 1 1 2
0
0
and it fo llows t.ho.t the system has general solution x 1 = 2 2s + L, .r2
~
S,
x,
~ (. ~hUS any v.,.;tor ofthe form ~ m = + r]
X
= 3 s 2t,
+s [
t [  ;]
will have t he p<Operly
that TA(x )
= [~]·
(h )
We have TA (x ) = b if and only if x is a solution of the linear system
[ ~ ~ ! ~] [::] = [~]
 1 2 0 5
X3
3
T he red uced row er.hclon form of the augmented matrix of t he system is
[~
0
~ ~ ~ ~1]
0 0
Thus the system is inconsistent; there is no vector x in R4 for which TA(x)
=
[~]·
9. (a). (c), and (d) are linear t ransformations. (b) is not linear; neitht!r homogeneous nor addi~ivc.
10. (a) and (c) are linear t ransformations. (b) and (d) are not linear; neither homogeneous nor addit ive. 11. (a)_and (c) are linear transformations. (b) is not linear; neither homogeneous Mr additive.
12.
(n) nml (c) o.re
li n~t\r ~ransformations.
(b) is not linear ; nPit her homogene<ms nor add:tive.
exercise Set 6.1
167
13 . T his lmnsfonn• lion can
~e written in m•~: [o,m "' [: : ] ~ [! =:
cod~n:~~R~.
 ·  .   · ···· ...
:] l::J ;
~]
it is a linear trans
formation with domair@. and
1~ .  "Notlitie'o~ .
·.(__ __  ·
The domain is R 2 o.nd the codoma in is R:i.
 ··
• .,.,.J· . . _ _ _ _   
~.
15. This transformation can be written in matrix form as
transformation with domt
(' ···:_) 1.._
[:~] = [~
1U3
nd
codom~R~~·,
'
 l 2 4 1
[=:];
XJ
it is a linear
16 . ...Noti~The clomain:ls R 4 land the codomain(is R 2 . !
·...~~.,.....,
17. [TJ "" !T(eJ) T (e 2 )] =
[~ ~]
4 0
=

.
18. [T]
= [T(et)
T(e:z) T(e3)};:;;
[! ~ ~1
2 2 IJ
19.
(a) We have T( l , O)
(1 , 0) a nrl T(O, 1)
= (1, 1); thn~ thf' st<lP.d'l.rd matrix is [TJ = [~ ~].Us
ing the m atrix, we have T ( [ ~]) = [ ·~ using t he given formula: T{ 1 , 4) =
= [:]. This a grees with direct calculation ( ( 1) + 4, 4) = (5,4).
[ :]
~]
(b) We have T (l,O,O) = (2, 0,0), T(O, 1,0) = (1, 1,0), and 1'(0,0, 1) = (1,1,0) ; t hus the stan
da~rlmatrix i, {Tf
[
~]  T his a.grec~(i.};"J~t"c7lculettion usmg the given formula :
T (2, l ,  3)
"G l :
U<ingthc matrix wc haveT
([J) ~ [i l ll [_rJ
=
= (2(2) 
( 1) I ( 3j, 1 + ( 3), 0) = (0, 2,0)
20.
(a)
[1'] =
[~ ~ ~]
3 0 ] 5
r([_m ~ [:, ~
T ( [
~]
Ul ~ [_~]
i ~~] ·
(4)}
(b)
ITl = [0
~]) = [~ ~J ~] ~ l 1~]
= [: 
[
21. ( a)
The stn.ndard mat.rix of the transformation is IT]
(b ) If x = ( 1, 2,4) t.hen, using the equations, we have
T(x)
= (3(1) + 5{2) (4),4( 1) (2) + (4) , 3(1) + 2(2) 
= (3 ,  2, 3)
On th e other hand, using t he matrix , we luwe
r:_ 1] . 30 . ~ sm. (a) (c) [~~ 1][ [2J3~] ~ [1 946] 4 •] + ~ [_~ ~] [~] .~] 72 4. (a) [to tJn [t>J ~ [070l = 72 72 4 [~ ~] [_~] = [~] [~ ~] [ . = r~ I ~ .598 [>f :/. ~ ~ 1.¥ l [4598] ~J ~ .. (135°).~] 2 [~] = [=!J 3 = 2 4.~ ~ ~] 1 ~ corresponds to R 8 = [ ~os 8 sin 8] where B = . 29.964 4 +3 ~ 4.cos 2 8 2sin9cos8] wherecosB = ~ a. S i ll 8 COS 8 [cos28 . (a) [l{3++2~ ~ [19641 3' 2 4.~:. (a) (c) [~ ~] [_~] = [=~J (b) (d) 25.'l [1 1 m m2 m] .4 ~ 3 2 /3 .>f'+•].. 4] [•] [VJ+ ~l~ [0.950 (b) (d) r~ ~J r = r J :J : 4  (c) [.7l 72] corresponds to He = I sin29) COS 28 28  where B = i (22. S Jn 3 ..' v l +m• l +~' [ 1 .. (b) [2+¥+ ] .nd sin8 = . ~lm+ • '1/JTrn 31. 8 .518J  ~J [•] 3 = [ 2v'3 + 2.[ 4598] [ l 4] rJ [~ +JJ] [2482] 4 = ¥ :1/. he matnx A = [ . 1.¥ ~ 3 = 0.168 Chapter 8 22. (a) (a) (c) {T] = [2 3 3 5 .. sin8cos9] oinl(J {b) p L = (:Jin9 cotJ 9 cos2 8 = l+.5° ).299 28.f [3 [~ + !] ].~1 (b ) 4 1 ':i . ' rhe matnx A .= sin28] __ [cos 8sin 8 2sin8cos0 2 2 2 sm28 .[_~] [! ~][3] ~ ! 4 = {b ) [_~ ~] [~) = [_~] [ l ~]!·]  (d ) 27. (a ) [_~ [ .598.[4598] ! 2VJ 1.2991 1. (a) liL __ [cos 28 Thus Hr.~ .9G4 . r~ ~J r~J = [=~J [~ ~] [~] = [~] (b) r o (d) l1 1) [2] = [1] 0 1 2 [~ ~J [ = GJ ~J [~ ~] [_~] = [~] 24.964 '26.1 1. ] ( b) T ([m[~ ~ nl = _:] = r:] 23..l. : m.cos 28 .~] = [ .
0) = (0.e. (a) We have m == 2. I) f. (1. thus Tis linear.. (a) We bave m = 3.en in matrix form us r~ ~] [:] = [.y} = (0.. fo [~ :] [:] = [~!] [~:!]. cy) = (0. . thus T is neither homogeneous nor ad2) Jitive. 1) + (1 ... .0) = c(O. Yl + Y ::.f .. ...=: 1. ..:. .···. (a) (b) . l) =I (1 .. to L line he ·..t) = 1.· 37..]. y) and T(x 1 + x 2 . line· ·:e + y = <''__. thus P = h 33. thus H = HL = the line y = 3x is given by H ([:]) (b) We have m = 3.. Y2).. 1) = 2T(x. 2y) = (1. y). . If T is defined hy T(x ..~t ..Exercl&e Set 6. (a) [T] 1 =[ ~ 0 0~] {b) IT] = [~. Yt + Y2) = (0. then T (cx....y) and T(x1 + X2. 1). 1~ [~ !] = [ and P ( [ = and the reflection of x = [: ] about = = and P ([:]) = 1 10 = 34. i. If Tis defined by the formula.. 2(1. 0) + {0. t hus P = PL = [ . thus H = HL = ~ = (b ) We have m = 2. = 36.. Y2). yl) + T(xz.1 169 32. The given equations can we writt.. and so  [~J · ·~ [~ ~r· r:J = ~ [~ ~J r:J = l ~ ~tl 3: ··· Thus t he image of the 2s . .. then T(2x.::: (1. Y1) + T(x2 .~ ~] 0 (iJ 1 (c) [T] = [~ ~] ! 38.. O) = cT(x.0).J and H ( [!]) t(~ ~] [!] = t [z:] = [~::] · ! ~ ~ !] !] ) ~ [~ ~1[!] = ! (~~] = [::!]· fo [: !] = ~ [~ !] = gr~ !] [:] k[~:] [~::].. 1) T(x 1 . 0) = T(x1 . T(x. 1 corresponds to {s + 4t ) + (3s. .._ .
we have T (x 0 +tv) = T(x0 ) + tT(v). if T(v) =f:. the image of the line x = x o +tv is the line y = Yo+ tw where y 0 = T (xo) and w = T(v) .cro transformation T(x.2 1. .x )..aQ zg • . we have A = 29 sin 28] __ R·e .1 =Ar= (_: _ iJ· 29 29 29 2 aQ 2 .[cos O .1 =AT= [aQ _llJ llJ 0 l ' ll 29 !Q :~o 29   'iG 1 ...1 ~~~ 29 !Q 21 ' D J [ £! :o . The eigenvalues of A are >. D6. cos [ sin 29 D3. Such a transformation cannot be linear since T(O) = v o f.x. True.>. 0. T(x. EXERCISE SET 6. AT A= [ ll !. Thus. T(x. theu geneous. D5 . = . For example. _ [cos( 9) sin9 cosO sin(8) ::~=:~] = R. Since T(O) :::: x 0 =f:.= 1 and . 40. Since Tis linear. (a) (b) (c) (d) False. cos 28 " Thus muJtiplication by A corresponds to rotation about the origin through the angle 2fJ. Geometrically. thus IT]= [i ~l DISCUSSION AND DISCOVERY Dl. True. This is the transformation with standard matrix (T] = [. Such a...rl 0·thus A is orthogonal and A . rotation followed by a translation. it corresponds to a.y} = {x2 .y2 ) satisfies T(O) =0 but is not linear. then j is neither additive nor homo D7 .1 with corresponding eigenspaces given (respectively) by t [~J and t[~]. 0) is the only linear transformation with this property.y) = (y. then the image of x = xo + tv is the point Yo = T (xo). R o . corresponJs to rotation t hrough the ang le 0. True. sin _ sin9 cosO' 81 _ t h en AT _ [ cos 0 sin 8] . 1f b = 0. ArA = [_:I] [in=[~ ~lthusAisorthogonalandA. D4. 0). transformation is both homogeneous and additive. . ·y) o:: (0. 0. this transformation is not linear.170 Chapter 6 (c) (d) X 39. T(x. The ·z. lf T(v) = 0 . Jf A ·/ 1 ·. y) = (.. (e) False. 0.o· Thus multipli cation by !1. f is both addi tive and homogeneous. where oo < t < oo. thus IT} = [1 0] 0 0 . D2. If b =f:. 0. From familiar trigonometric identities.
/3] = He where 0 = % · 2 1..:£1 . (a) Expansion in the xdirection with factor ::! (c) Shear in the xdirection with factor 4.! 7. 25 ~ ~] [~ 76 I 5./3 _! = (o 2 _ J _ 2 0 2 1 • ] · Lhus A is or thogonal.Exercise Set 6. . We have det(A ) ~ 1 . and so ~ ~] =He w bere fJ = i · Thus multiplication by A corresponds to ref!ed ion a~)ut ~ the line origin through the origin m aking an angle with the p ositive x.axis. 0 1 thus A is orthogonal and A . (c) Shear in the x direction with factor .2 171 u 5 16 .. We have dct(A ) = 1. A= [ J ~] [ ~] . (a) A= (b) A= (c) A = [~ n (d) [I ~] 2 9.4. thus 1 A is orthogonal. (a) AT A = A = [ =~ ~] = Ro where B = [ wise rotation about the origin through the a ngle 3 . 6.3. [~ ~] . We have det (A) = l. rt =~J r=~ ~J _ 1] 2 ~ ~ 2 73 1 ~ ~]· = 1. (d) Shear in theydirection with factor . {b) Contraction with factor ~. (a) A= [t . thus A is orthogonal. and (b) rlT A= [ ~ Y. 10. and = [~ ~].] [~ ~] (b) A= [! ~] [~ ~] (c) A= [~ ~] (d) A=[~ ~] A= 8 . Dilation wjtb factor 8. Thus multiplication by A corresponds to CO\tnterclock (b) A~'A. We have det(A) 3 . (d) Shear in the yclirflct. (a) (b) Compression in the ydirection with factor ~ . 2 .l =AT = 25 0 0 l 1].!s 5 _g 25 4 25 5 5 12] [~ 4 ~ l2 2S ·u 25 ~ ~ 5 3 0 ~] 25 16 0 =[ 100] . ~ 2 < thus A is o'thogonal.l [. a nd so A= !2 [ . (a) AT A= [ A = [ Jl _v'4 1J[:a l = [0 J * ~ v'~Jl = Re where () = ~ 2 2 1 0 ].ion with factor 3.}l 1 a~ v.
172 Chapter 6 11. = s mmm . and e2 = [~] t [~] . thus IT) = [Te1 Te2) = [! ~J · = Te. . (a) (b) ~ (c) iL . • e. The action of T on the standard unit. thus 1 = 11 [~ ~ ~] · 14. = 13. The actioo of T on the "anda<d unJt vedo< is e. = mmm + = Te. ( a) ( b) (c) I~ 16. The action ofT on the staudard unit vectors is e 1 = [~] [~J t [~] t [. 4 mm . vectors is e 1 = [~]t [i] }[~] t [~] = t hus [T] ~ ~l 0 1 y 15. (a} (b) (c) y 17. = T et. The action of T on the stanciatd unit vectors is e 1 = [ ~ ]t [~] ~] . [~] t [~I = Te~. thus [T] = [Te1 Te2) = (~ 12.~] = Tel 4 and e2 = [~) t r~] t [~] t [~] + =Te2. e2 m ='Fe.. t t [ ~J t [ ~] = Te1 . and e.
.2 1 8.1 0 01] [o o L 0 0 (c) [! ~ ~] 23. .t the angle of rotation is !I = ~ .~ ~ (c) !RI ~ [· f ... thus the matdx co:. .... J A geneml solutioo a mt:.6..s sys<em ._..... o o [ (01 ] .0 _1 . (a) [TJ(: [~ ~ ~] ·' ·... (a) tnt ~ [: (b) IRI = [_...A)x ofth.. (o) [T] _...on about the xa>ds. 1 I) 0 . (a) !Ill = [: (b) [Rj = [ _. .s x = = 0 .olv.1 (b) [Tj = 0 . 7z ~ (c) [RJ =: [ 25. (b) [T) = [ ·..s A [:J = t [~] . Choosing the positive orientation and comparing with Table 6.ng the system (/... (a) {b ) [~ ! ~ll:J ~ Hl 10 1 [000OJ r2 = r2 1 01 0 0 1 3 3 (c) 20. .. whkh . [~ ~ ~] 0 \) 1 173 19. to• [~ ~ ~] r= = [~] . Tbe axis o f rotation . ....••:. we see tha. 1 0 0 ~ ~ ~] 1 (c) [T) = 22.. found by ......Exercise Set 6. (a) r~: ~J nJ ~ nl I ' " ·••'' . rotation matrix since it is orthogonal anrl det(A) == 1.. (b) (c) n!~] mr~l ~ [~ !~] nJ ~ m IT]= 21.1 3 (b) (c) !11...iJ I 24. (a) ~ ~ [9 ~ ~][~]=[~] s 0 . The matrix A is a.2.
and so 0 = ~· 28. the rotation angle is determined by cosO = t. = [~ !~] = [~ ~ ~] · = 0. 0.2 . 29. is a lineA< ope<ato< and 1!. 0). cos 0 . M.he ax. 0) = [ I o0 l = S(O.~ .A)x = 0 . = [~ ~ ~] Similad y.1). and so Formula (17) reduces to v = Ax + ATx. (a) We have T. = A genenJ solution = = t ~ . 0) is a vector in this plane.O) · (O y. Writing w in column vector form. Taking x = e 1 .. (a) = S(l. Ta king x = e1. 1. 0.1.IIW~ . 1. S{e2 ) = S(O. 1) is detenn. thus the axis of rotation is t he line pnssing through the origin and the point (1. using Formula (16). l) . Finally. thus the standard matrix for S is [S] = k. thogonal for every vector x in R 3.(~' + 1 )x = [ 1~ llll [~1]. The axis of rotation is found by solving the of t his system is :<.1.174 Chapt&r6 26. then T 1x · (xT1x) = (x. the rotation angle is determined by cos()= tr(~)l = ... this results in v = Ax+ ATx = {A + Ar)x = [~2 000 0~] [~1] [~2'j = From this we conclude th at t he xaxis is t. this results v = Ax+ ATx 1.z) . and S(e3 ) (b) If x= (x . 1. 0.O. We have tr(A) in = 0. The matrix A is a rotation matrix since it is orthogonal and de~( A) = l. z). which is ~ u ~ n[~] m.. k. and w == (1. 0) = {1. y . Simllarly for T2 and T3 . We have tr(A) = 1. [l~l  Thus the axis of rotation is the line through the origin and the point (1. s[:sltem ([~]. [ 0 Thus the rotation angle 8 relative to the orientation u _ wAw _ 1 Ll _ 2>r(120o) .ined by 27.. 1) = (k.is of rotation. 1).3 =w x Aw = (1. th.x = (A 1 . and so 1 7 . T 1 x and xT1x are or(0. thus 8 = 2.. and so Formula ( 17) reduces to v = Ax + AT x + x . ([:]) = m [~ ~ ~ [~]. l k 0 0 1 .l = 0. 0). We have S(e 1 ) 30. Using Formula {16). = and M 3 thus T.r(~). . The plane passing through the origin that is perpendicular to this line has equation x + y + z = 0.. we have W= ·1] l . 1.
the o r igin through an angle () .Yll tho.= 1. ""' 0. dircction with factor . square is mapped onto the rectl'ingle 0 5 x $ 2. z) = (x + ky. T hus.x . anti its standard matrix is lS) = is dcnned by S{x. T hus a and b must satisfy (<1 + b} 2 + (a . l o [0 1 o + :x . From thP. jg. polarization identity. = . b = ~. and its s t andard matrix is lSI = lkO] . 0 5 y S 3. !f llx + Yll = llx .is (y = 0) . y) and ITl = [~ 4) yx n ° D7. Thus the eigen VectQrs paralle l to the line y = x will be eigenvectors corresponding to the eigenvalue>.\ = 1. Vectors perpendicular to y x will be eigenvect ors corres p o nding to the eigenvalue . i D2. t hen values of A (if any) must be of ah. These are the D3.y. X+Y D8. square is rotated a hout. z) = (x . If A is an o rthogonal matrix and Ax :=. DO..re or thogonal for any values of a a nd b. y . on substituting a= 1 and b = c = 0 into Formula ( 13) . 0.DISCUSSION AND DISCOVERY 175 (b) The shear in the xzdirectio n with factor k is defined by S(x..6 are obtained similarly. = (b) E very nonzero vector is an eigenvecto r corresponding to the eigenvalue .ies. we have 0 0 cos () sinO sin{) cos £J The other entries of Table 6.X = ~ . r. (a) llxll = I!Axll = 11>xll = I. L D 5. y. the shear in the yzdirection with factor k . Similarly.wo columns of t. >. I n order for the m a t rix to be orthogonal we rnust have and from this it follows that o. z + ky). or (equiv<\lcntly) 2a 2 + 21} = L D4 .·= (30 degrees clockwise) . 7~ X .nd mus t t herefore be a rectangle. c = "73 or a. (a) (b) (c) (d) The Tho The T he unit umt unit unit square is mapped onto th<'! segment 0 $ x s.2. z + kx).b) 2 = 1. y [~ ~ ~lk 0 1 31. we have x · y = Hllx + yJ]2 .'lr in the x .0) then . b = only p orss ibilit. 2 along the xax. l DISCUSSION AND DISCOVERY Dl.his matrix a.n the parallelogram having x and y as adjacent edges has diagonals of equal length a.XIIIxll.~3 . all t ha t is required is t ha t the column \·ectors be of length I.2.olutc vain~. = 0. The t.l!x  yll 2 ) = i(l 6  = 3. thus 1'(x. for t he m a trix to be orthogonal. square is rnappC(J onto the st~g rnent 0 ~ y ~ 3 alon g theyaxis (x = 0). The shel. y) = (x  2y. If u = (1.
ran(T) = R2 The transformat. (a) ker(T) = {(O.nsformation is the solution set of the system Ax = 0 . {c) ker(T) = { {(O. 6.· ·· ·· ~··"· (c) ker(T) = { [~] }• ran(T) = R2 • The transformation Tis both 1~1 and onto.y)l. 5.ion Tis both 11 nnd onto. z < oo} (yzplane). (b) ker(T) == {(0..y)J ' oo < y < vo} (ya.xis). tan(T) = {{0. 0 :o thus the solution set consists of all = t [ ~] where oo < t < oo . matrix of this system can he reduced to :J [ m. The kernel of the t ransformation TA : R 3 t R4 is the solution set of the linear system [: =.l of the transformation js the solution set of matrix of this system can be reduced t.co< y < oo} (ya. (d) ke<(T) = { m} ran(T) = R".hus the solution set consists of all vectors of the form x = t[·~] where . · [~]}.oo < x < oo} (xaxis). The transformation Tis both 11 and onto. :] = The augmented [~ ~ 0 0 _: ~ ~] .oo < y < oo} (y. The kernel of the tro.a.on set of [: .xis). The kernel of the t<aosformatOon is the solut.z < oo} (xzplane) .] Md onto. ran(T) = {(x. ran(T) transformation T is neither 11 nor onto. y. (d) ker(T} = { [~] }· ran(T) = R 3 .O)l. The " (b) ker(T) = {(x. = {(x.solution set consists of all vectors 0 : 0 of the form x =t [ :J where oo < t < oo.oo < x. The augmented matrix of thi. <is).176 Chapter 6 EXERCISE SET 6. The transformation T is neither 1~1 nor onto:·~~. y.3 1. 0)1. 0)1.3 XJ 0 ..oo < t < oo. 3. thus the . transformation T is neither 11 nor onto.. ran(T) Tho transformation T is neither 11 nor onto. t. z)l .o vectors of the form x [~ _:  !] [~] = [~] · The augmented 5 l 0 [~ ~ . :] [::] [~1J = 3 4 .oo < x < oo} (x~axis).2: 0] . (a) ker(T) = {(x. The 2. 4.xis).oo < y. The transfonnotion T = i• both J. z)l .oo < x <co} (xa.O. The kernP.s system can be reduced to [~ ~: ~]. 0. 0)[ .
t the system is inconsistent. Thr.ocs of the form x = t [ il whe<e oo < ~] t: 0 0 ! : b 3 I I o: thus the solution set 0 0 : oo 7. has only the trivial solution and so of this system can be reduced to [~ !m]. t~e s~st:.Exercise Set 6.tem can be <educed to [: consists of aJI vcct. l 0 1 OJ [0 1 0 i0 . k~rncl of 1' is eqml. augmented matrix of the system Ax = b is H 2 2 8 0: 13] 1• 1: 8 and the reduced row echelon form of this matrix is [~ 0 1 k 6 I 0 0 1] 6 0 . o:o 8.3 177 The augmented matrix ofthis sy.m thus 0 0 ker(T) = { [~] } . The a tJgrnented matrix ()f Ax = b is and the reduced row ochf:lon form of this matrix is H 2 2 0 1 l 8 [~ 0 ~ L I 6 0 0 ~] l Frorn this we conclude tha.! to the solution space of the system [~ ~ ~] [~] = ~J. (a) The vector b is in the column space of A if and only if the llnE\ar system Ax = b is consistent. (b) ThP. thus b is not in the column space of A. 9. thus kcr(T) cons1sts of all vectors of the form 1 l matrix of this system can be redlJced to x = t [ ~] where · oo < t < oo. 't'h~ <tugmented . The kernel of T is equal to the solution space of [ ~ _: = ll= mThe augmented matrix l] : .
b = ~c1{A) + ~c2(A} + 3 6 Oc3(A) = ~ [~] + ~ [~] 3168 (a) The vector b in the column spttee of A if a. (A) ~ 2 [!] nl + . with general solution Taking s = t = 0.nd only if the linear system Ax = b~i~consistent. the vector b can expressed as a linear combination of the column vectors of A as follows: m~ ~ b 2c .1 •0 I 0 0 : 1 i0 ] From trus we conclude t hat the system is inconsistent. The augmented matrix of Ax = b is is [31 2 1 0 1 1 5 5 3 2 i] I I } 1 1 : 0 an d the reduced row echelon form of this matrix is [~ th~ 0 1 0 1 1 . thus b is not in the column space of A. the vector b can expressed as a linear combination of the column vectors of A as follows: [~] = 8 10. (b) The augmented matrix of the system Ax = b is 3 2 1 4 [0 1 and 5 3 : 1 5: 4 ] (j 1 1 : 1 reduced row echelon form of t his matrix is [~ 0 1 1 11: 2 ] l : l 0 0 : 0 0 Prow this we conclude that the system is consistent.{A) + Oc.178 Chapter6 From this we conclude that the system is con'listent. with general solution 3 3 4+ 3ltl = [4] + t [~1] [~ ~it ~ 3 X = Taking t == 0. (A) + lk. (A) + lc.
l ~. 1.. 1. Jn particular. . 1 o :3 6 " 2:2 : t hus the stanrlard ma. G. the operator is both 11 and onto.tor is neither 1l nor onto. ~. TA is not onto. The operator can be written as [:: Since df't(A) 0 1 O I] 0:' J 3 4 1 I 0 I I _§ 3 T hus the system has a unique solution x = (1.3 179 11. 12.~). 17.1 nor onto. The vector w is in the range of the linear operator T if and only if the linear system 2x.112 :z:2 1.foriw = t (j_:here oo < t < oo. 15.·. The operator can be written as 'WI] [l 23 [XJ] ] 2 3 [ = 15 8 0 1.1/3 X3 . ~. n [ J.his system can be nxiuced to [~ Thus the system h as a unique solution x X  0 1 0 0: 2 ] 0 : 1 I 1 : 1 (3. The operator can be written as w =TAx .113 [1 3 2] [Xi] 2 o 3 4 . .1) = w . The operator can he written as :\ 2 ] G 'WI] [wz = 1.. Since 14.. thus the s tandard matrix is A = Since dct(A) = .nd we have Tx = T(2.trix is A = ::Z:J 16. 0) = (2. = [~] is not in the range of TA . det(A) '" 0. 0. The vector w is in the range of the linear operator T if and only if the linear system x +y+z = 2 :t +2z =.. ~·. and we have T x = T ( ~. . 1) = y = 1 = w. Siw.e d~ t( A) = 0. The operator can be written as [::] = [~ ~] th11s t he standard matrix is A = [~ ~] . Since det(A) = 0. the operator is neither l . where A= The_ran_ gr. ..3. a.  ~) = J = (~ = 17 :f.1). consists o~~~~ vedors of ~~. w ~ TA [~ =~]. the operator is both 1·1 and onto.an I . The augmented matrix of thls system be reduced to [~ 13. t hus the standard mat rix is A = [ ~ ~]. 0. The augmented matrix of t.is consistent. the operu. 2. :: [::] .1 f.y =3 x +z = 3 y .Exercise Set 6.z=O is consistent.
the a. linear t. the vector w = (1 .1 ) is not in the range of TA . 1.rl\. [~ 2 3 0 4 ~) oJ In p"'ticula<. w2. 20. w3) f0r which the linear system = [5 1.is in the range of 1A} if ar:d only if w 1 partiC lllar.1 consists of all vectors w = 3 . The range of T. where A onto.nsformation T A : R 2 + R 3 is onetoone if And only if the linear system Ax = 0 has only the t rivial solution.oone. and so TA (b) The augmented matrix of the system Ax == 0 is IS onetoone. 'tJIH · w 2 + w3 Thus the system is consistent {w . : R2 : R3 consists of those vectors w in Jl3 for which the linear system Ax = w is consistent. (a) The range of the t ransformation T.1 1 4 ~ [~ = ~] [::] = [:~] 4 1 2 X3 W3 is consistent.. Since det(A) = 0.4x 4 I } : 1 2 : 0 ~ 0 has the gene'"! solution x = t [ ~) . The augmented matrix of this system can be row reduced as follows: 1 2 [4 5 1 1 3l 2: 1: Wt] W2 t [1 2 1: l 0 0 9 2 9 2 ] Wt ] W2  5wl W3 W3 . 19. TA is not 2 (w1 . = 0. 2 1] .180 Chap~r6 18.~d row echelon form of this matrix is From t his we conclude that Ax = 0 has only the trivial solution. The augmented matrix of Ax = 0 is and the redur. The augmented matrix of Ax = w is 1 1 [ ! Wt] W2 W3 2 0 : 3 4 : . The operator can be written as w =TAX.nd t he reduced row echeJon form of t his matrix is o Fwm this we conclude that . In (a) Th. systerr.~.4w1 + 0 0 [1 2 1 \ 9 2 : 0 0: W1 ] W2  W3  5w1 W2 +. Ax = 0 has nontrivial solutions and so the transformation TA : R 3 + R 2 is not onet.
r l F'rorn this we conclude that Ax = w is consistent for every vector w in R 2 .2bl 6 6 : 03 . or 6/>1 + 3bz  4b3 = 0. jt was obtained by solving for b1 in t.: [~ ( b) The range of 2 1 4 0 . W) W2 .2WI 0 2 : l [} } . o tne form [ls:+ ]=' [!] 1' (c) +t m Note. = 0.ctors b \f ib3  = 0.. thus TA is onto.l ~ [I 3 I I 0 0 2 k l 3: b l . This is just one possibility.. 8 .3w1 0 2 0 o: 2: Wz Wt2Wt 2w3 + w2.~b1 ~ b2 . The augmented matrix of the system Ax= b can be row reduced as follow:.1 1 0 0 0 0 l 1 I I I I I b. [ 0 3 4 : W3 0 1 : W3 .t [ s t tl = [1+ [1] ] s 1 1 t 1 0 0 1 .. consistent if and only if . 1 1 l I 0 2 . (b) The range of the transformation TA : R3 ~ R 2 consists of those vectors win R2 for which the 1inear system Ax = w is consistent.8 .4 l Wt] 1 W2 and this· matrix can be r ow reduced as follows: [_~ 21.e.8w1 + w 2 + 2w. 3 6 2 3 3 b2 b:J b. and > then making bz and o into parameters. the solution s pace of Ax = 0) consists of all vectors of the form X= s + s.8w1 l From t his we conclude that Ax = w i.•2 +w1 .Exercise Set 6. (a) 2 3 : WJ 2 1 : u.{bl ] Ql ~b3 . 0 can be row reduced to [~ (l 0 1 0 1 1 0 0 0 : 0 1 L: 0 ] l Thus the kernel of TA (i . ~ b2 . T he augme nted matrix of Ax = w is 2 3 0 .3bl 6 [' 2 .kbz .: : [ 2 0 : 'W2 . 02 .3 181 and this matrix can be row reduced as follows: 11 : WJ] [1. thus the transformation TA is not onto.erms of 02 a nd b. 3 The augmented matrix of the system Ax ::.ib It follows that Ax = b is consistent if and only ~he transformation T A consists of'~.
Jf Bx = 0.4 5 1 2 3 fTA oTaf = AB = [~ 3 1][1 4 0 1 3 6 ~][! 0 3 1] = [40 1 12 6 0 9 18 20] 18 43 3 0 5 2 3 .3. (a) True. D2. [Ts o TAJ =BA = [.3.at:a.4 1. 16 58 31 . T he line x = vo +tv does not p ass through the origin and thus is not a subspace of It. then Tx (e) True. x1  . follows from Theorem 6. thus T( u . If TA is not onetoone. !Tn o 'I~4 J = BA = = [~ [: 3 0 !][: ~] [: 4 0 2 1 3 2 4 0] = [10 18 2!] 545 ~~ 25 2 3 0 [TA oToJ = A E3 1 :11] = [5 . The transformation is not onetoone since T(v ) to a . thus x is in t he nullspace of AB. No (assu ming vo is not a scalar multiple of v ). WORKING WITH PROOFS P l. In 'f:.v) = 0 implies u . nn . (a) iT!) = [~ ~1 = [~ [12) = [~ ~] ITt oTz) {b) [72 c T1] (c) ~] [~ ~J = [! ~] + Jx2. lf T : Rn + Rn is onto.re parallel D3 . then (11B)x = A(Bx) = AO = 0 .:~3 3. 0 and so T = TA is onetoone.7 that this line Calmot be flQ ual t o range of a linear operator. If T is onetoone.11 45 2 6 2.15.15 .3. then (from Theorem 6. The standard matrix of a shear operator T is of the form A = [ ~ either case. See Theorem 6.is onto. we have det(A) =:= 1 ~] or A = [~ ~] . (b) True. 6x1 2x2) = [~ ~J [~ ~] = [~ _:] 4x2} T2(Tt (x 1.J 2 = 8 r 10 38 18 3 22] . T he transformation TA : R"+ Rm .14) it is onetoone and so the argument given in part (a) applies. EXERCISE SET 6.8I] 8 3 7 44 .v = 0 and u = v. (c) 'ITue.182 Chapter 6 DISCUSSION AND DISCOVERY DL = 0 if and only if x = 0. x2)) = (3xt T1 (T2(x1t x2)) = (Sx1 + 4x2. (d) 'ltue. No. = ax v = 0 for all vectors v tb. then the homogeneous linear system Ax = 0 has infinitely many nontrivial solutions. D 4.
x2.17:r1 +3x2) T1 (T2(x1.4x2 . A. x2. (a) The s t andard matrix for the rotat1on is A 1 tion is A2 = [~ ~] . ~ [~ ~] [~ ~] [ ~ ~] ~ [~ _ J] ( b ) The s tandard matrix for the composition i:s (c) The composition corresponds to a counterclockwise rotation of 180°.OJ 1 1 = (b) The s t andard matrix for the projection followed by the contraction is (c) The s tandard matrix for the reflection followed by the dilation is . 2x1 . (a) T he s t andard matrix for the r.1 {b) [12 o T1] {T1 o T3] [~ ~] 20 ][ [~ ~] ~] ~ ~ ~}[~ 2 0] = [ 4 8 0] 0 1 3 ~] 0 1 2 [T2] = () 0 .tl~c Thus t he standard m atrix for the rotation followed by the reflection is A2 A 1 = 1] 1 0 [00 [01] [l0 .Exercise Set 6.4 183 4.x3)) = (2x2. (a) The s tandard matrix for the reflection followed by the projection is . x3)) = (4:rt + 8x2.il·z A1 . {a) !T1] = [~ := . =R rr = [1 0] 0 .n 3 12 = !2 R 12+J2+ 3 11 7rr r. = [~ .(x1.:r1 +3x2.2x2 + 3x3) 5. The standard matrix is R z<.1 0 . R 1n R_ .1 7.omposition is A.1 2 3 (c) T2{T.x3. xi .1 0 1 2 . = [:l 3 [10 1OJ = [3 3 OJ 01 0 0 6.1 3 4 0 l = u 2 3 3 [ 1 3 0 1 0 1 2 4 1 .~] and the standard matrix for the rP.A.
.l.184 Chapter 6 {b) The standard matrix for the rotation followed by the dilation is 0 [ ~ 0 ~] = [ 1V2 o] o 1 o o 0 v'2 (c) The standar~ 72 0 ~ 1 0 matrix for the projection followed by the reflection is 8. ] 16 8 1 (b) T he standard matrix for the composition is 10./3 6 (c) The standard matrix for the projection followed by t he reflection is D. _fl16 l 16 .! :If0~0 ] [0 ] .! 6 ./3 2 0! 0. (a) T he standard matrix for the reflection followed by the projection is (b) T he st.1.andard matrix for the rotation followed by the contraction is A2A1 = l ~ 0] 0 ~ 00 [' [ 1 00 3 {! ! = 2 0 0 . (a) The standard matrix for the composition is .trix for the composition is _ fl 16 . (a) The standard ma.
r. Note. multipHcatlon by Acorresponds t. and reflection about t.or of 5. . thus multiplicat ion by A corresponds ~o a shear in the xdircction with factor 4. (c) We have A=[~ ~] = [~ ~] [~ ~] r~ ~l [~ ~] ..ion by a fad or of 2.ion of A as a product of elementary matrices is not unique. (a) We have A=[! .ion with factor ~ . followed by reflection about t he line y = (c) We have A= [~ ~] = g~] [~ ~] [~ ~] [~ ~] . :. followed hy expans ion in the ydircction with factor 18. thus multiplk:a Lion by A corresponds to reflection about the line y = x .t.~. 14. Expansion of R2 in the ydirec tion wjth factor 2. The fac toriut. This is just one possibility.2 about the :r axis. followed by a shear in theydirection with factor 2.direction with f(l(:t(Jt 2. followed by expansion in the ydirect. Rotation of R 2 about the origin through an angle of . (b j Rotation about the origin through an angle of (c) Dilation by a fact. (d) We have A= [~ :] = [~ ~1 g~} .spondsto ashear in the xdircction with factor 3. expansion in the xdirect ion by a factor of 4.thus n. (d) We havcA= [: :] = [~ ~] [~ 1 ~) [~ ~]. (d) Compression ill Llu~ xdirect.] = [~ A =[~ ~] = !] [~~]. thus multiplicat ion by Acorresponds to reflection about t he xaxis.thu$ multiplicationbyA c~)rr~:. (a ) Refl ection about theyaxis.Exercise Set 6. 12. ~.ion and compression by a factor of ~ in t he y <. 13.o compression by a factor of ~ in the x direct. expansion in the ydirection by a factor of [) .lirection. and a shear in t he yd irection with factor 4. (a) We have A= [~ ~] = [~ ~] [~ ~l thus multiplication by A corresponds to expansion by a [~ ~] = [~ ~] [~ ~] . (a) (b) (c ) (d ) Reflection of R. and reflection about the line y = x. thus multiplication by A corresponds to reflection about factor of 3 in the ydirection and expansion by a factor of 2 in the xdirection.he xaxis. (b) We have [~ ~J [~ thus multiplication by Acorresponds to a shear in the :r. (b) We have A = the line y = x followed by a shear in the xd irection with factor 2. followed by followed by expans ion in the xdirect ion by a factor of 2.4 185 (b) The standard matrix for the composition is 11. Contraction of R 2 by a factor of ~.
rd matrix for r . The standard matrix for the opP. w2 ) = (w1 + 2'W2. 3 41 t hus the formula for T l is sJ .1 is T3 5 26 23. w2) 18 . T is one. The standard matrix for the operator Tis A the sto.l matrix fqr the operator Tis A= [! ~]. Since A is invertible. The standa<d mat<lx fo< the ope<ato< TIs A = [. The standard mat r ix for the operator T is A = 12 2] [ 1 1 1 .1 is A. the standard matrix fo r r .1 = [.wl)· = [~ ~]. The st.: ~l thus r .qn fo llows fTo m IT. 2 1 Since A is not invertible.toone ] 7 I . Tis onetoone and w2) . . Since A is invertible. ~ 26 I thus the formula for r. (a) lL is easy to see d irectly (from the geometric definitions) that T 1 o T 2 a J. 26 Since A is inve<tible. Tis not onetoone.andarrl matrix for the operator T is A 41 Since A is inver tible.1(w1.I = 2 I '. 20.1 is A.. The st andan.rator Tis A = the standard matrix for yl is A 1 = 2~ [~ !l [~ ~].l is A. Tis not onetoone.186 Chapter 6 15. 19.  1 12wl + iw2)· 17.1(wl.w2) = <lwl + 1w2.1 = [~ ~].t = [~ 1 2 2 3 . 0 Since A is invertible. TIs onetoone and the stand ard mat rix for T  1 is A l = r* iJ l fi r5=7] .i I =1 . : :J. 3 and the standard matrix for T .1 is A . Tis onetoone and thus r 1 (wl.] ~~~} = [~ ~] [~ ~] = [~ ~] and IT2} lTd= [~ ~) [~ ~] = [~ ~].ndn. Since A is invertible. tv 1  16. The standard matrix for 7 ' is A =[ i ~! [ ~].thus the fo<mula fo< r  Ill 0 1 is 22. thus r. T is onet~>oone a n d thf' standard matrix for r. =0 = T 2 o T 1 • This . = [~ 2 2 ~ ~ 3 3 21. T 1s onet(rone and = (W2. Since A is not invertible. The standard matrix for the operator Tis A = {~ ~].
i. i~ have JJ.cos 81 sin Bz sin 81 sin cosfh cosBz .: TzoT1 . 18 = [~ .~] as adjacent sides. thus cos02 .~ .!_ V2 7i][0 1] _[ 72 72 1 l O  _ _!_ v'2 27. (a) It is ea.. 16  f _. = [ '~ 7ft4 . lt fo llows that T j 0 o 1'2 f Tz oTt .y t.:os&1 sin02 0 cos02 0 ~].ha. . The image of the unit square is the parallelogram having the vectors T( e!) = U] and T( ez) = [. T hus t. °l1= [T o 12] .hus fTI] [Tz] === (kJ)[TzJ=kl12] 0 0 k k [T2 }. We have H:rii = [~ ~] and f/. and [TzJ ITl i = ITz]{kl} = (c) We have ITt] ::.sinBz [Tx oTz] =ITt} !Tz) = coslh sin Bz cosB1 cos82 [ sin lh sin Bz sin B1 cos B2 . This also follows from the computation carried out in Example l.4 187 (b) It js easy to see directly that the composition of Tt and T2 (in either order) corresponds to rotation about the origin through the angle 81 + Bz.1 ~ '\!jl 2 . t.he standard matrix for the composition is _ lirr /8 H .t T 1 oT2 =I :.. The area of this parallelogram is jdet(A}I = ldet [~ ~] ! = 12 + II= 3.\ ~ ] [~ ~ 'l and J1.~).sin 81 cos Bz sin 8 1 COSO) and it follows tha t T1 o Tz 24 . Wt:. .o see dir~etl y Ozl f Tz o T1 . Jt follows that 1'1 o T 2 = T 2 o T 1 = kT2.[~~~] =kl .~ ~1 [~ ~] and [T:d S in and lTz]ITJ] = [~ ~] [~ ~1 = [~ ~J jTJ} [T2 ] (b) We have [Tl] = [T2 o T1 ] = [::: 1 :~::]. 25.<.thus IIi oTz] = 1 = [co~O s~ne] and = [T2} [T1 ) = [c~soe. 13 = _.. (c) We have lTd = r lO l 0 0 0 cosO t sin9t sin l and [12] = [cos 0 sin 0 2  2 £11 r. thus 11 o Tz = T2 o T1 . This l::llso follows from the fact that [Td [Tz!= [~ ~] [~ ~1 = [. T int:> the standard matrix for the composition 2 ' .Exercise Set 6.ill [ 2 2 1  z 2 ! viJ 26.
.Ba) · Since {)= 2(0.188 ChapterS y 7f· 28. If T 2 (ran(T1 )) = R" then T2 o T. The image of the unit square is the parallelogram having the vectors T(e1 ) = g] a~d T{e 2 ) = [!] as adjacent sides.a of this parallelogram is ldct(. = He Her~· . (d ) True.i ··2 sin {3 cos f3 sin 2 f3 .Rk . sin~) sm ~" Thus R H R _1 [cos {3 . If 1't(X) = 0 with x # 0 .[cos2/3 sin2/3] .91 = 1.sin 2/3 cos 2/3 i3 and su multiplication by the matrix RiJHoR~ 1 corresponds to reflection about the line L. (a) 'l'rue.cos2 f3J . D3. we have He.sin {3 2sinf1cosf3 l . t hus T2 o T~ is not onetoone. 0 . From Example 2. (c) False. then T2(T1 (x)) = T2 (0) = 0 .n(TI) n ker(Ti) = {OL then the transformation 72 oTt will be onetoone. is onto. 7) J  'f 1/ 5 . We have RfJ = [cos~ sm "" 0 0 . it follows t hat Re Thus every rotation can be expressed as a composition of two reflections. (3. then Tz(T1 (x)) = 0 if and only if T. 02.1: DISCUSSION AND DISCOVERY D l .H f. H0 = ( cos _ 2 1 0 ]. If ran{T2 ) :F Rk then ra.eM~ cos"' . Hez = R 2ctJ 2 .sinf{J'].A)I = \det [~ !] I = 1 ..1 2 and R. If xis in Rn. thus if T t is onetoone and ra. 1 " = Rp = [.~8).(x) belongs to the kernel of Tz. The ar(. 8 "· ( • 3) ··· " I ) .n(T2 oT1) ~ ran(T2 ) #. J ) {5. ( b ) False.
1.s . (a) {b) 5. 2.2 I 0 1 I 0 0 l lt' O of this matrix is [ ~ ~ ~ ~ ~: ~ .v2_} where V J :. [0 1 4 1 10 3 5 · 1 1 1 0 : ] I .2) forms a basis for the line y = 2x.! .1 2}. t < oo.(l" 4 ~ VJ tJI "(. z ~ 3t. I 0 0 O tO Thus the general solution is X= ( 3t.x of tho system i' [: matrix is = . 9.ystcm is [I 1 0 IJ I :0 ~ . 1) form a basis for x + y + 2z 1 1 (! = 0.t . . 1.J1 d V 2 """ ( 0 >  1. 0 l) • I 8.~ ~ ~ ] °] and the reduced row echelon form 0 0 l 1 .O. (a) Any one of the vectors (1 2). 1.tl . (2.0.1. ('2. ( .. 6) . . 0.0. ( . 195 . + 2t2. (b) Any two of the vectors (1.1).1 0 1) 4 4 I t ' l l \ where oo < s.0.1 1 0 and the reduced row echelon form of Thus a general solution of the system is _!s ·· t J s ) t) 4  x. (1. The augmented matrix of tl1e ::. 1 7. 0).t.! 1 O)+t(O . (2.'l. basis for the plane x=t1 +t y =. 1..1.1) where oo < . t) t( 3 . .1). 5) form n..0.al basis {( 3. 1) where co< l < oc.CHAPTER 7 Dimension and Structure EXERCISE SET 7. 0. 21 6.0) and v 2 = (.0) +t(1. 2. .. . (a) (b) (c) (c) 2.0. 3).0. Thus the general solution 0 0 0 0 is o:o x = (. 1)} . . t < oo. The augmented matrix of the system is [ this matrix is 1 0 l 0 I 0] ~ : .~ . s..t) = s(1.s : ~= I () :1: 0] [ 0 1 2 •o .3).1 ~ 3) forms a basis for the line x = t y = 3t . 2. v'2} where Vt = (1. (0. (b) Any two of the vectors (1. The solution space is 1dimcnsional with cnnonic. 1). . ~ ~] nnd t ho ceduced row echelon fo. Tho augmented mat.l:t. (a) Any one of the vectors (1.2t . T he solution space is 2dimensional wil.1.0. 0.J.h canonical basis {v 1 . ( 1  ! 1 1) 0) (l.1 1. The solution space is 2dimensional with canonical basis {v 1 .m of th .
1. 1) where . 0) + t(3. consists of all vectors x = {x 1 . 0 o f thJs maLnx 1s ~ [ . s. s. 1. 0. 11. 0.he hyperplane.1. . 2s  4t. t < oo. the solutions of this·equation can be written in the form where oo < s. 0. v 2 = {0. A hyperplane in R!' has dimension 5. The augmented matrix of the system is 1 r ~ ~: : !i~] : 0 1 0 1 i I and the reduced row echelon form 0 .:the equation x + 2y. 0) and v 2 = (3.1. 1. the solutions of this equation can be written in the form = (2s + 3t.1. 4). 0. form a basis for the hyperplane.2.ors v 1 = (1. (a) (b) (c) (d) Assuming A :1 0 . ~ . ~.3 : 0] 1 0 1 1 0 ~ ~ ~ ~ I _l . 3).oo < s. s. x3 s and x 4 = t as free variables. 0) and Y2 = (3. v 2 =::: (1 .1 consists of all vectors x = (x. consists of all vectors x= (x 1 . j. t hus they span a 3dimensional subs pace of R 4 • . 1). Using x 1 = r. ~. 0) where . 2. T hus thevcctors v 1 form a basis for the hyperplane.2. 2.2x + y + 4z = 0. x 3. 0) + t(O. s .3. 1) form a. !t.0. 0.0) + t(O . basis for t. 1. . 0. 4. . 2. 0. (b ) T he hyperplanC! (2. The vectors Yt = (1 . or 5. l.3. t) = s( . (b ) The hyperplane (0.4. and v 3 = (0.1.1. 1. .1 . ~ Thus t he general solution is x = ( 3s + ~t..ve dimensions 0.1 . X. 1 (a) The h yperplane ( . 0).t.t) = s(3. t < oo. 3. and v 3 = (0. 5. 1) = (1. y. Thus the vect. 1. t < oo.z) in R 3 satisfying. 0) + s(O.oo < r. t < oo. The solution space is 2dimensional with canonical basis {vh v 2} where Yt = (. 0. the solutions of this equation can he written in t he form x = (s. t ) = r(l. DISCUSSION AND DISCOVERY Dl. 1) "' where oo < r .2. Using ·y =sand z = t as free variables. 4 . Using x = s and z = t as free variables. + s(O.2. (a) T he hyperplane (1 . t ) = s(l . = (1. 1. 1 2. t < oo.0. 1. Using x1 = r.0) are linearly independent. 0. 2. 0.3z = 0. and v 3 = (1. 1. 4. ~s + ~t.s. 4.oo < 8 . 1. 0. 0).0). X3 = s and x 4 = t as free variables. the solutions of this equation can b e written in the form x = (r. 8. 1) x = x = (r.xz + 4x3 + x 4 = 0. 0 0.1) where . 1) T hus the vectors v 1 = (. T he subspaces of R5 ha. xa. 0) and v 2 = {0.L consists of all vectors x = (x . j . y. 1. x 4 ) in R 4 satisfying t he equation 2xt . 0. 2.0) + t( ! .1. 0.0). 7)J. the dimension of t he solution sp ace of Ax = 0 is at most n.1.t) in R4:satisfy ing t he equation 3x2 + 5x3 + 7x4 = 0. x2 . 0). 2r + 4s + t. Thus the vectors v 1 form a basis for the hyperplane.1. x 2 . o. 1).!.t96 Chapter 7 10. 0.s. z) in R 3 satisfying the equation ·. 4. 0) 't t(O. v 2 (0.1. 0. 0).3 . t) = r(l .
he zero vector is de pendent. 8) arc linearly independent since neither is a scalar multiple of the other.l!c list. v 2 .lm::. • . Such a set is linearly dependent ~ince. <~ s<>t. v 2.3) are linearly independent ~incc of the other. . . ~ nuut1 i v i<\l Jepcndeucy rela. •) then.>ining t. False. then (ka) · x ' .v2 = (0.x) = (l if and only if a· x = 0: t.t. • .2 197 D2. 4 . It follows t hat if S' is a ny nonempty subs et of S then S' must. 0 f (. The solution space of Ax = 0 has positive dimension if and only H the system has nontrivial solutions. . thus these two vectors form a b~is for R 2 .0.oo many) (c) These vectors are linearly d ~7. . cz. A basis for R3 must centum ~xadly three vect ors (two nre not eno11gh). 1. Yes. i. thus these two \'ect.8. Since det(A) = t 2 + 7t + 12 = (t + 3)(t + 4).. v 2 .ck (not all zero) such that c1v 1 + c 2v 2 + · · · + C~<. The solution space bas dimension 1 in each case. lt follows that the setS'= {v 1. .2 1. 2.pendent (any set of vectors cont<.vJc} is a linearly independent set in R"' then there is no nont rivial dependency relation among its elements.e . and v 3 are linearly il. WORKING WITH PROOFS Pl. VJc = 0. 5) and v 2 = (4 . aho be linearly independent since a nontrivial dependency among its elements would also be a nontrivial ~~ependency among t he elements of S . If we write the vectors in the order v 4 = (0.0) is not a.F. ').0) is not a scalar multiple of vl = (4. (a) (b) (c) A bMis for R 2 must contain exactly two vectors (three are t. 1) and v 2 ""'(0. ·. 0. (a) If S = {vh v2. .O). l. (a) A basis for R2 must contain exactly two vectQrS (one is not enough).XERCISE SET 7. l. . ·. and v 3 are Iincarly independent. P2. v 2 .0.v3 = (0.6. any ~ct of three vectors in R 2 is linearly dependent. D3. (a) neither is a scalar m1.0. 1. because of ~he positions of the leading ls. "'. when is a linear combination of its predecessors.dependcnt. (b) The vector Vz = (7. 1.k( a. they are linearly independent.. scalar multiple of v 1 = (0. tht~ set is wrltt. •.. I··· · + C>Yk + Owl+ ·· · + Owr 0 = (b) If S= {v t.0.. anJ it from a basis for R 3 .~) . some vector D4. •). ..lllung its demen ts. wr} is lillearly dependent since c 1v 1 + c2v2 i::. linear combination of v 1 and v 2 since a ny such linear combination component. (a) The vectors v 1 = (7. .. vk.ors (four are t. .01'S cannot span R:1 Thf: vector~ v 1 and v 2 are linc<trly dependPnt (v 2 = 2v 1 ). it foll ows t hat the solution space has positive dimension if and only if t = 3 or t = 4. ( k:a )J = a. v1 = (l. vk} is a linearly dependent set in R" then t here C\re scalars Ct . (b) The vector v2 (0 1 8.eH in reverse order. Thus v 1 .). 1) is not a would have 0 in the third fo llows that t. w1.hese vectors 3.ioJl would h ave 0 in the first component. Thus v 1 .tio n i:l. if Ct VJ + c2v2 I····+ c~cv~c = 0 then Ct = c2 = · · · = c~c == 0. .wo V<'('t. 3). ar~d this occurs if and only if det(A) = 0.l EXERCISE SET 7. The vectors v 1 = (2. (b) A basis for R 3 must contain exact.0) is not a linear combi1iation of v 1 and v2 since any such linear combim. a nd it follows that these vecto rs from a basis for R3 . it is clear that none of these vectors can be written as a linear combination of the preceding ones in t.. • . = . .tltiplc and V:J:::: (1.ors form a hasis for R 2 . ./ If k #·0.0. . and v 3 = (1.1.oo many).ly three vect.
v2. . + (x .ors v 1. and v = 7v 1 + 4v2 + 3v 3 ...oo < t < oo. \ ~· ':~. 8 5 3 the column vectors are linearly dependent and hence do not form a basis for R 3 . v 2 .l. z:) in R 1 can be written as t hus the vectors v 1 . . ·~ . · .• a nd. . . v 2 . ' . v 3. (b ) T hP ..y)v3 + Ov4 ..:~. 7 (a) An arl•itmry vector x = (:r. v 3 as its column vectors is A = [ ~ 4 ~] · Sir•..5 2 t he column vectors a re linearly dependent and hence do not form a ba.·..  .). ~e t...· ~1 . v2 . and v 4 span R 3 .'I . 4 2 3 the column vectors a re linearly independent and bence form a basis for (b ) T he matrix having v 1 . + v 2 + v 3. Oo tbe other hand . Thus for a. 8 := 5 :f: 0... . and v 4 span R3 • On the other hand. (a) The matrix having v 1 .. = 8. corresponding to t = 0.·~"' . v 3 . ' il. y .cc det{A) Rl.t .198 Chapter 7 5.. v 2 . =··'··.~C4 = \'~. V1 . and v 3 as its column vectors is A = 4 [~ ! ~]. ·... since the vecl ors v 1. v = 4v 2 + ~ v3 + v4.\ r} I .".nd hence form a basis for R3 . a nd v3 as its column vectors is A= [ : ~ ~] · Since de t(. '~ \1.~.si& for 6. t = 1. .l ing c~ = t. n general solution of this system is given by CJ/~·.. z} can be written as x = zv 1 + (y. v2.~i ·= ~t.z)v. (a) An nrhitra ry \'ector x = (x . since v 4 = 2v 2 vt:ct. the (l ·· . ~n d v 4 are linearly dependent and do not form a basis for R 3 . .~ ·.~.._.) I .ny value of t . and t = 6. (a) The ma t r ix having v 1 . (b) The matrix having v 1 .·ector equa t ion v = c 1 v 1 + c2 v 2 + c 3 v 3 + c4 v 4 1 0 [0 0 0 is equivalent to the linear system 0 2 0 3 :J l~ l l = m . t he column vec tors are linearly independent a.hus the vectors v .~. and do not form a basis for R 3 . + v3 ..~. we have v v .Si~:e det(A) = l l 1 =f 0. v 3 . . J · t. where . . ..:~.6v 4 . I I . .. Jl3 . and v 3 as its column vectors is A= [ ~ ~ ~].) = 0. v 2 . In pa r ticular. y. and v 4 are linea rly dependent. v 3 . v 2 ..Since det(A) = 0.~3 ~ .
C3 = .] 11. and it follows ftom this that S' = { vb v2 .~ v4 . . 3) . 1. 2. L€t A be the matrix having v 11 v 2. Vz.t.6 0. thus S == {v 1 .i•:cn by 1 C1 = 3. e2} and {v 1.2 199 {b) The vector equation v = c1v 1 + c2v 2 + c3v 3 + c4v4 is equivalent to the linear system The augmented matrix·of this system is [ Thm:.e3} are bases for R 3 . v 2 } is a linearly independent set. The vector v 2 = {1. 9. A ::: 1 1 1] [ 2 3 2 . i. and t = .. ~>~·t. 0) . . [Similarly.v 2 .~ . and c 1 as i~s columns. t = .EXERCISE SET 7.vz. For example. thns S = {v1 .1. e2} is a b asis for R3 .1. it can he $hown that {v 1 . v 2} is a linearly independent set .c4 =t where oo < t < oo.v3.. it can be shown t hat {v 1.t. and e 1 as its columns. correspond ing _9 t =0.e. .J 10.t)v3 + LV4 for any value of t. The vector v2 = (3.v 4 . and v = 3v . ~he set.. Let A be the matrix having v 11 v 2 . cz =: 1 2t .e3 } is linearly dependent and thus not t':. . v = 3v1 ·. t. e1} is a linearly independent set and hence a basis for R 3 . {vl> v 2 . from this that S' = { v 1. basis for R 3 . v = 3v 1 + v2.2) is not a scalar multiple of Vt = (1 . 1 .ing }~ 1 0 2 :2 1 3 1] I o o ol3 and the reduced row echelon form of this matrix is 1 0 0 0: 3] 1 [0 0 0 2:1 Q 1 1 1 I c.(1 + 2t)vz  (1 1. [Similarly. 2. a general solut io n of the sys tem i:. e1} is a linearly indepew. Thu:.v7 . = t .2 0 0 Then det( A) = 2 .he other hand. On t.2) is not a scalar multiple of v 1 = ( .lent set and hence a basis for R 3 .2 ~] 0 Then det(A) = 2 f 0.e.4 = [ 0 ~ : . i.1 . and it follow:. we have v = t 3v 1 . .~ v3 . v 2.
re linearly independent since· det 2 2 l = 27 =I= 0. The \'P. 14. and the plane W has normal vector n = (1. 1. 1.2 =f.3. has the solution c3 v = 3vl + 4v2 + v3. (b) T he line V is parallel to u = (J. Since u · n . 1). the vector u is not orthogonal ton. Thus Vis contained (a) il' w. from back substitution. 3) = (3. by linearity.rix is 0 1 0!_i Ql 1 I I l. . Thus Vis not contained in W. v 4 } is a basis for R3 • 2 3 4 13.J) = 0. 0) .200 Chapter7 12.~(1. it follows that t S 1 T(~. 1. a.2)+ ~(4.2. . (1)(2) + (2)(0) ~ ( . a.0. 1) 0 1 2 is equivalent to the linear [~ 1 hus (3. v 3 .nd the plane W has normal vector n = (3. 1. Sim:e u · n = (1) (2) + (1)( l) + (3)( . Since u· n = (1) (3) + (2)(2) + (5)(1) = 2 i: 0. 2. . 0.2.l 1i 0 I I (1.1. 1)= \ 3 (2. Since del = 1.1 .ystem with augmented mo.5). Since dct o 11 I] = [ 1 1 1 f 0. V3 (column vectors of the matrix) form a basis for R3 . (b) The line Vis parallel to u = (2. c1 = 2. 1). 13.t.ctor cquatioH v ::: c1v 1 + c2v 2 + c3v3 is equivalent to the linear system 15.~ (0. 2 . Thus. 1. (a) (a) 17 T he vector equation c1(1. c2 = 5. 16.3.1)(1) = 1 i: 0. Since u·n:o::: (2)0)+( 1 )(3)+(1}(1) :::0.nd if we [~ele~ t~el vector v 2 from the set S then the remaining vectors a. 2) ::. Thus S' = { Vt. 0 0 1 The vector equation v = c1v1 + c2v2 +c3v3 is equivalent to the linear system which.1. Thus [~ ~ ~] = .c2  c:v=. Thus Vis contained in IV. 0) = !(26. 0) + cz(O.c3 = 4. the vector u is orthogonal ton. 1.1) = 13 5 T he reduced row echelon form of this rna. v:~.. 2.trix + CJ(2. the vectors VJ. 20) . We h ave v 2 = 2v 1. va (column vectors of the matrix) form a basis for 0 I J R3 . 2) + {2. the vector u is orthogon~l ton. 3) and. Vis not contained in W . the vector u is not orthogonal ton.1). . 1). the vectors v1r v2..
b + 2c) (c) F\om the lo<mula above. 0) = (a . we have ITI = [j 3 1] . b. l) + c2(l . o) = o(3. 1.2a +3b+ c. Thus (4. l.1) + 1(5.a + 2b . 1.and. 1) I· c2 ( l. c:. 0) + c:l( 1. (a) 'fhe vector equation c 1 (1.b.(Ll 0) + l(l~Q.EXERCISE SET 7._QL. . a + C) SC> 1 ( c) From lhe formula above. 0) + (~a+ lb + tc){O. 2) +(~a .3b+ c.. 0) a nd so.0) = O(l.. I)+ 3(2. Thus (a. 3. b.. 3. (1. l.. 2.• •.~c). b.~a+ ~b + !c)(l . 0. we have T(a. .0} + c 2 (0. 2) + c 3 (2. by lmearity. 1. 1. 1) + (a  b)(5.3c. 3. 1. 0. 1. o)~ . . X3 :: = ~ which ha. 3b = ( gaT !) + I ga + [.(1 1.to the linear syil rl I 1] [:t1 [a ] ] l~ ~ ~ c. c) ill equivaleht . 3) and T(a . 1.(l. 0. 1. x2 =b  =a .o ~ ~ ~] 1 2 3 : c The reduced row echelon form of this matrix is Thus (a. 0) = ( 4.3. .1. c) = c(3.1) + (b .2 201 (b) The vector equation CHI. / hy Jineari t. ·. 1.b. 1) + (b.2.c)(l.~b + ~ c}(4. ·2.2) + { ~alb+ ic)(2.c) is equivalent to the linear sys ~ [.Jlj. 0) = (5a . we have {T I =[ ! 1 18.)'. 10. 1) + (. l. 1. 0) is equivalent to t he linear systern ~hich has solution c1 \V€ = 0.c)(2. _. 0} + c3(l. 0 . O. nave . c) = c(l. l . c) = (! a + ~b.? · ( b) T he vector eqnatiun c. 3. 2~ o. l. 1. 1.~c)(2.~ = 1.b)(l .3) te m with augmented matrix = (a. c) =(~a+ ~b. s o!ution x1 = c. 1.. c 2 = 3. 3)/ . 0) 4b 2 7 .:. b. T('t 3.gC. b.0) + (a.l.1 2 3 l 2 3 . .
. is not possible to create a basis by formiug linear combinations of the vectors in S . then det(A) ::: 0 and so the row vectors of r1 are linearly dependent. :3.Imiz Thus a general vector x = (x. Let A be the matrix having the vectors v 1 and dct (A) ""' (sin 2 o.is for V . z) can be expressed in terms of the basis vectors v 1 . y. .. b asis for W can be obtained by removing appropriate vectors from S.. (a) True. If Ax = 0 Iws infin itely many solutions. 02. (b) True.202 Chapter 7 19. (e) True.. + 1z 2 4" 4 : 0 1 1 33 50 x 23 19 + 10oY. . . the vectors v 1 = (1 . 7. 8).2. 1f every vector in Rn can be expressed in exactly one way as a linear combina tion of the vectors in S . .!. If V £:. W (or W ~ V) and if dim( V) = d im ( W). and let v 1 be a nonzero vector in V. Thus t here a rc a t. If s = {vl . a. Since det [~ ! :] 5 8 5 := . The vector equation c 1v 1 + c 2 v 2 augmented matrix + c:~ v 3 = x is equivalent to the linear system with [. then eit her S is a basis for W (in which caseS contains exactly k ve<::tors) or. . (d) True.2(bJ. then s is not a spanning set for n n. ± 2.cos 2o + cos 2{3 and det(A) =/= 0 if and only if cos 2a j cos 2/3 .. If dim(V) = 1.wj } is a spanning set for W. 2. v 21 _~3 as follows: X = (. No. . the vectors v 1 and v 2 form a basis for R 2 .hus it. D5.. ( c) True. . v 2 = (3.5 1 0 0 0 1 0 [ 0 7 4:x] 13 3 l y 2 8 5 : z 1 1 and the reduced row echelon form of this matrix is i. 'Vn} is a linearly dependent set in Rn. E ach such operat()r corresponds to (and is determined by) a permutation of the vectors in t he basis B . 03. Any set of more than n vectors in Rn is linearly depe ndent. 5) form a b asis for R 3 .5).iox .. Y2. Let V b e a nonzero subspace uf R" . . Thus the number of clements in a spanning set must be at least k . and the smallest possible number is k.e. Then . ± I . then S = {v 1 } is a ba. w2 . If S = {w1 .. V3 = (4.2.sin 2 /3) v2 as its columns. i. Any set of less than n vectors cannot be spanning set for Rn. s ubspace of Rn and dim( W) = k. Suppose W is a.doY + t~oz] 1 I  !x .(cos 2 a . from Theorem 7. WORKING WITH PROOFS Pl.otal of n! s uch operators.2. For these values of a and {3.0 X  1~0 y + 160 Z j V l + ( ~X  h + ~ Z) V2 + ( ~~X + 1~30 '!/  11~0 Z )v3 DISCUSSION AND DISCOVERY 01. . Otherwise. . a busis for V can be obtained by < adding appropriate vedors from V to the set S.100  ~ 0.cos2 {3) = . 04. then V = W . from Theorem 7. r. then S is a busis for R n and t hus must contain exact ly n vectors. if and only if o =/= ±/3 + br where k = 0.
.. v 2. Since we know that dim(R"} = n. . such that c 1 x + c2 Ax = 0. 0. cdc2.~ must be basis for W. . for each = 1. Then x :f.e. . un} where P6. 1 vn} is a basis for Rn. n. v3} is a linearly independent set. fork any integer between 1 and n. .::. C:lJ c3 are scalars such that c1u 1 + CzUz + C3U3 = 0. Thus it makes sense to define a transformation T: Rn 1 Rn by setting T(x) = T{ctVl + CzVz + . it suffices to show that the vectors T(vt) 1 T(v2).sely. The subspace V = {0} has dimension 0.. vz..T(vn) are )inearly independent.. basis for W and S C V.>.2. v2. Vn are linearly independent it follows from this that c 1 = c2 = ···=en= 0. thus there exist scaJars CJ a:. + d. and it follows that .. + CnYn) =clwl + c2w2 + . a basis for W could be obtained by adding additional vectors from W to Sand this would violate the assumption that dim(V) = dim(W). where V C Wand dim(V) = dim(W).. and Ax = >. Let {v1. . Finally. vn}· Since every vector in Rn can be written as a Hnearcombinationofvectors in S. the transformation T has the property that Tv...l n CjWj +L j=l n d. Ax} has dimension 1.2. For example. vk} is a basis for V and so dim{V) = k.d c2 . vk} is a basis for V.x} = span{x} and so span{x. . Finally. . and it is clear from the defining formula that T is uniquely determined by this property. Then Sis a linearly independent set in W. (b) If { v1. i. . every vector x in Rn can he expressed as 1i linear combination x = c1 v 1 + czv 2 + · · · + c.. Yn span Rn and are linearly independent. let V = spa. if Ct v1 + cz v2 + ·· · +env n = 0 then Ct = Cz =···=en= 0. .Ax} has dimension 1. Vz.x where >. then so is { u 1.. for if c2 = 0 then since x =I= 0 we would have c1 = 0 also. It follows that c 1 = c2 = C3 = 0. + CnWn It is easy to check that Tis linear. uz. suppose tha.. Cn. and this shows that Ut. It follows that span{x. Moreover. c~h .. Then (c1 + Cz + c3)v1 + (cz + c3)v2 + c3v3 = 0 and.t span{x. Suppose c 1. 0 and Ax are linearly dependent. .v1 . Suppose x is an eigenvector of A. LetS= {v1o v2. since {Yt. from the uniqueness.. we have span(S) = Rn. us are linearly independent. .. if x n n = I: CjVj andy= E d. P8. .WORKING WITH PROOFS 203 P2. T(vz). .. v2.... = w. . 1 vn} be a basis for Rn and. S = {vl. .. Thus T(v1 ). Thus Ax = >. Then the vectors x f. = Tx + Ty j and so T is additive.x for some scalar >.. vn} is a basis for R"'. ' P7.n{x. v2. we must have c 1 + c2 + c3 = c2 + c 3 = c3 = 0... ..n{v1. . v2. Vn} is a basis for R 1".)vJ) j=l = L (cj + dj)wj = L j=l j.) are linearly independent.. since S is a.. from Theorem 7.. we must have c1v1 + czvz + · · · + CnVn = 0.. then j=l j=l n n T(x + y) = T(l= (c.. . We note further that c2 =I= 0. not both zero.. .. since Tis onetoone. · P3. . Since B = {v 1 . Vz..w. • • • 1 vk}· Then S = {vt.. P 5. (a) .. SupposeS= {vt...) = 0 then T{c1v1 + CzVz + · · · + CnVn) = CtT(vt} + CzT(vz) + · · · + CnT(vn) = 0 and so. . P4. Since VJt v2. Vz. Since { u 1 1 Uz 1 U3} has the correct number of elements. Conv\. Otherwise.. u2. This follows from the fact that if ctT(vl) + CzT(vz) + · · · + c 11T(v.. . we need only show that the vectors are linearly independent. T(v. Vn for exactly one choice of scalars c 1 .. we must have W = span(S) C V and soW= V. Thus the vectors Vt..Ax} = spa. . . . ..
y =1ft.~.y. y = s.2) or. Here W J. Similarly. S. 1}. z =tor (x.z) S.. The line y = 2.ngs to the orthogonal complement of W = spa. Thus j 0 . 2 1 Note the vector w is parallel to the one obtained in our first solution. A normal vector to this plane is n = (1. i. u · v 2 = 0. corresponds to the line y = ~x or. is the line which is normal to the plane x .(2)(2) = 0.system A general solution of this system is given by x = y= z =tor (. v3}. !t. We have u · v 1 = ( 1)(6) + (1)(2) + (0)(7) ·1.~1 . is the line through the origin that is parallel to {7. " 4. equivalently.1). 4). and v 3 . 2) or.e.¥ .1). thus a vector w = (x. 1. .l is the line through the origin that is parallel to the vector (. i. Thus u is orthogonal to any linear combination of the vectors v 1 . z) is in WJ. v 2 . Parametric equations for this plane are: = X = ~8  2t.1 kl = i 5 ll j + 2k = (1. 1. . v 2 } is the solution set of the system A general solution of this system is given by x = ~t. and u · v 3 = 0. y. 3). 3. = t( ~. is the line tluough the origin that is parallel to the vector (!. u · v 2 = 0 and u · v3 = 0.. u bek>. thus parametric equations for WJ. 2).3 1 . Thus A vector that is orthogonal to both v 1 and v 2 is w = v1 x Vz = [ 1i 0 j 1 I<] .2. 2). We have u · v 1 = (0)(4) + (2)(1} + (1)(2) + (2)(2) = 0. = [I 2 i 1 1). parallel to(~ . equivalently.3z = 0 8Jld pa:3Sea through the origin. 1 1 Note the vector w is parallel to the one obtained In our first solution.3 = 7i + j + 21<: = ( 7. 5. Le. Jt. Thus SJ. Thus u is orthogonal to any linear combination of the vectors vlt v 2 . 6.204 Chapter 7 EXERCISE SET 7. . v 2 } is the solution set of the . The orthogonal complement of S = {v 1 . 1). 1). to vectors of the form w = s(2. y.n{vl> v2.. are given by: X = t. equivalently.1). is the line through the origin that is parallel to (1. Thus 51.5y + 4z = 0.x.~ . u is in the orthogonal complement ofW. and v 3 .x corresponds to vectors of the form u = t{1. ~..2y . Z = . 11. !.l. Z =t . 5. if and only if u · w 2x.. 2)} .3t 7.2).. y = 2t. The line W corresponds to S<:alar multiples of the vedor u = (2. W = span{(l. Thus WJ.11 . 1 A vector that is orthogonal to both v 1 and v 2 is w = v 1 x v 2 .z} = t(~ . The orthogonal complement of S = {v 1 .e. 2. parallel to (j.
We have w. 0. '· y •• • ' \. 0. The reduced row echelon form of A is l ~] 0 1 1 0 R= 0 0 0 [0 0 i~] 0 0 1 0 0 0 Thus the vectors w 1 = (1.L = null(A) = null(U).~t.y.19) form a basis for the row space o( A or. . 1. we c~~!h. 19. 0..1. 1) + s{O. 1) forms a basis for w.. i. for the subspace W spanned by the given vectors. equivalently. basis for the row space of A o r. the vector u = (!.· '.l R= Thus the vectors w1 = (1. 0. 1..l.q EXERCISE SET 7. .l. 0. .1. The reduced row echelon form of A is 0 16] 1 0 19 .1. and w 3 = (0. Let A be t he matr ix having the given vectors as its rows. 0. if and only if w is of the Torm '·~ \ .l = nuii(. t). z = r . Let A be the matrix having the given vectors as its rows. i).W. 1. 1) forms a basis for W .y + z = 0. from the above. i.1) and w 2 = (0. Here W is the line of intersection of the planes x + y + z = 0 and x . 0. We have w.t = row(A) . We have Wl.A) = null(U}..t ~ .1 follows that ~ea"Or w (x. t.l = null(A) = null(R).:" 1 .. equivalently.tli==9..O. I "". Thus.It} ~~ . 0. y == s. ~. 16) and w2 == (0. ..~ !~.i.\ ~·. 0 1 0 0 0 0 1 Thus the vect orl. Parametric equations for this plane are given by x = r.l consists of all vect O Of the form X = (Q. 0. we conclude that consists of all vectors of the form x = (~t. from the above. This matrix can be row reduced to • u ~ [~ ! ~] w. Let A bt! llH~ •w•L ix having lhe given vect ors as ils rows. 0). cpnsists 1 of all vectors of the form~). s . t) . equivalently..(.0. 0. 0) form a. the vector U = (0. • w. 0. The reduced row echelon form of A is · . 0. . t' Let A be the ma trix having the given vectors as its rows. ·······. from the above. Thus. . for the s ubspace W s panned by the given vectors. tho vector u = (0.1. 0. Parametric equatip_ns '\ for this line are x = ~hus W corresponds to vectors of the form!.e.. ~ \ . = n._.. if and only if x = z. 0 Thus the vectors w 1 = (1.e.. w 1 = (2.~.l. 0).t = row(A). = rvw(A). 0. t).e. we conclude that consists of all vectors of the form x = (0. 1. i.e.l. . i.0) 0. 0). and w3 = (0.. the vector u = (16. for the space W spanned by the given vectors. 1. 11. i.3 I 205 't \ 8.. w ~ .. from the above. 1) formS a basis for W. r) = r(l . 0) form a. w(A). Thus.. 1) foriDB a basis for W .t. we conclude that W. We have W. '. 0.a. w 2 = (0.!! = = (r. ···. Thus . z) 1s m w1.l = null(A) = null(R). l 0.. .e.·t)..1) form a basis for the row space of A or. equivalently. for the space W spanned by the given vectors.l. 1. w~ = (0. rS 12. basis for the row space of A or.
Thus. 0. 0. 3. 15. 0).t. 0. vectors x = (x. i..l consists of all vectors of the form x = ( 2t.l.0. vectors x = (x1.e. Then a (column) vector b is a linear combination of v1. z} satisfying 16x + 19y t z = 0. for the space W spanned by the given vectoiS. w2 = (0. basis for W J. 1. 17. v2. 0.. i. W4 = (0.4 21 03 1 5 7l bl] . 0. from the above.0..s its rows.J.1) form a basis for the row space of A or....J.0. for the space W spanned by the given vectors. 7. we conclude that W. consists of all vectors of the form x = (243t. In Exercise 9 we found that the vectoru = (16.0). In Exercise 11 we found that the vector u = (0. 0. the vector u = (243. The reduced row echelon form of A is 1 0 0 0 0 0 0 R= 1 0 0 0 0 243 0 57 0 7 1 2 0 0 Thus the vectors w1 = (1.e. = row(A).L = nuli(A) null(R). x2.~. thus W W . 1) forms a basis for W. equivalently. vectors x = (x.J.. 1) forms a.l consists of the set of all vectors which are orthogonal to u.e. 1. The augmented matrix of this system is = [ and a row echelon form for this matrix is 1 4 6! 02 3 . 0.e. Let A be the matrix having the given vectors as· its rows. from the above. 2) form a basis for the row space of A or. 0.t consists of the set of all vectors which are orthogonal to u. . We h ave Wi = row(A).. t. 0) and W4 = (0.. = 16. We have W.4.0). 0. Let A be the matrix having the given vectors s. = = 14. In Exercise 10 we found that the vector u = (!. i. 4t.0. 0 1 0 0 ! Thus the vectors w 1 = (1.J. 4. 1) forms a basis for W J. x2. thus W w. 19. 7). 2.2. i. Solution 1. 7t. t}. thus W = W l. 0. 1. z) satisfying x + 2z = 0. vectors x = (x 1. 1. 1) forms a basis for w. 19.57. consists of the set of all vectors which are orthogonal to u . equivalently..2x4 = 0. . w2 (0. 0) forms a basis for W i. 0. 0. Let A be the matrix having the given ~ctors as its columns. Thus. 1.e. x4) satisfying 3xz. thus W = w. O. 243). The reduced row echelon form of A is R= [~ ~ ~] .t. 0. 57t. 18.l.2t. 1.l = null(A) = null(R).e. W 3 = (0. i. W3 = (0.1. v3 if and only if the linear system Ax= b is consistent. we conclude that W. X3.3. the vector u = (2. consists of the set of all vectors which are orthogonal to u.206 Chapter7 13. In Exercise 12 we found that the vector u = (0.0. X4) satisfying X3 + X4 = 0. y..t. y.. 57). x3. i. 3t. 1) forms a basis for W .L.
1) forms a basis for w1.19 ba then further b1 From this we. Let .. and that b if and only if b1 + 2b3 = 0. b3) is in lV if and only if 16b1 + 19b:~ + b3 = 0.e. Solution 3.EXERCISE SET 7. b2. The matrix reduced to VI] [ 5 4 4 \ v2 = . From this we conclude that W = spau{v 1. b3) lies in the space spanned by the given vectors iJ and only if b1 + 2b3 = 0. b2.J.e. v 2 . v3) has dimension 2. _ 1_::!! __ 2 _ [~1 b bt b2 b3 1 3] can be row reduced to [ _o__ . b3} is in W Thus Solution 3.trix having the given vect ors ns its columns. if and only if u · b =0.4 be the ma. and b2 0 l 016] 1 . v2 .~ .l. v3} has dimension 2. Thus b = (bt. In Exercise 10 we fOIJnd t hat the vector u = I ul 1) forms a basis for b = (bt . Solution 2. b2. if and only if u · b = 0.linear combination of v 1 . Then a (column) vector b is a . b3 } is in W ::: W l.i 1 0 _! b. reduced to can be row reduced to and then further [ l ~~jb. = (b t . The a ugmented matrix of this system is ·· · and a row echelon form for this matrix is T hus a vector b = (b 1 .. v 3 if and only if the linear system Ax = b is consistent.. In Exercise 9 we found that the vector u = (16. if and only if Hib1 + 19b2 + b3 = 0.l.. Solution l. v 2. b3 ) is in W = W. conclude that W = span{ v 1. i. n wl. if and only if bt + 2b3 = 0.. 19. and that b = (bt. i. b2.3 207 ~ 1 b3 ) From this we conclude that b = ( o1 if 16b1 + 19bz + b3 = 0.s'___ ~ . 20. .. 1 lies in the space spanned by the given vectors if and only Solution 2..
and then further reduced to l 0 () 0 0 0 0 0 0 0 1 0 0 0 1 0 0 From t his we conclude t hat W = span{vlo v z. Solution 1. The augmented mat.b3) is in W if and only if . b:~. = 0.. v 4 if and only if the linear system Ax ::::: b is consistent. 0. " Solutwn 3. if and only if . bz .... Thus b = (b 1 .hat b = (b1.WL1 if and o nly if u · b = 0. v 3 .0! b2 ba b4 ~ : ~ ~1. b3) lies in the space spanned by the given vectors if and only if b3 + b4 = 0. Tn E. The matrix v3 b bl b4 02 b3 ~ ~ ! ~] 4 2 1 . Solution 1.rix of this system is [~ and a row echelon form for this matrix is ] 2 2 3 3 5 6 4 3 1 9 6 I I b. bt 1 ba ] . 8olution 2. .bz.e.b3 + b4 = 0. 22. The augmented matrix of this system is [~ and a row reduced form for this matrix is 0 1 0 2 0 2 4 0 1 1 2 2 1 1 b2 b.2b4 = 0. b2 . b4 ) 1~0 1 ~1 9 23 ! 5bl ~b32b:l ~ i! 0 0 l 0 0 t:!bl+ i b2 0 : ! b4 .xcrcise 1J we found t.nd only if the linear system Ax = b is consistent. ~ . b3 . 1. Let A be the matrix having the given vectors as its columns.] b2 03 I I I I 04 Thus b = (b.! b2 I 3 2 lies in the space spanned by t he given vectors i£ and only )f 3~. l ) forms a basis for W . v 1 .208 Chap1er 7 21. v 2 .hat ~be vector u = {0.b3 + b.L. v:l. Let A be the matrix having the gi ven vectors as its columns. Then a (column) vector b is a linear combination <>f. and t.1 can be row reduced to [ 0 0 0 0 ·. · ~· + b2 ~b3 + b4 Thus a vector b = (b1 .] b3 b4 2 z 2' 1 0 0 0 2: 0: 4: I b. i. Then a (column) vector b is a linear combination of these vectors if a. v 4 } has dimension 3. b~) is in IV =.
nd b :t lie in span{v 1 .1 . . v 3 . .3 . 0 = Solution 3. v3...] 0 0 1 2 0 1 1 3 1 2: I I I 2 I I 0 2 2 1 .t.1 1 1 I I I I ' I 2:' 1 5: 5 2 4: . but b s does not.. b2. In Exercise 12 we foun d that t he vector u = (0. can be row reduced to Solution 2.2b4 0. w .2 : ~~ t 1 I' 3 2 1 0 l 2 1 0 ' 1 : ' 7 : 1 : 6 I I 2 I' 2 : 4 1: 2:1 and the reduced row echelon form of this matrix is 1 0 0 0 0 0 1 0 0 0 0 0 1 0: _! : 0 1 I I I 1) 1 1 o: t 0 0 o: o:' t t 5 6 t .~. v 2.2 2: . b4 ) is in W if and only if 3b2 . a nd that b = (bl> bz.t 5 ' 1t § I} I 0 0 0 0 1 0 0 From this we conclude t ha t t he vectors b 1 and b 2 lie in span{v1 . but b 3 does not.2b 4 0. 1) forms a basis for W . and then further b bt In b 1 0 0 0 b1 0 0 0 0  reduced to 0 0 0 1 0 0 2 0 3 1 0 0 0 0 0 ~~ +b4 From this we conclude that W = span{ v 1 .t if and only if u · b = ~b2 + b4 0. b~ ) is in W . v 4 } .~. i. v 4 } has dimension 3. v 3 . b3 . 0. v 4 I ht l b z l b "J is ··2 l . The matrix l~~ ~Y2 2 . = = 23.1 1 0 and the reduced row echelon form of this matrix is r~ 0 l 0 l~ 2 4.2 J 0 1 0 1 a 2 o o o .L 0 2 : 1: o:2 I 3 • 3 : .3 209 l 0 0 0 vl] r2 4 s 6 ] [ ·= l} . matrix jv 1 Vz '13 v 4 I b 1 I b2 I b s l is 1 1 0 1 0 ._ _. b3. v 4 }. vz . The augmentl~d 0 1 0 0 0 0 1 I' 1: l ' 't 0 1 0 0 0 0 1 0 0 0 o: l: 2 0 0 0 0 From this we conclude Uwt the vectors h 1 C~. if and only if 3b2 .L. The augmented matrix lv 1 v2 V:. vz. Thus b = {bJ .EXERCISE SET 7.e.
0. ~).0. 1. 0). follows that ruw(A) = row(B). 1. 1. r~ = (0.0. 0. 1.~). r2 = (0.3.0) . 1.n inspection of R that the null space of A (solutions of Ax= 0) consists of vectors of the form = x = s( . O. V. 0). a. = ( l . T hus row(A) and onll(A) are orthogonal subspaces of R" .0.3 . The vectors r. .<' of A \Ve also conclude from nn inspection of R that the null space of A (solutions of Ax ::: O) consists of vectors of the form n.hat r. · Dj = 0 for a. 0) and n2 = (4.0)..0. a.3. 0. 0.:k t. 2. 1.0. 0) Thus the vectors n 1 = ( . 1) form a basis for the null s pace of A. form a basis for the row space of A. It 30.~.1.hnt row(A) = row(B).O.ll i.1).nd r3 = (0.0. . .0).0. The vectors r1 = (1.subspat:es of R 6 • 26 . basis for t he row space of A (see Exercise 25).3.0) + t{4. r2 = (0.).1 . ·. ~} for m a. ! . r 2 = (0. Thus the vectors n 1 = ( · ~. 4. The reduced row echelon forms of A and B are r~ ~ ~ ~] and [~ ~ ~ ~] respectively.4. ~ space of A (see Exercise 26). 0. 27.0. r 3 = (O. r3 = (0. } . l. basis for t he row 29. 0 0 0 1 vectors r l :::: (1. !.O. 0. The reduced row ec~1elon forms of A and B are rl~ ~ ~ ~] and r~ ~ ~ ~] respectively. 0.0. The reduced row echelon form of the matrix A is R= r~~~ ~ ~~J 0 0 0 0 1 0 0 0 0 0 0 1 Thus the vectors r 1 = (1.0 .0). 0. 1. 1. r4 (0.nd TJ = (0.0. r 3 = (0.0.0. r2 = (0.j 1 1 R= 0 0 0 0 1 !] 5 0 5 3 0 0 0 Thus the vectors f ) = (1. 0. 2.1. 0. It is easy to che<. 0. 0. follows t. 0. ~ . 2. 0. 0.trix .0. · n 1 = 0 for all i. n form ~ ~~~ Thus the a basis for both of the row spaces.2.O).0.0. j . r4 = (0.: (1.~. 0. 0. 28. 0. l.210 Chapter7 25.0 .3) form a basis for bot~ of the row spaces.3.0. 0) form a basis for the null space of A. n 2 =: ( ~. ~) form a basis for the row ~par. r2 = (0.1) f<'l rm a.0. j. The rf>(\\lc~C'd row (>Chf'lon form o f the m11. We also conclude from a. 0.4 is 1 0 1 0 0 4 2 t .0.4. r 2 = (0. It is easy to check that r. 0.0. 1.~. ~ ..0. 1. Thus row( A) and null(A) are orthogonaf. 00 13 OQI J Thus the 0 0 0 0 vectors r 1 :::.0. 0. It . 1. 2.~. 0). 0).0.5).
T his statement is t. (d) False. 1.y matrix. 0. and so the matrix A = [ 1 4 1 0] 2 0 0 1 has the property that null (A) = row{A)J. ( b ) The matrix B = lr~ ~ ~1 0 0 I has the specified n ull space n.3) that SJ. 4.o points on lht} xyplane. 1).Onsists of all ' vectors tha t are of the form which corresponds t. where oo < s. t < oo.nd column space. T his follows from the fact (T heorem 7. ~ ~] and x = [:]. F.or example A= [~ ~] and B spaces. = [~ ~] have the same row space but ~ifferent column . t he cokm n sp:. (a) True.e. if m = n and A is invert ible. is a subspace of Rn . (e) True. DISCUSSION AND DISCOVERY D 1. The specified condition implies that row(A) ~ row( B ) fro m which it follows that mJll(A) = row(A)J. .DISCUSSION AND DISCOVERY 211 31. then V J. thus row(A ) = col(A) = R" .sis for the null space of B . (d) False. 1f A is invertible.:. ( b ) Fah>e.l.. Thus the null sp•ce of A corresponds t o point. Let B be the matrix having the given vectors as its rows.hanrl. 0) and w2 = (2. thus the red11ced row echelon form is t he ident. 1) form a ha. D2. then the rows of A a nd the columns of A each form a basis for W'.nk n . t he opposite is true: lf W is a subspace of V. is a subspace of W . 0. 0.rue if and only if the nonzero rows o f A are linear. then there can be no zero rows in an echelon form for A. = null( B).J. (a) If A= [~. = row(B). then row(A) = Rn and null(A) = {0}. =row(B). 2 row(B)L = null(B) but in general the inclus ion can be proper. If A ha. (a) False. then Ax = 0 if and only if x = y 0. (c) True. T hus the vectors w 1 = (1. For example. " ( b) T rue. if Lhcre a.it. 1. ln fact.0) + t~2.s ru. 4.l.tll echelo n form.ly inde pendent.cc cf A t. t hen E is invertible and so EA x = 0 if a nd only if Ax= 0 . (c) False.. i. since every vector that is orthogona l to V will also be orthogonal to W . On the othe:.s on t he zaxis. 1f E is a11 elementary matrix .!.re no additional zero rows in i. 0. = 32.3. The reduced row echelon form of B is R= [~ 0 1 4 ~] a nd from this it follows that a general solution of Bx= 0 is given by x = s (1 .
WORKING WITH PROOFS Pl. If E i:. since there are exactly n of them. ) 1) •• • •• . . = 0. D3. then (PA)x = P(Ax) = 0 if and only if A x= 0 . . On the other hancl . 09. (a) Since S has equation y = 3x. P2. and (span(S)J. then w.5t of A must be a scalar multiple of the vector {3. it fo llows that any li ne. ...3.k} and j E {k + 1. this is not possible..t. WehaveAA. i. Thus (a) holds. This shows that row(A) ~ row{B). k + 2.k columns of A.1) 0 0 ·· · l ticular. = a. is the plane through the origin that is perpendicular to that line. Thus each row 3t .~x. The row vectors of an invertible matrix are linearly independent and so.r c.5y = 0.(A . From Theorem 7.. 1 .8. .3. . DB.. it follows that PA and A. )l.J.) ] [ 1 0 ··· 0] . has equation y = .1 = .. = [Oijl· In par r 11 (A) · C! (A. Thus rank(PA) = dim(row(PA)) = dirn(row(A)) =rank( A).. and Sl. invertible.212 Chapter 7 (e) True.L. The row space of an invertible n x n matrix is all of Rn since its rows form a ba.I.n(S) has equation y = 2x. two matrices arc innrtiblc. If null{ A) is the line 3x. The null space of A corresponds to the kernel of TA. Conversely. . .n(S) is a 1 P5. = a.1 ) r 2{A)·c2(A[ 1 = span{S) since spa.. . This is in fact true for any invertible matrix E. The mill ~pace of the matri. and a si1uilar argument shows that row(B) ~ row(A).1 ) r n{A) • C2{A  1 ) · •· r n(A) · Cn(A .x A = {~ ~] is the line 3x + y = 0 . r 2(A)·cn(A. thus in each cnse the null spuce is {0}. then row(A) = null( A ). thus it is always true that row(EA) ~ row(A) . . 2.ombination of the rows of A can be expressed as a linear combination of the rows of B . and so nullity{PA) = nullity( A). Then. = (span{S)l. = span(S) rJ(A)·cl(A. Sl.t. . P3. The first. lt is clear. l = span(S) has equation y = 2x . suppose that (c) holds. D7..1) This shows that the first k rows of A and the last n . that (a) implies (c). th~n i < j and so ": ( A)· c. .1 ) 0 1 ·· · 0 = . and the column space of A corresponds to the range of T A .l subspace.4 we have SJ. = S has equation y = 3x. From T heorem 7. they form a basis for R" . 2)}. lf W D6.5). (a) If nuU( A) is a line through the origin. . = span(S)l.l is the plane through the origin thnt is perpendicular to that line.L =span{a} is the !dimensional subspace spanned by the vector a...). rt(A)·cn(A. D4. P4. then null( AT)= col(A).. thus the matrices PA and A have the same null space. _.e . Thus (S l. . the row space of a singular matrix is a proper subspace of Rn since its rows are linearly dependent and do not s pan all of Rn. .he rows of A. from the definition of row space. if it= {1. If Pis invertible. )l. then spa.a. n}.~ x ann Sl.• A i'i of the form A= [:ls !Js].1 (EA) and so we also have row{A) ~ row(EA). is the line 5x + 3y = 0 . . (b) If S == { ( 1. . then A = E. then row(A) = null(A)l.) rl(A)·c:J(Ar'l(A)·ct(A. has equation y = . . thus Sl.sis for R . D5. since <::ach row of A is a linear combination of the rows of B. . have also have the same row space. (b) If col(A) is a line through the origin.1 are orthogonal. ·No... The rows of EA are linear combinations of t. and the null space of A = {~ ~] is all of R 2 .
· 1 2 ]} 1. . and a general solution of Ax= 0 iR given by x <= s(.1.0} + t(~ .helon form for A is Thus rallk(A) = 2. 0 .~ ·. Thus rank(A) + nullity(A) = 2 + 3 = 0 2 5. It fo llows that n111lity(A) = 3.EXERCISE SET 7. The reduced row ~chelon form for A is 1 0 3 . 1) nul. [~ 4. Thus rank (A) + nullity(A) = 2 + 2 = 4 the number of columns of A. [~ 3.1. l .y(A) = 2.1._______ .0. 0. 0) +t(~.2. T he reduced row echelon form for A . 1).ti.6 5 n I 4 0 0 0 0 0 1 0 0 0 0 0 0 0 0 0 0 0 Thus rank(A) = 3. a nd a general solution of Ax= 0 is given by x = (1t. It follows t hat nullit.. .y(A) = 2. · 2.0. t) = t(16.0. 19. t) = s(O. 0 1 0 l l tl :! 7 0 0 T hus rank(A) = 2. 0.4 1. and a general solution of Ax= 0 is given by It follows that fi. The reduced row echelon form for A is [~ 2.. 19t. The reduced row echelon form for A is ~ ~] 0 0 Thus rank(A) = 1..o.~ .. 1 ) . It foll ows that nullity(A) = 1. 1) 5.4 213 EXERCISE SET 7. l. s. 0) + s(. .. Thus rank( A) + nullity(A) = 2 + 1 = 3 the number of columns of A. and a general solution of Ax= 0 is given by x = ( 16t. and a general solution of Ax= 0 is given by ] l 0 0 1 ~ ~ ~~ ~ . It. The reduced r ow ~r.is 0 16] 1 19 0 0 Thus rank(A) = 2. 1).1.. 0 . x ""' s(2. nullity(A) = 2. Thus rank(A} + nullHy(A) ""'3 + 2 = 5.. follows t hat. 1. 0.. ~ .J 2 x = r( 1 . Thus rank(A) + nullity(A) = 1 + 2 = 3 t he number of columns of A .. 1.O) + t(~. 0) + t( 1 .  1.
(b) Tf A is a 7 x 4 matrix having nullity 2.1 free parameters in a general solution of Ax .~. (b) If A is an 8 x 6 matrix having nullity 3. tn.l. 0. Thus rank(A) = 2.O).0. (b) If A is a. ~ 1 }. 0.re 3 pivot variables and 5 free parameters in a general solution of Ax = 0 ..0} + s( j.O.0.1. 0.) is 0. Thus there a.0 8.3 = ·1 Thus there are 3 pivot variables and 4 free paramettrs in a genera] solution of Ax . there are 3 pivot variables and 3 free parameters in a general. If A is a 7 x 9 matrix having rank 5.t. A.. then its nulHty must be 8. 0. then the largest possible value for rank(A) is 2 and the smallest possible value for nullity(A) is 4 (c) If A 1 a 5 x 5 matrix. 0. t.0) + v(~. Thus there are 2 pivot variables and 2 free parameters in a general solution of Ax = 0..0. Thus there are 5 pivot variables and t free parameters in a general solution of Ax = 0 . ~. then the largest possible value for rank(A) is 3 and the smallest possible valur for null1ty( '\) is 2 If A is a 4 x 4 mn.0) ll follows + u(~. ~~ 0. 1.2 = 4 Thus there are 2 pivot variables and . ~. then A hn.0.214 Chapter 7 redu~P. form a basts for R 4 v2=(0. The row echelon form for A is [~ X = 0 1 3 3 1 1 3 7 2 3 4 2 3 4 0 0 0 0 3 3 0 0 0 0 3 0 0 l ~] 7. w4 =(~. t hen the largest possible value ior ranK( A) IS 4 and the smallest possible value for nulltty(A.1) that nullity( A)= 5. then its nullity must be 9 . then the largest possible value for rank( . 9. 0) + t( j.4) is 4 and the smallest possible s value fm nulhty(A) is 0. 0.trix.0.5). and a gene1al solution of Ax = 0 is given by r( s.1 has rank 3 and nulhty 7 .~. 2 x 6 matrix.d 6. then its rank must be 4 . ~. Let A be the 2 x 4 matnx havmg v 1 and v 2 as its rows Then the reduced row echelon fonn of A is 0 ~ 3 " + and so a g<>n('ral solulton of Ax= 0 is given by x = s(~ .5 = 4.l) is 3 and the smallest possible value for nulllty(A) is 0 If A is a 3 x 5 :natrix. Thus the vectors WJ=(1.3 = 5. (c) lf A is a 6 x G matrix whose row echelon forms have 2 nonz!'rO rows.3. 1). 0. 0) Vt= (·! . 1.1.solution of Ax = 0 (c) If A is a i x 7 matriX ''hose row echelon forms ha\f' 3 nonzero rows then . (a) lf A is a 5 x 8 matrix having rank 3. then its rank must be 63 = 3 Thu:. l . 0.2 = 2. If 4 1 a 6 x I rn::~. then the largest posstble value for rank(A) ts 5 and the smaiJest possible s value for nulltty(A) is 0 (a) 11.. 0. 10.O). 0. then lhe largest possible value for rank(.0 (a) (a) (b) (c) [fA i~ a 5 x 3 rnatnx.s rank 2 and nullity 6.l) .lrix. 0.. Thus rank{A) + nullity(A) = 2 + 5 = 7. 0.
then there are two nonzero rows and ra. T hen the reduced row echelon form of A is 19 0 0 .8 0 0 15 43 . and v:l as its row:>.20 25 28 35 ~: 2~] is a symmetric matrix. then the la.2.01 5) = (~. 16 20 .] 1 3 3 1 4. 1.. then there are three nonzero rows and rank(A) = 3. 3 l 1 • 3 1 1 0 0 16.0). Let A be the 3 x 5 matrix having v 1 .nk(A) = 2. 3.2.6 12 6 ~ _ . 3.\ = [ ] ~ 6 10 tB:.. }.0.16 2:l . W3 = ( . W4 3 V3 15 8 = (4 1 1.t2 0 l [1 t. = ( ~ . 5).EXERCISE SET 7. lJ = [~ .nk 1 with l1 = [_~ ~:1 = (.) 6 2 3 .21. 1) form a basis for R5 • 13.~( 1i ._ +:1[1 : : :.4 215 12. v 2 . ~' l. 1i. 9 2 2] . (c) T his matdx i< oh ank 1 with A = H~ .: : ] 8 . . If t = 2.~ . . 0. Thus the vectors v2 = (1.i·Z . . 1).1 1 3 5 6 . (a) This matrix is of ra.nk 2. The matrix A can be row reduced to upper triangular form as follows: t 1 t+ 0 1 . 1 7.~. 0) + t (. 0.~1{1 7] = u vT. 3.3 4) = UVT 15.ts a sv mmetnc rnat nx.~ . . (a) This matrix is of re.21 6 3 3 .¥. 1~.8 ¥ ] 15 8 43 3 and so a general solution of Ax= 0 isx Vt = .t) ] If t = 1.t (2+ t)(l . 1) . (b) This matrix ~ of '""k 1 with A= L~ 6 10 12 .8] [ 2 I 9r~3 1 3 . 2..1 0 1 t 1. If t :f: lor . 3.·l ~= = [ ~1[1 6 3[ = uv'''. (h) T his matrix is of rank 2. (c) This matrix is of rank 1 with .tter has only one nonzero row and so ran~(A) = 1.
and sox= t. Similar!_\. If B 1<:: obtainerl from A by changing only one entry.z) 20. The subspace n·.. then A . l r tl r 3 3 3 3. 19. we have A2 x = (u · v )Ax = (u · v)>.. The suh. fhus the vectors (1. z = ty = t3 . [:r 1 l/ z] has rank 1 then the first row must be a scalar multiple of the second row. x.z) whlch are orthogonal to (2.216 Chapter 7 18. 3.A has rank 1. lhcn A 2 = ( uvT)( uvT) == u (vTu)vT = (v · u ) uvT 2 2 24 . tx = t2 .A is of the form uvT and 8 =A+ uvT. Thus Ax= 0 if Bnd only if v · x = 0.spare W.) = I + 2 = 3 If u and v nre nonzc>ro column vectors and A = uvr.1) forms a basis for W. 2. .l.l.x.1.y.3z = 0 A gt•neral !iolution of the latter ts given by x = (s. 4) form a basis for W = row(A). v )A. dim(W)+dim(W). and rank(A) = 3. This shows that ker (T) = v.of A is thus the vt•ctors Ax w1 = (1. 1. 3.• if and only if xis orthogonal to v . For all other If lhe matrix Thus (x. 1) l\ wtwre .B) = 1.. 2s. (b) If B is obtained from A by changing only one column (or one row). = = 23. :f nit.B) = 1. (c) 11 B 1s obtamed from A in the specified manner.l. then A . 1) form dim(W) 22 . ( b ) If Ax= >. t) = s(1. since A 2 = (u · v )A. We have =0 tf and only if Rx = 0 . (a) (a) If A= uvr.y. = 2. Finally.l. and so rank(A. : Lhus the vf'r. and so rank{A .x for some nonzero vector x. basis for w. then t he third row of the latter matrix is a zero row and rank( A) values of t there are no zero rows. 2. Let A be the 3 x 3 matrix having v 2 . = . has dimension 1.3t. which satisfy the equation 2x. then A x = >. On the other hand.3} .. 21. The matrix A can be row red11ced as follows: A= [~ 1 3 6 2 + 3 t 1] Y X [i 2 + 0 0 3 1 3 6 _. a. v 3 as ils rows. 4. i.y).3).nd if>.x . and a general solution of the latter is given by x  Stl ~ = [ 4 = t [5] .l. i.l is the hyperplane cons1$tmg of all vectors x = (x. follows that>.e. 1.B has only one nonzero column (or row) . x ) u . =row( A) = row(R ).3t 2+3t 1 + t 2 + 0 3 0 0 t ] 2+ 3t (t.oo < t < oo.tor x 1 '>+I== 3 (5. 0) and (0.2 = (u · v )>. . .t' and we have + dim(lV. consisting of all vector!'J of the form x = t(2.= u · v . = (u. 5) and w2 = (0.y. then Ax = (uvT)x u (vrx) (v. . 0. and so ran(T) =span{ u }..l){2t . Then B . The reduced row echelon form. I he range ofT conststs of all vectors of the form Ax= (v · x }u . Thus B .3) If t = 1 or t = ~. O) + t(O. = t(l.B has only one nonzero entry (hence only one nonzero row).e. Thus >. y = VI.
DISCUSSION AND DISCOVERY 217 (c) If A is any square matrix. (b) False. If. If A is a 5 x 3 matrix. und (assuming A =/= 0) the number of parameters in a general solution of Ax= 0 is a. then the number of leading l's in the reduced row echelon form is at most 3. For example. If A is a 3 x 5 matrix. D7. either r .ors. 4.vector x such tha. or 0.2 or s. (c) False. if m = 1 then rank( A)= 1 and nullity( A) = n .1. D6. 4. and the corresponding values for nufHlypt)mc&."<.he nullity is n .1. then nullity(A) = 1 and so rank(A) ::: n . Assuming A =I 0. thPn ·rank(A) :::. then the rank will not be increased . thus it is not possible to have rank( A) and nullity(A) both C'qual t. (a) True. = 1 is an eigenvalue of A.t most 4. 1. the rank of A= u vT is 1.::: 1 and so the number of free parameters in a general solution of Ax = 0 is at most 2. 3. Assuming u and v are nonzero vect. we have rank(A) :. thus t. DISCUSSION AND DISCOVERY Dl. D9. or 3. D8. 1. this is equivalent to saying t hat>. by Theorem 7. then the reduced row echelon from of A is r~ 0 0 0 1 1 0 0 L. 1. Let A be the s tandard matrix ofT.he corresponding values for nullity(A) are 5. or 3. Otherwise. D3. If ker(T) is a. thus nulHty(A) .nd s = 1. 0 0 {] . then the possible values for rank(/\) are 0. and the corresponding values for nullity(A) are 3. then rank(A) = 2. Thus if A ::::: uvT is a nmk 1 r£tatrix for which I . if A ism x n where m > n (more rows than columns) then the rows of A form a set of m vectors in Rn and must therefore be linearly dependent. 3.::: 1. U Ax= b is inconsistent for some b then A is not invertible and so Ax = 0 hM nontrivial solutions.or · ~·: If A is a 5 X 3 matrix. then the possible values for rank(A) are 0. Note thnt. 1.l (or both) is ~ 0 . 2. We must have rank(A) + nullity(A) = 3. or 5. Tt follows that ran(T) = col (A) has dimension n.A is invertible.4. 1.·&. or 0. then I . and t. 2. 2. if m < n then the columns of A must be linearly dependent.rank(A) == L (e) False.4. D4.t (I .1 and thus js a hyperplane in R" . D2. (f) 11uP. 2. thus nullity(A):::::: n .1.\¥ 0.A fails to be jnvertible if and only if there is a nonzero . D5. If A is a 5 x 5 matrix.1.A)x = 0.4 2 :fA. we must have u · v =/= 1 and . and rank(A) = 3.he other hand.o 1. then AT is n x m and so. 2. s·uch a matrix must have rank less than n. If A is a 3 x 5 matri. line through the origin. For example. 3 and so the n umber of leading l's in the red uced rvw echelon form of A is at most 3. we have rank(AT) + nullity( AT )= m . (d) True. If r == 2 a. rank(A) can never be 1 (or 0). If A is m x n.. On t. If the additional row is a linear combination of the existing rows. then the possible values for rank(A) are 0 (if A= 0) . since the first and fourt h rows are linearly independent.
l'lt2·a13). AB = lc. Tlten we have \\'ithout loss of gPnerality . it follows = y(gy)T ==. WORKING WITH PROOFS Pl. . 0 and x Since x andy are nonzero we have xry that A= yxr = i~Y· If xTy > 0.4 = [~ ~] and B = [~ ~]· Then rank(A) = rank(B) = = [~ ~] bas rank 1 and B2 = (~ ~] has rank 0. conversely. Suppose. thus rank{A) ~nullity( B). lf A is of rank 1. 1f >. S1milarly.U )a. that the cond1t10n ( #) holds. = 0.a:n).y.. we note that.n = rank(AB) = 0. First. On the other hand. and further t Interchange columns if neces!lary) that.. and so rank(A) + rank(B} 5 n.(~)yy1' = (~y)(~y)T = respondmg formula 1s A = . Lhen we also have A = AT= (xyT)T = pr .n.(~)(ntt. DlO. On. = 0 is the value for which the matrix A has lowest rank. whereas A 2 and so rank(A) = 2. and so the second row is a scalar multiple o f the first ••II row P2. Dll. Thus>.l3 an an Thus (a~t .CA ) c1(A) c~c(A)J [ (B) r 2(B} = Ct(A )r t(B ) r~c(B) + c2(A)r 2(8) + · · · + C~t (A)r~t ( B) . Then the same 1s true of each of the 2 >< 2 matrices that appear in (#) and so each of these det erminants IS equal to zero. Let . and each of the prod11l'l~ c1 ( A)r1 (B) is a rank 1 matrix.nullity( B) 5 n . rank(B) 5 nullity(A). A is symmetric.e may assume that the first row of A is not a row of zeros.uur where u = j~~. the matrix A= ( 011 021 012 an 013 ] 023 fails to be of rank 2 if and only if one of the rows is a scalar multiple of the other (which includes the case where one 01 both is a row of zeros).U )at 2 and a2~ = ( l!. r l uuT IfxTy < 0. Thus it is sufficient to prove that the latter is equivalent to the condition that Suppose that one of the rows of A is a scalar multiple of the other. in addition. Since rank( B) . a11 =1: 0 We then have a22 = (l!.218 Chapter 7 and so rank(A) = 3. then the reduced row echelon form is 0 1 ~ 5 0 0 2 0 0 il 1.nulHty(B). If AB = 0 then rank( A) + rank(B) . From this it follows that x(yr x) = (xyr )x = (yxT )x = y(xr x ) = yllxll 2 = yTx = y · x :/. then thecor P3. then A = xyT for some (nonzero) column vect ors x andy. it follows that rank(A) + n . If.
.1 ).4.k(B) which.. then nA.rank(AB} 2n . dn. suffices to show that V U W is a linearly independent set.1 is singular and so B is singular.1 u = . .A= 1 + vr . Vz. This shows that V U W is a linearly independent set.1 ... The <edueed cow . ..n ~ rank{AB} ~ ~rank( A) it follows that n.n.1u ' u(vT A. Then T Al T) (A ~ l A.EXERCISE SET 7.. and dim(null(AT)) == 43 parameters in a general solution of Ax = 0. + CzVz + · · · + CJ. . using Theorem 7.. Since the set V U W contains n vectors.1 .1 u uvTA .5 1. .lu =(A+ uvT)ABA. Suppose .rank(A) ~ n. Sjnce dim(null(A)) = 3. It follows that n · n = llnll 2 = 0 and son= 0.1 . nullity(B) S nullity(AB) ~ nullity(A) + nullity(B). Thus we simultaneous ly have c1 v1 + czVz + · · · + q v. there are 2 free . The reduced row echelon form of A is l 0 0 1 0 I 0 l 0 0 1 0 _ .1 0 l 1 [j 0 7 l 9 0 0 0 0 4. = 4.1 u )vTA . = 0 and d1 w1 + d2w2 + · · · + dn k Wnk = 0.1 A uv TA .=: # 0.'I.A uv A Note.rank{A) .1 + u (vTA.6 5 2 0 0 5] and the reduced row echelon form of 0 0 AT is 0 0 0 0 0 0 0 0 0 J 0 0 0 T hus dim(row(A)) = 2 and dim(col(A) = dim(row(AT)) = 2.·· EXERCISE SET 7.:Vk + d1W1 + dzWz + · ·· + dnkWnk = 0 c1v1 + czvz + · · · + c.!§.1. . thus . c2 . Wz. Yk are linearly indepenjent it follows that c1 = cz = · · · = ck = 0.  P6.1 A~u( A'..l uv1'At) I (A + uv A =  + uv · uvTA1 + uvT Aluvr AA  t . t here are 3 free ~ 2 0 7 ~1 2. It follows t hat dim(null(A)) = 53 == 2.vk = (d1w1 + dzW?. 1 l +vTA.[ and so B is invertible with B · 1 = X= A.1 = l + uvT A. and since w1.t c1 .'2 = 0 0 2. Thus dim(row(A)) ~ 3 a nd di.. Since dim(nuli(A)) = 2. P5.. Wnk are linearly independent it follows that d1 = dz = · · · = dnk = 0.ra. = 1.l  A1 =u+ l+vT. + · · · + dn . and dim(null(AT)) parameters in a general solution of Ax = 0.1 u) = u u = 0. it. Since the vectors v1. c~c and d 1 . If vTA.. 2 1 0 0 0 0 1] and the reduced row echelon form of AT is 0 o 1 0 .5 219 P4 . is the same as nullity( A) ~ nullity(AB) S nullity(A) +nullity( B) Similarly.m(col(A) 0 0 0 0 0 0 0 0 = dim{row(AT)) = 3.. t ' 1 u Bx . .1 uvTA. TAt 1u =A .k are scalars with the property that CtVJ Then the vector n = to V = row(A) and to W = null(A) = row(A)~.. . Supp?se then tha. d 2 . ••• . . . =l+uv rA . and let X= A.ehelon fonn of A is 0 . It follows that diro(null(A)) =52 = 3.JcWnk) belongs both From the inequality rank(A) + rank(B) . •.
dim(null (A)) = 3 I = 2. <lim(null(AT)) = 33 = 0. (e) <lim (row(A)) = dJm(col(A)) = rank(A) = 2. general 3 the system is consistent. i. rank[A I bJ = 1 the system is consistent. dim(null(A )) = 7 2 = 5. = rank[A I bJ the system is inconsistent. (d) Since rank( A) = rank[A I bJ = solution 1s n . (c) dim(row(A)) = c. dirn(nuii(AT)) = 4 .. (d) dim(row(A)) = dim(coi(A)) = rank(A) = 2.2 = 2. (a) Since rank(A) = rnnk!A I bJ= 3 the system is consis tent. dim(rlllJI(A)) = 3 . 4 the system is consistent.e. (c) Since rank( A) = rank[A I bJ = solution is n . rank[ A I bj = 2 the system is consistent. (c) dim (row(A)). dim(nuii(A)) = ot . (e) dim(row( A)) = dim(col(A)) = rank(A) = 2.2. dim(nuli(A)) = 4 2 = 2. dim(nuli(AT)) = 6 .3 = 0.2 = 1. The number of parameters in a general i.dim(coi(A)) = rank{A) = 1.r {d) Since rank(A) soluLion JS n .3 = 1.1 = 2.r = 3 . dim(null(AT)) = 3 . The reduced •ow echelon form of A is [~ ~ r~ :.220 Chapter 7 3.) = 2 = 'o ~ and the reduced row echelon form of AT is 5. dim(nuli(Ar)) = 33 = 0. The number of parameters in a general = 3 . .weters in a. (d) dim(row(A)) = d1m (coi{A)) = rank( A) = 2. dim(null(AT)) = 3.I .2 = 7.1 = 2. dim(nuii(AT)) = 9 .2 = 5. dim(nuii(AT)) = 7 . dim(null(A)) = 3 .r = 7 . {b) dtro(row(A)} = dlm(col(A)) = rank(A) = 2.llm( coi(A)) = rank(A) = 1. The number of parameters in a general = 9. general (b) Since rank( A) = rank[ A I bl = solution is n .! :] 0 ] . dim(row(A)) dim(col(A)) rank(A) = 3. The number of parameters in a.2 = 2. dim(nuii(A)) = 5 2 = 3.l = 5. dim(null(A)} = 3 . The number of parh.r (b) Since rank(A ) t(c) Since rank{ A) = solution is n . dim{nuli( AT)) =5 .r = 3. (a) dim(row(A)) = d im(col(A)) = rank(A) = 3.2 = 1. the system has a unique solution . Thusrank(A) = 2 and rank(A. (a) Since rank{A) t.2 = 7.2 = 3. 8.2 = 1. 4. .3 = 0. The number of parameters in a general solution is n . dim(nuii{A)) = 9 .1 0 0 3 and rnnk(AT) = 3.4 = 3. 7. dim(nuii(AT)) = 52 = 3.rank[A I b) the system is inconsistent. The reduce<! row echelon form of A is Thus rank(A) [~ ~ !] and the reduced row echelon form of AT [~ is 0 2 1 . (a) = 6. 2 the system is consistent.3 = 0. tLc system has a unique solution. dim(null(A)) = 3 .e.r = 4 . (b) dim(row(A)) = dim(col(A)) = rank(A) = 2.2 = 7.
linearly d~p. (a) 4 2 1 ] = 64 :f= 0 and det(AA T) "" det ( ] 4 ]{) 2 17 This corre~ponds to the fact that A has full det(AT A) . (a) This matrix bas full column rank because the two column vectors are not scalar multiples of each other. thus AT A Is not lnmtlble = 61 =f 0. (b) This matrix does not have full row rank because the 2nd row is a scalar mul'. (a) Thls matrix has full column rank because the two column vectors are not scalar multiples of each other. 11. This corresponds to the fact. 12. (c) This matrix has full row rank because the two row vectors arP. not scalar mult iples of each other. thus AT A is not invertible but AAT is invertible._. it does not ~ave full row rank because any three vectors in R 2 ar.. (a) det(ATA) = det l 6 1 . This corresponds to t he fact that A does not have full column rank but does have full row rank .det [ IS 2~ : =~~] = o and d et(AAT) = det. This corresponds to the fact that A hu. it does not have full column rank because any three vectors in R2 are linearly dependent.1] = 0. thus AT A is invertible ·. it does not have full row rank because any three vectors in R 2 are linearly dependent. does not have full column rank beeause any three vectors iu R 2 are linearly dependen~. This corresponds t..1 . thus AT A and AAT are column rank and full row rank... This corresponds to the fact (see Exercise 9a) that A has fuJl column rank but not full row rank.2 I and AAT is not invertible. it does not have full column rank because any three vectors in R 2 are linearly dependent.EXERCISE SET 7. [~~ . thus neither ATA nor AAT 20 !0 40 invertible.o the fact. (b) This matrix has full row rank because the two row vectors are not scalar multiples of each other.iple of the 1st row. (d) This (square) matrix is invertible. (c) det(AT A)::. (d) dct(A~'A) hoth invertible.1). rlct [2 10 4 10 2] = 0 and det(AAT) = del [ ~ 34 13 3 13 37 4 ~] = g 1269 :j: 0.endent. (c) Thls matrix does not have full row rank because the 2nd row is a scalar multiple of the lst row. it has full row rank and full column rank.~] 2 ·· . that A has does not have full column.. rank nor full row rank.~] = o.s does not have full column rank but.5 221 9.]: ~ 'l'"1 det(AAT) ~dot[': :] ~ 66. det [~~ . it has full row rank and full column rank.~ ~~] 60 15 45 2 = 0 and det(AAT) = det [~~ ~~~] = 0.2 17 ible and AAT is not invertible.. does have full row rank . (b) det(AT A) =. .. 10. (d) This (square) matrix is invertible. ="" 149 =f 0 and det(AAT) = dct [10 2 2.. t hus ATA is invE>rt4 2 i ll . __. 0. Th1s corresponds to the fact (see Exercise lOa) that A has full column rank but not full row rank (b) ciet(AT. (c) det(AT A) ~ det [:: "" d el(!'> ... thus neither AT A nor AAT is invertible. that A doeR not have full column rank nor full row rank. it.. det [ : 1 ~:] = 45 =f 0 and det(AAT) = det [ ~~ ~~ =~] = 0.. it does not have full column rank since any three vectors in R2 are linearly dependent. but A A1 i&inveitll)fe.
b5 ) s~tisfies the equations b3 = 3bt 1 4b2. = 2bt.2b2 + b3 =I= 0). i e all V('ctors of the form v = s( 2.and bs = ?b1 + 8~.trix of the system Ax = b can be rel!\lo. .. 'fh~> augmented ma. Similarly. 15.tions of the vect ors u 1 = (1.2). 7) and u 2 = {0. b3 . all \'CClor of the form v = s( .7. I .·lb2 ) 18.6). or bas exactly one solution {if b1 + ~ = 0). there will be in!'lnitely many . It is clear from inspection t hnt the rows of A and . The augmented matnx of Ax= b can be row reduced to o [ I 0: 1 !4b ~ b3 1  ~ b3 • l Thus lhe system Ax = b 0 01 bt + b'J is either inconsistent {if b1 + ~ :f:.ition.2i>l + b3 o 1 l Ax = b is t!ither inconsistent (if bt . i e. The latter includes t he case b1 = b2 = b3 = 0. If A = [. in this case. Thus row(A) = row(Ar A) is the !dimensional space consisting of all scalar multiples of u .222 61 41 41 Chapter 1 (d) det(AT A)= dct [ 1 soj = 1369 =/= 0 and det(AAT ) = det [~ 4 ~1 37 ] 31 1369 =/= 0.7 8 [ 0 0 0 thus the system is consistent if and only if b3 solutions.4. 16. where b1 and~ can assume any values.2b2 + 63 = 0). Thus the system [ 0 0 1 b1 .2bt . thus AT A and AAT are both invertible This corresponds to the fact that A has both full column r<illk and full row rank 1 3.bl b3 . 1 : 5 0 : ! = b2 + b1 and.:ed to 3 0 0 0 0 1 bt b2 .~.8b2 + ~. Note Lhat the latter includes the case b1 = b2 = b3 = 0. b2 .~ ) ~] 2 then A r A I [ 6 12 12 24 ]. The augmented matrix of the system Ax= b can be 1educed to 1 2 3 0 . 1). 1 4. thus the system Ax = 0 h as only the t rivial solut ion. or has exactly o ne solution (if b1 . The reduced ·row echelon from of A = [ 1 1 1 2 3 4 ] is [ 1 0 0 I 6 7 ] and the reduced row echelon form of AT A = [ ~ 1~ ~~]~s [~ ~ ~]· Thusrow(A)=row(ATA)isthe2dimensionalspaceconsisting 7 .2b. 0 0 0 + 3bl b4 b5.11 17 0 0 0 of all linear comb1ne. 0.T A are multiples of the single vector u = (1. null (A) = null(AT A) is the !dimensional space consisting of all vectors v in R2 which are orthogonal to u . + 7bt thus the system will be inconsistent unless (b 1 .6. thus the system Ax = 0 has only the trivial sdl'i. 0}.re o rthogonal to both u 1 and u 2. b. b4 . 1) 17. The augmented matrix of Ax = b can be reduced to 4: bt 3: i>l. Thus null(A) =null( AT A) 1s the 1dimtlnsional space consisting of aJI vectors v in R3 which n.
if A is m >< n. 0) + t( . 1. (a) The solution$ of the system are given by x = (b .1. ·(a ) False. from T heorem 7. D4.1.he rows of A are a se t of 5 vectors in R3 a nd thus a.nd full column rank.is m >< n . If A is m x n where m < n. (a) Since null(A1'A) = null(A). from T heorems 7.l = null(A' = row(A T A ). = row(AT A) = row(A) = col(AT).hen the columns of /1 are a.5. (a) n6.t. we h ave dim(row(AT A))= rank(A1'A) rank(A1 ') = rank(A) = k and dim(row(AAT}) = rank(AAT) = rank( A) = k. (b) False.nullity(A} = rank( A). = ra nk(A) .1. P4. if m > n .5.nk(AT) . 0) 1.1. Thus. 2.5. 0.nd the corres ponding nullity is 3. As above. t < oo. ·or 0. s. whe re (b.nk(Al' ) = ra nk( A) . If A 1'J\ and A A'T a re both invertible then.2 and 7. general solution of the corresponding homo\ geneous system. with m f n.10 . then m = dim(row(A )) = rank(.t . (e) True. This docs not violate T heorem 7. If an m x n matrix A has full row rank u. (d} True. t.nullity(AT) ""' ra.4) = dim(col(A)) = n.WORKJNG WITH PROOFS 223 DISCUSSION AND DISCOVERY D l. Frnm T heorem 7. 0. 2. True. or 3 a. 1.s( . O) + t ( . = D3 .nd so rank(A AT) ::= m .n ~llity ( A A T) = m .nullity(A) P2. we mus t have n ~ m and dim(row(A)) = dim(col(A) ) = m . 0. 0) is a pa. P3.8(a) we have nuli(AT A) = nuii(A) .nuility (AT A) = n . the system Ax = b is consistent (for any b) and so the ma t riCes A and [A I b J have the same rank. then eit her t he columns of A are line arly dependent or the rows of A arc linearly depende nt (or both). Similarly.9. Under these assumptions. t hus A is square. nuli{AAT) = nuli(AT ) a. T he rank of a 3 x 3 matrix is 0. Similarly. 0. It is always true that rank(A) = ra.1. If AT x = 0 has only the trivial solution then.nu l1ity( A'~' A)= n . t hen AT A is n x nand so ra nk (ATA) = n . t hus dim( row( AT)) = dim( col( AT ))= 3 and dim (nuli(AT)) = 7 . 1. {c) True. If A has rank k the n. 02. from 'Theorem 7. Thus if A . we have row(A) {b) Sinr.11. If A is a 7 X 5 matrix wit h rank 3. (b) Lf A i~ 5 x 3.5. t hen A'r also h as rank 3.oo < s.hen t. If A is 3 x 5 .7(b). t. A has full row rank.5. (b) The solutions can be expressed as (b. we have col(AT A ) = nuB{A).1).~ AT A is symmetric. t) where .rt.s. WO RKING WITH PROOFS Pl. D5 . then the rows of A form a set of m vectors in R~'~ and thus are linea r ly depender.5. The row space and column space always have t he same dimension. l) is a.re linearly dependent.3 = 4. A has full column (f) rank ami full r ow rank. A)l.icula r solution and s( . (c) Jf A is m x n . we have rank( AT A) = n . set of five vectors in R3 and thus t~re linearly dependent. then the columns of A form a set of n vectors in Rm and thus are linearly dependent . whether A is square or not.
:~0~2~ ~:.. rnnk(C P) EXERCISE SET 7.~T. we prove that if the largest invertible submatrix of A is k x k. Then y = Ax for some x in R" and Ay = 0 . Step 2.0 :o::::eed 0 0 0 0 [~ ~ ~ ~]. thus the first two columns of A are the pivot . together with Step l. Since A 2 x = Ay = 0 . P6. Suppose D is an r x r submatrix of A with r > k. The first column of A forms a basis for col(A). Then the columns of C are linearly independent and so the columns of A that contain the columns of C are also linearly independent. Suppose now that y belongs to null( A) n col( A) .sumption that rank(A) > k has led to a contradiction. 3. thus 0 0 0 = 2.224 Chapter 7 P5. and the first two rows of A form a basis for row(A) 2. using the cited exercise and Theorem 7.~: o::·::.. Then dim(col(A)) = r. b asis for dim(row(A)) ~ol(A). null(A 2 ). The ma. This shows that null( A) n col( A)= {0}.:h~:::~:: .rank( A)= dim(nuii(A}) and.tnx A can be row reduced to B = :l:[r ~d 'I . and all submatrices of larger size are singular. Let C be the submatrix of B having these vectorc. This. If A is an m x n matrix with rank k.rank(G' P) = n. and so A has r linearly independent columns. Suppose rank(A) = r > k. the columns of A which contain those of D must be linearly dependent. The proof is organized as suggested. Thus D is singular Conversely. then A has at least one invertible k x k submatrix. and soy = Ax= 0.. P7.rank(A 2 ) n. It follows that bne columns of D are linearly dependent since a nontrivial linear dependence arx1ong the containing columns results in a nontrivial linear dependence among the columns of D.:::::::::. then A has rank k Step 1. Let B be the m x k submatrix of A having these vectors as tts columns.st two columns of A are the ph•oL columns A row echelon from for AT is C = and these column vectors form a. Step 1. 8 = [~ : 0 0 •!]. If A is invertible then so is AT..c. Then Cis an invertible k x k submatrix of A.5 2. [~ ~ ~]. Let 8 be the m x r submatrix of A having these vectors as its columns. since null(A) <.6 1. Let C be the k x k submatrix of B having these vectors as its rows. since dim(coi(A}) = k < r. it follows that null(A) = null(A 2 ) . Let C be an invertible k x k submatrix of A. Thus. Then. then dim(col(A)) = k and so A has k linearly independent columns. Step 2. This matrix also has rank k and thus bas k linearly independent rows. it follows that the vector x belongs to null(A 2 ) = null(A) . A mw eehelon fmm fo• A .::::0 . thus the fi. This shows that rank(A) = dim(coi(A)) ~ k .rnnk(CT) =rank( C) this 1l also follows that nullity(CP) = n. and th~ first row of A forms a basis for row(A). as lt'l rm\''l Thf'n (' is a nonsingular r x r submatrix of A Thus the a. First we prove that if A is a nonzero matrix with rank k. If rank(A 2 ) = ran 1'(A) then dim(null(A 2 )) = n. Then B also has rank r and thus has r lmearly independent rows.rank( C)= nullity(C). we have = and from = rank((C P)T) = rank (Prcr). shows that rank(A) = k.
we first form t he matrix ha. aud •lt. The reduced row echelon form of A is fJ = 0 0 1 0 0 0 0 1 0 0 0 0 0 0 0 0 0 0 0 0 c= [~ 0 0 0 1 2 0 0 : 1  4 0 0 5 t 0 o oJ 0 l1'1 . 1 Q 2 0 0 0 0 i 3 I 0 0 0 0 l · 6 5. '~hus the vectors v 1 . The redur. v 2 .h columns of A form a basis for co!( A). 0 0 0 ~ 0 O J and the reduced row echelon form ()( Al' is C: = [0~ .<> of A form a basis for col(A).EXERCISE SET 7. Thus the first two columns of 0 0 A form a basis for col(A). The reduced row echelon form of A is B =.ed row echelon form of A is B = [i ~ i] ii 0 and the reduced row echelon form of AT is C = 0 0 0 ~ =: ~jl 0 0 0 0 0 0 0 0 . 2nd. 1 3 ·o o and the reduced row echelon form of AT · iS 6.!(. and 4th rows of A for m a basis for row(A). 0 Thus lhe 1st. Proceeding as in Example 2. . and the first two rows of A form a basis for row(A) . v 3 a. 7 .6 225 4. and the 1st.re lineMiy independent and form a basis for the space which they span (the column space of A).¥ing the given vectors ns its columns: G 4 1] 10 6 .11 2 The reduced row echelon form of A is: 3 From this we conclude that all three columns of A are pivot columns. 1 0 0 0 1 ] . 3rd . and the 0 0 () 0 first three rows of A form a hasis for row( A) .. Thus the fi rst tii <ee column.
1) and forms a basis for nuii(AT) . we also conclude that v2 = 2vt and v 1 v 1 +. 0. v 4 form a basis fnr W = coi(A). Proceeding as in Example 2. 1) form a basis for 12. The reduced row echelon form of A is 0 1 2 1 01] 0 1 3 2 0 0 From this we conclude that the vectors and v:~ = v 1 1.v 3 10. 0 0 0 0 v 2 .0) and v 2 = (0. 1. Since c2 (R) == 2c1 (R) and c4(R) = c 1 (R) + c3(R).] = [~~:~!~~~~] Thus the vectors v1 = (1. Let A be the matrix having the given vectors as its columns. v2} is a basis for W and v 4 == 2Vt + v2. and it also follows that v3 = 2v 1 + v 2 9. . v3} is a basis for W col(A). The matrix {A I /3j can be row reduced (and further partitioned) to [~+ ~. [~~~. thus {v 1.v 2 11. Let A be t he matrix having the given vectors as its columns.1 = [: 0 1 : 1 0 '2 I 0 0 0 I 0 I I 0 0 ~] can be row reduced (and further parmioned) to 0 0 0 : 0 0 1 null(AT).. The m•tnx lA I 1.226 Chapter7 T~~ 8. v 1. we first form the matrix having the given vectors as its columns: A== [ ~ ~ ~ ~] 0 3 0 6 2 0 2 3 The reduced row echelon form of A is: R=[~~~~l From this we conclude that the 1st and 3rd columns of A are the pivot columns. = col(A}. reduced row echelon form of A is R= 0 0 [ ~ 0 0 ~] ~ ~ 0 0 0 0 From thls we conclude that {v 1.4.3vz + 2v~. and that v 3 = 2vt.] [i~~i~:~~~~ :~1~] = Thus the vector v 1 = (1.
The first three rows of R form a basis for row( A) = row( R).~ . 1. O. 3rd. The reduced row echelon form of the partitioned matrix [A I !4} is [ 1 0 . 1. ~ .~ . s.1 4 0 0 .LO.~ ) form a basis for null(AT).t) = t( . and so these three columns form a basis for col( A). The fi rst two rows of R form a basis for row(A) = row(R). We have Ax = 0 if and only if Rx ""' 0. O.s.1. £ .0. The reduced ww <ehelon fo•m of A = [: .! l I t ~ 4 0 0 0 0 : 0 1 ~ Thus t he vectors ( 1.. 2nd . i.3s+1t. ~. The redu:ed row echelon form of A is R = f~ ! =~ =~] . 1) and v 2 + s(O. 16.0 0 0 0 : 1 0 ~ _ . The reduced row echelon form of the part itioned mat rix [A I J 1 j is l 0 0 1 :. and the general solution of t he latter is x = ( ~s. 7. 'The lst. 0 .~s.EXERCISE SET 7. .. O = sO.~.he vector (0. l .0. .1). 40 1 80 11 o 5 1 5 5 Thus t.0) and (4. 0. ~.e.I 4 0 8 1 lG o 0 .1. . forrn a has is for col(A) . s... Orrorms ) a basis for null( A). if and only if x is of t he for(ll X = (. 0 and 4th columns of A are the pivot columns. 0.t) = s{1.e..4t.. 0.~s.1. 15. 14. und the general solution of the latter is x = (s+4t .O).0.0. Thus (1.1 2 0 s :! I 1 I ~~~~ ~.3 7 I 0 0 0 1 . lo o o o From this w~ conclude that the first two columns of A are the pivot columns.lt..O. 7.~.0.3.0) (d) Thus the vect or u = (4 . 0.. i. thus these vectors form a basis for (b) The first three rows of A form a basis for row( A).O) = t(4.. l .l. 0) forms a basis for null(A).6 227 13. 0) form a basis for nuli(Ar).. . {a) col(A).~ ) and (0.: ticn is ~ ~] is . (c) We have Ax = 0 if and only if Rx = 0. We have ATx = 0 if and on ly if C x = 0.~] 0 1 .. and 4th columns of A are the pi vot columns. 1. 1) fonn a basis for null(A) ..0) +t(4. 1.1.O.} ~~~.1) Thus t he vectors v 1 = (. if and only if x is of the form x = ( .0} = (0. R = ~ ~ 0 0 : 0 :].Ro ~ [~ ~ t]· Thus the columnrow factori>a· A=CR = [~ :]l~ ~ il . We have Ax = 0 if and only if Rx = 0.t. .3. .~. and so these two columm. The •educed ww echelon fmm of A . Thus ( ~ . From this we conclude t hat the lsl.t. t) forms a b&Sis for null (Ar).
The pivot columns of a matrix A are those columns tha~ correspond tv tht: columns of a row ec. then the number of leading l's in a ro w echelon form of A is at mos t rn. and the nullity of AT is at most 2 (assuming tl # 0) If A as a fl"' 3 rnatnx then the number o f leading l 's in a row echelon form of A is at most 3. (b ) (c) (d ) D2. the rank o f A T is at most min{m .2 .ssumang A ¥= 0 ).228 Chapter 7 .10 4 5 0 7 6 6 12 5 28 .. the rank of A T is at most. For example. R . the n . al most 4 (nssummg A "# 0) If A iR a 4 x 4 matrix.1 12] = [1 2 0 0 0 0 0 3 1 0 0 0 0 0 ~] and ::o the 1st. and the nullity of A T is at most. a nd the n ullity of AT is at most m.4 = [l 1 9 . 3 (assummg A i 0). ·1 .Plon form R o f A which contain a leading 1.nk o f A is at most 3. l he rnnk o f A is at most 4. "] 14 18 Ro = [' 0 0 0 0 0 s 3 0 0 =..1nd the ro rrespcmding columnrow expansion is A~ m [1 17. The reduced row echelon fo rm of A row factorization is 0 II+ 4 2 8 HJ jO 1 ~I 7 0 3 . A ~ [~ 0 4 2 . nnd 5th rolumns of A are tbe pivot columns . If A is an m x n matrix. the r ank of AT is at most 3. n} . general solution of Ax= 0 is at most 2 {a.]. the rank of AT is at most 3. the number of parameters m a general solution of Ax = 0 is at most 4 (assummg A¥= 0 ). the rank of A is at most min{m..1 (assuming A # 0). tbP number of parameters in a general solution of Ax = 0 is at most 3 (assuming A i 0). then the number of leading l 's an a row echelon form of A is at most 3. n}. Thus the column 6 10 10 A = CR= and the corresponding columnrow expansion is  3 ~ DISCUSSION AND DISCOVERY Dl. mber of parameters in a general solution o f Ax : 0 is at most n .1 (assuming A# 0). then the number o f leading 1 's in a row echelon form of A' is at most I. and the nullity of AT is. the num ber of pararnrters in a. 3rd. I he rank of A is at most 3. (a) 1f A •~ a 3 J( 5 matrix. lhe r1.
gona l to a is x. an d x . and the fo. and cos (J 5 [I ·~ ] ·. and we have . 2.prOJaX. 2).:. ·!4 ) . 20 .1):::. the standard rnatrix for the projection is Pe = 5 ~.~ ).(Z 1 Q.:. . ~ . ~ ) = CZi. 575 ).llel to l is a = (3. since sin O = Po = [_1 . the standard matrix for the projection is 5..  3. Using Formula (5) we have projaX = 1 2 a= ( 1 )(1. a·x 6) = llall2 a ·""' ( 3)(1. 0. D4. 1. (b) aJ = 4a 1 . (a) fa 1. 7}  (11. 2) = (3 5 5 5• On the other hand. since sin 0 = . the projection of x on lis given by proJaX . az . 1) · . 2) . 1.proj. proj .. ~. = (~.!l] · ·¥ 7&. prOJ8 X . and the component j orth. The vector component of x along a is projax = {a:lij a = ~(4. e72.··· 1. . 5) Mid v 2 = (0. 9 . 8.7 1. _ X·a · _ _ 5 10 1[)) . A vector pnr11. [_ ~~~ ~) ~9 and we have Pex 29 = [_ t ~] (~] ~ [~] . . ~).1i . [o. = (1. 1. On the other hand.X = ll all2a :.L{o . 1) ·.::(IO . Using Formula (5) we have proj 8 x = u:u~a = (~)(2.6} = [.14. • .o = 6a1 + 7a2 + 2a4. 0~ 0. ·1). the st andard mat rix for the projection is Pe = 3.ro. since sin(} = and cos B = the standard matrix for the projection is Po [ ~] ii !.') 4. ~ . T he vector component of x along a is proj 6 x = 11':"12 a = (0. ( Iii• 14• i4 . basis for col( A). ~. 5 ~ = 1 v 15 .iiGlr a = 5 ( l . a ·x a )( 3 ) ( 33 11 proJ.projax = (5. Pex = [li fs l ~] 1 i_ [.(0. Thus. 1. 3) . 2. llprOJaxll = ~ = J4 +9 + 36 = J49 = 7 Ia · xl 1 .x = (2.7 229 D3..x = ~ (2. and we have Pex = i ! [~] = ~ .(~.6 + 241 2 .iven by .1. 2. 1. Thus. 5.5) = (~~. ~. ~) .1.lo) = (~. 4.w ) On the other hand. i4• 14 . . EXERCISE SET 7.EXERCISE SET 7. 20 x. = .!!) · A vector parallel to l is a = (1. the projection of x on l is g. ~ ~ ~) 0 ! 6 .?to. 2) = (151 . a4} is a.~ 29 19 :. using Form ula.~J · It) 10 . i ). = 2.~. ~. 14  ( 5 10 15) _ ( 23 14 . ./ [3] 1 2] [i .3a2 and a. ... ~ . The vectors v 1 = (4. 7. . ~. using Formula 5. ¥). 1) = (0. 0. 4 . 2) component orthogonal to a is x .14• 10 I .~. since sinO = ~ and cosO =.110 ).proj 8 x = (1. I.. 1) form a bnsis for null( AT).1] 10 10 vko and cosB = and we have Pex""" [_ ~ 10 11 [~J = r. 2 .:1{ On the other hand.
Ia · xl 18.16 11. 3 onto W = span {a 1.m = 2!1 [~~ ~: ~] 193 =P .2.] 9 1 'l and.1. 2) then the standard matrix for the orthogonal projection of R 3 onto span{ a} is given by 1 T 1 P = . If a = (7.2 4 and so P 1s tdempotent. 14. from Theorem 7.9 26 2 0 1 ] 3 = 113 1 84 257 [ 96 84 208 56 96] 56 193 We not e from inspection t. lt is also apparent that P has rank I since each of its rows is a scalar multiple of a . the standard matrix for the orthogonal projection of R 3 onto span{a} is given by P= .mpot enl.hat lhe matrix Pis symmetric.J33 3 .[13 9] [3 4 1 3 257 .a a = . Finally. The reduced row echelon form of P is [~ ! ~] and from this we conclude that P has rank 2. .11 3IJ5i 13.14 19 14 11 11] 4 4 4 4 We note front in!>pection that Pis symmetric.\lar mult1plc uf a Finally. 1 2..230 Chapter 7 nail= J4 + 4 +. Finally. a2} P= AI(MTM)IMT = [! ~]~. llprOJ aXII = Taf = V49 + I + 0 + 1 = J5I = ~ 1353 + 0.10 + 4 1 2 1 J6 v'24=J66 . it is easy to check that p2 = _. llprOJ nxll = Taf = a· la·xl xl 18+62+151 v16 +4+4 + 9 = J33 = 31 11 . [~ 30 3 10 2 10 4 . the standard matrix for the is given by orthogonal prOJCclton of !1.3. it is easy to check that P 2 = P and so P is idl.. " 1] 11 [ 5 5 2) = 2 3~ [ 15 2] 5 25 10 2 10 4 We note from inspection that P is symmetric.m 2!1 [~:: ~~: . it is easy to check that P = 2~7 [ ~:~ ' and so P ic:: idl'mpotent ~~: ..~ .7. a a aaT = .1 I 3 ?. Let M = [ . If a = ( . 2) then.] u :l Then i\f 1 M .2 aTa 57 2 [ _ 7] lr 1 2 2] = 57 [. =[ 16 . from Theorem 7 7 5 .!. 30 2 [~ 2~ ~~] ~ [~ ~: ~~] = _.!. Hi. 5. It is also apparent that P has rank 1 since each of its rows is a sc.
:1 [~ ~ :] [~ ~ :J. 2.1 1 lJ 1 1 2 The orthogonal projection of the vector v on the plane is Pv [ = ~ [~ . Finally. l [~ ~ ~] 0 0 1 This Then MT M ~ [~ ~~ . The reduced row echelon form of P is [~ 0 1 0 and from this we conclude that P has ranK. 17. . then MT M = (~ ~] and the standard matrix of the orthogonal projection onto the plane is ~1 2 12 l] [1 01 0] = 3 [~ ~ =~] ~ ~ 2 .: ~]· Then MT M ~ [.~ :]and the •tandW"d matrix for the orthogonal ~wjection of R 3 onto W = span{ a 1 . The standard matrix for the orthogonal projection of R 3 onto the xzplane is P = agcees with the following computation rn.2 1 J 1 . We proceeri as iu Example 6.ing Po. and M(MT M) . The standard matrix for t he orthogonal projection of R 3 onto the yzp]ane is P = [~ 0 0 ~ ~]· I This 19. Let M ~ [ .1 2 1 [ 2] = 1[ 1] ] 4 . = 18. mula 0 (2n Let M ~ [: .1 7 . The general ::oolution of the equa tion x +y +z = 0 can Le written as and so the two column vectors on the right form a basis for the plane. it is easy to check that P 2 = P.7 231 16.EXERCISE SET 7. a2} is given by 23][1 2 30 2 4 ~~ ~~~] 102 305 From inspection we see that P is symmetr ic.8 .MT ~ [: ~] [. If M is the 3 x 2 matrix h~ving these vectors as Jts columns.
he vettor~ a 1 and a 3 form a basts for the ~ubspact II' spanned b}· the given vectors (column space of A). The reduced row echelon form of A is .20 30 20] 4 and the standard matrix for the orthogonal projection of R 4 onto W ts g1ven by M(MT M)t MT = [ .: 2 4 _1_ [ 30 2 1760 20 5 89 ~] .15 15 = _1_ 105 220 .20] [4 72 1 6 2 0 2 105 135 . Let A be the matrix having the given vectors as its columns.75 185 22. 1f M is the 3 x 2 matrix having these vectors as its columns. Let M be the 4 x 2 matrix havmg a 1 and a 3 as its columns.1 MT = [ 0 1 14 6 1 3 _!_ [ 10 .232 Chapter 7 20.y + 3z = 0 can be written as and so the two column vectors on the right form a basis for the plane. The reduced row echel<m form of A is and from this we conclude that t.3 [ 25 3 .15 31 75 25] 15 . The general solution of the equation 2x .1 53 • 5 21. then NfT M projection onto the plane is = [! 1~} and the standard matnx of the or~bogonal 5 0 3 1 14 6 3 5 P = M(MT M).6] [ 1 2 0] = _!_ [ 10 13 6] 0] 2 3 2 2 10 2 14 [ 6 The orthogonal projection of the vector von the plane is Pv = ]_ 13 2 6] [ 2] = 14 [34] 1 3 1 li 4 . Then 6 2 4 6 4 lj 0 72 0 2 5] [ 2 2 5 = [. Let A be the matrix having the given vectors as its columns.
thus the gcnP.7. a· v ( 15 14 .!. 0._ 89 220 153 89 87 31 37] 13 59 7 . 1. Let M be the 4 x 2 matrix having a 1 <md a 2 as its columns.4 6 3 .] · In ot h& words. and from this we conclude that the vectors a 1 and o· · a2 form a basis for the subspace W spanned by the 36 24 24 ] 126 oiven vectors. 1.m Ax = 0 can be expcessed as = t r~] '"' 11 lo o (to avoid lcactions) = • [. 2) on the solution space is g iven (in column form ) hy = Pv= 1·[~ =~0 ~ 3 24. 6. 3) on th<! solution space is given by proJ 8 v = !l aW~ a = i4 ( 3.1 and the standard matr ix for t he orthogonal projection from R 4 onto the solution space is P ~ IJ(BT 8) .' Br ~ ll 'l ~ ~] [: ! 2 _ . 7. the orthogonal projection of v = (1. The ceduced COW ·~helon !com of the matdx A is R ~ r: X 0 : ~] .19 19 193 0 0 9 [ 31 13 .~] r s J 1 OJ l 0 I 2 3  2 2 2 where the t. ? 1 l 2 2 1 :.S its columns. Thus. 2).! . from The.).6 0 1 1] = _. Then 1 0 0] 1 [=! i] [~ 1 0 0 1 = 0~ 0 = 2 [1 OJ ] ~ 01 0 . 2.EXERCISE SET 7.lem 7. 0.2. Then M r M = [ and the stand~u·ct matrix for t.he orthogonal projection of R4 onto W is given by M(MT A1)l 5 3] MT = 3 6 _1_ [ 21 4] [5 3 [11 9 660 . matrix having these two vec tors a.1 _:] 0 2 1 ·· 1 2 0 Thus t he orthogonal projectio n of v · {5. thus a general solution of the 1 2 x= ts + !t1=s [~] +t [ l _! lt _.37 59 23. ~ the solution space of Ax= 0 js equal to span{a) where a = ( 3.hP. 1. [0 _! 0 0] = ~ 1 3 [t .!. 2) =  . The reduced row echelon from of the matrix A is R system Ax= 0 can be expressed as = [~ 1 1] i i . Let D be t.ra. 14 .wo column vectors on the nght hand side form a basis for Lhe solution space. 14 ) s 10 .7 23:. 0.l solution of 2 X thesyst.
] [: ~ ~] = ~ IS H~ 3 9 6 ~] 26.. Orthogonal projet.c:. and the first two columns of 0 1 3 0 1 1 .73 P.2 . The reduced row echelon form of the ma.11 1] [1 4 11 0 0 2 43] 1 1 1] 2_ 35 [~~ ~~ 7 7 1 8 9 2 11 7 ] 8 2 1l 29 Orthogonal proJCCtJOn of R3 onto col(A): Let C be the 3 x 2 matnx having the first two columns of A as its columns.tlon onto row(A): Let B be t.nd the first two columns o f A form a basis for col(A).he 5 x 2 matrix having the first two rows of R as ItS column'l Then sT B G ~) and the standard matrix for the orthogonal projection of Rl'> on to 7 row(A) is given by 5 7 2 4 9 Pr = B(BTB)I BT = 2_ 24 5 2 9 2 3 6 3 Orthogonal projection onto col(A): Let C be the 1 its columns. = cccrc)'cr = [i ~] H~ ~.tBT = [~ ~] 2_ [21 1 3 2 35 14 .46 203 .t~rix A= ~~ ~ ~ ~] is R = [~ ~ ~ ~] From this we conclude that the first two rows of R A form a ba. a. Orthogonal proJeCt JOn of R4 onto row( A): Let B be the 4 x 2 matrix having the first two rows of R as 1ts columns. Tht reduced row echelon form o f A~ R = [~ ! l ~ ~]· 0 0 0 0 0 From this we conclude that the first two rows of R form a basis for row(A).73 369 13 95 [ 194 64 29 .4] 1 2 3 4 and the standard matrix fo r the orthogonal p rojection o f 10 ] [ = 11 14 [14 21] R4 onto row(A) is g iven by = Pr = B(BTB) .46 =~~ 1 ::] . Then 3 15 x 2 matrix having t he fi rst two columns of A as 2 3 9 6 5 15 3 ere= [1: :] and t he stand ard matrix fo r the or t hogonal projection of 2 R 4 onto col(A) is given by 237 . ThPn 3 2 1 1 0 0 0 0 form a basis for row(A). Then cr c = [t 1 3 [~ ~1 = [11 ] 102 32 75 and the standard matnx for the orthogonal projection of R 3 onto col(A) is given by P. c = C(crc)lcr = 419 _I_ .is for col(A).234 Chapter7 25.
1. given in Theorem 7. 1) and so the computation above is consistent with the formula.1 2 Note that W L is t he !dimensional space (line) ::~panned by t he vect or a= (1. and the solutio n m 299 .f. ami let C = BT B = [_ ~ .5 1 . rrom this we o. and the solution of Ax= b which lies in row(A) is tl· 1 Xrow ( A) = Pxo = r l~ 1 _. 1 =~] 2 Thus the standard matrix for the orthogonal projection onto w.~] .! .m 299 124 299 90 2!19 .. Let B be the4 x 2 matrix having the first t wo rows of R as its columns.299 299 48 Xrow (A ) = Pxo = I. and let C = BTB fii: [.trix for the orthogonal projection of R 3 onto the plane W with equation x + y + z = 0 is P = 1 3 [~ ~ 1 1 .299 25 A 299 90 299 "lr .<i9 209 m 124 53 .7 235 27.~ ~: 0I 1] 0 k. The reduced row echelon foxm of the matrix [A I b) is [R[cJ= [~ 0 0 1 5 8 I g . In Exercise Hl we found t hat the standard ma.EXERCISE SET 7.~ ~:c: its columns. Prom this we 0 :~c~:d~ :Y::~:i:o~ ::~~ns:::· :·:~::n~: :~::.t (the line perpendicular toW) is 1 1 1 1.3. .! • .~. .!'] of R 4 given by = [~ Then the s tandard matrix for the orthogonal projection onto row(R) = row(A) is P = BC.25 7 '25 2 !R 25 11 2!1 11 2~ 25 25 1 ·n ~] [i] 25 25 2 !! 25 1 0 0 = n1 0 0 1 28 . and that x0 = [ ~] is one solution.: t~o .~ .7. The reduced row echelon for m of the matrix [A I b j is !RIc] ::: 0 [ 0 2 I I 4] 3 3 1 3 fi fi :..·:::v:g t~: ~. 20 131 :!99 20 13 l 299 2~9 299 224 299 32 299 :~2 . 0 0 conclude that Ax ~ b is onnsht<nt.l I"'l . :16 72 Then t he s tandard matrix for the orthogonAl projection of R 4 onto row(R) =row( A) is p ::: of Ax = b which lies in row( A) is given by vc1 BT .p = [~ ~ ~] ~ [~ ~ =~] 0 0 1 ·.3 _ 0 l j !M 0 !!l 29.!_ 25 25 25 .1 Br.
2.8 14 8] 30 = spa.x1 ll x2ll 2 then llxll 2 == llxdl 2 + llx2ll 2 and so = llxll 2 llxtll 2 = llxll 2 = llx2ll where x2 ('~.n{v 1.31 25 20 [ 1 25] [1] 20 1: 59 5 ~ = ~  1 1] 66 [127] ~~ [38] 23 ~: DISCUSSION AND DISCOVERY 01.p = [~ ~ ~10 0 1 1 14 [ 1 6 ~ 1 ~ ~] 3 5 [~ ~ 6 3 :] 9 31.1} onto w.1 178 2 [15 4] 4 7 l3 fl 51 2 4 0 1 23 23 54 25] 1 2] = 89 [28 .1 !] 2 3 = [. (the line perpendicular to W) !s I . 1. and has rank 3. and onto its orthogonal complement is n . (a) (b) The rank of the standarrl matrix for the orthogonal projection of Rn onto a line through the on gm 1s 1.31 89 28 . v 2 } 28 . idempotent. . 1.31 25 20 1 . and onto its orthogonal complement is n . Then !I'TA = [1 2 3 4 3 1 0] [~3 2 0 . 02. D3. A 5 x 5 rnatnx Pis thl' s tandard matrix for an orthogonal projection of R5 onto some 3dimensional subspace if and only 1f it is symmetric.y + 3z = 0 is P=~ 10 2 14 [ 6 13 3 2 6] 3 5 1 = 14 Thus the standard matrix for the orthogonal projection onto W J. If n ~ 2. then the rank of the standard matrix for the orthogonal projection of Rn onto a plane through t he ongin is 2.89 Thus the orthogonal projection of the vector v = (1. is gtven {in column form) by l 89 51 \ Pv ll r:] 1 23 23 54 28 .1.236 Chapter 7 30.~~ 1 ) 2 = llxll 2  ~~~~~r Thus q =/ ll x ll 2  1 o~IW is the vector component of x orthogonal to a .31 59 5 20 5 14 is and the sta ndard matrix for the orthogonal projection of R4 onto the subspace W p = A(A r A)1 l r = [~ ~]1 3 0 .c . In Exercise 20 we found that the standard matrix for the orthogonal prOJection of R 3 onto the plane W with equation 2.1. If x1 = projax and x2 = x. Let A be the 4 x 2 matnx having the vectors v 1 and v 2 as its columns.
P) 2 =I . 0}. Since projwu belongs toW and projw.ny solution of Ax = b ..7. = 0 or 1. On the other hand. We have (3 . 3) and v 2 == (0.P is idempotent. e. it follows that ((Wl.). D8.. 0. 3 . then P 2 (Pis idempotent) and so p k = P for all k .x. D5 . (c) True.L. = W. and that>.tor as its columns. (e) False. and the standard matrix for the orthogonal projection of R 3 onto W = row(A} is given by ~ ~] = 1\ [~ ~~ ~] 3 1 10 Finally.. but is not the standard matrix of an 1 l 1 orthogonal projection. D7. and so >. 1. thus the general solution of 0 0 010 < t < oo. Note that A 2 = 3A. 3. Since (W. Let. it follows 'that >. J) form a basis for the row space of = . Suppose that A is ann x n idempotent matrix. we have Xrow = Px = 11 1 [ ~ ~~ ~] 3 1 10 [.. since projcoi(Aj b belongs to col(A). and its matrix is the identity matrix.48t + 1lt2 and so the solut. subspace W. so A is not idempotent.8). For example. In this case the row space of A is equal to all of Rn.ii. Then BT B = [1~ ~].1. the two vectors a. to = (7=.t) 2 llxll 2 = {7. t} where oo r~ ~ ~l ~] .re orthogonal (b) False.g.3t. 48 + 22t = 0.t .\2 x. In fa. Thus the orthogonal projection of Rn onto row(A) is the identity transformation. = W (Theorem 7 . Then A 2 x = A(Ax) = A(. we also have (J .. ~1) 5 = ( 1p 1~. D6. B be the 3 x 2 matrix having these ver.P .] = 0 f [ !] 1 24 . The matrix A = 1l 1 ] [ 1 1 1 is symmetric and has rank 1. is nn eigenvalue of A with corresponding eigenvector x (x f 0). Since P 2 = P.DISCUSSION AND DISCOVERY 237 D4. the matrix P = [~ ~] satisfies P 2 .h) = . (a) =P TJ"ue.l)l. onto t he row space of A.2P + P 2 = I.7.=: ~1 · We conclude that Xrow + t2 = 58 . DlO. See the proof of Theorem 7.7. we have pn = P.ion vector of smallest length corresponds to ft lllxii 2 J = ~~).4 . u belongs to W .e.ct. in agreement with the calculus solution. we have A 2 x = Ax = . From the row reduction alluded to above. since A 2 =A . D9. thus I. If P i~ the standard matrix for the orthogonal projection of Rn on a. i. x (7. Using ca lculus: The reduced row echelon form of lA I bj is Ax = b is x = (7 .H. Since x =/= 0. we sec that the vectors v 1 = (1.:::: 2. Using an orthogonal projecti on: The solution Xrow is equal to the orthogonal projection of a. 3. the system Ax= projcol(A)b is always consistent.>.:: P but is not symmetric and there fore does not correspond to an orthogonal projection. In particular.1.3t) 2 + t :.? = >.L )l. (d) True.
Px) = Px t (I. WORKING WITH PROOFS Pl.P = 0 and so for evcrv x and y m R" This completes the proof. The rows of R Corm a basis for the row space of A. since pT = P = P 2 (Pis symmetric and idempotent).Ax )· Ct(A) = 1\ !(6){1) + {27}(2) + (15)(4)] = 0 . (b.7. We will show that P is the standard matrix for the orthogonal projecl ion of Rn onto W.P) = P . P 2. then bTb = b · b = (ta) · (ta) = t 2a · a = t 2aTa and (similarly) b b T = t2aaT. To this end. First we note that the columns of A are linearly independent since they are not scalar multiples of each other.3(b) that the system Ax= b has a unique least squares solution given by The least squares error vector is b.e. It follows from Theorem 7. Lf'l P hl' a symmetric n x n matrix that is idempotent.P)x = 0 for all y m R" Finally. EXERCISE SET 7. If x anJ y are vectors in Rn and if a and /3 are scalars.P 2 . If b = ta. we first note that Px belongs to W and that x = Px + (x. from Theorem 7. i. that Px = projwx for all x in Rn .A x = [~] . thus 1 T 1 2T 1 T bTb b b = t2aTa t a a ~ aT a aa P3. Thus.7.8. Thus T(ax + /3y) = a · (ax + /3y) (a · x) (a · y) llall 2 a = a:lja112a + /3~a = aT(x) + /3T(y) wh ich shows that T is linear. For example.1GT is the standard matrix for the orthogonal projection of Rn onto lV = row{A).1nd since W .P )x To show that Px = projwx it suffices (from Theorem 7. and has rank k.P . then a· (ax+ /3y) =a( a· x) + /3(a · y). thus A has full column rank.[~ ~] ~ (~0] ~ [~~] 5 4 5 11 8 = 11 15 and it is easy to chP"k that this vector is in fact orthogonal to each of the columns of A . this IS equtvalem to showmg that Py · (J .5. Then W = col(?) is a kdimens•onal subspace of R".238 Chapter 7 Dll. G(GTG).col{ A) = ran(P). we have P(J.P)x belongs to \1' • .4) to show that (I. and G = RT has these vectors as its columns.8 J.
[ 9] = [3 .check that this vector is orthogonal to each of the columns of A.!.l1 21 1 4 . the standard matrix for the orthogona.. ~] [21 35 [11 2 4] = _1 [~~~ !~ ~~1 25]l 3 5 220 25 4 5 20 90 170 :). 3.to.1  9 The least squares error vector is and it is easy . The colunms of A are linearly independent and so A has full column rank. From Exercise 1. the least squares solution of Ax= b is x = f1 [~~). the standard matrix of the orthogonal projection ont.8 239 2.)tAT::::.J projection of R 3 onto col(A) is p _ A(ATA. thus 13 Ax = 1 2 2] .!:_ . Thus the system Ax= b has a unique least squares solution given by 6] 2 I I] 21 2..EXERCISE SEl' 7.thus 16 Ax = 11] ~ [~ ] = ~ [28] [ 2 4 3 11 5 0 8 11 40 On the other hand. the least squares solution of Ax = b is x = 2\ t:].nd so we have projcoi(A)b = Pb = 2~0 [·~~~ ~~ ~~1 ~1 20 90 170 [· f> [ 1 = 11 [~:] 40 = Ax 4.21 0 l [ 2 1 3 [ .: [~.. [~~1 13 =Ax .o col(A) is and so we have 17 ~1 [~] 1 = 21 1 . [14] lj.5 21 [ tj6] On the other hand. From Exercise 2.
1ed by solving the normal system AT A x = ATb which is 0 1 The augmented matdx of this system reduces to [: 0 ' 1 : 'a 7] 0 i .4Th which IS Sinc·p the rnntrix on the left is nonsingular. thus there are infinitely many 0' solutions given by . 7.lor jc:. The least. n and the least squares error is li b ..Axil= V(~)2 + (~)2 + (~)2 = J* = :Lfl. this system has the unique solution X= [J:1] = [24 8]l [12] = 2_ [8 8] [1 =[to] 6 24 8 l 2] 8 6 8 80 X2 The error vector is and the least squares error is li b . The least squares solutions of Ax = b are obtained by solving the associated normal system AT Ax= .240 Chapter 7 5. The least SCJUf\rf'S solut1ons of Ax= b are obtai.AxiJ = V(~) 2 + (~) 2 + (0) 2 = Ji = ~· 6. squares solutions of Ax= b are obtained by solving the normal system AT Ax = ATb which IS 14 42] 4] [42 126 [Xl] = [12 X2 This (redundant) system has infinJtely many solutions given by X= [::J = r~ ~ 3tl = [!J + t r The error ve.
EXERCISE SET 7.Ax il = 8. The least squa<"' solm ion i:> o b tained by sobing the norrnal sys tem MT~H v :.1 1 7] 3 0] 20 8 2 47 7 10 T hus the least squares straight line fit to the given data is y==  + I0 x. = [~} '[=:] .8 241 T he error vector is and lhc least squares error is lib . 0 0 x = [:. this system has a.Axll = V( !iY + {fr )2 + {. 12 I 3 456 7 . tf 6 1 : I thus there are infinitely many .~) 2 = ~ = ~ The least squares solutions of Ax= bare obtained by solving AT Ax= ATb which is 1 [ 17 33 ~ ~~ ~~] 50 1 [::] X3 = 0 0 Tbe augmented matrix of this system reduces to :.'C tor is [W :] .·fi)2 =y wh"c M = 9. unique solution given by 1 [ 4 16Jl [IO] ] [V2 = 16 74 47 = v y 4 2 1 [··"37 8] [tO] ::::: 10 [3= r.] T he error vt.] . 2 and t he least squares error is llb .f1j.:: M T y which is [1 ~:] [~~] = [~~1 : Since t he matrix on t he left is nonsingular. The hnc"' model fo.olutions given by [~ 1 [~] 1:sl . = f[I . J(!)2 + ( ~)2 + (. the given data b M v ·~ [i il 3 10 and y = [.
l [ :] = [.t 22 62 [l.] [:] = [. The Hnear model for the given data is Alv ~ y where M ~ [1 ~] = MTy and y ~ [!]· The leaSt squares solution i~ obtajned by solving the normal system AfT Mv which is r: 2~J r::J = r~J Since the matrix on the left is nonsingular..242 Chapter 7 10. The quadratk least squar" model for the ghen data is Mv.~ 178 27 ! 6  1 J .y whe" M The least squares solution IS ~ [~ ~ ~] and y ~ [!]· = MT y which 1s obtained by solving the normal system M 7 Mv r .'t ] [ 9'] 8 8 22] 1Jl V3 l22 62 178 Since the matrix on the left JS 27 nonsingular. this system has a unique solution given by [~:] = [ : V3 22 62 2~ ~~] .16~] ! 3 27 1 3 Thus the leas1 squares quadratic fit to the given data is y = 1 y 11 x 6 + jx 2 I . tllls system has a unique solution given by [Vl] = [4 8 V1 22 8] l [4] 1[22 8] [4] = 1[16] = [~] ! 9 = 24 8 4 9 24 4 Thus the least squares straight line fit to the given data is y y = j + !x. 2 I 2 3 4 I 11.
The quadratic least.4 ~ and the solution.9 where 1 4 !) r 1 M::::: 1 2 3 4 5 s 27 64 25 125 16 10.a3) 14. squares m.864. The model for the least squares cubic fit to the given data is Mv (5 .n the give n tlat<'.ion is obtained by solving t he normal system Mr llifv = M'I'y which is Since the ma.41 916."3t squares cubic fit to the given data is Mv 1 =y 4.he lea.] [ 1 H ~] [=:] == ~ 2 Thus the least squar~:. is (ao.775).9 y~ 2 3 4 9 10 8 27 64 3.8 27.rix on the left is nonsingular.3 .1 5 15 55 225 225] n a1 a2  a3 l 2168 18822.0.ti~d normal sysle111 A(r /11 v = lv[T y 55 is l~~ r 979 225 979 4425 2'25 979 442:> 2051.'> quadratic fit i:. The model for t.160. written in comma delimited form. =y where M= 1 1 1 1 1 0 1 0 1 4 0 1 0.0 4087.1 57. 1.EXERCISE SET 7.odel for the given data is Atv = y where A1 = 110lj andy= 1 0 1 1 1 1 2 4 The le3st squares snl tti. a2 .8 243 12.9 y= .4 24 .0 T he associl\.1 9. a 1. 60.is y = 1 .2 113. t his system has a unique solution given by [~:] [! : 1~] l [~] 113 = 6 10 18 14 [~t : . {).t.2 _.811.~:r + ~x 13.
a2..2x4 = 0 is d = I( 1)(1) + (1){2) + (2)(0) + (2)( 1)1 /(1)2 + (1)2 + (2)2 + (2)~ = _1 _ Jill = JTij 10 9 and the point in the plane that is closest to Po is Q == ( 10 . as) ~ (0. 1.y.(1)\1)1 /{1)2+(1)2+(1)2 = _:__ = 2.! computing thr orthogonal[~r oje~]tion of the vector b vectors of the matrix A = onto IV ag ). 1 %2 ·· •· · 1 :J:n .4 94.171. Thus the normal system can be written as .x 2 + 2x 3 . 2. L. If M = 1:tal and y = [. is (ao. 1 • :t2 • 312 [Yal .2 and the solution. . 1) to the hyperplane x 1 . ' and 1 :tn Yn 2 1 Xn MTy = [ XI !Ill 1 1··· 1] Y = [JY• ]. 2. (a) The distance from the point Po= {1. 100] 354 a1 a2 5 10 30 [ 100 10 30 100 30 100 354 354 1300 a3 15. • X l 1 2:2 n : ] [1. 0. !in DISCUSSION AND DISCOVERY Dl.817.4 4890 4396.200).z d = 0 is = IP)(l) + {1)(2) . The latter is found by ~ 1 = OP0 :+ onto the plane· ThP column ~ form a bnsis for tv and so the orthogonal projection of b given by (b) The distance from the point P0 = (1. 2..Chapter 7 The associated normal system MT M v = MT y is aol [ 323. :t!] ~ • .'3 V3 3 and the point in the plane that is closest to Po is Q = (~. The latter is found by computing the orthogonal projection of the vector b = OP~ onto the hyperplane: 0 P' Jwb = [~ ! ~] io [~ : :] H ! ~] Ul ~ 1 10 [ = ~~] ..8] 1300 [ = 1174. .120 . a1. 3. .586. :ral = [r: :z:. 1) to the plane tv with equation x + y.~. :1:2 :Z:n : L.x. written in comma delimited form. then MT M = [ 1 .1~ ). ~~..
would be rel\. T he m<><. It follows that b .s least "''"'"' fit .nd b .nr.] [:] = [~] if and only if Ax + r =b and AT r = 0.lATh.1 ATb .. We have A = = b if and only if b ..[~ ~] (~] = [~].. the vector x is not a least sq11ates solution of Ax = b.[~] .'lulting in y = 251 + ~ i as the best least squares fit.Ax belongs t.DISCUSSION AND DISCOVERY 245 D2.1 A'T b . •nd the conespond.1)(2) + (3)( .8. Note that AT r = 0 if and only if r is orthogonal to col(A) . ~ Thus th~ l~a.~ 4 5 s 14 s . From Theorem 7. l [~5] 21 = rP.~e to x = 45."t squares solution is al 3 2 3]1 11 [i * [~ L= J l [~ . (b) The least squares solution of Ax == b is X= (JiT A) .93 Ti u~se equations are dearly iucornpatible and so we conclude that.Ax = [~] .. 05 .68 =0= 5s .7) + (5 )( s  14) ·1s .l .1) .Ax is the least squares error vec". (d) The least squn. (e) D3.onable to p~rform a linear least squares fit and then use the resulting linear formula y = a + bx to extrap0l a. (c) The least squares error vector isb A( AT A).s [ 'l 36 4~] [:] ~ r~.Ax belongs = [ [~ ~] 4 5 a. by a curve of this type.1 Ar .A(AT A)1 A1'bll.res error is li b.14 ~7 1.c. y 10 6 4 2 I 2 3 4 5 6 7 D6. [: : il 1:1 : m.ng nonnal system .J. thus it. We have [~ ~'. (a) The vect or in col(A) Lhat is closest to b is projcol( t!)b = A ( AT A) . a vector xis a least squares solution of Ax to coi(A).L and so. from Theorem 7. D4.o coi{A).8. x is a least squares solution of Ax= b and r = b . The stan<iard matrix for the orthogonal projectiou onto col(A) is P = A (A1'A).4. thus b .n 1 =  9 [1 ] ~1 ¥ . .el foe t h. for any value of s.4..Ax is orthogonal to col( A) if and only if ( 1)(2) + (2)( 7) + (4)( s  14) = 0 ""' ( . The given data points nearly fall on a stra ight line.
m + 1 of the numbt>rs x 1 . q2::: (~. v 2 form an orthogonal set.1. . If Ax= b is consistent. is a root of the polynomial P (x) = a 0 + a 1 x + · · + amxm.O. v2.i) . ~). the solution space of the latter is the translated subspace X. thus projwb is the only best approximation to b from W . in addition. if n > m and if at. Xn are distinct. . thus the vectors v 1o v 2 do not form an orthogonal set. j. x 2 .~.q2 = (7s. 0. the least squares solution is the same as the exact solution of Ax = b P 2. we have Ax = projcoi(A) b = 0 if and only if x = 0. · v2 = 0. then projcoi(A)b = 0 . thus the vectors v 1 . ~. (b) These vectors do not form an orthogonal set since v2 · v3 (c) = .9 1. then ao = a 1 = a2 = · · = am = 0. as in the proof of Theorem 7. . ( d ) We have v 1 · v 2 = v 1 • v 3 v 2 · v 3 = 0. v 3 form an orthogonal seL. T h us. then b is in the column space of A and any solu t ion of Ax = b is also a least squares solution (since li b . If. wll > If ao.~. a 1. The correspondmgorthonormal set isql = (J5. v 2 . thus the vectors v 1.. q3 = (~. 2. a2. .js).3.Vt · v3 = 0 we have v2 · v3 = (1)(4) vectors v1 . fr om Exercise P5. v 3 do not form an orthogonal set.. since t he columns of A are linearly independent. These vector~ form an orthogonal set but not ~n orthonormnl set since llv 3ll . from Theorem 7 8. (a) v. If w is in W and w :1: projw b then. v 2 . :r2 . we have l b lib . The corresponding Orthonormal SC t IS ql = = (~. + amCm+l(M) = 0.~). · v2 = (2)(3) t (3)(2) = 12:/. q2 = 11:~11 = ~).~).j3 :1: 1.. t he least squares solution is also umque. If at least m + I of the numbers :r1 . (a) These vectors form an orthonormal set..'3. thus the (a) .. P 3. 0. v2 do not form an orthogonal set (b) Vt · v2 = ( 1}(1) t (1}(1) = 0.+ W where :X is any least squares solution and W = null(AT A) = nuii(A). thus M has full column rank and M T M is invertible. ~).• :rn are distinct then. u::u (32. ~). 0. • ..projw b ll ..8. = 3. EXERCISE SET 7. the column vect ors of M are linearly independent. ~). v2 . am are scalars such t hat aocdM) + a 1c2(M) t a2c3(M ) + then ao + n1:ri + a2x~ + · · · + amxi = 0 . the columns of A are linearly indep endent lhen there is only one solution of Ax = b and.Axil= 0). P 4. 2. 0. . . (d) Although v1 · v2. ! . This shows that the column vectors of M are linearly independent P6. in this case.246 Chapter 7 WORKING WITH PROOFS Pl.. v 2 form an orthogonal set. Thus. ?tJ) (b) v 1 • v2 :1: 0.. q3 = (!. :/. least. (c) We hnve v1 · vz = V t • v 3 = v2 · v 3 = 0. = . From Theorem 3 5 1. The least squares solutions of A x = b are t he solutions of the normal system AT Ax = AT b . q2 = (~. But such a polynomial (if not identically zero) can have at most m distinct roots. If b is orthogonal to the column space of A. (c) V t • v2 :/. + (2)(3) + (5)(0) =  2:/. thus the vectors v 1. Thus.. for each t = 1. P5. t hus the vectors v 1 . The corresponding orthonormal set is q l = (~. thus the vectors v 1 . v3 do not form an orthogonal set. n Thus each x. thus the vectors v 1 . v3 form an orthogonal set. The corresponding orthonormal set is q1 = (~.
l projection of w onto W = SJJan{ v 1. (a) (b ) 13.9 247 4.!. . we h.~) + (x · vz)vz + (x · v 3) v3 = (l )vl + (2)v 2 + (O)v3 = {~. we have P = [ ·~ 3 2 1 J :i ~J r·32 2 3 '2 3 I J 14. !. Using t.nd llv zll f..~) 7. . 0. v 2 } is Pw = . we have he . 1) = (~. 1. the or thogona. and llvJII f. r! s~9 _1 [ 0 = [ ~] ~] 2] ~~ 3 l 9 2 ~9 9 !§ 9 _L98 On L other hand. 1.. ( b) No.hf! matrix fouud in Exen.. L 6.isc 1:3. ~ .:we P = [ ~ 72 V'6 ?J 76 ~ I js]. 0. llv1 1l (c) No. (n) (b) 8.EXERCISE SET 7. Using Formula (6). • vz f. ~ . Using Formula (6). (a) projw x = (x · v1)v1 (b) projwx = (x · vt )vl + (x · v:a)vz = (l )vt + (2)v z = (!. 4} + (1. .1. ~. Vt f. using Formula (7) . 1 . vz · vJ f. (a) Yes. llv<dl # l.[ I~ 2 I 7u 3 2  I 76 2~] ~· I 275 6 :. ~. 1 5.
v 2} is [~ ~ ~] Hl ~ [ =t] = span{v 1 . From Exe<Cise 17 we have P ~ [ t t ~] .l· ~)and n~~ll = ~. We have 11 :!11 (~. Thus the standard matrix for the orthogonal projection of R 3 onto W = span{v 1 . Using the matrix found in Exerctse 17.mg Formula (7). the orthogonal projection of w onto W Pw= . v 2} is given by (S. We have 11~:11 = and ~~~~II = (~. using Formula (8). p = [!_~31 j1] [l Pw 3 3 2 1 il 19. . we have projww = (w · Yt)Vt + (w · v2)v2 = (~)(j. ~) + {~)(!. j. using Formula {8). ~ ) 1 6 17. Using the matrix found in Exercise 18. ~) = (~.9~~] [~2] = [~OJ 9 On the other hand. we havP 20. ~). v 2 } is On t ht> r. [ !~9 ! . 7J) P= '7s· 76). Using the matrix found in Exercise 13.thus tcaoe( P) ~ j + j + I ~ 2. we have 21. 73 76 1 1 73 [ 1 ~ = 73 [! : ~] 0 0 1 18. 7J. the orthogonal projection of w onto W =span{ v 1 .. v 2 } is given by (7J.2~8 Chapter 7 16. !.t her hand. v 2 } is On the other hand. Thus the standard matrix for the orthogonal projection of R 3 onto W span{v 1 . the orthogonal projection of w onto W = span{v1 .i. u:.
v2} is an orthogona l basis for R 2.. 1. 1. and form u. v. ff 24. 28..( 1. The dimensio n of the range is equal to tr(P) = ~ +~+ ~ = 2.. 5)  ft:!n n)(l.EXERCISE SET 7. w 2 . The dimension of the range of the projection is equal t o tr(P ) = ~ + 251 + = 2. 1) = (1. Let YJ = w =(1. t 25.. = llvdl = (l. )(1 . .IJJ:I!~ Vt = (2. 7.3) and v 2 = w2 . .u..fg + !~ = 1. 1. . ./11925 J uns 1 VJ (Q 30 105 ) form an orthonormal basis for . ftc) forrn an or2 Let v 1 thonon nal hAS is for R . thus Pis the standard matrix. T he dimension of t he range of the projection is equal t o tr(P ) = ~ + + ~ = 1. O. I' and i t is easy to check that P 2 = P . and Then {v 1 1 v 2.0). We have pT = P and it is ea.~ . Y 2 = w 2 . Let V J = Wt = (1. J. Then {v 1. thus Pis the standard ma.0) ami q 2 = for R 2 · ff'J:u~ v 1 = (3. The dimension of the range is equal to tr(P) = 19 + . ~ + ~ = 2. R3 . and the vectors q 1 = 1 ~: II = (~. I) . . We have pT:. t hus trace( F ) '9 ] 2 ~ & = ~ . v3} is an orthogonal basis for R 3 1 and the vectors q. Let.'!y to check that P 2 = P . From Excrciso !8 we have P . jfo) and q 2 = 1 :~ 1 = (fro. O). WJ = (1. 2). of an orthogonal projection. O.t. = {0.:>). 26.is s 29. 30. 0}.(f)(l . 2). !> = ~ {:3. VJ q3 = llv3 U= ' .u! v l = (3. 27.1} fo rm an orthonormal ba:.0) = (0. 0) and V2 = W 'J Then Q 1 = 11 ::~ = (1.n orthon orma l hasls for R 3 . . 1). = wl = (1. 7.9 249 22. 0) = (0 .. 23.3) = (Jj.lfv~[Jvl = (1.(~){ 1 . 0).rix of an orthogonal projection. o.. 2). [! 9 4 ~ . voz = .
and w 3 = (0.ts • s fo rm an orthonormal basis for R4. v 3 .1. 2.{IQ ~ 10 ' 10. 1.£@ v'iO) s .I[~ VJ = (1.250 Chapter 7 31. i. 1. and Then {v 1 .a.fJ. 2. v2. 1. q2  .ill uTI . and the vectors ql   YJ llvdl  {0 2Vs J5 0) '"5• Sl .•  11 v~ll v~ .~· 0). V2 = w2. Let VJ = WJ = (1. ~~ ~). v3 . 0) = (~. v 2 . . The vectors w. = (~· ~.s.1. 0) = (1. 0). 0)  ( ~)(0. Let Vl = Wt = (0. v 4 } is an orthogonal basis for R 4 . and Then {v 1 .~.2. and the vectors form an orthonormal basis for R4 • 33. V2 = Wz IIJ:Ii~vl = (1. 0.Q 0) 30 ' IS I ' q 3 11 v3 ll VJ  (.(~){ 1 . 32.iil) 15• 1s . . 2. 0). w2 = ( ~·. 1) form an "rthonormal basis for R3. . 0).( v'3(i llv211 6 I q4   VJo :.&. 0). 0. 1.(v'iS . 2.~ . v 4 } is an orthogonal basis for ~. 0).
. V:~=WJ  = \1: 2  ~!./155630105 1 J l55630105 37. No~ that u 1 and u 2 are orthonormal vectors. v . 11 1 35.EXERCISE SET 7.. w2}'.W z t. w3.o 11 :.0). 2) and 2)( 2 1) v 2= w 2. and q4 = 11 :!g = (dis.llv Then {vt .7).c. Q z. 7) . NoLe that w. 7rn.~ . 0.~ llvdl 2 Then {v 1. = .w 3 • Thus the ::. !) and w2 = ( ~.uhspace \V spanned hy the given vectors i. 3) (~. the vectors w 3 and w4 ar<:: orthogonal to each other.1 = w 1 .0. VJ} is an orthogonal ha. The reduced row echelon form of the augmented matrix for this system is [~ and oo a general ootut.)(l • 2 • 4 • . 1) form a basis for span{w1. 1 2 w. 7:3• 1:. [\ . 1.( . .l ) . w2}. 0) and w4 = (.~) .0) as i~s rows. Note also t ha t. w2} .o).1. Thus the orthog~na. Note that W 3 = w 1 + Wz .1! ' _J.41 . t\nd let.ffl.)r8. anti the vector!' Ut V1 = llvdl = tO. v2 . ~.Jt process to t hese vectors amounts to simply n ormnlizing them.on . Then row(A) = s pa. 1.!. I UJ form an orthonormal basis for W. 0. 0.w2·V1 v1 =(1. Q3 = 4. ( 9 ) jj2 v2 = (2.. Q:$. · l 2 ) form an orthonor mal ba..f. 7. A basis for null(A) can be found by solving the linPnr system Ax = 0 .:~!. 7J. £~ • ·.·[!]+ •[=..( 5 0. ~· ~· ~).·l} i::: an Ol lbogonal basis for\\' .9 251 34. ..1. 2.n{wt. ~V .'5 I  w3 • v 1 llvdl 2 v1.0) = ( .)5. { qt .) ""' (.2 s2 35 ) JS'iit4 ' •/5614 1 15iii4' J5614 ' v an( = ~ llvlil = ( 9876 3768 5891 3032 ) J 155630 105' v'l55630t05 ' . ~) and the component of w orthogonal to W is Wz = WWl =(1. w2. in adrlition t o being orthogonal to w1 a.n{w1. ~) . w2}.0.~ :3dimensional with b(l~is { w 1 .1 .. 3)(.( 31843) ( ... Let A be the 2 x 4 ma.. w3}. ~· 0).g. 2.~.osis for U2 w. 0 _l 2 1 .1. 2. Thus B is an orthogonal basis for R4 and application of the GrarnSchmic. and uull{A) = spa. 4. W ?..vl = (3 • 0 ' 4 ' 2) ·. Let v1 = Wt = (0.1.(2.2) .is for 36. ~.l projectior.•] = w. Thus the subspace W spanned by t he given vectors is 2diinensiooal with basis {w 1. Q 2 = W z:::: (7.] Th~ by 1 .nd W'J..41 .~ 26 ' ~) 401 14' 7) 7 2 70 2 14 9876 3768 5891 3032) = ( 2005 ) 2005 > 2005 200. = (~.~.l52 ) .trixhavingthe vectors w 1 = (~. and the vectors Uj = n::u = (j.o. = (js .2. l)=(~. This results in the orthon o110td l>asi::.. w 4} is a basis for R 4 .. .!1. Uv211 = ( . Let v 1 = w 1 = (1 .~ · v 2 .1. :76.. 4 _! 4 : I I 2 l 0] 0 the vootorn I ~··• [m . vf w onto the subspace W spanned by these two vectors is given by WJ = projw w = (w · u t) u 1 + (w · u 2) u 2 = ( 1 )(~. Jlv 1 jj 2 ' · . and B = {wJ. /S' ~ .2. 2.. q 4} where Q1 = Wt ~ (L ~. ~) + (2)(0. .
0. Any orthonormal basis for col(P ) can be used to form the columns of M . Q) . If w 1 is a linear combination of w 1 . Jh.1) 1(1. ~~ ~) = (~ 1 ~~ 19 4 1 . when applying the GramSchmidt process at the jth step. !.n{vJ . the vector v i will be 0 . 7s. D4. If w = (a. ami applKHion of tlw GramSchm1dt process to these vectors yields '\n orthonormal basis {Q 1.'2 c DISCUSSION AND DISCOVERY D 1. ••. .~. D3. Q2} for w by applying the GramSchmidt process to {UtI u 2}· Let = U t = (1. c). D5.U~Vt = {0~ 1 1 0. w 2 . 2. 6. col(M) = col(P ) (b) Find an orthonormal basis for col(P) and use these vectors as the columns of the matrix M . and let Vt q l = 1~: 11 = ( 0. (c) No.~) 39. 0) onto W is given by Js.c 2 ab 2 b b acl be c.( ~ 1 . 1. w2 . . Q2} is an orthonormal basis for H' . a) and u2 (0. and so the orthogonal projection of w = (. w 3 (b) v 3 IS orthogonal to span{ wt. b. w2 .1. the sta. q2 = 11~~11 = ( ~· ~ ·T hen {qt . Q2} = = where 02. w~c are linearly dependent. Ji2).6. ..ndard matrix for Lhe orthogonal projection of R 3 onto W is P = u T u = a2 1 :2 + c2 a] Ia [c b b c] .r~. then u 1 (1. v 3} =span{ Wt.l basis {Q]. (a) span{vt} =span{ wl}.2) = (j. If the vectors Wt. = a 2 + ~2 + c2 [aab a. 0.252 Chapter 7 38. (a) . .!}. Thus.l projection onto the column space of A. fs). w2}.l. If A has orthonormal columns. w 2} 1 and spa. b) form a basis for the plane z = ax + by. then AAT is the standard matrix for the orthogotia.0 1 11 2) 1 v2 = U 2 . 2. v2} = span{w 1. span{ v 1. v2 . If a and b" are nonzero. then the vector is an orthonormal b~is for lhe !dimensional subspace lV spanned by w . and the component of w orthogonal to W is W2 =W  Wt = ( 1. w 1 _ 1 then .1. t hen at least one of the vectors in the list is a linear combination of the previous ones. First we find vD orthonorma. using Formula (6).
. . l!v2ll V2 ) llv1ll + . the orthogonal projection of a vector x on W can be expressed as 0 proJwX = ( x . Thus A= UlJT where U is any 11 x k matrix whose column vectors form an orthonormal basis for tV . t. (d ) 1'ruc. v~c/llvkl} is a. wj 1 . . w2 ___ . WORKING WITH PROOFS Pl.. .nd v.. .EX ERCISE SET 7.. . v. W j } .ors is linea.. 2.or q 3 is ort hogonal to the subspace span{w 1. . Thus ap fll 2 i] 3 = [~ ~] v'5 V5 [J5 0 V5j = QR v'5j . A ny orthonormal s et of vectors is linearly iudeptmdent. An orthogona:l set may contain 0 . However .o rs of the matrix A are w 1 procE'SS to thf' ~'>E: = g} and Wz = [~] _ Applicat ion of t he GramSchmidt vect. v'/. prove that v i E span {w 1.1.10 1 . 0 T hesf: two s te ps complete the proof by induction . . it follows that v J+ l C span{ Wt . (7 s pan{ Wt .pan{ w d .he stat ement is true for j = L ~qual Step 2 (inductio n step) . .r ne fo r integers k which a re less t han or to j. t hen A is the s tandard matrix of an o rthogon al projection operator.. w 2.. w 2}. we have v 1 E span{w 1 } . (c) False.n orthonor mal basis. fork = l . . .lse. i. is true for e ach of the integers k = 1. ./5q 2..rly independent. Strictly speaking. However. .. + Y2 ( x.. (a) True. v2/llv2ll. If {v1. wi. . .e.ric and idempot ent.c. P3 . Thus . We rnust. T he column vec. . v2. 2. w·2 ). hence no o r thonormal basis. Then and since v 1 C . w. Uv~oll V~c ) llv~cll Vk PZ.. . lf A. the subspace {0} has no basis. . .or yields We have W1 := { w.The proof is by induction on J Step 1. Since v 1 = w 1 . w 2 .. j . . true tha t any nonzero subspace has an orthonormal basiS. 2.:!a (3) yields the following QR·decomposition of A : A=== + . it is t r ue t hat any orthogonal 'set of no nzero vect. E span {Wt. (b ) Pa. .~c} is an orthogonal basis for W.. it i~.:ation of Form. EXERCISE SET 7. Suppose the st atemeot is t. is symmet.1} Thus if the s tatement. · q i)q 1 = J5q l and w 2 = {w2 · q 1 )q l + (w2 · Q2)Q2 = /5q l plio.10 253 D 6. 'The vect.t. Tfv~ Yt ) llvtll + V1 ( x. a. using part (a).hllS t. . then {vt/llvl ll . .} for each J . j then it is also true fork= j + 1. namely the ort hogonal projectio n of R" onto W = col(A).
w2 = ~ q 1 following QRdecomposition of A : + ~q2.J2q 1  1q2 + ¥ Q3· This yields the follow A v'2 1 1 1]= [~0 ~ 7s [v'2 v'3 02 ~] 0 73' [0 2 Q 1 0 0 72 73' Ja I l  il J2] =QR 3 h1 3 of A yields 5. Application of the GramSchmidt process to the column vectors w 1 and Q2 =[ s0t 8 aJ26 JJ26 l . w3 of A vields \Ve have w 1 = J2q 1 . Application of the GramSchmidt process to the column vectors w 11 w 2 . Application of the GramSchmidt process to the column vectors and w2 of A yields We have Wt = J2q 1 and w2 = 3J2q 1 + J3q2. This yields the following QRdecomposition of A: A= 0 1 [l [ 1 2 1 4 72 "J! 0 1 1 = 1 1] 73 72 13 [ 0 J2 3v'2 J3 l = QR w2 3. )q l + (wz · Q2)Q23 [~ 3726 [o ~] =QR : 3~ 8 ~ Ql + " f q z_ This yields A= ~ [ ~ ~ 11] = l 3726 1\ppliratton nf thP Grn111Schrnidt process to the column \'ectors w 1 . and w3 = .lf A: vl1q2. w 2 = "lq1 + ing QRdec01npositinn l.i!! 19 . and w2 the following QRdecomposilion of A: = (w2 · q. w3 Q2 = [ 3 ~] ~ 719 This yields the We have Wt = J2q 1 . and wa = J2q l 1 + '(P q2 + ~ Q3 · A= [~ 2 1] I 3 1 = l [~ 72 0 738 73'8 73'8 6 1 0 ~ =QR J2] .254 Chapter 7 Wt 2. wz .l of A yields We have WJ = (wl · Qt)CJ1 .3q.
. From fsxerci~e 3.ill 2 0 v'2 l . in this example. the system Ax = b is consistent and ~his is its exact solution.?S8 1 1 1 1 kl [Vi 0 o 41 0 3'(P 19 J2] = QR. l . we have A = . Xz ~. Note that. x .etn Ax = b is consistent and this is its exact solution. in this example. From Ex.v'38 'J19 3 Solving this system by back substitution yields X3 = 16. which is: v2 :r 2.EXERCISE SET 7.r·t u I J= Q R T hus the nnrmnl ~y~t.xprcssed as Rx = Q1'b.1 1 0 [o ]2 ~ 1 1 1 0 2 2 0 h 72 J] =QR 7. which is: w 2 . F'rom Excrci11e 5. and w3 = ~ Ql + ~qz + . = 8..J. . Application of the GramSchmidt process to the column vectors w 1.10 255 6.f6 . = = 0. x .i~.4 = [. 0 3 4~ W 3 [::] X3 = [~ v6 0 J3 2 1 ~ v·'G Solving this system by back substit ution yields X3 = ~. x 1 = .] ·"' QR.erci$c 1. Thus the normal .:/f.[72 [ jz 0 1 1  0 .. x2 = 5. which is: 3 W'16 LO ~ .1 1 = 2 1] 2 J [3_: j~l f3 3 2 :Jv"itl 7 Ax = b can be expressed as Rx = Qrb.s system hy back substitution yields the least squares solution Xz = ~g. ~ .t>rn for 3 [~ 8. 1 oJ ~. the syst. we have A = system for Ax 12 1] [~ ~ · ~ [0 31 = ~0 ~ 719 '.16 o v'3 v'2 " 73 )6 0 () Thus the normal 120 system for Ax = b can be e. 9... ·decomposition of A: w2 = q1 +qz./iG3· This yields the following QR A= [ r :L [. we have . w2. 102]. w 3 of 11 yields We have w 1 = 2ql.~ ~~] [v'2 v"2 .ill = b can be expressed as Rx = QTb .!l m Lil = Solving thi. Note that.
which is [~ ~ i][=:] [t = 0 0 ~ X3 0 72 1 Solving this system by back substitution yields x 3 = 2.1 13.1 1 2 2 4 J 0 1 15. t.aa ara = [~ ~ ~] .nd the retlec:ti•>n of the vector b = (1. H =1. Thus.§.b can be expressed as Rx = Qrb.1) about tltat plane is given. II 2 .aa aT a = [~ ~] 0 6 2 [. in column form. where a= (2. 2) about that plane is given. by 9 .l 9 9 o.he standard matnx for the reflection of R 3 about the plane tS 18 ~ [ 1 4] [ ~ ~ I 4 . x 1 = ~· 11. writing a as a column vector.I~ [~ ~ ~] 0 0 1 = [ 6 3 9 7 !~6 and the reflectiOn of I he vee lor b = (1.2 a aaT aT [~ 0 0 1 0 0 0 1 OJ 0 0 2 11 ~ r 0 u !l u ~il 3 3 3 3 3 2 2 I I 2 2 3 1 .\x . Thus the normal system for . x 2 = 121 .1 . Fi = 1.1 ~ 3 3 ~] ~ [~ 0 0 IT IT 9 2 IT 9 TI _. lhevlane..l. Thus. the standard matrix for the reflection of R3 about the plane is H =1 2 T .y + 3z = 0 corresponds to a. in column form. 1.. ~ ~ ~ _l ~] :1 .: Hb 0 0 0 I ~ H: J][~] Ul . by Hb= !2. 3).4 16 :.I O I 2 72 l I ll v~ = QR.0. 2 aT a aa  T [~ ~]~ [: 1 1 1 :] = ~1 0 = 2 1 14. The plane 2x .x 1y..l.writingaMacolumnvector.: . H = l .lz Ocorrespondstoa 1 wherea=(l.·0] IT 7 n II 6 i II . 2. we have A= [~ !:] ~ H ~ 1 1 0 ~ [: .256 Chapter 7 10.4). Prom Exercise 6.
1 0 0] 2[ 25 5 10] [. 1...1)(5.l.5)=(3.2~[: Then His the Householder matrix for the reflection about a.1 0 30 5 1 2 = 4 3 3 10 2 2 l] I5 11 is the . 1). L.rix is: e. 1.{0..w = (1.13 )] mat.0)=(2.1).0) H = J . /2) = (l.(¥.l . 0) = (3.. 0).standard matrix for the Householder reflection of R 3 about a. H. 2).aa aT a = = ( 5.. 0. Then the appropriate Householder matrix is: ~aar "'~ a1'a [1 OJ _ _ __ 2 0 1 ·12v'2 [(12 2 v'2) 1/2 1.1__. and Hv = w. Then the appropriate HoH~C holrlcr ( 3/'3 )( 12.w..t w = (llvll.(v'2. (b) Let a=vw=(3.v'2] v2 0 1 42J2 1(1.i13 ) 2 = (1 0 o] _ [~ 1 2 3±2/3] 2 2 32:/3 ..! 5 1 15 4. (1._2_ [ 1 1.. li=Iaf.l7v'2 1725/2] 22 33+8. and Hv = w...EXERCISE SET7. __.J2). Then the appropriate Householder matrix is: 5 .aar = aT a [~ 0 1 0 1 0 0 0 0 o 0] 2 10 [2 1 3 1 2 4 3 6 9 3 0 1 6 2 ~] T5 4 15 13 4 15 7 5 2 _1 = 2 5 2 ii _.4).i_ H' 4 s :..w 2 T lJ =I. (c) Let a= v.aaaT=[~ ~].fl Then 3 15 15 14 1 19. H=Iaf.4)(0. l'n 1 .2 [_: 10 ~]=[: ~] • Then H is the Householder matrix for the reflection about a./2) 2 = ( <:} [ot o] _[¥ l v'2 2 2. 1)..{/) = ( 6 S+l'l).10 257 2 16.l. = (1.aa aT a 2 T = [1 0 0] . 1) ./3).v'2] I ~~l [~ ~£] = 2 2 (b) Let a~ v . . 1). 0./2 18.. l. Then the appropriate Householder matrix is: H = 1 _ _ aaT = aTa 2 [1 0] .12.( Y1) ' 1+2v'3) = e2n. H = I .:::: [67~2v'2 2 1 &0.w = (3.. and a= v.~ [ ~ 0 0 1 0 .aaaT=[~ ~]..l../2 17·~5v'2 = r . and H v = w. (a) Let a=vw==(3... (a) Let a= v.v'2 2 ~] = [~~ J:i 2 Let a =v w = ( t. 15 2 g 15 15 5 13 '] 4 15 17..v'2. .4).
and a = v . 1). 0).4/S [(2 ..v'5l 2 . and a= v. 1). _fl 2  Y{l this :D ~] l = [ ~ ~ [_1 3 2] = [. 0). settmg Q = QT = Q.. Let w = (llvll.w = (2. setting Q = H .l'r 4] ~ 2 1 + J2] 1 J _fl 2 = [ . nnps v into w The standard matrix for this reflection JS OJ _ I = 2 10..aar aT a w 0 = (2.. a= v 2 H =1 . 2).f5 [~ OJ 1 Now let Q. Then the Householder reflection abouL a.v's.w = (1. 0) = (J2. Let v = (2.1 3 _fl 2]=[ :f1 2 X}] [h ~ ]~ {/0 Q R 22. 0.l. 2. Let v = (1. and Hv H !l l j '2 j 4 3 = w.1). 1). 0. Then the Ho\lseholder reflection about a. we have the following QRdecomposition of the matrix A: ¥ y] 0] [10 ~] =QR 0 . 0) = ( v's.l. 21.2/2 = 0] 1 2 [(1+J2)2 1 v'2 1 [ol o] _ 2 + J2 = We have HA [~ [3 +2J2 J2 ~].4. = [~ 1 0 ~]1 lq : 4 2 Then 4 4 4 = t 4] 4 is the standard matrix for the Householder reflection of R 3 about a. . 0) = (3.. w = (llvll. The standard matrix for this reflection is H =1  _2_aaT = [1 aTa 0 1 .2 0 ~] ~ =R and./2.. = [~ iJ] = [: 'i [2.''5) 2 2.258 Chapter 7 20. w = (llvll.::~ ~ = ~ ~ 5 5 l 1 [¥ ~ J s ~ ~] :. y1elds the following QRdecompositioo of the matrix A: A=[ .1 = HT = H.L maps v into w. 1 _M 5 ~]· and multiply the given matnx on the left by Q _w o s This ytelds ~][~ ~] = [~o Js]=R o 1 and. 0).
~j __ ~ 0 I I I ·I ':~ I r. the third entryin t he second colum'n A = {/ . 2 ·1 we obtain the following QRdecomposition of A: 1 [ 0 24.3'{2 3 o .. 0 ~ 2 0 0 ~] [~ ~ ~] = [~ 0 4 5 0 ~ 2~ 2/2 4 ..1'2 2 v. .l [V2 0 0 0 0 .1 /~.EXERCISE SET 7. a nd this can be achieved by interchanging the corresponding rows of Q 1 • This yields . multiplyiug by t he orthogomli matrix Q2 as indicated below: Q·> Q 1 . it can be made so by interchanging the 2nd and 3rd rows. Referring to the construction in Exercise 21. setting Q == Q:.. the second entry in the first column of A can be zeroed out by multiplying by the orthogonal matrix Q1 as indicated below: · 1] = [v2 3 0 5 0 2¥'2 4 0 .2J2 ./2] =R and finally..  ~ ~ ·~ ~~}~. QzA = fl 2 [ _fl 2 f.'2 ·~] = QR Referring to the construction in Exercise 18 (a) .2./2] 5 Although the matrix on the right is not upper triangular..2 ~.J2 0 0 0 w 2 ..:£1 2 ~ 2 \12 = v'6 2 0 0 0 2 ~ :& 2 _ ill 2 _:L§ 2 = R 0 0 .lL! 2 ...~··oe )zeroed out by From a similar construction ../2 2 ~ '  ~ V[l = 3 f2 :!.i§ 2 _ ill _ _.l o r= oi cY 1 o : o 1 o 0 1 \ . I · r. 't .. 1] o :If.. 1 ·.. ~ ~ ' y6 3 ·~~~l [v2 ¥ 0 0 I . / .:.i§ 6 2 1  :il 6 :il 6 :il 6 fl 2 A= ::11 2 0 ~ 6 2 1 2 I 1 ~ J ~ ~ QR 0 __ill 0 0 2 .. : 0 o: o 0 [l1 2 .Yf : 0 0 o o: 16 :If ./3 0 0 2 0 Finally.olof Q o . we obtain the following QRdccomposition of A: :£1 2 . 1 "" Qf.<> indicated below! QI A 1 ·. A= 1 2 ~ l [1 = ~ 0 ·0 1 _ 1..10 259 23.}1 . setting Q = Q[QfQT = Q1Q2Q3.._.. the second enlry in the first column of A can be zeroed out by multiplying hy the orthogonal matrix Q 1 a.: ¥ ' L.& 2 2 ~ 0 0 "' v'3 1 From a Uurd such construction. the fourth entry in the third column of Q2Q1 A can be zeroed out by multiplying by the orthogonal matrix Q3 as indicated below: sV2 2 .
(0.l mnps v into w Since llwll = llvll . 0..af.l. 12).rt (a). 1. To show that H = I .w = (8. D5. (v.l corresponds to x + 2y.nd . is the line 8x + 12y = 0. m)) given by D3. 3) = (1. and so (v . 1.·e used the fact that aaT aaT =a(aT a )a T = (aTa)naT.j [~ ~ ~] "" [1 1 1 1 ~ a _1 3 : =~] _ and so l 3 DISCUSSION AND DISCOVERY Dl. we have Hx = x. we have Hx =(I.w ).aaT a Ta aTa + .:i§ 3 Solving this system by back substitution yields :t3 = 1. D4.4.he others. 1}./2 v'3 0 0 0 33] 0 7s 4 [:ttl = :t2 X3 [730 .. 1). maps v into w . Then the reflection of R 3 about a.aaT)x = /x .. The standard matrix for the reflection of R 3 about ef is (as should be expected) 2 [~ 0 0 1 ~ ~] [~ ~ ~] 0 0 0 [ 0 ~ ~ ~] 0 1 and ~mularly for t. we have H = I .w ). Let a = v . . maps v into w .2aaT + . aaT .w). then llwll = ll vll and tbe Householder reflect1on about.af.z = 0 or z = x + 2y.: (1.. Since A = QR. (a) (b) Since aaTx = a (aTx) = (aTx)a.af..e:1T:)a . 1) = ( ~ . the system Ax = b is equivalent to the upper triangular system Rx = QTb whlcb is: [ v'3 . or y = ~x..w = ( 1. Jj).aaT is orthogonal we must show that HT = H H H T = ( I ..( $:)a = (3. 2 1 Using the formula in pa.. 2.faaaT ~ ~] . 2).a aTaaT aTa aTa (aTa )2 2 =I..2 aaT) aTa 1• This follows from (I . If s WORKING WITH PROOFS P2.l.4 aaT =I aTa where we ha. 2.260 Chapter 7 25. :t2 = 1. and the plane a .2 aTa ' 4 aaTJ T = I .. the Householder reflection about (v .2 aaT . :t 1 = 1.aaaTx =X. We have v . The standard matnx for the reflection of R 2 about the line y = mx is (taking a = (1. D2.2. = [~ 0 0 l On the other ha. = ±v's3. 26...l.
. . w2 . EXERCISE SET 7. for ea. w 9.. . The vector equation c 1 v 1 c2v·~ + c v 3 3 =w is equivalent to [~ ~ ~1 [~~1 = [~] · o 0 3 C3 Solving this sys 3 tem by back subi:ititut ion y ir. Wj} we must have w 3 • CV ::j:. 2.. 0. w2 . · v 'l .2. 7v2. 2. .~<jCA:(A) for each j = 1. 5) + (b) (w )n =(I . 0.. w. .he vectors Ct(Q).c2 = 2. . (w }B = (w · v 1 .. then u 6. . .11 261 P 3. if R.. 3 14 .2_ ~)=(~ _hQ y]) ' ' /2' . 30) . .2(2. 1) and ~~] [win= ~. 3. f4) a nd lwl o = ( ~]. v 2 2 + c3 v 3 = w is equiva.EXERCISE SET 7.ch j == 1. then Q = AR. 1) (w)£J =(W • Vt W·v 2 W·v 3 )=(.5{ 1.8.3 = l.1 = js. q. [ 4. 0. . Thus (w)a = (3 . :>. w · v 2) = ( ~ .c 1 = 3.. it follows that they form a basis for col(A). Thus (w ) B = {fa. ClJ 1 } = span{w1 . Th~1s in the expansion . 2.. then u = 8v 1 + 4v3 = 8(1 . From this it follows that the columns of Q belong to the column space of A. . . thus (w)B = (3.. c2 = 0. The vector equation c 1v 1 ~· r./3' v6 2 ' 3 • 6 . 5.11 1.~] = [~~] 3 Solving this 69C:J system by row reduction yif!k!s c 1 =' 2. 0) + (3.2. Finally.4 the 1:. .. (w)a = (w · v 1 . If (u )B = 7v 1  2v 2 5v 2 + v:~ = 7(1. w · v 3 )::: (0. .2. k. 9) =(56. . .l } which would mean that { w 1 .lent to [~ 3 . c3 = 1.} = span {w 1 . 7) and [w]B = [_~]· (b) T h e vect or equation c 1 v 1 + c2 v 2 = w is equiva.. 3). 5~).1 ) = StjCI(A) + s2jc2(A) + · · · + S.  1. P4. _. ... .[11)' and .j]. One ofthe features of the GramSchmidt process is that span{ q 1 . since dim(col(A)) = k and t.ck(Q) are linearly independent.1 .:. If (u)n = (7. c2 (Q). Thus (w)B = ( . 7.. . 6) + 4(7... for otherwise wj would be in span {Q1 .5. 41 . ] [. In particular. 2.1ent to t 11e Iinear system [ 2 a3) [cc2'] . ( 2. (a) We have w '"" 3v 1 . . Q2. (Q) = Acj(R.. ~) = 8. 1).. = (8. then from Q = AR.. k.1 it follows that c.ion of this system is c 1 = 1 c2 = 8. ../2. . . <1 ). 0) ·. 2. w 2 . (a) (w )B = (2.Q. q 2. If A = QR is a QRdecomposition of A. 2. w 1} is a linearly dependent set.olut. . 3). 1} 3. 3) = (6... 1). _...ldsc.
= ll(v)o 1+ (5)2 {2)2 + (2)2 llwll = ll(w)all = ..11. then (x'.. 5.y') = (~.a· (PB'+D). 1 and y' = ~x + y. y) = (1. y) =x = /2y. y') = (a.! = . . = (0.b).1) 2 llvll + (1)2 + (3)2 = Jl5 = li(v)sll = }(0) 2 t (3)2 + (1)2 + (5) 2 = J35 U . I ) (o. 1...fi2)2 + (4)2 + (3)2 + (1)2 = v'30 wll uv + w 11 = 11 <v >s + < >s 11 = 11 <. b).e2} be the standard basis for R2 .0). y') = (0. (a) If (x. 2+:<"'="'s 4=. Thuc.. = 3.(2)(0) = 21 15. thPn (x'.(5)(3) + (5)(0) + (2)(3) 4.. L).5. (b) ll u ll = ll(u )sll = j( 2)2 + (1)2 + (2)2 u·v i. 1. /2).1).y) ( b ) 1f (x..(w )all = 11(2. v 2 } be the basis corresponding to the .. v'2b) be the basis for the x'y'system ) 1 Let B = {i.1 [v2Jsl = [ ~] 1 0 V'1 and so = [~ ~]. llv.Jf05 5 !lull= ll(u) 8 11 = Hvll 1 j(0)2 + (0)2 + (1)2 + (1)2 j(5) 2 . and Jf3. >""":. 1. Computing directly· llull = V(~) 2 4.wll = li(v)a.!p. s = [[vi]a Ps..y) = ~x 73 0 ]. then (x'. !lull = ll(u)Dil = J( . Then Ps· .2. y = (u}s (v)a = (1)(1) + (1)(4) = 3.w U= il(v)a.. 1).+_c.~). = (1. 1.y') = (1.< 2+:< w =2=> =7~F. 4)11 = y'(2)2 + (1)2 + (2)2 + (4)2 = v · w = (v)a · (w)a = (0)(2) + (3)(4) + (1)(3) + (5)(1) = 20 14. We have u 0 3 /¥s !iii= 12. then (x'.p = Jl7. ~)and v = 3vl + Ov2.s  = {u 1 . ... then (x'.0}. L). 4. llvll = ll(v)sll = J(1}2 + (4)2 = Using Theoretn 7. = (a. llv ll = Vr(~)2_+_(_ 130_)_2_+_(_~)2 = and u · v = (~)( !) + (i)( 13°) + (~)(!) =. 2 + (1) 2 = /2.y')=(~a.(~) 2 = = /2. 6) 11 = .y) (c) (d) If (x.o(s=)~+. 1 .p + e. and let B' as described in Figure Ex16.y') = (1. /2). 1). and let B'.{e 1 .y') = (2./58 w = (viB · (w )a. then (x'. llvll = Je. Then P9•.0). 2)11 = v=(s=)2. and u. (a) We have u = 2v1 + v2 + 2v3 = (~.ll( w )sll = .y) = (0.2v3 = (~. then (x'. . 5..then(x'.. In particular: (b ) If (x. J¥ = 13.0).y and y' It follows that x'y'coordinates are related to xycoordinates by the equations x' (a) (c) 16.99° = 10.y) lf (x. (a) ( b) = v1 + v~ = (~.. y') = (0..~ )and v = .2: U ll = ll(u )all = y'(1) u m.r'y'systcm that is described in Ftgure Ex15. 2.. 6 ) = ~ = 3.+~(=s)=2+.. [f (x.j} be the standard basis for R 2 . = (1.(2) 2 Jf3. llv ll = ll(v )sll = J(3) 2 + (0) 2 + ( 2)2 = = (u )s · (v )s = {2}(3) + (1)(0) + (2)(2) = .262 Chapter 7 11...).y) (d ) If (x..b. 2)11 = y'(2)'l + (5)2 + (1)2 + (2)2 = v'34 v · = /2 =.{v 1 . and u · v = (t)( ~ ) + (!)( .. u 2} =  (*f ~ 1 1 0 ] and Pas = (Ps .l. Let B . If (x...j(3)2 + {0) 2 + (3) 2 + (0)2 = /18 = 3v'2 2 llv + w U = ll(v)a + (w)all = 11(8.10. ll u ll = J<~)2 + (D2 + (~)'2 = fj = 3..JJa+b).v1 + 4v2 = (~.( =2~)2 = JIT8 llv .(w)all = 11(2. In = (v'J. 8 = [ ~3 x'y'coordinates are related to xycoordinates by tlre equations x' particu Jar.
t o [~ ~ ~1 [~~ ~~ ~] .s[w]s [_ ~ ~] [_ = [=~J.'1!J1 77 30 o :. [~~ ~~ ~]5 2 1 (c) lt is easy to chec.. 39] 77 30 19.n .11 263 (v2)s) (b) The row reduced echelon form of [B 4 3 IT Ti ] [ = (2 l 3 . thus [w)s = [_~] and lwls = Psts[w}s = G!][_ ~] = = [_~]· (e) We have w =3e 1  6e2. {c) Note that (Ps~_.).::J I : [1 4 0 0) IS [1 .EXERCISE SET 7. [) 0 !S [11 ~] ..hus ( Po .. (a) ~ :'!: 0 8 (b) The malrix [B I SJ .~ ·r: ~ (5) ~ ro Lt) = Ps. . thus () (e) w = 5et..1 = ~1 .1 = [2 ~ 1 J] 4 I   ..] 0 01 I [~ ~ ~ j~ ~ ~1 1 0 B10 0 1 ·40 H) (J I l.3e.3 has o 8 I ~~ I l 5] 1 [1 0 0 0 I I 2.40 5 3 [. 0 1 OI I I rr I I 4 li _J. i I 13 5 ·. thus [wls = [3l ~'] and lw] s 6 91 5] = Ps ·tH[w)s = [. g 10 2 (b) The row reduced echelon form of [B2I [ 0 _§) [ 13 11 2 Bd '"" [~ l t 2 I 1 ~] i:.). I as its reduceu row echelon form. ll thus Ps+B :::: n H l 2 .[~ 0 0: t.PStB· · li 4 3 (d) We have w = v 1 . + e3.] . [ 1 'J.t ] .82 · 10 .3 13 ) 5 2 1 1 4_1 I I = lr 2 .s)I 0 8 5 2 1 1 = Ps+B· 0 0 1 (d) a .v z.5 ··. 1]. 1 I S'J = :2 .. (a) The row reduced echelon form of !Bt I B2] = (2 2 l . (c) Note that {Ps~s)..1 3 l 1] . [~ 2 .1 2 ·1 9 as its reduced row echelon ] form.s..!. Lim~ Ps_.k tho. !] 18. thus [w]s =[_~) and [w]D =Ps .
5] 1 1 3 I thus Pa. (a) The row redured echelon form of [B2l Bt] ~ ~ j~l thus Ps. =Pa.+B.(w]s.).2u1 .1 = [_~ ~r· = (1)[~ ~] = Pe. = [_~ ~l.!1 . tHus (w )s.i0u 1 + ~u 2 .. BJ = ~ u [ 6 2 2 3 3 1] [Io =[ 6 6 3: 0 4 I 0 1 0 0 I I 0 2 6 is 7 I 3 . e . (w)s. = [_~ ~] [_n = r~]· = :3v 1  v 2.. thus Pa.g. ."" 4 l (a ) The row reduced echelon form of the matdx (B2I B t] I [ =[ ~ I 1 I 0 2 I 2 1 I 1 I 0 0 0 1 0 0: 3 2 0:2 .2 .3 11.. =[=~]and (w)a.a. we have (w )a 1 = (9. s.a. = [_~ We ha.. . = I 1 11 [11 = [=:].. a. 6 ~] 3 2 I S (b ) Ifw = (5...HJ is 715 (c) The row reduced e<'helon form of IB2 I w] = (ti. ·= Pe. ~).~ 5).u 2. (c) (d ) (e) Note that (Pe.e.wehave(w )a 1 =(l.. Wf! have w = [_~]and (w)e...'1 I 4 . We have w . = [11 and (w]B. a. 22..which agrees with the computation in part (b).1 1 0 0 q :~ u . = 6 ~] [ 3 2 2 3 51 4 .2] I~ .I 1 1'2 I 4 17 4 3 4 '2 i2 2 0 l . 1 I 0 I o! H 1 I I I !. = [_~ _~] [_~] [_~]· 3 21.8 5). thus [w]a 2 = [_~]and [w]B 1 = Pa. r~1 ~] r=~l = [11· . l )andso 3 4 2 12 17 3 1Jm._. (b) The row reduced echelon form of (B2I Bt] ' = [3 1 : 2] IS ( 1 •1 . = Ps. [=: =~ =~ l ~l [~ 0 4 ..) 3 H H ~ ~ (b ) lfw=(5. . 9. l . thus [w]B.5) and so . 0 4 2 3 0 .4v 1 7v 2i thus (w]s.. thus [w]B..264 Chapter 7 (d ) (e) w = .[wJo.I·'~IB.. .·e w .
Similarly.~a .+ B~ true of Po. ~~ ~ )..[l 10 0] 0 0 :l. 1) = Oe 1 + le2 and vz = (l. t hus (w)a~ = ] o: ¥ I '2 (t. ·•B. s). . = [' _ 1 76 ~ 4. s = [:7~~: . 0) = le\ t Oe2 .s = [~ ~] then. thus Pa. a.. = (Pe. (a) We have Vt = (0.5 1 0 l 0•X..}. s ric~y.. the same is 24.nd thus is an orthogonal transformation . 23.1 = (Ps .. we have = p. 0 0 l : () 2 . cos2B). 2 ~. c3 = . b · Geomct ..1 = Ps ... .0.s). since P is orthogonal. 6). Thus the transition matrix PB 2 4B.l = (PB_... sin28] = (sin20.!.. =x preserves length and thus (a) We have Vt = (cos20. 25. 3 1 372 0 3¥"2 2\/'J 21 'l H is <'MY to check th <lt 2 1 0 ~] . 0 0 1 Thus PFJ 1 4 Bt is an orthogonal matrix.. this corresponds to the fact that reflection about y is an orthogonal transformation. we have pr = p .)"· t = (Pn?•B. ~6.~ .1 1 265 3 (c) The row reduced echeloq form of [B2 I w) = [ ·53 ~ ~ !:] is [o 21 .and : = '7J O the · 76 C3 tTl solution of this system is c1 = c2 = ~. The vector equation c 1u 1 + c2u2 + C3 ll3 = v1 is equivalent to [* t ~] [] [ ~] ..EXERCISE SET 7.nd !lince Pr.. which agrees with the computation in part (b). ~)and (v3)s1 = (.cos28 · then. is ?s• l).. Thus (vt) s 1 = (.rr . since P is orthogonal..3 ~.i2 6  . 0 ( b) 1f P = Ps.1 = Ps+D · Geometrically. thus pT Po . Ps1+B.. we have (v2)s1 = (~.h. s = [o 1 1} . this corresponds to the fact that reflection about the line preserves length a..sin28) and Vz (b) II P = PB .
(a) If [:] [~]. w 2 = %e1 . e2. * nl l 0 1 3 z' 0 [ t].r] = [cose4") sin( 3 >] [:r:] = r. 3 30 If[~] {i]· then[~:]= (b If[::]= Ul· then[~]= ) (a) We have [::] 31..e3} to B = {wl .and w 3 = je.2. (b) rr pis t he transition matrix from = {e .~ e3 = ( ~ . + j e2 + = (~. lhen [~:] = [sm( c~ca. (a) Let 8 = {v.0.) :11~ ~][~] +~ ~] = [~ 1] (b) rr x'] [ ~ lhen [x] .!e2 + ~eJ = (~ . } Y (:r] = [~ 72 ll 28. and W J in terms of e1.e2. W3 correspond to the column vectors of the matrix p. v 2 = (1.v2 . w 2 .: [cos(~) c:i. ) sin(34 ")] cos( 3.8. Then.. 1 .72 ~ sm( ft) cos( ll 2 ll 3 4 3 ") 4 3.~~) ~] r~:l [~ ~ ~] ~] = ~ sin~fi} [~: = ' = [ 0 1 nl.2). ~hen e.then[~]= l~ z (b) If r::J = [~l· then [:J = [~ rJ [:]= l~ rJ [~] [~:~]· rJ[~:1 [~ rJ[~l [~ ~l= = = 29.:. a..] m [~:] [! :!] [: ] and = [t ::][:] [~ rl nl [~~~] u:!l [: ] =u: !l Ul =[~~~] = : = Thus [~::J = [! r .e3}. 1.!.11.v3 = (0.e2. ~). ~.nd e3 = 2w 2 + W J. W J}. =[ ~ r. s i· !>· . + ~e2. from Theorem 1.J_ I~ [ R 7 . Solving these vector equations for w 1 .> 6 11 y : [x:)= [5]' then [. = Wt + w2 . + 2w3. Note that w w 2. e2 = w. Pis the transition matrix from 8 to the standard basis S = {e 1 .::] DISCUSSION AND DISCOVERY . ~)..~~ 2. and e3 results in w 1 = ~e. v 3}.! • l!] IS 02. 0) . where v 1 = (1. 1) correspond to the column veclors of the matrix P . (a) (b) If If (x) = (2]. w 2. (a) If [:] = then [::] =[::(.)[t ~l m [ 4 ~ •I r \e3 .266 Chapter 7 27 .
t hen the transition mat rD.bn) = (v)B + (w)s.) (a l. ... v.. . . Thus the vectors Y1. If lwls B is =w holds for every w .. . 1. . then { ]s = IY]B and sox= y . 1 1 P 2. (v. Cz.. . .n Un) 1. {ut . from the standard basis S to the basis Ps .V a re linearly independe nt. 2.. If c1. and v :l D 4. u 71 } is a basis for [C' .' . v3} and if ~ ~ [~ ~ (1..!) B. ~. vk arc linearly indc· pendent if and only if (v 1) a.. 'Vk span Rn jf and only if (vt)e. . . P :l. . Vz . 3. (vk).e. can)= c(a1 . . u.. 0) = ( 4. It follows that c1 v 1 + Cz v2 + · · · + ck v 1: = 0 if a nd on ly if c1(v 1)a I· cz(vz)s + · ·· + ck(vk) B = 0.. a~. . 1..I J + l>z u 2 + · · ·'.. .... .4[x ]B "' C[xJ a for every x in R 11 • P4.WORKING WITH PROOFS 267 D3 . 0) + ( 1. v2. ... A = C if and only if ... . u 2. .· .4. Ck arc scalars. there exist scalars c1 c2. an) + (bt .. B = ([eds I (ez]s l le3]sl = rel l ez les} =In and soB= S = {eb e2. v 2.: c(v)s and (v + w)e =(at + b1. we can conclude that.: Cy for every y in R 11 • Thus. an + b. If !x. + bn )Un · (a1111 Thus (c:v)s = (ca. x D 5. b2. . using Theorem 3. . . an. (v 2) B. 1.. . 0) = (3.b.en}· 0. 0) . . Suppose B . . . 1 ) . If 8 v1 == ~ f v 1. . then = 2(1. = . 0.(b1U1 + b2U2 + ' ' · + bnUn) =(eLl+ bt) Ut + (a2 + b2)u 2 +··· +(a... T he vec tors VJ. then (c1v 1 + czv2 + · · · + c~o: v~o: ) s = c1 (v t ) s + cz(vz )s + · · · + ck(v~o)B · Note also that ( v) 8 = 0 if and only if v = 0. i. ' (vk)B span 1 nn. Since (v)n = c1(v1) B + c2(v2)B + · · · + ck(vk)B and the coordinate mapping v + (v)a is onto. . . . we have: = a1 u1 + a2 U·l + · · · + a 71 Un anJ w = V + W= + a2 u2 + · · ' + O. . . 0). . we have A{xJs = C!xle for every x in R" if and only if Ay =:. .a21. . Since the coorciinate map x + !xJn is onto.) :::. ' VI( s pan Rn if and only if every vector v in nn can be expressed as a linear combination of them. .. ca2.ition matrix from B to the given basis. Ck SllCh t.. 0) + (1.4 . V ·l : ] is ~he l<an. T hen if v il1 l.hat v = r.1 v 1 + c2v 2 + · · · + c~c v~.. 0. . = 3(1. . it follows tha t the vectors v l.Yls = WORKING WITH PROOFS P l. .
r. Then P = [[vds (v2JsJ = [~ ~]and P[TJ 8 p l = [I 1] [2 1] [ 0 1] = [1 1] _[TJ 1 0 2 0 1 1 1 1 4..CHAPTER 8 Diagonalization EXERCISE SET 8. ] = IT]a[x]a which is Formula (7).1 Thus [x]a that = [.2 :t~:rJ..jx.++ ~~ 2 ).) m H·· + jx.1 5 5 2 ~[ 1 2] [l3 l = ~] = [T] 5.:r. For P\'ery v€:ctor x m R3 . we have X~ [::] ~ (jx. + jx.) m + 268 . Let P = Ps~ o where SPITJaP 1 { e1o ez} is the standard basis. Then P = llv!]s [vz]s] = [~ ~] and = [~ 2] 1 [t . Thus [x]s = r i_"':r.!% 1:. and ITis = IITvda (Tv2JaJ = [ =~ ~ I  •s~] "5 ".X2] = [22 l X2 ol] [ X~  X2 X.:rt1 ~. we note that \\"l11ch is Formula (7) 3.e2} is the standard basis.. we note = rxl2l. Let P = Ps•B where S = {e 1. :>  18 5] _ . + ~~. ' . Finally. and (T]s = I[Tvt]a (Tx]s [Tv z]a] = [~ ~]Finally. + !•· jx.) [!] + (jx..]. (TxJa [Tx]o = [~ 71 . [Tx]a = [:z\:2:r2 ).
+ !x.2Xt ~x1 xz . + lx.) nl and [T] F = [[Tv.!x.Exercise Set 8. 6. .!x3 ] X2  ~X3  + 1 2 X3 [~ + tlx. we note that .) m + (jx.) nl '~"' + {.fz. w~ have X = [~.] = Hx. we note that [TxJH = [.1 269 ami Finally.4x: + . ]!. + which is Formula (7).]o [Tv.]o [7'v.] = [ Finally.For every vector x in R3 .x. .il J y [ 8 12 7 which is Formula (7) . .
:!.~~ [_~] = . 'I Stmt nrIy. P !] [1:1 2 . we [Tis where. ns before.2 11 T hus.~~ v 1. from Exercise 1 we 0.] .s•. from Exercise 9. :J I 9.sv.72s v2 and Tv2 = [~] i v. Thus 1 J =IT [2] +IT r3] = nv 2 3 1 4 1 2 + nv2 I r v2 I [1 I] = . [T]s = Ps .~v~ .~~] 11 []1 1] = p• [T]s· P 1 = Pe.. + v2 and = v.270 0 Chapter 8 7... = 11 vl 1 I 9 + ~~ v2. Le~ P =.. Thus :) 31 9] ~ S1me v! = 10v1 r 8v2 and v 2 = . .Ps .¥ v2.~~V?.%~ 3AJ [tS h _.. = [82 156] : ~r 22 + 2] 1798[1 = ") 15 _86v 15 1 + L ~ 2 and T v 2 798v 4 45 ~ v2 • Tv~ = [:~] = 81i 31 2 GJ [ [Tis P = l ~ ~5 _~ ~~ 61 ] and [Tis· = =.1 [Tis•P.1. Let P .rrv l2 19 + nv2 .1 [Tl"'•P. a• r= ~ TI ~] I I anti [T]s· and 1 = [~ ~]· II II Since Vj = v.. '2 ~5 [ = = .!. The e>quation P[T]sP have [TJs [Tis· is equivalent to [T]s = P .]s lv>lsl = 31] [y 3 3 2 I~ 8 0 _'J. we hnve 2 = Pa ... · I= 12. ~ 2 Thus.l~] =II r~]..] = [¥.rs .. 1 ilH t] [l 2 l 2 1 3 2 2 l 2 1 2 J] H = _ ! 8 1 0 8.. Then P = [lv 1]s Jv. P = Poe·· .321 v.qualion P[T]B p ... 45 6] 1 90 1 3 90 ~] [10 8 whf•rt•... 1'vI1 = r.¥ _ 7i ~] 1 i = [T]s·· 11 .21 ¥J(_§§ !i: 1 . 1 M fl] [~ TI 26 I 2 ~] = [A 2 ~ II 32] =!Tis = [3]16 6l v 90 1 IT 10. = [ ~7 :! 45 86 49 ..Ps .l = [Tis• is equivalent to [T]a have = p.v2. The . = [()] n an d 2 I n = [TJ _ .!. a where S is the standa rd basis...j P[T is P  1 = [1o 8 .8 and 0 2 Tv2 = [~~) 4 V ?.!.... + . a where 5' is the standard basis. We have T v l . FJ• = lo ~ ] ancl [8 _.[ . we have p = = [: _:) P[l]sP =[ 4S 11 1] [_. We ha\·e Tvl Si1mlarly. as before.] and ={Tj 0 P[TJaP1 = [. T hen P = [lvt]s 2 l I I I [v2Js [vJ]sJ 1 = [i . 12 11 2 j] J = 8 3 [ 2 ~] 0 0 1 0 . .
These matrices are related by the equation where P 1 5. we have For ev€r y x a.l [~ ~] = [10 2] = jTJ 1 ll _25 11 TI 'l 6 22 5 22 22 14.y) in R 2 . from Exercise 10. o 1 matrices are related by the equation 1 = [1 ~ 2] and. The standard matrix is [T} .y. Tx .z) in R 3 . we have !T}s = [ . we ha. from Exercise 9.1 = [ 5 3 5] [1~ l!. (7). 16.. we hn.nd (b) Ju agreement wit b l·hrmu Ia.!x+h+~z) [1] ~ +(4z) [0] ~ I ( v. (a) = [vl I v:J = (:.Exercise Set 8. x+ Y + z 1 [1] [ 2y:~ 4z j =(~x+~y+~.1 13.. u .ve and \ .rr ft ~]. . we have {T)s = [~ _~] .z) ~ +(.c.. {a) For every x " (x. The standard matrix is [T] = [~ ~ and. ? ! / ) J • '·· ..vr..These P[T] 8 P .
we have x = [: 1 = (~x1 + %xz) 3 [~] + (. (7).. we have 17. in agreement with Formula (26). in agreement with Formula. For every vector x in R 3 . we have . we have 19.{0x1 + fo:r2) [~1 = (~xl + ix2)v1 + (.Chapter 8 272 (b) ln agreement with Formula. and ITJ s•s = I!Tv1J B' 3 z XI + '2 J%2 Finally.:t:z .10xl + foxz)v2 and Finally. l· 1 X2 jTxJ s' = .2 :r1 1 [ ~x:z + 3 :r2 I l . (26). Fo r every vector x in R'l. For every vector x m R2 . we h ave 18. we have anrl t hus lxls = l~ 1 :tt  4 Xl 1.
[J ) [1] 4 t hUS [X IB = [ zXl l 4xt + ~X2 I l 2 x1 1 1 2 xz + '2:2:3 I . l ) = (lri' .~] and !Tv:da = Md T vz = 3vl +5v2 = 3 [~} + s[~] = [}~] For every vector x in R 2 . T (xJ> x2) = ( 1 (d) ix1.xt+ 5 x2)l ..X:. (a) [TvdB = [_~] (b) (c) Tvl = VI  GJ· 2v2 = [ ~]. it follows t hat 1 2 f Tx==(·.2. r3]+ (6 2 1 ..1 273 and Tx = 2x x · [3Xt + 2X3] = 1 _ 2 ( ·x 1 5 r  .t2 7 14' 4 l 7Xl.~X3] ' .~:~ • = rl  lx1 . Using the f.[ l 1 5 Xt 22x + ll x 2 1 5 ~X2] or. xz 14  4 . 353 ) . we have T(l .ii 21. 0 5) + ( 2 1 ) [ 5 x1+5x2 11 7] . we have Thus.1 ~ + 2 x:z + 2x3 14X2...~x2. ¥ + ¥) = ('5u. in agreement with Formula (26). fjx1 + Jtx2).. Ill 'f Iii Finally. For every vector x in R 3 .x1 1 3 x2 1<1 2 + 1 .~.2GJ = [.! I· l [ ~ 14 13 ZJ  \.Exercise Set 8. [TX JB' = 6 [rrt . .'fX3] 3 2 . we have 20..1. 7XJf4X2+'f:t3 1 ann [T]B'B = r~ ! !4] I . ITxl..XJ ) .. using the linearity ofT.:mt:la obtained in part (c) .il. in t he comma delimited form .. we hnvc and ·~ !] lf 2 1 • Finally.
2x2.!x3. it follows th:t~ Tx + (~Xt . If Tis the identity operator then IT] 3 ] 2 = IT] B = IT] a• = 37 1 Jij vl 19 + 36v21 [~ ~ ~]· On the other hand.!x. = [~ ~]. which leads to the following formula forT (d ) Using Lhe formula obtained in par t (c).2.!:v.4x3 + (. 0) IS = (.~x3 + 11 x4)v1 2 + (~xt . 12).274 Chapter 8 (c) For every vector x in R 4 . we have T(2. since T e 1 = e 1 and Te2 = e2.a)Tv3 + (~Xt + ~x2 + !x3.(~x1.31 1 37.!xJ. thus ITJs•. Similarly. [~~]and IT]s• = [~ ~J On the other hand..!x3 1 x4)Tv1 . 0.!x3 + ~x4)Tv2 + (~x1 + ~x2. we have IT]= 23.4x2. .v}. 1 229 I 144v3.r2 + !xJ. using lht lin~arity + ~X4)v2 + ~.r2. 2. 1f T [T]a the identity operator then.4x2 .: 19 [ ?] 36 [g] = ~ II 1J 1 36 v 3.~x4)Tv4 + (~ xt . we have x . T v'2  _ [Q] ~  299 1 144vl 209 + mv2.B = [~ ~l· ~~ 5I 24.~x~)v4 ofT.~x1 + ~x2.!x4)v3 + (~:r1 I hu:>. we have Tv1 = 0 0 l [~ = 36 37 [ ~J 1+ 36 . Tv1 = [~] H[~] + H[_:] = gv~ + ~~vz and Tv 2 = [ ~) = {.
(f ~ t he effect of the operator 1' is to rotate vectors counterclockwise by an angle of 120 degrees about the v.t is ~o st. and stret ch by a factor of 6 in the y'direction. v.DISCUSSION ANO DISCOVERY 37 ~ 275 and Tv3  [9] = 4 3 173 4~ V. v2 . u m} be ba. Wr. and to stretch the v 2 component by a factor of 6. axis (looking toward the origin from the tip of v!).} is any basis for Rn.1 + . we have T(v1) = kvi for aJl j = 1. v .ses for R" a nd R m respectively. If the xycoordinat e axeg are rotated 45 degrees clockwise to produce an x'y'coordinate system who::. . We have Tv1 = and Tv2 = [~ ~J [~] = [ ~] = 6vz.v2 } is [T]B = On t he other hand.e axes are aligned with the directions of the vectors v 1 ttnd v 2 .he x'dircction. n. .IJ! 36 '109 ill] 48 119 . then the effe.t) and B' = {u 1 . .1 0 :. since e1 = ~v 1 and e2 = *v2. thus the standard matrix forT is [Tj == r~ t]. [TI = 29. •. .retch by a fact or of 4 in t.. reflect about the y'axis.. we have Te1 = iTv 1 = ~v2 = 2e2 and = ~Tv2 = ~vi = ~e 1 . u 2..36 ..B = [0 I 0 I · · · I OJ is the zero matrix. [T]: [H!] (~ ~] [ _ i~] = [ ~] = 4vl 28. 25. . Let B {v 1. lbeu stretch by a factor of 2. . = .m 48 . Thus {TJB'.• . T hus. v 2 .. 0] [. Thus [1']a = [2 1 vf:J = 2 [10 ! . if B = {v1 .. 30. have ami T v3 = ~ 0 J3 . From this we see that the effect of the operator 1' is to stre tch the v 1 component of a vector by a fac tor of 4 and reverse its direction. 2. if T is the zero transformation .{/0 . . thus {Tla = [ ~ ~) .. There is a. . 26. .3. scalar k > 0 such that T(x) = kx for all x in R". . 2.illv'2 48 115 48 v'· 3• thus [T}a•' D = m 299 48 m [ '229 ll5 II . Then. Since Tv1 = v2 [~ ~]· Te2 and Tv2 = vl> the matrix ofT with respect to the basis B = {v 1 . n and so [T]a = ~~ ~ 0 0 0 0 1 0 0 1 =k k 0 0 27 . we have = for each i = 1. From lhis we sec that 0 0] 0 ] o DISCUSSION AND DISCOVERY Dl.v'3v2  v3.
JB" . .alx]s Tds•·.mapping { x]s [T(x )Js is linear. and T(x.romrt rir effect. The !lppropriate diagram is c• c (T)Bl 03.B· and rrolll tIll: It follows thal IT2 0 . vn} and [T)s =I.B = /2 D 5. If x andy are vectors and cis a scala r then . h:t B = {e1.B' IT1 (x )Js• = IT2)B". then we ha"e [T1 (x)J a• = [Tds•. n and it = (y . e2}.J = v1c for each k = 1. x). he ~~. Ifx is Thu: tn R" and y ts m Rk. ( b) l· abe.B'IY)s•. 8' = {ez. since T is linear. IT2('i JIX))]s" = IT2] B". .s = (:t..c.276 Chapter 8 D2. l·ot example.B'!Tds·. 2. of the operator WORKING WITH PROOFS Pl . the zero operator has th~ same rn:ttrJX (t hP ZNO nMtnx) with respect to any basis for R 2 I'luc We ha\·e IT1 (x ))e (c) (d ) True If 8 = { v1. Them IT]s •. then T(v. follows from this that T(x ) = x for all x . False For example. we have c[x)e = [cx]s 7 {T(cx )Ja = [cT (x )Je = c{T(x )]a t[T(x + y )]s 7 [x)o + [Y)s = jx + Y)s = jT(x) + T(y)Js  {Tfx )]e + {T(y )]s Thts shows that th. The approp1inte diagram is c• D D4 (a) s{x)s = {T2Js• e[x)s = !T2 (x))s·: thus Tt(x) = T2(x). = !Tds . y) but T ~ ~ not the 1dent1t.ct}.y operator.B'!Td s·. v2 .is may more clearly reAect t. Ouc 1cason 1s that lhf tcpresPntation 0f the operator in s•"~rn ~ orht'r b:L.s lx)s and IT2(y)J a" = !T2)B". P2.
A) = (. If x is a vector in Rn. Since A is triangular. then [T]s[x]D = [Tx] B = 0 if a. The eigenspace corresponding to A= 1 is the solution space of the system (I. i.A .l)(A.A= 2 (multiplicity 1). >.A)x = 0 which is . and >.. = 2. 6.we have [T.2 each have dimension J. 5.\. A= 6 (multiplici ty 1). the eigenspace corresponding to ). or 3.1 oT} 8 = [J]B = /. = 8 (multiplicity 7). . and the eigenspaces corresponding to A= .1. The eigenspact! corresponding to A = 3 has dimension 1 or 2. We have rank( A) = 3 and rank(B) We have rank(A) = 1 and rank(B) = 2: thus A and B (a) The size of the matrix corresponds to the deg ree of it. = 6 has dimensio n 1. 7. A= 1 (rm1ltiplicity 1).\ . (b) The matrix is 11 x 11 with eigenvalues A = . thus the eigenspa« is 1dime.1]s = [TaJ. 7. B EXERCISE SET 8.'5ional and so A = 1 has geometric multiplicity 1. P4.s characteristic polynomial.J has dimension 1. 3. 2. 4.2 277 P3. 2. or 8. Thus the eigelll'alues of !I are . The eigenspace corresponding to A = 2 is the solution space of the system (2/.A= 2 (multiplicity 3).A)x = 0.1 or. The eigenspace corresp(1nding to >.A= 0. to A = 0 ha. = 1.\= 1 have dimension 1 or 2. = . 2. and >. with algebraic multiplicities 2 a nd 1 respectively. since [T'" 1]a[T]B = [T. (a) The matrix is 5 x 5 with eigenvalues A= 0 (multiplicity 1)..nspace corrcspondinr.2). and the eigenspace corresponding to A = 2 has dimension 1.1)(.2) =(. if T is onetoone. and the eigenspace corresponding to A = 8 have dimension I. = . and . (b) The matrix is 6 x 6 with eigenvalues A= 0 (multiplicity 2).iplicity 1).A= 1 and>. 3.1)2 (A. and A= 1 (mult. The e igenspace corresponding to A = 0 has dimension 1. A=. The eigenspaces corresponding to . its characteristic polynomial is p(.iplicity 2). ""' 3 has dimension 1. We have det(A) = 18 and det(B) = 14. that: [T]a is an invertible matrix.e. 4. 5.. and . = 1.2 1.3 (multiplicity 1). thus A and Bare not similar. are tJOt similar. so in this case we have a 5 x 5 mat rix . thus A and B are not similar. the eigP. Furthermore.A = 3 (multiplicity 2). (){ 3. 6.= 2. which is The general solution or this system is x = [i] = t [:] . it follows that [TJB[X]B = 0 if and only if [x] a = J.<> dimension 1 or 2.EXERCISE SET 8.1 (multiplicity 3). Thus. We have tr{A) = 3 and tr(B) 2. The eigenspace corresponding to >. [TJs = [T(vi) a I T(v2)al ··· I T(vn}s] = [Tle. The eigenvalues of the matrix wiLh t heir algebraic mu!Liplicities arc A = 0 (wult.1 (multiplicity 2). A = .nd only if Tx :::: 0 . thus A and B are not similar.
= 3. = 5. u.1 and >. which is Tbe gene>al solution of this system is x ~ mt ~ [!l. = 3 also has geometric multiplicity l ..A)x = 0.\ = .. The characteristic polynomial of A is p(>. The eigenspace corresponding to >. = 1 JS 1dirnensional. . The eigenvalues of A are ).nd so >. = 3 is the solution space of the system (31 .] = [~] ~ s [ 1"' + t [0] . The characteristic polynomial vf A is p(>.3.>.1 ~ 1 1 =~] 1 . with algebraic multiplicities 2 and 1 respectively.2s ~~ = s[ 2 ~~ . thus the eigenspace is 1<limensional aud so ).5) 2 (>. with algebraic multiplici ties 1 and 2 respectively..) = det(>J A) = (. 2 (>.A = A =[ .. + 3). thus the eigenspace is ]dimensional and so ). Thus the eigenvalues of A are>.) = >. and so ~J ~ ).2 = >. The eigenspace corresponding to ). = 3. Thus the eigenvalues of A are>. = 5 is the solution space of the system (51. = 1 has geometnc multiplicity 1. = 3 is Lhe solution space of the system (31. = 0 and . rcultipliclty 1. = 2 also has geometric multiplicity 1. = 3.A)x = 0 which is The solution space of this system is x = [ .A)x = 0 which is [_~ ~ ~] The general solution of Lhis system is x [:.X. = 5 has geometric multiplicity 1. ll. and >. thus the eigenspace is 2dimensional. = l . >. The characleristic poiynomial of A is ().3) 2 . 3 + 3>.278 Chapter 8 The solution space of tbis system . The rank of the matrix OJ . + 1)(. each with algebraic multiplicity 1 and geometric 9. The eigenspace corresponding to ). thus the eigenvalues are >. The eigenspace corresponding to >. with algebraic multiplicities 2 and 1 respectively. = 5 and >.= . = 3 geometric multiplicity 2.3). 8. 10. x ~ [':] = s [l thus the eigenspace is 1dimensiotlai and so ).
>.) = .6 0 (I 0 of 11 ·..l 4 .A) == 3 .A= [=~ ~ ~~] is [~ ~ .5)(>. The characteris tic polynomial of A is (>. Thus > = . and this is the geometric multiplicity of.1 ~nd >.2 279 :=: is dearly 1 since each of the rows is a scalar multiple o f the lst row. . = . The matrix 5/ .l)(A 2 2A + 2).A = [=~ ~ =~] . <a. = 1 and>.2 and A= 1. thus >. the matrix 3..A .l) 2 .A ""' [~ ~ =~] .EXERCISE SET 6. On the o t her ha. The characteristic polynomia.1 ] has rank l since each of the rows is a scalar multiple of the 1st row.A is 2 and the geomet ric multiplicity of ). The reduced row echelon form of the matrix li . It follows that A hM 1 + 2 '== 3 line<'lrly independent eigenvectors and t hus is diagonali<>able. with algeb<ak multipHcities I and 2 <espocti vdy. It follows that A has l + 2 = 3 linearly independent eigenvectors thus is diagonalizable.21 . thus the rank .>) =.2 + 39A . 13.11>. which is [~~ =:~] [:~] = [~] · The general solution of thls system is x = t [ ~) Thus.._1 = 2. = 1 is nullity( ll . The characteristic pc. = 1 is the only real eigenvalue of A.A) = 3 . The matrix . = 1 has geometric multiplicity 2.A) = 3 .\ = 3. =[ ~ : = ~1 l has rank 1.2 = 1.3.3) 2 .1 2 1 1] Thus nullity( 31.~1 . Thus nullity(OJ .s two distinct eigenvalues. = 5 and . the matrix 3T.ic~u~t~plicity 1. since its •eduC€d <Ow cohelon fo•m is [~ ~J. with algebraic multipli<.A) and this is tbe geometric multiplicity of ..1 1 .\= .. The characteristic polynomial of A is {A+ 2)(>.2 .1 .l = 2. .Jll< 2.>. 1 4 .l) (A.>. Thus nuUity(3I .3 .lynomial of A is p(. .nd. thus the eigenvalues are >.[: : and t he matrix 11 A = :] h.2 has geome~. The eigenspace corresponding to A = 1 is obtained by solving the system (/ . ): ~· 31 A = 2 1 2 [.:ities 1 and 2 respectively.45 = (A .3 ..= 2. [~  0 1] 1 1 .. On the other h and.A)x = 0..3>.\ = 0.\ ""'3.\ = 5. 0 0 12. and has rank 2 since its reduced row echelon form is this is t he geometric multiplicity of >. thus A ho.1 0 l has <ank 2.2).2 = l. 15.2 "" 1. and this is the geometric multiplicity of . ! 0 Thus nullity(f>l A) .l of A is p(>. thus the eigenvalues are. and . + 2 = (>.
thus A ha.1 AP +l. . ~ [:]. and v ~ [!] 2 and v 3 v~ ~ r~Ja<e v 3 J has the linearly independent eigenvectors corresponding to .(.) = .: = (v 1 19.>. Thus the mat.A)x ~ 0. Corresponding eigenvectors are P1 [~]. >.>. ~ [i] is an eigenvecto. cwesponding to ~ ~ 2..2)(.I 280 Chapter 8 taking t = 5. Finally.2 = 1 .~] [~ ~ ~] ~ ~ ~] [~ ~ ~] 101 011011 [ 002 18. the genml solution of (I. . p . The characteristic polynomial of A is (.. 1 = 1.1)(..>. = 3.>. we see that ?l = [:] is an eigenvector for >.. thus A has Lhree distinct eigenvalues >. + 1).. .>. The characteristic polynomial of A is (A. and v. [~0 ~ ~] [=~ 1 1 3 : ~] [~ ~ ~] 1 3 1 3 4 it = l~~ 0 3~] ~ 0 Note. = 1. the matrix P = [p 1 p 2] = [! !} has the property that = p1 AP = [ 5 4 3] [14 12] [4 3] [l 0] 4 20 17 5 4 0 2 = (~] and P2 = 16.2 + 11>.3) 2 . . ~ m. The matrix P property that. = 1. P 2 = [!] is an eigenvector for A= 2. The characteristic polynomiaJ of A is p(>.2)(>. >. = 2.A)x ~0 [~  ~ . The vecto' v 1 = 2 and >. thus tl. which is n=!] [::] : =~ ~ [~] The ~enecal solution of this system is x ~ ~ 0 is x = Similarly. Similarly.6 . thus A has three distinct eigenvalues. and the matrix P = !P 1 P2J has the property that p l AP = (_~ ~] [~ ~] [~ ~] = [~ ~]. = 0.. and >...t AP = [~ ~0 1 0~0~] 0 0 1 r~ 0 3 ~1 [~ 3 r~ ~ ~] 0 ~ ~]· 0 = >.3 6>.>.1)(.\.>.s two distinct eigenvalues..Thus A is diagonalizable and P ~ [: pIAP = [:].3 =3.2 = 2 and . The characteristic polynomial of A is p(>.e eigenvalues of A are >.) = (. .A)x is x ~ t [:].1 = 1 and .>.>. .3). depends on the choice (and the order) of tht: .>.1)(>. = 0 is obtained by solving· the system (OJ. This is just one possibility. The diagonalizing matrix P is not unique· eigenvectors. The eigenspace corresponding to >.X = 3.ix P ~ [! ~ :] has the P':perty that p.2). +:J and the gene<al solution of (2! . 17. Corresponding eigenvectors are v 1 : :] has the pmpe<ty that = v.
] 0 = m The genml solution of this system is X t [~]· which shows that t ht:' eigenspacP has dimension 1. = [ v2 = 0. and the vecto'5 v.. = 1 The ThusA IS diagonalizable and thr matnx P = lv 1 p .it!S vf its vig1~tl\'alucs is less than 3 22. each of which has n.2 = (>. A is nnt.A)x ~ 0 is x  s[ il· >.. i. diagnn~liz.tblf' since thn sum of t. which has algebraic multipUcity 3.EXERCISE SET 8.\ = 2 and . system ( 2/..A)x = 0.\) = (>. + 2) 2 t.) = (>. hM dimension 2.A )x .lgebraic rnulLiplicity 2.. ~]and v. . and >. 21.::b::i:l:: b::~:n: t:·~~s[t~l~ 17 9 f 7) which shows I hat the etgenspacP corresponding to solution of (I . ~ [:] fotm a The "'""' ' ' = m fotms a ba. The characteristic polynomial of A is p(>.) = >. The eigewJpace corresponding L(l .ss than 4 .<is lot the eigenspace:wespondut< :o ~ = I. = 5 is obtained by solving the system (51. The characteristic polynomtaJ nf A is >. the general solution of (31 .\ = 2 is obtained by solving thP.A)x = 0. thus A has one eigenvalue.3) 2 • thus A has two etgcnwtlues.P .0 is x = I [~]· •nd so ~ not d iagonalizable since the sum of the geometric multi :' liriti~>s of its eigenvalues is lt>.hr gcomelrif multiplicil.. llw general which shows that the eigenspace conesponding tn A = t also has dimension 1.. i.e.5)3 . 2 + 5>. that the eigenvalue >. On the other hand. = 2 has geomeLric multiplicity 2. =2 ~ has dunension 1 Simi larly. which is H: ~] [:.1)2 .= 3. The characteristic polynomial of A is p(>. thus the eigenvalues of A are \ eigenspace corresponding to A hMis lot this space.1}.3 0 t >.. = 5.::e~v:::~~~ :::: ~s~[~·~r:~ e:g]e[::l~~c [~]""'.t AP 23 = [~ : ~][~ ~ ~] u! ~] [~ ~ ~] v 3) has the property that ThP charactt>risttc polynomtal o f A is p( . thus A bas two . It follows that the matrix A is not diagonalizable since it has only two linearly independent eigenvectors.::d~::e::: ~~::. which is The gonctnl solution of this system ts x = r ~] + s [~]· whteh shows that the eigenspace has dimenSton = 3 has geomet d e mu I liphcity I It follows that A is 2.\.>. The eigenspace corresponding to >.2 281 20. Lhe eigenvalue has geometric mulliplicity 1 lL follows that.2 (>. .2)(>. >...
>.. The vec<on v.e.~. 1f the matrix A is upper triangular with l 's on the main diagonal. . = 1.A)x.3) 2 . If C is similar to A then there is an invertsble matrix P such that C = pl AP It follows that sf A IS invertible. . Thus A is diagonalizable and the matrix P = fv 1 has the property that v 4J p'AP= [~ 1 1 0 0 0 0 1 0 0 1 ~1 r~ ~ ~ ~1 r~ ~ ~ .. then its characteristic polynomial is p(>.. Thus Ap. Similarly.ty 2. It follows that Ax = >.~ an<. I< diagonali1.t svlu<ion of <hissys<em is x .= r~ ~l 0 0 0 0 3 0 0 0 30 010 0 01 0 2 •. . 26. i. each of algebrak multiplici<y 2.  [~] and v 2 = [~] form a basi~ v2 for <he eige "'P"' correspondi og to ~ .< . and the characteristic polynomial of A l 1 0 + 2)(>. 28..) . = 0 for every vector x in R" and so I . then AP = (Aps I Ap:~l · · · I ApnJ and PD =[A t PI I A2P2I · · · I AnPnJ where \1 \2.1) 2 Thus the eigenvalues ofT are A= 2 and >.e. + 2)2(>. Th<' standard rnatnx of the operator Tis A= i. <hus ~  3 has geome<nc mulbplic.J = [~ ~ = and the characteristic polynomial ~]. with qJ~ehraic mulllplicltles 1 and 2 respectively.2.I: Pk for each k = 1.1)" and >. and <he vectors v. =>.1 2 + 6A2 The gen"' . n\'alue of . An at c the diagonal entries of D. (>. smce PCP 1 == A. A= A1 . the system (1 A)x = 0 m11st haven linearly independent solutions. its geometric snult tplictty 1S also 1.. is an ~·t.l ~ 3.. the invertibihty of C implies the invertibility of A.. In other words. thus A has two eigenvalues A= 2 au.282 Chapter 8 2~.  rrl and v. A =I. The :>tandard matrix of the linear operator T is A of A ts p(>. n. then C ts invertible since it is a product of invert1ble matnces. i. 1 x fos a.l P1r is an eigenvector corresponding to A. + 9>.[ ~] form a basis for <he v3 0 e1genspace corresponding to A= 3.(A+ 3) 2 • Thus the eigenvalues ofT are >. If P =!P sI P2l · I PnJ. Since>.) = (>.A is the zero matrix.. 2. whsch IS of geometric rnultJphc1ty 3. A.A)x = 0 whsth IS 1 .! I x in R 3 .~. = 0 and A= 3. If A 1s a 3 x 3 matrix with a threedimensional eigenspace then A has only one eigenvalue.s [ follows that T IS ~] +t [:] . in order for A to be diagonalizable.able since the sum of the geometric multiplicities of its eigenvalues is 3. = lss the only eigenvalue.~. Thus.t. if this is true.0 0 0 0 0 3 0 ~] 25. and so A = >. >.= 0 has algebraic multiplicity 1. [~ ~ ~1. The eigenspace associated with A = 3 is found by solving ( 31 . 29. 1 I is a diagonal matrix. the eigenspace corresponding to A1 is all of R3 . then (/.) = (>. with algebraic multiplici~·~ 1 . The charactenstic polynorrual of A is p(>. But. 30. 27.
! roots. thon p(>.2  (a+ d)>.1 BP1 and B = F 2..be). thus xis an eigenvector ofT corresponding to >. thus D has nonzero diagonal entries and entries are the reciprocals of the corresponding entries of D.thenA. then [Tx)a = [T]a[x]a and [>vc]B = . v 3 } is a linearly independent set. 1f PIS an mvert1ble ma1 and so A.'P. and the dis criminant of thi's quadratic polynomial is (a+ 4(ad.>. we have p• A._:ca d)2  >. wh1ch has 03 . In the situation described {v 1 . then D must also be m1 is the diagoncJ matnx whose diagonal vertible.d) 2 + 4bc.1 =(P.) has two distinct reo. If A 1s s tnHiar to B and B is similar to C. ( cl) False. 1f p l AP is a diagonal matrix then so IS Q 1 AQ where Q = 2P.e. It follows that T is diagonaltzal>le ~ mu~ the sum of lhe geometric multiplicities of its eigenvalues is 3. the solution space of(!.gonali~ab l e since it has (b) two distinct Pigenvalues.A= [ ~ ~ ~] 1 .Lmple.:) True. t h"n r(>. The diagonalizing matrix (if it exists) is not uruque! (. If x is a vector in Rn and >.DISCUSSION ANIJ DISCOVERY 283 and 2 respedively. >.>. then there arc mvert1ble matrices P 1 nnd P2 surh that A .1 l has rnnk 1. 1  [~ ~] is diagonalizahle False. The eigenspace associated with>. For example.1 = P . This statement does not guarantee that there are enough linearly independent eigenve