
CHAPTER 2

Systems of Linear Equations


EXERCISE SET 2.1
1. (a) and (c) are linear. (b) is not linear due to the x1x3 term. (d) is not linear due to the x1^2 term.
2. (a) and (d) are linear. (b) is not linear because of the xyz term. (c) is not linear because of the x1^(3/5) term.
3. (a) is linear. (b) is linear if k ≠ 0. (c) is linear only if k = 1.
4. (a) is linear. (b) is linear if m ≠ 0. (c) is linear only if m = 1.
5. (a), (d), and (e) are solutions; these sets of values satisfy all three equations. (b) and (c) are not solutions.
6. (b) , (d), and (e) are solutions; these sets of values satisfy all three equations. (a) and (c) are not
solutions.
7. The three lines intersect at the point (1, 0) (see figure). The values x = 1, y = 0 satisfy all three equations and this is the unique solution of the system.
The augmented matrix of the system is
    [1  2 | 1]
    [2 -1 | 2]
    [3 -3 | 3]

Add -2 times row 1 to row 2 and add -3 times row 1 to row 3:

    [1  2 | 1]
    [0 -5 | 0]
    [0 -9 | 0]

Multiply row 2 by -1/5 and add 9 times the new row 2 to row 3:

    [1 2 | 1]
    [0 1 | 0]
    [0 0 | 0]

From the last row we see that the system is redundant (reduces to only two equations). From the second row we see that y = 0 and, from back substitution, it follows that x = 1 - 2y = 1.
8. The three lines do not intersect in a common point (see figure). This system has no solution.
The augmented matrix of the system is
and the reduced row echelon form of this matrix (details omitted) is:
The last row corresponds to the equation 0 = 1, so the system is inconsistent.
9. (a) The solution set of the equation 7x - 5y = 3 can be described parametrically by (for example) solving the equation for x in terms of y and then making y into a parameter. This leads to x = (3 + 5t)/7, y = t, where -∞ < t < ∞.
(b) The solution set of 3x1 - 5x2 + 4x3 = 7 can be described by solving the equation for x1 in terms of x2 and x3, then making x2 and x3 into parameters. This leads to x1 = (7 + 5s - 4t)/3, x2 = s, x3 = t, where -∞ < s, t < ∞.
(c) The solution set of -8x1 + 2x2 - 5x3 + 6x4 = 1 can be described by (for example) solving the equation for x2 in terms of x1, x3, and x4, then making x1, x3, and x4 into parameters. This leads to x1 = r, x2 = (1 + 8r + 5s - 6t)/2, x3 = s, x4 = t, where -∞ < r, s, t < ∞.
(d) The solution set of 3v - 8w + 2x - y + 4z = 0 can be described by (for example) solving the equation for y in terms of the other variables, and then making those variables into parameters. This leads to v = t1, w = t2, x = t3, y = 3t1 - 8t2 + 2t3 + 4t4, z = t4, where -∞ < t1, t2, t3, t4 < ∞.
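As a quick check (a Python sketch; the parameter ranges below are arbitrary), substituting the parametric family of part (d) back into 3v - 8w + 2x - y + 4z = 0 gives a zero residual for every choice of parameters:

```python
from itertools import product

# Parametric solution from part (d):
# v = t1, w = t2, x = t3, y = 3*t1 - 8*t2 + 2*t3 + 4*t4, z = t4
def residual(t1, t2, t3, t4):
    v, w, x, z = t1, t2, t3, t4
    y = 3*t1 - 8*t2 + 2*t3 + 4*t4
    return 3*v - 8*w + 2*x - y + 4*z

# The residual vanishes identically in the four parameters.
assert all(residual(*t) == 0 for t in product(range(-2, 3), repeat=4))
print("all residuals zero")
```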
10. (a) x = 2 - 10t, y = t, where -∞ < t < ∞.
(b) x1 = 3 - 3s + 12t, x2 = s, x3 = t, where -∞ < s, t < ∞.
(c) x1 = r, x2 = s, x3 = t, x4 = 20 - 4r - 2s - 3t, where -∞ < r, s, t < ∞.
(d) v = t1, w = t2, x = -t1 - t2 + 5t3 - 7t4, y = t3, z = t4, where -∞ < t1, t2, t3, t4 < ∞.
11. (a) If the solution set is described by the equations x = 5 + 2t, y = t, then on replacing t by y in the first equation we have x = 5 + 2y or x - 2y = 5. This is a linear equation with the given solution set.
(b) The solution set can also be described by solving the equation for y in terms of x, and then making x into a parameter. This leads to the equations x = t, y = (1/2)t - 5/2.
12. (a) If x1 = -3 + t and x2 = 2t, then t = x1 + 3 and so x2 = 2x1 + 6 or -2x1 + x2 = 6. This is a linear equation with the given solution set.
(b) The solution set can also be described by solving the equation for x2 in terms of x1, and then making x1 into a parameter. This leads to the equations x1 = t, x2 = 2t + 6.
13. We can find parametric equations for the line of intersection by (for example) solving the given equations for x and y in terms of z, then making z into a parameter:

    x + y = 3 + z          2x + y = 4 - 3z
                      =>                     =>   x = 1 - 4z
    2x + y = 4 - 3z        -x - y = -3 - z

From the above it follows that y = 3 + z - x = 3 + z - (1 - 4z) = 2 + 5z, and this leads to the parametric equations

    x = 1 - 4t, y = 2 + 5t, z = t

for the line of intersection. The corresponding vector equation is

    (x, y, z) = (1, 2, 0) + t(-4, 5, 1)
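A short Python check (a sketch; the two plane equations are the ones solved above) confirms that every point on this line satisfies both equations:

```python
def point(t):
    # (x, y, z) = (1, 2, 0) + t(-4, 5, 1)
    return (1 - 4*t, 2 + 5*t, t)

for t in range(-3, 4):
    x, y, z = point(t)
    assert x + y == 3 + z        # first plane
    assert 2*x + y == 4 - 3*z    # second plane
print("line lies in both planes")
```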
14. We can find parametric equations for the line of intersection by (for example) solving the given equations for x and y in terms of z, then making z into a parameter:

    x + 2y = 1 - 3z
                      =>   4x = (1 - 3z) + (2 - z) = 3 - 4z   =>   x = (3 - 4z)/4
    3x - 2y = 2 - z

From the above it follows that y = 1/8 - z. This leads to the parametric equations

    x = 3/4 - t, y = 1/8 - t, z = t

and the corresponding vector equation is

    (x, y, z) = (3/4, 1/8, 0) + t(-1, -1, 1)
15. If k ≠ 6, then the equations x - y = 3, 2x - 2y = k represent nonintersecting parallel lines and so the system of equations has no solution. If k = 6, the two lines coincide and so there are infinitely many solutions: x = 3 + t, y = t, where -∞ < t < ∞.
16. No solutions if k ≠ 3; infinitely many solutions if k = 3.
17. The augmented matrix of the system is

    [3 -2 | -1]
    [4  5 |  3]
18. The augmented matrix is [!
0 2
-1 4
1 -1
7 3: 2
19. The augmented matrix of the system is

    2  0  -1   1
    3  2   0  -1
    3  1   7   0
The augmented matrix is [
0
0 : 11
20.
1 0 12 .
0
1 ! 3 J
21. A system of equations corresponding to the given augmented matrix is:

    2x1 = 0
    3x1 - 4x2 = 0
    x2 = 1
22. A system of equations corresponding to the given augmented matrix is:

    3x1 - 2x3 = 5
    7x1 + x2 + 4x3 = -3
    -2x2 + x3 = 7
23. A system of equations corresponding to the given augmented matrix is:

    7x1 + 2x2 + x3 - 3x4 = 5
    x1 + 2x2 + 4x3 = 1
24. x1 = 7, x2 = -2, x3 = 3, x4 = 4
25. (a) B is obtained from A by adding 2 times the first row to the second row. A is obtained from B by adding -2 times the first row to the second row.
(b) B is obtained from A by multiplying the first row by 1/2. A is obtained from B by multiplying the first row by 2.
26. (a) B is obtained from A by interchanging the first and third rows. A is obtained from B by interchanging the first and third rows.
(b) B is obtained from A by multiplying the third row by 5. A is obtained from B by multiplying the third row by 1/5.
27. 2x + 3y + z = 7
    2x + y + 3z = 9
    4x + 2y + 5z = 16

28. 2x + 3y + 12z = 4
    8x + 9y + 6z = 8
    6x + 6y + 12z = 7

29. x + y + z = 12
    2x + y + 2z = 5
    x - z = -1

30. x + y + z = 3
    y + z = 10
    -y + z = 6

31. (a) 3c1 + c2 + 2c3 - c4 = 5
        c2 + 3c3 + 2c4 = 6
        -c1 + c2 + 5c4 = 5
        2c1 + c2 + 2c3 = 5

    (b) 3c1 + c2 + 2c3 - c4 = 8
        c2 + 3c3 + 2c4 = 3
        -c1 + c2 + 5c4 = -2
        2c1 + c2 + 2c3 = 6
    (c) 3c1 + c2 + 2c3 - c4 = 4
        c2 + 3c3 + 2c4 = 4
        -c1 + c2 + 5c4 = 6
        2c1 + c2 + 2c3 = 2

32. (a) c1 + c2 + 2c3 = 2
        2c1 - 2c3 + 5c4 = -2
        -4c1 + 2c2 - c3 + 4c4 = -8
        2c2 + c3 - c4 = 0
        5c1 - c2 + 3c3 + c4 = 12

    (b) c1 + c2 + 2c3 = 5
        2c1 - 2c3 + 5c4 = -3
        -4c1 + 2c2 - c3 + 4c4 = -9
        2c2 + c3 - c4 = 4
        5c1 - c2 + 3c3 + c4 = 11

    (c) c1 + c2 + 2c3 = 4
        2c1 - 2c3 + 5c4 = -4
        -4c1 + 2c2 - c3 + 4c4 = 2
        2c2 + c3 - c4 = 0
        5c1 - c2 + 3c3 + c4 = 24
DISCUSSION AND DISCOVERY
D1. (a) There is no common intersection point.
(b) There is exactly one common point of intersection.
(c) The three lines coincide.
D2. A consistent system has at least one solution; moreover, it either has exactly one solution or it
has infinitely many solutions.
If the system has exactly one solution, then there are two possibilities. If the three lines are all distinct but have a common point of intersection, then any one of the three equations can be discarded without altering the solution set. On the other hand, if two of the lines coincide, then one of the corresponding equations can be discarded without altering the solution set.
If the system has infinitely many solutions, then the three lines coincide. In this case any one
(in fact any two) of the equations can be discarded without altering the solution set.
D3. Yes. If B can be obtained from A by multiplying a row by a nonzero constant, then A can be
obtained from B by multiplying the same row by the reciprocal of that constant. If B can be
obtained from A by interchanging two rows, then A can be obtained from B by interchanging the
same two rows. Finally, if B can be obtained from A by adding a multiple of a row to another
row, then A can be obtained from B by subtracting the same multiple of that row from the other
row.
D4. If k = l = m = 0, then x = y = 0 is a solution of all three equations and so the system is consistent.
If the system has exactly one solution then the three lines intersect at the origin.
D5. The parabola y = ax^2 + bx + c will pass through the points (1, 1), (2, 4), and (-1, 1) if and only if

    a + b + c = 1
    4a + 2b + c = 4
    a - b + c = 1

Since there is a unique parabola passing through any three non-collinear points, one would expect this system to have exactly one solution.
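That expectation can be confirmed by row reduction. The sketch below (plain Python with exact Fraction arithmetic; the gauss_jordan helper is illustrative, not from the text) reduces the augmented matrix of this system and recovers the unique solution a = 1, b = 0, c = 0, i.e. the parabola y = x^2:

```python
from fractions import Fraction

def gauss_jordan(aug):
    """Reduce an augmented matrix to reduced row echelon form, exactly."""
    m = [[Fraction(v) for v in row] for row in aug]
    rows, cols = len(m), len(m[0])
    r = 0
    for c in range(cols - 1):
        piv = next((i for i in range(r, rows) if m[i][c] != 0), None)
        if piv is None:
            continue
        m[r], m[piv] = m[piv], m[r]            # move a nonzero pivot into place
        m[r] = [v / m[r][c] for v in m[r]]     # scale pivot row to a leading 1
        for i in range(rows):
            if i != r and m[i][c] != 0:        # clear the rest of the column
                m[i] = [a - m[i][c] * b for a, b in zip(m[i], m[r])]
        r += 1
    return m

# a + b + c = 1, 4a + 2b + c = 4, a - b + c = 1
rref = gauss_jordan([[1, 1, 1, 1], [4, 2, 1, 4], [1, -1, 1, 1]])
a, b, c = (row[-1] for row in rref)
print(a, b, c)  # 1 0 0, so the parabola is y = x^2
```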
D6. The parabola y = ax^2 + bx + c passes through the points (x1, y1), (x2, y2), and (x3, y3) if and only if

    a*x1^2 + b*x1 + c = y1
    a*x2^2 + b*x2 + c = y2
    a*x3^2 + b*x3 + c = y3

i.e. if and only if a, b, and c satisfy the linear system whose augmented matrix is

    [x1^2  x1  1 | y1]
    [x2^2  x2  1 | y2]
    [x3^2  x3  1 | y3]
D7. To say that the equations have the same solution set is the same thing as to say that they represent the same line. From the first equation the x1-intercept of the line is x1 = c, and from the second equation the x1-intercept is x1 = d; thus c = d. If the line is vertical then k = l = 0. If the line is not vertical then the first equation determines the slope in terms of k and the second determines it in terms of l; thus k = l. In summary, we conclude that c = d and k = l; thus the two equations are identical.
D8. (a) True. If there are n ≥ 2 columns, then the first n - 1 columns correspond to the coefficients of the variables that appear in the equations and the last column corresponds to the constants that appear on the right-hand side of the equal sign.
(b) False. Referring to Example 6: the sequence of linear systems appearing in the left-hand column all have the same solution set, but the corresponding augmented matrices appearing in the right-hand column are all different.
(c) False. Multiplying a row of the augmented matrix by zero corresponds to multiplying both sides of the corresponding equation by zero. But this is equivalent to discarding one of the equations!
(d) True. If the system is consistent, one can solve for two of the variables in terms of the third or (if further redundancy is present) for one of the variables in terms of the other two. In any case, there is at least one "free" variable that can be made into a parameter in describing the solution set of the system. Thus if the system is consistent, it will have infinitely many solutions.
D9. (a) True. A plane in 3-space corresponds to a linear equation in three variables. Thus a set of
four planes corresponds to a system of four linear equations in three variables. If there is
enough redundancy in the equations so that the system reduces to a system of two indepen-
dent equations, then the solution set will be a line. For example, four vertical planes each
containing the z-axis and intersecting the xy-plane in four distinct lines.
(b) False. Interchanging the first two columns corresponds to interchanging the coefficients of the first two variables. This results in a different system with a different solution set. [It is okay to interchange rows since this corresponds to interchanging equations and therefore does not alter the solution set.]
(c) False. If there is enough redundancy so that the system reduces to a system of only two (or fewer) equations, and if these equations are consistent, then the original system will be consistent.
(d) True. Such a system will always have the trivial solution x1 = x2 = ... = xn = 0.
EXERCISE SET 2.2
1. The matrices (a), (c), and (d) are in reduced row echelon form. The matrix (b) does not satisfy property 4 of the definition, and the matrix (e) does not satisfy property 2.
2. The matrices (c), (d), and (e) are in reduced row echelon form. The matrix (a) does not satisfy property 3 of the definition, and the matrix (b) does not satisfy property 4.
3. The matrices (a) and (b) are in row echelon form. The matrix (c) does not satisfy property 1 or
property 3 of the definition.
4. The matrices (a) and {b) are in row echelon form. The matrix (c) does not satisfy property 2.
5. The matrices (a) and (c) are in reduced row echelon form. The matrix (b) does not satisfy property 3 and thus is not in row echelon form or reduced row echelon form.
6. The matrix (c) is in reduced row echelon form. The matrix (a) is in row echelon form but does not satisfy property 4. The matrix (b) does not satisfy property 3 and thus is not in row echelon form or reduced row echelon form.
7. The possible 2 by 2 reduced row echelon forms are

    [1 0]    [1 *]    [0 1]        [0 0]
    [0 1],   [0 0],   [0 0],  and  [0 0]

with any real number substituted for the *.
8. The possible 3 by 3 reduced row echelon forms are

    [1 0 0]  [1 0 *]  [1 * 0]  [0 1 0]  [1 * *]  [0 1 *]  [0 0 1]  [0 0 0]
    [0 1 0]  [0 1 *]  [0 0 1]  [0 0 1]  [0 0 0]  [0 0 0]  [0 0 0]  [0 0 0]
    [0 0 1]  [0 0 0]  [0 0 0]  [0 0 0]  [0 0 0]  [0 0 0]  [0 0 0]  [0 0 0]

with any real numbers substituted for the *'s.
9. The given matrix corresponds to the system

    x1 = -3
    x2 = 0
    x3 = 7

which clearly has the unique solution x1 = -3, x2 = 0, x3 = 7.
10. The given matrix corresponds to the system

    x1 + 2x2 + 2x4 = -1
    x3 + 3x4 = 4

Solving these equations for the leading variables (x1 and x3) in terms of the free variables (x2 and x4) results in x1 = -1 - 2x2 - 2x4 and x3 = 4 - 3x4. Thus, by assigning arbitrary values to x2 and x4, the solution set of the system can be represented by the parametric equations

    x1 = -1 - 2s - 2t, x2 = s, x3 = 4 - 3t, x4 = t

where -∞ < s, t < ∞. The corresponding vector form is

    (x1, x2, x3, x4) = (-1, 0, 4, 0) + s(-2, 1, 0, 0) + t(-2, 0, -3, 1)
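The vector form can be verified directly (a Python sketch; the two equations are those displayed above):

```python
def solution(s, t):
    # (x1, x2, x3, x4) = (-1, 0, 4, 0) + s(-2, 1, 0, 0) + t(-2, 0, -3, 1)
    return (-1 - 2*s - 2*t, s, 4 - 3*t, t)

for s in range(-2, 3):
    for t in range(-2, 3):
        x1, x2, x3, x4 = solution(s, t)
        assert x1 + 2*x2 + 2*x4 == -1   # first equation
        assert x3 + 3*x4 == 4           # second equation
print("vector form verified")
```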
11. The given matrix corresponds to the system

    x1 - 6x2 + 3x5 = -2
    x3 + 4x5 = 7
    x4 + 5x5 = 8

where the equation corresponding to the zero row has been omitted. Solving these equations for the leading variables (x1, x3, and x4) in terms of the free variables (x2 and x5) results in x1 = -2 + 6x2 - 3x5, x3 = 7 - 4x5, and x4 = 8 - 5x5. Thus, assigning arbitrary values to x2 and x5, the solution set can be represented by the parametric equations

    x1 = -2 + 6s - 3t, x2 = s, x3 = 7 - 4t, x4 = 8 - 5t, x5 = t

where -∞ < s, t < ∞. The corresponding vector form is

    (x1, x2, x3, x4, x5) = (-2, 0, 7, 8, 0) + s(6, 1, 0, 0, 0) + t(-3, 0, -4, -5, 1)
12. The given matrix corresponds to the system

    x1 - 3x2 = 0
    x3 = 0
    0 = 1

which is clearly inconsistent since the last equation is not satisfied for any values of x1, x2, and x3.
13. The given matrix corresponds to the system

    x1 - 7x4 = 8
    x2 + 3x4 = 2
    x3 + x4 = -5

Solving these equations for the leading variables in terms of the free variable results in x1 = 8 + 7x4, x2 = 2 - 3x4, and x3 = -5 - x4. Thus, making x4 into a parameter, the solution set of the system can be represented by the parametric equations

    x1 = 8 + 7t, x2 = 2 - 3t, x3 = -5 - t, x4 = t

where -∞ < t < ∞. The corresponding vector form is

    (x1, x2, x3, x4) = (8, 2, -5, 0) + t(7, -3, -1, 1)
14. The given matrix corresponds to the single equation x1 + 2x2 + 2x4 - x5 = 3, in which x3 does not appear. Solving for x1 in terms of the other variables results in x1 = 3 - 2x2 - 2x4 + x5. Thus, making x2, x3, x4, and x5 into parameters, the solution set of the equation is given by

    x1 = 3 - 2s - 2u + v, x2 = s, x3 = t, x4 = u, x5 = v

where -∞ < s, t, u, v < ∞. The corresponding (column) vector form is

    [x1]   [3]     [-2]     [0]     [-2]     [1]
    [x2]   [0]     [ 1]     [0]     [ 0]     [0]
    [x3] = [0] + s [ 0] + t [1] + u [ 0] + v [0]
    [x4]   [0]     [ 0]     [0]     [ 1]     [0]
    [x5]   [0]     [ 0]     [0]     [ 0]     [1]
15. The system of equations corresponding to the given matrix is

    x1 - 3x2 + 4x3 = 7
    x2 + 2x3 = 2
    x3 = 5

Starting with the last equation and working up, it follows that x3 = 5, x2 = 2 - 2x3 = 2 - 10 = -8, and x1 = 7 + 3x2 - 4x3 = 7 - 24 - 20 = -37.

Alternate solution via Gauss-Jordan (starting from the original matrix and reducing further):

    [1 -3 4 | 7]
    [0  1 2 | 2]
    [0  0 1 | 5]

Add -2 times row 3 to row 2. Add -4 times row 3 to row 1.

    [1 -3 0 | -13]
    [0  1 0 |  -8]
    [0  0 1 |   5]

Add 3 times row 2 to row 1.

    [1 0 0 | -37]
    [0 1 0 |  -8]
    [0 0 1 |   5]

From this we conclude (as before) that x1 = -37, x2 = -8, and x3 = 5.
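Back substitution as used here is easy to mechanize. The sketch below (plain Python; the back_substitute helper is illustrative, not from the text) solves the upper triangular system of this exercise and reproduces x1 = -37, x2 = -8, x3 = 5:

```python
def back_substitute(U, b):
    """Solve Ux = b for an upper triangular U with nonzero diagonal."""
    n = len(b)
    x = [0] * n
    for i in range(n - 1, -1, -1):
        # subtract the contributions of the already-solved variables
        s = sum(U[i][j] * x[j] for j in range(i + 1, n))
        x[i] = (b[i] - s) / U[i][i]
    return x

# x1 - 3*x2 + 4*x3 = 7,  x2 + 2*x3 = 2,  x3 = 5
U = [[1, -3, 4], [0, 1, 2], [0, 0, 1]]
b = [7, 2, 5]
print(back_substitute(U, b))  # [-37.0, -8.0, 5.0]
```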
16. The system of equations corresponding to the given matrix is

    x1 + 8x3 - 5x4 = 6
    x2 + 4x3 - 9x4 = 3
    x3 + x4 = 2

Starting with the last equation and working up, we have x3 = 2 - x4, x2 = 3 - 4x3 + 9x4 = 3 - 4(2 - x4) + 9x4 = -5 + 13x4, and x1 = 6 - 8x3 + 5x4 = 6 - 8(2 - x4) + 5x4 = -10 + 13x4. Finally, assigning an arbitrary value to x4, the solution set can be described by the parametric equations x1 = -10 + 13t, x2 = -5 + 13t, x3 = 2 - t, x4 = t.

Alternate solution via Gauss-Jordan (starting from the original matrix and reducing further):

    [1 0 8 -5 | 6]
    [0 1 4 -9 | 3]
    [0 0 1  1 | 2]

Add -4 times row 3 to row 2. Add -8 times row 3 to row 1.

    [1 0 0 -13 | -10]
    [0 1 0 -13 |  -5]
    [0 0 1   1 |   2]

From this we conclude (as before) that x1 = -10 + 13t, x2 = -5 + 13t, x3 = 2 - t, x4 = t.
17. The corresponding system of equations is

    x1 + 7x2 - 2x3 - 8x5 = -3
    x3 + x4 + 6x5 = 5
    x4 + 3x5 = 9

Starting with the last equation and working up, it follows that x4 = 9 - 3x5, x3 = 5 - x4 - 6x5 = 5 - (9 - 3x5) - 6x5 = -4 - 3x5, and x1 = -3 - 7x2 + 2x3 + 8x5 = -3 - 7x2 + 2(-4 - 3x5) + 8x5 = -11 - 7x2 + 2x5. Finally, assigning arbitrary values to x2 and x5, the solution set can be described by

    x1 = -11 - 7s + 2t, x2 = s, x3 = -4 - 3t, x4 = 9 - 3t, x5 = t
18. The corresponding system

    x1 - 3x2 + 7x3 = 1
    x2 + 4x3 = 0
    0 = 1

is inconsistent since there are no values of x1, x2, and x3 which satisfy the third equation.
19. The corresponding system is

    x1 + x2 - 3x3 + 2x4 = 1
    x2 + 4x3 = 3
    x4 = 2

Starting with the last equation, we have x4 = 2, x2 = 3 - 4x3, and x1 = 1 - x2 + 3x3 - 2x4 = 1 - (3 - 4x3) + 3x3 - 2(2) = -6 + 7x3. Thus, making x3 into a parameter, the solution set can be described by the equations

    x1 = -6 + 7t, x2 = 3 - 4t, x3 = t, x4 = 2
20. The corresponding system is

    x1 + 5x3 + 3x4 = 2
    x2 - 2x3 + 4x4 = -7
    x3 + x4 = 3

Thus x4 is a free variable and, setting x4 = t, we have x3 = 3 - t, x2 = -7 + 2(3 - t) - 4t = -1 - 6t, and x1 = 2 - 5(3 - t) - 3t = -13 + 2t.
21. Starting with the first equation and working down, we have x1 = 2, x2 = (1/3)(5 - x1) = (1/3)(5 - 2) = 1, and x3 = (1/4)(12 - 3x1 - 2x2) = (1/4)(12 - 6 - 2) = 1.

22. x1 = -1, x2 = (4 - 2x1)/3 = (4 + 2)/3 = 2, x3 = 5 - x1 - 4x2 = 5 + 1 - 8 = -2
23. The augmented matrix of the system is

    [ 1  1 2 |  8]
    [-1 -2 3 |  1]
    [ 3 -7 4 | 10]

Add row 1 to row 2. Add -3 times row 1 to row 3.

    [1   1  2 |   8]
    [0  -1  5 |   9]
    [0 -10 -2 | -14]

Multiply row 2 by -1. Add 10 times the new row 2 to row 3.

    [1 1   2 |    8]
    [0 1  -5 |   -9]
    [0 0 -52 | -104]

Multiply row 3 by -1/52.

    [1 1  2 |  8]
    [0 1 -5 | -9]
    [0 0  1 |  2]

Add 5 times row 3 to row 2. Add -2 times row 3 to row 1.

    [1 1 0 | 4]
    [0 1 0 | 1]
    [0 0 1 | 2]

Add -1 times row 2 to row 1.

    [1 0 0 | 3]
    [0 1 0 | 1]
    [0 0 1 | 2]

Thus the solution is x1 = 3, x2 = 1, x3 = 2.
24. The augmented matrix of the system is
H
2
5
!
Multiply row 1 by Add 2 t imes the new row 1 to row 2. Add -8 times the new row 1 to row 3.
I Q 7 4 l
fl 1 1
lo -7 - 4 -1
Multiply row 2 by Add 7 t imes t he new row 2 to row 3.

l 1
!]
1
4
7
0 0
Add - 1 times row 2 to row 1.

0
3
-!]
7
1
4
1
0 0
Finally, assigning an arbitrary value to the free variable x3, the solution set is represented by the parametric equations
25. The augmented matrix of the system is

    [ 1 -1  2 -1 | -1]
    [ 2  1 -2 -2 | -2]
    [-1  2 -4  1 |  1]
    [ 3  0  0 -3 | -3]

Add -2 times row 1 to row 2. Add row 1 to row 3. Add -3 times row 1 to row 4.

    [1 -1  2 -1 | -1]
    [0  3 -6  0 |  0]
    [0  1 -2  0 |  0]
    [0  3 -6  0 |  0]

Multiply row 2 by 1/3. Add -1 times the new row 2 to row 3. Add -3 times the new row 2 to row 4.

    [1 -1  2 -1 | -1]
    [0  1 -2  0 |  0]
    [0  0  0  0 |  0]
    [0  0  0  0 |  0]

Add row 2 to row 1.

    [1 0 0 -1 | -1]
    [0 1 -2 0 |  0]
    [0 0  0 0 |  0]
    [0 0  0 0 |  0]

Thus, setting z = s and w = t, the solution set of the system is represented by the parametric equations

    x = -1 + t, y = 2s, z = s, w = t
26. The augmented matrix of the system is


6 3 5
Interchange rows 1 and 2. Multiply the new row 1 by 1/3. Add -6 times the new row 1 to row 3.
[
1 2 -1
0 -2 3 .2
0 -6 9 9
Multiply row 2 by -1/2. Add 6 times the new row 2 to row 3.

2 -1
1 -1
0 0 3
It is now clear from the last row that the system is inconsistent.
27. The augmented matrix of the system is
28.
Multiply row 3 by 2, then add -1 times row 1 to row 2 and -3 times row 1 to the new row 3. The last two rows correspond to the (incompatible) equations 4x2 = 3 and 13x2 = 8; thus the system is inconsistent.
The augmented matrix of the system is

2 -1
3 2
1 3 11
-4 2 30
and the reduced row echelon form of this matrix is
Thus the system is inconsistent.
29. As an intermediate step in Exercise 23, the augmented matrix of the system was reduced to

    [1 1  2 |  8]
    [0 1 -5 | -9]
    [0 0  1 |  2]

Starting with the last row and working up, it follows that x3 = 2, x2 = -9 + 5x3 = -9 + 10 = 1, and x1 = 8 - x2 - 2x3 = 8 - 1 - 4 = 3.
30. As an intermediate step in Exercise 24, the augmented matrix of the system was reduced to
[
1 1 1
0 1
0 0 0
i]
Starting with the last equation and working up, it follows that x2 = - a.nd :::1 = -x2- X3 =

x
3
=

Finally, assigning an arbitrary value to x3, the solution set can be
described by the parametric equations

31. As an intermediate step in Exercise 25, the augmented matrix of the system was reduced to

    [1 -1  2 -1 | -1]
    [0  1 -2  0 |  0]
    [0  0  0  0 |  0]
    [0  0  0  0 |  0]

It follows that y = 2z and x = -1 + y - 2z + w = -1 + w. Thus, setting z = s and w = t, the solution set of the system is represented by the parametric equations x = -1 + t, y = 2s, z = s, w = t.
32. As in Exercise 26, the augmented matrix of the system can be reduced to
[
1 2 - 1
0 1 -!
0 0 0
~ ]
- 1
3
and from this we can immediately conclude that the system has no solution.
33. (a) There are more unknowns than equations in this homogeneous system. Thus, by Theorem 2.2.3, there are infinitely many nontrivial solutions.
(b) From back substitution it is clear that x1 = x2 = x3 = 0. This system has only the trivial solution.
34. (a) There are more unknowns than equations in this homogeneous system; thus there are infinitely many nontrivial solutions.
(b) The second equation is a multiple of the first. Thus the system reduces to only one equation in two unknowns and there are infinitely many solutions.
35. The augmented matrix of the homogeneous system is

    [2 1 3 | 0]
    [1 2 0 | 0]
    [0 1 2 | 0]

Interchange rows 1 and 2. Add -2 times the new row 1 to the new row 2.

    [1  2 0 | 0]
    [0 -3 3 | 0]
    [0  1 2 | 0]

Multiply row 2 by -1/3. Add -1 times row 2 to row 3. Multiply the new row 3 by 1/3.

    [1 2  0 | 0]
    [0 1 -1 | 0]
    [0 0  1 | 0]

The last row of this matrix corresponds to x3 = 0 and, from back substitution, it follows that x2 = x3 = 0 and x1 = -2x2 = 0. This system has only the trivial solution.
36. The augmented matrix of the homogeneous system is

    [3  1 1  1 | 0]
    [5 -1 1 -1 | 0]

Multiply row 2 by 3. Add -5 times row 1 to the new row 2, then multiply this last row 2 by -1/2.

    [3 1 1 1 | 0]
    [0 4 1 4 | 0]

Let x3 = 4s, x4 = t. Then, using back substitution, we have 4x2 = -x3 - 4x4 = -4s - 4t and 3x1 = -x2 - x3 - x4 = s + t - 4s - t = -3s. Thus the solution set of the system can be described by the parametric equations x1 = -s, x2 = -s - t, x3 = 4s, x4 = t.

37.
The augmented matrix of the homogeneous system is

    [ 0 2  2  4 | 0]
    [ 1 0 -1 -3 | 0]
    [-2 1  3 -2 | 0]

Interchange rows 1 and 2. Add 2 times the new row 1 to row 3.

    [1 0 -1 -3 | 0]
    [0 2  2  4 | 0]
    [0 1  1 -8 | 0]

Multiply row 2 by 1/2. Add -1 times the new row 2 to row 3. Multiply the new row 3 by -1/10.

    [1 0 -1 -3 | 0]
    [0 1  1  2 | 0]
    [0 0  0  1 | 0]

Add -2 times row 3 to row 2. Add 3 times row 3 to row 1.

    [1 0 -1 0 | 0]
    [0 1  1 0 | 0]
    [0 0  0 1 | 0]

This is the reduced row echelon form of the matrix. From this we see that y (the third variable) is a free variable and, on setting y = t, the solution set of the system can be described by the parametric equations w = t, x = -t, y = t, z = 0.
38. The augmented matrix of the homogeneous system is

    [ 2 -1 -3 | 0]
    [-1  2 -3 | 0]
    [ 1  1  4 | 0]

and the reduced row echelon form of this matrix is

    [1 0 0 | 0]
    [0 1 0 | 0]
    [0 0 1 | 0]

Thus the system has only the trivial solution x = y = z = 0.
39. The reduced row echelon form of the augmented matrix of this homogeneous system is

    [1 0 -7/2 5/2 | 0]
    [0 1    3  -2 | 0]
    [0 0    0   0 | 0]
    [0 0    0   0 | 0]

Thus, setting w = 2s and x = 2t, the solution set of the system can be described by the parametric equations u = 7s - 5t, v = -6s + 4t, w = 2s, x = 2t.

40.
The augmented matrix of the homogeneous system is
I a 0 I 0
1 4 2 0 0
0 - 2 -2 -1
0
2 - 4 I 1 0
1 - 2 -1 0
and the reduced row echelon form of this matrix is

    [1 0 0 0 | 0]
    [0 1 0 0 | 0]
    [0 0 1 0 | 0]
    [0 0 0 1 | 0]
    [0 0 0 0 | 0]

Thus the system reduces to only four equations, and these equations have only the trivial solution x1 = x2 = x3 = x4 = 0.
41. We will solve the system by Gaussian elimination, i.e. by reducing the augmented matrix of the system to a row-echelon form. The augmented matrix of the original system is
~
-1 3 4
~ ]
0 -2 7
-3 5
1 4 -1
Interchange rows 1 and 2. Add - 2 times the new row 1 to the new row 2. Add -3 times the new
row 1 to row 3. Add -2 times the new row 1 to row 4.
{ ~
0 -2 7
~ ]
-1 7 - 10
-3 7 -16
1 8 - 10
Multiply row 2 by -1. Add 3 times the new row 2 to row 3. Add -1 times the new row 2 to row 4.
0 -2 7
1 - 7 10
0 - 14 14
0 15
~ 2 0
Multiply row 3 by -1/14. Add -15 times the new row 3 to row 4. Multiply the new row 4 by the reciprocal of its leading entry.
0 - 2 7
1 - 7 10
0 1 -1
0 0 1
~ ]
This is a row-echelon form for the augmented matrix. From the last row we conclude that I4 = 0, and from back substitution it follows that I3 = I2 = I1 = 0 also. This system has only the trivial solution.
42. The augmentcn matrix of the homogeneous system is
43.
1
1
[
~ ~ ~ ~
1 1 -2 0 -1
2 2 - 1 0 1
and the reduced row echelon form of this matrix is
[
~ ~ ~ ~ ~ ~ ]
0 0 0 1 0 0
0 0 0 0 0 0
~ ]
From this we conclude that the second and fifth variables are free variables, and that the solution set of the system can be described by the parametric equations

    z1 = -s - t, z2 = s, z3 = -t, z4 = 0, z5 = t
The augmented matrix of the system is

    [1  2        3 |     4]
    [3 -1        5 |     2]
    [4  1 a^2 - 14 | a + 2]
Add -3 times row 1 to row 2. Add -4 times row 1 to row 3.
    [1  2        3 |      4]
    [0 -7       -4 |    -10]
    [0 -7 a^2 - 26 | a - 14]
Multiply row 2 by -1. Add the new row 2 to row 3.

    [1 2        3 |     4]
    [0 7        4 |    10]
    [0 0 a^2 - 22 | a - 4]
From the last row we conclude that z = (a - 4)/(a^2 - 22) and, from back substitution, it is clear that y and x are uniquely determined as well. This system has exactly one solution for every value of a.
44. The augmented matrix of the system is

    [1  2       1 | 2]
    [2 -2       3 | 1]
    [1  2 a^2 - 3 | a]

Add -2 times the first row to the second row. Add -1 times the first row to the third row.

    [1  2       1 |     2]
    [0 -6       1 |    -3]
    [0  0 a^2 - 4 | a - 2]

The last row corresponds to (a^2 - 4)z = a - 2. If a = -2, this becomes 0 = -4 and so the system is inconsistent. If a = 2, the last equation becomes 0 = 0; thus the system reduces to only two equations and, with z serving as a free variable, has infinitely many solutions. If a ≠ ±2 the system has a unique solution.
45. The augmented matrix of the system is

    [1       2 |     3]
    [2 a^2 - 5 | a + 3]

Add -2 times row 1 to row 2.

    [1       2 |     3]
    [0 a^2 - 9 | a - 3]

If a = 3, then the last row corresponds to 0 = 0 and the system has infinitely many solutions. If a = -3, the last row corresponds to 0 = -6 and the system is inconsistent. If a ≠ ±3, then y = (a - 3)/(a^2 - 9) = 1/(a + 3) and, from back substitution, x is uniquely determined as well; the system has exactly one solution in this case.
46. The augmented matrix of the system is

    [1 1       7 |  -7]
    [2 3      17 | -16]
    [1 2 a^2 + 1 |  3a]

This reduces to

    [1 1       7 |     -7]
    [0 1       3 |     -2]
    [0 0 a^2 - 9 | 3a + 9]

The last row corresponds to (a^2 - 9)z = 3a + 9. If a = -3 this becomes 0 = 0, and the system will have infinitely many solutions. If a = 3, then the last row corresponds to 0 = 18; the system is inconsistent. If a ≠ ±3, then z = (3a + 9)/(a^2 - 9) = 3/(a - 3) and, from back substitution, y and x are uniquely determined as well; the system has exactly one solution.
47. (a) If x + y + z = 1, then 2x + 2y + 2z = 2 ≠ 4; thus the system has no solution. The planes represented by the two equations do not intersect (they are parallel).
(b) If x + y + z = 0, then 2x + 2y + 2z = 0 also; thus the system is redundant and has infinitely many solutions. Any set of values of the form x = -s - t, y = s, and z = t will satisfy both equations. The planes represented by the equations coincide.
48. To reduce the given matrix to reduced row-echelon form without introducing fractions:
Add -1 times row 1 to row 3. Interchange rows 1 and 3. Add -2 times row 1 to row 3.
[ ~ = ~ _:]
Add -3 times row 2 to row 3. Interchange rows 2 and 3. Add 2 times row 2 to row 3. Multiply row 3 by -1/37.

[ ~
3 2]
1 - 22
0 1
Add 22 times row 3 to row 2. Add -2 times row 3 to row l. Add - 3 times row 2 to row 1.
This is the reduced row-echelon form.
49. The system is linear in the variables x = sin α, y = cos β, and z = tan γ:

    2x - y + 3z = 3
    4x + 2y - 2z = 2
    6x - 3y + z = 9

We solve the system by performing the indicated row operations on the augmented matrix

    [2 -1  3 | 3]
    [4  2 -2 | 2]
    [6 -3  1 | 9]

Add -2 times row 1 to row 2. Add -3 times row 1 to row 3.

    [2 -1  3 |  3]
    [0  4 -8 | -4]
    [0  0 -8 |  0]

From this we conclude that tan γ = z = 0 and, from back substitution, that cos β = y = -1 and sin α = x = 1. Thus α = π/2, β = π, and γ = 0.
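The same computation can be sketched in Python (the substitutions x = sin α, y = cos β, z = tan γ are those used above; math.asin, math.acos, and math.atan recover the angles on their principal ranges):

```python
import math

# Back substitution in the reduced system:
#   -8z = 0,  4y - 8z = -4,  2x - y + 3z = 3
z = 0.0
y = (-4 + 8*z) / 4
x = (3 + y - 3*z) / 2
assert (x, y, z) == (1.0, -1.0, 0.0)

alpha = math.asin(x)   # pi/2
beta = math.acos(y)    # pi
gamma = math.atan(z)   # 0
print(alpha, beta, gamma)
```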
50. This system is linear in the variables X = x^2, Y = y^2, and Z = z^2. The reduced row-echelon form of its augmented matrix is

    [1 0 0 | 1]
    [0 1 0 | 3]
    [0 0 1 | 2]

It follows from this that X = 1, Y = 3, and Z = 2; thus x = 1, y = √3, and z = √2.
51. This system is homogeneous with augmented matrix

    [2 - λ    -1     0 | 0]
    [  2   1 - λ     1 | 0]
    [ -2    -2   1 - λ | 0]

If λ = 1, the augmented matrix is

    [ 1 -1 0 | 0]
    [ 2  0 1 | 0]
    [-2 -2 0 | 0]

and the reduced row echelon form of this matrix is

    [1 0 0 | 0]
    [0 1 0 | 0]
    [0 0 1 | 0]

Thus x = y = z = 0, i.e. the system has only the trivial solution.

If λ = 2, the augmented matrix is

    [ 0 -1  0 | 0]
    [ 2 -1  1 | 0]
    [-2 -2 -1 | 0]

and the reduced row echelon form of this matrix is

    [1 0 1/2 | 0]
    [0 1  0  | 0]
    [0 0  0  | 0]

Thus the system has infinitely many solutions: x = -t/2, y = 0, z = t, where -∞ < t < ∞.
52. The augmented matrix of the system reduces to the following form:

    [1 1 2 |         a]
    [0 1 1 |     a - b]
    [0 0 0 | c - a - b]

Thus the system is consistent if and only if c - a - b = 0.
53. (a) Starting with the given system and proceeding as directed, we have

    0.0001x + 1.000y = 1.000
    1.000x - 1.000y = 0.000

    1.000x + 10000y = 10000
    1.000x - 1.000y = 0.000

    1.000x + 10000y = 10000
            -10000y = -10000

which results in y ≈ 1.000 and x ≈ 0.000.

(b) If we first interchange rows and then proceed as directed, we have

    1.000x - 1.000y = 0.000
    0.0001x + 1.000y = 1.000

    1.000x - 1.000y = 0.000
             1.000y = 1.000

which results in y ≈ 1.000 and x ≈ 1.000.
(c) The exact solution is x = 100000/49999 ≈ 2.00004 and y = 49997/49999 ≈ 0.99996.

The approximate solution without using partial pivoting is

    0.00002x + 1.000y = 1.000
    1.000x + 1.000y = 3.000

    1.000x + 50000y = 50000
    1.000x + 1.000y = 3.000

    1.000x + 50000y = 50000
            -50000y = -50000

which results in y ≈ 1.000 and x ≈ 0.000.

The approximate solution using partial pivoting is

    0.00002x + 1.000y = 1.000
    1.000x + 1.000y = 3.000

    1.000x + 1.000y = 3.000
    0.00002x + 1.000y = 1.000

    1.000x + 1.000y = 3.000
             1.000y = 1.000

which results in y ≈ 1.000 and x ≈ 2.000.
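The effect of partial pivoting can be reproduced by simulating 4-significant-digit arithmetic (a Python sketch; round_sig and solve_2x2 are illustrative helpers, not from the text):

```python
import math

def round_sig(v, d=4):
    """Round v to d significant digits, mimicking limited-precision machine arithmetic."""
    if v == 0:
        return 0.0
    return round(v, d - 1 - int(math.floor(math.log10(abs(v)))))

def solve_2x2(rows, pivot):
    """Gaussian elimination on a 2x2 augmented system, rounding every intermediate result."""
    r1, r2 = rows
    if pivot and abs(r2[0]) > abs(r1[0]):   # partial pivoting: put the larger pivot on top
        r1, r2 = r2, r1
    m = round_sig(r2[0] / r1[0])
    r2 = [round_sig(r2[i] - round_sig(m * r1[i])) for i in range(3)]
    y = round_sig(r2[2] / r2[1])
    x = round_sig((r1[2] - round_sig(r1[1] * y)) / r1[0])
    return x, y

system = [[0.00002, 1.0, 1.0], [1.0, 1.0, 3.0]]
print(solve_2x2(system, pivot=False))  # without pivoting: x is lost to roundoff
print(solve_2x2(system, pivot=True))   # with pivoting: x close to the exact 2.00004
```

Without pivoting the tiny pivot 0.00002 produces huge multipliers that swamp the data; interchanging the rows first keeps the multipliers small.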
54. (a) Solving the system as directed, we have

    0.21x + 0.33y = 0.54
    0.70x + 0.24y = 0.94

    0.70x + 0.24y = 0.94
    0.21x + 0.33y = 0.54

    1.00x + 0.34y = 1.34
    0.21x + 0.33y = 0.54

    1.00x + 0.34y = 1.34
            0.26y = 0.26

resulting in y ≈ 1.00 and x ≈ 1.00. The exact solution is x = 1, y = 1.
(b) Solving the system as directed, we have

    0.11x1 - 0.13x2 + 0.20x3 = -0.02
    0.10x1 + 0.36x2 + 0.45x3 = 0.25
    0.50x1 - 0.01x2 + 0.30x3 = -0.70

    0.50x1 - 0.01x2 + 0.30x3 = -0.70
    0.10x1 + 0.36x2 + 0.45x3 = 0.25
    0.11x1 - 0.13x2 + 0.20x3 = -0.02

    1.00x1 - 0.02x2 + 0.60x3 = -1.40
    0.10x1 + 0.36x2 + 0.45x3 = 0.25
    0.11x1 - 0.13x2 + 0.20x3 = -0.02

    1.00x1 - 0.02x2 + 0.60x3 = -1.40
            0.36x2 + 0.39x3 = 0.39
           -0.13x2 + 0.13x3 = 0.13

    1.00x1 - 0.02x2 + 0.60x3 = -1.40
            1.00x2 + 1.08x3 = 1.08
           -0.13x2 + 0.13x3 = 0.13

    1.00x1 - 0.02x2 + 0.60x3 = -1.40
            1.00x2 + 1.08x3 = 1.08
                     0.27x3 = 0.27

resulting in x3 ≈ 1.00, x2 ≈ 0.00, and x1 ≈ -2.00. The exact solution is x1 = -2, x2 = 0, x3 = 1.
DISCUSSION AND DISCOVERY
D1. If the homogeneous system has only the trivial solution, then the non-homogeneous system will
either be inconsistent or have exactly one solution.
D2. (a) All three lines pass through the origin and at least two of them do not coincide.
(b) If the system has nontrivial solutions then the lines must coincide and pass through the origin.
D3. (a) Yes. If ax0 + by0 = 0 then a(kx0) + b(ky0) = k(ax0 + by0) = 0. Similarly for the other equation.
(b) Yes. If ax0 + by0 = 0 and ax1 + by1 = 0 then a(x0 + x1) + b(y0 + y1) = (ax0 + by0) + (ax1 + by1) = 0, and similarly for the other equation.
(c) Yes in both cases. These statements are not true for non-homogeneous systems.
D4. The first system may be inconsistent, but the second system always has (at least) the trivial solution. If the first system is consistent then the solution sets will be parallel objects (points, lines, or the entire plane) with the second containing the origin.
D5. (a) At most three (the number of rows in the matrix).
(b) At most five (if B is the zero matrix). If B is not the zero matrix, then there are at most 4 free variables (5 - r, where r is the number of nonzero rows in a row echelon form).
(c) At most three (the number of rows in the matrix).

D6. (a) At most three (the number of columns).
(b) At most three (if B is the zero matrix). If B is not the zero matrix, then there are at most 2 free variables (3 - r, where r is the number of nonzero rows in a row echelon form).
(c) At most three (the number of columns).

D7. (a) False. For example, x + y + z = 0 and x + y + z = 1 are inconsistent.
(b) False. If there is more than one solution then there are infinitely many solutions.
(c) False. If the system is consistent then, since there is at least one free variable, there will be infinitely many solutions.
(d) True. A homogeneous system always has (at least) the trivial solution.
D8. (a) True. A matrix can be reduced to two different row echelon forms by different sequences of row operations (a row echelon form is not unique).
(b) False. The reduced row echelon form of a matrix is unique.
(c) False. The appearance of a row of zeros means that there was some redundancy in the system. But the remaining equations may be inconsistent, have exactly one solution, or have infinitely many solutions. All of these are possible.
(d) False. There may be redundancy in the system. For example, the system consisting of the equations x + y = 1, 2x + 2y = 2, and 3x + 3y = 3 has infinitely many solutions.
D9. The system is linear in the variables x = sin α, y = cos β, z = tan γ, and this system has only the trivial solution x = y = z = 0. Thus sin α = 0, cos β = 0, tan γ = 0. It follows that α = 0, π, or 2π; β = π/2 or 3π/2; and γ = 0, π, or 2π. There are eighteen possible combinations in all. This does not contradict Theorem 2.1.1 since the equations are not linear in the variables α, β, γ.
WORKING WITH PROOFS
P1. (a) If a ≠ 0, then the reduction can be accomplished as follows (details omitted). If a = 0, then b ≠ 0 and c ≠ 0, so the reduction can be carried out as follows (details omitted).
(b) If [a b; c d] can be reduced to [1 0; 0 1], then the corresponding row operations on the augmented matrix [a b | k; c d | l] will reduce it to a matrix of the form [1 0 | K; 0 1 | L], and from this it follows that the system has the unique solution x = K, y = L.
CHAPTER 3
Matrices and Matrix Algebra
EXERCISE SET 3.1
1. Since two matrices are equal if and only if their corresponding entries are equal, we have a - b = 8, b + a = 1, 3d + c = 7, and 2d - c = 6. Adding the first two equations we see that 2a = 9; thus a = 9/2 and it follows that b = 1 - a = -7/2. Adding the second two equations we see that 5d = 13; thus d = 13/5 and c = 7 - 3d = -4/5.
2. For the two matrices to be equal, we must have a = 4, d - 2c = 3, d + 2c = -1, and a + b = -2. From the first and fourth equations, we see immediately that a = 4 and b = -2 - a = -6. Adding the second and third equations we see that 2d = 2; thus d = 1 and c = (-1 - d)/2 = -1.
3. (a) A has size 3 x 4, B^T has size 4 x 2.
(b) a32 = 3, a23 = 11
(c) aij = 3 if and only if (i, j) = (1, 1), (3, 1), or (3, 2)
(d) c, (A') = Ul (e) , ,(2BT) = [1 2)
4. (a) B has size 2 x 4, A^T has size 4 x 3.
(b) bl2 = i. b21 = 4
(c) bij = 1 if and only if (i, j) = (2, 2) or (2, 3)
5. (a) A+2B = H
(b) A - is not defined
(c) 4D - 3CT =


{d)
D _ DT = [ 1 1] _ [1 -3] = [ 0
-3 3 1 3 - 4
(e) G + (2FT) =l +
4 1 3 4 2
'"
(f) (7 A, - B) + E is not defined
6. (a) 3C + D = (
3
9
) + [
1
-3. - 3
= [!
(b)

= [
(c) 40 - 5DT = [
4
12
-[: = [
(d) F _, pT = [-
3 2
- !l -H
(e) B+(4T) = H i] + H
""
20 12 20
(f) (7C - D) + B is not defined
7. (a) CD= [l 0][ 1 1]=[1 1]
3 - 1 - 3 3 6 0
(b) AE = H
[1 4 2] = [!
1 3 1 5 4 5

( c) FG= =l


3 2 4j 4 1 3 32 9 25
(d) B''P = G -; !]
1
!
(e) naT = H -; = H
(f) G E is not defined
8. (a) GA = H :
r 1 s
(b) FB =
(c) GP = H :
= !)
3 1 1 14 5
=
4 4 0 16 5
H = [::
(d)


t1 1 3
3 101
3 7
(e) E"
(f) D A is not defined
9. Ax = H
!J HJ -l m +J m = HJ
10. Ax = [-! :
3 - 1
=+!} +5 (_;] - 2 = [q] .
11. (a)
[::1
12. (a)
;1 [::] n1
13. 5x
1
+ 6x
2
- 7x3 == 2
-:tl - 2xz + :l.r3 = 0
1
l Xz - I3 = 3
(b) !] [=:] :;: [ :)
1 5 - 2 X3 -2
(b) [-! =! = nJ
14. x1 + x2 + x3 = 2
2x1 + 3x2 = 2
5x1 - 3x2 - 6x3 = -9
15. (AB)23 = r2(A) · c3(B) = (6)(4) + (5)(3) + (4)(5) = 59

16. (BA)21 = r2(B) · c1(A) = (0)(3) + (1)(6) + (3)(0) = 6
17. (a) r1(AB) = r1(A)B = [3 -2 7] [6 -2 4; 0 1 3; 7 7 5] = [67 41 41]
(b) r3(AB) = r3(A)B = [0 4 9] [6 -2 4; 0 1 3; 7 7 5] = [63 67 57]
(c) c2(AB) = A c2(B) = [3 -2 7; 6 5 4; 0 4 9] [-2; 1; 7] = [41; 21; 67]
t3 -2 7]
18. (a) ' ' (BA) 1(B)A [6 -2 : : = [6 -6 70[
(b) r3 (BA) 3(8).4 [7 7 5[ - : ;) = {6.1 41 122[
(c) c2(BA) = Bcz(A) =
7 7
4] [-2] [-6]
3 5 = 17
5 4 41
19. (a) tr(A) = (A)11 + (A)22 + (A)33 = 3 + 5 + 9 = 17
(b) tr(A^T) = (A^T)11 + (A^T)22 + (A^T)33 = 3 + 5 + 9 = 17
(c) tr(AB) = (AB)11 + (AB)22 + (AB)33 = 67 + 21 + 57 = 145, tr(B) = 6 + 1 + 5 = 12; thus tr(AB) - tr(A)tr(B) = 145 - (17)(12) = 145 - 204 = -59
20. (a) tr(B) = 6 + 1 + 5 = 12 (b) tr(B^T) = 6 + 1 + 5 = 12
(c) tr(BA) = (BA)11 + (BA)22 + (BA)33 = 6 + 17 + 122 = 145, tr(A) = 3 + 5 + 9 = 17; thus tr(BA) - tr(B)tr(A) = 145 - (12)(17) = 145 - 204 = -59
21. (a) u^T v = [-2 3] [4; 5] = -8 + 15 = 7 (b) uv^T = [-2; 3] [4 5] = [-8 -10; 12 15]
(c) tr(uv^T) = -8 + 15 = 7 = u^T v
(d) v^T u = [4 5] [-2; 3] = -8 + 15 = 7 = u^T v
(e) tr(uv^T) = tr(vu^T) = u · v = v · u = u^T v = v^T u = 7
22. (a) u^T v = [3 -4 5] [2; 7; 0] = 6 - 28 + 0 = -22
(b) uv^T = [3; -4; 5] [2 7 0] = [6 21 0; -8 -28 0; 10 35 0]
(c) tr(uv^T) = 6 - 28 + 0 = -22 = u^T v
(d) v^T u = [2 7 0] [3; -4; 5] = 6 - 28 + 0 = -22 = u^T v
(e) tr(uv^T) = tr(vu^T) = u · v = v · u = u^T v = v^T u = -22
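The pattern in Exercises 21 and 22 — that the trace of the outer product uv^T equals the dot product u · v — can be confirmed with the vectors of Exercise 22:

```python
import numpy as np

u = np.array([3, -4, 5])
v = np.array([2, 7, 0])

outer = np.outer(u, v)            # the 3x3 matrix u v^T
assert outer.tolist() == [[6, 21, 0], [-8, -28, 0], [10, 35, 0]]
assert np.trace(outer) == u @ v   # both equal -22
```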
23. The product [k 1 1] A [k; 1; 1] reduces to (k + 1)^2, which equals 0 if and only if k = -1.
24. The product [2 2 k] A [2; 2; k] reduces to 12 + 2(4 + 3k) + k(6 + k) = k^2 + 12k + 20 = 0 if and only if k = -2 or k = -10.
25. Let F = C(DE). Then, from Theorem 3.1.7, the entry in the ith row and jth column of F is the dot product of the ith row of C and the jth column of DE. Thus (F)23 can be computed as follows (details omitted).
27. Suppose that A is m x n and B is r x s. If AB is defined, then n = r. On the other hand, if BA is defined, then s = m. Thus A is m x n and B is n x m. It follows that AB is m x m and BA is n x n.
28. Suppose that A is m x n and B is r x s. If BA is defined, then s = m and BA is r x n. If, in addition, A(BA) is defined, then n = r. Thus B is an n x m matrix.
29. (a) If the ith row of A is a row of zeros, then (using the row rule) ri(AB) = ri(A)B = 0B = 0, and so the ith row of AB is a row of zeros.
(b) If the jth column of B is a column of zeros, then (using the column rule) cj(AB) = Acj(B) = A0 = 0, and so the jth column of AB is a column of zeros.

30. (a) If B and C have the same jth column, then cj(AB) = Acj(B) = Acj(C) = cj(AC), and so AB and AC have the same jth column.
(b) If B and C have the same ith row, then ri(BA) = ri(B)A = ri(C)A = ri(CA), and so BA and CA have the same ith row.
31. (a) If i ≠ j, then aij has unequal row and column numbers; that is, it is off (above or below) the main diagonal of the matrix [aij]; thus the matrix has zeros in all of the positions that are above or below the main diagonal:

[a11  0    0    0    0    0  ]
[0    a22  0    0    0    0  ]
[0    0    a33  0    0    0  ]
[0    0    0    a44  0    0  ]
[0    0    0    0    a55  0  ]
[0    0    0    0    0    a66]

(b) If i > j, then the entry aij has row number larger than column number; that is, it lies below the main diagonal. Thus [aij] has zeros in all of the positions below the main diagonal.
(c) If i < j, then the entry aij has row number smaller than column number; that is, it lies above the main diagonal. Thus [aij] has zeros in all of the positions above the main diagonal.
(d) If |i - j| > 1, then either i - j > 1 or i - j < -1; that is, either i > j + 1 or j > i + 1. The first of these inequalities says that the entry aij lies below the main diagonal and also below the "subdiagonal" consisting of entries immediately below the diagonal entries. The second inequality says that the entry aij lies above the diagonal and also above the entries immediately above the diagonal entries. Thus the matrix A has the following form:

          [a11  a12  0    0    0    0  ]
          [a21  a22  a23  0    0    0  ]
A = [aij] = [0    a32  a33  a34  0    0  ]
          [0    0    a43  a44  a45  0  ]
          [0    0    0    a54  a55  a56]
          [0    0    0    0    a65  a66]
32. (a) The entry aij = i + j is the sum of the row and column numbers. Thus the matrix is

[2 3 4 5]
[3 4 5 6]
[4 5 6 7]
[5 6 7 8]

(b) The entry aij = (-1)^(i+j) is -1 if i + j is odd and +1 if i + j is even. Thus the matrix is

[ 1 -1  1 -1]
[-1  1 -1  1]
[ 1 -1  1 -1]
[-1  1 -1  1]

(c) We have aij = -1 if i = j or i = j ± 1; otherwise aij = 1. Thus the entries on the main diagonal, and those on the subdiagonals immediately above and below the main diagonal, are all -1, whereas the remaining entries are all +1. The matrix is

[-1 -1  1  1]
[-1 -1 -1  1]
[ 1 -1 -1 -1]
[ 1  1 -1 -1]
33. The components of the matrix product represent the total expenditures for purchases during each of the first four months of the year. For example, the February expenditures were (5)($1) + (6)($2) + (0)($3) = $17.
34. (a) The entries of the matrix

M + J = [45+30 60+33 75+40; 30+21 30+23 40+25; 12+9 65+12 45+11; 15+8 40+10 35+9] = [75 93 115; 51 53 65; 21 77 56; 23 50 44]

represent the total units sold in each of the categories during the months of May and June. For example, the total number of medium raincoats sold was M42 + J42 = 40 + 10 = 50.

(b) The entries of the matrix M - J = [15 27 35; 9 7 15; 3 53 34; 7 30 26] represent the difference between May and June sales in each of the categories. Note that June sales were less than May sales in each case; thus the entries represent decreases.

(c) Let x = [1; 1; 1]. Then the components of

Mx = [45 60 75; 30 30 40; 12 65 45; 15 40 35] [1; 1; 1] = [180; 100; 122; 90]

represent the total number (all sizes) of shirts, jeans, suits, and raincoats sold in May.

(d) Let y = [1 1 1 1]. Then the components of

yM = [1 1 1 1] [45 60 75; 30 30 40; 12 65 45; 15 40 35] = [102 195 195]

represent the total number of small, medium, and large items sold in May.

(e) The product yMx = 492 represents the total number of items (all sizes and categories) sold in May.
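The bookkeeping in Exercise 34 is exactly matrix arithmetic; with M and J entered as arrays (entries as reconstructed above), each part is a one-liner:

```python
import numpy as np

# rows: shirts, jeans, suits, raincoats; columns: small, medium, large
M = np.array([[45, 60, 75], [30, 30, 40], [12, 65, 45], [15, 40, 35]])
J = np.array([[30, 33, 40], [21, 23, 25], [9, 12, 11], [8, 10, 9]])

x = np.ones(3, dtype=int)      # sums across sizes
y = np.ones(4, dtype=int)      # sums across categories

totals_by_category = M @ x     # part (c): [180, 100, 122, 90]
totals_by_size = y @ M         # part (d): [102, 195, 195]
grand_total = y @ M @ x        # part (e): 492
```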
DISCUSSION AND DISCOVERY
D1. If AB has 6 rows and 8 columns, then A must have size 6 x k and B must have size k x 8; thus A has 6 rows and B has 8 columns.
D2. If A = [0 0; 1 0], then AA = [0 0; 1 0] [0 0; 1 0] = [0 0; 0 0].
D3. Let A = [1 2; 1 1] and B = [1 1; 3 2]. In the following we illustrate three different methods for computing the product AB.

Method 1. Using Definition 3.1.6. This is the same as what is later referred to as the column rule. Since c1(AB) = Ac1(B) = [7; 4] and c2(AB) = Ac2(B) = [5; 3], we have AB = [7 5; 4 3].

Method 2. Using Theorem 3.1.7 (the dot product rule), we have AB = [1·1+2·3 1·1+2·2; 1·1+1·3 1·1+1·2] = [7 5; 4 3].

Method 3. Using the row rule. Since r1(AB) = r1(A)B = [1 2] [1 1; 3 2] = [7 5] and r2(AB) = r2(A)B = [1 1] [1 1; 3 2] = [4 3], we have AB = [7 5; 4 3].
D4. There is only one 3 x 3 matrix A with the stated property. Here is a proof: Since A[x; y; z] = xc1(A) + yc2(A) + zc3(A), the required identity must hold for all x, y, and z. Taking x = 1, y = 0, z = 0 determines c1(A); similarly, the choices (x, y, z) = (0, 1, 0) and (0, 0, 1) determine c2(A) and c3(A).
D5. There is no such matrix. Here is a proof (by contradiction): Suppose A is a 3 x 3 matrix with the stated property. Applying A to [x; y; z] and to [-x; -y; -z], and using the linearity relation A[-x; -y; -z] = -A[x; y; z], leads to two different values for the same product. Thus there is no such matrix.
D6. (a) S1 = and S2 = are two square roots of the matrix A=
(b) The matrices S = and S = [:s are four different square roots of B =
(c) Not all matrices have square roots. For example, it. is easy to check that the matrix [-
has no square root.
D7. Yes, the zero matrix A = 0 has the property that AB has three equal rows (rows of zeros) for every 3 x 3 matrix B.

D8. Yes, the matrix A = 2I has the property that AB = 2B for every 3 x 3 matrix B.
D9. (a) False. For example, if A is 2 x 3 and B is 3 x 2, then AB and BA are both defined.
(b) True. If AB and BA are both defined and A is m x n, then B must be n x m; thus AB is m x m and BA is n x n. If, in addition, AB + BA is defined, then AB and BA must have the same size, i.e. m = n.
(c) True. From the column rule, cj(AB) = Acj(B). Thus if B has a column of zeros, then AB will have a column of zeros.
(d) False. A 2 x 2 counterexample is easy to construct (details omitted).
(e) True. If A is n x m, then A^T is m x n, AA^T is n x n, and A^T A is m x m. Thus A^T A and AA^T are both square matrices, and so tr(A^T A) and tr(AA^T) are both defined.
(f) False. If u and v are 1 x n row vectors, then u^T v is an n x n matrix.
D10. The second column of AB is also the sum of the first and third columns.

D11. (a) Σ(k=1 to s) a_ik b_kj = a_i1 b_1j + a_i2 b_2j + ··· + a_is b_sj
(b) This sum represents (AB)ij, the ijth entry of the matrix AB.
WORKING WITH PROOFS
P1. Suppose B is an s x n matrix and y = [y1 y2 ··· ys]. Then yB is the 1 x n row vector

yB = [y · c1(B)  y · c2(B)  ···  y · cn(B)]

whose jth component is y · cj(B). On the other hand, the jth component of the vector

y1 r1(B) + y2 r2(B) + ··· + ys rs(B)

is y1 b1j + y2 b2j + ··· + ys bsj = y · cj(B). Thus Formula (21) is valid.
P2. Since Ax = x1 c1(A) + x2 c2(A) + ··· + xn cn(A), the linear system Ax = b is equivalent to

x1 c1(A) + x2 c2(A) + ··· + xn cn(A) = b

Thus the system Ax = b is consistent if and only if the vector b can be expressed as a linear combination of the column vectors of A.
EXERCISE SET 3.2
1. (a) (A + B) + C = [10 -4 -2; 0 5 7; 2 -6 10] + [0 -2 3; 1 7 4; 3 5 9] = [10 -6 1; 1 12 11; 5 -1 19]
A + (B + C) = [2 -1 3; 0 4 5; -2 1 4] + [8 -5 -2; 1 8 6; 7 -2 15] = [10 -6 1; 1 12 11; 5 -1 19]

(b) (AB)C = [28 -28 6; 20 -31 38; 0 -21 36] [0 -2 3; 1 7 4; 3 5 9] = [-10 -222 26; 83 -67 278; 87 33 240]
A(BC) = [2 -1 3; 0 4 5; -2 1 4] [-18 -62 -33; 7 17 22; 11 -27 38] = [-10 -222 26; 83 -67 278; 87 33 240]

(c) (a + b)C = (-3) [0 -2 3; 1 7 4; 3 5 9] = [0 6 -9; -3 -21 -12; -9 -15 -27]
aC + bC = [0 -8 12; 4 28 16; 12 20 36] + [0 14 -21; -7 -49 -28; -21 -35 -63] = [0 6 -9; -3 -21 -12; -9 -15 -27]

(d) a(B - C) = (4) [8 -1 -8; -1 -6 -2; 1 -12 -3] = [32 -4 -32; -4 -24 -8; 4 -48 -12]
aB - aC = [32 -12 -20; 0 4 8; 16 -28 24] - [0 -8 12; 4 28 16; 12 20 36] = [32 -4 -32; -4 -24 -8; 4 -48 -12]
2. (a) a(BC) = (4) [-18 -62 -33; 7 17 22; 11 -27 38] = [-72 -248 -132; 28 68 88; 44 -108 152]
(aB)C = [32 -12 -20; 0 4 8; 16 -28 24] [0 -2 3; 1 7 4; 3 5 9] = [-72 -248 -132; 28 68 88; 44 -108 152]
B(aC) = [8 -3 -5; 0 1 2; 4 -7 6] [0 -8 12; 4 28 16; 12 20 36] = [-72 -248 -132; 28 68 88; 44 -108 152]

(b) (B + C)A = [8 -5 -2; 1 8 6; 7 -2 15] [2 -1 3; 0 4 5; -2 1 4] = [20 -30 -9; -10 37 67; -16 0 71]
BA + CA = [26 -25 -11; -4 6 13; -4 -26 1] + [-6 -5 2; -6 31 54; -12 26 70] = [20 -30 -9; -10 37 67; -16 0 71]
3. (a) (A^T)^T = [2 0 -2; -1 4 1; 3 5 4]^T = [2 -1 3; 0 4 5; -2 1 4] = A
(b) (A + B)^T = [10 -4 -2; 0 5 7; 2 -6 10]^T = [10 0 2; -4 5 -6; -2 7 10]
A^T + B^T = [2 0 -2; -1 4 1; 3 5 4] + [8 0 4; -3 1 -7; -5 2 6] = [10 0 2; -4 5 -6; -2 7 10]
(c) (3C)^T = [0 -6 9; 3 21 12; 9 15 27]^T = [0 3 9; -6 21 15; 9 12 27] = 3C^T
(d) (AB)^T = [28 -28 6; 20 -31 38; 0 -21 36]^T = [28 20 0; -28 -31 -21; 6 38 36] = B^T A^T
4. (b) (B - C)^T = [8 -1 -8; -1 -6 -2; 1 -12 -3]^T = [8 -1 1; -1 -6 -12; -8 -2 -3]
B^T - C^T = [8 0 4; -3 1 -7; -5 2 6] - [0 1 3; -2 7 5; 3 4 9] = [8 -1 1; -1 -6 -12; -8 -2 -3]
(d) (BC)^T = [-18 -62 -33; 7 17 22; 11 -27 38]^T = [-18 7 11; -62 17 -27; -33 22 38]
C^T B^T = [0 1 3; -2 7 5; 3 4 9] [8 0 4; -3 1 -7; -5 2 6] = [-18 7 11; -62 17 -27; -33 22 38]
5. (a) tr(A) = 2 + 4 + 4 = 10, and tr(A^T) = 2 + 4 + 4 = 10.
(b) tr(3A) = 6 + 12 + 12 = 30 = 3 tr(A).
(c) tr(A) = 10, tr(B) = 8 + 1 + 6 = 15, and tr(A + B) = 25; thus tr(A + B) = tr(A) + tr(B).
(d) tr(AB) = 28 - 31 + 36 = 33, and tr(BA) = 26 + 6 + 1 = 33; thus tr(AB) = tr(BA).
6. (c) tr(A - B) = -6 + 3 - 2 = -5 = 10 - 15 = tr(A) - tr(B).
(d) tr(BC) = -18 + 17 + 38 = 37, and tr(CB) = tr([12 -23 14; 24 -24 33; 60 -67 49]) = 12 - 24 + 49 = 37. Thus tr(BC) = tr(CB).
7. (a) A matrix X satisfies the equation tr(B)A + 3X = BC if and only if 3X = BC - tr(B)A, i.e.

3X = [-18 -62 -33; 7 17 22; 11 -27 38] - 15 [2 -1 3; 0 4 5; -2 1 4]
   = [-18 -62 -33; 7 17 22; 11 -27 38] - [30 -15 45; 0 60 75; -30 15 60]
   = [-48 -47 -78; 7 -43 -53; 41 -42 -22]

in which case we have X = (1/3) [-48 -47 -78; 7 -43 -53; 41 -42 -22].

(b) A matrix X satisfies the equation B + (A + X)^T = C if and only if

(A + X)^T = C - B
A + X = ((A + X)^T)^T = (C - B)^T
X = (C - B)^T - A = C^T - B^T - A

Thus X = [0 1 3; -2 7 5; 3 4 9] - [8 0 4; -3 1 -7; -5 2 6] - [2 -1 3; 0 4 5; -2 1 4] = [-10 2 -4; 1 2 7; 10 1 -1].
8. (a) X = B - =
2 2

(b) X = CT- BT = _!_
10 10 8
-92 - 167 -282
9. (a) det(A) = 6 - 5 = 1; A^{-1} = [2 -1; -5 3]
(b) det(A^{-1}) = 6 - 5 = 1; (A^{-1})^{-1} = [3 1; 5 2] = A
(c) A^T = [3 5; 1 2], det(A^T) = 1; (A^T)^{-1} = [2 -5; -1 3] = (A^{-1})^T
(d) 2A = [6 2; 10 4], det(2A) = 4; (2A)^{-1} = (1/4) [4 -2; -10 6] = [1 -1/2; -5/2 3/2] = (1/2)A^{-1}
10. (a) det(B) = 8 + 12 = 20; B^{-1} = (1/20) [4 3; -4 2]
(b) det(B^{-1}) = 1/20; (B^{-1})^{-1} = 20 ((1/20) [2 -3; 4 4]) = [2 -3; 4 4] = B
(c) B^T = [2 4; -3 4], det(B^T) = 20; (B^T)^{-1} = (1/20) [4 -4; 3 2] = (B^{-1})^T
(d) 3B = [6 -9; 12 12], det(3B) = 180; (3B)^{-1} = (1/180) [12 9; -12 6] = (1/60) [4 3; -4 2] = (1/3)B^{-1}

11. (a) (AB)^{-1} = [10 -5; 18 -7]^{-1} = (1/20) [-7 5; -18 10]
B^{-1}A^{-1} = (1/20) [4 3; -4 2] [2 -1; -5 3] = (1/20) [-7 5; -18 10]
(b) (ABC)^{-1} = [70 45; 122 79]^{-1} = (1/40) [79 -45; -122 70]
C^{-1}B^{-1}A^{-1} = (1/2) [-1 -4; 2 6] · (1/20) [4 3; -4 2] · [2 -1; -5 3] = (1/40) [79 -45; -122 70]
_ (a} (BC)_ 1 .,.- [18 11] -l = _!_ [ 12 -11]
. 16 12 10 -16 18
c-ts-1 =! r-1 -4] _!__ [ 4 3] = __!__ [ 12 -111
2 2 6 20 -4 2 40 -16 18J
(b) (BCD)_
1
= [36 33] -I = _1 [ -33]
32 36 240 --32 aG
u-lc-1 n-1 = r- -:] [._: = [
14
. X= (C _B)-I AB = [-5 -7] [10 -5] = __!__ [-88 37]
22 6 4 18 - 7 11 66 - 29
16. (a)
-24]
67
(b)
[
6 4] 1 [3 o] 1 r 6 4]
= 2 -1 6 o 2 = sl-3 -1
19. {a) Given that A-
1
-= noting that det( A-
1
) 13, we have A = (A-
1
)
1
=
1
1
3
[_!
(b) Given that (7A)-
1
= have A = =
( )
, ( T) 1 [-3 -1] l T [-3 -
2
1] -
1
__ (-l) _
3
1] __ -.1>],
20. a (,ivc'? that 5A - =
5 2
it fol ows t hat 5A =
5
:; ::;. ._,
and so = - =- . A
1 [-:.!. - lJT 1 [-2 5)
5 5 3 5 -) 3
(b) Given (I+ 2A)-
1
= we have I+ ZA =

=
1
1
3
r-: and so it follows that
1 ( 1 [-5 2] [I OJ) 1 [-9 l]
A :-:: 2 13 -1 1 - o l . = 13 2 -c '
21. The matrix A is invertible if and only if det(A) = c^2 - c ≠ 0, i.e. if and only if c ≠ 0, 1.
22. The matrix A is invertible if and only if det(A) = -c^2 + 1 ≠ 0, i.e. if and only if c ≠ ±1.
23. One such example is A = In genera), any matrix of the form [: ; ] .
3 2 3J e J c
24. One such example is A =

In general, any matrix of the form ;].


3 -2 0 -e - ! 0
25. Let X = [xij]. Then AX = I if and only if

[1 0 1; 1 1 0; 0 1 1] [x11 x12 x13; x21 x22 x23; x31 x32 x33] = [1 0 0; 0 1 0; 0 0 1]

i.e. if and only if the entries of X satisfy the following system of equations:

x11 + x31 = 1    x12 + x32 = 0    x13 + x33 = 0
x11 + x21 = 0    x12 + x22 = 1    x13 + x23 = 0
x21 + x31 = 0    x22 + x32 = 0    x23 + x33 = 1

This system has the unique solution x11 = x12 = x22 = x23 = x31 = x33 = 1/2 and x13 = x21 = x32 = -1/2. Thus A is invertible and

A^{-1} = (1/2) [1 1 -1; -1 1 1; 1 -1 1]
26. Let X = [xij]. Then AX = I if and only if

[1 0 0; 0 1 0; 2 0 1] [x11 x12 x13; x21 x22 x23; x31 x32 x33] = [1 0 0; 0 1 0; 0 0 1]

i.e. if and only if x11 = 1, x12 = 0, x13 = 0, x21 = 0, x22 = 1, x23 = 0, 2x11 + x31 = 0, 2x12 + x32 = 0, and 2x13 + x33 = 1. This is a very simple system of equations which has the solution

x11 = 1, x12 = 0, x13 = 0, x21 = 0, x22 = 1, x23 = 0, x31 = -2, x32 = 0, x33 = 1

Thus A is invertible and A^{-1} = [1 0 0; 0 1 0; -2 0 1].
28. (a) The matrix uv^T = [3; -2; 4] [1 -1 3] = [3 -3 9; -2 2 -6; 4 -4 12] has the property that each of its columns is a scalar multiple of u; for example, the second column is -u = [-3; 2; -4].
(b) u · Av = -1 - 12 + 63 = 50 and A^T u · v = -1 - 12 + 63 = 50; thus u · Av = A^T u · v.
29. If A is invertible and AB = AC then, using the associative law (Theorem 3.2.2(a)), we have B = IB = (A^{-1}A)B = A^{-1}(AB) = A^{-1}(AC) = (A^{-1}A)C = IC = C.
30. If A is invertible and AC = 0, then C = IC = (A^{-1}A)C = A^{-1}(AC) = A^{-1}0 = 0. Similarly, if C is invertible and AC = 0, then A = AI = A(CC^{-1}) = (AC)C^{-1} = 0C^{-1} = 0.
31. (a) If A = [cos θ -sin θ; sin θ cos θ], then det(A) = cos²θ + sin²θ = 1. Thus A is invertible, and

A^{-1} = [cos θ  sin θ; -sin θ  cos θ]

for every value of θ.
(b) The given system can be written in matrix form as [x'; y'] = [cos θ -sin θ; sin θ cos θ] [x; y], i.e. x' = x cos θ - y sin θ and y' = x sin θ + y cos θ.
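Part (a) of Exercise 31 says that a rotation matrix is inverted by its transpose (equivalently, by rotating through -θ); a quick numerical check, with an arbitrarily chosen angle:

```python
import numpy as np

theta = 0.7   # any angle works
A = np.array([[np.cos(theta), -np.sin(theta)],
              [np.sin(theta),  np.cos(theta)]])

# det(A) = cos^2 + sin^2 = 1, and A^(-1) is the rotation by -theta, i.e. A^T
assert np.allclose(np.linalg.det(A), 1.0)
assert np.allclose(np.linalg.inv(A), A.T)
```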
32. (a) If A² = A, then (I - A)² = I² - 2A + A² = I - 2A + A = I - A; thus I - A is idempotent.
(b) If A² = A, then (2A - I)(2A - I) = 4A² - 4A + I² = 4A - 4A + I = I; thus 2A - I is invertible and (2A - I)^{-1} = 2A - I.
33. (a) If A and B are invertible square matrices of the same size, and if A + B is also invertible, then

A(A^{-1} + B^{-1})B(A + B)^{-1} = (I + AB^{-1})B(A + B)^{-1} = (B + A)(A + B)^{-1} = I

(b) From part (a) it follows that A(A^{-1} + B^{-1})B = A + B, and so A^{-1} + B^{-1} = A^{-1}(A + B)B^{-1}. Thus the matrix A^{-1} + B^{-1} is invertible and (A^{-1} + B^{-1})^{-1} = B(A + B)^{-1}A.
34. Since u^T v = u · v = v · u = v^T u, we have (uv^T)² = (uv^T)(uv^T) = u(v^T u)v^T = (v^T u)uv^T = (u^T v)uv^T. Thus, if u^T v ≠ -1, we have

(I + uv^T)(I - (1/(1 + u^T v))uv^T) = I + uv^T - (1/(1 + u^T v))uv^T - (u^T v/(1 + u^T v))uv^T = I + uv^T - uv^T = I

and so the matrix A = I + uv^T is invertible with A^{-1} = I - (1/(1 + u^T v))uv^T.
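Exercise 34's identity — (I + uv^T)^{-1} = I - uv^T/(1 + u^T v) whenever u^T v ≠ -1 — is a special case of the Sherman-Morrison formula. Here is a sketch with arbitrarily chosen vectors (our own, not from the text):

```python
import numpy as np

u = np.array([1.0, 2.0, 3.0])    # arbitrary example vectors
v = np.array([0.5, -1.0, 2.0])

c = u @ v                        # u^T v = 4.5, which is not -1
A = np.eye(3) + np.outer(u, v)
A_inv = np.eye(3) - np.outer(u, v) / (1 + c)

assert np.allclose(A @ A_inv, np.eye(3))
```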
35. If A = [a b; c d] and p(x) = x² - (a + d)x + (ad - bc), then

p(A) = A² - (a + d)A + (ad - bc)I
     = [a² + bc  ab + bd; ca + dc  cb + d²] - [a² + da  ab + db; ac + dc  ad + d²] + (ad - bc) [1 0; 0 1]
     = [0 0; 0 0]
36. From parts (d) and (e) of Theorem 3.2.12, it follows that

tr(AB - BA) = tr(AB) - tr(BA) = 0

for any two n x n matrices A and B. On the other hand, tr(In) = 1 + 1 + ··· + 1 = n. Thus it is not possible to have AB - BA = In.
37. The adjacency matrix A and its square (computed using the row-column rule) are as follows:

A =
[0 1 1 1 0 0 0]
[0 0 0 0 1 1 0]
[0 0 0 0 0 1 0]
[0 0 0 0 0 1 1]
[0 0 0 0 0 0 1]
[0 0 0 0 0 0 1]
[0 0 0 0 0 0 0]

A² =
[0 0 0 0 1 3 1]
[0 0 0 0 0 0 2]
[0 0 0 0 0 0 1]
[0 0 0 0 0 0 1]
[0 0 0 0 0 0 0]
[0 0 0 0 0 0 0]
[0 0 0 0 0 0 0]
The entry in the ijth position of the matrix A² represents the number of ways of traveling from i to j with one intermediate stop. For example, there are three such ways of traveling from 1 to 6 (through 2, 3, or 4) and two such ways of traveling from 2 to 7 (through 5 or 6).

In general, the entry in the ijth position of the matrix Aⁿ represents the number of ways of traveling from i to j with exactly n - 1 intermediate stops.
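With the adjacency matrix of Exercise 37 entered as an array (entries reconstructed from the discussion above), the walk counts fall out of matrix powers:

```python
import numpy as np

A = np.array([[0, 1, 1, 1, 0, 0, 0],
              [0, 0, 0, 0, 1, 1, 0],
              [0, 0, 0, 0, 0, 1, 0],
              [0, 0, 0, 0, 0, 1, 1],
              [0, 0, 0, 0, 0, 0, 1],
              [0, 0, 0, 0, 0, 0, 1],
              [0, 0, 0, 0, 0, 0, 0]])

A2 = A @ A
assert A2[0, 5] == 3   # three one-stop routes from 1 to 6 (via 2, 3, or 4)
assert A2[1, 6] == 2   # two one-stop routes from 2 to 7 (via 5 or 6)
```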
DISCUSSION AND DISCOVERY
01. (a) Let and B = Then A
2
= B
2
= A2- B
2
= On
the other hand, (A + B)( A- G =
(b) (A + B)(A - B) = A² - AB + BA - B²
(c) (A + B)(A - B) = A² - B² if and only if AB = BA.
D2. If A is any one of the eight diagonal matrices A = diag(±1, ±1, ±1), then A² = I₃.
D3. (a) If A² + 2A + I = 0, then I = -A² - 2A = A(-A - 2I); thus A is invertible and we have A^{-1} = -A - 2I.
(b) If p(A) = a_n Aⁿ + a_{n-1} Aⁿ⁻¹ + ··· + a_1 A + a_0 I = 0, where a_0 ≠ 0, then

A(-(a_n/a_0)Aⁿ⁻¹ - (a_{n-1}/a_0)Aⁿ⁻² - ··· - (a_1/a_0)I) = I

Thus A is invertible with A^{-1} = -(a_n/a_0)Aⁿ⁻¹ - ··· - (a_1/a_0)I.
D4. No. First note that if A³ is defined then A must be square. Thus if A³ = AA² = I, it follows that A is invertible with A^{-1} = A².
D5. (a) False. (AB)² = (AB)(AB) = A(BA)B. If A and B commute then (AB)² = A²B², but if BA ≠ AB then this will not in general be true.
(b) True. Expanding both expressions, we have (A - B)² = A² - AB - BA + B² and (B - A)² = B² - BA - AB + A²; thus (A - B)² = (B - A)².
(c) True. The basic fact (from Theorem 3.2.11) is that (A^T)^{-1} = (A^{-1})^T, and from this it follows that (A^{-n})^T = ((Aⁿ)^{-1})^T = ((Aⁿ)^T)^{-1} = ((A^T)ⁿ)^{-1} = (A^T)^{-n}.
(d) False. For example, if A = [1 1; 0 1] and B = [1 0; 1 1], then tr(AB) = tr([2 1; 1 1]) = 3, whereas tr(A)tr(B) = (2)(2) = 4.
(e) False. For example, if B = -A then A + B = 0 is not invertible (whether A is invertible or not).
D6. (a) If A is invertible, then the system Ax = b has a unique solution for every vector b in R³, namely x = A^{-1}b. Let x1, x2, and x3 be the solutions of Ax = e1, Ax = e2, and Ax = e3 respectively, and let B be the matrix having these vectors as its columns: B = [x1 x2 x3]. Then we have AB = A[x1 x2 x3] = [Ax1 Ax2 Ax3] = [e1 e2 e3] = I. Thus A^{-1} = B = [x1 x2 x3].
(b) From part (a), the columns of the matrix A^{-1} are the solutions of Ax = e1, Ax = e2, and Ax = e3. The augmented matrix of Ax = e1 is row reduced (details omitted), and the reduced row echelon form that results is

[1 0 0 | 3]
[0 1 0 | -1]
[0 0 1 | 0]
D7. The matrices [1 0; 0 1], [1 1; 0 1], and [1 0; 1 1] have determinant equal to 1. The matrices [0 1; 1 0], [0 1; 1 1], and [1 1; 1 0] have determinant equal to -1. These six matrices are invertible. The other ten matrices have determinant equal to 0, and thus are not invertible.
D8. True. If AB = BA, then B^{-1}A^{-1} = (AB)^{-1} = (BA)^{-1} = A^{-1}B^{-1}.
WORKING WITH PROOFS
P1. We proceed as in the proof of part (b) given in the text. It is clear that the matrices (ab)A and a(bA) have the same size. The following shows that corresponding entries are equal (details omitted).

P2. The following shows that corresponding entries on the two sides are equal (details omitted).

P3. The argument that the matrices A(B - C) and AB - AC must have the same size is the same as in the proof of part (b) given in the text. The following shows that corresponding column vectors are equal:

cj[A(B - C)] = Acj(B - C) = A(cj(B) - cj(C)) = Acj(B) - Acj(C) = cj(AB) - cj(AC) = cj(AB - AC)

P4. These three matrices clearly have the same size. The following shows that corresponding column vectors are equal:

cj[a(BC)] = acj(BC) = a(Bcj(C)) = (aB)cj(C) = cj[(aB)C]
cj[a(BC)] = acj(BC) = a(Bcj(C)) = B(acj(C)) = B(cj(aC)) = cj[B(aC)]
P5. If cA = 0 and c ≠ 0 then, using Theorem 3.2.1(c), we have A = 1A = ((1/c)c)A = (1/c)(cA) = (1/c)0 = 0.
P6. (a) If A is invertible, then AA^{-1} = I and A^{-1}A = I; thus A^{-1} is invertible and (A^{-1})^{-1} = A.
(b) If A is invertible then, from Theorem 3.2.8, A² is invertible and (A²)^{-1} = A^{-1}A^{-1} = (A^{-1})². It then follows that A³ is invertible and (A³)^{-1} = (A²)^{-1}A^{-1} = (A^{-1})²A^{-1} = (A^{-1})³, etc. In general, from the remark following Theorem 3.2.8, (Aⁿ)^{-1} = A^{-1}A^{-1}···A^{-1} = (A^{-1})ⁿ.

P7. (a) If ad - bc ≠ 0, then

AA^{-1} = [a b; c d] ((1/(ad - bc)) [d -b; -c a]) = (1/(ad - bc)) [ad - bc  0; 0  ad - bc] = [1 0; 0 1]
A^{-1}A = ((1/(ad - bc)) [d -b; -c a]) [a b; c d] = (1/(ad - bc)) [da - bc  0; 0  da - bc] = [1 0; 0 1]
(b) Let A = [a b; c d] where ad - bc = 0. Then A is invertible if and only if there are scalars e, f, g, and h such that

[a b; c d] [e f; g h] = [1 0; 0 1]

i.e. if and only if the following system of equations is consistent:

ae + bg = 1
ce + dg = 0
af + bh = 0
cf + dh = 1

Multiply the first equation by d, multiply the second equation by b, and subtract. This leads to

(da - bc)e = d

and from this we conclude that d = 0 is a necessary condition in order for the system to be consistent. It then follows (since ad - bc = 0) that bc = 0 and so either b = 0 or c = 0. Let us assume that b = 0 (the case c = 0 can be handled similarly). Then the equations reduce to

ae = 1
ce = 0
af = 0
cf = 1

and these equations are easily seen to be inconsistent. Why? From ae = 1 we conclude that e ≠ 0; and from ce = 0 we conclude that c = 0 or e = 0. From this it follows that c must be equal to 0. But this is inconsistent with cf = 1! In summary, we have shown that if ad - bc = 0, then the system of equations has no solution and so the matrix A is not invertible.
tr(AB) = Σ(k=1 to n) (AB)kk = Σ(k=1 to n) Σ(l=1 to n) a_kl b_lk = Σ(l=1 to n) Σ(k=1 to n) b_lk a_kl = Σ(l=1 to n) (BA)ll = tr(BA)
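The double-sum argument above is easy to sanity-check on random matrices: the trace of AB always equals the trace of BA, even though AB and BA generally differ.

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.integers(-9, 10, (4, 4))
B = rng.integers(-9, 10, (4, 4))

# tr(AB) = tr(BA) even though AB != BA in general
assert np.trace(A @ B) == np.trace(B @ A)
```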
EXERCISE SET 3.3
1. (a) This matrix is elementary; it is obtained from I2 by adding -5 times the first row to the second row.
(b) This matrix is not elementary since two row operations are needed to obtain it from I2 (follow the one in part (a) by interchanging the rows).
(c) This matrix is elementary; it is obtained from I3 by interchanging the first and third rows.
(d) This matrix is not invertible, and therefore not elementary, since it has a row of zeros.

2. (c) is elementary. (a), (b), and (d) are not elementary.

3.
(a) Add 3 times the firs t row to the second row.
{b) Multiply the third ro .... hy ! .
(c) Interchange the first and fourth rows.
(d) Add times the third row to the first row.
4. (a) Add -2 times the second row to the first row.
(b) Multiply the first row by -1.
(c) Interchange the first and third rows.
(d) Add -12 times the second row to the fourth row.
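The correspondence underlying Exercises 1-4 — each elementary row operation equals multiplication on the left by an elementary matrix — can be illustrated directly (the matrix A here is an arbitrary example of ours):

```python
import numpy as np

A = np.arange(12, dtype=float).reshape(3, 4)   # any 3x4 matrix

E = np.eye(3)
E[1, 0] = 3.0                # elementary matrix: add 3*(row 1) to row 2

manual = A.copy()
manual[1] += 3 * manual[0]   # perform the row operation directly on A

assert np.array_equal(E @ A, manual)
```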
f
=


0

(a) (b) 1
0
r 0 OJ =
0 1 0
0 0 !
3

0 0
W'

0 0

0
I
- 7
(c)
1 0 1 0
{d)
1 0
=
0 1
0 l 0 1
0 0 0 0 0 0
r

0
l

7
1 0
==
0 1
I\
0 v
(b) =
0 0 1 0 0 1
(d) j ! r J
7. (a) B can be obtained from A by interchanging the first and third rows; thus EA = B where E = [0 0 1; 0 1 0; 1 0 0].
(b) EB = A where E is the same as in part (a).
(c) C can be obtained from A by adding -2 times the first row to the third row; thus EA = C where E = [1 0 0; 0 1 0; -2 0 1].
(d) EC = A where E is the inverse of the matrix in part (c), i.e. E = [1 0 0; 0 1 0; 2 0 1].
8. (a) E =

(b)C
(d) !
(c) E =
0
0 0]
l 2
0 1
9. Using the method of Example 3, we start with the partitioned matrix [A | I] and perform row operations aimed at reducing the left side to I; these same row operations performed simultaneously on the right side will produce the matrix A^{-1}.

[1 5 | 1 0]
[2 20 | 0 1]

Add -2 times the first row to the second row.

[1 5 | 1 0]
[0 10 | -2 1]

Multiply the second row by 1/10, then add -5 times the new second row to the first row.

[1 0 | 2 -1/2]
[0 1 | -1/5 1/10]

Thus A^{-1} = [2 -1/2; -1/5 1/10]. On the other hand, using the formula from Theorem 3.2.7 and the fact that det(A) = 10, we have

A^{-1} = (1/10) [20 -5; -2 1] = [2 -1/2; -1/5 1/10]
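The inversion algorithm of Exercise 9 — row-reduce [A | I] until the left block becomes I — is only a few lines of code. This is a sketch (the function name is our own, and it assumes A is invertible), with partial pivoting added for numerical stability:

```python
import numpy as np

def inverse_by_row_reduction(A):
    # Reduce the partitioned matrix [A | I] to [I | A^{-1}]
    # by Gauss-Jordan elimination (assumes A is invertible).
    A = np.asarray(A, dtype=float)
    n = A.shape[0]
    M = np.hstack([A, np.eye(n)])
    for j in range(n):
        p = j + int(np.argmax(np.abs(M[j:, j])))  # partial pivoting
        M[[j, p]] = M[[p, j]]                     # interchange rows
        M[j] /= M[j, j]                           # scale the pivot row
        for i in range(n):
            if i != j:
                M[i] -= M[i, j] * M[j]            # clear column j elsewhere
    return M[:, n:]

A = [[1, 5], [2, 20]]   # the matrix of Exercise 9
# inverse_by_row_reduction(A) is approximately [[2, -0.5], [-0.2, 0.1]]
```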
10. A^{-1} = (1/14) [1 3; -4 2]

11. (a) Start with the partitioned matrix [A | I].
[3 4 -1 | 1 0 0]
[1 0 3 | 0 1 0]
[2 5 -4 | 0 0 1]

Interchange rows 1 and 2.

[1 0 3 | 0 1 0]
[3 4 -1 | 1 0 0]
[2 5 -4 | 0 0 1]

Add -3 times row 1 to row 2. Add -2 times row 1 to row 3.

[1 0 3 | 0 1 0]
[0 4 -10 | 1 -3 0]
[0 5 -10 | 0 -2 1]

Add -1 times row 2 to row 3.

[1 0 3 | 0 1 0]
[0 4 -10 | 1 -3 0]
[0 1 0 | -1 1 1]

Add -4 times row 3 to row 2, then interchange rows 2 and 3.

[1 0 3 | 0 1 0]
[0 1 0 | -1 1 1]
[0 0 -10 | 5 -7 -4]

Multiply row 3 by -1/10, then add -3 times the new row 3 to row 1.

[1 0 0 | 3/2 -11/10 -6/5]
[0 1 0 | -1 1 1]
[0 0 1 | -1/2 7/10 2/5]

From this we conclude that A is invertible, and that A^{-1} = [3/2 -11/10 -6/5; -1 1 1; -1/2 7/10 2/5].

(b) Start with the partitioned matrix [A | I].

[-1 3 -4 | 1 0 0]
[ 2 4 1 | 0 1 0]
[-4 2 -9 | 0 0 1]

Multiply row 1 by -1. Add -2 times the new row 1 to row 2; add 4 times the new row 1 to row 3.

[1 -3 4 | -1 0 0]
[0 10 -7 | 2 1 0]
[0 -10 7 | -4 0 1]

Add row 2 to row 3.

[1 -3 4 | -1 0 0]
[0 10 -7 | 2 1 0]
[0 0 0 | -2 1 1]

At this point, since we have obtained a row of zeros on the left side, we conclude that the matrix A is not invertible.
(c) Start with the partitioned matrix [A | I].

[1 0 1 | 1 0 0]
[0 1 1 | 0 1 0]
[1 1 0 | 0 0 1]

Add -1 times row 1 to row 3.

[1 0 1 | 1 0 0]
[0 1 1 | 0 1 0]
[0 1 -1 | -1 0 1]

Add -1 times row 2 to row 3, then multiply the new row 3 by -1/2.

[1 0 1 | 1 0 0]
[0 1 1 | 0 1 0]
[0 0 1 | 1/2 1/2 -1/2]

Add -1 times row 3 to rows 2 and 1.

[1 0 0 | 1/2 -1/2 1/2]
[0 1 0 | -1/2 1/2 1/2]
[0 0 1 | 1/2 1/2 -1/2]

From this we conclude that A is invertible, and that A^{-1} = [1/2 -1/2 1/2; -1/2 1/2 1/2; 1/2 1/2 -1/2].
12. (a) A -l
(b) A-
1
= 1 - 3
[
l 0 0]
[
1 --1 Ol
(c) A-
1
= 0
0 - 1 4 0 0 !.
3
13. As in the inversion algorithm, we start with the partitioned matrix [A | I] and perform row operations aimed at reducing the left side to its reduced row echelon form R. If A is invertible, then R will be the identity matrix and the matrix produced on the right side will be A^{-1}. In the more general situation the reduced matrix will have the form [R | B] = [BA | BI] where the matrix B on the right has the property that BA = R. [Note that B is the product of elementary matrices and thus is always an invertible matrix, whether A is invertible or not.]
[1 2 3 | 1 0 0]
[0 0 1 | 0 1 0]
[1 2 4 | 0 0 1]

Add -1 times row 1 to row 3.

[1 2 3 | 1 0 0]
[0 0 1 | 0 1 0]
[0 0 1 | -1 0 1]

Add -1 times row 2 to row 3.

[1 2 3 | 1 0 0]
[0 0 1 | 0 1 0]
[0 0 0 | -1 -1 1]

Add -3 times row 2 to row 1.

[1 2 0 | 1 -3 0]
[0 0 1 | 0 1 0]
[0 0 0 | -1 -1 1]

The reduced row echelon form of A is R = [1 2 0; 0 0 1; 0 0 0], and the matrix B = [1 -3 0; 0 1 0; -1 -1 1] has the property that BA = R.
R =
0

B=H
0
0]
14. 1 0
l
8
0
1

15. If c = 0, then the first row is a row of zeros, so the matrix is not invertible. If c ≠ 0, then multiply the first row by 1/c, and then add -1 times the new first row to the second row and to the third row; this yields a matrix of the form

[1 1 1]
[0 c-1 0]
[0 0 c-1]

If c = 1, then the second and third rows are rows of zeros, and so the matrix is not invertible. If c ≠ 1, then we can divide the second and third rows by c - 1, and from this it is clear that the reduced row echelon form is the identity matrix. Thus we conclude that the matrix is invertible if and only if c ≠ 0, 1.
16. c = 0, √2
17. The matrix B is obtained by starting with the identity matrix and performing the same sequence of row operations; thus B is the product of the corresponding elementary matrices.
18. Similarly, B is obtained by performing the same sequence of row operations on the identity matrix.
19. If any one of the k_i's is 0, then the matrix A has a zero row and thus is not invertible. If the k_i's are all nonzero, then multiplying the ith row of the matrix [A | I] by 1/k_i for i = 1, 2, 3, 4 and then reversing the order of the rows yields

    [1  0  0  0 |   0     0     0    1/k4]
    [0  1  0  0 |   0     0    1/k3   0  ]
    [0  0  1  0 |   0    1/k2   0     0  ]
    [0  0  0  1 |  1/k1   0     0     0  ]

thus A is invertible and A^(-1) is the matrix in the right-hand block above.
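The pattern in Exercise 19 can be illustrated numerically. This sketch uses numpy and sample values for the k_i (the values are illustrative, not from the text):

```python
import numpy as np

# A has k1, k2, k3, k4 on its reverse diagonal; its inverse has the
# reciprocals on the reverse diagonal in the opposite order.
k = [2.0, 3.0, 4.0, 5.0]                  # sample k1, k2, k3, k4
A = np.fliplr(np.diag(k))                  # k_i on the reverse diagonal
A_inv = np.fliplr(np.diag([1 / ki for ki in reversed(k)]))

print(np.allclose(A @ A_inv, np.eye(4)))  # True
```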
20. A is invertible if and only if k ≠ 0; in that case A^(-1) is the triangular matrix with entries 1/k on the main diagonal and entries built from higher powers of 1/k (with alternating signs) below it.
21. (a) The identity matrix can be obtained from A by first adding 5 times row 1 to row 3, and then multiplying row 3 by the appropriate nonzero scalar. Thus if E1 and E2 are the elementary matrices corresponding to these two operations, then E2E1A = I.
(b) A^(-1) = E2E1 where E1 and E2 are as in part (a).
(c) A = E1^(-1)E2^(-1).
22. (a) The identity matrix can likewise be obtained from A by two elementary row operations; if E1 and E2 are the corresponding elementary matrices, then E2E1A = I.
(b) A^(-1) = E2E1 where E1 and E2 are as in part (a).
(c) A = E1^(-1)E2^(-1).
23. The identity matrix can be obtained from A by the following sequence of row operations:

(1) Interchange rows 1 and 3.
(2) Add -1 times row 1 to row 2.
(3) Add -2 times row 1 to row 3.
(4) Add row 2 to row 3.
(5) Multiply row 3 by the indicated nonzero scalar.
(6) Add row 3 to row 2.
(7) Add -2 times row 3 to row 1.
(8) Add -1 times row 2 to row 1.

For each step, E_i is the elementary matrix that performs the indicated operation, and E_i^(-1) is the elementary matrix that performs the inverse operation. It follows that

    A^(-1) = E8E7E6E5E4E3E2E1   and   A = E1^(-1)E2^(-1)E3^(-1)E4^(-1)E5^(-1)E6^(-1)E7^(-1)E8^(-1)
25. The two systems have the same coefficient matrix. If we augment this matrix with the two columns of constants on the right sides, we obtain

    [1  2  1 | -1 | 0]
    [1  3  2 |  3 | 0]
    [0  1  2 |  4 | 4]

The reduced row echelon form of this matrix is

    [1  0  0 | -9 |  4]
    [0  1  0 |  4 | -4]
    [0  0  1 |  0 |  4]

From this we conclude that the first system has the solution x1 = -9, x2 = 4, x3 = 0; and the second system has the solution x1 = 4, x2 = -4, x3 = 4.
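Solving several systems that share a coefficient matrix is exactly the "multiple right-hand sides" pattern, which numpy supports directly. A sketch (matrix and constants as reconstructed above, so treat the specific entries as assumptions):

```python
import numpy as np

# One factorization of A serves both systems: pass the right-hand sides
# as the columns of a matrix, and the solutions come back as columns.
A = np.array([[1.0, 2.0, 1.0],
              [1.0, 3.0, 2.0],
              [0.0, 1.0, 2.0]])
B = np.array([[-1.0, 0.0],
              [ 3.0, 0.0],
              [ 4.0, 4.0]])      # columns are the two constant vectors
X = np.linalg.solve(A, B)         # columns are the two solutions

print(X[:, 0])  # first system's solution
print(X[:, 1])  # second system's solution
```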
26. The coefficient matrix, augmented with the two columns of constants, can be row reduced to

    [1  0  0 | -32 | 11]
    [0  1  0 |   8 | -2]
    [0  0  1 |   3 | -1]

Thus the first system has the solution x1 = -32, x2 = 8, x3 = 3; and the second system has the solution x1 = 11, x2 = -2, x3 = -1.
27. (a) The systems can be written in matrix form as

    [1  2  1][x1]   [-1]        [1  2  1][x1]   [0]
    [1  3  2][x2] = [ 3]  and   [1  3  2][x2] = [0]
    [0  1  2][x3]   [ 4]        [0  1  2][x3]   [4]

The inverse of the coefficient matrix is

    A^(-1) = [ 4 -3  1]
             [-2  2 -1]
             [ 1 -1  1]

Thus the solutions are given by

    [x1]            [-1]   [-9]        [x1]            [0]   [ 4]
    [x2] = A^(-1) [  3] = [ 4]  and   [x2] = A^(-1) [0] = [-4]
    [x3]            [ 4]   [ 0]        [x3]            [4]   [ 4]

(b) A^(-1)[b1 b2] = [ 4 -3  1][-1  0]   [-9  4]
                    [-2  2 -1][ 3  0] = [ 4 -4]
                    [ 1 -1  1][ 4  4]   [ 0  4]
28. The systems can be written in matrix form as Ax = b1 and Ax = b2. The inverse of the coefficient matrix is computed as in Exercise 27, and the solutions are given by x = A^(-1)b1 and x = A^(-1)b2.
29. The augmented matrix can be row reduced to a matrix whose last row is [0  0 | b1 - 2b2]. Thus the system is consistent if and only if b1 - 2b2 = 0, i.e. if and only if b1 = 2b2.
30. The augmented matrix

    [ 1 -2  5 | b1]
    [ 4 -5  8 | b2]
    [-3  3 -3 | b3]

can be row reduced to

    [1 -2   5 | b1            ]
    [0  3 -12 | b2 - 4b1      ]
    [0  0   0 | -b1 + b2 + b3 ]

Thus the system is consistent if and only if -b1 + b2 + b3 = 0.
31. The augmented matrix

    [ 1 -2 -1 | b1]
    [-2  3  2 | b2]
    [-4  7  4 | b3]

can be row reduced to

    [1 -2 -1 | b1            ]
    [0 -1  0 | b2 + 2b1      ]
    [0  0  0 | 2b1 - b2 + b3 ]

Thus the system is consistent if and only if 2b1 - b2 + b3 = 0.
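The consistency condition in Exercise 31 can also be tested numerically with a rank comparison. A sketch using numpy (coefficient matrix as reconstructed above):

```python
import numpy as np

A = np.array([[ 1.0, -2.0, -1.0],
              [-2.0,  3.0,  2.0],
              [-4.0,  7.0,  4.0]])

def is_consistent(b):
    # Ax = b is consistent iff rank(A) == rank([A | b]).
    return np.linalg.matrix_rank(A) == np.linalg.matrix_rank(np.column_stack([A, b]))

print(is_consistent(np.array([1.0, 0.0, -2.0])))  # 2*1 - 0 + (-2) = 0 -> True
print(is_consistent(np.array([1.0, 0.0,  0.0])))  # 2*1 - 0 + 0 = 2  -> False
```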
32. The left block of the augmented matrix for this system can be row reduced to

    [1 -1   3  2]
    [0  1 -11 -5]
    [0  0   0  0]
    [0  0   0  0]

and from this it follows that the system Ax = b is consistent if and only if the components of b satisfy the equations -b1 + b3 + b4 = 0 and -2b1 + b2 + b4 = 0. The general solution of this homogeneous system can be written as b1 = s + t, b2 = 2s + t, b3 = s, b4 = t. Thus the original system Ax = b is consistent if and only if the vector b is of the form

    b = [s + t ]     [1]     [1]
        [2s + t] = s [2] + t [1]
        [s     ]     [1]     [0]
        [t     ]     [0]     [1]
33. The matrix A can be reduced to a row echelon form R by the following sequence of row operations:

    A = [ 0  1  7  8]
        [ 1  3  3  8]
        [-2 -5  1 -8]

(1) Interchange rows 1 and 2:

    [ 1  3  3  8]
    [ 0  1  7  8]
    [-2 -5  1 -8]

(2) Add 2 times row 1 to row 3:

    [1  3  3  8]
    [0  1  7  8]
    [0  1  7  8]

(3) Add -1 times row 2 to row 3:

    R = [1  3  3  8]
        [0  1  7  8]
        [0  0  0  0]

It follows from this that R = E3E2E1A where E1, E2, E3 are the elementary matrices corresponding to the row operations indicated above. Finally, we have the factorization

    A = EFGR = [0  1  0][ 1  0  0][1  0  0][1  3  3  8]
               [1  0  0][ 0  1  0][0  1  0][0  1  7  8]
               [0  0  1][-2  0  1][0  1  1][0  0  0  0]

where E = E1^(-1), F = E2^(-1), and G = E3^(-1).
34. The matrix A is obtained from the identity matrix by adding a times the first row to the third row and adding b times the second row to the third row. If either a = 0 or b = 0 (or both) the result is an elementary matrix. On the other hand, if a ≠ 0 and b ≠ 0, the result is not an elementary matrix since two elementary row operations are required to produce it. Thus A is elementary if and only if ab = 0.
DISCUSSION AND DISCOVERY
D1. Suppose the matrix A can be reduced to the identity matrix by a known sequence of elementary row operations, and let E1, E2, ..., Ek be the corresponding sequence of elementary matrices. Then we have Ek⋯E2E1A = I, and so A = E1^(-1)E2^(-1)⋯Ek^(-1)I. Thus we can find A by applying the inverse row operations to I in the reverse order.
D2. There is not. For example, let b = 1 and a = c = d = 0; then the resulting matrix is not invertible.
D3. There is no nontrivial solution. From the last equation we see that x4 = 0 and, from back substitution, it follows immediately that x3 = x2 = x1 = 0 also. The coefficient matrix is invertible.
D4. (a) Yes; such a matrix B exists and has the required property.
(b) Yes; a different choice of B works just as well.
(c) The matrix B must be of size 3 × 2; it is not square and therefore not invertible.
D5. (a) False. Only invertible matrices can be expressed as a product of elementary matrices.
(b) False. For example, a product of two elementary matrices generally cannot be obtained from the identity by a single elementary row operation.
(c) True. This row operation is equivalent to multiplying the given matrix by an elementary matrix and, since any elementary matrix is invertible, the product is still invertible.
(d) True. If A is invertible and AB = 0, then B = IB = (A^(-1)A)B = A^(-1)(AB) = A^(-1)0 = 0.
(e) True. If A is invertible then the homogeneous system Ax = 0 has only the trivial solution; otherwise (if A is singular) there are infinitely many solutions.
D6. All of these statements are true.
D7. No. An invertible matrix cannot have a row of zeros. Thus, for A to be invertible we must have a ≠ 0 and h ≠ 0. But (assuming this) if we add -d/a times row 1 to row 3, and add -e/h times row 5 to row 3, we have a matrix with a row of zeros in the third row.
WORKING WITH PROOFS
P1. If AB is invertible then, by Theorem 3.3.8, both A and B are invertible. Thus, if either A or B is singular, then the product AB must be singular.

P2. Suppose that A = BC where B is invertible, and that B is reduced to I by a sequence of elementary row operations corresponding to the elementary matrices E1, E2, ..., Ek. Then I = Ek⋯E2E1B, and so B^(-1) = Ek⋯E2E1. From this it follows that Ek⋯E2E1A = B^(-1)BC = C; thus the same sequence of row operations will reduce A to C.
P3. Suppose Ax = 0 iff x = 0. We wish to prove that, for any positive integer k, A^k x = 0 iff x = 0. Our proof is by induction on the exponent k.
Step 1: If k = 1, then A^1 x = Ax = 0 iff x = 0. Thus the statement is true for k = 1.
Step 2 (induction step): Suppose the statement is true for k = j, where j is any fixed integer ≥ 1. Then A^(j+1) x = A^j(Ax) = 0 iff Ax = 0, and this is true iff x = 0. This shows that if the statement is true for k = j it is also true for k = j + 1.
These two steps complete the proof by induction.
P4. From Theorem 3.3.7, the system Ax= 0 has only the trivial solution if and only if A is invertible.
But, since B is invertible, A is invertible iff B A is invertible. Thus Ax = 0 has only the trivial
solution iff (BA)x = 0 has only the trivial solution.
P5. Let e1, e2, ..., em be the standard unit vectors in R^m. Note that e1, e2, ..., em are the rows of the identity matrix I = Im. Thus, for any m × n matrix A, we have r_i(A) = e_i A for i = 1, 2, ..., m. Suppose now that E is an elementary matrix that is obtained by performing a single elementary row operation on I. We consider the three types of row operations separately.

Row Interchange. Suppose E is obtained from I by interchanging rows i and j. Then

    r_i(EA) = r_i(E)A = e_j A = r_j(A)
    r_j(EA) = r_j(E)A = e_i A = r_i(A)

and r_k(EA) = r_k(E)A = e_k A = r_k(A) for k ≠ i, j. Thus EA is the matrix that is obtained from A by interchanging rows i and j.

Row Scaling. Suppose E is obtained from I by multiplying row i by a nonzero scalar c. Then

    r_i(EA) = r_i(E)A = (c e_i)A = c(e_i A) = c r_i(A)

and r_k(EA) = r_k(E)A = e_k A = r_k(A) for all k ≠ i. Thus EA is the matrix that is obtained from A by multiplying row i by the scalar c.

Row Replacement. Suppose E is obtained from I by adding c times row i to row j (i ≠ j). Then

    r_j(EA) = r_j(E)A = (e_j + c e_i)A = e_j A + c e_i A = r_j(A) + c r_i(A)

and r_k(EA) = r_k(E)A = e_k A = r_k(A) for all k ≠ j. Thus EA is the matrix that is obtained from A by adding c times row i to row j.
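The content of P5 (left-multiplication by an elementary matrix performs the corresponding row operation) is easy to illustrate numerically. A sketch with an arbitrary test matrix:

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.integers(-5, 5, size=(3, 4)).astype(float)

# E: the elementary matrix for "add 4 times row 1 to row 3".
E = np.eye(3)
E[2, 0] = 4.0

# The same row operation applied directly to A.
B = A.copy()
B[2, :] += 4.0 * A[0, :]

print(np.allclose(E @ A, B))  # True
```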
P6. Let A = [a b; c d], and let B = [0 1; 1 0]. Then AB = BA if and only if [b a; d c] = [c d; a b], i.e. if and only if a = d and b = c. Thus the only matrices that commute with B are those of the form A = [a b; b a]. Suppose now that A is a matrix of this type, and let C = [1 0; 0 2]. Then AC = CA if and only if [a 2b; b 2a] = [a b; 2b 2a], and this is true if and only if b = 0. Thus the only 2 × 2 matrices that commute with both B and C are those of the form A = [a 0; 0 a], where -∞ < a < ∞. It is easy to see that such a matrix will commute with all other 2 × 2 matrices as well.
P7. Every m × n matrix A can be transformed to reduced row echelon form B by a sequence of elementary row operations. Let E1, E2, ..., Ek be the corresponding sequence of elementary matrices. Then we have B = Ek⋯E2E1A = CA where C = Ek⋯E2E1 is an invertible matrix.

P8. Suppose that A is reduced to I = In by a sequence of elementary row operations, and that E1, E2, ..., Ek are the corresponding elementary matrices. Then Ek⋯E2E1A = I and so Ek⋯E2E1 = A^(-1). It follows that the same sequence of elementary row operations will reduce the matrix [A | B] to the matrix [I | Ek⋯E2E1B] = [I | A^(-1)B].
EXERCISE SET 3.4
1. (a) (x1, x2) = t(1, -1); x1 = t, x2 = -t
   (b) (x1, x2, x3) = t(2, 1, -4); x1 = 2t, x2 = t, x3 = -4t
   (c) (x1, x2, x3, x4) = t(1, 1, -2, 3); x1 = t, x2 = t, x3 = -2t, x4 = 3t
2. (a) (x1, x2) = t(0, -4); x1 = 0, x2 = -4t
   (b) (x1, x2, x3) = t(0, -1, 5); x1 = 0, x2 = -t, x3 = 5t
   (c) (x1, x2, x3, x4) = t(2, 0, 5, -4); x1 = 2t, x2 = 0, x3 = 5t, x4 = -4t
3. (a) (x1, x2, x3) = s(4, 4, 2) + t(-3, 5, 7); x1 = 4s - 3t, x2 = 4s + 5t, x3 = 2s + 7t
   (b) (x1, x2, x3, x4) = s(1, 2, 1, -3) + t(3, 4, 5, 0); x1 = s + 3t, x2 = 2s + 4t, x3 = s + 5t, x4 = -3s
4. (a) (x1, x2, x3) = s(0, -1, 0) + t(-2, 1, 9); x1 = -2t, x2 = -s + t, x3 = 9t
   (b) (x1, x2, x3, x4, x5) = s(1, 5, -1, 4, 2) + t(2, 2, 0, 1, -4); x1 = s + 2t, x2 = 5s + 2t, x3 = -s, x4 = 4s + t, x5 = 2s - 4t
5. (a) u = -2v; thus u is in the subspace span{v}.
   (b) u ≠ kv for any scalar k; thus u is not in the subspace span{v}.
6. (a) no  (b) yes
7. (a) Two vectors are linearly dependent if and only if one is a scalar multiple of the other; thus these vectors are linearly dependent.
(b) These vectors are linearly independent.
8. (a) linearly independent  (b) linearly dependent
9. u = 2v - w; thus the vectors u, v, w are linearly dependent.
10. u = 4v - 2w
11. (a) A line (a 1-dimensional subspace) in R^4 that passes through the origin and is parallel to the vector u = (2, -3, 1, 4) = (1/2)(4, -6, 2, 8).
(b) A plane (a 2-dimensional subspace) in R^4 that passes through the origin and is parallel to the vectors u = (3, -2, 2, 5) and v = (6, -4, 4, 0).
12. (a) A plane in R^4 that passes through the origin and is parallel to the vectors u = (6, -2, -4, 8) and v = (3, 0, 2, -4).
(b) A line in R^4 that passes through the origin and is parallel to the vector u = (6, -2, -4, 8).
13. The augmented matrix of the homogeneous system is

    [ 1  6  2  -5 | 0]
    [-1 -6 -1  -3 | 0]
    [ 2 12  5 -18 | 0]

and the reduced row echelon form of this matrix is

    [1  6  0  11 | 0]
    [0  0  1  -8 | 0]
    [0  0  0   0 | 0]

Thus a general solution of the system can be written in parametric form as

    x1 = -6s - 11t,  x2 = s,  x3 = 8t,  x4 = t

or in vector form as

    (x1, x2, x3, x4) = s(-6, 1, 0, 0) + t(-11, 0, 8, 1)

This shows that the solution space is span{v1, v2} where v1 = (-6, 1, 0, 0) and v2 = (-11, 0, 8, 1).
14. The reduced row echelon form of the augmented matrix is

    [1 -1  0 -1  1 | 0]
    [0  0  1  2 -3 | 0]
    [0  0  0  0  0 | 0]

thus a general solution of the system is given by

    (x1, x2, x3, x4, x5) = r(1, 1, 0, 0, 0) + s(1, 0, -2, 1, 0) + t(-1, 0, 3, 0, 1)
15. The reduced row echelon form of the augmented matrix of the homogeneous system has two nonzero rows, with leading 1's in columns 1 and 3; thus x2 = r, x4 = s, and x5 = t can be taken as free variables. Writing the resulting general solution in vector form as

    (x1, x2, x3, x4, x5) = r v1 + s v2 + t v3

where v1 = (-2, 1, 0, 0, 0), the vectors v1, v2, v3 span the solution space.
16. The reduced row echelon form of the augmented matrix is

    [1  0 -4  0  1 | 0]
    [0  1  1  0 -2 | 0]
    [0  0  0  1 -3 | 0]
    [0  0  0  0  0 | 0]

thus a general solution is given by (x1, x2, x3, x4, x5) = s(4, -1, 1, 0, 0) + t(-1, 2, 0, 3, 1).
17. (a) v2 is a scalar multiple of v1 (v2 = -5v1); thus these two vectors are linearly dependent.
(b) Any set of more than 2 vectors in R^2 is linearly dependent (Theorem 3.4.8).
18. (a) Any set containing the zero vector is linearly dependent.
(b) Any set of more than 3 vectors in R^3 is linearly dependent (Theorem 3.4.8).
19. (a) These two vectors are linearly independent since neither is a scalar multiple of the other.
(b) v1 is a scalar multiple of v2 (v1 = -3v2); thus these two vectors are linearly dependent.
(c) These three vectors are linearly independent since the system [v1 v2 v3][c1; c2; c3] = [0; 0; 0] has only the trivial solution. The coefficient matrix, which is the matrix having the given vectors as its columns, is invertible.
(d) These four vectors are linearly dependent since any set of more than 3 vectors in R^3 is linearly dependent.
20. (a) linearly independent  (b) linearly independent  (c) linearly dependent  (d) linearly dependent
21. (a) The matrix having these vectors as its columns is invertible. Thus the vectors are linearly independent; they do not lie in a plane.
(b) These vectors are linearly dependent (v1 = 2v2 - 3v3); they lie in a plane but not on a line.
(c) These vectors lie on a line; v1 = 2v2 and v3 = 3v2.
22. In parts (a) and (b) the vectors are linearly independent; they do not lie in a plane. In part (c) the vectors are linearly dependent; they lie in a plane but not on a line.
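The invertibility test used in Exercises 19-22 generalizes to a rank test: vectors are linearly independent iff the matrix having them as columns has rank equal to the number of vectors. A sketch with illustrative vectors (not taken from the text):

```python
import numpy as np

def independent(*vectors):
    # Stack the vectors as columns and compare the rank to the count.
    M = np.column_stack(vectors)
    return np.linalg.matrix_rank(M) == len(vectors)

print(independent([1, 0, 0], [0, 1, 0], [0, 0, 1]))  # True
print(independent([1, 2, 3], [2, 4, 6]))             # False (scalar multiples)
```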
23. (a) This set of vectors is a subspace; it is closed under scalar multiplication and addition: k(a, 0, 0) = (ka, 0, 0) and (a1, 0, 0) + (a2, 0, 0) = (a1 + a2, 0, 0).
(b) This set of vectors is not a subspace; it is not closed under scalar multiplication.
(c) This set of vectors is a subspace. If b = a + c, then kb = ka + kc. If b1 = a1 + c1 and b2 = a2 + c2, then b1 + b2 = (a1 + a2) + (c1 + c2).
(d) This set of vectors is not a subspace; it is not closed under addition or scalar multiplication.
24. Sets (a) and (b) are not subspaces. Sets (c) and (d) are subspaces.
25. The set W consists of all vectors of the form x = a(1, 0, 1, 0); thus W = span{v} where v = (1, 0, 1, 0). This corresponds to a line (i.e. a 1-dimensional subspace) through the origin in R^4.
26. W = span{v1, v2} where v1 = (1, 0, 2, 0, -1) and v2 = (0, 1, 0, 3, 0). This corresponds to a plane (i.e. a 2-dimensional subspace) through the origin in R^5.
27. (a) 7v1 - 2v2 + 3v3 = 0
(b) v1 = (2/7)v2 - (3/7)v3,  v2 = (7/2)v1 + (3/2)v3,  v3 = -(7/3)v1 + (2/3)v2
28. If λ = 1/2, then the given vectors are clearly dependent. Otherwise, consider the matrix having these vectors as its columns. Are there any other values of λ for which it is singular? To determine this, it is convenient to start by multiplying all of the rows by 2. Assuming λ ≠ 1/2, the resulting matrix can be reduced to a row echelon form whose last pivot entry is 2 + 2λ, which is zero if and only if λ = -1. Thus the given vectors are dependent iff λ = 1/2 or λ = -1.
29. (a) Suppose S = {v1, v2, v3} is a linearly independent set. Note first that none of these vectors can be equal to 0 (otherwise S would be linearly dependent), and so each of the sets {v1}, {v2}, and {v3} is linearly independent. Suppose then that T is a 2-element subset of S, e.g. T = {v1, v2}. If T is linearly dependent then there are scalars c1 and c2, not both zero, such that c1v1 + c2v2 = 0. But, if this were true, then c1v1 + c2v2 + 0v3 = 0 would be a nontrivial linear relationship among the vectors v1, v2, v3, and so S would be linearly dependent. Thus T = {v1, v2} is linearly independent. The same argument applies to any 2-element subset of S. Thus if S is linearly independent, then each of its nonempty subsets is linearly independent.
(b) If S = {v1, v2, v3} is linearly dependent, then there are scalars c1, c2, and c3, not all zero, such that c1v1 + c2v2 + c3v3 = 0. Thus, for any vector v in R^n, we have c1v1 + c2v2 + c3v3 + 0v = 0 and this is a nontrivial linear relationship among the vectors v1, v2, v3, v. This shows that if S = {v1, v2, v3} is linearly dependent then so is T = {v1, v2, v3, v} for any v.
30. The arguments used in Exercise 29 can easily be adapted to this more general situation.
31. (u - v) + (v - w) + (w - u) = 0; thus the vectors u - v, v - w, and w - u form a linearly dependent set.
32. First note that the relationship between the vectors u, v and s, g can be written as

    [u]   [1     0.06][s]
    [v] = [0.12  1   ][g]

and so the inverse relationship is given by

    [s]   [1     0.06]^(-1) [u]       1     [ 1    -0.06][u]
    [g] = [0.12  1   ]      [v] =  ------   [-0.12  1   ][v]
                                   0.9928

Parts (a), (b), and (c) of the problem can now be answered from the inverse relationship; in particular, s = (1/0.9928)(u - 0.06v) and g = (1/0.9928)(v - 0.12u).
33. (a) No. It is not closed under either addition or scalar multiplication.
(b) P876 = 0.38c + 0.59m + 0.73y + 0.07k
    P216 = 0.83m + 0.34y + 0.47k
    P328 = c + 0.47y + 0.30k
(c) (1/2)(P876 + P216) corresponds to the CMYK vector (0.19, 0.71, 0.535, 0.27).
34. (a) k1 = k2 = k3 = 1/3  (b) k1 = k2 = 1/4, k3 = 1/2
(c) The components of x = (1/4)c1 + (1/4)c2 + (1/2)c3 represent the average scores for the students if the Test 3 grade (the final exam?) is weighted twice as heavily as Test 1 and Test 2.
35. (a) k1 = k2 = k3 = k4 = 1/4  (b) k1 = k2 = k3 = k4 = k5 = 1/5
(c) The components of x = (1/3)r1 + (1/3)r2 + (1/3)r3 represent the average total population of Philadelphia, Bucks, and Delaware counties in each of the sampled years.
36. v = Σ_{k=1}^{n} v_k
DISCUSSION AND DISCOVERY
D1. (a) Two nonzero vectors will span R^2 if and only if they do not lie on a line.
(b) Three nonzero vectors will span R^3 if and only if they do not lie in a plane.
D2. (a) Two vectors in R^n will span a plane if and only if they are nonzero and not scalar multiples of one another.
(b) Two vectors in R^n will span a line if and only if they are not both zero and one is a scalar multiple of the other.
(c) span{u} = span{v} if and only if one of the vectors u and v is a scalar multiple of the other.
D3. (a) Yes. If three nonzero vectors are mutually orthogonal then none of them lies in the plane spanned by the other two; thus the three are linearly independent.
(b) Suppose the vectors v1, v2, and v3 are nonzero and mutually orthogonal; thus v_i · v_i = ||v_i||^2 > 0 for i = 1, 2, 3 and v_i · v_j = 0 for i ≠ j. To prove they are linearly independent we must show that if

    c1v1 + c2v2 + c3v3 = 0

then c1 = c2 = c3 = 0. This follows from the fact that if c1v1 + c2v2 + c3v3 = 0, then

    c_i ||v_i||^2 = v_i · (c1v1 + c2v2 + c3v3) = v_i · 0 = 0

for i = 1, 2, 3.
D4. The vectors in the first figure are linearly independent since none of them lies in the plane spanned by the other two (none of them can be expressed as a linear combination of the other two). The vectors in the second figure are linearly dependent since v3 = v1 + v2.
D5. This set is closed under scalar multiplication, but not under addition. For example, the vectors u = (1, 2) and v = (-2, -1) correspond to points in the set, but u + v = (-1, 1) does not.
D6. (a) False. For example, two of the vectors may lie on a line (so one is a scalar multiple of the other), but the third vector may not lie on this same line and therefore cannot be expressed as a linear combination of the other two.
(b) False. The set of all linear combinations of two vectors can be {0} (if both are 0), a line (if one is a scalar multiple of the other), or a plane (if they are linearly independent).
(c) False. For example, v and w might be linearly dependent (scalar multiples of each other). [But it is true that if {v, w} is a linearly independent set, and if u cannot be expressed as a linear combination of v and w, then {u, v, w} is a linearly independent set.]
(d) True. See Example 9.
(e) True. If c1(kv1) + c2(kv2) + c3(kv3) = 0, then k(c1v1 + c2v2 + c3v3) = 0. Thus, since k ≠ 0, it follows that c1v1 + c2v2 + c3v3 = 0 and so c1 = c2 = c3 = 0.
D7. (a) False. The set {u, ku} is always a linearly dependent set.
(b) False. This statement is true for a homogeneous system (b = 0), but not for a non-homogeneous system. [The solution set of a non-homogeneous linear system is a translated subspace.]
(c) True. If W is a subspace, then W is closed under scalar multiplication and addition, and so span(W) = W.
(d) False. For example, if S1 = {(1, 0), (0, 1)} and S2 = {(1, 1), (0, 1)}, then span(S1) = span(S2) = R^2, but S1 ≠ S2.
D8. Since span(S) is a subspace (already closed under scalar multiplication and addition), we have span(span(S)) = span(S).
WORKING WITH PROOFS
P1. Let θ be the angle between u and w, and let φ be the angle between v and w. We will show that θ = φ. First recall that u · w = ||u|| ||w|| cos θ, so u · w = k ||w|| cos θ. Similarly, v · w = l ||w|| cos φ. On the other hand we have

    u · w = u · (lu + kv) = l(u · u) + k(u · v) = lk^2 + k(u · v)

and so k ||w|| cos θ = lk^2 + k(u · v), i.e. ||w|| cos θ = lk + (u · v). Similar calculations show that ||w|| cos φ = (v · u) + kl; thus ||w|| cos θ = ||w|| cos φ. It follows that cos θ = cos φ and θ = φ.
P2. If x belongs to W1 ∩ W2 and k is a scalar, then kx also belongs to W1 ∩ W2 since both W1 and W2 are subspaces. Similarly, if x1 and x2 belong to W1 ∩ W2, then x1 + x2 belongs to W1 ∩ W2. Thus W1 ∩ W2 is closed under scalar multiplication and addition, i.e. W1 ∩ W2 is a subspace.

P3. First we show that W1 + W2 is closed under scalar multiplication: Suppose z = x + y where x is in W1 and y is in W2. Then, for any scalar k, we have kz = k(x + y) = kx + ky, where kx is in W1 and ky is in W2 (since W1 and W2 are subspaces); thus kz is in W1 + W2. Finally we show that W1 + W2 is closed under addition: Suppose z1 = x1 + y1 and z2 = x2 + y2, where x1 and x2 are in W1 and y1 and y2 are in W2. Then z1 + z2 = (x1 + y1) + (x2 + y2) = (x1 + x2) + (y1 + y2), where x1 + x2 is in W1 and y1 + y2 is in W2 (since W1 and W2 are subspaces); thus z1 + z2 is in W1 + W2.
EXERCISE SET 3.5
1. (a) The reduced row echelon form of the augmented matrix of the homogeneous system has two free variables; thus a general solution can be written (in column vector form) as x = s v1 + t v2.
(c) From (a) and (b), a general solution of the nonhomogeneous system is given by x = x0 + s v1 + t v2.
(d) The reduced row echelon form of the augmented matrix of the nonhomogeneous system leads to a general solution that is related to the one in part (c) by the change of variable s1 = s, t1 = t + 1.
2. (a) The reduced row echelon form of the augmented matrix leads to a general solution with x2 = t as the free variable.
(d) The reduced row echelon form of the augmented matrix of the nonhomogeneous system leads to a general solution of the form x = x0 + t v.
3. (a) The reduced row echelon form of the augmented matrix of the given system leads to a general solution in which x2 = s and x3 = t are free, x4 = 1, and x1 = 1/3 minus a linear combination of s and t.
(b) A general solution of the associated homogeneous system has the corresponding form x = s v1 + t v2, and a particular solution of the given nonhomogeneous system is x1 = 1/3, x2 = 0, x3 = 0, x4 = 1.
4. (a) The reduced row echelon form of the augmented matrix of the given system leads to a general solution in which x2 and x4 are free.
(b) A general solution of the associated homogeneous system has the corresponding form, and a particular solution of the nonhomogeneous system is x1 = 13/3, x2 = 0, x3 = -7, x4 = 0.
5. The vector w can be expressed as a linear combination of v1, v2, and v3 if and only if the system [v1 v2 v3][c1; c2; c3] = w is consistent. The reduced row echelon form of the augmented matrix of this system is

    [1  0  0 | -2]
    [0  1  0 |  3]
    [0  0  1 |  1]

From this we conclude that the system has a unique solution, and that w = -2v1 + 3v2 + v3.
6. The vector w can be expressed as a linear combination of v1, v2, and v3 if and only if the system [v1 v2 v3][c1; c2; c3] = w is consistent. The reduced row echelon form of the augmented matrix of this system is

    [1  0  3 | 10]
    [0  1 -1 | -6]
    [0  0  0 |  0]

and from this we conclude that the system has infinitely many solutions, given by c1 = 10 - 3t, c2 = -6 + t, c3 = t. Thus w can be expressed as a linear combination of v1, v2, and v3. In particular, taking t = 0, we have w = 10v1 - 6v2.
7. The vector w is in span{v1, v2, v3, v4} if and only if the system [v1 v2 v3 v4][c1; c2; c3; c4] = w is consistent. The reduced row echelon form of the augmented matrix of this system shows that the system has infinitely many solutions; thus w is in span{v1, v2, v3, v4}.
8. The vector w is in span{v1, v2, v3} if and only if the system [v1 v2 v3][c1; c2; c3] = w is consistent. The reduced row echelon form of the augmented matrix of this system shows that the system has infinitely many solutions; thus w is in span{v1, v2, v3}.
9. (a) The hyperplane a⊥ consists of all vectors x = (x, y) such that a · x = 0, i.e. -2x + 3y = 0. This corresponds to the line through the origin with parametric equations x = (3/2)t, y = t.
(b) The hyperplane a⊥ consists of all vectors x = (x, y, z) such that a · x = 0, i.e. 4x - 5z = 0. This corresponds to the plane through the origin with parametric equations x = (5/4)t, y = s, z = t.
(c) a⊥ consists of all vectors x = (x1, x2, x3, x4) such that a · x = 0, i.e. x1 + 2x2 - 3x3 + 7x4 = 0. This is a hyperplane in R^4 with parametric equations x1 = -2r + 3s - 7t, x2 = r, x3 = s, x4 = t.
10. (a) x = 4t, y = t  (b) x = s, y = -3s + 6t, z = t
(c) x1 = r + 2s, x2 = r, x3 = s, x4 = t
11. This system reduces to a single equation, x1 + x2 + x3 = 0. Thus a general solution is given by x1 = -s - t, x2 = s, x3 = t; or (in vector form)

    (x1, x2, x3) = s(-1, 1, 0) + t(-1, 0, 1)

The solution space is two-dimensional.
13. The reduced row echelon form of the augmented matrix of the system has two nonzero rows, so there are three free variables, x3 = r, x4 = s, x5 = t. Writing the general solution in vector form as x = r v1 + s v2 + t v3 (with entries involving the fractions 1/7, 2/7, 3/7, and 19/7 produced by the row reduction) shows that the solution space is three-dimensional.
15. (a) A general solution is x = 1 - s - t, y = s, z = t; or

    [x]   [1]     [-1]     [-1]
    [y] = [0] + s [ 1] + t [ 0]
    [z]   [0]     [ 0]     [ 1]

(b) The solution space corresponds to the plane which passes through the point P(1, 0, 0) and is parallel to the vectors (-1, 1, 0) and (-1, 0, 1).
16. (a) A general solution is x = 1 - t, y = t.
(b) The solution corresponds to the line which passes through the point P(1, 0) and is parallel to the vector v = (-1, 1).
17. (a) A vector x = (x, y, z) is orthogonal to a = (1, 1, 1) and b = (-2, 3, 0) if and only if

     x + y + z = 0
    -2x + 3y   = 0

(b) The solution space is the line through the origin that is perpendicular to the vectors a and b.
(c) The reduced row echelon form of the augmented matrix of the system leads to the general solution (x, y, z) = t(-3/5, -2/5, 1); note that the vector v = (-3/5, -2/5, 1) is orthogonal to both a and b.
18. (a) A vector x = (x, y, z) is orthogonal to a = (-3, 2, -1) and b = (0, -2, -2) if and only if

    -3x + 2y -  z = 0
         -2y - 2z = 0

(b) The solution space is the line through the origin that is perpendicular to the vectors a and b.
(c) (x, y, z) = t(-1, -1, 1); note that the vector v = (-1, -1, 1) is orthogonal to both a and b.
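For two vectors in R^3, the line perpendicular to both can also be found with the cross product. A numpy sketch for Exercise 18, with v = (-1, -1, 1) as found above:

```python
import numpy as np

a = np.array([-3.0,  2.0, -1.0])
b = np.array([ 0.0, -2.0, -2.0])
n = np.cross(a, b)               # orthogonal to both a and b
v = np.array([-1.0, -1.0, 1.0])

# n and v are parallel iff their cross product is the zero vector.
print(np.allclose(np.cross(n, v), 0))  # True
```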
19. (a) A vector x = (x1, x2, x3, x4) is orthogonal to v1 = (1, 1, 2, 2) and v2 = (5, 4, 3, 4) if and only if

     x1 +  x2 + 2x3 + 2x4 = 0
    5x1 + 4x2 + 3x3 + 4x4 = 0

(b) The solution space is the plane (2-dimensional subspace) in R^4 that passes through the origin and is perpendicular to the vectors v1 and v2.
(c) The reduced row echelon form of the augmented matrix of the system is

    [1  0 -5 -4 | 0]
    [0  1  7  6 | 0]

and so a general solution of the system is given by (x1, x2, x3, x4) = s(5, -7, 1, 0) + t(4, -6, 0, 1). Note that the vectors (5, -7, 1, 0) and (4, -6, 0, 1) are orthogonal to both v1 and v2.
20. (a) A vector x = (x1, x2, x3, x4, x5) is orthogonal to the vectors v1 = (1, 3, 4, 4, -1), v2 = (3, 2, 1, 2, -3), and v3 = (1, 5, 0, -2, -4) if and only if

     x1 + 3x2 + 4x3 + 4x4 -  x5 = 0
    3x1 + 2x2 +  x3 + 2x4 - 3x5 = 0
     x1 + 5x2       - 2x4 - 4x5 = 0

(b) The solution space is the plane (2-dimensional subspace) in R^5 that passes through the origin and is perpendicular to the vectors v1, v2, and v3.
(c) The reduced row echelon form of the augmented matrix of the system is

    [1  0  0   3/5   -7/10 | 0]
    [0  1  0 -13/25 -33/50 | 0]
    [0  0  1  31/25  21/50 | 0]

and so a general solution of the system is given by

    (x1, x2, x3, x4, x5) = s(-3/5, 13/25, -31/25, 1, 0) + t(7/10, 33/50, -21/50, 0, 1)

The vectors (-3/5, 13/25, -31/25, 1, 0) and (7/10, 33/50, -21/50, 0, 1) are orthogonal to v1, v2, and v3.
DISCUSSION AND DISCOVERY
D1. The solution set of Ax = b is a translated subspace x0 + W, where W is the solution space of Ax = 0.
D2. If v is orthogonal to every row of A, then Av = 0, and so (since A is invertible) v = 0.
D3. The general solution will have at least 3 free variables. Thus, assuming A ≠ 0, the solution space will be of dimension 3, 4, 5, or 6 depending on how much redundancy there is.
D4. (a) True. The solution set of Ax = b is of the form x0 + W where W is the solution space of Ax = 0.
(b) False. For example, the system x - y = 0, x - y = 1 is inconsistent, but the associated homogeneous system has infinitely many solutions.
(c) True. Each hyperplane corresponds to a single homogeneous linear equation in four variables, and there must be at least four equations in order to have a unique solution.
(d) True. Every plane in R^3 corresponds to an equation of the form ax + by + cz = d.
(e) False. A vector x is orthogonal to row(A) if and only if x is a solution of the homogeneous system Ax = 0.
WORKING WITH PROOFS
P1. Suppose that Ax = 0 has only the trivial solution and that Ax = b is consistent. If x1 and x2 are two solutions of Ax = b, then A(x1 - x2) = Ax1 - Ax2 = b - b = 0 and so x1 - x2 = 0, i.e. x1 = x2. Thus, if Ax = 0 has only the trivial solution, the system Ax = b is either inconsistent or has exactly one solution.
P2. Suppose that Ax = 0 has infinitely many solutions and that Ax = b is consistent. Let x0 be any solution of Ax = b. Then, for any solution w of Ax = 0, we have A(x0 + w) = Ax0 + Aw = b + 0 = b. Thus, if Ax = 0 has infinitely many solutions, the system Ax = b is either inconsistent or has infinitely many solutions. Conversely, if Ax = b has at most one solution, then Ax = 0 has only the trivial solution.
P3. If x1 is a solution of Ax = b and x2 is a solution of Ax = c, then A(x1 + x2) = Ax1 + Ax2 = b + c; i.e. x1 + x2 is a solution of Ax = b + c. Thus if Ax = b and Ax = c are consistent systems, then Ax = b + c is also consistent. This argument can easily be adapted to prove the following:
Theorem. If Ax = b_j is consistent for each j = 1, 2, ..., r and if b = b1 + b2 + ⋯ + br, then Ax = b is also consistent.
P4. Since (ka) · x = k(a · x) and k ≠ 0, it follows that (ka) · x = 0 if and only if a · x = 0. This proves that (ka)⊥ = a⊥.
EXERCISE SET 3.6
1. (a) This matrix has a row of zeros; it is not invertible.
(b) A diagonal matrix with nonzero entries on the main diagonal is invertible; its inverse is the diagonal matrix of reciprocal entries:
diag(1, 2, 3)^{-1} = diag(1, 1/2, 1/3)

0
r rl
0
fl
2. (a}
- 2
0 = 0
1
(b) not invertible.
-2
0
3 0
0

0
_; l [!
[ 2 1] ' 0 [ 10 -2]
3. (a) - 1
(b) -4 1 - 2] = - 20 - 2
0 2 2 5 4 10
2 5 10 -10

0
0][ 2 1] 0 [3
0
0][ 10
-2]
ro -6] (c) - 1
-: -2] =
-1 0 -20 - 2
=
20 2
0 0 2 10 -10 20 - 20
[ 6
- 2

[ -12
-5

[ - 24 - 10
12]
4. (a) - 1 - 2 (b) -3 10
(c) 3 - 10

- 20
"
15 5 60 20

0


0
11

0

5. (a) 4
{b)
1
(c)
)k
4
0
0
0
(a)
0
1]
(b)
0

[2'
0
3]
6.
I
9
(c) A - k
3k
9
0
0
0
7. A is invertible if and only if x ≠ 1, -2, 4 (the diagonal entries must be nonzero).
9. Apply the inversion algorithm (see Section 3.3) to find the inverse of A.

[1 2 3 | 1 0 0]
[0 1 -2 | 0 1 0]
[0 0 1 | 0 0 1]

Add 2 times row 3 to row 2.
Add -3 times row 3 to row 1.

[1 2 0 | 1 0 -3]
[0 1 0 | 0 1 2]
[0 0 1 | 0 0 1]

Add -2 times row 2 to row 1.

[1 0 0 | 1 -2 -7]
[0 1 0 | 0 1 2]
[0 0 1 | 0 0 1]

Thus the inverse of the upper triangular matrix A = [1 2 3; 0 1 -2; 0 0 1] is A^{-1} = [1 -2 -7; 0 1 2; 0 0 1].
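As a quick numeric check of the result above (assuming the matrix in this exercise is A = [1 2 3; 0 1 -2; 0 0 1]), multiplying A by the computed inverse in either order should give the identity matrix:

```python
def matmul(A, B):
    """Multiply two matrices given as lists of rows."""
    return [[sum(A[i][k] * B[k][j] for k in range(len(B)))
             for j in range(len(B[0]))] for i in range(len(A))]

A = [[1, 2, 3], [0, 1, -2], [0, 0, 1]]
A_inv = [[1, -2, -7], [0, 1, 2], [0, 0, 1]]
I3 = [[1, 0, 0], [0, 1, 0], [0, 0, 1]]

assert matmul(A, A_inv) == I3
assert matmul(A_inv, A) == I3
```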
If A=
; } ,lhro A-
1
=
r[
0
;]
0 0 0 1
10. -1 -1 0 - 2 - 1
4 4 1 - 11 4
u
0
i]
12.
0 -8]
11. A = 0 0 -4
- 1 4 0
13. The matrix A is symmetric if and only if a, b, and c satisfy a system of linear equations (one equation for each pair of symmetrically positioned entries), the first of which is a - 2b + 2c = 3. Reducing the augmented matrix of this system to reduced row echelon form (details omitted) shows that in order for the matrix A to be symmetric we must have a = 11, b = -9, and c = -13.
14. There are infinitely many solutions: a = 1 + 10t, b = 1t, c = t, d = 0, where -oo < t < oo.
15. We have A = [2 -1; -1 3] and A^{-1} = (1/5)[3 1; 1 2]; thus A^{-1} is symmetric.
16. A similar computation shows that A^{-1} is symmetric.
17. AB =



=
0 0 - 4 0 0 3 0

0 - 12
18. From Theorem 3.6.3, the factors commute if and only if the product is symmetric. Thus the factors in (a) do not commute, whereas the factors in (b) do commute.
19. If A =

] _l
0 0 - 1 0 0 (-1)-
5
0 -1 .
.......... _ ..

( 1)-2 0 0 I
20. If A=
0 0] [(! )-
2
4 0 , then A-
2
=
3
0
0 l 0
0
21. The lower triangular matrix A is invertible because of its nonzero diagonal entries. Thus, from Theorem 3.6.5, AA^T and A^T A are also invertible; furthermore, since these matrices are symmetric, their inverses are also symmetric by Theorem 3.6.4. Following are the matrices AA^T and A^T A, and their inverses.
r
6

[ I -3 8]
ATA =
10 (AT A)-
1
= -3 10 -27
3 8 - 27 74
[l
3

[ 74
- 27
-;]
AAT :::::
10
(AAT)- 1 =

10
6 - 3
22. Following are the matrices AA^T and A^T A, and their inverses.
['i
5

[ I
- 3

AT.4 = 5
{AT,4.)-1 =
- 3 10
2 5 - 17 30

3

r 35 -13

AAT =
10
(AA
1
')- = l-13
5
5 5 - 2
23. The fixed points of the matrix A are the solutions of the homogeneous system (I - A)x = 0. The augmented matrix of this system is
[-1 -1 | 0]
[-1 -1 | 0]
and from this it is easy to see that the system has the solution x1 = t, x2 = -t. Thus the fixed points of A are vectors of the form x = t[1; -1], where -oo < t < oo.
24. All vectors of thcform x = H] = t [-:] , whero - oo < t < oo.
25. (a) If A
2
= = Thus A is nilpotent with nilpotency mdex 2,
and the inverse of I- A = is (I- A)-
1
= I+ A=
(b) The nilpotency index of A = I- A =:'

is
8 l 0 - 8 -1
100 Chapter 3
26. (a) A= has nilpotency indt>.x 2.
1-A=[lO]
-1 1
(I - A) -t = I + A = [
1 0
]
. 1 1
(b)
[
0 2 1,]
A = o o 3 has nilpotency index 3.
0 0 0

=:]
+ + =
27. (a) If A is invertible and skew-symmetric, then (A^{-1})^T = (A^T)^{-1} = (-A)^{-1} = -A^{-1}; thus A^{-1} is skew-symmetric.
(b) If A and B are skew-symmetric, then
(A + B)^T = A^T + B^T = -A - B = -(A + B)
(A - B)^T = A^T - B^T = -A + B = -(A - B)
(kA)^T = kA^T = k(-A) = -kA
thus the matrices A + B, A - B, and kA are also skew-symmetric.
28. (a) If A is any square matrix, then (A + A^T)^T = A^T + (A^T)^T = A^T + A = A + A^T, so A + A^T is symmetric. Similarly, (A - A^T)^T = -(A - A^T), so A - A^T is skew-symmetric.
(b) A = (1/2)(A + A^T) + (1/2)(A - A^T)
(c) The given matrix A is written in the same way, as the sum of its symmetric part (1/2)(A + A^T) and its skew-symmetric part (1/2)(A - A^T).
29. If H = In - 2uu^T, then H^T = In^T - 2(uu^T)^T = In - 2uu^T = H; thus H is symmetric.
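The symmetry of H can be checked numerically. The sketch below builds H = I - 2uu^T for a sample unit vector u (an illustrative choice, not from the text) and verifies both the symmetry and the fact that H is its own inverse when u has norm 1:

```python
def householder(u):
    """Build H = I - 2*u*u^T as a list of rows."""
    n = len(u)
    return [[(1 if i == j else 0) - 2 * u[i] * u[j] for j in range(n)]
            for i in range(n)]

u = [0.6, 0.8]            # a unit vector: 0.36 + 0.64 = 1
H = householder(u)

# symmetry: H[i][j] == H[j][i] for all i, j
assert all(H[i][j] == H[j][i] for i in range(2) for j in range(2))

# when u is a unit vector, H*H = I (so H is its own inverse)
HH = [[sum(H[i][k] * H[k][j] for k in range(2)) for j in range(2)]
      for i in range(2)]
assert all(abs(HH[i][j] - (1 if i == j else 0)) < 1e-12
           for i in range(2) for j in range(2))
```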
30. If r1, r2, ..., rm are the row vectors of A, then the (i, j) entry of AA^T is the dot product ri·rj:
AA^T = [r1·r1 r1·r2 ... r1·rm; r2·r1 r2·r2 ... r2·rm; ...; rm·r1 rm·r2 ... rm·rm]
31. Using Formula (9), we have tr(A^T A) = ||a1||^2 + ||a2||^2 + ... + ||an||^2 = 1 + 1 + ... + 1 = n.
DISCUSSION AND DISCOVERY
D1. (a) A = [3a11 5a12 7a13; 3a21 5a22 7a23; 3a31 5a32 7a33] = [a11 a12 a13; a21 a22 a23; a31 a32 a33][3 0 0; 0 5 0; 0 0 7] = BD
(b) There are other such factorizations as well.
D2. (a) A is symmetric since aij = j^2 + i^2 = i^2 + j^2 = aji for each i, j.
(b) A is not symmetric since aij = j - i = -(i - j) = -aji for each i, j. In fact, A is skew-symmetric.
(c) A is symmetric since aij = 2j + 2i = 2i + 2j = aji for each i, j.
(d) A is not symmetric since aij = 2j^2 + 2i^3 ≠ 2i^2 + 2j^3 = aji if i = 1, j = 2.
In general, the matrix A = [f(i, j)] is symmetric if and only if f(j, i) = f(i, j) for each i, j.
D3. No. In fact, if A and B are commuting skew-symmetric matrices, then
(AB)^T = B^T A^T = (-B)(-A) = BA = AB
and so the product AB is symmetric rather than skew-symmetric.
D4. Using Formula (9), we have tr(A^T A) = ||a1||^2 + ||a2||^2 + ... + ||an||^2. Thus, if A^T A = 0, then it follows that ||a1||^2 = ||a2||^2 = ... = ||an||^2 = 0 and so A = 0.
D5. (a) If A = diag(d1, d2, d3), then A^2 = diag(d1^2, d2^2, d3^2); thus A^2 = A if and only if di^2 = di for i = 1, 2, and 3, and this is true iff di = 0 or 1 for i = 1, 2, and 3. There are a total of eight such matrices (3 x 3 diagonal matrices whose diagonal entries are either 0 or 1).
(b) There are a total of 2^n such matrices (n x n diagonal matrices whose diagonal entries are either 0 or 1).
D6. If A = diag(d1, d2), then A^2 + 5A + 6I2 = 0 if and only if di^2 + 5di + 6 = 0 for i = 1 and 2; i.e. if and only if di = -2 or -3 for i = 1 and 2. There are a total of four such matrices (any 2 x 2 diagonal matrix whose diagonal entries are either -2 or -3).
D7. If A is both symmetric and skew-symmetric, then A^T = A and A^T = -A; thus A = -A and so A = 0.
D8. In a symmetric matrix, entries that are symmetrically positioned across the main diagonal are equal to each other. Thus a symmetric matrix is completely determined by the entries that lie on or above the main diagonal, and entries that appear below the main diagonal are duplicates of entries that appear above the main diagonal. An n x n matrix has n entries on the main diagonal, n - 1 entries on the diagonal just above the main diagonal, etc. Thus there are a total of
n + (n - 1) + ... + 2 + 1 = n(n + 1)/2
entries that lie on or above the main diagonal. For a symmetric matrix, this is the maximum number of distinct entries the matrix can have.
In a skew-symmetric matrix, the diagonal entries are 0 and entries that are symmetrically positioned across the main diagonal are the negatives of each other. The maximum number of distinct entries can be attained by selecting distinct positive entries for the positions above the main diagonal. The entries in the n(n - 1)/2 positions below the main diagonal will then automatically be distinct from each other and from the entries on or above the main diagonal. Thus the maximum number of distinct entries in a skew-symmetric matrix is
n(n - 1)/2 + n(n - 1)/2 + 1 = n(n - 1) + 1
D9. If D = diag(d1, d2), then AD = [d1a1  d2a2], where a1 and a2 are the columns of A. Thus AD = I = [e1  e2] (where e1 and e2 are the standard unit vectors in R^2) if and only if d1a1 = e1 and d2a2 = e2. But this is true if and only if d1 ≠ 0, d2 ≠ 0, a1 = (1/d1)e1, and a2 = (1/d2)e2. Thus A = diag(1/d1, 1/d2), where d1, d2 ≠ 0. Although described here for the case n = 2, it should be clear that the same argument can be applied to a square matrix of any size. Thus, if AD = I, then the diagonal entries d1, d2, ..., dn of D must be nonzero, and A = diag(1/d1, 1/d2, ..., 1/dn).
D10. (a) False. If A is not square then A is not invertible; it doesn't matter whether AA^T (which is always square) is invertible or not. [But if A is square and AA^T is invertible, then A is invertible by Theorem 3.6.5.]
(b) False. For example if A = G and B = !] , then A + B = G is
(c) True. If A is both symmetric and triangular, then A must be a diagonal matrix, and so p(A) is also a diagonal matrix (hence both symmetric and triangular).
(d) True. For example, in the 3 x 3 case, we have
(e) True. If Ax = 0 has only the trivial solution, then A is invertible. But if A is invertible then so is A^T (Theorem 3.2.11); thus A^T x = 0 has only the trivial solution.
Dll. (a) False. For =
(b) True. If A is invertible then A^k is invertible, and thus A^k ≠ 0, for every k = 1, 2, 3, .... This shows that an invertible matrix cannot be nilpotent; equivalently, a nilpotent matrix cannot be invertible.
(c) True (assuming A ≠ 0). If A^3 = A, then A^5 = A, A^7 = A, A^9 = A, .... Thus it is not possible to have A^k = 0 for any positive integer k, since this would imply that A^j = 0 for all j ≥ k.
(d) True. See Theorem 3.2.11.
(e) False. For example, I is invertible but I - I = 0 is not invertible.
WORKING WITH PROOFS
P1. If A and B are symmetric, then A^T = A and B^T = B. It follows that
(A^T)^T = A^T
(A + B)^T = A^T + B^T = A + B
(A - B)^T = A^T - B^T = A - B
(kA)^T = kA^T = kA
thus the matrices A^T, A + B, A - B, and kA are also symmetric.
P2. Our proof is by induction on the exponent k. The claim is that if D = diag(d1, d2, ..., dn), then D^k = diag(d1^k, d2^k, ..., dn^k).
Step 1. We have D^1 = D = diag(d1, d2, ..., dn); thus the statement is true for k = 1.
Step 2 (induction step). Suppose the statement is true for k = j, where j is an integer ≥ 1. Then
D^{j+1} = D D^j = diag(d1, ..., dn) diag(d1^j, ..., dn^j) = diag(d1^{j+1}, ..., dn^{j+1})
and so the statement is also true for k = j + 1.
These two steps complete the proof by induction.
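The induction above can be illustrated with a short computation: raising a diagonal matrix to a power by repeated multiplication gives the same result as raising each diagonal entry to that power (the sample entries are illustrative).

```python
def diag(entries):
    """Build a diagonal matrix (as a list of rows) from its diagonal entries."""
    n = len(entries)
    return [[entries[i] if i == j else 0 for j in range(n)] for i in range(n)]

def matmul(A, B):
    n = len(A)
    return [[sum(A[i][k] * B[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

D = diag([2, 3, 5])
Dk = D
for _ in range(3):        # compute D^4 by repeated multiplication
    Dk = matmul(Dk, D)

assert Dk == diag([2 ** 4, 3 ** 4, 5 ** 4])
```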
P3. If d1, d2, ..., dn are nonzero, then D = diag(d1, d2, ..., dn) is invertible with D^{-1} = diag(1/d1, 1/d2, ..., 1/dn). On the other hand, if any one of the diagonal entries is zero, then D has a row of zeros and thus is not invertible.
P4. We will show that if A is symmetric (i.e. if A^T = A), then (A^n)^T = A^n for each positive integer n. Our proof is by induction on the exponent n.
Step 1. Since A is symmetric, we have (A^1)^T = A^T = A = A^1; thus the statement is true for n = 1.
Step 2 (induction step). Suppose the statement is true for n = j, where j is an integer ≥ 1. Then
(A^{j+1})^T = (AA^j)^T = (A^j)^T A^T = A^j A = A^{j+1}
and so the statement is also true for n = j + 1.
These two steps complete the proof by induction.
P5. If A is invertible, then Theorem 3.2.11 implies A^T is invertible; thus the products AA^T and A^T A are invertible as well. On the other hand, if either AA^T or A^T A is invertible, then Theorem 3.3.8 implies that A is invertible. It follows that A, AA^T, and A^T A are either all invertible or all singular.
EXERCISE SET 3.7
1. We will solve Ax = b by first solving Ly = b for y, and then solving Ux = y for x.
The system Ly = b is
3yl = 0
-2yl + Y2 = 1
from which, by forward substitution, we obtain Yl = 0, Y2 = 1.
The system Ux = y is
x1- 2x2 = 0
X2 = 1
from which, by back substitution, we obtain x1 = 2, x2 = 1. It is easy to check that this is in fact the solution of Ax = b.
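The two-stage solve used throughout this exercise set can be sketched as forward substitution for Ly = b followed by back substitution for Ux = y; the data below are the L, U, and b of Exercise 1.

```python
def forward_sub(L, b):
    """Solve Ly = b for a lower triangular L by forward substitution."""
    y = []
    for i in range(len(b)):
        s = sum(L[i][j] * y[j] for j in range(i))
        y.append((b[i] - s) / L[i][i])
    return y

def back_sub(U, y):
    """Solve Ux = y for an upper triangular U by back substitution."""
    n = len(y)
    x = [0.0] * n
    for i in range(n - 1, -1, -1):
        s = sum(U[i][j] * x[j] for j in range(i + 1, n))
        x[i] = (y[i] - s) / U[i][i]
    return x

L = [[3.0, 0.0], [-2.0, 1.0]]
U = [[1.0, -2.0], [0.0, 1.0]]
b = [0.0, 1.0]

y = forward_sub(L, b)      # gives y1 = 0, y2 = 1
x = back_sub(U, y)         # gives x1 = 2, x2 = 1
assert y == [0.0, 1.0] and x == [2.0, 1.0]
```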
2. The solution of Ly = b is y
1
= 3, Y2 = The solution of Ux = y (a.nd of Ax= b ) is x
1
=
X
- l
2- '1
3. We will solve Ax = b by first solving Ly = b for y, and then solving Ux = y for x.
The system Ly = b is
3y1 = -3
2y1 + 4y2 = -22
-4y1 - y2 + 2y3 = 3
from which, by forward substitution, we obtain y1 = -1, y2 = -5, y3 = -3.
The system Ux = y is
x1 - 2x2 - x3 = -1
x2 + 2x3 = -5
x3 = -3
from which, by back substitution, we obtain x1 = -2, x2 = 1, x3 = -3. It is easy to check that this is in fact the solution of Ax = b.
4 . T he solution of Ly = b is Yt = 1, Y2 = 5, Y3 = 11. The solution of Ux = y {and of Ax = b ) is
X - ill X - 37 X _ 11
l - 14 ' 2 - i4) 3 - i4.
5. The matrix A can be reduced to row echelon form by a sequence of row operations (details omitted) whose associated multipliers are 2, -1, and 3. Thus
A = LU = [2 0; -1 3][1 4; 0 1]
is an LU-factorization of A.
To solve the system Ax = b where b = [-2; -2], we first solve Ly = b for y, and then solve Ux = y for x.
The system Ly = b is
2y1 = -2
-y1 + 3y2 = -2
from which we obtain y1 = -1, y2 = -1.
The system Ux = y is
x1 + 4x2 = -1
x2 = -1
from which we obtain x1 = 3, x2 = -1. It is easy to check that this is in fact the solution of Ax = b.
6. An LU-decomposition of the matrix A is found in the same way (details omitted). The solution of Ly = b is y1 = 2, y2 = -1, and the solution of Ux = y (and of Ax = b) is x1 = 4, x2 = -1.
7. The matrix A can be re<luced to row echelon form by the following sequence of operations:

-2
-2] [ 1
-1
-1] ['
- 1

A= -2 2 -7 0 -2 2 -7 0 - 2 -7
5 2 -1 5 2 0 4
-1 -1

l - 1 -7 0 -l] [' 1 - 1
-I]
-7 0 1 - 1 = u
[' -1 -1]
4 1 0 0 5 0 0 1
The multipliers associated with these operations are 4, 0 (for the second row), 1, and -4; thus
A = LU = [
- 1
0
-2
4
is an LU-factorization of A.
To solve the system Ax = b, we first solve Ly = b for y, and then solve Ux = y for x:
The system Ly = b is
2yl = -4
- 2y2 = - 2
-y! + 4yz + 5y3 = {)
from which we obtain Y1 = -2, Y2 = 1, YJ = 0.
The system Ux = y is
X 1 - X2 - X3 = -2
xz- x 3 = I
X3 = 0
from which we obtain x
1
= -1, .1.'
2
= 1, X3 = 0. Jt is easy to check that this is the solution of
Ax=b.
8. An LU-decomposition of the matrix A is found in the same way (details omitted). The solution of Ly = b is y1 = 0, y2 = 1, y3 = 0. The solution of Ux = y (and of Ax = b) is x1 = -1, x2 = 1, x3 = 0.
9.
The matrix A can be reduced to row echelon form by the following sequence of operations:

0 1

0 - 1

0
- 1

3 - 2 3 - 2 3 0
A=
2
- 1
-1 2 0
-1 -1
2
0 0 1 0 1 0 1

0 -1


0 -1
;1
0 -1

1 0 l 0 1 0
-1 2

0 2 0 1
0 1 0 1 0 1
."!.:.

0 -1
01 [1
0 - 1

1 0
2J 0
l 0
0 1 1 0 0 1
0 0 4 0 0 0
The multipliers associated with these operations are -1, -2, 0 (for the third row), 0, 1, 0,
-1, and thus
[-1
0 0

0 -1

A-LU-
3 0 1 0
- 1 2 0 1
0 1 0 0
is an LU-decomposition of A.
To solve the system Ax = b where b = [5; -1; 3; 7], we first solve Ly = b for y, and then solve Ux = y for x:
The system Ly = b is
-y1 = 5
2y1 + 3y2 = -1
-y2 + 2y3 = 3
y3 + 4y4 = 7
from which we obtain y1 = -5, y2 = 3, y3 = 3, y4 = 1.
The system Ux = y is
x1 - x3 = -5
x2 + 2x4 = 3
x3 + x4 = 3
x4 = 1
from which we obtain x1 = -3, x2 = 1, x3 = 2, x4 = 1. It is easy to check that this is in fact the solution of Ax = b.
10.
['-'
0 0] ['
0 0
j
- 2 0
0]
An LU-decomposition of !1 =
12
is A= LU =
4 0 1 3 0
2
0 2 0 1
. The
0 -1 -4 - 5 0 -1 -1 0 0
ro!ution o[ Ly = b, wh"e b = [!lis y, = 4, Y> = - l, y, Y< ,J;. The soluUon of Ux y
(and of Ax= b) is x
1
= -1 , x
2
= X 3 = !, X4 =
1
1
0
.
11. Let e1, e2, e3 be the standard unit vectors in R^3. The jth column xj of the matrix A^{-1} is obtained by solving Ax = ej for x = xj. Using the given LU-decomposition, we do this by first solving Ly = ej for y = yj, and then solving Ux = yj for x = xj.
(1) Computation of x1: The system Ly = e1 is
3y1 = 1
2y1 + 4y2 = 0
-4y1 - y2 + 2y3 = 0
from which we obtain y1 = 1/3, y2 = -1/6, y3 = 7/12. Then the system Ux = y1 is
x1 - 2x2 - x3 = 1/3
x2 + 2x3 = -1/6
x3 = 7/12
from which we obtain x1 = -7/4, x2 = -4/3, x3 = 7/12. Thus x1 = [-7/4; -4/3; 7/12].
(2) Computation of x2: The system Ly = e2 is
3y1 = 0
2y1 + 4y2 = 1
-4y1 - y2 + 2y3 = 0
from which we obtain y1 = 0, y2 = 1/4, y3 = 1/8. Then the system Ux = y2 is
x1 - 2x2 - x3 = 0
x2 + 2x3 = 1/4
x3 = 1/8
from which we obtain x1 = 1/8, x2 = 0, x3 = 1/8. Thus x2 = [1/8; 0; 1/8].
(3) Computation of x3: The system Ly = e3 is
3y1 = 0
2y1 + 4y2 = 0
-4y1 - y2 + 2y3 = 1
from which we obtain y1 = 0, y2 = 0, y3 = 1/2. Then the system Ux = y3 is
x1 - 2x2 - x3 = 0
x2 + 2x3 = 0
x3 = 1/2
from which we obtain x1 = -3/2, x2 = -1, x3 = 1/2. Thus x3 = [-3/2; -1; 1/2].
Finally, as a result of these computations, we conclude that
A^{-1} = [x1 x2 x3] = [-7/4 1/8 -3/2; -4/3 0 -1; 7/12 1/8 1/2]
12. Let e1, e2, e3 be the standard unit vectors in R^3. As in Exercise 11, the jth column xj of A^{-1} is obtained by first solving Ly = ej for y = yj and then solving Ux = yj for x = xj; carrying this out for j = 1, 2, 3 yields the three columns of A^{-1}.
13. The matrix A can be reduced to row echelon form by the following sequence of operations:
1

[-;
! -ll

1
-:]
2
A:.::
0 -t
0 2 --+ 1
-t
2
2 2 I 2

1
-11 r
I
-:]
i 2
lj -t 0
1 =U
1 2 0 0
where the associated multipliers are 2, -2, 1 (for the leading entry in the second row), and -1. This leads to the factorization A = LU. If instead we prefer a lower triangular factor that has 1s on the main diagonal, this can be achieved by shifting the diagonal entries of L into a diagonal matrix D and writing the factorization as A = L'DU.
14. A =

=





= L'DU
-2 -4 l1 2 2 1 0 0 63 0 0 1
15. (a) This is a permutation matrix; it is obtained by interchanging the rows of I2.
(b) This is not a permutation matrix; the second row is not a row of I3.
(c) This is a permutation matrix; it is obtained by reordering the rows of I4 (3rd, 2nd, 4th, 1st).
16. (b) is a permutation matrix. (a) and (c) are not permutation matrices.
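The test applied in these two exercises can be sketched in code: a square 0/1 matrix is a permutation matrix exactly when every row and every column contains a single 1 (equivalently, each row is a distinct row of the identity matrix).

```python
def is_permutation_matrix(P):
    """Check that P is square, has only 0/1 entries, and has exactly
    one 1 in every row and in every column."""
    n = len(P)
    if any(len(row) != n for row in P):
        return False
    if any(entry not in (0, 1) for row in P for entry in row):
        return False
    rows_ok = all(sum(row) == 1 for row in P)
    cols_ok = all(sum(P[i][j] for i in range(n)) == 1 for j in range(n))
    return rows_ok and cols_ok

assert is_permutation_matrix([[0, 1], [1, 0]])          # rows of I2 swapped
assert not is_permutation_matrix([[1, 0], [1, 0]])      # a repeated row
assert is_permutation_matrix([[0, 0, 1], [0, 1, 0], [1, 0, 0]])
```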
17. The system Ax = b is equivalent to P^{-1}Ax = P^{-1}b, where P^{-1} = P. Using the given decomposition, the system P^{-1}Ax = P^{-1}b can be written as
LUx = P^{-1}b
We solve this by first solving Ly = P^{-1}b for y, and then solving Ux = y for x.
The system Ly = P^{-1}b is
y1 = 1
y2 = 2
3y1 - 5y2 + y3 = 5
from which we obtain y1 = 1, y2 = 2, y3 = 12. Finally, the system Ux = y is
x1 + 2x2 + 2x3 = 1
x2 + 4x3 = 2
17x3 = 12
from which we obtain x1 = 21/17, x2 = -14/17, x3 = 12/17. This is the solution of Ax = b.
18. The system Ax = b is equivalent to P^{-1}Ax = P^{-1}b, where P^{-1} = P. Using the given decomposition, the system P^{-1}Ax = P^{-1}b can be written as LUx = P^{-1}b.
The solution of Ly = P^{-1}b is y1 = 3, y2 = 0, y3 = 0. The solution of Ux = y (and of Ax = b) is x1 = 3, x2 = 0, x3 = 0.
19. If we interchange rows 2 and 3 of A, then the resulting matrix can be reduced t o row echelon form
without any further row interchanges. This is equivalent to first multiplying A on the left by the
corresponding permutation matrix P:
[
3 -1
0 2
3 -1
-1
-1
2
:1
J
The reduct ion of P A to row echelon form proceeds as follows:
[
3 -1
PA = 0 2
3 - 1
This corresponds to t he LU-decomposition
[
3 -1
PA = 0 2
3 -1
0] [3 0 0] [1
1 = 0 2 0 0
1 3 0 1 0
of t.he matrix PA, or to the following P LU -decomposition of A
[
1 0 0] [3 0 0] [1
A = 0 0 1 0 2 0 0 1
0 1 0 3 0 1 0 0

= P^{-1}LU
Note that, since P^{-1} = P, this decomposition can also be written as A = PLU.
The system Ax = b = ! is equivalent to PAx= Pb = and, using the LU-decornposjtion
obtained above, this can be written as
[
3 o olll - 4 ol [x
1
] [-2]
PAx == DUx == 0 2 0 0 1 ! x 2 4 = Pb
3 0 } 0 0 } X3 1
Finally, the solution of Ly = Pb is y1 = -2/3, y2 = 2, y3 = 3, and the solution of Ux = y (and of Ax = b) is x1 = -1/2, x2 = 1/2, x3 = 3.
20. If we interchange rows 1 and 2 of A, then the resulting matrix can be reduced to row echelon form without any further row interchanges. This is equivalent to first multiplying A on the left by the corresponding permutation matrix P:
The LU-decomposit ion of t he matrix PA is
P A = LU = -;]
2 0 -3 0 0 1
and the corresponding PLU-decomposition of A is
Since P^{-1} = P, this can also be written as A = PLU.
The system Ax = b is equivalent to PAx = Pb; using the LU-decomposition obtained above, this can be written as
PAx = LUx = 1 -;] [ ::] = [ = Pb
2 Q -3 0 0 1 X3 -2
The solution of Ly = Pb is y1 = 5, y2 = y3 = 4; and the solution of Ux = y (and of Ax = b) is x1 = -16, x2 = 5, x3 = 4.
21. (a) We have n = 10^5 for the given system and so, from Table 3.7.1, the numbers of gigaflops required for the forward and backward phases are approximately
G_forward = (2n^3/3) x 10^{-9} = (2/3)(10^5)^3 x 10^{-9} ≈ 6.7 x 10^5 ≈ 670,000
G_backward = n^2 x 10^{-9} = (10^5)^2 x 10^{-9} = 10
Thus if a computer can execute 1 gigaflop per second it will require approximately 66.67 x 10^4 s for the forward phase, and approximately 10 s for the backward phase.
(b) If n = 10^4 then, from Example 4, the total number of gigaflops required for the forward and backward phases is approximately (2/3) x 10^3 + 10^{-1} ≈ 667. Thus, in order to complete the task in less than 0.5 s, a computer must be able to execute at least 667/0.5 = 1334 gigaflops per second.
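These estimates can be sketched directly (assuming, as in Table 3.7.1, roughly 2n^3/3 flops for the forward phase and n^2 flops for the backward phase of Gaussian elimination):

```python
def gigaflops_forward(n):
    """Approximate forward-phase cost in gigaflops, assuming ~2n^3/3 flops."""
    return (2 * n ** 3 / 3) / 1e9

def gigaflops_backward(n):
    """Approximate backward-phase cost in gigaflops, assuming ~n^2 flops."""
    return n ** 2 / 1e9

n = 10 ** 5
assert abs(gigaflops_forward(n) - 666666.67) < 1.0   # about 6.7 * 10^5
assert gigaflops_backward(n) == 10.0                 # exactly 10 gigaflops
```

At 1 gigaflop per second, these values are also the running times in seconds, matching the estimates in part (a).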
22. (a) If A has such an LU-decomposition, it can be written as
[a b; c d] = [1 0; w 1][x y; 0 z]
This yields the system of equations
x = a, y = b, wx = c, wy + z = d
and, since a ≠ 0, this system has the unique solution
x = a, y = b, w = c/a, z = (ad - bc)/a
(b) From the above, we have
[a b; c d] = [1 0; c/a 1][a b; 0 (ad - bc)/a]
DISCUSSION AND DISCOVERY
D1. The rows of P are obtained by a reordering of the rows of the identity matrix I4 (4th, 3rd, 2nd, 1st). Thus P is a permutation matrix. Multiplication of A (on the left) by P results in the corresponding reordering of the rows of A.
CHAPTER 4
Determinants
EXERCISE SET 4.1
1. (3)(4) - (5)(-2) = 12 + 10 = 22
2. (4)(2) - (1)(8) = 8 - 8 = 0
3. (-5)(-2) - (7)(-7) = 10 + 49 = 59
4. (√2)(√3) - (4)(√6) = -3√6
5. |a-3 5; -3 a-2| = (a - 3)(a - 2) - (5)(-3) = (a^2 - 5a + 6) + 15 = a^2 - 5a + 21
6. |-2 7 6; 5 1 -2; 3 8 4| = (-8 - 42 + 240) - (18 + 140 + 32) = 190 - 190 = 0
7. |-2 1 4; 3 5 -7; 1 6 2| = (-20 - 7 + 72) - (20 + 84 + 6) = 45 - 110 = -65
8. |-1 1 2; 3 0 -5; 1 7 2| = (0 - 5 + 42) - (0 + 6 + 35) = 37 - 41 = -4
9. |3 0 0; 2 -1 5; 1 9 -4| = (12 + 0 + 0) - (0 + 135 + 0) = -123
10. |c -4 3; 2 1 c^2; 4 c-1 2| = (2c - 16c^2 + 6(c - 1)) - (12 + c^3(c - 1) - 16) = -c^4 + c^3 - 16c^2 + 8c - 2
11. (a) {4, 1, 3, 5, 2} is an odd permutation (3 interchanges). The signed product is -a14a21a33a45a52.
(b) {5, 3, 4, 2, 1} is an odd permutation (3 interchanges). The signed product is -a15a23a34a42a51.
(c) {4, 2, 5, 3, 1} is an odd permutation (3 interchanges). The signed product is -a14a22a35a43a51.
(d) {5, 4, 3, 2, 1} is an even permutation (2 interchanges). The signed product is +a15a24a33a42a51.
(e) {1, 2, 3, 4, 5} is an even permutation (0 interchanges). The signed product is +a11a22a33a44a55.
(f) {1, 4, 2, 3, 5} is an even permutation (2 interchanges). The signed product is +a11a24a32a43a55.
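The parity of a permutation can also be computed by counting inversions (pairs that appear out of order); an even inversion count gives sign +1 and an odd count gives sign -1, which agrees with the interchange counts above.

```python
def permutation_sign(p):
    """Return +1 for an even permutation and -1 for an odd one,
    by counting inversions (pairs i < j with p[i] > p[j])."""
    inversions = sum(1 for i in range(len(p)) for j in range(i + 1, len(p))
                     if p[i] > p[j])
    return 1 if inversions % 2 == 0 else -1

assert permutation_sign([4, 1, 3, 5, 2]) == -1   # odd, as in part (a)
assert permutation_sign([5, 4, 3, 2, 1]) == 1    # even, as in part (d)
assert permutation_sign([1, 2, 3, 4, 5]) == 1    # identity, part (e)
```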


12. (a), (b), (c), and (d) are even; (e) and (f) are odd.
13. det(A) = (λ - 2)(λ + 4) + 5 = λ^2 + 2λ - 3 = (λ - 1)(λ + 3). Thus det(A) = 0 if and only if λ = 1 or λ = -3.
14. λ = 1, λ = 3, or λ = -2.
15. det(A) = (λ - 1)(λ + 1). Thus det(A) = 0 if and only if λ = 1 or λ = -1.
16. λ = 2 or λ = 5.
17. We have |x -1; 3 1-x| = x(1 - x) + 3 = -x^2 + x + 3, and
|1 0 -3; 2 x -6; 1 3 x-5| = (x(x - 5) + 0 - 18) - (-3x - 18 + 0) = x^2 - 2x
Thus the given equation is satisfied if and only if -x^2 + x + 3 = x^2 - 2x, i.e. if 2x^2 - 3x - 3 = 0. The roots of this quadratic equation are x = (3 ± √33)/4.
18. y = 3
19. (a) (1)(-1)(1) = -1 (b) 0 (c) (1)(1)(2)(3) = 6
20. (a) 2^3 = 8 (b) (1)(2)(3)(4) = 24 (c) (-3)(2)(-1)(3) = 18
21.
M11 = 29, C11 = 29    M12 = 21, C12 = -21    M13 = 27, C13 = 27
M21 = -11, C21 = 11    M22 = 13, C22 = 13    M23 = -5, C23 = 5
M31 = -19, C31 = -19    M32 = -19, C32 = 19    M33 = 19, C33 = 19
22. M11 = 6, C11 = 6    M12 = 12, C12 = -12    M13 = 3, C13 = 3
M21 = 2, C21 = -2    M22 = 4, C22 = 4    M23 = 1, C23 = -1
M31 = 0, C31 = 0    M32 = 0, C32 = 0    M33 = 0, C33 = 0
23. (a) M13 = (0 + 0 + 12) - (12 + 0 + 0) = 0; C13 = 0
(b) M23 = (8 - 56 + 24) - (24 + 56 - 8) = -96; C23 = 96
(c) M22 = (0 + 56 + 72) - (0 + 8 + 168) = -48; C22 = -48
(d) M21 = (0 + 14 + 18) - (0 + 2 - 42) = 72; C21 = -72
24. (a) M32 = -30, C32 = 30 (b) M44 = 13, C44 = 13 (c) M41 = -1, C41 = 1 (d) M24 = 0, C24 = 0
25. (a) det(A) = (1)C11 + (-2)C12 + (3)C13 = (1)(29) + (-2)(-21) + (3)(27) = 152
(b) det(A) = (1)C11 + (6)C21 + (-3)C31 = (1)(29) + (6)(11) + (-3)(-19) = 152
(c) det(A) = (6)C21 + (7)C22 + (-1)C23 = (6)(11) + (7)(13) + (-1)(5) = 152
(d) det(A) = (-2)C12 + (7)C22 + (1)C32 = (-2)(-21) + (7)(13) + (1)(19) = 152
(e) det(A) = (-3)C31 + (1)C32 + (4)C33 = (-3)(-19) + (1)(19) + (4)(19) = 152
(f) det(A) = (3)C13 + (-1)C23 + (4)C33 = (3)(27) + (-1)(5) + (4)(19) = 152
26. (a) det(A) = (1)C11 + (1)C12 + (2)C13 = (1)(6) + (1)(-12) + (2)(3) = 0
(b) det(A) = (1)C11 + (3)C21 + (0)C31 = (1)(6) + (3)(-2) + (0)(0) = 0
(c) det(A) = (3)C21 + (3)C22 + (6)C23 = (3)(-2) + (3)(4) + (6)(-1) = 0
(d) det(A) = (1)C12 + (3)C22 + (1)C32 = (1)(-12) + (3)(4) + (1)(0) = 0
(e) det(A) = (0)C31 + (1)C32 + (4)C33 = (0)(0) + (1)(0) + (4)(0) = 0
(f) det(A) = (2)C13 + (6)C23 + (4)C33 = (2)(3) + (6)(-1) + (4)(0) = 0
27. Using column 2: det(A) = (5)(-15 + 7) = -40
28. Using row 2: det(A) = (1)(-18) + (-4)(12) = -66
DISCUSSION AND DISCOVERY
30. det(A) = k^3 - 8k^2 - 10k + 95
31. Using column 3: det(A) = (-3)(128) - (3)(-48) = -240
32. det(A) = 0
33. By expanding along the third column, we have
|sinθ cosθ 0; -cosθ sinθ 0; sinθ-cosθ sinθ+cosθ 1| = (1)|sinθ cosθ; -cosθ sinθ| = sin^2θ + cos^2θ = 1
for all values of θ.
34. Computing AB and BA directly shows that AB = BA if and only if ae + bf = bd + ce, and this is equivalent to the condition that
|b a-c; e d-f| = b(d - f) - (a - c)e = 0
36. If A = [a b; c d], then tr(A) = a + d, A^2 = [a^2+bc ab+bd; ac+cd bc+d^2], and tr(A^2) = a^2 + 2bc + d^2. Thus
(1/2)|tr(A) 1; tr(A^2) tr(A)| = (1/2)|a+d 1; a^2+2bc+d^2 a+d| = (1/2)((a + d)^2 - (a^2 + 2bc + d^2)) = ad - bc = det(A)
DISCUSSION AND DISCOVERY
D1. Since the product of integers is always an integer, each elementary product is an integer and so the sum of the elementary products is an integer.
D2. The signed elementary products will all be ±(1)(1)⋯(1) = ±1, with half of them equal to +1 and half equal to -1. Thus the determinant will be zero.
D3. A 3 x 3 matrix A can have as many as six zeros without having det(A) = 0. For example, let A be a diagonal matrix with nonzero diagonal entries.
D4. If we expand along the first row, the equation
|x y 1; a1 b1 1; a2 b2 1| = 0
becomes (b1 - b2)x - (a1 - a2)y + (a1b2 - a2b1) = 0, and this is an equation for the line through the points P1(a1, b1) and P2(a2, b2).
D5. If u^T = (a1, a2, a3) and v^T = (b1, b2, b3), then each of the six elementary products of uv^T is of the form (a1 b_{j1})(a2 b_{j2})(a3 b_{j3}), where {j1, j2, j3} is a permutation of {1, 2, 3}; thus each of the elementary products is equal to (a1a2a3)(b1b2b3).
WORKING WITH PROOFS
P1. If the three points lie on a vertical line then x1 = x2 = x3 = a and we have
|x1 y1 1; x2 y2 1; x3 y3 1| = |a y1 1; a y2 1; a y3 1| = 0
Thus, without loss of generality, we may assume that the points do not lie on a vertical line. In this case the points are collinear if and only if (y3 - y1)/(x3 - x1) = (y2 - y1)/(x2 - x1). The latter condition is equivalent to (y3 - y1)(x2 - x1) = (y2 - y1)(x3 - x1), which can be written as
(x2y3 - x3y2) - (x1y3 - x3y1) + (x1y2 - x2y1) = 0
On the other hand, expanding along the third column shows that this is exactly the condition
|x1 y1 1; x2 y2 1; x3 y3 1| = 0
Thus the three points are collinear if and only if this determinant is zero.
P2. We wish to prove that for each positive integer n, there are n! permutations of a set {j1, j2, ..., jn} of n distinct elements. Our proof is by induction on n.
Step 1. It is clear that there is exactly 1 = 1! permutation of the set {j1}. Thus the statement is true for the case n = 1.
Step 2 (induction step). Suppose that the statement is true for n = k, where k is a fixed integer ≥ 1. Let S = {j1, j2, ..., jk, jk+1}. Then a permutation of the set S is formed by first choosing one of k + 1 positions for the element jk+1 and then choosing a permutation for the remaining k elements in the remaining k positions. There are k + 1 possibilities for the first choice and, by the induction hypothesis, k! possibilities for the second choice. Thus there are a total of (k + 1)k! = (k + 1)! permutations of S. This shows that if the statement is true for n = k it must also be true for n = k + 1. These two steps complete the proof by induction.
EXERCISE SET 4.2
1. (a) det(A) = |-2 3; 1 4| = -8 - 3 = -11 and det(A^T) = |-2 1; 3 4| = -8 - 3 = -11
(b) det(A) = |2 -1 3; 1 2 4; 5 -3 6| = (24 - 20 - 9) - (30 - 6 - 24) = -5 - 0 = -5 and
det(A^T) = |2 1 5; -1 2 -3; 3 4 6| = (24 - 9 - 20) - (30 - 6 - 24) = -5 - 0 = -5
2. (a) det(A) = det(A^T) = 2 (b) det(A) = det(A^T) = 17
3. (a) The matrix is triangular, so its determinant is the product of the diagonal entries: (3)(1/3)(-2)(2) = -4
(b) |3 1 9; -1 2 -3; 1 5 3| = 0 (the first and third columns are proportional)
(c) |3 -17 4; 0 5 1; 0 0 -2| = (3)(5)(-2) = -30
4. (a) -2 (b) 0 (identical rows) (c) 0 (proportional rows)
5. (a) |d e f; g h i; a b c| = (-1)|a b c; g h i; d e f| = (-1)(-1)|a b c; d e f; g h i| = (-1)(-1)(-6) = -6
(b) |3a 3b 3c; -d -e -f; 4g 4h 4i| = (3)(-1)(4)|a b c; d e f; g h i| = (-12)(-6) = 72
(c) |a+g b+h c+i; d e f; g h i| = |a b c; d e f; g h i| = -6
(d) |-3a -3b -3c; d e f; g-4d h-4e i-4f| = (-3)|a b c; d e f; g h i| = (-3)(-6) = 18
6. (a) 6 (b) 0 (c) 6 (d) -12
7. (a) det(2A) = |-2 6; 4 8| = -16 - 24 = -40, and 2^2 det(A) = 4|-1 3; 2 4| = 4(-4 - 6) = -40
(b) det(-2A) = (-160 + 8 - 288) - (-48 - 64 + 120) = -440 - 8 = -448, and (-2)^3 det(A) = (-8)((20 - 1 + 36) - (6 + 8 - 15)) = (-8)(56) = -448
8. (a) det(-4A) = -224 = (-4)^2(-14) = (-4)^2 det(A)
(b) det(3A) = -63 = (3)^2(-7) = (3)^2 det(A)
9. If x = 0, the first and third rows of the resulting matrix are proportional, so its determinant is 0. If x = 2, the first and second rows of the resulting matrix are proportional, so its determinant is again 0.
10. If we replace the first row of the matrix by the sum of the first and second rows, we conclude that
det[b+c c+a a+b; a b c; 1 1 1] = det[a+b+c b+c+a c+a+b; a b c; 1 1 1] = 0
since the first and third rows of the latter matrix are proportional.
11. We use the properties of determinants stated in Theorem 4.2.2. Corresponding row operations are as indicated.
det(A) = |3 6 -9; 0 0 -2; -2 1 5| = (3)|1 2 -3; 0 0 -2; -2 1 5|   [A common factor of 3 was taken from the first row.]
= (3)|1 2 -3; 0 0 -2; 0 5 -1|   [2 times the first row was added to the third row.]
= (3)(-1)|1 2 -3; 0 5 -1; 0 0 -2|   [The second and third rows were interchanged.]
= (3)(-1)(-10) = 30
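The row-reduction method used here can be sketched in code: flip the sign for each row swap and multiply the pivots of the resulting echelon form. Exact rational arithmetic avoids round-off.

```python
from fractions import Fraction

def det_by_row_reduction(A):
    """Determinant via Gaussian elimination, tracking row-swap signs
    and accumulating the pivots; uses exact rational arithmetic."""
    A = [[Fraction(x) for x in row] for row in A]
    n, sign, det = len(A), 1, Fraction(1)
    for c in range(n):
        pivot_row = next((r for r in range(c, n) if A[r][c] != 0), None)
        if pivot_row is None:
            return 0                          # a zero column: det = 0
        if pivot_row != c:
            A[c], A[pivot_row] = A[pivot_row], A[c]
            sign = -sign                      # a row swap flips the sign
        det *= A[c][c]
        for r in range(c + 1, n):
            m = A[r][c] / A[c][c]
            A[r] = [A[r][j] - m * A[c][j] for j in range(n)]
    return sign * det

A = [[3, 6, -9],
     [0, 0, -2],
     [-2, 1, 5]]
assert det_by_row_reduction(A) == 30          # matches det(A) = 30 above
```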
12. det(A) = 5
13. We use the properties of determinants stated in Theorem 4.2.2. Corresponding row operations are as indicated.
det(A) = |1 -3 0; -2 4 1; 5 -2 2| = |1 -3 0; 0 -2 1; 5 -2 2|   [2 times the first row was added to the second row.]
= |1 -3 0; 0 -2 1; 0 13 2|   [-5 times the first row was added to the third row.]
= (-2)|1 -3 0; 0 1 -1/2; 0 13 2|   [A factor of -2 was taken from the second row.]
= (-2)|1 -3 0; 0 1 -1/2; 0 0 17/2|   [-13 times the second row was added to the third row.]
= (-2)(17/2) = -17
14. det(A) = 33
15. We use the properties of determinants stated in Theorem 4.2.2. Corresponding row operations are as indicated.
det(A) = |1 -2 3 1; 5 -9 6 3; -1 2 -6 -2; 2 8 6 1|
= |1 -2 3 1; 0 1 -9 -2; 0 0 -3 -1; 0 12 0 -1|   [-5 times row 1 was added to row 2; row 1 was added to row 3; -2 times row 1 was added to row 4.]
= |1 -2 3 1; 0 1 -9 -2; 0 0 -3 -1; 0 0 108 23|   [-12 times row 2 was added to row 4.]
= |1 -2 3 1; 0 1 -9 -2; 0 0 -3 -1; 0 0 0 -13|   [36 times row 3 was added to row 4.]
= (1)(1)(-3)(-13) = 39
16. det(A) = 6
17. We use the properties of determinants stated in Theorem 4.2.2. The first and second rows are interchanged (which multiplies the determinant by -1), common scalar factors are taken out of individual rows, and multiples of earlier rows are added to later rows until the matrix is upper triangular; det(A) is then the product of the accumulated scalar factors and the diagonal entries of the resulting triangular matrix.

18. det(A) = -2
19. (a) | a1  b1  a1+b1+c1 |   | a1  b1  b1+c1 |        Add -1 times column 1 to column 3.
        | a2  b2  a2+b2+c2 | = | a2  b2  b2+c2 |
        | a3  b3  a3+b3+c3 |   | a3  b3  b3+c3 |

          | a1  b1  c1 |
        = | a2  b2  c2 |                                Add -1 times column 2 to column 3.
          | a3  b3  c3 |

    (b) | a1+b1  a1-b1  c1 |   | 2a1  a1-b1  c1 |       Add column 2 to column 1.
        | a2+b2  a2-b2  c2 | = | 2a2  a2-b2  c2 |
        | a3+b3  a3-b3  c3 |   | 2a3  a3-b3  c3 |

            | a1  a1-b1  c1 |
        = 2 | a2  a2-b2  c2 |                           Factor of 2 taken from column 1.
            | a3  a3-b3  c3 |

            | a1  -b1  c1 |
        = 2 | a2  -b2  c2 |                             Add -1 times column 1 to column 2.
            | a3  -b3  c3 |

             | a1  b1  c1 |
        = -2 | a2  b2  c2 |                             Factor of -1 taken from column 2.
             | a3  b3  c3 |
a 1 + b1t a2 + bzt a3 +
Chapter 4
20. (a) att +bt a3t+b3
(1- t
2
)bl (1 - (
2
)b2 {1- t
2
}b3
Add - t times
row 1 to row 2.
Gt + btt
= (1 - t
2
) bt
Ct
(1.1 a2
= (1.- t2 ) bt b:z
Ct C2
a2 + bzt
b2
C2
GJ
b3
CJ
a3 + b3t
b3
CJ
Factor of (1 - t
2
)
t.Rken from row 2.
Add - t iin:.es
row 2 to row 1.
    (b) | a1  b1+ta1  c1+rb1+sa1 |   | a1  b1  c1+rb1+sa1 |     Add -t times column 1
        | a2  b2+ta2  c2+rb2+sa2 | = | a2  b2  c2+rb2+sa2 |     to column 2.
        | a3  b3+ta3  c3+rb3+sa3 |   | a3  b3  c3+rb3+sa3 |

          | a1  b1  c1+rb1 |
        = | a2  b2  c2+rb2 |                                    Add -s times column 1
          | a3  b3  c3+rb3 |                                    to column 3.

          | a1  b1  c1 |
        = | a2  b2  c2 |                                        Add -r times column 2
          | a3  b3  c3 |                                        to column 3.

21. det(A) = | 1  x  x^2 |   | 1   x      x^2    |
             | 1  y  y^2 | = | 0  y-x  y^2-x^2 |
             | 1  z  z^2 |   | 0  z-x  z^2-x^2 |

                         | 1  x  x^2 |
           = (y-x)(z-x)  | 0  1  y+x |
                         | 0  1  z+x |

                         | 1  x  x^2 |
           = (y-x)(z-x)  | 0  1  y+x |  =  (y-x)(z-x)(z-y)
                         | 0  0  z-y |
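The Vandermonde factorization in Exercise 21 can be confirmed symbolically; the sketch below uses SymPy rather than the hand row reduction.

```python
import sympy as sp

x, y, z = sp.symbols('x y z')
A = sp.Matrix([[1, x, x**2],
               [1, y, y**2],
               [1, z, z**2]])

# Exercise 21 reduces this Vandermonde determinant to (y - x)(z - x)(z - y).
difference = sp.simplify(A.det() - (y - x)*(z - x)*(z - y))
assert difference == 0
```

Since the difference simplifies to zero, the factored form agrees with the determinant for all x, y, z.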
22. det(A) = (a - b)^3 (a + 3b)
23. If we add the first row of A to the second row, the result is a matrix B that has two identical rows; thus det(A) = det(B) = 0.
24. If we add each of the first four rows of A to the last row, the result is a matrix B that has a row of zeros; thus det(A) = det(B) = 0.
25. (a) We have det(A) = (k - 3)(k - 2) - 4 = k^2 - 5k + 2. Thus, from Theorem 4.2.4, the matrix A is invertible if and only if k^2 - 5k + 2 ≠ 0, i.e. k ≠ (5 ± √17)/2.
    (b) We have det(A) = 8 + 8k. Thus, from Theorem 4.2.4, the matrix A is invertible if and only if 8 + 8k ≠ 0, i.e. k ≠ -1.
26. (a) k ≠ 2   (b) k ≠ ±1

27. (a) det(3A) = 3^3 det(A) = (27)(7) = 189
    (b) det(A^-1) = 1/det(A) = 1/7
    (c) det(2A^-1) = 2^3 det(A^-1) = 8/7
    (d) det((2A)^-1) = 1/det(2A) = 1/(2^3 det(A)) = 1/56

28. (a) det(-A) = (-1)^4 det(A) = (1)(-2) = -2
    (b) det(A^-1) = 1/det(A) = -1/2
    (c) det(2A^T) = det(2A) = 2^4 det(A) = -32
    (d) det(A^3) = (det(A))^3 = (-2)^3 = -8
29. Multiplying the matrices directly and expanding the determinant of the product gives det(AB) = -24. On the other hand, det(A) = (1)(3)(-2) = -6 and det(B) = (2)(1)(2) = 4. Thus det(AB) = det(A) det(B).
30. We have det(AB) = -170. On the other hand, det(A) = 10 and det(B) = -17. Thus det(AB) = det(A) det(B).
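The multiplicative property det(AB) = det(A) det(B) checked in Exercises 29 and 30 holds for any square matrices of the same size; a quick numerical illustration (with arbitrary random matrices, not the book's) is:

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((3, 3))
B = rng.standard_normal((3, 3))

# det(AB) = det(A) det(B) for any pair of n x n matrices.
lhs = np.linalg.det(A @ B)
rhs = np.linalg.det(A) * np.linalg.det(B)
assert abs(lhs - rhs) < 1e-9
```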
31. The following sequence of row operations reduces A to an upper triangular form:

    |  1 -2  3  1 |    | 1 -2  3  1 |    | 1 -2   3  1 |    | 1 -2  3   1 |
    |  5 -9  6  3 | -> | 0  1 -9 -2 | -> | 0  1  -9 -2 | -> | 0  1 -9  -2 |
    | -1  2 -6 -2 |    | 0  0 -3 -1 |    | 0  0  -3 -1 |    | 0  0 -3  -1 |
    |  2  8  6  1 |    | 0 12  0 -1 |    | 0  0 108 23 |    | 0  0  0 -13 |

The corresponding LU-factorization is

        |  1 -2  3  1 |   |  1  0   0  0 | | 1 -2  3   1 |
    A = |  5 -9  6  3 | = |  5  1   0  0 | | 0  1 -9  -2 | = LU
        | -1  2 -6 -2 |   | -1  0   1  0 | | 0  0 -3  -1 |
        |  2  8  6  1 |   |  2 12 -36  1 | | 0  0  0 -13 |

and from this we conclude that det(A) = det(L) det(U) = (1)(39) = 39.
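The factorization above can be verified directly; the L and U factors below are copied from the solution, so the check confirms both the product and the determinant.

```python
import numpy as np

A = np.array([[1, -2,  3,  1],
              [5, -9,  6,  3],
              [-1, 2, -6, -2],
              [2,  8,  6,  1]], dtype=float)

L = np.array([[1,  0,   0, 0],
              [5,  1,   0, 0],
              [-1, 0,   1, 0],
              [2, 12, -36, 1]], dtype=float)

U = np.array([[1, -2,  3,  1],
              [0,  1, -9, -2],
              [0,  0, -3, -1],
              [0,  0,  0, -13]], dtype=float)

# Check the factorization and det(A) = det(L) det(U) = product of diagonals.
assert np.allclose(L @ U, A)
assert abs(np.linalg.det(A) - np.prod(np.diag(L)) * np.prod(np.diag(U))) < 1e-9
```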
32. A similar sequence of row operations reduces A to an upper triangular matrix U with diagonal entries 1, 1, 1, 6, and yields an LU-decomposition A = LU with det(L) = 1. From this we conclude that det(A) = det(L) det(U) = (1)(6) = 6.
33. If we add the first row of A to the second row, using the identity sin^2 θ + cos^2 θ = 1, we see that

    det(A) = | sin^2 α  sin^2 β  sin^2 γ |   | sin^2 α  sin^2 β  sin^2 γ |
             | cos^2 α  cos^2 β  cos^2 γ | = |    1        1        1    | = 0
             |    1        1        1    |   |    1        1        1    |

since the resulting matrix has two identical rows. Thus, from Theorem 4.2.4, A is not invertible.
34. (a) λ = 3 or λ = -1   (b) λ = 6 or λ = -1   (c) λ = 2 or λ = -2
35. (a) Since det(A^T) = det(A), we have det(A^T A) = det(A^T) det(A) = (det(A))^2 = det(A) det(A^T) = det(AA^T).
    (b) Since det(A^T A) = (det(A))^2, it follows that det(A^T A) = 0 if and only if det(A) = 0. Thus, from Theorem 4.2.4, A^T A is invertible if and only if A is invertible.
36. det(A^-1 BA) = det(A^-1) det(B) det(A) = (1/det(A)) det(B) det(A) = det(B)
37. Expanding both sides and comparing terms:

    ||x||^2 ||y||^2 - (x · y)^2 = (x1^2 + x2^2 + x3^2)(y1^2 + y2^2 + y3^2) - (x1 y1 + x2 y2 + x3 y3)^2
                                = (x1 y2 - x2 y1)^2 + (x1 y3 - x3 y1)^2 + (x2 y3 - x3 y2)^2

                                  | x1 x2 |^2   | x1 x3 |^2   | x2 x3 |^2
                                = | y1 y2 |   + | y1 y3 |   + | y2 y3 |
38. (a) We have det of the upper-left block = 5 - 4 = 1 and det of the lower-right block = 3 - 6 = -3; thus det(M) = (1)(-3) = -3.
    (b) We have det of the first diagonal block = 2(5 - 4) = 2 and det of the second diagonal block = (3)(1)(-4) = -12; thus det(M) = (2)(-12) = -24.
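The block rule used in Exercise 38, det(M) = det(A) det(B) for a block triangular matrix M = [[A, 0], [C, B]], can be illustrated numerically. The blocks below are sample values chosen to mirror the 2x2 determinants in part (a) (5 - 4 = 1 and 3 - 6 = -3); the lower-left block C is arbitrary.

```python
import numpy as np

A = np.array([[5.0, 2.0], [2.0, 1.0]])      # det = 1
B = np.array([[3.0, 6.0], [1.0, 1.0]])      # det = -3
C = np.array([[7.0, -4.0], [0.5, 9.0]])     # arbitrary lower-left block

# Block triangular matrix M = [[A, 0], [C, B]].
M = np.block([[A, np.zeros((2, 2))], [C, B]])
assert abs(np.linalg.det(M) - np.linalg.det(A) * np.linalg.det(B)) < 1e-9
```

The zero block in the upper right is what makes the determinant factor, regardless of the entries of C.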

39. (a) Expanding det(M) by blocks gives det(M) = -1080.
    (b) det(M) = (1)(1) = 1
DISCUSSION AND DISCOVERY
D1. The matrices are singular if and only if the corresponding determinants are zero. This leads to a system of equations in s and t, from which the values of s and t follow.
D2. Since det(AB) = det(A) det(B) = det(B) det(A) = det(BA), it is always true that det(AB) = det(BA).
D3. If A or B is not invertible, then either det(A) = 0 or det(B) = 0 (or both). It follows that det(AB) = det(A) det(B) = 0; thus AB is not invertible.
D4. For convenience, call the given matrix A_n. If n = 2 or 3, then A_n can be reduced to the identity matrix by interchanging the first and last rows; thus det(A_n) = -1 if n = 2 or 3. If n = 4 or 5, then two row interchanges are required to reduce A_n to the identity (interchange the first and last rows, then interchange the second and next-to-last rows); thus det(A_n) = +1 if n = 4 or 5. This pattern continues and can be summarized as follows:

    det(A_2k) = det(A_2k+1) = -1  for k = 1, 3, 5, ...
    det(A_2k) = det(A_2k+1) = +1  for k = 2, 4, 6, ...
D5. If A is skew-symmetric, then det(A) = det(A^T) = det(-A) = (-1)^n det(A), where n is the size of A. It follows that if A is a skew-symmetric matrix of odd order, then det(A) = -det(A) and so det(A) = 0.
D6. Let A be an n x n matrix, and let B be the matrix that results when the rows of A are written in reverse order. Then the matrix B can be reduced to A by a series of row interchanges. If n = 2 or 3, then only one interchange is needed and so det(B) = -det(A). If n = 4 or 5, then two interchanges are required and so det(B) = +det(A). This pattern continues:

    det(B) = -det(A) for n = 2k or 2k + 1 where k is odd
    det(B) = +det(A) for n = 2k or 2k + 1 where k is even

D7. (a) False. For example, if A = I = I_2, then det(I + A) = det(2I) = 4, whereas 1 + det(A) = 2.
    (b) True. From Theorem 4.2.5 it follows that det(A^n) = (det(A))^n for every n = 1, 2, 3, ....
    (c) False. From Theorem 4.2.3(c), we have det(3A) = 3^n det(A) where n is the size of A. Thus the statement is false except when n = 1 or det(A) = 0.
    (d) True. If det(A) = 0, the matrix is singular and so the system Ax = 0 has infinitely many solutions.

D8. (a) True. If A is invertible, then det(A) ≠ 0. Since det(ABA) = det(A) det(B) det(A), it follows that if A is invertible and det(ABA) = 0, then det(B) = 0.
    (b) True. If A = A^-1 then, since det(A^-1) = 1/det(A), it follows that (det(A))^2 = 1 and so det(A) = ±1.
    (c) True. If the reduced row echelon form of A has a row of zeros, then A is not invertible.
    (d) True. Since det(A^T) = det(A), it follows that det(AA^T) = det(A) det(A^T) = (det(A))^2 ≥ 0.
    (e) True. If det(A) ≠ 0 then A is invertible, and an invertible matrix can always be written as a product of elementary matrices.
D9. If A = A^2, then det(A) = det(A^2) = (det(A))^2 and so det(A) = 0 or det(A) = 1. If A = A^3, then det(A) = det(A^3) = (det(A))^3 and so det(A) = 0 or det(A) = ±1.
D10. Each elementary product of this matrix must include a factor that comes from the 3 x 3 block of zeros in the upper right. Thus all of the elementary products are zero. It follows that det(A) = 0, no matter what values are assigned to the starred quantities.
D11. This permutation of the columns of an n x n matrix A can be attained via a sequence of n - 1 column interchanges which successively move the first column to the right by one position (i.e. interchange columns 1 and 2, then interchange columns 2 and 3, etc.). Thus the determinant of the resulting matrix is equal to (-1)^(n-1) det(A).
WORKING WITH PROOFS
P1. If x = (x1, x2, ..., xn) and y = (y1, y2, ..., yn) then, using cofactor expansion along the jth column, we have

    det(C) = (x1 + y1)C_1j + (x2 + y2)C_2j + ... + (xn + yn)C_nj
           = (x1 C_1j + x2 C_2j + ... + xn C_nj) + (y1 C_1j + y2 C_2j + ... + yn C_nj)
           = det(A) + det(B)
P2. Suppose A is a square matrix, and B is the matrix that is obtained from A by adding k times the ith row to the jth row. Then, expanding along the jth row of B, we have

    det(B) = (a_j1 + k a_i1)C_j1 + (a_j2 + k a_i2)C_j2 + ... + (a_jn + k a_in)C_jn = det(A) + k det(C)

where C is the matrix obtained from A by replacing the jth row by a copy of the ith row. Since C has two identical rows, it follows that det(C) = 0, and so det(B) = det(A).
EXERCISE SET 4.3
1. The matrix of cofactors from A is C, and det(A) = (2)(-3) + (5)(3) + (5)(-2) = -1. Thus adj(A) = C^T and A^-1 = (1/det(A)) adj(A) = -adj(A).

2. adj(A) and A^-1 are computed in the same way.
3. The matrix of cofactors from A is

    C = | 2 0 0 |
        | 6 4 0 |
        | 4 6 2 |

and det(A) = (2)(2) + (-3)(0) + (5)(0) = 4. Thus

    adj(A) = C^T = | 2 6 4 |        A^-1 = (1/4) adj(A) = | 1/2 3/2   1 |
                   | 0 4 6 |  and                         |  0   1  3/2 |
                   | 0 0 2 |                              |  0   0  1/2 |

4. adj(A) and A^-1 = (1/det(A)) adj(A) are computed in the same way.

5. By Cramer's rule, x1 = det(A1)/det(A) and x2 = det(A2)/det(A), where A1 and A2 are obtained from A by replacing the first and second columns, respectively, with the right-hand side.
6. x = -153/(-51) = 3,   y = 204/(-51) = -4
7. By Cramer's rule,

    x = 144/(-55) = -144/55,   y = 61/(-55) = -61/55,   z = -230/(-55) = 46/11

where -55 is the determinant of the coefficient matrix with rows (1, -4, 1), (4, -1, 2), (2, 2, -3), and the numerators are the determinants obtained by replacing the first, second, and third columns, respectively, with the right-hand side (6, -1, -20).
8. By Cramer's rule, x1 = 30/(-11) = -30/11, x2 = 38/(-11) = -38/11, x3 = 40/(-11) = -40/11, where -11 is the determinant of the coefficient matrix with rows (1, -3, 1), (2, -1, 0), (4, 0, -3), and the right-hand side is (4, -2, 0).
9. By Cramer's rule, x1 = -3384/(-423) = 8, x2 = -2115/(-423) = 5, x3 = -1269/(-423) = 3, x4 = 423/(-423) = -1.
10. By Cramer's rule, x1 = 2/2 = 1, x2 = 2/2 = 1, x3 = -2/2 = -1, x4 = -2/2 = -1.
11. By Cramer's rule, x = 21/14 = 3/2.

12. By Cramer's rule, y = -33/41.
13. The matrix of cofactors is

    C = |  cos θ  sin θ  0 |
        | -sin θ  cos θ  0 |
        |    0      0    1 |

and det(A) = cos^2 θ + sin^2 θ = 1. Thus A is invertible and

    A^-1 = adj(A) = | cos θ  -sin θ  0 |
                    | sin θ   cos θ  0 |
                    |   0       0    1 |
14. Using the identity 1 + tan^2 α = sec^2 α, we have det(A) = 1 for α ≠ π/2 + nπ. Thus, for these values of α, the matrix A is invertible and A^-1 = adj(A); its entries involve cos^2 α, tan α cos^2 α, and sec^2 α.
15. The coefficient matrix is

    A = |  3   3  1 |
        |  4   k  2 |
        | 2k  2k  k |

and det(A) = k(k - 4). Thus the system has a unique solution if k ≠ 0 and k ≠ 4. In this case the solution is given by Cramer's rule:

    x = (k - 1)(k - 6) / (k(k - 4)),   y = 2(k - 1) / (k(k - 4)),   z = (2k - 3)(k - 4) / (k(k - 4)) = (2k - 3)/k
16. By Cramer's rule, x = (14k - 3) / ((5k - 3)(3k + 2)), y = 3(k^2 - k + 3) / ((5k - 3)(3k + 2)), z = 3k / (5k - 3).
17. We have det(A) = y^3 - x^2 y = y(y^2 - x^2). Thus A is invertible if and only if y ≠ 0 and y ≠ ±x, in which case A^-1 = (1/(y(y^2 - x^2))) adj(A).
18. We have det(A) = (s^2 - t^2)^2. Thus A is invertible if and only if s ≠ ±t. The formula for the inverse is

    A^-1 = (1/(s^2 - t^2)) |  s  0 -t  0 |
                           |  0  s  0 -t |
                           | -t  0  s  0 |
                           |  0 -t  0  s |
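The inverse formula in Exercise 18 can be checked symbolically. The matrix A below is the one whose inverse has the displayed form (A itself is inferred from that form: it is the same pattern with +t in place of -t).

```python
import sympy as sp

s, t = sp.symbols('s t')
A = sp.Matrix([[s, 0, t, 0],
               [0, s, 0, t],
               [t, 0, s, 0],
               [0, t, 0, s]])

inv_formula = sp.Matrix([[s, 0, -t, 0],
                         [0, s, 0, -t],
                         [-t, 0, s, 0],
                         [0, -t, 0, s]]) / (s**2 - t**2)

# det(A) = (s^2 - t^2)^2, and the stated formula really is the inverse.
assert sp.simplify(A.det() - (s**2 - t**2)**2) == 0
assert sp.simplify(A * inv_formula - sp.eye(4)) == sp.zeros(4, 4)
```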
19. |det(A)| = |1 + 2| = 3

20. |det(A)| = |0 - 6| = 6

21. |det(A)| = |(0 - 4 + 6) - (0 + 4 + 2)| = |-4| = 4

22. |det(A)| = |(1 + 1 + 0) - (0 - 3 + 2)| = |3| = 3
23. The parallelogram has the vectors P1P2 = (3, 2) and P1P4 = (3, 1) as adjacent sides. Let A = |3 3; 2 1|. Then, from Theorem 4.3.5, the area of the parallelogram is |det(A)| = |3 - 6| = 3.
24. The parallelogram has the vectors P1P2 = (2, 2) and P1P4 = (4, 0) as adjacent sides. Let A = |2 4; 2 0|. Then, from Theorem 4.3.5, the area of the parallelogram is |det(A)| = |0 - 8| = 8.
25. area = (1/2)|det(M)| = 7, where M is the 3 x 3 matrix whose rows are the coordinates of the vertices augmented with a column of 1's.

26. area = (1/2)|det(M)|, computed in the same way.
27. V = |det(A)|, where A has the given vectors as its rows; thus V = |-16| = 16.

28. V = 45
29. The vectors lie in the same plane if and only if the parallelepiped that they determine is degenerate in the sense that its "volume" is zero. In this example V = |det| = 16 ≠ 0, and so the vectors do not lie in the same plane.

30. These vectors do lie in the same plane.
31. a = (1/√5)(0, 2, 1)

32. a = (1/√61)(6, -3, 4)

33. u × v = | i  j  k |
            | 2  3 -6 | = 36i - 24j
            | 2  3  6 |

    sin θ = ||u × v|| / (||u|| ||v||) = √(1296 + 576)/49 = √1872/49 = 12√13/49

34. (a) AB × AC = | i  j  k |
                  |-1  2  2 | = -4i + j - 3k,   area = (1/2)||AB × AC|| = (1/2)√26
                  | 1  1 -1 |

    (b) area = (1/2)||AB|| h; thus h = ||AB × AC|| / ||AB|| = √26/3
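The cross-product computation in Exercise 33 can be checked numerically; the vectors u and v below are the ones reconstructed above (both of length 7), so treat them as inferred from the fragments.

```python
import numpy as np

u = np.array([2.0, 3.0, -6.0])
v = np.array([2.0, 3.0, 6.0])

w = np.cross(u, v)
assert np.allclose(w, [36, -24, 0])   # u x v = 36i - 24j

# sin θ = ||u x v|| / (||u|| ||v||) = 12 sqrt(13) / 49
sin_theta = np.linalg.norm(w) / (np.linalg.norm(u) * np.linalg.norm(v))
assert abs(sin_theta - 12 * np.sqrt(13) / 49) < 1e-12
```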
35. (a) v × w = | i  j  k |
                | 0  2 -3 | = (14 + 18)i - (0 + 6)j + (0 - 4)k = 32i - 6j - 4k
                | 2  6  7 |

    (b) u × (v × w) = | i   j  k |
                      | 3   2 -1 | = (-8 - 6)i - (-12 + 32)j + (-18 - 64)k = -14i - 20j - 82k
                      | 32 -6 -4 |

    (c) u × v = | i  j  k |
                | 3  2 -1 | = (-6 + 2)i - (-9 + 0)j + (6 - 0)k = -4i + 9j + 6k
                | 0  2 -3 |

        (u × v) × w = | i  j  k |
                      |-4  9  6 | = (63 - 36)i - (-28 - 12)j + (-24 - 18)k = 27i + 40j - 42k
                      | 2  6  7 |
36. (a) (0, 171, 204)   (b) (-44, 55, -22)   (c) (-8, -3, -8)

37. (a) u × v = 18i + 36j - 18k = (18, 36, -18) is orthogonal to both u and v.
    (b) u × v = -3i + 9j - 3k = (-3, 9, -3) is orthogonal to both u and v.
38. (a) u × v = (0, -6, -3)   (b) u × v = (2, -6, 12)

39-42. These identities are verified by writing out both sides in components (as 2 x 2 determinant expansions): u × v = -v × u, u × (v + w) = (u × v) + (u × w), and k(u × v) = (ku) × v.
43. (a) u × v = | i  j  k |
                | 1 -1  2 | = -7i - j + 3k,   A = ||u × v|| = √(49 + 1 + 9) = √59
                | 0  3  1 |

    (b) u × v = | i  j  k |
                | 2  3  0 | = -6i + 4j + 7k,   A = ||u × v|| = √(36 + 16 + 49) = √101
                |-1  2 -2 |

44. (a) A = ||u × v|| = 0   (b) A = ||u × v|| = √114

45. P1P2 × P1P3 = | i  j  k |
                  |-1 -5  2 | = -15i + 7j + 10k,   area = (1/2)√(225 + 49 + 100) = (1/2)√374
                  | 2  0  3 |
46. A = (1/2)||PQ × PR|| = (1/2)√1140 = √285
47. Recall that the dot product distributes across addition, i.e. (a + d) · u = a · u + d · u. Thus, with u = b × c, it follows that (a + d) · (b × c) = a · (b × c) + d · (b × c).
48. Using properties of cross products stated in Theorem 4.3.8, we have

    (u + v) × (u - v) = u × (u - v) + v × (u - v) = (u × u) + (u × (-v)) + (v × u) + (v × (-v))
                      = (u × u) - (u × v) + (v × u) - (v × v) = 0 - (u × v) - (u × v) - 0 = -2(u × v)
49. The vector

    AB × AC = | i  j  k |
              | 1  1 -3 | = 8i + 4j + 4k
              |-1  3 -1 |

is perpendicular to AB and AC, and thus is perpendicular to the plane determined by the points A, B, and C.
50. (a) We have v × w = (|v2 v3; w2 w3|, -|v1 v3; w1 w3|, |v1 v2; w1 w2|); thus

        u · (v × w) = u1 |v2 v3; w2 w3| - u2 |v1 v3; w1 w3| + u3 |v1 v2; w1 w2| = | u1 u2 u3 |
                                                                                 | v1 v2 v3 |
                                                                                 | w1 w2 w3 |

    (b) |u · (v × w)| is equal to the volume of the parallelepiped having the vectors u, v, w as adjacent edges. A proof can be found in any standard calculus text.
51. (a) We have A^-1 = (1/det(A)) adj(A), where adj(A) is the transpose of the matrix of cofactors of A.
    (b) The reduced row echelon form of [A | I] is [I | A^-1], which yields the same inverse.
    (c) The method used in (b) requires much less computation.
52. We have det(A^k) = (det(A))^k. Thus if A^k = 0 for some k, then det(A) = 0 and so A is not invertible.
53. From Theorem 4.3.9, we know that v × w is orthogonal to the plane determined by v and w. Thus a vector lies in the plane determined by v and w if and only if it is orthogonal to v × w. Therefore, since u × (v × w) is orthogonal to v × w, it follows that u × (v × w) lies in the plane determined by v and w.
54. Since (u × v) × w = -w × (u × v), it follows from the previous exercise that (u × v) × w lies in the plane determined by u and v.
55. If A is upper triangular and j > i, then the submatrix that remains when the ith row and jth column of A are deleted is upper triangular and has a zero on its main diagonal; thus C_ij (the ijth cofactor of A) must be zero if j > i. It follows that the cofactor matrix C is lower triangular, and so adj(A) = C^T is upper triangular. Thus, if A is invertible and upper triangular, then A^-1 = (1/det(A)) adj(A) is also upper triangular.
56. If A is lower triangular and invertible, then A^T is upper triangular and so (A^-1)^T = (A^T)^-1 is upper triangular; thus A^-1 is lower triangular.
57. The polynomial p(x) = ax^3 + bx^2 + cx + d passes through the points (0, 1), (1, -1), (2, -1), and (3, 7) if and only if

                           d =  1
      a +  b +  c + d = -1
     8a + 4b + 2c + d = -1
    27a + 9b + 3c + d =  7

Using Cramer's rule, the solution of this system is a = 12/12 = 1, b = -24/12 = -2, c = -12/12 = -1, d = 12/12 = 1. Thus the interpolating polynomial is p(x) = x^3 - 2x^2 - x + 1.
DISCUSSION AND DISCOVERY
D1. (a) The vector w = v × (u × v) is orthogonal to both v and u × v; thus w is orthogonal to v and lies in the plane determined by u and v.
    (b) Since w is orthogonal to v, we have v · w = 0. On the other hand, u · w = ||u|| ||w|| cos θ = ||u|| ||w|| sin(π/2 - θ), where θ is the angle between u and w. It follows that |u · w| is equal to the area of the parallelogram having u and v as adjacent edges.
D2. No. For example, let u = (1, 0, 0), v = (0, 1, 0), and w = (1, 1, 0). Then u × v = u × w = (0, 0, 1), but v ≠ w.
D3. (u · v) × w does not make sense, since the first factor is a scalar rather than a vector.
D4. If either u or v is the zero vector, then u × v = 0. If u and v are nonzero then, from Theorem 4.3.10, we have ||u × v|| = ||u|| ||v|| sin θ, where θ is the angle between u and v. Thus if u × v = 0, with u and v not zero, then sin θ = 0 and so u and v are parallel.
D5. The associative law of multiplication is not valid for the cross product; that is, u × (v × w) is not in general the same as (u × v) × w.
D6. Let A = |   c    -(1-c) |. Then det(A) = c^2 + (1 - c)^2 = 2c^2 - 2c + 1 ≠ 0 for all values of c.
            | 1 - c     c   |

Thus, for every c, the system has a unique solution given by

    x1 = | 3  -(1-c) |  /  (2c^2 - 2c + 1)  =  (7c - 4) / (2c^2 - 2c + 1)
         | -4    c   |
D7. (c) The solution by Gauss-Jordan elimination requires much less computation.
D8. (a) True. As was shown in the proof of Theorem 4.3.3, we have A adj(A) = det(A)I.
    (b) False. In addition, the determinant of the coefficient matrix must be nonzero.
    (c) True. In fact we have adj(A) = det(A)A^-1, and so (adj(A))^-1 = (1/det(A))A.
    (d) False. A 2 x 2 counterexample is easily constructed.
    (e) True. Both sides are equal to the determinant | u1 u2 u3; v1 v2 v3; w1 w2 w3 |.
WORKING WITH PROOFS
P1. We have u · v = ||u|| ||v|| cos θ and ||u × v|| = ||u|| ||v|| sin θ; thus tan θ = ||u × v|| / (u · v).

P2. The angle between the vectors is θ = α - β; thus u · v = ||u|| ||v|| cos(α - β), or cos(α - β) = (u · v) / (||u|| ||v||).
P3. (a) Using properties of cross products from Theorem 4.3.8, we have

    (u + kv) × v = (u × v) + (kv × v) = (u × v) + k(v × v) = (u × v) + k0 = u × v

    (b) Using part (a) of Exercise 50, we have

    u · (v × w) = | u1 u2 u3 |     | v1 v2 v3 |
                  | v1 v2 v3 | = - | u1 u2 u3 | = -v · (u × w) = -(u × w) · v
                  | w1 w2 w3 |     | w1 w2 w3 |

P4. If a, b, c, and d all lie in the same plane, then a × b and c × d are both perpendicular to that plane, and thus parallel to each other. It follows that (a × b) × (c × d) = 0.
P5. Let Q1 = (x1, y1, 1), Q2 = (x2, y2, 1), Q3 = (x3, y3, 1), and let T denote the tetrahedron in R^3 having the vectors OQ1, OQ2, OQ3 as adjacent edges. The base of this tetrahedron lies in the plane z = 1 and is congruent to the triangle P1P2P3; thus vol(T) = (1/3) area(△P1P2P3). On the other hand, vol(T) is equal to 1/6 times the volume of the parallelepiped having OQ1, OQ2, OQ3 as adjacent edges and, from part (b) of Exercise 50, the latter is equal to OQ1 · (OQ2 × OQ3). Thus

    area(△P1P2P3) = 3 vol(T) = (1/2) OQ1 · (OQ2 × OQ3) = (1/2) | x1 y1 1 |
                                                               | x2 y2 1 |
                                                               | x3 y3 1 |
EXERCISE SET 4.4
1. (a) The matrix A has nontrivial fixed points, since det(I - A) = 0. The fixed points are the solutions of the system (I - A)x = 0, which can be expressed in vector form as x = t v for a suitable nonzero vector v, where -∞ < t < ∞.
   (b) The matrix B likewise has nontrivial fixed points, since det(I - B) = 0. The fixed points are the solutions of the system (I - B)x = 0, which can be expressed in vector form as x = t w for a suitable nonzero vector w, where -∞ < t < ∞.
2. (a) No nontrivial fixed points. (b) No nontrivial fixed points.
3. We have Ax = 5x; thus x is an eigenvector of A corresponding to the eigenvalue λ = 5.
4. We have Ax = 0 = 0x; thus x is an eigenvector of A corresponding to the eigenvalue λ = 0.
5. (a) The characteristic equation of A is det(λI - A) = (λ - 3)(λ + 1) = 0. Thus λ = 3 and λ = -1 are eigenvalues of A; each has algebraic multiplicity 1.
   (b) The characteristic equation is (λ - 10)(λ + 2) + 36 = (λ - 4)^2 = 0. Thus λ = 4 is the only eigenvalue; it has algebraic multiplicity 2.
   (c) The characteristic equation is (λ - 2)^2 = 0. Thus λ = 2 is the only eigenvalue; it has algebraic multiplicity 2.

6. (a) The characteristic equation is λ^2 - 16 = 0. Thus λ = ±4 are eigenvalues; each has algebraic multiplicity 1.
   (b) The characteristic equation is λ^2 = 0. Thus λ = 0 is the only eigenvalue; it has algebraic multiplicity 2.
   (c) The characteristic equation is (λ - 1)^2 = 0. Thus λ = 1 is the only eigenvalue; it has algebraic multiplicity 2.

7. (a) The characteristic equation is λ^3 - 6λ^2 + 11λ - 6 = (λ - 1)(λ - 2)(λ - 3) = 0. Thus λ = 1, λ = 2, and λ = 3 are eigenvalues; each has algebraic multiplicity 1.
   (b) The characteristic equation is λ^3 - 4λ^2 + 4λ = λ(λ - 2)^2 = 0. Thus λ = 0 and λ = 2 are eigenvalues; λ = 0 has algebraic multiplicity 1, and λ = 2 has multiplicity 2.
   (c) The characteristic equation is λ^3 - λ^2 - 8λ + 12 = (λ + 3)(λ - 2)^2 = 0. Thus λ = -3 and λ = 2 are eigenvalues; λ = -3 has multiplicity 1, and λ = 2 has multiplicity 2.
8. (a) The characteristic equation is λ^3 + 2λ^2 + λ = λ(λ + 1)^2 = 0. Thus λ = 0 is an eigenvalue of multiplicity 1, and λ = -1 is an eigenvalue of multiplicity 2.
   (b) The characteristic equation is λ^3 - 6λ^2 + 12λ - 8 = (λ - 2)^3 = 0; thus λ = 2 is an eigenvalue of multiplicity 3.
   (c) The characteristic equation is λ^3 - 2λ^2 - 15λ + 36 = (λ + 4)(λ - 3)^2 = 0; thus λ = -4 is an eigenvalue of multiplicity 1, and λ = 3 is an eigenvalue of multiplicity 2.
9. (a) The eigenspace corresponding to λ = 3 is found by solving the system (3I - A)x = 0. This yields the general solution x = t, y = 2t; thus the eigenspace consists of all vectors of the form (x, y) = t(1, 2). Geometrically, this is the line y = 2x in the xy-plane.
   The eigenspace corresponding to λ = -1 is found by solving the system (-I - A)x = 0. This yields the general solution x = 0, y = t; thus the eigenspace consists of all vectors of the form (x, y) = t(0, 1). Geometrically, this is the line x = 0 (the y-axis).
   (b) The eigenspace corresponding to λ = 4 is found by solving the system (4I - A)x = 0. This yields the general solution x = 3t, y = 2t; thus the eigenspace consists of all vectors of the form (x, y) = t(3, 2). Geometrically, this is the line y = (2/3)x.
   (c) The eigenspace corresponding to λ = 2 is found by solving the system (2I - A)x = 0. This yields the general solution x = 0, y = t; thus the eigenspace consists of all vectors of the form (x, y) = t(0, 1). Geometrically, this is the line x = 0.
10. (a) The eigenspace corresponding to λ = 4 consists of all scalar multiples of a single vector; this is a line through the origin. The eigenspace corresponding to λ = -4 is likewise a line through the origin.
    (b) The eigenspace corresponding to λ = 0 consists of all vectors of the form s v1 + t v2; this is the entire xy-plane.
    (c) The eigenspace corresponding to λ = 1 consists of all vectors of the form (x, y) = t(0, 1); this is the line x = 0.
(a)
(b)
(c)
The eigenspace corresponding to >. = 1 is obtained hy solving "
[
- [oool
yields the general solution x == 0, y = t, z = 0; thus the eigenspace consists of all vectors of the
[ t [:] ; thls cones pond to a Hne thmugh the oigin (the y-axis) in R
3
Slmilad y,
the eigenspace to !. 2 consists of all vecto" of the fo<m [:] = t [ = , and the
eigeosp""e conesponding to !. 3 consists of all vccto" of the fo<m [:] t [- : ]
The eigenspace corresponding to >. == 0 is found by solving the system
5
f :] =
- i :l l v 0
This yields the general solution x = 5t, y = t , z = :H .. Thus the eigenspace of all vectors
of the fo<m [:] = t [!]; this is the line through the odgin >nd the point. (5, 1, 3).
The eigenspace con ... pondh;g to!.= 2 is found by olving the system [=I ! :] m = This
yields [:] = s + t [:], which conesponds to a plane thwugh the odgin.
The eigenspace corresponding to>.= - 3 is fo:1nd by solving [:] = This yields
-3 -9 -3 z 0
[: ] = t [ , which cones ponds to a Hoe though the odgin. The eigenspa<e to
!. = h found by solvi ng u -: ][:] = m This [ l f l] , whichabo con&
sponds to a line through t he origin.
142 Chapter4
12. (a) The eigenspaces corresponding to λ = 0 and λ = -1 each consist of the scalar multiples of a single vector; both are lines through the origin.
    (b) The eigenspace corresponding to λ = 2 consists of the scalar multiples of a single vector.
    (c) The eigenspaces corresponding to λ = -4 and λ = 3 each consist of the scalar multiples of a single vector.
13. (a) The characteristic polynomial is p(λ) = (λ + 1)(λ - 5); the eigenvalues are λ = -1 and λ = 5.
    (b) The characteristic polynomial is p(λ) = (λ - 3)(λ - 7)(λ - 1); the eigenvalues are λ = 3, λ = 7, and λ = 1.
    (c) The eigenvalues are λ = -5 (with multiplicity 2), λ = 1, and one further simple eigenvalue.
14. Two examples are a diagonal matrix with the required eigenvalues as its diagonal entries, and a (non-diagonal) triangular matrix with those same diagonal entries.
15. Using the block diagonal structure, the characteristic polynomial of the given matrix is

    p(λ) = | λ-2  -3 | | λ+2  -5 |
           |  1  λ-6 | |  -1  λ-2 |

         = [(λ - 2)(λ - 6) + 3][(λ + 2)(λ - 2) - 5] = (λ^2 - 8λ + 15)(λ^2 - 9) = (λ - 5)(λ - 3)^2(λ + 3)

Thus the eigenvalues are λ = 5, λ = 3 (with multiplicity 2), and λ = -3.
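The block computation can be checked numerically. The 2x2 blocks below are inferred from the two quadratic factors shown above ((λ-2)(λ-6)+3 and (λ+2)(λ-2)-5), so treat the explicit matrix as a reconstruction.

```python
import numpy as np

A = np.array([[2, 3, 0, 0],
              [-1, 6, 0, 0],
              [0, 0, -2, 5],
              [0, 0, 1, 2]], dtype=float)

# Eigenvalues of a block diagonal matrix are the eigenvalues of its blocks:
# {3, 5} from the first block and {-3, 3} from the second.
eigs = np.sort(np.linalg.eigvals(A).real)
assert np.allclose(eigs, [-3, 3, 3, 5])
```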
16. Using the block triangular structure, the characteristic polynomial of the given matrix is p(λ) = λ^2(λ + 2)(λ - 1). Thus the eigenvalues of B are λ = 0 (with multiplicity 2), λ = -2, and λ = 1.
17. The characteristic polynomial of A is

    p(λ) = det(λI - A) = (λ + 1)(λ - 1)^2

thus the eigenvalues are λ = -1 and λ = 1 (with multiplicity 2). The eigenspace corresponding to λ = -1 is obtained by solving the system (-I - A)x = 0; its general solution consists of the scalar multiples of a single vector. Similarly, the eigenspace corresponding to λ = 1 is obtained by solving (I - A)x = 0; its general solution involves two parameters, so this eigenspace is two-dimensional.

The eigenvalues of A^25 are λ = (-1)^25 = -1 and λ = (1)^25 = 1. Corresponding eigenvectors are the same as above.
18. The eigenvalues of A are λ = 1, λ = 1/2, λ = 0, and λ = 2, with the corresponding eigenvectors indicated. The eigenvalues of A^9 are λ = (1)^9 = 1, λ = (1/2)^9 = 1/512, λ = (0)^9 = 0, and λ = (2)^9 = 512. Corresponding eigenvectors are the same as above.
19. The characteristic polynomial of A is p(λ) = λ^3 - λ^2 - 5λ - 3 = (λ - 3)(λ + 1)^2; thus the eigenvalues are λ1 = 3, λ2 = -1, λ3 = -1. We have det(A) = 3 and tr(A) = 1. Thus det(A) = 3 = (3)(-1)(-1) = λ1 λ2 λ3 and tr(A) = 1 = (3) + (-1) + (-1) = λ1 + λ2 + λ3.
20. The characteristic polynomial of A is p(λ) = λ^3 - 6λ^2 + 12λ - 8 = (λ - 2)^3; thus the eigenvalues are λ1 = λ2 = λ3 = 2. We have det(A) = 8 and tr(A) = 6. Thus det(A) = 8 = (2)(2)(2) = λ1 λ2 λ3 and tr(A) = 6 = (2) + (2) + (2) = λ1 + λ2 + λ3.
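The identities used in Exercises 19 and 20, det(A) = λ1 λ2 ... λn and tr(A) = λ1 + ... + λn, hold for every square matrix; a generic numerical check (on an arbitrary random matrix, not the book's) is:

```python
import numpy as np

rng = np.random.default_rng(1)
A = rng.standard_normal((4, 4))
lams = np.linalg.eigvals(A)

# Product of eigenvalues = determinant; sum of eigenvalues = trace.
assert abs(np.prod(lams) - np.linalg.det(A)) < 1e-8
assert abs(np.sum(lams) - np.trace(A)) < 1e-8
```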
21. The eigenvalues are λ = 0 and λ = 5, with associated eigenvectors (2, -1) and (1, 2) respectively. Thus the eigenspaces correspond to the perpendicular lines y = -x/2 and y = 2x.

22. The eigenvalues are λ = 2 and λ = -1, with associated eigenvectors (√2, 1) and (1, -√2) respectively. Thus the eigenspaces correspond to the perpendicular lines y = x/√2 and y = -√2 x.
23. The invariant lines, if any, correspond to eigenspaces of the matrix.
    (a) The eigenvalues are λ = 2 and λ = 3, with associated eigenvectors (1, 2) and (1, 1). Thus the lines y = 2x and y = x are invariant under the given matrix.
    (b) This matrix has no real eigenvalues, so there are no invariant lines.
    (c) The only eigenvalue is λ = 2 (multiplicity 2), with associated eigenvector (1, 0). Thus the line y = 0 is invariant under the given matrix.
24. The characteristic polynomial of A is p(λ) = λ^2 - (b + 1)λ + (b - 6a), so A has the stated eigenvalues if and only if p(4) = p(-3) = 0. This leads to the equations

    6a - 4b = 12
    6a + 3b = 12

from which we conclude that a = 2 and b = 0.
25. The characteristic polynomial of A is p(λ) = λ^2 - (b + 3)λ + (3b - 2a), so A has the stated eigenvalues if and only if p(2) = p(5) = 0. This leads to the equations

    -2a + b = 2
      a + b = 5

from which we conclude that a = 1 and b = 4.
26. The characteristic polynomial of A is p(λ) = (λ - 3)(λ^2 - 2λx + x^2 - 4). Note that the second factor cannot have a double root (for any value of x), since its discriminant is (-2x)^2 - 4(x^2 - 4) = 16 ≠ 0. Thus the only possible repeated eigenvalue of A is λ = 3, and this occurs if and only if λ = 3 is a root of the second factor of p(λ), i.e. if and only if 9 - 6x + x^2 - 4 = 0. The roots of this quadratic equation are x = 1 and x = 5. For these values of x, λ = 3 is an eigenvalue of multiplicity 2.
27. If A^2 = I, then A(x + Ax) = Ax + A^2 x = Ax + x = x + Ax; thus y = x + Ax is an eigenvector of A corresponding to λ = 1. Similarly, z = x - Ax is an eigenvector of A corresponding to λ = -1.
28. According to Theorem 4.4.8, the characteristic polynomial of A can be expressed as

    p(λ) = (λ - λ1)^m1 (λ - λ2)^m2 ... (λ - λk)^mk

where λ1, λ2, ..., λk are the distinct eigenvalues of A and m1 + m2 + ... + mk = n. The constant term in this polynomial is p(0). On the other hand, p(0) = det(-A) = (-1)^n det(A).
29. (a) Using Formula (22), the characteristic equation of $A$ is $\lambda^2 - (a+d)\lambda + (ad - bc) = 0$. This is a quadratic equation with discriminant
$$(a+d)^2 - 4(ad - bc) = a^2 + 2ad + d^2 - 4ad + 4bc = (a-d)^2 + 4bc$$
Thus the eigenvalues of $A$ are given by $\lambda = \frac12\left[(a+d) \pm \sqrt{(a-d)^2 + 4bc}\right]$.
(b) If $(a-d)^2 + 4bc > 0$ then, from (a), the characteristic equation has two distinct real roots.
(c) If $(a-d)^2 + 4bc = 0$ then, from (a), there is one real eigenvalue (of multiplicity 2).
(d) If $(a-d)^2 + 4bc < 0$ then, from (a), there are no real eigenvalues.
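As a numerical cross-check of this case analysis, the quadratic formula above can be coded directly and compared against a general-purpose eigenvalue routine. This is a sketch assuming NumPy is available; the function name `eigenvalues_2x2` is our own, not from the text.

```python
import numpy as np

def eigenvalues_2x2(a, b, c, d):
    """Eigenvalues of [[a, b], [c, d]] via the formula of Exercise 29:
    lambda = ((a + d) +/- sqrt((a - d)^2 + 4bc)) / 2.
    Returns a pair of real eigenvalues, or None when the discriminant
    is negative (case (d): no real eigenvalues)."""
    disc = (a - d) ** 2 + 4 * b * c
    if disc < 0:
        return None
    root = np.sqrt(disc)
    return ((a + d + root) / 2, (a + d - root) / 2)

# Compare with numpy's general eigenvalue routine on a sample matrix
A = np.array([[2.0, 1.0], [1.0, 3.0]])
lams = eigenvalues_2x2(*A.ravel())
assert np.allclose(sorted(lams), np.sort(np.linalg.eigvals(A)))
```

A rotation matrix such as $\begin{bmatrix}0 & 1\\ -1 & 0\end{bmatrix}$ has $(a-d)^2 + 4bc = -4 < 0$, and the function correctly reports no real eigenvalues.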
EXERCISE SET 4.4 145
30. If $(a-d)^2 + 4bc > 0$, we have two distinct real eigenvalues $\lambda_1$ and $\lambda_2$. The corresponding eigenvectors are obtained by solving the homogeneous system
$$(\lambda_i - a)x_1 - bx_2 = 0$$
$$-cx_1 + (\lambda_i - d)x_2 = 0$$
Since $\lambda_i$ is an eigenvalue this system is redundant, and (using the first equation) a general solution is given by $x_1 = t$, $x_2 = \frac{\lambda_i - a}{b}\,t$. Finally, setting $t = -b$, we see that $\begin{bmatrix} -b \\ a - \lambda_i \end{bmatrix}$ is an eigenvector corresponding to $\lambda = \lambda_i$.
31. If the characteristic polynomial of $A$ is $p(\lambda) = \lambda^2 + 3\lambda - 4 = (\lambda - 1)(\lambda + 4)$, then the eigenvalues of $A$ are $\lambda_1 = 1$ and $\lambda_2 = -4$.
(a) From Exercise P3 below, $A^{-1}$ has eigenvalues $\lambda_1 = 1$ and $\lambda_2 = -\frac14$.
(b) From (a), together with Theorem 4.4.6, it follows that $A^{-3}$ has eigenvalues $\lambda_1 = (1)^3 = 1$ and $\lambda_2 = \left(-\frac14\right)^3 = -\frac{1}{64}$.
(c) From P4 below, $A - 4I$ has eigenvalues $\lambda_1 = 1 - 4 = -3$ and $\lambda_2 = -4 - 4 = -8$.
(d) From P5 below, $5A$ has eigenvalues $\lambda_1 = 5$ and $\lambda_2 = -20$.
(e) From P2(a) below, the eigenvalues of $4A^T + 2I = (4A + 2I)^T$ are the same as those of $4A + 2I$; namely $\lambda_1 = 4(1) + 2 = 6$ and $\lambda_2 = 4(-4) + 2 = -14$.
32. If $A\mathbf{x} = \lambda\mathbf{x}$, where $\mathbf{x} \ne \mathbf{0}$, then $(A\mathbf{x})\cdot\mathbf{x} = (\lambda\mathbf{x})\cdot\mathbf{x} = \lambda(\mathbf{x}\cdot\mathbf{x}) = \lambda\lVert\mathbf{x}\rVert^2$, and so $\lambda = \dfrac{(A\mathbf{x})\cdot\mathbf{x}}{\lVert\mathbf{x}\rVert^2}$.
33. (a) The characteristic polynomial of the matrix $C$ is
$$p(\lambda) = \det(\lambda I - C) = \begin{vmatrix} \lambda & 0 & 0 & \cdots & 0 & c_0\\ -1 & \lambda & 0 & \cdots & 0 & c_1\\ 0 & -1 & \lambda & \cdots & 0 & c_2\\ \vdots & & & \ddots & & \vdots\\ 0 & 0 & 0 & \cdots & -1 & \lambda + c_{n-1}\end{vmatrix}$$
Add $\lambda$ times the second row to the first row, then expand by cofactors along the first column:
$$p(\lambda) = \begin{vmatrix} 0 & \lambda^2 & 0 & \cdots & 0 & c_0 + c_1\lambda\\ -1 & \lambda & 0 & \cdots & 0 & c_1\\ 0 & -1 & \lambda & \cdots & 0 & c_2\\ \vdots & & & \ddots & & \vdots\\ 0 & 0 & 0 & \cdots & -1 & \lambda + c_{n-1}\end{vmatrix} = \begin{vmatrix} \lambda^2 & 0 & \cdots & 0 & c_0 + c_1\lambda\\ -1 & \lambda & \cdots & 0 & c_2\\ \vdots & & \ddots & & \vdots\\ 0 & 0 & \cdots & -1 & \lambda + c_{n-1}\end{vmatrix}$$
Add $\lambda^2$ times the second row to the first row, then expand by cofactors along the first column. Continuing in this fashion for $n - 2$ steps, we obtain
$$p(\lambda) = \begin{vmatrix} \lambda^{n-1} & c_0 + c_1\lambda + c_2\lambda^2 + \cdots + c_{n-2}\lambda^{n-2}\\ -1 & \lambda + c_{n-1}\end{vmatrix} = c_0 + c_1\lambda + c_2\lambda^2 + \cdots + c_{n-2}\lambda^{n-2} + c_{n-1}\lambda^{n-1} + \lambda^n$$
(b) The matrix
$$C = \begin{bmatrix} 0 & 0 & 0 & -2\\ 1 & 0 & 0 & 3\\ 0 & 1 & 0 & -1\\ 0 & 0 & 1 & 5\end{bmatrix}$$
has $p(\lambda) = 2 - 3\lambda + \lambda^2 - 5\lambda^3 + \lambda^4$ as its characteristic polynomial.
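The result in part (a) can be spot-checked numerically: build the companion matrix from the coefficients $c_0, \ldots, c_{n-1}$ and compare its characteristic polynomial with the expected one. This is a sketch assuming NumPy; `companion` is our own helper, not a library routine.

```python
import numpy as np

def companion(coeffs):
    """Companion matrix C of p(lambda) = c0 + c1*lambda + ... + c_{n-1}*lambda^{n-1} + lambda^n,
    in the form used in Exercise 33: ones on the subdiagonal, -c_i down the last column."""
    c = np.asarray(coeffs, dtype=float)   # [c0, c1, ..., c_{n-1}]
    n = len(c)
    C = np.zeros((n, n))
    C[1:, :-1] = np.eye(n - 1)            # subdiagonal of ones
    C[:, -1] = -c                         # last column
    return C

# Part (b): p(lambda) = 2 - 3*lambda + lambda^2 - 5*lambda^3 + lambda^4
C = companion([2, -3, 1, -5])
# np.poly returns the characteristic polynomial's coefficients, highest degree first
assert np.allclose(np.poly(C), [1, -5, 1, -3, 2])
```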
DISCUSSION AND DISCOVERY
D1. (a) The characteristic polynomial $p(\lambda)$ has degree 6; thus $A$ is a $6\times 6$ matrix.
(b) Yes. From Theorem 4.4.12, we have $\det(A) = (1)(3)^2(4)^3 = 576 \ne 0$; thus $A$ is invertible.
D2. If $A$ is a square matrix all of whose entries are the same, then $\det(A) = 0$; thus $A\mathbf{x} = \mathbf{0}$ has nontrivial solutions and $\lambda = 0$ is an eigenvalue of $A$.
D3. Using Formula (22), the characteristic polynomial of $A$ is $p(\lambda) = \lambda^2 - 4\lambda + 4 = (\lambda - 2)^2$. Thus $\lambda = 2$ is the only eigenvalue of $A$ (it has multiplicity 2).
D4. The eigenvalues of $A$ (with multiplicity) are $3, 3$ and $-2, -2, -2$. Thus, from Theorem 4.4.12, we have $\det(A) = (3)(3)(-2)(-2)(-2) = -72$ and $\mathrm{tr}(A) = 3 + 3 - 2 - 2 - 2 = 0$.
D5. The matrix $A = \begin{bmatrix} a & b\\ c & d\end{bmatrix}$ satisfies the condition $\mathrm{tr}(A) = \det(A)$ if and only if $a + d = ad - bc$. If $d = 1$ then this equation is satisfied if and only if $bc = -1$, e.g., $A = \begin{bmatrix} 1 & 1\\ -1 & 1\end{bmatrix}$. If $d \ne 1$, then the equation is satisfied if and only if $a = \dfrac{d + bc}{d - 1}$, e.g., $A = \begin{bmatrix} 1 & -1\\ 1 & 2\end{bmatrix}$.
D6. The characteristic polynomial of $A$ factors as $p(\lambda) = (\lambda - 1)(\lambda + 2)^3$; thus the eigenvalues of $A$ are $\lambda = 1$ and $\lambda = -2$. It follows from Theorem 4.4.6 that the eigenvalues of $A^2$ are $\lambda = (1)^2 = 1$ and $\lambda = (-2)^2 = 4$.
D7. (a) False. For example, $\mathbf{x} = \mathbf{0}$ satisfies this condition for any $A$ and $\lambda$. The correct statement is that if $A\mathbf{x} = \lambda\mathbf{x}$ for some nonzero vector $\mathbf{x}$, then $\mathbf{x}$ is an eigenvector of $A$.
(b) True. If $\lambda$ is an eigenvalue of $A$, then $\lambda^2$ is an eigenvalue of $A^2$; thus $(\lambda^2 I - A^2)\mathbf{x} = \mathbf{0}$ has nontrivial solutions.
(c) False. If $\lambda = 0$ is an eigenvalue of $A$, then the system $A\mathbf{x} = \mathbf{0}$ has nontrivial solutions; thus $A$ is not invertible and so the row vectors and column vectors of $A$ are linearly dependent. [The statement becomes true if "independent" is replaced by "dependent".]
(d) False. For example, $A = \begin{bmatrix} 1 & 1\\ 0 & 2\end{bmatrix}$ is not symmetric but has the real eigenvalues $\lambda = 1$ and $\lambda = 2$. [But it is true that a symmetric matrix has real eigenvalues.]
D8. (a) False. For example, the reduced row echelon form of $A = \begin{bmatrix} 1 & 0\\ 1 & 2\end{bmatrix}$ is $I = \begin{bmatrix} 1 & 0\\ 0 & 1\end{bmatrix}$, but the eigenvalues of $A$ are 1 and 2 whereas the only eigenvalue of $I$ is 1.
(b) True. We have $A(\mathbf{x}_1 + \mathbf{x}_2) = \lambda_1\mathbf{x}_1 + \lambda_2\mathbf{x}_2$ and, if $\lambda_1 \ne \lambda_2$, it can be shown (since $\mathbf{x}_1$ and $\mathbf{x}_2$ must be linearly independent) that $\lambda_1\mathbf{x}_1 + \lambda_2\mathbf{x}_2 \ne \lambda(\mathbf{x}_1 + \mathbf{x}_2)$ for any value of $\lambda$.
(c) True. The characteristic polynomial of $A$ is a cubic polynomial, and every cubic polynomial has at least one real root.
(d) True. If $p(\lambda) = \lambda^n + 1$, then $\det(A) = (-1)^n p(0) = \pm 1 \ne 0$; thus $A$ is invertible.
WORKING WITH PROOFS
P1. If $A = \begin{bmatrix} a & b\\ c & d\end{bmatrix}$, then $A^2 = \begin{bmatrix} a^2 + bc & ab + bd\\ ca + dc & cb + d^2\end{bmatrix}$ and $\mathrm{tr}(A)A = (a+d)\begin{bmatrix} a & b\\ c & d\end{bmatrix} = \begin{bmatrix} a^2 + da & ab + db\\ ac + dc & ad + d^2\end{bmatrix}$; thus
$$A^2 - \mathrm{tr}(A)A = \begin{bmatrix} bc - ad & 0\\ 0 & cb - ad\end{bmatrix} = \begin{bmatrix} -\det(A) & 0\\ 0 & -\det(A)\end{bmatrix} = -\det(A)\,I$$
and so $p(A) = A^2 - \mathrm{tr}(A)A + \det(A)I = 0$.
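The identity proved in P1 (the $2\times 2$ case of the Cayley-Hamilton theorem) can be spot-checked numerically on random matrices. A sketch assuming NumPy:

```python
import numpy as np

# Numerical spot-check of P1: for any 2x2 matrix A,
# A^2 - tr(A) A + det(A) I should be the zero matrix.
rng = np.random.default_rng(0)
for _ in range(5):
    A = rng.integers(-9, 10, size=(2, 2)).astype(float)
    p_of_A = A @ A - np.trace(A) * A + np.linalg.det(A) * np.eye(2)
    assert np.allclose(p_of_A, 0)
```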
P2. (a) Using previously established properties, we have
$$\det(\lambda I - A^T) = \det\!\left((\lambda I - A)^T\right) = \det(\lambda I - A)$$
Thus $A$ and $A^T$ have the same characteristic polynomial.
(b) The eigenvalues are 2 and 3 in each case. The eigenspace of $A$ corresponding to $\lambda = 2$ is obtained by solving $(2I - A)\mathbf{x} = \mathbf{0}$, and the eigenspace of $A^T$ corresponding to $\lambda = 2$ is obtained by solving $(2I - A^T)\mathbf{x} = \mathbf{0}$. The eigenspace of $A$ corresponds to the line $y = 2x$, whereas the eigenspace of $A^T$ corresponds to $y = 0$. Similarly, for $\lambda = 3$, the eigenspace of $A$ corresponds to $x = 0$, whereas the eigenspace of $A^T$ corresponds to a different line; thus $A$ and $A^T$ need not have the same eigenspaces.
P3. Suppose that $A\mathbf{x} = \lambda\mathbf{x}$ where $\mathbf{x} \ne \mathbf{0}$ and $A$ is invertible. Then $\mathbf{x} = A^{-1}A\mathbf{x} = A^{-1}\lambda\mathbf{x} = \lambda A^{-1}\mathbf{x}$ and, since $\lambda \ne 0$ (because $A$ is invertible), it follows that $A^{-1}\mathbf{x} = \frac{1}{\lambda}\mathbf{x}$. Thus $\frac{1}{\lambda}$ is an eigenvalue of $A^{-1}$ and $\mathbf{x}$ is a corresponding eigenvector.
P4. Suppose that $A\mathbf{x} = \lambda\mathbf{x}$ where $\mathbf{x} \ne \mathbf{0}$. Then $(A - sI)\mathbf{x} = A\mathbf{x} - sI\mathbf{x} = \lambda\mathbf{x} - s\mathbf{x} = (\lambda - s)\mathbf{x}$. Thus $\lambda - s$ is an eigenvalue of $A - sI$ and $\mathbf{x}$ is a corresponding eigenvector.
P5. Suppose that $A\mathbf{x} = \lambda\mathbf{x}$ where $\mathbf{x} \ne \mathbf{0}$. Then $(sA)\mathbf{x} = s(A\mathbf{x}) = s(\lambda\mathbf{x}) = (s\lambda)\mathbf{x}$. Thus $s\lambda$ is an eigenvalue of $sA$ and $\mathbf{x}$ is a corresponding eigenvector.
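P3-P5 can be illustrated numerically on a sample matrix. This is a sketch assuming NumPy; the particular matrix and shift below are ours, chosen only for illustration.

```python
import numpy as np

# Spot-check of P3-P5: if A has eigenvalues lam, then A^{-1} has 1/lam,
# A - sI has lam - s, and sA has s*lam.
A = np.array([[2.0, 1.0], [0.0, 3.0]])          # triangular: eigenvalues 2 and 3
lam = np.sort(np.linalg.eigvals(A))
s = 4.0
assert np.allclose(np.sort(np.linalg.eigvals(np.linalg.inv(A))), np.sort(1 / lam))
assert np.allclose(np.sort(np.linalg.eigvals(A - s * np.eye(2))), np.sort(lam - s))
assert np.allclose(np.sort(np.linalg.eigvals(s * A)), np.sort(s * lam))
```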
P6. If the matrix $A = \begin{bmatrix} a & b\\ c & d\end{bmatrix}$ is symmetric, then $c = b$ and so $(a-d)^2 + 4bc = (a-d)^2 + 4b^2$.
In the case that $A$ has a repeated eigenvalue, we must have $(a-d)^2 + 4b^2 = 0$ and so $a = d$ and $b = 0$. Thus the only symmetric $2\times 2$ matrices with repeated eigenvalues are those of the form $A = aI$. Such a matrix has $\lambda = a$ as its only eigenvalue, and the corresponding eigenspace is $R^2$. This proves part (a) of Theorem 4.4.11.
If $(a-d)^2 + 4b^2 > 0$, then $A$ has two distinct real eigenvalues $\lambda_1$ and $\lambda_2$, with corresponding eigenvectors $\mathbf{x}_1$ and $\mathbf{x}_2$, given by:
$$\lambda_1 = \tfrac12\left[(a+d) + \sqrt{(a-d)^2 + 4b^2}\right], \qquad \lambda_2 = \tfrac12\left[(a+d) - \sqrt{(a-d)^2 + 4b^2}\right]$$
The eigenspaces correspond to the lines $y = m_1 x$ and $y = m_2 x$ where $m_j = \frac{\lambda_j - a}{b}$, $j = 1, 2$. Since
$$(a - \lambda_1)(a - \lambda_2) = \left(\tfrac12\!\left[(a-d) - \sqrt{(a-d)^2 + 4b^2}\right]\right)\left(\tfrac12\!\left[(a-d) + \sqrt{(a-d)^2 + 4b^2}\right]\right) = \tfrac14\!\left[(a-d)^2 - (a-d)^2 - 4b^2\right] = -b^2$$
we have $m_1 m_2 = -1$; thus the eigenspaces correspond to perpendicular lines. This proves part (b) of Theorem 4.4.11.
Note. It is not possible to have $(a-d)^2 + 4b^2 < 0$; thus the eigenvalues of a $2\times 2$ symmetric matrix must necessarily be real.
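The perpendicularity of the eigenspaces proved in P6 can be checked numerically on a sample symmetric matrix. A sketch assuming NumPy; the matrix below is ours.

```python
import numpy as np

# Spot-check of P6/Theorem 4.4.11(b): for a symmetric 2x2 matrix with distinct
# eigenvalues, the eigenvectors are perpendicular.
A = np.array([[2.0, 1.0], [1.0, 5.0]])     # symmetric, (a-d)^2 + 4b^2 = 13 > 0
vals, vecs = np.linalg.eigh(A)
assert vals[0] != vals[1]
assert abs(np.dot(vecs[:, 0], vecs[:, 1])) < 1e-12
# Slopes m1, m2 of the two eigenspace lines satisfy m1*m2 = -1
m = vecs[1, :] / vecs[0, :]
assert np.isclose(m[0] * m[1], -1.0)
```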
P7. Suppose that $A\mathbf{x} = \lambda\mathbf{x}$ and $B\mathbf{x} = \mathbf{x}$. Then we have $AB\mathbf{x} = A(B\mathbf{x}) = A\mathbf{x} = \lambda\mathbf{x}$ and $BA\mathbf{x} = B(A\mathbf{x}) = B(\lambda\mathbf{x}) = \lambda B\mathbf{x} = \lambda\mathbf{x}$. Thus $\lambda$ is an eigenvalue of both $AB$ and $BA$, and $\mathbf{x}$ is a corresponding eigenvector.
CHAPTER 6
Linear Transformations
EXERCISE SET 6.1
1. (a) $T_A: R^2 \to R^3$; domain $= R^2$, codomain $= R^3$
(b) $T_A: R^3 \to R^2$; domain $= R^3$, codomain $= R^2$
(c) $T_A: R^3 \to R^3$; domain $= R^3$, codomain $= R^3$
3. The domain of $T$ is $R^2$, the codomain of $T$ is $R^3$, and $T(1, -2) = (-1, 2, 3)$.
4. The domain of $T$ is $R^3$, the codomain of $T$ is $R^2$, and $T(0, -1, 4) = (-2, 2)$.
6. (a)
[
-2
T(x) =
[
-2xl + :!.:7 + 4XJ]
= + 5x:z + 7xa
6x1 -
(b) T(x) =
7. (a) We have $T_A(\mathbf{x}) = \mathbf{b}$ if and only if $\mathbf{x}$ is a solution of the linear system
$$\begin{bmatrix} 1 & 2 & 0\\ 0 & -1 & 3\\ 2 & 5 & -3\end{bmatrix}\begin{bmatrix} x_1\\ x_2\\ x_3\end{bmatrix} = \begin{bmatrix} 1\\ -1\\ 3\end{bmatrix}$$
The reduced row echelon form of the augmented matrix of the above system is
$$\left[\begin{array}{ccc|c} 1 & 0 & 6 & -1\\ 0 & 1 & -3 & 1\\ 0 & 0 & 0 & 0\end{array}\right]$$
and it follows that the system has the general solution $x_1 = -1 - 6t$, $x_2 = 1 + 3t$, $x_3 = t$. Thus any vector of the form
$$\mathbf{x} = \begin{bmatrix} -1\\ 1\\ 0\end{bmatrix} + t\begin{bmatrix} -6\\ 3\\ 1\end{bmatrix}$$
will have the property that $T_A(\mathbf{x}) = \mathbf{b}$.
(b) We have $T_A(\mathbf{x}) = \mathbf{b}$ if and only if $\mathbf{x}$ is a solution of the corresponding linear system. The reduced row echelon form of the augmented matrix of this system has bottom row $[\,0\;\;0\;\;0 \mid 1\,]$, which corresponds to the equation $0 = 1$; thus the system is inconsistent, and there is no vector $\mathbf{x}$ in $R^3$ for which $T_A(\mathbf{x}) = \mathbf{b}$.
8. (a) We have $T_A(\mathbf{x}) = \mathbf{b}$ if and only if $\mathbf{x}$ is a solution of the linear system $A\mathbf{x} = \mathbf{b}$. The reduced row echelon form of the augmented matrix of the system is
$$\left[\begin{array}{cccc|c} 1 & 0 & 2 & -1 & 2\\ 0 & 1 & 1 & 2 & 3\\ 0 & 0 & 0 & 0 & 0\end{array}\right]$$
and it follows that the system has general solution $x_1 = 2 - 2s + t$, $x_2 = 3 - s - 2t$, $x_3 = s$, $x_4 = t$. Thus any vector of the form
$$\mathbf{x} = \begin{bmatrix} 2\\ 3\\ 0\\ 0\end{bmatrix} + s\begin{bmatrix} -2\\ -1\\ 1\\ 0\end{bmatrix} + t\begin{bmatrix} 1\\ -2\\ 0\\ 1\end{bmatrix}$$
will have the property that $T_A(\mathbf{x}) = \mathbf{b}$.
(b) We have $T_A(\mathbf{x}) = \mathbf{b}$ if and only if $\mathbf{x}$ is a solution of the corresponding linear system. The reduced row echelon form of the augmented matrix of this system has bottom row $[\,0\;\;0\;\;0\;\;0 \mid 1\,]$. Thus the system is inconsistent; there is no vector $\mathbf{x}$ in $R^4$ for which $T_A(\mathbf{x}) = \mathbf{b}$.
9. (a), (c), and (d) are linear transformations. (b) is not linear; it is neither homogeneous nor additive.
10. (a) and (c) are linear transformations. (b) and (d) are not linear; neither homogeneous nor additive.
11. (a) and (c) are linear transformations. (b) is not linear; neither homogeneous nor additive.
12. (a) and (c) are linear transformations. (b) is not linear; neither homogeneous nor additive.
13. This transformation can be written in matrix form as $\mathbf{w} = A\mathbf{x}$; thus it is a linear transformation.
14. The domain is $R^2$ and the codomain is $R^3$.
15. This transformation can be written in matrix form as $\mathbf{w} = A\mathbf{x}$; thus it is a linear transformation.
16. The domain is $R^4$ and the codomain is $R^2$.
17. $[T] = [\,T(\mathbf{e}_1)\;\;T(\mathbf{e}_2)\,]$.  18. $[T] = [\,T(\mathbf{e}_1)\;\;T(\mathbf{e}_2)\;\;T(\mathbf{e}_3)\,]$.
19. (a) We have $T(1,0) = (-1,0)$ and $T(0,1) = (1,1)$; thus the standard matrix is $[T] = \begin{bmatrix} -1 & 1\\ 0 & 1\end{bmatrix}$. Using the matrix, we have $T\!\left(\begin{bmatrix} -1\\ 4\end{bmatrix}\right) = \begin{bmatrix} -1 & 1\\ 0 & 1\end{bmatrix}\begin{bmatrix} -1\\ 4\end{bmatrix} = \begin{bmatrix} 5\\ 4\end{bmatrix}$. This agrees with direct calculation using the given formula: $T(-1, 4) = (-(-1) + 4,\; 4) = (5, 4)$.
(b) We have $T(1,0,0) = (2,0,0)$, $T(0,1,0) = (-1,1,0)$, and $T(0,0,1) = (1,1,0)$; thus the standard matrix is $[T] = \begin{bmatrix} 2 & -1 & 1\\ 0 & 1 & 1\\ 0 & 0 & 0\end{bmatrix}$. Using the matrix we have $T\!\left(\begin{bmatrix} 2\\ 1\\ -3\end{bmatrix}\right) = \begin{bmatrix} 2 & -1 & 1\\ 0 & 1 & 1\\ 0 & 0 & 0\end{bmatrix}\begin{bmatrix} 2\\ 1\\ -3\end{bmatrix} = \begin{bmatrix} 0\\ -2\\ 0\end{bmatrix}$. This agrees with direct calculation using the given formula:
$$T(2, 1, -3) = (2(2) - (1) + (-3),\; 1 + (-3),\; 0) = (0, -2, 0)$$
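The column-by-column construction $[T] = [\,T(\mathbf{e}_1)\;\cdots\;T(\mathbf{e}_n)\,]$ used above can be automated. The following sketch (assuming NumPy; `standard_matrix` is our own helper) rebuilds the matrix of Exercise 19(b):

```python
import numpy as np

def standard_matrix(T, n):
    """Standard matrix of a linear transformation T: R^n -> R^m,
    built column by column as [T(e1) | T(e2) | ... | T(en)]."""
    return np.column_stack([T(e) for e in np.eye(n)])

# Exercise 19(b): T(x, y, z) = (2x - y + z, y + z, 0)
T = lambda v: np.array([2*v[0] - v[1] + v[2], v[1] + v[2], 0.0])
M = standard_matrix(T, 3)
assert np.allclose(M, [[2, -1, 1], [0, 1, 1], [0, 0, 0]])
assert np.allclose(M @ np.array([2.0, 1.0, -3.0]), [0, -2, 0])
```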
20. (a) $[T]$ is read off from the given formula, column by column. (b) $[T] = \begin{bmatrix} 3 & 0\\ 0 & -5\end{bmatrix}$.
21. (a) The standard matrix of the transformation is $[T] = \begin{bmatrix} 3 & 5 & -1\\ 4 & -1 & 1\\ 3 & 2 & -1\end{bmatrix}$.
(b) If $\mathbf{x} = (-1, 2, 4)$ then, using the equations, we have
$$T(\mathbf{x}) = (3(-1) + 5(2) - (4),\; 4(-1) - (2) + (4),\; 3(-1) + 2(2) - (4)) = (3, -2, -3)$$
On the other hand, using the matrix, we have $[T]\mathbf{x} = (3, -2, -3)$ as well.
22. (a) $[T] = \begin{bmatrix} 2 & -3 & 1\\ 3 & 5 & -1\end{bmatrix}$.
(b) $T\!\left(\begin{bmatrix} -1\\ 2\\ 4\end{bmatrix}\right) = \begin{bmatrix} 2 & -3 & 1\\ 3 & 5 & -1\end{bmatrix}\begin{bmatrix} -1\\ 2\\ 4\end{bmatrix} = \begin{bmatrix} -4\\ 3\end{bmatrix}$
23.
(a)
=
(b)
r o -1) [-2] = [-1]
l-1 0 1 2
(c)
=
(d)
[ = GJ
24.
(a)
=
(b)
=
(c)
=
(d)
[ - = [ -
25. (a)
[to -t-J n = [- t>J [-070l
(b)
r:J = r-:J
72 72 4 72 4.950
(c)
[- - = [ =!J
(d) [>f !] [3]- [ >f'+]- [4598]
-! :/,f 4 - + 2VJ - 1.964
'26.
(a)
[ -1] [] [2J3 - [1 946]
4 3 = 2 + 4.598
{b )
=
. .
(c)

(d ) [ l [ 2 + ] - [ 4598]
-:1/- 4 3 - -2/3 + -1.964
27. (a) [ -! [ -l + [19641
! 4 = 3'{3 + 2 4.598.
(b)
[ l 4] rJ [ + JJ] [2482]
4 4 = + 3 4.299
28. (a)

[] [ -2- l [ -4598]
3 = -2v'3 + - l.9G4
[ 1
-4] [] [ 1- l [ -0.2991 (b)
4
':i
3 = 0.518J
-
29. The matrix $A = \frac{1}{\sqrt2}\begin{bmatrix} -1 & -1\\ 1 & -1\end{bmatrix}$ corresponds to $R_\theta = \begin{bmatrix} \cos\theta & -\sin\theta\\ \sin\theta & \cos\theta\end{bmatrix}$ where $\theta = \frac{3\pi}{4}$ ($135^\circ$).
30. The matrix $A = \frac{1}{\sqrt2}\begin{bmatrix} 1 & 1\\ 1 & -1\end{bmatrix}$ corresponds to $H_\theta = \begin{bmatrix} \cos 2\theta & \sin 2\theta\\ \sin 2\theta & -\cos 2\theta\end{bmatrix}$ where $\theta = \frac{\pi}{8}$ ($22.5^\circ$).
31. (a) $H_L = \begin{bmatrix} \cos 2\theta & \sin 2\theta\\ \sin 2\theta & -\cos 2\theta\end{bmatrix} = \begin{bmatrix} \cos^2\theta - \sin^2\theta & 2\sin\theta\cos\theta\\ 2\sin\theta\cos\theta & \sin^2\theta - \cos^2\theta\end{bmatrix}$, where $\cos\theta = \frac{1}{\sqrt{1+m^2}}$ and $\sin\theta = \frac{m}{\sqrt{1+m^2}}$. Thus
$$H_L = \frac{1}{1+m^2}\begin{bmatrix} 1 - m^2 & 2m\\ 2m & m^2 - 1\end{bmatrix}$$
(b) $P_L = \begin{bmatrix} \cos^2\theta & \sin\theta\cos\theta\\ \sin\theta\cos\theta & \sin^2\theta\end{bmatrix} = \dfrac{1}{1+m^2}\begin{bmatrix} 1 & m\\ m & m^2\end{bmatrix}$
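The formulas for $H_L$ and $P_L$ can be packaged as small helper functions and sanity-checked against the defining properties of a reflection and a projection. A sketch assuming NumPy; the function names are ours.

```python
import numpy as np

def reflection_about_line(m):
    """Standard matrix H_L of reflection about the line y = m x (Exercise 31(a))."""
    return np.array([[1 - m**2, 2*m], [2*m, m**2 - 1]]) / (1 + m**2)

def projection_onto_line(m):
    """Standard matrix P_L of orthogonal projection onto y = m x (Exercise 31(b))."""
    return np.array([[1, m], [m, m**2]]) / (1 + m**2)

H, P = reflection_about_line(2), projection_onto_line(2)
assert np.allclose(H @ H, np.eye(2))              # reflecting twice is the identity
assert np.allclose(P @ P, P)                      # projecting twice changes nothing
assert np.allclose(H @ np.array([1, 2]), [1, 2])  # points on y = 2x are fixed
```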
32. (a) We have $m = 2$; thus $H = H_L = \frac15\begin{bmatrix} -3 & 4\\ 4 & 3\end{bmatrix}$, and the reflection of a vector $\mathbf{x}$ about the line $y = 2x$ is given by $H\mathbf{x}$.
(b) We have $m = 2$; thus $P = P_L = \frac15\begin{bmatrix} 1 & 2\\ 2 & 4\end{bmatrix}$, and the projection of $\mathbf{x}$ onto the line $y = 2x$ is given by $P\mathbf{x}$.
33. (a) We have $m = 3$; thus $H = H_L = \frac{1}{10}\begin{bmatrix} -8 & 6\\ 6 & 8\end{bmatrix}$, and the reflection of a vector $\mathbf{x}$ about the line $y = 3x$ is given by $H\mathbf{x}$.
(b) We have $m = 3$; thus $P = P_L = \frac{1}{10}\begin{bmatrix} 1 & 3\\ 3 & 9\end{bmatrix}$, and the projection of $\mathbf{x}$ onto the line $y = 3x$ is given by $P\mathbf{x}$.
34. If $T$ is defined by the formula $T(x,y) = (0,0)$, then $T(cx, cy) = (0,0) = c(0,0) = cT(x,y)$ and $T(x_1 + x_2,\, y_1 + y_2) = (0,0) = (0,0) + (0,0) = T(x_1, y_1) + T(x_2, y_2)$; thus $T$ is linear.
If $T$ is defined by $T(x,y) = (1,1)$, then $T(2x, 2y) = (1,1) \ne 2(1,1) = 2T(x,y)$ and $T(x_1 + x_2,\, y_1 + y_2) = (1,1) \ne (1,1) + (1,1) = T(x_1, y_1) + T(x_2, y_2)$; thus $T$ is neither homogeneous nor additive.
36. The given equations can be written in matrix form as $[T]\begin{bmatrix} x\\ y\end{bmatrix} = \begin{bmatrix} s\\ t\end{bmatrix}$, and so
$$\begin{bmatrix} x\\ y\end{bmatrix} = [T]^{-1}\begin{bmatrix} s\\ t\end{bmatrix} = \begin{bmatrix} -s + 4t\\ 3s - t\end{bmatrix}$$
Thus the image of the line $x + y = 1$ corresponds to $(-s + 4t) + (3s - t) = 1$, i.e. to the line $2s + 3t = 1$.
37. (a)-(c) The standard matrices $[T]$ are read off from the action of the given projections on the standard unit vectors.
38. (a)-(d) [sketches of the images of the unit square under the given transformations]
39. $T(x,y) = (-x, 0)$; thus $[T] = \begin{bmatrix} -1 & 0\\ 0 & 0\end{bmatrix}$.
40. $T(x,y) = (y, -x)$; thus $[T] = \begin{bmatrix} 0 & 1\\ -1 & 0\end{bmatrix}$.
DISCUSSION AND DISCOVERY
D1. (a) False. For example, $T(x,y) = (x^2, y^2)$ satisfies $T(\mathbf{0}) = \mathbf{0}$ but is not linear.
(b) True. Such a transformation is both homogeneous and additive.
(c) True. This is the transformation with standard matrix $[T] = -I$.
(d) True. The zero transformation $T(x,y) = (0,0)$ is the only linear transformation with this property.
(e) False. Such a transformation cannot be linear since $T(\mathbf{0}) = \mathbf{v}_0 \ne \mathbf{0}$.
D2. The eigenvalues of $A$ are $\lambda = 1$ and $\lambda = -1$; each corresponding eigenspace is a line through the origin, consisting of all vectors $\mathbf{x} = t\mathbf{v}$ for an associated eigenvector $\mathbf{v}$, where $-\infty < t < \infty$.
D3. From familiar trigonometric identities, we have $A = \begin{bmatrix} \cos 2\theta & -\sin 2\theta\\ \sin 2\theta & \cos 2\theta\end{bmatrix} = R_{2\theta}$. Thus multiplication by $A$ corresponds to rotation about the origin through the angle $2\theta$.
D4. If $A = R_\theta = \begin{bmatrix} \cos\theta & -\sin\theta\\ \sin\theta & \cos\theta\end{bmatrix}$, then $A^T = \begin{bmatrix} \cos\theta & \sin\theta\\ -\sin\theta & \cos\theta\end{bmatrix} = \begin{bmatrix} \cos(-\theta) & -\sin(-\theta)\\ \sin(-\theta) & \cos(-\theta)\end{bmatrix} = R_{-\theta}$. Thus multiplication by $A^T$ corresponds to rotation through the angle $-\theta$.
D5. Since $T(\mathbf{0}) = \mathbf{x}_0 \ne \mathbf{0}$, this transformation is not linear. Geometrically, it corresponds to a rotation followed by a translation.
D6. If $b = 0$, then $f$ is both additive and homogeneous. If $b \ne 0$, then $f$ is neither additive nor homogeneous.
D7. Since $T$ is linear, we have $T(\mathbf{x}_0 + t\mathbf{v}) = T(\mathbf{x}_0) + tT(\mathbf{v})$. Thus, if $T(\mathbf{v}) \ne \mathbf{0}$, the image of the line $\mathbf{x} = \mathbf{x}_0 + t\mathbf{v}$ is the line $\mathbf{y} = \mathbf{y}_0 + t\mathbf{w}$ where $\mathbf{y}_0 = T(\mathbf{x}_0)$ and $\mathbf{w} = T(\mathbf{v})$. If $T(\mathbf{v}) = \mathbf{0}$, then the image of $\mathbf{x} = \mathbf{x}_0 + t\mathbf{v}$ is the point $\mathbf{y}_0 = T(\mathbf{x}_0)$.
EXERCISE SET 6.2
1.-4. In each case a direct computation shows that $A^TA = I$; thus $A$ is orthogonal and $A^{-1} = A^T$ (the transpose of the given matrix).
5. (a) $A^TA = I$; thus $A$ is orthogonal. We have $\det(A) = 1$, and $A = R_\theta$ where $\theta = \frac{3\pi}{4}$. Thus multiplication by $A$ corresponds to counterclockwise rotation about the origin through the angle $\frac{3\pi}{4}$.
(b) $A^TA = I$; thus $A$ is orthogonal. We have $\det(A) = -1$, and so $A = H_\theta$ for the appropriate angle $\theta$. Thus multiplication by $A$ corresponds to reflection about the line through the origin making an angle $\theta$ with the positive $x$-axis.
6. (a) $A^TA = I$; thus $A$ is orthogonal. We have $\det(A) = 1$, and $A = \begin{bmatrix} \frac12 & -\frac{\sqrt3}{2}\\ \frac{\sqrt3}{2} & \frac12\end{bmatrix} = R_\theta$ where $\theta = \frac{\pi}{3}$.
(b) $A^TA = I$; thus $A$ is orthogonal. We have $\det(A) = -1$, and so $A = \begin{bmatrix} \frac12 & \frac{\sqrt3}{2}\\ \frac{\sqrt3}{2} & -\frac12\end{bmatrix} = H_\theta$ where $\theta = \frac{\pi}{6}$.
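The determinant-based classification used in Exercises 5 and 6 can be automated for any $2\times 2$ orthogonal matrix. A sketch assuming NumPy; `classify_orthogonal_2x2` is our own helper.

```python
import numpy as np

def classify_orthogonal_2x2(A):
    """Classify a 2x2 orthogonal matrix: det = +1 means a rotation R_theta,
    det = -1 means a reflection H_theta about a line through the origin.
    Returns the kind and theta (rotation angle, or angle of the mirror line)."""
    assert np.allclose(A.T @ A, np.eye(2))          # orthogonality check
    if np.isclose(np.linalg.det(A), 1.0):
        return "rotation", np.arctan2(A[1, 0], A[0, 0])
    # For H_theta, entries are cos(2*theta) and sin(2*theta)
    return "reflection", np.arctan2(A[1, 0], A[0, 0]) / 2

kind, theta = classify_orthogonal_2x2(np.array([[0.0, -1.0], [1.0, 0.0]]))
assert kind == "rotation" and np.isclose(theta, np.pi / 2)
```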
1. (a)
A = [t ;]
(b)
A = [!
(c)
A =
(d)

8. (a)
A =
(b)
A =
(c)
n
(d) A= [I
2
9. (a) Expansion in the $x$-direction. (b) Contraction. (c) Shear in the $x$-direction with factor 4. (d) Shear in the $y$-direction with factor $-4$.
10. (a) Compression in the $y$-direction. (b) Dilation with factor 8. (c) Shear in the $x$-direction with factor $-3$. (d) Shear in the $y$-direction with factor 3.
11.-14. In each case the standard matrix is obtained by computing the action of $T$ on the standard unit vectors: $[T] = [\,T\mathbf{e}_1\;\;T\mathbf{e}_2\;\cdots\,]$.
15.-18. [sketches of the images of the unit square under the given transformations]
19. (a)


9 0 - 1 s -3
nJ nl
20. (a)
21. (a) [TJ(: - '
.. 0 1 0
[
1 I) (01 ,1]
22. (o) [T] _, o o
0 - 1
23. (a) tnt [:
(b) IRI =
[


(c) !RI [ f
24. (a) !Ill = [:
[
.iJ
(b) [Rj = _;
[
I
7z
(c) [RJ =:
(b)
{b)
-! Hl
[
1 0 OJ r-2
1
r-2
1
000 1 = 0
0 0 1 3 3
(b)
(b) [T) = [
-1 0 0
[
0 0 -1]
(b) [Tj = o 1 o
L 0 0
173
(c) !11.
(c) n ! m
(c) ! nJ m
(c) [T) =
0 \) 1
(c) IT]= [!
25. The matrix $A$ is a rotation matrix since it is orthogonal and $\det(A) = 1$. The axis of rotation is found by solving the system $(I - A)\mathbf{x} = \mathbf{0}$. A general solution of this system is $\mathbf{x} = t\begin{bmatrix} 1\\ 0\\ 0\end{bmatrix}$; thus the matrix $A$ corresponds to a rotation about the $x$-axis. Choosing the positive orientation and comparing with Table 6.2.6 gives the angle of rotation.
26. The matrix $A$ is a rotation matrix since it is orthogonal and $\det(A) = 1$. The axis of rotation is found by solving the system $(I - A)\mathbf{x} = \mathbf{0}$. A general solution of this system is $\mathbf{x} = t\begin{bmatrix} 1\\ 1\\ 1\end{bmatrix}$; thus the axis of rotation is the line passing through the origin and the point $(1,1,1)$. The plane passing through the origin that is perpendicular to this line has equation $x + y + z = 0$, and $\mathbf{w} = (-1, 1, 0)$ is a vector in this plane. The rotation angle $\theta$ relative to the orientation $\mathbf{u} = \mathbf{w}\times A\mathbf{w}$ is then determined by
$$\cos\theta = \frac{\mathbf{w}\cdot A\mathbf{w}}{\lVert\mathbf{w}\rVert\,\lVert A\mathbf{w}\rVert} = -\frac12, \quad\text{and so}\quad \theta = \frac{2\pi}{3}\;(120^\circ)$$
27. We have $\mathrm{tr}(A) = 1$, and so Formula (17) reduces to $\mathbf{v} = A\mathbf{x} + A^T\mathbf{x}$. Taking $\mathbf{x} = \mathbf{e}_1$, this results in $\mathbf{v} = A\mathbf{x} + A^T\mathbf{x} = (A + A^T)\mathbf{x}$, a vector along the $x$-axis. From this we conclude that the $x$-axis is the axis of rotation. Finally, using Formula (16), the rotation angle is determined by $\cos\theta = \frac{\mathrm{tr}(A) - 1}{2} = 0$, and so $\theta = \frac{\pi}{2}$.
28. We have $\mathrm{tr}(A) = 0$, and so Formula (17) reduces to $\mathbf{v} = A\mathbf{x} + A^T\mathbf{x} + \mathbf{x}$. Taking $\mathbf{x} = \mathbf{e}_1$, this results in $\mathbf{v} = (A + A^T + I)\mathbf{x}$, a vector along $(1,1,1)$. Thus the axis of rotation is the line through the origin and the point $(1,1,1)$. Using Formula (16), the rotation angle is determined by $\cos\theta = \frac{\mathrm{tr}(A) - 1}{2} = -\frac12$; thus $\theta = \frac{2\pi}{3}$.
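The axis/angle extraction used in Exercises 25-28 can be sketched in code: the axis is an eigenvector for $\lambda = 1$, and the angle comes from $\cos\theta = \frac{\mathrm{tr}(A) - 1}{2}$. This assumes NumPy; the helper name and the sample matrix are ours.

```python
import numpy as np

def axis_and_angle(A):
    """Axis and angle of a 3x3 rotation matrix: the axis spans the solution
    space of (I - A)x = 0 (an eigenvector for lambda = 1), and the angle
    satisfies cos(theta) = (tr(A) - 1) / 2."""
    vals, vecs = np.linalg.eig(A)
    axis = np.real(vecs[:, np.argmin(np.abs(vals - 1))])   # eigenvector for lambda = 1
    theta = np.arccos(np.clip((np.trace(A) - 1) / 2, -1, 1))
    return axis, theta

# Rotation through 90 degrees about the x-axis
A = np.array([[1.0, 0.0, 0.0], [0.0, 0.0, -1.0], [0.0, 1.0, 0.0]])
axis, theta = axis_and_angle(A)
assert np.allclose(np.abs(axis), [1, 0, 0]) and np.isclose(theta, np.pi / 2)
```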
29. (a) We have $T_1\!\left(\begin{bmatrix} x\\ y\\ z\end{bmatrix}\right) = \begin{bmatrix} x\\ 0\\ 0\end{bmatrix} = \begin{bmatrix} 1 & 0 & 0\\ 0 & 0 & 0\\ 0 & 0 & 0\end{bmatrix}\begin{bmatrix} x\\ y\\ z\end{bmatrix}$; thus $T_1$ is a linear operator with $[T_1] = \mathrm{diag}(1,0,0)$. Similarly, $[T_2] = \mathrm{diag}(0,1,0)$ and $[T_3] = \mathrm{diag}(0,0,1)$.
(b) If $\mathbf{x} = (x,y,z)$, then $T_1\mathbf{x}\cdot(\mathbf{x} - T_1\mathbf{x}) = (x,0,0)\cdot(0,y,z) = 0$; thus $T_1\mathbf{x}$ and $\mathbf{x} - T_1\mathbf{x}$ are orthogonal for every vector $\mathbf{x}$ in $R^3$. Similarly for $T_2$ and $T_3$.
30. (a) We have $S(\mathbf{e}_1) = S(1,0,0) = (1,0,0)$, $S(\mathbf{e}_2) = S(0,1,0) = (0,1,0)$, and $S(\mathbf{e}_3) = S(0,0,1) = (k, k, 1)$; thus the standard matrix for $S$ is $[S] = \begin{bmatrix} 1 & 0 & k\\ 0 & 1 & k\\ 0 & 0 & 1\end{bmatrix}$.
(b) The shear in the $xz$-direction with factor $k$ is defined by $S(x,y,z) = (x + ky,\, y,\, z + ky)$; its standard matrix is $[S] = \begin{bmatrix} 1 & k & 0\\ 0 & 1 & 0\\ 0 & k & 1\end{bmatrix}$. Similarly, the shear in the $yz$-direction with factor $k$ is defined by $S(x,y,z) = (x,\, y + kx,\, z + kx)$; its standard matrix is $[S] = \begin{bmatrix} 1 & 0 & 0\\ k & 1 & 0\\ k & 0 & 1\end{bmatrix}$.
31. If $\mathbf{u} = (1,0,0)$ then, on substituting $a = 1$ and $b = c = 0$ into Formula (13), we have
$$\begin{bmatrix} 1 & 0 & 0\\ 0 & \cos\theta & -\sin\theta\\ 0 & \sin\theta & \cos\theta\end{bmatrix}$$
The other entries of Table 6.2.6 are obtained similarly.
DISCUSSION AND DISCOVERY
D1. (a) The unit square is mapped onto the segment $0 \le x \le 2$ along the $x$-axis ($y = 0$).
(b) The unit square is mapped onto the segment $0 \le y \le 3$ along the $y$-axis ($x = 0$).
(c) The unit square is mapped onto the rectangle $0 \le x \le 2$, $0 \le y \le 3$.
(d) The unit square is rotated about the origin through an angle $\theta = -\frac{\pi}{6}$ (30 degrees clockwise).
D2. In order for the matrix to be orthogonal, its column vectors must be orthonormal; this forces $a = 0$ and leaves exactly two possibilities for $b$ and $c$ (differing only in sign). These are the only possibilities.
D3. The two columns of this matrix are orthogonal for any values of $a$ and $b$. Thus, for the matrix to be orthogonal, all that is required is that the column vectors be of length 1. Thus $a$ and $b$ must satisfy $(a+b)^2 + (a-b)^2 = 1$, or (equivalently) $2a^2 + 2b^2 = 1$.
D4. If $A$ is an orthogonal matrix and $A\mathbf{x} = \lambda\mathbf{x}$, then $\lVert\mathbf{x}\rVert = \lVert A\mathbf{x}\rVert = \lVert\lambda\mathbf{x}\rVert = |\lambda|\,\lVert\mathbf{x}\rVert$. Thus the eigenvalues of $A$ (if any) must be of absolute value 1.
D5. (a) Vectors parallel to the line $y = x$ are eigenvectors corresponding to the eigenvalue $\lambda = 1$; vectors perpendicular to $y = x$ are eigenvectors corresponding to the eigenvalue $\lambda = -1$.
(b) Every nonzero vector is an eigenvector corresponding to the operator's single eigenvalue.
D6. The shear in the $x$-direction with factor $-2$; thus $T(x,y) = (x - 2y, y)$ and $[T] = \begin{bmatrix} 1 & -2\\ 0 & 1\end{bmatrix}$.
D7. From the polarization identity, we have $\mathbf{x}\cdot\mathbf{y} = \frac14\left(\lVert\mathbf{x}+\mathbf{y}\rVert^2 - \lVert\mathbf{x}-\mathbf{y}\rVert^2\right) = \frac14(16 - 4) = 3$.
D8. If $\lVert\mathbf{x}+\mathbf{y}\rVert = \lVert\mathbf{x}-\mathbf{y}\rVert$, then the parallelogram having $\mathbf{x}$ and $\mathbf{y}$ as adjacent edges has diagonals of equal length and must therefore be a rectangle.
EXERCISE SET 6.3
1. (a) $\ker(T) = \{(0,y)\mid -\infty < y < \infty\}$ (the $y$-axis); $\mathrm{ran}(T) = \{(x,0)\mid -\infty < x < \infty\}$ (the $x$-axis). The transformation $T$ is neither one-to-one nor onto.
(b) $\ker(T) = \{(x,0,0)\mid -\infty < x < \infty\}$ (the $x$-axis); $\mathrm{ran}(T) = \{(0,y,z)\mid -\infty < y, z < \infty\}$ (the $yz$-plane). The transformation $T$ is neither one-to-one nor onto.
(c) $\ker(T) = \{\mathbf{0}\}$; $\mathrm{ran}(T) = R^2$. The transformation $T$ is both one-to-one and onto.
(d) $\ker(T) = \{\mathbf{0}\}$; $\mathrm{ran}(T) = R^3$. The transformation $T$ is both one-to-one and onto.
2. (a) $\ker(T) = \{(x,0)\mid -\infty < x < \infty\}$ (the $x$-axis); $\mathrm{ran}(T) = \{(0,y)\mid -\infty < y < \infty\}$ (the $y$-axis). The transformation $T$ is neither one-to-one nor onto.
(b) $\ker(T) = \{(0,y,0)\mid -\infty < y < \infty\}$ (the $y$-axis); $\mathrm{ran}(T) = \{(x,0,z)\mid -\infty < x, z < \infty\}$ (the $xz$-plane). The transformation $T$ is neither one-to-one nor onto.
(c) $\ker(T) = \{\mathbf{0}\}$; $\mathrm{ran}(T) = R^2$. The transformation $T$ is both one-to-one and onto.
(d) $\ker(T) = \{\mathbf{0}\}$; $\mathrm{ran}(T) = R^3$. The transformation $T$ is both one-to-one and onto.
3.-6. In each case the kernel of the transformation $T_A$ is the solution space of $A\mathbf{x} = \mathbf{0}$. Row reducing the augmented matrix of this system shows that there is one free variable; thus the kernel consists of all vectors of the form $\mathbf{x} = t\mathbf{v}$ for a fixed nonzero vector $\mathbf{v}$, where $-\infty < t < \infty$ (a line through the origin).
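The kernel computations in Exercises 3-8 can be reproduced numerically; one standard approach extracts a null-space basis from the singular value decomposition. A sketch assuming NumPy; `null_space_basis` is our own helper, and the sample matrix is ours, not one from the exercises.

```python
import numpy as np

def null_space_basis(A, tol=1e-12):
    """Basis for the kernel of T_A (the solution space of Ax = 0),
    computed from the singular value decomposition of A: the rows of Vt
    beyond the numerical rank span the null space."""
    _, s, Vt = np.linalg.svd(A)
    rank = int(np.sum(s > tol))
    return Vt[rank:].T           # columns form a basis for the null space

# Example: a rank-2 matrix whose kernel is the line spanned by (1, 1, -1)
A = np.array([[1.0, 0.0, 1.0], [0.0, 1.0, 1.0]])
N = null_space_basis(A)
assert N.shape == (3, 1)
assert np.allclose(A @ N, 0)
```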
7. The kernel of $T$ is equal to the solution space of $A\mathbf{x} = \mathbf{0}$. The augmented matrix of this system can be reduced to $[\,I \mid \mathbf{0}\,]$; thus the system has only the trivial solution and so $\ker(T) = \{\mathbf{0}\}$.
8. The kernel of $T$ is equal to the solution space of the given system. Row reducing the augmented matrix shows that there is one free variable; thus $\ker(T)$ consists of all vectors of the form $\mathbf{x} = t\mathbf{v}$ for a fixed nonzero vector $\mathbf{v}$, where $-\infty < t < \infty$.
9. (a) The vector $\mathbf{b}$ is in the column space of $A$ if and only if the linear system $A\mathbf{x} = \mathbf{b}$ is consistent. Row reduction of the augmented matrix of $A\mathbf{x} = \mathbf{b}$ produces a row of the form $[\,0\;\;0\;\;0 \mid 1\,]$; thus the system is inconsistent, and $\mathbf{b}$ is not in the column space of $A$.
(b) Row reduction of the augmented matrix of $A\mathbf{x} = \mathbf{b}$ shows that the system is consistent, with a one-parameter family of solutions. Taking the parameter $t = 0$ yields a particular solution, which expresses $\mathbf{b}$ as a linear combination of the column vectors of $A$.
10. (a) The vector $\mathbf{b}$ is in the column space of $A$ if and only if the linear system $A\mathbf{x} = \mathbf{b}$ is consistent. The augmented matrix of $A\mathbf{x} = \mathbf{b}$ is
$$\left[\begin{array}{cccc|c} 3 & -2 & 1 & 5 & 2\\ 1 & 4 & 5 & -3 & 1\\ 0 & 1 & 1 & -1 & 0\end{array}\right]$$
and the reduced row echelon form of this matrix is
$$\left[\begin{array}{cccc|c} 1 & 0 & 1 & 1 & 0\\ 0 & 1 & 1 & -1 & 0\\ 0 & 0 & 0 & 0 & 1\end{array}\right]$$
From this we conclude that the system is inconsistent; thus $\mathbf{b}$ is not in the column space of $A$.
(b) The augmented matrix of the system $A\mathbf{x} = \mathbf{b}$ is
$$\left[\begin{array}{cccc|c} 3 & -2 & 1 & 5 & 4\\ 1 & 4 & 5 & -3 & 6\\ 0 & 1 & 1 & -1 & 1\end{array}\right]$$
and the reduced row echelon form of this matrix is
$$\left[\begin{array}{cccc|c} 1 & 0 & 1 & 1 & 2\\ 0 & 1 & 1 & -1 & 1\\ 0 & 0 & 0 & 0 & 0\end{array}\right]$$
From this we conclude that the system is consistent, with general solution $x_1 = 2 - s - t$, $x_2 = 1 - s + t$, $x_3 = s$, $x_4 = t$. Taking $s = t = 0$, the vector $\mathbf{b}$ can be expressed as a linear combination of the column vectors of $A$ as follows:
$$\begin{bmatrix} 4\\ 6\\ 1\end{bmatrix} = \mathbf{b} = 2\mathbf{c}_1(A) + 1\mathbf{c}_2(A) + 0\mathbf{c}_3(A) + 0\mathbf{c}_4(A) = 2\begin{bmatrix} 3\\ 1\\ 0\end{bmatrix} + \begin{bmatrix} -2\\ 4\\ 1\end{bmatrix}$$
11. The vector $\mathbf{w}$ is in the range of the linear operator $T$ if and only if the linear system
$$\begin{aligned} 2x - y \phantom{{}+z} &= 3\\ x \phantom{{}-y} + z &= 3\\ y - z &= 0\end{aligned}$$
is consistent. The augmented matrix of this system can be reduced to
$$\left[\begin{array}{ccc|c} 1 & 0 & 0 & 2\\ 0 & 1 & 0 & 1\\ 0 & 0 & 1 & 1\end{array}\right]$$
Thus the system has a unique solution $\mathbf{x} = (2, 1, 1)$, and we have $T\mathbf{x} = T(2,1,1) = (3, 3, 0) = \mathbf{w}$.
12. The vector $\mathbf{w}$ is in the range of the linear operator $T$ if and only if the linear system
$$\begin{aligned} x - y \phantom{{}+z} &= 1\\ x + y + z &= 2\\ x \phantom{{}+y} + 2z &= -1\end{aligned}$$
is consistent. The augmented matrix of this system can be reduced to
$$\left[\begin{array}{ccc|c} 1 & 0 & 0 & \tfrac73\\ 0 & 1 & 0 & \tfrac43\\ 0 & 0 & 1 & -\tfrac53\end{array}\right]$$
Thus the system has a unique solution $\mathbf{x} = \left(\tfrac73, \tfrac43, -\tfrac53\right)$, and we have $T\mathbf{x} = T\!\left(\tfrac73, \tfrac43, -\tfrac53\right) = (1, 2, -1) = \mathbf{w}$.
13. The operator can be written in matrix form as $\mathbf{w} = A\mathbf{x}$. Since $\det(A) = 17 \ne 0$, the operator is both one-to-one and onto.
14. The operator can be written in matrix form as $\mathbf{w} = A\mathbf{x}$. Since $\det(A) = 0$, the operator is neither one-to-one nor onto.
15. The operator can be written in matrix form as $\mathbf{w} = A\mathbf{x}$. Since $\det(A) = 0$, the operator is neither one-to-one nor onto.
16. The operator can be written as $\begin{bmatrix} w_1\\ w_2\\ w_3\end{bmatrix} = \begin{bmatrix} 1 & 2 & 3\\ 2 & 5 & 3\\ 1 & 0 & 8\end{bmatrix}\begin{bmatrix} x_1\\ x_2\\ x_3\end{bmatrix}$; thus the standard matrix is $A = \begin{bmatrix} 1 & 2 & 3\\ 2 & 5 & 3\\ 1 & 0 & 8\end{bmatrix}$. Since $\det(A) = -1 \ne 0$, the operator is both one-to-one and onto.
17. The operator can be written as $\mathbf{w} = T_A\mathbf{x}$. Since $\det(A) = 0$, $T_A$ is not onto; the range of $T_A$ consists of all vectors of the form $\mathbf{w} = t\mathbf{v}$ for a fixed vector $\mathbf{v}$, where $-\infty < t < \infty$, so any $\mathbf{w}$ not on this line is not in the range of $T_A$.
18. The operator can be written as $\mathbf{w} = T_A\mathbf{x}$, where $A = \begin{bmatrix} 1 & -2 & 1\\ 5 & -1 & 3\\ 4 & 1 & 2\end{bmatrix}$. Since $\det(A) = 0$, $T_A$ is not onto. The range of $T_A$ consists of all vectors $\mathbf{w} = (w_1, w_2, w_3)$ for which the linear system
$$\begin{bmatrix} 1 & -2 & 1\\ 5 & -1 & 3\\ 4 & 1 & 2\end{bmatrix}\begin{bmatrix} x_1\\ x_2\\ x_3\end{bmatrix} = \begin{bmatrix} w_1\\ w_2\\ w_3\end{bmatrix}$$
is consistent. The augmented matrix of this system can be row reduced as follows:
$$\left[\begin{array}{ccc|c} 1 & -2 & 1 & w_1\\ 5 & -1 & 3 & w_2\\ 4 & 1 & 2 & w_3\end{array}\right] \to \left[\begin{array}{ccc|c} 1 & -2 & 1 & w_1\\ 0 & 9 & -2 & w_2 - 5w_1\\ 0 & 9 & -2 & w_3 - 4w_1\end{array}\right] \to \left[\begin{array}{ccc|c} 1 & -2 & 1 & w_1\\ 0 & 9 & -2 & w_2 - 5w_1\\ 0 & 0 & 0 & w_3 - w_2 + w_1\end{array}\right]$$
Thus the system is consistent ($\mathbf{w}$ is in the range of $T_A$) if and only if $w_1 - w_2 + w_3 = 0$. In particular, the vector $\mathbf{w} = (1, 1, 1)$ is not in the range of $T_A$.
19. (a) The linear transformation $T_A : R^2 \to R^3$ is one-to-one if and only if the linear system $A\mathbf{x} = \mathbf{0}$ has only the trivial solution. Row reducing the augmented matrix of $A\mathbf{x} = \mathbf{0}$ shows that this is the case, and so $T_A$ is one-to-one.
(b) Row reducing the augmented matrix of $A\mathbf{x} = \mathbf{0}$ shows that the general solution is $\mathbf{x} = t\mathbf{v}$ for a fixed nonzero vector $\mathbf{v}$. In particular, the system $A\mathbf{x} = \mathbf{0}$ has nontrivial solutions and so the transformation $T_A : R^3 \to R^2$ is not one-to-one.
20. (a) The range of the transformation $T_A : R^2 \to R^3$ consists of those vectors $\mathbf{w}$ in $R^3$ for which the linear system $A\mathbf{x} = \mathbf{w}$ is consistent. The augmented matrix of $A\mathbf{x} = \mathbf{w}$ is
$$\left[\begin{array}{cc|c} 1 & -1 & w_1\\ 2 & 0 & w_2\\ 3 & -4 & w_3\end{array}\right]$$
and this matrix can be row reduced as follows:
$$\left[\begin{array}{cc|c} 1 & -1 & w_1\\ 2 & 0 & w_2\\ 3 & -4 & w_3\end{array}\right] \to \left[\begin{array}{cc|c} 1 & -1 & w_1\\ 0 & 2 & w_2 - 2w_1\\ 0 & -1 & w_3 - 3w_1\end{array}\right] \to \left[\begin{array}{cc|c} 1 & -1 & w_1\\ 0 & 2 & w_2 - 2w_1\\ 0 & 0 & 2w_3 + w_2 - 8w_1\end{array}\right]$$
From this we conclude that $A\mathbf{x} = \mathbf{w}$ is consistent if and only if $-8w_1 + w_2 + 2w_3 = 0$; thus the transformation $T_A$ is not onto.
(b) The range of the transformation $T_A : R^3 \to R^2$ consists of those vectors $\mathbf{w}$ in $R^2$ for which the linear system $A\mathbf{x} = \mathbf{w}$ is consistent. Row reduction of the augmented matrix shows that $A\mathbf{x} = \mathbf{w}$ is consistent for every vector $\mathbf{w}$ in $R^2$; thus $T_A$ is onto.
21. (a) The augmented matrix of the system $A\mathbf{x} = \mathbf{b}$ can be row reduced as follows:
$$\left[\begin{array}{cccc|c} 1 & -2 & -1 & 3 & b_1\\ 2 & 4 & 6 & -2 & b_2\\ 3 & 0 & 3 & 3 & b_3\end{array}\right] \to \left[\begin{array}{cccc|c} 1 & -2 & -1 & 3 & b_1\\ 0 & 8 & 8 & -8 & b_2 - 2b_1\\ 0 & 6 & 6 & -6 & b_3 - 3b_1\end{array}\right] \to \left[\begin{array}{cccc|c} 1 & -2 & -1 & 3 & b_1\\ 0 & 1 & 1 & -1 & \tfrac18 b_2 - \tfrac14 b_1\\ 0 & 0 & 0 & 0 & b_3 - \tfrac32 b_1 - \tfrac34 b_2\end{array}\right]$$
It follows that $A\mathbf{x} = \mathbf{b}$ is consistent if and only if $b_3 - \tfrac32 b_1 - \tfrac34 b_2 = 0$, or $6b_1 + 3b_2 - 4b_3 = 0$.
(b) The range of the transformation $T_A$ consists of the vectors $\mathbf{b}$ of the form
$$\mathbf{b} = \begin{bmatrix} -\tfrac12 s + \tfrac23 t\\ s\\ t\end{bmatrix} = s\begin{bmatrix} -\tfrac12\\ 1\\ 0\end{bmatrix} + t\begin{bmatrix} \tfrac23\\ 0\\ 1\end{bmatrix}$$
Note. This is just one possibility; it was obtained by solving for $b_1$ in terms of $b_2$ and $b_3$ and then making $b_2$ and $b_3$ into parameters.
(c) The augmented matrix of the system $A\mathbf{x} = \mathbf{0}$ can be row reduced to
$$\left[\begin{array}{cccc|c} 1 & 0 & 1 & 1 & 0\\ 0 & 1 & 1 & -1 & 0\\ 0 & 0 & 0 & 0 & 0\end{array}\right]$$
Thus the kernel of $T_A$ (i.e. the solution space of $A\mathbf{x} = \mathbf{0}$) consists of all vectors of the form
$$\mathbf{x} = \begin{bmatrix} -s - t\\ -s + t\\ s\\ t\end{bmatrix} = s\begin{bmatrix} -1\\ -1\\ 1\\ 0\end{bmatrix} + t\begin{bmatrix} -1\\ 1\\ 0\\ 1\end{bmatrix}$$
DISCUSSION AND DISCOVERY
D1. (a) True. If $T$ is one-to-one, then $T\mathbf{x} = \mathbf{0}$ if and only if $\mathbf{x} = \mathbf{0}$; thus $T(\mathbf{u} - \mathbf{v}) = \mathbf{0}$ implies $\mathbf{u} - \mathbf{v} = \mathbf{0}$ and $\mathbf{u} = \mathbf{v}$.
(b) True. If $T : R^n \to R^n$ is onto, then (from Theorem 6.3.14) it is one-to-one and so the argument given in part (a) applies.
(c) True. See Theorem 6.3.15.
(d) True. If $T_A$ is not one-to-one, then the homogeneous linear system $A\mathbf{x} = \mathbf{0}$ has infinitely many nontrivial solutions.
(e) True. The standard matrix of a shear operator $T$ is of the form $A = \begin{bmatrix} 1 & k\\ 0 & 1\end{bmatrix}$ or $A = \begin{bmatrix} 1 & 0\\ k & 1\end{bmatrix}$. In either case, we have $\det(A) = 1 \ne 0$ and so $T = T_A$ is one-to-one.
D2. No. The transformation is not one-to-one since $T(\mathbf{v}) = \mathbf{a}\times\mathbf{v} = \mathbf{0}$ for all vectors $\mathbf{v}$ that are parallel to $\mathbf{a}$.
D3. The transformation $T_A : R^n \to R^m$ is onto.
D4. No (assuming $\mathbf{v}_0$ is not a scalar multiple of $\mathbf{v}$). The line $\mathbf{x} = \mathbf{v}_0 + t\mathbf{v}$ does not pass through the origin and thus is not a subspace of $R^n$. It follows from Theorem 6.3.7 that this line cannot be equal to the range of a linear operator.
WORKING WITH PROOFS
P1. If $B\mathbf{x} = \mathbf{0}$, then $(AB)\mathbf{x} = A(B\mathbf{x}) = A\mathbf{0} = \mathbf{0}$; thus $\mathbf{x}$ is in the nullspace of $AB$.
EXERCISE SET 6.4

1. $[T_B \circ T_A] = BA$ and $[T_A \circ T_B] = AB$, computed by multiplying the given matrices in the indicated order.
2. $[T_B \circ T_A] = BA$ and $[T_A \circ T_B] = AB$, computed in the same way.
3. (a) $[T_1]$ and $[T_2]$ are read off from the given formulas.
(b) $[T_2 \circ T_1] = [T_2][T_1]$ and $[T_1 \circ T_2] = [T_1][T_2]$.
(c) $T_2(T_1(x_1, x_2)) = (3x_1 + 3x_2,\; 6x_1 - 2x_2)$ and $T_1(T_2(x_1, x_2)) = (8x_1 + 4x_2,\; x_1 - 4x_2)$.
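The rule $[T_2 \circ T_1] = [T_2][T_1]$ underlying these exercises can be spot-checked on hypothetical matrices. A sketch assuming NumPy; the matrices `T1` and `T2` below are ours, not the ones from the exercises.

```python
import numpy as np

# Compositions of linear transformations correspond to matrix products,
# in the order [T2 o T1] = [T2][T1] (apply T1 first, then T2).
T1 = np.array([[1.0, 2.0], [0.0, 1.0]])
T2 = np.array([[0.0, -1.0], [3.0, 1.0]])
x = np.array([2.0, -1.0])
# Applying T1 and then T2 agrees with multiplying by the product T2 @ T1
assert np.allclose(T2 @ (T1 @ x), (T2 @ T1) @ x)
# Composition is generally not commutative
assert not np.allclose(T2 @ T1, T1 @ T2)
```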

4. (a) $[T_1]$ and $[T_2]$ are read off from the given formulas.
(b) $[T_2 \circ T_1] = [T_2][T_1]$ and $[T_1 \circ T_2] = [T_1][T_2]$.
(c) $T_2(T_1(x_1, x_2, x_3)) = (2x_2,\; x_1 + 3x_2,\; 17x_1 + 3x_2)$ and $T_1(T_2(x_1, x_2, x_3)) = (4x_1 + 8x_2,\; -2x_1 - 4x_2 - x_3,\; -x_1 - 2x_2 + 3x_3)$.
5. (a) The standard matrix for the rotation is A1 and the standard matrix for the reflection is A2. Thus the standard matrix for the rotation followed by the reflection is

    A2A1 = [ 0  1 ][ 0  -1 ]  =  [ 1   0 ]
           [ 1  0 ][ 1   0 ]     [ 0  -1 ]

(b) The standard matrix for the projection followed by the contraction is the product A2A1 of the corresponding standard matrices.
(c) The standard matrix for the reflection followed by the dilation is

    A2A1 = [ 3  0 ][ 1   0 ]  =  [ 3   0 ]
           [ 0  3 ][ 0  -1 ]     [ 0  -3 ]
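The composition in Exercise 5(a) can be verified numerically, assuming (as reconstructed above) that the rotation is through 90° and the reflection is about y = x:

```python
import numpy as np

A1 = np.array([[0.0, -1.0], [1.0, 0.0]])  # rotation through 90 degrees
A2 = np.array([[0.0, 1.0], [1.0, 0.0]])   # reflection about the line y = x

# Rotation followed by reflection: multiply right to left.
C = A2 @ A1
assert np.allclose(C, [[1.0, 0.0], [0.0, -1.0]])  # reflection about the x-axis
```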
6. (a) The standard matrix for the composition is the product A3A2A1 of the three standard matrices, taken in right-to-left order.
(b) The standard matrix for the composition is computed in the same way.
(c) The composition corresponds to a counterclockwise rotation of 180°. The standard matrix is

    R_{π/3} R_{7π/12} R_{π/12} = R_{π/3 + 7π/12 + π/12} = R_π = [ -1   0 ]
                                                                [  0  -1 ]
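The identity behind 6(c) — rotation angles simply add, R_{θ1} R_{θ2} = R_{θ1+θ2} — is easy to confirm numerically with the angles appearing in the solution:

```python
import numpy as np

def R(theta):
    """Standard matrix of the rotation of R^2 through angle theta."""
    c, s = np.cos(theta), np.sin(theta)
    return np.array([[c, -s], [s, c]])

t1, t2, t3 = np.pi / 3, 7 * np.pi / 12, np.pi / 12  # these sum to pi

P = R(t1) @ R(t2) @ R(t3)
assert np.allclose(P, R(t1 + t2 + t3))             # the angles add
assert np.allclose(P, [[-1.0, 0.0], [0.0, -1.0]])  # = R_pi, rotation by 180 degrees
```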
7. (a) The standard matrix for the reflection followed by the projection is the product A2A1 of the two standard matrices.
(b) The standard matrix for the rotation followed by the dilation is the product A2A1.
(c) The standard matrix for the projection followed by the reflection is the product A2A1.
8. (a) The standard matrix for the reflection followed by the projection is the product A2A1.
(b) The standard matrix for the rotation followed by the contraction is the product A2A1.
(c) The standard matrix for the projection followed by the reflection is the product A2A1.
9. (a) The standard matrix for the composition is the product of the standard matrices of the individual operators, taken in right-to-left order.
(b) The standard matrix for the composition is computed in the same way.
10. (a) The standard matrix for the composition is the product of the standard matrices, taken in right-to-left order.
(b) The standard matrix for the composition is computed in the same way.
11. (a) Multiplication by A corresponds to expansion by a factor of 3 in the y-direction and expansion by a factor of 2 in the x-direction.
(b) Multiplication by A corresponds to reflection about the line y = x followed by a shear in the x-direction with factor 2.
Note. The factorization of A as a product of elementary matrices is not unique. This is just one possibility.
(c) Multiplication by A corresponds to reflection about the x-axis, followed by expansion in the y-direction by a factor of 2, expansion in the x-direction by a factor of 4, and reflection about the line y = x.
(d) Multiplication by A corresponds to a shear in the x-direction with factor -3, followed by expansion in the y-direction with factor 18, and a shear in the y-direction with factor 4.
12. (a) Multiplication by A corresponds to a compression in the x-direction and a compression in the y-direction.
(b) Multiplication by A corresponds to a shear in the x-direction with factor 2, followed by reflection about the line y = x.
(c) Multiplication by A corresponds to reflection about the line y = x, followed by expansion in the x-direction by a factor of 2, expansion in the y-direction, and reflection about the x-axis.
(d) Multiplication by A corresponds to a shear in the x-direction with factor 4, followed by a shear in the y-direction with factor 2.
13. (a) Reflection of R^2 about the x-axis.
(b) Rotation of R^2 about the origin.
(c) Contraction of R^2.
(d) Expansion of R^2 in the y-direction with factor 2.
14. (a) Reflection about the y-axis.
(b) Rotation about the origin.
(c) Dilation by a factor of 5.
(d) Compression in the x-direction.
15. The standard matrix for the operator T is

    A = [ 1  2 ]
        [ 1  1 ]

Since A is invertible, T is one-to-one and the standard matrix for T^{-1} is

    A^{-1} = [ -1   2 ]
             [  1  -1 ]

thus T^{-1}(w1, w2) = (-w1 + 2w2, w1 - w2).
16. The standard matrix for the operator T is A. Since A is invertible, T is one-to-one, and the formula for T^{-1}(w1, w2) is obtained from the entries of A^{-1} in the same way.
17. The standard matrix for the operator T is

    A = [ 0  -1 ]
        [ 1   0 ]

Since A is invertible, T is one-to-one and the standard matrix for T^{-1} is

    A^{-1} = [  0  1 ]
             [ -1  0 ]

thus T^{-1}(w1, w2) = (w2, -w1).
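The inverse-operator computations in Exercises 15–22 all amount to inverting the standard matrix; for Exercise 17, assuming the standard matrix A = [[0, -1], [1, 0]] read from the solution, this looks like:

```python
import numpy as np

A = np.array([[0.0, -1.0], [1.0, 0.0]])  # standard matrix of T (rotation by 90 degrees)

# T is one-to-one exactly when A is invertible (nonzero determinant).
assert not np.isclose(np.linalg.det(A), 0)

A_inv = np.linalg.inv(A)
# T^{-1}(w1, w2) = (w2, -w1):
w = np.array([3.0, 5.0])
assert np.allclose(A_inv @ w, [5.0, -3.0])
```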
18. The standard matrix for the operator T is A. Since A is not invertible, T is not one-to-one.
19. The standard matrix for the operator T is

    A = [ 1  -2   2 ]
        [ 2   1   1 ]
        [ 1   1   0 ]

Since A is invertible, T is one-to-one, and the formula for T^{-1} is obtained from the entries of A^{-1}.
20. The standard matrix for T is A. Since A is not invertible, T is not one-to-one.
21. The standard matrix for the operator T is A. Since A is invertible, T is one-to-one, and the formula for T^{-1} is obtained by computing A^{-1}.
22. The standard matrix for the operator T is A. Since A is invertible, T is one-to-one, and the formula for T^{-1} is obtained by computing A^{-1}.
23. (a) It is easy to see directly (from the geometric definitions) that T1 ∘ T2 = 0 = T2 ∘ T1. This also follows from computing the products [T1][T2] and [T2][T1], both of which are the zero matrix.
(b) It is easy to see directly that the composition of T1 and T2 (in either order) corresponds to rotation about the origin through the angle θ1 + θ2; thus T1 ∘ T2 = T2 ∘ T1. This also follows from the computation carried out in Example 1.
(c) We have

    [T1] = [ 1      0         0     ]      [T2] = [ cos θ2  -sin θ2   0 ]
           [ 0   cos θ1   -sin θ1  ]             [ sin θ2   cos θ2   0 ]
           [ 0   sin θ1    cos θ1  ]             [   0        0      1 ]

thus

    [T1 ∘ T2] = [T1][T2] = [     cos θ2          -sin θ2          0     ]
                           [ cos θ1 sin θ2   cos θ1 cos θ2   -sin θ1   ]
                           [ sin θ1 sin θ2   sin θ1 cos θ2    cos θ1   ]

    [T2 ∘ T1] = [T2][T1] = [ cos θ2   -cos θ1 sin θ2    sin θ1 sin θ2 ]
                           [ sin θ2    cos θ1 cos θ2   -sin θ1 cos θ2 ]
                           [   0          sin θ1           cos θ1     ]

and it follows that T1 ∘ T2 ≠ T2 ∘ T1.
24. (a) It is easy to see that T1 ∘ T2 = -I = T2 ∘ T1. This also follows from computing the products [T1][T2] and [T2][T1], both of which equal -I.
(b) Computing [T1 ∘ T2] = [T1][T2] and [T2 ∘ T1] = [T2][T1] shows that the two products are different matrices; it follows that T1 ∘ T2 ≠ T2 ∘ T1.
(c) We have [T1] = kI; thus [T1][T2] = (kI)[T2] = k[T2] and [T2][T1] = [T2](kI) = k[T2]. It follows that T1 ∘ T2 = T2 ∘ T1 = kT2.
25. We have

    H_{π/3} = [ -1/2   √3/2 ]      H_{π/6} = [ 1/2    √3/2 ]
              [ √3/2    1/2 ]                [ √3/2   -1/2 ]

Thus the standard matrix for the composition is

    H_{π/3} H_{π/6} = [ 1/2   -√3/2 ]
                      [ √3/2    1/2 ]

26. We have

    H_{π/4} = [ 0  1 ]      H_{π/8} = [ 1/√2    1/√2 ]
              [ 1  0 ]                [ 1/√2   -1/√2 ]

Thus the standard matrix for the composition is

    H_{π/8} H_{π/4} = [  1/√2   1/√2 ]
                      [ -1/√2   1/√2 ]
27. The image of the unit square is the parallelogram having the vectors T(e1) = (1, 1) and T(e2) = (-1, 2) as adjacent sides. The area of this parallelogram is |det(A)| = |2 + 1| = 3.
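The area computation in Exercise 27 uses the general fact that T scales areas by |det(A)|. A quick numerical check, taking A = [[1, -1], [1, 2]] (consistent with the determinant value |2 + 1| = 3 in the solution):

```python
import numpy as np

# Standard matrix with columns T(e1), T(e2), as read from the solution.
A = np.array([[1.0, -1.0], [1.0, 2.0]])

# Area of the image of the unit square = |det(A)|.
area = abs(np.linalg.det(A))
assert np.isclose(area, 3.0)
```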
28. The image of the unit square is the parallelogram having the vectors T(e1) and T(e2) as adjacent sides. The area of this parallelogram is |det(A)| = |8 - 9| = 1.
DISCUSSION AND DISCOVERY
D1. (a) True. If T1(x) = 0 with x ≠ 0, then T2(T1(x)) = T2(0) = 0; thus T2 ∘ T1 is not one-to-one.
(b) False. If T2(ran(T1)) = R^n then T2 ∘ T1 is onto.
(c) False. If x is in R^n, then T2(T1(x)) = 0 if and only if T1(x) belongs to the kernel of T2; thus if T1 is one-to-one and ran(T1) ∩ ker(T2) = {0}, then the transformation T2 ∘ T1 will be one-to-one.
(d) True. If ran(T2) ≠ R^k then ran(T2 ∘ T1) ⊆ ran(T2) ≠ R^k.
D2. We have

    R_β = [ cos β   -sin β ]    H_0 = [ 1   0 ]    R_β^{-1} = R_{-β} = [  cos β   sin β ]
          [ sin β    cos β ]          [ 0  -1 ]                        [ -sin β   cos β ]

Thus

    R_β H_0 R_β^{-1} = [ cos²β - sin²β    2 sin β cos β ]  =  [ cos 2β    sin 2β ]  =  H_β
                       [ 2 sin β cos β   sin²β - cos²β  ]     [ sin 2β   -cos 2β ]

and so multiplication by the matrix R_β H_0 R_β^{-1} corresponds to reflection about the line L.
D3. From Example 2, we have H_{θ1} H_{θ2} = R_{2(θ1 - θ2)}. Since θ = 2(θ - θ/2), it follows that R_θ = H_θ H_{θ/2}. Thus every rotation can be expressed as a composition of two reflections.
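D3's identity — a rotation is a composition of two reflections, R_θ = H_θ H_{θ/2} — can be checked directly, writing H_φ for reflection about the line through the origin at angle φ:

```python
import numpy as np

def H(phi):
    """Reflection of R^2 about the line through the origin at angle phi."""
    c, s = np.cos(2 * phi), np.sin(2 * phi)
    return np.array([[c, s], [s, -c]])

def R(theta):
    """Rotation of R^2 through angle theta."""
    c, s = np.cos(theta), np.sin(theta)
    return np.array([[c, -s], [s, c]])

theta = 0.7  # arbitrary test angle
# H_{t1} H_{t2} = R_{2(t1 - t2)}; with t1 = theta, t2 = theta/2 this gives R_theta.
assert np.allclose(H(theta) @ H(theta / 2), R(theta))
```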
CHAPTER 7
Dimension and Structure
EXERCISE SET 7.1
5. (a) Any one of the vectors (1, 2), (1/2, 1), (-1, -2) forms a basis for the line y = 2x.
(b) Any two of the vectors (1, -1, 0), (2, 0, -1), (0, 2, -1) form a basis for the plane x + y + 2z = 0.
6. (a) Any one of the vectors (1, 3), (2, 6), (-1, -3) forms a basis for the line x = t, y = 3t.
(b) Any two of the vectors (1, 1, 3), (1, -1, 2), (2, 0, 5) form a basis for the plane x = t1 + t2, y = t1 - t2, z = 3t1 + 2t2.
7. The augmented matrix of the system is

    [ 3   1  1   1 | 0 ]
    [ 5  -1  1  -1 | 0 ]

and the reduced row echelon form of this matrix is

    [ 1  0  1/4  0 | 0 ]
    [ 0  1  1/4  1 | 0 ]

Thus a general solution of the system is

    x = (-(1/4)s, -(1/4)s - t, s, t) = s(-1/4, -1/4, 1, 0) + t(0, -1, 0, 1)

where -∞ < s, t < ∞. The solution space is 2-dimensional with canonical basis {v1, v2} where v1 = (-1/4, -1/4, 1, 0) and v2 = (0, -1, 0, 1).
8. The augmented matrix of the system row reduces to

    [ 1  0  3 | 0 ]
    [ 0  1  2 | 0 ]
    [ 0  0  0 | 0 ]

Thus the general solution is x = (-3t, -2t, t) = t(-3, -2, 1), where -∞ < t < ∞. The solution space is 1-dimensional with canonical basis {(-3, -2, 1)}.
9. The augmented matrix of the system row reduces to

    [ 1  1  0  0  1 | 0 ]
    [ 0  0  1  0  1 | 0 ]
    [ 0  0  0  1  0 | 0 ]

Thus the general solution is

    x = (-s - t, s, -t, 0, t) = s(-1, 1, 0, 0, 0) + t(-1, 0, -1, 0, 1)

where -∞ < s, t < ∞. The solution space is 2-dimensional with canonical basis {v1, v2} where v1 = (-1, 1, 0, 0, 0) and v2 = (-1, 0, -1, 0, 1).
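Solution-space computations like the one in Exercise 9 can be cross-checked numerically. The row-reduced coefficient matrix below is one consistent with the basis vectors found in the solution (an assumption of this sketch):

```python
import numpy as np

# Row-reduced coefficient matrix consistent with the solution's basis.
R = np.array([
    [1.0, 1.0, 0.0, 0.0, 1.0],
    [0.0, 0.0, 1.0, 0.0, 1.0],
    [0.0, 0.0, 0.0, 1.0, 0.0],
])

v1 = np.array([-1.0, 1.0, 0.0, 0.0, 0.0])
v2 = np.array([-1.0, 0.0, -1.0, 0.0, 1.0])

# Both canonical basis vectors satisfy Rx = 0 ...
assert np.allclose(R @ v1, 0)
assert np.allclose(R @ v2, 0)

# ... and the nullspace is 2-dimensional: rank 3 in R^5 leaves 5 - 3 = 2.
assert np.linalg.matrix_rank(R) == 3
```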
10. The augmented matrix of the system row reduces, and from the reduced row echelon form the general solution can be written as x = s v1 + t v2, where -∞ < s, t < ∞. The solution space is 2-dimensional with canonical basis {v1, v2} where v1 = (-3, 1, 0, 1, 0) and v2 = (3, -1, 1, 0, 1).
11. (a) The hyperplane (1, 2, -3)⊥ consists of all vectors x = (x, y, z) in R^3 satisfying the equation x + 2y - 3z = 0. Using y = s and z = t as free variables, the solutions of this equation can be written in the form

    x = (-2s + 3t, s, t) = s(-2, 1, 0) + t(3, 0, 1)

where -∞ < s, t < ∞. Thus the vectors v1 = (-2, 1, 0) and v2 = (3, 0, 1) form a basis for the hyperplane.
(b) The hyperplane (2, -1, 4, 1)⊥ consists of all vectors x = (x1, x2, x3, x4) in R^4 satisfying the equation 2x1 - x2 + 4x3 + x4 = 0. Using x1 = r, x3 = s, and x4 = t as free variables, the solutions of this equation can be written in the form

    x = (r, 2r + 4s + t, s, t) = r(1, 2, 0, 0) + s(0, 4, 1, 0) + t(0, 1, 0, 1)

where -∞ < r, s, t < ∞. Thus the vectors v1 = (1, 2, 0, 0), v2 = (0, 4, 1, 0), and v3 = (0, 1, 0, 1) form a basis for the hyperplane.
12. (a) The hyperplane (-2, 1, 4)⊥ consists of all vectors x = (x, y, z) in R^3 satisfying the equation -2x + y + 4z = 0. Using x = s and z = t as free variables, the solutions of this equation can be written in the form

    x = (s, 2s - 4t, t) = s(1, 2, 0) + t(0, -4, 1)

where -∞ < s, t < ∞. Thus the vectors v1 = (1, 2, 0) and v2 = (0, -4, 1) form a basis for the hyperplane.
(b) The hyperplane (0, -3, 5, 7)⊥ consists of all vectors x = (x1, x2, x3, x4) in R^4 satisfying the equation -3x2 + 5x3 + 7x4 = 0. Using x1 = r, x3 = s, and x4 = t as free variables, the solutions of this equation can be written in the form

    x = (r, (5/3)s + (7/3)t, s, t) = r(1, 0, 0, 0) + s(0, 5/3, 1, 0) + t(0, 7/3, 0, 1)

where -∞ < r, s, t < ∞. Thus the vectors v1 = (1, 0, 0, 0), v2 = (0, 5/3, 1, 0), and v3 = (0, 7/3, 0, 1) form a basis for the hyperplane.
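A basis for a hyperplane a⊥ in R^n can always be produced the same way: treat a · x = 0 as a one-row system and read off one basis vector per free variable. A sketch for the hyperplane (1, 2, -3)⊥ of Exercise 11(a):

```python
import numpy as np

a = np.array([1.0, 2.0, -3.0])

# Basis read off with y = s, z = t as the free variables:
v1 = np.array([-2.0, 1.0, 0.0])
v2 = np.array([3.0, 0.0, 1.0])

# Both vectors are orthogonal to a ...
assert np.isclose(a @ v1, 0)
assert np.isclose(a @ v2, 0)

# ... and they are linearly independent, so they span the 2-dimensional
# hyperplane a-perp in R^3.
assert np.linalg.matrix_rank(np.column_stack([v1, v2])) == 2
```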
DISCUSSION AND DISCOVERY
Dl. (a) Assuming A :1 0, the dimension of t he solution space of Ax = 0 is at most n- 1.
(b) A hyperplane in R!' has dimension 5.
(c) The subspaces of R
5
ha.ve dimensions 0, 1, 2, 3, 4, or 5.
(d) The vectors Yt = (1;0,1,0), v
2
=::: (1,1, 0,0), and v
3
= (1,1,1,0) are linearly independent,
t hus they span a 3-dimensional subspace of R
4

D2. Yes, they are linearly independent. If we write the vectors in the order v4, v3, v2, v1 then, because of the positions of the leading 1s, it is clear that none of these vectors can be written as a linear combination of the preceding ones in the list.
D3. False. Such a set is linearly dependent: when the set is written in reverse order, some vector is a linear combination of its predecessors.
D4. The solution space of Ax = 0 has positive dimension if and only if the system has nontrivial solutions, and this occurs if and only if det(A) = 0. Since det(A) = t² + 7t + 12 = (t + 3)(t + 4), it follows that the solution space has positive dimension if and only if t = -3 or t = -4. The solution space has dimension 1 in each case.
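D4's criterion — nontrivial solutions exactly when det(A) = 0 — can be illustrated with any matrix whose determinant is the stated polynomial. The matrix below is a hypothetical one with det = (t + 3)(t + 4), not necessarily the one in the exercise:

```python
import numpy as np

def A(t):
    # Hypothetical matrix with det(A) = (t + 3)(t + 4); any matrix with this
    # determinant behaves the same way for this check.
    return np.array([[t + 3.0, 0.0], [0.0, t + 4.0]])

for t in (-3.0, -4.0):
    # det(A) = 0, so Ax = 0 has nontrivial solutions (positive nullity).
    assert np.isclose(np.linalg.det(A(t)), 0)
    assert np.linalg.matrix_rank(A(t)) < 2

# For any other t the solution space is just {0}.
assert np.linalg.matrix_rank(A(1.0)) == 2
```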
WORKING WITH PROOFS
Pl. (a) If S = {vh v2, ... , vk} is a linearly dependent set in R" then t here C\re scalars Ct , cz, ... ,ck
(not all zero) such that c
1
v
1
+ c
2
v2 + + = 0. lt follows that the setS'= {v
1
, v
2
, .. . ,
vk, w1, .. . , wr} is lillearly dependent since c
1
v
1
+ c2v2 I + C>Yk + Owl+ + Owr -= 0
i::; nuut1 i v i<\l Jepcndeucy rela.tio n i:l.lllung its dements.
(b) If S= {vt, v
2
, . . . ,vJc} is a linearly independent set in R"' then there is no nont rivial de-
pendency relation among its elements, i.e., if Ct VJ + c2v2 I+ = 0 then Ct = c2 =
= == 0. It follows t hat if S' is any nonempty subset of S then S' must. aho be linearly
independent since a nontrivial dependency among its elements would also be a nontrivi al
among t he elements of S.
P2./ If k #0, then (ka) x ' - k( a- x) = (l if and only if a x = 0: t.lm::; ( k:a )J = a.l
EXERCISE SET 7.2
1. (a) A basis for R^2 must contain exactly two vectors (three are too many); any set of three vectors in R^2 is linearly dependent.
(b) A basis for R^3 must contain exactly three vectors (two are not enough); a set of two vectors cannot span R^3.
(c) The vectors v1 and v2 are linearly dependent (v2 = 2v1).
2. (a) A basis for R^2 must contain exactly two vectors (one is not enough).
(b) A basis for R^3 must contain exactly three vectors (four are too many).
(c) These vectors are linearly dependent (any set of vectors containing the zero vector is dependent).
3. (a) The vectors v1 = (2, 1) and v2 = (0, 3) are linearly independent since neither is a scalar multiple of the other; thus these two vectors form a basis for R^2.
(b) The vector v2 = (-7, 8, 0) is not a scalar multiple of v1 = (4, 1, 0), and v3 = (1, 1, 1) is not a linear combination of v1 and v2 since any such linear combination would have 0 in the third component. Thus v1, v2, and v3 are linearly independent, and it follows that these vectors form a basis for R^3.
4. (a) The vectors v1 = (7, 5) and v2 = (4, 8) are linearly independent since neither is a scalar multiple of the other; thus these two vectors form a basis for R^2.
(b) The vector v2 = (0, 8, 0) is not a scalar multiple of v1 = (0, 1, 3), and v3 = (1, 6, 0) is not a linear combination of v1 and v2 since any such linear combination would have 0 in the first component. Thus v1, v2, and v3 are linearly independent, and it follows that these vectors form a basis for R^3.
5. (a) The matrix A having v1, v2, v3 as its column vectors satisfies det(A) = -5 ≠ 0; thus the column vectors are linearly independent and hence form a basis for R^3.
(b) The matrix A having v1, v2, and v3 as its column vectors satisfies det(A) = 0; thus the column vectors are linearly dependent and hence do not form a basis for R^3.
6. (a) The matrix A having v1, v2, and v3 as its column vectors satisfies det(A) = 0; thus the column vectors are linearly dependent and hence do not form a basis for R^3.
(b) The matrix A having v1, v2, and v3 as its column vectors satisfies det(A) = 4 ≠ 0; thus the column vectors are linearly independent and hence form a basis for R^3.
7. (a) An arbitrary vector x = (x, y, z) in R^3 can be written as a linear combination of v1, v2, v3, and v4; thus these vectors span R^3. On the other hand, four vectors in the 3-dimensional space R^3 are necessarily linearly dependent, so v1, v2, v3, and v4 do not form a basis for R^3.
(b) The vector equation v = c1v1 + c2v2 + c3v3 + c4v4 is equivalent to a linear system and, setting c4 = t, a general solution of the system is given in terms of the parameter t, where -∞ < t < ∞. Thus v can be expressed as a linear combination of v1, v2, v3, v4 for any value of t; in particular, corresponding to t = 0 and t = -6, we have v = v1 + v2 + v3 and v = 7v1 + 4v2 + 3v3 - 6v4.
8. (a) An arbitrary vector x = (x, y, z) can be written as

    x = z v1 + (y - z)v2 + (x - y)v3 + 0 v4

thus the vectors v1, v2, v3, and v4 span R^3. On the other hand, since v4 = 2v2 + v3, the vectors v1, v2, v3, v4 are linearly dependent and do not form a basis for R^3.
(b) The vector equation v = c1v1 + c2v2 + c3v3 + c4v4 is equivalent to a linear system whose augmented matrix has reduced row echelon form

    [ 1  0  0  0 |  3 ]
    [ 0  1  0  2 | -1 ]
    [ 0  0  1  1 | -1 ]

Thus, setting c4 = t, a general solution of the system is given by

    c1 = 3,  c2 = -1 - 2t,  c3 = -1 - t,  c4 = t

where -∞ < t < ∞. Thus v = 3v1 - (1 + 2t)v2 - (1 + t)v3 + t v4 for any value of t. For example, corresponding to t = 0 and t = -1, we have v = 3v1 - v2 - v3 and v = 3v1 + v2 - v4.
9. The vector v2 = (1, -2, -2) is not a scalar multiple of v1 = (-1, 2, 3); thus S = {v1, v2} is a linearly independent set. Let A be the matrix having v1, v2, and e1 as its columns, i.e.,

    A = [ -1   1  1 ]
        [  2  -2  0 ]
        [  3  -2  0 ]

Then det(A) = 2 ≠ 0, and it follows from this that S' = {v1, v2, e1} is a linearly independent set and hence a basis for R^3. [Similarly, it can be shown that {v1, v2, e2} is a basis for R^3. On the other hand, the set {v1, v2, e3} is linearly dependent and thus not a basis for R^3.]
10. The vector v2 = (3, 1, -2) is not a scalar multiple of v1 = (1, -1, 0); thus S = {v1, v2} is a linearly independent set. Let A be the matrix having v1, v2, and e1 as its columns, i.e.,

    A = [  1   3  1 ]
        [ -1   1  0 ]
        [  0  -2  0 ]

Then det(A) = 2 ≠ 0, and it follows from this that S' = {v1, v2, e1} is a linearly independent set and hence a basis for R^3. [Similarly, it can be shown that {v1, v2, e2} and {v1, v2, e3} are bases for R^3.]
12. We have v2 = 2v1, and if we remove the vector v2 from the set S then the remaining vectors are linearly independent, since the determinant of the matrix having them as columns is 27 ≠ 0. Thus S' = {v1, v3, v4} is a basis for R^3.
13. Since

    det [ 1  1  1 ]
        [ 0  1  1 ]  =  1 ≠ 0
        [ 0  0  1 ]

the vectors v1, v2, v3 (column vectors of the matrix) form a basis for R^3. The vector equation v = c1v1 + c2v2 + c3v3 is equivalent to a linear system which, from back substitution, has the solution c3 = 1, c2 = 5 - c3 = 4, c1 = 2 - c2 - c3 = -3. Thus v = -3v1 + 4v2 + v3.
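Finding the coefficients c1, c2, c3 in Exercise 13 is just solving Ac = v with the basis vectors as the columns of A; the right-hand side v = (2, 5, 1) below is inferred from the back-substitution steps in the solution:

```python
import numpy as np

# Basis vectors as columns (upper triangular, so back substitution applies).
A = np.array([[1.0, 1.0, 1.0],
              [0.0, 1.0, 1.0],
              [0.0, 0.0, 1.0]])
v = np.array([2.0, 5.0, 1.0])  # inferred right-hand side

c = np.linalg.solve(A, v)
assert np.allclose(c, [-3.0, 4.0, 1.0])   # v = -3 v1 + 4 v2 + v3
```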
14. Since the determinant of the matrix having v1, v2, v3 as its column vectors is -2 ≠ 0, these vectors form a basis for R^3. The vector equation v = c1v1 + c2v2 + c3v3 is equivalent to a linear system that can be solved by back substitution.
15. (a) Since u · n = (1)(2) + (2)(0) + (-1)(1) = 1 ≠ 0, the vector u is not orthogonal to n. Thus V is not contained in W.
(b) The line V is parallel to u = (2, -1, 1), and the plane W has normal vector n = (1, 3, 1). Since u · n = (2)(1) + (-1)(3) + (1)(1) = 0, the vector u is orthogonal to n. Thus V is contained in W.
16. (a) Since u · n = (1)(2) + (1)(1) + (3)(-1) = 0, the vector u is orthogonal to n. Thus V is contained in W.
(b) The line V is parallel to u = (1, 2, -5), and the plane W has normal vector n = (3, 2, 1). Since u · n = (1)(3) + (2)(2) + (-5)(1) = 2 ≠ 0, the vector u is not orthogonal to n. Thus V is not contained in W.
17. (a) The vector equation c1(1, 1, 0) + c2(0, 1, 2) + c3(2, 1, 3) = (3, 2, -1) is equivalent to the linear system with augmented matrix

    [ 1  0  2 |  3 ]
    [ 1  1  1 |  2 ]
    [ 0  2  3 | -1 ]

The reduced row echelon form of this matrix yields c1 = 13/5, c2 = -4/5, c3 = 1/5. Thus (3, 2, -1) = (13/5)(1, 1, 0) - (4/5)(0, 1, 2) + (1/5)(2, 1, 3) and, by linearity, it follows that

    T(3, 2, -1) = (13/5)T(1, 1, 0) - (4/5)T(0, 1, 2) + (1/5)T(2, 1, 3) = (1/5)(26, 13, -20)

(b) The vector equation c1(1, 1, 0) + c2(0, 1, 2) + c3(2, 1, 3) = (a, b, c) is equivalent to the linear system with augmented matrix

    [ 1  0  2 | a ]
    [ 1  1  1 | b ]
    [ 0  2  3 | c ]

Solving for c1, c2, c3 in terms of a, b, c and using linearity yields a formula for T(a, b, c).
(c) The standard matrix [T] is read off from the formula obtained in part (b).
18. (a) The vector equation c1(1, 1, 1) + c2(1, 1, 0) + c3(1, 0, 0) = (4, 3, 0) has solution c1 = 0, c2 = 3, c3 = 1. Thus (4, 3, 0) = 0(1, 1, 1) + 3(1, 1, 0) + 1(1, 0, 0) and so, by linearity, we have

    T(4, 3, 0) = 0(3, 2, 0, 1) + 3(2, 1, 3, -1) + 1(5, -2, 1, 0) = (11, 1, 10, -3)

(b) The vector equation c1(1, 1, 1) + c2(1, 1, 0) + c3(1, 0, 0) = (a, b, c) is equivalent to the linear system

    [ 1  1  1 ][ c1 ]   [ a ]
    [ 1  1  0 ][ c2 ] = [ b ]
    [ 1  0  0 ][ c3 ]   [ c ]

which has solution c1 = c, c2 = b - c, c3 = a - b. Thus

    (a, b, c) = c(1, 1, 1) + (b - c)(1, 1, 0) + (a - b)(1, 0, 0)

and so, by linearity, we have

    T(a, b, c) = c(3, 2, 0, 1) + (b - c)(2, 1, 3, -1) + (a - b)(5, -2, 1, 0)
               = (5a - 3b + c, -2a + 3b + c, a + 2b - 3c, -b + 2c)

(c) From the formula above, we have

    [T] = [  5  -3   1 ]
          [ -2   3   1 ]
          [  1   2  -3 ]
          [  0  -1   2 ]
19. Since

    det [  1   3   4 ]
        [ -7  -2  -3 ]  =  -100 ≠ 0
        [ -5   8   5 ]

the vectors v1 = (1, -7, -5), v2 = (3, -2, 8), v3 = (4, -3, 5) form a basis for R^3. The vector equation c1v1 + c2v2 + c3v3 = x is equivalent to the linear system with augmented matrix

    [  1   3   4 | x ]
    [ -7  -2  -3 | y ]
    [ -5   8   5 | z ]

and the reduced row echelon form of this matrix is

    [ 1  0  0 | -(7/50)x - (17/100)y + (1/100)z  ]
    [ 0  1  0 | -(1/2)x - (1/4)y + (1/4)z        ]
    [ 0  0  1 | (33/50)x + (23/100)y - (19/100)z ]

Thus a general vector x = (x, y, z) can be expressed in terms of the basis vectors v1, v2, v3 as follows:

    x = (-(7/50)x - (17/100)y + (1/100)z)v1 + (-(1/2)x - (1/4)y + (1/4)z)v2 + ((33/50)x + (23/100)y - (19/100)z)v3
DISCUSSION AND DISCOVERY
01. (a) True. Any set of more than n vectors in Rn is linearly depe ndent.
(b) True. Any set of less than n vectors cannot be spanning set for Rn.
( c) True. 1f every vector in Rn can be expressed in exactly one way as a linear combina tion of
the vectors in S , then S is a busis for Rn and t hus must contain exact ly n vectors.
(d) True. If Ax = 0 Iws infinitely many solutions. then det(A) ::: 0 and so the row vectors of r1
are linearly dependent.
(e) True. If V :; W (or W V) and if dim( V) = dim( W), then V = W .
02. No. If s = {vl , Y2, - . . 'Vn} is a linearly dependent set in Rn, then s is not a spanning set for n n;
r. hus it. is not possible to create a basis by formi ug linear combinat ions of the vectors in S .
03. Each such operat()r corresponds to (and is determined by) a permutation of the vectors in t he
basis B . Thus t here a rc a t.otal of n! such operators.
04. Let A be the matrix having the vectors v
1
and v2 as its columns. Then
dct (A) ""' (sin
2
o- sin
2
/3) - (cos
2
a - cos
2
{3) = - cos 2o + cos 2{3
and det(A) =/= 0 if and only if cos 2a j cos 2/3, i. e., if and only if o =/= /3 + br where k = 0, I ,
2, .. . . For these values of a and {3, the vectors v
1
and v
2
form a basis for R
2
.
D5. Suppose W is a. subspace of Rn and dim( W) = k. If S = {w
1
, w
2
, .. . ,wj } is a spanning set for
W, then eit her S is a basis for W (in which caseS contains exactly k ve<:: tors) or, from Theorem
7.2.2, a. basis for W can be obtained by removing appropriate vectors from S. Thus the number
of clements in a spanning set must be at least k , and the smallest possible number is k.
WORKING WITH PROOFS
P1. Let V be a nonzero subspace of R^n, and let v1 be a nonzero vector in V. If dim(V) = 1, then S = {v1} is a basis for V. Otherwise, from Theorem 7.2.2(b), a basis for V can be obtained by adding appropriate vectors from V to the set S.
P2. Let {v1, v2, ..., vn} be a basis for R^n and, for k any integer between 1 and n, let V = span{v1, v2, ..., vk}. Then S = {v1, v2, ..., vk} is a basis for V and so dim(V) = k. The subspace V = {0} has dimension 0.
P3. Let S = {v1, v2, ..., vn}. Since every vector in R^n can be written as a linear combination of vectors in S, we have span(S) = R^n. Moreover, from the uniqueness, if c1v1 + c2v2 + ⋯ + cnvn = 0 then c1 = c2 = ⋯ = cn = 0. Thus the vectors v1, v2, ..., vn span R^n and are linearly independent, i.e., S = {v1, v2, ..., vn} is a basis for R^n.
P4. Since we know that dim(R^n) = n, it suffices to show that the vectors T(v1), T(v2), ..., T(vn) are linearly independent. This follows from the fact that if c1T(v1) + c2T(v2) + ⋯ + cnT(vn) = 0 then

    T(c1v1 + c2v2 + ⋯ + cnvn) = c1T(v1) + c2T(v2) + ⋯ + cnT(vn) = 0

and so, since T is one-to-one, we must have c1v1 + c2v2 + ⋯ + cnvn = 0. Since v1, v2, ..., vn are linearly independent it follows from this that c1 = c2 = ⋯ = cn = 0. Thus T(v1), T(v2), ..., T(vn) are linearly independent.
P5. Since B = {v1, v2, ..., vn} is a basis for R^n, every vector x in R^n can be expressed as a linear combination x = c1v1 + c2v2 + ⋯ + cnvn for exactly one choice of scalars c1, c2, ..., cn. Thus it makes sense to define a transformation T: R^n → R^n by setting

    T(x) = T(c1v1 + c2v2 + ⋯ + cnvn) = c1w1 + c2w2 + ⋯ + cnwn

It is easy to check that T is linear. For example, if x = c1v1 + ⋯ + cnvn and y = d1v1 + ⋯ + dnvn, then

    T(x + y) = T((c1 + d1)v1 + ⋯ + (cn + dn)vn) = (c1 + d1)w1 + ⋯ + (cn + dn)wn
             = (c1w1 + ⋯ + cnwn) + (d1w1 + ⋯ + dnwn) = Tx + Ty

and so T is additive. Finally, the transformation T has the property that Tvj = wj for each j = 1, ..., n, and it is clear from the defining formula that T is uniquely determined by this property.
P6. (a) Since {u1, u2, u3} has the correct number of elements, we need only show that the vectors are linearly independent. Suppose c1, c2, c3 are scalars such that c1u1 + c2u2 + c3u3 = 0. Then (c1 + c2 + c3)v1 + (c2 + c3)v2 + c3v3 = 0 and, since {v1, v2, v3} is a linearly independent set, we must have c1 + c2 + c3 = c2 + c3 = c3 = 0. It follows that c1 = c2 = c3 = 0, and this shows that u1, u2, u3 are linearly independent.
(b) If {v1, v2, ..., vn} is a basis for R^n, then so is {u1, u2, ..., un} where uk = v1 + v2 + ⋯ + vk for k = 1, ..., n.
P7. Suppose x is an eigenvector of A. Then x ≠ 0, and Ax = λx for some scalar λ. It follows that span{x, Ax} = span{x, λx} = span{x} and so span{x, Ax} has dimension 1.
Conversely, suppose that span{x, Ax} has dimension 1. Then the vectors x ≠ 0 and Ax are linearly dependent; thus there exist scalars c1 and c2, not both zero, such that c1x + c2Ax = 0. We note further that c2 ≠ 0, for if c2 = 0 then since x ≠ 0 we would have c1 = 0 also. Thus Ax = λx where λ = -c1/c2.
P8. Suppose S = {v1, v2, ..., vk} is a basis for V, where V ⊆ W and dim(V) = dim(W). Then S is a linearly independent set in W, and it follows that S must be a basis for W. Otherwise, from Theorem 7.2.2, a basis for W could be obtained by adding additional vectors from W to S, and this would violate the assumption that dim(V) = dim(W). Finally, since S is a basis for W and S ⊆ V, we must have W = span(S) ⊆ V and so W = V.
EXERCISE SET 7.3
1. The orthogonal complement of S = {v1, v2} is the solution set of the system

    x + y + 3z = 0
        2y -  z = 0

A general solution of this system is given by x = -(7/2)t, y = (1/2)t, z = t, or (x, y, z) = t(-7/2, 1/2, 1). Thus S⊥ is the line through the origin that is parallel to the vector (-7/2, 1/2, 1).
Alternative solution: a vector that is orthogonal to both v1 and v2 is

    w = v1 × v2 = det [ i  j   k ]
                      [ 1  1   3 ]  =  -7i + j + 2k  =  (-7, 1, 2)
                      [ 0  2  -1 ]

Note that the vector w is parallel to the one obtained in our first solution. Thus S⊥ is the line through the origin that is parallel to (-7, 1, 2) or, equivalently, parallel to (-7/2, 1/2, 1).
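The cross-product shortcut in Exercise 1 works in general: for two independent vectors in R^3, v1 × v2 spans {v1, v2}⊥. A quick check with the vectors from the solution:

```python
import numpy as np

v1 = np.array([1.0, 1.0, 3.0])
v2 = np.array([0.0, 2.0, -1.0])

w = np.cross(v1, v2)
assert np.allclose(w, [-7.0, 1.0, 2.0])

# w is orthogonal to both spanning vectors, so it spans S-perp.
assert np.isclose(w @ v1, 0)
assert np.isclose(w @ v2, 0)
```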
2. The orthogonal complement of S = {v1, v2} is the solution set of the system

    2x     -  z = 0
     x + y + 5z = 0

A general solution of this system is given by x = (1/2)t, y = -(11/2)t, z = t, or (x, y, z) = t(1/2, -11/2, 1). Thus S⊥ is the line through the origin that is parallel to the vector (1/2, -11/2, 1).
Alternative solution: a vector that is orthogonal to both v1 and v2 is

    w = v1 × v2 = det [ i  j   k ]
                      [ 2  0  -1 ]  =  i - 11j + 2k  =  (1, -11, 2)
                      [ 1  1   5 ]

Note that the vector w is parallel to the one obtained in our first solution. Thus S⊥ is the line through the origin that is parallel to (1, -11, 2) or, equivalently, parallel to (1/2, -11/2, 1).
3. We have u · v1 = (-1)(6) + (1)(2) + (0)(7) + (2)(2) = 0. Similarly, u · v2 = 0 and u · v3 = 0. Thus u is orthogonal to any linear combination of the vectors v1, v2, and v3, i.e., u belongs to the orthogonal complement of W = span{v1, v2, v3}.
4. We have u · v1 = (0)(4) + (2)(1) + (1)(2) + (-2)(2) = 0, u · v2 = 0, and u · v3 = 0. Thus u is orthogonal to any linear combination of the vectors v1, v2, and v3, i.e., u is in the orthogonal complement of W.
5. The line y = 2x corresponds to vectors of the form u = t(1, 2), i.e., W = span{(1, 2)}. Thus W⊥ corresponds to the line y = -x/2 or, equivalently, to vectors of the form w = s(2, -1).
6. Here W⊥ is the line which is normal to the plane x - 2y - 3z = 0 and passes through the origin. A normal vector to this plane is n = (1, -2, -3); thus parametric equations for W⊥ are given by x = t, y = -2t, z = -3t.
7. The line W corresponds to scalar multiples of the vector u = (2, -5, 4); thus a vector w = (x, y, z) is in W⊥ if and only if u · w = 2x - 5y + 4z = 0. Parametric equations for this plane are x = (5/2)s - 2t, y = s, z = t.
8. Here W is the line of intersection of the planes x + y + z = 0 and x - y + z = 0. Parametric equations for this line are x = -t, y = 0, z = t; thus W corresponds to vectors of the form t(-1, 0, 1). It follows that w = (x, y, z) is in W⊥ if and only if x = z, i.e., if and only if w is of the form

    w = (r, s, r) = r(1, 0, 1) + s(0, 1, 0)

Parametric equations for this plane are given by x = r, y = s, z = r.
9. Let A be the matrix having the given vectors as its rows. The reduced row echelon form of A is

    R = [ 1  0  -16 ]
        [ 0  1  -19 ]
        [ 0  0    0 ]

Thus the vectors w1 = (1, 0, -16) and w2 = (0, 1, -19) form a basis for the row space of A or, equivalently, for the subspace W spanned by the given vectors.
We have W⊥ = row(A)⊥ = null(A) = null(R). Thus, from the above, we conclude that W⊥ consists of all vectors of the form x = t(16, 19, 1), i.e., the vector u = (16, 19, 1) forms a basis for W⊥.
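The row-reduction argument of Exercise 9 (W⊥ = row(A)⊥ = null(A)) can be sketched numerically: the two row-space basis vectors found above span W, and (16, 19, 1) lies in their common nullspace.

```python
import numpy as np

# The row-space basis found in the solution; these rows span W.
A = np.array([[1.0, 0.0, -16.0],
              [0.0, 1.0, -19.0]])

u = np.array([16.0, 19.0, 1.0])

# u lies in null(A) = row(A)-perp = W-perp ...
assert np.allclose(A @ u, 0)

# ... and W-perp is 1-dimensional (rank 2 in R^3), so u is a basis for it.
assert np.linalg.matrix_rank(A) == 2
```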
10. Let A be the matrix having the given vectors as its rows. This matrix can be row reduced to

    U = [ 2  0  -1 ]
        [ 0  1   0 ]

Thus the vectors w1 = (2, 0, -1) and w2 = (0, 1, 0) form a basis for the row space of A or, equivalently, for the subspace W spanned by the given vectors.
We have W⊥ = row(A)⊥ = null(A) = null(U). Thus, from the above, we conclude that W⊥ consists of all vectors of the form x = t(1/2, 0, 1), i.e., the vector u = (1/2, 0, 1) forms a basis for W⊥.
11. Let A be the matrix having the given vectors as its rows. The reduced row echelon form of A is

    R = [ 1  0  0  0 ]
        [ 0  1  0  0 ]
        [ 0  0  1  1 ]

Thus the vectors w1 = (1, 0, 0, 0), w2 = (0, 1, 0, 0), and w3 = (0, 0, 1, 1) form a basis for the row space of A or, equivalently, for the space W spanned by the given vectors.
We have W⊥ = row(A)⊥ = null(A) = null(R). Thus, from the above, we conclude that W⊥ consists of all vectors of the form x = (0, 0, -t, t), i.e., the vector u = (0, 0, -1, 1) forms a basis for W⊥.
12. Let A be the matrix having the given vectors as its rows. The reduced row echelon form of A is

    R = [ 1  0  0   0  ]
        [ 0  1  0  3/2 ]
        [ 0  0  1   0  ]
        [ 0  0  0   0  ]

Thus the vectors w1 = (1, 0, 0, 0), w2 = (0, 1, 0, 3/2), and w3 = (0, 0, 1, 0) form a basis for the row space of A or, equivalently, for the space W spanned by the given vectors.
We have W⊥ = row(A)⊥ = null(A) = null(R). Thus, from the above, we conclude that W⊥ consists of all vectors of the form x = (0, -(3/2)t, 0, t), i.e., the vector u = (0, -3/2, 0, 1) forms a basis for W⊥.
13. Let A be the matrix having the given vectors as its rows. The reduced row echelon form of A is

    R = [ 1  0  0  2  0 ]
        [ 0  1  0  3  0 ]
        [ 0  0  1  4  0 ]
        [ 0  0  0  0  1 ]

Thus the vectors w1 = (1, 0, 0, 2, 0), w2 = (0, 1, 0, 3, 0), w3 = (0, 0, 1, 4, 0), and w4 = (0, 0, 0, 0, 1) form a basis for the row space of A or, equivalently, for the space W spanned by the given vectors.
We have W⊥ = row(A)⊥ = null(A) = null(R). Thus, from the above, we conclude that W⊥ consists of all vectors of the form x = (-2t, -3t, -4t, t, 0), i.e., the vector u = (-2, -3, -4, 1, 0) forms a basis for W⊥.
14. Let A be the matrix having the given vectors as its rows. The reduced row echelon form of A is
1 0 0 0 -243
0 1 0 0 57
R= 0 0 1 0 -7
0 0 0 1 2
0 0 0 0 0
Thus the vectors w1 = (1, 0, 0, 0, -243), w2 = (0, 1, 0, 0, 57), W3 = (0, 0, 1, O, -7), W4 = (0, 0, 0, 1, 2)
form a basis for the row space of A or, equivalently, for the space W spanned by the given vectors.
We have Wi = row(A).l = null(A) = null(R). Thus, from the above, we conclude that W.J. consists
of all vectors of the form x = (243t, -57t, 7t, - 2t, t}, i.e., the vector u = (243, - 57, 7, -2, 1) forms a.
basis for W J..
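Since this is a quick computation, the orthogonality claim above can be verified numerically; the following sketch (plain Python, using the vectors found in Exercise 14) checks that u is orthogonal to each wi:

```python
# Sanity check for Exercise 14: u should be orthogonal to every
# row-space basis vector, since u spans W-perp = row(A)-perp.
def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

w = [
    (1, 0, 0, 0, -243),
    (0, 1, 0, 0, 57),
    (0, 0, 1, 0, -7),
    (0, 0, 0, 1, 2),
]
u = (243, -57, 7, -2, 1)

for wi in w:
    print(dot(wi, u))  # each dot product is 0
```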
15. In Exercise 9 we found that the vector u = (16, 19, 1) forms a basis for W⊥; thus W = W⊥⊥ consists of the set of all vectors which are orthogonal to u, i.e., vectors x = (x, y, z) satisfying 16x + 19y + z = 0.
16. In Exercise 10 we found that the vector u = (1/2, 0, 1) forms a basis for W⊥; thus W = W⊥⊥ consists of the set of all vectors which are orthogonal to u, i.e., vectors x = (x, y, z) satisfying x + 2z = 0.
17. In Exercise 11 we found that the vector u = (0, 0, -1, 1) forms a basis for W⊥; thus W = W⊥⊥ consists of the set of all vectors which are orthogonal to u, i.e., vectors x = (x1, x2, x3, x4) satisfying -x3 + x4 = 0.
18. In Exercise 12 we found that the vector u = (0, -3/2, 0, 1) forms a basis for W⊥; thus W = W⊥⊥ consists of the set of all vectors which are orthogonal to u, i.e., vectors x = (x1, x2, x3, x4) satisfying 3x2 - 2x4 = 0.
19. Solution 1. Let A be the matrix having the given vectors as its columns. Then a (column) vector b is a linear combination of v1, v2, v3 if and only if the linear system Ax = b is consistent. The augmented matrix of this system is

[ 1  5  7 | b1]
[-1 -4 -6 | b2]
[ 3 -4  2 | b3]

and a row echelon form for this matrix is

[1 5 7 | b1              ]
[0 1 1 | b1 + b2         ]
[0 0 0 | 16b1 + 19b2 + b3]

From this we conclude that b = (b1, b2, b3) lies in the space spanned by the given vectors if and only if 16b1 + 19b2 + b3 = 0.
Solution 2. The matrix

[v1]   [ 1 -1  3]
[v2] = [ 5 -4 -4]
[v3]   [ 7 -6  2]
[b ]   [b1 b2 b3]

can be row reduced to

[ 1  0 -16]
[ 0  1 -19]
[ 0  0   0]
[b1 b2  b3]

and then further reduced to

[1 0 -16]
[0 1 -19]
[0 0   0]
[0 0  16b1 + 19b2 + b3]

From this we conclude that W = span{v1, v2, v3} has dimension 2, and that b = (b1, b2, b3) is in W if and only if 16b1 + 19b2 + b3 = 0.
Solution 3. In Exercise 9 we found that the vector u = (16, 19, 1) forms a basis for W⊥. Thus b = (b1, b2, b3) is in W = W⊥⊥ if and only if u · b = 0, i.e., if and only if 16b1 + 19b2 + b3 = 0.
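The membership test of Solution 3 is easy to automate; in the sketch below the entries of v1, v2, v3 are taken from the augmented matrix of Solution 1, so treat them as an assumption about the original data:

```python
# Membership test for W = span{v1, v2, v3}: b is in W iff u . b = 0,
# where u = (16, 19, 1) spans the orthogonal complement W-perp.
def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

u = (16, 19, 1)
v1, v2, v3 = (1, -1, 3), (5, -4, -4), (7, -6, 2)  # assumed from Solution 1

# Each spanning vector must itself satisfy the criterion:
print([dot(u, v) for v in (v1, v2, v3)])  # [0, 0, 0]

# A vector violating 16*b1 + 19*b2 + b3 = 0 is not in W:
print(dot(u, (1, 0, 0)))  # 16, so (1, 0, 0) is not in W
```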


20. Solution 1. Let A be the matrix having the given vectors as its columns. Then a (column) vector b is a linear combination of v1, v2, v3 if and only if the linear system Ax = b is consistent. Row reducing the augmented matrix of this system (details omitted) shows that a vector b = (b1, b2, b3) lies in the space spanned by the given vectors if and only if b1 + 2b3 = 0.
Solution 2. Row reducing the matrix whose rows are v1, v2, v3, b, as in Exercise 19 (details omitted), shows that W = span{v1, v2, v3} has dimension 2, and that b = (b1, b2, b3) is in W if and only if b1 + 2b3 = 0.
Solution 3. In Exercise 10 we found that the vector u = (1/2, 0, 1) forms a basis for W⊥. Thus b = (b1, b2, b3) is in W = W⊥⊥ if and only if u · b = 0, i.e., if and only if b1 + 2b3 = 0.
21. Solution 1. Let A be the matrix having the given vectors as its columns. Then a (column) vector b is a linear combination of v1, v2, v3, v4 if and only if the linear system Ax = b is consistent. Row reducing the augmented matrix of this system (details omitted) shows that a vector b = (b1, b2, b3, b4) lies in the space spanned by the given vectors if and only if -b3 + b4 = 0.
Solution 2. Row reducing the matrix whose rows are v1, v2, v3, v4, b (details omitted) shows that W = span{v1, v2, v3, v4} has dimension 3, and that b = (b1, b2, b3, b4) is in W if and only if -b3 + b4 = 0.
Solution 3. In Exercise 11 we found that the vector u = (0, 0, -1, 1) forms a basis for W⊥. Thus b = (b1, b2, b3, b4) is in W = W⊥⊥ if and only if u · b = 0, i.e., if and only if -b3 + b4 = 0.
22. Solution 1. Let A be the matrix having the gi ven vectors as its columns. Then a (column) vector
b is a linear combination of these vectors if a.nd only if the linear system Ax = b is consistent. The
augmented mat.rix of this system is
[
]
3 5
b, ]
2 6 4 I b2
I
- 2 -3 -1
I
03
I
3 9 6
I
04
I
and a row echelon form for this matrix is

i ! b3 l
1 9 23 ! 5bl - 2b:l
0 0 t:-!bl+ i b2
0 0 0 : ! b4 - ! b2
I 3 2
Thus b = (b;. b
3
, b
4
) lies in the space spanned by t he given vectors i and only )f 2b4 = 0.
22. Solution 1. Let A be the matrix having the given vectors as its columns. Then a (column) vector b is a linear combination of these vectors if and only if the linear system Ax = b is consistent. Row reducing the augmented matrix of this system (details omitted) shows that b = (b1, b2, b3, b4) lies in the space spanned by the given vectors if and only if 3b2 - 2b4 = 0.
Solution 2. Row reducing the matrix whose rows are v1, v2, v3, b (details omitted) shows that W = span{v1, v2, v3} has dimension 3, and that b = (b1, b2, b3, b4) is in W if and only if 3b2 - 2b4 = 0.
Solution 3. In Exercise 12 we found that the vector u = (0, -3/2, 0, 1) forms a basis for W⊥. Thus b = (b1, b2, b3, b4) is in W = W⊥⊥ if and only if u · b = -(3/2)b2 + b4 = 0, i.e., if and only if 3b2 - 2b4 = 0.
23. The augmented matrix [v1 v2 v3 v4 | b1 | b2 | b3] is row reduced (details omitted); from the reduced row echelon form we conclude that the vectors b1 and b2 lie in span{v1, v2, v3, v4}, but b3 does not.
24. The augmented matrix [v1 v2 v3 v4 | b1 | b2 | b3] is row reduced (details omitted); from the reduced row echelon form we conclude that the vectors b1 and b2 lie in span{v1, v2, v3, v4}, but b3 does not.
25. The reduced row echelon form of the matrix A is

R = [1 3 0 4 0 0]
    [0 0 1 2 0 0]
    [0 0 0 0 1 0]
    [0 0 0 0 0 1]

Thus the vectors r1 = (1, 3, 0, 4, 0, 0), r2 = (0, 0, 1, 2, 0, 0), r3 = (0, 0, 0, 0, 1, 0), r4 = (0, 0, 0, 0, 0, 1) form a basis for the row space of A. We also conclude from an inspection of R that the null space of A (solutions of Ax = 0) consists of vectors of the form

x = s(-3, 1, 0, 0, 0, 0) + t(-4, 0, -2, 1, 0, 0)

Thus the vectors n1 = (-3, 1, 0, 0, 0, 0) and n2 = (-4, 0, -2, 1, 0, 0) form a basis for the null space of A. It is easy to check that ri · nj = 0 for all i, j. Thus row(A) and null(A) are orthogonal subspaces of R^6.
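The orthogonality of row(A) and null(A) claimed above can be confirmed directly from the two bases:

```python
# Exercise 25 check: every row-space basis vector is orthogonal
# to every null-space basis vector, so row(A) and null(A) are orthogonal.
def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

rows = [
    (1, 3, 0, 4, 0, 0),
    (0, 0, 1, 2, 0, 0),
    (0, 0, 0, 0, 1, 0),
    (0, 0, 0, 0, 0, 1),
]
nulls = [(-3, 1, 0, 0, 0, 0), (-4, 0, -2, 1, 0, 0)]

assert all(dot(r, n) == 0 for r in rows for n in nulls)
```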

26. The reduced row echelon form R of the matrix A has three nonzero rows, and these rows r1, r2, r3 form a basis for the row space of A. We also conclude from an inspection of R that the null space of A (solutions of Ax = 0) is two-dimensional, with basis vectors n1 and n2 obtained by assigning the two free variables the values 1, 0 and 0, 1 respectively.
It is easy to check that ri · nj = 0 for all i, j. Thus row(A) and null(A) are orthogonal subspaces of R^5.
27. The vectors r1 = (1, 3, 0, 4, 0, 0), r2 = (0, 0, 1, 2, 0, 0), r3 = (0, 0, 0, 0, 1, 0), r4 = (0, 0, 0, 0, 0, 1) form a basis for the row space of A (see Exercise 25).
28. The nonzero rows r1, r2, r3 of the reduced row echelon form of A form a basis for the row space of A (see Exercise 26).
29. The reduced row echelon forms of A and B have the same nonzero rows. Thus the vectors r1 = (1, 0, 3, 0), r2 = (0, 1, 2, 0), r3 = (0, 0, 0, 1) form a basis for both of the row spaces. It follows that row(A) = row(B).
30. The reduced row echelon forms of A and B have the same nonzero rows. Thus the vectors r1 = (1, 0, 0, 5), r2, and r3 = (0, 0, 1, 3) form a basis for both of the row spaces. It follows that row(A) = row(B).
31. Let B be the matrix having the given vectors as its rows. The reduced row echelon form of B has nonzero rows

R = [1 0 -1 2]
    [0 1 -4 0]

and from this it follows that a general solution of Bx = 0 is given by x = s(1, 4, 1, 0) + t(-2, 0, 0, 1), where -oo < s, t < oo. Thus the vectors w1 = (1, 4, 1, 0) and w2 = (-2, 0, 0, 1) form a basis for the null space of B, and so the matrix

A = [ 1 4 1 0]
    [-2 0 0 1]

has the property that null(A) = row(A)⊥ = null(B)⊥ = row(B)⊥⊥ = row(B).
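With R reconstructed as above (an assumption where the print is unclear), one can confirm that w1 and w2 are orthogonal to the rows of R, as they must be since null(B) = row(B)⊥:

```python
# Exercise 31 check: the null-space basis vectors of B are orthogonal
# to the rows of its reduced row echelon form R.
def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

R = [(1, 0, -1, 2), (0, 1, -4, 0)]   # reconstructed nonzero rows of RREF(B)
W = [(1, 4, 1, 0), (-2, 0, 0, 1)]    # basis for null(B)

assert all(dot(r, w) == 0 for r in R for w in W)
```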
32. (a) If A = [1 0 0; 0 1 0; 0 0 0] and x = (x, y, z), then Ax = 0 if and only if x = y = 0. Thus the null space of A corresponds to points on the z-axis. On the other hand, the column space of A consists of all vectors of the form (x, y, 0), which corresponds to points on the xy-plane.
(b) The matrix B = [0 0 0; 0 0 0; 0 0 1] has the specified null space and column space.
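Assuming the diagonal matrices chosen above, a quick check of the claimed null and column spaces:

```python
# Check of Exercise 32 (assuming A = diag(1,1,0) and B = diag(0,0,1)):
# A kills exactly the z-axis; its image is the xy-plane. B does the reverse.
def matvec(M, v):
    return tuple(sum(r[i] * v[i] for i in range(len(v))) for r in M)

A = [(1, 0, 0), (0, 1, 0), (0, 0, 0)]
B = [(0, 0, 0), (0, 0, 0), (0, 0, 1)]

assert matvec(A, (0, 0, 5)) == (0, 0, 0)   # z-axis is in null(A)
assert matvec(A, (2, 3, 7)) == (2, 3, 0)   # image of A lies in the xy-plane
assert matvec(B, (2, 3, 0)) == (0, 0, 0)   # xy-plane is in null(B)
assert matvec(B, (0, 0, 7)) == (0, 0, 7)   # image of B is the z-axis
```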
DISCUSSION AND DISCOVERY
D1. (a) False. This statement is true if and only if the nonzero rows of A are linearly independent, i.e., if there are no additional zero rows in its echelon form.
(b) True. If E is an elementary matrix, then E is invertible and so EAx = 0 if and only if Ax = 0.
(c) True. If A has rank n, then there can be no zero rows in an echelon form for A; thus the reduced row echelon form is the identity matrix.
(d) False. For example, if m = n and A is invertible, then row(A) = R^n and null(A) = {0}.
(e) True. This follows from the fact (Theorem 7.3.3) that S⊥ is a subspace of R^n.
D2. (a) True. If A is invertible, then the rows of A and the columns of A each form a basis for R^n; thus row(A) = col(A) = R^n.
(b) False. In fact, the opposite is true: if W is a subspace of V, then V⊥ is a subspace of W⊥, since every vector that is orthogonal to V will also be orthogonal to W.
(c) False. The specified condition implies that row(A) ⊆ row(B), from which it follows that
null(A) = row(A)⊥ ⊇ row(B)⊥ = null(B)
but in general the inclusion can be proper.
(d) False. For example, A = [1 0; 0 0] and B = [0 0; 1 0] have the same row space but different column spaces.
(e) True. This is in fact true for any invertible matrix E. The rows of EA are linear combinations of the rows of A; thus it is always true that row(EA) ⊆ row(A). If E is invertible, then A = E^-1(EA) and so we also have row(A) ⊆ row(EA).
D3. If null(A) is the line 3x - 5y = 0, then row(A) = null(A)⊥ is the line 5x + 3y = 0. Thus each row of A must be a scalar multiple of the vector (3, -5), i.e., A is of the form A = [3s -5s; 3t -5t].
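A short check that every matrix of the stated form annihilates the direction vector (5, 3) of the line 3x - 5y = 0:

```python
# D3 check: any matrix [[3s, -5s], [3t, -5t]] sends the direction
# vector (5, 3) of the line 3x - 5y = 0 to zero, for any s and t.
def apply(s, t, v):
    A = [(3 * s, -5 * s), (3 * t, -5 * t)]
    return tuple(r[0] * v[0] + r[1] * v[1] for r in A)

for s, t in [(1, 2), (-4, 7), (0, 3)]:
    assert apply(s, t, (5, 3)) == (0, 0)
```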
D4. The null space of A corresponds to the kernel of T_A, and the column space of A corresponds to the range of T_A.
D5. If W = a⊥, then W⊥ = a⊥⊥ = span{a} is the 1-dimensional subspace spanned by the vector a.
D6. (a) If null(A) is a line through the origin, then row(A) = null(A)⊥ is the plane through the origin that is perpendicular to that line.
(b) If col(A) is a line through the origin, then null(A^T) = col(A)⊥ is the plane through the origin that is perpendicular to that line.
D7. The first two matrices are invertible; thus in each case the null space is {0}. The null space of the third matrix is the line 3x + y = 0, and the null space of the last matrix is all of R^2.
D8. (a) Since S has equation y = 3x, S⊥ has equation y = -x/3 and S⊥⊥ = S has equation y = 3x.
(b) If S = {(1, 2)}, then span(S) has equation y = 2x; thus S⊥ has equation y = -x/2 and S⊥⊥ = span(S) has equation y = 2x.
D9. No, this is not possible. The row space of an invertible n x n matrix is all of R^n since its rows form a basis for R^n. On the other hand, the row space of a singular matrix is a proper subspace of R^n since its rows are linearly dependent and do not span all of R^n.
WORKING WITH PROOFS
P1. It is clear, from the definition of row space, that (a) implies (c). Conversely, suppose that (c) holds. Then, since each row of A is a linear combination of the rows of B, it follows that any linear combination of the rows of A can be expressed as a linear combination of the rows of B. This shows that row(A) ⊆ row(B), and a similar argument shows that row(B) ⊆ row(A). Thus (a) holds.
P2. The row vectors of an invertible matrix are linearly independent and so, since there are exactly n of them, they form a basis for R^n.
P3. If P is invertible, then (PA)x = P(Ax) = 0 if and only if Ax = 0; thus the matrices PA and A have the same null space, and so nullity(PA) = nullity(A). From Theorem 7.3.8, it follows that PA and A also have the same row space. Thus rank(PA) = dim(row(PA)) = dim(row(A)) = rank(A).
P4. From Theorem 7.3.4 we have S⊥ = span(S)⊥, and (span(S)⊥)⊥ = span(S) since span(S) is a subspace. Thus (S⊥)⊥ = (span(S)⊥)⊥ = span(S).
P5. We have AA^-1 = [r_i(A) · c_j(A^-1)] = I; that is, r_i(A) · c_j(A^-1) = 1 if i = j and r_i(A) · c_j(A^-1) = 0 if i ≠ j. In particular, if i ∈ {1, 2, ..., k} and j ∈ {k + 1, k + 2, ..., n}, then i < j and so r_i(A) · c_j(A^-1) = 0. This shows that the first k rows of A and the last n - k columns of A^-1 are orthogonal.
EXERCISE SET 7.4
1. The reduced row echelon form for A is

[1 0 -16]
[0 1 -19]
[0 0   0]

Thus rank(A) = 2, and a general solution of Ax = 0 is given by x = (16t, 19t, t) = t(16, 19, 1). It follows that nullity(A) = 1. Thus rank(A) + nullity(A) = 2 + 1 = 3, the number of columns of A.
2. The reduced row echelon form for A has the single nonzero row (1, 0, -1/2). Thus rank(A) = 1, and a general solution of Ax = 0 is given by x = ((1/2)t, s, t) = s(0, 1, 0) + t(1/2, 0, 1). It follows that nullity(A) = 2. Thus rank(A) + nullity(A) = 1 + 2 = 3, the number of columns of A.
3. The reduced row echelon form for A has two nonzero rows. Thus rank(A) = 2, and a general solution of Ax = 0 involves two free parameters, one of the corresponding basis vectors being (-1, -1, 1, 0). It follows that nullity(A) = 2. Thus rank(A) + nullity(A) = 2 + 2 = 4, the number of columns of A.
4. The reduced row echelon form for A has nonzero rows (1, 0, 1, 2, 1) and (0, 1, 1, 1, 2). Thus rank(A) = 2, and a general solution of Ax = 0 is given by

x = r(-1, -1, 1, 0, 0) + s(-2, -1, 0, 1, 0) + t(-1, -2, 0, 0, 1)

It follows that nullity(A) = 3. Thus rank(A) + nullity(A) = 2 + 3 = 5.
5. The reduced row echelon form for A has three nonzero rows. Thus rank(A) = 3, and a general solution of Ax = 0 involves two free parameters, one of the corresponding basis vectors being (-2, 0, 0, 1, 0). It follows that nullity(A) = 2. Thus rank(A) + nullity(A) = 3 + 2 = 5.
6. The row echelon form for A has two nonzero rows. Thus rank(A) = 2, and a general solution of Ax = 0 involves five free parameters. It follows that nullity(A) = 5. Thus rank(A) + nullity(A) = 2 + 5 = 7.
7. (a) If A is a 5 x 8 matrix having rank 3, then its nullity must be 8 - 3 = 5. Thus there are 3 pivot variables and 5 free parameters in a general solution of Ax = 0.
(b) If A is a 7 x 4 matrix having nullity 2, then its rank must be 4 - 2 = 2. Thus there are 2 pivot variables and 2 free parameters in a general solution of Ax = 0.
(c) If A is a 6 x 6 matrix whose row echelon forms have 2 nonzero rows, then A has rank 2 and nullity 6 - 2 = 4. Thus there are 2 pivot variables and 4 free parameters in a general solution of Ax = 0.
8. (a) If A is a 7 x 9 matrix having rank 5, then its nullity must be 9 - 5 = 4. Thus there are 5 pivot variables and 4 free parameters in a general solution of Ax = 0.
(b) If A is an 8 x 6 matrix having nullity 3, then its rank must be 6 - 3 = 3. Thus there are 3 pivot variables and 3 free parameters in a general solution of Ax = 0.
(c) If A is a 7 x 7 matrix whose row echelon forms have 3 nonzero rows, then A has rank 3 and nullity 7 - 3 = 4. Thus there are 3 pivot variables and 4 free parameters in a general solution of Ax = 0.
9. (a) If A is a 5 x 3 matrix, then the largest possible value for rank(A) is 3 and the smallest possible value for nullity(A) is 0.
(b) If A is a 3 x 5 matrix, then the largest possible value for rank(A) is 3 and the smallest possible value for nullity(A) is 2.
(c) If A is a 4 x 4 matrix, then the largest possible value for rank(A) is 4 and the smallest possible value for nullity(A) is 0.
10. (a) If A is a 6 x 4 matrix, then the largest possible value for rank(A) is 4 and the smallest possible value for nullity(A) is 0.
(b) If A is a 2 x 6 matrix, then the largest possible value for rank(A) is 2 and the smallest possible value for nullity(A) is 4.
(c) If A is a 5 x 5 matrix, then the largest possible value for rank(A) is 5 and the smallest possible value for nullity(A) is 0.
11. Let A be the 2 x 4 matrix having v1 and v2 as its rows. Then from the reduced row echelon form of A, a general solution of Ax = 0 has the form x = s w3 + t w4, where w3 and w4 are the null-space basis vectors corresponding to the two free variables. The vectors v1, v2, w3, w4 then form a basis for R^4.
12. Let A be the 3 x 5 matrix having v1, v2, and v3 as its rows. Then from the reduced row echelon form of A, a general solution of Ax = 0 has the form x = s w4 + t w5, where w4 and w5 are the null-space basis vectors corresponding to the two free variables. The vectors v1 = (1, 0, -2, 3, -5), v2, and v3 = (4, 1, -3, 0, 5), together with w4 and w5, form a basis for R^5.
13. (a) This matrix is of rank 1 and can be written in the form A = uv^T.
(b) This matrix is of rank 2.
(c) This matrix is of rank 1 and can be written in the form A = uv^T.
14. (a) This matrix is of rank 2.
(b) This matrix is of rank 1 and can be written in the form A = uv^T.
(c) This matrix is of rank 1 with

A = [-6 10 12 -6  8]   [ 2]
    [-6 10 12 -6  8] = [ 2] [-3 5 6 -3 4] = uv^T
    [-3  5  6 -3  4]   [ 1]
    [ 3 -5 -6  3 -4]   [-1]

15. The computed matrix (details omitted) is a symmetric matrix.
16. The computed matrix (details omitted) is a symmetric matrix.
17. The matrix A can be row reduced to upper triangular form as follows:

[1 1 t]    [1   1         t    ]
[1 t 1] -> [0 t-1       1-t    ]
[t 1 1]    [0   0  (2+t)(1-t)  ]

If t = 1, then the latter matrix has only one nonzero row and so rank(A) = 1. If t = -2, then there are two nonzero rows and rank(A) = 2. If t ≠ 1 or -2, then there are three nonzero rows and rank(A) = 3.
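The case analysis can be confirmed with exact arithmetic; the sketch below assumes the reconstructed matrix A(t) = [1 1 t; 1 t 1; t 1 1] and computes its rank by Gaussian elimination over the rationals:

```python
# Rank of A(t) at the critical values t = 1 and t = -2, and at a
# generic value, using exact arithmetic via Fraction.
from fractions import Fraction

def rank(rows):
    m = [[Fraction(x) for x in r] for r in rows]
    r = 0
    for c in range(3):
        piv = next((i for i in range(r, 3) if m[i][c] != 0), None)
        if piv is None:
            continue
        m[r], m[piv] = m[piv], m[r]
        m[r] = [x / m[r][c] for x in m[r]]
        for i in range(3):
            if i != r and m[i][c] != 0:
                m[i] = [a - m[i][c] * b for a, b in zip(m[i], m[r])]
        r += 1
    return r

def A(t):
    return [[1, 1, t], [1, t, 1], [t, 1, 1]]

print(rank(A(1)), rank(A(-2)), rank(A(5)))  # 1 2 3
```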
18. The matrix A can be row reduced (details omitted) to an upper triangular form whose second row is (0, -3, -2 + 3t) and whose third row is (0, 0, -(t - 1)(2t - 3)). If t = 1 or t = 3/2, then the third row of the latter matrix is a zero row and rank(A) = 2. For all other values of t there are no zero rows, and rank(A) = 3.
19. If the matrix

[x y z]
[1 x y]

has rank 1, then the first row must be a scalar multiple of the second row. Thus (x, y, z) = t(1, x, y), and so x = t, y = tx = t^2, z = ty = t^3.
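In other words, the rank 1 condition forces (x, y, z) onto the curve (t, t^2, t^3); a quick check:

```python
# Exercise 19 check: for (x, y, z) = (t, t^2, t^3) the first row of
# [[x, y, z], [1, x, y]] is exactly t times the second row.
for t in (-3, 0, 2, 7):
    x, y, z = t, t**2, t**3
    row1, row2 = (x, y, z), (1, x, y)
    assert row1 == tuple(t * e for e in row2)
```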
20. Let A be the 3 x 3 matrix having v1, v2, v3 as its rows. The reduced row echelon form of A is

R = [1 0  5]
    [0 1 -4]
    [0 0  0]

thus the vectors w1 = (1, 0, 5) and w2 = (0, 1, -4) form a basis for W = row(A) = row(R). We have Ax = 0 if and only if Rx = 0, and a general solution of the latter is given by x = (-5t, 4t, t) = t(-5, 4, 1); thus the vector x1 = (-5, 4, 1) forms a basis for W⊥ = row(A)⊥. Finally, dim(W) + dim(W⊥) = 2 + 1 = 3.
21. The subspace W, consisting of all vectors of the form x = t(2, -1, -3), has dimension 1. The subspace W⊥ is the hyperplane consisting of all vectors x = (x, y, z) which are orthogonal to (2, -1, -3), i.e., which satisfy the equation

2x - y - 3z = 0

A general solution of the latter is given by

x = (s, 2s - 3t, t) = s(1, 2, 0) + t(0, -3, 1)

where -oo < s, t < oo. Thus the vectors (1, 2, 0) and (0, -3, 1) form a basis for W⊥, and we have dim(W) + dim(W⊥) = 1 + 2 = 3.
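A quick check that the two basis vectors found for W⊥ are orthogonal to (2, -1, -3) and independent:

```python
# Exercise 21 check: (1, 2, 0) and (0, -3, 1) span the plane 2x - y - 3z = 0.
def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

w = (2, -1, -3)
b1, b2 = (1, 2, 0), (0, -3, 1)

assert dot(w, b1) == 0 and dot(w, b2) == 0
# Independence: neither is a scalar multiple of the other
# (b1 has a zero third entry while b2 does not, and b1 is nonzero).
assert b1[2] == 0 and b2[2] != 0 and b1 != (0, 0, 0)
```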
22. If u and v are nonzero column vectors and A = uv^T, then Ax = (uv^T)x = u(v^Tx) = (v · x)u. Thus Ax = 0 if and only if v · x = 0, i.e., if and only if x is orthogonal to v. This shows that ker(T) = v⊥. Similarly, the range of T consists of all vectors of the form Ax = (v · x)u, and so ran(T) = span{u}.
23. (a) If B is obtained from A by changing only one entry, then A - B has only one nonzero entry (hence only one nonzero row), and so rank(A - B) = 1.
(b) If B is obtained from A by changing only one column (or one row), then A - B has only one nonzero column (or row), and so rank(A - B) = 1.
(c) If B is obtained from A in the specified manner, then B - A has rank 1. Thus B - A is of the form uv^T and B = A + uv^T.
24. (a) If A = uv^T, then A^2 = (uv^T)(uv^T) = u(v^Tu)v^T = (u · v)uv^T = (u · v)A.
(b) If Ax = λx for some nonzero vector x, then A^2x = λ^2x. On the other hand, since A^2 = (u · v)A, we have A^2x = (u · v)Ax = (u · v)λx. Thus λ^2 = (u · v)λ, and if λ ≠ 0 it follows that λ = u · v.
(c) If A is any square matrix, then I - A fails to be invertible if and only if there is a nonzero vector x such that (I - A)x = 0; this is equivalent to saying that λ = 1 is an eigenvalue of A. Thus if A = uv^T is a rank 1 matrix for which I - A is invertible, we must have u · v ≠ 1 and A^2 ≠ A.
DISCUSSION AND DISCOVERY
D1. (a) True. For example, if A is m x n where m > n (more rows than columns), then the rows of A form a set of m vectors in R^n and must therefore be linearly dependent. On the other hand, if m < n then the columns of A must be linearly dependent.
(b) False. If the additional row is a linear combination of the existing rows, then the rank will not be increased.
(c) False. For example, if m = 1 then rank(A) = 1 and nullity(A) = n - 1.
(d) True. Such a matrix must have rank less than n; thus nullity(A) = n - rank(A) ≥ 1.
(e) False. If Ax = b is inconsistent for some b, then A is not invertible and so Ax = 0 has nontrivial solutions; thus nullity(A) ≥ 1.
(f) True. We must have rank(A) + nullity(A) = 3; thus it is not possible to have rank(A) and nullity(A) both equal to 1.
D2. If A is m x n, then A^T is n x m and so, by Theorem 7.4.1, we have rank(A^T) + nullity(A^T) = m.
D3. If A is a 3 x 5 matrix, then the number of leading 1's in the reduced row echelon form is at most 3, and (assuming A ≠ 0) the number of parameters in a general solution of Ax = 0 is at most 4.
D4. If A is a 5 x 3 matrix, then rank(A) ≤ 3 and so the number of leading 1's in the reduced row echelon form of A is at most 3. Assuming A ≠ 0, we have rank(A) ≥ 1 and so the number of free parameters in a general solution of Ax = 0 is at most 2.
D5. If A is a 3 x 5 matrix, then the possible values for rank(A) are 0 (if A = 0), 1, 2, or 3, and the corresponding values for nullity(A) are 5, 4, 3, or 2. If A is a 5 x 3 matrix, then the possible values for rank(A) are 0, 1, 2, or 3, and the corresponding values for nullity(A) are 3, 2, 1, or 0. If A is a 5 x 5 matrix, then the possible values for rank(A) are 0, 1, 2, 3, 4, or 5, and the corresponding values for nullity(A) are 5, 4, 3, 2, 1, or 0.
D6. Assuming u and v are nonzero vectors, the rank of A = uv^T is 1; thus the nullity is n - 1.
D7. Let A be the standard matrix of T. If ker(T) is a line through the origin, then nullity(A) = 1 and so rank(A) = n - 1. It follows that ran(T) = col(A) has dimension n - 1 and thus is a hyperplane in R^n.
D8. If r = 2 and s = 1, then rank(A) = 2. Otherwise, either r - 2 or s - 1 (or both) is nonzero, and rank(A) = 3. Note that, since the first and fourth rows are linearly independent, rank(A) can never be 1 (or 0).
D9. If λ ≠ 0, then the reduced row echelon form of A has three nonzero rows and so rank(A) = 3. On the other hand, if λ = 0, then the reduced row echelon form has only two nonzero rows and so rank(A) = 2. Thus λ = 0 is the value for which the matrix A has lowest rank.
D10. Let A = [1 0; 0 0] and B = [0 1; 0 0]. Then rank(A) = rank(B) = 1, whereas A^2 = A has rank 1 and B^2 = 0 has rank 0.
D11. If AB = 0, then rank(A) + rank(B) - n ≤ rank(AB) = 0, and so rank(A) + rank(B) ≤ n. Since rank(B) = n - nullity(B), it follows that rank(A) + n - nullity(B) ≤ n; thus rank(A) ≤ nullity(B). Similarly, rank(B) ≤ nullity(A).
WORKING WITH PROOFS
P1. First we note that the matrix A = [a11 a12 a13; a21 a22 a23] fails to be of rank 2 if and only if one of the rows is a scalar multiple of the other (which includes the case where one or both is a row of zeros). Thus it is sufficient to prove that the latter is equivalent to the condition

a11a22 - a21a12 = a11a23 - a21a13 = a12a23 - a22a13 = 0    (#)

Suppose that one of the rows of A is a scalar multiple of the other. Then the same is true of each of the 2 x 2 matrices that appear in (#), and so each of these determinants is equal to zero.
Suppose, conversely, that the condition (#) holds. Without loss of generality we may assume that the first row of A is not a row of zeros, and further (interchange columns if necessary) that a11 ≠ 0. We then have a22 = (a21/a11)a12 and a23 = (a21/a11)a13. Thus the second row is a scalar multiple of the first row.
P2. If A is of rank 1, then A = xy^T for some (nonzero) column vectors x and y. If, in addition, A is symmetric, then we also have A = A^T = (xy^T)^T = yx^T. From this it follows that

x(y^Tx) = (xy^T)x = (yx^T)x = y(x^Tx) = y||x||^2

Since x and y are nonzero we have x^Ty = y^Tx = y · x ≠ 0 and x = (||x||^2/(y^Tx))y. If x^Ty > 0, it follows that A = xy^T = (||x||^2/(y^Tx))yy^T = uu^T where u = (||x||/sqrt(y^Tx))y. If x^Ty < 0, then the corresponding formula is A = -uu^T where u = (||x||/sqrt(-y^Tx))y.
P3. We have

AB = [c1(A) c2(A) ... cn(A)] [r1(B)]
                             [r2(B)]
                             [ ... ]
                             [rn(B)]  = c1(A)r1(B) + c2(A)r2(B) + ... + cn(A)rn(B)

and each of the ci(A)ri(B) is a rank 1 matrix.
P4. Since the set V ∪ W contains n vectors, it suffices to show that V ∪ W is a linearly independent set. Suppose then that c1, c2, ..., ck and d1, d2, ..., dn-k are scalars with the property that

c1v1 + c2v2 + ... + ckvk + d1w1 + d2w2 + ... + dn-kwn-k = 0

Then the vector n = c1v1 + c2v2 + ... + ckvk = -(d1w1 + d2w2 + ... + dn-kwn-k) belongs both to V = row(A) and to W = null(A) = V⊥. It follows that n · n = ||n||^2 = 0 and so n = 0. Thus we simultaneously have c1v1 + c2v2 + ... + ckvk = 0 and d1w1 + d2w2 + ... + dn-kwn-k = 0. Since the vectors v1, v2, ..., vk are linearly independent it follows that c1 = c2 = ... = ck = 0, and since w1, w2, ..., wn-k are linearly independent it follows that d1 = d2 = ... = dn-k = 0. This shows that V ∪ W is a linearly independent set.
P5. From the inequality rank(A) + rank(B) - n ≤ rank(AB) ≤ rank(A) it follows that

n - rank(A) ≤ n - rank(AB) ≤ 2n - rank(A) - rank(B)

which, using Theorem 7.4.1, is the same as

nullity(A) ≤ nullity(AB) ≤ nullity(A) + nullity(B)

Similarly, nullity(B) ≤ nullity(AB) ≤ nullity(A) + nullity(B).
P6. Suppose B = A + uv^T, with 1 + v^T A^-1 u ≠ 0, and let X = A^-1 - (A^-1 u v^T A^-1)/(1 + v^T A^-1 u). Then

BX = (A + uv^T)(A^-1 - (A^-1 u v^T A^-1)/(1 + v^T A^-1 u))
   = I + uv^T A^-1 - (uv^T A^-1 + u(v^T A^-1 u)v^T A^-1)/(1 + v^T A^-1 u)
   = I + uv^T A^-1 - (u(1 + v^T A^-1 u)v^T A^-1)/(1 + v^T A^-1 u)
   = I + uv^T A^-1 - uv^T A^-1 = I

and so B is invertible with B^-1 = X = A^-1 - (A^-1 u v^T A^-1)/(1 + v^T A^-1 u).
Note. If v^T A^-1 u = -1, then BA^-1 u = (A + uv^T)A^-1 u = u + u(v^T A^-1 u) = u - u = 0; thus BA^-1 is singular and so B is singular.
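The formula proved in P6 can be spot-checked numerically; the sketch below picks A = I and arbitrary u, v (these particular values are only an illustration) and verifies BX = I with exact rational arithmetic:

```python
# P6 (rank-one update) spot-check: with A = I, u = (1, 2), v = (3, 1),
# X = A^-1 - A^-1 u v^T A^-1 / (1 + v^T A^-1 u) satisfies BX = I.
from fractions import Fraction as F

u, v = (F(1), F(2)), (F(3), F(1))
I2 = [[F(1), F(0)], [F(0), F(1)]]

# B = A + u v^T with A = I (so A^-1 = I)
B = [[I2[i][j] + u[i] * v[j] for j in range(2)] for i in range(2)]
denom = 1 + sum(vi * ui for vi, ui in zip(v, u))        # 1 + v^T u = 6
X = [[I2[i][j] - u[i] * v[j] / denom for j in range(2)] for i in range(2)]

BX = [[sum(B[i][k] * X[k][j] for k in range(2)) for j in range(2)]
      for i in range(2)]
assert BX == I2  # X really is the inverse of B
```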
EXERCISE SET 7.5
1. The reduced row echelon form of A has two nonzero rows, and the reduced row echelon form of A^T also has two nonzero rows. Thus dim(row(A)) = 2 and dim(col(A)) = dim(row(A^T)) = 2. It follows that dim(null(A)) = 5 - 2 = 3, and dim(null(A^T)) = 4 - 2 = 2. Since dim(null(A)) = 3, there are 3 free parameters in a general solution of Ax = 0.
2. The reduced row echelon form of A has three nonzero rows, and the reduced row echelon form of A^T also has three nonzero rows. Thus dim(row(A)) = 3 and dim(col(A)) = dim(row(A^T)) = 3. It follows that dim(null(A)) = 5 - 3 = 2, and dim(null(A^T)) = 4 - 3 = 1. Since dim(null(A)) = 2, there are 2 free parameters in a general solution of Ax = 0.
3. The reduced row echelon form of A has three nonzero rows, as does the reduced row echelon form of A^T. Thus rank(A) = 3 and rank(A^T) = 3.
4. The reduced row echelon form of A has two nonzero rows, as does the reduced row echelon form of A^T. Thus rank(A) = 2 and rank(A^T) = 2.
5. (a) dim(row(A)) = dim(col(A)) = rank(A) = 3, dim(null(A)) = 3 - 3 = 0, dim(null(A^T)) = 3 - 3 = 0.
(b) dim(row(A)) = dim(col(A)) = rank(A) = 2, dim(null(A)) = 3 - 2 = 1, dim(null(A^T)) = 3 - 2 = 1.
(c) dim(row(A)) = dim(col(A)) = rank(A) = 1, dim(null(A)) = 3 - 1 = 2, dim(null(A^T)) = 3 - 1 = 2.
(d) dim(row(A)) = dim(col(A)) = rank(A) = 2, dim(null(A)) = 9 - 2 = 7, dim(null(A^T)) = 5 - 2 = 3.
(e) dim(row(A)) = dim(col(A)) = rank(A) = 2, dim(null(A)) = 5 - 2 = 3, dim(null(A^T)) = 9 - 2 = 7.
6. (a) dim(row(A)) = dim(col(A)) = rank(A) = 3, dim(null(A)) = 4 - 3 = 1, dim(null(A^T)) = 3 - 3 = 0.
(b) dim(row(A)) = dim(col(A)) = rank(A) = 2, dim(null(A)) = 3 - 2 = 1, dim(null(A^T)) = 4 - 2 = 2.
(c) dim(row(A)) = dim(col(A)) = rank(A) = 1, dim(null(A)) = 3 - 1 = 2, dim(null(A^T)) = 6 - 1 = 5.
(d) dim(row(A)) = dim(col(A)) = rank(A) = 2, dim(null(A)) = 7 - 2 = 5, dim(null(A^T)) = 5 - 2 = 3.
(e) dim(row(A)) = dim(col(A)) = rank(A) = 2, dim(null(A)) = 4 - 2 = 2, dim(null(A^T)) = 7 - 2 = 5.
7. (a) Since rank(A) = rank[A | b] = 3 the system is consistent. The number of parameters in a general solution is n - r = 3 - 3 = 0; i.e., the system has a unique solution.
(b) Since rank(A) ≠ rank[A | b] the system is inconsistent.
(c) Since rank(A) = rank[A | b] = 1 the system is consistent. The number of parameters in a general solution is n - r = 3 - 1 = 2.
(d) Since rank(A) = rank[A | b] = 2 the system is consistent. The number of parameters in a general solution is n - r = 9 - 2 = 7.
8. (a) Since rank(A) ≠ rank[A | b] the system is inconsistent.
(b) Since rank(A) = rank[A | b] = 2 the system is consistent. The number of parameters in a general solution is n - r = 4 - 2 = 2.
(c) Since rank(A) = rank[A | b] = 3 the system is consistent. The number of parameters in a general solution is n - r = 3 - 3 = 0, i.e., the system has a unique solution.
(d) Since rank(A) = rank[A | b] = 4 the system is consistent. The number of parameters in a general solution is n - r = 7 - 4 = 3.
9. (a) This matrix has full column rank because the two column vectors are not scalar multiples of each other; it does not have full row rank because any three vectors in R^2 are linearly dependent.
(b) This matrix does not have full row rank because the 2nd row is a scalar multiple of the 1st row; it does not have full column rank since any three vectors in R^2 are linearly dependent.
(c) This matrix has full row rank because the two row vectors are not scalar multiples of each other; it does not have full column rank because any three vectors in R^2 are linearly dependent.
(d) This (square) matrix is invertible; it has full row rank and full column rank.
10. (a) This matrix has full column rank because the two column vectors are not scalar multiples of each other; it does not have full row rank because any three vectors in R^2 are linearly dependent.
(b) This matrix has full row rank because the two row vectors are not scalar multiples of each other; it does not have full column rank because any three vectors in R^2 are linearly dependent.
(c) This matrix does not have full row rank because the 2nd row is a scalar multiple of the 1st row; it does not have full column rank because any three vectors in R^2 are linearly dependent.
(d) This (square) matrix is invertible; it has full row rank and full column rank.
11. (a) det(A^TA) = 149 ≠ 0 and det(AA^T) = 0; thus A^TA is invertible and AA^T is not invertible. This corresponds to the fact (see Exercise 9a) that A has full column rank but not full row rank.
(b) det(A^TA) = 0 and det(AA^T) = 0; thus neither A^TA nor AA^T is invertible. This corresponds to the fact that A has neither full column rank nor full row rank.
(c) det(A^TA) = 0 and det(AA^T) = 66 ≠ 0; thus A^TA is not invertible but AA^T is invertible. This corresponds to the fact that A does not have full column rank but does have full row rank.
(d) det(A^TA) = 64 ≠ 0 and det(AA^T) = 64 ≠ 0; thus A^TA and AA^T are both invertible. This corresponds to the fact that A has full column rank and full row rank.
12. (a) det(A^T A) = 45 ≠ 0 and det(AA^T) = 0; thus A^T A is invertible and AA^T is not invertible. This corresponds to the fact (see Exercise 10a) that A has full column rank but not full row rank.
(b) det(A^T A) = 0 and det(AA^T) = 1269 ≠ 0; thus A^T A is not invertible but AA^T is invertible. This corresponds to the fact that A does not have full column rank but does have full row rank.
(c) det(A^T A) = 0 and det(AA^T) = 0; thus neither A^T A nor AA^T is invertible. This corresponds to the fact that A has neither full column rank nor full row rank.
222 Chapter 7
(d) det(A^T A) = 1369 ≠ 0 and det(AA^T) = 1369 ≠ 0; thus A^T A and AA^T are both invertible. This corresponds to the fact that A has both full column rank and full row rank.
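The pattern in Exercises 11 and 12 (A^T A invertible exactly when A has full column rank, AA^T invertible exactly when A has full row rank; see Theorem 7.5.10) can be sketched numerically; the matrix below is a hypothetical example with full column rank but not full row rank:

```python
import numpy as np

# Hypothetical 3x2 matrix: full column rank, not full row rank.
A = np.array([[1.0, 0.0],
              [2.0, 1.0],
              [0.0, 3.0]])

gram_cols = A.T @ A   # 2x2, invertible iff A has full column rank
gram_rows = A @ A.T   # 3x3, invertible iff A has full row rank

print(np.linalg.det(gram_cols))               # nonzero (here 46)
print(abs(np.linalg.det(gram_rows)) < 1e-10)  # True: AA^T is singular
```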
13. The augmented matrix of Ax = b can be row reduced to
[1 0 | b3; 0 1 | 4b1 - b3; 0 0 | b1 + b2]
Thus the system Ax = b is either inconsistent (if b1 + b2 ≠ 0) or has exactly one solution (if b1 + b2 = 0). Note that the latter includes the case b1 = b2 = b3 = 0; thus the system Ax = 0 has only the trivial solution.
14. The augmented matrix of Ax = b can be row reduced to
[1 4 | b1; 0 -3 | b2 - 2b1; 0 0 | b1 - 2b2 + b3]
Thus the system Ax = b is either inconsistent (if b1 - 2b2 + b3 ≠ 0) or has exactly one solution (if b1 - 2b2 + b3 = 0). The latter includes the case b1 = b2 = b3 = 0; thus the system Ax = 0 has only the trivial solution.
15. For the given matrix A we have A^T A = [6 12; 12 24]. It is clear from inspection that the rows of A and of A^T A are multiples of the single vector u = (1, 2). Thus row(A) = row(A^T A) is the 1-dimensional space consisting of all scalar multiples of u. Similarly, null(A) = null(A^T A) is the 1-dimensional space consisting of all vectors v in R^2 which are orthogonal to u, i.e., all vectors of the form v = s(-2, 1).
16. The reduced row echelon form of A = [1 1 1; 2 3 -4] is [1 0 7; 0 1 -6], and the reduced row echelon form of A^T A is [1 0 7; 0 1 -6; 0 0 0]. Thus row(A) = row(A^T A) is the 2-dimensional space consisting of all linear combinations of the vectors u1 = (1, 0, 7) and u2 = (0, 1, -6). Thus null(A) = null(A^T A) is the 1-dimensional space consisting of all vectors v in R^3 which are orthogonal to both u1 and u2, i.e., all vectors of the form v = s(-7, 6, 1).
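The equalities row(A) = row(A^T A) and null(A) = null(A^T A) used in Exercises 15 and 16 can be confirmed numerically for A = [1 1 1; 2 3 -4]:

```python
import numpy as np

A = np.array([[1.0, 1.0, 1.0],
              [2.0, 3.0, -4.0]])
G = A.T @ A

# Equal ranks (both 2), hence equal row spaces, since row(A^T A) ⊆ row(A).
assert np.linalg.matrix_rank(A) == np.linalg.matrix_rank(G) == 2

# v = (-7, 6, 1) spans the common null space.
v = np.array([-7.0, 6.0, 1.0])
print(A @ v)   # [0. 0.]
print(G @ v)   # [0. 0. 0.]
```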
17. The augmented matrix of the system Ax = b can be reduced to
[1 -3 | b1; 0 1 | b2 - b1; 0 0 | b3 - 4b2 + 3b1; 0 0 | b4 - 2b1 + b2; 0 0 | b5 - 8b2 + 7b1]
Thus the system will be inconsistent unless (b1, b2, b3, b4, b5) satisfies the equations b3 = -3b1 + 4b2, b4 = 2b1 - b2, b5 = -7b1 + 8b2, where b1 and b2 can assume any values.
18. The augmented matrix of the system Ax = b can be reduced to a row echelon form with rows [1 2 3 -1 | *], [0 -7 -8 5 | *], and [0 0 0 0 | b3 - b2 - b1]; thus the system is consistent if and only if b3 = b1 + b2 and, in this case, there will be infinitely many solutions.
DISCUSSION AND DISCOVERY
D1. If A is a 7 × 5 matrix with rank 3, then A^T also has rank 3; thus dim(row(A^T)) = dim(col(A^T)) = 3 and dim(null(A^T)) = 7 - 3 = 4.

D2. If A has rank k then, from Theorems 7.5.2 and 7.5.9, we have dim(row(A^T A)) = rank(A^T A) = rank(A^T) = rank(A) = k and dim(row(AA^T)) = rank(AA^T) = rank(A) = k.

D3. If A^T x = 0 has only the trivial solution then, from Theorem 7.5.11, A has full row rank. Thus, if A is m × n, we must have m ≤ n and dim(row(A)) = dim(col(A)) = m.
D4. (a) False. The row space and column space always have the same dimension.
(b) False. It is always true that rank(A) = rank(A^T), whether A is square or not.
(c) True. Under these assumptions, the system Ax = b is consistent (for any b) and so the matrices A and [A | b] have the same rank.
(d) True. If an m × n matrix A has full row rank and full column rank, then m = dim(row(A)) = rank(A) = dim(col(A)) = n.
(e) True. If A^T A and AA^T are both invertible then, from Theorem 7.5.10, A has full column rank and full row rank; thus A is square.
(f) True. The rank of a 3 × 3 matrix is 0, 1, 2, or 3, and the corresponding nullity is 3, 2, 1, or 0.

D5. (a) The solutions of the system are given by x = (b - s - t, s, t) where -∞ < s, t < ∞. This does not violate Theorem 7.5.7(b).
(b) The solutions can be expressed as (b, 0, 0) + s(-1, 1, 0) + t(-1, 0, 1), where (b, 0, 0) is a particular solution and s(-1, 1, 0) + t(-1, 0, 1) is a general solution of the corresponding homogeneous system.

D6. (a) If A is 3 × 5, then the columns of A are a set of five vectors in R^3 and thus are linearly dependent.
(b) If A is 5 × 3, then the rows of A are a set of five vectors in R^3 and thus are linearly dependent.
(c) If A is m × n, with m ≠ n, then either the columns of A are linearly dependent or the rows of A are linearly dependent (or both).
WORKING WITH PROOFS
P1. From Theorem 7.5.8(a) we have null(A^T A) = null(A). Thus if A is m × n, then A^T A is n × n and so rank(A^T A) = n - nullity(A^T A) = n - nullity(A) = rank(A). Similarly, null(AA^T) = null(A^T) and so rank(AA^T) = m - nullity(AA^T) = m - nullity(A^T) = rank(A^T) = rank(A).

P2. As above, we have rank(A^T A) = n - nullity(A^T A) = n - nullity(A) = rank(A).

P3. (a) Since null(A^T A) = null(A), we have row(A) = null(A)^⊥ = null(A^T A)^⊥ = row(A^T A).
(b) Since A^T A is symmetric, we have col(A^T A) = row(A^T A) = row(A) = col(A^T).

P4. If A is m × n where m < n, then the columns of A form a set of n vectors in R^m and thus are linearly dependent. Similarly, if m > n, then the rows of A form a set of m vectors in R^n and thus are linearly dependent.
P5. If rank(A^2) = rank(A) then dim(null(A^2)) = n - rank(A^2) = n - rank(A) = dim(null(A)) and, since null(A) ⊆ null(A^2), it follows that null(A) = null(A^2).
Suppose now that y belongs to null(A) ∩ col(A). Then y = Ax for some x in R^n and Ay = 0. Since A^2 x = Ay = 0, it follows that the vector x belongs to null(A^2) = null(A), and so y = Ax = 0. This shows that null(A) ∩ col(A) = {0}.
P6. First we prove that if A is a nonzero matrix with rank k, then A has at least one invertible k × k submatrix, and all submatrices of larger size are singular. The proof is organized as suggested.
Step 1. If A is an m × n matrix with rank k, then dim(col(A)) = k and so A has k linearly independent columns. Let B be the m × k submatrix of A having these vectors as its columns. This matrix also has rank k and thus has k linearly independent rows. Let C be the k × k submatrix of B having these vectors as its rows. Then C is an invertible k × k submatrix of A.
Step 2. Suppose D is an r × r submatrix of A with r > k. Then, since dim(col(A)) = k < r, the columns of A which contain those of D must be linearly dependent. It follows that the columns of D are linearly dependent, since a nontrivial linear dependence among the containing columns results in a nontrivial linear dependence among the columns of D. Thus D is singular.
Conversely, we prove that if the largest invertible submatrix of A is k × k, then A has rank k.
Step 1. Let C be an invertible k × k submatrix of A. Then the columns of C are linearly independent, and so the columns of A that contain the columns of C are also linearly independent. This shows that rank(A) = dim(col(A)) ≥ k.
Step 2. Suppose rank(A) = r > k. Then dim(col(A)) = r, and so A has r linearly independent columns. Let B be the m × r submatrix of A having these vectors as its columns. Then B also has rank r and thus has r linearly independent rows. Let C be the submatrix of B having these vectors as its rows. Then C is a nonsingular r × r submatrix of A. Thus the assumption that rank(A) > k has led to a contradiction. This, together with Step 1, shows that rank(A) = k.
P7. If A is invertible then so is A^T. Thus, using the cited exercise and Theorem 7.5.2, we have
rank(CP) = rank((CP)^T) = rank(P^T C^T) = rank(C^T) = rank(C)
and from this it also follows that nullity(CP) = n - rank(CP) = n - rank(C) = nullity(C).
EXERCISE SET 7.6
1. A row echelon form of A is B (details omitted); thus the first two columns of A are the pivot columns, and these column vectors form a basis for col(A). A row echelon form of A^T is C; thus dim(row(A)) = 2, and the first two rows of A form a basis for row(A).
2. The first column of A forms a basis for col(A), and the first row of A forms a basis for row(A).
3. The matrix A can be row reduced to a matrix B with two pivot columns; thus the first two columns of A are the pivot columns and form a basis for col(A), and the first two rows of A form a basis for row(A).
4. The reduced row echelon form of A is B, and the reduced row echelon form of A^T is C (details omitted). Thus the first two columns of A form a basis for col(A), and the first two rows of A form a basis for row(A).
5. The reduced row echelon form of A is B, and the reduced row echelon form of A^T is C (details omitted). Thus the first three columns of A form a basis for col(A), and the first three rows of A form a basis for row(A).
6. The reduced row echelon form of A is B, and the reduced row echelon form of A^T is C (details omitted). Thus the 1st, 3rd, and 4th columns of A form a basis for col(A), and the 1st, 2nd, and 4th rows of A form a basis for row(A).
7. Proceeding as in Example 2, we first form the matrix A having the given vectors as its columns. The reduced row echelon form of A (details omitted) shows that all three columns of A are pivot columns; the vectors v1, v2, v3 are linearly independent and form a basis for the space which they span (the column space of A).
8. Let A be the matrix having the given vectors as its columns. From the reduced row echelon form R of A we conclude that {v1, v2} is a basis for W = col(A), and it also follows that v3 = 2v1 + v2 and v4 = -2v1 + v2.
9. Proceeding as in Example 2, we first form the matrix A having the given vectors as its columns. From the reduced row echelon form R of A we conclude that the 1st and 3rd columns of A are the pivot columns; thus {v1, v3} is a basis for W = col(A). Since c2(R) = 2c1(R) and c4(R) = c1(R) + c3(R), we also conclude that v2 = 2v1 and v4 = v1 + v3.
10. Let A be the matrix having the given vectors as its columns. The reduced row echelon form of A is
[1 0 2 0 -1; 0 1 -1 0 3; 0 0 0 1 2; 0 0 0 0 0]
From this we conclude that the vectors v1, v2, v4 form a basis for W = col(A), and that v3 = 2v1 - v2 and v5 = -v1 + 3v2 + 2v4.
11. The matrix [A | I3] can be row reduced (and further partitioned) to [R | E] (details omitted). Thus the vectors v1 = (1, -4, 0) and v2 = (0, 0, 1) form a basis for null(A^T).
12. The matrix [A | I3] can be row reduced (and further partitioned) to [R | E] (details omitted). Thus the vector v1 = (-1, 1, 1) forms a basis for null(A^T).
13. The reduced row echelon form of A is R (details omitted). From this we conclude that the first two columns of A are the pivot columns, and so these two columns form a basis for col(A). The first two rows of R form a basis for row(A) = row(R). We have Ax = 0 if and only if Rx = 0, and the general solution of the latter is x = (s + 4t, 3s + 7t, s, t) = s(1, 3, 1, 0) + t(4, 7, 0, 1). Thus (1, 3, 1, 0) and (4, 7, 0, 1) form a basis for null(A).
The reduced row echelon form of the partitioned matrix [A | I4] has two zero rows in its left block; the corresponding rows of the right block, which have the form (1, 0, *, *) and (0, 1, *, *), form a basis for null(A^T).
14. The reduced row echelon form of A is R (details omitted). From this we conclude that the 1st, 2nd, and 4th columns of A are the pivot columns, and so these three columns form a basis for col(A). The first three rows of R form a basis for row(A) = row(R). We have Ax = 0 if and only if Rx = 0, and a basis vector for null(A) can be read from the general solution of the latter. The reduced row echelon form of the partitioned matrix [A | I4] has one zero row in its left block; the corresponding row of the right block forms a basis for null(A^T).
15. (a) The 1st, 3rd, and 4th columns of A are the pivot columns; thus these vectors form a basis for col(A).
(b) The first three rows of A form a basis for row(A).
(c) We have Ax = 0 if and only if Rx = 0, i.e., if and only if x is of the form
x = (-4t, t, 0, 0) = t(-4, 1, 0, 0)
Thus the vector u = (-4, 1, 0, 0) forms a basis for null(A).
(d) We have A^T x = 0 if and only if Cx = 0; the general solution has two parameters, and the two corresponding vectors, v1 = (-1, 0, 0, 0, 1) and v2 = (0, 1, 0, ...), form a basis for null(A^T).
16. From the reduced row echelon form R0 of A (details omitted) we obtain the column-row factorization A = CR, where C consists of the pivot columns of A and R consists of the nonzero rows of R0; the corresponding column-row expansion writes A as the sum of the outer products of the columns of C with the rows of R.

17. Similarly, from the reduced row echelon form R0 of A we obtain the column-row factorization A = CR and the corresponding column-row expansion.
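A column-row factorization of the kind asked for in Exercises 16 and 17 can be sketched numerically; the matrix below is a hypothetical example (the exercise matrices did not survive in this copy), and R is recovered as the coefficient matrix C⁺A, which for a full-column-rank C coincides with the nonzero rows of the RREF:

```python
import numpy as np

A = np.array([[1.0, 2.0, 4.0],
              [2.0, 4.0, 8.0],
              [3.0, 6.0, 7.0]])

# Pivot columns = columns that enlarge the rank of the leading block.
pivots, r = [], 0
for j in range(A.shape[1]):
    rank_j = np.linalg.matrix_rank(A[:, : j + 1])
    if rank_j > r:
        pivots.append(j)
        r = rank_j

C = A[:, pivots]              # pivot columns of A
R = np.linalg.pinv(C) @ A     # unique R with CR = A (nonzero RREF rows)

print(pivots)                 # [0, 2]
print(np.allclose(C @ R, A))  # True
```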
DISCUSSION AND DISCOVERY
D1. (a) If A is a 3 × 5 matrix, then the number of leading 1's in a row echelon form of A is at most 3, the number of parameters in a general solution of Ax = 0 is at most 4 (assuming A ≠ 0), the rank of A is at most 3, the rank of A^T is at most 3, and the nullity of A^T is at most 2 (assuming A ≠ 0).
(b) If A is a 5 × 3 matrix, then the number of leading 1's in a row echelon form of A is at most 3, the number of parameters in a general solution of Ax = 0 is at most 2 (assuming A ≠ 0), the rank of A is at most 3, the rank of A^T is at most 3, and the nullity of A^T is at most 4 (assuming A ≠ 0).
(c) If A is a 4 × 4 matrix, then the number of leading 1's in a row echelon form of A is at most 4, the number of parameters in a general solution of Ax = 0 is at most 3 (assuming A ≠ 0), the rank of A is at most 4, the rank of A^T is at most 4, and the nullity of A^T is at most 3 (assuming A ≠ 0).
(d) If A is an m × n matrix, then the number of leading 1's in a row echelon form of A is at most m, the number of parameters in a general solution of Ax = 0 is at most n - 1 (assuming A ≠ 0), the rank of A is at most min{m, n}, the rank of A^T is at most min{m, n}, and the nullity of A^T is at most m - 1 (assuming A ≠ 0).

D2. The pivot columns of a matrix A are those columns that correspond to the columns of a row echelon form R of A which contain a leading 1. In the example given (details omitted), the 1st, 3rd, and 5th columns of A are the pivot columns.
D3. The vectors v1 = (4, 0, 1, -4, 5) and v2 = (0, 1, 0, 1) form a basis for null(A^T).

D4. (a) {a1, a2, a4} is a basis for col(A).
(b) a3 = 4a1 - 3a2 and a5 = 6a1 + 7a2 + 2a4.
EXERCISE SET 7.7
1. Using Formula (5) we have proj_a x = ((a·x)/||a||^2) a with a = (1, 2). On the other hand, since sin θ = 2/√5 and cos θ = 1/√5, the standard matrix for the projection is P_θ = (1/5)[1 2; 2 4], and computing P_θ x gives the same vector.
2. Using Formula (5) we have proj_a x = ((a·x)/||a||^2) a, where here ||a||^2 = 29. On the other hand, computing sin θ and cos θ for the line through a gives the standard matrix P_θ, and P_θ x is the same vector.
3. A vector parallel to l is a = (1, 2). Thus, using Formula (5), the projection of x on l is given by
proj_a x = ((a·x)/||a||^2) a = (3/5)(1, 2) = (3/5, 6/5)
On the other hand, since sin θ = 2/√5 and cos θ = 1/√5, the standard matrix for the projection is P_θ = (1/5)[1 2; 2 4], and we have P_θ x = (3/5, 6/5).
4. A vector parallel to l is a = (3, -1). Thus, using Formula (5), the projection of x on l is given by
proj_a x = ((a·x)/||a||^2) a = (-11/10)(3, -1) = (-33/10, 11/10)
On the other hand, since sin θ = -1/√10 and cos θ = 3/√10, the standard matrix for the projection is P_θ = (1/10)[9 -3; -3 1], and P_θ x gives the same vector.
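The two computations in Exercise 3 can be cross-checked numerically; the vector x = (1, 1) used below is an assumption, chosen to be consistent with a·x = 3 for a = (1, 2):

```python
import numpy as np

a = np.array([1.0, 2.0])   # direction of the line l
x = np.array([1.0, 1.0])   # assumed: consistent with a.x = 3 in the text

# Formula (5): proj_a x = (a.x / ||a||^2) a
proj = (a @ x) / (a @ a) * a
print(proj)                # [0.6 1.2]

# The standard matrix P = (1/||a||^2) a a^T gives the same vector.
P = np.outer(a, a) / (a @ a)
print(np.allclose(P @ x, proj))   # True
```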
5. The vector component of x along a is proj_a x = ((x·a)/||a||^2) a = (1/5)(0, 2, -1) = (0, 2/5, -1/5), and the component orthogonal to a is x - proj_a x = (1, 1, 1) - (0, 2/5, -1/5) = (1, 3/5, 6/5).
6. proj_a x = ((x·a)/||a||^2) a = (5/14)(1, 2, 3) = (5/14, 10/14, 15/14), and x - proj_a x = (2, 0, 1) - (5/14, 10/14, 15/14) = (23/14, -10/14, -1/14).
7. The vector component of x = (2, 1, 1, 2) along a is proj_a x = ((x·a)/||a||^2) a, and the component orthogonal to a is x - proj_a x; the arithmetic follows the same pattern as in Exercises 5 and 6.

8. Similarly, proj_a x = ((x·a)/||a||^2) a, and the component orthogonal to a is x - proj_a x = (5, 0, -3, 7) - proj_a x.
9. ||proj_a x|| = |a·x|/||a|| = |2 - 6 + 24|/√(4 + 9 + 36) = 20/√49 = 20/7

10. ||proj_a x|| = |a·x|/||a|| = |8 - 10 + 4|/√(4 + 4 + 16) = 2/√24 = √6/6

11. ||proj_a x|| = |a·x|/||a|| = |-8 + 6 - 2 + 15|/√(16 + 4 + 4 + 9) = 11/√33 = √33/3

12. ||proj_a x|| = |a·x|/||a|| = |35 - 3 + 0 - 1|/√(49 + 1 + 0 + 1) = 31/√51 = 31√51/51
13. If a = (-1, 5, 2) then, from Theorem 7.7.3, the standard matrix for the orthogonal projection of R^3 onto span{a} is given by
P = (1/(a^T a)) a a^T = (1/30)[1 -5 -2; -5 25 10; -2 10 4]
We note from inspection that P is symmetric. It is also apparent that P has rank 1 since each of its rows is a scalar multiple of a. Finally, it is easy to check that P^2 = P, and so P is idempotent.
14. If a = (7, -2, 2) then the standard matrix for the orthogonal projection of R^3 onto span{a} is given by
P = (1/(a^T a)) a a^T = (1/57)[49 -14 14; -14 4 -4; 14 -4 4]
We note from inspection that P is symmetric. It is also apparent that P has rank 1 since each of its rows is a scalar multiple of a. Finally, it is easy to check that P^2 = P, and so P is idempotent.
15. Let M be the 3 × 2 matrix having a1 and a2 as its columns. Then M^T M = [26 9; 9 13] and, from Theorem 7.7.5, the standard matrix for the orthogonal projection of R^3 onto W = span{a1, a2} is given by
P = M(M^T M)^{-1} M^T = (1/257)[113 -84 96; -84 208 56; 96 56 193]
We note from inspection that the matrix P is symmetric. The reduced row echelon form of P has two nonzero rows, and from this we conclude that P has rank 2. Finally, it is easy to check that P^2 = P, and so P is idempotent.
16. Let M be the 3 × 2 matrix having a1 and a2 as its columns. Then the standard matrix for the orthogonal projection of R^3 onto W = span{a1, a2} is P = M(M^T M)^{-1} M^T (details omitted). From inspection we see that P is symmetric. The reduced row echelon form of P has two nonzero rows, and from this we conclude that P has rank 2. Finally, it is easy to check that P^2 = P.
17. The standard matrix for the orthogonal projection of R^3 onto the xz-plane is P = [1 0 0; 0 0 0; 0 0 1]. This agrees with the projection formula: if M = [1 0; 0 0; 0 1], then M^T M = I2, and M(M^T M)^{-1} M^T = M M^T = [1 0 0; 0 0 0; 0 0 1].

18. The standard matrix for the orthogonal projection of R^3 onto the yz-plane is P = [0 0 0; 0 1 0; 0 0 1]. This agrees with a similar computation using M = [0 0; 1 0; 0 1].
19. We proceed as in Example 6. The general solution of the equation x + y + z = 0 can be written as x = s(-1, 1, 0) + t(-1, 0, 1), and so the two column vectors on the right form a basis for the plane. If M is the 3 × 2 matrix having these vectors as its columns, then M^T M = [2 1; 1 2] and the standard matrix of the orthogonal projection onto the plane is
P = M(M^T M)^{-1} M^T = (1/3)[2 -1 -1; -1 2 -1; -1 -1 2]
The orthogonal projection of the vector v = (2, 4, -1) on the plane is Pv = (1/3)(1, 7, -8).
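Exercise 19 can be verified numerically: build P from the basis {(-1, 1, 0), (-1, 0, 1)} of the plane x + y + z = 0 and apply it to v = (2, 4, -1):

```python
import numpy as np

M = np.array([[-1.0, -1.0],
              [ 1.0,  0.0],
              [ 0.0,  1.0]])          # columns: a basis for x + y + z = 0

P = M @ np.linalg.inv(M.T @ M) @ M.T  # P = M (M^T M)^{-1} M^T, Theorem 7.7.5
# P should equal (1/3)[2 -1 -1; -1 2 -1; -1 -1 2] = I - (1/3) * ones
print(np.allclose(P, np.eye(3) - np.full((3, 3), 1 / 3)))   # True

v = np.array([2.0, 4.0, -1.0])
print(np.allclose(P @ v, np.array([1.0, 7.0, -8.0]) / 3))   # True
```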
20. The general solution of the equation 2x - y + 3z = 0 can be written as x = s(1, 2, 0) + t(0, 3, 1), and so the two column vectors on the right form a basis for the plane. If M is the 3 × 2 matrix having these vectors as its columns, then M^T M = [5 6; 6 10] and the standard matrix of the orthogonal projection onto the plane is
P = M(M^T M)^{-1} M^T = (1/14)[10 2 -6; 2 13 3; -6 3 5]
The orthogonal projection of the vector v = (2, 4, -1) on the plane is Pv = (1/14)(34, 53, -5).
21. Let A be the matrix having the given vectors as its columns. From the reduced row echelon form of A we conclude that a1 and a3 form a basis for the subspace W spanned by the given vectors (the column space of A). Let M be the 4 × 2 matrix having a1 and a3 as its columns. Then M^T M = [72 20; 20 30], and the standard matrix for the orthogonal projection of R^4 onto W is given by
M(M^T M)^{-1} M^T = (1/220)[89 -105 -3 25; -105 135 -15 15; -3 -15 31 -75; 25 15 -75 185]
22. Let A be the matrix having the given vectors as its columns. From the reduced row echelon form of A we conclude that the vectors a1 and a2 form a basis for the subspace W spanned by the given vectors. Let M be the 4 × 2 matrix having a1 and a2 as its columns. Then M^T M = [36 24; 24 126], and the standard matrix for the orthogonal projection of R^4 onto W is given by
M(M^T M)^{-1} M^T = (1/220)[153 -89 31 -37; -89 87 -13 -59; 31 -13 7 -19; -37 -59 -19 193]
23. From the reduced row echelon form R of A, a general solution of the system Ax = 0 can be written as x = s·b1 + t·b2, where the two column vectors b1 and b2 form a basis for the solution space (details omitted). Let B be the 4 × 2 matrix having these two vectors as its columns. Then the standard matrix for the orthogonal projection of R^4 onto the solution space is P = B(B^T B)^{-1} B^T, and the orthogonal projection of v = (5, 6, 7, 2) on the solution space is given (in column form) by Pv.
24. From the reduced row echelon form of A, the general solution of the system Ax = 0 can be expressed (avoiding fractions) as x = t(-3, 0, 1, 2). In other words, the solution space of Ax = 0 is equal to span{a} where a = (-3, 0, 1, 2). Thus, from Theorem 7.7.2, the orthogonal projection of v = (1, 1, 2, 3) on the solution space is given by
proj_a v = ((a·v)/||a||^2) a = (5/14)(-3, 0, 1, 2) = (-15/14, 0, 5/14, 10/14)
25. From the reduced row echelon form R of A we conclude that the first two rows of R form a basis for row(A), and the first two columns of A form a basis for col(A).
Orthogonal projection of R^4 onto row(A): Let B be the 4 × 2 matrix having the first two rows of R, namely (1, 0, 1, 3) and (0, 1, -2, -4), as its columns. Then B^T B = [11 -14; -14 21], and the standard matrix for the orthogonal projection of R^4 onto row(A) is given by
P_r = B(B^T B)^{-1} B^T = (1/35)[21 14 -7 7; 14 11 -8 -2; -7 -8 9 11; 7 -2 11 29]
Orthogonal projection of R^3 onto col(A): Let C be the 3 × 2 matrix having the first two columns of A as its columns. Then the standard matrix for the orthogonal projection of R^3 onto col(A) is P_c = C(C^T C)^{-1} C^T, computed in the same way.
26. From the reduced row echelon form R of A we conclude that the first two rows of R form a basis for row(A), and the first two columns of A form a basis for col(A).
Orthogonal projection onto row(A): Let B be the 5 × 2 matrix having the first two rows of R as its columns. Then the standard matrix for the orthogonal projection of R^5 onto row(A) is given by
P_r = B(B^T B)^{-1} B^T = (1/24)[7 -5 2 9 -3; -5 7 2 -3 9; 2 2 4 6 6; 9 -3 6 15 3; -3 9 6 3 15]
Orthogonal projection onto col(A): Let C be the 4 × 2 matrix having the first two columns of A as its columns. Then the standard matrix for the orthogonal projection of R^4 onto col(A) is given by
P_c = C(C^T C)^{-1} C^T = (1/419)[237 -73 -13 194; -73 369 64 -95; -13 64 29 -46; 194 -95 -46 203]
27. The reduced row echelon form of the matrix [A | b] is [R | c] (details omitted). From this we conclude that the system is consistent. Reading a particular solution x0 from [R | c], let B be the 4 × 2 matrix having the first two rows of R as its columns, and let C = B^T B. Then the standard matrix for the orthogonal projection of R^4 onto row(R) = row(A) is P = BC^{-1} B^T, and the solution of Ax = b which lies in row(A) is given by x_row(A) = P x0.
0 2 I I 4]
28. The reduced row echelon for m of the matrix [A I b j is !RIc] ::: 0 1 fi fi :- . rrom this we
0 0 0 o, 0
conclude that b is onnsht<nt, and that x
0
= [- is one solution. Let B be the4 x 2 matrix
having the fi rst t wo rows of R as its columns, ami let C = BT B = [_ - Then t he s tandard
:16 72
matrix for the orthogonA-l projection of R
4
onto row(R) =row( A) is p ::-: vc-
1
BT, and the solution
of Ax = b which lies in row( A) is given by
I ...
20 131
" l r .l I "'l
:!99 299 299 - f.<i9 j 299
20 224 124 l 48
299 209
m - 3 _ -m
Xrow(A) = Pxo =
13
l
32 90
-A 0 - !M
299 299 2!19 299 299
53 124 25
90 0 !!l
- -m
299
- 299 299 - 299
29. In Exercise 19 we found that the standard matrix for the orthogonal projection of R^3 onto the plane W with equation x + y + z = 0 is
P = (1/3)[2 -1 -1; -1 2 -1; -1 -1 2]
Thus the standard matrix for the orthogonal projection onto W^⊥ (the line perpendicular to W) is
I - P = (1/3)[1 1 1; 1 1 1; 1 1 1]
Note that W^⊥ is the 1-dimensional space (line) spanned by the vector a = (1, 1, 1), and so the computation above is consistent with the formula given in Theorem 7.7.3.
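The consistency claim at the end of Exercise 29 can be checked directly: I - P equals the rank-1 projection a a^T / (a^T a) for a = (1, 1, 1):

```python
import numpy as np

P = np.eye(3) - np.full((3, 3), 1 / 3)   # (1/3)[2 -1 -1; -1 2 -1; -1 -1 2]
a = np.array([1.0, 1.0, 1.0])
Q = np.outer(a, a) / (a @ a)             # projection onto span{a}, Theorem 7.7.3

print(np.allclose(np.eye(3) - P, Q))     # True
```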
30. In Exercise 20 we found that the standard matrix for the orthogonal projection of R^3 onto the plane W with equation 2x - y + 3z = 0 is
P = (1/14)[10 2 -6; 2 13 3; -6 3 5]
Thus the standard matrix for the orthogonal projection onto W^⊥ (the line perpendicular to W) is
I - P = (1/14)[4 -2 6; -2 1 -3; 6 -3 9]
31. Let A be the 4 × 2 matrix having the vectors v1 = (1, -2, 3, 0) and v2 = (3, 4, -1, 2) as its columns. Then
A^T A = [14 -8; -8 30]
and the standard matrix for the orthogonal projection of R^4 onto the subspace W = span{v1, v2} is
P = A(A^T A)^{-1} A^T = (1/89)[51 23 28 25; 23 54 -31 20; 28 -31 59 5; 25 20 5 14]
Thus the orthogonal projection of the vector v = (1, 1, 1, 1) onto W^⊥ is given (in column form) by
v - Pv = (1/89)(-38, 23, 28, 25)

DISCUSSION AND DISCOVERY
D1. (a) The rank of the standard matrix for the orthogonal projection of R^n onto a line through the origin is 1, and onto its orthogonal complement is n - 1.
(b) If n ≥ 2, then the rank of the standard matrix for the orthogonal projection of R^n onto a plane through the origin is 2, and onto its orthogonal complement is n - 2.

D2. A 5 × 5 matrix P is the standard matrix for an orthogonal projection of R^5 onto some 3-dimensional subspace if and only if it is symmetric, idempotent, and has rank 3.
D3. If x1 = proj_a x and x2 = x - x1, then ||x||^2 = ||x1||^2 + ||x2||^2 and so
||x2||^2 = ||x||^2 - ||x1||^2 = ||x||^2 - (a·x)^2/||a||^2
Thus q = √(||x||^2 - (a·x)^2/||a||^2) = ||x2||, where x2 is the vector component of x orthogonal to a.
D4. If P is the standard matrix for the orthogonal projection of R^n on a subspace W, then P^2 = P (P is idempotent) and so P^k = P for all k ≥ 2. In particular, we have P^n = P.

D5. (a) True. Since proj_W u belongs to W and proj_{W^⊥} u belongs to W^⊥, the two vectors are orthogonal.
(b) False. A matrix can satisfy P^2 = P without being symmetric, and such a matrix does not correspond to an orthogonal projection.
(c) True. See the proof of Theorem 7.7.7.
(d) True. Since P^2 = P, we also have (I - P)^2 = I - 2P + P^2 = I - P; thus I - P is idempotent.
(e) False. In fact, since proj_{col(A)} b belongs to col(A), the system Ax = proj_{col(A)} b is always consistent.
D6. Since (W^⊥)^⊥ = W (Theorem 7.7.8), it follows that ((W^⊥)^⊥)^⊥ = W^⊥.

D7. The matrix A = [1 1 1; 1 1 1; 1 1 1] is symmetric and has rank 1, but is not the standard matrix of an orthogonal projection. Note that A^2 = 3A, so A is not idempotent.

D8. In this case the row space of A is equal to all of R^n. Thus the orthogonal projection of R^n onto row(A) is the identity transformation, and its matrix is the identity matrix.
D9. Suppose that A is an n × n idempotent matrix, and that λ is an eigenvalue of A with corresponding eigenvector x (x ≠ 0). Then A^2 x = A(Ax) = A(λx) = λ^2 x. On the other hand, since A^2 = A, we have A^2 x = Ax = λx. Since x ≠ 0, it follows that λ^2 = λ and so λ = 0 or 1.
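The conclusion of D9 (the eigenvalues of an idempotent matrix are 0 or 1) is easy to observe on any projection matrix, e.g. the rank-1 projection from Exercise 13:

```python
import numpy as np

a = np.array([-1.0, 5.0, 2.0])
A = np.outer(a, a) / (a @ a)           # idempotent: A @ A == A

eigs = np.sort(np.linalg.eigvalsh(A))  # A is symmetric, so eigvalsh applies
print(np.allclose(eigs, [0.0, 0.0, 1.0]))   # True
```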
D10. Using calculus: The reduced row echelon form of [A | b] is [1 0 3 | 7; 0 1 1 | 3; 0 0 0 | 0]; thus the general solution of Ax = b is x = (7 - 3t, 3 - t, t) where -∞ < t < ∞. We have
||x||^2 = (7 - 3t)^2 + (3 - t)^2 + t^2 = 58 - 48t + 11t^2
and so the solution vector of smallest length corresponds to (d/dt)[||x||^2] = -48 + 22t = 0, i.e., to t = 24/11. We conclude that x_row = (7 - 72/11, 3 - 24/11, 24/11) = (5/11, 9/11, 24/11).
Using an orthogonal projection: The solution x_row is equal to the orthogonal projection of any solution of Ax = b, e.g., x = (7, 3, 0), onto the row space of A. From the row reduction alluded to above, we see that the vectors v1 = (1, 0, 3) and v2 = (0, 1, 1) form a basis for the row space of A. Let B be the 3 × 2 matrix having these vectors as its columns. Then B^T B = [10 3; 3 2], and the standard matrix for the orthogonal projection of R^3 onto W = row(A) is given by
P = B(B^T B)^{-1} B^T = (1/11)[2 -3 3; -3 10 1; 3 1 10]
Finally, in agreement with the calculus solution, we have
x_row = Px = (1/11)[2 -3 3; -3 10 1; 3 1 10][7; 3; 0] = (1/11)[5; 9; 24]
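Both routes in D10 agree with the pseudoinverse, which always returns the minimum-norm solution. The matrix below is an assumed system consistent with the general solution (7 - 3t, 3 - t, t) given in the text:

```python
import numpy as np

# Assumed system whose general solution is (7 - 3t, 3 - t, t).
A = np.array([[1.0, 0.0, 3.0],
              [0.0, 1.0, 1.0]])
b = np.array([7.0, 3.0])

x_row = np.linalg.pinv(A) @ b   # minimum-norm solution; lies in row(A)
print(np.allclose(x_row, np.array([5.0, 9.0, 24.0]) / 11))   # True
print(np.allclose(A @ x_row, b))                             # True
```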
D11. The rows of R form a basis for the row space of A, and G = R^T has these vectors as its columns. Thus, from Theorem 7.7.5, G(G^T G)^{-1} G^T is the standard matrix for the orthogonal projection of R^n onto W = row(A).
WORKING WITH PROOFS

P1. If x and y are vectors in R^n and if α and β are scalars, then a·(αx + βy) = α(a·x) + β(a·y). Thus
T(αx + βy) = ((a·(αx + βy))/||a||^2) a = α((a·x)/||a||^2) a + β((a·y)/||a||^2) a = αT(x) + βT(y)
which shows that T is linear.

P2. If b = ta, then b^T b = b·b = (ta)·(ta) = t^2 (a·a) = t^2 a^T a and (similarly) b b^T = t^2 a a^T; thus
(1/(b^T b)) b b^T = (1/(t^2 a^T a)) t^2 a a^T = (1/(a^T a)) a a^T

P3. Let P be a symmetric n × n matrix that is idempotent and has rank k. Then W = col(P) is a k-dimensional subspace of R^n. We will show that P is the standard matrix for the orthogonal projection of R^n onto W, i.e., that Px = proj_W x for all x in R^n. To this end, we first note that Px belongs to W and that
x = Px + (x - Px) = Px + (I - P)x
To show that Px = proj_W x it suffices (from Theorem 7.7.4) to show that (I - P)x belongs to W^⊥, and since W = col(P) = ran(P), this is equivalent to showing that Py · (I - P)x = 0 for all y in R^n. Finally, since P^T = P = P^2 (P is symmetric and idempotent), we have P(I - P) = P - P^2 = P - P = 0 and so
Py · (I - P)x = y^T P^T (I - P)x = y^T P(I - P)x = 0
for every x and y in R^n. This completes the proof.
EXERCISE SET 7.8

1. First we note that the columns of A are linearly independent since they are not scalar multiples of each other; thus A has full column rank. It follows from Theorem 7.8.3(b) that the system Ax = b has a unique least squares solution given by x = (A^T A)^{-1} A^T b. The least squares error vector is b - Ax, and it is easy to check that this vector is in fact orthogonal to each of the columns of A. For example, (b - Ax) · c1(A) is proportional to (-6)(1) + (-27)(2) + (15)(4) = 0.
2. The columns of A are linearly independent, and so A has full column rank. Thus the system Ax = b has a unique least squares solution, x = (A^T A)^{-1} A^T b = (1/21)(9, -14). The least squares error vector is b - Ax, and it is easy to check that this vector is orthogonal to each of the columns of A.
3. From Exercise 1, the least squares solution of Ax = b is x = (A^T A)^{-1} A^T b. On the other hand, since A^T A = [21 25; 25 35], the standard matrix for the orthogonal projection of R^3 onto col(A) is
P = A(A^T A)^{-1} A^T = (1/220)[212 -36 20; -36 58 90; 20 90 170]
and a direct computation shows that proj_col(A) b = Pb = Ax.
4. From Exercise 2, the least squares solution of Ax = b is x = (1/21)(9, -14); thus
Ax = [2 -2; 1 1; 3 1] (1/21)[9; -14] = (1/21)[46; -5; 13]
On the other hand, the standard matrix of the orthogonal projection onto col(A) is P = A(A^T A)^{-1} A^T, and we have proj_col(A) b = Pb = Ax.
5. The least squares solutions of Ax = b are obtained by solving the associated normal system AᵀAx = Aᵀb, which is
Since the matrix on the left is nonsingular, this system has the unique solution
x = [x1; x2] = [24 8; 8 6]⁻¹ [12; 8] = (1/80)[6 −8; −8 24][12; 8] = [1/10; 6/5]
The error vector is
and the least squares error is li b - AxiJ =

+

+ (0)
2
= Ji =
6. The least squares solutions of Ax = b are obtained by solving the normal system AᵀAx = Aᵀb, which is
[14 42; 42 126] [x1; x2] = [4; 12]
This (redundant) system has infinitely many solutions given by
x = [x1; x2] = [2/7 − 3t; t] = [2/7; 0] + t[−3; 1]
The error vector is
and the least squares error is li b - Axil= + + = J* = :Lfl.
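The rank-deficient case of Exercise 6 can be illustrated numerically. The A and b below are a reconstruction consistent with the normal system shown (AᵀA = [14 42; 42 126], Aᵀb = [4; 12]); the particular entries are illustrative, not from the text:

```python
import numpy as np

# Columns of A are linearly dependent (col2 = 3 * col1), so the normal
# system is redundant and has infinitely many solutions.
A = np.array([[1.0, 3.0],
              [2.0, 6.0],
              [3.0, 9.0]])
b = np.array([4.0, 0.0, 0.0])
assert np.allclose(A.T @ A, [[14, 42], [42, 126]])
assert np.allclose(A.T @ b, [4, 12])

# Every x of the form [2/7, 0] + t*[-3, 1] solves the normal system.
for t in (0.0, 1.0, -2.5):
    x = np.array([2/7, 0.0]) + t * np.array([-3.0, 1.0])
    assert np.allclose(A.T @ A @ x, A.T @ b)

# The pseudoinverse selects the minimum-norm least squares solution.
x_min = np.linalg.pinv(A) @ b
assert np.allclose(A.T @ A @ x_min, A.T @ b)
```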
7. The least squares solutions of Ax = b are obtained by solving the normal system AᵀAx = Aᵀb, which is
The augmented matdx of this system reduces to [:
solutions given by
0
0
1
' 7]
'-a
1 : i ; thus there are infinitely many
0' 0
The error vector is
and l hc least squares error is li b - Axil = J(!)2 +

= =
8. The least squares solutions of Ax = b are obtained by solving AᵀAx = Aᵀb, which is
[
1 [::] =
17 33 50 X3 6
The augmented matrix of this system reduces to

0
1:-sl .
1 : tf ; thus there are infinitely many
0 0 I 0
solutions given by
x = [:;] [W
1
,=:]
The error vector is
and t he least squares error is llb - Axll = V( !iY + { fr )2 + {- fi-)2 = ,f1j; =
2
f[I .
9. The linear model for the given data is Mv = y, where M has rows (1, xᵢ) and y is the vector of y-values. The least squares solution is obtained by solving the normal system MᵀMv = Mᵀy, which is
[4 16; 16 74] [v1; v2] = [10; 47]
Since the matrix on the left is nonsingular, this system has a unique solution given by
[v1; v2] = [4 16; 16 74]⁻¹ [10; 47] = (1/20)[37 −8; −8 2][10; 47] = (1/10)[−3; 7] = [−3/10; 7/10]
Thus the least squares straight line fit to the given data is y = −3/10 + (7/10)x.
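The straight-line fit of Exercise 9 can be reproduced numerically. The data points below are a reconstruction consistent with the normal system shown (n = 4, Σx = 16, Σx² = 74, Σy = 10, Σxy = 47); the original data table did not survive the scan, so treat them as illustrative:

```python
import numpy as np

# Reconstructed data consistent with the normal system of Exercise 9.
x = np.array([2.0, 3.0, 5.0, 6.0])
y = np.array([1.0, 2.0, 3.0, 4.0])

# Design matrix M: a column of ones and a column of x-values.
M = np.column_stack([np.ones_like(x), x])
assert np.allclose(M.T @ M, [[4, 16], [16, 74]])
assert np.allclose(M.T @ y, [10, 47])

# Solve the normal system M^T M v = M^T y.
v = np.linalg.solve(M.T @ M, M.T @ y)
assert np.allclose(v, [-3/10, 7/10])   # fit: y = -3/10 + (7/10) x
```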
10. The linear model for the given data is Mv = y. The least squares solution is obtained by solving the normal system MᵀMv = Mᵀy, which is
[4 8; 8 22] [v1; v2] = [4; 9]
Since the matrix on the left is nonsingular, this system has a unique solution given by
[v1; v2] = [4 8; 8 22]⁻¹ [4; 9] = (1/24)[22 −8; −8 4][4; 9] = (1/24)[16; 4] = [2/3; 1/6]
Thus the least squares straight line fit to the given data is y = 2/3 + (1/6)x.
11. The quadratic least squares model for the given data is Mv = y, where M has rows (1, xᵢ, xᵢ²). The least squares solution is obtained by solving the normal system MᵀMv = Mᵀy, which is
[4 8 22; 8 22 62; 22 62 178] [v1; v2; v3] = [4; 9; 27]
Since the matrix on the left is nonsingular, this system has a unique solution given by
[v1; v2; v3] = [4 8 22; 8 22 62; 22 62 178]⁻¹ [4; 9; 27] = [1; −11/6; 2/3]
Thus the least squares quadratic fit to the given data is y = 1 − (11/6)x + (2/3)x².
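The quadratic fit of Exercise 11 can be verified the same way. The data points here are a reconstruction consistent with the normal system (MᵀM as shown, right-hand side (4, 9, 27)); they are illustrative, not read directly from the text:

```python
import numpy as np

# Reconstructed data consistent with the normal system of Exercise 11.
x = np.array([0.0, 2.0, 3.0, 3.0])
y = np.array([1.0, 0.0, 1.0, 2.0])

# Quadratic design matrix: columns 1, x, x^2.
M = np.column_stack([np.ones_like(x), x, x**2])
assert np.allclose(M.T @ M, [[4, 8, 22], [8, 22, 62], [22, 62, 178]])
assert np.allclose(M.T @ y, [4, 9, 27])

v = np.linalg.solve(M.T @ M, M.T @ y)
assert np.allclose(v, [1.0, -11/6, 2/3])   # fit: y = 1 - (11/6)x + (2/3)x^2
```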
12. The quadratic least squares model for the given data is Mv = y where
M = [1 0 0; 1 1 1; 1 1 1; 1 2 4]
and y is the vector of given y-values. The least squares solution is obtained by solving the normal system MᵀMv = Mᵀy, which is
Since the matrix on the left is nonsingular, this system has a unique solution given by

[! : [ = -: _;] [ == [=:]
113 6 10 18 14 t - 2 1 H
Thus the least quadratic fit i:.n the given tl at <'- is y = -1 - +

13. The model for the least squares cubic fit to the given data is Mv = y where
M = [1 1 1 1; 1 2 4 8; 1 3 9 27; 1 4 16 64; 1 5 25 125],   y = [4.9; 10.8; 27.9; 60.2; 113.0]
The normal system MᵀMv = Mᵀy is
[5 15 55 225; 15 55 225 979; 55 225 979 4425; 225 979 4425 20515] [a0; a1; a2; a3] = [216.8; 916.0; 4087.4; 18822.4]
and the solution, written in comma-delimited form, is (a0, a1, a2, a3) ≈ (5.160, −1.864, 0.811, 0.775).
14. The model for the least squares cubic fit to the given data is Mv = y where
M = [1 0 0 0; 1 1 1 1; 1 2 4 8; 1 3 9 27; 1 4 16 64],   y = [0.9; 3.1; 9.4; 24.1; 57.3]
The associated normal system MᵀMv = Mᵀy is
[5 10 30 100; 10 30 100 354; 30 100 354 1300; 100 354 1300 4890] [a0; a1; a2; a3] = [94.8; 323.4; 1174.4; 4396.2]
and the solution, written in comma-delimited form, is (a0, a1, a2, a3) ≈ (0.817, 3.586, −2.171, 1.200).
15. If M = [1 x1; 1 x2; …; 1 xn] and y = [y1; y2; …; yn], then
MᵀM = [1 1 ⋯ 1; x1 x2 ⋯ xn][1 x1; 1 x2; …; 1 xn] = [n Σxᵢ; Σxᵢ Σxᵢ²]
and
Mᵀy = [1 1 ⋯ 1; x1 x2 ⋯ xn][y1; y2; …; yn] = [Σyᵢ; Σxᵢyᵢ]
Thus the normal system MᵀMv = Mᵀy can be written as
[n Σxᵢ; Σxᵢ Σxᵢ²] [a; b] = [Σyᵢ; Σxᵢyᵢ]
DISCUSSION AND DISCOVERY
D1. (a) The distance from the point P0 = (1, −2, 1) to the plane W with equation x + y − z = 0 is
d = |(1)(1) + (1)(−2) + (−1)(1)| / √((1)² + (1)² + (−1)²) = 2/√3 = 2√3/3
and the point in the plane that is closest to P0 is Q = (5/3, −4/3, 1/3). The latter is found by computing the orthogonal projection of the vector b = OP0 onto the plane: if the columns of a matrix A form a basis for W, then projW b = A(AᵀA)⁻¹Aᵀb; equivalently, projW b = b − ((a · b)/(a · a))a where a = (1, 1, −1) is the normal vector of the plane.
(b) The distance from the point P0 = (1, 2, 0, −1) to the hyperplane x1 − x2 + 2x3 − 2x4 = 0 is
d = |(1)(1) + (−1)(2) + (2)(0) + (−2)(−1)| / √((1)² + (−1)² + (2)² + (−2)²) = 1/√10 = √10/10
and the point in the hyperplane that is closest to P0 is Q = (9/10, 21/10, −1/5, −4/5). The latter is found by computing the orthogonal projection of the vector b = OP0 onto the hyperplane:
projW b = b − ((a · b)/(a · a)) a,   where a = (1, −1, 2, −2)
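The point-to-plane computation in D1(a) is easy to check numerically; all values below come from the worked solution:

```python
import numpy as np

# Distance from P0 = (1, -2, 1) to the plane x + y - z = 0 through the
# origin: d = |a . b| / ||a||, where a is the plane's normal and b = OP0.
a = np.array([1.0, 1.0, -1.0])
b = np.array([1.0, -2.0, 1.0])

d = abs(a @ b) / np.linalg.norm(a)
assert np.isclose(d, 2 / np.sqrt(3))    # = 2*sqrt(3)/3

# The closest point Q is the orthogonal projection of b onto the plane:
# b minus its component along the normal.
Q = b - (a @ b) / (a @ a) * a
assert np.allclose(Q, [5/3, -4/3, 1/3])
assert np.isclose(a @ Q, 0.0)                  # Q lies in the plane
assert np.isclose(np.linalg.norm(b - Q), d)    # and realizes the distance
```

The same two-line computation handles the hyperplane case of D1(b) with a = (1, −1, 2, −2).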
D2. (a) The vector in col(A) that is closest to b is projcol(A)b = A(AᵀA)⁻¹Aᵀb.
(b) The least squares solution of Ax = b is x = (AᵀA)⁻¹Aᵀb.
(c) The least squares error vector is b − A(AᵀA)⁻¹Aᵀb.
(d) The least squares error is ||b − A(AᵀA)⁻¹Aᵀb||.
(e) The standard matrix for the orthogonal projection onto col(A) is P = A(AᵀA)⁻¹Aᵀ.
D3. From Theorem 7.8.4, a vector x is a least squares solution of Ax = b if and only if b − Ax belongs to col(A)⊥. Here b − Ax = (2, −7, s − 14), and so b − Ax is orthogonal to col(A) if and only if
(1)(2) + (2)(−7) + (4)(s − 14) = 0 and (−1)(2) + (3)(−7) + (5)(s − 14) = 0
i.e., 4s − 68 = 0 and 5s − 93 = 0. These equations are clearly incompatible, and so we conclude that, for any value of s, the vector x is not a least squares solution of Ax = b.
D4. The given data points nearly fall on a straight line; thus it would be reasonable to use a linear least squares fit and then use the resulting linear formula y = a + bx to extrapolate to x = 45.
05. The m<><;el foe t h;s least "''"'"' fit ;, [: il 1:1 : m, nd t he conespond; ng nonnal system ;s
[
:

[:] Thus squares solution is


'l 36
a l 3
2
11 ;n -
1
... 1
21
[
3]1 [ l [ .1) 9] [1 l [ 5]
LJ = i * = - =
rP.'l ulting in y =
2
5
1
+ i as the best least squares fit. by a curve of this type.
D6. We have [x; r] a solution of the given system if and only if Ax + r = b and Aᵀr = 0. Note that Aᵀr = 0 if and only if r is orthogonal to col(A). It follows that b − Ax belongs to col(A)⊥ and so, from Theorem 7.8.4, x is a least squares solution of Ax = b and r = b − Ax is the least squares error vector.
WORKING WITH PROOFS
P1. If Ax = b is consistent, then b is in the column space of A and any solution of Ax = b is also a least squares solution (since ||b − Ax|| = 0). If, in addition, the columns of A are linearly independent, then there is only one solution of Ax = b and, from Theorem 7.8.3, the least squares solution is also unique. Thus, in this case, the least squares solution is the same as the exact solution of Ax = b.
P2. If b is orthogonal to the column space of A, then projcol(A)b = 0. Thus, since the columns of A are linearly independent, we have Ax = projcol(A)b = 0 if and only if x = 0.
P3. The least squares solutions of Ax = b are the solutions of the normal system AᵀAx = Aᵀb. From Theorem 3.5.1, the solution space of the latter is the translated subspace x̄ + W where x̄ is any least squares solution and W = null(AᵀA) = null(A).
P4. If w is in W and w ≠ projW b then, as in the proof of Theorem 7.8.1, we have ||b − w|| > ||b − projW b||; thus projW b is the only best approximation to b from W.
P5. If a0, a1, a2, …, am are scalars such that a0c1(M) + a1c2(M) + a2c3(M) + ⋯ + amcm+1(M) = 0, then
a0 + a1xᵢ + a2xᵢ² + ⋯ + amxᵢᵐ = 0
for each i = 1, 2, …, n. Thus each xᵢ is a root of the polynomial p(x) = a0 + a1x + ⋯ + amxᵐ. But such a polynomial (if not identically zero) can have at most m distinct roots. Thus, if n > m and if at least m + 1 of the numbers x1, x2, …, xn are distinct, then a0 = a1 = a2 = ⋯ = am = 0. This shows that the column vectors of M are linearly independent.
P6. If at least m + 1 of the numbers x1, x2, …, xn are distinct then, from Exercise P5, the column vectors of M are linearly independent; thus M has full column rank and MᵀM is invertible.
EXERCISE SET 7.9
1. (a) v1 · v2 = (2)(3) + (3)(2) = 12 ≠ 0; thus the vectors v1, v2 do not form an orthogonal set.
(b) v1 · v2 = 0; thus the vectors v1, v2 form an orthogonal set. The corresponding orthonormal set is q1 = v1/||v1||, q2 = v2/||v2||.
(c) We have v1 · v2 = v1 · v3 = v2 · v3 = 0; thus the vectors v1, v2, v3 form an orthogonal set. The corresponding orthonormal set is q1 = v1/||v1||, q2 = v2/||v2||, q3 = v3/||v3||.
(d) Although v1 · v2 = v1 · v3 = 0, we have v2 · v3 = (1)(4) + (2)(−3) + (5)(0) = −2 ≠ 0; thus the vectors v1, v2, v3 do not form an orthogonal set.
2. (a) v1 · v2 = 0; thus the vectors v1, v2 form an orthogonal set. The corresponding orthonormal set is q1 = v1/||v1||, q2 = v2/||v2||.
(b) v1 · v2 ≠ 0; thus the vectors v1, v2 do not form an orthogonal set.
(c) v1 · v2 ≠ 0; thus the vectors v1, v2, v3 do not form an orthogonal set.
(d) We have v1 · v2 = v1 · v3 = v2 · v3 = 0; thus the vectors v1, v2, v3 form an orthogonal set. The corresponding orthonormal set is q1 = v1/||v1||, q2 = v2/||v2||, q3 = v3/||v3||.
3. (a) These vectors form an orthonormal set.
(b) These vectors do not form an orthogonal set since v2 · v3 ≠ 0.
(c) These vectors form an orthogonal set but not an orthonormal set since ||v3|| = √3 ≠ 1.
4. (a) Yes.
(b) No; ||v1|| ≠ 1 and ||v2|| ≠ 1.
(c) No; the vectors are not pairwise orthogonal, and not all of them have unit length.
6. (a) projwx = (x v1)v1 + (x v:a)vz = (l )vt + (2)vz = ( !. !, 4} + (1, 1, - 1, -1) =
(b) projwx = (x vt )vl + (x vz)vz + (x v3) v3 = (l )vl + (2)v2 + (O)v3 =
7. (n)
(b)
8. (a)
(b)
13. Using Formula (6), we have P =

r2
1 2 3
J :i I
2 '2 3
3 -3
72
14.
[
J
Using Formula (6), we h.:we P =
V'6
js]- [
I 2 - 76
?J -76 I
275
I
7u
2
3

- I

:.
6
15. Using the matrix found in Exercise 13, the orthogonal projection of w onto W = span{v1, v2} is
Pw =
r
! [ 0] [
.. s _1 2 = !

9 9 9
- 2 - 3 _L98
- 9
On the other hand, using Formula (7), we have

16. Using the matrix found in Exercise 13, the orthogonal projection of w onto W = span{v1, v2} is
On the other hand, using Formula (7), we have
projww = (w Yt)Vt + (w v2)v2 = !, + j, =


17. We have v1/||v1|| = (1/√3, 1/√3, 1/√3) and v2/||v2|| = (1/√6, 1/√6, −2/√6). Thus the standard matrix for the orthogonal projection of R³ onto W = span{v1, v2} is given by
P = [1/2 1/2 0; 1/2 1/2 0; 0 0 1]
18. We have
11
:!11 = (S, Thus the standard matrix for the orthogonal pro-
jection of R
3
onto W span{v
1
, v
2
} is given by
p = 1]
j [l
1
3
2
3
-il
19. Using the matrix found in Exercise 17, the orthogonal projection of w onto W = span{v1, v2} is
Pw Hl [ =t]
On the other hand, using Formula (8), we have
20. Using the matrix found in Exercise 18, the orthogonal projection of w onto W = span{v1, v2} is
Pw=

! [ 2] [ OJ
!9 ; =
-9
On the other hand, using Formula {8), we have
21. From Exercise 17 we have P = [1/2 1/2 0; 1/2 1/2 0; 0 0 1]; thus trace(P) = 1/2 + 1/2 + 1 = 2.
22. From Exercise 18 we have P as computed there; thus trace(P) = 2.
23. We have Pᵀ = P and it is easy to check that P² = P; thus P is the standard matrix of an orthogonal projection. The dimension of the range of the projection is equal to tr(P) = 2.
24. The dimension of the range is equal to tr(P) = 1.
25. We have Pᵀ = P and it is easy to check that P² = P; thus P is the standard matrix of an orthogonal projection. The dimension of the range of the projection is equal to tr(P) = 1.
26. The dimension of the range is equal to tr(P) = 2.
27. Let v1 = w1 = (1, −3) and v2 = w2 − ((w2 · v1)/||v1||²)v1 = (2, 2) − (−4/10)(1, −3) = (12/5, 4/5). Then {v1, v2} is an orthogonal basis for R², and the vectors q1 = v1/||v1|| = (1/√10, −3/√10) and q2 = v2/||v2|| = (3/√10, 1/√10) form an orthonormal basis for R².
28. Let v1 = w1 = (1, 0) and v2 = w2 − ((w2 · v1)/||v1||²)v1 = (3, −5) − (3)(1, 0) = (0, −5). Then q1 = v1/||v1|| = (1, 0) and q2 = v2/||v2|| = (0, −1) form an orthonormal basis for R².
29. Let v1 = w1 = (1, 1, 1), v2 = w2 − ((w2 · v1)/||v1||²)v1 = (−1, 1, 0) − (0/3)(1, 1, 1) = (−1, 1, 0), and compute v3 similarly from w3. Then {v1, v2, v3} is an orthogonal basis for R³, and the vectors qᵢ = vᵢ/||vᵢ|| form an orthonormal basis for R³.
30. Let v1 = w1 = (1, 0, 0), v2 = w2 − ((w2 · v1)/||v1||²)v1 = (3, 7, −2) − (3)(1, 0, 0) = (0, 7, −2), and
Then { v 1
1
v2, v3} is an orthogonal basis for R
3
1
and the vect ors
-s
VJ
q, = llvdl = (l, O, O),
VJ (Q 30 105 )
q
3
= llv
3
U = ' ./11925
1
J uns
form an orthonormal basis for R
3
.
31. Let V l = Wt = (0, 2, 1, 0), V2 = w2- VJ = (1, -1, 0, 0) - ( 1, 0) = (1, -i,
and
Then { v
1
, v
2
, v
3
, v
4
} is an orthogonal basis for and the vectors
- YJ - {0 2Vs J5 0) - - ( v'3(i - VJo :.&.Q 0)
ql - llvdl - '"5 Sl , q2 - llv211 6 I 30 ' IS I '
- VJ - (.{IQ .@ v'iO) - - (v'iS .ill uTI .iil)
q 3-
11
v
3
ll- 10 ' 10,- s ,--s- q 4

15 1s , - ts s
form an orthonormal basis for R4.
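The Gram-Schmidt computations in Exercises 27-32 all follow one pattern, which can be sketched as a short function. The first two input vectors below are from Exercise 31; the last two are arbitrary vectors chosen here to complete a linearly independent set, not the ones from the text:

```python
import numpy as np

def gram_schmidt(vectors):
    """Orthonormalize a list of linearly independent vectors."""
    basis = []
    for w in vectors:
        v = np.array(w, dtype=float)
        # Subtract the projection of w onto each vector found so far.
        for q in basis:
            v -= (v @ q) * q
        basis.append(v / np.linalg.norm(v))
    return np.array(basis)

# First two vectors from Exercise 31; last two are illustrative fillers.
W = [(0, 2, 1, 0), (1, -1, 0, 0), (1, 1, 0, -1), (1, 0, 0, 1)]
Q = gram_schmidt(W)

# The rows of Q are orthonormal: Q Q^T = I.
assert np.allclose(Q @ Q.T, np.eye(4))
```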
32. Let VJ = WJ = (1. 2.1. 0). V2 = Wz- = (1,1, 2, 0)- 2, 1, 0) = ( 0),
and
Then { v
1
, v2, v3, v
4
} is an orthogonal basis for R
4
, and the vectors
form an orthonormal basis for R
4

33. The vectors w, = ( 0), w
2
= ( 0), and w
3
= (0, 0, 1) form an orthonormal basis for
R3.
34. Let A be the 2 × 4 matrix having the vectors w1 and w2 as rows. Then row(A) = span{w1, w2}, and null(A) = span{w1, w2}⊥. A basis for null(A) can be found by solving the linear system Ax = 0. The reduced row echelon form of the augmented matrix for this system is
system is
1
0
_l
2
l
2
4 I
.!. : 0]
_! I 0
4 I
and oo a general ootut;on ;, by [m , [\ -
1
;] = [-!] + [=;] the vootorn w, =
4, 1, 0) and w4 = ( 0, 1) form a basis for span{w1, w2}-'-, and B = {wJ, w2, w3, w
4
} is
a basis for R
4
. Note also t ha t, in adrlition t o being orthogonal to w1 a.nd W'J, the vectors w
3
and
w4 ar<:: orthogonal to each other. Thus B is an orthogonal basis for R4 and application of the
Grarn-Schmic.Jt process to t hese vectors amounts to simply normnlizing them. This results in the
orthono110td l>asi::; {qt , Qz, Q:$, q 4} where Q1 = Wt (L Q2 = Wz:::: (-7.f, 7:3 0), Q3 =
11
:;
11
= (-js ,-)5, -:76,0), and q4 =
11
:!g = (--dis, - )r
8
, 7rn,o).
35. Note that w3 = w1 + w2. Thus the subspace W spanned by the given vectors is 2-dimensional with basis {w1, w2}. Let v1 = w1 = (0, 1, 2) and
v2 = w2 − ((w2 · v1)/||v1||²)v1 = (−1, 0, 1) − (2/5)(0, 1, 2) = (−1, −2/5, 1/5)
Then {v1, v2} is an orthogonal basis for W, and the vectors u1 = v1/||v1|| = (0, 1/√5, 2/√5) and u2 = v2/||v2|| form an orthonormal basis for W.
36. Note that w4 = w1 − w2 + w3. Thus the subspace W spanned by the given vectors is 3-dimensional with basis {w1, w2, w3}. Let v1 = w1 = (−1, 2, 4, 7), and let
v ... = \1: 2 - = (-3 0 4 -2) - (2.)(-l 2 4 .., , ) ""' (- 1! _J.
.. Jlv
1
jj 2 ' ' ' ;o 11 ' -:, 1 2
w3 v 1 v 2 , ( 9 )
llvdl
2
v1- llv
2
jj2 v2 = (2, 2, 7, -3)-
70
(- 1, 2,-1,7)-
(
31843) ( - 41 - 26
401
14' 7) 7 ' 2
14
(
9876 3768 5891 3032)
= 2005 ) 2005 > 2005 I - 200.'5
Then {vt . v2, VJ} is an orthogonal ha.osis for w. and the vectors Uj = n::u = (j.ffl,
,;!1.,. ( - 41 - 2 s2 -35 ) I ( 9876 3768 5891 -3032 )
U
2
= Uv211 = JS'iit4 ' /5614
1
v
1
5iii4' J5614 ' an( UJ = llvlil = J 155630105' v'l55630t05 ' ,/155630105
1
J l55630105
form an orthonormal basis for W.
37. Note that u1 and u2 are orthonormal vectors. Thus the projection of w onto the subspace W spanned by these two vectors is given by
w1 = projW w = (w · u1)u1 + (w · u2)u2
and the component of w orthogonal to W is w2 = w − w1.
38. First we find an orthonormal basis {q1, q2} for W by applying the Gram-Schmidt process to {u1, u2}: let v1 = u1 = (−1, 0, 1, 2), v2 = u2 − ((u2 · v1)/||v1||²)v1, and set q1 = v1/||v1||, q2 = v2/||v2||. Then {q1, q2} is an orthonormal basis for W, and so the orthogonal projection of w = (−1, 2, 6, 0) onto W is given by
and the component of w orthogonal to W is
W2 = W - Wt = ( -1, 2,6, Q) - (
1
= (
1

1
4
9
1
39. If w = (a, b, c), then the vector
u = w/||w|| = (a, b, c)/√(a² + b² + c²)
is an orthonormal basis for the 1-dimensional subspace W spanned by w. Thus, using Formula (6), the standard matrix for the orthogonal projection of R³ onto W is
P = uuᵀ = (1/(a² + b² + c²)) [a² ab ac; ab b² bc; ac bc c²]
DISCUSSION AND DISCOVERY
D1. If a and b are nonzero, then u1 = (1, 0, a) and u2 = (0, 1, b) form a basis for the plane z = ax + by, and application of the Gram-Schmidt process to these vectors yields an orthonormal basis {q1, q2} where
D2. (a) span{v1} = span{w1}, span{v1, v2} = span{w1, w2}, and span{v1, v2, v3} = span{w1, w2, w3}.
(b) v3 is orthogonal to span{w1, w2}.
D3. If the vectors w1, w2, …, wk are linearly dependent, then at least one of the vectors in the list is a linear combination of the previous ones. If wj is a linear combination of w1, w2, …, wj−1 then, when applying the Gram-Schmidt process at the jth step, the vector vj will be 0.
D4. If A has orthonormal columns, then AAᵀ is the standard matrix for the orthogonal projection onto the column space of A.
D5. (a) col(M) = col(P )
(b) Find an orthonormal basis for col(P) and use these vectors as the columns of the matrix M.
(c) No. Any orthonormal basis for col(P ) can be used t o form t he columns of M .
D6. (a) True. Any orthonormal set of vectors is linearly independent.
(b) False. An orthogonal set may contain 0. However, it is true that any orthogonal set of nonzero vectors is linearly independent.
(c) False. Strictly speaking, the subspace {0} has no basis, hence no orthonormal basis. However, it is true that any nonzero subspace has an orthonormal basis.
(d) True. The vector q3 is orthogonal to the subspace span{w1, w2}.
WORKING WITH PROOFS
P1. If {v1, v2, …, vk} is an orthogonal basis for W, then {v1/||v1||, v2/||v2||, …, vk/||vk||} is an orthonormal basis. Thus, using part (a), the orthogonal projection of a vector x on W can be expressed as
projW x = (x · v1/||v1||) v1/||v1|| + (x · v2/||v2||) v2/||v2|| + ⋯ + (x · vk/||vk||) vk/||vk||
P2. If A is symmetric and idempotent, then A is the standard matrix of an orthogonal projection operator, namely the orthogonal projection of Rⁿ onto W = col(A). Thus A = UUᵀ where U is any n × k matrix whose column vectors form an orthonormal basis for W.
P3. We must prove that vj ∈ span{w1, w2, …, wj} for each j = 1, 2, …, k. The proof is by induction on j.
Step 1. Since v1 = w1, we have v1 ∈ span{w1}; thus the statement is true for j = 1.
Step 2 (induction step). Suppose the statement is true for the integers 1, 2, …, j. The vector vj+1 is obtained from wj+1 by subtracting a linear combination of v1, v2, …, vj, and since v1 ∈ span{w1}, v2 ∈ span{w1, w2}, …, and vj ∈ span{w1, w2, …, wj}, it follows that vj+1 ∈ span{w1, w2, …, wj, wj+1}. Thus if the statement is true for each of the integers 1, 2, …, j then it is also true for j + 1.
These two steps complete the proof by induction.
EXERCISE SET 7.10
1. The column vectors of the matrix A are w1 = (1, 2) and w2 = (−1, 3). Application of the Gram-Schmidt process to these vectors yields
q1 = (1/√5)(1, 2),   q2 = (1/√5)(−2, 1)
We have w1 = (w1 · q1)q1 = √5 q1 and w2 = (w2 · q1)q1 + (w2 · q2)q2 = √5 q1 + √5 q2. Thus application of Formula (3) yields the following QR-decomposition of A:
A = [1 −1; 2 3] = (1/√5)[1 −2; 2 1] [√5 √5; 0 √5] = QR
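The QR-decomposition of Exercise 1 can be checked numerically; the factors below are the ones computed by hand above:

```python
import numpy as np

# A, Q, R from Exercise 1: Q = (1/sqrt5)[[1,-2],[2,1]], R = [[s5,s5],[0,s5]].
A = np.array([[1.0, -1.0],
              [2.0,  3.0]])
s5 = np.sqrt(5.0)
Q = np.array([[1.0, -2.0],
              [2.0,  1.0]]) / s5
R = np.array([[s5, s5],
              [0.0, s5]])

assert np.allclose(Q @ R, A)              # A = QR
assert np.allclose(Q.T @ Q, np.eye(2))    # Q has orthonormal columns
assert np.allclose(np.triu(R), R)         # R is upper triangular

# np.linalg.qr produces the same factorization, up to column signs.
Qn, Rn = np.linalg.qr(A)
assert np.allclose(Qn @ Rn, A)
```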
2. Application of the Gram-Schmidt process to the column vectors w1 and w2 of A yields
q1 = (1/√2)(1, 0, 1),   q2 = (1/√3)(−1, 1, 1)
We have w1 = √2 q1 and w2 = 3√2 q1 + √3 q2. This yields the following QR-decomposition of A:
A = [1 2; 0 1; 1 4] = [1/√2 −1/√3; 0 1/√3; 1/√2 1/√3] [√2 3√2; 0 √3] = QR
3. Application of the Gram-Schmidt process to the column vectors w
1
and w 2 of A yields
[
8 l
aJ26
Q2 = s0t
JJ26
We have WJ = (wl Qt)CJ1 - 3q, and w2 = (w2 q, )q l + (wz Q2)Q2- + "fq z_
the following QR-decomposilion of A:
[
11]
A= = -:
8 l
3726 3 .l

[o =QR
3726
4. Application of the Gram-Schmidt process to the column vectors w1, w2, w3 of A yields
This yields
\Ve have w1 = J2q
1
, w
2
= "lq
1
+ vl1q2, and w 3 = .J2q
1
- 1q2 + Q3 This yields the follow-
ing QR-dec01npositinn l.lf A:
A
[
1 0 2]
011= 0 73'
1 2 Q I l
72 73'
[v'2
7s 0
--Ja 0
v'2
v'3
0
J2]
il
- 3
h1
3
=QR
5. Application of the Gram-Schmidt process to the column vectors w1, w2, w3 of A yields


Q2 =
719
We have Wt = J2q
1
, w2 =

+ and wa = J2ql +
3
'(Pq2 + This yields the
following QR-decomposition of A:
A=
2 1]
I 1 = 72
3 l 0
1
738
1
-73'8
6
73'8
J2] =QR
.i!!
19
0
6. Application of the Gram-Schmidt process to the column vectors w1, w2, w3 of A yields
We have w
1
= 2ql, w2 = - q1 + qz, and w3 = + + ./i-G3 This yields the followi ng QR-
decomposition of A:
2 - 1
A=
l


- 1
r :L [-; 1
1 oJ -2 2
0]
[o 1
1 0 0
J] =QR
72
h
[-
1 1] = [-3_: f3
7 . From 3, we have .4 = 2 1
3
:Jv"itl = R
I J
..rtu Q
Thus the normal system for Ax = b can be expressed as Rx = Qᵀb, which is:

,J,!l m = Lil
Solving thi.s system hy back substitution yields the least squares solution Xz = x
1
= -
8. From Exercise 4, we have A = [1 0 2; 0 1 1; 1 2 0] = QR, with Q and R as computed there. Thus the normal system for Ax = b can be expressed as Rx = Qᵀb, which is:
, ..
v2


[::] =
0
W X3
3 v 6
0
1
J3
2
v'G
v'2]
- :/f. ""' QR. Thus the normal
2..f6
:r
Solving this system by back substit ution yields X3 = Xz = x, = 0. Note that, in this example,
the syst.etn Ax = b is consistent and this is its exact solution.
9. From Exercise 5, we have A = [1 2 1; 1 1 1; 0 3 1] = QR, with Q and R as computed there. Thus the normal system for Ax = b can be expressed as Rx = Qᵀb, which is:
w
2
1
v'2
.ill
2
l
- v'38
3
0 'J19
Solving this system by back substitution yields x3 = 16, x2 = −5, x1 = −8. Note that, in this example, the system Ax = b is consistent and this is its exact solution.
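Solving Ax = b through Rx = Qᵀb, as in Exercises 7-10, can be sketched numerically. A is taken from Exercise 9; b is reconstructed so that the exact solution is x = (−8, −5, 16), as stated in the text:

```python
import numpy as np

# A from Exercise 9; b chosen so that the exact solution is (-8, -5, 16).
A = np.array([[1.0, 2.0, 1.0],
              [1.0, 1.0, 1.0],
              [0.0, 3.0, 1.0]])
b = A @ np.array([-8.0, -5.0, 16.0])

# Since Q is orthogonal, Ax = b is equivalent to Rx = Q^T b, and R is
# upper triangular, so back substitution finishes the job.
Q, R = np.linalg.qr(A)
x = np.linalg.solve(R, Q.T @ b)   # triangular solve

assert np.allclose(x, [-8.0, -5.0, 16.0])
```

For an inconsistent system the same Rx = Qᵀb computation yields the least squares solution.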
10. From Exercise 6, we have A = QR with Q and R as computed there. Thus the normal system for Ax = b can be expressed as Rx = Qᵀb, which is

i] [=:] = [t
0 0 X3 0
1
72
Solving this system by back substitution yields x
3
= -2, x
2
=
1
2
1
, x
1
=
11. The plane 2x − y + 3z = 0 corresponds to a⊥ where a = (2, −1, 3). Thus, writing a as a column vector, the standard matrix for the reflection of R³ about the plane is
H = I − (2/aᵀa) aaᵀ = I − (1/7)[4 −2 6; −2 1 −3; 6 −3 9] = (1/7)[3 2 −6; 2 6 3; −6 3 −2]
and the reflection of the vector b = (1, 2, 2) about that plane is given, in column form, by
Hb = (1/7)[3 2 −6; 2 6 3; −6 3 −2] [1; 2; 2] = (1/7)[−5; 20; −4]
12. The plane x + y − 4z = 0 corresponds to a⊥ where a = (1, 1, −4). Thus, writing a as a column vector, the standard matrix for the reflection of R³ about the plane is
H = I − (2/aᵀa) aaᵀ = I − (1/9)[1 1 −4; 1 1 −4; −4 −4 16] = (1/9)[8 −1 4; −1 8 4; 4 4 −7]
and the reflection of the vector b = (1, 0, 1) about that plane is given, in column form, by
Hb = (1/9)[8 −1 4; −1 8 4; 4 4 −7] [1; 0; 1] = (1/3)[4; 1; −1]
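The Householder reflection used in Exercises 11-16 is easy to check numerically; the values below are those of Exercise 11:

```python
import numpy as np

# Reflection of R^3 about the plane 2x - y + 3z = 0 (Exercise 11).
# The plane is a-perp for a = (2, -1, 3), and H = I - (2 / a.a) a a^T.
a = np.array([2.0, -1.0, 3.0])
H = np.eye(3) - (2.0 / (a @ a)) * np.outer(a, a)
assert np.allclose(7 * H, [[3, 2, -6], [2, 6, 3], [-6, 3, -2]])

# H is symmetric and orthogonal, and H^2 = I (reflecting twice undoes it).
assert np.allclose(H, H.T)
assert np.allclose(H @ H, np.eye(3))

# Reflect b = (1, 2, 2): Hb = (-5/7, 20/7, -4/7), as computed above.
b = np.array([1.0, 2.0, 2.0])
assert np.allclose(H @ b, [-5/7, 20/7, -4/7])
```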

0
[-:
- 1
-:]
u
2
-!l
3
13.
2 T
1 1
I
H = l - - aa -
=
3
aT a
0 -1
2
3

0

2 [ - :
- 1

u
I

3
14.
2 1
I 1
2
H =1- --aa =
=
3
aT a
6 2
4J
0 -2
2
3

0 0
OJ r
0 0

0 0
- 0]
2 T
1 0 0 2 0 1 - 1
9 2
IT IT II
15. Fi = 1- - aa
2 9 6
aT a
0 1
11
- 1 1
TI IT IT
0 0 3 -3
_...
i
7
II II
-n

=
13 4 2
']
0 0
0] [ 1
-2 -3 T5 15 5 15
2 1 0 0 2 -2 4 6
4 7 _1 4.
16. H- l- -aar =
15 H' 5
-15
aT a
0 1 o 10 -3 6 9
2 4 1 2
5
-ii --s
-g
0 0
1 -1 2 3
2 _.i_ :.! 13
15 15
-5
15
17. (a) Let a = v − w = (3, 4) − (5, 0) = (−2, 4) and H = I − (2/aᵀa) aaᵀ. Then H is the Householder matrix for the reflection about a⊥, and Hv = w.
(b) Let a = v − w = (3, 4) − (0, 5) = (3, −1) and H = I − (2/aᵀa) aaᵀ. Then H is the Householder matrix for the reflection about a⊥, and Hv = w.
(c) Let a= v.- w = (3,4)- (,- {/) = (
6
-;12, S+l'l). Then the appropriate Householder ma-
trix is:
2 T [1 0] 2
H = I- --aa = - ----;::::
aT a 0 1 &0- l7v'2
17-25/2] r .1__..
2 l'n
= 1
33+8../2 -
-2- ../2
18. (a) Let a= v- w,., (1, 1)- (v'2,0) = (1- v'2, 1). Then the appropriate Householder matrix is:
H = J ..... [1 OJ ___ 2 _ [(1- v'2)
2
1- v'2]
a1'a 0 1 1-2v'2 1-/2 I
=
2 2 2
(b) Let v- w = (1, 1)- {0, /2) = (l, 1- .J2). Then the appropriate Householder matrix is:
H =
1
__ 2 aaT = [1 0] ... __ ._2_ [ 1 1- v'2]
aTa 0 1 4-2J2 1- v2 (1- /2)
2
= [ot o] _ [
l v'2
-2
=
2- v'2 J:i
-2- 2
( <:} Let a = v - w = ( t, 1) - ( Y1-) ' 1+2v'3) = e-2-n. l-;/3). Then the appropriate holrlcr
mat.rix is:
= (
0
1 o] _
1 3-2:/3
2
( 3-/'3 )( 1-2.,13 )]
e-.i
13
)
2
-32/3]
2
.fl
2
19. Let w = (||v||, 0, 0) = (3, 0, 0) and a = v − w = (−5, 1, 2). Then
H = I − (2/aᵀa) aaᵀ = I − (1/15)[25 −5 −10; −5 1 2; −10 2 4] = (1/15)[−10 5 10; 5 14 −2; 10 −2 11]
is the standard matrix for the Householder reflection of R³ about a⊥, and Hv = w.
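The construction of Exercises 19-20, mapping v to (||v||, 0, 0), can be sketched numerically with the data of Exercise 19 (v = (−2, 1, 2), since a = v − w = (−5, 1, 2) with w = (3, 0, 0)):

```python
import numpy as np

# Exercise 19: v has norm 3, so w = (3, 0, 0), and a = v - w = (-5, 1, 2).
v = np.array([-2.0, 1.0, 2.0])
w = np.array([np.linalg.norm(v), 0.0, 0.0])
a = v - w

# Householder reflection about a-perp.
H = np.eye(3) - (2.0 / (a @ a)) * np.outer(a, a)
assert np.allclose(H @ v, w)     # H sends v to (||v||, 0, 0)
assert np.allclose(H @ w, v)     # and, being a reflection, w back to v
```

This is the zeroing step used repeatedly in the QR constructions of Exercises 21-24.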
20. Let w = (||v||, 0, 0) = (3, 0, 0) and a = v − w = (−2, −2, 2). Then
H = I − (2/aᵀa) aaᵀ = I − (1/6)[4 4 −4; 4 4 −4; −4 −4 4] = (1/3)[1 −2 2; −2 1 2; 2 2 1]
is the standard matrix for the Householder reflection of R³ about a⊥, and Hv = w.
21. Let v = (1, −1), w = (||v||, 0) = (√2, 0), and a = v − w = (1 − √2, −1). Then the Householder reflection about a⊥ maps v into w. The standard matrix for this reflection is
H = 1 - _2_aaT = [1 0] 2 [(1 -J2)2
aTa 0 1 - 4- 2-/2 -1 + v'2
= [ol o] _ 2 + J2 [3 -2J2 -1 + J2]
1 2 -1 + J2 1 J
= l'r
4] = [ .;; - Y{l
_fl _fl
2 2 2
[
:D [ l 2] [.;2
We have HA = R and, setting Q = H⁻¹ = Hᵀ = H, this yields the following QR-decomposition of the matrix A:
A=[ 1 2]=[ :f-
- 1 3 _fl
2
-X}] [h - ]- R
-{/- 0 -Q
22. Let v = (2, 1), w = (||v||, 0) = (√5, 0), and a = v − w = (2 − √5, 1). Then the Householder reflection about a⊥ maps v into w. The standard matrix for this reflection is
Now let Q, = iJ] = [: 'i
OJ _ 2 [(2 - '' 5)
2
2- v'5l
I 10- 4/S 2 - ,f5 1 J
=
OJ - l = [
1
5 5 s :;
and multiply the given matrix on the left by Q1:
_M
5
Js]=R
_w o 1 o o
s
and, setting Q = Q1ᵀ = Q1, we have the following QR-decomposition of the matrix A:
y] 0
0] [1
- 0
=QR
This yields
23. Referring to the construction in Exercise 21, the second entry in the first column of A can be zeroed out by multiplying by the orthogonal matrix Q1 as indicated below:
1] [v-2
3 = 0
5 0
2'2 -../2]
0 - 2J2
4 5
Although the matrix on the right is not upper triangular, it can be made so by interchanging the 2nd and 3rd rows, and this can be achieved by interchanging the corresponding rows of Q1. This yields
[
fl -f.
QzA =
2
0 0
_fl
2 2
=
0 0 4 5 0
-../2]

=R
and finally, setting Q = Q2⁻¹ = Q2ᵀ, we obtain the following QR-decomposition of A:
[
1 2
A = - 1 - 2
0 1
ll [ 1
=
0 - v;l [V2
0 _ 1.1'2 0
2
1 0 0
2/2
4

0 - 2..;'2
= QR
24. Referring to the construction in Exercise 18(a), the second entry in the first column of A can be zeroed out by multiplying by the orthogonal matrix Q1 as indicated below:
:If .;_; : 0 0 [l 2 1] ' -L.}-1
QI
A = {/ - Yf : 0 0 1 3 - 2 o: - :If- 3'{2 ....
-o ---o-: -1---6 o - l o r= oi cY o :
o o : o 1 o
0 1
\ ol o , 1 )
- / .
From a similar construction, the third entry in the second column of Q1A can be zeroed out by multiplying by the orthogonal matrix Q2 as indicated below:
__ - [v-2 -V[l
I r.; - I
0 I - - y6 I 0 0 ,/2 3 f2
Q
Q
I I 3 . -- :!..lL!
> 1 ; 1 - ' ' 2 2 =
... I r.:. t
- ' -
,J2 w
_ill
2 2
0
.i
_ _.&
2 2
0 0 "' v'3
0 0 1
From a third such construction, the fourth entry in the third column of Q2Q1A can be zeroed out by multiplying by the orthogonal matrix Q3 as indicated below:
sV2 - :1
-2- 2
v'6
2 2 =
0 - ./3
0
\12
0
0
0
_ill
2 2
:& _:L
2 2 = R
0 2
0 0
Finally, setting Q = (Q3Q2Q1)⁻¹ = Q1ᵀQ2ᵀQ3ᵀ = Q1Q2Q3, we obtain the following QR-decomposition of A:
:1 - .i 1 - :il
2 6 2 6
::11 1 :il
2 6 -2 6
0 1 :il
J -2 6
I fl
2 2
_ _ill

A=
0 0
0
......
25. Since A = QR, the system Ax = b is equivalent to the upper triangular system Rx = Qᵀb, which is:
[
v'3 - v'3 -33] [:ttl [73
0 .../2 0 :t2 = 0
0 0
4
X3 .:i
7s 3
Solving this system by back substitution yields x3 = 1, x2 = 1, x1 = 1.
26. (a) Since aaᵀx = a(aᵀx) = (aᵀx)a, we have Hx = (I − (2/aᵀa)aaᵀ)x = x − (2(aᵀx)/(aᵀa))a.
(b) Using the formula in part (a), with a = (1, 1, 1) and x = (3, 4, 1), we have Hx = x − (2(aᵀx)/(aᵀa))a = (3, 4, 1) − (16/3)(1, 1, 1) = (−7/3, −4/3, −13/3). Forming H = I − (2/3)aaᵀ explicitly and computing Hx directly gives the same result.
DISCUSSION AND DISCOVERY
D1. The standard matrix for the reflection of R³ about e1⊥ is (as should be expected)
H = I − 2e1e1ᵀ = [−1 0 0; 0 1 0; 0 0 1]
and similarly for the others.
D2. The standard matrix for the reflection of R² about the line y = mx is obtained by taking a = (m, −1) (so that a⊥ is the line y = mx), which gives
H = I − (2/aᵀa) aaᵀ = (1/(1 + m²)) [1 − m²  2m; 2m  m² − 1]
D3. If s = √53, then ||w|| = ||v|| and the Householder reflection about (v − w)⊥ maps v into w.
D4. Since ||w|| = ||v||, the Householder reflection about (v − w)⊥ maps v into w. We have v − w = (−8, 12), and so (v − w)⊥ is the line −8x + 12y = 0, or y = (2/3)x.
D5. Let a = v − w = (1, 2, 2) − (0, 0, 3) = (1, 2, −1). Then the reflection of R³ about a⊥ maps v into w, and the plane a⊥ corresponds to x + 2y − z = 0, or z = x + 2y.
WORKING WITH PROOFS
P2. To show that H = I − (2/aᵀa)aaᵀ is orthogonal we must show that Hᵀ = H⁻¹. This follows from
HᵀH = (I − (2/aᵀa)aaᵀ)(I − (2/aᵀa)aaᵀ) = I − (2/aᵀa)aaᵀ − (2/aᵀa)aaᵀ + (4/(aᵀa)²)aaᵀaaᵀ = I − (4/aᵀa)aaᵀ + (4/aᵀa)aaᵀ = I
where we have used the fact that aaᵀaaᵀ = a(aᵀa)aᵀ = (aᵀa)aaᵀ.
P3. One of the features of the Gram-Schmidt process is that span{q1, q2, …, qj} = span{w1, w2, …, wj} for each j = 1, 2, …, k. In the expansion of wj in terms of q1, q2, …, qj, the coefficient of qj must be nonzero, for otherwise wj would be in span{q1, q2, …, qj−1} = span{w1, w2, …, wj−1}, which would mean that {w1, w2, …, wj} is a linearly dependent set.
P4. If A = QR is a QR-decomposition of A, then Q = AR⁻¹. From this it follows that the columns of Q belong to the column space of A. In particular, if R⁻¹ = [sij], then from Q = AR⁻¹ it follows that
cj(Q) = Acj(R⁻¹) = s1jc1(A) + s2jc2(A) + ⋯ + skjck(A)
for each j = 1, 2, …, k. Finally, since dim(col(A)) = k and the vectors c1(Q), c2(Q), …, ck(Q) are linearly independent, it follows that they form a basis for col(A).
EXERCISE SET 7.11
1. (a) We have w = 3v1 − 7v2; thus (w)B = (3, −7) and [w]B = [3; −7].
(b) h
1 1 I
[
2 a3) [cc2'] -- [11)' and
T e vect or equation c
1
v
1
+ c
2
v
2
= w is equiva. ent to t 1e inear system
- 4
the 1:.olut.ion of this system is c
1
= 1
8
, c2 =
1
3
4
. Thus (w ) B = {fa, f4) and lwl o = (
2. (a) (w)B = (−2, 5)
(b) (w)B = (1, 1)
3. The vector equation c1v1 + c2v2 + c3v3 = w is equivalent to an upper triangular linear system. Solving this system by back substitution yields c3 = 1, c2 = −2, c1 = 3. Thus (w)B = (3, −2, 1) and [w]B = [3; −2; 1].
4. The vector equation c1v1 + c2v2 + c3v3 = w is equivalent to a 3 × 3 linear system. Solving this system by row reduction yields c1 = −2, c2 = 0, c3 = 1. Thus (w)B = (−2, 0, 1).
5. If (u)B = (7, −2, 1), then u = 7v1 − 2v2 + v3 = 7(1, 0, 0) − 2(2, 2, 0) + (3, 3, 3) = (6, −1, 3).
6. If (u)B = (8, −5, 4), then u = 8v1 − 5v2 + 4v3 = 8(1, 2, 3) − 5(−4, 5, 6) + 4(7, −8, 9) = (56, −41, 30).
7. (w}B = (w v 1 , w v2) = = ..
8. (w)B = (w · v1, w · v2, w · v3) = (0, −2, 1)
9. (w)J =(W Vt Wv
2
Wv
3
)=(....Q... _..2_ _hQ y])
' ' -/2' ,/3' v6 2 ' 3 6
11. (a) We have u = v1 + = v = - v1 + 4v2 =
1
;).
(b) Using Theorem 7.11.2: ||u|| = ||(u)B|| = √((1)² + (1)²) = √2, ||v|| = ||(v)B|| = √((−1)² + (4)²) = √17, and u · v = (u)B · (v)B = (1)(−1) + (1)(4) = 3.
Computing directly llull =

4-

= /s = /2, llvll = Je;p + e;p = !iii= Jl7.
and u v = (t)(

+ (-!)( ;
6
) = = 3.
12. (a) We have u = −2v1 + v2 + 2v3 and v = 3v1 + 0v2 − 2v3.
(b) ||u|| = ||(u)B|| = √((−2)² + (1)² + (2)²) = 3, ||v|| = ||(v)B|| = √((3)² + (0)² + (−2)²) = √13, and u · v = (u)B · (v)B = (−2)(3) + (1)(0) + (2)(−2) = −10.
ll u ll = + (-D2 + = fj = 3, llv ll = = J = Jf3,
and u v = -!) + ( -i)(
1
3
) + ( =-
9
9
= -10.
13. ||u|| = ||(u)_B|| = sqrt((-1)^2 + 2^2 + 1^2 + 3^2) = sqrt(15)
||v|| = ||(v)_B|| = sqrt(0^2 + (-3)^2 + 1^2 + 5^2) = sqrt(35)
||w|| = ||(w)_B|| = sqrt(2^2 + (-4)^2 + 3^2 + 1^2) = sqrt(30)
||v + w|| = ||(v)_B + (w)_B|| = ||(2, -7, 4, 6)|| = sqrt(2^2 + (-7)^2 + 4^2 + 6^2) = sqrt(105)
||v - w|| = ||(v)_B - (w)_B|| = ||(-2, 1, -2, 4)|| = sqrt((-2)^2 + 1^2 + (-2)^2 + 4^2) = 5
v . w = (v)_B . (w)_B = (0)(2) + (-3)(-4) + (1)(3) + (5)(1) = 20
14. ||u|| = ||(u)_B|| = sqrt(0^2 + 0^2 + (-1)^2 + (-1)^2) = sqrt(2)
||v|| = ||(v)_B|| = sqrt(5^2 + 5^2 + (-2)^2 + (-2)^2) = sqrt(58)
||w|| = ||(w)_B|| = sqrt(3^2 + 0^2 + (-3)^2 + 0^2) = sqrt(18) = 3 sqrt(2)
||v + w|| = ||(v)_B + (w)_B|| = ||(8, 5, -5, -2)|| = sqrt(118)
||v - w|| = ||(v)_B - (w)_B|| = ||(2, 5, 1, -2)|| = sqrt(2^2 + 5^2 + 1^2 + (-2)^2) = sqrt(34)
v . w = (v)_B . (w)_B = (5)(3) + (5)(0) + (-2)(-3) + (-2)(0) = 21
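Because the bases in Exercises 13 and 14 are orthonormal, Theorem 7.11.2 lets norms and dot products be computed directly from the coordinate vectors. A minimal sketch in Python, using the coordinate vectors of Exercise 13 (the helper names are our own):

```python
import math

def norm(c):    # ||v|| computed from the B-coordinates of v (B orthonormal)
    return math.sqrt(sum(x * x for x in c))

def dot(c, d):  # v . w computed from B-coordinates
    return sum(x * y for x, y in zip(c, d))

u = (-1, 2, 1, 3)   # (u)_B from Exercise 13
v = (0, -3, 1, 5)   # (v)_B
w = (2, -4, 3, 1)   # (w)_B

print(sum(x * x for x in u))    # 15 -> ||u|| = sqrt(15)
print(dot(v, w))                # 20
print(norm(tuple(a - b for a, b in zip(v, w))))  # 5.0 -> ||v - w|| = 5
```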
15. Let B = {e1, e2} be the standard basis for R^2, and let B' = {v1, v2} be the basis corresponding to the x'y'-system described in Figure Ex-15. Then v1 = (1, 0) and v2 = (1/sqrt(2), 1/sqrt(2)), so P_{B'->B} = [[v1]_B | [v2]_B] = [1 1/sqrt(2); 0 1/sqrt(2)] and P_{B->B'} = (P_{B'->B})^{-1} = [1 -1; 0 sqrt(2)]. It follows that x'y'-coordinates are related to xy-coordinates by the equations x' = x - y and y' = sqrt(2) y. In particular:
(a) If (x, y) = (1, 1), then (x', y') = (0, sqrt(2)).
(b) If (x, y) = (1, 0), then (x', y') = (1, 0).
(c) If (x, y) = (0, 1), then (x', y') = (-1, sqrt(2)).
(d) If (x, y) = (a, b), then (x', y') = (a - b, sqrt(2) b).
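The relation x' = x - y, y' = sqrt(2) y can be checked numerically by inverting P_{B'->B}, whose columns are the x'y'-basis vectors written in xy-coordinates. A small sketch (the 2x2 inverse is written out by hand; function names are our own):

```python
import math

r2 = math.sqrt(2)
# Columns of P_{B'->B}: the x'y'-basis vectors expressed in xy-coordinates
P = [[1, 1 / r2],
     [0, 1 / r2]]

def inv2(M):
    """Inverse of a 2x2 matrix."""
    a, b = M[0]; c, d = M[1]
    det = a * d - b * c
    return [[d / det, -b / det], [-c / det, a / det]]

def to_prime(x, y):
    Q = inv2(P)  # Q = P_{B->B'}
    return (Q[0][0] * x + Q[0][1] * y, Q[1][0] * x + Q[1][1] * y)

print(to_prime(1, 1))  # approximately (0.0, 1.414...)
```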
16. Let B = {i, j} be the standard basis for R^2, and let B' = {u1, u2} be the basis for the x'y'-system described in Figure Ex-16. Then u1 = (sqrt(3)/2, 1/2) and u2 = (0, 1), so P_{B'->B} = [sqrt(3)/2 0; 1/2 1] and P_{B->B'} = (P_{B'->B})^{-1} = [2/sqrt(3) 0; -1/sqrt(3) 1]. Thus x'y'-coordinates are related to xy-coordinates by the equations x' = (2/sqrt(3)) x and y' = -(1/sqrt(3)) x + y. In particular:
(a) If (x, y) = (sqrt(3), 1), then (x', y') = (2, 0).
(b) If (x, y) = (1, 0), then (x', y') = (2/sqrt(3), -1/sqrt(3)).
(c) If (x, y) = (0, 1), then (x', y') = (0, 1).
(d) If (x, y) = (a, b), then (x', y') = ((2/sqrt(3)) a, -(1/sqrt(3)) a + b).
17. (b) The reduced row echelon form of [B | B'] is [I | P_{B'->B}]; the right-hand block is the transition matrix.
(c) Note that (P_{B'->B})^{-1} = P_{B->B'}.
(d) We have w = v1 - v2; thus [w]_B = [1; -1] and [w]_{B'} = P_{B->B'} [w]_B.
(e) We have w = 3e1 - 6e2; thus [w]_S = [3; -6] and [w]_B = P_{S->B} [w]_S.
18. (a)-(e) These parts are computed in the same way as the corresponding parts of Exercise 17.
19. (a) The reduced row echelon form of [B | I] is [I | P_{S->B}]; this yields P_{S->B} = [-46 6 9; 13 -5 -3; 5 -2 -1]. It is easy to check that P_{B->S} P_{S->B} = I; thus (P_{B->S})^{-1} = P_{S->B}.
(b) We have w = 5e1 - 3e2 + e3; thus [w]_S = [5; -3; 1] and [w]_B = P_{S->B} [w]_S = [-239; 77; 30].
(c) The reduced row echelon form of [B | w] is [I | [w]_B], which yields [w]_B = (-239, 77, 30); this agrees with the computation in part (b).
20. In each part the transition matrices between the two bases are read off from reduced row echelon forms: the reduced row echelon form of [B1 | B2] is [I | P_{B2->B1}], and that of [B2 | B1] is [I | P_{B1->B2}]; in particular P_{B1->B2} = [2 5; -1 -3], and (P_{B1->B2})^{-1} = P_{B2->B1}. In the remaining parts, w is first expressed in terms of one basis (for example w = 4v1 - 7v2, so that (w)_{B1} = (4, -7), or w = 3v1 - v2, so that (w)_{B1} = (3, -1)), and its coordinates relative to the other basis are then obtained by multiplying by the appropriate transition matrix.
21. (a) The reduced row echelon form of [B2 | B1] = [-6 -2 -2 | -3 -3 1; -6 -6 -3 | 0 2 6; 0 4 7 | -3 -1 -1] is [1 0 0 | 3/4 3/4 1/12; 0 1 0 | -3/4 -17/12 -17/12; 0 0 1 | 0 2/3 2/3]; thus P_{B1->B2} = [3/4 3/4 1/12; -3/4 -17/12 -17/12; 0 2/3 2/3].
(b) If w = (-5, 8, -5), we have (w)_{B1} = (1, 1, 1) and so [w]_{B2} = P_{B1->B2} [w]_{B1} = [19/12; -43/12; 4/3].
(c) The reduced row echelon form of [B2 | w] = [-6 -2 -2 | -5; -6 -6 -3 | 8; 0 4 7 | -5] is [1 0 0 | 19/12; 0 1 0 | -43/12; 0 0 1 | 4/3]; thus (w)_{B2} = (19/12, -43/12, 4/3), which agrees with the computation in part (b).
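The coordinate vector (w)_{B2} found in parts (b) and (c) can be double-checked by solving B2 c = w with exact rational arithmetic. A sketch using Gauss-Jordan elimination over Fraction (the solver below is generic; names are our own):

```python
from fractions import Fraction as F

def solve(A, b):
    """Solve A c = b by Gauss-Jordan elimination with exact fractions."""
    n = len(A)
    M = [[F(A[i][j]) for j in range(n)] + [F(b[i])] for i in range(n)]
    for col in range(n):
        piv = next(r for r in range(col, n) if M[r][col] != 0)
        M[col], M[piv] = M[piv], M[col]
        M[col] = [x / M[col][col] for x in M[col]]
        for r in range(n):
            if r != col and M[r][col] != 0:
                M[r] = [x - M[r][col] * y for x, y in zip(M[r], M[col])]
    return [row[n] for row in M]

# Columns of B2 from Exercise 21, and w = (-5, 8, -5)
B2 = [[-6, -2, -2],
      [-6, -6, -3],
      [ 0,  4,  7]]
print(solve(B2, [-5, 8, -5]))  # [Fraction(19, 12), Fraction(-43, 12), Fraction(4, 3)]
```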
22. (a) The reduced row echelon form of [B2 | B1] is [I | P_{B1->B2}]; the right-hand block is the transition matrix P_{B1->B2}.
(b) We have (w)_{B1} = (9, -9, -5), and so (w)_{B2} = P_{B1->B2} (w)_{B1}.
(c) The reduced row echelon form of [B2 | w] is [I | (w)_{B2}], which agrees with the computation in part (b).
23. The vector equation c1u1 + c2u2 + c3u3 = v1 is equivalent to a linear system whose solution gives the coordinate vector (v1)_{B1}; similar computations give (v2)_{B1} and (v3)_{B1}. These coordinate vectors form the columns of the transition matrix P_{B2->B1}. It is easy to check that (P_{B2->B1})^T P_{B2->B1} = I. Thus P_{B2->B1} is an orthogonal matrix and, since P_{B1->B2} = (P_{B2->B1})^{-1} = (P_{B2->B1})^T, the same is true of P_{B1->B2}.
24. (a) We have v1 = (0, 1) = 0e1 + 1e2 and v2 = (1, 0) = 1e1 + 0e2; thus P_{B->S} = [0 1; 1 0].
(b) If P = P_{B->S} then, since P is orthogonal, we have P^T = P^{-1} = (P_{B->S})^{-1} = P_{S->B}. Geometrically, this corresponds to the fact that reflection about the line y = x preserves length and thus is an orthogonal transformation.
25. (a) We have v1 = (cos 2t, sin 2t) and v2 = (sin 2t, -cos 2t); thus P_{B->S} = [cos 2t sin 2t; sin 2t -cos 2t].
(b) If P = P_{B->S} then, since P is orthogonal, we have P^T = P^{-1} = (P_{B->S})^{-1} = P_{S->B}. Geometrically, this corresponds to the fact that reflection about the line preserves length and thus is an orthogonal transformation.
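That the matrix in Exercise 25 is orthogonal for every angle can be spot-checked numerically: its columns (cos 2t, sin 2t) and (sin 2t, -cos 2t) are perpendicular unit vectors, so P^T P = I. A sketch at a sample angle (function names are our own):

```python
import math

def reflection_matrix(t):
    """Matrix of reflection about the line through the origin at angle t."""
    c, s = math.cos(2 * t), math.sin(2 * t)
    return [[c, s], [s, -c]]

def matmul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

P = reflection_matrix(math.pi / 7)
Pt = [[P[j][i] for j in range(2)] for i in range(2)]
I = matmul(Pt, P)
print(all(abs(I[i][j] - (1 if i == j else 0)) < 1e-12
          for i in range(2) for j in range(2)))  # True
```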
27. (a) If (x, y) = (-2, 6), then [x'; y'] = [cos(3 pi/4) sin(3 pi/4); -sin(3 pi/4) cos(3 pi/4)] [x; y] = (4 sqrt(2), -2 sqrt(2)).
(b) If (x, y) = (5, 2), then [x'; y'] = [cos(3 pi/4) sin(3 pi/4); -sin(3 pi/4) cos(3 pi/4)] [x; y] = (-3 sqrt(2)/2, -7 sqrt(2)/2).
28. (a), (b) As in Exercise 27, the x'y'-coordinates are obtained by multiplying [x; y] by the rotation matrix for the given angle.
29. (a), (b) The x'y'z'-coordinates are obtained by multiplying [x; y; z] by the rotation matrix; since the rotation is about the z-axis, the third coordinate is unchanged.
30. (a), (b) These are computed in the same way as Exercise 29.
31. We have [x'; y'; z'] = P1 [x; y; z] and [x''; y''; z''] = P2 [x'; y'; z']; thus [x''; y''; z''] = P2 P1 [x; y; z].
DISCUSSION AND DISCOVERY
D2. (a) Let B = {v1, v2, v3}, where v1 = (1, 1, 0), v2 = (1, 0, 2), and v3 = (0, 2, 1) correspond to the column vectors of the matrix P. Then, from Theorem 7.11.8, P is the transition matrix from B to the standard basis S = {e1, e2, e3}.
(b) If P is the transition matrix from S = {e1, e2, e3} to B = {w1, w2, w3}, then e1 = w1 + w2, e2 = w1 + 2w3, and e3 = 2w2 + w3. Solving these vector equations for w1, w2, and w3 in terms of e1, e2, and e3 results in
w1 = (4/5)e1 + (1/5)e2 - (2/5)e3 = (4/5, 1/5, -2/5)
w2 = (1/5)e1 - (1/5)e2 + (2/5)e3 = (1/5, -1/5, 2/5)
w3 = -(2/5)e1 + (2/5)e2 + (1/5)e3 = (-2/5, 2/5, 1/5)
Note that w1, w2, w3 correspond to the column vectors of the matrix P^{-1}.
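The vectors w1, w2, w3 found in part (b) can be verified exactly: they must satisfy the three defining equations e1 = w1 + w2, e2 = w1 + 2w3, and e3 = 2w2 + w3. A sketch with exact rational arithmetic (helper names are our own):

```python
from fractions import Fraction as F

w1 = (F(4, 5), F(1, 5), F(-2, 5))
w2 = (F(1, 5), F(-1, 5), F(2, 5))
w3 = (F(-2, 5), F(2, 5), F(1, 5))

def add(*vs):
    return tuple(sum(c) for c in zip(*vs))

def scale(k, v):
    return tuple(k * x for x in v)

# e1 = w1 + w2, e2 = w1 + 2*w3, e3 = 2*w2 + w3
print(add(w1, w2))            # (Fraction(1, 1), Fraction(0, 1), Fraction(0, 1))
print(add(w1, scale(2, w3)))  # e2
print(add(scale(2, w2), w3))  # e3
```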
D3. If B = {v1, v2, v3} and the given matrix is the transition matrix from B to the basis {(1, 1, 1), (1, 1, 0), (1, 0, 0)}, then its columns express v1, v2, v3 in terms of that basis: v1 = (1, 1, 1), v2 = 3(1, 1, 0) + (1, 0, 0) = (4, 3, 0), and v3 = 2(1, 1, 0) + (1, 0, 0) = (3, 2, 0).
D4. If [w]_B = w holds for every w, then the transition matrix from the standard basis S to the basis B is
P_{S->B} = [[e1]_B | [e2]_B | ... | [en]_B] = [e1 | e2 | ... | en] = I_n
and so B = S = {e1, e2, ..., en}.
D5. If [x - y]_B = 0, then [x]_B = [y]_B and so x = y.
WORKING WITH PROOFS
P1. If c1, c2, ..., ck are scalars, then (c1v1 + c2v2 + ... + ckvk)_B = c1(v1)_B + c2(v2)_B + ... + ck(vk)_B. Note also that (v)_B = 0 if and only if v = 0. It follows that c1v1 + c2v2 + ... + ckvk = 0 if and only if c1(v1)_B + c2(v2)_B + ... + ck(vk)_B = 0. Thus the vectors v1, v2, ..., vk are linearly independent if and only if (v1)_B, (v2)_B, ..., (vk)_B are linearly independent.
P2. The vectors v1, v2, ..., vk span R^n if and only if every vector v in R^n can be expressed as a linear combination of them, i.e., there exist scalars c1, c2, ..., ck such that v = c1v1 + c2v2 + ... + ckvk. Since (v)_B = c1(v1)_B + c2(v2)_B + ... + ck(vk)_B and the coordinate mapping v -> (v)_B is onto, it follows that the vectors v1, v2, ..., vk span R^n if and only if (v1)_B, (v2)_B, ..., (vk)_B span R^n.
P3. Since the coordinate map x -> [x]_B is onto, we have A[x]_B = C[x]_B for every x in R^n if and only if Ay = Cy for every y in R^n. Thus, using Theorem 3.4.4, we can conclude that A = C if and only if A[x]_B = C[x]_B for every x in R^n.
P4. Suppose B = {u1, u2, ..., un} is a basis for R^n. Then if v = a1u1 + a2u2 + ... + anun and w = b1u1 + b2u2 + ... + bnun, we have:
v + w = (a1u1 + a2u2 + ... + anun) + (b1u1 + b2u2 + ... + bnun) = (a1 + b1)u1 + (a2 + b2)u2 + ... + (an + bn)un
and cv = ca1u1 + ca2u2 + ... + canun. Thus (cv)_B = (ca1, ca2, ..., can) = c(a1, ..., an) = c(v)_B and (v + w)_B = (a1 + b1, ..., an + bn) = (a1, a2, ..., an) + (b1, b2, ..., bn) = (v)_B + (w)_B.
CHAPTER 8

Diagonalization

EXERCISE SET 8.1
1. We have [x]_B = [x1; x2], [Tx]_B = [2x1 - x2; x2], and [T]_B = [[Tv1]_B | [Tv2]_B] = [2 -1; 0 1]. Finally, we note that
[Tx]_B = [2x1 - x2; x2] = [2 -1; 0 1] [x1; x2] = [T]_B [x]_B
which is Formula (7).
2. We have [x]_B and [Tx]_B expressed in terms of x1 and x2, and [T]_B = [[Tv1]_B | [Tv2]_B]. Finally, we note that [Tx]_B = [T]_B [x]_B, which is Formula (7).
3. Let P = P_{B->S} where S = {e1, e2} is the standard basis. Then P = [[v1]_S | [v2]_S], and a direct computation shows that P [T]_B P^{-1} = [T].
4. Let P = P_{B->S} where S = {e1, e2} is the standard basis. Then P = [[v1]_S | [v2]_S] and P [T]_B P^{-1} = [T].
5. For every vector x in R^3, we can write x as a linear combination of v1, v2, v3; this gives [x]_B. Applying T and expressing the result in terms of B gives [Tx]_B and [T]_B = [[Tv1]_B | [Tv2]_B | [Tv3]_B]. Finally, we note that [Tx]_B = [T]_B [x]_B, which is Formula (7).
6. For every vector x in R^3, we proceed as in Exercise 5: compute [x]_B, [Tx]_B, and [T]_B = [[Tv1]_B | [Tv2]_B | [Tv3]_B], and note that [Tx]_B = [T]_B [x]_B, which is Formula (7).
7. Let P = P_{B->S} where S is the standard basis. Then P = [[v1]_S | [v2]_S | [v3]_S], and a direct computation shows that P [T]_B P^{-1} = [T].
8. Let P = P_{B->S} where S is the standard basis. Then P = [[v1]_S | [v2]_S | [v3]_S] and P [T]_B P^{-1} = [T].
9. Expressing Tv1 and Tv2 in terms of the basis B gives [T]_B, and expressing Tv1' and Tv2' in terms of B' gives [T]_{B'}. Since v1 = v1' + v2' and v2 = v1' - v2', we have P = P_{B->B'} = [1 1; 1 -1], and a direct computation confirms that P [T]_B P^{-1} = [T]_{B'}.
10. As in Exercise 9, expressing the images of the basis vectors in terms of B and B' gives [T]_B and [T]_{B'}, and with P = P_{B->B'} we have P [T]_B P^{-1} = [T]_{B'}.
11. The equation P [T]_B P^{-1} = [T]_{B'} is equivalent to [T]_B = P^{-1} [T]_{B'} P. Thus, from Exercise 9, we have [T]_B = P^{-1} [T]_{B'} P where, as before, P = P_{B->B'}.
12. The equation P [T]_B P^{-1} = [T]_{B'} is equivalent to [T]_B = P^{-1} [T]_{B'} P. Thus, from Exercise 10, we have [T]_B = P^{-1} [T]_{B'} P where, as before, P = P_{B->B'}.
13. The standard matrix is [T] = [1 -2; 0 1] and, from Exercise 9, we have [T]_B. These matrices are related by the equation P [T]_B P^{-1} = [T], where P = [v1 | v2].
14. The standard matrix is [T] and, from Exercise 10, we have [T]_B. These matrices are related by the equation P [T]_B P^{-1} = [T], where P = [v1 | v2].
15. (a) For every x = (x, y) in R^2, we can express x and Tx in terms of the basis B.
(b) In agreement with Formula (7), we have [Tx]_B = [T]_B [x]_B.
16. (a) For every x = (x, y, z) in R^3, we can express x and Tx in terms of the basis B.
(b) In agreement with Formula (7), we have [Tx]_B = [T]_B [x]_B.
17. For every vector x in R^2, we can write x = c1v1 + c2v2 where c1 and c2 are linear functions of x1 and x2. Computing [Tx]_{B'} and [T]_{B',B} = [[Tv1]_{B'} | [Tv2]_{B'}], we find, in agreement with Formula (26), that [Tx]_{B'} = [T]_{B',B} [x]_B.
18. For every vector x in R^2, we proceed as in Exercise 17: compute [x]_B, [Tx]_{B'}, and [T]_{B',B} = [[Tv1]_{B'} | [Tv2]_{B'}], and note, in agreement with Formula (26), that [Tx]_{B'} = [T]_{B',B} [x]_B.
19. For every vector x in R^3, we can write x = c1v1 + c2v2 + c3v3; computing [x]_B, [Tx]_{B'}, and [T]_{B',B} then shows, in agreement with Formula (26), that [Tx]_{B'} = [T]_{B',B} [x]_B.
20. For every vector x in R^3, the computation parallels Exercise 19 and again confirms Formula (26).
21. (a) [Tv1]_B = [1; -2] and [Tv2]_B = [3; 5].
(b) Tv1 = v1 - 2v2 and Tv2 = 3v1 + 5v2.
(c) For every vector x in R^2, we can express x in terms of v1 and v2; using the linearity of T, this leads to T(x1, x2) = ((19/5)x1 - (3/5)x2, (22/5)x1 + (11/5)x2).
(d) Using the formula obtained in part (c), we have T(1, 1) = (16/5, 33/5).
22. (c) For every vector x in R^4, we can write x = c1v1 + c2v2 + c3v3 + c4v4, where each cj is a linear function of x1, x2, x3, x4. Thus, using the linearity of T,
Tx = c1 Tv1 + c2 Tv2 + c3 Tv3 + c4 Tv4
which leads to an explicit formula for T.
(d) Using the formula obtained in part (c), we can evaluate T(2, 2, 0, 0).
23. If T is the identity operator then, since Te1 = e1 and Te2 = e2, we have [T] = I. Similarly, [T]_B = [T]_{B'} = I. On the other hand, expressing Tv1 = v1 and Tv2 = v2 in terms of the basis B' yields a matrix [T]_{B',B} that is not the identity.
24. If T is the identity operator then [T] = [T]_B = [T]_{B'} = I. On the other hand, expressing Tv1 = v1, Tv2 = v2, and Tv3 = v3 in terms of the basis B' yields a matrix [T]_{B',B} that is not the identity.
25. Let B = {v1, v2, ..., vn} and B' = {u1, u2, ..., um} be bases for R^n and R^m respectively. Then, if T is the zero transformation, we have Tvi = 0 for each i = 1, 2, ..., n. Thus [T]_{B',B} = [0 | 0 | ... | 0] is the zero matrix.
26. There is a scalar k > 0 such that T(x) = kx for all x in R^n. Thus, if B = {v1, v2, ..., vn} is any basis for R^n, we have T(vj) = kvj for all j = 1, 2, ..., n and so [T]_B = diag(k, k, ..., k) = kI.
27., 28. The matrices [T] are read off directly from the images of the standard basis vectors.
29. We have Tv1 = -4v1 and Tv2 = 6v2; thus [T]_B = [-4 0; 0 6]. From this we see that the effect of the operator T is to stretch the v1-component of a vector by a factor of 4 and reverse its direction, and to stretch the v2-component by a factor of 6. If the xy-coordinate axes are rotated 45 degrees clockwise to produce an x'y'-coordinate system whose axes are aligned with the directions of the vectors v1 and v2, then the effect is: stretch by a factor of 4 in the x'-direction, reflect about the y'-axis, and stretch by a factor of 6 in the y'-direction.
30. We have Tv1 = 2v1, and Tv3 = -sqrt(3) v2 - v3. Thus
[T]_B = [2 0 0; 0 -1 -sqrt(3); 0 sqrt(3) -1] = 2 [1 0 0; 0 -1/2 -sqrt(3)/2; 0 sqrt(3)/2 -1/2]
From this we see that the effect of the operator T is to rotate vectors counterclockwise by an angle of 120 degrees about the v1-axis (looking toward the origin from the tip of v1), then stretch by a factor of 2.
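The reading of [T]_B in Exercise 30 as a 120-degree rotation about the v1-axis followed by a stretch by 2 can be confirmed numerically:

```python
import math

theta = 2 * math.pi / 3   # 120 degrees
c, s = math.cos(theta), math.sin(theta)

# rotation about the first axis, then scaling by 2
R = [[1, 0, 0],
     [0, c, -s],
     [0, s,  c]]
T = [[2 * R[i][j] for j in range(3)] for i in range(3)]

expected = [[2, 0, 0],
            [0, -1, -math.sqrt(3)],
            [0, math.sqrt(3), -1]]
print(all(abs(T[i][j] - expected[i][j]) < 1e-12
          for i in range(3) for j in range(3)))  # True
```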
DISCUSSION AND DISCOVERY
D1. Since Tv1 = v2 and Tv2 = v1, the matrix of T with respect to the basis B = {v1, v2} is [T]_B = [0 1; 1 0]. On the other hand, since e1 = (1/2)v1 and e2 = (1/4)v2, we have Te1 = (1/2)Tv1 = (1/2)v2 = 2e2 and Te2 = (1/4)Tv2 = (1/4)v1 = (1/2)e1; thus the standard matrix for T is [T] = [0 1/2; 2 0].
D2. The appropriate diagram is the commutative diagram relating the B- and B'-coordinate maps and multiplication by the corresponding matrices (details omitted).
D3. The appropriate diagram is the corresponding commutative diagram for a transformation between two different spaces (details omitted).
D4. (a) True. We have [T1(x)]_{B'} = [T1]_{B',B} [x]_B = [T2]_{B',B} [x]_B = [T2(x)]_{B'}; thus T1(x) = T2(x).
(b) False. For example, the zero operator has the same matrix (the zero matrix) with respect to any basis for R^2.
(c) True. If B = {v1, v2, ..., vn} and [T]_B = I, then T(vk) = vk for each k = 1, 2, ..., n and it follows from this that T(x) = x for all x.
(d) False. For example, let B = {e1, e2}, B' = {e2, e1}, and T(x, y) = (y, x). Then [T]_{B',B} = I_2 but T is not the identity operator.
D5. One reason is that the representation of the operator in the other basis may more clearly reflect the geometric effect of the operator.
WORKING WITH PROOFS
P1. If x and y are vectors and c is a scalar then, since T is linear, we have
c[x]_B = [cx]_B -> [T(cx)]_B = [cT(x)]_B = c[T(x)]_B
[x]_B + [y]_B = [x + y]_B -> [T(x + y)]_B = [T(x) + T(y)]_B = [T(x)]_B + [T(y)]_B
This shows that the mapping [x]_B -> [T(x)]_B is linear.
P2. If x is in R^n and y is in R^k, then we have [T1(x)]_{B'} = [T1]_{B',B} [x]_B and [T2(y)]_{B''} = [T2]_{B'',B'} [y]_{B'}. Thus
[T2(T1(x))]_{B''} = [T2]_{B'',B'} [T1(x)]_{B'} = [T2]_{B'',B'} [T1]_{B',B} [x]_B
and from this it follows that [T2 o T1]_{B'',B} = [T2]_{B'',B'} [T1]_{B',B}.
P3. If x is a vector in R^n, then [T]_B [x]_B = [Tx]_B = 0 if and only if Tx = 0. Thus, if T is one-to-one, it follows that [T]_B [x]_B = 0 if and only if [x]_B = 0, i.e., that [T]_B is an invertible matrix. Furthermore, since [T^{-1}]_B [T]_B = [T^{-1} o T]_B = [I]_B = I, we have [T^{-1}]_B = ([T]_B)^{-1}.
P4. [T]_B = [[T(v1)]_B | [T(v2)]_B | ... | [T(vn)]_B] = [T]_{B,B}
EXERCISE SET 8.2
1. We have tr(A) = 3 and tr(B) = -1; thus A and B are not similar.
2. We have det(A) = 18 and det(B) = 14; thus A and B are not similar.
3. We have rank(A) = 3 and rank(B) = 2; thus A and B are not similar.
4. We have rank(A) = 1 and rank(B) = 2; thus A and B are not similar.
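Exercises 1-4 rest on the fact that trace, determinant, and rank are similarity invariants: if B = P^{-1}AP, they agree for A and B. A sketch with a made-up 2x2 pair (the matrices here are illustrative, not the ones from the exercises):

```python
from fractions import Fraction as F

def matmul(A, B):
    n = len(A)
    return [[sum(A[i][k] * B[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

A = [[F(2), F(1)], [F(0), F(3)]]
P = [[F(1), F(1)], [F(1), F(2)]]       # det P = 1, so P is invertible
Pinv = [[F(2), F(-1)], [F(-1), F(1)]]  # inverse of P, written out by hand
B = matmul(matmul(Pinv, A), P)         # B is similar to A

tr = lambda M: M[0][0] + M[1][1]
det = lambda M: M[0][0] * M[1][1] - M[0][1] * M[1][0]
print(tr(A) == tr(B), det(A) == det(B))  # True True
```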
5. (a) The size of the matrix corresponds to the degree of its characteristic polynomial; so in this case we have a 5x5 matrix. The eigenvalues of the matrix with their algebraic multiplicities are L = 0 (multiplicity 1), L = -1 (multiplicity 2), and L = 1 (multiplicity 2). The eigenspace corresponding to L = 0 has dimension 1, and the eigenspaces corresponding to L = -1 or L = 1 have dimension 1 or 2.
(b) The matrix is 11x11 with eigenvalues L = -3 (multiplicity 1), L = -1 (multiplicity 3), and L = 8 (multiplicity 7). The eigenspace corresponding to L = -3 has dimension 1; the eigenspace corresponding to L = -1 has dimension 1, 2, or 3; and the eigenspace corresponding to L = 8 has dimension 1, 2, 3, 4, 5, 6, or 7.
6. (a) The matrix is 5x5 with eigenvalues L = 0 (multiplicity 1), L = 1 (multiplicity 1), L = -2 (multiplicity 1), and L = 3 (multiplicity 2). The eigenspaces corresponding to L = 0, L = 1, and L = -2 each have dimension 1. The eigenspace corresponding to L = 3 has dimension 1 or 2.
(b) The matrix is 6x6 with eigenvalues L = 0 (multiplicity 2), L = 6 (multiplicity 1), and L = 2 (multiplicity 3). The eigenspace corresponding to L = 6 has dimension 1; the eigenspace corresponding to L = 0 has dimension 1 or 2; and the eigenspace corresponding to L = 2 has dimension 1, 2, or 3.
7. Since A is triangular, its characteristic polynomial is p(L) = (L - 1)(L - 1)(L - 2) = (L - 1)^2 (L - 2). Thus the eigenvalues of A are L = 1 and L = 2, with algebraic multiplicities 2 and 1 respectively. The eigenspace corresponding to L = 1 is the solution space of the system (I - A)x = 0; the general solution of this system is a single-parameter family, so the eigenspace is 1-dimensional and L = 1 has geometric multiplicity 1. The eigenspace corresponding to L = 2 is the solution space of the system (2I - A)x = 0; its solution space is also a single-parameter family, so the eigenspace is 1-dimensional and L = 2 also has geometric multiplicity 1.
8. The eigenvalues of A are L = 1, L = 3, and L = 5, each with algebraic multiplicity 1 and geometric multiplicity 1.
9. The characteristic polynomial of A is p(L) = det(LI - A) = (L - 5)^2 (L - 3). Thus the eigenvalues of A are L = 5 and L = 3, with algebraic multiplicities 2 and 1 respectively. The eigenspace corresponding to L = 5 is the solution space of the system (5I - A)x = 0; the general solution is a single-parameter family, so the eigenspace is 1-dimensional and L = 5 has geometric multiplicity 1. The eigenspace corresponding to L = 3 is the solution space of the system (3I - A)x = 0; its general solution is also a single-parameter family, so the eigenspace is 1-dimensional and L = 3 also has geometric multiplicity 1.
10. The characteristic polynomial of A is (L + 1)(L - 3)^2. Thus the eigenvalues of A are L = -1 and L = 3, with algebraic multiplicities 1 and 2 respectively. The eigenspace corresponding to L = -1 is 1-dimensional, and so L = -1 has geometric multiplicity 1. The eigenspace corresponding to L = 3 is the solution space of the system (3I - A)x = 0; the general solution is a two-parameter family, so the eigenspace is 2-dimensional and L = 3 has geometric multiplicity 2.
11. The characteristic polynomial of A is p(L) = L^3 + 3L^2 = L^2 (L + 3); thus the eigenvalues are L = 0 and L = -3, with algebraic multiplicities 2 and 1 respectively. The rank of the matrix
0I - A = -A = [1 1 -1; 1 1 -1; -1 -1 1]
is clearly 1 since each of the rows is a scalar multiple of the 1st row. Thus nullity(0I - A) = 3 - 1 = 2, and this is the geometric multiplicity of L = 0. On the other hand, the matrix
-3I - A = [-2 1 -1; 1 -2 -1; -1 -1 -2]
has rank 2 since its reduced row echelon form is [1 0 1; 0 1 1; 0 0 0]. Thus nullity(-3I - A) = 3 - 2 = 1, and this is the geometric multiplicity of L = -3.
12. The characteristic polynomial of A is (L - 1)(L^2 - 2L + 2); thus L = 1 is the only real eigenvalue of A. The reduced row echelon form of the matrix 1I - A shows that the rank of 1I - A is 2, so the geometric multiplicity of L = 1 is nullity(1I - A) = 3 - 2 = 1.
13. The characteristic polynomial of A is p(L) = L^3 - 11L^2 + 39L - 45 = (L - 5)(L - 3)^2; thus the eigenvalues are L = 5 and L = 3, with algebraic multiplicities 1 and 2 respectively. The matrix 5I - A has rank 2, so nullity(5I - A) = 3 - 2 = 1, and this is the geometric multiplicity of L = 5. On the other hand, the matrix 3I - A has rank 1 since each of its rows is a scalar multiple of the 1st row. Thus nullity(3I - A) = 3 - 1 = 2, and this is the geometric multiplicity of L = 3. It follows that A has 1 + 2 = 3 linearly independent eigenvectors and thus is diagonalizable.
14. The characteristic polynomial of A is (L + 2)(L - 1)^2; thus the eigenvalues are L = -2 and L = 1, with algebraic multiplicities 1 and 2 respectively. The matrix -2I - A has rank 2, and the matrix 1I - A has rank 1. Thus L = -2 has geometric multiplicity 1, and L = 1 has geometric multiplicity 2. It follows that A has 1 + 2 = 3 linearly independent eigenvectors and thus is diagonalizable.
15. The characteristic polynomial of A is p(L) = L^2 - 3L + 2 = (L - 1)(L - 2); thus A has two distinct eigenvalues, L = 1 and L = 2. The eigenspace corresponding to L = 1 is obtained by solving the system (I - A)x = 0; the general solution is x = t [4/5; 1]. Thus, taking t = 5, we see that p1 = [4; 5] is an eigenvector for L = 1. Similarly, p2 = [3; 4] is an eigenvector for L = 2. Finally, the matrix P = [p1 | p2] = [4 3; 5 4] has the property that
P^{-1} A P = [4 -3; -5 4] [-14 12; -20 17] [4 3; 5 4] = [1 0; 0 2]
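The conclusion of Exercise 15 can be verified with exact arithmetic: with A = [-14 12; -20 17] and P = [4 3; 5 4], the product P^{-1}AP is diagonal with the eigenvalues on the diagonal. A sketch:

```python
from fractions import Fraction as F

def matmul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

A = [[F(-14), F(12)], [F(-20), F(17)]]
P = [[F(4), F(3)], [F(5), F(4)]]
Pinv = [[F(4), F(-3)], [F(-5), F(4)]]  # det P = 1, so the inverse is the adjugate

D = matmul(matmul(Pinv, A), P)
print(D)  # [[Fraction(1, 1), Fraction(0, 1)], [Fraction(0, 1), Fraction(2, 1)]]
```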
16. The characteristic polynomial of A is (L - 1)(L + 1); thus A has two distinct eigenvalues, L1 = 1 and L2 = -1. Choosing corresponding eigenvectors p1 and p2, the matrix P = [p1 | p2] has the property that P^{-1} A P = [1 0; 0 -1].
17. The characteristic polynomial of A is p(L) = L(L - 1)(L - 2); thus A has three distinct eigenvalues, L = 0, L = 1, and L = 2. The eigenspaces are obtained by solving (0I - A)x = 0, (I - A)x = 0, and (2I - A)x = 0 respectively. Choosing one eigenvector from each eigenspace gives an invertible matrix P whose columns are linearly independent eigenvectors, and P^{-1} A P = diag(0, 1, 2).
18. The characteristic polynomial of A is (L - 2)(L - 3)^2; thus the eigenvalues of A are L = 2 and L = 3. The vector v1 is an eigenvector corresponding to L = 2, and v2 and v3 are linearly independent eigenvectors corresponding to L = 3. The matrix P = [v1 | v2 | v3] has the property that P^{-1} A P = diag(2, 3, 3).
19. The characteristic polynomial of A is p(L) = L^3 - 6L^2 + 11L - 6 = (L - 1)(L - 2)(L - 3); thus A has three distinct eigenvalues L1 = 1, L2 = 2, and L3 = 3. Choosing corresponding eigenvectors v1, v2, v3, the matrix P = [v1 | v2 | v3] is invertible and P^{-1} A P = diag(1, 2, 3).
Note. The diagonalizing matrix P is not unique; it depends on the choice (and the order) of the eigenvectors. This is just one possibility.
20. The characteristic polynomial of A is p(L) = L^3 - 4L^2 + 5L - 2 = (L - 2)(L - 1)^2; thus A has two distinct eigenvalues, L = 2 and L = 1. The general solution of (2I - A)x = 0 is a single-parameter family, which shows that the eigenspace corresponding to L = 2 has dimension 1. Similarly, the general solution of (I - A)x = 0 is a single-parameter family, which shows that the eigenspace corresponding to L = 1 also has dimension 1. It follows that the matrix A is not diagonalizable since it has only two linearly independent eigenvectors.
21. The characteristic polynomial of A is p(L) = (L - 5)^3; thus A has one eigenvalue, L = 5, which has algebraic multiplicity 3. The eigenspace corresponding to L = 5 is obtained by solving the system (5I - A)x = 0; the general solution is a single-parameter family, which shows that the eigenspace has dimension 1, i.e., the eigenvalue has geometric multiplicity 1. It follows that A is not diagonalizable since the sum of the geometric multiplicities of its eigenvalues is less than 3.
22. The characteristic polynomial of A is L^2 (L - 1); thus the eigenvalues of A are L = 0 and L = 1. The eigenspace corresponding to L = 0 has dimension 2, and the vectors v1 and v2 form a basis for this space. The vector v3 forms a basis for the eigenspace corresponding to L = 1. Thus A is diagonalizable and the matrix P = [v1 | v2 | v3] has the property that P^{-1} A P = diag(0, 0, 1).
23. The characteristic polynomial of A is p(L) = (L + 2)^2 (L - 3)^2; thus A has two eigenvalues, L = -2 and L = 3, each of which has algebraic multiplicity 2. The eigenspace corresponding to L = -2 is obtained by solving the system (-2I - A)x = 0; the general solution is a two-parameter family, which shows that the eigenspace has dimension 2, i.e., that the eigenvalue L = -2 has geometric multiplicity 2. On the other hand, the general solution of (3I - A)x = 0 is a single-parameter family, and so L = 3 has geometric multiplicity 1. It follows that A is not diagonalizable since the sum of the geometric multiplicities of its eigenvalues is less than 4.
24. The characteristic polynomial of A is p(L) = (L + 2)^2 (L - 3)^2; thus A has two eigenvalues, L = -2 and L = 3, each of algebraic multiplicity 2. The vectors v1 and v2 form a basis for the eigenspace corresponding to L = -2, and the vectors v3 and v4 form a basis for the eigenspace corresponding to L = 3. Thus A is diagonalizable and the matrix P = [v1 | v2 | v3 | v4] has the property that P^{-1} A P = diag(-2, -2, 3, 3).
25. If the matrix A is upper triangular with 1's on the main diagonal, then its characteristic polynomial is p(L) = (L - 1)^n and L = 1 is the only eigenvalue. Thus, in order for A to be diagonalizable, the system (I - A)x = 0 must have n linearly independent solutions. But, if this is true, then (I - A)x = 0 for every vector x in R^n and so I - A is the zero matrix, i.e., A = I.
26. If A is a 3x3 matrix with a three-dimensional eigenspace, then A has only one eigenvalue, L = L1, which is of geometric multiplicity 3. In other words, the eigenspace corresponding to L1 is all of R^3. It follows that Ax = L1 x for all x in R^3, and so A = L1 I is a diagonal matrix.
27. If C is similar to A then there is an invertible matrix P such that C = P^{-1}AP. It follows that if A is invertible, then C is invertible since it is a product of invertible matrices. Similarly, since PCP^{-1} = A, the invertibility of C implies the invertibility of A.
28. If P = [p1 | p2 | ... | pn], then AP = [Ap1 | ... | Apn] and PD = [L1 p1 | L2 p2 | ... | Ln pn] where L1, L2, ..., Ln are the diagonal entries of D. Thus Apk = Lk pk for each k = 1, 2, ..., n, i.e., Lk is an eigenvalue of A and pk is an eigenvector corresponding to Lk.
29. The standard matrix of the linear operator T is A, and the characteristic polynomial of A is p(L) = L^3 + 6L^2 + 9L = L(L + 3)^2. Thus the eigenvalues of T are L = 0 and L = -3, with algebraic multiplicities 1 and 2 respectively. Since L = 0 has algebraic multiplicity 1, its geometric multiplicity is also 1. The eigenspace associated with L = -3 is found by solving (-3I - A)x = 0; the general solution is a two-parameter family, so L = -3 has geometric multiplicity 2. It follows that T is diagonalizable since the sum of the geometric multiplicities of its eigenvalues is 3.
30. The standard matrix of the operator T is A, and the characteristic polynomial of A is (L + 2)(L - 1)^2. Thus the eigenvalues of T are L = -2 and L = 1, with algebraic multiplicities 1 and 2 respectively. Since L = -2 has algebraic multiplicity 1, its geometric multiplicity is also 1. The eigenspace associated with L = 1 is found by solving the system (I - A)x = 0. Since the matrix I - A has rank 1, the solution space of (I - A)x = 0 is two-dimensional, i.e., L = 1 is an eigenvalue of geometric multiplicity 2. It follows that T is diagonalizable since the sum of the geometric multiplicities of its eigenvalues is 3.
31. If x is a vector in R^n and L is a scalar, then [Tx]_B = [T]_B [x]_B and [Lx]_B = L[x]_B. It follows that Tx = Lx if and only if [T]_B [x]_B = [Tx]_B = [Lx]_B = L[x]_B; thus x is an eigenvector of T corresponding to L if and only if [x]_B is an eigenvector of [T]_B corresponding to L.
32. The characteristic polynomial of A is p(L) = det [L-a  -b; -c  L-d] = L^2 - (a + d)L + (ad - bc), and the discriminant of this quadratic polynomial is (a + d)^2 - 4(ad - bc) = (a - d)^2 + 4bc.
(a) If (a - d)^2 + 4bc > 0, then p(L) has two distinct real roots; thus A is diagonalizable since it has two distinct eigenvalues.
(b) If (a - d)^2 + 4bc < 0, then p(L) has no real roots; thus A has no real eigenvalues and is therefore not diagonalizable.
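The test in Exercise 32 is easy to apply mechanically: compute (a - d)^2 + 4bc and check its sign. A sketch with two illustrative 2x2 matrices (not from the exercises):

```python
def disc(a, b, c, d):
    """Discriminant of the characteristic polynomial of [[a, b], [c, d]]."""
    return (a - d) ** 2 + 4 * b * c

# [[1, 2], [3, 1]]: disc > 0 -> two distinct real eigenvalues, diagonalizable
# [[0, -1], [1, 0]] (a rotation): disc < 0 -> no real eigenvalues
print(disc(1, 2, 3, 1), disc(0, -1, 1, 0))  # 24 -4
```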
DISCUSSION AND DISCOVERY
D1. The matrices A and B are not similar since rank(A) = 1 and rank(B) = 2.
D2. (a) True. We have A = P^{-1}AP where P = I.
(b) True. If A is similar to B and B is similar to C, then there are invertible matrices P1 and P2 such that A = P1^{-1} B P1 and B = P2^{-1} C P2. It follows that A = P1^{-1}(P2^{-1} C P2)P1 = (P2 P1)^{-1} C (P2 P1); thus A is similar to C.
(c) True. If A = P^{-1}BP, then A^{-1} = (P^{-1}BP)^{-1} = P^{-1} B^{-1} (P^{-1})^{-1} = P^{-1} B^{-1} P.
(d) False. This statement does not guarantee that there are enough linearly independent eigenvectors. For example, a 3x3 matrix can have only one real eigenvalue, of algebraic multiplicity 1, and yet fail to be diagonalizable.
D3. (a) False. For example, I is diagonalizable (it is already diagonal).
(b) False. For example, if P^{-1}AP is a diagonal matrix then so is Q^{-1}AQ where Q = 2P. The diagonalizing matrix (if it exists) is not unique!
(c) True. Vectors from different eigenspaces correspond to different eigenvalues and are therefore linearly independent. In the situation described, {v1, v2, v3} is a linearly independent set.
(d) True. If an invertible matrix A is similar to a diagonal matrix D, then D must also be invertible; thus D has nonzero diagonal entries and D^{-1} is the diagonal matrix whose diagonal entries are the reciprocals of the corresponding entries of D. Finally, if P is an invertible matrix such that P^{-1}AP = D, we have P^{-1} A^{-1} P = (P^{-1}AP)^{-1} = D^{-1} and so A^{-1} is similar to D^{-1}.
284
(e)
\
04. (a)
(b)
(c)
(d)
05. (a)
(b)
(c)
Chapter 8
1'ru_y The vectors in a basis are linearly independent; thus A bas n linear independent

A is a 6 x 6 matrix.
The cigenspuce corresponding to >. = 1 has dimension 1 The eigt:nspace corresponding to
>. = 3 has dimension l or 2. The eigenspace corresponding to >.= 4 has dtmension 1, 2 or 3.
If A is diagonalizable, then the eigenspaces corresponding to >. = 1, >. = 3, and >. = 4 have
dimensions 1, 2, and 3 respectively.
These vectors must correspond to the eigenvalue >. = 4.
If >.
1
has geometric multiplicity 2 and >.2 has geometric multiplicity 3, then >.3 must have
multiplicity l. Thus the sum of the geometric multiplicities is 6 and so A is diagonalizable.
In this case the matrix is not diagonalizable since the sum of the geometric multiplicities of
t he e1genvalues is less than 6.
The matrix may or may not be diagonalizable. The geomet.ric multiplicity of >.3 .IIWSt be 1 or
2. If the geometric multiplicity of >.3 is 2, then the matrix is diagonalizable. If the geometric
mllll.iplicitv of >-1 Is 1, then the matrix is not dia.gonaliza.ble.
WORKING WITH PROOFS
P1. If A and B are similar, then there is an invertible matrix P such that A = P⁻¹BP. Thus PA = BP
and so, using the result of the cited exercise, we have rank(A) = rank(PA) = rank(BP) = rank(B)
and nullity(A) = nullity(PA) = nullity(BP) = nullity(B).

P2. If A and B are similar, then there is an invertible matrix P such that A = P⁻¹BP. Thus, using
part (e) of Theorem 3.2.12, we have tr(A) = tr(P⁻¹BP) = tr(P⁻¹(BP)) = tr((BP)P⁻¹) = tr(B).

P3. If x ≠ 0 and Ax = λx then, since P is invertible and CP⁻¹ = P⁻¹A, we have
CP⁻¹x = P⁻¹Ax = P⁻¹(λx) = λP⁻¹x
with P⁻¹x ≠ 0. Thus P⁻¹x is an eigenvector of C corresponding to λ.

P4. If A and B are similar, then there is an invertible matrix P such that A = P⁻¹BP. We will prove,
by induction, that Aᵏ = P⁻¹BᵏP (thus Aᵏ and Bᵏ are similar) for every positive integer k.
Step 1. The fact that A¹ = A = P⁻¹BP = P⁻¹B¹P is given.
Step 2. If Aᵏ = P⁻¹BᵏP, where k is a fixed integer ≥ 1, then we have
Aᵏ⁺¹ = AAᵏ = (P⁻¹BP)(P⁻¹BᵏP) = P⁻¹B(PP⁻¹)BᵏP = P⁻¹Bᵏ⁺¹P
These two steps complete the proof by induction.

P5. If A is diagonalizable, then there is an invertible matrix P and a diagonal matrix D such that
P⁻¹AP = D. We will prove, by induction, that P⁻¹AᵏP = Dᵏ for every positive integer k. Since
Dᵏ is diagonal, this shows that Aᵏ is diagonalizable.
Step 1. The fact that P⁻¹A¹P = P⁻¹AP = D = D¹ is given.
Step 2. If P⁻¹AᵏP = Dᵏ, where k is a fixed integer ≥ 1, then we have
P⁻¹Aᵏ⁺¹P = P⁻¹AAᵏP = (P⁻¹AP)(P⁻¹AᵏP) = DDᵏ = Dᵏ⁺¹
These two steps complete the proof by induction.
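P4 and P5 are what make high matrix powers cheap: Aᵏ = PDᵏP⁻¹, and powering a diagonal matrix only powers its diagonal entries. The sketch below is illustrative only; the 2 × 2 matrix and its eigendata are made-up examples, not from the text.

```python
# Illustration of P4/P5: if P^-1 A P = D is diagonal, then A^k = P D^k P^-1.
# The matrix A below (eigenvalues 2 and 3) is a made-up example.

def matmul(X, Y):
    return [[sum(X[i][k] * Y[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

A = [[2, 1], [0, 3]]
P = [[1, 1], [0, 1]]        # columns are eigenvectors of A
Pinv = [[1, -1], [0, 1]]
k = 5
Dk = [[2 ** k, 0], [0, 3 ** k]]          # D^k: just power the diagonal

via_diag = matmul(matmul(P, Dk), Pinv)   # A^k = P D^k P^-1

direct = [[1, 0], [0, 1]]
for _ in range(k):                        # A^k by repeated multiplication
    direct = matmul(direct, A)

print(via_diag)   # [[32, 211], [0, 243]]
print(direct)     # same matrix
```

The two computations agree; the diagonalized route replaces k matrix products by two.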
P6. (a) Let W be the eigenspace corresponding to λ₀. Choose a basis {u₁, u₂, ..., uₖ} for W, then
extend it to obtain a basis B = {u₁, u₂, ..., uₖ, uₖ₊₁, ..., uₙ} for Rⁿ.
(b) If P = [u₁ | u₂ | ⋯ | uₙ] = [B₁ | B₂], then the product AP has the form
AP = [λ₀u₁ | λ₀u₂ | ⋯ | λ₀uₖ | AB₂]
On the other hand, if C is an n × n matrix of the form C = [λ₀Iₖ  X; 0  Y], then PC has the form
PC = [λ₀u₁ | λ₀u₂ | ⋯ | λ₀uₖ | PZ], where Z = [X; Y]. Thus, if Z = P⁻¹AB₂, we have AP = PC.
(c) Since AP = PC, we have P⁻¹AP = C. Thus A is similar to C = [λ₀Iₖ  X; 0  Y], so A and C
have the same characteristic polynomial.
(d) Due to the special block structure of C, its characteristic polynomial has the form
det(λI − C) = (λ − λ₀)ᵏ det(λIₙ₋ₖ − Y)
Thus the algebraic multiplicity of λ₀ as an eigenvalue of C, and of A, is greater than or equal
to k.
EXERCISE SET 8.3
1. The characteristic polynomial of A is p(λ) = λ² − 5λ = λ(λ − 5). Thus the eigenvalues of A are λ = 0
and λ = 5, and each of the eigenspaces has dimension 1.

2. The characteristic polynomial of A is p(λ) = λ³ − 27λ − 54 = (λ − 6)(λ + 3)². Thus the eigenvalues of
A are λ = 6 and λ = −3. The eigenspace corresponding to λ = 6 has dimension 1, and the eigenspace
corresponding to λ = −3 has dimension 2.

3. The characteristic polynomial of A is p(λ) = λ³ − 3λ² = λ²(λ − 3). Thus the eigenvalues of A are
λ = 0 and λ = 3. The eigenspace corresponding to λ = 0 has dimension 2, and the eigenspace corre-
sponding to λ = 3 has dimension 1.

4. The characteristic polynomial of A is p(λ) = λ³ − 9λ² + 15λ − 7 = (λ − 7)(λ − 1)². Thus the eigen-
values of A are λ = 7 and λ = 1. The eigenspace corresponding to λ = 7 has dimension 1, and the
eigenspace corresponding to λ = 1 has dimension 2.
5. The general solution of the system (0I − A)x = 0 is x = r[−1; 1; 0] + s[−1; 0; 1]; thus the vectors
v₁ = [−1; 1; 0] and v₂ = [−1; 0; 1] form a basis for the eigenspace corresponding to λ = 0. Similarly,
the vector v₃ = [1; 1; 1] forms a basis for the eigenspace corresponding to λ = 3. Since v₃ is orthogonal
to both v₁ and v₂, it follows that the two eigenspaces are orthogonal.
6. The general solution of (7I − A)x = 0 is x = r[1; 1; 1]; thus the vector v₁ = [1; 1; 1] forms a basis for the
eigenspace corresponding to λ = 7. Similarly, the vectors v₂ = [−1; 1; 0] and v₃ = [−1; 0; 1] form a basis
for the eigenspace corresponding to λ = 1. Since v₁ is orthogonal to both v₂ and v₃, it follows that the
two eigenspaces are orthogonal.
7. The characteristic polynomial of A is p(λ) = λ² − 6λ + 8 = (λ − 2)(λ − 4); thus the eigenvalues of A
are λ = 2 and λ = 4. The vector v₁ = [1; −1] forms a basis for the eigenspace corresponding to λ = 2,
and the vector v₂ = [1; 1] forms a basis for the eigenspace corresponding to λ = 4. These vectors are
orthogonal to each other, and the orthogonal matrix
P = [v₁/‖v₁‖ | v₂/‖v₂‖] = [1/√2  1/√2; −1/√2  1/√2]
has the property that
PᵀAP = [1/√2  −1/√2; 1/√2  1/√2][3  1; 1  3][1/√2  1/√2; −1/√2  1/√2] = [2  0; 0  4] = D
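The diagonalization in Exercise 7 can be checked numerically. The sketch below is not from the text; the `matmul`/`transpose` helpers are ad hoc.

```python
import math

# Check of Exercise 7: P^T A P should be diag(2, 4) for A = [[3, 1], [1, 3]].

def matmul(X, Y):
    n, m, p = len(X), len(Y), len(Y[0])
    return [[sum(X[i][k] * Y[k][j] for k in range(m)) for j in range(p)]
            for i in range(n)]

def transpose(X):
    return [list(row) for row in zip(*X)]

A = [[3, 1], [1, 3]]
s = 1 / math.sqrt(2)
P = [[s, s], [-s, s]]    # normalized eigenvectors as columns

D = matmul(matmul(transpose(P), A), P)
print(D)                 # entries approximately diag(2, 4)
```

Up to floating-point rounding, the off-diagonal entries vanish and the diagonal holds the eigenvalues in the order of the eigenvector columns.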
8. The characteristic polynomial of A is p(λ) = (λ − 2)(λ − 7); thus the eigenvalues of A are λ = 2 and λ = 7.
Corresponding eigenvectors are v₁ = [1; 2] and v₂ = [−2; 1] respectively. These vectors are orthogonal to
each other, and the orthogonal matrix P = [v₁/‖v₁‖ | v₂/‖v₂‖] = [1/√5  −2/√5; 2/√5  1/√5] has the property that
PᵀAP = [1/√5  2/√5; −2/√5  1/√5][6  −2; −2  3][1/√5  −2/√5; 2/√5  1/√5] = [2  0; 0  7] = D
9. The characteristic polynomial of A is p(λ) = λ³ + 6λ² − 32 = (λ − 2)(λ + 4)²; thus the eigenvalues of
A are λ = 2 and λ = −4. The general solution of (2I − A)x = 0 is x = r[1; 1; 2], and the general solution
of (−4I − A)x = 0 is x = s[−1; 1; 0] + t[−2; 0; 1]. Thus the vector v₁ = [1; 1; 2] forms a basis for the eigenspace
corresponding to λ = 2, and the vectors v₂ = [−1; 1; 0] and v₃ = [−2; 0; 1] form a basis for the eigenspace
corresponding to λ = −4. Application of the Gram-Schmidt process to {v₁} and to {v₂, v₃} yields
orthonormal bases {u₁} and {u₂, u₃} for the eigenspaces, and the orthogonal matrix
P = [u₁ | u₂ | u₃] = [1/√6  −1/√2  −1/√3; 1/√6  1/√2  −1/√3; 2/√6  0  1/√3]
has the property that
PᵀAP = [2  0  0; 0  −4  0; 0  0  −4] = D
Note. The diagonalizing matrix P is not unique. It depends on the choice of bases for the eigenspaces.
This is just one possibility.
10. The characteristic polynomial of A is p(λ) = λ³ + 28λ² − 1175λ − 3750 = (λ + 3)(λ − 25)(λ + 50);
thus the eigenvalues of A are λ₁ = −3, λ₂ = 25, and λ₃ = −50. Corresponding eigenvectors are
v₁ = [0; 1; 0], v₂ = [4; 0; −3], and v₃ = [3; 0; 4]. These vectors are mutually orthogonal, and the orthogonal
matrix
P = [v₁/‖v₁‖ | v₂/‖v₂‖ | v₃/‖v₃‖] = [0  4/5  3/5; 1  0  0; 0  −3/5  4/5]
has the property that
PᵀAP = [−3  0  0; 0  25  0; 0  0  −50] = D
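Because the eigenvector entries in Exercise 10 are rational, the diagonalization can be checked in exact integer arithmetic: scaling P by 5 gives an integer matrix Q = 5P with QᵀAQ = 25D. A small sketch (the `matmul` helper is ad hoc):

```python
# Exact check of Exercise 10: with Q = 5P (integer entries),
# Q^T A Q = 25 * diag(-3, 25, -50) because P is orthogonal.

def matmul(X, Y):
    return [[sum(X[i][k] * Y[k][j] for k in range(len(Y)))
             for j in range(len(Y[0]))] for i in range(len(X))]

A = [[-2, 0, -36], [0, -3, 0], [-36, 0, -23]]
Q = [[0, 4, 3], [5, 0, 0], [0, -3, 4]]     # 5 * P, columns = eigenvectors
QT = [list(r) for r in zip(*Q)]

result = matmul(matmul(QT, A), Q)
print(result)   # [[-75, 0, 0], [0, 625, 0], [0, 0, -1250]]
```

Each diagonal entry is 25 times the corresponding eigenvalue, and the zero off-diagonal entries confirm the eigenvectors are mutually orthogonal.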
11. The characteristic polynomial of A is p(λ) = λ³ − 2λ² = λ²(λ − 2); thus the eigenvalues of A are λ = 0
and λ = 2. The general solution of (0I − A)x = 0 is x = r[0; 1; 0] + s[−1; 0; 1], and the general solution of
(2I − A)x = 0 is x = t[1; 0; 1]. Thus the vectors v₁ = [0; 1; 0] and v₂ = [−1; 0; 1] form a basis for the eigenspace
corresponding to λ = 0, and the vector v₃ = [1; 0; 1] forms a basis for the eigenspace corresponding to
λ = 2. These vectors are mutually orthogonal, and the orthogonal matrix
P = [v₁/‖v₁‖ | v₂/‖v₂‖ | v₃/‖v₃‖] = [0  −1/√2  1/√2; 1  0  0; 0  1/√2  1/√2]
has the property that
PᵀAP = [0  0  0; 0  0  0; 0  0  2] = D
12. The characteristic polynomial of A is p(λ) = λ³ − 6λ² + 9λ = λ(λ − 3)²; thus the eigenvalues of A
are λ = 0 and λ = 3. The vector v₁ = [1; 1; 1] forms a basis for the eigenspace corresponding to λ = 0,
and the vectors v₂ = [−1; 1; 0] and v₃ = [−1; 0; 1] form a basis for the eigenspace corresponding to λ = 3.
Application of Gram-Schmidt to {v₁} and to {v₂, v₃} yields orthonormal bases {u₁} and {u₂, u₃} for
the eigenspaces, and the orthogonal matrix
P = [u₁ | u₂ | u₃] = [1/√3  −1/√2  −1/√6; 1/√3  1/√2  −1/√6; 1/√3  0  2/√6]
has the property that
PᵀAP = [0  0  0; 0  3  0; 0  0  3] = D
13. The characteristic polynomial of A is p(λ) = λ⁴ − 6λ³ + 8λ² = λ²(λ − 2)(λ − 4); thus the eigenvalues
of A are λ = 0, λ = 2, and λ = 4. The general solution of (0I − A)x = 0 is x = r[0; 0; 1; 0] + s[0; 0; 0; 1], the
general solution of (2I − A)x = 0 is x = t[1; −1; 0; 0], and the general solution of (4I − A)x = 0 is x = u[1; 1; 0; 0].
Thus the vectors v₁ = [0; 0; 1; 0] and v₂ = [0; 0; 0; 1] form a basis for the eigenspace corresponding to λ = 0,
v₃ = [1; −1; 0; 0] forms a basis for the eigenspace corresponding to λ = 2, and v₄ = [1; 1; 0; 0] forms a basis
for the eigenspace corresponding to λ = 4. These vectors are mutually orthogonal, and the orthogonal matrix
P = [0  0  1/√2  1/√2; 0  0  −1/√2  1/√2; 1  0  0  0; 0  1  0  0]
has the property that
PᵀAP = [0  0  0  0; 0  0  0  0; 0  0  2  0; 0  0  0  4] = D
14. The characteristic polynomial of A is p(λ) = λ⁴ − 1250λ² + 390625 = (λ − 25)²(λ + 25)²; thus the
eigenvalues of A are λ = 25 and λ = −25. The vectors v₁ = [3; 4; 0; 0] and v₂ = [0; 0; 3; 4] form a basis for the
eigenspace corresponding to λ = 25, and the vectors v₃ = [4; −3; 0; 0] and v₄ = [0; 0; 4; −3] form a basis for the
eigenspace corresponding to λ = −25. These four vectors are mutually orthogonal, and the orthogonal
matrix
P = [3/5  0  4/5  0; 4/5  0  −3/5  0; 0  3/5  0  4/5; 0  4/5  0  −3/5]
has the property that
PᵀAP = [25  0  0  0; 0  25  0  0; 0  0  −25  0; 0  0  0  −25] = D
15. The eigenvalues of the matrix A = [3  1; 1  3] are λ₁ = 2 and λ₂ = 4, with corresponding normalized
eigenvectors u₁ = [1/√2; −1/√2] and u₂ = [1/√2; 1/√2]. Thus the spectral decomposition of A is
[3  1; 1  3] = (2)[1/2  −1/2; −1/2  1/2] + (4)[1/2  1/2; 1/2  1/2]
16. The eigenvalues of A are λ₁ = 2 and λ₂ = 1, with corresponding normalized eigenvectors u₁ and u₂
(details omitted). Thus the spectral decomposition of A is A = 2u₁u₁ᵀ + u₂u₂ᵀ.
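A spectral decomposition can be verified by rebuilding A from its rank-one pieces. The sketch below checks Exercise 15; the `outer` helper is ad hoc, not from the text.

```python
import math

# Check of Exercise 15: A = 2*u1*u1^T + 4*u2*u2^T should rebuild [[3, 1], [1, 3]].

def outer(u, v):
    return [[ui * vj for vj in v] for ui in u]

s = 1 / math.sqrt(2)
u1 = [s, -s]          # unit eigenvector for lambda = 2
u2 = [s, s]           # unit eigenvector for lambda = 4

A = [[2 * outer(u1, u1)[i][j] + 4 * outer(u2, u2)[i][j] for j in range(2)]
     for i in range(2)]
print(A)              # entries approximately [[3, 1], [1, 3]]
```

Each term λᵢuᵢuᵢᵀ is the projection onto one eigenline scaled by its eigenvalue; summing them recovers the original symmetric matrix.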
17. The matrix A = [−3  1  2; 1  −3  2; 2  2  0] has eigenvalues λ = 2 and λ = −4 (see Exercise 9), with
orthonormal eigenvectors u₁ = [1/√6; 1/√6; 2/√6], u₂ = [−1/√2; 1/√2; 0], and u₃ = [−1/√3; −1/√3; 1/√3].
Thus the spectral decomposition of A is
A = 2u₁u₁ᵀ − 4u₂u₂ᵀ − 4u₃u₃ᵀ
= 2[1/6  1/6  1/3; 1/6  1/6  1/3; 1/3  1/3  2/3] − 4[1/2  −1/2  0; −1/2  1/2  0; 0  0  0] − 4[1/3  1/3  −1/3; 1/3  1/3  −1/3; −1/3  −1/3  1/3]
Note. The spectral decomposition is not unique. It depends on the choice of bases for the eigenspaces.
This is just one possibility.

18. A = [−2  0  −36; 0  −3  0; −36  0  −23]
= −3[0  0  0; 0  1  0; 0  0  0] + 25[16/25  0  −12/25; 0  0  0; −12/25  0  9/25] − 50[9/25  0  12/25; 0  0  0; 12/25  0  16/25]
19. The matrix A has eigenvalues λ = −1 and λ = 2, with corresponding eigenvectors [−1; 1] and [−3; 2].
Thus the matrix P = [−1  −3; 1  2] has the property that P⁻¹AP = D = [−1  0; 0  2]. It follows that
A¹⁰ = PD¹⁰P⁻¹ = [−1  −3; 1  2][1  0; 0  1024][2  3; −1  −1] = [3070  3069; −2046  −2045]
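The tenth power in Exercise 19 can be confirmed by brute force. In the sketch below, A itself is reconstructed from the eigendata as PDP⁻¹ (an assumption, since the solution displays only P and D); the `matmul` helper is ad hoc.

```python
# Check of Exercise 19: with A = P D P^-1 for P = [[-1, -3], [1, 2]] and
# D = diag(-1, 2), repeated multiplication reproduces the stated A^10.

def matmul(X, Y):
    return [[sum(X[i][k] * Y[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

A = [[8, 9], [-6, -7]]          # equals P D P^-1 for the data above
power = [[1, 0], [0, 1]]
for _ in range(10):
    power = matmul(power, A)

print(power)   # [[3070, 3069], [-2046, -2045]]
```

Since (−1)¹⁰ = 1 and 2¹⁰ = 1024, the eigenvalue route gives the same answer with a single 2 × 2 product.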
20. The matrix A has eigenvalues λ = 2 and λ = −2, with corresponding eigenvectors v₁ and v₂ (details
omitted). Thus there is an invertible matrix P with P⁻¹AP = D = [2  0; 0  −2], and it follows that
A¹⁰ = PD¹⁰P⁻¹ = P[1024  0; 0  1024]P⁻¹ = 1024PP⁻¹ = [1024  0; 0  1024]
21. The matrix A has eigenvalues λ = −1 and λ = 1. The vector v₁ forms a basis for the eigenspace
corresponding to λ = −1, and the vectors v₂ and v₃ form a basis for the eigenspace corresponding
to λ = 1 (details omitted). Thus there is an invertible matrix P with P⁻¹AP = D = diag(−1, 1, 1), and
since D¹⁰⁰⁰ = diag((−1)¹⁰⁰⁰, 1, 1) = I, it follows that
A¹⁰⁰⁰ = PD¹⁰⁰⁰P⁻¹ = PIP⁻¹ = I
22. The matrix A has eigenvalues λ = 0, λ = 1, and λ = −1, with corresponding eigenvectors v₁, v₂,
and v₃ (details omitted). Thus there is an invertible matrix P with P⁻¹AP = D = diag(0, 1, −1), and it
follows that
A¹⁰⁰⁰ = PD¹⁰⁰⁰P⁻¹ = P diag(0, 1, 1)P⁻¹
23. (a) The characteristic polynomial of A is p(λ) = λ³ − 6λ² + 12λ − 8. Computing successive powers
of A gives A² and A³ (details omitted), and substituting these into
p(A) = A³ − 6A² + 12A − 8I
yields the zero matrix, which shows that A satisfies its characteristic equation, i.e., that p(A) = 0.
(b) Since A³ = 6A² − 12A + 8I, we have A⁴ = 6A³ − 12A² + 8A = 6(6A² − 12A + 8I) − 12A² + 8A
= 24A² − 64A + 48I.
(c) Since A³ − 6A² + 12A − 8I = 0, we have A(A² − 6A + 12I) = 8I, and hence A⁻¹ = (1/8)(A² − 6A + 12I).
24. (a) The characteristic polynomial of A is p(λ) = λ³ − λ² − λ + 1. Computing successive powers of
A = [−5  0  6; −3  1  3; −4  0  5], we have A² = I and A³ = A; thus
p(A) = A³ − A² − A + I = A − I − A + I = 0
which shows that A satisfies its characteristic equation, i.e., that p(A) = 0.
(b) Since A³ = A, we have A⁴ = AA³ = AA = A² = I.
(c) Since A² = I, we have A⁻¹ = A.
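The Cayley-Hamilton check in Exercise 24 is quick to verify mechanically. The sketch below uses the matrix shown in the solution; the `matmul` helper is ad hoc.

```python
# Check of Exercise 24: for A = [[-5, 0, 6], [-3, 1, 3], [-4, 0, 5]],
# A^2 = I and A^3 = A, so p(A) = A^3 - A^2 - A + I is the zero matrix.

def matmul(X, Y):
    return [[sum(X[i][k] * Y[k][j] for k in range(3)) for j in range(3)]
            for i in range(3)]

A = [[-5, 0, 6], [-3, 1, 3], [-4, 0, 5]]
I3 = [[1, 0, 0], [0, 1, 0], [0, 0, 1]]

A2 = matmul(A, A)
A3 = matmul(A2, A)
pA = [[A3[i][j] - A2[i][j] - A[i][j] + I3[i][j] for j in range(3)]
      for i in range(3)]

print(A2 == I3, pA == [[0, 0, 0], [0, 0, 0], [0, 0, 0]])   # True True
```

Because A² = I, all higher powers cycle between A and I, which is what parts (b) and (c) exploit.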
25. From Exercise 7 we have PᵀAP = [2  0; 0  4] = D. Thus A = PDPᵀ and
e^{tA} = Pe^{tD}Pᵀ = [1/√2  1/√2; −1/√2  1/√2][e^{2t}  0; 0  e^{4t}][1/√2  −1/√2; 1/√2  1/√2]
= [(e^{2t} + e^{4t})/2  (e^{4t} − e^{2t})/2; (e^{4t} − e^{2t})/2  (e^{2t} + e^{4t})/2]
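The closed form in Exercise 25 can be compared against the defining power series e^{tA} = Σₖ (tA)ᵏ/k!. The sketch below is illustrative; the truncated-series helper `expm_series` is ours, and t = 0.1 is an arbitrary test value.

```python
import math

# Check of Exercise 25: e^{tA} for A = [[3, 1], [1, 3]] has entries
# (e^{2t} + e^{4t})/2 on the diagonal and (e^{4t} - e^{2t})/2 off it.

def matmul(X, Y):
    return [[sum(X[i][k] * Y[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

def expm_series(A, t, terms=30):
    result = [[1.0, 0.0], [0.0, 1.0]]
    term = [[1.0, 0.0], [0.0, 1.0]]
    for k in range(1, terms):
        term = matmul(term, [[t * a / k for a in row] for row in A])  # (tA)^k / k!
        result = [[result[i][j] + term[i][j] for j in range(2)] for i in range(2)]
    return result

t = 0.1
closed = [[(math.exp(2*t) + math.exp(4*t)) / 2, (math.exp(4*t) - math.exp(2*t)) / 2],
          [(math.exp(4*t) - math.exp(2*t)) / 2, (math.exp(2*t) + math.exp(4*t)) / 2]]
series = expm_series([[3, 1], [1, 3]], t)
print(max(abs(closed[i][j] - series[i][j]) for i in range(2) for j in range(2)) < 1e-9)
```

Diagonalization turns the matrix series into two scalar exponentials, which is why the closed form exists at all.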
27. From Exercise 9 we have
PᵀAP = [2  0  0; 0  −4  0; 0  0  −4] = D
Thus A = PDPᵀ and
e^{tA} = Pe^{tD}Pᵀ = (1/6)[e^{2t} + 5e^{−4t}  e^{2t} − e^{−4t}  2e^{2t} − 2e^{−4t}; e^{2t} − e^{−4t}  e^{2t} + 5e^{−4t}  2e^{2t} − 2e^{−4t}; 2e^{2t} − 2e^{−4t}  2e^{2t} − 2e^{−4t}  4e^{2t} + 2e^{−4t}]
28. From Exercise 10 we have
PᵀAP = [−3  0  0; 0  25  0; 0  0  −50] = D
Thus A = PDPᵀ and
e^{tA} = Pe^{tD}Pᵀ = [(16e^{25t} + 9e^{−50t})/25  0  (−12e^{25t} + 12e^{−50t})/25; 0  e^{−3t}  0; (−12e^{25t} + 12e^{−50t})/25  0  (9e^{25t} + 16e^{−50t})/25]
29. Note that sin(πD) = [sin 2π  0  0; 0  sin(−4π)  0; 0  0  sin(−4π)] = 0. Thus, proceeding as in Exercise 27:
sin(πA) = P sin(πD)Pᵀ = P0Pᵀ = 0
30. cos(πA) = P cos(πD)Pᵀ = P[cos(−3π)  0  0; 0  cos 25π  0; 0  0  cos(−50π)]Pᵀ
= P[−1  0  0; 0  −1  0; 0  0  1]Pᵀ = [−7/25  0  24/25; 0  −1  0; 24/25  0  7/25]
31. If A = [0  0  0; 1  0  0; 2  1  0], then A² = [0  0  0; 0  0  0; 1  0  0] and A³ = 0. Thus A is nilpotent, and
e^A = I + A + (1/2)A² = [1  0  0; 1  1  0; 5/2  1  1]
32. Since A³ = 0, we have
sin(πA) = sin(0)I + π cos(0)A − (1/2)π² sin(0)A² = πA = [0  0  0; π  0  0; 2π  π  0]
and
cos(πA) = cos(0)I − π sin(0)A − (1/2)π² cos(0)A² = I − (1/2)π²A² = [1  0  0; 0  1  0; −π²/2  0  1]
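For a nilpotent matrix the exponential series terminates, so Exercise 31 can be checked exactly with rational arithmetic. A sketch (the matrix is the one shown in the solution; the `matmul` helper is ours):

```python
from fractions import Fraction

# Check of Exercise 31: for the nilpotent matrix A below (A^3 = 0),
# e^A = I + A + A^2/2 exactly.

def matmul(X, Y):
    return [[sum(X[i][k] * Y[k][j] for k in range(3)) for j in range(3)]
            for i in range(3)]

A = [[0, 0, 0], [1, 0, 0], [2, 1, 0]]
A2 = matmul(A, A)
A3 = matmul(A2, A)

expA = [[Fraction(i == j) + A[i][j] + Fraction(A2[i][j], 2) for j in range(3)]
        for i in range(3)]

print(A3)     # [[0, 0, 0], [0, 0, 0], [0, 0, 0]], so the series terminates
print(expA)   # exact e^A; the (3, 1) entry is Fraction(5, 2)
```

The same truncation argument is what reduces sin(πA) and cos(πA) to polynomials in A in Exercise 32.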
33. If P is symmetric and orthogonal, then Pᵀ = P and PᵀP = I; thus P² = PᵀP = I. If λ is an
eigenvalue of P, then there is a nonzero vector x such that Px = λx. Since P² = I it follows that
λ²x = P²x = Ix = x; thus λ² = 1 and so λ = ±1.
DISCUSSION AND DISCOVERY
D1. (a) True. The matrix AAᵀ is symmetric and hence is orthogonally diagonalizable.
(b) False. If A is diagonalizable but not symmetric (therefore not orthogonally diagonalizable),
then there is a basis for Rⁿ (but not an orthogonal basis) consisting of eigenvectors of A.
(c) False. An orthogonal matrix need not be symmetric; for example, A = [0  −1; 1  0].
(d) True. If A is an invertible orthogonally diagonalizable matrix, then there is an orthogonal
matrix P such that PᵀAP = D, where D is a diagonal matrix with nonzero entries (the
eigenvalues of A) on the main diagonal. It follows that PᵀA⁻¹P = (PᵀAP)⁻¹ = D⁻¹, and
D⁻¹ is a diagonal matrix with nonzero entries (the reciprocals of the eigenvalues of A) on
the main diagonal. Thus the matrix A⁻¹ is orthogonally diagonalizable.
(e) True. If A is orthogonally diagonalizable, then A is symmetric and thus has real eigenvalues.
D2. (a) A = PDPᵀ = [3  0  0; 0  3  4; 0  4  3]
(b) No. The vectors v₂ and v₃ correspond to different eigenvalues, but are not orthogonal.
Therefore they cannot be eigenvectors of a symmetric matrix.

D3. Yes. Since A is diagonalizable and the eigenspaces are mutually orthogonal, there is an orthonormal
basis for Rⁿ consisting of eigenvectors of A. Thus A is orthogonally diagonalizable and therefore
must be symmetric.
WORKING WITH PROOFS
P1. We first show that if A and C are orthogonally similar, then there exist orthonormal bases with
respect to which they represent the same linear operator. For this purpose, let T be the operator
defined by T(x) = Ax. Then A = [T], i.e., A is the matrix of T relative to the standard basis
B = {e₁, e₂, ..., eₙ}. Since A and C are orthogonally similar, there is an orthogonal matrix P
such that C = PᵀAP. Let B′ = {v₁, v₂, ..., vₙ}, where v₁, v₂, ..., vₙ are the column vectors
of P. Then B′ is an orthonormal basis for Rⁿ, and P = P_{B′→B}. Thus [T]_B = P[T]_{B′}Pᵀ and
[T]_{B′} = Pᵀ[T]_B P = PᵀAP = C. This shows that there exist orthonormal bases with respect to
which A and C represent the same linear operator.
Conversely, suppose that A = [T]_B and C = [T]_{B′}, where T: Rⁿ → Rⁿ is a linear operator
and B, B′ are orthonormal bases for Rⁿ. If P = P_{B′→B}, then P is an orthogonal matrix and C = [T]_{B′} =
Pᵀ[T]_B P = PᵀAP. Thus A and C are orthogonally similar.
P2. Suppose A = c₁u₁u₁ᵀ + c₂u₂u₂ᵀ + ⋯ + cₙuₙuₙᵀ, where {u₁, u₂, ..., uₙ} is an orthonormal basis for
Rⁿ. Since (uⱼuⱼᵀ)ᵀ = uⱼuⱼᵀ, it follows that Aᵀ = A; thus A is symmetric. Furthermore,
since uᵢᵀuⱼ = uᵢ · uⱼ = δᵢⱼ, we have
Auⱼ = (c₁u₁u₁ᵀ + c₂u₂u₂ᵀ + ⋯ + cₙuₙuₙᵀ)uⱼ = Σᵢ cᵢuᵢ(uᵢᵀuⱼ) = cⱼuⱼ
for each j = 1, 2, ..., n. Thus c₁, c₂, ..., cₙ are eigenvalues of A.
P3. The spectral decomposition A = λ₁u₁u₁ᵀ + λ₂u₂u₂ᵀ + ⋯ + λₙuₙuₙᵀ is equivalent to A = PDPᵀ
where P = [u₁ | u₂ | ⋯ | uₙ] and D = diag(λ₁, λ₂, ..., λₙ); thus
f(A) = Pf(D)Pᵀ = P diag(f(λ₁), f(λ₂), ..., f(λₙ))Pᵀ
= f(λ₁)u₁u₁ᵀ + f(λ₂)u₂u₂ᵀ + ⋯ + f(λₙ)uₙuₙᵀ
P4. (a) Suppose A is a symmetric matrix, and λ₀ is an eigenvalue of A having geometric multi-
plicity k. Let W be the eigenspace corresponding to λ₀. Choose an orthonormal basis
{u₁, u₂, ..., uₖ} for W, extend it to an orthonormal basis B = {u₁, u₂, ..., uₖ, uₖ₊₁, ..., uₙ}
for Rⁿ, and let P be the orthogonal matrix having the vectors of B as its columns. Then, as
shown in Exercise P6(b) of Section 8.2, the product AP can be written as AP = PC, where
C = [λ₀Iₖ  X; 0  Y]. Since P is orthogonal, we have PᵀAP = C, and since PᵀAP is a symmetric
matrix, it follows that X = 0.
(b) Since A is similar to C, it has the same characteristic polynomial as C, namely
(λ − λ₀)ᵏ det(λIₙ₋ₖ − Y) = (λ − λ₀)ᵏ p_Y(λ), where p_Y(λ) is the characteristic polynomial of
Y. We will now prove that p_Y(λ₀) ≠ 0, and thus that the algebraic multiplicity of λ₀ is
exactly k. The proof is by contradiction:
Suppose p_Y(λ₀) = 0, i.e., that λ₀ is an eigenvalue of the matrix Y. Then there is a
nonzero vector y in Rⁿ⁻ᵏ such that Yy = λ₀y. Let x = [0; y] be the vector in Rⁿ whose first
k components are 0 and whose last n − k components are those of y. Then Cx = λ₀x,
and so x is an eigenvector of C corresponding to λ₀. Since AP = PC, it follows that Px is
an eigenvector of A corresponding to λ₀. But note that e₁, ..., eₖ are also eigenvectors of C
corresponding to λ₀, and that {e₁, ..., eₖ, x} is a linearly independent set. It follows that
{Pe₁, ..., Peₖ, Px} is a linearly independent set of eigenvectors of A corresponding to λ₀.
But this implies that the geometric multiplicity of λ₀ is greater than k, a contradiction!
(c) It follows from part (b) that the sum of the dimensions of the eigenspaces of A is equal to n;
thus A is diagonalizable. Furthermore, since A is symmetric, the eigenspaces corresponding
to different eigenvalues are orthogonal. Thus we can form an orthonormal basis for Rⁿ by
choosing an orthonormal basis for each of the eigenspaces and joining them together. Since
the sum of the dimensions is n, this will be an orthonormal basis consisting of eigenvectors
of A. Thus A is orthogonally diagonalizable.
EXERCISE SET 8.4
1. (a) 3x₁² + 7x₂² = [x₁  x₂][3  0; 0  7][x₁; x₂]
(b) 4x₁² − 6x₁x₂ − 9x₂² = [x₁  x₂][4  −3; −3  −9][x₁; x₂]
5. The quadratic form Q = 2x₁² + 2x₂² − 2x₁x₂ can be expressed in matrix notation as
Q = xᵀAx = [x₁  x₂][2  −1; −1  2][x₁; x₂]
The matrix A has eigenvalues λ₁ = 1 and λ₂ = 3, with corresponding eigenvectors v₁ = [1; 1] and
v₂ = [−1; 1] respectively. Thus the matrix P = [1/√2  −1/√2; 1/√2  1/√2] orthogonally diagonalizes A, and the change
of variable x = Py eliminates the cross product terms in Q:
Q = xᵀAx = yᵀ(PᵀAP)y = y₁² + 3y₂²
Note that the inverse relationship between x and y is
y = Pᵀx = [1/√2  1/√2; −1/√2  1/√2][x₁; x₂]
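The change of variable in Exercise 5 can be spot-checked numerically: substituting x = Py into Q must give y₁² + 3y₂² at any point. The sketch below assumes the P computed above; the test point is arbitrary.

```python
import math

# Check of Exercise 5: with x = P y, the form Q(x) = 2x1^2 + 2x2^2 - 2x1x2
# becomes y1^2 + 3y2^2 (no cross product term).

def Q(x1, x2):
    return 2 * x1**2 + 2 * x2**2 - 2 * x1 * x2

s = 1 / math.sqrt(2)
y1, y2 = 0.3, -1.7                    # arbitrary test point
x1 = s * y1 - s * y2                  # x = P y with P = [[s, -s], [s, s]]
x2 = s * y1 + s * y2

print(abs(Q(x1, x2) - (y1**2 + 3 * y2**2)) < 1e-12)   # True
```

The coefficients 1 and 3 appearing in the diagonalized form are exactly the eigenvalues of A.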
6. The given quadratic form can be expressed in matrix notation as Q = xᵀAx (details omitted).
The matrix A has eigenvalues λ₁ = 1, λ₂ = 4, λ₃ = 6, with corresponding (orthogonal) eigenvectors
v₁, v₂, v₃. Thus the matrix P whose columns are the normalized eigenvectors orthogonally diagonalizes A, and
the change of variable x = Py eliminates the cross product terms in Q:
Q = yᵀ(PᵀAP)y = [y₁  y₂  y₃][1  0  0; 0  4  0; 0  0  6][y₁; y₂; y₃] = y₁² + 4y₂² + 6y₃²
7. The given quadratic form can be expressed in matrix notation as Q = xᵀAx.
The matrix A has eigenvalues λ₁ = 1, λ₂ = 4, λ₃ = 7, with corresponding (orthogonal) eigenvectors
v₁, v₂, v₃ (details omitted). Thus the matrix P whose columns are the normalized eigenvectors orthogonally diagonalizes A,
and the change of variable x = Py eliminates the cross product terms in Q:
Q = xᵀAx = yᵀ(PᵀAP)y = [y₁  y₂  y₃][1  0  0; 0  4  0; 0  0  7][y₁; y₂; y₃] = y₁² + 4y₂² + 7y₃²
Note that the diagonalizing matrix P is symmetric, and so the inverse relationship between x and y
is y = Pᵀx = Px.
8. The given quadratic form can be expressed as Q = xᵀAx. The matrix
A has eigenvalues λ = 1 and λ = 10. The vectors v₁ and v₂ form a basis for the
eigenspace corresponding to λ = 1, and v₃ forms a basis for the eigenspace corresponding to
λ = 10 (details omitted). Thus the matrix P whose columns are orthonormal eigenvectors of A orthogonally diagonalizes
A, and the change of variable x = Py eliminates the cross product terms in Q:
Q = xᵀAx = yᵀ(PᵀAP)y = [y₁  y₂  y₃][1  0  0; 0  1  0; 0  0  10][y₁; y₂; y₃] = y₁² + y₂² + 10y₃²
9. (b) In matrix notation: [x  y]A[x; y] + [17  −8][x; y] − 5 = 0, where A is the symmetric matrix of the
quadratic coefficients (entries omitted).
10. (a), (b) Each equation is written in the matrix form xᵀAx + Kx + f = 0 in the same way (details omitted).
11. (a) Ellipse  (b) Hyperbola  (c) Parabola  (d) Circle

12. (a) Ellipse  (b) Hyperbola  (c) Parabola  (d) Circle
13. The equation can be written in matrix form as xᵀAx = −8 where A = [2  −2; −2  −1]. The eigenvalues of A
are λ₁ = 3 and λ₂ = −2, with corresponding eigenvectors v₁ = [2; −1] and v₂ = [1; 2] respectively. Thus
the matrix P = [2/√5  1/√5; −1/√5  2/√5] orthogonally diagonalizes A. Note that det(P) = 1, so P is a rotation
matrix. The equation of the conic in the rotated x′y′-coordinate system is
[x′  y′][3  0; 0  −2][x′; y′] = −8
which can be written as 2y′² − 3x′² = 8; thus the conic is a hyperbola. The angle through which the
axes have been rotated is θ = tan⁻¹(−1/2) ≈ −26.6°.
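The classification in Exercise 13 rests only on the signs of the eigenvalues, which for a 2 × 2 symmetric matrix come straight from the quadratic formula. A small sketch:

```python
import math

# Check of Exercise 13: the eigenvalues of A = [[2, -2], [-2, -1]] solve
# lambda^2 - tr(A)*lambda + det(A) = 0; opposite signs mean a hyperbola.

a, b, c, d = 2, -2, -2, -1
tr, det = a + d, a * d - b * c
disc = tr * tr - 4 * det
lam1 = (tr + math.sqrt(disc)) / 2
lam2 = (tr - math.sqrt(disc)) / 2

print(lam1, lam2)        # 3.0 -2.0
print(lam1 * lam2 < 0)   # True -> hyperbola
```

Two positive eigenvalues would give an ellipse, a zero eigenvalue a parabola; the sign pattern decides the conic before any rotation is computed.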
14. The equation can be written in matrix form as xᵀAx = 9 where A = [5  2; 2  5]. The eigenvalues of A are
λ₁ = 3 and λ₂ = 7, with corresponding eigenvectors v₁ = [1; −1] and v₂ = [1; 1] respectively. Thus the
matrix P = [1/√2  1/√2; −1/√2  1/√2] orthogonally diagonalizes A. Note that det(P) = 1, so P is a rotation matrix.
The equation of the conic in the rotated x′y′-coordinate system is
[x′  y′][3  0; 0  7][x′; y′] = 9
which can be written as 3x′² + 7y′² = 9; thus the conic is an ellipse. The angle of rotation corresponds
to cos θ = 1/√2 and sin θ = −1/√2; thus θ = −45°.
15. The equation can be written in matrix form as xᵀAx = 15 where A = [11  12; 12  4]. The eigenvalues of
A are λ₁ = 20 and λ₂ = −5, with corresponding eigenvectors v₁ = [4; 3] and v₂ = [−3; 4] respectively.
Thus the matrix P = [4/5  −3/5; 3/5  4/5] orthogonally diagonalizes A. Note that det(P) = 1, so P is a rotation
matrix. The equation of the conic in the rotated x′y′-coordinate system is
[x′  y′][20  0; 0  −5][x′; y′] = 15
which we can write as 4x′² − y′² = 3; thus the conic is a hyperbola. The angle through which the
axes have been rotated is θ = tan⁻¹(3/4) ≈ 36.9°.
16. The equation can be written in matrix form as xᵀAx = 4. The eigenvalues of A
are λ₁ = 4 and λ₂ = 12, with corresponding orthonormal eigenvectors forming a matrix
P with det(P) = 1, so P is a rotation matrix. The
equation of the conic in the rotated x′y′-coordinate system is
[x′  y′][4  0; 0  12][x′; y′] = 4
which we can write as x′² + 3y′² = 1; thus the conic is an ellipse. The angle of rotation corresponds
to cos θ = 1/√2 and sin θ = −1/√2, thus θ = −45°.
17. (a) The eigenvalues of A are λ = 1 and λ = 2; thus A is positive definite.
(b) negative definite  (c) indefinite
(d) positive semidefinite  (e) negative semidefinite

18. (a) The eigenvalues of A are λ = 2 and λ = −5; thus A is indefinite.
(b) negative definite  (c) positive definite
(d) negative semidefinite  (e) positive semidefinite
19. We have Q = x₁² + x₂² > 0 for (x₁, x₂) ≠ (0, 0); thus Q is positive definite.

20. negative definite

21. We have Q = (x₁ − x₂)² > 0 for x₁ ≠ x₂ and Q = 0 for x₁ = x₂; thus Q is positive semidefinite.

22. negative semidefinite

23. We have Q = x₁² − x₂², which is > 0 for x₁ ≠ 0, x₂ = 0 and < 0 for x₁ = 0, x₂ ≠ 0; thus Q is indefinite.

24. indefinite
25. (a) The eigenvalues of the matrix A = [5  2; 2  5] are λ = 3 and λ = 7; thus A is positive definite.
Since |5| = 5 and |5  2; 2  5| = 21 are positive, we reach the same conclusion using Theorem 8.4.5.
(b) The eigenvalues of A = [2  −1  0; −1  2  0; 0  0  5] are λ = 1, λ = 3, and λ = 5; thus A is positive definite.
The determinants of the principal submatrices are |2| = 2, |2  −1; −1  2| = 3, and det(A) = 15;
thus we reach the same conclusion using Theorem 8.4.5.

26. (a) The eigenvalues of the matrix A = [2  1; 1  2] are λ = 1 and λ = 3; thus A is positive definite. Since
|2| = 2 and |2  1; 1  2| = 3 are positive, we reach the same conclusion using Theorem 8.4.5.
(b) The eigenvalues of A = [3  −1  0; −1  2  −1; 0  −1  3] are λ = 1, λ = 3, and λ = 4; thus A is positive definite.
The determinants of the principal submatrices are |3| = 3, |3  −1; −1  2| = 5, and det(A) = 12;
thus we reach the same conclusion using Theorem 8.4.5.
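The leading-principal-minor test used in Exercises 25 and 26 is easy to automate. The sketch below uses the matrix from Exercise 25(b); the `det` and `leading_minors` helpers are ours, written naively since the matrices are tiny.

```python
# Check used in Exercises 25-26: a symmetric matrix is positive definite
# iff all of its leading principal minors are positive.

def det(M):
    """Determinant by cofactor expansion along the first row."""
    n = len(M)
    if n == 1:
        return M[0][0]
    return sum((-1) ** j * M[0][j] * det([row[:j] + row[j + 1:] for row in M[1:]])
               for j in range(n))

def leading_minors(M):
    return [det([row[:k] for row in M[:k]]) for k in range(1, len(M) + 1)]

A = [[2, -1, 0], [-1, 2, 0], [0, 0, 5]]       # matrix from Exercise 25(b)
print(leading_minors(A))                       # [2, 3, 15]
print(all(m > 0 for m in leading_minors(A)))   # True -> positive definite
```

This agrees with the eigenvalue check (eigenvalues 1, 3, 5, all positive) without computing any eigenvalues.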
27. (a) The matrix A has eigenvalues λ₁ = 3 and λ₂ = 7, with corresponding eigenvectors v₁ = [1; −1] and
v₂ = [1; 1]. Thus the matrix P = [1/√2  1/√2; −1/√2  1/√2] orthogonally diagonalizes A, and the matrix
B = P[√3  0; 0  √7]Pᵀ = [(√7 + √3)/2  (√7 − √3)/2; (√7 − √3)/2  (√7 + √3)/2]
has the property that B² = A.
(b) The matrix A has eigenvalues λ₁ = 1, λ₂ = 3, λ₃ = 5, with corresponding eigenvectors v₁ = [1; 1; 0],
v₂ = [−1; 1; 0], and v₃ = [0; 0; 1]. Thus P = [1/√2  −1/√2  0; 1/√2  1/√2  0; 0  0  1] orthogonally diagonalizes A, and
B = P[1  0  0; 0  √3  0; 0  0  √5]Pᵀ = [(1 + √3)/2  (1 − √3)/2  0; (1 − √3)/2  (1 + √3)/2  0; 0  0  √5]
has the property that B² = A.
28. (a) The matrix A has eigenvalues λ₁ = 1 and λ₂ = 3, with corresponding eigenvectors v₁ = [−1; 1]
and v₂ = [1; 1]. Thus the matrix P = [−1/√2  1/√2; 1/√2  1/√2] orthogonally diagonalizes A, and the matrix
B = P[1  0; 0  √3]Pᵀ = [(1 + √3)/2  (√3 − 1)/2; (√3 − 1)/2  (1 + √3)/2]
has the property that B² = A.
(b) The matrix A has eigenvalues λ₁ = 1, λ₂ = 3, λ₃ = 4, with corresponding eigenvectors v₁ = [1; 2; 1],
v₂ = [−1; 0; 1], and v₃ = [1; −1; 1]. Thus P = [1/√6  −1/√2  1/√3; 2/√6  0  −1/√3; 1/√6  1/√2  1/√3] orthogonally diagonalizes A, and
B = P[1  0  0; 0  √3  0; 0  0  2]Pᵀ = [5/6 + √3/2  −1/3  5/6 − √3/2; −1/3  4/3  −1/3; 5/6 − √3/2  −1/3  5/6 + √3/2]
has the property that B² = A.
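The square-root property B² = A can be verified directly. The sketch below checks the 2 × 2 case of Exercise 27(a), assuming A = [5  2; 2  5] (the matrix with eigenvalues 3 and 7 from Exercise 25(a)); the `matmul` helper is ad hoc.

```python
import math

# Check of Exercise 27(a): B = P sqrt(D) P^T should satisfy B^2 = A for
# A = [[5, 2], [2, 5]] with eigenvalues 3 and 7.

def matmul(X, Y):
    return [[sum(X[i][k] * Y[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

p = (math.sqrt(7) + math.sqrt(3)) / 2
q = (math.sqrt(7) - math.sqrt(3)) / 2
B = [[p, q], [q, p]]
B2 = matmul(B, B)
print(B2)     # entries approximately [[5, 2], [2, 5]]
```

The identity works because squaring B squares only the diagonal factor: B² = P(√D)²Pᵀ = PDPᵀ = A.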
29. The quadratic form Q = 5x₁² + x₂² + kx₃² + 4x₁x₂ − 2x₁x₃ − 2x₂x₃ can be expressed in matrix nota-
tion as Q = xᵀAx where A = [5  2  −1; 2  1  −1; −1  −1  k]. The determinants of the principal submatrices of A are
|5| = 5, |5  2; 2  1| = 1, and det(A) = k − 2. Thus Q is positive definite if and only if k > 2.
30. The quadratic form can be expressed in matrix notation as Q = xᵀAx (details omitted). The
determinants of the principal submatrices of A are 3, 3, and 5 − 3k². Thus Q is positive definite
if and only if 5 − 3k² > 0, i.e., if and only if |k| < √(5/3).
31. (a) The matrix A has eigenvalues λ₁ = 3 and λ₂ = 15, with corresponding eigenvectors v₁ and v₂
(details omitted). Thus A is positive definite, there is an orthogonal matrix P that diagonalizes
A, and the matrix
B = P[√3  0; 0  √15]Pᵀ
has the property that B² = A; since B is symmetric, A = B² = BᵀB is a factorization of the required type.
(b) The LDU-decomposition (pp. 159-160) of the matrix A is A = LDU and, since L = Uᵀ, this
can be written as A = UᵀDU = (D^{1/2}U)ᵀ(D^{1/2}U),
which is a factorization of the required type.
32. (a) T(x + y) = (x + y)ᵀA(x + y) = (xᵀ + yᵀ)A(x + y) = xᵀAx + xᵀAy + yᵀAx + yᵀAy
= xᵀAx + 2xᵀAy + yᵀAy = T(x) + 2xᵀAy + T(y)
(b) T(cx) = (cx)ᵀA(cx) = c²(xᵀAx) = c²T(x)
33. We have (c₁x₁ + c₂x₂ + ⋯ + cₙxₙ)² = Σᵢ₌₁ⁿ cᵢ²xᵢ² + Σᵢ₌₁ⁿ Σⱼ₌ᵢ₊₁ⁿ 2cᵢcⱼxᵢxⱼ = xᵀAx where
A = [c₁²  c₁c₂  ⋯  c₁cₙ; c₁c₂  c₂²  ⋯  c₂cₙ; ⋮  ⋮  ⋱  ⋮; c₁cₙ  c₂cₙ  ⋯  cₙ²]
34. (a) For each i = 1, ..., n we have (xᵢ − x̄)² = xᵢ² − 2xᵢx̄ + x̄², where x̄ = (1/n)Σⱼxⱼ. Expanding and
collecting terms, in the quadratic form
s_x² = (1/(n − 1))[(x₁ − x̄)² + (x₂ − x̄)² + ⋯ + (xₙ − x̄)²]
the coefficient of xᵢ² is (1/(n − 1))(1 − 2/n + 1/n) = 1/n, and the coefficient of xᵢxⱼ for i ≠ j is
−2/(n(n − 1)). It follows that s_x² = xᵀAx where
A = [1/n  −1/(n(n − 1))  ⋯  −1/(n(n − 1)); −1/(n(n − 1))  1/n  ⋯  −1/(n(n − 1)); ⋮  ⋮  ⋱  ⋮; −1/(n(n − 1))  −1/(n(n − 1))  ⋯  1/n]
(b) We have s_x² = (1/(n − 1))[(x₁ − x̄)² + (x₂ − x̄)² + ⋯ + (xₙ − x̄)²] ≥ 0, with s_x² = 0 if and only if
x₁ = x̄, x₂ = x̄, ..., xₙ = x̄, i.e., if and only if x₁ = x₂ = ⋯ = xₙ. Thus s_x² is positive semidefinite.
35. (a) The quadratic form Q can be expressed in matrix notation as Q = xᵀAx. The matrix A has
eigenvalues λ = 2/3 and λ = 4/3. The vectors v₁ = [−1; 1; 0] and v₂ = [−1; 0; 1] form a basis for the
eigenspace corresponding to λ = 2/3, and v₃ = [1; 1; 1] forms a basis for the eigenspace corresponding
to λ = 4/3. Application of the Gram-Schmidt process to {v₁, v₂, v₃} produces orthonormal eigenvectors
{p₁, p₂, p₃}, and the matrix
P = [p₁ | p₂ | p₃] = [−1/√2  −1/√6  1/√3; 1/√2  −1/√6  1/√3; 0  2/√6  1/√3]
orthogonally diagonalizes A. Thus the change of variable x = Px′ converts Q into a quadratic
form in the variables x′ = (x′, y′, z′) without cross product terms:
Q = (2/3)x′² + (2/3)y′² + (4/3)z′²
From this we conclude that the equation Q = 1 corresponds to an ellipsoid with axis lengths
2√(3/2) = √6 in the x′ and y′ directions, and 2√(3/4) = √3 in the z′ direction.
(b) The matrix A must be positive definite.
DISCUSSION AND DISCOVERY

D1. (a) False. For example, a matrix with eigenvalues 1 and −3 is indefinite.
(b) False. The term 4x₁x₂x₃ is not quadratic in the variables x₁, x₂, x₃.
(c) True. When expanded, each of the terms of the resulting expression is quadratic (of degree
2) in the variables.
(d) True. The eigenvalues of a positive definite matrix A are strictly positive; in particular, 0 is
not an eigenvalue of A and so A is invertible.
(e) False. For example, a matrix with eigenvalues 1 and 0 is positive semidefinite but not positive definite.
(f) True. If the eigenvalues of A are positive, then the eigenvalues of −A are negative.

D2. (a) True. When written in matrix form, we have x · x = xᵀAx where A = I.
(b) True. If A has positive eigenvalues, then so does A⁻¹.
(c) True. See Theorem 8.4.3(a).
(d) True. Both of the principal submatrices of A will have a positive determinant.
(e) False. If A = [1  1; −1  1], then xᵀAx = x² + y² > 0 for x ≠ 0, yet A is not symmetric. On the
other hand, the statement is true if A is assumed to be symmetric.
(f) False. If c > 0 the graph is an ellipse; if c < 0 the graph is empty.

D3. The eigenvalues of A must be positive and equal to each other; in other words, A must have a
single positive eigenvalue of multiplicity 2.
WORKING WITH PROOFS
P1. Rotating the coordinate axes through an angle θ corresponds to the change of variable x = Px′
where P = [cos θ  −sin θ; sin θ  cos θ], i.e., x = x′cos θ − y′sin θ and y = x′sin θ + y′cos θ. Substituting these
expressions into the quadratic form ax² + 2bxy + cy² leads to Ax′² + Bx′y′ + Cy′², where the
coefficient of the cross product term is
B = −2a cos θ sin θ + 2b(cos²θ − sin²θ) + 2c cos θ sin θ = (−a + c) sin 2θ + 2b cos 2θ
Thus the resulting quadratic form in the variables x′ and y′ has no cross product term if and only
if (−a + c) sin 2θ + 2b cos 2θ = 0, or (equivalently) cot 2θ = (a − c)/(2b).

P2. From the Principal Axis Theorem (8.4.1), there is an orthogonal change of variable x = Py for
which xᵀAx = yᵀDy = λ₁y₁² + λ₂y₂², where λ₁ and λ₂ are the eigenvalues of A. Since λ₁ and λ₂
are nonnegative, it follows that xᵀAx ≥ 0 for every vector x in Rⁿ.
EXERCISE SET 8.5
1. (a) The first partial derivatives of f are f_x(x, y) = 4y − 4x³ and f_y(x, y) = 4x − 4y³. To find the
critical points we set f_x and f_y equal to zero. This yields the equations y = x³ and x = y³.
From this we conclude that y = y⁹, and so y = 0, y = 1, or y = −1. Since x = y³, the corresponding
values of x are x = 0, x = 1, and x = −1 respectively. Thus there are three critical points: (0, 0), (1, 1),
and (−1, −1).
(b) The Hessian matrix is H(x, y) = [f_xx(x, y)  f_xy(x, y); f_yx(x, y)  f_yy(x, y)] = [−12x²  4; 4  −12y²]. Evaluating this matrix
at the critical points of f yields
H(0, 0) = [0  4; 4  0]  and  H(1, 1) = H(−1, −1) = [−12  4; 4  −12]
The eigenvalues of H(0, 0) are λ = ±4; thus the matrix H(0, 0) is indefinite and so f has a
saddle point at (0, 0). The eigenvalues of H(1, 1) = H(−1, −1) are λ = −8 and λ = −16; thus this matrix
is negative definite and so f has a relative maximum at (1, 1) and at
(−1, −1).
The first partial derivatives off are f%(x, y) = 3x
2
- 6y and fv(x, y) = -6x- 3y
2
. To find the
points we set 1% and f
11
equal to zero. This yields the equations y = 4x
2
and x = -


From Lhis we conclude that y = h
4
and so y = 0 or y = 2. The corresponding values of x are
x = 0 and 4 - 2 respectavdy. Thus there are two critical points: (0, 0) and ( -2, 2).
The Hessian matrix is H(x,y) = [/u((x,y))
1
1
zv((:z:,y))] = (
6
x -
6
]. The eigenvalues of H(O,O} =
fvz :z;, Y 1111 %, Y -6 -611
[ are >. = 6; this matrix is indefinite and so f has a saddle point at (0, 0) . The
eigenvalues of H( -2, 2) = r-
12
-
6
] are >. = -6 and >. = -18, this matnx is negative definite
-6 -12
and so f has a relnLive maximum at ( -2, 2)
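The eigenvalue test used in Exercises 1 and 2 is easy to spot-check numerically. A minimal sketch, assuming the Hessian H(x, y) = [-12x^2  4; 4  -12y^2] of Exercise 1:

```python
import numpy as np

# Hessian of Exercise 1 (consistent with f_x = 4y - 4x^3, f_y = 4x - 4y^3).
def hessian(x, y):
    return np.array([[-12.0 * x**2, 4.0], [4.0, -12.0 * y**2]])

# Classify each critical point by the signs of the Hessian's eigenvalues.
for (x, y) in [(0, 0), (1, 1), (-1, -1)]:
    eig = np.linalg.eigvalsh(hessian(x, y))
    if np.all(eig > 0):
        kind = "relative minimum"
    elif np.all(eig < 0):
        kind = "relative maximum"
    else:
        kind = "saddle point"
    print((x, y), eig, kind)
```

Running this reports a saddle at (0, 0) and relative maxima at (1, 1) and (-1, -1), in agreement with the hand computation.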
13. The constraint equation 4x^2 + 8y^2 = 16 can be rewritten as (x/2)^2 + (y/√2)^2 = 1. Thus, with the change of variable (x, y) = (2x', √2 y'), the problem is to find the extreme values of z = xy = 2√2 x'y' subject to x'^2 + y'^2 = 1. Note that z = 2√2 x'y' can be expressed as z = x'^T Ax' where A = [0  √2; √2  0]. The eigenvalues of A are λ1 = √2 and λ2 = -√2, with corresponding (normalized) eigenvectors v1 = [1/√2; 1/√2] and v2 = [-1/√2; 1/√2]. Thus the constrained maximum is z = √2, occurring at (x', y') = (1/√2, 1/√2) or (x, y) = (√2, 1). Similarly, the constrained minimum is z = -√2, occurring at (x', y') = (-1/√2, 1/√2) or (x, y) = (-√2, 1).
14. The constraint x^2 + 3y^2 = 16 can be rewritten as (x/4)^2 + (√3 y/4)^2 = 1. Thus, setting (x, y) = (4x', (4/√3)y'), the problem is to find the extreme values of z = x^2 + xy + 2y^2 = 16x'^2 + (16/√3)x'y' + (32/3)y'^2 subject to x'^2 + y'^2 = 1. Note that z can be expressed as z = x'^T Ax' where A = [16  8/√3; 8/√3  32/3]. The eigenvalues of A are λ1 = 56/3 and λ2 = 8, with corresponding (normalized) eigenvectors u1 = [√3/2; 1/2] and u2 = [1/2; -√3/2]. Thus the constrained maximum is z = 56/3, occurring at (x', y') = (√3/2, 1/2) or (x, y) = (2√3, 2/√3). Similarly, the constrained minimum is z = 8, occurring at (x', y') = (1/2, -√3/2) or (x, y) = (2, -2).
15. The level curve corresponding to the constrained maximum is the hyperbola 5x^2 - y^2 = 5; it touches the unit circle at (x, y) = (±1, 0). The level curve corresponding to the constrained minimum is the hyperbola 5x^2 - y^2 = -1; it touches the unit circle at (x, y) = (0, ±1).
16. The level curve corresponding to the constrained maximum is the hyperbola xy = 1/2; it touches the unit circle at (x, y) = ±(1/√2, 1/√2). The level curve corresponding to the constrained minimum is the hyperbola xy = -1/2; it touches the unit circle at (x, y) = ±(-1/√2, 1/√2).
17. The area of the inscribed rectangle is z = 4xy, where (x, y) is the corner point that lies in the first quadrant. Our problem is to find the maximum value of z = 4xy subject to the constraints x ≥ 0, y ≥ 0, x^2 + 25y^2 = 25. The constraint equation can be rewritten as x'^2 + y'^2 = 1 where x = 5x' and y = y'. In terms of the variables x' and y', our problem is to find the maximum value of z = 20x'y' subject to x'^2 + y'^2 = 1, x' ≥ 0, y' ≥ 0. Note that z = x'^T Ax' where A = [0  10; 10  0]. The largest eigenvalue of A is λ = 10, with corresponding (normalized) eigenvector [1/√2; 1/√2]. Thus the maximum area is z = 10, and this occurs when (x', y') = (1/√2, 1/√2), i.e., (x, y) = (5/√2, 1/√2).
18. Our problem is to find the extreme values of z = 4x^2 - 4xy + y^2 subject to x^2 + y^2 = 25. Setting x = 5x' and y = 5y', this is equivalent to finding the extreme values of z = 100x'^2 - 100x'y' + 25y'^2 subject to x'^2 + y'^2 = 1. Note that z = x'^T Ax' where A = [100  -50; -50  25]. The eigenvalues of A are λ1 = 125 and λ2 = 0, with corresponding (normalized) eigenvectors v1 = [-2/√5; 1/√5] and v2 = [1/√5; 2/√5]. Thus the maximum temperature encountered by the ant is z = 125, and this occurs at (x', y') = (-2/√5, 1/√5) or (x, y) = (-2√5, √5). The minimum temperature encountered is z = 0, and this occurs at (x', y') = (1/√5, 2/√5) or (x, y) = (√5, 2√5).
DISCUSSION AND DISCOVERY
D1. (a) We have f_x(x, y) = 4x^3 and f_y(x, y) = 4y^3; thus f has a critical point at (0, 0). Similarly, g_x(x, y) = 4x^3 and g_y(x, y) = -4y^3, and so g has a critical point at (0, 0). The Hessian matrices for f and g are H_f(x, y) = [12x^2  0; 0  12y^2] and H_g(x, y) = [12x^2  0; 0  -12y^2] respectively. Since H_f(0, 0) = H_g(0, 0) = 0, the second derivative test is inconclusive in both cases.

(b) It is clear that f has a relative minimum at (0, 0) since f(0, 0) = 0 and f(x, y) = x^4 + y^4 is strictly positive at all other points (x, y). In contrast, we have g(0, 0) = 0, g(x, 0) = x^4 > 0 for x ≠ 0, and g(0, y) = -y^4 < 0 for y ≠ 0. Thus g has a saddle point at (0, 0).
D2. The eigenvalues of H are λ = 6 and λ = -2. Thus H is indefinite and so the critical points of f (if any) are saddle points. Starting from f_xx(x, y) = f_yy(x, y) = 2 and f_yx(x, y) = f_xy(x, y) = 4, it follows, using partial integration, that the quadratic form f is f(x, y) = x^2 + 4xy + y^2. This function has one critical point (a saddle) which is located at the origin.
D3. If x is a unit eigenvector corresponding to λ, then q(x) = x^T Ax = x^T(λx) = λ(x^T x) = λ(1) = λ.
WORKING WITH PROOFS
P1. First note that, as in D3, we have u_m^T Au_m = m and u_M^T Au_M = M. On the other hand, since u_m and u_M are orthogonal, we have u_m^T Au_M = M(u_m^T u_M) = M(0) = 0 and u_M^T Au_m = 0. It follows that if x_c = √((M - c)/(M - m)) u_m + √((c - m)/(M - m)) u_M, then

x_c^T Ax_c = ((M - c)/(M - m)) u_m^T Au_m + 0 + 0 + ((c - m)/(M - m)) u_M^T Au_M = ((M - c)/(M - m)) m + ((c - m)/(M - m)) M = c
EXERCISE SET 8.6
1. The characteristic polynomial of A^T A is λ^2(λ - 5); thus the eigenvalues of A^T A are λ1 = 5 and λ2 = 0, and σ1 = √5 is a singular value of A.

2. The eigenvalues of A^T A are λ1 = 16 and λ2 = 9; thus σ1 = √16 = 4 and σ2 = √9 = 3 are singular values of A.
3. The eigenvalues of A^T A are λ1 = 5 and λ2 = 5 (i.e., λ = 5 is an eigenvalue of multiplicity 2); thus the singular values of A are σ1 = √5 and σ2 = √5.

4. The eigenvalues of A^T A are λ1 = 4 and λ2 = 1; thus the singular values of A are σ1 = √4 = 2 and σ2 = √1 = 1.
5. The only eigenvalue of A^T A = [2  0; 0  2] is λ = 2 (multiplicity 2), and the vectors v1 = [1; 0] and v2 = [0; 1] form an orthonormal basis for the eigenspace (which is all of R^2). The singular values of A are σ1 = √2 and σ2 = √2. We have u1 = (1/σ1)Av1 = (1/√2)[1; 1] = [1/√2; 1/√2] and u2 = (1/σ2)Av2 = (1/√2)[-1; 1] = [-1/√2; 1/√2]. This results in the following singular value decomposition of A:

A = [1  -1; 1  1] = [1/√2  -1/√2; 1/√2  1/√2] [√2  0; 0  √2] [1  0; 0  1] = UΣV^T
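Any of the decompositions in this set can be verified by direct multiplication; a sketch for the SVD of Exercise 5:

```python
import numpy as np

# Verify the singular value decomposition A = U Σ V^T found in Exercise 5.
A = np.array([[1.0, -1.0], [1.0, 1.0]])
r2 = np.sqrt(2.0)
U = np.array([[1/r2, -1/r2], [1/r2, 1/r2]])
Sigma = np.diag([r2, r2])
Vt = np.eye(2)

print(np.allclose(U @ Sigma @ Vt, A))       # the product reproduces A
print(np.allclose(U.T @ U, np.eye(2)))      # U has orthonormal columns
print(np.linalg.svd(A, compute_uv=False))   # singular values, both √2
```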
6. The eigenvalues of A^T A = [-3  0; 0  -4][-3  0; 0  -4] = [9  0; 0  16] are λ1 = 16 and λ2 = 9, with corresponding unit eigenvectors v1 = [0; 1] and v2 = [1; 0] respectively. The singular values of A are σ1 = 4 and σ2 = 3. We have u1 = (1/σ1)Av1 = (1/4)[0; -4] = [0; -1] and u2 = (1/σ2)Av2 = (1/3)[-3; 0] = [-1; 0]. This results in the following singular value decomposition:

A = [-3  0; 0  -4] = [0  -1; -1  0] [4  0; 0  3] [0  1; 1  0] = UΣV^T
7. The eigenvalues of A^T A = [4  0; 6  4][4  6; 0  4] = [16  24; 24  52] are λ1 = 64 and λ2 = 4, with corresponding unit eigenvectors v1 = [1/√5; 2/√5] and v2 = [-2/√5; 1/√5] respectively. The singular values of A are σ1 = 8 and σ2 = 2. We have

u1 = (1/σ1)Av1 = (1/8)[4  6; 0  4][1/√5; 2/√5] = [2/√5; 1/√5]   and   u2 = (1/σ2)Av2 = (1/2)[4  6; 0  4][-2/√5; 1/√5] = [-1/√5; 2/√5]

This results in the following singular value decomposition:

A = [4  6; 0  4] = [2/√5  -1/√5; 1/√5  2/√5] [8  0; 0  2] [1/√5  2/√5; -2/√5  1/√5] = UΣV^T
8. The eigenvalues of A^T A = [3  3; 3  3][3  3; 3  3] = [18  18; 18  18] are λ1 = 36 and λ2 = 0, with corresponding unit eigenvectors v1 = [1/√2; 1/√2] and v2 = [-1/√2; 1/√2] respectively. The only singular value of A is σ1 = 6, and we have u1 = (1/σ1)Av1 = (1/6)[3  3; 3  3][1/√2; 1/√2] = [1/√2; 1/√2]. The vector u2 must be chosen so that {u1, u2} is an orthonormal basis for R^2, e.g., u2 = [-1/√2; 1/√2]. This results in the following singular value decomposition:

A = [3  3; 3  3] = [1/√2  -1/√2; 1/√2  1/√2] [6  0; 0  0] [1/√2  1/√2; -1/√2  1/√2] = UΣV^T
9. The eigenvalues of A^T A = [9  -9; -9  9] are λ1 = 18 and λ2 = 0, with corresponding unit eigenvectors v1 = [1/√2; -1/√2] and v2 = [1/√2; 1/√2] respectively. The only singular value of A is σ1 = √18 = 3√2, and we have u1 = (1/σ1)Av1 = (1/(3√2))[-2  2; -1  1; 2  -2][1/√2; -1/√2] = [-2/3; -1/3; 2/3]. We must choose the vectors u2 and u3 so that {u1, u2, u3} is an orthonormal basis for R^3, e.g., u2 = [1/√2; 0; 1/√2] and u3 = [-1/(3√2); 4/(3√2); 1/(3√2)]. This results in the following singular value decomposition:

A = [-2  2; -1  1; 2  -2] = [-2/3  1/√2  -1/(3√2); -1/3  0  4/(3√2); 2/3  1/√2  1/(3√2)] [3√2  0; 0  0; 0  0] [1/√2  -1/√2; 1/√2  1/√2] = UΣV^T
Note. The singular value decomposition is not unique; it depends on the choice of the (extended) orthonormal basis for R^3. This is just one possibility.
10. The eigenvalues of A^T A = [8  4  -8; 4  2  -4; -8  -4  8] are λ1 = 18 and λ2 = λ3 = 0. The vector v1 = (1/3)[-2; -1; 2] is a unit eigenvector corresponding to λ1 = 18. The vectors [-1; 2; 0] and [1; 0; 1] form a basis for the eigenspace corresponding to λ = 0, and application of the Gram-Schmidt process to these vectors yields orthonormal eigenvectors v2 = (1/√5)[-1; 2; 0] and v3 = (1/(3√5))[4; 2; 5]. The only singular value of A is σ1 = √18 = 3√2, and we have u1 = (1/σ1)Av1 = [1/√2; -1/√2]. We must choose the vector u2 so that {u1, u2} is an orthonormal basis for R^2, e.g., u2 = [1/√2; 1/√2]. This results in the following singular value decomposition:

A = [-2  -1  2; 2  1  -2] = [1/√2  1/√2; -1/√2  1/√2] [3√2  0  0; 0  0  0] [-2/3  -1/3  2/3; -1/√5  2/√5  0; 4/(3√5)  2/(3√5)  5/(3√5)] = UΣV^T
11. The eigenvalues of A^T A = [3  0; 0  2] are λ1 = 3 and λ2 = 2, with corresponding unit eigenvectors v1 = [1; 0] and v2 = [0; 1] respectively. The singular values of A are σ1 = √3 and σ2 = √2. We have u1 = (1/σ1)Av1 = (1/√3)[1; 1; -1] and u2 = (1/σ2)Av2 = (1/√2)[0; 1; 1], and we choose u3 = (1/√6)[2; -1; 1] so that {u1, u2, u3} is an orthonormal basis for R^3. This results in the following singular value decomposition:

A = [1  0; 1  1; -1  1] = [1/√3  0  2/√6; 1/√3  1/√2  -1/√6; -1/√3  1/√2  1/√6] [√3  0; 0  √2; 0  0] [1  0; 0  1] = UΣV^T
12. The eigenvalues of A^T A are λ1 = 64 and λ2 = 4, with corresponding unit eigenvectors v1 and v2 respectively. The singular values of A are σ1 = √64 = 8 and σ2 = √4 = 2. We have u1 = (1/σ1)Av1 and u2 = (1/σ2)Av2, and u3 is chosen so that {u1, u2, u3} is an orthonormal basis for R^3. This results in a singular value decomposition A = UΣV^T with Σ = [8  0; 0  2; 0  0] (details omitted).

13. Using the singular value decomposition A = [4  6; 0  4] = UΣV^T found in Exercise 7, we have the following polar decomposition of A:

A = [4  6; 0  4] = (UΣU^T)(UV^T) = [34/5  12/5; 12/5  16/5] [4/5  3/5; -3/5  4/5] = PQ
14. Using the singular value decomposition A = [3  3; 3  3] = UΣV^T found in Exercise 8, we have the following polar decomposition of A:

A = [3  3; 3  3] = (UΣU^T)(UV^T) = [3  3; 3  3] [1  0; 0  1] = PQ
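The polar factors in Exercises 13 and 14 can be built directly from a numerically computed SVD; a sketch using the matrix A = [4  6; 0  4] of Exercise 7:

```python
import numpy as np

# Polar decomposition A = PQ from the SVD, as in Exercise 13:
# P = U Σ U^T (symmetric positive semidefinite), Q = U V^T (orthogonal).
A = np.array([[4.0, 6.0], [0.0, 4.0]])
U, s, Vt = np.linalg.svd(A)
P = U @ np.diag(s) @ U.T
Q = U @ Vt

print(np.allclose(P @ Q, A))             # P Q reproduces A
print(np.allclose(P, P.T))               # P is symmetric
print(np.allclose(Q @ Q.T, np.eye(2)))   # Q is orthogonal
print(5 * P)                             # ≈ [[34, 12], [12, 16]]
```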
15. In Exercise 11 we found the following singular value decomposition:

A = [1  0; 1  1; -1  1] = [1/√3  0  2/√6; 1/√3  1/√2  -1/√6; -1/√3  1/√2  1/√6] [√3  0; 0  √2; 0  0] [1  0; 0  1] = UΣV^T

Since A has rank 2, the corresponding reduced singular value decomposition is

A = [1/√3  0; 1/√3  1/√2; -1/√3  1/√2] [√3  0; 0  √2] [1  0; 0  1] = U1Σ1V1^T

and the reduced singular value expansion is

A = √3 [1/√3; 1/√3; -1/√3][1  0] + √2 [0; 1/√2; 1/√2][0  1]
16. In Exercise 10 we found the following singular value decomposition:

A = [-2  -1  2; 2  1  -2] = [1/√2  1/√2; -1/√2  1/√2] [3√2  0  0; 0  0  0] V^T

Since A has rank 1, the corresponding reduced singular value decomposition is

A = [1/√2; -1/√2] [3√2] [-2/3  -1/3  2/3] = U1Σ1V1^T

and the singular value expansion is A = 3√2 [1/√2; -1/√2][-2/3  -1/3  2/3] = 3√2 u1 v1^T.
17. The characteristic polynomial of A is (λ + 1)(λ - 3)^2; thus λ = -1 is an eigenvalue of multiplicity 1 and λ = 3 is an eigenvalue of multiplicity 2. The vector v1 = (1/√2)[-1; 1; 0] forms a basis for the eigenspace corresponding to λ = -1, and the vectors v2 = (1/√2)[1; 1; 0] and v3 = [0; 0; 1] form an (orthogonal) basis for the eigenspace corresponding to λ = 3. Thus the matrix P = [v1  v2  v3] orthogonally diagonalizes A, and the eigenvalue decomposition of A is

A = [1  2  0; 2  1  0; 0  0  3] = [-1/√2  1/√2  0; 1/√2  1/√2  0; 0  0  1] [-1  0  0; 0  3  0; 0  0  3] [-1/√2  1/√2  0; 1/√2  1/√2  0; 0  0  1]^T
The corresponding singular value decomposition of A is obtained by shifting the negative sign from the diagonal factor to the second orthogonal factor:

A = [1  2  0; 2  1  0; 0  0  3] = [-1/√2  1/√2  0; 1/√2  1/√2  0; 0  0  1] [1  0  0; 0  3  0; 0  0  3] [1/√2  -1/√2  0; 1/√2  1/√2  0; 0  0  1] = UΣV^T
18. The characteristic polynomial of A is (λ + 1)(λ + 2)(λ - 4); thus the eigenvalues of A are λ1 = -1, λ2 = -2, and λ3 = 4. Corresponding unit eigenvectors are v1 = (1/√5)[2; 0; 1], v2 = [0; 1; 0], and v3 = (1/√5)[1; 0; -2]. The matrix P = [v1  v2  v3] orthogonally diagonalizes A, and the eigenvalue decomposition of A is

A = [0  0  -2; 0  -2  0; -2  0  3] = [2/√5  0  1/√5; 0  1  0; 1/√5  0  -2/√5] [-1  0  0; 0  -2  0; 0  0  4] [2/√5  0  1/√5; 0  1  0; 1/√5  0  -2/√5]^T

The corresponding singular value decomposition of A is obtained by shifting the negative signs from the diagonal factor to the second orthogonal factor:

A = [2/√5  0  1/√5; 0  1  0; 1/√5  0  -2/√5] [1  0  0; 0  2  0; 0  0  4] [-2/√5  0  -1/√5; 0  -1  0; 1/√5  0  -2/√5] = UΣV^T
19. (a) The vectors u1 and u2 form a basis for col(A), and u3 forms a basis for col(A)⊥ = null(A^T); the vectors v1 and v2 form a basis for row(A), and v3 forms a basis for row(A)⊥ = null(A).

(b) A = U1Σ1V1^T, the reduced singular value decomposition of A formed from these vectors (details omitted).
20. Since A = UΣV^T and V is orthogonal, we have AV = UΣ. Written in column vector form this is

Av_j = σ_j u_j  (j = 1, 2, ..., k)   and   Av_j = 0  (j = k + 1, ..., n)

and, since T(x) = Ax, it follows that [T(v_j)]_B' = σ_j e_j for j = 1, 2, ..., k and [T(v_j)]_B' = 0 for j = k + 1, ..., n. Thus [T]_B',B = Σ.
21. Since A^T A is symmetric and positive semidefinite, its eigenvalues are nonnegative, and its singular values are the same as its nonzero eigenvalues. On the other hand, the singular values of A are the square roots of the nonzero eigenvalues of A^T A. Thus the singular values of A^T A are the squares of the singular values of A.
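The relationship in Exercise 21 is easy to confirm numerically on a random matrix:

```python
import numpy as np

# Exercise 21, checked numerically: the singular values of A^T A
# are the squares of the singular values of A.
rng = np.random.default_rng(0)
A = rng.standard_normal((4, 3))

s_A = np.linalg.svd(A, compute_uv=False)           # singular values of A
s_AtA = np.linalg.svd(A.T @ A, compute_uv=False)   # singular values of A^T A

print(np.allclose(s_AtA, s_A**2))   # True
```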
22. If A = UΣV^T is a singular value decomposition of A, then

AA^T = (UΣV^T)(VΣ^T U^T) = UΣΣ^T U^T = UDU^T

where D is the diagonal matrix having the eigenvalues of AA^T (the squares of the singular values of A) on its main diagonal. Thus U^T AA^T U = D, i.e., U orthogonally diagonalizes AA^T.
23. We have Q = [√3/2  1/2; -1/2  √3/2] = [cos θ  -sin θ; sin θ  cos θ] where θ = 330°; thus multiplication by Q corresponds to rotation about the origin through an angle of 330°. The symmetric matrix P has eigenvalues λ = 3 and λ = 1, with corresponding unit eigenvectors u1 = [√3/2; 1/2] and u2 = [-1/2; √3/2]. Thus V = [u1  u2] is a diagonalizing matrix for P:

V^T PV = [3  0; 0  1]

From this we conclude that multiplication by P stretches R^2 by a factor of 3 in the direction of u1 and by a factor of 1 in the direction of u2.
DISCUSSION AND DISCOVERY

D1. (a) If A = UΣV^T is a singular value decomposition of an m × n matrix of rank k, then U has size m × m, Σ has size m × n, and V has size n × n.

(b) If A = U1Σ1V1^T is a reduced singular value decomposition of an m × n matrix of rank k, then U1 has size m × k, Σ1 has size k × k, and V1 has size n × k.

D2. If A is an invertible matrix, then its eigenvalues are nonzero. Thus if A = UΣV^T is a singular value decomposition of A, then Σ is invertible and A^{-1} = (V^T)^{-1}Σ^{-1}U^{-1} = VΣ^{-1}U^T. Note also that the diagonal entries of Σ^{-1} are the reciprocals of the diagonal entries of Σ, which are the singular values of A^{-1}. Thus A^{-1} = VΣ^{-1}U^T is a singular value decomposition of A^{-1}.

D3. If A and B are orthogonally similar matrices, then there is an orthogonal matrix P such that B = PAP^T. Thus, if A = UΣV^T is a singular value decomposition of A, then

B = PAP^T = P(UΣV^T)P^T = (PU)Σ(PV)^T

is a singular value decomposition of B. It follows that B and A have the same singular values (the nonzero diagonal entries of Σ).

D4. If P is the matrix for the orthogonal projection of R^n onto a subspace W of dimension k, then P^2 = P and the eigenvalues of P are λ = 1 (with multiplicity k) and λ = 0 (with multiplicity n - k). Thus the singular values of P are σ1 = 1, σ2 = 1, ..., σk = 1.
EXERCISE SET 8.7
1. We have A^T A = [3  4][3; 4] = [25]; thus A+ = (A^T A)^{-1}A^T = (1/25)[3  4] = [3/25  4/25].
2. We have A^T A = [1  1  2; 3  1  1][1  3; 1  1; 2  1] = [6  6; 6  11]; thus the pseudoinverse of A is

A+ = (A^T A)^{-1}A^T = (1/30)[11  -6; -6  6][1  1  2; 3  1  1] = [-7/30  1/6  8/15; 2/5  0  -1/5]
3. We have A^T A = [7  0  5; 1  0  5][7  1; 0  0; 5  5] = [74  32; 32  26]; thus the pseudoinverse of A is

A+ = (A^T A)^{-1}A^T = (1/900)[26  -32; -32  74][7  0  5; 1  0  5] = [1/6  0  -1/30; -1/6  0  7/30]
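The formula A+ = (A^T A)^{-1}A^T can be checked against an SVD-based pseudoinverse; a sketch using A = [1  3; 1  1; 2  1], the matrix of Exercise 2 (its entries, inferred here from the printed A^T A = [6  6; 6  11], are an assumption):

```python
import numpy as np

# Pseudoinverse via A+ = (A^T A)^{-1} A^T (valid for full column rank),
# compared against numpy's SVD-based pinv.
A = np.array([[1.0, 3.0], [1.0, 1.0], [2.0, 1.0]])
Aplus = np.linalg.inv(A.T @ A) @ A.T

print(np.allclose(Aplus, np.linalg.pinv(A)))   # formula agrees with pinv
print(np.allclose(Aplus @ A, np.eye(2)))       # A+ A = I for full column rank
```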
5. (a) AA+A = A(A+A) = A[1] = [3; 4] = A

(b) A+AA+ = (A+A)A+ = [1][3/25  4/25] = [3/25  4/25] = A+

(c) AA+ = [3; 4][3/25  4/25] = (1/25)[9  12; 12  16] is symmetric; thus (AA+)^T = AA+.

(d) A+A = [1] is symmetric; thus (A+A)^T = A+A.

(e) The eigenvalues of AA^T = [9  12; 12  16] are λ1 = 25 and λ2 = 0, with corresponding unit eigenvectors v1 = [3/5; 4/5] and v2 = [-4/5; 3/5] respectively. The only singular value of A^T is σ1 = 5, and we have u1 = (1/σ1)A^T v1 = (1/5)[3  4][3/5; 4/5] = [1]. This results in the singular value decomposition

A^T = [3  4] = [1][5  0][3/5  4/5; -4/5  3/5] = UΣV^T
The corresponding reduced singular value decomposition is

A^T = [3  4] = [1][5][3/5  4/5] = U1Σ1V1^T

and from this we obtain

(A^T)+ = V1Σ1^{-1}U1^T = [3/5; 4/5](1/5)[1] = [3/25; 4/25] = (A+)^T

(f) The eigenvalues of (A+)^T A+ = (1/625)[9  12; 12  16] are λ1 = 1/25 and λ2 = 0, with corresponding unit eigenvectors v1 = [3/5; 4/5] and v2 = [-4/5; 3/5] respectively. The only singular value of A+ is σ1 = 1/5, and we have u1 = (1/σ1)A+v1 = 5[3/25  4/25][3/5; 4/5] = [1]. This results in the singular value decomposition

A+ = [3/25  4/25] = [1][1/5  0][3/5  4/5; -4/5  3/5] = UΣV^T

The corresponding reduced singular value decomposition is

A+ = [3/25  4/25] = [1][1/5][3/5  4/5] = U1Σ1V1^T

and from this we obtain

(A+)+ = V1Σ1^{-1}U1^T = [3/5; 4/5][5][1] = [3; 4] = A

6. (a) AA+A = A(A+A) = [1  3; 1  1; 2  1][1  0; 0  1] = A

(b) A+AA+ = (A+A)A+ = [1  0; 0  1]A+ = A+

(c) AA+ = [1  3; 1  1; 2  1][-7/30  1/6  8/15; 2/5  0  -1/5] = [29/30  1/6  -1/15; 1/6  1/6  1/3; -1/15  1/3  13/15] is symmetric; thus (AA+)^T = AA+.

(d) A+A = [1  0; 0  1] is symmetric; thus (A+A)^T = A+A.

(e) The eigenvalues of AA^T = [10  4  5; 4  2  3; 5  3  5] are λ1 = 15, λ2 = 2, and λ3 = 0, with corresponding unit eigenvectors v1 = (1/√195)[11; 5; 7], v2 = (1/√26)[-3; 1; 4], and v3 = (1/√30)[1; -5; 2]. The singular values of A^T are σ1 = √15 and σ2 = √2. Setting u1 = (1/σ1)A^T v1 = (1/√13)[2; 3] and u2 = (1/σ2)A^T v2 = (1/√13)[3; -2], we have

A^T = [1  1  2; 3  1  1] = [2/√13  3/√13; 3/√13  -2/√13] [√15  0; 0  √2] [11/√195  5/√195  7/√195; -3/√26  1/√26  4/√26] = U1Σ1V1^T

and from this it follows that

(A^T)+ = V1Σ1^{-1}U1^T = [11/√195  -3/√26; 5/√195  1/√26; 7/√195  4/√26] [1/√15  0; 0  1/√2] [2/√13  3/√13; 3/√13  -2/√13] = [-7/30  2/5; 1/6  0; 8/15  -1/5] = (A+)^T

(f) The eigenvalues of (A+)^T A+ are λ1 = 1/2, λ2 = 1/15, and λ3 = 0, with corresponding unit eigenvectors v1 = (1/√26)[-3; 1; 4], v2 = (1/√195)[11; 5; 7], and v3 = (1/√30)[1; -5; 2]. The singular values of A+ are σ1 = 1/√2 and σ2 = 1/√15. Setting

u1 = (1/σ1)A+v1 = √2 (1/√26)[3; -2] = (1/√13)[3; -2]   and   u2 = (1/σ2)A+v2 = √15 (1/√195)[2; 3] = (1/√13)[2; 3]

we have

A+ = [-7/30  1/6  8/15; 2/5  0  -1/5] = [3/√13  2/√13; -2/√13  3/√13] [1/√2  0; 0  1/√15] [-3/√26  1/√26  4/√26; 11/√195  5/√195  7/√195] = U1Σ1V1^T

and from this it follows that

(A+)+ = V1Σ1^{-1}U1^T = [-3/√26  11/√195; 1/√26  5/√195; 4/√26  7/√195] [√2  0; 0  √15] [3/√13  -2/√13; 2/√13  3/√13] = [1  3; 1  1; 2  1] = A
7. The only eigenvalue of A^T A = [25] is λ1 = 25, with corresponding unit eigenvector v1 = [1]. The only singular value of A is σ1 = 5. We have u1 = (1/σ1)Av1 = (1/5)[3; 4][1] = [3/5; 4/5], and we choose u2 = [-4/5; 3/5] so that {u1, u2} is an orthonormal basis for R^2. This results in the singular value decomposition

A = [3; 4] = [3/5  -4/5; 4/5  3/5][5; 0][1] = UΣV^T
The corresponding reduced singular value decomposition is

A = [3; 4] = [3/5; 4/5][5][1] = U1Σ1V1^T

and from this we obtain

A+ = V1Σ1^{-1}U1^T = [1](1/5)[3/5  4/5] = [3/25  4/25]
8. The eigenvalues of A^T A = [6  6; 6  11] are λ1 = 15 and λ2 = 2, with corresponding unit eigenvectors v1 = (1/√13)[2; 3] and v2 = (1/√13)[3; -2]. The singular values of A are σ1 = √15 and σ2 = √2. Setting

u1 = (1/σ1)Av1 = (1/√195)[11; 5; 7]   and   u2 = (1/σ2)Av2 = (1/√26)[-3; 1; 4]

we have

A = [1  3; 1  1; 2  1] = [11/√195  -3/√26; 5/√195  1/√26; 7/√195  4/√26] [√15  0; 0  √2] [2/√13  3/√13; 3/√13  -2/√13] = U1Σ1V1^T

and it follows that

A+ = V1Σ1^{-1}U1^T = [-7/30  1/6  8/15; 2/5  0  -1/5]
9. The eigenvalues of A^T A = [74  32; 32  26] are λ1 = 90 and λ2 = 10, with corresponding unit eigenvectors v1 = (1/√5)[2; 1] and v2 = (1/√5)[-1; 2]. The singular values of A are σ1 = √90 = 3√10 and σ2 = √10. We have u1 = (1/σ1)Av1 = [1/√2; 0; 1/√2] and u2 = (1/σ2)Av2 = [-1/√2; 0; 1/√2], and we choose u3 = [0; 1; 0] so that {u1, u2, u3} is an orthonormal basis for R^3. This yields the singular value decomposition

A = [7  1; 0  0; 5  5] = [1/√2  -1/√2  0; 0  0  1; 1/√2  1/√2  0] [3√10  0; 0  √10; 0  0] [2/√5  1/√5; -1/√5  2/√5] = UΣV^T
The corresponding reduced singular value decomposition is

A = [7  1; 0  0; 5  5] = [1/√2  -1/√2; 0  0; 1/√2  1/√2] [3√10  0; 0  √10] [2/√5  1/√5; -1/√5  2/√5] = U1Σ1V1^T
and from this we obtain

A+ = V1Σ1^{-1}U1^T = [2/√5  -1/√5; 1/√5  2/√5] [1/(3√10)  0; 0  1/√10] [1/√2  0  1/√2; -1/√2  0  1/√2] = [1/6  0  -1/30; -1/6  0  7/30]
10. The only eigenvalue of A^T A = [41] is λ1 = 41, with corresponding unit eigenvector v1 = [1]. The only singular value of A is σ1 = √41. We have u1 = (1/σ1)Av1 = (1/√41)[4; 5][1] = [4/√41; 5/√41], and we choose u2 = [-5/√41; 4/√41] so that {u1, u2} is an orthonormal basis for R^2. This results in the singular value decomposition

A = [4; 5] = [4/√41  -5/√41; 5/√41  4/√41][√41; 0][1] = UΣV^T

The corresponding reduced singular value decomposition is

A = [4; 5] = [4/√41; 5/√41][√41][1] = U1Σ1V1^T

and from this we obtain

A+ = V1Σ1^{-1}U1^T = [1](1/√41)[4/√41  5/√41] = [4/41  5/41]
11. Since A = [2  2; -1  1] has full column rank, we have A+ = (A^T A)^{-1}A^T; thus

A+ = [5  3; 3  5]^{-1}[2  -1; 2  1] = (1/16)[5  -3; -3  5][2  -1; 2  1] = (1/4)[1  -2; 1  2]
12. Since A has full column rank, we have A+ = (A^T A)^{-1}A^T (details omitted).
13. The matrix A = [1  1; 1  1] does not have full column rank, so Formula (3) does not apply. The eigenvalues of A^T A = [2  2; 2  2] are λ1 = 4 and λ2 = 0, with corresponding unit eigenvectors v1 = [1/√2; 1/√2] and v2 = [-1/√2; 1/√2]. The only singular value of A is σ1 = 2. We have u1 = (1/σ1)Av1 = (1/2)[1  1; 1  1][1/√2; 1/√2] = [1/√2; 1/√2], and we choose u2 = [-1/√2; 1/√2] so that {u1, u2} is an orthonormal basis for R^2. This results in the singular value decomposition

A = [1  1; 1  1] = [1/√2  -1/√2; 1/√2  1/√2] [2  0; 0  0] [1/√2  1/√2; -1/√2  1/√2] = UΣV^T

The corresponding reduced singular value decomposition is

A = [1/√2; 1/√2][2][1/√2  1/√2] = U1Σ1V1^T
and from this we obtain

A+ = V1Σ1^{-1}U1^T = [1/√2; 1/√2](1/2)[1/√2  1/√2] = [1/4  1/4; 1/4  1/4]
14. The matrix A does not have full column rank, so Formula (3) does not apply. By inspection, a singular value decomposition A = UΣV^T of A can be written down directly; the corresponding reduced singular value decomposition A = U1Σ1V1^T then yields A+ = V1Σ1^{-1}U1^T (details omitted).
15. The standard matrix for the orthogonal projection of R^3 onto col(A) is

AA+ = [1  3; 1  1; 2  1][-7/30  1/6  8/15; 2/5  0  -1/5] = [29/30  1/6  -1/15; 1/6  1/6  1/3; -1/15  1/3  13/15]
16. The standard matrix for the orthogonal projection of R^3 onto col(A) is

AA+ = [7  1; 0  0; 5  5][1/6  0  -1/30; -1/6  0  7/30] = [1  0  0; 0  0  0; 0  0  1]
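As Exercises 15 and 16 illustrate, AA+ is the orthogonal projection onto col(A), and its defining properties are easy to confirm numerically. A sketch using the Exercise 3 matrix (whose entries, inferred from the printed A^T A, are an assumption):

```python
import numpy as np

# AA+ is the standard matrix of the orthogonal projection onto col(A).
A = np.array([[7.0, 1.0], [0.0, 0.0], [5.0, 5.0]])
P = A @ np.linalg.pinv(A)

print(np.round(P, 10))          # projection onto the x1x3-plane
print(np.allclose(P @ P, P))    # idempotent
print(np.allclose(P, P.T))      # symmetric
print(np.allclose(P @ A, A))    # fixes every vector in col(A)
```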
17. The given system can be written in matrix form as Ax = b. The matrix A has a reduced singular value decomposition A = U1Σ1V1^T, from which the pseudoinverse A+ = V1Σ1^{-1}U1^T can be computed (details omitted). Thus the least squares solution of minimum norm for the system Ax = b is x = A+b.
18. The given system can be written as Ax = b where A = [1  1; 2  3; 2  1]. The pseudoinverse of A is

A+ = (A^T A)^{-1}A^T = (1/18)[11  -9; -9  9][1  2  2; 1  3  1] = [1/9  -5/18  13/18; 0  1/2  -1/2]

Thus the least squares solution of minimum norm is x = A+b.
19. Since A^T has full column rank, we have (A+)^T = (A^T)+ = (AA^T)^{-1}A = (14)^{-1}[1  2  3] = [1/14  1/7  3/14], and so A+ = [1/14; 1/7; 3/14].
20. The matrix A^T has full column rank and, from Exercise 18, we have (A^T)+ = [1/9  -5/18  13/18; 0  1/2  -1/2]; thus

A+ = ((A^T)+)^T = [1/9  0; -5/18  1/2; 13/18  -1/2]
DISCUSSION AND DISCOVERY

D1. If A is an m × n matrix with orthogonal (nonzero) column vectors a1, a2, ..., an, then A^T A = diag(‖a1‖^2, ‖a2‖^2, ..., ‖an‖^2) and

A+ = (A^T A)^{-1}A^T = [a1^T/‖a1‖^2; a2^T/‖a2‖^2; ...; an^T/‖an‖^2]

Note. If the columns of A are orthonormal, then A+ = A^T.

D2. (a) If A = auv^T, where u and v are unit vectors, then A+ = (1/a)vu^T.

(b) A+A = ((1/a)vu^T)(auv^T) = v(u^T u)v^T = vv^T and AA+ = (auv^T)((1/a)vu^T) = uu^T.

D3. If c is a nonzero scalar, then (cA)+ = (1/c)A+.

D5. (a) AA+ and A+A are the standard matrices of orthogonal projection operators.

(b) Using parts (a) and (b) of Theorem 8.7.2, we have (AA+)(AA+) = A(A+AA+) = AA+ and (A+A)(A+A) = A+(AA+A) = A+A; thus AA+ and A+A are idempotent.
WORKING WITH PROOFS

P1. If A has rank k, then we have U1^T U1 = V1^T V1 = I_k; thus ...

P5. Using P4 and P1, we have A+AA+ = A+.

P6. Using P4 and P2, we have (AA+)^T = (A++A+)^T = A++A+ = AA+.

P7. First note that, as in Exercise P2, we have AA+ = (U1Σ1V1^T)(V1Σ1^{-1}U1^T) = U1U1^T. Thus, since the columns of U1 form an orthonormal basis for col(A), the matrix AA+ = U1U1^T is the standard matrix of the orthogonal projection of R^m onto col(A).

P8. It follows from Exercise P7 that A^T(A^T)+ is the standard matrix of the orthogonal projection of R^n onto col(A^T) = row(A). Furthermore, using parts (d), (e), and (f) of Theorem 8.7.2, we have

A^T(A^T)+ = A^T(A+)^T = (A+A)^T = A+A

and so A+A is the matrix of the orthogonal projection of R^n onto row(A).
