(Solution Manual) Contemporary Linear Algebra by Howard Anton, Robert C. Busby
2. (a) and (d) are linear. (b) is not linear because of the xyz term. (c) is not linear because of the x1^(3/5) term.
5. (a), (d), and (e) are solutions; these sets of values satisfy all three equations. (b) and (c) are not solutions.

6. (b), (d), and (e) are solutions; these sets of values satisfy all three equations. (a) and (c) are not solutions.
7. The three lines intersect at the point (1, 0) (see figure). The values x = 1, y = 0 satisfy all three equations, and this is the unique solution of the system. The augmented matrix of the system is

[1  2 | 1]
[2 -3 | 2]
[3 -3 | 3]

Add -2 times row 1 to row 2 and add -3 times row 1 to row 3:

[1  2 | 1]
[0 -7 | 0]
[0 -9 | 0]

Multiply row 2 by -1/7 and add 9 times the new row 2 to row 3:

[1 2 | 1]
[0 1 | 0]
[0 0 | 0]

From the last row we see that the system is redundant (reduces to only two equations). From the second row we see that y = 0 and, from back substitution, it follows that x = 1 - 2y = 1.
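As a quick numerical cross-check (a NumPy sketch, not part of the printed manual; it assumes the system x + 2y = 1, 2x - 3y = 2, 3x - 3y = 3 as reconstructed above):

```python
import numpy as np

# The three lines of Exercise 7; the system is overdetermined but consistent.
A = np.array([[1.0, 2.0], [2.0, -3.0], [3.0, -3.0]])
b = np.array([1.0, 2.0, 3.0])
sol, residual, rank, _ = np.linalg.lstsq(A, b, rcond=None)
print(sol)                      # [1. 0.]
print(np.allclose(A @ sol, b))  # True: (1, 0) satisfies all three equations
```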
8. The three lines do not intersect in a common point (see figure). This system has no solution.
9. (a) The solution set of the equation 7x - 5y = 3 can be described parametrically by (for example) solving the equation for x in terms of y and then making y into a parameter. This leads to x = (3 + 5t)/7, y = t, where -oo < t < oo.
(b) The solution set of 3x1 - 5x2 + 4x3 = 7 can be described by solving the equation for x1 in terms of x2 and x3, then making x2 and x3 into parameters. This leads to x1 = (7 + 5s - 4t)/3, x2 = s, x3 = t, where -oo < s, t < oo.
(c) The solution set of -8x1 + 2x2 - 5x3 + 6x4 = 1 can be described by (for example) solving the equation for x2 in terms of x1, x3, and x4, then making x1, x3, and x4 into parameters. This leads to x1 = r, x2 = (1 + 8r + 5s - 6t)/2, x3 = s, x4 = t, where -oo < r, s, t < oo.
(d) The solution set of 3v - 8w + 2x - y + 4z = 0 can be described by (for example) solving the equation for y in terms of the other variables, and then making those variables into parameters. This leads to v = t1, w = t2, x = t3, y = 3t1 - 8t2 + 2t3 + 4t4, z = t4, where -oo < t1, t2, t3, t4 < oo.
10. (d) v = t1, w = t2, x = -t1 - t2 + 5t3 - 7t4, y = t3, z = t4, where -oo < t1, t2, t3, t4 < oo.
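The parametrization in 9(a) can be spot-checked numerically; the following sketch (an added illustration, not from the text) substitutes several parameter values back into 7x - 5y = 3:

```python
import numpy as np

# x = (3 + 5t)/7, y = t should satisfy 7x - 5y = 3 for every t.
for t in np.linspace(-10.0, 10.0, 7):
    x, y = (3 + 5 * t) / 7, t
    assert np.isclose(7 * x - 5 * y, 3)
print("parametrization OK")
```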
11. (a) If the solution set is described by the equations x = 5 + 2t, y = t, then on replacing t by y in the first equation we have x = 5 + 2y or x - 2y = 5. This is a linear equation with the given solution set.
(b) The solution set can also be described by solving the equation for y in terms of x, and then making x into a parameter. This leads to the equations x = t, y = -5/2 + (1/2)t.
12. (a) If x1 = -3 + t and x2 = 2t, then t = x1 + 3 and so x2 = 2x1 + 6 or -2x1 + x2 = 6. This is a linear equation with the given solution set.
(b) The solution set can also be described by solving the equation for x2 in terms of x1, and then making x1 into a parameter. This leads to the equations x1 = t, x2 = 2t + 6.
13. We can find parametric equations for the line of intersection by (for example) solving the given equations for x and y in terms of z, then making z into a parameter. From the equations it follows that x = 1 - 4z and y = 3 + z - x = 3 + z - (1 - 4z) = 2 + 5z, and this leads to the parametric equations

x = 1 - 4t, y = 2 + 5t, z = t

for the line of intersection. The corresponding vector equation is

(x, y, z) = (1, 2, 0) + t(-4, 5, 1)
14. We can find parametric equations for the line of intersection by (for example) solving the given equations for x and y in terms of z, then making z into a parameter:

x + 2y = 1 - 3z
3x - 2y = 2 - z    =>    4x = (1 - 3z) + (2 - z) = 3 - 4z    =>    x = (3 - 4z)/4

From the above it follows that y = (1 - 8z)/8. This leads to the parametric equations

x = 3/4 - t, y = 1/8 - t, z = t

and the corresponding vector equation is

(x, y, z) = (3/4, 1/8, 0) + t(-1, -1, 1)
17. The augmented matrix of the system is obtained by listing the coefficients of each equation in a row and adjoining the column of constants from the right-hand sides.
19. The augmented matrix of the system is

[2 0 -1 |  1]
[3 2  0 | -1]
[3 1  7 |  0]

20. The augmented matrix is

[1 0 | 1/2]
[0 1 |   3]
21. A system of equations corresponding to the given augmented matrix is:

2x1 = 0
3x1 - 4x2 = 0
x2 = 1

22. A system of equations corresponding to the given augmented matrix is:

3x1 - 2x3 = 5
7x1 + x2 + 4x3 = -3
-2x2 + x3 = 7
26. (a) B is obtained from A by interchanging the first and third rows. A is obtained from B by interchanging the first and third rows.
(b) B is obtained from A by multiplying the third row by 5. A is obtained from B by multiplying the third row by 1/5.
(c) The coefficients c1, c2, c3, c4 must satisfy the linear system

-c1 + c2 + 5c4 = 6
2c1 + c2 + 2c3 = 2
c1 + c2 + 2c3 = 4
2c1 - 2c3 + 5c4 = -4
-4c1 + 2c2 - c3 + 4c4 = 2
2c2 + c3 - c4 = 0
5c1 - c2 + 3c3 + c4 = 24
D2. A consistent system has at least one solution; moreover, it either has exactly one solution or it
has infinitely many solutions.
If the system has exactly one solution, then there are two possibilities. If the three lines are
all distinct but have a common point of intersection, then any one of the three equations can be
discarded without altering the solution set. On the other hand, if two of the lines coincide, then
one of the corresponding equations can be discarded without altering the solution set.
If the system has infinitely many solutions, then the three lines coincide. In this case any one
(in fact any two) of the equations can be discarded without altering the solution set.
D3. Yes. If B can be obtained from A by multiplying a row by a nonzero constant, then A can be
obtained from B by multiplying the same row by the reciprocal of that constant. If B can be
obtained from A by interchanging two rows, then A can be obtained from B by interchanging the
same two rows. Finally, if B can be obtained from A by adding a multiple of a row to another
row, then A can be obtained from B by subtracting the same multiple of that row from the other
row.
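The reversibility described in D3 is easy to demonstrate numerically; the sketch below (illustrative only, with an arbitrary sample matrix) applies each type of row operation and then its inverse:

```python
import numpy as np

A = np.array([[1.0, 2.0], [3.0, 4.0]])
B = A.copy()

B[0] *= 5               # scale row 0 by 5
B[0] *= 1 / 5           # ... undone by scaling by the reciprocal

B[[0, 1]] = B[[1, 0]]   # interchange rows 0 and 1
B[[0, 1]] = B[[1, 0]]   # ... undone by the same interchange

B[1] += 2 * B[0]        # add 2 times row 0 to row 1
B[1] -= 2 * B[0]        # ... undone by subtracting the same multiple

print(np.allclose(A, B))  # True: every row operation was reversed
```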
D4. If k = l = m = 0, then x = y = 0 is a solution of all three equations and so the system is consistent.
If the system has exactly one solution then the three lines intersect at the origin.
D5. The parabola y = ax^2 + bx + c will pass through the points (1, 1), (2, 4), and (-1, 1) if and only if

a + b + c = 1
4a + 2b + c = 4
a - b + c = 1

Since there is a unique parabola passing through any three non-collinear points, one would expect this system to have exactly one solution.
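Solving this 3 x 3 system numerically confirms the expectation; the following sketch (not part of the original solution) recovers a = 1, b = 0, c = 0, i.e. the parabola y = x^2:

```python
import numpy as np

A = np.array([[1.0, 1.0, 1.0],    # a + b + c = 1   (point (1, 1))
              [4.0, 2.0, 1.0],    # 4a + 2b + c = 4 (point (2, 4))
              [1.0, -1.0, 1.0]])  # a - b + c = 1   (point (-1, 1))
y = np.array([1.0, 4.0, 1.0])
a, b, c = np.linalg.solve(A, y)
print(a, b, c)   # 1.0 0.0 0.0  ->  the parabola is y = x**2
```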
D6. The parabola y = ax^2 + bx + c passes through the points (x1, y1), (x2, y2), and (x3, y3) if and only if

a x1^2 + b x1 + c = y1
a x2^2 + b x2 + c = y2
a x3^2 + b x3 + c = y3

i.e. if and only if a, b, and c satisfy the linear system whose augmented matrix is

[x1^2  x1  1 | y1]
[x2^2  x2  1 | y2]
[x3^2  x3  1 | y3]
D7. To say that the equations have the same solution set is the same thing as to say that they represent the same line. From the first equation the x1-intercept of the line is x1 = c, and from the second equation the x1-intercept is x1 = d; thus c = d. If the line is vertical then k = l = 0. If the line is not vertical then from the first equation the slope is m = -1/k, and from the second equation the slope is m = -1/l; thus k = l. In summary, we conclude that c = d and k = l; thus the two equations are identical.
D8. (a) True. If there are n ≥ 2 columns, then the first n - 1 columns correspond to the coefficients of the variables that appear in the equations and the last column corresponds to the constants that appear on the right-hand side of the equal sign.
(b) False. Referring to Example 6: The sequence of linear systems appearing in the left-hand column all have the same solution set, but the corresponding augmented matrices appearing in the right-hand column are all different.
(c) False. Multiplying a row of the augmented matrix by zero corresponds to multiplying both sides of the corresponding equation by zero. But this is equivalent to discarding one of the equations!
(d) True. If the system is consistent, one can solve for two of the variables in terms of the third or (if further redundancy is present) for one of the variables in terms of the other two. In any case, there is at least one "free" variable that can be made into a parameter in describing the solution set of the system. Thus if the system is consistent, it will have infinitely many solutions.
D9. (a) True. A plane in 3-space corresponds to a linear equation in three variables. Thus a set of
four planes corresponds to a system of four linear equations in three variables. If there is
enough redundancy in the equations so that the system reduces to a system of two indepen-
dent equations, then the solution set will be a line. For example, four vertical planes each
containing the z-axis and intersecting the xy-plane in four distinct lines.
(b) False. Interchanging the first two columns corresponds to interchanging the coefficients of the first two variables. This results in a different system with a different solution set. [It is okay to interchange rows since this corresponds to interchanging equations and therefore does not alter the solution set.]
(c) False. If there is enough redundancy so that the system reduces to a system of only two (or fewer) equations, and if these equations are consistent, then the original system will be consistent.
(d) True. Such a system will always have the trivial solution x1 = x2 = ... = xn = 0.
Exercise Set 2.2
2. The matrices (c), (d), and (e) are in reduced row echelon form. The matrix (a) does not satisfy property 3 of the definition, and the matrix (b) does not satisfy property 4.

3. The matrices (a) and (b) are in row echelon form. The matrix (c) does not satisfy property 1 or property 3 of the definition.

4. The matrices (a) and (b) are in row echelon form. The matrix (c) does not satisfy property 2.

5. The matrices (a) and (c) are in reduced row echelon form. The matrix (b) does not satisfy property 3 and thus is not in row echelon form or reduced row echelon form.

6. The matrix (c) is in reduced row echelon form. The matrix (a) is in row echelon form but does not satisfy property 4. The matrix (b) does not satisfy property 3 and thus is not in row echelon form or reduced row echelon form.
7. The possible 2 x 2 reduced row echelon forms are [1 0; 0 1], [1 *; 0 0], [0 1; 0 0], and [0 0; 0 0], with any real number substituted for the *.

8. The possible 3 x 3 reduced row echelon forms are

[1 0 0; 0 1 0; 0 0 1],  [1 0 *; 0 1 *; 0 0 0],  [1 * 0; 0 0 1; 0 0 0],  [0 1 0; 0 0 1; 0 0 0],
[1 * *; 0 0 0; 0 0 0],  [0 1 *; 0 0 0; 0 0 0],  [0 0 1; 0 0 0; 0 0 0],  [0 0 0; 0 0 0; 0 0 0]

with any real numbers substituted for the *'s.
11. The given matrix corresponds to the system

x1 - 6x2 + 3x5 = -2
x3 + 4x5 = 7
x4 + 5x5 = 8

where the equation corresponding to the zero row has been omitted. Solving these equations for the leading variables (x1, x3, and x4) in terms of the free variables (x2 and x5) results in x1 = -2 + 6x2 - 3x5, x3 = 7 - 4x5, and x4 = 8 - 5x5. Thus, assigning arbitrary values s and t to x2 and x5, the solution set can be represented by the parametric equations

x1 = -2 + 6s - 3t, x2 = s, x3 = 7 - 4t, x4 = 8 - 5t, x5 = t
12. The given matrix corresponds to the system

x1 - 3x2 = 0
x3 = 0
0 = 1

which is clearly inconsistent since the last equation is not satisfied for any values of x1, x2, and x3.
13. The given matrix corresponds to the system

x1 - 7x4 = 8
x2 + 3x4 = 2
x3 + x4 = -5

Solving these equations for the leading variables in terms of the free variable results in x1 = 8 + 7x4, x2 = 2 - 3x4, and x3 = -5 - x4. Thus, making x4 into a parameter, the solution set of the system can be represented by the parametric equations

x1 = 8 + 7t, x2 = 2 - 3t, x3 = -5 - t, x4 = t
14. The given matrix corresponds to the single equation x1 + 2x2 + 2x4 - x5 = 3 in which x3 does not appear. Solving for x1 in terms of the other variables results in x1 = 3 - 2x2 - 2x4 + x5. Thus, making x2, x3, x4, and x5 into parameters, the solution set of the equation is given by

x1 = 3 - 2s - 2u + v, x2 = s, x3 = t, x4 = u, x5 = v

where -oo < s, t, u, v < oo. The corresponding (column) vector form is

[x1]   [3]    [-2]    [0]    [-2]    [1]
[x2]   [0]    [ 1]    [0]    [ 0]    [0]
[x3] = [0] + s[ 0] + t[1] + u[ 0] + v[0]
[x4]   [0]    [ 0]    [0]    [ 1]    [0]
[x5]   [0]    [ 0]    [0]    [ 0]    [1]
The corresponding system of equations is

x1 - 3x2 + 4x3 = 7
x2 + 2x3 = 2
x3 = 5

Starting with the last equation and working up, it follows that x3 = 5, x2 = 2 - 2x3 = 2 - 10 = -8, and x1 = 7 + 3x2 - 4x3 = 7 - 24 - 20 = -37.

Alternate solution via Gauss-Jordan (starting from the original matrix and reducing further):

[1 -3 4 | 7]     [1 -3 0 | -13]     [1 0 0 | -37]
[0  1 2 | 2] --> [0  1 0 |  -8] --> [0 1 0 |  -8]
[0  0 1 | 5]     [0  0 1 |   5]     [0 0 1 |   5]
The corresponding system of equations is

x1 + 8x3 - 5x4 = 6
x2 + 4x3 - 9x4 = 3
x3 + x4 = 2

Starting with the last equation and working up, we have x3 = 2 - x4, x2 = 3 - 4x3 + 9x4 = 3 - 4(2 - x4) + 9x4 = -5 + 13x4, and x1 = 6 - 8x3 + 5x4 = 6 - 8(2 - x4) + 5x4 = -10 + 13x4. Finally, assigning an arbitrary value to x4, the solution set can be described by the parametric equations x1 = -10 + 13t, x2 = -5 + 13t, x3 = 2 - t, x4 = t.

Alternate solution via Gauss-Jordan (starting from the original matrix and reducing further):

[1 0 8 -5 | 6]     [1 0 0 -13 | -10]
[0 1 4 -9 | 3] --> [0 1 0 -13 |  -5]
[0 0 1  1 | 2]     [0 0 1   1 |   2]
Starting with the last equation and working up, it follows that x4 = 9 - 3x5, x3 = 5 - x4 - 6x5 = 5 - (9 - 3x5) - 6x5 = -4 - 3x5, and x1 = -3 - 7x2 + 2x3 + 8x5 = -3 - 7x2 + 2(-4 - 3x5) + 8x5 = -11 - 7x2 + 2x5. Finally, assigning arbitrary values to x2 and x5, the solution set can be described by

x1 = -11 - 7s + 2t, x2 = s, x3 = -4 - 3t, x4 = 9 - 3t, x5 = t

For the next system: x1 = -6 + 7t, x2 = 3 - 4t, x3 = t, x4 = 2.
20. The corresponding system is

x1 + 5x3 + 3x4 = 2
x2 - 2x3 + 4x4 = -7
x3 + x4 = 3

Thus x4 is a free variable and, setting x4 = t, we have x3 = 3 - t, x2 = -7 + 2(3 - t) - 4t = -1 - 6t, and x1 = 2 - 5(3 - t) - 3t = -13 + 2t.
21. Starting with the first equation and working down, we have x1 = 2, x2 = (1/3)(5 - x1) = (1/3)(5 - 2) = 1, and x3 = (1/4)(12 - 3x1 - 2x2) = (1/4)(12 - 6 - 2) = 1.

22. x1 = -1, x2 = (4 - 2x1)/3 = (4 + 2)/3 = 2, x3 = 5 - x1 - 4x2 = 5 + 1 - 8 = -2
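Substitution of this kind is mechanical; the sketch below (illustrative, using the lower triangular coefficients reconstructed in Exercise 21) implements forward substitution:

```python
import numpy as np

# Reconstructed triangular system: x1 = 2, x1 + 3*x2 = 5, 3*x1 + 2*x2 + 4*x3 = 12
L = np.array([[1.0, 0.0, 0.0],
              [1.0, 3.0, 0.0],
              [3.0, 2.0, 4.0]])
b = np.array([2.0, 5.0, 12.0])

x = np.zeros(3)
for i in range(3):  # forward substitution, top row down
    x[i] = (b[i] - L[i, :i] @ x[:i]) / L[i, i]
print(x)  # [2. 1. 1.]
```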
23. The augmented matrix of the system is

[ 1  1 2 |  8]
[-1 -2 3 |  1]
[ 3 -7 4 | 10]

Add row 1 to row 2. Add -3 times row 1 to row 3.

[1   1  2 |   8]
[0  -1  5 |   9]
[0 -10 -2 | -14]

Multiply row 2 by -1. Add 10 times the new row 2 to row 3.

[1 1   2 |    8]
[0 1  -5 |   -9]
[0 0 -52 | -104]

Multiply row 3 by -1/52.

[1 1  2 |  8]
[0 1 -5 | -9]
[0 0  1 |  2]

Add 5 times row 3 to row 2. Add -2 times row 3 to row 1. Then add -1 times row 2 to row 1.

[1 0 0 | 3]
[0 1 0 | 1]
[0 0 1 | 2]
Thus the solution is x1 = 3, x 2 = 1, x3 = 2.
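A numerical check (assuming the system as reconstructed above) confirms the solution:

```python
import numpy as np

A = np.array([[1.0, 1.0, 2.0],
              [-1.0, -2.0, 3.0],
              [3.0, -7.0, 4.0]])
b = np.array([8.0, 1.0, 10.0])
print(np.linalg.solve(A, b))  # [3. 1. 2.]
```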
24. The augmented matrix of the system is

[ 2 2 2 |  0]
[-2 5 2 |  1]
[ 8 1 4 | -1]

Multiply row 1 by 1/2. Add 2 times the new row 1 to row 2. Add -8 times the new row 1 to row 3.

[1  1  1 |  0]
[0  7  4 |  1]
[0 -7 -4 | -1]

Add row 2 to row 3, then multiply row 2 by 1/7.

[1 1   1 |   0]
[0 1 4/7 | 1/7]
[0 0   0 |   0]

Add -1 times row 2 to row 1.

[1 0 3/7 | -1/7]
[0 1 4/7 |  1/7]
[0 0   0 |    0]

Finally, assigning an arbitrary value to the free variable x3, the solution set is represented by the parametric equations

x1 = -1/7 - (3/7)t, x2 = 1/7 - (4/7)t, x3 = t
25. The augmented matrix of the system is

[ 1 -1  2 -1 | -1]
[ 2  1 -2 -2 | -2]
[-1  2 -4  1 |  1]
[ 3  0  0 -3 | -3]

Add -2 times row 1 to row 2. Add row 1 to row 3. Add -3 times row 1 to row 4.

[1 -1  2 -1 | -1]
[0  3 -6  0 |  0]
[0  1 -2  0 |  0]
[0  3 -6  0 |  0]

Multiply row 2 by 1/3. Add -1 times the new row 2 to row 3. Add -3 times the new row 2 to row 4.

[1 -1  2 -1 | -1]
[0  1 -2  0 |  0]
[0  0  0  0 |  0]
[0  0  0  0 |  0]
26. The augmented matrix of the system can be row reduced as follows:

[1  2 -1 | -2]     [1 2   -1 | -2]
[0 -2  3 |  2] --> [0 1 -3/2 | -1]
[0 -6  9 |  9]     [0 0    0 |  3]

It is now clear from the last row that the system is inconsistent.
27. Multiply row 3 by 2, then add -1 times row 1 to row 2 and -3 times row 1 to the new row 3. The last two rows then correspond to the incompatible equations 4x2 = 3 and 13x2 = 8; thus the system is inconsistent.
28. The augmented matrix of the system is reduced to reduced row echelon form (details omitted), from which the solution can be read off directly.
29. As an intermediate step in Exercise 23, the augmented matrix of the system was reduced to

[1 1  2 |  8]
[0 1 -5 | -9]
[0 0  1 |  2]

Starting with the last row and working up, it follows that x3 = 2, x2 = -9 + 5x3 = -9 + 10 = 1, and x1 = 8 - x2 - 2x3 = 8 - 1 - 4 = 3.
30. As an intermediate step in Exercise 24, the augmented matrix of the system was reduced to

[1 1   1 |   0]
[0 1 4/7 | 1/7]
[0 0   0 |   0]

Starting with the last equation and working up, it follows that x2 = 1/7 - (4/7)x3, and x1 = -x2 - x3 = -(1/7 - (4/7)x3) - x3 = -1/7 - (3/7)x3. Finally, assigning an arbitrary value to x3, the solution set can be described by the parametric equations

x1 = -1/7 - (3/7)t, x2 = 1/7 - (4/7)t, x3 = t
31. As an intermediate step in Exercise 25, the augmented matrix of the system was reduced to

[1 -1  2 -1 | -1]
[0  1 -2  0 |  0]
[0  0  0  0 |  0]
[0  0  0  0 |  0]

From the second row, x2 = 2x3; back substitution then gives x1 = -1 + x2 - 2x3 + x4 = -1 + x4. Setting x3 = s and x4 = t, the solution set is x1 = -1 + t, x2 = 2s, x3 = s, x4 = t.
32. As in Exercise 26, the augmented matrix of the system can be reduced to

[1 2   -1 | -2]
[0 1 -3/2 | -1]
[0 0    0 |  3]

and from this we can immediately conclude that the system has no solution.
33. (a) There are more unknowns than equations in this homogeneous system. Thus, by Theorem 2.2.3, there are infinitely many nontrivial solutions.
(b) From back substitution it is clear that x1 = x2 = x3 = 0. This system has only the trivial solution.

34. (a) There are more unknowns than equations in this homogeneous system; thus there are infinitely many nontrivial solutions.
(b) The second equation is a multiple of the first. Thus the system reduces to only one equation in two unknowns and there are infinitely many solutions.
35. The augmented matrix of the system is

[2 1 3 | 0]
[1 2 0 | 0]
[0 1 2 | 0]

Interchange rows 1 and 2. Add -2 times the new row 1 to the new row 2.

[1  2 0 | 0]
[0 -3 3 | 0]
[0  1 2 | 0]

Multiply row 2 by -1/3. Add -1 times row 2 to row 3. Multiply the new row 3 by 1/3.

[1 2  0 | 0]
[0 1 -1 | 0]
[0 0  1 | 0]

The last row of this matrix corresponds to x3 = 0 and, from back substitution, it follows that x2 = x3 = 0 and x1 = -2x2 = 0. This system has only the trivial solution.
36. The augmented matrix of the system is

[3  1 1  1 | 0]
[5 -1 1 -1 | 0]

Multiply row 2 by 3. Add -5 times row 1 to the new row 2, then multiply this last row 2 by -1/2.

[3 1 1 1 | 0]
[0 4 1 4 | 0]

Let x3 = 4s, x4 = t. Then, using back substitution, we have 4x2 = -x3 - 4x4 = -4s - 4t and 3x1 = -x2 - x3 - x4 = s + t - 4s - t = -3s. Thus the solution set of the system can be described by the parametric equations x1 = -s, x2 = -s - t, x3 = 4s, x4 = t.
37. The augmented matrix of the system is

[ 0 2  2  4 | 0]
[ 1 0 -1 -3 | 0]
[-2 1  3 -2 | 0]

Interchange rows 1 and 2. Add 2 times the new row 1 to row 3.

[1 0 -1 -3 | 0]
[0 2  2  4 | 0]
[0 1  1 -8 | 0]

Multiply row 2 by 1/2. Add -1 times the new row 2 to row 3. Multiply the new row 3 by -1/10.

[1 0 -1 -3 | 0]
[0 1  1  2 | 0]
[0 0  0  1 | 0]

Add 3 times row 3 to row 1 and -2 times row 3 to row 2.

[1 0 -1 0 | 0]
[0 1  1 0 | 0]
[0 0  0 1 | 0]

This is the reduced row echelon form of the matrix. From this we see that y (the third variable) is a free variable and, on setting y = t, the solution set of the system can be described by the parametric equations w = t, x = -t, y = t, z = 0.
38. The augmented matrix of the system is

[ 2 -1 -3 | 0]
[-1  2 -3 | 0]
[ 1  1  4 | 0]

and the reduced row echelon form of this matrix is

[1 0 0 | 0]
[0 1 0 | 0]
[0 0 1 | 0]

Thus this system has only the trivial solution x = y = z = 0.
39. The augmented matrix of the system can be reduced to

[1 0 -7/2 5/2 | 0]
[0 1    3  -2 | 0]
[0 0    0   0 | 0]
[0 0    0   0 | 0]

Thus, setting w = 2s and x = 2t (to clear the fractions), the solution set of the system can be described by the parametric equations u = 7s - 5t, v = -6s + 4t, w = 2s, x = 2t.
40. The augmented matrix of this homogeneous system (five equations in four unknowns) has reduced row echelon form

[1 0 0 0 | 0]
[0 1 0 0 | 0]
[0 0 1 0 | 0]
[0 0 0 1 | 0]
[0 0 0 0 | 0]

Thus the system reduces to only four equations, and these equations have only the trivial solution x1 = x2 = x3 = x4 = 0.
41. We will solve the system by Gaussian elimination, i.e. by reducing the augmented matrix of the system to a row echelon form. The augmented matrix of the original system is

[2 -1  3 4 | 0]
[1  0 -2 7 | 0]
[3 -3  1 5 | 0]
[2  1  4 4 | 0]

Interchange rows 1 and 2. Add -2 times the new row 1 to the new row 2. Add -3 times the new row 1 to row 3. Add -2 times the new row 1 to row 4.

[1  0 -2   7 | 0]
[0 -1  7 -10 | 0]
[0 -3  7 -16 | 0]
[0  1  8 -10 | 0]

Multiply row 2 by -1. Add 3 times the new row 2 to row 3. Add -1 times the new row 2 to row 4.

[1 0  -2   7 | 0]
[0 1  -7  10 | 0]
[0 0 -14  14 | 0]
[0 0  15 -20 | 0]

Multiply row 3 by -1/14. Add -15 times the new row 3 to row 4. Multiply the new row 4 by -1/5.

[1 0 -2  7 | 0]
[0 1 -7 10 | 0]
[0 0  1 -1 | 0]
[0 0  0  1 | 0]

This is a row echelon form for the augmented matrix. From the last row we conclude that I4 = 0, and from back substitution it follows that I3 = I2 = I1 = 0 also. This system has only the trivial solution.
42. The augmented matrix of the system (details omitted) has reduced row echelon form

[1 1 0 0 1 | 0]
[0 0 1 0 1 | 0]
[0 0 0 1 0 | 0]
[0 0 0 0 0 | 0]

From this we conclude that the second and fifth variables are free variables, and that the solution set of the system can be described by the parametric equations

z1 = -s - t, z2 = s, z3 = -t, z4 = 0, z5 = t
43. The augmented matrix of the system can be reduced to a row echelon form whose last row is [0 0 -22 | a - 4]. From the last row we conclude that z = (4 - a)/22 and, from back substitution, it is clear that y and x are uniquely determined as well. This system has exactly one solution for every value of a.
44. The augmented matrix of the system is

[1  2       1 | 2]
[2 -2       3 | 1]
[1  2 a^2 - 3 | a]

Add -2 times the first row to the second row. Add -1 times the first row to the third row.

[1  2       1 |     2]
[0 -6       1 |    -3]
[0  0 a^2 - 4 | a - 2]

The last row corresponds to (a^2 - 4)z = a - 2. If a = -2, this becomes 0 = -4 and so the system is inconsistent. If a = 2, the last equation becomes 0 = 0; thus the system reduces to only two equations and, with z serving as a free variable, has infinitely many solutions. If a ≠ ±2 the system has a unique solution.
45. The augmented matrix of the system is

[1       2 |     1]
[2 a^2 - 5 | a - 1]

Add -2 times the first row to the second row.

[1       2 |     1]
[0 a^2 - 9 | a - 3]

If a = 3, then the last row corresponds to 0 = 0 and the system has infinitely many solutions. If a = -3, the last row corresponds to 0 = -6 and the system is inconsistent. If a ≠ ±3, then y = (a - 3)/(a^2 - 9) = 1/(a + 3) and, from back substitution, x is uniquely determined as well; the system has exactly one solution in this case.
46. The augmented matrix of the system is

[1 1       7 |  -7]
[2 3      17 | -16]
[1 2 a^2 + 1 |  3a]

This reduces to

[1 1       7 |     -7]
[0 1       3 |     -2]
[0 0 a^2 - 9 | 3a + 9]

The last row corresponds to (a^2 - 9)z = 3a + 9. If a = -3 this becomes 0 = 0, and the system will have infinitely many solutions. If a = 3, then the last row corresponds to 0 = 18; the system is inconsistent. If a ≠ ±3, then z = (3a + 9)/(a^2 - 9) = 3/(a - 3) and, from back substitution, y and x are uniquely determined as well; the system has exactly one solution.
47. (a) If x + y + z = 1, then 2x + 2y + 2z = 2 ≠ 4; thus the system has no solution. The planes represented by the two equations do not intersect (they are parallel).
(b) If x + y + z = 0, then 2x + 2y + 2z = 0 also; thus the system is redundant and has infinitely many solutions. Any set of values of the form x = -s - t, y = s, and z = t will satisfy both equations. The planes represented by the equations coincide.
48. The given matrix can be reduced to reduced row echelon form without introducing fractions by the following sequence of operations: Add -1 times row 1 to row 3. Interchange rows 1 and 3. Add -2 times row 1 to row 3. Add -3 times row 2 to row 3. Interchange rows 2 and 3. Add 2 times row 2 to row 3. Multiply row 3 by -1/37. Add 22 times row 3 to row 2. Add -2 times row 3 to row 1. Add -3 times row 2 to row 1.
49. The system is linear in the variables x = sin α, y = cos β, and z = tan γ:

2x - y + 3z = 3
4x + 2y - 2z = 2
6x - 3y + z = 9

We solve the system by performing the indicated row operations on the augmented matrix

[2 -1  3 | 3]
[4  2 -2 | 2]
[6 -3  1 | 9]

Add -2 times row 1 to row 2. Add -3 times row 1 to row 3.
50. This system is linear in the variables X = x^2, Y = y^2, and Z = z^2. Reducing the augmented matrix (details omitted) gives X = 1, Y = 3, and Z = 2; thus x = 1, y = √3, and z = √2.
51. This system is homogeneous, with a coefficient matrix whose entries involve the parameter λ. Reducing the augmented matrix (details omitted) gives x = y = z = 0, i.e. the system has only the trivial solution.
52. Reducing the augmented matrix (details omitted) produces a row whose coefficient entries are all zero and whose right-hand side entry is c - a - b. Thus the system is consistent if and only if c - a - b = 0.
53. (a) Starting with the given system and proceeding as directed, we have

0.0001x + 1.000y = 1.000
1.000x - 1.000y = 0.000

1.000x + 10000y = 10000
1.000x - 1.000y = 0.000

1.000x + 10000y = 10000
        -10000y = -10000

which results in y ≈ 1.000 and x ≈ 0.000.
(b) If we first interchange rows and then proceed as directed, we have

1.000x - 1.000y = 0.000
0.0001x + 1.000y = 1.000

1.000x - 1.000y = 0.000
         1.000y = 1.000

which results in y ≈ 1.000 and x ≈ 1.000.
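The effect of the row interchange (partial pivoting) can be reproduced in a few lines; the sketch below (illustrative, with a crude 4-significant-digit rounding helper standing in for the hand arithmetic) repeats both computations:

```python
from math import floor, log10

def round4(x):
    # round to 4 significant digits, imitating the machine in Exercise 53
    if x == 0.0:
        return 0.0
    return round(x, 3 - floor(log10(abs(x))))

# (a) no interchange: pivot on the tiny coefficient 0.0001
m = round4(1.000 / 0.0001)                                  # multiplier 10000
y = round4(round4(0.0 - m * 1.0) / round4(-1.0 - m * 1.0))  # y = 1.000
x = round4((1.0 - round4(1.0 * y)) / 0.0001)                # x = 0.000 (wrong)
print(x, y)

# (b) interchange first: pivot on 1.000
m = round4(0.0001 / 1.0)
y = round4(round4(1.0 - m * 0.0) / round4(1.0 + m * 1.0))   # y = 1.000
x = round4((0.0 + round4(1.0 * y)) / 1.0)                   # x = 1.000 (accurate)
print(x, y)
```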
D2. (a) All three lines pass through the origin and at least two of them do not coincide.
(b) If the system has nontrivial solutions then the lines must coincide and pass through the origin.
D3. (a) Yes. If ax0 + by0 = 0 then a(kx0) + b(ky0) = k(ax0 + by0) = 0. Similarly for the other equation.
(b) Yes. If ax0 + by0 = 0 and ax1 + by1 = 0, then a(x0 + x1) + b(y0 + y1) = (ax0 + by0) + (ax1 + by1) = 0; similarly for the other equation.
D4. The first system may be inconsistent, but the second system always has (at least) the trivial solution. If the first system is consistent then the solution sets will be parallel objects (points, lines, or the entire plane) with the second containing the origin.
D9. The system is linear in the variables x = sin α, y = cos β, z = tan γ, and this system has only the trivial solution x = y = z = 0. Thus sin α = 0, cos β = 0, tan γ = 0. It follows that α = 0, π, or 2π; β = π/2 or 3π/2; and γ = 0, π, or 2π. There are eighteen possible combinations in all. This does not contradict Theorem 2.1.1 since the equations are not linear in the variables α, β, γ.
(b) If the coefficient matrix can be reduced to [1 0; 0 1], then the corresponding row operations on the augmented matrix will reduce it to a matrix of the form [1 0 K; 0 1 L], and from this it follows that the system has the unique solution x = K, y = L.
CHAPTER 3
Matrices and Matrix Algebra
1. Since two matrices are equal if and only if their corresponding entries are equal, we have a - b = 8, b + a = 1, 3d + c = 7, and 2d - c = 6. Adding the first two equations we see that 2a = 9; thus a = 9/2 and it follows that b = 1 - a = -7/2. Adding the second two equations we see that 5d = 13; thus d = 13/5 and c = 7 - 3d = -4/5.
6. Parts (a) through (e) are direct computations with the given matrices, carried out entrywise (for sums and scalar multiples) or by the row-column rule (for products). In part (f), GE is not defined because the number of columns of G does not equal the number of rows of E.

9. Ax is computed by expressing it as the linear combination of the column vectors of A whose coefficients are the entries of x; the row-column rule gives the same vector.

12. (a), (b) These matrix-vector products are likewise computed by the row-column rule.
21. (a) u^T v = [-2 3][4; 5] = -8 + 15 = 7
(b) uv^T = [-2; 3][4 5] = [-8 -10; 12 15]
(d) v^T u = [4 5][-2; 3] = -8 + 15 = 7 = u^T v

22. (b) uv^T = [3; -4; 5][2 7 0] = [6 21 0; -8 -28 0; 10 35 0]
(c) tr(uv^T) = 6 - 28 + 0 = -22 = u^T v
23. [k 1 1]A[k; 1; 1] is evaluated by first computing the product A[k; 1; 1] and then taking the dot product of the result with (k, 1, 1); setting the resulting expression equal to 0 determines the admissible values of k.

24. [2 2 k][1 2 0; 2 0 3; 0 3 1][2; 2; k] = [2 2 k][6; 4 + 3k; 6 + k] = 12 + 2(4 + 3k) + k(6 + k) = k^2 + 12k + 20 = 0 if and only if k = -2 or k = -10.
25. Let F = C(DE). Then, from Theorem 3.1.7, the entry in the ith row and jth column of F is the dot product of the ith row of C and the jth column of DE. Thus (F)23 can be computed as follows:
27. Suppose that A is m x n and B is r x s. If AB is defined, then n = r. On the other hand, if BA is defined, then s = m. Thus A is m x n and B is n x m. It follows that AB is m x m and BA is n x n.
29. (a) If the ith row of A is a row of zeros, then (using the row rule) ri(AB) = ri(A)B = 0B = 0 and so the ith row of AB is a row of zeros.
(b) If the jth column of B is a column of zeros, then (using the column rule) cj(AB) = A cj(B) = A0 = 0 and so the jth column of AB is a column of zeros.

30. (a) If B and C have the same jth column, then cj(AB) = A cj(B) = A cj(C) = cj(AC) and so AB and AC have the same jth column.
(b) If B and C have the same ith row, then ri(BA) = ri(B)A = ri(C)A = ri(CA) and so BA and CA have the same ith row.
31. (a) If i ≠ j, then aij has unequal row and column numbers; that is, it is off (above or below) the main diagonal of the matrix [aij]; thus the matrix has zeros in all of the positions that are above or below the main diagonal:

[a11   0   0   0   0   0]
[  0 a22   0   0   0   0]
[  0   0 a33   0   0   0]
[  0   0   0 a44   0   0]
[  0   0   0   0 a55   0]
[  0   0   0   0   0 a66]

(b) If i > j, then the entry aij has row number larger than column number; that is, it lies below the main diagonal. Thus [aij] has zeros in all of the positions below the main diagonal.
(c) If i < j, then the entry aij has row number smaller than column number; that is, it lies above the main diagonal. Thus [aij] has zeros in all of the positions above the main diagonal.
(d) If |i - j| > 1, then either i - j > 1 or i - j < -1; that is, either i > j + 1 or j > i + 1. The first of these inequalities says that the entry aij lies below the main diagonal and also below the "subdiagonal" consisting of entries immediately below the diagonal entries. The second inequality says that the entry aij lies above the diagonal and also above the entries immediately above the diagonal entries. Thus the matrix A has the following form:

[a11 a12   0   0   0   0]
[a21 a22 a23   0   0   0]
[  0 a32 a33 a34   0   0]
[  0   0 a43 a44 a45   0]
[  0   0   0 a54 a55 a56]
[  0   0   0   0 a65 a66]
32. (a) The entry aij = i + j is the sum of the row and column numbers. Thus the matrix is

[2 3 4 5]
[3 4 5 6]
[4 5 6 7]
[5 6 7 8]

(b) The entry aij = (-1)^(i+j) is -1 if i + j is odd and +1 if i + j is even. Thus the matrix is

[ 1 -1  1 -1]
[-1  1 -1  1]
[ 1 -1  1 -1]
[-1  1 -1  1]

(c) We have aij = -1 if i = j or i = j ± 1; otherwise aij = 1. Thus the entries on the main diagonal, and those on the subdiagonals immediately above and below the main diagonal, are all -1, whereas the remaining entries are all +1. The matrix is

[-1 -1  1  1]
[-1 -1 -1  1]
[ 1 -1 -1 -1]
[ 1  1 -1 -1]
33. The components of the matrix product (quantities purchased times unit prices) represent the total expenditures for purchases during each of the first four months of the year. For example, the February expenditures were (5)($1) + (6)($2) + (0)($3) = $17.
34. (a) The entries of the matrix M + J represent the total units sold in each of the categories during the months of May and June. For example, the total number of medium raincoats sold was M32 + J32 = 40 + 10 = 50.
(b) The entries of the matrix M - J represent the difference between May and June sales in each of the categories. Note that June sales were less than May sales in each case; thus these entries represent decreases.
(c) Let x = [1; 1; 1]. Then the components of Mx represent the total number (all sizes) of shirts, jeans, suits, and raincoats sold in May.
(d) Let y = [1 1 1 1]. Then the components of yM = [102 195 205] represent the total number of small, medium, and large items sold in May.
(e) The product yMx represents the total number of items (all sizes and categories) sold in May.
D1. If AB has 6 rows and 8 columns, then A must have size 6 x k and B must have size k x 8; thus A has 6 rows and B has 8 columns.
D3. Let A and B be the given 2 x 2 matrices. The product AB can be computed by three different methods, each yielding the same matrix:
Method 1. Using Definition 3.1.6. This is the same as what is later referred to as the column rule.
Method 2. Using Theorem 3.1.7 (the dot product rule).
Method 3. Using the row rule.
D4. Since x c1(A) + y c2(A) + z c3(A) = A[x; y; z], the first stated property determines each column of A, so there is exactly one 3 x 3 matrix with that property. For the second property there is no such matrix: the property prescribes the value of A[x; y; z] for all x, y, and z, while at the same time we must have A[-x; -y; -z] = -A[x; y; z], and these two requirements are incompatible. Thus there is no such matrix.
D6. (a) The two displayed matrices S1 and S2 = -S1 are both square roots of the given matrix A.
(b) The matrices S = [±5 0; 0 ±3] are four different square roots of B = [25 0; 0 9].
(c) Not all matrices have square roots. For example, it is easy to check that the matrix [0 1; 0 0] has no square root.
D8. Yes, the matrix A = 2I (twice the 3 x 3 identity matrix) has the property that AB = 2B for every 3 x 3 matrix B.
D9. (a) False. For example, if A is 2 x 3 and B is 3 x 2, then AB and BA are both defined.
(b) True. If AB and BA are both defined and A is m x n, then B must be n x m; thus AB is m x m and BA is n x n. If, in addition, AB + BA is defined then AB and BA must have the same size, i.e. m = n.
(c) True. From the column rule, cj(AB) = A cj(B). Thus if B has a column of zeros, then AB will have a column of zeros.
D10. Since the second column of B is the sum of its first and third columns, the column rule shows that the second column of AB is likewise the sum of the first and third columns of AB.
D11. (a) The sum from k = 1 to s of aik bkj equals ai1 b1j + ai2 b2j + ... + ais bsj.
(b) This sum represents (AB)ij, the ijth entry of the matrix AB.
P2. Since Ax = x1 c1(A) + x2 c2(A) + ... + xn cn(A), the linear system Ax = b is equivalent to x1 c1(A) + x2 c2(A) + ... + xn cn(A) = b; thus b is a linear combination of the columns of A if and only if the system is consistent.
Exercise Set 3.2

1. Computing A + (B + C) and (A + B) + C entrywise produces the same matrix, and computing A(BC) and (AB)C by the row-column rule likewise produces the same matrix; this illustrates the associative laws for matrix addition and multiplication.

2. Direct computation shows that aB - aC = a(B - C), and that (aB)C, a(BC), and B(aC) are all equal; this illustrates the distributive law and the associativity of scalar multiplication with matrix products.
3. (a) (A^T)^T = A: transposing A twice returns the original matrix.
(b) (A + B)^T = A^T + B^T: computing both sides entrywise yields the same matrix.
(c) (3C)^T = 3C^T: again both sides yield the same matrix.
4. (b) (B - C)^T is computed by transposing the entrywise difference; the result equals B^T - C^T.

5. tr(A^T) = 2 + 4 + 4 = 10, and tr(CB) = 12 - 24 + 49 = 37 = tr(BC). Thus tr(BC) = tr(CB).
7. (a) A matrix X satisfies the equation tr(B)A + 3X = BC if and only if 3X = BC - tr(B)A, i.e. X = (1/3)(BC - tr(B)A); substituting the given matrices and simplifying yields the entries of X.
(d) 2A is computed entrywise; for a 2 x 2 matrix det(2A) = 4 det(A), and (2A)^{-1} = (1/2)A^{-1}.
(c) B^T = [2 4; -3 4], det(B^T) = 20, and (B^T)^{-1} = (1/20)[4 -4; 3 2] = (B^{-1})^T.
(d) 3B = [6 -9; 12 12], det(3B) = 180, and (3B)^{-1} = (1/180)[12 9; -12 6] = (1/60)[4 3; -4 2] = (1/3)B^{-1}.

13. C^{-1}B^{-1} and then C^{-1}B^{-1}A^{-1} are computed by multiplying the individual inverses; this verifies that (ABC)^{-1} = C^{-1}B^{-1}A^{-1}.

14. X = (C - B)^{-1}AB = (1/22)[-5 -7; 6 4][10 -5; 18 -7] = (1/11)[-88 37; 66 -29]
19. (a) Given that A^{-1} is the displayed matrix, and noting that det(A^{-1}) = 13, we have A = (A^{-1})^{-1} = (1/13) adj(A^{-1}).

20. (a) Given that (5A^T)^{-1} = [-3 -1; 5 2], it follows that 5A^T = [-3 -1; 5 2]^{-1} = (-1)[2 1; -5 -3] = [-2 -1; 5 3], and so A = (1/5)[-2 -1; 5 3]^T = (1/5)[-2 5; -1 3].
(b) Solving the given matrix equation for A in the same way (invert, transpose, and rescale) determines A.
21. The matrix A = [c c; c c^2]-type matrix given in the exercise is invertible if and only if det(A) = c^2 - c ≠ 0, i.e. if and only if c ≠ 0, 1.

22. The given matrix A is invertible if and only if det(A) = -c^2 + 1 ≠ 0, i.e. if and only if c ≠ ±1.
23. One such example is A = [1 1 1; 2 1 2; 3 2 3]. In general, any matrix of the form [a b a; c d c; e f e], whose first and third columns are equal, has the required property.
The matrix X satisfies the given equation if and only if its entries satisfy the linear system obtained by equating corresponding entries on the two sides; that system has a unique solution, which determines X.

For the second equation, [1 0 0; 0 1 0; 2 0 1]X = I if and only if x11 = 1, x12 = 0, x13 = 0, x21 = 0, x22 = 1, x23 = 0, 2x11 + x31 = 0, 2x12 + x32 = 0, and 2x13 + x33 = 1. This is a very simple system of equations which has the solution X = [1 0 0; 0 1 0; -2 0 1].
30. If A is invertible and AC = 0, then C = IC = (A^{-1}A)C = A^{-1}(AC) = A^{-1}0 = 0. Similarly, if C is invertible and AC = 0, then A = AI = A(CC^{-1}) = (AC)C^{-1} = 0C^{-1} = 0.
31. (a) If A = [cos θ -sin θ; sin θ cos θ], then det(A) = cos^2 θ + sin^2 θ = 1. Thus A is invertible and A^{-1} = [cos θ sin θ; -sin θ cos θ].
33. (a) If A and B are invertible square matrices of the same size, and if A + B is also invertible, then

A(A^{-1} + B^{-1})B(A + B)^{-1} = (I + AB^{-1})B(A + B)^{-1} = (B + A)(A + B)^{-1} = I

(b) From part (a) it follows that A(A^{-1} + B^{-1})B = A + B, and so A^{-1} + B^{-1} = A^{-1}(A + B)B^{-1}. Thus the matrix A^{-1} + B^{-1} is invertible and (A^{-1} + B^{-1})^{-1} = B(A + B)^{-1}A.
35. Direct computation gives A^2 = [a^2 + bc, ab + bd; ca + dc, cb + d^2], (a + d)A = [a^2 + da, ab + db; ac + dc, ad + d^2], and (ad - bc)I = [ad - bc, 0; 0, ad - bc]; adding and subtracting these shows that A^2 - (a + d)A + (ad - bc)I = 0.

36. From parts (d) and (e) of Theorem 3.2.12, the stated identity follows directly.
37. The adjacency matrix A and its square (computed using the row-column rule) are as follows:

A =
[0 1 1 1 0 0 0]
[0 0 0 0 1 1 0]
[0 0 0 0 0 1 0]
[0 0 0 0 0 1 1]
[0 0 0 0 0 0 1]
[0 0 0 0 0 0 1]
[0 0 0 0 0 0 0]

A^2 =
[0 0 0 0 1 3 1]
[0 0 0 0 0 0 2]
[0 0 0 0 0 0 1]
[0 0 0 0 0 0 1]
[0 0 0 0 0 0 0]
[0 0 0 0 0 0 0]
[0 0 0 0 0 0 0]

The entry in the ijth position of the matrix A^2 represents the number of ways of traveling from i to j with one intermediate stop. For example, there are three such ways of traveling from 1 to 6 (through 2, 3, or 4) and two such ways of traveling from 2 to 7 (through 5 or 6).

In general, the entry in the ijth position of the matrix A^n represents the number of ways of traveling from i to j with exactly n - 1 intermediate stops.
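This walk-counting interpretation is easy to verify; the sketch below (with the adjacency matrix as reconstructed above) recomputes the two entries mentioned in the solution:

```python
import numpy as np

# Adjacency matrix of the 7-vertex digraph from Exercise 37.
A = np.array([[0,1,1,1,0,0,0],
              [0,0,0,0,1,1,0],
              [0,0,0,0,0,1,0],
              [0,0,0,0,0,1,1],
              [0,0,0,0,0,0,1],
              [0,0,0,0,0,0,1],
              [0,0,0,0,0,0,0]])
A2 = A @ A
print(A2[0, 5])  # 3: three two-step routes from vertex 1 to vertex 6
print(A2[1, 6])  # 2: two two-step routes from vertex 2 to vertex 7
```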
D3. (a) If A^2 + 2A + I = 0, then I = -A^2 - 2A = A(-A - 2I); thus A is invertible and we have A^{-1} = -A - 2I.
(b) If p(A) = an A^n + a(n-1) A^(n-1) + ... + a1 A + a0 I = 0, where a0 ≠ 0, then we have

A( -(an/a0) A^(n-1) - (a(n-1)/a0) A^(n-2) - ... - (a1/a0) I ) = I

Thus A is invertible with A^{-1} = -(an/a0) A^(n-1) - (a(n-1)/a0) A^(n-2) - ... - (a1/a0) I.
D4. No. First note that if A^3 is defined then A must be square. Thus if A^3 = AA^2 = I, it follows that A is invertible with A^{-1} = A^2.
D5. (a) False. (AB)^2 = (AB)(AB) = A(BA)B. If A and B commute then (AB)^2 = A^2 B^2, but if BA ≠ AB then this will not in general be true.
(b) True. Expanding both expressions, we have (A - B)^2 = A^2 - AB - BA + B^2 and (B - A)^2 = B^2 - BA - AB + A^2; thus (A - B)^2 = (B - A)^2.
(c) True. The basic fact (from Theorem 3.2.11) is that (A^T)^{-1} = (A^{-1})^T, and from this it follows that (A^{-n})^T = ((A^n)^{-1})^T = ((A^n)^T)^{-1} = ((A^T)^n)^{-1} = (A^T)^{-n}.
(d) False. For example, if A = [1 1; 0 1] and B = [1 0; 1 1], then tr(AB) = tr([2 1; 1 1]) = 3, whereas tr(A)tr(B) = (2)(2) = 4.
(e) False. For example, if B = -A then A + B = 0 is not invertible (whether A is invertible or not).
D6. (a) If A is invertible, then the system Ax = b has a unique solution for every vector b in R^3, namely x = A^{-1}b. Let x1, x2, and x3 be the solutions of Ax = e1, Ax = e2, and Ax = e3 respectively, and let B be the matrix having these vectors as its columns; B = [x1 x2 x3]. Then we have AB = A[x1 x2 x3] = [Ax1 Ax2 Ax3] = [e1 e2 e3] = I. Thus A^{-1} = B = [x1 x2 x3].
(b) From part (a), the columns of the matrix A^{-1} are the solutions of Ax = e1, Ax = e2, and Ax = e3. The augmented matrix of Ax = e1 is reduced to reduced row echelon form (details omitted); the last column of the result is the first column of A^{-1}, and the other two columns are found in the same way.
D7. The matrices [1 0; 0 1], [1 1; 0 1], and [1 0; 1 1] have determinant equal to 1. The matrices [0 1; 1 0], [0 1; 1 1], and [1 1; 1 0] have determinant equal to -1. These six matrices are invertible. The other ten matrices have determinant equal to 0, and thus are not invertible.
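The count of six invertible matrices can be verified by brute force; the following sketch (not part of the original text) enumerates all sixteen zero-one matrices:

```python
from itertools import product

invertible = [M for M in product([0, 1], repeat=4)
              if M[0] * M[3] - M[1] * M[2] != 0]  # det of [[a,b],[c,d]]
print(len(invertible))  # 6 of the 16 zero-one matrices are invertible
```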
P1. We proceed as in the proof of part (b) given in the text. It is clear that the matrices (ab)A and a(bA) have the same size, and comparison of corresponding entries, (ab)aij = a(b aij), shows that they are equal.

P2. Comparing corresponding entries on the two sides shows that they are equal.

P3. The argument that the matrices A(B - C) and AB - AC must have the same size is the same as in the proof of part (b) given in the text. Corresponding column vectors are equal by the column rule: cj(A(B - C)) = A cj(B - C) = A cj(B) - A cj(C).

P4. These three matrices clearly have the same size, and corresponding column vectors are equal by the same column-rule computation.

P5. If cA = 0 and c ≠ 0 then, using Theorem 3.2.1(c), we have A = 1A = ((1/c)c)A = (1/c)(cA) = (1/c)0 = 0.
(a) Direct computation gives

A^{-1}A = ( (1/(ad - bc)) [d -b; -c a] ) [a b; c d] = (1/(ad - bc)) [ad - bc, 0; 0, ad - bc] = I
(b) Let A = [a b; c d] where ad - bc = 0. Then A is invertible if and only if there are scalars e, f, g, and h such that

[a b; c d][e f; g h] = [1 0; 0 1]

i.e. if and only if the following system of equations is consistent:

ae + bg = 1
ce + dg = 0
af + bh = 0
cf + dh = 1

Multiply the first equation by d, multiply the second equation by b, and subtract. This leads to

(da - bc)e = d

and from this we conclude that d = 0 is a necessary condition in order for the system to be consistent. It then follows (since ad - bc = 0) that bc = 0 and so either b = 0 or c = 0. Let us assume that b = 0 (the case c = 0 can be handled similarly). Then the equations reduce to

ae = 1
ce = 0
af = 0
cf = 1

and these equations are easily seen to be inconsistent. Why? From ae = 1 we conclude that e ≠ 0; and from ce = 0 we conclude that c = 0 or e = 0. From this it follows that c must be equal to 0. But this is inconsistent with cf = 1! In summary, we have shown that if ad - bc = 0, then the system of equations has no solution and so the matrix A is not invertible.
Exercise Set 3.3

5. (a)-(d) These powers and inverses are routine computations; for a diagonal matrix the nth power (or the inverse) is obtained by taking the nth power (or the reciprocal) of each diagonal entry.
7. (a) B is obtained from A by interchanging the first and third rows; thus EA = B where E is the elementary matrix obtained from I by interchanging its first and third rows.

8. (a)-(d) In each part, E is the elementary matrix obtained by performing the indicated row operation on the identity matrix.
9. Using the method of Example 3, we start with the partitioned matrix [A | I] and perform row operations aimed at reducing the left side to I; these same row operations performed simultaneously on the right side will produce the matrix A^{-1}.

[1  5 | 1 0]
[2 20 | 0 1]

Add -2 times the first row to the second row.

[1  5 |  1 0]
[0 10 | -2 1]

Multiply the second row by 1/10, then add -5 times the new second row to the first row.

[1 0 |    2 -1/2]
[0 1 | -1/5 1/10]

Thus A^{-1} = [2 -1/2; -1/5 1/10]. On the other hand, using the formula from Theorem 3.2.7 and the fact that det(A) = 10, we obtain the same result.

10. A^{-1} = (1/14)[1 3; -4 2] = [1/14 3/14; -2/7 1/7]
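A quick check of Exercise 9 with a library inverse (illustrative only):

```python
import numpy as np

A = np.array([[1.0, 5.0], [2.0, 20.0]])
print(np.linalg.inv(A))
# [[ 2.  -0.5]
#  [-0.2  0.1]]  -- matches A^{-1} = [2 -1/2; -1/5 1/10] found above
```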
11. (a) Start with the partitioned matrix [A | I].

[3 4 -1 | 1 0 0]
[1 0  3 | 0 1 0]
[2 5 -4 | 0 0 1]

Interchange rows 1 and 2.

[1 0  3 | 0 1 0]
[3 4 -1 | 1 0 0]
[2 5 -4 | 0 0 1]

Add -3 times row 1 to row 2. Add -2 times row 1 to row 3.

[1 0   3 | 0  1 0]
[0 4 -10 | 1 -3 0]
[0 5 -10 | 0 -2 1]

Add -1 times row 2 to row 3.

[1 0   3 |  0  1 0]
[0 4 -10 |  1 -3 0]
[0 1   0 | -1  1 1]

Add -4 times row 3 to row 2, then interchange rows 2 and 3.

[1 0   3 |  0  1  0]
[0 1   0 | -1  1  1]
[0 0 -10 |  5 -7 -4]

Multiply row 3 by -1/10, then add -3 times the new row 3 to row 1.

[1 0 0 |  3/2 -11/10 -6/5]
[0 1 0 |   -1      1    1]
[0 0 1 | -1/2   7/10  2/5]

From this we conclude that A is invertible, and that

A^{-1} = [3/2 -11/10 -6/5; -1 1 1; -1/2 7/10 2/5]
(b) The same algorithm applies to the second matrix. Multiply row 1 by -1; add -2 times the new row 1 to row 2; add 4 times the new row 1 to row 3; and continue until the left block becomes the identity. The reduction terminates with I on the left, so the matrix is invertible, and the right block of the final partitioned matrix is its inverse A^{-1}.
12. (a), (b), (c) In each part the inverse is found by the same procedure: reduce [A | I] until the left block becomes I; the right block is then A^{-1}.
13. As in the inversion algorithm, we start with the partitioned matrix [A | I] and perform row operations aimed at reducing the left side to its reduced row echelon form R. If A is invertible, then R will be the identity matrix and the matrix produced on the right side will be A^{-1}. In the more general situation the reduced matrix will have the form [R | B] = [BA | BI] where the matrix B on the right has the property that BA = R. [Note that B is the product of elementary matrices and thus is always an invertible matrix, whether A is invertible or not.] Applying this to the given matrix (details omitted) produces the pair [R | B].
The reduced form R and the corresponding invertible matrix B, which satisfies BA = R, can be read off from the two blocks of the final partitioned matrix.

14. The same procedure applied to the matrix of this exercise produces its reduced row echelon form R together with an invertible matrix B such that BA = R.
15. If c = 0, then the first row is a row of zeros, so the matrix is not invertible. If c ≠ 0, then multiply the first row by 1/c, and add -1 times the new first row to the second row and to the third row. If c = 1, then the second and third rows are now rows of zeros, and so the matrix is not invertible. If c ≠ 1, then we can divide the second and third rows by c - 1, and from this it is clear that the reduced row echelon form is the identity matrix. Thus we conclude that the matrix is invertible if and only if c ≠ 0, 1.
16. By a similar analysis, the matrix fails to be invertible only for c = 0 and c = √2.

17. The matrix B is obtained by starting with the identity matrix and performing the same sequence of row operations that were applied to A.

18. B is the product of the elementary matrices corresponding to those row operations.
19. If any one of the ki's is 0, then the matrix A has a zero row and thus is not invertible. If the ki's are all nonzero, then multiplying the ith row of the matrix [A | I] by 1/ki for i = 1, 2, 3, 4 and then reversing the order of the rows yields [I | A^{-1}]; the nonzero entries of A^{-1} are the reciprocals 1/ki.

20. A is invertible if and only if k ≠ 0; its inverse is obtained by the same procedure, and its entries involve 1/k and p.
21. (a) The identity matrix can be obtained from A by first adding 5 times row 1 to row 3, and then multiplying row 3 by the reciprocal of its leading entry. Thus if E1 and E2 are the elementary matrices corresponding to these two operations, then E2E1A = I.
(b) A^{-1} = E2E1, where E1 and E2 are as in part (a).
(c) A = E1^{-1}E2^{-1}.
23. The identity matrix can be obtained from A by a sequence of row operations: interchange rows 1 and 3; add -1 times row 1 to row 2; add row 2 to row 3; multiply row 3 by the appropriate nonzero scalar; add row 3 to row 2; add -2 times row 3 to row 1; and add -1 times row 2 to row 1. The corresponding elementary matrices E1, E2, ... and their inverses are obtained by performing each operation (or its opposite) on the identity matrix.
25. The two systems have the same coefficient matrix. If we augment this matrix with the two columns of constants from the right-hand sides, we obtain a partitioned matrix of the form [A | b1 | b2]; computing its reduced row echelon form solves both systems at once. From this we conclude that the first system has the solution x1 = -9, x2 = 4, x3 = 0, and the second system has the solution x1 = 4, x2 = -4, x3 = 4.
26. The coefficient matrix, augmented with the two columns of constants, reduces to

[1 0 0 | -32 | 11]
[0 1 0 |   8 | -2]
[0 0 1 |   3 | -1]

Thus the first system has the solution x1 = -32, x2 = 8, x3 = 3; and the second system has the solution x1 = 11, x2 = -2, x3 = -1.
27. Each of these systems can be written in matrix form Ax = b with the same coefficient matrix A. The inverse of the coefficient matrix A is computed once, and the solutions are then given by x = A^{-1}b for each of the given right-hand sides b.
29. The augmented matrix can be row reduced to a matrix whose last row is [0 0 | b1 - 2b2]. Thus the system is consistent if and only if b1 - 2b2 = 0, i.e. if and only if b1 = 2b2.
30. The augmented matrix

[ 1 -2  5 | b1]
[ 4 -5  8 | b2]
[-3  3 -3 | b3]

can be row reduced to

[1 -2   5 | b1]
[0  3 -12 | b2 - 4b1]
[0  0   0 | -b1 + b2 + b3]

Thus the system is consistent if and only if -b1 + b2 + b3 = 0.

31. The solution of the corresponding homogeneous system can be written as b1 = s + t, b2 = 2s + t, b3 = s, b4 = t; thus the original system is consistent precisely for right-hand sides of this form.
33. The matrix A can be reduced to a row echelon form R by the following sequence of row operations:

A =
[ 0  1 7  8]
[ 1  3 3  8]
[-2 -5 1 -8]

(1) Interchange rows 1 and 2.

[ 1  3 3  8]
[ 0  1 7  8]
[-2 -5 1 -8]

(2) Add 2 times row 1 to row 3.

[1 3 3 8]
[0 1 7 8]
[0 1 7 8]

(3) Add -1 times row 2 to row 3.

R =
[1 3 3 8]
[0 1 7 8]
[0 0 0 0]

It follows from this that R = E3E2E1A where E1, E2, E3 are the elementary matrices corresponding to the row operations indicated above. Finally, we have the factorization A = EFGR where E = E1^{-1}, F = E2^{-1}, and G = E3^{-1}.
34. The matrix A is obtained from the identity matrix by adding a times the first row to the third row and adding b times the second row to the third row. If either a = 0 or b = 0 (or both) the result is an elementary matrix. On the other hand, if a ≠ 0 and b ≠ 0, the result is not an elementary matrix since two elementary row operations are required to produce it. Thus A is elementary if and only if ab = 0.
D2. There is not. For example, let b = 1 and a = c = d = 0; then the two sides of the equation are different matrices.
D3. There is no nontrivial solution. From the last equation we see that x4 = 0 and, from back substitution, it follows immediately that x3 = x2 = x1 = 0 also. The coefficient matrix is invertible.
(c) The matrix B must be of size 3 x 2: it is not square and therefore not invertible.
(b) False. For example, a product of two elementary matrices generally cannot be obtained from the identity by a single elementary row operation.
(c) True. This row operation is equivalent to multiplying the given matrix by an elementary matrix; and, since any elementary matrix is invertible, the product is still invertible.
(d) True. If A is invertible and AB = 0, then B = IB = (A^{-1}A)B = A^{-1}(AB) = A^{-1}0 = 0.
(e) True. If A is invertible then the homogeneous system Ax = 0 has only the trivial solution; otherwise (if A is singular) there are infinitely many solutions.

D7. No. An invertible matrix cannot have a row of zeros. Thus, for A to be invertible we must have a ≠ 0 and h ≠ 0. But (assuming this) if we add -d/a times row 1 to row 3, and add -e/h times row 5 to row 3, we have a matrix with a row of zeros in the third row.
P2. Suppose that A = BC where B is invertible, and that B is reduced to I by a sequence of elementary row operations corresponding to the elementary matrices E1, E2, ..., Ek. Then I = Ek...E2E1B, and so B^{-1} = Ek...E2E1. From this it follows that Ek...E2E1A = Ek...E2E1BC = C; thus the same sequence of row operations will reduce A to C.
P3. Suppose Ax = 0 iff x = 0. We wish to prove that, for any positive integer k, A^k x = 0 iff x = 0. Our proof is by induction on the exponent k.
Step 1. If k = 1, then A^1 x = Ax = 0 iff x = 0. Thus the statement is true for k = 1.
Step 2 (induction step): Suppose the statement is true for k = j, where j is any fixed integer ≥ 1. Then A^(j+1) x = A^j(Ax) = 0 iff Ax = 0, and this is true iff x = 0. This shows that if the statement is true for k = j it is also true for k = j + 1.
P4. From Theorem 3.3.7, the system Ax = 0 has only the trivial solution if and only if A is invertible. But, since B is invertible, A is invertible iff BA is invertible. Thus Ax = 0 has only the trivial solution iff (BA)x = 0 has only the trivial solution.
P5. Let e1, e2, ..., em be the standard unit vectors in R^m. Note that e1, e2, ..., em are the rows of the identity matrix I = Im. Thus, for any m x n matrix A, we have ri(A) = eiA for i = 1, 2, ..., m. Suppose now that E is an elementary matrix that is obtained by performing a single elementary row operation on I. We consider the three types of row operations separately.
Row Interchange. Suppose E is obtained from I by interchanging rows i and j. Then rk(EA) = rk(E)A = ekA = rk(A) for k ≠ i, j. Thus EA is the matrix that is obtained from A by interchanging rows i and j.
Row Scaling. Suppose E is obtained from I by multiplying row i by a nonzero scalar c. Then rk(EA) = rk(E)A = ekA = rk(A) for all k ≠ i. Thus EA is the matrix that is obtained from A by multiplying row i by the scalar c.
Row Replacement. Suppose E is obtained from I by adding c times row i to row j (i ≠ j). Then the analogous computation shows that EA is obtained from A by the same operation.
P6. Let A = [a b; c d], and let B = [0 1; 1 0]. Then AB = BA if and only if [b a; d c] = [c d; a b], i.e. if and only if a = d and b = c. Thus the only matrices that commute with B are those of the form A = [a b; b a]. Suppose now that A is a matrix of this type, and let C = [0 1; 0 0]. Then AC = CA if and only if [0 a; 0 b] = [b a; 0 0], and this is true if and only if b = 0. Thus the only 2 x 2 matrices that commute with both B and C are those of the form A = [a 0; 0 a] = a[1 0; 0 1] where -oo < a < oo. It is easy to see that such a matrix will commute with all other 2 x 2 matrices as well.
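The same conclusion can be reached symbolically; the sketch below (an added illustration using SymPy) imposes both commutation conditions and solves for the entries of A:

```python
import sympy as sp

a, b, c, d = sp.symbols('a b c d')
A = sp.Matrix([[a, b], [c, d]])
B = sp.Matrix([[0, 1], [1, 0]])
C = sp.Matrix([[0, 1], [0, 0]])

# Impose AB = BA and AC = CA and solve for the entries of A.
eqs = list(A * B - B * A) + list(A * C - C * A)
print(sp.solve(eqs, [b, c, d], dict=True))  # [{b: 0, c: 0, d: a}] -> A = a*I
```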
P7. Every m x n matrix A can be transformed to reduced row echelon form B by a sequence of elementary row operations. Let E1, E2, ..., Ek be the corresponding sequence of elementary matrices. Then we have B = Ek...E2E1A = CA where C = Ek...E2E1 is an invertible matrix.
Exercise Set 3.4

3. (a) (x1, x2, x3) = s(4, 4, 2) + t(-3, 5, 7); x1 = 4s - 3t, x2 = 4s + 5t, x3 = 2s + 7t.
(b) (x1, x2, x3, x4) = s(1, 2, 1, -3) + t(3, 4, 5, 0); x1 = s + 3t, x2 = 2s + 4t, x3 = s + 5t, x4 = -3s.

4. (a) (x1, x2, x3) = s(0, -1, 0) + t(-2, 1, 9); x1 = -2t, x2 = -s + t, x3 = 9t
(b) (x1, x2, x3, x4, x5) = s(1, 5, -1, 4, 2) + t(2, 2, 0, 1, -4); x1 = s + 2t, x2 = 5s + 2t, x3 = -s, x4 = 4s + t, x5 = 2s - 4t
7. (a) Two vectors are linearly dependent if and only if one is a scalar multiple of the other; thus these vectors are linearly dependent.
(b) These vectors are linearly independent.

10. u = 4v - 2w
11. (a) A line (a 1-dimensional subspace) in R^4 that passes through the origin and is parallel to the vector u = (2, -3, 1, 4) = (1/2)(4, -6, 2, 8).
(b) A plane (a 2-dimensional subspace) in R^4 that passes through the origin and is parallel to the vectors u = (3, -2, 2, 5) and v = (6, -4, 4, 0).

12. (a) A plane in R^4 that passes through the origin and is parallel to the vectors u = (6, -2, -4, 8) and v = (3, 0, 2, -4).
(b) A line in R^4 that passes through the origin and is parallel to the vector u = (6, -2, -4, 8).
13. The augmented matrix of the system reduces (details omitted) to the reduced row-echelon form

[1 6 0 11 | 0]
[0 0 1 -8 | 0]
[0 0 0  0 | 0]

Thus a general solution of the system can be written in parametric form as

x1 = -6s - 11t, x2 = s, x3 = 8t, x4 = t

or in vector form as

(x1, x2, x3, x4) = s(-6, 1, 0, 0) + t(-11, 0, 8, 1)

This shows that the solution space is span{v1, v2} where v1 = (-6, 1, 0, 0) and v2 = (-11, 0, 8, 1).
14. The reduced row echelon form of the augmented matrix is

[1 -1 0 -1  1 | 0]
[0  0 1  2 -3 | 0]
[0  0 0  0  0 | 0]

thus a general solution of the system is given by

(x1, x2, x3, x4, x5) = r(1, 1, 0, 0, 0) + s(1, 0, -2, 1, 0) + t(-1, 0, 3, 0, 1)
15. Reducing the augmented matrix (details omitted) shows that x2, x4, and x5 are free variables; writing x2 = r, x4 = s, x5 = t and solving for the leading variables x1 and x3 expresses the general solution in the vector form (x1, x2, x3, x4, x5) = r v1 + s v2 + t v3, where v1 = (-2, 1, 0, 0, 0) and v2, v3 are the two remaining solution vectors. The vectors v1, v2, and v3 span the solution space.
16. The reduced row echelon form of the augmented matrix is

[1 0 -4 0  1 | 0]
[0 1  1 0 -2 | 0]
[0 0  0 1 -3 | 0]
[0 0  0 0  0 | 0]

thus a general solution is given by (x1, x2, x3, x4, x5) = s(4, -1, 1, 0, 0) + t(-1, 2, 0, 3, 1).
17. (a) v2 is a scalar multiple of v1 (v2 = -5v1); thus these two vectors are linearly dependent.
(b) Any set of more than 2 vectors in R^2 is linearly dependent (Theorem 3.4.8).

18. (a) Any set containing the zero vector is linearly dependent.
(b) Any set of more than 3 vectors in R^3 is linearly dependent (Theorem 3.4.8).

19. (a) These two vectors are linearly independent since neither is a scalar multiple of the other.
(b) v1 is a scalar multiple of v2 (v1 = -3v2); thus these two vectors are linearly dependent.
(c) These three vectors are linearly independent since the corresponding homogeneous system has only the trivial solution. The coefficient matrix, which is the matrix having the given vectors as its columns, is invertible.
(d) These four vectors are linearly dependent since any set of more than 3 vectors in R^3 is linearly dependent.
21. (a) The matrix having these vectors as its columns is invertible. Thus the vectors are linearly independent; they do not lie in a plane.
(b) These vectors are linearly dependent (v1 = 2v2 - 3v3); they lie in a plane but not on a line.
(c) These vectors lie on a line; v1 = 2v2 and v3 = 3v2.

22. In parts (a) and (b) the vectors are linearly independent; they do not lie in a plane. In part (c) the vectors are linearly dependent; they lie in a plane but not on a line.
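Tests like these reduce to a rank computation; the sketch below (illustrative only — the specific vectors are sample data, not the book's) shows both an independent triple and a collinear one:

```python
import numpy as np

# Independent iff the matrix having the vectors as columns has full rank.
v1 = np.array([1.0, 2.0, 3.0])
v2 = np.array([0.0, 1.0, 4.0])
v3 = np.array([2.0, 0.0, 1.0])
M = np.column_stack([v1, v2, v3])
print(np.linalg.matrix_rank(M) == 3)   # True -> independent, not coplanar

# A 21(c)-style family: w1 = 2*w2 and w3 = 3*w2 all lie on one line.
w2 = np.array([1.0, -1.0, 2.0])
N = np.column_stack([2 * w2, w2, 3 * w2])
print(np.linalg.matrix_rank(N))        # 1 -> the vectors lie on a line
```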
23. (a) This set of vectors is a subspace; it is closed under scalar multiplication and addition: k(a, 0, 0) = (ka, 0, 0) and (a1, 0, 0) + (a2, 0, 0) = (a1 + a2, 0, 0).
(b) This set of vectors is not a subspace; it is not closed under scalar multiplication.
(c) This set of vectors is a subspace. If b = a + c, then kb = ka + kc. If b1 = a1 + c1 and b2 = a2 + c2, then (b1 + b2) = (a1 + a2) + (c1 + c2).
(d) This set of vectors is not a subspace; it is not closed under addition or scalar multiplication.
24. Sets (a) and (b) are not subspaces. Sets (c) and (d) are subspaces.

25. The set W consists of all vectors of the form x = a(1, 0, 1, 0); thus W = span{v} where v = (1, 0, 1, 0). This corresponds to a line (i.e. a 1-dimensional subspace) through the origin in R^4.

26. W = span{v1, v2} where v1 = (1, 0, 2, 0, -1) and v2 = (0, 1, 0, 3, 0). This corresponds to a plane (i.e. a 2-dimensional subspace) through the origin in R^5.
The given vectors are linearly dependent if and only if the matrix having them as its columns is singular. Assuming λ ≠ 1/2, this matrix can be reduced to a row echelon form whose last diagonal entry is 2 + 2λ, which is zero if and only if λ = -1. Thus the given vectors are dependent iff λ = 1/2 or λ = -1.
29. (a) Suppose S = {v1, v2, v3} is a linearly independent set. Note first that none of these vectors can be equal to 0 (otherwise S would be linearly dependent), and so each of the sets {v1}, {v2}, and {v3} is linearly independent. Suppose then that T is a 2-element subset of S, e.g. T = {v1, v2}. If T is linearly dependent then there are scalars c1 and c2, not both zero, such that c1v1 + c2v2 = 0. But, if this were true, then c1v1 + c2v2 + 0v3 = 0 would be a nontrivial linear relationship among the vectors v1, v2, v3, and so S would be linearly dependent. Thus T = {v1, v2} is linearly independent. The same argument applies to any 2-element subset of S. Thus if S is linearly independent, then each of its nonempty subsets is linearly independent.
(b) If S = {v1, v2, v3} is linearly dependent, then there are scalars c1, c2, and c3, not all zero, such that c1v1 + c2v2 + c3v3 = 0. Thus, for any vector v in R^n, we have c1v1 + c2v2 + c3v3 + 0v = 0 and this is a nontrivial linear relationship among the vectors v1, v2, v3, v. This shows that if S = {v1, v2, v3} is linearly dependent then so is T = {v1, v2, v3, v} for any v.

30. The arguments used in Exercise 29 can easily be adapted to this more general situation.
32. First note that the relationship between the vectors u , v and s, g can be written as
[s] = [0.12
g
1 0.06] _, [u]
1 v
1
= 0.9928
[ 1
- 0.12
Parts (a.), (b) , (c) of the problem can now be a nswered as follows:
(a) s = 6 9i 28 ( u - 0.06v)
(b) g=o.9~28(-0.12u+v)
~s+h= l.9~ 56 (0.88u +0.94v) = 1 2~ u +~!~~v
5
(c)
33. (a) No. 1t is not closed under either addition or scalar multiplication.
(b) P 876 = 0.38c + 0.59m + 0. 73y + 0.07k
P 2t6 = 0.83m + 0.34y + 0.47k
P328 = c + 0.47y + 0.30k
(c) H P e75 + P2J6) corresponds to the CMYK vector (0.19,0.71 , 0.535, 0.27).
D2. ( a ) T wo vectors in Rn will span a plane if and only if they are nonzero and not scalar multiples
of one another.
(b) T wo vectors in R" will span a line if and only if they are not both zero and one is a scalar
multiple of the other.
(c) span {u} = span {v} if and only if one of ~be vectors u and v is a. scalar multiple of the other.
D 3. (a) Yes. If three nonzero vectors are mutually orthogonal then none of t.hem lit:S in the plane
spanned by the other two; thus t he three are linearly independent.
(b) Suppose the vectors vl> v2 , and v 3 are nonzero and mutually orthogonal; thus v, v; =
llvdl 2 > 0 fori= 1, 2, 3 and v; vi = 0 for i=/: j . To prove they are linearly independent we
must s how that if
C1V1 + C2V2 + C:JVJ = 0
then Ct = c-_a = C3 = 0. Tbis follows from the fact that if c 1v1 + c 2v2 + C3 Y3 = 0, then
2
C; llv;ll = vi (c1v1 + c2v2 + c3v3) = V1 0 = 0
for i = 1, 2, 3.
D4. The vectors in the fi rst figure are linearly independent since none of them lies in the plane spanlled
by the other two (none of them can be expressed as a linear combination of the other two). The
vectors in the second figure arc linearly dependent since v 3 = v 1 + v 2 .
DS. This :set is d ose>d under scalar multiplication, but uot under addi~ion . For example, the vectors
u = (1,2) nnd v = (-2, - 1) correspond to points in Lhe set, but 11 + v = (-1, 1) does not.
06. (a) False. For example, two of the vectors ma.y lie on a line (so one is a scalar multiple of the
other), but the third vector may not lie on this same line and therefore cannot be expressed
a.s a linear combination of the other two.
(b) False. The set of all linear combin ations of two vectors can be {0} (if both are 0), a line (if
one is a scalar multiple of the other), or a plane (if they are linearly independent) .
{c) False. For example, v and w might be linearly dependent (scalar multiples of each other).
[But it is true that if {v, w} is a linearly independent set, and if u cannot be expressed as
a linear combination of v and w, then {u, v, w} is a lin~~trly independent set .J
(d) True. See Example 9.
(e) Ttne:. lf c1(kv1)+ cz(kv2 ) + cz(kvz) = 0, then k(c1 v1 +c2v2 + c3v2) = 0. Thus, since k =/:
0, it follows that Ct v 1 + c2 v2 + c 3 v 2 :::::: 0 and so c1 = cz = c3 = 0.
D7. (a) False. The set {u, ku} is always a linearly dependent set.
(b) False. This statement is true for a homogeneous system (b:::::: 0), but not for a non-homogen-
eous system. !The solution space of a non-homogeneous linear system is a translated sub-
space.J
Chapter 3
(c) True. If W is a subspace, then W is alr~ady closed under scalar multiplication and addition,
and so spa.n(W} = W .
(d) False. Forexample,ifSt = {(I ,O),(O, l}}a.nd S2 = {(1, l) ,(O, l}}.thenspan(SI) = span(S2 )=
R2 , but. S1 # S2.
08. Since span(S) is a subspace (already closed under scalar multiplication and addition), we have
s pan(spa.n(S}) = spa.n(S).
Pl. Let() be the angle between u and w, and let 4> be the angle between v and w . We will show that
9 = . First recall that u w = llu!lllwll cosO, so u w =
kllwll cos9. Simllarly, v w.=lllwllcos.
O n the other hand we have
aud ::;o kllwll COB()~ u. w = lk 2 + k(u . v ) , i.e. l!wll cos e = lk + ( u. v) . Similar calculations show
that llwll cos = (v u) + k l ; thus Uwll cos B = Uwll cos. It follows that cos 9 =cos and () = </>.
P2. If X belongs to WIn w2 and k is a scalar, then kx also belongs to Wt n wl since both Wt and
w2 are subspaces. Similarly, if XI and X2 belong to Wt n W2 , then x, + X2 belongs to WIn w2.
Thus WI n w2 is closed under scalar multiplication and addition, .i.e. wl n w2 is a subspace.
P3. First we show t hat w1 + W 2 is closed under scalar multiplication: Suppose z = x + y where xis in
W1 andy is in W2. Then, for any scalar k , we have kz = k(x + y) = kx + ky, where kx is in W1
and ky is in W:~ (since w, and w2 are subspaces); thus kz IS in w} + w2. Finally we show that
WI + w2 is closed under addition: Suppose Z l =X) + Yl and Z 2 = X 2 + Y2 o where X) and X2 are
in W1 and Y and Y2 are in W2 . Then z 1 + Z2 == (x1 + y, ) + (x2 + Yz) = (x1 + x2) + (Yl + Y2) .
where X } + X2 is in lt\ and Yl + Y2 is in w2 (since WI and w2 ar~ subspaces); thus Zl + Z2 is in
H.-\ + w2.
1. (a) The reduced r,Jw echelon form of the augmented matrix of the h omogeneous system is
l
0
0
!-~
~]
thus a general solut ion is x1 Gr (in column vector form)
EXERCISE SET 3.5 91
(c) From (a) and (b), a general solution of the nonhomogeneous system is gjven by
(d) The reduced row echelon form of the augmen_t~J;l ).~atrix of the nonhomogeneous system is
.. --- - .. -- ::~~-:--- - -
,,.
1
[~
2
3 -3
f
.:.,
0
0
0
0
;]
T his solution is related to the one in Qa.rt (c) by the change of variable .s1 = s, 1
t = t + 1.
-- ------- r--. otNo----.- ~-
1
[~
0
(d) T he rnd uced row echelon form is 1 -g
o o
: [:} [1J
'..
+t' nv ..-"
3. (a) The reduced row echelon form of the augmented matrix of the give n system is
!3 0 .3!.]
0 1 1
0 0 0
4. (a) The reduced row echelon form of the augmented matrix of the given system is
0 _1: 133]
1 9 -7
0 0 0
[~
1
: =~~] [: ] : : [-~]
5 -12 c3 1
is consistent. The reduced row echelon form of the augmented matrix of this system is
0
1
0 -2]
0 3
0 1 1
From this we conclude that the system has a. unique solution, and that w = -2v1 + 3v2 + v3.
6. The vector w can be expressed as a linear combination of v 1 , v 2 , and v 3 , if and only if the system
f3
-2
8
1
l2 4
EXERCISE SET 3.5 93
is consistent. The reduced row echelon form of the augmented matrix of this system is
,.'
[~
/
0 3
/
/ '
l -1 10]
-6
,
0 0 0
..
...--- ...... .
--- - . . /
and from this we conclude that the :;ystem. hilS inftnitcly many solutio.n~ given b}: c1 .:;::;.10 - ..3t 1~
c 2 == -6 + t, c3 = t. Thus.w can be expressed as a linear combination of v1, v2, and V3. fii particutat;
taking t = 0, we have w = 10v 1 - 6vz.
H
1 l
-3
0 2
is consistent. The row reduced echelon form of the augmented mo.trix of this system is
is consistent.. The reduced row echelon form of the augmented matrix of this system is
[~
0 2
-1
0 0
From this we conclude that the system has infinitely many solutions; thus w is in span{vt, v2, VJ}.
9. (a) The hyperplane al. consists of all vectors x = (x, y) such that a x = 0, i.e. -2x + 3y = 0. This
corresponds to the line through the origin wit.h parametric equations x = y = t. it,
(b) The hyperplane a..!. consists of all vectors x = (x, y, z) such that a x = 0, i.e. 4x- 5z = 0. This
corresponds to the plane through the origin with parametric equations x = ~t, y = s, z =t.
(c) a.i. consists of all vectors x = (xbx 2, X3, x4) such that a x = 0, i.e. Xt + 2x2- 3x3 + 7x4 = 0.
This is a hyperplane in R 4 with parametric equations Xt = -2r + 3s- 7t, x2 = r, X3 = s,
X4 = t.
0. (a) x = 4t, y = t (b) x = s, y = -3s + 6t, z = t
(c) x1 = r + 2s, x 2 = r, x 3 = s, s;4 = t
94 Chapter 3
11. This system reduces to a single equation, Xt + x2 + x3 = 0. Thus a general solution is given by
x 1 = -s- t, x 2 = s, x 3 =: t; or (in vector form)
13. The reduced row echelon form of the augmented matrix of the system is
[~
0 -~7 T
19
1 ~ l
7 -7
X2
xs =r
-7
7
2
+s
-7
1
7
+t
-:1
~j
1 0
X4 0 1
X f.> 0 0
17. (a) A vector x = (x, y,z) is orthogonal to a= (1, 1, 1) and b = ( -2, 3,0) if and only if
~~-;~ y- ~ z. = 0~-,;
g+3y ~o j
(b) The solution space is the line through the :iii;~--~th~tj;~erpendicu)ar to the vectors a and b.
EXERCISE SET 3.5
(c) The reduced row echelon form of the augmented matrix of the system is
( [~-~-T iJ -
and so a
"-
gtneraj,..Ql:ut~giv~ by x =~-=:: -~t, z = t; or (x, y,z) i;(=j:~~ . l).,ote
that the vectof:
- ==-~=!.~
..
. :.t.~~..J orthog~~a.l !9,,_botl!.~_ap<:\ I:!.,. ~ --
l8. (a) A vector x = (x,y,z) is orthogonal to a = ( - 3,2,-1) and h= (0, -2, -2) if and only if
- 3x + 2y - z =0
- 2y- 2z = 0
(b) The solution space is th.e llne through th.~ . ori,g!11.. tha,:t is perpendicular to th~ ~~~t~rs a.and"~
(c) (x, y, z ) = 'i:{~i~ _: 1, 1); note that the vector v = (- 1, :.__ i, 1) is orthogonal to both a and b.
-==--~-
--
.9. {a} A \'ector x = (x 1 ,x2,x3 ,x4) is orthogonal to v1 = (1, 1, 2,2) and vz = {5, 4,3,4) if and only if
x 1 + xz + 2x3 + 2x4 = 0
5xl + 4x z + 3x3 1- 4X-t = 0
(b) The solution space is the plane (2 dimens ional s ubspace) in R4 that passes through the origin
a.nd is perpendicular to the vectors v1 and v 2.
(c) The redUl:ed row P-chelon form of the augmented matrix of t.he system is
~]
0 - 5 -1
l 7 6
=
and so a. general solution of the system is given by (x ,,xz,XJ ,x4) s{5, -7,1 ,0} + t(4, - 6,0, 1) .
Note that t he vectors (5,-7, 1,0) and (4 ,-6,0, 1) are orthogonal to both v 1 and v2.
[~ ~]
0 0 5 -- IO
13 _33
1 0 - Z5 50
31 21
0 25 50
Dl. The solution set of Ax = b is a. translated subspace x 0 + W , where W is the solution space of
Ax= 0 .
D2. If vis orthogon al to every row of A, t hen ilv = 0 , and so (since A is invertible) v = 0 .
D3. T he general solution will have at least 3 free variables. Thus, assuming A i: 0, the solution space
will be of dimension 3, 4, 5, or 6 depending on how much redunda.ocy there is.
D4. (a) True. The solution set of Ax= b is of the form x 0 + W where W is the solution space of
Ax = O.
0
(b) False. For example, the system :r - Y = is inconsistent, but the associated homogeneous
x-y=l
system has infinitely many solutions.
(c) True. Each hyperplane corresponds to a single homogeneous linear equation in four variables,
and there must be a least four equations in order to have a unique solution.
(d) 'Tme . Every pla ne in R3 corrr.:>ponrls to a equation of the form ax + by + cz = d.
(e) False. A vector x is orthogonal to row(A) if and only if x is a solution of the homogenous
system Ax = 0.
PZ . Suppose that Ax = 0 has infinitely many solutions and that Ax= b is consistent. Let Xo be
any solution of Ax = b . Then, for any solution w of Ax= 0 , we have A(xo + w) = A.xo + Aw =
b + 0 = b . Thus, if Ax = 0 has infinitely many solutions, the system Ax = b is either inconsistent
or ha.<> infinitely many solutions. Conversely, if Ax= b has at. most. one solution, then Ax= 0 has
only t.he trivial solution.
P3. If x 1 is<:\ solution of Ax = band x2 is a solution of Ax = c, then A(x, + x2) = Ax1 + Axz = b + c;
i.e. x 1 + X2 is a solution of Ax= h +c. Thus if Ax= band Ax ;= care consistent systems, then
Ax = b + c is also consistent. This argument c2-n easily be adapted to prove the following:
Theorem. If Ax = bi is consistent for each j = 1, 2, ... , r and if b = b 1 + b2 + + b,., then
Ax = b is also consistent.
P4 . Since (ka) x = k(a x) and k f. 0, it follows that (ka) x = 0 if and only if a x = 0. This proves
that. (ka)..L =
a ..L.
~-1O!O
1
2. (a}
[~ -2
0
0
r 0
3
= rl0
0
0
1
-2
0 fl
(b) not invertible.
3. (a)
[~ - 1
0
0
~][
2
_; l
2 5
[! -~]
4 10
(b )
[ 2 1] [~' 0 [ 10 -2]
-4 1
2 5
- 2] = - 20
10
- 2
-10
[~
(c) - 1
0
0~][-:2 1~] [~ -2]0= [3~ 0
-1
0][ 10 -2]
0 - 2 = ro -6
-20 ] 20 2
0 0 2 10 -10 20 - 20
~]
[-12-3
-~l
4. (a) [ - 16 - 2
-2
(b)
-5
10 (c)
[ - 24 - 10
3 - 10
12]
- 20
" 15 5 60 20 -1~
9. Apply the invers ion a lgorit hm (see Section 3.3) to find the inve rse of A .
[~ ~]
2 3 :1 0
1 -2:0 1
0 '
1 :0 0
Add 2 t.i111es row 3 to row 2. Add -'- 3 times row 3 t o row l.
[~ -;]
2 0: 1 0
1 o:o
I
1
0 1 :0 0
Add -2 times row 2 to row 1.
[~ -~]
0 0: 1 -2
1 olI o 1
0 1:0 0
10.
If A= [~ -1
0
4
; } ,lhro A- = [~ 1
0
-1
4
0
0
1 r[ -
1
2
- 11
- 1
0
4 ;]
11. A =
u 0
0
- 1 i]
13. The matrix A is symmetric if and only if a, b, and c satisfy the following equations:
12. A = [~ 0-8] 0
4
-4
0
a - 2b + 2c = 3
r2a +- b+ c-,;- -{) ..
~ - +- c-~ -2
14. There are infinitely ma.ny solutions: a = l + lOt, b = 1t, c = t , d = 0, whe re - oo < t < oo.
18. From Theorem 3.6.3, the factors commute if and only if the product is symmetric. Thus the factors
- 12
A=[~
0 0] [(! 0)- 2
~ ] - [~~~]
3 0
20. If 4 0 , then A- 2
=
0 l 0 ( 1)-2 0 0 I
EXERCISE SET 3.6
Thus, from Theorem 3.6.5, AAT ond AT A are also invertible; furthermore, since these matrices
are symmetric, their inverses are also symmetric by Theorem 3.6.4. .Following are the matric:es
AAT, AT A, and their inverses.
ATA =
r ~ 10
6
3 ~] (AT A)- 1
=
[ I -3 8]
-3
8
10
- 27
-27
74
~~l
3 - 27
AAT :::::
[l 10
6
(AAT)- 1 = [ 74
-2~ 10
-3
-;]
22. Following are the matrice..<; A AT and AT A, and their inverses.
['i ~] -I~]
5 -3
AT.4 = 5 {AT,4.)-1 = [ - 3I 10
2 5 - 17 30
r
[~ ~] -~]
3 35 -13
AAT = 10 1
(AA ')- = l-13 5
5 5 -2
23. The fixed points of t he matrix A == [~ ~] are ihe solut ions of the homogeneous system (1 - A )x =
0. The augmented matrix of this system is
- 1
[- J
-1 o]
-1 0
and from this it is easy to see tha t the system has the gem~ra.l solution x 1 = t, x2 = -t. T hus the
fixed poi nts of A are vectors of t he form x = [_~J = t [_ ~], where - oo < t < oo.
25. (a) If A = [~ ~]. then A2 = [~ ~] [~ ~] =[~ ~J Thus A is nilpotent with nilpotency mdex 2,
(b) A o o2 31,]
= [0
0 0 0
has nilpotency index 3. I-A~ r~ =:]
~] + [~ ~ ~] + [~ ~ ~] [~ ~ ~] =
27. (a) If A is invertible and skew symmetric, then (A- 1 )T = (AT)- 1 = (-A)- 1 = -A- 1 ; thus A- 1
is skew symmetric.
(b) If A and B are skew symmetric, then
28. (a) If A is any square matrix, then (A+ AT)T =AT+ ATT = A'r +A= A+ AT, so A+ AT is
symmetric. Similarly, (A- AT)T =-(A- AT), so A- AT is skew-symmetric.
(b) A= HA
+.A1') +~(A- AT)
(c) A= [~ ~) = ~ [~ ~] + ~ [~ -~]
29. If H =In- 2uuT then we have nr = I'I- 2(uuT)T =In- 2uuT = H; thus li is symmetric.
30.
AA
T
= ['']
r2
'~ [r, rf
T
r;,:]
['' ! 1
rlrr r1rm r1 r1 :r 1 r2
'' rm
= r2r1 r2rr ,,,~Tl [ ''; r,
r2 rz f2 r rn
rmr1
T r,.,.rT2 fmfm rm rl I'm rz Tm I'm J
31. Using Formula (9), we have tr(Ar A}= lladl 2 + lla2!1 2 + + tlanll 2 = 1 + 1 + + 1 = n.
(b) There are other such factorizations as well. For example, A "" [!::: :::
3a31 an
DISCUSSION AND DISCOVERY 101
04. Using Formula {9), we have tr(AT A)= lln1JI 2 + 1la21J 2 + + IJa,.jJ 2 . Thus, if ArA = 0, then it
follows that llatll 2 = 1Jazll 2 =
= llanll2 = 0 amlso A= 0.
0 5. (a) lf A = [d~ d~ ~1, then A2= [d~ d~ ~1; thus A 2==- A if and only if d~ = d; for 1 = 1, Z, and
0 0 d3 0 0 d~
3, n.nd this i:> t.rue iff di - 0 or 1 for i = 1, 2, ancl 3. There are a. tOtal of eight such matrices
(3 x 3 matrices whose diagonal entries are either 0 or 1)
(b) There are a t otal of 2n such matrices (n x n. rnatrice.s whose diagonal entries are either 0 or 1).
D6. If A = [d'0 dl0], then A 2 + 5A + 6!2 = 0 if and only if d? + 5'-. + G = 0 for i = 1 and 2; i.e. if and
only if d; = - 2 or - 3 for i = 1 and 2. There are a t.ota.l of fonr s uch matrices (auy 2 x 2 diagonal
matrix ~hose diagonal entries are either - 2 or - 3) .
D7. If A is both symmetric and skew symmetric, then AT= A and AT = - A ; thus A = -A and so
A =0.
0 8. In a symmetric matrix, enlries that are symmetr ically positioned across the main diagonal are
equal to each other. Thus a symmetric matrix is completely determined by the entries that lie on
or above the main rliagonal, and entries t hat. appear below the main diagonal a.re duplicates of
entries that appear above the main diagonal. Au n x n matrix has n entriP.s on the main diagonal ,
n - l entries on the diagonal just above the maiu diagonal, etc . Thus there are a total of
n + (n- 1) + + 2 + 1 = n(n 2+ 1)
entries lhat lie on or above the main diagonal. For a ;;ymmet.ric matrix, t his is the maximum
number of distinct entries the matrix can have.
In a skew-symmetric matrix, the diagonal entries are 0 and entries t hat are symmetrically posi-
tioned ac:ross the main diagonal arc the negatives of each other. The maximum nwnber of distinct
entries can be attained by selecting distinct positive entries for the n(~- l) p,.,sit.ions above the 1.0ain
diagonal The entries in the n(n2- 11 positions below the main diagonal will then automatically be
distinct from each other and from the entries on or above the main diagonal. Thus the maximum
number of distinct entries in a skew-symmetric matrix is
n(n- 1} n(n- I )
--'--:---.....:. +
2 2
+ 1 = n(n- 1) + 1
09. If D == rd~ d~J, then AD = ldt a1 d2a2J where a 1 and a2 are lhe rolumns of A. Thus AD =
I= lete2] (where e1 and e::~ are the standard unit vectors in R 2) if and only if dlaJ = e1
and dza2 = e2. But this is true if and only if d1 i- 0, d2 :/; 0, a 1 =
-J;e 1 , and a 2 = :, ~2 Thus
102 Chapter 3
A =
_!.
d~ 1
0] where d 1 , d 2 =1- 0. Although described here for the case n = 2 , it should. be clear
[
d2 '
that the same argument can be applied to a square matrix of any size . Thus, if .1D = 1, then the
..!. 0 0
f dt
o o
l
.l. .. .
d2
tliagonal entries d 1 , dz , ... ,d.n of D must be nonzero, and A= : : ... ; .
0 0 ... ..1.
d,..
D lO. (a) False. If A is not square then A is not invertible; it doesnt matter whether AAT (wh.ich
is always square) is invertible or not. (But if A is square and AAT is invertible, then A is
invertible by Theorem 3.6.5.]
(b) False. For example if A = G~) and B = [~ !] ,then A + B = G~) is symme~r~~-
( c) True. If A is both symmetric and triangular, then A must be a diagonal m atrix. Thus
0 0 Un 0 0 p(d., )
symmetric and triangular).
(d) True. For example, in the 3 x 3 case, we have
(e) True. If Ax= 0 has only the trivial solution, then A is invertible. But if A is invertible then
so is AT {Theorem 3.2.11); thus Ar x =a has only the trivial sol ution.
Step 2 (induc~ion st ep). Suppose the statement is t rue for k = j, where j is an integer ~ 1. 'rhen
:
, :
l
0 0 d., Q 0 I 0 0
d~
01
~
[
}, 0 .. .
0 }; 0
D= is inver tible with n-' = On the other hand if auy one of
0 0 dn 0 0 :i;;- .
the diagonal entries is zero, then D has a row of zeros and thus is not. invertible.
P 4. We will show t hat if A is symmetric (i.e. if AT == A), then ( An)T = A 11 for each positive integer .
n. Our proof is by induction on the exponent n .
Step L Since A is symmetric, we have (A 1) T == A'~' = A = A 1 ; thus t he statement is true for
n = 1.
Step 2 (induction step). Suppose the statement is true for n = j, where j is An integer ~ 1. Then
(Ai+ l f = (AAi)T = (Ai )T A1' = AJ A= AH 1
P5. If A is invertible, then Theorem 3.2.11 implies AT is !::1vertible; thus t he products AAT and AT A
are invertible as well. On the other hand, if either A AT or A T A is invertible, then Theorem
3.3.8 implies that A is invertihle. It follows that A , AAT, and AT A are eith~p all invertible or all
singular.
The system U x =y 1s
x1- 2x2 =0
X2 = 1
from which, by back substitution, we obtain x 1 = 2, X2 = 1. It is easy to check that this is in fact
the solution of Ax = b .
x2 + 2x3 = -5
X 3 = -3
from which, by back s ubstitution, we obtain :x: 1 = -2, x2 = 1, x 3 = - 3. It is easy to check t ha.t
this is in fact the solution of Ax = b .
is an LU-factorization of A.
To solve the system Ax= b where b = r=~J , we first solve Ly = b for y , and then solve Ux = y
for x :
The system Ly ;:: b is
2yl = - 2
-yl + 3y2 = - 2
from which we obtain y 1 =:=. - 1, Y2 ::::: - 1.
The system Ux = y is
X1 + 4X2 = - 1
X2 = - 1
from which we obtain x 1 = 3, x 2 = - 1. It is easy to check that thi! is in fact the solution of
Ax. =:::. b.
EXERCISE SET 3.7 105
7. The matrix A can be re<luced to row echelon form by the following sequence of operations:
[~
-1
l
4
-l] ['
-1
1
-7 0
0
-1
1
0
-I] [' -1 -1]
- 1 -7
5
0
0
1 - 1 =
0 1
u
The multipliers associated wilh these operations arc 4, 0 (for the second row), 1, -~. -4 , aud ~i
thus
~
0
A = LU = [ -2
- 1 4
is an LU-factorization of A.
To solve the ystem Ax =b w hae b = [=~] , we fi.-s.,olve Ly =b ro, y, and then wive U x =y
for x:
The system Ly = b is
2yl = -4
- 2y2 = - 2
-y! + 4yz + 5y3 = {)
from which we obtain Y1 = -2, Y2 = 1, YJ = 0.
The system Ux = y is
X1 - X2 - X3 = -2
xz- x 3 = I
X3 = 0
from which we obtain x 1 = -1, .1.' 2 = 1, X 3 = 0. Jt is easy to check that this is the solution of
Ax=b.
4
~1 [~
- 2 0
1
0
~]
l
9. The matrix A can be reduced to row echelon form by the following sequence of operations:
[-~
~] [~ ~1 ~ [~ ~1 ~
0 1 0 - 1 0 - 1
3 -2 3 -2 3 0
A= -1
-1
0 -1 2 2 -1 2
0 0 1 0 1 0 1
[~ ~1 [~ ;1 ~ [~ ~1 ~
0 -1 0 -1 0 -1
1 0 l 0 1 0
~
-1 2 0 2 0 1
0 1 0 1 0 1
."!.:.
[~ ~1 -u
01 [1
0 -1 0 -1
1
0
0
1 2J1 ~ 00 0
l 0
1
0 0 4 0 0 0
The multipliers associated with these operations are -1 , -2, 0 (for the third row), 0, ~ . 1, 0, ~.
-1 , and ~; thus
~][~ ~]
0 0 0 -1
A-LU-
[-1~ - 1
3 0 1 0
2 0 1
0 1 0 0
is an LU-decomposition of A.
To solve thesystem Ax= b where h = [-;],we first solve Ly - b lory, and then solve Ux - y
for x :
The system Ly =b is
-yl == 5
2yl + 3yz =- 1
Y2 + 2y3 == 3
Y3 + 4y4 = 7
x1 - X3 =- 5
xz + 2x4 = 3
X3 + X4 = 3
X4 = 1
from which we obtain x 1 = -3, x 2 = 1, x 3 = 2, x 4 = 1. It is easy to check that this is in fact the
solution of Ax = b.
EXERCISE SET 3.7 107
0 -1 -4 - 5
0 0] A= LU = ['~
0
4
0 2
0 -1 -1
0
0
j [~
-2
1
0
0
0
3
1
0
0]0~ . The
X1 - 2Xz - X3 = !
r
-~2
T 2"'
-"3 -- -6l
:r.3 -- 7
il
:1yl = 0
2yl + 4y2 = 1
-4y1 - Y2 + 2y3 = 0
from which we oblaiu !11 = 0, )/2 ::.:. L Y3 = ! . Then the system U x = Yz is
xa - 2x2 - X3 = 0
xz + 2:r3 =!
X3 = 4
x1 - 2x2 - X3 = 0
x2 + 2x3 = 0
X3 = ~
l ~
12. Let e 1 , e 2 , e 3 of! the s tandard unit vectors in R 3 The jth column x i of the matrix A - .L ~ obtained
by solving the syst em Ax = ej for x = Xj. Using the given LU-decomposition, we do this by .first
solving Ly = e1 for y = y 1 , and then solving Ux = Yj for x = x;.
* -t. .
'l l
7 r.i
13. The nta.t l ix il can 1.-c reduced to row echelon form by Lhe followin g sequence of oper ations:
-:]
1 2
A:.::
H 0
2
-t
2
0
2
2
I
--+ 1
2
-t
[~
1
i
1
-11lj 0
2
-t
r
0
I
2
1
0
-:] =U
where thfl a.o:;sociated multipliers are ~, 2, -2, 1 (for the leading entry in t he second row) , and
= [-~ ~1
0
- J. This leads t.o the factorization A = LU where L If instead we prefer a lower
2 1 l
t.ri<mgular factor t hat has 1s on the main diagonal, this can be achieved by shifting the diagonal
entries of L to a diagonal mat rix D and writing the factorization as
EXERCISE S ET 3.7 109
15.
-2 -4 l1 1 2 2 1 1 0 0 63 1 0
(b) T his is not a. permutation matrix; the second row is not a. row of h.
(c) T his js a permutation matrix; it is obtained by reordering the rows of 14 (3rd, 2nd , 4th, pt ).
16. (b) is a permutation matrix. (a) and (c) are not permutation matrices.
17. The system Ax~ b i<equivalent top-> Ax ~ p -> b where p-> ~ P ~ ~ [! :]. Using the given
1 0
LUx= ~ 1
[
3 -5
We solve this by first solving Ly = p-lb for y, and then solving Ux = y for x.
The system Ly = p - I b is
Y1 = 1
Y2 = 2
3yl - 5y2 + y:,~ = 5
from which we cbt..ain y 1 = 1, '!/2 = 2, y3 = 12. Finally, t he system Ux = y is
x1 + 2x2 + 2x3 = 1
X2 + 4X3 = 2
17x3 = 12
19. If we interchange rows 2 and 3 of A, then the resulting matrix can be reduced t o row echelon form
without any further row !nterchanges. This is equivalent to first multiplying A on the left by the
corresponding permutation matrix P:
3 -1 -1
0
[3 -1
2 -1
2 :1
J
110 Cha~ter 3
3 -1
PA = 0 2
[3 -1
PA =
3
0
-1
2
[ 3 -1
0] [3 0 0] [1
1 = 0 2 0 0
1 3 0 1 0
1 0 0] [3 0 0] [1 -~1 -0~] = p - 1 LU
A=
[0 1 0 3 0 1 0
0 0 1 0 2 0 0
0
Note that, since p- =[~zlthis decomposition ca.n also be '~~~~ ~tlten as A = P LU.
1
PA x == DUx == 0 2 0 0 1 x 2] 4 Pb
3 0 } 0 0 } X3 1
20. If we interchange rows 1 and 2 of A, then the resulti ng matrix can be reduced to row echelon form
withoul any furt her row interchanges. This is equivalent to first multiplying A on the left by the
corresponding permutat ion matrix P:
P A = LU = [~ ~ ~1 [~ ~
2 0 -3 0 0
-;]
1
PAx = LUx = [~ ~ ~1 [~
2 Q -3 0
1
0
-;] [ ::] = [
1 X3 -2
~1 = Pb
21. (a) We have n = l 0 5 for the given systern and so, from Table 3. 7.1, the number of gigaflops
required for the forward and backward phases a.re approximately
Grorwa.r<l = ~n 3
X 10-9 = ~(10:; ) 3 X w- 9 = s X 106 ;::::: 670,000
x = a, y = b, w:z; = c, wy + z =d
and , since a =I= 0, this system has the unique solution
c (ad- be)
x =a, y= b, 711 = -, z=
a a
(b ) fl-orn the above, we ha\'e
[: !] = [~ ~][~ ~]
DISCUSSION AND DISCOVERY
01. The rows of Pare obtaine<l by reordering of the rO'I.VS of the identity matrix 14 (4th , 3'd, 2nd, P~) .
Thus P is a per mutation mn.trix. Multiplication of A (on the left) !;,y P results in (.he corresponding
reordering of the rows of A; thus
3 - 3
- 11 12
PA=
0 7
2 1
CHAPTER 4
Determinants
2. 1: ~I = (4)(2) _ (1){8) = s- 8 = o
7
3. ,-S 1 = (-5){-2) - (7)(-7) = 10 + 49 = 59
-7 -2
5.
I
a- 3
I
-3 a -5 2 = (a- 3)(a - 2) - (5)( -3) = (a2 -Sa+ 6) + 15 = a2 - 5a + 21
-2 7 6
6. 5 1 -2 = (- 8 - 42 + 240)- (18 + 140 + 32) = 190- 190 = 0
3 R 4
-2 1 4
1. 3 5 - 7 = ( - 20 - 7 + 72) - (20 + 81 + 6) = 45- 110 = -65
l 6 2
-1 1 2
8. 3 0 -5 = (O- 5 + 42) - {0 + 6 + 35) = 37 - 41 = - 4
7 2
3 0 0
9. 2 - 1 5 = (12 + 0 + 0) - (0 + 135 + 0) = - 123
1 9 -4
c -4 3
10 . 2 1 c2 = {2c -l6c2 + 6(c - l)) - (12+ c3 (c- 1) - l6) = - c +c3 -16c2 +Be- 2
4 c- 1 2
11. (a) {4, 1, 3, 5, 2} is a.n odd permutation ~3 interchanges). T he signed product is -a 1 4a.21a.33a4sa.~2
{b) {5, 3, 4, 2, 1} is an odd permutation (3 interchanges). The signed product. is -atsa.z3a34a.42ast
(c) {4, 2, 5, 3, 1} is an odd pArmuta.tion (3 interchanges). The signed product is -al4ll22llJS043ll51
(d) {5, 4, 3, 2, 1} is a.n even permutation (2 interchanges) . The signed product is +ats<124a33a.czas1
{e) { 1, 2, 3, 4 , 5} is an even permutation (O interchanges). The signed product is +au ana33a44a.~~
(f) {1 , 4, 2,3, 5} is an even permutation (2 interchanges) . T he signed product is +a 11 a:~-!a32 a43ass
11 8
Exercise Set 4.1 119
1 2. (a), (b) , (c), and (d) are even; (e) and {f) are odd.
13. det(A) = (,\- 2)(>. + 4) + 5 = >.2 + 2>. - 3 =(..\ -1)(>. + 3}. Thus det(A) = 0 if and only if/\= 1
or >.= - 3.
16. A = 2 or ,\ = 5.
17. We have 1; 1-=_1xl = x( l - x) + 3 = -x2 + x + 3, and
1 0 -3
2 x -6 = ((x(x - 5) + 0- 18) - ( -3x- 18 + 0) = x 2 - 2x
] 3 X- 5
Thus t he given equation is v<~,lid if and only if -:t 2 + :-r + 3 = .x 2 - 2x, i.e. if 2x 2 - 3x - 3 = 0. The
roots of this quadro.tic equation are x = 3 {33. *
18. y=3
~I
0 0 0
1 0 0
1 2 0
19. (a) 0 - 1 0 = (1)(-1)(1) = - 1 (b)
0 '1 3 0
=0
0 0
1 2 3 8
1 2 7 -3
0 I -4 1
(c)
0 () 2 7
= (1)(1)(2) (3) =6
0 0 0 a
1 1 1
2 0 0
~ = (1 )(2)(3)(4) =
0 2 2
20. (a) 0 2 0 = 23 = 8 (b)
0 0 3
24
0 0 2
0 0 0 4
-~ 0 0 0
l 2 0 0
{c) = ( - 3)(2)( -1)(3) = 18
40 10 - 1 u
100 200 -- 23 3
21. Mn =1
; -11
4
= 29. c11 = 29 Ml2 = ~-~ -114 = 2l,Ct2 = - 21 M13 = 21, C1 :~ = 27
22.
Mll =I~ ~~ = 6, cn = 6 M12 =12,012 = -12 M1s = 3, C1a = 3
M31 = I~ 621 = )
O,Ca1 =0 Ma2 = O,C3 2 =0 Ms3 = 0, Cas = 0
0 0 3
23. (a) M 13 = 4 1 14 = (0 + 0 + 12) - (12 + 0 + 0) = 0 0 13 =0
4 1 2
4 -1 6
(b) M 23 = 4 1 14 = (8 - 56+24)-(24+56-8)=-96 0 23 = 96
4 1 2
4 l 6
(c) Mn= 4 0 14 =(0+ 56 +72)-(0+8+168) = -48 0 22 = - 48
4 3 2
- 1 1 6
(ct) M21= 1 o 14 = {O+I4+18)-(0+2 - 42) = 72 c21= -n
1 3 2
25. (a) det(A) = (l) Cu + (- 2)C12 + (3)013 = (1)(29) + (- 2)(-21) + (3)(27) = 152
(b) det(A) = ( I)C ll + (6)C21 + (-3)C31 = (1){29) + (6)(11) + (-3)(-19) = 152
(c) det(A) == (6)C 21 + (7)C22 +(-1 )023 = (6)(11) + (7)(13) + (-1 )(5) = 152
(d) det.(A) = (-2)C12 + (7)C22 + (1)032 = (-2)(- 21) + (7)(13) + {1){19) == 152
(e) det(A) = ( -3)CJI + (l)Ca2 + ( 4)Caa = ( -3)( - 19) + ( 1)(19) + (4)( 19) = 152
(f) det(A) = (3)013 + (-l)Czs + (4)033 = (3)(27} + (- 1)(5) + (4)(19) = 152
26. (a) det(A) = (l)Cll + (l)Ct2 + (2)013 = (1)(6) + (1)(- 12) + (2)(3) = 0
(b) det(A) = (l)011 + {6) C21 + (3)Ca 1 = (1)(6) + (3)(-2) + (0)(0) = 0
(c) det(A) = (3)021 + (3)022 + (6)023 = (3){-2) + (3)(4) + (6)(-1} = 0
(d) det(A) = (l)Ou + (3)022 + (I)Cs2 = (1)(-12) + {3)(4) + (1)(0) = 0
(e) det(A) = (O)C31 + (l)C32 + (4)C33 = {0)(0) + (1)(0) + (4)(0) = 0
(f) d~t( A) = (2)013 + ( G)02J + {4)033 = (2)(3} + (6)(- 1) + (4)(0) = 0
3 3 5 3 3 5
:n. Using column 3: det(A) = ( -3) 2 2 -2 - (3) 2 2 - 2 = (- 3)(128) - (3)( -48) = -240
2 10 2 4 0
32. det(A) = 0
sinO
-cosO sin8
cos B
0
0 = (1)
I sinB cosO'
.
-cos 8 smB
=sin 2 fJ+cos 2 B = 1
s inO- cosO sinO+ cosB 1
for a ll values of 0.
34. AB = [~ ~c:~ bf) a.nd BA = [0 ~ bd~ ce]. Thus AB = BA if and onJy if ae + bf = bd + ce , and
this is equivalent to t he condition that lb 0
e d -
- cl=
f
b(<l - f ) - (a - c)e = 0.
I
2l tr(A2)
tr( ,1) 1
tr(A)
I= 2 I 1 a+ d 1 1 2 2I 2
a.2 +Zbc+ d2 a +d =2 ((a+d) -(a +2bc+d ) =ad -bc=det(A).
D2. The signed ele mentary p roducts will all be (1)(1) (1) = 1 , with half of t hem equal to +1
and half equal to -1 . Thus the determinant. will be zero.
D3. A 3 x 3 matrix A can have as many as six zeros without having det(A) = 0. For example, let. A
be a diag0nal matrix with nonzero diagonal entries.
X y
D4. If we e xpand a long the first row, the equation a1 bt 1 = 0 becomes
a ..2 b1 1
and thjs is an equation for t he line through t he points Pt (a 1 , bt) and P2(a2, b2).
D5. If ur = (a1 ,a2, a 3) and ' vT = (b 1 ,~,b3 ), then each of the six elementary products of uvT is o f
Lhe form (a1b;,)(a2bi?)(a3bj3 ) where {j 1 ,iz,]J} is a permutation of {1,2,3}; thus each of the
elementary products is equal to (ata2as)(b1b2b3).
122 Chapter4
Xl Yt 1
Thus the three points are collinear if and only if x2 v2 1 = 0.
XJ l/3 1
P2. We wish to pwve that for each positive integer n, t here are n! permutations of a set {it , )2, . .. , j .,.}
of n distinct elements. Our proof is by induction on n .
Step 1. It is clear that there is exactly 1 = 1! permutation of the set {j 1 } . Thus the statement is
true for the case n = 1.
Step 2 (induction srep). Suppose that the statement is true for n = k, where k is a fixed integer
~ 1. Let S = {j1, j2, ... ,jk, )k+I }- Then a permutation of the setS is for med by first choosing
one of k + 1 positions for the element )~;+ 1 , and then choosing a permutation for the remaining
k elements in t he remaining Jc positions. There are k + 1 possib ilities for the first choice and , by
the hypothesis, k! possibilities for the second choice . Thus there are a. total of (k + l )k! = (k + 1)!
p~r mutations of S . T his shows that if the statement is true for n = k it must also be true for
n = k + l. These two steps complete the proof by induetion.
2 -1 3
(b) det(A) = 1 2 4 = (24 - 20 - 9) - (30 - 6 - 24) = - 5 - 0 =- 5
5 -3 6
2 1 5
det(AT) = -1 2 -3 = (24 - 9 - 20) -- (30- 6 - 24) = -5- 0 = -5
3 4 6
3 1 3 331
1
0 9 22
3. (a)
0
3
0 - 2 12
= (3) (!) (-2)(2) = -4
0 0 0 2
EXERCISE SET 4.2 123
3 1 9
(b) - 1 2 -3 = 0 (~t and third columns are proportional)
l 5 3
3 -17 4
~c) 0 5 1 = (3)(5)(-2) = - 30
0 0 -2
d e f a b c a b c
5. (a) g h i = (- 1) g h i = (- 1)(-1) d e I =(-1)(-1)( - 6)= - 6
a b c d e 1 g h i
3a 3b 3c (l b c Ia b c a b c
(b) -d
4g
-e - ! = (3) -d
4h 4i 4g
-e -f
4h 4i
= (- 3) d e I
1g 4h
f = (-
4i
12) d
g
e f
h
= (- 1'2)( - 6) = 72
a+g b I h 1:+ i a b c
(c) d e f = d e f = -6
g h i g h i
- tl 2 -6
(b) <.let(- 2A) = -6 -4 -2 = (-160 + 8 - 288) - (-48 - 64 + 120) = 440 - 8 = - 4t18
-2 -8 - 10
2 - 1 3
(- 2) 3 det(A) = ( - 8) 3 2 1 = ( - 8)((20 - 1 + 36) - (6 + 8 - 15)) = (- 8)(55 + 1) = -448
4 5
9 . lf x = 0, the given matrix becomes A ~ [~ ~ ~] and, since the first and third rows are r-ropor-
o 0 -5
tiona!, we have det(A) = 0. If x = 2, the given matrix becomes B = [~ i ;] and, since the first
0 0 -5
and second rows are proportional, we have det(B) ::::: 0.
124 Chapter 4
LO. If we replace the first row of the matrix by the sum of the first and second rows, we conclude that
since the first and third rows of the latter matrix are proportional .
11. We use the properties of determinants stated in Theorem 4.2.2. Corresponding row operations are
as indicated.
3 6 -9 1 2 -3
-2 = (3) 0 A common factor of 3 was
det(A) = 0 0 0 -2
taken from the first row.
-2 1 5 -2 1 5
1 2 -3
:;:; (3) 0 0 -2 ~times the first mw "''-'
~ded to the third row.
0 5 -1
1 2 -3
The second and third rows
= (3)( -1) 0 5 -1
were interchanged.
0 0 -2
1 -3 0 l -3 0
2 times the first row was
det(A) = -2 4 = 0 -2 l
added to the second row.
5 -2 2 5 - 2 2
-3 0
-5 times the first row was
- 0 -2
added to the third row.
0 13 2
1 -3 0
A fac tor of - 2 was taken
= (- 2) 0 1 -l
2
from the second row.
0 13 2
1 -3 0
- 13 times the second row
= {- 2) 0 1 - 2
1
was added to the third row.
17
0 0 2
= (-2} en = -17
14. det(A) = 33
EXERCISE SET 4.2 125
1 2 -3 1
0 1 -9 - 2
= 0 0 -3 - 1
- 12 times r ow 2 was added to row 4.
0 0 108 23
1 2 -3 1
0 1 - 9 -2
= 0 0 - 3 - 1
36 times row 3 was added row 4 _. ]
0 0 0 - l3
= 39
16. det(A) =6
]7. We usc the properties of deter minants stated in T heorem 4.2.2. Corresponding row operations are
as indicated.
1 1 1
0 1 1 1 2 2 1 2
1 1
1 1
0 1 1 1
2 l T he first and second rows
det(A ) = 2
'2 l 1 = (- 1) 2 1 l
0 were interchanged.
3 3 3 0 3 3 3
- :;l 2
3 0 0 l
-3
2
3 0 0
1 l 2 l
= ( - 1)(~)
0
'2 I
l 1
I
1
l
A facto< of was t~
3 :l 3 0 from the first row.
I 2
- 3 :i 0 0
1 1 2 1
= (-1)(~)
0 1 l 1 1
- ~ times row was added
to row 3; ~ times row 1
I 2
0 -3 - l -3
was added to row 4.
'2 l
0 3 3
1 1 2 1
0 1 l l times row 2 was added
= (- 1) (~) 0 0 -3
2
-3
1 t o row 3; - 1 t.imes row 2
was added to row 4.
1 2
0 0 - 3 -3
1 1 2 1
0 1 1
= (-1) (~) ( - ~) 0 0 1 I
2
A factor of - ~ was t aken
from row 3.
l 2
0 0 - 3 3
126 Chapter 4
1 1 2 1
0 1 1 1 !
= (-1) (~) ( -~) 0 0 1 1
~
times row 3 was added
to row 4.
1
0 0 0 -2
al bt G! + b1 + C1 al bl bt + Ct
Add -1 times column 1 to
19. (a) a2 b2 02+ b2 + Cz = 02 bz b2 + C2 column 3.
03 b3 as+ b3 + C3 .:l3 bJ b3 + ca
a1 bt C)
Add -1 times column 2 to
= (l~ bz C2
column 3.
<l3 bJ C:J
a1 + b1 a1 - b1 C) 2a 1 Gl- Ot CI
Gt a1 - b1 Ct
Factor of 2 taken from
=2 02 02- b2 C2
column 1.
03 GJ - b3 C3
a1 - bl Ct
Add -1 times column 1 to
=2 02 - b2 C2
column 2 .
,a3 - hJ CJ
GJ bl Cl
Factor of - 1 taken from
= -2 02 b2 C2
column 2.
(13 03 CJ
(1.1 a2 GJ
Add - t iin:.es
= (1.- t2 ) bt b:z b3 row 2 to row 1.
Ct C2 CJ
EXERCISE SET 4.2 127
a1 b1 C)
Add -s times column 1
to column 3.
- a2 b2 c2
Add -r times column 2
a3 b3 c3
to column 3.
1 X x2 1 X x2 1 x x2
21. det{A) = 1 y y2 = 0 y - x Y2 _ x2 =(y-x)(z - x) 0 1 y+x
1 ;; z2 0 z- x z2 - x2 0 1 z +x
1 X x2 I
=( y- x)( z-x) 0 1 xl
y + = (y - x)(z - x) (z- y)
0 0 z- y
24 . If we add ~ach of the first four rows of A t o the last row, t.he result is a matrix B that h as a. row of
z~ros: thus det.(A) = det(B) = 0
~5. (a) \Ve have dct(A) = (k- 3)(k 2) - 4 = k 2 - 5k + 2. Thus, from Theorem 4.2.4, the matrix .4
is invertible if and ~)nly if lc2 - 5k + 2 -f:. 0, i.e. k #- 5 fTI .
(b ) We ha..-e dE>t.( A) = ( 1 )~~ ~~- (3)~~ ~~ + (k)~~ :1 . ,. 8 + 8k. T hus, from Theorem -1.2.4, the matrix
A is invertible if and only if 8 + Bk f 0, i.e. k =/' ... 1.
1 1
~7. (a) det(aA) = 33 det(A) = (27)(7) = 189 (b) det(A
- 1
) = det(A) =7
(c) tlct(2A- 1 ) 23 dct{A- 1 )
''= = (8)( ~) =
1 1 1 1
( d) clet((2 A)-t ) ~ det(2A) z3 det(A) = (8){7) = 56
2 4 -5 2 4 - 5
3 9
det(AB) = 6 15 -6 = 0 3 9 = 21 ) = 2(6 - 18) = -24
10 22 - 23 10 2 2 2 2
128 Chapter 4
On the other hand, det(A) = (1)(3)( -2) = -6 and det(B) = (2)(1){2) = 4. Thus det(AB) =
det(A) det{B).
30. We have AB = [~ ~ ~] [~ -~ ~1 = [3~ -~ 1~1 , and so det(AB) = -170. On the other hand,
0 0 2 5 0 l 10 0 2
det(A) = 10 and det(B) = -17. Thus det(AB) = de t(A)det(B).
31. The following sequence of row operations reduce A to an upper triangular form:
H
-2
-9
2
8
3
6
-6
6
-~] ~ [~
-2
1
0
12
-9
3
-3 -1
0
-}[~
-1
0
0
-2
1
0
0 108
3
-9
-3 -1 -~] ~ [~
23
0
0
-2
1
0
0
3
-9
-3
1]
-2
-1
0\' -13
~lf~
-2 3 0 0 -2 3
A= [! -9 I)
-~'] = [-~1
1 0 1 - 9 -2 1] =LU
-1 2 -G 0 1 0 -3 - 1
2 8 6 1 2 12 - 36 0 0 - 13
and from this we conclude that det(A) = det(L) det(U) = (1)(39) = 39.
32. The following sequence of row operations reduce A to an upper triangular form :
I .. l
;~l ~ r~
I 3 1 3
[~ ~] ~ [~ [~
1 3 2
l
'J
l
2 2 2 2
_;]
0
2
1
1
1
2
-2
2
1
'2
1
2
l
0
0
1
-1
1
-;] -4
1
0
0
1
1
0
-2
6
[~ l] "' [~ ~][~
1 3 0 0 2 2
1
A=
0 1 -2 0 l 1 _; ] = LU
2 1 2 - 1 0 1 - 2
1 2 3 0 l 1 0 0 6
a.nd from this we conclude that det.(Jt) = dct(L) det(U) :::: (1)(6) = 6.
33. U we add the firs t row of A to the second row, using the identity sin 2 B + cos 2 0 :::: 1, we see that
35. (a) Since det(AT) = det{A), we have det{AT A)= det(AT) det(A) = (det(A)) 2 = det(A) det(AT) =
det(AAT ).
(b} Since det(AT A)= (d et (A)) 2 , it follows that det(AT A) = 0 if and only if det(A) = 0. Thus. from
'T'heorem 4.2.4, AT A i~ invP.rt.ihiP. if ano only if A is invertible.
DISCUSSION AND DISCOVERY 129
36. det(A- 1 BA) = det(A -I) det(B) dct( A) = det~ll) dct(B } det(A} = det(B)
37. llxii 2 IIYII 2 - (x. y) 2 == (xi+ X~ + x~ )(yf T vi+ v5>- (XlYl + XzY2 + XsYa)2
2 2 2
:r2yd 2 + (x1y3- X3Y1) 2 + (x2Y3- X3Y2) 2
1 2
Jx x 1 + IXJ X:J I + lxz :r
3
1 = (x1Y2 -
IYl Y2 YJ 'YJ Y2 Y3
~ xiy~ - 2XtY2X2Yl + x~y~ + xi y- 2XJY3X3Yl + x~yf + x~y~- 2X2Y3X3Y2 + X~y~
38. (a) We: have det [~ ~J =5 -4 = 1 and det [~ ~] = 3- 6 = -3; thus det(M) =(I){ -3) = -3.
l 2 0 3 0 0
= {2)~~
2
(b) We have 2 5 o
5
1 = 2(5 - 4) = 2 and 2 1 o= (3)(1 )( - 4) = -12; th\18 det(M) =
- 1 3 2 - 3 8 -4
(2)( -12) = - 24.
3 5 1 3 5
39. (a) det(M) = 121 -~I -2 6 2 = (fi + 4) 0 12 12
- . 13
= (10)(1) 112
- 4
121=
- 13
-1 080
3 5 2 0 -4
~~
2 0
(b) det(M) =
jo
1 2
() l
I~ ~I = (1){1) ,.,, l
D2. Since det(AB) = det(A) det(B) == det(B) d P.t(A) = det(BA), it i;, always true that det(AB) =
det(BA).
D3. If .4 or B is not invertible then either det(A) = 0 or det(B ) = 0 (or both). It follows that det{AB) =
det(A) det(B) = 0; thus AB is not invertible.
130 Chapter 4
D4. For convenience call the given matri'V A n If n = 2 or 3, then An can be reduced to the identity
matrix by interchanging the first and last rows. Thus det(An) = - 1 if n = 2 or 3. lf ~ = 4 or 5,
then two row interchanges are required to reduce An to the identity (interchange the first and last
rows, then interchange the second and next to last rows). Thus det (An) = +1 if n = 4 or 5. This
pattern continues and can be summarized as follows:
DS. If A is skew-symmetric, then det(A) = det(AT) = det(-A) = (-l)ndet(A) where n is the size of
A. It follows that if A is a skew-symmetric matrix of odd order, then det(A) =- det(A) and so
det(A) = 0.
D6. Let A be an n x n matrix, and let B be the matrix that results when the rows of A are written
in reverse order. Then the matrix B can be reduced to A by a series of row interchanges. If
n = 2 or 3, then only one interchange is needed and so det(B) = - det(A). If n = 4 or 5, then two
interchanges are required and so det(B) = +det( A). This pattern continues;
DB. (a) True. If A is invertible, then det(A) =F 0. Since det(ABA) = det(A) det(B) det(A) it follows
that if A is invertible and det( ABA) = 0, then det(B) = 0.
(b) 'frue. If A ,.,; A - l , then since det( A - l) = det1(A) , itiollows that {det( A ))2 == 1 and so det (A) =
1.
(c) True. If the reduced row echelon form of A has a row of zeros, then A is n ot invertible.
(d) 'frue. Since det(A~') = det(A), it follows that det(AAT) = det(A) det(AT) = (det{A)) 2 :?: 0.
(e) True. If det.(A) f 0 then A is invertible, and an invertible matrix can always be written as
a prod uct of elementary matrices.
D9. If A = A 2 , then det(A) = det(A 2 ) = (det(A)) 2 and so det(A) = 0 or det(A) = 1. If A= A3, then
det(A) = det(A 3 ) = (det(A)) 3 and so det(A) = 0 or det(A) = 1.
DlO. Each elementary product of this matrix must include a factor that comes from the 3 x 3 block of
zeros on the upper right. Thus all of the elementary products are zero. It follows that det(A) = 0,
no matter what values are assigned to the starred quantities.
Dll . This permutation of the columns of an n x n matrix A can be attained via a sequence of n- 1
column interchanges which successively move the first column to the right by one position (i.e.
interchange columns 1 and 2, then interchange columns 2 and 3, etc.). Thus the determinant of
the resulting matrix is equal to (-1)"- 1 det(A).
EXERCISE SET 4.3 131
Pl. If x ~ [] and y = [ ] then, using cofacto expansions along the jth w lumn, we have
P2. Suppose A is a square matrix, and B is the matrix that is obtained from A by adding k times the
ith row to the jth row. Then, expanding along the jth row ofB , we have
det(B) = (ajl + kao)Gjl + (aj2 + ka;2)Cj2 + + (a;n + ktJ.in )Cjn
= det(A) +kdct(C)
where C is the mat.rix obtained from .t1 by replacing the jth row by a copy of tho ith row. Since
C has two identical rows, it follows that det( C) = 0, and so det( R) = det(A).
1. The matrix of cofactors from A )s C = [-~ -! -~],and det(A) = (2)(-3) + (5)(3) + (5)( -2) = -1.
& -s 3
~]
0
I 2
3 3
0 -1
3. The matrix of cofactors from A is C = 2 0 0]0 , and det (A) = (2}(2) + (- 3)(0) + (5)(0) = 4. Thus
[4 6 2
6 4
adj(A) =
[
2 6
o 4 G 1] and A- 1 = -1 adj(.4) = -1 0
4
[2 6 4] [ ~o ~1
4 6 =
4 oo2
~ . I]
0 0 2 0 0 21
I~! ~1 = -- 153- = 3
6. :r = .:....___ _;_
I~~ ~~1 204
y = !..:-,-1~-~.,-l-1 = -- 5-1 = -4
1 1~ ~I - 51
6 -4 1 1 6 1 1 -4 6
-1 -1 2 4 -1 2 4 - 1 - 1
-20 2 -3 144 2 -20 -3 61 2 2 -20 -230 46
7. x= =- y= = - z= =--=-
1 -4 1 - 55 1 -4 1 - 55 1 -4 1 -55 11
4 - 1 2 4 -1 2 4 -1 2
2 2 -3 2 2 -3 2 2 -3
4 -3 1 1 4 1 1 -3 4
-2 - 1 0 2 -2 0 2 - 1 -2
0 0 3 30 4 0 -3 38 4 0 0 40
8. XJ =
1 -3 ]
= -
- 11
Xz =
1 -3 1
=-
- 11
X3 = 1 - 3 1
= -
- 11
2 - 1 0 2 - 1 0 2 - 1 0
4 0 -3 4 0 -3 4 0 -3
- 32 - 4 2 1 - 1 -32 2 1
11 -1 7 9 2 14 7 9
11 1 3 - 1 11 3 1
- .t - 2 I -4 - 2115 1 -4 1 -4 = - 3384 = 8
9. I t=
- I -4 2 1
= - -=5
- 423
Xz = - 1 -4 2 1 - 423
2 - 1 7 9 2 -1 7 9
- 1 :3 - 1 1 3 1
I -2 - .t -2 1 -4
- 1 -4 - 32 1 - 1 -4 2 - 32
2 - l 14 9 2 -1 7 14
- I 1 ll 1 - 1 1 3 11
1 -2 -4 -4 = - 1269 = 3 1 -2 1 -4 423
X3 = x4 = = -- = - 1
- 423 - 423 - 423 - 423
4 2 - 1 1 2 4 - 1 1
6 3 - 1 2 4 6 - 1 2
12 5 -3 4 8 "'
~.: -3 4
6 3 - 2 2 2 3 6 -2 2 2
10. Xt = 2 2 - l 1
= -= l
2
Iz =
2 2 - 1 1
=-=1
2
4 3 - 1 2 4 3 - 1 2
8 5 -3 4 8 5 -3 4
3 3 -2 2 3 3 -2 2
EXERCISE SET 4.3 133
2 2 -1 4
2 2 4 1
4 3 0 2 4 3 - 1 6
8 5 12 4 8 5 -3 i2
3 3 6 2 - 2 3 3 -2 6 -2
:r~ = 2
= --
2
= -1 X4 = 2
::;: --
2
=- 1
1 3 4 1 -2 3
2 -2 - 1 3 1 1
4 1 1 21 3 -1 -3 -2 - 33
11. x= = - =- 12. y= = -
2 3 4 14 2 1 3 2 41
1 - 2 -1 3 -1 1
3 l 1 -1 4 - 2
0
+ n1r.
0
0
sec 2 a
l
Thus, for these values o f
1 3 1 3 1 l
2 k 2 4 2 2
1 2k k (k- l)(k - 6) 2k l k 2(k - 1)
x = --- =
k (k - 4 ) k(k- 4)
y=
k(k - 4) =-k(k- 4)
3 3 1
4 k 2
2k 2k l (2k - 3)(k - 4) 2k - 3
z= = - --
k(k- 4) k(k- 4) k
3{k2 - k + 3)
16. lc4~ -~3 X
14k - 3
= ..,.-----.-,.-----:- y= z=---
3k
r 5' (5k- 3)(3k + 2) (5k - 3}(3k + 2) 5k - 3
17. We have d et(A) = 1i- x 2 y = y(y 2 - x 2 ). Thus A is invertible if a.nd only if y =I= 0 a.nd y =I= x. The
18. We have det(A) = (s 2 - t2 ) 2 . Thus A is invertible if ;md only if s =/; t. The formula for the inverse
is
\-1 1
s
0
0s -t0 -t0]
1
= (s2 - t 2) -t 0 s 0
[
0 -t 0 s
Then, from Theorem 4.3.5, the area of the parallelogram is ]det(A)] = ]3- 61 = 3.
~
24. The parallelogram has the vectors P1 Pz = (2, 2) and P1P4
----t -= (4, 0) as adjacent sides. Let A ;;;.; [22 "o].
Then, front Theorem 4.3.5, the area of the parallelogram is ]det(A)] = ]0- 8] = 8.
2 0 1 1 1 1
1
25. area ~ABC = 1
. 3 1
1 1 =7 26. area ~ABC= - 2 2 1 =3
2
-1 2 1
2 3 -3 1
28. v = 45
29. The vectors lie in the same plane if and only if the parallelepiped that they determine is Jegenerate
in the sen.;;e that. it.s "volume" is zero. In this example, we have
-1 ~i 5
v = -2 0 -4 = 16
-2 0
and so the \'Cctors do not lie in the same plane.
j k
33. U XV= 2 3 -6 = 36i- 24j sin f) = l]u X vll = )1296 + 576 = )1872 = 12M
2 3 6
j!u!IJlvll v'4y49 49 49
' .l j k
--+ --+
34. (a) ABxAC= -1 2 2 = -4i +j - 3k area ~ABC = ! ~~~ x ACII = P
1 1 -1
j k
15. (a) v x w = 0 2 - 3 = (14 + 18)i- {0 + 6)j + (0- 4)k = 3Zi - 6j - lk
2 6 7
j k
{b) U X (V X W) = 3 2 -1 = (- 8 - 6}i - {- J2 + 32)j + (-18 - 64)k = -- 14i- 20j- 82k
32 -6 -4
j k
(c) uxv = 3 2 - 1 =(-6+2)i-(-9+0)j+(6 - 0)k =- 4i + 9j +6k
0 2 - 3
j k
(uxv)xw = - 4 9 6 =(63-36)i-(-28 - 12)j +( - 24 - 18)k = 27i+40j-42k
2 6 7
16. (a) (0, 171), ..... 2fJ4) (h) (-44,55,-22) ( (:) ( - 8, -3, -8)
XV ~ ~ -~
k j
17. (a) U 4 2 = 18i + 36j -18k = (18, 36, -18) is orthogonal to both u and v.
I 3 1 5
j k
( b) U XV = - 2 1 5 = - 3i + 9j - 3k = ( -3, 9, -3) is orthogonal to both u and v.
3 0 -3
vzl) =
U2
-v xu
(l
t 12
.o. u x (v + w) .,.,.,
v2 + w:i
= (u x v) + (u x w)
= (ku) x v
136 Chapter 4
j k
43. (a) u >< v = 1 -1 .2 = -7.i-j+ 3k A= ll u x v ii = J49 + 1 + 9 = J59
0 3 1
j k
(b) uxv= 2 3 0 = -6i +4j +7k A= ll u x v lf = J 36 + 16 + 49 = Ji01
- 1 2 -2
j k
---+ ---+
45. PtP2 X plp3 = - 1 -5 2 = - 15i + 7j + IOk
2 0 3
-- ~ ---t j k .I ---t
49. The vcc:lo r AB x A C = 1 -3 = 8i + 4j
-)-
(b) lu (v x w )l is equal to the volume of tlte parallelpiped having the vectors u , v , w as adjacent
edges. A proof can be found in any standard calculus text.
0
0:-
0
I
I
I
t : -~
2 _:
0
-~];thus A- = [-~ _:
'f
1
- 'i 0
-;].
7
EXERCISE SET 4.3 137
>2. We have det(AJ.:) = (det(A))k. Thus if A''-'= 0 for some k, then det{A) = 0 and so A is not invertible.
53. F:rom Theorem 4.3.9, we know that v x w is orthogonal t.o the plane determined by v and w. Thus
a vector lies in the plane determined by v and w if and only if it is orthogonal to v x w. Therefore,
since u x (v x w) is orthogona.l to v x w, it follows that u x (v x w) lies in the plane determined by
v and w.
>4. Since (u x v) x w = -w x (u x v), it follows from the previous exercise that (u x v) x w lies in the
plane determined by u and v.
55. If A is upper triangular, and if j > i, then the submatrix that remains when the ith row and jth
column of A are deleted is upper triangular and has a zero on its main diagonal; thus C;; (the
ijth cofactor of A) must be zero if j > i. It follows that the cofactor matrix C is lower triangular,
and so adj( A) "" or is upper triangular. Thus, if A is invertible and upper triangular. then A_, =
det~A)adj(A) is also upper triangular.
56. If A is lower triangular and invertible, then AT is upper triangular and so (A- 1)T = (A 1'}- 1 is upper
triangular; thus A - I is lower triangular.
57. The polynomialp(x) = ax 3 + bx2 +ex+ d passes through the points (0, 1}, (1, -1), (2, -1), and (3, 7)
if and only if
d=
a+ b+ c+d=-1
Ra + '1b + 2c + d = -1
27 a + 9h + 3c + d = 7
1 0 0 ll 0 0
-1 1 1 1 1 -1 1 1
-1 4 2 1 8 -l 2 1
7 9 3 12 127 7 3 -24
(~ -:::: ---1 b= = - - = -2
0 0 0 1 - 12- 0 0 0 1 12
1 1 I 1 1 1 1 1
8 4 2 l 8 4 2 1
27 9 3 1 27 9 3
0 0 1 I 0 0 0 1
l l -1 1 1 1 1 -]
8 4 -1 1 8 1 2 - 1
c=
27 9 7 1 -12
=-=-1 d=
27 9 3 71 12
=-=1
12 12 12 12
Thus the interpolating polynomial is p(x) = x:l - 2x 2 - x + 1.
138 Chapter 4
~X(UX>)
(b) Since w is orthogonal to v, we have v w = 0. On the other hand, u w = l!ullllwll cos9 =
llullllwll sin(%- B) where() is the angle between u and w. It follows that lu wl is equal to
the area of the parallelogram having u and v as adjacent edges.
D2. No. For example, let u = (1, 0, 0), v = (0; 1, 0), and w = (1, 1, 0). Then u x v =ux w = (0, 0, 1),
but vi= w.
D3. (u v) x w docs not make sense since the first factor is a scalar rather than a vector.
D4. If either u or v is the zero vector, then u x v = 0. If u and v are nonzero then, from Theorem
4.3.10, we have flux vii= JluiJI!viJsinO where B is the angle between u and v. Thus if u x v = 0,
with u and v not zero, then sin B = 0 and so u and v are parallel.
D5. The associative law of multiplication is not valid for the cross product; that is u x (v x w) is not
in general the same as (u x v) x w.
3 -(l-c)l
1-4 c 7c- 4
x1 = 'l.c2 - 2c + 1 = -2c"""2,.---2c_+_l
07. (c) The solution by Gauss-Jordan elimination requires much less computation.
D8. {a) True. As was shown in the proof of Theorem 4.3.3, we have Aadj(A} = det(A)L
(b) False. In addition, the determinant of the coefficient matrix must be nonzero.
(c) True. In fact we have adj(A) = det(A)A- 1 and so (adj(A))- 1 = det~A)A.
W1 W2 W3
P3. (a) Using properties of cross products from Theorem 4.3.8, we have
u1 Uz U3 VJ V2 V3
P4. If a, b, c, and d all lie in the same plane, then ax b and c x d are both perpendicular to that
plane, and thus parallel to each other. It follows that (ax b) x (c x d} = 0.
P5. Let Q1 = (xt,Yt,l), Q2 = (x2,y2,l), Q3 = (x3,Y3,l), and letT denote the tetrahedron in R3
having the vectors OQ1, OQ2, OQ:.h a..c; adjacent edges. The base of this tetrahedron lies in the
plane z = l and is congruent to the triangle 6P1 P2h; thus vol(T)"" !area(6P1 P 2 P3 ). On the
ot.her hand, voi(T) iR equal to ~ timP.s the volume of the p<\rallelepipe(l having OQ 1, OQ 2 , OQ3,
as adjacent edges and, from part (b) of Exercise 50, the latter is equal to OQ 1 (OQ:z x OQ3).
Thus
X} Yl l
area(6.P1P2P3) = 3voi(T) = ~OQ1 (OQ2 x OQ3) = ~ X2 Y2 l
'X3 1/3 1
are the solutions of the system ( l -.. A)x = 0, which can be expressed in vector form a..<> x = t[~]
where -oo < t < oo.
eigenvalue ..\ = 5.
6. (a) The characteristic equation is .X2 - 16 = 0. Thus ,X = 4 are eigenvalues; each has aJgebra.ic
multiplicity 1.
(b) The characteristic equation is >.. 2 = 0. Thus ,\ = 0 is the only eigenvalue; it has aJgebra.ic
multiplicity 2.
(c) The characteristic equation is(>.. -1f~ = 0. Thus >.. = 1 is the only eigen.value; it fi8s algebraic
multiplicity 2.
.>. - 1 0 - 1
7. (a) The characteristic l!quat io!l is 2 .>.- 1 o = ).3 - 6). 2 + 11 .A- 6 = (.A - 1)(>. - 2){>..- 3) =
2 0 A- 1
0. Thus .A = 1, >.. = 2, and >. = 3 are eigenvalues; each has .algebraic multiplicity 1.
.A-4 5 5
(b) The characteristic equation is -~ .X -1 1 = A3 - 4>.2 + 4A =..\(A - 2) 2 = 0. Thus A= 0
-t 3 ). + 1 ..
and ). = 2 are eigenvalues; >. = 0 has algebraic multiplicity 1, and ). = 2 hFlS multiplicity 2.
).. -3 -4 1
(c) The characteristic equation is .x + 2 - 1 = ).3 - >.. 2 - 8>. + 12 = (,X+ 3)(A- 2) 2 = 0. Thus
- 3 -9 ).
A = - 3 and ,\ = 2 are eigenvalues; A= - 3 has multiplicity 1, and >.. = 2 has multiplicity 2.
8. {a) The characteristic equation is >. 3 + 2..\ 2 + A=>.(>.+ 1) 2 = 0 . Thus >. = 0 is an eigenvalue of
multiplicity 1, and ).. = -1 is an eigenvalue of multiplicity 2.
(b) The characteristic tlltuation is ).3 - 6).. 2 + 12)., - 8 = (A- 2) 3 = D; thus,\ = 2 is an eigenvalue of
multiplicity 3.
(c) The chara.ctcris t.ic equation is >.. 3 - 2A 2 - 15>. + 36 = (,\ + 4)(A - 3) 2 = 0; t hus A= -4 is an
eigenvalue of multiplicity 1, and ,\ = 3 is an eigenvalue of multiplicity 2.
9. (a) The eigenspace corresponding to >. = 3 is found by solving the system [ -~ ~] (;] = (~]. This
yields the general solution x = t, y = 2t; t.hus the eigenspace consists of all vectors of the form
[:J = t[~J - Geometrically, this is the line y = 2x in the xy-plane.
The eigenspace corrc~ponding to A = - 1 is found by solving the system [ =:~) (:] = [~]. This
yields tlu~ ~eneral solution x = 0, y = t; thus the eigenspace consists of all vectors of the form
[~] = t [~] . Geometrica.lly, this is the line x = 0 (y-a.x.is).
(b) The eigenspace corresponding to .A =4 is found by solving the system [ =:!] [:] = [~] . This
yields the general solution x = 3t, y = 2t; thus the eigenspace consist of a ll vectors of the form
[:J = t(~J. Geometrically, this is the line y = 3x-
E.X ERCISE SET 4.4 141
11). (a) The eigenspace corresponding to>. = 4 consists of a ll vectors of the for m (:) = t (~J; this is
the line y = ~x. T he eigenspace corresponding to >. = - 4 consists of all vectors of the fo rrri
[:] = t [_~]; this is t he line y = - ~x.
(b) The eigenspace corresponding to>.= 0 consists of all vectors of the form (~) = s [~] + t[~) ; this
is the entire xy-plane.
(c) The eigenspace corresponding to A. = 1 consists of all vectors of the form (:J= t [~]; this is the
line x = 0.
the eigenspace co~""'Ponding to !. ~ 2 consists of all vecto" of the fo< m [:] = t [=~] , and the
eigeosp""e conesponding to !. ~ 3 consists of all vccto" of the fo<m [:] ~ t [- :]
[~; ~] f:] = r~l -
5
(b) The eigenspace corresponding to >. == 0 is found by solving the system
of the fo<m [:] = t [!]; this is the line through the odgin >nd the point. (5, 1, 3).
The eigenspace con...pondh;g to!.= 2 is found by olving the system [=I !:] m [~] = This
yields [:] = s [~] + t [:], which conesponds to a plane thwugh the odgin.
[: ] = t [ ~:] , which cones ponds to a Hoe though the odgin. The eigenspa<e cxmespon~ing to
!. = h found by solving u ~: -:][:] m
sponds to a line through t he origin .
= This ~elds l f l],
[ whichabo con&
142 Chapter4
12. (a) The cigenspAe CO<J<SpOnding to A = 0 consists of vectors of the form m -!].= t[ and the
(c) T he eigenspace cnn esponding to A = - 4 consists of vectorn of t he fOTm [:] - t [ :] and the
13. (a ) The c.ha.md.t1ristic polynomial is p(>.) = (>. + 1)(>.- 5) . The eigenvalues a.rc >. = ..:_ 1 and>.= 5.
( b ) The cha racteristic polynomial is p(A) = (.A - 3) (.X - 7)(>.- 1). 'The eigenvalues are >.= 3, ). = 7,
and A= 1.
(c) T he characteristic polynomial is p(A) = (>. + ~)'2(). - l)(.X- ~ ). The eigenvalues are.-\=
(with multiplicity 2), ). = 1, and>.= ~-
-s
[~0 ~ ~
0 0
2 0 0
14. 'l'wo examples are A :::: ?0 L ?.]v and B = [ 1 -1
.o 0 0 -1 -3 u - 1
15. Using t he block diagonal structure, the characterist-ic polynomial of t he given matrix is
), - 2 -3 0 0
p(),) =
0
l ), - 6
0
0
.\+2
0
-5
= !). -l 2 --3
>. -6
If), + 2
- 1
-5
>.- 2
I
0 0 -1 >- - 2
.::: [{>- - 2)(>. - 6) + 3)[(>. + 2)(>- - 2)- 5J ='" (>- 2 - 8), + 15)(,\2 - 9) = (>-- 5)(.), - 3)2 (-X + 3)
T hus the eigenvalues are A~ 5, ), = 3 (with multiplicity 2), and), = -3.
16. Using the block triangular structure, the characteristic polynomial of the given matrix is
>. +1 2 2
p( A) = det( AI - A) = -1 ). - 2 -1 =(.-\+ 1)(>. -1) 2
1 ).
EXERCISE SET 4.4 143
thus the eigenvalues are .X = -1 and ..\ = 1 (with multiplicity 2}. The eigenspace correspontJing to
.>. = - 1 is obtained by sol.-ing the system
whkh yield$ [:] ~ l [ - :]. Similarly, the eigenspace co"esponding to !. = I is obtained by solviug
H-: -:1m =m
which has the general solution ' =t, y " -t - ' ' = ' ' or (in vedorform) [~] = ' [-:] + t [- 1]
The eigenvalues of A.! 5 are). = ( - 1) 25 = -1 and >. = ( 1) 25 = 1. Correspcnding eigenvectors ar~ the
same as above.
0.
and >. == {2)!} == 512. Corresponding eigenvectors arc the same as above.
l9. The characteristic polynomial of A is p()..) = >.3 -- >. 2 - 5).. - 3 = (..>.- 3)().. + 1) 2 ; thus the cigeo-
valuP~<> are .>. 1 = 3, .A2 = -1 , >..3 = -l. We bnve det( A) = 3 and tr(A) = 1. Thus det(A) = 3:;
(3)(- 1)(-1} = .A 1.A2>..3 and tr(A) = I= (3) + (- 1) + ( -1) =.A t+ .X2 + .>.3.
W. The characteristic polynomial of A is p(>..) = >. 3 - 6.:\ 2 I 12.>- - 8 = (.>.- 2)3; thus the eigenvalues are
At = 2, -\2 = 2, A3 = 2. We have det(A) = 8 and tr(A) = (). T hus det(A) = 8 = (2)(2)(2) = ..\. 1.\2.:\3
and tr(A} = 6 = (2) + (2) + (2) = )q + .:\2 + .\3.
:2. The eigenvalues are ..\ = 2 and >. = -1, with associated eigenvec-
(b) This matrix }m.':i no rt'lal eigenvalues, so t her e are no invariant lines.
(c) The only eigenvalue is A= 2 (multiplicity 2), with associated eigenvector [~] Thus the line
y = 0 is invariant unde r the given matrLx..
24 . The char acteristic polynomia l of A is p{A) = >..2 - (b + l )A + (b- 6a), so A has the stated eigenvalues
if and o nly if p( 4) = p( - 3) = 0. This leads to the equations
6a - 4b = 12
6a + 3b = 12
25. T he characteristic polynomial of A i:; p( A) = >. 2 - (b + 3)A + (3b - 2a), so A has the s tated eigenvalW!S
if l.l.lld o nly if p(2) = p(5) = 0. This leads to the equations
- 2a + b = 2
a+ b=5
26. The characteristic polynomial of A is p(λ) = (λ - 3)(λ² - 2λx + x² - 4). Note that the second factor in this polynomial cannot have a double root (for any value of x) since (-2x)² - 4(x² - 4) = 16 ≠ 0. Thus the only possible repeated eigenvalue of A is λ = 3, and this occurs if and only if λ = 3 is a root of the second factor of p(λ), i.e. if and only if 9 - 6x + x² - 4 = 0. The roots of this quadratic equation are x = 1 and x = 5. For these values of x, λ = 3 is an eigenvalue of multiplicity 2.
27. If A²x = x, then A(x + Ax) = Ax + A²x = Ax + x = x + Ax; thus y = x + Ax is an eigenvector of A corresponding to λ = 1. Similarly, z = x - Ax is an eigenvector of A corresponding to λ = -1.
28. According to Theorem 4.4.8, the characteristic polynomial of A can be expressed as
p(λ) = (λ - λ1)^m1 (λ - λ2)^m2 ··· (λ - λk)^mk
where λ1, λ2, ..., λk are the distinct eigenvalues of A and m1 + m2 + ··· + mk = n. The constant term in this polynomial is p(0). On the other hand, p(0) = det(-A) = (-1)ⁿ det(A).
29. (a) Using Formula (22), the characteristic equation of A is λ² - (a + d)λ + (ad - bc) = 0. This is a quadratic equation with discriminant
(a + d)² - 4(ad - bc) = a² - 2ad + d² + 4bc = (a - d)² + 4bc
30. If (a - d)² + 4bc > 0, we have two distinct real eigenvalues λ1 and λ2. The corresponding eigenvectors are obtained by solving the homogeneous system (λI - A)x = 0; from the first equation, (λ - a)x1 - b·x2 = 0, a general solution is x1 = t, x2 = (λ - a)t/b. Finally, setting t = -b, we see that [-b; a - λi] is an eigenvector corresponding to λ = λi.
31. If the characteristic polynomial of A is p(λ) = λ² + 3λ - 4 = (λ - 1)(λ + 4), then the eigenvalues of A are λ1 = 1 and λ2 = -4.
(a) From Exercise P3 below, A⁻¹ has eigenvalues λ1 = 1 and λ2 = -1/4.
(b) From (a), together with Theorem 4.4.6, it follows that A⁻³ has eigenvalues λ1 = (1)³ = 1 and λ2 = (-1/4)³ = -1/64.
(c) From P4 below, A - 4I has eigenvalues λ1 = 1 - 4 = -3 and λ2 = -4 - 4 = -8.
(d) From P5 below, 5A has eigenvalues λ1 = 5 and λ2 = -20.
(e) From P2(a) below, the eigenvalues of (4A + 2I)ᵀ are the same as those of 4A + 2I; namely λ1 = 4(1) + 2 = 6 and λ2 = 4(-4) + 2 = -14.
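A sketch of these eigenvalue transformations, assuming a hypothetical matrix with characteristic polynomial λ² + 3λ - 4 (the companion matrix below is my own choice, not taken from the text):

```python
import numpy as np

A = np.array([[0., 4.],
              [1., -3.]])    # char. poly: x^2 + 3x - 4, eigenvalues 1 and -4
I = np.eye(2)

print(np.sort(np.linalg.eigvals(A)))                 # [-4.  1.]
print(np.sort(np.linalg.eigvals(np.linalg.inv(A))))  # [-0.25  1.  ]  (part a)
print(np.sort(np.linalg.eigvals(A - 4*I)))           # [-8. -3.]      (part c)
print(np.sort(np.linalg.eigvals(5*A)))               # [-20.   5.]    (part d)
print(np.sort(np.linalg.eigvals((4*A + 2*I).T)))     # [-14.   6.]    (part e)
```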
32. If Ax = λx, where x ≠ 0, then (Ax)·x = (λx)·x = λ(x·x) = λ‖x‖², and so λ = (Ax·x)/‖x‖².
33. (a) The characteristic polynomial of the matrix C is

p(λ) = det(λI - C) = det[λ 0 0 ··· 0 c0; -1 λ 0 ··· 0 c1; 0 -1 λ ··· 0 c2; ···; 0 0 0 ··· -1 λ + c_{n-1}]

Add λ times the second row to the first row, then expand by cofactors along the first column; this reduces the size of the determinant by one while preserving its pattern. Add λ² times the (new) second row to the first row and expand by cofactors along the first column again; continuing in this way yields p(λ) = λⁿ + c_{n-1}λ^{n-1} + ··· + c1λ + c0.
D2. If A is a square matrix all of whose entries are the same, then det(A) = 0; thus Ax = 0 has nontrivial solutions and λ = 0 is an eigenvalue of A.
D3. Using Formula (22), the characteristic polynomial of A is p(λ) = λ² - 4λ + 4 = (λ - 2)². Thus λ = 2 is the only eigenvalue of A (it has multiplicity 2).
D4. The eigenvalues of A (with multiplicity) are 3, 3, and -2, -2, -2. Thus, from Theorem 4.4.12, we have det(A) = (3)(3)(-2)(-2)(-2) = -72 and tr(A) = 3 + 3 - 2 - 2 - 2 = 0.
D5. For A = [a b; c d], the equation tr(A) = det(A) reads a + d = ad - bc. If d = 1, then this equation is satisfied if and only if bc = -1, e.g., A = [1 1; -1 1]. If d ≠ 1, then the equation is satisfied if and only if a = (d + bc)/(d - 1), e.g., A = [1 -1; 1 2].
D6. The characteristic polynomial of A factors as p(λ) = (λ - 1)(λ + 2)³; thus the eigenvalues of A are λ = 1 and λ = -2. It follows from Theorem 4.4.6 that the eigenvalues of A² are λ = (1)² = 1 and λ = (-2)² = 4.
D7. (a) False. For example, x = 0 satisfies Ax = λx for any A and λ. The correct statement is that if Ax = λx for some nonzero vector x, then x is an eigenvector of A.
(b) True. If λ is an eigenvalue of A, then λ² is an eigenvalue of A²; thus (λ²I - A²)x = 0 has nontrivial solutions.
(c) False. If λ = 0 is an eigenvalue of A, then the system Ax = 0 has nontrivial solutions; thus A is not invertible, and so the row vectors and column vectors of A are linearly dependent. [The statement becomes true if "independent" is replaced by "dependent".]
(d) False. For example, the (nonsymmetric) matrix A = [1 1; 0 2] has the real eigenvalues λ = 1 and λ = 2. [But it is true that a symmetric matrix has real eigenvalues.]
D8. (a) False. For example, the reduced row echelon form of A = [1 0; 0 2] is I = [1 0; 0 1], but A and I have different eigenvalues.
(b) True. We have A(x1 + x2) = λ1x1 + λ2x2 and, if λ1 ≠ λ2, it can be shown (since x1 and x2 must be linearly independent) that λ1x1 + λ2x2 ≠ λ(x1 + x2) for any value of λ.
(c) True. The characteristic polynomial of A is a cubic polynomial, and every cubic polynomial has at least one real root.
(d) True. If p(λ) = λⁿ + 1, then det(A) = (-1)ⁿ p(0) = ±1 ≠ 0; thus A is invertible.
P1. The eigenspace corresponding to λ = 2 is obtained by solving the system (2I - A)x = 0; the eigenspace of A is the line through the origin spanned by the resulting eigenvector.
P3. Suppose that Ax = λx where x ≠ 0 and A is invertible. Then x = A⁻¹Ax = A⁻¹λx = λA⁻¹x and, since λ ≠ 0 (because A is invertible), it follows that A⁻¹x = (1/λ)x. Thus 1/λ is an eigenvalue of A⁻¹ and x is a corresponding eigenvector.
P4. Suppose that Ax = λx where x ≠ 0. Then (A - sI)x = Ax - sIx = λx - sx = (λ - s)x. Thus λ - s is an eigenvalue of A - sI and x is a corresponding eigenvector.
P5. Suppose that Ax = λx where x ≠ 0. Then (sA)x = s(Ax) = s(λx) = (sλ)x. Thus sλ is an eigenvalue of sA and x is a corresponding eigenvector.
P6. In the case that A has a repeated eigenvalue, we must have (a - d)² + 4b² = 0 and so a = d and b = 0. Thus the only symmetric 2×2 matrices with repeated eigenvalues are those of the form A = aI. Such a matrix has λ = a as its only eigenvalue, and the corresponding eigenspace is R². This proves part (a) of Theorem 4.4.11.
If (a - d)² + 4b² > 0, then A has two distinct real eigenvalues λ1 and λ2, with corresponding eigenvectors x1 and x2 given as in Exercise 30. The eigenspaces correspond to the lines y = m1x and y = m2x, where mj = (λj - a)/b for j = 1, 2. Since

(a - λ1)(a - λ2) = (1/2[(a - d) + √((a - d)² + 4b²)])(1/2[(a - d) - √((a - d)² + 4b²)]) = 1/4[(a - d)² - (a - d)² - 4b²] = -b²

we have m1m2 = -1; thus the eigenspaces correspond to perpendicular lines. This proves part (b) of Theorem 4.4.11.
Note. It is not possible to have (a - d)² + 4b² < 0; thus the eigenvalues of a 2×2 symmetric matrix are necessarily real.
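A sketch verifying part (b) numerically for one symmetric matrix; the entries a, b, d below are arbitrary sample values, not from the text:

```python
import numpy as np

a, b, d = 2.0, 1.0, -1.0
A = np.array([[a, b],
              [b, d]])

vals, vecs = np.linalg.eigh(A)   # symmetric eigenproblem
x1, x2 = vecs[:, 0], vecs[:, 1]
print(np.dot(x1, x2))            # ~0: the two eigenspaces are perpendicular
```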
P7. Suppose that Ax = λx and Bx = x. Then we have ABx = A(Bx) = A(x) = λx and BAx = B(Ax) = B(λx) = λB(x) = λx. Thus λ is an eigenvalue of both AB and BA, and x is a corresponding eigenvector.
CHAPTER 6
Linear Transformations
EXERCISE SET 6.1
7. (a) We have TA(x) = b if and only if x is a solution of the linear system

[1 2 0; 2 5 -3; 0 -1 3][x1; x2; x3] = b

The reduced row echelon form of the augmented matrix of the above system is

[1 0 6 | -1; 0 1 -3 | 1; 0 0 0 | 0]

and it follows that the system has the general solution x1 = -1 - 6t, x2 = 1 + 3t, x3 = t.
(b) Similarly, TA(x) = b if and only if x is a solution of the system

[1 2 0; 2 5 -3; 0 -1 3][x1; x2; x3] = b

The reduced row echelon form of the augmented matrix of the above system is

[1 0 6 | 0; 0 1 -3 | 0; 0 0 0 | 1]

and from the last row we see that the system is inconsistent. Thus there is no vector x in R³ for which TA(x) = b.
8. (a) The reduced row echelon form of the augmented matrix of the system is

[1 0 2 -1 | 2; 0 1 1 2 | 3; 0 0 0 0 | 0]

and it follows that the system has general solution x1 = 2 - 2s + t, x2 = 3 - s - 2t, x3 = s, x4 = t. Thus any vector of the form

x = [2; 3; 0; 0] + s[-2; -1; 1; 0] + t[1; -2; 0; 1]

will have the property that TA(x) = b.
(b) In this case the reduced row echelon form of the augmented matrix has a row of the form [0 0 0 0 | 1]. Thus the system is inconsistent; there is no vector x in R⁴ for which TA(x) = b.
9. (a), (c), and (d) are linear transformations. (b) is not linear; it is neither homogeneous nor additive.
10. (a) and (c) are linear transformations. (b) and (d) are not linear; neither homogeneous nor additive.
11. (a) and (c) are linear transformations. (b) is not linear; neither homogeneous nor additive.
12. (a) and (c) are linear transformations. (b) is not linear; neither homogeneous nor additive.
13. This transformation can be written in matrix form as w = Ax, where A is the coefficient matrix of the defining equations; it is therefore a linear transformation.
19. (a) We have T(1,0) = (-1,0) and T(0,1) = (1,1); thus the standard matrix is [T] = [-1 1; 0 1]. Computing T(x) by multiplying by this matrix agrees with direct calculation.
(b) We have T(1,0,0) = (2,0,0), T(0,1,0) = (-1,1,0), and T(0,0,1) = (1,1,0); thus the standard matrix is [T] = [2 -1 1; 0 1 1; 0 0 0].
Direct calculation gives T(x) = (3(-1) + 5(2) - (4), 4(-1) - (2) + (4), 3(-1) + 2(2) - (4)) = (3, -2, -3). On the other hand, using the matrix, we obtain the same result.
27.-28. In each part the image of the given vector is obtained by multiplying it by the indicated rotation matrix; to three decimal places the computed images include entries such as 1.946, 4.598, -1.964, -0.299, and 0.518.
29. The matrix A = [-1/√2 -1/√2; 1/√2 -1/√2] corresponds to Rθ = [cos θ -sin θ; sin θ cos θ] where θ = 3π/4 (135°).
30. The matrix A = [1/√2 1/√2; 1/√2 -1/√2] corresponds to Hθ = [cos 2θ sin 2θ; sin 2θ -cos 2θ] where θ = π/8 (22.5°).
31. (a) HL = [cos 2θ sin 2θ; sin 2θ -cos 2θ] = [cos²θ - sin²θ, 2 sin θ cos θ; 2 sin θ cos θ, sin²θ - cos²θ], where cos θ = 1/√(1+m²) and sin θ = m/√(1+m²). Thus

HL = 1/(1+m²) · [1 - m², 2m; 2m, m² - 1]

(b) PL = [cos²θ, sin θ cos θ; sin θ cos θ, sin²θ] = 1/(1+m²) · [1, m; m, m²]
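A sketch checking these formulas numerically; the slope m = 2 below is a sample value of my choosing (it matches the next exercise), not part of this derivation:

```python
import numpy as np

m = 2.0
H = (1/(1+m**2)) * np.array([[1-m**2, 2*m], [2*m, m**2-1]])
P = (1/(1+m**2)) * np.array([[1, m], [m, m**2]])

print(np.allclose(H @ H, np.eye(2)))   # True: a reflection is its own inverse
print(np.allclose(P @ P, P))           # True: a projection is idempotent
print(H @ np.array([1, m]))            # [1. 2.]: points on y = mx are fixed
```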
32. (a) We have m = 2; thus H = HL = (1/5)[-3 4; 4 3], and the reflection of the given vector x about the line y = 2x is Hx.
(b) We have m = 2; thus P = PL = (1/5)[1 2; 2 4], and the projection of x onto the line is Px.
33. (a) We have m = 3; thus H = HL = (1/10)[-8 6; 6 8] = (1/5)[-4 3; 3 4], and the reflection of x about the line y = 3x is obtained by multiplying x by H.
34. If T is defined by the formula T(x,y) = (0,0), then T(cx, cy) = (0,0) = c(0,0) = cT(x,y) and T(x1 + x2, y1 + y2) = (0,0) = (0,0) + (0,0) = T(x1, y1) + T(x2, y2); thus T is linear.
If T is defined by T(x,y) = (1,1), then T(2x, 2y) = (1,1) ≠ 2(1,1) = 2T(x,y) and T(x1 + x2, y1 + y2) = (1,1) ≠ (1,1) + (1,1) = T(x1, y1) + T(x2, y2); thus T is neither homogeneous nor additive.
36. The given equations can be written in matrix form as A[x; y] = [w1; w2]; solving for [x; y] (i.e., multiplying by A⁻¹) expresses x and y in terms of w1 and w2.
D3. From familiar trigonometric identities, we have A = [cos 2θ -sin 2θ; sin 2θ cos 2θ] = R2θ. Thus multiplication by A corresponds to rotation about the origin through the angle 2θ.
D4. If A = Rθ = [cos θ -sin θ; sin θ cos θ], then Aᵀ = [cos θ sin θ; -sin θ cos θ] = [cos(-θ) -sin(-θ); sin(-θ) cos(-θ)] = R₋θ. Thus multiplication by Aᵀ corresponds to rotation through the angle -θ.
D5. Since T(0) = x0 ≠ 0, this transformation is not linear. Geometrically, it corresponds to a rotation followed by a translation.
D6. If b = 0, then f is both additive and homogeneous. If b ≠ 0, then f is neither additive nor homogeneous.
D7. Since T is linear, we have T(x0 + tv) = T(x0) + tT(v). Thus, if T(v) ≠ 0, the image of the line x = x0 + tv is the line y = y0 + tw where y0 = T(x0) and w = T(v). If T(v) = 0, then the image of x = x0 + tv is the single point y0 = T(x0).
EXERCISE SET 6.2
1.-4. In each case, computing the product AᵀA gives the identity matrix; thus A is orthogonal and A⁻¹ = Aᵀ.
5. (a) AᵀA = I; thus A is orthogonal. We have det(A) = 1, and A = Rθ where θ = 3π/4. Thus multiplication by A corresponds to counterclockwise rotation about the origin through the angle 3π/4.
6. (a) AᵀA = I; thus A is orthogonal. We have det(A) = 1, and A = [1/2 -√3/2; √3/2 1/2] = Rθ where θ = π/3.
(b) AᵀA = I; thus A is orthogonal. We have det(A) = -1, and so A = [1/2 √3/2; √3/2 -1/2] = Hθ where θ = π/6.
9. (a) Expansion in the x-direction with factor 2. (b) Contraction with factor 1/2.
(c) Shear in the x-direction with factor 4. (d) Shear in the y-direction with factor -4.
11.-14. In each of these problems the standard matrix is found by tracking the action of T on the standard unit vectors through the indicated sequence of operations; the images Te1, Te2 (and Te3) become the columns of [T] = [Te1 Te2 (Te3)].
15.-18. The answers are the sketches shown in the text (images of the given regions under the indicated transformations).
19.-20. In each part the image of the given vector is computed by multiplying it by the standard matrix of the indicated transformation.
21.-24. The standard matrices [T] (and [R]) are obtained by applying the indicated rotation or reflection to the standard unit vectors and using the images as the columns of the matrix.
25. The matrix A is a rotation matrix since it is orthogonal and det(A) = 1. The axis of rotation is found by solving the system (I - A)x = 0; a general solution is a scalar multiple of a single vector, which spans the axis. Choosing the positive orientation and comparing with Table 6.2.6, we see that the angle of rotation is θ = π/2.
26. The matrix A is a rotation matrix since it is orthogonal and det(A) = 1. The axis of rotation is found by solving the system (I - A)x = 0. A general solution of this system is x = t(1, 1, 1); thus the axis of rotation is the line passing through the origin and the point (1, 1, 1). The plane passing through the origin that is perpendicular to this line has equation x + y + z = 0, and w = (-1, 1, 0) is a vector in this plane. Writing w in column vector form, we have w = [-1; 1; 0], and the rotation angle can then be determined by comparing w with Aw.
27. We have tr(A) = 1, and so Formula (17) reduces to v = Ax + Aᵀx. Taking x = e1, the resulting vector v lies along the x-axis; from this we conclude that the x-axis is the axis of rotation. Finally, using Formula (16), the rotation angle is determined by cos θ = (tr(A) - 1)/2 = 0, and so θ = π/2.
28. We have tr(A) = 0, and so Formula (17) reduces to v = Ax + Aᵀx + x. Taking x = e1, the resulting vector v is a multiple of (1, 1, 1). Thus the axis of rotation is the line through the origin and the point (1, 1, 1). Using Formula (16), the rotation angle is determined by cos θ = (tr(A) - 1)/2 = -1/2; thus θ = 2π/3.
30. (a) We have S(e1) = S(1,0,0) = (1,0,0), S(e2) = S(0,1,0) = (0,1,0), and S(e3) = S(0,0,1) = (k, k, 1); thus the standard matrix for S is [S] = [1 0 k; 0 1 k; 0 0 1].
(b) The shear in the xz-direction with factor k is defined by S(x, y, z) = (x + ky, y, z + ky); its standard matrix is [S] = [1 k 0; 0 1 0; 0 k 1].
31. If u = (1, 0, 0) then, on substituting a = 1 and b = c = 0 into Formula (13), we have

R = [1 0 0; 0 cos θ -sin θ; 0 sin θ cos θ]

The other entries of Table 6.2.6 are obtained similarly.
and from this it follows that a = 0, b = -s, c = t or a = 0, b = s, c = -t for the appropriate fixed values of s and t. These are the only possibilities.
D3. The two columns of this matrix are orthogonal for any values of a and b. Thus, for the matrix to be orthogonal, all that is required is that the column vectors be of length 1. Thus a and b must satisfy (a + b)² + (a - b)² = 1, or (equivalently) 2a² + 2b² = 1.
D4. If A is an orthogonal matrix and Ax = λx, then ‖x‖ = ‖Ax‖ = ‖λx‖ = |λ|‖x‖. Thus the eigenvalues of A (if any) must be of absolute value 1.
D5. (a) Vectors parallel to the line y = x will be eigenvectors corresponding to the eigenvalue λ = 1. Vectors perpendicular to y = x will be eigenvectors corresponding to the eigenvalue λ = -1.
(b) Every nonzero vector is an eigenvector corresponding to the eigenvalue λ = 1/2.
D6. The shear in the x-direction with factor -2; thus T(x, y) = (x - 2y, y) and [T] = [1 -2; 0 1].
D7. From the polarization identity, we have x·y = 1/4(‖x + y‖² - ‖x - y‖²) = 1/4(16 - 4) = 3.
D8. If ‖x + y‖ = ‖x - y‖, then the parallelogram having x and y as adjacent edges has diagonals of equal length and must therefore be a rectangle.
EXERCISE SET 6.3
(c) ker(T) = {[0; 0]}, ran(T) = R². The transformation T is both one-to-one and onto.
2. (a) ker(T) = {(x, 0) | -∞ < x < ∞} (the x-axis), ran(T) = {(0, y) | -∞ < y < ∞} (the y-axis). The transformation T is neither one-to-one nor onto.
(b) ker(T) = {(0, y, 0) | -∞ < y < ∞} (the y-axis), ran(T) = {(x, 0, z) | -∞ < x, z < ∞} (the xz-plane). The transformation T is neither one-to-one nor onto.
(c) ker(T) = {[0; 0]}, ran(T) = R². The transformation T is both one-to-one and onto.
(d) ker(T) = {[0; 0; 0]}, ran(T) = R³. The transformation T is both one-to-one and onto.
3.-5. In each case the kernel of the transformation is the solution set of the homogeneous system Ax = 0; row reducing the augmented matrix [A | 0] and solving for the pivot variables in terms of the free variables expresses the kernel parametrically as the span of finitely many basis vectors.
6. The kernel of the transformation TA : R³ → R⁴ is the solution set of the linear system Ax = 0. Row reducing the augmented matrix of this system leaves one free variable; thus the kernel consists of all vectors of the form x = tv, where -∞ < t < ∞.
7. The kernel of T is equal to the solution space of the given homogeneous system. The augmented matrix of this system can be reduced to [1 0 0 | 0; 0 1 0 | 0; 0 0 1 | 0] (together with a row of zeros); the system thus has only the trivial solution, and so ker(T) = {[0; 0; 0]}.
8. The kernel of T is equal to the solution space of the given homogeneous system. The augmented matrix of this system can be reduced to [1 0 1 | 0; 0 1 0 | 0]; thus ker(T) consists of all vectors of the form x = t[-1; 0; 1], where -∞ < t < ∞.
9. (a) The vector b is in the column space of A if and only if the linear system Ax = b is consistent. Row reducing the augmented matrix of Ax = b produces a row of the form [0 0 0 | nonzero]; from this we conclude that the system is inconsistent, and thus b is not in the column space of A.
(b) The augmented matrix of the system Ax = b can be row reduced to a consistent form. From this we conclude that the system is consistent and, taking the free variable equal to zero, the vector b can be expressed as a linear combination of the first two column vectors of A.
10. (a) The vector b is in the column space of A if and only if the linear system Ax = b is consistent. The augmented matrix of Ax = b row reduces to a matrix whose last row is [0 0 0 0 | 1]. From this we conclude that the system is inconsistent; thus b is not in the column space of A.
(b) The augmented matrix of the system Ax = b is

[3 -2 1 5 | 4; 1 4 5 -3 | 6; 0 1 1 -1 | 1]

and its reduced row echelon form is

[1 0 1 1 | 2; 0 1 1 -1 | 1; 0 0 0 0 | 0]

From this we conclude that the system is consistent, with general solution x1 = 2 - s - t, x2 = 1 - s + t, x3 = s, x4 = t. Taking s = t = 0, the vector b can be expressed as a linear combination of the column vectors of A as follows:

b = 2c1(A) + c2(A)
11. The vector w is in the range of the linear operator T if and only if the linear system

2x - y = 3
x + z = 3
y - z = 0

is consistent. The augmented matrix of this system can be reduced to

[1 0 0 | 2; 0 1 0 | 1; 0 0 1 | 1]

Thus the system has a unique solution x = (2, 1, 1), and we have Tx = T(2, 1, 1) = (3, 3, 0) = w.
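A quick check of this solution; the coefficient matrix below is read off from the three equations above:

```python
import numpy as np

A = np.array([[2., -1., 0.],
              [1.,  0., 1.],
              [0.,  1., -1.]])
w = np.array([3., 3., 0.])

x = np.linalg.solve(A, w)
print(x)        # [2. 1. 1.]
print(A @ x)    # [3. 3. 0.] = w, so w is in the range of T
```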
12. The vector w is in the range of the linear operator T if and only if the linear system

x - y = 1
x + y + z = 2
x + 2z = -1

is consistent. The augmented matrix of this system can be reduced to

[1 0 0 | 7/3; 0 1 0 | 4/3; 0 0 1 | -5/3]

Thus the system has a unique solution x = (7/3, 4/3, -5/3), and we have Tx = w; i.e., w is in the range of T.
13. The operator can be written as w = Ax, where A is the coefficient matrix of the given equations. Since det(A) = 17 ≠ 0, the operator is both one-to-one and onto.
14. The operator can be written as w = Ax. Since det(A) = 0, the operator is neither one-to-one nor onto.
16. The operator can be written as w = Ax, where A is the coefficient matrix of the given equations; this identifies the standard matrix of the operator.
17. The operator can be written as w = TA(x), where A is the given 2×2 matrix. Since det(A) = 0, TA is not onto. The range of TA consists of all vectors of the form w = tv for a fixed vector v, where -∞ < t < ∞.
18. The vector w = (w1, w2, w3) is in the range of TA if and only if the system

[1 -2 1; 5 -1 3; 4 1 2][x1; x2; x3] = [w1; w2; w3]

is consistent. The augmented matrix of this system can be row reduced as follows:

[1 -2 1 | w1; 5 -1 3 | w2; 4 1 2 | w3] → [1 -2 1 | w1; 0 9 -2 | w2 - 5w1; 0 9 -2 | w3 - 4w1] → [1 -2 1 | w1; 0 9 -2 | w2 - 5w1; 0 0 0 | w3 - w2 + w1]

Thus the system is consistent (w is in the range of TA) if and only if w1 - w2 + w3 = 0. In particular, the vector w = (1, 1, 1) is not in the range of TA.
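A sketch checking this range condition; the matrix entries are my reading of the garbled original, so treat them as an assumption:

```python
import numpy as np

A = np.array([[1., -2., 1.],
              [5., -1., 3.],
              [4.,  1., 2.]])

for w in (np.array([1., 1., 1.]), np.array([1., 2., 1.])):
    # w is in the range iff appending it as a column does not raise the rank
    in_range = np.linalg.matrix_rank(np.column_stack([A, w])) == np.linalg.matrix_rank(A)
    print(w, w[0] - w[1] + w[2], in_range)
# (1,1,1): w1 - w2 + w3 = 1 != 0, not in the range
# (1,2,1): w1 - w2 + w3 = 0, in the range
```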
19. (a) The linear transformation TA : R² → R³ is one-to-one if and only if the linear system Ax = 0 has only the trivial solution. Row reducing the augmented matrix of Ax = 0 shows that it has only the trivial solution, and so TA is one-to-one.
(b) Row reducing the augmented matrix of the system Ax = 0 shows that the system has nontrivial solutions x = tv; in particular, TA is not one-to-one.
20. (a) The range of the transformation TA : R² → R³ consists of those vectors w in R³ for which the linear system Ax = w is consistent. The augmented matrix of Ax = w is

[1 -1 | w1; 2 0 | w2; 3 -4 | w3]

which can be row reduced to

[1 -1 | w1; 0 2 | w2 - 2w1; 0 -1 | w3 - 3w1]

and then to

[1 -1 | w1; 0 2 | w2 - 2w1; 0 0 | 2w3 + w2 - 8w1]

From this we conclude that Ax = w is consistent if and only if -8w1 + w2 + 2w3 = 0; thus the transformation TA is not onto.
(b) The range of the transformation TA : R³ → R² consists of those vectors w in R² for which the linear system Ax = w is consistent. Row reducing the augmented matrix of Ax = w shows that the system is consistent for every vector w in R²; thus TA is onto.
21. (a) The augmented matrix of the system Ax = b can be row reduced as follows:

[1 -2 -1 3 | b1; 2 4 6 -2 | b2; 3 0 3 3 | b3] → [1 -2 -1 3 | b1; 0 8 8 -8 | b2 - 2b1; 0 6 6 -6 | b3 - 3b1] → [1 -2 -1 3 | b1; 0 1 1 -1 | (1/8)b2 - (1/4)b1; 0 0 0 0 | (1/6)b3 - (1/8)b2 - (1/4)b1]

Thus Ax = b is consistent if and only if (1/6)b3 - (1/8)b2 - (1/4)b1 = 0.
(b) Solving this condition for b1 in terms of b2 and b3 (b1 = -(1/2)b2 + (2/3)b3) and making b2 = s and b3 = t into parameters, the consistent vectors are

b = [-(1/2)s + (2/3)t; s; t] = s[-1/2; 1; 0] + t[2/3; 0; 1]

Note. This is just one possibility; it was obtained by solving for b1 in terms of b2 and b3 and then making b2 and b3 into parameters.
(c) The augmented matrix of the system Ax = 0 can be row reduced to

[1 0 1 1 | 0; 0 1 1 -1 | 0; 0 0 0 0 | 0]

Thus the kernel of TA (i.e. the solution space of Ax = 0) consists of all vectors of the form

x = [-s - t; -s + t; s; t] = s[-1; -1; 1; 0] + t[-1; 1; 0; 1]
D2. No. The transformation is not one-to-one since T(v) = a × v = 0 for all vectors v that are parallel to a.
D4. No (assuming v0 is not a scalar multiple of v). The line x = v0 + tv does not pass through the origin and thus is not a subspace of Rⁿ. It follows from Theorem 6.3.7 that this line cannot be equal to the range of a linear operator.
EXERCISE SET 6.4
1.-3. In each case the standard matrix of a composition is the product of the standard matrices taken in right-to-left order: [TB ∘ TA] = BA and [TA ∘ TB] = AB. Carrying out the indicated matrix multiplications gives the standard matrices requested in each part.
4. (a) The standard matrices [T1] and [T2] are written down directly from the defining equations.
(b) [T2 ∘ T1] = [T2][T1] and [T1 ∘ T2] = [T1][T2] are computed by matrix multiplication.
(c) T2(T1(x1, x2, x3)) = (2x2, x1 + 3x2, 17x1 + 3x2) and T1(T2(x1, x2, x3)) = (4x1 + 8x2, -2x1 - 4x2 - x3, -x1 - 2x2 + 3x3).
5. (a) The standard matrix for the rotation followed by the reflection is the product A2A1 of the reflection matrix A2 and the rotation matrix A1.
(b) Similarly, the standard matrix for the projection followed by the contraction is the product of the two standard matrices.
(c) The standard matrix for the reflection followed by the dilation is

A2A1 = [3 0; 0 3][1 0; 0 -1] = [3 0; 0 -3]
6. (a) The standard matrix for the composition is the product of the individual rotation matrices.
(c) The composition corresponds to a counterclockwise rotation of 180°. The standard matrix is

R_{π/3} R_{7π/12} R_{π/12} = R_{π/12 + 7π/12 + π/3} = R_π = [-1 0; 0 -1]
7. (a) The standard matrix for the reflection followed by the projection is the product A2A1 of the two standard matrices.
(b) The standard matrix for the rotation followed by the dilation is likewise the product of the two standard matrices.
(c) The standard matrix for the projection followed by the reflection is computed the same way.
8. Both parts are handled the same way: the standard matrix for each composition is the product of the standard matrices of the individual operators, taken in right-to-left order.
15. The standard matrix A for the operator T is invertible; thus T is one-to-one, and the standard matrix for T⁻¹ is A⁻¹.
16. Similarly, the standard matrix A for T is invertible; T is one-to-one, and T⁻¹(w1, w2) is obtained by multiplying [w1; w2] by A⁻¹.
17. The standard matrix for the operator T is A = [0 -1; 1 0]. Since A is invertible, T is one-to-one and the standard matrix for T⁻¹ is A⁻¹ = [0 1; -1 0]; thus T⁻¹(w1, w2) = (w2, -w1).
18. The standard matrix for the operator T is not invertible, so T is not one-to-one.
19. The standard matrix for the operator T is A = [1 -2 2; 2 1 1; 1 1 0]. Since A is invertible, T is one-to-one, and the standard matrix for T⁻¹ is

A⁻¹ = [1 -2 4; -1 2 -3; -1 3 -5]

thus the formula for T⁻¹ is T⁻¹(w1, w2, w3) = (w1 - 2w2 + 4w3, -w1 + 2w2 - 3w3, -w1 + 3w2 - 5w3).
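A quick check of the inverse as reconstructed above:

```python
import numpy as np

A = np.array([[1., -2., 2.],
              [2.,  1., 1.],
              [1.,  1., 0.]])
A_inv = np.array([[ 1., -2.,  4.],
                  [-1.,  2., -3.],
                  [-1.,  3., -5.]])

print(np.allclose(A @ A_inv, np.eye(3)))   # True
w = np.array([1., 2., 3.])
print(A_inv @ w, np.linalg.solve(A, w))    # same vector both ways
```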
20. The standard matrix for T is not invertible, so T is not one-to-one.
21. The standard matrix A for the operator T is invertible; thus T is one-to-one, and the formula for T⁻¹ is obtained from A⁻¹ as in the preceding exercises.
22. The standard matrix A for the operator T is invertible; thus T is one-to-one and T⁻¹ is represented by A⁻¹.
23. (a) It is easy to see directly (from the geometric definitions) that T1 ∘ T2 = 0 = T2 ∘ T1. This also follows by multiplying the standard matrices.
(b) It is easy to see directly that the composition of T1 and T2 (in either order) corresponds to rotation about the origin through the angle θ1 + θ2; thus T1 ∘ T2 = T2 ∘ T1. This also follows from the computation carried out in Example 1.
24. (a) The products [T1][T2] and [T2][T1] are computed directly from the standard matrices; comparing them settles whether the operators commute.
(b) We have [T2] = [cos θ -sin θ; sin θ cos θ]; thus [T1 ∘ T2] = [T1][T2] and [T2 ∘ T1] = [T2][T1] are computed directly.
(c) We have [T1] = kI; thus [T1][T2] = (kI)[T2] = k[T2] and [T2][T1] = [T2](kI) = k[T2]. It follows that T1 ∘ T2 = T2 ∘ T1 = kT2.
25. The standard matrix for the composition is likewise computed as the product of the two reflection matrices.
26. We have H_{π/4} = [0 1; 1 0] and H_{π/8} = [1/√2 1/√2; 1/√2 -1/√2]. Thus the standard matrix for the composition is

H_{π/8} H_{π/4} = [1/√2 1/√2; 1/√2 -1/√2][0 1; 1 0] = [1/√2 1/√2; -1/√2 1/√2]
27.-28. The image of the unit square is the parallelogram having the images of e1 and e2 as adjacent sides (see the figures); in Exercise 28 its area is |det A| = |8 - 9| = 1.
RβH0Rβ⁻¹ = [cos²β - sin²β, 2 sin β cos β; 2 sin β cos β, sin²β - cos²β] = [cos 2β, sin 2β; sin 2β, -cos 2β] = Hβ

and so multiplication by the matrix RβH0Rβ⁻¹ corresponds to reflection about the line L.
D3. From Example 2, we have H_{θ1}H_{θ2} = R_{2(θ1 - θ2)}. Since θ = 2(θ - θ/2), it follows that Rθ = Hθ H_{θ/2}. Thus every rotation can be expressed as a composition of two reflections.
CHAPTER 7
Dimension and Structure
EXERCISE SET 7.1
5. (a) Any one of the vectors (1, 2), (1/2, 1), (-1, -2) forms a basis for the line y = 2x.
(b) Any two of the vectors (1, -1, 0), (2, 0, -1), (0, 2, -1) form a basis for the plane x + y + 2z = 0.
6. (a) Any one of the vectors (1, 3), (2, 6), (-1, -3) forms a basis for the line x = t, y = 3t.
(b) Any two of the vectors (1, 1, 3), (1, -1, 2), (2, 0, 5) form a basis for the given plane.
7. The augmented matrix of the system is [3 1 1 1 | 0; 5 -1 1 -1 | 0] and the reduced row echelon form of this matrix is [1 0 1/4 0 | 0; 0 1 1/4 1 | 0]. Thus a general solution of the system is

x = (-1/4 s, -1/4 s - t, s, t) = s(-1/4, -1/4, 1, 0) + t(0, -1, 0, 1)

where -∞ < s, t < ∞. The solution space is 2-dimensional with canonical basis {v1, v2} where v1 = (-1/4, -1/4, 1, 0) and v2 = (0, -1, 0, 1).
I
8. Tho augmented mat,;x of tho system i' [: =: ,; ~ ~] nnd t ho ceduced row echelon fo,m of th ;s
matrix is
I () :1: 0]
0 1 2 o . Thus the general solution is
[ I
0 0 O tO
X= ( -3t, - 2t , t) ~= t( -3 , -2, 1)
where -co< l < oc. The solution space is 1-dimcnsional with cnnonic..al basis {( -3, -2, 1)} .
9. Row reducing the augmented matrix of the system leads to a general solution x = s·v1 + t·v2, where -∞ < s, t < ∞. The solution space is 2-dimensional with canonical basis {v1, v2} where v1 = (-1, 1, 0, 0, 0) and v2 = (-1, 0, -1, 0, 1).
10. The augmented matrix of the system is row reduced to reduced row echelon form, from which the general solution and a canonical basis for the solution space are read off exactly as in the preceding exercises.
11. (a) The hyperplane (1, 2, -3)⊥ consists of all vectors x = (x, y, z) in R³ satisfying the equation x + 2y - 3z = 0. Using y = s and z = t as free variables, the solutions of this equation can be written in the form

x = (-2s + 3t, s, t) = s(-2, 1, 0) + t(3, 0, 1)

where -∞ < s, t < ∞. Thus the vectors v1 = (-2, 1, 0) and v2 = (3, 0, 1) form a basis for the hyperplane.
(b) The hyperplane (2, -1, 4, 1)⊥ consists of all vectors x = (x1, x2, x3, x4) in R⁴ satisfying the equation 2x1 - x2 + 4x3 + x4 = 0. Using x1 = r, x3 = s and x4 = t as free variables, the solutions of this equation can be written in the form

x = (r, 2r + 4s + t, s, t) = r(1, 2, 0, 0) + s(0, 4, 1, 0) + t(0, 1, 0, 1)

where -∞ < r, s, t < ∞. Thus the vectors v1 = (1, 2, 0, 0), v2 = (0, 4, 1, 0), and v3 = (0, 1, 0, 1) form a basis for the hyperplane.
12. (a) The hyperplane (-2, 1, 4)⊥ consists of all vectors x = (x, y, z) in R³ satisfying the equation -2x + y + 4z = 0. Using x = s and z = t as free variables, the solutions of this equation can be written in the form

x = (s, 2s - 4t, t) = s(1, 2, 0) + t(0, -4, 1)

where -∞ < s, t < ∞. Thus the vectors v1 = (1, 2, 0) and v2 = (0, -4, 1) form a basis for the hyperplane.
(b) The hyperplane (0, -3, 5, 7)⊥ consists of all vectors x = (x1, x2, x3, x4) in R⁴ satisfying the equation -3x2 + 5x3 + 7x4 = 0. Using x1 = r, x3 = s and x4 = t as free variables, the solutions of this equation can be written in the form

x = (r, (5/3)s + (7/3)t, s, t) = r(1, 0, 0, 0) + s(0, 5/3, 1, 0) + t(0, 7/3, 0, 1)

and these three vectors form a basis for the hyperplane.
D2. Yes, they are linearly independent. If we write the vectors in the order

v4 = (0, 0, 0, 0, 1, *), v3 = (0, 0, 0, 1, *, *), v2 = (0, 0, 1, *, *, *), v1 = (1, *, *, *, *, *)

then, because of the positions of the leading 1s, it is clear that none of these vectors can be written as a linear combination of the preceding ones in the list.
D3. False. Such a set is linearly dependent since, when the set is written in reverse order, some vector is a linear combination of its predecessors.
D4. The solution space of Ax = 0 has positive dimension if and only if the system has nontrivial solutions, and this occurs if and only if det(A) = 0. Since det(A) = t² + 7t + 12 = (t + 3)(t + 4), it follows that the solution space has positive dimension if and only if t = -3 or t = -4. The solution space has dimension 1 in each case.
P2. If k ≠ 0, then (ka)·x = k(a·x) = 0 if and only if a·x = 0; thus (ka)⊥ = a⊥.
EXERCISE SET 7.2
1. (a) A basis for R² must contain exactly two vectors (three are too many); any set of three vectors in R² is linearly dependent.
(b) A basis for R³ must contain exactly three vectors (two are not enough); a set of two vectors cannot span R³.
(c) The vectors v1 and v2 are linearly dependent (v2 = 2v1).
2. (a) A basis for R² must contain exactly two vectors (one is not enough).
(b) A basis for R³ must contain exactly three vectors (four are too many).
(c) These vectors are linearly dependent (any set of vectors containing the zero vector is dependent).
3. (a) The vectors v1 = (2, 1) and v2 = (0, 3) are linearly independent since neither is a scalar multiple of the other; thus these two vectors form a basis for R².
(b) The vector v2 = (-7, 8, 0) is not a scalar multiple of v1 = (4, 1, 0), and v3 = (1, 1, 1) is not a linear combination of v1 and v2 since any such linear combination would have 0 in the third component. Thus v1, v2, and v3 are linearly independent, and it follows that these vectors form a basis for R³.
4. (a) The vectors v1 = (7, 5) and v2 = (4, 8) are linearly independent since neither is a scalar multiple of the other; thus these two vectors form a basis for R².
(b) The vector v2 = (0, 8, 0) is not a scalar multiple of v1 = (0, 1, 3), and v3 = (1, 6, 0) is not a linear combination of v1 and v2 since any such linear combination would have 0 in the first component. Thus v1, v2, and v3 are linearly independent, and it follows that these vectors form a basis for R³.
5. (a) The matrix A having v1, v2, v3 as its column vectors has det(A) ≠ 0; thus the column vectors are linearly independent and hence form a basis for R³.
6. (a) The matrix A having v1, v2, and v3 as its column vectors has det(A) = 0; thus the column vectors are linearly dependent and hence do not form a basis for R³.
(b) The matrix A having v1, v2, and v3 as its column vectors has det(A) = 4 ≠ 0; thus the column vectors are linearly independent and hence form a basis for R³.
7. (a) Any four vectors in R³ are linearly dependent; thus the vectors v1, v2, v3, and v4 do not form a basis for R³.
(b) The vector equation v = c1v1 + c2v2 + c3v3 + c4v4 is equivalent to a linear system in c1, c2, c3, c4; setting c4 = t and solving yields a one-parameter family of solutions, so v can be expressed as a linear combination of v1, v2, v3, v4 for each value of t.
(b) The vector equation v = c1v1 + c2v2 + c3v3 + c4v4 is equivalent to a linear system whose reduced row echelon form is

[1 0 0 0 | 3; 0 1 0 2 | -1; 0 0 1 1 | -1]

Thus, setting c4 = t, a general solution of the system is given by

c1 = 3, c2 = -1 - 2t, c3 = -1 - t, c4 = t

where -∞ < t < ∞. Thus

v = 3v1 - (1 + 2t)v2 - (1 + t)v3 + tv4

for any value of t. For example, corresponding to t = 0, t = -1, and t = -1/2, we have v = 3v1 - v2 - v3, v = 3v1 + v2 - v4, and v = 3v1 - (1/2)v3 - (1/2)v4.
9. The vector v2 = (1, -2, -2) is not a scalar multiple of v1 = (-1, 2, 3); thus S = {v1, v2} is a linearly independent set. Let A be the matrix having v1, v2, and e1 as its columns, i.e.,

A = [-1 1 1; 2 -2 0; 3 -2 0]

Then det(A) = 2 ≠ 0, and it follows from this that S' = {v1, v2, e1} is a linearly independent set and hence a basis for R³. [Similarly, it can be shown that {v1, v2, e2} is a basis for R³. On the other hand, the set {v1, v2, e3} is linearly dependent and thus not a basis for R³.]
10. The vector v2 = (3, 1, -2) is not a scalar multiple of v1 = (1, -1, 0); thus S = {v1, v2} is a linearly independent set. Let A be the matrix having v1, v2, and e1 as its columns, i.e.,

A = [1 3 1; -1 1 0; 0 -2 0]

Then det(A) = 2 ≠ 0, and it follows from this that S' = {v1, v2, e1} is a linearly independent set and hence a basis for R³. [Similarly, it can be shown that {v1, v2, e2} and {v1, v2, e3} are bases for R³.]
11.
12. We have v2 = 2v1, and if we delete the vector v2 from the set S then the remaining vectors are linearly independent, since the determinant of the matrix having them as columns is 27 ≠ 0. Thus S' = {v1, v3, v4} is a basis for R³.
13. The vector equation v = c1v1 + c2v2 + c3v3 is equivalent to a linear system which, from back substitution, has the solution c3 = 1, c2 = 5 - c3 = 4, c1 = 2 - c2 - c3 = -3. Thus v = -3v1 + 4v2 + v3.
14. Since the determinant of the matrix having v1, v2, v3 as its column vectors is -2 ≠ 0, these vectors form a basis for R³. The vector equation v = c1v1 + c2v2 + c3v3 is equivalent to a linear system whose solution gives the required coefficients.
15. (a) Since u·n = (1)(2) + (2)(0) + (-1)(1) = 1 ≠ 0, the vector u is not orthogonal to n. Thus V is not contained in W.
(b) The line V is parallel to u = (2, -1, 1), and the plane W has normal vector n = (1, 3, 1). Since u·n = (2)(1) + (-1)(3) + (1)(1) = 0, the vector u is orthogonal to n. Thus V is contained in W.
16. (a) Since u·n = (1)(2) + (1)(1) + (3)(-1) = 0, the vector u is orthogonal to n. Thus V is contained in W.
(b) The line V is parallel to u = (1, 2, -5), and the plane W has normal vector n = (3, 2, 1). Since u·n = (1)(3) + (2)(2) + (-5)(1) = 2 ≠ 0, the vector u is not orthogonal to n. Thus V is not contained in W.
17. (a) The vector equation c1(1,1,0) + c2(0,1,2) + c3(2,1,3) = (3, 2, -1) is equivalent to the linear system with augmented matrix

[1 0 2 | 3; 1 1 1 | 2; 0 2 3 | -1]

Solving gives c1 = 13/5, c2 = -4/5, c3 = 1/5. Thus (3, 2, -1) = (13/5)(1,1,0) - (4/5)(0,1,2) + (1/5)(2,1,3) and, by linearity, it follows that

T(3, 2, -1) = (13/5)(2,1,-1) - (4/5)(1,0,2) + (1/5)(4,1,0) = (1/5)(26, 14, -21)

(b) The vector equation c1(1,1,0) + c2(0,1,2) + c3(2,1,3) = (a, b, c) is equivalent to the linear system with augmented matrix

[1 0 2 | a; 1 1 1 | b; 0 2 3 | c]

Thus (a, b, c) = ((1/5)a + (4/5)b - (2/5)c)(1,1,0) + (-(3/5)a + (3/5)b + (1/5)c)(0,1,2) + ((2/5)a - (2/5)b + (1/5)c)(2,1,3) and

T(a, b, c) = ((1/5)a + (4/5)b - (2/5)c)(2,1,-1) + (-(3/5)a + (3/5)b + (1/5)c)(1,0,2) + ((2/5)a - (2/5)b + (1/5)c)(4,1,0)
           = ((7/5)a + (3/5)b + (1/5)c, (3/5)a + (2/5)b - (1/5)c, -(7/5)a + (2/5)b + (4/5)c)
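A sketch checking the closed-form coordinates in (b) against a direct solve; the random test vector is my own addition:

```python
import numpy as np

V = np.column_stack([[1, 1, 0], [0, 1, 2], [2, 1, 3]]).astype(float)  # v1, v2, v3 as columns

rng = np.random.default_rng(0)
abc = rng.standard_normal(3)
a, b, c = abc

c_direct = np.linalg.solve(V, abc)
c_formula = np.array([(a + 4*b - 2*c)/5, (-3*a + 3*b + c)/5, (2*a - 2*b + c)/5])
print(np.allclose(c_direct, c_formula))   # True
```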
18. (a) The vector equation c1(1,1,1) + c2(1,1,0) + c3(1,0,0) = (4, 3, 0) is equivalent to a linear system which has solution c1 = 0, c2 = 3, c3 = 1. Thus (4, 3, 0) = 0(1,1,1) + 3(1,1,0) + 1(1,0,0) and, by linearity, we have T(4, 3, 0) = 0·T(1,1,1) + 3·T(1,1,0) + 1·T(1,0,0).
(b) The vector equation c1(1,1,1) + c2(1,1,0) + c3(1,0,0) = (a, b, c) is equivalent to the linear system

[1 1 1; 1 1 0; 1 0 0][x1; x2; x3] = [a; b; c]

which has solution x1 = c, x2 = b - c, x3 = a - b. Thus (a, b, c) = c(1,1,1) + (b - c)(1,1,0) + (a - b)(1,0,0) and T(a, b, c) = c·T(1,1,1) + (b - c)·T(1,1,0) + (a - b)·T(1,0,0).
19. Since det[1 3 4; -7 -2 -3; -5 8 5] = -100 ≠ 0, the vectors v1 = (1, -7, -5), v2 = (3, -2, 8), v3 = (4, -3, 5) form a basis for R³. The vector equation c1v1 + c2v2 + c3v3 = x is equivalent to the linear system with augmented matrix

[1 3 4 | x; -7 -2 -3 | y; -5 8 5 | z]

and the reduced row echelon form of this matrix is

[1 0 0 | -(7/50)x - (17/100)y + (1/100)z; 0 1 0 | -(1/2)x - (1/4)y + (1/4)z; 0 0 1 | (33/50)x + (23/100)y - (19/100)z]

Thus a general vector x = (x, y, z) can be expressed in terms of the basis vectors v1, v2, v3 as

x = (-(7/50)x - (17/100)y + (1/100)z)v1 + (-(1/2)x - (1/4)y + (1/4)z)v2 + ((33/50)x + (23/100)y - (19/100)z)v3
D2. No. If S = {v1, v2, ..., vn} is a linearly dependent set in Rⁿ, then S is not a spanning set for Rⁿ; thus it is not possible to create a basis by forming linear combinations of the vectors in S.
D3. Each such operator corresponds to (and is determined by) a permutation of the vectors in the basis B. Thus there are a total of n! such operators.
D4. Let A be the matrix having the vectors v1 and v2 as its columns. Then

det(A) = (sin²α - sin²β) - (cos²α - cos²β) = -cos 2α + cos 2β

and det(A) ≠ 0 if and only if cos 2α ≠ cos 2β, i.e., if and only if α ≠ ±β + kπ where k = 0, 1, 2, .... For these values of α and β, the vectors v1 and v2 form a basis for R².
D5. Suppose W is a subspace of Rⁿ and dim(W) = k. If S = {w1, w2, ..., wj} is a spanning set for W, then either S is a basis for W (in which case S contains exactly k vectors) or, from Theorem 7.2.2, a basis for W can be obtained by removing appropriate vectors from S. Thus the number of elements in a spanning set must be at least k, and the smallest possible number is k.
P2. Let {v1, v2, ..., vn} be a basis for Rⁿ and, for k any integer between 1 and n, let V = span{v1, v2, ..., vk}. Then S = {v1, v2, ..., vk} is a basis for V and so dim(V) = k. The subspace V = {0} has dimension 0.
P3. Let S = {v1, v2, ..., vn}. Since every vector in Rⁿ can be written as a linear combination of vectors in S, we have span(S) = Rⁿ. Moreover, from the uniqueness, if c1v1 + c2v2 + ··· + cnvn = 0 then c1 = c2 = ··· = cn = 0. Thus the vectors v1, v2, ..., vn span Rⁿ and are linearly independent, i.e., S = {v1, v2, ..., vn} is a basis for Rⁿ.
P4. Since we know that dim(Rⁿ) = n, it suffices to show that the vectors T(v1), T(v2), ..., T(vn) are linearly independent. This follows from the fact that if c1T(v1) + c2T(v2) + ··· + cnT(vn) = 0 then

T(c1v1 + c2v2 + ··· + cnvn) = c1T(v1) + c2T(v2) + ··· + cnT(vn) = 0

and so, since T is one-to-one, we must have c1v1 + c2v2 + ··· + cnvn = 0. Since v1, v2, ..., vn are linearly independent it follows from this that c1 = c2 = ··· = cn = 0. Thus T(v1), T(v2), ..., T(vn) are linearly independent.
P5. Since B = {v1, v2, ..., vn} is a basis for Rⁿ, every vector x in Rⁿ can be expressed as a linear combination x = c1v1 + c2v2 + ··· + cnvn for exactly one choice of scalars c1, c2, ..., cn. Thus it makes sense to define a transformation T: Rⁿ → Rⁿ by setting

T(x) = T(c1v1 + c2v2 + ··· + cnvn) = c1w1 + c2w2 + ··· + cnwn

It is easy to check that T is linear. For example, if x = Σ cjvj and y = Σ djvj (sums over j = 1, ..., n), then

T(x + y) = T(Σ (cj + dj)vj) = Σ (cj + dj)wj = Σ cjwj + Σ djwj = Tx + Ty

and so T is additive. Finally, the transformation T has the property that Tvj = wj for each j = 1, ..., n, and it is clear from the defining formula that T is uniquely determined by this property.
P6. (a) Since {u1, u2, u3} has the correct number of elements, we need only show that the vectors are linearly independent. Suppose c1, c2, c3 are scalars such that c1u1 + c2u2 + c3u3 = 0. Then (c1 + c2 + c3)v1 + (c2 + c3)v2 + c3v3 = 0 and, since {v1, v2, v3} is a linearly independent set, we must have c1 + c2 + c3 = c2 + c3 = c3 = 0. It follows that c1 = c2 = c3 = 0, and this shows that u1, u2, u3 are linearly independent.
(b) If {v1, v2, ..., vn} is a basis for Rⁿ, then so is {u1, u2, ..., un} where u1 = v1, u2 = v1 + v2, ..., un = v1 + v2 + ··· + vn.
P7. Suppose x is an eigenvector of A. Then x ≠ 0, and Ax = λx for some scalar λ. It follows that span{x, Ax} = span{x, λx} = span{x} and so span{x, Ax} has dimension 1.
Conversely, suppose that span{x, Ax} has dimension 1. Then the vectors x ≠ 0 and Ax are linearly dependent; thus there exist scalars c1 and c2, not both zero, such that c1x + c2Ax = 0. We note further that c2 ≠ 0, for if c2 = 0 then since x ≠ 0 we would have c1 = 0 also. Thus Ax = λx where λ = -c1/c2.
P8. Suppose S = {v1, v2, ..., vk} is a basis for V, where V ⊆ W and dim(V) = dim(W). Then S is a linearly independent set in W, and it follows that S must be a basis for W. Otherwise, from Theorem 7.2.2, a basis for W could be obtained by adding additional vectors from W to S, and this would violate the assumption that dim(V) = dim(W). Finally, since S is a basis for W and S ⊂ V, we must have W = span(S) ⊆ V and so W = V.
EXERCISE SET 7.3
4. A general solution of the corresponding homogeneous system is given by x = (1/2)t, y = -(11/2)t, z = t, or (x, y, z) = t(1/2, -11/2, 1). Thus S⊥ is the line through the origin that is parallel to the vector (1/2, -11/2, 1).
Alternatively, a vector that is orthogonal to both v1 and v2 is

w = v1 × v2 = det[i j k; 2 0 -1; 1 1 5] = i - 11j + 2k = (1, -11, 2)

Note the vector w is parallel to the one obtained in our first solution. Thus S⊥ is the line through the origin that is parallel to (1, -11, 2) or, equivalently, parallel to (1/2, -11/2, 1).
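A quick check of the cross-product computation:

```python
import numpy as np

v1 = np.array([2., 0., -1.])
v2 = np.array([1., 1., 5.])
w = np.cross(v1, v2)
print(w)                # [  1. -11.   2.]
print(w @ v1, w @ v2)   # 0.0 0.0: w is orthogonal to both vectors
```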
5. The line y = 2x corresponds to vectors of the form u = t(1, 2), i.e., W = span{(1, 2)}. Thus W⊥ corresponds to the line y = -(1/2)x or, equivalently, to vectors of the form w = s(2, -1).
6. Here W⊥ is the line which is normal to the plane x - 2y - 3z = 0 and passes through the origin. A normal vector to this plane is n = (1, -2, -3); thus parametric equations for W⊥ are given by: x = t, y = -2t, z = -3t.
7. The line W corresponds to scalar multiples of the vector u = (2, -5, 4); thus a vector w = (x, y, z) is in W⊥ if and only if u·w = 2x - 5y + 4z = 0. Parametric equations for this plane are: x = (5/2)s - 2t, y = s, z = t.
9. Let A be the matrix having the given vectors as its rows. This matrix can be row reduced to

U = [1 0 -16; 0 1 -19; 0 0 0]

Thus the vectors w1 = (1, 0, -16) and w2 = (0, 1, -19) form a basis for the row space of A or, equivalently, for the subspace W spanned by the given vectors.
We have W⊥ = row(A)⊥ = null(A) = null(U). Thus, from the above, we conclude that W⊥ consists of all vectors of the form x = t(16, 19, 1), i.e., the vector u = (16, 19, 1) forms a basis for W⊥.
10. Let A be the matrix having the given vectors as its rows. This matrix can be row reduced to

U = [2 0 -1; 0 1 0]

Thus the vectors w1 = (2, 0, -1) and w2 = (0, 1, 0) form a basis for the row space of A or, equivalently, for the subspace W spanned by the given vectors.
We have W⊥ = row(A)⊥ = null(A) = null(U). Thus, from the above, we conclude that W⊥ consists of all vectors of the form x = ((1/2)t, 0, t), i.e., the vector u = (1/2, 0, 1) forms a basis for W⊥.
11. Let A be the matrix having the given vectors as its rows. The reduced row echelon form of A is

R = [1 0 0 0; 0 1 0 0; 0 0 1 1; 0 0 0 0]

Thus the vectors w1 = (1, 0, 0, 0), w2 = (0, 1, 0, 0), and w3 = (0, 0, 1, 1) form a basis for the row space of A or, equivalently, for the space W spanned by the given vectors.
We have W⊥ = row(A)⊥ = null(A) = null(R). Thus, from the above, we conclude that W⊥ consists of all vectors of the form x = (0, 0, -t, t), i.e., the vector u = (0, 0, -1, 1) forms a basis for W⊥.
12. Let A be the matrix having the given vectors as its rows. The reduced row echelon form of A is

R = [1 0 0 0; 0 1 0 3/2; 0 0 1 0; 0 0 0 0]

Thus the vectors w1 = (1, 0, 0, 0), w2 = (0, 1, 0, 3/2), and w3 = (0, 0, 1, 0) form a basis for the row space of A or, equivalently, for the space W spanned by the given vectors.
We have W⊥ = row(A)⊥ = null(A) = null(R). Thus, from the above, we conclude that W⊥ consists of all vectors of the form x = (0, -(3/2)t, 0, t), i.e., the vector u = (0, -3/2, 0, 1) forms a basis for W⊥.
13. Let A be the matrix having the given vectors as its rows. The reduced row echelon form of A is

R = [1 0 0 2 0; 0 1 0 3 0; 0 0 1 4 0; 0 0 0 0 1]

Thus the vectors w1 = (1, 0, 0, 2, 0), w2 = (0, 1, 0, 3, 0), w3 = (0, 0, 1, 4, 0), and w4 = (0, 0, 0, 0, 1) form a basis for the row space of A or, equivalently, for the space W spanned by the given vectors.
We have W⊥ = row(A)⊥ = null(A) = null(R). Thus, from the above, we conclude that W⊥ consists of all vectors of the form x = (-2t, -3t, -4t, t, 0), i.e., the vector u = (-2, -3, -4, 1, 0) forms a basis for W⊥.
14. Let A be the matrix having the given vectors as its rows. The reduced row echelon form of A is

R = [1 0 0 0 -243; 0 1 0 0 57; 0 0 1 0 -7; 0 0 0 1 2; 0 0 0 0 0]

Thus the vectors w1 = (1, 0, 0, 0, -243), w2 = (0, 1, 0, 0, 57), w3 = (0, 0, 1, 0, -7), w4 = (0, 0, 0, 1, 2) form a basis for the row space of A or, equivalently, for the space W spanned by the given vectors.
We have W⊥ = row(A)⊥ = null(A) = null(R). Thus, from the above, we conclude that W⊥ consists of all vectors of the form x = (243t, -57t, 7t, -2t, t), i.e., the vector u = (243, -57, 7, -2, 1) forms a basis for W⊥.
15. In Exercise 9 we found that the vector u = (16, 19, 1) forms a basis for W⊥; thus W = W⊥⊥ consists of the set of all vectors which are orthogonal to u, i.e., vectors x = (x, y, z) satisfying 16x + 19y + z = 0.
16. In Exercise 10 we found that the vector u = (1/2, 0, 1) forms a basis for W⊥; thus W = W⊥⊥ consists of the set of all vectors which are orthogonal to u, i.e., vectors x = (x, y, z) satisfying x + 2z = 0.
17. In Exercise 11 we found that the vector u = (0, 0, -1, 1) forms a basis for W⊥; thus W = W⊥⊥ consists of the set of all vectors which are orthogonal to u, i.e., vectors x = (x1, x2, x3, x4) satisfying -x3 + x4 = 0.
18. In Exercise 12 we found that the vector u = (0, -3/2, 0, 1) forms a basis for W⊥; thus W = W⊥⊥ consists of the set of all vectors which are orthogonal to u, i.e., vectors x = (x1, x2, x3, x4) satisfying 3x2 - 2x4 = 0.
19. Solution 1. Let A be the matrix having the given vectors as its columns. Then a (column) vector b is a linear combination of v1, v2, v3 if and only if the linear system Ax = b is consistent. The augmented matrix of this system is

[1 5 7 | b1; -1 -4 -6 | b2; 3 -4 2 | b3]

and a row echelon form for this matrix is

[1 5 7 | b1; 0 1 1 | b2 + b1; 0 0 0 | 16b1 + 19b2 + b3]

From this we conclude that b = (b1, b2, b3) lies in the space spanned by the given vectors if and only if 16b1 + 19b2 + b3 = 0.
Solution 2. The matrix having rows v1, v2, v3, b can be row reduced until its nonzero rows are (1, 0, -16) and (0, 1, -19) together with a row whose only nonzero entry is 16b1 + 19b2 + b3. From this we conclude that W = span{v1, v2, v3} has dimension 2, and that b = (b1, b2, b3) is in W if and only if 16b1 + 19b2 + b3 = 0.
Solution 3. In Exercise 9 we found that the vector u = (16, 19, 1) forms a basis for W⊥. Thus b = (b1, b2, b3) is in W = W⊥⊥ if and only if u·b = 0, i.e., if and only if 16b1 + 19b2 + b3 = 0.
20. Solution 1. Let A be the matrix having the given vectors as its columns. Then a (column) vector b is a linear combination of v1, v2, v3 if and only if the linear system Ax = b is consistent. Row reducing the augmented matrix of this system shows that it is consistent if and only if b1 + 2b3 = 0; thus a vector b = (b1, b2, b3) lies in the space spanned by the given vectors if and only if b1 + 2b3 = 0.
Solution 2. Row reducing the matrix having rows v1, v2, v3, b shows that W = span{v1, v2, v3} has dimension 2 and that b = (b1, b2, b3) is in W if and only if b1 + 2b3 = 0.
Solution 3. In Exercise 10 we found that the vector u = (1/2, 0, 1) forms a basis for W⊥. Thus b = (b1, b2, b3) is in W = W⊥⊥ if and only if u·b = 0, i.e., if and only if b1 + 2b3 = 0.
21. Solution 1. Let A be the matrix having the given vectors as its columns. Then a (column) vector b is a linear combination of v1, v2, v3, v4 if and only if the linear system Ax = b is consistent. Row reducing the augmented matrix of this system produces a zero row whose augmented entry is -b3 + b4; thus b = (b1, b2, b3, b4) lies in the space spanned by the given vectors if and only if -b3 + b4 = 0.
Solution 2. Row reducing the matrix having rows v1, v2, v3, v4, b shows that W = span{v1, v2, v3, v4} has dimension 3 and that b = (b1, b2, b3, b4) is in W if and only if -b3 + b4 = 0.
Solution 3. In Exercise 11 we found that the vector u = (0, 0, -1, 1) forms a basis for W⊥. Thus b = (b1, b2, b3, b4) is in W = W⊥⊥ if and only if u·b = 0, i.e., if and only if -b3 + b4 = 0.
22. Solution 1. Let A be the matrix having the given vectors as its columns. Then a (column) vector b is a linear combination of these vectors if and only if the linear system Ax = b is consistent. Row reducing the augmented matrix of this system shows that it is consistent if and only if 3b2 - 2b4 = 0.
Solution 2. Row reducing the matrix having rows v1, v2, v3, v4, b shows that W = span{v1, v2, v3, v4} has dimension 3 and that b = (b1, b2, b3, b4) is in W if and only if 3b2 - 2b4 = 0.
Solution 3. In Exercise 12 we found that the vector u = (0, -3/2, 0, 1) forms a basis for W⊥. Thus b = (b1, b2, b3, b4) is in W = W⊥⊥ if and only if u·b = -(3/2)b2 + b4 = 0, i.e., if and only if 3b2 - 2b4 = 0.
23. The augmented matrix [v1 v2 v3 v4 | b1 | b2 | b3] is row reduced to reduced row echelon form; the columns corresponding to b1 and b2 are consistent with the v-columns, while the b3-column produces a pivot in an otherwise zero row. From this we conclude that the vectors b1 and b2 lie in span{v1, v2, v3, v4}, but b3 does not.
24. The same procedure applied to the second set of data leads to the same conclusion: the vectors b1 and b2 lie in span{v1, v2, v3, v4}, but b3 does not.
25. The reduced row echelon form of A is

R = [1 3 0 4 0 0; 0 0 1 2 0 0; 0 0 0 0 1 0; 0 0 0 0 0 1]

Thus the vectors r1 = (1, 3, 0, 4, 0, 0), r2 = (0, 0, 1, 2, 0, 0), r3 = (0, 0, 0, 0, 1, 0), r4 = (0, 0, 0, 0, 0, 1) form a basis for the row space of A. We also conclude from an inspection of R that the null space of A (the solutions of Ax = 0) consists of all vectors of the form x = s(-3, 1, 0, 0, 0, 0) + t(-4, 0, -2, 1, 0, 0).
26. Similarly, the reduced row echelon form of A has pivots in its first three columns; back substitution with the free variables x4 = 1, x5 = 0 and x4 = 0, x5 = 1 produces vectors n1 = (·, ·, ·, 1, 0) and n2 = (·, ·, ·, 0, 1) (with fractional leading entries) that form a basis for the null space of A. It is easy to check that ri·nj = 0 for all i, j; thus row(A) and null(A) are orthogonal subspaces of R⁵.
27. The vectors r1 = (1, 3, 0, 4, 0, 0), r2 = (0, 0, 1, 2, 0, 0), r3 = (0, 0, 0, 0, 1, 0), r4 = (0, 0, 0, 0, 0, 1) form a basis for the row space of A (see Exercise 25).
28. The vectors r1 = (1, 0, 0, ·, ·), r2 = (0, 1, 0, ·, ·), and r3 = (0, 0, 1, ·, ·) form a basis for the row space of A (see Exercise 26).
29. The reduced row echelon forms of A and B are both equal to [1 0 3 0; 0 1 2 0; 0 0 0 1]. Thus the vectors r1 = (1, 0, 3, 0), r2 = (0, 1, 2, 0), r3 = (0, 0, 0, 1) form a basis for both of the row spaces. It follows that row(A) = row(B).
30. The reduced row echelon forms of A and B are [1 0 0 5; 0 1 0 1/2; 0 0 1 3; 0 0 0 0] and [1 0 0 5; 0 1 0 1/2; 0 0 1 3] respectively. Thus the vectors r1 = (1, 0, 0, 5), r2 = (0, 1, 0, 1/2), r3 = (0, 0, 1, 3) form a basis for both of the row spaces. It follows that row(A) = row(B).
31. Let B be the matrix having the given vectors as its rows. From the reduced row echelon form of B it follows that a general solution of Bx = 0 is given by x = s(1, 4, 1, 0) + t(-2, 0, 0, 1), where -∞ < s, t < ∞. Thus the vectors w1 = (1, 4, 1, 0) and w2 = (-2, 0, 0, 1) form a basis for the null space of B, and so the matrix

A = [1 4 1 0; -2 0 0 1]

has the property that null(A) = row(A)⊥ = null(B)⊥ = row(B)⊥⊥ = row(B).
32. (a) If A = [1 0 0; 0 1 0; 0 0 0] and x = [x; y; z], then Ax = 0 if and only if x = y = 0. Thus the null space of A corresponds to points on the z-axis. On the other hand, the column space of A consists of all vectors of the form (a, b, 0), i.e., the xy-plane.
(b) One checks directly that the matrix B given there has the specified null space and column space.
D1. (a) False. This statement is true if and only if the nonzero rows of A are linearly independent, i.e., if there are no additional zero rows in an echelon form.
(b) True. If E is an elementary matrix, then E is invertible and so EAx = 0 if and only if Ax = 0.
(c) True. If A has rank n, then there can be no zero rows in an echelon form for A; thus the reduced row echelon form is the identity matrix.
(d) False. For example, if m = n and A is invertible, then row(A) = Rⁿ and null(A) = {0}.
(e) True. This follows from the fact (Theorem 7.3.3) that S⊥ is a subspace of Rⁿ.
D2. (a) True. If A is invertible, then the rows of A and the columns of A each form a basis for Rⁿ; thus row(A) = col(A) = Rⁿ.
(b) False. In fact, the opposite is true: if W is a subspace of V, then V⊥ is a subspace of W⊥, since every vector that is orthogonal to V will also be orthogonal to W.
(c) False. The specified condition implies only that row(A) ⊆ row(B).
(e) True. This is in fact true for any invertible matrix E. The rows of EA are linear combinations of the rows of A; thus it is always true that row(EA) ⊆ row(A). If E is invertible, then A = E⁻¹(EA) and so we also have row(A) ⊆ row(EA).
D3. If null(A) is the line 3x - 5y = 0, then row(A) = null(A)⊥ is the line 5x + 3y = 0. Thus each row of A must be a scalar multiple of the vector (3, -5), i.e., A is of the form A = [3s -5s; 3t -5t].
D4. The null space of A corresponds to the kernel of TA, and the column space of A corresponds to the range of TA.
D5. If W = a⊥, then W⊥ = a⊥⊥ = span{a} is the 1-dimensional subspace spanned by the vector a.
D6. (a) If null(A) is a line through the origin, then row(A) = null(A)⊥ is the plane through the origin that is perpendicular to that line.
(b) If col(A) is a line through the origin, then null(Aᵀ) = col(A)⊥ is the plane through the origin that is perpendicular to that line.
D7. The first two matrices are invertible; thus in each case the null space is {0}. The null space of the third matrix is the line 3x + y = 0, and the null space of the last matrix (the zero matrix) is all of R².
D8. (a) Since S has equation y = 3x, S⊥ has equation y = -(1/3)x, and S⊥⊥ = S has equation y = 3x.
(b) If S = {(1, 2)}, then span(S) has equation y = 2x; thus S⊥ has equation y = -(1/2)x and S⊥⊥ = span(S) has equation y = 2x.
D9. No, this is not possible. The row space of an invertible n×n matrix is all of Rⁿ since its rows form a basis for Rⁿ. On the other hand, the row space of a singular matrix is a proper subspace of Rⁿ since its rows are linearly dependent and do not span all of Rⁿ.
P2. The row vectors of an invertible matrix are linearly independent and so, since there are exactly n of them, they form a basis for Rⁿ.
P3. If P is invertible, then (PA)x = P(Ax) = 0 if and only if Ax = 0; thus the matrices PA and A have the same null space, and so nullity(PA) = nullity(A). From Theorem 7.3.8, it follows that PA and A also have the same row space. Thus rank(PA) = dim(row(PA)) = dim(row(A)) = rank(A).
P4. From Theorem 7.3.4 we have S⊥ = span(S)⊥, and (span(S)⊥)⊥ = span(S) since span(S) is a subspace. Thus (S⊥)⊥ = (span(S)⊥)⊥ = span(S).
P5. We have AA⁻¹ = [ri(A)·cj(A⁻¹)] = I = [δij]. In particular, if i ∈ {1, 2, ..., k} and j ∈ {k+1, k+2, ..., n}, then i < j and so ri(A)·cj(A⁻¹) = δij = 0. This shows that the first k rows of A and the last n - k columns of A⁻¹ are orthogonal.
EXERCISE SET 7.4
The reduced row echelon form of A is [1 0 -16; 0 1 -19; 0 0 0].
Thus rank(A) = 2, and a general solution of Ax = 0 is given by x = (16t, 19t, t) = t(16, 19, 1). It follows that nullity(A) = 1. Thus rank(A) + nullity(A) = 2 + 1 = 3, the number of columns of A.
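This rank-nullity bookkeeping is easy to verify numerically. A minimal NumPy sketch, using a hypothetical matrix whose reduced row echelon form matches the one above (the exercise's original A is not reproduced here):

```python
import numpy as np

# Hypothetical A: first two rows as in the reduced form above, third row their sum.
A = np.array([[1.0, 0.0, -16.0],
              [0.0, 1.0, -19.0],
              [1.0, 1.0, -35.0]])

rank = np.linalg.matrix_rank(A)
nullity = A.shape[1] - rank
print(rank, nullity, rank + nullity)      # 2 1 3

# The null space is spanned by (16, 19, 1): A maps it to zero.
print(A @ np.array([16.0, 19.0, 1.0]))    # [0. 0. 0.]
```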
The reduced row echelon form of A is [1 0 -1/3; 0 0 0; 0 0 0].
Thus rank(A) = 1, and a general solution of Ax = 0 is given by x = (t/3, s, t) = s(0, 1, 0) + t(1/3, 0, 1). It follows that nullity(A) = 2. Thus rank(A) + nullity(A) = 1 + 2 = 3, the number of columns of A.
The reduced row echelon form of A has two nonzero rows. Thus rank(A) = 2, and a general solution of Ax = 0 has the form x = s(-1, -1, 1, 0) + t v2 for a second null-space basis vector v2. It follows that nullity(A) = 2. Thus rank(A) + nullity(A) = 2 + 2 = 4, the number of columns of A.
The reduced row echelon form of A is [1 0 1 2 1; 0 1 1 1 2; 0 0 0 0 0]; thus rank(A) = 2, and a general solution of Ax = 0 is
x = r(-1, -1, 1, 0, 0) + s(-2, -1, 0, 1, 0) + t(-1, -2, 0, 0, 1)
It follows that nullity(A) = 3. Thus rank(A) + nullity(A) = 2 + 3 = 5.
The reduced row echelon form of A has two nonzero rows; thus rank(A) = 2 and nullity(A) = 6 - 2 = 4, so that rank(A) + nullity(A) = 2 + 4 = 6, the number of columns of A.
7. (a) If A is a 5 x 8 matrix having rank 3, then its nullity must be 8 - 3 = 5. Thus there are 3 pivot variables and 5 free parameters in a general solution of Ax = 0.
(b) If A is a 7 x 4 matrix having nullity 2, then its rank must be 4 - 2 = 2. Thus there are 2 pivot variables and 2 free parameters in a general solution of Ax = 0.
(c) If A is a 6 x 6 matrix whose row echelon forms have 2 nonzero rows, then A has rank 2 and nullity 6 - 2 = 4. Thus there are 2 pivot variables and 4 free parameters in a general solution of Ax = 0.
8. (a) If A is a 7 x 9 matrix having rank 5, then its nullity must be 9 - 5 = 4. Thus there are 5 pivot variables and 4 free parameters in a general solution of Ax = 0.
(b) If A is an 8 x 6 matrix having nullity 3, then its rank must be 6 - 3 = 3. Thus there are 3 pivot variables and 3 free parameters in a general solution of Ax = 0.
(c) If A is a 7 x 7 matrix whose row echelon forms have 3 nonzero rows, then A has rank 3 and nullity 7 - 3 = 4. Thus there are 3 pivot variables and 4 free parameters in a general solution of Ax = 0.
9. (a) If A is a 5 x 3 matrix, then the largest possible value for rank(A) is 3 and the smallest possible value for nullity(A) is 0.
(b) If A is a 3 x 5 matrix, then the largest possible value for rank(A) is 3 and the smallest possible value for nullity(A) is 2.
(c) If A is a 4 x 4 matrix, then the largest possible value for rank(A) is 4 and the smallest possible value for nullity(A) is 0.
10. (a) If A is a 6 x 4 matrix, then the largest possible value for rank(A) is 4 and the smallest possible value for nullity(A) is 0.
(b) If A is a 2 x 6 matrix, then the largest possible value for rank(A) is 2 and the smallest possible value for nullity(A) is 4.
(c) If A is a 5 x 5 matrix, then the largest possible value for rank(A) is 5 and the smallest possible value for nullity(A) is 0.
11. Let A be the 2 x 4 matrix having v1 and v2 as its rows. Then the reduced row echelon form of A has two nonzero rows, and a general solution of Ax = 0 has the form x = s v3 + t v4. Thus the vectors v3 and v4 form a basis for the orthogonal complement span{v1, v2}⊥.
12. Let A be the 3 x 5 matrix having v1, v2, and v3 as its rows. Then the reduced row echelon form of A has three nonzero rows, and a general solution of Ax = 0 has the form x = s v4 + t v5. Thus the vectors v4 and v5 form a basis for the orthogonal complement span{v1, v2, v3}⊥.
13. Each of these matrices has rank 1, since every row is a scalar multiple of a single nonzero row vector; accordingly each can be factored as an outer product A = uv^T of a column vector u and a row vector v^T. For example, in part (a) one may take v^T = [1 -7].
15. The matrix uu^T is a symmetric matrix.
17. Row reducing A leads to an echelon form whose second row has entries containing the factor t - 1 and whose third row has a single nonzero entry (2 + t)(1 - t).
If t = 1, then the latter has only one nonzero row and so rank(A) = 1. If t = -2, then there are two nonzero rows and rank(A) = 2. If t is not 1 or -2, then there are three nonzero rows and rank(A) = 3.
18. The matrix A can be row reduced to an echelon form whose third row has a single nonzero entry -(t - 1)(2t - 3).
If t = 1 or t = 3/2, then the third row of the latter matrix is a zero row and rank(A) = 2. For all other values of t there are no zero rows, and rank(A) = 3.
thus the vectors w1 = (1, 0, 5) and w2 = (0, 1, -4) form a basis for W = row(A) = row(R). We have
21. The subspace W, consisting of all vectors of the form x = t(2, -1, -3), has dimension 1. The subspace W⊥ is the hyperplane consisting of all vectors x = (x, y, z) which are orthogonal to (2, -1, -3), i.e., which satisfy the equation
2x - y - 3z = 0
A general solution of the latter involves two parameters, so that
dim(W) + dim(W⊥) = 1 + 2 = 3
22. If u and v are nonzero column vectors and A = uv^T, then Ax = (uv^T)x = u(v^T x) = (v · x)u. Thus Ax = 0 if and only if v · x = 0, i.e., if and only if x is orthogonal to v. This shows that ker(T) = v⊥. Similarly, the range of T consists of all vectors of the form Ax = (v · x)u, and so ran(T) = span{u}.
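A small NumPy check of this rank-1 structure (the vectors u and v below are arbitrary stand-ins, not the ones from the exercise):

```python
import numpy as np

# Hypothetical nonzero vectors u, v (any nonzero choice works).
u = np.array([1.0, 2.0, 3.0])
v = np.array([2.0, -1.0, 1.0])

A = np.outer(u, v)                  # A = u v^T has rank 1
print(np.linalg.matrix_rank(A))     # 1

# Any x orthogonal to v is sent to 0: here x = (1, 2, 0) satisfies v . x = 0.
x = np.array([1.0, 2.0, 0.0])
print(A @ x)                        # [0. 0. 0.]

# Any other y is sent to the multiple (v . y) u, so ran(T) = span{u}.
y = np.array([1.0, 0.0, 0.0])
print(A @ y, (v @ y) * u)           # both equal (v . y) u
```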
23. (a) If B is obtained from A by changing only one entry, then A - B has only one nonzero entry (hence only one nonzero row), and so rank(A - B) = 1.
(b) If B is obtained from A by changing only one column (or one row), then A - B has only one nonzero column (or row), and so rank(A - B) = 1.
(c) If B is obtained from A in the specified manner, then B - A has rank 1. Thus B - A is of the form uv^T and B = A + uv^T.
we have A^2 x = (u · v)Ax = (u · v)λx. Thus λ^2 = (u · v)λ, and if λ is not 0 it follows that λ = u · v.
DISCUSSION AND DISCOVERY
(c) If A is any square matrix, then I - A fails to be invertible if and only if there is a nonzero vector x such that (I - A)x = 0; this is equivalent to saying that λ = 1 is an eigenvalue of A. Thus if A = uv^T is a rank 1 matrix for which I - A is invertible, we must have u · v ≠ 1 and A^2 ≠ A.
D1. (a) True. For example, if A is m x n where m > n (more rows than columns), then the rows of A form a set of m vectors in R^n and must therefore be linearly dependent. On the other hand, if m < n then the columns of A must be linearly dependent.
(b) False. If the additional row is a linear combination of the existing rows, then the rank will not be increased.
(c) False. For example, if m = 1 then rank(A) = 1 and nullity(A) = n - 1.
(d) True. Such a matrix must have rank less than n; thus nullity(A) = n - rank(A) ≥ 1.
(e) False. If Ax = b is inconsistent for some b, then A is not invertible and so Ax = 0 has nontrivial solutions; thus nullity(A) ≥ 1.
(f) True. We must have rank(A) + nullity(A) = 3; thus it is not possible to have rank(A) and nullity(A) both equal to 1.
D3. If A is a 3 x 5 matrix, then the number of leading 1's in the reduced row echelon form is at most 3, and (assuming A ≠ 0) the number of parameters in a general solution of Ax = 0 is at most 4.
D4. If A is a 5 x 3 matrix, then rank(A) ≤ 3 and so the number of leading 1's in the reduced row echelon form of A is at most 3. Assuming A ≠ 0, we have rank(A) ≥ 1 and so the number of free parameters in a general solution of Ax = 0 is at most 2.
D5. If A is a 3 x 5 matrix, then the possible values for rank(A) are 0 (if A = 0), 1, 2, or 3, and the corresponding values for nullity(A) are 5, 4, 3, or 2. If A is a 5 x 3 matrix, then the possible values for rank(A) are 0, 1, 2, or 3, and the corresponding values for nullity(A) are 3, 2, 1, or 0. If A is a 5 x 5 matrix, then the possible values for rank(A) are 0, 1, 2, 3, 4, or 5, and the corresponding values for nullity(A) are 5, 4, 3, 2, 1, or 0.
D6. Assuming u and v are nonzero vectors, the rank of A = uv^T is 1; thus the nullity is n - 1.
D7. Let A be the standard matrix of T. If ker(T) is a line through the origin, then nullity(A) = 1 and so rank(A) = n - 1. It follows that ran(T) = col(A) has dimension n - 1 and thus is a hyperplane in R^n.
D9. If λ ≠ 0, then the reduced row echelon form of A has three nonzero rows,
and so rank(A) = 3. On the other hand, if λ = 0, then the reduced row echelon form of A has only two nonzero rows and so rank(A) = 2. Thus λ = 0 is the value for which the matrix A has lowest rank.
D10. Let A = [1 0; 0 0] and B = [0 1; 0 0]. Then rank(A) = rank(B) = 1, whereas A^2 = A has rank 1 and B^2 = 0 has rank 0.
Suppose that one of the rows of A is a scalar multiple of the other. Then the same is true of each of the 2 x 2 matrices that appear in (#), and so each of these determinants is equal to zero.
Suppose, conversely, that the condition (#) holds. Without loss of generality we may assume that the first row of A is not a row of zeros, and further (interchanging columns if necessary) that a11 ≠ 0. We then have a22 = (a21/a11)a12 and a23 = (a21/a11)a13. Thus (a21, a22, a23) = (a21/a11)(a11, a12, a13), and so the second row is a scalar multiple of the first row.
P2. If A is of rank 1, then A = xy^T for some (nonzero) column vectors x and y. If, in addition, A is symmetric, then we also have A = A^T = (xy^T)^T = yx^T. From this it follows that
P3. AB = [c1(A) c2(A) ... ck(A)] [r1(B); r2(B); ...; rk(B)] = c1(A)r1(B) + c2(A)r2(B) + ... + ck(A)rk(B), and each of the products ci(A)ri(B) is a rank 1 matrix.
EXERCISE SET 7.5
P4. Since the set V ∪ W contains n vectors, it suffices to show that V ∪ W is a linearly independent set. Suppose then that c1, c2, ..., ck and d1, d2, ..., d(n-k) are scalars with the property that
c1 v1 + c2 v2 + ... + ck vk + d1 w1 + d2 w2 + ... + d(n-k) w(n-k) = 0
Then the vector n = c1 v1 + c2 v2 + ... + ck vk = -(d1 w1 + d2 w2 + ... + d(n-k) w(n-k)) belongs both to V = row(A) and to W = null(A) = row(A)⊥. It follows that n · n = ||n||^2 = 0 and so n = 0. Thus we simultaneously have c1 v1 + c2 v2 + ... + ck vk = 0 and d1 w1 + d2 w2 + ... + d(n-k) w(n-k) = 0. Since the vectors v1, v2, ..., vk are linearly independent it follows that c1 = c2 = ... = ck = 0, and since w1, w2, ..., w(n-k) are linearly independent it follows that d1 = d2 = ... = d(n-k) = 0. This shows that V ∪ W is a linearly independent set.
P5. From the inequality rank(A) + rank(B) - n ≤ rank(AB) ≤ rank(A) it follows that
n - rank(A) ≤ n - rank(AB) ≤ 2n - rank(A) - rank(B)
which, using Theorem 7.4.1, is the same as
nullity(A) ≤ nullity(AB) ≤ nullity(A) + nullity(B)
Similarly, nullity(B) ≤ nullity(AB) ≤ nullity(A) + nullity(B).
P6. Suppose A = I + uv^T, where v^T u ≠ 0. Then
2. The reduced row echelon form of A has three nonzero rows, and the reduced row echelon form of A^T likewise has three nonzero rows. Thus dim(row(A)) = 3 and dim(col(A)) = dim(row(A^T)) = 3. It follows that dim(null(A)) = 5 - 3 = 2, and dim(null(A^T)) = 4 - 3 = 1. Since dim(null(A)) = 2, there are 2 free parameters in a general solution of Ax = 0.
4. The reduced row echelon forms of A and of A^T are computed similarly; the dimensions of row(A), col(A), null(A), and null(A^T) then follow as in Exercise 2.
5. (a) dim(row(A)) = dim(col(A)) = rank(A) = 3, dim(null(A)) = 3 - 3 = 0, dim(null(A^T)) = 3 - 3 = 0.
(b) dim(row(A)) = dim(col(A)) = rank(A) = 2, dim(null(A)) = 3 - 2 = 1, dim(null(A^T)) = 3 - 2 = 1.
(c) dim(row(A)) = dim(col(A)) = rank(A) = 1, dim(null(A)) = 3 - 1 = 2, dim(null(A^T)) = 3 - 1 = 2.
(d) dim(row(A)) = dim(col(A)) = rank(A) = 2, dim(null(A)) = 9 - 2 = 7, dim(null(A^T)) = 5 - 2 = 3.
(e) dim(row(A)) = dim(col(A)) = rank(A) = 2, dim(null(A)) = 5 - 2 = 3, dim(null(A^T)) = 9 - 2 = 7.
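The pattern in parts (a)-(e), dim(null(A)) = n - r and dim(null(A^T)) = m - r, is easy to check numerically. A sketch for a hypothetical 5 x 9 matrix of rank 2, as in part (e):

```python
import numpy as np

r1 = np.arange(1.0, 10.0)           # (1, 2, ..., 9)
r2 = np.ones(9); r2[0] = 0.0        # (0, 1, 1, ..., 1), independent of r1
A = np.vstack([r1, r2, r1 + r2, 2*r1 - 3*r2, r2 - r1])   # 5 x 9, rank 2

r = np.linalg.matrix_rank(A)
m, n = A.shape
print(r, n - r, m - r)              # 2 7 3
```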
7. (a) Since rank(A) = rank[A | b] = 3 the system is consistent. The number of parameters in a general solution is n - r = 3 - 3 = 0; i.e., the system has a unique solution.
(b) Since rank(A) ≠ rank[A | b] the system is inconsistent.
(c) Since rank(A) = rank[A | b] = 1 the system is consistent. The number of parameters in a general solution is n - r = 3 - 1 = 2.
(d) Since rank(A) = rank[A | b] = 2 the system is consistent. The number of parameters in a general solution is n - r = 9 - 2 = 7.
8. (a) Since rank(A) ≠ rank[A | b] the system is inconsistent.
(b) Since rank(A) = rank[A | b] = 2 the system is consistent. The number of parameters in a general solution is n - r = 4 - 2 = 2.
(c) Since rank(A) = rank[A | b] = 3 the system is consistent. The number of parameters in a general solution is n - r = 3 - 3 = 0; i.e., the system has a unique solution.
(d) Since rank(A) = rank[A | b] = 4 the system is consistent. The number of parameters in a general solution is n - r = 7 - 4 = 3.
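The rank test used in Exercises 7 and 8 translates directly into code. A sketch with a hypothetical system (the exercise matrices are not reproduced here):

```python
import numpy as np

def consistency_report(A, b):
    """Compare rank(A) with rank([A | b]) to classify the system Ax = b."""
    rA = np.linalg.matrix_rank(A)
    rAb = np.linalg.matrix_rank(np.column_stack([A, b]))
    if rA != rAb:
        return "inconsistent"
    # Consistent: n - r free parameters in a general solution.
    return f"consistent, {A.shape[1] - rA} free parameter(s)"

# Hypothetical example: rank 1 coefficient matrix, compatible right-hand side.
A = np.array([[1.0, 2.0, 3.0],
              [2.0, 4.0, 6.0]])
print(consistency_report(A, np.array([1.0, 2.0])))   # consistent, 2 free parameter(s)
print(consistency_report(A, np.array([1.0, 0.0])))   # inconsistent
```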
9. (a) This matrix has full column rank because the two column vectors are not scalar multiples of each other; it does not have full row rank because any three vectors in R^2 are linearly dependent.
(b) This matrix does not have full row rank because the 2nd row is a scalar multiple of the 1st row; it does not have full column rank since any three vectors in R^2 are linearly dependent.
(c) This matrix has full row rank because the two row vectors are not scalar multiples of each other; it does not have full column rank because any three vectors in R^2 are linearly dependent.
(d) This (square) matrix is invertible; it has full row rank and full column rank.
10. (a) This matrix has full column rank because the two column vectors are not scalar multiples of each other; it does not have full row rank because any three vectors in R^2 are linearly dependent.
(b) This matrix has full row rank because the two row vectors are not scalar multiples of each other; it does not have full column rank because any three vectors in R^2 are linearly dependent.
(c) This matrix does not have full row rank because the 2nd row is a scalar multiple of the 1st row; it does not have full column rank because any three vectors in R^2 are linearly dependent.
(d) This (square) matrix is invertible; it has full row rank and full column rank.
(b) det(A^T A) = 0 and det(AA^T) = 0; thus neither A^T A nor AA^T is invertible. This corresponds to the fact that A has neither full column rank nor full row rank.
(c) det(A^T A) = 0 whereas det(AA^T) = 66 ≠ 0; thus A^T A is not invertible but AA^T is invertible. This corresponds to the fact that A does not have full column rank but does have full row rank.
(c) det(A^T A) = 0 and det(AA^T) = 0; thus neither A^T A nor AA^T is invertible. This corresponds to the fact that A has neither full column rank nor full row rank.
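The connection between full rank and the invertibility of A^T A and AA^T can be checked numerically; a sketch with a hypothetical 2 x 3 matrix of full row rank:

```python
import numpy as np

# Hypothetical 2 x 3 matrix whose rows are independent (full row rank).
A = np.array([[1.0, 0.0, 2.0],
              [0.0, 1.0, 1.0]])

print(np.linalg.det(A @ A.T))   # 6.0, nonzero: AA^T (2 x 2) is invertible
print(np.linalg.det(A.T @ A))   # 0.0: A^T A (3 x 3) is singular
```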
are both invertible. This corresponds to the fact that A has both full column rank and full row rank.
is either inconsistent (if b1 - 2b2 + b3 ≠ 0), or has exactly one solution (if b1 - 2b2 + b3 = 0). The latter includes the case b1 = b2 = b3 = 0; thus the system Ax = 0 has only the trivial solution.
15. If A = [1 2; -1 -2; 2 4], then A^T A = [6 12; 12 24]. It is clear from inspection that the rows of A and of A^T A are multiples of the single vector u = (1, 2). Thus row(A) = row(A^T A) is the 1-dimensional space consisting of all scalar multiples of u. Similarly, null(A) = null(A^T A) is the 1-dimensional space consisting of all vectors v in R^2 which are orthogonal to u, i.e., all vectors of the form v = s(-2, 1).
of all linear combinations of the vectors u1 = (1, 0, 7) and u2 = (0, 1, -6). Thus null(A) = null(A^T A) is the 1-dimensional space consisting of all vectors v in R^3 which are orthogonal to both u1 and u2, i.e., all vectors of the form v = s(-7, 6, 1).
Row reduction of the augmented matrix [A | b] leads to rows whose last-column entries are b1, b2 - b1, b3 - 4b2 + 3b1, b4 + b2 - 2b1, and b5 - 8b2 + 7b1; thus the system will be inconsistent unless (b1, b2, b3, b4, b5) satisfies the equations b3 = -3b1 + 4b2, b4 = 2b1 - b2, and b5 = -7b1 + 8b2, where b1 and b2 can assume any values.
Row reduction of the augmented matrix [A | b] produces the rows [1 2 3 -1], [0 -7 -8 5], and [0 0 0 0], with the entry in the last column of the zero row equal to b3 - b2 - b1; thus the system is consistent if and only if b3 = b2 + b1 and, in this case, there will be infinitely many solutions.
WORKING WITH PROOFS
D1. If A is a 7 x 5 matrix with rank 3, then A^T also has rank 3; thus dim(row(A^T)) = dim(col(A^T)) = 3 and dim(null(A^T)) = 7 - 3 = 4.
D2. If A has rank k then, from Theorems 7.5.2 and 7.5.9, we have dim(row(A^T A)) = rank(A^T A) = rank(A^T) = rank(A) = k and dim(row(AA^T)) = rank(AA^T) = rank(A) = k.
D3. If A^T x = 0 has only the trivial solution then, from Theorem 7.5.11, A has full row rank. Thus, if A is m x n, we must have n ≥ m and dim(row(A)) = dim(col(A)) = m.
D4. (a) False. The row space and column space always have the same dimension.
(b) False. It is always true that rank(A) = rank(A^T), whether A is square or not.
(c) True. Under these assumptions, the system Ax = b is consistent (for any b) and so the matrices A and [A | b] have the same rank.
(d) True. If an m x n matrix A has full row rank and full column rank, then m = dim(row(A)) = rank(A) = dim(col(A)) = n.
(e) True. If A^T A and AA^T are both invertible then, from Theorem 7.5.10, A has full column rank and full row rank; thus A is square.
(f) True. The rank of a 3 x 3 matrix is 0, 1, 2, or 3 and the corresponding nullity is 3, 2, 1, or 0.
D5. (a) The solutions of the system are given by x = (b - s - t, s, t) where -∞ < s, t < ∞. This does not violate Theorem 7.5.7(b).
(b) The solutions can be expressed as (b, 0, 0) + s(-1, 1, 0) + t(-1, 0, 1), where (b, 0, 0) is a particular solution and s(-1, 1, 0) + t(-1, 0, 1) is a general solution of the corresponding homogeneous system.
D6. (a) If A is 3 x 5, then the columns of A are a set of five vectors in R^3 and thus are linearly dependent.
(b) If A is 5 x 3, then the rows of A are a set of 5 vectors in R^3 and thus are linearly dependent.
(c) If A is m x n, with m ≠ n, then either the columns of A are linearly dependent or the rows of A are linearly dependent (or both).
P1. From Theorem 7.5.8(a) we have null(A^T A) = null(A). Thus if A is m x n, then A^T A is n x n and so rank(A^T A) = n - nullity(A^T A) = n - nullity(A) = rank(A). Similarly, null(AA^T) = null(A^T) and so rank(AA^T) = m - nullity(AA^T) = m - nullity(A^T) = rank(A^T) = rank(A).
P3. (a) Since null(A^T A) = null(A), we have row(A) = null(A)⊥ = null(A^T A)⊥ = row(A^T A).
(b) Since A^T A is symmetric, we have col(A^T A) = row(A^T A) = row(A) = col(A^T).
P4. If A is m x n where m < n, then the columns of A form a set of n vectors in R^m and thus are linearly dependent. Similarly, if m > n, then the rows of A form a set of m vectors in R^n and thus are linearly dependent.
P5. If rank(A^2) = rank(A) then dim(null(A^2)) = n - rank(A^2) = n - rank(A) = dim(null(A)) and, since null(A) ⊆ null(A^2), it follows that null(A) = null(A^2).
Suppose now that y belongs to null(A) ∩ col(A). Then y = Ax for some x in R^n and Ay = 0. Since A^2 x = Ay = 0, it follows that the vector x belongs to null(A^2) = null(A), and so y = Ax = 0. This shows that null(A) ∩ col(A) = {0}.
P6. First we prove that if A is a nonzero matrix with rank k, then A has at least one invertible k x k submatrix, and all submatrices of larger size are singular. The proof is organized as suggested.
Step 1. If A is an m x n matrix with rank k, then dim(col(A)) = k and so A has k linearly independent columns. Let B be the m x k submatrix of A having these vectors as its columns. This matrix also has rank k and thus has k linearly independent rows. Let C be the k x k submatrix of B having these vectors as its rows. Then C is an invertible k x k submatrix of A.
Step 2. Suppose D is an r x r submatrix of A with r > k. Then, since dim(col(A)) = k < r, the columns of A which contain those of D must be linearly dependent. It follows that the columns of D are linearly dependent, since a nontrivial linear dependence among the containing columns results in a nontrivial linear dependence among the columns of D. Thus D is singular.
Conversely, we prove that if the largest invertible submatrix of A is k x k, then A has rank k.
Step 1. Let C be an invertible k x k submatrix of A. Then the columns of C are linearly independent and so the columns of A that contain the columns of C are also linearly independent. This shows that rank(A) = dim(col(A)) ≥ k.
Step 2. Suppose rank(A) = r > k. Then dim(col(A)) = r, and so A has r linearly independent columns. Let B be the m x r submatrix of A having these vectors as its columns. Then B also has rank r and thus has r linearly independent rows. Let C be the submatrix of B having these vectors as its rows. Then C is a nonsingular r x r submatrix of A. Thus the assumption that rank(A) > k has led to a contradiction. This, together with Step 1, shows that rank(A) = k.
P7. If A is invertible then so is A^T. Thus, using the cited exercise and Theorem 7.5.2, we have
rank(CP) = rank((CP)^T) = rank(P^T C^T) = rank(C^T) = rank(C)
and from this it also follows that nullity(CP) = n - rank(CP) = n - rank(C) = nullity(C).
1. From a row echelon form B of A, the first two columns of A are the pivot columns.
2. The first column of A forms a basis for col(A), and the first row of A forms a basis for row(A).
3. The matrix A can be row reduced to a matrix B with two nonzero rows, and the reduced row echelon form of A^T is a matrix C with two nonzero rows. Thus the first two columns of A form a basis for col(A), and the first two rows of A form a basis for row(A).
5. The reduced row echelon form B of A and the reduced row echelon form C of A^T each have three nonzero rows. Thus the first three columns of A form a basis for col(A), and the first three rows of A form a basis for row(A).
6. The reduced row echelon form of A has leading 1's in its 1st, 3rd, and 4th columns, and the reduced row echelon form of A^T has leading 1's corresponding to the 1st, 2nd, and 4th rows of A. Thus the 1st, 3rd, and 4th columns of A form a basis for col(A), and the 1st, 2nd, and 4th rows of A form a basis for row(A).
7. Proceeding as in Example 2, we first form the matrix A having the given vectors as its columns. The reduced row echelon form of A shows that all three columns of A are pivot columns; thus the vectors v1, v2, v3 are linearly independent and form a basis for the space which they span (the column space of A).
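The pivot-column method used here is exactly what a computer algebra system's rref reports. A SymPy sketch with hypothetical vectors (not the exercise's v1, v2, v3):

```python
import sympy as sp

# Hypothetical spanning vectors: v2 = 2*v1 and v4 = v1 + v3 by construction.
v1, v2, v3, v4 = [sp.Matrix(c) for c in
                  ([1, 2, 0, 1], [2, 4, 0, 2], [0, 1, 1, 0], [1, 3, 1, 1])]

A = sp.Matrix.hstack(v1, v2, v3, v4)
R, pivots = A.rref()
print(pivots)                        # (0, 2): columns 1 and 3 of A are pivot columns
# The pivot columns of A itself (not of R) form a basis for the span.
basis = [A.col(j) for j in pivots]
```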
8. Let A be the matrix having the given vectors as its columns. The reduced row echelon form of A is
R = [1 0 2 -2; 0 1 1 1; 0 0 0 0]
From this we conclude that {v1, v2} is a basis for W = col(A), and it also follows that v3 = 2v1 + v2 and v4 = -2v1 + v2.
9. Proceeding as in Example 2, we first form the matrix A having the given vectors as its columns. The reduced row echelon form R of A shows that the 1st and 3rd columns of A are the pivot columns; thus {v1, v3} is a basis for W = col(A). Since c2(R) = 2c1(R) and c4(R) = c1(R) + c3(R), we also conclude that v2 = 2v1 and v4 = v1 + v3.
10. Let A be the matrix having the given vectors as its columns. The reduced row echelon form of A is
[1 0 2 0 -1; 0 1 -1 0 3; 0 0 0 1 2; 0 0 0 0 0]
From this we conclude that the vectors v1, v2, v4 form a basis for W = col(A), and that v3 = 2v1 - v2 and v5 = -v1 + 3v2 + 2v4.
11. The matrix [A | I3] can be row reduced (and further partitioned) so that the rows of the right-hand block corresponding to the zero rows of the left-hand block can be read off.
Thus the vectors v1 = (1, -4, 0) and v2 = (0, 0, 1) form a basis for null(A^T).
12. The matrix [A | I3] can be row reduced (and further partitioned) to
13. The reduced row echelon form of A is R = [1 0 -1 -4; 0 1 -3 -7; 0 0 0 0]. From this we conclude that the first two columns of A are the pivot columns, and so these two columns form a basis for col(A). The first two rows of R form a basis for row(A) = row(R). We have Ax = 0 if and only if Rx = 0, and the general solution of the latter is x = (s + 4t, 3s + 7t, s, t) = s(1, 3, 1, 0) + t(4, 7, 0, 1). Thus (1, 3, 1, 0) and (4, 7, 0, 1) form a basis for null(A).
The reduced row echelon form of the partitioned matrix [A | I4] has two rows whose left-hand block is zero; the right-hand-block entries of those rows, two vectors of the form (1, 0, *, *) and (0, 1, *, *), form a basis for null(A^T).
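For a numerical cross-check of such null-space bases, SciPy's null_space returns orthonormal bases for null(A) and null(A^T). A sketch using a hypothetical stand-in for A, chosen so that its reduced form matches the R above:

```python
import numpy as np
from scipy.linalg import null_space

# Hypothetical A with RREF [1 0 -1 -4; 0 1 -3 -7; 0 0 0 0]: row3 = row1 + row2.
A = np.array([[1.0, 0.0, -1.0, -4.0],
              [0.0, 1.0, -3.0, -7.0],
              [1.0, 1.0, -4.0, -11.0]])

N = null_space(A)                  # orthonormal basis for null(A): 4 - 2 = 2 columns
NT = null_space(A.T)               # orthonormal basis for null(A^T): 3 - 2 = 1 column
print(N.shape[1], NT.shape[1])     # 2 1
print(np.allclose(A @ N, 0.0))     # True: each column of N is mapped to zero
```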
14. The reduced row echelon form R of A shows that the 1st, 2nd, and 4th columns of A are the pivot columns, and so these three columns form a basis for col(A). The first three rows of R form a basis for row(A) = row(R). We have Ax = 0 if and only if Rx = 0, and the general solution of the latter is a one-parameter family x = s v, so the single vector v forms a basis for null(A). Row reducing the partitioned matrix [A | I4] in the same way yields, from the row whose left-hand block is zero, a basis for null(A^T).
16. From the reduced row echelon form R0 of A, a column-row factorization A = CR is obtained by taking C to be the matrix whose columns are the pivot columns of A and R the matrix whose rows are the nonzero rows of R0. The remaining factorizations are obtained in the same way.
D2. The pivot columns of a matrix A are those columns that correspond to the columns of a row echelon form R of A which contain a leading 1. For example, if the reduced form R of A has leading 1's in its 1st, 3rd, and 5th columns, then the 1st, 3rd, and 5th columns of A are the pivot columns.
EXERCISE SET 7.7
D3. The vectors v1 = (4, 0, 1, -4, 5) and v2 = (0, 1, 0, 0, 1) form a basis for null(A^T).
2. Here the standard matrix P_theta has entries with denominator 29, and applying it to x gives the projection P_theta x.
3. A vector parallel to l is a = (1, 2). Thus, using Formula (5), the projection of x on l is given by
proj_a x = (a · x / ||a||^2) a = (3/5)(1, 2) = (3/5, 6/5)
4. A vector parallel to l is a = (3, -1). Thus, using Formula (5), the projection of x on l is given by
proj_a x = (a · x / ||a||^2) a = (-11/10)(3, -1) = (-33/10, 11/10)
On the other hand, since sin θ = -1/√10 and cos θ = 3/√10, the standard matrix for the projection is
P_theta = (1/10)[9 -3; -3 1]
and applying it to x gives P_theta x = (-33/10, 11/10), in agreement with the computation above.
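These line projections are easy to sanity-check in code. A minimal sketch for Exercise 4's line, with a hypothetical x satisfying a · x = -11 (the exercise's x is not reproduced here):

```python
import numpy as np

def proj_onto_line(a, x):
    """Orthogonal projection of x onto span{a}: (a . x / ||a||^2) a."""
    a = np.asarray(a, dtype=float)
    return (a @ x) / (a @ a) * a

a = np.array([3.0, -1.0])
P = np.outer(a, a) / (a @ a)        # standard matrix: P = a a^T / (a^T a)
x = np.array([-3.0, 2.0])           # hypothetical x with a . x = -11
print(proj_onto_line(a, x))         # [-3.3  1.1]  =  (-33/10, 11/10)
print(P @ x)                        # same result
```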
5. The vector component of x along a is proj_a x = (x · a / ||a||^2) a = (1/5)(0, 2, -1) = (0, 2/5, -1/5), and the component orthogonal to a is x - proj_a x = (1, 1, 1) - (0, 2/5, -1/5) = (1, 3/5, 6/5).
6. proj_a x = (x · a / ||a||^2) a = (5/14)(1, 2, 3) = (5/14, 10/14, 15/14), and x - proj_a x = (23/14, -10/14, -1/14).
7. The vector component of x along a is proj_a x = (x · a / ||a||^2) a = (1/20)(4, -4, 2, -2) = (1/5, -1/5, 1/10, -1/10), and the component orthogonal to a is x - proj_a x = (2, 1, 1, 2) - (1/5, -1/5, 1/10, -1/10) = (9/5, 6/5, 9/10, 21/10).
8. proj_a x = (6/7)(2, 1, -1, -1) = (12/7, 6/7, -6/7, -6/7), and x - proj_a x = (5, 0, -3, 7) - (12/7, 6/7, -6/7, -6/7) = (23/7, -6/7, -15/7, 55/7).
9. ||proj_a x|| = |a · x| / ||a|| = |2 - 6 + 24| / √(4 + 9 + 36) = 20/√49 = 20/7
10. ||proj_a x|| = |a · x| / ||a|| = |8 - 10 + 4| / √(4 + 4 + 16) = 2/√24 = √6/6
11. ||proj_a x|| = |a · x| / ||a|| = |-8 + 6 - 2 + 15| / √(16 + 4 + 4 + 9) = 11/√33 = √33/3
12. ||proj_a x|| = |a · x| / ||a|| = |35 - 3 + 0 - 1| / √(49 + 1 + 0 + 1) = 31/√51 = 31√51/51
13. If a = (-1, 5, 2) then, from Theorem 7.7.3, the standard matrix for the orthogonal projection of R^3 onto span{a} is given by
P = (1/a^T a) aa^T = (1/30)[1 -5 -2; -5 25 10; -2 10 4]
We note from inspection that P is symmetric. It is also apparent that P has rank 1 since each of its rows is a scalar multiple of a. Finally, it is easy to check that P^2 = P and so P is idempotent.
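The three properties just verified, symmetry, rank 1, and idempotence, can be confirmed numerically:

```python
import numpy as np

a = np.array([-1.0, 5.0, 2.0])
P = np.outer(a, a) / (a @ a)        # P = a a^T / (a^T a), here with a^T a = 30

print(np.allclose(P, P.T))          # True: symmetric
print(np.linalg.matrix_rank(P))     # 1
print(np.allclose(P @ P, P))        # True: idempotent
```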
14. If a = (7, -2, 2) then the standard matrix for the orthogonal projection of R^3 onto span{a} is given by
P = (1/a^T a) aa^T = (1/57)[49 -14 14; -14 4 -4; 14 -4 4]
We note from inspection that P is symmetric. It is also apparent that P has rank 1 since each of its rows is a scalar multiple of a. Finally, it is easy to check that P^2 = P and so P is idempotent.
15. Let M be the 3 x 2 matrix having a1 and a2 as its columns. Then M^T M is invertible and, from Theorem 7.7.5, the standard matrix for the orthogonal projection of R^3 onto W = span{a1, a2} is P = M(M^T M)^-1 M^T. From inspection P is symmetric; its reduced row echelon form shows that P has rank 2; and it is easy to check that P^2 = P.
16. Let M be the 3 x 2 matrix having a1 and a2 as its columns. Then M^T M is invertible and the standard matrix for the orthogonal projection of R^3 onto W = span{a1, a2} is P = M(M^T M)^-1 M^T. From inspection we see that P is symmetric. The reduced row echelon form of P shows that P has rank 2. Finally, it is easy to check that P^2 = P.
17. The standard matrix for the orthogonal projection of R^3 onto the xz-plane is P = [1 0 0; 0 0 0; 0 0 1]. This agrees with the following computation: let M = [1 0; 0 0; 0 1]; then M^T M = I2 and M(M^T M)^-1 M^T = MM^T = [1 0 0; 0 0 0; 0 0 1].
18. The standard matrix for the orthogonal projection of R^3 onto the yz-plane is P = [0 0 0; 0 1 0; 0 0 1]. This agrees with a similar computation.
19. We proceed as in Example 6. The general solution of the equation x + y + z = 0 can be written as x = s(-1, 1, 0) + t(-1, 0, 1), and so the two column vectors on the right form a basis for the plane. If M is the 3 x 2 matrix having these vectors as its columns, then M^T M = [2 1; 1 2] and the standard matrix of the orthogonal projection onto the plane is
M(M^T M)^-1 M^T = (1/3)[2 -1 -1; -1 2 -1; -1 -1 2]
20. The general solution of 2x - y + 3z = 0 can be written as x = s(1, 2, 0) + t(0, 3, 1), and so the two column vectors on the right form a basis for the plane. If M is the 3 x 2 matrix having these vectors as its columns, then M^T M = [5 6; 6 10] and the standard matrix of the orthogonal projection onto the plane is
P = M(M^T M)^-1 M^T = (1/14)[10 2 -6; 2 13 3; -6 3 5]
The orthogonal projection of the vector v = (2, 4, -1) on the plane is Pv = (1/14)(34, 53, -5).
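A NumPy check of Exercise 20's projection matrix and of Pv:

```python
import numpy as np

# Basis for the plane 2x - y + 3z = 0, as in Exercise 20.
M = np.array([[1.0, 0.0],
              [2.0, 3.0],
              [0.0, 1.0]])

P = M @ np.linalg.inv(M.T @ M) @ M.T     # P = M (M^T M)^{-1} M^T
print(np.round(14 * P))                  # [[10, 2, -6], [2, 13, 3], [-6, 3, 5]]

v = np.array([2.0, 4.0, -1.0])
print(14 * (P @ v))                      # [34, 53, -5], i.e. Pv = (1/14)(34, 53, -5)
```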
21. Let A be the matrix having the given vectors as its columns. From the reduced row echelon form of A we conclude that the vectors a1 and a3 form a basis for the subspace W spanned by the given vectors (the column space of A). Let M be the 4 x 2 matrix having a1 and a3 as its columns. Then
M^T M = [72 -20; -20 30]
and the standard matrix for the orthogonal projection of R^4 onto W is given by
M(M^T M)^-1 M^T = (1/220)[89 -105 -3 25; -105 135 -15 15; -3 -15 31 -75; 25 15 -75 185]
22. Let A be the matrix having the given vectors as its columns. From the reduced row echelon form of A we conclude that the vectors a1 and a2 form a basis for the subspace W spanned by the given vectors. Let M be the 4 x 2 matrix having a1 and a2 as its columns. Then M^T M = [36 24; 24 126] and the standard matrix for the orthogonal projection of R^4 onto W is given by
M(M^T M)^-1 M^T = (1/220)[153 -89 31 -37; -89 87 -13 -59; 31 -13 7 -19; -37 -59 -19 193]
23. A general solution of the given system can be written as x = s v1 + t v2, where the two column vectors v1 and v2 form a basis for the solution space. Let B be the matrix having these two vectors as its columns. Then the standard matrix for the orthogonal projection of R^4 onto the solution space is P = B(B^T B)^-1 B^T. Thus the orthogonal projection of v = (5, 6, 7, 2) on the solution space is given (in column form) by Pv.
24. The solution space of Ax = 0 is equal to span{a} where a = (-3, 0, 1, 2). Thus, from Theorem 7.7.2, the orthogonal projection of v = (1, 1, 2, 3) on the solution space is given by
proj_a v = (a · v / ||a||^2) a = (5/14)(-3, 0, 1, 2) = (-15/14, 0, 5/14, 10/14)
25. The reduced row echelon form R of A has two nonzero rows. From this we conclude that the first two rows of R form a basis for row(A), and the first two columns of A form a basis for col(A).
Orthogonal projection of R^4 onto row(A): let B be the 4 x 2 matrix having the first two rows of R as its columns. Then B^T B is invertible and the standard matrix for the orthogonal projection of R^4 onto row(A) is given by B(B^T B)^-1 B^T.
Orthogonal projection of R^3 onto col(A): let C be the 3 x 2 matrix having the first two columns of A as its columns. Then the standard matrix for the orthogonal projection of R^3 onto col(A) is given by P_c = C(C^T C)^-1 C^T.
27. The reduced row echelon form of the matrix [A | b] is [R | c]. From this we conclude that the system Ax = b is consistent, and that one solution x0 can be read off from c. Let B be the matrix having the first two rows of R as its columns, and let C = B^T B. Then the standard matrix for the orthogonal projection of R^4 onto row(R) = row(A) is P = BC^-1 B^T, and the solution of Ax = b which lies in row(A) is given by x_row(A) = P x0.
28. Similarly, the reduced row echelon form of [A | b] is [R | c], showing that Ax = b is consistent with one solution x0 read off from c. With B the 4 x 2 matrix having the first two rows of R as its columns and C = B^T B, the standard matrix for the orthogonal projection of R^4 onto row(R) = row(A) is P = BC^-1 B^T, and the solution of Ax = b lying in row(A) is x_row(A) = P x0.
29. In Exercise 19 we found that the standard matrix for the orthogonal projection of R^3 onto the plane W with equation x + y + z = 0 is
P = (1/3)[2 -1 -1; -1 2 -1; -1 -1 2]
Thus the standard matrix for the orthogonal projection onto W⊥ (the line perpendicular to W) is
I - P = (1/3)[1 1 1; 1 1 1; 1 1 1]
Note that W⊥ is the 1-dimensional space (line) spanned by the vector a = (1, 1, 1), and so the computation above is consistent with the formula given in Theorem 7.7.3.
30. In Exercise 20 we found that the standard matrix for the orthogonal projection of R^3 onto the plane W with equation 2x - y + 3z = 0 is
P = (1/14)[10 2 -6; 2 13 3; -6 3 5]
Thus the standard matrix for the orthogonal projection onto W⊥ (the line perpendicular to W) is
I - P = (1/14)[4 -2 6; -2 1 -3; 6 -3 9]
which is consistent with Theorem 7.7.3 applied to the normal vector a = (2, -1, 3).
31. Let A be the 4 x 2 matrix having the vectors v1 and v2 as its columns. Then
A^T A = [14 -8; -8 30]
and the standard matrix for the orthogonal projection of R^4 onto the subspace W = span{v1, v2} is
P = A(A^T A)^-1 A^T = (1/89)[51 23 28 25; 23 54 -31 20; 28 -31 59 5; 25 20 5 14]
Thus the orthogonal projection of the vector v = (1, 1, 1, 1) onto W⊥ is given (in column form) by
v - Pv = (1, 1, 1, 1) - (1/89)(127, 66, 61, 64) = (1/89)(-38, 23, 28, 25)
D2. A 5 x 5 matrix P is the standard matrix for an orthogonal projection of R^5 onto some 3-dimensional subspace if and only if it is symmetric, idempotent, and has rank 3.
D4. If P is the standard matrix for the orthogonal projection of R^n onto a subspace W, then P^2 = P (P is idempotent) and so P^k = P for all k ≥ 2. In particular, we have P^n = P.
D5. (a) True. Since proj_W u belongs to W and proj_W⊥ u belongs to W⊥, the two vectors are orthogonal.
(b) False. For example, a non-symmetric idempotent matrix such as P = [1 1; 0 0] satisfies P^2 = P but does not correspond to an orthogonal projection.
(c) True. See the proof of Theorem 7.7.7.
(d) True. Since P^2 = P, we also have (I - P)^2 = I - 2P + P^2 = I - P; thus I - P is idempotent.
(e) False. In fact, since proj_col(A) b belongs to col(A), the system Ax = proj_col(A) b is always consistent.
D6. Since (W⊥)⊥ = W (Theorem 7.7.8), it follows that ((W⊥)⊥)⊥ = W⊥.
D7. The matrix A = [1 1 1; 1 1 1; 1 1 1] is symmetric and has rank 1, but is not the standard matrix of an orthogonal projection (it is not idempotent, since A^2 = 3A).
D8. In this case the row space of A is equal to all of R^n. Thus the orthogonal projection of R^n onto row(A) is the identity transformation, and its matrix is the identity matrix.
D9. Suppose that A is an n x n idempotent matrix, and that λ is an eigenvalue of A with corresponding eigenvector x (x ≠ 0). Then A^2 x = A(Ax) = A(λx) = λ^2 x. On the other hand, since A^2 = A, we have A^2 x = Ax = λx. Since x ≠ 0, it follows that λ^2 = λ and so λ = 0 or 1.
D10. Using calculus: The reduced row echelon form of [A | b] is [1 0 3 | 7; 0 1 1 | 3; 0 0 0 | 0]; thus the general solution of Ax = b is x = (7 - 3t, 3 - t, t) where -∞ < t < ∞. We have
||x||^2 = (7 - 3t)^2 + (3 - t)^2 + t^2 = 58 - 48t + 11t^2
and so the solution vector of smallest length corresponds to d/dt[||x||^2] = -48 + 22t = 0, i.e., to t = 24/11. We conclude that x_row = (7 - 72/11, 3 - 24/11, 24/11) = (5/11, 9/11, 24/11).
Using an orthogonal projection: The solution x_row is equal to the orthogonal projection of any solution of Ax = b, e.g., x = (7, 3, 0), onto the row space of A. From the row reduction alluded to above, we see that the vectors v1 = (1, 0, 3) and v2 = (0, 1, 1) form a basis for the row space of A. Let B be the 3 x 2 matrix having these vectors as its columns. Then B^T B = [10 3; 3 2], and the standard matrix for the orthogonal projection of R^3 onto W = row(A) is given by
P = B(B^T B)^-1 B^T = (1/11)[2 -3 3; -3 10 1; 3 1 10]
so that x_row = Px = (1/11)(5, 9, 24).
D11. The rows of R form a basis for the row space of A, and G = R^T has these vectors as its columns. Thus, from Theorem 7.7.5, G(G^T G)^-1 G^T is the standard matrix for the orthogonal projection of R^n onto W = row(A).
P2. If b = ta, then b^T b = b · b = (ta) · (ta) = t^2 (a · a) = t^2 a^T a and (similarly) bb^T = t^2 aa^T; thus
(1/b^T b) bb^T = (1/(t^2 a^T a)) t^2 aa^T = (1/a^T a) aa^T
P3. Let P be a symmetric n x n matrix that is idempotent and has rank k. Then W = col(P) is a k-dimensional subspace of R^n. We will show that P is the standard matrix for the orthogonal projection of R^n onto W, i.e., that Px = proj_W x for all x in R^n. To this end, we first note that Px belongs to W. To show that Px = proj_W x it suffices (from Theorem 7.7.4) to show that (I - P)x belongs to W⊥, and since W = col(P) = ran(P), this is equivalent to showing that Py · (I - P)x = 0 for all y in R^n. Finally, since P^T = P = P^2 (P is symmetric and idempotent), we have P(I - P) = P - P^2 = P - P = 0 and so Py · (I - P)x = y^T P^T (I - P)x = y^T P(I - P)x = 0.
and it is easy to check that this vector is in fact orthogonal to each of the columns of A. For example,
(b - Ax) · c1(A) = (1/11)[(-6)(1) + (-27)(2) + (15)(4)] = 0
EXERCISE SET 7.8
2. The columns of A are linearly independent and so A has full column rank. Thus the system Ax = b has a unique least squares solution given by x = (A^T A)^-1 A^T b, and it is easy to check that the resulting error vector b - Ax is orthogonal to each of the columns of A.
4. Here Ax can be computed directly from the least squares solution x. On the other hand, the standard matrix for the orthogonal projection of R^3 onto col(A) is P = A(A^T A)^-1 A^T, and applying it to b gives Pb = Ax, in agreement with the direct computation.
5. The least squares solutions of Ax = b are obtained by solving the associated normal system A^T Ax = A^T b. Since the matrix on the left is nonsingular, this system has a unique solution, and the least squares error is ||b - Ax||.
7. The least squares solutions of Ax = b are obtained by solving the normal system A^T Ax = A^T b. The augmented matrix of this system reduces to one with a zero row; thus there are infinitely many solutions.
The error vector is b - Ax, and the least squares error is its norm ||b - Ax||.
8. The augmented matrix of the normal system A^T Ax = A^T b likewise reduces to one with a zero row; thus there are infinitely many solutions. The error vector is b - Ax, and the least squares error is ||b - Ax||.
9. The linear model for the given data is Mv = y, where M has rows [1 x_i] and y is the column of data values. The least squares solution is obtained by solving the normal system M^T Mv = M^T y. Thus the least squares straight line fit to the given data is the line y = a + bx with intercept a = -3/10 and slope b determined by the normal system.
10. The linear model for the given data is Mv = y, where M has rows [1 x_i] and y is the column of data values. The least squares solution is
v = (M^T M)^-1 M^T y = [4 8; 8 22]^-1 [4; 9] = (1/24)[22 -8; -8 4][4; 9] = (1/24)[16; 4] = [2/3; 1/6]
Thus the least squares straight line fit to the given data is y = 2/3 + (1/6)x.
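The normal-equation arithmetic of Exercise 10 can be checked directly (M^T M and M^T y are taken from the solution above rather than recomputed from the data):

```python
import numpy as np

MTM = np.array([[4.0, 8.0], [8.0, 22.0]])
MTy = np.array([4.0, 9.0])
a, b = np.linalg.solve(MTM, MTy)   # solve M^T M v = M^T y
print(a, b)                        # 0.666..., 0.1666...  ->  y = 2/3 + (1/6) x
```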
11. The quadratic least squares model for the given data is Mv = y, where M has rows [1 x_i x_i^2]. The least squares solution is obtained by solving the normal system M^T Mv = M^T y. Since the matrix on the left is nonsingular, this system has a unique solution v = (M^T M)^-1 M^T y, which gives the coefficients of the least squares quadratic fit.
12. The quadratic least squares model for the given data is Mv = y, where M = [1 0 0; 1 1 1; 1 2 4; ...] and y is the data vector. The least squares solution is obtained by solving the normal system M^T Mv = M^T y. Since the matrix on the left is nonsingular, this system has a unique solution; the resulting least squares quadratic fit to the given data has the form y = -1 + a1 x + a2 x^2, with a1 and a2 determined by the normal system.
13. The model for the least squares cubic fit to the given data is Mv = y where
M = [1 1 1 1; 1 2 4 8; 1 3 9 27; 1 4 16 64; 1 5 25 125] and y = [4.9; 10.8; 27.9; 60.2; 113.0]
The associated normal system M^T Mv = M^T y is
[5 15 55 225; 15 55 225 979; 55 225 979 4425; 225 979 4425 20515][a0; a1; a2; a3] = [216.8; 916.0; 4087.4; 18822.4]
and the solution, written in comma-delimited form, is (a0, a1, a2, a3) ≈ (5.160, -1.864, 0.811, 0.775).
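Exercise 13's cubic fit is reproduced below via the normal equations; the data (x, y) are those listed in the solution:

```python
import numpy as np

x = np.array([1.0, 2.0, 3.0, 4.0, 5.0])
y = np.array([4.9, 10.8, 27.9, 60.2, 113.0])

M = np.vander(x, 4, increasing=True)     # columns 1, x, x^2, x^3
v = np.linalg.solve(M.T @ M, M.T @ y)    # normal equations M^T M v = M^T y
print(np.round(v, 3))                    # approx [ 5.16  -1.864  0.811  0.775]
```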
14. The model for the least squares cubic fit to the given data is Mv = y where
M = [1 0 0 0; 1 1 1 1; 1 2 4 8; 1 3 9 27; 1 4 16 64] and y = [0.9; 3.1; 9.4; 24.1; 57.3]
The associated normal system M^T Mv = M^T y is
[5 10 30 100; 10 30 100 354; 30 100 354 1300; 100 354 1300 4890][a0; a1; a2; a3] = [94.8; 323.4; 1174.4; 4396.2]
and the solution, written in comma-delimited form, is (a0, a1, a2, a3) ≈ (0.817, 3.586, -2.171, 1.200).
15. If M = [1 x1; 1 x2; ...; 1 xn] and y = [y1; y2; ...; yn], then
M^T M = [n Σxi; Σxi Σxi^2] and M^T y = [Σyi; Σxi yi]
Thus the normal system M^T Mv = M^T y can be written as
[n Σxi; Σxi Σxi^2][a; b] = [Σyi; Σxi yi]
and the point in the plane that is closest to P0 is Q; the latter is found by computing the orthogonal projection of the vector b = OP0 onto the plane. If the column vectors of the matrix A form a basis for W, the projection of b onto W is given by A(A^T A)^-1 A^T b.
(b) The distance from the point P0 = (1, 2, 0, -1) to the hyperplane x1 - x2 + 2x3 - 2x4 = 0 is
d = |(1)(1) + (-1)(2) + (2)(0) + (-2)(-1)| / √(1^2 + (-1)^2 + 2^2 + (-2)^2) = 1/√10 = √10/10
and the point in the hyperplane that is closest to P0 is Q = (9/10, 21/10, -2/10, -8/10). The latter is found by computing the orthogonal projection of the vector b = OP0 onto the hyperplane.
DISCUSSION AND DISCOVERY
D2. (a) The vector in col(A) that is closest to b is proj_col(A) b = A(A^T A)^-1 A^T b.
(b) The least squares solution of Ax = b is x = (A^T A)^-1 A^T b.
(c) The least squares error vector is b - A(A^T A)^-1 A^T b.
(d) The least squares error is ||b - A(A^T A)^-1 A^T b||.
(e) The standard matrix for the orthogonal projection onto col(A) is P = A(A^T A)^-1 A^T.
D3. From Theorem 7.8.4, a vector x is a least squares solution of Ax = b if and only if b - Ax belongs to col(A)⊥. These equations are clearly incompatible and so we conclude that, for any value of s, the vector x is not a least squares solution of Ax = b.
D4. The given data points nearly fall on a straight line; thus it would be reasonable to perform a linear least squares fit and then use the resulting linear formula y = a + bx to extrapolate to x = 45.
D5. The model for this least squares fit is Mv = y, and the corresponding normal system is M^T Mv = M^T y. Solving the normal system gives the least squares coefficients and thus the best least squares fit by a curve of this type.
D6. We have [A I; 0 A^T][x; r] = [b; 0] if and only if Ax + r = b and A^T r = 0. Note that A^T r = 0 if and only if r is orthogonal to col(A). It follows that b - Ax belongs to col(A)⊥ and so, from Theorem 7.8.4, x is a least squares solution of Ax = b and r = b - Ax is the least squares error vector.
P3. The least squares solutions of Ax = b are the solutions of the normal system A^T Ax = A^T b. From Theorem 3.5.1, the solution space of the latter is the translated subspace x + W where x is any least squares solution and W = null(A^T A) = null(A).
P4. If w is in W and w ≠ proj_W b then, as in the proof of Theorem 7.8.1, we have ||b - w|| > ||b - proj_W b||; thus proj_W b is the only best approximation to b from W.
P5. If a0, a1, a2, ..., am are scalars such that a0 c1(M) + a1 c2(M) + a2 c3(M) + ... + am c(m+1)(M) = 0, then
a0 + a1 xi + a2 xi^2 + ... + am xi^m = 0
for each i = 1, 2, ..., n. Thus each xi is a root of the polynomial p(x) = a0 + a1 x + ... + am x^m. But such a polynomial (if not identically zero) can have at most m distinct roots. Thus, if n > m and if at least m + 1 of the numbers x1, x2, ..., xn are distinct, then a0 = a1 = a2 = ... = am = 0. This shows that the column vectors of M are linearly independent.
P6. If at least m + 1 of the numbers x1, x2, ..., xn are distinct then, from Exercise P5, the column vectors of M are linearly independent; thus M has full column rank and M^T M is invertible.
2. (a) v1 · v2 = 0; thus the vectors v1, v2 form an orthogonal set. The corresponding orthonormal set is obtained by normalizing: q1 = v1/||v1||, q2 = v2/||v2||.
(b) v1 · v2 ≠ 0; thus the vectors v1, v2 do not form an orthogonal set.
(c) v1 · v2 ≠ 0; thus the vectors v1, v2, v3 do not form an orthogonal set.
(d) We have v1 · v2 = v1 · v3 = v2 · v3 = 0; thus the vectors v1, v2, v3 form an orthogonal set. The corresponding orthonormal set is q1 = v1/||v1||, q2 = v2/||v2||, q3 = v3/||v3||.
4. (a) Yes.
(b) No; ||v1|| ≠ 1 and ||v2|| ≠ 1.
(c) No; v1 · v2 ≠ 0, v2 · v3 ≠ 0, ||v1|| ≠ 1, and ||v3|| ≠ 1.
6. (a) proj_W x = (x · v1)v1 + (x · v2)v2 = (1)v1 + (2)v2 = (1/2, 1/2, 1/2, 1/2) + (1, 1, -1, -1) = (3/2, 3/2, -1/2, -1/2)
(b) proj_W x = (x · v1)v1 + (x · v2)v2 + (x · v3)v3 = (1)v1 + (2)v2 + (0)v3 = (3/2, 3/2, -1/2, -1/2)
7. (a)
(b)
8. (a)
(b)
14. Using Formula (6), we have P = UU^T, where U is the matrix having the given orthonormal basis vectors as its columns.
15. Using the matrix found in Exercise 13, the orthogonal projection of w onto W = span{v1, v2} is Pw.
16. Using the matrix found in Exercise 13, the orthogonal projection of w onto W = span{v1, v2} is Pw.
19. Using the matrix found in Exercise 17, the orthogonal projection of w onto W = span{v1, v2} is Pw. On the other hand, using Formula (8), we obtain the same result.
20. Using the matrix found in Exercise 18, the orthogonal projection of w onto W = span{v1, v2} is Pw. On the other hand, using Formula (8), we obtain the same result.
23. We have P^T = P and it is easy to check that P^2 = P; thus P is the standard matrix of an orthogonal projection. The dimension of the range of the projection is equal to tr(P) = 2.
24. The dimension of the range is equal to tr(P) = 1.
25. We have P^T = P and it is easy to check that P^2 = P; thus P is the standard matrix of an orthogonal projection. The dimension of the range of the projection is equal to tr(P) = 1.
26. The dimension of the range is equal to tr(P) = 2.
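The symmetric-plus-idempotent test and the trace-equals-rank fact used in Exercises 23-26 are easy to package as a check; the matrix below is the plane projection from Exercise 19, used as a hypothetical input:

```python
import numpy as np

def is_orthogonal_projection(P, tol=1e-12):
    """Symmetric + idempotent characterizes an orthogonal projection matrix."""
    P = np.asarray(P, dtype=float)
    return np.allclose(P, P.T, atol=tol) and np.allclose(P @ P, P, atol=tol)

P = (1/3) * np.array([[2.0, -1.0, -1.0],
                      [-1.0, 2.0, -1.0],
                      [-1.0, -1.0, 2.0]])
print(is_orthogonal_projection(P))   # True
print(np.trace(P))                   # 2.0 = dimension of the range
```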
29. Let v1 = w1 = (1, 1, 1) and v2 = w2 - (w2 · v1/||v1||^2)v1 = (-1, 1, 0) - (0)(1, 1, 1) = (-1, 1, 0), and
30. Let v1 = w1 = (1, 0, 0) and v2 = w2 - (w2 · v1/||v1||^2)v1 = (3, 7, -2) - (3)(1, 0, 0) = (0, 7, -2), and
31. Let v1 = w1 = (0, 2, 1, 0), v2 = w2 - (w2 · v1/||v1||^2)v1 = (1, -1, 0, 0) - (-2/5)(0, 2, 1, 0) = (1, -1/5, 2/5, 0), and
32. Let v1 = w1 = (1, 2, 1, 0), v2 = w2 - (w2 · v1/||v1||^2)v1 = (1, 1, 2, 0) - (5/6)(1, 2, 1, 0) = (1/6, -2/3, 7/6, 0), and
33. The vectors w1 = (1/√2, 1/√2, 0), w2 = (1/√2, -1/√2, 0), and w3 = (0, 0, 1) form an orthonormal basis for R^3.
EXERCISE SET 7.9
34. Let A be the 2 x 4 matrix having the given orthonormal vectors w1 and w2 as its rows. Then row(A) = span{w1, w2}, and null(A) = span{w1, w2}⊥. A basis for null(A) can be found by solving the linear system Ax = 0 from the reduced row echelon form of its augmented matrix.
35. v2 = w2 - (w2 · v1/||v1||^2)v1 = (-1, 0, 1) - (2/5)(0, 1, 2) = (-1, -2/5, 1/5)
Then {v1, v2} is an orthogonal basis for W, and the vectors
u1 = v1/||v1|| = (0, 1/√5, 2/√5),
36. Note that w4 = w1 - w2 + w3. Thus the subspace W spanned by the given vectors is 3-dimensional with basis {w1, w2, w3}. Let v1 = w1 = (-1, 2, 4, 7), and let
v2 = w2 - (w2 · v1/||v1||^2)v1 and v3 = w3 - (w3 · v1/||v1||^2)v1 - (w3 · v2/||v2||^2)v2, where w3 = (2, 2, 7, -3).
Then {v1, v2, v3} is an orthogonal basis for W, and the normalized vectors uj = vj/||vj|| form an orthonormal basis for W.
37. Note that u1 and u2 are orthonormal vectors. Thus the orthogonal projection of w onto the subspace W spanned by these two vectors is given by proj_W w = (w · u1)u1 + (w · u2)u2.
38. First we find an orthonormal basis {q1, q2} for W by applying the Gram-Schmidt process to {u1, u2}: let v1 = u1 = (-1, 0, 1, 2) and v2 = u2 - (u2 · v1/||v1||^2)v1 = (0, 1, 0, 1) - (1/3)(-1, 0, 1, 2) = (1/3, 1, -1/3, 1/3), and let q1 = v1/||v1||, q2 = v2/||v2||. Then {q1, q2} is an orthonormal basis for W, and so the orthogonal projection of w = (-1, 2, 6, 0) onto W is given by w1 = (w · q1)q1 + (w · q2)q2, with orthogonal component w2 = w - w1.
is an orthonormal basis for the 1-dimensional subspace W spanned by w. Thus, using Formula (6), the standard matrix for the orthogonal projection of R^3 onto W is
P = uu^T = (1/(a^2 + b^2 + c^2)) [a^2 ab ac; ab b^2 bc; ac bc c^2]
D2. (a) span{v1} = span{w1}, span{v1, v2} = span{w1, w2}, and span{v1, v2, v3} = span{w1, w2, w3}.
(b) v3 is orthogonal to span{w1, w2}.
D3. If the vectors w1, w2, ..., wk are linearly dependent, then at least one of the vectors in the list is a linear combination of the previous ones. If wj is a linear combination of w1, w2, ..., w(j-1) then, when applying the Gram-Schmidt process at the jth step, the vector vj will be 0.
D4. If A has orthonormal columns, then AA^T is the standard matrix for the orthogonal projection onto the column space of A.
P2. If A is symmetric and idempotent, then A is the standard matrix of an orthogonal projection operator, namely the orthogonal projection of R^n onto W = col(A). Thus A = UU^T where U is any n x k matrix whose column vectors form an orthonormal basis for W.
P3. We must prove that vj ∈ span{w1, w2, ..., wj} for each j = 1, 2, .... The proof is by induction on j.
Step 1. Since v1 = w1, we have v1 ∈ span{w1}; thus the statement is true for j = 1.
Step 2 (induction step). Suppose the statement is true for integers k which are less than or equal to j, i.e., for k = 1, 2, ..., j. Then, since v1 ∈ span{w1}, v2 ∈ span{w1, w2}, ..., and vj ∈ span{w1, w2, ..., wj}, it follows that v(j+1) ∈ span{w1, w2, ..., wj, w(j+1)}. Thus if the statement is true for each of the integers k = 1, 2, ..., j then it is also true for k = j + 1.
These two steps complete the proof by induction.
1. The column vectors of the matrix A are w1 = [1; 2] and w2 = [-1; 3]. Application of the Gram-Schmidt process to these vectors yields q1 = (1/√5)[1; 2] and q2 = (1/√5)[-2; 1]. We have w1 = (w1 · q1)q1 = √5 q1 and w2 = (w2 · q1)q1 + (w2 · q2)q2 = √5 q1 + √5 q2. Thus application of Formula (3) yields the following QR-decomposition of A:
A = [1 -1; 2 3] = [1/√5 -2/√5; 2/√5 1/√5] [√5 √5; 0 √5] = QR
We have w1 = √2 q1 and w2 = 3√2 q1 + √3 q2. This yields the following QR-decomposition of A:
A = [0 1; 1 2; 1 4] = [0 1/√3; 1/√2 -1/√3; 1/√2 1/√3] [√2 3√2; 0 √3] = QR
We have w1 = 3q1 and w2 = (w2 · q1)q1 + (w2 · q2)q2, which yields a QR-decomposition A = QR in the same way.
We have w1 = √2 q1, w2 = (w2 · q1)q1 + (w2 · q2)q2, and w3 = (w3 · q1)q1 + (w3 · q2)q2 + (w3 · q3)q3. This yields the QR-decomposition A = QR, with R the upper triangular matrix of these coefficients.
Similarly, expressing w1, w2, and w3 in terms of q1, q2, and q3 yields the QR-decomposition A = QR.
EXERCISE SET 7.10
We have w1 = 2q1, w2 = q1 + q2, and w3 = (w3 · q1)q1 + (w3 · q2)q2 + (w3 · q3)q3. This yields the QR-decomposition A = QR.
The associated upper triangular system Rx = Q^T b is then solved by back substitution, yielding the least squares solution (x1, x2).
Solving the corresponding system by back substitution yields x3, x2, and x1 = 0. Note that, in this example, the system Ax = b is consistent and this is its exact solution.
Solving this system by back substitution yields x3 = 16, x2 = -5, x1 = -8. Note that, in this example, the system Ax = b is consistent and this is its exact solution.
The upper triangular system Rx = Q^T b is solved by back substitution, yielding x3 = -2 and then x2 and x1.
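The QR route to least squares used in these exercises, form Rx = Q^T b and back substitute, looks like this in code; A and b here are hypothetical stand-ins, not the exercise data:

```python
import numpy as np
from scipy.linalg import solve_triangular

A = np.array([[1.0, -1.0],
              [2.0, 3.0],
              [0.0, 1.0]])
b = np.array([1.0, 0.0, 2.0])

Q, R = np.linalg.qr(A)                        # reduced QR; R is upper triangular
x = solve_triangular(R, Q.T @ b)              # back substitution
print(x)
print(np.linalg.lstsq(A, b, rcond=None)[0])   # same least squares solution
```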
11. The plane 2x - y + 3z = 0 corresponds to a⊥ where a = (2, -1, 3). Thus, writing a as a column vector, the standard matrix for the reflection of R^3 about the plane is
H = I - (2/a^T a) aa^T = I - (1/7)[4 -2 6; -2 1 -3; 6 -3 9] = (1/7)[3 2 -6; 2 6 3; -6 3 -2]
and the reflection of the vector b = (1, 2, 2) about that plane is given, in column form, by Hb = (1/7)(-5, 20, -4).
12. Similarly, for the next plane the reflection matrix H is computed in the same way, and the reflection of the vector b = (1, 0, 1) about that plane is given, in column form, by Hb.
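The Householder construction of Exercise 11 in code:

```python
import numpy as np

def householder(a):
    """Reflection of R^n about the hyperplane a-perp: H = I - 2 a a^T / (a^T a)."""
    a = np.asarray(a, dtype=float)
    return np.eye(len(a)) - 2.0 * np.outer(a, a) / (a @ a)

H = householder([2.0, -1.0, 3.0])       # plane 2x - y + 3z = 0, as in Exercise 11
print(np.round(7 * H))                  # [[3, 2, -6], [2, 6, 3], [-6, 3, -2]]
print(7 * (H @ np.array([1.0, 2.0, 2.0])))   # [-5, 20, -4]
```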
13.-17. In each case the standard matrix for the reflection about a⊥ is H = I - (2/a^T a) aa^T, computed from the given vector a.
18. (a) Let a = v - w = (1, 1) - (√2, 0) = (1 - √2, 1). Then the appropriate Householder matrix is
H = I - (2/a^T a) aa^T = (1/√2)[1 1; 1 -1]
(b) Let a = v - w = (1, 1) - (0, √2) = (1, 1 - √2). Then the appropriate Householder matrix is
H = I - (2/a^T a) aa^T = (1/√2)[-1 1; 1 1]
(c) Let a = v - w = (1, 1) - ((1 - √3)/2, (1 + √3)/2) = ((1 + √3)/2, (1 - √3)/2). Then the appropriate Householder matrix is computed in the same way.
The resulting matrix H = I - (2/a^T a) aa^T is the standard matrix for the Householder reflection of R^3 about a⊥, and Hv = w.
Similarly, H = I - (2/a^T a) aa^T is the standard matrix for the Householder reflection of R^3 about a⊥, and Hv = w.
21. Let v = (1, -1), w = (||v||, 0) = (√2, 0), and a = v - w = (1 - √2, -1). Then the Householder reflection about a⊥ maps v into w. The standard matrix for this reflection is
H = I - (2/a^T a) aa^T = (1/√2)[1 -1; -1 -1]
and, setting Q = Q^T = H, we have the following QR-decomposition of the matrix A:
A = [1 2; -1 3] = (1/√2)[1 -1; -1 -1] [√2 -√2/2; 0 -5√2/2] = QR
22. Let v = (2, 1), w = (||v||, 0) = (√5, 0), and a = v - w = (2 - √5, 1). Then the Householder reflection about a⊥ maps v into w. The standard matrix for this reflection is
H = I - (2/a^T a) aa^T = (1/√5)[2 1; 1 -2]
and, setting Q = Q^T = H, we have the QR-decomposition A = QR with R = HA upper triangular.
23. Referring to the construction in Exercise 21, the second entry in the first column of A can be zeroed out by multiplying by an orthogonal (Householder) matrix Q1, giving Q1 A = R upper triangular; finally, setting Q = Q1^-1 = Q1^T, we obtain the QR-decomposition A = QR.
24. Referring to the construction in Exercise 18(a), the second entry in the first column of $A$ can be zeroed out by multiplying by an orthogonal matrix $Q_1$. From a similar construction, the third entry in the second column of $Q_1A$ can be zeroed out by multiplying by an orthogonal matrix $Q_2$, and from a third such construction, the fourth entry in the third column of $Q_2Q_1A$ can be zeroed out by multiplying by an orthogonal matrix $Q_3$. This produces an upper triangular matrix $R = Q_3Q_2Q_1A$. Finally, setting $Q = Q_1^TQ_2^TQ_3^T = Q_1Q_2Q_3$, we obtain the QR-decomposition $A = QR$.
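The column-by-column construction of Exercises 23–24 is easy to automate. The following is a minimal sketch (an illustration, not the text's computation verbatim; the function name and test matrix are mine) that zeroes out each subdiagonal with a Householder reflection and accumulates $Q$:

```python
import numpy as np

def householder_qr(A):
    """QR-decomposition by successive Householder reflections Q1, Q2, ..."""
    A = np.asarray(A, dtype=float)
    m, n = A.shape
    Q, R = np.eye(m), A.copy()
    for j in range(min(m - 1, n)):
        x = R[j:, j]
        w = np.zeros_like(x)
        w[0] = np.linalg.norm(x)           # target vector (||x||, 0, ..., 0)
        a = x - w                          # a = v - w, as in Exercises 21-24
        if np.allclose(a, 0):
            continue                       # column already in the desired form
        Hj = np.eye(m)
        Hj[j:, j:] -= (2.0 / (a @ a)) * np.outer(a, a)
        R = Hj @ R                         # zero out the below-diagonal entries
        Q = Q @ Hj                         # Q = Q1 Q2 ... (each Hj is symmetric)
    return Q, R

A = np.array([[1.0, 2.0], [-1.0, 3.0]])
Q, R = householder_qr(A)
print(np.allclose(Q @ R, A), np.allclose(Q.T @ Q, np.eye(2)))   # True True
```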
25. Since $A = QR$, the system $A\mathbf{x} = \mathbf{b}$ is equivalent to the upper triangular system $R\mathbf{x} = Q^T\mathbf{b}$, which can then be solved by back substitution.
26. (a) Since $\mathbf{a}\mathbf{a}^T\mathbf{x} = \mathbf{a}(\mathbf{a}^T\mathbf{x}) = (\mathbf{a}^T\mathbf{x})\mathbf{a}$, we have $H\mathbf{x} = \left(I - \frac{2}{\mathbf{a}^T\mathbf{a}}\mathbf{a}\mathbf{a}^T\right)\mathbf{x} = I\mathbf{x} - \frac{2}{\mathbf{a}^T\mathbf{a}}\mathbf{a}\mathbf{a}^T\mathbf{x} = \mathbf{x} - \left(\frac{2\,\mathbf{a}^T\mathbf{x}}{\mathbf{a}^T\mathbf{a}}\right)\mathbf{a}$.
(b) Using the formula in part (a), we have $H\mathbf{x} = \mathbf{x} - \left(\frac{2\,\mathbf{a}^T\mathbf{x}}{\mathbf{a}^T\mathbf{a}}\right)\mathbf{a} = (3,4,1) - \frac{16}{3}(1,1,1) = \left(-\frac{7}{3}, -\frac{4}{3}, -\frac{13}{3}\right)$.
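The formula in part (a) is how Householder reflections are applied in practice, without ever forming $H$; a minimal sketch, assuming NumPy:

```python
import numpy as np

def reflect(x, a):
    """Compute Hx = x - (2 a.x / a.a) a without forming the matrix H."""
    x, a = np.asarray(x, float), np.asarray(a, float)
    return x - (2.0 * (a @ x) / (a @ a)) * a

print(reflect([3, 4, 1], [1, 1, 1]))    # approx (-7/3, -4/3, -13/3)
```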
D1. Each of the given matrices is checked directly against the form $H = I - \frac{2}{\mathbf{a}^T\mathbf{a}}\mathbf{a}\mathbf{a}^T$, and similarly for the others.
D2. The standard matrix for the reflection of $R^2$ about the line $y = mx$ is (taking $\mathbf{a} = (m,-1)$, a normal vector to the line) given by
$H = I - \frac{2}{\mathbf{a}^T\mathbf{a}}\mathbf{a}\mathbf{a}^T = \frac{1}{1+m^2}\begin{bmatrix}1-m^2 & 2m\\ 2m & m^2-1\end{bmatrix}$
D3. If $s = \sqrt{53}$, then $\|\mathbf{w}\| = \|\mathbf{v}\|$ and the Householder reflection about $(\mathbf{v}-\mathbf{w})^\perp$ maps $\mathbf{v}$ into $\mathbf{w}$.
D4. Since $\|\mathbf{w}\| = \|\mathbf{v}\|$, the Householder reflection about $(\mathbf{v}-\mathbf{w})^\perp$ maps $\mathbf{v}$ into $\mathbf{w}$. We have $\mathbf{v} - \mathbf{w} = (-8, 12)$, and so $(\mathbf{v}-\mathbf{w})^\perp$ is the line $-8x + 12y = 0$, or $y = \frac{2}{3}x$.
D5. Let $\mathbf{a} = \mathbf{v} - \mathbf{w} = (1,2,2) - (0,0,3) = (1,2,-1)$. Then the reflection of $R^3$ about $\mathbf{a}^\perp$ maps $\mathbf{v}$ into $\mathbf{w}$, and the plane $\mathbf{a}^\perp$ corresponds to $x + 2y - z = 0$, or $z = x + 2y$.
P2. To show that $H = I - \frac{2}{\mathbf{a}^T\mathbf{a}}\mathbf{a}\mathbf{a}^T$ is orthogonal we must show that $H^T = H^{-1}$. This follows from
$HH^T = \left(I - \frac{2}{\mathbf{a}^T\mathbf{a}}\mathbf{a}\mathbf{a}^T\right)\left(I - \frac{2}{\mathbf{a}^T\mathbf{a}}\mathbf{a}\mathbf{a}^T\right)^T = I - \frac{2}{\mathbf{a}^T\mathbf{a}}\mathbf{a}\mathbf{a}^T - \frac{2}{\mathbf{a}^T\mathbf{a}}\mathbf{a}\mathbf{a}^T + \frac{4}{(\mathbf{a}^T\mathbf{a})^2}\mathbf{a}\mathbf{a}^T\mathbf{a}\mathbf{a}^T = I - \frac{4}{\mathbf{a}^T\mathbf{a}}\mathbf{a}\mathbf{a}^T + \frac{4}{\mathbf{a}^T\mathbf{a}}\mathbf{a}\mathbf{a}^T = I$
where we have used the fact that $\mathbf{a}\mathbf{a}^T\mathbf{a}\mathbf{a}^T = \mathbf{a}(\mathbf{a}^T\mathbf{a})\mathbf{a}^T = (\mathbf{a}^T\mathbf{a})\mathbf{a}\mathbf{a}^T$.
P3. One of the features of the Gram–Schmidt process is that $\mathrm{span}\{\mathbf{q}_1, \mathbf{q}_2, \ldots, \mathbf{q}_j\} = \mathrm{span}\{\mathbf{w}_1, \mathbf{w}_2, \ldots, \mathbf{w}_j\}$ for each $j = 1, 2, \ldots, k$. Thus in the expansion we must have $\mathbf{w}_j \cdot \mathbf{q}_j \neq 0$, for otherwise $\mathbf{w}_j$ would be in $\mathrm{span}\{\mathbf{q}_1, \mathbf{q}_2, \ldots, \mathbf{q}_{j-1}\} = \mathrm{span}\{\mathbf{w}_1, \mathbf{w}_2, \ldots, \mathbf{w}_{j-1}\}$, which would mean that $\{\mathbf{w}_1, \mathbf{w}_2, \ldots, \mathbf{w}_j\}$ is a linearly dependent set.
P4. If $A = QR$ is a QR-decomposition of $A$, then $Q = AR^{-1}$. From this it follows that the columns of $Q$ belong to the column space of $A$. In particular, if $R^{-1} = [s_{ij}]$, then from $Q = AR^{-1}$ it follows that
$\mathbf{c}_j(Q) = A\mathbf{c}_j(R^{-1}) = s_{1j}\mathbf{c}_1(A) + s_{2j}\mathbf{c}_2(A) + \cdots + s_{kj}\mathbf{c}_k(A)$
for each $j = 1, 2, \ldots, k$. Finally, since $\dim(\mathrm{col}(A)) = k$ and the vectors $\mathbf{c}_1(Q), \mathbf{c}_2(Q), \ldots, \mathbf{c}_k(Q)$ are linearly independent, it follows that they form a basis for $\mathrm{col}(A)$.
1. (a) We have $\mathbf{w} = 3\mathbf{v}_1 - 7\mathbf{v}_2$; thus $(\mathbf{w})_B = (3, -7)$ and $[\mathbf{w}]_B = \begin{bmatrix}3\\-7\end{bmatrix}$.
(b) The vector equation $c_1\mathbf{v}_1 + c_2\mathbf{v}_2 = \mathbf{w}$ is equivalent to a $2\times 2$ linear system that can be solved for $c_1$ and $c_2$.
3. The vector equation $c_1\mathbf{v}_1 + c_2\mathbf{v}_2 + c_3\mathbf{v}_3 = \mathbf{w}$ is equivalent to an upper triangular system; solving it by back substitution yields $c_3 = 1$, $c_2 = -2$, $c_1 = 3$. Thus $(\mathbf{w})_B = (3, -2, 1)$ and $[\mathbf{w}]_B = \begin{bmatrix}3\\-2\\1\end{bmatrix}$.
4. The vector equation $c_1\mathbf{v}_1 + c_2\mathbf{v}_2 + c_3\mathbf{v}_3 = \mathbf{w}$ is equivalent to a linear system; solving it by row reduction yields $c_1 = -2$, $c_2 = 0$, $c_3 = 1$. Thus $(\mathbf{w})_B = (-2, 0, 1)$.
5. If $(\mathbf{u})_B = (7, -2, 1)$, then $\mathbf{u} = 7\mathbf{v}_1 - 2\mathbf{v}_2 + \mathbf{v}_3 = 7(1,0,0) - 2(2,2,0) + (3,3,3) = (6, -1, 3)$.
6. If $(\mathbf{u})_B = (8, -5, 4)$, then $\mathbf{u} = 8\mathbf{v}_1 - 5\mathbf{v}_2 + 4\mathbf{v}_3 = 8(1,2,3) - 5(-4,5,6) + 4(7,-8,9) = (56, -41, 30)$.
7. Since the basis is orthonormal, $(\mathbf{w})_B = (\mathbf{w}\cdot\mathbf{v}_1,\ \mathbf{w}\cdot\mathbf{v}_2) = \left(-2\sqrt{2},\ \frac{5\sqrt{2}}{2}\right)$.
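Finding $(\mathbf{w})_B$ always amounts to solving the linear system whose coefficient matrix has the basis vectors as columns. A minimal sketch, assuming NumPy and using the basis of Exercise 5:

```python
import numpy as np

# Basis from Exercise 5: v1 = (1,0,0), v2 = (2,2,0), v3 = (3,3,3)
V = np.column_stack([(1, 0, 0), (2, 2, 0), (3, 3, 3)]).astype(float)

u = np.array([6, -1, 3], dtype=float)
coords = np.linalg.solve(V, u)      # solves c1*v1 + c2*v2 + c3*v3 = u
print(coords)                       # (7, -2, 1), i.e. (u)_B = (7, -2, 1)
```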
12. (a) We have $\mathbf{u} = -2\mathbf{v}_1 + \mathbf{v}_2 + 2\mathbf{v}_3$ and $\mathbf{v} = 3\mathbf{v}_1 + 0\mathbf{v}_2 - 2\mathbf{v}_3$.
(b) Working with coordinate vectors, $\|\mathbf{u}\| = \|(\mathbf{u})_B\| = \sqrt{(-2)^2 + 1^2 + 2^2} = 3$, $\|\mathbf{v}\| = \|(\mathbf{v})_B\| = \sqrt{3^2 + 0^2 + (-2)^2} = \sqrt{13}$, and $\mathbf{u}\cdot\mathbf{v} = (\mathbf{u})_B\cdot(\mathbf{v})_B = (-2)(3) + (1)(0) + (2)(-2) = -10$. Computing directly with the components of $\mathbf{u}$ and $\mathbf{v}$ gives the same values.
$\|\mathbf{v} - \mathbf{w}\| = \|(\mathbf{v})_B - (\mathbf{w})_B\| = \|(2, 5, 1, -2)\| = \sqrt{2^2 + 5^2 + 1^2 + (-2)^2} = \sqrt{34}$
$\mathbf{v}\cdot\mathbf{w} = (\mathbf{v})_B\cdot(\mathbf{w})_B = (5)(3) + (5)(0) + (-2)(-3) + (-2)(0) = 21$
15. Let $B = \{\mathbf{e}_1, \mathbf{e}_2\}$ be the standard basis for $R^2$, and let $B' = \{\mathbf{v}_1, \mathbf{v}_2\}$ be the basis corresponding to the $x'y'$-system described in Figure Ex-15. Then $P_{B'\to B} = [[\mathbf{v}_1]_B \mid [\mathbf{v}_2]_B]$ is read off from the components of $\mathbf{v}_1$ and $\mathbf{v}_2$.
(d) We have $\mathbf{w} = \mathbf{v}_1 - \mathbf{v}_2$; thus $[\mathbf{w}]_{B'} = \begin{bmatrix}1\\-1\end{bmatrix}$ and $[\mathbf{w}]_B = P_{B'\to B}[\mathbf{w}]_{B'}$.
(e) We have $\mathbf{w} = 3\mathbf{e}_1 - 6\mathbf{e}_2$; thus $[\mathbf{w}]_B = \begin{bmatrix}3\\-6\end{bmatrix}$ and $[\mathbf{w}]_{B'} = P_{B\to B'}[\mathbf{w}]_B$.
18. (a)–(e) Each transition matrix is obtained by row reduction: reducing the partitioned matrix $[B \mid S]$ to $[I \mid P_{S\to B}]$ gives the transition matrix from $S$ to $B$, and similarly in the other direction. For example, in part (e), $\mathbf{w} = 5\mathbf{e}_1 - 3\mathbf{e}_2 + \mathbf{e}_3$, so $[\mathbf{w}]_S = \begin{bmatrix}5\\-3\\1\end{bmatrix}$ and $[\mathbf{w}]_B = P_{S\to B}[\mathbf{w}]_S$.
19.–20. In the same way, the reduced row echelon form of $[B_1 \mid B_2]$ yields the transition matrix $P_{B_2\to B_1}$, and the reduced row echelon form of $[B_2 \mid B_1]$ yields $P_{B_1\to B_2}$; note that $(P_{B_2\to B_1})^{-1} = P_{B_1\to B_2}$. Coordinate vectors then transform according to $[\mathbf{w}]_{B_1} = P_{B_2\to B_1}[\mathbf{w}]_{B_2}$ and $[\mathbf{w}]_{B_2} = P_{B_1\to B_2}[\mathbf{w}]_{B_1}$.
21.–22. (a) Row reducing the partitioned matrix $[B_2 \mid B_1]$ to the form $[I \mid P]$ produces the transition matrix $P = P_{B_1\to B_2}$. (b) For example, in Exercise 21, if $\mathbf{w} = (-5, 8, 5)$ then $(\mathbf{w})_{B_1} = (1, 1, 1)$ and $(\mathbf{w})_{B_2} = P_{B_1\to B_2}(\mathbf{w})_{B_1}$. (c) Row reducing $[B_2 \mid \mathbf{w}]$ directly yields the same coordinate vector $(\mathbf{w})_{B_2}$, which agrees with the computation in part (b).
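Numerically, the row reduction of $[B_2 \mid B_1]$ is equivalent to solving $B_2P = B_1$. A minimal sketch under that interpretation (NumPy; the basis matrices below are illustrative, not the book's data):

```python
import numpy as np

def transition_matrix(B_old, B_new):
    """P mapping old-basis coordinates to new-basis coordinates:
    solves B_new @ P = B_old (the effect of row reducing [B_new | B_old])."""
    return np.linalg.solve(B_new, B_old)

B1 = np.array([[1.0, 1.0], [0.0, 2.0]])    # columns = basis B1 (illustrative)
B2 = np.array([[1.0, 0.0], [1.0, 1.0]])    # columns = basis B2 (illustrative)
P = transition_matrix(B1, B2)               # P = P_{B1 -> B2}

w_B1 = np.array([3.0, -1.0])                # coordinates of some w relative to B1
w_B2 = P @ w_B1
print(np.allclose(B1 @ w_B1, B2 @ w_B2))    # same vector w either way -> True
```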
23. Direct computation shows that the columns of $P_{B_1\to B_2}$ are orthonormal. Thus $P_{B_1\to B_2}$ is an orthogonal matrix, and since $P_{B_2\to B_1} = (P_{B_1\to B_2})^{-1} = (P_{B_1\to B_2})^T$, the same is true of $P_{B_2\to B_1}$.
24.
25. (a) We have $\mathbf{v}_1 = (0,1) = 0\mathbf{e}_1 + 1\mathbf{e}_2$ and $\mathbf{v}_2 = (1,0) = 1\mathbf{e}_1 + 0\mathbf{e}_2$; thus $P_{B\to S} = \begin{bmatrix}0&1\\1&0\end{bmatrix}$.
(b) If $P = P_{B\to S} = \begin{bmatrix}0&1\\1&0\end{bmatrix}$ then, since $P$ is orthogonal, we have $P^T = P^{-1} = (P_{B\to S})^{-1} = P_{S\to B}$. Geometrically, this corresponds to the fact that reflection about $y = x$ preserves length and thus is an orthogonal transformation.
26. (a) We have $\mathbf{v}_1 = (\cos 2\theta, \sin 2\theta)$ and $\mathbf{v}_2 = (\sin 2\theta, -\cos 2\theta)$; thus $P_{B\to S} = \begin{bmatrix}\cos 2\theta & \sin 2\theta\\ \sin 2\theta & -\cos 2\theta\end{bmatrix}$.
(b) If $P = P_{B\to S}$ then, since $P$ is orthogonal, we have $P^T = P^{-1} = (P_{B\to S})^{-1} = P_{S\to B}$. Geometrically, this corresponds to the fact that reflection about the line preserves length and thus is an orthogonal transformation.
266 Chapter 7
nl,
29. (a) If [:] = then [::] =[-::(;) :11~ ~][~] +~ ~] = [~ * nl l
[~: - ~ lhen ~ ;: sin~fi} -c:i;~~) ~] r~:l [~ ~ ~] ~] =
(b) rr x' ] [ 1] [x] [cos(~) = '
0 1 z'
=
0
-
0 1
[
-3
[-t].
-3
[~::J = [! r -;)[-t ~l m [ -4 r ~
.:.::]
I
[
- .J_
I~
7
l!]
IS
2.!
R - ~~
D2. (a) Let $B = \{\mathbf{v}_1, \mathbf{v}_2, \mathbf{v}_3\}$, where $\mathbf{v}_1 = (1,1,0)$, $\mathbf{v}_2 = (1,0,2)$, $\mathbf{v}_3 = (0,2,1)$ correspond to the column vectors of the matrix $P$. Then, from Theorem 7.11.8, $P$ is the transition matrix from $B$ to the standard basis $S = \{\mathbf{e}_1, \mathbf{e}_2, \mathbf{e}_3\}$.
(b) If $P$ is the transition matrix from $S = \{\mathbf{e}_1, \mathbf{e}_2, \mathbf{e}_3\}$ to $B = \{\mathbf{w}_1, \mathbf{w}_2, \mathbf{w}_3\}$, then $\mathbf{e}_1 = \mathbf{w}_1 + \mathbf{w}_2$, $\mathbf{e}_2 = \mathbf{w}_1 + 2\mathbf{w}_3$, and $\mathbf{e}_3 = 2\mathbf{w}_2 + \mathbf{w}_3$. Solving these vector equations for $\mathbf{w}_1$, $\mathbf{w}_2$, and $\mathbf{w}_3$ in terms of $\mathbf{e}_1$, $\mathbf{e}_2$, and $\mathbf{e}_3$ results in $\mathbf{w}_1 = \frac{4}{5}\mathbf{e}_1 + \frac{1}{5}\mathbf{e}_2 - \frac{2}{5}\mathbf{e}_3 = (\frac{4}{5}, \frac{1}{5}, -\frac{2}{5})$, $\mathbf{w}_2 = \frac{1}{5}\mathbf{e}_1 - \frac{1}{5}\mathbf{e}_2 + \frac{2}{5}\mathbf{e}_3 = (\frac{1}{5}, -\frac{1}{5}, \frac{2}{5})$, and $\mathbf{w}_3 = -\frac{2}{5}\mathbf{e}_1 + \frac{2}{5}\mathbf{e}_2 + \frac{1}{5}\mathbf{e}_3 = (-\frac{2}{5}, \frac{2}{5}, \frac{1}{5})$. Note that $\mathbf{w}_1, \mathbf{w}_2, \mathbf{w}_3$ correspond to the column vectors of the matrix $P^{-1}$.
D3. If $B = \{\mathbf{v}_1, \mathbf{v}_2, \mathbf{v}_3\}$ and if $P = \begin{bmatrix}1&0&0\\0&3&2\\0&1&1\end{bmatrix}$ is the transition matrix from $B$ to the given basis, then $\mathbf{v}_1 = (1,1,1)$, $\mathbf{v}_2 = 3(1,1,0) + (1,0,0) = (4,3,0)$, and $\mathbf{v}_3 = 2(1,1,0) + (1,0,0) = (3,2,0)$.
D4. If $[\mathbf{w}]_B = \mathbf{w}$ holds for every $\mathbf{w}$, then the transition matrix from the standard basis $S$ to the basis $B$ is
$P_{S\to B} = [[\mathbf{e}_1]_B \mid [\mathbf{e}_2]_B \mid \cdots \mid [\mathbf{e}_n]_B] = [\mathbf{e}_1 \mid \mathbf{e}_2 \mid \cdots \mid \mathbf{e}_n] = I_n$
and so $B = S = \{\mathbf{e}_1, \mathbf{e}_2, \ldots, \mathbf{e}_n\}$.
D5. If $[\mathbf{x} - \mathbf{y}]_B = \mathbf{0}$, then $[\mathbf{x}]_B = [\mathbf{y}]_B$ and so $\mathbf{x} = \mathbf{y}$.
P2. The vectors $\mathbf{v}_1, \mathbf{v}_2, \ldots, \mathbf{v}_k$ span $R^n$ if and only if every vector $\mathbf{v}$ in $R^n$ can be expressed as a linear combination of them, i.e., there exist scalars $c_1, c_2, \ldots, c_k$ such that $\mathbf{v} = c_1\mathbf{v}_1 + c_2\mathbf{v}_2 + \cdots + c_k\mathbf{v}_k$. Since $(\mathbf{v})_B = c_1(\mathbf{v}_1)_B + c_2(\mathbf{v}_2)_B + \cdots + c_k(\mathbf{v}_k)_B$ and the coordinate mapping $\mathbf{v} \to (\mathbf{v})_B$ is onto, it follows that the vectors $\mathbf{v}_1, \mathbf{v}_2, \ldots, \mathbf{v}_k$ span $R^n$ if and only if $(\mathbf{v}_1)_B, (\mathbf{v}_2)_B, \ldots, (\mathbf{v}_k)_B$ span $R^n$.
P3. Since the coordinate map $\mathbf{x} \to [\mathbf{x}]_B$ is onto, we have $A[\mathbf{x}]_B = C[\mathbf{x}]_B$ for every $\mathbf{x}$ in $R^n$ if and only if $A\mathbf{y} = C\mathbf{y}$ for every $\mathbf{y}$ in $R^n$. Thus, using Theorem 3.4.4, we can conclude that $A = C$ if and only if $A[\mathbf{x}]_B = C[\mathbf{x}]_B$ for every $\mathbf{x}$ in $R^n$.
Thus $(c\mathbf{v})_B = (ca_1, ca_2, \ldots, ca_n) = c(a_1, a_2, \ldots, a_n) = c(\mathbf{v})_B$ and $(\mathbf{v} + \mathbf{w})_B = (a_1+b_1, \ldots, a_n+b_n) = (a_1, a_2, \ldots, a_n) + (b_1, b_2, \ldots, b_n) = (\mathbf{v})_B + (\mathbf{w})_B$.
CHAPTER 8
Diagonalization
Thus $[\mathbf{x}]_B$, $[T\mathbf{x}]_B$, and $[T]_B = [[T\mathbf{v}_1]_B \mid [T\mathbf{v}_2]_B]$ are computed directly, and we note that $[T\mathbf{x}]_B = [T]_B[\mathbf{x}]_B$, which is Formula (7).
3. Let $P = P_{B\to S}$ where $S = \{\mathbf{e}_1, \mathbf{e}_2\}$ is the standard basis. Then $P = [[\mathbf{v}_1]_S \mid [\mathbf{v}_2]_S]$, and a direct computation shows that $P[T]_BP^{-1} = [T]$.
4. Let $P = P_{B\to S}$ where $S = \{\mathbf{e}_1, \mathbf{e}_2\}$ is the standard basis. Then $P = [[\mathbf{v}_1]_S \mid [\mathbf{v}_2]_S]$ and, again, $P[T]_BP^{-1} = [T]$.
5.–6. For every vector $\mathbf{x}$ in $R^3$, the vectors $[T\mathbf{x}]_B$ and $[T]_B[\mathbf{x}]_B$ are computed directly and found to be equal, which is Formula (7).
7. Let $P = P_{B\to S}$ where $S$ is the standard basis. Then $P = [[\mathbf{v}_1]_S \mid [\mathbf{v}_2]_S \mid [\mathbf{v}_3]_S]$, and a direct computation shows that $P[T]_BP^{-1} = [T]$.
8. Similarly, with $P = [[\mathbf{v}_1]_S \mid [\mathbf{v}_2]_S \mid [\mathbf{v}_3]_S]$, we have $P[T]_BP^{-1} = [T]$.
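The relation $[T] = P[T]_BP^{-1}$ with $P = P_{B\to S}$ is easy to check numerically; a minimal sketch (NumPy; the basis and operator below are illustrative, not the book's data):

```python
import numpy as np

# Illustrative basis B (as columns of P) and standard matrix [T] on R^2
P = np.array([[1.0, 1.0], [1.0, -1.0]])       # P = P_{B->S}
T_std = np.array([[2.0, 1.0], [0.0, 3.0]])    # [T] relative to the standard basis

T_B = np.linalg.inv(P) @ T_std @ P            # [T]_B = P^{-1} [T] P
print(np.allclose(P @ T_B @ np.linalg.inv(P), T_std))   # Formula (7): True
```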
9.–10. Writing the vectors of one basis in terms of the other (for example, $\mathbf{v}_1 = \mathbf{v}_1' + \mathbf{v}_2'$ and $\mathbf{v}_2 = \mathbf{v}_1' - \mathbf{v}_2'$ in Exercise 9) gives the transition matrix $P$ between the two coordinate systems, and the relation $P[T]_BP^{-1} = [T]_{B'}$ is then verified by direct computation.
11. The equation $P[T]_BP^{-1} = [T]_{B'}$ is equivalent to $[T]_B = P^{-1}[T]_{B'}P$. Thus, using the matrices from Exercise 9, $[T]_B$ is recovered by direct computation.
14. The standard matrix is $[T]$ and, from Exercise 10, we have $[T]_B$; these matrices are related by the equation $[T] = P[T]_BP^{-1}$.
Here $T\mathbf{x}$ is expanded as a linear combination of the basis vectors; the coefficient functions, evaluated at $\mathbf{x}$, give $[T\mathbf{x}]_B$ and hence the columns of $[T]_B$.
In each of these exercises, $\mathbf{x}$ is first expressed in terms of the basis $B$ to obtain $[\mathbf{x}]_B$, then $T\mathbf{x}$ is expressed in terms of $B'$ to obtain $[T\mathbf{x}]_{B'}$, and the matrix $[T]_{B',B} = [[T\mathbf{v}_1]_{B'} \mid [T\mathbf{v}_2]_{B'} \mid \cdots]$ satisfies $[T\mathbf{x}]_{B'} = [T]_{B',B}[\mathbf{x}]_B$, in agreement with Formula (26).
(b) $T\mathbf{v}_1 = \mathbf{v}_1 - 2\mathbf{v}_2$ and $T\mathbf{v}_2 = 3\mathbf{v}_1 + 5\mathbf{v}_2$.
(c) For every vector $\mathbf{x}$ in $R^2$, $T\mathbf{x}$ is computed from $[T\mathbf{x}]_B = [T]_B[\mathbf{x}]_B$; in comma-delimited form this gives $T(x_1, x_2)$ explicitly.
(d) Using the formula obtained in part (c), $T(1,1)$ is evaluated directly.
(d) Using the formula obtained in part (c), we have $T(2, 2, 0, 0) = (-31, 37, 12)$.
23. If $T$ is the identity operator then, since $T\mathbf{e}_1 = \mathbf{e}_1$ and $T\mathbf{e}_2 = \mathbf{e}_2$, we have $[T] = \begin{bmatrix}1&0\\0&1\end{bmatrix}$. Similarly, $[T]_B = \begin{bmatrix}1&0\\0&1\end{bmatrix}$ and $[T]_{B'} = \begin{bmatrix}1&0\\0&1\end{bmatrix}$. On the other hand, expressing $T\mathbf{v}_1 = \mathbf{v}_1$ and $T\mathbf{v}_2 = \mathbf{v}_2$ in terms of the basis $B'$ produces a matrix $[T]_{B',B}$ that is not the identity, since $B \neq B'$.
24. If $T$ is the identity operator then $[T] = [T]_B = [T]_{B'} = I_3$. On the other hand, expressing $T\mathbf{v}_1$, $T\mathbf{v}_2$, and $T\mathbf{v}_3$ in terms of the basis $B'$ produces a matrix $[T]_{B',B}$ that is not the identity.
25. Let $B = \{\mathbf{v}_1, \mathbf{v}_2, \ldots, \mathbf{v}_n\}$ and $B' = \{\mathbf{u}_1, \mathbf{u}_2, \ldots, \mathbf{u}_m\}$ be bases for $R^n$ and $R^m$ respectively. Then, if $T$ is the zero transformation, we have $T\mathbf{v}_j = \mathbf{0}$ for every $j$, and so $[T]_{B',B}$ is the $m\times n$ zero matrix.
26. There is a scalar $k > 0$ such that $T(\mathbf{x}) = k\mathbf{x}$ for all $\mathbf{x}$ in $R^n$. Thus, if $B = \{\mathbf{v}_1, \mathbf{v}_2, \ldots, \mathbf{v}_n\}$ is any basis for $R^n$, we have $T(\mathbf{v}_j) = k\mathbf{v}_j$ for all $j = 1, 2, \ldots, n$ and so $[T]_B = kI$.
27.–28. The matrices $[T]$ are obtained by computing the images of the basis vectors directly.
29. We have $T\mathbf{v}_1 = -4\mathbf{v}_1$ and $T\mathbf{v}_2 = 6\mathbf{v}_2$; thus $[T]_B = \begin{bmatrix}-4&0\\0&6\end{bmatrix}$. From this we see that the effect of the operator $T$ is to stretch the $\mathbf{v}_1$ component of a vector by a factor of 4 and reverse its direction, and to stretch the $\mathbf{v}_2$ component by a factor of 6. If the $xy$-coordinate axes are rotated 45 degrees clockwise to produce an $x'y'$-coordinate system whose axes are aligned with the directions of the vectors $\mathbf{v}_1$ and $\mathbf{v}_2$, then the effect is to stretch by a factor of 4 in the $x'$-direction, reflect about the $y'$-axis, and stretch by a factor of 6 in the $y'$-direction.
In the next exercise, $T\mathbf{v}_3 = -\sqrt{3}\mathbf{v}_2 - \mathbf{v}_3$, and from $[T]_B$ we see that the effect of the operator $T$ is to rotate vectors counterclockwise by an angle of 120 degrees about the $\mathbf{v}_1$ axis (looking toward the origin from the tip of $\mathbf{v}_1$), then stretch by a factor of 2.
Finally, $T\mathbf{e}_2 = \frac{1}{2}T\mathbf{v}_2 = \frac{1}{2}\mathbf{v}_1 = \frac{1}{2}\mathbf{e}_1$, and the standard matrix for $T$ follows.
D4. (a) True. We have $[T_1(\mathbf{x})]_B = [T_1]_B[\mathbf{x}]_B = [T_2]_B[\mathbf{x}]_B = [T_2(\mathbf{x})]_B$; thus $T_1(\mathbf{x}) = T_2(\mathbf{x})$.
(b) False. For example, the zero operator has the same matrix (the zero matrix) with respect to any basis for $R^2$.
(c) True. If $B = \{\mathbf{v}_1, \mathbf{v}_2, \ldots, \mathbf{v}_n\}$ and $[T]_B = I$, then $T(\mathbf{v}_k) = \mathbf{v}_k$ for each $k = 1, 2, \ldots, n$, and it follows from this that $T(\mathbf{x}) = \mathbf{x}$ for all $\mathbf{x}$.
(d) False. For example, let $B = \{\mathbf{e}_1, \mathbf{e}_2\}$, $B' = \{\mathbf{e}_2, \mathbf{e}_1\}$, and $T(x, y) = (y, x)$. Then $[T]_{B',B} = I_2$, but $T$ is not the identity operator.
D5. One reason is that the representation of the operator in some other basis may more clearly reflect the geometric effect of the operator.
P1. If $\mathbf{x}$ and $\mathbf{y}$ are vectors and $c$ is a scalar then, since $T$ is linear, we have
$[\mathbf{x}]_B + [\mathbf{y}]_B = [\mathbf{x}+\mathbf{y}]_B \;\to\; [T(\mathbf{x}+\mathbf{y})]_B = [T(\mathbf{x}) + T(\mathbf{y})]_B = [T(\mathbf{x})]_B + [T(\mathbf{y})]_B$
This shows that the mapping $[\mathbf{x}]_B \to [T(\mathbf{x})]_B$ is linear.
P2. If $\mathbf{x}$ is in $R^n$ and $\mathbf{y}$ is in $R^k$, then we have $[T_1(\mathbf{x})]_{B'} = [T_1]_{B',B}[\mathbf{x}]_B$ and $[T_2(\mathbf{y})]_{B''} = [T_2]_{B'',B'}[\mathbf{y}]_{B'}$. Thus
$[T_2(T_1(\mathbf{x}))]_{B''} = [T_2]_{B'',B'}[T_1(\mathbf{x})]_{B'} = [T_2]_{B'',B'}[T_1]_{B',B}[\mathbf{x}]_B$
and from this it follows that $[T_2 \circ T_1]_{B'',B} = [T_2]_{B'',B'}[T_1]_{B',B}$.
P3. If $\mathbf{x}$ is a vector in $R^n$, then $[T]_B[\mathbf{x}]_B = [T\mathbf{x}]_B = \mathbf{0}$ if and only if $T\mathbf{x} = \mathbf{0}$. Thus, if $T$ is one-to-one, it follows that $[T]_B[\mathbf{x}]_B = \mathbf{0}$ if and only if $[\mathbf{x}]_B = \mathbf{0}$, i.e., that $[T]_B$ is an invertible matrix. Furthermore, since $[T^{-1}]_B[T]_B = [T^{-1}\circ T]_B = [I]_B = I$, we have $[T^{-1}]_B = ([T]_B)^{-1}$.
2. We have det(A) = 18 and det(B) = 14; thus A and B are not similar.
3. We have rank( A) = 3 and rank(B) = 2; thus A and B are not similar.
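Determinant, rank, and trace are similarity invariants, so any mismatch rules out similarity, as in Exercises 2 and 3. A quick check of this kind (a sketch assuming NumPy; the matrices are illustrative):

```python
import numpy as np

def may_be_similar(A, B):
    """Necessary (not sufficient) conditions: equal det, rank, and trace."""
    return (np.isclose(np.linalg.det(A), np.linalg.det(B))
            and np.linalg.matrix_rank(A) == np.linalg.matrix_rank(B)
            and np.isclose(np.trace(A), np.trace(B)))

A = np.diag([1.0, 2.0, 3.0])
B = np.diag([1.0, 2.0, 4.0])
print(may_be_similar(A, B))    # False: determinants and traces differ
```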
5. (a) The size of the matrix corresponds to the degree of its characteristic polynomial, so in this case we have a 5 × 5 matrix. The eigenvalues of the matrix, with their algebraic multiplicities, are λ = 0 (multiplicity 1), λ = −1 (multiplicity 2), and λ = 1 (multiplicity 2). The eigenspace corresponding to λ = 0 has dimension 1, and the eigenspaces corresponding to λ = −1 and λ = 1 each have dimension 1 or 2.
(b) The matrix is 11 × 11 with eigenvalues λ = −3 (multiplicity 1), λ = −1 (multiplicity 3), and λ = 8 (multiplicity 7). The eigenspace corresponding to λ = −3 has dimension 1; the eigenspace corresponding to λ = −1 has dimension 1, 2, or 3; and the eigenspace corresponding to λ = 8 has dimension 1, 2, 3, 4, 5, 6, or 7.
6. (a) The matrix is 5 × 5 with eigenvalues λ = 0 (multiplicity 1), λ = 1 (multiplicity 1), λ = −2 (multiplicity 1), and λ = 3 (multiplicity 2). The eigenspaces corresponding to λ = 0, λ = 1, and λ = −2 each have dimension 1. The eigenspace corresponding to λ = 3 has dimension 1 or 2.
(b) The matrix is 6 × 6 with eigenvalues λ = 0 (multiplicity 2), λ = 6 (multiplicity 1), and λ = 2 (multiplicity 3). The eigenspace corresponding to λ = 6 has dimension 1; the eigenspace corresponding to λ = 0 has dimension 1 or 2; and the eigenspace corresponding to λ = 2 has dimension 1, 2, or 3.
7. Since A is triangular, its characteristic polynomial is p(λ) = (λ−1)(λ−1)(λ−2) = (λ−1)²(λ−2). Thus the eigenvalues of A are λ = 1 and λ = 2, with algebraic multiplicities 2 and 1 respectively. The eigenspace corresponding to λ = 1 is the solution space of the system (I − A)x = 0; its general solution is a one-parameter family, so the eigenspace is 1-dimensional and λ = 1 has geometric multiplicity 1. The solution space of (2I − A)x = 0 is likewise 1-dimensional, and so λ = 2 also has geometric multiplicity 1.
9. The characteristic polynomial of A is p(λ) = det(λI − A) = (λ−5)²(λ−3). Thus the eigenvalues of A are λ = 5 and λ = 3, with algebraic multiplicities 2 and 1 respectively. The eigenspace corresponding to λ = 5 is the solution space of the system (5I − A)x = 0; its general solution is a one-parameter family, so the eigenspace is 1-dimensional and λ = 5 has geometric multiplicity 1. Similarly, the solution space of (3I − A)x = 0 is 1-dimensional, and so λ = 3 also has geometric multiplicity 1.
10. The characteristic polynomial of A is (λ+1)(λ−3)². Thus the eigenvalues of A are λ = −1 and λ = 3, with algebraic multiplicities 1 and 2 respectively. The eigenspace corresponding to λ = −1 is 1-dimensional, and so λ = −1 has geometric multiplicity 1. The eigenspace corresponding to λ = 3 is the solution space of the system (3I − A)x = 0.
11. The rank of the matrix 0I − A = −A is clearly 1, since each of its rows is a scalar multiple of the 1st row. Thus nullity(0I − A) = 3 − 1 = 2, and this is the geometric multiplicity of λ = 0. On the other hand, the matrix −3I − A has rank 2, since its reduced row echelon form is
$\begin{bmatrix}1&0&1\\0&1&1\\0&0&0\end{bmatrix}$
Thus nullity(−3I − A) = 3 − 2 = 1, and this is the geometric multiplicity of λ = −3.
12. The characteristic polynomial of A is (λ−1)(λ² − 2λ + 2); thus λ = 1 is the only real eigenvalue of A. The reduced row echelon form of the matrix I − A shows that I − A has rank 2; thus the geometric multiplicity of λ = 1 is nullity(I − A) = 3 − 2 = 1.
13. The characteristic polynomial of A is p(λ) = λ³ − 11λ² + 39λ − 45 = (λ−5)(λ−3)²; thus the eigenvalues are λ = 5 and λ = 3, with algebraic multiplicities 1 and 2 respectively. The matrix 5I − A has rank 2, so nullity(5I − A) = 3 − 2 = 1, and λ = 5 has geometric multiplicity 1. The matrix 3I − A has rank 1, since each of its rows is a scalar multiple of the 1st row; thus nullity(3I − A) = 3 − 1 = 2, and this is the geometric multiplicity of λ = 3. It follows that A has 1 + 2 = 3 linearly independent eigenvectors and thus is diagonalizable.
14. The characteristic polynomial of A is (λ+2)(λ−1)²; thus the eigenvalues are λ = −2 and λ = 1, with algebraic multiplicities 1 and 2 respectively. The matrix −2I − A has rank 2, and the matrix I − A has rank 1. Thus λ = −2 has geometric multiplicity 1, and λ = 1 has geometric multiplicity 2. It follows that A has 1 + 2 = 3 linearly independent eigenvectors and thus is diagonalizable.
15. The characteristic polynomial of A is p(λ) = λ² − 3λ + 2 = (λ−1)(λ−2); thus A has two distinct eigenvalues, λ = 1 and λ = 2. The eigenspace corresponding to λ = 1 is obtained by solving the system (I − A)x = 0; the general solution is $\mathbf{x} = t\begin{bmatrix}\frac{4}{5}\\1\end{bmatrix}$. Thus, taking $t = 5$, we see that $\mathbf{p}_1 = \begin{bmatrix}4\\5\end{bmatrix}$ is an eigenvector for λ = 1. Similarly, $\mathbf{p}_2 = \begin{bmatrix}3\\4\end{bmatrix}$ is an eigenvector for λ = 2. Finally, the matrix $P = [\mathbf{p}_1 \mid \mathbf{p}_2] = \begin{bmatrix}4&3\\5&4\end{bmatrix}$ has the property that
$P^{-1}AP = \begin{bmatrix}4&-3\\-5&4\end{bmatrix}\begin{bmatrix}-14&12\\-20&17\end{bmatrix}\begin{bmatrix}4&3\\5&4\end{bmatrix} = \begin{bmatrix}1&0\\0&2\end{bmatrix}$
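The computation of Exercise 15 is easy to reproduce numerically; a minimal sketch, assuming NumPy:

```python
import numpy as np

A = np.array([[-14.0, 12.0], [-20.0, 17.0]])
P = np.array([[4.0, 3.0], [5.0, 4.0]])       # columns p1, p2 from above
D = np.linalg.inv(P) @ A @ P
print(np.round(D, 10))                       # diag(1, 2), as claimed
```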
16. The characteristic polynomial of A is (λ−1)(λ+1); thus A has two distinct eigenvalues, λ₁ = 1 and λ₂ = −1. Corresponding eigenvectors are p₁ and p₂, and the matrix P = [p₁ | p₂] diagonalizes A.
17. The characteristic polynomial of A is p(λ) = λ(λ−1)(λ−2); thus A has three distinct eigenvalues, and the matrix P formed from corresponding eigenvectors satisfies P⁻¹AP = diag(0, 1, 2).
18. The characteristic polynomial of A is (λ−2)(λ−3)²; thus the eigenvalues of A are λ = 2 and λ = 3.
20. The characteristic polynomial of A is p(λ) = λ³ − 4λ² + 5λ − 2 = (λ−2)(λ−1)²; thus A has two distinct eigenvalues, λ = 2 and λ = 1, and the eigenspace corresponding to λ = 1 has dimension 1. It follows that the matrix A is not diagonalizable since it has only two linearly independent eigenvectors.
21. The characteristic polynomial of A is p(λ) = (λ−5)³; thus A has one eigenvalue, λ = 5, which has algebraic multiplicity 3. The eigenspace corresponding to λ = 5 is obtained by solving the system (5I − A)x = 0; the general solution is a one-parameter family, which shows that the eigenspace has dimension 1, i.e., the eigenvalue has geometric multiplicity 1. It follows that A is not diagonalizable since the sum of the geometric multiplicities of its eigenvalues is less than 3.
22. The characteristic polynomial of A is λ²(λ−1); thus the eigenvalues of A are λ = 0 and λ = 1. The vectors v₁ and v₂ form a basis for the eigenspace corresponding to λ = 0, and the vector v₃ forms a basis for the eigenspace corresponding to λ = 1. Thus A is diagonalizable and the matrix P = [v₁ | v₂ | v₃] has the property that P⁻¹AP = diag(0, 0, 1).
23. The characteristic polynomial of A is p(λ) = (λ+2)²(λ−3)²; thus A has two eigenvalues, λ = −2 and λ = 3, each of which has algebraic multiplicity 2. The eigenspace corresponding to λ = −2 is obtained by solving the system (−2I − A)x = 0; the general solution is a two-parameter family, which shows that the eigenspace has dimension 2, i.e., that the eigenvalue λ = −2 has geometric multiplicity 2. On the other hand, the general solution of (3I − A)x = 0 is a one-parameter family, so λ = 3 has geometric multiplicity 1, and A is not diagonalizable since the sum of the geometric multiplicities of its eigenvalues is less than 4.
24. The characteristic polynomial of A is p(λ) = (λ+2)²(λ−3)²; thus A has two eigenvalues, λ = −2 and λ = 3, each of algebraic multiplicity 2. The vectors v₁ and v₂ form a basis for the eigenspace corresponding to λ = −2, and the vectors v₃ and v₄ form a basis for the eigenspace corresponding to λ = 3.
25. If the matrix A is upper triangular with 1's on the main diagonal, then its characteristic polynomial is p(λ) = (λ−1)ⁿ and λ = 1 is the only eigenvalue. Thus, in order for A to be diagonalizable, the system (I − A)x = 0 must have n linearly independent solutions. But, if this is true, then (I − A)x = 0 for every vector x in Rⁿ and so I − A is the zero matrix, i.e., A = I.
26. If A is a 3 × 3 matrix with a three-dimensional eigenspace, then A has only one eigenvalue, λ = λ₁, which has geometric multiplicity 3. In other words, the eigenspace corresponding to λ₁ is all of R³. It follows that Ax = λ₁x for all x in R³, and so A = λ₁I is a diagonal matrix.
27. If C is similar to A then there is an invertible matrix P such that C = P⁻¹AP. It follows that if A is invertible, then C is invertible since it is a product of invertible matrices. Similarly, since PCP⁻¹ = A, the invertibility of C implies the invertibility of A.
28. If P = [p₁ | p₂ | ⋯ | pₙ], then AP = [Ap₁ | Ap₂ | ⋯ | Apₙ] and PD = [λ₁p₁ | λ₂p₂ | ⋯ | λₙpₙ], where λ₁, λ₂, …, λₙ are the diagonal entries of D. Thus Apₖ = λₖpₖ for each k = 1, 2, …, n; i.e., λₖ is an eigenvalue of A and pₖ is an eigenvector corresponding to λₖ.
29. The standard matrix of the linear operator T is A, and the characteristic polynomial of A is p(λ) = λ³ + 6λ² + 9λ = λ(λ+3)². Thus the eigenvalues of T are λ = 0 and λ = −3, with algebraic multiplicities 1 and 2 respectively. Since λ = 0 has algebraic multiplicity 1, its geometric multiplicity is also 1. The eigenspace associated with λ = −3 is found by solving (−3I − A)x = 0; the general solution is a two-parameter family, so λ = −3 has geometric multiplicity 2. It follows that T is diagonalizable since the sum of the geometric multiplicities of its eigenvalues is 3.
30. The standard matrix of the operator T is A, and the characteristic polynomial of A is (λ+2)(λ−1)². Thus the eigenvalues of T are λ = −2 and λ = 1, with algebraic multiplicities 1 and 2 respectively. Since λ = −2 has algebraic multiplicity 1, its geometric multiplicity is also 1. The eigenspace associated with λ = 1 is found by solving the system (I − A)x = 0. Since the matrix I − A has rank 1, the solution space of (I − A)x = 0 is two-dimensional, i.e., λ = 1 is an eigenvalue of geometric multiplicity 2. It follows that T is diagonalizable since the sum of the geometric multiplicities of its eigenvalues is 3.
31. If x is a vector in Rⁿ and λ is a scalar, then [Tx]_B = [T]_B[x]_B and [λx]_B = λ[x]_B. It follows that Tx = λx if and only if [T]_B[x]_B = [Tx]_B = [λx]_B = λ[x]_B; thus x is an eigenvector of T corresponding to λ if and only if [x]_B is an eigenvector of [T]_B corresponding to λ.
32. The characteristic polynomial of A is $p(\lambda) = \begin{vmatrix}\lambda-a & -b\\ -c & \lambda-d\end{vmatrix} = \lambda^2 - (a+d)\lambda + (ad-bc)$, and the discriminant of this quadratic polynomial is (a+d)² − 4(ad−bc) = (a−d)² + 4bc.
(a) If (a−d)² + 4bc > 0, then p(λ) has two distinct real roots; thus A is diagonalizable since it has two distinct eigenvalues.
(b) If (a−d)² + 4bc < 0, then p(λ) has no real roots; thus A has no real eigenvalues and is therefore not diagonalizable.
(a) False. For example, a 3 × 3 matrix whose only real eigenvalue is λ = 1, of algebraic multiplicity 1 (the remaining eigenvalues being complex), is not diagonalizable.
(b) False. For example, if P⁻¹AP is a diagonal matrix then so is Q⁻¹AQ where Q = 2P. The diagonalizing matrix (if it exists) is not unique!
(c) True. Vectors from different eigenspaces correspond to different eigenvalues and are therefore linearly independent. In the situation described, {v₁, v₂, v₃} is a linearly independent set.
(d) True. If an invertible matrix A is similar to a diagonal matrix D, then D must also be invertible; thus D has nonzero diagonal entries, and D⁻¹ is the diagonal matrix whose diagonal entries are the reciprocals of the corresponding entries of D. Finally, if P is an invertible matrix such that P⁻¹AP = D, we have P⁻¹A⁻¹P = (P⁻¹AP)⁻¹ = D⁻¹, and so A⁻¹ is similar to D⁻¹.
(e) True. The vectors in a basis are linearly independent; thus A has n linearly independent eigenvectors.
D5. (a) If λ₁ has geometric multiplicity 2 and λ₂ has geometric multiplicity 3, then λ₃ must have geometric multiplicity 1. Thus the sum of the geometric multiplicities is 6 and so A is diagonalizable.
(b) In this case the matrix is not diagonalizable since the sum of the geometric multiplicities of the eigenvalues is less than 6.
(c) The matrix may or may not be diagonalizable. The geometric multiplicity of λ₃ must be 1 or 2. If the geometric multiplicity of λ₃ is 2, then the matrix is diagonalizable; if it is 1, then the matrix is not diagonalizable.
P2. If A and B are similar then there is an invertible matrix P such that A = P⁻¹BP. Thus, using part (e) of Theorem 3.2.12, we have tr(A) = tr(P⁻¹BP) = tr(P⁻¹(BP)) = tr((BP)P⁻¹) = tr(B).
P3. If x ≠ 0 and Ax = λx then, since P is invertible and CP⁻¹ = P⁻¹A, we have
CP⁻¹x = P⁻¹Ax = P⁻¹(λx) = λP⁻¹x
P4. If A and B are similar, then there is an invertible matrix P such that A = P⁻¹BP. We will prove, by induction, that Aᵏ = P⁻¹BᵏP (thus Aᵏ and Bᵏ are similar) for every positive integer k.
Step 1. The fact that A¹ = A = P⁻¹BP = P⁻¹B¹P is given.
Step 2. If Aᵏ = P⁻¹BᵏP, where k is a fixed integer ≥ 1, then we have
Aᵏ⁺¹ = AAᵏ = (P⁻¹BP)(P⁻¹BᵏP) = P⁻¹B(PP⁻¹)BᵏP = P⁻¹Bᵏ⁺¹P
P5. If A is diagonalizable, then there is an invertible matrix P and a diagonal matrix D such that P⁻¹AP = D. We will prove, by induction, that P⁻¹AᵏP = Dᵏ for every positive integer k. Since Dᵏ is diagonal, this shows that Aᵏ is diagonalizable.
Step 1. The fact that P⁻¹A¹P = P⁻¹AP = D = D¹ is given.
Step 2. If P⁻¹AᵏP = Dᵏ, where k is a fixed integer ≥ 1, then we have
P⁻¹Aᵏ⁺¹P = P⁻¹AAᵏP = (P⁻¹AP)(P⁻¹AᵏP) = DDᵏ = Dᵏ⁺¹
P6. (a) Let W be the eigenspace corresponding to λ₀. Choose a basis {u₁, u₂, …, uₖ} for W, then extend it to obtain a basis B = {u₁, u₂, …, uₖ, uₖ₊₁, …, uₙ} for Rⁿ.
(b) If P = [u₁ | u₂ | ⋯ | uₖ | uₖ₊₁ | ⋯ | uₙ] = [B₁ | B₂], then the product AP has the form $AP = P\begin{bmatrix}\lambda_0 I_k & X\\ 0 & Y\end{bmatrix}$. On the other hand, if C is an n × n matrix of the form $C = \begin{bmatrix}\lambda_0 I_k & X\\ 0 & Y\end{bmatrix}$, then PC has the same form. Thus the algebraic multiplicity of λ₀ as an eigenvalue of C, and of A, is greater than or equal to k.
1. The characteristic polynomial of A is p(λ) = λ² − 5λ = λ(λ−5). Thus the eigenvalues of A are λ = 0 and λ = 5, and each of the eigenspaces has dimension 1.
2. The characteristic polynomial of A is p(λ) = λ³ − 27λ − 54 = (λ−6)(λ+3)². Thus the eigenvalues of A are λ = 6 and λ = −3. The eigenspace corresponding to λ = 6 has dimension 1, and the eigenspace corresponding to λ = −3 has dimension 2.
3. The characteristic polynomial of A is p(λ) = λ³ − 3λ² = λ²(λ−3). Thus the eigenvalues of A are λ = 0 and λ = 3. The eigenspace corresponding to λ = 0 has dimension 2, and the eigenspace corresponding to λ = 3 has dimension 1.
4. The characteristic polynomial of A is p(λ) = λ³ − 9λ² + 15λ − 7 = (λ−7)(λ−1)². Thus the eigenvalues of A are λ = 7 and λ = 1. The eigenspace corresponding to λ = 7 has dimension 1, and the eigenspace corresponding to λ = 1 has dimension 2.
5. The general solution of the system (0I − A)x = 0 is a two-parameter family; thus the vectors v₁ and v₂ obtained from the two parameters form a basis for the eigenspace corresponding to λ = 0. Similarly, the vector v₃ forms a basis for the eigenspace corresponding to λ = 3. Since v₃ is orthogonal to both v₁ and v₂, it follows that the two eigenspaces are orthogonal.
6. The general solution of (7I − A)x = 0 is a one-parameter family; thus the vector v₁ forms a basis for the eigenspace corresponding to λ = 7. Similarly, the vectors v₂ and v₃ form a basis for the eigenspace corresponding to λ = 1. Since v₁ is orthogonal to both v₂ and v₃, it follows that the two eigenspaces are orthogonal.
7. The characteristic polynomial of A is p(λ) = λ² − 6λ + 8 = (λ−2)(λ−4); thus the eigenvalues of A are λ = 2 and λ = 4. The vector $\mathbf{v}_1 = \begin{bmatrix}-1\\1\end{bmatrix}$ forms a basis for the eigenspace corresponding to λ = 2, and the vector $\mathbf{v}_2 = \begin{bmatrix}1\\1\end{bmatrix}$ forms a basis for the eigenspace corresponding to λ = 4. These vectors are orthogonal to each other, and the orthogonal matrix $P = \left[\frac{\mathbf{v}_1}{\|\mathbf{v}_1\|} \,\middle|\, \frac{\mathbf{v}_2}{\|\mathbf{v}_2\|}\right] = \frac{1}{\sqrt{2}}\begin{bmatrix}-1&1\\1&1\end{bmatrix}$ has the property that
$P^TAP = \frac{1}{\sqrt{2}}\begin{bmatrix}-1&1\\1&1\end{bmatrix}\begin{bmatrix}3&1\\1&3\end{bmatrix}\frac{1}{\sqrt{2}}\begin{bmatrix}-1&1\\1&1\end{bmatrix} = \begin{bmatrix}2&0\\0&4\end{bmatrix} = D$
8. The characteristic polynomial of A is (λ−2)(λ−7); thus the eigenvalues of A are λ = 2 and λ = 7. Corresponding eigenvectors are $\mathbf{v}_1 = \begin{bmatrix}1\\2\end{bmatrix}$ and $\mathbf{v}_2 = \begin{bmatrix}-2\\1\end{bmatrix}$ respectively. These vectors are orthogonal to each other, and the orthogonal matrix $P = \frac{1}{\sqrt{5}}\begin{bmatrix}1&-2\\2&1\end{bmatrix}$ has the property that
$P^TAP = \begin{bmatrix}2&0\\0&7\end{bmatrix} = D$
9. The characteristic polynomial of A is p(λ) = λ³ + 6λ² − 32 = (λ−2)(λ+4)²; thus the eigenvalues of A are λ = 2 and λ = −4. The general solution of (2I − A)x = 0 is a one-parameter family, and the general solution of (−4I − A)x = 0 is a two-parameter family. Thus the vector v₁ forms a basis for the eigenspace corresponding to λ = 2, and the vectors v₂ and v₃ form a basis for the eigenspace corresponding to λ = −4. Application of the Gram–Schmidt process to {v₁} and to {v₂, v₃} yields orthonormal bases {u₁} and {u₂, u₃} for the eigenspaces, and the orthogonal matrix P = [u₁ | u₂ | u₃] has the property that
$P^TAP = \mathrm{diag}(2, -4, -4) = D$
Note: the diagonalizing matrix P is not unique; it depends on the choice of bases for the eigenspaces. This is just one possibility.
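For a symmetric matrix, `numpy.linalg.eigh` produces exactly such an orthonormal eigenbasis, so the orthogonal diagonalization can be verified directly. A minimal sketch, using the matrix of Exercise 7:

```python
import numpy as np

A = np.array([[3.0, 1.0], [1.0, 3.0]])            # symmetric matrix of Exercise 7
evals, P = np.linalg.eigh(A)                       # columns of P are orthonormal
print(evals)                                       # [2. 4.]
print(np.allclose(P.T @ A @ P, np.diag(evals)))    # P^T A P = D -> True
print(np.allclose(P.T @ P, np.eye(2)))             # P is orthogonal -> True
```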
10. The characteristic polynomial of A is p(λ) = λ³ + 28λ² − 1175λ − 3750 = (λ+3)(λ−25)(λ+50); thus the eigenvalues of A are λ₁ = −3, λ₂ = 25, and λ₃ = −50. Corresponding eigenvectors v₁, v₂, v₃ are mutually orthogonal, and the orthogonal matrix $P = \left[\frac{\mathbf{v}_1}{\|\mathbf{v}_1\|} \,\middle|\, \frac{\mathbf{v}_2}{\|\mathbf{v}_2\|} \,\middle|\, \frac{\mathbf{v}_3}{\|\mathbf{v}_3\|}\right]$ has the property that $P^TAP = \mathrm{diag}(-3, 25, -50) = D$.
11. The characteristic polynomial of A is λ²(λ−2); thus the eigenvalues are λ = 0 and λ = 2. The general solution of (0I − A)x = 0 is a two-parameter family, giving basis vectors v₁ and v₂ for the eigenspace corresponding to λ = 0, and the vector v₃ forms a basis for the eigenspace corresponding to λ = 2. These vectors are mutually orthogonal, and the orthogonal matrix P formed from the normalized eigenvectors has the property that $P^TAP = \mathrm{diag}(0, 0, 2) = D$.
12. The characteristic polynomial of A is p(λ) = λ³ − 6λ² + 9λ = λ(λ−3)²; thus the eigenvalues of A are λ = 0 and λ = 3. The vector v₁ forms a basis for the eigenspace corresponding to λ = 0, and the vectors v₂ and v₃ form a basis for the eigenspace corresponding to λ = 3. Application of Gram–Schmidt to {v₁} and to {v₂, v₃} yields orthonormal bases {u₁} and {u₂, u₃} for the eigenspaces, and the orthogonal matrix P = [u₁ | u₂ | u₃] has the property that $P^TAP = \mathrm{diag}(0, 3, 3) = D$.
13. The characteristic polynomial of A is p(λ) = λ⁴ − 6λ³ + 8λ² = λ²(λ−2)(λ−4); thus the eigenvalues of A are λ = 0, λ = 2, and λ = 4. The general solution of (0I − A)x = 0 is a two-parameter family, the general solution of (2I − A)x = 0 is a one-parameter family, and the general solution of (4I − A)x = 0 is a one-parameter family. Thus the vectors v₁ and v₂ form a basis for the eigenspace corresponding to λ = 0, v₃ forms a basis for the eigenspace corresponding to λ = 2, and v₄ forms a basis for the eigenspace corresponding to λ = 4. These vectors are mutually orthogonal, and the orthogonal matrix P formed from the normalized eigenvectors has the property that $P^TAP = \mathrm{diag}(0, 0, 2, 4) = D$.
14. The characteristic polynomial of A is p(λ) = λ⁴ − 1250λ² + 390625 = (λ−25)²(λ+25)²; thus the eigenvalues of A are λ = 25 and λ = −25. The vectors v₁ and v₂ form a basis for the eigenspace corresponding to λ = 25, and the vectors v₃ and v₄ form a basis for the eigenspace corresponding to λ = −25. These four vectors are mutually orthogonal, and the orthogonal matrix P formed from the normalized eigenvectors has the property that $P^TAP = \mathrm{diag}(25, 25, -25, -25) = D$.
15. The eigenvalues of the matrix $A = \begin{bmatrix}3&1\\1&3\end{bmatrix}$ are λ₁ = 2 and λ₂ = 4, with corresponding normalized eigenvectors $\mathbf{u}_1 = \begin{bmatrix}-\frac{1}{\sqrt{2}}\\ \frac{1}{\sqrt{2}}\end{bmatrix}$ and $\mathbf{u}_2 = \begin{bmatrix}\frac{1}{\sqrt{2}}\\ \frac{1}{\sqrt{2}}\end{bmatrix}$. Thus the spectral decomposition of A is
$A = \lambda_1\mathbf{u}_1\mathbf{u}_1^T + \lambda_2\mathbf{u}_2\mathbf{u}_2^T = (2)\begin{bmatrix}\frac{1}{2}&-\frac{1}{2}\\-\frac{1}{2}&\frac{1}{2}\end{bmatrix} + (4)\begin{bmatrix}\frac{1}{2}&\frac{1}{2}\\ \frac{1}{2}&\frac{1}{2}\end{bmatrix}$
16. The eigenvalues of A are λ₁ = 2 and λ₂ = 1, with corresponding normalized eigenvectors u₁ and u₂. Thus the spectral decomposition of A is $A = 2\mathbf{u}_1\mathbf{u}_1^T + \mathbf{u}_2\mathbf{u}_2^T$.
17. Using the eigenvalues and orthonormal eigenvectors of A, the spectral decomposition $A = \lambda_1\mathbf{u}_1\mathbf{u}_1^T + \lambda_2\mathbf{u}_2\mathbf{u}_2^T + \lambda_3\mathbf{u}_3\mathbf{u}_3^T$ is obtained in the same way.
Note: the spectral decomposition is not unique; it depends on the choice of bases for the eigenspaces. This is just one possibility.
18. With the eigenvalues and normalized eigenvectors found in Exercise 10, the spectral decomposition of A is $A = -3\mathbf{u}_1\mathbf{u}_1^T + 25\mathbf{u}_2\mathbf{u}_2^T - 50\mathbf{u}_3\mathbf{u}_3^T$.
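The spectral decomposition can be formed directly from the output of `eigh`; a minimal sketch, assuming NumPy and using the same matrix as in Exercise 15 above:

```python
import numpy as np

A = np.array([[3.0, 1.0], [1.0, 3.0]])
evals, U = np.linalg.eigh(A)

# Sum of rank-one projections lambda_i * u_i u_i^T rebuilds A
recon = sum(lam * np.outer(u, u) for lam, u in zip(evals, U.T))
print(np.allclose(recon, A))    # True
```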
19. The matrix A has eigenvalues λ = −1 and λ = 2, with corresponding eigenvectors p₁ and p₂. Thus the matrix P = [p₁ | p₂] has the property that P⁻¹AP = D = diag(−1, 2), and it follows that Aᵏ = PDᵏP⁻¹ for every positive integer k.
20. The matrix A has eigenvalues λ = 2 and λ = −2, with corresponding eigenvectors p₁ and p₂. Thus the matrix P = [p₁ | p₂] has the property that P⁻¹AP = D = diag(2, −2), and it follows that Aᵏ = PDᵏP⁻¹.
21. The matrix A has eigenvalues λ = −1 and λ = 1. The vector v₁ forms a basis for the eigenspace corresponding to λ = −1, and the vectors v₂ and v₃ form a basis for the eigenspace corresponding to λ = 1. Thus the matrix P = [v₁ | v₂ | v₃] has the property that P⁻¹AP = D = diag(−1, 1, 1), and it follows that
A¹⁰⁰⁰ = PD¹⁰⁰⁰P⁻¹ = PIP⁻¹ = I
22. The matrix A has eigenvalues λ = 0, λ = 1, and λ = −1, with corresponding eigenvectors v₁, v₂, v₃. Thus P = [v₁ | v₂ | v₃] has the property that P⁻¹AP = D = diag(0, 1, −1), and it follows that A¹⁰⁰⁰ = PD¹⁰⁰⁰P⁻¹ with D¹⁰⁰⁰ = diag(0, 1, 1).
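High powers are computed exactly this way, $A^k = PD^kP^{-1}$ with $D^k$ taken entrywise; a minimal sketch (NumPy; the matrix below is illustrative, with eigenvalues $-1$ and $2$):

```python
import numpy as np

A = np.array([[0.0, 2.0], [1.0, 1.0]])     # illustrative; eigenvalues -1 and 2
evals, P = np.linalg.eig(A)

def matrix_power_via_diag(P, evals, k):
    """A^k = P diag(evals**k) P^{-1} for a diagonalizable matrix."""
    return P @ np.diag(evals ** k) @ np.linalg.inv(P)

print(np.allclose(matrix_power_via_diag(P, evals, 10),
                  np.linalg.matrix_power(A, 10)))    # True
```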
23. (a) The characteristic polynomial of A is p(λ) = λ³ − 6λ² + 12λ − 8. Computing successive powers of A gives A² and A³, and a direct check shows that
A³ − 6A² + 12A − 8I = 0
which shows that A satisfies its characteristic equation, i.e., that p(A) = 0.
(b) Since A³ = 6A² − 12A + 8I, we have A⁴ = 6A³ − 12A² + 8A = 24A² − 64A + 48I.
(c) Since A³ − 6A² + 12A − 8I = 0, we have A(A² − 6A + 12I) = 8I and A⁻¹ = ⅛(A² − 6A + 12I).
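The Cayley–Hamilton check of part (a) is easy to replicate numerically. A minimal sketch (NumPy; the matrix below is an illustrative one with the same characteristic polynomial λ³ − 6λ² + 12λ − 8, not the book's A):

```python
import numpy as np

A = np.array([[2.0, 1.0, 0.0],
              [0.0, 2.0, 0.0],
              [0.0, 0.0, 2.0]])   # illustrative; char. poly (x-2)^3
coeffs = np.poly(A)               # characteristic polynomial coefficients

# Evaluate p(A) = A^3 - 6A^2 + 12A - 8I by Horner's scheme
p_of_A = np.zeros_like(A)
for c in coeffs:
    p_of_A = p_of_A @ A + c * np.eye(3)
print(np.allclose(p_of_A, 0))     # Cayley-Hamilton: True
```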
24. (a) The characteristic polynomial of A is p(λ) = λ³ − λ² − λ + 1. Computing successive powers of A shows that A² = I, and it follows that
A³ − A² − A + I = A²(A − I) − (A − I) = 0
which shows that A satisfies its characteristic equation, i.e., that p(A) = 0.
(b) Since A³ = A, we have A⁴ = AA³ = AA = A² = I.
(c) Since A² = I, we have A⁻¹ = A.
25.–28. In each of these exercises an orthogonal matrix P is first found with PᵀAP = D diagonal; the required matrix function is then computed as f(A) = Pf(D)Pᵀ, where f(D) is the diagonal matrix obtained by applying f to each diagonal entry of D. For instance, in one case the entries of $e^{tA} = Pe^{tD}P^T$ are linear combinations of $e^{2t}$ and $e^{-4t}$; in another they are combinations of $e^{-3t}$, $e^{25t}$, and $e^{-50t}$.
29. Note that
$\begin{bmatrix}\sin(2\pi)&0&0\\0&\sin(-4\pi)&0\\0&0&\sin(-4\pi)\end{bmatrix} = \begin{bmatrix}0&0&0\\0&0&0\\0&0&0\end{bmatrix}$
Thus, proceeding as in Exercise 27, $\sin(\pi A) = P\sin(\pi D)P^T = 0$ (the zero matrix).
30. $\cos(\pi A) = P\cos(\pi D)P^T$, where $\cos(\pi D)$ is the diagonal matrix whose entries are the cosines of $\pi$ times the eigenvalues of A; since these cosines are ±1, the result is computed directly.
32. Since A³ = 0, we have $\sin(\pi A) = \sin(0)I + \pi\cos(0)A - \tfrac{\pi^2}{2}\sin(0)A^2 = \pi A$ and, similarly, $\cos(\pi A) = I - \tfrac{\pi^2}{2}A^2$.
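All of these matrix functions follow the same pattern $f(A) = Pf(D)P^T$; a minimal sketch for a symmetric matrix (NumPy, illustrative data):

```python
import numpy as np

def matrix_function(A, f):
    """f(A) = P f(D) P^T for symmetric A, applying f to the eigenvalues."""
    evals, P = np.linalg.eigh(A)
    return P @ np.diag(f(evals)) @ P.T

A = np.array([[3.0, 1.0], [1.0, 3.0]])
expA = matrix_function(A, np.exp)       # e^A via the eigenvalues 2 and 4
# Sanity check: (e^A)^2 equals e^(2A) because A commutes with itself
print(np.allclose(expA @ expA, matrix_function(A, lambda x: np.exp(2 * x))))  # True
```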
(c) False. An orthogonal matrix need not be symmetric; for example $A = \begin{bmatrix}0&-1\\1&0\end{bmatrix}$.
(d) True. If A is an invertible orthogonally diagonalizable matrix, then there is an orthogonal matrix P such that PᵀAP = D, where D is a diagonal matrix with nonzero entries (the eigenvalues of A) on the main diagonal. It follows that PᵀA⁻¹P = (PᵀAP)⁻¹ = D⁻¹, and D⁻¹ is a diagonal matrix with nonzero entries (the reciprocals of the eigenvalues) on the main diagonal. Thus the matrix A⁻¹ is orthogonally diagonalizable.
(e) True. If A is orthogonally diagonalizable, then A is symmetric and thus has real eigenvalues.
D2. (a) A = PDPᵀ is computed by direct multiplication.
(b) No. The vectors v₂ and v₃ correspond to different eigenvalues, but are not orthogonal. Therefore they cannot be eigenvectors of a symmetric matrix.
D3. Yes. Since A is diagonalizable and the eigenspaces are mutually orthogonal, there is an orthonormal basis for Rⁿ consisting of eigenvectors of A. Thus A is orthogonally diagonalizable and therefore must be symmetric.
P1. We first show that if A and C are orthogonally similar, then there exist orthonormal bases with respect to which they represent the same linear operator. For this purpose, let T be the operator defined by T(x) = Ax. Then A = [T], i.e., A is the matrix of T relative to the standard basis B = {e₁, e₂, …, eₙ}. Since A and C are orthogonally similar, there is an orthogonal matrix P such that C = PᵀAP. Let B′ = {v₁, v₂, …, vₙ} where v₁, v₂, …, vₙ are the column vectors of P. Then B′ is an orthonormal basis for Rⁿ, and P = P_{B′→B}. Thus [T]_B = P[T]_{B′}Pᵀ and [T]_{B′} = Pᵀ[T]_BP = PᵀAP = C. This shows that there exist orthonormal bases with respect to which A and C represent the same linear operator.
Conversely, suppose that A = [T]_B and C = [T]_{B′} where T: Rⁿ → Rⁿ is a linear operator and B, B′ are orthonormal bases for Rⁿ. If P = P_{B′→B} then P is an orthogonal matrix and C = [T]_{B′} = Pᵀ[T]_BP = PᵀAP. Thus A and C are orthogonally similar.
P2. Suppose A = c₁u₁u₁ᵀ + c₂u₂u₂ᵀ + ⋯ + cₙuₙuₙᵀ, where {u₁, u₂, …, uₙ} is an orthonormal basis for Rⁿ. Since (uᵢuᵢᵀ)ᵀ = uᵢuᵢᵀ, it follows that Aᵀ = A; thus A is symmetric. Furthermore, since uᵢᵀuⱼ = uᵢ · uⱼ = δᵢⱼ, we have Auⱼ = cⱼuⱼ for each j.
P4. (a) Suppose A is a symmetric matrix, and λ₀ is an eigenvalue of A having geometric multiplicity k. Let W be the eigenspace corresponding to λ₀. Choose an orthonormal basis {u₁, u₂, …, uₖ} for W, extend it to an orthonormal basis B = {u₁, u₂, …, uₖ, uₖ₊₁, …, uₙ} for Rⁿ, and let P be the orthogonal matrix having the vectors of B as its columns. Then, as shown in Exercise P6(b) of Section 8.2, the product AP can be written as $AP = P\begin{bmatrix}\lambda_0 I_k & X\\ 0 & Y\end{bmatrix}$, and the orthogonality of P completes the argument.
5.–7. In each case the given quadratic form is expressed in matrix notation as Q = xᵀAx for a symmetric matrix A (in Exercise 6, for instance, A has eigenvalues λ₁ = 1, λ₂ = 4, λ₃ = 6). The normalized eigenvectors of A form the columns of an orthogonal matrix P that orthogonally diagonalizes A, and the change of variable x = Py eliminates the cross product terms in Q.
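The cross-term elimination can be exhibited numerically; a minimal sketch (NumPy; the illustrative form is Q = 3x² + 2xy + 3y², not the book's data):

```python
import numpy as np

A = np.array([[3.0, 1.0], [1.0, 3.0]])   # Q(x) = x^T A x = 3x^2 + 2xy + 3y^2
evals, P = np.linalg.eigh(A)              # orthogonal P diagonalizes A

# In the variables y = P^T x the form has no cross terms: Q = 2*y1^2 + 4*y2^2
x = np.array([1.0, 2.0])
y = P.T @ x
print(np.isclose(x @ A @ x, evals @ (y * y)))   # True
```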
8. The given quadratic form is expressed as Q = xᵀAx for the indicated symmetric matrix A, which has eigenvalues λ = 1 and λ = 10. The vectors v₁ and v₂ form a basis for the eigenspace corresponding to λ = 1, and v₃ forms a basis for the eigenspace corresponding to λ = 10.
13. The eigenvalues of A are λ₁ = 3 and λ₂ = −2, with corresponding eigenvectors v₁ and v₂ respectively. Thus the matrix $P = \left[\frac{\mathbf{v}_1}{\|\mathbf{v}_1\|} \,\middle|\, \frac{\mathbf{v}_2}{\|\mathbf{v}_2\|}\right]$ orthogonally diagonalizes A. Note that det(P) = 1, so P is a rotation matrix. The transformed equation can be written as 3x′² − 2y′² = 8; thus the conic is a hyperbola. The angle through which the axes have been rotated is θ = tan⁻¹(−½) ≈ −26.6°.
14. The equation can be written in matrix form as xᵀAx = 9, where A has eigenvalues λ₁ = 3 and λ₂ = 7 with corresponding eigenvectors v₁ and v₂ respectively; the rotated equation 3x′² + 7y′² = 9 represents an ellipse.
15. The eigenvalues of A are λ₁ = 20 and λ₂ = −5, with corresponding eigenvectors v₁ and v₂ respectively. Thus the matrix P formed from the normalized eigenvectors orthogonally diagonalizes A. Note that det(P) = 1, so P is a rotation matrix. The transformed equation can be written as 4x′² − y′² = 3; thus the conic is a hyperbola. The angle through which the axes have been rotated is θ = tan⁻¹(¾) ≈ 36.9°.
16. The equation can be written in matrix form as xᵀAx = c for the indicated symmetric matrix A, whose eigenvalues are λ₁ = 4 and λ₂ = 12 up to the common scaling, with corresponding eigenvectors v₁ and v₂. The matrix P formed from the normalized eigenvectors orthogonally diagonalizes A; det(P) = 1, so P is a rotation matrix. The transformed equation can be written as x′² + 3y′² = 1; thus the conic is an ellipse. The angle of rotation corresponds to cos θ = 1/√2 and sin θ = −1/√2, so θ = −45°.
17. (a) The eigenvalues of A are λ = 1 and λ = 2; thus A is positive definite.
(b) negative definite (c) indefinite
(d) positive semidefinite (e) negative semidefinite
19. We have Q = x₁² + x₂² > 0 for (x₁, x₂) ≠ (0, 0); thus Q is positive definite.
21. We have Q = (x₁ − x₂)² > 0 for x₁ ≠ x₂ and Q = 0 for x₁ = x₂; thus Q is positive semidefinite.
23. We have Q = x₁² − x₂² > 0 for x₁ ≠ 0, x₂ = 0 and Q < 0 for x₁ = 0, x₂ ≠ 0; thus Q is indefinite.
24. indefinite
25. (a) The eigenvalues of the matrix $A = \begin{bmatrix}5&-2\\-2&5\end{bmatrix}$ are λ = 3 and λ = 7; thus A is positive definite. Since $|5| = 5$ and $\begin{vmatrix}5&-2\\-2&5\end{vmatrix} = 21$ are positive, we reach the same conclusion using Theorem 8.4.5.
(b) The eigenvalues of $A = \begin{bmatrix}2&-1&0\\-1&2&0\\0&0&5\end{bmatrix}$ are λ = 1, λ = 3, and λ = 5; thus A is positive definite. The determinants of the principal submatrices are $|2| = 2$, $\begin{vmatrix}2&-1\\-1&2\end{vmatrix} = 3$, and $\det(A) = 15$; thus we reach the same conclusion using Theorem 8.4.5.
26. (a) The eigenvalues of the matrix $A = \begin{bmatrix}2&1\\1&2\end{bmatrix}$ are λ = 1 and λ = 3; thus A is positive definite. Since $|2| = 2$ and $\begin{vmatrix}2&1\\1&2\end{vmatrix} = 3$ are positive, we reach the same conclusion using Theorem 8.4.5.
(b) The eigenvalues of $A = \begin{bmatrix}3&-1&0\\-1&2&-1\\0&-1&3\end{bmatrix}$ are λ = 1, λ = 3, and λ = 4; thus A is positive definite. The determinants of the principal submatrices are $|3| = 3$, $\begin{vmatrix}3&-1\\-1&2\end{vmatrix} = 5$, and $\det(A) = 12$; thus we reach the same conclusion using Theorem 8.4.5.
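Both criteria used above, positive eigenvalues and positive leading principal minors (Theorem 8.4.5), are easy to test numerically; a minimal sketch, assuming NumPy:

```python
import numpy as np

def is_positive_definite(A):
    """Check symmetry plus the two equivalent tests of Exercises 25-26."""
    A = np.asarray(A, dtype=float)
    if not np.allclose(A, A.T):
        return False
    eig_test = np.all(np.linalg.eigvalsh(A) > 0)
    minor_test = all(np.linalg.det(A[:k, :k]) > 0 for k in range(1, len(A) + 1))
    return eig_test and minor_test     # the two tests agree for symmetric A

print(is_positive_definite([[2, -1, 0], [-1, 2, 0], [0, 0, 5]]))   # True
print(is_positive_definite([[0, 4], [4, 0]]))                      # False
```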
27.–28. In each part an orthogonal matrix P with PᵀAP = D diagonal is found, and the matrix $B = P\sqrt{D}\,P^T$ (where $\sqrt{D}$ is the diagonal matrix whose entries are the square roots of the eigenvalues of A) has the property that B² = A.
For the quadratic form containing the parameter k, the determinants of the principal submatrices are $|5| = 5$, $\begin{vmatrix}5&2\\2&1\end{vmatrix} = 1$, and $\det(A) = k - 2$; thus Q is positive definite if and only if k > 2.
31. (a) The matrix A has eigenvalues λ₁ = 3 and λ₂ = 15, with corresponding eigenvectors v₁ and v₂. Thus A is positive definite, the matrix P formed from the normalized eigenvectors orthogonally diagonalizes A, and A can be factored as A = LDU with L unit lower triangular, D diagonal with positive diagonal entries, and U = Lᵀ.
If $A = \begin{bmatrix}c_1^2 & c_1c_2 & \cdots & c_1c_n\\ c_1c_2 & c_2^2 & \cdots & c_2c_n\\ \vdots & \vdots & & \vdots\\ c_1c_n & c_2c_n & \cdots & c_n^2\end{bmatrix}$, then $A = \mathbf{c}\mathbf{c}^T$ where $\mathbf{c} = (c_1, c_2, \ldots, c_n)$.
(a) Expanding $s_x^2 = \frac{1}{n-1}\left[(x_1-\bar{x})^2 + (x_2-\bar{x})^2 + \cdots + (x_n-\bar{x})^2\right]$, the coefficient of $x_i^2$ is $\frac{1}{n}$ and the coefficient of $x_ix_j$ for $i \neq j$ is $-\frac{2}{n(n-1)}$. It follows that $s_x^2 = \mathbf{x}^TA\mathbf{x}$ where
$A = \begin{bmatrix}\frac{1}{n} & -\frac{1}{n(n-1)} & \cdots & -\frac{1}{n(n-1)}\\ -\frac{1}{n(n-1)} & \frac{1}{n} & \cdots & -\frac{1}{n(n-1)}\\ \vdots & \vdots & & \vdots\\ -\frac{1}{n(n-1)} & -\frac{1}{n(n-1)} & \cdots & \frac{1}{n}\end{bmatrix}$
(b) We have $s_x^2 = \frac{1}{n-1}\left[(x_1-\bar{x})^2 + \cdots + (x_n-\bar{x})^2\right] \geq 0$, and $s_x^2 = 0$ if and only if $x_1 = \bar{x},\ x_2 = \bar{x},\ \ldots,\ x_n = \bar{x}$, i.e., if and only if $x_1 = x_2 = \cdots = x_n$. Thus $s_x^2$ is positive semidefinite.
The given equation can be expressed in matrix notation as Q = x′ᵀAx′ for a symmetric matrix A with a double eigenvalue and a simple eigenvalue. The vectors v₁ and v₂ form a basis for the eigenspace of the double eigenvalue, and v₃ forms a basis for the eigenspace of the simple eigenvalue. Application of the Gram–Schmidt process yields an orthogonal matrix P = [p₁ | p₂ | p₃] that orthogonally diagonalizes A, and the change of variable x = Px′ converts Q into a quadratic form in the variables (x′, y′, z′) without cross product terms. From this we conclude that the equation Q = 1 corresponds to an ellipsoid with axis lengths $2\sqrt{3/2} = \sqrt{6}$ in the x′ and y′ directions and $\sqrt{3}$ in the z′ direction.
D1. (a) False. The matrix in question has eigenvalues −1 and 3; thus it is indefinite.
(b) False. The term 4x₁x₂x₃ is not quadratic in the variables x₁, x₂, x₃.
(c) True. When expanded, each of the terms of the resulting expression is quadratic (of degree 2) in the variables.
(d) True. The eigenvalues of a positive definite matrix A are strictly positive; in particular, 0 is not an eigenvalue of A and so A is invertible.
(e) False. For example, the matrix $A = \begin{bmatrix}1&0\\0&0\end{bmatrix}$ is positive semidefinite.
(f) True. If the eigenvalues of A are positive, then the eigenvalues of −A are negative.
(g) False. If c > 0 the graph is an ellipse; if c < 0 the graph is empty.
D3. The eigenvalues of A must be positive and equal to each other; in other words, A must have a positive eigenvalue of multiplicity 2.
P2. From the Principal Axes Theorem (8.4.1), there is an orthogonal change of variable x = Py for which xᵀAx = yᵀDy = λ₁y₁² + λ₂y₂², where λ₁ and λ₂ are the eigenvalues of A. Since λ₁ and λ₂ are nonnegative, it follows that xᵀAx ≥ 0 for every vector x in Rⁿ.
1. We have
$H(0,0) = \begin{bmatrix}0&4\\4&0\end{bmatrix}, \qquad H(1,1) = H(-1,-1) = \begin{bmatrix}-12&4\\4&-12\end{bmatrix}$
The eigenvalues of H(0,0) are λ = ±4; thus the matrix H(0,0) is indefinite and so f has a saddle point at (0,0). The eigenvalues of H(1,1) = H(−1,−1) are λ = −8 and λ = −16; thus this matrix is negative definite and so f has a relative maximum at (1,1) and at (−1,−1).
2. (a) The first partial derivatives of f are f_x(x,y) = 3x² − 6y and f_y(x,y) = −6x − 3y². To find the critical points we set f_x and f_y equal to zero. This yields the equations y = x²/2 and x = −y²/2. From this we conclude that y = y⁴/8, and so y = 0 or y = 2. The corresponding values of x are x = 0 and x = −2 respectively. Thus there are two critical points: (0,0) and (−2,2).
13. The constraint equation 4x² + 8y² = 16 can be rewritten as (x/2)² + (y/√2)² = 1. Thus, with the change of variable (x, y) = (2x′, √2 y′), the problem is to find the extreme values of z = xy = 2√2 x′y′ subject to x′² + y′² = 1. Note that z = 2√2 x′y′ can be expressed as z = x′ᵀAx′ where
$A = \begin{bmatrix}0&\sqrt{2}\\ \sqrt{2}&0\end{bmatrix}$
The eigenvalues of A are λ₁ = √2 and λ₂ = −√2, with corresponding (normalized) eigenvectors $\mathbf{v}_1 = \left(\tfrac{1}{\sqrt{2}}, \tfrac{1}{\sqrt{2}}\right)$ and $\mathbf{v}_2 = \left(-\tfrac{1}{\sqrt{2}}, \tfrac{1}{\sqrt{2}}\right)$. Thus the constrained maximum is z = √2, occurring at (x′, y′) = (1/√2, 1/√2), or (x, y) = (√2, 1). Similarly, the constrained minimum is z = −√2, occurring at (x′, y′) = (−1/√2, 1/√2), or (x, y) = (−√2, 1).
14. The constraint x² + 3y² = 16 can be rewritten as (x/4)² + (√3 y/4)² = 1. Thus, setting (x, y) = (4x′, (4/√3)y′), the problem is to find the extreme values of z = x² + xy + 2y² = 16x′² + (16/√3)x′y′ + (32/3)y′² subject to x′² + y′² = 1. Note that z = x′ᵀAx′ where
$A = \begin{bmatrix}16 & \frac{8}{\sqrt{3}}\\ \frac{8}{\sqrt{3}} & \frac{32}{3}\end{bmatrix}$
The eigenvalues of A are λ₁ = 56/3 and λ₂ = 8. Thus the constrained maximum is z = 56/3, occurring at (x, y) = (2√3, 2/√3), and the constrained minimum is z = 8, occurring at (x, y) = (2, −2).
17. The area of the inscribed rectangle is z = 4xy, where (x, y) is the corner point that lies in the first quadrant. Our problem is to find the maximum value of z = 4xy subject to the constraints x ≥ 0, y ≥ 0, x² + 25y² = 25. The constraint equation can be rewritten as x′² + y′² = 1 where x = 5x′ and y = y′. In terms of the variables x′ and y′, our problem is to find the maximum value of z = 20x′y′ subject to x′² + y′² = 1, x′ ≥ 0, y′ ≥ 0. Note that z = x′ᵀAx′ where $A = \begin{bmatrix}0&10\\10&0\end{bmatrix}$. The largest eigenvalue of A is λ = 10, with corresponding (normalized) eigenvector $\left(\tfrac{1}{\sqrt{2}}, \tfrac{1}{\sqrt{2}}\right)$. Thus the maximum area is z = 10, and this occurs when (x′, y′) = (1/√2, 1/√2), or (x, y) = (5/√2, 1/√2).
18. Our problem is to find the extreme values of z = 4x² − 4xy + y² subject to x² + y² = 25. Setting x = 5x′ and y = 5y′, this is equivalent to finding the extreme values of z = 100x′² − 100x′y′ + 25y′² subject to x′² + y′² = 1. Note that z = x′ᵀAx′ where $A = \begin{bmatrix}100&-50\\-50&25\end{bmatrix}$. The eigenvalues of A are λ₁ = 125 and λ₂ = 0, with corresponding (normalized) eigenvectors $\mathbf{v}_1 = \left(-\tfrac{2}{\sqrt{5}}, \tfrac{1}{\sqrt{5}}\right)$ and $\mathbf{v}_2 = \left(\tfrac{1}{\sqrt{5}}, \tfrac{2}{\sqrt{5}}\right)$. Thus the maximum temperature encountered by the ant is z = 125, and this occurs at (x′, y′) = (−2/√5, 1/√5), or (x, y) = (−2√5, √5). The minimum temperature encountered is z = 0, and this occurs at (x′, y′) = (1/√5, 2/√5), or (x, y) = (√5, 2√5).
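The principle behind Exercises 13–18, that the extreme values of $\mathbf{x}^TA\mathbf{x}$ on the unit circle are the largest and smallest eigenvalues of A, attained at the corresponding unit eigenvectors, is easy to check numerically; a minimal sketch for Exercise 18 (NumPy):

```python
import numpy as np

A = np.array([[100.0, -50.0], [-50.0, 25.0]])   # z = 100x'^2 - 100x'y' + 25y'^2
evals, vecs = np.linalg.eigh(A)                  # ascending eigenvalues

print(evals)                        # [0. 125.]: constrained min and max of z
theta = np.linspace(0, 2 * np.pi, 100000)
unit = np.vstack([np.cos(theta), np.sin(theta)])        # points on x'^2+y'^2=1
z = np.einsum('ij,ik,kj->j', unit, A, unit)             # z at each unit vector
print(z.min(), z.max())             # approx 0 and 125, matching the eigenvalues
```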
D1. (a) Since $H_f(0,0) = H_g(0,0) = \begin{bmatrix}0&0\\0&0\end{bmatrix}$, the second derivative test is inconclusive in both cases.
(b) It is clear that f has a relative minimum at (0,0) since f(0,0) = 0 and f(x,y) = x⁴ + y⁴ is strictly positive at all other points (x,y). In contrast, we have g(0,0) = 0, g(x,0) = x⁴ > 0 for x ≠ 0, and g(0,y) = −y⁴ < 0 for y ≠ 0. Thus g has a saddle point at (0,0).
D2. The eigenvalues of $H = \begin{bmatrix}2&4\\4&2\end{bmatrix}$ are λ = 6 and λ = −2. Thus H is indefinite and so the critical points of f (if any) are saddle points. Starting from f_xx(x,y) = f_yy(x,y) = 2 and f_yx(x,y) = f_xy(x,y) = 4, it follows, using partial integration, that the quadratic form f is f(x,y) = x² + 4xy + y². This function has one critical point (a saddle), which is located at the origin.
D3. If x is a unit eigenvector corresponding to λ, then q(x) = xᵀAx = xᵀ(λx) = λ(xᵀx) = λ(1) = λ.
Finally, for c between the smallest and largest eigenvalues m and M (with unit eigenvectors $\mathbf{u}_m$ and $\mathbf{u}_M$), set $\mathbf{x}_c = \sqrt{\tfrac{M-c}{M-m}}\,\mathbf{u}_m + \sqrt{\tfrac{c-m}{M-m}}\,\mathbf{u}_M$; then
$\mathbf{x}_c^TA\mathbf{x}_c = \left(\tfrac{M-c}{M-m}\right)\mathbf{u}_m^TA\mathbf{u}_m + 0 + 0 + \left(\tfrac{c-m}{M-m}\right)\mathbf{u}_M^TA\mathbf{u}_M = \left(\tfrac{M-c}{M-m}\right)m + \left(\tfrac{c-m}{M-m}\right)M = c$
1. The characteristic polynomial of AᵀA is λ(λ − 5); thus the eigenvalues of AᵀA are λ = 0 and λ = 5, and the only nonzero singular value of A is σ₁ = √5.
5. The only eigenvalue of $A^TA = \begin{bmatrix}2&0\\0&2\end{bmatrix}$ is λ = 2 (multiplicity 2), and the vectors $\mathbf{v}_1 = \begin{bmatrix}1\\0\end{bmatrix}$ and $\mathbf{v}_2 = \begin{bmatrix}0\\1\end{bmatrix}$ form an orthonormal basis for the eigenspace (which is all of R²). The singular values of A are σ₁ = √2 and σ₂ = √2. We have $\mathbf{u}_1 = \frac{1}{\sigma_1}A\mathbf{v}_1$ and $\mathbf{u}_2 = \frac{1}{\sigma_2}A\mathbf{v}_2$, and this results in the singular value decomposition $A = U\Sigma V^T$ with $U = [\mathbf{u}_1 \mid \mathbf{u}_2]$, $\Sigma = \sqrt{2}\,I$, and $V = I$.
6.–7. Proceeding in the same way, the square roots of the eigenvalues of AᵀA give the singular values, the unit eigenvectors v₁, v₂ form the columns of V, the vectors $\mathbf{u}_j = \frac{1}{\sigma_j}A\mathbf{v}_j$ form the columns of U, and $A = U\Sigma V^T$ is the resulting singular value decomposition.
8. The eigenvalues of $A^TA = \begin{bmatrix}3&3\\3&3\end{bmatrix}^T\begin{bmatrix}3&3\\3&3\end{bmatrix} = \begin{bmatrix}18&18\\18&18\end{bmatrix}$ are λ₁ = 36 and λ₂ = 0, with corresponding unit eigenvectors $\mathbf{v}_1 = \begin{bmatrix}\frac{1}{\sqrt{2}}\\ \frac{1}{\sqrt{2}}\end{bmatrix}$ and $\mathbf{v}_2 = \begin{bmatrix}-\frac{1}{\sqrt{2}}\\ \frac{1}{\sqrt{2}}\end{bmatrix}$ respectively. The only singular value of A is σ₁ = 6, and we have $\mathbf{u}_1 = \frac{1}{\sigma_1}A\mathbf{v}_1 = \frac{1}{6}\begin{bmatrix}3&3\\3&3\end{bmatrix}\begin{bmatrix}\frac{1}{\sqrt{2}}\\ \frac{1}{\sqrt{2}}\end{bmatrix} = \begin{bmatrix}\frac{1}{\sqrt{2}}\\ \frac{1}{\sqrt{2}}\end{bmatrix}$. The vector $\mathbf{u}_2$ must be chosen so that $\{\mathbf{u}_1, \mathbf{u}_2\}$ is an orthonormal basis for R², e.g., $\mathbf{u}_2 = \begin{bmatrix}-\frac{1}{\sqrt{2}}\\ \frac{1}{\sqrt{2}}\end{bmatrix}$. This results in the singular value decomposition
$A = \begin{bmatrix}3&3\\3&3\end{bmatrix} = \begin{bmatrix}\frac{1}{\sqrt{2}}&-\frac{1}{\sqrt{2}}\\ \frac{1}{\sqrt{2}}&\frac{1}{\sqrt{2}}\end{bmatrix}\begin{bmatrix}6&0\\0&0\end{bmatrix}\begin{bmatrix}\frac{1}{\sqrt{2}}&\frac{1}{\sqrt{2}}\\ -\frac{1}{\sqrt{2}}&\frac{1}{\sqrt{2}}\end{bmatrix} = U\Sigma V^T$
9. The eigenvalues of $A^TA = \begin{bmatrix}-2&-1&2\\2&1&-2\end{bmatrix}\begin{bmatrix}-2&2\\-1&1\\2&-2\end{bmatrix} = \begin{bmatrix}9&-9\\-9&9\end{bmatrix}$ are λ₁ = 18 and λ₂ = 0, with corresponding unit eigenvectors $\mathbf{v}_1 = \begin{bmatrix}-\frac{1}{\sqrt{2}}\\ \frac{1}{\sqrt{2}}\end{bmatrix}$ and $\mathbf{v}_2 = \begin{bmatrix}\frac{1}{\sqrt{2}}\\ \frac{1}{\sqrt{2}}\end{bmatrix}$ respectively. The only singular value of A is $\sigma_1 = \sqrt{18} = 3\sqrt{2}$, and $\mathbf{u}_1 = \frac{1}{\sigma_1}A\mathbf{v}_1 = \begin{bmatrix}\frac{2}{3}\\ \frac{1}{3}\\ -\frac{2}{3}\end{bmatrix}$. Extending $\{\mathbf{u}_1\}$ to an orthonormal basis $\{\mathbf{u}_1, \mathbf{u}_2, \mathbf{u}_3\}$ for $R^3$ results in the singular value decomposition
$A = \begin{bmatrix}-2&2\\-1&1\\2&-2\end{bmatrix} = U\begin{bmatrix}3\sqrt{2}&0\\0&0\\0&0\end{bmatrix}V^T$
Note: the singular value decomposition is not unique; it depends on the choice of the (extended) orthonormal basis for R³. This is just one possibility.
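These hand computations can be compared against `numpy.linalg.svd`; a minimal sketch for the matrix of Exercise 9 (NumPy may choose different signs or a different basis extension, so only $U\Sigma V^T = A$ and the singular values themselves are canonical):

```python
import numpy as np

A = np.array([[-2.0, 2.0], [-1.0, 1.0], [2.0, -2.0]])   # matrix of Exercise 9
U, s, Vt = np.linalg.svd(A)

print(s)                                   # approx [4.2426, 0] = [3*sqrt(2), 0]
Sigma = np.zeros_like(A)
Sigma[:len(s), :len(s)] = np.diag(s)
print(np.allclose(U @ Sigma @ Vt, A))      # True
```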
10. The eigenvalues of AᵀA are λ₁ = 18 and λ₂ = λ₃ = 0. The vector v₁ is a unit eigenvector corresponding to λ₁ = 18, and two further vectors spanning the eigenspace corresponding to λ = 0 are orthonormalized by the Gram–Schmidt process to give v₂ and v₃. The vector u₂ must then be chosen so that {u₁, u₂} is an orthonormal basis for R², and $A = U\Sigma V^T$ follows.
11. Similarly, with unit eigenvectors v₁ and v₂ of AᵀA, the singular values of A are σ₁ = √3 and σ₂, and the singular value decomposition is assembled as before.
,..
singular value decomposition: I
[!l
'- 4 0 vf> ...!.. 4 0 :iS
-- .~ oJS
u, = so that {u, , u,, u,} Is an octhonocma\-basis roc R3 . This cesults m the rollowmg singular
value decornposttton
A =[~ ~] =[~
-::-sI
0 0][80]['2
1 ll 2 1] = l/EVT
../, _"J.
I 0 0 0 ll ... $ " 5
v5
13. Using the singular value decomposition $A = \begin{bmatrix}4 & 6\\ 0 & 4\end{bmatrix} = U\Sigma V^T$ found in Exercise 7, we have the following polar decomposition of $A$:
$$A = (U\Sigma U^T)(UV^T) = \begin{bmatrix}34/5 & 12/5\\ 12/5 & 16/5\end{bmatrix}\begin{bmatrix}4/5 & 3/5\\ -3/5 & 4/5\end{bmatrix} = PQ$$
14. Using the singular value decomposition $A = \begin{bmatrix}3 & 3\\ 3 & 3\end{bmatrix} = \begin{bmatrix}1/\sqrt{2} & -1/\sqrt{2}\\ 1/\sqrt{2} & 1/\sqrt{2}\end{bmatrix}\begin{bmatrix}6 & 0\\ 0 & 0\end{bmatrix}\begin{bmatrix}1/\sqrt{2} & 1/\sqrt{2}\\ -1/\sqrt{2} & 1/\sqrt{2}\end{bmatrix} = U\Sigma V^T$ found in Exercise 8, we have the following polar decomposition of $A$:
$$A = (U\Sigma U^T)(UV^T) = \begin{bmatrix}3 & 3\\ 3 & 3\end{bmatrix}\begin{bmatrix}1 & 0\\ 0 & 1\end{bmatrix} = PQ$$
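The same construction can be carried out numerically from any SVD; this sketch is an illustration of the technique used in Exercises 13-14, not part of the original solution:

```python
import numpy as np

# Polar decomposition A = P Q from the SVD: P = U S U^T is symmetric
# positive semidefinite and Q = U V^T is orthogonal.
A = np.array([[3.0, 3.0], [3.0, 3.0]])
U, s, Vt = np.linalg.svd(A)

P = U @ np.diag(s) @ U.T
Q = U @ Vt

print(np.allclose(P @ Q, A))             # True: A = P Q
print(np.allclose(Q @ Q.T, np.eye(2)))   # True: Q is orthogonal
```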
15. $A = \begin{bmatrix}1 & 0\\ 1 & 1\\ -1 & 1\end{bmatrix} = U\Sigma V^T$, computed as in the preceding exercises (details omitted).
16. Likewise, the given matrix factors as $A = U\Sigma V^T$ (details omitted).
17. The eigenvalue $\lambda = -1$ has multiplicity 1 and $\lambda = 3$ is an eigenvalue of multiplicity 2. The vector $\mathbf{v}_1 = \begin{bmatrix}-1/\sqrt{2}\\ 1/\sqrt{2}\\ 0\end{bmatrix}$ forms a basis for the eigenspace corresponding to $\lambda = -1$, and the vectors $\mathbf{v}_2 = \begin{bmatrix}1/\sqrt{2}\\ 1/\sqrt{2}\\ 0\end{bmatrix}$ and $\mathbf{v}_3 = \begin{bmatrix}0\\ 0\\ 1\end{bmatrix}$ form an (orthogonal) basis for the eigenspace corresponding to $\lambda = 3$. Thus the matrix $P = [\mathbf{v}_1\ \mathbf{v}_2\ \mathbf{v}_3]$ orthogonally diagonalizes $A$, and the eigenvalue decomposition of $A$ is
$$A = \begin{bmatrix}1 & 2 & 0\\ 2 & 1 & 0\\ 0 & 0 & 3\end{bmatrix} = \begin{bmatrix}-1/\sqrt{2} & 1/\sqrt{2} & 0\\ 1/\sqrt{2} & 1/\sqrt{2} & 0\\ 0 & 0 & 1\end{bmatrix}\begin{bmatrix}-1 & 0 & 0\\ 0 & 3 & 0\\ 0 & 0 & 3\end{bmatrix}\begin{bmatrix}-1/\sqrt{2} & 1/\sqrt{2} & 0\\ 1/\sqrt{2} & 1/\sqrt{2} & 0\\ 0 & 0 & 1\end{bmatrix}$$
The corresponding singular value decomposition of $A$ is obtained by shifting the negative sign from the diagonal factor to the second orthogonal factor:
$$A = \begin{bmatrix}1 & 2 & 0\\ 2 & 1 & 0\\ 0 & 0 & 3\end{bmatrix} = \begin{bmatrix}-1/\sqrt{2} & 1/\sqrt{2} & 0\\ 1/\sqrt{2} & 1/\sqrt{2} & 0\\ 0 & 0 & 1\end{bmatrix}\begin{bmatrix}1 & 0 & 0\\ 0 & 3 & 0\\ 0 & 0 & 3\end{bmatrix}\begin{bmatrix}1/\sqrt{2} & -1/\sqrt{2} & 0\\ 1/\sqrt{2} & 1/\sqrt{2} & 0\\ 0 & 0 & 1\end{bmatrix} = U\Sigma V^T$$
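The sign-shifting step generalizes to any symmetric matrix; a minimal NumPy sketch of the idea (an illustration, not the book's procedure, and it assumes no zero eigenvalues):

```python
import numpy as np

# For a symmetric matrix, an SVD can be read off from the eigenvalue
# decomposition by moving the signs of negative eigenvalues into V.
A = np.array([[1.0, 2.0, 0.0], [2.0, 1.0, 0.0], [0.0, 0.0, 3.0]])
lam, P = np.linalg.eigh(A)           # A = P diag(lam) P^T, lam ascending

Sigma = np.diag(np.abs(lam))         # singular values are |eigenvalues|
V = P * np.sign(lam)                 # flip the columns belonging to lam < 0
                                     # (columns are not reordered by size here)
print(np.allclose(P @ Sigma @ V.T, A))   # True: A = U Sigma V^T with U = P
```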
18. The characteristic polynomial of $A$ is $(\lambda + 1)(\lambda + 2)(\lambda - 4)$; thus the eigenvalues of $A$ are $\lambda_1 = -1$, $\lambda_2 = -2$, and $\lambda_3 = 4$. The matrix $P = [\mathbf{v}_1\ \mathbf{v}_2\ \mathbf{v}_3]$ of corresponding unit eigenvectors orthogonally diagonalizes $A$, and the eigenvalue decomposition of $A$ is $A = PDP^T$ with $D = \mathrm{diag}(-1, -2, 4)$. The corresponding singular value decomposition of $A$ is obtained by shifting the negative signs from the diagonal factor to the second orthogonal factor, giving $A = U\Sigma V^T$ with $\Sigma = \mathrm{diag}(1, 2, 4)$ (details omitted).
19. (a) The vectors $\mathbf{v}_1$ and $\mathbf{v}_2$ form a basis for $\mathrm{row}(A)$, and $\mathbf{v}_3$ forms a basis for $\mathrm{row}(A)^\perp = \mathrm{null}(A)$.
(b) Writing $A = U_1\Sigma_1 V_1^T$ in reduced form gives the corresponding decomposition of $A$ (details omitted).
20. Since $A = U\Sigma V^T$ and $V$ is orthogonal, we have $AV = U\Sigma$. Written in column vector form, this is
$$A\mathbf{v}_1 = \sigma_1\mathbf{u}_1,\ \ldots,\ A\mathbf{v}_k = \sigma_k\mathbf{u}_k, \qquad A\mathbf{v}_{k+1} = \mathbf{0},\ \ldots,\ A\mathbf{v}_n = \mathbf{0}$$
where $k$ is the number of nonzero singular values of $A$.
21. Since $A^TA$ is symmetric and positive semidefinite, its eigenvalues are nonnegative and its singular values coincide with its nonzero eigenvalues. On the other hand, the singular values of $A$ are the square roots of the nonzero eigenvalues of $A^TA$. Thus the singular values of $A^TA$ are the squares of the singular values of $A$.
22. If $A = U\Sigma V^T$ is a singular value decomposition of $A$, then $AA^T = U\Sigma V^TV\Sigma^TU^T = U(\Sigma\Sigma^T)U^T = UDU^T$, where $D$ is the diagonal matrix having the eigenvalues of $AA^T$ (the squares of the singular values of $A$) on its main diagonal. Thus $U^TAA^TU = D$, i.e., $U$ orthogonally diagonalizes $AA^T$.
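A quick NumPy check of the fact in Exercise 21, on a random matrix chosen only for illustration:

```python
import numpy as np

# Singular values of A^T A are the squares of the singular values of A.
rng = np.random.default_rng(0)
A = rng.standard_normal((4, 3))

s_A = np.linalg.svd(A, compute_uv=False)
s_AtA = np.linalg.svd(A.T @ A, compute_uv=False)

print(np.allclose(s_AtA, s_A**2))   # True
```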
23. We have $Q = \begin{bmatrix}\sqrt{3}/2 & 1/2\\ -1/2 & \sqrt{3}/2\end{bmatrix} = \begin{bmatrix}\cos\theta & -\sin\theta\\ \sin\theta & \cos\theta\end{bmatrix}$ where $\theta = 330°$; thus multiplication by $Q$ corresponds to rotation about the origin through an angle of $330°$. The symmetric matrix $P$ has eigenvalues $\lambda = 3$ and $\lambda = 1$, with corresponding unit eigenvectors $\mathbf{u}_1$ and $\mathbf{u}_2$. Thus $V = [\mathbf{u}_1\ \mathbf{u}_2]$ is a diagonalizing matrix for $P$:
$$V^TPV = \begin{bmatrix}3 & 0\\ 0 & 1\end{bmatrix}$$
D1. (a) If $A = U\Sigma V^T$ is a singular value decomposition of an $m \times n$ matrix of rank $k$, then $U$ has size $m \times m$, $\Sigma$ has size $m \times n$, and $V$ has size $n \times n$.
(b) If $A = U_1\Sigma_1 V_1^T$ is a reduced singular value decomposition of an $m \times n$ matrix of rank $k$, then $U_1$ has size $m \times k$, $\Sigma_1$ has size $k \times k$, and $V_1$ has size $n \times k$.
D2. If $A$ is an invertible matrix, then its singular values are nonzero. Thus if $A = U\Sigma V^T$ is a singular value decomposition of $A$, then $\Sigma$ is invertible and $A^{-1} = (V^T)^{-1}\Sigma^{-1}U^{-1} = V\Sigma^{-1}U^T$. Note also that the diagonal entries of $\Sigma^{-1}$ are the reciprocals of the diagonal entries of $\Sigma$, and these are the singular values of $A^{-1}$. Thus $A^{-1} = V\Sigma^{-1}U^T$ is a singular value decomposition of $A^{-1}$.
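A numerical check of D2 (illustration only; the matrix is random, not from the text):

```python
import numpy as np

# A^{-1} = V Sigma^{-1} U^T, recovered from the SVD of A.
rng = np.random.default_rng(1)
A = rng.standard_normal((3, 3))

U, s, Vt = np.linalg.svd(A)
A_inv = Vt.T @ np.diag(1 / s) @ U.T

print(np.allclose(A_inv, np.linalg.inv(A)))   # True
```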
D3. If $A$ and $B$ are orthogonally similar matrices, then there is an orthogonal matrix $P$ such that $B = PAP^T$. Thus, if $A = U\Sigma V^T$ is a singular value decomposition of $A$, then $B = (PU)\Sigma(PV)^T$ where $PU$ and $PV$ are orthogonal; this is a singular value decomposition of $B$, and so $A$ and $B$ have the same singular values.
D4. If $P$ is the matrix of the orthogonal projection of $R^n$ onto a subspace $W$ of dimension $k$, then $P^2 = P$ and the eigenvalues of $P$ are $\lambda = 1$ (with multiplicity $k$) and $\lambda = 0$ (with multiplicity $n - k$). Thus the singular values of $P$ are $\sigma_1 = 1, \sigma_2 = 1, \ldots, \sigma_k = 1$.
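A numerical illustration of D4 (the subspace below is spanned by random vectors, chosen only for the example):

```python
import numpy as np

# A projection matrix has all nonzero singular values equal to 1.
rng = np.random.default_rng(2)
B = rng.standard_normal((5, 2))              # basis of a 2-dim subspace W
P = B @ np.linalg.inv(B.T @ B) @ B.T         # orthogonal projection onto W

print(np.round(np.linalg.svd(P, compute_uv=False), 6))  # [1. 1. 0. 0. 0.]
```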
2. We have $A^TA = \begin{bmatrix}1 & 1 & 2\\ 1 & 3 & 1\end{bmatrix}\begin{bmatrix}1 & 1\\ 1 & 3\\ 2 & 1\end{bmatrix} = \begin{bmatrix}6 & 6\\ 6 & 11\end{bmatrix}$; thus the pseudoinverse of $A$ is
$$A^+ = (A^TA)^{-1}A^T = \frac{1}{30}\begin{bmatrix}11 & -6\\ -6 & 6\end{bmatrix}\begin{bmatrix}1 & 1 & 2\\ 1 & 3 & 1\end{bmatrix} = \begin{bmatrix}1/6 & -7/30 & 8/15\\ 0 & 2/5 & -1/5\end{bmatrix}$$
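This can be confirmed against NumPy's built-in pseudoinverse (a check, not part of the original solution):

```python
import numpy as np

# Pseudoinverse of a full-column-rank matrix: A+ = (A^T A)^{-1} A^T.
A = np.array([[1.0, 1.0], [1.0, 3.0], [2.0, 1.0]])

A_plus = np.linalg.inv(A.T @ A) @ A.T
print(np.allclose(A_plus, np.linalg.pinv(A)))  # True
print(np.round(A_plus, 4))  # [[ 0.1667 -0.2333  0.5333]
                            #  [ 0.      0.4    -0.2   ]]
```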
(b) $A^+AA^+ = \begin{bmatrix}1/5 & 2/5\end{bmatrix}\begin{bmatrix}1\\ 2\end{bmatrix}\begin{bmatrix}1/5 & 2/5\end{bmatrix} = [1]\begin{bmatrix}1/5 & 2/5\end{bmatrix} = \begin{bmatrix}1/5 & 2/5\end{bmatrix} = A^+$
The eigenvalues of $AA^T = \begin{bmatrix}9 & 12\\ 12 & 16\end{bmatrix}$ are $\lambda_1 = 25$ and $\lambda_2 = 0$, with corresponding unit eigenvectors $\mathbf{v}_1 = \begin{bmatrix}3/5\\ 4/5\end{bmatrix}$ and $\mathbf{v}_2 = \begin{bmatrix}-4/5\\ 3/5\end{bmatrix}$ respectively. The only singular value of $A^T$ is $\sigma_1 = 5$, and we have $\mathbf{u}_1 = \frac{1}{\sigma_1}A^T\mathbf{v}_1 = \frac{1}{5}\begin{bmatrix}3 & 4\end{bmatrix}\begin{bmatrix}3/5\\ 4/5\end{bmatrix} = [1]$. This results in the singular value decomposition $A^T = \begin{bmatrix}3 & 4\end{bmatrix} = [1]\begin{bmatrix}5 & 0\end{bmatrix}\begin{bmatrix}3/5 & 4/5\\ -4/5 & 3/5\end{bmatrix}$.
(f) The eigenvalues of $(A^+)^TA^+ = \frac{1}{625}\begin{bmatrix}9 & 12\\ 12 & 16\end{bmatrix}$ are $\lambda_1 = 1/25$ and $\lambda_2 = 0$, with corresponding unit eigenvectors $\mathbf{v}_1 = \begin{bmatrix}3/5\\ 4/5\end{bmatrix}$ and $\mathbf{v}_2 = \begin{bmatrix}-4/5\\ 3/5\end{bmatrix}$ respectively. The only singular value of $A^+$ is $\sigma_1 = 1/5$, and we have $\mathbf{u}_1 = \frac{1}{\sigma_1}A^+\mathbf{v}_1 = 5\begin{bmatrix}3/25 & 4/25\end{bmatrix}\begin{bmatrix}3/5\\ 4/5\end{bmatrix} = [1]$. This results in the singular value decomposition $A^+ = \begin{bmatrix}3/25 & 4/25\end{bmatrix} = [1]\begin{bmatrix}1/5 & 0\end{bmatrix}\begin{bmatrix}3/5 & 4/5\\ -4/5 & 3/5\end{bmatrix}$.
6. (a) $AA^+A = \begin{bmatrix}1 & 1\\ 1 & 3\\ 2 & 1\end{bmatrix}\begin{bmatrix}1/6 & -7/30 & 8/15\\ 0 & 2/5 & -1/5\end{bmatrix}\begin{bmatrix}1 & 1\\ 1 & 3\\ 2 & 1\end{bmatrix} = \begin{bmatrix}1 & 1\\ 1 & 3\\ 2 & 1\end{bmatrix} = A$
(b) $A^+AA^+ = \begin{bmatrix}1/6 & -7/30 & 8/15\\ 0 & 2/5 & -1/5\end{bmatrix}\begin{bmatrix}1 & 1\\ 1 & 3\\ 2 & 1\end{bmatrix}\begin{bmatrix}1/6 & -7/30 & 8/15\\ 0 & 2/5 & -1/5\end{bmatrix} = \begin{bmatrix}1/6 & -7/30 & 8/15\\ 0 & 2/5 & -1/5\end{bmatrix} = A^+$ (here $A^+A = I_2$)
(c) $AA^+ = \frac{1}{30}\begin{bmatrix}5 & 5 & 10\\ 5 & 29 & -2\\ 10 & -2 & 26\end{bmatrix}$ is symmetric, thus $(AA^+)^T = AA^+$.
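The four Penrose conditions verified here can also be checked mechanically (a NumPy illustration, not part of the original solution):

```python
import numpy as np

# Penrose conditions for the matrix of Exercise 6.
A = np.array([[1.0, 1.0], [1.0, 3.0], [2.0, 1.0]])
Ap = np.linalg.pinv(A)

print(np.allclose(A @ Ap @ A, A))        # (a) A A+ A = A
print(np.allclose(Ap @ A @ Ap, Ap))      # (b) A+ A A+ = A+
print(np.allclose((A @ Ap).T, A @ Ap))   # (c) A A+ is symmetric
print(np.allclose((Ap @ A).T, Ap @ A))   # (d) A+ A is symmetric
```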
The corresponding unit eigenvectors $\mathbf{v}_1$, $\mathbf{v}_2$, and $\mathbf{v}_3$ give the singular values of $A$, and from this it follows that $A^+ = V_1\Sigma_1^{-1}U_1^T$ (details omitted).
7. The only eigenvalue of $A^TA = [25]$ is $\lambda_1 = 25$, with corresponding unit eigenvector $\mathbf{v}_1 = [1]$. The only singular value of $A$ is $\sigma_1 = 5$. We have $\mathbf{u}_1 = \frac{1}{\sigma_1}A\mathbf{v}_1 = \frac{1}{5}\begin{bmatrix}3\\ 4\end{bmatrix}[1] = \begin{bmatrix}3/5\\ 4/5\end{bmatrix}$, and we choose $\mathbf{u}_2 = \begin{bmatrix}-4/5\\ 3/5\end{bmatrix}$ so that $\{\mathbf{u}_1, \mathbf{u}_2\}$ is an orthonormal basis for $R^2$. This results in the singular value decomposition
$$A = \begin{bmatrix}3\\ 4\end{bmatrix} = \begin{bmatrix}3/5 & -4/5\\ 4/5 & 3/5\end{bmatrix}\begin{bmatrix}5\\ 0\end{bmatrix}[1] = U\Sigma V^T$$
8. The eigenvalues of $A^TA = \begin{bmatrix}6 & 6\\ 6 & 11\end{bmatrix}$ are $\lambda_1 = 15$ and $\lambda_2 = 2$, with corresponding unit eigenvectors $\mathbf{v}_1 = \frac{1}{\sqrt{13}}\begin{bmatrix}2\\ 3\end{bmatrix}$ and $\mathbf{v}_2 = \frac{1}{\sqrt{13}}\begin{bmatrix}3\\ -2\end{bmatrix}$. The singular values of $A$ are $\sigma_1 = \sqrt{15}$ and $\sigma_2 = \sqrt{2}$. Setting
$$\mathbf{u}_1 = \frac{1}{\sigma_1}A\mathbf{v}_1 = \frac{1}{\sqrt{15}}\begin{bmatrix}1 & 1\\ 1 & 3\\ 2 & 1\end{bmatrix}\frac{1}{\sqrt{13}}\begin{bmatrix}2\\ 3\end{bmatrix} = \frac{1}{\sqrt{195}}\begin{bmatrix}5\\ 11\\ 7\end{bmatrix} \quad\text{and}\quad \mathbf{u}_2 = \frac{1}{\sigma_2}A\mathbf{v}_2 = \frac{1}{\sqrt{26}}\begin{bmatrix}1\\ -3\\ 4\end{bmatrix}$$
we have, and it follows that,
$$A^+ = V_1\Sigma_1^{-1}U_1^T = \frac{1}{\sqrt{13}}\begin{bmatrix}2 & 3\\ 3 & -2\end{bmatrix}\begin{bmatrix}1/\sqrt{15} & 0\\ 0 & 1/\sqrt{2}\end{bmatrix}\begin{bmatrix}5/\sqrt{195} & 11/\sqrt{195} & 7/\sqrt{195}\\ 1/\sqrt{26} & -3/\sqrt{26} & 4/\sqrt{26}\end{bmatrix} = \begin{bmatrix}1/6 & -7/30 & 8/15\\ 0 & 2/5 & -1/5\end{bmatrix}$$
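The same pseudoinverse falls out of the reduced SVD numerically (illustration only):

```python
import numpy as np

# A+ = V1 S1^{-1} U1^T from the reduced SVD, as in Exercise 8.
A = np.array([[1.0, 1.0], [1.0, 3.0], [2.0, 1.0]])
U, s, Vt = np.linalg.svd(A, full_matrices=False)   # reduced factors

A_plus = Vt.T @ np.diag(1 / s) @ U.T
print(np.allclose(A_plus, np.linalg.pinv(A)))      # True
```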
9. The eigenvalues of $A^TA = \begin{bmatrix}74 & 32\\ 32 & 26\end{bmatrix}$ are $\lambda_1 = 90$ and $\lambda_2 = 10$, with corresponding unit eigenvectors $\mathbf{v}_1 = \begin{bmatrix}2/\sqrt{5}\\ 1/\sqrt{5}\end{bmatrix}$ and $\mathbf{v}_2 = \begin{bmatrix}-1/\sqrt{5}\\ 2/\sqrt{5}\end{bmatrix}$. The singular values of $A$ are $\sigma_1 = \sqrt{90} = 3\sqrt{10}$ and $\sigma_2 = \sqrt{10}$. We have $\mathbf{u}_1 = \frac{1}{\sigma_1}A\mathbf{v}_1 = \begin{bmatrix}1/\sqrt{2}\\ 1/\sqrt{2}\\ 0\end{bmatrix}$ and $\mathbf{u}_2 = \frac{1}{\sigma_2}A\mathbf{v}_2 = \begin{bmatrix}-1/\sqrt{2}\\ 1/\sqrt{2}\\ 0\end{bmatrix}$, and we choose $\mathbf{u}_3 = \begin{bmatrix}0\\ 0\\ 1\end{bmatrix}$ so that $\{\mathbf{u}_1, \mathbf{u}_2, \mathbf{u}_3\}$ is an orthonormal basis for $R^3$. This yields the singular value decomposition
$$A = \begin{bmatrix}7 & 1\\ 5 & 5\\ 0 & 0\end{bmatrix} = \begin{bmatrix}1/\sqrt{2} & -1/\sqrt{2} & 0\\ 1/\sqrt{2} & 1/\sqrt{2} & 0\\ 0 & 0 & 1\end{bmatrix}\begin{bmatrix}3\sqrt{10} & 0\\ 0 & \sqrt{10}\\ 0 & 0\end{bmatrix}\begin{bmatrix}2/\sqrt{5} & 1/\sqrt{5}\\ -1/\sqrt{5} & 2/\sqrt{5}\end{bmatrix} = U\Sigma V^T$$
The corresponding reduced singular value decomposition is
$$A = \begin{bmatrix}1/\sqrt{2} & -1/\sqrt{2}\\ 1/\sqrt{2} & 1/\sqrt{2}\\ 0 & 0\end{bmatrix}\begin{bmatrix}3\sqrt{10} & 0\\ 0 & \sqrt{10}\end{bmatrix}\begin{bmatrix}2/\sqrt{5} & 1/\sqrt{5}\\ -1/\sqrt{5} & 2/\sqrt{5}\end{bmatrix} = U_1\Sigma_1 V_1^T$$
10. Here $\mathbf{u}_1 = \frac{1}{\sqrt{41}}\begin{bmatrix}4\\ 5\end{bmatrix}$, and we choose $\mathbf{u}_2 = \frac{1}{\sqrt{41}}\begin{bmatrix}-5\\ 4\end{bmatrix}$ so that $\{\mathbf{u}_1, \mathbf{u}_2\}$ is an orthonormal basis for $R^2$. This results in the singular value decomposition
$$A = \begin{bmatrix}4\\ 5\end{bmatrix} = \begin{bmatrix}4/\sqrt{41} & -5/\sqrt{41}\\ 5/\sqrt{41} & 4/\sqrt{41}\end{bmatrix}\begin{bmatrix}\sqrt{41}\\ 0\end{bmatrix}[1] = U\Sigma V^T$$
The corresponding reduced singular value decomposition is
$$A = \begin{bmatrix}4/\sqrt{41}\\ 5/\sqrt{41}\end{bmatrix}\,[\sqrt{41}]\,[1] = U_1\Sigma_1 V_1^T$$
11. Since $A$ is invertible, it has full column rank, so $A^+ = (A^TA)^{-1}A^T = A^{-1}(A^T)^{-1}A^T = A^{-1}$; carrying out the computation for the given matrix confirms that $A^+ = A^{-1}$.
12. Since $A$ has full column rank, we have $A^+ = (A^TA)^{-1}A^T$ (details omitted).
13. The matrix $A = \begin{bmatrix}1 & 1\\ 1 & 1\\ 0 & 0\end{bmatrix}$ does not have full column rank, so Formula (3) does not apply. The eigenvalues of $A^TA = \begin{bmatrix}2 & 2\\ 2 & 2\end{bmatrix}$ are $\lambda_1 = 4$ and $\lambda_2 = 0$, with corresponding unit eigenvectors $\mathbf{v}_1 = \begin{bmatrix}1/\sqrt{2}\\ 1/\sqrt{2}\end{bmatrix}$ and $\mathbf{v}_2 = \begin{bmatrix}-1/\sqrt{2}\\ 1/\sqrt{2}\end{bmatrix}$. The only singular value of $A$ is $\sigma_1 = 2$. We have $\mathbf{u}_1 = \frac{1}{2}A\mathbf{v}_1 = \begin{bmatrix}1/\sqrt{2}\\ 1/\sqrt{2}\\ 0\end{bmatrix}$, and extending to an orthonormal basis of $R^3$ gives
$$A = \begin{bmatrix}1 & 1\\ 1 & 1\\ 0 & 0\end{bmatrix} = \begin{bmatrix}1/\sqrt{2} & -1/\sqrt{2} & 0\\ 1/\sqrt{2} & 1/\sqrt{2} & 0\\ 0 & 0 & 1\end{bmatrix}\begin{bmatrix}2 & 0\\ 0 & 0\\ 0 & 0\end{bmatrix}\begin{bmatrix}1/\sqrt{2} & 1/\sqrt{2}\\ -1/\sqrt{2} & 1/\sqrt{2}\end{bmatrix} = U\Sigma V^T$$
15. The standard matrix for the orthogonal projection of $R^3$ onto $\mathrm{col}(A)$ is
$$AA^+ = \begin{bmatrix}1 & 1\\ 1 & 3\\ 2 & 1\end{bmatrix}\begin{bmatrix}1/6 & -7/30 & 8/15\\ 0 & 2/5 & -1/5\end{bmatrix} = \begin{bmatrix}1/6 & 1/6 & 1/3\\ 1/6 & 29/30 & -1/15\\ 1/3 & -1/15 & 13/15\end{bmatrix}$$
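A quick NumPy check that this matrix behaves like a projection (illustration only):

```python
import numpy as np

# The projection matrix onto col(A) from Exercise 15.
A = np.array([[1.0, 1.0], [1.0, 3.0], [2.0, 1.0]])
P = A @ np.linalg.pinv(A)

print(np.allclose(P, P.T))      # symmetric
print(np.allclose(P @ P, P))    # idempotent
print(np.allclose(P @ A, A))    # fixes every vector in col(A)
```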
16. The standard matrix for the orthogonal projection of $R^3$ onto $\mathrm{col}(A)$ is $AA^+$, computed as in Exercise 15 (details omitted).
17. The given system can be written in matrix form as $A\mathbf{x} = \mathbf{b}$, where $A = \begin{bmatrix}1 & 1\\ 2 & 2\\ 2 & 2\end{bmatrix}$ and $\mathbf{b}$ is the given right-hand side. The matrix $A$ has the following reduced singular value decomposition and pseudoinverse:
$$A = U_1\Sigma_1 V_1^T = \frac{1}{3}\begin{bmatrix}1\\ 2\\ 2\end{bmatrix}\,[3\sqrt{2}]\,\frac{1}{\sqrt{2}}\begin{bmatrix}1 & 1\end{bmatrix}, \qquad A^+ = V_1\Sigma_1^{-1}U_1^T = \frac{1}{\sqrt{2}}\begin{bmatrix}1\\ 1\end{bmatrix}\left[\frac{1}{3\sqrt{2}}\right]\frac{1}{3}\begin{bmatrix}1 & 2 & 2\end{bmatrix} = \begin{bmatrix}1/18 & 1/9 & 1/9\\ 1/18 & 1/9 & 1/9\end{bmatrix}$$
Thus the least squares solution of minimum norm for the system $A\mathbf{x} = \mathbf{b}$ is $\mathbf{x} = A^+\mathbf{b}$.
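The minimum-norm property can be observed numerically; the right-hand side b below is a made-up example, not the book's data:

```python
import numpy as np

# x = A+ b is the minimum-norm least squares solution; lstsq (SVD-based)
# returns the same vector for rank-deficient systems.
A = np.array([[1.0, 1.0], [2.0, 2.0], [2.0, 2.0]])
b = np.array([1.0, 2.0, 3.0])                     # illustrative data

x_pinv = np.linalg.pinv(A) @ b
x_lstsq = np.linalg.lstsq(A, b, rcond=None)[0]

print(np.allclose(x_pinv, x_lstsq))   # True
```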
18. The given system can be written as $A\mathbf{x} = \mathbf{b}$ for the given coefficient matrix $A$ and right-hand side $\mathbf{b}$. Since $A$ has full row rank, the pseudoinverse of $A$ is $A^+ = A^T(AA^T)^{-1}$, and the least squares solution of minimum norm for the system is $\mathbf{x} = A^+\mathbf{b}$ (details omitted).
19. Since $A^T$ has full column rank, we have $(A^+)^T = (A^T)^+ = (AA^T)^{-1}A = \frac{1}{14}\begin{bmatrix}1 & 2 & 3\end{bmatrix} = \begin{bmatrix}1/14 & 1/7 & 3/14\end{bmatrix}$, and so $A^+ = \begin{bmatrix}1/14\\ 1/7\\ 3/14\end{bmatrix}$.
20. The matrix $A^T$ has full column rank and, from Exercise 18, we have $(A^T)^+ = (A^+)^T$; thus $(A^T)^+$ is the transpose of the pseudoinverse found in Exercise 18 (details omitted).
D1. If $A = [\mathbf{c}_1\ \mathbf{c}_2\ \cdots\ \mathbf{c}_n]$ is an $m \times n$ matrix with orthogonal (nonzero) column vectors, then $A^TA = \mathrm{diag}(\|\mathbf{c}_1\|^2, \ldots, \|\mathbf{c}_n\|^2)$ and
$$A^+ = \begin{bmatrix}1/\|\mathbf{c}_1\|^2 & 0 & \cdots & 0\\ 0 & 1/\|\mathbf{c}_2\|^2 & \cdots & 0\\ \vdots & \vdots & \ddots & \vdots\\ 0 & 0 & \cdots & 1/\|\mathbf{c}_n\|^2\end{bmatrix}A^T$$
D2. (a) If $A = \sigma\mathbf{u}\mathbf{v}^T$, then $A^+ = \frac{1}{\sigma}\mathbf{v}\mathbf{u}^T$.
(b) $A^+A = \left(\frac{1}{\sigma}\mathbf{v}\mathbf{u}^T\right)(\sigma\mathbf{u}\mathbf{v}^T) = \mathbf{v}(\mathbf{u}^T\mathbf{u})\mathbf{v}^T = \mathbf{v}\mathbf{v}^T$ and $AA^+ = (\sigma\mathbf{u}\mathbf{v}^T)\left(\frac{1}{\sigma}\mathbf{v}\mathbf{u}^T\right) = \mathbf{u}\mathbf{u}^T$.
D3. If $c$ is a nonzero scalar, then $(cA)^+ = \frac{1}{c}A^+$.
D5. (a) $AA^+$ and $A^+A$ are the standard matrices of orthogonal projection operators.
(b) Using parts (a) and (b) of Theorem 8.7.2, we have $(AA^+)(AA^+) = A(A^+AA^+) = AA^+$ and $(A^+A)(A^+A) = A^+(AA^+A) = A^+A$; thus $AA^+$ and $A^+A$ are idempotent.
P6. Using P4 and P2, we have $(AA^+)^T = (A^{++}A^+)^T = A^{++}A^+ = AA^+$.
P7. First note that, as in Exercise P2, we have $AA^+ = (U_1\Sigma_1 V_1^T)(V_1\Sigma_1^{-1}U_1^T) = U_1U_1^T$. Thus, since the columns of $U_1$ form an orthonormal basis for $\mathrm{col}(A)$, the matrix $AA^+ = U_1U_1^T$ is the standard matrix of the orthogonal projection of $R^m$ onto $\mathrm{col}(A)$.
P8. It follows from Exercise P7 that $A^T(A^T)^+$ is the standard matrix of the orthogonal projection of $R^n$ onto $\mathrm{col}(A^T) = \mathrm{row}(A)$. Furthermore, using parts (d), (e), and (f) of Theorem 8.7.2, we have $A^T(A^T)^+ = A^T(A^+)^T = (A^+A)^T = A^+A$, and so $A^+A$ is the matrix of the orthogonal projection of $R^n$ onto $\mathrm{row}(A)$.
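Both projection facts from P7 and P8 can be checked numerically (the rank-deficient matrix below is random, chosen only for illustration):

```python
import numpy as np

# A A+ projects onto col(A); A+ A projects onto row(A).
rng = np.random.default_rng(3)
A = rng.standard_normal((4, 3)) @ rng.standard_normal((3, 5))   # rank <= 3
Ap = np.linalg.pinv(A)

print(np.allclose((A @ Ap) @ A, A))      # A A+ fixes the columns of A
print(np.allclose(A @ (Ap @ A), A))      # A+ A fixes the rows of A
print(np.allclose((Ap @ A).T, Ap @ A))   # the projection is symmetric
```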