
Homework 9 Solutions

Chapter 6.1
   
0 −4
14. Since u = −5 and z = −1, ku − zk2 = [0 − (−4)]2 + [−5 − (−1)]2 + [2 − 8]2 = 68
  
2 8
√ √
and dist(u, z) = 68 = 2 17.
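
A quick numerical check of this distance computation (a verification sketch, not part of the written solution):

```python
import numpy as np

u = np.array([0, -5, 2])
z = np.array([-4, -1, 8])

print(np.sum((u - z) ** 2))     # 68
print(np.linalg.norm(u - z))    # 8.2462... = sqrt(68)
print(2 * np.sqrt(17))          # same value, 2*sqrt(17)
```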

19. a. True. See the definition of ‖v‖.


b. True. See Theorem 1(c).
c. True. See the discussion of Figure 5.
d. False. Counterexample: the 2 × 2 matrix with rows (1, 1) and (0, 0).
e. True. See the box following Example 6.

23. One computes that u · v = 2(−7) + (−5)(−4) + (−1)(6) = 0, ‖u‖² = u · u = 2² + (−5)² +
(−1)² = 30, ‖v‖² = v · v = (−7)² + (−4)² + 6² = 101, and ‖u + v‖² = (u + v) · (u + v) =
(2 + (−7))² + (−5 + (−4))² + (−1 + 6)² = 131. Since u · v = 0 and 131 = 30 + 101, this
verifies the Pythagorean Theorem: ‖u + v‖² = ‖u‖² + ‖v‖².
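
A short NumPy check of these four quantities and of the Pythagorean identity they confirm:

```python
import numpy as np

u = np.array([2, -5, -1])
v = np.array([-7, -4, 6])

print(u @ v)                               # 0, so u and v are orthogonal
print(u @ u, v @ v, (u + v) @ (u + v))     # 30 101 131
print((u + v) @ (u + v) == u @ u + v @ v)  # True
```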

28. An arbitrary w in Span{u, v} has the form w = c1 u + c2 v. If y is orthogonal to u


and v, then u · y = v · y = 0. By Theorem 1(b) and 1(c), w · y = (c1 u + c2 v) · y =
c1 (u · y) + c2 (v · y) = 0 + 0 = 0.

30. a. If z is in W⊥, u is in W, and c is any scalar, then (cz) · u = c(z · u) = c(0) = 0.
Since u is an arbitrary element of W, cz is in W⊥.
b. Let z1 and z2 be in W⊥. Then for any u in W, (z1 + z2) · u = z1 · u + z2 · u = 0 + 0 = 0.
Thus z1 + z2 is in W⊥.
c. Since 0 is orthogonal to every vector, 0 is in W⊥. Thus W⊥ is a subspace.

Chapter 6.2
9. Since u1 · u2 = u1 · u3 = u2 · u3 = 0, {u1, u2, u3} is an orthogonal set. Since the
vectors are nonzero, u1, u2, and u3 are linearly independent by Theorem 4. Three
such vectors in R³ automatically form a basis for R³. So {u1, u2, u3} is an orthogonal
basis for R³. By Theorem 5,
x = [(x · u1)/(u1 · u1)]u1 + [(x · u2)/(u2 · u2)]u2 + [(x · u3)/(u3 · u3)]u3 = (5/2)u1 − (3/2)u2 + 2u3.
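
The weight formula from Theorem 5 is easy to automate. The sketch below uses placeholder vectors (the u1, u2, u3 and x of this exercise are not reproduced here), so it illustrates only the method, not the exercise's numbers:

```python
import numpy as np

def weights(x, basis):
    """Coefficients c_i = (x . u_i)/(u_i . u_i) for an orthogonal basis."""
    return [float(x @ u) / float(u @ u) for u in basis]

# Hypothetical orthogonal basis of R^3, chosen for illustration only
u1 = np.array([1.0, 1.0, 0.0])
u2 = np.array([1.0, -1.0, 0.0])
u3 = np.array([0.0, 0.0, 1.0])
x = np.array([3.0, 1.0, 4.0])

c = weights(x, [u1, u2, u3])
print(c)                            # [2.0, 1.0, 4.0]
print(c[0]*u1 + c[1]*u2 + c[2]*u3)  # [3. 1. 4.], recovering x
```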

11. Let y = (1, 7) and u = (−4, 2). The orthogonal projection of y onto the line through u
and the origin is the orthogonal projection of y onto u, and this vector is
ŷ = [(y · u)/(u · u)]u = (1/2)u = (−2, 1).
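
A one-line NumPy check of this projection:

```python
import numpy as np

y = np.array([1.0, 7.0])
u = np.array([-4.0, 2.0])

print((y @ u) / (u @ u) * u)    # [-2.  1.]
```
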
13. The orthogonal projection of y onto u is
ŷ = [(y · u)/(u · u)]u = (−13/65)u = (−4/5, 7/5).
The component of y orthogonal to u is y − ŷ = (14/5, 8/5). Thus
y = ŷ + (y − ŷ) = (−4/5, 7/5) + (14/5, 8/5).
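
Using only the two components just computed, one can check that they are orthogonal and that they add back to y (which works out to (2, 3)):

```python
import numpy as np

y_hat = np.array([-4/5, 7/5])    # projection of y onto u
y_perp = np.array([14/5, 8/5])   # component of y orthogonal to u

print(np.isclose(y_hat @ y_perp, 0))   # True: the components are orthogonal
print(y_hat + y_perp)                  # [2. 3.], which is y
```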

15. The distance from y to the line through u and the origin is ‖y − ŷ‖. One computes
that y − ŷ = y − [(y · u)/(u · u)]u = (3, 1) − (3/10)(8, 6) = (3/5, −4/5), so
‖y − ŷ‖ = √(9/25 + 16/25) = 1 is the desired distance.
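
A quick numerical confirmation of this distance:

```python
import numpy as np

y = np.array([3.0, 1.0])
u = np.array([8.0, 6.0])

y_hat = (y @ u) / (u @ u) * u
print(y - y_hat)                  # [ 0.6 -0.8], i.e. (3/5, -4/5)
print(np.linalg.norm(y - y_hat))  # 1.0
```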

23. a. True. For example, the vectors u and y in Example 3 are linearly independent
but not orthogonal.
b. True. The formulas for the weights are given in Theorem 5.
c. False. See the paragraph following Example 5.
d. False. The matrix must also be square. See the paragraph before Example 7.
e. False. See Example 4. The distance is ‖y − ŷ‖.

25. To prove part (b), note that (Ux) · (Uy) = (Ux)ᵀ(Uy) = xᵀUᵀUy = xᵀy = x · y
because UᵀU = I. If y = x in part (b), then (Ux) · (Ux) = x · x, which implies part (a).
Part (c) of the Theorem follows immediately from part (b).
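
A numerical illustration of parts (a) and (b), using a hypothetical 3 × 2 matrix U with orthonormal columns (not a matrix from the text):

```python
import numpy as np

U = np.array([[1/np.sqrt(2), 0.0],
              [1/np.sqrt(2), 0.0],
              [0.0,          1.0]])
x = np.array([1.0, 2.0])
y = np.array([3.0, -1.0])

print(np.allclose(U.T @ U, np.eye(2)))                       # True: U^T U = I
print(np.isclose((U @ x) @ (U @ y), x @ y))                  # True: part (b)
print(np.isclose(np.linalg.norm(U @ x), np.linalg.norm(x)))  # True: part (a)
```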

28. If U is an n × n orthogonal matrix, then I = UU⁻¹ = UUᵀ. Since U is the transpose of
Uᵀ, Theorem 6 applied to Uᵀ says that Uᵀ has orthonormal columns. In particular, the
columns of Uᵀ are linearly independent and hence form a basis for Rⁿ by the Invertible
Matrix Theorem. That is, the rows of U form a basis (an orthonormal basis) for Rⁿ.

31. Suppose that ŷ = [(y · u)/(u · u)]u. Replacing u by cu with c ≠ 0 gives
[(y · (cu))/((cu) · (cu))](cu) = [c(y · u)/(c²(u · u))](c)u = [(y · u)/(u · u)]u = ŷ.
So ŷ does not depend on the choice of a nonzero u in the line L used in the formula.
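
The scale-invariance can also be seen numerically. The vectors below are hypothetical (not from the exercise); any nonzero scalar c gives the same projection:

```python
import numpy as np

def proj(y, u):
    return (y @ u) / (u @ u) * u

y = np.array([2.0, 5.0])
u = np.array([3.0, 1.0])

for c in [1.0, -2.0, 0.5, 10.0]:
    print(proj(y, c * u))    # [3.3 1.1] every time
```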

33. Let L = Span{u}, where u is nonzero, and let T(x) = [(x · u)/(u · u)]u. For any vectors x and y
in Rⁿ and any scalars c and d, the properties of the inner product (Theorem 1) show that
T(cx + dy) = [((cx + dy) · u)/(u · u)]u = [(cx · u + dy · u)/(u · u)]u
= [(cx · u)/(u · u)]u + [(dy · u)/(u · u)]u = cT(x) + dT(y).
Thus T is a linear transformation. Another approach is to view T as the composition of the
following three linear mappings: x ↦ a = x · u, a ↦ b = a/(u · u), and b ↦ bu.
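
A small numerical spot-check of this linearity, with hypothetical u, x, y, c, d:

```python
import numpy as np

u = np.array([1.0, -2.0, 2.0])     # hypothetical nonzero u defining L = Span{u}

def T(x):
    return (x @ u) / (u @ u) * u   # projection onto the line L

x, y = np.array([3.0, 0.0, 1.0]), np.array([-1.0, 4.0, 2.0])
c, d = 2.0, -3.0

print(np.allclose(T(c*x + d*y), c*T(x) + d*T(y)))    # True
```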

Chapter 6.3
3. Since u1 · u2 = −1 + 1 + 0 = 0, {u1, u2} is an orthogonal set. The orthogonal projection of
y onto Span{u1, u2} is
ŷ = [(y · u1)/(u1 · u1)]u1 + [(y · u2)/(u2 · u2)]u2 = (3/2)u1 + (5/2)u2 = (3/2)(1, 1, 0) + (5/2)(−1, 1, 0) = (−1, 4, 0).
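
A check of the final arithmetic (y itself is not repeated here, so only the last step is verified):

```python
import numpy as np

u1 = np.array([1.0, 1.0, 0.0])
u2 = np.array([-1.0, 1.0, 0.0])

print(u1 @ u2)                # 0.0, so {u1, u2} is orthogonal
print((3/2)*u1 + (5/2)*u2)    # [-1.  4.  0.]
```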

9. Since u1 · u2 = u1 · u3 = u2 · u3 = 0, {u1, u2, u3} is an orthogonal set. By the Orthogonal
Decomposition Theorem,
ŷ = [(y · u1)/(u1 · u1)]u1 + [(y · u2)/(u2 · u2)]u2 + [(y · u3)/(u3 · u3)]u3 = 2u1 + (2/3)u2 − (2/3)u3 = (2, 4, 0, 0),
z = y − ŷ = (2, −1, 3, −1), and y = ŷ + z, where ŷ is in W and z is in W⊥.
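
Using only the two pieces computed above, one can confirm the decomposition numerically:

```python
import numpy as np

y_hat = np.array([2.0, 4.0, 0.0, 0.0])    # the piece in W
z = np.array([2.0, -1.0, 3.0, -1.0])      # the piece in W-perp

print(y_hat @ z)     # 0.0: the two pieces are orthogonal
print(y_hat + z)     # [ 4.  3.  3. -1.], which is y
```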

11. Note that v1 and v2 are orthogonal. The Best Approximation Theorem says that ŷ,
which is the orthogonal projection of y onto W = Span{v1, v2}, is the closest point to
y in W. This vector is
ŷ = [(y · v1)/(v1 · v1)]v1 + [(y · v2)/(v2 · v2)]v2 = (1/2)v1 + (3/2)v2 = (3, −1, 1, −1).

13. Note that v1 and v2 are orthogonal. By the Best Approximation Theorem, the closest
point in Span{v1, v2} to z is
ẑ = [(z · v1)/(v1 · v1)]v1 + [(z · v2)/(v2 · v2)]v2 = (2/3)v1 − (7/3)v2 = (−1, −3, −2, 3).

15. The distance from the point y in R³ to a subspace W is defined as the distance from
y to the closest point in W. Since the closest point in W to y is ŷ = proj_W y, the
desired distance is ‖y − ŷ‖. One computes that ŷ = (3, −9, −1), y − ŷ = (2, 0, 6), and
‖y − ŷ‖ = √40 = 2√10.
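
A quick check of the final norm:

```python
import numpy as np

diff = np.array([2.0, 0.0, 6.0])    # y - y_hat
print(np.linalg.norm(diff))         # 6.3245... = sqrt(40)
print(2 * np.sqrt(10))              # same value
```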

 
17. a. UᵀU = [1 0; 0 1] = I₂ and UUᵀ = [8/9 −2/9 2/9; −2/9 5/9 4/9; 2/9 4/9 5/9].
b. Since UᵀU = I₂, the columns of U form an orthonormal basis for W, and by Theorem 10,
proj_W y = UUᵀy = [8/9 −2/9 2/9; −2/9 5/9 4/9; 2/9 4/9 5/9](4, 8, 1) = (2, 4, 5).
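
A check that UUᵀ behaves as a projection and reproduces the answer in part (b):

```python
import numpy as np

UUT = np.array([[ 8/9, -2/9, 2/9],
                [-2/9,  5/9, 4/9],
                [ 2/9,  4/9, 5/9]])
y = np.array([4.0, 8.0, 1.0])

print(UUT @ y)                        # [2. 4. 5.]
print(np.allclose(UUT @ UUT, UUT))    # True: UU^T is idempotent, as a projection must be
```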

19. By the Orthogonal Decomposition Theorem, u3 is the sum of a vector in W =
Span{u1, u2} and a vector v orthogonal to W. This exercise asks for the vector v:
v = u3 − proj_W u3 = u3 − (−(1/3)u1 + (1/15)u2) = (0, 0, 1) − (0, −2/5, 4/5) = (0, 2/5, 1/5).
Any multiple of the vector v will also be orthogonal to W.

21. a. True. See the calculations for z2 in Example 1 or the box after Example 6 in
Section 6.1.
b. True. See the Orthogonal Decomposition Theorem.
c. False. See the last paragraph in the proof of Theorem 8, or see the second
paragraph after the statement of Theorem 9.
d. True. See the box before the Best Approximation Theorem.
e. True. Theorem 10 applies to the column space W of U because the columns of
U are linearly independent and hence form a basis for W .

23. By the Orthogonal Decomposition Theorem, each x in Rn can be written uniquely


as x = p + u, with p in Row A and u in (Row A)⊥ . By Theorem 3 in Section 6.1,
(Row A)⊥ = Nul A, so u is in Nul A. Next, suppose Ax = b is consistent. Let x be a
solution and write x = p+u as above. Then Ap = A(x−u) = Ax−Au = b−0 = b, so
the equation Ax = b has at least one solution p in Row A. Finally, suppose that p and
p1 are both in Row A and both satisfy Ax = b. Then p − p1 is in Nul A = (Row A)⊥ ,
since A(p − p1 ) = Ap − Ap1 = b − b = 0. The equations p = p1 + (p − p1 ) and
p = p + 0 both then decompose p as the sum of a vector in Row A and a vector in
(Row A)⊥ . By the uniqueness of the orthogonal decomposition (Theorem 8), p = p1 ,
and p is unique.

Chapter 6.4
11. Call the columns of the matrix x1, x2, and x3 and perform the Gram-Schmidt process
on these vectors:
v1 = x1
v2 = x2 − [(x2 · v1)/(v1 · v1)]v1 = x2 − (−1)v1 = (3, 0, 3, −3, 3)
v3 = x3 − [(x3 · v1)/(v1 · v1)]v1 − [(x3 · v2)/(v2 · v2)]v2 = x3 − 4v1 − (−1/3)v2 = (2, 0, 2, 2, −2)
Thus an orthogonal basis for W is {(1, −1, −1, 1, 1), (3, 0, 3, −3, 3), (2, 0, 2, 2, −2)}.
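
The Gram-Schmidt process itself is short to code. The function below is a generic sketch (the original columns x1, x2, x3 are not reproduced here); the printed check simply confirms that the basis found above is mutually orthogonal:

```python
import numpy as np

def gram_schmidt(columns):
    """Return an orthogonal basis for Span(columns) via the Gram-Schmidt process."""
    basis = []
    for x in columns:
        v = x.astype(float)
        for u in basis:
            v = v - (x @ u) / (u @ u) * u   # subtract the projection onto each earlier vector
        basis.append(v)
    return basis

v1 = np.array([1, -1, -1, 1, 1])
v2 = np.array([3, 0, 3, -3, 3])
v3 = np.array([2, 0, 2, 2, -2])
print(v1 @ v2, v1 @ v3, v2 @ v3)     # 0 0 0
print(gram_schmidt([v1, v2, v3]))    # already orthogonal, so returned essentially unchanged
```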
 
13. Since A and Q are given,
R = QᵀA = [5/6 1/6 −3/6 1/6; −1/6 5/6 1/6 3/6][5 9; 1 7; −3 −5; 1 5] = [6 12; 0 6].
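
Both factors are written out above, so the product is easy to verify:

```python
import numpy as np

QT = np.array([[ 5/6, 1/6, -3/6, 1/6],
               [-1/6, 5/6,  1/6, 3/6]])
A = np.array([[ 5,  9],
              [ 1,  7],
              [-3, -5],
              [ 1,  5]])

print(QT @ A)    # [[ 6. 12.]
                 #  [ 0.  6.]]
```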

15. The columns of Q will be normalized versions of the vectors v1, v2, and v3 found in
Exercise 11. Thus
Q = [1/√5 1/2 1/2; −1/√5 0 0; −1/√5 1/2 1/2; 1/√5 −1/2 1/2; 1/√5 1/2 −1/2] and
R = QᵀA = [√5 −√5 4√5; 0 6 −2; 0 0 4].

17. a. False. Scaling was used in Example 2, but the scale factor was nonzero.
b. True. See (1) in the statement of Theorem 11.
c. True. See the solution of Example 4.

19. Suppose that x satisfies Rx = 0; then QRx = Q0 = 0, and Ax = 0. Since the columns
of A are linearly independent, x must be 0. This fact, in turn, shows that the columns
of R are linearly independent. Since R is square, it is invertible by the Invertible Matrix
Theorem.

21. Denote the columns of Q by {q1, . . . , qn}. Note that n ≤ m, because A is m × n and has
linearly independent columns. The columns of Q can be extended to an orthonormal
basis for Rᵐ as follows. Let f1 be the first vector in the standard basis for Rᵐ that is
not in Wn = Span{q1, . . . , qn}, let u1 = f1 − proj_Wn f1, and let qn+1 = u1/‖u1‖. Then
{q1, . . . , qn, qn+1} is an orthonormal basis for Wn+1 = Span{q1, . . . , qn, qn+1}. Next
let f2 be the first vector in the standard basis for Rᵐ that is not in Wn+1, let u2 =
f2 − proj_Wn+1 f2, and let qn+2 = u2/‖u2‖. Then {q1, . . . , qn, qn+1, qn+2} is an orthonormal
basis for Wn+2 = Span{q1, . . . , qn, qn+1, qn+2}. This process continues until m − n
vectors have been added to the original n vectors, and {q1, . . . , qn, qn+1, . . . , qm} is an
orthonormal basis for Rᵐ. Let Q0 = [qn+1 . . . qm] and Q1 = [Q Q0]. Then, using
partitioned matrix multiplication, Q1[R; 0] = QR = A, where [R; 0] denotes R stacked on
top of an (m − n) × n zero block.
