Chapter 6.1
14. Since u = (0, −5, 2) and z = (−4, −1, 8), ‖u − z‖² = [0 − (−4)]² + [−5 − (−1)]² + [2 − 8]² = 68,
and dist(u, z) = √68 = 2√17.
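The arithmetic above can be checked with a short script; this sketch uses the vectors u and z from the solution:

```python
import math

# u and z from the exercise
u = [0, -5, 2]
z = [-4, -1, 8]

# squared distance: sum of squared coordinate differences
dist_sq = sum((ui - zi) ** 2 for ui, zi in zip(u, z))
dist = math.sqrt(dist_sq)

print(dist_sq)  # 68
print(dist)     # equals 2*sqrt(17)
```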
Chapter 6.2
9. Since u1 · u2 = u1 · u3 = u2 · u3 = 0, {u1 , u2 , u3 } is an orthogonal set. Since the
vectors are non-zero, u1 , u2 , and u3 are linearly independent by Theorem 4. Three
such vectors in R3 automatically form a basis for R3 . So {u1 , u2 , u3 } is an orthogonal
basis for R3 . By Theorem 5, x = (x·u1)/(u1·u1) u1 + (x·u2)/(u2·u2) u2 + (x·u3)/(u3·u3) u3 = (5/2)u1 − (3/2)u2 + 2u3 .
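The weight formula from Theorem 5 is easy to verify numerically. The vectors below are illustrative (not the exercise's u1, u2, u3): any orthogonal set of nonzero vectors works, and reconstructing x from the weighted sum confirms the formula.

```python
# Illustrative orthogonal set (not the exercise's vectors) and a target x.
u1 = [1, 1, 0]
u2 = [1, -1, 0]
u3 = [0, 0, 1]
x  = [3, 5, 2]

def dot(a, b):
    return sum(ai * bi for ai, bi in zip(a, b))

# Theorem 5: the weight on each basis vector is (x . ui) / (ui . ui)
weights = [dot(x, u) / dot(u, u) for u in (u1, u2, u3)]

# Rebuild x from the weighted sum to confirm the expansion
recon = [sum(w * u[i] for w, u in zip(weights, (u1, u2, u3)))
         for i in range(3)]
print(weights)  # [4.0, -1.0, 2.0]
print(recon)    # [3.0, 5.0, 2.0]
```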
11. Let y = (1, 7) and u = (−4, 2). The orthogonal projection of y onto the line through u
and the origin is the orthogonal projection of y onto u, and this vector is ŷ = (y·u)/(u·u) u = (1/2)u = (−2, 1).
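A quick numeric check of this projection, using y and u from the solution:

```python
y = [1, 7]
u = [-4, 2]

def dot(a, b):
    return sum(ai * bi for ai, bi in zip(a, b))

# projection coefficient (y . u) / (u . u) = 10/20 = 1/2
c = dot(y, u) / dot(u, u)
y_hat = [c * ui for ui in u]  # projection of y onto the line through u
print(c)      # 0.5
print(y_hat)  # [-2.0, 1.0]
```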
13. The orthogonal projection of y onto u is ŷ = (y·u)/(u·u) u = (−13/65)u = (−4/5, 7/5). The component
of y orthogonal to u is y − ŷ = (14/5, 8/5). Thus y = ŷ + (y − ŷ) = (−4/5, 7/5) + (14/5, 8/5).
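The decomposition can be checked with exact arithmetic. The vectors y and u are not printed explicitly in the solution; y = (2, 3) and u = (4, −7) are inferred here from its numbers (y·u = −13, u·u = 65, and ŷ + (y − ŷ) = (2, 3)):

```python
from fractions import Fraction as F

# y and u inferred from the solution's numbers (see lead-in); treat as
# illustrative values consistent with y.u = -13 and u.u = 65.
y = [F(2), F(3)]
u = [F(4), F(-7)]

def dot(a, b):
    return sum(ai * bi for ai, bi in zip(a, b))

c = dot(y, u) / dot(u, u)                   # -13/65 = -1/5
y_hat = [c * ui for ui in u]                # component along u
z = [yi - hi for yi, hi in zip(y, y_hat)]   # component orthogonal to u
print(y_hat)      # (-4/5, 7/5)
print(z)          # (14/5, 8/5)
print(dot(z, u))  # 0, confirming orthogonality
```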
15. The distance from y to the line through u and the origin is ‖y − ŷ‖. One computes
that y − ŷ = y − (y·u)/(u·u) u = (3, 1) − (3/10)(8, 6) = (3/5, −4/5), so ‖y − ŷ‖ = √(9/25 + 16/25) = 1 is
the desired distance.
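The same distance computation, done exactly with the y and u from the solution:

```python
import math
from fractions import Fraction as F

y = [F(3), F(1)]
u = [F(8), F(6)]

def dot(a, b):
    return sum(ai * bi for ai, bi in zip(a, b))

c = dot(y, u) / dot(u, u)                      # 30/100 = 3/10
diff = [yi - c * ui for yi, ui in zip(y, u)]   # y - y_hat
dist = math.sqrt(dot(diff, diff))              # distance to the line
print(diff)  # (3/5, -4/5)
print(dist)  # 1.0
```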
23. a. True. For example, the vectors u and y in Example 3 are linearly independent
but not orthogonal.
b. True. The formulas for the weights are given in Theorem 5.
c. False. See the paragraph following Example 5.
d. False. The matrix must also be square. See the paragraph before Example 7.
e. False. See Example 4. The distance is ky − ŷk.
31. Suppose that ŷ = (y·u)/(u·u) u. Replacing u by cu with c ≠ 0 gives (y·(cu))/((cu)·(cu)) (cu) = (c²(y·u))/(c²(u·u)) u =
(y·u)/(u·u) u = ŷ. So ŷ does not depend on the choice of a nonzero u in the line L used in the
formula.
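This scale-invariance can be spot-checked numerically; the sketch below reuses y = (1, 7) and u = (−4, 2) from exercise 11 and rescales u by several nonzero factors:

```python
from fractions import Fraction as F

def dot(a, b):
    return sum(ai * bi for ai, bi in zip(a, b))

def proj(y, u):
    """Projection of y onto the line through u: ((y.u)/(u.u)) u."""
    c = F(dot(y, u), dot(u, u))
    return [c * ui for ui in u]

y = [1, 7]
u = [-4, 2]
base = proj(y, u)
for c in (2, -3, 7):
    cu = [c * ui for ui in u]
    print(proj(y, cu) == base)  # True every time: replacing u by cu
                                # leaves the projection unchanged
```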
33. Let L = Span{u}, where u is nonzero, and let T (x) = (x·u)/(u·u) u. For any vectors x and y
in Rn and any scalars c and d, the properties of the inner product (Theorem 1) show
that T (cx + dy) = ((cx + dy)·u)/(u·u) u = (c(x·u) + d(y·u))/(u·u) u = c (x·u)/(u·u) u + d (y·u)/(u·u) u = cT (x) + dT (y). Thus T
is a linear transformation. Another approach is to view T as the composition of the
following three linear mappings: x ↦ a = x · u, a ↦ b = a/(u · u), and b ↦ bu.
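The linearity identity T(cx + dy) = cT(x) + dT(y) can be spot-checked exactly on random inputs; the vector u below is illustrative:

```python
import random
from fractions import Fraction as F

def dot(a, b):
    return sum(ai * bi for ai, bi in zip(a, b))

u = [F(2), F(-1), F(3)]  # illustrative nonzero u spanning the line L

def T(x):
    """Projection onto L = Span{u}: T(x) = ((x.u)/(u.u)) u."""
    c = dot(x, u) / dot(u, u)
    return [c * ui for ui in u]

# Spot-check linearity on a few random vectors and scalars
random.seed(0)
for _ in range(5):
    x = [F(random.randint(-5, 5)) for _ in range(3)]
    y = [F(random.randint(-5, 5)) for _ in range(3)]
    c, d = F(random.randint(-5, 5)), F(random.randint(-5, 5))
    lhs = T([c * xi + d * yi for xi, yi in zip(x, y)])
    rhs = [c * a + d * b for a, b in zip(T(x), T(y))]
    assert lhs == rhs
print("T(cx + dy) == cT(x) + dT(y) on all samples")
```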
Chapter 6.3
3. Since u1 · u2 = −1 + 1 + 0 = 0, {u1 , u2 } is an orthogonal set. The orthogonal projection of
y onto Span{u1 , u2 } is ŷ = (y·u1)/(u1·u1) u1 + (y·u2)/(u2·u2) u2 = (3/2)u1 + (5/2)u2 = (3/2)(1, 1, 0) + (5/2)(−1, 1, 0) = (−1, 4, 0).
11. Note that v1 and v2 are orthogonal. The Best Approximation Theorem says that ŷ ,
which is the orthogonal projection of y onto W = Span{v1 , v2 }, is the closest point to
y in W . This vector is ŷ = (y·v1)/(v1·v1) v1 + (y·v2)/(v2·v2) v2 = (1/2)v1 + (3/2)v2 = (3, −1, 1, −1).
13. Note that v1 and v2 are orthogonal. By the Best Approximation Theorem, the closest
point in Span{v1 , v2 } to z is ẑ = (z·v1)/(v1·v1) v1 + (z·v2)/(v2·v2) v2 = (2/3)v1 − (7/3)v2 = (−1, −3, −2, 3).
15. The distance from the point y in R3 to a subspace W is defined as the distance from
y to the closest point in W . Since the closest point in W to y is ŷ = projW y, the
desired distance is ‖y − ŷ‖. One computes that ŷ = (3, −9, −1), y − ŷ = (2, 0, 6), and
‖y − ŷ‖ = √40 = 2√10.
17. a. UᵀU = [1 0; 0 1] = I2 , and UUᵀ = [8/9 −2/9 2/9; −2/9 5/9 4/9; 2/9 4/9 5/9].
b. Since UᵀU = I2 , the columns of U form an orthonormal basis for W , and by Theorem 10,
projW y = UUᵀy = [8/9 −2/9 2/9; −2/9 5/9 4/9; 2/9 4/9 5/9] (4, 8, 1) = (2, 4, 5).
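The matrix-vector product in part (b) can be reproduced exactly with the UUᵀ and y given in the exercise:

```python
from fractions import Fraction as F

# U U^T and y as given in exercise 17
UUT = [[F(8, 9), F(-2, 9), F(2, 9)],
       [F(-2, 9), F(5, 9), F(4, 9)],
       [F(2, 9), F(4, 9), F(5, 9)]]
y = [F(4), F(8), F(1)]

# projW y = (U U^T) y, row by row
proj = [sum(row[j] * y[j] for j in range(3)) for row in UUT]
print(proj)  # (2, 4, 5)
```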
21. a. True. See the calculations for z2 in Example 1 or the box after Example 6 in
Section 6.1.
b. True. See the Orthogonal Decomposition Theorem.
c. False. See the last paragraph in the proof of Theorem 8, or see the second
paragraph after the statement of Theorem 9.
d. True. See the box before the Best Approximation Theorem.
e. True. Theorem 10 applies to the column space W of U because the columns of
U are linearly independent and hence form a basis for W .
Chapter 6.4
11. Call the columns of the matrix x1 , x2 , and x3 and perform the Gram-Schmidt process
on these vectors:
v1 = x1 = (1, −1, −1, 1, 1)
v2 = x2 − (x2·v1)/(v1·v1) v1 = x2 − (−1)v1 = (3, 0, 3, −3, 3)
v3 = x3 − (x3·v1)/(v1·v1) v1 − (x3·v2)/(v2·v2) v2 = x3 − 4v1 − (−1/3)v2 = (2, 0, 2, 2, −2)
Thus an orthogonal basis for W is {(1, −1, −1, 1, 1), (3, 0, 3, −3, 3), (2, 0, 2, 2, −2)}.
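The Gram-Schmidt process itself is short to implement; the columns x2 = (2, 1, 4, −4, 2) and x3 = (5, −4, −3, 7, 1) used below are recovered from the solution's relations x2 = v2 − (−1)v1 and x3 = v3 + 4v1 − (1/3)v2:

```python
from fractions import Fraction as F

def dot(a, b):
    return sum(ai * bi for ai, bi in zip(a, b))

def gram_schmidt(cols):
    """Return an orthogonal (not normalized) basis via Gram-Schmidt,
    subtracting from each column its projection onto the earlier vs."""
    basis = []
    for x in cols:
        v = [F(xi) for xi in x]
        for b in basis:
            c = dot(v, b) / dot(b, b)
            v = [vi - c * bi for vi, bi in zip(v, b)]
        basis.append(v)
    return basis

# x1 is given as v1; x2 and x3 are recovered from the relations above.
x1 = [1, -1, -1, 1, 1]
x2 = [2, 1, 4, -4, 2]
x3 = [5, -4, -3, 7, 1]

v1, v2, v3 = gram_schmidt([x1, x2, x3])
print(v2)  # (3, 0, 3, -3, 3)
print(v3)  # (2, 0, 2, 2, -2)
```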
13. Since A and Q are given, R = QᵀA = [5/6 1/6 −3/6 1/6; −1/6 5/6 1/6 3/6] [5 9; 1 7; −3 −5; 1 5] = [6 12; 0 6].
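The product R = QᵀA can be verified exactly with the entries given in the exercise:

```python
from fractions import Fraction as F

# Q^T (2x4) and A (4x2) as given in exercise 13
QT = [[F(5, 6), F(1, 6), F(-3, 6), F(1, 6)],
      [F(-1, 6), F(5, 6), F(1, 6), F(3, 6)]]
A = [[5, 9], [1, 7], [-3, -5], [1, 5]]

# R = Q^T A, computed entry by entry
R = [[sum(QT[i][k] * A[k][j] for k in range(4)) for j in range(2)]
     for i in range(2)]
print(R)  # [[6, 12], [0, 6]], upper triangular as expected
```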
17. a. False. Scaling was used in Example 2, but the scale factor was nonzero.
b. True. See (1) in the statement of Theorem 11.
c. True. See the solution of Example 4.
19. Suppose that x satisfies Rx = 0; then QRx = Q0 = 0, and Ax = 0. Since the columns
of A are linearly independent, x must be 0. This fact, in turn, shows that the columns
of R are linearly independent. Since R is square, it is invertible by the Invertible Matrix
Theorem.
21. Denote the columns of Q by {q1 , . . . , qn }. Note that n ≤ m, because A is m×n and has
linearly independent columns. The columns of Q can be extended to an orthonormal
basis for Rm as follows. Let f1 be the first vector in the standard basis for Rm that is
not in Wn = Span{q1 , . . . , qn }, let u1 = f1 − projWn f1 , and let qn+1 = u1 /‖u1 ‖. Then
{q1 , . . . , qn , qn+1 } is an orthonormal basis for Wn+1 = Span{q1 , . . . , qn , qn+1 }. Next
let f2 be the first vector in the standard basis for Rm that is not in Wn+1 , let u2 =
f2 − projWn+1 f2 , and let qn+2 = u2 /‖u2 ‖. Then {q1 , . . . , qn , qn+1 , qn+2 } is an orthonormal
basis for Wn+2 = Span{q1 , . . . , qn , qn+1 , qn+2 }. This process continues until m − n
vectors have been added to the original n vectors, and {q1 , . . . , qn , qn+1 , . . . , qm } is an
orthonormal basis for Rm . Let Q0 = [qn+1 · · · qm ] and Q1 = [Q Q0 ]. Then, using
partitioned matrix multiplication, Q1 [R; 0] = QR = A.
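The extension process described above can be sketched directly: walk through the standard basis, subtract from each eᵢ its projection onto the span built so far, and keep the normalized residual whenever it is nonzero. The starting vector q1 below is an illustrative unit vector, not data from the exercise.

```python
import math

def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

def extend_to_orthonormal_basis(qs, m):
    """Extend an orthonormal list qs in R^m to an orthonormal basis of
    R^m, following exercise 21: for each standard basis vector f not in
    the current span, append (f - proj f) / ||f - proj f||."""
    basis = [list(q) for q in qs]
    for j in range(m):                          # e_1, ..., e_m in order
        f = [1.0 if i == j else 0.0 for i in range(m)]
        u = list(f)
        for q in basis:                         # u = f - proj_W f
            c = dot(f, q)                       # valid: basis is orthonormal
            u = [ui - c * qi for ui, qi in zip(u, q)]
        norm = math.sqrt(dot(u, u))
        if norm > 1e-12:                        # f was outside the span
            basis.append([ui / norm for ui in u])
        if len(basis) == m:
            break
    return basis

# Example: one unit vector in R^3 (illustrative), extended to a basis
q1 = [3/5, 4/5, 0.0]
B = extend_to_orthonormal_basis([q1], 3)
print(len(B))  # 3
```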