INNER PRODUCT, LENGTH, AND ORTHOGONALITY

The Inner Product

If $\mathbf{u}$ and $\mathbf{v}$ are vectors in $\mathbb{R}^n$, then we regard $\mathbf{u}$ and $\mathbf{v}$ as $n \times 1$ matrices. The transpose $\mathbf{u}^T$ is a $1 \times n$ matrix, and the matrix product $\mathbf{u}^T\mathbf{v}$ is a $1 \times 1$ matrix, which we write as a single real number (a scalar) without brackets. The number $\mathbf{u}^T\mathbf{v}$ is called the inner product of $\mathbf{u}$ and $\mathbf{v}$, and often it is written as $\mathbf{u} \cdot \mathbf{v}$. This inner product is also referred to as a dot product. If

$$\mathbf{u} = \begin{bmatrix} u_1 \\ u_2 \\ \vdots \\ u_n \end{bmatrix} \quad \text{and} \quad \mathbf{v} = \begin{bmatrix} v_1 \\ v_2 \\ \vdots \\ v_n \end{bmatrix}$$

then the inner product of $\mathbf{u}$ and $\mathbf{v}$ is

$$[\,u_1 \; u_2 \; \cdots \; u_n\,]\begin{bmatrix} v_1 \\ v_2 \\ \vdots \\ v_n \end{bmatrix} = u_1v_1 + u_2v_2 + \cdots + u_nv_n$$

EXAMPLE 1 Compute $\mathbf{u} \cdot \mathbf{v}$ and $\mathbf{v} \cdot \mathbf{u}$ for $\mathbf{u} = \begin{bmatrix} 2 \\ -5 \\ -1 \end{bmatrix}$ and $\mathbf{v} = \begin{bmatrix} 3 \\ 2 \\ -3 \end{bmatrix}$.

SOLUTION

$$\mathbf{u} \cdot \mathbf{v} = \mathbf{u}^T\mathbf{v} = [\,2 \; -5 \; -1\,]\begin{bmatrix} 3 \\ 2 \\ -3 \end{bmatrix} = (2)(3) + (-5)(2) + (-1)(-3) = -1$$

$$\mathbf{v} \cdot \mathbf{u} = \mathbf{v}^T\mathbf{u} = [\,3 \; 2 \; -3\,]\begin{bmatrix} 2 \\ -5 \\ -1 \end{bmatrix} = (3)(2) + (2)(-5) + (-3)(-1) = -1$$

Let $\mathbf{u}$, $\mathbf{v}$, and $\mathbf{w}$ be vectors in $\mathbb{R}^n$, and let $c$ be a scalar. Then

a. $\mathbf{u} \cdot \mathbf{v} = \mathbf{v} \cdot \mathbf{u}$
b. $(\mathbf{u} + \mathbf{v}) \cdot \mathbf{w} = \mathbf{u} \cdot \mathbf{w} + \mathbf{v} \cdot \mathbf{w}$
c. $(c\mathbf{u}) \cdot \mathbf{v} = c(\mathbf{u} \cdot \mathbf{v}) = \mathbf{u} \cdot (c\mathbf{v})$
d. $\mathbf{u} \cdot \mathbf{u} \ge 0$, and $\mathbf{u} \cdot \mathbf{u} = 0$ if and only if $\mathbf{u} = \mathbf{0}$

Properties (b) and (c) can be combined several times to produce the following useful rule:

$$(c_1\mathbf{u}_1 + \cdots + c_p\mathbf{u}_p) \cdot \mathbf{w} = c_1(\mathbf{u}_1 \cdot \mathbf{w}) + \cdots + c_p(\mathbf{u}_p \cdot \mathbf{w})$$

The Length of a Vector

If $\mathbf{v}$ is in $\mathbb{R}^n$, with entries $v_1, \ldots, v_n$, then the square root of $\mathbf{v} \cdot \mathbf{v}$ is defined because $\mathbf{v} \cdot \mathbf{v}$ is nonnegative. The length (or norm) of $\mathbf{v}$ is the nonnegative scalar $\|\mathbf{v}\|$ defined by

$$\|\mathbf{v}\| = \sqrt{\mathbf{v} \cdot \mathbf{v}} = \sqrt{v_1^2 + v_2^2 + \cdots + v_n^2}, \quad \text{and} \quad \|\mathbf{v}\|^2 = \mathbf{v} \cdot \mathbf{v}$$

Suppose $\mathbf{v}$ is in $\mathbb{R}^2$, say, $\mathbf{v} = (a, b)$. If we identify $\mathbf{v}$ with a geometric point in the plane, as usual, then $\|\mathbf{v}\|$ coincides with the standard notion of the length of the line segment from the origin to $\mathbf{v}$. This follows from the Pythagorean Theorem applied to a triangle such as the one in Figure 1. A similar calculation with the diagonal of a rectangular box shows that the definition of length of a vector $\mathbf{v}$ in $\mathbb{R}^3$ coincides with the usual notion of length. For any scalar $c$, the length of $c\mathbf{v}$ is $|c|$ times the length of $\mathbf{v}$. That is,

$$\|c\mathbf{v}\| = |c|\,\|\mathbf{v}\|$$

FIGURE 1 Interpretation of $\|\mathbf{v}\|$ as length.

A vector whose length is 1 is called a unit vector. If we divide a nonzero vector $\mathbf{v}$ by its length—that is, multiply by $1/\|\mathbf{v}\|$—we obtain a unit vector $\mathbf{u}$ because the length of $\mathbf{u}$ is $(1/\|\mathbf{v}\|)\|\mathbf{v}\| = 1$. The process of creating $\mathbf{u}$ from $\mathbf{v}$ is sometimes called normalizing $\mathbf{v}$, and we say that $\mathbf{u}$ is in the same direction as $\mathbf{v}$.

EXAMPLE 2 Let $\mathbf{v} = (1, -2, 2, 0)$. Find a unit vector $\mathbf{u}$ in the same direction as $\mathbf{v}$.

SOLUTION First, compute the length of $\mathbf{v}$:

$$\|\mathbf{v}\|^2 = \mathbf{v} \cdot \mathbf{v} = (1)^2 + (-2)^2 + (2)^2 + (0)^2 = 9, \qquad \|\mathbf{v}\| = \sqrt{9} = 3$$

Then, multiply $\mathbf{v}$ by $1/\|\mathbf{v}\|$ to obtain

$$\mathbf{u} = \frac{1}{\|\mathbf{v}\|}\mathbf{v} = \frac{1}{3}\mathbf{v} = \begin{bmatrix} 1/3 \\ -2/3 \\ 2/3 \\ 0 \end{bmatrix}$$

To check that $\|\mathbf{u}\| = 1$, it suffices to show that $\|\mathbf{u}\|^2 = 1$:

$$\|\mathbf{u}\|^2 = \mathbf{u} \cdot \mathbf{u} = \left(\tfrac{1}{3}\right)^2 + \left(-\tfrac{2}{3}\right)^2 + \left(\tfrac{2}{3}\right)^2 + (0)^2 = \tfrac{1}{9} + \tfrac{4}{9} + \tfrac{4}{9} + 0 = 1$$

EXAMPLE 3 Let $W$ be the subspace of $\mathbb{R}^2$ spanned by $\mathbf{x} = \left(\tfrac{2}{3}, 1\right)$. Find a unit vector $\mathbf{z}$ that is a basis for $W$.

SOLUTION $W$ consists of all multiples of $\mathbf{x}$, as in Figure 2(a). Any nonzero vector in $W$ is a basis for $W$. To simplify the calculation, "scale" $\mathbf{x}$ to eliminate fractions. That is, multiply $\mathbf{x}$ by 3 to get

$$\mathbf{y} = \begin{bmatrix} 2 \\ 3 \end{bmatrix}$$

Now compute $\|\mathbf{y}\|^2 = 2^2 + 3^2 = 13$, $\|\mathbf{y}\| = \sqrt{13}$, and normalize $\mathbf{y}$ to get

$$\mathbf{z} = \frac{1}{\sqrt{13}}\begin{bmatrix} 2 \\ 3 \end{bmatrix} = \begin{bmatrix} 2/\sqrt{13} \\ 3/\sqrt{13} \end{bmatrix}$$

See Figure 2(b). Another unit vector is $(-2/\sqrt{13}, -3/\sqrt{13})$.

FIGURE 2 Normalizing a vector to produce a unit vector.

Distance in $\mathbb{R}^n$

We are ready now to describe how close one vector is to another. Recall that if $a$ and $b$ are real numbers, the distance on the number line between $a$ and $b$ is the number $|a - b|$. Two examples are shown in Figure 3. This definition of distance in $\mathbb{R}$ has a direct analogue in $\mathbb{R}^n$.

FIGURE 3 Distances in $\mathbb{R}$: $|2 - 8| = |-6| = 6$ or $|8 - 2| = |6| = 6$ (6 units apart), and $|(-3) - 4| = |-7| = 7$ or $|4 - (-3)| = |7| = 7$ (7 units apart).
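As a numerical companion to Examples 1 and 2, here is a minimal sketch in Python with NumPy (our own choice of tool, not something the text assumes); `np.dot` computes exactly the sum $u_1v_1 + \cdots + u_nv_n$:

```python
import numpy as np

# Example 1: the inner product u.v = u1*v1 + ... + un*vn.
u = np.array([2.0, -5.0, -1.0])
v = np.array([3.0, 2.0, -3.0])
print(np.dot(u, v))          # -1.0
print(np.dot(v, u))          # -1.0, illustrating property (a): u.v = v.u

# Example 2: normalize w = (1, -2, 2, 0) to a unit vector in the same direction.
w = np.array([1.0, -2.0, 2.0, 0.0])
length = np.sqrt(np.dot(w, w))   # ||w|| = sqrt(w.w) = 3
unit = w / length                # (1/3, -2/3, 2/3, 0)
print(length)                    # 3.0
print(np.dot(unit, unit))        # 1.0, confirming ||unit||^2 = 1
```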
For $\mathbf{u}$ and $\mathbf{v}$ in $\mathbb{R}^n$, the distance between $\mathbf{u}$ and $\mathbf{v}$, written as $\operatorname{dist}(\mathbf{u}, \mathbf{v})$, is the length of the vector $\mathbf{u} - \mathbf{v}$. That is,

$$\operatorname{dist}(\mathbf{u}, \mathbf{v}) = \|\mathbf{u} - \mathbf{v}\|$$

If $\mathbf{u} = (u_1, u_2, u_3)$ and $\mathbf{v} = (v_1, v_2, v_3)$, then

$$\operatorname{dist}(\mathbf{u}, \mathbf{v}) = \|\mathbf{u} - \mathbf{v}\| = \sqrt{(\mathbf{u} - \mathbf{v}) \cdot (\mathbf{u} - \mathbf{v})} = \sqrt{(u_1 - v_1)^2 + (u_2 - v_2)^2 + (u_3 - v_3)^2}$$

EXAMPLE 4 Compute the distance between the vectors $\mathbf{u} = (7, 1)$ and $\mathbf{v} = (3, 2)$.

SOLUTION Calculate

$$\mathbf{u} - \mathbf{v} = \begin{bmatrix} 7 \\ 1 \end{bmatrix} - \begin{bmatrix} 3 \\ 2 \end{bmatrix} = \begin{bmatrix} 4 \\ -1 \end{bmatrix}, \qquad \|\mathbf{u} - \mathbf{v}\| = \sqrt{4^2 + (-1)^2} = \sqrt{17}$$

The vectors $\mathbf{u}$, $\mathbf{v}$, and $\mathbf{u} - \mathbf{v}$ are shown in Figure 4. When the vector $\mathbf{u} - \mathbf{v}$ is added to $\mathbf{v}$, the result is $\mathbf{u}$. Notice that the parallelogram in Figure 4 shows that the distance from $\mathbf{u}$ to $\mathbf{v}$ is the same as the distance from $\mathbf{u} - \mathbf{v}$ to $\mathbf{0}$.

FIGURE 4 The distance between $\mathbf{u}$ and $\mathbf{v}$ is the length of $\mathbf{u} - \mathbf{v}$.

Orthogonal Vectors

Consider $\mathbb{R}^2$ or $\mathbb{R}^3$ and two lines through the origin determined by vectors $\mathbf{u}$ and $\mathbf{v}$. The two lines shown in Figure 5 are geometrically perpendicular if and only if the distance from $\mathbf{u}$ to $\mathbf{v}$ is the same as the distance from $\mathbf{u}$ to $-\mathbf{v}$. This is the same as requiring the squares of the distances to be the same. Now

$$[\operatorname{dist}(\mathbf{u}, -\mathbf{v})]^2 = \|\mathbf{u} - (-\mathbf{v})\|^2 = \|\mathbf{u} + \mathbf{v}\|^2 = (\mathbf{u} + \mathbf{v}) \cdot (\mathbf{u} + \mathbf{v})$$
$$= \mathbf{u} \cdot (\mathbf{u} + \mathbf{v}) + \mathbf{v} \cdot (\mathbf{u} + \mathbf{v}) = \mathbf{u} \cdot \mathbf{u} + \mathbf{u} \cdot \mathbf{v} + \mathbf{v} \cdot \mathbf{u} + \mathbf{v} \cdot \mathbf{v}$$
$$= \|\mathbf{u}\|^2 + \|\mathbf{v}\|^2 + 2\mathbf{u} \cdot \mathbf{v}$$

The same calculations with $\mathbf{v}$ and $-\mathbf{v}$ interchanged show that

$$[\operatorname{dist}(\mathbf{u}, \mathbf{v})]^2 = \|\mathbf{u}\|^2 + \|-\mathbf{v}\|^2 + 2\mathbf{u} \cdot (-\mathbf{v}) = \|\mathbf{u}\|^2 + \|\mathbf{v}\|^2 - 2\mathbf{u} \cdot \mathbf{v}$$

The two squared distances are equal if and only if $2\mathbf{u} \cdot \mathbf{v} = -2\mathbf{u} \cdot \mathbf{v}$, which happens if and only if $\mathbf{u} \cdot \mathbf{v} = 0$. This calculation shows that when vectors $\mathbf{u}$ and $\mathbf{v}$ are identified with geometric points, the corresponding lines through the points and the origin are perpendicular if and only if $\mathbf{u} \cdot \mathbf{v} = 0$.

Two vectors $\mathbf{u}$ and $\mathbf{v}$ in $\mathbb{R}^n$ are orthogonal (to each other) if $\mathbf{u} \cdot \mathbf{v} = 0$.

Observe that the zero vector is orthogonal to every vector in $\mathbb{R}^n$ because $\mathbf{0}^T\mathbf{v} = 0$ for all $\mathbf{v}$.

The Pythagorean Theorem

Two vectors $\mathbf{u}$ and $\mathbf{v}$ are orthogonal if and only if $\|\mathbf{u} + \mathbf{v}\|^2 = \|\mathbf{u}\|^2 + \|\mathbf{v}\|^2$.

Orthogonal Complements

If a vector $\mathbf{z}$ is orthogonal to every vector in a subspace $W$ of $\mathbb{R}^n$, then $\mathbf{z}$ is said to be orthogonal to $W$. The set of all vectors $\mathbf{z}$ that are orthogonal to $W$ is called the orthogonal complement of $W$ and is denoted by $W^\perp$ (and read as "$W$ perpendicular" or simply "$W$ perp").

EXAMPLE Let $W$ be a plane through the origin in $\mathbb{R}^3$, and let $L$ be the line through the origin and perpendicular to $W$. If $\mathbf{z}$ and $\mathbf{w}$ are nonzero, $\mathbf{z}$ is on $L$, and $\mathbf{w}$ is in $W$, then the line segment from $\mathbf{0}$ to $\mathbf{z}$ is perpendicular to the line segment from $\mathbf{0}$ to $\mathbf{w}$; that is, $\mathbf{z} \cdot \mathbf{w} = 0$. See Figure 7. So each vector on $L$ is orthogonal to every $\mathbf{w}$ in $W$. In fact, $L$ consists of all vectors that are orthogonal to the $\mathbf{w}$'s in $W$, and $W$ consists of all vectors orthogonal to the $\mathbf{z}$'s in $L$. That is,

$$L = W^\perp \quad \text{and} \quad W = L^\perp$$

FIGURE 7 A plane and line through $\mathbf{0}$ as orthogonal complements.

1. A vector $\mathbf{x}$ is in $W^\perp$ if and only if $\mathbf{x}$ is orthogonal to every vector in a set that spans $W$.
2. $W^\perp$ is a subspace of $\mathbb{R}^n$.

FIGURE 8 The fundamental subspaces determined by an $m \times n$ matrix $A$.

Let $A$ be an $m \times n$ matrix. The orthogonal complement of the row space of $A$ is the null space of $A$, and the orthogonal complement of the column space of $A$ is the null space of $A^T$:

$$(\operatorname{Row} A)^\perp = \operatorname{Nul} A \quad \text{and} \quad (\operatorname{Col} A)^\perp = \operatorname{Nul} A^T$$

Angles in $\mathbb{R}^2$ and $\mathbb{R}^3$

If $\mathbf{u}$ and $\mathbf{v}$ are nonzero vectors in either $\mathbb{R}^2$ or $\mathbb{R}^3$, then there is a nice connection between their inner product and the angle $\theta$ between the two line segments from the origin to the points identified with $\mathbf{u}$ and $\mathbf{v}$. The formula is

$$\mathbf{u} \cdot \mathbf{v} = \|\mathbf{u}\|\,\|\mathbf{v}\| \cos\theta \tag{2}$$

To verify this formula for vectors in $\mathbb{R}^2$, consider the triangle shown in the figure, with sides of lengths $\|\mathbf{u}\|$, $\|\mathbf{v}\|$, and $\|\mathbf{u} - \mathbf{v}\|$. By the law of cosines,

$$\|\mathbf{u} - \mathbf{v}\|^2 = \|\mathbf{u}\|^2 + \|\mathbf{v}\|^2 - 2\|\mathbf{u}\|\,\|\mathbf{v}\| \cos\theta$$

FIGURE The angle between two vectors.
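The distance, orthogonality, and angle computations above are all one-liners numerically. In the sketch below (our own; the vector `w` is a hypothetical choice picked so that $\mathbf{u} \cdot \mathbf{w} = 0$), we reproduce Example 4, check the Pythagorean Theorem, and solve formula (2) for the angle:

```python
import numpy as np

# Example 4: the distance between u and v is the length of u - v.
u = np.array([7.0, 1.0])
v = np.array([3.0, 2.0])
print(np.linalg.norm(u - v))     # sqrt(17) ~ 4.1231

# w is chosen so that u.w = 7*1 + 1*(-7) = 0, i.e., u and w are orthogonal.
w = np.array([1.0, -7.0])
print(np.dot(u, w))              # 0.0

# Pythagorean Theorem: ||u + w||^2 = ||u||^2 + ||w||^2 for orthogonal u, w.
print(np.linalg.norm(u + w)**2)                      # 100.0
print(np.linalg.norm(u)**2 + np.linalg.norm(w)**2)   # 100.0

# Angle formula (2), solved for theta: cos(theta) = u.v / (||u|| ||v||).
cos_theta = np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v))
print(np.degrees(np.arccos(cos_theta)))              # angle between u and v
```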
ORTHOGONAL SETS

A set of vectors $\{\mathbf{u}_1, \ldots, \mathbf{u}_p\}$ in $\mathbb{R}^n$ is said to be an orthogonal set if each pair of distinct vectors from the set is orthogonal, that is, if $\mathbf{u}_i \cdot \mathbf{u}_j = 0$ whenever $i \neq j$.

EXAMPLE 1 Show that $\{\mathbf{u}_1, \mathbf{u}_2, \mathbf{u}_3\}$ is an orthogonal set, where

$$\mathbf{u}_1 = \begin{bmatrix} 3 \\ 1 \\ 1 \end{bmatrix}, \quad \mathbf{u}_2 = \begin{bmatrix} -1 \\ 2 \\ 1 \end{bmatrix}, \quad \mathbf{u}_3 = \begin{bmatrix} -1/2 \\ -2 \\ 7/2 \end{bmatrix}$$

SOLUTION Consider the three possible pairs of distinct vectors, namely, $\{\mathbf{u}_1, \mathbf{u}_2\}$, $\{\mathbf{u}_1, \mathbf{u}_3\}$, and $\{\mathbf{u}_2, \mathbf{u}_3\}$.

$$\mathbf{u}_1 \cdot \mathbf{u}_2 = 3(-1) + 1(2) + 1(1) = 0$$
$$\mathbf{u}_1 \cdot \mathbf{u}_3 = 3\left(-\tfrac{1}{2}\right) + 1(-2) + 1\left(\tfrac{7}{2}\right) = 0$$
$$\mathbf{u}_2 \cdot \mathbf{u}_3 = -1\left(-\tfrac{1}{2}\right) + 2(-2) + 1\left(\tfrac{7}{2}\right) = 0$$

Each pair of distinct vectors is orthogonal, and so $\{\mathbf{u}_1, \mathbf{u}_2, \mathbf{u}_3\}$ is an orthogonal set. See Figure 1; the three line segments there are mutually perpendicular.

FIGURE 1

THEOREM 4 If $S = \{\mathbf{u}_1, \ldots, \mathbf{u}_p\}$ is an orthogonal set of nonzero vectors in $\mathbb{R}^n$, then $S$ is linearly independent and hence is a basis for the subspace spanned by $S$.

PROOF If $\mathbf{0} = c_1\mathbf{u}_1 + \cdots + c_p\mathbf{u}_p$ for some scalars $c_1, \ldots, c_p$, then

$$0 = \mathbf{0} \cdot \mathbf{u}_1 = (c_1\mathbf{u}_1 + c_2\mathbf{u}_2 + \cdots + c_p\mathbf{u}_p) \cdot \mathbf{u}_1$$
$$= (c_1\mathbf{u}_1) \cdot \mathbf{u}_1 + (c_2\mathbf{u}_2) \cdot \mathbf{u}_1 + \cdots + (c_p\mathbf{u}_p) \cdot \mathbf{u}_1$$
$$= c_1(\mathbf{u}_1 \cdot \mathbf{u}_1) + c_2(\mathbf{u}_2 \cdot \mathbf{u}_1) + \cdots + c_p(\mathbf{u}_p \cdot \mathbf{u}_1) = c_1(\mathbf{u}_1 \cdot \mathbf{u}_1)$$

because $\mathbf{u}_1$ is orthogonal to $\mathbf{u}_2, \ldots, \mathbf{u}_p$. Since $\mathbf{u}_1$ is nonzero, $\mathbf{u}_1 \cdot \mathbf{u}_1$ is not zero and so $c_1 = 0$. Similarly, $c_2, \ldots, c_p$ must be zero. Thus $S$ is linearly independent.

An orthogonal basis for a subspace $W$ of $\mathbb{R}^n$ is a basis for $W$ that is also an orthogonal set.

THEOREM 5 Let $\{\mathbf{u}_1, \ldots, \mathbf{u}_p\}$ be an orthogonal basis for a subspace $W$ of $\mathbb{R}^n$. For each $\mathbf{y}$ in $W$, the weights in the linear combination

$$\mathbf{y} = c_1\mathbf{u}_1 + \cdots + c_p\mathbf{u}_p$$

are given by

$$c_j = \frac{\mathbf{y} \cdot \mathbf{u}_j}{\mathbf{u}_j \cdot \mathbf{u}_j} \qquad (j = 1, \ldots, p)$$

EXAMPLE 2 The set $S = \{\mathbf{u}_1, \mathbf{u}_2, \mathbf{u}_3\}$ in Example 1 is an orthogonal basis for $\mathbb{R}^3$. Express the vector $\mathbf{y} = \begin{bmatrix} 6 \\ 1 \\ -8 \end{bmatrix}$ as a linear combination of the vectors in $S$.

SOLUTION Compute

$$\mathbf{y} \cdot \mathbf{u}_1 = 11, \quad \mathbf{y} \cdot \mathbf{u}_2 = -12, \quad \mathbf{y} \cdot \mathbf{u}_3 = -33,$$
$$\mathbf{u}_1 \cdot \mathbf{u}_1 = 11, \quad \mathbf{u}_2 \cdot \mathbf{u}_2 = 6, \quad \mathbf{u}_3 \cdot \mathbf{u}_3 = 33/2$$

By Theorem 5,

$$\mathbf{y} = \frac{11}{11}\mathbf{u}_1 + \frac{-12}{6}\mathbf{u}_2 + \frac{-33}{33/2}\mathbf{u}_3 = \mathbf{u}_1 - 2\mathbf{u}_2 - 2\mathbf{u}_3$$

An Orthogonal Projection

Given a nonzero vector $\mathbf{u}$ in $\mathbb{R}^n$, consider the problem of decomposing a vector $\mathbf{y}$ in $\mathbb{R}^n$ into the sum of two vectors, one a multiple of $\mathbf{u}$ and the other orthogonal to $\mathbf{u}$. We wish to write

$$\mathbf{y} = \hat{\mathbf{y}} + \mathbf{z} \tag{1}$$

where $\hat{\mathbf{y}} = \alpha\mathbf{u}$ for some scalar $\alpha$ and $\mathbf{z}$ is some vector orthogonal to $\mathbf{u}$. See Figure 2. Given any scalar $\alpha$, let $\mathbf{z} = \mathbf{y} - \alpha\mathbf{u}$, so that (1) is satisfied. Then $\mathbf{y} - \hat{\mathbf{y}}$ is orthogonal to $\mathbf{u}$ if and only if

$$0 = (\mathbf{y} - \alpha\mathbf{u}) \cdot \mathbf{u} = \mathbf{y} \cdot \mathbf{u} - \alpha(\mathbf{u} \cdot \mathbf{u})$$

That is, (1) is satisfied with $\mathbf{z}$ orthogonal to $\mathbf{u}$ if and only if $\alpha = \dfrac{\mathbf{y} \cdot \mathbf{u}}{\mathbf{u} \cdot \mathbf{u}}$ and $\hat{\mathbf{y}} = \dfrac{\mathbf{y} \cdot \mathbf{u}}{\mathbf{u} \cdot \mathbf{u}}\mathbf{u}$. The vector $\hat{\mathbf{y}}$ is called the orthogonal projection of $\mathbf{y}$ onto $\mathbf{u}$, and the vector $\mathbf{z}$ is called the component of $\mathbf{y}$ orthogonal to $\mathbf{u}$.

If $c$ is any nonzero scalar and if $\mathbf{u}$ is replaced by $c\mathbf{u}$ in the definition of $\hat{\mathbf{y}}$, then the orthogonal projection of $\mathbf{y}$ onto $c\mathbf{u}$ is exactly the same as the orthogonal projection of $\mathbf{y}$ onto $\mathbf{u}$ (Exercise 31). Hence this projection is determined by the subspace $L$ spanned by $\mathbf{u}$ (the line through $\mathbf{u}$ and $\mathbf{0}$). Sometimes $\hat{\mathbf{y}}$ is denoted by $\operatorname{proj}_L \mathbf{y}$ and is called the orthogonal projection of $\mathbf{y}$ onto $L$. That is,

$$\hat{\mathbf{y}} = \operatorname{proj}_L \mathbf{y} = \frac{\mathbf{y} \cdot \mathbf{u}}{\mathbf{u} \cdot \mathbf{u}}\mathbf{u}$$

FIGURE 2 Finding $\alpha$ to make $\mathbf{y} - \hat{\mathbf{y}}$ orthogonal to $\mathbf{u}$.

EXAMPLE 3 Let $\mathbf{y} = \begin{bmatrix} 7 \\ 6 \end{bmatrix}$ and $\mathbf{u} = \begin{bmatrix} 4 \\ 2 \end{bmatrix}$. Find the orthogonal projection of $\mathbf{y}$ onto $\mathbf{u}$. Then write $\mathbf{y}$ as the sum of two orthogonal vectors, one in $\operatorname{Span}\{\mathbf{u}\}$ and one orthogonal to $\mathbf{u}$.

SOLUTION Compute

$$\mathbf{y} \cdot \mathbf{u} = \begin{bmatrix} 7 \\ 6 \end{bmatrix} \cdot \begin{bmatrix} 4 \\ 2 \end{bmatrix} = 40, \qquad \mathbf{u} \cdot \mathbf{u} = \begin{bmatrix} 4 \\ 2 \end{bmatrix} \cdot \begin{bmatrix} 4 \\ 2 \end{bmatrix} = 20$$

The orthogonal projection of $\mathbf{y}$ onto $\mathbf{u}$ is

$$\hat{\mathbf{y}} = \frac{\mathbf{y} \cdot \mathbf{u}}{\mathbf{u} \cdot \mathbf{u}}\mathbf{u} = \frac{40}{20}\begin{bmatrix} 4 \\ 2 \end{bmatrix} = \begin{bmatrix} 8 \\ 4 \end{bmatrix}$$

and the component of $\mathbf{y}$ orthogonal to $\mathbf{u}$ is

$$\mathbf{y} - \hat{\mathbf{y}} = \begin{bmatrix} 7 \\ 6 \end{bmatrix} - \begin{bmatrix} 8 \\ 4 \end{bmatrix} = \begin{bmatrix} -1 \\ 2 \end{bmatrix}$$

The sum of these two vectors is $\mathbf{y}$. That is,

$$\begin{bmatrix} 7 \\ 6 \end{bmatrix} = \begin{bmatrix} 8 \\ 4 \end{bmatrix} + \begin{bmatrix} -1 \\ 2 \end{bmatrix}$$

This decomposition of $\mathbf{y}$ is illustrated in Figure 3. Note: If the calculations above are correct, then $\{\hat{\mathbf{y}}, \mathbf{y} - \hat{\mathbf{y}}\}$ will be an orthogonal set. As a check, compute

$$\hat{\mathbf{y}} \cdot (\mathbf{y} - \hat{\mathbf{y}}) = \begin{bmatrix} 8 \\ 4 \end{bmatrix} \cdot \begin{bmatrix} -1 \\ 2 \end{bmatrix} = -8 + 8 = 0$$

FIGURE 3 The orthogonal projection of $\mathbf{y}$ onto a line $L$ through the origin.

Since the line segment in Figure 3 between $\mathbf{y}$ and $\hat{\mathbf{y}}$ is perpendicular to $L$, by construction of $\hat{\mathbf{y}}$, the point identified with $\hat{\mathbf{y}}$ is the closest point of $L$ to $\mathbf{y}$.

Orthonormal Sets

A set $\{\mathbf{u}_1, \ldots, \mathbf{u}_p\}$ is an orthonormal set if it is an orthogonal set of unit vectors. If $W$ is the subspace spanned by such a set, then $\{\mathbf{u}_1, \ldots, \mathbf{u}_p\}$ is an orthonormal basis for $W$, since the set is automatically linearly independent, by Theorem 4. The simplest example of an orthonormal set is the standard basis $\{\mathbf{e}_1, \ldots, \mathbf{e}_n\}$ for $\mathbb{R}^n$. Any nonempty subset of $\{\mathbf{e}_1, \ldots, \mathbf{e}_n\}$ is orthonormal, too.
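Before moving on, here is a minimal NumPy sketch (our own, not the text's) of the two computations just carried out: the Theorem 5 weights for Example 2, and the line projection of Example 3. Note that each weight is a single quotient of dot products; no linear system needs to be solved.

```python
import numpy as np

# The orthogonal basis of Example 1 and the vector y of Example 2.
u1 = np.array([3.0, 1.0, 1.0])
u2 = np.array([-1.0, 2.0, 1.0])
u3 = np.array([-0.5, -2.0, 3.5])
y  = np.array([6.0, 1.0, -8.0])

# Theorem 5: each weight is c_j = (y.u_j)/(u_j.u_j).
for u in (u1, u2, u3):
    print(np.dot(y, u) / np.dot(u, u))   # 1.0, -2.0, -2.0

# Example 3: orthogonal projection of y2 onto a single nonzero vector u.
y2 = np.array([7.0, 6.0])
u  = np.array([4.0, 2.0])
y_hat = (np.dot(y2, u) / np.dot(u, u)) * u   # (8, 4)
z = y2 - y_hat                               # (-1, 2), component orthogonal to u
print(y_hat, z, np.dot(y_hat, z))            # final value 0 confirms orthogonality
```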
Here is a more complicated example.

EXAMPLE 5 Show that $\{\mathbf{v}_1, \mathbf{v}_2, \mathbf{v}_3\}$ is an orthonormal basis of $\mathbb{R}^3$, where

$$\mathbf{v}_1 = \begin{bmatrix} 3/\sqrt{11} \\ 1/\sqrt{11} \\ 1/\sqrt{11} \end{bmatrix}, \quad \mathbf{v}_2 = \begin{bmatrix} -1/\sqrt{6} \\ 2/\sqrt{6} \\ 1/\sqrt{6} \end{bmatrix}, \quad \mathbf{v}_3 = \begin{bmatrix} -1/\sqrt{66} \\ -4/\sqrt{66} \\ 7/\sqrt{66} \end{bmatrix}$$

SOLUTION Compute

$$\mathbf{v}_1 \cdot \mathbf{v}_2 = -3/\sqrt{66} + 2/\sqrt{66} + 1/\sqrt{66} = 0$$
$$\mathbf{v}_1 \cdot \mathbf{v}_3 = -3/\sqrt{726} - 4/\sqrt{726} + 7/\sqrt{726} = 0$$
$$\mathbf{v}_2 \cdot \mathbf{v}_3 = 1/\sqrt{396} - 8/\sqrt{396} + 7/\sqrt{396} = 0$$

Thus $\{\mathbf{v}_1, \mathbf{v}_2, \mathbf{v}_3\}$ is an orthogonal set. Also,

$$\mathbf{v}_1 \cdot \mathbf{v}_1 = 9/11 + 1/11 + 1/11 = 1$$
$$\mathbf{v}_2 \cdot \mathbf{v}_2 = 1/6 + 4/6 + 1/6 = 1$$
$$\mathbf{v}_3 \cdot \mathbf{v}_3 = 1/66 + 16/66 + 49/66 = 1$$

which shows that $\mathbf{v}_1$, $\mathbf{v}_2$, and $\mathbf{v}_3$ are unit vectors. Thus $\{\mathbf{v}_1, \mathbf{v}_2, \mathbf{v}_3\}$ is an orthonormal set. Since the set is linearly independent, its three vectors form a basis for $\mathbb{R}^3$. See Figure 6.

When the vectors in an orthogonal set of nonzero vectors are normalized to have unit length, the new vectors will still be orthogonal, and hence the new set will be an orthonormal set. See Exercise 32.

THEOREM 6 An $m \times n$ matrix $U$ has orthonormal columns if and only if $U^TU = I$.

PROOF To simplify notation, we suppose that $U$ has only three columns, each a vector in $\mathbb{R}^m$. The proof of the general case is essentially the same. Let $U = [\,\mathbf{u}_1 \; \mathbf{u}_2 \; \mathbf{u}_3\,]$ and compute

$$U^TU = \begin{bmatrix} \mathbf{u}_1^T \\ \mathbf{u}_2^T \\ \mathbf{u}_3^T \end{bmatrix}[\,\mathbf{u}_1 \; \mathbf{u}_2 \; \mathbf{u}_3\,] = \begin{bmatrix} \mathbf{u}_1^T\mathbf{u}_1 & \mathbf{u}_1^T\mathbf{u}_2 & \mathbf{u}_1^T\mathbf{u}_3 \\ \mathbf{u}_2^T\mathbf{u}_1 & \mathbf{u}_2^T\mathbf{u}_2 & \mathbf{u}_2^T\mathbf{u}_3 \\ \mathbf{u}_3^T\mathbf{u}_1 & \mathbf{u}_3^T\mathbf{u}_2 & \mathbf{u}_3^T\mathbf{u}_3 \end{bmatrix}$$

The entries in the matrix at the right are inner products, using transpose notation. The columns of $U$ are orthogonal if and only if

$$\mathbf{u}_1^T\mathbf{u}_2 = \mathbf{u}_2^T\mathbf{u}_1 = 0, \quad \mathbf{u}_1^T\mathbf{u}_3 = \mathbf{u}_3^T\mathbf{u}_1 = 0, \quad \mathbf{u}_2^T\mathbf{u}_3 = \mathbf{u}_3^T\mathbf{u}_2 = 0$$

The columns of $U$ all have unit length if and only if

$$\mathbf{u}_1^T\mathbf{u}_1 = 1, \quad \mathbf{u}_2^T\mathbf{u}_2 = 1, \quad \mathbf{u}_3^T\mathbf{u}_3 = 1$$

THEOREM 7 Let $U$ be an $m \times n$ matrix with orthonormal columns, and let $\mathbf{x}$ and $\mathbf{y}$ be in $\mathbb{R}^n$. Then

a. $\|U\mathbf{x}\| = \|\mathbf{x}\|$
b. $(U\mathbf{x}) \cdot (U\mathbf{y}) = \mathbf{x} \cdot \mathbf{y}$
c. $(U\mathbf{x}) \cdot (U\mathbf{y}) = 0$ if and only if $\mathbf{x} \cdot \mathbf{y} = 0$

EXAMPLE 6 Let $U = \begin{bmatrix} 1/\sqrt{2} & 2/3 \\ 1/\sqrt{2} & -2/3 \\ 0 & 1/3 \end{bmatrix}$ and $\mathbf{x} = \begin{bmatrix} \sqrt{2} \\ 3 \end{bmatrix}$. Notice that $U$ has orthonormal columns and

$$U^TU = \begin{bmatrix} 1/\sqrt{2} & 1/\sqrt{2} & 0 \\ 2/3 & -2/3 & 1/3 \end{bmatrix}\begin{bmatrix} 1/\sqrt{2} & 2/3 \\ 1/\sqrt{2} & -2/3 \\ 0 & 1/3 \end{bmatrix} = \begin{bmatrix} 1 & 0 \\ 0 & 1 \end{bmatrix}$$

Verify that $\|U\mathbf{x}\| = \|\mathbf{x}\|$.

SOLUTION

$$U\mathbf{x} = \begin{bmatrix} 1/\sqrt{2} & 2/3 \\ 1/\sqrt{2} & -2/3 \\ 0 & 1/3 \end{bmatrix}\begin{bmatrix} \sqrt{2} \\ 3 \end{bmatrix} = \begin{bmatrix} 3 \\ -1 \\ 1 \end{bmatrix}$$

$$\|U\mathbf{x}\| = \sqrt{9 + 1 + 1} = \sqrt{11}, \qquad \|\mathbf{x}\| = \sqrt{2 + 9} = \sqrt{11}$$

An orthogonal matrix is a square invertible matrix $U$ such that $U^{-1} = U^T$.

EXAMPLE 7 The matrix

$$U = \begin{bmatrix} 3/\sqrt{11} & -1/\sqrt{6} & -1/\sqrt{66} \\ 1/\sqrt{11} & 2/\sqrt{6} & -4/\sqrt{66} \\ 1/\sqrt{11} & 1/\sqrt{6} & 7/\sqrt{66} \end{bmatrix}$$

is an orthogonal matrix because it is square and because its columns are orthonormal.

ORTHOGONAL PROJECTIONS

The orthogonal projection of a point in $\mathbb{R}^2$ onto a line through the origin has an important analogue in $\mathbb{R}^n$. Given a vector $\mathbf{y}$ and a subspace $W$ in $\mathbb{R}^n$, there is a vector $\hat{\mathbf{y}}$ in $W$ such that (1) $\hat{\mathbf{y}}$ is the unique vector in $W$ for which $\mathbf{y} - \hat{\mathbf{y}}$ is orthogonal to $W$, and (2) $\hat{\mathbf{y}}$ is the unique vector in $W$ closest to $\mathbf{y}$. See Figure 1. To prepare for the first theorem, observe that whenever a vector $\mathbf{y}$ is written as a linear combination of vectors $\mathbf{u}_1, \ldots, \mathbf{u}_n$ in $\mathbb{R}^n$, the terms in the sum for $\mathbf{y}$ can be grouped into two parts so that $\mathbf{y}$ can be written as

$$\mathbf{y} = \mathbf{z}_1 + \mathbf{z}_2$$

where $\mathbf{z}_1$ is a linear combination of some of the $\mathbf{u}_i$ and $\mathbf{z}_2$ is a linear combination of the rest of the $\mathbf{u}_i$. This idea is particularly useful when $\{\mathbf{u}_1, \ldots, \mathbf{u}_n\}$ is an orthogonal basis. Recall that $W^\perp$ denotes the set of all vectors orthogonal to a subspace $W$.

EXAMPLE 1 Let $\{\mathbf{u}_1, \ldots, \mathbf{u}_5\}$ be an orthogonal basis for $\mathbb{R}^5$ and let

$$\mathbf{y} = c_1\mathbf{u}_1 + \cdots + c_5\mathbf{u}_5$$

Consider the subspace $W = \operatorname{Span}\{\mathbf{u}_1, \mathbf{u}_2\}$, and write $\mathbf{y}$ as the sum of a vector $\mathbf{z}_1$ in $W$ and a vector $\mathbf{z}_2$ in $W^\perp$.

SOLUTION Write

$$\mathbf{y} = \underbrace{c_1\mathbf{u}_1 + c_2\mathbf{u}_2}_{\mathbf{z}_1} + \underbrace{c_3\mathbf{u}_3 + c_4\mathbf{u}_4 + c_5\mathbf{u}_5}_{\mathbf{z}_2}$$

where $\mathbf{z}_1 = c_1\mathbf{u}_1 + c_2\mathbf{u}_2$ is in $\operatorname{Span}\{\mathbf{u}_1, \mathbf{u}_2\}$ and $\mathbf{z}_2 = c_3\mathbf{u}_3 + c_4\mathbf{u}_4 + c_5\mathbf{u}_5$ is in $\operatorname{Span}\{\mathbf{u}_3, \mathbf{u}_4, \mathbf{u}_5\}$. To show that $\mathbf{z}_2$ is in $W^\perp$, it suffices to show that $\mathbf{z}_2$ is orthogonal to the vectors in the basis $\{\mathbf{u}_1, \mathbf{u}_2\}$ for $W$. Using properties of the inner product, compute

$$\mathbf{z}_2 \cdot \mathbf{u}_1 = (c_3\mathbf{u}_3 + c_4\mathbf{u}_4 + c_5\mathbf{u}_5) \cdot \mathbf{u}_1 = c_3\mathbf{u}_3 \cdot \mathbf{u}_1 + c_4\mathbf{u}_4 \cdot \mathbf{u}_1 + c_5\mathbf{u}_5 \cdot \mathbf{u}_1 = 0$$

because $\mathbf{u}_1$ is orthogonal to $\mathbf{u}_3$, $\mathbf{u}_4$, and $\mathbf{u}_5$. A similar calculation shows that $\mathbf{z}_2 \cdot \mathbf{u}_2 = 0$. Thus $\mathbf{z}_2$ is in $W^\perp$.
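Before stating the main theorem, here is a quick numerical check of Example 6 and Theorems 6 and 7 above (a minimal sketch; NumPy and the variable names are our own, not the text's):

```python
import numpy as np

# Example 6: a 3x2 matrix with orthonormal columns.
s = 1.0 / np.sqrt(2.0)
U = np.array([[s,    2/3],
              [s,   -2/3],
              [0.0,  1/3]])
x = np.array([np.sqrt(2.0), 3.0])

print(U.T @ U)                 # the 2x2 identity, as Theorem 6 predicts
print(np.linalg.norm(U @ x))   # sqrt(11) ...
print(np.linalg.norm(x))       # ... equals sqrt(11), illustrating Theorem 7(a)
```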
THEOREM 8 The Orthogonal Decomposition Theorem

Let $W$ be a subspace of $\mathbb{R}^n$. Then each $\mathbf{y}$ in $\mathbb{R}^n$ can be written uniquely in the form

$$\mathbf{y} = \hat{\mathbf{y}} + \mathbf{z} \tag{1}$$

where $\hat{\mathbf{y}}$ is in $W$ and $\mathbf{z}$ is in $W^\perp$. In fact, if $\{\mathbf{u}_1, \ldots, \mathbf{u}_p\}$ is any orthogonal basis of $W$, then

$$\hat{\mathbf{y}} = \frac{\mathbf{y} \cdot \mathbf{u}_1}{\mathbf{u}_1 \cdot \mathbf{u}_1}\mathbf{u}_1 + \cdots + \frac{\mathbf{y} \cdot \mathbf{u}_p}{\mathbf{u}_p \cdot \mathbf{u}_p}\mathbf{u}_p \tag{2}$$

and $\mathbf{z} = \mathbf{y} - \hat{\mathbf{y}}$.

The vector $\hat{\mathbf{y}}$ in (1) is called the orthogonal projection of $\mathbf{y}$ onto $W$ and often is written as $\operatorname{proj}_W \mathbf{y}$. See Figure 2.

FIGURE 2 The orthogonal projection of $\mathbf{y}$ onto $W$.

PROOF Let $\{\mathbf{u}_1, \ldots, \mathbf{u}_p\}$ be any orthogonal basis for $W$, and define $\hat{\mathbf{y}}$ by (2). Then $\hat{\mathbf{y}}$ is in $W$ because $\hat{\mathbf{y}}$ is a linear combination of the basis $\mathbf{u}_1, \ldots, \mathbf{u}_p$. Let $\mathbf{z} = \mathbf{y} - \hat{\mathbf{y}}$. Since $\mathbf{u}_1$ is orthogonal to $\mathbf{u}_2, \ldots, \mathbf{u}_p$, it follows from (2) that

$$\mathbf{z} \cdot \mathbf{u}_1 = (\mathbf{y} - \hat{\mathbf{y}}) \cdot \mathbf{u}_1 = \mathbf{y} \cdot \mathbf{u}_1 - \left(\frac{\mathbf{y} \cdot \mathbf{u}_1}{\mathbf{u}_1 \cdot \mathbf{u}_1}\right)\mathbf{u}_1 \cdot \mathbf{u}_1 - 0 - \cdots - 0 = \mathbf{y} \cdot \mathbf{u}_1 - \mathbf{y} \cdot \mathbf{u}_1 = 0$$

Thus $\mathbf{z}$ is orthogonal to $\mathbf{u}_1$. Similarly, $\mathbf{z}$ is orthogonal to each $\mathbf{u}_j$ in the basis for $W$. Hence $\mathbf{z}$ is orthogonal to every vector in $W$. That is, $\mathbf{z}$ is in $W^\perp$.

To show that the decomposition in (1) is unique, suppose $\mathbf{y}$ can also be written as $\mathbf{y} = \hat{\mathbf{y}}_1 + \mathbf{z}_1$, with $\hat{\mathbf{y}}_1$ in $W$ and $\mathbf{z}_1$ in $W^\perp$. Then $\hat{\mathbf{y}} + \mathbf{z} = \hat{\mathbf{y}}_1 + \mathbf{z}_1$ (since both sides equal $\mathbf{y}$), and so

$$\hat{\mathbf{y}} - \hat{\mathbf{y}}_1 = \mathbf{z}_1 - \mathbf{z}$$

This equality shows that the vector $\mathbf{v} = \hat{\mathbf{y}} - \hat{\mathbf{y}}_1$ is in $W$ and in $W^\perp$ (because $\mathbf{z}_1$ and $\mathbf{z}$ are both in $W^\perp$, and $W^\perp$ is a subspace). Hence $\mathbf{v} \cdot \mathbf{v} = 0$, which shows that $\mathbf{v} = \mathbf{0}$. This proves that $\hat{\mathbf{y}} = \hat{\mathbf{y}}_1$ and also $\mathbf{z}_1 = \mathbf{z}$.

EXAMPLE 2 Let $\mathbf{u}_1 = \begin{bmatrix} 2 \\ 5 \\ -1 \end{bmatrix}$, $\mathbf{u}_2 = \begin{bmatrix} -2 \\ 1 \\ 1 \end{bmatrix}$, and $\mathbf{y} = \begin{bmatrix} 1 \\ 2 \\ 3 \end{bmatrix}$. Observe that $\{\mathbf{u}_1, \mathbf{u}_2\}$ is an orthogonal basis for $W = \operatorname{Span}\{\mathbf{u}_1, \mathbf{u}_2\}$. Write $\mathbf{y}$ as the sum of a vector in $W$ and a vector orthogonal to $W$.

SOLUTION The orthogonal projection of $\mathbf{y}$ onto $W$ is

$$\hat{\mathbf{y}} = \frac{\mathbf{y} \cdot \mathbf{u}_1}{\mathbf{u}_1 \cdot \mathbf{u}_1}\mathbf{u}_1 + \frac{\mathbf{y} \cdot \mathbf{u}_2}{\mathbf{u}_2 \cdot \mathbf{u}_2}\mathbf{u}_2 = \frac{9}{30}\begin{bmatrix} 2 \\ 5 \\ -1 \end{bmatrix} + \frac{3}{6}\begin{bmatrix} -2 \\ 1 \\ 1 \end{bmatrix} = \begin{bmatrix} -2/5 \\ 2 \\ 1/5 \end{bmatrix}$$

Then

$$\mathbf{y} - \hat{\mathbf{y}} = \begin{bmatrix} 1 \\ 2 \\ 3 \end{bmatrix} - \begin{bmatrix} -2/5 \\ 2 \\ 1/5 \end{bmatrix} = \begin{bmatrix} 7/5 \\ 0 \\ 14/5 \end{bmatrix}$$

Theorem 8 ensures that $\mathbf{y} - \hat{\mathbf{y}}$ is in $W^\perp$. To check the calculations, however, it is a good idea to verify that $\mathbf{y} - \hat{\mathbf{y}}$ is orthogonal to both $\mathbf{u}_1$ and $\mathbf{u}_2$ and hence to all of $W$. The desired decomposition of $\mathbf{y}$ is

$$\mathbf{y} = \begin{bmatrix} 1 \\ 2 \\ 3 \end{bmatrix} = \begin{bmatrix} -2/5 \\ 2 \\ 1/5 \end{bmatrix} + \begin{bmatrix} 7/5 \\ 0 \\ 14/5 \end{bmatrix}$$

FIGURE 3 The orthogonal projection of $\mathbf{y}$ is the sum of its projections onto one-dimensional subspaces that are mutually orthogonal.

Properties of Orthogonal Projections

If $\mathbf{y}$ is in $W = \operatorname{Span}\{\mathbf{u}_1, \ldots, \mathbf{u}_p\}$, then $\operatorname{proj}_W \mathbf{y} = \mathbf{y}$.

THEOREM 9 The Best Approximation Theorem

Let $W$ be a subspace of $\mathbb{R}^n$, let $\mathbf{y}$ be any vector in $\mathbb{R}^n$, and let $\hat{\mathbf{y}}$ be the orthogonal projection of $\mathbf{y}$ onto $W$. Then $\hat{\mathbf{y}}$ is the closest point in $W$ to $\mathbf{y}$, in the sense that

$$\|\mathbf{y} - \hat{\mathbf{y}}\| < \|\mathbf{y} - \mathbf{v}\| \tag{3}$$

for all $\mathbf{v}$ in $W$ distinct from $\hat{\mathbf{y}}$.

FIGURE 4 The orthogonal projection of $\mathbf{y}$ onto $W$ is the closest point in $W$ to $\mathbf{y}$.

EXAMPLE 3 If $\mathbf{u}_1 = \begin{bmatrix} 2 \\ 5 \\ -1 \end{bmatrix}$, $\mathbf{u}_2 = \begin{bmatrix} -2 \\ 1 \\ 1 \end{bmatrix}$, $\mathbf{y} = \begin{bmatrix} 1 \\ 2 \\ 3 \end{bmatrix}$, and $W = \operatorname{Span}\{\mathbf{u}_1, \mathbf{u}_2\}$, as in Example 2, then the closest point in $W$ to $\mathbf{y}$ is

$$\hat{\mathbf{y}} = \frac{\mathbf{y} \cdot \mathbf{u}_1}{\mathbf{u}_1 \cdot \mathbf{u}_1}\mathbf{u}_1 + \frac{\mathbf{y} \cdot \mathbf{u}_2}{\mathbf{u}_2 \cdot \mathbf{u}_2}\mathbf{u}_2 = \begin{bmatrix} -2/5 \\ 2 \\ 1/5 \end{bmatrix}$$

EXAMPLE 4 The distance from a point $\mathbf{y}$ in $\mathbb{R}^n$ to a subspace $W$ is defined as the distance from $\mathbf{y}$ to the nearest point in $W$. Find the distance from $\mathbf{y}$ to $W = \operatorname{Span}\{\mathbf{u}_1, \mathbf{u}_2\}$, where

$$\mathbf{y} = \begin{bmatrix} -1 \\ -5 \\ 10 \end{bmatrix}, \quad \mathbf{u}_1 = \begin{bmatrix} 5 \\ -2 \\ 1 \end{bmatrix}, \quad \mathbf{u}_2 = \begin{bmatrix} 1 \\ 2 \\ -1 \end{bmatrix}$$

SOLUTION By the Best Approximation Theorem, the distance from $\mathbf{y}$ to $W$ is $\|\mathbf{y} - \hat{\mathbf{y}}\|$, where $\hat{\mathbf{y}} = \operatorname{proj}_W \mathbf{y}$. Since $\{\mathbf{u}_1, \mathbf{u}_2\}$ is an orthogonal basis for $W$,

$$\hat{\mathbf{y}} = \frac{15}{30}\mathbf{u}_1 + \frac{-21}{6}\mathbf{u}_2 = \frac{1}{2}\begin{bmatrix} 5 \\ -2 \\ 1 \end{bmatrix} - \frac{7}{2}\begin{bmatrix} 1 \\ 2 \\ -1 \end{bmatrix} = \begin{bmatrix} -1 \\ -8 \\ 4 \end{bmatrix}$$

$$\mathbf{y} - \hat{\mathbf{y}} = \begin{bmatrix} -1 \\ -5 \\ 10 \end{bmatrix} - \begin{bmatrix} -1 \\ -8 \\ 4 \end{bmatrix} = \begin{bmatrix} 0 \\ 3 \\ 6 \end{bmatrix}, \qquad \|\mathbf{y} - \hat{\mathbf{y}}\|^2 = 3^2 + 6^2 = 45$$

The distance from $\mathbf{y}$ to $W$ is $\sqrt{45} = 3\sqrt{5}$.

THEOREM 10 If $\{\mathbf{u}_1, \ldots, \mathbf{u}_p\}$ is an orthonormal basis for a subspace $W$ of $\mathbb{R}^n$, then

$$\operatorname{proj}_W \mathbf{y} = (\mathbf{y} \cdot \mathbf{u}_1)\mathbf{u}_1 + (\mathbf{y} \cdot \mathbf{u}_2)\mathbf{u}_2 + \cdots + (\mathbf{y} \cdot \mathbf{u}_p)\mathbf{u}_p \tag{4}$$

If $U = [\,\mathbf{u}_1 \; \mathbf{u}_2 \; \cdots \; \mathbf{u}_p\,]$, then

$$\operatorname{proj}_W \mathbf{y} = UU^T\mathbf{y} \quad \text{for all } \mathbf{y} \text{ in } \mathbb{R}^n \tag{5}$$

PROOF Formula (4) follows immediately from (2) in Theorem 8. Also, (4) shows that $\operatorname{proj}_W \mathbf{y}$ is a linear combination of the columns of $U$ using the weights $\mathbf{y} \cdot \mathbf{u}_1, \mathbf{y} \cdot \mathbf{u}_2, \ldots, \mathbf{y} \cdot \mathbf{u}_p$. The weights can be written as $\mathbf{u}_1^T\mathbf{y}, \mathbf{u}_2^T\mathbf{y}, \ldots, \mathbf{u}_p^T\mathbf{y}$, showing that they are the entries in $U^T\mathbf{y}$ and justifying (5).
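Formula (2) of Theorem 8 translates directly into a few lines of code. The sketch below is our own (the helper name `proj_w` is hypothetical, not the text's); it reproduces Examples 2 and 4 of this section:

```python
import numpy as np

def proj_w(y, basis):
    """Orthogonal projection of y onto Span(basis), where basis is an
    *orthogonal* list of vectors, using formula (2) of Theorem 8."""
    return sum((np.dot(y, u) / np.dot(u, u)) * u for u in basis)

# Example 2: decompose y = y_hat + z with y_hat in W and z in W-perp.
u1 = np.array([2.0, 5.0, -1.0])
u2 = np.array([-2.0, 1.0, 1.0])
y  = np.array([1.0, 2.0, 3.0])
y_hat = proj_w(y, [u1, u2])            # (-2/5, 2, 1/5)
z = y - y_hat                          # (7/5, 0, 14/5)
print(y_hat, z)
print(np.dot(z, u1), np.dot(z, u2))    # both 0: z really is orthogonal to W

# Example 4: the distance from y to W is ||y - proj_W(y)||.
u1 = np.array([5.0, -2.0, 1.0])
u2 = np.array([1.0, 2.0, -1.0])
y  = np.array([-1.0, -5.0, 10.0])
print(np.linalg.norm(y - proj_w(y, [u1, u2])))   # sqrt(45) = 3*sqrt(5) ~ 6.708
```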
Suppose $U$ is an $n \times p$ matrix with orthonormal columns, and let $W$ be the column space of $U$. Then

$$U^TU\mathbf{x} = I_p\mathbf{x} = \mathbf{x} \quad \text{for all } \mathbf{x} \text{ in } \mathbb{R}^p \qquad \text{(Theorem 6)}$$

$$UU^T\mathbf{y} = \operatorname{proj}_W \mathbf{y} \quad \text{for all } \mathbf{y} \text{ in } \mathbb{R}^n \qquad \text{(Theorem 10)}$$

If $U$ is an $n \times n$ (square) matrix with orthonormal columns, then $U$ is an orthogonal matrix, the column space $W$ is all of $\mathbb{R}^n$, and $UU^T\mathbf{y} = I\mathbf{y} = \mathbf{y}$ for all $\mathbf{y}$ in $\mathbb{R}^n$.

THE GRAM-SCHMIDT PROCESS

The Gram-Schmidt process is a simple algorithm for producing an orthogonal or orthonormal basis for any nonzero subspace of $\mathbb{R}^n$.

EXAMPLE 1 Let $W = \operatorname{Span}\{\mathbf{x}_1, \mathbf{x}_2\}$, where $\mathbf{x}_1 = \begin{bmatrix} 3 \\ 6 \\ 0 \end{bmatrix}$ and $\mathbf{x}_2 = \begin{bmatrix} 1 \\ 2 \\ 2 \end{bmatrix}$. Construct an orthogonal basis $\{\mathbf{v}_1, \mathbf{v}_2\}$ for $W$.

SOLUTION The subspace $W$ is shown in Figure 1, along with $\mathbf{x}_1$, $\mathbf{x}_2$, and the projection $\mathbf{p}$ of $\mathbf{x}_2$ onto $\mathbf{x}_1$. The component of $\mathbf{x}_2$ orthogonal to $\mathbf{x}_1$ is $\mathbf{x}_2 - \mathbf{p}$, which is in $W$ because it is formed from $\mathbf{x}_2$ and a multiple of $\mathbf{x}_1$. Let $\mathbf{v}_1 = \mathbf{x}_1$ and

$$\mathbf{v}_2 = \mathbf{x}_2 - \mathbf{p} = \mathbf{x}_2 - \frac{\mathbf{x}_2 \cdot \mathbf{x}_1}{\mathbf{x}_1 \cdot \mathbf{x}_1}\mathbf{x}_1 = \begin{bmatrix} 1 \\ 2 \\ 2 \end{bmatrix} - \frac{15}{45}\begin{bmatrix} 3 \\ 6 \\ 0 \end{bmatrix} = \begin{bmatrix} 0 \\ 0 \\ 2 \end{bmatrix}$$

Then $\{\mathbf{v}_1, \mathbf{v}_2\}$ is an orthogonal set of nonzero vectors in $W$. Since $\dim W = 2$, the set $\{\mathbf{v}_1, \mathbf{v}_2\}$ is a basis for $W$.

FIGURE 1 Construction of an orthogonal basis $\{\mathbf{v}_1, \mathbf{v}_2\}$.

EXAMPLE 2 Let $\mathbf{x}_1 = \begin{bmatrix} 1 \\ 1 \\ 1 \\ 1 \end{bmatrix}$, $\mathbf{x}_2 = \begin{bmatrix} 0 \\ 1 \\ 1 \\ 1 \end{bmatrix}$, and $\mathbf{x}_3 = \begin{bmatrix} 0 \\ 0 \\ 1 \\ 1 \end{bmatrix}$. Then $\{\mathbf{x}_1, \mathbf{x}_2, \mathbf{x}_3\}$ is clearly linearly independent and thus is a basis for a subspace $W$ of $\mathbb{R}^4$. Construct an orthogonal basis for $W$.

SOLUTION

Step 1. Let $\mathbf{v}_1 = \mathbf{x}_1$ and $W_1 = \operatorname{Span}\{\mathbf{x}_1\} = \operatorname{Span}\{\mathbf{v}_1\}$.

Step 2. Let $\mathbf{v}_2$ be the vector produced by subtracting from $\mathbf{x}_2$ its projection onto the subspace $W_1$. That is, let

$$\mathbf{v}_2 = \mathbf{x}_2 - \operatorname{proj}_{W_1}\mathbf{x}_2 = \mathbf{x}_2 - \frac{\mathbf{x}_2 \cdot \mathbf{v}_1}{\mathbf{v}_1 \cdot \mathbf{v}_1}\mathbf{v}_1 \quad (\text{since } \mathbf{v}_1 = \mathbf{x}_1)$$

$$= \begin{bmatrix} 0 \\ 1 \\ 1 \\ 1 \end{bmatrix} - \frac{3}{4}\begin{bmatrix} 1 \\ 1 \\ 1 \\ 1 \end{bmatrix} = \begin{bmatrix} -3/4 \\ 1/4 \\ 1/4 \\ 1/4 \end{bmatrix}$$

As in Example 1, $\mathbf{v}_2$ is the component of $\mathbf{x}_2$ orthogonal to $\mathbf{x}_1$, and $\{\mathbf{v}_1, \mathbf{v}_2\}$ is an orthogonal basis for the subspace $W_2$ spanned by $\mathbf{x}_1$ and $\mathbf{x}_2$.

Step 2' (optional). If appropriate, scale $\mathbf{v}_2$ to simplify later computations. Since $\mathbf{v}_2$ has fractional entries, it is convenient to scale it by a factor of 4 and replace $\{\mathbf{v}_1, \mathbf{v}_2\}$ by the orthogonal basis

$$\mathbf{v}_1 = \begin{bmatrix} 1 \\ 1 \\ 1 \\ 1 \end{bmatrix}, \quad \mathbf{v}_2' = \begin{bmatrix} -3 \\ 1 \\ 1 \\ 1 \end{bmatrix}$$

Step 3. Let $\mathbf{v}_3$ be the vector produced by subtracting from $\mathbf{x}_3$ its projection onto the subspace $W_2$. Use the orthogonal basis $\{\mathbf{v}_1, \mathbf{v}_2'\}$ to compute this projection onto $W_2$:

$$\operatorname{proj}_{W_2}\mathbf{x}_3 = \underbrace{\frac{\mathbf{x}_3 \cdot \mathbf{v}_1}{\mathbf{v}_1 \cdot \mathbf{v}_1}\mathbf{v}_1}_{\text{projection of } \mathbf{x}_3 \text{ onto } \mathbf{v}_1} + \underbrace{\frac{\mathbf{x}_3 \cdot \mathbf{v}_2'}{\mathbf{v}_2' \cdot \mathbf{v}_2'}\mathbf{v}_2'}_{\text{projection of } \mathbf{x}_3 \text{ onto } \mathbf{v}_2'} = \frac{2}{4}\begin{bmatrix} 1 \\ 1 \\ 1 \\ 1 \end{bmatrix} + \frac{2}{12}\begin{bmatrix} -3 \\ 1 \\ 1 \\ 1 \end{bmatrix} = \begin{bmatrix} 0 \\ 2/3 \\ 2/3 \\ 2/3 \end{bmatrix}$$

Then $\mathbf{v}_3$ is the component of $\mathbf{x}_3$ orthogonal to $W_2$, namely,

$$\mathbf{v}_3 = \mathbf{x}_3 - \operatorname{proj}_{W_2}\mathbf{x}_3 = \begin{bmatrix} 0 \\ 0 \\ 1 \\ 1 \end{bmatrix} - \begin{bmatrix} 0 \\ 2/3 \\ 2/3 \\ 2/3 \end{bmatrix} = \begin{bmatrix} 0 \\ -2/3 \\ 1/3 \\ 1/3 \end{bmatrix}$$

See Figure 2 for a diagram of this construction. Observe that $\mathbf{v}_3$ is in $W$, because $\mathbf{x}_3$ and $\operatorname{proj}_{W_2}\mathbf{x}_3$ are both in $W$. Thus $\{\mathbf{v}_1, \mathbf{v}_2', \mathbf{v}_3\}$ is an orthogonal set of nonzero vectors and hence a linearly independent set in $W$. Note that $W$ is three-dimensional since it was defined by a basis of three vectors. Thus $\{\mathbf{v}_1, \mathbf{v}_2', \mathbf{v}_3\}$ is an orthogonal basis for $W$.

FIGURE 2 $W_2 = \operatorname{Span}\{\mathbf{v}_1, \mathbf{v}_2'\}$.

THEOREM 11 The Gram-Schmidt Process

Given a basis $\{\mathbf{x}_1, \ldots, \mathbf{x}_p\}$ for a nonzero subspace $W$ of $\mathbb{R}^n$, define

$$\mathbf{v}_1 = \mathbf{x}_1$$
$$\mathbf{v}_2 = \mathbf{x}_2 - \frac{\mathbf{x}_2 \cdot \mathbf{v}_1}{\mathbf{v}_1 \cdot \mathbf{v}_1}\mathbf{v}_1$$
$$\vdots$$
$$\mathbf{v}_p = \mathbf{x}_p - \frac{\mathbf{x}_p \cdot \mathbf{v}_1}{\mathbf{v}_1 \cdot \mathbf{v}_1}\mathbf{v}_1 - \cdots - \frac{\mathbf{x}_p \cdot \mathbf{v}_{p-1}}{\mathbf{v}_{p-1} \cdot \mathbf{v}_{p-1}}\mathbf{v}_{p-1}$$

Then $\{\mathbf{v}_1, \ldots, \mathbf{v}_p\}$ is an orthogonal basis for $W$. In addition,

$$\operatorname{Span}\{\mathbf{v}_1, \ldots, \mathbf{v}_k\} = \operatorname{Span}\{\mathbf{x}_1, \ldots, \mathbf{x}_k\} \quad \text{for } 1 \le k \le p$$
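The recursion in Theorem 11 maps directly onto a short loop. Here is a minimal sketch of it (our own code, with NumPy as an assumed tool), run on the basis of Example 2; note that it projects each $\mathbf{x}_k$ against the already-constructed $\mathbf{v}_j$ exactly as the theorem's formula does, and omits optional scaling steps like Step 2' since they do not change the spans:

```python
import numpy as np

def gram_schmidt(xs):
    """Orthogonal basis from a basis, following Theorem 11:
    v_k = x_k - sum_{j<k} ((x_k . v_j)/(v_j . v_j)) v_j."""
    vs = []
    for x in xs:
        v = x.astype(float)
        for u in vs:
            v = v - (np.dot(x, u) / np.dot(u, u)) * u
        vs.append(v)
    return vs

# The basis of Example 2.
xs = [np.array([1, 1, 1, 1]),
      np.array([0, 1, 1, 1]),
      np.array([0, 0, 1, 1])]
for v in gram_schmidt(xs):
    print(v)   # (1,1,1,1), (-3/4,1/4,1/4,1/4), (0,-2/3,1/3,1/3)
```

Dividing each output vector by its norm would turn the orthogonal basis into an orthonormal one.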

INNER PRODUCT SPACES

An inner product on a vector space $V$ is a function that, to each pair of vectors $\mathbf{u}$ and $\mathbf{v}$ in $V$, associates a real number $(\mathbf{u}, \mathbf{v})$ and satisfies the following axioms, for all $\mathbf{u}$, $\mathbf{v}$, $\mathbf{w}$ in $V$ and all scalars $c$:

1. $(\mathbf{u}, \mathbf{v}) = (\mathbf{v}, \mathbf{u})$
2. $(\mathbf{u} + \mathbf{v}, \mathbf{w}) = (\mathbf{u}, \mathbf{w}) + (\mathbf{v}, \mathbf{w})$
3. $(c\mathbf{u}, \mathbf{v}) = c(\mathbf{u}, \mathbf{v})$
4. $(\mathbf{u}, \mathbf{u}) \ge 0$ and $(\mathbf{u}, \mathbf{u}) = 0$ if and only if $\mathbf{u} = \mathbf{0}$

A vector space with an inner product is called an inner product space.

EXAMPLE 1 Fix any two positive numbers—say, 4 and 5—and for vectors $\mathbf{u} = (u_1, u_2)$ and $\mathbf{v} = (v_1, v_2)$ in $\mathbb{R}^2$, set

$$(\mathbf{u}, \mathbf{v}) = 4u_1v_1 + 5u_2v_2 \tag{1}$$

Show that equation (1) defines an inner product.

SOLUTION Certainly Axiom 1 is satisfied, because $(\mathbf{u}, \mathbf{v}) = 4u_1v_1 + 5u_2v_2 = 4v_1u_1 + 5v_2u_2 = (\mathbf{v}, \mathbf{u})$. If $\mathbf{w} = (w_1, w_2)$, then

$$(\mathbf{u} + \mathbf{v}, \mathbf{w}) = 4(u_1 + v_1)w_1 + 5(u_2 + v_2)w_2 = 4u_1w_1 + 5u_2w_2 + 4v_1w_1 + 5v_2w_2 = (\mathbf{u}, \mathbf{w}) + (\mathbf{v}, \mathbf{w})$$

This verifies Axiom 2. For Axiom 3, compute

$$(c\mathbf{u}, \mathbf{v}) = 4(cu_1)v_1 + 5(cu_2)v_2 = c(4u_1v_1 + 5u_2v_2) = c(\mathbf{u}, \mathbf{v})$$

For Axiom 4, note that $(\mathbf{u}, \mathbf{u}) = 4u_1^2 + 5u_2^2 \ge 0$, and $4u_1^2 + 5u_2^2 = 0$ only if $u_1 = u_2 = 0$, that is, if $\mathbf{u} = \mathbf{0}$. Also, $(\mathbf{0}, \mathbf{0}) = 0$. So (1) defines an inner product on $\mathbb{R}^2$.

EXAMPLE 2 Let $t_0, \ldots, t_n$ be distinct real numbers. For $p$ and $q$ in $\mathbb{P}_n$, define

$$(p, q) = p(t_0)q(t_0) + p(t_1)q(t_1) + \cdots + p(t_n)q(t_n) \tag{2}$$

Inner product Axioms 1-3 are readily checked. For Axiom 4, note that

$$(p, p) = [p(t_0)]^2 + [p(t_1)]^2 + \cdots + [p(t_n)]^2 \ge 0$$

Also, $(\mathbf{0}, \mathbf{0}) = 0$. (The boldface zero here denotes the zero polynomial, the zero vector in $\mathbb{P}_n$.) If $(p, p) = 0$, then $p$ must vanish at $n + 1$ points: $t_0, \ldots, t_n$. This is possible only if $p$ is the zero polynomial, because the degree of $p$ is less than $n + 1$. Thus (2) defines an inner product on $\mathbb{P}_n$.

EXAMPLE 3 Let $V$ be $\mathbb{P}_2$, with the inner product from Example 2, where $t_0 = 0$, $t_1 = \frac{1}{2}$, and $t_2 = 1$. Let $p(t) = 12t^2$ and $q(t) = 2t - 1$. Compute $(p, q)$ and $(q, q)$.

SOLUTION

$$(p, q) = p(0)q(0) + p\left(\tfrac{1}{2}\right)q\left(\tfrac{1}{2}\right) + p(1)q(1) = (0)(-1) + (3)(0) + (12)(1) = 12$$

$$(q, q) = [q(0)]^2 + \left[q\left(\tfrac{1}{2}\right)\right]^2 + [q(1)]^2 = (-1)^2 + (0)^2 + (1)^2 = 2$$

Lengths, Distances, and Orthogonality

Let $V$ be an inner product space, with the inner product denoted by $(\mathbf{u}, \mathbf{v})$. Just as in $\mathbb{R}^n$, we define the length, or norm, of a vector $\mathbf{v}$ to be the scalar

$$\|\mathbf{v}\| = \sqrt{(\mathbf{v}, \mathbf{v})}$$

Equivalently, $\|\mathbf{v}\|^2 = (\mathbf{v}, \mathbf{v})$. (This definition makes sense because $(\mathbf{v}, \mathbf{v}) \ge 0$, but the definition does not say that $(\mathbf{v}, \mathbf{v})$ is a "sum of squares," because $\mathbf{v}$ need not be an element of $\mathbb{R}^n$.) A unit vector is one whose length is 1. The distance between $\mathbf{u}$ and $\mathbf{v}$ is $\|\mathbf{u} - \mathbf{v}\|$. Vectors $\mathbf{u}$ and $\mathbf{v}$ are orthogonal if $(\mathbf{u}, \mathbf{v}) = 0$.

EXAMPLE 4 Let $\mathbb{P}_2$ have the inner product of Example 3. Compute the lengths of the vectors $p(t) = 12t^2$ and $q(t) = 2t - 1$.

SOLUTION

$$\|p\|^2 = (p, p) = [p(0)]^2 + \left[p\left(\tfrac{1}{2}\right)\right]^2 + [p(1)]^2 = 0 + (3)^2 + (12)^2 = 153, \qquad \|p\| = \sqrt{153}$$

From Example 3, $(q, q) = 2$. Hence $\|q\| = \sqrt{2}$.

EXAMPLE 5 Let $V$ be $\mathbb{P}_4$ with the inner product in Example 2, involving evaluation of polynomials at $-2$, $-1$, $0$, $1$, and $2$, and view $\mathbb{P}_2$ as a subspace of $V$. Produce an orthogonal basis for $\mathbb{P}_2$ by applying the Gram-Schmidt process to the polynomials $1$, $t$, and $t^2$.

SOLUTION The inner product depends only on the values of a polynomial at $-2, \ldots, 2$, so we list the values of each polynomial as a vector in $\mathbb{R}^5$, underneath the name of the polynomial:

$$\begin{array}{ccc} 1 & t & t^2 \\ \begin{bmatrix} 1 \\ 1 \\ 1 \\ 1 \\ 1 \end{bmatrix} & \begin{bmatrix} -2 \\ -1 \\ 0 \\ 1 \\ 2 \end{bmatrix} & \begin{bmatrix} 4 \\ 1 \\ 0 \\ 1 \\ 4 \end{bmatrix} \end{array}$$

The inner product of two polynomials in $V$ equals the (standard) inner product of their corresponding vectors in $\mathbb{R}^5$. Observe that $t$ is orthogonal to the constant function $1$. So take $p_0(t) = 1$ and $p_1(t) = t$. For $p_2$, use the vectors in $\mathbb{R}^5$ to compute the projection of $t^2$ onto $\operatorname{Span}\{p_0, p_1\}$:

$$(t^2, p_0) = (t^2, 1) = 4 + 1 + 0 + 1 + 4 = 10, \qquad (p_0, p_0) = 5$$
$$(t^2, p_1) = (t^2, t) = -8 + (-1) + 0 + 1 + 8 = 0$$

The orthogonal projection of $t^2$ onto $\operatorname{Span}\{1, t\}$ is $\frac{10}{5}p_0 + 0p_1 = 2p_0$. Thus

$$p_2(t) = t^2 - 2p_0(t) = t^2 - 2$$

An orthogonal basis for the subspace $\mathbb{P}_2$ of $V$ is:

$$\begin{array}{ccc} p_0 & p_1 & p_2 \\ \begin{bmatrix} 1 \\ 1 \\ 1 \\ 1 \\ 1 \end{bmatrix} & \begin{bmatrix} -2 \\ -1 \\ 0 \\ 1 \\ 2 \end{bmatrix} & \begin{bmatrix} 2 \\ -1 \\ -2 \\ -1 \\ 2 \end{bmatrix} \end{array} \tag{3}$$
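The evaluation inner product of Example 2 is easy to experiment with numerically, because a polynomial enters only through its vector of values at the chosen points, so the inner product reduces to an ordinary dot product of value vectors. Here is a minimal sketch (our own, with NumPy assumed) reproducing the computations of Examples 3 and 4:

```python
import numpy as np

# Evaluation inner product (2): (p, q) = p(t0)q(t0) + ... + p(tn)q(tn).
ts = np.array([0.0, 0.5, 1.0])   # t0, t1, t2 from Example 3

p = 12 * ts**2                   # values of p(t) = 12t^2:  (0, 3, 12)
q = 2 * ts - 1                   # values of q(t) = 2t - 1: (-1, 0, 1)

print(np.dot(p, q))              # (p, q) = 12, as in Example 3
print(np.dot(q, q))              # (q, q) = 2
print(np.sqrt(np.dot(p, p)))     # ||p|| = sqrt(153), as in Example 4
```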
Best Approximation in Inner Product Spaces

A common problem in applied mathematics involves a vector space $V$ whose elements are functions. The problem is to approximate a function $f$ in $V$ by a function $g$ from a specified subspace $W$ of $V$. The "closeness" of the approximation of $f$ depends on the way $\|f - g\|$ is defined. We will consider only the case in which the distance between $f$ and $g$ is determined by an inner product. In this case, the best approximation to $f$ by functions in $W$ is the orthogonal projection of $f$ onto the subspace $W$.

EXAMPLE 6 Let $V$ be $\mathbb{P}_4$ with the inner product in Example 5, and let $p_0$, $p_1$, and $p_2$ be the orthogonal basis found in Example 5 for the subspace $\mathbb{P}_2$. Find the best approximation to $p(t) = 5 - \frac{1}{2}t^4$ by polynomials in $\mathbb{P}_2$.

SOLUTION The values of $p_0$, $p_1$, and $p_2$ at the numbers $-2$, $-1$, $0$, $1$, and $2$ are listed in the $\mathbb{R}^5$ vectors in (3) above. The corresponding values for $p$ are $-3$, $9/2$, $5$, $9/2$, and $-3$. Compute

$$(p, p_0) = 8, \quad (p, p_1) = 0, \quad (p, p_2) = -31, \quad (p_0, p_0) = 5, \quad (p_2, p_2) = 14$$

Then the best approximation in $V$ to $p$ by polynomials in $\mathbb{P}_2$ is

$$\hat{p} = \operatorname{proj}_{\mathbb{P}_2} p = \frac{(p, p_0)}{(p_0, p_0)}p_0 + \frac{(p, p_1)}{(p_1, p_1)}p_1 + \frac{(p, p_2)}{(p_2, p_2)}p_2 = \frac{8}{5}p_0 - \frac{31}{14}p_2 = \frac{8}{5} - \frac{31}{14}(t^2 - 2)$$

This polynomial is the closest to $p$ of all polynomials in $\mathbb{P}_2$, when the distance between polynomials is measured only at $-2$, $-1$, $0$, $1$, and $2$. See Figure 1.

Two Inequalities

Given a vector $\mathbf{v}$ in an inner product space $V$ and given a finite-dimensional subspace $W$, we may apply the Pythagorean Theorem to the orthogonal decomposition of $\mathbf{v}$ with respect to $W$ and obtain

$$\|\mathbf{v}\|^2 = \|\operatorname{proj}_W \mathbf{v}\|^2 + \|\mathbf{v} - \operatorname{proj}_W \mathbf{v}\|^2$$

See Figure 2. In particular, this shows that the norm of the projection of $\mathbf{v}$ onto $W$ does not exceed the norm of $\mathbf{v}$ itself. This simple observation leads to the following important inequality.

FIGURE 2 The hypotenuse is the longest side.

The Cauchy-Schwarz Inequality

For all $\mathbf{u}$, $\mathbf{v}$ in $V$,

$$|(\mathbf{u}, \mathbf{v})| \le \|\mathbf{u}\|\,\|\mathbf{v}\| \tag{4}$$

PROOF If $\mathbf{u} = \mathbf{0}$, then both sides of (4) are zero, and hence the inequality is true in this case. (See Practice Problem 1.) If $\mathbf{u} \neq \mathbf{0}$, let $W$ be the subspace spanned by $\mathbf{u}$. Recall that $\|c\mathbf{u}\| = |c|\,\|\mathbf{u}\|$ for any scalar $c$. Thus

$$\|\operatorname{proj}_W \mathbf{v}\| = \left\|\frac{(\mathbf{v}, \mathbf{u})}{(\mathbf{u}, \mathbf{u})}\mathbf{u}\right\| = \frac{|(\mathbf{v}, \mathbf{u})|}{|(\mathbf{u}, \mathbf{u})|}\|\mathbf{u}\| = \frac{|(\mathbf{u}, \mathbf{v})|}{\|\mathbf{u}\|^2}\|\mathbf{u}\| = \frac{|(\mathbf{u}, \mathbf{v})|}{\|\mathbf{u}\|}$$

Since $\|\operatorname{proj}_W \mathbf{v}\| \le \|\mathbf{v}\|$, we have $\dfrac{|(\mathbf{u}, \mathbf{v})|}{\|\mathbf{u}\|} \le \|\mathbf{v}\|$, which gives (4).

FIGURE 3 The lengths of the sides of a triangle.

The Triangle Inequality

For all $\mathbf{u}$, $\mathbf{v}$ in $V$,

$$\|\mathbf{u} + \mathbf{v}\| \le \|\mathbf{u}\| + \|\mathbf{v}\|$$

PROOF

$$\|\mathbf{u} + \mathbf{v}\|^2 = (\mathbf{u} + \mathbf{v}, \mathbf{u} + \mathbf{v}) = (\mathbf{u}, \mathbf{u}) + 2(\mathbf{u}, \mathbf{v}) + (\mathbf{v}, \mathbf{v})$$
$$\le \|\mathbf{u}\|^2 + 2|(\mathbf{u}, \mathbf{v})| + \|\mathbf{v}\|^2$$
$$\le \|\mathbf{u}\|^2 + 2\|\mathbf{u}\|\,\|\mathbf{v}\| + \|\mathbf{v}\|^2 \qquad \text{(Cauchy-Schwarz)}$$
$$= (\|\mathbf{u}\| + \|\mathbf{v}\|)^2$$

The triangle inequality follows immediately by taking square roots of both sides.
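To close, here is a minimal numerical sketch (our own; NumPy is an assumption) that redoes the projection of Example 6 and spot-checks the Cauchy-Schwarz inequality in the same evaluation inner product:

```python
import numpy as np

# Example 6: project p onto Span{p0, p1, p2} under the evaluation
# inner product at t = -2, -1, 0, 1, 2.
ts = np.array([-2.0, -1.0, 0.0, 1.0, 2.0])
p0 = np.ones_like(ts)     # values of 1
p1 = ts                   # values of t
p2 = ts**2 - 2            # values of p2(t) = t^2 - 2 from Example 5
p  = 5 - 0.5 * ts**4      # values of p(t) = 5 - (1/2)t^4: (-3, 9/2, 5, 9/2, -3)

p_hat = sum((np.dot(p, b) / np.dot(b, b)) * b for b in (p0, p1, p2))
print(p_hat)              # values of (8/5) - (31/14)(t^2 - 2) at the five points

# Cauchy-Schwarz spot check: |(p, p2)| <= ||p|| ||p2||.
lhs = abs(np.dot(p, p2))                               # 31
rhs = np.sqrt(np.dot(p, p)) * np.sqrt(np.dot(p2, p2))  # ~ 34.2
print(lhs <= rhs)                                      # True
```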
