… is very important. (Proof in Ref. [GenRef7] listed in App. 1.) Use it to prove

(a) $\|\mathbf{x}\|_2 \le \|\mathbf{x}\|_1 \le \sqrt{n}\,\|\mathbf{x}\|_2$

(b) $\|\mathbf{x}\|_\infty \le \|\mathbf{x}\|_2 \le \sqrt{n}\,\|\mathbf{x}\|_\infty$

(c) Formula (10) is often more practical than (9). Derive (10) from (9).

(d) Illustrate (11) with examples. Give examples of (12) with equality as well as with strict inequality.

34. Prove that the matrix norms (10), (11) in Sec. 20.3 satisfy the axioms of a norm:

$\|\mathbf{A}\| \ge 0$, and $\|\mathbf{A}\| = 0$ if and only if $\mathbf{A} = \mathbf{0}$;
$\|k\mathbf{A}\| = |k|\,\|\mathbf{A}\|$;
$\|\mathbf{A} + \mathbf{B}\| \le \|\mathbf{A}\| + \|\mathbf{B}\|$.

35. WRITING PROJECT. Norms and Their Use in This Section. Make a list of the most important of the many ideas covered in this section and write a two-page report on them.

20.5 Least Squares Method

Having discussed numerics for linear systems, we now turn to an important application, curve fitting, in which the solutions are obtained from linear systems.

In curve fitting we are given $n$ points (pairs of numbers) $(x_1, y_1), \dots, (x_n, y_n)$ and we want to determine a function $f(x)$ such that

$f(x_1) \approx y_1, \quad \dots, \quad f(x_n) \approx y_n.$

The type of function (for example, polynomials, exponential functions, sine and cosine functions) may be suggested by the nature of the problem (the underlying physical law, for instance), and in many cases a polynomial of a certain degree will be appropriate.

Let us begin with a motivation. If we require strict equality $f(x_1) = y_1, \dots, f(x_n) = y_n$ and use polynomials of sufficiently high degree, we may apply one of the methods discussed in Sec. 19.3 in connection with interpolation. However, in certain situations this would not be the appropriate solution of the actual problem. For instance, to the four points

(1) $\quad (-1.3, 0.103), \quad (-0.1, 1.099), \quad (0.2, 0.808), \quad (1.3, 1.897)$

there corresponds the interpolation polynomial $f(x) = x^3 - x + 1$ (Fig. 446), but if we graph the points, we see that they lie nearly on a straight line. Hence if these values are obtained in an experiment and thus involve an experimental error, and if the nature of the experiment suggests a linear relation, we had better fit a straight line through the points (Fig. 446). Such a line may be useful for predicting values to be expected for other values of $x$. A widely used principle for fitting straight lines is the method of least squares by Gauss and Legendre. In the present situation it may be formulated as follows.

Fig. 446. Approximate fitting of a straight line

Method of Least Squares. The straight line

(2) $\quad y = a + bx$

should be fitted through the given points $(x_1, y_1), \dots, (x_n, y_n)$ so that the sum of the squares of the distances of those points from the straight line is minimum, where the distance is measured in the vertical direction (the $y$-direction).

The point on the line with abscissa $x_j$ has the ordinate $a + bx_j$. Hence its distance from $(x_j, y_j)$ is $|y_j - a - bx_j|$ (Fig. 447) and the sum of squares is

$q = \sum_{j=1}^{n} (y_j - a - bx_j)^2.$

$q$ depends on $a$ and $b$. A necessary condition for $q$ to be minimum is

(3) $\quad \dfrac{\partial q}{\partial a} = -2 \sum (y_j - a - bx_j) = 0, \qquad \dfrac{\partial q}{\partial b} = -2 \sum x_j (y_j - a - bx_j) = 0$

(where we sum over $j$ from 1 to $n$). Dividing by 2, writing each sum as three sums, and taking one of them to the right, we obtain the result

(4) $\quad an + b \sum x_j = \sum y_j, \qquad a \sum x_j + b \sum x_j^2 = \sum x_j y_j.$

These equations are called the normal equations of our problem.

Fig. 447. Vertical distance of a point $(x_j, y_j)$ from a straight line $y = a + bx$

EXAMPLE 1 Straight Line

Using the method of least squares, fit a straight line to the four points given in formula (1).

Solution. We obtain

$n = 4, \quad \sum x_j = 0.1, \quad \sum x_j^2 = 3.43, \quad \sum y_j = 3.907, \quad \sum x_j y_j = 2.3839.$

Hence the normal equations are

$4a + 0.10b = 3.9070, \qquad 0.10a + 3.43b = 2.3839.$

The solution (rounded to 4D) is $a = 0.9601$, $b = 0.6670$, and we obtain the straight line (Fig. 446)

$y = 0.9601 + 0.6670x.$
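Example 1 is easy to verify computationally. The following sketch (Python with NumPy — our illustration, not part of the original text) sets up and solves the two normal equations (4) for the data in (1):

```python
import numpy as np

# Data points from formula (1)
x = np.array([-1.3, -0.1, 0.2, 1.3])
y = np.array([0.103, 1.099, 0.808, 1.897])

n = len(x)
# Normal equations (4):
#   a*n       + b*sum(x)    = sum(y)
#   a*sum(x)  + b*sum(x^2)  = sum(x*y)
A = np.array([[n,       x.sum()],
              [x.sum(), (x**2).sum()]])
rhs = np.array([y.sum(), (x * y).sum()])

a, b = np.linalg.solve(A, rhs)
print(f"y = {a:.4f} + {b:.4f} x")   # prints: y = 0.9601 + 0.6670 x
```

Running it reproduces the coefficients $a = 0.9601$, $b = 0.6670$ found above.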
Curve Fitting by Polynomials of Degree m

Our method of curve fitting can be generalized from a polynomial $y = a + bx$ to a polynomial of degree $m$

(5) $\quad p(x) = b_0 + b_1 x + \cdots + b_m x^m$

where $m \le n - 1$. Then $q$ takes the form

$q = \sum_{j=1}^{n} \bigl(y_j - p(x_j)\bigr)^2$

and depends on the $m + 1$ parameters $b_0, \dots, b_m$. Instead of (3) we then have the $m + 1$ conditions

(6) $\quad \dfrac{\partial q}{\partial b_0} = 0, \quad \dots, \quad \dfrac{\partial q}{\partial b_m} = 0,$

which give a system of $m + 1$ normal equations. In the case of a quadratic polynomial

(7) $\quad p(x) = b_0 + b_1 x + b_2 x^2$

the normal equations are (summation from 1 to $n$)

(8)
$\quad b_0 n + b_1 \sum x_j + b_2 \sum x_j^2 = \sum y_j$
$\quad b_0 \sum x_j + b_1 \sum x_j^2 + b_2 \sum x_j^3 = \sum x_j y_j$
$\quad b_0 \sum x_j^2 + b_1 \sum x_j^3 + b_2 \sum x_j^4 = \sum x_j^2 y_j.$

The derivation of (8) is left to the reader.

EXAMPLE 2 Quadratic Parabola by Least Squares

Fit a parabola through the data (0, 5), (2, 4), (4, 1), (6, 6), (8, 7).

Solution. For the normal equations we need

$n = 5, \quad \sum x_j = 20, \quad \sum x_j^2 = 120, \quad \sum x_j^3 = 800, \quad \sum x_j^4 = 5664,$
$\sum y_j = 23, \quad \sum x_j y_j = 104, \quad \sum x_j^2 y_j = 696.$

Hence these equations are

$\quad 5b_0 + 20b_1 + 120b_2 = 23$
$\quad 20b_0 + 120b_1 + 800b_2 = 104$
$\quad 120b_0 + 800b_1 + 5664b_2 = 696.$

Solving them, we obtain the quadratic least squares parabola (Fig. 448)

$y = 5.11429 - 1.41429x + 0.21429x^2.$

Fig. 448. Least squares parabola in Example 2

For a general polynomial (5) the normal equations form a linear system of equations in the unknowns $b_0, \dots, b_m$. When its matrix $\mathbf{M}$ is nonsingular, we can solve the system by Cholesky's method (Sec. 20.2) because then $\mathbf{M}$ is positive definite (and symmetric). When the equations are nearly linearly dependent, the normal equations may become ill-conditioned and should be replaced by other methods; see [E5], Sec. 5.7, listed in App. 1.

The least squares method also plays a role in statistics (see Sec. 25.9).
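To make the general case concrete, here is a small computational sketch (Python with NumPy; the helper name `lsq_poly` and the use of a general linear solver rather than an explicit Cholesky factorization are our own choices, not part of the text). It assembles the normal equations for an arbitrary degree $m$ and reproduces Example 2:

```python
import numpy as np

def lsq_poly(x, y, m):
    """Least squares polynomial of degree m via the normal equations.

    Builds the (m+1) x (m+1) system of this section: entry (j, k) of
    the matrix is sum(x**(j+k)), and the right side is sum(x**j * y).
    """
    x, y = np.asarray(x, float), np.asarray(y, float)
    M = np.array([[np.sum(x**(j + k)) for k in range(m + 1)]
                  for j in range(m + 1)])
    rhs = np.array([np.sum(x**j * y) for j in range(m + 1)])
    # M is symmetric and positive definite, so a Cholesky-based
    # solver (Sec. 20.2) could be used in place of a general one.
    return np.linalg.solve(M, rhs)   # coefficients b_0, ..., b_m

# Data of Example 2
x = [0, 2, 4, 6, 8]
y = [5, 4, 1, 6, 7]
b0, b1, b2 = lsq_poly(x, y, 2)
print(f"y = {b0:.5f} {b1:+.5f} x {b2:+.5f} x^2")
# prints: y = 5.11429 -1.41429 x +0.21429 x^2
```

Swapping `np.linalg.solve` for a Cholesky routine would exploit the symmetry and positive definiteness noted above; for larger $m$, however, the matrix rapidly becomes ill-conditioned, which is the practical reason for the warning in the preceding paragraph.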
Numeric Linear Algebra where ya determination ofthe ceelicients a (2) are given functions, requires the ay such that @ | Ue) — Facade Decomes minimum, This inegral is denoted by f— Proll, and ||f— Fpl 18 called the Lz-norm of J ~ Fo (L suggesting Lebesgue). A necessary condition () Polynomial, What form does (10) take if Frp(a) = ay + aye boo + aye? What is the coefficient matrix of (10) in this ease when the interval isose=1? (¢) Orthogonal functions. What are the solutions of (10) if yo(2), >, 95402 ae orthogonal on the interval 'b? (Por the definition, see Sec. 11.5. See also Sec. 11.6) Least Squares versus Inter- 3 15, CAS EXPERIMENT. for that minimum is given by a] f— Fyl?/44, ~ 0, i = 0,---,m the analog of (6). (a) Show that this Pot ne eee at a ee ae oe Je ag cana ON ns choice find the interpolation pelysomial andthe least ms u > squares approximations (linear, quadratic, etc.), = Compare abd cement. Erne toe @) (2,0, 1,0, OD. G0, 0) ©) (4,0, (3,0, 2.0, (10. OD, ’ (1,0), (2.0), (3,0), 4,0) (1) tae = J ternterae (©) Choote five pints oma ssh ne, €. (0.03 G, Dy... Move one point 1 unit upward and find the quadratic least squares polynomial. Do this for cach point. Graph the five polynomials on common axes, Which ofthe five motions has the greatest effet? = | rye 20.6 Matrix Eigenvalue Problems: Introduction We now come to the second part of our chapter on numeric linear algebra. In the first part of this chapter we discussed methods of solving systems of linear equations, which included Gauss elimination with backward substitution. Tis method is known as a direct method since it gives solutions after a prescribed amount of computation. The Gauss method was modified by Doolitle’s method, Crout’s method, and Cholesky's method, cach requiring fewer arithmetic operations than Gauss. Finally we presented indirect rethods of solving systems of linear equations. that is, the Gauss-Seidel method and the Jacobi iteration. The inditect methods requize an undetermined number of iterations. That number depends on how far we stat ffom the tue solution and what degree of accuracy ‘we requize. Moreover, depending on the problem, convergence may be fast or slow or our computation cycle might not even converge. This led to the concepts of il-conditioned problems and condition numbers that help us gain some control over difficulties inerent ‘he second pat ofthis chapter deals with some of the most important ideas and numeric rethods for matrix eigenvalue problems. This very extensive past of numeric linear algebra is of great practical smportance, with much research going on, and hundreds, if not thousands, of papers published in various mathematical journals (see the references in (ES), [E®], [B11], (E29)). We begin with the concepts and general results we shall need in explaining and applying numeric methods for eigenvalue problems. (For typical models of eigenvalue problems see Chap. 8.) SHENRI LEBESGUE (1875-1941), grea Freach mathematician, crestor ofa moder theory of measure aad integration in bis fous doctoral these of 1902,
