
Chapter 3.

Interpolation and Curve Fitting
Relevant Computer Lab Tutorials 3 and 4 are attached to the end of this Chapter
Many problems in engineering require mathematical functions to be fitted to discrete data. These data
typically correspond to experimental measurements or field observations, and the fitting functions used
are usually (but not always) polynomials. Once the mathematical function has been derived, it is
possible to interpolate between the known discrete values in a consistent and reliable way.
Broadly speaking, there are two different strategies for deriving approximation functions. The first
approach, which is generally described as the interpolation method, chooses the function so that it
matches the discrete data exactly at every point. The second approach, which is often loosely described
as curve fitting, chooses the function merely to provide a "good fit" to the discrete data but does not
necessarily pass through all of the points. Both of these techniques will be discussed in this Chapter.
An nth degree polynomial is defined by an expansion of the form

    pn(x) = a0 + a1 x + a2 x^2 + ... + an x^n                                   (3.1)

where a0, a1, ..., an are constants, some of which may be zero. Examples of polynomials are
    p1(x) = 2 + 3x                  (linear - first order)
    p1(x) = 3x                      (linear - first order)
    p2(x) = 4.2 + 3x + 2.75x^2      (quadratic - second order)
    p3(x) = 2.75x^2 + x^3           (cubic - third order)
Note that it is the highest power of x that determines the order of the polynomial.
3.1 Taylor Polynomial Interpolation
Taylor interpolation is used where a polynomial approximation to a known mathematical function is
needed near a specified point. The method does not attempt to approximate a function over an interval
and should not be used for this purpose.
To illustrate the steps in the Taylor method, consider a third degree polynomial approximation
    p3(x) = a0 + a1 x + a2 x^2 + a3 x^3                                         (3.2)
to the function f(x) = sin(x) in the vicinity of x0 = 0. The four constants a0, a1, a2, a3 are chosen so that
p3(x) matches f(x), and as many of its derivatives as possible, at the specified point. By inspection, we
can match the function and its first, second and third derivatives at the point x = 0 by imposing the
conditions
    p3(0) = f(0) = sin(0) = 0
    p3'(0) = f '(0) = cos(0) = 1
    p3''(0) = f ''(0) = -sin(0) = 0
    p3'''(0) = f '''(0) = -cos(0) = -1
Now from (3.2) we see that
    p3(0) = a0
    p3'(0) = a1
    p3''(0) = 2 a2
    p3'''(0) = 6 a3
and hence
    a0 = 0                                                                      (3.3)
    a1 = 1                                                                      (3.4)
    2 a2 = 0                                                                    (3.5)


    6 a3 = -1                                                                   (3.6)

Substituting in (3.2) gives the Taylor polynomial as

    p3(x) = x - (1/6) x^3                                                       (3.7)

Thus, in the vicinity of x = 0, we can write that

    sin(x) ≈ x - (1/6) x^3                                                      (3.8)

For example, at x = 0.2 radians, the Taylor approximation gives sin(x) ≈ 0.198667. This compares well with the exact value of sin(x) = 0.198669.

[Figure 3.1: Taylor polynomial approximation to sin(x) near x = 0 - plot of f(x) = sin(x) and p3(x) for -3 ≤ x ≤ 3]

A plot of the third degree Taylor polynomial, shown in Figure 3.1, indicates that it gives reasonably good approximations provided |x| ≤ 1.5 radians. For values of x outside this range, the approximation rapidly becomes inaccurate.

3.2 Lagrange Interpolation

Lagrange interpolation provides a method for defining a polynomial which approximates a function over an interval. The polynomial is defined uniquely by insisting that it matches the function exactly at a specified number of points. The general form of the Lagrange polynomial is

    Pn(x) = l0(x) f(x0) + l1(x) f(x1) + ... + ln(x) f(xn)                       (3.9)

where f(x0), f(x1), ..., f(xn) are known and l0(x), l1(x), ..., ln(x) are Lagrange interpolation functions. An exact fit is obtained at each of the n + 1 points by insisting that

    Pn(x0) = f(x0)
    Pn(x1) = f(x1)
    ..........                                                                  (3.10)

    Pn(xn) = f(xn)

For a set of n + 1 data points, the Lagrange interpolation functions have degree n and are defined by

    li(x) = [(x - x0)(x - x1) ... (x - xi-1)(x - xi+1) ... (x - xn)] /
            [(xi - x0)(xi - x1) ... (xi - xi-1)(xi - xi+1) ... (xi - xn)]

          = product over j = 0 to n, j ≠ i, of (x - xj) / (xi - xj)             (3.11)

for i = 0, 1, ..., n. The Lagrange interpolation functions li(x) vanish at xj for all j ≠ i and take the value 1 at xi. This property may be stated mathematically as

    li(xj) = 1 if i = j,    li(xj) = 0 if i ≠ j                                 (3.12)

for i, j = 0, 1, ..., n, and guarantees that the conditions of (3.10) are satisfied precisely.

To illustrate the use of Lagrange interpolation, consider the linear approximation of the function f(x) = x^2 over the interval [1, 2] using the points x0 = 1 and x1 = 2. The linear form of (3.9) gives the Lagrange polynomial as

    p1(x) = [(x - x1) / (x0 - x1)] f(x0) + [(x - x0) / (x1 - x0)] f(x1)         (3.13)

Using (3.11), we obtain the linear Lagrange interpolation functions as

    l0(x) = (x - x1) / (x0 - x1)        l1(x) = (x - x0) / (x1 - x0)

Note that

    l0(x0) = (x0 - x1) / (x0 - x1) = 1      l0(x1) = (x1 - x1) / (x0 - x1) = 0
    l1(x0) = (x0 - x0) / (x1 - x0) = 0      l1(x1) = (x1 - x0) / (x1 - x0) = 1

so that the conditions of (3.12) are satisfied. Substituting the known values

    x0 = 1      f(x0) = 1
    x1 = 2      f(x1) = 4

into (3.13) we obtain

    p1(x) = [(x - 2) / (1 - 2)] * 1 + [(x - 1) / (2 - 1)] * 4 = 3x - 2          (3.14)

This Lagrange polynomial, shown in Figure 3.2, is simply the equation of the straight line that connects the two end points.
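The interpolation functions (3.11) translate directly into code. The following short sketch is not part of the original notes; it is written in Python rather than MATLAB so that it is self-contained, and it implements the general Lagrange formula (3.9), checked here on the two-point example above:

```python
def lagrange(xs, fs, x):
    """Evaluate the Lagrange polynomial (3.9) through the points
    (xs[i], fs[i]) at the location x."""
    total = 0.0
    for i, (xi, fi) in enumerate(zip(xs, fs)):
        # li(x) from (3.11): product of (x - xj)/(xi - xj) over j != i
        li = 1.0
        for j, xj in enumerate(xs):
            if j != i:
                li *= (x - xj) / (xi - xj)
        total += li * fi
    return total

# linear example: f(x) = x^2 sampled at x0 = 1, x1 = 2 gives p1(x) = 3x - 2
print(lagrange([1.0, 2.0], [1.0, 4.0], 1.5))   # 2.5 = 3(1.5) - 2
```

At the data points themselves the routine reproduces f(x) exactly, as the conditions (3.10) require.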

As a check on (3.14), we note that

    p1(x0) = p1(1) = 3 * 1 - 2 = 1 = f(x0)
    p1(x1) = p1(2) = 3 * 2 - 2 = 4 = f(x1)

so that p1(x) matches f(x) precisely at the data points.

[Figure 3.2: Linear Lagrange interpolation - plot of f(x) = x^2 and p1(x) = 3x - 2 between x0 = 1.0 and x1 = 2.0]

In practice, the number of points required to approximate a function accurately over an interval must be determined by trial and error. It is also important to note that Lagrange interpolation should not be used outside the interval over which the approximation is defined, since the results are usually very inaccurate.

To illustrate the use of Lagrange interpolation for a more complex example, consider the data points

    point    x    f(x)
      0      1      1
      1      3      5
      2      6     10

and estimate the value of f(x) at x = 4.5. Since there are three data points, a second order Lagrange polynomial of the form

    p2(x) = l0(x) f(x0) + l1(x) f(x1) + l2(x) f(x2)

is appropriate. Using (3.11) we obtain

    l0(x) = (x - x1)(x - x2) / [(x0 - x1)(x0 - x2)] = (x - 3)(x - 6) / [(1 - 3)(1 - 6)] = (1/10)(x^2 - 9x + 18)

    l1(x) = (x - x0)(x - x2) / [(x1 - x0)(x1 - x2)] = (x - 1)(x - 6) / [(3 - 1)(3 - 6)] = -(1/6)(x^2 - 7x + 6)

    l2(x) = (x - x0)(x - x1) / [(x2 - x0)(x2 - x1)] = (x - 1)(x - 3) / [(6 - 1)(6 - 3)] = (1/15)(x^2 - 4x + 3)

so that

    p2(x) = (1/10)(x^2 - 9x + 18) * 1 - (1/6)(x^2 - 7x + 6) * 5 + (1/15)(x^2 - 4x + 3) * 10
          = -(1/15)(x^2 - 34x + 18)                                             (3.15)

As expected, p2(x) matches f(x) at all of the data points according to

    p2(1) = 1 = f(x0),    p2(3) = 5 = f(x1),    p2(6) = 10 = f(x2)

The plot of p2(x) shown in Figure 3.3 indicates how the Lagrange polynomial fits the three data points. Using (3.15), the required value of f(x) at x = 4.5 is

    p2(4.5) = -(1/15)(4.5^2 - 34 * 4.5 + 18) = 7.65

[Figure 3.3: Quadratic Lagrange interpolation polynomial - plot of p2(x) through the three data points, with p2(4.5) = 7.65 marked]

The steps involved in Lagrange interpolation may be summarised as follows:

Lagrange Interpolation Algorithm

In:  Number of points n, n lots of x-values stored in vector x, n lots of
     y-values stored in vector y, x-value at which interpolation is required xp
Out: yp, the interpolated value of y at xp

comment: loop over number of points
yp = 0
loop i = 1 to n
    l = 1
    comment: compute each Lagrange interpolation function
    loop j = 1 to n
        if i ≠ j then
            l = l * (xp - x(j)) / (x(i) - x(j))
        endif
    end loop
    comment: add term to Lagrange polynomial
    yp = yp + l * y(i)
end loop

Algorithm 3.1: Lagrange interpolation

The Lagrange approach is used very widely in finite element analysis and is thus one of the most important methods for interpolation. Its major drawback is that extra points can only be added by recomputing the interpolation afresh. No use can be made of any lower order polynomial that has already been established. The direct difference method described in the next section does not suffer from this shortcoming.

3.3 Difference Interpolation

Difference interpolation also provides an exact fit at each of the n + 1 data points and is based on an nth order polynomial of the form

    pn(x) = a0 + a1 (x - x0) + a2 (x - x0)(x - x1) + ...
            + an (x - x0)(x - x1) ... (x - xn-2)(x - xn-1)                      (3.16)

The constants a0, a1, ..., an are determined from the exact matching conditions

    pn(xi) = f(xi)    for i = 0, 1, ..., n                                      (3.17)

which may be solved in turn to give

    a0 = f(x0)

    a1 = [f(x1) - a0] / (x1 - x0)

    a2 = [f(x2) - a0 - a1 (x2 - x0)] / [(x2 - x0)(x2 - x1)]

    a3 = [f(x3) - a0 - a1 (x3 - x0) - a2 (x3 - x0)(x3 - x1)] /
         [(x3 - x0)(x3 - x1)(x3 - x2)]

    ......                                                                      (3.18)
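The recursion (3.18) lends itself to a simple pair of loops. The sketch below is an illustrative addition (Python rather than MATLAB, so it runs stand-alone); it computes the difference coefficients from (3.18) and evaluates (3.16) by nested multiplication:

```python
def difference_coeffs(xs, fs):
    """Coefficients a0, ..., an of the difference form (3.16),
    obtained from the matching conditions (3.17) via (3.18)."""
    a = []
    for i in range(len(xs)):
        value, prod = fs[i], 1.0
        for j in range(i):
            # subtract the known term aj * (xi - x0)...(xi - x_{j-1})
            value -= a[j] * prod
            prod *= xs[i] - xs[j]
        a.append(value / prod)   # prod = (xi - x0)...(xi - x_{i-1})
    return a

def difference_eval(xs, a, x):
    """Evaluate (3.16) using nested multiplication."""
    result = a[-1]
    for k in range(len(a) - 2, -1, -1):
        result = a[k] + (x - xs[k]) * result
    return result
```

Run on the four data points of the example that follows, `difference_coeffs` returns a = [1, 2, -1/15, -1/15], and appending the fourth point only appends the extra coefficient, which is exactly the advantage of the difference method.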

To illustrate the use of difference interpolation, consider the data points

    point    x    f(x)
      0      1      1
      1      3      5
      2      6     10
      3      5      9

and estimate f(x) at x = 4.5. Since there are four points, the final order of p(x) will be cubic. Note that the first three data points are identical to those used for Lagrangian interpolation. Before computing the cubic polynomial, we will derive the quadratic difference polynomial for the first three points and then modify this subsequently to incorporate the fourth point.

Applying (3.18) to the first three points furnishes

    a0 = f(x0) = 1

    a1 = [f(x1) - a0] / (x1 - x0) = (5 - 1) / (3 - 1) = 2

    a2 = [f(x2) - a0 - a1 (x2 - x0)] / [(x2 - x0)(x2 - x1)]
       = [10 - 1 - 2(6 - 1)] / [(6 - 1)(6 - 3)] = -1/15

so that (3.16) becomes

    p2(x) = 1 + 2(x - 1) - (1/15)(x - 1)(x - 3)                                 (3.19)
          = -(1/15)(x^2 - 34x + 18)                                             (3.20)

As predicted, this is identical to the polynomial obtained from Lagrangian interpolation. Because the quadratic polynomial passing through any three points in the plane is unique, we would expect the interpolating polynomial to be the same.

As mentioned previously, extra data points can be included in the difference interpolation merely by adding extra terms to an existing polynomial. This means that, unlike the Lagrangian approach, the interpolation does not have to be recomputed from scratch. Addition of the fourth point (5, 9) to the interpolation gives the extra constant

    a3 = [f(x3) - a0 - a1 (x3 - x0) - a2 (x3 - x0)(x3 - x1)] /
         [(x3 - x0)(x3 - x1)(x3 - x2)]
       = [9 - 1 - 2(5 - 1) + (1/15)(5 - 1)(5 - 3)] / [(5 - 1)(5 - 3)(5 - 6)]
       = -1/15                                                                  (3.21)

Equation (3.16) then furnishes the cubic interpolating polynomial as

    p3(x) = -(1/15)(x^2 - 34x + 18) - (1/15)(x - 1)(x - 3)(x - 6)
          = -(1/15)(x^3 - 9x^2 - 7x)                                            (3.22)

This curve, shown in Figure 3.4, estimates f(x) at x = 4.5 as

    p3(4.5) = -(1/15)(4.5^3 - 9 * 4.5^2 - 7 * 4.5) = 8.175

[Figure 3.4: Cubic difference interpolation - plot of p3(x) through the four data points, with p3(4.5) = 8.175 marked]

The ability to add extra points to the interpolation easily is a major advantage of the difference method.

3.4 Newton Forward Difference Interpolation for Equally Spaced Points

If the data is provided at equally spaced values of x, it is possible to derive simple formulas for the coefficients a0, a1, ..., an. Let h denote the spacing of all the x-values so that

    h = xi+1 - xi    for i = 0, 1, ..., n - 1

Equation (3.18) then gives the first two constants as

    a0 = f(x0)

    a1 = [f(x1) - a0] / (x1 - x0) = [f(x1) - f(x0)] / h = (f1 - f0) / h = Δf0 / h

To illustrate the Newton forward difference method.where fi denotes a forward difference in f(x) according to fi = f(xi + h) – f(xi) = fi+1 – fi (3. .25) permit the f0 terms to be evaluated recursively for any j. we will consider the data shown below point x f(x) = cos x 67 . n. To help understand how the required difference terms can be computed.1 Newton forward difference table In order to obtain the terms j f0 for j = 0. they are often tabulated in the form shown below. it follows that the jth coefficient can be evaluated using j f0 aj  j!h j (3. 1.. the third constant is defined as a2       f ( x2 )  a0  a1 ( x2  x0 ) ( x2  x0 )( x2  x1 ) f 2  f 0  2( f1  f 0 ) 2h 2 ( f 2  f1 )  ( f1  f 0 ) 2h 2 f1  f 0 2h 2 ( f 0 ) 2h 2 2 f 0 2h 2 More generally. x f f x0 f0 x1 f1  f0 2 f0  f1 x2 2 f1 f2  f2 x3 3f 3 f0  f1 4f 4 f0 3  f2 2 f3  f3 x4 2f f4 Table 3. whilst (3.23) Similarly.24) where j fi = j-1 fi+1 – j-1 fi (3.24) gives the required constant. all of the terms in the table have to be evaluated.25) Equations (3. .23) and (3..

To illustrate the Newton forward difference method, we will consider the data shown below

    point    x      f(x) = cos x
      0      20°      0.93969
      1      25°      0.90631
      2      30°      0.86603
      3      35°      0.81915
      4      40°      0.76604

and use the resulting approximation to estimate cos 27°. First, we arrange the data in the form of Table 3.1 to give

    x      f           Δf          Δ^2 f       Δ^3 f      Δ^4 f
    20°    0.93969
                      -0.03338
    25°    0.90631                -0.00690
                      -0.04028                0.00031
    30°    0.86603                -0.00659               0.00005
                      -0.04687                0.00036
    35°    0.81915                -0.00623
                      -0.05311
    40°    0.76604

Noting that the interval between the x-values is h = 5, the coefficients for the interpolating polynomial are given by (3.24) and are tabulated below

    j     Δ^j f0       aj = Δ^j f0 / (j! h^j)
    0      0.93969       0.93969
    1     -0.03338      -0.006676
    2     -0.00690      -0.000138
    3      0.00031       0.0000004
    4      0.00005       0.000000003

Substituting these values in (3.16), the quartic interpolating polynomial is given by

    p4(x) = 0.93969 - 0.006676 (x - 20) - 0.000138 (x - 20)(x - 25)
            + 0.0000004 (x - 20)(x - 25)(x - 30)
            + 0.000000003 (x - 20)(x - 25)(x - 30)(x - 35)

Inserting x = 27° we obtain

    p4(27) = 0.89101                                                            (3.26)

which is identical to the exact solution to 5 decimal places. Because of their small coefficients, the cubic and quartic terms are clearly of little importance in the interpolation. Dropping these terms gives the much simpler quadratic interpolation

    p2(x) = 0.93969 - 0.006676 (x - 20) - 0.000138 (x - 20)(x - 25)

This furnishes

    p2(27) = 0.89103                                                            (3.27)

which is accurate to four decimal places.

3.5 Curve Fitting Using Least Squares

In cases where discrete data is available, such as results from experiments, it is useful to be able to find a function which provides a 'best fit' to the points. Various criteria can be used to define the precise meaning of 'best fit', but the most common approach is to choose a function which minimises the sum of the squares of the deviations. Consider n lots of discrete data (x1, y1), (x2, y2), ..., (xn, yn) which we wish to model with a function of the form

    f(x) = a0 f0(x) + a1 f1(x) + ... + am fm(x)                                 (3.28)

In this equation, fj(x) for j = 0, 1, ..., m are chosen functions of x and the constants a0, a1, ..., am are determined so as to give the smallest deviations in a least squares sense. Let E denote the sum of the squares of the differences between f(x) and the actual values of y at each of the data points according to

    E = sum from i = 1 to n of [ f(xi) - yi ]^2

Substituting (3.28) gives

    E = sum from i = 1 to n of [ a0 f0(xi) + a1 f1(xi) + ... + am fm(xi) - yi ]^2

To minimise E, the constants a0, a1, ..., am must satisfy the (m + 1) conditions

    dE/da0 = 2 sum [ a0 f0(xi) + a1 f1(xi) + ... + am fm(xi) - yi ] f0(xi) = 0
    dE/da1 = 2 sum [ a0 f0(xi) + a1 f1(xi) + ... + am fm(xi) - yi ] f1(xi) = 0
    ......
    dE/dam = 2 sum [ a0 f0(xi) + a1 f1(xi) + ... + am fm(xi) - yi ] fm(xi) = 0

These correspond to a symmetric linear system of equations, of rank (m + 1), which can be written as

    A c = b                                                                     (3.29)

where the entries of A, b and the unknown coefficient vector c are

    A(j,k) = sum from i = 1 to n of fj(xi) fk(xi)        for j, k = 0, 1, ..., m
    b(j)   = sum from i = 1 to n of fj(xi) yi
    c      = [ a0  a1  ...  am ]'

A very common procedure is the fitting of a straight line to a collection of data points. In this special case we have f0(x) = 1, f1(x) = x and m = 1, so that equation (3.28) has the simple form

    f(x) = a0 + a1 x

The system of least squares equations then becomes

    | n         sum xi   | | a0 |   | sum yi    |
    | sum xi    sum xi^2 | | a1 | = | sum xi yi |

These can be solved by hand to give

    a0 = [ sum xi^2 * sum yi - sum xi * sum xi yi ] / [ n * sum xi^2 - (sum xi)^2 ]   (3.30)

    a1 = [ n * sum xi yi - sum xi * sum yi ] / [ n * sum xi^2 - (sum xi)^2 ]          (3.31)

To illustrate a linear least squares fit, consider a set of 11 x-y data points with xi = 1, 2, ..., 11, for which the required sums are

    sum xi = 66      sum yi = 97.1      sum xi^2 = 506      sum xi yi = 749.5

Substituting this data in (3.30) and (3.31) gives

    a0 = (506 * 97.1 - 66 * 749.5) / (11 * 506 - 66^2) = -0.276

    a1 = (11 * 749.5 - 66 * 97.1) / (11 * 506 - 66^2) = 1.517

so that the least squares best fit line is

    f(x) = -0.276 + 1.517 x

A plot of this equation, together with the data, is shown in Figure 3.5.

[Figure 3.5: Least squares fit with linear function]

To further illustrate the process of least squares fitting, consider the use of the function

    f(x) = a0 + a1 ln x

to fit a discrete set of x-y data. In this case we have f0(x) = 1, f1(x) = ln x and m = 1. The resulting system of equations to be solved becomes

    | n           sum ln xi      | | a0 |   | sum yi       |
    | sum ln xi   sum (ln xi)^2 | | a1 | = | sum ln xi yi |

which gives

    a0 = [ sum (ln xi)^2 * sum yi - sum ln xi * sum ln xi yi ] /
         [ n * sum (ln xi)^2 - (sum ln xi)^2 ]                                  (3.32)

    a1 = [ n * sum ln xi yi - sum ln xi * sum yi ] /
         [ n * sum (ln xi)^2 - (sum ln xi)^2 ]                                  (3.33)

The computations for a set of 5 points, with xi = 29, 50, 74, 103, ..., furnish the sums

    sum ln xi = 20.989        sum (ln xi)^2 = 89.989
    sum yi = 158.9            sum ln xi yi = 709.41

Using (3.32) and (3.33) these values give a0 = -111.125 and a1 = 34.019, so that the line of best fit, shown in Figure 3.6, is

    f(x) = -111.125 + 34.019 ln x

[Figure 3.6: Least squares fit with log function]

The steps involved in least squares curve fitting may be summarised as follows (polynomial fitting):

Polynomial Least Squares Fitting Algorithm

In:  Number of points n, n lots of x-values stored in vector x, n lots of
     y-values stored in vector y, number of non-constant terms in the fitting
     function m, x-value at which interpolation is required xp
Out: m + 1 least squares coefficients c(1), c(2), ..., c(m+1), and yp, the
     interpolated value of y at xp

comment: initialise matrix A and right hand side b
loop i = 1 to m + 1
    b(i) = 0
    loop j = 1 to m + 1
        a(i,j) = 0
    end loop
end loop
comment: loop over number of points
loop k = 1 to n
    comment: form matrix A and right hand side b
    loop i = 1 to m + 1
        b(i) = b(i) + x(k)^(i-1) * y(k)
        loop j = 1 to m + 1
            a(i,j) = a(i,j) + x(k)^(i-1) * x(k)^(j-1)
        end loop
    end loop
end loop
comment: compute coefficients c for least squares function
solve the linear system A c = b for c
comment: compute function value yp at xp

Algorithm 3.2: Polynomial least squares fitting

APPENDIX

Computer Lab, MATLAB tutorial 3: Curve fitting

First, we consider the following problem: the stopping distance of a car on a certain road is presented in the table below as a function of initial velocity:

    Velocity (km/h)         33.01  43.01  49.06  57.11  65.16  71.21  78.81  85.57  93.81
    Stopping Distance (m)    4.69   6.05  10.86  15.33  22.61  28.28  34.04  39.44  41.46

>> x = [33.01 43.01 49.06 57.11 65.16 71.21 78.81 85.57 93.81]'
>> y = [4.69 6.05 10.86 15.33 22.61 28.28 34.04 39.44 41.46]'

We wish to fit the stopping distance (y) as a linear function of velocity (x). In order to do this, in this exercise we will use two intrinsic MATLAB functions:

    Function     Description
    polyfit      Polynomial curve fit
    polyval      Evaluation of polynomial fit

The MATLAB polyfit function generates a 'best fit' polynomial of a specified order for a given set of data. For a linear fit, we will use:

>> p = polyfit (x, y, 1)

where '1' means linear or first order ('1' is replaced by '2' for a quadratic or second order fit, etc.). This returns:

p =
    0.6762  -20.2188

To evaluate the fitted polynomial at the velocity values (x) and plot the fit against the observed data we use:

>> y1 = polyval (p, x);
>> plot (x, y, '+', x, y1, '-'), grid on

In the resulting figure, you will see presented the original data and the linear fit. We can see that the linear fit is not really working well for the given set of data.

How do we define the 'goodness of fit'? By analyzing residuals. A measure of the goodness of fit is the residual, the difference between the observed and the predicted data. Compare the residuals for the linear and quadratic fits:

>> p = polyfit (x, y, 2)

p =
    0.0041    0.1765   -6.6235

>> y2 = polyval (p, x);
>> res1 = y - y1;
>> res2 = y - y2;
>> plot (x, res1, '+', x, res2, 's')

We can see that the quadratic fit is better than the linear one.
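The polyfit/polyval steps above can also be reproduced outside MATLAB. The following is a hedged Python sketch (numpy assumed; numpy.polyfit and numpy.polyval mirror the MATLAB calls, and the data vectors repeat the stopping-distance table as read above):

```python
import numpy as np

# stopping-distance data: velocity (km/h) vs distance (m)
x = np.array([33.01, 43.01, 49.06, 57.11, 65.16, 71.21, 78.81, 85.57, 93.81])
y = np.array([4.69, 6.05, 10.86, 15.33, 22.61, 28.28, 34.04, 39.44, 41.46])

p1 = np.polyfit(x, y, 1)          # linear fit, like polyfit(x, y, 1)
p2 = np.polyfit(x, y, 2)          # quadratic fit

res1 = y - np.polyval(p1, x)      # residuals of the linear fit
res2 = y - np.polyval(p2, x)      # residuals of the quadratic fit

# the quadratic model contains every straight line as a special case, so
# its least squares residual norm can never exceed that of the linear fit
print(np.linalg.norm(res1), np.linalg.norm(res2))
```

Comparing the two residual norms makes the "goodness of fit" discussion above quantitative.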

Exercises

(a) Calculate the norm of the residuals in the above example using the following expression:

    |res| = sqrt( sum from i = 1 to n of (yi - Ψ(xi))^2 )

    where Ψ(x) is the fitting function.

(b) Repeat the linear fitting procedure for the transformed variable Y = √y. Present a plot of the data (y) and the fitting function (Ψ2) and the residual. Calculate the norm of the residual. Compare your results with the quadratic fit findings obtained above. What is your conclusion?

(c) Can you suggest any physical reason why the linear fit (with Y = y) is not a good one to describe the stopping distance of a car as a function of velocity? (Consider equations of motion with a constant deceleration.)

Computer Lab, MATLAB tutorial 4: Curve fitting cont.

Now we wish to use MATLAB to determine the least squares coefficients a0 and a1 for the straight line of best fit to the x-y data contained in the following m-file example5:

function lsq_data = example5()
% each row of lsq_data is one [x y] data pair; the pairs are the
% eleven points used for the linear least squares fit of Section 3.5
lsq_data = [ ... ];

Running this m-file gives

>> z = example5

which returns the 11 x 2 array z; its first column holds the x-values 1, 2, ..., 11 and its second column the corresponding y-values. z can be split into a column vector x according to:

>> x = z(:,1)

x =
     1
     2
     3
     4
     5
     6
     7
     8
     9
    10
    11

and similarly for y. The easiest way for MATLAB to find a0 and a1 is by backslash division. We use equation (3.29) of the Notes, which for the straight line fit gives the two normal equations

    a1 * sum xi^2 + a0 * sum xi = sum xi yi
    a1 * sum xi   + a0 * n      = sum yi

Dividing through by n, these can be written as

    | mean(x.^2)   mean(x) | | a1 |   | mean(x.*y) |
    | mean(x)      1       | | a0 | = | mean(y)    |

By a suitable determination of A, and b given by:

>> b = [ mean(x.*y) mean(y) ]'

where the ' after ] produces a column vector, the vector coeff = [a1 a0]' is then obtained by:

>> coeff = A\b

coeff =
    1.5173
   -0.2764

Exercises

(a) Determine the column vector y from the data in example5.

(b) Set up the matrix A.

(c) Solve for coeff.

(d) Now swap the roles of the variables, treating y as the independent variable and x as the dependent one. Find the new least squares coefficients c0 and c1 for the straight line x = c0 y + c1. Compare the corresponding coefficients and the residuals.

(e) Compare the two lines of best fit graphically.
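To close, the backslash-division computation of tutorial 4 can be mirrored in any language. The following is a hedged Python sketch (numpy assumed), driven by the sums of the Section 3.5 data rather than the raw points:

```python
import numpy as np

# sums for the 11-point data set of Section 3.5
n = 11
sum_x, sum_y = 66.0, 97.1
sum_xx, sum_xy = 506.0, 749.5

# mean form of the normal equations, unknowns ordered [a1, a0]
A = np.array([[sum_xx / n, sum_x / n],
              [sum_x / n,  1.0     ]])
b = np.array([sum_xy / n, sum_y / n])

coeff = np.linalg.solve(A, b)   # the same system coeff = A\b solves in MATLAB
print(coeff)                    # approximately [ 1.5173, -0.2764 ]
```

This reproduces the slope and intercept of the best fit line f(x) = -0.276 + 1.517x obtained earlier.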