
5.2 Methods of Curve Fitting
Data Linearization Method for y = Ce^{Ax}
Suppose that we are given the points (x_1, y_1), (x_2, y_2), ..., (x_N, y_N) and want to fit an exponential curve of the form

(1)    y = Ce^{Ax}.

The first step is to take the logarithm of both sides:

(2)    ln(y) = Ax + ln(C).

Then introduce the change of variables:

(3)    Y = ln(y),  X = x,  and  B = ln(C).

This results in a linear relation between the new variables X and Y:

(4)    Y = AX + B.

The original points (x_k, y_k) in the xy-plane are transformed into the points (X_k, Y_k) = (x_k, ln(y_k)) in the XY-plane. This process is called data linearization. Then the least-squares line (4) is fit to the points {(X_k, Y_k)}. The normal equations for finding A and B are
(5)    ( \sum_{k=1}^{N} X_k^2 ) A + ( \sum_{k=1}^{N} X_k ) B = \sum_{k=1}^{N} X_k Y_k,

       ( \sum_{k=1}^{N} X_k ) A + N B = \sum_{k=1}^{N} Y_k.

After A and B have been found, the parameter C in equation (1) is computed:

(6)    C = e^{B}.
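The whole procedure is short to code. The following MATLAB function is a minimal sketch of equations (2) through (6); it is not from the text, and the name expfit_lin and its argument layout are chosen here only for illustration.

function [C, A] = expfit_lin(x, y)
% EXPFIT_LIN  Fit y = C*exp(A*x) to data by data linearization, equations (2)-(6).
%   x and y are vectors of the same length holding the data points (x_k, y_k).
X = x(:);                        % X_k = x_k
Y = log(y(:));                   % Y_k = ln(y_k), the transformed ordinates
N = length(X);
% Coefficient matrix and right-hand side of the normal equations (5)
M = [sum(X.^2), sum(X);
     sum(X),    N];
rhs = [sum(X.*Y); sum(Y)];
sol = M \ rhs;                   % solve the 2-by-2 linear system for [A; B]
A = sol(1);
B = sol(2);
C = exp(B);                      % recover C from B, equation (6)

Calling [C, A] = expfit_lin([0 1 2 3 4], [1.5 2.5 3.5 5.0 7.5]) should reproduce the values found in Example 5.4 below.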

Example 5.4. Use the data linearization method and find the exponential fit y = Ce^{Ax} for the five data points (0, 1.5), (1, 2.5), (2, 3.5), (3, 5.0), and (4, 7.5).

Apply the transformation (3) to the original points and obtain

(7)    {(X_k, Y_k)} = {(0, ln(1.5)), (1, ln(2.5)), (2, ln(3.5)), (3, ln(5.0)), (4, ln(7.5))}
                    = {(0, 0.40547), (1, 0.91629), (2, 1.25276), (3, 1.60944), (4, 2.01490)}.

These transformed points are shown in Figure 5.4 and exhibit a linearized form. The equation of the least-squares line Y = AX + B for the points (7) in Figure 5.4 is

(8)    Y = 0.391202X + 0.457367.

Figure 5.4  The transformed data points {(X_k, Y_k)}.

Calculation of the coefficients for the normal equations in (5) is shown in Table 5.4.

Table 5.4  Obtaining Coefficients of the Normal Equations for the Transformed Data Points {(X_k, Y_k)}

   x_k     y_k     X_k     Y_k = ln(y_k)     X_k^2     X_k Y_k
   0.0     1.5     0.0     0.405465           0.0      0.000000
   1.0     2.5     1.0     0.916291           1.0      0.916291
   2.0     3.5     2.0     1.252763           4.0      2.505526
   3.0     5.0     3.0     1.609438           9.0      4.828314
   4.0     7.5     4.0     2.014903          16.0      8.059612
   Sums:          10.0     6.198860          30.0     16.309743

The resulting linear system (5) for determining A and B is

(9)    30A + 10B = 16.309743
       10A +  5B =  6.198860.

The solution is A = 0.3912023 and B = 0.457367. Then C is obtained with the calculation C = e^{0.457367} = 1.579910, and these values for A and C are substituted into equation (1) to obtain the exponential fit (see Figure 5.5):

(10)   y = 1.579910 e^{0.3912023x}    (fit by data linearization).
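As a quick cross-check of (8) through (10), MATLAB's polyfit can be asked for the least-squares line through the transformed points directly. This is an alternative to tabulating the sums by hand, not the procedure used in the text; the variable names below are arbitrary.

x = [0 1 2 3 4];
y = [1.5 2.5 3.5 5.0 7.5];
Y = log(y);                 % transformed ordinates, as in (7)
p = polyfit(x, Y, 1);       % least-squares line; p(1) is the slope A, p(2) the intercept B
A = p(1)                    % approximately 0.3912
B = p(2)                    % approximately 0.4574
C = exp(B)                  % approximately 1.5799, matching (10)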

Figure 5.5  The exponential fit y = 1.579910 e^{0.3912023x} obtained by using the data linearization method.

Nonlinear Least-Squares Method for y = Ce^{Ax}

Suppose that we are given the points (x_1, y_1), (x_2, y_2), ..., (x_N, y_N) and want to fit an exponential curve:

(11)   y = Ce^{Ax}.

The nonlinear least-squares procedure requires that we find a minimum of

(12)   E(A, C) = \sum_{k=1}^{N} (Ce^{Ax_k} - y_k)^2.

The partial derivatives of E(A, C) with respect to A and C are

(13)   \partial E/\partial A = 2 \sum_{k=1}^{N} (Ce^{Ax_k} - y_k)(C x_k e^{Ax_k})

and

(14)   \partial E/\partial C = 2 \sum_{k=1}^{N} (Ce^{Ax_k} - y_k)(e^{Ax_k}).

When the partial derivatives in (13) and (14) are set equal to zero and then simplified, the resulting normal equations are

(15)   C \sum_{k=1}^{N} x_k e^{2Ax_k} - \sum_{k=1}^{N} x_k y_k e^{Ax_k} = 0,

       C \sum_{k=1}^{N} e^{2Ax_k} - \sum_{k=1}^{N} y_k e^{Ax_k} = 0.
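One observation about the structure of (15), offered here as an aside rather than as the text's method: for a fixed A the second equation is linear in C, so C can be eliminated and the system collapses to a single equation in A. A sketch using MATLAB's fzero and the data of Example 5.4 (variable names are arbitrary):

x = [0 1 2 3 4];
y = [1.5 2.5 3.5 5.0 7.5];
% Second equation in (15), solved for C at a fixed value of A:
Cof = @(A) sum(y .* exp(A*x)) / sum(exp(2*A*x));
% Substituting C(A) into the first equation in (15) leaves one equation g(A) = 0:
g = @(A) Cof(A) * sum(x .* exp(2*A*x)) - sum(x .* y .* exp(A*x));
A = fzero(g, [0 1]);        % g changes sign on [0, 1] for this data
C = Cof(A);                 % A is approximately 0.38357 and C approximately 1.61090

The values agree with the nonlinear least-squares fit obtained in Example 5.5 below.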

The equations in (15) are nonlinear in the unknowns A and C and can be solved using Newton's method. This is a time-consuming computation and the iteration involved requires good starting values for A and C. Many software packages have a built-in minimization subroutine for functions of several variables that can be used to minimize E(A, C) directly. For example, the Nelder-Mead simplex algorithm can be used to minimize (12) directly and bypass the need for equations (13) through (15).

Example 5.5. Use the least-squares method and determine the exponential fit y = Ce^{Ax} for the five data points (0, 1.5), (1, 2.5), (2, 3.5), (3, 5.0), and (4, 7.5).

For this solution we must minimize the quantity E(A, C), which is

(16)   E(A, C) = (C - 1.5)^2 + (Ce^{A} - 2.5)^2 + (Ce^{2A} - 3.5)^2 + (Ce^{3A} - 5.0)^2 + (Ce^{4A} - 7.5)^2.

We use the fmins command in MATLAB to approximate the values of A and C that minimize E(A, C). First we define E(A, C) as an M-file in MATLAB.

function z=E(u)
A=u(1);
C=u(2);
z=(C-1.5).^2+(C.*exp(A)-2.5).^2+(C.*exp(2*A)-3.5).^2+ ...
  (C.*exp(3*A)-5.0).^2+(C.*exp(4*A)-7.5).^2;

Using the fmins command in the MATLAB Command Window and the initial values A = 1.0 and C = 1.0, we find

>> fmins('E',[1 1])
ans =
   0.38357046980073   1.61089952247928

Thus the exponential fit to the five data points is

(17)   y = 1.6108995 e^{0.3835705x}    (fit by nonlinear least squares).

A comparison of the solutions using data linearization and nonlinear least squares is given in Table 5.5 and Figure 5.6. There is a slight difference in the coefficients. For the purpose of interpolation it can be seen that the approximations differ by no more than 2% over the interval [0, 4] (see Table 5.5 and Figure 5.6). If there is a normal distribution of the errors in the data, (17) is usually the preferred choice. When extrapolation is made beyond the range of the data, the two solutions will diverge and the discrepancy increases to about 6% when x = 10.
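A remark on the MATLAB session above: fmins belongs to older MATLAB releases and has since been replaced by fminsearch, which applies the same Nelder-Mead simplex search. A roughly equivalent modern session, written with an anonymous function instead of the M-file (an adaptation, not part of the original text), is:

xdata = [0 1 2 3 4];
ydata = [1.5 2.5 3.5 5.0 7.5];
E = @(u) sum((u(2)*exp(u(1)*xdata) - ydata).^2);   % E(A, C) from (12), with u = [A C]
u = fminsearch(E, [1 1]);                          % same initial values A = 1.0, C = 1.0
A = u(1)                                           % approximately 0.3835705
C = u(2)                                           % approximately 1.6108995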

Source: Numerical Methods Using MATLAB, 4th Edition, John H. Mathews and Kurtis K. Fink, ISBN 0-13-065248-2, Prentice-Hall, Upper Saddle River, New Jersey, 2004, http://vig.prenhall.com/
