
**Curve Fitting Made Easy**

by Marko Ledvij

Scientists and engineers often want to represent empirical data using a model based on mathematical equations. With the correct model and calculus, one can determine important characteristics of the data, such as the rate of change anywhere on the curve (first derivative), the local minimum and maximum points of the function (zeros of the first derivative), and the area under the curve (integral). The goal of data (or curve) fitting is to find the parameter values that most closely match the data.

The models to which data are fitted depend on adjustable parameters. Take the formula Y = A*exp(–X/X0), in which X is the independent variable, Y is the dependent variable, and A and X0 are the parameters. In an experiment, one typically measures the variable Y at a discrete set of values of the variable X. Curve fitting provides two general uses for this formula:

- A person believes the formula describes the data and wants to know what parameter values of A and X0 correspond to the data.
- A person wants to know which of several formulas best describes the data.

To perform fitting, we define some function, which depends on the parameters, that measures the closeness between the data and the model. This function is then minimized with respect to the parameters, and the parameter values that minimize it are the best-fitting parameters. In some cases, the parameters are the coefficients of the terms of the model, for example, if the model is polynomial or Gaussian. In the simplest case, known as linear regression, a straight line is fitted to the data. In most scientific and engineering models, however, the dependent variable depends on the parameters in a nonlinear way.
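The closeness function described here is easy to sketch in a few lines of Python. This is a minimal illustration, not any particular package's API; the data are synthetic and the parameter values are hypothetical.

```python
import math

def model(x, A, X0):
    """The article's example model: Y = A * exp(-X / X0)."""
    return A * math.exp(-x / X0)

def sum_of_squares(xs, ys, A, X0):
    """Closeness measure: sum of squared vertical deviations
    between the data points and the model curve."""
    return sum((y - model(x, A, X0)) ** 2 for x, y in zip(xs, ys))

# Synthetic data generated exactly from A = 2.0, X0 = 5.0, so the
# true parameters give zero misfit and wrong ones give a larger misfit.
xs = [0.0, 1.0, 2.0, 3.0, 4.0]
ys = [model(x, 2.0, 5.0) for x in xs]
```

Minimizing `sum_of_squares` over A and X0 is exactly the fitting problem the rest of the article discusses.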

The Industrial Physicist, April/May 2003 © American Institute of Physics

Model selection

One needs a good understanding of the underlying physics, chemistry, electronics, and other properties of a problem to choose the right model. No graphing or analysis software can pick a model for the given data; it can only help in differentiating between models. Once a model is picked, one can do a rough assessment of it by plotting the data. At least some agreement is needed between the data and the model curve. If the model is supposed to represent exponential growth and the data are monotonically decreasing, the model is obviously wrong. Many commercial software programs perform linear and nonlinear regression analysis and curve fitting using the most common models. The better programs also allow users to specify their own fitting functions.

A typical example of exponential decay in physics is radioactive decay. The measurable rate of decay as a function of time, that is, the number of decays per unit of time, is given by

Λ(t) = Λ(0)e^(–λt)

where t is the elapsed time and λ is the probability of decay of one nucleus during one unit of time. By measuring the decay rate as a function of time and fitting the data to the exponential-decay law, one can infer the constant λ.

As another example, the following Gaussian function describes a normal distribution:

y(x) = [1/(σ√(2π))] exp[–(x – x0)²/(2σ²)]

Here, x0 is the mean value and σ is the standard deviation of the distribution. Usually, several measurements of a quantity x are performed. The results are binned, that is, assigned to discrete intervals called bins, as a function of x, such that y(x) represents the number of counts in the bin centered around x. The mean value and standard deviation are then obtained by curve fitting.

Sometimes it is possible to linearize a nonlinear model. For example, consider the first equation. If we take the logarithm of both the left- and right-hand sides, we get

log(Y) = log(A) – X/X0

Hence, log(Y) is a linear function of X. This means that one could try to fit the logarithm of the Y data to a linear function of the X data to find A and X0. This approach was widely used before the age of computers. There are problems with it, however, including the fact that the transformed data usually do not satisfy the assumptions of linear regression. Linear regression assumes that the scatter of points around the line follows a Gaussian distribution and that the standard deviation is the same at every value of the independent variable.

To find the values of the model's parameters that yield the curve closest to the data points, one must define a function that measures the closeness between the data and the model. This function depends on the method used to do the fitting, and it is typically chosen as the sum of the squares of the vertical deviations from each data point to the curve. (The deviations are first squared and then added up to avoid cancellations between positive and negative values.) This method assumes that the measurement errors are independent and normally distributed with a constant standard deviation.

Linear regression

The simplest form of modeling is linear regression, which is the prediction of one variable from another when the relationship between them is assumed to be linear. One variable is considered to be independent and the other dependent. A linear regression line has an equation of the form y = a + bx, where x is the independent variable and y is the dependent variable. The slope of the line is b, and a is the intercept (the value of y when x = 0). The goal is to find the values of a and b that make the line closest to the data. The chi-square estimator in this case (where the σi are the measurement errors; if they are not known, they should all be set to 1) is given by

χ²(a, b) = Σi=1…N [(yi – a – bxi)/σi]²

χ² is a function of the parameters of the model; N is the total number of points. We need to find the values of a and b that minimize χ². Typically, this means differentiating χ² with respect to the parameters a and b and setting the derivatives equal to 0. For a linear model, the minimum can be found and expressed in analytic form:

0 = ∂χ²/∂a = –2 Σi=1…N (yi – a – bxi)/σi²

0 = ∂χ²/∂b = –2 Σi=1…N xi(yi – a – bxi)/σi²

The resulting system of two equations with two unknowns can be solved explicitly for a and b, producing a regression line like that shown in Figure 1.
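The closed-form solution of those two normal equations can be written directly; here is a minimal pure-Python sketch (the function name and the data are illustrative, not from the article):

```python
def linear_fit(xs, ys, sigmas=None):
    """Solve the two normal equations for a (intercept) and b (slope).

    sigmas are the per-point measurement errors; if they are unknown,
    they are all set to 1, as the article suggests."""
    if sigmas is None:
        sigmas = [1.0] * len(xs)
    w = [1.0 / s ** 2 for s in sigmas]          # weights 1/sigma_i^2
    S = sum(w)
    Sx = sum(wi * x for wi, x in zip(w, xs))
    Sy = sum(wi * y for wi, y in zip(w, ys))
    Sxx = sum(wi * x * x for wi, x in zip(w, xs))
    Sxy = sum(wi * x * y for wi, x, y in zip(w, xs, ys))
    denom = S * Sxx - Sx ** 2                   # zero only if all x equal
    b = (S * Sxy - Sx * Sy) / denom
    a = (Sy - b * Sx) / S
    return a, b

# Points generated exactly from y = 1.5 + 0.5x are recovered exactly.
xs = [0.0, 1.0, 2.0, 3.0, 4.0]
ys = [1.5 + 0.5 * x for x in xs]
a, b = linear_fit(xs, ys)
```

With noisy data the same formulas return the least-squares line rather than an exact fit.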

Figure 1. In linear regression, the correct values of a and b in the straight-line equation are those that minimize the value of χ². (In the example shown, a = 1.73247 ± 0.12486 and b = 0.78897 ± 0.02052.)

Nonlinear models

In most cases, the relationship between measured values and measurement variables is nonlinear. Nonlinear curve fitting also seeks to find those parameter values that minimize the deviations between the observed y values and the expected y values. In nonlinear models, however, one usually cannot solve the equations for the parameters, so various iterative procedures are used instead.

Nonlinear fitting always starts with an initial guess of parameter values. One usually looks at the general shape of the model curve and compares it with the shape of the data points. For example, look at the sigmoidal function in Figure 2:

Y = Ab + (At – Ab)/(1 + exp[–(x – x0)/w])

This function has four parameters. Ab and At are two asymptotic values, that is, values that the function approaches but never quite reaches, at small and large x. The curve crosses over between these two asymptotic values in a region of x values whose approximate width is w and which is centered around x0. For the initial estimates of Ab and At, we can use the averages of the lower third and the upper third of the Y values of the data points. The initial value of x0 is the average of the X values of the crossover area, and the initial value of w is the width of the region of points between the two asymptotic values. A good understanding of the selected model is important because the initial guess of the parameter values must make sense in the real world. In addition, good fitting software will try to provide an initial guess for the functions built into it.

The nonlinear fitting procedure is iterative. The most widely used iterative procedure for nonlinear curve fitting is the Levenberg–Marquardt algorithm, which good curve-fitting software implements. Starting from some initial values of the parameters, the method tries to minimize

χ²(a, b) = Σi=1…N [(yi – f(xi, a, b))/σi]²

by successive small variations of the parameter values and reevaluation of χ² until a minimum is reached. The function f is the one to which we are trying to fit the data, and xi and yi are the X and Y values of the data points. Although the method involves sophisticated mathematics, end users of typical curve-fitting software usually only need to initialize the parameters and press a button to start the iterative fitting procedure.

Figure 2. In nonlinear curve fitting, such as with this sigmoidal function, minimizing the deviations between the observed and expected y values requires estimating parameter values and using iterative procedures.
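The initial-guess recipe for the sigmoid is easy to automate. The following sketch follows the heuristic described above (the function name is illustrative, and the data are synthetic):

```python
import math

def sigmoid_initial_guess(xs, ys):
    """Rough starting values (Ab, At, x0, w) for the sigmoid
    Y = Ab + (At - Ab) / (1 + exp(-(x - x0)/w)),
    per the article's heuristic: asymptotes from the lower and upper
    thirds of the Y values, x0 and w from the crossover region."""
    third = max(1, len(ys) // 3)
    ys_sorted = sorted(ys)
    Ab = sum(ys_sorted[:third]) / third          # lower asymptote estimate
    At = sum(ys_sorted[-third:]) / third         # upper asymptote estimate
    # Crossover region: points whose Y lies strictly between the asymptotes.
    mid_xs = [x for x, y in zip(xs, ys) if Ab < y < At]
    x0 = sum(mid_xs) / len(mid_xs)               # center of the crossover
    w = (max(mid_xs) - min(mid_xs)) / 2 or 1.0   # rough crossover width
    return Ab, At, x0, w

# Synthetic sigmoid data with Ab = 0, At = 10, x0 = 5, w = 1.
xs = [0.5 * i for i in range(21)]
ys = [10.0 / (1.0 + math.exp(-(x - 5.0))) for x in xs]
Ab, At, x0, w = sigmoid_initial_guess(xs, ys)
```

A guess like this is then handed to an iterative minimizer such as Levenberg–Marquardt as its starting point.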

Weighting

Often, curve fitting is done without weighting: in the above formulas for χ², the weights σi are all set to 1. If the experimental measurement errors are known, they should be used as the σi. They then provide an uneven weighting of different experimental points: the smaller the measurement error for a particular experimental point, the larger the weight of that point in χ². Proper curve-fitting software allows users to supply measurement errors for use in the fitting process.

If the experimental errors are not known exactly, it is sometimes reasonable to make an assumption. For example, if you believe that the deviation of the experimental data from the theoretical curve is proportional to the value of the data, then you can use

σi = yi

Here, the weight of each point, which is the inverse of the square of σi, is inversely proportional to the square of the value of the data point.
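The effect of this choice of weights can be seen in a short sketch; the model and data below are hypothetical illustrations:

```python
def chi_square(xs, ys, model, sigmas):
    """Weighted chi-square: each squared deviation is scaled by
    its measurement error sigma_i before being summed."""
    return sum(((y - model(x)) / s) ** 2 for x, y, s in zip(xs, ys, sigmas))

xs = [1.0, 2.0, 3.0]
ys = [2.1, 3.9, 6.2]
model = lambda x: 2.0 * x            # hypothetical model y = 2x

# All weights equal to 1: the unweighted case.
unweighted = chi_square(xs, ys, model, [1.0] * len(xs))
# sigma_i = y_i: proportional errors, so larger points get smaller weights.
proportional = chi_square(xs, ys, model, ys)
```

With σi = yi, each term becomes a squared *relative* deviation, so points with large values no longer dominate the fit.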



Interpreting results

In assessing modeling results, we must first see whether the program can fit the model to the data at all, that is, whether the iteration converged to a set of values for the parameters. If it did not, we probably made an error, such as choosing the wrong model or the wrong initial values. A look at the graphed data points and fitted curve can show whether the curve is close to the data. If it is not, the fit has failed, perhaps because the iterative procedure did not find a minimum of χ² or because the wrong model was chosen.

The parameter values must also make sense from the standpoint of the model. If, for example, one of the parameters is a temperature in kelvins and the value obtained from fitting is negative, we have obtained a physically meaningless result. This is true even if the curve seemingly fits the data well. In this example, we have likely chosen an incorrect model.

There is a quantitative value that describes how well the model fits the data. It is defined as χ² divided by the number of degrees of freedom and is referred to as the standard deviation of the residuals, which are the vertical distances of the data points from the line. The smaller this value, the better the fit.

An even better quantitative value is R², known as the goodness of fit. It is computed as the fraction of the total variation of the Y values of the data points that is attributable to the assumed model curve, and its values typically range from 0 to 1. Values close to 1 indicate a good fit.

To assess the certainty of the best-fit parameter values, we look at the obtained standard errors of the parameters, which indirectly tell us how well the curve fits the data. If the standard error is small, it means that a small variation in the parameter would produce a curve that fits the data much worse; therefore, we say that we know the value of that parameter well. If the standard error is large, a relatively large variation of that parameter would not spoil the fit much; therefore, we do not really know the parameter's value well.

Figure 3. The residual plot (blue) shows the deviations of the data points (black) from the fitted curve (red) and indicates by its nonrandomness a possible hidden variable.
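Both figures of merit are simple to compute once a fit is in hand. A sketch under the article's definitions (unweighted case; the fitted line and the data here are hypothetical):

```python
def fit_quality(xs, ys, predict, n_params):
    """Return (chi2/dof, R^2) for a fitted curve.

    The article calls chi2 divided by the degrees of freedom the
    'standard deviation of the residuals'; R^2 is the fraction of the
    total Y variation explained by the model."""
    residuals = [y - predict(x) for x, y in zip(xs, ys)]
    chi2 = sum(r * r for r in residuals)
    dof = len(xs) - n_params                     # points minus parameters
    mean_y = sum(ys) / len(ys)
    total_var = sum((y - mean_y) ** 2 for y in ys)
    return chi2 / dof, 1.0 - chi2 / total_var

xs = [0.0, 1.0, 2.0, 3.0]
ys = [0.1, 1.1, 1.9, 3.1]
line = lambda x: 0.05 + x        # hypothetical fitted line y = 0.05 + x
sd_resid, r2 = fit_quality(xs, ys, line, n_params=2)
```

For a straight line, n_params is 2 (intercept and slope); a model with more parameters uses up more degrees of freedom.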

Important intervals

Standard errors are related to another quantity usually obtained from the fitting and computed by curve-fitting software: the confidence interval. It indicates how good the obtained estimate of the value of the fitting function is at a particular value of X. The confidence interval corresponds to a desired level of statistical confidence, expressed as a percentage, usually 95% or 99%. For a particular value of X, we can claim, at that level of confidence, that the correct value of the fitting function lies within the obtained confidence interval. Good curve-fitting software computes the confidence interval and draws the confidence-interval lines on the graph together with the fitted curve and the data points. The smaller the confidence interval, the more likely the obtained curve is close to the real curve.

Another useful quantity usually produced by curve-fitting software is the prediction interval. Whereas the confidence interval deals with the estimate of the fitting function, the prediction interval deals with the data. It is computed with respect to a desired confidence level p, usually chosen between 95% and 99%. The prediction interval is the interval of Y values for a given X value within which p% of all experimental points in a series of repeated

measurements are expected to fall. The narrower the interval, the better the predictive nature of the model used. The prediction interval is represented by two curves lying on opposite sides of the fitted curve.

The residual plot provides insight into the quality of the fitted curve. Residuals are the deviations of the data points from the fitted curve. Perhaps the most important information they provide is whether there are any lurking (hidden) variables. In Figure 3, the data points are black squares, the fitted line is red, and the blue line represents the residuals. Normally, if the curve fits the data well, one expects the data points' deviations to be randomly distributed around the fitted curve. In this graph, however, for the data points whose X values are less than about 22, all residuals are positive; that is, the data points lie above the fitted curve. For data points with X values above 22, all residuals are negative and the data points fall below the fitted curve. This nonrandom behavior of the residuals suggests that the data may be driven by a hidden variable. So curve fitting, although it cannot tell you how, can certainly tell you when your chosen model needs improvement.
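The nonrandomness visible in Figure 3 can be checked crudely in code: with a good fit, the signs of consecutive residuals should flip often, while a long run of one sign hints at a lurking variable. A rough sketch (the run-length idea is a simplification, not a formal statistical test):

```python
def longest_sign_run(residuals):
    """Length of the longest run of same-sign residuals along the curve.
    A long run suggests the nonrandom pattern described in the article."""
    longest = run = 1
    for prev, cur in zip(residuals, residuals[1:]):
        run = run + 1 if (prev >= 0) == (cur >= 0) else 1
        longest = max(longest, run)
    return longest

# Residuals like Figure 3's: positive at small X, negative at large X.
figure3_like = [0.2, 0.3, 0.1, 0.2, -0.1, -0.2, -0.3, -0.2]
# Residuals that alternate in sign: consistent with random scatter.
random_like = [0.1, -0.2, 0.1, -0.1, 0.2, -0.1]
```

A formal version of this idea is the runs test; here the point is only that residual structure is easy to quantify once the residuals are computed.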

Further reading

1. www.originlab.com (Origin).
2. http://mathworld.wolfram.com/LeastSquaresFitting.html
3. Press, W. H.; et al. Numerical Recipes in C, 2nd ed.; Cambridge University Press: New York, 1992; 994 pp.; provides a detailed description of the Levenberg–Marquardt algorithm.
4. Bevington, P. R. Data Reduction and Error Analysis for the Physical Sciences; McGraw-Hill: New York, 1969.
5. Draper, N. R.; Smith, H. Applied Regression Analysis, 3rd ed.; John Wiley & Sons, 1998.

Biography

Marko Ledvij is a senior software developer at OriginLab Corp. in Framingham, Massachusetts (marko@originlab.com).
