Curve Fitting and Interpolation Techniques: February 2018
All content following this page was uploaded by Mohamed tawfik ahmed Eraky on 17 February 2018.
Table of figures
Figure 1  Least squares of optimal curve
Figure 2  Basis functions for curve fitting
Figure 3  Linear regression
Figure 4  Second-order polynomial fit
Figure 5  Polynomial fits
Figure 6  Sinusoidal curve fitting, 1 term
Figure 7  Sinusoidal curve fitting, 2 terms
Figure 8  Chebyshev polynomials, orthogonal over the interval [-1, 1]
Figure 9  Chebyshev polynomial fit
Figure 10 Legendre polynomials, orthogonal over the interval [-1, 1]
Figure 11 Legendre curve fits (3rd and 4th degree)
Figure 12 Hermite first-order polynomial
Figure 13 Hermite curve fitting, 5th-order polynomials
Figure 14 Interpolation formulas
Figure 15 Newton forward interpolation
Figure 16 Newton backward difference
Figure 17 Gauss forward difference table
Figure 18 Gauss backward central difference
Figure 19 Lagrange interpolation polynomial
Figure 20 Lagrange interpolation polynomial, 2nd order
Figure 21 General Newton divided difference
Figure 22 Cubic spline
Introduction
Given a set of data that results from an experiment (simulation-based or otherwise), or perhaps taken from a real-life physical scenario, we assume there is some function that passes through the data points and perfectly represents the quantity of interest at all non-data points.
With curve fitting we simply want a function that is a good fit (typically a best fit in some sense) to the original data points; the approximating function does not have to pass through the original data set.
With interpolation we seek a function that lets us estimate values between the original data points. The interpolating function typically passes through every point of the original data set.
1. Curve fitting
Given a set of $(n+1)$ data points $(x_i, y_i)$, $i = 0, 1, 2, \ldots, n$, we want to capture the trend of the data across the entire range by assigning a single function that represents an optimal curve, such as an optimal straight line (polynomial of degree 1), an optimal polynomial of degree 2 or 3, and so on.
The optimal curve can be obtained by setting the function $F(x)$ in the form

$$F(x) = \sum_{i=0}^{n} a_i \phi_i(x)$$

where the $\phi_i(x)$ are the basis functions and the $a_i$ are the coefficients that make the error minimal. The least-squares method finds the optimal coefficient values by minimizing the sum $\delta$ of the squared residuals, setting each partial derivative to zero:

$$\frac{\partial \delta}{\partial a_j} = \sum_{\forall i} 2\left[a_0\phi_0(x_i) + a_1\phi_1(x_i) + a_2\phi_2(x_i) + \cdots + a_n\phi_n(x_i) - y_i\right]\phi_j(x_i) = 0$$

In matrix form $A\,a = B$, with every sum taken over all data points $i$:

$$\begin{bmatrix}
\sum \phi_0\phi_0 & \sum \phi_1\phi_0 & \cdots & \sum \phi_n\phi_0 \\
\sum \phi_0\phi_1 & \sum \phi_1\phi_1 & \cdots & \sum \phi_n\phi_1 \\
\vdots & \vdots & \ddots & \vdots \\
\sum \phi_0\phi_n & \sum \phi_1\phi_n & \cdots & \sum \phi_n\phi_n
\end{bmatrix}
\begin{bmatrix} a_0 \\ a_1 \\ \vdots \\ a_n \end{bmatrix}
=
\begin{bmatrix} \sum y_i\,\phi_0 \\ \sum y_i\,\phi_1 \\ \vdots \\ \sum y_i\,\phi_n \end{bmatrix}$$
So, the optimal values of the coefficients are given by $a = A^{-1}B$.
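The normal equations above can be assembled and solved numerically. Below is a minimal Python sketch (the function name `least_squares_fit` is illustrative, not from the report): it builds the design matrix of basis-function values, forms $A$ and $B$, and solves $a = A^{-1}B$.

```python
import numpy as np

def least_squares_fit(x, y, basis):
    """Least-squares coefficients for F(x) = sum_i a_i * phi_i(x).

    `basis` is a list of callables phi_i; the routine solves the normal
    equations A a = B with A[j][k] = sum_i phi_j(x_i) phi_k(x_i).
    """
    Phi = np.array([[phi(xi) for phi in basis] for xi in x])  # design matrix
    A = Phi.T @ Phi                       # normal-equation matrix
    B = Phi.T @ np.asarray(y, float)      # right-hand side
    return np.linalg.solve(A, B)          # optimal coefficients a = A^-1 B

# Straight-line fit with phi_0 = 1, phi_1 = x on exact data y = 1 + 2x
a = least_squares_fit([0, 1, 2, 3], [1, 3, 5, 7],
                      [lambda t: 1.0, lambda t: t])
# a ≈ [1.0, 2.0]
```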
The basis function $\phi(x)$ can be chosen from ordinary polynomials or from orthogonal functions, as summarized in Figure 2:
- Polynomials: optimal straight line (polynomial of degree 1); optimal polynomial of degree 2, 3, ...
- Orthogonal functions: sinusoidal (Fourier), Chebyshev, Legendre, Hermite
1.1 Polynomials
The optimal straight line, the so-called linear regression, is a polynomial of degree 1:

$$F(x) = ax + b$$

where $a_0 = b$, $a_1 = a$, $\phi_0 = 1$, $\phi_1 = x$, so the normal equations become

$$\begin{bmatrix} \sum 1 & \sum x_i \\ \sum x_i & \sum x_i^2 \end{bmatrix}
\begin{bmatrix} b \\ a \end{bmatrix}
= \begin{bmatrix} \sum y_i \\ \sum x_i y_i \end{bmatrix}$$
Example:
Given the following data set:

x_i: 0, 0.5, 1, 1.5, 2, 2.5, 3, 4, 4.5, 5
y_i: -0.4326, -0.1656, 3.1253, 4.7877, 4.8535, 8.6909, 9.2, 10.53, 11.3, 14.8
In matrix form:

$$\begin{bmatrix} 10 & 24 \\ 24 & 84 \end{bmatrix}
\begin{bmatrix} b \\ a \end{bmatrix}
= \begin{bmatrix} 66.6892 \\ 236.2283 \end{bmatrix}$$

which yields

$$\begin{bmatrix} b \\ a \end{bmatrix} = \begin{bmatrix} -0.2560 \\ 2.8854 \end{bmatrix}$$
So, the fit equation is $F(x) = 2.8854\,x - 0.2560$.
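As a cross-check, the same normal equations can be built from the raw data and solved numerically; a short Python sketch using NumPy:

```python
import numpy as np

# Data from the example above
x = np.array([0, 0.5, 1, 1.5, 2, 2.5, 3, 4, 4.5, 5])
y = np.array([-0.4326, -0.1656, 3.1253, 4.7877, 4.8535,
              8.6909, 9.2, 10.53, 11.3, 14.8])

# Normal equations [[n, sum x], [sum x, sum x^2]] [b, a]^T = [sum y, sum xy]^T
A = np.array([[len(x), x.sum()],
              [x.sum(), (x**2).sum()]])
rhs = np.array([y.sum(), (x * y).sum()])
b, a = np.linalg.solve(A, rhs)   # b ≈ -0.2560, a ≈ 2.8854
```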
Verifying the solution using the MATLAB curve fitting tool (cftool) gives the same coefficients (Figure 3).

For a general polynomial fit of degree $n$, the basis is $\phi_i = x^i$ and the normal equations become

$$\begin{bmatrix}
\sum 1 & \sum x_i & \sum x_i^2 & \cdots & \sum x_i^n \\
\sum x_i & \sum x_i^2 & \sum x_i^3 & \cdots & \sum x_i^{n+1} \\
\sum x_i^2 & \sum x_i^3 & \sum x_i^4 & \cdots & \sum x_i^{n+2} \\
\vdots & \vdots & \vdots & \ddots & \vdots
\end{bmatrix}
\begin{bmatrix} a_0 \\ a_1 \\ a_2 \\ \vdots \\ a_n \end{bmatrix}
=
\begin{bmatrix} \sum y_i \\ \sum y_i x_i \\ \sum y_i x_i^2 \\ \vdots \end{bmatrix}$$

For a second-degree polynomial this reduces to

$$\begin{bmatrix}
\sum 1 & \sum x_i & \sum x_i^2 \\
\sum x_i & \sum x_i^2 & \sum x_i^3 \\
\sum x_i^2 & \sum x_i^3 & \sum x_i^4
\end{bmatrix}
\begin{bmatrix} a_0 \\ a_1 \\ a_2 \end{bmatrix}
=
\begin{bmatrix} \sum y_i \\ \sum y_i x_i \\ \sum y_i x_i^2 \end{bmatrix}$$
Plugging in the data set above gives

$$\begin{bmatrix} 10 & 24 & 84 \\ 24 & 84 & 335.25 \\ 84 & 335.25 & 1433.3 \end{bmatrix}
\begin{bmatrix} a_0 \\ a_1 \\ a_2 \end{bmatrix}
= \begin{bmatrix} 66.6892 \\ 236.2283 \\ 937.6934 \end{bmatrix}$$

Hence the polynomial coefficients are

$$\begin{bmatrix} a_0 \\ a_1 \\ a_2 \end{bmatrix} = \begin{bmatrix} -0.8531 \\ 3.6915 \\ -0.1592 \end{bmatrix}$$

So the second-degree polynomial fit can be written as

$$F(x) = -0.8531 + 3.6915\,x - 0.1592\,x^2$$
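The 3×3 system can likewise be solved numerically; a small Python check using the rounded sums quoted in the text (so the result matches the quoted coefficients to roughly three decimals):

```python
import numpy as np

# Normal-equation system taken from the worked example above
A = np.array([[10, 24, 84],
              [24, 84, 335.25],
              [84, 335.25, 1433.3]])
B = np.array([66.6892, 236.2283, 937.6934])
a0, a1, a2 = np.linalg.solve(A, B)   # ≈ -0.8531, 3.6915, -0.1592
```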
Figure 5 Linear regression and higher-order polynomial fits
1.2 Orthogonal functions
If the basis functions are orthogonal over the interval $[x_0, x_f]$, the following holds:

$$\int_{x_0}^{x_f} \phi_i \phi_j \, dx = 0, \quad \forall\, i \neq j$$
so the coefficient matrix becomes diagonal:

$$\begin{bmatrix}
\sum \phi_0\phi_0 & 0 & \cdots & 0 \\
0 & \sum \phi_1\phi_1 & \cdots & 0 \\
\vdots & \vdots & \ddots & \vdots \\
0 & 0 & \cdots & \sum \phi_n\phi_n
\end{bmatrix}
\begin{bmatrix} a_0 \\ a_1 \\ \vdots \\ a_n \end{bmatrix}
=
\begin{bmatrix} \sum y_i\,\phi_0 \\ \sum y_i\,\phi_1 \\ \vdots \\ \sum y_i\,\phi_n \end{bmatrix}$$
1.2.1 Sinusoidal
The Fourier series is a sum of sine and cosine functions that describes a periodic signal. It can be represented in either trigonometric or exponential form.

Trigonometric form:

$$F(x) = a_0 + \sum_{k=1}^{M} \left[a_k \cos(k\omega x) + b_k \sin(k\omega x)\right]$$

where $\omega$ is the fundamental frequency of the signal represented by the given data points and $M$ is the number of terms (harmonics) in the series.

For a given period $T$, $\int_0^T \phi_0\phi_1\,dx = \int_0^T \cos(\omega x)\,dx = 0$, and similarly for the other cross terms, so all the off-diagonal entries of the normal-equation matrix vanish.
Example:
Given the following data set $(x, y)$ over the period $2\pi$, perform a sinusoidal curve fit.

x_data: 0, 0.3142, 0.6284, 0.9426, 1.2568, 1.571, 1.8852, 2.1994, 2.5136, 2.8278, 3.142, 3.4562, 3.7704, 4.0846, 4.3988, 4.713, 5.0272, 5.3414, 5.6556, 5.9698, 6.284
y_data: 0, 0.410416, 1.410235, -0.20364, 1.116759, 1.46198, 0.276659, 0.777028, -0.05945, 0.329838, -0.00039, -0.34495, -0.76425, -1.2596, -0.48596, -0.40428, -0.90361, -1.09992, 0.07293, -0.91587, 0.001682

For a Fourier series with the first harmonic,

$$F(x) = a_0 + a_1\cos(\omega x) + b_1\sin(\omega x)$$

in matrix form:

$$\begin{bmatrix}
\sum 1 & 0 & 0 \\
0 & \sum \cos^2(\omega x) & 0 \\
0 & 0 & \sum \sin^2(\omega x)
\end{bmatrix}
\begin{bmatrix} a_0 \\ a_1 \\ b_1 \end{bmatrix}
=
\begin{bmatrix} \sum y_i \\ \sum y_i\cos(\omega x) \\ \sum y_i\sin(\omega x) \end{bmatrix}$$
Calculating the coefficients of matrix $A$:

$$\sum 1 = 21, \quad \sum \cos^2(\omega x) = 6.4491, \quad \sum \sin^2(\omega x) = 14.5509$$

$$\begin{bmatrix} 21 & 0 & 0 \\ 0 & 6.4491 & 0 \\ 0 & 0 & 14.5509 \end{bmatrix}
\begin{bmatrix} a_0 \\ a_1 \\ b_1 \end{bmatrix}
= \begin{bmatrix} -0.5844 \\ 5.1780 \\ 1.1450 \end{bmatrix}
\quad\Rightarrow\quad
\begin{bmatrix} a_0 \\ a_1 \\ b_1 \end{bmatrix}
= \begin{bmatrix} -0.0278 \\ 0.8029 \\ 0.0787 \end{bmatrix}$$
Using the MATLAB curve fitting tool:
Plugging in two terms of the Fourier series,

$$F(x) = a_0 + a_1\cos(\omega x) + b_1\sin(\omega x) + a_2\cos(2\omega x) + b_2\sin(2\omega x)$$

yields the following fitting parameters:

a_0 = -0.4903, a_1 = 0.3685, b_1 = 0.8426, a_2 = 0.2056, b_2 = 0.559, ω = 0.5797
1.2.2 Chebyshev
Theorem
The fitting curve can be obtained by setting the basis functions to the Chebyshev polynomials:

$$F(x) = \sum_{i=0}^{n} a_i T_i(x)$$

$$T_0(x) = 1, \quad T_1(x) = x$$
$$T_2(x) = 2x^2 - 1$$
$$T_3(x) = 4x^3 - 3x$$
$$T_4(x) = 8x^4 - 8x^2 + 1$$

Figure 8 Chebyshev polynomials, orthogonal over the interval [-1, 1]

Chebyshev polynomials are orthogonal on the interval $-1 \le x \le 1$ with respect to the weight function $w(x) = \dfrac{1}{\sqrt{1-x^2}}$.
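The polynomials listed above can be generated with the standard three-term recurrence $T_{n+1}(x) = 2x\,T_n(x) - T_{n-1}(x)$ (a textbook identity, not stated in the report but equivalent to the closed forms); a Python sketch:

```python
def chebyshev_T(n, x):
    """Evaluate the Chebyshev polynomial T_n(x) via the three-term
    recurrence T_{k+1} = 2x T_k - T_{k-1}, with T_0 = 1, T_1 = x."""
    t_prev, t = 1.0, x
    if n == 0:
        return t_prev
    for _ in range(n - 1):
        t_prev, t = t, 2 * x * t - t_prev
    return t

# Matches the closed form T_4(x) = 8x^4 - 8x^2 + 1
print(chebyshev_T(4, 0.5))   # -0.5
```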
Plugging the Chebyshev polynomials into the orthogonal matrix form yields

$$\begin{bmatrix}
\sum 1 & 0 & \cdots & 0 \\
0 & \sum T_1 T_1 & \cdots & 0 \\
\vdots & \vdots & \ddots & \vdots \\
0 & 0 & \cdots & \sum T_n T_n
\end{bmatrix}
\begin{bmatrix} a_0 \\ a_1 \\ \vdots \\ a_n \end{bmatrix}
=
\begin{bmatrix} \sum y_i \\ \sum y_i T_1 \\ \vdots \\ \sum y_i T_n \end{bmatrix}$$
Example:
Use Chebyshev polynomials to generate a curve fitting the following data
Which yields the following system:

$$\begin{bmatrix} 10 & 0 & 0 \\ 0 & 4.24 & 0 \\ 0 & 0 & 5.4688 \end{bmatrix}
\begin{bmatrix} a_0 \\ a_1 \\ a_2 \end{bmatrix}
= \begin{bmatrix} 66.6892 \\ 27.8021 \\ -11.2142 \end{bmatrix}$$

and the coefficients are given by

$$\begin{bmatrix} a_0 \\ a_1 \\ a_2 \end{bmatrix} = \begin{bmatrix} 6.6689 \\ 6.5571 \\ -2.0506 \end{bmatrix}$$
1.2.3 Legendre
The fitting curve is built from the Legendre polynomials as the basis:

$$F(x) = \sum_{i=0}^{n} a_i P_i(x)$$

Legendre polynomials, the so-called Legendre functions of the first kind, are orthogonal on the interval $-1 \le x \le 1$ with respect to the weight function $w(x) = 1$ and are defined by

$$P_n(x) = \frac{1}{2^n n!} \frac{d^n}{dx^n}\left(x^2 - 1\right)^n$$

These polynomials satisfy the recursion formula

$$P_n(x) = \frac{2n-1}{n}\,x\,P_{n-1}(x) - \frac{n-1}{n}\,P_{n-2}(x)$$
The first few Legendre polynomials are listed below:

$$P_0(x) = 1$$
$$P_1(x) = x$$
$$P_2(x) = \tfrac{1}{2}\left(3x^2 - 1\right)$$
$$P_3(x) = \tfrac{1}{2}\left(5x^3 - 3x\right)$$
$$P_4(x) = \tfrac{1}{8}\left(35x^4 - 30x^2 + 3\right)$$
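The recursion formula above translates directly into code; a Python sketch (the function name is illustrative):

```python
def legendre_P(n, x):
    """Evaluate P_n(x) with the recursion
    P_n = ((2n-1)/n) x P_{n-1} - ((n-1)/n) P_{n-2}, P_0 = 1, P_1 = x."""
    if n == 0:
        return 1.0
    p_prev, p = 1.0, x
    for k in range(2, n + 1):
        p_prev, p = p, ((2 * k - 1) * x * p - (k - 1) * p_prev) / k
    return p

# Agrees with the closed form P_4(x) = (35x^4 - 30x^2 + 3)/8
print(legendre_P(4, 0.5))   # -0.2890625
```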
Plugging the Legendre polynomials into the orthogonal matrix form yields the following:

$$\begin{bmatrix}
\sum 1 & 0 & \cdots & 0 \\
0 & \sum P_1 P_1 & \cdots & 0 \\
\vdots & \vdots & \ddots & \vdots \\
0 & 0 & \cdots & \sum P_n P_n
\end{bmatrix}
\begin{bmatrix} a_0 \\ a_1 \\ \vdots \\ a_n \end{bmatrix}
=
\begin{bmatrix} \sum y_i \\ \sum y_i P_1 \\ \vdots \\ \sum y_i P_n \end{bmatrix}$$
Example:
Use Legendre polynomials to generate a curve fitting the following data
Which yields the following results:

$$\begin{bmatrix} 11 & 0 & 0 & 0 \\ 0 & 4.4 & 0 & 0 \\ 0 & 0 & 3.1988 & 0 \\ 0 & 0 & 0 & 2.816 \end{bmatrix}
\begin{bmatrix} a_0 \\ a_1 \\ a_2 \\ a_3 \end{bmatrix}
= \begin{bmatrix} 70.3983 \\ 30.5821 \\ 8.2011 \\ 11.4417 \end{bmatrix}$$

and the polynomial coefficients:

$$\begin{bmatrix} a_0 \\ a_1 \\ a_2 \\ a_3 \end{bmatrix} = \begin{bmatrix} 6.3998 \\ 6.9505 \\ 2.5638 \\ 4.0631 \end{bmatrix}$$
1.2.4 Hermite polynomials
Theorem
The Hermite polynomials $H_n(x)$ are a set of polynomials orthogonal over the domain $(-\infty, \infty)$ with weighting function $e^{-x^2}$. An optimal curve fit can be obtained by plugging the Hermite polynomials in as the basis functions:

$$F(x) = \sum_{i=0}^{n} a_i H_i(x)$$

$$H_0(x) = 1$$
$$H_1(x) = 2x$$
$$H_2(x) = 4x^2 - 2$$
$$H_3(x) = 8x^3 - 12x$$
$$H_4(x) = 16x^4 - 48x^2 + 12$$

Figure 12 Hermite polynomials, orthogonal over the interval (-∞, ∞)
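The listed Hermite polynomials satisfy the standard recurrence $H_{n+1}(x) = 2x\,H_n(x) - 2n\,H_{n-1}(x)$ (a textbook identity, not stated in the report); a Python sketch:

```python
def hermite_H(n, x):
    """Evaluate the (physicists') Hermite polynomial H_n(x) with the
    recurrence H_{k+1} = 2x H_k - 2k H_{k-1}, H_0 = 1, H_1 = 2x."""
    if n == 0:
        return 1.0
    h_prev, h = 1.0, 2.0 * x
    for k in range(1, n):
        h_prev, h = h, 2 * x * h - 2 * k * h_prev
    return h

# Agrees with H_4(x) = 16x^4 - 48x^2 + 12 listed above
print(hermite_H(4, 1.0))   # -20.0
```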
$$\begin{bmatrix}
\sum 1 & 0 & \cdots & 0 \\
0 & \sum H_1 H_1 & \cdots & 0 \\
\vdots & \vdots & \ddots & \vdots \\
0 & 0 & \cdots & \sum H_n H_n
\end{bmatrix}
\begin{bmatrix} a_0 \\ a_1 \\ \vdots \\ a_n \end{bmatrix}
=
\begin{bmatrix} \sum y_i \\ \sum y_i H_1 \\ \vdots \\ \sum y_i H_n \end{bmatrix}$$
Example 1:
Use Hermite Polynomials to generate a curve fitting the following data
The given $x$-range must first be mapped toward the domain of orthogonality using the following transformation:

$$X_{new} = 1.575\left(-1 + \tfrac{2}{5}\,x_i\right)$$

For a first-order Hermite polynomial the following coefficients have been calculated:

$$\begin{bmatrix} 11 & 0 \\ 0 & 43.659 \end{bmatrix}
\begin{bmatrix} a_0 \\ a_1 \end{bmatrix}
= \begin{bmatrix} 70.3983 \\ 96.3337 \end{bmatrix}$$

Hence, the coefficients:

$$\begin{bmatrix} a_0 \\ a_1 \end{bmatrix} = \begin{bmatrix} 6.3998 \\ 2.2065 \end{bmatrix}$$
Example 2:
Fit the following data using Hermite polynomials:
x_i: 0, 0.5, 1, 1.5, 2, 2.5, 3, 3.5, 4, 4.5, 5, 5.5, 6
y_i: 0, 0.5526, 0.9996, 1.3881, 1.1682, 0.5392, -0.1, -0.19, -0.65, -0.79, -1.10, -0.65, -0.18
Which gives

$$\begin{bmatrix}
13 & 0 & 0 & 0 & 0 & 0 \\
0 & 46 & 0 & 0 & 0 & 0 \\
0 & 0 & 160.945 & 0 & 0 & 0 \\
0 & 0 & 0 & 360.7929 & 0 & 0 \\
0 & 0 & 0 & 0 & 2868 & 0 \\
0 & 0 & 0 & 0 & 0 & 43297
\end{bmatrix}
\begin{bmatrix} a_0 \\ a_1 \\ a_2 \\ a_3 \\ a_4 \\ a_5 \end{bmatrix}
=
\begin{bmatrix} 1.1856 \\ -13.4525 \\ 0.6139 \\ 29.8097 \\ -10.4263 \\ -39.4838 \end{bmatrix}$$

$$\begin{bmatrix} a_0 \\ a_1 \\ a_2 \\ a_3 \\ a_4 \\ a_5 \end{bmatrix}
= \begin{bmatrix} 0.0912 \\ -0.2924 \\ 0.0038 \\ 0.0826 \\ -0.0036 \\ -0.0009 \end{bmatrix}$$
2. Interpolation
Given a set of $(n+1)$ data points $(x_i, y_i)$, $i = 0, 1, 2, \ldots, n$, we seek a function that passes through all the given points. There are many interpolation formulas, for both equal-interval and non-equal-interval data, summarized below.
Interpolation functions:
- Equal-interval data: Newton forward, Newton backward, Gauss forward, Gauss backward, Stirling formula
- Non-equal-interval data: Lagrange polynomial, general Newton divided difference, cubic spline
2.1. Equal interval data
2.1.1 Newton forward difference formula
Theory
If $x_0, x_1, \ldots, x_n$ are a given set of observations with common difference $h$ and $y_0, y_1, \ldots, y_n$ are their corresponding values, where $y = f(x)$, the interpolation curve is given by the Newton forward difference formula:

$$f(x) = y_0 + p\,\Delta y_0 + \frac{p(p-1)}{2!}\Delta^2 y_0 + \frac{p(p-1)(p-2)}{3!}\Delta^3 y_0 + \cdots$$

where $p = \dfrac{x - x_0}{h}$.
Example:
Use Newton forward difference to generate an interpolation curve for the following data
set.
x_i: 0, 0.5, 1, 1.5, 2, 2.5, 3, 3.5, 4
y_i: 0, 0.5526, 0.9996, 1.3881, 1.1682, 0.5392, -0.1, -0.19, -0.65
Constructing the forward difference table:

x_i   y_i      Δy       Δ²y      Δ³y      Δ⁴y      Δ⁵y      Δ⁶y      Δ⁷y      Δ⁸y
0     0        0.5526   -0.1056  0.0471   -0.5970  1.3462   -1.8958  2.4063   -4.5168
0.5   0.5526   0.4470   -0.0585  -0.5499  0.7492   -0.5496  0.5105   -2.1105
1     0.9996   0.3885   -0.6084  0.1993   0.1996   -0.0391  -1.6000
1.5   1.3881   -0.2199  -0.4091  0.3989   0.1605   -1.6391
2     1.1682   -0.6290  -0.0102  0.5594   -1.4786
2.5   0.5392   -0.6392  0.5492   -0.9192
3     -0.1     -0.0900  -0.3700
3.5   -0.19    -0.4600
4     -0.65
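The table can be generated programmatically; a short Python sketch that reproduces the top diagonal of the table above:

```python
def difference_table(y):
    """Forward-difference table: row k holds the k-th forward
    differences of the samples y (equally spaced x assumed)."""
    table = [list(y)]
    while len(table[-1]) > 1:
        prev = table[-1]
        table.append([prev[i + 1] - prev[i] for i in range(len(prev) - 1)])
    return table

y = [0, 0.5526, 0.9996, 1.3881, 1.1682, 0.5392, -0.1, -0.19, -0.65]
table = difference_table(y)
# First entry of each row gives the top diagonal of the table above:
# 0.5526, -0.1056, 0.0471, -0.5970, ..., -4.5168
```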
2.1.2 Newton backward difference formula
Theory
With the same equally spaced observations, interpolation near the end of the table uses the backward differences of the last entry $y_n$:

$$f(x) = y_n + p\,\nabla y_n + \frac{p(p+1)}{2!}\nabla^2 y_n + \frac{p(p+1)(p+2)}{3!}\nabla^3 y_n + \cdots$$

where $p = \dfrac{x - x_n}{h}$.
Example:
Use Newton backward difference to generate an interpolation curve for the following
data set.
x_i: 0, 0.5, 1, 1.5, 2, 2.5, 3, 3.5, 4
y_i: 0, 0.5526, 0.9996, 1.3881, 1.1682, 0.5392, -0.1, -0.19, -0.65
Constructing the backward difference table:

x_i   y_i      ∇y       ∇²y      ∇³y      ∇⁴y      ∇⁵y      ∇⁶y      ∇⁷y      ∇⁸y
0     0
0.5   0.5526   0.5526
1     0.9996   0.4470   -0.1056
1.5   1.3881   0.3885   -0.0585  0.0471
2     1.1682   -0.2199  -0.6084  -0.5499  -0.5970
2.5   0.5392   -0.6290  -0.4091  0.1993   0.7492   1.3462
3     -0.1     -0.6392  -0.0102  0.3989   0.1996   -0.5496  -1.8958
3.5   -0.19    -0.0900  0.5492   0.5594   0.1605   -0.0391  0.5105   2.4063
4     -0.65    -0.4600  -0.3700  -0.9192  -1.4786  -1.6391  -1.6000  -2.1105  -4.5168
2.1.3 Gauss forward difference formula
Theory
Let $x_0, x_1, \ldots, x_n$ be a given set of observations with common difference $h$. For these points Gauss's forward interpolation formula takes the following form [1]:

$$f(x) = y_0 + p\,\Delta y_0 + \frac{p(p-1)}{2!}\Delta^2 y_{-1} + \frac{(p+1)p(p-1)}{3!}\Delta^3 y_{-1} + \frac{(p+1)p(p-1)(p-2)}{4!}\Delta^4 y_{-2} + \cdots$$

The formula uses $y_0$ and the even differences $\Delta^2 y_{-1}, \Delta^4 y_{-2}, \ldots$, which lie on the line containing $x_0$ (called the central line), and the odd differences $\Delta y_0, \Delta^3 y_{-1}, \ldots$, which lie on the line just below it in the difference table, where

$$p = \frac{x - x_0}{h}$$
Example:
Use Gauss forward difference to get an interpolation curve for the following data set.
𝑥𝑖 1 2 3 4
𝑦𝑖 1 8 27 64
The construction of the difference table and the evaluation of the formula can be carried out in code; the resulting interpolation curve is shown in Figure 17.
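The original MATLAB listing is not reproduced here; the following Python sketch implements the same scheme (difference table plus the Gauss forward terms along the central zig-zag). For the data above it recovers $f(2.5) = 2.5^3 = 15.625$, since the four points determine the cubic exactly:

```python
from math import factorial

def gauss_forward(xs, ys, x):
    """Gauss forward interpolation on equally spaced nodes, taking the
    central node as x0 and reading differences Delta^k y_{-floor(k/2)}."""
    h = xs[1] - xs[0]
    table = [list(ys)]                       # forward-difference table
    while len(table[-1]) > 1:
        row = table[-1]
        table.append([row[i + 1] - row[i] for i in range(len(row) - 1)])
    m = (len(xs) - 1) // 2                   # index of the central node x0
    p = (x - xs[m]) / h
    result = table[0][m]
    for k in range(1, len(xs)):
        idx = m - k // 2                     # entry for Delta^k y_{-floor(k/2)}
        if not (0 <= idx < len(table[k])):
            break
        coeff = 1.0
        for j in range(k):                   # (p + floor((k-1)/2)) ... downward
            coeff *= (p + (k - 1) // 2 - j)
        result += coeff / factorial(k) * table[k][idx]
    return result

print(gauss_forward([1, 2, 3, 4], [1, 8, 27, 64], 2.5))   # 15.625
```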
2.1.4 Gauss backward difference formula
Theory
Let $x_0, x_1, \ldots, x_n$ be a given set of observations with common difference $h$. For these points Gauss's backward interpolation formula takes the following form:

$$f(x) = y_0 + p\,\Delta y_{-1} + \frac{p(p+1)}{2!}\Delta^2 y_{-1} + \frac{(p+1)p(p-1)}{3!}\Delta^3 y_{-2} + \frac{(p+2)(p+1)p(p-1)}{4!}\Delta^4 y_{-2} + \cdots + \frac{(p+n-1)\cdots(p+1)\,p\,(p-1)\cdots(p-n+1)}{(2n-1)!}\Delta^{2n-1} y_{-n} + \cdots$$

The formula uses $y_0$ and the even differences $\Delta^2 y_{-1}, \Delta^4 y_{-2}, \ldots$, which lie on the central line containing $x_0$, and the odd differences $\Delta y_{-1}, \Delta^3 y_{-2}, \ldots$, which lie on the line just above it in the difference table, where

$$p = \frac{x - x_0}{h}$$
Example:
Use Gauss backward difference to get an interpolation curve for the following data set.

x_i: 0, 1, 2, 3
y_i: 1, 4, 27, 43
Constructing the central difference table:

x_i  y_i    Δy     Δ²y    Δ³y
0    1
             3
1    4              20
            23            -27
2    27             -7
            16
3    43
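A Python sketch of the backward scheme (the function name and the choice of central node are illustrative); with all differences included it reproduces the unique cubic through the four points above:

```python
from math import factorial

def gauss_backward(xs, ys, x):
    """Gauss backward interpolation on equally spaced nodes, taking the
    node just right of centre as x0 and reading Delta^k y_{-ceil(k/2)}."""
    h = xs[1] - xs[0]
    table = [list(ys)]                       # forward-difference table
    while len(table[-1]) > 1:
        row = table[-1]
        table.append([row[i + 1] - row[i] for i in range(len(row) - 1)])
    m = len(xs) // 2                         # index of the node taken as x0
    p = (x - xs[m]) / h
    result = table[0][m]
    for k in range(1, len(xs)):
        idx = m - (k + 1) // 2               # entry for Delta^k y_{-ceil(k/2)}
        if not (0 <= idx < len(table[k])):
            break
        coeff = 1.0
        for j in range(k):                   # (p + floor(k/2)) ... downward
            coeff *= (p + k // 2 - j)
        result += coeff / factorial(k) * table[k][idx]
    return result

xs, ys = [0, 1, 2, 3], [1, 4, 27, 43]
print(gauss_backward(xs, ys, 2.5))   # 37.5625
```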
2.1.5 Stirling's interpolation formula
Stirling's interpolation formula is used for an odd number of equispaced arguments. It is obtained by taking the arithmetic mean of Gauss's forward and backward difference formulas.
Stirling's formula gives the best approximation when $-0.25 < p < 0.25$, so we choose $x_0$ in such a way that $p = \dfrac{x - x_0}{h}$ satisfies this condition.

Example:
Using tabulated values of $\cot(\pi x)$, find, by Stirling's formula, the value at $x = 0.225$.

$$p = \frac{0.225 - 0.22}{0.01} = 0.5$$

$$f(0.225) = 1.1708457$$

The exact value is $\cot(\pi \cdot 0.225) = 1.1708496$.
2.2 Non-equal interval data
2.2.1 Lagrange interpolation
If $x_0, x_1, x_2, \ldots, x_n$ are a given set of $n+1$ observations, which need not be equally spaced, and $y_0, y_1, y_2, \ldots, y_n$ are their corresponding values, where $y = f(x)$ is the given function [2], the Lagrange interpolation polynomial is

$$f(x) = \sum_{k=0}^{n} L_k(x)\, f(x_k), \qquad L_k(x) = \prod_{\substack{j=0 \\ j\neq k}}^{n} \frac{x - x_j}{x_k - x_j}$$
Example 1:
Obtain the Lagrange polynomial of the following points
t (s): 1, 5, 9, 13, 17, 21, 25
V (km/h): 35, 40, 60, 55, 90, 37.8, 10

A general function implementing this formula was written in MATLAB; its inputs are the data points and it returns the corresponding $y$ values of the interpolation polynomial.
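The MATLAB listing itself is not reproduced here; an equivalent Python sketch of the Lagrange formula:

```python
def lagrange(xs, ys, x):
    """Evaluate the Lagrange interpolation polynomial
    f(x) = sum_k L_k(x) f(x_k) at a point x."""
    total = 0.0
    for k, (xk, yk) in enumerate(zip(xs, ys)):
        Lk = 1.0
        for j, xj in enumerate(xs):
            if j != k:
                Lk *= (x - xj) / (xk - xj)   # basis polynomial L_k(x)
        total += Lk * yk
    return total

t = [1, 5, 9, 13, 17, 21, 25]
v = [35, 40, 60, 55, 90, 37.8, 10]
print(lagrange(t, v, 9))   # 60.0 (the polynomial reproduces the nodes)
```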
Figure 19 Lagrange interpolation polynomial
Example 2:
Obtain the interpolation polynomial for the given data points using the Lagrange formula:

{(−2, 9), (5, −12), (10, 33)}
The interpolation polynomial is

$$f(x) = x^2 - 6x - 7$$
2.2.2 General form of Newton divided difference
If $x_0, x_1, x_2, \ldots, x_n$ are a given set of $n+1$ data points, which need not be equally spaced, and $y_0, y_1, y_2, \ldots, y_n$ are their corresponding values, where $y = f(x)$ is the given function, the interpolating polynomial can be written as

$$f(x) = b_0 + b_1(x - x_0) + b_2(x - x_0)(x - x_1) + \cdots + b_n(x - x_0)\cdots(x - x_{n-1})$$

where the coefficients $b_k$ are the divided differences $f[x_0, \ldots, x_k]$.

Example:
The upward velocity of a rocket is given as a function of time in a table. Determine the value of the velocity at $t = 16$ seconds with a third-order polynomial using the Newton divided-difference interpolation formula.
For a third-order polynomial, the velocity is given by

$$v(t) = b_0 + b_1(t - t_0) + b_2(t - t_0)(t - t_1) + b_3(t - t_0)(t - t_1)(t - t_2)$$

To evaluate it we need to choose the four data points that are closest to $t = 16$ and that also bracket it.
$$v(t) = 227.04 + 27.148(t - 10) + 0.3766(t - 10)(t - 15) + 5.5347\times 10^{-3}(t - 10)(t - 15)(t - 20)$$

At $t = 16$, $v = 392.06\ \mathrm{m/s}$.
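A short Python sketch of the divided-difference construction (the helper names are illustrative). Since the rocket's data table is not reproduced here, the coefficient routine is checked against the points of the Lagrange Example 2, whose interpolant is $x^2 - 6x - 7$; the nested evaluation is then applied to the rocket polynomial quoted above:

```python
def divided_coeffs(xs, ys):
    """Newton divided-difference coefficients b_0, b_1, ..., b_n."""
    b = list(ys)
    for j in range(1, len(xs)):
        for i in range(len(xs) - 1, j - 1, -1):
            b[i] = (b[i] - b[i - 1]) / (xs[i] - xs[i - j])
    return b

def newton_eval(xs, b, t):
    """Nested evaluation of b0 + b1(t-x0) + b2(t-x0)(t-x1) + ..."""
    result = b[-1]
    for i in range(len(b) - 2, -1, -1):
        result = result * (t - xs[i]) + b[i]
    return result

# Divided differences for (-2, 9), (5, -12), (10, 33):
b = divided_coeffs([-2, 5, 10], [9, -12, 33])      # [9, -3, 1]

# Evaluating the rocket polynomial quoted above at t = 16:
v16 = newton_eval([10, 15, 20], [227.04, 27.148, 0.3766, 5.5347e-3], 16)
# v16 ≈ 392.06 m/s
```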
2.2.3 Cubic spline
Given nodes and data $\{(x_0, f(x_0)), (x_1, f(x_1)), \ldots, (x_n, f(x_n))\}$ we can interpolate with a single Lagrange polynomial, but such polynomials can possess large oscillations. An alternative is a piecewise polynomial: the cubic spline.

Theory
A cubic polynomial is specified by 4 coefficients:

$$p(x) = a + bx + cx^2 + dx^3$$

- While the spline agrees with $f(x)$ at the nodes, we cannot guarantee that the derivatives of the spline agree with the derivatives of $f$.
Given a function $f(x)$ defined on $[a, b]$ and a set of nodes $a = x_0 < x_1 < x_2 < \cdots < x_n = b$, a cubic spline interpolant $S$ for $f$ is a piecewise cubic polynomial $S_j$ on $[x_j, x_{j+1}]$ for $j = 0, 1, \ldots, n-1$.
Example:
Construct a piecewise cubic spline interpolant for the curve passing through {(5, 5), (7, 2), (9, 4)}
The first and second derivatives of the two cubic pieces must agree at their shared node $x = 7$. The final two equations come from the natural boundary conditions (zero second derivative at the endpoints).
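The system for this example can be set up and solved with a short Python sketch (function names are illustrative; NumPy is used for the linear solve). For these three points the only interior unknown is the second derivative at $x = 7$, which for a natural spline works out to $M_1 = 1.875$:

```python
import numpy as np

def natural_spline_M(x, y):
    """Solve the tridiagonal system for the interior second derivatives
    M_j of a natural cubic spline (M_0 = M_n = 0)."""
    n = len(x) - 1
    h = np.diff(x)
    A = np.zeros((n - 1, n - 1))
    rhs = np.zeros(n - 1)
    for j in range(1, n):
        if j > 1:
            A[j - 1, j - 2] = h[j - 1] / 6
        A[j - 1, j - 1] = (h[j - 1] + h[j]) / 3
        if j < n - 1:
            A[j - 1, j] = h[j] / 6
        rhs[j - 1] = (y[j + 1] - y[j]) / h[j] - (y[j] - y[j - 1]) / h[j - 1]
    M = np.zeros(n + 1)
    M[1:n] = np.linalg.solve(A, rhs)
    return M

def spline_eval(x, y, M, t):
    """Evaluate the cubic piece containing t."""
    j = max(0, min(len(x) - 2, int(np.searchsorted(x, t)) - 1))
    h = x[j + 1] - x[j]
    a, b = x[j + 1] - t, t - x[j]
    return (M[j] * a**3 / (6 * h) + M[j + 1] * b**3 / (6 * h)
            + (y[j] / h - M[j] * h / 6) * a
            + (y[j + 1] / h - M[j + 1] * h / 6) * b)

x = np.array([5.0, 7.0, 9.0])
y = np.array([5.0, 2.0, 4.0])
M = natural_spline_M(x, y)       # M = [0, 1.875, 0]
print(spline_eval(x, y, M, 6.0))
```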
Example 2:
Construct a piecewise cubic spline interpolant for the curve passing through
x = [1, 1.5, 2, 4.1, 5];  y = [1, -1, 1, -1, 1]
References
[1] M. Pal, Numerical Analysis for Scientists and Engineers: Theory and C Programs. 2007.
[2] T. F. Chan et al., "Applications of Padé Approximation Theory in Fluid Dynamics," Lect. Notes, vol. 39, no. 5, pp. 1–32, 2009.
Websites
http://www.emptyloop.com/technotes/A%20tutorial%20on%20trigonometric%20curve%20fitting.pdf
http://www.mhtlab.uwaterloo.ca/courses/me755/web_chap5.pdf
https://www.mathworks.com/help/symbolic/legendrep.html
https://www.math.dartmouth.edu/~ddeford/Lagrange_Interpolation.pdf
http://cms.gcg11.ac.in/attachments/article/202/Interpolation.pdf