
Curve fitting and interpolation techniques

Author: Mohamed Tawfik Ahmed Eraky


Table of Contents
Table of figures
Introduction
1. Curve fitting
1.1 Polynomials
1.1.1 Optimal straight line (Linear Regression)
1.1.2 Optimal Polynomial
1.2 Orthogonal functions
1.2.1 Sinusoidal Functions
1.2.2 Chebyshev
1.2.3 Legendre Polynomial
1.2.4 Hermite polynomials
2. Interpolation
2.1. Equal interval data
2.1.1 Newton forward difference formula
2.1.2 Newton backward difference formula
2.1.3 Gauss forward difference formula
2.1.4 Gauss backward difference formula
2.1.5 Stirling's Interpolation Formula
2.2 Non-Equal interval data
2.2.1 Lagrange interpolation formula
2.2.2 General form of Newton divided difference
2.2.3 Cubic spline interpolation
References

Table of figures
Figure 1 Least squares of the optimal curve
Figure 2 Basis functions for curve fitting
Figure 3 Linear regression
Figure 4 Second order polynomial fit
Figure 5 Polynomial fits
Figure 6 Sinusoidal curve fitting, 1 term
Figure 7 Sinusoidal, 2 terms
Figure 8 Chebyshev polynomials, orthogonal over the interval [-1,1]
Figure 9 Chebyshev polynomial fit
Figure 10 Legendre polynomials, orthogonal over the interval [-1,1]
Figure 11 Legendre curve fits (3rd and 4th degree)
Figure 12 Hermite first order polynomial
Figure 13 Hermite curve fitting, 5th order polynomial
Figure 14 Interpolation formulas
Figure 15 Newton forward interpolation
Figure 16 Newton backward difference
Figure 17 Gauss forward difference table
Figure 18 Gauss backward central difference
Figure 19 Lagrange interpolation polynomial
Figure 20 Lagrange interpolation polynomial 2nd order
Figure 21 General Newton divided difference
Figure 22 Cubic spline
Figure 23 Cubic spline

Introduction

Given a set of data that results from an experiment (simulation based or otherwise), or perhaps
taken from a real-life physical scenario, we assume there is some function that passes through
the data points and perfectly represents the quantity of interest at all non-data points.

With curve fitting we simply want a function that is a good fit (typically a best fit in some sense) to the original data points; the approximating function does not have to pass through the original data set.

With interpolation we seek a function that passes through the original data points, so that functional values between those points can be determined (estimated).

1. Curve fitting

Given a set of $(n+1)$ data points $(x_i, y_i)$, where $i = 0, 1, 2, \ldots, n$, it is required to capture the trend in the data across the entire range by assigning a single function that represents an optimal curve, such as an optimal straight line (polynomial of degree 1), an optimal polynomial of 2nd or 3rd degree, and so on.

The optimal curve can be obtained by setting the function $F(x)$ in the form

$$F(x) = \sum_{i=0}^{n} a_i \phi_i(x)$$

where $\phi_i(x)$ are the basis functions and $a_i$ are the coefficients that minimize the error. The least-squares method finds the optimal coefficient values by minimizing the sum $\delta$ of the squared residuals:

$$\delta = \sum_{\forall i} \left| F(x_i) - y_i \right|^2$$

Hence $\delta = \delta(a_0, a_1, a_2, a_3, \ldots, a_n)$.

Figure 1 Least squares of the optimal curve

To minimize the error, take the derivative of the sum with respect to each coefficient and set it to zero, $\frac{\partial \delta}{\partial a_j} = 0$, which yields the following:

$$\frac{\partial \delta}{\partial a_j} = \sum_{\forall i} 2\left[ a_0\phi_0(x_i) + a_1\phi_1(x_i) + a_2\phi_2(x_i) + \cdots + a_n\phi_n(x_i) - y_i \right] \phi_j(x_i) = 0$$

Or, in matrix form, $A\mathbf{a} = B$ (system I): $(n+1)$ equations in $(n+1)$ unknowns, where each $\phi_k$ is evaluated at $x_i$:

$$\begin{bmatrix}
\sum_{\forall i}\phi_0\phi_0 & \sum_{\forall i}\phi_1\phi_0 & \sum_{\forall i}\phi_2\phi_0 & \cdots & \sum_{\forall i}\phi_n\phi_0 \\
\sum_{\forall i}\phi_0\phi_1 & \sum_{\forall i}\phi_1\phi_1 & \sum_{\forall i}\phi_2\phi_1 & \cdots & \sum_{\forall i}\phi_n\phi_1 \\
\sum_{\forall i}\phi_0\phi_2 & \sum_{\forall i}\phi_1\phi_2 & \sum_{\forall i}\phi_2\phi_2 & \cdots & \sum_{\forall i}\phi_n\phi_2 \\
\vdots & \vdots & \vdots & & \vdots \\
\sum_{\forall i}\phi_0\phi_n & \sum_{\forall i}\phi_1\phi_n & \sum_{\forall i}\phi_2\phi_n & \cdots & \sum_{\forall i}\phi_n\phi_n
\end{bmatrix}
\begin{bmatrix} a_0 \\ a_1 \\ a_2 \\ \vdots \\ a_n \end{bmatrix}
=
\begin{bmatrix} \sum_{\forall i} y_i\phi_0 \\ \sum_{\forall i} y_i\phi_1 \\ \sum_{\forall i} y_i\phi_2 \\ \vdots \\ \sum_{\forall i} y_i\phi_n \end{bmatrix}$$

So, the optimal values of the coefficients are given by $\mathbf{a} = A^{-1} B$.
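As a concrete illustration, the normal equations above can be assembled directly in MATLAB. This is a minimal sketch (not a listing from the original text); `phi` is assumed to be a cell array of basis-function handles chosen by the user:

```matlab
% Minimal sketch: least-squares fit with an arbitrary set of basis functions.
% phi is a cell array of function handles, e.g. {@(x) ones(size(x)), @(x) x}.
function a = lsq_fit(x, y, phi)
    m = numel(phi);
    A = zeros(m);  B = zeros(m, 1);
    for j = 1:m
        for k = 1:m
            A(j,k) = sum(phi{k}(x) .* phi{j}(x));  % sum over all data points
        end
        B(j) = sum(y .* phi{j}(x));                % right-hand side
    end
    a = A \ B;   % solve A*a = B (backslash is preferred over inv(A)*B)
end
```

For a straight line, for example, `a = lsq_fit(x, y, {@(x) ones(size(x)), @(x) x})` returns the intercept and slope.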
• The basis functions $\phi(x)$ can be polynomials or orthogonal functions, as summarized in the next figure.

The basis functions $\phi(x)$ for curve fitting fall into two families:

• Polynomials: the optimal straight line (polynomial of degree 1), and optimal polynomials of degree 2, 3, ...
• Orthogonal functions: Sinusoidal, Chebyshev, Legendre, and Hermite.

Figure 2 Basis functions for curve fitting

We will discuss each basis function in the following sections.
1.1 Polynomials

1.1.1 Optimal straight line (Linear Regression)

The optimal straight line, or so-called linear regression, is a polynomial of degree 1 given by

$$F(x) = ax + b$$

where

$$a_0 = b, \quad a_1 = a, \quad \phi_0 = 1, \quad \phi_1 = x$$

Plugging these values into matrix form (I):

$$\begin{bmatrix} \sum_{\forall i} 1 & \sum_{\forall i} x_i \\ \sum_{\forall i} x_i & \sum_{\forall i} x_i^2 \end{bmatrix}
\begin{bmatrix} b \\ a \end{bmatrix}
= \begin{bmatrix} \sum_{\forall i} y_i \\ \sum_{\forall i} x_i y_i \end{bmatrix}$$
Example:
Given the following data set:

x_i: 0, 0.5, 1, 1.5, 2, 2.5, 3, 4, 4.5, 5
y_i: -0.4326, -0.1656, 3.1253, 4.7877, 4.8535, 8.6909, 9.2, 10.53, 11.3, 14.8

$$\sum_{\forall i} x_i = 24, \quad \sum_{\forall i} x_i^2 = 84, \quad \sum_{\forall i} y_i = 66.6892, \quad \sum_{\forall i} x_i y_i = 236.2283$$

In matrix form:

$$\begin{bmatrix} 10 & 24 \\ 24 & 84 \end{bmatrix} \begin{bmatrix} b \\ a \end{bmatrix} = \begin{bmatrix} 66.6892 \\ 236.2283 \end{bmatrix}$$

which yields

$$\begin{bmatrix} b \\ a \end{bmatrix} = \begin{bmatrix} -0.2560 \\ 2.8854 \end{bmatrix}$$

So, the fit equation is

$$F(x) = 2.8854x - 0.256$$

The solution is verified using the MATLAB curve fitting tool (cftool), which gives the same coefficients.

Figure 3 Linear regression
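Equivalently, MATLAB's built-in `polyfit` reproduces these coefficients; a quick check using the data above:

```matlab
% Quick verification of the linear fit with polyfit
x = [0 0.5 1 1.5 2 2.5 3 4 4.5 5];
y = [-0.4326 -0.1656 3.1253 4.7877 4.8535 8.6909 9.2 10.53 11.3 14.8];
p = polyfit(x, y, 1);   % p(1) = slope a, p(2) = intercept b
% expected: p is approximately [2.8854 -0.2560]
```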

1.1.2 Optimal Polynomial

In general, a polynomial of degree $n$ is given by

$$F(x) = a_0 + a_1 x + a_2 x^2 + \cdots + a_n x^n$$

So, matrix form (I) can be written as

$$\begin{bmatrix}
\sum_{\forall i} 1 & \sum_{\forall i} x_i & \sum_{\forall i} x_i^2 & \cdots & \sum_{\forall i} x_i^n \\
\sum_{\forall i} x_i & \sum_{\forall i} x_i^2 & \sum_{\forall i} x_i^3 & \cdots & \sum_{\forall i} x_i^{n+1} \\
\sum_{\forall i} x_i^2 & \sum_{\forall i} x_i^3 & \sum_{\forall i} x_i^4 & \cdots & \sum_{\forall i} x_i^{n+2} \\
\vdots & \vdots & \vdots & & \vdots \\
\sum_{\forall i} x_i^n & \sum_{\forall i} x_i^{n+1} & \sum_{\forall i} x_i^{n+2} & \cdots & \sum_{\forall i} x_i^{2n}
\end{bmatrix}
\begin{bmatrix} a_0 \\ a_1 \\ a_2 \\ \vdots \\ a_n \end{bmatrix}
=
\begin{bmatrix} \sum_{\forall i} y_i \\ \sum_{\forall i} y_i x_i \\ \sum_{\forall i} y_i x_i^2 \\ \vdots \\ \sum_{\forall i} y_i x_i^n \end{bmatrix}$$
Example:
Given the following data set:

x_i: 0, 0.5, 1, 1.5, 2, 2.5, 3, 4, 4.5, 5
y_i: -0.4326, -0.1656, 3.1253, 4.7877, 4.8535, 8.6909, 9.2, 10.53, 11.3, 14.8
For a second order polynomial fit

$$F(x) = a_0 + a_1 x + a_2 x^2$$

matrix (I) takes the form

$$\begin{bmatrix}
\sum_{\forall i} 1 & \sum_{\forall i} x_i & \sum_{\forall i} x_i^2 \\
\sum_{\forall i} x_i & \sum_{\forall i} x_i^2 & \sum_{\forall i} x_i^3 \\
\sum_{\forall i} x_i^2 & \sum_{\forall i} x_i^3 & \sum_{\forall i} x_i^4
\end{bmatrix}
\begin{bmatrix} a_0 \\ a_1 \\ a_2 \end{bmatrix}
=
\begin{bmatrix} \sum_{\forall i} y_i \\ \sum_{\forall i} y_i x_i \\ \sum_{\forall i} y_i x_i^2 \end{bmatrix}$$

$$\begin{bmatrix} 10 & 24 & 84 \\ 24 & 84 & 335.25 \\ 84 & 335.25 & 1433.3 \end{bmatrix}
\begin{bmatrix} a_0 \\ a_1 \\ a_2 \end{bmatrix}
= \begin{bmatrix} 66.6892 \\ 236.2283 \\ 937.6934 \end{bmatrix}$$

Hence the polynomial coefficients are

$$\begin{bmatrix} a_0 \\ a_1 \\ a_2 \end{bmatrix} = \begin{bmatrix} -0.8531 \\ 3.6915 \\ -0.1592 \end{bmatrix}$$

So, the second-degree polynomial fit can be written as

$$F(x) = -0.8531 + 3.6915x - 0.1592x^2$$

Figure 4 Second order polynomial fit
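The same result can be checked by assembling the 3×3 normal equations directly from the raw sums; a short sketch (not the original script):

```matlab
% Sketch: second-order normal equations assembled from the raw sums
x = [0 0.5 1 1.5 2 2.5 3 4 4.5 5];
y = [-0.4326 -0.1656 3.1253 4.7877 4.8535 8.6909 9.2 10.53 11.3 14.8];
A = [numel(x)  sum(x)    sum(x.^2);
     sum(x)    sum(x.^2) sum(x.^3);
     sum(x.^2) sum(x.^3) sum(x.^4)];
B = [sum(y); sum(y.*x); sum(y.*x.^2)];
a = A \ B;   % a is approximately [-0.8531; 3.6915; -0.1592]
```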

The same data set can also be fitted with linear regression and higher order polynomials, as compared below.

Figure 5 Polynomial fits

1.2 Orthogonal functions

If the basis functions are orthogonal over the interval $[x_0, x_f]$, the following holds:

$$\int_{x_0}^{x_f} \phi_i \, \phi_j \, dx = 0, \quad \forall\, i \neq j$$

So all the off-diagonal elements of the matrix $A$ vanish:

$$\begin{bmatrix}
\sum_{\forall i}\phi_0\phi_0 & 0 & 0 & \cdots & 0 \\
0 & \sum_{\forall i}\phi_1\phi_1 & 0 & \cdots & 0 \\
0 & 0 & \sum_{\forall i}\phi_2\phi_2 & \cdots & 0 \\
\vdots & \vdots & \vdots & \ddots & \vdots \\
0 & 0 & 0 & \cdots & \sum_{\forall i}\phi_n\phi_n
\end{bmatrix}
\begin{bmatrix} a_0 \\ a_1 \\ a_2 \\ \vdots \\ a_n \end{bmatrix}
=
\begin{bmatrix} \sum_{\forall i} y_i\phi_0 \\ \sum_{\forall i} y_i\phi_1 \\ \sum_{\forall i} y_i\phi_2 \\ \vdots \\ \sum_{\forall i} y_i\phi_n \end{bmatrix}$$

We will go through the orthogonal functions listed in Figure 2.

1.2.1 Sinusoidal Functions

The Fourier series is a sum of sine and cosine functions that describes a periodic signal. It can be represented in either trigonometric or exponential form. The trigonometric form is

$$F(x) = a_0 + \sum_{i=1}^{M} \big[ a_i\cos(i\omega x) + b_i\sin(i\omega x) \big]$$

where $\omega$ is the fundamental frequency of the signal represented by the given data points, and $M$ is the number of terms (harmonics) in the series.

For a given period $T$, $\int_0^T \phi_0\phi_1\,dx = \int_0^T \cos(\omega x)\,dx = 0$, and similarly for the other cross terms, so all the off-diagonal terms vanish.

In matrix form:

$$\begin{bmatrix}
\sum 1 & 0 & 0 & 0 & 0 & \cdots \\
0 & \sum \cos^2(\omega x) & 0 & 0 & 0 & \cdots \\
0 & 0 & \sum \sin^2(\omega x) & 0 & 0 & \cdots \\
0 & 0 & 0 & \sum \cos^2(2\omega x) & 0 & \cdots \\
0 & 0 & 0 & 0 & \sum \sin^2(2\omega x) & \cdots \\
\vdots & & & & & \ddots
\end{bmatrix}
\begin{bmatrix} a_0 \\ a_1 \\ b_1 \\ a_2 \\ b_2 \\ \vdots \end{bmatrix}
=
\begin{bmatrix} \sum y_i \\ \sum y_i\cos(\omega x) \\ \sum y_i\sin(\omega x) \\ \sum y_i\cos(2\omega x) \\ \sum y_i\sin(2\omega x) \\ \vdots \end{bmatrix}$$

Example:
Given the following data set $(x, y)$ over the period $2\pi$, perform a sinusoidal curve fitting.

x_data   y_data
0        0
0.3142   0.410416
0.6284   1.410235
0.9426   -0.20364
1.2568   1.116759
1.571    1.46198
1.8852   0.276659
2.1994   0.777028
2.5136   -0.05945
2.8278   0.329838
3.142    -0.00039
3.4562   -0.34495
3.7704   -0.76425
4.0846   -1.2596
4.3988   -0.48596
4.713    -0.40428
5.0272   -0.90361
5.3414   -1.09992
5.6556   0.07293
5.9698   -0.91587
6.284    0.001682

For a Fourier series with one harmonic,

$$F(x) = a_0 + a_1\cos(\omega x) + b_1\sin(\omega x)$$

the matrix form is

$$\begin{bmatrix}
\sum 1 & 0 & 0 \\
0 & \sum \cos^2(\omega x) & 0 \\
0 & 0 & \sum \sin^2(\omega x)
\end{bmatrix}
\begin{bmatrix} a_0 \\ a_1 \\ b_1 \end{bmatrix}
=
\begin{bmatrix} \sum y_i \\ \sum y_i\cos(\omega x) \\ \sum y_i\sin(\omega x) \end{bmatrix}$$

Calculating the coefficients of matrix $A$:

$$\sum 1 = 21, \quad \sum \cos^2(\omega x) = 6.4491, \quad \sum \sin^2(\omega x) = 14.5509$$

$$\sum y_i = -0.5844, \quad \sum y_i\cos(\omega x) = 5.1780, \quad \sum y_i\sin(\omega x) = 1.1450$$

$$\begin{bmatrix} 21 & 0 & 0 \\ 0 & 6.4491 & 0 \\ 0 & 0 & 14.5509 \end{bmatrix}
\begin{bmatrix} a_0 \\ a_1 \\ b_1 \end{bmatrix}
= \begin{bmatrix} -0.5844 \\ 5.1780 \\ 1.1450 \end{bmatrix}$$

$$\begin{bmatrix} a_0 \\ a_1 \\ b_1 \end{bmatrix} = \begin{bmatrix} -0.0278 \\ 0.8029 \\ 0.0787 \end{bmatrix}$$
Using the MATLAB curve fitting tool:

Figure 6 Sinusoidal curve fitting, 1 term
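Because the system is diagonal, each coefficient is just a ratio of sums. A sketch of the one-harmonic fit (assuming $\omega = 1$, i.e. period $2\pi$, as in this example):

```matlab
% Sketch: one-harmonic Fourier fit via the diagonal normal equations
x = linspace(0, 6.284, 21);   % 21 samples over one period, as in the table
y = [0 0.410416 1.410235 -0.20364 1.116759 1.46198 0.276659 0.777028 ...
     -0.05945 0.329838 -0.00039 -0.34495 -0.76425 -1.2596 -0.48596 ...
     -0.40428 -0.90361 -1.09992 0.07293 -0.91587 0.001682];
w  = 2*pi / (2*pi);                              % fundamental frequency, w = 1
a0 = sum(y) / numel(x);                          % approximately -0.0278
a1 = sum(y .* cos(w*x)) / sum(cos(w*x).^2);      % approximately  0.8029
b1 = sum(y .* sin(w*x)) / sum(sin(w*x).^2);      % approximately  0.0787
```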

Adding a second harmonic,

$$F(x) = a_0 + a_1\cos(\omega x) + b_1\sin(\omega x) + a_2\cos(2\omega x) + b_2\sin(2\omega x)$$

yields the following fitting parameters:

a_0 = -0.4903, a_1 = 0.3685, b_1 = 0.8426, a_2 = 0.2056, b_2 = 0.559, ω = 0.5797

Figure 7 Sinusoidal, 2 terms

1.2.2 Chebyshev

Theorem
The fitting curve can be obtained by setting the basis functions to the Chebyshev polynomials:

$$F(x) = \sum_{i=0}^{n} a_i T_i(x)$$

where $T_i(x)$ are the Chebyshev polynomials, given by

$$T_i(x) = \cos(i \arccos x)$$

These polynomials satisfy the recursion formula

$$T_{i+1}(x) = 2xT_i(x) - T_{i-1}(x)$$

$$T_0(x) = 1$$
$$T_1(x) = x$$
$$T_2(x) = 2x^2 - 1$$
$$T_3(x) = 4x^3 - 3x$$
$$T_4(x) = 8x^4 - 8x^2 + 1$$

Figure 8 Chebyshev polynomials, orthogonal over the interval [-1,1]

Chebyshev polynomials are orthogonal on the interval $-1 \le x \le 1$ with respect to the weight function $w(x) = \frac{1}{\sqrt{1-x^2}}$.
Plugging the Chebyshev polynomials into the orthogonal matrix form yields

$$\begin{bmatrix}
\sum 1 & 0 & 0 & \cdots & 0 \\
0 & \sum T_1T_1 & 0 & \cdots & 0 \\
0 & 0 & \sum T_2T_2 & \cdots & 0 \\
\vdots & \vdots & \vdots & \ddots & \vdots \\
0 & 0 & 0 & \cdots & \sum T_nT_n
\end{bmatrix}
\begin{bmatrix} a_0 \\ a_1 \\ a_2 \\ \vdots \\ a_n \end{bmatrix}
=
\begin{bmatrix} \sum y_i \\ \sum y_iT_1 \\ \sum y_iT_2 \\ \vdots \\ \sum y_iT_n \end{bmatrix}$$

Example:
Use Chebyshev polynomials to generate a curve fitting the following data:

x_i: 0, 0.5, 1, 1.5, 2, 2.5, 3, 4, 4.5, 5
y_i: -0.4326, -0.1656, 3.1253, 4.7877, 4.8535, 8.6909, 9.2, 10.53, 11.3, 14.8

The given x-range must first be mapped onto the interval [-1,1] using the following equation:

$$X_{new} = -1 + \frac{2}{5}x_i$$

The following code has been implemented in MATLAB.
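A sketch of such a script (an assumed reconstruction, not the author's original listing), exploiting the diagonal structure of the system:

```matlab
% Sketch (assumed reconstruction): degree-2 Chebyshev least-squares fit
x = [0 0.5 1 1.5 2 2.5 3 4 4.5 5];
y = [-0.4326 -0.1656 3.1253 4.7877 4.8535 8.6909 9.2 10.53 11.3 14.8];
t = -1 + (2/5)*x;                      % map the data onto [-1, 1]
T = [ones(size(t)); t; 2*t.^2 - 1];    % rows are T0, T1, T2 at the data
A = diag(sum(T.^2, 2));                % diagonal matrix of sum(Ti^2)
B = T * y.';                           % right-hand side, sum(y .* Ti)
a = A \ B                              % a = [6.6689; 6.5571; -2.0506]
```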

This yields the following results:

$$\begin{bmatrix} 10 & 0 & 0 \\ 0 & 4.24 & 0 \\ 0 & 0 & 5.4688 \end{bmatrix}
\begin{bmatrix} a_0 \\ a_1 \\ a_2 \end{bmatrix}
= \begin{bmatrix} 66.6892 \\ 27.8021 \\ -11.2142 \end{bmatrix}$$

and the coefficients are given by

$$\begin{bmatrix} a_0 \\ a_1 \\ a_2 \end{bmatrix} = \begin{bmatrix} 6.6689 \\ 6.5571 \\ -2.0506 \end{bmatrix}$$

Figure 9 Chebyshev polynomial fit

1.2.3 Legendre Polynomial

Theorem
The fitting curve can be obtained by setting the basis functions to the Legendre polynomials:

$$F(x) = \sum_{i=0}^{n} a_i P_i(x)$$

Legendre polynomials are orthogonal on the interval $-1 \le x \le 1$ with respect to the weight function $w(x) = 1$ and are defined by

$$P_n(x) = \frac{1}{2^n n!}\frac{d^n}{dx^n}(x^2 - 1)^n$$

the so-called Legendre functions of the first kind. These polynomials satisfy the recursion formula

$$P_n(x) = \frac{2n-1}{n}\,x\,P_{n-1}(x) - \frac{n-1}{n}\,P_{n-2}(x)$$
The first few Legendre polynomials are listed below:

$$P_0(x) = 1$$
$$P_1(x) = x$$
$$P_2(x) = \frac{1}{2}(3x^2 - 1)$$
$$P_3(x) = \frac{1}{2}(5x^3 - 3x)$$
$$P_4(x) = \frac{1}{8}(35x^4 - 30x^2 + 3)$$

Figure 10 Legendre polynomials, orthogonal over the interval [-1,1]

Plugging the Legendre polynomials into the orthogonal matrix form yields the following:

$$\begin{bmatrix}
\sum 1 & 0 & 0 & \cdots & 0 \\
0 & \sum P_1P_1 & 0 & \cdots & 0 \\
0 & 0 & \sum P_2P_2 & \cdots & 0 \\
\vdots & \vdots & \vdots & \ddots & \vdots \\
0 & 0 & 0 & \cdots & \sum P_nP_n
\end{bmatrix}
\begin{bmatrix} a_0 \\ a_1 \\ a_2 \\ \vdots \\ a_n \end{bmatrix}
=
\begin{bmatrix} \sum y_i \\ \sum y_iP_1 \\ \sum y_iP_2 \\ \vdots \\ \sum y_iP_n \end{bmatrix}$$

Example:
Use Legendre polynomials to generate a curve fitting the following data:

x_i: 0, 0.5, 1, 1.5, 2, 2.5, 3, 3.5, 4, 4.5, 5
y_i: -0.4326, -0.1656, 3.1253, 4.7877, 4.8535, 6.5, 7.1, 8, 10.53, 11.3, 14.8

The given x-range must first be mapped onto the interval [-1,1] using the following equation:

$$X_{new} = -1 + \frac{2}{5}x_i$$

The following code has been implemented in MATLAB.
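A sketch of such a script (an assumed reconstruction), using $P_0$ through $P_3$:

```matlab
% Sketch (assumed reconstruction): degree-3 Legendre least-squares fit
x = 0:0.5:5;
y = [-0.4326 -0.1656 3.1253 4.7877 4.8535 6.5 7.1 8 10.53 11.3 14.8];
t = -1 + (2/5)*x;                                          % map onto [-1, 1]
P = [ones(size(t)); t; (3*t.^2 - 1)/2; (5*t.^3 - 3*t)/2];  % rows are P0..P3
A = diag(sum(P.^2, 2));                                    % diagonal matrix of sum(Pi^2)
B = P * y.';                                               % right-hand side
a = A \ B                                                  % a = [6.3998; 6.9505; 2.5638; 4.0631]
```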

This yields the following results:

$$\begin{bmatrix} 11 & 0 & 0 & 0 \\ 0 & 4.4 & 0 & 0 \\ 0 & 0 & 3.1988 & 0 \\ 0 & 0 & 0 & 2.816 \end{bmatrix}
\begin{bmatrix} a_0 \\ a_1 \\ a_2 \\ a_3 \end{bmatrix}
= \begin{bmatrix} 70.3983 \\ 30.5821 \\ 8.2011 \\ 11.4417 \end{bmatrix}$$

and the polynomial coefficients

$$\begin{bmatrix} a_0 \\ a_1 \\ a_2 \\ a_3 \end{bmatrix} = \begin{bmatrix} 6.3998 \\ 6.9505 \\ 2.5638 \\ 4.0631 \end{bmatrix}$$

Figure 11 Legendre curve fits (3rd and 4th degree)

1.2.4 Hermite polynomials

Theorem
The Hermite polynomials $H_n(x)$ are a set of orthogonal polynomials over the domain $(-\infty, \infty)$ with weight function $e^{-x^2}$. An optimal curve fit can be obtained by plugging the Hermite polynomials in as the basis functions:

$$F(x) = \sum_{i=0}^{n} a_i H_i(x)$$

where $H_i(x)$ are the Hermite polynomials, given by the contour integral

$$H_n(x) = \frac{n!}{2\pi i}\oint e^{-t^2 + 2tx}\, t^{-n-1}\, dt$$

where the contour encloses the origin and is traversed in a counterclockwise direction. The Hermite polynomials satisfy the recursion formula

$$H_n(x) = 2xH_{n-1}(x) - 2(n-1)H_{n-2}(x)$$

$$H_0(x) = 1$$
$$H_1(x) = 2x$$
$$H_2(x) = 4x^2 - 2$$
$$H_3(x) = 8x^3 - 12x$$
$$H_4(x) = 16x^4 - 48x^2 + 12$$

Hermite polynomials, orthogonal over the interval (-∞, ∞)

Plugging the Hermite polynomials into the orthogonal matrix form yields

$$\begin{bmatrix}
\sum 1 & 0 & 0 & \cdots & 0 \\
0 & \sum H_1H_1 & 0 & \cdots & 0 \\
0 & 0 & \sum H_2H_2 & \cdots & 0 \\
\vdots & \vdots & \vdots & \ddots & \vdots \\
0 & 0 & 0 & \cdots & \sum H_nH_n
\end{bmatrix}
\begin{bmatrix} a_0 \\ a_1 \\ a_2 \\ \vdots \\ a_n \end{bmatrix}
=
\begin{bmatrix} \sum y_i \\ \sum y_iH_1 \\ \sum y_iH_2 \\ \vdots \\ \sum y_iH_n \end{bmatrix}$$

Example 1:
Use Hermite polynomials to generate a curve fitting the following data:

x_i: 0, 0.5, 1, 1.5, 2, 2.5, 3, 3.5, 4, 4.5, 5
y_i: -0.4326, -0.1656, 3.1253, 4.7877, 4.8535, 6.5, 7.1, 8, 10.53, 11.3, 14.8
The given x-range must first be mapped using the following equation:

$$X_{new} = 1.575\left(-1 + \frac{2}{5}x_i\right)$$

For a first order Hermite polynomial, the following coefficients have been calculated:

$$\begin{bmatrix} 11 & 0 \\ 0 & 43.659 \end{bmatrix}
\begin{bmatrix} a_0 \\ a_1 \end{bmatrix}
= \begin{bmatrix} 70.3983 \\ 96.3337 \end{bmatrix}$$

Hence, the coefficients

$$\begin{bmatrix} a_0 \\ a_1 \end{bmatrix} = \begin{bmatrix} 6.3998 \\ 2.2065 \end{bmatrix}$$

Figure 12 Hermite first order polynomial

Example 2:
Fit the following data using Hermite polynomials:

x_i: 0, 0.5, 1, 1.5, 2, 2.5, 3, 3.5, 4, 4.5, 5, 5.5, 6
y_i: 0, 0.5526, 0.9996, 1.3881, 1.1682, 0.5392, -0.1, -0.19, -0.65, -0.79, -1.10, -0.65, -0.18

A Hermite polynomial of 5th order is used to fit the data. The following code has been implemented in MATLAB to calculate the matrix coefficients.
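A sketch of such a script (an assumed reconstruction; the exact scale factor used to map the data was not stated, so `c` below is an assumption and the computed sums will differ slightly from those printed next):

```matlab
% Sketch (assumed reconstruction): 5th-order Hermite least-squares fit
x = 0:0.5:6;
y = [0 0.5526 0.9996 1.3881 1.1682 0.5392 -0.1 -0.19 -0.65 -0.79 -1.10 -0.65 -0.18];
c = 1.5;                           % assumed half-width of the mapped interval
t = c * (-1 + x/3);                % map the data onto [-c, c]
H = zeros(6, numel(t));
H(1,:) = 1;   H(2,:) = 2*t;        % H0 and H1
for k = 3:6                        % recursion H_n = 2x*H_{n-1} - 2(n-1)*H_{n-2}
    H(k,:) = 2*t.*H(k-1,:) - 2*(k-2)*H(k-2,:);
end
a = diag(sum(H.^2, 2)) \ (H * y.')   % diagonal normal equations
```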

This gives

$$\begin{bmatrix}
13 & 0 & 0 & 0 & 0 & 0 \\
0 & 46 & 0 & 0 & 0 & 0 \\
0 & 0 & 160.945 & 0 & 0 & 0 \\
0 & 0 & 0 & 360.7929 & 0 & 0 \\
0 & 0 & 0 & 0 & 2868 & 0 \\
0 & 0 & 0 & 0 & 0 & 43297
\end{bmatrix}
\begin{bmatrix} a_0 \\ a_1 \\ a_2 \\ a_3 \\ a_4 \\ a_5 \end{bmatrix}
= \begin{bmatrix} 1.1856 \\ -13.4525 \\ 0.6139 \\ 29.8097 \\ -10.4263 \\ -39.4838 \end{bmatrix}$$

and the coefficients as follows:

$$\begin{bmatrix} a_0 \\ a_1 \\ a_2 \\ a_3 \\ a_4 \\ a_5 \end{bmatrix}
= \begin{bmatrix} 0.0912 \\ -0.2924 \\ 0.0038 \\ 0.0826 \\ -0.0036 \\ -0.0009 \end{bmatrix}$$

Figure 13 Hermite curve fitting, 5th order polynomial

2. Interpolation

Given a set of $(n+1)$ data points $(x_i, y_i)$, where $i = 0, 1, 2, \ldots, n$, it is required to obtain a function that passes through all of the given points. There are many interpolation formulas, for both equal interval data and non-equal interval data, summarized below.

Interpolation functions fall into two groups:

• Equal interval data: Newton forward, Newton backward, Gauss forward, Gauss backward, and Stirling's formula.
• Non-equal interval data: Lagrange polynomial, general Newton divided difference, and cubic spline.

Figure 14 Interpolation formulas

We will discuss each formula in detail.

2.1. Equal interval data

2.1.1 Newton forward difference formula

Theory

If $x_0, x_1, x_2, \ldots, x_n$ are a given set of observations with common difference $h$, and $y_0, y_1, y_2, y_3, \ldots, y_n$ are their corresponding values, where $y = f(x)$, the interpolation curve is given by

$$f(x) = y_0 + p\Delta y_0 + \frac{p(p-1)}{2!}\Delta^2 y_0 + \frac{p(p-1)(p-2)}{3!}\Delta^3 y_0 + \cdots + \frac{p(p-1)(p-2)\cdots(p-(n-1))}{n!}\Delta^n y_0$$

where $p = \frac{x - x_0}{h}$.

The forward difference table lists the $y$ values followed by columns of first, second, third, fourth, and higher forward differences; each entry is the difference of the two adjacent entries in the column to its left.

Example:
Use the Newton forward difference formula to generate an interpolation curve for the following data set.

x_i: 0, 0.5, 1, 1.5, 2, 2.5, 3, 3.5, 4
y_i: 0, 0.5526, 0.9996, 1.3881, 1.1682, 0.5392, -0.1, -0.19, -0.65

Constructing the forward difference table:

x_i   y_i      Δy       Δ²y      Δ³y      Δ⁴y      Δ⁵y      Δ⁶y      Δ⁷y      Δ⁸y
0     0        0.5526   -0.1056  0.0471   -0.597   1.3462   -1.8958  2.4063   -4.5168
0.5   0.5526   0.447    -0.0585  -0.5499  0.7492   -0.5496  0.5105   -2.1105
1     0.9996   0.3885   -0.6084  0.1993   0.1996   -0.0391  -1.6
1.5   1.3881   -0.2199  -0.4091  0.3989   0.1605   -1.6391
2     1.1682   -0.629   -0.0102  0.5594   -1.4786
2.5   0.5392   -0.6392  0.5492   -0.9192
3     -0.1     -0.09    -0.37
3.5   -0.19    -0.46
4     -0.65

The interpolation polynomial is obtained using MATLAB code
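A sketch of such a code (an assumed reconstruction, using the Symbolic Math Toolbox to expand the polynomial):

```matlab
% Sketch (assumed reconstruction): Newton forward difference polynomial
x = 0:0.5:4;   h = 0.5;
y = [0 0.5526 0.9996 1.3881 1.1682 0.5392 -0.1 -0.19 -0.65];
n = numel(y);
D = zeros(n);  D(:,1) = y(:);            % column k holds the (k-1)-th differences
for k = 2:n
    D(1:n-k+1, k) = diff(D(1:n-k+2, k-1));
end
syms X
p = (X - x(1)) / h;                      % p = (x - x0)/h
F = sym(D(1,1));   term = sym(1);
for k = 1:n-1
    term = term * (p - (k-1)) / k;       % p(p-1)...(p-k+1)/k!
    F = F + term * D(1, k+1);            % add the Delta^k y0 term
end
expand(F)                                % the degree-8 interpolating polynomial
```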


$$F(X) = -\frac{90336X^8 - 1457208X^7 + 9824164X^6 - 35666358X^5 + 74091829X^4 - 85393707X^3 + 49624861X^2 - 14262657X}{3150000}$$

Figure 15 Newton forward interpolation

2.1.2 Newton backward difference formula

Theory

If $x_0, x_1, x_2, \ldots, x_n$ are a given set of observations with common difference $h$, and $y_0, y_1, y_2, y_3, \ldots, y_n$ are their corresponding values, where $y = f(x)$ is the given function, then

$$f(x) = y_n + p\nabla y_n + \frac{p(p+1)}{2!}\nabla^2 y_n + \frac{p(p+1)(p+2)}{3!}\nabla^3 y_n + \cdots + \frac{p(p+1)(p+2)\cdots(p+(n-1))}{n!}\nabla^n y_n$$

where $p = \frac{x - x_n}{h}$.

The backward difference table is constructed like the forward table, with columns of first, second, third, fourth, and higher backward differences; the formula uses the differences ending at the last data point $y_n$.

Example:
Use the Newton backward difference formula to generate an interpolation curve for the following data set.

x_i: 0, 0.5, 1, 1.5, 2, 2.5, 3, 3.5, 4
y_i: 0, 0.5526, 0.9996, 1.3881, 1.1682, 0.5392, -0.1, -0.19, -0.65

Constructing the backward difference table:

x_i   y_i      ∇y       ∇²y      ∇³y      ∇⁴y      ∇⁵y      ∇⁶y      ∇⁷y      ∇⁸y
0     0
0.5   0.5526   0.5526
1     0.9996   0.447    -0.1056
1.5   1.3881   0.3885   -0.0585  0.0471
2     1.1682   -0.2199  -0.6084  -0.5499  -0.597
2.5   0.5392   -0.629   -0.4091  0.1993   0.7492   1.3462
3     -0.1     -0.6392  -0.0102  0.3989   0.1996   -0.5496  -1.8958
3.5   -0.19    -0.09    0.5492   0.5594   0.1605   -0.0391  0.5105   2.4063
4     -0.65    -0.46    -0.37    -0.9192  -1.4786  -1.6391  -1.6     -2.1105  -4.5168
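A sketch of how the backward formula can be assembled from this table (an assumed reconstruction; the backward differences $\nabla^k y_n$ are simply the last entries of each difference column):

```matlab
% Sketch: Newton backward difference polynomial from the same data
x = 0:0.5:4;   h = 0.5;
y = [0 0.5526 0.9996 1.3881 1.1682 0.5392 -0.1 -0.19 -0.65];
d = y;   nab = zeros(1, numel(y)-1);
for k = 1:numel(y)-1
    d = diff(d);                 % k-th differences
    nab(k) = d(end);             % backward difference nabla^k y_n
end
syms X
p = (X - x(end)) / h;            % p = (x - xn)/h
F = sym(y(end));   term = sym(1);
for k = 1:numel(nab)
    term = term * (p + (k-1)) / k;
    F = F + term * nab(k);
end
expand(F)   % the unique degree-8 interpolant, assembled from the last point
```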

And the interpolation polynomial for the backward difference:

$$f(X) = -\frac{1882X^8}{65625} + \frac{166109X^7}{131250} - \frac{2753641X^6}{112500} + \frac{20272303X^5}{75000} - \frac{837701737X^4}{450000} + \frac{409036533X^3}{50000} - \frac{70525095157X^2}{3150000} + \frac{12205262431X}{350000} - \frac{29572161}{1250}$$

Figure 16 Newton backward difference

2.1.3 Gauss forward difference formula

Theory

Let $x_0, x_1, x_2, \ldots, x_n$ be a given set of observations with common difference $h$. For these points, Gauss's forward interpolation formula takes the following form [1]:

$$f(x) = y_0 + p\Delta y_0 + \frac{p(p-1)}{2!}\Delta^2 y_{-1} + \frac{p(p-1)(p+1)}{3!}\Delta^3 y_{-1} + \frac{p(p-1)(p-2)(p+1)}{4!}\Delta^4 y_{-2} + \cdots$$

where $p = \frac{x - x_0}{h}$. The formula uses $y_0$ and the even differences $\Delta^2 y_{-1}, \Delta^4 y_{-2}, \ldots$, which lie on the line containing $x_0$ (called the central line), and the odd differences $\Delta y_0, \Delta^3 y_{-1}, \ldots$, which lie on the line just below the central line in the difference table.

The forward central difference table is constructed in the example below.

Example:
Use the Gauss forward difference formula to get an interpolation curve for the following data set.

x_i: 1, 2, 3, 4
y_i: 1, 8, 27, 64

Constructing the forward central difference table:

x_i   y_i    Δy    Δ²y   Δ³y
1     1
             7
2     8            12
             19           6
3     27           18
             37
4     64
Using the following code
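A sketch of such a code (an assumed reconstruction, taking $x_0 = 2$ as the central point, so that $\Delta y_0 = 19$, $\Delta^2 y_{-1} = 12$, and $\Delta^3 y_{-1} = 6$ from the table above):

```matlab
% Sketch (assumed reconstruction): Gauss forward formula with x0 = 2, h = 1
syms X
p = X - 2;                                   % p = (x - x0)/h
F = 8 + 19*p + 12*p*(p-1)/2 + 6*(p+1)*p*(p-1)/6;
expand(F)                                    % returns X^3, as expected for y = x^3
```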

Figure 17 Gauss forward difference table

2.1.4 Gauss backward difference formula

Theory

Let $x_0, x_1, x_2, \ldots, x_n$ be a given set of observations with common difference $h$. For these points, Gauss's backward interpolation formula takes the following form:

$$f(x) = y_0 + p\Delta y_{-1} + \frac{p(p+1)}{2!}\Delta^2 y_{-1} + \frac{p(p-1)(p+1)}{3!}\Delta^3 y_{-2} + \frac{p(p-1)(p+1)(p+2)}{4!}\Delta^4 y_{-2} + \cdots + \frac{(p+n-1)\cdots(p+1)p(p-1)\cdots(p-n+1)}{(2n-1)!}\Delta^{2n-1} y_{-n} + \cdots$$

where $p = \frac{x - x_0}{h}$. The formula uses $y_0$ and the even differences $\Delta^2 y_{-1}, \Delta^4 y_{-2}, \ldots$, which lie on the central line containing $x_0$, and the odd differences $\Delta y_{-1}, \Delta^3 y_{-2}, \ldots$, which lie on the line just above the central line in the difference table.

The backward central difference table is constructed in the example below.

Example:
Use the Gauss backward difference formula to get an interpolation curve for the following data set.

x_i: 0, 1, 2, 3
y_i: 1, 4, 27, 43

Constructing the backward central difference table:

x_i   y_i    Δy    Δ²y   Δ³y
0     1
             3
1     4            20
             23           -27
2     27           -7
             16
3     43

And the interpolation polynomial for the backward difference:

$$f(X) = 3X + 10X(X-1) - \frac{9X(X-1)(X-2)}{2} + 1$$

Figure 18 Gauss backward central difference
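A quick numerical check of this result against the data:

```matlab
% Verify the Gauss backward interpolant at the data points
f = @(X) 3*X + 10*X.*(X-1) - 9*X.*(X-1).*(X-2)/2 + 1;
f([0 1 2 3])   % returns [1 4 27 43], matching the given y values
```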

2.1.5 Stirling's Interpolation Formula

Stirling's interpolation formula is used for an odd number of equispaced arguments. It is obtained by taking the arithmetic mean of Gauss's forward and backward difference formulas:

$$f(x) = y_0 + p\,\frac{\Delta y_{-1} + \Delta y_0}{2} + \frac{p^2}{2!}\Delta^2 y_{-1} + \frac{p(p-1)(p+1)}{3!}\,\frac{\Delta^3 y_{-2} + \Delta^3 y_{-1}}{2} + \frac{p^2(p+1)(p-1)}{4!}\Delta^4 y_{-2} + \cdots + \frac{p^2(p^2-1^2)(p^2-2^2)\cdots(p^2-(n-1)^2)}{(2n)!}\Delta^{2n} y_{-n}$$

Stirling's formula gives the best approximate result when $-0.25 < p < 0.25$, so we choose $x_0$ in such a way that $p = \frac{x - x_0}{h}$ satisfies this condition.
Example:
Using the following data, find by Stirling's formula the value of $y = \cot(\pi x)$ at $x = 0.225$.

x_i: 0.20, 0.21, 0.22, 0.23, 0.24
y_i: 1.37638, 1.28919, 1.20879, 1.13427, 1.06489

Constructing the difference table:

x_i    y_i       Δy        Δ²y      Δ³y       Δ⁴y
0.20   1.37638
                 -0.08719
0.21   1.28919             0.00679
                 -0.0804            -0.00091
0.22   1.20879             0.00588             0.00017
                 -0.07452           -0.00074
0.23   1.13427             0.00514
                 -0.06938
0.24   1.06489

Using Stirling's formula with $x_0 = 0.22$ and $h = 0.01$, $p = \frac{0.225 - 0.22}{0.01} = 0.5$, which gives

$$f(0.225) = 1.1708457$$

The exact value is $\cot(\pi \times 0.225) = 1.1708496$.
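The hand computation can be reproduced term by term; a short sketch using the differences from the table:

```matlab
% Sketch: Stirling's formula at x = 0.225 with x0 = 0.22, h = 0.01
p = (0.225 - 0.22) / 0.01;                      % p = 0.5
f = 1.20879 ...
    + p * (-0.0804 + -0.07452)/2 ...            % mean of the two first differences
    + p^2/2 * 0.00588 ...                       % central second difference
    + p*(p^2-1)/6 * (-0.00091 + -0.00074)/2 ... % mean of the two third differences
    + p^2*(p^2-1)/24 * 0.00017;                 % central fourth difference
% f = 1.17085 (approximately), vs. the exact cot(pi*0.225) = 1.1708496
```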

2.2 Non-Equal interval data

2.2.1 Lagrange interpolation formula

If $x_0, x_1, x_2, \ldots, x_n$ are a given set of $n+1$ observations, which need not be equally spaced, and $y_0, y_1, y_2, \ldots, y_n$ are their corresponding values, where $y = f(x)$ is the given function [2], then

$$f(x) = \sum_{k=0}^{n} L_k(x) f(x_k)$$

where the Lagrange polynomials $L_k(x)$ of degree $n$ can be written as

$$L_k(x) = \prod_{\substack{j=0 \\ j \neq k}}^{n} \frac{x - x_j}{x_k - x_j}$$

Example 1:
Obtain the Lagrange polynomial for the following points:

t (s): 1, 5, 9, 13, 17, 21, 25
V (km/h): 35, 40, 60, 55, 90, 37.8, 10

A general function has been implemented in MATLAB to obtain the polynomial; its input arguments are the data arrays, and it returns the corresponding y values of the interpolation polynomial.
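A sketch of such a function (an assumed reconstruction; `lagrange_interp` is a hypothetical name):

```matlab
% Sketch (assumed reconstruction): evaluate the Lagrange interpolant at xq
function yq = lagrange_interp(x, y, xq)
    n  = numel(x);
    yq = zeros(size(xq));
    for k = 1:n
        Lk = ones(size(xq));                   % basis polynomial L_k(xq)
        for j = [1:k-1, k+1:n]
            Lk = Lk .* (xq - x(j)) / (x(k) - x(j));
        end
        yq = yq + Lk * y(k);                   % add L_k(xq) * f(x_k)
    end
end
```

For example, `lagrange_interp([1 5 9 13 17 21 25], [35 40 60 55 90 37.8 10], 0:0.1:25)` evaluates the degree-6 interpolant across the whole range.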

Figure 19 Lagrange interpolation polynomial

Example 2:
Obtain the interpolation polynomial for the given data points using the Lagrange formula:

$$\{(-2, 9), (5, -12), (10, 33)\}$$

The interpolation polynomial is

$$f(x) = x^2 - 6x - 7$$

Figure 20 Lagrange interpolation polynomial 2nd order

2.2.2 General form of Newton divided difference

If $x_0, x_1, x_2, \ldots, x_n$ are a given set of $n+1$ data points, which need not be equally spaced, and $y_0, y_1, y_2, \ldots, y_n$ are their corresponding values, where $y = f(x)$ is the given function, then

$$f(x) = f(x_0) + (x-x_0)f[x_0, x_1] + (x-x_0)(x-x_1)f[x_0, x_1, x_2] + \cdots + (x-x_0)(x-x_1)\cdots(x-x_{k-1})f[x_0, x_1, \ldots, x_k]$$

For example, for a third order polynomial, given $(x_0, y_0), (x_1, y_1), (x_2, y_2)$ and $(x_3, y_3)$,

$$f(x) = b_0 + b_1(x-x_0) + b_2(x-x_0)(x-x_1) + b_3(x-x_0)(x-x_1)(x-x_2)$$

where $b_0 = f[x_0]$, $b_1 = f[x_0, x_1]$, $b_2 = f[x_0, x_1, x_2]$, and $b_3 = f[x_0, x_1, x_2, x_3]$.

Example:
The upward velocity of a rocket is given as a function of time. Determine the velocity at $t = 16$ seconds using a third order polynomial with the Newton divided difference interpolation formula.

For a third order polynomial, the velocity takes the form above. To evaluate it, we choose the four data points that are closest to $t = 16$ and that also bracket it, with $x_0 = 10$, $x_1 = 15$, and $x_2 = 20$. The divided differences give

$$b_0 = 227.04, \quad b_1 = 27.148, \quad b_2 = 0.3766, \quad b_3 = 5.4347\times 10^{-3}$$

so

$$v(t) = 227.04 + 27.148(t-10) + 0.3766(t-10)(t-15) + 5.4347\times 10^{-3}(t-10)(t-15)(t-20)$$

At $t = 16$, $v = 392.06$ m/s.
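Evaluating the polynomial confirms the result; a one-line check:

```matlab
% Evaluate the third-order Newton divided difference polynomial at t = 16
b = [227.04 27.148 0.3766 5.4347e-3];   % b0..b3 from the divided differences
t = 16;
v = b(1) + b(2)*(t-10) + b(3)*(t-10)*(t-15) + b(4)*(t-10)*(t-15)*(t-20)
% v = 392.06 (approximately)
```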

Figure 21 General Newton divided difference

2.2.3 Cubic spline interpolation

Given nodes and data $\{(x_0, f(x_0)), (x_1, f(x_1)), \ldots, (x_n, f(x_n))\}$, we could interpolate using Lagrange interpolation, but Lagrange interpolating polynomials can exhibit large oscillations. An alternative, piecewise polynomial (the cubic spline) is specified instead.

Theory
A cubic polynomial is specified by 4 coefficients:

$$p(x) = a + bx + cx^2 + dx^3$$

• The cubic spline is twice continuously differentiable.
• The cubic spline has the flexibility to satisfy general types of boundary conditions.
• While the spline agrees with $f(x)$ at the nodes, we cannot guarantee that the derivatives of the spline agree with the derivatives of $f$.

Given a function $f(x)$ defined on $[a, b]$ and a set of nodes $a = x_0 < x_1 < x_2 < \cdots < x_n = b$, a cubic spline interpolant $S$ for $f$ is a piecewise cubic polynomial, with piece $S_j$ on $[x_j, x_{j+1}]$ for $j = 0, 1, \ldots, n-1$.

The cubic spline interpolant has the following properties:

• Each piece interpolates the data at its end nodes: $S_j(x_j) = f(x_j)$ and $S_j(x_{j+1}) = f(x_{j+1})$.
• The first and second derivatives of adjacent pieces agree at the shared interior nodes.
• One of the following boundary conditions (BCs) is satisfied:

$$S''(x_0) = S''(x_n) = 0 \quad \text{(free or natural BCs)}$$

$$S'(x_0) = f'(x_0) \text{ and } S'(x_n) = f'(x_n) \quad \text{(clamped BCs)}$$

Example:
Construct a piecewise cubic spline interpolant for the curve passing through $\{(5, 5), (7, 2), (9, 4)\}$ with natural boundary conditions.

This will require two cubics,

$$S_1(x) = a_1 + b_1 x + c_1 x^2 + d_1 x^3 \;\; \text{on } [5, 7], \qquad S_2(x) = a_2 + b_2 x + c_2 x^2 + d_2 x^3 \;\; \text{on } [7, 9]$$

Since there are 8 coefficients, we must derive 8 equations to solve:

• The splines must agree with the function (the y-coordinates) at the nodes (the x-coordinates): four equations.
• The first and second derivatives of the two cubics must agree at their shared node $x = 7$: two equations.
• The final two equations come from the natural boundary conditions, $S_1''(5) = 0$ and $S_2''(9) = 0$.

All eight linear equations together form an 8×8 linear system.
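The system can be solved directly with the Symbolic Math Toolbox; a sketch (an assumed reconstruction, writing each cubic with global coefficients):

```matlab
% Sketch: set up and solve the 8x8 system for the two cubics symbolically
syms a1 b1 c1 d1 a2 b2 c2 d2 x
S1 = a1 + b1*x + c1*x^2 + d1*x^3;        % cubic on [5, 7]
S2 = a2 + b2*x + c2*x^2 + d2*x^3;        % cubic on [7, 9]
eqs = [ subs(S1,x,5) == 5,  subs(S1,x,7) == 2, ...           % interpolation
        subs(S2,x,7) == 2,  subs(S2,x,9) == 4, ...
        subs(diff(S1,x),x,7)   == subs(diff(S2,x),x,7), ...  % C1 at x = 7
        subs(diff(S1,x,2),x,7) == subs(diff(S2,x,2),x,7), ...% C2 at x = 7
        subs(diff(S1,x,2),x,5) == 0, ...                     % natural BCs
        subs(diff(S2,x,2),x,9) == 0 ];
sol = solve(eqs, [a1 b1 c1 d1 a2 b2 c2 d2]);
```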

Working the system through (the natural cubic spline for these data is unique) and collecting terms about the left node of each interval gives the spline equations

$$S_1(x) = 5 - 2.125(x-5) + 0.15625(x-5)^3 \quad \text{on } [5, 7]$$

$$S_2(x) = 2 - 0.25(x-7) + 0.9375(x-7)^2 - 0.15625(x-7)^3 \quad \text{on } [7, 9]$$

Figure 22 Cubic spline


Example 2:
Construct a piecewise cubic spline interpolant for the curve passing through

$$x = [1,\; 1.5,\; 2,\; 4.1,\; 5], \qquad y = [1,\; -1,\; 1,\; -1,\; 1]$$

The following code has been implemented.
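A sketch of such a code (note that MATLAB's built-in `spline` applies not-a-knot end conditions rather than natural ones):

```matlab
% Sketch: piecewise cubic spline through the data (not-a-knot end conditions)
x  = [1 1.5 2 4.1 5];
y  = [1 -1 1 -1 1];
xq = linspace(1, 5, 200);
plot(x, y, 'o', xq, spline(x, y, xq), '-')
```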

Figure 23 Cubic spline

References

[1] M. Pal, "Numerical Analysis for Scientists and Engineers: Theory and C Programs," 2007.
[2] T. F. Chan et al., "Applications of Padé Approximation Theory in Fluid Dynamics," Lect. Notes, vol. 39, no. 5, pp. 1–32, 2009.

Websites
http://www.emptyloop.com/technotes/A%20tutorial%20on%20trigonometric%20curve%20fitting.pdf
http://www.mhtlab.uwaterloo.ca/courses/me755/web_chap5.pdf
https://www.mathworks.com/help/symbolic/legendrep.html
https://www.math.dartmouth.edu/~ddeford/Lagrange_Interpolation.pdf
http://cms.gcg11.ac.in/attachments/article/202/Interpolation.pdf
