
MIZAN TEPI UNIVERSITY

COLLEGE OF ENGINEERING AND TECHNOLOGY

DEPARTMENT OF MECHANICAL ENGINEERING

NUMERICAL METHODS (Menge 2073)

CHAPTER FOUR

CURVE FITTING

BY: TESFAHUN MESHESHA (MSc.)

Feb 2021
Tepi, Ethiopia
CHAPTER FOUR
4. CURVE FITTING
4.1. Introduction
An effective cost analysis capability cannot exist without the systematic collection and analysis of data on
past, present, and projected programs. The analysis must result in the development of estimating
relationships which can be used as a basis for estimating the resource impact of future proposals. These
relationships typically relate resource requirements to the physical, performance, or operational
characteristics of the system, and are, in essence, formal statements of the way one or more variables relate
to each other. At times a simple factor such as cost per mile is all that is needed. Frequently, however, a
more complex relationship is required such as the one between the weight and cost of an aircraft and the
man hours required to maintain it. The necessary relationships are usually expressed as mathematical
equations, as curves drawn on coordinate paper, or both. In either case, the methods of curve fitting are
essential to the development process.
Just what do we mean by curve fitting? Suppose we plot a set of corresponding values of two variables on
coordinate paper. The problem of curve fitting is that of finding the equation of a curve that passes through
(or near) these points in the graph so as to indicate their general trend. An equation determined in this way
is called an empirical equation between two variables, and the process of finding it is called curve fitting.
While curve fitting as such deals primarily with relationships between two variables, certain of the basic
methods can be used to establish empirical equations among three or more variables.
Many of the methods, however, are useful in solving only a limited number of unique
problems, and for that reason are not of interest to us here. It is the intent of this chapter to explain
curve fitting in a general sense and to present only those methods that experience has shown to be of
general use to the cost analyst.
Some basic analytic geometry:
This section discusses the mathematical properties of some functional forms, the general shape of the curves
portrayed by each, and the relationship between the shape of the curve, its location, and the values of the
equation constants. Since our greatest concern at this point is to develop the equation describing a
particular relationship, we will present the techniques for calculating the equation parameters, both
constants and coefficients. The different functional forms that will be treated are summarized below:
y = a + bx, straight line,
y = a + bx + cx², parabola,
y = ab^x, exponential,
y = ax^b, power function.

There are two basic methods of curve fitting. The first is least squares regression, which derives a single
curve representing the general trend of the data. The second, more precise approach is interpolation: the basic
idea is to fit a curve or a series of curves that pass directly through each of the points.

4.2. Least Square Regression


Least squares regression is a curve-fitting technique by which we derive a single curve that
represents the general trend of the data. The term regression refers to the relation between the dependent and
independent variables: if the independent variable changes by a certain amount, the dependent
variable changes, on average, by a certain amount. The basic types of least squares regression are
discussed below:

4.2.1. Simple linear regression


The simplest example of the least squares approximation is fitting a straight line to a set of paired
observations (𝑥1 , 𝑦1 ), (𝑥2 , 𝑦2 ) … … (𝑥𝑛 , 𝑦𝑛 ). The mathematical expression for the straight line is

𝑦 = 𝑎0 + 𝑎1 𝑥 + 𝑒 (4.2.1.1)
𝑤ℎ𝑒𝑟𝑒
𝑎0 = 𝑦 − 𝑖𝑛𝑡𝑒𝑟𝑐𝑒𝑝𝑡
𝑎1 = 𝑠𝑙𝑜𝑝𝑒
𝑒 = 𝑒𝑟𝑟𝑜𝑟 𝑜𝑟 𝑟𝑒𝑠𝑖𝑑𝑢𝑎𝑙 𝑏𝑒𝑡𝑤𝑒𝑒𝑛 𝑚𝑜𝑑𝑒𝑙 𝑎𝑛𝑑 𝑜𝑏𝑠𝑒𝑟𝑣𝑎𝑡𝑖𝑜𝑛

From equation (4.2.1.1) we can see the residual is given by;


𝑒𝑖 = 𝑦𝑖 − 𝑎0 − 𝑎1 𝑥𝑖 (4.2.1.2)

For approximating a set of data by a linear equation (best fit), we should minimize the sum of residuals.
The least squares fit of straight line minimizes the sum of the squares of the residuals.
Sr = Σ (ei)² = Σ (yi,measured − yi,model)² = Σ (yi − a0 − a1xi)²,  sums over i = 1 to n    (4.2.1.3)

To determine the values of 𝒂𝟎 and 𝒂𝟏 , differentiate equation (4.2.1.3) with respect to each coefficient
∂Sr/∂a0 = −2 Σ (yi − a0 − a1xi)    (4.2.1.4)
∂Sr/∂a1 = −2 Σ [(yi − a0 − a1xi)xi]    (4.2.1.5)
Setting the derivatives equal to zero will result in a minimum 𝑆𝑟 . The equations can then be expressed as;

Σa0 + Σa1xi = Σyi
Σa0xi + Σa1xi² = Σyixi    (4.2.1.6)

Recall:
Σa0 = n·a0, Σa0xi = a0Σxi, Σa1xi = a1Σxi and Σa1xi² = a1Σxi². Solving for a0 and a1
simultaneously;

a1 = [nΣxiyi − ΣxiΣyi] / [nΣxi² − (Σxi)²]  and  a0 = ȳ − a1x̄    (4.2.1.7)

Test for the goodness of fit:

The criterion for the best fit of a curve is a minimum sum of residuals for the available data; see
equation (4.2.1.8).

Σ |ei| = Σ |yi − a0 − a1xi|,  sums over i = 1 to n    (4.2.1.8)

where n = total number of data points.


Any line other than the one derived above gives a larger sum of the squares of the residuals. The standard
deviation of the regression line can be determined from the standard error of the estimate;

S(y/x) = √(Sr / (n − 2))    (4.2.1.9)

Correlation coefficient 𝑟 for linear regression of dependent and independent variables is given by;

r = [nΣxiyi − (Σxi)(Σyi)] / [√(nΣxi² − (Σxi)²) · √(nΣyi² − (Σyi)²)]    (4.2.1.10)

For a perfect fit |r| = 1, and |r| should be close to 1 for a good fit of the curve (r is negative when y decreases as x increases).
Example 4.2.1:
Find the standard least squares line 𝑦 = 𝑎1 𝑥 + 𝑎0 for data points given below, and check for the goodness
of the fit.
(−1,10), (0,9), (1,7), (2,5), (3,4), (4,3), (5,0), (6, −1)
Solution:

x     y     x²    xy    y²
−1    10    1     −10   100
0     9     0     0     81
1     7     1     7     49
2     5     4     10    25
3     4     9     12    16
4     3     16    12    9
5     0     25    0     0
6     −1    36    −6    1

Σx = 20, Σy = 37, Σx² = 92, Σxy = 25, Σy² = 281
(Σx)² = 400, (Σx)(Σy) = 740, (Σy)² = 1369
n = 8, x̄ = 20/8 = 2.5, ȳ = 37/8 = 4.625


By using equation (4.2.1.7);
a1 = [nΣxiyi − ΣxiΣyi] / [nΣxi² − (Σxi)²] = [8(25) − (20)(37)] / [8(92) − 400] = −540/336 = −45/28, and
a0 = ȳ − a1x̄ = 37/8 − (−45/28)(20/8) = 121/14
The approximate equation of the line for the given data points is found to be;
y = 121/14 − (45/28)x
y = 8.6429 − 1.6071x
The goodness of fit is checked by calculating the correlation coefficient r using equation (4.2.1.10);
r = [8(25) − (20)(37)] / [√(8(92) − 400) · √(8(281) − 1369)] = −540/(√336 · √879) = −0.9936414
Since |r| is close to 1, the line is a good fit; r is negative because y decreases as x increases.
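The summations and formulas of this example translate directly into a few lines of code. Below is a minimal Python sketch of equations (4.2.1.7) and (4.2.1.10), using no external libraries; note that r comes out negative here because the slope is negative, with |r| ≈ 0.9936.

```python
# Least-squares line and correlation coefficient for Example 4.2.1,
# built from the table summations (equations 4.2.1.7 and 4.2.1.10).
xs = [-1, 0, 1, 2, 3, 4, 5, 6]
ys = [10, 9, 7, 5, 4, 3, 0, -1]
n = len(xs)

Sx = sum(xs)                                # 20
Sy = sum(ys)                                # 37
Sxx = sum(x * x for x in xs)                # 92
Sxy = sum(x * y for x, y in zip(xs, ys))    # 25
Syy = sum(y * y for y in ys)                # 281

a1 = (n * Sxy - Sx * Sy) / (n * Sxx - Sx ** 2)   # slope = -45/28
a0 = Sy / n - a1 * Sx / n                        # intercept = 121/14
r = (n * Sxy - Sx * Sy) / ((n * Sxx - Sx ** 2) ** 0.5
                           * (n * Syy - Sy ** 2) ** 0.5)

print(a0, a1, r)   # ≈ 8.6429  -1.6071  -0.9936
```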

Example 4.2.2:
At the end of this semester, the mark obtained for the course Numerical Methods is inversely related
to the time a student spends on Facebook. Since the students of this class do not all spend the same
amount of time on Facebook per day, we cannot relate this condition for all the students by a single
curve. Using linear regression, generate a trend line that captures this relation for the data given below for
10 randomly selected students. Also check the goodness of the fit.
Time spent on FB per day 0.5 2 0 1 6 5 0 3 7 4
Mark gained at semester end 75 65 95 70 50 52 89 67 40 56

Solution:

x     y     x²      xy      y²
0.5   75    0.25    37.5    5625
2     65    4       130     4225
0     95    0       0       9025
1     70    1       70      4900
6     50    36      300     2500
5     52    25      260     2704
0     89    0       0       7921
3     67    9       201     4489
7     40    49      280     1600
4     56    16      224     3136

Σx = 28.5, Σy = 659, Σx² = 140.25, Σxy = 1502.5, Σy² = 46125
(Σx)² = 812.25, (Σx)(Σy) = 18781.5, (Σy)² = 434281
n = 10, x̄ = 28.5/10 = 2.85, ȳ = 659/10 = 65.9

Substituting the values from the above table into equation (4.2.1.7);
a1 = [10(1502.5) − (28.5)(659)] / [10(140.25) − 812.25] = −3756.5/590.25 = −6.36425, and
a0 = ȳ − a1x̄ = 65.9 − (−6.36425)(2.85) = 84.0381
The approximate equation of the line for the given data points is found to be;
y = 84.0381 − 6.36425x
The goodness of fit is checked by calculating the correlation coefficient r using equation (4.2.1.10);
r = [10(1502.5) − (28.5)(659)] / [√(10(140.25) − 812.25) · √(10(46125) − 434281)] = −3756.5/(√590.25 · √26969)
r = −0.9415274
Since |r| is close to 1, the line fits the data well; r is negative because marks fall as Facebook time rises.
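As a cross-check, the same fit can be obtained with a library routine. This is a sketch assuming NumPy is available; `np.corrcoef` returns the signed correlation coefficient, which is negative here since marks fall as Facebook time rises.

```python
# Cross-check of Example 4.2.2 with NumPy: a degree-1 least-squares fit.
import numpy as np

x = np.array([0.5, 2, 0, 1, 6, 5, 0, 3, 7, 4])
y = np.array([75, 65, 95, 70, 50, 52, 89, 67, 40, 56])

a1, a0 = np.polyfit(x, y, 1)    # slope and intercept of the best-fit line
r = np.corrcoef(x, y)[0, 1]     # signed correlation coefficient

print(a0, a1, r)   # ≈ 84.038  -6.364  -0.9415
```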
4.2.2. Polynomial regression
In situations where linear regression is inadequate other methods such as polynomial regression are
appropriate. Some engineering data, although exhibiting marked pattern, is poorly represented by straight
line.

For these cases a curve would be better suited to fit the data. One of the possible ways of solving this kind
of problems is to fit the data by a polynomial function. This is called polynomial regression. The least
squares method can be extended to fit data to a higher-order polynomial.
Suppose we want to fit a second order polynomial:

yi = a0 + a1xi + a2xi² + e    (4.2.2.1)
For this case the sum of the squares of the residuals is given by;

Sr = Σ (yi − a0 − a1xi − a2xi²)²,  sum over i = 1 to n    (4.2.2.2)

Since the minimum residual error needs to be obtained, we differentiate the above equation with respect
to each unknown coefficient of the polynomial. This gives us;
∂Sr/∂a0 = −2 Σ (yi − a0 − a1xi − a2xi²)
∂Sr/∂a1 = −2 Σ xi(yi − a0 − a1xi − a2xi²)    (4.2.2.3)
∂Sr/∂a2 = −2 Σ xi²(yi − a0 − a1xi − a2xi²)
Setting these equations equal to zero, and rearranging the following set of equations can be developed;

Σa0 + a1Σxi + a2Σxi² = Σyi

a0Σxi + a1Σxi² + a2Σxi³ = Σxiyi    (4.2.2.4)

a0Σxi² + a1Σxi³ + a2Σxi⁴ = Σxi²yi

Recall:

∑ 𝑎0 = 𝑛𝑎0

Thus equation (4.2.2.4) is found in the form of a system of simultaneous equations. Therefore it can be
presented in the form;

[ n      Σxi    Σxi²  ] {a0}   [ Σyi    ]
[ Σxi    Σxi²   Σxi³  ] {a1} = [ Σxiyi  ]    (4.2.2.5)
[ Σxi²   Σxi³   Σxi⁴  ] {a2}   [ Σxi²yi ]
Solution for equations of this form are discussed in detail in chapter three. Using one of the methods, the
value of constant coefficients 𝑎0 , 𝑎1 , 𝑎𝑛𝑑 𝑎2 , can be obtained. Once these coefficients are found, we can
substitute them into equation (4.2.2.1) to develop the general trend line for the given data.
This discussion can easily be extended to an 𝒎𝒕𝒉 order polynomial as
𝑦 = 𝑎0 + 𝑎1 𝑥 + 𝑎2 𝑥 2 + ⋯ + 𝑎𝑚 𝑥 𝑚 + 𝑒 (4.2.2.6)
Thus determination of the coefficients of an 𝒎𝒕𝒉 order polynomial is equivalent to solving a system of
m+1 simultaneous linear equations. For this case the standard error is formulated as;

S(y/x) = √( Sr / (n − (m + 1)) )    (4.2.2.7)

Example 4.2.2.1:
Find the standard least squares parabola yi = a0 + a1xi + a2xi² for the data points;
(−1,10), (0,6), (1,2), (2,1), (3,0), (4,2), (5,4), (6,7)
Solution:

Substituting the values obtained in the table below into equation (4.2.2.5);

[ 8    20    92  ] {a0}   [ 32 ]
[ 20   92    440 ] {a1} = [ 64 ]
[ 92   440   2276] {a2}   [ 400]

x     y     x²    x³    x⁴     xy    x²y
−1    10    1     −1    1      −10   10
0     6     0     0     0      0     0
1     2     1     1     1      2     2
2     1     4     8     16     2     4
3     0     9     27    81     0     0
4     2     16    64    256    8     32
5     4     25    125   625    20    100
6     7     36    216   1296   42    252

Σx = 20, Σy = 32, Σx² = 92, Σx³ = 440, Σx⁴ = 2276, Σxy = 64, Σx²y = 400

We can use one of the methods discussed in chapter three to solve for the coefficients. I prefer to use LU
decomposition.

Note that b = [32, 64, 400]ᵀ.

Now let's decompose the 3×3 matrix into a lower triangular and an upper triangular matrix.

[ 8    20    92  ]
[ 20   92    440 ]
[ 92   440   2276]

R2⁽¹⁾ = R2 − (20/8)R1. Therefore the multiplier (20/8) will be saved, but only in row two, column one.

R3⁽¹⁾ = R3 − (92/8)R1. Here also the multiplier (92/8) will be saved, but only in row three, column one.

R2⁽¹⁾ = R2 − (a21/a11)R1, with the multiplier a21/a11 = 20/8 stored in position (2,1):
a22⁽¹⁾ = a22 − (a21/a11)a12 = 92 − (20/8)(20) = 42, and
a23⁽¹⁾ = a23 − (a21/a11)a13 = 440 − (20/8)(92) = 210

R3⁽¹⁾ = R3 − (a31/a11)R1, with the multiplier a31/a11 = 92/8 stored in position (3,1):
a32⁽¹⁾ = a32 − (a31/a11)a12 = 440 − (92/8)(20) = 210
a33⁽¹⁾ = a33 − (a31/a11)a13 = 2276 − (92/8)(92) = 1218

[ 8       20    92  ]
[ (20/8)  42    210 ]
[ (92/8)  210   1218]
R3⁽²⁾ = R3⁽¹⁾ − (210/42)R2⁽¹⁾. Here the multiplier 210/42 = 5 will be saved, but only in row three, column two;

a33⁽²⁾ = a33⁽¹⁾ − (a32⁽¹⁾/a22⁽¹⁾)a23⁽¹⁾ = 1218 − (5)(210) = 168

[ 8       20    92 ]
[ (20/8)  42    210]
[ (92/8)  (5)   168]

From the above result, the matrix A is decomposed into lower and upper triangular factors as follows;
Let U be the upper triangular matrix produced, and let L be the lower triangular matrix holding the stored
multipliers, with ones on the diagonal;

L = [ 1      0   0 ]        U = [ 8   20   92 ]
    [ 20/8   1   0 ]            [ 0   42   210]
    [ 92/8   5   1 ]            [ 0   0    168]

From equation (3.5.2), we have Ly = b. Recall: b = [32, 64, 400]ᵀ;

[ 1      0   0 ] {y1}   [ 32 ]
[ 20/8   1   0 ] {y2} = [ 64 ]
[ 92/8   5   1 ] {y3}   [ 400]

y1 = 32, y2 = 64 − (20/8)(32) = −16, and y3 = 400 − (92/8)(32) − 5(−16) = 112


Replace x1, x2, and x3 by a0, a1, a2 respectively;

Ua = y:
[ 8   20   92 ] {a0}   [ 32  ]
[ 0   42   210] {a1} = [ −16 ]
[ 0   0    168] {a2}   [ 112 ]

Back substitution gives a2 = 112/168 = 0.66667, a1 = [−16 − 210(0.66667)]/42 = −3.71429, and
a0 = [32 − 20(−3.71429) − 92(0.66667)]/8 = 5.61905.
Therefore the trend line is given by the equation;
yi = 5.61905 − 3.71429xi + 0.66667xi²
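The hand elimination above can be verified with a short routine. This is a minimal sketch of naive Gaussian elimination (no pivoting, which is safe for this well-behaved system; the function name `gauss_solve` is ours). Note the sign of a1: back substitution gives 42·a1 = −16 − 210·(2/3) = −156, so a1 is negative.

```python
# Solving the normal-equation system of Example 4.2.2.1 with naive
# Gaussian elimination (forward elimination, then back substitution).
def gauss_solve(A, b):
    n = len(b)
    A = [row[:] for row in A]   # work on copies
    b = b[:]
    for k in range(n - 1):                      # forward elimination
        for i in range(k + 1, n):
            m = A[i][k] / A[k][k]               # multiplier, e.g. 20/8 then 92/8
            for j in range(k, n):
                A[i][j] -= m * A[k][j]
            b[i] -= m * b[k]
    x = [0.0] * n
    for i in range(n - 1, -1, -1):              # back substitution
        s = sum(A[i][j] * x[j] for j in range(i + 1, n))
        x[i] = (b[i] - s) / A[i][i]
    return x

A = [[8, 20, 92], [20, 92, 440], [92, 440, 2276]]
b = [32, 64, 400]
coeffs = gauss_solve(A, b)
print(coeffs)   # ≈ [5.6190, -3.7143, 0.6667]
```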
Example 4.2.2.2:
For the data points shown below, find a polynomial curve that best fit the distribution of variables.

x 0.05 0.11 0.15 0.31 0.46 0.52 0.70 0.74 0.82 0.98 1.171
y 0.956 0.890 0.832 0.717 0.571 0.539 0.378 0.370 0.306 0.242 0.104

Solution:
From the given data points the following data table can be generated.

x y x2 x3 x4 xy x2y
0.05 0.956 0.0025 0.000125 6.25E-06 0.0478 0.00239
0.11 0.89 0.0121 0.001331 0.0001464 0.0979 0.010769
0.15 0.832 0.0225 0.003375 0.0005063 0.1248 0.01872
0.31 0.717 0.0961 0.029791 0.0092352 0.22227 0.0689037
0.46 0.571 0.2116 0.097336 0.0447746 0.26266 0.1208236
0.52 0.539 0.2704 0.140608 0.0731162 0.28028 0.1457456
0.7 0.378 0.49 0.343 0.2401 0.2646 0.18522
0.74 0.37 0.5476 0.405224 0.2998658 0.2738 0.202612
0.82 0.306 0.6724 0.551368 0.4521218 0.25092 0.2057544
0.98 0.242 0.9604 0.941192 0.9223682 0.23716 0.2324168
1.171 0.104 1.37124 1.605723 1.8803019 0.121784 0.14260906

Σx = 6.011, Σy = 5.905, Σx² = 4.656841, Σx³ = 4.119073, Σx⁴ = 3.9225424, Σxy = 2.183974, Σx²y = 1.33596416


For 11 data points, the coefficients of the polynomial curve are obtained by arranging results from the above
table in the following matrix configuration;

[ 11         6.011      4.656841 ] {a0}   [ 5.905    ]
[ 6.011      4.656841   4.119073 ] {a1} = [ 2.183974 ]
[ 4.656841   4.119073   3.9225424] {a2}   [ 1.335964 ]

Applying Gauss elimination, Gauss–Jordan elimination, or LU decomposition, the coefficients of the
polynomial curve for the given data points become;
a0 = 0.998, a1 = −1.018, and a2 = 0.225

Therefore the polynomial curve fitting the general trend of the given data points is found to be;
y = 0.998 − 1.018x + 0.225x²
4.2.3. Multiple linear regression
A useful extension of linear regression is the case where y is a linear function of more than one independent
variable, say x1 and x2.
𝑦 = 𝑎0 + 𝑎1 𝑥1 + 𝑎2 𝑥2 + 𝑒 (4.2.3.1)
Such an equation is useful when fitting experimental data where the variable being studied is a function
of two other variables. In the same manner as the previous cases, the best values of the coefficients are
determined by setting the sum of the squares of the residuals to a minimum

Figure 4-1: Graphical depiction of multiple linear regression, where y is a function of x1 and x2.

Sr = Σ (yi − a0 − a1x1i − a2x2i)²,  sum over i = 1 to n    (4.2.3.2)

Differentiating with respect to the unknown coefficients, we have;

∂Sr/∂a0 = −2 Σ (yi − a0 − a1x1i − a2x2i)
∂Sr/∂a1 = −2 Σ x1i(yi − a0 − a1x1i − a2x2i)    (4.2.3.3)
∂Sr/∂a2 = −2 Σ x2i(yi − a0 − a1x1i − a2x2i)
The coefficients yielding the minimum sum of the residuals are obtained by setting the partial derivatives
equal to zero.

∑ 𝑦𝑖 − ∑ 𝑎0 − 𝑎1 ∑ 𝑥1𝑖 − 𝑎2 ∑ 𝑥2𝑖 = 0

∑ 𝑥1𝑖 𝑦𝑖 − 𝑎0 ∑ 𝑥1𝑖 − 𝑎1 ∑ 𝑥1𝑖 2 − 𝑎2 ∑ 𝑥1𝑖 𝑥2𝑖 = 0 (4.2.3.4)

∑ 𝑥2𝑖 𝑦𝑖 − 𝑎0 ∑ 𝑥2𝑖 − 𝑎1 ∑ 𝑥1𝑖 𝑥2𝑖 − 𝑎2 ∑ 𝑥2𝑖 2 = 0

Thus equation (4.2.3.4) can be expressed in a matrix form as;


𝑛 ∑ 𝑥1𝑖 ∑ 𝑥2𝑖 𝑎0 ∑ 𝑦𝑖
[∑ 𝑥1𝑖 ∑ 𝑥1𝑖 2 ∑ 𝑥1𝑖 𝑥2𝑖 ] {𝑎1 } = [∑ 𝑥1𝑖 𝑦𝑖 ] (4.2.3.5)
∑ 𝑥2𝑖 ∑ 𝑥1𝑖 𝑥2𝑖 ∑ 𝑥2𝑖 2 𝑎2 ∑ 𝑥2𝑖 𝑦𝑖
Once the summations are obtained from the data points given, they can be arranged in matrix form shown
in equation (4.2.3.5). Then the coefficients are calculated using the concept from chapter three, and
substituted in to equation (4.2.3.1) to finalize the curve fitting using multiple linear regression.
Example 4.2.3.1:
Using a multiple regression line create a general trend line to fit for the following data points;
𝑥1 0 2 2.5 1 4 7
𝑥2 0 1 2 3 6 2
𝑦 5 10 9 0 3 27

Solution:
x1    x2    y     x1x2   x1²    x2²   x1y    x2y
0     0     5     0      0      0     0      0
2     1     10    2      4      1     20     10
2.5   2     9     5      6.25   4     22.5   18
1     3     0     3      1      9     0      0
4     6     3     24     16     36    12     18
7     2     27    14     49     4     189    54

Σx1 = 16.5, Σx2 = 14, Σy = 54, Σx1x2 = 48, Σx1² = 76.25, Σx2² = 54, Σx1y = 243.5, Σx2y = 100


Substituting the values from the above data points to equation (4.2.3.5);
6 16.5 14 𝑎0 54
[16.5 76.25 48] {𝑎1 } = [243.5]
14 48 54 𝑎2 100
Now let’s use the concept of Gauss elimination method
Phase 1: Forward elimination

First stage of elimination:
R2⁽¹⁾ = R2 − (a21/a11)R1:
a21⁽¹⁾ = a21 − (a21/a11)a11 = 16.5 − (16.5/6)(6) = 0
a22⁽¹⁾ = a22 − (a21/a11)a12 = 76.25 − (16.5/6)(16.5) = 30.875
a23⁽¹⁾ = a23 − (a21/a11)a13 = 48 − (16.5/6)(14) = 9.5
b2⁽¹⁾ = b2 − (a21/a11)b1 = 243.5 − (16.5/6)(54) = 95

R3⁽¹⁾ = R3 − (a31/a11)R1:
a31⁽¹⁾ = a31 − (a31/a11)a11 = 14 − (14/6)(6) = 0
a32⁽¹⁾ = a32 − (a31/a11)a12 = 48 − (14/6)(16.5) = 9.5
a33⁽¹⁾ = a33 − (a31/a11)a13 = 54 − (14/6)(14) = 21.3333
b3⁽¹⁾ = b3 − (a31/a11)b1 = 100 − (14/6)(54) = −26
11

6 16.5 14 𝑎0 54
[0 30.875 9.5 ] { 𝑎1 } = [ 95 ]
0 9.5 21.333 𝑎 2 −26
Second stage of elimination:
R3⁽²⁾ = R3⁽¹⁾ − (a32⁽¹⁾/a22⁽¹⁾)R2⁽¹⁾:
a32⁽²⁾ = a32⁽¹⁾ − (a32⁽¹⁾/a22⁽¹⁾)a22⁽¹⁾ = 9.5 − (9.5/30.875)(30.875) = 0
a33⁽²⁾ = a33⁽¹⁾ − (a32⁽¹⁾/a22⁽¹⁾)a23⁽¹⁾ = 21.333 − (9.5/30.875)(9.5) = 18.409
b3⁽²⁾ = b3⁽¹⁾ − (a32⁽¹⁾/a22⁽¹⁾)b2⁽¹⁾ = −26 − (9.5/30.875)(95) = −55.2307
22

6 16.5 14 𝑎0 54
[0 30.875 9.5 ] { 𝑎1 } = [ 95 ]
0 0 18.409 𝑎 2 −55.2307
Phase 2: Back substitution
From the result obtained at the second stage of elimination, we can find the values of the unknowns;
(18.409)a2 = −55.2307
a2 = −3
30.875a1 + (9.5)(−3) = 95
a1 = 4
6a0 + 16.5(4) + 14(−3) = 54
a0 = (54 − 66 + 42)/6 = 5
y = 5 + 4x1 − 3x2 ≫ This is the function of the curve fitting the given set of points (here the fit is exact: all six points satisfy it).
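A least-squares solver gives the coefficients of this example directly. A minimal sketch assuming NumPy is available; since the six points satisfy a single plane exactly, the solver recovers a0 = 5, a1 = 4, a2 = −3.

```python
# Multiple linear regression of Example 4.2.3.1 via least squares.
import numpy as np

x1 = np.array([0.0, 2, 2.5, 1, 4, 7])
x2 = np.array([0.0, 1, 2, 3, 6, 2])
y = np.array([5.0, 10, 9, 0, 3, 27])

Z = np.column_stack([np.ones_like(x1), x1, x2])   # columns: 1, x1, x2
a, *_ = np.linalg.lstsq(Z, y, rcond=None)

print(a)   # ≈ [5, 4, -3]: the data lie exactly on y = 5 + 4*x1 - 3*x2
```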
The above case can be extended to m dimension as;

𝑦 = 𝑎0 + 𝑎1 𝑥1 + 𝑎2 𝑥2 + ⋯ + 𝑎𝑚 𝑥𝑚 + 𝑒 (4.2.3.6)

4.2.4. General linear least squares


In the preceding discussions we have seen three types of regression: linear, polynomial and multiple linear.
All these belong to the general linear least squares model given by;

y = a0z0 + a1z1 + a2z2 + ⋯ + amzm + e    (4.2.4.1)
where z0, z1, z2, …, zm are m + 1 different functions.
For multiple linear regression, z0 = 1, z1 = x1, z2 = x2, …, zm = xm. For polynomial regression,
the z's are simple monomials, as in z0 = 1, z1 = x, z2 = x², …, zm = x^m. The terminology "linear"
refers only to the model's dependence on its parameters, i.e. the a's. As in the case of polynomial regression,
the functions themselves can be highly nonlinear. The general linear least squares model can be expressed
in matrix form as;
{𝑦} = [𝑧]{𝐴} + {𝐸} (4.2.4.2)
𝑤ℎ𝑒𝑟𝑒 [𝒛] is a matrix of calculated values of the 𝒛 functions at the measured values of the independent
variables.

      [ z01   z11   …   zm1 ]
[Z] = [ z02   z12   …   zm2 ]    (4.2.4.4)
      [  ⋮     ⋮          ⋮  ]
      [ z0n   z1n   …   zmn ]

𝒎 is the number of variables in the model and 𝒏 is the number of data points. Since 𝒏 ≥ 𝒎 + 𝟏, most of
the time [𝒛] is not a square matrix. The column vector {y} contains the observed values of the dependent
variable;

{𝑦}𝑇 = {𝑦1 , 𝑦2 , … , 𝑦𝑛 } (4.2.4.5)

The column vector {A} contains the unknown coefficients;

{𝐴}𝑇 = (𝑎0 , 𝑎1 , … , 𝑎𝑚 ) (4.2.4.6)

The column vector {E} contains the residuals;

{E}ᵀ = {e1, e2, …, en}    (4.2.4.7)

The sum of the squares of the residuals can then be defined as;

Sr = Σ (yi − Σ aj·zji)²,  outer sum over i = 1 to n, inner sum over j = 0 to m    (4.2.4.8)

This quantity can be minimized by taking its partial derivative with respect to each of the coefficients and
setting the resulting equation equal to zero. The outcome of this process can be expressed in matrix form
as;
[[Z]ᵀ[Z]]{A} = {[Z]ᵀ{y}}    (4.2.4.9)
{A} = [[Z]ᵀ[Z]]⁻¹{[Z]ᵀ{y}}    (4.2.4.10)
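In code, equations (4.2.4.9) and (4.2.4.10) amount to building [Z] and solving one linear system. A sketch (assuming NumPy) using the parabola basis z = [1, x, x²] on the data of Example 4.2.2.1, which reproduces the coefficients found there:

```python
# General linear least squares: [Z]^T[Z]{A} = [Z]^T{y}, solved for {A}.
import numpy as np

x = np.array([-1.0, 0, 1, 2, 3, 4, 5, 6])
y = np.array([10.0, 6, 2, 1, 0, 2, 4, 7])

Z = np.vander(x, 3, increasing=True)     # row i: [1, x_i, x_i**2]
A = np.linalg.solve(Z.T @ Z, Z.T @ y)    # the normal equations (4.2.4.9)

print(A)   # ≈ [5.6190, -3.7143, 0.6667]
```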
4.3. Interpolation
Interpolation is the second approach for curve fitting which is a more precise one. The basic idea is to fit
a curve or a series of curves that pass directly through each of the points. In this portion we will discuss
the problem of approximating a given function by polynomials.
There are two main uses of these approximating polynomials. The first is to reconstruct the function
f(x) when it is not given explicitly and only values of f(x) and/or its derivatives of certain orders are given
at a set of distinct points called nodes or tabular points. The second is that operations intended
for f(x), such as determination of roots, differentiation, and integration, can be carried out using the
approximating polynomial P(x). The approximating polynomial P(x) can also be used to predict the value
of f(x) at a non-tabular point. The deviation of P(x) from f(x), that is f(x) − P(x), is called the error of
approximation.
Let 𝑓(𝑥) be a continuous function defined on some interval [𝑎, 𝑏], and be prescribed at 𝑛 + 1 distinct
tabular points 𝑥0 , 𝑥1 , … , 𝑥𝑛 such that 𝑎 = 𝑥0 < 𝑥1 < 𝑥2 < … < 𝑥𝑛 = 𝑏. The distinct tabular points
𝑥0 , 𝑥1 , … , 𝑥𝑛 may be non-equispaced or equispaced, that is 𝑥𝑘+1 – 𝑥𝑘 = ℎ; where
𝑘 = 0, 1, 2, … , 𝑛 – 1. The problem of polynomial approximation is to find a polynomial 𝑃𝑛 (𝑥), of
degree ≤ 𝑛, which fits the given data exactly, that is;
𝑃𝑛 (𝑥𝑖 ) = 𝑓(𝑥𝑖 ), 𝑖 = 0, 1, 2, … , 𝑛 (4.3.1)
The polynomial Pn(x) is called the interpolating polynomial. The conditions given in (4.3.1) are called
the interpolating conditions.
Through two distinct points, we can construct a unique polynomial of degree 1(straight line). Through
three distinct points, we can construct a unique polynomial of degree 2 (parabola) or a unique polynomial
of degree1 (straight line). That is, through three distinct points, we can construct a unique polynomial of
degree ≤ 2. In general, through 𝑛 + 1 distinct points, we can construct a unique polynomial of degree ≤
𝑛. The interpolation polynomial fitting a given data is unique. We may express it in various forms but are
otherwise the same polynomial. For example, f(x) = x² − 2x − 1 can be written as;
x² − 2x − 1 = −2 + (x − 1) + (x − 1)(x − 2)
Here after interpolation methods with unevenly spaced data points are discussed.

4.3.1. Newton's Divided Difference Interpolating Polynomials
Before finding out the Newton divided difference interpolating polynomial, let’s first grab the concept of
divided difference.
Divided difference:
Let the data (xi, f(xi)), i = 0, 1, 2, …, n, be given. We define the divided differences as follows.
First divided difference: consider any two consecutive data values (xi, f(xi)), (xi+1, f(xi+1)). Then we define
the first divided difference as;
f[xi, xi+1] = [f(xi+1) − f(xi)] / (xi+1 − xi),  i = 0, 1, 2, …, n − 1
Therefore,
f[x0, x1] = [f(x1) − f(x0)] / (x1 − x0),  f[x1, x2] = [f(x2) − f(x1)] / (x2 − x1),  etc.    (4.3.1.1)
Second divided difference: let's consider the following three consecutive data values
(xi, f(xi)), (xi+1, f(xi+1)), (xi+2, f(xi+2)). Then we define the second divided difference as;
f[xi, xi+1, xi+2] = [f[xi+1, xi+2] − f[xi, xi+1]] / (xi+2 − xi),  i = 0, 1, 2, …, n − 2
Therefore;
f[x0, x1, x2] = [f[x1, x2] − f[x0, x1]] / (x2 − x0),  etc.

f[x0, x1, x2] = (1/(x2 − x0)) [ (f2 − f1)/(x2 − x1) − (f1 − f0)/(x1 − x0) ]    (4.3.1.2)
 = f0/[(x2 − x0)(x1 − x0)] − [f1/(x2 − x0)][1/(x2 − x1) + 1/(x1 − x0)] + f2/[(x2 − x0)(x2 − x1)]
∴ f[x0, x1, x2] = f0/[(x2 − x0)(x1 − x0)] − f1/[(x2 − x1)(x1 − x0)] + f2/[(x2 − x0)(x2 − x1)]

The nth divided difference, using all the data values in the table, is defined as;
f[x0, x1, …, xn] = [f[x1, x2, …, xn] − f[x0, x1, …, xn−1]] / (xn − x0)    (4.3.1.3)

The divided differences can be written in a tabular form as in Table 4-1.


Table 4-1: Divided difference

𝑥𝑖 𝑓(𝑥𝑖 ) 𝑓𝑖𝑟𝑠𝑡 𝐷𝐷 𝑆𝑒𝑐𝑜𝑛𝑑 𝐷𝐷 𝑇ℎ𝑖𝑟𝑑 𝐷𝐷


𝑥0 𝑓(𝑥0 )
𝑓[𝑥0 , 𝑥1 ]
𝑥1 𝑓(𝑥1 ) 𝑓[𝑥0 , 𝑥1 , 𝑥2 ]
𝑓[𝑥1 , 𝑥2 ] 𝑓[𝑥0 , 𝑥1 , 𝑥2 , 𝑥3 ]
𝑥2 𝑓(𝑥2 ) 𝑓[𝑥1 , 𝑥2 , 𝑥3 ]
𝑓[𝑥2 , 𝑥3 ]
𝑥3 𝑓(𝑥3 )

Example 4.3.1:
Obtain the divided difference table for the data;
𝑥 -1 0 2 3
𝑓(𝑥) -8 3 1 12
Solution:
First divided difference [equation (4.3.1.1)];
𝑓1 − 𝑓0 3 − (−8)
𝑓[𝑥0 , 𝑥1 ] = = = 11
𝑥1 − 𝑥0 0 − (−1)
𝑓2 − 𝑓1 1 − (3)
𝑓[𝑥1 , 𝑥2 ] = = = −1
𝑥2 − 𝑥1 2 − (0)

𝑓3 − 𝑓2 12 − (1)
𝑓[𝑥2 , 𝑥3 ] = = = 11
𝑥3 − 𝑥2 3 − (2)
Second divided difference [equation (4.3.1.2)];
1 1
𝑓[𝑥0 , 𝑥1 , 𝑥2 ] = [𝑓[𝑥1 , 𝑥2 ] − 𝑓[𝑥0 , 𝑥1 ]] = [−1 − (11)] = −4
𝑥2 − 𝑥0 2 − (−1)
1 1
𝑓[𝑥1 , 𝑥2 , 𝑥3 ] = [𝑓[𝑥2 , 𝑥3 ] − 𝑓[𝑥1 , 𝑥2 ]] = [11 − (−1)] = 4
𝑥3 − 𝑥1 3 − (0)
Third divided difference [equation (4.3.1.3)];
1 1
𝑓[𝑥0 , 𝑥1 , 𝑥2 , 𝑥3 ] = [𝑓[𝑥1 , 𝑥2 , 𝑥3 ] − 𝑓[𝑥0 , 𝑥1 , 𝑥2 ]] = [4 − (−4)] = 2
𝑥3 − 𝑥0 3 − (−1)

xi        f(xi)    first DD           second DD              third DD
x0 = −1   −8
                   f[x0, x1] = 11
x1 = 0    3                           f[x0, x1, x2] = −4
                   f[x1, x2] = −1                            f[x0, x1, x2, x3] = 2
x2 = 2    1                           f[x1, x2, x3] = 4
                   f[x2, x3] = 11
x3 = 3    12
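The whole table can be generated mechanically. A minimal Python sketch, using no libraries (the function name `divided_differences` is ours, not from the text):

```python
# Divided-difference table for Example 4.3.1.
def divided_differences(xs, fs):
    """Return the table; row k holds the k-th divided differences."""
    table = [list(fs)]
    for k in range(1, len(xs)):
        prev = table[-1]
        table.append([(prev[i + 1] - prev[i]) / (xs[i + k] - xs[i])
                      for i in range(len(prev) - 1)])
    return table

xs = [-1, 0, 2, 3]
fs = [-8, 3, 1, 12]
table = divided_differences(xs, fs)
print(table[1])   # first DD:  [11.0, -1.0, 11.0]
print(table[2])   # second DD: [-4.0, 4.0]
print(table[3])   # third DD:  [2.0]
```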

We mentioned earlier that the interpolating polynomial representing a given set of data values is unique, but the
polynomial can be represented in various forms. We write the interpolating polynomial as;
f(x) = Pn(x) = c0 + (x − x0)c1 + (x − x0)(x − x1)c2 + … + (x − x0)(x − x1)…(x − xn−1)cn    (4.3.1.4)
The polynomial fits the data: Pn(xi) = f(xi) = fi, i = 0, 1, …, n.
Setting Pn(x0) = f0, we obtain
Pn(x0) = f0 = c0    (4.3.1.5)
since all the remaining terms vanish.
Setting Pn(x1) = f1, we obtain

f1 = c0 + (x1 − x0)c1, or
c1 = (f1 − c0)/(x1 − x0) = (f1 − f0)/(x1 − x0) = f[x0, x1]    (4.3.1.6)
Setting Pn(x2) = f2, we obtain
f2 = c0 + (x2 − x0)c1 + (x2 − x0)(x2 − x1)c2, or
c2 = [(f2 − c0) − (x2 − x0)c1] / [(x2 − x0)(x2 − x1)] = [(f2 − f0) − (x2 − x0)f[x0, x1]] / [(x2 − x0)(x2 − x1)]
 = (1/[(x2 − x0)(x2 − x1)]) [ f2 − f0 − (x2 − x0)(f1 − f0)/(x1 − x0) ]
 = f0/[(x1 − x0)(x2 − x0)] + f1/[(x0 − x1)(x2 − x1)] + f2/[(x2 − x0)(x2 − x1)]
𝑐2 = 𝑓[𝑥0 , 𝑥1 , 𝑥2 ] (4.3.1.7)
By induction, we can prove that;
𝑐𝑛 = 𝑓[𝑥0 , 𝑥1 , 𝑥2 , … , 𝑥𝑛 ] (4.3.1.8)
Substituting equation (4.3.1.5) through (4.3.1.8) into equation (4.3.1.4), we can find out the interpolation
polynomial that can represent the given data points.
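Equation (4.3.1.4) evaluates efficiently in nested (Horner-like) form. A sketch using the coefficients derived for the data of Example 4.3.1, namely c0 = −8, c1 = 11, c2 = −4, c3 = 2 (the helper name `newton_eval` is ours):

```python
# Nested evaluation of the Newton-form polynomial (eq. 4.3.1.4).
def newton_eval(xs, cs, x):
    """Evaluate c0 + c1(x-x0) + c2(x-x0)(x-x1) + ... by nesting from the end."""
    result = cs[-1]
    for c, xk in zip(reversed(cs[:-1]), reversed(xs[:len(cs) - 1])):
        result = result * (x - xk) + c
    return result

xs = [-1, 0, 2, 3]
cs = [-8, 11, -4, 2]     # f0 and the first/second/third divided differences
print([newton_eval(xs, cs, xi) for xi in xs])   # reproduces [-8, 3, 1, 12]
```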
Example 4.3.2:
Find f(x) as a polynomial in x for the following data by Newton’s divided difference formula
𝑥 -4 -1 0 2 5
𝑓(𝑥) 1245 33 5 9 1335
Solution:
With five data points, the interpolating polynomial has degree at most four. Therefore we need all divided differences up to the fourth.
First divided difference [equation (4.3.1.1)];
𝑓1 − 𝑓0 33 − (1245)
𝑓[𝑥0 , 𝑥1 ] = = = −404
𝑥1 − 𝑥0 −1 − (−4)
𝑓2 − 𝑓1 5 − (33)
𝑓[𝑥1 , 𝑥2 ] = = = −28
𝑥2 − 𝑥1 0 − (−1)
𝑓3 − 𝑓2 9 − (5)
𝑓[𝑥2 , 𝑥3 ] = = =2
𝑥3 − 𝑥2 2 − (0)
𝑓4 − 𝑓3 1335 − (9)
𝑓[𝑥3 , 𝑥4 ] = = = 442
𝑥4 − 𝑥3 5 − (2)
Second divided difference [equation (4.3.1.2)];
1 1
𝑓[𝑥0 , 𝑥1 , 𝑥2 ] = [𝑓[𝑥1 , 𝑥2 ] − 𝑓[𝑥0 , 𝑥1 ]] = [−28 − (−404)] = 94
𝑥2 − 𝑥0 0 − (−4)
1 1
𝑓[𝑥1 , 𝑥2 , 𝑥3 ] = [𝑓[𝑥2 , 𝑥3 ] − 𝑓[𝑥1 , 𝑥2 ]] = [2 − (−28)] = 10
𝑥3 − 𝑥1 2 − (−1)

1 1
𝑓[𝑥2 , 𝑥3 , 𝑥4 ] = [𝑓[𝑥3 , 𝑥4 ] − 𝑓[𝑥2 , 𝑥3 ]] = [442 − (2)] = 88
𝑥4 − 𝑥2 5 − (0)
Third divided difference [equation (4.3.1.3)];
1 1
𝑓[𝑥0 , 𝑥1 , 𝑥2 , 𝑥3 ] = [𝑓[𝑥1 , 𝑥2 , 𝑥3 ] − 𝑓[𝑥0 , 𝑥1 , 𝑥2 ]] = [10 − (94)] = −14
𝑥3 − 𝑥0 2 − (−4)
f[x1, x2, x3, x4] = [f[x2, x3, x4] − f[x1, x2, x3]] / (x4 − x1) = [88 − (10)] / [5 − (−1)] = 13

Fourth divided difference [equation (4.3.1.3)];


1
𝑓[𝑥0 , 𝑥1 , 𝑥2 , 𝑥3 , 𝑥4 ] = [𝑓[𝑥1 , 𝑥2 , 𝑥3 , 𝑥4 ] − 𝑓[𝑥0 , 𝑥1 , 𝑥2 , 𝑥3 ]]
𝑥4 − 𝑥0
1
= [13 − (−14)] = 3
5 − (−4)
xi        f(xi)   first DD             second DD              third DD                    fourth DD
x0 = −4   1245
                  f[x0, x1] = −404
x1 = −1   33                           f[x0, x1, x2] = 94
                  f[x1, x2] = −28                             f[x0, x1, x2, x3] = −14
x2 = 0    5                            f[x1, x2, x3] = 10                                 f[x0, x1, x2, x3, x4] = 3
                  f[x2, x3] = 2                               f[x1, x2, x3, x4] = 13
x3 = 2    9                            f[x2, x3, x4] = 88
                  f[x3, x4] = 442
x4 = 5    1335

f(x) = c0 + (x − x0)c1 + (x − x0)(x − x1)c2 + (x − x0)(x − x1)(x − x2)c3 + (x − x0)(x − x1)(x − x2)(x − x3)c4

f(x) = 1245 + [x − (−4)](−404) + [x − (−4)][x − (−1)](94)
 + [x − (−4)][x − (−1)](x − 0)(−14) + [x − (−4)][x − (−1)](x − 0)(x − 2)(3)
f(x) = 3x⁴ − 5x³ + 6x² − 14x + 5
Example 4.3.3:
Find f(x) as an interpolating polynomial in x for the following data by Newton’s divided difference
formula.
𝑥 -2 -1 0 1 3 4
𝑓(𝑥) 9 16 17 18 44 81
Hence, interpolate at x = 0.5 and x = 3.1.
Solution:
First let’s find table of divided difference;
First divided difference [equation (4.3.1.1)];

𝑓1 − 𝑓0 16 − (9)
𝑓[𝑥0 , 𝑥1 ] = = =7
𝑥1 − 𝑥0 −1 − (−2)
𝑓2 − 𝑓1 17 − (16)
𝑓[𝑥1 , 𝑥2 ] = = =1
𝑥2 − 𝑥1 0 − (−1)
𝑓3 − 𝑓2 18 − (17)
𝑓[𝑥2 , 𝑥3 ] = = =1
𝑥3 − 𝑥2 1 − (0)
𝑓4 − 𝑓3 44 − (18)
𝑓[𝑥3 , 𝑥4 ] = = = 13
𝑥4 − 𝑥3 3 − (1)
𝑓5 − 𝑓4 81 − (44)
𝑓[𝑥4 , 𝑥5 ] = = = 37
𝑥5 − 𝑥4 4 − (3)
Second divided difference [equation (4.3.1.2)];
1 1
𝑓[𝑥0 , 𝑥1 , 𝑥2 ] = [𝑓[𝑥1 , 𝑥2 ] − 𝑓[𝑥0 , 𝑥1 ]] = [1 − (7)] = −3
𝑥2 − 𝑥0 0 − (−2)
1 1
𝑓[𝑥1 , 𝑥2 , 𝑥3 ] = [𝑓[𝑥2 , 𝑥3 ] − 𝑓[𝑥1 , 𝑥2 ]] = [1 − (1)] = 0
𝑥3 − 𝑥1 1 − (−1)
1 1
𝑓[𝑥2 , 𝑥3 , 𝑥4 ] = [𝑓[𝑥3 , 𝑥4 ] − 𝑓[𝑥2 , 𝑥3 ]] = [13 − (1)] = 4
𝑥4 − 𝑥2 3 − (0)
f[x3, x4, x5] = [f[x4, x5] − f[x3, x4]] / (x5 − x3) = [37 − (13)] / [4 − (1)] = 8
Third divided difference [equation (4.3.1.3)];
1 1
𝑓[𝑥0 , 𝑥1 , 𝑥2 , 𝑥3 ] = [𝑓[𝑥1 , 𝑥2 , 𝑥3 ] − 𝑓[𝑥0 , 𝑥1 , 𝑥2 ]] = [0 − (−3)] = 1
𝑥3 − 𝑥0 1 − (−2)
f[x1, x2, x3, x4] = [f[x2, x3, x4] − f[x1, x2, x3]] / (x4 − x1) = [4 − (0)] / [3 − (−1)] = 1
1 1
𝑓[𝑥2 , 𝑥3 , 𝑥4 , 𝑥5 ] = [𝑓[𝑥3 , 𝑥4 , 𝑥5 ] − 𝑓[𝑥2 , 𝑥3 , 𝑥4 ]] = [8 − (4)] = 1
𝑥5 − 𝑥2 4 − (0)
Fourth divided difference [equation (4.3.1.3)];
1
𝑓[𝑥0 , 𝑥1 , 𝑥2 , 𝑥3 , 𝑥4 ] = [𝑓[𝑥1 , 𝑥2 , 𝑥3 , 𝑥4 ] − 𝑓[𝑥0 , 𝑥1 , 𝑥2 , 𝑥3 ]]
𝑥4 − 𝑥0
1
= [1 − (1)] = 0
3 − (−2)
1
𝑓[𝑥1 , 𝑥2 , 𝑥3 , 𝑥4 , 𝑥5 ] = [𝑓[𝑥2 , 𝑥3 , 𝑥4 , 𝑥5 ] − 𝑓[𝑥1 , 𝑥2 , 𝑥3 , 𝑥4 ]]
𝑥5 − 𝑥1
1
= [1 − (1)] = 0
4 − (−1)
Fifth divided difference [equation (4.3.1.3)];
1 1
𝑓[𝑥0 , 𝑥1 , 𝑥2 , 𝑥3 , 𝑥4 , 𝑥5 ] = [𝑓[𝑥1 , 𝑥2 , 𝑥3 , 𝑥4 , 𝑥5 ] − 𝑓[𝑥0 , 𝑥1 , 𝑥2 , 𝑥3 , 𝑥4 ]] = [0 − (0)] = 0
𝑥5 − 𝑥0 4 − (−2)
xi        f(xi)   first DD          second DD              third DD                  fourth DD                      fifth DD
x0 = −2   9
                  f[x0, x1] = 7
x1 = −1   16                        f[x0, x1, x2] = −3
                  f[x1, x2] = 1                              f[x0, x1, x2, x3] = 1
x2 = 0    17                        f[x1, x2, x3] = 0                                  f[x0, x1, x2, x3, x4] = 0
                  f[x2, x3] = 1                              f[x1, x2, x3, x4] = 1                                    f[x0, x1, x2, x3, x4, x5] = 0
x3 = 1    18                        f[x2, x3, x4] = 4                                  f[x1, x2, x3, x4, x5] = 0
                  f[x3, x4] = 13                             f[x2, x3, x4, x5] = 1
x4 = 3    44                        f[x3, x4, x5] = 8
                  f[x4, x5] = 37
x5 = 4    81

Therefore: c0 = f0 = 9, c1 = 7, c2 = −3, c3 = 1, c4 = 0, and c5 = 0.
According to equation (4.3.1.4), the interpolating polynomial is given by;
f(x) = c0 + (x − x0)c1 + (x − x0)(x − x1)c2
 + (x − x0)(x − x1)(x − x2)c3 + (x − x0)(x − x1)(x − x2)(x − x3)c4
 + (x − x0)(x − x1)(x − x2)(x − x3)(x − x4)c5
 = 9 + [x − (−2)](7) + [x − (−2)][x − (−1)](−3)
 + [x − (−2)][x − (−1)](x − 0)(1) + [x − (−2)][x − (−1)](x − 0)(x − 1)(0)
 + [x − (−2)][x − (−1)](x − 0)(x − 1)(x − 3)(0)
 = 9 + (x + 2)(7) + (x + 2)(x + 1)(−3) + (x + 2)(x + 1)(x)(1)
 = 9 + 7x + 14 − 3x² − 9x − 6 + x³ + 3x² + 2x
f(x) = x³ + 17 ≫ this is the interpolating polynomial
Then we can use this equation to interpolate at the given x values
𝑓(0.5) = 17.125 , and 𝑓(3.1) = 46.791

Exercise 4.3.1:
Find f(x) as a polynomial in x for the following data by Newton’s divided difference formula

𝑥 1 3 4 5 7 10

𝑓(𝑥) 3 31 69 131 351 1011


Hence, interpolate at 𝑥 = 3.5 and 𝑥 = 8.0. Also find, 𝑓′(3) and 𝑓″ (1.5).

4.3.2. Lagrange Interpolation Polynomials


Let the data

𝑥 𝑥0 𝑥1 𝑥2 𝑥3 ⋯ 𝑥𝑛
𝑓(𝑥) 𝑓(𝑥0 ) 𝑓(𝑥1 ) 𝑓(𝑥2 ) 𝑓(𝑥3 ) ⋯ 𝑓(𝑥𝑛 )

be given at distinct unevenly spaced points or non-uniform points 𝑥0 , 𝑥1 , . . . , 𝑥𝑛 . This data may also be
given at evenly spaced points. For this data, we can fit a unique polynomial of degree ≤ 𝑛. Since the
interpolating polynomial must use all the ordinates 𝑓(𝑥0 ), 𝑓(𝑥1 ), . . . 𝑓(𝑥𝑛 ), it can be written as a linear
combination of these ordinates. That is, we can write the polynomial as;
𝑃𝑛 (𝑥) = 𝑙0 (𝑥)𝑓(𝑥0 ) + 𝑙1 (𝑥)𝑓(𝑥1 ) + ⋯ + 𝑙𝑛 (𝑥)𝑓(𝑥𝑛 )
= 𝑙0 (𝑥)𝑓0 + 𝑙1 (𝑥)𝑓1 + ⋯ + 𝑙𝑛 (𝑥)𝑓𝑛 (4.3.2.1)
where 𝑓(𝑥𝑖 ) = 𝑓𝑖 and 𝑙𝑖 (𝑥), 𝑖 = 0, 1, 2, … , 𝑛, are polynomials of degree 𝑛.
At 𝑥 = 𝑥0 , 𝑤𝑒 𝑔𝑒𝑡
𝑓(𝑥0 ) = 𝑃𝑛 (𝑥0 ) = 𝑙0 (𝑥0 )𝑓(𝑥0 ) + 𝑙1 (𝑥0 )𝑓(𝑥1 ) + ⋯ + 𝑙𝑛 (𝑥0 )𝑓(𝑥𝑛 )
This equation is satisfied only when 𝑙0 (𝑥0 ) = 1 and 𝑙𝑖 (𝑥0 ) = 0, 𝑖 ≠ 0.
At a general point 𝑥 = 𝑥𝑖 , 𝑤𝑒 𝑔𝑒𝑡

𝑓(𝑥𝑖 ) = 𝑃𝑛 (𝑥𝑖 ) = 𝑙0 (𝑥𝑖 )𝑓(𝑥0 ) + ⋯ + 𝑙𝑖 (𝑥𝑖 )𝑓(𝑥𝑖 ) + ⋯ + 𝑙𝑛 (𝑥𝑖 )𝑓(𝑥𝑛 ).
This equation is satisfied only when 𝑙𝑖 (𝑥𝑖 ) = 1 and 𝑙𝑗 (𝑥𝑖 ) = 0, 𝑗 ≠ 𝑖.
Therefore, 𝑙𝑖 (𝑥), which are polynomials of degree 𝑛, satisfy the conditions
𝑙𝑖 (𝑥𝑗 ) = {0, 𝑖 ≠ 𝑗; 1, 𝑖 = 𝑗} (4.3.2.2)
Since 𝑙𝑖 (𝑥) = 0 at 𝑥 = 𝑥0 , 𝑥1 , … , 𝑥𝑖−1 , 𝑥𝑖+1 , … , 𝑥𝑛 , we know that
(𝑥 − 𝑥0 ), (𝑥 − 𝑥1 ), … , (𝑥 − 𝑥𝑖−1 ), (𝑥 − 𝑥𝑖+1 ), … , (𝑥 − 𝑥𝑛 ) are factors of 𝑙𝑖 (𝑥). The product of these factors
is a polynomial of degree 𝑛. Therefore, we can write;
𝑙𝑖 (𝑥) = 𝐶(𝑥 − 𝑥0 )(𝑥 − 𝑥1 ) ⋯ (𝑥 − 𝑥𝑖−1 )(𝑥 − 𝑥𝑖+1 ) ⋯ (𝑥 − 𝑥𝑛 )
where C is a constant.
Since 𝑙𝑖 (𝑥𝑖 ) = 1, we get;
1 = 𝐶(𝑥𝑖 − 𝑥0 )(𝑥𝑖 − 𝑥1 ) ⋯ (𝑥𝑖 − 𝑥𝑖−1 )(𝑥𝑖 − 𝑥𝑖+1 ) ⋯ (𝑥𝑖 − 𝑥𝑛 )
Hence,
𝐶 = 1/[(𝑥𝑖 − 𝑥0 )(𝑥𝑖 − 𝑥1 ) ⋯ (𝑥𝑖 − 𝑥𝑖−1 )(𝑥𝑖 − 𝑥𝑖+1 ) ⋯ (𝑥𝑖 − 𝑥𝑛 )]
Therefore,
𝑙𝑖 (𝑥) = [(𝑥 − 𝑥0 )(𝑥 − 𝑥1 ) ⋯ (𝑥 − 𝑥𝑖−1 )(𝑥 − 𝑥𝑖+1 ) ⋯ (𝑥 − 𝑥𝑛 )] / [(𝑥𝑖 − 𝑥0 )(𝑥𝑖 − 𝑥1 ) ⋯ (𝑥𝑖 − 𝑥𝑖−1 )(𝑥𝑖 − 𝑥𝑖+1 ) ⋯ (𝑥𝑖 − 𝑥𝑛 )] (4.3.2.3)
Note that the denominator on the right-hand side of 𝑙𝑖 (𝑥) is obtained by setting 𝑥 = 𝑥𝑖 in the numerator.
The polynomial given in (4.3.2.1) where 𝑙𝑖 (𝑥) are defined by (4.3.2.3) is called the Lagrange interpolating
polynomial and 𝑙𝑖 (𝑥) are called the Lagrange fundamental polynomials.
Let us derive the linear and quadratic interpolating polynomials.

Linear interpolation
For n = 1, we have the data
𝑥 𝑥0 𝑥1
𝑓(𝑥) 𝑓(𝑥0 ) 𝑓(𝑥1 )
Then the Lagrange fundamental polynomials are given by;
𝑙0 (𝑥) = (𝑥 − 𝑥1 )/(𝑥0 − 𝑥1 ) and 𝑙1 (𝑥) = (𝑥 − 𝑥0 )/(𝑥1 − 𝑥0 ) (4.3.2.4)
Therefore, Lagrange linear interpolation polynomial is given by;
𝑃1 (𝑥) = 𝑙0 (𝑥)𝑓(𝑥0 ) + 𝑙1 (𝑥)𝑓(𝑥1 ) (4.3.2.5)

Quadratic interpolation
For n = 2, we have the data
𝑥 𝑥0 𝑥1 𝑥2
𝑓(𝑥) 𝑓(𝑥0 ) 𝑓(𝑥1 ) 𝑓(𝑥2 )

Then the Lagrange fundamental polynomials are given by;
𝑙0 (𝑥) = (𝑥 − 𝑥1 )(𝑥 − 𝑥2 )/[(𝑥0 − 𝑥1 )(𝑥0 − 𝑥2 )], 𝑙1 (𝑥) = (𝑥 − 𝑥0 )(𝑥 − 𝑥2 )/[(𝑥1 − 𝑥0 )(𝑥1 − 𝑥2 )], 𝑙2 (𝑥) = (𝑥 − 𝑥0 )(𝑥 − 𝑥1 )/[(𝑥2 − 𝑥0 )(𝑥2 − 𝑥1 )] (4.3.2.6)
Therefore, Lagrange quadratic interpolation polynomial is given by;
𝑃2 (𝑥) = 𝑙0 (𝑥)𝑓(𝑥0 ) + 𝑙1 (𝑥)𝑓(𝑥1 ) + 𝑙2 (𝑥)𝑓(𝑥2 ) (4.3.2.7)
Error of interpolation
We assume that f(x) has continuous derivatives of order up to n + 1 for all x ∈ (a, b). Since f(x) is
approximated by 𝑃𝑛 (𝑥), the results contain errors. We define the error of interpolation or truncation error
as;
𝐸(𝑓, 𝑥) = 𝑓(𝑥)– 𝑃𝑛 (𝑥) (4.3.2.8)
Example 4.3.2.1:
Using the data sin (0.1) = 0.09983 and sin (0.2) = 0.19867, find an approximate value of sin (0.15) by
Lagrange interpolation.
Solution:
From the problem given we can have the following data table;
𝑥 0.1 0.2
𝑓(𝑥) 0.09983 0.19867
Then the Lagrange fundamental polynomials are found to be;
𝑙0 (𝑥) = (𝑥 − 𝑥1 )/(𝑥0 − 𝑥1 ) = (𝑥 − 0.2)/(0.1 − 0.2) = 2 − 10𝑥 and 𝑙1 (𝑥) = (𝑥 − 𝑥0 )/(𝑥1 − 𝑥0 ) = (𝑥 − 0.1)/(0.2 − 0.1) = 10𝑥 − 1
Then linear Lagrange interpolation polynomial is found using equation (4.3.2.5)
𝑃1 (𝑥) = 𝑙0 (𝑥)𝑓(𝑥0 ) + 𝑙1 (𝑥)𝑓(𝑥1 ) = (2 − 10𝑥 )(0.09983) + (10𝑥 − 1)(0.19867)
= 0.19966 − 0.9983𝑥 + 1.9867𝑥 − 0.19867
𝑃1 (𝑥) = 0.9884𝑥 + 0.00099 ⇝ 𝑙𝑖𝑛𝑒𝑎𝑟 𝐿𝑎𝑔𝑟𝑎𝑛𝑔𝑒 𝑖𝑛𝑡𝑒𝑟𝑝𝑜𝑙𝑎𝑡𝑖𝑛𝑔 𝑝𝑜𝑙𝑦𝑛𝑜𝑚𝑖𝑎𝑙
∴ 𝑓(0.15) = 0.9884(0.15) + 0.00099 = 0.14925
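Formula (4.3.2.1), with the fundamental polynomials built from (4.3.2.3), translates directly into code. The Python sketch below (names are illustrative, not from the text) evaluates the full Lagrange sum for arbitrary nodes and reproduces the value just obtained:

```python
def lagrange_eval(xs, fs, x):
    """Evaluate P_n(x) = sum_i l_i(x) f_i, with each l_i built per (4.3.2.3)."""
    total = 0.0
    for i, xi in enumerate(xs):
        li = 1.0
        for j, xj in enumerate(xs):
            if j != i:                       # product over all factors except i
                li *= (x - xj) / (xi - xj)
        total += li * fs[i]
    return total

# Example 4.3.2.1: linear interpolation of sin(x) from two tabulated values.
print(lagrange_eval([0.1, 0.2], [0.09983, 0.19867], 0.15))  # ≈ 0.14925
```

The same call handles any degree; only the length of the node list changes.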
Example 4.3.2.2:
Use Lagrange’s formula, to find the quadratic polynomial that takes the values;
𝑥 0 1 3
𝑓(𝑥) 0 1 0
Solution:
The Lagrange fundamental polynomials are found to be;
𝑙0 (𝑥) = (𝑥 − 𝑥1 )(𝑥 − 𝑥2 )/[(𝑥0 − 𝑥1 )(𝑥0 − 𝑥2 )] = (𝑥 − 1)(𝑥 − 3)/[(0 − 1)(0 − 3)] = (1/3)𝑥² − (4/3)𝑥 + 1

𝑙1 (𝑥) = (𝑥 − 𝑥0 )(𝑥 − 𝑥2 )/[(𝑥1 − 𝑥0 )(𝑥1 − 𝑥2 )] = (𝑥 − 0)(𝑥 − 3)/[(1 − 0)(1 − 3)] = −(1/2)(𝑥² − 3𝑥) and
𝑙2 (𝑥) = (𝑥 − 𝑥0 )(𝑥 − 𝑥1 )/[(𝑥2 − 𝑥0 )(𝑥2 − 𝑥1 )] = (𝑥 − 0)(𝑥 − 1)/[(3 − 0)(3 − 1)] = (1/6)(𝑥² − 𝑥)
From equation (4.3.2.7), the Lagrange quadratic interpolating polynomial is obtained as;
𝑃2 (𝑥) = 𝑙0 (𝑥)𝑓(𝑥0 ) + 𝑙1 (𝑥)𝑓(𝑥1 ) + 𝑙2 (𝑥)𝑓(𝑥2 ) = ((1/3)𝑥² − (4/3)𝑥 + 1)(0) + (−(1/2)(𝑥² − 3𝑥))(1) + ((1/6)(𝑥² − 𝑥))(0)
∴ 𝑃2 (𝑥) = −(1/2)(𝑥² − 3𝑥)

Example 4.3.2.3:
Construct the Lagrange interpolation polynomial for the data
𝑥 −1 1 4 7
𝑓(𝑥) −2 0 63 342
Hence, interpolate at x = 5.
Solution:
The Lagrange fundamental polynomials are found to be;
𝑙0 (𝑥) = (𝑥 − 𝑥1 )(𝑥 − 𝑥2 )(𝑥 − 𝑥3 )/[(𝑥0 − 𝑥1 )(𝑥0 − 𝑥2 )(𝑥0 − 𝑥3 )] = (𝑥 − 1)(𝑥 − 4)(𝑥 − 7)/[(−1 − 1)(−1 − 4)(−1 − 7)] = −(1/80)(𝑥³ − 12𝑥² + 39𝑥 − 28)
𝑙1 (𝑥) = (𝑥 − 𝑥0 )(𝑥 − 𝑥2 )(𝑥 − 𝑥3 )/[(𝑥1 − 𝑥0 )(𝑥1 − 𝑥2 )(𝑥1 − 𝑥3 )] = (𝑥 + 1)(𝑥 − 4)(𝑥 − 7)/[(1 + 1)(1 − 4)(1 − 7)] = (1/36)(𝑥³ − 10𝑥² + 17𝑥 + 28)
𝑙2 (𝑥) = (𝑥 − 𝑥0 )(𝑥 − 𝑥1 )(𝑥 − 𝑥3 )/[(𝑥2 − 𝑥0 )(𝑥2 − 𝑥1 )(𝑥2 − 𝑥3 )] = (𝑥 + 1)(𝑥 − 1)(𝑥 − 7)/[(4 + 1)(4 − 1)(4 − 7)] = −(1/45)(𝑥³ − 7𝑥² − 𝑥 + 7)
𝑙3 (𝑥) = (𝑥 − 𝑥0 )(𝑥 − 𝑥1 )(𝑥 − 𝑥2 )/[(𝑥3 − 𝑥0 )(𝑥3 − 𝑥1 )(𝑥3 − 𝑥2 )] = (𝑥 + 1)(𝑥 − 1)(𝑥 − 4)/[(7 + 1)(7 − 1)(7 − 4)] = (1/144)(𝑥³ − 4𝑥² − 𝑥 + 4)

The Lagrange interpolating polynomial is found to be;
𝑃3 (𝑥) = 𝑙0 (𝑥)𝑓(𝑥0 ) + 𝑙1 (𝑥)𝑓(𝑥1 ) + 𝑙2 (𝑥)𝑓(𝑥2 ) + 𝑙3 (𝑥)𝑓(𝑥3 )
= (−(1/80)(𝑥³ − 12𝑥² + 39𝑥 − 28))(−2) + ((1/36)(𝑥³ − 10𝑥² + 17𝑥 + 28))(0)
+ (−(1/45)(𝑥³ − 7𝑥² − 𝑥 + 7))(63) + ((1/144)(𝑥³ − 4𝑥² − 𝑥 + 4))(342)
∴ 𝑓(𝑥) = 𝑃3 (𝑥) = 𝑥³ − 1 ⇝ cubic interpolating polynomial


Hence,
𝑓(5) = (5)³ − 1 = 124
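As a cross-check on this example, each fundamental polynomial can be expanded into power-basis coefficients and the weighted sum accumulated; the result should be the coefficient list of 𝑥³ − 1. A small Python sketch (helper names are my own) under that assumption:

```python
def poly_mul(p, q):
    """Multiply two polynomials stored as coefficient lists, lowest degree first."""
    out = [0.0] * (len(p) + len(q) - 1)
    for i, pi in enumerate(p):
        for j, qj in enumerate(q):
            out[i + j] += pi * qj
    return out

def lagrange_coeffs(xs, fs):
    """Power-basis coefficients of the Lagrange polynomial sum_i l_i(x) f_i."""
    n = len(xs)
    coeffs = [0.0] * n
    for i in range(n):
        num = [1.0]                    # numerator: product of (x - x_j), j != i
        denom = 1.0
        for j in range(n):
            if j != i:
                num = poly_mul(num, [-xs[j], 1.0])
                denom *= xs[i] - xs[j]
        for k in range(n):
            coeffs[k] += fs[i] * num[k] / denom
    return coeffs

print(lagrange_coeffs([-1, 1, 4, 7], [-2, 0, 63, 342]))  # ≈ [-1, 0, 0, 1], i.e. x³ − 1
```

The cross terms cancel exactly as in the hand expansion, leaving only the constant −1 and the 𝑥³ term.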

4.3.3. Spline interpolation polynomial
In the earlier days of development of engineering devices, the draftsman used a device to draw smooth
curves through a given set of points such that the slope and curvature are also continuous along the curves,
that is, f(x), f′(x) and f″(x) are continuous on the curves. Such a device was called a spline and plotting
of the curve was called spline fitting.
We now define a spline.
Let the given interval [a, b] be subdivided into 𝑛 subintervals [𝑥0 , 𝑥1 ], [𝑥1 , 𝑥2 ], . . . , [𝑥𝑛−1 , 𝑥𝑛 ] 𝑤ℎ𝑒𝑟𝑒 𝑎 =
𝑥0 < 𝑥1 < 𝑥2 < . . . < 𝑥𝑛 = 𝑏. The points 𝑥0 , 𝑥1 , . . . , 𝑥𝑛 are called nodes or knots and 𝑥1 , . . . , 𝑥𝑛−1 are
called internal nodes.
Spline function; a spline function of degree 𝑛 with nodes 𝑥0 , 𝑥1 , . . . , 𝑥𝑛 , is a function F(x) satisfying the
following properties.
(𝑖) 𝐹(𝑥𝑖 ) = 𝑓(𝑥𝑖 ), 𝑖 = 0, 1, . . . . , 𝑛. (Interpolation conditions).
(𝑖𝑖) On each subinterval [𝑥𝑖−1 , 𝑥𝑖 ], 1 ≤ 𝑖 ≤ 𝑛, 𝐹(𝑥) is a polynomial of degree 𝑛.
(𝑖𝑖𝑖) 𝐹(𝑥) and its first (𝑛 − 1) derivatives are continuous on (𝑎, 𝑏).
For our discussion, we shall consider cubic splines only. From the definition, a cubic spline has the
following properties.
(𝑖) 𝐹(𝑥𝑖) = 𝑓(𝑥𝑖), 𝑖 = 0, 1, . . . , 𝑛. (Interpolation conditions).
(𝑖𝑖) On each subinterval [𝑥𝑖−1 , 𝑥𝑖 ], 1 ≤ 𝑖 ≤ 𝑛, 𝐹(𝑥) is a third degree (cubic) polynomial.
(𝑖𝑖𝑖) 𝐹(𝑥), 𝐹′(𝑥) 𝑎𝑛𝑑 𝐹 ″(𝑥) are continuous on (𝑎, 𝑏).
Let 𝐹(𝑥) = 𝑃𝑖 (𝑥) = 𝑎𝑖 𝑥 3 + 𝑏𝑖 𝑥 2 + 𝑐𝑖 𝑥 + 𝑑𝑖 𝑜𝑛 [𝑥𝑖−1 , 𝑥𝑖 ]
𝑎𝑛𝑑 𝐹(𝑥) = 𝑃𝑖+1 (𝑥) = 𝑎𝑖+1 𝑥 3 + 𝑏𝑖+1 𝑥 2 + 𝑐𝑖+1 𝑥 + 𝑑𝑖+1 𝑜𝑛 [𝑥𝑖 , 𝑥𝑖+1 ].
On each interval, we have four unknowns 𝑎𝑖 , 𝑏𝑖 , 𝑐𝑖 𝑎𝑛𝑑 𝑑𝑖 , 𝑖 = 1, 2, . . . , 𝑛. Therefore, the total number
of unknowns is 4𝑛.
Continuity of 𝐹(𝑥), 𝐹′(𝑥) 𝑎𝑛𝑑 𝐹″(𝑥) 𝑜𝑛 (𝑎, 𝑏) implies the following.
(𝑖) Continuity of 𝐹(𝑥):
On [𝑥𝑖−1 , 𝑥𝑖 ]: 𝑃𝑖 (𝑥𝑖 ) = 𝑓(𝑥𝑖 ) = 𝑎𝑖 𝑥𝑖 3 + 𝑏𝑖 𝑥𝑖 2 + 𝑐𝑖 𝑥𝑖 + 𝑑𝑖
On [𝑥𝑖 , 𝑥𝑖+1 ]: 𝑃𝑖+1 (𝑥𝑖 ) = 𝑓(𝑥𝑖 ) = 𝑎𝑖+1 𝑥𝑖 3 + 𝑏𝑖+1 𝑥𝑖 2 + 𝑐𝑖+1 𝑥𝑖 + 𝑑𝑖+1 (4.3.3.1)
(𝑖𝑖) Continuity of 𝐹′(𝑥):
3𝑎𝑖 𝑥𝑖 2 + 2𝑏𝑖 𝑥𝑖 + 𝑐𝑖 = 3𝑎𝑖+1 𝑥𝑖 2 + 2𝑏𝑖+1 𝑥𝑖 + 𝑐𝑖+1 , 𝑖 = 1,2, … , 𝑛 − 1 (4.3.3.2)
(𝑖𝑖𝑖) Continuity of 𝐹′′(𝑥):
6𝑎𝑖 𝑥𝑖 + 2𝑏𝑖 = 6𝑎𝑖+1 𝑥𝑖 + 2𝑏𝑖+1 , 𝑖 = 1,2, … , 𝑛 − 1 (4.3.3.3)
At the end point 𝑥0 𝑎𝑛𝑑 𝑥𝑛 , we have the interpolation conditions
𝑓(𝑥0 ) = 𝑎1 𝑥0 3 + 𝑏1 𝑥0 2 + 𝑐1 𝑥0 + 𝑑1

𝑎𝑛𝑑 𝑓(𝑥𝑛 ) = 𝑎𝑛 𝑥𝑛 3 + 𝑏𝑛 𝑥𝑛 2 + 𝑐𝑛 𝑥𝑛 + 𝑑𝑛 (4.3.3.4)
We have 2(n – 1) equations from (4.3.3.1), (n – 1) equations from (4.3.3.2), (n – 1) equations from (4.3.3.3)
and 2 equations from (4.3.3.4). That is, we have a total of 4n – 2 equations. We need two more equations
to obtain the polynomial uniquely. There are various types of conditions that can be prescribed to obtain
two more equations. We shall consider the case of a natural spline, in which we set the two conditions
as 𝐹 ″ (𝑥0 ) = 0, 𝐹″(𝑥𝑛 ) = 0.
The above procedure is a direct way of obtaining a cubic spline. However, we shall derive a simple method
to determine the cubic spline.
Cubic spline
From the definition, spline is a piecewise continuous cubic polynomial. Hence, 𝐹″(𝑥) is a linear function
of 𝑥 in all the intervals.
Consider the interval [𝑥𝑖−1 , 𝑥𝑖 ]. Using Lagrange interpolation in this interval, 𝐹 ″(𝑥) is written as;
𝐹′′(𝑥) = ((𝑥𝑖 − 𝑥)/(𝑥𝑖 − 𝑥𝑖−1 ))𝐹′′(𝑥𝑖−1 ) + ((𝑥 − 𝑥𝑖−1 )/(𝑥𝑖 − 𝑥𝑖−1 ))𝐹′′(𝑥𝑖 ) (4.3.3.5)
Denote 𝐹′′(𝑥𝑖−1 ) = 𝑀𝑖−1 and 𝐹′′(𝑥𝑖 ) = 𝑀𝑖 .
Integrating equation (4.3.3.5) with respect to 𝑥, we get;
𝐹′(𝑥) = −((𝑥𝑖 − 𝑥)²/(2(𝑥𝑖 − 𝑥𝑖−1 )))𝑀𝑖−1 + ((𝑥 − 𝑥𝑖−1 )²/(2(𝑥𝑖 − 𝑥𝑖−1 )))𝑀𝑖 + 𝑎 (4.3.3.6)
Integrating equation (4.3.3.5) again with respect to 𝑥, we get;
𝐹(𝑥) = ((𝑥𝑖 − 𝑥)³/(6(𝑥𝑖 − 𝑥𝑖−1 )))𝑀𝑖−1 + ((𝑥 − 𝑥𝑖−1 )³/(6(𝑥𝑖 − 𝑥𝑖−1 )))𝑀𝑖 + 𝑎𝑥 + 𝑏 (4.3.3.7)
where 𝑎 and 𝑏 are arbitrary constants to be determined by using the conditions
𝐹(𝑥𝑖−1 ) = 𝑓(𝑥𝑖−1 ), 𝑎𝑛𝑑 𝐹(𝑥𝑖 ) = 𝑓(𝑥𝑖 ) (4.3.3.8)
Denote 𝑥𝑖 − 𝑥𝑖−1 = ℎ𝑖 , 𝑓(𝑥𝑖−1 ) = 𝑓𝑖−1 , and 𝑓(𝑥𝑖 ) = 𝑓𝑖 . Note that ℎ𝑖 is the length of the interval [𝑥𝑖−1 , 𝑥𝑖 ].
To ease the computation, we write
𝑎𝑥 + 𝑏 = 𝑐(𝑥𝑖 − 𝑥) + 𝑑(𝑥 − 𝑥𝑖−1 ), 𝑤ℎ𝑒𝑟𝑒 𝑎 = 𝑑 − 𝑐 𝑎𝑛𝑑 𝑏 = 𝑐𝑥𝑖 − 𝑑𝑥𝑖−1
That is, equation (4.3.3.7) is written as;
𝐹(𝑥) = ((𝑥𝑖 − 𝑥)³/(6ℎ𝑖 ))𝑀𝑖−1 + ((𝑥 − 𝑥𝑖−1 )³/(6ℎ𝑖 ))𝑀𝑖 + 𝑐(𝑥𝑖 − 𝑥) + 𝑑(𝑥 − 𝑥𝑖−1 ) (4.3.3.9)
i) Using the condition 𝐹(𝑥𝑖−1 ) = 𝑓(𝑥𝑖−1 ) = 𝑓𝑖−1 , we get
𝑓𝑖−1 = ((𝑥𝑖 − 𝑥𝑖−1 )³/(6ℎ𝑖 ))𝑀𝑖−1 + 𝑐(𝑥𝑖 − 𝑥𝑖−1 ) = (ℎ𝑖 ²/6)𝑀𝑖−1 + 𝑐ℎ𝑖

𝑐 = (1/ℎ𝑖 )[𝑓𝑖−1 − (ℎ𝑖 ²/6)𝑀𝑖−1 ]
ii) Using the condition 𝐹(𝑥𝑖 ) = 𝑓(𝑥𝑖 ) = 𝑓𝑖 , we get
𝑓𝑖 = ((𝑥𝑖 − 𝑥𝑖−1 )³/(6ℎ𝑖 ))𝑀𝑖 + 𝑑(𝑥𝑖 − 𝑥𝑖−1 ) = (ℎ𝑖 ²/6)𝑀𝑖 + 𝑑ℎ𝑖
𝑑 = (1/ℎ𝑖 )[𝑓𝑖 − (ℎ𝑖 ²/6)𝑀𝑖 ]
Substituting the expressions for 𝑐 and 𝑑 in equation (4.3.3.9), we obtain the spline in the interval [𝑥𝑖−1 , 𝑥𝑖 ]
as;
𝐹𝑖 (𝑥) = ((𝑥𝑖 − 𝑥)³/(6ℎ𝑖 ))𝑀𝑖−1 + ((𝑥 − 𝑥𝑖−1 )³/(6ℎ𝑖 ))𝑀𝑖 + ((𝑥𝑖 − 𝑥)/ℎ𝑖 )[𝑓𝑖−1 − (ℎ𝑖 ²/6)𝑀𝑖−1 ] + ((𝑥 − 𝑥𝑖−1 )/ℎ𝑖 )[𝑓𝑖 − (ℎ𝑖 ²/6)𝑀𝑖 ] (4.3.3.9)

Note that the spline second derivatives 𝑀𝑖−1 , 𝑀𝑖 are unknowns and are to be determined.
Setting i = i + 1 in (4.3.3.9), we get the spline valid in the interval [𝑥𝑖 , 𝑥𝑖+1 ] as
𝐹𝑖+1 (𝑥) = ((𝑥𝑖+1 − 𝑥)³/(6ℎ𝑖+1 ))𝑀𝑖 + ((𝑥 − 𝑥𝑖 )³/(6ℎ𝑖+1 ))𝑀𝑖+1 + ((𝑥𝑖+1 − 𝑥)/ℎ𝑖+1 )[𝑓𝑖 − (ℎ𝑖+1 ²/6)𝑀𝑖 ] + ((𝑥 − 𝑥𝑖 )/ℎ𝑖+1 )[𝑓𝑖+1 − (ℎ𝑖+1 ²/6)𝑀𝑖+1 ] (4.3.3.10)

𝑤ℎ𝑒𝑟𝑒 ℎ𝑖+1 = 𝑥𝑖+1 – 𝑥𝑖 .


Differentiating (4.3.3.9), we get
𝐹′𝑖 (𝑥) = (1/(6ℎ𝑖 ))[−3(𝑥𝑖 − 𝑥)²𝑀𝑖−1 + 3(𝑥 − 𝑥𝑖−1 )²𝑀𝑖 ] − (1/ℎ𝑖 )[𝑓𝑖−1 − (ℎ𝑖 ²/6)𝑀𝑖−1 ] + (1/ℎ𝑖 )[𝑓𝑖 − (ℎ𝑖 ²/6)𝑀𝑖 ] (4.3.3.11)

Equation (4.3.3.11) is valid in the interval [𝑥𝑖−1 , 𝑥𝑖 ]


Similarly differentiating (4.3.3.10), we get
𝐹′𝑖+1 (𝑥) = (1/(6ℎ𝑖+1 ))[−3(𝑥𝑖+1 − 𝑥)²𝑀𝑖 + 3(𝑥 − 𝑥𝑖 )²𝑀𝑖+1 ] − (1/ℎ𝑖+1 )[𝑓𝑖 − (ℎ𝑖+1 ²/6)𝑀𝑖 ] + (1/ℎ𝑖+1 )[𝑓𝑖+1 − (ℎ𝑖+1 ²/6)𝑀𝑖+1 ] (4.3.3.12)

Equation (4.3.3.12) is valid in the interval [𝑥𝑖 , 𝑥𝑖+1 ]


Now, we require that the derivative 𝐹′(𝑥) be continuous at 𝑥 = 𝑥𝑖 . Hence, the left hand and right hand
derivatives of 𝐹′(𝑥) 𝑎𝑡 𝑥 = 𝑥𝑖 must be equal, that is,
lim(𝜀→0) 𝐹′𝑖 (𝑥𝑖 − 𝜀) = lim(𝜀→0) 𝐹′𝑖+1 (𝑥𝑖 + 𝜀)

Using (4.3.3.11) and (4.3.3.12), we obtain


(ℎ𝑖 /2)𝑀𝑖 − (1/ℎ𝑖 )𝑓𝑖−1 + (ℎ𝑖 /6)𝑀𝑖−1 + (1/ℎ𝑖 )𝑓𝑖 − (ℎ𝑖 /6)𝑀𝑖 = −(ℎ𝑖+1 /2)𝑀𝑖 − (1/ℎ𝑖+1 )𝑓𝑖 + (ℎ𝑖+1 /6)𝑀𝑖 + (1/ℎ𝑖+1 )𝑓𝑖+1 − (ℎ𝑖+1 /6)𝑀𝑖+1
Simplifying,
(ℎ𝑖 /6)𝑀𝑖−1 + ((ℎ𝑖 + ℎ𝑖+1 )/3)𝑀𝑖 + (ℎ𝑖+1 /6)𝑀𝑖+1 = (1/ℎ𝑖+1 )(𝑓𝑖+1 − 𝑓𝑖 ) − (1/ℎ𝑖 )(𝑓𝑖 − 𝑓𝑖−1 ), 𝑖 = 1, 2, 3, … , 𝑛 − 1 (4.3.3.13)
This relation gives a system of n – 1 linear equations in n + 1 unknowns 𝑀0 , 𝑀1 , . . . , 𝑀𝑛 . The two additional
conditions required are the natural spline conditions 𝑀0 = 0 = 𝑀𝑛 . These equations are solved

for 𝑀1 , 𝑀2 , . . . , 𝑀𝑛−1 . Substituting these values in (4.3.3.9), we obtain the spline valid in the
interval [𝑥𝑖−1 , 𝑥𝑖 ]. If the derivative is required, we can find it from (4.3.3.11).
Equispaced data: When the data is equispaced, we have ℎ𝑖 = ℎ𝑖+1 = ℎ and 𝑥𝑖 = 𝑥0 + 𝑖ℎ. Then, the
spline in the interval [𝑥𝑖−1 , 𝑥𝑖 ], given in (4.3.3.9) and the relation between the second derivatives given in
(4.3.3.13) is simplified as;
𝐹𝑖 (𝑥) = (1/(6ℎ))[(𝑥𝑖 − 𝑥)³𝑀𝑖−1 + (𝑥 − 𝑥𝑖−1 )³𝑀𝑖 ] + ((𝑥𝑖 − 𝑥)/ℎ)[𝑓𝑖−1 − (ℎ²/6)𝑀𝑖−1 ] + ((𝑥 − 𝑥𝑖−1 )/ℎ)[𝑓𝑖 − (ℎ²/6)𝑀𝑖 ] (4.3.3.14)
𝑀𝑖−1 + 4𝑀𝑖 + 𝑀𝑖+1 = (6/ℎ²)(𝑓𝑖+1 − 2𝑓𝑖 + 𝑓𝑖−1 ), 𝑖 = 1, 2, 3, … , 𝑛 − 1 (4.3.3.15)

Example 4.3.3.1:
Obtain the cubic spline approximation for the following data
𝑥 0 1 2
𝑓(𝑥) −1 3 29

𝑤𝑖𝑡ℎ 𝑀0 = 0, 𝑀2 = 0. Hence, interpolate at 𝑥 = 0.5, 𝑎𝑛𝑑 1.5


Solution:
We have equispaced data with h = 1. From equation (4.3.3.15), we obtain;
𝑀𝑖−1 + 4𝑀𝑖 + 𝑀𝑖+1 = (6/(1)²)(𝑓𝑖+1 − 2𝑓𝑖 + 𝑓𝑖−1 ) = 6(𝑓𝑖+1 − 2𝑓𝑖 + 𝑓𝑖−1 )

For i = 1
𝑀0 + 4𝑀1 + 𝑀2 = 6(𝑓2 − 2𝑓1 + 𝑓0 )

Since 𝑀0 = 0, 𝑀2 = 0, we get
4𝑀1 = 6(29 − 2(3) + (−1)) = 132
𝑀1 = 33
Thus the spline is given by equation (4.3.3.14),
𝐹𝑖 (𝑥) = (1/(6ℎ))[(𝑥𝑖 − 𝑥)³𝑀𝑖−1 + (𝑥 − 𝑥𝑖−1 )³𝑀𝑖 ] + ((𝑥𝑖 − 𝑥)/ℎ)[𝑓𝑖−1 − (ℎ²/6)𝑀𝑖−1 ] + ((𝑥 − 𝑥𝑖−1 )/ℎ)[𝑓𝑖 − (ℎ²/6)𝑀𝑖 ]
We have the following splines for each intervals:
On [0, 1] ↠ here 𝑖 = 1, 𝑥𝑖−1 = 0, and 𝑥𝑖 = 1
𝐹(𝑥) = (1/(6(1)))[(𝑥1 − 𝑥)³𝑀0 + (𝑥 − 𝑥0 )³𝑀1 ] + ((𝑥1 − 𝑥)/1)[𝑓0 − ((1)²/6)𝑀0 ] + ((𝑥 − 𝑥0 )/1)[𝑓1 − ((1)²/6)𝑀1 ]
= (1/6)[(1 − 𝑥)³(0) + (𝑥 − 0)³(33)] + (1 − 𝑥)[−1 − 0] + (𝑥 − 0)[3 − 33/6]
= (33/6)𝑥³ + 𝑥 − 1 + 3𝑥 − (33/6)𝑥

𝐹(𝑥) = 5.5𝑥³ − 1.5𝑥 − 1
On [1, 2], we have ↠ here 𝑖 = 2, 𝑥𝑖−1 = 1, and 𝑥𝑖 = 2
𝐹(𝑥) = (1/(6(1)))[(𝑥2 − 𝑥)³𝑀1 + (𝑥 − 𝑥1 )³𝑀2 ] + ((𝑥2 − 𝑥)/1)[𝑓1 − ((1)²/6)𝑀1 ] + ((𝑥 − 𝑥1 )/1)[𝑓2 − ((1)²/6)𝑀2 ]
= (1/6)[(2 − 𝑥)³(33) + (𝑥 − 1)³(0)] + (2 − 𝑥)[3 − 33/6] + (𝑥 − 1)[29 − 0]
= (33/6)(2 − 𝑥)³ − 5 + (5/2)𝑥 + 29(𝑥 − 1)
𝐹(𝑥) = (11/2)(2 − 𝑥)³ + 31.5𝑥 − 34
Since 0.5 lies in the interval [0, 1], we obtain;
𝐹(0.5) = 5.5(0.5)³ − 1.5(0.5) − 1 = −1.0625
Since 1.5 lies in the interval [1, 2], we obtain
𝐹(1.5) = (11/2)(2 − 1.5)³ + 31.5(1.5) − 34 = 13.9375
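The whole procedure of this example — assemble system (4.3.3.15) for the interior second derivatives with the natural conditions 𝑀0 = 𝑀𝑛 = 0, solve it, and evaluate (4.3.3.14) on the interval containing 𝑥 — can be sketched in Python as follows. Function and variable names are my own, a plain Gaussian elimination is used since the systems here are tiny, and equispaced nodes are assumed:

```python
def natural_spline_eval(xs, fs, x):
    """Equispaced natural cubic spline: solve (4.3.3.15) for M_1..M_{n-1}
    (with M_0 = M_n = 0), then evaluate (4.3.3.14) on the right interval."""
    n = len(xs) - 1
    h = xs[1] - xs[0]
    M = [0.0] * (n + 1)              # natural spline end conditions: M[0] = M[n] = 0
    m = n - 1                        # number of interior unknowns
    a = [[0.0] * m for _ in range(m)]
    b = [0.0] * m
    for i in range(1, n):            # assemble the tridiagonal system (4.3.3.15)
        r = i - 1
        a[r][r] = 4.0
        if r > 0:
            a[r][r - 1] = 1.0
        if r < m - 1:
            a[r][r + 1] = 1.0
        b[r] = 6.0 / h**2 * (fs[i + 1] - 2 * fs[i] + fs[i - 1])
    for k in range(m - 1):           # forward elimination
        factor = a[k + 1][k] / a[k][k]
        for j in range(m):
            a[k + 1][j] -= factor * a[k][j]
        b[k + 1] -= factor * b[k]
    for k in range(m - 1, -1, -1):   # back substitution into M[1..n-1]
        s = b[k] - sum(a[k][j] * M[j + 1] for j in range(k + 1, m))
        M[k + 1] = s / a[k][k]
    i = min(max(int((x - xs[0]) // h) + 1, 1), n)   # interval [x_{i-1}, x_i]
    t0, t1 = xs[i] - x, x - xs[i - 1]
    return ((t0**3 * M[i - 1] + t1**3 * M[i]) / (6 * h)
            + t0 / h * (fs[i - 1] - h**2 / 6 * M[i - 1])
            + t1 / h * (fs[i] - h**2 / 6 * M[i]))

print(natural_spline_eval([0, 1, 2], [-1, 3, 29], 0.5))   # ≈ -1.0625
print(natural_spline_eval([0, 1, 2], [-1, 3, 29], 1.5))   # ≈ 13.9375
```

Run on the data of this example it reproduces 𝐹(0.5) = −1.0625 and 𝐹(1.5) = 13.9375 up to rounding.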

Example 4.3.3.2:
Obtain the cubic spline approximation for the following data;
𝑥 0 1 2 3
𝑓(𝑥) 1 2 33 244

𝑤𝑖𝑡ℎ 𝑀0 = 0, 𝑀3 = 0. Hence, interpolate at x = 2.5


Solution: Here we have equispaced data with ℎ = 1.
For 𝑖 = 1, we get
𝑀0 + 4𝑀1 + 𝑀2 = (6/(1)²)(𝑓2 − 2𝑓1 + 𝑓0 )
0 + 4𝑀1 + 𝑀2 = 6(33 − 2(2) + 1)
4𝑀1 + 𝑀2 = 180 (∗)
For 𝑖 = 2, we get
𝑀𝑖−1 + 4𝑀𝑖 + 𝑀𝑖+1 = (6/ℎ²)(𝑓𝑖+1 − 2𝑓𝑖 + 𝑓𝑖−1 )
𝑀1 + 4𝑀2 + 𝑀3 = (6/ℎ²)(𝑓3 − 2𝑓2 + 𝑓1 )
𝑀1 + 4𝑀2 + 0 = 6(244 − 2(33) + 2)

𝑀1 + 4𝑀2 = 1080 (∗∗)
Solving (*) and (**) simultaneously,
𝑀1 = −24, 𝑎𝑛𝑑 𝑀2 = 276
We have the following splines for each intervals;
On [0, 1] we have 𝑖 = 1
𝐹(𝑥) = (1/(6ℎ))[(𝑥𝑖 − 𝑥)³𝑀𝑖−1 + (𝑥 − 𝑥𝑖−1 )³𝑀𝑖 ] + ((𝑥𝑖 − 𝑥)/ℎ)[𝑓𝑖−1 − (ℎ²/6)𝑀𝑖−1 ] + ((𝑥 − 𝑥𝑖−1 )/ℎ)[𝑓𝑖 − (ℎ²/6)𝑀𝑖 ]
𝐹(𝑥) = (1/(6(1)))[(𝑥1 − 𝑥)³𝑀0 + (𝑥 − 𝑥0 )³𝑀1 ] + ((𝑥1 − 𝑥)/1)[𝑓0 − ((1)²/6)𝑀0 ] + ((𝑥 − 𝑥0 )/1)[𝑓1 − ((1)²/6)𝑀1 ]
= (1/6)[(1 − 𝑥)³(0) + (𝑥 − 0)³(−24)] + (1 − 𝑥)[1 − 0] + (𝑥 − 0)[2 − (−24)/6]
= −4𝑥³ + (1 − 𝑥) + 6𝑥
𝐹(𝑥) = −4𝑥³ + 5𝑥 + 1
On [1, 2] we have 𝑖 = 2
𝐹(𝑥) = (1/(6(1)))[(𝑥2 − 𝑥)³𝑀1 + (𝑥 − 𝑥1 )³𝑀2 ] + ((𝑥2 − 𝑥)/1)[𝑓1 − ((1)²/6)𝑀1 ] + ((𝑥 − 𝑥1 )/1)[𝑓2 − ((1)²/6)𝑀2 ]
= (1/6)[(2 − 𝑥)³(−24) + (𝑥 − 1)³(276)] + (2 − 𝑥)[2 − (−24)/6] + (𝑥 − 1)[33 − 276/6]
𝐹(𝑥) = −4(2 − 𝑥)³ + 46(𝑥 − 1)³ − 19𝑥 + 25
On [2, 3] we have 𝑖 = 3
𝐹(𝑥) = (1/(6(1)))[(𝑥3 − 𝑥)³𝑀2 + (𝑥 − 𝑥2 )³𝑀3 ] + ((𝑥3 − 𝑥)/1)[𝑓2 − ((1)²/6)𝑀2 ] + ((𝑥 − 𝑥2 )/1)[𝑓3 − ((1)²/6)𝑀3 ]
= (1/6)[(3 − 𝑥)³(276) + (𝑥 − 2)³(0)] + (3 − 𝑥)[33 − 276/6] + (𝑥 − 2)[244 − 0]
𝐹(𝑥) = 46(3 − 𝑥)³ + 13(𝑥 − 3) + 244𝑥 − 488
Since 2.5 is in the interval [2, 3], the estimate at 𝑥 = 2.5 is
𝐹(2.5) = 46(3 − 2.5)³ + 13(2.5 − 3) + 244(2.5) − 488 = 121.25

Exercise 4.3.3.1:
Obtain the cubic spline approximation valid in the interval [1, 2], for the function given in the
tabular form, under the natural cubic spline conditions f ″(0) = M(0) = 0, and f ″(3) = M(3) = 0.
𝑥 0 1 2 3
𝑓(𝑥) 1 4 10 8
Hence, interpolate at x = 1.5.

