[Figure: sets of interpolating points $(x_0, y_0)$ through $(x_5, y_5)$ plotted in the $xy$-plane.]
$$y = mx + c, \qquad (3.1)$$
where $m$ is the gradient of the line and $c$ is the $y$-intercept. In the above equation,
$$m = \frac{y_1 - y_0}{x_1 - x_0} \quad \text{and} \quad c = y_0 - mx_0,$$
which results in
$$y = \frac{y_1 - y_0}{x_1 - x_0}\,x + \frac{x_1 y_0 - x_0 y_1}{x_1 - x_0}.$$
The French mathematician Joseph-Louis Lagrange proposed rewriting the linear equation so that the two interpolated points, $(x_0, y_0)$ and $(x_1, y_1)$, are directly represented. With this in mind, the linear equation is rewritten as
$$P_1(x) = a_0(x - x_1) + a_1(x - x_0).$$
Applying this form at the two points gives $a_0 = \dfrac{y_0}{x_0 - x_1}$ and $a_1 = \dfrac{y_1}{x_1 - x_0}$, so that
$$P_1(x) = \frac{(x - x_1)}{x_0 - x_1}\,y_0 + \frac{(x - x_0)}{x_1 - x_0}\,y_1. \qquad (3.2)$$
The quadratic form of the Lagrange polynomial interpolates three points, $(x_0, y_0)$, $(x_1, y_1)$ and $(x_2, y_2)$. The polynomial has the form
$$P_2(x) = a_0(x - x_1)(x - x_2) + a_1(x - x_0)(x - x_2) + a_2(x - x_0)(x - x_1).$$
Applying this equation at $x = x_0$,
$$y_0 = a_0(x_0 - x_1)(x_0 - x_2) + a_1(x_0 - x_0)(x_0 - x_2) + a_2(x_0 - x_0)(x_0 - x_1),$$ or
$$a_0 = \frac{y_0}{(x_0 - x_1)(x_0 - x_2)}.$$
Similarly,
$$a_1 = \frac{y_1}{(x_1 - x_0)(x_1 - x_2)}, \qquad a_2 = \frac{y_2}{(x_2 - x_0)(x_2 - x_1)}.$$
Substituting these coefficients gives
$$P_2(x) = \frac{(x - x_1)(x - x_2)}{(x_0 - x_1)(x_0 - x_2)}\,y_0 + \frac{(x - x_0)(x - x_2)}{(x_1 - x_0)(x_1 - x_2)}\,y_1 + \frac{(x - x_0)(x - x_1)}{(x_2 - x_0)(x_2 - x_1)}\,y_2. \qquad (3.3)$$
$$L_i(x) = \prod_{\substack{k = 0 \\ k \ne i}}^{n} \frac{(x - x_k)}{(x_i - x_k)} = \frac{(x - x_0)(x - x_1)\cdots(x - x_{i-1})(x - x_{i+1})\cdots(x - x_n)}{(x_i - x_0)(x_i - x_1)\cdots(x_i - x_{i-1})(x_i - x_{i+1})\cdots(x_i - x_n)}. \qquad (3.4)$$
There are $n$ factors in both the numerator and denominator of Equation (3.4). The condition $k \ne i$ excludes the zero factor from the denominator, which would otherwise cause division by zero. It is obvious that $n = 1$ produces a linear curve, or a straight line, while $n = 2$ produces a quadratic curve, or a parabola.
3-3 Shaharuddin Salleh
Algorithm 3.1. Lagrange Method.
Given the interpolating points $(x_i, y_i)$ for $i = 0, 1, \dots, n$;
for $i = 0$ to $n$
    Evaluate $L_i(x) = \prod_{k=0,\, k \ne i}^{n} \dfrac{(x - x_k)}{(x_i - x_k)}$;
endfor
Evaluate $P_n(x) = y_0 L_0(x) + y_1 L_1(x) + \dots + y_n L_n(x)$;
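Algorithm 3.1 translates directly into code. The sketch below is a minimal Python rendering (the function name and list-based interface are my own choices, not from the text):

```python
def lagrange_interpolate(xs, ys, x):
    """Evaluate P_n(x) = y0*L0(x) + ... + yn*Ln(x) at the point x,
    where L_i(x) is the Lagrange operator of Equation (3.4)."""
    n = len(xs)
    total = 0.0
    for i in range(n):
        L = 1.0
        for k in range(n):
            if k != i:  # the condition k != i avoids division by zero
                L *= (x - xs[k]) / (xs[i] - xs[k])
        total += ys[i] * L
    return total

# Two points define a straight line (n = 1): through (0, 1) and (2, 5),
# the interpolant is y = 1 + 2x, so P(1) = 3.
print(lagrange_interpolate([0.0, 2.0], [1.0, 5.0], 1.0))  # 3.0
```

By construction the polynomial reproduces each interpolating point exactly, since $L_i(x_j) = 1$ when $i = j$ and $0$ otherwise.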
In the above equation, ai for i = 0,1,..., n are constants whose values are determined by applying the
equation at the given interpolated points.
There are several different methods for evaluating the Newton polynomials, including the divided-difference, forward-difference, backward-difference and central-difference methods. We discuss each of these methods in this chapter.
Divided-Difference Method
The divided-difference method determines the coefficients $a_i$ for $i = 0, 1, \dots, n$ in Equation (3.6) using the divided-difference constants, defined as follows:
$$d_{k,i} = \frac{d_{k-1,i+1} - d_{k-1,i}}{x_{k+i} - x_i}.$$
The general form of the linear equation in Equation (3.1) which interpolates $(x_0, y_0)$ and $(x_1, y_1)$ can also be expressed in terms of the divided-difference constants,
$$P_1(x) = d_{0,0} + d_{1,0}(x - x_0),$$
where
$$d_{0,0} = y_0 \quad \text{and} \quad d_{1,0} = \frac{y_1 - y_0}{x_1 - x_0}.$$
This gives
$$P_1(x) = d_{0,0} + d_{1,0}(x - x_0) = y_0 + \frac{y_1 - y_0}{x_1 - x_0}(x - x_0).$$
At the same time, the quadratic form of the Newton polynomial which interpolates the points $(x_0, y_0)$, $(x_1, y_1)$ and $(x_2, y_2)$ can now be written as
$$P_2(x) = d_{0,0} + d_{1,0}(x - x_0) + d_{2,0}(x - x_0)(x - x_1). \qquad (3.7)$$
In the above equation, $x_0$ and $x_1$ are the centers. Applying the quadratic equation to the three points, we obtain
$$d_{0,0} = y_0,$$
$$d_{1,0} = \frac{d_{0,1} - d_{0,0}}{x_1 - x_0} = \frac{y_1 - y_0}{x_1 - x_0},$$
$$d_{1,1} = \frac{d_{0,2} - d_{0,1}}{x_2 - x_1} = \frac{y_2 - y_1}{x_2 - x_1},$$
$$d_{2,0} = \frac{d_{1,1} - d_{1,0}}{x_2 - x_0} = \frac{\dfrac{y_2 - y_1}{x_2 - x_1} - \dfrac{y_1 - y_0}{x_1 - x_0}}{x_2 - x_0}.$$
Algorithm 3.2 summarizes the divided-difference approach. An example using this algorithm is
illustrated in Example 3.2.
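The divided-difference computation can be sketched in Python as follows (a minimal illustration; the function names and the in-place table update are my own choices, not the book's listing):

```python
def divided_differences(xs, ys):
    """Return the coefficients [d00, d10, ..., dn0] of the Newton polynomial,
    using d_{k,i} = (d_{k-1,i+1} - d_{k-1,i}) / (x_{i+k} - x_i)."""
    n = len(xs)
    row = list(ys)                  # level k = 0: d_{0,i} = y_i
    coeffs = [row[0]]
    for k in range(1, n):
        # ascending i keeps row[i+1] at level k-1 until it is consumed
        for i in range(n - k):
            row[i] = (row[i + 1] - row[i]) / (xs[i + k] - xs[i])
        coeffs.append(row[0])
    return coeffs

def newton_eval(xs, coeffs, x):
    """Evaluate d00 + d10(x-x0) + d20(x-x0)(x-x1) + ... by nested multiplication."""
    result = coeffs[-1]
    for k in range(len(coeffs) - 2, -1, -1):
        result = result * (x - xs[k]) + coeffs[k]
    return result
```

For instance, the data $x = [1.0, 1.2, 1.5, 2.0]$, $y = [2.5, 2.8, 2.7, 2.6]$ gives coefficients approximately $[2.5, 1.5, -3.667, 3.833]$, and evaluating at $x = 1.1$ yields about $2.702$.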
In deriving the Newton polynomial using the forward-difference method, let $h$ be the width of the $x$ subintervals. Uniform subintervals means that all the subintervals in $x$ have equal width $h$; in other words, $h = x_{i+1} - x_i$ for $i = 0, 1, \dots, n-1$.
The forward-difference formula is derived from the divided-difference equation. Consider the cubic form of the divided-difference equation from Equation (3.6), given as
$$P_3(x) = d_{0,0} + d_{1,0}(x - x_0) + d_{2,0}(x - x_0)(x - x_1) + d_{3,0}(x - x_0)(x - x_1)(x - x_2).$$
It can be proven that the general form of the forward-difference method for interpolating the points $(x_i, y_i)$ for $i = 0, 1, \dots, n$ can be extended from the cubic case above. The solution is given by
$$P_n(x) = y_0 + \frac{\Delta_{1,0}}{1!\,h}(x - x_0) + \frac{\Delta_{2,0}}{2!\,h^2}(x - x_0)(x - x_1) + \dots + \frac{\Delta_{n,0}}{n!\,h^n}(x - x_0)(x - x_1)\cdots(x - x_{n-1}). \qquad (3.8)$$
Definition 3.4. The backward-difference operator $\nabla_{k,i}$ is defined as the $k$th backward operator at $x_i$, or $\nabla_{k,i} = \nabla_{k-1,i} - \nabla_{k-1,i-1}$. The initial values are $\nabla_{0,i} = y_i$ for $i = 0, 1, \dots, n$.
The backward-difference method is also derived from the Newton divided-difference method. The method requires all the $x$ subintervals to have uniform width $h$. We discuss the case of the quadratic polynomial from the divided-difference method in deriving the formula for the backward-difference method. From Equation (3.7),
$$P_n(x) = f_n + r\nabla f_n + \frac{r(r+1)}{2!}\nabla^2 f_n + \dots + \frac{\nabla^n f_n}{n!}\prod_{i=0}^{n-1}(r + i), \qquad (3.9)$$
where $\nabla^k f_i = \nabla^{k-1} f_i - \nabla^{k-1} f_{i-1}$ is the backward-difference operator and $r = \dfrac{x - x_n}{h}$.
Algorithm 3.4 summarizes the steps in the backward-difference method for generating the
Newton polynomial.
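As a sketch (not Algorithm 3.4 itself, whose listing is not shown here), the backward-difference evaluation of Equation (3.9) might look like this in Python, assuming uniformly spaced data:

```python
def newton_backward(xs, ys, x):
    """Evaluate the Newton backward-difference polynomial of Equation (3.9):
    P_n(x) = f_n + r*(del f_n) + r(r+1)/2! * (del^2 f_n) + ..., r = (x - x_n)/h."""
    h = xs[1] - xs[0]               # assumes uniform spacing
    r = (x - xs[-1]) / h
    # bottom edge of the backward-difference table: f_n, del f_n, del^2 f_n, ...
    diffs, row = [ys[-1]], list(ys)
    while len(row) > 1:
        row = [row[i] - row[i - 1] for i in range(1, len(row))]
        diffs.append(row[-1])
    total, term = 0.0, 1.0          # term accumulates r(r+1)...(r+k-1)/k!
    for k, d in enumerate(diffs):
        total += term * d
        term *= (r + k) / (k + 1)
    return total
```

With the data used in this chapter's examples, $x = [1.0, 1.2, 1.4, 1.6]$ and $y = [2.5, 2.8, 2.4, 2.6]$, evaluating at $x = 1.1$ gives $2.81875 \approx 2.819$.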
$$e_i = y_i - p_i. \qquad (3.18)$$
The sum of the squares of the errors $e_i$ at the points $x = x_i$ for $i = 0, 1, \dots, m-1$ is expressed as an objective function $E$, as follows:
$$E = \sum_{i=0}^{m-1} e_i^2 = \sum_{i=0}^{m-1} (y_i - p_i)^2. \qquad (3.19)$$
[Figure 3.3: the interpolated points $(x_i, y_i)$ and the approximated points $(x_i, p_i)$ for $i = 0, \dots, 5$, with the error $e_i$ marked between each pair.]
Figure 3.3 shows a case of $m = 6$ points with the interpolated points $(x_i, y_i)$ drawn as white squares and the approximated points $(x_i, p_i)$ as dark squares. At each point $x_i$, the error $e_i = y_i - p_i$ is computed. The objective function in Equation (3.19) is obtained by summing the squares of all these errors.
The curves to be fitted in the least-squares approximations are normally low-degree polynomials, such as the linear form $p(x) = a_0 + a_1 x$ or the quadratic form $p(x) = a_0 + a_1 x + a_2 x^2$.
In linear approximation, a straight-line equation is used to approximate the points. The objective function becomes
$$E = \sum_{i=0}^{m-1} (y_i - (a_0 + a_1 x_i))^2.$$
Our objective here is to find the values of $a_0$ and $a_1$ by minimizing the sum of the squares of the errors. The minimization requires setting $\dfrac{\partial E}{\partial a_0} = 0$ and $\dfrac{\partial E}{\partial a_1} = 0$, which produces two linear equations, sufficient to solve for $a_0$ and $a_1$. The partial derivatives are obtained as follows:
$$\frac{\partial E}{\partial a_0} = -2\sum_{i=0}^{m-1} (y_i - (a_0 + a_1 x_i)),$$
$$\frac{\partial E}{\partial a_1} = -2\sum_{i=0}^{m-1} x_i (y_i - (a_0 + a_1 x_i)).$$
Setting $\dfrac{\partial E}{\partial a_0} = 0$, we obtain the first linear equation
$$\sum_{i=0}^{m-1} (y_i - (a_0 + a_1 x_i)) = 0,$$
$$\sum_{i=0}^{m-1} y_i - a_0 \sum_{i=0}^{m-1} 1 - a_1 \sum_{i=0}^{m-1} x_i = 0,$$
$$a_0 \sum_{i=0}^{m-1} 1 + a_1 \sum_{i=0}^{m-1} x_i = \sum_{i=0}^{m-1} y_i.$$
The second linear equation is obtained by setting $\dfrac{\partial E}{\partial a_1} = 0$:
$$\sum_{i=0}^{m-1} x_i (y_i - (a_0 + a_1 x_i)) = 0,$$
$$a_0 \sum_{i=0}^{m-1} x_i + a_1 \sum_{i=0}^{m-1} x_i^2 = \sum_{i=0}^{m-1} x_i y_i.$$
In matrix form, the two linear equations are
$$\begin{bmatrix} \sum_{i=0}^{m-1} 1 & \sum_{i=0}^{m-1} x_i \\[4pt] \sum_{i=0}^{m-1} x_i & \sum_{i=0}^{m-1} x_i^2 \end{bmatrix} \begin{bmatrix} a_0 \\ a_1 \end{bmatrix} = \begin{bmatrix} \sum_{i=0}^{m-1} y_i \\[4pt] \sum_{i=0}^{m-1} y_i x_i \end{bmatrix}. \qquad (3.21)$$
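Since Equation (3.21) is only a $2 \times 2$ system, it can be solved in closed form. A minimal Python sketch (the function name and Cramer's-rule solution are my own choices):

```python
def linear_least_squares(xs, ys):
    """Solve the normal equations of Equation (3.21) for (a0, a1) by Cramer's rule."""
    m = len(xs)
    sx = sum(xs)                               # sum of x_i
    sxx = sum(x * x for x in xs)               # sum of x_i^2
    sy = sum(ys)                               # sum of y_i
    sxy = sum(x * y for x, y in zip(xs, ys))   # sum of x_i * y_i
    det = m * sxx - sx * sx
    a0 = (sy * sxx - sx * sxy) / det
    a1 = (m * sxy - sx * sy) / det
    return a0, a1

# Points lying exactly on y = 1 + 2x are recovered exactly:
print(linear_least_squares([0.0, 1.0, 2.0, 3.0], [1.0, 3.0, 5.0, 7.0]))  # (1.0, 2.0)
```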
In quadratic approximation, the objective function is
$$E = \sum_{i=0}^{m-1} (y_i - (a_0 + a_1 x_i + a_2 x_i^2))^2. \qquad (3.22)$$
The approximation is obtained by minimizing the sum of the squares of the errors through $\dfrac{\partial E}{\partial a_0} = 0$, $\dfrac{\partial E}{\partial a_1} = 0$ and $\dfrac{\partial E}{\partial a_2} = 0$. The first equation is obtained through the following steps:
$$\frac{\partial E}{\partial a_0} = 0: \quad \sum_{i=0}^{m-1} (y_i - (a_0 + a_1 x_i + a_2 x_i^2)) = 0,$$
$$a_0 \sum_{i=0}^{m-1} 1 + a_1 \sum_{i=0}^{m-1} x_i + a_2 \sum_{i=0}^{m-1} x_i^2 = \sum_{i=0}^{m-1} y_i.$$
$$\frac{\partial E}{\partial a_1} = -2\sum_{i=0}^{m-1} x_i (y_i - (a_0 + a_1 x_i + a_2 x_i^2)),$$
$$\frac{\partial E}{\partial a_1} = 0: \quad \sum_{i=0}^{m-1} x_i (y_i - (a_0 + a_1 x_i + a_2 x_i^2)) = 0,$$
$$a_0 \sum_{i=0}^{m-1} x_i + a_1 \sum_{i=0}^{m-1} x_i^2 + a_2 \sum_{i=0}^{m-1} x_i^3 = \sum_{i=0}^{m-1} x_i y_i.$$
$$\frac{\partial E}{\partial a_2} = -2\sum_{i=0}^{m-1} x_i^2 (y_i - (a_0 + a_1 x_i + a_2 x_i^2)),$$
$$\frac{\partial E}{\partial a_2} = 0: \quad \sum_{i=0}^{m-1} x_i^2 (y_i - (a_0 + a_1 x_i + a_2 x_i^2)) = 0,$$
$$a_0 \sum_{i=0}^{m-1} x_i^2 + a_1 \sum_{i=0}^{m-1} x_i^3 + a_2 \sum_{i=0}^{m-1} x_i^4 = \sum_{i=0}^{m-1} x_i^2 y_i.$$
In matrix form, the three linear equations are
$$\begin{bmatrix} \sum 1 & \sum x_i & \sum x_i^2 \\ \sum x_i & \sum x_i^2 & \sum x_i^3 \\ \sum x_i^2 & \sum x_i^3 & \sum x_i^4 \end{bmatrix} \begin{bmatrix} a_0 \\ a_1 \\ a_2 \end{bmatrix} = \begin{bmatrix} \sum y_i \\ \sum y_i x_i \\ \sum y_i x_i^2 \end{bmatrix}, \qquad (3.23)$$
where each sum runs over $i = 0, 1, \dots, m-1$.
Equations (3.21) and (3.23) can be generalized to approximate a polynomial of degree $n$. The least-squares method produces the following system of $(n+1) \times (n+1)$ linear equations:
$$\begin{bmatrix} \sum 1 & \sum x_i & \cdots & \sum x_i^{n-1} & \sum x_i^n \\ \sum x_i & \sum x_i^2 & \cdots & \sum x_i^n & \sum x_i^{n+1} \\ \vdots & \vdots & \ddots & \vdots & \vdots \\ \sum x_i^{n-1} & \sum x_i^n & \cdots & \sum x_i^{2n-2} & \sum x_i^{2n-1} \\ \sum x_i^n & \sum x_i^{n+1} & \cdots & \sum x_i^{2n-1} & \sum x_i^{2n} \end{bmatrix} \begin{bmatrix} a_0 \\ a_1 \\ \vdots \\ a_{n-1} \\ a_n \end{bmatrix} = \begin{bmatrix} \sum y_i \\ \sum y_i x_i \\ \vdots \\ \sum y_i x_i^{n-1} \\ \sum y_i x_i^n \end{bmatrix}, \qquad (3.24)$$
where each sum runs over $i = 0, 1, \dots, m-1$.
Equation (3.24) is a generalization for fitting a polynomial of degree $n$ to a set of $m$ points using the least-squares approximation method. Letting $s_i = \sum_{k=0}^{m-1} x_k^i$ and $v_i = \sum_{k=0}^{m-1} y_k x_k^i$, this equation can be rewritten as
$$\begin{bmatrix} s_0 & s_1 & \cdots & s_{n-1} & s_n \\ s_1 & s_2 & \cdots & s_n & s_{n+1} \\ \vdots & \vdots & \ddots & \vdots & \vdots \\ s_{n-1} & s_n & \cdots & s_{2n-2} & s_{2n-1} \\ s_n & s_{n+1} & \cdots & s_{2n-1} & s_{2n} \end{bmatrix} \begin{bmatrix} a_0 \\ a_1 \\ \vdots \\ a_{n-1} \\ a_n \end{bmatrix} = \begin{bmatrix} v_0 \\ v_1 \\ \vdots \\ v_{n-1} \\ v_n \end{bmatrix}. \qquad (3.25)$$
As a word of caution, since the equation involves the zeroth power, a value of $x = 0$ produces $0^0$, which should be treated as 1.
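The system (3.25) can be assembled and solved numerically. Below is a sketch in Python (my own implementation, using Gaussian elimination with partial pivoting rather than any method prescribed by the text); note that `x ** 0` evaluates to `1.0` in Python even for `x = 0`, matching the $0^0 = 1$ convention above:

```python
def polyfit_least_squares(xs, ys, n):
    """Fit a degree-n polynomial to m points by solving Equation (3.25)."""
    # s_j = sum of x_k^j, v_j = sum of y_k * x_k^j
    s = [sum(x ** j for x in xs) for j in range(2 * n + 1)]
    v = [sum(y * x ** j for x, y in zip(xs, ys)) for j in range(n + 1)]
    # augmented matrix [A | v] with A[i][j] = s_{i+j}
    A = [[s[i + j] for j in range(n + 1)] + [v[i]] for i in range(n + 1)]
    for col in range(n + 1):                      # forward elimination
        piv = max(range(col, n + 1), key=lambda r: abs(A[r][col]))
        A[col], A[piv] = A[piv], A[col]
        for r in range(col + 1, n + 1):
            f = A[r][col] / A[col][col]
            for c in range(col, n + 2):
                A[r][c] -= f * A[col][c]
    a = [0.0] * (n + 1)                           # back substitution
    for i in range(n, -1, -1):
        a[i] = (A[i][n + 1] - sum(A[i][j] * a[j] for j in range(i + 1, n + 1))) / A[i][i]
    return a
```

Fitting a degree-2 polynomial to points sampled exactly from $y = x^2$ recovers $a \approx [0, 0, 1]$, as expected.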
Formula
$$P_n(x) = y_0 L_0(x) + y_1 L_1(x) + \dots + y_n L_n(x), \quad \text{where } L_i(x) = \prod_{\substack{j = 0 \\ j \ne i}}^{n} \frac{(x - x_j)}{(x_i - x_j)} \text{ is the Lagrange operator.}$$
Example
i            0     1     2     3
xi           1.0   1.2   1.5   2.0
yi = f(xi)   2.5   2.8   2.7   2.6
Solution
Therefore,
$$P_3(x) = y_0 L_0(x) + y_1 L_1(x) + y_2 L_2(x) + y_3 L_3(x)$$
Formula:
$$P(x) = P_n(x) = f_0^{[0]} + f_0^{[1]}(x - x_0) + f_0^{[2]}(x - x_0)(x - x_1) + \dots + f_0^{[n]} \prod_{i=0}^{n-1}(x - x_i),$$
where $f_i^{[k]} = \dfrac{f_{i+1}^{[k-1]} - f_i^{[k-1]}}{x_{i+k} - x_i}$ is the divided-difference operator.
Example:
i            0     1     2     3
xi           1.0   1.2   1.5   2.0
yi = f(xi)   2.5   2.8   2.7   2.6
Solution:
i   xi    fi[0] = f(xi)   fi[1]    fi[2]    fi[3]
0   1.0   2.5             1.500    -3.667   3.833
1   1.2   2.8             -0.333   0.167
2   1.5   2.7             -0.200
3   2.0   2.6
Therefore,
$$P(x) = P_3(x) = f_0^{[0]} + f_0^{[1]}(x - x_0) + f_0^{[2]}(x - x_0)(x - x_1) + f_0^{[3]}(x - x_0)(x - x_1)(x - x_2)$$
$$P(x) = 2.5 + 1.5(x - 1.0) + (-3.667)(x - 1.0)(x - 1.2) + 3.833(x - 1.0)(x - 1.2)(x - 1.5)$$
$$P(1.1) = 2.5 + 1.5(1.1 - 1.0) + (-3.667)(1.1 - 1.0)(1.1 - 1.2) + 3.833(1.1 - 1.0)(1.1 - 1.2)(1.1 - 1.5) = 2.702$$
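The arithmetic in the last line can be checked numerically using the table's rounded coefficients:

```python
x = 1.1
# Newton polynomial with the divided-difference coefficients from the table
P = 2.5 + 1.5*(x - 1.0) + (-3.667)*(x - 1.0)*(x - 1.2) + 3.833*(x - 1.0)*(x - 1.2)*(x - 1.5)
print(round(P, 3))  # 2.702
```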
Formula:
$$P_n(x) = f_0 + r\Delta f_0 + \frac{r(r-1)}{2!}\Delta^2 f_0 + \dots + \frac{\Delta^n f_0}{n!}\prod_{i=0}^{n-1}(r - i),$$
where $\Delta^k f_i = \Delta^{k-1} f_{i+1} - \Delta^{k-1} f_i$ is the forward-difference operator and $r = \dfrac{x - x_0}{h}$.
Example:
i            0     1     2     3
xi           1.0   1.2   1.4   1.6
yi = f(xi)   2.5   2.8   2.4   2.6
Solution:
i   xi    Δ⁰fi = f(xi)   Δ¹fi   Δ²fi   Δ³fi
0   1.0   2.5            0.3    -0.7   1.3
1   1.2   2.8            -0.4   0.6
2   1.4   2.4            0.2
3   1.6   2.6
Therefore,
$$P(r) = P_3(x) = f_0 + r\Delta f_0 + \frac{r(r-1)}{2!}\Delta^2 f_0 + \frac{r(r-1)(r-2)}{3!}\Delta^3 f_0, \quad \text{where } r = \frac{x - x_0}{h} = \frac{x - 1}{0.2}$$
$$P(r) = 2.5 + r(0.3) + \frac{r(r-1)}{2!}(-0.7) + \frac{r(r-1)(r-2)}{3!}(1.3)$$
For $x = 1.1$, $r = \dfrac{1.1 - 1}{0.2} = 0.5$.
$$\text{We get } P(1.1) = 2.5 + (0.5)(0.3) + \frac{(0.5)(0.5 - 1)}{2!}(-0.7) + \frac{(0.5)(0.5 - 1)(0.5 - 2)}{3!}(1.3) = 2.819$$
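Again, the computation checks out numerically:

```python
r = 0.5  # r = (1.1 - 1.0) / 0.2 for x = 1.1
P = 2.5 + r*0.3 + r*(r - 1)/2 * (-0.7) + r*(r - 1)*(r - 2)/6 * 1.3
# P = 2.81875, which rounds to 2.819 as in the text
```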
Formula:
$$P_n(x) = f_n + r\nabla f_n + \frac{r(r+1)}{2!}\nabla^2 f_n + \dots + \frac{\nabla^n f_n}{n!}\prod_{i=0}^{n-1}(r + i),$$
where $\nabla^k f_i = \nabla^{k-1} f_i - \nabla^{k-1} f_{i-1}$ is the backward-difference operator and $r = \dfrac{x - x_n}{h}$.
Example:
i            0     1     2     3
xi           1.0   1.2   1.4   1.6
yi = f(xi)   2.5   2.8   2.4   2.6
Solution:
i   xi    ∇⁰fi = f(xi)   ∇¹fi   ∇²fi   ∇³fi
0   1.0   2.5
1   1.2   2.8            0.3
2   1.4   2.4            -0.4   -0.7
3   1.6   2.6            0.2    0.6    1.3
$$P(r) = P_3(x) = f_3 + r\nabla f_3 + \frac{r(r+1)}{2!}\nabla^2 f_3 + \frac{r(r+1)(r+2)}{3!}\nabla^3 f_3, \quad \text{where } r = \frac{x - x_3}{h} = \frac{x - 1.6}{0.2}.$$
$$P(r) = P_3(x) = 2.6 + r(0.2) + \frac{r(r+1)}{2!}(0.6) + \frac{r(r+1)(r+2)}{3!}(1.3).$$
For $x = 1.1$, $r = \dfrac{1.1 - 1.6}{0.2} = -2.5$. Therefore,
$$P(x = 1.1) = P(-2.5) = 2.6 + (-2.5)(0.2) + \frac{(-2.5)(-2.5 + 1)}{2!}(0.6) + \frac{(-2.5)(-2.5 + 1)(-2.5 + 2)}{3!}(1.3) = 2.819$$
Example: Find a polynomial of degree 2, P2 ( x) , that approximates the points (1,4), (1.2,5), (1.4,1),
(1.7,-1) and (2,3). Hence, evaluate P2 (1.5) .
Solution:
$$P_2(x) = a_0 + a_1 x + a_2 x^2.$$
The following system of linear equations (SLE) needs to be solved:
$$\begin{bmatrix} S_0 & S_1 & S_2 \\ S_1 & S_2 & S_3 \\ S_2 & S_3 & S_4 \end{bmatrix} \begin{bmatrix} a_0 \\ a_1 \\ a_2 \end{bmatrix} = \begin{bmatrix} v_0 \\ v_1 \\ v_2 \end{bmatrix}$$
$$S_j = \sum_{i=0}^{m-1} x_i^j: \quad S_0 = x_0^0 + x_1^0 + x_2^0 + x_3^0 + x_4^0 = 1 + 1 + 1 + 1 + 1 = 5$$
$$v_k = \sum_{i=0}^{m-1} y_i x_i^k: \quad v_0 = y_0 x_0^0 + y_1 x_1^0 + y_2 x_2^0 + y_3 x_3^0 + y_4 x_4^0 = 4 + 5 + 1 - 1 + 3 = 12.000$$
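The remaining sums $S_1, \dots, S_4$ and $v_1, v_2$ follow the same pattern. A short Python sketch (my own code, assembling the sums only) confirms $S_0$ and $v_0$:

```python
# Points from the example; S_j and v_k as defined above.
pts = [(1.0, 4.0), (1.2, 5.0), (1.4, 1.0), (1.7, -1.0), (2.0, 3.0)]
S = [sum(x ** j for x, _ in pts) for j in range(5)]      # S_0 ... S_4
v = [sum(y * x ** k for x, y in pts) for k in range(3)]  # v_0, v_1, v_2
print(S[0], v[0])  # 5.0 12.0
```

Solving the resulting $3 \times 3$ system then gives $a_0$, $a_1$ and $a_2$, from which $P_2(1.5)$ can be evaluated.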