
Chapter 3

Interpolation and Approximation

3.1 CURVE FITTING


The body of a car is designed in such a way that it possesses good aerodynamic features. This is important in order for the car to be comfortable, energy-efficient, cost-effective and attractive. To achieve these objectives, the body surface of the car is made smooth. The normal techniques for designing the body of a car involve computer-aided design tools. The body is constructed by fitting and blending a set of patches from B-spline or Bézier surfaces that approximate a set of points. B-spline and Bézier curves are two of the two-dimensional curves widely used in curve and surface fitting.
In general, curve and surface fitting is useful in many applications, notably in the design of body surfaces such as cars, aircraft, ships, glasses, pipes and vases. A patch in the surface is the three-dimensional extension of the B-spline curve, obtained from an approximation technique that will be discussed later in the chapter.
Curve fitting is a generic term for constructing a curve from a given set of points. This objective can
be achieved in two ways, through interpolation or approximation. Interpolation refers to a curve that
passes through all the given points, while approximation is the case when the curve does not pass
through one or more of the given points. The curve obtained from interpolation or approximation is
one that best represents all the points.
Figure 3.1 shows two different curves that can be produced from a set of points ( xi , yi ) for
i = 0,1,...,5 . The curve at the top is generated through interpolation as it passes through all the points.
The bottom curve is an approximation as it misses several points.
In this chapter, we will discuss several common interpolation and approximation methods. We
will concentrate on the two-dimensional aspect of these methods which provides a strong foundation
for three or higher dimensional problems. The topics include the Lagrange, Newton and cubic spline
methods in interpolation, and the least-square method in approximation. In discussing the interpolation
and approximation methods, the interpolating points are given as ( xi , yi ) , for i = 0,1,..., n . There are
n + 1 points given. The exact values at xi are yi = f ( xi ) while their interpolated values are denoted as
P ( xi ) .

3-1 Shaharuddin Salleh



Figure 3.1. Interpolation (top) and approximation (bottom).

3.2 LAGRANGE INTERPOLATION


An interpolation on two points, ( x0 , y0 ) and ( x1 , y1 ) , results in a linear equation, or a straight line. The
standard form of a linear equation is given by

y = mx + c , (3.1)

where m is the gradient of the line and c is the y intercept. In the above equation,

$$m = \frac{y_1 - y_0}{x_1 - x_0} \quad\text{and}\quad c = y_0 - m x_0 ,$$

which results in

$$y = \frac{y_1 - y_0}{x_1 - x_0}\, x + \frac{x_1 y_0 - x_0 y_1}{x_1 - x_0} .$$

French mathematician, Joseph Louis Lagrange, proposed to rewrite the linear equation so that the
two interpolated points, ( x0 , y0 ) and ( x1 , y1 ) , are directly represented. With this in mind, the linear
equation is rewritten as

$$P_1(x) = a_0 (x - x_1) + a_1 (x - x_0) ,$$



where $a_0$ and $a_1$ are constants. The points $x_0$ and $x_1$ in the factors of the above equation are called the centers. Applying the equation at $(x_0, y_0)$, we obtain $y_0 = a_0 (x_0 - x_1) + a_1 (x_0 - x_0)$, or $a_0 = \frac{y_0}{x_0 - x_1}$. At $(x_1, y_1)$, we get $y_1 = a_0 (x_1 - x_1) + a_1 (x_1 - x_0)$, or $a_1 = \frac{y_1}{x_1 - x_0}$. Therefore, the linear equation becomes

$$P_1(x) = \frac{x - x_1}{x_0 - x_1}\, y_0 + \frac{x - x_0}{x_1 - x_0}\, y_1 . \quad (3.2)$$
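Equation (3.2) is easy to verify numerically. Below is a minimal Python sketch (the function name is ours, not from the text) that evaluates the degree-1 Lagrange polynomial through two points:

```python
def linear_interpolate(p0, p1, x):
    """Equation (3.2): the degree-1 Lagrange polynomial through p0 and p1."""
    (x0, y0), (x1, y1) = p0, p1
    return y0 * (x - x1) / (x0 - x1) + y1 * (x - x0) / (x1 - x0)

# Interpolating between (1, 2) and (3, 6) at x = 2 gives the midpoint value.
print(linear_interpolate((1.0, 2.0), (3.0, 6.0), 2.0))  # 4.0
```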

The quadratic form of the Lagrange polynomial interpolates three points, $(x_0, y_0)$, $(x_1, y_1)$ and $(x_2, y_2)$. The polynomial has the form of

$$P_2(x) = a_0 (x - x_1)(x - x_2) + a_1 (x - x_0)(x - x_2) + a_2 (x - x_0)(x - x_1) ,$$

with centers at $x_0$, $x_1$ and $x_2$. At $(x_0, y_0)$,

$$y_0 = a_0 (x_0 - x_1)(x_0 - x_2) + a_1 (x_0 - x_0)(x_0 - x_2) + a_2 (x_0 - x_0)(x_0 - x_1) , \quad\text{or}\quad a_0 = \frac{y_0}{(x_0 - x_1)(x_0 - x_2)} .$$

Similarly, applying the equation at $(x_1, y_1)$ and $(x_2, y_2)$ yields

$$a_1 = \frac{y_1}{(x_1 - x_0)(x_1 - x_2)} , \qquad a_2 = \frac{y_2}{(x_2 - x_0)(x_2 - x_1)} .$$

This produces the quadratic Lagrange polynomial, given by

$$P_2(x) = \frac{(x - x_1)(x - x_2)}{(x_0 - x_1)(x_0 - x_2)}\, y_0 + \frac{(x - x_0)(x - x_2)}{(x_1 - x_0)(x_1 - x_2)}\, y_1 + \frac{(x - x_0)(x - x_1)}{(x_2 - x_0)(x_2 - x_1)}\, y_2 . \quad (3.3)$$

Definition 3.1. The Lagrange operator $L_i(x)$ for $(x_i, y_i)$, where $i = 0, 1, \ldots, n$, is defined as

$$L_i(x) = \prod_{\substack{k = 0 \\ k \neq i}}^{n} \frac{(x - x_k)}{(x_i - x_k)} = \frac{(x - x_0)(x - x_1) \cdots (x - x_{i-1})(x - x_{i+1}) \cdots (x - x_n)}{(x_i - x_0)(x_i - x_1) \cdots (x_i - x_{i-1})(x_i - x_{i+1}) \cdots (x_i - x_n)} . \quad (3.4)$$

In general, a Lagrange polynomial of degree $n$ is produced from an interpolation over a set of points $(x_i, y_i)$ for $i = 0, 1, \ldots, n$, as follows:

$$P_n(x) = y_0 L_0(x) + y_1 L_1(x) + \ldots + y_n L_n(x) . \quad (3.5)$$

There are $n$ factors in both the numerator and denominator of Equation (3.4). The condition $k \neq i$ prevents a zero value in the denominator, which would cause a fatal division error. Clearly, $n = 1$ produces a linear curve, or straight line, while $n = 2$ produces a quadratic curve, or parabola.
Algorithm 3.1. Lagrange Method.
Given the interpolating points $(x_i, y_i)$ for $i = 0, 1, \ldots, n$;
for $i = 0$ to $n$
    Evaluate $L_i(x) = \prod_{k=0,\, k \neq i}^{n} \frac{(x - x_k)}{(x_i - x_k)}$;
endfor
Evaluate $P_n(x) = y_0 L_0(x) + y_1 L_1(x) + \ldots + y_n L_n(x)$;
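Algorithm 3.1 maps directly onto code. The following is a minimal Python sketch (function and variable names are ours, not from the text) that evaluates $P_n(x)$ by accumulating the terms $y_i L_i(x)$:

```python
def lagrange_interpolate(xs, ys, x):
    """Evaluate the Lagrange polynomial through (xs[i], ys[i]) at x (Algorithm 3.1)."""
    n = len(xs)
    total = 0.0
    for i in range(n):
        # Lagrange operator L_i(x): product over all k != i, as in Equation (3.4).
        L_i = 1.0
        for k in range(n):
            if k != i:
                L_i *= (x - xs[k]) / (xs[i] - xs[k])
        total += ys[i] * L_i
    return total
```

Applied to the four points of Fast Example 1 later in the chapter, `lagrange_interpolate([1.0, 1.2, 1.5, 2.0], [2.5, 2.8, 2.7, 2.6], 1.1)` returns approximately 2.702, in agreement with the hand calculation.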

3.3 NEWTON INTERPOLATIONS


The Lagrange method has a drawback: the amount of calculation depends heavily on the number of interpolating points. A large number of points requires very tedious calculations on the Lagrange operators, as each of these functions has the same degree as the interpolating polynomial.
A slightly simpler alternative to the Lagrange method is the family of Newton methods, which work with polynomials in Newton form. The Newton polynomial has the following general form:

$$P_n(x) = a_0 + a_1 (x - x_0) + a_2 (x - x_0)(x - x_1) + \ldots + a_n (x - x_0)(x - x_1) \cdots (x - x_{n-1}) . \quad (3.6)$$

In the above equation, $a_i$ for $i = 0, 1, \ldots, n$ are constants whose values are determined by applying the equation at the given interpolating points.
There are several different methods for evaluating the Newton polynomials, including the divided-difference, forward-difference, backward-difference and central-difference methods. We discuss each of these methods in this chapter.

Divided-Difference Method
The divided-difference method determines the coefficients $a_i$ for $i = 0, 1, \ldots, n$ in Equation (3.6) using the divided-difference constants, defined as follows:

Definition 3.2. The divided-difference constant $d_{k,i}$ is defined as the $k$th divided difference of the function $y = f(x)$ at $x_i$, where

$$d_{k,i} = \frac{d_{k-1,i+1} - d_{k-1,i}}{x_{i+k} - x_i} , \qquad k = 1, 2, \ldots, n .$$

The initial values are $d_{0,i} = y_i$ for $i = 0, 1, \ldots, n$.

The general form of the linear equation in Equation (3.1) which interpolates $(x_0, y_0)$ and $(x_1, y_1)$ can also be expressed in terms of divided-difference constants,

$$P_1(x) = d_{0,0} + d_{1,0} (x - x_0) ,$$



where $x_0$ is the center, and $d_{0,0}$ and $d_{1,0}$ are special constants called the zeroth and first divided differences at $x_0$, respectively. Applying the linear equation at the two points, we obtain

$$d_{0,0} = y_0 \quad\text{and}\quad d_{1,0} = \frac{y_1 - y_0}{x_1 - x_0} .$$

This gives $P_1(x) = d_{0,0} + d_{1,0}(x - x_0) = y_0 + \frac{y_1 - y_0}{x_1 - x_0}(x - x_0)$.
At the same time, the quadratic form of the Newton polynomial which interpolates the points $(x_0, y_0)$, $(x_1, y_1)$ and $(x_2, y_2)$ can now be written as

$$P_2(x) = d_{0,0} + d_{1,0}(x - x_0) + d_{2,0}(x - x_0)(x - x_1) .$$

In the above equation, $x_0$ and $x_1$ are the centers. Applying the quadratic equation at the three points, we obtain

$$d_{0,0} = y_0 ,$$
$$d_{1,0} = \frac{d_{0,1} - d_{0,0}}{x_1 - x_0} = \frac{y_1 - y_0}{x_1 - x_0} ,$$
$$d_{1,1} = \frac{d_{0,2} - d_{0,1}}{x_2 - x_1} = \frac{y_2 - y_1}{x_2 - x_1} ,$$
$$d_{2,0} = \frac{d_{1,1} - d_{1,0}}{x_2 - x_0} = \frac{\dfrac{y_2 - y_1}{x_2 - x_1} - \dfrac{y_1 - y_0}{x_1 - x_0}}{x_2 - x_0} .$$

In general, the divided-difference method for interpolating $(x_i, y_i)$ for $i = 0, 1, \ldots, n$ produces a Newton polynomial of degree $n$, given by

$$P_n(x) = d_{0,0} + d_{1,0}(x - x_0) + d_{2,0}(x - x_0)(x - x_1) + \ldots + d_{n,0}(x - x_0)(x - x_1) \cdots (x - x_{n-1}) . \quad (3.7)$$

Algorithm 3.2 summarizes the divided-difference approach. An example using this algorithm is
illustrated in Example 3.2.

Algorithm 3.2. Newton's Divided-Difference Method.

Given the interpolating points $(x_i, y_i)$ for $i = 0, 1, \ldots, n$;
Set $d_{0,i} = y_i$, for $i = 0, 1, \ldots, n$;
Evaluate the divided-difference constants:
for $k = 1$ to $n$
    for $i = 0$ to $n - k$
        Compute $d_{k,i} = \frac{d_{k-1,i+1} - d_{k-1,i}}{x_{i+k} - x_i}$;
    endfor
endfor
Get $P_n(x)$ using Equation (3.7);
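The divided-difference approach can be sketched in Python as follows (a minimal implementation with our own names; the list `d[k]` stores the $k$th divided differences):

```python
def newton_divided_difference(xs, ys, x):
    """Evaluate the Newton divided-difference polynomial through (xs[i], ys[i]) at x."""
    n = len(xs) - 1
    # d[k][i] holds the k-th divided difference at x_i; row 0 is just y_i.
    d = [list(ys)]
    for k in range(1, n + 1):
        prev = d[k - 1]
        d.append([(prev[i + 1] - prev[i]) / (xs[i + k] - xs[i])
                  for i in range(n - k + 1)])
    # Accumulate P_n(x) = d_00 + d_10 (x - x0) + ... as in Equation (3.7).
    result, factor = 0.0, 1.0
    for k in range(n + 1):
        result += d[k][0] * factor
        factor *= (x - xs[k])
    return result
```

On the data of Fast Example 2 later in the chapter, `newton_divided_difference([1.0, 1.2, 1.5, 2.0], [2.5, 2.8, 2.7, 2.6], 1.1)` again gives approximately 2.702, the same polynomial the Lagrange method produces on those points.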



Forward-Difference Method
Both the Lagrange and Newton divided-difference methods can be applied whether the x subintervals are uniform or non-uniform. In the special case where the x subintervals are uniform, it may not be necessary to apply these two methods, as other methods prove easier. In this case, the divided-difference method with uniform x subintervals reduces to a method called forward-difference, which involves an operator called the forward-difference operator.

Definition 3.3. The forward-difference operator $\Delta_{k,i}$ is defined as the $k$th forward difference at $x_i$, or $\Delta_{k,i} = \Delta_{k-1,i+1} - \Delta_{k-1,i}$. The initial values are given by $\Delta_{0,i} = y_i$ for $i = 0, 1, \ldots, n$.

In deriving the Newton polynomial using the forward-difference method, let $h$ be the width of the x subintervals. Uniform subintervals means that every subinterval in x has the same width $h$; in other words, $h = x_{i+1} - x_i$ for $i = 0, 1, \ldots, n-1$.
The forward-difference formula is derived from the divided-difference equation. Consider the cubic form of the divided-difference polynomial from Equation (3.7),

$$P_3(x) = d_{0,0} + d_{1,0}(x - x_0) + d_{2,0}(x - x_0)(x - x_1) + d_{3,0}(x - x_0)(x - x_1)(x - x_2) .$$

With uniform subintervals, each divided difference reduces to a forward difference: $d_{1,0} = \Delta_{1,0}/h$, $d_{2,0} = \Delta_{2,0}/(2!\,h^2)$ and $d_{3,0} = \Delta_{3,0}/(3!\,h^3)$. This simplifies the polynomial into

$$P_3(x) = y_0 + \frac{\Delta_{1,0}}{h}(x - x_0) + \frac{\Delta_{2,0}}{2!\,h^2}(x - x_0)(x - x_1) + \frac{\Delta_{3,0}}{3!\,h^3}(x - x_0)(x - x_1)(x - x_2) .$$

It can be shown that the general form of the forward-difference method for interpolating the points $(x_i, y_i)$ for $i = 0, 1, \ldots, n$ extends from the cubic case above. The solution is given by

$$P_n(x) = y_0 + \frac{\Delta_{1,0}}{h}(x - x_0) + \frac{\Delta_{2,0}}{2!\,h^2}(x - x_0)(x - x_1) + \ldots + \frac{\Delta_{n,0}}{n!\,h^n}(x - x_0)(x - x_1) \cdots (x - x_{n-1}) . \quad (3.8)$$

Algorithm 3.3 summarizes the forward-difference approach.

Algorithm 3.3. Newton's Forward-Difference Method.

Given the interpolating points $(x_i, y_i)$ for $i = 0, 1, \ldots, n$;
Set $\Delta_{0,i} = y_i$, for $i = 0, 1, \ldots, n$;
Evaluate the forward-difference constants:
for $k = 1$ to $n$
    for $i = 0$ to $n - k$
        Compute $\Delta_{k,i} = \Delta_{k-1,i+1} - \Delta_{k-1,i}$;
    endfor
endfor
Get $P_n(x)$ using Equation (3.8);
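For uniformly spaced points, Equation (3.8) is convenient to code in the equivalent form $P_n(x) = y_0 + r\,\Delta_{1,0} + \frac{r(r-1)}{2!}\Delta_{2,0} + \ldots$ with $r = (x - x_0)/h$, as used later in Fast Example 3. A minimal sketch (names are ours):

```python
def newton_forward(xs, ys, x):
    """Newton forward-difference interpolation for uniformly spaced xs (Algorithm 3.3)."""
    n = len(xs) - 1
    h = xs[1] - xs[0]                 # uniform subinterval width
    r = (x - xs[0]) / h
    # Forward differences at x_0: delta[k] holds the k-th difference of y_0.
    delta, row = [ys[0]], list(ys)
    for k in range(1, n + 1):
        row = [row[i + 1] - row[i] for i in range(len(row) - 1)]
        delta.append(row[0])
    # P_n(x) = y_0 + r*D1 + r(r-1)/2! * D2 + ...; coeff accumulates r(r-1)...(r-k+1)/k!.
    result, coeff = 0.0, 1.0
    for k in range(n + 1):
        result += delta[k] * coeff
        coeff *= (r - k) / (k + 1)
    return result
```

With the data of Fast Example 3, `newton_forward([1.0, 1.2, 1.4, 1.6], [2.5, 2.8, 2.4, 2.6], 1.1)` evaluates to about 2.819, matching the hand calculation.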



Backward-Difference Method
The operator $\Delta_{k,i}$ is based on forward differences. It is also possible to work in the opposite direction, using backward differences.

Definition 3.4. The backward-difference operator $\nabla_{k,i}$ is defined as the $k$th backward difference at $x_i$, or $\nabla_{k,i} = \nabla_{k-1,i} - \nabla_{k-1,i-1}$. The initial values are $\nabla_{0,i} = y_i$ for $i = 0, 1, \ldots, n$.

The backward-difference method is also derived from Newton's divided-difference method, and likewise requires all the x subintervals to have a uniform width $h$. Expanding the divided-difference polynomial of Equation (3.7) around the last point $x_n$ and collecting the backward differences gives

$$P_n(x) = f_n + r \nabla f_n + \frac{r(r+1)}{2!} \nabla^2 f_n + \ldots + \frac{\nabla^n f_n}{n!} \prod_{i=0}^{n-1} (r + i) , \quad (3.9)$$

where $\nabla^k f_i = \nabla^{k-1} f_i - \nabla^{k-1} f_{i-1}$ is the backward-difference operator and $r = \dfrac{x - x_n}{h}$.
Algorithm 3.4 summarizes the steps in the backward-difference method for generating the
Newton polynomial.

Algorithm 3.4. Newton's Backward-Difference Method.

Given the interpolating points $(x_i, y_i)$ for $i = 0, 1, \ldots, n$;
Set $\nabla_{0,i} = y_i$, for $i = 0, 1, \ldots, n$;
Evaluate the backward-difference constants:
for $k = 1$ to $n$
    for $i = k$ to $n$
        Compute $\nabla_{k,i} = \nabla_{k-1,i} - \nabla_{k-1,i-1}$;
    endfor
endfor
Get $P_n(x)$ using Equation (3.9);
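The backward-difference formula (3.9) codes just as easily; the only changes from the forward version are the direction of the differences and the sign pattern in the coefficient. A minimal sketch (names are ours):

```python
def newton_backward(xs, ys, x):
    """Newton backward-difference interpolation for uniformly spaced xs (Algorithm 3.4)."""
    n = len(xs) - 1
    h = xs[1] - xs[0]
    r = (x - xs[n]) / h               # r is negative for x to the left of x_n
    # Backward differences at x_n: nabla[k] is the last entry of each difference row.
    nabla, row = [ys[n]], list(ys)
    for k in range(1, n + 1):
        row = [row[i] - row[i - 1] for i in range(1, len(row))]
        nabla.append(row[-1])
    # P_n(x) = y_n + r*N1 + r(r+1)/2! * N2 + ...; coeff accumulates r(r+1)...(r+k-1)/k!.
    result, coeff = 0.0, 1.0
    for k in range(n + 1):
        result += nabla[k] * coeff
        coeff *= (r + k) / (k + 1)
    return result
```

With the data of Fast Example 4, `newton_backward([1.0, 1.2, 1.4, 1.6], [2.5, 2.8, 2.4, 2.6], 1.1)` also evaluates to about 2.819, as expected since both difference forms represent the same interpolating polynomial.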

3.5 LEAST-SQUARE APPROXIMATION


The least-square method is an approximation method for a set of points based on the sum of the squares of the errors. The method is widely applied, notably in statistical analysis involving multiple regression. Multiple regression approximates several variables using straight lines, or linear equations. Its use of low-degree polynomials makes it a practical tool for forecasting, experimental design and other forms of statistical modeling.
In the least-square method, the error at a point is defined as the difference between the true value and the approximated value. The method has an advantage over the Lagrange and Newton methods in that the degree of the approximating polynomial is independent of the number of points. This allows a low-degree polynomial to fit a finite number of points.
The least-square method generates a low-degree polynomial for approximating the given points by minimizing the sum of the squares of the errors. The solution is obtained by solving the system of linear equations generated from this minimization.
Approximation using the least-square method can be carried out in either continuous or discrete form. The difference between the two forms rests on the use of an integral in the former and a summation in the latter. The continuous least-square method is appropriate in applications requiring continuous variables, and in analog-based applications. On the other hand, the discrete least-square method suits applications with finite data. We limit our discussion in this chapter to the discrete least-square method.
The discrete form of the problem is based on $m$ given points $(x_i, y_i)$ for $i = 0, 1, \ldots, m-1$. The curve to be fitted is a low-degree polynomial $P(x)$ that best represents all the points. The most common function used in the least-square method is the linear function $P(x) = a_0 + a_1 x$, which is good enough for many applications. Occasionally, some applications also require quadratic or cubic polynomials.
In the least-square method, the error between the given value $y_i$ and the approximated value $p_i = P(x_i)$ at the point $x = x_i$ is given by

$$e_i = y_i - p_i . \quad (3.18)$$

The sum of the squares of the errors $e_i$ at the points $x = x_i$ for $i = 0, 1, \ldots, m-1$ is expressed as an objective function $E$, as follows:

$$E = \sum_{i=0}^{m-1} e_i^2 = \sum_{i=0}^{m-1} (y_i - p_i)^2 . \quad (3.19)$$


Figure 3.3. Approximation using a polynomial in the least-square method.

Figure 3.3 shows a case of $m = 6$ points, with the given points $(x_i, y_i)$ shown as white squares and the approximated points $(x_i, p_i)$ as dark squares. At each point $x_i$, the error $e_i = y_i - p_i$ is computed. The objective function in Equation (3.19) is obtained by summing the squares of all these errors.
The curves to be fitted in the least-square approximations are normally low-degree polynomials,
such as

$P_1(x) = a_0 + a_1 x$, for a linear function,
$P_2(x) = a_0 + a_1 x + a_2 x^2$, for a quadratic polynomial,
$P_3(x) = a_0 + a_1 x + a_2 x^2 + a_3 x^3$, for a cubic polynomial.

In linear approximation, a straight line equation is used to approximate the points. The objective
function becomes



$$E = \sum_{i=0}^{m-1} (y_i - P_1(x_i))^2 = \sum_{i=0}^{m-1} (y_i - (a_0 + a_1 x_i))^2 . \quad (3.20)$$

Our objective here is to find the values of $a_0$ and $a_1$ by minimizing the sum of the squares of the errors. The minimization requires setting $\frac{\partial E}{\partial a_0} = 0$ and $\frac{\partial E}{\partial a_1} = 0$ to produce two linear equations, which are sufficient to solve for $a_0$ and $a_1$. The partial derivatives are obtained as follows:

$$\frac{\partial E}{\partial a_0} = -2 \sum_{i=0}^{m-1} (y_i - (a_0 + a_1 x_i)) ,$$
$$\frac{\partial E}{\partial a_1} = -2 \sum_{i=0}^{m-1} x_i (y_i - (a_0 + a_1 x_i)) .$$

Setting $\frac{\partial E}{\partial a_0} = 0$, we obtain the first linear equation:

$$\sum_{i=0}^{m-1} (y_i - (a_0 + a_1 x_i)) = 0 ,$$
$$\sum_{i=0}^{m-1} y_i - \sum_{i=0}^{m-1} a_0 - \sum_{i=0}^{m-1} a_1 x_i = 0 ,$$
$$a_0 \sum_{i=0}^{m-1} 1 + a_1 \sum_{i=0}^{m-1} x_i = \sum_{i=0}^{m-1} y_i .$$

The second linear equation is obtained by setting $\frac{\partial E}{\partial a_1} = 0$:

$$\sum_{i=0}^{m-1} x_i (y_i - (a_0 + a_1 x_i)) = 0 ,$$
$$a_0 \sum_{i=0}^{m-1} x_i + a_1 \sum_{i=0}^{m-1} x_i^2 = \sum_{i=0}^{m-1} x_i y_i .$$

The two equations can be written in matrix form as follows:

$$\begin{bmatrix} \sum_{i=0}^{m-1} 1 & \sum_{i=0}^{m-1} x_i \\ \sum_{i=0}^{m-1} x_i & \sum_{i=0}^{m-1} x_i^2 \end{bmatrix} \begin{bmatrix} a_0 \\ a_1 \end{bmatrix} = \begin{bmatrix} \sum_{i=0}^{m-1} y_i \\ \sum_{i=0}^{m-1} x_i y_i \end{bmatrix} . \quad (3.21)$$
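The 2 × 2 system of normal equations is small enough to solve in closed form. The sketch below (function and variable names are ours) assembles the four sums and applies Cramer's rule:

```python
def linear_least_squares(xs, ys):
    """Fit P1(x) = a0 + a1*x by solving the 2x2 normal equations of Eq. (3.21)."""
    m = len(xs)
    sx = sum(xs)                                  # sum of x_i
    sxx = sum(x * x for x in xs)                  # sum of x_i^2
    sy = sum(ys)                                  # sum of y_i
    sxy = sum(x * y for x, y in zip(xs, ys))      # sum of x_i * y_i
    # Solve [[m, sx], [sx, sxx]] [a0, a1]^T = [sy, sxy]^T by Cramer's rule.
    det = m * sxx - sx * sx
    a0 = (sy * sxx - sx * sxy) / det
    a1 = (m * sxy - sx * sy) / det
    return a0, a1
```

For points lying exactly on a line the fit recovers that line; for scattered data it returns the familiar regression coefficients.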

A least-square approximation using the quadratic function $P_2(x) = a_0 + a_1 x + a_2 x^2$ produces a system of 3 × 3 linear equations. The objective function is

$$E = \sum_{i=0}^{m-1} (y_i - (a_0 + a_1 x_i + a_2 x_i^2))^2 . \quad (3.22)$$

The approximation is obtained by minimizing the sum of the squares of the errors through $\frac{\partial E}{\partial a_0} = 0$, $\frac{\partial E}{\partial a_1} = 0$ and $\frac{\partial E}{\partial a_2} = 0$. The first equation is obtained through the following steps:

$$\frac{\partial E}{\partial a_0} = -2 \sum_{i=0}^{m-1} (y_i - (a_0 + a_1 x_i + a_2 x_i^2)) ,$$
$$\frac{\partial E}{\partial a_0} = 0 : \quad \sum_{i=0}^{m-1} (y_i - (a_0 + a_1 x_i + a_2 x_i^2)) = 0 ,$$
$$a_0 \sum_{i=0}^{m-1} 1 + a_1 \sum_{i=0}^{m-1} x_i + a_2 \sum_{i=0}^{m-1} x_i^2 = \sum_{i=0}^{m-1} y_i .$$

The second equation is generated in the same manner, as follows:

$$\frac{\partial E}{\partial a_1} = -2 \sum_{i=0}^{m-1} x_i (y_i - (a_0 + a_1 x_i + a_2 x_i^2)) ,$$
$$\frac{\partial E}{\partial a_1} = 0 : \quad \sum_{i=0}^{m-1} x_i (y_i - (a_0 + a_1 x_i + a_2 x_i^2)) = 0 ,$$
$$a_0 \sum_{i=0}^{m-1} x_i + a_1 \sum_{i=0}^{m-1} x_i^2 + a_2 \sum_{i=0}^{m-1} x_i^3 = \sum_{i=0}^{m-1} x_i y_i .$$

We also obtain the third equation from similar steps:

$$\frac{\partial E}{\partial a_2} = -2 \sum_{i=0}^{m-1} x_i^2 (y_i - (a_0 + a_1 x_i + a_2 x_i^2)) ,$$
$$\frac{\partial E}{\partial a_2} = 0 : \quad \sum_{i=0}^{m-1} x_i^2 (y_i - (a_0 + a_1 x_i + a_2 x_i^2)) = 0 ,$$
$$a_0 \sum_{i=0}^{m-1} x_i^2 + a_1 \sum_{i=0}^{m-1} x_i^3 + a_2 \sum_{i=0}^{m-1} x_i^4 = \sum_{i=0}^{m-1} x_i^2 y_i .$$

The three equations are formulated in matrix form, as follows:

$$\begin{bmatrix} \sum 1 & \sum x_i & \sum x_i^2 \\ \sum x_i & \sum x_i^2 & \sum x_i^3 \\ \sum x_i^2 & \sum x_i^3 & \sum x_i^4 \end{bmatrix} \begin{bmatrix} a_0 \\ a_1 \\ a_2 \end{bmatrix} = \begin{bmatrix} \sum y_i \\ \sum x_i y_i \\ \sum x_i^2 y_i \end{bmatrix} , \quad (3.23)$$

where every sum runs over $i = 0, 1, \ldots, m-1$.

Equations (3.21) and (3.23) can be generalized to approximation by a polynomial of degree $n$. The least-square method produces the following system of $(n+1) \times (n+1)$ linear equations:

$$\begin{bmatrix} \sum 1 & \sum x_i & \cdots & \sum x_i^{n-1} & \sum x_i^n \\ \sum x_i & \sum x_i^2 & \cdots & \sum x_i^n & \sum x_i^{n+1} \\ \vdots & \vdots & \ddots & \vdots & \vdots \\ \sum x_i^{n-1} & \sum x_i^n & \cdots & \sum x_i^{2n-2} & \sum x_i^{2n-1} \\ \sum x_i^n & \sum x_i^{n+1} & \cdots & \sum x_i^{2n-1} & \sum x_i^{2n} \end{bmatrix} \begin{bmatrix} a_0 \\ a_1 \\ \vdots \\ a_{n-1} \\ a_n \end{bmatrix} = \begin{bmatrix} \sum y_i \\ \sum x_i y_i \\ \vdots \\ \sum x_i^{n-1} y_i \\ \sum x_i^n y_i \end{bmatrix} , \quad (3.24)$$

where every sum runs over $i = 0, 1, \ldots, m-1$.
Equation (3.24) is a generalization for fitting a polynomial of degree $n$ to a set of $m$ points using the least-square approximation method. Letting $s_i = \sum_{k=0}^{m-1} x_k^i$ and $v_i = \sum_{k=0}^{m-1} y_k x_k^i$, this equation can be rewritten as

$$\begin{bmatrix} s_0 & s_1 & \cdots & s_{n-1} & s_n \\ s_1 & s_2 & \cdots & s_n & s_{n+1} \\ \vdots & \vdots & \ddots & \vdots & \vdots \\ s_{n-1} & s_n & \cdots & s_{2n-2} & s_{2n-1} \\ s_n & s_{n+1} & \cdots & s_{2n-1} & s_{2n} \end{bmatrix} \begin{bmatrix} a_0 \\ a_1 \\ \vdots \\ a_{n-1} \\ a_n \end{bmatrix} = \begin{bmatrix} v_0 \\ v_1 \\ \vdots \\ v_{n-1} \\ v_n \end{bmatrix} . \quad (3.25)$$

As a word of caution, since the equation involves a power of zero, a value of $x = 0$ produces $0^0$, which should be treated as 1.

Algorithm 3.7. Least-Square Method.

Given the points $(x_i, y_i)$ for $i = 0, 1, \ldots, m-1$;
Select the polynomial $P(x) = a_0 + a_1 x + \ldots + a_n x^n$;
Find $s_i = \sum_{k=0}^{m-1} x_k^i$ and $v_i = \sum_{k=0}^{m-1} y_k x_k^i$ in Equation (3.25);
Solve the system of linear equations to find $a_i$ for $i = 0, 1, \ldots, n$;
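Algorithm 3.7 can be sketched as follows (a minimal Python implementation with our own names: it builds the sums $s_i$ and $v_i$, assembles the system of Equation (3.25), and solves it by Gaussian elimination with partial pivoting). Note that Python's `x ** 0` evaluates to 1 even when `x` is 0, consistent with the caution above.

```python
def least_squares_fit(xs, ys, n):
    """Fit a degree-n polynomial to the points by Algorithm 3.7; returns [a0, ..., an]."""
    s = [sum(x ** i for x in xs) for i in range(2 * n + 1)]         # s_i = sum x_k^i
    v = [sum(y * x ** i for x, y in zip(xs, ys)) for i in range(n + 1)]
    # Assemble the (n+1) x (n+1) system A a = v with A[r][c] = s_{r+c}.
    A = [[s[r + c] for c in range(n + 1)] for r in range(n + 1)]
    # Forward elimination with partial pivoting.
    for col in range(n + 1):
        piv = max(range(col, n + 1), key=lambda r: abs(A[r][col]))
        A[col], A[piv] = A[piv], A[col]
        v[col], v[piv] = v[piv], v[col]
        for r in range(col + 1, n + 1):
            f = A[r][col] / A[col][col]
            for c in range(col, n + 1):
                A[r][c] -= f * A[col][c]
            v[r] -= f * v[col]
    # Back-substitution.
    a = [0.0] * (n + 1)
    for r in range(n, -1, -1):
        a[r] = (v[r] - sum(A[r][c] * a[c] for c in range(r + 1, n + 1))) / A[r][r]
    return a
```

Fitting the five points of Fast Example 5 later in the chapter with `n = 2` and evaluating the resulting polynomial at 1.5 reproduces $P_2(1.5) \approx 0.705$.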



Fast Example 1: Lagrange Method
(for uniform and non-uniform intervals)

Formula

$$P_n(x) = y_0 L_0(x) + y_1 L_1(x) + \ldots + y_n L_n(x) , \quad\text{where } L_i(x) = \prod_{\substack{j = 0 \\ j \neq i}}^{n} \frac{(x - x_j)}{(x_i - x_j)} \text{ is the Lagrange operator.}$$

Example

 i              0      1      2      3
 x_i            1.0    1.2    1.5    2.0
 y_i = f(x_i)   2.5    2.8    2.7    2.6

Find the polynomial $P(x) = P_3(x)$ and, hence, determine the value of $P(1.1)$.

Solution

$$L_0(x) = \frac{(x - x_1)(x - x_2)(x - x_3)}{(x_0 - x_1)(x_0 - x_2)(x_0 - x_3)} = \frac{(x - 1.2)(x - 1.5)(x - 2.0)}{(1.0 - 1.2)(1.0 - 1.5)(1.0 - 2.0)} = -10\,(x - 1.2)(x - 1.5)(x - 2.0)$$

$$L_1(x) = \frac{(x - x_0)(x - x_2)(x - x_3)}{(x_1 - x_0)(x_1 - x_2)(x_1 - x_3)} = \frac{(x - 1.0)(x - 1.5)(x - 2.0)}{(1.2 - 1.0)(1.2 - 1.5)(1.2 - 2.0)} = 20.833\,(x - 1.0)(x - 1.5)(x - 2.0)$$

$$L_2(x) = \frac{(x - x_0)(x - x_1)(x - x_3)}{(x_2 - x_0)(x_2 - x_1)(x_2 - x_3)} = \frac{(x - 1.0)(x - 1.2)(x - 2.0)}{(1.5 - 1.0)(1.5 - 1.2)(1.5 - 2.0)} = -13.333\,(x - 1.0)(x - 1.2)(x - 2.0)$$

$$L_3(x) = \frac{(x - x_0)(x - x_1)(x - x_2)}{(x_3 - x_0)(x_3 - x_1)(x_3 - x_2)} = \frac{(x - 1.0)(x - 1.2)(x - 1.5)}{(2.0 - 1.0)(2.0 - 1.2)(2.0 - 1.5)} = 2.5\,(x - 1.0)(x - 1.2)(x - 1.5)$$

Therefore,

$$P(x) = y_0 L_0(x) + y_1 L_1(x) + y_2 L_2(x) + y_3 L_3(x)$$
$$P(x) = 2.5(-10)(x - 1.2)(x - 1.5)(x - 2.0) + 2.8(20.833)(x - 1.0)(x - 1.5)(x - 2.0) + 2.7(-13.333)(x - 1.0)(x - 1.2)(x - 2.0) + 2.6(2.5)(x - 1.0)(x - 1.2)(x - 1.5)$$
$$P(1.1) = 2.5(-10)(-0.1)(-0.4)(-0.9) + 2.8(20.833)(0.1)(-0.4)(-0.9) + 2.7(-13.333)(0.1)(-0.1)(-0.9) + 2.6(2.5)(0.1)(-0.1)(-0.4) = 2.702$$



Fast Example 2: Newton Divided-Difference Method
(for uniform and non-uniform intervals)

Formula:

$$P(x) = P_3(x) = f_0^{[0]} + f_0^{[1]}(x - x_0) + f_0^{[2]}(x - x_0)(x - x_1) + \ldots + f_0^{[n]} \prod_{i=0}^{n-1} (x - x_i) ,$$

where $f_i^{[k]} = \dfrac{f_{i+1}^{[k-1]} - f_i^{[k-1]}}{x_{i+k} - x_i}$ is the divided-difference operator.

Example:

 i              0      1      2      3
 x_i            1.0    1.2    1.5    2.0
 y_i = f(x_i)   2.5    2.8    2.7    2.6

Find the polynomial $P(x) = P_3(x)$ and, hence, determine the value of $P(1.1)$.

Solution:

 i    x_i    f_i^[0] = f(x_i)    f_i^[1]    f_i^[2]    f_i^[3]
 0    1.0    2.5                  1.500     -3.667      3.833
 1    1.2    2.8                 -0.333      0.167
 2    1.5    2.7                 -0.200
 3    2.0    2.6

Therefore,

$$P(x) = P_3(x) = f_0^{[0]} + f_0^{[1]}(x - x_0) + f_0^{[2]}(x - x_0)(x - x_1) + f_0^{[3]}(x - x_0)(x - x_1)(x - x_2)$$
$$P(x) = 2.5 + 1.5(x - 1.0) + (-3.667)(x - 1.0)(x - 1.2) + 3.833(x - 1.0)(x - 1.2)(x - 1.5)$$
$$P(1.1) = 2.5 + 1.5(1.1 - 1.0) + (-3.667)(1.1 - 1.0)(1.1 - 1.2) + 3.833(1.1 - 1.0)(1.1 - 1.2)(1.1 - 1.5) = 2.702$$



Fast Example 3: Newton's Forward-Difference Method
(for uniform intervals only)

Formula:

$$P_n(x) = f_0 + r \Delta f_0 + \frac{r(r-1)}{2!} \Delta^2 f_0 + \ldots + \frac{\Delta^n f_0}{n!} \prod_{i=0}^{n-1} (r - i) ,$$

where $\Delta^k f_i = \Delta^{k-1} f_{i+1} - \Delta^{k-1} f_i$ is the forward-difference operator and $r = \dfrac{x - x_0}{h}$.

Example:

 i              0      1      2      3
 x_i            1.0    1.2    1.4    1.6
 y_i = f(x_i)   2.5    2.8    2.4    2.6

Find the polynomial $P(x) = P_3(x)$ and, hence, determine the value of $P(1.1)$.

Solution:

 i    x_i    Δ^0 f_i = f(x_i)    Δ^1 f_i    Δ^2 f_i    Δ^3 f_i
 0    1.0    2.5                  0.3       -0.7        1.3
 1    1.2    2.8                 -0.4        0.6
 2    1.4    2.4                  0.2
 3    1.6    2.6

Therefore,

$$P(r) = P_3(x) = f_0 + r \Delta f_0 + \frac{r(r-1)}{2!} \Delta^2 f_0 + \frac{r(r-1)(r-2)}{3!} \Delta^3 f_0 , \quad\text{where } r = \frac{x - x_0}{h} = \frac{x - 1}{0.2} ,$$

$$P(r) = 2.5 + r(0.3) + \frac{r(r-1)}{2!}(-0.7) + \frac{r(r-1)(r-2)}{3!}(1.3) .$$

For $x = 1.1$, $r = \dfrac{1.1 - 1}{0.2} = 0.5$.

We get $P(1.1) = 2.5 + (0.5)(0.3) + \dfrac{(0.5)(0.5 - 1)}{2!}(-0.7) + \dfrac{(0.5)(0.5 - 1)(0.5 - 2)}{3!}(1.3) = 2.819$.



Fast Example 4: Newton Backward-Difference Method
(for uniform intervals only)

Formula:

$$P_n(x) = f_n + r \nabla f_n + \frac{r(r+1)}{2!} \nabla^2 f_n + \ldots + \frac{\nabla^n f_n}{n!} \prod_{i=0}^{n-1} (r + i) ,$$

where $\nabla^k f_i = \nabla^{k-1} f_i - \nabla^{k-1} f_{i-1}$ is the backward-difference operator and $r = \dfrac{x - x_n}{h}$.

Example:

 i              0      1      2      3
 x_i            1.0    1.2    1.4    1.6
 y_i = f(x_i)   2.5    2.8    2.4    2.6

Find the polynomial $P(x) = P_3(x)$ and, hence, determine the value of $P(1.1)$.

Solution:

 i    x_i    ∇^0 f_i = f(x_i)    ∇^1 f_i    ∇^2 f_i    ∇^3 f_i
 0    1.0    2.5
 1    1.2    2.8                  0.3
 2    1.4    2.4                 -0.4       -0.7
 3    1.6    2.6                  0.2        0.6        1.3

$$P(r) = P_3(x) = f_3 + r \nabla f_3 + \frac{r(r+1)}{2!} \nabla^2 f_3 + \frac{r(r+1)(r+2)}{3!} \nabla^3 f_3 , \quad\text{where } r = \frac{x - x_3}{h} = \frac{x - 1.6}{0.2} .$$

$$P(r) = P_3(x) = 2.6 + r(0.2) + \frac{r(r+1)}{2!}(0.6) + \frac{r(r+1)(r+2)}{3!}(1.3) .$$

For $x = 1.1$, $r = \dfrac{1.1 - 1.6}{0.2} = -2.5$. Therefore,

$$P(x = 1.1) = P(-2.5) = 2.6 + (-2.5)(0.2) + \frac{(-2.5)(-2.5 + 1)}{2!}(0.6) + \frac{(-2.5)(-2.5 + 1)(-2.5 + 2)}{3!}(1.3) = 2.819 .$$



Fast Example 5: Least-Squares Method
(for uniform and non-uniform intervals)

A technique to approximate a set of data by minimizing the sum of the squared errors, $E$:

$$E = \sum_{i=0}^{m-1} e_i^2 = \sum_{i=0}^{m-1} \left( y_i - P_n(x_i) \right)^2 , \qquad m = \text{no. of data points}, \quad n = \text{polynomial degree} .$$

That is, set $\dfrac{\partial E}{\partial a_j} = 0$ for each coefficient $a_j$ to get a system of linear equations, then solve.

Example: Find a polynomial of degree 2, P2 ( x) , that approximates the points (1,4), (1.2,5), (1.4,1),
(1.7,-1) and (2,3). Hence, evaluate P2 (1.5) .

Solution:
$P_2(x) = a_0 + a_1 x + a_2 x^2$. The SLE to be solved is

$$\begin{bmatrix} S_0 & S_1 & S_2 \\ S_1 & S_2 & S_3 \\ S_2 & S_3 & S_4 \end{bmatrix} \begin{bmatrix} a_0 \\ a_1 \\ a_2 \end{bmatrix} = \begin{bmatrix} v_0 \\ v_1 \\ v_2 \end{bmatrix} .$$

$S_j = \sum_{i=0}^{m-1} x_i^j$:
$S_0 = x_0^0 + x_1^0 + x_2^0 + x_3^0 + x_4^0 = 1 + 1 + 1 + 1 + 1 = 5$
$S_1 = x_0^1 + x_1^1 + x_2^1 + x_3^1 + x_4^1 = 1 + 1.2 + 1.4 + 1.7 + 2 = 7.300$
$S_2 = x_0^2 + x_1^2 + x_2^2 + x_3^2 + x_4^2 = 1^2 + 1.2^2 + 1.4^2 + 1.7^2 + 2^2 = 11.290$
$S_3 = x_0^3 + x_1^3 + x_2^3 + x_3^3 + x_4^3 = 1^3 + 1.2^3 + 1.4^3 + 1.7^3 + 2^3 = 18.385$
$S_4 = x_0^4 + x_1^4 + x_2^4 + x_3^4 + x_4^4 = 1^4 + 1.2^4 + 1.4^4 + 1.7^4 + 2^4 = 31.267$

$v_k = \sum_{i=0}^{m-1} y_i x_i^k$:
$v_0 = y_0 x_0^0 + y_1 x_1^0 + y_2 x_2^0 + y_3 x_3^0 + y_4 x_4^0 = 4 + 5 + 1 - 1 + 3 = 12.000$
$v_1 = y_0 x_0^1 + y_1 x_1^1 + y_2 x_2^1 + y_3 x_3^1 + y_4 x_4^1 = 4(1) + 5(1.2) + 1(1.4) - 1(1.7) + 3(2) = 15.700$
$v_2 = y_0 x_0^2 + y_1 x_1^2 + y_2 x_2^2 + y_3 x_3^2 + y_4 x_4^2 = 4(1^2) + 5(1.2^2) + 1(1.4^2) - 1(1.7^2) + 3(2^2) = 22.270$

Numerically, the system is

$$\begin{bmatrix} 5 & 7.300 & 11.290 \\ 7.300 & 11.290 & 18.385 \\ 11.290 & 18.385 & 31.267 \end{bmatrix} \begin{bmatrix} a_0 \\ a_1 \\ a_2 \end{bmatrix} = \begin{bmatrix} 12.000 \\ 15.700 \\ 22.270 \end{bmatrix} .$$

Solve the above SLE to get $a_0 = 32.877$, $a_1 = -39.907$ and $a_2 = 12.306$.

Hence, $P_2(x) = 32.877 - 39.907 x + 12.306 x^2$.

Finally, $P_2(1.5) = 32.877 - 39.907(1.5) + 12.306(1.5^2) = 0.705$.

