INTERPOLATION
Given that
$$y = f(x), \qquad x_0 \le x \le x_n,$$
and assuming that $f(x)$ is single-valued, continuous and explicitly known, the values of $f(x)$ corresponding to $x_0, x_1, \ldots, x_n$ can easily be computed and tabulated. Suppose we want to reverse this process: given the set of tabular values $(x_0, y_0), (x_1, y_1), \ldots, (x_n, y_n)$ satisfying the relation $y = f(x)$, where the explicit nature of $f(x)$ is not known, we want to find a simpler function, say $\phi(x)$, such that $f(x)$ and $\phi(x)$ agree at the set of tabulated points. Such a process is called interpolation. If $\phi(x)$ is a polynomial, then the process is called polynomial interpolation. One reason for approximating an unknown function by means of a polynomial is outlined in the following theorem:
Theorem 4.1.1 (Weierstrass Approximation Theorem)
Suppose that $f$ is defined and continuous on $[a, b]$. For each $\varepsilon > 0$, there exists a polynomial $p(x)$ with the property that
$$|f(x) - p(x)| < \varepsilon, \quad \text{for all } x \in [a, b].$$
Theorem 4.1.1 shows that given any function, defined and continuous on a closed and bounded
interval, there exists a polynomial that is as “close” to the given function as desired.
[Figure: the graph of $y = p(x)$ lies within the band between $y = f(x) - \varepsilon$ and $y = f(x) + \varepsilon$ for $a \le x \le b$.]
The fact that derivatives and indefinite integrals of polynomials can easily be determined is another reason for using polynomials to approximate an unknown function.
A Taylor polynomial may not give a good approximation because it concentrates all of its information at a single point $x_0$; moving away from $x_0$ gives increasingly inaccurate approximations.
Exercise: Check that using the Taylor polynomial for $f(x) = \frac{1}{x}$ expanded about $x_0 = 1$ to approximate $f(3) = \frac{1}{3}$ gives an inaccurate value.
Suppose we have two distinct points $(x_0, y_0)$ and $(x_1, y_1)$, and would like to approximate $f$ by a linear polynomial through them. Define
$$L_0(x) = \frac{x - x_1}{x_0 - x_1} \quad \text{and} \quad L_1(x) = \frac{x - x_0}{x_1 - x_0}.$$
Then the linear interpolating polynomial is
$$p(x) = L_0(x) f(x_0) + L_1(x) f(x_1) = \frac{x - x_1}{x_0 - x_1} f(x_0) + \frac{x - x_0}{x_1 - x_0} f(x_1).$$
Example 4.1.1
Determine the linear interpolating polynomial that passes through the points (2, 4) and (5,1).
Solution:
$$L_0(x) = \frac{x - x_1}{x_0 - x_1} = \frac{x - 5}{2 - 5} = -\frac{1}{3}(x - 5)$$
$$L_1(x) = \frac{x - x_0}{x_1 - x_0} = \frac{x - 2}{5 - 2} = \frac{1}{3}(x - 2)$$
$$p(x) = L_0(x) f(x_0) + L_1(x) f(x_1) = -\frac{1}{3}(x - 5)(4) + \frac{1}{3}(x - 2)(1) = -x + 6.$$
NOTE: Two points give a linear function (a polynomial of degree 1). It follows that $n + 1$ points give a polynomial of degree at most $n$.
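The construction above translates directly into code. The following is a minimal sketch (the function names are our own) that reproduces Example 4.1.1:

```python
# A minimal sketch of linear Lagrange interpolation.
def linear_lagrange(x0, y0, x1, y1):
    """Return p(x) = L0(x)*y0 + L1(x)*y1 through (x0, y0) and (x1, y1)."""
    def p(x):
        L0 = (x - x1) / (x0 - x1)   # equals 1 at x0 and 0 at x1
        L1 = (x - x0) / (x1 - x0)   # equals 0 at x0 and 1 at x1
        return L0 * y0 + L1 * y1
    return p

# Example 4.1.1: the line through (2, 4) and (5, 1) is p(x) = -x + 6.
p = linear_lagrange(2, 4, 5, 1)
print(p(3))  # -> 3.0, since p(x) = -x + 6
```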
Suppose that we have $n + 1$ points $(x_0, f(x_0)), (x_1, f(x_1)), \ldots, (x_n, f(x_n))$. In this case, we first construct, for each $k = 0, 1, \ldots, n$, a function $L_{n,k}(x)$ with the property that
$$L_{n,k}(x_i) = \begin{cases} 1, & \text{if } i = k \\ 0, & \text{if } i \neq k. \end{cases}$$
For instance, in Example 4.1.1,
$$L_{1,0}(x_0) = L_{1,0}(2) = -\tfrac{1}{3}(2 - 5) = 1, \qquad L_{1,0}(x_1) = L_{1,0}(5) = -\tfrac{1}{3}(5 - 5) = 0,$$
$$L_{1,1}(x_0) = L_{1,1}(2) = \tfrac{1}{3}(2 - 2) = 0, \qquad L_{1,1}(x_1) = L_{1,1}(5) = \tfrac{1}{3}(5 - 2) = 1.$$
To satisfy $L_{n,k}(x_i) = 0$ for $i \neq k$ requires that the numerator of $L_{n,k}(x)$ contain the term
$$(x - x_0)(x - x_1) \cdots (x - x_{k-1})(x - x_{k+1}) \cdots (x - x_n),$$
and to satisfy $L_{n,k}(x_k) = 1$ the denominator must be this same product evaluated at $x = x_k$. Thus
$$L_{n,k}(x) = \frac{(x - x_0)(x - x_1) \cdots (x - x_{k-1})(x - x_{k+1}) \cdots (x - x_n)}{(x_k - x_0)(x_k - x_1) \cdots (x_k - x_{k-1})(x_k - x_{k+1}) \cdots (x_k - x_n)}.$$
For example, when $n = 2$,
$$L_{2,0}(x) = \frac{(x - x_1)(x - x_2)}{(x_0 - x_1)(x_0 - x_2)}, \quad L_{2,1}(x) = \frac{(x - x_0)(x - x_2)}{(x_1 - x_0)(x_1 - x_2)}, \quad L_{2,2}(x) = \frac{(x - x_0)(x - x_1)}{(x_2 - x_0)(x_2 - x_1)}.$$
Theorem 4.1.2
If $x_0, x_1, \ldots, x_n$ are $n + 1$ distinct numbers and $f$ is a function whose values are given at these numbers, then there exists a unique polynomial $p(x)$ of degree at most $n$ with
$$f(x_k) = p(x_k), \quad \text{for each } k = 0, 1, 2, \ldots, n.$$
This polynomial is given by
$$p(x) = f(x_0) L_{n,0}(x) + f(x_1) L_{n,1}(x) + \cdots + f(x_{n-1}) L_{n,n-1}(x) + f(x_n) L_{n,n}(x) = \sum_{k=0}^{n} f(x_k) L_{n,k}(x),$$
where
$$L_{n,k}(x) = \frac{(x - x_0)(x - x_1) \cdots (x - x_{k-1})(x - x_{k+1}) \cdots (x - x_n)}{(x_k - x_0)(x_k - x_1) \cdots (x_k - x_{k-1})(x_k - x_{k+1}) \cdots (x_k - x_n)} = \prod_{\substack{i=0 \\ i \neq k}}^{n} \frac{x - x_i}{x_k - x_i}.$$
Example 4.1.2
Use the nodes $x_0 = 2$, $x_1 = 2.75$ and $x_2 = 4$ to find the second Lagrange interpolating polynomial for $f(x) = \frac{1}{x}$, and use it to approximate $f(3) = \frac{1}{3}$.
Solution:
$$L_{2,0}(x) = \frac{(x - x_1)(x - x_2)}{(x_0 - x_1)(x_0 - x_2)} = \frac{(x - 2.75)(x - 4)}{(2 - 2.75)(2 - 4)} = \frac{2}{3}(x - 2.75)(x - 4)$$
$$L_{2,1}(x) = \frac{(x - x_0)(x - x_2)}{(x_1 - x_0)(x_1 - x_2)} = \frac{(x - 2)(x - 4)}{(2.75 - 2)(2.75 - 4)} = -\frac{16}{15}(x - 2)(x - 4)$$
$$L_{2,2}(x) = \frac{(x - x_0)(x - x_1)}{(x_2 - x_0)(x_2 - x_1)} = \frac{(x - 2)(x - 2.75)}{(4 - 2)(4 - 2.75)} = \frac{2}{5}(x - 2)(x - 2.75)$$
Then
$$p(x) = f(x_0) L_{2,0}(x) + f(x_1) L_{2,1}(x) + f(x_2) L_{2,2}(x)$$
$$= \frac{1}{2} \cdot \frac{2}{3}(x - 2.75)(x - 4) - \frac{4}{11} \cdot \frac{16}{15}(x - 2)(x - 4) + \frac{1}{4} \cdot \frac{2}{5}(x - 2)(x - 2.75)$$
$$= \frac{1}{3}(x - 2.75)(x - 4) - \frac{64}{165}(x - 2)(x - 4) + \frac{1}{10}(x - 2)(x - 2.75)$$
$$= \frac{1}{22} x^2 - \frac{35}{88} x + \frac{49}{44}.$$
At $x = 3$,
$$f(3) \approx p(3) = \frac{1}{22}(3^2) - \frac{35}{88}(3) + \frac{49}{44} = \frac{29}{88} \approx 0.32955.$$
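The general formula of Theorem 4.1.2 can be sketched in a few lines (helper names are our own); it reproduces the value $29/88$ of Example 4.1.2:

```python
# A sketch of the general Lagrange interpolating polynomial.
def lagrange(nodes, values):
    """Return p with p(x) = sum_k values[k] * L_{n,k}(x), nodes distinct."""
    def p(x):
        total = 0.0
        for k, xk in enumerate(nodes):
            # L_{n,k}(x) = prod over i != k of (x - x_i) / (x_k - x_i)
            Lk = 1.0
            for i, xi in enumerate(nodes):
                if i != k:
                    Lk *= (x - xi) / (xk - xi)
            total += values[k] * Lk
        return total
    return p

# Example 4.1.2: nodes 2, 2.75, 4 for f(x) = 1/x.
p = lagrange([2, 2.75, 4], [1/2, 4/11, 1/4])
print(p(3))  # -> 0.32954..., i.e. 29/88
```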
Theorem 4.1.3
Suppose the nodes $x_0, x_1, \ldots, x_n$ in the interval $[a, b]$ are distinct and $f \in C^{n+1}[a, b]$. Then, for each $x$ in $[a, b]$, a number $\xi(x)$ (generally unknown) between the nodes, and hence in $(a, b)$, exists with
$$f(x) - p(x) = \frac{f^{(n+1)}(\xi(x))}{(n + 1)!} (x - x_0)(x - x_1) \cdots (x - x_n),$$
where $p(x)$ is the Lagrange interpolating polynomial of Theorem 4.1.2.
NOTE: The error formula given in Theorem 4.1.3 is similar to the one in Taylor's Theorem, except that this formula uses information at the distinct numbers $x_0, x_1, \ldots, x_n$ rather than at a single point.
Example 4.1.3
Find the error formula for the second Lagrange interpolating polynomial found in Example
4.1.2 and find the maximum error when this polynomial is used to approximate f ( x ) for x
in [2, 4].
Solution:
$$f(x) = \frac{1}{x}, \quad f'(x) = -\frac{1}{x^2}, \quad f''(x) = \frac{2}{x^3}, \quad f'''(x) = -\frac{6}{x^4}.$$
Thus,
$$f(x) - p(x) = \frac{f^{(3)}(\xi(x))}{(2 + 1)!} (x - x_0)(x - x_1)(x - x_2) = -\frac{1}{\xi(x)^4} (x - 2)(x - 2.75)(x - 4).$$
The maximum value of $\xi^{-4}$ on $(2, 4)$ is $2^{-4} = \frac{1}{16}$. We now determine the maximum value on $[2, 4]$ of the absolute value of the polynomial
$$g(x) = (x - 2)(x - 2.75)(x - 4) = x^3 - \frac{35}{4} x^2 + \frac{49}{2} x - 22.$$
Setting
$$g'(x) = 3x^2 - \frac{35}{2} x + \frac{49}{2} = 0 \implies x = \frac{7}{3} \;\text{ or }\; x = \frac{7}{2},$$
we get
$$g\!\left(\frac{7}{3}\right) = \frac{25}{108} \quad \text{and} \quad g\!\left(\frac{7}{2}\right) = -\frac{9}{16}.$$
Hence, the maximum error is
$$\left| \frac{f^{(3)}(\xi(x))}{3!} \, g(x) \right| \le \frac{1}{16} \cdot \frac{9}{16} = \frac{9}{256} \approx 0.0352.$$
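The bound $9/256$ can be checked numerically by sampling $|f(x) - p(x)|$ over $[2, 4]$; this sketch uses the polynomial found in Example 4.1.2 (the sampling grid is our choice):

```python
# Numerical check of the error bound 9/256 from Example 4.1.3.
def f(x): return 1 / x
def p(x): return x**2 / 22 - 35 * x / 88 + 49 / 44   # from Example 4.1.2

# sample |f - p| on a fine grid over [2, 4]
max_err = max(abs(f(2 + 2 * i / 10000) - p(2 + 2 * i / 10000))
              for i in range(10001))
print(max_err, 9 / 256)  # the observed error stays below the bound 0.03516
```

The observed maximum error is well below the theoretical bound, as expected: the bound takes the worst case of $\xi^{-4}$ and of $|g(x)|$ simultaneously.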
4.2. DIVIDED DIFFERENCES
Suppose that $p_n(x)$ is the $n$th Lagrange interpolating polynomial that agrees with the function $f$ at the distinct numbers $x_0, x_1, \ldots, x_n$. The divided differences of $f$ arise from writing $p_n(x)$ in the form
$$p_n(x) = a_0 + a_1(x - x_0) + a_2(x - x_0)(x - x_1) + \cdots + a_n(x - x_0) \cdots (x - x_{n-1}).$$
Evaluating at the nodes determines the constants successively:
$$p_n(x_0) = a_0 = f(x_0),$$
$$p_n(x_1) = a_0 + a_1(x_1 - x_0) = f(x_1) \implies a_1 = \frac{f(x_1) - f(x_0)}{x_1 - x_0}.$$
We introduce the divided-difference notation: the zeroth divided difference is $f[x_i] = f(x_i)$, the first divided difference is
$$f[x_i, x_{i+1}] = \frac{f[x_{i+1}] - f[x_i]}{x_{i+1} - x_i},$$
and, in general, the $k$th divided difference is
$$f[x_i, x_{i+1}, x_{i+2}, \ldots, x_{i+k}] = \frac{f[x_{i+1}, x_{i+2}, \ldots, x_{i+k}] - f[x_i, x_{i+1}, \ldots, x_{i+k-1}]}{x_{i+k} - x_i}.$$
With this notation, $a_k = f[x_0, x_1, \ldots, x_k]$, so that
$$p_n(x) = f[x_0] + \sum_{k=1}^{n} f[x_0, x_1, \ldots, x_k](x - x_0)(x - x_1) \cdots (x - x_{k-1}). \qquad (4.1)$$
Equation (4.1) is known as Newton's Divided-Difference formula. The table below outlines how divided differences can be determined from tabulated data:

x0   f[x0]
               f[x0,x1]
x1   f[x1]                  f[x0,x1,x2]
               f[x1,x2]                     f[x0,x1,x2,x3]
x2   f[x2]                  f[x1,x2,x3]
               f[x2,x3]
x3   f[x3]
Example 4.2.1
Suppose we have the data (1, 2), (2, 4), (4,3), (5, 0). Then,
$x_0 = 1$, $x_1 = 2$, $x_2 = 4$, $x_3 = 5$ and $f[x_0] = 2$, $f[x_1] = 4$, $f[x_2] = 3$, $f[x_3] = 0$, so that
$$f[x_0, x_1] = \frac{4 - 2}{2 - 1} = 2, \quad f[x_1, x_2] = \frac{3 - 4}{4 - 2} = -\frac{1}{2}, \quad f[x_2, x_3] = \frac{0 - 3}{5 - 4} = -3,$$
$$f[x_0, x_1, x_2] = \frac{-\frac{1}{2} - 2}{4 - 1} = -\frac{5}{6}, \quad f[x_1, x_2, x_3] = \frac{-3 - \left(-\frac{1}{2}\right)}{5 - 2} = -\frac{5}{6},$$
$$f[x_0, x_1, x_2, x_3] = \frac{-\frac{5}{6} - \left(-\frac{5}{6}\right)}{5 - 1} = 0.$$
The divided-difference table is:

1   2
           2
2   4            -5/6
          -1/2              0
4   3            -5/6
          -3
5   0
$$p_3(x) = f[x_0] + f[x_0, x_1](x - x_0) + f[x_0, x_1, x_2](x - x_0)(x - x_1) + f[x_0, x_1, x_2, x_3](x - x_0)(x - x_1)(x - x_2)$$
$$= 2 + 2(x - 1) - \frac{5}{6}(x - 1)(x - 2) + 0 \cdot (x - 1)(x - 2)(x - 4)$$
$$= -\frac{5}{3} + \frac{9}{2} x - \frac{5}{6} x^2.$$
Thus,
$$p_3(3) = -\frac{5}{3} + \frac{9}{2}(3) - \frac{5}{6}(3^2) = \frac{13}{3} \approx 4.33.$$
[Figure: the graph of $y = p_3(x)$ passing through the four data points for $0 \le x \le 5$.]
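The divided-difference table and the Newton form can be computed with the classical in-place scheme; this sketch (helper names ours) reproduces the coefficients and the value $p_3(3) = 13/3$ of Example 4.2.1:

```python
# A sketch of Newton's divided-difference method (Example 4.2.1 data).
def divided_differences(xs, ys):
    """Return [f[x0], f[x0,x1], ..., f[x0,...,xn]] (top diagonal of the table)."""
    coef = list(ys)
    n = len(xs)
    for j in range(1, n):
        # after pass j, coef[i] holds f[x_{i-j}, ..., x_i]
        for i in range(n - 1, j - 1, -1):
            coef[i] = (coef[i] - coef[i - 1]) / (xs[i] - xs[i - j])
    return coef

def newton_eval(xs, coef, x):
    """Evaluate p(x) = coef[0] + coef[1](x-x0) + ... by nested multiplication."""
    result = coef[-1]
    for k in range(len(coef) - 2, -1, -1):
        result = result * (x - xs[k]) + coef[k]
    return result

xs, ys = [1, 2, 4, 5], [2, 4, 3, 0]
coef = divided_differences(xs, ys)
print(coef)                       # -> [2, 2.0, -0.8333..., 0.0]
print(newton_eval(xs, coef, 3))   # -> 4.333..., i.e. 13/3
```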
Example 4.2.2
Write the divided difference table for the given values of x and f ( x). Hence, find p4 ( x).
x f ( x)
1.0 0.7651977
1.3 0.6200860
1.6 0.4554022
1.9 0.2818186
2.2 0.1103623
Solution:
$$f[x_0, x_1] = \frac{f[x_1] - f[x_0]}{x_1 - x_0} = \frac{0.6200860 - 0.7651977}{1.3 - 1.0} = -0.4837057$$
$$f[x_1, x_2] = \frac{f[x_2] - f[x_1]}{x_2 - x_1} = \frac{0.4554022 - 0.6200860}{1.6 - 1.3} = -0.5489460$$
$$f[x_2, x_3] = \frac{f[x_3] - f[x_2]}{x_3 - x_2} = \frac{0.2818186 - 0.4554022}{1.9 - 1.6} = -0.5786120$$
$$f[x_3, x_4] = \frac{f[x_4] - f[x_3]}{x_4 - x_3} = \frac{0.1103623 - 0.2818186}{2.2 - 1.9} = -0.5715210$$
$$f[x_0, x_1, x_2] = \frac{f[x_1, x_2] - f[x_0, x_1]}{x_2 - x_0} = \frac{-0.5489460 - (-0.4837057)}{1.6 - 1.0} = -0.1087338$$
$$f[x_1, x_2, x_3] = \frac{-0.5786120 - (-0.5489460)}{1.9 - 1.3} = -0.0494433$$
$$f[x_2, x_3, x_4] = \frac{-0.5715210 - (-0.5786120)}{2.2 - 1.6} = 0.0118183$$
$$f[x_0, x_1, x_2, x_3] = \frac{-0.0494433 - (-0.1087338)}{1.9 - 1.0} = 0.0658783$$
$$f[x_1, x_2, x_3, x_4] = \frac{0.0118183 - (-0.0494433)}{2.2 - 1.3} = 0.0680684$$
$$f[x_0, x_1, x_2, x_3, x_4] = \frac{0.0680684 - 0.0658783}{2.2 - 1.0} = 0.0018251$$
The divided-difference table is:

1.0   0.7651977
                  -0.4837057
1.3   0.6200860                 -0.1087338
                  -0.5489460                   0.0658783
1.6   0.4554022                 -0.0494433                   0.0018251
                  -0.5786120                   0.0680684
1.9   0.2818186                  0.0118183
                  -0.5715210
2.2   0.1103623

Hence, by (4.1),
$$p_4(x) = 0.7651977 - 0.4837057(x - 1.0) - 0.1087338(x - 1.0)(x - 1.3) + 0.0658783(x - 1.0)(x - 1.3)(x - 1.6)$$
$$\qquad + 0.0018251(x - 1.0)(x - 1.3)(x - 1.6)(x - 1.9).$$
4.2.1. Newton Forward- and Backward-Difference Formulas
Let us write equation (4.1) in a simplified form for the case when the nodes are arranged consecutively with equal spacing. Let $h = x_{i+1} - x_i$ for each $i$, and let $x = x_0 + sh$. Since the nodes are equally spaced, $x_i = x_0 + ih$, so
$$x - x_i = x_0 + sh - (x_0 + ih) = (s - i)h.$$
Then (4.1) becomes
$$p_n(x) = p_n(x_0 + sh) = f[x_0] + sh f[x_0, x_1] + s(s - 1)h^2 f[x_0, x_1, x_2] + \cdots + s(s - 1) \cdots (s - n + 1) h^n f[x_0, x_1, \ldots, x_n].$$
Since $s(s - 1) \cdots (s - k + 1) = k! \binom{s}{k}$, we have that
$$p_n(x) = p_n(x_0 + sh) = f[x_0] + \sum_{k=1}^{n} \binom{s}{k} k! \, h^k f[x_0, x_1, \ldots, x_k]. \qquad (4.2)$$
Equation (4.2) is the Newton Forward-Difference formula. The Newton Backward-Difference formula is obtained when the nodes are reordered from last to first. In this case, (4.1) can be written in the form
$$p_n(x) = f[x_n] + f[x_n, x_{n-1}](x - x_n) + f[x_n, x_{n-1}, x_{n-2}](x - x_n)(x - x_{n-1}) + \cdots + f[x_n, x_{n-1}, \ldots, x_0](x - x_n)(x - x_{n-1}) \cdots (x - x_1).$$
As observed above, since the nodes are equally spaced, letting $x = x_n + sh$ gives $x - x_i = (s + n - i)h$, so the $k$th term has coefficient $s(s + 1) \cdots (s + k - 1) h^k = (-1)^k \binom{-s}{k} k! \, h^k$. If we use this extended binomial-coefficient notation, then
$$p_n(x) = p_n(x_n + sh) = f[x_n] + \sum_{k=1}^{n} (-1)^k \binom{-s}{k} k! \, h^k f[x_n, x_{n-1}, \ldots, x_{n-k}]. \qquad (4.3)$$
NOTE: The Newton Forward-Difference formula is useful for interpolation near the beginning of a set of tabular values, and the Newton Backward-Difference formula for interpolation near the end of a set of tabular values.
Example 4.2.3
Use the divided-difference table of Example 4.2.2 to set up approximations of $f(1.1)$ and $f(2.0)$.
Solution:
Since $x = 1.1$ is near the first value $x_0 = 1.0$, we use the Newton Forward-Difference formula. Here,
$$h = 1.3 - 1.0 = 1.6 - 1.3 = 1.9 - 1.6 = 2.2 - 1.9 = 0.3,$$
and
$$x = x_0 + sh \implies 1.1 = 1.0 + 0.3s \implies s = \frac{1}{3}.$$
To approximate $f(2.0)$, we use the Newton Backward-Difference formula, since $x = 2.0$ is near the last value $x_4 = 2.2$:
$$x = x_n + sh = x_4 + sh \implies 2.0 = 2.2 + 0.3s \implies s = -\frac{2}{3}.$$
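Since the forward- and backward-difference forms are just rearrangements of the same polynomial $p_4$, one Newton evaluation suffices to get both approximations. This sketch rebuilds the table of Example 4.2.2 and evaluates near both ends:

```python
# Evaluating p4 of Example 4.2.2 near both ends of the table.
xs = [1.0, 1.3, 1.6, 1.9, 2.2]
ys = [0.7651977, 0.6200860, 0.4554022, 0.2818186, 0.1103623]

coef = list(ys)                      # build the top diagonal in place
for j in range(1, len(xs)):
    for i in range(len(xs) - 1, j - 1, -1):
        coef[i] = (coef[i] - coef[i - 1]) / (xs[i] - xs[i - j])

def p4(x):
    """Nested evaluation of the Newton form with coefficients coef."""
    result = coef[-1]
    for k in range(len(coef) - 2, -1, -1):
        result = result * (x - xs[k]) + coef[k]
    return result

print(round(p4(1.1), 7))  # -> 0.719646  (forward end)
print(round(p4(2.0), 7))  # -> 0.2238754 (backward end)
```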
4.3. SPLINE INTERPOLATION
In this section we discuss piecewise-polynomial approximation by linear, quadratic and cubic splines. Other higher-order splines can be derived in a similar way, with a greater increase in computational difficulty.
The simplest spline is the linear spline, which consists of joining the set of data points $(x_0, f(x_0)), (x_1, f(x_1)), \ldots, (x_n, f(x_n))$ by straight-line segments:
$$S(x) = \begin{cases} s_1(x) = f(x_0) \dfrac{x - x_1}{x_0 - x_1} + f(x_1) \dfrac{x - x_0}{x_1 - x_0}, & x \in [x_0, x_1] \\[2mm] s_2(x) = f(x_1) \dfrac{x - x_2}{x_1 - x_2} + f(x_2) \dfrac{x - x_1}{x_2 - x_1}, & x \in [x_1, x_2] \\[1mm] \quad \vdots \\ s_n(x) = f(x_{n-1}) \dfrac{x - x_n}{x_{n-1} - x_n} + f(x_n) \dfrac{x - x_{n-1}}{x_n - x_{n-1}}, & x \in [x_{n-1}, x_n]. \end{cases}$$
[Figure: a piecewise-linear interpolant joining the data points at $x_0, x_1, x_2, x_3, x_4, \ldots, x_n$.]
Example 4.3.1
Construct the linear spline interpolating the data below:
x    -1   0   1
y     0   1   3

Solution:
$$S(x) = \begin{cases} s_1(x), & x \in [-1, 0] \\ s_2(x), & x \in [0, 1] \end{cases}$$
where
$$s_1(x) = f(x_0) \frac{x - x_1}{x_0 - x_1} + f(x_1) \frac{x - x_0}{x_1 - x_0} = (0) \frac{x - 0}{-1 - 0} + (1) \frac{x - (-1)}{0 - (-1)} = 1 + x$$
$$s_2(x) = f(x_1) \frac{x - x_2}{x_1 - x_2} + f(x_2) \frac{x - x_1}{x_2 - x_1} = (1) \frac{x - 1}{0 - 1} + (3) \frac{x - 0}{1 - 0} = 1 + 2x.$$
Thus,
$$S(x) = \begin{cases} 1 + x, & x \in [-1, 0] \\ 1 + 2x, & x \in [0, 1]. \end{cases}$$
NOTE: A drawback of linear splines is that the derivative $S'(x)$ is generally discontinuous at the interior nodes $x_i$.
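A linear spline evaluator only needs to locate the subinterval containing $x$ and interpolate linearly; this sketch (names ours) uses the data of Example 4.3.1:

```python
# A sketch of linear spline evaluation (Example 4.3.1 data).
import bisect

def linear_spline(xs, ys):
    """Piecewise-linear interpolant through (xs[i], ys[i]); xs increasing."""
    def S(x):
        # locate the subinterval [xs[i], xs[i+1]] containing x
        i = min(max(bisect.bisect_right(xs, x) - 1, 0), len(xs) - 2)
        t = (x - xs[i]) / (xs[i + 1] - xs[i])
        return (1 - t) * ys[i] + t * ys[i + 1]
    return S

S = linear_spline([-1, 0, 1], [0, 1, 3])
print(S(-0.5), S(0.5))  # -> 0.5 2.0, i.e. 1+x on [-1,0] and 1+2x on [0,1]
```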
A quadratic spline has the form
$$S(x) = \begin{cases} s_1(x) = a_0 + b_0 x + c_0 x^2, & x \in [x_0, x_1] \\ s_2(x) = a_1 + b_1 x + c_1 x^2, & x \in [x_1, x_2] \\ \quad \vdots \\ s_n(x) = a_{n-1} + b_{n-1} x + c_{n-1} x^2, & x \in [x_{n-1}, x_n] \end{cases}$$
with the interpolation conditions
$$s_i(x_{i-1}) = f(x_{i-1}) \quad \text{and} \quad s_i(x_i) = f(x_i), \qquad i = 1, 2, \ldots, n.$$
Since each $s_i$ has 3 unknown constants, $S(x)$ has a total of $3n$ unknown coefficients. The smoothness condition $s_i'(x_i) = s_{i+1}'(x_i)$ imposes $n - 1$ linear conditions and interpolation imposes $2n$ linear conditions, giving a total of $3n - 1$ imposed linear conditions. This means that the conditions give a system of $3n - 1$ linear equations in $3n$ unknown coefficients, which may have infinitely many solutions. To get a unique solution, we need to impose one more condition; the problem is to decide which additional condition to impose, for example $f'(x_0) = S'(x_0)$ or $f'(x_n) = S'(x_n)$. One way to avoid this problem is to take the first piece $s_1(x)$ to be linear, as in the following example.
Example 4.3.2
Construct a quadratic spline that interpolates the data of Example 4.3.1, taking the first piece to be linear.
Solution:
Since the interval $[-1, 1]$ has been divided into two subintervals, we must have
$$S(x) = \begin{cases} s_1(x) = a_0 + b_0 x, & x \in [-1, 0] \\ s_2(x) = a_1 + b_1 x + c_1 x^2, & x \in [0, 1] \end{cases}$$
with
$$s_1(x_0) = f(x_0), \quad s_1(x_1) = s_2(x_1) = f(x_1), \quad s_2(x_2) = f(x_2) \quad \text{and} \quad s_1'(x_1) = s_2'(x_1).$$
From $s_1(-1) = f(-1)$: $a_0 + b_0(-1) = 0$, so $a_0 = b_0$.
From $s_1(0) = s_2(0)$: $a_0 + b_0(0) = a_1 + b_1(0) + c_1(0^2)$, so $a_0 = a_1$; and $s_1(0) = f(0)$ gives $a_0 = a_1 = 1$, hence $b_0 = 1$.
From $s_2(1) = f(1)$: $a_1 + b_1(1) + c_1(1^2) = 3$, so $b_1 + c_1 = 2$.
From $s_1'(0) = s_2'(0)$: $b_0 = b_1$, so $b_1 = 1$ and $c_1 = 1$. Thus,
$$S(x) = \begin{cases} 1 + x, & x \in [-1, 0] \\ 1 + x + x^2, & x \in [0, 1]. \end{cases}$$
Definition 4.3.1
A cubic spline interpolant $S$ satisfies the natural boundary conditions if $S''(x_0) = S''(x_n) = 0$, and the clamped boundary conditions if $S'(x_0) = f'(x_0)$ and $S'(x_n) = f'(x_n)$.
When natural boundary conditions occur, the spline is called a Natural Spline, and when clamped boundary conditions occur, we have a Clamped Spline.
Using Definition 4.3.1, we will construct cubic splines of the form
$$S_j(x) = a_j + b_j(x - x_j) + c_j(x - x_j)^2 + d_j(x - x_j)^3, \qquad j = 0, 1, \ldots, n - 1.$$
Example 4.3.3
1. Construct a natural cubic spline that passes through the points (1, 2), (2,3) and (3,5).
2. Construct a clamped cubic spline that passes through the points $(1, 2)$, $(2, 3)$, $(3, 5)$ and has $S'(1) = 2$ and $S'(3) = 1$.
Solutions:
1. From the given data, we have $x_0 = 1$, $x_1 = 2$, $x_2 = 3$, with $S_0(x)$ defined on $[1, 2]$ and $S_1(x)$ on $[2, 3]$:
$$S_0(x) = 2 + b_0(x - 1) + c_0(x - 1)^2 + d_0(x - 1)^3, \qquad S_1(x) = 3 + b_1(x - 2) + c_1(x - 2)^2 + d_1(x - 2)^3,$$
so that
$$S(x) = \begin{cases} S_0(x), & x \in [1, 2] \\ S_1(x), & x \in [2, 3]. \end{cases}$$
(Here $a_0 = f(1) = 2$ and $a_1 = f(2) = 3$.) The natural boundary condition $S_0''(1) = 0$ gives $c_0 = 0$, and the remaining interpolation, continuity and boundary conditions give:
(i) $S_0(2) = 3$: $\; b_0 + d_0 = 1$
(ii) $S_1(3) = 5$: $\; b_1 + c_1 + d_1 = 2$
(iii) $S_0'(2) = S_1'(2)$: $\; b_0 + 3d_0 - b_1 = 0$
(iv) $S_0''(2) = S_1''(2)$: $\; c_1 - 3d_0 = 0$
(v) $S_1''(3) = 0$: $\; c_1 + 3d_1 = 0$
Solving the system formed by equations (i)-(v) by row reduction of the augmented matrix and back-substituting, we get
$$d_1 = -\frac{1}{4}, \qquad 5d_0 + d_1 = 1 \implies d_0 = \frac{1}{4}, \qquad c_1 + 2d_0 + d_1 = 1 \implies c_1 = \frac{3}{4},$$
$$b_1 + c_1 + d_1 = 2 \implies b_1 = \frac{3}{2}, \qquad b_0 + d_0 = 1 \implies b_0 = \frac{3}{4}.$$
Thus,
$$S(x) = \begin{cases} 2 + \frac{3}{4}(x - 1) + \frac{1}{4}(x - 1)^3, & x \in [1, 2] \\ 3 + \frac{3}{2}(x - 2) + \frac{3}{4}(x - 2)^2 - \frac{1}{4}(x - 2)^3, & x \in [2, 3]. \end{cases}$$
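The natural-spline coefficients can also be produced by the standard tridiagonal algorithm (solve for the $c_j$, then recover $b_j$ and $d_j$); this sketch reproduces the coefficients of part 1:

```python
# A sketch of the natural cubic spline construction, with coefficients for
# S_j(x) = a_j + b_j(x-x_j) + c_j(x-x_j)^2 + d_j(x-x_j)^3.
def natural_cubic_spline(xs, ys):
    n = len(xs) - 1
    h = [xs[j + 1] - xs[j] for j in range(n)]
    # right-hand sides of the tridiagonal system for the c_j (c_0 = c_n = 0)
    alpha = [0.0] * (n + 1)
    for j in range(1, n):
        alpha[j] = 3 * (ys[j + 1] - ys[j]) / h[j] - 3 * (ys[j] - ys[j - 1]) / h[j - 1]
    L, mu, z = [1.0] + [0.0] * n, [0.0] * (n + 1), [0.0] * (n + 1)
    for j in range(1, n):                       # forward sweep
        L[j] = 2 * (xs[j + 1] - xs[j - 1]) - h[j - 1] * mu[j - 1]
        mu[j] = h[j] / L[j]
        z[j] = (alpha[j] - h[j - 1] * z[j - 1]) / L[j]
    b, c, d = [0.0] * n, [0.0] * (n + 1), [0.0] * n
    for j in range(n - 1, -1, -1):              # back substitution
        c[j] = z[j] - mu[j] * c[j + 1]
        b[j] = (ys[j + 1] - ys[j]) / h[j] - h[j] * (c[j + 1] + 2 * c[j]) / 3
        d[j] = (c[j + 1] - c[j]) / (3 * h[j])
    return ys[:n], b, c[:n], d

a, b, c, d = natural_cubic_spline([1, 2, 3], [2, 3, 5])
print(b[0], d[0], b[1], c[1], d[1])  # -> 0.75 0.25 1.5 0.75 -0.25
```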
2. Write $S_0(x) = 2 + b_0(x - 1) + c_0(x - 1)^2 + d_0(x - 1)^3$ and $S_1(x) = 3 + b_1(x - 2) + c_1(x - 2)^2 + d_1(x - 2)^3$ as before. The interpolation and continuity conditions give
$$S_0(2) = 3: \; b_0 + c_0 + d_0 = 1, \qquad S_1(3) = 5: \; b_1 + c_1 + d_1 = 2,$$
$$S_0'(2) = S_1'(2): \; b_0 + 2c_0 + 3d_0 - b_1 = 0, \qquad S_0''(2) = S_1''(2): \; c_0 + 3d_0 - c_1 = 0.$$
Using the clamped conditions, we get
$$S'(x_0) = f'(x_0): \; S_0'(1) = b_0 + 2c_0(1 - 1) + 3d_0(1 - 1)^2 = 2 \implies b_0 = 2,$$
$$S'(x_2) = f'(x_2): \; S_1'(3) = b_1 + 2c_1 + 3d_1 = 1.$$
Row-reducing the resulting system and back-substituting gives
$$d_1 = -\frac{3}{2}, \qquad 3d_0 + d_1 = 3 \implies d_0 = \frac{3}{2}, \qquad c_1 = 2,$$
$$c_0 + d_0 = -1 \implies c_0 = -\frac{5}{2}, \qquad b_1 + c_1 + d_1 = 2 \implies b_1 = \frac{3}{2}.$$
Thus,
$$S(x) = \begin{cases} 2 + 2(x - 1) - \frac{5}{2}(x - 1)^2 + \frac{3}{2}(x - 1)^3, & x \in [1, 2] \\ 3 + \frac{3}{2}(x - 2) + 2(x - 2)^2 - \frac{3}{2}(x - 2)^3, & x \in [2, 3]. \end{cases}$$
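A direct check that the clamped spline of part 2 satisfies every imposed condition (interpolation, $C^1$ continuity and the clamped slopes) is a useful sanity test:

```python
# Verifying the clamped cubic spline of Example 4.3.3, part 2.
def S0(x):  return 2 + 2*(x - 1) - 2.5*(x - 1)**2 + 1.5*(x - 1)**3
def S1(x):  return 3 + 1.5*(x - 2) + 2*(x - 2)**2 - 1.5*(x - 2)**3
def dS0(x): return 2 - 5*(x - 1) + 4.5*(x - 1)**2   # S0'
def dS1(x): return 1.5 + 4*(x - 2) - 4.5*(x - 2)**2  # S1'

print(S0(1), S0(2), S1(2), S1(3))     # interpolation: 2, 3, 3, 5
print(dS0(1), dS0(2), dS1(2), dS1(3)) # clamped slopes 2 and 1; matching 1.5 at x=2
```

The second derivatives also match at the interior knot: $S_0''(2) = 2c_0 + 6d_0 = 4 = 2c_1 = S_1''(2)$.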
4.4. LEAST-SQUARES APPROXIMATION
Suppose that a mathematical equation is to be fitted to experimental data by plotting the data
on a graph paper and then passing a line through the data points. The method of least squares
endeavours to determine the best approximating line by minimising the sum of the squares of
error, i.e. if ( xi , yi ), i 1, 2,3,..., n are data points and Y f ( x) is the curve to be fitted to
this data, then the error is
ei yi f ( xi )
S
0 2[ y1 a0 a1 x1 ] 2[ y2 a0 a1 x2 ] ... 2[ yn a0 a1 xn ] 0
a0
n
2 [ yi a0 a1 xi ] 0 (4.5)
i 1
and
S
0 2 x1[ y1 a0 a1 x1 ] 2 x2 [ y2 a0 a1 x2 ] ... 2 xn [ yn a0 a1 xn ] 0
a1
n
2 xi [ yi a0 a1 xi ] 0. (4.6)
i 1
Simplifying (4.5) and (4.6) leads to what are known as the normal equations:
$$\sum_{i=1}^{n} [y_i - a_0 - a_1 x_i] = 0 \implies \sum_{i=1}^{n} y_i = n a_0 + a_1 \sum_{i=1}^{n} x_i, \qquad (4.7)$$
and
$$\sum_{i=1}^{n} x_i[y_i - a_0 - a_1 x_i] = 0 \implies \sum_{i=1}^{n} x_i y_i = a_0 \sum_{i=1}^{n} x_i + a_1 \sum_{i=1}^{n} x_i^2. \qquad (4.8)$$
Letting $\bar{x} = \frac{1}{n} \sum_{i=1}^{n} x_i$ and $\bar{y} = \frac{1}{n} \sum_{i=1}^{n} y_i$, we can rewrite the formulas for $a_0$ and $a_1$ as
$$a_1 = \frac{\sum_{i=1}^{n} x_i y_i - n \bar{x} \bar{y}}{\sum_{i=1}^{n} x_i^2 - n \bar{x}^2}$$
and
$$a_0 = \frac{\bar{y} \sum_{i=1}^{n} x_i^2 - \bar{x} \sum_{i=1}^{n} x_i y_i}{\sum_{i=1}^{n} x_i^2 - n \bar{x}^2}.$$
Clearly, $\frac{\partial^2 S}{\partial a_0^2} > 0$ and $\frac{\partial^2 S}{\partial a_1^2} > 0$, showing that these values of $a_0$ and $a_1$ provide a minimum of $S$.
Example 4.4.1
The table below gives the temperature $T$ (in $^{\circ}$C) and length $l$ (in mm) of a heated rod. If $l = a + bT$, find the best values of $a$ and $b$.
T 20 30 40 50 60 70
l 800.3 800.4 800.6 800.7 800.9 801.0
Solution:
T     l       T^2     Tl
20    800.3    400    16006
30    800.4    900    24012
40    800.6   1600    32024
50    800.7   2500    40035
60    800.9   3600    48054
70    801.0   4900    56070
Sum   270   4803.9  13900   216201

$$n = 6, \quad \sum T = 270, \quad \sum l = 4803.9, \quad \sum T^2 = 13900, \quad \sum Tl = 216201.$$
$$b = \frac{n \sum Tl - \sum T \sum l}{n \sum T^2 - \left(\sum T\right)^2} = \frac{6(216201) - (270)(4803.9)}{6(13900) - (270)^2} = 0.014571428 \approx 0.0146$$
$$a = \frac{\sum T^2 \sum l - \sum Tl \sum T}{n \sum T^2 - \left(\sum T\right)^2} = \frac{(13900)(4803.9) - (216201)(270)}{6(13900) - (270)^2} = 799.9942857 \approx 800$$
Hence
$$l = 800 + 0.0146\, T.$$
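The two closed-form expressions for intercept and slope translate directly into code; this sketch (helper names ours) reproduces Example 4.4.1:

```python
# A sketch of the least-squares line fit of Example 4.4.1.
def fit_line(xs, ys):
    n = len(xs)
    sx, sy = sum(xs), sum(ys)
    sxx = sum(x * x for x in xs)
    sxy = sum(x * y for x, y in zip(xs, ys))
    b = (n * sxy - sx * sy) / (n * sxx - sx * sx)    # slope
    a = (sxx * sy - sxy * sx) / (n * sxx - sx * sx)  # intercept
    return a, b

T = [20, 30, 40, 50, 60, 70]
l = [800.3, 800.4, 800.6, 800.7, 800.9, 801.0]
a, b = fit_line(T, l)
print(round(a, 4), round(b, 4))  # -> 799.9943 0.0146
```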
More generally, a polynomial $f(x) = a_0 + a_1 x + a_2 x^2 + \cdots + a_k x^k$ can be fitted to the data by setting $\frac{\partial S}{\partial a_i} = 0$, for $i = 0, 1, 2, \ldots, k$. This leads to the following $k + 1$ normal equations in $k + 1$ unknowns:
$$n a_0 + a_1 \sum_{i=1}^{n} x_i + a_2 \sum_{i=1}^{n} x_i^2 + \cdots + a_k \sum_{i=1}^{n} x_i^k = \sum_{i=1}^{n} y_i$$
$$a_0 \sum_{i=1}^{n} x_i + a_1 \sum_{i=1}^{n} x_i^2 + a_2 \sum_{i=1}^{n} x_i^3 + \cdots + a_k \sum_{i=1}^{n} x_i^{k+1} = \sum_{i=1}^{n} x_i y_i$$
$$\vdots$$
$$a_0 \sum_{i=1}^{n} x_i^k + a_1 \sum_{i=1}^{n} x_i^{k+1} + a_2 \sum_{i=1}^{n} x_i^{k+2} + \cdots + a_k \sum_{i=1}^{n} x_i^{2k} = \sum_{i=1}^{n} x_i^k y_i.$$
Example 4.4.2
Fit a polynomial of the second degree to the data points given below:
x 0 1.0 2.0
y 1.0 6.0 17.0
Solution:

x    y     x^2   x^3   x^4   xy    x^2 y
0     1     0     0     0     0     0
1     6     1     1     1     6     6
2    17     4     8    16    34    68
Sum:  3    24     5     9    17    40    74

The normal equations for $Y = a_0 + a_1 x + a_2 x^2$ with $n = 3$ are
$$3a_0 + 3a_1 + 5a_2 = 24, \qquad 3a_0 + 5a_1 + 9a_2 = 40, \qquad 5a_0 + 9a_1 + 17a_2 = 74,$$
whose solution is $a_0 = 1$, $a_1 = 2$, $a_2 = 3$. Hence
$$Y = 1 + 2x + 3x^2.$$
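The normal-equation system can be assembled and solved mechanically for any degree $k$; this sketch (all helper names ours) uses plain Gaussian elimination and reproduces Example 4.4.2:

```python
# A sketch of polynomial least squares: build the (k+1)x(k+1) normal
# equations and solve them by Gaussian elimination with partial pivoting.
def fit_poly(xs, ys, k):
    # augmented matrix: entry (r, c) is sum x^(r+c); last column is sum x^r y
    A = [[sum(x ** (r + c) for x in xs) for c in range(k + 1)]
         + [sum((x ** r) * y for x, y in zip(xs, ys))] for r in range(k + 1)]
    n = k + 1
    for col in range(n):                          # forward elimination
        piv = max(range(col, n), key=lambda r: abs(A[r][col]))
        A[col], A[piv] = A[piv], A[col]
        for r in range(col + 1, n):
            m = A[r][col] / A[col][col]
            for c in range(col, n + 1):
                A[r][c] -= m * A[col][c]
    coeffs = [0.0] * n                            # back substitution
    for r in range(n - 1, -1, -1):
        coeffs[r] = (A[r][n] - sum(A[r][c] * coeffs[c]
                                   for c in range(r + 1, n))) / A[r][r]
    return coeffs  # [a0, a1, ..., ak]

print(fit_poly([0, 1.0, 2.0], [1.0, 6.0, 17.0], 2))  # close to [1, 2, 3]
```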
Exponential function: Let the curve $y = a_0 e^{a_1 x}$ be fitted to the data $(x_i, y_i)$, $i = 1, 2, \ldots, n$. Then, taking natural logarithms on both sides, we get
$$\ln y = \ln\!\left(a_0 e^{a_1 x}\right) = \ln a_0 + a_1 x,$$
which is of the linear form
$$Y = A_0 + A_1 x,$$
where $Y = \ln y$, $A_0 = \ln a_0$ and $A_1 = a_1$; the problem therefore reduces to fitting a straight line to the points $(x_i, \ln y_i)$.
Example 4.4.3
Fit a curve of the form $y = a_0 e^{a_1 x}$ to the data below:

x    1.00   1.25   1.50   1.75   2.00
y    5.10   5.79   6.53   7.45   8.46

Solution:
With $Y = \ln y$, the normal equations are
$$5 A_0 + A_1 \sum x_i = \sum Y_i = \sum \ln y_i$$
$$A_0 \sum x_i + A_1 \sum x_i^2 = \sum x_i Y_i = \sum x_i \ln y_i.$$

x      y      ln y    x^2      x ln y
1.00   5.10   1.629   1.0000   1.629
1.25   5.79   1.756   1.5625   2.195
1.50   6.53   1.876   2.2500   2.814
1.75   7.45   2.008   3.0625   3.514
2.00   8.46   2.135   4.0000   4.270
Sum    7.50          9.404   11.875   14.422

Thus
$$5 A_0 + 7.5 A_1 = 9.404, \qquad 7.5 A_0 + 11.875 A_1 = 14.422.$$
From the first equation, $A_1 = \frac{9.404 - 5 A_0}{7.5}$; substituting into the second gives
$$7.5 A_0 + 11.875 \left(\frac{9.404 - 5 A_0}{7.5}\right) = 14.422 \implies A_0 = 1.1224,$$
and then
$$A_1 = \frac{9.404 - 5(1.1224)}{7.5} = 0.5056.$$
Since $A_0 = \ln a_0$, we have $a_0 = e^{1.1224} = 3.072$ and $a_1 = A_1 = 0.5056$. Hence
$$y = 3.072\, e^{0.5056 x}.$$
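The log-linear trick of Example 4.4.3 in code (names ours; we use full-precision logarithms, so the result differs from the text's 3-decimal hand computation only in late digits):

```python
# A sketch of the exponential fit y = a0 * exp(a1 * x) via Y = ln y.
import math

xs = [1.00, 1.25, 1.50, 1.75, 2.00]
ys = [5.10, 5.79, 6.53, 7.45, 8.46]
Ys = [math.log(y) for y in ys]          # Y = ln y, so Y = A0 + A1 x is linear

n, sx, sY = len(xs), sum(xs), sum(Ys)
sxx = sum(x * x for x in xs)
sxY = sum(x * Y for x, Y in zip(xs, Ys))
A1 = (n * sxY - sx * sY) / (n * sxx - sx * sx)
A0 = (sY - A1 * sx) / n
a0, a1 = math.exp(A0), A1               # back-transform: a0 = e^{A0}
print(a0, a1)  # close to 3.072 and 0.5056, cf. y = 3.072 e^{0.5056 x}
```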
If the given data are not of equal quality, the fit obtained by minimising the plain sum of squares of the errors may not be very accurate. To improve the fit, a more general approach is to minimise the weighted sum of squares of the errors taken over all data points,
$$S = \sum_{i=1}^{n} w_i \left[y_i - f(x_i)\right]^2.$$
The $w_i$'s are prescribed positive numbers called weights. A weight is prescribed according to the relative accuracy of a data point. If all the data points are equally accurate, we set $w_i = 1$ for all $i$.
Example 4.4.4
In Example 4.4.1 we got the linear fit $l = 800 + 0.0146T$. Suppose that the point $(60, 800.9)$ is known to be more reliable than the others. Then we prescribe a weight (say 10) corresponding to this point only and set $w_i = 1$ for all other points, so that the normal equations become
$$a_0 \sum_{i=1}^{n} w_i + a_1 \sum_{i=1}^{n} w_i T_i = \sum_{i=1}^{n} w_i l_i$$
$$a_0 \sum_{i=1}^{n} w_i T_i + a_1 \sum_{i=1}^{n} w_i T_i^2 = \sum_{i=1}^{n} w_i T_i l_i.$$
T    l      w    T^2    wT    wl       Tl      wT^2    wTl
20   800.3   1    400    20   800.3    16006    400    16006
30   800.4   1    900    30   800.4    24012    900    24012
40   800.6   1   1600    40   800.6    32024   1600    32024
50   800.7   1   2500    50   800.7    40035   2500    40035
60   800.9  10   3600   600  8009.0    48054  36000   480540
70   801.0   1   4900    70   801.0    56070   4900    56070

The normal equations become
$$a_0 \sum_{i=1}^{n} w_i + a_1 \sum_{i=1}^{n} w_i T_i = \sum_{i=1}^{n} w_i l_i \implies 15 a_0 + 810 a_1 = 12012$$
$$a_0 \sum_{i=1}^{n} w_i T_i + a_1 \sum_{i=1}^{n} w_i T_i^2 = \sum_{i=1}^{n} w_i T_i l_i \implies 810 a_0 + 46300 a_1 = 648687.$$
From the first equation,
$$a_1 = \frac{12012 - 15 a_0}{810},$$
and substituting into the second,
$$810 a_0 + 46300 \left(\frac{12012 - 15 a_0}{810}\right) = 648687 \implies a_0 \approx 800,$$
so that
$$a_1 = \frac{12012 - 15(800)}{810} \approx 0.0148.$$
Hence
$$l = 800 + 0.0148\, T.$$
Note that the fit near the heavily weighted point becomes better as its weight is increased.
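Solving the weighted normal equations exactly (rather than rounding $a_0$ to 800 mid-way, as the hand computation above does) gives $a_0 \approx 799.977$ and $a_1 \approx 0.0152$; the sketch below (names ours) shows both sums and solution:

```python
# A sketch of the weighted least-squares line of Example 4.4.4
# (weight 10 on the point (60, 800.9), weight 1 elsewhere).
T = [20, 30, 40, 50, 60, 70]
l = [800.3, 800.4, 800.6, 800.7, 800.9, 801.0]
w = [1, 1, 1, 1, 10, 1]

sw   = sum(w)
swT  = sum(wi * Ti for wi, Ti in zip(w, T))
swl  = sum(wi * li for wi, li in zip(w, l))
swTT = sum(wi * Ti * Ti for wi, Ti in zip(w, T))
swTl = sum(wi * Ti * li for wi, Ti, li in zip(w, T, l))
# normal equations: a0*sw + a1*swT = swl ; a0*swT + a1*swTT = swTl
det = sw * swTT - swT * swT
a0 = (swl * swTT - swTl * swT) / det
a1 = (sw * swTl - swT * swl) / det
print(a0, a1)  # close to 800 and 0.015; the text's 0.0148 comes from
               # rounding a0 to 800 before back-substituting
```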
We now discuss the least-squares approximation of a continuous function on an interval $[a, b]$. Let $Y(x) = a_0 + a_1 x + \cdots + a_n x^n$ be chosen to minimise
$$S = \int_a^b w(x) \left[y(x) - a_0 - a_1 x - \cdots - a_n x^n\right]^2 dx.$$
The necessary conditions $\frac{\partial S}{\partial a_i} = 0$ for a minimum yield
$$-2 \int_a^b w(x) \left[y(x) - a_0 - a_1 x - \cdots - a_n x^n\right] dx = 0$$
$$\vdots$$
$$-2 \int_a^b x^n w(x) \left[y(x) - a_0 - a_1 x - \cdots - a_n x^n\right] dx = 0,$$
which give the normal equations
$$a_0 \int_a^b w(x)\,dx + a_1 \int_a^b x\, w(x)\,dx + a_2 \int_a^b x^2 w(x)\,dx + \cdots + a_n \int_a^b x^n w(x)\,dx = \int_a^b w(x) y(x)\,dx$$
$$a_0 \int_a^b x\, w(x)\,dx + a_1 \int_a^b x^2 w(x)\,dx + a_2 \int_a^b x^3 w(x)\,dx + \cdots + a_n \int_a^b x^{n+1} w(x)\,dx = \int_a^b x\, w(x) y(x)\,dx$$
$$\vdots$$
$$a_0 \int_a^b x^n w(x)\,dx + a_1 \int_a^b x^{n+1} w(x)\,dx + a_2 \int_a^b x^{n+2} w(x)\,dx + \cdots + a_n \int_a^b x^{2n} w(x)\,dx = \int_a^b x^n w(x) y(x)\,dx.$$
Example 4.4.5
Find the least-squares approximating polynomial of degree two for the function $f(x) = \sin \pi x$ on the interval $[0, 1]$ with respect to the weight function $w(x) = 1$.
Solution:
The normal equations are
$$a_0 \int_0^1 dx + a_1 \int_0^1 x\,dx + a_2 \int_0^1 x^2\,dx = \int_0^1 \sin \pi x\,dx \implies a_0 + \frac{1}{2} a_1 + \frac{1}{3} a_2 = \frac{2}{\pi}$$
$$a_0 \int_0^1 x\,dx + a_1 \int_0^1 x^2\,dx + a_2 \int_0^1 x^3\,dx = \int_0^1 x \sin \pi x\,dx \implies \frac{1}{2} a_0 + \frac{1}{3} a_1 + \frac{1}{4} a_2 = \frac{1}{\pi}$$
$$a_0 \int_0^1 x^2\,dx + a_1 \int_0^1 x^3\,dx + a_2 \int_0^1 x^4\,dx = \int_0^1 x^2 \sin \pi x\,dx \implies \frac{1}{3} a_0 + \frac{1}{4} a_1 + \frac{1}{5} a_2 = \frac{\pi^2 - 4}{\pi^3}.$$
Solving this system gives
$$a_0 = \frac{12\pi^2 - 120}{\pi^3} \approx -0.050465, \quad a_1 = \frac{720 - 60\pi^2}{\pi^3} \approx 4.12251, \quad a_2 = \frac{60\pi^2 - 720}{\pi^3} \approx -4.12251,$$
so that
$$Y(x) = -0.050465 + 4.12251\, x - 4.12251\, x^2.$$
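The $3 \times 3$ system of Example 4.4.5 can be set up from the closed-form integrals and solved numerically; a sketch (helpers ours):

```python
# A sketch of Example 4.4.5: quadratic least-squares fit to sin(pi x) on [0, 1].
import math

pi = math.pi
# moment matrix: integral of x^(r+c) over [0, 1] equals 1/(r+c+1)
A = [[1 / (r + c + 1) for c in range(3)] for r in range(3)]
# right-hand sides: integrals of x^r sin(pi x) over [0, 1]
rhs = [2 / pi, 1 / pi, (pi ** 2 - 4) / pi ** 3]

# Gaussian elimination (this small symmetric system needs no pivoting)
for col in range(3):
    for r in range(col + 1, 3):
        m = A[r][col] / A[col][col]
        for c in range(col, 3):
            A[r][c] -= m * A[col][c]
        rhs[r] -= m * rhs[col]
a = [0.0] * 3
for r in range(2, -1, -1):
    a[r] = (rhs[r] - sum(A[r][c] * a[c] for c in range(r + 1, 3))) / A[r][r]
print(a)  # close to [-0.050465, 4.12251, -4.12251]
```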
4.5. CHEBYSHEV POLYNOMIALS AND ECONOMISATION OF POWER SERIES
Let $f_1, f_2, \ldots, f_n$ be values of the given function and $\phi_1, \phi_2, \ldots, \phi_n$ be the corresponding values of the approximating function, and let $e$ be the error vector whose components are given by $e_i = f_i - \phi_i$. The approximating function may be chosen using the least-squares method, or it may be chosen in such a way that the maximum component of $e$ is minimised. The latter method leads to Chebyshev polynomials.
The Chebyshev polynomial of degree $n$ is defined on $[-1, 1]$ by
$$T_n(x) = \cos\!\left(n \cos^{-1} x\right),$$
and, since $\cos(-n\theta) = \cos(n\theta)$,
$$T_{-n}(x) = T_n(x).$$
Setting $\theta = \cos^{-1} x$, so that $\theta \in [0, \pi]$, gives
$$T_n(x) = \cos n\theta.$$
Using the identity $\cos(n + 1)\theta + \cos(n - 1)\theta = 2 \cos\theta \cos n\theta$, we have that
$$T_{n+1}(x) = 2x\, T_n(x) - T_{n-1}(x),$$
which is the recurrence relation that can be used to compute successively all $T_n(x)$, since we know $T_0(x)$ and $T_1(x)$:
$$T_0(x) = 1$$
$$T_1(x) = x$$
$$T_2(x) = 2x T_1(x) - T_0(x) = 2x^2 - 1$$
$$T_3(x) = 2x T_2(x) - T_1(x) = 2x(2x^2 - 1) - x = 4x^3 - 3x$$
$$T_4(x) = 8x^4 - 8x^2 + 1$$
$$T_5(x) = 16x^5 - 20x^3 + 5x$$
$$T_6(x) = 32x^6 - 48x^4 + 18x^2 - 1$$
The graphs of the first four Chebyshev polynomials are:
[Figure: graphs of $T_0(x)$, $T_1(x)$, $T_2(x)$ and $T_3(x)$ on $[-1, 1]$.]
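The recurrence $T_{n+1}(x) = 2x T_n(x) - T_{n-1}(x)$ is easy to run on coefficient lists; this sketch (names ours, coefficients stored lowest degree first) regenerates the table above:

```python
# Generating Chebyshev polynomials from T_{n+1} = 2x T_n - T_{n-1}.
def chebyshev(n):
    """Return [T0, T1, ..., Tn] as coefficient lists, lowest degree first."""
    T = [[1], [0, 1]]                             # T0 = 1, T1 = x
    for _ in range(2, n + 1):
        prev, cur = T[-2], T[-1]
        nxt = [0] + [2 * c for c in cur]          # multiply T_k by 2x
        for i, c in enumerate(prev):              # subtract T_{k-1}
            nxt[i] -= c
        T.append(nxt)
    return T

T = chebyshev(6)
print(T[4])  # -> [1, 0, -8, 0, 8],            i.e. T4 = 8x^4 - 8x^2 + 1
print(T[6])  # -> [-1, 0, 18, 0, -48, 0, 32],  i.e. T6 = 32x^6 - 48x^4 + 18x^2 - 1
```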
If $P_n(x)$ is the monic polynomial defined by $P_n(x) = 2^{1-n} T_n(x)$, then $|P_n(x)| \le 2^{1-n}$ on $[-1, 1]$, since $|T_n(x)| \le 1$; moreover, no monic polynomial of degree $n$ has a smaller maximum absolute value on $[-1, 1]$. Thus, in Chebyshev approximation, the maximum error is kept down to a minimum.
The powers of $x$ can be written in terms of the $T_n(x)$ as follows:
$$1 = T_0(x)$$
$$x = T_1(x)$$
$$x^2 = \frac{1}{2}\left[T_0(x) + T_2(x)\right]$$
$$x^3 = \frac{1}{4}\left[3T_1(x) + T_3(x)\right]$$
$$x^4 = \frac{1}{8}\left[3T_0(x) + 4T_2(x) + T_4(x)\right]$$
$$x^5 = \frac{1}{16}\left[10T_1(x) + 5T_3(x) + T_5(x)\right]$$
$$x^6 = \frac{1}{32}\left[10T_0(x) + 15T_2(x) + 6T_4(x) + T_6(x)\right]$$
Example 4.5.1
a) Economise the power series
$$\sin x \approx x - \frac{x^3}{6} + \frac{x^5}{120} - \frac{x^7}{5040}$$
while keeping the error less than 0.005.
b) Economise the power series
$$e^x \approx 1 + x + \frac{x^2}{2!} + \frac{x^3}{3!} + \frac{x^4}{4!}$$
while keeping the error less than 0.05.
Solution:
a) Since $\frac{1}{5040} \approx 0.000198$ is the first coefficient which is numerically less than 0.005, we truncate the series there:
$$\sin x \approx x - \frac{x^3}{6} + \frac{x^5}{120} = P_5(x).$$
We now express $P_5(x)$ in terms of Chebyshev polynomials:
$$P_5(x) = T_1(x) - \frac{1}{6} \cdot \frac{1}{4}\left[3T_1(x) + T_3(x)\right] + \frac{1}{120} \cdot \frac{1}{16}\left[10T_1(x) + 5T_3(x) + T_5(x)\right]$$
$$= \frac{169}{192} T_1(x) - \frac{5}{128} T_3(x) + \frac{1}{1920} T_5(x).$$
Since $\frac{1}{1920} \approx 0.00052083 < 0.005$, the $T_5$ term can be dropped, and the economised power series is
$$P_3(x) = \frac{169}{192} T_1(x) - \frac{5}{128} T_3(x) = \frac{169}{192} x - \frac{5}{128}\left(4x^3 - 3x\right) = \frac{383}{384} x - \frac{5}{32} x^3.$$
b) Truncating the series for $e^x$ after the $x^4$ term gives $P_4(x) = 1 + x + \frac{x^2}{2!} + \frac{x^3}{3!} + \frac{x^4}{4!}$, with truncation error
$$|R_4(x)| = \left|\frac{f^{(5)}(\xi(x))\, x^5}{5!}\right| \le \frac{e}{120} \approx 0.023 \quad \text{for } -1 \le x \le 1.$$
Expressing $P_4(x)$ in terms of Chebyshev polynomials:
$$P_4(x) = T_0(x) + T_1(x) + \frac{1}{4}\left[T_0(x) + T_2(x)\right] + \frac{1}{24}\left[3T_1(x) + T_3(x)\right] + \frac{1}{192}\left[3T_0(x) + 4T_2(x) + T_4(x)\right]$$
$$= \frac{81}{64} T_0(x) + \frac{9}{8} T_1(x) + \frac{13}{48} T_2(x) + \frac{1}{24} T_3(x) + \frac{1}{192} T_4(x).$$
But $\frac{1}{192} |T_4(x)| \le \frac{1}{192} \approx 0.0053$. Thus, dropping the $T_4$ term gives a total error of at most
$$|R_4(x)| + \frac{1}{192}|T_4(x)| \le 0.023 + 0.0053 = 0.0283 < 0.05.$$
Also, dropping the $T_3$ term as well would give a total error of at least
$$|R_4(x)| + \frac{1}{24}|T_3(x)| \le 0.023 + 0.04173 = 0.0647 > 0.05,$$
so the $T_3$ term must be retained. Therefore,
$$P_3(x) = \frac{81}{64} T_0(x) + \frac{9}{8} T_1(x) + \frac{13}{48} T_2(x) + \frac{1}{24} T_3(x) = \frac{191}{192} + x + \frac{13}{24} x^2 + \frac{1}{6} x^3.$$
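The bookkeeping in part a) can be done exactly with rational arithmetic: collect the $T_1$, $T_3$, $T_5$ contributions using $x^3 = \frac{1}{4}(3T_1 + T_3)$ and $x^5 = \frac{1}{16}(10T_1 + 5T_3 + T_5)$, drop $T_5$, then convert back with $T_3 = 4x^3 - 3x$. A sketch:

```python
# Economising P5(x) = x - x^3/6 + x^5/120 (Example 4.5.1a) exactly.
from fractions import Fraction as F

# Chebyshev-basis coefficients of P5: collect T1, T3, T5 contributions
t1 = F(1) - F(1, 6) * F(3, 4) + F(1, 120) * F(10, 16)
t3 = -F(1, 6) * F(1, 4) + F(1, 120) * F(5, 16)
t5 = F(1, 120) * F(1, 16)
print(t1, t3, t5)  # -> 169/192 -5/128 1/1920

# Drop the small T5 term, then convert back using T1 = x, T3 = 4x^3 - 3x:
x1 = t1 - 3 * t3   # coefficient of x
x3 = 4 * t3        # coefficient of x^3
print(x1, x3)      # -> 383/384 -5/32
```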
THE END!