[Figure: graph of F(x) versus x (x from 0 to 12, F(x) from 0 to 60), showing the n data points (x1, f1), (x2, f2), (x3, f3), (x4, f4), (x5, f5), ………, (xn, fn) lying on the curve.]
Suppose that one is given the n points (x1, f1), (x2, f2), (x3, f3), ………, (xn, fn), where fi = f(xi). Here
fi is the exact value of the function at x = xi. From the n points it is now desired to form a
polynomial from which one may estimate the function f(x) for some x different from x1, x2, ………, xn.
It is evident that if one passes a polynomial through these n points, it must have degree at least
n − 1. If the degree is less than n − 1, then one has only a sort of fit to the data, since such a curve cannot, in
general, pass through all n points. On the other hand, if the curve has degree greater than n − 1, we
shall shortly see that an infinite number of curves can be found which pass through all n points.
We shall consider here the curve of minimum degree, i.e., n − 1, which may be written in the form
y(x) = a1 + a2 x + a3 x² + ……… + an x^(n−1)    (1.1)
At this point, it should be mentioned that one motivation for developing such an interpolation
formula as Eq. 1.1 is as follows: f(x) may be a very complex function of x and requires much
computing time for a single evaluation. Therefore, if one spends some time to obtain the formula
Eq. 1.1 that gives the value of the function f(x) to a prescribed accuracy, then Eq. 1.1 can be used
in the subroutine for f(x) with little machine time for evaluation. A further motive for this section
is to introduce the students to some notions in numerical analysis. Also, the interpolation formula
developed will be used in subsequent lectures in this course.
Dr. D. A. Fadare
Page 1 of 12
17/01/2002
In Eq. 1.1, the polynomial y(x) may be regarded as the interpolating polynomial for use in estimating
f(x). The constants a1, a2, ………, an may be found by demanding that the polynomial y(x) pass
through all n points (xi, fi), i = 1, 2, ………, n:
a1 + a2 x1 + ……… + an x1^(n−1) = f1
a1 + a2 x2 + ……… + an x2^(n−1) = f2
.......................................................
a1 + a2 xn + ……… + an xn^(n−1) = fn    (1.2)
In principle, the constants a1, a2, . . . . ,an may now be found by solving the n linear simultaneous
equations (Eq. 1.2) using determinants. An alternative form for y(x), the approximating
polynomial, may also be formed as
y(x) = f1 (x − x2)(x − x3)………(x − xn) / [(x1 − x2)(x1 − x3)………(x1 − xn)]
     + f2 (x − x1)(x − x3)………(x − xn) / [(x2 − x1)(x2 − x3)………(x2 − xn)]
     + ………………………………………………
     + fn (x − x1)(x − x2)………(x − x_{n−1}) / [(xn − x1)(xn − x2)………(xn − x_{n−1})]    (1.3)
It is seen that the two forms of y(x) given by Eqs. 1.1 and 1.3 are identical in the following
manner. Firstly, the y(x) given by Eq. 1.3 has n terms on the right-hand side, each term being a
polynomial of (n − 1)st degree; hence Eqs. 1.1 and 1.3 are both polynomials of degree n − 1. Secondly, if
one examines the right-hand side of Eq. 1.3 and sets x = x1, then all terms except the first are zero,
since each of them contains the factor (x − x1). Also, the first term becomes
f1 (x1 − x2)(x1 − x3)………(x1 − xn) / [(x1 − x2)(x1 − x3)………(x1 − xn)] = f1    (1.4)
Further, if one sets x = x2, all terms except the second become zero due to the factor (x − x2), and
the second term reduces to f2. Continuing in this manner, one sees that the polynomial given by Eq.
1.3 passes through the n points (x1, f1), (x2, f2), (x3, f3), ………, (xn, fn). Hence, Eqs. 1.1 and 1.3 are
identical. The form given by Eq. 1.3 is often referred to as Lagrange's formula for interpolation.
Eq. 1.3 can be made a little neater if one defines the following functions:
π(x) = (x − x1)(x − x2)………(x − xn)    (1.5)

and

πj(x) = π(x) / (x − xj)    (1.6)
Then

y(x) = Σ_{i=1}^{n} fi πi(x) / πi(xi)    (1.7)
Eq. 1.7 is a good form for y(x) since it is possible to now expand it out to obtain the form given by
Eq. 1.1 without the use of determinants.
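Eqs. 1.5 to 1.7 translate directly into code. The following is a minimal sketch in Python (the language and the names are my choice, not part of the notes), evaluating the Lagrange form directly at a point:

```python
def lagrange_interpolate(xs, fs, x):
    """Evaluate Eq. 1.7: y(x) = sum_i f_i * pi_i(x) / pi_i(x_i),
    where pi_i(x) = prod_{j != i} (x - x_j)."""
    n = len(xs)
    y = 0.0
    for i in range(n):
        num = 1.0  # pi_i(x)
        den = 1.0  # pi_i(x_i)
        for j in range(n):
            if j != i:
                num *= x - xs[j]
                den *= xs[i] - xs[j]
        y += fs[i] * num / den
    return y

# The polynomial reproduces the data points exactly, and since y(x) has
# degree n - 1, four points recover a cubic exactly:
xs = [0.0, 1.0, 2.0, 3.0]
fs = [x**3 - 2*x + 1 for x in xs]
print(lagrange_interpolate(xs, fs, 1.5))  # 1.5**3 - 2*1.5 + 1 = 1.375
```

Evaluating the sum directly like this is adequate for small n; the numerical caution about loss of significant figures discussed later in these notes applies for large n or small intervals.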
1.1 Analysis of error involved in Eq. 1.7
It is important to note that the approximating (interpolating) polynomial is of little use unless one
knows the accuracy involved. To this end, we therefore let

f(x) = y(x) + E(x)    (1.8)

where E(x) is the error involved in representing f(x) by the approximating polynomial y(x). The
following obvious facts should now be noted. E(x) must be zero if x = x1, x2, ………, xn,
since there y(x) exactly gives back f(x). Also, E(x) must be zero if f(x) itself happens to be a
polynomial of degree n − 1 or less. From the above, we set
E(x) = π(x)G(x)    (1.9)

Eq. 1.9 then forces E(x) to be zero if x = x1, ………, xn, since π(x), given by Eq. 1.5, is seen to contain
all the factors (x − xi). It remains now to obtain a form for G(x). This is done as follows.
We artificially introduce the function

P(x) = f(x) − y(x) − π(x)G(a)    (1.10)

where a is a constant value of x. It is readily seen that P(x) = 0 at the n + 1 points x1, x2, ………, xn
and a. At the points x1, x2, ………, xn, fi = yi and π(xi) = 0, whereas at x = a the right-hand side of Eq.
1.10 becomes identically zero by Eqs. 1.8 and 1.9. In fact, P(x) = 0 at n + 1 points (at least).
Therefore, by Rolle's theorem, the slope dP/dx = 0 at (at least) n points in the range of x1, x2, ………, xn, a.
Applying Rolle's theorem again, d²P/dx² = 0 at (at least) n − 1 points in the range of x1, x2, ………, xn, a.
Continuing in this manner and using Rolle's theorem repeatedly, one finds that d^n P/dx^n = 0 at (at least)
one point in the range of (x1, x2, ………, xn, a). Let this point be ξ. Then ξ = ξ(x1, x2, ………, xn, a) and lies in
the range of its arguments. Differentiating Eq. 1.10 n times, and noting that d^n y/dx^n = 0 (since y(x) is
of (n − 1)st degree) and that d^n π(x)/dx^n = n! (n factorial) by Eq. 1.5, it is readily seen that one
obtains at x = ξ

0 = f^(n)(ξ) − 0 − n! G(a)

or

G(a) = (1/n!) f^(n)(ξ)    (1.11)

where ξ = ξ(x1, x2, ………, xn, a) has a value in the range of its arguments. Generalizing, i.e.,
replacing a by x in Eq. 1.11, gives

G(x) = (1/n!) f^(n)(ξ)    (1.12)

where

ξ = ξ(x1, x2, ………, xn, x)    (1.13)

and ξ is in the range of x1, x2, ………, xn, x. From the foregoing, one has from Eq. 1.8

f(x) = y(x) + (π(x)/n!) f^(n)(ξ)    (1.14)
The error term is now seen to automatically yield zero if f(x) is itself a polynomial of degree n − 1 or
less.

From the foregoing remarks, it is readily seen that the (n − 1)st degree polynomial passing through
the n points is given by y(x). However, once one knows y(x), it is possible to pass a curve of
arbitrary degree (at least n − 1) through the n points. This curve is readily seen to be given by the function g(x),
where

g(x) = y(x) + π(x)H(x)    (1.15)

Note that g(x) will have degree at least n − 1; H(x) is arbitrary. In particular, if H(x) = 0 one obtains g(x) =
y(x) as before. The form given by Eq. 1.15 brings to mind the familiar question on I.Q. tests of finding
the next term in a sequence, given the first few terms: the answer, of course, is always N, where N
is arbitrary. For example, given the partial sequence
1, 4, 9, 16, 25, 36    (1.16a)

one desires the 7th term. One may naturally say it is 49. However, the sequence (1.16a) is seen
to be generated by the function g(n), where
g(n) = n² + (n − 1)(n − 2)(n − 3)(n − 4)(n − 5)(n − 6)H(n)    (1.16b)
This has the same form as Eq. 1.15. With H(n) arbitrary, it is evident that the first six terms in Eq.
1.16a are faithfully reproduced while the seventh term is entirely arbitrary.
Returning to Eq. 1.14, the approximating (or interpolating) polynomial y(x) is seen to give f(x)
with an error equal to π(x) f^(n)(ξ)/n!. When one uses y(x), one usually does so for a range
of x, a ≤ x ≤ b. In other words, y(x) is used to give f(x) over the range (a, b). Naturally, the error
must be within the prescribed tolerance. It is clear that once (a, b) is chosen and x1, x2, ………, xn are
picked, using y(x) in (a, b) will result in a maximum error somewhere in (a, b). If one
can find this maximum, then it represents an upper bound for the error involved in using y(x) for
f(x) in the range (a, b).
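The maximum error over (a, b) can be estimated numerically by scanning a fine grid. A sketch in Python (names and the choice of f(x) = e^x are mine), comparing the observed error against the pointwise bound |π(x)| max|f^(n)|/n! from Eq. 1.14:

```python
import math

def lagrange_interpolate(xs, fs, x):
    # Eq. 1.7 evaluated directly.
    y = 0.0
    for i in range(len(xs)):
        num = den = 1.0
        for j in range(len(xs)):
            if j != i:
                num *= x - xs[j]
                den *= xs[i] - xs[j]
        y += fs[i] * num / den
    return y

# Interpolate e^x on (a, b) = (0, 1) with n equally spaced points, then
# scan a fine grid for the maximum true error and compare with the bound
# max|pi(x)| * max|f^(n)(x)| / n!  implied by Eq. 1.14.
a, b, n = 0.0, 1.0, 4
xs = [a + (b - a) * i / (n - 1) for i in range(n)]
fs = [math.exp(x) for x in xs]

def pi_poly(x):
    p = 1.0
    for xi in xs:
        p *= x - xi
    return p

grid = [a + (b - a) * k / 1000 for k in range(1001)]
max_err = max(abs(math.exp(x) - lagrange_interpolate(xs, fs, x)) for x in grid)
bound = max(abs(pi_poly(x)) for x in grid) * math.exp(b) / math.factorial(n)
print(max_err <= bound)  # True: the observed error never exceeds the bound
```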
One can consider this problem further by varying the values of x1, x2, ………, xn, the interpolation
points. It is obvious that for each choice of (x1, x2, ………, xn) one will get a different maximum error
in (a, b). In fact, the problem of finding the values of x1, x2, ………, xn that minimize the
maximum of |π(x) f^(n)(ξ)/n!| in (a, b) is known as the Chebyshev problem. If the Chebyshev
problem is solved, one sees that the resulting y(x) gives the best overall value for f(x) for use in the
range (a, b), in the sense that the error bound is smallest. It seems reasonable to expect that this optimum
(x1, x2, ………, xn) will depend firstly on the limits (a, b) and secondly on the function f(x) itself. If
one is not so ambitious (after all, the solution to the Chebyshev problem is tedious), one can
partially solve it as follows.
Instead of minimizing the maximum of |π(x) f^(n)(ξ)/n!| in the range (a, b), one can let the
maximum of |f^(n)(x)| in the range (a, b) be given by M. Then one can find a set (x1, x2, ………, xn)
which minimizes the maximum of |π(x)|.

[Figure: sketch of π(x) over (a, b), showing the roots x1, x2, x3, ………, x_{n−1}, xn and the stationary points ε1, ε2, ………, ε_{n−1} lying between successive roots.]
It is clear that the maxima of |π(x)| will occur at ε1, ε2, ………, ε_{n−1}, and also possibly at a and b. Here
εi is the stationary point of π(x) such that xi < εi < x_{i+1}, as shown. Of course, one only considers
those εi lying in (a, b). The points may be found by setting dπ²(x)/dx = 0; one has at x = εi

(d/dx) π²(x) = 2π(x) (d/dx) π(x)
             = 2π²(x) [1/(x − x1) + 1/(x − x2) + ……… + 1/(x − xn)]
             = 0

that is,

Σ_{k=1}^{n} 1/(εi − xk) = 0    (1.17)
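Eq. 1.17 can be solved numerically. Since the left-hand side changes sign between consecutive roots of π(x), bisection on each interior interval suffices. A sketch in Python (names are mine, not from the notes):

```python
def stationary_points(xs):
    """Solve Eq. 1.17, sum_k 1/(eps - x_k) = 0, for the stationary points
    of pi(x), by bisection between consecutive roots x_k < eps < x_{k+1}."""
    xs = sorted(xs)

    def s(x):
        return sum(1.0 / (x - xk) for xk in xs)

    eps_points = []
    for lo0, hi0 in zip(xs, xs[1:]):
        # Step slightly off the poles at the roots, where s blows up.
        lo = lo0 + 1e-9 * (hi0 - lo0)
        hi = hi0 - 1e-9 * (hi0 - lo0)
        for _ in range(200):  # bisection: s(lo) > 0 > s(hi)
            mid = 0.5 * (lo + hi)
            if s(lo) * s(mid) <= 0:
                hi = mid
            else:
                lo = mid
        eps_points.append(0.5 * (lo + hi))
    return eps_points

# For pi(x) = x(x - 1)(x - 2), pi'(x) = 0 gives 3x^2 - 6x + 2 = 0,
# i.e. x = 1 +/- 1/sqrt(3).
print(stationary_points([0.0, 1.0, 2.0]))
```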
Eq. 1.17 gives the equation for determining the εi. Let us now investigate the behaviour of π²(εi) when the
root xk is shifted to xk + δxk, where δxk is small. It is seen that

(d/dxk) π²(εi) = 2π(εi) (d/dxk) (εi − x1)(εi − x2)……(εi − xk)……(εi − xn)

             = 2π(εi) [ π(εi) Σ_{j=1}^{n} (1/(εi − xj)) (dεi/dxk) − π(εi)/(εi − xk) ]

             = 2π²(εi) [ Σ_{j=1}^{n} (1/(εi − xj)) (dεi/dxk) − 1/(εi − xk) ]

             = − 2π²(εi)/(εi − xk)    (1.18)

since Σ_{j=1}^{n} 1/(εi − xj) = 0 by Eq. 1.17.

From Eq. 1.18, if xk is moved to xk + δxk, where δxk is small and greater than zero, while the remaining
xi are held constant, π²(εi) increases if εi < xk and decreases if εi > xk. This is seen as follows: if xk
increases by a small amount, then the maxima below xk increase in magnitude while those above
xk decrease in size. Conversely, if xk decreases by a small amount, then the maxima below
xk decrease in magnitude while those above xk increase. From this
fact, we can see that the size of a maximum is decreased if the roots on either side of it approach one
another. On the other hand, if the roots become too close together, at least one of π²(a) or π²(b)
will increase.
In summary, therefore, we see that if an (x1, x2, ………, xn) can be found such that

|π(a)| = |π(ε1)| = |π(ε2)| = ……… = |π(ε_{n−1})| = |π(b)|

or all maxima are equal in magnitude, then this is the answer to the partial Chebyshev problem. This
is seen to be true since if, in fact, a set (x1, x2, ………, xn) is found such that the magnitudes of the maxima
are all equal, then if one changes any xi to xi + δxi, where δxi may be less than or greater than zero, at least
one of the maxima is increased in magnitude. The polynomial that readily satisfies this property is called
the Chebyshev polynomial; for obvious reasons it is given in the form

Tn(x) = cos(n cos⁻¹ x)    (1.19)
and has this equiripple character in the range (−1, 1). We shall now see that Tn(x) does indeed
have the required properties in (−1, 1). First, Tn(x) is seen to be an nth degree polynomial in the
following manner: from Eq. 1.19, one has

Tn(x) = cos(n cos⁻¹ x)
     = Re e^{in cos⁻¹(x)}
     = Re (e^{i cos⁻¹(x)})^n
     = Re (x + i√(1 − x²))^n

or

Tn(x) = x^n − C(n,2) x^{n−2}(1 − x²) + C(n,4) x^{n−4}(1 − x²)² − ……    (1.20)

where C(n,k) denotes the binomial coefficient. The coefficient of x^n in Eq. 1.20 is therefore

1 + C(n,2) + C(n,4) + C(n,6) + ………
The above sum is found to be 2^{n−1} by observing that

(1 + 1)^n = 2^n = 1 + C(n,1) + C(n,2) + C(n,3) + ………
(1 − 1)^n = 0 = 1 − C(n,1) + C(n,2) − C(n,3) + ………    (1.21)

Adding,

2^n = 2[1 + C(n,2) + C(n,4) + ……]

which yields

1 + C(n,2) + C(n,4) + …… = 2^{n−1}    (1.22)

Hence the coefficient of x^n in Tn(x) is 2^{n−1}.

The roots of Tn(x) are located at n cos⁻¹ x = (2k − 1)π/2, i.e.,

xk = cos[(2k − 1)π/(2n)],  k = 1, 2, ………, n    (1.23)

The ordering of the roots given by Eq. 1.23 is from the largest to the smallest, and we see that they
lie in the range (−1, 1). Note that there are n roots in this range.
The extrema of Tn(x) occur at cos⁻¹ x = jπ/n, i.e., at

ηj = cos(jπ/n),  j = 1, 2, ………, n − 1    (1.24)

with x_{k+1} < ηk < xk. The values of Tn at these extrema are all equal in magnitude, since

Tn(ηj) = cos(n · jπ/n) = cos(jπ) = (−1)^j = ±1

Also, Tn(1) = 1 and Tn(−1) = (−1)^n. Therefore the magnitudes of Tn at the intermediate extremum points ηj and at the end points ±1 are
all equal to one. From Eq. 1.22, we also see that Tn(z) can be written as

Tn(z) = 2^{n−1}(z − z1)(z − z2)……………(z − zn)    (1.25)
We now propose to make π(x) = (x − x1)………(x − xn), where (x1, ………, xn) are in the range
(a, b), have the same behaviour in this range as Tn(z) has in the range (−1, 1).

Setting

x = αz + β,  a = −α + β,  b = α + β

so that the transformation reads

x = ((b − a)/2) z + ((b + a)/2)    (1.26)

Using Eq. 1.23, one finds the interpolation points to lie at

xk = ((b − a)/2) zk + ((b + a)/2),  zk = cos[(2k − 1)π/(2n)],  k = 1, 2, ……, n    (1.27)
Then, since x − xk = α(z − zk),

π(x) = (x − x1)(x − x2)…………(x − xn)
     = α^n (z − z1)(z − z2)…………(z − zn)
     = (α^n / 2^{n−1}) · 2^{n−1}(z − z1)(z − z2)…………(z − zn)

or

π(x) = [((b − a)/2)^n / 2^{n−1}] Tn(z)    (1.28)

Since |Tn(z)| ≤ 1 in (−1, 1),

|π(x)| ≤ (b − a)^n / 2^{2n−1}  in (a, b)    (1.29)
In summary, therefore, if one wishes to use n-point Chebyshev (partial) interpolation in
the range (a, b), one has to obtain xk, k = 1, ………, n using Eq. 1.27. The approximating
polynomial (Eq. 1.7) is then

y(x) = Σ_{i=1}^{n} fi πi(x)/πi(xi)

where πi(x) is given by Eq. 1.6. An upper bound on the error will then be given by

EB = [(b − a)^n / 2^{2n−1}] (1/n!) max_{(a,b)} |f^(n)(x)|    (1.30)
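The whole procedure, Eq. 1.27 for the points plus Eq. 1.7 for the polynomial and Eq. 1.30 for the bound, can be sketched in Python (names are mine; the setting f(x) = e^x on (0.9, 1) with n = 3 anticipates Problem 1 below):

```python
import math

def chebyshev_nodes(a, b, n):
    """Interpolation points of Eq. 1.27: roots of T_n mapped onto (a, b)."""
    return [(b - a) / 2 * math.cos((2 * k - 1) * math.pi / (2 * n)) + (b + a) / 2
            for k in range(1, n + 1)]

def lagrange_interpolate(xs, fs, x):
    # Eq. 1.7 evaluated directly.
    y = 0.0
    for i in range(len(xs)):
        num = den = 1.0
        for j in range(len(xs)):
            if j != i:
                num *= x - xs[j]
                den *= xs[i] - xs[j]
        y += fs[i] * num / den
    return y

# Approximate e^x on (a, b) = (0.9, 1.0) with n = 3 Chebyshev points and
# compare the worst observed error against Eq. 1.30:
# EB = (b - a)^n / 2^(2n-1) * max|f^(n)| / n!, with max|f^(3)| = e^b here.
a, b, n = 0.9, 1.0, 3
xs = chebyshev_nodes(a, b, n)
fs = [math.exp(x) for x in xs]
EB = (b - a) ** n / 2 ** (2 * n - 1) * math.exp(b) / math.factorial(n)
worst = max(abs(math.exp(x) - lagrange_interpolate(xs, fs, x))
            for x in [a + (b - a) * k / 1000 for k in range(1001)])
print(worst <= EB)  # True: the observed error respects the bound
```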
From the definition

Tn(x) = cos(n cos⁻¹ x)    (1.31a)

one obtains, using the identity cos(n + 1)θ + cos(n − 1)θ = 2 cos θ cos nθ, the recurrence

T_{n+1}(x) = 2x Tn(x) − T_{n−1}(x),  with T0(x) = 1, T1(x) = x    (1.31b)

from which one may show that

T0(x) = 1
T1(x) = x
T2(x) = 2x² − 1
T3(x) = 4x³ − 3x
T4(x) = 8x⁴ − 8x² + 1
T5(x) = 16x⁵ − 20x³ + 5x
T6(x) = 32x⁶ − 48x⁴ + 18x² − 1    (1.31c)
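The recurrence of Eq. 1.31b generates the table of Eq. 1.31c mechanically. A sketch in Python (names are mine), representing each polynomial as a coefficient list, lowest degree first:

```python
def chebyshev_coeffs(n):
    """Coefficient lists (lowest degree first) of T_0 ... T_n via the
    recurrence T_{k+1}(x) = 2x T_k(x) - T_{k-1}(x)  (Eq. 1.31b)."""
    Ts = [[1], [0, 1]]  # T_0 = 1, T_1 = x
    for k in range(1, n):
        prev, cur = Ts[k - 1], Ts[k]
        nxt = [0] + [2 * c for c in cur]  # multiply T_k by 2x
        for i, c in enumerate(prev):      # subtract T_{k-1}
            nxt[i] -= c
        Ts.append(nxt)
    return Ts

for k, c in enumerate(chebyshev_coeffs(6)):
    print(k, c)
# T_4 -> [1, 0, -8, 0, 8], i.e. 8x^4 - 8x^2 + 1, matching Eq. 1.31c;
# the leading coefficient of T_n is 2^(n-1), as shown above via Eq. 1.22.
```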
From Eq. 1.31b, one can also easily show that the generating function for the Tn(x) is given by

G(z, x) = Σ_{n=0}^{∞} z^n Tn(x) = (1 − xz)/(1 − 2xz + z²)    (1.31d)

From Eq. 1.31a, one also easily shows that the Tn(x) are orthogonal in the range (−1, 1) with
weighting function (1 − x²)^{−1/2}:

∫_{−1}^{1} Tn(x) Tm(x) (1 − x²)^{−1/2} dx = 0,    n ≠ m
                                          = π/2,  n = m > 0
                                          = π,    n = m = 0    (1.31e)
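The orthogonality relation Eq. 1.31e can be verified numerically. The substitution x = cos θ removes the endpoint singularity, since the 1/√(1 − x²) weight cancels the Jacobian, leaving ∫₀^π cos(nθ) cos(mθ) dθ. A sketch in Python (the function name is mine):

```python
import math

def inner(n, m, steps=20000):
    """Approximate the weighted inner product of Eq. 1.31e via x = cos(theta),
    which reduces it to the integral of cos(n t) cos(m t) over (0, pi)."""
    h = math.pi / steps
    total = 0.0
    for k in range(steps):
        t = (k + 0.5) * h  # midpoint rule
        total += math.cos(n * t) * math.cos(m * t)
    return total * h

print(abs(inner(3, 5)) < 1e-9)                 # n != m: zero
print(abs(inner(4, 4) - math.pi / 2) < 1e-6)   # n = m > 0: pi/2
print(abs(inner(0, 0) - math.pi) < 1e-9)       # n = m = 0: pi
```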
Differentiating Eq. 1.31a, one has

T′_{n+1}(x)/(n + 1) = sin[(n + 1) cos⁻¹ x] / √(1 − x²)

and

T′_{n−1}(x)/(n − 1) = sin[(n − 1) cos⁻¹ x] / √(1 − x²)

Subtracting the above two equations, one gets

T′_{n+1}(x)/(n + 1) − T′_{n−1}(x)/(n − 1) = 2Tn(x),  n ≥ 2    (1.31f)
In like manner, a host of other interesting analytical relationships may be easily derived.
From the foregoing, the reader will see that the Chebyshev problem itself is one which is not too
satisfactory from a practical point of view. It merely asks us to find the best y(x) or the optimum
(x1, x2, . . . , xn) that yields a y(x) such that the error bound is a minimum.
It seems to me that a more general optimum problem would read as follows:
Given a function f(x) which one wishes to approximate in the range (a, b), how does one select the
n data points and the corresponding approximation function such that the error bound will be a
minimum?
The approximation function is seen to be given by Eq. 1.15, or

g(x) = y(x) + π(x)H(x)    (1.15)

where y(x) is the (n − 1)st degree polynomial given previously. Then, using g(x) as the approximating
function, one sees that the error term is now given by Eg(x), where

f(x) = g(x) + π(x)[f^(n)(ξ)/n! − H(x)]    (1.32)

Eg(x) = π(x)[f^(n)(ξ)/n! − H(x)]
In fact, for arbitrary (a, b) and (x1, x2, ………, xn), we see that if we are clever enough to choose
H(x) = f^(n)(ξ)/n!, then the error Eg(x) = 0. Hence a practical solution to this problem in reality is to
choose (x1, x2, ………, xn) such that f^(n)(ξ)/n! turns out to be a relatively simple function. Often in
practice f^(n)(x) may itself be slowly varying in (a, b), or at least be of roughly the same size throughout.
If so, then setting H(x) = C/n!, where C is the mean value of f^(n)(x) in the range (a, b), the approximation

g(x) = y(x) + π(x) C/n!

will yield a value of f(x) which, if one uses Chebyshev polynomial interpolation in (a, b), i.e., Eq.
1.27, has an error bound of

[(b − a)^n / 2^{2n−1}] (1/n!) max_{(a,b)} |f^(n)(x) − C|

which is better than that given by Eq. 1.30.

A word of caution when forming the approximating polynomial y(x): from Eq. 1.7, we see that

y(x) = Σ_{i=1}^{n} fi πi(x)/πi(xi)
Note that the denominators (and numerators) are of the form (x − x1)……(x − xn), that is, they
have n − 1 factors, where each factor is the difference between two numbers. Hence, if the range
(a, b) is fairly small, or if n is fairly large, then the terms πi(x) and πi(xi) will be made up of n − 1
factors, each being the difference between two reasonably sized numbers, and each difference
may well be small. In other words, significant figures will be lost when one uses this formula on
the computer. This could then lead to computed values of y(x) with errors larger than that given by
the error bound. Hence, before one uses y(x) in actual computation for a given error tolerance, be
sure that the result will retain enough significant figures to yield the desired tolerance. It may even be
necessary to go to double precision or to adopt more extreme measures.
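The mean-value correction g(x) = y(x) + π(x) C/n! described above can be checked numerically. A sketch in Python (names and the test function are mine), again using f(x) = e^x on (0.9, 1) with n = 3 and C taken as e^0.95, an approximate mean of f‴ on the interval:

```python
import math

a, b, n = 0.9, 1.0, 3
# Chebyshev points of Eq. 1.27
xs = [(b - a) / 2 * math.cos((2 * k - 1) * math.pi / (2 * n)) + (b + a) / 2
      for k in range(1, n + 1)]
fs = [math.exp(x) for x in xs]

def y(x):
    # Lagrange form, Eq. 1.7
    s = 0.0
    for i in range(n):
        num = den = 1.0
        for j in range(n):
            if j != i:
                num *= x - xs[j]
                den *= xs[i] - xs[j]
        s += fs[i] * num / den
    return s

def pi_poly(x):
    p = 1.0
    for xi in xs:
        p *= x - xi
    return p

C = math.exp(0.95)  # approximate mean of f'''(x) = e^x on (0.9, 1)

def g(x):
    return y(x) + pi_poly(x) * C / math.factorial(n)

grid = [a + (b - a) * k / 1000 for k in range(1001)]
err_y = max(abs(math.exp(x) - y(x)) for x in grid)
err_g = max(abs(math.exp(x) - g(x)) for x in grid)
print(err_g < err_y)  # True: the corrected g(x) beats plain interpolation
```

The improvement is substantial here because e^x varies little over (0.9, 1), so f‴(ξ) − C stays small, exactly the situation the argument above assumes.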
PROBLEMS
1. Develop the partial (three-point) Chebyshev interpolation polynomial y(x) (i.e., using Eq.
1.27) for estimating e^x for use in the range (0.9, 1). Disentangle y(x) and write it out as
ax² + bx + c; find a, b and c. Find an error bound. Calculate y(x) at x = 0.9, 0.91, ………, 1
and compare with the exact value of e^x. From the latter comparison, compare the true error
with the error bound. Setting C = e^{0.95}, and using the approximation function (Eq. 1.15)

g(x) = y(x) + C π(x)/3!

write g(x) = a1x³ + b1x² + c1x + d1 and find a1, b1, c1 and d1. Find the new error bound.
Repeat the above calculation for g(x) at x = 0.9, 0.91, ………, 1, compare with e^x, and
compare the true error against the error bound.
2. Show that:
a)  Tn(x) = k(1 − x²)^{1/2} (d^n/dx^n)(1 − x²)^{n − 1/2}, where k is a constant;
b)  Σ_{k=1}^{n} Ti(zk) Tj(zk) = 0 for i ≠ j; = n/2 for i = j ≠ 0; = n for i = j = 0, where the zk are the n roots of Tn given by Eq. 1.23;
c)  (1 − x²) Tn″(x) − x Tn′(x) + n² Tn(x) = 0;
d)  Tn(x) = F(−n, n; 1/2; (1 − x)/2), where F is the hypergeometric function.
NOTE:
The deadline for the submission of these problems will be communicated to you via
email.