
Scientific Computing– LESSON 11: Polynomial Interpolation: Error Analysis 1

Interpolation Error

Let p_n(t) be the unique polynomial of degree at most n that interpolates the n + 1
data points (t_i, f_i), i = 0, 1, 2, . . . , n. Suppose that the data points are sampled from
some underlying function f(t), i.e., f_i = f(t_i).

Although p_n(t) and f(t) agree at the points t_i, it is natural to ask how close they
are to each other at an arbitrary point t. This calls for a study of the approximation
error.

There are various ways of measuring an approximation error. We shall consider
the maximum error (the L∞ error)

    max_{t∈[a,b]} |f(t) − p_n(t)|

in some interval [a, b] of interest.

Clearly, this approximation error depends on the behavior of f(t) as well as on the
distribution of the interpolation points t_i, i = 0, 1, 2, . . . , n. In the following we shall
first derive an expression for the error term, and then identify the arrangement of
the interpolation points t_i that minimizes the approximation error.

Suppose that there are n + 1 data points (t_i, f_i), i = 0, 1, 2, . . . , n, to be interpolated,
where f_i = f(t_i). Recall that p_n(t) is the unique polynomial of degree at most n
interpolating f(t) at the t_i, i = 0, 1, 2, . . . , n. Let α be an arbitrary but fixed number.

Clearly, for any value of λ, the polynomial

    p_{n+1}(t) = p_n(t) + λ φ_{n+1}(t),    where φ_{n+1}(t) = (t − t_0)(t − t_1) · · · (t − t_n),

also interpolates the n + 1 points (t_i, f_i), i = 0, 1, 2, . . . , n, since φ_{n+1}(t) vanishes
at every t_i.

In addition, assuming α is distinct from every t_i (so that φ_{n+1}(α) ≠ 0), we make
p_{n+1}(t) interpolate (α, f(α)) by setting

    λ = [f(α) − p_n(α)] / φ_{n+1}(α).

That is,

    p_{n+1}(t) = p_n(t) + φ_{n+1}(t) [f(α) − p_n(α)] / φ_{n+1}(α).

Consider the function

    g(t) = f(t) − p_{n+1}(t),

and assume that f is n + 1 times differentiable. Since g(t) vanishes at the n + 2
points α and t_i, i = 0, 1, 2, . . . , n, by applying Rolle's Theorem repeatedly we
conclude that the derivative g^{(n+1)}(t) has a zero η_α between the smallest and the
largest of these n + 2 points, i.e.,

    g^{(n+1)}(η_α) = f^{(n+1)}(η_α) − p_{n+1}^{(n+1)}(η_α) = 0.

Since p_n^{(n+1)}(t) = 0 and φ_{n+1}^{(n+1)}(t) = (n + 1)!, we have

    p_{n+1}^{(n+1)}(η_α) = φ_{n+1}^{(n+1)}(η_α) [f(α) − p_n(α)] / φ_{n+1}(α)
                         = (n + 1)! [f(α) − p_n(α)] / φ_{n+1}(α).

It follows that

    f^{(n+1)}(η_α) − (n + 1)! [f(α) − p_n(α)] / φ_{n+1}(α) = 0,

or

    f(α) = p_{n+1}(α) = p_n(α) + [f^{(n+1)}(η_α) / (n + 1)!] φ_{n+1}(α).

Therefore

    f(α) − p_n(α) = [f^{(n+1)}(η_α) / (n + 1)!] φ_{n+1}(α).

Since α is arbitrary, we may replace it by t to yield the approximation error

    e(t) = f(t) − p_n(t) = [f^{(n+1)}(η_t) / (n + 1)!] (t − t_0)(t − t_1) · · · (t − t_n),

where η_t lies between the smallest and the largest of the points t, t_0, t_1, . . . , t_n.

An example: Suppose we wish to approximate the function sin(t) over [0, 1] using
a quadratic polynomial p_2(t) interpolating it at the three points 0, 0.5, and 1.0.

Since the third derivative of sin(t) is bounded by 1 in magnitude, the approximation
error is bounded by

    |sin t − p_2(t)| ≤ (1/6) |t(t − 0.5)(t − 1)| ≤ 1/(72√3) ≈ 0.008.
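As a quick numerical check of this bound, one can evaluate the quadratic interpolant on a dense grid. This is a sketch using only the standard library; the Lagrange-form evaluation and the grid resolution are our choices, not part of the text.

```python
import math

# Quadratic interpolant of sin(t) at the three points from the text.
nodes = [0.0, 0.5, 1.0]
vals = [math.sin(t) for t in nodes]

def p2(t):
    """Evaluate the interpolant in Lagrange form."""
    total = 0.0
    for i, ti in enumerate(nodes):
        basis = 1.0
        for j, tj in enumerate(nodes):
            if j != i:
                basis *= (t - tj) / (ti - tj)
        total += vals[i] * basis
    return total

# Theoretical bound: (1/6) * max|t(t - 0.5)(t - 1)| = 1/(72*sqrt(3)) ≈ 0.008.
bound = 1 / (72 * math.sqrt(3))
max_err = max(abs(math.sin(t) - p2(t)) for t in (k / 1000 for k in range(1001)))
print(max_err, bound)  # the observed error stays below the bound
```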

Placement of interpolation points



Figure 1: Interpolation of f(t) = 1/(25t² + 1) by p_10(t) at 11 equally spaced points t_i in [−5, 5].

Suppose that the interpolation points t_i, i = 0, 1, 2, . . . , n, are uniformly distributed
over an interval [a, b], i.e., t_i = a + i(b − a)/n. Then, as n increases, the interpolation
polynomial p_n(t) may fail to converge to the underlying function f. An example is
provided by Runge's function

    f(t) = 1/(25t² + 1),    t ∈ [−5, 5].

Figure 1 shows the graph of the interpolating polynomial p_10(t) of f(t) at 11
equally spaced points t_i in [−5, 5]. You may try different values of n greater than
10, using uniformly distributed interpolation points t_i, to see that, in general, the
magnitude of oscillation of p_n(t) increases rapidly as n increases.
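To observe this divergence numerically, the following sketch measures the maximum error on a dense grid; the helper names, the grid resolution, and the values of n are our choices.

```python
import math

def runge(t):
    return 1.0 / (25 * t * t + 1)

def lagrange_interp(nodes, values, t):
    """Evaluate the interpolating polynomial through (nodes, values) at t."""
    total = 0.0
    for i, ti in enumerate(nodes):
        basis = 1.0
        for j, tj in enumerate(nodes):
            if j != i:
                basis *= (t - tj) / (ti - tj)
        total += values[i] * basis
    return total

def max_error_uniform(n, samples=2001):
    """Max |f - p_n| on a dense grid, using n+1 equally spaced nodes on [-5, 5]."""
    nodes = [-5 + 10 * i / n for i in range(n + 1)]
    values = [runge(t) for t in nodes]
    grid = (-5 + 10 * k / (samples - 1) for k in range(samples))
    return max(abs(runge(t) - lagrange_interp(nodes, values, t)) for t in grid)

for n in (5, 10, 15, 20):
    print(n, max_error_uniform(n))  # the error grows rapidly with n
```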

In the above example, the large error is due partly to the fact that the Runge
function has poles (at t = ±i/5) close to the interpolation interval in the complex
plane, and partly to the inherently large oscillations of φ_{n+1}(t) = ∏_{i=0}^{n} (t − t_i)
in the error term when the interpolation points t_i are uniformly distributed.

One remedy for this problem is to use a better placement of the interpolation
points, chosen to make

    max_{t∈[a,b]} |∏_{i=0}^{n} (t − t_i)|

as small as possible.

It has been shown by Chebyshev that

    min max_{t∈[−1,1]} |∏_{i=0}^{n} (t − t_i)| = 2^{−n},

where the minimization is taken over all possible placements of the t_i, i = 0, 1, 2, . . . , n,
within [−1, 1], and that the minimum is achieved when

    t_i = cos( (2(n − i) + 1)π / (2n + 2) ),    i = 0, 1, 2, . . . , n.
These interpolation points are called the Chebyshev points.
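The claimed min–max value can be checked numerically by evaluating |φ_{n+1}(t)| on a dense grid. This is a sketch; the choice n = 8, the helper names, and the grid size are arbitrary.

```python
import math

def chebyshev_points(n):
    """The n+1 Chebyshev points on [-1, 1], using the formula above."""
    return [math.cos((2 * (n - i) + 1) * math.pi / (2 * n + 2))
            for i in range(n + 1)]

def max_abs_phi(nodes, samples=20001):
    """Approximate max over [-1, 1] of |prod_i (t - t_i)| on a dense grid."""
    best = 0.0
    for k in range(samples):
        t = -1 + 2 * k / (samples - 1)
        prod = 1.0
        for ti in nodes:
            prod *= t - ti
        best = max(best, abs(prod))
    return best

n = 8
pts = chebyshev_points(n)
print(pts == sorted(pts))           # the formula lists the points in increasing order
print(max_abs_phi(pts), 2.0 ** -n)  # these agree: 2^{-8} = 0.00390625
```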

There is a simple geometric description of the Chebyshev points. First, generate
the n + 1 uniformly spaced points

    v_i = ( cos( (2(n − i) + 1)π / (2n + 2) ), sin( (2(n − i) + 1)π / (2n + 2) ) ),    i = 0, 1, 2, . . . , n,

on the upper half of the unit circle. Then, project these points down to the x-axis
to yield the Chebyshev points. Note that the Chebyshev points are distributed more
densely towards the ends of [−1, 1] than in the middle of the interval.

The Chebyshev points defined above over [−1, 1] can be transformed into any
interval [a, b] via the mapping

    t̃ = a + (t + 1)(b − a)/2.

You may approximate Runge's function again using an interpolation polynomial
p_n(t) with the Chebyshev points to see how much their use improves the
approximation error.
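As a sketch of this experiment (node count and helper names are our choices), the Chebyshev points are carried to [−5, 5] with the affine mapping t̃ = a + (t + 1)(b − a)/2:

```python
import math

def runge(t):
    return 1.0 / (25 * t * t + 1)

def lagrange_interp(nodes, values, t):
    """Evaluate the interpolating polynomial through (nodes, values) at t."""
    total = 0.0
    for i, ti in enumerate(nodes):
        basis = 1.0
        for j, tj in enumerate(nodes):
            if j != i:
                basis *= (t - tj) / (ti - tj)
        total += values[i] * basis
    return total

def max_error(nodes, samples=2001):
    """Max |runge - interpolant| over a dense grid on [-5, 5]."""
    values = [runge(t) for t in nodes]
    grid = (-5 + 10 * k / (samples - 1) for k in range(samples))
    return max(abs(runge(t) - lagrange_interp(nodes, values, t)) for t in grid)

n = 20
uniform = [-5 + 10 * i / n for i in range(n + 1)]
# Chebyshev points on [-1, 1], mapped to [a, b] = [-5, 5].
cheb = [-5 + (math.cos((2 * (n - i) + 1) * math.pi / (2 * n + 2)) + 1) * 5
        for i in range(n + 1)]
print(max_error(uniform))  # large: wild oscillations near the interval ends
print(max_error(cheb))     # far smaller: no blow-up at the ends
```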

Another instructive exercise is to plot the graph of φ_{n+1}(t) = ∏_{i=0}^{n} (t − t_i)
over [−1, 1], first using the uniformly distributed interpolation points and then using
the Chebyshev points, to compare their oscillation behaviors.
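The same comparison can be made without plotting by measuring max|φ_{n+1}| for both node families on a dense grid (a sketch; the values of n and the grid density are arbitrary):

```python
import math

def max_abs_phi(nodes, samples=20001):
    """Approximate max over [-1, 1] of |prod_i (t - t_i)|."""
    best = 0.0
    for k in range(samples):
        t = -1 + 2 * k / (samples - 1)
        prod = 1.0
        for ti in nodes:
            prod *= t - ti
        best = max(best, abs(prod))
    return best

for n in (4, 8, 12, 16):
    uniform = [-1 + 2 * i / n for i in range(n + 1)]
    cheb = [math.cos((2 * (n - i) + 1) * math.pi / (2 * n + 2))
            for i in range(n + 1)]
    # Uniform nodes let |phi| swell near the endpoints; Chebyshev nodes hold
    # it at 2^{-n} over the whole interval.
    print(n, max_abs_phi(uniform), max_abs_phi(cheb))
```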

Optimality of the Chebyshev points

Denote φ_{n+1}(t) = ∏_{i=0}^{n} (t − t_i). We are going to show that max_{t∈[−1,1]} |φ_{n+1}(t)|
is indeed minimized by the Chebyshev points.

Define the Chebyshev polynomial of degree n by

    T_n(t) = cos(n arccos(t)),    t ∈ [−1, 1],

or, equivalently,

    T_n(t) = cos(nθ),    t = cos θ.

Since T_n(cos θ) = cos(nθ), one way to obtain T_n(t) is to expand cos(nθ) as a
polynomial in cos θ, and then replace cos θ by t.

Since

    cos((n + 1)θ) + cos((n − 1)θ) = 2 cos θ cos(nθ),

the T_n(t) satisfy the recurrence relation

    T_{n+1}(t) = 2t T_n(t) − T_{n−1}(t),    n ≥ 1,

with T_0(t) = 1 and T_1(t) = t. It follows by induction that the coefficient of the
leading term t^{n+1} of T_{n+1}(t) is 2^n, for n ≥ 0.
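A brief check of the recurrence and of the leading coefficient; this is a sketch, and `cheb_coeffs`, which tracks coefficient lists through the recurrence, is our own helper.

```python
import math

def cheb_T(n, t):
    """T_n(t) via the recurrence T_{k+1} = 2 t T_k - T_{k-1}."""
    if n == 0:
        return 1.0
    prev, cur = 1.0, t
    for _ in range(n - 1):
        prev, cur = cur, 2 * t * cur - prev
    return cur

def cheb_coeffs(n):
    """Coefficients of T_n (lowest degree first), built from the recurrence."""
    if n == 0:
        return [1.0]
    a, b = [1.0], [0.0, 1.0]               # T_0 and T_1
    for _ in range(n - 1):
        nxt = [0.0] + [2 * c for c in b]   # 2 t T_k: shift up one degree, double
        for i, c in enumerate(a):
            nxt[i] -= c                    # ... minus T_{k-1}
        a, b = b, nxt
    return b

# The recurrence reproduces cos(n arccos t):
print(abs(cheb_T(7, 0.3) - math.cos(7 * math.acos(0.3))))  # ~0, up to rounding
# The leading coefficient of T_6 is 2^5:
print(cheb_coeffs(6)[-1])  # 32.0
```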

Since T_{n+1}(t) = cos((n + 1)θ) vanishes at

    θ_i = (2(n − i) + 1)π / (2(n + 1)),    i = 0, 1, . . . , n,

the zeros of T_{n+1}(t) are

    t_i = cos θ_i = cos( (2(n − i) + 1)π / (2(n + 1)) ),    i = 0, 1, . . . , n.

Clearly, T_{n+1}(t) attains its extreme values, −1 and 1, alternately at the following
n + 2 points:

    t̃_i = cos( (n + 1 − i)π / (n + 1) ),    i = 0, 1, . . . , n, n + 1,

so that t̃_0 = −1 and t̃_{n+1} = 1. Hence, max_{t∈[−1,1]} |T_{n+1}(t)| = 1.
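A numerical spot-check of this alternation (a sketch; the choice n = 6 is arbitrary):

```python
import math

n = 6
# The n+2 extreme points of T_{n+1} on [-1, 1].
ext = [math.cos((n + 1 - i) * math.pi / (n + 1)) for i in range(n + 2)]
# Evaluate T_{n+1}(t) = cos((n+1) arccos t) at each of them.
vals = [math.cos((n + 1) * math.acos(t)) for t in ext]
print([round(v) for v in vals])  # alternates between -1 and 1
```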

Now consider the monic polynomial

    r_{n+1}(t) = 2^{−n} T_{n+1}(t),

for n ≥ 0. (A polynomial is said to be monic if its leading coefficient is 1.)

We claim that

    max_{t∈[−1,1]} |r_{n+1}(t)| = 2^{−n}

achieves the minimum of

    max_{t∈[−1,1]} |h_{n+1}(t)|

among all monic polynomials h_{n+1}(t) of degree n + 1.
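Before turning to the proof, a quick numerical comparison; this is a sketch, and the two competing monic polynomials are arbitrary examples of ours.

```python
import math

def cheb_T(n, t):
    """T_n(t) via the three-term recurrence."""
    if n == 0:
        return 1.0
    prev, cur = 1.0, t
    for _ in range(n - 1):
        prev, cur = cur, 2 * t * cur - prev
    return cur

def max_on_grid(f, samples=4001):
    """Approximate max of |f| over [-1, 1] on a dense grid."""
    return max(abs(f(-1 + 2 * k / (samples - 1))) for k in range(samples))

n = 7
r = lambda t: cheb_T(n + 1, t) / 2 ** n          # the monic polynomial r_{n+1}
print(max_on_grid(r), 2.0 ** -n)                 # equals 2^{-7} = 0.0078125
# Two other monic polynomials of degree n + 1 do noticeably worse:
print(max_on_grid(lambda t: t ** (n + 1)))       # 1.0, attained at t = +-1
print(max_on_grid(lambda t: t ** (n + 1) - t))   # 2.0, attained at t = -1
```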

We shall prove the claim by contradiction. Suppose that there is a monic polynomial
q_{n+1}(t) of degree n + 1, different from r_{n+1}(t), such that

    max_{t∈[−1,1]} |q_{n+1}(t)| < 2^{−n}.

Consider the polynomial

    g(t) ≡ r_{n+1}(t) − q_{n+1}(t)

at the n + 2 points

    t̃_i = cos( (n + 1 − i)π / (n + 1) ),    i = 0, 1, . . . , n, n + 1.

Since r_{n+1}(t̃_i) alternates between −2^{−n} and 2^{−n} while |q_{n+1}(t̃_i)| < 2^{−n},
the sign of g(t̃_i) alternates, so g(t) changes sign at least n + 1 times between
consecutive points t̃_i and t̃_{i+1}, i = 0, 1, . . . , n. By the intermediate value theorem,
g(t) has at least n + 1 zeros over [−1, 1].

On the other hand, since the leading terms of the monic polynomials r_{n+1}(t) and
q_{n+1}(t) cancel, g(t) is a nonzero polynomial of degree at most n. Therefore, g(t)
has at most n zeros.

This contradiction proves the claim. Since the Chebyshev points are precisely the
zeros of r_{n+1}(t), the claim establishes their optimality.
