
Chapter 4

Extrapolation Methods

4.1 Polynomial and Rational Extrapolation


In Chapter 3, we used extrapolation as a tool to estimate the discretization errors of high-order Runge-Kutta methods. We learned that these error estimates are "asymptotically correct"; thus, they converge to zero at the same rate as the actual discretization errors. Indeed, using local extrapolation, we added these error estimates to the computed solution to obtain a higher-order approximation. In this chapter, we study the use of low-order methods with repeated extrapolation to obtain high-order methods satisfying prescribed accuracy criteria. As in previous chapters, we'll consider the scalar IVP
$$y' = f(t, y), \qquad y(0) = y_0. \qquad (4.1.1)$$
With the complications introduced by computing solutions on meshes having different spacings, let us change our notation for the numerical solutions and let $z(t, h)$ denote the numerical approximation of $y(t)$ at time $t$ obtained using step size $h$. If $t_n = nh$, then $y_n = z(t_n, h)$.

Suppose that the global error of a numerical method has an expansion in powers of $h$ of the form
$$z(t, h) = y(t) + \sum_{i=1}^{q} c_i(t)\, h^i + O(h^{q+1}), \qquad (4.1.2)$$
with $c_j(t) = 0$, $j = 1, 2, \ldots, p - 1$, for a method of order $p$ ($p < q$). Gragg [4] showed that every explicit one-step method has such an expansion when $y(t)$ is smooth. Implicit Runge-Kutta methods may also have expansions of this form.


The idea of extrapolation is to combine solutions $z(t, h_j)$ obtained with different step sizes $h_j$, $j = 0, 1, \ldots$, so that successive terms of the error expansion (4.1.2) are eliminated, thus resulting in a higher-order approximation. The process is commonly called Richardson's extrapolation. Let us begin with a simple example.
Example 4.1.1. If solutions are smooth, the global error of either the forward or backward Euler method has a series expansion of the form
$$z(t, h) = y(t) + c_1 h + c_2 h^2 + \cdots.$$

Obtain two solutions at the same time $t$ using step sizes of $h_0$ and $h_0/2$, i.e.,
$$z(t, h_0) = y(t) + c_1 h_0 + c_2 h_0^2 + \cdots,$$
$$z(t, h_0/2) = y(t) + c_1 \frac{h_0}{2} + c_2 \left(\frac{h_0}{2}\right)^2 + \cdots.$$

Subtracting these two equations, we eliminate $y(t)$ and obtain
$$\frac{c_1 h_0}{2} = z(t, h_0) - z(t, h_0/2) - \frac{3}{4} c_2 h_0^2 + \cdots.$$

The leading term of this expression furnishes a discretization error estimate of either
solution. Substituting the above expression into either of the two global error expansions
yields
$$2\, z(t, h_0/2) - z(t, h_0) = y(t) - \frac{c_2 h_0^2}{2} + \cdots.$$

Thus, $2\, z(t, h_0/2) - z(t, h_0)$ provides a higher-order ($O(h^2)$) approximation of $y(t)$ than either $z(t, h_0)$ or $z(t, h_0/2)$.
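To make this concrete, here is a minimal Python sketch of the calculation (not from the original text). It applies the forward Euler method to the model problem $y' = -y$, $y(0) = 1$, borrowed from Example 4.1.2 purely for illustration; the function and variable names are my own.

```python
import math

def euler(h, t_end=1.0, y0=1.0):
    """Forward Euler for y' = -y, y(0) = y0; returns z(t_end, h)."""
    y = y0
    for _ in range(round(t_end / h)):
        y -= h * y
    return y

h0 = 0.1
z_h0 = euler(h0)                  # z(t, h0),   error O(h)
z_h0_half = euler(h0 / 2)         # z(t, h0/2), error O(h)
extrap = 2.0 * z_h0_half - z_h0   # eliminates the c1*h term, error O(h^2)

exact = math.exp(-1.0)
print(abs(z_h0 - exact), abs(z_h0_half - exact), abs(extrap - exact))
```

With $h_0 = 0.1$, the two Euler errors are roughly $1.9 \times 10^{-2}$ and $9.4 \times 10^{-3}$, while the extrapolated value is in error by only about $4 \times 10^{-4}$, consistent with the $O(h^2)$ claim.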

The same result can be obtained by approximating $z(t, h)$ by a linear polynomial $R_1(t, h)$ that interpolates $z(t, h)$ at $h = h_0$ and $h = h_0/2$. Thus, let
$$R_1(t, h) = \alpha_0(t) + \alpha_1(t)\, h.$$
The interpolation conditions require
$$z(t, h_0) = \alpha_0(t) + \alpha_1(t)\, h_0$$
and
$$z(t, h_0/2) = \alpha_0(t) + \alpha_1(t)\, \frac{h_0}{2}.$$

Figure 4.1.1: Interpretation of Richardson's extrapolation for Example 4.1.1. The line $R_1(t, h)$ through the computed values $z(t, h_0)$ and $z(t, h_0/2)$ is evaluated at $h = 0$ to give the extrapolated solution $R_1(t, 0)$.


Thus,
$$\alpha_0 = 2\, z(t, h_0/2) - z(t, h_0), \qquad \alpha_1 = \frac{2\, [\, z(t, h_0) - z(t, h_0/2)\, ]}{h_0}.$$
As shown in Figure 4.1.1, the higher-order solution is obtained as
$$R_1(t, 0) = \alpha_0 = 2\, z(t, h_0/2) - z(t, h_0).$$
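This interpolation viewpoint is easy to check numerically. The sketch below (with placeholder numbers standing in for $z(t, h_0)$ and $z(t, h_0/2)$; not the author's code) fits a straight line through the two data points and evaluates it at $h = 0$:

```python
import numpy as np

h0 = 0.1
z_h0, z_h0_half = 0.3487, 0.3585   # placeholder values for z(t, h0), z(t, h0/2)

# Linear polynomial R_1(t, h) through (h0, z(t, h0)) and (h0/2, z(t, h0/2)).
slope, intercept = np.polyfit([h0, h0 / 2], [z_h0, z_h0_half], 1)

print(intercept)                   # R_1(t, 0), the value of the line at h = 0
print(2 * z_h0_half - z_h0)        # the same value (up to roundoff)
```

Both printed numbers agree because the intercept of the line through the two points is exactly $2\, z(t, h_0/2) - z(t, h_0)$, as derived above.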

The general extrapolation procedure consists of generating a sequence of solutions $z(t, h_j)$, $j = 0, 1, \ldots, q$, with $h_0 > h_1 > \cdots > h_q$, and interpolating these solutions by a $q$th-degree polynomial $R_q^0(t, h)$. The desired higher-order approximation is the value of the interpolating polynomial at $h = 0$, i.e., $R_q^0(t, 0)$. The meaning of the superscript 0 will become clear shortly.
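The following sketch illustrates the procedure with synthetic data (the expansion coefficients are chosen arbitrarily, purely for illustration): when the values $z(t, h_j)$ follow an error expansion containing only a few powers of $h$, the $q$th-degree interpolant evaluated at $h = 0$ recovers $y(t)$ up to the neglected terms. In this synthetic case there are no neglected terms, so the recovery is exact up to roundoff.

```python
import numpy as np

# Synthetic "numerical solutions" obeying z(t, h) = y + c1*h + c2*h^2 + c3*h^3.
y_exact = 0.3679
c1, c2, c3 = 0.21, -0.04, 0.013
z = lambda h: y_exact + c1 * h + c2 * h**2 + c3 * h**3

hs = np.array([0.2, 0.1, 0.05, 0.025])     # h_0 > h_1 > h_2 > h_3
zs = z(hs)

coeffs = np.polyfit(hs, zs, 3)             # cubic through the four points
print(np.polyval(coeffs, 0.0) - y_exact)   # ~0: R_3^0(t, 0) recovers y(t)
```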


The Aitken-Neville algorithm provides a simple way of generating the necessary approximations in the form of a recurrence relation where higher-order solutions are obtained from lower-order ones. Suppressing the $t$ dependence, let $R_q^i(h)$ be the unique polynomial of degree $q$ that satisfies the interpolation conditions
$$R_q^i(h_j) = z(t, h_j), \qquad j = i, i + 1, \ldots, i + q. \qquad (4.1.3)$$
Consider $R_q^i(h)$ in the form
$$R_q^i(h) = \varphi_q^i(h)\, R_{q-1}^i(h) + \left(1 - \varphi_q^i(h)\right) R_{q-1}^{i+1}(h), \qquad (4.1.4a)$$

where $\varphi_q^i(h)$ is a linear polynomial in $h$. By assumption,
$$R_{q-1}^i(h_j) = z(t, h_j), \qquad j = i, i + 1, \ldots, i + q - 1,$$
and
$$R_{q-1}^{i+1}(h_j) = z(t, h_j), \qquad j = i + 1, i + 2, \ldots, i + q;$$
thus, the required interpolation conditions
$$R_q^i(h_j) = z(t, h_j), \qquad j = i + 1, i + 2, \ldots, i + q - 1,$$
are satisfied for all choices of $\varphi_q^i(h)$. If we additionally select
$$\varphi_q^i(h_i) = 1, \qquad \varphi_q^i(h_{i+q}) = 0,$$
then
$$R_q^i(h_i) = R_{q-1}^i(h_i) = z(t, h_i), \qquad R_q^i(h_{i+q}) = R_{q-1}^{i+1}(h_{i+q}) = z(t, h_{i+q}).$$
Since $\varphi_q^i(h)$ is a linear function of $h$,
$$\varphi_q^i(h) = \frac{h - h_{i+q}}{h_i - h_{i+q}}. \qquad (4.1.4b)$$

Combining (4.1.4a) and (4.1.4b),
$$R_q^i(h) = \frac{(h - h_{i+q})\, R_{q-1}^i(h) + (h_i - h)\, R_{q-1}^{i+1}(h)}{h_i - h_{i+q}}. \qquad (4.1.5)$$

We'll denote the extrapolated solutions $R_q^i(0)$ simply as $R_q^i$. These values may be generated conveniently in a tableau, as indicated in Figure 4.1.2.
$$\begin{array}{lllll}
z(t, h_0) = R_0^0 & R_1^0 & R_2^0 & R_3^0 & \cdots \\
z(t, h_1) = R_0^1 & R_1^1 & R_2^1 & \cdots & \\
z(t, h_2) = R_0^2 & R_1^2 & \cdots & & \\
z(t, h_3) = R_0^3 & \cdots & & & \\
\quad \vdots & & & &
\end{array}$$

Figure 4.1.2: Tableau for Richardson's extrapolation.


Using (4.1.5) with $h = 0$, the entries of the tableau satisfy
$$R_q^i = R_q^i(0) = \frac{h_i R_{q-1}^{i+1} - h_{i+q} R_{q-1}^i}{h_i - h_{i+q}} = R_{q-1}^{i+1} + \frac{R_{q-1}^{i+1} - R_{q-1}^i}{\left(\dfrac{h_i}{h_{i+q}}\right) - 1}. \qquad (4.1.6)$$

If this extrapolation technique is used with a first-order method, then the values of $R_q^i$ increase in accuracy as either $i$ or $q$ increases, provided that $y(t)$ is smooth. In this case, the global error $y(t) - R_q^i$ is $O(h^{q+1})$, provided that $h_i \to 0$ as $i \to \infty$ [4]. Finally, we note that it is not necessary to halve the step size after each pass. Other step size sequences can be used to reduce computation, for example
$$\{\, h_0,\ h_0/2,\ h_0/3,\ h_0/4,\ h_0/6,\ h_0/8,\ \ldots \,\}.$$
This should be done carefully, since some sequences lead to a loss of stability ([5], Section II.9).
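The recursion (4.1.6) is straightforward to implement. The sketch below (my own code and names; it assumes the model problem $y' = -y$, $y(0) = 1$ of Example 4.1.2 below) builds the tableau of Figure 4.1.2 column by column for the forward Euler method with the halving sequence $h_i = 2^{-i}$. Its first row should reproduce, to four digits, the first row of the upper table in Table 4.1.1.

```python
def euler(h, t_end=1.0, y0=1.0):
    """Forward Euler for y' = -y, y(0) = y0; returns z(t_end, h)."""
    y = y0
    for _ in range(round(t_end / h)):
        y -= h * y
    return y

h = [2.0 ** (-i) for i in range(9)]   # h_i = 2^{-i}, i = 0, ..., 8
R = [[euler(hi)] for hi in h]         # column q = 0: R_0^i = z(1, h_i)

# Successive columns from (4.1.6):
#   R_q^i = R_{q-1}^{i+1} + (R_{q-1}^{i+1} - R_{q-1}^i) / (h_i/h_{i+q} - 1)
for q in range(1, 5):
    for i in range(len(h) - q):
        R[i].append(R[i + 1][q - 1] +
                    (R[i + 1][q - 1] - R[i][q - 1]) / (h[i] / h[i + q] - 1.0))

print([f"{v:.4f}" for v in R[0]])     # ['0.0000', '0.5000', '0.3438', ...]
```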
Example 4.1.2. Consider the solution of
$$y' = -y, \qquad 0 < t \le 1, \qquad y(0) = 1,$$
using the forward Euler method with step sizes $h_i = 2^{-i}$, $i = 0, 1, \ldots, 8$. The results at $t = 1$ are reported in Table 4.1.1. The entries in the first column ($q = 0$) are computed by Euler's method with step size $h_i$. Subsequent columns are obtained using (4.1.6). Let us verify the entry in the first row and second column of the upper table by using (4.1.6) with $i = 0$ and $q = 1$; thus,
$$R_1^0 = R_0^1 + \frac{R_0^1 - R_0^0}{\left(\dfrac{h_0}{h_1}\right) - 1} = 0.25 + \frac{0.25 - 0.0}{2 - 1} = 0.5.$$

Columns converge at increasing powers of $h$; thus, the errors $y(1) - R_q^i$ decrease as $O(h)$ in the first ($q = 0$) column, as $O(h^2)$ in the second ($q = 1$) column, etc. This is confirmed in the bottom table, which consists of the errors divided by $h_i^{q+1}$.

Increasing $q$ or $i$ increases the number of calculations. If the basic integration scheme is Euler's method, computation increases at the rate of $2^{q+i}$ when the basic step $h_0 = 1$ is repeatedly halved. If worst-case round-off occurs at each step, the answers would lose one bit of precision for each increase in $i$ or $q$. We could reduce this problem by using a different strategy to choose the step sizes.

        q = 0    q = 1    q = 2    q = 3    q = 4
i = 0   0.0000   0.5000   0.3438   0.3701   0.3678
i = 1   0.2500   0.3828   0.3668   0.3679   0.3679
i = 2   0.3164   0.3708   0.3678   0.3679   0.3679
i = 3   0.3436   0.3685   0.3679   0.3679   0.3679
i = 4   0.3561   0.3680   0.3679   0.3679   0.3679
i = 5   0.3621   0.3679   0.3679   0.3679
i = 6   0.3650   0.3679   0.3679
i = 7   0.3664   0.3679
i = 8   0.3672

        q = 0         q = 1         q = 2         q = 3         q = 4
i = 0   3.6788e-01   -1.3212e-01    2.4129e-02   -2.2263e-03    1.0452e-04
i = 1   1.1788e-01   -1.4933e-02    1.0682e-03   -4.1157e-05    8.3834e-07
i = 2   5.1473e-02   -2.9321e-03    9.7508e-05   -1.7864e-06    1.7546e-08
i = 3   2.4271e-02   -6.5990e-04    1.0625e-05   -9.5199e-08    4.6022e-10
i = 4   1.1805e-02   -1.5701e-04    1.2449e-06   -5.5185e-09    1.3240e-11
i = 5   5.8242e-03   -3.8318e-05    1.5078e-07   -3.3249e-10
i = 6   2.8929e-03   -9.4664e-06    1.8557e-08
i = 7   1.4417e-03   -2.3527e-06
i = 8   7.1969e-04

        q = 0         q = 1         q = 2         q = 3         q = 4
i = 0   3.6788e-01   -1.3212e-01    2.4129e-02   -2.2263e-03    1.0452e-04
i = 1   2.3576e-01   -5.9732e-02    8.5453e-03   -6.5851e-04    2.6827e-05
i = 2   2.0589e-01   -4.6914e-02    6.2405e-03   -4.5731e-04    1.7967e-05
i = 3   1.9416e-01   -4.2234e-02    5.4402e-03   -3.8993e-04    1.5080e-05
i = 4   1.8888e-01   -4.0194e-02    5.0990e-03   -3.6166e-04    1.3883e-05
i = 5   1.8637e-01   -3.9238e-02    4.9408e-03   -3.4864e-04
i = 6   1.8515e-01   -3.8774e-02    4.8645e-03
i = 7   1.8454e-01   -3.8546e-02
i = 8   1.8424e-01

Table 4.1.1: Solution of Example 4.1.2 by Richardson's extrapolation. The upper table presents the extrapolated solutions $R_q^i$, the middle table presents the errors $y(1) - R_q^i$, and the lower table presents the errors divided by $h_i^{q+1}$.

For example, select $h_0 = 1$, $h_1 = 1/r$, $\ldots$, $h_i = 1/r^i$, for $1 < r \le 2$. The worst-case rounding errors in $R_0^i$ will be proportional to $r^i$, since this is approximately the number of steps. The error in $R_1^i$ could be as large as
$$r^{i+1} + \frac{r^{i+1} + r^i}{r - 1} = \frac{r^{i+1}\,(r + 1/r)}{r - 1}.$$
The error in $R_q^i$ could be as large as
$$r^{i+q}\, \frac{(r + 1/r)(r + 1/r) \cdots (r + 1/r)}{(r - 1)(r^2 - 1) \cdots (r^q - 1)}.$$
Values of $r$ near 2 will make the factor $r^{i+q}$ large, whereas values of $r$ near unity will make the denominators in the above expression small. Bulirsch and Stoer [2] used sequences with $r = 3/2$ or $r = 4/3$ so that $r^{i+q}$ grows more slowly than with $r = 2$.

Suppose the extrapolation is started with a method of order $p$ ($p \ge 1$) having an expansion in $h$ of the form
$$z(t, h) = y(t) + \sum_{i=0}^{q} c_{p+i}(t)\, h^{p+i} + O(h^{p+q+1}).$$
In this case, the approximation $R_q^i$ has an error of order $O(h^{p+q+1})$.

Some methods have error expansions that only contain even powers of $h$, e.g.,
$$z(t, h) = y(t) + \sum_{i=1}^{q} c_i(t)\, h^{\gamma i} + O(h^{\gamma(q+1)}),$$
where $\gamma$ is typically 2 or 4. The extrapolation algorithm can be modified to take advantage of this by utilizing polynomials in $h^\gamma$ to obtain
$$R_q^i = R_{q-1}^{i+1} + \frac{R_{q-1}^{i+1} - R_{q-1}^i}{\left(\dfrac{h_i}{h_{i+q}}\right)^{\gamma} - 1}. \qquad (4.1.7)$$
Now, the order of the approximation is increased by $\gamma$ in each successive column of the extrapolation tableau. If the trapezoidal rule is solved exactly at each step, then it has an expansion in even powers of $h$ ($\gamma = 2$).
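As an illustration of (4.1.7) with $\gamma = 2$, the sketch below (my own code) applies the even-power recursion to the trapezoidal rule on the linear problem $y' = -y$, $y(0) = 1$, chosen only so that each trapezoidal step can be advanced exactly by a scalar formula:

```python
import math

def trapezoid(h, t_end=1.0, y0=1.0):
    """Trapezoidal rule for y' = -y, advanced exactly:
       y_{n+1} = y_n * (1 - h/2) / (1 + h/2)."""
    y = y0
    for _ in range(round(t_end / h)):
        y *= (1.0 - h / 2.0) / (1.0 + h / 2.0)
    return y

gamma = 2                              # expansion in powers of h^2
h = [2.0 ** (-i) for i in range(5)]
R = [[trapezoid(hi)] for hi in h]

for q in range(1, 5):                  # recursion (4.1.7)
    for i in range(len(h) - q):
        R[i].append(R[i + 1][q - 1] +
                    (R[i + 1][q - 1] - R[i][q - 1]) /
                    ((h[i] / h[i + q]) ** gamma - 1.0))

print(abs(R[0][-1] - math.exp(-1.0)))  # error of the extrapolated value
```

Because each column gains two orders of accuracy here, the printed error should be many orders of magnitude smaller than that of any of the trapezoidal solutions themselves.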

Extrapolation with rational functions is also possible ([5], Section II.9). The basic idea is to approximate $z(t, h)$ by a rational function $R(h)$ and then evaluate $R(0)$. Bulirsch and Stoer [1] derived a scheme where $R_q^i(h)$ is defined as the rational approximation that interpolates $z(t, h)$ at $h = h_i, h_{i+1}, \ldots, h_{i+q}$ for $h_i > h_{i+1} > \cdots > h_{i+q}$. The values of $R_q^i = R_q^i(0)$ can be obtained from the following recursion:
$$R_{-1}^i = 0, \qquad R_0^i = z(t, h_i), \qquad (4.1.8a)$$
$$R_q^i = R_{q-1}^{i+1} + \frac{R_{q-1}^{i+1} - R_{q-1}^i}{\left(\dfrac{h_i}{h_{i+q}}\right)^{2} \left[\, 1 - \dfrac{R_{q-1}^{i+1} - R_{q-1}^i}{R_{q-1}^{i+1} - R_{q-2}^{i+1}}\, \right] - 1}, \qquad (4.1.8b)$$

when the method has an error expansion in powers of $h^2$. The computation of $R_q^i$ according to (4.1.8b) is equivalent to interpolating $z(t, h)$ by a function of the form
$$R_j^i(h) = \begin{cases} \dfrac{a_0 + a_1 h^2 + \cdots + a_j h^j}{b_0 + b_1 h^2 + \cdots + b_j h^j}, & \text{if } j \text{ is even,} \\[2.5ex] \dfrac{a_0 + a_1 h^2 + \cdots + a_{j-1} h^{j-1}}{b_0 + b_1 h^2 + \cdots + b_{j-1} h^{j-1}}, & \text{if } j \text{ is odd.} \end{cases} \qquad (4.1.8c)$$
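Below is a minimal sketch of the recursion (4.1.8) as written above (my own code and names; an illustration, not a production routine). Note that the quotient in the denominator of (4.1.8b) is undefined whenever $R_{q-1}^{i+1} = R_{q-2}^{i+1}$, so a robust implementation would guard against that and fall back to the polynomial formula.

```python
def rational_extrapolate(h, z):
    """Evaluate R_{n-1}^0(0) via (4.1.8a)-(4.1.8b) for data z[i] = z(t, h[i])."""
    n = len(h)
    cols = [[0.0] * n, list(z)]           # columns R_{-1}^i = 0 and R_0^i = z(t, h_i)
    for q in range(1, n):
        prev, prev2 = cols[-1], cols[-2]  # columns q - 1 and q - 2
        col = []
        for i in range(n - q):
            diff = prev[i + 1] - prev[i]
            denom = ((h[i] / h[i + q]) ** 2 *
                     (1.0 - diff / (prev[i + 1] - prev2[i + 1])) - 1.0)
            col.append(prev[i + 1] + diff / denom)
        cols.append(col)
    return cols[-1][0]

# Quick check with synthetic data having an h^2 expansion (illustration only):
hs = [1.0, 0.5, 0.25, 0.125, 0.0625]
zs = [0.3679 + 0.05 * h**2 - 0.01 * h**4 for h in hs]
print(rational_extrapolate(hs, zs))       # should be very close to 0.3679
```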

There are several enhancements to the basic extrapolation approach. Some of these follow.
1. A variable-order extrapolation algorithm could choose the order (i.e., the number of columns in the tableau) adaptively. Error estimates can be computed as the difference between the first subdiagonal element and the diagonal or by the difference between two successive diagonal elements.
2. Extrapolation methods can be written as Runge-Kutta methods if the step size and order of the extrapolation are fixed. In this case, convergence, stability, and error bounds follow from the results of Chapter 3 for general one-step methods.
3. Implicit methods can be used in combination with extrapolation to solve stiff problems. A survey of the state of the art was written by Deuflhard [3], who prefers polynomial extrapolation to rational extrapolation because rational extrapolation lacks translation invariance, rational extrapolation can impose restrictions on the base step size, and polynomial extrapolation is slightly more efficient.

Bibliography

[1] R. Bulirsch and J. Stoer. Fehlerabschätzungen und Extrapolation mit rationalen Funktionen bei Verfahren vom Richardson-Typus. Numer. Math., 6:413–427, 1964.
[2] R. Bulirsch and J. Stoer. Numerical treatment of ordinary differential equations by extrapolation methods. Numer. Math., 8:1–13, 1966.
[3] P. Deuflhard. Recent progress in extrapolation methods for ordinary differential equations. SIAM Review, 27:505–535, 1985.
[4] W.B. Gragg. On extrapolation algorithms for ordinary initial value problems. SIAM J. Numer. Anal., 2:384–403, 1964.
[5] E. Hairer, S.P. Nørsett, and G. Wanner. Solving Ordinary Differential Equations I: Nonstiff Problems. Springer-Verlag, Berlin, second edition, 1993.
