
Jim Lambers

MAT 772
Fall Semester 2010-11
Lecture 20 Notes

These notes correspond to Sections 12.3, 12.4 and 12.5 in the text.

Consistency and Convergence


We have learned that the numerical solution obtained from Euler’s method,

𝑦𝑛+1 = 𝑦𝑛 + ℎ𝑓 (𝑡𝑛 , 𝑦𝑛 ), 𝑡𝑛 = 𝑡0 + 𝑛ℎ,

converges to the exact solution 𝑦(𝑡) of the initial value problem

𝑦 ′ = 𝑓 (𝑡, 𝑦), 𝑦(𝑡0 ) = 𝑦0 ,

as ℎ → 0.
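Before analyzing general one-step methods, it may help to see Euler's method in code. The sketch below is an illustration, not from the text; the function name `euler` and the test problem 𝑦′ = −𝑦 are chosen for convenience.

```python
import math

def euler(f, t0, y0, h, n_steps):
    """Approximate the solution of y' = f(t, y), y(t0) = y0 with Euler's method."""
    t, y = t0, y0
    for _ in range(n_steps):
        y = y + h * f(t, y)   # y_{n+1} = y_n + h f(t_n, y_n)
        t = t + h             # t_{n+1} = t_0 + (n+1) h
    return y

# Sample problem: y' = -y, y(0) = 1, whose exact solution is e^{-t}
approx = euler(lambda t, y: -y, 0.0, 1.0, 0.01, 100)
```

With ℎ = 0.01, the computed value at 𝑡 = 1 is close to 𝑒⁻¹, and the error shrinks proportionally to ℎ, consistent with the convergence result quoted above.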
We now analyze the convergence of a general one-step method of the form

𝑦𝑛+1 = 𝑦𝑛 + ℎΦ(𝑡𝑛 , 𝑦𝑛 , ℎ),

for some continuous function Φ(𝑡, 𝑦, ℎ). We define the local truncation error of this one-step method
by
𝑇𝑛 (ℎ) = [𝑦(𝑡𝑛+1 ) − 𝑦(𝑡𝑛 )]/ℎ − Φ(𝑡𝑛 , 𝑦(𝑡𝑛 ), ℎ).

That is, the local truncation error is the result of substituting the exact solution into the approxi-
mation of the ODE by the numerical method.
As ℎ → 0 and 𝑛 → ∞, in such a way that 𝑡0 + 𝑛ℎ = 𝑡 ∈ [𝑡0 , 𝑇 ], we obtain

𝑇𝑛 (ℎ) → 𝑦 ′ (𝑡) − Φ(𝑡, 𝑦(𝑡), 0).

We therefore say that the one-step method is consistent if

Φ(𝑡, 𝑦, 0) = 𝑓 (𝑡, 𝑦).

That is, a consistent one-step method is one whose difference equation reduces to the ODE itself in the limit ℎ → 0.


We then say that a one-step method is stable if Φ(𝑡, 𝑦, ℎ) is Lipschitz continuous in 𝑦. That is,

∣Φ(𝑡, 𝑢, ℎ) − Φ(𝑡, 𝑣, ℎ)∣ ≤ 𝐿Φ ∣𝑢 − 𝑣∣, 𝑡 ∈ [𝑡0 , 𝑇 ], 𝑢, 𝑣 ∈ ℝ, ℎ ∈ [0, ℎ0 ],

for some constant 𝐿Φ .

We now show that a consistent and stable one-step method is convergent. Using the same
approach and notation as in the convergence proof of Euler’s method, and the fact that the method
is stable, we obtain the following bound for the global error 𝑒𝑛 = 𝑦(𝑡𝑛 ) − 𝑦𝑛 :
∣𝑒𝑛 ∣ ≤ [(𝑒^(𝐿Φ (𝑇 −𝑡0 )) − 1)/𝐿Φ ] max_{0≤𝑚≤𝑛−1} ∣𝑇𝑚 (ℎ)∣.

Because the method is consistent, we have

lim_{ℎ→0} max_{0≤𝑛≤𝑇 /ℎ} ∣𝑇𝑛 (ℎ)∣ = 0.

It follows that as ℎ → 0 and 𝑛 → ∞ in such a way that 𝑡0 + 𝑛ℎ = 𝑡, we have

lim_{𝑛→∞} ∣𝑒𝑛 ∣ = 0,

and therefore the method is convergent.


In the case of Euler's method, we have

Φ(𝑡, 𝑦, ℎ) = 𝑓 (𝑡, 𝑦), 𝑇𝑛 (ℎ) = (ℎ/2) 𝑦 ′′ (𝜏 ), 𝜏 ∈ (𝑡𝑛 , 𝑡𝑛+1 ).
Therefore, there exists a constant 𝐾 such that

∣𝑇𝑛 (ℎ)∣ ≤ 𝐾ℎ, 0 < ℎ ≤ ℎ0 ,

for some sufficiently small ℎ0 . We say that Euler’s method is first-order accurate. More generally,
we say that a one-step method has order of accuracy 𝑝 if, for any sufficiently smooth solution 𝑦(𝑡),
there exist constants 𝐾 and ℎ0 such that

∣𝑇𝑛 (ℎ)∣ ≤ 𝐾ℎ^𝑝 , 0 < ℎ ≤ ℎ0 .
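The order of accuracy can be observed numerically. The sketch below (an illustration, not from the text; the names `f`, `y_exact`, and `max_lte` are ours) evaluates the local truncation error of Euler's method on 𝑦′ = −2𝑡𝑦, whose exact solution 𝑒^(−𝑡²) is known, and checks that halving ℎ roughly halves the maximum truncation error, as expected of a first-order method.

```python
import math

# Test problem: y' = -2ty has exact solution y(t) = exp(-t^2); for Euler, Phi = f.
f = lambda t, y: -2.0 * t * y
y_exact = lambda t: math.exp(-t * t)

def max_lte(h):
    """max over n of |T_n(h)| = |[y(t_{n+1}) - y(t_n)]/h - f(t_n, y(t_n))| on [0, 1]."""
    n_steps = round(1.0 / h)
    return max(abs((y_exact((n + 1) * h) - y_exact(n * h)) / h
                   - f(n * h, y_exact(n * h)))
               for n in range(n_steps))

ratio = max_lte(0.1) / max_lte(0.05)   # should be close to 2 for a first-order method
```

A ratio near 2^𝑝 when ℎ is halved is the standard empirical signature of a method of order 𝑝.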

We now consider an example of a higher-order accurate method.

An Implicit One-Step Method


Suppose that we approximate the equation
𝑦(𝑡𝑛+1 ) = 𝑦(𝑡𝑛 ) + ∫_{𝑡𝑛}^{𝑡𝑛+1} 𝑦 ′ (𝑠) 𝑑𝑠

by applying the Trapezoidal Rule to the integral. This yields the one-step method

𝑦𝑛+1 = 𝑦𝑛 + (ℎ/2)[𝑓 (𝑡𝑛 , 𝑦𝑛 ) + 𝑓 (𝑡𝑛+1 , 𝑦𝑛+1 )],

known as the trapezoidal method.
It follows from the error in the Trapezoidal Rule that

𝑇𝑛 (ℎ) = [𝑦(𝑡𝑛+1 ) − 𝑦(𝑡𝑛 )]/ℎ − (1/2)[𝑓 (𝑡𝑛 , 𝑦(𝑡𝑛 )) + 𝑓 (𝑡𝑛+1 , 𝑦(𝑡𝑛+1 ))] = −(ℎ²/12) 𝑦 ′′′ (𝜏𝑛 ), 𝜏𝑛 ∈ (𝑡𝑛 , 𝑡𝑛+1 ).
Therefore, the trapezoidal method is second-order accurate.
To show convergence, we must establish stability by finding a suitable Lipschitz constant 𝐿Φ
for the function
Φ(𝑡, 𝑦, ℎ) = (1/2)[𝑓 (𝑡, 𝑦) + 𝑓 (𝑡 + ℎ, 𝑦 + ℎΦ(𝑡, 𝑦, ℎ))],
assuming that 𝐿𝑓 is a Lipschitz constant for 𝑓 (𝑡, 𝑦) in 𝑦. We have

∣Φ(𝑡, 𝑢, ℎ) − Φ(𝑡, 𝑣, ℎ)∣ = (1/2)∣𝑓 (𝑡, 𝑢) + 𝑓 (𝑡 + ℎ, 𝑢 + ℎΦ(𝑡, 𝑢, ℎ)) − 𝑓 (𝑡, 𝑣) − 𝑓 (𝑡 + ℎ, 𝑣 + ℎΦ(𝑡, 𝑣, ℎ))∣

≤ 𝐿𝑓 ∣𝑢 − 𝑣∣ + (ℎ𝐿𝑓 /2)∣Φ(𝑡, 𝑢, ℎ) − Φ(𝑡, 𝑣, ℎ)∣.
Therefore

(1 − (ℎ/2)𝐿𝑓 )∣Φ(𝑡, 𝑢, ℎ) − Φ(𝑡, 𝑣, ℎ)∣ ≤ 𝐿𝑓 ∣𝑢 − 𝑣∣,
and therefore
𝐿Φ ≤ 𝐿𝑓 /(1 − (ℎ/2)𝐿𝑓 ),

provided that (ℎ/2)𝐿𝑓 < 1. We conclude that for ℎ sufficiently small, the trapezoidal method is stable,
and therefore convergent, with 𝑂(ℎ²) global error.
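As a sketch of how the implicit step might be carried out in practice (this implementation and its fixed-point solver are illustrative, not from the text), each step solves the defining equation for 𝑦𝑛+1 by fixed-point iteration, which is a contraction precisely when (ℎ/2)𝐿𝑓 < 1, the same condition that appeared in the stability argument:

```python
import math

def trapezoidal_step(f, t, y, h, tol=1e-12, max_iter=100):
    """One step of the trapezoidal method; y_{n+1} is found by fixed-point
    iteration, which converges when (h/2) L_f < 1."""
    y_next = y + h * f(t, y)   # Euler predictor as the initial guess
    for _ in range(max_iter):
        y_new = y + 0.5 * h * (f(t, y) + f(t + h, y_next))
        if abs(y_new - y_next) < tol:
            return y_new
        y_next = y_new
    return y_next

# y' = -2ty, y(0) = 1 on [0, 1] with h = 0.1; exact solution exp(-t^2)
t, y, h = 0.0, 1.0, 0.1
for _ in range(10):
    y = trapezoidal_step(lambda t, y: -2.0 * t * y, t, y, h)
    t += h
```

For stiff problems Newton's method is usually preferred over fixed-point iteration, since the contraction condition forces ℎ to be small.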
The trapezoidal method contrasts with Euler's method because it is an implicit method, due
to the evaluation of 𝑓 (𝑡, 𝑦) at 𝑦𝑛+1 . It follows that it is generally necessary to solve a nonlinear
equation to obtain 𝑦𝑛+1 from 𝑦𝑛 . This additional computational effort is offset by the fact that
implicit methods are generally more stable than explicit methods such as Euler’s method. Another
example of an implicit method is backward Euler’s method

𝑦𝑛+1 = 𝑦𝑛 + ℎ𝑓 (𝑡𝑛+1 , 𝑦𝑛+1 ).

Like Euler’s method, backward Euler’s method is first-order accurate.
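A backward Euler step can be sketched the same way (illustrative code, not from the text): the implicit equation is again solved by fixed-point iteration, which suffices when ℎ𝐿𝑓 < 1. For the linear example below the fixed point can be written in closed form, which makes the iteration easy to check.

```python
def backward_euler_step(f, t, y, h, tol=1e-12, max_iter=100):
    """One step of backward Euler: solve y_next = y + h f(t + h, y_next)
    by fixed-point iteration (a contraction when h L_f < 1)."""
    y_next = y + h * f(t, y)   # explicit Euler predictor as the initial guess
    for _ in range(max_iter):
        y_new = y + h * f(t + h, y_next)
        if abs(y_new - y_next) < tol:
            return y_new
        y_next = y_new
    return y_next

# For y' = -2ty the implicit equation is linear, so the exact step
# from (t, y) = (0, 1) with h = 0.1 is y / (1 + 2 h t_{n+1}) = 1 / 1.02.
step = backward_euler_step(lambda t, y: -2.0 * t * y, 0.0, 1.0, 0.1)
```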

Runge-Kutta Methods
We have seen that Euler’s method is first-order accurate. We would like to use Taylor series to
design methods that have a higher order of accuracy. First, however, we must get around the fact
that an analysis of the global error, as was carried out for Euler’s method, is quite cumbersome.

Instead, we will design new methods based on the criterion that their local truncation error, the
error committed during a single time step, be of higher order in ℎ.
Using higher-order Taylor series directly to approximate 𝑦(𝑡𝑛+1 ) is cumbersome, because it
requires evaluating derivatives of 𝑓 . Therefore, our approach will be to use evaluations of 𝑓 at
carefully chosen values of its arguments, 𝑡 and 𝑦, in order to create an approximation that is just
as accurate as a higher-order Taylor series expansion of 𝑦(𝑡 + ℎ).
To find the right values of 𝑡 and 𝑦 at which to evaluate 𝑓 , we need to take a Taylor expansion of
𝑓 evaluated at these (unknown) values, and then match the resulting numerical scheme to a Taylor
series expansion of 𝑦(𝑡 + ℎ) around 𝑡. To that end, we state a generalization of Taylor’s theorem to
functions of two variables.
Theorem Let 𝑓 (𝑡, 𝑦) be (𝑛 + 1) times continuously differentiable on a convex set 𝐷, and let
(𝑡0 , 𝑦0 ) ∈ 𝐷. Then, for every (𝑡, 𝑦) ∈ 𝐷, there exists 𝜉 between 𝑡0 and 𝑡, and 𝜇 between 𝑦0 and 𝑦,
such that
𝑓 (𝑡, 𝑦) = 𝑃𝑛 (𝑡, 𝑦) + 𝑅𝑛 (𝑡, 𝑦),
where 𝑃𝑛 (𝑡, 𝑦) is the 𝑛th Taylor polynomial of 𝑓 about (𝑡0 , 𝑦0 ),
𝑃𝑛 (𝑡, 𝑦) = 𝑓 (𝑡0 , 𝑦0 ) + [(𝑡 − 𝑡0 ) ∂𝑓/∂𝑡 (𝑡0 , 𝑦0 ) + (𝑦 − 𝑦0 ) ∂𝑓/∂𝑦 (𝑡0 , 𝑦0 )] +
[((𝑡 − 𝑡0 )²/2) ∂²𝑓/∂𝑡² (𝑡0 , 𝑦0 ) + (𝑡 − 𝑡0 )(𝑦 − 𝑦0 ) ∂²𝑓/∂𝑡∂𝑦 (𝑡0 , 𝑦0 ) + ((𝑦 − 𝑦0 )²/2) ∂²𝑓/∂𝑦² (𝑡0 , 𝑦0 )] +
⋅⋅⋅ + (1/𝑛!) ∑_{𝑗=0}^{𝑛} (𝑛 choose 𝑗) (𝑡 − 𝑡0 )^(𝑛−𝑗) (𝑦 − 𝑦0 )^𝑗 ∂^𝑛 𝑓/∂𝑡^(𝑛−𝑗) ∂𝑦^𝑗 (𝑡0 , 𝑦0 ),

and 𝑅𝑛 (𝑡, 𝑦) is the remainder term associated with 𝑃𝑛 (𝑡, 𝑦),
𝑅𝑛 (𝑡, 𝑦) = (1/(𝑛 + 1)!) ∑_{𝑗=0}^{𝑛+1} (𝑛+1 choose 𝑗) (𝑡 − 𝑡0 )^(𝑛+1−𝑗) (𝑦 − 𝑦0 )^𝑗 ∂^(𝑛+1) 𝑓/∂𝑡^(𝑛+1−𝑗) ∂𝑦^𝑗 (𝜉, 𝜇).

We now illustrate our proposed approach in order to obtain a method that is second-order
accurate; that is, its local truncation error is 𝑂(ℎ²). This involves matching
𝑦 + ℎ𝑓 (𝑡, 𝑦) + (ℎ²/2) 𝑑/𝑑𝑡[𝑓 (𝑡, 𝑦)] + (ℎ³/6) 𝑑²/𝑑𝑡²[𝑓 (𝜉, 𝑦(𝜉))]
to
𝑦 + ℎ𝑎1 𝑓 (𝑡 + 𝛼1 , 𝑦 + 𝛽1 ),
where 𝑡 ≤ 𝜉 ≤ 𝑡 + ℎ and the parameters 𝑎1 , 𝛼1 and 𝛽1 are to be determined. After simplifying by
removing terms or factors that already match, we see that we only need to match
𝑓 (𝑡, 𝑦) + (ℎ/2) 𝑑/𝑑𝑡[𝑓 (𝑡, 𝑦)] + (ℎ²/6) 𝑑²/𝑑𝑡²[𝑓 (𝜉, 𝑦(𝜉))]

with
𝑎1 𝑓 (𝑡 + 𝛼1 , 𝑦 + 𝛽1 ),
at least up to terms of 𝑂(ℎ), so that the local truncation error will be 𝑂(ℎ²).
Applying the multivariable version of Taylor’s theorem to 𝑓 , we obtain
𝑎1 𝑓 (𝑡 + 𝛼1 , 𝑦 + 𝛽1 ) = 𝑎1 𝑓 (𝑡, 𝑦) + 𝑎1 𝛼1 ∂𝑓/∂𝑡 (𝑡, 𝑦) + 𝑎1 𝛽1 ∂𝑓/∂𝑦 (𝑡, 𝑦) +
𝑎1 [(𝛼1²/2) ∂²𝑓/∂𝑡² (𝜉, 𝜇) + 𝛼1 𝛽1 ∂²𝑓/∂𝑡∂𝑦 (𝜉, 𝜇) + (𝛽1²/2) ∂²𝑓/∂𝑦² (𝜉, 𝜇)],
where 𝜉 is between 𝑡 and 𝑡 + 𝛼1 and 𝜇 is between 𝑦 and 𝑦 + 𝛽1 . Meanwhile, computing the full
derivatives with respect to 𝑡 in the Taylor expansion of the solution yields
𝑓 (𝑡, 𝑦) + (ℎ/2) ∂𝑓/∂𝑡 (𝑡, 𝑦) + (ℎ/2) ∂𝑓/∂𝑦 (𝑡, 𝑦) 𝑓 (𝑡, 𝑦) + 𝑂(ℎ²).
Comparing terms yields the equations
𝑎1 = 1, 𝑎1 𝛼1 = ℎ/2, 𝑎1 𝛽1 = (ℎ/2) 𝑓 (𝑡, 𝑦),
which has the solution

𝑎1 = 1, 𝛼1 = ℎ/2, 𝛽1 = (ℎ/2) 𝑓 (𝑡, 𝑦).
The resulting numerical scheme is
𝑦𝑛+1 = 𝑦𝑛 + ℎ𝑓 (𝑡𝑛 + ℎ/2, 𝑦𝑛 + (ℎ/2) 𝑓 (𝑡𝑛 , 𝑦𝑛 )).

This scheme is known as the midpoint method, or the explicit midpoint method. Note that it
evaluates 𝑓 at the midpoint of the intervals [𝑡𝑛 , 𝑡𝑛+1 ] and [𝑦𝑛 , 𝑦𝑛+1 ], where the midpoint in 𝑦 is
approximated using Euler’s method with time step ℎ/2.
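A sketch of the scheme in code (illustrative, not from the text):

```python
import math

def midpoint_step(f, t, y, h):
    """Explicit midpoint method: an Euler half-step supplies the y-value
    at which the midpoint slope is evaluated."""
    return y + h * f(t + 0.5 * h, y + 0.5 * h * f(t, y))

# y' = -2ty, y(0) = 1 on [0, 1] with h = 0.1; exact solution exp(-t^2)
y, h = 1.0, 0.1
for n in range(10):
    y = midpoint_step(lambda t, y: -2.0 * t * y, n * h, y, h)
```

On this test problem the error at 𝑡 = 1 is far smaller than Euler's, reflecting the method's second-order accuracy.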
The midpoint method is the simplest example of a Runge-Kutta method, which is the name
given to any of a class of time-stepping schemes that are derived by matching multivariable Taylor
series expansions of 𝑓 (𝑡, 𝑦) with terms in a Taylor series expansion of 𝑦(𝑡 + ℎ). Another often-used
Runge-Kutta method is the modified Euler method

𝑦𝑛+1 = 𝑦𝑛 + (ℎ/2)[𝑓 (𝑡𝑛 , 𝑦𝑛 ) + 𝑓 (𝑡𝑛+1 , 𝑦𝑛 + ℎ𝑓 (𝑡𝑛 , 𝑦𝑛 ))],
2
which resembles the Trapezoidal Rule from numerical integration, and is also second-order accurate.
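The modified Euler method can be sketched the same way (illustrative code, not from the text): predict 𝑦𝑛+1 with an Euler step, then average the slopes at the two ends of the interval.

```python
import math

def modified_euler_step(f, t, y, h):
    """Modified Euler method: average the slope at (t_n, y_n) with the
    slope at an Euler-predicted value of y_{n+1}."""
    k1 = f(t, y)
    k2 = f(t + h, y + h * k1)
    return y + 0.5 * h * (k1 + k2)

# y' = -2ty, y(0) = 1 on [0, 1] with h = 0.1; exact solution exp(-t^2)
y, h = 1.0, 0.1
for n in range(10):
    y = modified_euler_step(lambda t, y: -2.0 * t * y, n * h, y, h)
```

Unlike the trapezoidal method itself, this scheme is explicit: the predictor replaces the implicit solve.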

However, the best-known Runge-Kutta method is the fourth-order Runge-Kutta method, which
uses four evaluations of 𝑓 during each time step. The method proceeds as follows:

𝑘1 = ℎ𝑓 (𝑡𝑛 , 𝑦𝑛 ),
𝑘2 = ℎ𝑓 (𝑡𝑛 + ℎ/2, 𝑦𝑛 + 𝑘1 /2),
𝑘3 = ℎ𝑓 (𝑡𝑛 + ℎ/2, 𝑦𝑛 + 𝑘2 /2),
𝑘4 = ℎ𝑓 (𝑡𝑛+1 , 𝑦𝑛 + 𝑘3 ),
𝑦𝑛+1 = 𝑦𝑛 + (1/6)(𝑘1 + 2𝑘2 + 2𝑘3 + 𝑘4 ).
In a sense, this method is similar to Simpson’s Rule from numerical integration, which is also
fourth-order accurate, as values of 𝑓 at the midpoint in time are given four times as much weight
as values at the endpoints 𝑡𝑛 and 𝑡𝑛+1 .
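The four stages above translate directly into code; this sketch is illustrative, not from the text.

```python
import math

def rk4_step(f, t, y, h):
    """One step of the classical fourth-order Runge-Kutta method."""
    k1 = h * f(t, y)
    k2 = h * f(t + 0.5 * h, y + 0.5 * k1)
    k3 = h * f(t + 0.5 * h, y + 0.5 * k2)
    k4 = h * f(t + h, y + k3)
    return y + (k1 + 2.0 * k2 + 2.0 * k3 + k4) / 6.0

# y' = -2ty, y(0) = 1 on [0, 1] with h = 0.1; exact solution exp(-t^2)
y, h = 1.0, 0.1
for n in range(10):
    y = rk4_step(lambda t, y: -2.0 * t * y, n * h, y, h)
```

Note the Simpson-like weighting in the final line: the two midpoint stages 𝑘2 and 𝑘3 together receive four times the weight of each endpoint stage.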
Example We compare Euler’s method with the fourth-order Runge-Kutta scheme on the initial
value problem
𝑦 ′ = −2𝑡𝑦, 0 < 𝑡 ≤ 1, 𝑦(0) = 1,

which has the exact solution 𝑦(𝑡) = 𝑒^(−𝑡²). We use a time step of ℎ = 0.1 for both methods. The
computed solutions, and the exact solution, are shown in Figure 1.
It can be seen that the fourth-order Runge-Kutta method is far more accurate than Euler’s
method, which is first-order accurate. In fact, the solution computed using the fourth-order Runge-
Kutta method is visually indistinguishable from the exact solution. At the final time 𝑇 = 1, the
relative error in the solution computed using Euler's method is 0.038, while the relative error in
the solution computed using the fourth-order Runge-Kutta method is 4.4 × 10⁻⁶. □
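The example's error figures can be reproduced with a few lines of code (an illustrative sketch, not from the text; both methods are written out inline so the block is self-contained):

```python
import math

f = lambda t, y: -2.0 * t * y      # the example ODE y' = -2ty
exact = math.exp(-1.0)             # exact value y(1) = e^{-1}
h = 0.1

y_euler, y_rk4 = 1.0, 1.0
for n in range(10):
    t = n * h
    # Euler's method
    y_euler += h * f(t, y_euler)
    # classical fourth-order Runge-Kutta
    k1 = h * f(t, y_rk4)
    k2 = h * f(t + h / 2, y_rk4 + k1 / 2)
    k3 = h * f(t + h / 2, y_rk4 + k2 / 2)
    k4 = h * f(t + h, y_rk4 + k3)
    y_rk4 += (k1 + 2 * k2 + 2 * k3 + k4) / 6

rel_euler = abs(y_euler - exact) / exact   # ~0.038, as reported in the notes
rel_rk4 = abs(y_rk4 - exact) / exact       # ~4.4e-6, as reported in the notes
```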

Figure 1: Solutions of 𝑦 ′ = −2𝑡𝑦, 𝑦(0) = 1 on [0, 1], computed using Euler’s method and the
fourth-order Runge-Kutta method
