
MA 214: Introduction to numerical analysis

Lecture 33

Shripad M. Garge.
IIT Bombay

( shripad@math.iitb.ac.in )

2021-2022

1/74
shripad@math.iitb.ac.in MA214 (2021-2022) L33
Runge-Kutta Methods of Order Two

This is a difference method to solve the initial-value problem

dy/dt = f(t, y), a ≤ t ≤ b, y(a) = α.

It gives w_0 = α and

w_{i+1} = w_i + h f(t_i + h/2, w_i + (h/2) f(t_i, w_i))

for i = 0, ..., N − 1. This was obtained by equating T(t, y) with a_1 f(t + α_1, y + β_1) and then solving for a_1, α_1 and β_1.

The term a_1 f(t + α_1, y + β_1) has three parameters, and all of them were needed to get the match with T.

So a more complicated form is required to satisfy the conditions for any of the higher-order Taylor methods.
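The recurrence above (often called the midpoint method) is easy to program. The following is a minimal Python sketch, not part of the original notes; the function names are ours, and it is applied to the running example y' = y − t² + 1, y(0) = 0.5 used throughout this theme.

```python
def midpoint(f, a, b, alpha, N):
    """Midpoint (second-order Runge-Kutta) method: return the list of w_i."""
    h = (b - a) / N
    t, w = a, alpha
    ws = [w]
    for _ in range(N):
        # w_{i+1} = w_i + h f(t_i + h/2, w_i + (h/2) f(t_i, w_i))
        w = w + h * f(t + h / 2, w + (h / 2) * f(t, w))
        t += h
        ws.append(w)
    return ws

f = lambda t, y: y - t * t + 1
ws = midpoint(f, 0.0, 2.0, 0.5, 10)
# first step: w_1 = 0.5 + 0.2 * f(0.1, 0.65) = 0.5 + 0.2 * 1.64 = 0.828
```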
Runge-Kutta Methods of Order Two

If we use the form a_1 f(t, y) + a_2 f(t + α_2, y + δ_2 f(t, y)), containing 4 parameters, to approximate

T(t, y) = f(t, y) + (h/2) f'(t, y) + (h²/6) f''(t, y),

then there is insufficient flexibility to match the term

(h²/6) [∂f/∂y (t, y)]² f(t, y)

resulting from the expansion of (h²/6) f''(t, y).

Consequently, the best that can be obtained from the 4-parameter form are methods with O(h²) local truncation error.

Runge-Kutta Methods of Order Two

Since four parameters are available, there is some flexibility in their choice, so a number of O(h²) methods can be derived.

One of the most important is the Modified Euler method, which corresponds to choosing a_1 = a_2 = 1/2 and α_2 = δ_2 = h.

It has the following difference-equation form: w_0 = α and

w_{i+1} = w_i + (h/2) [f(t_i, w_i) + f(t_{i+1}, w_i + h f(t_i, w_i))].

This method is a slight modification of the Euler method, hence the name.

Let us do an example.

An example

We apply the modified Euler method with N = 10, h = 0.2, t_i = 0.2i and w_0 = 0.5 to approximate the solution to our usual example

y' = y − t² + 1, 0 ≤ t ≤ 2, y(0) = 0.5.

The method gives w_0 = 0.5 and

w_{i+1} = 1.22 w_i − 0.0088 i² − 0.008 i + 0.216

for i = 0, ..., 9.

The first two approximations are

w_1 = 0.826 and w_2 = 1.20692.
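The first two approximations can be checked with a short Python sketch (not part of the notes; the helper name is ours):

```python
def modified_euler(f, a, b, alpha, N):
    """Modified Euler method: return the list of approximations w_i."""
    h = (b - a) / N
    t, w = a, alpha
    ws = [w]
    for _ in range(N):
        # w_{i+1} = w_i + (h/2) [f(t_i, w_i) + f(t_{i+1}, w_i + h f(t_i, w_i))]
        s = f(t, w)
        w = w + (h / 2) * (s + f(t + h, w + h * s))
        t += h
        ws.append(w)
    return ws

ws = modified_euler(lambda t, y: y - t * t + 1, 0.0, 2.0, 0.5, 10)
# reproduces the approximations quoted above: w_1 = 0.826, w_2 = 1.20692
```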

The table follows.


The example, continued

Higher-Order Runge-Kutta Methods

Now we will approximate

T(t, y) = f(t, y) + (h/2) f'(t, y) + (h²/6) f''(t, y)

by f(t + α_1, y + δ_1 f(t + α_2, y + δ_2 f(t, y))) with error O(h³).

The number of parameters involved is 4 and the derivation of these constants is a bit involved.

The most common such method is Heun's method.

The difference equation of Heun's method is

w_{i+1} = w_i + (h/4) [ f(t_i, w_i) + 3 f(t_i + 2h/3, w_i + (2h/3) f(t_i + h/3, w_i + (h/3) f(t_i, w_i))) ].

Our usual example

We apply Heun's method with N = 10, h = 0.2, t_i = 0.2i and w_0 = 0.5 to approximate the solution to our usual example

y' = y − t² + 1, 0 ≤ t ≤ 2, y(0) = 0.5.

The method gives w_0 = 0.5 and

w_{i+1} = w_i + 0.05 [ f(0.2i, w_i) + 3 f(0.2i + 0.13̄, w_i + 0.13̄ f(0.2i + 0.06̄, w_i + 0.06̄ f(0.2i, w_i))) ].

The first two values are

w_1 = 0.829244 and w_2 = 1.2139750.
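A direct Python transcription of Heun's difference equation (a sketch, with names of our choosing) reproduces these values:

```python
def heun3(f, a, b, alpha, N):
    """Heun's third-order method: return the list of approximations w_i."""
    h = (b - a) / N
    t, w = a, alpha
    ws = [w]
    for _ in range(N):
        inner = w + (h / 3) * f(t, w)                 # innermost nesting
        mid = w + (2 * h / 3) * f(t + h / 3, inner)   # middle nesting
        w = w + (h / 4) * (f(t, w) + 3 * f(t + 2 * h / 3, mid))
        t += h
        ws.append(w)
    return ws

ws = heun3(lambda t, y: y - t * t + 1, 0.0, 2.0, 0.5, 10)
# w_1 ≈ 0.829244 and w_2 ≈ 1.2139750, as quoted above
```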

The table follows.


The example, continued

Runge-Kutta of order four

This is the most common Runge-Kutta method. It is given by w_0 = α and

w_{i+1} = w_i + (1/6)(k_1 + 2k_2 + 2k_3 + k_4), where

k_1 = h f(t_i, w_i),
k_2 = h f(t_i + h/2, w_i + (1/2) k_1),
k_3 = h f(t_i + h/2, w_i + (1/2) k_2), and
k_4 = h f(t_{i+1}, w_i + k_3).

This method has local truncation error O(h⁴), hence the order of this method is four, provided the solution y(t) has five continuous derivatives.

The notations k_1, k_2, k_3, k_4 are introduced to eliminate the need for successive nesting in the second variable of f(t, y). Let us do our usual example.
Our usual example

We consider the usual initial-value problem

y' = y − t² + 1, 0 ≤ t ≤ 2, y(0) = 0.5

and approximate its solution by Runge-Kutta of order four with N = 10, h = 0.2 and t_i = 0.2i.

We first compute k_1, k_2, k_3 and k_4:

k_1 = 0.2 f(0, 0.5) = 0.2(1.5) = 0.3,
k_2 = 0.2 f(0.1, 0.65) = 0.328,
k_3 = 0.2 f(0.1, 0.664) = 0.3308,
k_4 = 0.2 f(0.2, 0.8308) = 0.35816.

We have

w_1 = 0.5 + (1/6)(0.3 + 2(0.328) + 2(0.3308) + 0.35816) = 0.8292933.
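The hand computation above can be automated with a few lines of Python (a sketch; the function name is ours):

```python
def rk4(f, a, b, alpha, N):
    """Classical fourth-order Runge-Kutta method: return the list of w_i."""
    h = (b - a) / N
    t, w = a, alpha
    ws = [w]
    for _ in range(N):
        k1 = h * f(t, w)
        k2 = h * f(t + h / 2, w + k1 / 2)
        k3 = h * f(t + h / 2, w + k2 / 2)
        k4 = h * f(t + h, w + k3)
        w = w + (k1 + 2 * k2 + 2 * k3 + k4) / 6
        t += h
        ws.append(w)
    return ws

ws = rk4(lambda t, y: y - t * t + 1, 0.0, 2.0, 0.5, 10)
# w_1 = 0.8292933..., matching the hand computation above
```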
The example, continued

Computational Comparisons

The main computational effort in applying the Runge-Kutta


methods is the evaluation of f .

In the second-order methods, the local truncation error is O(h²), and the cost is two function evaluations per step.

The Runge-Kutta method of order four requires four evaluations per step, and the local truncation error is O(h⁴).

The following table establishes the relationship between the


number of evaluations per step and the order of the local
truncation error.

Computational Comparisons

Evaluations per step    Best possible local truncation error
2                       O(h²)
3                       O(h³)
4                       O(h⁴)
5 ≤ n ≤ 7               O(h^{n−1})
8 ≤ n ≤ 9               O(h^{n−2})
10 ≤ n                  O(h^{n−3})

This table indicates why the methods of order less than five with
smaller step size are used in preference to the higher-order
methods using a larger step size.

Computational Comparisons

The Runge-Kutta method of order four requires four evaluations


per step, whereas Euler’s method requires only one evaluation.

Hence if the Runge-Kutta method of order four is to be superior, it should give more accurate answers than Euler's method with one-fourth of the step size.

Similarly, if the Runge-Kutta method of order four is to be superior


to the second-order Runge-Kutta methods, which require two
evaluations per step, it should give more accuracy with step size h
than a second-order method with step size h{2.

The following example illustrates the superiority of the Runge-Kutta fourth-order method by this measure for our usual initial-value problem.

Computational Comparisons

We consider the problem

y' = y − t² + 1, 0 ≤ t ≤ 2, y(0) = 0.5.

We employ Euler’s method with h “ 0.025, the modified Euler


method with h “ 0.05, and the Runge-Kutta fourth-order method
with h “ 0.1.

These methods are compared at the common mesh points of these


methods: 0.1, 0.2, 0.3, 0.4, and 0.5.

Each of these techniques requires 20 function evaluations to determine the values listed in the next table to approximate y(0.5).

We see that the fourth-order method is clearly superior.
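The equal-work comparison can be reproduced with the following Python sketch (not from the notes; the helper names are ours). It runs each method to t = 0.5 and compares against the exact solution y(t) = (t + 1)² − 0.5 eᵗ.

```python
import math

f = lambda t, y: y - t * t + 1
exact = lambda t: (t + 1) ** 2 - 0.5 * math.exp(t)

def euler_step(t, w, h):
    return w + h * f(t, w)

def mod_euler_step(t, w, h):
    s = f(t, w)
    return w + (h / 2) * (s + f(t + h, w + h * s))

def rk4_step(t, w, h):
    k1 = h * f(t, w)
    k2 = h * f(t + h / 2, w + k1 / 2)
    k3 = h * f(t + h / 2, w + k2 / 2)
    k4 = h * f(t + h, w + k3)
    return w + (k1 + 2 * k2 + 2 * k3 + k4) / 6

def run(step, h, t_end):
    """March the one-step method from (0, 0.5) up to t_end."""
    t, w = 0.0, 0.5
    while t < t_end - 1e-12:
        w = step(t, w, h)
        t += h
    return w

# 20 evaluations of f each: Euler h=0.025, modified Euler h=0.05, RK4 h=0.1
err = {name: abs(run(s, h, 0.5) - exact(0.5))
       for name, s, h in [("euler", euler_step, 0.025),
                          ("mod_euler", mod_euler_step, 0.05),
                          ("rk4", rk4_step, 0.1)]}
# the fourth-order method wins: err["rk4"] < err["mod_euler"] < err["euler"]
```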

Computational Comparisons

MA 214: Introduction to numerical analysis
Lecture 34

Error control in Runge-Kutta methods

In lecture 25, we studied the adaptive quadrature methods to


approximate definite integrals.

These methods adapt the number and position of the nodes used
in the approximation to keep the truncation error within a specific
bound, in spite of an increase in the number of calculations.

The problem of approximating the value of a definite integral and


that of approximating the solution to an initial-value problem are
similar.

It is not surprising, then, that there are adaptive methods for


approximating the solutions to initial-value problems and that
these methods are not only efficient, but also incorporate the
control of error.

Error control in Runge-Kutta methods

Any one-step method for approximating the solution, y(t), of the initial-value problem

y' = f(t, y), a ≤ t ≤ b, y(a) = α

can be expressed in the form

w_{i+1} = w_i + h_i φ(t_i, w_i, h_i)

for i = 0, ..., N − 1 and some function φ.

An ideal such method would have the property that, given ε > 0, a minimal number of mesh points is used to ensure that the global error, |y(t_i) − w_i|, remains less than ε for every i.

In such a method, it is natural not to have equally-spaced nodes.
Error control in Runge-Kutta methods

We now examine techniques used to control the error of a


difference-equation method in an efficient manner by the
appropriate choice of mesh points.

By using methods of differing order we can predict the local truncation error and, using this prediction, choose a step size that will keep it and the global error in check. We illustrate the technique by comparing two approximation techniques.

The first is obtained from an n-th order Taylor method of the form

y(t_{i+1}) = y(t_i) + h φ(t_i, y(t_i), h) + O(h^{n+1})

and the second is obtained from a Taylor method of order (n + 1) of the form

y(t_{i+1}) = y(t_i) + h φ̃(t_i, y(t_i), h) + O(h^{n+2}).


Error control in Runge-Kutta methods

The difference methods are given by w_0 = α = w̃_0 and, for i ≥ 0,

w_{i+1} = w_i + h φ(t_i, w_i, h) and w̃_{i+1} = w̃_i + h φ̃(t_i, w̃_i, h)

with respective local truncation errors τ_{i+1}(h) = O(hⁿ) and τ̃_{i+1}(h) = O(h^{n+1}).

We first make the assumption that w_i ≈ y(t_i) ≈ w̃_i and generate the approximations w_{i+1} and w̃_{i+1} to y(t_{i+1}) with the same h. Then

τ_{i+1}(h) = [y(t_{i+1}) − y(t_i)]/h − φ(t_i, y(t_i), h)
           ≈ [y(t_{i+1}) − w_i]/h − φ(t_i, w_i, h)
           = (1/h) (y(t_{i+1}) − w_{i+1}).
Error control in Runge-Kutta methods

Hence

τ_{i+1}(h) ≈ (1/h) (y(t_{i+1}) − w_{i+1})

and similarly

τ̃_{i+1}(h) ≈ (1/h) (y(t_{i+1}) − w̃_{i+1}).

Therefore

τ_{i+1}(h) − τ̃_{i+1}(h) ≈ (1/h) (w̃_{i+1} − w_{i+1}).

The orders of these errors are O(hⁿ) and O(h^{n+1}), respectively. Hence the significant part of τ_{i+1}(h) must come from

(1/h) (w̃_{i+1} − w_{i+1}).

After estimating the local truncation error, we now proceed to adjust the step size to keep it within a specified bound.
Error control in Runge-Kutta methods

Since τ_{i+1}(h) = O(hⁿ), we assume that there exists a K such that τ_{i+1}(h) ≈ K hⁿ.

Then the local truncation error produced by applying the n-th order method with a new step size qh can be estimated using the original approximations:

τ_{i+1}(qh) ≈ K (qh)ⁿ = qⁿ (K hⁿ) ≈ qⁿ τ_{i+1}(h) ≈ (qⁿ/h) (w̃_{i+1} − w_{i+1}).

To bound τ_{i+1}(qh) by ε we choose q such that

(qⁿ/h) |w̃_{i+1} − w_{i+1}| ≈ |τ_{i+1}(qh)| ≤ ε,

that is,

q ≤ ( εh / |w̃_{i+1} − w_{i+1}| )^{1/n}.
Runge-Kutta-Fehlberg Method

One popular technique that uses the above inequality for error control is the Runge-Kutta-Fehlberg method.

This technique uses a Runge-Kutta method with local truncation error of order five,

w̃_{i+1} = w̃_i + (16/135) k_1 + (6656/12825) k_3 + (28561/56430) k_4 − (9/50) k_5 + (2/55) k_6,

to estimate the local error in a Runge-Kutta method of order four given by

w_{i+1} = w_i + (25/216) k_1 + (1408/2565) k_3 + (2197/4104) k_4 − (1/5) k_5.
We give the coefficient equations on the next slide.

Runge-Kutta-Fehlberg Method

We have

k_1 = h f(t_i, w_i),
k_2 = h f(t_i + h/4, w_i + (1/4) k_1),
k_3 = h f(t_i + 3h/8, w_i + (3/32) k_1 + (9/32) k_2),
k_4 = h f(t_i + 12h/13, w_i + (1932/2197) k_1 − (7200/2197) k_2 + (7296/2197) k_3),
k_5 = h f(t_i + h, w_i + (439/216) k_1 − 8 k_2 + (3680/513) k_3 − (845/4104) k_4),
k_6 = h f(t_i + h/2, w_i − (8/27) k_1 + 2 k_2 − (3544/2565) k_3 + (1859/4104) k_4 − (11/40) k_5).

Runge-Kutta-Fehlberg Method

An advantage to this method is that only six evaluations of f are


required per step.

Arbitrary Runge-Kutta methods of orders four and five used


together require at least four evaluations of f for the fourth-order
method and an additional six for the fifth-order method, for a total
of at least ten function evaluations.

So the Runge-Kutta-Fehlberg method has at least a 40% decrease


in the number of function evaluations over the use of a pair of
arbitrary fourth- and fifth-order methods.

In the error-control theory, an initial value of h at the i-th step is used to find the first values of w_{i+1} and w̃_{i+1}, which leads to the determination of q for that step, and then the calculations are repeated.

Runge-Kutta-Fehlberg Method

This procedure requires twice the number of function evaluations


per step as without the error control.

In practice, the value of q to be used is chosen somewhat


differently in order to make the increased function-evaluation cost
worthwhile.

The value of q determined at the i-th step is used for two purposes:

When q < 1: to reject the initial choice of h at the i-th step
and repeat the calculations using qh, and
When q ≥ 1: to accept the computed value at the i-th step
using the step size h, but change the step size to qh for the
(i + 1)-st step.

Since a penalty of function evaluations must be paid if the steps


are repeated, q tends to be chosen conservatively.
An example

A common choice of q for the Runge-Kutta-Fehlberg method with n = 4 is

q = ( εh / (2|w̃_{i+1} − w_{i+1}|) )^{1/4} = 0.84 ( εh / |w̃_{i+1} − w_{i+1}| )^{1/4}.

We now use this method with a tolerance ε = 10⁻⁵, a maximum step size hmax = 0.25, and a minimum step size hmin = 0.01 to approximate the solution to our usual initial-value problem

y' = y − t² + 1, 0 ≤ t ≤ 2, y(0) = 0.5

and compare the results with the exact solution

y(t) = (t + 1)² − 0.5 eᵗ.

The initial condition gives t_0 = 0 and w_0 = 0.5.


The example, continued

Now, we compute the coefficients k_1, ..., k_6 using h = hmax = 0.25:

k_1 = h f(t_0, w_0) = 0.25 (0.5 − 0² + 1) = 0.375,
k_2 = 0.3974609, k_3 = 0.4095383, k_4 = 0.4584971,
k_5 = 0.4658452, and k_6 = 0.4204789.

We now compute the approximations to y(0.25):

w̃_1 = w̃_0 + (16/135) k_1 + (6656/12825) k_3 + (28561/56430) k_4 − (9/50) k_5 + (2/55) k_6
    = 0.5 + (16/135)(0.375) + (6656/12825)(0.4095383) + (28561/56430)(0.4584971)
      − (9/50)(0.4658452) + (2/55)(0.4204789)
    = 0.9204870.

The example, continued

Similarly,

w_1 = w_0 + (25/216) k_1 + (1408/2565) k_3 + (2197/4104) k_4 − (1/5) k_5
    = 0.5 + (25/216)(0.375) + (1408/2565)(0.4095383) + (2197/4104)(0.4584971)
      − (1/5)(0.4658452)
    = 0.9204886.

Then

τ_1(h) ≈ (1/h) |w̃_1 − w_1| = 0.00000621388

and

q = 0.84 ( εh / |w̃_1 − w_1| )^{1/4} = 0.9461033291.
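One step of this computation can be sketched in Python as follows (not from the notes; the function name is ours). It evaluates the tableau above once and forms both the order-four and order-five approximations together with q.

```python
def rkf45_step(f, t, w, h):
    """One Runge-Kutta-Fehlberg step: return (order-4, order-5) approximations."""
    k1 = h * f(t, w)
    k2 = h * f(t + h / 4, w + k1 / 4)
    k3 = h * f(t + 3 * h / 8, w + 3 * k1 / 32 + 9 * k2 / 32)
    k4 = h * f(t + 12 * h / 13,
               w + 1932 * k1 / 2197 - 7200 * k2 / 2197 + 7296 * k3 / 2197)
    k5 = h * f(t + h,
               w + 439 * k1 / 216 - 8 * k2 + 3680 * k3 / 513 - 845 * k4 / 4104)
    k6 = h * f(t + h / 2,
               w - 8 * k1 / 27 + 2 * k2 - 3544 * k3 / 2565
               + 1859 * k4 / 4104 - 11 * k5 / 40)
    w4 = w + 25 * k1 / 216 + 1408 * k3 / 2565 + 2197 * k4 / 4104 - k5 / 5
    w5 = (w + 16 * k1 / 135 + 6656 * k3 / 12825 + 28561 * k4 / 56430
          - 9 * k5 / 50 + 2 * k6 / 55)
    return w4, w5

f = lambda t, y: y - t * t + 1
eps, h = 1e-5, 0.25
w4, w5 = rkf45_step(f, 0.0, 0.5, h)
q = 0.84 * (eps * h / abs(w5 - w4)) ** 0.25
# w4 ≈ 0.9204886, w5 ≈ 0.9204870 and q ≈ 0.946, as in the worked example
```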

The example, continued

Since q ≈ 1, we accept the approximation 0.9204886 for y(0.25), but we should adjust the step size for the next iteration to

h = 0.9461033291 (0.25) ≈ 0.2365258.

However, only the leading 5 digits of this result would be expected to be accurate, because τ_1(h) has only about 5 digits of accuracy.

Because we are effectively subtracting the nearly equal numbers w_i and w̃_i when we compute τ_1(h), there is a good likelihood of round-off error.

This is an additional reason for being conservative when computing q.

The full table of these values cannot be displayed on a slide; it has 8 columns and 10 rows.
Why did we discuss this method

The point is that there is a close connection between the problem of approximating the value of a definite integral and that of approximating the solution to an initial-value problem.

Therefore, there are analogues of the adaptive methods for solving initial-value problems.

However, these methods become very complicated very quickly if the computations are to be done by hand, that is, on a calculator rather than by a program.

As such, our interest in these methods is purely academic rather than computational.

MA 214: Introduction to numerical analysis
Lecture 35

Multi-step methods

The methods discussed to this point in the theme are called one-step methods because the approximation for the mesh point t_{i+1} involves information from only one of the previous mesh points, t_i.

Although these methods might use function-evaluation information at points between t_i and t_{i+1}, they do not retain that information for direct use in future approximations.

All the information used by these methods is obtained within the subinterval over which the solution is being approximated.

The approximate solution is available at each of the mesh points t_0, t_1, ..., t_i before the approximation at t_{i+1} is obtained, and the error |w_j − y(t_j)| tends to increase with j, so it seems reasonable to develop methods that use these more accurate previous data when approximating the solution at t_{i+1}.
Multi-step methods

Methods that use approximations at more than one previous mesh point to determine the approximation at the next point are called multi-step methods.

An m-step method for solving the initial-value problem

y' = f(t, y), a ≤ t ≤ b, y(a) = α

is the difference method

w_{i+1} = a_{m−1} w_i + a_{m−2} w_{i−1} + ... + a_0 w_{i+1−m}
        + h [ b_m f(t_{i+1}, w_{i+1}) + b_{m−1} f(t_i, w_i) + ... + b_0 f(t_{i+1−m}, w_{i+1−m}) ]

for i = m − 1, ..., N − 1, with h = (b − a)/N, constants a_i, b_j and initial values w_0 = α, w_1 = α_1, ..., w_{m−1} = α_{m−1}.

Multi-step methods

When b_m = 0, the method is called explicit or open, because the difference equation then gives w_{i+1} explicitly in terms of previously determined values.

When b_m ≠ 0, the method is called implicit or closed, because then w_{i+1} occurs on both sides of the difference equation, so w_{i+1} is specified only implicitly.

The implicit methods are generally more accurate than the explicit methods, but to apply an implicit method directly, we must solve the implicit equation for w_{i+1}. This is not always possible, and even when it can be done, the solution for w_{i+1} may not be unique.

The starting values α_1, ..., α_{m−1} must be specified, generally by assuming w_0 = α and generating the remaining values by either a Runge-Kutta or a Taylor method.

Multi-step methods

Adams-Bashforth technique: This is an explicit four-step method with the difference equation

w_{i+1} = w_i + (h/24) [ 55 f(t_i, w_i) − 59 f(t_{i−1}, w_{i−1}) + 37 f(t_{i−2}, w_{i−2}) − 9 f(t_{i−3}, w_{i−3}) ].

Adams-Moulton technique: This is an implicit three-step method with the difference equation

w_{i+1} = w_i + (h/24) [ 9 f(t_{i+1}, w_{i+1}) + 19 f(t_i, w_i) − 5 f(t_{i−1}, w_{i−1}) + f(t_{i−2}, w_{i−2}) ].

An example

We have been playing with the initial-value problem

y' = y − t² + 1, 0 ≤ t ≤ 2, y(0) = 0.5

throughout this theme.

We apply the Runge-Kutta method of order 4 to it with h = 0.2 and compute w_0 = 0.5, w_1 = 0.8292933, w_2 = 1.2140762 and w_3 = 1.6489220.

We use these as starting values for the Adams-Bashforth method to compute approximations for y(0.8) and y(1.0).

We will later compare them with those obtained by the above Runge-Kutta method of order four.

The example, continued

We get

w_4 = w_3 + (0.2/24) [ 55 f(0.6, w_3) − 59 f(0.4, w_2) + 37 f(0.2, w_1) − 9 f(0, w_0) ]
    = 2.1272892

and w_5 = 2.6410533.

The errors then are:

t      y(t_i)       R-K          error          A-B          error
0.8    2.1272295    2.1272027    2.69 × 10⁻⁵    2.1272892    5.97 × 10⁻⁵
1.0    2.6408591    2.6408227    3.64 × 10⁻⁵    2.6410533    1.94 × 10⁻⁴
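The Adams-Bashforth values above can be regenerated from the quoted starting values with a short Python sketch (names ours):

```python
f = lambda t, y: y - t * t + 1
h = 0.2
w = [0.5, 0.8292933, 1.2140762, 1.6489220]  # RK4 starting values quoted above
for i in range(3, 10):
    # explicit four-step Adams-Bashforth recurrence
    w.append(w[i] + (h / 24) * (
        55 * f(h * i, w[i]) - 59 * f(h * (i - 1), w[i - 1])
        + 37 * f(h * (i - 2), w[i - 2]) - 9 * f(h * (i - 3), w[i - 3])))
# w[4] ≈ 2.1272892 and w[5] ≈ 2.6410533, as computed above
```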

Local truncation error

If y(t) is the solution to the initial-value problem y' = f(t, y), a ≤ t ≤ b, y(a) = α, and

w_{i+1} = a_{m−1} w_i + a_{m−2} w_{i−1} + ... + a_0 w_{i+1−m}
        + h [ b_m f(t_{i+1}, w_{i+1}) + b_{m−1} f(t_i, w_i) + ... + b_0 f(t_{i+1−m}, w_{i+1−m}) ]

is a multi-step method, then its local truncation error is

τ_{i+1}(h) = [ y_{i+1} − a_{m−1} y_i − a_{m−2} y_{i−1} − ... − a_0 y_{i+1−m} ] / h
           − [ b_m f(t_{i+1}, y_{i+1}) + b_{m−1} f(t_i, y_i) + ... + b_0 f(t_{i+1−m}, y_{i+1−m}) ].

Predictor-corrector method

The implicit methods generally give better results than the explicit methods of the same order.

But the implicit methods have the inherent weakness of first having to convert the method algebraically to an explicit representation for w_{i+1}. This procedure is not always possible.

We could use Newton's method or the secant method to approximate w_{i+1}, but this complicates the procedure considerably.

In practice, implicit multi-step methods are not used as described


above. Rather, they are used to improve approximations obtained
by explicit methods.

The combination of an explicit method to predict and an implicit


method to improve the prediction is called a predictor-corrector
method.
Predictor-corrector method

Consider the following fourth-order method for solving an


initial-value problem.

The first step is to calculate the starting values w_0, w_1, w_2 and w_3 for the four-step explicit Adams-Bashforth method. To do this, we use the Runge-Kutta method of order four.

The next step is to calculate an approximation, w_4^p, to y(t_4) using the explicit Adams-Bashforth method as predictor:

w_4^p = w_3 + (h/24) [ 55 f(t_3, w_3) − 59 f(t_2, w_2) + 37 f(t_1, w_1) − 9 f(t_0, w_0) ].

This approximation is improved by inserting w_4^p in the right side of the three-step implicit Adams-Moulton method and using that method as a corrector.

Predictor-corrector method

This gives

w_4 = w_3 + (h/24) [ 9 f(t_4, w_4^p) + 19 f(t_3, w_3) − 5 f(t_2, w_2) + f(t_1, w_1) ].

The only new function evaluation required in this procedure is f(t_4, w_4^p) in the corrector equation; all the other values of f have been calculated for earlier approximations.

The value w_4 is then used as the approximation to y(t_4), and the technique of using the Adams-Bashforth method as a predictor and the Adams-Moulton method as a corrector is repeated to find w_5^p and w_5, the initial and final approximations to y(t_5).

This process is continued until we obtain an approximation to y(t_N) = y(b).

Predictor-corrector method

Improved approximations to y(t_{i+1}) might be obtained by iterating the Adams-Moulton formula, but these converge to the approximation given by the implicit formula rather than to the solution y(t_{i+1}).

Hence it is usually more efficient to use a reduction in the step size if improved accuracy is needed.

Example: Apply the Adams fourth-order predictor-corrector method with h = 0.2 and starting values from the Runge-Kutta fourth-order method to our standard initial-value problem y' = y − t² + 1, 0 ≤ t ≤ 2, y(0) = 0.5.

The initial values are w_0 = 0.5, w_1 = 0.8292933, w_2 = 1.2140762 and w_3 = 1.6489220.

The fourth-order Adams-Bashforth method gives w_4^p = 2.1272892.
The example, continued

We now use w_4^p as the predictor of the approximation to y(0.8) and determine the corrected value w_4 from the implicit Adams-Moulton method. This gives

w_4 = w_3 + (0.2/24) [ 9 f(0.8, w_4^p) + 19 f(0.6, w_3) − 5 f(0.4, w_2) + f(0.2, w_1) ]
    = 2.1272056.

The predicted and corrected values for approximating y(1.0) are similarly computed as

w_5^p = 2.6409314 and w_5 = 2.6408286.
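The whole predictor-corrector loop fits in a few lines of Python (a sketch, names ours): the Adams-Bashforth step predicts, and the Adams-Moulton step corrects.

```python
f = lambda t, y: y - t * t + 1
h = 0.2
w = [0.5, 0.8292933, 1.2140762, 1.6489220]  # RK4 starting values
for i in range(3, 10):
    # predictor: explicit four-step Adams-Bashforth
    wp = w[i] + (h / 24) * (
        55 * f(h * i, w[i]) - 59 * f(h * (i - 1), w[i - 1])
        + 37 * f(h * (i - 2), w[i - 2]) - 9 * f(h * (i - 3), w[i - 3]))
    # corrector: implicit three-step Adams-Moulton, with wp in place of w_{i+1}
    w.append(w[i] + (h / 24) * (
        9 * f(h * (i + 1), wp) + 19 * f(h * i, w[i])
        - 5 * f(h * (i - 1), w[i - 1]) + f(h * (i - 2), w[i - 2])))
# w[4] ≈ 2.1272056 and w[5] ≈ 2.6408286, as computed above
```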

The example, continued

The values computed using only the explicit Adams-Bashforth method were found to be inferior to those computed by the Runge-Kutta method of order 4.

Now we have managed to do better than the Runge-Kutta method:

t      y(t_i)       R-K          error          P-C          error
0.8    2.1272295    2.1272027    2.69 × 10⁻⁵    2.1272056    2.39 × 10⁻⁵
1.0    2.6408591    2.6408227    3.64 × 10⁻⁵    2.6408286    3.05 × 10⁻⁵

We give the table of the remaining values in the following slide:

The example, continued

MA 214: Introduction to numerical analysis
Lecture 36

Winding up

A number of methods have been presented in this theme for


approximating the solution to an initial-value problem.

Although numerous other techniques are available, we have chosen


the methods described here because they generally satisfied three
criteria:

‚ Their development is clear enough so that you can understand


how and why they work.

‚ One or more of the methods will give satisfactory results for


most of the problems that are encountered by students.

‚ Most of the more advanced and complex techniques are based on


one or a combination of the procedures described here.

Consistency and convergence

We now discuss why these methods are expected to give


satisfactory results when some similar methods do not.

A one-step difference-equation method with local truncation error τ_i(h) at the i-th step is said to be consistent with the differential equation it approximates if

lim_{h→0} max_{1≤i≤N} |τ_i(h)| = 0.

A consistent one-step method has the property that the difference equation for the method approaches the differential equation as the step size goes to zero. So the local truncation error of a consistent method approaches zero as the step size approaches zero.

Note that consistency is a local measure since, for each of the values τ_i(h), we are assuming that the approximation w_{i−1} and the exact solution y(t_{i−1}) are the same.
Consistency and convergence

A more realistic measure for analysing the effect of making h small is to determine the global effect of the method.

This is the maximum error of the method over the entire range of
the approximation, assuming only that the method gives the exact
result at the initial value.

A one-step difference-equation method is said to be convergent with respect to the differential equation it approximates if

lim_{h→0} max_{1≤i≤N} |y(t_i) − w_i| = 0.

Consistency and convergence

Let us check the convergence of Euler's method. The global error in Euler's method is given by

max_{1≤i≤N} |y(t_i) − w_i| ≤ (Mh / 2L) |e^{L(b−a)} − 1|,

where M, L, a and b are constants.

Hence

lim_{h→0} max_{1≤i≤N} |y(t_i) − w_i| ≤ lim_{h→0} (Mh / 2L) |e^{L(b−a)} − 1| = 0.

Thus Euler's method is convergent.

Stability

The other problem that exists when using difference methods is a


consequence of not using exact results.

In practice, neither the initial conditions nor the arithmetic that is


subsequently performed is represented exactly because of the
round-off error associated with finite-digit arithmetic.

We saw earlier that this consideration can lead to difficulties even


for the convergent Euler’s method.

To analyze this situation, at least partially, we will try to determine


which methods are stable, in the sense that small changes or
perturbations in the initial conditions produce correspondingly
small changes in the subsequent approximations.

Stability

Suppose the initial-value problem

y' = f(t, y), a ≤ t ≤ b, y(a) = α

is approximated by a one-step difference method in the form

w_0 = α, w_{i+1} = w_i + h φ(t_i, w_i, h).

Suppose also that a number h_0 > 0 exists and that φ(t, w, h) is continuous and satisfies a Lipschitz condition in the variable w with Lipschitz constant L on the set

D = { (t, w, h) : a ≤ t ≤ b, −∞ < w < ∞, 0 ≤ h ≤ h_0 }.

Then the following three properties hold:

(1) The method is stable.


Stability

(2) The difference method is convergent if and only if it is consistent, the consistency being equivalent to

φ(t, y, 0) = f(t, y), for all a ≤ t ≤ b.

(3) If a function τ exists and, for each i = 1, 2, ..., N, the local truncation error τ_i(h) satisfies |τ_i(h)| ≤ τ(h) whenever 0 ≤ h ≤ h_0, then

|y(t_i) − w_i| ≤ (τ(h) / L) e^{L(t_i − a)}.

The third part is about controlling the global error of a method by controlling its local truncation error.

It implies that when the local truncation error has the rate of convergence O(hⁿ), the global error will have the same rate of convergence.

Modified Euler method

The Modified Euler method is given by w_0 = α,

w_{i+1} = w_i + (h/2) [ f(t_i, w_i) + f(t_{i+1}, w_i + h f(t_i, w_i)) ]

for i = 0, 1, ..., N − 1.

We verify that this method is stable by showing that it satisfies the hypothesis of the above theorem.

Here φ(t, w, h) = (1/2) [ f(t, w) + f(t + h, w + h f(t, w)) ] and hence

φ(t, w_1, h) − φ(t, w_2, h) = (1/2) f(t, w_1) + (1/2) f(t + h, w_1 + h f(t, w_1))
                            − (1/2) f(t, w_2) − (1/2) f(t + h, w_2 + h f(t, w_2)).

Modified Euler method

Since f satisfies a Lipschitz condition with Lipschitz constant L,

|φ(t, w_1, h) − φ(t, w_2, h)| ≤ (1/2) L |w_1 − w_2|
                              + (1/2) L |w_1 + h f(t, w_1) − w_2 − h f(t, w_2)|
                              ≤ L |w_1 − w_2| + (1/2) L |h f(t, w_1) − h f(t, w_2)|
                              ≤ L |w_1 − w_2| + (1/2) h L² |w_1 − w_2|
                              = ( L + (1/2) h L² ) |w_1 − w_2|.

Thus φ satisfies a Lipschitz condition with the constant L + (1/2) h L² on

D = { (t, w, h) : a ≤ t ≤ b, −∞ < w < ∞, 0 ≤ h ≤ h_0 }

for any h_0 > 0.


Modified Euler method

Finally, the continuity of f on [a, b] × R implies the continuity of φ on [a, b] × R × [0, h_0].

Thus, by the above theorem, this method is stable.

Since

φ(t, w, 0) = (1/2) f(t, w) + (1/2) f(t, w + 0 · f(t, w)) = f(t, w),

we get that this method is consistent.

The above theorem then implies that the method is also convergent.

Moreover, we have seen that for this method the local truncation error is O(h²), so, by the third part of the theorem, the convergence of the Modified Euler method is also O(h²).
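This O(h²) convergence can be observed numerically: halving h should reduce the global error by a factor of roughly four. The following is an empirical check in Python (not in the notes; names ours) on the usual example, measuring the error at t = 2 against the exact solution y(t) = (t + 1)² − 0.5 eᵗ.

```python
import math

def modified_euler_final(h):
    """Modified Euler on y' = y - t^2 + 1, y(0) = 0.5; return w at t = 2."""
    f = lambda t, y: y - t * t + 1
    t, w = 0.0, 0.5
    while t < 2.0 - 1e-12:
        s = f(t, w)
        w = w + (h / 2) * (s + f(t + h, w + h * s))
        t += h
    return w

exact = 3.0 ** 2 - 0.5 * math.exp(2.0)  # y(2) = (2 + 1)^2 - 0.5 e^2
e1 = abs(modified_euler_final(0.1) - exact)
e2 = abs(modified_euler_final(0.05) - exact)
ratio = e1 / e2  # expected to be close to 4, consistent with O(h^2)
```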
Multi-step methods

For multi-step methods, the problems involved with consistency,


convergence, and stability are compounded because of the number
of approximations involved at each step.

In the one-step methods, the approximation w_{i+1} depends directly only on the previous approximation w_i, whereas the multi-step methods use at least two of the previous approximations, and the usual methods that are employed involve even more.

The convergence of a multi-step method is defined in the same way as for the one-step methods, based on the global error |y(t_i) − w_i|, but the consistency and stability are phrased in a different way.

We will not go in the details but will only say here that both the
Adams-Bashforth method and the Adams-Moulton method are
stable.

The fifth theme is complete

This completes the fifth theme of our course:

Ordinary Differential Equations.

From our next lecture, we will begin the next (and essentially the
last) theme:

Numerical Linear Algebra.

MA 214: Introduction to numerical analysis
Lecture 37

Numerical linear algebra

We begin our final theme

Numerical Linear Algebra.


In linear algebra, one needs to do several computations, right from solving a linear system of equations to computing eigenvalues and eigenvectors of matrices.

There are many methods in this area and we will be able to do


justice to only a few.

System of linear equations

To begin with, we interest ourselves in systems of linear
equations. Such a system is given in the form

A¨x “b

where A is an n ˆ n real matrix, b “ rb1 , b2 , . . . , bn st is the given


n ˆ 1 vector and x “ rx1 , x2 , . . . , xn st is the unknown vector. We
want to solve for the vector x.

If the matrix A is invertible, then there is a unique solution,

x “ A´1 b,

however, if A is not invertible, then either no solution
exists or there are infinitely many solutions.

System of linear equations

Consider the systems:


¨ ˛¨ ˛ ¨ ˛ ¨ ˛¨ ˛ ¨ ˛
1 1 1 x1 4 1 1 1 x1 4
˝2 2 1‚˝x2 ‚ “ ˝6‚ and ˝2 2 1‚˝x2 ‚ “ ˝4‚.
1 1 2 x3 6 1 1 2 x3 6

The first system has infinitely many solutions; indeed, each
px1 , x2 , 2q with x1 ` x2 “ 2 is a solution to it.

The second system, on the other hand, has no solution. This can be
seen by observing that x1 ` x2 ` x3 “ 2x1 ` 2x2 ` x3 “ 4 and hence
x1 ` x2 “ 0. Putting this back in the first equation gives x3 “ 4,
while putting it in the third equation gives x3 “ 3.

This is a contradiction! Invertibility of A is therefore necessary
for the system to have a unique solution.
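Both situations can be detected mechanically by comparing the rank of A with the rank of the augmented matrix rA : bs. A minimal sketch (NumPy's matrix_rank is my choice of tool here, not something used in the lecture):

```python
import numpy as np

A = np.array([[1., 1., 1.],
              [2., 2., 1.],
              [1., 1., 2.]])
b1 = np.array([4., 6., 6.])   # first system: infinitely many solutions
b2 = np.array([4., 4., 6.])   # second system: no solution

def classify(A, b):
    """Compare rank(A) with rank([A : b]) to classify the system A x = b."""
    rA = np.linalg.matrix_rank(A)
    rAb = np.linalg.matrix_rank(np.column_stack([A, b]))
    if rA < rAb:
        return "no solution"           # b is not in the column space of A
    if rA == A.shape[1]:
        return "unique solution"       # A is invertible
    return "infinitely many solutions"

print(classify(A, b1))  # infinitely many solutions
print(classify(A, b2))  # no solution
```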

System of linear equations

In the invertible case, we have Cramer’s rule:

xj “ pdet Aj q { pdet Aq

where Aj is the n ˆ n matrix obtained by replacing the j-th column
of A by the vector b. This method, however, requires far too
many operations: expanding each of the n ` 1 determinants by
cofactors costs roughly n! multiplications. If A is an upper triangular matrix,
    ¨ a11 a12 ¨ ¨ ¨ a1n ˛
    ˚  0  a22 ¨ ¨ ¨ a2n ‹
A “ ˚  ..  ..  . .  .. ‹
    ˝  .   .    .   .  ‚
       0   0  ¨ ¨ ¨ ann

then the system can be solved by an easier method, first solving for
xn , then for xn´1 and so on.
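For completeness, Cramer's rule itself is easy to code for small systems; in the sketch below the use of NumPy determinants and the 2 ˆ 2 system are my own choices for illustration. It also hints at why the rule is so expensive: it evaluates n ` 1 separate determinants.

```python
import numpy as np

def cramer(A, b):
    """Solve A x = b by Cramer's rule: x_j = det(A_j) / det(A),
    where A_j is A with its j-th column replaced by b."""
    d = np.linalg.det(A)
    if abs(d) < 1e-12:
        raise ValueError("A is (numerically) singular")
    n = len(b)
    x = np.empty(n)
    for j in range(n):
        Aj = A.copy()
        Aj[:, j] = b                      # replace the j-th column by b
        x[j] = np.linalg.det(Aj) / d
    return x

# Illustrative system: 2x1 + x2 = 5, x1 + 3x2 = 10.
A = np.array([[2., 1.], [1., 3.]])
b = np.array([5., 10.])
print(cramer(A, b))                       # x1 = 1, x2 = 3
```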

System of linear equations

This is a much simpler method, and so it is desirable to transform
a given system into one whose coefficient matrix is upper triangular.

This is indeed possible, and our old friend Carl Friedrich Gauß will
help us do it.

There is a method called the Gauß elimination method, GEM for


short, which we shall soon see.

This also leads to an interesting decomposition A “ LU of any


given matrix A as a product of a lower triangular matrix L with an
upper triangular matrix U. This is called an LU-decomposition.

We will see more of it later.

System of linear equations

We use three operations to simplify the given linear system of


equations E1 – En :
1 Equation Ei can be multiplied by any nonzero constant λ with
the resulting equation used in place of Ei . This operation is
denoted pλEi q Ñ pEi q.
2 Equation Ej can be multiplied by any constant λ and added to
equation Ei with the resulting equation used in place of Ei .
This operation is denoted pEi ` λEj q Ñ pEi q.
3 Equations Ei and Ej can be transposed in order. This
operation is denoted pEi q Ø pEj q.

By a sequence of these operations, a linear system will be


systematically transformed into a new linear system that is more
easily solved and has the same solutions.
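On the augmented matrix rA : bs these become row operations; here is a small sketch (the function names and the 2 ˆ 2 example are my own, not from the lecture):

```python
import numpy as np

def scale(M, i, lam):
    """(lam E_i) -> (E_i): multiply row i by a nonzero constant lam."""
    assert lam != 0
    M[i] *= lam

def add_multiple(M, i, j, lam):
    """(E_i + lam E_j) -> (E_i): add lam times row j to row i."""
    M[i] += lam * M[j]

def swap(M, i, j):
    """(E_i) <-> (E_j): interchange rows i and j."""
    M[[i, j]] = M[[j, i]]

# [A : b] for the illustrative system  x1 + 2x2 = 3,  2x1 + 2x2 = 4.
M = np.array([[1., 2., 3.],
              [2., 2., 4.]])
add_multiple(M, 1, 0, -2.0)   # eliminate x1 from E2: (E2 - 2 E1) -> (E2)
print(M)                      # second row becomes [0, -2, -2]
```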
An example

Let us consider an example:


E1 : x1 ` x2 ` 3x4 “ 4,
E2 : 2x1 ` x2 ´ x3 ` x4 “ 1,
E3 : 3x1 ´ x2 ´ x3 ` 2x4 “ ´3,
E4 : ´ x1 ` 2x2 ` 3x3 ´ x4 “ 4.
We will solve it for x1 , x2 , x3 and x4 .

We first use equation E1 to eliminate the unknown x1 from


equations E2 , E3 , and E4 by performing
pE2 ´ 2E1 q Ñ pE2 q,
pE3 ´ 3E1 q Ñ pE3 q and
pE4 ` E1 q Ñ pE4 q.

System of linear equations

The resulting system is as follows:


E1 : x1 ` x2 ` 3x4 “ 4,
E2 : ´ x2 ´ x3 ´ 5x4 “ ´ 7,
E3 : ´4x2 ´ x3 ´ 7x4 “ ´15,
E4 : 3x2 ` 3x3 ` 2x4 “ 8.
In the new system, E2 is used to eliminate the unknown x2 from E3
and E4 by performing pE3 ´ 4E2 q Ñ pE3 q and pE4 ` 3E2 q Ñ pE4 q.
This results in
E1 : x1 ` x2 ` 3x4 “ 4,
E2 : ´ x2 ´ x3 ´ 5x4 “ ´ 7,
E3 : ` 3x3 ` 13x4 “ 13,
E4 : ´13x4 “ ´13.

System of linear equations

The system of equations is now in triangular (or reduced) form and


can be solved for the unknowns by a backward-substitution process.

Since E4 implies x4 “ 1, we can solve E3 for x3 to give


x3 “ p1{3q p13 ´ 13x4 q “ p1{3q p13 ´ 13q “ 0.
Continuing, E2 gives

x2 “ ´p´7 ` 5x4 ` x3 q “ ´p´7 ` 5 ` 0q “ 2,

and E1 gives

x1 “ 4 ´ 3x4 ´ x2 “ 4 ´ 3 ´ 2 “ ´1.

The solution to the system is x1 “ ´1, x2 “ 2, x3 “ 0 and x4 “ 1.
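As a cross-check, a library solver reproduces this answer (NumPy is an assumption here; np.linalg.solve calls LAPACK, which itself performs Gaussian elimination via an LU factorization with pivoting):

```python
import numpy as np

# The 4x4 system E1-E4 from the example above.
A = np.array([[ 1.,  1.,  0.,  3.],
              [ 2.,  1., -1.,  1.],
              [ 3., -1., -1.,  2.],
              [-1.,  2.,  3., -1.]])
b = np.array([4., 1., -3., 4.])

x = np.linalg.solve(A, b)
print(x)   # x1 = -1, x2 = 2, x3 = 0, x4 = 1
```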

Gauß elimination method

We apply the Gauß elimination method to the linear system


E1 : a11 x1 ` a12 x2 ` ¨ ¨ ¨ ` a1n xn “ b1 ,
E2 : a21 x1 ` a22 x2 ` ¨ ¨ ¨ ` a2n xn “ b2 ,
.. .. ..
. . .
En : an1 x1 ` an2 x2 ` ¨ ¨ ¨ ` ann xn “ bn .
as follows.

We first form the augmented matrix

à “ rA : bs.

If a11 ‰ 0 then we perform pEj ´ paj1 {a11 qE1 q Ñ pEj q for each
j “ 2, 3, . . . , n to eliminate x1 in each of these rows.

Gauß elimination method

Although the entries in rows 2, 3, . . . , n are expected to change, for


ease of notation we again denote the entry in the i-th row and the
j-th column by aij .

With this in mind, we follow a sequential procedure for


i “ 2, 3, . . . , n ´ 1 and perform the operation

pEj ´ paji {aii qEi q Ñ pEj q

for each j “ i ` 1, i ` 2, . . . , n, provided aii ‰ 0.

This eliminates xi in each row below the i-th row for all values of
i “ 1, 2, . . . , n ´ 1. The resulting augmented matrix has the form

à “ rA : bs

where the matrix A is now upper triangular.


Gauß elimination method

The system now has the form


E1 : a11 x1 ` a12 x2 ` ¨ ¨ ¨ ` a1n xn “ b1 ,
E2 : a22 x2 ` ¨ ¨ ¨ ` a2n xn “ b2 ,
.. .. ..
. . .
En : ann xn “ bn .
so backward substitution can be performed.
Solving En for xn gives xn “ bn {ann . Then we solve for xn´1 , then for
xn´2 and so on, using

xi “ pbi ´ ai,n xn ´ ai,n´1 xn´1 ´ ¨ ¨ ¨ ´ ai,i`1 xi`1 q { aii
   “ pbi ´ řnj“i`1 aij xj q { aii .
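The elimination sweep and the backward substitution above combine into a bare-bones GEM sketch (Python/NumPy is my choice of language; no row interchanges are performed, so every pivot a_ii encountered is assumed nonzero):

```python
import numpy as np

def gauss_eliminate(A, b):
    """Solve A x = b by Gaussian elimination with backward substitution.
    No row interchanges: assumes every pivot a_ii turns out nonzero."""
    n = len(b)
    M = np.column_stack([A.astype(float), b.astype(float)])  # augmented [A : b]

    # Forward elimination: for each i, (E_j - (a_ji / a_ii) E_i) -> (E_j), j > i.
    for i in range(n - 1):
        if M[i, i] == 0:
            raise ZeroDivisionError("zero pivot; a row interchange is needed")
        for j in range(i + 1, n):
            M[j] -= (M[j, i] / M[i, i]) * M[i]

    # Backward substitution: x_i = (b_i - sum_{j>i} a_ij x_j) / a_ii.
    x = np.zeros(n)
    for i in range(n - 1, -1, -1):
        x[i] = (M[i, n] - M[i, i + 1:n] @ x[i + 1:]) / M[i, i]
    return x

# The 4x4 system from the earlier example; its solution is (-1, 2, 0, 1).
A = np.array([[ 1.,  1.,  0.,  3.],
              [ 2.,  1., -1.,  1.],
              [ 3., -1., -1.,  2.],
              [-1.,  2.,  3., -1.]])
b = np.array([4., 1., -3., 4.])
print(gauss_eliminate(A, b))
```

A production solver would add partial pivoting, i.e. the row-interchange operation pEi q Ø pEj q, to avoid zero or tiny pivots.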