
HISTORY

The field of numerical analysis predates the invention of modern computers by many centuries. Linear interpolation was already in use more than 2000 years ago. Many great mathematicians of the past were preoccupied by numerical analysis, as is obvious from the names of important algorithms like Newton's method, Lagrange interpolation polynomials, Gaussian elimination, or Euler's method. To facilitate computations by hand, large books were produced with formulas and tables of data such as interpolation points and function coefficients. Using these tables, often calculated out to 16 decimal places or more for some functions, one could look up values to plug into the formulas given and achieve very good numerical estimates of some functions. The canonical work in the field is the NIST publication edited by Abramowitz and Stegun, a 1000-plus-page book of a very large number of commonly used formulas and functions and their values at many points. The function values are no longer very useful when a computer is available, but the large listing of formulas can still be very handy.

The mechanical calculator was also developed as a tool for hand computation. These calculators evolved into electronic computers in the 1940s, and it was then found that these computers were also useful for administrative purposes. But the invention of the computer also influenced the field of numerical analysis, since now longer and more complicated calculations could be done.
INTRODUCTION

Numerical analysis is the study of algorithms that use numerical approximation (as opposed to general symbolic manipulations) for the problems of mathematical analysis (as distinguished from discrete mathematics). One of the earliest mathematical writings is a Babylonian tablet from the Yale Babylonian Collection (YBC 7289), which gives a sexagesimal numerical approximation of $\sqrt{2}$, the length of the diagonal in a unit square. Being able to compute the sides of a triangle (and hence, being able to compute square roots) is extremely important, for instance, in astronomy, carpentry and construction.[2]

Numerical analysis continues this long tradition of practical mathematical calculations. Much like the Babylonian approximation of $\sqrt{2}$, modern numerical analysis does not seek exact answers, because exact answers are often impossible to obtain in practice. Instead, much of numerical analysis is concerned with obtaining approximate solutions while maintaining reasonable bounds on errors.

Numerical analysis naturally finds applications in all fields of engineering and the physical sciences, but in the 21st century the life sciences and even the arts have also adopted elements of scientific computations. Ordinary differential equations appear in celestial mechanics (planets, stars and galaxies); numerical linear algebra is important for data analysis; stochastic differential equations and Markov chains are essential in simulating living cells for medicine and biology. Before the advent of modern computers, numerical methods often depended on hand interpolation in large printed tables. Since the mid-20th century, computers calculate the required functions instead.

These same interpolation formulas nevertheless continue to be used as part of the software algorithms for solving differential equations.

Advanced numerical methods are essential in making numerical weather prediction feasible.

Computing the trajectory of a spacecraft requires the accurate numerical solution of a system of ordinary differential equations.

Car companies can improve the crash safety of their vehicles by using computer simulations of car crashes. Such simulations essentially consist of solving partial differential equations numerically.

Hedge funds (private investment funds) use tools from all fields of numerical analysis to attempt to calculate the value of stocks and derivatives more precisely than other market participants.

RUNGE–KUTTA METHODS

In numerical analysis, the Runge–Kutta methods are an important family of implicit and explicit iterative methods, which are used in temporal discretization for the approximation of solutions of ordinary differential equations. These techniques were developed around 1900 by the German mathematicians Carl Runge and Martin Wilhelm Kutta.

See the article on numerical methods for ordinary differential equations for more background and other methods. See also the List of Runge–Kutta methods.

The Runge–Kutta method

One member of the family of Runge–Kutta methods is often referred to as "RK4", the "classical Runge–Kutta method" or simply as "the Runge–Kutta method".

Let an initial value problem be specified as follows:

$$\dot{y} = f(t, y), \qquad y(t_0) = y_0.$$

Here, y is an unknown function (scalar or vector) of time t which we would like to approximate; we are told that $\dot{y}$, the rate at which y changes, is a function of t and of y itself. At the initial time $t_0$ the corresponding y-value is $y_0$. The function f and the data $t_0$, $y_0$ are given.

Now pick a step size h > 0 and define

$$y_{n+1} = y_n + \tfrac{h}{6}\left(k_1 + 2k_2 + 2k_3 + k_4\right), \qquad t_{n+1} = t_n + h$$

for n = 0, 1, 2, 3, . . . , using

$$\begin{aligned} k_1 &= f(t_n, y_n), \\ k_2 &= f\left(t_n + \tfrac{h}{2},\, y_n + \tfrac{h}{2} k_1\right), \\ k_3 &= f\left(t_n + \tfrac{h}{2},\, y_n + \tfrac{h}{2} k_2\right), \\ k_4 &= f\left(t_n + h,\, y_n + h k_3\right). \end{aligned}$$

(Note: the above equations have different but equivalent definitions in different texts.)

Here $y_{n+1}$ is the RK4 approximation of $y(t_{n+1})$, and the next value ($y_{n+1}$) is determined by the present value ($y_n$) plus the weighted average of four increments, where each increment is the product of the size of the interval, h, and an estimated slope specified by the function f on the right-hand side of the differential equation.

$k_1$ is the increment based on the slope at the beginning of the interval, using $y_n$ (Euler's method);
$k_2$ is the increment based on the slope at the midpoint of the interval, using $y_n + \tfrac{h}{2} k_1$;
$k_3$ is again the increment based on the slope at the midpoint, but now using $y_n + \tfrac{h}{2} k_2$;
$k_4$ is the increment based on the slope at the end of the interval, using $y_n + h k_3$.

In averaging the four increments, greater weight is given to the increments at the midpoint. If $f$ is independent of $y$, so that the differential equation is equivalent to a simple integral, then RK4 is Simpson's rule.[3]

The RK4 method is a fourth-order method, meaning that the local truncation error is on the order of $O(h^5)$, while the total accumulated error is on the order of $O(h^4)$.
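As a concrete illustration, here is a minimal Python sketch of the RK4 update above; the function names (`rk4_step`, the test problem) are illustrative choices, not part of the original text:

```python
def rk4_step(f, t, y, h):
    """One step of the classical fourth-order Runge-Kutta method."""
    k1 = f(t, y)
    k2 = f(t + h / 2, y + h / 2 * k1)
    k3 = f(t + h / 2, y + h / 2 * k2)
    k4 = f(t + h, y + h * k3)
    return y + h / 6 * (k1 + 2 * k2 + 2 * k3 + k4)

# Example: y' = y, y(0) = 1, whose exact solution is e^t.
f = lambda t, y: y
t, y, h = 0.0, 1.0, 0.1
for _ in range(10):          # integrate to t = 1
    y = rk4_step(f, t, y, h)
    t += h
print(y)                     # ~2.718279, close to e = 2.718281...
```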

Tan Delin and Chen Zheng have developed a general formula for a fourth-order Runge–Kutta method as follows:

$$y_{n+1} = y_n + \tfrac{h}{6}\left(k_1 + (4 - \lambda) k_2 + \lambda k_3 + k_4\right)$$

for n = 0, 1, 2, 3, . . . , using

$$\begin{aligned} k_1 &= f(t_n, y_n), \\ k_2 &= f\left(t_n + \tfrac{h}{2},\, y_n + \tfrac{h}{2} k_1\right), \\ k_3 &= f\left(t_n + \tfrac{h}{2},\, y_n + \left(\tfrac{1}{2} - \tfrac{1}{\lambda}\right) h k_1 + \tfrac{1}{\lambda} h k_2\right), \\ k_4 &= f\left(t_n + h,\, y_n + \left(1 - \tfrac{\lambda}{2}\right) h k_2 + \tfrac{\lambda}{2} h k_3\right), \end{aligned}$$

where $\lambda$ is a free parameter. Choosing $\lambda = 2$, this is the classical fourth-order Runge–Kutta method. With $\lambda \neq 2$, this formula produces other fourth-order Runge–Kutta methods.
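A quick numerical sanity check of this family (a sketch; the λ-parameterized coefficients above are a reconstruction and should be compared against the cited paper) is to confirm that λ = 2 reproduces the classical RK4 step:

```python
def rk4_lambda_step(f, t, y, h, lam):
    """One step of the lambda-parameterized fourth-order family (lam != 0)."""
    k1 = f(t, y)
    k2 = f(t + h / 2, y + h / 2 * k1)
    k3 = f(t + h / 2, y + (0.5 - 1 / lam) * h * k1 + (1 / lam) * h * k2)
    k4 = f(t + h, y + (1 - lam / 2) * h * k2 + (lam / 2) * h * k3)
    return y + h / 6 * (k1 + (4 - lam) * k2 + lam * k3 + k4)

f = lambda t, y: t * y                           # an arbitrary smooth test problem
print(rk4_lambda_step(f, 0.0, 1.0, 0.1, 2.0))    # lam = 2: identical to rk4_step
print(rk4_lambda_step(f, 0.0, 1.0, 0.1, 3.0))    # other lam: different, still 4th order
```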

Explicit Runge–Kutta methods

The family of explicit Runge–Kutta methods is a generalization of the RK4 method mentioned above. It is given by

$$y_{n+1} = y_n + h \sum_{i=1}^{s} b_i k_i,$$

where

$$\begin{aligned} k_1 &= f(t_n, y_n), \\ k_2 &= f(t_n + c_2 h,\, y_n + h a_{21} k_1), \\ k_3 &= f(t_n + c_3 h,\, y_n + h (a_{31} k_1 + a_{32} k_2)), \\ &\;\;\vdots \\ k_s &= f(t_n + c_s h,\, y_n + h (a_{s1} k_1 + a_{s2} k_2 + \cdots + a_{s,s-1} k_{s-1})).[4] \end{aligned}$$

(Note: the above equations have different but equivalent definitions in different texts.)[2]

To specify a particular method, one needs to provide the integer s (the number of stages), and the coefficients $a_{ij}$ (for $1 \le j < i \le s$), $b_i$ (for i = 1, 2, ..., s) and $c_i$ (for i = 2, 3, ..., s). The matrix $[a_{ij}]$ is called the Runge–Kutta matrix, while the $b_i$ and $c_i$ are known as the weights and the nodes.[5] These data are usually arranged in a mnemonic device, known as a Butcher tableau (after John C. Butcher):

$$\begin{array}{c|ccccc}
0 & & & & & \\
c_2 & a_{21} & & & & \\
c_3 & a_{31} & a_{32} & & & \\
\vdots & \vdots & & \ddots & & \\
c_s & a_{s1} & a_{s2} & \cdots & a_{s,s-1} & \\
\hline
 & b_1 & b_2 & \cdots & b_{s-1} & b_s
\end{array}$$
The Runge–Kutta method is consistent if

$$\sum_{j=1}^{i-1} a_{ij} = c_i \quad \text{for } i = 2, \ldots, s.$$
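The tableau data translate directly into code. Below is a minimal Python sketch (the helper name `explicit_rk_step` is illustrative) of a generic explicit Runge–Kutta step driven by a Butcher tableau (A, b, c):

```python
def explicit_rk_step(f, t, y, h, A, b, c):
    """One explicit Runge-Kutta step defined by a Butcher tableau (A, b, c).

    A is strictly lower triangular (s x s), b and c have length s, c[0] = 0.
    """
    s = len(b)
    k = [None] * s
    for i in range(s):
        # Only coefficients a_ij with j < i are used: the method is explicit.
        yi = y + h * sum(A[i][j] * k[j] for j in range(i))
        k[i] = f(t + c[i] * h, yi)
    return y + h * sum(b[i] * k[i] for i in range(s))
```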

There are also accompanying requirements if one requires the method to have a certain order p, meaning that the local truncation error is $O(h^{p+1})$. These can be derived from the definition of the truncation error itself. For example, a two-stage method has order 2 if $b_1 + b_2 = 1$, $b_2 c_2 = 1/2$, and $a_{21} = c_2$.[6]

In general, if an explicit s-stage Runge–Kutta method has order p, then $s \ge p$, and if $p \ge 5$, then $s \ge p + 1$.[7] The minimum s required for an explicit s-stage Runge–Kutta method to have order p is an open problem. Some values which are known are:[8]

$$\begin{array}{c|cccccccc}
p & 1 & 2 & 3 & 4 & 5 & 6 & 7 & 8 \\
\hline
\min s & 1 & 2 & 3 & 4 & 6 & 7 & 9 & 11
\end{array}$$
Examples

The RK4 method falls in this framework. Its tableau is:[9]

$$\begin{array}{c|cccc}
0 & & & & \\
1/2 & 1/2 & & & \\
1/2 & 0 & 1/2 & & \\
1 & 0 & 0 & 1 & \\
\hline
 & 1/6 & 1/3 & 1/3 & 1/6
\end{array}$$
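Using the generic stepper sketched earlier, this tableau can be plugged in directly (a hypothetical snippet for illustration; it reproduces the `rk4_step` result above):

```python
A = [[0, 0, 0, 0],
     [1/2, 0, 0, 0],
     [0, 1/2, 0, 0],
     [0, 0, 1, 0]]
b = [1/6, 1/3, 1/3, 1/6]
c = [0, 1/2, 1/2, 1]

f = lambda t, y: y
# One RK4 step driven by the tableau; matches rk4_step(f, 0.0, 1.0, 0.1).
print(explicit_rk_step(f, 0.0, 1.0, 0.1, A, b, c))
```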

A slight variation of "the" Runge–Kutta method is also due to Kutta in 1901 and is called the 3/8-rule.[10] The primary advantage this method has is that almost all of the error coefficients are smaller than in the popular method, but it requires slightly more FLOPs (floating-point operations) per time step. Its Butcher tableau is given by:

$$\begin{array}{c|cccc}
0 & & & & \\
1/3 & 1/3 & & & \\
2/3 & -1/3 & 1 & & \\
1 & 1 & -1 & 1 & \\
\hline
 & 1/8 & 3/8 & 3/8 & 1/8
\end{array}$$

However, the simplest Runge–Kutta method is the (forward) Euler method, given by the formula $y_{n+1} = y_n + h f(t_n, y_n)$. This is the only consistent explicit Runge–Kutta method with one stage. The corresponding tableau is:

$$\begin{array}{c|c}
0 & \\
\hline
 & 1
\end{array}$$

Second-order methods with two stages

An example of a second-order method with two stages is provided by the midpoint method:

$$y_{n+1} = y_n + h f\left(t_n + \tfrac{h}{2},\, y_n + \tfrac{h}{2} f(t_n, y_n)\right).$$

The corresponding tableau is:

$$\begin{array}{c|cc}
0 & & \\
1/2 & 1/2 & \\
\hline
 & 0 & 1
\end{array}$$

The midpoint method is not the only second-order Runge–Kutta method with two stages; there is a family of such methods, parameterized by $\alpha$ and given by the formula[11]

$$y_{n+1} = y_n + h\left(\left(1 - \tfrac{1}{2\alpha}\right) f(t_n, y_n) + \tfrac{1}{2\alpha} f\left(t_n + \alpha h,\, y_n + \alpha h f(t_n, y_n)\right)\right).$$

Its Butcher tableau is:

$$\begin{array}{c|cc}
0 & & \\
\alpha & \alpha & \\
\hline
 & 1 - \tfrac{1}{2\alpha} & \tfrac{1}{2\alpha}
\end{array}$$

In this family, $\alpha = \tfrac{1}{2}$ gives the midpoint method and $\alpha = 1$ is Heun's method.[3]
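As a small sketch (the helper name is hypothetical), the whole family can be generated as tableaus for the generic stepper above:

```python
def two_stage_tableau(alpha):
    """Butcher tableau of the two-stage second-order family above."""
    A = [[0, 0],
         [alpha, 0]]
    b = [1 - 1 / (2 * alpha), 1 / (2 * alpha)]
    c = [0, alpha]
    return A, b, c

A, b, c = two_stage_tableau(0.5)   # alpha = 1/2: the midpoint method
A, b, c = two_stage_tableau(1.0)   # alpha = 1: Heun's method
```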

Usage

As an example, consider the two-stage second-order Runge–Kutta method with $\alpha = 2/3$. It is given by the tableau

$$\begin{array}{c|cc}
0 & & \\
2/3 & 2/3 & \\
\hline
 & 1/4 & 3/4
\end{array}$$

with the corresponding equations

$$\begin{aligned} k_1 &= f(t_n, y_n), \\ k_2 &= f\left(t_n + \tfrac{2}{3} h,\, y_n + \tfrac{2}{3} h k_1\right), \\ y_{n+1} &= y_n + h\left(\tfrac{1}{4} k_1 + \tfrac{3}{4} k_2\right). \end{aligned}$$

This method is used to solve the initial-value problem

$$\frac{dy}{dt} = \tan(y) + 1, \qquad y(1) = 1, \quad t \in [1, 1.1],$$

with step size h = 0.025, so the method needs to take four steps. The numerical solution is the sequence of $y_n$ values produced by those four steps, which can be reproduced with the sketch below.
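A short Python sketch (illustrative, not part of the original text) carrying out the four steps of the stated initial-value problem:

```python
import math

def f(t, y):
    return math.tan(y) + 1.0

t, y, h = 1.0, 1.0, 0.025
for n in range(4):
    k1 = f(t, y)
    k2 = f(t + 2 * h / 3, y + 2 * h / 3 * k1)
    y = y + h * (k1 / 4 + 3 * k2 / 4)
    t += h
    print(f"t = {t:.3f}, y = {y:.9f}")
```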


Adaptive Runge–Kutta methods

The adaptive methods are designed to produce an estimate of the local truncation error of a single Runge–Kutta step. This is done by having two methods in the tableau, one with order $p$ and one with order $p - 1$.

The lower-order step is given by

$$y^*_{n+1} = y_n + h \sum_{i=1}^{s} b^*_i k_i,$$

where the $k_i$ are the same as for the higher-order method. Then the error is

$$e_{n+1} = y_{n+1} - y^*_{n+1} = h \sum_{i=1}^{s} (b_i - b^*_i) k_i,$$

which is $O(h^p)$. The Butcher tableau for this kind of method is extended to give the values of $b^*_i$:

$$\begin{array}{c|ccccc}
0 & & & & & \\
c_2 & a_{21} & & & & \\
c_3 & a_{31} & a_{32} & & & \\
\vdots & \vdots & & \ddots & & \\
c_s & a_{s1} & a_{s2} & \cdots & a_{s,s-1} & \\
\hline
 & b_1 & b_2 & \cdots & b_{s-1} & b_s \\
 & b^*_1 & b^*_2 & \cdots & b^*_{s-1} & b^*_s
\end{array}$$
The Runge–Kutta–Fehlberg method has two methods of orders 5 and 4. Its extended Butcher tableau is:

$$\begin{array}{c|cccccc}
0 & & & & & & \\
1/4 & 1/4 & & & & & \\
3/8 & 3/32 & 9/32 & & & & \\
12/13 & 1932/2197 & -7200/2197 & 7296/2197 & & & \\
1 & 439/216 & -8 & 3680/513 & -845/4104 & & \\
1/2 & -8/27 & 2 & -3544/2565 & 1859/4104 & -11/40 & \\
\hline
 & 16/135 & 0 & 6656/12825 & 28561/56430 & -9/50 & 2/55 \\
 & 25/216 & 0 & 1408/2565 & 2197/4104 & -1/5 & 0
\end{array}$$

However, the simplest adaptive Runge–Kutta method involves combining Heun's method, which is order 2, with the Euler method, which is order 1. Its extended Butcher tableau is:

$$\begin{array}{c|cc}
0 & & \\
1 & 1 & \\
\hline
 & 1/2 & 1/2 \\
 & 1 & 0
\end{array}$$

The error estimate is used to control the step size.
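A minimal sketch of such step-size control, using the Heun–Euler pair above (the controller constants, safety factor and function names are illustrative choices, not prescribed by the text):

```python
def heun_euler_adaptive(f, t, y, h, tol):
    """Attempt one step; return (accepted, t, y, new_h)."""
    k1 = f(t, y)
    k2 = f(t + h, y + h * k1)
    y_high = y + h * (k1 + k2) / 2          # Heun's method, order 2
    y_low = y + h * k1                      # Euler method, order 1
    err = abs(y_high - y_low)               # local error estimate
    # Standard heuristic: grow/shrink h toward the error target.
    new_h = 0.9 * h * min(2.0, max(0.2, (tol / err) ** 0.5)) if err > 0 else 2 * h
    if err <= tol:
        return True, t + h, y_high, new_h   # accept the step
    return False, t, y, new_h               # reject and retry with smaller h

f = lambda t, y: -2 * t * y                 # test problem y' = -2ty
t, y, h = 0.0, 1.0, 0.1
while t < 1.0:
    ok, t, y, h = heun_euler_adaptive(f, t, y, min(h, 1.0 - t), 1e-4)
print(y)                                    # approximately exp(-1) = 0.3678...
```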


Other adaptive Runge–Kutta methods are the Bogacki–Shampine method (orders 3 and 2), the Cash–Karp method and the Dormand–Prince method (both with orders 5 and 4).
Nonconfluent Runge–Kutta methods

A Runge–Kutta method is said to be nonconfluent[12] if all the $c_i$, $i = 1, 2, \ldots, s$, are distinct.

Implicit Runge–Kutta methods

All Runge–Kutta methods mentioned up to now are explicit methods. Explicit Runge–Kutta methods are generally unsuitable for the solution of stiff equations because their region of absolute stability is small; in particular, it is bounded.[13] This issue is especially important in the solution of partial differential equations.

The instability of explicit Runge–Kutta methods motivates the development of implicit methods. An implicit Runge–Kutta method has the form

$$y_{n+1} = y_n + h \sum_{i=1}^{s} b_i k_i,$$

where

$$k_i = f\left(t_n + c_i h,\, y_n + h \sum_{j=1}^{s} a_{ij} k_j\right), \quad i = 1, \ldots, s.[14]$$

The difference with an explicit method is that in an explicit method, the sum over j only goes up to i − 1. This also shows up in the Butcher tableau: the coefficient matrix $[a_{ij}]$ of an explicit method is strictly lower triangular. In an implicit method, the sum over j goes up to s and the coefficient matrix is not triangular, yielding a Butcher tableau of the form[9]

$$\begin{array}{c|cccc}
c_1 & a_{11} & a_{12} & \cdots & a_{1s} \\
c_2 & a_{21} & a_{22} & \cdots & a_{2s} \\
\vdots & \vdots & \vdots & \ddots & \vdots \\
c_s & a_{s1} & a_{s2} & \cdots & a_{ss} \\
\hline
 & b_1 & b_2 & \cdots & b_s
\end{array}$$

The consequence of this difference is that at every step, a system of algebraic equations has to be solved. This increases the computational cost considerably. If a method with s stages is used to solve a differential equation with m components, then the system of algebraic equations has ms components. This can be contrasted with implicit linear multistep methods (the other big family of methods for ODEs): an implicit s-step linear multistep method needs to solve a system of algebraic equations with only m components, so the size of the system does not increase as the number of steps increases.[15]
Examples

The simplest example of an implicit Runge–Kutta method is the backward Euler method:

$$y_{n+1} = y_n + h f(t_{n+1}, y_{n+1}).$$

The Butcher tableau for this is simply:

$$\begin{array}{c|c}
1 & 1 \\
\hline
 & 1
\end{array}$$

This Butcher tableau corresponds to the formulae

$$k_1 = f(t_n + h,\, y_n + h k_1), \qquad y_{n+1} = y_n + h k_1,$$

which can be re-arranged to get the formula for the backward Euler method listed above.
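Because $y_{n+1}$ appears on both sides, each step requires solving an algebraic equation. A minimal sketch, assuming a scalar problem and using Newton's method with a finite-difference derivative (the helper name and tolerances are illustrative):

```python
def backward_euler_step(f, t, y, h, tol=1e-12, max_iter=50):
    """One backward Euler step: solve g(z) = z - y - h*f(t+h, z) = 0 by Newton."""
    z = y                                    # initial guess: the current value
    for _ in range(max_iter):
        g = z - y - h * f(t + h, z)
        eps = 1e-8                           # finite-difference step for g'(z)
        dg = 1.0 - h * (f(t + h, z + eps) - f(t + h, z)) / eps
        z_next = z - g / dg
        if abs(z_next - z) < tol:
            return z_next
        z = z_next
    return z

# Stiff-ish test problem y' = -15y, y(0) = 1; backward Euler stays stable.
f = lambda t, y: -15.0 * y
t, y, h = 0.0, 1.0, 0.1
for _ in range(10):
    y = backward_euler_step(f, t, y, h)
    t += h
print(y)   # decays toward 0, as the exact solution e^(-15t) does
```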

Another example of an implicit Runge–Kutta method is the trapezoidal rule. Its Butcher tableau is:

$$\begin{array}{c|cc}
0 & 0 & 0 \\
1 & 1/2 & 1/2 \\
\hline
 & 1/2 & 1/2
\end{array}$$

The trapezoidal rule is a collocation method (as discussed in that article). All collocation methods are implicit Runge–Kutta methods, but not all implicit Runge–Kutta methods are collocation methods.[16]

Gauss–Legendre methods form a family of collocation methods based on Gauss quadrature. A Gauss–Legendre method with s stages has order 2s (thus, methods with arbitrarily high order can be constructed).[17] The method with two stages (and thus order four) has Butcher tableau:[15]

$$\begin{array}{c|cc}
\tfrac{1}{2} - \tfrac{\sqrt{3}}{6} & \tfrac{1}{4} & \tfrac{1}{4} - \tfrac{\sqrt{3}}{6} \\
\tfrac{1}{2} + \tfrac{\sqrt{3}}{6} & \tfrac{1}{4} + \tfrac{\sqrt{3}}{6} & \tfrac{1}{4} \\
\hline
 & \tfrac{1}{2} & \tfrac{1}{2}
\end{array}$$

Stability

The advantage of implicit Runge–Kutta methods over explicit ones is their greater stability, especially when applied to stiff equations. Consider the linear test equation $y' = \lambda y$. A Runge–Kutta method applied to this equation reduces to the iteration $y_{n+1} = r(h\lambda)\, y_n$, with r given by

$$r(z) = 1 + z b^{T} (I - zA)^{-1} e = \frac{\det(I - zA + z e b^{T})}{\det(I - zA)},[18]$$

where e stands for the vector of ones. The function r is called the stability function.[19] It follows from the formula that r is the quotient of two polynomials of degree s if the method has s stages. Explicit methods have a strictly lower triangular matrix A, which implies that det(I − zA) = 1 and that the stability function is a polynomial.[20]

The numerical solution to the linear test equation decays to zero if |r(z)| < 1 with z = hλ. The set of such z is called the domain of absolute stability. In particular, the method is said to be A-stable if all z with Re(z) < 0 are in the domain of absolute stability. The stability function of an explicit Runge–Kutta method is a polynomial, so explicit Runge–Kutta methods can never be A-stable.[20]
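A small numerical sketch (a hypothetical helper using NumPy) evaluates $r(z)$ directly from a tableau; for forward Euler it reproduces $r(z) = 1 + z$, and for backward Euler $r(z) = 1/(1 - z)$:

```python
import numpy as np

def stability_function(A, b, z):
    """Evaluate r(z) = 1 + z * b^T (I - zA)^{-1} e for a Butcher tableau."""
    A, b = np.asarray(A, float), np.asarray(b, float)
    s = len(b)
    e = np.ones(s)
    return 1.0 + z * b @ np.linalg.solve(np.eye(s) - z * A, e)

# Forward Euler: A = [[0]], b = [1]  ->  r(z) = 1 + z.
print(stability_function([[0.0]], [1.0], -0.5))   # 0.5
# Backward Euler: A = [[1]], b = [1]  ->  r(z) = 1/(1 - z), A-stable.
print(stability_function([[1.0]], [1.0], -0.5))   # 0.666...
```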
If the method has order p, then the stability function satisfies $r(z) = e^{z} + O(z^{p+1})$ as $z \to 0$. Thus, it is of interest to study quotients of polynomials of given degrees that approximate the exponential function the best. These are known as Padé approximants. A Padé approximant with numerator of degree m and denominator of degree n is A-stable if and only if $m \le n \le m + 2$.[21]

The Gauss–Legendre method with s stages has order 2s, so its stability function is the Padé approximant with m = n = s. It follows that the method is A-stable.[22] This shows that A-stable Runge–Kutta methods can have arbitrarily high order. In contrast, the order of A-stable linear multistep methods cannot exceed two.[23]
The A-stability concept for the solution of differential equations is related to the linear autonomous equation $y' = \lambda y$. Dahlquist proposed the investigation of stability of numerical schemes when applied to nonlinear systems that satisfy a monotonicity condition. The corresponding concepts were defined as G-stability for multistep methods (and the related one-leg methods) and B-stability (Butcher, 1975) for Runge–Kutta methods. A Runge–Kutta method applied to the nonlinear system $y' = f(t, y)$, which verifies $\langle f(t, y) - f(t, z),\, y - z \rangle \le 0$, is called B-stable if this condition implies $\|y_{n+1} - z_{n+1}\| \le \|y_n - z_n\|$ for two numerical solutions.

Let $B$, $M$ and $Q$ be three $s \times s$ matrices defined by

$$B = \operatorname{diag}(b_1, \ldots, b_s), \qquad M = BA + A^{T}B - bb^{T}, \qquad Q = BA^{-1} + A^{-T}B - A^{-T}bb^{T}A^{-1}.$$

A Runge–Kutta method is said to be algebraically stable[24] if the matrices $B$ and $M$ are both non-negative definite. A sufficient condition for B-stability[25] is: $B$ and $Q$ are non-negative definite.
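A quick check of algebraic stability (a sketch; the matrix definitions follow the reconstruction above and should be compared with the cited source) for the two-stage Gauss–Legendre method:

```python
import numpy as np

r3 = np.sqrt(3.0)
A = np.array([[1/4, 1/4 - r3/6],
              [1/4 + r3/6, 1/4]])
b = np.array([1/2, 1/2])

B = np.diag(b)
M = B @ A + A.T @ B - np.outer(b, b)

# Algebraic stability: B and M must be non-negative definite.
print(np.linalg.eigvalsh(B))   # [0.5, 0.5]  >= 0
print(np.linalg.eigvalsh(M))   # ~[0, 0]: M vanishes for Gauss-Legendre
```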
Derivation of the Runge–Kutta fourth-order method

In general a Runge–Kutta method of order $s$ can be written as

$$y_{t+h} = y_t + h \sum_{i=1}^{s} a_i k_i + O(h^{s+1}),$$

where the $k_i$ are increments obtained by evaluating the derivatives of $y_t$ at the $i$-th order.

We develop the derivation[26] for the Runge–Kutta fourth-order method using the general formula with $s = 4$, evaluated, as explained above, at the starting point, the midpoint and the end point of any interval $(t, t + h)$; thus we choose the nodes $0, \tfrac{1}{2}, \tfrac{1}{2}, 1$, the coupling coefficients $\beta_{21} = \tfrac{1}{2}$, $\beta_{32} = \tfrac{1}{2}$, $\beta_{43} = 1$, and $\beta_{ij} = 0$ otherwise. We begin by defining the increments

$$\begin{aligned} k_1 &= f(y_t, t), \\ k_2 &= f\left(y_t + \tfrac{h}{2} k_1,\, t + \tfrac{h}{2}\right), \\ k_3 &= f\left(y_t + \tfrac{h}{2} k_2,\, t + \tfrac{h}{2}\right), \\ k_4 &= f\left(y_t + h k_3,\, t + h\right). \end{aligned}$$

Each $k_i$ is expanded in a Taylor series around $(y_t, t)$, using the total derivative of $f$ with respect to time,

$$\frac{df}{dt} = \frac{\partial f}{\partial t} + \frac{\partial f}{\partial y}\,\dot{y} = f_t + f_y f.$$

If we now express the general formula using what we just derived, and compare it with the Taylor series of $y_{t+h}$ around $t$,

$$y_{t+h} = y_t + h \dot{y}_t + \tfrac{h^2}{2} \ddot{y}_t + \tfrac{h^3}{6} y^{(3)}_t + \tfrac{h^4}{24} y^{(4)}_t + O(h^5),$$

we obtain a system of constraints on the coefficients, beginning with

$$a_1 + a_2 + a_3 + a_4 = 1, \qquad \tfrac{1}{2} a_2 + \tfrac{1}{2} a_3 + a_4 = \tfrac{1}{2}, \qquad \tfrac{1}{4} a_2 + \tfrac{1}{4} a_3 + a_4 = \tfrac{1}{3}, \qquad \ldots$$

which when solved gives

$$a_1 = \tfrac{1}{6}, \quad a_2 = \tfrac{1}{3}, \quad a_3 = \tfrac{1}{3}, \quad a_4 = \tfrac{1}{6},$$

as stated above.
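The fourth-order claim can also be checked empirically (a sketch with an illustrative test problem): halving h should shrink the global error by roughly $2^4 = 16$.

```python
import math

def rk4_solve(f, t0, y0, t_end, n):
    """Integrate y' = f(t, y) from t0 to t_end with n classical RK4 steps."""
    h = (t_end - t0) / n
    t, y = t0, y0
    for _ in range(n):
        k1 = f(t, y)
        k2 = f(t + h / 2, y + h / 2 * k1)
        k3 = f(t + h / 2, y + h / 2 * k2)
        k4 = f(t + h, y + h * k3)
        y += h / 6 * (k1 + 2 * k2 + 2 * k3 + k4)
        t += h
    return y

f = lambda t, y: y                       # exact solution: e^t
for n in (10, 20, 40):
    err = abs(rk4_solve(f, 0.0, 1.0, 1.0, n) - math.e)
    print(n, err)                        # error drops ~16x per halving of h
```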

DIRECT AND ITERATIVE METHODS

Direct methods compute the solution to a problem in a finite number of steps. These methods would give the precise answer if they were performed in infinite-precision arithmetic. Examples include Gaussian elimination, the QR factorization method for solving systems of linear equations, and the simplex method of linear programming. In practice, finite precision is used and the result is an approximation of the true solution (assuming stability).

In contrast to direct methods, iterative methods are not expected to terminate in a finite number of steps. Starting from an initial guess, iterative methods form successive approximations that converge to the exact solution only in the limit. A convergence test, often involving the residual, is specified in order to decide when a sufficiently accurate solution has (hopefully) been found. Even using infinite-precision arithmetic these methods would not reach the solution within a finite number of steps (in general). Examples include Newton's method, the bisection method, and Jacobi iteration. In computational matrix algebra, iterative methods are generally needed for large problems.

Iterative methods are more common than direct methods in numerical analysis. Some methods are direct in principle but are usually used as though they were not, e.g. GMRES and the conjugate gradient method. For these methods the number of steps needed to obtain the exact solution is so large that an approximation is accepted in the same manner as for an iterative method.
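For instance, here is a minimal sketch of one such iterative method, Newton's method for a scalar root, with the convergence test on the residual (the tolerances are illustrative):

```python
def newton(g, dg, x, tol=1e-12, max_iter=50):
    """Find a root of g by Newton's method, stopping on a small residual."""
    for _ in range(max_iter):
        r = g(x)
        if abs(r) < tol:       # convergence test on the residual
            return x
        x -= r / dg(x)
    return x

# Square root of 2 as the root of g(x) = x^2 - 2, echoing YBC 7289.
print(newton(lambda x: x * x - 2, lambda x: 2 * x, 1.0))  # 1.41421356...
```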
Much effort has been put into the development of methods for solving systems of linear equations. Standard direct methods, i.e., methods that use some matrix decomposition, are Gaussian elimination, LU decomposition, Cholesky decomposition for symmetric (or Hermitian) positive-definite matrices, and QR decomposition for non-square matrices. Iterative methods such as the Jacobi method, the Gauss–Seidel method, successive over-relaxation and the conjugate gradient method are usually preferred for large systems. General iterative methods can be developed using a matrix splitting, as sketched below.
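A minimal sketch of the Jacobi method viewed as the matrix splitting $A = D + R$ (using NumPy; the example matrix is an illustrative, diagonally dominant choice so the iteration converges):

```python
import numpy as np

def jacobi(A, b, iterations=100):
    """Solve Ax = b by Jacobi iteration: split A = D + R, iterate x <- D^{-1}(b - Rx)."""
    D = np.diag(A)                 # diagonal part of the splitting
    R = A - np.diagflat(D)         # remainder
    x = np.zeros_like(b, dtype=float)
    for _ in range(iterations):
        x = (b - R @ x) / D
    return x

A = np.array([[4.0, 1.0, 0.0],
              [1.0, 4.0, 1.0],
              [0.0, 1.0, 4.0]])
b = np.array([1.0, 2.0, 3.0])
x = jacobi(A, b)
print(np.allclose(A @ x, b))       # True: the iteration has converged
```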

References

Hildebrand, F. B. (1974). Introduction to Numerical Analysis (2nd ed.). McGraw-Hill. ISBN 0-07-028761-9.
