
Runge-Kutta

(Pardon this sounding like a television commercial, it just happens to be true.) During the instructor’s
graduate school days the standard differential equation solver was the fourth-order Runge-Kutta method. It
has more precision than Euler’s algorithm, is easy to start and program, and is relatively fast. This classic
then lost its appeal in favor of more intelligent, self-correcting methods. Yet with experience, the faults in
these other methods have made them less attractive, so that many computational scientists are again returning to
Runge-Kutta. In fact, a fourth-order Runge-Kutta with a self-adjusting step size has proven itself to be robust
and capable of industrial-strength work. In this section we derive the second-order Runge-Kutta method and
give the results and a sample subroutine for the fourth-order method.

An approach to developing an algorithm for integrating a differential equation is to write down the formal
integral of the equation and then approximate the integral:

    y_{n+1} = y_n + ∫_{t_n}^{t_{n+1}} f(t, y) dt.                                   (64)

The Runge-Kutta method approximates the formal solution (64) by expanding f(t,y) in a Taylor series about
the midpoint t_{n+1/2} = t_n + h/2 of the integration subinterval,

    f(t, y) ≈ f(t_{n+1/2}, y_{n+1/2}) + (t - t_{n+1/2}) df/dt|_{n+1/2} + O(h^2).    (65)

Yet since the integral of the linear term (t - t_{n+1/2}) vanishes when evaluated symmetrically about the
midpoint, we automatically get improved precision using only the first term in (65):

    y_{n+1} ≈ y_n + h f(t_{n+1/2}, y_{n+1/2}).                                      (68)

This algorithm effectively uses the derivative of the function evaluated at the midpoint to predict the solution
at the end of the subinterval.  (Note that the algorithm does not use the midpoint again when computing the
next interval, since we want the freedom to employ an adaptive step size; that is, h may not be the same for the
next interval.)  Since the error in (68) is O(h^3), it is more accurate than Euler's algorithm. However, (65)
cannot be applied immediately since it requires a knowledge of y_{n+1/2}, which is not in the scheme of things.
We thus approximate y_{n+1/2} with Euler's algorithm (which may only be O(h^2), but when multiplied by the
h in (65), keeps our error O(h^3)):

    y_{n+1/2} ≈ y_n + (h/2) f(t_n, y_n).                                            (69)

Substituting this into (68) gives the second-order Runge-Kutta algorithm:

    k_1 = h f(t_n, y_n),
    k_2 = h f(t_n + h/2, y_n + k_1/2),                                              (71)
    y_{n+1} ≈ y_n + k_2.

The second-order Runge-Kutta algorithm (71) requires the known derivative function f at the endpoints and
midpoint of the interval, and the unknown function y at the previous point. Since we start with initial
conditions, the algorithm is self-starting. Note too that it is applicable with a general function f (for example,
a nonlinear one), and is simple to program.  (If the simultaneous equations were actually higher-order differential
equations, we would simultaneously be solving for some intermediate values of the derivative functions, and
so the intermediate values of some y's might need to be approximated.)
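In a language like Python, a single step of (71) might be sketched as below; the function and variable names here are our own choices for illustration, not from any particular library or from the course's original subroutine:

```python
def rk2_step(f, t, y, h):
    """Advance y by one second-order Runge-Kutta (midpoint) step, eq. (71).

    k1 Euler-estimates the change over half a step; k2 is h times the
    derivative at that estimated midpoint, and it alone advances y."""
    k1 = h * f(t, y)
    k2 = h * f(t + h / 2, y + k1 / 2)
    return y + k2

# Quick check on dy/dt = -y with y(0) = 1, whose exact solution is exp(-t):
t, y, h = 0.0, 1.0, 0.01
for _ in range(100):              # integrate out to t = 1
    y = rk2_step(lambda t, y: -y, t, y, h)
    t += h
```

Because the derivative function f is passed in as an argument, the same stepper serves any first-order equation, and an adaptive-step driver would simply vary h between calls.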
As an example, we apply this algorithm to our spring problem for the first interval:

    x_{1/2} ≈ x_0 + (h/2) v_0,          v_{1/2} ≈ v_0 - (h/2)(k/m) x_0,
    x_1 ≈ x_0 + h v_{1/2} = x_0 + v_0 h - (k/m) x_0 h^2/2,
    v_1 ≈ v_0 - h (k/m) x_{1/2} = v_0 - (k/m) x_0 h - (k/m) v_0 h^2/2.

This looks more complicated than it would be to program.  We continue using these same equations to
progressively step (integrate) outwards. Note that, at last, the algorithm's equation for the distance covered in
one step is as sophisticated as the one known by all first-year physics students, x = x_0 + v_0 t + a t^2/2.
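The stepping-outward loop for the spring can be sketched as follows; here the list-based form of the stepper, the value k/m = 1, and the initial conditions are all illustrative assumptions:

```python
def rk2_step(f, t, y, h):
    """Second-order Runge-Kutta step for a system y' = f(t, y),
    with y and f(t, y) given as lists of the same length."""
    k1 = [h * fi for fi in f(t, y)]
    ymid = [yi + k1i / 2 for yi, k1i in zip(y, k1)]
    k2 = [h * fi for fi in f(t + h / 2, ymid)]
    return [yi + k2i for yi, k2i in zip(y, k2)]

# The spring equation rewritten as two simultaneous first-order equations:
#   y[0] = x (position), y[1] = v (velocity);
#   dx/dt = v,  dv/dt = -(k/m) x.
k_over_m = 1.0                    # illustrative value of k/m

def spring(t, y):
    return [y[1], -k_over_m * y[0]]

t, h = 0.0, 0.01
y = [1.0, 0.0]                    # released from rest at x = 1
for _ in range(200):              # step (integrate) outwards to t = 2
    y = rk2_step(spring, t, y, h)
    t += h
```

With k/m = 1 the exact solution is x = cos t, v = -sin t, so the loop above can be checked against those values at t = 2.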

As indicated earlier, the fourth-order Runge-Kutta method is generally considered to provide an excellent
balance of power, precision (error of O(h^5) per step), and simplicity to program. Here there are four gradient
or "k" terms which provide a better approximation to the behavior of f(t,y) near the midpoint t_n + h/2; the
extra gradients can still be programmed simply with just four calls to the same subroutine:

    k_1 = h f(t_n, y_n),
    k_2 = h f(t_n + h/2, y_n + k_1/2),
    k_3 = h f(t_n + h/2, y_n + k_2/2),
    k_4 = h f(t_n + h, y_n + k_3),
    y_{n+1} ≈ y_n + (k_1 + 2 k_2 + 2 k_3 + k_4)/6.

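A sample subroutine for this fourth-order method might look as follows, again in Python with names of our own choosing rather than the course's original code:

```python
def rk4_step(f, t, y, h):
    """Advance y by one fourth-order Runge-Kutta step.

    The four gradients k1..k4 come from four calls to the same
    derivative function f(t, y), evaluated at the start, twice at the
    midpoint, and at the end of the interval; their weighted average
    advances the solution."""
    k1 = h * f(t, y)
    k2 = h * f(t + h / 2, y + k1 / 2)
    k3 = h * f(t + h / 2, y + k2 / 2)
    k4 = h * f(t + h, y + k3)
    return y + (k1 + 2 * k2 + 2 * k3 + k4) / 6

# Same check as before: dy/dt = -y, y(0) = 1, exact solution exp(-t).
t, y, h = 0.0, 1.0, 0.1
for _ in range(10):               # integrate out to t = 1
    y = rk4_step(lambda t, y: -y, t, y, h)
    t += h
```

Note that even with a step ten times larger than the one used in the second-order example, the fourth-order result agrees with exp(-1) to roughly one part in a million, which is the payoff for the two extra derivative evaluations.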