Implicit Runge-Kutta Algorithm
NEWTON-RAPHSON METHOD
Andrés L. Granados M.
Universidad Simón Bolívar, Dpto. Mecánica.
Apdo. 89000, Caracas 1080A. Venezuela.
(Corrected and Enlarged June/2016)
ABSTRACT
An algorithm for solving ordinary differential equations has been developed using implicit Runge-
Kutta methods, which may be partially or fully implicit. The system of algebraic equations generated by the
Runge-Kutta method in each step of integration is solved with the help of the Newton-Raphson Method. The
unknown variables in this case are the derivatives kr (r = 1, 2, . . . , N ) of the intermediate points for a Runge-
Kutta method of N stages. An expression for the gradient in the Newton-Raphson method is deduced with
the application of the chain rule and is then approximated by a linear estimation. The criterion to evaluate
the iterative process in the Newton-Raphson method is stated in order to determine the size of the step that
assures convergence. The starting values of the variables k_r for the iterative process are computed using
an explicit Runge-Kutta method with the same intermediate points as the implicit method. This reduces
substantially the cost of computation for each step, particularly in those steps where the implicit method is
somewhat redundant and there are few iterations. Finally, the comparison of the implicit and the explicit Runge-
Kutta methods in the previous step makes it possible to estimate the size of the next step before the corresponding
integration is made. At the end, two examples are presented with partially implicit and fully implicit methods.
INTRODUCTION
The principal aim of this work is the development of an algorithm for solving ordinary differential
equations based on known implicit Runge-Kutta methods, but where the selection of the methods and the
application of iterative procedures have been carefully studied in order to produce the least number of
numerical calculations and the highest order of accuracy with reasonable stability.
As is well known, implicit Runge-Kutta methods are more stable than explicit ones. However,
the solution of the system of non-linear equations in the auxiliary variables “k”, produced by the implicit
dependence, generates an iterative process that can be extremely slow and can impose a restriction on the
stepsize. This restriction can be overcome by selecting the Newton-Raphson method for solving the
algebraic equations generated by the implicit dependence.
Differential Equations
As is well known, every system of ordinary differential equations of any order may be transformed,
with a convenient change of variables, into a system of first order ordinary differential equations [1,2]. This
is the reason why only this last type of differential equations will be studied.
Let the following system of M first order ordinary differential equations be expressed as dy^i/dx =
f^i(x, y) with i = 1, 2, 3, . . . , M, being y an M-dimensional function with each component depending on x.
This may be symbolically expressed as

    dy/dx = f(x, y)        where    y = y(x) = (y^1(x), y^2(x), y^3(x), . . . , y^M(x))    (1)
When each function f^i(x, y) depends only on the corresponding y^i, the system is said to be uncoupled;
otherwise it is said to be coupled. If the system of ordinary differential equations is uncoupled, then every
differential equation can be solved separately. When the system does not explicitly depend on x, the system
is said to be autonomous. When the conditions of the solution y(x) are known at a single specific point, for
example, y^i(x_o) = y_o^i at x = x_o, or symbolically

    x = x_o        y(x_o) = y_o    (2)

both expressions (1) and (2) are said to state an Initial Value Problem; otherwise they state a Boundary
Value Problem.
Indeed, the system of differential equations (1) is a particular case of a general autonomous system
stated in the next form [2,3]

    dy/dx = f(y)  ≡  { dy^1/dx = 1                            if i = 1
                     { dy^i/dx = f^i(y)    i = 2, 3, . . . , M + 1    (3)
Runge-Kutta Methods
Seeking a general formulation, a Runge-Kutta method of order P and equipped with N stages
is defined [3] with the expressions

    y_{n+1}^i = y_n^i + h (c_r k_r^i)    (4.a)

    k_r^i = f^i(x_n + b_r h, y_n + h a_{rs} k_s)    (4.b)

for i = 1, 2, 3, . . . , M and r, s = 1, 2, 3, . . . , N. Notice that the summation convention has been used: every
time an index appears twice or more in a term, it should be summed over its complete range.
A Runge-Kutta method (4) has order P if, for a sufficiently smooth problem, the expressions (1) and
(2) satisfy

    ||y(x_n + h) − y_{n+1}|| ≤ Φ(ζ) h^{P+1} = O(h^{P+1})        ζ ∈ [x_n, x_n + h]    (5)

i.e., the Taylor series for the exact solution y(x_n + h) and for the numerical solution y_{n+1} coincide up to
(and including) the term with h^P [9].
The Runge-Kutta method thus defined can be applied for solving initial value problems, and it is
used recurrently. That is, given a point (x_n, y_n), the next point (x_{n+1}, y_{n+1}) can be obtained using the
expressions (4), with x_{n+1} = x_n + h, where h is named the stepsize of the method. Every time this is
done, the method goes forward (or backward if h is negative) an integration step h in x, offering the solution
at consecutive points, one for each jump. In this way, if the method begins with the initial conditions (x_0, y_0)
stated by (2), it can calculate (x_1, y_1), (x_2, y_2), (x_3, y_3), . . . , (x_n, y_n), and continue this way, up to the desired
boundary in x. In each integration or jump the method restarts with the information from the adjacent
point immediately preceding. This characteristic classifies the Runge-Kutta methods within the group of
one-step methods. It should be noticed, however, that the auxiliary variables k_r^i are calculated for each r
up to N stages in each step. These calculations are no more than evaluations of the functions f^i(x, y) at
intermediate points x + b_r h in the interval [x_n, x_{n+1}] (0 ≤ b_r ≤ 1). The evaluation of each M-dimensional
auxiliary variable k_r represents one stage of the method.
Let us now introduce a condensed representation of the generalized Runge-Kutta method, formerly
developed by Butcher [4,5,6,7] and presented systematically in the book of Lapidus and Seinfeld [8]
and in the books of Hairer, Nørsett and Wanner [9,10]. These last books have a large collection of Runge-
Kutta methods using Butcher’s notation and an extensive bibliography. After the paper of Butcher [5],
it became customary to symbolize the general Runge-Kutta method (4) by a tableau. In order to illustrate
this representation, consider the expressions (4) applied to a method of four stages (N = 4). Accommodating
the coefficients a_{rs}, b_r and c_r in an adequate form, they may be schematically represented as
    b_1 | a_11  a_12  a_13  a_14
    b_2 | a_21  a_22  a_23  a_24          0 ≤ b_r ≤ 1
    b_3 | a_31  a_32  a_33  a_34          Σ_{s=1}^{N} a_{rs} = b_r        (6)
    b_4 | a_41  a_42  a_43  a_44          Σ_{r=1}^{N} c_r = 1
    ----+-----------------------
        | c_1   c_2   c_3   c_4
The aforementioned representation allows for the basic distinction of the following types of Runge-
Kutta methods, according to the characteristics of the matrix a_{rs}: If a_{rs} = 0 for s ≥ r, then the matrix a_{rs}
is lower triangular (excluding the principal diagonal) and the method is said to be completely explicit. If
a_{rs} = 0 for s > r, then the matrix a_{rs} is lower triangular, including the principal diagonal, and the method is
said to be semi-implicit or singly-diagonally implicit. If the matrix a_{rs} is diagonal by blocks, the method is
said to be diagonally implicit. If the first row of the matrix a_{rs} is filled with zeros, a_{1s} = 0, and the method
is diagonally implicit, then the method is called a Lagrange method [11] (the coefficients b_r may be arbitrary).
If a Lagrange method has b_N = 1 and the last row of the matrix is the array c_s = a_{Ns}, then the method
is said to be stiffly accurate. If, conversely, none of the previous conditions is satisfied, the method is said
to be simply implicit. In the cases of implicit methods, it can be noticed that an auxiliary variable k_r may
depend on itself and on other variables not calculated before in the same stage. That is why the methods
are named implicit in these cases.
Additionally, the condensed representation described above makes it very easy to verify certain
properties that the coefficients a_{rs}, b_r, and c_r should fulfill, for example, 0 ≤ b_r ≤ 1, Σ_{s=1}^{N} a_{rs} = b_r and
Σ_{r=1}^{N} c_r = 1. These properties may be interpreted in the following manner: The first property
expresses that the Runge-Kutta is a one-step method, and the functions f^i(x, y(x)) in (4.b) should be evaluated
for x ∈ [x_n, x_{n+1}]. The second property results from applying the Runge-Kutta method (4) to the system
of differential equations (3), where k_s^1 = 1 ∀s = 1, 2, 3, . . . , N, and thus the sum of a_{rs} in each row r gives
the value of b_r. The third property means that in the expression (4.a) the value of y_{n+1}^i is obtained from the
value of y_n^i and the projection (with h) of an average of the derivatives dy^i/dx = f^i(x, y). The values of c_r
are the weighting coefficients of this average.
The coefficients a_{rs}, b_r and c_r are determined by applying the aforementioned properties and using some
additional relations that are summarized in the following theorem:
Theorem (Butcher [4,9]). Let the next conditions be defined as

    B(P):    Σ_{i=1}^{N} c_i b_i^{q−1} = 1/q        q = 1, 2, . . . , P

    C(η):    Σ_{j=1}^{N} a_{ij} b_j^{q−1} = b_i^q / q        i = 1, 2, . . . , N    q = 1, 2, . . . , η    (7)

    D(ξ):    Σ_{i=1}^{N} c_i b_i^{q−1} a_{ij} = (c_j / q)(1 − b_j^q)        j = 1, 2, . . . , N    q = 1, 2, . . . , ξ

If the coefficients b_i, c_i, a_{ij} of a Runge-Kutta method satisfy the conditions B(P), C(η), D(ξ) with P ≤ η + ξ + 1
and P ≤ 2η + 2, then the method is of order P.
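These conditions are easy to verify numerically. The following Python sketch (our own helper, not part of the original formulation) checks B(P), C(η) and D(ξ) of (7) for the Gauss tableau (31) analyzed later, confirming order P = 6 with η = ξ = 3:

    import numpy as np

    def check_conditions(A, b, c, P, eta, xi, tol=1e-12):
        # Verify Butcher's simplifying conditions (7); b are the nodes b_r,
        # c the weights c_r, A the matrix a_rs, all in the paper's notation.
        ok = True
        for q in range(1, P + 1):    # B(P): sum_i c_i b_i^(q-1) = 1/q
            ok &= abs(c @ b**(q - 1) - 1.0/q) < tol
        for q in range(1, eta + 1):  # C(eta): sum_j a_ij b_j^(q-1) = b_i^q / q
            ok &= np.allclose(A @ b**(q - 1), b**q / q, atol=tol)
        for q in range(1, xi + 1):   # D(xi): sum_i c_i b_i^(q-1) a_ij = c_j (1 - b_j^q)/q
            ok &= np.allclose((c * b**(q - 1)) @ A, c * (1 - b**q) / q, atol=tol)
        return ok

    s15 = np.sqrt(15.0)              # three-stage Gauss tableau of eq. (31)
    A = np.array([[5/36, (10 - 3*s15)/45, (25 - 6*s15)/180],
                  [(10 + 3*s15)/72, 2/9, (10 - 3*s15)/72],
                  [(25 + 6*s15)/180, (10 + 3*s15)/45, 5/36]])
    b = np.array([(5 - s15)/10, 1/2, (5 + s15)/10])
    c = np.array([5/18, 4/9, 5/18])
    print(check_conditions(A, b, c, P=6, eta=3, xi=3))   # True: P <= eta+xi+1, P <= 2*eta+2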
This theorem is demonstrated basically with the expansion in Taylor series of k_r^i in (4.b). The same
variables in implicit dependence are also expanded in series up to the corresponding order term. The recurrent
substitution of these in (4.a) and the consequent comparison with the Taylor series of y(x_{n+1}) ≈ y_{n+1} around
y(x_n) = y_n results in a set of relations (8) equivalent to (7).
These relations are valid for Runge-Kutta methods, whether implicit or explicit, from the first order method (e.g.
the Euler method) to fifth order methods (e.g. the Fehlberg method [9,12]). In all the cases, the indexes r, s, t and
u vary from 1 to the number of stages N, and the summation convention is applied (δ_r = 1 ∀r).
Gear [3] has presented a similar deduction for (8), but only for explicit methods. Relations similar to
(8) appear in Hairer et al. [9], but only for explicit methods and for terms up to order h^4. Hairer et al. also
deduce a theorem that expresses the equivalence of the implicit Runge-Kutta methods and the orthogonal
collocation methods [9,10].
Ralston in 1965 (see for example [13, pp.217]) made a similar analysis to obtain the relations among the
coefficients, but for an explicit Runge-Kutta method of fourth order and four stages, and found the following
family of solutions of relations (8)
    b_1 = 0        b_4 = 1        a_{rs} = 0  (s ≥ r)    (9.a−c)

    a_21 = b_2        a_31 = b_3 − a_32        a_32 = b_3 (b_3 − b_2) / [2 b_2 (1 − 2 b_2)]        a_41 = 1 − a_42 − a_43    (9.d−g)

    a_42 = (1 − b_2)[b_2 + b_3 − 1 − (2 b_3 − 1)^2] / {2 b_2 (b_3 − b_2)[6 b_2 b_3 − 4 (b_2 + b_3) + 3]}    (9.h)

    a_43 = (1 − 2 b_2)(1 − b_2)(1 − b_3) / {b_3 (b_3 − b_2)[6 b_2 b_3 − 4 (b_2 + b_3) + 3]}    (9.i)

    c_1 = 1/2 + [1 − 2 (b_2 + b_3)] / (12 b_2 b_3)        c_2 = (2 b_3 − 1) / [12 b_2 (b_3 − b_2)(1 − b_2)]    (9.j, k)

    c_3 = (1 − 2 b_2) / [12 b_3 (b_3 − b_2)(1 − b_3)]        c_4 = 1/2 + [2 (b_2 + b_3) − 3] / [12 (1 − b_2)(1 − b_3)]    (9.l, m)
Notice that if b_2 = 1/2 and b_3 = 1/2 (taken as a limit of the above formulas), then the well-known classical
Runge-Kutta method of fourth order and four stages is obtained.
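The family (9) can also be instantiated directly. The following sketch (the helper name is ours) evaluates the coefficients with exact rational arithmetic; taking b_2 = 1/3 and b_3 = 2/3 recovers Kutta's 3/8 rule, another well-known member of the family:

    from fractions import Fraction as F

    def ralston_family(b2, b3):
        # Coefficients of the explicit fourth-order, four-stage family (9)
        D = 6*b2*b3 - 4*(b2 + b3) + 3
        a21 = b2
        a32 = b3*(b3 - b2) / (2*b2*(1 - 2*b2))                             # (9.f)
        a31 = b3 - a32                                                     # (9.e)
        a42 = (1 - b2)*(b2 + b3 - 1 - (2*b3 - 1)**2) / (2*b2*(b3 - b2)*D)  # (9.h)
        a43 = (1 - 2*b2)*(1 - b2)*(1 - b3) / (b3*(b3 - b2)*D)              # (9.i)
        a41 = 1 - a42 - a43                                                # (9.g)
        c1 = F(1, 2) + (1 - 2*(b2 + b3)) / (12*b2*b3)                      # (9.j)
        c2 = (2*b3 - 1) / (12*b2*(b3 - b2)*(1 - b2))                       # (9.k)
        c3 = (1 - 2*b2) / (12*b3*(b3 - b2)*(1 - b3))                       # (9.l)
        c4 = F(1, 2) + (2*(b2 + b3) - 3) / (12*(1 - b2)*(1 - b3))          # (9.m)
        return (a21, a31, a32, a41, a42, a43), (c1, c2, c3, c4)

    # Kutta's 3/8 rule: a = (1/3, -1/3, 1, 1, -1, 1), c = (1/8, 3/8, 3/8, 1/8)
    print(ralston_family(F(1, 3), F(2, 3)))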
THE ALGORITHM
Newton-Raphson Method
For solving the initial value problem (3), the implicit Runge-Kutta method

    y_{n+1}^i = y_n^i + h c_r k_r^i + O(h^{P+1})        y_{n+1} = y_n + h c_r k_r + O(h^{P+1})    (10.a)

    k_r^i = f^i(y_n + h a_{rs} k_s)        k_r = f(y_n + h a_{rs} k_s)    (10.b)
can be used. The expression (10.b) should be interpreted as a system of non-linear equations with the auxiliary
variables k_r^i as unknowns. For this reason, it is convenient to define a function

    g_r^i(k) = f^i(y_n + h a_{rs} k_s) − k_r^i = 0        g(k) = F(k) − k = 0    (11)

whose roots are the sought derivatives. A first option to find them is the fixed point (successive substitution)
iteration

    k_{(m+1)} = F(k_{(m)})    (12)

For the Newton-Raphson method, instead, the gradient of g is required

    ∂g_r^i / ∂k_t^j = h a_{rt} Jf^{ij}(y_n + h a_{rs} k_s) − δ_j^i δ_{rt}    (13)

where the chain rule has been used and, in the right side of (11) and (13), k contains all the k_r, r =
1, 2, . . . , N. The superscript indicates the component of the system of differential equations and the subscript
indicates the number of the corresponding stage. The notation Jf^{ij} is used instead of ∂f^i/∂y^j for the elements of the
jacobian Jf(y_n + h a_{rs} k_s), and r does not sum although it appears twice. The identity matrices I_f and I_A
have the dimensions of f and A, respectively (their elements are the Kronecker deltas δ_j^i and δ_{rt}). From
now on, we shall reserve the use of the bold small letter k without step index (except the internal iteration index
m) for those variables where we have grouped all stages in one array.
Thus the Newton-Raphson method can be applied with the following algorithm
    k_{r,(m+1)}^i = k_{r,(m)}^i + ω Δk_{r,(m)}^i        k_{(m+1)} = k_{(m)} + ω Δk_{(m)}    (14.a)

where the variables Δk_{t,(m)}^j are found from the next system of linear equations

    [∂g_r^i / ∂k_t^j]_{(m)} Δk_{t,(m)}^j = −g_r^i(k_{(m)})        [Jg(k_{(m)})] · Δk_{(m)} = −g(k_{(m)})    (14.b)
and ω is a relaxation factor. The iterative process (14) is applied in a successive form until

    h |c_r ε_{r,(m)}^i| ≤ ε_max        ε_{r,(m)}^i = |Δk_{r,(m)}^i|    (15)

The value ε_{r,(m)}^i is the local error in the auxiliary variables k_r^i, while h c_r ε_{r,(m)} is the local error for y_{n+1}^i,
and ε_max is the tolerance for the local truncation error in the numerical solution y_{n+1}.
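A compact sketch of one integration step with the updates (14.a, b) and a stopping test in the spirit of (15) is given below (the names are ours, the autonomous form (10) is assumed, and the linear system (14.b) is solved densely):

    import numpy as np

    def implicit_rk_step(f, Jf, yn, h, A, c, k0, omega=1.0, eps_max=1e-12, max_iter=50):
        N, M = k0.shape                        # stages x equations
        k = k0.copy()
        I = np.eye(N * M)
        for m in range(max_iter):
            Y = yn + h * (A @ k)               # stage arguments y_n + h a_rs k_s
            g = np.array([f(Y[r]) for r in range(N)]) - k        # g_r = f(...) - k_r
            # block jacobian (13): [Jg]_{rt} = h a_rt Jf(Y_r) - delta_rt I_f
            Jg = np.block([[h * A[r, t] * Jf(Y[r]) for t in range(N)]
                           for r in range(N)]) - I
            dk = np.linalg.solve(Jg, -g.ravel()).reshape(N, M)   # system (14.b)
            k += omega * dk                    # relaxed update (14.a)
            if h * (c @ np.abs(dk)).max() < eps_max:             # test like (15)
                break
        return yn + h * (c @ k), k             # y_{n+1} and converged stages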
For very complicated functions, it is convenient to linearize the partial derivatives of the jacobian
tensor (13) using backward finite differences, as indicated in the following expression

    ∂g_r^i / ∂k_t^j ≈ h a_{rt} [f^i(y_n + Δy_n) − f^i(y_{n(j)})] / Δy_n^j − δ_j^i δ_{rt}    (16)

where

    Δy_n^j = h a_{rs} k_s^j        f^i(y_{n(j)}) = f^i(y_n^1 + Δy_n^1, y_n^2 + Δy_n^2, . . . , y_n^j, . . . , y_n^M + Δy_n^M)    (17)
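A sketch of this approximation (with illustrative names) computes the bracketed difference quotient of (16), i.e. the jacobian elements Jf^{ij} evaluated at the incremented stage argument:

    import numpy as np

    def Jf_numeric(f, yn, dyn):
        # dyn[j] = h a_rs k_s^j, the increment (17) of the r-th stage argument;
        # components with dyn[j] = 0 would need a separate (e.g. analytic) treatment.
        M = yn.size
        y_full = yn + dyn                  # fully incremented argument
        f_full = f(y_full)
        J = np.empty((M, M))
        for j in range(M):
            y_j = y_full.copy()
            y_j[j] = yn[j]                 # undo the increment in component j only
            J[:, j] = (f_full - f(y_j)) / dyn[j]
        return J                           # approximates Jf(y_n + h a_rs k_s)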
The initial values k_{r,(0)}^j may be estimated by using an explicit Runge-Kutta method whose intermediate
points b_r are the same as those of the implicit Runge-Kutta method.
The explicit fourth order (P = 4) Runge-Kutta method with the same intermediate points, used to estimate
the initial values for k (iteration in m), may be easily obtained from (9). The coefficients of this new explicit
Runge-Kutta method in Butcher’s notation can be expressed as [15]

          0     |      0              0              0          0
    (5 − √5)/10 |  (5 − √5)/10        0              0          0
    (5 + √5)/10 | −(5 + 3√5)/20   (3 + √5)/4         0          0        (19)
          1     | (−1 + 5√5)/4   −(5 + 3√5)/4    (5 − √5)/2     0
    ------------+-------------------------------------------------
                |     1/12           5/12           5/12      1/12
The mentioned example (18) is only implicit in the second and the third stages; therefore, for this special
case the iterative process is simpler than for a fully implicit Runge-Kutta method.
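A sketch of this starting procedure with the tableau (19) (autonomous form, names of our own) is:

    import numpy as np

    s5 = np.sqrt(5.0)                      # strictly lower triangular matrix of (19)
    A19 = np.array([[0, 0, 0, 0],
                    [(5 - s5)/10, 0, 0, 0],
                    [-(5 + 3*s5)/20, (3 + s5)/4, 0, 0],
                    [(-1 + 5*s5)/4, -(5 + 3*s5)/4, (5 - s5)/2, 0]])

    def initial_stages(f, yn, h, A=A19):
        # Explicit cascade of eq. (4): each k_r uses only the previous stages
        N, M = A.shape[0], yn.size
        k = np.zeros((N, M))
        for r in range(N):
            k[r] = f(yn + h * (A[r, :r] @ k[:r]))
        return k                           # feed to the implicit solver as k_(0)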
Iterative Process
The problem stated by (11) may be written as

    g(k) = F(k) − k = 0        F(k) = [ f(y_n + h a_{1s} k_s), . . . , f(y_n + h a_{rs} k_s), . . . , f(y_n + h a_{Ns} k_s) ]^T

    g(k) = [ f(y_n + h a_{1s} k_s) − k_1, . . . , f(y_n + h a_{rs} k_s) − k_r, . . . , f(y_n + h a_{Ns} k_s) − k_N ]^T    (20)
The function F and the variable k have dimension M × N (autonomous case). The algorithm of the Newton-
Raphson method (14) is then expressed as

    k_{m+1} = k_m − ω [Jg(k_m)]^{−1} · g(k_m) = k_m − ω [JF(k_m) − II]^{−1} · (F(k_m) − k_m)        [Jg(k)] = [JF(k)] − [II]    (21)

where [JF(k)] is the jacobian of the function F(k), and the jacobian [Jg(k)] of the function g(k) in (21.b)
may be calculated from (13).
The fixed point method (12) has a linear convergence in the proximity of a solution when

    k_{m+1} = F(k_m)        ||[JF(ζ)]||_{ζ ∈ B_ρ(k*)} < 1        [JF(k)]_{rt} = h a_{rt} [Jf(y_n + h a_{rs} k_s)]    (22.a, b, c)

where, writing Y_r = y_n + h a_{rs} k_s for the argument of the r-th stage, the full block matrix is

                [ h a_{11} [Jf(Y_1)]   · · ·   h a_{1t} [Jf(Y_1)]   · · ·   h a_{1N} [Jf(Y_1)] ]
                [        ...                         ...                          ...          ]
    [JF(k)] =   [ h a_{r1} [Jf(Y_r)]   · · ·   h a_{rt} [Jf(Y_r)]   · · ·   h a_{rN} [Jf(Y_r)] ]    (22.c′)
                [        ...                         ...                          ...          ]
                [ h a_{N1} [Jf(Y_N)]   · · ·   h a_{Nt} [Jf(Y_N)]   · · ·   h a_{NN} [Jf(Y_N)] ]
with r, t the row and column blocks (no sum in r and sum in s), r, t, s = 1, . . . , N. Similarly, it is well known
that, in the proximity of the solution, the Newton-Raphson method (21) has a quadratic convergence when
the corresponding condition (23.a) is satisfied, where the jacobians are [Jg(k)] = [JF(k)] − [II], the hessians are
equal, [IHg(k)] = [IHF(k)], and B_ρ(k*) is a ball of radius ρ = ||k − k*|| < ρ* with center in k*, the solution of
(20.a). The norm used is the infinity norm ||·||_∞ [2].
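The following sketch (our own, under the same assumptions as before) implements the fixed point iteration (22.a) and a rough infinity-norm estimate of the contraction condition (22.b), exploiting the block structure (22.c′):

    import numpy as np

    def fixed_point_stages(f, yn, h, A, k0, tol=1e-12, max_iter=200):
        # Successive substitution (12)/(22.a) on the stage derivatives
        k = k0.copy()
        for m in range(max_iter):
            k_new = np.array([f(yn + h * (A[r] @ k)) for r in range(len(A))])
            if np.abs(k_new - k).max() < tol:
                return k_new, m + 1
            k = k_new
        return k, max_iter

    def contraction_estimate(A, h, Jf_norm):
        # Upper estimate of ||[J_F]||_inf from (22.c'): h ||A||_inf ||Jf||_inf;
        # linear convergence requires this quantity to be (roughly) below 1
        return h * np.abs(A).sum(axis=1).max() * Jf_norm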
When the condition (23.a) is appropriately applied to the Runge-Kutta method, it imposes a restriction
on the value of the stepsize h. This restriction should be confused neither with the restriction imposed by
the stability criteria, nor with the control of the stepsize. For the analysis of these last aspects in the case
of the proposed example, refer to [15].
The iterative process will be illustrated by the implicitness of k_2 and k_3 for the example (18), in the
case of one autonomous ordinary differential equation ẏ = f(y). Thus the homogeneous function is

    {g(k)} = [ g_2(k), g_3(k) ]^T = [ f(y_n + h a_{2s} k_s) − k_2, f(y_n + h a_{3s} k_s) − k_3 ]^T = 0    (24)
The iterations m continue for each stepsize h (n constant) until the condition (15) is satisfied. After this,
k_4 and y_{n+1} are calculated, and one may attempt another step (next n) with the same or another size.
Fully Implicit Example
The iterative process begins with an initial estimation of the variables k_s for each step. For the case we
are interested in, we shall use the Lagrange-generated fifth order Runge-Kutta method (P = 5) or the Ralston &
Rabinowitz [13] third order Runge-Kutta method (P = 3), both explicit

          0      |       0               0               0            0      0
    - - - - - - -+- - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
    (5 − √15)/10 |  (5 − √15)/10         0               0            0      0
         1/2     | −(3 + √15)/4     (5 + √15)/4          0            0      0        (27.a)
    (5 + √15)/10 | 3(4 + √15)/5   −(35 + 9√15)/10   2(4 + √15)/5      0      0
    - - - - - - -+- - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
          1      |      −1              5/3           −20/15        25/15    0
    -------------+---------------------------------------------------------------
                 |       0             5/18             4/9          5/18    0

     0  |  0    0    0
    1/2 | 1/2   0    0
     1  | −1    2    0        (27.b)
    ----+--------------
        | 1/6  2/3  1/6
The first, eq. (27.a), was generated by Lagrange polynomials

    P_n(x) = Σ_{i=0}^{n} L_i(x) f(x_i)        L_i(x) = Π_{j=0, j≠i}^{n} (x − x_j)/(x_i − x_j)    (28)
that pass through the previous stages each time, with the collocation points b_s as independent variables and the
k_s as dependent known variables (s = 1, 2, . . . , r − 1, r = 2, . . . , N). Then, by an extrapolation of the polynomial
P_{r−2}(x) (r ≥ 2) to the next stage b_r (b_1 = 0), the coefficients a_{rs} are calculated as (a_{1s} = 0, a_{21} = b_2)

    a_{rs} = b_r α_s / α        α_s = L_s(b_r) = Π_{j=1, j≠s}^{r−1} (b_r − b_j)/(b_s − b_j)        α = Σ_{s=1}^{r−1} α_s    (29)
The value of α is for normalizing α_s, so as to satisfy a_{rs} δ_s = Σ_{s=1}^{r−1} a_{rs} = b_r (between the dashed lines in
eq. (27.a) are the values of interest needed to calculate the initial iteration; the first stage is necessary only to
complete the scheme, and the last two rows are unnecessary). For the case in (27.a), α = 1 always. When the upper
limit is less than the lower limit, the product symbol Π equals 1. The coefficients c_r (r = 1, 2, . . . , N) are calculated
using again (28), the Lagrange polynomials P_{N−1}(x), by integration of
    c_r = ∫_0^1 L_r(x) dx        L_r(x) = Π_{j=1, j≠r}^{N} (x − b_j)/(b_r − b_j)    (30)
and must satisfy Σ_{r=1}^{N} c_r = 1 (for the case of eq. (27.a), N = P = 5). In fact, not only (27.a) but also (27.b)
satisfies (29)−(30). The explicit method (19) does not come from the coefficient generation (29), only from (30).
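A sketch of this coefficient generation (helper names are ours) follows; applied to the nodes of (27.a) it reproduces the block between the dashed lines and the weights (0, 5/18, 4/9, 5/18, 0):

    import numpy as np

    def lagrange_tableau(b):
        N = len(b)
        A = np.zeros((N, N))
        for r in range(1, N):              # a_1s = 0; extrapolate P_{r-2} to b_r, eq. (29)
            alpha = np.array([np.prod([(b[r] - b[j]) / (b[s] - b[j])
                                       for j in range(r) if j != s])
                              for s in range(r)])
            A[r, :r] = b[r] * alpha / alpha.sum()
        c = np.empty(N)                    # weights by exact integration on [0,1], eq. (30)
        for r in range(N):
            others = [b[j] for j in range(N) if j != r]
            p = np.poly1d(others, r=True) / np.prod([b[r] - bj for bj in others])
            q = np.polyint(p)
            c[r] = q(1.0) - q(0.0)
        return A, c

    b = np.array([0.0, (5 - np.sqrt(15))/10, 0.5, (5 + np.sqrt(15))/10, 1.0])
    A, c = lagrange_tableau(b)             # rows 2-4 match (27.a); c = (0, 5/18, 4/9, 5/18, 0)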
The second, eq. (27.b), equivalent to the Simpson 1/3 quadrature and simpler, is a third order explicit Runge-
Kutta method that is close, at the collocation points, to the method analyzed below. The value (5 − √15)/10 ≈ 0.1127
is close to 0 (r = 1 below), the value (5 + √15)/10 ≈ 0.8873 is close to 1 (r = 3 below), and the value 1/2 is
exact (r = 2 below). These estimations of the variables k_r (r = 1, 2, 3) for the initial iterates are considered enough
to guarantee convergence in each step. Either explicit Runge-Kutta method (27) selected for the initial estimates
of k_r is optional.
The method we shall analyze is the fully implicit Runge-Kutta method of sixth order (P = 6) based on
Gauss quadrature (Kuntzmann-Butcher) [9]

    (5 − √15)/10 |      5/36         (10 − 3√15)/45   (25 − 6√15)/180
         1/2     | (10 + 3√15)/72         2/9         (10 − 3√15)/72
    (5 + √15)/10 | (25 + 6√15)/180   (10 + 3√15)/45        5/36           (31)
    -------------+---------------------------------------------------
                 |      5/18              4/9              5/18
Impressive numerical results from celestial mechanics for these methods were reported in the thesis of D.
Sommer [16].
The Arenstorf orbits (1963) illustrate a good astronomy problem to test the proposed methods. One
of them is a restricted three-body problem. Consider two bodies of masses η and µ in circular rotation in a
plane and a third body of negligible mass moving around in the same plane. The differential equations in
relative variables for the case of the earth and the moon are

    dx/dt = u

    du/dt = x + 2v − η (x + µ)/A − µ (x − η)/B
                                                    (32)
    dy/dt = v

    dv/dt = y − 2u − η y/A − µ y/B

where

    A = √{[(x + µ)^2 + y^2]^3}        µ = 0.012277471
                                                    (33)
    B = √{[(x − η)^2 + y^2]^3}        η = 1 − µ
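A direct transcription of the right-hand side (32)-(33), assuming the state ordering (x, u, y, v) adopted later in (38), is:

    import numpy as np

    mu  = 0.012277471                      # moon's normalized mass, eq. (33)
    eta = 1.0 - mu                         # earth's normalized mass

    def f(Y):
        # Restricted three-body equations (32) with Y = (x, u, y, v)
        x, u, y, v = Y
        A = ((x + mu)**2 + y**2)**1.5
        B = ((x - eta)**2 + y**2)**1.5
        return np.array([u,
                         x + 2*v - eta*(x + mu)/A - mu*(x - eta)/B,
                         v,
                         y - 2*u - eta*y/A - mu*y/B])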
The initial values have been determined carefully [9]

    x(0) = 0.994        u(0) = 0        y(0) = 0        v(0) = −2.00158510637908252240537862224    (34)

so that the solution is periodic with period

    T = 17.0652165601579625588917206249    (35)
Such periodic solutions have fascinated astronomers and mathematicians for many decades (Poincaré) and are
now often called “Arenstorf orbits” in honour of Arenstorf (1963), who also did many numerical computations
on high speed electronic computers. The problem is C^∞ with the exception of the two singular points x = −µ
and x = η, for y = 0; therefore, the Euler polygons are known to converge to the solution. But are they
really numerically useful here? It has been chosen n_s = 24000 steps of step length h = T/n_s to solve the
problem reasonably well. To illustrate this better, figure 1 shows the problem solved with several Runge-Kutta
methods [9, pp.127-129].
In order to simplify the problem, we changed some variables to convert the problem into this one

    dx/dt = u

    du/dt = x (1 − η a^3 − µ b^3) + 2v − η µ (a^3 − b^3)
                                                            (36)
    dy/dt = v

    dv/dt = y (1 − η a^3 − µ b^3) − 2u

where

    a(x, y) = (1/A)^{1/3} = [(x + µ)^2 + y^2]^{−1/2}        ∂a^3/∂x = −3 (x + µ) a^5        ∂a^3/∂y = −3 y a^5
                                                                                                                (37)
    b(x, y) = (1/B)^{1/3} = [(x − η)^2 + y^2]^{−1/2}        ∂b^3/∂x = −3 (x − η) b^5        ∂b^3/∂y = −3 y b^5

    y = {x, u, y, v}        dy/dt = f(y)    (38)
where some of the developments have been worked out separately for simplicity. The elements of the jacobian
[Jf] are

    ∂f^1/∂u = ∂f^3/∂v = 1        ∂f^2/∂v = −∂f^4/∂u = 2    (39.a)

    ∂f^2/∂x = 3 [η (x + µ)^2 a^5 + µ (x − η)^2 b^5] + 1 − η a^3 − µ b^3    (39.b)

    ∂f^4/∂x = 3 y [η (x + µ) a^5 + µ (x − η) b^5]    (39.c)

    ∂f^2/∂y = 3 y [η (x + µ) a^5 + µ (x − η) b^5]    (39.d)

    ∂f^4/∂y = 3 y^2 (η a^5 + µ b^5) + 1 − η a^3 − µ b^3    (39.e)

with all the remaining elements equal to zero.
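The corresponding analytic jacobian [Jf], assembled from (37) and (39) with the same state ordering, may be sketched as:

    import numpy as np

    def Jf(Y, mu=0.012277471):
        x, u, y, v = Y
        eta = 1.0 - mu
        a = ((x + mu)**2 + y**2)**-0.5     # a(x, y) of eq. (37)
        b = ((x - eta)**2 + y**2)**-0.5    # b(x, y) of eq. (37)
        a3, a5, b3, b5 = a**3, a**5, b**3, b**5
        common = 1.0 - eta*a3 - mu*b3
        J = np.zeros((4, 4))
        J[0, 1] = 1.0                      # trivial elements, eq. (39.a)
        J[2, 3] = 1.0
        J[1, 3] = 2.0
        J[3, 1] = -2.0
        J[1, 0] = 3*(eta*(x + mu)**2*a5 + mu*(x - eta)**2*b5) + common   # (39.b)
        J[3, 0] = 3*y*(eta*(x + mu)*a5 + mu*(x - eta)*b5)                # (39.c)
        J[1, 2] = 3*y*(eta*(x + mu)*a5 + mu*(x - eta)*b5)                # (39.d)
        J[3, 2] = 3*y**2*(eta*a5 + mu*b5) + common                       # (39.e)
        return J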
The implicit Runge-Kutta method (31), applied to the system (38), finds the next step solution y_{n+1}, with
an order of the local error of h^{P+1}, as

    k_r = f(y_n + h a_{rs} k_s)        y_{n+1} = y_n + h c_r k_r + O(h^{P+1})    (40.a, b)

The global error is of order h^P. To solve for the unknown k_s, we state the homogeneous system of equations,
so that we may apply the Newton-Raphson method to the function g(k)

    g(k) = F(k) − k = 0        F(k) = [ f(y_n + h a_{1s} k_s), f(y_n + h a_{2s} k_s), f(y_n + h a_{3s} k_s) ]^T

    g(k) = [ f(y_n + h a_{1s} k_s) − k_1, f(y_n + h a_{2s} k_s) − k_2, f(y_n + h a_{3s} k_s) − k_3 ]^T    (41)

within a single step of size h. The iterations (index m) are computed until convergence is assured as in (15).
Stability
The stability of a Runge-Kutta method is established by the analysis of the problem

    dy/dx = f(y)        f(y) = λ y    (44)

Applying the implicit method (10) to this problem, the stage derivatives satisfy

    k_r = λ (y_n + h a_{rs} k_s)    (45)

or, in matrix form,

    ([I] − z [A]) k = λ y_n {1}        z = h λ    (46)

where the matrix [A] contains the coefficients a_{rs} of the method, the vector k represents the k_s of all stages in
an array, the vector {1} is a vector filled with 1 in all components, and [I] is the identity matrix. The dimension of
the system (46) is the number of stages N. Solving this system for the vector k we obtain

    k = λ y_n ([I] − z [A])^{−1} {1}    (47)

and computing the next integration y_{n+1}

    y_{n+1} = y_n + h c_r k_r = µ(z) y_n    (48)

where the involved function µ(z), z = hλ, is the characteristic root of the method and may be expressed by

    µ(z) = 1 + z {c}^T ([I] − z [A])^{−1} {1}    (49)
For the method (31), the characteristic root function is [8, p.135]

    µ(z) = (1 + z/2 + z^2/10 + z^3/120) / (1 − z/2 + z^2/10 − z^3/120)    (50)

a Padé approximant to e^z in the diagonal position of the tableau [2][8]. In this case |µ(hλ)| < 1 for real part
ℜ(λ) < 0; therefore, the method is said to be A-stable.
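The characteristic root can be evaluated directly from the tableau by (49); the following sketch does so for the Gauss method (31) and compares it with the Padé form (50):

    import numpy as np

    s15 = np.sqrt(15.0)                    # Gauss tableau (31)
    A = np.array([[5/36, (10 - 3*s15)/45, (25 - 6*s15)/180],
                  [(10 + 3*s15)/72, 2/9, (10 - 3*s15)/72],
                  [(25 + 6*s15)/180, (10 + 3*s15)/45, 5/36]])
    c = np.array([5/18, 4/9, 5/18])

    def mu_root(z):                        # eq. (49)
        N = len(c)
        return 1.0 + z * (c @ np.linalg.solve(np.eye(N) - z * A, np.ones(N)))

    def mu_pade(z):                        # eq. (50)
        return (1 + z/2 + z**2/10 + z**3/120) / (1 - z/2 + z**2/10 - z**3/120)

    z = -3.7
    print(mu_root(z), mu_pade(z))          # agree to rounding; |mu| < 1 for Re(z) < 0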
For the method (18), the characteristic root function is [15]

    µ(z) = (1 + 2z/3 + z^2/5 + z^3/30 + z^4/360) / (1 − z/3 + z^2/30)    (51)

and is also a Padé approximant to e^z. The expression (51) is always positive and less than unity in the interval
z ∈ (−9.648495248, 0.0) = (−a, 0), where a = 4 + b − 4/b and b = ∛(124 + 4√965) = 6.284937532. It may be
noticed that the function µ(z) approximates relatively well the function y = e^z for the range z ≳ −4, where
it closely has a minimum (z = −3.827958538), although there are a local maximum and a local minimum
around z = 6.276350 and z = 12.278646.
Figure 2. Characteristic root functions µ(hλ) for the implicit Gauss and Lobatto quadratures (left), and for
the explicit Ralston- and Lagrange-type methods at the Lobatto collocation points (right).
The figure 2 (left) shows the mentioned characteristic root functions for the Lobatto quadrature method
(18) with eq. (51), and the Gauss quadrature method (31) with eq. (50), in the interval hλ ∈ [−10, 1]. A special
fact is that, although most of the coefficients of the methods are irrational, the characteristic root function
obtained by (49) is completely rational.
The figure 2 (right) compares the characteristic root functions, in the interval hλ ∈ [−3, 0.5], of the
Ralston family eq. (9), (P = 4) explicit R-K eq. (19), and of the Lagrange-generated eq. (29), (P = 4)
explicit R-K (a_21 = (5 − √5)/10, a_31 = −(5 + 3√5)/10, a_32 = (5 + 2√5)/5, a_41 = 1, a_42 = −√5, a_43 = √5,
a_{rs} = 0, s ≥ r), both for the same collocation points (b_1 = 0, b_2 = (5 − √5)/10, b_3 = (5 + √5)/10, b_4 = 1)
of the Lobatto quadrature. The curves are very close to each other, with minima at (hλ = −1.5961,
µ = 0.2704) and (hλ = −1.6974, µ = 0.0786), and limits of stability at (hλ = −2.7853, µ = 1) and
(hλ = −2.625253, µ = 1), for the Ralston and Lagrange types, respectively. This means that the stability
characteristics of both methods are similar.
Results
The figure 3 shows the results of the computation of the Arenstorf orbits for the implicit Runge-Kutta
method of the Butcher tableau (31), with three types of procedures to solve for the auxiliary variables k_r.
[Figure 3. Arenstorf orbits (x position vs. y position) computed with the implicit method (31); the panels
compare the procedures Initial Iteration, Fixed Point, Newton-Raphson and N-R Numeric, with 1 or 2 iterations.]
The problem (32)−(35) was solved with n_s = 3000 steps in a period T (h = T/n_s), that is, one complete
orbit. The problem was reformulated by (36)−(37) in order to calculate the jacobian [Jf] easily. With the
initial values estimated by (27.a), an iterative procedure (in the index m for each step) is implemented with
three variants: the fixed point method by (12), (22.a) or (40.a), the Newton-Raphson method by (42)−(43),
and Newton-Raphson with the jacobian [Jg] calculated numerically by (16)−(17). One to four iterations were
performed for each procedure, identified in the legends at the lower-right corners by a digit 1, 2, 3 or 4.

To obtain a reasonable solution with the explicit Runge-Kutta method (27.a), dotted line in the first panel
(above-left) of figure 3, identified with ‘Initial Iteration’, more than n_s = 7.3 × 10^5 steps are necessary. With
n_s = 3000 this method produces a spiral growing out of the frame clockwise with center in (1, 0). The same
happens for Newton-Raphson Numeric with one iteration, but with a bigger spiral.

For this case (n_s = 3000), two iterations are enough for Newton-Raphson Analytic and four iterations
are enough for Fixed Point to obtain a good accuracy. With three iterations, both the analytic and numeric
Newton-Raphson methods are equivalent in precision.
The figure 4 shows that, for n_s = 2000, near the stability limits, Newton-Raphson Analytic needs two
iterations to give a good performance. Instead, Newton-Raphson Numeric and Fixed Point need four iterations
to become stable, of which the second performs better.
[Figure 4. Arenstorf orbits (x position vs. y position) for n_s = 2000 steps with the same procedures.]
The conclusion is that the methods are ordered from best to worst as: Newton-Raphson Analytic,
Newton-Raphson Numeric, and Fixed Point, for a high number of iterations (m ≥ 4). For a low number of
iterations (m = 2), Newton-Raphson Numeric and Fixed Point are competitive, for numerical reasons in the
calculation of the estimates of the derivatives in the jacobian matrix, but this advantage declines as the
iterations increase or the step size decreases, when the numeric and analytic Newton-Raphson methods
become equivalent.
REFERENCES
[1] Gerald, C. F. Applied Numerical Analysis, 2nd Edition. Addison-Wesley, New York, 1970.
[2] Burden R. L.; Faires, J. D. Numerical Analysis, 3rd Edition. PWS, Boston, 1985.
[3] Gear, C. W. Numerical Initial Value Problems in Ordinary Differential Equations. Prentice-
Hall, Englewood Cliffs, New Jersey, 1971.
[4] Butcher, J. C. “Implicit Runge-Kutta Processes”, Math. Comput., Vol.18, pp.50-64, (1964).
[5] Butcher, J. C. “On Runge-Kutta Processes of High Order”, J. Austral. Math. Soc., Vol.4, Part
2, pp.179-194, (1964).
[6] Butcher, J. C. The Numerical Analysis of Ordinary Differential Equations, Runge-Kutta
and General Linear Methods. John Wiley, New York, 1987.
[7] Butcher, J. C. Numerical Methods for Ordinary Differential Equations, 2nd /3rd Editions.
John Wiley & Sons (New York), 2008/2016.
[8] Lapidus, L.; Seinfeld, J. H. Numerical Solution of Ordinary Differential Equations. Academic
Press, New York, 1971.
[9] Hairer, E.; Nørsett, S. P.; Wanner, G. Solving Ordinary Differential Equations I. Nonstiff
Problems. Springer-Verlag, Berlin, 1987.
[10] Hairer, E.; Wanner, G. Solving Ordinary Differential Equations II: Stiff and Differential -
Algebraic Problems. Springer-Verlag, Berlin, 1991.
[11] van der Houwen, P. J.; Sommeijer, B. P. “Iterated Runge-Kutta Methods on Parallel Computers”.
SIAM J. Sci. Stat. Comput., Vol.12, No.5, pp.1000-1028, (1991).
[12] Fehlberg, E. “Low-Order Classical Runge-Kutta Formulas with Stepsize Control”, NASA Report
No. TR R-315, 1971.
[13] Ralston, A.; Rabinowitz, P. A First Course in Numerical Analysis, 2nd Edition. McGraw-Hill,
New York, 1978.
[14] Lobatto, R. Lessen over Differentiaal- en Integraal-Rekening. 2 Vol., La Haye, 1851-52.
[15] Granados M., A. L. “Lobatto Implicit Sixth Order Runge-Kutta Method for Solving Ordinary Differ-
ential Equations with Stepsize Control”, Mecánica Computacional, [Link], compiled by G.
Etse and B. Luccioni (AMCA, Asociación Argentina de Mecánica Computacional), pp.349-359, (1996).
[16] Sommer, D. “Numerische Anwendung Impliziter Runge Kutta-Formeln”, ZAMM, Vol.45, Sonderheft,
pp.T77-T79, (1965).