
International Journal of Computer Mathematics

Vol. 87, No. 8, July 2010, 1755–1767

The Hilber–Hughes–Taylor-α (HHT-α) method compared with an implicit Runge–Kutta for second-order systems
Basem S. Attili*
Mathematics Department, University of Sharjah, Sharjah, UAE

(Received 16 March 2008; revised version received 21 June 2008; accepted 01 July 2008)

We will consider the Hilber–Hughes–Taylor-α (HHT-α) method to solve periodic second-order initial value problems arising, e.g., in mechanics, and will analyse the method when applied to such problems. Second-order convergence is theoretically demonstrated and numerically illustrated. In addition,
we will consider the efficient implementation of an implicit fourth-order Runge–Kutta scheme. Numerical
details and examples will also be presented to demonstrate the efficiency of the methods.

Keywords: differential algebraic systems; Hilber–Hughes–Taylor method; second order systems; index
2 DAEs; convergence

2000 AMS Subject Classification: 65L80

1. Introduction

The problem under consideration is the implicit second-order initial value problem of the form

y′′ = f(t, y, y′), y(0) = y0, y′(0) = y′0, t > 0. (1)

Note that in mechanics, y represents generalized coordinates and z = y′ represents the corresponding
velocities. For example, problems like Equation (1) arise in nonlinear oscillation problems where
they have the form

My′′ = F(t, y, y′), t > 0, y(0) and y′(0) given (2)

with M a positive definite n × n matrix called the mass matrix, f(t, y, y′) = M⁻¹F(t, y, y′), and F(t, y, y′) representing the external forces. They may also have the form

My′′ + Ky = g(t), t > 0, y(0) and y′(0) given (3)

that arises in structural mechanics, where again M and K are n × n mass and stiffness matri-
ces, respectively and g(t) is the forcing term. Usually M and K have a block diagonal form.

*Email: b.attili@sharjah.ac.ae

ISSN 0020-7160 print/ISSN 1029-0265 online


© 2010 Taylor & Francis
DOI: 10.1080/00207160802464589
http://www.informaworld.com

The solution to Equation (2) or (3) is oscillatory. When numerical methods are applied to test problems of the form y′′ = −w²y, w > 0, stability problems arise, since the general solution is of the form y = A cos(wt + α); see Burder [4] and Sharp et al. [23].
Several authors dealt with such previously mentioned problems. For example, Cahlon [6]
considered in general well-posed problems that have some singularities at some boundaries. Attili
et al. [2] considered explicit Runge–Kutta methods for two-point boundary value problems (BVP)
of the type mentioned in Cahlon [6]. Sharp et al. [23] developed one class of numerical methods
based on Runge–Kutta Nystrom methods with appropriate stability and oscillation properties.
Cash [7,8] developed a p-stable method for periodic initial value problems. Others like Chawla [10]
developed an unconditionally stable Noumerov-type method, Cooper and Butcher [11] considered
an iterative method, Butcher and Chartier [5] presented a one-stage method, Xiao [25] considered a
two-stage method and Gladwell and Wang [14] presented analysis of two- and three-step methods
for second-order systems, and Shampine [22] dealt with implicit methods for solving ODEs. Parallel
implementation of the implicit Runge–Kutta and use of predictor corrector can be found in Li and
Gan [19] and Voss and Muir [24].
We will consider an extension of the Hilber–Hughes–Taylor-α (HHT-α) method, which pre-
serves the second-order convergence to solve the second-order system under consideration. The
idea of the method was first introduced by Hilber et al. [16], where the authors were seeking methods that possess numerical dissipation for high frequencies. These types of methods are commonly
used in structural dynamics since they maintain certain important properties such as unconditional
stability when applied to linear problems, no more than one set of implicit equations to be solved
at each step, second-order accuracy and self-starting. Note that unconditional stability in this case
is related to the behaviour of the method when applied to the scalar test equation y′′ = −w²y;
see [16]. The HHT-α method when applied to Equation (3) has the form

Man+1 + K[(1 + α)yn+1 − αyn] = g(tn+1 + αh)

yn+1 = yn + hzn + (h²/2)[(1 − 2β)an + 2βan+1]

zn+1 = zn + h[(1 − γ)an + γan+1], (4)

where h is the step size, and yn and zn are approximations to y(tn) and y′(tn), respectively [16]. Note that when α = 0, Equation (4) is referred to as the Newmark family of methods. The only second-order member of this family is the one with α = 0, γ = 1/2 and β = 1/4, in which case the method is the familiar trapezoidal rule. We will consider the case when α ≠ 0, γ = 1/2 − α and β = (1 − α)²/4. As a result, the method will be proven to be of order 2. Note that if α ∈ [−1/3, 0], the method is unconditionally stable.
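To make the update (4) concrete, here is a minimal sketch (ours, not from the paper) of one way to advance the linear structural system (3) with the scheme; the function name, the direct linear solves, and the computation of a consistent initial acceleration are our own assumptions:

```python
import numpy as np

def hht_alpha_linear(M, K, g, y0, z0, h, n_steps, alpha=-0.25):
    """Advance M y'' + K y = g(t) with scheme (4).

    Sketch only: constant M and K are assumed, the implicit relation is
    solved exactly by one linear solve per step (possible because the
    problem is linear), and a_0 is taken from the differential equation.
    """
    beta = (1.0 - alpha) ** 2 / 4.0   # second-order parameter choice
    gamma = 0.5 - alpha
    y = np.array(y0, dtype=float)
    z = np.array(z0, dtype=float)
    a = np.linalg.solve(M, g(0.0) - K @ y)      # a_0 = M^{-1}(g(0) - K y_0)
    A = M + (1.0 + alpha) * beta * h * h * K    # constant iteration matrix
    for n in range(n_steps):
        # Split (4): y_{n+1} = yp + beta*h^2*a_{n+1}, z_{n+1} = zp + gamma*h*a_{n+1}.
        yp = y + h * z + 0.5 * h * h * (1.0 - 2.0 * beta) * a
        zp = z + h * (1.0 - gamma) * a
        t_force = (n + 1) * h + alpha * h       # forcing evaluated at t_{n+1} + alpha*h
        a = np.linalg.solve(A, g(t_force) - K @ ((1.0 + alpha) * yp - alpha * y))
        y = yp + beta * h * h * a
        z = zp + gamma * h * a
    return y, z
```

For the test problem y′′ = −4y, y(0) = 1, y′(0) = 0, the computed y(1) closely tracks the exact value cos 2 at h = 0.01, with the slight damping expected from the choice α = −0.25.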
A first-order extension of the HHT-α method was considered by Yen et al. [26] where they
applied it to holonomically constrained mechanical systems. They based the method on projecting
the solution of the underlying ODE onto constraints after each step. For the application of the
HHT-α method to differential algebraic systems (DAEs), refer, e.g., to Negrut et al. [20] and Jay
and Negrut [17]. For some more numerical results on differential algebraic systems, one can
refer to, e.g. Attili [1], Celik and Yeloglu [9], Karasozen and Somalı [18] and El-Hawary and
Mahmoud [13] and the references therein.
We will consider two alternatives to solve problem (1); namely, a fourth-order two-stage implicit Runge–Kutta method, for which we present an efficient implementation, in addition to the HHT-α method. This will be performed in Sections 2 and 3. Some proofs of the second-order convergence (locally) of the HHT-α method will be presented in Section 3. Finally, in Section 4,
we will present some numerical details and examples in addition to numerical comparison between
the two methods.

2. The implicit Runge–Kutta method

We will consider the initial value problem of the form (1); i.e.,

y′′ = f(t, y, y′), y(0) = y0, y′(0) = y′0, t > 0. (5)

We may rewrite Equation (5) as a first-order system of the form

Y′ = F(Y), Y(0) = Y0, t > 0, (6)

where, as given in Equations (1) and (2), Y = (y, ẏ)ᵀ, F(Y) = (ẏ, M⁻¹f(y))ᵀ and Y0 = (y0, ẏ0)ᵀ.
The implicit fourth-order Runge–Kutta method for Equation (6) will be

Yn+1 = Yn + (h/2)[F(Y1) + F(Y2)], (7)

where

Y1 = Yn + (h/4)[F(Y1) + αm F(Y2)],

Y2 = Yn + (h/4)[αp F(Y1) + F(Y2)], (8)

with αm = 1 − 2√3/3 and αp = 1 + 2√3/3; see Cooper and Butcher [11], Ehle and Picel [12], Gladwell and Wang [14] and Serbin [21]. To implement the method, one solves Equation (8) for Y1 and Y2 using Newton's method, then substitutes back into Equation (7) to obtain the new Yn+1. To do so, the Newton iterates satisfy

J(Y^(p−1)) ΔY^(p) = −H(Y^(p−1)),

where H denotes the residual of Equation (8), or in detail

[ I − (h/4)(∂F/∂y)     −αm(h/4)(∂F/∂y) ] [ ΔY1^(p) ]       [ Y1^(p−1) − Yn − (h/4)F(Y1^(p−1)) − (h/4)αm F(Y2^(p−1)) ]
[ −αp(h/4)(∂F/∂y)     I − (h/4)(∂F/∂y) ] [ ΔY2^(p) ]  = −  [ Y2^(p−1) − Yn − (h/4)αp F(Y1^(p−1)) − (h/4)F(Y2^(p−1)) ].   (9)

If the system in Equation (5) is of order s, then the system in Equation (9) will be of order 4s. This makes the approach inefficient for large s. Instead, Attili et al. [3] considered eliminating one of the variables Y1 or Y2 by solving for one of them in terms of the other; using αpαm = −1/3, this gives

Y1 − Yn − (h/4)F(Y1) − (h/4)αm F[Yn(1 + 3αp) − 3αp Y1 + hαp F(Y1)] = 0 (10)

and

Y2 = Yn(1 + 3αp) − 3αp Y1 + hαp F(Y1), (11)
a system of order 2s. This means that one can solve Equation (10) using Newton’s method for Y1 ,
then recover Y2 from Equation (11) and update Y from Equation (7). The Jacobian of Equation (10)

after some simplification is (see Attili et al. [3]; similar systems were considered by Cooper and
Butcher [11]):
 
J = I − (h/2)(∂F/∂y) + (h²/12)(∂F/∂y)². (12)
As a result, Newton’s method will be
J(Y1^(p−1)) ΔY1^(p) = −H(Y1^(p−1)),

where ΔY1^(p) = Y1^(p) − Y1^(p−1) and

H(Y1^(p−1)) = Y1^(p−1) − Yn − (h/4)F(Y1^(p−1)) − (h/4)αm F[Yn(1 + 3αp) − 3αp Y1^(p−1) + hαp F(Y1^(p−1))]. (13)
One can easily see that the operator in Equation (12) can be approximately factorized as

J ≈ (I − rh(∂F/∂y))²

with r = √3/6, leading to a discrepancy in the linear term, or r = 1/4, leading to a discrepancy in the quadratic term. With this factorization, the system becomes

(I − rh(∂F/∂y))² ΔY1^(p) = −H(Y1^(p−1)). (14)
This implies that the solution can be obtained in two stages but using the same matrix; i.e., solve
 
(I − rh(∂F/∂y)) Z = −H(Y1^(p−1)),

(I − rh(∂F/∂y)) ΔY1^(p) = Z. (15)
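As a quick sanity check on the two choices of r (our own illustration, not from the paper), one can compare the coefficients of (I − rh(∂F/∂y))² with those of the exact operator (12):

```python
import numpy as np

# Exact operator (12): J = I - (h/2)A + (h^2/12)A^2, with A = dF/dy.
# Approximate factorization: (I - r h A)^2 = I - 2r h A + r^2 h^2 A^2.
r_quad = np.sqrt(3.0) / 6.0
assert abs(r_quad ** 2 - 1.0 / 12.0) < 1e-15    # quadratic term matched exactly
assert abs(2.0 * r_quad - 0.5) > 0.07           # linear term off (sqrt(3)/3 vs 1/2)

r_lin = 0.25
assert abs(2.0 * r_lin - 0.5) < 1e-15           # linear term matched exactly
assert abs(r_lin ** 2 - 1.0 / 12.0) > 0.02      # quadratic term off (1/16 vs 1/12)
```

Either choice therefore leaves an O(h) or O(h²) perturbation of J, which is what the word "discrepancy" above refers to.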
One factorization with two back substitutions can then be organized as follows. Partition

−H(Y1^(p−1)) = (H1^(p−1) + M⁻¹H2^(p−1), H3^(p−1) + M⁻¹H4^(p−1))ᵀ,   Y1^(p−1) = (u1^(p−1), u2^(p−1))ᵀ, (16)

where

H1^(p−1) = yn − u1^(p−1) + (h/4)(αm − 1)ẏn + (h/2)u2^(p−1),

H2^(p−1) = −(h²/12) f(u1^(p−1)),

H3^(p−1) = ẏn − u2^(p−1),

H4^(p−1) = (h/4) f(u1^(p−1)) + (h/4)αm f((1 + 3αp)yn − 3αp u1^(p−1) + hαp u2^(p−1)). (17)
Notice also that the matrix on the left-hand side of Equation (15) has the block form

I − rh(∂F/∂y) = [ I                 −rhI ]
                [ −rhM⁻¹(∂f/∂y)     I    ]. (18)

Here ∂f/∂y = J is assumed to be a constant Jacobian at each step, and we let Z = (z1, z2)ᵀ.

Now, to reduce the work further and implement the method efficiently, we premultiply the first equation of (15) from the left by

C = M [ I   rhI ]
      [ 0   I   ] (19)

to obtain

[ M − r²h²J   0 ] [ z1 ]   [ M(H1^(p−1) + rhH3^(p−1)) + H2^(p−1) + rhH4^(p−1) ]
[ −rhJ        M ] [ z2 ] = [ MH3^(p−1) + H4^(p−1)                             ].

This leads to

(M − r²h²J)z1 = M(H1^(p−1) + rhH3^(p−1)) + H2^(p−1) + rhH4^(p−1). (20)

From the first block row of the first equation in (15), we will have

z1 − rhz2 = H1^(p−1) + M⁻¹H2^(p−1)

and hence

z2 = (z1 − H1^(p−1) − M⁻¹H2^(p−1))/(rh), (21)
which means that z1 and z2 can be computed. Now, for the second equation of (15), again premultiplying by the matrix C given by Equation (19) leads to

[ M − r²h²J   0 ] [ u1^(p) − u1^(p−1) ]   [ M(z1 + rhz2) ]
[ −rhJ        M ] [ u2^(p) − u2^(p−1) ] = [ Mz2          ].

In a similar way, we will have

(M − r²h²J)(u1^(p) − u1^(p−1)) = M(z1 + rhz2)

and, using Equation (21),

(M − r²h²J)(u1^(p) − u1^(p−1)) = M(2z1 − H1^(p−1)) − H2^(p−1). (22)

We repeat a similar argument here to compute u2^(p) − u2^(p−1) and hence u2^(p); i.e. from the first block row of the second equation of (15), we have

(u1^(p) − u1^(p−1)) − rh(u2^(p) − u2^(p−1)) = z1,

leading to

u2^(p) = u2^(p−1) + [(u1^(p) − u1^(p−1)) − z1]/(rh). (23)
Having u1^(p) and u2^(p), we can compute Y2 from Equation (11) and then update Yn+1 from Equation (7). Instead, if we substitute Y1 and Y2 in Equation (7) directly and simplify the result, we obtain

Myn+1 = (h²/2)αp f(u1) + M[yn + (h/2)(1 + 3αp)ẏn + (h/2)(1 − 3αp)u2],

Mẏn+1 = Mẏn + (h/2)f(u1) + (h/2)f(yn(1 + 3αp) − 3αp u1 + αp h u2). (24)
This will lead to the following algorithm.

Algorithm Given yn, ẏn and J = ∂f/∂y at any step, to compute yn+1 and ẏn+1:

1. Set p = 1, predict u1^(0) and u2^(0), and then carry out Newton's iteration.
2. Evaluate f(u1^(p−1)) and f(yn(1 + 3αp) − 3αp u1^(p−1) + αp h u2^(p−1)).
3. Form H1^(p−1), H2^(p−1), H3^(p−1) and H4^(p−1) using Equation (17).
4. Solve the systems (20) to (23) for z1, z2, u1^(p) and u2^(p).
5. Set p = p + 1 and repeat until convergence.

Then calculate yn+1 and ẏn+1 from Equation (24).

Note that it is advisable to work with rhz2 in Equation (21) and rh(u2^(p) − u2^(p−1)) in Equation (23), in order to avoid dividing by what might possibly be a very small step size h.
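For reference, a compact (unoptimized) realization of the two-stage scheme (7)–(8) is sketched below; unlike the factorized algorithm above, it solves the stage equations by plain fixed-point iteration, which is adequate only for nonstiff problems and small h. The function names and iteration count are our own choices:

```python
import numpy as np

ALPHA_M = 1.0 - 2.0 * np.sqrt(3.0) / 3.0   # alpha_m of Equation (8)
ALPHA_P = 1.0 + 2.0 * np.sqrt(3.0) / 3.0   # alpha_p of Equation (8)

def irk4_step(F, Yn, h, iters=20):
    """One step of the two-stage fourth-order scheme (7)-(8).

    The stage equations (8) are solved by fixed-point iteration rather
    than by the factorized Newton process of the text, so this sketch is
    only suitable for nonstiff F and small h.
    """
    Y1 = np.array(Yn, dtype=float)
    Y2 = np.array(Yn, dtype=float)
    for _ in range(iters):
        F1, F2 = F(Y1), F(Y2)
        Y1 = Yn + (h / 4.0) * (F1 + ALPHA_M * F2)
        Y2 = Yn + (h / 4.0) * (ALPHA_P * F1 + F2)
    return Yn + (h / 2.0) * (F(Y1) + F(Y2))   # update (7)

def irk4_solve(F, Y0, h, n_steps):
    Y = np.array(Y0, dtype=float)
    for _ in range(n_steps):
        Y = irk4_step(F, Y, h)
    return Y
```

For y′′ = −4y written as a first-order system, ten steps of size h = 0.1 already reproduce y(1) = cos 2 to several digits, consistent with the fourth-order accuracy of the scheme.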

3. The HHT-α method

Again, consider the second-order system of the form

y′′ = f(t, y, y′), (25)

which is equivalent to

y′ = z, z′ = f(t, y, z),

with ti+1 = ti + h, and h a step size. The HHT-α method for this system is given by, see [16],

yi+1 = yi + hzi + (h²/2)[(1 − 2β)k0i + 2βk1i]

zi+1 = zi + h[(1 − γ)k0i + γk1i] (26)

with

k = f(t, y, z), k0i = f(ti, yi, zi),

k1i = (1 + α)f(ti + (α + 1)h, yi+1, zi+1) − αf(ti + αh, yi, zi), (27)

and the parameters given as

α ∈ [−1/3, 0], β = (1 − α)²/4, γ = 1/2 − α. (28)
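A direct transcription of (26)–(28) for a general f(t, y, y′) might look as follows (our own sketch; the fixed-point solve for the implicit k1i is one simple choice, and a Newton iteration would be preferred for stiff problems):

```python
def hht_step(f, t, y, z, k0, h, alpha=-0.25, iters=30):
    """One step of scheme (26)-(27) for y'' = f(t, y, y').

    Sketch only: the implicit stage value k1 is resolved by simple
    fixed-point iteration.  Returns (y_{i+1}, z_{i+1}, k0 for the
    next step), where the new k0 is f(t_{i+1}, y_{i+1}, z_{i+1}).
    """
    beta = (1.0 - alpha) ** 2 / 4.0   # parameters (28)
    gamma = 0.5 - alpha
    k1 = k0                            # predictor for k1i
    for _ in range(iters):
        y1 = y + h * z + 0.5 * h * h * ((1 - 2 * beta) * k0 + 2 * beta * k1)
        z1 = z + h * ((1 - gamma) * k0 + gamma * k1)
        k1 = (1 + alpha) * f(t + (1 + alpha) * h, y1, z1) \
             - alpha * f(t + alpha * h, y, z)
    return y1, z1, f(t + h, y1, z1)
```

Stepping this routine with f(t, y, z) = −4y and h = 0.01 tracks the exact solution cos 2t with the second-order accuracy established in the theorems below.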

To analyse the HHT-α method, we will start with the following existence theorem.

Theorem 1 Consider the system given by Equation (26) with initial conditions y(0) = y0(h), z(0) = z0(h) and k(0) = k00(h) depending on h such that k00 − k(t0 + αh) = O(h). Then for 0 ≤ h ≤ h0, there exists a unique solution (yi+1, zi+1, k1i) depending on h to Equation (26) in a neighbourhood of (yi, zi, k0i). Moreover, we have the estimates yi+1 − yi = O(h), zi+1 − zi = O(h) and k1i − k0i = O(h).

The proof is straightforward and for similar results, see [16].



Theorem 2 Consider the system given by Equation (26) with initial conditions (y0, z0, k00) at t0. Then for 0 ≤ h ≤ h0, the computed solution (y1, z1, k10) at t1 = t0 + h of the system Equation (26) satisfies

y1 − y(t1) = O(h³), z1 − z(t1) = O(h²), k10 − k(t1 + αh) = O(h²),

where (y(t), z(t)) is the exact solution to Equation (25) at t satisfying y(t0) = y0 and z(t0) = z0. If in addition we assume that k00 − k(t0 + αh) = O(h²), then z1 − z(t1) = O(h³).

Proof Using Taylor’s theorem, we have

h2 
y(t1 ) = y0 + hy  (t0 ) + y (t0 ) + O(h3 )
2
h2
= y0 + hz0 + f0 + O(h3 ).
2

Using Equation (26), we have

h2
y1 = y0 + hz0 + [(1 − 2β)k00 + 2βk10 ].
2

This means after some simplification

h2
y1 − y(t1 ) = [(1 − 2β)k00 + 2βk10 − f0 ] + O(h3 )
2
h2
= [(1 − 2β)f0 + 2β((1 + α)f1 − αf0 ) − f0 ] + O(h3 ). (29)
2

Now since f1 = f0 + O(h), Equation (29) becomes

h2
y1 − y(t1 ) = [f0 − 2βf0 + 2βf0 + 2β(O(h)) + 2βαf0 − 2βαf0 + 2βα(O(h)) − f0 ] + O(h3 )
2
= O(h3 ).

Similarly, one can show that z1 − z(t1 ) = O(h2 ) since z1 = z0 + hf0 + O(h2 ) while z(t1 ) =
z0 + hf0 + O(h2 ).
Now since

k10 = (1 + α)f(t1, y1, z1) − αf(t0, y0, z0),

we have

k(t1 + αh) = f0 + (1 + α)h(ft0 + fy0 z0 + fz0 f0) + O(h²) (30)

and

k10 = f0 + (1 + α)h(ft0 + fy0 z0 + fz0 f0) + O(h²). (31)

Clearly, from Equations (30) and (31), we have k(t1 + αh) − k10 = O(h²).

Now, to show the last part of the theorem, note that the condition k00 − k(t0 + αh) = O(h²) means that

k00 = f0 + αh(ft0 + fy0 z0 + fz0 f0) + O(h²). (32)

From Equation (26),

z1 = z0 + h((1 − γ)k00 + γk10). (33)

Now, from Equations (31) and (32), and since γ = 1/2 − α, after some simplification we will have

(1/2 + α)k00 + (1/2 − α)k10 = f0 + (h/2)(ft0 + fy0 z0 + fz0 f0) + O(h²).

Substituting back into Equation (33), we get

z1 = z0 + hf0 + (h²/2)(ft0 + fy0 z0 + fz0 f0) + O(h³). (34)

Hence the result follows directly by comparing Equation (34) with the Taylor expansion of z(t1) about t0, giving z1 − z(t1) = O(h³), which completes the proof of the theorem.

These results show the existence and uniqueness of the numerical solution of the HHT-α method, as can be seen from Theorem 1, while Theorem 2 gives local error estimates for the various components of the solution. This paves the way for the numerical testing in the next section.

4. Numerical details and examples

We will consider some examples to test the methods presented in the previous sections. For the
HHT-α method, the parameters α, β and γ are determined by choosing the damping coefficient
α = −0.25. It should be noted here that the HHT-α method is self-starting and that only one set of implicit equations is to be solved at each step. In addition, the method is unconditionally stable, which makes it attractive when solving structural dynamics problems.

Example 1 Consider

y′′ = −Ω²y,
y(0) = y0, y′(0) = yp,

with y0 and yp given. We considered the problem with Ω = 8, y0 = 0.25 and yp = −0.5, which has y(t) = (√17/16) sin(8t + θ), θ = π − arctan(4), as exact solution. Solving using the HHT-α method with h = 0.05, the results obtained are plotted against the exact solution in Figure 1. The solution obtained compares well with the previous work of Attili et al. [3]. On the other hand, when using the implicit Runge–Kutta scheme, the errors at x = 1 for step sizes h = 0.01 and h = 0.005 were, respectively, 0.1084 × 10⁻⁶ and 0.6776 × 10⁻⁸. This means that the algorithm is producing an order of 3.99979, i.e. O(h⁴) approximations to the solution of the system, with the amount of work reduced to a minimum.
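The convergence orders quoted in this example are the standard two-grid estimates log2(e(h)/e(h/2)); a tiny helper (ours, not the paper's) reproduces the reported figures from the tabulated errors:

```python
import math

def observed_order(err_h, err_h2):
    """Observed order of convergence from errors at step sizes h and h/2."""
    return math.log2(err_h / err_h2)

# Implicit Runge-Kutta errors at x = 1 (h = 0.01 and h = 0.005):
print(observed_order(0.1084e-6, 0.6776e-8))   # matches the quoted 3.99979
# HHT errors at x = 1 for this example with Omega = 10 (h = 0.05 and h = 0.025):
print(observed_order(0.105e-3, 0.273e-4))     # matches the quoted 1.94342
```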
Again, we considered the same example with Ω = 10, y0 = 1 and yp = Ω, which has y = cos Ωx + sin Ωx as exact solution. The approximate solutions against the exact ones are given in Figure 2. Using the HHT-α method, the errors at x = 1 with h = 0.05 and h = 0.025 were, respectively, 0.105 × 10⁻³ and 0.273 × 10⁻⁴. Hence the order of convergence when Ω = 10 is 1.94342, which reflects the expected second-order convergence. When using the Runge–Kutta algorithm, the errors at x = 1 with h = 0.01 and h = 0.005 were, respectively, 0.526 × 10⁻⁶ and 0.329 × 10⁻⁷. Hence the order of convergence when Ω = 10 is 3.9989. The results are accurate even though the solution is oscillatory. Again, the results compare well with Attili et al. [3].

Example 2 Consider the nonlinear problem

y′′ + y³ = 0,
y(0) = 1, y′(0) = 1.

Solving using the NDSolve Mathematica routine, the HHT method and the implicit Runge–Kutta
algorithm, the solutions obtained are given in Figures 3 and 4.
Both were solved with h = 0.05. Since the exact solution is not known, each was compared
with the Mathematica-6 solution, which is assumed to be efficient and accurate.

Figure 3. The Mathematica solution against the Hilber–Hughes–Taylor solution.

Figure 4. The Mathematica solution against the Runge–Kutta solution.

Example 3 Consider the nonlinear problem

y′′ = y³ − yy′,
y(1) = 0.5, y′(1) = −0.25, (35)

which has the exact solution y(x) = 1/(x + 1). The errors at mesh points in the interval [1, 2]
are given in Table 1 where we report also the results of Ha [15].
The errors in Ha’s case are in general less than that of the HHT case. This is due to the fact
that the initial value solver that Ha used is with better order of accuracy than 2, which is the order
of the HHT scheme. In addition, the amount of work done in the latter case is significantly less
that makes it attractive. Note also that here we solve the initial value problem and not the BVP as
in Ha [15].

Example 4 Consider the initial value problem

y′′ = (32 + 2x³ − yy′)/8,
y(1) = 17, y′(1) = −14,

Table 1. Comparison of the results we obtained with those of Ha [15].

x HHT RK4 Ha’s results

1.00 0 0 0
1.05 0.00008 1.97e−6 −0.000030
1.10 0.00013 4.68e−7 −0.000054
1.15 0.00025 1.53e−7 −0.000073
1.20 0.00028 6.17e−7 −0.000088
1.25 0.00026 2.85e−7 −0.000099
1.30 0.00043 1.46e−7 −0.000107
1.35 0.00046 8.12e−8 −0.000111
1.40 0.00056 4.8e−7 −0.000113
1.45 0.00041 2.98e−7 −0.000112
1.50 0.00053 1.92e−6 −0.000109
1.55 0.00038 1.29e−7 −0.000103
1.60 0.00036 8.92e−7 −0.000096
1.65 0.00018 6.32e−7 −0.000088
1.70 0.00019 4.57e−7 −0.000078
1.75 0.00024 3.38e−7 −0.000066
1.80 0.00033 2.54e−7 −0.000053
1.85 0.00011 1.93e−7 −0.000040
1.90 0.00009 1.50e−7 −0.000025
1.95 0.00013 1.17e−7 −0.000009
2.00 0.00008 9.31e−8 −0.000007

Note: HHT, Hilber–Hughes–Taylor; RK, Runge–Kutta.

Figure 5. The Runge–Kutta-4 solution against the Hilber–Hughes–Taylor solution.

which has y = x 2 + (16/x) as exact solution. Again using the HHT method and the Runge–Kutta
schemes, the results obtained are plotted in Figure 5 where the solid line is the fourth-order
Runge–Kutta solution and the dots represent the HHT solution. They are good approximations of
the exact solution.

Example 5 Consider the nonlinear problem

y′′ = 5 sinh 5y,
y(0) = 0, y′(0) = 0.0457504.

Figure 6. The Runge–Kutta-4 solution against the Hilber–Hughes–Taylor solution.

The results obtained are given in Figure 6, where we plot the RK-4 results (the solid line) against the HHT solution (the dotted line). There is no exact solution available against which to compare the results.

5. Conclusion

A fourth-order two-stage implicit Runge–Kutta method was implemented efficiently to solve the second-order system, in addition to the HHT-α method. Some proofs of the second-order convergence (locally) of the HHT-α method were also presented. Although the rate of convergence of the HHT-α method is only second order, its unconditional stability when applied to linear problems, the fact that no more than one set of implicit equations has to be solved at each step, and its self-starting nature make it attractive. Numerical examples show the efficiency of the approaches considered, together with comparisons with the work of others or with the exact solutions.

References

[1] B. Attili, Continuation method for computing certain singular points for index-1 differential algebraic equations,
Appl. Math. Comput. 175 (2006), pp. 1026–1038.
[2] B. Attili, M. Elgindi, and M. Elgebeily, Initial value methods for the eigen elements of singular two-point boundary
value problems, AJSE 22, no. 2C (1997), pp. 67–77.
[3] B. Attili, K. Furati, and M. Syam, An efficient implicit Runge–Kutta method for second order systems, Appl. Math.
Comput. 178 (2006), pp. 229–238.
[4] J. Burder, Linearly implicit Runge–Kutta methods based on implicit Runge–Kutta methods, Appl. Numer. Math. 13
(1993), pp. 33–40.
[5] J. Butcher and P. Chartier, The effective order of singly-implicit Runge–Kutta methods, Numer. Algorithms, 20 (1999),
pp. 269–284.
[6] B. Cahlon, Numerical methods for initial value problems, Appl. Numer. Math. 5 (1989), pp. 399–407.
[7] J. Cash, High order P-stable formulae for periodic initial value problems, Numer. Math. 37 (1981), pp. 355–370.
[8] J. Cash, Efficient P-stable methods for periodic initial value problems, BIT 24 (1984), pp. 248–252.
[9] E. Celik and T. Yeloglu, Chebyshev series approximation for solving differential-algebraic equations (DAEs), Int.
J. Comput. Math. 83(8–9) (2006), pp. 651–662.
[10] M. Chawla, Unconditionally stable Noumerov-type methods for second order differential equations, BIT, 23 (1983),
pp. 541–552.
[11] J. Cooper and J. Butcher, An iterative scheme for implicit Runge–Kutta methods, IMA J. Numer. Anal. 3 (1983),
pp. 127–140.

[12] B. Ehle and Z. Picel, Two parameter arbitrary order exponential approximations for stiff equations, Math. Comp.
29 (1975), pp. 501–511.
[13] H.M. El-Hawary and S.M. Mahmoud, The numerical solution of higher index differential-algebraic equations by
4-point spline collocation methods, Inter. J. Comput. Math. 80 (2003), pp. 1299–1312.
[14] I. Gladwell and J. Wang, Iterations and predictors for second order systems, in Proceedings of the 14th IMACS
World Congress on Computing and Applied Mathematics, W.F. Ames, ed., Georgia, 3 (1994), pp. 1267–1270.
[15] S.N. Ha, A nonlinear shooting method for two point boundary value problems, Comput. Math. Appl. 42 (2001),
pp. 1411–1420.
[16] H.M. Hilber, T.J. Hughes, and R.L. Taylor, Improved numerical dissipation for time integration algorithms in
structural dynamics, Earthquake Eng. Struc. Dyn. 5 (1977), pp. 283–292.
[17] L.O. Jay and D. Negrut, Extensions of the HHT-α method to differential algebraic equations in mechanics, Elect.
Trans. Numer. Anal. 26 (2007) pp. 190–208.
[18] B. Karasozen and Ş. Somalı, An error analysis of iterated correction methods for linear differential-algebraic
equations. Inter. J. Comput. Math. 60 (1996), pp. 121–137.
[19] S.F. Li and S. Gan, A class of Parallel Multistep Runge–Kutta Predictor-Corrector Algorithms, J. Numer. Methods
Comput. Appl. 17 (1995), pp. 1–11.
[20] D. Negrut, R. Rampalli, G. Ottarsson, and A. Sajdak, On an implementation of the HHT method in the context of
index 3 differential algebraic equations of multibody dynamics, ASME J. Comp. Nonlin. Dyn. 2(2007) pp. 723–85.
[21] M. Serbin, On factoring a class of complex symmetric matrices without pivoting, Math. Comput. 35 (1980),
pp. 1231–1234.
[22] L. Shampine, Implementation of implicit formulas for the solution of ODE’s, SIAM J. Sci. Stat. Comput. 1 (1980),
pp. 103–118.
[23] P. Sharp, J. Fine, and K. Burrage, Two-stage and three stage diagonally implicit Runge–Kutta Nystrom methods of
order three and four, IMA J. Numer. Anal. 10 (1990), pp. 489–504.
[24] D. Voss and P. Muir, Mono-implicit Runge–Kutta schemes for the parallel solutions of initial value ODE’s, J. Comput.
Appl. Math. 102 (1999) pp. 235–252.
[25] A. Xiao, Order results for algebraically stable mono-implicit Runge–Kutta methods, J. Comput. Appl. Math. 17
(1999) pp. 639–644.
[26] J. Yen, L. Petzold, and S. Raha, A time integration algorithm for flexible mechanism dynamics: the DAE-alpha
method, Comput. Methods Appl. Mech. Eng. 158 (1998), pp. 341–355.
