ISSN 0965-5425, Computational Mathematics and Mathematical Physics, 2006, Vol. 46, No. 8, pp. 1320–1340.

© MAIK “Nauka /Interperiodica” (Russia), 2006.
Original Russian Text © A.B. Al’shin, E.A. Al’shina, N.N. Kalitkin, A.B. Koryagina, 2006, published in Zhurnal Vychislitel’noi Matematiki i Matematicheskoi Fiziki,
2006, Vol. 46, No. 8, pp. 1392–1404.

Rosenbrock Schemes with Complex Coefficients for Stiff
and Differential Algebraic Systems
A. B. Al’shin^a, E. A. Al’shina^a, N. N. Kalitkin^b, and A. B. Koryagina^c
a Faculty of Physics, Moscow State University, Leninskie gory, Moscow, 119992 Russia
e-mail: alshina@gmx.co.uk
b Institute of Mathematical Modeling, Russian Academy of Sciences,
Miusskaya pl. 4a, Moscow, 125047 Russia
c Moscow State Institute of Electronic Engineering (Technical University), Zelenograd,
Moscow, 103498 Russia
Received March 24, 2006

Abstract—Many applied problems are described by differential algebraic systems. Complex Rosenbrock schemes are proposed for the numerical integration of differential algebraic systems by the ε-embedding method. The method is proved to converge quadratically. The scheme is shown to be applicable even to superstiff systems. The method makes it possible to perform computations with a guaranteed accuracy. An equation is derived that describes the leading term of the error in the method as a function of time. An algorithm extending the method to systems of differential equations for complex-valued functions is proposed. Examples of numerical computations are given.
DOI: 10.1134/S0965542506080057
Keywords: systems of stiff differential algebraic equations, Rosenbrock scheme with complex coefficients

1. INTRODUCTION
1.1. Problem
Singularly perturbed problems are a special class of problems that involve a parameter ε multiplying the highest time derivative. When the parameter is small, the underlying differential equation is stiff. When ε → 0, the Cauchy problem for the differential equation becomes a differential algebraic system. Singularly perturbed problems frequently arise in various applications. One of them is associated with fluid dynamics, in which linear boundary value problems are encountered that involve a small parameter ε (viscosity) in such a manner that the differential equation loses the highest derivative when ε → 0. Other applications are related to nonlinear oscillations with large parameters [1, 2] and to chemical kinetics with slow and fast reactions.
The limiting case ε = 0 is known as the reduced system (see [3]). This problem makes sense only when the initial conditions are consistent, i.e., belong to the solution manifold of the algebraic component of the system. In practice, differential algebraic equations (DAEs) frequently arise in a pure form (without passage to the limit with respect to a small parameter). For example, current oscillations in electrical circuits are governed by differential equations (DEs). If a circuit consists of several branches, algebraic relations are imposed on the currents and potentials at the junctions. Fluid dynamics is described by a system of Euler differential equations supplemented with algebraic equations of state. The motion of a hinge mechanism is described by DEs of Newtonian dynamics for each link and by algebraic matching conditions at the joints.
For nearly linear problems, approximate analytical solution methods have been constructed based on the study of the spectral properties of a problem (see [4]). Two major approaches to the numerical solution of DAEs are the ε-embedding method and the state space method (see [3]). However, their accuracy remains an open question: none of the examples presented in the literature gives estimates of the accuracy of the results achieved with these methods. Below is a brief description of the former method.



1.2. The ε-Embedding Method
Consider the singularly perturbed problem
y' = f(y, z), y(0) = y0,
εz' = g(y, z), z(0) = z0, (1)
where y and z are vectors of generally different dimensions and f(y, z) and g(y, z) are vector functions of the corresponding dimensions that are differentiable a sufficient number of times. The corresponding reduced system is
y' = f(y, z), y(0) = y0,
0 = g(y, z), z(0) = z0, (2)
and its initial data are consistent; i.e., g(y0, z0) = 0. Assume that the Jacobi matrix of the algebraic part of
system (2) is invertible; i.e.,
∃(g_z)⁻¹(y, z) (3)
in the neighborhood of the solution to (2). Then, by the implicit function theorem, there exists a unique solution z = G(y) that converts the algebraic equation of system (2) into an identity: g(y, G(y)) ≡ 0. Substituting z = G(y) into the differential part of system (2) gives the so-called equation in the state space:
y' = f(y, G(y)),
which is an ordinary differential equation. If condition (3) is satisfied, (2) is said to be a DAE of index 1.
In the ε-embedding method, any numerical method is applied to problem (1) and, then, ε = 0 is set in the resulting formulas. The idea of the method was proposed for backward differentiation formulas (BDF methods) in [5]. The above arguments are only heuristic considerations for the construction of the ε-embedding method. The proof of its convergence is a nontrivial problem for each type of difference scheme. For several implicit schemes with real coefficients, the convergence of the ε-embedding method was proved, for example, in [3].
The loss-of-accuracy phenomenon associated with the ε-embedding method was also described in [3].
Specifically, the order of accuracy for DAEs frequently turns out to be lower than that for pure DEs.
Schemes preserving the same high order of accuracy for DAEs as for DEs were called stiffly accurate in [3].
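The state space method described above can be sketched in a few lines of code. The example below is our hypothetical illustration, not a computation from the paper: for the index-1 system y' = −z, 0 = z − y³, the algebraic equation g(y, z) = 0 is resolved for z by Newton iteration, and the resulting state space equation y' = f(y, G(y)) is advanced by a classical second-order Runge–Kutta (midpoint) step.

```python
import math

# Hypothetical index-1 system (our illustration, not from the paper):
#   y' = f(y, z) = -z,   0 = g(y, z) = z - y**3.
# Here g_z = 1 is invertible, so z = G(y) = y**3 exists by the implicit
# function theorem, and the state space equation is y' = f(y, G(y)) = -y**3.

def f(y, z):
    return -z

def g(y, z):
    return z - y ** 3

def solve_algebraic(y, z_guess, tol=1e-12, max_iter=50):
    """Resolve g(y, z) = 0 for z at fixed y by Newton iteration (g_z = 1 here)."""
    z = z_guess
    for _ in range(max_iter):
        dz = -g(y, z) / 1.0
        z += dz
        if abs(dz) < tol:
            break
    return z

def integrate_state_space(y0, z0, tau, n_steps):
    """Advance y' = f(y, G(y)) by the classical second-order midpoint rule."""
    y, z = y0, z0
    for _ in range(n_steps):
        z = solve_algebraic(y, z)
        k1 = f(y, z)
        z_mid = solve_algebraic(y + 0.5 * tau * k1, z)
        k2 = f(y + 0.5 * tau * k1, z_mid)
        y += tau * k2
    return y, solve_algebraic(y, z)

y1, z1 = integrate_state_space(1.0, 1.0, 1e-3, 1000)
print(y1, 1.0 / math.sqrt(3.0))  # exact y(t) = 1/sqrt(1 + 2t) for this test system
```

For this test system the exact solution is y(t) = (1 + 2t)^(−1/2), so the accuracy of the reduction is easy to monitor; for stiff problems, however, the explicit integrator used here would have to be replaced, which motivates the schemes discussed below.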

1.3. Scheme for Stiff Systems
An additional difficulty in applications arises when the underlying system is stiff. Stiff systems are characterized by both rapidly damped and slowly varying solution components, and the characteristic times of the various processes differ by many orders of magnitude. In practice, stiff systems are almost inevitably encountered in problems involving many processes. As was mentioned above, singularly perturbed problems are stiff. Differential algebraic systems can be treated as problems of infinite stiffness. Therefore, in the ε-embedding method, it is reasonable to use schemes that perform well for purely differential systems of high stiffness.
Difference schemes constructed for stiff systems must satisfy enhanced stability requirements: A-stability, Lp-stability, and other types [3, 6, 7]. Explicit schemes are hardly applicable to stiff problems.
Stiff stability. Beginning in the 1950s, special implicit methods were developed for stiff problems and
a number of additional properties were stated that must be satisfied by the desired schemes. Consider the
Dahlquist problem
du/dt = λu, 0 ≤ t ≤ T, u(0) = u0, u_ex(t) = u0·e^(λt). (4)
When λ ≪ –1, the exact solution to Eq. (4) decays rapidly and monotonically.
For any linear scheme, the transition to the next time level while solving problem (4) has the form û =
R(τλ)u, where R(ξ) is called the growth function or the stability function.
Definition 1. A scheme is said to be A-stable if |R(ξ)| ≤ 1 for Re ξ ≤ 0; i.e., the numerical solution is stable
in the same range of λ as the exact solution to problem (4).
If a scheme is not at least A-stable, it is not applicable to stiff problems.
It is desirable that the stability function also decay rapidly as Reλ → –∞. For this reason, the concept of Lp-stability is introduced.
Definition 2. A scheme is said to be Lp-stable if it is A-stable and R(ξ) = O(ξ^(–p)) as |ξ| → ∞.
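Definitions 1 and 2 can be checked numerically for a concrete growth function. The sketch below is our illustration, not code from the paper: for the one-stage Rosenbrock scheme applied to the Dahlquist problem (4) with real λ, the growth function is R(ξ) = 1 + Re(ξ/(1 − αξ)) with ξ = τλ; we verify A-stability on the negative real semiaxis and the L2-type decay R(ξ) = O(ξ⁻²) for the complex parameter α = (1 + i)/2 used later in the paper.

```python
import math

# Growth function of the one-stage Rosenbrock scheme for the Dahlquist
# problem (4) with real λ and step τ (ξ = τλ):
#   û = u + τ Re k,  (1 - ατλ)k = λu   =>   R(ξ) = 1 + Re(ξ / (1 - αξ)).
# This is our illustrative sketch; α = (1 + i)/2 is the CROS value.

def R(xi, alpha):
    return 1.0 + (xi / (1.0 - alpha * xi)).real

alpha_cros = (1 + 1j) / 2

# A-stability on the negative real semiaxis: |R(ξ)| <= 1 for ξ <= 0.
print(all(abs(R(-(10.0 ** p), alpha_cros)) <= 1.0 for p in range(-3, 7)))

# L2-stability: R(ξ) = O(ξ⁻²), i.e. ξ²·R(ξ) stays bounded as |ξ| -> ∞.
print(R(-1e6, alpha_cros) * 1e12)

# Second-order accuracy: R(ξ) matches exp(ξ) up to O(ξ³).
print(abs(R(-0.01, alpha_cros) - math.exp(-0.01)))
```

For α = (1 + i)/2 and real ξ, this R(ξ) simplifies to 1/(1 − ξ + ξ²/2), which makes both the decay rate and the second-order agreement with e^ξ visible by hand.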


For high-order schemes, a useful generalization is Lpα-stability (|R(ξ)| ≤ 1 for 90° + α ≤ arg ξ ≤ 270° – α and R(ξ) = O(ξ^(–p)) as |ξ| → ∞). The stiffer the problem (where the measure of stiffness is |λT|), the more advantageous the Lp-stable schemes with large p.

Iterative methods. Popular implicit schemes are implicit Runge–Kutta (IRK) methods, which have been considered in numerous works. Every time step of an IRK method involves a system of generally nonlinear equations, which is repeatedly solved by Newton iteration. Iteration strongly complicates the use of IRK schemes, since the stability issues are supplemented with concerns related to the convergence of the iterative process. Even if a method is based on an A-stable scheme (i.e., |R(ξ)| ≤ 1), not all IRK schemes are suitable for stiff problems. For an s-stage IRK method, the minimum number s of arising nonlinear systems corresponds to diagonal implicit Runge–Kutta (DIRK) methods. The ranges of the parameters of DIRK methods in which the scheme is A- and L-stable were examined in [3], where stiffly accurate methods (i.e., methods applicable to DAEs) are also indicated. A comprehensive review of IRK methods suitable for stiff problems can be found in [3].

Noniterative methods. Multistep methods (see [3]). The coefficients of multistep methods are chosen so that the q-step method is O(τ^q) accurate. They underlie the popular Gear package, and it is these methods that are more often used in practice. However, it can be shown that multistep methods are at most Lpα-stable with small p = 1/q.

Extrapolation methods (see [3]). They allow us to hope for the construction of high-order schemes. However, A-stability is destroyed by extrapolation. Moreover, they are considerably less reliable than ROS, ROW, and IRK methods.

An alternative approach that overcomes this difficulty is to use Rosenbrock (ROS) and Rosenbrock–Wanner (ROW) methods (see [3, 8–11]). Formally, these schemes are implicit, but they do not involve iteration, and the number of arithmetic operations required for the transition to a new time level is fixed and known in advance (as in explicit schemes). For this reason, these schemes are called explicit–implicit or semiexplicit. Rosenbrock schemes were applied to the integration of DAEs in [16], but only schemes with real coefficients were examined in that study.

Among the one-stage Rosenbrock schemes, we distinguish the complex Rosenbrock scheme (CROS). The scheme possesses unique accuracy and stability properties. For many years, we have been successfully applying it to various problems of high stiffness, including systems of nonlinear partial differential equations, thus confirming its efficiency. By applying the CROS scheme, excellent results have been obtained for stiff purely differential systems (see [11–15]). However, the scheme remains little known and was not mentioned in the classical monograph [3].

The goal of this work is to construct an efficient numerical integration method for superstiff and differential algebraic systems based on the complex Rosenbrock scheme combined with a posteriori global Richardson extrapolation. We prove the convergence of the method, analyze the behavior of its error, give numerical examples, and compare the method with existing approaches. The method constructed makes it possible to obtain asymptotically accurate a posteriori error estimates and perform computations with guaranteed error control.

2. SINGLE-STAGE COMPLEX ROSENBROCK SCHEME
2.1. Rosenbrock Schemes
Consider the Cauchy problem for an autonomous system of ODEs:
du/dt = F(u), u(0) = u0,
where u is a column vector of functions {u1(t), …, un(t)}. The updating formulas for the one-parameter family of one-stage Rosenbrock schemes [8] have the form
û = u + τRe k, (E – ατF_u)k = F(u). (5)
Here, u is the solution at a current time level, û is the solution at the next time level, τ is the time step, E is an identity matrix, F_u ≡ ∂F/∂u is the Jacobi matrix, and α is a numerical parameter determining the properties of the scheme. When α ≠ 0, scheme (5) is implicit. However, the transition from one time level to the next one takes a finite number of operations known in advance, as in explicit schemes. The vector k in (5) is determined from a system of linear equations with the matrix E – ατF_u. The matrix is well conditioned, and the system can be solved by direct methods with high precision. The transition to a new time level does not require iteration. Due to this advantage, such schemes are called explicit–implicit or semiexplicit.

The Rosenbrock method can easily be extended to implicit systems of differential equations that are unsolved for the derivatives:
M du/dt = F(u), u(0) = u0, (6)
where M is a constant nonsingular matrix. In this case, the one-stage Rosenbrock scheme has the form
û = u + τRe k, (M – ατF_u)k = F(u). (7)

The properties of schemes (5) and (7) are determined by the value of α. When α = 0, we have an explicit scheme. Its error is O(τ). This version of the scheme is hardly applicable to the computation of stiff problems. When α = 1, we obtain the backward Euler scheme (purely implicit). It is L1-stable. The scheme is highly reliable and can be used for very stiff problems. However, the accuracy O(τ) of the scheme is rather low, which prevents its wide application. When α = 0.5, we have the well-known scheme with a half-sum. The scheme has an O(τ²) error and is unconditionally stable. Therefore, it is frequently used in computations. However, the scheme is only A-stable: for a scheme to be L-stable, it must be A-stable in addition to the fact that its stability function vanishes at infinity.

Figure 1 shows the ranges of the complex parameter α in which scheme (5) is A-stable, L-stable, and second-order accurate. The domain of A-stability is determined by the condition Re α ≥ 1/2. The scheme is second-order accurate on the line Re α = 1/2. The stability function vanishes at infinity when α = x + iy belongs to the circle (x – 0.5)² + y² = 0.25. Accordingly, the scheme is L-stable on the right semicircle; moreover, it is L1-stable at all the points of the semicircle, except for the point (0, 0).

At the points α = (1 ± i)/2, the scheme is L2-stable and second-order accurate. The schemes described above are real. However, this family includes a complex scheme with α = (1 + i)/2 (see [8]), which has unique properties [12]: its accuracy is O(τ²) and it is L2-stable and, accordingly, unconditionally stable, which ensures a good qualitative behavior of the numerical solution. In the literature, it is known as the CROS scheme. It is this scheme that was used in the computations described below.

[Fig. 1. Regions of the complex parameter α = x + iy for scheme (5): the A-stability half-plane x ≥ 0.5, the L-stability circle (x – 0.5)² + y² = 0.25, the line x = 0.5 of second-order accuracy, and the CROS scheme at α = (1 + i)/2.]

2.2. The ε-Embedding Method with a One-Stage Rosenbrock Scheme
The autonomous singularly perturbed problem (1) is solved for the derivatives to obtain
dy/dt = f(y, z), dz/dt = ε⁻¹g(y, z). (8)
Next, the one-stage Rosenbrock scheme is applied to (8):
ŷ = y + Re k, ẑ = z + Re l, (9)
where the increments are determined by the linear system
(E – ταf_y)k – ταf_z l = τf,
–ταε⁻¹g_y k + (E – ταε⁻¹g_z)l = τε⁻¹g. (10)
The CROS scheme in (10) corresponds to α = (1 + i)/2.

A differential algebraic problem is derived from (1) by passage to the limit as ε → 0. We write this problem in the semiexplicit form
y' = f(y, z), 0 = g(y, z),
y(0) = y0, z(0) = z0, g(y0, z0) = 0. (11)
Multiplying the lower lines in system (10) by ε and passing to the limit as ε → 0, we obtain formulas for the transition to a new time level in the ε-embedding method with the one-stage Rosenbrock scheme for problem (2): ŷ = y + Re k, ẑ = z + Re l, where
(E – ατf_y)k – ατf_z l = τf,
–ατ(g_y k + g_z l) = τg. (12)

Implicit form. A rather frequent situation is that where the differential and algebraic equations are not separated and the DAE system is stated in implicit form (6) with a singular matrix M: det M = 0. It was shown in [3] that all the results that hold for DAEs in semiexplicit form (11) remain valid for DAE systems in implicit form. In this case, as in the case of purely differential implicit systems, we have to use the Rosenbrock scheme in the form of (7).

Nonautonomy. In practice, the system of DAEs
M du/dt = F (13)
can be autonomous (F = F(u)) or nonautonomous (F = F(u, t)). The ε-embedding method with a CROS scheme suitable for nonautonomous purely differential problems leads to the system
û = u + τRe ζ, (M – ((1 + i)/2)τF_u(u, t))ζ = F(u, t + 0.5τ). (14)
To achieve O(τ²) accuracy for a DE, the time t + 0.5τ must be used on the right-hand side of (14), which ensures the total second order of the scheme. Our numerical experiments suggest that scheme (14) is second-order accurate for DAEs (as in the case of pure DEs) if the algebraic relations in (13) do not depend explicitly on time. However, if they depend explicitly on time, the accuracy of the scheme degrades, in general, to the first order.

Qualitatively, the application of the CROS scheme can be described as follows. We perform one step of a scheme of O(τ²) accuracy for a purely differential system or one Newton iteration for a purely algebraic system. If the differential and algebraic equations split, the time t + τ can be used on the right-hand side of (14) for the algebraic components. However, if the half-integer time t + 0.5τ is used in the algebraic equations, the Newton iterations converge to it rather than to the required time t + τ. This explains the loss-of-accuracy phenomenon for nonautonomous DAEs. In general, the equations do not split, in which case the universal autonomization procedure should be recommended.
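A minimal scalar sketch of the nonautonomous CROS step (14) with M = 1 follows. The test equation u' = λ(u − sin t) + cos t, the stiffness value, and all parameter choices are our hypothetical assumptions for the demonstration, not computations from the paper; the exact solution is u = sin t for u(0) = 0.

```python
import math

# Scalar sketch of the nonautonomous CROS step (14) with M = 1:
#   û = u + τ Re ζ,   (1 - (1+i)/2 · τ F_u) ζ = F(u, t + τ/2).
# Hypothetical stiff test equation: u' = λ(u - sin t) + cos t, exact u = sin t.

LAM = -100.0             # stiffness parameter (assumed value)
ALPHA = (1 + 1j) / 2     # the CROS value of the scheme parameter

def F(u, t):
    return LAM * (u - math.sin(t)) + math.cos(t)

def cros_integrate(u0, t_end, n_steps):
    u, t = u0, 0.0
    tau = t_end / n_steps
    for _ in range(n_steps):
        # one complex linear solve per step; F_u = λ for this scalar problem
        zeta = F(u, t + 0.5 * tau) / (1.0 - ALPHA * tau * LAM)
        u += tau * zeta.real
        t += tau
    return u

err100 = abs(cros_integrate(0.0, 1.0, 100) - math.sin(1.0))
err200 = abs(cros_integrate(0.0, 1.0, 200) - math.sin(1.0))
print(err100, err200)  # the error shrinks under refinement; O(τ²) in the limit
```

Note that the step requires only one linear solve with a complex coefficient and no iteration, and that the right-hand side is evaluated at the half-integer time t + 0.5τ, as prescribed by (14).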

(17) ⎝ 0⎠ ⎝ g ( v 0. For the exact solution. omitting the dependence of the right-hand side and the Jacobi matrix of time. (20) COMPUTATIONAL MATHEMATICS AND MATHEMATICAL PHYSICS Vol. z0) = 0.1. v0 . w1 given by scheme (16). δw τ ( t ) = O ( τ ). with the number of retained terms being sufficient for proving the theorem. The increase in the amount of computations is insignificant. z(t). w0 are the numerical solution at the current time level t and v1. z) and g(y. z ). w 0 ) ⎠ ⎝ g y gz ⎠ ⎝ l ⎠ For CROS. Let the right-hand sides f(y. CONVERGENCE OF THE ε-EMBEDDING METHOD WITH THE ONE-STAGE ROSENBROCK SCHEME The convergence of the ε-embedding method for Rosenbrock schemes with real coefficients was proved in [3]. z) of the equations be twice continuously differentiable with respect to both variables. z'. (17) are denoted by δv τ ( x ) = v 1 – y ( t + τ ). 3 2 Proof. we apply the method by performing a step of τ. w ≈ z. while the resulting system is now autonomous. v'. For it. δw τ ( x ) = w 1 – z ( t + τ ). In the new notation. The differential equation and the initial conditions for the new function are trivial: duJ + 1/dt = 1. we calculate the derivatives y'. and its order and rank increase by one. δv τ ( t ) = O ( τ ). 46 No. v'' = Rek''. z' = – ( g z ) g y f . Local Error Consider an autonomous system of DAEs in semi-implicit form (11). w 1 = w 0 + Rel. uJ + 1(0) = 0. Theorem 1. the alge- braic relations do not depend explicitly on the new time. Fixing t. we introduce the following notation for the numerical solution to system (11): v ≈ y. 8 2006 . Thus. We extend that proof to complex coefficients. one more unknown function uJ + 1 ≡ t is added to (13). and w' with respect to τ at the point τ = 0 and see that they coincide for the exact and numerical solutions. v''. w1 are the numerical solution obtained from (16) with the initial data v0 = y(t) and w0 = z(t). The matrix M is modified in an obvious manner. w' = Rel'. (15) As before. we have α = (1 + i)/2. 
the exact solution is denoted by y(t). and let the initial values y0 and z0 be consistent: g(y0. for the numerical solution. –1 (19) y'' = f y y' + f z z' = f y f – f z ( g z ) g y f . (17) are expanded in Taylor series in powers of τ. we assume that system (14) has been made autonomous. 3. ROSENBROCK SCHEMES WITH COMPLEX COEFFICIENTS 1325 To this end. For the ε-embedding method with the CROS scheme. (16) Here. In what follows. y''. The increments k and l in (16) are determined from the linear algebraic system ⎛ k⎞ ⎛ f ( v 0. (18) where v1. z(t + τ) to problem (11) and the numerical solution v1. the formulas for the transition to a new time level in the ε-embedding method with the CROS scheme as applied to problem (11) are given by v 1 = v 0 + Rek. For convenience. 3. The local truncation errors in scheme (16). For this purpose. w1 are the solution at the new time level t + τ. The exact solution y(t + τ). w 0 ) ⎞ ⎛ f f ⎞⎛ k ⎞ ⎜ ⎟ = τ⎜ ⎟ + ατ ⎜ y z ⎟ ⎜ ⎟. we have y' = f ( y. –1 According to (16). we use method (14). we have v' = Rek'.

∂k/∂w0. we conclude that (19) and (20) coincide for τ = 0.= O ( τ ). –1 –1 ∂w 0 ∂w 0 α α Differentiating (16) and using (23). z0) = 0). ∂w1/∂v0 .+ O ( τ + δ ). 3. the functions f(y.+ O ( τ + δ ) = O ( τ + δ ). We follow the scheme used in [3] for proving the convergence of the ε-embedding method with real Rosenbrock–Wanner schemes. and ∂w1/∂w0. we calculate ∂k/∂v0. Since gz is regular in a neighborhood of the solution. ∂v 0 ∂v 0 ∂w 0 ∂w 0 α =0 COMPUTATIONAL MATHEMATICS AND MATHEMATICAL PHYSICS Vol.= – --. --------. and ∂l/∂w0 and estimate k and l by using (21): 1 –1 k = O ( τ ). we have g z ( y. Suppose that the matrix gz is regular in a neighborhood of the exact solution (y(t). Then the ε-embedding method with the CROS scheme provides second-order convergence: y ( t + Nτ ) – v N = O ( τ ). ∂v1/∂w0. Theorem 2. --------. Using (16).= O ( τ ). Global Error Assume that problem (11) is computed on an N-node grid. all the terms in (18) up to δv τ ( t ) = O ( τ ). Therefore. respectively. we find ∂v1/∂v0. z) are twice continuously differentiable with respect to both vari- ables. (21) The first equation in (21) yields k' τ=0 = f.( g z g ) + O ( τ ). g(y0. and the initial values y0 and z0 are consistent (i. For this purpose. z(t)) to problem (11). z) and g(y. l = – ---g z g + O ( τ ) = O ( τ + δ ). α Differentiating (21) yields ∂k ∂l 1 ∂ --------. z ( t + Nτ ) – w N = O ( τ ). An estimate for the global error accumulated in the course of the computation is given by the following assertion. 8 2006 . we obtain ∂v ∂w ∂v 1 ∂w 1 1 ---------1 = 1 + O ( τ ).= 1 – Re --.2. 2 2 where y(t + Nτ).--------. wN are the exact and numerical solutions at tN = Nτ.1326 AL’SHIN et al. we took into account that l|τ = 0 = 0 for k. z )g ( y. –1 ∂v 0 ∂v α ∂v 0 (23) ∂k ∂l 1 1 --------. Proof. –1 (22) Here. Thus.+ g z g zz g z g + O ( τ ) = – --. ---------1 = O ( 1 ).. ∂l/∂v0 . z ) ≤ δ.= – --. Differentiating the first equation in (21) twice. ------. 
Formula (16) is written componentwise as k = τf + ατ f y k + ατ f z l. The second equation in (21) gives l' = –(gz)–1gyk' = –(gz)–1gyf. --------. Here. –1 k'' τ=0 Taking into account that Re2α = 1 for CROS. δ is independent of τ and can be made arbitrarily small by contracting the neighborhood.e. z(t + Nτ) and vN. the local error in scheme (16). δw τ ( t ) = O ( τ ) 3 2 cancel out. (17) is O(τ3) for the differential component and O(τ2) for a part of the solution to system (11).= O ( τ ). and the time interval on which the solution is sought is N = const. 0 = g + αg y k + αg z l. 46 No. we obtain = 2α f y k' + 2α f z l' = 2α f y f – 2α f z ( g z ) g y f .

L. 1. we obtain v 1 – ṽ 1 ≤ ( 1 + τL ) v 0 – ṽ 0 + τP w 0 – w̃ 0 . n ⎝ N ⎠ ⎝ n ⎠ COMPUTATIONAL MATHEMATICS AND MATHEMATICAL PHYSICS Vol. 46 No. w n ≤ m̂ 21 v 0 + m̂ 22 w 0 . we need several auxiliary assertions. wn) and ( ṽ n . moreover. ROSENBROCK SCHEMES WITH COMPLEX COEFFICIENTS 1327 Here. n n n (26) Indeed. ⎛ m m ⎞ and let all the elements of M = ⎜ 11 12 ⎟ be positive. Using estimates (23) and applying the mean value theorem. The constants Q. (24) w 1 – w̃ 1 ≤ Q v 0 – ṽ 0 + q w 0 – w̃ 0 . n ⎛ ⎞ where m̂ ij are the elements of Mn = ⎜ 1 + τL τP ⎟ . w̃ 0 ) be initial data. where m̂ ij are the elements of Mn. q < 1.+ O ( τ ). and let N steps (Nτ ≤ const) be performed so that (vn. λ 2 = q + -----------------------. and q are independent of the initial data and are positive. N ⎛ ⎞ Proof. Specifically. w̃ n ) (1 ≤ n ≤ N) stay within neighborhood (22) at each step. we took into account α = (1 + i)/2. [17. They are not necessarily consistent but lie in neighborhood (22). Lemma 2. λ 2 ≤ Bq . (25) w n – w̃ n ≤ m̂ 21 v 0 – ṽ 0 + m̂ 22 w 0 – w̃ 0 . contracting neigh- borhood (22). 78]). 8 2006 . there are constants A > 0 and B > 0 such that λ 1 ≤ A. w0) and ( ṽ 0 . Then. …) be sequences of nonnegative numbers satisfying the inequal- ities v n + 1 ≤ m 11 v n + m 12 w n . w̃ 0 ). if Nτ ≤ const (since λ1 = 1 + O(τ)). then λ1 ≤ 1 + Ch.+ O ( τ ). Lemma 1. e. 2 2 q – τL – 1 q – τL – 1 The following estimates are important for the subsequent exposition. v N – ṽ N ≤ C ( v 0 – ṽ 0 + τ w 0 – w̃ 0 ). q < 1. The eigenvalues λ of M are determined from the char- ⎝ Q q ⎠ acteristic equation τPQ τPQ λ 1 = 1 + τL – -----------------------. P. Then vn and wn can be estimated in terms of v0 ⎝ m 21 m 22 ⎠ and w0: v n ≤ m̂ 11 v 0 + m̂ 12 w 0 . Let {vn} and {wn} (n = 0. Let (v0. For the subsequent proof of Theorem 2. p. Using a well-known estimate (see. we obtain const n const n λ 1 ≤ ⎛ 1 + C ------------⎞ ≤ ⎛ 1 + C ------------⎞ ≤ exp ( Cconst ) = A. 
w N – w̃ N ≤ C ( v 0 – ṽ 0 + ( τ + q ) w 0 – w̃ 0 ). The lemma can easily be proved by induction. we can achieve q < 1. We use formula (24) and apply Lemma 1 with the matrix M = ⎜ 1 + τL τP ⎟ : ⎝ Q q ⎠ v n – ṽ n ≤ m̂ 11 v 0 – ṽ 0 + m̂ 12 w 0 – w̃ 0 ..g. which are stated as lemmas. w n + 1 ≤ m 21 v n + m 22 w n . It is necessary to estimate how the error is accumulated over N steps. Consider two pairs of initial data: (v0. w0) and ( ṽ 0 .

q < 1.1328 AL’SHIN et al. 3 N–n 2 By substituting (30) into (28) and (29) and taking into account Nτ ≤ const and ∑ N N–n n=1 q < 1/(1 – q). Thus. n n m̂ 12 = λ 1 [ 1 + O ( τ ) ]O ( τ ) + λ 2 O ( τ ) [ 1 + O ( τ ) ] ≤ C 2 τ ≤ Cτ. The subsequent accumulation of the error over N – n steps is described by Lemma 2. –1 n ⎜ ⎟ ⎝ 0 λ2 n ⎠ where the columns of T are the coordinates of the eigenvectors of M: ⎛ τP ⎞ ⎜ 1 – ----------. we return to the proof of Theorem 2. According to Theorem 1. and the number of steps is N – n. 2. 8 2006 . ||w0 – w̃ 0 || = O(τ2). C5} in (27) and apply estimate (25). the local error of the method. One step in method (16) is made with the initial data v0 = y[t + (n – 1)τ] and w0 = z[t + (n – 1)τ]. we obtain ⎝ O(1) 1 + O(τ) ⎠ m̂ 11 = [ 1 + O ( τ ) ]λ 1 + λ 2 O ( τ ) ≤ C 1 ≤ C. the COMPUTATIONAL MATHEMATICS AND MATHEMATICAL PHYSICS Vol. for Mn. The accumulation of the error is shown in Fig. δwτ(x) = O(τ2). n n (27) m̂ 21 = λ 1 O ( 1 ) [ 1 + O ( τ ) ] + λ 2 O ( 1 ) ≤ C 4 + C 3 q ≤ C. The second estimate is proved in a similar manner. We need to perform N – n steps to obtain vN – n and wN – n and one more step to obtain vN – n + 1 and wN – n + 1. 3 2 (30) w N – n – w N – (n – 1) ≤ C [ O ( τ ) + ( τ + q )O ( τ ) ]. 46 No. we hypothetically apply the numerical method with the initial data v0 = y(t + nτ). wN − n. C2. Now. wN with the initial data v0 = y(t)). in fact. C4 + C3qn. n n n n The assertion of the lemma is obtained if we set C = max{C1. Then the norm of the difference between the numerical solution vN. we have v N – n – v N – ( n – 1 ) ≤ C [ O ( τ ) + τO ( τ ) ]. it is equal to δvτ(x) = O(τ3). z(t + Nτ) after N steps can be estimated by the formulas v N – y ( t + Nτ ) = v N – v N – 1 + v N – 1 – v N – 2 + … + v 1 – y ( t + Nτ ) (28) ≤ v N – v N – 1 + v N – 1 – v N – 2 + … + v 1 – y ( t + Nτ ) . To prove it. The numerical solution obtained after N – n steps is denoted by vN – n. 
w N – z ( t + Nτ ) = w N – w N – 1 + w N – 1 – w N – 2 + … + w 1 – z ( t + Nτ ) (29) ≤ w N – w N – 1 + w N – 1 – w N – 2 + … + w 1 – z ( t + Nτ ) . n n n m̂ 22 = λ 1 O ( 1 )O ( τ ) + λ 2 [ 1 + O ( τ ) ] ≤ C 5 τ + q ≤ Cτ + q . The norms ||vN – n – vN – (n – 1)|| and ||wN – n – wN – (n – 1)|| are considered separately. N–n steps in scheme (16) are performed simultaneously. For the matrix Mn. Then.+ O ( τ ) 1 ⎟ ⎝ 1–q ⎠ ⎛ ⎞ This yields T–1 = ⎜ 1 + O ( τ ) O ( τ ) ⎟ . where ||v0 – ṽ 0 || = O(τ3). To do this. w0 = z(t + nτ) taken for the exact solution. Then.+ o ( τ ) ⎟ T = ⎜ 1–q ⎟ = ⎛⎜ 1 O ( τ ) ⎞⎟ . we have to estimate how the error is accumulated over N steps (τN ≤ const). ⎜ Q ⎟ ⎝ O(1) 1 ⎠ ⎜ ----------. The error at the first step is. we have the representation ⎛ n ⎞ λ 0 T M T = ⎜ 1 ⎟. w0 = z(t) and the exact solution y(t + Nτ).

at the original time level t.+ y''' ( t ) ---. ROSENBROCK SCHEMES WITH COMPLEX COEFFICIENTS 1329 y ||vN – n – vN – (n – 1)|| y(t + nτ) y[t + (n – 1)τ] δvτ = O(τ3) x Fig. After one step in the Rosenbrock scheme. 2 (31) w N – z ( t + Nτ ) ≤ constτ . 2 2+1 (35) Assume that Ck(t) are sufficiently smooth functions.+ C 1 ( t )τ + C '1 ( t )τ + C 2 ( t )τ + o ( τ ). This condition is satisfied starting with a sufficiently small step τ. Evolution of the Error for Differential Problems Consider the Cauchy problem dy/dt = f ( y ). (34) converges with the second order. we have v 0 ( t ) = y ( t ) + C 1 ( t )τ + C 2 ( t )τ + …. the order of the scheme is the same for DAEs and DEs. (32) For the numerical solution v to problem (32). 2 3 3 3 2 6 COMPUTATIONAL MATHEMATICS AND MATHEMATICAL PHYSICS Vol. Thus. global error is estimated as follows: v N – y ( t + Nτ ) ≤ constτ . α = (1 + i)/2 for CROS. Specifically. the formulas for the transition to a new time level in the one- stage Rosenbrock scheme are given by v 1 = v 0 + Rek. this scheme is stiffly accurate. 3. 46 No. The theorems proved above imply that scheme (33). In the nomenclature of [3]. Note that the accumulation of the global error for the differential component y(t) decreases the order the global error by one. while the orders of the global and local errors for the algebraic component z(t) are identical. 2 An important condition for this proof is that the numerical solution lies in neighborhood (22) of the exact solution. y ( 0 ) = y0 . 8 2006 . representation (35) is also valid for v1: v 1 = y ( t + τ ) + C 1 ( t + τ )τ + C 2 ( t + τ )τ + o ( τ ) 2 3 3 τ 2 3 τ (37) = y ( t ) + y' ( t )τ + y'' ( t ) ---. (33) where k is determined from the system k = τf ( v 0 ) + ατ f y ( v 0 )k.3. 2 3 3 (36) On the other hand. (34) Here. Remark 1. Estimate (31) proves that the ε-embedding method for the one-stage Rosenbrock scheme with Reα = 0.5 and Imα ≠ 0 (in particular. 2. for the CROS scheme) provides second-order convergence. 
Theorem 3. The numerical solution v(t) to problem (32) produced by the one-stage Rosenbrock scheme with Reα = 1/2 is related to the exact solution y(t) of the problem by the formula

v(t) = y(t) + C₁(t)τ² + O(τ³),

where C₁(t) is the solution to the Cauchy problem

C₁'(t) − f_y(y(t))C₁(t) = Re(α²) f_y f_y f − (1/6) f_yy f f − (1/6) f_y f_y f,   C₁(0) = 0.   (40)

The right-hand side of Eq. (40) is calculated with the exact solution y(t) to problem (32).

Proof. Assume that the error of the scheme at every time can be represented as a series in powers of τ; i.e., the solution taking into account the leading term of the error is

ỹ(t, τ) = y(t) + C₁(t)τ².   (35)

After n time steps, the numerical solution is v_n; we set ỹ_n = ỹ(nτ, τ). Applying scheme (33) with the initial data v₀ = ỹ_n and performing one step, we have

v₁ = y(t) + C₁(t)τ² + Re k(τ) + o(τ³).   (36)

On the other hand, if expansion (35) is valid, then

ỹ(t + τ, τ) = y(t) + y'(t)τ + [y''(t)/2 + C₁(t)]τ² + [y'''(t)/6 + C₁'(t)]τ³ + O(τ⁴).   (37)

We derive an expression for k(τ) retaining only the terms up to τ³ inclusive:

k(τ) = k(0) + k'(0)τ + k''(0)τ²/2 + k'''(0)τ³/6 + o(τ³).

The derivatives of k(τ) can be evaluated using formula (34), which for the chosen initial data takes the form

k(τ) = τ f[y(t) + C₁(t)τ² + o(τ²)] + ατ f_y[y(t) + C₁(t)τ² + o(τ²)] k(τ).   (38)

This yields

k(0) = 0,   k'(0) = f(y(t)),   k''(0) = 2α f_y(y(t)) k'(0) = 2α f_y(y(t)) f(y(t)),
k'''(0) = 6 f_y(y(t)) C₁(t) + 3α f_y(y(t)) k''(0) = 6 f_y(y(t)) C₁(t) + 6α² f_y f_y f.

A comparison of (36) and (37) shows that all the terms with τ⁰, τ¹, and τ² coincide for Reα = 1/2. Equating the coefficients of τ³, we obtain

y'''(t)/6 + C₁'(t) = f_y(y(t)) C₁ + Re(α²) f_y f_y f(y(t)).

Since

y'''(t) = (d/dt) y'' = (d/dt)(f_y f) = f_yy f f + f_y f_y f,   (39)

we conclude that C₁(t) is the solution to Cauchy problem (40). Note that with α = (1 + i)/2 (the CROS scheme) the right-hand side of (40) considerably simplifies, since Re(α²) = 0.

Next, repeating these manipulations at the point t_n = nτ, we see that the local error at a single step has the order ||v₁ − ỹ_{n+1}|| = O(τ⁴) for all n ≤ N. Using the standard technique described in detail in the proof of Theorem 2, we estimate the accumulation of the global error over N steps: ||ỹ_N − v_N|| ≤ Cτ³ for Nτ ≤ const.

The theorem means that ỹ_n − v_n = O(τ³); i.e., the one-stage Rosenbrock scheme with a complex parameter α (Reα = 1/2, α ≠ 1/2) converges with the second order. This statement holds for all α satisfying Reα = 1/2; the remaining values of α are useless, since they do not ensure the second order of accuracy. Figure 3 shows the exact solution y(t), the solution ỹ(t, τ) = y(t) + C₁(t)τ² taking into account the leading term of the error, and the numerical solution v(t) (circles).

3.4. Evolution of the Error for Differential Algebraic Systems

Consider the Cauchy problem for a differential algebraic system with consistent initial data:

dy/dt = f(y, z),   0 = g(y, z),   y(0) = y₀,   z(0) = z₀,   g(y₀, z₀) = 0.   (41)

To derive the equations describing the evolution of the error, we follow the same line of reasoning as in the case of systems of ordinary differential equations.
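The one-stage scheme with k defined by (34) can be sketched as follows. This is a hypothetical minimal implementation (the function and variable names are ours, not the paper's code); it solves the linear system with the complex coefficient α = (1 + i)/2 and takes the real part of the stage, and a simple linear test exhibits the second-order convergence stated in the theorem.

```python
import numpy as np

ALPHA = (1 + 1j) / 2  # CROS parameter, Re(alpha) = 1/2

def cros_step(y, tau, f, jac):
    # One step: solve (I - alpha*tau*f_y) k = tau*f(y), then y_new = y + Re k
    n = y.size
    A = np.eye(n, dtype=complex) - ALPHA * tau * jac(y)
    k = np.linalg.solve(A, tau * f(y).astype(complex))
    return y + k.real

def integrate(y0, t_end, n_steps, f, jac):
    y = np.array(y0, dtype=float)
    tau = t_end / n_steps
    for _ in range(n_steps):
        y = cros_step(y, tau, f, jac)
    return y

# Linear test y' = -y with exact solution e^{-t}
f = lambda y: -y
jac = lambda y: np.array([[-1.0]])
exact = np.exp(-1.0)
e1 = abs(integrate([1.0], 1.0, 50, f, jac)[0] - exact)
e2 = abs(integrate([1.0], 1.0, 100, f, jac)[0] - exact)
print(e1 / e2)  # ≈ 4: halving the step reduces the error fourfold
```

Halving τ reduces the error by a factor of about four, consistent with O(τ²) convergence.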

Fig. 3. The exact solution y(t), the solution ỹ(t) = y(t) + C₁(t)τ² taking into account the leading term of the error, and the numerical solution; the marked deviations are O(τ³) from y and O(τ⁴) from ỹ.

Assume that the numerical solution v, w and the exact solution y(t), z(t) of problem (41) are related by the expansions

v(t) = y(t) + C₁₁(t)τ² + C₂₁(t)τ³ + …,
w(t) = z(t) + C₁₂(t)τ² + C₂₂(t)τ³ + ….   (42)

To derive equations for C₁₁(t) and C₁₂(t), we apply the numerical method with the initial data v₀ = y(t) + C₁₁(t)τ² and w₀ = z(t) + C₁₂(t)τ²; the transition to a new time level is described by formulas (16) and (17). On the one hand (see Remark 1),

y(t + τ) + C₁₁(t + τ)τ² = y(t) + y'(t)τ + y''(t)τ²/2 + y'''(t)τ³/6 + C₁₁(t)τ² + C₁₁'(t)τ³ + O(τ⁴),
z(t + τ) + C₁₂(t + τ)τ² = z(t) + z'(t)τ + z''(t)τ²/2 + C₁₂(t)τ² + O(τ³).   (43)

On the other hand, performing one step by formulas (16) and (17), we have

v₁ = y(t) + C₁₁(t)τ² + Re k(τ) = y(t) + C₁₁(t)τ² + Re[k(0) + k'(0)τ + k''(0)τ²/2 + k'''(0)τ³/6] + O(τ⁴),
w₁ = z(t) + C₁₂(t)τ² + Re l(τ) = z(t) + C₁₂(t)τ² + Re[l(0) + l'(0)τ + l''(0)τ²/2] + O(τ³).   (44)

For expansions (43) and (44) for the algebraic component z to coincide up to the terms of order τ², it is necessary that

[z''(t) + (g_z)⁻¹ g_y y''(t)] / [2Re(1/α)] = −(g_z)⁻¹ g_y C₁₁ − C₁₂.   (45)

For expansions (43) and (44) for the differential component y to coincide up to the terms of order τ³, it is necessary that

C₁₁'(t) = f_y C₁₁ + f_z C₁₂ − y'''(t)/6 + Reα [f_y y''(t) − f_z (g_z)⁻¹ g_y y''(t)] + f_z [z''(t) + (g_z)⁻¹ g_y y''(t)] / [2Re(1/α)].   (46)

The second equation is conveniently written by rearranging (45):

0 = g_y C₁₁ + g_z C₁₂ + [g_z z''(t) + g_y y''(t)] / [2Re(1/α)].   (47)

Thus, the equations describing the leading terms in the expansion of the error in powers of τ constitute a linear inhomogeneous differential algebraic system. Note that the coefficient C₁₁(t) tends to zero as t → 0, whereas C₁₂(t), although vanishing at t = 0, is determined by Eq. (47) and can be nonzero in any arbitrarily close neighborhood of zero.

Theorem 4. Let v_n, w_n be the numerical solution to problem (41) produced by the Rosenbrock scheme with Reα = 1/2 and α ≠ 1/2 at the point t_n = τn after n steps. This numerical solution is related to the exact solution of problem (41) as follows:

v(t) = y(t) + C₁₁(t)τ² + O(τ³),
w(t) = z(t) + C₁₂(t)τ² + O(τ³),   (48)

where C₁₁(t), C₁₂(t) are the solution to the problem

C₁₁'(t) = f_y C₁₁ + f_z C₁₂ − y'''(t)/6 + Reα [f_y y''(t) − f_z (g_z)⁻¹ g_y y''(t)] + f_z [z''(t) + (g_z)⁻¹ g_y y''(t)] / [2Re(1/α)],
0 = g_y C₁₁ + g_z C₁₂ + [g_z z''(t) + g_y y''(t)] / [2Re(1/α)],   (49)
C₁₁(0) = 0.

Proof. We introduce the notation

ỹ(t, τ) = y(t) + C₁₁(t)τ²,   z̃(t, τ) = z(t) + C₁₂(t)τ²,   ỹ_n = ỹ(nτ, τ),   z̃_n = z̃(nτ, τ)

and apply the one-stage Rosenbrock method with the initial data v₀ = ỹ_n and w₀ = z̃_n beginning at the point t_n = nτ. Repeating nearly word for word the manipulations used to derive formulas (46) and (47), we see that the deviation of the numerical solution from ỹ in the differential component after one step is O(τ⁴), while the deviation from z̃ in the algebraic component is O(τ³). Next, using the standard technique described in detail in the proof of Theorem 2, we find that the order of the global error in the differential component decreases by one (||ỹ_N − v_N|| ≤ Cτ³ if Nτ ≤ const), and the order of the global error in the algebraic component remains equal to three (||z̃_N − w_N|| ≤ O(τ³)); this is explained by the fact that the error of the algebraic part is O(τ²) starting with the first step. The theorem means that ỹ_n − v_n = O(τ³) and z̃_n − w_n = O(τ³).

4. COMPUTATIONS OF DAE WITH ERROR CONTROL

4.1. Error Estimation

An algorithm for computation on condensing grids with guaranteed error control was described in [15]. As applied to DAE systems, it can be stated as follows. Assume that the computations are performed on condensing nested grids with the numbers of nodes N, rN, r²N, …, increasing every time by a factor of r, where r is an integer. It is most convenient to use r = 2: the even nodes of every new grid then coincide with the nodes of the preceding grid. Without knowledge of the exact solution, the error R_j(t) in each of the solution components, 1 ≤ j ≤ J, can be estimated by the Richardson formula

R_j(t) = [u_j^(2N)(t) − u_j^(N)(t)] / (2^p − 1).   (50)

Here, u^(N)(t) is the solution computed on the grid with N nodes, u^(2N)(t) is the solution computed on the denser grid with 2N nodes, and p is the order of accuracy of the method. For twice continuously differentiable solutions, formula (50) is asymptotically accurate as N → ∞. The value given by (50) can be taken into account as a correction, thus increasing the order of accuracy by 1; on three grids, calculating the error by formula (50) and applying Richardson's method of iterative refinement, the order can be increased by 2, etc. By successively condensing the grid, a prescribed accuracy can be achieved with a guarantee for a small number of grid nodes. Richardson's iterative refinement is possible if the solution is sufficiently smooth. Unfortunately, this technique is not very popular in applied computations, and its potential remains underestimated.
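Estimate (50) and one refinement step can be sketched on a pair of nested grids. This is a minimal illustration with names of our own choosing; a generic second-order one-step scheme (the trapezoid rule on y' = y) stands in for the integrator, since (50) only requires knowing the order p.

```python
import numpy as np

def trapezoid_solve(n):
    """Integrate y' = y on [0, 1] with the second-order trapezoid rule
    on a uniform grid with n steps; return y(1)."""
    tau, y = 1.0 / n, 1.0
    for _ in range(n):
        y = y * (1 + tau / 2) / (1 - tau / 2)
    return y

p = 2                                  # order of the underlying method
uN, u2N = trapezoid_solve(100), trapezoid_solve(200)
R = (u2N - uN) / (2**p - 1)            # Richardson error estimate, formula (50)
refined = u2N + R                      # the estimate taken into account as a correction
print(abs(u2N - np.e), abs(refined - np.e))  # the correction gains about two orders
```

On three or more nested grids, the same correction can be applied iteratively, raising the order further, as described above.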

The most widespread computational algorithms involve a step size control procedure, and the required accuracy is specified by the tolerance parameter. The a priori error estimate for such algorithms is frequently not even majorant: a software program represents a black box for users, and numerical computations for problems with known exact solutions have repeatedly shown that the actual accuracy may differ from the announced accuracy by several orders of magnitude.

4.2. Numerical Examples

The validity of formula (40) for the leading term of the error is illustrated by examples. We consider the CROS scheme with α = (1 + i)/2, for which the right-hand side of (40) has the simplest form.

Example 1. Consider the problem

dy/dt = √y,   y(0) = y₀ > 0.

Its exact solution y = (t/2 + √y₀)² is a second-degree polynomial, so its third derivative is zero at all points. Taking (39) into account, we see that the right-hand side of (40) vanishes; hence, the solution to problem (40) is C₁(t) ≡ 0, and the expansion of the difference between the numerical and exact solutions in powers of τ begins with τ³. Computations on refining grids with control of the effective order of accuracy show that the CROS scheme for this problem indeed converges with the third rather than the second order.

Example 2. The Cauchy problem for the system of two equations

ẋ = y,   ẏ = −x,   x(0) = 0,   y(0) = a,   (51)

has the exact solution x = a sin t, y = a cos t and describes the motion in the circle x² + y² = a². The vector function C₁(t) in (40) is a column of two scalar functions, C₁(t) = (C₁₁(t), C₁₂(t))ᵀ, and Theorem 3 holds for scheme (33). The Jacobi matrix is constant, and (40) can be written as

Ċ₁₁ = C₁₂ + (a/6) cos t,   Ċ₁₂ = −C₁₁ − (a/6) sin t,   C₁₁(0) = C₁₂(0) = 0.   (52)

It follows from (52) that

C₁₁(t) = (a/6) t cos t,   C₁₂(t) = −(a/6) t sin t.   (53)

At every fixed time, these formulas give the coefficient in the leading term of the error occurring in scheme (33); therefore, formulas (53) show that the error increases linearly with time. The evolution of the leading term of the error is illustrated in Fig. 4: the error R_x in x is plotted on the horizontal axis, and the error R_y in y is plotted on the vertical axis; the points lie on Archimedes' spiral Δ = (R_x² + R_y²)^{1/2} = (a/6) t τ². If the CROS scheme is applied on refining grids and the effective order of accuracy is controlled, then this characteristic is close to 2 everywhere, except for the points t_{l,1} = π(l + 1/2) for the component x and t_{l,2} = πl for the component y. At these points, the leading term of the error vanishes, and the effective order of accuracy of the corresponding solution component for system (51) is higher than the theoretical order and is equal to 3.
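The effective-order control used in Example 1 can be reproduced numerically. The sketch below (our own illustrative code, scalar version of the CROS step) computes the error on two refining grids and prints log₂ of their ratio, which is close to 3 rather than 2, because the leading term C₁(t)τ² vanishes for this problem.

```python
import numpy as np

ALPHA = (1 + 1j) / 2  # CROS parameter

def cros_scalar(f, fy, y0, t_end, n):
    """Scalar one-stage complex Rosenbrock scheme on a uniform grid."""
    tau, y = t_end / n, y0
    for _ in range(n):
        k = tau * f(y) / (1 - ALPHA * tau * fy(y))
        y = y + k.real
    return y

# Example 1: dy/dt = sqrt(y), exact solution y = (t/2 + sqrt(y0))^2
f = lambda y: np.sqrt(y)
fy = lambda y: 0.5 / np.sqrt(y)
y0, T = 1.0, 1.0
exact = (T / 2 + np.sqrt(y0)) ** 2
e = [abs(cros_scalar(f, fy, y0, T, n) - exact) for n in (100, 200)]
print(np.log2(e[0] / e[1]))  # ≈ 3: the effective order exceeds the theoretical one
```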

Fig. 4. Evolution of the leading term of the error for Example 2: the error R_x is plotted along the horizontal axis and R_y along the vertical axis.

Example 3. Consider the nonautonomous problem

du₁/dt = λu₁,   0 = u₁ − u₂ + sin(πt),   u₁(0) = u₂(0) = 1,   (54)

with the exact solution u₁ = u₂ − sin(πt) = e^{λt}. After the stiffness interval is passed (for t > 0.2), the solution exponentially attains a purely oscillatory mode with a period of 2. For λ = −1, the characteristic times of these two processes are comparable with each other. When λ = −10⁴, steady oscillations are achieved very rapidly (in about 10⁻⁴), and the problem is stiff. Such problems are traditionally used in test programs for the integration of stiff systems. In the western literature (see [18, 19]), problem (54) is regarded as being stiff when λ = −10² and superstiff when λ = −10⁴. However, in radio physics and electrical engineering, much higher magnitudes of stiffness (from −10⁸ to −10¹²) can occur.

For the numerical solution of DAEs, we have developed a software program that implements scheme (14) with error control. Below, it is compared with the code RADAU5, which is especially recommended in [3] for DAE systems. RADAU5 implements a fifth-order scheme with a step size algorithm; it is freely available at http://www.unige.ch/~hairer/software.html. In the tests, we used λ = −100 and 0 ≤ t ≤ 10. The tolerance tol = 10⁻⁹ was set in RADAU5, and the step size in method (14) was chosen so that the total amount of computations was the same in both cases.

Figure 5 shows the absolute value of the error versus time; the RADAU5 computations are labeled by R0, and the CROS computations by C0. The sign of the error alternates along each curve: the zeros of the error correspond to the downward cusps, and the envelope of the maxima is the average level of the error. The RADAU5 program produces a small error corresponding to tol only at the initial time; then the error rapidly increases and becomes comparable to the solution itself at t ≈ 2, although the numerical results seem plausible and can be judged as correct by a user. This behavior of the error is explained by the unsuccessful step size procedure used in RADAU5: after the stiffness interval is passed, RADAU5 increases the step size up to unreasonably large values.

The increase in the order of convergence of the scheme at individual points in Examples 2 and 3 can be explained by analyzing the local error without using formula (40). The fact is that the local error at every step contributes to the global error, but these contributions can be of different signs and can cancel out at some points, so that the leading term of the error vanishes. However, a quantitative description of this effect can be obtained only by using (40).

Remark 2. As was noted above, the one-stage Rosenbrock scheme is second-order accurate for any Reα = 1/2. However, the order of accuracy can be increased by taking into account additional information on the system. For example, if we have a linear system, the right-hand side in (40) vanishes at α = 1/2 ± i/√12; in this case, the coefficient of τ² in the expansion of the error vanishes as well, and the scheme remains A-stable and, hence, can be used for stiff problems. Although this value of α is a special case, the corresponding scheme can find application, since linear systems of differential equations frequently arise in radio physics and electrical engineering, for example, in the design of modern radio equipment [4].
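Test (54) can be sketched in the mass-matrix form M du/dt = F(u) of the ε-embedding approach, with the one-stage complex Rosenbrock step applied to the autonomized system. This is our own illustrative variant, not a transcription of scheme (16), (17) from the paper; the names and the third component s = t are ours.

```python
import numpy as np

ALPHA = (1 + 1j) / 2
LAM = -100.0

# Autonomized form of (54): u = (u1, u2, s), s = t, with singular mass matrix M
M = np.diag([1.0, 0.0, 1.0])

def F(u):
    u1, u2, s = u
    return np.array([LAM * u1, u1 - u2 + np.sin(np.pi * s), 1.0])

def J(u):
    u1, u2, s = u
    return np.array([[LAM, 0.0, 0.0],
                     [1.0, -1.0, np.pi * np.cos(np.pi * s)],
                     [0.0, 0.0, 0.0]])

def cros_dae(u0, t_end, n):
    """(M - alpha*tau*J) k = tau*F(u), u_new = u + Re k."""
    u, tau = np.array(u0), t_end / n
    for _ in range(n):
        k = np.linalg.solve(M - ALPHA * tau * J(u), tau * F(u).astype(complex))
        u = u + k.real
    return u

exact_u2 = lambda t: np.exp(LAM * t) + np.sin(np.pi * t)
e = [abs(cros_dae([1.0, 1.0, 0.0], 0.5, n)[1] - exact_u2(0.5)) for n in (200, 400)]
print(e)  # the error in the algebraic component decreases with the step
```

The consistent initial data u₁(0) = u₂(0) = 1 satisfy the constraint exactly, and the error of the algebraic component decreases as the grid is refined.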

Example 4. Before applying the CROS scheme to problem (54), the latter was made autonomous. The error of the CROS scheme for this test rapidly attained 10⁻⁷ and remained at this level to very long times t ~ 100, which is much longer than required in practice. The results were so good that we were able to refine the solution by Richardson's method: one Richardson iteration reduced the error to 10⁻⁹ (which coincided with the RADAU5 result obtained by specially tuning the parameters), and the error was reduced to 10⁻¹¹ after two grid refinements and to 10⁻¹³ after three grid refinements.

RADAU5 contains more than 20 fitting parameters. We managed to choose them manually so that the actual accuracy for problem (54) was comparable to tol, while the running time increased by several times; these results are labeled by RM. However, such a tuning is hardly possible if the exact solution is unknown.

For a higher degree of stiffness, the loss of accuracy in RADAU5 is even more visual. Apparently, the cause of the failure is that RADAU5 uses only an A-stable scheme, which is insufficient for superstiff problems. As was noted above, the method proposed here is applicable to superstiff problems; at present, the CROS scheme is apparently the only scheme possessing such a property. Thus, the CROS scheme with Richardson's iterative refinement gives a very high accuracy even for superstiff DAEs.

Fig. 5. Error in the algebraic component of test (54) for λ = −100. Curve R0 corresponds to RADAU5, curve RM to RADAU5 with manually tuned parameters, and curve C0 to CROS; curves C1, C2, and C3 depict the first, second, and third Richardson refinements.

A comparison of the errors obtained at individual points is not quite representative because of the oscillations of the error. We therefore introduced a more convenient characteristic equivalent to the envelopes of the curves depicted in Fig. 5:

δ = [ (1/t) ∫₀ᵗ Σ_{j=1}^{J} R_j²(t') dt' ]^{1/2},   (56)

where R_j(t) is the error in the jth solution component, 1 ≤ j ≤ J. The value given by (56) was nearly constant everywhere except for a short initial interval t ~ |λ|⁻¹.

Example 5. Consider a more complicated DAE system than (54):

du₁/dt = λu₂,   0 = u₁ − u₂ + sin(πt),   u₁(0) = u₂(0) = 1.   (55)

Its exact solution is

u₁ = u₂ − sin(πt) = e^{λt} + λ[πe^{λt} − πcos(πt) − λsin(πt)]/(λ² + π²).

For nonautonomous problem (55), a third solution component appeared after autonomization, so that j = 1, 2, 3 in formula (56). Figures 6–8 analyze characteristic (56) as a function of the number of grid intervals (i.e., of the step size) for the CROS scheme as applied to problems of various stiffness: λ = −10², −10⁴, and −10⁸.

Let us discuss the results. On a double logarithmic scale, all the plots are close to straight lines; this indicates that the error is a power of N and, according to the theory, each Richardson refinement increases the order of accuracy by one until round-off errors are achieved. For a moderately stiff problem (λ = −100) and a superstiff problem (λ = −10⁸), the slopes show that the CROS scheme is only first-order accurate without autonomization and second-order accurate with autonomization; this improvement of the accuracy visually illustrates the necessity of autonomization. For λ = −10⁸, the error R₃(t) of the third component remained close to the round-off errors in the course of the computations.

For the intermediate stiffness λ = −10⁴, the effective order of accuracy of the CROS scheme achieves its theoretical value only for a rather large number of grid nodes, N ~ 10⁵; apparently, this occurs after the fifth refinement, with the number of grid points N ~ 10⁴. That is why Richardson's iterative refinements are not very effective here: the first refinement improves the absolute value of the error, but the order of accuracy remains lower than its theoretical value.
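The integral characteristic (56) is straightforward to evaluate from sampled component errors by quadrature. The sketch below (our own code, with synthetic oscillating errors standing in for R_j(t)) uses the trapezoid rule; for a pure sin/cos pair of amplitude 10⁻⁷, δ recovers exactly that rms level.

```python
import numpy as np

def delta(t, Rj):
    """Integral error characteristic (56); Rj[j, i] is the error of
    component j at time t[i]."""
    integrand = (Rj ** 2).sum(axis=0)
    integral = np.sum(0.5 * (integrand[1:] + integrand[:-1]) * np.diff(t))
    return np.sqrt(integral / t[-1])

t = np.linspace(0.0, 10.0, 1001)
Rj = np.vstack([1e-7 * np.sin(np.pi * t), 1e-7 * np.cos(np.pi * t)])
print(delta(t, Rj))  # ≈ 1e-7: the rms level of the oscillating error
```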

In this case, the second and subsequent refinements do not improve the accuracy; finer grids and a somewhat greater computation time are required for achieving high accuracy.

Fig. 6. Numerical results for λ = −100: (1) without autonomization; (2) with autonomization; (3) the first Richardson refinement; (4) the second Richardson refinement; etc.

Fig. 7. Numerical results for λ = −10⁴: (1) without autonomization; (2) with autonomization; (3) the first Richardson refinement; (4) the second Richardson refinement; etc.

Example 6. As a test DAE system written in implicit form, a transistor amplifier was computed in [3]. Its scheme is shown in Fig. 9. Here, Ue(t) is the input voltage; Ub = 6 is the operating voltage; Ui(t), i = 1, 2, …, 5, are the electric potentials at the nodes; and U₅(t) is the output voltage. The resistances Ri (i = 0, 1, …, 5) and the capacitances Ci (i = 1, 2, 3) are constant. The currents through the resistors obey Ohm's law I = U/R, the currents through the capacitors satisfy I = C dU/dt, and the current in the transistor is a nonlinear function of the voltage U₂ − U₃. Applying Kirchhoff's law to nodes 1, 2, …, 5 yields the equations

Ue(t)/R₀ − U₁/R₀ + C₁(U₂' − U₁') = 0,
Ub/R₂ − U₂(1/R₁ + 1/R₂) + C₁(U₁' − U₂') − 0.01 f(U₂ − U₃) = 0,
f(U₂ − U₃) − U₃/R₃ − C₂U₃' = 0,   (57)
Ub/R₄ − U₄/R₄ + C₃(U₅' − U₄') − 0.99 f(U₂ − U₃) = 0,
−U₅/R₅ + C₃(U₄' − U₅') = 0.

The constants were set equal to the values given in [20]:

f(U) = 10⁻⁶[exp(U/0.026) − 1],   R₀ = 1000,   R₁ = … = R₅ = 9000,   Ci = i × 10⁻⁶,   i = 1, 2, 3.

The periodic input signal was specified as Ue(t) = 0.4 sin(200πt). The consistent initial data were U₁(0) = 0, U₂(0) = U₃(0) = UbR₁/(R₁ + R₂), U₄(0) = Ub, and U₅(0) = 0. The integration interval was 0 ≤ t ≤ 0.2.
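Assembling (57) in the mass-matrix form M dU/dt = F(t, U) is mechanical; the sketch below (our own names) does so and checks the consistency of the initial data. Because rows (1, 2) and (4, 5) of M are pairwise linearly dependent, F must satisfy the compatibility conditions F₁ + F₂ = 0 and F₄ + F₅ = 0 at t = 0, which indeed hold for the data above.

```python
import numpy as np

Ub = 6.0
R0, Rk = 1000.0, 9000.0                  # R1 = ... = R5 = 9000
C1, C2, C3 = 1e-6, 2e-6, 3e-6

f = lambda U: 1e-6 * (np.exp(U / 0.026) - 1.0)   # transistor characteristic
Ue = lambda t: 0.4 * np.sin(200 * np.pi * t)     # input signal

def F(t, U):
    """Right-hand side with Eqs. (57) rearranged to M dU/dt = F(t, U)."""
    U1, U2, U3, U4, U5 = U
    cur = f(U2 - U3)
    return np.array([(U1 - Ue(t)) / R0,
                     U2 * (2.0 / Rk) - Ub / Rk + 0.01 * cur,
                     U3 / Rk - cur,
                     U4 / Rk - Ub / Rk + 0.99 * cur,
                     U5 / Rk])

# Consistent initial data: U2(0) = U3(0) = Ub*R1/(R1 + R2) = 3 since R1 = R2
U0 = np.array([0.0, Ub / 2, Ub / 2, Ub, 0.0])
r = F(0.0, U0)
print(r[0] + r[1], r[3] + r[4])  # both ≈ 0: the initial data are consistent
```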

Equations (57) have the form of (13) with the matrix

      ( −C₁   C₁    0     0     0 )
      (  C₁  −C₁    0     0     0 )
M =   (  0     0   −C₂    0     0 )
      (  0     0    0   −C₃    C₃ )
      (  0     0    0    C₃   −C₃ )

Fig. 8. Numerical results for λ = −10⁸: (1) without autonomization; (2) with autonomization; (3) the first Richardson refinement; (4) the second Richardson refinement; etc. RADAU5 failed to compute this problem.

Fig. 9. Scheme of the transistor amplifier.

Fig. 10. Input (Ue(t)) and output (U₅(t)) voltages of the transistor amplifier.

Problem (57) was first made autonomous. Then the CROS scheme was used on refining uniform nested grids with N = 100, 200, …, 51200 nodes. Figure 10 shows the input (Ue(t)) and output (U₅(t)) voltages of the transistor amplifier. It can be seen that the amplifier operates. A similar plot illustrating the results of the computation was presented in [3], but the accuracy of the computation cannot be estimated from such a plot.

After the computations, we applied Richardson's iterative refinement. The integral error δ calculated by formula (56) is shown in Fig. 11: curve 0 corresponds to the numerical solution, and curves 1, …, 8 depict successive Richardson refinements. The decrease in the error confirms the second-order convergence of the method. The maximum accuracy, ~10⁻¹¹, was achieved in the computations with N = 100, 200, …, 51200 after eight Richardson iterative refinements. The total number of right-hand-side evaluations was 102300, which required several seconds of CPU time on a PC of moderate performance.

The computational cost can be estimated, for example, by the number of evaluations of the right-hand side of (13). If the prescribed accuracy is 10⁻⁴ (the dashed line in Fig. 11), it is sufficient to cease the grid condensation at N = 3200 and execute the four Richardson refinements possible for this grid. To guarantee the achievement of this accuracy, we needed 6300 right-hand-side evaluations (over all the computations with N = 100, 200, …, 3200).

In [3], transistor amplifier (57) was computed by RADAU5, which is based on a fifth-order accurate implicit Runge–Kutta scheme with step size control. The number of right-hand-side evaluations grows with the prescribed accuracy: for tol = 10⁻⁴, RADAU5 executed 7721 right-hand-side evaluations (more than for CROS with Richardson's iterative refinement), and 141967 function evaluations were required for tol = 10⁻¹⁰. Moreover, it should be kept in mind that, at each stage, the implicit scheme underlying RADAU5 involves a nonlinear system of five equations solved by Newton-type iteration, whereas the CROS scheme requires only the solution of one linear system, i.e., a few arithmetic operations at each grid node. Thus, we conclude that the CROS scheme with Richardson's iterative refinement is less costly than RADAU5 and, in addition, guarantees the prescribed accuracy, whereas the tolerance specified in RADAU5 is not guaranteed in the general case.

Since the exact solution to problem (57) is not known, the error in RADAU5 was estimated by comparison with the CROS results produced by the above-described algorithm with a guaranteed accuracy of 10⁻¹¹. Figure 12 displays the error in RADAU5 for problem (57) as averaged over the five solution components.

Fig. 11. Integral error δ for problem (57) versus the number of grid nodes N: curve 0 is the unrefined numerical solution, and curves 1–8 are successive Richardson refinements; the dashed line marks the accuracy 10⁻⁴.

Fig. 12. Error in RADAU5 for problem (57) versus time for tol = 10⁻⁴ and tol = 10⁻¹⁰.

For tol = 10⁻¹⁰, the error increased with time and remained 100 times higher than tol in the course of the computation. For the moderate value tol = 10⁻⁴, the situation was even worse: the prescribed accuracy was achieved only at the beginning of the computations, and RADAU5 failed to achieve it at a number of points. Note that the stiffness of problem (57) is not high; for problems of higher stiffness (see Example 3), the unreliability of RADAU5 is even more visual. In our view, this can be explained as follows.

Local Richardson extrapolation underlies extrapolation methods and some step size control strategies. In this approach, a step of τ and two steps of τ/2 are taken from the original point t; the local error is estimated from these two computations and is taken into account as a correction, the result being the refined numerical solution. For nonstiff problems, this approach is justified and increases the order of the scheme. For stiff systems, however, even the A-stability of the scheme can be lost. Indeed, assume that the computation is based on an A-stable scheme whose stability function is given by

R(ξ) = (1 + ξ/2)/(1 − ξ/2),   |R(ξ)| ≤ 1 for Reξ < 0.

Two steps of τ/2 instead of a single step of τ correspond to the stability function

R²(ξ/2) = [(1 + ξ/4)/(1 − ξ/4)]²,   |R²(ξ/2)| ≤ 1 for Reξ < 0.

However, the local Richardson extrapolation gives a scheme with the stability function

R̃(ξ) = R²(ξ/2) + [R²(ξ/2) − R(ξ)]/3 = (4/3)[(1 + ξ/4)/(1 − ξ/4)]² − (1/3)(1 + ξ/2)/(1 − ξ/2),

for which

lim_{Reξ → −∞} R̃(ξ) = (1/3)(4 + 1) = 5/3 > 1;

i.e., the extrapolated scheme is not even A-stable. This fact provides additional evidence in favor of global condensation of uniform or quasi-uniform grids, a posteriori error estimation, and Richardson extrapolation, in contrast to step size control. In our approach, the Richardson refinement is executed a posteriori (after the computations) and globally (i.e., it is applied to the numerical results obtained over the entire time interval), so the stiff stability of the scheme is not lost in this refinement (extrapolation), since no new computation is executed. Error control and the Richardson refinement method give an a posteriori asymptotically sharp estimate of the error in the numerical solution. Thus, in contrast to software codes with step size control, the method proposed provides guaranteed accuracy.

5. COMPLEX-VALUED ODES

The CROS scheme can be directly applied to Cauchy problems for systems of complex-valued ODEs:

dz/dt = G(z),   z(0) = z₀.   (58)
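The loss of A-stability under local extrapolation is easy to check numerically. The short sketch below (our own illustrative code, with a trapezoid-type stability function standing in for the A-stable base scheme) evaluates both stability functions deep in the stiff limit.

```python
def R(xi):
    """Stability function of an A-stable second-order scheme (trapezoid-type)."""
    return (1 + xi / 2) / (1 - xi / 2)

def R_extrap(xi):
    """Stability function after local Richardson extrapolation with p = 2."""
    return R(xi / 2) ** 2 + (R(xi / 2) ** 2 - R(xi)) / (2 ** 2 - 1)

xi = -1e6  # Re(xi) -> -inf, the stiff limit
print(abs(R(xi)), abs(R_extrap(xi)))  # ~1 and ~5/3: A-stability is lost
```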

Here, z = {z₁(t), …, z_n(t)} is the unknown complex-valued vector function, and G = {g₁(z), …, g_n(z)} are analytic functions of the complex variables z₁, …, z_n. System (58) can be written for the real and imaginary parts of the z_n to obtain a real system of doubled dimension. However, an important role in the solution of stiff systems is played by a number of parameters, such as the eigenvalues of the Jacobi matrix G_z, the dimension of the system, the smoothness of its solution, etc. Therefore, it is reasonable to develop numerical methods directly for system (58), thus preserving the relatively low dimension.

To solve system (58) numerically, we rewrite CROS in the form

[E − 0.5(1 + i)τG_z]v = G(z, t + 0.5τ),
[E − 0.5(1 − i)τG_z]w = G(z, t + 0.5τ),   (59)
ẑ = z + 0.5τ(v + w).

Here, τ is the time step; z and ẑ are the solutions at the current and the next time levels, respectively; E is the identity matrix; and G_z ≡ ∂G/∂z is the Jacobi matrix. The basic advantages of scheme (59) for system (58) are that it applies to highly stiff systems and is simple to implement (the scheme does not involve iterations); our implementation also includes an error control procedure and Richardson's iterative refinement.

The capabilities of this approach can be illustrated using an example from combustion physics. It was shown in [21] that the dynamics of a flame front can be described by a nonlinear partial differential equation with a pseudodifferential operator:

u_t + u u_x = Λu + ν u_xx,   (60)

where the operator Λ is defined via the Fourier transform with respect to the spatial variable: if

u(t, x) = ∫_{−∞}^{+∞} e^{ikx} û(t, k) dk,   then   Λu(t, x) = ∫_{−∞}^{+∞} e^{ikx} |k| û(t, k) dk.

Equation (60) has an interesting property known as pole decomposition. More specifically, Eq. (60) has solutions of the form

u(t, x) = −2ν Σ_{n=1}^{2N} 1/(x − z_n(t))   (61)

(see [22]). Here, the z_n are the poles of function (61) in the complex plane, which appear in the formula as complex conjugate pairs. The motion of the poles is governed by the law

ż_n = −2ν Σ_{m≠n} 1/(z_n − z_m) − i sgn(Im z_n).   (62)

Fig. 13. Trajectories of the poles z_α(t) in the complex plane (Re z_α(t) is plotted along the horizontal axis and Im z_α(t) along the vertical axis).
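Scheme (59) can be sketched directly for the pole system (62). The code below (our own minimal variant; the sgn term is treated as locally constant when forming the Jacobi matrix) integrates a single complex conjugate pole pair, for which (62) reduces to a vertical drift toward the equilibrium height Im z = ν.

```python
import numpy as np

NU = 0.1  # viscosity parameter in (60)-(62)

def G(z):
    """Right-hand side of the pole equations (62)."""
    out = np.empty(z.size, dtype=complex)
    for a in range(z.size):
        s = sum(1.0 / (z[a] - z[b]) for b in range(z.size) if b != a)
        out[a] = -2 * NU * s - 1j * np.sign(z[a].imag)
    return out

def Gz(z):
    """Jacobi matrix dG/dz (the sgn term is piecewise constant)."""
    J = np.zeros((z.size, z.size), dtype=complex)
    for a in range(z.size):
        for b in range(z.size):
            if b != a:
                d2 = (z[a] - z[b]) ** 2
                J[a, b] = -2 * NU / d2
                J[a, a] += 2 * NU / d2
    return J

def cros_complex_step(z, tau):
    """One step of scheme (59) for the autonomous complex system."""
    E, J, rhs = np.eye(z.size), Gz(z), G(z)
    v = np.linalg.solve(E - 0.5 * (1 + 1j) * tau * J, rhs)
    w = np.linalg.solve(E - 0.5 * (1 - 1j) * tau * J, rhs)
    return z + 0.5 * tau * (v + w)

# One conjugate pole pair: the pair drifts to the height Im z = NU
z = np.array([0.3 + 0.5j, 0.3 - 0.5j])
for _ in range(2000):
    z = cros_complex_step(z, 0.001)
print(z[0].imag)  # ≈ NU
```

The conjugate symmetry of the pair is preserved by the step, since the two linear systems in (59) have complex conjugate coefficients.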

Since sgn(Im z_n) = const and the right-hand sides of Eqs. (62) are analytic functions, each pole z_n remains in that half-plane (upper or lower) where it was located according to the initial data. The pairs of complex conjugate poles experience repulsive forces directed along the imaginary axis. Figure 13 shows the trajectories of the poles. The numerical computations were performed using double-precision complex arithmetic. The tests were conducted on refining grids that allow for a posteriori asymptotically accurate error estimation; they experimentally confirmed the O(τ²) convergence of scheme (59). The numerical results also confirmed the property of the trajectories to be attracted with time to a line parallel to the imaginary axis (see [22]).

ACKNOWLEDGMENTS

The authors are deeply grateful to A. G. Sveshnikov for participating in numerous fruitful discussions of this study. This work was supported by the Russian Foundation for Basic Research (project no. 05-01-08006) and by grants from the President of the Russian Federation (MK-1513, NSh-5772).

REFERENCES

1. B. Van der Pol, "On Relaxation Oscillations," Philos. Mag. 2, 978–992 (1926).
2. A. A. Dorodnitsyn, "Asymptotic Solution of the Van der Pol Equation," Prikl. Mat. Mekh. 11, 313–328 (1947).
3. E. Hairer and G. Wanner, Solving Ordinary Differential Equations II: Stiff and Differential-Algebraic Problems (Springer-Verlag, Berlin, 1996).
4. V. N. Gridin, V. B. Mikhailov, and N. N. Kupriyanov, … [in Russian].
5. H. H. Rosenbrock, "Some General Implicit Processes for the Numerical Solution of Differential Equations," Comput. J. 5, 329–330 (1963).
6. C. W. Gear, Numerical Initial Value Problems in Ordinary Differential Equations (Prentice-Hall, Englewood Cliffs, N.J., 1971).
7. E. A. Novikov, Doctoral Dissertation in Mathematics and Physics (Vychisl. Tsentr Akad. Nauk SSSR, Krasnoyarsk).
8. N. N. Kalitkin, "Numerical Solution Methods for Stiff Equations," Mat. Model. 7 (5), 8–11 (1995).
9. N. N. Kalitkin and L. V. Kuz'mina, "L-Stability of Diagonally Implicit Runge–Kutta Schemes and Rosenbrock Methods," Mat. Model. 6 (11), 128–138 (1994).
10. D. V. Guzhev and N. N. Kalitkin, "Burgers Equation: Test for Numerical Methods," Mat. Model. 7 (4), 99–127 (1995).
11. D. V. Guzhev and N. N. Kalitkin, "Stable Numerical Methods for Superstiff Differential-Algebraic Equations," Mat. Model. 11 (6), 52–81 (1999).
12. E. Yu. Dnestrovskaya, N. N. Kalitkin, and I. V. Ritus, "Solution of Partial Differential Equations by Using Schemes with Complex Coefficients," Mat. Model. 3 (9), 114–127 (1991).
13. E. A. Al'shina, N. N. Kalitkin, and S. L. Panchenko, "Optimal Schemes for Stiff Nonautonomous Systems," Mat. Model. 15 (10), 35–50 (2003).
14. E. A. Al'shina and N. N. Kalitkin, "Optimal Scheme for Package ROS4," Preprint No. 90, IPM RAN (Keldysh Institute of Applied Mathematics, Russian Academy of Sciences, Moscow).
15. N. N. Kalitkin, A. B. Al'shin, E. A. Al'shina, and B. V. Rogov, Calculations on Quasi-uniform Grids (Fizmatlit, Moscow, 2005) [in Russian].
16. V. A. Il'in and E. G. Poznyak, Fundamentals of Mathematical Analysis, Part 1 (Fizmatlit, Moscow, 2002) [in Russian].
17. S. M. Cox and P. C. Matthews, "Exponential Time Differencing for Stiff Systems," J. Comput. Phys. 176, 430–455 (2002).
18. U. M. Ascher, S. J. Ruuth, and R. J. Spiteri, "Implicit-Explicit Runge–Kutta Methods for Time-Dependent Partial Differential Equations," Appl. Numer. Math. 25, 151–167 (1997).
19. M. Roche, "Rosenbrock Methods for Differential Algebraic Equations," Numer. Math. 52, 45–63 (1988).
20. P. Rentrop, M. Roche, and G. Steinebach, "The Application of Rosenbrock–Wanner Type Methods with Stepsize Control in Differential-Algebraic Equations," Numer. Math. 55, 545–563 (1989).
21. G. I. Sivashinsky, "Nonlinear Analysis of Hydrodynamic Instability in Laminar Flames I: Derivation of Basic Equations," Acta Astronaut. 4, 1177–1206 (1977).
22. O. Thual, U. Frisch, and M. Henon, "Application of Pole Decomposition to an Equation Governing the Dynamics of Wrinkled Flame Fronts," J. Phys. (Paris) 46, 1485–1494 (1985).