ISSN 0965-5425, Computational Mathematics and Mathematical Physics, 2006, Vol. 46, No. 8, pp. 1320–1340.
© MAIK “Nauka/Interperiodica” (Russia), 2006.
Original Russian Text © A.B. Al’shin, E.A. Al’shina, N.N. Kalitkin, A.B. Koryagina, 2006, published in Zhurnal Vychislitel’noi Matematiki i Matematicheskoi Fiziki, 2006, Vol. 46, No. 8, pp. 1392–1404.

Rosenbrock Schemes with Complex Coefficients for Stiff
and Differential Algebraic Systems
A. B. Al’shinᵃ, E. A. Al’shinaᵃ, N. N. Kalitkinᵇ, and A. B. Koryaginaᶜ
a Faculty of Physics, Moscow State University, Leninskie gory, Moscow, 119992 Russia
e-mail: alshina@gmx.co.uk
b Institute of Mathematical Modeling, Russian Academy of Sciences,
Miusskaya pl. 4a, Moscow, 125047 Russia
c Moscow State Institute of Electronic Engineering (Technical University), Zelenograd,
Moscow, 103498 Russia

Abstract—Many applied problems are described by differential algebraic systems. Complex Rosenbrock schemes are proposed for the numerical integration of differential algebraic systems by the ε-embedding method. The method is proved to converge quadratically. The scheme is shown to be applicable even to superstiff systems. The method makes it possible to perform computations with a guaranteed accuracy. An equation is derived that describes the leading term of the error in the method as a function of time. An algorithm extending the method to systems of differential equations for complex-valued functions is proposed. Examples of numerical computations are given.
DOI: 10.1134/S0965542506080057
Keywords: systems of stiff differential algebraic equations, Rosenbrock scheme with complex coefficients

1. INTRODUCTION
1.1. Problem
Singularly perturbed problems are a special class of problems that involve a parameter ε multiplying the highest time derivative. When the parameter is small, the underlying differential equation is stiff. When ε → 0, the Cauchy problem for the differential equation becomes a differential algebraic system. Singularly perturbed problems frequently arise in various applications. One of them is associated with fluid dynamics, in which linear boundary value problems are encountered that involve a small parameter ε (viscosity) in such a manner that the differential equation loses the highest derivative when ε → 0. Other applications are related to nonlinear oscillations with large parameters [1, 2] and to chemical kinetics with slow and fast reactions.
The limiting case ε = 0 is known as the reduced system (see [3]). This problem makes sense only when
the initial conditions are consistent, i.e., belong to the solution manifold of the algebraic component of the
system. In practice, differential algebraic equations (DAEs) frequently arise in a pure form (without passage
to the limit with respect to a small parameter). For example, current oscillations in electrical circuits are governed by differential equations (DEs). If a circuit consists of several branches, algebraic relations are
imposed on the currents and potentials at the junctions. Fluid dynamics is described by a system of Euler
differential equations supplemented with algebraic equations of state. The motion of a hinge mechanism is
described by DEs of Newtonian dynamics for each link and by algebraic matching conditions at the joints.
For nearly linear problems, approximate analytical solution methods have been constructed based on the
study of the spectral properties of a problem (see [4]). Two major approaches to the numerical solution of
DAEs are the ε-embedding method and the state space method (see [3]). However, the accuracy achieved with these methods remains unclear: none of the examples presented in the literature gives estimates of the accuracy of the computed results. Below is a brief description of the former method.



1.2. The ε-Embedding Method
Consider the singularly perturbed problem
y' = f(y, z),   y(0) = y0,
εz' = g(y, z),   z(0) = z0,   (1)
where y and z are vectors of generally different dimensions and f(y, z) and g(y, z) are vector functions of the
same dimension that are differentiable a sufficient number of times. The corresponding reduced system is
y' = f(y, z),   y(0) = y0,
0 = g(y, z),   z(0) = z0,   (2)
and its initial data are consistent; i.e., g(y0, z0) = 0. Assume that the Jacobi matrix of the algebraic part of system (2) is invertible; i.e.,

∃ gz⁻¹(y, z)   (3)
in the neighborhood of the solution to (2). Then, by the implicit function theorem, there exists a unique solution z = G(y) that converts the algebraic equation of system (2) into an identity: g(y, G(y)) ≡ 0. Substituting
z = G(y) into the differential part of system (2) gives the so-called equation in the state space:
y' = f(y, G(y)),
which is an ordinary differential equation. If condition (3) is satisfied, (2) is said to be a DAE of index 1.
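As a concrete illustration of the state space method, one can resolve the algebraic equation for z = G(y) by Newton's method at every step and integrate the resulting ODE y' = f(y, G(y)). The sketch below is our own minimal example, not taken from the paper; the right-hand sides f(y, z) = –z and g(y, z) = z³ + z – y are hypothetical choices made so that gz = 3z² + 1 is everywhere invertible, i.e., the index-1 condition (3) holds.

```python
def G(y, z0=0.0, tol=1e-12, maxit=50):
    """Resolve the algebraic equation g(y, z) = z**3 + z - y = 0 for z
    by Newton's method; g_z = 3z**2 + 1 > 0, so the index-1 condition holds."""
    z = z0
    for _ in range(maxit):
        z -= (z**3 + z - y) / (3.0 * z * z + 1.0)
        if abs(z**3 + z - y) < tol:
            return z
    raise RuntimeError("Newton iteration did not converge")

def integrate_state_space(y0, tau, nsteps):
    """Integrate the state-space equation y' = f(y, G(y)) with f(y, z) = -z,
    using the explicit midpoint rule (second-order accurate)."""
    y = y0
    for _ in range(nsteps):
        k1 = -G(y)
        k2 = -G(y + 0.5 * tau * k1)
        y += tau * k2
    return y
```

Consistent initial data here are y0 = 2 and z0 = G(2) = 1; along the computed trajectory the algebraic constraint is maintained to the Newton tolerance.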
In the ε-embedding method, any numerical method is applied to problem (1) and, then, ε = 0 is set in the
resulting formulas. The idea of the method was proposed for backward differentiation formulas (BDF methods) in [5]. The above arguments are only heuristic considerations for the construction of the ε-embedding method. The proof of its convergence is a nontrivial problem for each type of difference scheme. For several implicit schemes with real coefficients, the convergence of the ε-embedding method was proved, for example, in [3].
The loss-of-accuracy phenomenon associated with the ε-embedding method was also described in [3].
Specifically, the order of accuracy for DAEs frequently turns out to be lower than that for pure DEs.
Schemes preserving the same high order of accuracy for DAEs as for DEs were called stiffly accurate in [3].

1.3. Scheme for Stiff Systems
An additional difficulty in applications arises when the underlying system is stiff. Stiff systems are characterized by both fast damping and slowly varying solution components, and the characteristic times of the various processes differ by many orders of magnitude. In practice, stiff systems are almost inevitably encountered in problems involving many processes. As was mentioned above, singularly perturbed problems are stiff. Differential algebraic systems can be treated as problems of infinite stiffness. Therefore, in the ε-embedding method, it is reasonable to use schemes that perform well for purely differential systems of high stiffness.
Difference schemes constructed for stiff systems must satisfy enhanced stability requirements: A-stability, Lp-stability, and other types [3, 6, 7]. Explicit schemes are hardly applicable to stiff problems.
Stiff stability. Beginning in the 1950s, special implicit methods were developed for stiff problems and
a number of additional properties were stated that must be satisfied by the desired schemes. Consider the
Dahlquist problem
du/dt = λu,   0 ≤ t ≤ T,   u(0) = u0,   u_ex(t) = u0 e^(λt).   (4)
When λ ≪ –1, the exact solution to Eq. (4) decays rapidly and monotonically.
For any linear scheme, the transition to the next time level while solving problem (4) has the form û =
R(τλ)u, where R(ξ) is called the growth function or the stability function.
Definition 1. A scheme is said to be A-stable if |R(ξ)| ≤ 1 for Re ξ ≤ 0; i.e., the numerical solution is stable
in the same range of λ as the exact solution to problem (4).
If a scheme is not at least A-stable, it is not applicable to stiff problems.
It is desirable that the stability function also decay rapidly as Reλ → –∞. For this reason, the concept
of Lp-stability is introduced.
Definition 2. A scheme is said to be Lp-stable if it is A-stable and R(ξ) = O(ξ^(–p)) as |ξ| → ∞.
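These definitions are easy to check computationally for the one-stage scheme discussed below. Applied to the Dahlquist problem with real λ, one step of the scheme gives û = u + τRek with (1 – ατλ)k = λu, so the growth function on the real axis is R(ξ) = 1 + Re(ξ/(1 – αξ)), ξ = τλ. The following sketch (our own check, assuming real ξ; the closed form for α = (1 + i)/2 is obtained by direct algebra) verifies that the complex choice of α is L2-stable while the real half-sum choice α = 1/2 is only A-stable:

```python
ALPHA_CROS = (1 + 1j) / 2  # the CROS parameter

def R(xi, alpha):
    """Growth function of the one-stage Rosenbrock scheme on u' = λu (ξ = τλ real):
    û = u + τ Re k with (1 - αξ) k = λ u, hence R(ξ) = 1 + Re(ξ / (1 - αξ))."""
    return 1.0 + (xi / (1.0 - alpha * xi)).real

def R_cros_closed(xi):
    """For α = (1+i)/2 the growth function simplifies to 1/(1 - ξ + ξ²/2):
    it satisfies |R| ≤ 1 for ξ ≤ 0 and decays as O(ξ⁻²), i.e., L2-stability."""
    return 1.0 / (1.0 - xi + 0.5 * xi * xi)
```

For example, R(–1) = 0.4 and R(ξ) → 0 like 2/ξ² as ξ → –∞, whereas for α = 1/2 the growth function (1 + ξ/2)/(1 – ξ/2) tends to –1, so strong harmonics are not damped.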


The stiffer the problem (where the measure of stiffness is |λT|), the more advantageous the Lp-stable schemes with large p. For high-order schemes, a useful generalization is Lpα-stability: |R(ξ)| ≤ 1 for 90° + α ≤ argξ ≤ 270° – α and R(ξ) = O(ξ^(–p)) as |ξ| → ∞.

Multistep methods (see [3]). The coefficients of multistep methods are chosen so that the q-step method is O(τ^q) accurate. They underlie the popular Gear package, and it is these methods that are more often used in practice. However, it can be shown that multistep methods are at most Lpα-stable with small p = 1/q, and they are considerably less reliable than ROS, ROW, and IRK methods.

Extrapolation methods (see [3]). They allow us to hope for the construction of high-order schemes. Unfortunately, A-stability is destroyed by extrapolation, even if a method is based on an A-stable scheme (i.e., |R(ξ)| ≤ 1).

Iterative methods. Popular implicit schemes are implicit Runge–Kutta (IRK) methods. IRK schemes have been considered in numerous works; a comprehensive review of IRK methods suitable for stiff problems can be found in [3]. However, not all IRK schemes are suitable for stiff problems. Every time step of an IRK method involves a system of generally nonlinear equations, which is repeatedly solved by Newton's iteration. Iteration strongly complicates the use of IRK schemes, since the stability issues are supplemented with concerns related to the convergence of the iterative process. For an s-stage IRK method, the minimum number s of arising nonlinear systems corresponds to diagonal implicit Runge–Kutta (DIRK) methods. The ranges of the parameters of DIRK methods in which the scheme is A- and L-stable were examined in [3], where stiffly accurate methods (i.e., methods applicable to DAEs) are also indicated.

Noniterative methods. An alternative approach that overcomes this difficulty is to use Rosenbrock (ROS) and Rosenbrock–Wanner (ROW) methods (see [3, 8–11]). Formally, these schemes are implicit, but they do not involve iteration, and the number of arithmetic operations required for the transition to a new time level is fixed and known in advance (as in explicit schemes). For this reason, such schemes are called explicit–implicit or semiexplicit. Rosenbrock schemes were applied to the integration of DAEs in [16], but only schemes with real coefficients were examined in that study.

Among the one-stage Rosenbrock schemes, we distinguish the complex Rosenbrock scheme (CROS), which possesses unique accuracy and stability properties. For many years, we have been successfully applying it to various problems of high stiffness, including systems of nonlinear partial differential equations; excellent results have been obtained for stiff purely differential systems (see [11–15]), thus confirming its efficiency. However, the scheme remains little known and was not mentioned in the classical monograph [3].

The goal of this work is to construct an efficient numerical integration method for superstiff and differential algebraic systems based on the complex Rosenbrock scheme combined with a posteriori global Richardson extrapolation. We prove the convergence of the method, analyze the behavior of its error, give numerical examples, and compare the method with existing approaches. The method constructed makes it possible to obtain asymptotically accurate a posteriori error estimates and perform computations with guaranteed error control.

2. SINGLE-STAGE COMPLEX ROSENBROCK SCHEME

2.1. Rosenbrock Schemes

Consider the Cauchy problem for an autonomous system of ODEs:

du/dt = F(u),   u(0) = u0,

where u is a column vector of functions {u1(t), …, un(t)}. The updating formulas for the one-parameter family of one-stage Rosenbrock schemes [8] have the form

û = u + τRek,   (E – ατFu)k = F(u).   (5)

Here, u is the solution at the current time level, û is the solution at the next time level, τ is the time step, E is an identity matrix, Fu ≡ ∂F/∂u is the Jacobi matrix, and α is a numerical parameter determining the properties of the scheme.

When α ≠ 0, scheme (5) is implicit. However, the transition from one time level to the next one takes a finite number of operations known in advance, as in explicit schemes. The vector k in (5) is determined from a system of linear equations with the matrix E – ατFu. The matrix is well conditioned, and the system can be solved by direct methods with high precision. The transition to a new time level does not require iteration.
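A minimal Python sketch of scheme (5) follows; the stiff 2×2 diagonal test matrix with eigenvalues –1 and –1000 is our own illustration, not an example from the paper. Each step costs one complex linear solve and no iteration:

```python
import numpy as np

ALPHA = (1 + 1j) / 2  # the CROS parameter

def cros_step(u, tau, F, Fu):
    """One step of scheme (5): solve (E - ατ Fu(u)) k = F(u), then û = u + τ Re k."""
    A = np.eye(len(u)) - ALPHA * tau * Fu(u)   # complex, well-conditioned matrix
    k = np.linalg.solve(A, F(u))
    return u + tau * k.real

# Stiff linear test problem u' = L u with eigenvalues -1 and -1000.
L = np.diag([-1.0, -1000.0])
F = lambda u: L @ u
Fu = lambda u: L

def integrate(u0, tau, nsteps):
    u = np.array(u0, dtype=float)
    for _ in range(nsteps):
        u = cros_step(u, tau, F, Fu)
    return u
```

Halving τ reduces the error in the slow component by about a factor of four (second order), while the fast component is damped essentially to zero even though τ|λ| = 10 ≫ 1.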

The Rosenbrock method can easily be extended to implicit systems of differential equations that are unsolved for the derivatives:

M du/dt = F(u),   u(0) = u0,   (6)

where M is a constant nonsingular matrix. In this case, the one-stage Rosenbrock scheme has the form

û = u + τRek,   (M – ατFu)k = F(u).   (7)

The properties of schemes (5) and (7) are determined by the value of α. When α = 0, we have an explicit scheme. Its error is O(τ). This version of the scheme is hardly applicable to the computation of stiff problems. When α = 1, we obtain the backward Euler scheme (purely implicit). It is L1-stable, which ensures unconditional stability and a good qualitative behavior of the numerical solution; the scheme is highly reliable and can be used for very stiff problems. However, the accuracy O(τ) of the scheme is rather low, which prevents its wide application. When α = 0.5, we have the well-known scheme with a half-sum. The scheme has an O(τ²) error and is unconditionally stable; therefore, it is frequently used in computations. However, it is only A-stable: for the scheme to be L-stable, it must be A-stable and, in addition, its stability function must vanish at infinity.

Figure 1 shows the ranges of the complex parameter α in which scheme (5) is A-stable, L-stable, and O(τ²) accurate. The domain of A-stability is determined by the condition Reα ≥ 1/2. The scheme is second-order accurate on the line Reα = 1/2. The stability function vanishes at infinity when α = x + iy belongs to the circle (x – 0.5)² + y² = 0.25, except for the point (0, 0); accordingly, the scheme is L-stable on the right semicircle, where it is L1-stable, and, at the points α = (1 ± i)/2, it is L2-stable and second-order accurate.

The schemes described above are real. However, this family includes a complex scheme with α = (1 + i)/2 (see [8]), which has unique properties [12]: its accuracy is O(τ²) and it is L2-stable and, accordingly, unconditionally stable. In the literature, it is known as the CROS scheme. It is this scheme that was used in the computations described below.

2.2. The ε-Embedding Method with a One-Stage Rosenbrock Scheme

The autonomous singularly perturbed problem (1) is solved for the derivatives to obtain

d(y, z)ᵀ/dt = (f(y, z), ε⁻¹g(y, z))ᵀ.   (8)

Next, the one-stage Rosenbrock scheme is applied to (8):

(ŷ, ẑ)ᵀ = (y, z)ᵀ + Re(k, l)ᵀ,   (9)

where the increments are determined by the linear system

(E – ατfy)k – ατfz l = τf,
–ατε⁻¹gy k + (E – ατε⁻¹gz)l = τε⁻¹g.   (10)

The CROS scheme in (10) corresponds to α = (1 + i)/2. A differential algebraic problem is derived from (1) by passage to the limit as ε → 0. We write this problem in the semiexplicit form

y' = f(y, z),   0 = g(y, z),
y(0) = y0,   z(0) = z0,   g(y0, z0) = 0.   (11)

Fig. 1. Domains of the complex parameter α = x + iy in which scheme (5) is A-stable, L-stable, and second-order accurate; the CROS point α = (1 + i)/2 lies on the L-stability circle.

Multiplying the lower lines in system (10) by ε and passing to the limit as ε → 0, we obtain the formulas for the transition to a new time level in the ε-embedding method with the one-stage Rosenbrock scheme for problem (2):

ŷ = y + Rek,   ẑ = z + Rel,
(E – ατfy)k – ατfz l = τf,   –ατgy k – ατgz l = τg.   (12)

Implicit form. A rather frequent situation is that where the differential and algebraic equations are not separated and the DAE system is stated in implicit form (6) with a singular matrix M: det M = 0. In this case, we have to use the Rosenbrock scheme in the form of (7), as in the case of purely differential implicit systems. It was shown in [3] that all the results that hold for DAEs in the semiexplicit form (11) remain valid for DAE systems in implicit form.

Nonautonomy. In practice, the system of DAEs

M du/dt = F   (13)

can be autonomous (F = F(u)) or nonautonomous (F = F(u, t)). The ε-embedding method with a CROS scheme suitable for nonautonomous purely differential problems leads to the system

û = u + τReζ,   (M – ((1 + i)/2)τFu(u, t))ζ = F(u, t + 0.5τ).   (14)

To achieve O(τ²) accuracy for a DE, the time t + 0.5τ must be used on the right-hand side of (14), which ensures the total second order of the scheme. Our numerical experiments suggest that scheme (14) is second-order accurate for DAEs (as in the case of pure DEs) if the algebraic relations in (13) do not depend explicitly on time. However, if they do depend explicitly on time, the accuracy of the scheme degrades, in general, to the first order. Qualitatively, the application of the CROS scheme can be described as follows: we perform one step of a scheme of O(τ²) accuracy for a purely differential system or one Newton iteration for a purely algebraic system. If the time t + 0.5τ is used in the algebraic equations, the Newton iterations converge to it rather than to the required time t + τ; this explains the loss-of-accuracy phenomenon for nonautonomous DAEs. If the differential and algebraic equations split, the time t + τ can be used on the right-hand side of (14) for the algebraic components. However, in general, the equations do not split, in which case the universal autonomization procedure should be recommended.
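The limit formulas (12) are easy to exercise on a small semiexplicit example (our own test DAE, not from the paper): y' = –z, 0 = z – y², whose reduced equation y' = –y² has the exact solution y = 1/(1 + t) for y0 = 1, with consistent z0 = 1:

```python
import numpy as np

ALPHA = (1 + 1j) / 2  # the CROS parameter

def eps_embedding_step(y, z, tau):
    """One step of (12) for y' = f(y,z) = -z, 0 = g(y,z) = z - y²:
    (E - ατ fy) k - ατ fz l = τ f,  -ατ gy k - ατ gz l = τ g,
    then ŷ = y + Re k, ẑ = z + Re l."""
    f, g = -z, z - y * y
    fy, fz = 0.0, -1.0
    gy, gz = -2.0 * y, 1.0
    A = np.array([[1.0 - ALPHA * tau * fy, -ALPHA * tau * fz],
                  [     -ALPHA * tau * gy, -ALPHA * tau * gz]])
    k, l = np.linalg.solve(A, np.array([tau * f, tau * g], dtype=complex))
    return y + k.real, z + l.real

def run(tau, T=1.0):
    y, z = 1.0, 1.0          # consistent data: g(1, 1) = 0
    for _ in range(round(T / tau)):
        y, z = eps_embedding_step(y, z, tau)
    return y, z
```

Measured errors against y(1) = 1/2 drop by roughly a factor of four when τ is halved, and the algebraic residual g stays small along the trajectory, consistent with the second-order convergence proved in Section 3.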

In the autonomization procedure, one more unknown function uJ+1 ≡ t is added to (13). The differential equation and the initial conditions for the new function are trivial: duJ+1/dt = 1, uJ+1(0) = 0. The matrix M is modified in an obvious manner, and its order and rank increase by one. The increase in the amount of computations is insignificant. For the autonomized system, the algebraic relations do not depend explicitly on the new time; thus, we can use method (14). In what follows, we assume that system (14) has been made autonomous.

3. CONVERGENCE OF THE ε-EMBEDDING METHOD WITH THE ONE-STAGE ROSENBROCK SCHEME

The convergence of the ε-embedding method for Rosenbrock schemes with real coefficients was proved in [3]. We extend that proof to complex coefficients.

3.1. Local Error

Consider an autonomous system of DAEs in the semiexplicit form (11). Let the right-hand sides f(y, z) and g(y, z) of the equations be twice continuously differentiable with respect to both variables, and let the initial values y0 and z0 be consistent: g(y0, z0) = 0. For convenience, we introduce the following notation for the numerical solution to system (11):

v ≈ y,   w ≈ z.   (15)

In the new notation, the formulas for the transition to a new time level in the ε-embedding method with the CROS scheme as applied to problem (11) are given by

v1 = v0 + Rek,   w1 = w0 + Rel.   (16)

Here, v0, w0 are the numerical solution at the current time level t, and v1, w1 are the solution at the new time level t + τ. The increments k and l in (16) are determined from the linear algebraic system

(k, 0)ᵀ = τ(f(v0, w0), g(v0, w0))ᵀ + ατ(fy k + fz l, gy k + gz l)ᵀ.   (17)

For CROS, we have α = (1 + i)/2.

Theorem 1. The local truncation errors in scheme (16), (17),

δvτ(t) = v1 – y(t + τ),   δwτ(t) = w1 – z(t + τ),   (18)

where v1, w1 are the numerical solution obtained from (16) with the initial data v0 = y(t) and w0 = z(t), satisfy

δvτ(t) = O(τ³),   δwτ(t) = O(τ²).

Proof. The exact solution y(t + τ), z(t + τ) to problem (11) and the numerical solution v1, w1 given by scheme (16), (17) are expanded in Taylor series in powers of τ, with the number of retained terms being sufficient for proving the theorem. Fixing t and omitting the dependence of the right-hand sides and the Jacobi matrix on time, we calculate the derivatives y', y'', z' and v', v'', w' with respect to τ at the point τ = 0 and see that they coincide for the exact and numerical solutions. For the exact solution, we have

y' = f(y, z),   z' = –(gz)⁻¹gy f,   (19)
y'' = fy y' + fz z' = fy f – fz(gz)⁻¹gy f.   (20)

According to (16), for the numerical solution, we have v' = Rek', v'' = Rek'', and w' = Rel'.

System (17) is written componentwise (after dividing the second equation by τ) as

k = τf + ατfy k + ατfz l,   0 = g + αgy k + αgz l.   (21)

The first equation in (21) yields k'|τ=0 = f. The second equation in (21) gives l'|τ=0 = –(gz)⁻¹gy k'|τ=0 = –(gz)⁻¹gy f; here, we took into account that k|τ=0 = l|τ=0 = 0. Differentiating the first equation in (21) twice, we obtain

k''|τ=0 = 2αfy k' + 2αfz l' = 2αfy f – 2αfz(gz)⁻¹gy f.

Taking into account that Re2α = 1 for CROS, we see that the derivatives (19) and (20) of the exact solution coincide with the corresponding derivatives of the numerical solution at τ = 0. Therefore, all the terms in (18) up to the orders indicated in the theorem cancel out, and

δvτ(t) = O(τ³),   δwτ(t) = O(τ²).

Thus, the local error in scheme (16), (17) is O(τ³) for the differential component and O(τ²) for the algebraic component of the solution to system (11).

3.2. Global Error

Assume that problem (11) is computed on an N-node grid and the time interval on which the solution is sought is Nτ ≤ const. An estimate for the global error accumulated in the course of the computation is given by the following assertion.

Theorem 2. Suppose that the matrix gz is regular in a neighborhood of the exact solution (y(t), z(t)) to problem (11), the functions f(y, z) and g(y, z) are twice continuously differentiable with respect to both variables, and the initial values y0 and z0 are consistent (i.e., g(y0, z0) = 0). Then the ε-embedding method with the CROS scheme provides second-order convergence:

||y(t + Nτ) – vN|| = O(τ²),   ||z(t + Nτ) – wN|| = O(τ²),

where y(t + Nτ), z(t + Nτ) and vN, wN are the exact and numerical solutions at tN = Nτ.

Proof. We follow the scheme used in [3] for proving the convergence of the ε-embedding method with real Rosenbrock–Wanner schemes. Since gz is regular in a neighborhood of the solution, we have

||gz⁻¹(y, z)g(y, z)|| ≤ δ   (22)

in this neighborhood. Here, δ is independent of τ and can be made arbitrarily small by contracting the neighborhood. Using (21), we estimate k and l:

k = O(τ),   l = –(1/α)gz⁻¹g + O(τ) = O(τ + δ).

Differentiating (21), we calculate ∂k/∂v0, ∂k/∂w0, ∂l/∂v0, and ∂l/∂w0:

∂k/∂v0 = O(τ),   ∂l/∂v0 = –(1/α)∂(gz⁻¹g)/∂v0 + O(τ),
∂k/∂w0 = O(τ),   ∂l/∂w0 = –1/α + (1/α)gz⁻¹gzz gz⁻¹g + O(τ) = –1/α + O(τ + δ).   (23)

Differentiating (16) and using (23), we obtain

∂v1/∂v0 = 1 + O(τ),   ∂v1/∂w0 = O(τ),   ∂w1/∂v0 = O(1),
∂w1/∂w0 = 1 – Re(1/α) + O(τ + δ) = O(τ + δ),

since Re(1/α) = 1 for α = (1 + i)/2.

For the subsequent proof of Theorem 2, we need several auxiliary assertions, which are stated as lemmas.

Consider two pairs of initial data: (v0, w0) and (ṽ0, w̃0). They are not necessarily consistent but lie in neighborhood (22). Using estimates (23) and applying the mean value theorem, we obtain

||v1 – ṽ1|| ≤ (1 + τL)||v0 – ṽ0|| + τP||w0 – w̃0||,
||w1 – w̃1|| ≤ Q||v0 – ṽ0|| + q||w0 – w̃0||.   (24)

The constants L, P, Q, and q are independent of the initial data and are positive; moreover, by contracting neighborhood (22), we can achieve q < 1. It is necessary to estimate how the error is accumulated over N steps.

Lemma 1. Let {vn} and {wn} (n = 0, 1, 2, …) be sequences of nonnegative numbers satisfying the inequalities

vn+1 ≤ m11 vn + m12 wn,   wn+1 ≤ m21 vn + m22 wn,

and let all the elements of the matrix M = (m11, m12; m21, m22) be positive. Then vn and wn can be estimated in terms of v0 and w0:

vn ≤ m̂11 v0 + m̂12 w0,   wn ≤ m̂21 v0 + m̂22 w0,   (26)

where m̂ij are the elements of Mⁿ. The lemma can easily be proved by induction.

Lemma 2. Let (v0, w0) and (ṽ0, w̃0) be initial data lying in neighborhood (22), and let N steps (Nτ ≤ const) be performed so that (vn, wn) and (ṽn, w̃n) (1 ≤ n ≤ N) stay within neighborhood (22) at each step. Then

||vN – ṽN|| ≤ C(||v0 – ṽ0|| + τ||w0 – w̃0||),
||wN – w̃N|| ≤ C(||v0 – ṽ0|| + (τ + q^N)||w0 – w̃0||),   q < 1.

Proof. We use formula (24) and apply Lemma 1 with the matrix M = (1 + τL, τP; Q, q):

||vn – ṽn|| ≤ m̂11||v0 – ṽ0|| + m̂12||w0 – w̃0||,
||wn – w̃n|| ≤ m̂21||v0 – ṽ0|| + m̂22||w0 – w̃0||,   (25)

where m̂ij are the elements of Mⁿ. The eigenvalues λ of M are determined from the characteristic equation:

λ1 = 1 + τL – τPQ/(q – τL – 1) + O(τ²),   λ2 = q + τPQ/(q – τL – 1) + O(τ²).

Since Nτ ≤ const and λ1 = 1 + O(τ), there are constants A > 0 and B > 0 such that λ1ⁿ ≤ A and λ2ⁿ ≤ Bqⁿ for n ≤ N. Indeed, λ1 ≤ 1 + C·const/N and, using a well-known estimate (see, e.g., [17, p. 78]), we obtain

λ1ⁿ ≤ (1 + C·const/N)ⁿ ≤ (1 + C·const/n)ⁿ ≤ exp(C·const) = A;

the bound for λ2ⁿ is obtained in a similar manner.

For the matrix Mⁿ, we have the representation

T⁻¹MⁿT = diag(λ1ⁿ, λ2ⁿ),

where the columns of T are the coordinates of the eigenvectors of M:

T = (1, –τP/(1 – q) + o(τ); Q/(1 – q) + O(τ), 1) = (1, O(τ); O(1), 1).

This yields T⁻¹ = (1 + O(τ), O(τ); O(1), 1 + O(τ)). Then, for Mⁿ, we obtain

m̂11 = [1 + O(τ)]λ1ⁿ + λ2ⁿO(τ) ≤ C1 ≤ C,
m̂12 = λ1ⁿ[1 + O(τ)]O(τ) + λ2ⁿO(τ)[1 + O(τ)] ≤ C2τ ≤ Cτ,
m̂21 = λ1ⁿO(1)[1 + O(τ)] + λ2ⁿO(1) ≤ C4 + C3qⁿ ≤ C,
m̂22 = λ1ⁿO(1)O(τ) + λ2ⁿ[1 + O(τ)] ≤ C5τ + qⁿ ≤ Cτ + qⁿ.   (27)

The assertion of the lemma is obtained if we set C = max{C1, C2, C3, C4 + C3qⁿ, C5} in (27) and apply estimate (25); the second estimate is proved in a similar manner.

Now, we return to the proof of Theorem 2. We have to estimate how the error is accumulated over N steps (τN ≤ const). To do this, we hypothetically apply the numerical method with the initial data v0 = y(t + nτ), w0 = z(t + nτ) taken from the exact solution; the numerical solution obtained after N – n steps (i.e., at the time t + Nτ) is denoted by v^(N – n), w^(N – n). In particular, v^(N) = vN and v^(0) = y(t + Nτ). Then, the norm of the difference between the numerical solution vN, wN (obtained with the initial data v0 = y(t), w0 = z(t)) and the exact solution y(t + Nτ), z(t + Nτ) after N steps can be estimated by the formulas

vN – y(t + Nτ) = (v^(N) – v^(N – 1)) + (v^(N – 1) – v^(N – 2)) + … + (v^(1) – y(t + Nτ)),
||vN – y(t + Nτ)|| ≤ ||v^(N) – v^(N – 1)|| + ||v^(N – 1) – v^(N – 2)|| + … + ||v^(1) – y(t + Nτ)||,   (28)

wN – z(t + Nτ) = (w^(N) – w^(N – 1)) + (w^(N – 1) – w^(N – 2)) + … + (w^(1) – z(t + Nτ)),
||wN – z(t + Nτ)|| ≤ ||w^(N) – w^(N – 1)|| + ||w^(N – 1) – w^(N – 2)|| + … + ||w^(1) – z(t + Nτ)||.   (29)

The norms ||v^(N – n) – v^(N – (n – 1))|| and ||w^(N – n) – w^(N – (n – 1))|| are considered separately. One step in method (16) is made at the original time level with the initial data v0 = y[t + (n – 1)τ] and w0 = z[t + (n – 1)τ]. According to Theorem 1, the error after this step is, in fact, the local error of the method: δvτ = O(τ³), δwτ = O(τ²). The subsequent accumulation of the error over the remaining N – n steps is described by Lemma 2, where ||v0 – ṽ0|| = O(τ³), ||w0 – w̃0|| = O(τ²), and the number of steps is N – n. Then, we have

||v^(N – n) – v^(N – (n – 1))|| ≤ C[O(τ³) + τO(τ²)],
||w^(N – n) – w^(N – (n – 1))|| ≤ C[O(τ³) + (τ + q^(N – n))O(τ²)].   (30)

The accumulation of the error is shown in Fig. 2. By substituting (30) into (28) and (29) and taking into account Nτ ≤ const and Σ q^(N – n) < 1/(1 – q), we arrive at the global error estimate (31).

Fig. 2. Accumulation of the error: one step from the exact initial data y[t + (n – 1)τ] produces the local error δvτ = O(τ³); its subsequent propagation yields the norms ||v^(N – n) – v^(N – (n – 1))||.

Thus, the global error is estimated as follows:

||vN – y(t + Nτ)|| ≤ const·τ²,   ||wN – z(t + Nτ)|| ≤ const·τ².   (31)

An important condition for this proof is that the numerical solution lies in neighborhood (22) of the exact solution. This condition is satisfied starting with a sufficiently small step τ. Estimate (31) proves that the ε-embedding method for the one-stage Rosenbrock scheme with Reα = 0.5 and Imα ≠ 0 (in particular, for the CROS scheme) provides second-order convergence.

Remark 1. Note that the accumulation of the global error for the differential component y(t) decreases the order of the global error by one relative to the local error, while the orders of the global and local errors for the algebraic component z(t) are identical. Thus, the order of the scheme is the same for DAEs and DEs. In the nomenclature of [3], this scheme is stiffly accurate.

3.3. Evolution of the Error for Differential Problems

Consider the Cauchy problem

dy/dt = f(y),   y(0) = y0.   (32)

For the numerical solution v to problem (32), the formulas for the transition to a new time level in the one-stage Rosenbrock scheme are given by

v1 = v0 + Rek,   (33)

where k is determined from the system

k = τf(v0) + ατfy(v0)k.   (34)

Here, α = (1 + i)/2 for CROS. The theorems proved above imply that scheme (33), (34) converges with the second order.

Assume that the error of the scheme at every time can be represented as a series in powers of τ and that the coefficients Ck(t) are sufficiently smooth functions:

v(t) = y(t) + C1(t)τ² + O(τ³).   (35)

At the current time level, we have

v0(t) = y(t) + C1(t)τ² + C2(t)τ³ + ….   (36)

After one step in the Rosenbrock scheme, we have

v1 = y(t) + C1(t)τ² + C2(t)τ³ + Rek(τ) + o(τ³).

On the other hand, representation (35) is also valid for v1:

v1 = y(t + τ) + C1(t + τ)τ² + C2(t + τ)τ³ + o(τ³)
   = y(t) + y'(t)τ + y''(t)τ²/2 + y'''(t)τ³/6 + C1(t)τ² + C1'(t)τ³ + C2(t)τ³ + o(τ³).   (37)
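Before comparing these expansions term by term, note a practical use of such τ-power expansions: because the error behaves as C1τ² + O(τ³), two runs with steps τ and τ/2 give an asymptotically sharp a posteriori error estimate by Richardson's rule, e(τ/2) ≈ (v_τ – v_{τ/2})/(2² – 1). A sketch on the scalar problem y' = –y², y(0) = 1 (our own test choice; exact solution y = 1/(1 + t)):

```python
ALPHA = (1 + 1j) / 2  # the CROS parameter

def cros_solve(tau, T=1.0):
    """Integrate y' = f(y) = -y² by scheme (33), (34):
    k = τ f(v) / (1 - ατ f'(v)),  v ← v + Re k."""
    v = 1.0
    for _ in range(round(T / tau)):
        k = tau * (-v * v) / (1.0 - ALPHA * tau * (-2.0 * v))
        v += k.real
    return v

tau = 0.02
v_coarse = cros_solve(tau)
v_fine = cros_solve(tau / 2)
error_estimate = (v_coarse - v_fine) / (2**2 - 1)   # estimates v_fine - y(T)
```

Comparing with the known exact value y(1) = 1/2 shows that the estimate reproduces the actual error of the fine run up to a small relative deviation of order τ.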

z 0 ) = 0.( f y f ) = -----. (39) dt dt dy we conclude that. and the numerical solution is vn. Proof. where C1(τ) is the solution to problem (40). we estimate the accumulation of the global error over N steps: || ỹ N – vN|| ≤ Cτ3. z )0. τ) = y(t) + C1(t)τ2 . 0 = g(y. τ) = y(t) + C1(t)τ2 . Since d d d y''' ( t ) = ----. The numerical solution v(t) to problem (32) produced by the one-stage Rosenbrock scheme with Reα = 1/2 is related to the exact solution y(t) of the problem by the formula v(t) = y(t) + C1(t)τ2 + O(τ3). g ( y 0. since they do not ensure the second order of accuracy. Theorem 3. (34) with the initial data v0 = ỹ (nτ. ỹ n = ỹ (t + nτ.1330 AL’SHIN et al. we assume that the error COMPUTATIONAL MATHEMATICS AND MATHEMATICAL PHYSICS Vol. τ). we see that the local error at a single step has the order ||v1 – ỹ n + 1 || = O(τ4) ∀n ≤ N. 46 No. 2 A comparison of (36) and (37) shows that all the terms with τ0. We derive an expression for k(τ) retaining only the terms up to τ3 inclusive: τ τ 2 3 k ( τ ) = k ( 0 ) + k' ( 0 )τ + k'' ( 0 ) ---. z). (40) is calculated with the exact solution y(t) to problem (32). Applying scheme (33).( f y f ) f = f yy ff + f y f y f . 2 (38) 6 Note that (38) with α = (1 + i)/2 (the CROS scheme) considerably simplifies.4. 2 6 6 (40) C 2 ( 0 ) = 0. k' ( 0 ) = f ( y ( t ) ). For this reason.y'' = ----. This statement holds for all α satisfying Reα = 1/2. Nτ ≤ const. 2 3 2 3 which yields k ( 0 ) = 0. the remaining values of α are useless. 3 2 6 The derivatives of k(τ) can be evaluated using formula (34): k ( τ ) = τf [ y ( t ) + C 1 ( t )τ + o ( τ ) ] + ατ f y [ y ( t ) + C 1 ( t )τ + o ( τ ) ]k ( τ ).f yy ff – --. τ. y ( 0 ) = y 0 . since Re(α2) = 0. 8 2006 .f y f y f . if expansion (35) is valid. the solution taking into account the leading term of the error ỹ (t. The theorem means that ỹ n – vn = O(τ3). and the numerical solution v(t) (circles). 3. and τ2 coincide for Reα = 1/2. Next. 
the one-stage Rosenbrock scheme with a com- plex parameter α (Reα = 1/2. Equating the coefficients of τ3 . α ≠ 1/2) converges with the second order.+ k''' ( 0 ) ---. Figure 3 shows the exact solution y(t). C1(t) satisfies the Cauchy problem 1 1 C '1 ( t ) – f y ( y ( t ) )C 1 ( t ) = Re ( α ) f y f y f – --.y''' ( t ) + C '1 ( t ) = f y ( y ( t ) )C 1 + Re ( α ) f y f y f ( y ( t ) ). The right-hand side of Eq. Evolution of the Error for Differential Algebraic Systems Consider the Cauchy problem for a differential algebraic system with consistent initial data: dy/dt = f ( y. (41) It was shown above that. we obtain 1 --. First. k'' ( 0 ) = 2α f y ( y ( t ) )k' ( 0 ) = 2α f y ( y ( t ) ) f ( y ( t ) ). for differential algebraic systems.+ o ( τ ). k''' ( 0 ) = 6 f y ( y ( t ) )C 1 ( t ) + 3α f y ( y ( t ) )k'' ( 0 ) = 6 f y ( y ( t ) )C 1 ( t ) + 6α f y ( y ( t ) ) f y f . z ( 0 ) = z0 . After n time steps. τ) = ỹ n and repeating nearly word for word the manipulations used to derive formula (40). We introduce the notation ỹ (t. using the standard technique described in detail in the proof of Theorem 2. we follow the same line of reasoning as in the case of systems of ordinary differential equations.
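For a scalar equation, the one-stage scheme reduces to a single complex division per step. The sketch below is a minimal illustration, not the authors' code: the model problem y' = −y and the grid sizes are chosen here only for demonstration. It applies scheme (33) in the form (1 − ατ f_y(y_n)) k = f(y_n), y_{n+1} = y_n + τ Re k and checks the second order of convergence stated in Theorem 3.

```python
import math

# One-stage Rosenbrock scheme with a complex coefficient (sketch of
# scheme (33)): solve (1 - alpha*tau*f_y(y_n)) k = f(y_n) in complex
# arithmetic and advance with the real part, y_{n+1} = y_n + tau*Re k.
ALPHA = (1 + 1j) / 2  # CROS choice: Re(alpha) = 1/2, Re(alpha^2) = 0

def cros_integrate(f, f_y, y0, T, n):
    """Integrate dy/dt = f(y) on [0, T] with n steps of the CROS scheme."""
    tau, y = T / n, y0
    for _ in range(n):
        k = f(y) / (1 - ALPHA * tau * f_y(y))  # scalar linear "solve"
        y += tau * k.real
    return y

# Model problem (for illustration only): y' = -y, y(0) = 1, exact y = e^{-t}.
errs = [abs(cros_integrate(lambda y: -y, lambda y: -1.0, 1.0, 1.0, n)
            - math.exp(-1.0)) for n in (50, 100, 200)]
p_eff = math.log2(errs[0] / errs[1])  # effective order; approaches 2
```

Halving the step reduces the error roughly fourfold, so the effective order log2(errs[0]/errs[1]) is close to the theoretical value 2.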

Fig. 3. The exact solution y(t), the curve ỹ(t, τ) = y(t) + C_1(t)τ^2, and the numerical solution v(t); the deviations of orders O(τ^3) and O(τ^4) are indicated.

The errors of the numerical solutions v and w to problem (41) are assumed to be expandable in series in powers of τ:

v(t) = y(t) + C_11(t)τ^2 + C_21(t)τ^3 + …,
w(t) = z(t) + C_12(t)τ^2 + C_22(t)τ^3 + ….   (42)

We apply the numerical method with the initial data at the points v_0 = y(t) + C_11(t)τ^2 and w_0 = z(t) + C_12(t)τ^2 and assume that the local errors y(t + τ) + C_11(t + τ)τ^2 − v_1 and z(t + τ) + C_12(t + τ)τ^2 − w_1 are O(τ^4) and O(τ^3), respectively (see Remark 1). The Taylor expansions of the shifted solutions are

y(t + τ) + C_11(t + τ)τ^2 = y(t) + y'(t)τ + (1/2)y''(t)τ^2 + (1/6)y'''(t)τ^3 + C_11(t)τ^2 + C_11'(t)τ^3 + O(τ^4),
z(t + τ) + C_12(t + τ)τ^2 = z(t) + z'(t)τ + (1/2)z''(t)τ^2 + C_12(t)τ^2 + O(τ^3).   (43)

For the numerical solution, performing one step by formulas (16) and (17), we have

v_1 = y(t) + C_11(t)τ^2 + Re k(τ) = y(t) + C_11(t)τ^2 + Re(k(0) + k'(0)τ + (1/2)k''(0)τ^2 + (1/6)k'''(0)τ^3) + O(τ^4),
w_1 = z(t) + C_12(t)τ^2 + Re l(τ) = z(t) + C_12(t)τ^2 + Re(l(0) + l'(0)τ + (1/2)l''(0)τ^2) + O(τ^3).   (44)

For expansions (43) and (44) for the algebraic component z to coincide up to the terms of order τ^2, it is necessary that

(z''(t) + (g_z)^{-1} g_y y''(t)) / (2 Re(1/α)) = −(g_z)^{-1} g_y C_11 − C_12.   (45)

For expansions (43) and (44) for the differential component y to coincide up to the terms of order τ^3, it is necessary that

C_11'(t) = f_y C_11 + f_z C_12 − (1/6) y'''(t) + Re(α^2)[f_y y''(t) − f_z (g_z)^{-1} g_y y''(t)] + (1/(2 Re(1/α))) f_z (z''(t) + (g_z)^{-1} g_y y''(t)).   (46)

The second equation is conveniently written by rearranging (45):

0 = g_y C_11 + g_z C_12 + (g_z z''(t) + g_y y''(t)) / (2 Re(1/α)).   (47)

Thus, the equations describing the leading terms in the expansion of the error in powers of τ constitute a linear inhomogeneous differential algebraic system. Note that, since the local error in the differential part is O(τ^3), the coefficient C_11(t) tends to zero as t → 0. The coefficient C_12, although vanishing at t = 0, is determined by Eq. (47) and can be nonzero in any arbitrarily close neighborhood of zero.
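Formulas (16) and (17) are not reproduced in this excerpt; the sketch below shows the standard ε-embedding construction on which such schemes are based: the one-stage Rosenbrock step is applied to the singularly perturbed system with mass matrix M = diag(1, ε), and the limit ε → 0 yields the complex linear system (M − ατ J)K = τF with M = diag(1, 0). The toy index-1 problem y' = −z, 0 = z − y and all names here are illustrative assumptions, not the paper's examples.

```python
import numpy as np

ALPHA = (1 + 1j) / 2  # CROS coefficient

def dae_cros_step(y, z, tau, f, g, jac):
    """One eps-embedding Rosenbrock step for scalar y, z;
    jac returns the Jacobian entries (f_y, f_z, g_y, g_z)."""
    fy, fz, gy, gz = jac(y, z)
    M = np.array([[1.0, 0.0], [0.0, 0.0]])          # limit eps -> 0
    J = np.array([[fy, fz], [gy, gz]])
    A = M - ALPHA * tau * J                         # complex 2x2 matrix
    rhs = tau * np.array([f(y, z), g(y, z)], dtype=complex)
    K = np.linalg.solve(A, rhs)
    return y + K[0].real, z + K[1].real

# Toy index-1 DAE (for illustration only): y' = -z, 0 = z - y,
# consistent data y(0) = z(0) = 1; the exact solution is y = z = e^{-t}.
def solve(n, T=1.0):
    y = z = 1.0
    tau = T / n
    for _ in range(n):
        y, z = dae_cros_step(y, z, tau,
                             f=lambda y, z: -z,
                             g=lambda y, z: z - y,
                             jac=lambda y, z: (0.0, -1.0, -1.0, 1.0))
    return y, z

errs = [abs(solve(n)[0] - np.exp(-1.0)) for n in (50, 100)]
p_eff = np.log2(errs[0] / errs[1])  # differential component: order close to 2
```

Note that the second row of the linear system, −ατ(g_y k + g_z l) = τ g, enforces the algebraic constraint at every step, so no drift from the manifold g = 0 accumulates.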

Theorem 4. The numerical solution produced by the Rosenbrock scheme with Re α = 1/2 and α ≠ 1/2 is related to the exact solution of problem (41) as follows:

v(t) = y(t) + C_11(t)τ^2 + O(τ^3),
w(t) = z(t) + C_12(t)τ^2 + O(τ^3),   (48)

where C_11(t), C_12(t) are the solution to the problem

C_11'(t) = f_y C_11 + f_z C_12 − (1/6) y'''(t) + Re(α^2)[f_y y''(t) − f_z (g_z)^{-1} g_y y''(t)] + (1/(2 Re(1/α))) f_z (z''(t) + (g_z)^{-1} g_y y''(t)),
0 = g_y C_11 + g_z C_12 + (g_z z''(t) + g_y y''(t)) / (2 Re(1/α)),
C_11(0) = 0.   (49)

Proof. Let

ỹ(t, τ) = y(t) + C_11(t)τ^2,   z̃(t, τ) = z(t) + C_12(t)τ^2,
ỹ_n = ỹ(nτ, τ),   z̃_n = z̃(nτ, τ),

where C_11(t), C_12(t) are the solution to problem (49), and let v_n, w_n be the numerical solution to problem (41) produced by the Rosenbrock scheme at the point t_n = nτ after n steps. Beginning at the point t_n = nτ, we apply the one-stage Rosenbrock method with the initial data v_0 = ỹ_n ≡ ỹ(nτ, τ) and w_0 = z̃_n ≡ z̃(nτ, τ). Repeating nearly word for word the manipulations used to derive formulas (46) and (47), we see that the deviation of the numerical solution to the differential component from ỹ after one step is O(τ^4), and the deviation of the numerical solution to the algebraic component from z̃ after one step is O(τ^3). Next, using the standard technique described in detail in the proof of Theorem 2, we find that the order of the global error in the differential component decreases by one (||ỹ_N − v_N|| ≤ Cτ^3 if Nτ ≤ const), while the order of the global error in the algebraic component remains equal to three: ||z̃_N − w_N|| = O(τ^3). This is explained by the fact that the error of the algebraic part is O(τ^2) starting with the first step. The theorem means that ỹ_n − v_n = O(τ^3) and z̃_n − w_n = O(τ^3).

4. COMPUTATIONS OF DAE WITH ERROR CONTROL

4.1. Error Estimation

An algorithm for computation on condensing grids with guaranteed error control was described in [15]. As applied to DAE systems, it can be stated as follows. Assume that the computations are performed on condensing nested grids with the number of nodes N, rN, r^2 N, … increasing every time by a factor of r, where r is an integer. It is most convenient to use r = 2; then the even nodes of every new grid coincide with the nodes of the preceding grid. Richardson's iterative refinement is possible if the solution is sufficiently smooth. It is convenient to use these grids for estimating the error R_j(t) in each of the solution components, 1 ≤ j ≤ J. Without knowledge of the exact solution, this can be done by using the Richardson formula

R_j(t) = (u_j^(2N)(t) − u_j^(N)(t)) / (2^p − 1).   (50)

Here, u^(N)(t) is the solution computed on the grid with N nodes, u^(2N)(t) is the solution computed on the denser grid with 2N nodes, and p is the order of accuracy of the method. For twice continuously differentiable solutions, formula (50) is asymptotically accurate as N → ∞. The value given by (50) can be taken into account as a correction, thus increasing the order of accuracy by 1, which is done below. On three grids, calculating the error by formula (50) and applying Richardson's method of iterative refinement, the order can be increased by 2, and so on. By successively condensing the grid, a prescribed accuracy can be achieved with a guarantee for a small number of grid nodes. Unfortunately, this technique is not very popular in applied computations, and its potentials remain underestimated.
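Formula (50) can be sketched as follows. The integrator (an explicit midpoint method of order p = 2) and the model problem y' = −y are stand-ins chosen purely for illustration; any scheme of known order p can be substituted.

```python
import math

# Richardson error estimation on nested grids, formula (50):
#   R_j(t) = (u^(2N)_j(t) - u^(N)_j(t)) / (2^p - 1).
# Stand-in integrator: second-order explicit midpoint method for y' = -y.

def midpoint_solve(n, T=1.0, y0=1.0):
    tau, y = T / n, y0
    for _ in range(n):
        y += tau * -(y + tau / 2 * -y)   # midpoint step for f(y) = -y
    return y

p = 2                               # order of accuracy of the method
uN, u2N = midpoint_solve(100), midpoint_solve(200)
R = (u2N - uN) / (2**p - 1)         # estimated error of u2N, formula (50)
true_err = math.exp(-1.0) - u2N     # known here only because the exact solution is
refined = u2N + R                   # Richardson correction: order raised by 1
```

The estimate R agrees with the true error of the fine-grid solution to within O(τ), and the corrected value u2N + R is accurate to one order higher.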

The most widespread computational algorithms involve a step size control procedure, in which the required accuracy is specified by a tolerance parameter. A software program of this kind represents a black box for users. The a priori error estimate for such algorithms is frequently not even majorant: numerical computations for problems with known exact solutions have repeatedly shown that the actual accuracy of computation may differ from the announced accuracy by several orders of magnitude.

4.2. Numerical Examples

The validity of formula (40) for the leading term of the error is illustrated by examples.

Example 1. Consider the problem

dy/dt = √y,   y(0) = y_0 > 0.

Consider the CROS scheme with α = (1 + i)/2, for which the right-hand side of (40) has the simplest form. Taking into account (39), we see that the right-hand side of (40) vanishes. Therefore, the solution to problem (40) is C_1(t) ≡ 0; i.e., the expansion of the difference between the numerical and exact solutions in powers of τ begins with τ^3. Computations on refining grids with control of the effective order of accuracy show that the CROS scheme for this problem converges with the third rather than the second order. This is completely explained by the fact that the exact solution y = (t/2 + √y_0)^2 is a second-degree polynomial and its third derivative is zero at all points.

Example 2. The Cauchy problem for the system of two equations

ẋ = y,   ẏ = −x,   x(0) = 0,   y(0) = a,   (51)

has the exact solution x = a sin t, y = a cos t and describes the motion in the circle x^2 + y^2 = a^2. The Jacobi matrix is constant, and the vector function C_1(t) in (40) is a column of two scalar functions, C_1(t) = (C_11(t), C_12(t))^T; Theorem 3 holds for scheme (33). Taking into account (39), system (40) can be written as

Ċ_11 = C_12 + (a/6) cos t,
Ċ_12 = −C_11 − (a/6) sin t,   (52)
C_11(0) = C_12(0) = 0.

It follows from (52) that

C_11(t) = (a/6) t cos t,   C_12(t) = −(a/6) t sin t.   (53)

At every fixed time, formulas (53) give the coefficient in the leading term of the error occurring in scheme (33). On the one hand, formulas (53) show that the error increases linearly with time. On the other hand, they explain an increase in the order of convergence of the CROS scheme at some points: if the CROS scheme is applied on refining grids and the effective order of accuracy is controlled, then this characteristic is close to 2 everywhere, except for the points t_{l,1} = π(l + 1/2) for the component x and t_{l,2} = πl for the component y, where the effective order of accuracy of the corresponding solution component for system (51) is higher than the theoretical order and is equal to 3.

The evolution of the leading term of the error is illustrated in Fig. 4. The error R_x in x is plotted on the horizontal axis, and the error R_y in y is plotted on the vertical axis. These points lie on Archimedes' spiral

Δ = (R_x^2 + R_y^2)^{1/2} = (a/6) t τ^2.
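Both observations can be checked numerically. The sketch below is an illustration under the assumption that the scheme has the one-stage form (I − ατ f_u) k = f(u_n), u_{n+1} = u_n + τ Re k; the grid sizes and the value a = 1 are arbitrary choices. It reproduces the effective order 3 of Example 1 and the linear error growth Δ ≈ (a/6) t τ^2 of Example 2.

```python
import numpy as np

ALPHA = (1 + 1j) / 2  # CROS coefficient

def cros_solve(f, jac, u0, T, n):
    """One-stage Rosenbrock (CROS) scheme for a vector system du/dt = f(u):
    solve (I - alpha*tau*f_u) k = f(u_n), then u_{n+1} = u_n + tau*Re k."""
    tau, u = T / n, np.array(u0, dtype=float)
    I = np.eye(len(u))
    for _ in range(n):
        k = np.linalg.solve(I - ALPHA * tau * jac(u), f(u).astype(complex))
        u = u + tau * k.real
    return u

# Example 1: dy/dt = sqrt(y), y(0) = 1; exact y(t) = (t/2 + 1)^2, y(1) = 2.25.
# Here C_1(t) = 0, so the effective order is 3 rather than 2.
e = [abs(cros_solve(lambda u: np.sqrt(u),
                    lambda u: np.array([[1 / (2 * np.sqrt(u[0]))]]),
                    [1.0], 1.0, n)[0] - 2.25) for n in (50, 100)]
p_eff = np.log2(e[0] / e[1])

# Example 2: x' = y, y' = -x, x(0) = 0, y(0) = a = 1; exact (sin t, cos t).
# By (53) the error norm grows as Delta = (a/6)*t*tau^2.
A = np.array([[0.0, 1.0], [-1.0, 0.0]])
T, n = 5.0, 400
u = cros_solve(lambda u: A @ u, lambda u: A, [0.0, 1.0], T, n)
delta = np.hypot(u[0] - np.sin(T), u[1] - np.cos(T))
predicted = T / 6 * (T / n) ** 2
ratio = delta / predicted  # close to 1 when the leading error term dominates
```

For Example 1 the effective order comes out near 3, and for Example 2 the measured error norm at t = T agrees with the spiral law (a/6) t τ^2 up to O(τ) corrections.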