Chapter 5: ACCURACY ANALYSIS TOOLS


TABLE OF CONTENTS

5.1.  Concept
      5.1.1.  Forward Error Analysis
      5.1.2.  Difficulties with Global Error
      5.1.3.  Backward Error Analysis
5.2.  Linear Multistep Methods
      5.2.1.  Forward Euler
      5.2.2.  Backward Euler
      5.2.3.  General One-Step Method
      5.2.4.  Can Forward Euler be Made Exact?
5.3.  Runge-Kutta Methods
      5.3.1.  Explicit Runge-Kutta
      5.3.2.  Implicit Midpoint Rule
5.4.  Mathematica Modules
      5.4.1.  FOMoDE by Direct Solve
      5.4.2.  FOMoDE by Recursion
5.5.  Notes and Bibliography
References


This chapter introduces the modified equation method as a tool for accuracy analysis of time discretizations for coupled systems. Because the method is relatively unknown compared to truncation and forward error analysis, it is described in more detail than would normally be the case in a lecture.

5.1. Concept

Unlike stability, accuracy of ODE discretizations is not an easy topic. The effect of instability is highly visual; it jumps out like the robot in a Terminator movie. Assessing accuracy is not so obvious. There are two basic ways to go about the subject: forward or backward.

5.1.1. Forward Error Analysis

Forward analysis relies on the global error, also called the forward error. Given an ODE with unknown state variable u(t) and exact solution u_true, the global error is defined as the difference

    e_G = || u_computed − u_true ||,                                        (5.1)

where ||·|| is any norm appropriate to the occasion. Aside from the choice of norm, the idea is easy to grasp. Unfortunately u_true is unavailable except for test problems or benchmarks. Consequently it is impossible to directly estimate or control (5.1) in general-purpose implementations.

To make progress, the easily understood but uncomputable e_G is replaced by a more obscure but computable measure: the local error or truncation error e_T. This is the difference between the computed u(t) over the interval [t_n, t_n + h] and the locally exact solution of the ODE using the computed solution at t_n as initial condition(s). If e_T is Taylor-series expanded in the stepsize h centered at t = t_n, and the first nonvanishing term is of order h^(p+1), the order of the integration operator is said to be p. If p ≥ 1 the operator is said to be consistent. The local error can be estimated by ODE solvers through well known techniques, which usually involve doing some extra computations per step, and can be used to control h.

Through various behavioral assumptions on the differential system, it is possible to derive bounds on the global error e_G from e_T. For a scalar equation integrated with average stepsize h, the typical form of those bounds is

    e_G ≤ C e^(Lt) h^p,                                                     (5.2)

where L is the Lipschitz constant, C a coefficient, and p the order of the method. Over a finite time interval t = [0, T] the bound (5.2) goes to zero as h → 0 if L < ∞ and p ≥ 1 (i.e., a consistent integration operator). With some effort, estimates such as (5.2) can be extended to systems. This is the end of the forward analysis theory, which was finalized by the mid-1960s [5.40].

5.1.2. Difficulties with Global Error

There is a wide class of application problems where the estimate (5.2) is practically useless:

Long Term Integration. If L > 1, (5.2) grows without bound as t increases. This may result in huge overestimates. In fact it is easy to construct scalar ODE examples where the bound is pessimistic by as many orders of magnitude as desired.

Periodic and Oscillatory Systems. If the integrator has a phase error, as virtually all do, the computed solution of oscillatory and periodic systems may shadow the true one. But since (5.1) only accounts for the error at a given t, it can be grossly pessimistic if the user is not interested in phase errors.

Conserving Invariants. In physical applications the user often is interested in conservation (or accurate prediction) of certain invariants of the response: for example, energy, linear momentum and angular momentum in conservative mechanical systems.


But the global error bounds do not take such special requirements into account.

Forward error analysis in computational linear algebra, which was guilty of similarly unrealistic predictions, disappeared by the late 1960s in favor of backward error analysis, as narrated in Notes and Bibliography. Only recently has a similar process received attention for ODE computations.

5.1.3. Backward Error Analysis

Backward error analysis takes the reverse approach to accuracy. Given the computed solution, it asks: which problem has the method actually solved? In other words, we seek an ODE which, if exactly solved, would reproduce the computed solution. This ODE is called the modified differential equation, often abbreviated to MoDE. The solution of the MoDE that passes through the computed points is called the reproducing solution. The difference between the modified equation and the original one provides an estimate of the error.

As noted, this approach is now routine in computational linear algebra. It agrees with common sense. Application problems involve physical parameters such as mass, damping, stiffness, conductivity, inductance, etc., which are known only approximately. If the modified equation models a nearby problem, with parameters still within the range of uncertainty, it is certainly as good as the original one. This defect correction can be used as the basis for controlling accuracy.

In view of the disadvantages noted for the forward error, why is backward analysis not the standard approach to ODE accuracy? Two reasons can be cited.

Intellectual Inertia. Many ODE solvers have been developed by numerical analysts without the foggiest idea of the practical use of simulations. Lacking application exposure, they tend to stick to the ideas hammered in by their predecessors, and forward analysis has been the establishment way of assessing accuracy ever since Euler invented piecewise linearization.

Technical Difficulties. In linear algebra the modified equation is another algebraic system, with the same configuration as the original one. But a MoDE constructed by applying Taylor series to a delay-difference (DD) equation has infinite order. (Nothing mysterious about this, since DD equations have an infinite number of characteristic roots.) As such, it has an infinite manifold of solutions through each initial condition point. The reproducing solution is included, but so are infinitely many others. The maze is filtered by reduction to finite order. The reduction, however, is not an easy process to master, and in fact constitutes the bulk of the effort.

5.2. Linear Multistep Methods

One way to learn backward error analysis of ODEs is to go through examples for simple scalar equations. As the difference equations become more complex, hand processing rapidly becomes prohibitive. Help from a computer algebra system (CAS) is required. The following examples for Linear Multistep (LMS) methods were actually processed using the Mathematica modules described in §5.4.

5.2.1. Forward Euler

Consider the solution of the scalar test equation u' = λu, u = u(t), u(0) = u_0, by the Forward Euler (FE) method:

    u_{n+1} = u_n + h u'_n = u_n + hλ u_n,

in which a prime denotes derivative with respect to time.¹
¹ This change with respect to the usual superposed dots (Newton's notation) is to allow simpler notation for higher derivatives.


First, relabel t_n and t_{n+1} as generic times t and t + h, respectively:

    u(t + h) − u(t) = hλ u(t).                                              (5.3)

This is the delay-difference form of the modified equation,² usually abbreviated to DDMoDE. It defines a new function u(t), assumed infinitely differentiable, which agrees with the computed FE solutions at the time stations t_n, t_{n+1}, .... Next, expand the left-hand side in a Taylor series centered at t and solve for u'(t). This gives

    u'(t) = λ u(t) − (1/2) h u''(t) − (1/6) h² u'''(t) − (1/24) h³ u''''(t) − ...            (5.4)

This is a modified ODE of infinite order, called IOMoDE for short. Because of the presence of an infinite number of derivatives, it does not provide insight. To make progress, (5.4) is truncated to a derivative order m_d on the right, and then differentiated repeatedly m_d times with respect to t while discarding derivatives higher than m_d. For m_d = 4 we get

    u'(t)    = λ u(t)   − (1/2) h u''(t)  − (1/6) h² u'''(t) − (1/24) h³ u''''(t),
    u''(t)   = λ u'(t)  − (1/2) h u'''(t) − (1/6) h² u''''(t),
    u'''(t)  = λ u''(t) − (1/2) h u''''(t),
    u''''(t) = λ u'''(t).

Collect these as a matrix³ system, moving all derivatives of u to the left hand side:

    [ 1     h/2    h²/6   h³/24 ] [ u'    ]   [ λu ]
    [ −λ    1      h/2    h²/6  ] [ u''   ] = [ 0  ]                        (5.5)
    [ 0     −λ     1      h/2   ] [ u'''  ]   [ 0  ]
    [ 0     0      −λ     1     ] [ u'''' ]   [ 0  ]

Solve for the derivatives and expand in powers of h:

    u' = [ 4λ (6 + 6hλ + h²λ²) / ((2 + hλ)(12 + 12hλ + h²λ²)) ] u
       = λ (1 − (1/2) hλ + (1/3) h²λ² − (1/4) h³λ³ + ...) u.                (5.6)

Only the series for u' is of interest for what follows; those for u'', u''', etc. might be of interest for other investigations. The coefficients of h in the u' series follow a recognizable pattern, which may be confirmed by retaining higher derivatives:

    u' = λ (1 − (1/2) hλ + (1/3) h²λ² − (1/4) h³λ³ + (1/5) h⁴λ⁴ − ...) u
       = [ log(1 + λh) / h ] u.                                             (5.7)

This is a modified equation of finite order, often abbreviated to FOMoDE. The general solution of this first-order equation is

    u(t) = C (1 + λh)^(t/h),                                                (5.8)

which for the initial condition u(0) = u_0 yields C = u_0 and u(t) = u_0 (1 + λh)^(t/h). At time station t_n = nh, u(t_n) = u_0 (1 + λh)^n, which coincides with the value computed by Forward Euler. As t/h → ∞ by making h → 0 while keeping t fixed, (5.8) tends to the solution u(t) = u_0 e^(λt) of the original equation u' = λu. So for this case the backward analysis can be completed fully.
² Note that (5.3) is not a difference equation, because u(t) in u(t + h) − u(t) = hλ u(t) is continuous (in fact infinitely differentiable) and hence defined between time stations.
³ The coefficient matrix of (5.5) is a Toeplitz matrix, since all diagonals contain the same entry.

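For readers who want to reproduce (5.7) themselves, the following call is a sketch added here for illustration; it is not part of the original derivation. It uses the module ModifiedScalarODEbySolve listed in Figure 5.2 and documented in §5.4.1, assumes those modules have been loaded, and uses lam as the symbol for λ.

(* Sketch: reproduce the Forward Euler FOMoDE (5.7) with the module of Figure 5.2.
   Assumes ReplaceIncrement and ModifiedScalarODEbySolve have already been evaluated;
   lam stands for the parameter λ of the test equation. *)
feRes = u[t + h] - u[t] - h*lam*u[t];      (* DDMoDE residual in the canonical form (5.25) *)
feFOMoDE = ModifiedScalarODEbySolve[feRes, u, t, h, {}, 6, 1, 1];
(* Expected (up to the retained order): a one-element list whose entry starts as
   lam*u[t] - (1/2) h lam^2 u[t] + (1/3) h^2 lam^3 u[t] - ...,
   i.e. the expansion of Log[1 + lam*h]/h applied to u[t]. *)
Simplify[feFOMoDE]

The empty htab list reflects the fact that only u(t + h) appears in (5.3), as explained in §5.4.1.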

5.2.2. Backward Euler

Next, u' = λu, u = u(t), u(0) = u_0, is treated by the Backward Euler (BE) method: u_{n+1} = u_n + h u'_{n+1} = u_n + hλ u_{n+1}. The DDMoDE is

    u(t + h) − u(t) = hλ u(t + h).                                          (5.9)

Expanding in Taylor series centered at t yields

    u'(t) + (1/2) h u''(t) + (1/6) h² u'''(t) + ...
        = λ [ u(t) + h u'(t) + (1/2) h² u''(t) + (1/6) h³ u'''(t) + ... ].  (5.10)

Solve for u' to produce the IOMoDE:

    u'(t) (1 − λh) = λ u(t) − (1/2) h u''(t) + (1/6) h² [3λ u''(t) − u'''(t)] + (1/24) h³ [4λ u'''(t) − u''''(t)] + ...

Differentiate this relation repeatedly with respect to t, keeping u-derivatives up to order m_d, to set up a linear system. For m_d = 4 one gets

    [ 1                   h/2          h²/6   h³/24 ] [ u'    ]   [ (λ + hλ² + h²λ³ + h³λ⁴) u ]
    [ −(λ + hλ² + h²λ³)   1            h/2    h²/6  ] [ u''   ] = [ 0 ]                         (5.11)
    [ 0                   −(λ + hλ²)   1      h/2   ] [ u'''  ]   [ 0 ]
    [ 0                   0            −λ     1     ] [ u'''' ]   [ 0 ]

Solving for u' and expanding in h gives

    u' = [ 4λ (6 + 12hλ + 16h²λ² + 17h³λ³ + 11h⁴λ⁴ + 5h⁵λ⁵ + h⁶λ⁶)
           / (24 + 36hλ + 38h²λ² + 31h³λ³ + 16h⁴λ⁴ + 6h⁵λ⁵ + h⁶λ⁶) ] u
       = λ (1 + (1/2) hλ + (1/3) h²λ² + (1/4) h³λ³ + ...) u
       = − [ log(1 − λh) / h ] u                                            (5.12)

as FOMoDE. Its solution with u(0) = u_0 is u(t) = u_0 / (1 − λh)^(t/h). If t is kept fixed while h → 0 this tends to u_0 e^(λt), as expected.

5.2.3. General One-Step Method

The test equation u' = λu, u = u(t), u(0) = u_0, is treated by the general one-step LMS method

    u_{n+1} − u_n = h θ u'_{n+1} + h (1 − θ) u'_n = hλ [ θ u_{n+1} + (1 − θ) u_n ],

where θ is a free parameter. The DDMoDE is

    u(t + h) − u(t) = hλ [ θ u(t + h) + (1 − θ) u(t) ].

Expanding in Taylor series and solving for u' gives the infinite-order MoDE

    u'(t) (1 − θλh) = λ u(t) − (1/2) h u''(t) + (1/6) h² [3θλ u''(t) − u'''(t)] + (1/24) h³ [4θλ u'''(t) − u''''(t)] + ...   (5.13)

Differentiating and retaining derivatives up to order 4:

    [ 1                         h/2             h²/6   h³/24 ] [ u'    ]   [ λ (1 + θλh + θ²λ²h² + θ³λ³h³) u ]
    [ −λ (1 + θλh + θ²λ²h²)     1               h/2    h²/6  ] [ u''   ] = [ 0 ]                                (5.14)
    [ 0                         −λ (1 + θλh)    1      h/2   ] [ u'''  ]   [ 0 ]
    [ 0                         0               −λ     1     ] [ u'''' ]   [ 0 ]

Solving for u' and expanding in series gives

    u' = λ [ 1 + (1/2) hλ (2θ − 1) + (1/3) h²λ² (1 − 3θ + 3θ²) + ... ] u
       = (1/h) log[ (1 + (1 − θ)λh) / (1 − θλh) ] u.                        (5.15)

This is the FOMoDE equation. It includes FE (θ = 0) and BE (θ = 1). The solution with u(0) = u_0 is u(t) = u_0 [ (1 + (1 − θ)λh) / (1 − θλh) ]^(t/h), which tends to u_0 e^(λt) for any θ as h → 0 with t fixed. The principal error term (1/2) hλ (2θ − 1) flags a first-order method unless θ = 1/2. With θ = 1/2, which gives the Trapezoidal Rule (TR), the principal error term is (1/12) h²λ², and the method is of second order accuracy.
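The FOMoDE (5.15) can be checked with the module of Figure 5.2. The following sketch is an illustration added here, not part of the original notes; it assumes the modules of §5.4 are loaded and uses lam and th for λ and θ.

(* Sketch: FOMoDE of the general one-step method, to be compared with (5.15).
   Assumes the modules of Figures 5.2-5.3 have been loaded; lam = λ, th = θ. *)
thRes = u[t + h] - u[t] - h*lam*(th*u[t + h] + (1 - th)*u[t]);
thFOMoDE = ModifiedScalarODEbySolve[thRes, u, t, h, {}, 5, 1, 1];
(* The single returned entry should expand as
   lam*u[t] + (1/2) h lam^2 (2 th - 1) u[t] + (1/3) h^2 lam^3 (1 - 3 th + 3 th^2) u[t] + ...,
   reducing to the Forward Euler series for th = 0 and to Backward Euler for th = 1. *)
Simplify[thFOMoDE /. th -> 1/2]   (* Trapezoidal Rule: leading error term (1/12) h^2 lam^3 u[t] *)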

5.2.4. Can Forward Euler be Made Exact?

Yes, but only for a scalar test equation such as u' = λu. The idea is to replace λ by a fitted value λ̃, determined by solving

    λ = log(1 + λ̃ h) / h,   i.e.,   λ̃ = (e^(λh) − 1) / h,                   (5.16)

and this λ̃ is used in the integration. This makes the modified equation (5.7) identical with the original one. For example, if λ = 20 and h = 1/10, λ̃ = 63.891, whereas if λ = −20 and h = 1/10, λ̃ = −8.64665. Running Forward Euler with these λ̃'s matches the exact solution at the time stations. This is shown in Figure 5.1.

[Figure 5.1. Results of running Forward Euler on u' = λu, fitted by (5.16), with u(0) = 1. Top: λ = 20, h = 1/10 and λ̃ = 63.891. Bottom: λ = −20, h = 1/10 and λ̃ = −8.64665. Red: exact; black: computed.]

The trick has been discovered many times before. It is often called exponential fitting. One well known form, originally due to Liniger and Willoughby, is covered in Lambert's textbook [5.53]. Liniger's form keeps λ fixed and adjusts the method parameters, whereas the version (5.16) changes the exponent. This variant works for any λ and h, even when λh is in the unstable range, but it is not applicable to ODE systems, which exhibit multiple characteristic values.

5.3. Runge-Kutta Methods

5.3.1. Explicit Runge-Kutta

The classical four-stage Runge-Kutta method (CRK) for u' = λu is

    k1 = λ u_n;   k2 = λ (u_n + (1/2) h k1);   k3 = λ (u_n + (1/2) h k2);   k4 = λ (u_n + h k3);
    u_{n+1} − u_n = h (k1 + 2 k2 + 2 k3 + k4) / 6.

The DDMoDE is

    u(t + h) − u(t) = h (k1 + 2 k2 + 2 k3 + k4) / 6,
    k1 = λ u(t),   k2 = λ (u(t) + (1/2) h k1),   k3 = λ (u(t) + (1/2) h k2),   k4 = λ (u(t) + h k3).

Expanding in Taylor series centered at t and solving for u'(t) gives the IOMoDE

    u' = λ u + (1/2) h (λ² u − u'') + (1/6) h² (λ³ u − u''') + (1/24) h³ (λ⁴ u − u'''') + ...   (5.17)

Truncating at m_d = 4 and using repeated differentiation one constructs the linear system

    [ 1                              h/2                  h²/6   h³/24 ] [ u'    ]   [ λ (1 + λh/2 + λ²h²/6 + λ³h³/24) u ]
    [ −λ (1 + λh/2 + λ²h²/6)         1                    h/2    h²/6  ] [ u''   ] = [ 0 ]                                 (5.18)
    [ 0                              −λ (1 + λh/2)        1      h/2   ] [ u'''  ]   [ 0 ]
    [ 0                              0                    −λ     1     ] [ u'''' ]   [ 0 ]

Solving for the derivatives one simply gets u' = λu, u'' = λ²u, etc. On raising m_d an identifiable series appears:

    u' = λ (1 − (1/120) h⁴λ⁴ + (1/144) h⁵λ⁵ − (1/336) h⁶λ⁶ + ...) u
       = (1/h) log(1 + λh + (1/2) λ²h² + (1/6) λ³h³ + (1/24) λ⁴h⁴) u,       (5.19)

which shows the method to be of fourth order.
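The same module can be pointed at the CRK delay form. The sketch below is illustrative only (not from the original notes); it assumes the modules of §5.4 are loaded and uses lam for λ.

(* Sketch: FOMoDE of the classical Runge-Kutta method, to be compared with (5.19).
   Assumes the modules of Figures 5.2-5.3 have been loaded; lam = λ. *)
k1 = lam*u[t];
k2 = lam*(u[t] + h*k1/2);
k3 = lam*(u[t] + h*k2/2);
k4 = lam*(u[t] + h*k3);
crkRes = u[t + h] - u[t] - h*(k1 + 2*k2 + 2*k3 + k4)/6;   (* DDMoDE residual of 5.3.1 *)
crkFOMoDE = ModifiedScalarODEbySolve[crkRes, u, t, h, {}, 8, 1, 1];
(* With order = 8 the single returned entry should reveal the first error term,
   -(1/120) h^4 lam^5 u[t], confirming fourth-order accuracy. The symbolic solve
   may take noticeably longer than for the one-step examples. *)
Simplify[crkFOMoDE]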


For the simplest explicit RK, also known as Heun's method, u_{n+1} − u_n = hλ (u_n + (1/2) hλ u_n), the DDMoDE is u(t + h) − u(t) = hλ (u(t) + (1/2) hλ u(t)). Proceeding as above one finds the FOMoDE

    u' = λ (1 − (1/6) h²λ² + (1/8) h³λ³ − (1/20) h⁴λ⁴ + ...) u
       = (1/h) log(1 + λh + (1/2) λ²h²) u.                                  (5.20)

5.3.2. Implicit Midpoint Rule

The IMR is the simplest implicit Runge-Kutta (RK) method. Applied to u' = λu it reads u_{n+1} − u_n = h u'_{n+1/2} = hλ u_{n+1/2}, where u'_{n+1/2} and u_{n+1/2} denote the slope and the solution, respectively, at t_n + (1/2)h. The DDMoDE form is

    u(t + h) − u(t) = hλ u(t + (1/2) h).

Expanding both sides in Taylor series and solving for u' gives the IOMoDE

    u'(t) (1 − (1/2) λh) = λ u − (1/2) h u'' + (1/24) h² (3λ u'' − 4 u''') + ...             (5.21)

Differentiating, expanding in h and retaining derivatives up to order 4:

    [ 1                            h/2 + λh²/8 + λ²h³/16    h²/6 + λh³/16   h³/24 ] [ u'    ]   [ λβ u ]
    [ −λ (1 + λh/2 + λ²h²/4)       1                        h/2 + λh²/8     h²/6  ] [ u''   ] = [ 0 ]     (5.22)
    [ 0                            −λ (1 + λh/2)            1               h/2   ] [ u'''  ]   [ 0 ]
    [ 0                            0                        −λ              1     ] [ u'''' ]   [ 0 ]

in which β = 1 + (1/2) λh + (1/4) λ²h² + (1/8) λ³h³. Solving for u':

    u' = λ [ (96 + 48hλ + 52h²λ² + 12h³λ³ + 7h⁴λ⁴) / (96 + 48hλ + 56h²λ² + 14h³λ³ + 8h⁴λ⁴) ] u
       = λ (1 − (1/24) h²λ² + ...) u.                                       (5.23)

The series in parentheses is more difficult to identify. Retaining more terms reveals a recurrence relation that yields the FOMoDE

    u' = λ (1 − (1/24) h²λ² + (3/640) h⁴λ⁴ − (5/7168) h⁶λ⁶ + ...) u
       = (1/h) log[ 1 + λh √(1 + (1/4) λ²h²) + (1/2) λ²h² ] u,              (5.24)

which is a new result. Note that IMR is a second order method, as is TR, but it is twice as accurate. For the 2-point Gauss-Legendre implicit Runge-Kutta,

    u(t + h) − u(t) = (1/2) hλ [ u(t + ξ₁ h) + u(t + ξ₂ h) ],

in which ξ₁ = 1/2 − √3/6 and ξ₂ = 1/2 + √3/6 are the Gauss points. The FOMoDE is u' = λ (1 − (1/4320) h⁴λ⁴ + ...) u, showing fourth order accuracy. The series in parentheses has not yet been identified.

5.4. Mathematica Modules

The foregoing examples were actually carried out by two Mathematica modules, displayed in Figures 5.2 and 5.3. Module ModifiedScalarODEbySolve carries out the process through a direct solve method, whereas ModifiedScalarODEbyRecursion does it by recursive descent. Both forms receive the DD form of the MoDE, assumed to be in the canonical form

    u(t + h) − u(t) = h f(u, t).                                            (5.25)
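To make the canonical form (5.25) concrete, the following sketch (added here for illustration, not part of the original figures) writes the DDMoDE residuals of some of the preceding methods as they would be passed to the modules; lam stands for λ, and the htab values follow the documentation in §5.4.1.

(* Illustrative mapping of DDMoDEs to the canonical form (5.25): the residual
   lhs - rhs is what gets passed as the first argument of the modules below.
   lam = λ; htab lists multiples of h other than one, as documented in 5.4.1. *)
feRes  = u[t + h] - u[t] - h*lam*u[t];          (* Forward Euler,  htab = {}    *)
beRes  = u[t + h] - u[t] - h*lam*u[t + h];      (* Backward Euler, htab = {}    *)
imrRes = u[t + h] - u[t] - h*lam*u[t + h/2];    (* Implicit MR,    htab = {1/2} *)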



ReplaceIncrement[u_,t_,h_,c_,order_]:={u[t+c*h]->
    Sum[((c*h)^i/i!)*D[u[t],{t,i}],{i,0,order}]};

ModifiedScalarODEbySolve[res_,u_,t_,h_,htab_,order_,m_,isol_]:=
  Module[{r=res,d,v,usol,up,ups,vsol,umod,i,j,k,n,ns,nh,c},
    nh=Length[htab];
    If [nh==0, r=r/.ReplaceIncrement[u,t,h,1,order]];
    If [nh>0,  For[i=1,i<=nh,i++, c=htab[[i]];
        r=r/.ReplaceIncrement[u,t,h,c,order]]];
    r=Simplify[r];
    usol=Together[Simplify[Solve[r==0,u'[t]]]];
    up=Simplify[u'[t]/.usol[[1]]];
    ups=Normal[Series[up,{h,0,order-1}]];
    d=v=Table[0,{order}]; d[[1]]=ups; v[[1]]=u'[t];
    For [i=2, i<=order, i++,
      d[[i]]=Simplify[Normal[Series[D[d[[i-1]],t],{h,0,order-i}]]];
      v[[i]]=D[v[[i-1]],t] ];
    eqs={}; For[i=1,i<=order,i++, AppendTo[eqs,v[[i]]==d[[i]]] ];
    vsol=Solve[eqs,v]; vsol=Simplify[vsol]; ns=Length[vsol];
    umod=Table[0,{ns}];
    If [ns>1, Print[ "***ModifiedScalarODEbySolve: ",
        ns," solutions found for nonlinear ODE"]];
    For [k=1,k<=ns,k++, umod[[k]]=Take[v,m]/.vsol[[k]];
      For [i=1,i<=m,i++, umod[[k,i]]=Simplify[
         Normal[Series[umod[[k,i]],{h,0,order-i}]]] ]];
    If [isol>0, Return[umod[[isol]]]];
    Return[umod]];
Figure 5.2. Module to calculate the finite order MoDE of a first-order scalar equation by direct solve.

ModifiedScalarODEbyRecursion[res_,u_,t_,h_,htab_,order_,m_,isol_]:=
  Module[{r=res,d,v,usol,up,i,j,k,klim,n,nh,c,ijtab={},dold,dnew},
    nh=Length[htab]; klim=0;
    If [nh==0, r=r/.ReplaceIncrement[u,t,h,1,order]];
    If [nh>0,  For[i=1,i<=nh,i++, c=htab[[i]];
        r=r/.ReplaceIncrement[u,t,h,c,order]]];
    r=Simplify[r];
    usol=Solve[r==0,u'[t]];
    up=Simplify[u'[t]/.usol[[1]]];
    up=Normal[Series[up,{h,0,order-1}]];
    d=v=Table[0,{order}]; d[[1]]=Simplify[up]; v[[1]]=u'[t];
    For [i=2, i<=order, i++,
      d[[i]]=Simplify[Normal[Series[D[d[[i-1]],t],{h,0,order-i}]]];
      v[[i]]=D[v[[i-1]],t] ];
    For [i=order-1,i>=1,i--, For[j=order,j>=i,j--, AppendTo[ijtab,{i,j,0}]]];
    For [i=2,i<=m,i++, For[j=1,j<=i,j++, AppendTo[ijtab,{i,j,0}]]];
    For [ij=1,ij<=Length[ijtab],ij++, {i,j,kk}=ijtab[[ij]];
      rep={v[[j]]->d[[j]]};
      dold=Simplify[Normal[Series[d[[i]],{h,0,order-i}]]];
      kmax=order+1;
      For [k=1, k<=kmax, k++,
        dnew=dold/.rep; ijtab[[ij,3]]=k; klim=Max[k,klim];
        If [dnew==dold, d[[i]]=dnew; Break[]];
        If [k>=kmax, Print["Recursion OVW, j=",j," rep: ",rep];
           Return[Null] ];
        dold=Simplify[Normal[Series[dnew,{h,0,order-i}]]];
      ];
    ];
    Return[Take[d,m]]];
Figure 5.3. Module to calculate the finite order MoDE of a first-order scalar equation by recursive descent.



5.4.1. FOMoDE by Direct Solve

Module ModifiedScalarODEbySolve receives the delay form of the MoDE for a first-order scalar ODE. It returns the finite-order MoDE computed up to a given order. It should be used only for linear ODEs. For nonlinear ODEs it will attempt to compute solutions via the Mathematica Solve function, but it is unlikely to succeed unless the order is very low and the equation is of simple polynomial form. The module is invoked as follows:

    MoDE=ModifiedScalarODEbySolve[lhs-rhs,u,t,h,htab,order,m,isol];

The arguments are:

lhs     The left hand side of the canonical form (5.25). Typically this is u[t+h]-u[t].
rhs     The right hand side of the canonical form (5.25).
u       The name of the unknown function.
h       The name of the stepsize variable.
htab    A list of all multiples of h different from one that appear in the canonical form (5.25). For example, in the Forward Euler DDMoDE u(t+h) − u(t) = hλ u(t) this list would be empty, because only u(t+h) appears. But in the implicit MR DDMoDE u(t+h) − u(t) = hλ u(t + (1/2)h) the multiple 1/2 has to be listed, and htab should be { 1/2 }. In the 2-point Gauss implicit Runge-Kutta u(t+h) − u(t) = (1/2) hλ [u(t + ξ₁h) + u(t + ξ₂h)], where ξ₁ = 1/2 − (1/6)√3 and ξ₂ = 1/2 + (1/6)√3, htab should be { (3-Sqrt[3])/6,(3+Sqrt[3])/6 }.
order   The order of the largest derivative of u retained in the computations.
m       The number of FOMoDEs returned by the function: 1 through order. If m=1 only the MoDE for u' is returned. If m=2, FOMoDEs for u' and u'' are returned, etc. Normally only that for u' is of practical interest.
isol    If the input equation is nonlinear, ModifiedScalarODEbySolve will return k>1 FOMoDEs if it can compute any at all. If isol>0, only the isol-th solution is returned; if isol=0 all are returned. If the input equation is linear, this is a dummy argument.

The function returns a list of FOMoDEs. If the input equation is linear, this list is one-dimensional and contains m entries. For example, if m=3 it returns the list { FOMoDE for u', FOMoDE for u'', FOMoDE for u''' }. If the input equation is nonlinear with isol=0, and if ModifiedScalarODEbySolve is able to compute k solutions, the result will be a two-dimensional list dimensioned with k rows and m columns. Only one of the rows will be relevant in the sense of being a FOMoDE, while the others are spurious. It is then possible to use isol>0 in subsequent invocations to weed out the output. But since the module will rarely be able to solve nonlinear equations, this case is largely irrelevant.

5.4.2. FOMoDE by Recursion

There is a second way to obtain the FOMoDE, which involves recursive substitution of derivatives rather than a direct solution. This method is implemented in the module ModifiedScalarODEbyRecursion, which is listed in Figure 5.3. The arguments of this module are the same as those of ModifiedScalarODEbySolve, except that isol is a dummy argument: the module returns only one FOMoDE, so isol is ignored. The logic of this module is quite hairy and only understandable to Mathematica experts, so it won't be explained here.


Which module is preferable? For simple linear ODEs with low to moderate order, try both modules and compare answers; a comparison call is sketched below. If everything goes well, the answers will be identical, which provides a valuable check. The recursion module, however, should be preferred in the following cases:

1. The ODE is nonlinear. In this case ModifiedScalarODEbySolve will typically fail unless the ODE is of polynomial type and the internal Solve is asked to solve a quadratic or cubic polynomial. This can only happen if order is very low, typically 1 or 2.

2. The ODE is linear but order is high enough that ModifiedScalarODEbySolve begins taking inordinate amounts of time. This is because the computational effort increases roughly as the cube of order when direct-solving and simplifying symbolic systems. For the test systems used here and one-step LMS methods, this happens when order reaches 8-10. On the other hand, the computational effort of ModifiedScalarODEbyRecursion grows more mildly, typically as the square of order. This is important when one is trying to identify a series to express in closed form by taking more and more terms.
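The comparison mentioned above can be sketched as follows. This is an illustrative example, not from the original notes; it assumes the modules of Figures 5.2 and 5.3 have been loaded and uses lam for λ.

(* Sketch: cross-check the direct-solve and recursion modules on Forward Euler.
   Assumes the modules of Figures 5.2-5.3 have been loaded; lam = λ. *)
res     = u[t + h] - u[t] - h*lam*u[t];
bySolve = ModifiedScalarODEbySolve[res, u, t, h, {}, 6, 1, 1];
byRecur = ModifiedScalarODEbyRecursion[res, u, t, h, {}, 6, 1, 0];
(* Both calls should return a one-element list containing the series form of (5.7).
   Agreement of the two results is the consistency check suggested above. *)
Simplify[bySolve - byRecur]   (* expected: {0} *)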

Notes and Bibliography

The study of errors in automatic matrix computations started during WWII. It was a byproduct of the appearance of programmable computers: punched card equipment, soon to be followed by digital computers. The paper that established the method as a science was co-authored by von Neumann and Goldstine [5.86]. This was soon followed by Turing [5.84]. All of these authors were pioneers of early digital computation.

The original (early 1940s) predictions of forward error analysis for the canonical problem of linear algebra, solving Ax = b by elimination, were highly pessimistic. They showed worst-case exponential error growth typified by 4^n or 2^n, where n is the number of equations. This led early workers to predict that direct solvers would lose all accuracy by n = 50-100. But these dire predictions were not verified in actual computations. The discrepancy was explained by backward error analysis in a landmark paper by Wilkinson [5.89], who continued developing the approach throughout his career [5.90,5.91]. The concept of the existence of a nearby problem, as a perturbation of the given data, was actually first mentioned in [5.86]. Hence von Neumann deserves the credit of being the originator of backward error analysis, among other brilliant contributions worth at least three and a half Nobel Prizes [5.10]. His priority has been unfortunately belittled, however, by a revisionist view of early matrix computations [5.70].

The concept of forward-truncation error for ODE systems is much older, since stepsize control has been used since the late XIX century for hand and slide-rule computations. The theory was brought to an elegant conclusion by Henrici [5.40].

Modified differential equations originally appeared in conjunction with finite difference discretizations for computational fluid dynamics (CFD). The prescription for constructing them can be found in Richtmyer and Morton's textbook [5.75, p. 331]. They used it to interpret numerical dissipation and dispersion in the Lax-Wendroff CFD treatment of shocks, and to derive corrective operators. Similar ideas were used by Hirt [5.43] and Roache [5.76]. Warming and Hyett [5.88] were the first to describe the correct procedure for eliminating high order time derivatives. They used modified PDEs for studying accuracy and stability of several methods, and attributed the modified equation name to Lomax [5.58]. In this work the original equation typically models flow effects of conduction and convection, h includes grid dimensions in space and time, and feedback is used to adjust parameters in terms of improving stability as well as reducing spurious oscillations (e.g. by artificial viscosity or upwinding) and dispersion.

The first use of modified equations to study spatial finite-element discretizations can be found in [5.87]. However, the derivative elimination and force lumping procedures were faulty, which led to incorrect conclusions. This was corrected by Park and Flaggs [5.63,5.64], who used modified equations for a systematic study of C^0 beam, plate and shell FEM discretizations.


The method has recently attracted attention from the numerical mathematics community, since it provides an effective tool to understand the long-time structural behavior of computational dynamic systems, both deterministic and chaotic. Recommended references are [5.34,5.35,5.38,5.83]. Web-accessible Maple scripts for the MoDE-to-FOMoDE reduction process are presented in [5.1].

References

[5.1]  Ahmed, M. O. and Corless, R. M., The method of modified equations in Maple, Electronic Proc. 3rd Int. IMACS Conf. on Applications of Computer Algebra, Maui, 1997. PDF accessible at http://www.aqit.uwo.ca/corless.
[5.2]  Belvin, W. K. and Park, K. C., Structural tailoring and feedback control synthesis: an interdisciplinary approach, J. Guidance, Control & Dynamics, 13, 424-429, 1990.
[5.3]  Belytschko, T. and Mullen, R., Mesh partitions of explicit-implicit time integration, in: Formulations and Computational Algorithms in Finite Element Analysis, ed. by K.-J. Bathe, J. T. Oden and W. Wunderlich, MIT Press, Cambridge, 673-690, 1976.
[5.4]  Belytschko, T. and Mullen, R., Stability of explicit-implicit mesh partitions in time integration, Int. J. Numer. Meth. Engrg., 12, 1575-1586, 1978.
[5.5]  Belytschko, T., Yen, T. and Mullen, R., Mixed methods for time integration, Comp. Meths. Appl. Mech. Engrg., 17/18, 259-275, 1979.
[5.6]  Bergan, P. G. and Felippa, C. A., A triangular membrane element with rotational degrees of freedom, Comp. Meths. Appl. Mech. Engrg., 50, 25-69, 1985.
[5.7]  Butcher, J. C., Numerical Methods for Ordinary Differential Equations, Wiley, New York, 2003.
[5.8]  Cohn, A., Über die Anzahl der Wurzeln einer algebraischen Gleichung in einem Kreise, Math. Z., 14-15, 110-148, 1914.
[5.9]  Dahlquist, G. and Björck, Å., Numerical Methods, Prentice-Hall, Englewood Cliffs, N.J., 1974; reprinted by Dover, New York, 2003.
[5.10] Davis, P. J., John von Neumann at 100: SIAM celebrates a rich legacy, SIAM News, 36, No. 4, May 2003.
[5.11] Douglas, J. and Rachford Jr., H. H., On the numerical solution of the heat equation in two and three space variables, Trans. Amer. Math. Soc., 82, 421-439, 1956.
[5.12] Farhat, C. and Lin, T. Y., Transient aeroelastic computations using multiple moving frames of reference, AIAA Paper No. 90-3053, AIAA 8th Applied Aerodynamics Conference, Portland, Oregon, August 1990.
[5.13] Farhat, C., Park, K. C. and Pelerin, Y. D., An unconditionally stable staggered algorithm for transient finite element analysis of coupled thermoelastic problems, Comp. Meths. Appl. Mech. Engrg., 85, 349-365, 1991.
[5.14] Farhat, C. and Roux, F.-X., Implicit parallel processing in structural mechanics, Comput. Mech. Advances, 2, 1-124, 1994.
[5.15] Farhat, C., Chen, P. S. and Mandel, J., A scalable Lagrange multiplier based domain decomposition method for implicit time-dependent problems, Int. J. Numer. Meth. Engrg., 38, 3831-3854, 1995.
[5.16] Felippa, C. A., Refined finite element analysis of linear and nonlinear two-dimensional structures, Ph.D. Dissertation, Department of Civil Engineering, University of California at Berkeley, Berkeley, CA, 1966.
[5.17] Felippa, C. A. and Park, K. C., Computational aspects of time integration procedures in structural dynamics, Part I: Implementation, J. Appl. Mech., 45, 595-602, 1978.
[5.18] Felippa, C. A., Yee, H. C. M. and Park, K. C., Stability of staggered transient analysis procedures for coupled mechanical systems, Report LMSC-D630852, Applied Mechanics Laboratory, Lockheed Palo Alto Research Laboratory, Lockheed Missiles & Space Co., 1979.
[5.19] Felippa, C. A. and Park, K. C., Direct time integration methods in nonlinear structural dynamics, Comp. Meths. Appl. Mech. Engrg., 17/18, 277-313, 1979.
[5.20] Felippa, C. A. and Park, K. C., Staggered transient analysis procedures for coupled dynamic systems: formulation, Comp. Meths. Appl. Mech. Engrg., 24, 61-112, 1980.


[5.21] Felippa, C. A. and DeRuntz, J. A., Finite element analysis of shock-induced hull cavitation, Comp. Meths. Appl. Mech. Engrg., 44, 297-337, 1984.
[5.22] Felippa, C. A. and Bergan, P. G., A triangular plate bending element based on an energy-orthogonal free formulation, Comp. Meths. Appl. Mech. Engrg., 61, 129-160, 1987.
[5.23] Felippa, C. A. and Geers, T. L., Partitioned analysis of coupled mechanical systems, Engrg. Comput., 5, 123-133, 1988.
[5.24] Felippa, C. A., Recent advances in finite element templates, Chapter 4 in Computational Mechanics for the Twenty-First Century, ed. by B. H. V. Topping, Saxe-Coburg Publications, Edinburgh, 71-98, 2000.
[5.25] Felippa, C. A., Park, K. C. and Farhat, C., Partitioned analysis of coupled mechanical systems, Invited Plenary Lecture, 4th World Congress in Computational Mechanics, Buenos Aires, Argentina, July 1998; expanded version in Comp. Meths. Appl. Mech. Engrg., 190, 3247-3270, 2001.
[5.26] Felippa, C. A., A study of optimal membrane triangles with drilling freedoms, Comp. Meths. Appl. Mech. Engrg., 192, 2125-2168, 2003.
[5.27] Gantmacher, F., The Theory of Matrices, Vol. II, Chelsea, New York, 1960.
[5.28] Gear, C. W., Numerical Initial Value Problems in Ordinary Differential Equations, Prentice-Hall, Englewood Cliffs, N.J., 1971.
[5.29] Geers, T. L., Residual potential and approximate methods for three-dimensional fluid-structure interaction, J. Acoust. Soc. Am., 45, 1505-1510, 1971.
[5.30] Geers, T. L., Doubly asymptotic approximations for transient motions of general structures, J. Acoust. Soc. Am., 45, 1500-1508, 1980.
[5.31] Geers, T. L. and Felippa, C. A., Doubly asymptotic approximations for vibration analysis of submerged structures, J. Acoust. Soc. Am., 73, 1152-1159, 1980.
[5.32] Geers, T. L., Boundary element methods for transient response analysis, Chapter 4 of Computational Methods for Transient Analysis, ed. by T. Belytschko and T. J. R. Hughes, North-Holland, Amsterdam, 221-244, 1983.
[5.33] Grcar, J., Birthday of Modern Numerical Analysis, sepp@california.sandia.gov, 2003.
[5.34] Griffiths, D. and Sanz-Serna, J., On the scope of the method of modified equations, SIAM J. Sci. Statist. Comput., 7, 994-1008, 1986.
[5.35] Hairer, E., Backward analysis of numerical integrators and symplectic methods, Annals Numer. Math., 1, 107-132, 1994.
[5.36] Hairer, E., Nørsett, S. P. and Wanner, G., Solving Ordinary Differential Equations I: Nonstiff Problems, Springer, Berlin, 1994.
[5.37] Hairer, E. and Wanner, G., Solving Ordinary Differential Equations II: Stiff and Differential-Algebraic Problems, Springer, Berlin, 1996.
[5.38] Hairer, E., Lubich, C. and Wanner, G., Geometric Numerical Integration: Structure Preserving Algorithms for Ordinary Differential Equations, Springer Verlag, Berlin, 2002.
[5.39] Henrici, P., Discrete Variable Methods for Ordinary Differential Equations, Wiley, New York, 1963.
[5.40] Henrici, P., Error Propagation for Difference Methods, Wiley, New York, 1964.
[5.41] Henrici, P., Applied and Computational Complex Analysis, Vol. II, Wiley, New York, 1977.
[5.42] Hermite, C., Sur le nombre des racines d'une équation algébrique comprise entre des limites données, J. Reine Angew. Math., 52, 39-51, 1856.
[5.43] Hirt, C. W., Heuristic stability theory for finite difference equations, J. Comp. Physics, 2, 339-342, 1968.
[5.44] Hughes, T. J. R. and Liu, W.-K., Implicit-explicit finite elements in transient analysis: I. Stability theory; II. Implementation and numerical examples, J. Appl. Mech., 45, 371-378, 1978.
[5.45] Hughes, T. J. R., Pister, K. S. and Taylor, R. L., Implicit-explicit finite elements in nonlinear transient analysis, Comp. Meths. Appl. Mech. Engrg., 17/18, 159-182, 1979.
[5.46] Hughes, T. J. R. and Stephenson, R. S., Stability of implicit-explicit finite elements in nonlinear transient analysis, Int. J. Engrg. Sci., 19, 295-302, 1981.
[5.47] Hughes, T. J. R., The Finite Element Method: Linear Static and Dynamic Finite Element Analysis, Prentice-Hall, Englewood Cliffs, N.J., 1987.


[5.48] Hurwitz, A., Über die Bedingungen, unter welchen eine Gleichung nur Wurzeln mit negativen reellen Theilen besitzt, Math. Ann., 46, 273-284, 1895. English translation: On the conditions under which an equation has only roots with negative real part, in Selected Papers on Mathematical Trends in Control Theory, ed. by R. Bellman and R. Kalaba, Dover, New York, 72-82, 1964.
[5.49] Jensen, P. S., Transient analysis of structures by stiffly stable methods, Computers & Structures, 4, 615-626, 1974.
[5.50] Jury, E. I., Inners and Stability of Dynamic Systems, 2nd ed., Krieger, Malabar, FL, 1982.
[5.51] Kloeden, P. E. and Palmer, K. J. (eds.), Chaotic Numerics, American Mathematical Society, Providence, RI, 1994.
[5.52] Lagrange, J. L., Méchanique Analytique, Chez la Veuve Desaint, Paris, 1788; Édition complète, 2 vols., Blanchard, Paris, 1965.
[5.53] Lambert, J., Computational Methods in Ordinary Differential Equations, Wiley, London, 1973.
[5.54] Lapidus, L. and Seinfeld, J. H., Numerical Solutions of Ordinary Differential Equations, Academic Press, New York, 1971.
[5.55] LaSalle, R. D., The Stability of Dynamic Systems, Society for Industrial and Applied Mathematics, Providence, RI, 1976.
[5.56] LaSalle, R. D., The Stability and Control of Dynamic Processes, Springer, Berlin, 1986.
[5.57] Liénard, A. and Chipart, M. H., Sur le signe de la partie réelle des racines d'une équation algébrique, J. Math. Pures Appl., 10, 291-346, 1914.
[5.58] Lomax, H., Kutler, P. and Fuller, F. B., The numerical solution of partial differential equations governing convection, AGARDograph 146, 1970.
[5.59] Park, K. C., Felippa, C. A. and DeRuntz, J. A., Stabilization of staggered solution procedures for fluid-structure interaction analysis, in: Computational Methods for Fluid-Structure Interaction Problems, ed. by T. Belytschko and T. L. Geers, AMD Vol. 26, American Society of Mechanical Engineers, New York, 95-124, 1977.
[5.60] Park, K. C. and Felippa, C. A., Computational aspects of time integration procedures in structural dynamics, Part II: Error propagation, J. Appl. Mech., 45, 603-611, 1978.
[5.61] Park, K. C., Partitioned transient analysis procedures for coupled-field problems: stability analysis, J. Appl. Mech., 47, 370-376, 1980.
[5.62] Park, K. C. and Felippa, C. A., Partitioned transient analysis procedures for coupled-field problems: accuracy analysis, J. Appl. Mech., 47, 919-926, 1980.
[5.63] Park, K. C. and Flaggs, D. L., An operational procedure for the symbolic analysis of the finite element method, Comp. Meths. Appl. Mech. Engrg., 46, 65-81, 1984.
[5.64] Park, K. C. and Flaggs, D. L., A Fourier analysis of spurious modes and element locking in the finite element method, Comp. Meths. Appl. Mech. Engrg., 42, 37-46, 1984.
[5.65] Park, K. C. and Felippa, C. A., Partitioned analysis of coupled systems, Chapter 3 in Computational Methods for Transient Analysis, ed. by T. Belytschko and T. J. R. Hughes, North-Holland, Amsterdam-New York, 157-219, 1983.
[5.66] Park, K. C. and Felippa, C. A., Recent advances in partitioned analysis procedures, Chapter 11 of Numerical Methods in Coupled Problems, ed. by R. Lewis, P. Bettess and E. Hinton, Wiley, Chichester, 327-352, 1984.
[5.67] Park, K. C. and Belvin, W. K., A partitioned solution procedure for control-structure interaction simulations, J. Guidance, Control and Dynamics, 14, 59-67, 1991.
[5.68] Park, K. C. and Felippa, C. A., A variational principle for the formulation of partitioned structural systems, Int. J. Numer. Meth. Engrg., 47, 395-418, 2000.
[5.69] Park, K. C., Felippa, C. A. and Ohayon, R., Partitioned formulation of internal fluid-structure interaction problems via localized Lagrange multipliers, Comp. Meths. Appl. Mech. Engrg., 190, 2989-3007, 2001.
[5.70] Parlett, B. N., Very early days of matrix computations, SIAM News, 36, No. 9, Nov. 2003.
[5.71] Peaceman, D. W. and Rachford Jr., H. H., The numerical solution of parabolic and elliptic differential equations, SIAM J., 3, 28-41, 1955.


[5.72] Piperno, S. and Farhat, C., Design of efficient partitioned procedures for the transient solution of aeroelastic problems, Revue Européenne des Éléments Finis, 9, 655-680, 2000.
[5.73] Piperno, S. and Farhat, C., Partitioned procedures for the transient solution of coupled aeroelastic problems: an energy transfer analysis and three-dimensional applications, Comp. Meths. Appl. Mech. Engrg., 190, 3147-3170, 2001.
[5.74] Press, W. H., Flannery, B. P., Teukolsky, S. A. and Vetterling, W. T., Numerical Recipes in Fortran, 2nd ed., Cambridge Univ. Press, 1992.
[5.75] Richtmyer, R. L. and Morton, K. W., Difference Methods for Initial Value Problems, 2nd ed., Interscience Pubs., New York, 1967.
[5.76] Roache, P. J., Computational Fluid Mechanics, Hermosa Publishers, Albuquerque, 1970.
[5.77] Routh, E. J., A Treatise on the Stability of a Given State of Motion, Adams Prize Essay, Macmillan, New York, 1877.
[5.78] Schuler, J. J. and Felippa, C. A., Superconducting finite elements based on a gauged potential variational principle, I. Formulation; II. Computational results, J. Comput. Syst. Engrg., 5, 215-237, 1994.
[5.79] Schur, I., Über Potenzreihen, die im Innern des Einheitskreises beschränkt sind, J. für Math., 147, 205-232, 1917.
[5.80] Sewell, G., The Numerical Solution of Ordinary & Partial Differential Equations, Academic Press, 1997.
[5.81] Shampine, L. F., Numerical Solution of Ordinary Differential Equations, CRC Press, 1994.
[5.82] Straughan, B., The Energy Method, Stability and Nonlinear Convection, Springer, Berlin, 1992.
[5.83] Stuart, A. M. and Humphries, A. R., Dynamical Systems and Numerical Analysis, Cambridge Univ. Press, Cambridge, 1996.
[5.84] Turing, A. M., Rounding-off errors in matrix processes, Quart. J. Mech. Appl. Math., 1, 287-308, 1948.
[5.85] Uspensky, J. V., Theory of Equations, McGraw-Hill, New York, 1948.
[5.86] von Neumann, J. and Goldstine, H. H., Numerical inverting of matrices of high order, Bull. Amer. Math. Soc., 53, 1021-1099, 1947.
[5.87] Waltz, J. E., Fulton, R. E. and Cyrus, N. J., Accuracy and convergence of finite element approximations, Proc. Second Conf. on Matrix Methods in Structural Mechanics, WPAFB, Ohio, Sep. 1968, AFFDL TR 68-150, 995-1028, 1968.
[5.88] Warming, R. F. and Hyett, B. J., The modified equation approach to the stability and accuracy analysis of finite difference methods, J. Comp. Physics, 14, 159-179, 1974.
[5.89] Wilkinson, J. H., Error analysis of direct methods of matrix inversion, J. ACM, 8, 281-330, 1961.
[5.90] Wilkinson, J. H., Rounding Errors in Algebraic Processes, Prentice-Hall, Englewood Cliffs, N.J., 1963.
[5.91] Wilkinson, J. H., The Algebraic Eigenvalue Problem, Oxford Univ. Press, Oxford, 1965.
[5.92] Wolfram, S., The Mathematica Book, 5th ed., Wolfram Media Inc., 2003.
[5.93] Yanenko, N. N., The Method of Fractional Steps, Springer, Berlin, 1991.
