This action might not be possible to undo. Are you sure you want to continue?

BooksAudiobooksComicsSheet Music### Categories

### Categories

### Categories

### Publishers

Scribd Selects Books

Hand-picked favorites from

our editors

our editors

Scribd Selects Audiobooks

Hand-picked favorites from

our editors

our editors

Scribd Selects Comics

Hand-picked favorites from

our editors

our editors

Scribd Selects Sheet Music

Hand-picked favorites from

our editors

our editors

Top Books

What's trending, bestsellers,

award-winners & more

award-winners & more

Top Audiobooks

What's trending, bestsellers,

award-winners & more

award-winners & more

Top Comics

What's trending, bestsellers,

award-winners & more

award-winners & more

Top Sheet Music

What's trending, bestsellers,

award-winners & more

award-winners & more

P. 1

Fundamentals of Computational Fluid Dynamics_Harvard Lomax|Views: 18|Likes: 1

Published by irbis200

Theory of FEA apllied to Fluid Dynamics

Theory of FEA apllied to Fluid Dynamics

See more

See less

https://www.scribd.com/doc/122382765/Fundamentals-of-Computational-Fluid-Dynamics-Harvard-Lomax

12/15/2014

text

original

Harvard Lomax and Thomas H. Pulliam NASA Ames Research Center David W. Zingg University of Toronto Institute for Aerospace Studies August 26, 1999

Contents

1 INTRODUCTION

1.1 Motivation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1.2 Background . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1.2.1 Problem Speci cation and Geometry Preparation . . . . . . 1.2.2 Selection of Governing Equations and Boundary Conditions 1.2.3 Selection of Gridding Strategy and Numerical Method . . . 1.2.4 Assessment and Interpretation of Results . . . . . . . . . . . 1.3 Overview . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1.4 Notation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2.1 Conservation Laws . . . . . . . . . . . . 2.2 The Navier-Stokes and Euler Equations . 2.3 The Linear Convection Equation . . . . 2.3.1 Di erential Form . . . . . . . . . 2.3.2 Solution in Wave Space . . . . . . 2.4 The Di usion Equation . . . . . . . . . . 2.4.1 Di erential Form . . . . . . . . . 2.4.2 Solution in Wave Space . . . . . . 2.5 Linear Hyperbolic Systems . . . . . . . . 2.6 Problems . . . . . . . . . . . . . . . . . . 3.1 Meshes and Finite-Di erence Notation 3.2 Space Derivative Approximations . . . 3.3 Finite-Di erence Operators . . . . . . 3.3.1 Point Di erence Operators . . . 3.3.2 Matrix Di erence Operators . . 3.3.3 Periodic Matrices . . . . . . . . 3.3.4 Circulant Matrices . . . . . . . iii . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .

1

1 2 2 3 3 4 4 5

2 CONSERVATION LAWS AND THE MODEL EQUATIONS

7 8 12 12 13 14 14 15 16 17

7

**3 FINITE-DIFFERENCE APPROXIMATIONS
**

. . . . . . . . . . . . . . . . . . . . . . . . . . . .

. . . . . . .

. . . . . . .

. . . . . . .

. . . . . . .

. . . . . . .

. . . . . . .

. . . . . . .

. . . . . . .

. . . . . . .

. . . . . . .

21

21 24 25 25 25 29 30

3.4 Constructing Di erencing Schemes of Any Order . . . . . 3.4.1 Taylor Tables . . . . . . . . . . . . . . . . . . . . 3.4.2 Generalization of Di erence Formulas . . . . . . . 3.4.3 Lagrange and Hermite Interpolation Polynomials 3.4.4 Practical Application of Pade Formulas . . . . . . 3.4.5 Other Higher-Order Schemes . . . . . . . . . . . . 3.5 Fourier Error Analysis . . . . . . . . . . . . . . . . . . . 3.5.1 Application to a Spatial Operator . . . . . . . . . 3.6 Di erence Operators at Boundaries . . . . . . . . . . . . 3.6.1 The Linear Convection Equation . . . . . . . . . 3.6.2 The Di usion Equation . . . . . . . . . . . . . . . 3.7 Problems . . . . . . . . . . . . . . . . . . . . . . . . . . .

. . . . . . . . . . . .

. . . . . . . . . . . .

. . . . . . . . . . . .

. . . . . . . . . . . .

. . . . . . . . . . . .

. . . . . . . . . . . .

. . . . . . . . . . . .

31 31 34 35 37 38 39 39 43 44 46 47

4 THE SEMI-DISCRETE APPROACH

4.1 Reduction of PDE's to ODE's . . . . . . . . . . . . . . . . . . . . . . 4.1.1 The Model ODE's . . . . . . . . . . . . . . . . . . . . . . . . 4.1.2 The Generic Matrix Form . . . . . . . . . . . . . . . . . . . . 4.2 Exact Solutions of Linear ODE's . . . . . . . . . . . . . . . . . . . . 4.2.1 Eigensystems of Semi-Discrete Linear Forms . . . . . . . . . . 4.2.2 Single ODE's of First- and Second-Order . . . . . . . . . . . . 4.2.3 Coupled First-Order ODE's . . . . . . . . . . . . . . . . . . . 4.2.4 General Solution of Coupled ODE's with Complete Eigensystems 4.3 Real Space and Eigenspace . . . . . . . . . . . . . . . . . . . . . . . . 4.3.1 De nition . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 4.3.2 Eigenvalue Spectrums for Model ODE's . . . . . . . . . . . . . 4.3.3 Eigenvectors of the Model Equations . . . . . . . . . . . . . . 4.3.4 Solutions of the Model ODE's . . . . . . . . . . . . . . . . . . 4.4 The Representative Equation . . . . . . . . . . . . . . . . . . . . . . 4.5 Problems . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 5.1 Basic Concepts . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 5.2 Model Equations in Integral Form . . . . . . . . . . . . . . . . . . . 5.2.1 The Linear Convection Equation . . . . . . . . . . . . . . . 5.2.2 The Di usion Equation . . . . . . . . . . . . . . . . . . . . . 5.3 One-Dimensional Examples . . . . . . . . . . . . . . . . . . . . . . 5.3.1 A Second-Order Approximation to the Convection Equation 5.3.2 A Fourth-Order Approximation to the Convection Equation 5.3.3 A Second-Order Approximation to the Di usion Equation . 5.4 A Two-Dimensional Example . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .

51

52 52 53 54 54 55 57 59 61 61 62 63 65 67 68 72 73 73 74 74 75 77 78 80

5 FINITE-VOLUME METHODS

71

5.5 Problems . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .

83 86 87 88 89 90 91 91 92 93 93 95 95 96 97 97 98 99 100 101 102 102 103 105 106 107 110 110 111 112 115 117

6 TIME-MARCHING METHODS FOR ODE'S

6.1 Notation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 6.2 Converting Time-Marching Methods to O E's . . . . . . . . . . . . 6.3 Solution of Linear O E's With Constant Coe cients . . . . . . . . 6.3.1 First- and Second-Order Di erence Equations . . . . . . . . 6.3.2 Special Cases of Coupled First-Order Equations . . . . . . . 6.4 Solution of the Representative O E's . . . . . . . . . . . . . . . . 6.4.1 The Operational Form and its Solution . . . . . . . . . . . . 6.4.2 Examples of Solutions to Time-Marching O E's . . . . . . . 6.5 The ; Relation . . . . . . . . . . . . . . . . . . . . . . . . . . . 6.5.1 Establishing the Relation . . . . . . . . . . . . . . . . . . . . 6.5.2 The Principal -Root . . . . . . . . . . . . . . . . . . . . . . 6.5.3 Spurious -Roots . . . . . . . . . . . . . . . . . . . . . . . . 6.5.4 One-Root Time-Marching Methods . . . . . . . . . . . . . . 6.6 Accuracy Measures of Time-Marching Methods . . . . . . . . . . . 6.6.1 Local and Global Error Measures . . . . . . . . . . . . . . . 6.6.2 Local Accuracy of the Transient Solution (er j j er! ) . . . 6.6.3 Local Accuracy of the Particular Solution (er ) . . . . . . . 6.6.4 Time Accuracy For Nonlinear Applications . . . . . . . . . . 6.6.5 Global Accuracy . . . . . . . . . . . . . . . . . . . . . . . . 6.7 Linear Multistep Methods . . . . . . . . . . . . . . . . . . . . . . . 6.7.1 The General Formulation . . . . . . . . . . . . . . . . . . . . 6.7.2 Examples . . . . . . . . . . . . . . . . . . . . . . . . . . . . 6.7.3 Two-Step Linear Multistep Methods . . . . . . . . . . . . . 6.8 Predictor-Corrector Methods . . . . . . . . . . . . . . . . . . . . . . 6.9 Runge-Kutta Methods . . . . . . . . . . . . . . . . . . . . . . . . . 6.10 Implementation of Implicit Methods . . . . . . . . . . . . . . . . . . 6.10.1 Application to Systems of Equations . . . . . . . . . . . . . 6.10.2 Application to Nonlinear Equations . . . . . . . . . . . . . . 
6.10.3 Local Linearization for Scalar Equations . . . . . . . . . . . 6.10.4 Local Linearization for Coupled Sets of Nonlinear Equations 6.11 Problems . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 7.1 Dependence on the Eigensystem 7.2 Inherent Stability of ODE's . . 7.2.1 The Criterion . . . . . . 7.2.2 Complete Eigensystems . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .

. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .

85

7 STABILITY OF LINEAR SYSTEMS

121

122 123 123 123

. . . . . . . . . . . . . . . . . . . . .5. . . . . . .1.7. . .6 Steady Problems . . . 7. . . . . . .3 A Perspective . . . . . 7. . . . . . . . . . . . . . . 8. . . . .1 Sti ness De nition for ODE's . . . . . . . . . . . . . . . . . . 8.6. . . . .3 Defective Eigensystems . . .5. . 149 149 149 151 151 152 153 154 154 154 155 158 158 159 160 160 161 . . . . . . . . . . . . . . .3. . . . . Fourier Stability Analysis . . . . . . . . . .5. . . . . . Numerical Stability of O E 's . . . . . . . . .7 Problems . . . . . . . .8 7. . . . . .9 7. . . .7. . . . . . . .3 An Example Involving Periodic Convection . . . . . . . . . . . . . . . . . . . 8. . . . . .6. . . . . . . . . . . . . . . .2 Complete Eigensystems . . . . . 8. . . . . . . A-Stable Methods . . . . . . . . . . . . . . . . . .1. 8. . . . .3 Sti ness Classi cations . . . . .2. . . . . . . .1. . . . . . . . . .7. .2 Driving and Parasitic Eigenvalues . . . . . . . . . . . 123 124 124 125 125 125 128 128 132 135 135 136 137 141 141 142 143 143 146 8 CHOICE OF TIME-MARCHING METHODS 8. . . . . . . .1 Stability for Large h. . . . . . . . . . . . . . . . . . . . . . . . .4. . 8. . . . . . Consistency . .5 7. . . . . . . . . . . . . . . . . . . . . . . . . . Problems . . . . . . . 7. . . . . . .3 Relation to Circulant Matrices . . . . . . . .3 7. . . 7. . . . . . . . . . . . . . . . . . . . . . . . .2 Some Examples . . .3. . . . . . . . . . . . . . . . . . . . . . . . .7. . . . 7. . . . . . . . . . . . . . . . . .2 Implicit Methods .1 The Criterion . . . . . . . . . . . . . . . .1 Explicit Methods . . .6. . . . .4. . . . . . . . . .4 7. . . Time-Space Stability and Convergence of O E's . . . . . . . Numerical Stability Concepts in the Complex -Plane . . . . 7. 7. . . . . . 8. .5 Coping With Sti ness . . . . . . . . . . . . . . . .5. . . . . . . . . 8. . . . . . . . . . . . . . 8. . . . . . . .6 7. . . . . . . . . . . . . . . . . . .2 Relation of Sti ness to Space Mesh Size . . . . . . . . . 7. . . 
.2 Stability for Small t . .1 Relation to -Eigenvalues . . . . . . . . . . . . . . . . . . . . . 8. . . .3. . . .3 Stability Contours in the Complex h Plane. .2 Unconditional Stability. . . . . . . . . 8.4. . . . . . . . . . . . . . . . .7 7. . . . . . . . . . . . . . . 8. . . . . .3 Practical Considerations for Comparing Methods . Numerical Stability Concepts in the Complex h Plane 7. . . . . . . . . . 8. . 8. . . . . . . . . . . . . . . . . . .3 Defective Eigensystems .1 -Root Traces Relative to the Unit Circle . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .5. . . .1 The Basic Procedure . . . . 7. . . . . . .1 Imposed Constraints . . . . 8. 7. .2 An Example Involving Di usion . . . . . . . . . . .4 Comparing the E ciency of Explicit Methods . . . . . . . . .

. . . . . . .2 Properties of the Iterative Method . . . . . . . . . . 200 10. . . . . . . . . . . . . . . . . . . . .1 The Concept . . .4 Problems .2 Factoring Physical Representations | Time Splitting . . . . . . . . .1 Eigenvector and Eigenvalue Identi cation with Space Frequencies191 10. . . . . . . . . . . . . .3 A Two-Grid Process . . . . . .3 11. . . . . . . . . . . . . . 9. . . . . . . . . . . . . . . . . . 192 10. . . .1. . . . . . . . . . . . . . . . 220 217 . . . . . . . 217 12. . . . . . .3. . . . . . . . . . 11. . .4. . . . . . . . . . . . . . . .1 Preconditioning the Basic Matrix . . . . . . . 9. . . . . .4. . 9. . . . . . . .2. . . . . . . . . . .3 The ODE Approach to Classical Relaxation . 191 11 NUMERICAL DISSIPATION 203 204 205 207 209 210 212 213 214 12 SPLIT AND FACTORED FORMS 12.1 11. . . . . . . . . . .2 ODE Form of the Classical Methods . . . 11. . . . . . . . . . . . . . Upwind Schemes . . . .1. . . . .3 The Classical Methods . . . . . . . . . 11. . . . . . . .2. . . . . . . . . . . . . . . . . . .3 Factoring Space Matrix Operators in 2{D . . . The Lax-Wendro Method . . . . . . . . . . . . . . .1 Formulation of the Model Problem . . .1 The Delta Form of an Iterative Scheme . . . . . . . . . . . . .2 The Basic Process . . . . . . . . . 202 11. . . . . . . . . . . . . . . . . . . . .2 Classical Relaxation . . . . . 9. .1 Motivation . . . . . . . . . . . . . . . . . . . .5 Nonstationary Processes . . . . . . . . . . 218 12. 9. . . . .2 The Gauss-Seidel System . . 11. . . 192 10. . . . . .1. . . .3 The SOR System . . . . . . . . . 9. . and the Error 9. . . . . . . . . . . . 191 10. . 9. . . . . . .4. . . . . . . . . . . . . 163 164 164 166 168 168 168 169 170 170 172 173 174 176 180 182 187 10 MULTIGRID 10. . . . . . . . . . . . . . . . . . . . . . . .2. 9. . . . . . . . . . .2 The Converged Solution. . . . . . . . . . . . . . . . .1 The Ordinary Di erential Equation Formulation . . . . . . . . . . . . . . . . . . . . . . . . . . .6 Problems . . 
. . . . . . . . . . . . . . .9 RELAXATION METHODS 9. .4. .2 The Model Equations . 9. . . . . .5 Arti cial Dissipation .1 Flux-Vector Splitting . . . 9. . . . . . . . . . . . . .6 Problems . .4 One-Sided First-Derivative Space Di erencing The Modi ed Partial Di erential Equation . . . the Residual. . .1 The Point-Jacobi System . . . . . . . . . . .2 11. . . . . . . . . . . . . . . .4. . . . . . . . . . 9. . . . . . 9. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .2 Flux-Di erence Splitting . . . . . . . . . . . . . . . . . . . . . 9.3. . . . .4 Eigensystems of the Classical Methods . 9. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .1. . . . . .

. . . . . . . . . .3. . . . . . . . . . . . . . . . .4.2 The Factored Nondelta Form of the Implicit Euler Method 13. 13. . . . . . . . . . . . .1 Standard Tridiagonals . . . . . . Vector and Matrix Norms . . . . . . . . . . . . . .12. . . . . . . . . . . . . . . . . . . . .2. . . . . . . . . . .4. .3. . . . . . . . . . . .1 The Representative Equation for Circulant Operators . . . . . . . . . . . . . . . . 12. . . . . . . . . . . . . . . . . . . . .3. . Eigensystems of Circulant Matrices . . . . 12. . .5 Special Cases Found From Symmetries . . . . . . . . . .3 The Factored Delta Form of the Implicit Euler Method . . . . . . . . . . . . . . . . . . . . . . . . . . . . .4 A.3 B. .1 B. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .5 B. .5 Example Analysis of the 3-D Model Equation . . Importance of Factored Forms in 2 and 3 Dimensions The Delta Form . . . B. .6 Special Cases Involving Boundary Conditions . . . . . . . 13. . . . . . . . . . . . . .2 Analysis of a Second-Order Time-Split Method . . .6 12. . . . .3 A. . . 257 . . 13. . . . . . . . . . . . . . . . . . . . . 13. . . . . . . . . . . . .4 Example Analysis of 2-D Model Equations . . . .4 Notation .3. 220 221 221 223 226 226 228 229 13 LINEAR ANALYSIS OF SPLIT AND FACTORED FORMS 13. . . .4. . . . .4 12. . . . . . . .2 A. . . . . . . . .4 Space Splitting and Factoring . . . . . . . . . Generalized Eigensystem for Simple Tridiagonals . 13. . . 233 233 234 234 237 238 242 242 243 243 244 245 247 249 250 251 251 254 257 258 259 260 260 261 262 263 A USEFUL RELATIONS AND DEFINITIONS FROM LINEAR ALGEBRA 249 B SOME PROPERTIES OF TRIDIAGONAL MATRICES Standard Eigensystem for Simple Tridiagonals . De nitions . . . . . . . . .2. . . . . . A. . . . . . The Inverse of a Simple Tridiagonal . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . Eigensystems . . . . . 13. . . . . . . B. . . . . . .4. . . . . .1 The Unfactored Implicit Euler Method . . . . . B. . . . . 
.1 Mesh Indexing Convention . . . . . . . Problems . . . .6 Problems . . 13. . . . . . . . Second-Order Factored Implicit Methods . . . .4 The Factored Delta Form of the Trapezoidal Method . . . . 13. . . . . . . . . . . . . .1 Stability Comparisons of Time-Split Methods . . . .3 The Representative Equation for Space-Split Operators . 13. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .1 A. . .2 B. 12.4. . . . . . . . .2 General Circulant Systems . . . . . .7 12. . . . . .2 Example Analysis of Circulant Systems . . .2 Data Bases and Space Vectors . 13. . . . . . . . . . . . .5 12.4. Algebra . . . . . . . . . . . . . . B.3 Data Base Permutations . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .

C THE HOMOGENEOUS PROPERTY OF THE EULER EQUATIONS265 .

The mathematical structure is the theory of linear algebra and the attendant eigenanalysis of linear systems. slip surfaces. shock waves. These events are related to the action and interaction of phenomena such as dissipation. Many of the most important aspects of these relations are nonlinear and. As we shall see in a later chapter. often have no analytic solution. convection. motivates the numerical solution of the associated partial di erential equations.Chapter 1 INTRODUCTION 1. In the eld of aerodynamics. di usion. As time went on and these attempts began to crystallize. in e ect. which 1 . The new equations. all of these phenomena are governed by the compressible Navier-Stokes equations. and turbulence. as a consequence. The principal such constraint was the demand for uni cation. but the authors believe the answer is a rmative and present this book as justi cation for that belief. Experience has shown that such is not the case. of course. The ultimate goal of the eld of computational uid dynamics (CFD) is to understand the physical events that occur in the ow of uids around and within designated objects. can change the form of the basic partial di erential equations themselves. boundary layers. the use of numerical methods to solve partial di erential equations introduces an approximation that. underlying constraints on the nature of the material began to form.1 Motivation The material in this book originated from attempts to understand and systemize numerical solution techniques for the partial di erential equations governing the physics of uid ow. At the same time it would seem to invalidate the use of linear algebra for the classi cation of the numerical methods. This. Was there one mathematical structure which could be used to describe the behavior and results of most numerical methods in common use in the eld of uid dynamics? Perhaps the answer is arguable.

the resulting simulation can still be of little practical use if ine cient or inappropriate algorithms are used. This motivates studying the concepts of stability. All of these concepts we hope to clarify in this book. It is safe to say. On the other hand. and the requirements of the simulation. physical reality. simulate the physical phenomena listed above in ways that are not exactly the same as an exact solution to the basic partial di erential equation. these di erences are usually referred to as truncation errors. even if the errors are kept small enough that they can be neglected (for engineering purposes). as long as the error remains in some engineering sense \small". and probably will. if any. Since they are not precisely the same as the original equations. Independent of the speci c application under study. ow conditions. 1. However. that most numerical methods in practical use for solving the nondissipative Euler equations create a modi ed partial di erential equation that produces some form of dissipation. Mathematically. the following sequence of steps generally must be followed in order to obtain a satisfactory solution. including the geometry. nor with deliberately directing an error to a speci c physical process. the theory associated with the numerical analysis of uid mechanics was developed predominantly by scientists deeply interested in the physics of uid ow and.2. these methods give very useful information.2 Background The eld of computational uid dynamics has a broad range of applicability. Regardless of what the numerical errors are called. these errors are often identi ed with a particular physical phenomenon on which they have a strong e ect. and algorithm development in general. for example. and consistency. with identifying an error with a physical process. producing answers that represent little. This motivates studying the concepts of sti ness. 1. 
This means that the errors caused by the numerical approximation result in a modi ed partial di erential equation having additional terms that can be identi ed with the physics of dissipation in the rst case and dispersion in the second. There is nothing wrong. if used and interpreted properly. factorization. of course. they can lead to serious di culties. INTRODUCTION are the ones actually being solved by the numerical process. However.1 Problem Speci cation and Geometry Preparation The rst step involves the speci cation of the problem.2 CHAPTER 1. are often referred to as the modi ed partial di erential equations. they can. The geometry may result from . convergence. if their e ects are not thoroughly understood and controlled. as a consequence. Thus methods are said to have a lot of \arti cial viscosity" or said to be highly dispersive.

For example. These may be steady or unsteady and compressible or incompressible.3 Selection of Gridding Strategy and Numerical Method Next a numerical method and a strategy for dividing the ow domain into cells.2 Selection of Governing Equations and Boundary Conditions Once the problem has been speci ed. for example. We concern ourselves here only with numerical methods requiring such a tessellation of the domain. no geometry need be supplied. . If necessary. and the solution parameters of interest. Possible simpli ed governing equations include the potential. or elements. momentum. must be selected. it is always prudent to consider solving simpli ed forms of the Navier-Stokes equations when the simpli cations retain the physics which are essential to the goals of the simulation. or mesh. Alternatively. Turbulence is an example of a physical process which is rarely simulated in a practical context (at the time of writing) and thus is often modelled. The boundary conditions which must be speci ed depend upon the governing equations. physical models must be chosen for processes which cannot be simulated within the speci ed constraints. 1. at a solid wall. However. etc.2. the Euler equations. As an example of solution parameters of interest in computing the ow eld about an airfoil.2. Flow conditions might include. a set of objectives and constraints must be speci ed. The partial di erential equations resulting from these conservation laws are referred to as the Navier-Stokes equations. an appropriate set of governing equations and boundary conditions must be selected. in a design context. one may be interested in i) the lift and pitching moment only. periodic boundaries. or iii) the details of the ow at some speci c location. the turnaround time required. It is generally accepted that the phenomena of importance to the eld of continuum uid dynamics are governed by the conservation of mass. while the Navier-Stokes equations require the no-slip condition.ow equations. 
The requirements of the simulation include issues such as the level of accuracy needed. symmetry boundaries. BACKGROUND 3 measurements of an existing con guration or may be associated with a design study. 1. the Euler equations require ow tangency to be enforced. in the interest of e ciency. and the thin-layer Navier-Stokes equations. The success of a simulation depends greatly on the engineering insight involved in selecting the governing equations and physical models based on the problem speci cation. which is known as a grid.1.2. Boundary types which may be encountered include solid walls. in ow and out ow boundaries. and energy. Instead. The rst two of these requirements are often in con ict and compromise is necessary. the Reynolds number and Mach number for the ow over an airfoil. ii) the drag as well as the lift and pitching moment.

composite. we can make use of much simpler expressions. including structured. hybrid. and understand most numerical methods used to nd solutions for the complete compressible Navier-Stokes equations.4 CHAPTER 1. analyze. Finally. or spectral methods.and three-dimensional uid ow simulations. INTRODUCTION Many di erent gridding strategies exist. when used intelligently. unstructured. Furthermore. For example. These model equations isolate certain aspects of the physics contained in the complete set of equations. We believe that a thorough understanding of what happens when . Rather than presenting the details of the most advanced methods. 1. This step can require post-processing of the data. the grid can be altered based on the solution in an approach known as solution-adaptive gridding.4 Assessment and Interpretation of Results 1. Tannehill. Many of these issues are not addressed in this book. which are still evolving.3 Overview It should be clear that successful simulation of uid ows can involve a wide range of issues from grid generation to turbulence modelling to the applicability of various simpli ed forms of the Navier-Stokes equations. nite-volume. Hence their numerical solution can illustrate the properties of a given numerical method when applied to a more complicated system of equations which governs similar physical phenomena. The choices of a numerical method and a gridding strategy are strongly interdependent. and Pletcher 1] and Hirsch 2]. to develop. nite-element. the use of nite-di erence methods is typically restricted to structured grids. of di culties and complexities that arise in realistic two. It is critical that the magnitude of both numerical and physical-model errors be well understood. we present a foundation for developing. Although the model equations are extremely simple and easy to solve.2. the results of the simulation must be assessed and interpreted. the so-called \model" equations. 
the success of a simulation can depend on appropriate choices for the problem or class of problems of interest. for example calculation of forces and moments. they have been carefully selected to be representative. and can be aided by sophisticated ow visualization tools and error estimation techniques. Fortunately. with emphasis on nite-di erence and nite-volume methods for the Euler and Navier-Stokes equations. The numerical methods generally used in CFD can be classi ed as nite-di erence. Here again. Some of them are presented in the books by Anderson. and overlapping grids. Instead we focus on numerical methods. analyzing. and understanding such methods.

e. k-D ~ bc O( ) Partial di erential equation Ordinary di erential equation Ordinary di erence equation Right-hand side Particular solution of an ODE or system of ODE's Fixed (time-invariant) steady-state solution k-dimensional space Boundary conditions. and a vector consisting of a collection of scalars in our presentation of hyperbolic systems. PDE ODE O E RHS P. proportional to) .. Otherwise. although we can learn a great deal by studying numerical methods as applied to the model equations and can use that information in the design and application of numerical methods to practical problems. 1. the use of a vector consisting of a collection of scalars should be apparent from the context and is not identi ed by any special notation. The vector symbol ~ is used for the vectors (or column matrices) which contain the values of the dependent variable at the nodes of a grid.4 Notation The notation is generally explained as it is introduced. however. there are many aspects of practical problems which can only be understood in the context of the complete physical systems.S. a scalar quantity in the linear convection and di usion equations. As a word of caution. it should be noted that. Some of the abbreviations used throughout the text are listed and de ned below. the variable u can denote a scalar Cartesian velocity component in the Euler and Navier-Stokes equations. usually a vector A term of order (i. S. For example.S.4. Bold type is reserved for real physical vectors. NOTATION 5 numerical approximations are applied to the model equations is a major rst step in making con dent and competent use of numerical approximations to the Euler and Navier-Stokes equations.1. such as velocity.

6 CHAPTER 1. INTRODUCTION .

in particular. which is useful in understanding the concepts involved in nite-volume schemes. and energy.1 Conservation Laws Conservation laws. The main focus. can be written in the following integral form: Z V (t2 ) QdV . and the recurring use of these model equations allows us to develop a consistent framework of analysis for consistency. The equations are then recast into divergence form.. Q is a vector containing the set of variables which are conserved. 2. such as the Euler and Navier-Stokes equations and our model equations. Z V (t1 ) QdV + Z t2 I t1 S (t) n:FdSdt = Z t2 Z t1 V (t) PdV dt (2. accuracy. and they represent phenomena of importance to the analysis of certain aspects of uid dynamic problems. and convergence. momentum. The equation is a statement of 7 . e. The concepts of convection and di usion are prevalent in our development of numerical methods for computational uid dynamics.Chapter 2 CONSERVATION LAWS AND THE MODEL EQUATIONS We start out by casting our equations in the most general form. The model equations we study have two properties in common. per unit volume. though. the integral conservation-law form. which is natural for nite-di erence schemes. The Euler and Navier-Stokes equations are brie y discussed in this Chapter. They are linear partial di erential equations (PDE's) with coe cients that are constant in both space and time. will be on representative model equations. stability. the convection and di usion equations.g. These equations contain many of the salient mathematical and physical features of the full Navier-Stokes equations. mass.1) In this equation.

2. momentum and energy for a uid.5) @t @x with 2 3 2 u 3 2 3 0 6 6 Q=6 6 6 4 7 7 u7 7 7 5 e 6 6 2 7 6 4 @u 7 6 E = 6 u + p 7 . or tensor.3 and nd a solution for Q on that basis are referred to as nite-di erence methods. If all variables are continuous in time. F is a set of vectors. respectively. 2.2 and nd a solution for Q on that basis are referred to as nite-volume methods. For a Newtonian uid in one dimension. CONSERVATION LAWS AND THE MODEL EQUATIONS the conservation of these quantities in a nite region of space with volume V (t) and surface area S (t) over a nite interval of time t2 . 6 3 @x 6 7 6 6 7 6 4 5 4 4 u @u + u(e + p) 3 @x @T @x 7 7 7 7 7 5 (2. and k are unit vectors in the x y.3) @t where r: is the well-known divergence operator given. and z coordinate directions. containing the ux of Q per unit area per unit time. in Cartesian coordinates. a partial derivative form of a conservation law can also be derived.8 CHAPTER 2. then Eq.6) . On the other hand. In two dimensions. they can be written as @Q + @E = 0 (2. 2. The divergence form of Eq. The vector n is a unit vector normal to the surface pointing outward. and P is the rate of production of Q per unit volume per unit time. by ! @ +j @ +k @ : (2.1 can be rewritten as d Z QdV + I n:FdS = Z PdV (2. is an area A(t) bounded by a closed contour C (t). t1 .2 is obtained by applying Gauss's theorem to the ux integral. Those methods which make various approximations of the derivatives in Eq.4) r: i @x @y @z and i j. 2. 2. Many of the advanced codes written for CFD applications are based on the nite-volume concept.2) dt V (t) S (t) V (t) Those methods which make various numerical approximations of the integrals in Eqs. or cell. leading to @Q + r:F = P (2. the region of space.1 and 2.2 The Navier-Stokes and Euler Equations The Navier-Stokes equations form a coupled system of nonlinear PDE's describing the conservation of mass.

where ρ is the fluid density, u is the velocity, e is the total energy per unit volume, p is the pressure, T is the temperature, μ is the coefficient of viscosity, and κ is the thermal conductivity. The total energy e includes internal energy per unit volume ρε (where ε is the internal energy per unit mass) and kinetic energy per unit volume ρu²/2. These equations must be supplemented by relations between μ and κ and the fluid state as well as an equation of state, such as the ideal gas law. Details can be found in Anderson, Tannehill, and Pletcher [1] and Hirsch [2]. Note that the convective fluxes lead to first derivatives in space, while the viscous and heat conduction terms involve second derivatives. This form of the equations is called conservation-law or conservative form. Non-conservative forms can be obtained by expanding derivatives of products using the product rule or by introducing different dependent variables, such as u and p. Although non-conservative forms of the equations are analytically the same as the above form, they can lead to quite different numerical solutions in terms of shock strength and shock speed, for example. Thus the conservative form is appropriate for solving flows with features such as shock waves.

Many flows of engineering interest are steady (time-invariant), or at least may be treated as such. For such flows, we are often interested in the steady-state solution of the Navier-Stokes equations, with no interest in the transient portion of the solution. The steady solution to the one-dimensional Navier-Stokes equations must satisfy

\[
\frac{\partial E}{\partial x} = 0 \tag{2.7}
\]

If we neglect viscosity and heat conduction, the Euler equations are obtained. In two-dimensional Cartesian coordinates, these can be written as

\[
\frac{\partial Q}{\partial t} + \frac{\partial E}{\partial x} + \frac{\partial F}{\partial y} = 0 \tag{2.8}
\]

with

\[
Q = \begin{bmatrix} q_1 \\ q_2 \\ q_3 \\ q_4 \end{bmatrix} = \begin{bmatrix} \rho \\ \rho u \\ \rho v \\ e \end{bmatrix}, \quad
E = \begin{bmatrix} \rho u \\ \rho u^2 + p \\ \rho u v \\ u(e+p) \end{bmatrix}, \quad
F = \begin{bmatrix} \rho v \\ \rho u v \\ \rho v^2 + p \\ v(e+p) \end{bmatrix} \tag{2.9}
\]

where u and v are the Cartesian velocity components. Later on we will make use of the following form of the Euler equations as well:

\[
\frac{\partial Q}{\partial t} + A\frac{\partial Q}{\partial x} + B\frac{\partial Q}{\partial y} = 0 \tag{2.10}
\]

The matrices A = ∂E/∂Q and B = ∂F/∂Q are known as the flux Jacobians. The flux vectors given above are written in terms of the primitive variables, ρ, u, v, and p. In order

to derive the flux Jacobian matrices, we must first write the flux vectors E and F in terms of the conservative variables, q1, q2, q3, and q4, as follows:

\[
E = \begin{bmatrix} E_1 \\ E_2 \\ E_3 \\ E_4 \end{bmatrix} =
\begin{bmatrix}
q_2 \\[4pt]
(\gamma-1)q_4 + \dfrac{3-\gamma}{2}\dfrac{q_2^2}{q_1} - \dfrac{\gamma-1}{2}\dfrac{q_3^2}{q_1} \\[6pt]
\dfrac{q_3 q_2}{q_1} \\[6pt]
\gamma\dfrac{q_4 q_2}{q_1} - \dfrac{\gamma-1}{2}\dfrac{q_2\left(q_2^2+q_3^2\right)}{q_1^2}
\end{bmatrix} \tag{2.11}
\]

\[
F = \begin{bmatrix} F_1 \\ F_2 \\ F_3 \\ F_4 \end{bmatrix} =
\begin{bmatrix}
q_3 \\[4pt]
\dfrac{q_3 q_2}{q_1} \\[6pt]
(\gamma-1)q_4 + \dfrac{3-\gamma}{2}\dfrac{q_3^2}{q_1} - \dfrac{\gamma-1}{2}\dfrac{q_2^2}{q_1} \\[6pt]
\gamma\dfrac{q_4 q_3}{q_1} - \dfrac{\gamma-1}{2}\dfrac{q_3\left(q_2^2+q_3^2\right)}{q_1^2}
\end{bmatrix} \tag{2.12}
\]

We have assumed that the pressure satisfies p = (γ − 1)[e − ρ(u² + v²)/2] from the ideal gas law, where γ is the ratio of specific heats, cp/cv. From this it follows that the flux Jacobian of E can be written in terms of the conservative variables as

\[
A = \frac{\partial E_i}{\partial q_j} =
\begin{bmatrix}
0 & 1 & 0 & 0 \\[4pt]
a_{21} & (3-\gamma)\dfrac{q_2}{q_1} & (1-\gamma)\dfrac{q_3}{q_1} & \gamma-1 \\[6pt]
-\dfrac{q_2}{q_1}\dfrac{q_3}{q_1} & \dfrac{q_3}{q_1} & \dfrac{q_2}{q_1} & 0 \\[6pt]
a_{41} & a_{42} & a_{43} & \gamma\dfrac{q_2}{q_1}
\end{bmatrix} \tag{2.13}
\]

where

\[
a_{21} = \frac{\gamma-1}{2}\left(\frac{q_3}{q_1}\right)^2 - \frac{3-\gamma}{2}\left(\frac{q_2}{q_1}\right)^2
\]

\[
a_{41} = (\gamma-1)\frac{q_2}{q_1}\left[\left(\frac{q_2}{q_1}\right)^2 + \left(\frac{q_3}{q_1}\right)^2\right] - \gamma\frac{q_4 q_2}{q_1^2}
\]
\[
a_{42} = \gamma\frac{q_4}{q_1} - \frac{\gamma-1}{2}\left[3\left(\frac{q_2}{q_1}\right)^2 + \left(\frac{q_3}{q_1}\right)^2\right]
\]
\[
a_{43} = (1-\gamma)\frac{q_2}{q_1}\frac{q_3}{q_1} \tag{2.14}
\]

and in terms of the primitive variables as

\[
A = \begin{bmatrix}
0 & 1 & 0 & 0 \\
a_{21} & (3-\gamma)u & (1-\gamma)v & \gamma-1 \\
-uv & v & u & 0 \\
a_{41} & a_{42} & a_{43} & \gamma u
\end{bmatrix} \tag{2.15}
\]

where

\[
a_{21} = \frac{\gamma-1}{2}v^2 - \frac{3-\gamma}{2}u^2, \qquad
a_{41} = (\gamma-1)u(u^2+v^2) - \gamma\frac{ue}{\rho},
\]
\[
a_{42} = \gamma\frac{e}{\rho} - \frac{\gamma-1}{2}(3u^2+v^2), \qquad
a_{43} = (1-\gamma)uv \tag{2.16}
\]

Derivation of the two forms of B = ∂F/∂Q is similar. The homogeneous property of the Euler equations is discussed in Appendix C.

The Navier-Stokes equations include both convective and diffusive fluxes. This motivates the choice of our two scalar model equations associated with the physics of convection and diffusion. Furthermore, aspects of convective phenomena associated with coupled systems of equations such as the Euler equations are important in developing numerical methods and boundary conditions. Thus we also study linear hyperbolic systems of PDE's. The eigenvalues of the flux Jacobian matrices are purely real. This is the defining feature of hyperbolic systems of PDE's, which are further discussed in Section 2.5.
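The algebra in the Jacobian entries above is easy to get wrong, so it is worth spot-checking numerically. The following sketch (not part of the text's development; γ = 1.4 and the test state are arbitrary illustrative choices) compares the analytic A written in primitive variables with a central-difference approximation of ∂E/∂Q:

```python
import numpy as np

gamma = 1.4

def E(q):
    # x-direction Euler flux in conservative variables q = [rho, rho*u, rho*v, e]
    q1, q2, q3, q4 = q
    p = (gamma - 1.0) * (q4 - 0.5 * (q2**2 + q3**2) / q1)
    return np.array([q2,
                     q2**2 / q1 + p,
                     q2 * q3 / q1,
                     (q2 / q1) * (q4 + p)])

def A_analytic(q):
    # flux Jacobian dE/dQ in terms of the primitive variables (Eqs. 2.15-2.16)
    q1, q2, q3, q4 = q
    u, v, e = q2 / q1, q3 / q1, q4
    g = gamma
    a21 = 0.5 * (g - 1.0) * v**2 - 0.5 * (3.0 - g) * u**2
    a41 = (g - 1.0) * u * (u**2 + v**2) - g * u * e / q1
    a42 = g * e / q1 - 0.5 * (g - 1.0) * (3.0 * u**2 + v**2)
    a43 = (1.0 - g) * u * v
    return np.array([[0.0, 1.0, 0.0, 0.0],
                     [a21, (3.0 - g) * u, (1.0 - g) * v, g - 1.0],
                     [-u * v, v, u, 0.0],
                     [a41, a42, a43, g * u]])

def A_numerical(q, eps=1e-7):
    # central-difference Jacobian, built column by column
    J = np.zeros((4, 4))
    for j in range(4):
        dq = np.zeros(4)
        dq[j] = eps
        J[:, j] = (E(q + dq) - E(q - dq)) / (2.0 * eps)
    return J

q = np.array([1.2, 0.5, 0.3, 2.5])   # an arbitrary admissible state (p > 0)
print(np.max(np.abs(A_analytic(q) - A_numerical(q))))
```

Agreement to within roughly the square of the perturbation size indicates that the tabulated entries are consistent with the flux definition.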

2.3 The Linear Convection Equation

2.3.1 Differential Form

The simplest linear model for convection and wave propagation is the linear convection equation given by the following PDE:

\[
\frac{\partial u}{\partial t} + a\frac{\partial u}{\partial x} = 0 \tag{2.17}
\]

Here u(x, t) is a scalar quantity propagating with speed a, a real constant which may be positive or negative. The manner in which the boundary conditions are specified separates the following two phenomena for which this equation is a model:

(1) In one type, the scalar quantity u is given on one boundary, corresponding to a wave entering the domain through this "inflow" boundary. No boundary condition is specified at the opposite side, the "outflow" boundary. This is consistent in terms of the well-posedness of a first-order PDE. Hence the wave leaves the domain through the outflow boundary without distortion or reflection. This type of phenomenon is referred to, simply, as the convection problem. It represents most of the "usual" situations encountered in convecting systems. Note that the left-hand boundary is the inflow boundary when a is positive, while the right-hand boundary is the inflow boundary when a is negative.

(2) In the other type, the flow being simulated is periodic. At any given time, what enters on one side of the domain must be the same as that which is leaving on the other. This is referred to as the biconvection problem. It is the simplest to study and serves to illustrate many of the basic properties of numerical methods applied to problems involving convection, without special consideration of boundaries. Hence, we pay a great deal of attention to it in the initial chapters.

Now let us consider a situation in which the initial condition is given by u(x, 0) = u0(x), and the domain is infinite. It is easy to show by substitution that the exact solution to the linear convection equation is then

\[
u(x, t) = u_0(x - at) \tag{2.18}
\]

The initial waveform propagates unaltered with speed |a| to the right if a is positive and to the left if a is negative. With periodic boundary conditions, the waveform travels through one boundary and reappears at the other boundary, eventually returning to its initial position. In this case, the process continues forever without any

change in the shape of the solution. Preserving the shape of the initial condition u0(x) can be a difficult challenge for a numerical method.

2.3.2 Solution in Wave Space

We now examine the biconvection problem in more detail. Let the domain be given by 0 ≤ x ≤ 2π. We restrict our attention to initial conditions in the form

\[
u(x, 0) = f(0)e^{i\kappa x} \tag{2.19}
\]

where f(0) is a complex constant, and κ is the wavenumber. In order to satisfy the periodic boundary conditions, κ must be an integer. It is a measure of the number of wavelengths within the domain. With such an initial condition, the solution can be written as

\[
u(x, t) = f(t)e^{i\kappa x} \tag{2.20}
\]

where the time dependence is contained in the complex function f(t). Substituting this solution into the linear convection equation, Eq. 2.17, we find that f(t) satisfies the following ordinary differential equation (ODE)

\[
\frac{df}{dt} = -ia\kappa f \tag{2.21}
\]

which has the solution

\[
f(t) = f(0)e^{-ia\kappa t} \tag{2.22}
\]

Substituting f(t) into Eq. 2.20 gives the following solution

\[
u(x, t) = f(0)e^{i\kappa(x - at)} = f(0)e^{i(\kappa x - \omega t)} \tag{2.23}
\]

where the frequency, ω, the wavenumber, κ, and the phase speed, a, are related by

\[
\omega = \kappa a \tag{2.24}
\]

The relation between the frequency and the wavenumber is known as the dispersion relation. The linear relation given by Eq. 2.24 is characteristic of wave propagation in a nondispersive medium. This means that the phase speed is the same for all wavenumbers. As we shall see later, most numerical methods introduce some dispersion; that is, in a simulation, waves with different wavenumbers travel at different speeds.

An arbitrary initial waveform can be produced by summing initial conditions of the form of Eq. 2.19. For M modes, one obtains

\[
u(x, 0) = \sum_{m=1}^{M} f_m(0)e^{i\kappa_m x} \tag{2.25}
\]

where the wavenumbers are often ordered such that κ1 ≤ κ2 ≤ ... ≤ κM. Since the wave equation is linear, the solution is obtained by summing solutions of the form of Eq. 2.23, giving

\[
u(x, t) = \sum_{m=1}^{M} f_m(0)e^{i\kappa_m(x - at)} \tag{2.26}
\]

Dispersion and dissipation resulting from a numerical approximation will cause the shape of the solution to change from that of the original waveform.

2.4 The Diffusion Equation

2.4.1 Differential Form

Diffusive fluxes are associated with molecular motion in a continuum fluid. A simple linear model equation for a diffusive process is

\[
\frac{\partial u}{\partial t} = \nu\frac{\partial^2 u}{\partial x^2} \tag{2.27}
\]

where ν is a positive real constant. For example, with u representing the temperature, this parabolic PDE governs the diffusion of heat in one dimension. Boundary conditions can be periodic, Dirichlet (specified u), Neumann (specified ∂u/∂x), or mixed Dirichlet/Neumann.

In contrast to the linear convection equation, the diffusion equation has a nontrivial steady-state solution, which is one that satisfies the governing PDE with the partial derivative in time equal to zero. In the case of Eq. 2.27, the steady-state solution must satisfy

\[
\frac{\partial^2 u}{\partial x^2} = 0 \tag{2.28}
\]

Therefore, u must vary linearly with x at steady state such that the boundary conditions are satisfied. Other steady-state solutions are obtained if a source term g(x) is added to Eq. 2.27, as follows:

\[
\frac{\partial u}{\partial t} = \nu\left[\frac{\partial^2 u}{\partial x^2} - g(x)\right] \tag{2.29}
\]

giving a steady-state solution which satisfies

\[
\frac{\partial^2 u}{\partial x^2} - g(x) = 0 \tag{2.30}
\]
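Before leaving the convection equation, the wave-space solution of Section 2.3.2 is easy to confirm numerically. In the sketch below (not part of the text's development; the speed a = 0.7, the two-mode initial condition, and the 64-point grid are arbitrary illustrative choices), the discrete Fourier coefficients of the exact solution u0(x − at) are compared mode by mode against f(0)e^{−iκat}, as predicted by Eq. 2.22:

```python
import numpy as np

a = 0.7
M = 64
x = np.linspace(0.0, 2 * np.pi, M, endpoint=False)

u_init = np.sin(x) + 0.25 * np.cos(3 * x)                 # two modes, Eq. 2.25
t = 0.9
u_exact = np.sin(x - a * t) + 0.25 * np.cos(3 * (x - a * t))  # Eq. 2.26

# Fourier coefficients of the convected solution should equal the initial
# coefficients multiplied by exp(-i*a*kappa*t) for every wavenumber kappa
f0 = np.fft.fft(u_init)
ft = np.fft.fft(u_exact)
kappa = np.fft.fftfreq(M, d=1.0 / M)      # integer wavenumbers 0,1,...,-1
predicted = f0 * np.exp(-1j * a * kappa * t)
print(np.max(np.abs(ft - predicted)))
```

The agreement to machine precision reflects the nondispersive character of Eq. 2.17: every mode moves at the same phase speed a, so the waveform's shape is preserved.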

In two dimensions, the diffusion equation becomes

\[
\frac{\partial u}{\partial t} = \nu\left[\frac{\partial^2 u}{\partial x^2} + \frac{\partial^2 u}{\partial y^2} - g(x, y)\right] \tag{2.31}
\]

where g(x, y) is again a source term. The corresponding steady equation is

\[
\frac{\partial^2 u}{\partial x^2} + \frac{\partial^2 u}{\partial y^2} - g(x, y) = 0 \tag{2.32}
\]

While Eq. 2.31 is parabolic, Eq. 2.32 is elliptic. The latter is known as the Poisson equation for nonzero g, and as Laplace's equation for zero g.

2.4.2 Solution in Wave Space

We now consider a series solution to Eq. 2.27. Let the domain be given by 0 ≤ x ≤ π with boundary conditions u(0) = ua, u(π) = ub. It is clear that the steady-state solution is given by a linear function which satisfies the boundary conditions, i.e., h(x) = ua + (ub − ua)x/π. Let the initial condition be

\[
u(x, 0) = \sum_{m=1}^{M} f_m(0)\sin \kappa_m x + h(x) \tag{2.33}
\]

where κ must be an integer in order to satisfy the boundary conditions. A solution of the form

\[
u(x, t) = \sum_{m=1}^{M} f_m(t)\sin \kappa_m x + h(x) \tag{2.34}
\]

satisfies the initial and boundary conditions. Substituting this form into Eq. 2.27 gives the following ODE for fm:

\[
\frac{df_m}{dt} = -\kappa_m^2 \nu f_m \tag{2.35}
\]

and we find

\[
f_m(t) = f_m(0)e^{-\kappa_m^2 \nu t} \tag{2.36}
\]

Substituting fm(t) into Eq. 2.34, we obtain

\[
u(x, t) = \sum_{m=1}^{M} f_m(0)e^{-\kappa_m^2 \nu t}\sin \kappa_m x + h(x) \tag{2.37}
\]

The steady-state solution (t → ∞) is simply h(x). Eq. 2.37 shows that high wavenumber components (large κm) of the solution decay more rapidly than low wavenumber components, consistent with the physics of diffusion.
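A short numerical sketch of Eq. 2.37 makes this decay behavior concrete (the values of ν, the boundary data, and the modal amplitudes below are arbitrary illustrative choices, not taken from the text):

```python
import numpy as np

nu = 1.0
ua, ub = 2.0, -1.0
x = np.linspace(0.0, np.pi, 101)

def h(xs):
    # steady-state solution: linear in x, matching u(0) = ua, u(pi) = ub
    return ua + (ub - ua) * xs / np.pi

f0 = {1: 1.0, 2: 0.5, 3: 0.25}       # f_m(0) for wavenumbers kappa_m = 1, 2, 3

def u(xs, t):
    # Eq. 2.37: each sine mode decays like exp(-kappa^2 * nu * t)
    s = h(xs)
    for m, fm in f0.items():
        s = s + fm * np.exp(-m**2 * nu * t) * np.sin(m * xs)
    return s

# the transient dies out, the high-wavenumber modes fastest, leaving h(x)
print(np.max(np.abs(u(x, 0.0) - h(x))))   # O(1) initial deviation
print(np.max(np.abs(u(x, 5.0) - h(x))))   # dominated by the slowest mode, e^{-5}
```

By t = 5 the κ = 2 and κ = 3 modes have decayed by factors of e^{−20} and e^{−45}, so only the κ = 1 mode (amplitude e^{−5}) remains visible above the steady solution.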

2.5 Linear Hyperbolic Systems

The Euler equations, Eq. 2.8, form a hyperbolic system of partial differential equations. Other systems of equations governing convection and wave propagation phenomena, such as the Maxwell equations describing the propagation of electromagnetic waves, are also of hyperbolic type. Many aspects of numerical methods for such systems can be understood by studying a one-dimensional constant-coefficient linear system of the form

\[
\frac{\partial u}{\partial t} + A\frac{\partial u}{\partial x} = 0 \tag{2.38}
\]

where u = u(x, t) is a vector of length m and A is a real m × m matrix. For conservation laws, this equation can also be written in the form

\[
\frac{\partial u}{\partial t} + \frac{\partial f}{\partial x} = 0 \tag{2.39}
\]

where f is the flux vector and A = ∂f/∂u is the flux Jacobian matrix. The entries in the flux Jacobian are

\[
a_{ij} = \frac{\partial f_i}{\partial u_j} \tag{2.40}
\]

The flux Jacobian for the Euler equations is derived in Section 2.2.

Such a system is hyperbolic if A is diagonalizable with real eigenvalues.¹ Thus

\[
\Lambda = X^{-1}AX \tag{2.41}
\]

where Λ is a diagonal matrix containing the eigenvalues of A, and X is the matrix of right eigenvectors. Premultiplying Eq. 2.38 by X⁻¹, postmultiplying A by the product XX⁻¹, and noting that X and X⁻¹ are constants, we obtain

\[
\frac{\partial X^{-1}u}{\partial t} + \underbrace{X^{-1}AX}_{\Lambda}\,\frac{\partial X^{-1}u}{\partial x} = 0 \tag{2.42}
\]

With w = X⁻¹u, this can be rewritten as

\[
\frac{\partial w}{\partial t} + \Lambda\frac{\partial w}{\partial x} = 0 \tag{2.43}
\]

When written in this manner, the equations have been decoupled into m scalar equations of the form

\[
\frac{\partial w_i}{\partial t} + \lambda_i\frac{\partial w_i}{\partial x} = 0 \tag{2.44}
\]

¹See Appendix A for a brief review of some basic relations and definitions from linear algebra.
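The diagonalization in Eqs. 2.41-2.44 can be carried out numerically for any particular matrix. A minimal sketch (the 2 × 2 system below, the first-order form of the wave equation with c = 2, is an arbitrary illustrative choice, not a case worked in the text):

```python
import numpy as np

# A constant-coefficient hyperbolic system u_t + A u_x = 0:
# the first-order form of the wave equation with c = 2
c = 2.0
A = np.array([[0.0, c],
              [c, 0.0]])

lam, X = np.linalg.eig(A)          # right eigenvectors in the columns of X
Lam = np.linalg.inv(X) @ A @ X     # Eq. 2.41: Lambda = X^{-1} A X

print(np.sort(lam))                # real wave speeds: -c and +c
print(np.round(Lam, 12))           # diagonal, so w = X^{-1} u decouples
```

Because Λ is diagonal with real entries, the characteristic variables w = X⁻¹u obey two independent convection equations of the form of Eq. 2.44, one with wave speed +c and one with wave speed −c.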

The elements of w are known as characteristic variables. Each characteristic variable satisfies the linear convection equation with the speed given by the corresponding eigenvalue of A. Based on the above, we see that a hyperbolic system in the form of Eq. 2.38 has a solution given by the superposition of waves which can travel in either the positive or negative directions and at varying speeds. While the scalar linear convection equation is clearly an excellent model equation for hyperbolic systems, we must ensure that our numerical methods are appropriate for wave speeds of arbitrary sign and possibly widely varying magnitudes.

The one-dimensional Euler equations can also be diagonalized, leading to three equations in the form of the linear convection equation, although they remain nonlinear, of course. The eigenvalues of the flux Jacobian matrix, or wave speeds, are u, u + c, and u − c, where u is the local fluid velocity, and c = √(γp/ρ) is the local speed of sound. The speed u is associated with convection of the fluid, while u + c and u − c are associated with sound waves. Therefore, in a subsonic flow, where |u| < c, wave speeds of both positive and negative sign are present, corresponding to the fact that sound waves can travel upstream in a subsonic flow. In a supersonic flow, where |u| > c, all of the wave speeds have the same sign.

The signs of the eigenvalues of the matrix A are also important in determining suitable boundary conditions. The characteristic variables each satisfy the linear convection equation with the wave speed given by the corresponding eigenvalue. Therefore, the boundary conditions can be specified accordingly. That is, characteristic variables associated with positive eigenvalues can be specified at the left boundary, which is the inflow boundary for these variables. Characteristic variables associated with negative eigenvalues can be specified at the right boundary, which corresponds to inflow for these variables. While other boundary condition treatments are possible, they must be consistent with this approach.

2.6 Problems

1. Show that the 1-D Euler equations can be written in terms of the primitive variables R = [ρ u p]^T as follows:

\[
\frac{\partial R}{\partial t} + M\frac{\partial R}{\partial x} = 0
\]

where

\[
M = \begin{bmatrix} u & \rho & 0 \\ 0 & u & \rho^{-1} \\ 0 & \gamma p & u \end{bmatrix}
\]

Assume an ideal gas, p = (γ − 1)(e − ρu²/2).

2. Find the eigenvalues and eigenvectors of the matrix M derived in question 1.

3. Derive the flux Jacobian matrix A = ∂E/∂Q for the 1-D Euler equations resulting from the conservative variable formulation (Eq. 2.5). Find its eigenvalues and compare with those obtained in question 2.

4. Show that the two matrices M and A derived in questions 1 and 3, respectively, are related by a similarity transform. (Hint: make use of the matrix S = ∂Q/∂R.)

5. Write the 2-D diffusion equation, Eq. 2.31, in the form of Eq. 2.2.

6. Given the initial condition u(x, 0) = sin x defined on 0 ≤ x ≤ 2π, write it in the form of Eq. 2.25, that is, find the necessary values of fm(0). (Hint: use M = 2 with κ1 = 1 and κ2 = −1.) Next consider the same initial condition defined only at x = 2πj/4, j = 0, 1, 2, 3. Find the values of fm(0) required to reproduce the initial condition at these discrete points using M = 4 with κm = m − 1.

7. Plot the first three basis functions used in constructing the exact solution to the diffusion equation in Section 2.4.2. Next consider a solution with boundary conditions ua = ub = 0, and initial conditions from Eq. 2.33 with fm(0) = 1 for 1 ≤ m ≤ 3, fm(0) = 0 for m > 3. Plot the initial condition on the domain 0 ≤ x ≤ π. Plot the solution at t = 1 with ν = 1.

8. Write the classical wave equation ∂²u/∂t² = c²∂²u/∂x² as a first-order system, i.e., in the form

\[
\frac{\partial U}{\partial t} + A\frac{\partial U}{\partial x} = 0
\]

where U = [∂u/∂x ∂u/∂t]^T. Find the eigenvalues and eigenvectors of A.

9. The Cauchy-Riemann equations are formed from the coupling of the steady compressible continuity (conservation of mass) equation

\[
\frac{\partial \rho u}{\partial x} + \frac{\partial \rho v}{\partial y} = 0
\]

and the vorticity definition

\[
\omega = -\frac{\partial v}{\partial x}
\]

\[
\phantom{\omega = {}} + \frac{\partial u}{\partial y} = 0
\]

where ω = 0 for irrotational flow. For isentropic and homenthalpic flow, the system is closed by the relation

\[
\rho = \left[1 - \frac{\gamma - 1}{2}\left(u^2 + v^2\right)\right]^{1/(\gamma-1)}
\]

Note that the variables have been nondimensionalized. Combining the two PDE's, we have

\[
\frac{\partial f(q)}{\partial x} + \frac{\partial g(q)}{\partial y} = 0
\]

where

\[
q = \begin{bmatrix} u \\ v \end{bmatrix}, \qquad
f = \begin{bmatrix} \rho u \\ -v \end{bmatrix}, \qquad
g = \begin{bmatrix} \rho v \\ u \end{bmatrix}
\]

One approach to solving these equations is to add a time-dependent term and find the steady solution of the following equation:

\[
\frac{\partial q}{\partial t} + \frac{\partial f}{\partial x} + \frac{\partial g}{\partial y} = 0
\]

(a) Find the flux Jacobians of f and g with respect to q.
(b) Determine the eigenvalues of the flux Jacobians.
(c) Determine the conditions (in terms of ρ and u) under which the system is hyperbolic, i.e., has real eigenvalues.
(d) Are the above fluxes homogeneous? (See Appendix C.)


Chapter 3

FINITE-DIFFERENCE APPROXIMATIONS

In common with the equations governing unsteady fluid flow, our model equations contain partial derivatives with respect to both space and time. One can approximate these simultaneously and then solve the resulting difference equations. Alternatively, one can approximate the spatial derivatives first, thereby producing a system of ordinary differential equations. The time derivatives are approximated next, leading to a time-marching method which produces a set of difference equations. This is the approach emphasized here. In this chapter, the concept of finite-difference approximations to partial derivatives is presented. These can be applied either to spatial derivatives or time derivatives. Our emphasis in this chapter is on spatial derivatives; time derivatives are treated in Chapter 6. Strategies for applying these finite-difference approximations will be discussed in Chapter 4.

All of the material below is presented in a Cartesian system. We emphasize the fact that quite general classes of meshes expressed in general curvilinear coordinates in physical space can be transformed to a uniform Cartesian mesh with equispaced intervals in a so-called computational space, as shown in Figure 3.1. The computational space is uniform; all the geometric variation is absorbed into variable coefficients of the transformed equations. For this reason, in much of the following accuracy analysis, we use an equispaced Cartesian system without being unduly restrictive or losing practical application.

3.1 Meshes and Finite-Difference Notation

The simplest mesh involving both time and space is shown in Figure 3.2. Inspection of this figure permits us to define the terms and notation needed to describe finite-difference approximations.

[Figure 3.1: Physical and computational spaces.]

In general, the dependent variables, u, for example, are functions of the independent variables t, and x, y, z. For the first several chapters we consider primarily the 1-D case u = u(x, t). When only one variable is denoted, dependence on the other is assumed. The mesh index for x is always j, and that for t is always n. Then on an equispaced grid

\[
x = x_j = j\Delta x \tag{3.1}
\]
\[
t = t_n = n\Delta t = nh \tag{3.2}
\]

where Δx is the spacing in x and Δt the spacing in t, as shown in Figure 3.2. Note that h = Δt throughout. Later k and l are used for y and z in a similar way. When n, j, k, l are used for other purposes (which is sometimes necessary), local context should make the meaning obvious.

The convention for subscript and superscript indexing is as follows:

\[
u(t + kh) = u([n+k]h) = u_{n+k}
\]
\[
u(x + m\Delta x) = u([j+m]\Delta x) = u_{j+m} \tag{3.3}
\]
\[
u(x + m\Delta x,\; t + kh) = u_{j+m}^{(n+k)}
\]

Notice that when used alone, both the time and space indices appear as a subscript, but when used together, time is always a superscript and is usually enclosed with parentheses to distinguish it from an exponent.

[Figure 3.2: Space-time grid arrangement.]

In this text, subscripts on dependent variables are never used to express derivatives. Thus ux will not be used to represent the first derivative of u with respect to x. Derivatives are expressed according to the usual conventions. Thus for partial derivatives in space or time we use interchangeably

\[
\partial_x u = \frac{\partial u}{\partial x}, \qquad \partial_t u = \frac{\partial u}{\partial t}, \qquad \partial_{xx} u = \frac{\partial^2 u}{\partial x^2}, \quad \text{etc.} \tag{3.4}
\]

For the ordinary time derivative in the study of ODE's we use

\[
u' = \frac{du}{dt} \tag{3.5}
\]

The notation for difference approximations follows the same philosophy, but (with one exception) it is not unique. By this we mean that the symbol δ is used to represent a difference approximation to a derivative such that, for example,

\[
\delta_x \approx \partial_x, \qquad \delta_{xx} \approx \partial_{xx} \tag{3.6}
\]

but the precise nature (and order) of the approximation is not carried in the symbol δ. Other ways are used to determine its precise meaning. The one exception is the symbol Δ, which is defined such that

\[
\Delta t_n = t_{n+1} - t_n, \qquad \Delta x_j = x_{j+1} - x_j, \qquad \Delta u_n = u_{n+1} - u_n, \quad \text{etc.} \tag{3.7}
\]

When there is no subscript on Δt or Δx, the spacing is uniform.

3.2 Space Derivative Approximations

A difference approximation can be generated or evaluated by means of a simple Taylor series expansion. For example, consider u(x, t) with t fixed. Then, following the notation convention given in Eqs. 3.1 to 3.3, x = jΔx and u(x + kΔx) = u(jΔx + kΔx) = u_{j+k}. Expanding the latter term about x gives¹

\[
u_{j+k} = u_j + (k\Delta x)\left(\frac{\partial u}{\partial x}\right)_j + \frac{1}{2}(k\Delta x)^2\left(\frac{\partial^2 u}{\partial x^2}\right)_j + \cdots + \frac{1}{n!}(k\Delta x)^n\left(\frac{\partial^n u}{\partial x^n}\right)_j + \cdots \tag{3.8}
\]

Local difference approximations to a given partial derivative can be formed from linear combinations of u_j and u_{j+k} for k = ±1, ±2, ....

For example, consider the Taylor series expansion for u_{j+1}:

\[
u_{j+1} = u_j + \Delta x\left(\frac{\partial u}{\partial x}\right)_j + \frac{1}{2}\Delta x^2\left(\frac{\partial^2 u}{\partial x^2}\right)_j + \cdots \tag{3.9}
\]

Now subtract u_j and divide by Δx to obtain

\[
\frac{u_{j+1} - u_j}{\Delta x} = \left(\frac{\partial u}{\partial x}\right)_j + \frac{1}{2}\Delta x\left(\frac{\partial^2 u}{\partial x^2}\right)_j + \cdots \tag{3.10}
\]

Thus the expression (u_{j+1} − u_j)/Δx is a reasonable approximation for (∂u/∂x)_j as long as Δx is small relative to some pertinent length scale. Next consider the space difference approximation (u_{j+1} − u_{j−1})/(2Δx). Expand the terms in the numerator about j and regroup the result to form the following equation

\[
\frac{u_{j+1} - u_{j-1}}{2\Delta x} - \left(\frac{\partial u}{\partial x}\right)_j = \frac{1}{6}\Delta x^2\left(\frac{\partial^3 u}{\partial x^3}\right)_j + \frac{1}{120}\Delta x^4\left(\frac{\partial^5 u}{\partial x^5}\right)_j + \cdots \tag{3.11}
\]

When expressed in this manner, it is clear that the discrete terms on the left side of the equation represent a first derivative with a certain amount of error which appears on the right side of the equal sign. It is also clear that the error depends on the grid spacing to a certain order. The error term containing the grid spacing to the lowest power gives the order of the method. From Eq. 3.10, we see that the expression (u_{j+1} − u_j)/Δx is a first-order approximation to (∂u/∂x)_j. Similarly, Eq. 3.11 shows that (u_{j+1} − u_{j−1})/(2Δx) is a second-order approximation to a first derivative. The latter is referred to as the three-point centered difference approximation, and one often sees the summary result presented in the form

\[
\left(\frac{\partial u}{\partial x}\right)_j = \frac{u_{j+1} - u_{j-1}}{2\Delta x} + O(\Delta x^2) \tag{3.12}
\]

¹We assume that u(x, t) is continuously differentiable.
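The order of accuracy claimed in Eq. 3.12 is easy to confirm numerically: for a second-order approximation, halving Δx should reduce the error by a factor of about 2² = 4. A minimal sketch (u = sin x, the evaluation point, and the step sizes are arbitrary illustrative choices):

```python
import numpy as np

def central(u, x, dx):
    # three-point centered difference, Eq. 3.12
    return (u(x + dx) - u(x - dx)) / (2.0 * dx)

u, dudx = np.sin, np.cos
x0 = 1.0
e1 = abs(central(u, x0, 0.10) - dudx(x0))
e2 = abs(central(u, x0, 0.05) - dudx(x0))
print(e1 / e2)       # approximately 4 for a second-order scheme
```

The ratio is not exactly 4 because of the higher-order terms on the right side of Eq. 3.11; it approaches 4 as Δx is refined (until roundoff error intrudes).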

3.3 Finite-Difference Operators

3.3.1 Point Difference Operators

Perhaps the most common examples of finite-difference formulas are the three-point centered-difference approximations for the first and second derivatives:²

\[
\left(\frac{\partial u}{\partial x}\right)_j = \frac{1}{2\Delta x}\left(u_{j+1} - u_{j-1}\right) + O(\Delta x^2) \tag{3.13}
\]
\[
\left(\frac{\partial^2 u}{\partial x^2}\right)_j = \frac{1}{\Delta x^2}\left(u_{j+1} - 2u_j + u_{j-1}\right) + O(\Delta x^2) \tag{3.14}
\]

These are the basis for point difference operators since they give an approximation to a derivative at one discrete point in a mesh in terms of surrounding points. However, neither of these expressions tells us how other points in the mesh are differenced or how boundary conditions are enforced. Such additional information requires a more sophisticated formulation.

3.3.2 Matrix Difference Operators

Consider the relation

\[
(\delta_{xx}u)_j = \frac{1}{\Delta x^2}\left(u_{j+1} - 2u_j + u_{j-1}\right) \tag{3.15}
\]

which is a point difference approximation to a second derivative. Now let us derive a matrix operator representation for the same approximation. Consider the four-point mesh with boundary points at a and b shown below. Notice that when we speak of "the number of points in a mesh", we mean the number of interior points excluding the boundaries.

    a   1   2   3   4   b
  x=0                   π
      j=1     ...     M

  Four-point mesh. Δx = π/(M + 1)

Now impose Dirichlet boundary conditions, u(0) = ua, u(π) = ub, and use the centered difference approximation given by Eq. 3.15 at every point in the mesh. We

²We will derive the second derivative operator shortly.

arrive at the four equations:

\[
(\delta_{xx}u)_1 = \frac{1}{\Delta x^2}\left(u_a - 2u_1 + u_2\right)
\]
\[
(\delta_{xx}u)_2 = \frac{1}{\Delta x^2}\left(u_1 - 2u_2 + u_3\right)
\]
\[
(\delta_{xx}u)_3 = \frac{1}{\Delta x^2}\left(u_2 - 2u_3 + u_4\right)
\]
\[
(\delta_{xx}u)_4 = \frac{1}{\Delta x^2}\left(u_3 - 2u_4 + u_b\right) \tag{3.16}
\]

Writing these equations in the more suggestive form

\[
(\delta_{xx}u)_1 = \left(\,u_a - 2u_1 + u_2\phantom{{}+u_b}\right)/\Delta x^2
\]
\[
(\delta_{xx}u)_2 = \left(\,u_1 - 2u_2 + u_3\phantom{{}+u_b}\right)/\Delta x^2
\]
\[
(\delta_{xx}u)_3 = \left(\,u_2 - 2u_3 + u_4\phantom{{}+u_b}\right)/\Delta x^2
\]
\[
(\delta_{xx}u)_4 = \left(\,u_3 - 2u_4 + u_b\phantom{{}+u_b}\right)/\Delta x^2 \tag{3.17}
\]

it is clear that we can express them in a vector-matrix form, and further, that the resulting matrix has a very special form. Introducing

\[
\vec{u} = \begin{bmatrix} u_1 \\ u_2 \\ u_3 \\ u_4 \end{bmatrix}, \qquad
\vec{bc} = \frac{1}{\Delta x^2}\begin{bmatrix} u_a \\ 0 \\ 0 \\ u_b \end{bmatrix} \tag{3.18}
\]

we can rewrite Eq. 3.17 as

\[
\delta_{xx}\vec{u} = A\vec{u} + \vec{bc} \tag{3.19}
\]

where

\[
A = \frac{1}{\Delta x^2}\begin{bmatrix} -2 & 1 & & \\ 1 & -2 & 1 & \\ & 1 & -2 & 1 \\ & & 1 & -2 \end{bmatrix} \tag{3.20}
\]

This example illustrates a matrix difference operator. Each line of a matrix difference operator is based on a point difference operator, but the point operators used from line to line are not necessarily the same. For example, boundary conditions may dictate that the lines at or near the bottom or top of the matrix be modified. In the extreme case of the matrix difference operator representing a spectral method, none

of the lines is the same. The matrix operators representing the three-point central-difference approximations for a first and second derivative with Dirichlet boundary conditions on a four-point mesh are

\[
\delta_x = \frac{1}{2\Delta x}\begin{bmatrix} 0 & 1 & & \\ -1 & 0 & 1 & \\ & -1 & 0 & 1 \\ & & -1 & 0 \end{bmatrix}, \qquad
\delta_{xx} = \frac{1}{\Delta x^2}\begin{bmatrix} -2 & 1 & & \\ 1 & -2 & 1 & \\ & 1 & -2 & 1 \\ & & 1 & -2 \end{bmatrix} \tag{3.21}
\]

As a further example, replace the fourth line in Eq. 3.16 by the following point operator for a Neumann boundary condition (see Section 3.6):

\[
(\delta_{xx}u)_4 = \frac{2}{3}\frac{1}{\Delta x}\left(\frac{\partial u}{\partial x}\right)_b - \frac{2}{3}\frac{1}{\Delta x^2}\left(u_4 - u_3\right) \tag{3.22}
\]

where the boundary condition is

\[
\left(\frac{\partial u}{\partial x}\right)_{x=\pi} = \left(\frac{\partial u}{\partial x}\right)_b \tag{3.23}
\]

Then the matrix operator for a three-point central-differencing scheme at interior points and a second-order approximation for a Neumann condition on the right is given by

\[
\delta_{xx} = \frac{1}{\Delta x^2}\begin{bmatrix} -2 & 1 & & \\ 1 & -2 & 1 & \\ & 1 & -2 & 1 \\ & & 2/3 & -2/3 \end{bmatrix} \tag{3.24}
\]

Each of these matrix difference operators is a square matrix with elements that are all zeros except for those along bands which are clustered around the central diagonal. We call such a matrix a banded matrix and introduce the notation

\[
B(M{:}\;a, b, c) = \begin{bmatrix} b & c & & & \\ a & b & c & & \\ & \ddots & \ddots & \ddots & \\ & & a & b & c \\ & & & a & b \end{bmatrix}
\begin{matrix} 1 \\ \\ \\ \\ M \end{matrix} \tag{3.25}
\]

where the matrix dimensions are M × M. Use of M in the argument is optional, and the illustration is given for a simple tridiagonal matrix although any number of

bands is a possibility. A tridiagonal matrix without constants along the bands can be expressed as B(ã, b̃, c̃). The arguments for a banded matrix are always odd in number, and the central one always refers to the central diagonal.

We can now generalize our previous examples. Defining ū as³

\[
\vec{u} = \begin{bmatrix} u_1 \\ u_2 \\ u_3 \\ \vdots \\ u_M \end{bmatrix} \tag{3.26}
\]

we can approximate the second derivative of ū by

\[
\delta_{xx}\vec{u} = \frac{1}{\Delta x^2}B(1, -2, 1)\,\vec{u} + \vec{bc} \tag{3.27}
\]

where bc̄ stands for the vector holding the Dirichlet boundary conditions on the left and right sides:

\[
\vec{bc} = \frac{1}{\Delta x^2}\left[u_a,\; 0,\; \ldots,\; 0,\; u_b\right]^T \tag{3.28}
\]

If we prescribe Neumann boundary conditions on the right side, as in Eqs. 3.22-3.24, we find

\[
\delta_{xx}\vec{u} = \frac{1}{\Delta x^2}B(\tilde{a}, \tilde{b}, 1)\,\vec{u} + \vec{bc} \tag{3.29}
\]

where

\[
\tilde{a} = \left[1,\; 1,\; \ldots,\; 1,\; 2/3\right]^T, \qquad
\tilde{b} = \left[-2,\; -2,\; \ldots,\; -2,\; -2/3\right]^T,
\]
\[
\vec{bc} = \left[\frac{u_a}{\Delta x^2},\; 0,\; \ldots,\; 0,\; \frac{2}{3}\frac{1}{\Delta x}\left(\frac{\partial u}{\partial x}\right)_b\right]^T
\]

Notice that the matrix operators given by Eqs. 3.27 and 3.29 carry more information than the point operator given by Eq. 3.15. In Eqs. 3.27 and 3.29, the boundary conditions have been uniquely specified and it is clear that the same point operator has been applied at every point in the field except at the boundaries. The ability to specify in the matrix derivative operator the exact nature of the approximation at the

³Note that ū is a function of time only since each element corresponds to one specific spatial location.

various points in the field including the boundaries permits the use of quite general constructions which will be useful later in considerations of stability.

Since we make considerable use of both matrix and point operators, it is important to establish a relation between them. A point operator is generally written for some derivative at the reference point j in terms of neighboring values of the function. For example

\[
(\delta_x u)_j = a_2 u_{j-2} + a_1 u_{j-1} + b u_j + c_1 u_{j+1} \tag{3.30}
\]

might be the point operator for a first derivative. The corresponding matrix operator has for its arguments the coefficients giving the weights to the values of the function at the various locations. A j-shift in the point operator corresponds to a diagonal shift in the matrix operator. Thus the matrix equivalent of Eq. 3.30 is

\[
\delta_x \vec{u} = B(a_2, a_1, b, c_1, 0)\,\vec{u} \tag{3.31}
\]

Note the addition of a zero in the fifth element which makes it clear that b is the coefficient of u_j.

3.3.3 Periodic Matrices

The above illustrated cases in which the boundary conditions are fixed. If the boundary conditions are periodic, the form of the matrix operator changes. Consider the eight-point periodic mesh shown below. This can either be presented on a linear mesh with repeated entries, or more suggestively on a circular mesh as in Figure 3.3. When the mesh is laid out on the perimeter of a circle, it doesn't really matter where the numbering starts as long as it "ends" at the point just preceding its starting location.

    ... 7  8  1  2  3  4  5  6  7  8  1  2 ...
  x =      0                            2π
       j = 0  1            ...          M

  Eight points on a linear periodic mesh. Δx = 2π/M

The matrix that represents differencing schemes for scalar equations on a periodic mesh is referred to as a periodic matrix. A typical periodic tridiagonal matrix operator

with nonuniform entries is given for a 6-point mesh by

\[
B_p(6{:}\;\tilde{a}, \tilde{b}, \tilde{c}) =
\begin{bmatrix}
b_1 & c_2 & & & & a_6 \\
a_1 & b_2 & c_3 & & & \\
& a_2 & b_3 & c_4 & & \\
& & a_3 & b_4 & c_5 & \\
& & & a_4 & b_5 & c_6 \\
c_1 & & & & a_5 & b_6
\end{bmatrix} \tag{3.32}
\]

[Figure 3.3: Eight points on a circular mesh.]

3.3.4 Circulant Matrices

In general, the elements along the diagonals of a periodic matrix are not constant, as shown in the example. However, a special subset of a periodic matrix is the circulant matrix, formed when the elements along the various bands are constant. Circulant matrices play a vital role in our analysis. We will have much more to say about them later. The most general circulant matrix of order 4 is

\[
\begin{bmatrix}
b_0 & b_1 & b_2 & b_3 \\
b_3 & b_0 & b_1 & b_2 \\
b_2 & b_3 & b_0 & b_1 \\
b_1 & b_2 & b_3 & b_0
\end{bmatrix} \tag{3.33}
\]

Notice that each row of a circulant matrix is shifted (see Figure 3.3) one element to the right of the one above it. The special case of a tridiagonal circulant matrix is

The special case of a tridiagonal circulant matrix is given by

    B_p(M : a, b, c) =
        [ b  c                 a ]
        [ a  b  c                ]
        [    a  b  c             ]
        [         . . .          ]
        [            a  b  c     ]
        [ c             a  b     ]    (3.34)

When the standard three-point central-differencing approximations for a first and second derivative are used with periodic boundary conditions, they take the form

    (δ_x)_p = (1/2Δx) B_p(-1, 0, 1)   and   (δ_xx)_p = (1/Δx²) B_p(1, -2, 1)    (3.35)

Clearly, these special cases of periodic operators are also circulant operators. Later on we take advantage of this special property. Notice that there are no boundary condition vectors, since this information is all interior to the matrices themselves.

3.4 Constructing Differencing Schemes of Any Order

3.4.1 Taylor Tables

The Taylor series expansion of functions about a fixed point provides a means for constructing finite-difference point operators of any order. A simple and straightforward way to carry this out is to construct a "Taylor table," which makes extensive use of the Taylor series expansion given earlier. As an example, consider Table 3.1, which represents a Taylor table for an approximation of a second derivative using three values of the function centered about the point at which the derivative is to be evaluated.

A Taylor table is simply a convenient way of forming linear combinations of Taylor series on a term-by-term basis. At the top of the table we see an expression with a question mark:

    (∂²u/∂x²)_j - (1/Δx²)(a u_{j-1} + b u_j + c u_{j+1}) = ?    (3.36)

This represents one of the questions that a study of this table can answer, namely: what is the local error caused by the use of this approximation? Notice that all of the terms in the equation appear in a column at the left of the table (although, in this case, Δx² has been multiplied into each term in order to simplify the terms to be put into the table). Then notice that at the head of each column there appears the common factor that occurs in the expansion of each term about the point j, that is,

    Δx^k (∂^k u/∂x^k)_j,   k = 0, 1, 2, ...

The columns to the right of the leftmost one, under the headings, make up the Taylor table. Each entry is the coefficient of the term at the top of the corresponding column in the Taylor series expansion of the term to the left of the corresponding row. For example, the last row in the table corresponds to the Taylor series expansion of -c u_{j+1}:

    -c u_{j+1} = -c u_j - c (1) Δx (∂u/∂x)_j - c (1)² (1/2) Δx² (∂²u/∂x²)_j
                 - c (1)³ (1/6) Δx³ (∂³u/∂x³)_j - c (1)⁴ (1/24) Δx⁴ (∂⁴u/∂x⁴)_j - ...

The table is constructed so that some of the algebra is simplified.

    [Table 3.1: Taylor table for the centered 3-point Lagrangian approximation to a second derivative.]

Consider the sum of each of these columns. To maximize the order of accuracy of the method, we proceed from left to right and force, by the proper choice of a, b, and c, these sums to be zero. One can easily show that the sums of the first three columns are zero if we satisfy the equation

    [  1   1   1 ] [ a ]   [ 0 ]
    [ -1   0   1 ] [ b ] = [ 0 ]
    [  1   0   1 ] [ c ]   [ 2 ]

The solution is given by [a, b, c] = [1, -2, 1].

The columns that do not sum to zero constitute the error. We designate the first nonvanishing sum to be er_t, and refer to it as the Taylor series error. In this case er_t occurs at the fifth column in the table (for this example all even columns will vanish by symmetry), and one finds

    er_t = (1/Δx²) [ -a Δx⁴/24 - c Δx⁴/24 ] (∂⁴u/∂x⁴)_j = -(Δx²/12) (∂⁴u/∂x⁴)_j    (3.37)

Note that Δx² has been divided through to make the error term consistent. We have just derived the familiar 3-point central-differencing point operator for a second derivative:

    (∂²u/∂x²)_j - (1/Δx²)(u_{j-1} - 2u_j + u_{j+1}) = O(Δx²)    (3.38)
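The column-sum mechanics can be checked mechanically. The short sketch below (our own, not from the text; exact rational arithmetic) rebuilds the Taylor-table columns for the weights [1, -2, 1] and confirms that the first four columns vanish while the fifth reproduces the Δx²/12 leading-error coefficient.

```python
import math
from fractions import Fraction

def taylor_coeff(shift, k):
    """Coefficient of dx^k (d^k u/dx^k)_j in the expansion of u_{j+shift}."""
    return Fraction(shift ** k, math.factorial(k))

a, b, c = 1, -2, 1  # the centered 3-point weights for dx^2 * u''
cols = []
for k in range(6):
    s = a * taylor_coeff(-1, k) + b * taylor_coeff(0, k) + c * taylor_coeff(1, k)
    if k == 2:
        s -= 1  # subtract the dx^2 u'' term being approximated
    cols.append(s)
```

Here cols[0] through cols[3] are zero and cols[4] = 1/12: the operator equals u'' + (Δx²/12)u'''' + ..., which with the derivative-minus-operator convention of Eq. 3.36 is the er_t = -(Δx²/12) ∂⁴u/∂x⁴ found above.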

The Taylor table for a 3-point backward-differencing operator representing a first derivative is shown in Table 3.2.

    [Table 3.2: Taylor table for the backward 3-point Lagrangian approximation to a first derivative.]

This time the first three columns sum to zero if

    [ 1  1  1 ] [ a_2 ]   [  0 ]
    [ 2  1  0 ] [ a_1 ] = [ -1 ]
    [ 4  1  0 ] [ b   ]   [  0 ]

which gives [a_2, a_1, b] = (1/2)[1, -4, 3]. In this case the fourth column provides the leading truncation error term:

    er_t = (1/6Δx) [8a_2 + a_1] Δx³ (∂³u/∂x³)_j = (Δx²/3) (∂³u/∂x³)_j    (3.39)

Thus we have derived a second-order backward-difference approximation of a first derivative:

    (∂u/∂x)_j - (1/2Δx)(u_{j-2} - 4u_{j-1} + 3u_j) = O(Δx²)    (3.40)

3.4.2 Generalization of Difference Formulas

In general, a difference approximation to the mth derivative at grid point j can be cast in terms of q + p + 1 neighboring points as

    (∂^m u/∂x^m)_j - Σ_{i=-p}^{q} a_i u_{j+i} = er_t    (3.41)
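Equation 3.41 suggests automating the process: for any set of stencil offsets and any target derivative, the Taylor-table conditions form a small linear system for the weights. The sketch below is our own construction (not the book's code); it uses exact arithmetic and returns weights w_i such that (∂^m u/∂x^m)_j ≈ (1/Δx^m) Σ w_i u_{j+i}.

```python
import math
from fractions import Fraction

def stencil_weights(offsets, m):
    """Weights w_i with (d^m u/dx^m)_j ~ (1/dx^m) * sum_i w_i * u_{j+offset_i},
    of the maximal order allowed by the stencil."""
    n = len(offsets)
    # Row k enforces the Taylor-table condition on the dx^k u^(k) column.
    A = [[Fraction(s ** k, math.factorial(k)) for s in offsets] for k in range(n)]
    rhs = [Fraction(1 if k == m else 0) for k in range(n)]
    # Gauss-Jordan elimination with partial pivoting, done exactly.
    for col in range(n):
        piv = max(range(col, n), key=lambda r: abs(A[r][col]))
        A[col], A[piv] = A[piv], A[col]
        rhs[col], rhs[piv] = rhs[piv], rhs[col]
        for r in range(n):
            if r != col and A[r][col] != 0:
                f = A[r][col] / A[col][col]
                A[r] = [x - f * y for x, y in zip(A[r], A[col])]
                rhs[r] -= f * rhs[col]
    return [rhs[i] / A[i][i] for i in range(n)]
```

For the backward stencil {-2, -1, 0} and m = 1 this reproduces [a_2, a_1, b] = (1/2)[1, -4, 3] from Table 3.2; for {-1, 0, 1} and m = 2 it returns the weights [1, -2, 1] of Eq. 3.38.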

where the a_i are coefficients to be determined through the use of Taylor tables to produce approximations of a given order. Clearly this process can be used to find forward, backward, skewed, or central point operators of any order for any derivative. It could be computer automated and extended to higher dimensions. More important, however, is the fact that it can be further generalized. In order to do this, let us approach the subject in a slightly different way, that is, from the point of view of interpolation formulas. These formulas are discussed in many texts on numerical analysis.

3.4.3 Lagrange and Hermite Interpolation Polynomials

The Lagrangian interpolation polynomial is given by

    u(x) = Σ_{k=0}^{K} a_k(x) u_k    (3.42)

where a_k(x) are polynomials in x of degree K. The construction of the a_k(x) can be taken from the simple Lagrangian formula for quadratic interpolation (or extrapolation) with non-equispaced points:

    u(x) = u_0 (x_1 - x)(x_2 - x) / [(x_1 - x_0)(x_2 - x_0)]
         + u_1 (x_0 - x)(x_2 - x) / [(x_0 - x_1)(x_2 - x_1)]
         + u_2 (x_0 - x)(x_1 - x) / [(x_0 - x_2)(x_1 - x_2)]    (3.43)

Notice that the coefficient of each u_k is one when x = x_k, and zero when x takes any other discrete value in the set. To construct a polynomial for u(x), impose an equispaced mesh, take the first or second derivative of u(x), and evaluate these derivatives at the appropriate discrete point. In this way we rederive the finite-difference approximations just presented. Finite-difference schemes that can be derived from Eq. 3.42 are referred to as Lagrangian approximations.

A generalization of the Lagrangian approach is brought about by using Hermitian interpolation. Hermite formulas use values of the function and its derivative(s) at given points in space. Our illustration is for the case in which discrete values of the function and its first derivative are used, producing the expression

    u(x) = Σ a_k(x) u_k + Σ b_k(x) (∂u/∂x)_k    (3.44)

Obviously higher-order derivatives could be included as the problems dictate. A complete discussion of these polynomials can be found in many references on numerical methods, but here we need only the concept.

The previous examples of a Taylor table constructed explicit point difference operators from Lagrangian interpolation formulas. Consider next the Taylor table for an implicit space differencing scheme for a first derivative arising from the use of a Hermite interpolation formula. A generalization of Eq. 3.41 can include derivatives at neighboring points, i.e.,

    Σ_{i=-r}^{s} b_i (∂^m u/∂x^m)_{j+i} + Σ_{i=-p}^{q} a_i u_{j+i} = er_t    (3.45)

analogous to Eq. 3.44. An example formula is illustrated at the top of Table 3.3. Here not only is the derivative at point j represented, but also included are derivatives at points j - 1 and j + 1, which also must be expanded using Taylor series about point j. This requires the following generalization of the Taylor series expansion given in Eq. 3.8:

    (∂^m u/∂x^m)_{j+k} = Σ_{n=0}^{∞} [(kΔx)^n / n!] (∂^{n+m} u/∂x^{n+m})_j    (3.46)

The derivative terms now have coefficients (the coefficient on the j point is taken as one to simplify the algebra) which must be determined using the Taylor table approach as outlined below.

    [Table 3.3: Taylor table for the central 3-point Hermitian approximation to a first derivative:
     d (∂u/∂x)_{j-1} + (∂u/∂x)_j + e (∂u/∂x)_{j+1} - (1/Δx)(a u_{j-1} + b u_j + c u_{j+1}) = ? ]

To maximize the order of accuracy, we must satisfy the relation

    [  1   1   1   0   0 ] [ a ]   [ 0 ]
    [ -1   0   1  -1  -1 ] [ b ]   [ 1 ]
    [  1   0   1   2  -2 ] [ c ] = [ 0 ]
    [ -1   0   1  -3  -3 ] [ d ]   [ 0 ]
    [  1   0   1   4  -4 ] [ e ]   [ 0 ]

having the solution [a, b, c, d, e] = (1/4)[-3, 0, 3, 1, 1]. Under these conditions the sixth column sums to

    er_t = (Δx⁴/120) (∂⁵u/∂x⁵)_j    (3.47)

and the method can be expressed as

    (∂u/∂x)_{j-1} + 4(∂u/∂x)_j + (∂u/∂x)_{j+1} - (3/Δx)(u_{j+1} - u_{j-1}) = O(Δx⁴)    (3.48)

This is also referred to as a Pade formula.

3.4.4 Practical Application of Pade Formulas

It is one thing to construct methods using the Hermitian concept and quite another to implement them in a computer code. In the form of a point operator it is probably not evident at first just how Eq. 3.48 can be applied. However, the situation is quite easy to comprehend if we express the same method in the form of a matrix operator. A banded matrix notation for Eq. 3.48 is

    (1/6) B(1, 4, 1) δ_x u = (1/2Δx) B(-1, 0, 1) u + (bc)    (3.49)

in which Dirichlet boundary conditions have been imposed.⁴ Mathematically this is equivalent to

    δ_x u = [(1/6) B(1, 4, 1)]⁻¹ [ (1/2Δx) B(-1, 0, 1) u + (bc) ]    (3.50)

which can be reexpressed by the "predictor-corrector" sequence

    ũ = (1/2Δx) B(-1, 0, 1) u + (bc)
    δ_x u = [(1/6) B(1, 4, 1)]⁻¹ ũ    (3.51)

⁴ In this case the vector containing the boundary conditions would include values of both u and ∂u/∂x at both boundaries.
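The sequence in Eq. 3.51 is easy to realize once a tridiagonal solver is available. The sketch below is our own test harness, not code from the text: for simplicity it closes the system by imposing exact derivative values in the two boundary rows rather than the full boundary treatment discussed here.

```python
import math

def thomas(a, b, c, d):
    """Solve a tridiagonal system: a = sub-, b = main, c = super-diagonal."""
    n = len(d)
    cp, dp = [0.0] * n, [0.0] * n
    cp[0], dp[0] = c[0] / b[0], d[0] / b[0]
    for i in range(1, n):
        denom = b[i] - a[i] * cp[i - 1]
        cp[i] = c[i] / denom if i < n - 1 else 0.0
        dp[i] = (d[i] - a[i] * dp[i - 1]) / denom
    x = [0.0] * n
    x[-1] = dp[-1]
    for i in range(n - 2, -1, -1):
        x[i] = dp[i] - cp[i] * x[i + 1]
    return x

def pade_first_derivative(u, dx, dudx_left, dudx_right):
    """(du)_{j-1} + 4(du)_j + (du)_{j+1} = (3/dx)(u_{j+1} - u_{j-1}) in the
    interior; the first and last rows simply impose the supplied derivatives."""
    n = len(u)
    a = [0.0] + [1.0] * (n - 2) + [0.0]
    b = [1.0] + [4.0] * (n - 2) + [1.0]
    c = [0.0] + [1.0] * (n - 2) + [0.0]
    d = ([dudx_left]
         + [3.0 * (u[i + 1] - u[i - 1]) / dx for i in range(1, n - 1)]
         + [dudx_right])
    return thomas(a, b, c, d)

# Fourth-order check on u = sin(x), x in [0, 1]
n = 21
dx = 1.0 / (n - 1)
xs = [i * dx for i in range(n)]
du = pade_first_derivative([math.sin(v) for v in xs], dx, math.cos(0.0), math.cos(1.0))
```

On sin(x) with 21 points the maximum error against cos(x) is at the tiny level predicted by the Δx⁴/120 truncation term, far below what a 3-point explicit scheme delivers on the same mesh.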

With respect to practical implementation, the meaning of the predictor in this sequence should be clear. It simply says: take the vector array u, difference it, add the boundary conditions, and store the result in the intermediate array ũ. The meaning of the second row is more subtle, since it is demanding the evaluation of an inverse operator, but it still can be given a simple interpretation. An inverse matrix operator implies the solution of a coupled set of linear equations. These operators are very common in finite-difference applications. They appear in the form of banded matrices having a small bandwidth, in this case a tridiagonal. The evaluation of [B(1, 4, 1)]⁻¹ is found by means of a tridiagonal "solver," which is simple to code, efficient to run, and widely used. In general, Hermitian or Pade approximations can be practical when they can be implemented by the use of efficient banded solvers. Hermitian forms of the second derivative can also be easily derived by means of a Taylor table. For example, the scheme

    δ_xx u = [(1/12) B(1, 10, 1)]⁻¹ [ (1/Δx²) B(1, -2, 1) u + (bc) ]    (3.52)

is O(Δx⁴) and makes use of only tridiagonal operations.
It should be mentioned that the spline approximation is one form of a Pade matrix difference operator. It is given by

    δ_xx u = [(1/6) B(1, 4, 1)]⁻¹ [ (1/Δx²) B(1, -2, 1) u + (bc) ]    (3.53)

but its order of accuracy is only O(Δx²). How much this reduction in accuracy is offset by the increased global continuity built into a spline fit is not known. We note that the spline fit of a first derivative is identical to any of the expressions in Eqs. 3.48 to 3.51.

A final word on Hermitian approximations. Clearly they have an advantage over 3-point Lagrangian schemes because of their increased accuracy. However, a more subtle point is that they get this increase in accuracy using information that is still local to the point where the derivatives are being evaluated. In application, this can be advantageous at boundaries and in the vicinity of steep gradients.

3.4.5 Other Higher-Order Schemes

It is obvious, of course, that five-point schemes using Lagrangian approximations can be derived that have the same order of accuracy as the methods given in Eqs. 3.48 and 3.52, but they will have a wider spread of space indices. In particular, two Lagrangian schemes with the same order of accuracy are (here we ignore the problem created by the boundary conditions, although this is one of the principal issues in applying these schemes):

    δ_x u = (1/12Δx) B_p(1, -8, 0, 8, -1) u = O(Δx⁴)    (3.54)

The corresponding five-point Lagrangian approximation for the second derivative is

    δ_xx u = (1/12Δx²) B_p(-1, 16, -30, 16, -1) u = O(Δx⁴)    (3.55)

3.5 Fourier Error Analysis

In order to select a finite-difference scheme for a given application one must be able to assess the accuracy of the candidate schemes. The accuracy of an operator is often expressed in terms of the order of the leading error term determined from a Taylor table. While this is a useful measure, it provides a fairly limited description. Further information about the error behavior of a finite-difference scheme can be obtained using Fourier error analysis.

3.5.1 Application to a Spatial Operator

An arbitrary periodic function can be decomposed into its Fourier components, which are in the form e^{iκx}, where κ is the wavenumber. It is therefore of interest to examine how well a given finite-difference operator approximates derivatives of e^{iκx}. We will concentrate here on first-derivative approximations, although the analysis is equally applicable to higher derivatives.

The exact first derivative of e^{iκx} is

    ∂e^{iκx}/∂x = iκ e^{iκx}    (3.56)

If we apply, for example, a second-order centered difference operator to u_j = e^{iκx_j}, where x_j = jΔx, we get

    (δ_x u)_j = (u_{j+1} - u_{j-1}) / (2Δx)
              = (e^{iκΔx(j+1)} - e^{iκΔx(j-1)}) / (2Δx)
              = [(e^{iκΔx} - e^{-iκΔx}) / (2Δx)] e^{iκΔx j}
              = (1/2Δx)[(cos κΔx + i sin κΔx) - (cos κΔx - i sin κΔx)] e^{iκΔx j}
              = i (sin κΔx / Δx) e^{iκΔx j}
              = i κ* e^{iκΔx j}    (3.57)

where κ* is the modified wavenumber. The modified wavenumber is so named because it appears where the wavenumber, κ, appears in the exact expression. Thus the degree to which the modified wavenumber approximates the actual wavenumber is a measure of the accuracy of the approximation. For the second-order centered difference operator the modified wavenumber is given by

    κ* = sin κΔx / Δx    (3.58)

Note that κ* approximates κ to second-order accuracy, as is to be expected, since

    sin κΔx / Δx = κ - κ³Δx²/6 + ...

Equation 3.58 is plotted in Figure 3.4, along with similar relations for the standard fourth-order centered difference scheme and the fourth-order Pade scheme. The expression for the modified wavenumber provides the accuracy with which a given wavenumber component of the solution is resolved for the entire wavenumber range available in a mesh of a given size, 0 ≤ κΔx ≤ π.

    [Figure 3.4: Modified wavenumber for various schemes.]

In general, finite-difference operators can be written in the form

    (δ_x)_j = (δ_x^a)_j + (δ_x^s)_j
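The modified-wavenumber curves plotted in Figure 3.4 are cheap to reproduce. A short sketch (ours, not the book's code) for the three schemes discussed:

```python
import math

def kstar_dx_central2(x):        # x = kappa * dx
    return math.sin(x)

def kstar_dx_central4(x):        # 5-point fourth-order centered scheme
    return (8.0 * math.sin(x) - math.sin(2.0 * x)) / 6.0

def kstar_dx_pade4(x):           # fourth-order Pade scheme
    return 3.0 * math.sin(x) / (2.0 + math.cos(x))
```

All three agree with κΔx for well-resolved waves, but at κΔx = 1 the errors are roughly 0.16, 0.03, and 0.006 respectively, which is the ordering seen in Figure 3.4.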

where (δ_x^a)_j is an antisymmetric operator and (δ_x^s)_j is a symmetric operator.⁵ If we restrict our interest to schemes extending from j - 3 to j + 3, then

    (δ_x^a u)_j = (1/Δx)[a_1(u_{j+1} - u_{j-1}) + a_2(u_{j+2} - u_{j-2}) + a_3(u_{j+3} - u_{j-3})]

and

    (δ_x^s u)_j = (1/Δx)[d_0 u_j + d_1(u_{j+1} + u_{j-1}) + d_2(u_{j+2} + u_{j-2}) + d_3(u_{j+3} + u_{j-3})]

The corresponding modified wavenumber is

    i κ* = (1/Δx)[ d_0 + 2(d_1 cos κΔx + d_2 cos 2κΔx + d_3 cos 3κΔx)
                  + 2i(a_1 sin κΔx + a_2 sin 2κΔx + a_3 sin 3κΔx) ]    (3.59)

When the finite-difference operator is antisymmetric (centered), the modified wavenumber is purely real. When the operator includes a symmetric component, the modified wavenumber is complex, with the imaginary component being entirely error. The fourth-order Pade scheme is given by

    (δ_x u)_{j-1} + 4(δ_x u)_j + (δ_x u)_{j+1} = (3/Δx)(u_{j+1} - u_{j-1})

The modified wavenumber for this scheme satisfies⁶

    i κ* e^{-iκΔx} + 4 i κ* + i κ* e^{iκΔx} = (3/Δx)(e^{iκΔx} - e^{-iκΔx})

which gives

    i κ* = 3 i sin κΔx / [(2 + cos κΔx) Δx]

The modified wavenumber provides a useful tool for assessing difference approximations. In the context of the linear convection equation, the errors can be given a physical interpretation. Consider once again the linear convection equation in the form

    ∂u/∂t + a ∂u/∂x = 0

⁵ In terms of a circulant matrix operator A, the antisymmetric part is obtained from (A - A^T)/2 and the symmetric part from (A + A^T)/2.
⁶ Note that terms such as (δ_x u)_{j-1} are handled by letting (δ_x u)_j = i κ* e^{iκjΔx} and evaluating the shift in j.

on a domain extending from -∞ to ∞. Recall from Section 2.3.2 that a solution initiated by a harmonic function with wavenumber κ is

    u(x, t) = f(t) e^{iκx}    (3.60)

where f(t) satisfies the ODE

    df/dt = -iκa f

Solving for f(t) and substituting into Eq. 3.60 gives the exact solution as

    u(x, t) = f(0) e^{iκ(x - at)}

If second-order centered differences are applied to the spatial term, the following ODE is obtained for f(t):

    df/dt = -ia (sin κΔx / Δx) f = -i κ* a f    (3.61)

Solving this ODE exactly (since we are considering the error from the spatial approximation only) and substituting into Eq. 3.60, we obtain

    u_numerical(x, t) = f(0) e^{iκ(x - a* t)}    (3.62)

where a* is the numerical (or modified) phase speed, which is related to the modified wavenumber by

    a*/a = κ*/κ

For the above example,

    a*/a = sin κΔx / (κΔx)

The numerical phase speed is the speed at which a harmonic function is propagated numerically. Since a*/a ≤ 1 for this example, the numerical solution propagates too slowly. Since a* is a function of the wavenumber, the numerical approximation introduces dispersion, although the original PDE is nondispersive. As a result, a waveform consisting of many different wavenumber components eventually loses its original form.

Figure 3.5 shows the numerical phase speed for the schemes considered previously. The number of points per wavelength (PPW) by which a given wave is resolved is given by PPW = 2π/(κΔx). The resolving efficiency of a scheme can be expressed in terms of the PPW required to produce errors below a specified level. For example, the second-order centered difference scheme requires 80 PPW to produce an error in phase speed of less than 0.1 percent.
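The PPW figures quoted here can be recovered directly from the modified wavenumber. A sketch (ours), for the second-order centered scheme, scans for the coarsest resolution whose phase-speed error stays below a tolerance:

```python
import math

def phase_speed_ratio(ppw):
    """a*/a = sin(kappa dx)/(kappa dx) with kappa dx = 2 pi / PPW."""
    x = 2.0 * math.pi / ppw
    return math.sin(x) / x

def ppw_required(tol=1e-3):
    ppw = 4
    while abs(1.0 - phase_speed_ratio(ppw)) > tol:
        ppw += 1
    return ppw
```

With the default tolerance this returns 82 points per wavelength, consistent with the roughly 80 PPW quoted in the text for a 0.1 percent phase-speed error.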

The 5-point fourth-order centered scheme and the fourth-order Pade scheme require 15 and 10 PPW, respectively, to achieve the same error level.

    [Figure 3.5: Numerical phase speed for various schemes.]

For our example using second-order centered differences, the modified wavenumber is purely real, but in the general case it can include an imaginary component as well, as shown in Eq. 3.59. In that case, the error in the phase speed is determined from the real part of the modified wavenumber, as can be seen by inspecting Eq. 3.61, while the imaginary part leads to an error in the amplitude of the solution. Thus the antisymmetric portion of the spatial difference operator determines the error in speed and the symmetric portion the error in amplitude. This will be further discussed in Section 11.2.

3.6 Difference Operators at Boundaries

As discussed in Section 3.3.2, a matrix difference operator incorporates both the difference approximation in the interior of the domain and that at the boundaries. In this section, we consider the boundary operators needed for our model equations for convection and diffusion. In the case of periodic boundary conditions, no special boundary operators are required.

3.6.1 The Linear Convection Equation

Referring to Section 2.3, the boundary conditions for the linear convection equation can be either periodic or of inflow-outflow type. In the latter case, a Dirichlet boundary condition is given at the inflow boundary, while no condition is specified at the outflow boundary. Here we assume that the wave speed a is positive; thus the left-hand boundary is the inflow boundary, and the right-hand boundary is the outflow boundary. Thus the vector of unknowns is

    u = [u_1, u_2, ..., u_M]^T    (3.63)

and u_0 is specified.

Consider first the inflow boundary. It is clear that as long as the interior difference approximation does not extend beyond u_{j-1}, then no special treatment is required for this boundary. For example, with second-order centered differences we obtain

    δ_x u = A u + (bc)    (3.64)

with

    A = (1/2Δx) [  0   1            ]        (bc) = (1/2Δx) [ -u_0 ]
                [ -1   0   1        ]                       [   0  ]
                [     -1   0   1    ]                       [   0  ]
                [         . . .     ]                       [  ... ]    (3.65)

However, if we use the fourth-order interior operator given in Eq. 3.54, then the approximation at j = 1 requires a value of u_{j-2}, which is outside the domain. Hence, a different operator is required at j = 1 which extends only to j - 1, while having the appropriate order of accuracy. Such an operator, known as a numerical boundary scheme, can have an order of accuracy which is one order lower than that of the interior scheme, and the global accuracy will equal that of the interior scheme.⁷ For example, with fourth-order centered differences, we can use the following third-order operator at j = 1:

    (δ_x u)_1 = (1/6Δx)(-2u_0 - 3u_1 + 6u_2 - u_3)    (3.66)

which is easily derived using a Taylor table. The resulting difference operator has the form of Eq. 3.64 with

    A = (1/12Δx) [ -6  12  -2              ]        (bc) = (1/12Δx) [ -4u_0 ]
                 [ -8   0   8  -1          ]                        [   u_0 ]
                 [  1  -8   0   8  -1      ]                        [   0   ]
                 [            . . .        ]                        [  ...  ]    (3.67)

⁷ Proof of this theorem is beyond the scope of this book; the interested reader should consult the literature for further details.

This approximation is globally fourth-order accurate.

At the outflow boundary, no boundary condition is specified. We must approximate ∂u/∂x at node M with no information about u_{M+1}. Thus the second-order centered-difference operator, which requires u_{j+1}, cannot be used at j = M. A backward-difference formula must be used. With a second-order interior operator, the following first-order backward formula can be used:

    (δ_x u)_M = (1/Δx)(u_M - u_{M-1})    (3.68)

This produces a difference operator with

    A = (1/2Δx) [  0   1            ]        (bc) = (1/2Δx) [ -u_0 ]
                [ -1   0   1        ]                       [   0  ]
                [      . . .        ]                       [  ... ]
                [     -1   0   1    ]                       [   0  ]
                [         -2   2    ]                       [   0  ]    (3.69)

In the case of a fourth-order centered interior operator the last two rows of A require modification.

Another approach to the development of boundary schemes is in terms of space extrapolation. The following formula allows u_{M+1} to be extrapolated from the interior data to arbitrary order on an equispaced grid:

    (1 - E⁻¹)^p u_{M+1} = 0    (3.70)

where E is the shift operator defined by E u_j = u_{j+1} and the order of the approximation is p - 1. For example, with p = 2 we obtain

    (1 - 2E⁻¹ + E⁻²) u_{M+1} = u_{M+1} - 2u_M + u_{M-1} = 0    (3.71)

which gives the following first-order approximation to u_{M+1}:

    u_{M+1} = 2u_M - u_{M-1}    (3.72)

Substituting this into the second-order centered-difference operator applied at node M gives

    (δ_x u)_M = (1/2Δx)(u_{M+1} - u_{M-1}) = (1/2Δx)(2u_M - 2u_{M-1}) = (1/Δx)(u_M - u_{M-1})    (3.73)

which is identical to Eq. 3.68.
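The extrapolation operator of Eq. 3.70 expands, via the binomial theorem, into an explicit formula for u_{M+1}, which the sketch below (our own, not the book's code) implements for arbitrary order:

```python
import math

def extrapolate_boundary(u, p):
    """u_{M+1} from (1 - E^{-1})^p u_{M+1} = 0, i.e.
    u_{M+1} = -sum_{k=1..p} (-1)^k C(p, k) u_{M+1-k}; order of accuracy p - 1."""
    # u holds the interior values u_1 .. u_M, so u[-k] is u_{M+1-k}
    return -sum((-1) ** k * math.comb(p, k) * u[-k] for k in range(1, p + 1))
```

With p = 2 this is u_{M+1} = 2u_M - u_{M-1}, Eq. 3.72, exact for linear data; p = 3 is exact for quadratics.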

3.6.2 The Diffusion Equation

In solving the diffusion equation, we must consider Dirichlet and Neumann boundary conditions. The treatment of a Dirichlet boundary condition proceeds along the same lines as the inflow boundary for the convection equation discussed above. With the second-order centered interior operator

    (δ_xx u)_j = (1/Δx²)(u_{j+1} - 2u_j + u_{j-1})    (3.74)

no modifications are required near boundaries, leading to the difference operator given in Eq. 3.27.

For a Neumann boundary condition, we assume that ∂u/∂x is specified at j = M + 1, that is,

    (∂u/∂x)_{M+1} is given    (3.75)

Thus we design an operator at node M which is in the following form:

    (δ_xx u)_M = (1/Δx²)(a u_{M-1} + b u_M) + (c/Δx)(∂u/∂x)_{M+1}    (3.76)

where a, b, and c are constants which can easily be determined using a Taylor table, as shown in Table 3.4.

    [Table 3.4: Taylor table for the Neumann boundary condition.]

Solving for a, b, and c, we obtain the following operator:

    (δ_xx u)_M = (1/3Δx²)(2u_{M-1} - 2u_M) + (2/3Δx)(∂u/∂x)_{M+1}    (3.77)

This produces the difference operator given in Eq. 3.29. Notice that this operator is second-order accurate. In the case of a numerical approximation to a Neumann boundary condition, this is necessary to obtain a globally second-order accurate formulation. This contrasts with the numerical boundary schemes described previously, which can be one order lower than the interior scheme.

We can also obtain the operator in Eq. 3.77 using the space extrapolation idea. Consider a second-order backward-difference approximation applied at node M + 1:

    (∂u/∂x)_{M+1} = (1/2Δx)(u_{M-1} - 4u_M + 3u_{M+1}) + O(Δx²)    (3.78)

Solving for u_{M+1} gives

    u_{M+1} = (1/3)[ 4u_M - u_{M-1} + 2Δx (∂u/∂x)_{M+1} ] + O(Δx³)    (3.79)

Substituting this into the second-order centered difference operator for a second derivative applied at node M gives

    (δ_xx u)_M = (1/Δx²)(u_{M+1} - 2u_M + u_{M-1})
               = (1/Δx²)[ (1/3)(4u_M - u_{M-1} + 2Δx (∂u/∂x)_{M+1}) - 2u_M + u_{M-1} ]    (3.80)
               = (1/3Δx²)(2u_{M-1} - 2u_M) + (2/3Δx)(∂u/∂x)_{M+1}    (3.81)

which is identical to Eq. 3.77.

3.7 Problems

1. Derive a third-order finite-difference approximation to a first derivative in the form

       (δ_x u)_j = (1/Δx)(a u_{j-2} + b u_{j-1} + c u_j + d u_{j+1})

   Find the leading error term.

2. Derive a finite-difference approximation to a first derivative in the form

       a (δ_x u)_{j-1} + (δ_x u)_j = (1/Δx)(b u_{j-1} + c u_j + d u_{j+1})

   Find the leading error term.

3. Using a 4 (interior) point mesh, write out the 4 x 4 matrices and the boundary-condition vector formed by using the scheme derived in question 2 when both u and ∂u/∂x are given at j = 0 and u is given at j = 5.

4. Repeat question 2 with d = 0.

5. Derive a finite-difference approximation to a third derivative in the form

       (δ_xxx u)_j = (1/Δx³)(a u_{j-2} + b u_{j-1} + c u_j + d u_{j+1} + e u_{j+2})

   Find the leading error term.

6. Derive a compact (or Pade) finite-difference approximation to a second derivative in the form

       d (δ_xx u)_{j-1} + (δ_xx u)_j + e (δ_xx u)_{j+1} = (1/Δx²)(a u_{j-1} + b u_j + c u_{j+1})

   Find the leading error term.

7. Find the modified wavenumber for the operator derived in question 1. Plot the real and imaginary parts of κ*Δx vs. κΔx for 0 ≤ κΔx ≤ π. Compare the real part with that obtained from the fourth-order centered operator (Eq. 3.54).

8. Application of the second-derivative operator to the function e^{iκx} gives

       ∂²e^{iκx}/∂x² = -κ² e^{iκx}

   Application of a difference operator for the second derivative gives

       (δ_xx e^{iκjΔx})_j = -κ*² e^{iκjΔx}

   thus defining the modified wavenumber for a second derivative approximation. Find the modified wavenumber for the second-order centered difference operator for a second derivative, the noncompact fourth-order operator (Eq. 3.55), and the compact fourth-order operator derived in question 6. Plot (κ*Δx)² vs. (κΔx)² for 0 ≤ κΔx ≤ π.

9. Find the grid-points-per-wavelength (PPW) requirement to achieve a phase speed error less than 0.1 percent for sixth-order noncompact and compact centered approximations to a first derivative.

10. Consider the following one-sided differencing schemes, which are first-, second-, and third-order, respectively:

        (δ_x u)_j = (u_j - u_{j-1}) / Δx
        (δ_x u)_j = (3u_j - 4u_{j-1} + u_{j-2}) / (2Δx)
        (δ_x u)_j = (11u_j - 18u_{j-1} + 9u_{j-2} - 2u_{j-3}) / (6Δx)

    Find the modified wavenumber for each of these schemes. Plot the real and imaginary parts of κ*Δx vs. κΔx for 0 ≤ κΔx ≤ π. Derive the two leading terms in the truncation error for each scheme.


Chapter 4

THE SEMI-DISCRETE APPROACH

One strategy for obtaining finite-difference approximations to a PDE is to start by differencing the space derivatives only, without approximating the time derivative. In the following chapters, we proceed with an analysis making considerable use of this concept, which we refer to as the semi-discrete approach. Differencing the space derivatives converts the basic PDE into a set of coupled ODE's. In the most general notation, these ODE's would be expressed in the form

    du/dt = F(u, t)    (4.1)

which includes all manner of nonlinear and time-dependent possibilities. On occasion, we use this form, but the rest of this chapter is devoted to a more specialized matrix notation described below.

Another strategy for constructing a finite-difference approximation to a PDE is to approximate all the partial derivatives at once. This generally leads to a point difference operator (see Section 3.3) which, in turn, can be used for the time advance of the solution at any given point in the mesh. As an example let us consider the model equation for diffusion

    ∂u/∂t = ν ∂²u/∂x²

Using three-point central-differencing schemes for both the time and space derivatives, we find

    (u_j^{n+1} - u_j^{n-1}) / (2h) = (ν/Δx²)(u_{j+1}^n - 2u_j^n + u_{j-1}^n)

or

    u_j^{n+1} = u_j^{n-1} + (2hν/Δx²)(u_{j+1}^n - 2u_j^n + u_{j-1}^n)    (4.2)

Clearly Eq. 4.2 is a difference equation which can be used at the space point j to advance the value of u from the previous time levels n and n - 1 to the level n + 1. It is a full discretization of the PDE. Another possibility is to replace the value of u_j^n in the right-hand side of Eq. 4.2 with the time average of u at that point, namely (u_j^{n+1} + u_j^{n-1})/2. This results in the formula

    u_j^{n+1} = u_j^{n-1} + (2hν/Δx²)[ u_{j+1}^n - (u_j^{n+1} + u_j^{n-1}) + u_{j-1}^n ]    (4.3)

which can be solved for u_j^{n+1} and time advanced at the point j. In this case, the spatial and temporal discretizations are not separable, and no semi-discrete form exists.

Equation 4.2 is sometimes called Richardson's method of overlapping steps, and Eq. 4.3 is referred to as the DuFort-Frankel method. As we shall see later on, there are subtle points to be made about using these methods to find a numerical solution to the diffusion equation. There are a number of issues concerning the accuracy, stability, and convergence of Eqs. 4.2 and 4.3 which we cannot comment on until we develop a framework for such investigations. We introduce these methods here only to distinguish between methods in which the temporal and spatial terms are discretized separately and those for which no such separation is possible. For the time being, we shall separate the space difference approximations from the time differencing. In this approach, we reduce the governing PDE's to ODE's by discretizing the spatial terms and use the well-developed theory of ODE solutions to aid us in the development of an analysis of accuracy and stability.
Note, however, that in Eq. 4.2 the spatial and temporal discretizations are separable; this method has an intermediate semi-discrete form and can be analyzed by the methods discussed in the next few chapters.
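Since u_j^{n+1} appears only linearly in Eq. 4.3, it can be isolated to give a closed-form update. A sketch (ours, not the book's code), with periodic boundaries and α = νh/Δx²:

```python
def dufort_frankel_step(u_prev, u_curr, alpha):
    """One step of Eq. 4.3 rearranged for u^{n+1}:
    (1 + 2a) u_j^{n+1} = (1 - 2a) u_j^{n-1} + 2a (u_{j+1}^n + u_{j-1}^n),
    with a = nu * h / dx^2 and periodic indexing."""
    M = len(u_curr)
    return [((1.0 - 2.0 * alpha) * u_prev[j]
             + 2.0 * alpha * (u_curr[(j + 1) % M] + u_curr[(j - 1) % M]))
            / (1.0 + 2.0 * alpha)
            for j in range(M)]
```

Note that the update needs two time levels, and that the α-dependence cannot be factored into a pure space operator: exactly the non-separability discussed above.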

4.1 Reduction of PDE's to ODE's

4.1.1 The Model ODE's

First let us consider the model PDE's for diffusion and biconvection described in Chapter 2. In these simple cases, we can approximate the space derivatives with difference operators and express the resulting ODE's with a matrix formulation. This is a simple and natural formulation when the ODE's are linear.

Model ODE for Diffusion

For example, using the 3-point central-differencing scheme to represent the second derivative in the scalar PDE governing diffusion leads to the following ODE diffusion model

    du/dt = (ν/Δx²) B(1, -2, 1) u + (bc)    (4.4)

with Dirichlet boundary conditions folded into the (bc) vector.

Model ODE for Biconvection

The term biconvection was introduced in Section 2.3. It is used for the scalar convection model when the boundary conditions are periodic. In this case, the 3-point central-differencing approximation produces the ODE model given by

    du/dt = -(a/2Δx) B_p(-1, 0, 1) u    (4.5)

where the boundary condition vector is absent because the flow is periodic. Eqs. 4.4 and 4.5 are the model ODE's for diffusion and biconvection of a scalar in one dimension. They are linear with coefficient matrices which are independent of x and t.

4.1.2 The Generic Matrix Form

The generic matrix form of a semi-discrete approximation is expressed by the equation

    du/dt = A u - f(t)    (4.6)

Note that the elements in the matrix A depend upon both the PDE and the type of differencing scheme chosen for the space terms. The vector f(t) is usually determined by the boundary conditions and possibly source terms. In general, even the Euler and Navier-Stokes equations can be expressed in the form of Eq. 4.6. In such cases the equations are nonlinear, that is, the elements of A depend on the solution u and are usually derived by finding the Jacobian of a flux vector. Although the equations are nonlinear, the linear analysis presented in this book leads to diagnostics that are surprisingly accurate when used to evaluate many aspects of numerical methods as they apply to the Euler and Navier-Stokes equations.
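For concreteness, a sketch (ours, not from the text) of the right-hand side of the diffusion model, Eq. 4.4, with the Dirichlet values folded in exactly as the (bc) vector prescribes:

```python
def diffusion_rhs(u, nu, dx, ua, ub):
    """du/dt = (nu/dx^2) B(1,-2,1) u + (bc); ua and ub are the Dirichlet
    values just outside the first and last interior nodes."""
    M = len(u)
    out = [0.0] * M
    for j in range(M):
        left = u[j - 1] if j > 0 else ua       # (bc) contribution, first row
        right = u[j + 1] if j < M - 1 else ub  # (bc) contribution, last row
        out[j] = nu * (left - 2.0 * u[j] + right) / dx ** 2
    return out
```

A steady linear profile between the boundary values is annihilated by the operator, as it should be for pure diffusion; any of the time-marching methods of the following chapters can now be applied to this ODE system.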

In order to advance Eq. 4.1 in time, the system of ODE's must be integrated using a time-marching method.

4.2 Exact Solutions of Linear ODE's

In order to analyze time-marching methods, we will make use of exact solutions of coupled systems of ODE's, which exist under certain conditions. The ODE's represented by Eq. 4.1 are said to be linear if F is linearly dependent on u (i.e., if ∂F/∂u = A where A is independent of u). As we have already pointed out, when the ODE's are linear they can be expressed in a matrix notation as Eq. 4.6, in which the coefficient matrix, A, is independent of u. If A does not depend explicitly on t, the general solution can be written; whereas, if A does depend explicitly on t, the general solution cannot be written. This holds regardless of whether or not the forcing function, f, depends explicitly on t. As we shall soon see, the exact solution of Eq. 4.6 can be written in terms of the eigenvalues and eigenvectors of A. This will lead us to a representative scalar equation for use in analyzing time-marching methods. These ideas are developed in the following sections.

4.2.1 Eigensystems of Semi-Discrete Linear Forms

Complete Systems

An M × M matrix is represented by a complete eigensystem if it has a complete set of linearly independent eigenvectors (see Appendix A). An eigenvector, x_m, and its corresponding eigenvalue, λ_m, have the property that A x_m = λ_m x_m, or

    [ A − λ_m I ] x_m = 0                                 (4.7)

The eigenvalues are the roots of the equation

    det[ A − λI ] = 0

We form the right-hand eigenvector matrix of a complete system by filling its columns with the eigenvectors x_m:

    X = [ x_1, x_2, ..., x_M ]

The inverse is the left-hand eigenvector matrix, and together they have the property that

    X⁻¹ A X = Λ                                           (4.8)

where Λ is a diagonal matrix whose elements are the eigenvalues of A.
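The property X⁻¹AX = Λ is easy to confirm numerically with numpy.linalg.eig, whose output columns are the right-hand eigenvectors; the 2 × 2 matrix below is illustrative:

```python
import numpy as np

A = np.array([[2.0, 1.0],
              [0.5, 3.0]])

lam, X = np.linalg.eig(A)    # eigenvalues and right eigenvectors (columns of X)
Xinv = np.linalg.inv(X)      # rows are the left-hand eigenvectors

D = Xinv @ A @ X             # similarity transform: should equal diag(lam)
```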

Defective Systems

If an M × M matrix does not have a complete set of linearly independent eigenvectors, it cannot be transformed to a diagonal matrix of scalars, and it is said to be defective. It can, however, be transformed to a diagonal set of blocks, some of which may be scalars (see Appendix A). In general, there exists some S which transforms any matrix A such that

    S⁻¹ A S = J

where

    J = diag( J_1, J_2, ..., J_m, ... )

and

    J_m^(n) = [ λ_m
                 1   λ_m
                      ⋱    ⋱
                           1   λ_m ]       (an n × n block)

The matrix J is said to be in Jordan canonical form, and an eigenvalue with multiplicity n within a Jordan block is said to be a defective eigenvalue. Defective systems play a role in numerical stability analysis.

4.2.2 Single ODE's of First- and Second-Order

First-Order Equations

The simplest nonhomogeneous ODE of interest is given by the single, first-order equation

    du/dt = λu + a e^(μt)                                 (4.9)

where λ, μ, and a are scalars, all of which can be complex numbers. The equation is linear because λ does not depend on u, and has a general solution because λ does not depend on t. It has a steady-state solution if the right-hand side is independent of t, i.e., if μ = 0, and is homogeneous if the forcing function is zero, i.e., if a = 0. Although it is quite simple, the numerical analysis of Eq. 4.9 displays many of the fundamental properties and issues involved in the construction and study of most popular time-marching methods. This theme will be developed as we proceed.

The exact solution of Eq. 4.9 is, for μ ≠ λ,

    u(t) = c₁ e^(λt) + a e^(μt) / (μ − λ)

where c₁ is a constant determined by the initial conditions. In terms of the initial value of u, it can be written

    u(t) = u(0) e^(λt) + a [ e^(μt) − e^(λt) ] / (μ − λ)

The interesting question can arise: What happens to the solution of Eq. 4.9 when μ = λ? This is easily found by setting μ = λ + ε, solving, and then taking the limit as ε → 0. Using this limiting device, we find that the solution to

    du/dt = λu + a e^(λt)

is given by

    u(t) = [ u(0) + at ] e^(λt)                           (4.10)

As we shall soon see, this solution is required for the analysis of defective systems.

Second-Order Equations

The homogeneous form of a second-order equation is given by

    d²u/dt² + a₁ du/dt + a₀ u = 0                         (4.11)

where a₁ and a₀ are complex constants. Now we introduce the differential operator D such that D ≡ d/dt and factor u(t) out of Eq. 4.11, giving

    ( D² + a₁ D + a₀ ) u(t) = 0

The polynomial in D is referred to as a characteristic polynomial and designated P(D). Characteristic polynomials are fundamental to the analysis of both ODE's and OΔE's, since the roots of these polynomials determine the solutions of the equations. For ODE's, we often label these roots in order of increasing modulus as λ₁, λ₂, ..., λ_m, ..., λ_M.
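The limiting solution (4.10) can be checked by direct substitution: the residual of du/dt = λu + ae^(λt) vanishes for u(t) = [u(0) + at]e^(λt). A small sketch with illustrative complex values:

```python
import numpy as np

lam = -0.3 + 0.9j   # illustrative complex eigenvalue
a, u0 = 0.7, 1.2

def u(t):
    # Eq. 4.10: solution when the forcing frequency mu equals lam
    return (u0 + a * t) * np.exp(lam * t)

def residual(t, h=1e-6):
    # centered-difference estimate of du/dt minus the right-hand side
    dudt = (u(t + h) - u(t - h)) / (2 * h)
    return abs(dudt - (lam * u(t) + a * np.exp(lam * t)))
```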

The roots are found by solving the equation P(λ) = 0, i.e.,

    P(λ) = λ² + a₁λ + a₀ = 0                              (4.12)

In our simple example, there would be two roots, λ₁ and λ₂, and the solution to Eq. 4.11 is given by

    u(t) = c₁ e^(λ₁t) + c₂ e^(λ₂t)                        (4.13)

where c₁ and c₂ are constants determined from initial conditions. The proof of this is simple and is found by substituting Eq. 4.13 into Eq. 4.11. One finds the result

    c₁ e^(λ₁t) ( λ₁² + a₁λ₁ + a₀ ) + c₂ e^(λ₂t) ( λ₂² + a₁λ₂ + a₀ )

which is identically zero for all c₁, c₂, and t if and only if the λ's satisfy Eq. 4.12.

4.2.3 Coupled First-Order ODE's

A Complete System

A set of coupled, first-order, homogeneous equations is given by

    u₁′ = a₁₁ u₁ + a₁₂ u₂
    u₂′ = a₂₁ u₁ + a₂₂ u₂                                 (4.14)

which can be written

    u′ = A u ,   u = [ u₁, u₂ ]ᵀ ,   A = (a_ij) = [ a₁₁ a₁₂ ; a₂₁ a₂₂ ]

Consider the possibility that a solution is represented by

    u₁ = c₁ x₁₁ e^(λ₁t) + c₂ x₁₂ e^(λ₂t)
    u₂ = c₁ x₂₁ e^(λ₁t) + c₂ x₂₂ e^(λ₂t)                  (4.15)

By substitution, these are indeed solutions to Eq. 4.14 if and only if

    [ a₁₁ a₁₂ ; a₂₁ a₂₂ ] [ x₁₁ ; x₂₁ ] = λ₁ [ x₁₁ ; x₂₁ ] ,
    [ a₁₁ a₁₂ ; a₂₁ a₂₂ ] [ x₁₂ ; x₂₂ ] = λ₂ [ x₁₂ ; x₂₂ ]      (4.16)

Notice that a higher-order equation can be reduced to a coupled set of first-order equations by introducing a new set of dependent variables. Thus, by setting

    u₁ = u′ ,   u₂ = u

we find Eq. 4.11 can be written

    u₁′ = −a₁ u₁ − a₀ u₂
    u₂′ = u₁                                              (4.17)

which is a subset of Eq. 4.14.
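The reduction (4.17) can be verified numerically: the eigenvalues of the resulting companion matrix equal the roots of the characteristic polynomial (4.12). A sketch with illustrative coefficients a₁ = 3, a₀ = 2, whose roots are −1 and −2:

```python
import numpy as np

a1, a0 = 3.0, 2.0

# Companion form of Eq. 4.17: u1' = -a1 u1 - a0 u2, u2' = u1
A = np.array([[-a1, -a0],
              [1.0, 0.0]])

eig = np.linalg.eigvals(A)
roots = np.roots([1.0, a1, a0])   # roots of lambda^2 + a1 lambda + a0
```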

A Derogatory System

Eq. 4.15 is still a solution to Eq. 4.14 if λ₁ = λ₂ = λ, provided two linearly independent vectors exist to satisfy Eq. 4.16 with A = diag(λ, λ). This is the case where A has a complete set of eigenvectors and is not defective. In this case

    [ λ 0 ; 0 λ ] [ 1 ; 0 ] = λ [ 1 ; 0 ]   and   [ λ 0 ; 0 λ ] [ 0 ; 1 ] = λ [ 0 ; 1 ]

provide such a solution.

A Defective System

If A is defective, then it can be represented by the Jordan canonical form

    [ u₁′ ; u₂′ ] = [ λ 0 ; 1 λ ] [ u₁ ; u₂ ]             (4.18)

whose solution is not obvious. However, in this case, one can solve the top equation first, giving

    u₁(t) = u₁(0) e^(λt)

Then, substituting this result into the second equation, one finds

    du₂/dt = λ u₂ + u₁(0) e^(λt)

which is identical in form to Eq. 4.10 and has the solution

    u₂(t) = [ u₂(0) + u₁(0) t ] e^(λt)

From this the reader should be able to verify that

    u₃(t) = [ a + bt + ct² ] e^(λt)

is a solution to

    [ u₁′ ; u₂′ ; u₃′ ] = [ λ ; 1 λ ; 1 λ ] [ u₁ ; u₂ ; u₃ ]      (4.19)

if

    a = u₃(0) ,   b = u₂(0) ,   c = (1/2) u₁(0)

The general solution to such defective systems is left as an exercise.
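The claimed solution of the defective 2 × 2 system (4.18) can be verified by substitution into its second equation, u₂′ = u₁ + λu₂; the initial data below are illustrative:

```python
import numpy as np

lam = -0.8
u1_0, u2_0 = 1.0, 0.5

def u1(t):
    return u1_0 * np.exp(lam * t)

def u2(t):
    # claimed solution: u2(t) = [u2(0) + u1(0) t] exp(lam t)
    return (u2_0 + u1_0 * t) * np.exp(lam * t)

def residual(t, h=1e-6):
    # second row of Eq. 4.18: u2' = u1 + lam u2
    du2 = (u2(t + h) - u2(t - h)) / (2 * h)
    return abs(du2 - (u1(t) + lam * u2(t)))
```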

4.2.4 General Solution of Coupled ODE's with Complete Eigensystems

Let us consider a set of coupled, nonhomogeneous, linear, first-order ODE's with constant coefficients which might have been derived by space differencing a set of PDE's. Represent them by the equation

    du/dt = A u − f(t)                                    (4.20)

Our assumption is that the M × M matrix A has a complete eigensystem¹ and can be transformed by the left and right eigenvector matrices, X⁻¹ and X, to a diagonal matrix Λ having diagonal elements which are the eigenvalues of A, see Section 4.2.1. Now let us multiply Eq. 4.20 from the left by X⁻¹ and insert the identity combination XX⁻¹ = I between A and u. There results

    X⁻¹ du/dt = X⁻¹AX X⁻¹u − X⁻¹f(t)                      (4.21)

Since A is independent of both u and t, the elements in X⁻¹ and X are also independent of both u and t, and Eq. 4.21 can be modified to

    d(X⁻¹u)/dt = Λ X⁻¹u − X⁻¹f(t)

Finally, by introducing the new variables w and g such that

    w = X⁻¹u ,   g(t) = X⁻¹f(t)                           (4.22)

we reduce Eq. 4.20 to a new algebraic form

    dw/dt = Λw − g(t)                                     (4.23)

It is important at this point to review the results of the previous paragraph. Notice that Eqs. 4.20 and 4.23 are expressing exactly the same equality. The only difference between them was brought about by algebraic manipulations which regrouped the variables. However, this regrouping is crucial for the solution process because Eqs. 4.23 are no longer coupled.

¹ In the following, we exclude defective systems, not because they cannot be analyzed (the example at the conclusion of the previous section proves otherwise), but because they are only of limited interest in the general development of our theory.

They can be written line by line as a set of independent, single, first-order equations; thus

    w₁′ = λ₁ w₁ − g₁(t)
     ⋮
    w_m′ = λ_m w_m − g_m(t)                               (4.24)
     ⋮
    w_M′ = λ_M w_M − g_M(t)

For any given set of g_m(t) each of these equations can be solved separately and then recoupled, using the inverse of the relations given in Eqs. 4.22:

    u(t) = X w(t) = Σ_{m=1}^{M} w_m(t) x_m                (4.25)

where x_m is the m'th column of X, i.e., the eigenvector corresponding to λ_m.

We next focus on the very important subset of Eq. 4.20 when neither A nor f has any explicit dependence on t. In such a case, the g_m in Eqs. 4.24 are also time invariant and the solution to any line in Eq. 4.24 is

    w_m(t) = c_m e^(λ_m t) + (1/λ_m) g_m

where the c_m are constants that depend on the initial conditions. Transforming back to the u-system gives

    u(t) = X w(t)
         = Σ_{m=1}^{M} w_m(t) x_m
         = Σ_{m=1}^{M} c_m e^(λ_m t) x_m + Σ_{m=1}^{M} (1/λ_m) g_m x_m
         = Σ_{m=1}^{M} c_m e^(λ_m t) x_m + X Λ⁻¹ X⁻¹ f
         = Σ_{m=1}^{M} c_m e^(λ_m t) x_m  +  A⁻¹ f        (4.26)
           |——— Transient ———|              |— Steady-state —|
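For a small nondefective system with constant f, Eq. 4.26 can be assembled end-to-end and checked against the ODE itself; the matrix and data below are illustrative:

```python
import numpy as np

A = np.array([[-2.0, 1.0],
              [1.0, -3.0]])
f = np.array([1.0, 2.0])
u0 = np.array([0.3, -0.4])

lam, X = np.linalg.eig(A)
steady = np.linalg.solve(A, f)          # steady-state part, A^{-1} f
c = np.linalg.solve(X, u0 - steady)     # constants c_m fixed by u(0)

def u(t):
    # Eq. 4.26: transient sum of c_m exp(lam_m t) x_m plus the steady state
    return X @ (c * np.exp(lam * t)) + steady

def residual(t, h=1e-6):
    dudt = (u(t + h) - u(t - h)) / (2 * h)
    return np.max(np.abs(dudt - (A @ u(t) - f)))
```

Because both eigenvalues of this A are negative, the transient dies out and u(t) approaches A⁻¹f, as the text predicts.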

Note that the steady-state solution is A⁻¹f, as might be expected. The first group of terms on the right side of this equation is referred to classically as the complementary solution or the solution of the homogeneous equations. The second group is referred to classically as the particular solution or the particular integral. In our application to fluid dynamics, it is more instructive to refer to these groups as the transient and steady-state solutions, respectively. An alternative, but entirely equivalent, form of the solution is

    u(t) = c₁ e^(λ₁t) x₁ + ⋯ + c_m e^(λ_m t) x_m + ⋯ + c_M e^(λ_M t) x_M + A⁻¹f     (4.27)

4.3 Real Space and Eigenspace

4.3.1 Definition

Following the semi-discrete approach discussed in Section 4.1, we reduce the partial differential equations to a set of ordinary differential equations represented by the generic form

    du/dt = A u − f                                       (4.28)

The dependent variable u represents some physical quantity or quantities which relate to the problem of interest. For the model problems on which we are focusing most of our attention, the elements of A are independent of both u and t. This permits us to say a great deal about such problems and serves as the basis for this section. How these relate to the numerical solutions of more practical problems depends upon the problem and, to a much greater extent, on the cleverness of the relator.

We begin by developing the concept of "spaces". That is, we identify different mathematical reference frames (spaces) and view our solutions from within each. In this way, we get different perspectives of the same solution, and this can add significantly to our understanding. The most natural reference frame is the physical one. We say

    If a solution is expressed in terms of u, it is said to be in real space.

There is, however, another very useful frame. We saw in Sections 4.2.1 and 4.2.2 that pre- and post-multiplication of A by the appropriate similarity matrices transforms A into a diagonal matrix, composed, in the most general case, of Jordan blocks or, in the simplest nondefective case, of scalars.

Following Section 4.2.4 and, for simplicity, using only complete systems for our examples, we found that Eq. 4.28 had the alternative form

    dw/dt = Λw − g

which is an uncoupled set of first-order ODE's that can be solved independently for the dependent variable vector w. We say

    If a solution is expressed in terms of w, it is said to be in eigenspace (often referred to as wave space).

The relations that transfer from one space to the other are:

    w = X⁻¹ u        g = X⁻¹ f
    u = X w          f = X g

The elements of u relate directly to the local physics of the problem. However, the elements of w are linear combinations of all of the elements of u, and individually they have no direct local physical interpretation.

When the forcing function f is independent of t, the solutions in the two spaces are represented by

    u(t) = Σ_{m=1}^{M} c_m e^(λ_m t) x_m + X Λ⁻¹ X⁻¹ f

and

    w_m(t) = c_m e^(λ_m t) + (1/λ_m) g_m ,   m = 1, 2, ..., M

for real space and eigenspace, respectively. At this point we make two observations:

1. the transient portion of the solution in real space consists of a linear combination of contributions from each eigenvector, and
2. the transient portion of the solution in eigenspace provides the weighting of each eigenvector component of the transient solution in real space.

4.3.2 Eigenvalue Spectrums for Model ODE's

It is instructive to consider the eigenvalue spectrums of the ODE's formulated by central differencing the model equations for diffusion and biconvection. These model ODE's are presented in Section 4.1. Equations for the eigenvalues of the simple tridiagonals, B(M: a, b, c) and B_p(M: a, b, c), are given in Appendix B.

From Section B.1, we find for the model diffusion equation with Dirichlet boundary conditions

    λ_m = (ν/Δx²) [ −2 + 2 cos( mπ/(M+1) ) ]
        = −(4ν/Δx²) sin²( mπ / 2(M+1) ) ,   m = 1, 2, ..., M      (4.29)

and, from Section B.4, for the model biconvection equation

    λ_m = −(ia/Δx) sin( 2πm/M )
        = −i κ*_m a ,   m = 0, 1, ..., M−1                        (4.30)

where

    κ*_m = sin( κ_m Δx ) / Δx                                     (4.31)

is the modified wavenumber from Section 3.5, κ_m = m, and Δx = 2π/M. Notice that the diffusion eigenvalues are real and negative while those representing periodic convection are all pure imaginary. The interpretation of this result plays a very important role later in our stability analysis.

4.3.3 Eigenvectors of the Model Equations

The Diffusion Model

Next we consider the eigenvectors of the two model equations. These follow as special cases from the results given in Appendix B. First, consider the model ODE's for diffusion. We present results for a simple 4-point mesh and then give the general case. The right-hand eigenvector matrix X is given by

    [ sin(x₁)  sin(2x₁)  sin(3x₁)  sin(4x₁)
      sin(x₂)  sin(2x₂)  sin(3x₂)  sin(4x₂)
      sin(x₃)  sin(2x₃)  sin(3x₃)  sin(4x₃)
      sin(x₄)  sin(2x₄)  sin(3x₄)  sin(4x₄) ]

The columns of the matrix are proportional to the eigenvectors. Recall that x_j = jΔx = jπ/(M+1), so in general the relation u = Xw can be written as

    u_j = Σ_{m=1}^{M} w_m sin( m x_j ) ,   j = 1, 2, ..., M       (4.32)
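Both spectra can be confirmed numerically against Eqs. 4.29–4.31; the sketch below builds the two model matrices for an illustrative M = 8 with unit ν and a:

```python
import numpy as np

M, nu, a = 8, 1.0, 1.0

# Diffusion: Dirichlet tridiagonal B(1,-2,1), dx = pi/(M+1) on 0 <= x <= pi
dx_d = np.pi / (M + 1)
B = (np.diag(-2.0 * np.ones(M)) + np.diag(np.ones(M - 1), 1)
     + np.diag(np.ones(M - 1), -1))
lam_d = np.sort(np.linalg.eigvalsh(nu / dx_d**2 * B))
m = np.arange(1, M + 1)
exact_d = np.sort(-4.0 * nu / dx_d**2 * np.sin(m * np.pi / (2 * (M + 1)))**2)

# Biconvection: periodic Bp(-1,0,1), dx = 2 pi/M on 0 <= x < 2 pi
dx_c = 2.0 * np.pi / M
Bp = np.zeros((M, M))
for j in range(M):
    Bp[j, (j + 1) % M], Bp[j, (j - 1) % M] = 1.0, -1.0
lam_c = np.linalg.eigvals(-a / (2.0 * dx_c) * Bp)
k = np.arange(M)
exact_c = -1j * a * np.sin(k * dx_c) / dx_c   # -i a kappa*_m, Eq. 4.30
```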

For the inverse, or left-hand eigenvector matrix X⁻¹, we find

    [ sin(x₁)  sin(x₂)  sin(x₃)  sin(x₄)
      sin(2x₁) sin(2x₂) sin(2x₃) sin(2x₄)
      sin(3x₁) sin(3x₂) sin(3x₃) sin(3x₄)
      sin(4x₁) sin(4x₂) sin(4x₃) sin(4x₄) ]

The rows of the matrix are proportional to the eigenvectors. In general, w = X⁻¹u gives

    w_m = Σ_{j=1}^{M} u_j sin( m x_j ) ,   m = 1, 2, ..., M       (4.33)

In the field of harmonic analysis, Eq. 4.33 represents a sine transform of the function u(x) for an M-point sample between the boundaries x = 0 and x = π with the condition u(0) = u(π) = 0. Similarly, Eq. 4.32 represents the sine synthesis that companions the sine transform given by Eq. 4.33. In summary, for the model diffusion equation:

    w = X⁻¹u  is a sine transform from real space to (sine) wave space, and
    u = Xw    is a sine synthesis from wave space back to real space.

The Biconvection Model

Next consider the model ODE's for periodic convection, Eq. 4.5. The coefficient matrices for these ODE's are always circulant. For our model ODE, the right-hand eigenvectors are given by

    x_m = [ e^(i j (2πm/M)) ] ,   j = 0, 1, ..., M−1 ,   m = 0, 1, ..., M−1

With x_j = jΔx = j 2π/M, we can write u = Xw as

    u_j = Σ_{m=0}^{M−1} w_m e^(i m x_j) ,   j = 0, 1, ..., M−1    (4.34)
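The consistency of the pair (4.32)–(4.33) rests on the discrete orthogonality of the sine modes. The sketch below verifies a transform/synthesis round trip; the 2/(M+1) normalization factor is the standard one for this sine transform and is stated here as an assumption, since the excerpt leaves the scaling of X⁻¹ implicit:

```python
import numpy as np

M = 6
j = np.arange(1, M + 1)
S = np.sin(np.outer(j, j) * np.pi / (M + 1))   # S[j-1, m-1] = sin(m x_j), x_j = j pi/(M+1)

u = np.random.default_rng(0).standard_normal(M)
w = 2.0 / (M + 1) * (S @ u)    # sine transform (analysis), Eq. 4.33 up to normalization
u_back = S @ w                 # sine synthesis, Eq. 4.32
```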

For a 4-point periodic mesh, we find the following left-hand eigenvector matrix from Appendix B.4:

    [ w₁ ]       [ 1  1           1            1           ] [ u₁ ]
    [ w₂ ] = 1/4 [ 1  e^(−2iπ/4)  e^(−4iπ/4)   e^(−6iπ/4)  ] [ u₂ ]  =  X⁻¹ u
    [ w₃ ]       [ 1  e^(−4iπ/4)  e^(−8iπ/4)   e^(−12iπ/4) ] [ u₃ ]
    [ w₄ ]       [ 1  e^(−6iπ/4)  e^(−12iπ/4)  e^(−18iπ/4) ] [ u₄ ]

In general

    w_m = (1/M) Σ_{j=0}^{M−1} u_j e^(−i m x_j) ,   m = 0, 1, ..., M−1      (4.35)

This equation is identical to a discrete Fourier transform of the periodic dependent variable u using an M-point sample between and including x = 0 and x = 2π − Δx.

For circulant matrices, it is straightforward to establish the fact that the relation u = Xw represents the Fourier synthesis of the variable w back to u. In summary, for any circulant system:

    w = X⁻¹u  is a complex Fourier transform from real space to wave space, and
    u = Xw    is a complex Fourier synthesis from wave space back to real space.

4.3.4 Solutions of the Model ODE's

The Diffusion Equation

We can now combine the results of the previous sections to write the solutions of our model ODE's. For the diffusion equation, Eq. 4.27 becomes

    u_j(t) = Σ_{m=1}^{M} c_m e^(λ_m t) sin( m x_j ) + (A⁻¹f)_j ,   j = 1, 2, ..., M      (4.36)

where

    λ_m = −(4ν/Δx²) sin²( mπ / 2(M+1) )
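Eq. 4.35 is exactly a discrete Fourier transform with a 1/M normalization, so it can be cross-checked against a library FFT; a sketch:

```python
import numpy as np

M = 8
x = 2.0 * np.pi * np.arange(M) / M
u = np.exp(np.sin(x))                 # arbitrary real periodic sample

# Eq. 4.35: w_m = (1/M) sum_j u_j exp(-i m x_j)
w = np.array([(u * np.exp(-1j * m * x)).sum() / M for m in range(M)])

w_fft = np.fft.fft(u) / M             # the same transform via the FFT
u_back = sum(w[m] * np.exp(1j * m * x) for m in range(M))   # synthesis, Eq. 4.34
```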

The difference between the modified wavenumber and the actual wavenumber depends on the differencing scheme and the grid resolution. This difference causes the various modes (or eigenvector components) to decay at rates which differ from the exact solution. With conventional differencing schemes, low wavenumber modes are accurately represented, while high wavenumber modes (if they have significant amplitudes) can have large errors. The modified wavenumber is an approximation to the actual wavenumber. With the modified wavenumber defined as

    κ*_m = (2/Δx) sin( κ_m Δx / 2 )                               (4.37)

and using κ_m = m, we can write the ODE solution as

    u_j(t) = Σ_{m=1}^{M} c_m e^(−ν κ*_m² t) sin( κ_m x_j ) + (A⁻¹f)_j ,   j = 1, 2, ..., M      (4.38)

This can be compared with the exact solution to the PDE, evaluated at the nodes of the grid:

    u_j(t) = Σ_{m=1}^{M} c_m e^(−ν κ_m² t) sin( κ_m x_j ) + h(x_j) ,   j = 1, 2, ..., M         (4.39)

We see that the solutions are identical except for the steady solution and the modified wavenumber in the transient term.

The Convection Equation

For the biconvection equation, we obtain

    u_j(t) = Σ_{m=0}^{M−1} c_m e^(λ_m t) e^(i κ_m x_j) ,   j = 0, 1, ..., M−1                   (4.40)

where

    λ_m = −i κ*_m a                                               (4.41)

with the modified wavenumber defined in Eq. 4.31. We can write this ODE solution as

    u_j(t) = Σ_{m=0}^{M−1} c_m e^(−i κ*_m a t) e^(i κ_m x_j) ,   j = 0, 1, ..., M−1             (4.42)
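The accuracy statement above is easy to quantify for the second-order central operator: the relative error of the modified wavenumber κ* = sin(κΔx)/Δx grows monotonically with κΔx. A sketch on an illustrative 40-point periodic grid:

```python
import numpy as np

M = 40
dx = 2.0 * np.pi / M
kappa = np.arange(1, M // 2)            # resolvable wavenumbers
kappa_star = np.sin(kappa * dx) / dx    # modified wavenumber, Eq. 4.31

rel_err = np.abs(kappa_star - kappa) / kappa
```

The lowest mode is represented to a fraction of a percent, while the highest resolvable modes are badly in error, exactly the behavior described in the text.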

Since the error in the phase speed depends on the wavenumber, while the actual phase speed is independent of the wavenumber, this leads to an error in the speed with which various modes are convected; the result is erroneous numerical dispersion. In the case of non-centered differencing, discussed in Chapter 11, the modified wavenumber is complex, and Eq. 4.42 shows that the imaginary portion of the modified wavenumber produces nonphysical decay or growth in the numerical solution.
The ODE solution, Eq. 4.42, can be compared with the exact solution of the PDE, evaluated at the nodes of the grid:

    u_j(t) = Σ_{m=0}^{M−1} f_m(0) e^(−i κ_m a t) e^(i κ_m x_j) ,   j = 0, 1, ..., M−1      (4.43)

Once again the difference appears through the modified wavenumber contained in λ_m.

4.4 The Representative Equation

In Section 4.3, we pointed out that Eqs. 4.20 and 4.23 express identical results but in terms of different groupings of the dependent variables, which are related by algebraic manipulation. This leads to the following important concept:

    The numerical solution to a set of linear ODE's (in which A is not a function of t) is entirely equivalent to the solution obtained if the equations are transformed to eigenspace, solved there in their uncoupled form, and then returned as a coupled set to real space.

The importance of this concept resides in its message that we can analyze time-marching methods by applying them to a single, uncoupled equation and our conclusions will apply in general. This is helpful both in analyzing the accuracy of time-marching methods and in studying their stability, topics which are covered in Chapters 6 and 7.

We found the uncoupled solution to a set of ODE's in Section 4.2. A typical member of the family is

    dw_m/dt = λ_m w_m − g_m(t)                            (4.44)

The goal in our analysis is to study typical behavior of general situations, not particular problems. The role of λ_m is clear: it stands for some representative eigenvalue in the original A matrix. Our next objective is to find a "typical" single ODE to analyze. However, the form of Eq. 4.44 is not quite satisfactory.

The question is: What should we use for g_m(t) when the time dependence cannot be ignored? To answer this question, we note that, in principle, one can express any one of the forcing terms g_m(t) as a finite Fourier series. For example,

    −g(t) = Σ_k a_k e^(ikt)

for which Eq. 4.44 has the exact solution:

    w(t) = c e^(λt) + Σ_k a_k e^(ikt) / (ik − λ)

From this we can extract the k'th term and replace ik with μ. This leads to

The Representative ODE

    dw/dt = λw + a e^(μt)                                 (4.45)

which can be used to evaluate all manner of time-marching methods. In such evaluations the parameters λ and μ must be allowed to take the worst possible combination of values that might occur in the ODE eigensystem. The exact solution of the representative ODE is (for λ ≠ μ):

    w(t) = c e^(λt) + a e^(μt) / (μ − λ)                  (4.46)

4.5 Problems

1. Consider the finite-difference operator derived in question 1 of Chapter 3. Using this operator to approximate the spatial derivative in the linear convection equation, write the semi-discrete form obtained with periodic boundary conditions on a 5-point grid (M = 5).

2. Write the semi-discrete form resulting from the application of second-order centered differences to the following equation on the domain 0 ≤ x ≤ 1 with boundary conditions u(0) = 0, u(1) = 1:

    ∂u/∂t = ∂²u/∂x² − 6x

3. Consider the application of the operator given in Eq. 3.52 to the 1-D diffusion equation with Dirichlet boundary conditions. Write the resulting semi-discrete ODE form. Find the entries in the boundary-condition vector.
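The exact solution (4.46) is used throughout the following chapters; the sketch below confirms it by direct substitution for illustrative complex λ and μ:

```python
import numpy as np

lam = -1.0 + 2.0j   # representative eigenvalue (illustrative)
mu = 0.5j           # forcing frequency (illustrative)
a, c = 0.8, 1.5

def w(t):
    # Eq. 4.46: w(t) = c exp(lam t) + a exp(mu t)/(mu - lam)
    return c * np.exp(lam * t) + a * np.exp(mu * t) / (mu - lam)

def residual(t, h=1e-6):
    dwdt = (w(t + h) - w(t - h)) / (2 * h)
    return abs(dwdt - (lam * w(t) + a * np.exp(mu * t)))
```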

4. Consider a grid with 10 interior points spanning the domain 0 ≤ x ≤ π. For initial conditions u(x, 0) = sin(mx) and boundary conditions u(0, t) = u(π, t) = 0, plot the exact solution of the diffusion equation with ν = 1 at t = 1 with m = 1 and m = 3. (Plot the solution at the grid nodes only.) Calculate the corresponding modified wavenumbers for the second-order centered operator from Eq. 4.37. Calculate and plot the corresponding ODE solutions.

5. Consider the matrix A = −B_p(10: −1, 0, 1)/(2Δx) corresponding to the ODE form of the biconvection equation resulting from the application of second-order central differencing on a 10-point grid. Note that the domain is 0 ≤ x ≤ 2π and Δx = 2π/10. The grid nodes are given by x_j = jΔx, j = 0, 1, ..., 9. The eigenvalues of the above matrix A, as well as the matrices X and X⁻¹, can be found from Appendix B.4. Using these, compute and plot the ODE solution at t = 2π for the initial condition u(x, 0) = sin x. Compare with the exact solution of the PDE. Calculate the numerical phase speed from the modified wavenumber corresponding to this initial condition and show that it is consistent with the ODE solution. Repeat for the initial condition u(x, 0) = sin 2x.


Chapter 5 FINITE-VOLUME METHODS

In Chapter 3, we saw how to derive finite-difference approximations to arbitrary derivatives. In Chapter 4, we saw that the application of a finite-difference approximation to the spatial derivatives in our model PDE's produces a coupled set of ODE's. In this chapter, we will show how similar semi-discrete forms can be derived using finite-volume approximations in space.

Finite-volume methods have become popular in CFD as a result, primarily, of two advantages. First, they ensure that the discretization is conservative, i.e., mass, momentum, and energy are conserved in a discrete sense. While this property can usually be obtained using a finite-difference formulation, it is obtained naturally from a finite-volume formulation. Second, finite-volume methods do not require a coordinate transformation in order to be applied on irregular meshes. As a result, they can be applied on unstructured meshes consisting of arbitrary polyhedra in three dimensions or arbitrary polygons in two dimensions. This increased flexibility can be used to great advantage in generating grids about arbitrary geometries.

Finite-volume methods are applied to the integral form of the governing equations, either in the form of Eq. 2.1 or Eq. 2.2. Consistent with our emphasis on semi-discrete methods, we will study the latter form, which is

    d/dt ∫_{V(t)} Q dV + ∮_{S(t)} n·F dS = ∫_{V(t)} P dV          (5.1)

We will begin by presenting the basic concepts which apply to finite-volume strategies. Next we will give our model equations in the form of Eq. 5.1. This will be followed by several examples which hopefully make these concepts clear.

5.1 Basic Concepts

The basic idea of a finite-volume method is to satisfy the integral form of the conservation law to some degree of approximation for each of many contiguous control volumes which cover the domain of interest. Thus the volume V in Eq. 5.1 is that of a control volume whose shape is dependent on the nature of the grid. In our examples, we will consider only control volumes which do not vary with time. Examining Eq. 5.1, we see that several approximations must be made. The flux is required at the boundary of the control volume, which is a closed surface in three dimensions and a closed contour in two dimensions. This flux must then be integrated to find the net flux through the boundary. Similarly, the source term P must be integrated over the control volume. Next a time-marching method¹ can be applied to find the value of

    ∫_V Q dV                                              (5.2)

at the next time step.

Let us consider these approximations in more detail. First, we note that the average value of Q in a cell with volume V is

    Q̄ ≡ (1/V) ∫_V Q dV                                    (5.3)

and Eq. 5.1 can be written as

    V dQ̄/dt + ∮_S n·F dS = ∫_V P dV                       (5.4)

for a control volume which does not vary with time. Thus after applying a time-marching method, we have updated values of the cell-averaged quantities Q̄. In order to evaluate the fluxes, which are a function of Q, at the control-volume boundary, Q can be represented within the cell by some piecewise approximation which produces the correct value of Q̄. This is a form of interpolation often referred to as reconstruction. As we shall see in our examples, each cell will have a different piecewise approximation to Q. When these are used to calculate F(Q), they will generally produce different approximations to the flux at the boundary between two control volumes, that is, the flux will be discontinuous. A nondissipative scheme analogous to centered differencing is obtained by taking the average of these two fluxes. Another approach known as flux-difference splitting is described in Chapter 11.

The basic elements of a finite-volume method are thus the following:

¹ Time-marching methods will be discussed in the next chapter.

1. Given the value of Q̄ for each control volume, construct an approximation to Q(x, y, z) in each control volume. Using this approximation, find Q at the control-volume boundary. Evaluate F(Q) at the boundary. Since there is a distinct approximation to Q(x, y, z) in each control volume, two distinct values of the flux will generally be obtained at any point on the boundary between two control volumes.

2. Apply some strategy for resolving the discontinuity in the flux at the control-volume boundary to produce a single value of F(Q) at any point on the boundary. This issue is discussed in Section 11.4.

3. Integrate the flux to find the net flux through the control-volume boundary using some sort of quadrature.

4. Advance the solution in time to obtain new values of Q̄.

The order of accuracy of the method is dependent on each of the approximations. These ideas should be clarified by the examples in the remainder of this chapter.

In order to include diffusive fluxes, the following relation between ∇Q and Q is sometimes used:

    ∫_V ∇Q dV = ∮_S nQ dS                                 (5.5)

or, in two dimensions,

    ∫_A ∇Q dA = ∮_C nQ dl                                 (5.6)

where the unit vector n points outward from the surface or contour.

5.2 Model Equations in Integral Form

5.2.1 The Linear Convection Equation

A two-dimensional form of the linear convection equation can be written as

    ∂u/∂t + a cosθ ∂u/∂x + a sinθ ∂u/∂y = 0               (5.7)

This PDE governs a simple plane wave convecting the scalar quantity, u(x, y, t), with speed a along a straight line making an angle θ with respect to the x-axis. The one-dimensional form is recovered with θ = 0.

For unit speed a, the two-dimensional linear convection equation is obtained from the general divergence form, Eq. 2.3, with

    Q = u                                                 (5.8)
    F = iu cosθ + ju sinθ                                 (5.9)
    P = 0                                                 (5.10)

Substituting these expressions into a two-dimensional form of Eq. 2.2 gives the following integral form

    d/dt ∫_A u dA + ∮_C n·(iu cosθ + ju sinθ) ds = 0      (5.11)

where A is the area of the cell which is bounded by the closed contour C.

5.2.2 The Diffusion Equation

The integral form of the two-dimensional diffusion equation with no source term and unit diffusion coefficient is obtained from the general divergence form, Eq. 2.3, with

    Q = u                                                 (5.12)
    F = −∇u                                               (5.13)
      = −( i ∂u/∂x + j ∂u/∂y )                            (5.14)
    P = 0                                                 (5.15)

Using these, we find

    d/dt ∫_A u dA = ∮_C n·( i ∂u/∂x + j ∂u/∂y ) ds        (5.16)

to be the integral form of the two-dimensional diffusion equation.

5.3 One-Dimensional Examples

We restrict our attention to a scalar dependent variable u and a scalar flux f, as in the model equations. We consider an equispaced grid with spacing Δx. The nodes of the grid are located at x_j = jΔx as usual. Control volume j extends from x_j − Δx/2 to x_j + Δx/2, as shown in Fig. 5.1. We will use the following notation:

    x_{j−1/2} = x_j − Δx/2 ,   x_{j+1/2} = x_j + Δx/2     (5.17)

[Figure 5.1: Control volume in one dimension.]

    u_{j±1/2} = u(x_{j±1/2}) ,   f_{j±1/2} = f(u_{j±1/2})         (5.18)

With these definitions, the cell-average value becomes

    ū_j(t) ≡ (1/Δx) ∫_{x_{j−1/2}}^{x_{j+1/2}} u(x, t) dx          (5.19)

and the integral form becomes

    d/dt ( Δx ū_j ) + f_{j+1/2} − f_{j−1/2} = ∫_{x_{j−1/2}}^{x_{j+1/2}} P dx      (5.20)

Now with ξ = x − x_j, we can expand u(x) in Eq. 5.19 in a Taylor series about x_j (with t fixed) to get

    ū_j ≡ (1/Δx) ∫_{−Δx/2}^{Δx/2} [ u_j + ξ (∂u/∂x)_j + (ξ²/2)(∂²u/∂x²)_j + (ξ³/6)(∂³u/∂x³)_j + ⋯ ] dξ
        = u_j + (Δx²/24)(∂²u/∂x²)_j + (Δx⁴/1920)(∂⁴u/∂x⁴)_j + O(Δx⁶)             (5.21)

or

    ū_j = u_j + O(Δx²)                                            (5.22)

where u_j is the value at the center of the cell. Hence the cell-average value and the value at the center of the cell differ by a term of second order.

5.3.1 A Second-Order Approximation to the Convection Equation

In one dimension, the integral form of the linear convection equation, Eq. 5.11, becomes

    Δx dū_j/dt + f_{j+1/2} − f_{j−1/2} = 0                        (5.23)
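Eq. 5.22 can be confirmed with a one-line experiment: for u = sin x the cell average is known in closed form, and its difference from the center value shrinks as Δx², with the Δx² u″/24 leading term of Eq. 5.21. A sketch:

```python
import numpy as np

xj = 1.0

def cell_average(dx):
    # exact cell average of u = sin x over [xj - dx/2, xj + dx/2]
    return (np.cos(xj - dx / 2) - np.cos(xj + dx / 2)) / dx

err1 = cell_average(0.1) - np.sin(xj)    # ubar_j - u_j at dx = 0.1
err2 = cell_average(0.05) - np.sin(xj)   # same at dx = 0.05
ratio = err1 / err2                      # should approach 4 (second order)
pred = -np.sin(xj) * 0.1**2 / 24.0       # leading term (dx^2/24) u''
```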

with f = u. We choose a piecewise constant approximation to u(x) in each cell such that

    u(x) = ū_j ,   x_{j−1/2} ≤ x ≤ x_{j+1/2}              (5.24)

Evaluating this at j + 1/2 gives

    f^L_{j+1/2} = f(u^L_{j+1/2}) = u^L_{j+1/2} = ū_j      (5.25)

where the L indicates that this approximation to f_{j+1/2} is obtained from the approximation to u(x) in the cell to the left of x_{j+1/2}, as shown in Fig. 5.1. The cell to the right of x_{j+1/2}, which is cell j + 1, gives

    f^R_{j+1/2} = ū_{j+1}                                 (5.26)

Similarly, cell j is the cell to the right of x_{j−1/2}, giving

    f^R_{j−1/2} = ū_j                                     (5.27)

and cell j − 1 is the cell to the left of x_{j−1/2}, giving

    f^L_{j−1/2} = ū_{j−1}                                 (5.28)

We have now accomplished the first step from the list in Section 5.1; we have defined the fluxes at the cell boundaries in terms of the cell-average data. In this example, the discontinuity in the flux at the cell boundary is resolved by taking the average of the fluxes on either side of the boundary. Thus

    f̂_{j+1/2} = (1/2)( f^L_{j+1/2} + f^R_{j+1/2} ) = (1/2)( ū_j + ū_{j+1} )      (5.29)

and

    f̂_{j−1/2} = (1/2)( f^L_{j−1/2} + f^R_{j−1/2} ) = (1/2)( ū_{j−1} + ū_j )      (5.30)

where f̂ denotes a numerical flux which is an approximation to the exact flux. Substituting Eqs. 5.29 and 5.30 into the integral form, Eq. 5.23, we obtain

    Δx dū_j/dt + (1/2)( ū_j + ū_{j+1} ) − (1/2)( ū_{j−1} + ū_j )
        = Δx dū_j/dt + (1/2)( ū_{j+1} − ū_{j−1} ) = 0     (5.31)

With periodic boundary conditions, this point operator produces the following semi-discrete form:

    dū/dt = −(1/2Δx) B_p(−1, 0, 1) ū                      (5.32)
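The averaged-flux scheme can be coded in a few lines and checked against the derivative it approximates: applied to samples of sin x it reproduces −cos x with second-order convergence. A sketch (unit wave speed; nodal samples of sin x stand in for the cell averages, which agree to the order being tested):

```python
import numpy as np

def fv2_rhs(ubar, dx):
    # Eq. 5.31: dx * d(ubar_j)/dt = -(fhat_{j+1/2} - fhat_{j-1/2})
    fhat_right = 0.5 * (ubar + np.roll(ubar, -1))   # (ubar_j + ubar_{j+1})/2, Eq. 5.29
    fhat_left = np.roll(fhat_right, 1)              # (ubar_{j-1} + ubar_j)/2, Eq. 5.30
    return -(fhat_right - fhat_left) / dx

def max_err(M):
    dx = 2.0 * np.pi / M
    x = dx * np.arange(M)
    return np.max(np.abs(fv2_rhs(np.sin(x), dx) + np.cos(x)))

ratio = max_err(32) / max_err(64)   # should approach 2^2 = 4
```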

This is identical to the expression obtained using second-order centered differences, except it is written in terms of the cell average ū rather than the nodal values u. Hence our analysis and understanding of the eigensystem of the matrix B_p(−1, 0, 1) is relevant to finite-volume methods as well as finite-difference methods. Since the eigenvalues of B_p(−1, 0, 1) are pure imaginary, we can conclude that the use of the average of the fluxes on either side of the cell boundary, as in Eqs. 5.29 and 5.30, can lead to a nondissipative finite-volume method.

5.3.2 A Fourth-Order Approximation to the Convection Equation

Let us replace the piecewise constant approximation in Section 5.3.1 with a piecewise quadratic approximation as follows:

u(ξ) = aξ² + bξ + c   (5.33)

where ξ is again equal to x − x_j. The three parameters a, b, and c are chosen to satisfy the following constraints:

(1/Δx) ∫_{−3Δx/2}^{−Δx/2} u(ξ) dξ = ū_{j-1}
(1/Δx) ∫_{−Δx/2}^{Δx/2} u(ξ) dξ = ū_j   (5.34)
(1/Δx) ∫_{Δx/2}^{3Δx/2} u(ξ) dξ = ū_{j+1}

These constraints lead to

a = (ū_{j+1} − 2ū_j + ū_{j-1}) / (2Δx²)
b = (ū_{j+1} − ū_{j-1}) / (2Δx)   (5.35)
c = (−ū_{j+1} + 26ū_j − ū_{j-1}) / 24

With these values of a, b, and c, the piecewise quadratic approximation produces the following values at the cell boundaries:

u^L_{j+1/2} = (1/6)(2ū_{j+1} + 5ū_j − ū_{j-1})   (5.36)

u^R_{j+1/2} = (1/6)(−ū_{j+2} + 5ū_{j+1} + 2ū_j)   (5.37)
u^L_{j-1/2} = (1/6)(2ū_j + 5ū_{j-1} − ū_{j-2})   (5.38)
u^R_{j-1/2} = (1/6)(−ū_{j+1} + 5ū_j + 2ū_{j-1})   (5.39)

using the notation defined in Section 5.3.1. Recalling that f = u, we again use the average of the fluxes on either side of the boundary to obtain

f̂_{j+1/2} = (1/2)[f(u^L_{j+1/2}) + f(u^R_{j+1/2})] = (1/12)(−ū_{j+2} + 7ū_{j+1} + 7ū_j − ū_{j-1})   (5.40)

and

f̂_{j-1/2} = (1/2)[f(u^L_{j-1/2}) + f(u^R_{j-1/2})] = (1/12)(−ū_{j+1} + 7ū_j + 7ū_{j-1} − ū_{j-2})   (5.41)

Substituting these expressions into the integral form, Eq. 5.23, gives

Δx dū_j/dt + (1/12)(−ū_{j+2} + 8ū_{j+1} − 8ū_{j-1} + ū_{j-2}) = 0   (5.42)

This is a fourth-order approximation to the integral form of the equation, as can be verified using Taylor series expansions (see question 1 at the end of this chapter). With periodic boundary conditions, the following semi-discrete form is obtained:

dū/dt = −(1/(12Δx)) B_p(1, −8, 0, 8, −1) ū   (5.43)

This is a system of ODE's governing the evolution of the cell-average data.

5.3.3 A Second-Order Approximation to the Diffusion Equation

In this section, we describe two approaches to deriving a finite-volume approximation to the diffusion equation. The first approach is simpler to extend to multidimensions, while the second approach is more suited to extension to higher order accuracy.
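A quick numerical check of the fourth-order claim (our own sketch, not from the text): apply the stencil of Eq. 5.42 to exact cell averages of sin(2πx) and compare with the exact flux difference (u(x_{j+1/2}) − u(x_{j-1/2}))/Δx. Halving the mesh spacing should reduce the error by roughly sixteen.

```python
import numpy as np

def error(m):
    dx = 1.0 / m
    faces = np.linspace(0.0, 1.0, m + 1)
    # exact cell averages of u = sin(2*pi*x)
    ubar = (np.cos(2*np.pi*faces[:-1]) - np.cos(2*np.pi*faces[1:])) / (2*np.pi*dx)
    # fourth-order flux-difference stencil of Eq. 5.42 (periodic)
    stencil = (-np.roll(ubar, -2) + 8*np.roll(ubar, -1)
               - 8*np.roll(ubar, 1) + np.roll(ubar, 2)) / (12*dx)
    # exact flux difference across each cell
    exact = (np.sin(2*np.pi*faces[1:]) - np.sin(2*np.pi*faces[:-1])) / dx
    return np.abs(stencil - exact).max()

print(error(40) / error(80))   # approximately 16 => fourth order
```

This is the numerical counterpart of the Taylor-series verification requested in question 1 of Section 5.5.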

In one dimension, the integral form of the diffusion equation, Eq. 5.11, becomes

Δx dū_j/dt + f_{j+1/2} − f_{j-1/2} = 0   (5.44)

with f = −∇u = −∂u/∂x. Also, Eq. 5.6 becomes

∫_a^b (∂u/∂x) dx = u(b) − u(a)   (5.45)

We can thus write the following expression for the average value of the gradient of u over the interval x_j ≤ x ≤ x_{j+1}:

(1/Δx) ∫_{x_j}^{x_{j+1}} (∂u/∂x) dx = (1/Δx)(u_{j+1} − u_j)   (5.46)

From Eq. 5.22, we know that the value of a continuous function at the center of a given interval is equal to the average value of the function over the interval to second-order accuracy. Hence, to second order, we can write

f̂_{j+1/2} = −(∂u/∂x)_{j+1/2} = −(1/Δx)(ū_{j+1} − ū_j)   (5.47)

Similarly,

f̂_{j-1/2} = −(1/Δx)(ū_j − ū_{j-1})   (5.48)

Substituting these into the integral form, Eq. 5.44, we obtain

Δx dū_j/dt = (1/Δx)(ū_{j-1} − 2ū_j + ū_{j+1})   (5.49)

or, with Dirichlet boundary conditions,

dū/dt = (1/Δx²)[ B(1, −2, 1) ū + (bc) ]   (5.50)

where (bc) is the boundary-condition vector. This provides a semi-discrete finite-volume approximation to the diffusion equation, and we see that the properties of the matrix B(1, −2, 1) are relevant to the study of finite-volume methods as well as finite-difference methods.

For our second approach, we use a piecewise quadratic approximation as in Section 5.3.2. From Eq. 5.33 we have

∂u/∂x = ∂u/∂ξ = 2aξ + b   (5.51)

with a and b given in Eq. 5.35. With f = −∂u/∂x, this gives

f^R_{j+1/2} = f^L_{j+1/2} = −(1/Δx)(ū_{j+1} − ū_j)   (5.52)
f^R_{j-1/2} = f^L_{j-1/2} = −(1/Δx)(ū_j − ū_{j-1})   (5.53)

Notice that there is no discontinuity in the flux at the cell boundary. This produces

dū_j/dt = (1/Δx²)(ū_{j-1} − 2ū_j + ū_{j+1})   (5.54)

which is identical to Eq. 5.49. The resulting semi-discrete form with periodic boundary conditions is

dū/dt = (1/Δx²) B_p(1, −2, 1) ū   (5.55)

which is written entirely in terms of cell-average data.

5.4 A Two-Dimensional Example

The above one-dimensional examples of finite-volume approximations obscure some of the practical aspects of such methods. Thus our final example is a finite-volume approximation to the two-dimensional linear convection equation on a grid consisting of regular triangles, as shown in Figure 5.2. As in Section 5.3.1, we use a piecewise constant approximation in each control volume, and the flux at the control volume boundary is the average of the fluxes obtained on either side of the boundary. The nodal data are stored at the vertices of the triangles formed by the grid. The control volumes are regular hexagons with area A, Δ is the length of the sides of the triangles, and ℓ is the length of the sides of the hexagons. The following relations hold between ℓ, Δ, and A:

ℓ = Δ/√3,   A = (3√3/2)ℓ²,   ℓ/A = 2/(3Δ)   (5.56)

The two-dimensional form of the conservation law is

(d/dt) ∫_A Q dA + ∮_C n·F dl = 0   (5.57)

where we have ignored the source term.
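The remark above — that the properties of B(1, −2, 1) carry over to finite-volume methods — can be illustrated numerically. The sketch below (ours, not from the text) builds the periodic operator of Eq. 5.55 and confirms that its eigenvalues are real and non-positive, as expected of a pure diffusion operator.

```python
import numpy as np

M, dx = 16, 0.1
B = np.zeros((M, M))
for j in range(M):
    # periodic tridiagonal Bp(1, -2, 1)
    B[j, j] = -2.0
    B[j, (j - 1) % M] = 1.0
    B[j, (j + 1) % M] = 1.0

eig = np.linalg.eigvals(B / dx**2)
print(np.abs(eig.imag).max())   # ~0: the eigenvalues are real
print(eig.real.max())           # ~0 (the constant mode); all others negative
```

The zero eigenvalue corresponds to the constant mode, which pure diffusion with periodic boundaries leaves unchanged; every other mode decays.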

Figure 5.2: Triangular grid. Nodes a–f surround node p; the hexagonal control volume has sides numbered 0–5, side length ℓ, and triangle side length Δ.

The contour in the line integral is composed of the sides of the hexagon. Since these sides are all straight, the unit normals can be taken outside the integral and the flux balance is given by

(d/dt) ∫_A Q dA + Σ_{ν=0}^{5} n_ν · ∫_ν F dl = 0   (5.58)

where ν indexes a side of the hexagon, as shown in Figure 5.2. A list of the normals for the mesh orientation shown is given in Table 5.1.

Table 5.1. Outward normals; see Fig. 5.2. i and j are unit normals along x and y, respectively.

  Side ν   Outward Normal n_ν
  0        (i − √3 j)/2
  1        i
  2        (i + √3 j)/2
  3        (−i + √3 j)/2
  4        −i
  5        (−i − √3 j)/2

For Eq. 5.57 applied to the two-dimensional linear convection equation, we have for side ν

n_ν · ∫_ν F dl = n_ν · (i cos θ + j sin θ) ∫_{−ℓ/2}^{ℓ/2} u(ξ) dξ   (5.59)

where ξ is a length measured from the middle of side ν. Making the change of variable z = ξ/ℓ, one has the expression ∫_{−ℓ/2}^{ℓ/2} u(ξ) dξ = ℓ ∫_{−1/2}^{1/2} u(z) dz, so that

(d/dt) ∫_A u dA + Σ_{ν=0}^{5} n_ν · (i cos θ + j sin θ) ℓ ∫_{−1/2}^{1/2} u(z) dz = 0   (5.60)

The values of n_ν · (i cos θ + j sin θ) are given by the expressions in Table 5.2. There are no numerical approximations in Eq. 5.60. That is, if the integrals in the equation are evaluated exactly, the integrated time rate of change of the integral of u over the area of the hexagon is known exactly.

Table 5.2. Weights of flux integrals; see Eq. 5.60.

  Side ν   n_ν · (i cos θ + j sin θ)
  0        (cos θ − √3 sin θ)/2
  1        cos θ
  2        (cos θ + √3 sin θ)/2
  3        (−cos θ + √3 sin θ)/2
  4        −cos θ
  5        (−cos θ − √3 sin θ)/2

Introducing the cell average,

(1/A) ∫_A u dA = ū_p   (5.61)

and the piecewise-constant approximation u = ū_p over the entire hexagon, the approximation to the flux integral becomes trivial. Taking the average of the flux on either side of each edge of the hexagon gives for edge 1:

∫_1 u(z) dz = ∫_{−1/2}^{1/2} [(ū_p + ū_a)/2] dz = (ū_p + ū_a)/2   (5.62)

Similarly, we have for the other five edges:

∫_2 u(z) dz = (ū_p + ū_b)/2   (5.63)
∫_3 u(z) dz = (ū_p + ū_c)/2   (5.64)
∫_4 u(z) dz = (ū_p + ū_d)/2   (5.65)
∫_5 u(z) dz = (ū_p + ū_e)/2   (5.66)
∫_0 u(z) dz = (ū_p + ū_f)/2   (5.67)

Substituting these into Eq. 5.60, along with the expressions in Table 5.2, we obtain

A dū_p/dt + (ℓ/4)[(2 cos θ)(ū_a − ū_d) + (cos θ + √3 sin θ)(ū_b − ū_e) + (−cos θ + √3 sin θ)(ū_c − ū_f)] = 0   (5.68)

or

dū_p/dt + (1/(6Δ))[(2 cos θ)(ū_a − ū_d) + (cos θ + √3 sin θ)(ū_b − ū_e) + (−cos θ + √3 sin θ)(ū_c − ū_f)] = 0   (5.69)

The reader can verify, using Taylor series expansions, that this is a second-order approximation to the integral form of the two-dimensional linear convection equation.

5.5 Problems

1. Use Taylor series to verify that Eq. 5.42 is a fourth-order approximation to Eq. 5.23.

2. Find the semi-discrete ODE form governing the cell-average data resulting from the use of a linear approximation in developing a finite-volume method for the linear convection equation. Use the following linear approximation:

u(ξ) = aξ + b
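A minimal consistency check of the hexagon scheme (our sketch, not from the text; the node labeling — a through f at angles 0°, 60°, …, 300° around p at distance Δ — and the 1/(6Δ) coefficient follow our reading of Fig. 5.2 and Eq. 5.69): a second-order scheme must reproduce cos θ · u_x + sin θ · u_y exactly for linear data.

```python
import numpy as np

def residual(theta, alpha, beta, Delta=0.7):
    # neighbors a..f at angles 0, 60, ..., 300 degrees, distance Delta from p
    ang = np.deg2rad(np.arange(0, 360, 60))
    # linear field u = alpha*x + beta*y, with node p at the origin (u_p = 0)
    u = alpha * Delta * np.cos(ang) + beta * Delta * np.sin(ang)
    a, b, c, d, e, f = u
    co, si = np.cos(theta), np.sin(theta)
    s3 = np.sqrt(3.0)
    # bracketed term of Eq. 5.69, scaled by 1/(6*Delta)
    return (1.0/(6*Delta)) * (2*co*(a - d) + (co + s3*si)*(b - e)
                              + (-co + s3*si)*(c - f))

theta, alpha, beta = 0.3, 1.7, -0.4
# for linear u the residual should equal cos(theta)*u_x + sin(theta)*u_y
print(residual(theta, alpha, beta) - (np.cos(theta)*alpha + np.sin(theta)*beta))
```

The difference printed is zero to machine precision, confirming that the stencil is exact for linear fields regardless of the convection angle θ.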

where b = ū_j and

a = (ū_{j+1} − ū_{j-1}) / (2Δx)

and use the average flux at the cell interface.

3. Using the first approach given in Section 5.3.3, derive a finite-volume approximation to the spatial terms in the two-dimensional diffusion equation on a square grid.

4. Repeat question 3 for a grid consisting of equilateral triangles.

Chapter 6

TIME-MARCHING METHODS FOR ODE'S

After discretizing the spatial derivatives in the governing PDE's (such as the Navier-Stokes equations), we obtain a coupled system of nonlinear ODE's in the form

du/dt = F(u, t)   (6.1)

These can be integrated in time using a time-marching method to obtain a time-accurate solution to an unsteady flow problem. For a steady flow problem, spatial discretization leads to a coupled system of nonlinear algebraic equations in the form

F(u) = 0   (6.2)

As a result of the nonlinearity of these equations, some sort of iterative method is required to obtain a solution. For example, one can consider the use of Newton's method, which is widely used for nonlinear algebraic equations (see Section 6.10.3). This produces an iterative method in which a coupled system of linear algebraic equations must be solved at each iteration. These can be solved iteratively using relaxation methods, which will be discussed in Chapter 9, or directly using Gaussian elimination or some variation thereof.

Alternatively, one can consider a time-dependent path to the steady state and use a time-marching method to integrate the unsteady form of the equations until the solution is sufficiently close to the steady solution. When using a time-marching method to compute steady flows, the goal is simply to remove the transient portion of the solution as quickly as possible; time-accuracy is not required. This motivates the study of stability and stiffness, topics which are covered in the next two chapters. The subject of the present chapter, time-marching methods for ODE's, is thus relevant to both steady and unsteady flow problems.

6. we reduce our PDE to a set of coupled ODE's represented in general by Eq. We then face a further task concerning the numerical stability of the resulting methods.1 0 un . etc. we developed exact solutions to our model PDE's and ODE's. or the (n) superscript. Our rst task is to nd numerical approximations that can be used to carry out the time integration of Eq. we introduced the convention that the n subscript. 6.4) un+1 = f 1 u0n+1 0u0n . The methods we study are to be applied to linear or nonlinear ODE's. but the methods themselves are formed by linear combinations of the dependent variable and its derivative at various time intervals. always points to a discrete time value. the reader should recall the arguments made in Chapter 4 to justify the study of a scalar ODE.3 gives u0n = Fn = F (un tn) tn = nh Often we need a more sophisticated notation for intermediate time steps involving temporary calculations denoted by u.1 un.3 to some given accuracy. Application of a time-marching method to an ODE produces an ordinary di erence equation (O E ). for the purpose of this chapter.1.1 . we will develop exact solutions for the model O E's arising from the application of timemarching methods to the model ODE's. In earlier chapters.86 CHAPTER 6.1 Notation Using the semi-discrete approach. 6. They are both commonly used in the literature on ODE's. where accuracy can be measured either in a local or a global sense. and. In this chapter we will present some basic theory of linear O E's which closely parallels that for linear ODE's. For these we use the notation ~ ~ u0n+ = Fn+ = F (~n+ tn + h) ~ u The choice of u0 or F to express the derivative in a scheme is arbitrary. 4. However. Combining this notation with Eq. using this theory. In Chapter 2. They are represented conceptually by (6. TIME-MARCHING METHODS FOR ODE'S Application of a spatial discretization to a PDE produces a coupled system of ODE's.3) dt Although we use u to represent the dependent variable. u.1u0n. 
but we postpone such considerations to the next chapter. we need only consider the scalar case du = u0 = F (u t) (6. rather than w. and h represents the time interval t.

With an appropriate choice of the α's and β's, these methods can be constructed to give a local Taylor series accuracy of any order. The methods are said to be explicit if β₁ = 0 and implicit otherwise. An explicit method is one in which the new predicted solution is only a function of known data, for example, u′_n, u′_{n−1}, u_n, and u_{n−1} for a method using two previous time levels, and therefore the time advance is simple. For an implicit method, the new predicted solution is also a function of the time derivative at the new time level, that is, u′_{n+1}. As we shall see, implicit methods require more complicated strategies to solve for u_{n+1} than explicit methods, for systems of ODE's and nonlinear problems.

6.2 Converting Time-Marching Methods to OΔE's

Examples of some very common forms of methods used for time-marching general ODE's are:

u_{n+1} = u_n + hu′_n   (6.5)

u_{n+1} = u_n + hu′_{n+1}   (6.6)

and

ũ_{n+1} = u_n + hu′_n
u_{n+1} = (1/2)[ u_n + ũ_{n+1} + hũ′_{n+1} ]   (6.7)

According to the conditions presented under Eq. 6.4, the first and third of these are examples of explicit methods. We refer to them as the explicit Euler method and the MacCormack predictor-corrector method,¹ respectively. The second is implicit and referred to as the implicit (or backward) Euler method. These methods are simple recipes for the time advance of a function in terms of its value, and the value of its derivative, at given time intervals.

The material presented in Chapter 4 develops a basis for evaluating such methods by introducing the concept of the representative equation

du/dt = u′ = λu + a e^{μt}   (6.8)

written here in terms of the dependent variable u. The value of this equation arises from the fact that, by applying a time-marching method, we can analytically convert such a linear ODE into a linear OΔE.

¹ Here we give only MacCormack's time-marching method. The method commonly referred to as MacCormack's method, which is a fully-discrete method, will be presented in Section 11.3.

This is demonstrated by repeating some of the developments in Section 4.2, but for difference rather than differential equations. Difference equations are subject to a whole body of analysis that is similar in many respects to, and just as powerful as, the theory of ODE's. We next consider examples of this conversion process and then go into the general theory on solving OΔE's.

Apply the simple explicit Euler scheme, Eq. 6.5, to Eq. 6.8. There results

u_{n+1} = u_n + h(λu_n + a e^{μhn})

or

u_{n+1} − (1 + λh)u_n = h a e^{μhn}   (6.9)

Eq. 6.9 is a linear OΔE, with constant coefficients, expressed in terms of the dependent variable u_n and the independent variable n. As another example, applying the implicit Euler method, Eq. 6.6, to Eq. 6.8, we find

u_{n+1} = u_n + h(λu_{n+1} + a e^{μh(n+1)})

or

(1 − λh)u_{n+1} − u_n = h e^{μh} · a e^{μhn}   (6.10)

As a final example, the predictor-corrector sequence, Eq. 6.7, gives

ũ_{n+1} − (1 + λh)u_n = h a e^{μhn}
−(1/2)(1 + λh)ũ_{n+1} + u_{n+1} − (1/2)u_n = (1/2) h a e^{μh(n+1)}   (6.11)

which is a coupled set of linear OΔE's with constant coefficients. Note that the first line in Eq. 6.11 is identical to Eq. 6.9, since the predictor step in Eq. 6.7 is simply the explicit Euler method. The second line in Eq. 6.11 is obtained by noting that

ũ′_{n+1} = F(ũ_{n+1}, t_n + h) = λũ_{n+1} + a e^{μh(n+1)}   (6.12)

Now we need to develop techniques for analyzing these difference equations so that we can compare the merits of the time-marching methods that generated them.

6.3 Solution of Linear OΔE's With Constant Coefficients

The techniques for solving linear difference equations with constant coefficients are as well developed as those for ODE's, and the theory follows a remarkably parallel path.
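The practical payoff of the conversion process can be seen by simply marching the three methods on the representative ODE. The sketch below (ours, not from the text; the parameter values are arbitrary choices) integrates u′ = λu + ae^{μt} with the explicit Euler method and the MacCormack predictor-corrector, and compares global errors at t = 1 under step halving.

```python
import numpy as np

lam, mu, a = -1.0, 0.25, 1.0

def exact(t, u0):
    # general solution of Eq. 6.8: u = c*exp(lam*t) + a*exp(mu*t)/(mu - lam)
    c = u0 - a / (mu - lam)
    return c * np.exp(lam * t) + a * np.exp(mu * t) / (mu - lam)

def march(h, corrector):
    n = int(round(1.0 / h))
    u = 1.0
    for k in range(n):
        t = k * h
        up = u + h * (lam * u + a * np.exp(mu * t))  # predictor = explicit Euler, Eq. 6.5
        if corrector:
            # MacCormack corrector, Eq. 6.7, with u'(t+h) from Eq. 6.12
            u = 0.5 * (u + up + h * (lam * up + a * np.exp(mu * (t + h))))
        else:
            u = up
    return u

err = lambda h, c: abs(march(h, c) - exact(1.0, 1.0))
print(err(0.01, False) / err(0.005, False))  # ~2: first-order global error (Euler)
print(err(0.01, True) / err(0.005, True))    # ~4: second-order global error (MacCormack)
```

Halving h halves the Euler error but quarters the predictor-corrector error, anticipating the order results derived later in this chapter.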

6.3.1 First- and Second-Order Difference Equations

First-Order Equations

The simplest nonhomogeneous OΔE of interest is given by the single first-order equation

u_{n+1} = σu_n + a bⁿ   (6.13)

where σ, a, and b are, in general, complex parameters. The independent variable is now n rather than t, and since the equations are linear and have constant coefficients, σ is not a function of either n or u. The exact solution of Eq. 6.13 is

u_n = c₁σⁿ + a bⁿ / (b − σ)

where c₁ is a constant determined by the initial conditions. In terms of the initial value of u it can be written

u_n = u₀σⁿ + a (bⁿ − σⁿ) / (b − σ)

Just as in the development of Eq. 4.10, one can readily show that the solution of the defective case, (b = σ),

u_{n+1} = σu_n + a σⁿ

is

u_n = [ u₀ + a n σ⁻¹ ] σⁿ

This can all be easily verified by substitution.

Second-Order Equations

The homogeneous form of a second-order difference equation is given by

u_{n+2} + a₁u_{n+1} + a₀u_n = 0   (6.14)

Instead of the differential operator D ≡ d/dt used for ODE's, we use for OΔE's the difference operator E (commonly referred to as the displacement or shift operator), defined formally by the relations

u_{n+1} = E u_n,   u_{n+k} = Eᵏ u_n

Further notice that the displacement operator also applies to exponents; thus

b^α · bⁿ = b^{n+α} = E^α bⁿ

where α can be any fraction or irrational number. The roles of D and E are the same insofar as, once they have been introduced to the basic equations, the value of u(t) or u_n can be factored out. Eq. 6.14 can now be re-expressed in an operational notation as

(E² + a₁E + a₀) u_n = 0   (6.15)

which must be zero for all u_n. Eq. 6.15 is known as the operational form of Eq. 6.14. The operational form contains a characteristic polynomial P(E) which plays the same role for difference equations that P(D) played for differential equations; that is, its roots determine the solution to the OΔE. In the analysis of OΔE's, we label these roots σ₁, σ₂, …, and refer to them as the σ-roots. They are found by solving the equation P(σ) = 0. In the simple example given above, there are just two roots and in terms of them the solution can be written

u_n = c₁(σ₁)ⁿ + c₂(σ₂)ⁿ   (6.16)

where c₁ and c₂ depend upon the initial conditions. The fact that Eq. 6.16 is a solution to Eq. 6.14 for all c₁, c₂ and n should be verified by substitution.

6.3.2 Special Cases of Coupled First-Order Equations

A Complete System

Coupled, first-order, linear homogeneous difference equations have the form

u₁^(n+1) = c₁₁u₁^(n) + c₁₂u₂^(n)
u₂^(n+1) = c₂₁u₁^(n) + c₂₂u₂^(n)   (6.17)

which can also be written

u_{n+1} = C u_n,   u_n = [u₁^(n), u₂^(n)]ᵀ,   C = [ c₁₁  c₁₂ ; c₂₁  c₂₂ ]

The operational form of Eq. 6.17 can be written

[ c₁₁ − E    c₁₂   ] [ u₁ ]
[ c₂₁     c₂₂ − E ] [ u₂ ]  =  [C − E·I] u_n = 0

which must be zero for all u₁ and u₂. Again we are led to a characteristic polynomial, this time having the form P(E) = det[C − E·I]. The σ-roots are found from

P(σ) = det [ c₁₁ − σ    c₁₂   ; c₂₁    c₂₂ − σ ] = 0

Obviously the σ_k are the eigenvalues of C and, following the logic of Section 4.2, if x_k are its eigenvectors, the solution of Eq. 6.17 is

u_n = Σ_{k=1}^{2} c_k (σ_k)ⁿ x_k

where the c_k are constants determined by the initial conditions.

A Defective System

The solution of OΔE's with defective eigensystems follows closely the logic in Section 4.3.2 for defective ODE's. For example, one can show that the solution to

[ u_{n+1} ]   [ σ        ] [ u_n ]
[ û_{n+1} ] = [ 1   σ    ] [ û_n ]
[ ū_{n+1} ]   [     1  σ ] [ ū_n ]

is

u_n = u₀σⁿ
û_n = [ û₀ + u₀ n σ⁻¹ ] σⁿ
ū_n = [ ū₀ + û₀ n σ⁻¹ + u₀ (n(n − 1)/2) σ⁻² ] σⁿ   (6.18)

6.4 Solution of the Representative OΔE's

6.4.1 The Operational Form and its Solution

Examples of the nonhomogeneous, linear, first-order ordinary difference equations produced by applying a time-marching method to the representative equation are given by Eqs. 6.9 to 6.11. Using the displacement operator, E, these equations can be written

[ E − (1 + λh) ] u_n = h · a e^{μhn}   (6.19)

[ (1 − λh)E − 1 ] u_n = h · E · a e^{μhn}   (6.20)

[ E                −(1 + λh) ] [ ũ ]        [ 1       ]
[ −(1/2)(1 + λh)E   E − 1/2 ] [ u ]_n  = h [ (1/2)E ] a e^{μhn}   (6.21)

All three of these equations are subsets of the operational form of the representative OΔE

P(E) u_n = Q(E) · a e^{μhn}   (6.22)

which is produced by applying time-marching methods to the representative ODE, Eq. 4.45. We can express in terms of Eq. 6.22 all manner of standard time-marching methods having multiple time steps and various types of intermediate predictor-corrector families. The terms P(E) and Q(E) are polynomials in E, referred to as the characteristic polynomial and the particular polynomial, respectively.

The general solution of Eq. 6.22 can be expressed as

u_n = Σ_{k=1}^{K} c_k (σ_k)ⁿ + a e^{μhn} · Q(e^{μh}) / P(e^{μh})   (6.23)

where the σ_k are the K roots of the characteristic polynomial, P(σ) = 0. When determinants are involved in the construction of P(E) and Q(E), as would be the case for Eq. 6.21, the ratio Q(E)/P(E) can be found by Kramer's rule. Keep in mind that for methods such as in Eq. 6.21 there are multiple (two in this case) solutions, one for u_n and one for ũ_n, and we are usually only interested in the final solution u_n. Notice also the important subset of this solution which occurs when μ = 0, representing a time-invariant particular solution, or a steady state. In such a case

u_n = Σ_{k=1}^{K} c_k (σ_k)ⁿ + a · Q(1) / P(1)

6.4.2 Examples of Solutions to Time-Marching OΔE's

As examples of the use of Eqs. 6.22 and 6.23, we derive the solutions of Eqs. 6.19 to 6.21. For the explicit Euler method, Eq. 6.19, we have

P(E) = E − 1 − λh
Q(E) = h   (6.24)

and the solution of its representative OΔE follows immediately from Eq. 6.23:

u_n = c₁(1 + λh)ⁿ + a e^{μhn} · h / (e^{μh} − 1 − λh)

For the implicit Euler method, Eq. 6.20, we have

P(E) = (1 − λh)E − 1
Q(E) = hE   (6.25)
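The closed-form solution above can be checked directly against time marching. The sketch below (ours, not from the text; the complex parameter values are arbitrary) advances the explicit Euler OΔE, Eq. 6.9, step by step and compares it with the analytical expression u_n = c₁(1 + λh)ⁿ + a e^{μhn} h/(e^{μh} − 1 − λh).

```python
import cmath

lam, mu, a, h, u0 = -0.8 + 0.3j, 0.2j, 1.0 + 0j, 0.1, 2.0 + 0j

# particular-solution coefficient Q(e^{mu h})/P(e^{mu h}) for explicit Euler
Q_over_P = h / (cmath.exp(mu * h) - 1 - lam * h)
c1 = u0 - a * Q_over_P   # fix c1 from the initial condition u_0

u = u0
for n in range(50):
    closed = c1 * (1 + lam * h)**n + a * cmath.exp(mu * h * n) * Q_over_P
    assert abs(u - closed) < 1e-10     # marched and closed-form solutions agree
    u = (1 + lam * h) * u + h * a * cmath.exp(mu * h * n)   # Eq. 6.9
```

The agreement to roundoff at every step confirms that Eq. 6.23 is the exact solution of the OΔE, not merely an approximation.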

so

u_n = c₁ (1/(1 − λh))ⁿ + a e^{μhn} · h e^{μh} / ((1 − λh)e^{μh} − 1)   (6.26)

In the case of the coupled predictor-corrector equations, Eq. 6.21, one solves for the final family u_n (one can also find a solution for the intermediate family ũ). We are led to

P(E) = det [ E                −(1 + λh) ; −(1/2)(1 + λh)E   E − 1/2 ] = E ( E − 1 − λh − (1/2)λ²h² )

Q(E) = det [ E                h ; −(1/2)(1 + λh)E   (1/2)hE ] = (1/2) hE ( E + 1 + λh )

The σ-root is found from

P(σ) = σ ( σ − 1 − λh − (1/2)λ²h² ) = 0

which has only one nontrivial root (σ = 0 is simply a shift in the reference index). The complete solution can therefore be written

u_n = c₁ ( 1 + λh + (1/2)λ²h² )ⁿ + a e^{μhn} · (1/2)h( e^{μh} + 1 + λh ) / ( e^{μh} − 1 − λh − (1/2)λ²h² )   (6.27)

6.5 The λ–σ Relation

6.5.1 Establishing the Relation

We have now been introduced to two basic kinds of roots, the λ-roots and the σ-roots. The former are the eigenvalues of the A matrix in the ODE's found by space differencing the original PDE, and the latter are the roots of the characteristic polynomial in a representative OΔE found by applying a time-marching method to the representative ODE. There is a fundamental relation between the two which can be used to identify many of the essential properties of a time-march method. This relation is first demonstrated by developing it for the explicit Euler method.

First we make use of the semi-discrete approach to find a system of ODE's and then express its solution in the form of Eq. 4.27. Remembering that t = nh, one can write

u(t) = c₁ ( e^{λ₁h} )ⁿ x₁ + ⋯ + c_m ( e^{λ_m h} )ⁿ x_m + ⋯ + c_M ( e^{λ_M h} )ⁿ x_M + P.S.

where for the present we are not interested in the form of the particular solution (P.S.). Now the explicit Euler method produces for each λ-root one σ-root, which is given by σ = 1 + λh. So if we use the Euler method for the time advance of the ODE's, the solution of the resulting OΔE is

u_n = c₁(σ₁)ⁿ x₁ + ⋯ + c_m(σ_m)ⁿ x_m + ⋯ + c_M(σ_M)ⁿ x_M + P.S.   (6.28)

where the c_m and the x_m in the two equations are identical and σ_m = (1 + λ_m h). Comparing the two expressions, we see a correspondence between σ_m and e^{λ_m h}. Since the value of e^{λh} can be expressed in terms of the series

e^{λh} = 1 + λh + (1/2)λ²h² + (1/6)λ³h³ + ⋯ + (1/n!)λⁿhⁿ + ⋯

the truncated expansion σ = 1 + λh is a reasonable approximation for small enough h.

Suppose, instead of the Euler method, we use the leapfrog method for the time advance, which is defined by

u_{n+1} = u_{n−1} + 2hu′_n   (6.29)

Applying Eq. 6.8 to Eq. 6.29, we have the characteristic polynomial P(E) = E² − 2λhE − 1, so that for every λ the σ must satisfy the relation

σ_m² − 2λ_m h σ_m − 1 = 0   (6.30)

Now we notice that each λ produces two σ-roots. For one of these we find

σ_m = λ_m h + √(1 + λ_m²h²)   (6.31)
    = 1 + λ_m h + (1/2)λ_m²h² − (1/8)λ_m⁴h⁴ + ⋯   (6.32)

This is an approximation to e^{λ_m h} with an error O(λ³h³). The other root, λ_m h − √(1 + λ_m²h²), will be discussed in Section 6.5.3.

6.5.2 The Principal σ-Root

Based on the above we make the following observation:

  Application of the same time-marching method to all of the equations in a coupled system of linear ODE's always produces one σ-root for every λ-root that satisfies the relation

  σ = 1 + λh + (1/2)λ²h² + ⋯ + (1/k!)λᵏhᵏ + O(h^{k+1})   (6.33)

  where k is the order of the time-marching method.

We refer to the root that has the above property as the principal σ-root, and designate it (σ_m)₁. Thus the principal root is an approximation to e^{λh} up to O(hᵏ).

The above property can be stated regardless of the details of the time-marching method, knowing only that its leading error is O(h^{k+1}). Note that a second-order approximation to a derivative written in the form

(δ_t u)_n = (1/(2h))(u_{n+1} − u_{n−1})   (6.34)

has a leading truncation error which is O(h²), while the second-order time-marching method which results from this approximation, which is the leapfrog method:

u_{n+1} = u_{n−1} + 2hu′_n   (6.35)

has a leading truncation error O(h³). This arises simply because of our notation for the time-marching method, in which we have multiplied through by h to get an approximation for the function u_{n+1} rather than the derivative as in Eq. 6.34. The following example makes this clear. Consider a solution obtained at a given time T using a second-order time-marching method with a time step h. Now consider the solution obtained using the same method with a time step h/2. Since the error per time step is O(h³), this is reduced by a factor of eight (considering the leading term only). However, twice as many time steps are required to reach the time T. Therefore the error at the end of the simulation is reduced by a factor of four, consistent with a second-order approximation.

6.5.3 Spurious σ-Roots

We saw from Eq. 6.30 that the λ-σ relation for the leapfrog method produces two σ-roots for each λ. One of these we identified as the principal root, which always
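The leapfrog λ-σ relation is easy to explore numerically. The sketch below (ours, not from the text) evaluates both roots of σ² − 2λhσ − 1 = 0 for a few values of λh: the principal root tracks e^{λh} with an O((λh)³) error, while the other root hovers near −1.

```python
import numpy as np

# both sigma-roots of the leapfrog relation, Eq. 6.30, for real lh = lambda*h
for lh in [0.2, 0.1, 0.05]:
    s1 = lh + np.sqrt(1 + lh**2)   # principal root, Eq. 6.31
    s2 = lh - np.sqrt(1 + lh**2)   # second (spurious) root
    print(lh, abs(np.exp(lh) - s1), s2)
```

Halving λh shrinks |e^{λh} − σ₁| by roughly eight, the O((λh)³) behavior claimed above; the second root stays near −1, which is why it contributes a non-physical oscillating mode.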

has the property given in Eq. 6.33. The other is referred to as a spurious σ-root and designated (σ_m)₂. All spurious roots are designated (σ_m)_k where k = 2, 3, ⋯. No matter whether a σ-root is principal or spurious, it is always some algebraic function of the product λh. To express this fact we use the notation σ = σ(λh).

Spurious roots arise if a method uses data from time level n − 1 or earlier to advance the solution from time level n to n + 1. Such roots originate entirely from the numerical approximation of the time-marching method and have nothing to do with the ODE being solved. In general, the λ-σ relation produced by a time-marching scheme can result in multiple σ-roots, all of which, except for the principal one, are spurious.

If a time-marching method produces spurious σ-roots, the solution for the OΔE in the form shown in Eq. 6.28 must be modified. Following again the approach of Chapter 4, we have

u_n = c₁₁(σ₁)₁ⁿ x₁ + ⋯ + c_{m1}(σ_m)₁ⁿ x_m + ⋯ + c_{M1}(σ_M)₁ⁿ x_M + P.S.
    + c₁₂(σ₁)₂ⁿ x₁ + ⋯ + c_{m2}(σ_m)₂ⁿ x_m + ⋯ + c_{M2}(σ_M)₂ⁿ x_M
    + c₁₃(σ₁)₃ⁿ x₁ + ⋯ + c_{m3}(σ_m)₃ⁿ x_m + ⋯ + c_{M3}(σ_M)₃ⁿ x_M
    + etc., if there are more spurious roots   (6.36)

It should be mentioned that methods with spurious roots are not self starting. For example, the leapfrog method requires u_{n−1} and thus cannot be started using u_n only; it needs data from time level n − 1 or earlier. The initial vector u₀ does not provide enough data to initialize all of the coefficients, and the solution in the form of Eq. 6.36 must be initialized by some starting procedure.

Presumably (i.e., if one starts the method properly) the spurious coefficients, such as the (c_m)₂ in Eq. 6.36, are all initialized with very small magnitudes, and presumably the magnitudes of the spurious roots themselves are all less than one (see Chapter 7). Then the presence of spurious roots does not contaminate the answer; after some finite time the amplitude of the error associated with the spurious roots is even smaller than when it was initialized. Thus, while spurious roots must be considered in stability analysis, they play virtually no role in accuracy analysis. In fact, many very accurate methods in practical use for integrating some forms of ODE's have spurious roots; generation of spurious roots does not, in itself, make a method inferior.

6.5.4 One-Root Time-Marching Methods

There are a number of time-marching methods that produce only one σ-root for each λ-root. We refer to them as one-root methods. They are also called one-step methods. They have the significant advantage of being self-starting, which carries with it the very useful property that the time-step interval can be changed at will throughout the marching process. Three one-root methods were analyzed in Section 6.4.2. A popular method having this property, the so-called θ-method, is given by the formula

u_{n+1} = u_n + h[ (1 − θ)u′_n + θu′_{n+1} ]

The θ-method represents the explicit Euler (θ = 0), the trapezoidal (θ = 1/2), and the implicit Euler (θ = 1) methods, respectively. Its λ-σ relation is

σ = ( 1 + (1 − θ)λh ) / ( 1 − θλh )

It is instructive to compare the exact solution to a set of ODE's (with a complete eigensystem) having time-invariant forcing terms with the exact solution to the OΔE's for one-root methods. These are

u(t) = c₁(e^{λ₁h})ⁿ x₁ + ⋯ + c_m(e^{λ_m h})ⁿ x_m + ⋯ + c_M(e^{λ_M h})ⁿ x_M + A⁻¹f
u_n  = c₁(σ₁)ⁿ x₁ + ⋯ + c_m(σ_m)ⁿ x_m + ⋯ + c_M(σ_M)ⁿ x_M + A⁻¹f   (6.37)

respectively. Notice that when t = 0 and n = 0, these equations are identical, so that all the constants, vectors, and matrices are identical except the u and the terms inside the parentheses on the right hand sides. The only error made by introducing the time marching is the error that σ makes in approximating e^{λh}.

6.6 Accuracy Measures of Time-Marching Methods

6.6.1 Local and Global Error Measures

There are two broad categories of errors that can be used to derive and evaluate time-marching methods. One is the error made in each time step. This is a local error, such as that found from a Taylor table analysis (see Section 3.3). It is usually used as the basis for establishing the order of a method. The other is the error determined at the end of a given event which has covered a specific interval of time composed of many time steps. This is a global error. It is useful for comparing methods, as we shall see in Chapter 8.

It is quite common to judge a time-marching method on the basis of results found from a Taylor table. However, a Taylor series analysis is a very limited tool for finding the more subtle properties of a numerical time-marching method. For example, it is of no use in:

• finding spurious roots,
• evaluating numerical stability and separating the errors in phase and amplitude,
• analyzing the particular solution of predictor-corrector combinations,
• finding the global error.

The latter three of these are of concern to us here, and to study them we make use of the material developed in the previous sections of this chapter. Our error measures are based on the difference between the exact solution to the representative ODE, given by

u(t) = c e^{λt} + a e^{μt} / (μ − λ)   (6.38)

and the solution to the representative OΔE's, including only the contribution from the principal root, which can be written as

u_n = c₁(σ₁)ⁿ + a e^{μhn} · Q(e^{μh}) / P(e^{μh})   (6.39)

6.6.2 Local Accuracy of the Transient Solution (er_λ, |σ|, er_ω)

Transient error

The particular choice of an error measure, either local or global, is to some extent arbitrary. However, a necessary condition for the choice should be that the measure can be used consistently for all methods. In the discussion of the λ-σ relation we saw that all time-marching methods produce a principal σ-root for every λ-root that exists in a set of linear ODE's. Therefore, a very natural local error measure for the transient solution is the value of the difference between solutions based on these two roots. We designate this by er_λ and make the following definition:

er_λ ≡ e^{λh} − σ₁

The leading error term can be found by expanding in a Taylor series and choosing the first nonvanishing term. The order of the method is the last power of λh matched exactly.
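The definition of er_λ is easy to evaluate for the one-root methods seen so far. The sketch below (ours, not from the text) computes er_λ = e^{λh} − σ₁ for the explicit Euler and trapezoidal methods: halving h should scale the leading term by 2^{k+1}, i.e., by 4 for a first-order method and by 8 for a second-order one.

```python
import numpy as np

lam = -2.0

def er(h, method):
    lh = lam * h
    if method == "euler":
        sigma = 1 + lh                      # explicit Euler (theta = 0)
    else:
        sigma = (1 + lh/2) / (1 - lh/2)     # trapezoidal (theta = 1/2)
    return abs(np.exp(lh) - sigma)

print(er(0.02, "euler") / er(0.01, "euler"))   # ~4: er_lambda = O((lh)^2), k = 1
print(er(0.02, "trap") / er(0.01, "trap"))     # ~8: er_lambda = O((lh)^3), k = 2
```

The observed ratios of roughly 4 and 8 recover the orders k = 1 and k = 2 of the two methods directly from the definition of er_λ.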

Amplitude and Phase Error

Suppose a λ eigenvalue is imaginary. Such can indeed be the case when we study the equations governing periodic convection, which produces harmonic motion. For such cases it is more meaningful to express the error in terms of amplitude and phase. Let λ = iω, where ω is a real number representing a frequency. Then the numerical method must produce a principal σ-root that is complex and expressible in the form

    σ_1 = σ_r + i σ_i ≈ e^{iωh}        (6.40)

From this it follows that the local error in amplitude is measured by the deviation of |σ_1| from unity, that is

    er_a = 1 − |σ_1| = 1 − sqrt[ (σ_1)_r² + (σ_1)_i² ]

and the local error in phase can be defined as

    er_ω ≡ ωh − tan^{−1}[ (σ_1)_i / (σ_1)_r ]        (6.41)

Amplitude and phase errors are important measures of the suitability of time-marching methods for convection and wave propagation phenomena. The approach to error analysis described in Section 3.5 can be extended to the combination of a spatial discretization and a time-marching method applied to the linear convection equation. The principal root, σ_1(λh), is found using λ = −iaκ*, where κ* is the modified wavenumber of the spatial discretization. Introducing the Courant number, C_n = ah/Δx, we have λh = −i C_n κ*Δx. Thus one can obtain values of the principal root over the range 0 ≤ κΔx ≤ π for a given value of the Courant number. The above expression for er_ω can be normalized to give the error in the phase speed, as follows

    er_p = er_ω / (ωh) = 1 − tan^{−1}[ (σ_1)_i / (σ_1)_r ] / (C_n κΔx)        (6.42)

where ω = −aκ. A positive value of er_p corresponds to phase lag (the numerical phase speed is too small), while a negative value corresponds to phase lead (the numerical phase speed is too large).

6.6.3 Local Accuracy of the Particular Solution (er_μ)

The numerical error in the particular solution is found by comparing the particular solution of the ODE with that for the OΔE. We have found these to be given by

    P.S._(ODE) = a e^{μt} · 1 / (μ − λ)

and

    P.S._(OΔE) = a e^{μt} · Q(e^{μh}) / P(e^{μh})

respectively. For a measure of the local error in the particular solution we introduce the definition

    er_μ ≡ h { P.S._(OΔE) / P.S._(ODE) − 1 }        (6.43)

The multiplication by h converts the error from a global measure to a local one, so that the orders of er_λ and er_μ are consistent. In order to determine the leading error term, Eq. 6.43 can be written in terms of the characteristic and particular polynomials as

    er_μ = c_o { (μ − λ) Q(e^{μh}) − P(e^{μh}) }        (6.44)

where

    c_o = lim_{h→0}  h / [ (μ − λ) P(e^{μh}) ]

The value of c_o is a method-dependent constant that is often equal to one. If the forcing function is independent of time, μ is equal to zero, and for this case, many numerical methods generate an er_μ that is also zero.

The algebra involved in finding the order of er_μ can be quite tedious. However, this order is quite important in determining the true order of a time-marching method by the process that has been outlined. An illustration of this is given in the section on Runge-Kutta methods.

6.6.4 Time Accuracy For Nonlinear Applications

In practice, time-marching methods are usually applied to nonlinear ODE's, and it is necessary that the advertised order of accuracy be valid for the nonlinear cases as well as for the linear ones. A necessary condition for this to occur is that the local accuracies of both the transient and the particular solutions be of the same order. More precisely, a time-marching method is said to be of order k if

    er_λ = c_1 (λh)^{k_1+1}        (6.45)
    er_μ = c_2 (μh)^{k_2+1}        (6.46)
    where k = smallest of (k_1, k_2)        (6.47)

The reader should be aware that this is not sufficient. For example, to derive all of the necessary conditions for the fourth-order Runge-Kutta method presented later in this chapter the derivation must be performed for a nonlinear ODE. However, the analysis based on a linear nonhomogeneous ODE produces the appropriate conditions for the majority of time-marching methods used in CFD.

6.6.5 Global Accuracy

In contrast to the local error measures which have just been discussed, we can also define global error measures. These are useful when we come to the evaluation of time-marching methods for specific purposes. This subject is covered in Chapter 8 after our introduction to stability in Chapter 7.

Suppose we wish to compute some time-accurate phenomenon over a fixed interval of time using a constant time step. We refer to such a computation as an "event". Let T be the fixed time of the event and h be the chosen step size. Then the required number of time steps, N, is given by the relation

    T = N h

Global error in the transient

A natural extension of er_λ to cover the error in an entire event is given by

    Er_λ ≡ e^{λT} − ( σ_1(λh) )^N        (6.48)

Global error in amplitude and phase

If the event is periodic, we are more concerned with the global error in amplitude and phase. These are given by

    Er_a = 1 − [ sqrt( (σ_1)_r² + (σ_1)_i² ) ]^N        (6.49)

and

    Er_ω ≡ N [ ωh − tan^{−1}( (σ_1)_i / (σ_1)_r ) ] = ωT − N tan^{−1}[ (σ_1)_i / (σ_1)_r ]        (6.50)
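The amplitude and phase measures defined above are straightforward to evaluate for a given method. A minimal sketch (not from the text) for the trapezoidal method with a purely imaginary λ = iω: its σ-root lies exactly on the unit circle, so er_a vanishes and the entire error is a phase lag:

```python
import math

def amp_phase_errors(sigma, omega_h):
    # er_a = 1 - |sigma_1|,  er_w = omega*h - atan2(Im sigma, Re sigma)
    era = 1.0 - abs(sigma)
    erw = omega_h - math.atan2(sigma.imag, sigma.real)
    return era, erw

omega_h = 0.2                  # lambda*h = i*omega*h: pure periodic convection
lh = 1j * omega_h

# principal sigma-root of the trapezoidal method: (1 + lh/2)/(1 - lh/2)
sigma = (1 + lh / 2) / (1 - lh / 2)
era, erw = amp_phase_errors(sigma, omega_h)
```

Because |(1 + ix)/(1 − ix)| = 1 for real x, era is zero to machine precision, while erw is small and positive (phase lag), consistent with the discussion of er_p above.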

Global error in the particular solution

Finally, the global error in the particular solution follows naturally by comparing the solutions to the ODE and the OΔE. It can be measured by

    Er_μ ≡ (μ − λ) Q(e^{μh}) / P(e^{μh}) − 1

6.7 Linear Multistep Methods

In the previous sections, we have developed the framework of error analysis for time-advance methods and have introduced a few methods without addressing motivational, developmental or design issues. In the subsequent sections, we introduce classes of methods along with their associated error analysis. We shall not spend much time on development or design of these methods, since most of them have historic origins from a wide variety of disciplines and applications.

The Linear Multistep Methods (LMM's) are probably the most natural extension to time marching of the space-differencing schemes introduced in Chapter 3, and can be analyzed for accuracy or designed using the Taylor table approach of Section 3.4.

6.7.1 The General Formulation

When applied to the nonlinear ODE

    du/dt = u' = F(u, t)

all linear multistep methods can be expressed in the general form

    Σ_{k=1−K}^{1} α_k u_{n+k} = h Σ_{k=1−K}^{1} β_k F_{n+k}        (6.51)

where the notation for F is defined in Section 6.1. The methods are said to be linear because the α's and β's are independent of u and n, and they are said to be K-step because K time-levels of data are required to march the solution one time-step, h. They are explicit if β_1 = 0 and implicit otherwise.

When Eq. 6.51 is applied to the representative equation, and the result is expressed in operational form, one finds

    ( Σ_{k=1−K}^{1} α_k E^k ) u_n = h ( Σ_{k=1−K}^{1} β_k E^k ) ( λ u_n + a e^{μhn} )        (6.52)

We recall from Section 6.5.2 that a time-marching method, when applied to the representative equation, must provide a σ-root, labeled σ_1, that approximates e^{λh} through the order of the method. The condition referred to as consistency simply means that σ → 1 as h → 0, and it is certainly a necessary condition for the accuracy of any time-marching method. We can also agree that, to be of any value in time accuracy, a method should at least be first-order accurate, that is σ → (1 + λh) as h → 0. One can show that these conditions are met by any method represented by Eq. 6.51 if

    Σ_k α_k = 0    and    Σ_k β_k = Σ_k (K + k − 1) α_k

Since both sides of Eq. 6.51 can be multiplied by an arbitrary constant, these methods are often "normalized" by requiring

    Σ_k β_k = 1

Under this condition c_o = 1 in Eq. 6.44.

6.7.2 Examples

There are many special explicit and implicit forms of linear multistep methods. Two well-known families of them, referred to as Adams-Bashforth (explicit) and Adams-Moulton (implicit), can be designed using the Taylor table approach of Section 3.4. The Adams-Moulton family is obtained from Eq. 6.51 with

    α_1 = 1,   α_0 = −1,   α_k = 0,  k = −1, −2, ⋯        (6.53)

The Adams-Bashforth family has the same α's with the additional constraint that β_1 = 0. The three-step Adams-Moulton method can be written in the following form

    u_{n+1} = u_n + h ( β_1 u'_{n+1} + β_0 u'_n + β_{−1} u'_{n−1} + β_{−2} u'_{n−2} )        (6.54)

A Taylor table for Eq. 6.54 can be generated by expanding each term about t_n in powers of h u'_n, h² u''_n, h³ u'''_n, and h⁴ u''''_n.

Table 6.1. Taylor table for the Adams-Moulton three-step linear multistep method.

This leads to the linear system

    | 1   1   1    1  | | β_1    |   | 1 |
    | 2   0  −2   −4  | | β_0    | = | 1 |        (6.55)
    | 3   0   3   12  | | β_{−1} |   | 1 |
    | 4   0  −4  −32  | | β_{−2} |   | 1 |

to solve for the β's, resulting in

    β_1 = 9/24,   β_0 = 19/24,   β_{−1} = −5/24,   β_{−2} = 1/24        (6.56)

which produces a method which is fourth-order accurate. With β_1 = 0 one obtains

    | 1   1    1  | | β_0    |   | 1 |
    | 0  −2   −4  | | β_{−1} | = | 1 |        (6.57)
    | 0   3   12  | | β_{−2} |   | 1 |

giving

    β_0 = 23/12,   β_{−1} = −16/12,   β_{−2} = 5/12        (6.58)

This is the third-order Adams-Bashforth method.

A list of simple methods, some of which are very common in CFD applications, is given below together with identifying names that are sometimes associated with them. In the following material AB(n) and AM(n) are used as abbreviations for the (n)th-order Adams-Bashforth and (n)th-order Adams-Moulton methods.

Explicit Methods

    u_{n+1} = u_n + h u'_n                                        Euler
    u_{n+1} = u_{n−1} + 2h u'_n                                   Leapfrog
    u_{n+1} = u_n + (1/2) h [ 3u'_n − u'_{n−1} ]                  AB2
    u_{n+1} = u_n + (h/12) [ 23u'_n − 16u'_{n−1} + 5u'_{n−2} ]    AB3

Implicit Methods

    u_{n+1} = u_n + h u'_{n+1}                                    Implicit Euler
    u_{n+1} = u_n + (1/2) h [ u'_n + u'_{n+1} ]                   Trapezoidal (AM2)
    u_{n+1} = (1/3) [ 4u_n − u_{n−1} + 2h u'_{n+1} ]              2nd-order Backward
    u_{n+1} = u_n + (h/12) [ 5u'_{n+1} + 8u'_n − u'_{n−1} ]       AM3

Recall from Section 6.6.2 that a kth-order time-marching method has a leading truncation error term which is O(h^{k+1}).
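The second-order behavior of AB2 from the list above can be demonstrated directly on the linear test problem u' = λu. The sketch below (an illustration, not code from the text) starts AB2 with a single Euler step, marches to t = 1 on two grids, and checks that halving h cuts the global error by roughly a factor of four:

```python
import math

def ab2(lam, u0, h, nsteps):
    # AB2: u_{n+1} = u_n + (h/2)(3 u'_n - u'_{n-1}), started with one Euler step
    um1, u = u0, u0 + h * lam * u0
    for _ in range(nsteps - 1):
        u, um1 = u + 0.5 * h * (3 * lam * u - lam * um1), u
    return u

lam, T = -1.0, 1.0
errs = []
for n in (50, 100):
    h = T / n
    errs.append(abs(ab2(lam, 1.0, h, n) - math.exp(lam * T)))

# halving h should reduce the global error by roughly a factor of four
ratio = errs[0] / errs[1]
```

The Euler starting step is only first-order accurate locally, but since it is taken once its contribution to the global error is still O(h²), so the observed order is preserved.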

High-resolution CFD problems usually require very large data sets to store the spatial information from which the time derivative is calculated. This limits the interest in multistep methods to about two time levels.

The most general two-step linear multistep method (i.e., K = 2 in Eq. 6.51) that is at least first-order accurate can be written as

    (1 + ξ) u_{n+1} = [ (1 + 2ξ) u_n − ξ u_{n−1} ] + h [ θ u'_{n+1} + (1 − θ + φ) u'_n − φ u'_{n−1} ]        (6.59)

Clearly the methods are explicit if θ = 0 and implicit otherwise. A list of methods contained in Eq. 6.59 is given in Table 6.2.

    θ       ξ       φ       Method                   Order
    0       0       0       Euler                    1
    1       0       0       Implicit Euler           1
    1/2     0       0       Trapezoidal or AM2       2
    1       1/2     0       2nd Order Backward       2
    3/4     0      −1/4     Adams type               2
    1/3    −1/2    −1/3     Lees Type                2
    1/2    −1/2    −1/2     Two-step trapezoidal     2
    5/9    −1/6    −2/9     A-contractive            2
    0      −1/2     0       Leapfrog                 2
    0       0       1/2     AB2                      2
    0      −5/6    −1/3     Most accurate explicit   3
    1/3    −1/6     0       Third-order implicit     3
    5/12    0       1/12    AM3                      3
    1/6    −1/2    −1/6     Milne                    4

Table 6.2. Some linear one- and two-step methods, see Eq. 6.59.

One can show after a little algebra that both er_μ and er_λ are reduced to O(h³) (i.e., the methods are 2nd-order accurate) if

    φ = ξ − θ + 1/2

The class of all 3rd-order methods is determined by imposing the additional constraint

    ξ = 2θ − 5/6

Finally, a unique fourth-order method is found by setting θ = −φ = −ξ/3 = 1/6.

Notice that the Adams methods have ξ = 0, which corresponds to α_{−1} = 0 in Eq. 6.51. Methods with ξ = −1/2, which corresponds to α_0 = 0 in Eq. 6.51, are known as Milne methods.

6.8 Predictor-Corrector Methods

There are a wide variety of predictor-corrector schemes created and used for a variety of purposes. Their use in solving ODE's is relatively easy to illustrate and understand. Their use in solving PDE's can be much more subtle and demands concepts (such as alternating direction, fractional-step, and hybrid methods) which have no counterpart in the analysis of ODE's.

Predictor-corrector methods constructed to time-march linear or nonlinear ODE's are composed of sequences of linear multistep methods, each of which is referred to as a family in the solution process. There may be many families in the sequence, and usually the final family has a higher Taylor-series order of accuracy than the intermediate ones. Their use is motivated by ease of application and increased efficiency; measures of efficiency are discussed in the next two chapters.

A simple one-predictor, one-corrector example is given by

    ũ_{n+α} = u_n + αh u'_n
    u_{n+1} = u_n + h [ β ũ'_{n+α} + γ u'_n ]        (6.60)

where the parameters α, β and γ are arbitrary parameters to be determined. One can analyze this sequence by applying it to the representative equation and using the operational techniques outlined in Section 6.4. It is easy to show, following the example leading to Eq. 6.26, that

    P(E) = E^α [ E − 1 − (γ + β)λh − αβ λ²h² ]        (6.61)
    Q(E) = E^α · h [ β E^α + γ + αβ λh ]        (6.62)

Considering only local accuracy, one is led, by following the discussion in Section 6.6, to the following observations. For the method to be second-order accurate, both er_λ and er_μ must be O(h³). For this to hold for er_λ, it is obvious from Eq. 6.61 that

    γ + β = 1,    αβ = 1/2

which provides two equations for three unknowns. The situation for er_μ requires some algebra, but it is not difficult to show using Eq. 6.44 that the same conditions also make it O(h³). One concludes, therefore, that the predictor-corrector sequence

    ũ_{n+α} = u_n + αh u'_n
    u_{n+1} = u_n + (1/2) h [ (1/α) ũ'_{n+α} + ((2α − 1)/α) u'_n ]        (6.63)

is a second-order accurate method for any α.
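The one-parameter family of Eq. 6.63 can be exercised directly. The sketch below (illustrative code, not from the text) implements the predictor-corrector pair and checks a property that follows from the operational analysis: for a linear F the final result is independent of α, since the principal root is always σ = 1 + λh + λ²h²/2:

```python
def pc_step(f, u, h, alpha):
    # one-parameter predictor-corrector family of Eq. 6.63:
    # Euler predictor to t_{n+alpha}, then a weighted corrector
    ut = u + alpha * h * f(u)                              # predictor
    return u + 0.5 * h * (f(ut) / alpha
                          + (2 * alpha - 1) / alpha * f(u))  # corrector

f = lambda v: -2.0 * v        # linear test problem u' = -2u, so lambda*h = -0.2

# for a linear F the step is independent of alpha: sigma = 1 + lh + lh^2/2 = 0.82
u_burstein = pc_step(f, 1.0, 0.1, 0.5)     # alpha = 1/2 gives the Burstein method
u_maccormack = pc_step(f, 1.0, 0.1, 1.0)   # alpha = 1 gives MacCormack's method
```

For nonlinear F the intermediate solutions differ with α even though the linear analysis cannot distinguish them; this is one of the limitations of the representative-equation framework noted in Section 6.6.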

9. second-order accurate methods are given below. presented earlier in this chapter. is 1 hu ~ i un+1 = un + 2 h 3~0n . 6. we refer to these as ABM(k) methods.6 that produce just one -root for each -root such that ( h) corresponds Although implicit and multi-step Runge-Kutta methods exist.66) (6. speci c. The order of the combination is then equal to the order of the corrector.63 with = 1=2 is un+1=2 = un + 1 hu0n ~ 2 un+1 = un + hu0n+1=2 ~ and.9 Runge-Kutta Methods There is a special subset of predictor-corrector methods. u0n.67) (6. nally. MacCormack's method.68) 6. we will consider only single-step. u0n. explicit Runge-Kutta methods here.1 ~ 2 i hh u un+1 = un + 12 5~0n+1 + 8u0n . is un+1 = un + hu0n ~ un+1 = 1 un + un+1 + hu0n+1] ~ ~ 2 Note that MacCormack's method can also be written as un+1 = un + hu0n ~ 1 un+1 = un + 2 h u0n + u0n+1] ~ from which it is clear that it is obtained from Eq. RUNGE-KUTTA METHODS 107 A classical predictor-corrector sequence is formed by following an Adams-Bashforth predictor of any order with an Adams-Moulton corrector having an order one higher. 6. obtained from Eq.63 with = 1.65) 2 The Burstein method.64) Some simple. referred to as Runge-Kutta methods. (6. The Adams-BashforthMoulton sequence for k = 3 is h i un+1 = un + 1 h 3u0n . The Gazdag method.1 ~ h i un+1 = un + 1 h u0n + u0n+1 ~ ~ (6. u0n. which we discuss in Chapter 8. If the order of the corrector is (k).6.1 (6. 6 .

70. we prefer to present it using predictor-corrector notation. the principal (and only) -root is given by = 1 + h + 1 2h2 + + 1 k hk (6. Thus for a Runge-Kutta method of order k (up to 4th order). the choices for the time samplings.69) 2 k! It is not particularly di cult to build this property into a method. The most widely publicized Runge-Kutta process is the one that leads to the fourth-order method.71 and 6. the method must further satisfy the constraint that er = O(hk+1) (6.71 is b un+ = un + hu0n b un+ 1 = un + 1 hu0n + 1hu0n+ ~ 0 + hu0 + hu0 un+ 2 = un + 2 hun 2 bn+ 2 ~n+ 1 b (6. It is usually introduced in the form k1 = hF (un tn) k2 = hF (un + k1 tn + h) k3 = hF (un + 1k1 + 1k2 tn + 1 h) k4 = hF (un + 2k1 + 2k2 + 2 k3 tn + 2h) followed by u(tn + h) . We present it below in some detail.6.70) and this is much more di cult.108 CHAPTER 6. .72) un+1 = un + 1hu0n + 2hu0n+ + 3hu0n+ 1 + 4hu0n+ 2 ~ Appearing in Eqs. 6.4.72 are a total of 13 parameters which are to be determined such that the method is fourth-order according to the requirements in Eqs. u(tn) = 1 k1 + 2k2 + 3k3 + 4k4 (6. They must satisfy the relations = 1 = 1+ 1 (6. Thus.71) However. are not arbitrary.73) 2 = 2+ 2+ 2 . but.69 and 6. as we pointed out in Section 6. First of all. it is not su cient to guarantee k'th order accuracy for the solution of u0 = F (u t) or for the representative equation. a scheme entirely equivalent to 6. To ensure k'th order accuracy. and 2 . 6. 1. TIME-MARCHING METHODS FOR ODE'S to the Taylor series expansion of e h out through the order of the method and then truncates.

the resulting method would be third-order accurate. A more general derivation based on a nonlinear ODE can be found in several books. 7 .74 and 6.4.75 are all satis ed. but the equations follow directly from nding P (E ) and Q(E ) and then satisfying the conditions in Eqs.6. It results in the \standard" fourth-order Runge-Kutta method expressed in predictor-corrector form as 1 b un+1=2 = un + 2 hu0n b un+1=2 = un + 1 hu0n+1=2 ~ 2 un+1 = un + hu0n+1=2 ~ i h b (6. RUNGE-KUTTA METHODS 109 The algebra involved in nding algebraic equations for the remaining 10 parameters is not trivial. Using Eq. Thus if the rst 3 equations in 6. which is based on a linear nonhomogenous representative ODE. but the most popular one is due to Runge. To satisfy the condition that er = O(k5). 6.75 which must be satis ed by the 10 unknowns. 6.7 There are eight equations in 6. two parameters can be set arbitrarily.73 to eliminate the 's we nd from Eq. 6.76) un+1 = un + 1 h u0n + 2 u0n+1=2 + u0n+1=2 + u0n+1 ~ 6 The present approach based on a linear inhomogeneous equation provides all of the necessary conditions for Runge-Kutta methods of up to third order.75) 2 + ( 2 + 2 ) = 1=12 (4) 3 1 4 2 1 2 (4) 3 1 1 + 4 2 ( 2 + 1 2 ) = 1=8 The number in parentheses at the end of each equation indicates the order that is the basis for the equation. Since the equations are overdetermined. 6.69 and 6.74 and the rst equation in 6. we have to ful ll four more conditions 2 2 2 = 1=3 (3) 2 + 3 1+ 4 2 3+ 3+ 3 = 1=4 (4) 2 3 1 4 2 (6. As discussed in Section 6.74) These four relations guarantee that the ve terms in exactly match the rst 5 terms in the expansion of e h .6.70.9. Several choices for the parameters have been proposed. the fourth condition in Eq.75 cannot be derived using the methodology presented here.69 the four conditions 1+ 2+ 3+ 4 2 + 3 1+ 4 2 3 1 + 4( 2 + 1 2) 4 1 2 = 1 = 1=2 = 1=6 = 1=24 (1) (2) (3) (4) (6.

Following the steps outlined in Section 6. In any event. TIME-MARCHING METHODS FOR ODE'S 9 > > = > > Notice that this represents the simple sequence of conventional linear multistep methods referred to. and third-order methods can be derived from Eqs.10 Implementation of Implicit Methods We have presented a wide variety of time-marching methods and shown how to derive their . un = he h ae hn (6.74 and the rst equation in 6. h)un+1 .72 by setting 4 = 0 and satisfying only Eqs. We shall discuss the consequences of this in Chapter 8.2. as Euler Predictor Euler Corrector Leapfrog Predictor Milne Corrector RK 4 One can easily show that both the Burstein and the MacCormack methods given by Eqs. 6.66 and 6. It is clear that for orders one through four. Higher-order RungeKutta methods can be developed. For example. 6. Our presentation of the time-marching methods in the context of a linear scalar equation obscures some of the issues involved in implementing an implicit method for systems of equations and nonlinear equations. but they require more derivative evaluations than their order. 6. These are covered in this Section. 6. 6.77) using the implicit Euler method. relations.75. a fth-order method requires six evaluations to advance the solution one step. we will see that these methods can have widely di erent properties with respect to stability. RK methods of order k require k evaluations of the derivative function to advance the solution one time step. In the next chapter. we obtained (1 . storage requirements reduce the usefulness of Runge-Kutta methods of order higher than four for CFD applications. respectively. This leads to various tradeo s which must be considered in selecting a method for a speci c application.1 Application to Systems of Equations Consider rst the numerical solution of our representative ODE u0 = u + ae t (6.10.110 CHAPTER 6.78) .67 are second-order Runge-Kutta methods.

The primary area of application of implicit methods is in the solution of sti ODE's.6. f (t) u u ~ (6..78 is ~ (I .aBp (. and hence its solution is inexpensive. but rather we solve Eq. the cost per time step of an implicit method is larger than that of an explicit method. the system of equations which must be solved is tridiagonal (e. 6. As an example.2 Application to Nonlinear Equations (6. For our one-dimensional examples.g. requiring only an additional division. IMPLEMENTATION OF IMPLICIT METHODS Solving for un+1 gives 111 1 un+1 = 1 . ~ n = . 6.79) This calculation does not seem particularly onerous in comparison with the application of an explicit method to this ODE. A = .80) ~ where ~ and f are vectors and we still assume that A is not a function of ~ or t. but in multidimensions the bandwidth can be very large. hA)~ n+1 . h (un + he h ae hn ) (6. for biconvection. Now let us apply the implicit Euler method to our generic system of equations given by ~ 0 = A~ . hA).1 0 1)=2 x). consider the nonlinear ODE du + 1 u2 = 0 (6.1 ~ n .85) dt 2 . as we shall see in Chapter 8.hf (t + h) u u (6.10.82) The inverse is not actually performed. In general. hf (t + h)] u u (6. Now u u the equivalent to Eq.83) (6.81) or ~ ~ n+1 = (I .10.81 as a linear system of equations. Now consider the general nonlinear scalar ODE given by du = F (u t) dt Application of the implicit Euler method gives 6.84) un+1 = un + hF (un+1 tn+1) This is a nonlinear di erence equation.

TIME-MARCHING METHODS FOR ODE'S solved using implicit Euler time marching. u )2 + @ 2 F (u .88) and Eq. There are several di erent approaches one can take to solving this nonlinear di erence equation. since the \initial guess" is simply the solution at the previous time step. + (6. Designate the reference value by tn and the corresponding value of the dependent variable by un.10. which implies that a linearization approach may be quite successful. u ) + @F (t . tn) @u @t ! 2 + 1 (t .89) ! ! @F (u . can be used.87 can be (6. u )(t .87) On the other hand. we expand F (u t) about some reference point in time.3 Local Linearization for Scalar Equations General Development Let us start the process of local linearization by considering Eq. the expansion of u(t) in terms of the independent variable t is u(t) = un + (t .112 CHAPTER 6. tn)k and (u .83. un)k are written n O(hk ). In order to implement the linearization. both (t . t ) F (u t) = F (un tn) + @u n n @t n ! ! n 1 @ 2 F (u . t ) + 2 @u2 n n n @u@t n n ! 1 @ 2 F (t . 6. u ) + @F (t . tn)2 @ u @t2 n 2 ! If t is within h of tn . t ) + O(h2) F (u t) = Fn + @u n n @t n n .86) 2 n which requires a nontrivial method to solve for un+1. Such an approach is described in the next Section. In practice. 6. A Taylor series expansion about these reference quantities gives ! ! @F (u . which gives un+1 + h 1 u2 +1 = un (6. such as Newton's method (see below). An iterative method. the \initial guess" for this nonlinear problem can be quite close to the solution. t )2 + + 2 @t2 n n (6. 6.

83. relative to the order of expansion of the function. 6. locally-linear approximation to F (u t) that is valid in the vicinity of the reference station tn and the corresponding un = u(tn).90) Implementation of the Trapezoidal Method As an example of how such an expansion can be used. 1 h @F 2 @u n @t n which is now in the delta form which will be formally introduced in Section 12. The trapezoidal method is given by 1 un+1 = un + 2 h Fn+1 + Fn] + hO(h2) (6. In many uid mechanic applications the nonlinear function F is not an explicit function " ! ! # " !# ! . With this we obtain the locally (in the neighborhood of tn) time-linear representation of Eq. 6.6. combine to make a second-order-accurate numerical integration process. @F u + @F (t . of course.91) where we write hO(h2) to emphasize that the method is second order accurate.92) Note that the O(h2) term within the brackets (which is due to the local linearization) is multiplied by h and therefore is the same order as the hO(h2) error from the Trapezoidal Method. other second-order implicit timemarching methods that can be used.92 results in the expression 1 un = hFn + 2 h2 @F (6. and the trapezoidal time march. consider the mechanics of applying the trapezoidal method for the time integration of Eq. 6. There are. The use of local time linearization updated at the end of each time step. Using Eq. A very useful reordering of the terms in Eq. one nds 1 un+1 = un + 2 h Fn + @F (un+1 . The important point to be made here is that local linearization updated at each time step has not reduced the order of accuracy of a second-order time-marching process.83. IMPLEMENTATION OF IMPLICIT METHODS 113 Notice that this is an expansion of the derivative of the function. 6. un) + h @F + O(h2) + Fn @u n @t n 2) +hO(h (6.89 to evaluate Fn+1 = F (un+1 tn+1). namely ! ! ! ! du = @F u + F . it represents a second-order-accurate.10. Thus.93) 1 .6. t ) + O(h2) n n dt @u n @u n n @t n (6.

the basic equation was the simple scalar equation 6. 6. but for our applications. 2 h @F @u !# n un = hFn (6. In this example.90 into this method. A numerical time-marching procedure using Eq. u 4. u 2. store them in an array say R. 6. It is the product of h and the RHS of the basic equation evaluated at the previous time step.93 simpli es to the second-order accurate expression " 1 1 . we arrive at the form " !# @F 1 .1 in carrying out this step). Solve the coupled set of linear equations B ~n = R u ~ for ~ n.96) n . Let this storage area be referred to as B . h @u un = hFn (6. Find ~ n+1 by adding ~ n to ~ n. 6. and save ~ n. rearrange terms. thus u u u ~ n+1 = ~ n + ~ n u u u The solution for ~ n+1 is generally stored such that it overwrites the value of ~ n u u and the process is repeated. and remove the explicit dependence on time.114 CHAPTER 6.95) if we introduce Eq.94) Notice that the RHS is extremely simple. it is generally the space-di erenced form of the steady-state equation of some uid ow problem.83. Solve for the elements of the matrix multiplying ~ n and store in some approu priate manner making use of sparseness or bandedness of the matrix if possible. Implementation of the Implicit Euler Method We have seen that the rst-order implicit Euler method can be written un+1 = un + hFn+1 (6. (Very seldom does one nd B . In such cases the partial derivative of F (u) with respect to t is zero and Eq. 3. TIME-MARCHING METHODS FOR ODE'S of t.94 is usually implemented as follows: ~ ~ 1. Solve for the elements of hFn.

98) n n This is the well-known Newton method for nding the roots of a nonlinear equation F (u) = 0. df 2A = 0 dt3 dt2 dt (6. 6.6. The reader should ponder the meaning of letting h ! 1 for the trapezoidal method. @u (6.10.87 and 6.97) n or " !# @F . @u un = Fn (6. let us consider an example involving some simple boundary-layer equations. Our task is to apply the implicit trapezoidal method to the equations 0 !1 d3f + f d2f + @1 . Newton's Method 6. By quadratic convergence. We shall see later that this method is an excellent choice for steady problems.1 F un+1 = un . Quadratic convergence is thus a very powerful property. We choose the Falkner-Skan equations from classical boundary-layer theory. Omission of this factor degrades the method in time accuracy by one order of h.94. 6. There results ! @F . The fact that it has quadratic convergence is veri ed by a glance at Eqs. where the error is the di erence between the current solution and the converged solution. 6.88 (remember the dependence on t has been eliminated for this case).96..10.4 Local Linearization for Coupled Sets of Nonlinear Equations In order to present this concept.96 obtained by dividing both sides by h and setting 1=h = 0. 6. Use of a nite value of h in Eq. Consider the limit h ! 1 of Eq. the error at a given iteration is some multiple of the error at the previous iteration. we mean that the error after a given iteration is proportional to the square of the error at the previous iteration.96 leads to linear convergence. 6. and is a scaling factor.94 and 6.e. i. . IMPLEMENTATION OF IMPLICIT METHODS 115 We see that the only di erence between the implementation of the trapezoidal method and the implicit Euler method is the factor of 1 in the brackets of the left side of 2 Eqs.99) Here f represents a dimensionless stream function. given by Eq.

. It is derived from Eq. Omitting the explicit dependency ~ ~ u on the independent variable t. u2 2 0 = F =u u2 2 1 0 = F =u u3 (6. called the Jacobian matrix.101) 3 2 and these can be represented in vector notation as d~ = F (~ ) u ~ u (6.99 to a set of rst-order nonlinear equations by the transformations 2 u1 = d f u2 = df u3 = f (6. 6.100) 2 dt dt This gives the coupled set of three nonlinear equations u01 = F1 = .102) dt Now we seek to make the same local expansion that derived Eq. TIME-MARCHING METHODS FOR ODE'S First of all we reduce Eq. 6. one has 9 2.90. a matrix times a vector for the second term. except that this time we are faced with a nonlinear vector function.104) @u1 @u2 @u3 7 7 7 7 @F3 @F3 @F3 5 @u1 @u2 @u3 ~ u The expansion of F (~ ) about some reference state ~ n can be expressed in a way u similar to the scalar expansion given by eq 6. The required extension requires the evaluation of a matrix. 1 .8 Let us refer to this matrix as A.87.102 by the following process A = (aij ) = @Fi =@uj For the general case involving a third order matrix this is (6. 6. rather than a simple nonlinear scalar function. and tensor products for the terms of higher order.u1 u3 .2 8 9 2 6 6 6 6 A=6 6 6 6 6 4 Recall that we derived the Jacobian matrices for the two-dimensional Euler equations in Section The Taylor series expansion of a vector contains a vector for the rst term.116 CHAPTER 6.103) @F1 @F1 @F1 3 @u1 @u2 @u3 7 7 7 7 @F2 @F2 @F2 7 (6. and de ning F n as F (~ n).

) . we nd the Jacobian matrix to be 2 . could be used to integrate the equations without loss in accuracy with respect to order. Using this we can write the local linearization of Eq. the elements using u the solution will be second-order-accurate in time. Find an expression for the nth term in the Fibonacci series. tn and the argument for O(h2) is the same as in the derivation of Eq. which is given by 1 1 2 3 5 8 : : : Note that the series can be expressed as the solution to a di erence equation of the form un+1 = un + un.11. and the manner in which.6. 6. on the nature of the problem. ~ n + O(h2) u u (6.106) nu dt | {z } \constant" which is a locally-linear. Thus for the Falkner-Skan equations the trapezoidal method results in 2 3 1 + h (u3)n . explicit or implicit. and the solution is now advanced one step.h (u1)n 6 74 5 4 5 1 0 76 ( u2)n 7 = h 6 2 4 5 ( u) (u2)n 3 n 0 . 6. second-order-accurate approximation to a set of coupled nonlinear ordinary di erential equations that is valid for t tn + h. of course. u2)n 3 2 2 6 2 7 6 .1.or second-order time-marching method. PROBLEMS 117 ~ u ~ F (~ ) = F n + An ~ . A ~ +O(h2) u ~ n n un (6. u3 2 u2 . 6. (1 . h(u2)n h (u1)n 72 ( u1)n 3 2 .102 as d~ = A ~ + F . The number of times. Returning to our simple boundary-layer example. Re-evaluate u u u ~ n+1 and continue.101.107) 4 5 0 1 0 The student should be able to derive results for this example that are equivalent to those given for the scalar case in Eq. Without any iterating within a step advance. (u1u3)n .11 Problems 1. Any rst. What is u25 ? (Let the rst term given above be u0.105) where t .h 1 2 We nd ~ n+1 from ~ n + ~ n. 6. the terms in the Jacobian matrix are updated as the solution proceeds depends. 6.93. which is given by Eq.88.u1 3 0 0 7 A=6 1 (6.

relation. Find the di erence equation which results from applying the Gazdag predictorcorrector method (Eq.relation. The 2nd-order backward method is given by h i un+1 = 1 4un . 6. (a) What is the resulting O E? (b) What is its exact solution? (c) How does the exact steady-state solution of the O E compare with the exact steady-state solution of the ODE if = 0? 3. Identify the polynomials P (E ) and Q(E ). un. Identify the polynomials P (E ) and Q(E ). 5. Consider the following time-marching method: un+1=3 = un + hu0n=3 ~ un+1=2 = un + hu0n+1=3 =2 ~ un+1 = un + hu0n+1=2 . Find the . TIME-MARCHING METHODS FOR ODE'S 2.118 CHAPTER 6. Consider the time-marching scheme given by un+1 = un. The trapezoidal method un+1 = un + 1 h(u0n+1 + u0n) is used to solve the repre2 sentative ODE.1 + 2hu0n+1 3 (a) Write the O E for the representative equation. (c) Find er and the rst two nonvanishing terms in a Taylor series expansion of the spurious root. (b) Derive the . 4. (b) Derive the .1 + 23h (u0n+1 + u0n + u0n. Solve for the -roots and identify them as principal or spurious.65) to the representative equation.1) (a) Write the O E for the representative equation. (c) Find er . relation. (d) Perform a -root trace relative to the unit circle for both di usion and convection. 6.

Find the difference equation which results from applying this method to the representative equation. Find the λ–σ relation. Find the solution to the difference equation, including the homogeneous and particular solutions. What order is the homogeneous solution? What order is the particular solution? Find the particular solution if the forcing term is fixed. Find erλ and erμ.

7. Write a computer program to solve the one-dimensional linear convection equation with periodic boundary conditions and a = 1 on the domain 0 ≤ x ≤ 1. Use 2nd-order centered differences in space and a grid of 50 points. For the initial condition, use u(x, 0) = e^{−0.5[(x−0.5)/σ]²} with σ = 0.08. Use the explicit Euler, 2nd-order Adams-Bashforth (AB2), implicit Euler, trapezoidal, and 4th-order Runge-Kutta methods. For the explicit Euler and AB2 methods, use a Courant number, ah/Δx, of 0.1; for the other methods, use a Courant number of unity. Plot the solutions obtained at t = 1 compared to the exact solution (which is identical to the initial condition).

8. Repeat problem 7 using 4th-order (noncompact) differences in space. Use only 4th-order Runge-Kutta time marching at a Courant number of unity.

9. Using the computer program written for problem 7, compute the solution at t = 1 using 2nd-order centered differences in space coupled with the 4th-order Runge-Kutta method for grids of 100, 200, and 400 nodes. On a log-log scale, plot the error given by

       sqrt( (1/M) Σ_{j=1..M} (u_j − u_j^{exact})² )

   where M is the number of grid nodes and u^{exact} is the exact solution. Find the global order of accuracy from the plot.

10. Using the computer program written for problem 8, repeat problem 9 using 4th-order (noncompact) differences in space.

11. Write a computer program to solve the one-dimensional linear convection equation with inflow-outflow boundary conditions and a = 1 on the domain 0 ≤ x ≤ 1. Let u(0, t) = sin ωt with ω = 10π. Run until a periodic steady state is reached which is independent of the initial condition and plot your solution compared with the exact solution. Use 2nd-order centered differences in space with a 1st-order backward difference at the outflow boundary (as in Eq. 3.69) together with 4th-order Runge-Kutta time marching. Use grids with 100, 200, and 400 nodes and plot the error vs. the number of grid nodes, as described in problem 9. Find the global order of accuracy. Show solutions at t = 1 and t = 10 compared to the exact solution.

12. Repeat problem 11 using 4th-order (noncompact) centered differences. Use a third-order forward-biased operator at the inflow boundary (as in Eq. 3.67). At the last grid node, derive and use a 3rd-order backward operator (using nodes j, j − 1, j − 2, and j − 3) and at the second last node, use a 3rd-order backward-biased operator (using nodes j, j − 1, j − 2, and j + 1; see problem 1 in Chapter 3).

13. Using the approach described in Section 6.6.2, find the phase speed error, erp, and the amplitude error, era, for the combination of second-order centered differences and 1st-, 2nd-, 3rd-, and 4th-order Runge-Kutta time-marching at a Courant number of unity. Also plot the phase speed error obtained using exact integration in time, i.e., that obtained using the spatial discretization alone. Note that the required σ-roots for the various Runge-Kutta methods can be deduced from Eq. 6.69, without actually deriving the methods. Explain your results.
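A quick numerical check for problem 1 (a sketch; the helper names and the closed-form comparison via Binet's formula are ours, not the text's):

```python
# Problem 1 sketch: march u_{n+1} = u_n + u_{n-1} with u_0 = u_1 = 1,
# and compare with the closed-form expression built from the two roots
# of the characteristic polynomial E^2 - E - 1 = 0.
def fib_series(n_terms):
    u = [1, 1]                                 # the series 1 1 2 3 5 8 ...
    while len(u) < n_terms:
        u.append(u[-1] + u[-2])
    return u[:n_terms]

def fib_closed_form(n):
    # u_n = (phi^(n+1) - psi^(n+1)) / sqrt(5), with phi and psi the
    # roots of E^2 - E - 1 (Binet's formula).
    phi = (1 + 5 ** 0.5) / 2
    psi = (1 - 5 ** 0.5) / 2
    return round((phi ** (n + 1) - psi ** (n + 1)) / 5 ** 0.5)
```

The closed form is exact; the rounding only removes floating-point noise, which stays far below 1/2 for n of this size.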
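For problems 7 and 9, a minimal sketch of one of the requested combinations (periodic grid, 2nd-order centered differences, RK4 at a Courant number of unity); the function and variable names are our own choices, not prescribed by the text:

```python
import numpy as np

def rk4_convection(M, courant=1.0, a=1.0, t_final=1.0):
    """March du/dt = -a du/dx (periodic, 2nd-order centered) with RK4."""
    dx = 1.0 / M                                  # periodic grid x_j = j*dx
    x = np.arange(M) * dx
    u = np.exp(-0.5 * ((x - 0.5) / 0.08) ** 2)    # initial condition of problem 7
    h = courant * dx / a
    steps = int(round(t_final / h))

    def rhs(v):                                   # -a * (centered difference)
        return -a * (np.roll(v, -1) - np.roll(v, 1)) / (2 * dx)

    for _ in range(steps):
        k1 = rhs(u)
        k2 = rhs(u + 0.5 * h * k1)
        k3 = rhs(u + 0.5 * h * k2)
        k4 = rhs(u + h * k3)
        u = u + (h / 6) * (k1 + 2 * k2 + 2 * k3 + k4)
    return x, u

def rms_error(M):
    # Error norm of problem 9; after t = 1 the exact solution is the
    # initial condition, carried once around the periodic domain.
    x, u = rk4_convection(M)
    exact = np.exp(-0.5 * ((x - 0.5) / 0.08) ** 2)
    return np.sqrt(np.mean((u - exact) ** 2))
```

Evaluating rms_error for M = 100, 200, 400 and fitting the slope on a log-log plot gives the global order of accuracy asked for in problem 9.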

Chapter 7

STABILITY OF LINEAR SYSTEMS

A general definition of stability is neither simple nor universal and depends on the particular phenomenon being considered. In the case of the nonlinear ODE's of interest in fluid dynamics, stability is often discussed in terms of fixed points and attractors. In these terms a system is said to be stable in a certain domain if, from within that domain, some norm of its solution is always attracted to the same fixed point. These are important and interesting concepts, but we do not dwell on them in this work. Our basic concern is with time-dependent ODE's and OΔE's in which the coefficient matrices are independent of both u and t; see Section 4.2. We will refer to such matrices as stationary. Chapters 4 and 6 developed the representative forms of ODE's generated from the basic PDE's by the semidiscrete approach, and then the OΔE's generated from the representative ODE's by application of time-marching methods. These equations are represented by

    d~u/dt = A~u − ~f(t)    (7.1)

and

    ~u_{n+1} = C~u_n − ~g_n    (7.2)

respectively. For a one-step method, the latter form is obtained by applying a time-marching method to the generic ODE form in a fairly straightforward manner. For example, the explicit Euler method leads to C = I + hA, and ~g_n = h~f(nh). Methods involving two or more steps can always be written in the form of Eq. 7.2 by introducing new dependent variables. Note also that only methods in which the time and space discretizations are treated separately can be written in an intermediate semi-discrete form such as Eq. 7.1. The fully-discrete form, Eq. 7.2, and the associated stability definitions and analysis are applicable to all methods.

7.1 Dependence on the Eigensystem

Our definitions of stability are based entirely on the behavior of the homogeneous parts of Eqs. 7.1 and 7.2. The stability of Eq. 7.1 depends entirely on the eigensystem¹ of A. The stability of Eq. 7.2 can often also be related to the eigensystem of its matrix. However, in this case the situation is not quite so simple since, in our applications to partial differential equations (especially hyperbolic ones), a stability definition can depend on both the time and space differencing. This is discussed in Section 7.4. Analysis of these eigensystems has the important added advantage that it gives an estimate of the rate at which a solution approaches a steady-state if a system is stable. Consideration will be given to matrices that have both complete and defective eigensystems (see Section 4.2), with a reminder that a complete system can be arbitrarily close to a defective one, in which case practical applications can make the properties of the latter appear to dominate.

If A and C are stationary, we can, in theory at least, estimate their fundamental properties. For example, in Section 4.3.2, we found from our model ODE's for diffusion and periodic convection what could be expected for the eigenvalue spectrums of practical physical problems containing these phenomena. These expectations are referred to many times in the following analysis of stability properties. They are important enough to be summarized by the following:

• For diffusion dominated flows the λ-eigenvalues tend to lie along the negative real axis.

• For periodic convection-dominated flows the λ-eigenvalues tend to lie along the imaginary axis.

In previous chapters, we designated these eigenvalues as λm and σm for Eqs. 7.1 and 7.2, respectively, and we will find it convenient to examine the stability of various methods in both the complex λ and complex σ planes.

¹This is not the case if the coefficient matrix depends on t, even if it is linear.
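These two expectations are easy to verify numerically on small model operators (a sketch, with ν = a = 1; the Dirichlet diffusion matrix and the periodic convection circulant follow the forms developed in Chapter 4):

```python
import numpy as np

def model_spectra(M):
    """Eigenvalues of the model diffusion and biconvection matrices (nu = a = 1)."""
    dx = 1.0 / M
    # Diffusion model: tridiagonal (1, -2, 1)/dx^2 with Dirichlet boundaries.
    A_diff = (np.diag(-2.0 * np.ones(M)) +
              np.diag(np.ones(M - 1), 1) +
              np.diag(np.ones(M - 1), -1)) / dx ** 2
    # Biconvection model: periodic centered difference, -(u_{j+1} - u_{j-1})/(2 dx).
    A_conv = np.zeros((M, M))
    for j in range(M):
        A_conv[j, (j + 1) % M] = -1.0 / (2 * dx)
        A_conv[j, (j - 1) % M] = +1.0 / (2 * dx)
    return np.linalg.eigvals(A_diff), np.linalg.eigvals(A_conv)
```

The diffusion matrix is symmetric, so its eigenvalues are real (and negative); the periodic convection matrix is skew-symmetric, so its eigenvalues are purely imaginary — exactly the two bullet points above.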

7.2 Inherent Stability of ODE's

7.2.1 The Criterion

Here we state the standard stability criterion used for ordinary differential equations.

    For a stationary matrix A, Eq. 7.1 is inherently stable if, when ~f is constant, ~u remains bounded as t → ∞.    (7.3)

Note that inherent stability depends only on the transient solution of the ODE's.

7.2.2 Complete Eigensystems

If a matrix has a complete eigensystem, all of its eigenvectors are linearly independent, and the matrix can be diagonalized by a similarity transformation. In such a case it follows at once from Eq. 4.27, for example, that the ODE's are inherently stable if and only if

    ℜ(λm) ≤ 0    for all m    (7.4)

This states that, for inherent stability, all of the λ eigenvalues must lie on, or to the left of, the imaginary axis in the complex λ plane. This criterion is satisfied for the model ODE's representing both diffusion and biconvection. It should be emphasized (as it is an important practical consideration in convection-dominated systems) that the special case for which λ = iω is included in the domain of stability. In this case it is true that ~u does not decay as t → ∞, but neither does it grow, so the above condition is met. Finally we note that for ODE's with complete eigensystems the eigenvectors play no role in the inherent stability criterion.

7.2.3 Defective Eigensystems

In order to understand the stability of ODE's that have defective eigensystems, we inspect the nature of their solutions in eigenspace. For this we draw on the results in Section 4.3 and especially on Eqs. 4.18 to 4.19 in that section. In an eigenspace related to defective systems the form of the representative equation changes from a single equation to a Jordan block. For example, instead of Eq. 4.45 a typical form of the homogeneous part might be

    [ u'1 ]   [ λ         ] [ u1 ]
    [ u'2 ] = [ 1   λ     ] [ u2 ]
    [ u'3 ]   [     1   λ ] [ u3 ]

for which one finds the solution

    u1(t) = u1(0)e^{λt}
    u2(t) = [u2(0) + u1(0)t]e^{λt}
    u3(t) = [u3(0) + u2(0)t + (1/2)u1(0)t²]e^{λt}    (7.5)

Inspecting this solution, we see that for such cases condition 7.4 must be modified to the form

    ℜ(λm) < 0    for all m    (7.6)

since for pure imaginary λ, u2 and u3 would grow without bound (linearly or quadratically) if u2(0) ≠ 0 or u1(0) ≠ 0. Theoretically this condition is sufficient for stability in the sense of Statement 7.3 since tᵏe^{−|ℜ(λ)|t} → 0 as t → ∞ for all non-zero ℜ(λ). However, in practical applications the criterion may be worthless since there may be a very large growth of the polynomial before the exponential "takes over" and brings about the decay. Furthermore, on a computer such a growth might destroy the solution process before it could be terminated.

Note that the stability condition 7.6 excludes the imaginary axis, which tends to be occupied by the eigenvalues related to biconvection problems. However, condition 7.6 is of little or no practical importance if significant amounts of dissipation are present.

7.3 Numerical Stability of OΔE's

7.3.1 The Criterion

The OΔE companion to Statement 7.3 is

    For a stationary matrix C, Eq. 7.2 is numerically stable if, when ~g is constant, ~u_n remains bounded as n → ∞.    (7.7)

We see that numerical stability depends only on the transient solution of the OΔE's. This definition of stability is sometimes referred to as asymptotic or time stability.

As we stated at the beginning of this chapter, stability definitions are not unique. A definition often used in CFD literature stems from the development of PDE solutions that do not necessarily follow the semidiscrete route. In such cases it is appropriate to consider simultaneously the effects of both the time and space approximations. A time-space domain is fixed and stability is defined in terms of what happens to some norm of the solution within this domain as the mesh intervals go to zero at some constant ratio. We discuss this point of view in Section 7.4.
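The polynomial-times-exponential growth in Eq. 7.5 is easy to see numerically (a sketch with an arbitrarily chosen λ = −0.1 and unit initial data):

```python
import math

def u3(t, lam=-0.1, u1_0=1.0, u2_0=1.0, u3_0=1.0):
    # Third component of Eq. 7.5: quadratic polynomial times exp(lam*t).
    return (u3_0 + u2_0 * t + 0.5 * u1_0 * t * t) * math.exp(lam * t)

values = [u3(t) for t in range(201)]
peak = max(values)
```

Even though ℜ(λ) < 0 guarantees eventual decay, |u3| first grows by more than an order of magnitude — a concrete instance of the large polynomial growth before the exponential takes over.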

7.3.2 Complete Eigensystems

Consider a set of OΔE's governed by a complete eigensystem. The stability criterion, according to the condition set in Eq. 7.7, follows at once from a study of Eq. 6.28 and its companion for multiple σ-roots, Eq. 6.36. Clearly, for such systems a time-marching method is numerically stable if and only if

    |(σm)k| ≤ 1    for all m and k    (7.8)

This condition states that, for numerical stability, all of the eigenvalues (both principal and spurious, if there are any) must lie on or inside the unit circle in the complex σ-plane.

This definition of stability for OΔE's is consistent with the stability definition for ODE's. Again the sensitive case occurs for the periodic-convection model, which places the "correct" location of the principal σ-root precisely on the unit circle where the solution is only neutrally stable. Further, for a complete eigensystem, the eigenvectors play no role in the numerical stability assessment.

7.3.3 Defective Eigensystems

The discussion for these systems parallels the discussion for defective ODE's. Examine Eq. 6.18 and note its similarity with Eq. 7.5. We see that for defective OΔE's the required modification to 7.8 is

    |(σm)k| < 1    for all m and k    (7.9)

since defective systems do not guarantee boundedness for |σ| = 1. For example, in Eq. 7.5 if |σ| = 1 and either u2(0) or u1(0) ≠ 0 we get linear or quadratic growth.

7.4 Time-Space Stability and Convergence of OΔE's

Let us now examine the concept of stability in a different way. In the previous discussion we considered in some detail the following approach:

1. The PDE's are converted to ODE's by approximating the space derivatives on a finite mesh.

2. Inherent stability of the ODE's is established by guaranteeing that ℜ(λ) ≤ 0.

3. Time-march methods are developed which guarantee that |σ(λh)| ≤ 1 and this is taken to be the condition for numerical stability.

This does guarantee that a stationary system, generated from a PDE on some fixed space mesh, will have a numerical solution that is bounded as t = nh → ∞. This does not guarantee that desirable solutions are generated in the time march process as both the time and space mesh intervals approach zero.

Now let us define stability in the time-space sense. First construct a finite time-space domain lying within 0 ≤ x ≤ L and 0 ≤ t ≤ T. Cover this domain with a grid that is equispaced in both time and space and fix the mesh ratio by the equation²

    cn = Δt/Δx

Next reduce our OΔE approximation of the PDE to a two-level (i.e., two time-planes) formula in the form of Eq. 7.2. The homogeneous part of this formula is

    ~u_{n+1} = C~u_n    (7.10)

Eq. 7.10 is said to be stable if any bounded initial vector, ~u_0, produces a bounded solution vector, ~u_n, as the mesh shrinks to zero for a fixed cn. This is the classical definition of stability. It is often referred to as Lax or Lax-Richtmyer stability. Clearly as the mesh intervals go to zero, the number of time steps, N, must go to infinity in order to cover the entire fixed domain, so the criterion in 7.7 is a necessary condition for this stability criterion.

The significance of this definition of stability arises through Lax's Theorem, which states that, if a numerical method is stable (in the sense of Lax) and consistent, then it is convergent. A method is consistent if it produces no error (in the Taylor series sense) in the limit as the mesh spacing and the time step go to zero (with cn fixed, in the hyperbolic case). A method is convergent if it converges to the exact solution as the mesh spacing and time step go to zero in this manner.³ Clearly, this is an important property.

Applying simple recursion to Eq. 7.10, we find

    ~u_n = Cⁿ~u_0

and using vector and matrix p-norms (see Appendix A) and their inequality relations, we have

    ||~u_n|| = ||Cⁿ~u_0|| ≤ ||Cⁿ|| · ||~u_0|| ≤ ||C||ⁿ · ||~u_0||    (7.11)

²This ratio is appropriate for hyperbolic problems; a different ratio may be needed for parabolic problems such as the model diffusion equation.
³In the CFD literature, the word converge is used with two entirely different meanings. Here we refer to a numerical solution converging to the exact solution of the PDE. Later we will refer to the numerical solution converging to a steady state solution.

Since the initial data vector is bounded, the solution vector is bounded if

    ||C|| ≤ 1    (7.12)

where ||C|| represents any p-norm of C. This is often used as a sufficient condition for stability.

Now we need to relate the stability definitions given in Eqs. 7.8 and 7.9 with that given in Eq. 7.12. In Eqs. 7.8 and 7.9, stability is related to the spectral radius of C, i.e., its eigenvalue of maximum magnitude. In Eq. 7.12, stability is related to a p-norm of C. It is clear that the criteria are the same when the spectral radius is a true p-norm.

Two facts about the relation between spectral radii and matrix norms are well known:

1. The spectral radius of a matrix is its L2 norm when the matrix is normal, i.e., it commutes with its transpose.

2. The spectral radius is the lower bound of all norms.

Furthermore, when C is normal, the second inequality in Eq. 7.11 becomes an equality. In this case, Eq. 7.12 becomes both necessary and sufficient for stability. From these relations we draw two important conclusions about the numerical stability of methods used to solve PDE's.

• The stability criteria in Eqs. 7.8 and 7.12 are identical for stationary systems when the governing matrix is normal. This includes symmetric, asymmetric, and circulant matrices. These criteria are both necessary and sufficient for methods that generate such matrices and depend solely upon the eigenvalues of the matrices.

• If the spectral radius of any governing matrix is greater than one, the method is unstable by any criterion. Thus for general matrices, the spectral radius condition is necessary⁴ but not sufficient for stability.

⁴Actually the necessary condition is that the spectral radius of C be less than or equal to 1 + O(Δt), but this distinction is not critical for our purposes here.
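Both conclusions can be illustrated in a few lines (a sketch; the two matrices are assumed examples of our own, one normal and one strongly non-normal):

```python
import numpy as np

def spectral_radius(C):
    return max(abs(np.linalg.eigvals(C)))

# A circulant (hence normal) matrix: its L2 norm equals its spectral radius.
C_normal = np.array([[0.5, 0.2, 0.1],
                     [0.1, 0.5, 0.2],
                     [0.2, 0.1, 0.5]])

# A non-normal matrix: spectral radius below one, yet its L2 norm exceeds
# one, so ||u_n|| can grow transiently even though u_n -> 0 as n -> infinity.
C_nonnormal = np.array([[0.9, 10.0],
                        [0.0,  0.9]])
```

For C_nonnormal the spectral radius correctly predicts asymptotic decay of its powers, while the norm criterion 7.12 fails — it is sufficient, not necessary, for general matrices.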

7.5 Numerical Stability Concepts in the Complex σ-Plane

7.5.1 σ-Root Traces Relative to the Unit Circle

Whether or not the semi-discrete approach was taken to find the differencing approximation of a set of PDE's, the final difference equations can be represented by

    ~u_{n+1} = C~u_n − ~g_n

Furthermore, if C has a complete⁵ eigensystem, the solution to the homogeneous part can always be expressed as

    ~u_n = c1 σ1ⁿ ~x1 + · · · + cm σmⁿ ~xm + · · · + cM σMⁿ ~xM

where the σm are the eigenvalues of C. If the semi-discrete approach is used, we can find a relation between the σ and the λ eigenvalues. This serves as a very convenient guide as to where we might expect the σ-roots to lie relative to the unit circle in the complex σ-plane. For this reason we will proceed to trace the locus of the σ-roots as a function of the parameter λh for the equations modeling diffusion and periodic convection⁶.

Locus of the exact trace

Figure 7.1 shows the exact trace of the σ-root if it is generated by e^{λh} representing either diffusion or biconvection. In both cases the dot represents the starting value where h = 0 and σ = 1. For the diffusion model, λh is real and negative. As the magnitude of λh increases, the trace representing the dissipation model heads towards the origin as λh → −∞. On the other hand, for the biconvection model, λh = iωh is always imaginary. As the magnitude of ωh increases, the trace representing the biconvection model travels around the circumference of the unit circle, which it never leaves. We must be careful in interpreting σ when it is representing e^{iωh}. The fact that it lies on the unit circle means only that the amplitude of the representation is correct; it tells us nothing of the phase error (see Eq. 6.41). The phase error relates to the position on the unit circle.

⁵The subject of defective eigensystems has been addressed. From now on we will omit further discussion of this special case.
⁶Or, if you like, the parameter λh for fixed values of λ equal to −1 and i for the diffusion and biconvection cases, respectively.

Examples of some methods

Now let us compare the exact σ-root traces with some that are produced by actual time-marching methods. Table 7.1 shows the λ–σ relations for a variety of methods. Figures 7.2 and 7.3 illustrate the results produced by various methods when they are applied to the model ODE's for diffusion and periodic convection. It is implied that the behavior shown is typical of what will happen if the methods are applied to diffusion- (or dissipation-) dominated or periodic convection-dominated problems, as well as what does happen in the model cases. Most of the important possibilities are covered by the illustrations.

    Table 7.1. Some λ–σ relations.

     1.  σ − 1 − λh = 0                                                      Explicit Euler
     2.  σ² − 2λhσ − 1 = 0                                                   Leapfrog
     3.  σ² − (1 + (3/2)λh)σ + (1/2)λh = 0                                   AB2
     4.  σ³ − (1 + (23/12)λh)σ² + (16/12)λhσ − (5/12)λh = 0                  AB3
     5.  σ(1 − λh) − 1 = 0                                                   Implicit Euler
     6.  σ(1 − (1/2)λh) − (1 + (1/2)λh) = 0                                  Trapezoidal
     7.  σ²(1 − (2/3)λh) − (4/3)σ + (1/3) = 0                                2nd-Order Backward
     8.  σ²(1 − (5/12)λh) − (1 + (8/12)λh)σ + (1/12)λh = 0                   AM3
     9.  σ³ − (1 + (13/12)λh + (15/24)λ²h²)σ² + (1/12)λh(1 + (5/2)λh)σ = 0   ABM3
    10.  σ³ − (1 + 2λh)σ² + (3/2)λhσ − (1/2)λh = 0                           Gazdag
    11.  σ − 1 − λh − (1/2)λ²h² = 0                                          RK2
    12.  σ − 1 − λh − (1/2)λ²h² − (1/6)λ³h³ − (1/24)λ⁴h⁴ = 0                 RK4
    13.  σ²(1 − (1/3)λh) − (4/3)λhσ − (1 + (1/3)λh) = 0                      Milne 4th

a. Explicit Euler Method

Fig. 7.2 shows results for the explicit Euler method. When used for dissipation-dominated cases it is stable for the range −2 ≤ λh ≤ 0. (Usually the magnitude of λ has to be estimated and often it is found by trial and error.) When used for biconvection the σ-trace falls outside the unit circle for all finite ωh, and the method has no range of stability in this case.
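Any entry of Table 7.1 can be checked numerically by handing its coefficients to a polynomial root finder (a sketch for entries 1 and 3; the stability tolerance is our own choice):

```python
import numpy as np

def sigma_roots(coeffs):
    """sigma-roots as the zeros of the characteristic polynomial."""
    return np.roots(coeffs)

def euler_sigmas(lh):
    # sigma - 1 - lambda*h = 0  (Table 7.1, entry 1)
    return sigma_roots([1.0, -(1.0 + lh)])

def ab2_sigmas(lh):
    # sigma^2 - (1 + (3/2) lh) sigma + (1/2) lh = 0  (Table 7.1, entry 3)
    return sigma_roots([1.0, -(1.0 + 1.5 * lh), 0.5 * lh])

def stable(sigmas, tol=1e-12):
    return all(abs(s) <= 1.0 + tol for s in sigmas)
```

This reproduces the statements in the text: explicit Euler is stable on −2 ≤ λh ≤ 0 but nowhere on the imaginary axis, and the AB2 spurious root leaves the unit circle for real λh < −1.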

[Figure 7.1: Exact traces of σ-roots for model equations, σ = e^{λh}: a) Dissipation (λh = 0 to −∞), b) Convection (trace stays on the unit circle as ωh increases).]

b. Leapfrog Method

This is a two-root method, since there are two σ's produced by every λ. When applied to dissipation dominated problems we see from Fig. 7.2 that the principal root is stable for a range of λh, but the spurious root is not. In fact, the spurious root starts on the unit circle and falls outside of it for all ℜ(λh) < 0. However, for biconvection cases, when λ is pure imaginary, the method is not only stable, but it also produces a σ that falls precisely on the unit circle in the range 0 ≤ ωh ≤ 1. As was pointed out above, this does not mean that the method is without error. Although the figure shows that there is a range of ωh in which the leapfrog method produces no error in amplitude, it says nothing about the error in phase. More is said about this in Chapter 8.

c. Second-Order Adams-Bashforth Method

This is also a two-root method but, unlike the leapfrog scheme, the spurious root starts at the origin, rather than on the unit circle; see Fig. 7.2. Therefore, there is a range of real negative λh for which the method will be stable. The figure shows that the range ends when λh < −1.0 since at that point the spurious root leaves the circle and |σ2| becomes greater than one. The situation is quite different when the λ eigenvalue is pure imaginary.

In that case, as ωh increases away from zero the spurious root remains inside the circle and remains stable for a range of ωh. However, the principal root falls outside the unit circle for all ωh > 0, and for the biconvection model equation the method is unstable for all ωh.

[Figure 7.2: Traces of σ-roots for various methods: a) Euler Explicit (λh = −2), b) Leapfrog (ωh = 1), c) AB2 (λh = −1); left column diffusion, right column convection.]

d. Trapezoidal Method

The trapezoidal method is a very popular one for reasons that are partially illustrated in Fig. 7.3. Its σ-roots fall on or inside the unit circle for both the dissipating and the periodic convecting case and, in fact, it is stable for all values of λh for which λ itself is inherently stable. Just like the leapfrog method it has the capability of

producing only phase error for the periodic convecting case, but there is a major difference between the two since the trapezoidal method produces no amplitude error for any ωh — not just a limited range between 0 ≤ ωh ≤ 1.

e. Gazdag Method

The Gazdag method was designed to produce low phase error. Since its characteristic polynomial for σ is a cubic (Table 7.1, no. 10), it must have two spurious roots in addition to the principal one. These are shown in Fig. 7.3. Note that both spurious roots are located at the origin when λh = 0. In both the dissipation and biconvection cases, a spurious root limits the stability. For the dissipating case, a spurious root leaves the unit circle when λh < −1/2, and for the biconvecting case, when ωh > 2/3.

f. Second- and Fourth-Order Runge-Kutta Methods, RK2 and RK4

Traces of the σ-roots for the second- and fourth-order Runge-Kutta methods are shown in Fig. 7.3. The figures show that both methods are stable for a range of λh when λh is real and negative, but that the range of stability for RK4 is greater, going almost all the way to −2.8, whereas RK2 is limited to −2. On the other hand, for biconvection RK2 is unstable for all ωh, whereas RK4 remains inside the unit circle for 0 ≤ ωh ≤ 2√2. One can show that the RK4 stability limit is about |λh| < 2.8 for all complex λh for which ℜ(λ) ≤ 0.

7.5.2 Stability for Small Δt

It is interesting to pursue the question of stability when the time step size, h, is small so that accuracy of all the σ-roots is of importance. Situations for which this is not the case are considered in Chapter 8.

Mild instability

All conventional time-marching methods produce a principal root that is very close to e^{λh} for small values of λh. Therefore, on the basis of the principal root, the stability of a method that is required to resolve a transient solution over a relatively short time span may be a moot issue. Such cases are typified by the AB2 and RK2 methods when they are applied to a biconvection problem. Figs. 7.2c and 7.3f show that for both methods the principal root falls outside the unit circle and is unstable for all ωh. However, if the transient solution of interest can be resolved in a limited number of time steps that are small in the sense of the figure, the error caused by this instability may be relatively unimportant. If the root had fallen inside the circle the method would have been declared stable but an error of the same magnitude would have been committed, just in the opposite direction.

[Figure 7.3: Traces of σ-roots for various methods (cont'd): d) Trapezoidal (λh → −∞), e) Gazdag (ωh = 2/3), f) RK2 (λh = −2), g) RK4 (λh = −2.8, ωh = 2√2); left column diffusion, right column convection.]

This kind of instability is referred to as mild instability and is not a serious problem under the circumstances discussed. For this reason the AB2 and the RK2 methods have both been used in serious quantitative studies involving periodic convection.

Catastrophic instability

There is a much more serious stability problem for small h that can be brought about by the existence of certain types of spurious roots. One of the best illustrations of this kind of problem stems from a critical study of the most accurate, explicit, two-step, linear multistep method (see Table 7.1):

    u_{n+1} = −4u_n + 5u_{n−1} + 2h[2u'_n + u'_{n−1}]    (7.13)

One can show, using the methods given in Section 6.6, that this method is third-order accurate both in terms of erλ and erμ, so from an accuracy point of view it is attractive. However, let us inspect its stability even for very small values of h. This can easily be accomplished by studying its characteristic polynomial when h → 0. From Eq. 7.13 it follows that for h = 0, P(E) = E² + 4E − 5. Factoring P(σ) = 0 we find P(σ) = (σ − 1)(σ + 5) = 0. There are two σ-roots: σ1, the principal one, equal to 1, and σ2, a spurious one, equal to −5!!

In order to evaluate the consequences of this result, one must understand how methods with spurious roots work in practice. We know that they are not self starting, and the special procedures chosen to start them initialize the coefficients of the spurious roots, the cmk for k > 1 in Eq. 6.36. If the starting process is well designed these coefficients are forced to be very small, and if the method is stable, they get smaller with increasing n. However, if the magnitude of one of the spurious σ is equal to 5, one can see disaster is imminent because (5)¹⁰ ≈ 10⁷. Even a very small initial value of cmk is quickly overwhelmed. Such methods are called catastrophically unstable and are worthless for most, if not all, computations.

Milne and Adams type methods

If we inspect the σ-root traces of the multiple root methods in Figs. 7.2 and 7.3, we find them to be of two types. One type is typified by the leapfrog method. In this case a spurious root falls on the unit circle when h → 0. The other type is exemplified by the 2nd-order Adams-Bashforth and Gazdag methods. In this case all spurious roots fall on the origin when h → 0.

The former type is referred to as a Milne Method. Since at least one spurious root for a Milne method always starts on the unit circle, the method is likely to become unstable for some complex λ as h proceeds away from zero.
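The catastrophe predicted for Eq. 7.13 shows up within a handful of steps. A sketch for the trivial case u' = 0 (so the derivative terms drop out and the exact solution is constant), with an assumed starting error of 10⁻¹⁰:

```python
def march(n_steps, eps=1e-10):
    """March u_{n+1} = -4 u_n + 5 u_{n-1}, i.e., Eq. 7.13 with u' = 0.

    The exact solution is u_n = u_0 for all n. A tiny error eps in the
    starting value excites the spurious root sigma_2 = -5 and grows
    like (-5)^n.
    """
    u_prev, u = 1.0, 1.0 + eps
    history = [u_prev, u]
    for _ in range(n_steps):
        u_prev, u = u, -4.0 * u + 5.0 * u_prev
        history.append(u)
    return history
```

Solving the error recursion exactly gives e_n = (eps/6)[1 − (−5)ⁿ], so a 10⁻¹⁰ starting error overwhelms the solution in roughly fifteen steps.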

On the basis of a Taylor series expansion, however, these methods are generally the most accurate insofar as they minimize the coefficient in the leading term for er_t.

The latter type is referred to as an Adams Method. Since for these methods all spurious roots start at the origin for h = 0, they have a guaranteed range of stability for small enough h. However, on the basis of the magnitude of the coefficient in the leading Taylor series error term, they suffer, relatively speaking, from accuracy.

For a given amount of computational work, the order of accuracy of the two types is generally equivalent, and stability requirements in CFD applications generally override the (usually small) increase in accuracy provided by a coefficient with lower magnitude.

7.6 Numerical Stability Concepts in the Complex λh Plane

7.6.1 Stability for Large h

The reason to study stability for small values of h is fairly easy to comprehend. Presumably we are seeking to resolve some transient and, since the accuracy of the transient solution for all of our methods depends on the smallness of the time-step, we seek to make the size of this step as small as possible. On the other hand, the cost of the computation generally depends on the number of steps taken to compute a solution, and to minimize this we wish to make the step size as large as possible. In the compromise, stability can play a part. Aside from ruling out catastrophically unstable methods, however, the situation in which all of the transient terms are resolved constitutes a rather minor role in stability considerations.

By far the most important aspect of numerical stability occurs under conditions when:

• One has inherently stable, coupled systems with λ-eigenvalues having widely separated magnitudes, or

• We seek only to find a steady-state solution using a path that includes the unwanted transient.

In both of these cases there exist in the eigensystems relatively large values of |λh| associated with eigenvectors that we wish to drive through the solution process without any regard for their individual accuracy in eigenspace. This situation is the major motivation for the study of numerical stability.

It leads to the subject of stiffness discussed in the next chapter.

7.6.2 Unconditional Stability, A-Stable Methods

Inherent stability of a set of ODE's was defined in Section 7.2 and, for coupled sets with a complete eigensystem, it amounted to the requirement that the real parts of all λ eigenvalues must lie on, or to the left of, the imaginary axis in the complex λ plane. This serves as an excellent reference frame to discuss and define the general stability features of time-marching methods. For example, we start with the definition:

    A numerical method is unconditionally stable if it is stable for all ODE's that are inherently stable.

A method with this property is said to be A-stable. A method is Ao-stable if the region of stability contains the negative real axis in the complex λh plane, and I-stable if it contains the entire imaginary axis. By applying a fairly simple test for A-stability in terms of positive real functions to the class of two-step LMM's given in Section 6.7, one finds these methods to be A-stable if and only if

    θ ≥ φ + 1/2    (7.14)
    ξ ≥ −1/2    (7.15)
    ξ ≤ θ + φ − 1/2    (7.16)

A set of A-stable implicit methods is shown in Table 7.2. Notice that none of these methods has an accuracy higher than second-order.

    Table 7.2. Some unconditionally stable (A-stable) implicit methods.

      θ      ξ      φ      Method                  Order
      1      0      0      Implicit Euler            1
      1/2    0      0      Trapezoidal               2
      1      1/2    0      2nd-order Backward        2
      3/4    0     −1/4    Adams type                2
      1/3   −1/2   −1/3    Lees                      2
      1/2   −1/2   −1/2    Two-step trapezoidal      2
      5/9   −1/6   −2/9    A-contractive             2

It can be proved that the order of an A-stable LMM cannot exceed two, and, furthermore, that of all 2nd-order A-stable methods, the trapezoidal method has the smallest truncation error.
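The A-stability of the trapezoidal entry can be confirmed directly from its λ–σ relation in Table 7.1, σ = (1 + λh/2)/(1 − λh/2) (a sketch; the sample points are arbitrary):

```python
def trapezoidal_sigma(lh):
    # From sigma(1 - lh/2) - (1 + lh/2) = 0 (Table 7.1, entry 6).
    return (1.0 + 0.5 * lh) / (1.0 - 0.5 * lh)

# Sample the closed left-half lambda-h plane, including the imaginary axis.
samples = [complex(re, im)
           for re in (0.0, -0.1, -1.0, -10.0, -100.0)
           for im in (0.0, 0.5, 3.0, 50.0, -2.0)]
mags = [abs(trapezoidal_sigma(z)) for z in samples]
```

On the imaginary axis |σ| = 1 exactly (pure phase error), and |σ| < 1 wherever ℜ(λh) < 0 — consistent with unconditional stability for all inherently stable λ.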

that of all 2nd-order A-stable methods, the trapezoidal method has the smallest truncation error.

Returning to the stability test using positive real functions, one can show that a two-step LMM is Ao-stable if and only if

    θ ≥ φ + 1/2                  (7.17)
    ξ ≥ −1/2                     (7.18)
    θ + ξ − φ ≥ 0                (7.19)

For first-order accuracy, the inequalities 7.17 to 7.19 are less stringent than 7.14 to 7.16. For second-order accuracy, however, the parameters (θ, ξ, φ) are related by the condition

    φ = ξ − θ + 1/2

and the two sets of inequalities reduce to the same set, which is

    ξ ≤ 2θ − 1                   (7.20)
    ξ ≥ −1/2                     (7.21)

Hence, second-order accurate LMM's that are A-stable and Ao-stable share the same (θ, ξ) parameter space. Although the order of accuracy of an A-stable method cannot exceed two, Ao-stable LMM methods exist which have an accuracy of arbitrarily high order.

It has been shown that for a method to be I-stable it must also be A-stable. Therefore, no further discussion is necessary for the special case of I-stability.

It is not difficult to prove that methods having a characteristic polynomial for which the coefficient of the highest-order term in E is unity (or can be made equal to unity by a trivial normalization, i.e., division by a constant independent of λh) can never be unconditionally stable; the proof follows from the fact that the coefficients of such a polynomial are sums of various combinations of products of all its roots. This includes all explicit methods and predictor-corrector methods made up of explicit sequences. Such methods are referred to, therefore, as conditionally stable methods.

7.6.3 Stability Contours in the Complex λh Plane

A very convenient way to present the stability properties of a time-marching method is to plot the locus of the complex λh for which |σ| = 1, such that the resulting contour goes through the point λh = 0. Here |σ| refers to the maximum absolute value of any σ, principal or spurious, that is a root to the characteristic polynomial for a given λh. It follows from Section 7.3 that on one side of this contour the numerical method is stable, while on the other, it is unstable. We refer to it, therefore, as a stability contour.

Typical stability contours for both explicit and implicit methods are illustrated in Fig. 7.4, which is derived from the one-root θ-method given in Chapter 6.

[Figure 7.4: Stability contours for the θ-method. (a) Euler explicit, θ = 0; (b) trapezoidal implicit, θ = 1/2; (c) Euler implicit, θ = 1.]

Contours for explicit methods

Fig. 7.4a shows the stability contour for the explicit Euler method. In the following two ways it is typical of all stability contours for explicit methods:

1. The contour encloses a finite portion of the left-half complex λh-plane.
2. The region of stability is inside the boundary, and therefore, it is conditional.

However, this method includes no part of the imaginary axis (except for the origin) and so it is unstable for the model biconvection problem. Although several explicit methods share this deficiency (e.g., AB2, RK2), several others do not (e.g., leapfrog, Gazdag, RK3, RK4); see Figs. 7.5 and 7.6. Notice in particular that the third- and fourth-order Runge-Kutta methods include a portion of the imaginary axis out to ±1.9i and ±2√2 i, respectively.

Contours for unconditionally stable implicit methods

Fig. 7.4c shows the stability contour for the implicit Euler method. It is typical of many stability contours for unconditionally stable implicit methods.
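For the θ-method the σ-root is available in closed form, σ = (1 + (1 − θ)λh)/(1 − θλh), so the contours of Fig. 7.4 can be probed numerically. A small sketch (our own check, with arbitrarily chosen sample points):

```python
def sigma_theta(lam_h, theta):
    """sigma-root of the theta-method: (1 + (1-theta)*lh) / (1 - theta*lh)."""
    return (1 + (1 - theta) * lam_h) / (1 - theta * lam_h)

# Explicit Euler (theta = 0): stable inside the unit circle centered at -1.
assert abs(sigma_theta(-1.0 + 0j, 0.0)) < 1       # inside the contour -> stable
assert abs(sigma_theta(0.5j, 0.0)) > 1            # imaginary axis -> unstable

# Trapezoidal (theta = 1/2): |sigma| = 1 exactly on the imaginary axis.
assert abs(abs(sigma_theta(0.7j, 0.5)) - 1.0) < 1e-14

# Implicit Euler (theta = 1): numerically stable even for an inherently
# unstable ODE; lambda*h = 2.5 lies outside the unstable circle centered at +1.
assert abs(sigma_theta(2.5 + 0j, 1.0)) < 1
```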

[Figure 7.5: Stability contours for some explicit methods.]

[Figure 7.6: Stability contours for Runge-Kutta methods (stable regions for RK1 through RK4 in the complex λh plane).]

Notice that the method is stable for the entire range of complex λh that fall outside the boundary. This means that the method is numerically stable even when the ODE's that it is being used to integrate are inherently unstable. Some other implicit unconditionally stable methods with the same property are shown in Fig. 7.7.

[Figure 7.7: Stability contours for two unconditionally stable implicit methods: (a) 2nd-order backward implicit; (b) Adams type. Both are stable outside the indicated boundary.]

Not all unconditionally stable methods are stable in some regions where the ODE's they are integrating are inherently unstable. The classic example of a method that is stable only when the generating ODE's are themselves inherently stable is the trapezoidal method, i.e., the special case of the θ-method for which θ = 1/2. The stability boundary for this case is shown in Fig. 7.4b. The boundary is the imaginary axis and the numerical method is stable for λh lying on or to the left of this axis. Two other methods that have this property are the two-step trapezoidal method

    u_{n+1} = u_{n-1} + h [ u'_{n+1} + u'_{n-1} ]

and a method due to Lees

    u_{n+1} = u_{n-1} + (2/3) h [ u'_{n+1} + u'_n + u'_{n-1} ]

Notice that both of these methods are of the Milne type. In all of these cases the imaginary axis is part of the stable region.

Contours for conditionally stable implicit methods

Just because a method is implicit does not mean that it is unconditionally stable. Two illustrations of this are shown in Fig. 7.8. One of these is the Adams-Moulton 3rd-order method (no. 8, Table 7.1). Its stability boundary is very similar to that for the leapfrog method (see Fig. 7.5b). Another is the 4th-order Milne method, given by the point operator

    u_{n+1} = u_{n-1} + (1/3) h [ u'_{n+1} + 4u'_n + u'_{n-1} ]

and shown as no. 13 in Table 7.1. It is stable only for λ = iω when 0 ≤ ωh ≤ √3.
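Substituting u'_n = λu_n into the Milne point operator gives the characteristic polynomial (1 − λh/3)σ² − (4λh/3)σ − (1 + λh/3) = 0, and the claimed stability segment 0 ≤ ωh ≤ √3 can be checked by solving it numerically (a sketch; the sample values are ours):

```python
import cmath

def milne_sigma_max(lam_h):
    """Largest |sigma| root of (1 - z/3) s^2 - (4z/3) s - (1 + z/3) = 0, z = lambda*h."""
    a = 1 - lam_h / 3
    b = -4 * lam_h / 3
    c = -(1 + lam_h / 3)
    disc = cmath.sqrt(b * b - 4 * a * c)
    roots = ((-b + disc) / (2 * a), (-b - disc) / (2 * a))
    return max(abs(r) for r in roots)

print(milne_sigma_max(1.0j))   # on the stable segment: both roots on the unit circle
print(milne_sigma_max(1.9j))   # beyond sqrt(3) ~ 1.732: one root escapes, unstable
print(milne_sigma_max(-0.1))   # real negative lambda*h: unstable, as for all Milne types
```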

[Figure 7.8: Stability contours for two conditionally stable implicit methods: (a) AM3; (b) Milne 4th-order. The Milne method is stable only on a segment of the imaginary axis.]

7.7 Fourier Stability Analysis

By far the most popular form of stability analysis for numerical schemes is the Fourier or von Neumann approach. This analysis is usually carried out on point operators and it does not depend on an intermediate stage of ODE's. Strictly speaking it applies only to difference approximations of PDE's that produce O∆E's which are linear, have no space- or time-varying coefficients, and have periodic boundary conditions (another way of viewing this is to consider it as an initial value problem on an infinite space domain). In practical application it is often used as a guide for estimating the worthiness of a method for more general problems. It serves as a fairly reliable necessary stability condition, but it is by no means a sufficient one.

7.7.1 The Basic Procedure

One takes data from a "typical" point in the flow field and uses this as constant throughout time and space according to the assumptions given above. Then one imposes a spatial harmonic as an initial value on the mesh and asks the question: Will its amplitude grow or decay in time? The answer is determined by finding the conditions under which

    u(x, t) = e^{αt} · e^{iκx}                    (7.22)

is a solution to the difference equation, where κ is real and κΔx lies in the range 0 ≤ κΔx ≤ π. Since, for the general term,

    u^{(n+ℓ)}_{j+m} = e^{α(t+ℓΔt)} · e^{iκ(x+mΔx)} = e^{αℓΔt} · e^{iκmΔx} · u^{(n)}_j

the quantity u^{(n)}_j is common to every term and can be factored out. In the remaining expressions, we find the term e^{αΔt}, which we represent by σ, thus:

    σ ≡ e^{αΔt}

Then, since e^{αt} = (e^{αΔt})^n = σ^n, it is clear that:

    For numerical stability   |σ| ≤ 1             (7.23)

and the problem is to solve for the σ's produced by any given method and, as a necessary condition for stability, make sure that, in the worst possible combination of parameters, condition 7.23 is satisfied. (If boundedness is required only in a finite time domain, the condition is often presented as |σ| ≤ 1 + O(Δt).)

7.7.2 Some Examples

The procedure can best be explained by examples. Consider as a first example the finite-difference approximation to the model diffusion equation known as Richardson's method of overlapping steps. This was mentioned in Section 4.1:

    u^{(n+1)}_j = u^{(n-1)}_j + ν (2Δt/Δx²) ( u^{(n)}_{j+1} − 2u^{(n)}_j + u^{(n)}_{j-1} )     (7.24)

Substitution of Eq. 7.22 into Eq. 7.24 gives the relation

    σ = σ^{-1} + ν (2Δt/Δx²) ( e^{iκΔx} − 2 + e^{-iκΔx} )

or

    σ² + [ 4ν (Δt/Δx²)(1 − cos κΔx) ] σ − 1 = 0    (7.25)
          \__________ 2b __________/

Thus Eq. 7.22 is a solution of Eq. 7.24 if σ is a root of Eq. 7.25. The two roots of 7.25 are

    σ_{1,2} = −b ± √(b² + 1)

from which it is clear that one |σ| is always > 1. We find, therefore, by the Fourier stability test, that Richardson's method of overlapping steps is unstable for all ν, κ and Δt.
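The conclusion can be reproduced directly from Eq. 7.25: since b ≥ 0, the root −b − √(b² + 1) always has magnitude greater than one. A sketch with hypothetical parameter values:

```python
import math

def richardson_sigma_max(nu, dt, dx, kappa_dx):
    """Max |sigma| for Richardson's method, Eq. 7.25: sigma^2 + 2b sigma - 1 = 0."""
    b = 2.0 * nu * dt / dx**2 * (1.0 - math.cos(kappa_dx))
    roots = (-b + math.sqrt(b * b + 1.0), -b - math.sqrt(b * b + 1.0))
    return max(abs(r) for r in roots)

# Every nonzero combination of parameters gives max|sigma| > 1: unconditionally unstable.
for dt in (1e-4, 1e-3, 1e-2):
    for kdx in (0.1, 1.0, math.pi):
        assert richardson_sigma_max(nu=1.0, dt=dt, dx=0.1, kappa_dx=kdx) > 1.0
```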

As another example, consider the finite-difference approximation for the model biconvection equation

    u^{(n+1)}_j = u^{(n)}_j − (aΔt/2Δx) ( u^{(n)}_{j+1} − u^{(n)}_{j-1} )     (7.26)

In this case

    σ = 1 − i (aΔt/Δx) sin κΔx

from which it is clear that |σ| > 1 for all nonzero a and κ. Thus we have another finite-difference approximation that, by the Fourier stability test, is unstable for any choice of the free parameters.

7.7.3 Relation to Circulant Matrices

The underlying assumption in a Fourier stability analysis is that the C matrix, determined when the differencing scheme is put in the form of Eq. 7.2, is circulant. Such being the case, the e^{iκx} in Eq. 7.22 represents an eigenvector of the system, and the two examples just presented outline a simple procedure for finding the eigenvalues of the circulant matrices formed by application of the two methods to the model problems. The choice of e^{αΔt} for the stability parameter in the Fourier analysis, therefore, is not an accident. It is exactly the same σ we have been using in all of our previous discussions, but arrived at from a different perspective.

If we examine the preceding examples from the viewpoint of circulant matrices and the semi-discrete approach, the results present rather obvious conclusions. The space differencing in Richardson's method produces the matrix s·B_p(1, −2, 1), where s is a positive scalar coefficient. From Appendix B we find that the eigenvalues of this matrix are real negative numbers. Clearly, the time-marching is being carried out by the leapfrog method and, according to Fig. 7.5, this method is unstable for all eigenvalues with negative real parts. On the other hand, the space matrix in Eq. 7.26 is B_p(−1, 0, 1), and according to Appendix B, this matrix has pure imaginary eigenvalues. However, in this case the explicit Euler method is being used for the time-march and, according to Fig. 7.4, this method is always unstable for such conditions.

7.8 Consistency

Consider the model equation for diffusion analysis

    ∂u/∂t = ν ∂²u/∂x²                              (7.27)
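A corresponding check for Eq. 7.26: |σ|² = 1 + (aΔt/Δx)² sin² κΔx, so |σ| exceeds one for every nonzero Courant number and every resolvable wavenumber (a sketch; the sampled values are ours):

```python
import math

def biconvection_sigma(courant, kappa_dx):
    """sigma for explicit Euler + centered differencing, Eq. 7.26,
    with courant = a*dt/dx."""
    return complex(1.0, -courant * math.sin(kappa_dx))

# |sigma| > 1 for all nonzero Courant numbers and wavenumbers with sin != 0.
for c in (0.1, 0.5, 1.0):
    for kdx in (0.2, 1.0, 2.5):
        assert abs(biconvection_sigma(c, kdx)) > 1.0
```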

Many years before computers became available (1910, in fact), Lewis F. Richardson proposed a method for integrating equations of this type. We presented his method in Chapter 4 and analyzed its stability by the Fourier method in Section 7.7.2. In Richardson's time, the concept of numerical instability was not known. However, the concept is quite clear today and we now know immediately that his approach would be unstable. In all probability, the hand calculations (the only approach available at the time) were not carried far enough to exhibit strange phenomena. The method was used by Richardson for weather prediction, and this fact can now be a source of some levity.

As a semi-discrete method, Richardson's scheme can be expressed in matrix notation as the system of ODE's:

    d⃗u/dt = (ν/Δx²) B(1, −2, 1) ⃗u + (b⃗c)          (7.28)

with the leapfrog method used for the time march. Our analysis in this chapter revealed that this is numerically unstable, since the λ-roots of B(1, −2, 1) are all real and negative and the spurious σ-root in the leapfrog method is unstable for all such cases; see Fig. 7.5b.

We could, of course, use the 2nd-order Runge-Kutta method to integrate Eq. 7.28, since it is stable for real negative λ's. It is, however, conditionally stable, and for this case we are rather severely limited in time step size by the requirement Δt ≤ Δx²/(2ν).

There are many ways to manipulate the numerical stability of algorithms. One of them is to introduce mixed time and space differencing, a possibility we have not yet considered. For example, we introduced the DuFort-Frankel method in Chapter 4:

    u^{(n+1)}_j = u^{(n-1)}_j + (2νΔt/Δx²) [ u^{(n)}_{j+1} − ( u^{(n+1)}_j + u^{(n-1)}_j ) + u^{(n)}_{j-1} ]     (7.29)

in which the central term in the space derivative in Eq. 7.24 has been replaced by its average value at two different time levels. Now let α ≡ 2νΔt/Δx² and rearrange terms:

    (1 + α) u^{(n+1)}_j = (1 − α) u^{(n-1)}_j + α ( u^{(n)}_{j+1} + u^{(n)}_{j-1} )     (7.30)

There is no obvious ODE between the basic PDE and this final O∆E. Hence, there is no intermediate λ-root structure to inspect. Instead one proceeds immediately to the σ-roots.
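Before carrying out the σ-root analysis, the unusual stability of the DuFort-Frankel scheme can be observed empirically. The sketch below (our own experiment; the grid size and parameters are arbitrary) marches Eq. 7.30 with α = 2, i.e., at twice the Δt ≤ Δx²/(2ν) limit that binds the 2nd-order Runge-Kutta alternative, and the harmonic still decays:

```python
import math

def dufort_frankel(u_prev, u_curr, alpha):
    """One DuFort-Frankel step, Eq. 7.30, on a periodic grid."""
    M = len(u_curr)
    return [((1.0 - alpha) * u_prev[j]
             + alpha * (u_curr[(j + 1) % M] + u_curr[(j - 1) % M]))
            / (1.0 + alpha)
            for j in range(M)]

M, alpha = 20, 2.0                    # alpha = 2 nu dt / dx^2 = 2: past the explicit limit
u0 = [math.sin(2.0 * math.pi * j / M) for j in range(M)]
u_prev, u_curr = u0, u0[:]            # crude starting procedure: two equal levels
for _ in range(200):
    u_prev, u_curr = u_curr, dufort_frankel(u_prev, u_curr, alpha)

print(max(abs(v) for v in u_curr))    # far smaller than the initial amplitude of 1
```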

The simplest way to carry this out is by means of the Fourier stability analysis introduced in Section 7.7. This leads at once to

    (1 + α) σ² − 2α σ cos κΔx − (1 − α) = 0

The solution of the quadratic is

    σ = [ α cos κΔx ± √(1 − α² sin² κΔx) ] / (1 + α)

There are 2M σ-roots, all of which are ≤ 1 for any real α in the range 0 ≤ α ≤ ∞. This means that the method is unconditionally stable!

The above result seems too good to be true, since we have found an unconditionally stable method using an explicit combination of Lagrangian interpolation polynomials. The price we have paid for this is the loss of consistency with the original PDE. To prove this, we expand the terms in Eq. 7.30 in a Taylor series and reconstruct the partial differential equation we are actually solving as the mesh size becomes very small. For the time derivative we have

    (1/2Δt) [ u^{(n+1)}_j − u^{(n-1)}_j ] = (∂_t u)^{(n)}_j + (Δt²/6)(∂_{ttt} u)^{(n)}_j + ⋯

and for the mixed time and space differences

    [ u^{(n)}_{j+1} − u^{(n+1)}_j − u^{(n-1)}_j + u^{(n)}_{j-1} ] / Δx²
        = (∂_{xx} u)^{(n)}_j − (Δt²/Δx²)(∂_{tt} u)^{(n)}_j
          + (Δx²/12)(∂_{xxxx} u)^{(n)}_j − (Δt⁴/(12Δx²))(∂_{tttt} u)^{(n)}_j + ⋯

Replace the terms in Eq. 7.30 with the above expansions and take the limit as Δt, Δx → 0. We find

    ∂u/∂t = ν ∂²u/∂x² − ν r² ∂²u/∂t²               (7.31)

where

    r ≡ Δt/Δx

and MacCormack methods. Compare the computed solution with the exact steady solution.05 from 0 to 0. 2un + un. 3. 4. 2 Cn(un+1 . known as the Courant (or CFL) number.1 + 4( xu)j + ( xu)j+1 = 3x (uj+1 .relation for the method. implicit Euler. Take h in intervals of 0. is ah= x. Compare with the Runge-Kutta methods of order 1 through 4. obtain a time-marching method using this formula. uj. un. In all three cases.80 and compute the absolute values of the roots to at least 6 places. Using Fourier stability analysis.7. 6. h = 0:4 for 250 time steps and h = 1:0 for 100 time steps.1) + 2 Cn(un+1 . nd the range of Cn for which the method is stable. Ao -stable. (c) Using the . 5. what are the expected bounds on h for stability? Are your results consistent with these bounds? 2. use h = 0:1 for 1000 time steps. Recall the compact centered 3-point approximation for a rst derivative: ( xu)j. (c) Repeat the above for the RK2 method. the widely known Lax{Wendro method gives: 1 1 2 un+1 = un . or I-stable? .1) By replacing the spatial index j by the temporal index n. to what values of .1) j j j j j j j where Cn.9.59) does it correspond? Derive the . . Is it A-stable.relations for these three methods. (b) Plot the trace of the roots in the complex -plane and draw the unit circle on the same plot. and (in Eq. When applied to the linear convection equation. (a) Compute a table of the numerical values of the -roots of the 2nd-order Adams-Bashforth method when = i. h = 0:2 for 500 time steps. What is the steadystate solution? How does the ODE solution behave in time? (b) Write a code to integrate from the initial condition u(0) = 1 1 1]T from time t = 0 using the explicit Euler. What order is the method? Is it explicit or implicit? Is it a two-step LMM? If so. PROBLEMS 147 (a) Find the eigenvalues of A using a numerical package. Determine and plot the stability contours for the Adams-Bashforth methods of order 1 through 4.

1. (1 + 3 .4. Apply a Dirichlet boundary condition at the left boundary. ~n u u g Write C in banded matrix notation and give the entries of ~ . i. Write the O E which results from the application of explicit Euler time marching in matrix-vector form. if 2nd-order centered di erencing is used for the spatial di erences and the boundary conditions are periodic? Find the stability condition for one of these methods. 7. nd the eigenvalues of the spatial operator matrix. is obtained with h = . No boundary condition is permitted at the right boundary.relation for the 2nd-order AdamsBashforth method is h=2) + h=2 = 0 show that the maximum stable value of j hj for real negative . the point where the stability contour intersects the negative real axis. Using Appendix B.1..1. 8. Using Fourier analysis. 9. nd the -eigenvalues. Using the eigenvalues of the spatial operator matrix. Using the g relation for the explicit Euler method. nd the maximum stable time step for the combination of 2nd-order centered di erences and the 2nd-order Adams-Bashforth method applied to the model di usion equation. Consider the following PDE: @u = i @ 2 u @t @x2 p where i = . Repeat using Fourier analysis. what is the maximum Courant number allowed for asymptotic stability? Explain why this di ers from the answer to problem 8. Consider the linear convection with a positive wave speed as in problem 8. STABILITY OF LINEAR SYSTEMS 6. nd the -eigenvalues. Based on these. Write the system of ODE's which results from rst-order backward spatial di erencing in matrix-vector form. Hint: is C normal? 2 . i. Which explicit time-marching methods would be suitable for integrating this PDE.. Given that the . analyze the stability of rst-order backward di erencing coupled with explicit Euler time marching applied to the linear convection equation with positive a. ~ n+1 = C~ n . Find the maximum Courant number for stability.148 CHAPTER 7. Using Appendix B. 
Write the ODE system obtained by applying the 2nd-order centered di erence approximation to the spatial derivative in the model di usion equation with periodic boundary conditions.e.e.
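As an illustration of the Fourier procedure requested in problem 3, the sketch below sweeps |σ| for the Lax-Wendroff scheme over the phase angle κΔx. It is consistent with (though it does not by itself prove) the well-known result that the scheme is stable for C_n ≤ 1:

```python
import math

def lw_sigma_abs(c, theta):
    """|sigma| for the Lax-Wendroff scheme at Courant number c, phase angle theta = kappa*dx."""
    re = 1.0 - c * c * (1.0 - math.cos(theta))
    im = -c * math.sin(theta)
    return math.hypot(re, im)

def lw_max_sigma(c, samples=721):
    """Worst-case |sigma| over 0 <= theta <= pi."""
    return max(lw_sigma_abs(c, math.pi * i / (samples - 1)) for i in range(samples))

print(lw_max_sigma(0.9))   # bounded by 1: stable
print(lw_max_sigma(1.2))   # exceeds 1: unstable
```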

Chapter 8

CHOICE OF TIME-MARCHING METHODS

In this chapter we discuss considerations involved in selecting a time-marching method for a specific application. Examples are given showing how time-marching methods can be compared in a given context. An important concept underlying much of this discussion is stiffness, which is defined in the next section.

8.1 Stiffness Definition for ODE's

8.1.1 Relation to λ-Eigenvalues

The introduction of the concept referred to as "stiffness" comes about from the numerical analysis of mathematical models constructed to simulate dynamic phenomena containing widely different time scales. Definitions given in the literature are not unique, but fortunately we now have the background material to construct a definition which is entirely sufficient for our purposes.

We start with the assumption that our CFD problem is modeled with sufficient accuracy by a coupled set of ODE's producing an A matrix typified by those derived in the preceding chapters. Any definition of stiffness requires a coupled system with at least two eigenvalues, and the decision to use some numerical time-marching or iterative method to solve it. The difference between the dynamic scales in physical space is represented by the difference in the magnitude of the eigenvalues in eigenspace. In the following discussion we concentrate on the transient part of the solution. The forcing function may also be time varying, in which case it would also have a time scale. However, we assume that this scale would be adequately resolved by the chosen time-marching method, and, since this part of the ODE has no effect on the numerical stability of the homogeneous part, we exclude the forcing function from further discussion in this section.

Consider now the form of the exact solution of a system of ODE's with a complete eigensystem. This is given by Eq. 6.27, and its solution using a one-root time-marching method is represented by Eq. 6.28. For a given time step, the time integration is an approximation in eigenspace that is different for every eigenvector ⃗x_m. In many numerical applications the eigenvectors associated with the small |λ_m| are well resolved and those associated with the large |λ_m| are resolved much less accurately, if at all. The situation is represented in the complex λh plane in Fig. 8.1. In this figure the time step has been chosen so that time accuracy is given to the eigenvectors associated with the eigenvalues lying in the small circle, and stability without time accuracy is given to those associated with the eigenvalues lying outside of the small circle but still inside the large circle.

[Figure 8.1: Stable and accurate regions for the explicit Euler method.]

The whole concept of stiffness in CFD arises from the fact that we often do not need the time resolution of eigenvectors associated with the large |λ_m| in the transient solution, although these eigenvectors must remain coupled into the system to maintain a high accuracy of the spatial resolution.
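The separation sketched in Fig. 8.1 can be made concrete with a hypothetical two-eigenvalue model in which λ = −1 carries the physics of interest and λ = −100 is parasitic. For the explicit Euler method, stability demands |1 + λh| ≤ 1 for every eigenvalue, so the parasitic root alone limits the step to h ≤ 2/100 (a sketch; the eigenvalues are our own illustrative choices):

```python
def euler_march(lam, h, steps, u0=1.0):
    """Explicit Euler applied to u' = lam*u: u_{n+1} = (1 + lam*h) u_n."""
    u = u0
    for _ in range(steps):
        u = (1.0 + lam * h) * u
    return u

slow, fast = -1.0, -100.0
h_ok, h_bad = 0.015, 0.03          # the stability bound is 2/|fast| = 0.02

assert abs(euler_march(fast, h_ok, 400)) < 1e-6    # parasitic mode decays
assert abs(euler_march(fast, h_bad, 400)) > 1e6    # parasitic mode explodes
assert abs(euler_march(slow, h_bad, 400)) < 1.0    # the driving mode alone was fine
```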

8.1.2 Driving and Parasitic Eigenvalues

For the above reason it is convenient to subdivide the transient solution into two parts. First we order the eigenvalues by their magnitudes, thus

    |λ₁| ≤ |λ₂| ≤ ⋯ ≤ |λ_M|                        (8.1)

Then we write

    Transient Solution = Σ_{m=1}^{p} c_m e^{λ_m t} ⃗x_m  +  Σ_{m=p+1}^{M} c_m e^{λ_m t} ⃗x_m     (8.2)
                         \____ Driving ____/           \____ Parasitic ____/

This concept is crucial to our discussion. Rephrased, it states that we can separate our eigenvalue spectrum into two groups: one, [λ₁ → λ_p], called the driving eigenvalues (our choice of a time-step and marching method must accurately approximate the time variation of the eigenvectors associated with these), and the other, [λ_{p+1} → λ_M], called the parasitic eigenvalues (no time accuracy whatsoever is required for the eigenvectors associated with these, but their presence must not contaminate the accuracy of the complete solution). Unfortunately, we find that, although time accuracy requirements are dictated by the driving eigenvalues, numerical stability requirements are dictated by the parasitic ones.

8.1.3 Stiffness Classifications

The following definitions are somewhat useful. An inherently stable set of ODE's is stiff if

    |λ_p| ≪ |λ_M|

In particular we define the ratio

    C_r = |λ_M| / |λ_p|

and form the categories:

    Mildly-stiff           C_r < 10²
    Strongly-stiff         10³ < C_r < 10⁵
    Extremely-stiff        10⁶ < C_r < 10⁸
    Pathologically-stiff   10⁹ < C_r

It should be mentioned that the gaps in the stiff category definitions are intentional because the bounds are arbitrary. It is important to notice that these definitions make no distinction between real, complex, and imaginary eigenvalues.
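Expressed in code, with the intentional gaps preserved (a sketch, not part of the original text):

```python
def stiffness_category(cr):
    """Classify the stiffness ratio Cr = |lam_M| / |lam_p| per the categories above."""
    if cr < 1e2:
        return "mildly-stiff"
    if 1e3 < cr < 1e5:
        return "strongly-stiff"
    if 1e6 < cr < 1e8:
        return "extremely-stiff"
    if cr > 1e9:
        return "pathologically-stiff"
    return "unclassified (falls in an intentional gap)"

# The diffusion example of the next section (Cr ~ 680) lands in the gap
# just below the strongly-stiff category.
print(stiffness_category(680.0))
```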

8.2 Relation of Stiffness to Space Mesh Size

Many flow fields are characterized by a few regions having high spatial gradients of the dependent variables and other domains having relatively low gradient phenomena. As a result it is quite common to cluster mesh points in certain regions of space and spread them out otherwise. Examples of where this clustering might occur are at a shock wave, near an airfoil leading or trailing edge, and in a boundary layer. One quickly finds that this grid clustering can strongly affect the eigensystem of the resulting A matrix. In order to demonstrate this, let us examine the eigensystems of the model problems given in Chapter 4.

The simplest example to discuss relates to the model diffusion equation. In this case the eigenvalues are all real, negative numbers that automatically obey the ordering given in Eq. 8.1. Consider the case when all of the eigenvalues are parasitic, i.e., we are interested only in the converged steady-state solution. Under these conditions, the stiffness is determined by the ratio λ_M/λ₁. A simple calculation shows that

    λ₁ = −(4ν/Δx²) sin²(Δx/2) ≈ −ν

    λ_M = −(4ν/Δx²) sin²( Mπ / (2(M+1)) ) ≈ −4ν/Δx²

and the ratio is

    λ_M / λ₁ ≈ 4/Δx²

The most important information found from this example is the fact that the stiffness of the transient solution is directly related to the grid spacing. Furthermore, in diffusion problems this stiffness is proportional to the reciprocal of the space mesh size squared. For a mesh size M = 40, this ratio is about 680. Even for a mesh of this moderate size the problem is already approaching the category of strongly stiff.

For the biconvection model a similar analysis shows that

    |λ_M| / |λ₁| ∝ 1/Δx

Here the stiffness parameter is still space-mesh dependent, but much less so than for diffusion-dominated problems.

We see that in both cases we are faced with the rather annoying fact that the more we try to increase the resolution of our spatial gradients, the stiffer our equations tend to become. Typical CFD problems without chemistry vary between the mildly and strongly stiff categories, and are greatly affected by the resolution of a boundary layer since it is a diffusion process. Our brief analysis has been limited to equispaced problems, but in general the stiffness of CFD problems is proportional to the mesh intervals in the manner shown above, where the critical interval is the smallest one in the physical domain.
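The numbers quoted above are easy to reproduce. The sketch below evaluates the eigenvalues λ_m = −(4ν/Δx²) sin²(mΔx/2), m = 1, …, M, of the centered-difference diffusion operator, assuming the normalization Δx = π/(M + 1) used for the model problems:

```python
import math

def diffusion_eigs(M, nu=1.0):
    """Eigenvalues lam_m = -(4 nu / dx^2) sin^2(m dx / 2), m = 1..M, dx = pi/(M+1)."""
    dx = math.pi / (M + 1)
    return [-(4.0 * nu / dx**2) * math.sin(m * dx / 2.0) ** 2
            for m in range(1, M + 1)]

eigs = diffusion_eigs(40)
cr = abs(eigs[-1]) / abs(eigs[0])
print(eigs[0], eigs[-1], cr)   # lam_1 ~ -1, lam_M ~ -4/dx^2, ratio about 680
```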

Nevertheless. rather than the local ones er . the time interval would be that required to extract a reliable statistical sample. one can wonder how this information is to be used to pick a \best" choice for a particular problem. Let us consider the problem of measuring the e ciency of a time{marching method for computing. Er and Er! . highly dependent upon the speed.3 Practical Considerations for Comparing Methods We have presented relatively simple and reliable measures of stability and both the local and global accuracy of time-marching methods. and its computation usually consumes the major portion of the computer time required to make the simulation. and the accuracy would be related to how much the energy of certain harmonics would be permitted to distort from a given level. an accurate transient solution of a coupled set of ODE's. era . capacity. The appropriate error measures to be used in comparing methods for calculating an event are the global ones. The length of the time interval. It should then be clear how the analysis can be extended to other cases.8. and erp discussed earlier. if certain ground rules are agreed upon. Let us now examine some ground rules that might be appropriate. 8.6. This function is usually nonlinear. T . Since there are an endless number of these methods to choose from. among other things. .5. relevant conclusions can be reached. over a xed interval of time. discussed in Section 6. Such a computation we refer to as an event. For example.3. and architecture of the available computer. and the accuracy required of the solution are dictated by the physics of the particular problem involved. and technology in uencing this is undergoing rapid and dramatic changes as this is being written. PRACTICAL CONSIDERATIONS FOR COMPARING METHODS 153 problems. but in general the sti ness of CFD problems is proportional to the mesh intervals in the manner shown above where the critical interval is the smallest one in the physical domain. it is. 
For example. There is no unique answer to such a question. The actual form of the coupled ODE's that are produced by the semi-discrete approach is d~ = F (~ t) u ~ u dt ~ u At every time step we must evaluate the function F (~ t) at least once. Era . We refer to a single calculation of the ~ u vector F (~ t) as a function evaluation and denote the total number of such evaluations by Fev . in calculating the amount of turbulence in a homogeneous ow.

u(T ) = 0:25. CHOICE OF TIME-MARCHING METHODS As mentioned above. First of all. Fev . Since the exact solution is u(t) = u(0)e. the allowable error constraint means that the global error in the amplitude. let us set the additional requirement The error in u at the end of the event.t.154 8. must have the property: Er < 0:005 eT 8. this makes the exact value of u at the end of the event equal to 0. i.u (8. We judge the most e cient method as the one that satis es these conditions and has the fewest number of evaluations.3) dt from t = 0 to T = . so that T = Nh where N is the total number of time steps.48. Implications of computer storage capacity and access time are ignored. the e ciency of methods can be compared only if one accepts a set of limiting constraints within which the comparisons are carried out. The time-march method is explicit. must simulate an entire event which takes a total time T . h. must be < 0:5%. 8.4.1 Imposed Constraints 8. AB2. In some contexts.3 is obtained from our representative ODE with = .e. Three methods are compared | explicit Euler.. Let the event be the numerical solution of du = . Eq. see Eq.. 6. The follow assumptions bound the considerations made in this Section: 1. 2.4.4 Comparing the E ciency of Explicit Methods CHAPTER 8. the global error. i. this can be an important consideration.2 An Example Involving Di usion . ln(0:25) with u(0) = 1. and RK4. a = 0. 3.1.e. and must use a constant time step size. The calculation is to be time-accurate. To the constraints imposed above.25.

Then, since h = T/N = -ln(0.25)/N, it follows that

    Er = |1 - [σ1(ln(0.25)/N)]^N / 0.25| < 0.005

where σ1 is found from the characteristic polynomials given in Table 7.1. The results shown in Table 8.1 were computed using a simple iterative procedure.

Table 8.1: Comparison of time-marching methods for a simple dissipation problem.

    Method   N     h        σ1       Fev   Er
    Euler    193   .00718   .99282   193   .001248   worst
    AB2      16    .0866    .9172    16    .001137
    RK4      2     .6931    .5012    8     .001195   best

In this example we see that, for a given global accuracy, the method with the highest local accuracy is the most efficient on the basis of the expense measured in evaluations of Fev. Thus the second-order Adams-Bashforth method is much better than the first-order Euler method, and the fourth-order Runge-Kutta method is the best of all. The main purpose of this exercise is to show the (usually) great superiority of second-order over first-order time-marching methods.

8.4.3 An Example Involving Periodic Convection

Let us use as a basis for this example the study of homogeneous turbulence simulated by the numerical solution of the incompressible Navier-Stokes equations inside a cube with periodic boundary conditions on all sides. In this numerical experiment the function evaluations contribute overwhelmingly to the CPU time, and the number of these evaluations must be kept to an absolute minimum because of the magnitude of the problem. On the other hand, a complete event must be established in order to obtain meaningful statistical samples, which are the essence of the solution. In this case, in addition to the constraints given in Section 8.4.1, we add the following:

- The number of evaluations of F(u, t) is fixed.

Under these conditions a method is judged as best when it has the highest global accuracy for resolving eigenvectors with imaginary eigenvalues. The above constraint has led to the invention of schemes that omit the function evaluation in the corrector step of a predictor-corrector combination, leading to the so-called incomplete predictor-corrector methods. The presumption is, of course, that more efficient methods will result from the omission of the second function evaluation. An example is the method of Gazdag, given in Section 6.8. Basically this is composed of an AB2 predictor and a trapezoidal corrector. However, the derivative of the fundamental family is never found, so there is only one evaluation required to complete each cycle. The σ-λ relation for the method is shown as entry 10 in Table 7.1.

In order to discuss our comparisons we introduce the following definitions:

- Let a k-evaluation method be defined as one that requires k evaluations of F(u, t) to advance one step using that method's time interval, h.
- Let K represent the total number of allowable evaluations, Fev.
- Let h1 be the time interval advanced in one step of a one-evaluation method.

The Gazdag, leapfrog, and AB2 schemes are all 1-evaluation methods. The second- and fourth-order RK methods are 2- and 4-evaluation methods, respectively. For a 1-evaluation method the total number of time steps, N, and the number of evaluations, K, are the same, one evaluation being used for each step, so that for these methods h = h1. For a 2-evaluation method N = K/2, since two evaluations are used for each step. However, in this case, in order to arrive at the same time T after K evaluations, the time step must be twice that of a one-evaluation method, so h = 2h1. For a 4-evaluation method the time interval must be h = 4h1, etc. Notice that as k increases, the time span required for one application of the method increases. However, notice also that as k increases, the power to which σ1 is raised to arrive at the final destination decreases; see the schematic below. This is the key to the true comparison of time-march methods for this type of problem.

    k = 1:   h = h1,    uN = [σ(λh1)]^8
    k = 2:   h = 2h1,   uN = [σ(2λh1)]^4
    k = 4:   h = 4h1,   uN = [σ(4λh1)]^2

    Step sizes and powers of σ for k-evaluation methods used to get to the same value of T if 8 evaluations are allowed.

In general, after K evaluations, the global amplitude and phase error for k-evaluation methods applied to systems with pure imaginary λ-roots can be written¹

    Era = 1 - |σ1(ikωh1)|^(K/k)        (8.4)

    Erω = ωT - (K/k) tan⁻¹ { [σ1(ikωh1)]imaginary / [σ1(ikωh1)]real }        (8.5)

¹See Eqs. 6.38 and 6.39.

Consider a convection-dominated event for which the function evaluation is very time consuming. We idealize to the case where λ = iω and set ω equal to one. The event must proceed to the time t = T = 10. We consider two maximum evaluation limits, K = 50 and K = 100, and choose from four possible methods: leapfrog, AB2, Gazdag, and RK4. The first three of these are one-evaluation methods and the last one is a four-evaluation method.

First of all we see that for a one-evaluation method h1 = T/K. Using this, and the fact that ω = 1, we find, by some rather simple calculations² made using Eqs. 8.4 and 8.5, the results shown in Table 8.2. Notice that to find the global error the Gazdag root must be raised to the power of 50, while the RK4 root is raised only to the power of 50/4.

²The σ1 root for the Gazdag method can be found using a numerical root-finding routine to trace the three roots in the σ-plane; see Fig. 7.3e.

Table 8.2: Comparison of global amplitude and phase errors for four methods.

a. Amplitude, exact = 1.0

               K     leapfrog   AB2     Gazdag   RK4
    ωh1 = .1   100   1.0        1.003   .995     .999
    ωh1 = .2   50    1.0        1.022   .962     .979

b. Phase error in degrees

               K     leapfrog   AB2     Gazdag   RK4
    ωh1 = .1   100   -.96       -2.4    .45      .12
    ωh1 = .2   50    -3.8       -9.8    1.5      1.5

It is not difficult to show that on the basis of local error (made in a single step) the Gazdag method is superior to the RK4 method in both amplitude and phase. For example, for ωh = 0.2 the Gazdag method produces |σ1| = 0.9992276, whereas for ωh = 0.8 (which must be used to keep the number of evaluations the same) the RK4 method produces |σ1| = 0.998324. However, we are making our comparisons on the basis of global error for a fixed number of evaluations. On this basis the Gazdag method is not superior to RK4 in either amplitude or phase, although, in terms of phase error (for which it was designed), it is superior to the other two one-evaluation methods shown.
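The step counts in Table 8.1 can be reproduced from the characteristic roots alone. The sketch below is a minimal version of the "simple iterative procedure" mentioned for Table 8.1: for each method it searches for the smallest N such that the global error criterion Er < 0.005 is met (the σ1 expressions are the standard principal roots for Euler, AB2, and RK4 applied to the representative equation; the search loop is an assumption about how the table was generated).

```python
import math

def sigma_euler(lh):
    return 1.0 + lh

def sigma_ab2(lh):
    # Principal (larger) root of  s^2 - (1 + 3*lh/2)*s + lh/2 = 0
    b = 1.0 + 1.5 * lh
    c = 0.5 * lh
    return (b + math.sqrt(b * b - 4.0 * c)) / 2.0

def sigma_rk4(lh):
    return 1.0 + lh + lh**2 / 2 + lh**3 / 6 + lh**4 / 24

def global_error(sigma, N):
    """Relative global error at t = T = -ln(0.25) for lambda = -1."""
    lh = math.log(0.25) / N          # lambda*h with T = N*h
    return abs(1.0 - sigma(lh)**N / 0.25)

def min_steps(sigma):
    """Smallest N meeting the 0.5% global-error constraint."""
    N = 1
    while global_error(sigma, N) >= 0.005:
        N += 1
    return N

# Expected from Table 8.1: Euler needs N = 193, AB2 needs 16, RK4 needs 2
# (hence 193, 16, and 8 function evaluations, respectively).
counts = (min_steps(sigma_euler), min_steps(sigma_ab2), min_steps(sigma_rk4))
```

Running this reproduces the N column of Table 8.1 and hence the ranking: the higher-order methods reach the required global accuracy with far fewer evaluations.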

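The amplitude rows of Table 8.2 can likewise be checked directly from the σ-roots. A minimal sketch for the leapfrog, AB2, and RK4 methods follows (the Gazdag root requires tracing the roots of its cubic characteristic polynomial and is omitted here; the σ expressions below are the standard principal roots for λh = iωh):

```python
import cmath

def sig_leapfrog(z):
    # Principal root of s^2 - 2*z*s - 1 = 0; |sigma| = 1 for pure imaginary z
    return z + cmath.sqrt(1.0 + z * z)

def sig_ab2(z):
    # Principal root of s^2 - (1 + 3*z/2)*s + z/2 = 0
    b = 1.0 + 1.5 * z
    return (b + cmath.sqrt(b * b - 2.0 * z)) / 2.0

def sig_rk4(z):
    return 1.0 + z + z**2 / 2 + z**3 / 6 + z**4 / 24

def amplitude(sig, wh1, K, k):
    """Global amplitude |sigma(i*k*w*h1)|^(K/k) after K evaluations (Eq. 8.4)."""
    return abs(sig(1j * k * wh1)) ** (K / k)

# Rows of Table 8.2a: wh1 = 0.2 with K = 50 evaluations
amp_lf  = amplitude(sig_leapfrog, 0.2, 50, 1)   # 1.0 (neutrally stable)
amp_ab2 = amplitude(sig_ab2,      0.2, 50, 1)   # ~1.022
amp_rk4 = amplitude(sig_rk4,      0.2, 50, 4)   # ~0.979
```

Note that the RK4 root is evaluated at 4ωh1 but raised only to the power K/4, which is exactly the trade-off discussed above.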
Using analysis such as this (and also considering the stability boundaries), the RK4 method is recommended as a basic first choice for any explicit time-accurate calculation of a convection-dominated problem.

8.5 Coping With Stiffness

8.5.1 Explicit Methods

The ability of a numerical method to cope with stiffness can be illustrated quite nicely in the complex λh plane. A good example of the concept is produced by studying the Euler method applied to the representative equation. The transient solution is un = (1 + λh)^n, and the trace of the complex value of λh which makes |1 + λh| = 1 gives the whole story. In this case the trace forms a circle of unit radius centered at (-1, 0), as shown in Fig. 8.1. If h is chosen so that all λh in the ODE eigensystem fall inside this circle, the integration will be numerically stable. Also shown, by the small circle centered at the origin, is the region of Taylor series accuracy. If some λh fall outside the small circle but stay within the stable region, these λh are stiff, but stable. We have defined these λh as parasitic eigenvalues. Stability boundaries for some explicit methods are shown in Figs. 7.5 and 7.6.

For a specific example, consider the mildly stiff system composed of a coupled two-equation set having the two eigenvalues λ1 = -100 and λ2 = -1. If uncoupled and evaluated in wave space, the time histories of the two solutions would appear as a rapidly decaying function in one case, and a relatively slowly decaying function in the other. Analytical evaluation of the time histories poses no problem, since e^(-100t) quickly becomes very small and can be neglected in the expressions when time becomes large. Numerical evaluation is altogether different. Numerical solutions, of course, depend upon [σm(λm h)]^n, and no |σm| can exceed one for any λm in the coupled system or else the process is numerically unstable.

Let us choose the simple explicit Euler method for the time march. The coupled equations in real space are represented by

    u1(n) = c1 (1 - 100h)^n x11 + c2 (1 - h)^n x12 + (PS)1

    u2(n) = c1 (1 - 100h)^n x21 + c2 (1 - h)^n x22 + (PS)2        (8.6)

We will assume that our accuracy requirements are such that sufficient accuracy is obtained as long as |λh| ≤ 0.1. This defines a time step limit based on accuracy considerations of h = 0.001 for λ1 and h = 0.1 for λ2. The time step limit based on stability, which is determined from λ1, is h = 0.02. We will also assume that c1 = c2 = 1 and that an amplitude less than 0.001 is negligible. We first run 66 time steps with h = 0.001 in order to resolve the λ1 term. With this time step the λ2


term is resolved exceedingly well. After 66 steps, the amplitude of the λ1 term (i.e., (1 - 100h)^n) is less than 0.001 and that of the λ2 term (i.e., (1 - h)^n) is 0.9361. Hence the λ1 term can now be considered negligible. To drive the (1 - h)^n term to zero (i.e., below 0.001), we would like to change the step size to h = 0.1 and continue. We would then have a well resolved answer to the problem throughout the entire relevant time interval. However, this is not possible because of the coupled presence of (1 - 100h)^n, which in just 10 steps at h = 0.1 amplifies those terms by 10^9, far outweighing the initial decrease obtained with the smaller time step. In fact, with h = 0.02, the maximum step size that can be taken in order to maintain stability, about 339 time steps have to be computed in order to drive e^(-t) to below 0.001. Thus the total simulation requires 405 time steps.

8.5.2 Implicit Methods

Now let us re-examine the problem that produced Eq. 8.6, but this time using an unconditionally stable implicit method for the time march. We choose the trapezoidal method. Its behavior in the λh plane is shown in Fig. 7.4b. Since this is also a one-root method, we simply replace the Euler σ with the trapezoidal one and analyze the result. It follows that the final numerical solution to the ODE is now represented in real space by

    u1(n) = c1 [(1 - 50h)/(1 + 50h)]^n x11 + c2 [(1 - 0.5h)/(1 + 0.5h)]^n x12 + (PS)1

    u2(n) = c1 [(1 - 50h)/(1 + 50h)]^n x21 + c2 [(1 - 0.5h)/(1 + 0.5h)]^n x22 + (PS)2        (8.7)
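The step counts quoted in this stiff example can be confirmed by tracking the amplitude of each term directly. A minimal sketch (the 0.001 negligibility threshold and the step sizes are those assumed in the text):

```python
def steps_until_negligible(factor, start=1.0, tol=1e-3):
    """Number of multiplications by `factor` until the amplitude drops below tol."""
    n, a = 0, start
    while abs(a) >= tol:
        a *= factor
        n += 1
    return n

# Explicit Euler: sigma = 1 + lambda*h
n1 = steps_until_negligible(1.0 - 100 * 0.001)        # phase 1, h = 0.001
a2 = (1.0 - 1 * 0.001) ** n1                          # lambda2 amplitude after phase 1
n2 = steps_until_negligible(1.0 - 1 * 0.02, start=a2) # phase 2, h = 0.02 (stability limit)
euler_total = n1 + n2

# Trapezoidal: sigma = (1 + lambda*h/2) / (1 - lambda*h/2)
def sig_trap(lh):
    return (1.0 + lh / 2) / (1.0 - lh / 2)

m1 = steps_until_negligible(sig_trap(-100 * 0.001))              # phase 1, h = 0.001
b2 = sig_trap(-1 * 0.001) ** m1                                  # lambda2 amplitude
m2 = steps_until_negligible(sig_trap(-1 * 0.1), start=b2)        # phase 2, h = 0.1
trap_total = m1 + m2
```

The counts come out to 66 + 339 = 405 steps for the explicit Euler method and 70 + 69 = 139 for the trapezoidal method, matching the totals used in this discussion.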

In order to resolve the initial transient of the term e^(-100t), we need to use a step size of about h = 0.001. This is the same step size used in applying the explicit Euler method, because here accuracy is the only consideration and a very small step size must be chosen to get the desired resolution. (It is true that for the same accuracy we could in this case use a larger step size because this is a second-order method, but that is not the point of this exercise.) After 70 time steps the λ1 term has an amplitude less than 0.001 and can be neglected. Now with the implicit method we can proceed to calculate the remaining part of the event using our desired step size h = 0.1 without any problem of instability, with 69 steps required to reduce the amplitude of the second term to below 0.001. In both intervals the desired solution is second-order accurate and well resolved. It is true that in the final 69 steps one σ-root is [1 - 50(0.1)]/[1 + 50(0.1)] = -0.666 ..., and this has no physical meaning whatsoever. However, its influence on the coupled solution is negligible at the end of the first 70 steps, and, since |-0.666...|^n < 1, its influence in the remaining 70


steps is even less. Actually, although this root is one of the principal roots in the system, its behavior for t > 0.07 is identical to that of a stable spurious root. The total simulation requires 139 time steps.

8.5.3 A Perspective

It is important to retain a proper perspective on a problem represented by the above example. It is clear that an unconditionally stable method can always be called upon to solve stiff problems with a minimum number of time steps. In the example, the conditionally stable Euler method required 405 time steps, as compared to about 139 for the trapezoidal method, about three times as many. However, the Euler method is extremely easy to program and requires very little arithmetic per step. For preliminary investigations it is often the best method to use for mildly-stiff, diffusion-dominated problems. For refined investigations of such problems an explicit method of second order or higher, such as the Adams-Bashforth or Runge-Kutta methods, is recommended. These explicit methods can be considered as effective mildly-stiff-stable methods. However, it should be clear that as the degree of stiffness of the problem increases, the advantage begins to tilt towards implicit methods, as the reduced number of time steps begins to outweigh the increased cost per time step. The reader can repeat the above example with λ1 = -10,000, λ2 = -1, which is in the strongly-stiff category.

There is yet another technique for coping with certain stiff systems in fluid dynamic applications. This is known as the multigrid method. It has enjoyed remarkable success in many practical problems; however, we need an introduction to the theory of relaxation before it can be presented.

8.6 Steady Problems

In Chapter 6 we wrote the O∆E solution in terms of the principal and spurious roots as follows:

    un = c11 (σ1)₁^n x1 + ... + cm1 (σm)₁^n xm + ... + cM1 (σM)₁^n xM + P.S.
       + c12 (σ1)₂^n x1 + ... + cm2 (σm)₂^n xm + ... + cM2 (σM)₂^n xM
       + c13 (σ1)₃^n x1 + ... + cm3 (σm)₃^n xm + ... + cM3 (σM)₃^n xM
       + etc., if there are more spurious roots        (8.8)

When solving a steady problem, we have no interest whatsoever in the transient portion of the solution. Our sole goal is to eliminate it as quickly as possible. Therefore,


the choice of a time-marching method for a steady problem is similar to that for a stiff problem, the difference being that the order of accuracy is irrelevant. Hence the explicit Euler method is a candidate for steady diffusion-dominated problems, and the fourth-order Runge-Kutta method is a candidate for steady convection-dominated problems, because of their stability properties. Among implicit methods, the implicit Euler method is the obvious choice for steady problems. When we seek only the steady solution, all of the eigenvalues can be considered to be parasitic. Referring to Fig. 8.1, none of the eigenvalues are required to fall in the accurate region of the time-marching method. Therefore the time step can be chosen to eliminate the transient as quickly as possible with no regard for time accuracy. For example, when using the implicit Euler method with local time linearization, Eq. 6.96, one would like to take the limit h → ∞, which leads to Newton's method, Eq. 6.98. However, a finite time step may be required until the solution is somewhat close to the steady solution.
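The h → ∞ limit can be illustrated with a small sketch (a hypothetical 2×2 linear system, not from the text): a single implicit Euler step with a very large time step applied to dφ/dt = Aφ - f lands essentially on the steady solution A⁻¹f, which is what the Newton limit promises in the linear case.

```python
# Hypothetical symmetric system with eigenvalues -1 and -3 (left half-plane)
A = [[-2.0, 1.0], [1.0, -2.0]]
f = [1.0, 0.0]

def solve2(M, rhs):
    """Cramer's rule for a 2x2 linear system M x = rhs."""
    det = M[0][0] * M[1][1] - M[0][1] * M[1][0]
    return [(rhs[0] * M[1][1] - M[0][1] * rhs[1]) / det,
            (M[0][0] * rhs[1] - rhs[0] * M[1][0]) / det]

def implicit_euler_step(phi, h):
    # (I - h A) phi_{n+1} = phi_n - h f
    M = [[1.0 - h * A[0][0], -h * A[0][1]],
         [-h * A[1][0], 1.0 - h * A[1][1]]]
    rhs = [phi[0] - h * f[0], phi[1] - h * f[1]]
    return solve2(M, rhs)

phi = implicit_euler_step([0.0, 0.0], 1e8)   # one step, enormous h
exact = solve2(A, f)                         # steady solution A^{-1} f
```

With h = 10^8 the single step agrees with the direct solution to about one part in 10^8, consistent with the O(1/h) departure from the pure Newton limit.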

8.7 Problems

1. Repeat the time-march comparisons for diffusion (Section 8.4.2) and periodic convection (Section 8.4.3) using 2nd- and 3rd-order Runge-Kutta methods.

2. Repeat the time-march comparisons for diffusion (Section 8.4.2) and periodic convection (Section 8.4.3) using the 3rd- and 4th-order Adams-Bashforth methods. Considering the stability bounds for these methods (see problem 4 in Chapter 7) as well as their memory requirements, compare and contrast them with the 3rd- and 4th-order Runge-Kutta methods.

3. Consider the diffusion equation (with ν = 1) discretized using 2nd-order central differences on a grid with 10 (interior) points. Find and plot the eigenvalues and the corresponding modified wavenumbers. If we use the explicit Euler time-marching method, what is the maximum allowable time step if all but the first two eigenvectors are considered parasitic? Assume that sufficient accuracy is obtained as long as |λh| ≤ 0.1. What is the maximum allowable time step if all but the first eigenvector are considered parasitic?


Chapter 9

RELAXATION METHODS

In the past three chapters, we developed a methodology for designing, analyzing, and choosing time-marching methods. These methods can be used to compute the time-accurate solution to linear and nonlinear systems of ODE's in the general form

    du/dt = F(u, t)        (9.1)

which arise after spatial discretization of a PDE. Alternatively, they can be used to solve for the steady solution of Eq. 9.1, which satisfies the following coupled system of nonlinear algebraic equations:

    F(u) = 0        (9.2)

In the latter case, the unsteady equations are integrated until the solution converges to a steady solution. The same approach permits a time-marching method to be used to solve a linear system of algebraic equations in the form

    A x = b        (9.3)

To solve this system using a time-marching method, a time derivative is introduced as follows

    dx/dt = A x - b        (9.4)

and the system is integrated in time until the transient has decayed to a sufficiently low level. Following a time-dependent path to steady state is possible only if all of the eigenvalues of the matrix A (or -A) have real parts lying in the left half-plane. Although the solution x = A⁻¹ b exists as long as A is nonsingular, the ODE given by Eq. 9.4 has a stable steady solution only if A meets the above condition.
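To make the time-dependent path of Eq. 9.4 concrete, here is a minimal sketch (an illustrative 2×2 system, not from the text) integrated with the explicit Euler method; since A has eigenvalues -1 and -3, both in the left half-plane, the transient decays and the iterate approaches A⁻¹b.

```python
# Illustrative system: dx/dt = A x - b, steady state satisfies A x = b
A = [[-2.0, 1.0], [1.0, -2.0]]
b = [1.0, 0.0]

x = [0.0, 0.0]
h = 0.1                       # within the Euler stability bound |1 + lambda*h| < 1
for _ in range(400):
    r0 = A[0][0] * x[0] + A[0][1] * x[1] - b[0]
    r1 = A[1][0] * x[0] + A[1][1] * x[1] - b[1]
    x = [x[0] + h * r0, x[1] + h * r1]

# The exact steady solution is x = A^{-1} b = [-2/3, -1/3]
```

The slowest transient decays like |1 - h|^n = 0.9^n per step, so 400 steps reduce it far below round-off-visible levels.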

The common feature of all time-marching methods is that they are at least first-order accurate. In this chapter, we consider iterative methods which are not time accurate at all. Such methods are known as relaxation methods. While they are applicable to coupled systems of nonlinear algebraic equations in the form of Eq. 9.2, our analysis will focus on their application to large sparse linear systems of equations in the form

    Ab u - fb = 0        (9.5)

where Ab is nonsingular, and the use of the subscript b will become clear shortly. Such systems of equations arise, for example, at each time step of an implicit time-marching method or at each iteration of Newton's method. Using an iterative method, we seek to obtain rapidly a solution which is arbitrarily close to the exact solution of Eq. 9.5, which is given by

    u∞ = Ab⁻¹ fb        (9.6)

9.1 Formulation of the Model Problem

9.1.1 Preconditioning the Basic Matrix

It is standard practice in applying relaxation procedures to precondition the basic equation. This preconditioning has the effect of multiplying Eq. 9.5 from the left by some nonsingular matrix. In the simplest possible case the conditioning matrix is a diagonal matrix composed of a constant D(b). If we designate the conditioning matrix by C, the problem becomes one of solving for u in

    C Ab u - C fb = 0        (9.7)

Notice that the solution of Eq. 9.7 is

    u = [C Ab]⁻¹ C fb = Ab⁻¹ C⁻¹ C fb = Ab⁻¹ fb        (9.8)

which is identical to the solution of Eq. 9.5, provided C⁻¹ exists. In the following we will see that our approach to the iterative solution of Eq. 9.7 depends crucially on the eigenvalue and eigenvector structure of the matrix C Ab and, equally important, does not depend at all on the eigensystem of the basic matrix Ab. For example, there are well-known techniques for accelerating relaxation schemes if the eigenvalues of C Ab are all real and of the same sign. To use these schemes, the

conditioning matrix C must be chosen such that this requirement is satisfied. A choice of C which ensures this condition is the negative transpose of Ab.

For example, consider a spatial discretization of the linear convection equation using centered differences with a Dirichlet condition on the left side and no constraint on the right side. Using a first-order backward difference on the right side (as in Section 3.6), this leads to the approximation

              1   [  0   1              ]       [ -ua ]
    δx u  =  ---- { [ -1   0   1         ]  u  + [  0  ] }        (9.9)
             2∆x   [     -1   0   1     ]       [  0  ]
                   [         -1   0   1 ]       [  0  ]
                   [             -2   2 ]       [  0  ]

The matrix in Eq. 9.9 has eigenvalues whose imaginary parts are much larger than their real parts. It can first be conditioned so that the modulus of each element is 1. This is accomplished using the diagonal preconditioning matrix

    D = 2∆x · diag(1, 1, 1, 1, 1/2)        (9.10)

which scales each row, giving the matrix

          [  0   1              ]
    A1 =  [ -1   0   1          ]
          [     -1   0   1      ]
          [         -1   0   1  ]
          [             -1   1  ]

We then further condition with multiplication by the negative transpose. The result is

                       [ -1   0   1   0   0 ]
                       [  0  -2   0   1   0 ]
    A2 = -A1ᵀ A1  =    [  1   0  -2   0   1 ]        (9.11)
                       [  0   1   0  -2   1 ]
                       [  0   0   1   1  -2 ]

If we define a permutation matrix P¹ and carry out the process Pᵀ[-A1ᵀ A1]P (which just reorders the elements of A1 and doesn't change the eigenvalues), choosing P to reorder the unknowns as (u1, u3, u5, u2, u4), we find

    [ -1   1   0   0   0 ]
    [  1  -2   1   0   0 ]
    [  0   1  -2   0   1 ]        (9.12)
    [  0   0   0  -2   1 ]
    [  0   0   1   1  -2 ]

which has all negative real eigenvalues, as given in Appendix B. Thus even when the basic matrix Ab has nearly imaginary eigenvalues, the conditioned matrix -Abᵀ Ab is nevertheless symmetric negative definite (i.e., symmetric with negative real eigenvalues), and the classical relaxation methods can be applied. We do not necessarily recommend the use of -Abᵀ as a preconditioner; we simply wish to show that a broad range of matrices can be preconditioned into a form suitable for our analysis.

¹A permutation matrix (defined as a matrix with exactly one 1 in each row and column, which has the property that Pᵀ = P⁻¹) just rearranges the rows and columns of a matrix.

9.1.2 The Model Equations

Preconditioning processes such as those described in the last section allow us to prepare our algebraic equations in advance so that certain eigenstructures are guaranteed. In the remainder of this chapter, we will thoroughly investigate some simple equations which model these structures. We will consider the preconditioned system of equations having the form

    A φ - f = 0        (9.13)

where A is symmetric negative definite.² The symbol for the dependent variable has been changed to φ as a reminder that the physics being modeled is no longer time accurate when we later deal with ODE formulations. Note that the solution of Eq. 9.13, φ = A⁻¹ f, is guaranteed to exist because A is nonsingular. In the notation of Eqs. 9.5 and 9.7,

    A = C Ab   and   f = C fb        (9.14)

²We use a symmetric negative definite matrix to simplify certain aspects of our analysis. Relaxation methods are applicable to more general matrices. The classical methods will usually converge if Ab is diagonally dominant, as defined in Appendix A.

The above was written to treat the general case. It is instructive in formulating the concepts to consider the special case given by the diffusion equation in one dimension with unit diffusion coefficient ν:

    ∂u/∂t = ∂²u/∂x² - g(x)        (9.15)

This has the steady-state solution

    ∂²u/∂x² = g(x)        (9.16)

which is the one-dimensional form of the Poisson equation. Introducing the three-point central differencing scheme for the second derivative with Dirichlet boundary conditions, we find

    du/dt = (1/∆x²) [B(1,-2,1) u + (bc)] - g        (9.17)

where (bc) contains the boundary conditions and g contains the values of the source term at the grid nodes. In this case

    Ab = (1/∆x²) B(1,-2,1),   fb = g - (1/∆x²)(bc)        (9.18)

Choosing C = ∆x² I, we obtain

    B(1,-2,1) φ = f        (9.19)

where f = ∆x² fb. If we consider a Dirichlet boundary condition on the left side and either a Dirichlet or a Neumann condition on the right side, then A has the form

    A = B(1, b, 1),   b = [-2, -2, ..., -2, s]ᵀ,   s = -2 or -1        (9.20)

Note that s = -1 is easily obtained from the matrix resulting from the Neumann boundary condition given in Eq. 3.24 using a diagonal conditioning matrix. Although the analysis is carried out here for the one-dimensional case, the approach is such that it is directly applicable to two- and three-dimensional problems. A tremendous amount of insight into the basic features of relaxation is gained by an appropriate study of the one-dimensional case, and much of the remaining material is devoted to this case.

9.2 Classical Relaxation

9.2.1 The Delta Form of an Iterative Scheme

We will consider relaxation methods which can be expressed in the following delta form:

    H [φn+1 - φn] = A φn - f        (9.21)

where H is some nonsingular matrix which depends upon the iterative method. H is independent of n for stationary methods and is a function of n for nonstationary ones. The iteration count is designated by the subscript n or the superscript (n). The converged solution is designated φ∞, so that

    φ∞ = A⁻¹ f        (9.22)

Solving Eq. 9.21 for φn+1 gives

    φn+1 = [I + H⁻¹A] φn - H⁻¹ f = G φn - H⁻¹ f        (9.23)

where

    G ≡ I + H⁻¹A        (9.24)

Hence it is clear that H should lead to a system of equations which is easy to solve, or at least easier to solve than the original system.

9.2.2 The Converged Solution, the Residual, and the Error

The error at the nth iteration is defined as

    en ≡ φn - φ∞ = φn - A⁻¹ f        (9.25)
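As a numerical check on the preconditioning idea, the sketch below (assuming the 5×5 matrix A1 of the example above, with the first-order backward closure on the right boundary) verifies that A2 = -A1ᵀA1 is symmetric negative definite by factoring -A2 with a plain Cholesky decomposition, which succeeds only for symmetric positive definite input.

```python
# A1 as in the centered-difference convection example (assumed reconstruction)
A1 = [[ 0,  1,  0,  0,  0],
      [-1,  0,  1,  0,  0],
      [ 0, -1,  0,  1,  0],
      [ 0,  0, -1,  0,  1],
      [ 0,  0,  0, -1,  1]]

n = len(A1)
# M = A1^T A1, so A2 = -M; M is positive definite iff A1 is nonsingular
M = [[sum(A1[k][i] * A1[k][j] for k in range(n)) for j in range(n)]
     for i in range(n)]

def cholesky(S):
    """Plain Cholesky factor L with S = L L^T; fails if S is not SPD."""
    L = [[0.0] * n for _ in range(n)]
    for i in range(n):
        for j in range(i + 1):
            s = sum(L[i][k] * L[j][k] for k in range(j))
            if i == j:
                d = S[i][i] - s
                if d <= 0:
                    raise ValueError("not positive definite")
                L[i][i] = d ** 0.5
            else:
                L[i][j] = (S[i][j] - s) / L[j][j]
    return L

L = cholesky(M)   # succeeding here shows A2 = -M is symmetric negative definite
```

The factorization succeeds, confirming that multiplication by the negative transpose turns a matrix with dominantly imaginary eigenvalues into one to which the classical relaxation methods apply.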

where φ∞ was defined in Eq. 9.22. The residual at the nth iteration is defined as

    rn ≡ A φn - f        (9.26)

Multiply Eq. 9.25 by A from the left, and use the definition in Eq. 9.26. There results the relation between the error and the residual:

    A en - rn = 0        (9.27)

Finally, it is not difficult to show that

    en+1 = G en        (9.28)

Consequently, G is referred to as the basic iteration matrix, and its eigenvalues, which we designate as σm, determine the convergence rate of a method.

In all of the above, we have considered only what are usually referred to as stationary processes, in which H is constant throughout the iterations. Nonstationary processes, in which H (and possibly C) is varied at each iteration, are discussed in Section 9.5.

9.2.3 The Classical Methods

Point Operator Schemes in One Dimension

Let us consider three classical relaxation procedures for our model equation

    B(1,-2,1) φ = f        (9.29)

as given in Section 9.1. The Point-Jacobi method is expressed in point operator form for the one-dimensional case as

    φj^(n+1) = (1/2) [φj-1^(n) + φj+1^(n) - fj]        (9.30)

This operator comes about by choosing the value of φj^(n+1) such that, together with the old values of φj-1 and φj+1, the jth row of Eq. 9.29 is satisfied. The Gauss-Seidel method is

    φj^(n+1) = (1/2) [φj-1^(n+1) + φj+1^(n) - fj]        (9.31)

This operator is a simple extension of the point-Jacobi method which uses the most recent update of φj-1. Hence the jth row of Eq. 9.29 is satisfied using the new values of φj and φj-1 and the old value of φj+1. The method of successive overrelaxation (SOR) is based on the idea that if the correction produced by the Gauss-Seidel method tends to move the solution toward φ∞, then perhaps it would be better to move further in this direction. It is usually expressed in two steps as

    φ̃j = (1/2) [φj-1^(n+1) + φj+1^(n) - fj]

    φj^(n+1) = φj^(n) + ω [φ̃j - φj^(n)]        (9.32)

where ω generally lies between 1 and 2, but it can also be written in the single line

    φj^(n+1) = (ω/2) φj-1^(n+1) + (1 - ω) φj^(n) + (ω/2) φj+1^(n) - (ω/2) fj        (9.33)

The General Form

The general form of the classical methods is obtained by splitting the matrix A in Eq. 9.13 into its diagonal, D, the portion of the matrix below the diagonal, L, and the portion above the diagonal, U, such that

    A = L + D + U        (9.34)

Then the point-Jacobi method is obtained with H = -D, which certainly meets the criterion that it is easy to solve. The Gauss-Seidel method is obtained with H = -(L + D), which, being lower triangular, is also easy to solve.

9.3 The ODE Approach to Classical Relaxation

9.3.1 The Ordinary Differential Equation Formulation

The particular type of delta form given by Eq. 9.21 leads to an interpretation of relaxation methods in terms of solution techniques for coupled first-order ODE's, about which we have already learned a great deal. One can easily see that Eq. 9.21 results from the application of the explicit Euler time-marching method (with h = 1) to the following system of ODE's:

    H dφ/dt = A φ - f        (9.35)

This is equivalent to

    dφ/dt = H⁻¹ C [Ab φ - fb] = H⁻¹ [A φ - f]        (9.36)
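The point operators above are easy to try out. A minimal sketch follows (the model problem B(1,-2,1)φ = f on M = 5 interior points, with f manufactured from a chosen exact solution; the measured Jacobi rate anticipates the cos(π/(M+1)) convergence-rate result derived later in this chapter):

```python
import math

M = 5
phi_exact = [1.0, 2.0, 3.0, 2.0, 1.0]   # chosen solution (illustrative)

def apply_A(p):
    """B(1,-2,1) with zero Dirichlet values outside the grid."""
    return [(p[j-1] if j > 0 else 0.0) - 2.0 * p[j] +
            (p[j+1] if j < M - 1 else 0.0) for j in range(M)]

f = apply_A(phi_exact)                  # manufactured right-hand side

def jacobi_sweep(p):                    # Eq. 9.30: all old values
    return [0.5 * ((p[j-1] if j > 0 else 0.0) +
                   (p[j+1] if j < M - 1 else 0.0) - f[j]) for j in range(M)]

def gauss_seidel_sweep(p):              # Eq. 9.31: newest values to the left
    p = p[:]
    for j in range(M):
        p[j] = 0.5 * ((p[j-1] if j > 0 else 0.0) +
                      (p[j+1] if j < M - 1 else 0.0) - f[j])
    return p

def err(p):
    return max(abs(p[j] - phi_exact[j]) for j in range(M))

p, errs = [0.0] * M, []
for _ in range(60):
    p = jacobi_sweep(p)
    errs.append(err(p))
rate = (errs[39] / errs[19]) ** (1.0 / 20.0)   # per-sweep error ratio

g = [0.0] * M
for _ in range(60):
    g = gauss_seidel_sweep(g)
```

Both iterations converge to the manufactured solution, Gauss-Seidel roughly twice as fast per sweep, and the measured Jacobi rate matches cos(π/6) ≈ 0.866 for M = 5.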

In the special case where H⁻¹A depends on neither φ nor t, and H⁻¹f is also independent of t, the solution of Eq. 9.36 can be written as

    φ = c1 e^(λ1 t) x1 + ... + cM e^(λM t) xM + φ∞        (9.37)

where what is referred to in time-accurate analysis as the transient solution is now referred to in relaxation analysis as the error. It is clear that, if all of the eigenvalues of H⁻¹A have negative real parts (which implies that H⁻¹A is nonsingular), then the system of ODE's has a steady-state solution which is approached as t → ∞, given by

    φ∞ = A⁻¹ f        (9.38)

which is the solution of Eq. 9.13. We see that the goal of a relaxation method is to remove the transient solution from the general solution in the most efficient way possible. The eigenvalues λ are fixed by the basic matrix in Eq. 9.36, the preconditioning matrix in Eq. 9.7, and the secondary conditioning matrix H in Eq. 9.35; since H and C in Eq. 9.36 are independent of t, they are not changed throughout the iteration process. The initial amplitudes of the eigenvectors are given by the magnitudes of the cm; these are fixed by the initial guess. In general it is assumed that any or all of the eigenvectors could have been given an equally "bad" excitation by the initial guess, so that we must devise a way to remove them all from the general solution on an equal basis. Assuming that H⁻¹A has been chosen (that is, an iteration process has been decided upon), the only free choice remaining to accelerate the removal of the error terms is the choice of h. As we shall see, the three classical methods have all been conditioned by the choice of H to have an optimum h equal to 1 for a stationary iteration process. The generalization of this in our approach is to make h, the "time" step, a constant for the entire iteration. Throughout the remaining discussion we will refer to the independent variable t as "time", even though no true time accuracy is involved.

Suppose the explicit Euler method is used for the time integration. For this method σm = 1 + λm h. The eigenvalues σ are fixed for a given h by the choice of time-marching method, and the eigenvectors of H⁻¹A are linearly independent. Hence the numerical solution after n steps of a stationary relaxation method can be expressed as (see Eq. 6.28)

    φn = c1 x1 (1 + λ1 h)^n + ... + cm xm (1 + λm h)^n + ... + cM xM (1 + λM h)^n + φ∞        (9.39)

where the terms preceding φ∞ constitute the error.
is now referred to in relaxation analysis as the error. H and C in Eq. so that we must devise a way to remove them all from the general solution on an equal basis. . 9. The eigenvalues are xed for a given h by the choice of time-marching method. the only free choice remaining to accelerate the removal of the error terms is the choice of h.1A is nonsingular).1A are linearly independent. Suppose the explicit Euler method is used for the time integration.

30.B (1 . since multiplication by the inverse amounts to solving the problem by a direct method without iteration.31 and 9.1 (1 .2.1.40) dt As a start. ~ f (9. TABLE 9. If we impose this constraint and further restrict ourselves to banded tridiagonals with a single constant in each band. ~ n) = B (1 . Now let us study these methods as subsets of ODE as formulated in Section 9. this is not in the spirit of our study. 9. ~ n) = B (1 .29 into the ODE form 9. Insert the model equation 9.32 obey no apparent pattern except that they are easy to implement in a computer code since all of the data required to update the value of one point are explicitly available at the time of the update.42) It is clear that the best choice of H from the point of view of matrix algebra is . However. With this choice of notation the three methods presented in Section 9.35. ! 0)(~ n+1 .3. we are led to 2 B (.43) where and ! are arbitrary.2 1)~ n .2 ODE Form of the Classical Methods ! 0 1 1 Method Equation 6:2:3 6:2:4 6:2:5 1 Point-Jacobi 1 Gauss-Seidel i h 2= 1 + sin M + 1 Optimum SOR .2 1) since then multiplication from the left by .2.3 can be identi ed using the entries in Table 9. ~ f (9.2 1)~ n .3 is that all the elements above the diagonal (or below the diagonal if the sweeps are from right to left) are zero.3.172 CHAPTER 9.B .2 1)~ .41) n+1 = n + h n with a step size. The constraint on H that is in keeping with the formulation of the three methods described in Section 9. 9. Then ~ H d = B (1 . ~ f (9.43 THAT LEAD TO CLASSICAL RELAXATION METHODS 9. We arrive at H (~ n+1 . let us use for the numerical integration the explicit Euler method 0 (9.2 1) gives the correct answer in one step. equal to 1. RELAXATION METHODS The three iterative procedures de ned by Eqs.1: VALUES OF and ! IN EQ. 9.1. h.

1(. 2 0) B (1 . 9. solution of the set of ODE's ~ H d = A~ . 1) (n) + !h (n) .1 j 2 2 j+1 2 j This represents a generalization of the classical relaxation techniques. the three methods are all contained in the following set of ODE's d~ = B . our purpose is to examine the whole procedure as a special subset of the theory of ordinary di erential equations. EIGENSYSTEMS OF THE CLASSICAL METHODS 173 The fact that the values in the tables lead to the methods indicated can be veri ed by simple algebraic manipulation. !h f (9. and ~ 1 A. ~ f (9. However. ~ b = 0 u f (9. ~ f (9. .2 1)~ . The e f three classical methods. ) (n) . The basic equation to be solved came from some time-accurate derivation Ab~ .45) (n+1) = 2 j. and SOR.48) dt This solution has the analytical form ~ n = ~n + ~ 1 e (9.44 and Table 9.1. Gauss-Seidel. (!h .49) where ~ n is the transient. In this light. or steady-state.9.44) dt ! and appear from it in the special case when the explicit Euler method is used for its numerical integration.1~ is the steady-state solution. Point-Jacobi.4.46) This equation is preconditioned in some manner which has the e ect of multiplication by a conditioning matrix C giving A~ .1 j j . The point operator that results from the use of the explicit Euler scheme is ! ! ! (n+1) + ! (h .4 Eigensystems of the Classical Methods The ODE approach to relaxation can be summarized as follows. or error. 9. ~ = 0 f (9. are identi ed for the one-dimensional case by Eq.47) An iterative scheme is developed to nd the converged.

In this section, we use the ODE analysis to find the convergence rates of the three classical methods represented by Eqs. 9.30, 9.31, and 9.32. It is also instructive to inspect the eigenvectors and eigenvalues of the H⁻¹A matrix for the three methods. This amounts to solving the generalized eigenvalue problem

    A x_m = λ_m H x_m    (9.50)

for the special case

    B(1,−2,1) x_m = λ_m B(−β, 2/ω, 0) x_m    (9.51)

The generalized eigensystem for simple tridiagonals is given in Appendix B.2. The three special cases considered below are obtained with a = 1, b = −2, c = 1, d = −β, e = 2/ω, and f = 0. To illustrate the behavior, we take M = 5 for the matrix order. This special case makes the general result quite clear.

Given our assumption that the component of the error associated with each eigenvector is equally likely to be excited, the asymptotic convergence rate is determined by the eigenvalue σ_m of G (≡ I + H⁻¹A) having maximum absolute value. Thus

    Convergence rate ∼ |σ_m|_max,  m = 1, 2, ..., M    (9.52)

**9.4.1 The Point-Jacobi System**

If β = 0 and ω = 1 in Eq. 9.44, the ODE matrix H⁻¹A reduces to simply B(1/2, −1, 1/2). The eigensystem can be determined from Appendix B.1, since both d and f are zero. The eigenvalues are given by the equation

    λ_m = −1 + cos(mπ/(M+1)),  m = 1, 2, ..., M    (9.53)

The σ–λ relation for the explicit Euler method is σ_m = 1 + λ_m h. This relation can be plotted for any h. The plot for h = 1, the optimum stationary case, is shown in Fig. 9.1. For h < 1, the maximum |σ_m| is obtained with m = 1, Fig. 9.2, and for h > 1, the maximum |σ_m| is obtained with m = M, Fig. 9.3. Note that for h > 1.0 (depending on M) there is the possibility of instability, i.e., |σ_m| > 1.0. To obtain the optimal scheme we wish to minimize the maximum |σ_m|; this occurs when h = 1, |σ_1| = |σ_M|, and the best possible convergence rate is achieved:

    |σ_m|_max = cos(π/(M+1))    (9.54)

For M = 40, we obtain |σ_m|_max = 0.9971. Thus after 500 iterations the error content associated with each eigenvector is reduced to no more than 0.23 times its initial level.

Again from Appendix B.1, the eigenvectors of H⁻¹A are given by

    x_{jm} = sin(jmπ/(M+1)),  j = 1, 2, ..., M    (9.55)

This is a very "well-behaved" eigensystem with linearly independent eigenvectors and distinct eigenvalues. The first 5 eigenvectors are simple sine waves. For M = 5, the eigenvectors can be written as

    x1 = [1/2,  √3/2, 1,  √3/2,  1/2]ᵀ
    x2 = [√3/2, √3/2, 0, −√3/2, −√3/2]ᵀ
    x3 = [1, 0, −1, 0, 1]ᵀ    (9.56)
    x4 = [√3/2, −√3/2, 0,  √3/2, −√3/2]ᵀ
    x5 = [1/2, −√3/2, 1, −√3/2,  1/2]ᵀ

The corresponding eigenvalues are, from Eq. 9.53,

    λ1 = −1 + √3/2 ≈ −0.134
    λ2 = −1/2
    λ3 = −1    (9.57)
    λ4 = −3/2
    λ5 = −1 − √3/2 ≈ −1.866

The numerical solution written in full is

    φ_n − φ_∞ = c1[1 − (1 − √3/2)h]ⁿ x1 + c2[1 − (1/2)h]ⁿ x2

              + c3[1 − h]ⁿ x3 + c4[1 − (1 + 1/2)h]ⁿ x4 + c5[1 − (1 + √3/2)h]ⁿ x5    (9.58)

[Figure 9.1: The σ–λ_m h relation for Point-Jacobi, h = 1.0, M = 5. At h = 1 the values of |σ| for m = 1 and m = 5 are equal.]
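The Point-Jacobi eigenvalues and the h = 1 convergence rate above can be checked numerically. A sketch (assuming NumPy; variable names are mine):

```python
import numpy as np

# Check Eqs. 9.53 and 9.54: the Point-Jacobi ODE matrix H^{-1}A = (1/2)B(1,-2,1)
# has eigenvalues lambda_m = -1 + cos(m pi/(M+1)), and with h = 1 the
# convergence rate is |sigma_m|max = cos(pi/(M+1)) -- about 0.9971 for M = 40.
M = 40
A = 0.5*(np.diag(np.ones(M-1), -1) - 2.0*np.eye(M) + np.diag(np.ones(M-1), 1))
lam = np.sort(np.linalg.eigvalsh(A))              # symmetric, so eigvalsh
m = np.arange(1, M + 1)
lam_exact = np.sort(-1.0 + np.cos(m*np.pi/(M + 1)))
sigma_max = np.max(np.abs(1.0 + lam))             # sigma_m = 1 + lambda_m h, h = 1
```

After 500 iterations, `sigma_max**500` is roughly 0.23, matching the statement in the text.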

**9.4.2 The Gauss-Seidel System**

If β and ω are equal to 1 in Eq. 9.44, the matrix eigensystem evolves from the relation

    B(1,−2,1) x_m = λ_m B(−1, 2, 0) x_m    (9.59)

which can be studied using the results in Appendix B.2. One can show that the H⁻¹A matrix for the Gauss-Seidel method, A_GS ≡ B⁻¹(−1,2,0) B(1,−2,1), is

[Figure 9.2: The σ–λ_m h relation for Point-Jacobi, h = 0.9, M = 5. For h < 1.0 the maximum |σ_m| approaches 1.0 as h decreases.]

[Figure 9.3: The σ–λ_m h relation for Point-Jacobi, h = 1.1, M = 5. The maximum |σ_m| exceeds 1.0 at h ≈ 1.072; the scheme is unstable for h > 1.072.]

    A_GS =
        [ −1     1/2                              ]  1
        [  0    −3/4    1/2                       ]  2
        [  0     1/8   −3/4    1/2                ]  3
        [  0    1/16    1/8   −3/4    1/2         ]  4    (9.60)
        [  0    1/32   1/16    1/8   −3/4    1/2  ]  5
        [  ⋮      ·      ·      ·      ·      ·   ]  ⋮
        [  0     ···                        −3/4  ]  M

The eigenvector structure of the Gauss-Seidel ODE matrix is quite interesting. If M is odd there are (M+1)/2 distinct eigenvalues with corresponding linearly independent eigenvectors, and there are (M−1)/2 defective eigenvalues with corresponding principal vectors. The equation for the nondefective eigenvalues in the ODE matrix is (for odd M)

    λ_m = −1 + cos²(mπ/(M+1)),  m = 1, 2, ..., (M+1)/2    (9.61)

and the corresponding eigenvectors are given by

    x_{jm} = [cos(mπ/(M+1))]^{j−1} sin(jmπ/(M+1)),  m = 1, 2, ..., (M+1)/2    (9.62)

The σ–λ relation for h = 1, the optimum stationary case, is shown in Fig. 9.4. The σ_m with the largest amplitude is obtained with m = 1. Hence the convergence rate is

    |σ_m|_max = [cos(π/(M+1))]²    (9.63)

Since this is the square of that obtained for the Point-Jacobi method, the error associated with the "worst" eigenvector is removed in half as many iterations. For M = 40, |σ_m|_max = 0.9942, and 250 iterations are required to reduce the error component of the worst eigenvector by a factor of roughly 0.23.

The eigenvectors are quite unlike the Point-Jacobi set. They are no longer symmetrical, producing waves that are higher in amplitude on one side (the updated side) than they are on the other. Furthermore, they do not represent a common family for different values of M. The Jordan canonical form for M = 5 is

    X⁻¹ A_GS X = J_GS = blockdiag( [λ̃1],  [λ̃2],  [ λ̃3  1   0  ]
                                                  [ 0   λ̃3  1  ] )    (9.64)
                                                  [ 0   0   λ̃3 ]
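The Gauss-Seidel eigensystem for M = 5 can be verified numerically. A sketch (assuming NumPy; defective eigenvalues are slightly perturbed by roundoff, which does not affect the convergence rate):

```python
import numpy as np

# A_GS = B^{-1}(-1,2,0) B(1,-2,1) for M = 5 should have distinct eigenvalues
# -1/4 and -3/4 (Eq. 9.61 with m = 1, 2) plus a triple defective eigenvalue
# at -1, giving |sigma|max = cos^2(pi/6) = 3/4 with h = 1.
M = 5
T = np.diag(np.ones(M-1), -1) - 2.0*np.eye(M) + np.diag(np.ones(M-1), 1)
L = 2.0*np.eye(M) - np.diag(np.ones(M-1), -1)      # B(-1, 2, 0)
A_gs = np.linalg.solve(L, T)
lam = np.linalg.eigvals(A_gs)
sigma_max = np.max(np.abs(1.0 + lam))
```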

[Figure 9.4: The σ–λ_m h relation for Gauss-Seidel, h = 1.0, M = 5. Two of the λ_m are defective.]

The eigenvectors and principal vectors are all real. For M = 5 they can be written

    x1 = [1/2, 3/4, 3/4, 9/16, 9/32]ᵀ
    x2 = [√3/2, √3/4, 0, −√3/16, −√3/32]ᵀ
    x3 = [1, 0, 0, 0, 0]ᵀ    (9.65)
    x4 = [0, 2, −1, 0, 0]ᵀ
    x5 = [0, 0, 4, −4, 1]ᵀ

The corresponding eigenvalues are

    λ1 = −1/4,  λ2 = −3/4,  λ3 = −1  ((4) and (5) defective, linked to λ3 in a Jordan block)    (9.66)

The numerical solution written in full is thus

    φ_n − φ_∞ = c1(1 − h/4)ⁿ x1 + c2(1 − 3h/4)ⁿ x2

              + [ c3(1 − h)ⁿ + c4 (h n/1!)(1 − h)^{n−1} + c5 (h² n(n−1)/2!)(1 − h)^{n−2} ] x3
              + [ c4(1 − h)ⁿ + c5 (h n/1!)(1 − h)^{n−1} ] x4
              + c5(1 − h)ⁿ x5    (9.67)

**9.4.3 The SOR System**

If β = 1 and 2/ω ≡ x in Eq. 9.44, the ODE matrix is B⁻¹(−1, x, 0) B(1,−2,1). One can show that this can be written in the form given below for M = 5; the generalization to any M is fairly clear. The H⁻¹A matrix for the SOR method, A_SOR ≡ B⁻¹(−1, x, 0) B(1,−2,1), is

    A_SOR = (1/x⁵) ·
        [ −2x⁴        x⁴           0            0           0      ]
        [ −2x³+x⁴     x³−2x⁴       x⁴           0           0      ]
        [ −2x²+x³     x²−2x³+x⁴    x³−2x⁴       x⁴          0      ]    (9.68)
        [ −2x+x²      x−2x²+x³     x²−2x³+x⁴    x³−2x⁴      x⁴     ]
        [ −2+x        1−2x+x²      x−2x²+x³     x²−2x³+x⁴   x³−2x⁴ ]

Eigenvalues of the system are given by

    λ_m = −1 + [ (ω p_m + z_m)/2 ]²,  m = 1, 2, ..., M    (9.69)

where

    z_m = [4(1 − ω) + ω²p_m²]^{1/2},   p_m = cos[mπ/(M+1)]

If ω = 1, the system is Gauss-Seidel. If 4(1 − ω) + ω²p_m² < 0, z_m and λ_m are complex. If ω is chosen such that 4(1 − ω) + ω²p_1² = 0, ω is optimum for the stationary case, and the following conditions hold:

1. Two eigenvalues are real, equal, and defective.
2. If M is even, the remaining eigenvalues are complex and occur in conjugate pairs.
3. If M is odd, one of the remaining eigenvalues is real and the others are complex, occurring in conjugate pairs.

One can easily show that the optimum ω for the stationary case is

    ω_opt = 2 / [1 + sin(π/(M+1))]    (9.70)

Using the explicit Euler method to integrate the ODE's with h = 1, one finds that for optimum stationary SOR all of the |σ_m| are identical and equal to ω_opt − 1. Hence the convergence rate is

    |σ_m|_max = ω_opt − 1,   ω_opt = 2 / [1 + sin(π/(M+1))]    (9.71)

For M = 40, |σ_m|_max = 0.8578. Hence the worst error component is reduced to less than 0.23 times its initial value in only 10 iterations, much faster than both Gauss-Seidel and Point-Jacobi. In practical applications, the optimum value of ω may have to be determined by trial and error, and the benefit may not be as great.

For odd M, there are two real eigenvectors and one real principal vector; the remaining linearly independent eigenvectors are all complex. For M = 5, with ω = ω_opt = 4/3, they can be written out explicitly (Eq. 9.73). The corresponding eigenvalues are

    λ1 = −2/3  ((2) defective, linked to λ1),   λ3 = −(10 − 2√2 i)/9,   λ4 = −(10 + 2√2 i)/9,   λ5 = −4/3    (9.74)
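The optimum-ω results above can be checked numerically. A sketch (assuming NumPy; the helper name `sor_sigmas` is mine) builds H⁻¹A with ω = ω_opt and confirms that every |σ_m| equals ω_opt − 1:

```python
import numpy as np

# Optimum SOR, Eqs. 9.70-9.71: with omega = 2/(1 + sin(pi/(M+1))), every
# eigenvalue sigma_m = 1 + lambda_m (h = 1) has modulus omega - 1.
def sor_sigmas(M):
    omega = 2.0/(1.0 + np.sin(np.pi/(M + 1)))
    T = np.diag(np.ones(M-1), -1) - 2.0*np.eye(M) + np.diag(np.ones(M-1), 1)
    H = (2.0/omega)*np.eye(M) - np.diag(np.ones(M-1), -1)   # B(-1, 2/omega, 0)
    lam = np.linalg.eigvals(np.linalg.solve(H, T))
    return omega, np.abs(1.0 + lam)

omega5, s5 = sor_sigmas(5)       # omega_opt = 4/3, all |sigma| = 1/3
omega40, s40 = sor_sigmas(40)    # |sigma|max = 0.8578, as quoted in the text
```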

The numerical solution written in full is

    φ_n − φ_∞ = c1(1 − 2h/3)ⁿ x1 + c2 n h (1 − 2h/3)^{n−1} x2
              + c3[1 − (10 − 2√2 i)h/9]ⁿ x3 + c4[1 − (10 + 2√2 i)h/9]ⁿ x4
              + c5(1 − 4h/3)ⁿ x5    (9.75)

[Figure 9.5: The σ–λ_m h relation for optimum stationary SOR, h = 1.0, M = 5. Two real defective λ_m, two complex λ_m, one real λ_m.]

**9.5 Nonstationary Processes**

In classical terminology a method is said to be nonstationary if the conditioning matrices, H and C, are varied at each time step. This does not change the steady-state solution A_b⁻¹f_b, but it can greatly affect the convergence rate. In our ODE approach this could also be considered; it would lead to a study of equations with nonconstant coefficients. It is much simpler, however, to study the case of fixed H and C but variable step size, h. This process changes the Point-Jacobi method to Richardson's method in standard terminology. For the Gauss-Seidel and SOR methods it leads to processes that can be superior to the stationary methods. The nonstationary form of Eq. 9.39 is

    φ_N = c1 x1 ∏_{n=1}^{N}(1 + λ1 h_n) + ··· + c_m x_m ∏_{n=1}^{N}(1 + λ_m h_n) + ··· + c_M x_M ∏_{n=1}^{N}(1 + λ_M h_n) + φ_∞    (9.76)

where the symbol ∏ stands for product. Since h_n can now be changed at each step, the error term can theoretically be completely eliminated in M steps by taking h_m = −1/λ_m, for m = 1, 2, ..., M. However, the eigenvalues λ_m are generally unknown and costly to compute. It is therefore unnecessary and impractical to set h_m = −1/λ_m for m = 1, 2, ..., M. We will see that a few well-chosen h's can reduce whole clusters of eigenvectors associated with nearby λ's in the λ_m spectrum. This leads to the concept of selectively annihilating clusters of eigenvectors from the error terms as part of a total iteration process. This is the basis for the multigrid methods discussed in Chapter 10.

Let us consider the very important case when all of the λ_m are real and negative (remember that they arise from a conditioned matrix, so this constraint is not unrealistic for quite practical cases). Consider one of the error terms taken from

    e_N ≡ φ_N − φ_∞ = ∑_{m=1}^{M} c_m x_m ∏_{n=1}^{N}(1 + λ_m h_n)    (9.77)

and write it in the form

    c_m x_m P_e(λ_m) ≡ c_m x_m ∏_{n=1}^{N}(1 + λ_m h_n)    (9.78)

where P_e signifies an "Euler" polynomial. Now focus attention on the polynomial

    (P_e)_N(λ) = (1 + h_1 λ)(1 + h_2 λ) ··· (1 + h_N λ)    (9.79)

treating it as a continuous function of the independent variable λ. In the annihilation process mentioned above, we considered making the error exactly zero by taking advantage of some knowledge about the discrete values of λ_m for a particular case. Now we pose a less demanding problem. Let us choose the h_n so that the maximum value of (P_e)_N(λ) is as small as possible for all λ lying between λ_a and λ_b such that λ_b ≤ λ ≤ λ_a ≤ 0. Mathematically stated, we seek

    max_{λ_b ≤ λ ≤ λ_a} |(P_e)_N(λ)| = minimum,  with (P_e)_N(0) = 1    (9.80)

This problem has a well-known solution due to Markov. It is

    (P_e)_N(λ) = T_N( (2λ − λ_a − λ_b)/(λ_a − λ_b) ) / T_N( (−λ_a − λ_b)/(λ_a − λ_b) )    (9.81)

where

    T_N(y) = cos(N arccos y)    (9.82)

are the Chebyshev polynomials along the interval −1 ≤ y ≤ 1, and

    T_N(y) = ½[y + √(y² − 1)]^N + ½[y − √(y² − 1)]^N    (9.83)

are the Chebyshev polynomials for |y| > 1. In relaxation terminology this is generally referred to as Richardson's method, and it leads to the nonstationary step size choice given by

    1/h_n = ½{ −(λ_b + λ_a) + (λ_b − λ_a) cos[(2n − 1)π/(2N)] },  n = 1, 2, ..., N    (9.84)

Remember that all λ are negative real numbers representing the magnitudes of λ_m in an eigenvalue spectrum. The error in the relaxation process represented by Eq. 9.76 is expressed in terms of a set of eigenvectors, x_m, amplified by the coefficients c_m ∏(1 + λ_m h_n). Eq. 9.84 gives us the best choice of a series of h_n that will minimize the amplitude of the error carried in the eigenvectors associated with the eigenvalues between λ_b and λ_a.

As an example of the use of Eq. 9.84, let us consider the following problem: minimize the maximum error associated with the eigenvalues in the interval −2 ≤ λ ≤ −1 using only 3 iterations. The three values of h which satisfy this problem are

    h_n = 2 / { 3 − cos[(2n − 1)π/6] }    (9.85)

and the amplitude of the eigenvector is reduced to

    (P_e)_3(λ) = T_3(2λ + 3)/T_3(3)    (9.87)

where

    T_3(3) = ½{ [3 + √8]³ + [3 − √8]³ } = 99    (9.88)

A plot of Eq. 9.87 is given in Fig. 9.6, and we see that the amplitudes of all the eigenvectors associated with the eigenvalues in the range −2 ≤ λ ≤ −1 have been reduced to less than about 1% of their initial values. The values of h used in Fig. 9.6 are

    h_1 = 4/(6 − √3),   h_2 = 4/6,   h_3 = 4/(6 + √3)

Return now to Eq. 9.76. This was derived from Eq. 9.37 on the condition that the explicit Euler method, Eq. 9.41, was used to integrate the basic ODE's. If instead the implicit trapezoidal rule

    φ_{n+1} = φ_n + ½h(φ'_{n+1} + φ'_n)    (9.89)

is used, the nonstationary formula

    φ_N = ∑_{m=1}^{M} c_m x_m ∏_{n=1}^{N} [ (1 + ½λ_m h_n)/(1 − ½λ_m h_n) ] + φ_∞    (9.90)

would result. This calls for a study of the rational "trapezoidal" polynomial, P_t:

    (P_t)_N(λ) = ∏_{n=1}^{N} [ (1 + ½λ h_n)/(1 − ½λ h_n) ]    (9.91)

under the same constraints as before, namely that

    max_{λ_b ≤ λ ≤ λ_a} |(P_t)_N(λ)| = minimum,  with (P_t)_N(0) = 1    (9.92)

[Figure 9.6: Richardson's method for 3 steps; minimization over −2 ≤ λ ≤ −1. The lower plot shows (P_e)_3(λ) magnified.]
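The three Richardson steps can be reproduced directly from Eq. 9.84. A sketch (assuming NumPy and the equation numbering used above):

```python
import numpy as np

# Richardson step sizes, Eq. 9.84, for lambda in [-2, -1] with N = 3, and the
# resulting Euler polynomial (P_e)_3 of Eq. 9.79.  The maximum of |(P_e)_3|
# on the interval should be 1/T_3(3) = 1/99, i.e. about 1%.
lam_b, lam_a, N = -2.0, -1.0, 3
n = np.arange(1, N + 1)
h = 2.0/(-(lam_b + lam_a) + (lam_b - lam_a)*np.cos((2*n - 1)*np.pi/(2*N)))
lam = np.linspace(lam_b, lam_a, 2001)
Pe = np.prod([1.0 + hn*lam for hn in h], axis=0)
```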

The optimum values of h can also be found for this problem, but we settle here for the approximation suggested by Wachspress:

    h_n = −(2/λ_b)(λ_b/λ_a)^{(n−1)/(N−1)},  n = 1, 2, ..., N    (9.93)

This process is also applied to problem 9.92. The results for (P_t)_3(λ) are shown in Fig. 9.7. The error amplitude is about 1/5 of that found for (P_e)_3(λ) in the same interval of λ. The values of h used in Fig. 9.7 are

    h_1 = 1,   h_2 = √2,   h_3 = 2

**9.6 Problems**

1. Given a relaxation method in the form

    H Δφ_n = Aφ_n − f

show that

    φ_n = Gⁿφ_0 + (I − Gⁿ)A⁻¹f

where G ≡ I + H⁻¹A.

2. For a linear system of the form (A_1 + A_2)x = b, consider the iterative method

    (I + μA_1) x̃ = (I − μA_2) x_n + μb
    (I + μA_2) x_{n+1} = (I − μA_1) x̃ + μb

where μ is a parameter. Show that this iterative method can be written in the form

    H(x_{k+1} − x_k) = (A_1 + A_2)x_k − b

Determine the iteration matrix G if μ = −1/2.

3. Using Appendix B.2, find the eigenvalues of H⁻¹A for the SOR method with A = B(4 : 1, −2, 1) and ω = ω_opt. (You do not have to find H⁻¹A; recall that the eigenvalues λ_m of H⁻¹A satisfy A x_m = λ_m H x_m.) Find the numerical values, not just the expressions. Then find the corresponding |σ_m| values.

[Figure 9.7: Wachspress method for 3 steps; minimization over −2 ≤ λ ≤ −1. The lower plot shows (P_t)_3(λ) magnified.]

4. Solve the following equation on the domain 0 ≤ x ≤ 1 with boundary conditions u(0) = 0, u(1) = 1:

    ∂²u/∂x² − 6x = 0

For the initial condition, use u(x) = 0. Use second-order centered differences on a grid with 40 cells (M = 39). Iterate to steady state using (a) the point-Jacobi method, (b) the Gauss-Seidel method, (c) the SOR method with the optimum value of ω, and (d) the 3-step Richardson method derived in Section 9.5. Plot the solution after the residual is reduced by 2, 3, and 4 orders of magnitude. Plot the logarithm of the L₂-norm of the residual vs. the number of iterations. Determine the asymptotic convergence rate. Compare with the theoretical asymptotic convergence rate.
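For readers who want to experiment, a minimal NumPy setup for this problem might look like the following (a sketch, not the assigned solution; part (a) only):

```python
import numpy as np

# Problem 4 setup: u'' - 6x = 0 on 0 <= x <= 1, u(0) = 0, u(1) = 1, with
# second-order centered differences on a 40-cell grid (M = 39 interior points).
# The exact solution is u = x^3; since the centered difference of a cubic is
# exact, the discrete steady state equals x^3 at the nodes.
M = 39
dx = 1.0/(M + 1)
x = dx*np.arange(1, M + 1)
u = np.zeros(M)                               # initial condition u(x) = 0
for _ in range(20000):                        # point-Jacobi relaxation
    left = np.concatenate(([0.0], u[:-1]))    # boundary u(0) = 0
    right = np.concatenate((u[1:], [1.0]))    # boundary u(1) = 1
    u = 0.5*(left + right - dx*dx*6.0*x)
```

Plugging in the other smoothers from this chapter only changes the update inside the loop.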


**Chapter 10 MULTIGRID**

The idea of systematically using sets of coarser grids to accelerate the convergence of iterative schemes that arise from the numerical solution of partial differential equations was made popular by the work of Brandt. There are many variations of the process and many viewpoints of the underlying theory. The viewpoint presented here is a natural extension of the concepts discussed in Chapter 9.

**10.1 Motivation**

**10.1.1 Eigenvector and Eigenvalue Identification with Space Frequencies**

Consider the eigensystem of the model matrix B(1,−2,1). The eigenvalues and eigenvectors are given in Sections 4.3.2 and 4.3.3, respectively. Notice that as the magnitudes of the eigenvalues increase, the space-frequency (or wavenumber) of the corresponding eigenvectors also increases. That is, if the eigenvalues are ordered such that

    |λ_1| ≤ |λ_2| ≤ ··· ≤ |λ_M|    (10.1)

then the corresponding eigenvectors are ordered from low to high space frequencies. This has a rational explanation from the origin of the banded matrix. Note that

    ∂²/∂x² [sin(mx)] = −m² sin(mx)    (10.2)

and recall that

    (1/Δx²) B(1,−2,1)φ = X [ (1/Δx²) D(λ̃) ] X⁻¹ φ    (10.3)

where D(λ̃) is a diagonal matrix containing the eigenvalues. We have seen that X⁻¹φ represents a sine transform, and Xφ̃ a sine synthesis. Therefore, the operation (1/Δx²)D(λ̃) represents the numerical approximation of the multiplication of the appropriate sine wave by the negative square of its wavenumber, −m². One finds that

    (1/Δx²) λ_m = [−2 + 2cos(mπ/(M+1))]/Δx² ≈ −m²,  m << M    (10.4)

Hence, the correlation of large magnitudes of λ_m with high space-frequencies is to be expected for these particular matrix operators. This is consistent with the physics of diffusion as well. However, this correlation is not necessary in general. In fact, the complete counterexample of the above association is contained in the eigensystem for B(1/2, 1, 1/2). For this matrix one finds, from Appendix B, exactly the opposite behavior.

**10.1.2 Properties of the Iterative Method**

The second key motivation for multigrid is the following: many iterative methods reduce error components corresponding to eigenvalues of large amplitude more effectively than those corresponding to eigenvalues of small amplitude. This is to be expected of an iterative method which is time accurate. It is also true, for example, of the Gauss-Seidel method and, by design, of the Richardson method described in Section 9.5. The classical point-Jacobi method does not share this property; as we saw in Section 9.4.1, this method produces the same value of |σ| for λ_min and λ_max. However, the property can be restored by using h < 1, as shown in Fig. 9.2.

When an iterative method with this property is applied to a matrix with the above correlation between the modulus of the eigenvalues and the space frequency of the eigenvectors, error components corresponding to high space frequencies will be reduced more quickly than those corresponding to low space frequencies. This is the key concept underlying the multigrid process.

**10.2 The Basic Process**

First of all we assume that the difference equations representing the basic partial differential equations are in a form that can be related to a matrix which has certain basic properties.
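The eigenvalue/space-frequency correlation can be illustrated numerically. A sketch (assuming NumPy; sign changes of the eigenvector components serve as a proxy for space frequency):

```python
import numpy as np

# Illustration of the ordering in Eq. 10.1 for B(1,-2,1): when the modes are
# sorted by |lambda_m|, the number of sign changes of the corresponding
# eigenvectors increases monotonically -- low |lambda| means low frequency.
M = 10                                   # M + 1 = 11, so no component is zero
A = np.diag(np.ones(M-1), -1) - 2.0*np.eye(M) + np.diag(np.ones(M-1), 1)
lam, X = np.linalg.eigh(A)
order = np.argsort(np.abs(lam))          # smallest |lambda| first
crossings = [int(np.sum(np.diff(np.sign(X[:, i])) != 0)) for i in order]
```

For the counterexample B(1/2, 1, 1/2), the same experiment gives the opposite ordering.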
We have seen that X . This is to be expected of an iterative method which is time accurate. as shown in Fig.

we are led to the starting formulation C Ab ~ 1 .7) where A CAb (10.6 is used. 2 These conditions are su cient to ensure the validity of the process described next. 2. 3.2. 4. The eigenvectors associated with the eigenvalues having largest magnitudes can be correlated with high frequencies on the di erencing mesh. The iterative procedure used greatly reduces the amplitudes of the eigenvectors associated with eigenvalues in the range between 1 j jmax and j jmax. In Eq. r where ~ = C Ab~ . Having preconditioned (if necessary) the basic nite di erencing scheme by a procedure equivalent to the multiplication by a matrix C . ~ b ] = 0 f (10.27. 9. We start with some any. 10. ~ = 0 e r (10.8) .10. as in the example given by Eq. and initial guess for ~ 1 and proceed through n iterations making use of some iterative process that satis es property 5 above. if f ~ 1 is a vector representing the desired exact solution.11. 9. The problem is linear. the residual. This form can be arrived at \naturally" by simply replacing the derivatives in the PDE with di erence schemes. m . 5. The eigenvalues. or it can be \contrived" by further conditioning.6) Recall that the ~ used to compute ~ is composed of the exact solution ~ 1 and the r error ~ in such a way that e A~ . but for clarity we suppose that the three-step Richardson method illustrated in Fig. ~ b] r f (10. At the end of the three steps we nd ~ . as in the examples given by Eq. THE BASIC PROCESS 193 basic properties. 3. The m are fairly evenly distributed between their maximum and minimum values.5. The basic assumptions required for our description of the multigrid process are: 1. of the matrix are all real and negative. We do not attempt to develop an optimum procedure here.5) where the matrix formed by the product CAb has the properties given above. the vector ~ b represents the boundary conditions and the forcing function.

For a 7-point example 2 3 2 32 3 e2 7 6 0 1 0 0 0 0 0 7 6 e1 7 6 e4 7 6 0 0 0 1 0 0 0 7 6 e2 7 6 7 6 6 7 6 6 e6 7 6 0 0 0 0 0 1 0 7 6 e3 7 " ~ # 76 7 6 7 6 ee 6 e1 7 = 6 1 0 0 0 0 0 0 7 6 e4 7 76 7 ~ 6 7 6 (10. and the eigenvalues of the Richardson process in the form: M=2 3 M 3 ~ = X cm~ m Y ( m hn)] + X cm~ m Y ( mhn)] e x x (10. Next we construct a permutation matrix which separates a vector into two parts.1 partitions the A matrix to form 2 3 A1 A2 " ~ # " ~ # 6 7 ee = r e 4 5 ~ ~o r A3 A4 eo A1~ e + A2~ o = ~ e e e r (10.11) ~o = P e 6 e 7 6 0 0 1 0 0 0 0 76 e 7 76 7 e 6 37 6 76 5 7 6 7 6 6 e5 5 4 0 0 0 0 1 0 0 7 6 e6 7 76 7 4 54 5 e7 0 0 0 0 0 0 1 e7 Multiply Eq.10) m=1 n=1 n=1 m=M=2+1 | {z } very low amplitude Combining our basic assumptions. 10. MULTIGRID (10. we can be sure that the high frequency content of ~ has been greatly reduced (about 1% or less of its original value in the initial guess).14) . and the other the even entries of the original vector (or any other appropriate sorting which is consistent with the interpolation approximation to be discussed below). assumption 4 ensures that the error has been smoothed.1P ]~ = P ~ e r (10.12) The operation PAP . since a permutation matrix has an inverse which is its transpose. e In addition.~ e CHAPTER 10. we can write PA P .194 If one could solve Eq. one containing the odd entries.9) Thus our goal now is to solve for ~ .13) Notice that (10. 10.7 for ~ then e ~1 = ~ .7 from the left by P and. We can write the exact solution for ~ in terms of e e the eigenvectors of A.

15) 4 6 2 1 (e + e ) e7 2 6 b It is important to notice that ea and eb represent errors on the boundaries where the error is zero if the boundary conditions are given.14 reduces to A1~ e + A2 A02~ e = ~ e e e r or (10. no operations on r If the boundary conditions are Dirichlet. it is reasonable to suppose that the odd points are the average of the even points. the averaging of ~ is our only approximation.18) (10. no aliasing of these functions can occur in subsequent steps.17) (10.2. ~ e = 0 e r where Ac = A1 + A2 A02] . It is also important to notice that we are dealing with the relation between ~ and ~ so the original boundary conditions and e r forcing function (which are contained in ~ in the basic formulation) no longer appear f in the problem.19) Ac~ e . Hence. in this formulation.10. Since the top half of the frequency spectrum has been removed. It is that there is some connection between ~ e and ~ o brought about by the smoothing property of the Richardson e e relaxation procedure. and one can write for the example case 21 0 03 7 16 60 1 17 A02 = 2 6 1 1 0 7 4 5 0 0 1 (10. For example 1 (e + e ) e1 2 a 2 1 (e + e ) e3 2 2 4 1 (e + e ) e5 or ~ o = A02~ e e e (10. notice that. THE BASIC PROCESS 195 is an exact expression. Finally. 10. At this point we make our one crucial assumption. e ~ are required or justi ed. ea and eb are zero.16) With this approximation Eq.

1 Ac }| { 3 If the boundary conditions are mixed Dirichlet-Neumann. 10. 10. is completely determined by the choice of the permutation matrix and the interpolation approximation. the matrix on the coarse mesh. For Neumann conditions.2 .1]T . the interpolation formula in Eq. 1) xjm = sin j 2M + 1 m=1 2 M (10.15 must be changed.2 .2 6 . However.20) and Eq. 10.1. All of them go through zero on the left (Dirichlet) side.19.1 1=2 7 (10.55 for Dirichlet conditions.2 ::: . It is easy to show that the high space-frequencies still correspond to the eigenvalues with high magnitudes.196 CHAPTER 10.18 gives z }| 2 . eb is equal to eM .21) 5 1=2 . If Neumann conditions are on the left.22) and are illustrated in Fig. all of the properties given in Section 10. 9. 10. and all of them re ect on the right (Neumann) side. If the original A had been B (7 : 1 .2 3 7 7 7 17 2A A 3 7 7 6 1 27 7=4 7 5 7 7 A3 A4 7 7 7 5 (10. in fact. our 7-point example would produce 2 . and.1 = 6 1 6 6 6 6 1 1 6 6 1 1 4 1 1 1 1 1 1 .2 . the eigenvector structure is di erent from that given in Eq. A in the 1-D model equation is B (1 ~ 1) where ~ = . The eigensystem is given by b b Eq. 10. MULTIGRID The form of Ac.2 1).2 { 2 }| 3 z 1 1 7+6 1 1 5 4 A2 1 1 { 3 7 5 z A2 3 z 2 1 }| { 2 1 6 1 1 7 = 6 1=1 6 7 4 .2 4 A1 .1 are met.2 6 . In the particular case illustrated in Fig. the example in Eq.2 .2 .2 6 6 6 .1.23) A02 = 1 6 1 1 0 7 5 24 0 0 2 . ea = e1 .2 4 5 26 1 1 7 1 0 1=2 . B. In the present case they are given by " !# (2m .2 6 6 PAP . When eb = eM .16 changes to 21 0 03 6 7 60 1 17 (10.

we have reduced the problem from B (1 . Recall that our goal is ne mesh to 2 e r ~ . 7 5 5 24 1=2 .2 1)~ = ~ on the e r 1 B (1 . e e In order to examine this relationship.15.26) ~ m = sin j m x m = ." We will continue with Dirichlet boundary conditions for the remainder of this Section. we can compute e using Eq. cos m=1 2 M M +1 M +1 .2 { 2 }| 3 z 7+6 1 1 1 5 4 1 A2 1 1 { 3 7 5 z A2 3 z Ac 2 1 }| { 2 }| { 3 . 10.1 are unchanged (only A4 is modi ed by putting .2 1) the eigenvalues and eigenvectors are m j=1 2 M (10.2. and thus ~ . which will provide us with the solution ~ 1 using Eq. In order to complete the process. At this stage. THE BASIC PROCESS 197 X Figure 10. we must now determine e the relationship between ~ e and ~ .2 .2 4 A1 .1: Eigenvectors for the mixed Dirichlet{Neumann case.1=2 2 0 which gives us what we might have \expected.9.24) 6 6 1 17 4 2 .2 1)~e = ~ e on the next coarser mesh.1 (10. The permutation matrix remains the same and both A1 and A2 in the partitioned matrix PAP . Given ~ e to solve for e e ~o computed on the coarse grid (possibly using even coarser grids). 10.25) For A = B (M : 1 . 2 1 6 1 1 7 = 6 1=1 1=1 1=2 7 (10. we can construct the coarse matrix from }| z 2 6 . we need to consider the eigensystems of A and Ac: A = X X .2 1 .1 in the lower right element). Therefore.10.1 Ac = Xc cXc.

1 . 7 6 . cos M + 1 x j=1 2 M (10. As M increases.0:003649.27) M +1 For example. cos M2+ 1 (~ c)1 = sin j M2+ 1 j = 1 2 x Mc (10. 7 c 6 7 4.1 6 . The eigenvalue and eigenvector corresponding 1 to m = 1 for the matrix Ac = 2 B (Mc 1 . that is. MULTIGRID Based on our assumptions. i.. one can easily see that (~ c)1 coincides with ~ 1 at x x every second point of the latter vector.0:007291 = 1:998 1.1 1 (~ c)1 e x c r c (10.198 CHAPTER 10. ~ = ~ 1 . with M = 51. we obtain ( c)1 = .30) since (~ c)1 contains the even elements of ~ 1.29) 1 and the residual on the coarse grid is ~ e = 1 (~ c)1 r x (10.e.32) 2 1=( ) 3 6 0c 1 7 6 7 = 1Xc 6 .2 1 . In addition. The exact solution on the coarse grid x x satis es ~ e = A.. x Now let us consider the case in which all of the error consists of the eigenvector component ~ 1. 1 = . then Mc = (M .2 1) are ( c)1 = .1~ e = Xc .. If we restrict our attention to odd M .33) . corresponding to ~ 1 = sin j = . the most di cult error mode to eliminate is that with m = 1. 7 4 5 0 (10.1Xc.31) 213 607 6 7 = 1Xc . it contains the even elements of ~ 1.5 0 (10.28) For M = 51 (Mc = 25). 1)=2 is the size of Ac. Then the residual is x e x ~ = A~ 1 = 1~ 1 r x x (10. ( c)1 approaches 2 1.

36) 1 B (M : 1 .2 1) comes from a discretization of the di usion equation.2 1) and the preconditioning matrix C is simply 2 C= x I (10. the matrix A = B (M : 1 . in addition to interpolating ~ e to the ne grid e x e (using Eq. This is equivalent to solving or 1A ~ = ~ e r 2 ce e (10. Thus we see that the process is recursive. THE BASIC PROCESS = ( 1 ~ (xc)1 c)1 199 (10.37) c 4 In our case. we must multiply the result by 2.2 1)~ = ~ ee re (10. but they vary depending on the problem and the user.38) (10. x2 C x2 B (Mc : 1 . in each case the appropriate permutation matrix and the interpolation approximation de ne both the down. 10. 10.2 1) 4 c c (10.39) Applying the discretization on the coarse grid with the same preconditioning matrix as used on the ne grid gives. The reduction can be.2 1) = 1 B (Mc : 1 .37. since xc = 2 x. The details of nding optimum techniques are. The problem to be solved on the coarse grid is the same as that solved on the ne grid. which gives Ab = x2 B (M : 1 . The remaining steps required to complete an entire multigrid process are relatively straightforward.2. carried to even coarser grids before returning to the nest level. and usually is.34) (10. However.and up-going paths. . quite important but they are not discussed here.10.15).40) which is precisely the matrix appearing in Eq.2 1) = x2 B (Mc : 1 .35) 1 (~ ) 2 xc 1 Since our goal is to compute ~ = ~ 1 . obviously.

**10.3 A Two-Grid Process**

We now describe a two-grid process for the linear problem Aφ = f, which can be easily generalized to a process with an arbitrary number of grids due to the recursive nature of multigrid. Extension to nonlinear problems requires that both the solution and the residual be transferred to the coarse grid, in a process known as full approximation storage multigrid.

1. Perform n_1 iterations of the selected relaxation method on the fine grid, starting with φ = φ_n. Call the result φ^{(1)}. This gives

    φ^{(1)} = G_1^{n_1} φ_n + (I − G_1^{n_1}) A_1⁻¹ f    (10.41)

where

    G_1 = I + H_1⁻¹ A_1    (10.42)

and H_1 is defined as in Chapter 9 (e.g., Eq. 9.21). Next compute the residual based on φ^{(1)}:

    r^{(1)} = A φ^{(1)} − f = A G_1^{n_1} φ_n + A(I − G_1^{n_1}) A_1⁻¹ f − f
            = A G_1^{n_1} φ_n − A G_1^{n_1} A_1⁻¹ f    (10.43)

2. Transfer (or restrict) r^{(1)} to the coarse grid:

    r^{(2)} = R_1² r^{(1)}    (10.44)

In our example in the preceding section, the restriction matrix is

    R_1² = [ 0 1 0 0 0 0 0 ]
           [ 0 0 0 1 0 0 0 ]    (10.45)
           [ 0 0 0 0 0 1 0 ]

that is, the first three rows of the permutation matrix P in Eq. 10.11. This type of restriction is known as "simple injection"; some form of weighted restriction can also be used.

3. Solve the problem A_2 ẽ^{(2)} = r^{(2)} on the coarse grid exactly:

    ẽ^{(2)} = A_2⁻¹ r^{(2)}    (10.46)

Note that the coarse-grid matrix denoted A_2 here was denoted A_c in the preceding section.


Here $A_2$ can be formed by applying the discretization on the coarse grid. In the preceding example (Eq. 10.40), $A_2 = \frac{1}{4}B(M_c : 1, -2, 1)$. It is at this stage that the generalization to a multigrid procedure with more than two grids occurs. If this is the coarsest grid in the sequence, solve exactly. Otherwise, apply the two-grid process recursively.

4. Transfer (or prolong) the error back to the fine grid and update the solution:

$\vec{\phi}_{n+1} = \vec{\phi}^{(1)} - I_2^1\,\vec{e}^{(2)}$    (10.47)

In our example, the prolongation matrix is

$I_2^1 = \begin{bmatrix} 1/2 & 0 & 0 \\ 1 & 0 & 0 \\ 1/2 & 1/2 & 0 \\ 0 & 1 & 0 \\ 0 & 1/2 & 1/2 \\ 0 & 0 & 1 \\ 0 & 0 & 1/2 \end{bmatrix}$    (10.48)

which follows from Eq. 10.15. Combining these steps, one obtains

$\vec{\phi}_{n+1} = \left[I - I_2^1 A_2^{-1} R_1^2 A\right]G_1^{n_1}\vec{\phi}_n - \left[I - I_2^1 A_2^{-1} R_1^2 A\right]G_1^{n_1}A^{-1}\vec{f} + A^{-1}\vec{f}$    (10.49)

Thus the basic iteration matrix is

$\left[I - I_2^1 A_2^{-1} R_1^2 A\right]G_1^{n_1}$    (10.50)

The eigenvalues of this matrix determine the convergence rate of the two-grid process. The basic iteration matrix for a three-grid process is found from Eq. 10.50 by replacing $A_2^{-1}$ with $(I - G_2)A_2^{-1}$, where

$G_2 = \left[I - I_3^2 A_3^{-1} R_2^3 A_2\right]G_2^{n_2}$    (10.51)

In this expression $n_2$ is the number of relaxation steps on grid 2, $I_3^2$ and $R_2^3$ are the transfer operators between grids 2 and 3, and $A_3$ is obtained by discretizing on grid 3. Extension to four or more grids proceeds in similar fashion.


10.4 Problems


1. Derive Eq. 10.51.

2. Repeat problem 4 of Chapter 9 using a four-grid multigrid method together with (a) the Gauss-Seidel method, (b) the 3-step Richardson method derived in Section 9.5. Solve exactly on the coarsest grid. Plot the solution after the residual is reduced by 2, 3, and 4 orders of magnitude. Plot the logarithm of the $L_2$-norm of the residual vs. the number of iterations. Determine the asymptotic convergence rate. Calculate the theoretical asymptotic convergence rate and compare.

Chapter 11

NUMERICAL DISSIPATION

Up to this point, we have emphasized the second-order centered-difference approximations to the spatial derivatives in our model equations. We have seen that a centered approximation to a first derivative is nondissipative, i.e., the eigenvalues of the associated circulant matrix (with periodic boundary conditions) are pure imaginary. In processes governed by nonlinear equations, such as the Euler and Navier-Stokes equations, there can be a continual production of high-frequency components of the solution, leading, for example, to the production of shock waves. In a real physical problem, the production of high frequencies is eventually limited by viscosity. However, when we solve the Euler equations numerically, we have neglected viscous effects. Thus the numerical approximation must contain some inherent dissipation to limit the production of high-frequency modes. Although numerical approximations to the Navier-Stokes equations contain dissipation through the viscous terms, this can be insufficient, especially at high Reynolds numbers, due to the limited grid resolution which is practical. Therefore, unless the relevant length scales are resolved, some form of added numerical dissipation is required in the numerical solution of the Navier-Stokes equations as well. Since the addition of numerical dissipation is tantamount to intentionally introducing nonphysical behavior, it must be carefully controlled such that the error introduced is not excessive. In this chapter, we discuss some different ways of adding numerical dissipation to the spatial derivatives in the linear convection equation and hyperbolic systems of PDE's.


11.1 One-Sided First-Derivative Space Differencing

We investigate the properties of one-sided spatial difference operators in the context of the biconvection model equation given by


$\frac{\partial u}{\partial t} = -a\frac{\partial u}{\partial x}$    (11.1)

with periodic boundary conditions. Consider the following point operator for the spatial derivative term

$-a(\delta_x u)_j = \frac{-a}{2\Delta x}\left[-(1+\beta)u_{j-1} + 2\beta u_j + (1-\beta)u_{j+1}\right] = \frac{-a}{2\Delta x}\left[(-u_{j-1} + u_{j+1}) + \beta(-u_{j-1} + 2u_j - u_{j+1})\right]$    (11.2)

The second form shown divides the operator into an antisymmetric component $(-u_{j-1} + u_{j+1})/2\Delta x$ and a symmetric component $\beta(-u_{j-1} + 2u_j - u_{j+1})/2\Delta x$. The antisymmetric component is the second-order centered difference operator. With $\beta \ne 0$, the operator is only first-order accurate. A backward difference operator is given by $\beta = 1$ and a forward difference operator is given by $\beta = -1$. For periodic boundary conditions the corresponding matrix operator is

$-a\delta_x = \frac{-a}{2\Delta x}\,B_p(-1-\beta,\ 2\beta,\ 1-\beta)$

The eigenvalues of this matrix are

$\lambda_m = \frac{-a}{\Delta x}\left[\beta\left(1 - \cos\frac{2\pi m}{M}\right) + i\sin\frac{2\pi m}{M}\right] \quad \text{for } m = 0, 1, \ldots, M-1$

If $a$ is positive, the forward difference operator ($\beta = -1$) produces $\mathrm{Re}(\lambda_m) > 0$, the centered difference operator ($\beta = 0$) produces $\mathrm{Re}(\lambda_m) = 0$, and the backward difference operator produces $\mathrm{Re}(\lambda_m) < 0$. Hence the forward difference operator is inherently unstable while the centered and backward operators are inherently stable. If $a$ is negative, the roles are reversed. When $\mathrm{Re}(\lambda_m) \ne 0$, the solution will either grow or decay with time. In either case, our choice of differencing scheme produces nonphysical behavior. We proceed next to show why this occurs.
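These sign patterns are easy to confirm numerically. The following sketch builds the periodic matrix operator for an assumed grid of $M = 16$ points (the grid size is mine, not the text's) and inspects the real parts of its eigenvalues for the three values of $\beta$:

```python
import numpy as np

def biconvection_op(M, a, dx, beta):
    """Periodic matrix operator (-a/2dx) * B_p(-1-beta, 2beta, 1-beta)."""
    A = np.zeros((M, M))
    for j in range(M):
        A[j, (j - 1) % M] = -(1 + beta)
        A[j, j] = 2 * beta
        A[j, (j + 1) % M] = 1 - beta
    return (-a / (2 * dx)) * A

M, a, dx = 16, 1.0, 0.1   # a > 0
spectra = {b: np.linalg.eigvals(biconvection_op(M, a, dx, b)) for b in (-1, 0, 1)}
```

For $a > 0$, the forward-difference spectrum has positive real parts (unstable), the backward-difference spectrum has nonpositive real parts (damped), and the centered spectrum is purely imaginary.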


11.2 The Modified Partial Differential Equation

First carry out a Taylor series expansion of the terms in Eq. 11.2. We are led to the expression

$(\delta_x u)_j = \frac{1}{2\Delta x}\left[2\Delta x\left(\frac{\partial u}{\partial x}\right)_j - \beta\Delta x^2\left(\frac{\partial^2 u}{\partial x^2}\right)_j + \frac{\Delta x^3}{3}\left(\frac{\partial^3 u}{\partial x^3}\right)_j - \frac{\beta\Delta x^4}{12}\left(\frac{\partial^4 u}{\partial x^4}\right)_j + \cdots\right]$

We see that the antisymmetric portion of the operator introduces odd derivative terms in the truncation error while the symmetric portion introduces even derivatives. Substituting this into Eq. 11.1 gives

$\frac{\partial u}{\partial t} = -a\frac{\partial u}{\partial x} + \frac{a\beta\Delta x}{2}\frac{\partial^2 u}{\partial x^2} - \frac{a\Delta x^2}{6}\frac{\partial^3 u}{\partial x^3} + \frac{a\beta\Delta x^3}{24}\frac{\partial^4 u}{\partial x^4} + \cdots$    (11.3)

This is the partial differential equation we are really solving when we apply the approximation given by Eq. 11.2 to Eq. 11.1. Notice that Eq. 11.3 is consistent with Eq. 11.1, since the two equations are identical when $\Delta x \to 0$. However, when we use a computer to find a numerical solution of the problem, $\Delta x$ can be small but it is not zero. This means that each term in the expansion given by Eq. 11.3 is excited to some degree. We refer to Eq. 11.3 as the modified partial differential equation. We proceed next to investigate the implications of this concept.

Consider the simple linear partial differential equation

$\frac{\partial u}{\partial t} = -a\frac{\partial u}{\partial x} + \nu\frac{\partial^2 u}{\partial x^2} + \gamma\frac{\partial^3 u}{\partial x^3} + \tau\frac{\partial^4 u}{\partial x^4}$    (11.4)

Choose periodic boundary conditions and impose an initial condition $u = e^{i\kappa x}$. Under these conditions there is a wave-like solution to Eq. 11.4 of the form

$u(x, t) = e^{i\kappa x}e^{(r + is)t}$

provided $r$ and $s$ satisfy the condition

$r + is = -i\kappa a - \kappa^2\nu - i\kappa^3\gamma + \kappa^4\tau$

or

$r = -\kappa^2(\nu - \kappa^2\tau), \qquad s = -\kappa(a + \kappa^2\gamma)$

The solution is composed of both amplitude and phase terms. Thus

$u = \underbrace{e^{-\kappa^2(\nu - \kappa^2\tau)t}}_{\text{amplitude}}\ \underbrace{e^{i\kappa[x - (a + \kappa^2\gamma)t]}}_{\text{phase}}$    (11.5)

It is important to notice that the amplitude of the solution depends only upon $\nu$ and $\tau$, the coefficients of the even derivatives in Eq. 11.4, and the phase depends only on $a$ and $\gamma$, the coefficients of the odd derivatives.

If the wave speed $a$ is positive, the choice of a backward difference scheme ($\beta = 1$) produces a modified PDE with $\nu - \kappa^2\tau > 0$ and hence the amplitude of the solution decays. This is tantamount to deliberately adding dissipation to the PDE. Under the same condition, the choice of a forward difference scheme ($\beta = -1$) is equivalent to deliberately adding a destabilizing term to the PDE.

By examining the term governing the phase of the solution in Eq. 11.5, we see that the speed of propagation is $a + \kappa^2\gamma$. Referring to the modified PDE, Eq. 11.3, we have $\gamma = -a\Delta x^2/6$. Therefore, the phase speed of the numerical solution is less than the actual phase speed. Furthermore, the numerical phase speed is dependent upon the wavenumber $\kappa$. This we refer to as dispersion.

Our purpose here is to investigate the properties of one-sided spatial differencing operators relative to centered difference operators. We have seen that the three-point centered difference approximation of the spatial derivative produces a modified PDE that has no dissipation (or amplification). One can easily show, by using the antisymmetry of the matrix difference operators, that the same is true for any centered difference approximation of a first derivative. Therefore, any symmetric component in the spatial operator introduces dissipation (or amplification). As a corollary, any departure from antisymmetry in the matrix difference operator must introduce dissipation (or amplification) into the modified PDE.

Note that the use of one-sided differencing schemes is not the only way to introduce dissipation. For example, one could choose $\beta = 1/2$ in Eq. 11.2. The resulting spatial operator is not one-sided but it is dissipative. Biased schemes use more information on one side of the node than the other. For example, a third-order backward-biased scheme is given by

$(\delta_x u)_j = \frac{1}{6\Delta x}\left(u_{j-2} - 6u_{j-1} + 3u_j + 2u_{j+1}\right) = \frac{1}{12\Delta x}\left[\left(u_{j-2} - 8u_{j-1} + 8u_{j+1} - u_{j+2}\right) + \left(u_{j-2} - 4u_{j-1} + 6u_j - 4u_{j+1} + u_{j+2}\right)\right]$    (11.6)

The antisymmetric component of this operator is the fourth-order centered difference operator. The symmetric component approximates $\Delta x^3 u_{xxxx}/12$. Therefore, this operator produces fourth-order accuracy in phase with a third-order dissipative term.
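Both the equality of the two forms of Eq. 11.6 and the claimed third-order accuracy can be checked directly. In this sketch (periodic grid and test function $\sin x$ are my assumptions) the operator is applied at several resolutions and the observed order of accuracy is computed:

```python
import numpy as np

def biased3(u, dx):
    """Third-order backward-biased operator, first form of Eq. 11.6."""
    um2, um1 = np.roll(u, 2), np.roll(u, 1)
    up1 = np.roll(u, -1)
    return (um2 - 6*um1 + 3*u + 2*up1) / (6*dx)

def biased3_split(u, dx):
    """Second form of Eq. 11.6: fourth-order antisymmetric part plus symmetric part."""
    um2, um1 = np.roll(u, 2), np.roll(u, 1)
    up1, up2 = np.roll(u, -1), np.roll(u, -2)
    anti = um2 - 8*um1 + 8*up1 - up2
    sym = um2 - 4*um1 + 6*u - 4*up1 + up2
    return (anti + sym) / (12*dx)

errs = []
for M in (32, 64, 128):
    dx = 2*np.pi/M
    x = dx*np.arange(M)
    errs.append(np.max(np.abs(biased3(np.sin(x), dx) - np.cos(x))))
orders = [np.log2(errs[i]/errs[i+1]) for i in range(2)]
```

The observed order is close to 3, dominated by the symmetric (dissipative) term, while the phase error alone is fourth order.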

11.3 The Lax-Wendroff Method

In order to introduce numerical dissipation using one-sided differencing, backward differencing must be used if the wave speed is positive, and forward differencing must be used if the wave speed is negative. Next we consider a method which introduces dissipation independent of the sign of the wave speed, known as the Lax-Wendroff method. Consider the following Taylor-series expansion in time:

$u(x, t+h) = u + h\frac{\partial u}{\partial t} + \frac{1}{2}h^2\frac{\partial^2 u}{\partial t^2} + O(h^3)$    (11.7)

First replace the time derivatives with space derivatives according to the PDE (in this case, the linear convection equation $\partial u/\partial t + a\,\partial u/\partial x = 0$). Thus

$\frac{\partial u}{\partial t} = -a\frac{\partial u}{\partial x}, \qquad \frac{\partial^2 u}{\partial t^2} = a^2\frac{\partial^2 u}{\partial x^2}$    (11.8)

Now replace the space derivatives with three-point centered difference operators, giving

$u_j^{(n+1)} = u_j^{(n)} - \frac{1}{2}\frac{ah}{\Delta x}\left(u_{j+1}^{(n)} - u_{j-1}^{(n)}\right) + \frac{1}{2}\left(\frac{ah}{\Delta x}\right)^2\left(u_{j+1}^{(n)} - 2u_j^{(n)} + u_{j-1}^{(n)}\right)$    (11.9)

This is the Lax-Wendroff method applied to the linear convection equation. It is a fully-discrete finite-difference scheme; there is no intermediate semi-discrete stage. This explicit method differs conceptually from the methods considered previously, in which spatial differencing and time-marching are treated separately.

For periodic boundary conditions, the corresponding fully-discrete matrix operator is

$\vec{u}_{n+1} = B_p\!\left(\frac{1}{2}\left[\frac{ah}{\Delta x} + \left(\frac{ah}{\Delta x}\right)^2\right],\ 1 - \left(\frac{ah}{\Delta x}\right)^2,\ \frac{1}{2}\left[-\frac{ah}{\Delta x} + \left(\frac{ah}{\Delta x}\right)^2\right]\right)\vec{u}_n$

The eigenvalues of this matrix are

$\sigma_m = 1 - \left(\frac{ah}{\Delta x}\right)^2\left(1 - \cos\frac{2\pi m}{M}\right) - i\,\frac{ah}{\Delta x}\sin\frac{2\pi m}{M} \quad \text{for } m = 0, 1, \ldots, M-1$

For $|ah/\Delta x| \le 1$ all of the eigenvalues have modulus less than or equal to unity, and hence the method is stable independent of the sign of $a$. The quantity $|ah/\Delta x|$ is known as the Courant (or CFL) number. It is equal to the ratio of the distance travelled by a wave in one time step to the mesh spacing.
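A minimal implementation of Eq. 11.9 is given below; the grid size, Courant number, and initial sine wave are assumptions of mine chosen for illustration. Advecting one low-wavenumber mode shows both the stability at $C_n < 1$ and the mild amplitude loss introduced by the scheme's dissipation:

```python
import numpy as np

def lax_wendroff_step(u, cn):
    """One Lax-Wendroff step for u_t + a u_x = 0 (Eq. 11.9), periodic BCs.
    cn = a*h/dx is the Courant number; valid for either sign of a."""
    up, um = np.roll(u, -1), np.roll(u, 1)
    return u - 0.5*cn*(up - um) + 0.5*cn**2*(up - 2*u + um)

M = 64
dx = 2*np.pi/M
x = dx*np.arange(M)
u = np.sin(x)
cn, nsteps = 0.8, 200
for _ in range(nsteps):
    u = lax_wendroff_step(u, cn)
exact = np.sin(x - nsteps*cn*dx)   # wave advected by nsteps*cn*dx
err = np.max(np.abs(u - exact))
```

The error stays small for the smooth mode, while the $L_2$-norm of the solution decreases slightly, reflecting the scheme's dissipation.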

The nature of the dissipative properties of the Lax-Wendroff scheme can be seen by examining the modified partial differential equation, which is given by

$\frac{\partial u}{\partial t} + a\frac{\partial u}{\partial x} = -\frac{a}{6}\left(\Delta x^2 - a^2h^2\right)\frac{\partial^3 u}{\partial x^3} - \frac{a^2h}{8}\left(\Delta x^2 - a^2h^2\right)\frac{\partial^4 u}{\partial x^4} + \cdots$

This is derived by substituting Taylor series expansions for all terms in Eq. 11.9 and converting the time derivatives to space derivatives using Eq. 11.8. The two leading error terms appear on the right side of the equation. Recall that the odd derivatives on the right side lead to unwanted dispersion and the even derivatives lead to dissipation (or amplification, depending on the sign). Therefore, the leading error term in the Lax-Wendroff method is dispersive and proportional to

$-\frac{a}{6}\left(\Delta x^2 - a^2h^2\right)\frac{\partial^3 u}{\partial x^3} = -\frac{a\Delta x^2}{6}\left(1 - C_n^2\right)\frac{\partial^3 u}{\partial x^3}$

The dissipative term is proportional to

$-\frac{a^2h}{8}\left(\Delta x^2 - a^2h^2\right)\frac{\partial^4 u}{\partial x^4} = -\frac{a^2h\,\Delta x^2}{8}\left(1 - C_n^2\right)\frac{\partial^4 u}{\partial x^4}$

This term has the appropriate sign and hence the scheme is truly dissipative as long as $C_n \le 1$.

A closely related method is that of MacCormack. Recall MacCormack's time-marching method, presented in Chapter 6:

$\tilde{u}_{n+1} = u_n + hu'_n$
$u_{n+1} = \frac{1}{2}\left[u_n + \tilde{u}_{n+1} + h\tilde{u}'_{n+1}\right]$    (11.10)

If we use first-order backward differencing in the first stage and first-order forward differencing in the second stage,¹ a dissipative second-order method is obtained. For the linear convection equation, this approach leads to

$\tilde{u}_j^{(n+1)} = u_j^{(n)} - \frac{ah}{\Delta x}\left(u_j^{(n)} - u_{j-1}^{(n)}\right)$
$u_j^{(n+1)} = \frac{1}{2}\left[u_j^{(n)} + \tilde{u}_j^{(n+1)} - \frac{ah}{\Delta x}\left(\tilde{u}_{j+1}^{(n+1)} - \tilde{u}_j^{(n+1)}\right)\right]$    (11.11)

which can be shown to be identical to the Lax-Wendroff method. Hence MacCormack's method has the same dissipative and dispersive properties as the Lax-Wendroff method. The two methods differ when applied to nonlinear hyperbolic systems, however.

¹ Or vice-versa; for nonlinear problems, these should be applied alternately.
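The identity between Eq. 11.11 and the Lax-Wendroff scheme can be confirmed by applying both one-step maps to the same data; in the linear, periodic case they agree to machine precision. The random test vector below is my own choice:

```python
import numpy as np

def maccormack_step(u, cn):
    """MacCormack predictor-corrector of Eq. 11.11: backward then forward differencing."""
    pred = u - cn*(u - np.roll(u, 1))                       # predictor (backward)
    return 0.5*(u + pred - cn*(np.roll(pred, -1) - pred))   # corrector (forward)

def lw_step(u, cn):
    up, um = np.roll(u, -1), np.roll(u, 1)
    return u - 0.5*cn*(up - um) + 0.5*cn**2*(up - 2*u + um)

rng = np.random.default_rng(0)
u0 = rng.standard_normal(32)
diff = np.max(np.abs(maccormack_step(u0, 0.7) - lw_step(u0, 0.7)))
```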

11.4 Upwind Schemes

In Section 11.1, we saw that numerical dissipation can be introduced in the spatial difference operator using one-sided difference schemes or, more generally, by adding a symmetric component to the spatial operator. With this approach, the direction of the one-sided operator (i.e., whether it is a forward or a backward difference) or the sign of the symmetric component depends on the sign of the wave speed. When a hyperbolic system of equations is being solved, the wave speeds can be both positive and negative. For example, the eigenvalues of the flux Jacobian for the one-dimensional Euler equations are $u,\ u + a,\ u - a$. When the flow is subsonic, these are of mixed sign. In order to apply one-sided differencing schemes to such systems, some form of splitting is required. This is avoided in the Lax-Wendroff scheme. However, as a result of their superior flexibility, schemes in which the numerical dissipation is introduced in the spatial operator are generally preferred over the Lax-Wendroff approach. In the next two sections, we present the basic ideas of flux-vector and flux-difference splitting, two splitting techniques commonly used with upwind methods. These are by no means unique. For more subtle aspects of implementation and application of such techniques to nonlinear hyperbolic systems such as the Euler equations, the reader is referred to the literature on this subject.
Consider again the linear convection equation:

$\frac{\partial u}{\partial t} + a\frac{\partial u}{\partial x} = 0$    (11.12)

where we do not make any assumptions as to the sign of $a$. We can rewrite Eq. 11.12 as

$\frac{\partial u}{\partial t} + (a^+ + a^-)\frac{\partial u}{\partial x} = 0, \qquad a^\pm = \frac{a \pm |a|}{2}$

If $a \ge 0$, then $a^+ = a \ge 0$ and $a^- = 0$. If $a \le 0$, then $a^+ = 0$ and $a^- = a \le 0$. Now for the $a^+$ ($\ge 0$) term we can safely backward difference and for the $a^-$ ($\le 0$) term forward difference. This is the basic concept behind upwind methods, that is, some decomposition or splitting of the fluxes into terms which have positive and negative characteristic speeds so that appropriate differencing schemes can be chosen. From Eq. 11.2, we see that a stable discretization is obtained with $\beta = 1$ if $a \ge 0$ and with $\beta = -1$ if $a \le 0$. The above approach to obtaining a stable discretization independent of the sign of $a$ can be written in a different, but entirely equivalent, manner. This is achieved by the following point operator:

$-a(\delta_x u)_j = \frac{-1}{2\Delta x}\left[a(-u_{j-1} + u_{j+1}) + |a|(-u_{j-1} + 2u_j - u_{j+1})\right]$    (11.13)

This approach is extended to systems of equations in Section 11.5.
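The point operator of Eq. 11.13 can be checked against plain one-sided differencing; the periodic grid and the wave speeds used below are my own choices. For $a > 0$ it reduces exactly to the backward difference, and for $a < 0$ to the forward difference:

```python
import numpy as np

def upwind_rhs(u, a, dx):
    """Point operator of Eq. 11.13: central difference plus an |a|-weighted
    symmetric (dissipative) term, evaluated on a periodic grid."""
    up, um = np.roll(u, -1), np.roll(u, 1)
    return -(a*(-um + up) + abs(a)*(-um + 2*u - up)) / (2*dx)

M = 64
dx = 2*np.pi/M
x = dx*np.arange(M)
u = np.sin(x)
bwd = -2.0*(u - np.roll(u, 1))/dx           # -a (u_j - u_{j-1})/dx for a = 2
fwd = -(-2.0)*(np.roll(u, -1) - u)/dx * -1  # -a (u_{j+1} - u_j)/dx for a = -2
fwd = 2.0*(np.roll(u, -1) - u)/dx
```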

11.4.1 Flux-Vector Splitting

Recall from Section 2.5 that a linear, constant-coefficient, hyperbolic system of partial differential equations given by

$\frac{\partial u}{\partial t} + \frac{\partial f}{\partial x} = \frac{\partial u}{\partial t} + A\frac{\partial u}{\partial x} = 0$    (11.14)

can be decoupled into characteristic equations of the form

$\frac{\partial w_i}{\partial t} + \lambda_i\frac{\partial w_i}{\partial x} = 0$    (11.15)

where the wave speeds, $\lambda_i$, are the eigenvalues of the Jacobian matrix, $A$, and the $w_i$'s are the characteristic variables. In order to apply a one-sided (or biased) spatial differencing scheme, we need to apply a backward difference if the wave speed, $\lambda_i$, is positive, and a forward difference if the wave speed is negative. To accomplish this, let us split the matrix of eigenvalues, $\Lambda$, into two components such that

$\Lambda = \Lambda^+ + \Lambda^-$    (11.16)

where

$\Lambda^+ = \frac{\Lambda + |\Lambda|}{2}, \qquad \Lambda^- = \frac{\Lambda - |\Lambda|}{2}$    (11.17)

With these definitions, $\Lambda^+$ contains the positive eigenvalues and $\Lambda^-$ contains the negative eigenvalues. We can now rewrite the system in terms of characteristic variables as

$\frac{\partial w}{\partial t} + \Lambda\frac{\partial w}{\partial x} = \frac{\partial w}{\partial t} + \Lambda^+\frac{\partial w}{\partial x} + \Lambda^-\frac{\partial w}{\partial x} = 0$    (11.18)

The spatial terms have been split into two components according to the sign of the wave speeds. We can use backward differencing for the $\Lambda^+\partial w/\partial x$ term and forward differencing for the $\Lambda^-\partial w/\partial x$ term. Premultiplying by $X$ and inserting the product $X^{-1}X$ in the spatial terms gives

$\frac{\partial Xw}{\partial t} + \frac{\partial X\Lambda^+X^{-1}Xw}{\partial x} + \frac{\partial X\Lambda^-X^{-1}Xw}{\partial x} = 0$    (11.19)

With the definitions

$A^+ = X\Lambda^+X^{-1}, \qquad A^- = X\Lambda^-X^{-1}$    (11.20)

and recalling that $u = Xw$, we obtain

$\frac{\partial u}{\partial t} + \frac{\partial A^+u}{\partial x} + \frac{\partial A^-u}{\partial x} = 0$    (11.21)

Finally the split flux vectors are defined as

$f^+ = A^+u, \qquad f^- = A^-u$    (11.22)

and we can write

$\frac{\partial u}{\partial t} + \frac{\partial f^+}{\partial x} + \frac{\partial f^-}{\partial x} = 0$    (11.23)

Thus by applying backward differences to the $f^+$ term and forward differences to the $f^-$ term, we are in effect solving the characteristic equations in the desired manner. This approach is known as flux-vector splitting. Note that $f = f^+ + f^-$; with these definitions, $A^+$ has all positive eigenvalues and $A^-$ has all negative eigenvalues.

In the linear case, the definition of the split fluxes follows directly from the definition of the flux, $f = Au$. For the Euler equations, $f$ is also equal to $Au$ as a result of their homogeneous property, as discussed in Appendix C. Note, however, that in the nonlinear case

$\frac{\partial f^+}{\partial u} \ne A^+, \qquad \frac{\partial f^-}{\partial u} \ne A^-$    (11.24)

Therefore, when an implicit time-marching method is used, the Jacobians of the split flux vectors are required, and one must find and use the new Jacobians given by

$A^{++} = \frac{\partial f^+}{\partial u}, \qquad A^{--} = \frac{\partial f^-}{\partial u}$    (11.25, 11.26)

For the Euler equations, $A^{++}$ has eigenvalues which are all positive, and $A^{--}$ has all negative eigenvalues.
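For a linear system, Eqs. 11.16-11.20 can be carried out mechanically. The sketch below uses an assumed 2x2 system with wave speeds $\pm c$ (my example, not the text's) and forms $A^\pm$ through the eigendecomposition:

```python
import numpy as np

# a linear 2x2 hyperbolic system u_t + A u_x = 0 with wave speeds +c and -c
c = 2.0
A = np.array([[0.0, c],
              [c, 0.0]])

lam, X = np.linalg.eig(A)                  # A = X diag(lam) X^{-1}
Lp = np.diag(np.maximum(lam, 0.0))         # Lambda^+ (Eq. 11.17)
Lm = np.diag(np.minimum(lam, 0.0))         # Lambda^-
Xi = np.linalg.inv(X)
Ap = X @ Lp @ Xi                           # A+ : positive wave speeds only (Eq. 11.20)
Am = X @ Lm @ Xi                           # A- : negative wave speeds only
```

By construction $A^+ + A^- = A$, and $A^+ - A^- = |A| = X|\Lambda|X^{-1}$, which for this particular system is just $cI$.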

11.4.2 Flux-Difference Splitting

Another approach, more suited to finite-volume methods, is known as flux-difference splitting. In a finite-volume method, the fluxes must be evaluated at cell boundaries, as in Chapter 5. We again begin with the diagonalized form of the linear, constant-coefficient, hyperbolic system of equations

$\frac{\partial w}{\partial t} + \Lambda\frac{\partial w}{\partial x} = 0$    (11.27)

The flux vector associated with this form is $g = \Lambda w$. Now, we consider the numerical flux at the interface between nodes $j$ and $j+1$, $\hat{g}_{j+1/2}$, as a function of the states to the left and right of the interface, $w_L$ and $w_R$, respectively. The centered approximation to $g_{j+1/2}$, which is nondissipative, is given by

$\hat{g}_{j+1/2} = \frac{1}{2}\left(g(w_L) + g(w_R)\right)$    (11.28)

In order to obtain a one-sided upwind approximation, we require

$(\hat{g}_i)_{j+1/2} = \begin{cases}\lambda_i(w_i)_L & \text{if } \lambda_i > 0 \\ \lambda_i(w_i)_R & \text{if } \lambda_i < 0\end{cases}$    (11.29)

where the subscript $i$ indicates individual components of $w$ and $g$. This is achieved with

$(\hat{g}_i)_{j+1/2} = \frac{1}{2}\lambda_i\left[(w_i)_L + (w_i)_R\right] + \frac{1}{2}|\lambda_i|\left[(w_i)_L - (w_i)_R\right]$    (11.30)

or

$\hat{g}_{j+1/2} = \frac{1}{2}\Lambda(w_L + w_R) + \frac{1}{2}|\Lambda|(w_L - w_R)$    (11.31)

Now, to return to the original variables, we premultiply by $X$ and insert the product $X^{-1}X$ after $\Lambda$ and $|\Lambda|$ to obtain

$X\hat{g}_{j+1/2} = \frac{1}{2}X\Lambda X^{-1}X(w_L + w_R) + \frac{1}{2}X|\Lambda|X^{-1}X(w_L - w_R)$    (11.32)

and thus

$\hat{f}_{j+1/2} = \frac{1}{2}(f_L + f_R) + \frac{1}{2}|A|(u_L - u_R)$    (11.33)

where

$|A| = X|\Lambda|X^{-1}$    (11.34)

and we have also used the relations $f = Xg$, $u = Xw$, and $A = X\Lambda X^{-1}$.

In the linear, constant-coefficient case, this leads to an upwind operator which is identical to that obtained using flux-vector splitting. However, in the nonlinear case, there is some ambiguity regarding the definition of $|A|$ at the cell interface $j + 1/2$. In order to resolve this, consider a situation in which the eigenvalues of $A$ are all of the same sign. If the eigenvalues of $A$ are all positive, $|A| = A$; if they are all negative, $|A| = -A$. In this case, we would like our definition of $\hat{f}_{j+1/2}$ to satisfy

$\hat{f}_{j+1/2} = \begin{cases} f_L & \text{if all } \lambda_i\text{'s} > 0 \\ f_R & \text{if all } \lambda_i\text{'s} < 0 \end{cases}$    (11.35)

giving pure upwinding. Eq. 11.35 is obtained by the definition

$\hat{f}_{j+1/2} = \frac{1}{2}(f_L + f_R) + \frac{1}{2}|A_{j+1/2}|(u_L - u_R)$    (11.36)

if $A_{j+1/2}$ satisfies

$f_L - f_R = A_{j+1/2}(u_L - u_R)$    (11.37)

For the Euler equations for a perfect gas, Eq. 11.37 is satisfied by the flux Jacobian evaluated at the Roe-average state given by

$u_{j+1/2} = \frac{\sqrt{\rho_L}\,u_L + \sqrt{\rho_R}\,u_R}{\sqrt{\rho_L} + \sqrt{\rho_R}}$    (11.38)

$H_{j+1/2} = \frac{\sqrt{\rho_L}\,H_L + \sqrt{\rho_R}\,H_R}{\sqrt{\rho_L} + \sqrt{\rho_R}}$    (11.39)

where $u$ and $H = (e + p)/\rho$ are the velocity and the total enthalpy per unit mass, respectively. Note that the flux Jacobian can be written in terms of $u$ and $H$ only; see problem 6 at the end of this chapter.

11.5 Artificial Dissipation

We have seen that numerical dissipation can be introduced by using one-sided differencing schemes together with some form of flux splitting. We have also seen that such dissipation can be introduced by adding a symmetric component to an antisymmetric (dissipation-free) operator. Thus we can generalize the concept of upwinding to include any scheme in which the symmetric portion of the operator is treated in such a manner as to be truly dissipative.
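The Roe-average property of Eq. 11.37 can be verified numerically. The sketch below writes the 1-D Euler flux Jacobian in terms of $u$ and $H$ (the standard form, anticipating problem 6) and checks that, evaluated at the averages of Eqs. 11.38-11.39, it maps the state jump exactly onto the flux jump; the two test states and $\gamma = 1.4$ are my own assumptions:

```python
import numpy as np

g = 1.4  # ratio of specific heats (perfect gas assumed)

def euler_flux(q):
    rho, m, e = q
    u = m / rho
    p = (g - 1) * (e - 0.5 * rho * u * u)
    return np.array([m, m * u + p, u * (e + p)])

def jacobian(u, H):
    """1-D Euler flux Jacobian written in terms of u and H only."""
    return np.array([
        [0.0, 1.0, 0.0],
        [0.5 * (g - 3) * u * u, (3 - g) * u, g - 1],
        [u * (0.5 * (g - 1) * u * u - H), H - (g - 1) * u * u, g * u]])

def roe_average(qL, qR):
    rL, rR = np.sqrt(qL[0]), np.sqrt(qR[0])
    uL, uR = qL[1] / qL[0], qR[1] / qR[0]
    HL = (qL[2] + (g - 1) * (qL[2] - 0.5 * qL[0] * uL * uL)) / qL[0]  # (e+p)/rho
    HR = (qR[2] + (g - 1) * (qR[2] - 0.5 * qR[0] * uR * uR)) / qR[0]
    u_hat = (rL * uL + rR * uR) / (rL + rR)   # Eq. 11.38
    H_hat = (rL * HL + rR * HR) / (rL + rR)   # Eq. 11.39
    return u_hat, H_hat

qL = np.array([1.0, 0.3, 2.5])   # [rho, rho*u, e], left state
qR = np.array([0.5, 0.1, 1.2])   # right state
u_hat, H_hat = roe_average(qL, qR)
lhs = euler_flux(qL) - euler_flux(qR)
rhs = jacobian(u_hat, H_hat) @ (qL - qR)
```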

For example, let

$(\delta_x^a u)_j = \frac{u_{j+1} - u_{j-1}}{2\Delta x}, \qquad (\delta_x^s u)_j = \frac{-u_{j+1} + 2u_j - u_{j-1}}{2\Delta x}$    (11.40)

Applying $\delta_x = \delta_x^a + \delta_x^s$ to the spatial derivative in Eq. 11.15 is stable if $\lambda_i \ge 0$ and unstable if $\lambda_i < 0$. Similarly, applying $\delta_x = \delta_x^a - \delta_x^s$ is stable if $\lambda_i \le 0$ and unstable if $\lambda_i > 0$. The appropriate implementation is thus

$\lambda_i\delta_x = \lambda_i\delta_x^a + |\lambda_i|\delta_x^s$    (11.41)

Extension to a hyperbolic system by applying the above approach to the characteristic variables, as in the previous two sections, gives

$\delta_x(\Lambda w) = \delta_x^a(\Lambda w) + \delta_x^s(|\Lambda|w)$    (11.42)

or

$\delta_x f = \delta_x^a f + \delta_x^s(|A|u)$    (11.43)

where $|A|$ is defined in Eq. 11.34. With appropriate choices of $\delta_x^a$ and $\delta_x^s$, this approach can be related to the upwind approach. This is particularly evident from a comparison of Eqs. 11.36 and 11.43. The second spatial term is known as artificial dissipation. It is also sometimes referred to as artificial diffusion or artificial viscosity.

It is common to use the following operator for $\delta_x^s$:

$(\delta_x^s u)_j = \frac{\epsilon}{\Delta x}\left(u_{j-2} - 4u_{j-1} + 6u_j - 4u_{j+1} + u_{j+2}\right)$    (11.44)

where $\epsilon$ is a problem-dependent coefficient. This symmetric operator approximates $\epsilon\Delta x^3 u_{xxxx}$ and thus introduces a third-order dissipative term. With an appropriate value of $\epsilon$, this often provides sufficient damping of high-frequency modes without greatly affecting the low-frequency modes. A more complicated treatment of the numerical dissipation is also required near shock waves and other discontinuities, but is beyond the scope of this book. For details of how this can be implemented for nonlinear hyperbolic systems, the reader should consult the literature.

11.6 Problems

1. A second-order backward difference approximation to a 1st derivative is given as a point operator by

$(\delta_x u)_j = \frac{1}{2\Delta x}\left(u_{j-2} - 4u_{j-1} + 3u_j\right)$
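The selectivity of the fourth-difference operator in Eq. 11.44 is easy to see numerically: on a periodic grid (my assumption) its response to the odd-even sawtooth mode is orders of magnitude larger than its response to a smooth mode, for which it behaves like $\epsilon\Delta x^3 u_{xxxx}$:

```python
import numpy as np

def diss4(u, eps, dx):
    """Symmetric fourth-difference operator of Eq. 11.44 on a periodic grid."""
    um2, um1 = np.roll(u, 2), np.roll(u, 1)
    up1, up2 = np.roll(u, -1), np.roll(u, -2)
    return (eps/dx)*(um2 - 4*um1 + 6*u - 4*up1 + up2)

M, eps = 64, 1.0
dx = 2*np.pi/M
x = dx*np.arange(M)
smooth = np.sin(x)                   # low-frequency mode
sawtooth = (-1.0)**np.arange(M)      # highest-frequency (odd-even) mode
resp_low = np.max(np.abs(diss4(smooth, eps, dx)))
resp_high = np.max(np.abs(diss4(sawtooth, eps, dx)))
```

For the sawtooth the response is $16\epsilon/\Delta x$ per point, while for the smooth mode it is approximately $\epsilon\Delta x^3\sin x$, i.e., the operator damps what most needs damping.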

(a) Express this operator in banded matrix form (for periodic boundary conditions), then derive the symmetric and skew-symmetric matrices that have the matrix operator as their sum. (See Appendix A.3 to see how to construct the symmetric and skew-symmetric components of a matrix.)

(b) Using a Taylor table, find the derivative which is approximated by the corresponding symmetric and skew-symmetric operators and the leading error term for each.

2. Find the modified wavenumber for the first-order backward difference operator. Plot the real and imaginary parts of $\kappa^*\Delta x$ vs. $\kappa\Delta x$ for $0 \le \kappa\Delta x \le \pi$. Using Fourier analysis as in Section 6.6.2, find $|\sigma|$ for the combination of this spatial operator with 4th-order Runge-Kutta time marching at a Courant number of unity and plot vs. $\kappa\Delta x$ for $0 \le \kappa\Delta x \le \pi$.

3. Find the modified wavenumber for the operator given in Eq. 11.6. Plot the real and imaginary parts of $\kappa^*\Delta x$ vs. $\kappa\Delta x$ for $0 \le \kappa\Delta x \le \pi$. Using Fourier analysis as in Section 6.6.2, find $|\sigma|$ for the combination of this spatial operator with 4th-order Runge-Kutta time marching at a Courant number of unity and plot vs. $\kappa\Delta x$ for $0 \le \kappa\Delta x \le \pi$.

4. Consider the spatial operator obtained by combining second-order centered differences with the symmetric operator given in Eq. 11.44. Find the modified wavenumber for this operator with $\epsilon = 0,\ 1/12,\ 1/24$, and $1/48$. Plot the real and imaginary parts of $\kappa^*\Delta x$ vs. $\kappa\Delta x$ for $0 \le \kappa\Delta x \le \pi$. Using Fourier analysis as in Section 6.6.2, find $|\sigma|$ for the combination of this spatial operator with 4th-order Runge-Kutta time marching at a Courant number of unity and plot vs. $\kappa\Delta x$ for $0 \le \kappa\Delta x \le \pi$.

5. Consider the hyperbolic system derived in problem 8 of Chapter 2. Find the matrix $|A|$. Form the plus-minus split flux vectors as in Section 11.4.1.

6. Show that the flux Jacobian for the 1-D Euler equations can be written in terms of $u$ and $H$. Show that the use of the Roe average state given in Eqs. 11.38 and 11.39 leads to satisfaction of Eq. 11.37.


Chapter 12

SPLIT AND FACTORED FORMS

In the next two chapters, we present and analyze split and factored algorithms. This gives the reader a feel for some of the modifications which can be made to the basic algorithms in order to obtain efficient solvers for practical multidimensional applications, and a means for analyzing such modified forms.

12.1 The Concept

Factored forms of numerical operators are used extensively in constructing and applying numerical methods to problems in fluid mechanics. They are the basis for a wide variety of methods variously known by the labels "hybrid", "time split", and "fractional step". Factored forms are especially useful for the derivation of practical algorithms that use implicit methods. When we approach numerical analysis in the light of matrix derivative operators, the concept of factoring is quite simple to present and grasp. Let us start with the following observations:

1. Matrices can be split in quite arbitrary ways.
2. Advancing to the next time level always requires some reference to a previous one.
3. Time marching methods are valid only to some order of accuracy in the step size, $h$.

Now recall the generic ODE's produced by the semi-discrete approach

$\frac{d\vec{u}}{dt} = A\vec{u} - \vec{f}$    (12.1)

and consider the above observations. From observation 1 (arbitrary splitting of $A$):

$\frac{d\vec{u}}{dt} = [A_1 + A_2]\vec{u} - \vec{f}$    (12.2)

where $A = [A_1 + A_2]$ but $A_1$ and $A_2$ are not unique. For the time march let us choose the simple, first-order¹ explicit Euler method. Then, from observation 2 (new data $\vec{u}_{n+1}$ in terms of old $\vec{u}_n$):

$\vec{u}_{n+1} = [I + hA_1 + hA_2]\vec{u}_n - h\vec{f} + O(h^2)$    (12.3)

Finally, from observation 3 (allowing us to drop the higher order term $-h^2A_1A_2\vec{u}_n$):

$\vec{u}_{n+1} = [I + hA_1][I + hA_2]\vec{u}_n - h\vec{f} + O(h^2)$    (12.4)

or its equivalent

$\vec{u}_{n+1} = \left[[I + hA_1][I + hA_2] - h^2A_1A_2\right]\vec{u}_n - h\vec{f} + O(h^2)$

Notice that Eqs. 12.3 and 12.4 have the same formal order of accuracy and, in this sense, neither one is to be preferred over the other. However, their numerical stability can be quite different, and techniques to carry out their numerical evaluation can have arithmetic operation counts that vary by orders of magnitude. Both of these considerations are investigated later. Here we seek only to apply to some simple cases the concept of factoring.

12.2 Factoring Physical Representations - Time Splitting

Suppose we have a PDE that represents both the processes of convection and dissipation. The semi-discrete approach to its solution might be put in the form

$\frac{d\vec{u}}{dt} = A_c\vec{u} + A_d\vec{u} + \vec{(bc)}$    (12.5)

where $A_c$ and $A_d$ are matrices representing the convection and dissipation terms, respectively, and their sum forms the $A$ matrix we have considered in the previous sections. Choose again the explicit Euler time march so that

$\vec{u}_{n+1} = [I + hA_d + hA_c]\vec{u}_n + h\vec{(bc)} + O(h^2)$    (12.6)

¹ Second-order time-marching methods are considered later.
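The relation between Eqs. 12.3 and 12.4 can be checked directly: the factored product differs from the unfactored sum by exactly $h^2A_1A_2\vec{u}_n$, so the discrepancy between the two one-step maps shrinks by a factor of 4 when $h$ is halved. The random matrices below are my own test data:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 4
A1 = rng.standard_normal((n, n))
A2 = rng.standard_normal((n, n))
u = rng.standard_normal(n)
I = np.eye(n)

def step_unfactored(u, h):
    return (I + h*(A1 + A2)) @ u          # Eq. 12.3 (homogeneous part)

def step_factored(u, h):
    return (I + h*A1) @ (I + h*A2) @ u    # Eq. 12.4 (homogeneous part)

diffs = [np.linalg.norm(step_factored(u, h) - step_unfactored(u, h))
         for h in (0.1, 0.05)]
```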

Now consider the factored form

$\vec{u}_{n+1} = [I + hA_d][I + hA_c]\vec{u}_n + h\vec{(bc)} = \underbrace{[I + hA_d + hA_c]\vec{u}_n + h\vec{(bc)}}_{\text{Original Unfactored Terms}} + \underbrace{h^2A_d\left[A_c\vec{u}_n + \vec{(bc)}\right]}_{\text{Higher Order Terms}} + O(h^2)$    (12.7)

and we see that Eq. 12.7 and the original unfactored form Eq. 12.6 have identical orders of accuracy in the time approximation. Therefore, on this basis, their selection is arbitrary. In practical applications equations such as 12.7 are often applied in a predictor-corrector sequence. In this case one could write

$\tilde{u}_{n+1} = [I + hA_c]\vec{u}_n + h\vec{(bc)}$
$\vec{u}_{n+1} = [I + hA_d]\tilde{u}_{n+1}$    (12.8)

Factoring can also be useful to form split combinations of implicit and explicit techniques. For example, another way to approximate Eq. 12.6 with the same order of accuracy is given by the expression

$\vec{u}_{n+1} = [I - hA_d]^{-1}[I + hA_c]\vec{u}_n + h\vec{(bc)} = \underbrace{[I + hA_d + hA_c]\vec{u}_n + h\vec{(bc)}}_{\text{Original Unfactored Terms}} + O(h^2)$    (12.9)

where in this approximation we have used the fact that

$[I - hA_d]^{-1} = I + hA_d + h^2A_d^2 + \cdots$

if $h\,\|A_d\| < 1$, where $\|A_d\|$ is some norm of $A_d$. This time a predictor-corrector interpretation leads to the sequence

$\tilde{u}_{n+1} = [I + hA_c]\vec{u}_n + h\vec{(bc)}$
$[I - hA_d]\vec{u}_{n+1} = \tilde{u}_{n+1}$    (12.10)

The convection operator is applied explicitly, but the diffusion operator is now implicit, requiring a tridiagonal solver if the diffusion term is central differenced. Since numerical stiffness is generally much more severe for the diffusion process, this factored form would appear to be superior to that provided by Eq. 12.8. However, the important aspect of stability has yet to be discussed. We do not suggest that this particular method is suitable for use; we have yet to determine its stability, and a first-order time-march method is usually unsatisfactory.
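The predictor-corrector sequence of Eq. 12.10 can be sketched for a 1-D periodic convection-diffusion problem. The grid, wave speed, viscosity, and step size below are my own choices, and numpy's dense solver stands in for the tridiagonal solver one would use in practice:

```python
import numpy as np

M, a, nu = 64, 1.0, 0.05
dx = 2*np.pi/M
x = dx*np.arange(M)
h = 0.01

# periodic central-difference convection (Ac) and diffusion (Ad) operators
Ac = np.zeros((M, M))
Ad = np.zeros((M, M))
for j in range(M):
    Ac[j, (j - 1) % M], Ac[j, (j + 1) % M] = a/(2*dx), -a/(2*dx)
    Ad[j, (j - 1) % M] = Ad[j, (j + 1) % M] = nu/dx**2
    Ad[j, j] = -2*nu/dx**2

I = np.eye(M)
LHS = I - h*Ad                       # implicit diffusion operator of Eq. 12.10
u = np.sin(x)
for _ in range(200):
    pred = (I + h*Ac) @ u            # explicit convection predictor
    u = np.linalg.solve(LHS, pred)   # implicit diffusion corrector
```

With these parameters the implicit diffusion damps the growth that explicit Euler would otherwise produce on the purely convective (imaginary-eigenvalue) part, and the sine wave is advected while decaying roughly like $e^{-\nu t}$.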

We should mention here that Eq. 12.9 can be derived from a different point of view, by writing Eq. 12.6 in the form

$\frac{\vec{u}_{n+1} - \vec{u}_n}{h} = A_c\vec{u}_n + A_d\vec{u}_{n+1} + \vec{(bc)} + O(h^2)$

Then

$[I - hA_d]\vec{u}_{n+1} = [I + hA_c]\vec{u}_n + h\vec{(bc)}$

which is identical to Eq. 12.10.

12.3 Factoring Space Matrix Operators in 2-D

12.3.1 Mesh Indexing Convention

Factoring is widely used in codes designed for the numerical solution of equations governing unsteady two- and three-dimensional flows. Let us study the basic concept of factoring by inspecting its use on the linear 2-D scalar PDE that models diffusion:

$\frac{\partial u}{\partial t} = \frac{\partial^2 u}{\partial x^2} + \frac{\partial^2 u}{\partial y^2}$    (12.11)

We begin by reducing this PDE to a coupled set of ODE's by differencing the space derivatives and inspecting the resulting matrix operator.

A clear description of a matrix finite-difference operator in 2- and 3-D requires some reference to a mesh. We choose the 3 x 4 point mesh² shown in Sketch 12.12:

    k
    3    13  23  33  43
My  2    12  22  32  42
    1    11  21  31  41

         1   2   3   4
               j -> Mx        Mesh indexing in 2-D.    (12.12)

In this example $M_x$, the number of (interior) $x$ points, is 4 and $M_y$, the number of (interior) $y$ points, is 3. The numbers $11, 12, \cdots, 43$ represent the location in the mesh of the dependent variable bearing that index. Thus $u_{32}$ represents the value of $u$ at $j = 3$ and $k = 2$.

² This could also be called a 5 x 6 point mesh if the boundary points (labeled in the sketch) were included, but in these notes we describe the size of a mesh by the number of interior points.

12.3.2 Data Bases and Space Vectors

The dimensioned array in a computer code that allots the storage locations of the dependent variable(s) is referred to as a data-base. There are many ways to lay out a data-base. Of these, we consider only two: (1), consecutively along rows that are themselves consecutive from $k = 1$ to $M_y$, and (2), consecutively along columns that are consecutive from $j = 1$ to $M_x$. We refer to each row or column group as a space vector (they represent data along lines that are continuous in space) and label their sum with the symbol $U$. In particular, (1) and (2) above are referred to as $x$-vectors and $y$-vectors, respectively. The symbol $U$ by itself is not enough to identify the structure of the data-base and is used only when the structure is immaterial or understood.

To be specific about the structure, we label a data-base composed of $x$-vectors with $U^{(x)}$, and one composed of $y$-vectors with $U^{(y)}$. Examples of the order of indexing $U$ for these space vectors are given in Eq. 12.16 part a and b.

12.3.3 Data Base Permutations

The two vectors (arrays) are related by a permutation matrix $P$ such that

$U^{(x)} = P_{xy}U^{(y)} \quad \text{and} \quad U^{(y)} = P_{yx}U^{(x)}$    (12.13)

where

$P_{yx} = P_{xy}^T = P_{xy}^{-1}$

Now consider the structure of a matrix finite-difference operator representing 3-point central-differencing schemes for both space derivatives in two dimensions. In this notation the ODE form of Eq. 12.11 can be written

$\frac{dU}{dt} = A_{x+y}U + \vec{(bc)}$    (12.14)

When the matrix is multiplying a space vector $U$, the usual (but ambiguous) representation is given by $A_{x+y}$. If it is important to be specific about the data-base structure, we use the notation $A_{x+y}^{(x)}$ or $A_{x+y}^{(y)}$, depending on the data-base chosen for the $U$ it multiplies. Examples are in Eq. 12.16 part a and b. Notice that the matrices are not the same although they represent the same derivative operation. Their structures are similar, however, and they are related by the same permutation matrix that relates $U^{(x)}$ to $U^{(y)}$. Thus

$A_{x+y}^{(x)} = P_{xy}A_{x+y}^{(y)}P_{yx}$    (12.15)

Notice that $A_{x+y}$ and $U$, used in the previous sections, are subsets of $A$ and $\vec{u}$, which are notations used in the special case of space vectors.
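The permutation relations of Eqs. 12.13 and 12.15 can be built and checked explicitly for the 3 x 4 mesh. The sketch below constructs both data-base orderings, the permutation matrix between them, and the 2-D central-difference operator in each ordering (unit mesh spacing is assumed):

```python
import numpy as np

Mx, My = 4, 3
N = Mx * My
# x-vector data base: consecutive along rows (j varies fastest)
idx_x = {(j, k): k*Mx + j for j in range(Mx) for k in range(My)}
# y-vector data base: consecutive along columns (k varies fastest)
idx_y = {(j, k): j*My + k for j in range(Mx) for k in range(My)}

Pyx = np.zeros((N, N))              # U^(y) = P_yx U^(x), Eq. 12.13
for jk, col in idx_x.items():
    Pyx[idx_y[jk], col] = 1.0
Pxy = Pyx.T                         # P_xy = P_yx^T = P_yx^{-1}

def laplacian(idx):
    """3-point central differences in both directions (interior points only)."""
    A = np.zeros((N, N))
    for (j, k), r in idx.items():
        A[r, r] = -4.0
        for nb in ((j-1, k), (j+1, k), (j, k-1), (j, k+1)):
            if nb in idx:
                A[r, idx[nb]] = 1.0
    return A

Ax_mat = laplacian(idx_x)           # A_{x+y}^{(x)}
Ay_mat = laplacian(idx_y)           # A_{x+y}^{(y)}
```

The two operators have different banded structures but are similar matrices, related exactly by the permutation.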
12.2 Data Bases and Space Vectors The dimensioned array in a computer code that allots the storage locations of the dependent variable(s) is referred to as a data-base. 12. and they are related by the same permutation matrix that relates U (x) to U (y) . 12.
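The relations U^(y) = P_yx U^(x) and P_yx = (P_xy)^T = (P_xy)^{-1} can be verified directly; a small sketch for the 3 x 4 mesh, with the index layout assumed as described above:

```python
import numpy as np

Mx, My = 4, 3
n = Mx * My

# Mesh point (j, k) sits at position k*Mx + j in the x-vector data base
# and at position j*My + k in the y-vector data base (0-based indices).
P_yx = np.zeros((n, n))
for j in range(Mx):
    for k in range(My):
        P_yx[j * My + k, k * Mx + j] = 1.0   # U^(y) = P_yx U^(x)
P_xy = P_yx.T                                # for permutations, transpose = inverse

U_x = np.arange(1.0, n + 1.0)                # any data stored in x-vector order
U_y = P_yx @ U_x

assert np.allclose(P_xy @ P_yx, np.eye(n))
assert np.allclose(P_xy @ U_y, U_x)
```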

(12.16)

a: Elements in the two-dimensional, central-difference matrix operator A^(x)_{x+y} for the 3 x 4 mesh shown in Sketch 12.12. Data base composed of My x-vectors stored in U^(x). Entries: x for the x-derivative, o for the y-derivative, and a filled symbol for both. The x entries form My tridiagonal blocks of order Mx along the main diagonal, and the o entries lie on the two diagonals Mx columns to either side of it.

b: Elements in the two-dimensional, central-difference matrix operator A^(y)_{x+y} for the 3 x 4 mesh shown in Sketch 12.12. Data base composed of Mx y-vectors stored in U^(y). Entries as in part a. Here the o entries form Mx tridiagonal blocks of order My along the main diagonal, and the x entries lie on the two diagonals My columns to either side of it.

12.3.4 Space Splitting and Factoring

We are now prepared to discuss splitting in two dimensions. It should be clear that the matrix A^(x)_{x+y} can be split into two matrices such that

    A^(x)_{x+y} = A^(x)_x + A^(x)_y        (12.17)

where A^(x)_x and A^(x)_y are shown in Eq. 12.22. Similarly

    A^(y)_{x+y} = A^(y)_x + A^(y)_y        (12.18)

where the split matrices are shown in Eq. 12.23. The permutation relation also holds for the split matrices, so

    A^(x)_y = P_xy A^(y)_y P_yx   and   A^(x)_x = P_xy A^(y)_x P_yx

The splittings in Eqs. 12.17 and 12.18 can be combined with factoring in the manner described in Section 12.2. As an example (first-order in time), applying the implicit Euler method to Eq. 12.14 gives

    U^(x)_{n+1} = U^(x)_n + h[A^(x)_x + A^(x)_y] U^(x)_{n+1} + h(bc)

or

    [I - h A^(x)_x - h A^(x)_y] U^(x)_{n+1} = U^(x)_n + h(bc) + O(h^2)        (12.19)

As in Section 12.2, we retain the same first-order accuracy with the alternative

    [I - h A^(x)_x][I - h A^(x)_y] U^(x)_{n+1} = U^(x)_n + h(bc) + O(h^2)        (12.20)

Write this in predictor-corrector form and permute the data base of the second row. There results

    [I - h A^(x)_x] U~^(x) = U^(x)_n + h(bc)
    [I - h A^(y)_y] U^(y)_{n+1} = U~^(y)        (12.21)
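The factored operator in Eq. 12.20 differs from the unfactored operator in Eq. 12.19 only by the h^2 A_x A_y cross term, which is why first-order accuracy is retained. A small NumPy check; the operators here are illustrative model diffusion matrices, not the 4 x 4-block Euler operators:

```python
import numpy as np

# Assumed small diffusion operators on a 4 x 3 interior mesh (unit spacing).
def second_diff(m):
    return (np.diag(-2.0 * np.ones(m)) + np.diag(np.ones(m - 1), 1)
            + np.diag(np.ones(m - 1), -1))

Mx, My = 4, 3
Ax = np.kron(np.eye(My), second_diff(Mx))
Ay = np.kron(second_diff(My), np.eye(Mx))
I = np.eye(Mx * My)

h = 0.1
unfactored = I - h * Ax - h * Ay          # left side of Eq. 12.19
factored = (I - h * Ax) @ (I - h * Ay)    # left side of Eq. 12.20

# The two implicit operators differ only by the h^2 cross term:
assert np.allclose(factored - unfactored, h**2 * Ax @ Ay)
```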

The splitting of A^(x)_{x+y}:

    A^(x)_x U^(x): the x entries form My tridiagonal blocks of order Mx along the
    main diagonal (each block is the one-dimensional x-difference operator acting
    on one row of the mesh).

    A^(x)_y U^(x): the o entries lie on the main diagonal and on the two diagonals
    Mx columns to either side of it.        (12.22)

The splitting of A^(y)_{x+y}:

    A^(y)_x U^(y): the x entries lie on the main diagonal and on the two diagonals
    My columns to either side of it.

    A^(y)_y U^(y): the o entries form Mx tridiagonal blocks of order My along the
    main diagonal (each block is the one-dimensional y-difference operator acting
    on one column of the mesh).        (12.23)

12.4 Second-Order Factored Implicit Methods

Second-order accuracy in time can be maintained in certain factored implicit methods. For example, apply the trapezoidal method to Eq. 12.14, where the derivative operators have been split as in Eq. 12.17 or 12.18. Let the data base be immaterial and the (bc) be time invariant. There results

    [I - (h/2)A_x - (h/2)A_y] U_{n+1} = [I + (h/2)A_x + (h/2)A_y] U_n + h(bc) + O(h^3)        (12.24)

Factor both sides, giving

    { [I - (h/2)A_x][I - (h/2)A_y] - (h^2/4)A_x A_y } U_{n+1}
        = { [I + (h/2)A_x][I + (h/2)A_y] - (h^2/4)A_x A_y } U_n + h(bc) + O(h^3)        (12.25)

Then notice that the combination (h^2/4)[A_x A_y](U_{n+1} - U_n) is proportional to h^3, since the leading term in the expansion of (U_{n+1} - U_n) is proportional to h. Therefore, we can write

    [I - (h/2)A_x][I - (h/2)A_y] U_{n+1} = [I + (h/2)A_x][I + (h/2)A_y] U_n + h(bc) + O(h^3)        (12.26)

and both the factored and unfactored forms of the trapezoidal method are second-order accurate in the time march.

An alternative form of this kind of factorization is the classical ADI (alternating direction implicit) method (a form of the Douglas or Peaceman-Rachford methods), usually written

    [I - (h/2)A_x] U~ = [I + (h/2)A_y] U_n + (h/2)F_n
    [I - (h/2)A_y] U_{n+1} = [I + (h/2)A_x] U~ + (h/2)F_{n+1}        (12.27)

For idealized commuting systems the methods given by Eqs. 12.26 and 12.27 differ only in their evaluation of a time-dependent forcing term.

12.5 Importance of Factored Forms in 2 and 3 Dimensions

When the time-march equations are stiff and implicit methods are required to permit reasonably large time steps, the use of factored forms becomes a very valuable tool
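That the discarded cross term is O(h^3) can be observed numerically: one step of the factored trapezoidal scheme (Eq. 12.26) differs from one step of the unfactored scheme (Eq. 12.24) by an amount that shrinks roughly as h^3. A sketch under assumed model diffusion operators:

```python
import numpy as np

def second_diff(m):
    return (np.diag(-2.0 * np.ones(m)) + np.diag(np.ones(m - 1), 1)
            + np.diag(np.ones(m - 1), -1))

Mx, My = 4, 3
Ax = np.kron(np.eye(My), second_diff(Mx))
Ay = np.kron(second_diff(My), np.eye(Mx))
I = np.eye(Mx * My)
u0 = np.sin(np.linspace(0.3, 2.8, Mx * My))   # arbitrary smooth initial data

def step_diff(h):
    """Norm of the difference between one unfactored and one factored step."""
    lhs_u = I - 0.5 * h * (Ax + Ay)
    rhs_u = I + 0.5 * h * (Ax + Ay)
    lhs_f = (I - 0.5 * h * Ax) @ (I - 0.5 * h * Ay)
    rhs_f = (I + 0.5 * h * Ax) @ (I + 0.5 * h * Ay)
    un_u = np.linalg.solve(lhs_u, rhs_u @ u0)
    un_f = np.linalg.solve(lhs_f, rhs_f @ u0)
    return np.linalg.norm(un_u - un_f)

# Halving h should cut the one-step difference by about 2^3 = 8:
r = step_diff(0.02) / step_diff(0.01)
assert 6.0 < r < 10.0
```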

Consider, for example, the problem of computing the time advance in the unfactored form of the trapezoidal method given by Eq. 12.24:

    [I - (h/2)A_{x+y}] U_{n+1} = [I + (h/2)A_{x+y}] U_n + h(bc)

Forming the right hand side poses no problem, but finding U_{n+1} requires the solution of a sparse, but very large, set of coupled simultaneous equations having the matrix form shown in Eq. 12.16 part a and b. Furthermore, in real cases involving the Euler or Navier-Stokes equations, each symbol (o, x) represents a 4 x 4 block matrix with entries that depend on the pressure, density, and velocity field.

Suppose we were to solve the equations directly. The forward sweep of a simple Gaussian elimination fills all of the 4 x 4 blocks between the main and outermost diagonal, e.g., in Eq. 12.16 part b. (For matrices as small as those shown there are many gaps in this "fill", but for meshes of practical size the fill is mostly dense. The lower band is also computed, but does not have to be saved unless the solution is to be repeated for another vector.) This fill must be stored in computer memory to be used to find the final solution in the backward sweep. If Ne represents the order of the small block matrix (4 in the 2-D Euler case), the approximate memory requirement is (Ne x My) x (Ne x My) x Mx floating point words. Here it is assumed that My < Mx; if My > Mx, My and Mx would be interchanged. A moderate mesh of 60 x 200 points would require over 11 million words to find the solution.

Actually, current computer power is able to cope rather easily with storage requirements of this order of magnitude. With computing speeds of over one gigaflop (one billion floating-point operations per second), direct solvers may become useful for finding steady-state solutions of practical problems in two dimensions. However, a three-dimensional solver would require a memory of approximately Ne^2 x My^2 x Mz^2 x Mx words and, for well resolved flow fields, this probably exceeds memory availability for some time to come.
On the other hand, consider computing a solution using the factored implicit equation 12.25. Again computing the right hand side poses no problem. Accumulate the result of such a computation in the array (RHS). One can then write the remaining terms in the two-step predictor-corrector form

    [I - (h/2)A^(x)_x] U~^(x) = (RHS)^(x)
    [I - (h/2)A^(y)_y] U^(y)_{n+1} = U~^(y)        (12.28)
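Each of the one-dimensional implicit solves in this two-step form is tridiagonal: scalar tridiagonal in the model problem, 4 x 4-block tridiagonal for the 2-D Euler equations. A sketch of the scalar Thomas algorithm such a step would use (the block version replaces each scalar operation by a small matrix operation):

```python
import numpy as np

def thomas(a, b, c, d):
    """Solve a tridiagonal system: a = sub-, b = main, c = super-diagonal."""
    n = len(b)
    cp, dp = np.zeros(n), np.zeros(n)
    cp[0], dp[0] = c[0] / b[0], d[0] / b[0]
    for i in range(1, n):                       # forward elimination
        m = b[i] - a[i] * cp[i - 1]
        cp[i] = c[i] / m
        dp[i] = (d[i] - a[i] * dp[i - 1]) / m
    x = np.zeros(n)
    x[-1] = dp[-1]
    for i in range(n - 2, -1, -1):              # back substitution
        x[i] = dp[i] - cp[i] * x[i + 1]
    return x

# Check against a dense solve on a diagonally dominant system:
rng = np.random.default_rng(0)
n = 8
b = 4.0 + rng.random(n)
a = rng.random(n); a[0] = 0.0
c = rng.random(n); c[-1] = 0.0
A = np.diag(b) + np.diag(a[1:], -1) + np.diag(c[:-1], 1)
d = rng.random(n)
assert np.allclose(thomas(a, b, c, d), np.linalg.solve(A, d))
```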

which has the same appearance as Eq. 12.21 but is second-order time accurate. The first step would be solved using My uncoupled block tridiagonal solvers. (A block tridiagonal solver is similar to a scalar solver except that small block matrix operations replace the scalar ones.) Inspecting the top of Eq. 12.22, we see that this is equivalent to solving My one-dimensional problems, each with Mx blocks of order Ne. The temporary solution U~^(x) would then be permuted to U~^(y), and an inspection of the bottom of Eq. 12.23 shows that the final step consists of solving Mx one-dimensional implicit problems, each with dimension My.

12.6 The Delta Form

Clearly many ways can be devised to split the matrices and generate factored forms. One way that is especially useful, for ensuring a correct steady-state solution in a converged time-march, is referred to as the "delta form", and we develop it next.

Consider the unfactored form of the trapezoidal method given by Eq. 12.24, and let the (bc) be time invariant:

    [I - (h/2)A_x - (h/2)A_y] U_{n+1} = [I + (h/2)A_x + (h/2)A_y] U_n + h(bc) + O(h^3)

From both sides subtract

    [I - (h/2)A_x - (h/2)A_y] U_n

leaving the equality unchanged. Then, using the standard definition of the difference operator Delta,

    Delta U_n = U_{n+1} - U_n

one finds

    [I - (h/2)A_x - (h/2)A_y] Delta U_n = h[A_{x+y} U_n + (bc)] + O(h^3)        (12.29)

Notice that the right side of this equation is the product of h and a term that is identical to the right side of Eq. 12.14, our original ODE. Thus, if Eq. 12.29 converges, it is guaranteed to converge to the correct steady-state solution of the ODE. Now we can factor Eq. 12.29 and maintain O(h^2) accuracy. We arrive at the expression

    [I - (h/2)A_x][I - (h/2)A_y] Delta U_n = h[A_{x+y} U_n + (bc)] + O(h^3)        (12.30)

This is the delta form of a factored, 2nd-order, 2-D equation.
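The guarantee just stated, that any converged delta-form iteration satisfies A_{x+y}U + (bc) = 0 exactly, can be checked numerically. A minimal sketch with NumPy, using illustrative diffusion operators and a stand-in forcing vector f in place of (bc):

```python
import numpy as np

def second_diff(m):
    return (np.diag(-2.0 * np.ones(m)) + np.diag(np.ones(m - 1), 1)
            + np.diag(np.ones(m - 1), -1))

Mx, My = 4, 3
Ax = np.kron(np.eye(My), second_diff(Mx))
Ay = np.kron(second_diff(My), np.eye(Mx))
I = np.eye(Mx * My)
f = np.ones(Mx * My)                 # stand-in for the (bc) forcing

h = 2.0                              # deliberately large time step
L = (I - h * Ax) @ (I - h * Ay)      # factored delta-form left-hand side
u = np.zeros(Mx * My)
for _ in range(200):
    du = np.linalg.solve(L, h * ((Ax + Ay) @ u + f))
    u = u + du

# Converged solution satisfies the steady ODE exactly, despite the factoring:
assert np.allclose((Ax + Ay) @ u + f, 0.0, atol=1e-10)
```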

The point at which the factoring is made may not affect the order of time-accuracy, but it can have a profound effect on the stability and convergence properties of a method. For example, the unfactored form of a first-order method derived from the implicit Euler time march is given by Eq. 12.19, and if it is immediately factored, the factored form is presented in Eq. 12.20. On the other hand, the delta form of the unfactored Eq. 12.19 is

    [I - h A_x - h A_y] Delta U_n = h[A_{x+y} U_n + (bc)]

and its factored form becomes

    [I - h A_x][I - h A_y] Delta U_n = h[A_{x+y} U_n + (bc)]        (12.31)

In spite of the similarities in derivation, we will see in the next chapter that the convergence properties of Eq. 12.20 and Eq. 12.31 are vastly different. (Notice that the only difference between the O(h^2) method given by Eq. 12.30 and the O(h) method given by Eq. 12.31 is the appearance of the factor 1/2 on the left side of the O(h^2) method.)

12.7 Problems

1. Consider the 1-D heat equation:

    du/dt = d^2u/dx^2,   0 <= x <= 9

Let u(0, t) = 0 and u(9, t) = 0, so that we can simplify the boundary conditions. Assume that second-order central differencing is used, i.e.,

    (delta_xx u)_j = (1/Dx^2)(u_{j-1} - 2u_j + u_{j+1})

The uniform grid has Dx = 1 and 8 interior points.

(a) Space vector definition

i. What is the space vector for the natural ordering (monotonically increasing in index), u^(1)? Only include the interior points.

ii. If we reorder the points with the odd points first and then the even points, write the space vector, u^(2).

iii. Write down the permutation matrices (P12, P21).

(b) System definition

In problem 1a, we defined u^(1), u^(2), P12, and P21, which partition the odd points from the even points. We can put such a partitioning to use.

iv. The generic ODE representing the discrete form of the heat equation is

    du^(1)/dt = A1 u^(1) + f

Write down the matrix A1. (Note: f = 0, due to the boundary conditions.)

v. Next find the matrix A2 such that

    du^(2)/dt = A2 u^(2)

Note that A2 can be written as

    A2 = [ D    U^T ]
         [ U    D   ]

Define D and U.

vi. Applying implicit Euler time marching, write the delta form of the implicit algorithm. Comment on the form of the resulting implicit matrix operator.

(c) Algorithm definition

First define the extraction operators

    I^(o) = [ I4   04 ]        I^(e) = [ 04   04 ]
            [ 04   04 ]                [ 04   I4 ]

where I4 and 04 are the 4 x 4 identity and zero matrices,

which extract the odd and even points from u^(2) as follows: u^(o) = I^(o) u^(2) and u^(e) = I^(e) u^(2).

i. Beginning with the ODE written in terms of u^(2), define a splitting A2 = Ao + Ae, such that Ao operates only on the odd terms and Ae operates only on the even terms. Write out the matrices Ao and Ae. Also, write them in terms of D and U defined above. Comment on their form.

ii. Apply implicit Euler time marching to the split ODE. Write down the delta form of the algorithm and the factored delta form. Comment on the order of the error terms.

iii. Examine the implicit operators for the factored delta form. You should be able to argue that these are now triangular matrices (a lower and an upper). Comment on the solution process this gives us relative to the direct inversion of the original system.


Chapter 13

LINEAR ANALYSIS OF SPLIT AND FACTORED FORMS

In Section 4.4 we introduced the concept of the representative equation, and used it in Chapter 7 to study the stability, accuracy, and convergence properties of time-marching schemes. The question is: Can we find a similar equation that will allow us to evaluate the stability and convergence properties of split and factored schemes? The answer is yes, for certain forms of linear model equations.

The analysis in this chapter is useful for estimating the stability and steady-state properties of a wide variety of time-marching schemes that are variously referred to as time-split, fractional-step, hybrid, and (approximately) factored. When these methods are applied to practical problems, the results found from this analysis are neither necessary nor sufficient to guarantee stability. However, if the results indicate that a method has an instability, the method is probably not suitable for practical use.

13.1 The Representative Equation for Circulant Operators

Consider linear PDE's with coefficients that are fixed in both space and time and with boundary conditions that are periodic. We have seen that under these conditions a semi-discrete approach can lead to circulant matrix difference operators, and we discussed circulant eigensystems in Section 4.3. In this and the following section we assume circulant systems, and our analysis depends critically on the fact that all circulant matrices commute and have a common set of eigenvectors. (See also the discussion on Fourier stability analysis in Chapter 7.)
3) that 13. .2 Example Analysis of Circulant Systems 13.. circulant systems is du = + + + ]u + ae t a b c dt where a + b + c + are the sum of the eigenvalues in Aa Ab Ac share the same eigenvector.4) This is to be contrasted to the developments found later in the analysis of 2-D equations. we can use the arguments made in Section 4.2. 0 wM = ( a + b)M wM .2.1 Stability Comparisons of Time-Split Methods Consider as an example the linear convection-di usion equation: @u + a @u = @ 2 u @t @x @x2 2 (13. gM (t) (13. as a result of space di erencing the PDE. Since both matrices have the same set of eigenvectors.1) apu b pu f dt where the subscript p denotes a circulant matrix.2) The analytic solution of the m'th line is wm(t) = cm e( a+ b)mt + P:S: Note that each a pairs with one. we arrive at a set of ODE's that can be written d~ = A ~ + A ~ . ~ (t) u (13.3 to uncouple the set and form the M set of independent equations 0 w1 = ( a + b)1 w1 .. This suggests (see Section 4. (13. 0 wm = ( a + b)m wm . eigenvector.4: b since they must share a common The representative equation for split. gm(t) .. and only one2 . LINEAR ANALYSIS OF SPLIT AND FACTORED FORMS Suppose.. g1 (t) .234 CHAPTER 13.

Eq. the explicit-explicit Euler method. (1 + h c) This leads to the principal root 1 + i ah sin m x = 1 + 4 h 2 sin2 m 2 x where we have made use of Eq.4. 1 . a B (.5) dt 2 x p x2 p the convection matrix operator and the di usion matrix operator.13.6 to quantify the eigenvalues.1 0 1)~ + u u B (1 . the characteristic polynomial of this method is P (E ) = (1 .2): ( c)m = ia sin m x m ( d)m = . 12. 13.6) x In these equations m = 2m =M . When applied to Eq. 2. can be represented by the eigenvalues c and d.2. Now introduce the dimensionless numbers Cn = ah Courant number x R = a x mesh Reynolds number and we can write for the absolute value of q C2 2 j j = 1 +C n sin m m 1 + 4 R n sin2 2 0 m 1. The Explicit-Implicit Method 2 (13. EXAMPLE ANALYSIS OF CIRCULANT SYSTEMS 235 If the space di erencing takes the form d~ = . Using these values and the representative equation 13. so that 0 m 2 .8. the explicit-implicit Euler method.4.10. we can analyze the stability of the two forms of simple time-splitting discussed in Section 12.2 1)~ u (13. In this section we refer to these as 1. respectively. where (see Section 4. 4 2 sin2 2 (13. m = 0 1 M . 12.2.7) . 13. h d )E . Eq.3.

13. 13. LINEAR ANALYSIS OF SPLIT AND FACTORED FORMS 1. which yields the same result as in the previous example. A simple numerical parametric study of Eq. one near 0.2 C = R /2 n ∆ C = 2/ R n ∆ 0. 4 Cn sin2 m 0 m 2 j j = 1 + Cn R 2 Again a simple numerical parametric study shows that this has two critical ranges of m .8 n 1. and the . From this we nd that the condition on Cn and R that make j j 1 is 2 h i C 2 1 + Cn sin2 = 1 + 4 R n sin2 2 As ! 0 this gives the stability region 2 Cn < R which is bounded by a hyperbola and shown in Fig.7 shows that the critical range of m for any combination of Cn and R occurs when m is near 0 (or 2 ). The Explicit-Explicit Method An analysis similar to the one given above shows that this method produces " # q 2 sin2 m 1 .1.2 C = 2/ R ∆ n 0.4 0.1: Stability regions for two simple time-split methods.236 CHAPTER 13.8 C n C 0.4 0 0 4 R∆ Explicit-Implicit 8 0 0 4 R∆ Explicit-Explicit 8 _ Figure 13. 2.

The di usion term is integrated implicitly using the trapezoidal method.9) Q(E ) = 1 h(E 2 + E ) 2 13. EXAMPLE ANALYSIS OF CIRCULANT SYSTEMS 237 other near 180o. Next let us analyze a more practical method that has been used in serious computational analysis of turbulent ows. 13. factored method has a much smaller region of stability when R is small. un. which produces the constraint that 1 Cn < 2 R for R 2 The resulting stability boundary is also shown in Fig. This method applies to a ow in which there is a combination of di usion and periodic convection. 1 h d)E 2 . Our model equation is again the linear convection-di usion equation 13.1) + 2 h d (un+1 + un) 2 This produces 3 P (E ) = (1 . as well as the stability.5. The totaly explicit.2. 13.1. thus 1 un+1 = un + 1 h c(3un . time-marching method on the equation u0 = cu + du + ae t First let us nd expressions for the two polynomials.8) and in the latter (13. In the former case it is 1 Q(E ) = 2 h(3E .4 which we split in the fashion of Eq.13. P (E ) and Q(E ). The characteristic polynomial follows from the application of the method to the homogeneous equation. 1) (13. as we should have expected. In order to evaluate the accuracy. (1 + 2 h c + 1 h d)E + 1 h c 2 2 2 The form of the particular polynomial depends upon whether the forcing function is carried by the AB2 method or by the trapezoidal method. The convection term is treated explicitly using the second-order Adams-Bashforth method.2. we include the forcing function in the representative equation and study the e ect of our hybrid.2 Analysis of a Second-Order Time-Split Method .

2.6. LINEAR ANALYSIS OF SPLIT AND FACTORED FORMS Accuracy From the characteristic polynomial we see that there are two {roots and they are given by the equation 3 1 1 + 2h c + 2h = s d The principal -root follows from the plus sign and one can show 1 2 2 1 3+ 2 2 3 3 c d. The results are plotted in Fig. These results show that.7. c h 1 = 1 + ( c + d )h + ( c + d ) h + d 2 4 1 From this equation it is clear that 6 3 = 1 ( c + d)3 does not match the coe cient 6 of h3 in 1 .8 or Eq.11) The extension of the following to 3-D is simple and straightforward. For values of R 2 this second-order method has a much greater region of stability than the rst-order explicit-implicit method given by Eq. 13.3. 13. 1h d 2 d 2 .10 by a parametric study of cn and R de ned in Eq. 13.3 The Representative Equation for Space-Split Operators Consider the 2-D model3 equations @u = @ 2 u + @ 2 u @t @x2 @y2 3 (13.1.238 CHAPTER 13. 13. 13. one can show er = O(h3) using either Eq.9. so er = O(h3) Using P (e h) and Q(e h ) to evaluate er in Section 6.10 and shown in Fig. 12. . 1 h 2 d (13. 13. for the model equation.10) Stability The stability of the method can be found from Eq.1. This was carried out in a manner similar to that used to nd the stability boundary of the rst-order explicit-implicit method in Section 13. the hybrid method retains the second-order accuracy of its individual components.2. 13. c d. 3 1 1 + 2h c + 2h 2 1 . 2h c 1 .

Let us inspect the structure of these matrices closely to see how we can diagonalize Ax + Ay ] in terms of the individual eigenvalues of the two matrices considered separately.22 and 12. THE REPRESENTATIVE EQUATION FOR SPACE-SPLIT OPERATORS239 1. 12. to the coupled set of ODE's: dU = A + A ]U + (bc) ~ (13.12) @t x @x y @y Reduce either of these.13) x y dt for the space vector U . The form of the Ax and Ay matrices for three-point central di erencing schemes are shown in Eqs. Now nd the block eigenvector .2 0.3.8 Cn 0.12.1 I ~0 I B b b where B is a banded matrix of the form B (b.2: Stability regions for the second-order time-split method.1 I ~0 I ~1 I 7 b b 5 4 5 4b ~. and @u + a @u + a @u = 0 (13.23 for the 3 4 mesh shown in Sketch 12. First we write these matrices in the form 2 3 2~ 3 B b0 I ~1 I b A(xx) = 6 B 7 A(yx) = 6 ~.4 Stable 0 4 8 ∆ R _ Figure 13.1 b0 b1 ).13. by means of spatial di erencing approximations.

see the bottom of Eq.1 I ~0 I b b b b One now permutes the transformed system to the y-vector data-base using the permutation matrix de ned by Eq.14 is transparent to the transformation.1 ~0 ~1). by the same argument as before. 12.1 6 ~ I X 4 b.1 I ~0 I ~1 I 7 b b 5 b b 5 4b ~. 12. so the nal result is the complete diagonalization of the matrix Ax+y : h ~ . There results h i Pyx X .1 A(xx) + A(yx) X Pxy = 3 2 I 3 2B ~ 1 7 6 7 6 B ~ 7 6 7+6 2 I 7 6 7 6 (13.1 4 54 54 5 4 X .1B X = 6 4 ~2 7 5 ~ 7 4 5 ~3 ~ This time.15) 6 7 ~ I+ 4 5 3 4I + ~ Notice that the matrix set X diag(X ) 2~ b0 I .1 I ~0 I ~.13. That is.1 B X where 2 3 3 7 5 6 =6 6 4 1 2 3 3 2~ 3 ~1 I b b0 I ~1 I b ~0 I ~1 I 7 X = 6 ~. if we 4 7 7 7 5 .1 A(yx) is transparent to this transformation. Thus 2 .240 CHAPTER 13.1 32 32 3 2 X B X 6 76 B 76 X 7 = 6 X . Let B diag(B ) and ~ ~ X diag(X ) and form the second transformation 2~ 3 2~ 3 1 6 ~ 7 7 ~ ~~ 6 ~=6 X .1 ih ih i ~ X Pyx X . b b b ~ ~ ~ ~ Next nd the eigenvectors X that diagonalize the B blocks. 13.23.1 A(xx+)y X Pxy X = 2 3 1I + ~ 6 7 6 7 2I + ~ 6 7 (13. LINEAR ANALYSIS OF SPLIT AND FACTORED FORMS matrix that diagonalizes B and use it to diagonalize A(xx). the rst matrix on the right side of Eq.14) 6 ~ 4 5 4 B 7 3 I 5 ~ B 4 I ~ where B is the banded tridiagonal matrix B (~.

18) x y We have used 3-point central di erencing in our example. The block matrices B and B can be either circulant or noncirculant in both cases we are led to the nal result: The 2{D representative equation for model linear systems is du = + ]u + ae t x y dt where x and y are any combination of eigenvalues from Ax and Ay . First reduce the PDE to ODE by some choice4 of space di erencing.16) ~ where B and B are any two matrices that have linearly independent eigenvectors (this puts some constraints on the choice of di erencing schemes). Now we are ready to present the representative equation for two dimensional systems.15. This results in a spatially split A matrix formed from the subsets A(xx) = diag(B ) ~ A(yy) = diag(B ) (13.3. 4 . the steady-state solution of the representative equation. does not ensure the property of \all possible combinations". Although Ax and Ay do commute. Often we are interested in nding the value of. and where Ax and Ay satisfy the conditions in 13. this fact. and the convergence rate to. + (13.13. THE REPRESENTATIVE EQUATION FOR SPACE-SPLIT OPERATORS241 It is important to notice that: The diagonal matrix on the right side of Eq. In that case we set = 0 and use the simpler form du = + ]u + a (13. To obtain the latter property the structure of ~ the matrices is important. 13.17) x y dt which has the exact solution a u(t) = ce( x+ y )t .16. a and are (possibly complex) constants. and its use is not necessary to arrive at Eq. but this choice was for convenience only. 13. by itself.15 contains every possible com~ bination of the individual eigenvalues of B and B .

1 Q(E ) = h giving the solution (13. h y )E . LINEAR ANALYSIS OF SPLIT AND FACTORED FORMS 13. Converges very rapidly to the steady-state when h is large. we nd (1 . 2.4 Example Analysis of 2-D Model Equations 1. x y Like its counterpart in the 1-D case. The form of the method is given by Eq.19) un = c 1 . . In the following we analyze four di erent methods for nding a xed. Is unconditionally stable. In each case we examine 13. h y )un+1 = un + ha P (E ) = (1 .5. however. 3. and when it is applied to the representative equation. h x . use of this method for 2-D problems is generally impractical for reasons discussed in Section 12. 12. Unfortunately.4. " 1 #n a x+ y 2. steady-state solution to the 2-D representative equation 13. 13. rst-order scheme which can then be used as a reference case for comparison with the various factored ones.16.19. Produces the exact (see Eq. h from which x .242 CHAPTER 13. steady-state solution. The convergence rate to reach the steady-state. h . The stability. The accuracy of the xed. 3.18) steady-state solution (of the ODE) for any h. this method: 1. h .1 The Unfactored Implicit Euler Method Consider rst this unfactored.

Is unconditionally stable. h ) .4. Next apply Eq. h x)(1 . x y We see that this method: 1. h x)(1 . h y )un+1 = un + ha from which P (E ) = (1 . Produces a steady state solution that depends on the choice of h.31 to the 2-D representative equation. 12.3 The Factored Delta Form of the Implicit Euler Method .2 The Factored Nondelta Form of the Implicit Euler Method 13. h x)(1 . if one tries to take advantage of its rapid convergence rate. EXAMPLE ANALYSIS OF 2-D MODEL EQUATIONS 243 Now apply the factored Euler method given by Eq. un) = h( xun + y un + a) which reduces to (1 . h )(1 . h ) . Converges rapidly to a steady-state for large h. 1 Q(E ) = h (13.20 to the 2-D representative equation. + x y x y This method: 13. 12. 2. One nds (1 . h x)(1 . However. There results (1 . but the converged solution is completely wrong. h )(1 . it is not very useful since its transient solution is only rst-order accurate and. h y )un+1 = 1 + h2 x y un + ha and this has the solution #n " 1 + h2 x y a un = c (1 . the converged value is meaningless. h y )E . + a h x y x y.20) giving the solution " #n 1 un = c (1 . The method requires far less storage then the unfactored form. h y )(un+1 .4. 3.4.13.

since j j ! 1 as h ! 1.244 CHAPTER 13. 1 h x 1 . 2. since j j ! 1 as h ! 1. Finally consider the delta form of a second-order time-accurate method. this method demands far less storage than the unfactored form. Apply Eq. 12. Converges very slowly to the steady{state solution for large values of h. LINEAR ANALYSIS OF SPLIT AND FACTORED FORMS 1. 1h x 1 . 1 h y un+1 = 1 + 1 h x 1 + 1 h y un + ha 2 2 2 2 and this has the solution 2 1h 1 h 3n 6 1+ 2 x 1+ 2 y 7 7 . Produces the exact steady-state solution for any choice of h.5 A brief inspection of eqs. and the factored delta form of the implicit Euler method can be used when a converged steady-state is all that is required.4 The Factored Delta Form of the Trapezoidal Method . 3. 2 h y (un+1 . 1h y 2 2 This method: 1. 3. the value of h on the left side of the implicit equation is literally switched 1 from h to 2 h. 2.27 should be enough to convince the reader that the 's produced by those methods are identical to the produced by this method. but convergence is not nearly as rapid as that of the unfactored form. Converges very slowly to the steady{state solution for large values of h. Produces the exact steady{state solution for any choice of h. a un = c6 4 5 x+ y 1 .26 and 12. Is unconditionally stable. as discussed in Section 12.30 to the representative equation and one nds 1 1 1 . un) = h( xun + y un + a) which reduces to 1 . it can be used when time accuracy is desired. Is unconditionally stable. The correct steady solution is obtained.4. Since it is second order in time. 5 13. All of these properties are identical to those found for the factored delta form of the implicit Euler method. Like the factored nondelta form.5. In practical codes. 2 h x 1 . 12.

5 Example Analysis of the 3-D Model Equation 245 The arguments in Section 13. and write the solution either in the form 2 1 3n 1 + h( x + y + z ) + 1 h2 ( x y + x z + y z ) . 8 x y z .3 generalize to three dimensions and. 2 h( x + y + z ) + 4 x y + x z + y z ) .22. nd the root. EXAMPLE ANALYSIS OF THE 3-D MODEL EQUATION 13. each with an additional term. factored.12.16 with an A(zz) included. 2 h( x + y + z ) un+1 = 1 + 1 h( x + y + z ) un + ha 2 Put this in delta form: 1 . 13. . 1 h z un = h ( x + y + z )un + a] (13. First apply the trapezoidal method: un+1 = un + 1 h ( x + y + z )un+1 + ( x + y + z )un + 2a] 2 Rearrange terms: 1 1 . 2 h x 1 .11 and 13. the model 3-D cases6 have the following representative equation (with = 0): du = + + ]u + a (13.5. +a+ x y 6 z (13. 1 h3 x y z 7 6 4 8 7 un = c6 2 4 1 5 1 h2( 1 h3 1 .13.23) Eqs.21) x y z dt Let us analyze a 2nd-order accurate. 13. under the conditions given in 13. One can derive the characteristic polynomial for Eq. delta form using this equation. 2 h y 1 . 1 h( x + y + z ) un = h ( x + y + z )un + a] 2 Now factor the left side: 1 1 1 .22) 2 This preserves second order accuracy since the error terms 1 h3 1 h2( x y + x z + y z ) un and 4 8 x y z are both O(h3).

5 1 1 x+ y+ z 1 . Furthermore. Now consider what happens when we apply this method to the biconvection model. First write the root in Eq. however. 13. .23 in the form 1+ = 1. some form of dissipation is almost always added to methods that are used to solve the Euler equations and our experience to date is that. Remember that in our analysis we must consider every possible combination of these eigenvalues. )2 > 1 ) which veri es the second order accuracy of the factored form. in the presence of this dissipation. it follows from Eq. 13. In this case. 1h x 1 . clearly. 2h z 2 It is interesting to notice that a Taylor series expansion of Eq.23 that. +i i . )2 + ( .i . +i and the method is unconditionally unstable for the model convection problem. if the method converges. 13.7 With regards to stability. central di erencing causes all of the 's to be imaginary with spectrums that include both positive and negative values. the 3-D form of Eq.24 results in (13.12 with periodic boundary conditions. This makes the method unconditionally stable for the 3-D di usion model when it is centrally di erenced in space. 4 h3 x y z 7 a (13. and are real numbers that can have any sign. 2h y 1 . we already knew this because we chose the delta form. the instability disclosed above is too weak to cause trouble. From the above analysis one would come to the conclusion that the method represented by Eq. Now we can always nd one combination of the 's for which .246 CHAPTER 13. 7 However. In practical cases. the method is stable for all h.25) = 1 + h( x + y + z ) + 1 h2( x + y + z )2 2 h i + 1 h3 3 + (2 y + 2 x) + 2 2 + 3 x y + 2 2 + 3 + 2 x 2 + 2 2 y + 3 + y y y y x x 4 z 2 1 6 1 + 2h un = c6 4 x or in the form where . if all the 's are real and negative.24) 2 7 .22 should not be used for the 3-D Euler equations. and are both positive. 13. 13. it converges to the proper steady-state. In that case since the absolute value of the product is the product of the absolute values (1 . 
)2 + ( + 2 j j2 = (1 . LINEAR ANALYSIS OF SPLIT AND FACTORED FORMS 3n 1 1 1 + 2 h y 1 + 1 h z .
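The two stability conclusions above are easy to check numerically. The sketch below (using numpy; not part of the original text) evaluates the root of Eq. 13.23 directly for a diffusion-like and a biconvection-like set of eigenvalues; the specific λ values are illustrative samples.

```python
import numpy as np

def sigma(h, lx, ly, lz):
    """Root of the factored trapezoidal scheme, Eq. 13.23."""
    num = (1 + 0.5 * h * (lx + ly + lz)
             + 0.25 * h**2 * (lx * ly + lx * lz + ly * lz)
             - 0.125 * h**3 * lx * ly * lz)
    den = (1 - 0.5 * h * lx) * (1 - 0.5 * h * ly) * (1 - 0.5 * h * lz)
    return num / den

# Diffusion-like model: all lambdas real and negative -> |sigma| < 1 for any h
for h in (0.1, 1.0, 10.0, 100.0):
    assert abs(sigma(h, -1.0, -2.5, -7.0)) < 1.0

# Biconvection-like model: pure imaginary lambdas; choosing all three with the
# same sign makes alpha and gamma agree in sign, so |sigma| > 1 (weak instability)
assert abs(sigma(1.0, 1j, 1j, 1j)) > 1.0
```

For λ_x = λ_y = λ_z = i and h = 1 the magnitude works out to about 1.18, which illustrates that the instability, while unconditional, is weak.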

13.6 Problems

1. Starting with the generic ODE,

    du/dt = Au + f

we can split A as follows: A = A_1 + A_2 + A_3 + A_4. Applying implicit Euler time marching gives

    (u_{n+1} - u_n)/h = A_1 u_{n+1} + A_2 u_{n+1} + A_3 u_{n+1} + A_4 u_{n+1} + f

(a) Write the factored delta form. What is the error term?

(b) Instead of making all of the split terms implicit, leave two explicit:

    (u_{n+1} - u_n)/h = A_1 u_{n+1} + A_2 u_n + A_3 u_{n+1} + A_4 u_n + f

Write the resulting factored delta form and define the error terms.

(c) The scalar representative equation is

    du/dt = (λ_1 + λ_2 + λ_3 + λ_4)u + a

For the fully implicit scheme of problem 1a, find the exact solution to the resulting scalar difference equation and comment on the stability, convergence, and accuracy of the converged steady-state solution.

(d) Repeat 1c for the explicit-implicit scheme of problem 1b.


Appendix A

USEFUL RELATIONS AND DEFINITIONS FROM LINEAR ALGEBRA

A basic understanding of the fundamentals of linear algebra is crucial to our development of numerical methods, and it is assumed that the reader is at least familiar with this subject area. Given below is some notation and some of the important relations between matrices and vectors.

A.1 Notation

1. In the present context a vector is a vertical column or string. Thus

    v = [v_1, v_2, v_3, ..., v_m]^T

and its transpose v^T is the horizontal row

    v^T = [v_1  v_2  v_3  ...  v_m]

2. A general m x m matrix A can be written

    A = (a_ij) = | a_11  a_12  ...  a_1m |
                 | a_21  a_22  ...  a_2m |
                 |  .     .          .   |
                 | a_m1  a_m2  ...  a_mm |

3. An alternative notation for A is

    A = [a_1  a_2  ...  a_m]

where the a_j are its columns, and its transpose A^T is the stack of the transposed columns,

    A^T = | a_1^T |
          | a_2^T |
          |   .   |
          | a_m^T |

4. The inverse of a matrix (if it exists) is written A^(-1) and has the property that A^(-1)A = AA^(-1) = I, where I is the identity matrix.

A.2 Definitions

1. A is symmetric if A^T = A.

2. A is skew-symmetric or antisymmetric if A^T = -A.

3. A is diagonally dominant if a_ii >= Σ_{j≠i} |a_ij|, i = 1, 2, ..., m, and a_ii > Σ_{j≠i} |a_ij| for at least one i.

4. A is orthogonal if the a_ij are real and A^T A = AA^T = I.

5. Ā is the complex conjugate of A.

6. P is a permutation matrix if Pv is a simple reordering of v.

7. The trace of a matrix is Σ_i a_ii.

8. A is normal if A^T A = AA^T.

9. det[A] is the determinant of A.

10. A^H is the conjugate transpose of A (Hermitian).

11. If

    A = | a  b |
        | c  d |

then det[A] = ad - bc and

    A^(-1) = (1/det[A]) |  d  -b |
                        | -c   a |


A.3 Algebra

We consider only square matrices of the same dimension.

1. A and B are equal if a_ij = b_ij for all i, j = 1, 2, ..., m.

2. A + (B + C) = (C + A) + B, etc.

3. sA = (s a_ij) where s is a scalar.

4. In general AB ≠ BA.

5. Transpose equalities:

    (A + B)^T = A^T + B^T
    (A^T)^T = A
    (AB)^T = B^T A^T

6. Inverse equalities (if the inverse exists):

    (A^(-1))^(-1) = A
    (AB)^(-1) = B^(-1) A^(-1)
    (A^T)^(-1) = (A^(-1))^T
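Rules 4-6 can be spot-checked numerically; a minimal sketch (numpy, with arbitrary random matrices; not part of the original text):

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((4, 4))
B = rng.standard_normal((4, 4))

assert not np.allclose(A @ B, B @ A)                     # in general AB != BA
assert np.allclose((A @ B).T, B.T @ A.T)                 # transpose reversal rule
assert np.allclose(np.linalg.inv(A @ B),
                   np.linalg.inv(B) @ np.linalg.inv(A))  # inverse reversal rule
```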


7. Any matrix A can be expressed as the sum of a symmetric and a skew-symmetric matrix. Thus:

    A = (1/2)(A + A^T) + (1/2)(A - A^T)
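A short numerical illustration of this decomposition (numpy sketch; the matrix is an arbitrary example, not from the original text):

```python
import numpy as np

A = np.random.default_rng(1).standard_normal((5, 5))

S = 0.5 * (A + A.T)   # symmetric part
K = 0.5 * (A - A.T)   # skew-symmetric part

assert np.allclose(S, S.T)     # S is symmetric
assert np.allclose(K, -K.T)    # K is antisymmetric
assert np.allclose(A, S + K)   # the two parts sum back to A
```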

A.4 Eigensystems

1. The eigenvalue problem for a matrix A is defined as

    Ax = λx   or   [A - λI]x = 0

and the generalized eigenvalue problem, including the matrix B, as

    Ax = λBx   or   [A - λB]x = 0

2. If a square matrix with real elements is symmetric, its eigenvalues are all real. If it is antisymmetric, they are all imaginary.

3. Gershgorin's theorem: The eigenvalues of a matrix lie in the complex plane in the union of circles having centers located by the diagonals with radii equal to the sum of the absolute values of the corresponding off-diagonal row elements.

4. In general, an m x m matrix A has n_x linearly independent eigenvectors with n_x <= m and n distinct eigenvalues λ_i with n <= n_x <= m.

5. A set of eigenvectors is said to be linearly independent if

    a x_m + b x_n ≠ x_k,   m ≠ n ≠ k

for any complex a and b and for all combinations of vectors in the set.

6. If A possesses m linearly independent eigenvectors then A is diagonalizable, i.e.,

    X^(-1) A X = Λ

where X is a matrix whose columns are the eigenvectors,

    X = [x_1  x_2  ...  x_m]

and Λ is the diagonal matrix

    Λ = | λ_1   0   ...   0  |
        |  0   λ_2        .  |
        |  .         .    0  |
        |  0   ...   0   λ_m |

If A can be diagonalized, its eigenvectors completely span the space, and A is said to have a complete eigensystem.

7. If A has m distinct eigenvalues, then A is always diagonalizable, and with each distinct eigenvalue there is one associated eigenvector, and this eigenvector cannot be formed from a linear combination of any of the other eigenvectors.

8. In general, the eigenvalues of a matrix may not be distinct, in which case the possibility exists that it cannot be diagonalized. If the eigenvalues of a matrix are not distinct, but all of the eigenvectors are linearly independent, the matrix is said to be derogatory, but it can still be diagonalized.

9. If a matrix does not have a complete set of linearly independent eigenvectors, it cannot be diagonalized. The eigenvectors of such a matrix cannot span the space and the matrix is said to have a defective eigensystem.
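Property 6 can be illustrated directly. The sketch below (numpy; the test matrix is an arbitrary example with distinct eigenvalues, not from the original text) forms X from the right eigenvectors and verifies that X^(-1) A X is diagonal:

```python
import numpy as np

# Arbitrary example with distinct eigenvalues (2, 3, 5), hence diagonalizable
A = np.array([[2.0, 1.0, 0.0],
              [0.0, 3.0, 1.0],
              [0.0, 0.0, 5.0]])

lam, X = np.linalg.eig(A)            # columns of X are right eigenvectors
Lambda = np.linalg.inv(X) @ A @ X    # similarity transform X^(-1) A X

assert np.allclose(Lambda, np.diag(lam))
```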

10. Defective matrices cannot be diagonalized, but they can still be put into a compact form by a similarity transform, S, such that

    J = S^(-1) A S = | J_1   0   ...   0  |
                     |  0   J_2        .  |
                     |  .         .    0  |
                     |  0   ...   0   J_k |

where there are k linearly independent eigenvectors and J_i is either a Jordan subblock or λ_i. Use of the transform S is known as putting A into its Jordan canonical form.

11. A Jordan submatrix has the form

    J_i = | λ_i   1    0   ...   0  |
          |  0   λ_i   1         .  |
          |  .         .    .    0  |
          |  .             λ_i   1  |
          |  0   ...  ...   0   λ_i |

12. Some of the Jordan subblocks may have the same eigenvalue. A repeated root in a Jordan block is referred to as a defective eigenvalue. For each Jordan submatrix with an eigenvalue λ_i of multiplicity r, there exists one eigenvector. The other r - 1 vectors associated with this eigenvalue are referred to as principal vectors. The complete set of principal vectors and eigenvectors are all linearly independent.

13. Note that if P is the permutation matrix

    P = | 0 0 1 |
        | 0 1 0 |
        | 1 0 0 |

with P^T = P^(-1) = P, then

    P^(-1) | λ  1 | P = | λ  0 |
           | 0  λ |     | 1  λ |

14. For example, the 9 x 9 block-diagonal matrix

    diag( | λ_1  1   0  |    | λ_1  1  |            | λ_2  1  |          )
          |  0  λ_1  1  | ,  |  0  λ_1 | ,  [λ_1] ,  |  0  λ_2 | ,  [λ_3] )
          |  0   0  λ_1 |

is both defective and derogatory, having:

    9 eigenvalues
    3 distinct eigenvalues
    3 Jordan blocks
    5 linearly independent eigenvectors
    3 principal vectors with λ_1
    1 principal vector with λ_2

A.5 Vector and Matrix Norms

1. The spectral radius of a matrix A is symbolized by ρ(A) such that

    ρ(A) = |λ_m|_max

where λ_m are the eigenvalues of the matrix A.

2. A p-norm of the vector v is defined as

    ||v||_p = ( Σ_{j=1}^{M} |v_j|^p )^(1/p)

3. A p-norm of a matrix A is defined as

    ||A||_p = max_{x≠0} ||Ax||_p / ||x||_p
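The defect described in items 10-12 can be seen numerically: for a Jordan subblock, the geometric multiplicity of the repeated eigenvalue (the number of independent eigenvectors) is smaller than its algebraic multiplicity. A sketch, assuming numpy (not part of the original text):

```python
import numpy as np

# A 2x2 Jordan subblock: the eigenvalue 2 has algebraic multiplicity two,
# but only one linearly independent eigenvector exists
J = np.array([[2.0, 1.0],
              [0.0, 2.0]])

# dimension of the eigenspace = n - rank(J - 2I)
geometric_multiplicity = J.shape[0] - np.linalg.matrix_rank(J - 2.0 * np.eye(2))
assert geometric_multiplicity == 1   # defective: J cannot be diagonalized
```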

4. Let A and B be square matrices of the same order. All matrix norms must have the properties

    ||A|| >= 0,  and  ||A|| = 0  implies  A = 0
    ||cA|| = |c| ||A||
    ||A + B|| <= ||A|| + ||B||
    ||A B|| <= ||A|| ||B||

5. Special p-norms are

    ||A||_1 = max_{j=1,...,M} Σ_{i=1}^{M} |a_ij|    (maximum column sum)
    ||A||_∞ = max_{i=1,...,M} Σ_{j=1}^{M} |a_ij|    (maximum row sum)
    ||A||_2 = sqrt( ρ(A^T A) )

where ||A||_p is referred to as the L_p norm of A.

6. In general ρ(A) does not satisfy the conditions in 4, so in general ρ(A) is not a true norm.

7. When A is normal, ρ(A) is a true norm; in fact, in this case it is the L_2 norm.

8. The spectral radius of A, ρ(A), is the lower bound of all the norms of A.
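The special p-norms and the spectral-radius bound in item 8 can be checked on a small example (numpy sketch; the matrix entries are arbitrary, not from the original text):

```python
import numpy as np

A = np.array([[1.0, -2.0],
              [3.0,  4.0]])

norm1   = np.linalg.norm(A, 1)        # maximum column sum
norminf = np.linalg.norm(A, np.inf)   # maximum row sum
norm2   = np.linalg.norm(A, 2)        # sqrt of rho(A^T A)
rho     = max(abs(np.linalg.eigvals(A)))

assert np.isclose(norm1, 6.0)         # |-2| + |4| = 6
assert np.isclose(norminf, 7.0)       # |3| + |4| = 7
# the spectral radius is a lower bound for every norm (item 8)
assert rho <= norm1 and rho <= norminf and rho <= norm2 + 1e-12
```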


Appendix B

SOME PROPERTIES OF TRIDIAGONAL MATRICES

B.1 Standard Eigensystem for Simple Tridiagonals

In this work tridiagonal banded matrices are prevalent. It is useful to list some of their properties. Many of these can be derived by solving the simple linear difference equations that arise in deriving recursion relations.

Let us consider a simple tridiagonal matrix, i.e., a tridiagonal with constant scalar elements a, b, and c; see Section 3.4. If we examine the conditions under which the determinant of this matrix is zero, we find (by a recursion exercise)

    det[B(M : a, b, c)] = 0   if   b + 2 sqrt(ac) cos( mπ/(M+1) ) = 0,   m = 1, 2, ..., M

From this it follows at once that the eigenvalues of B(a, b, c) are

    λ_m = b + 2 sqrt(ac) cos( mπ/(M+1) ),   m = 1, 2, ..., M        (B.1)

The right-hand eigenvector of B(a, b, c) that is associated with the eigenvalue λ_m satisfies the equation

    B(a, b, c) x_m = λ_m x_m        (B.2)

and is given by

    x_m = (x_j)_m = (a/c)^{(j-1)/2} sin( jmπ/(M+1) ),   m = 1, 2, ..., M        (B.3)
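Eqs. B.1-B.3 are easy to confirm numerically; the following sketch (numpy; M, a, b, c are arbitrary sample values with ac > 0, not from the original text) compares the analytic eigenvalues with computed ones and checks one eigenvector:

```python
import numpy as np

M, a, b, c = 8, 1.0, -2.0, 1.0
B = (np.diag(b * np.ones(M))
     + np.diag(a * np.ones(M - 1), -1)
     + np.diag(c * np.ones(M - 1), 1))

m = np.arange(1, M + 1)
analytic = b + 2.0 * np.sqrt(a * c) * np.cos(m * np.pi / (M + 1))   # Eq. B.1
computed = np.sort(np.linalg.eigvals(B).real)
assert np.allclose(computed, np.sort(analytic))

# Check the eigenvector of Eq. B.3 for m = 1
j = np.arange(1, M + 1)
x1 = (a / c) ** ((j - 1) / 2) * np.sin(j * np.pi / (M + 1))
assert np.allclose(B @ x1, analytic[0] * x1)                        # Eq. B.2
```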

These vectors are the columns of the right-hand eigenvector matrix, the elements of which are

    X = (x_jm) = (a/c)^{(j-1)/2} sin( jmπ/(M+1) ),   j = 1, 2, ..., M,   m = 1, 2, ..., M        (B.4)

Notice that if a = -1 and c = 1,

    (a/c)^{(j-1)/2} = e^{i(j-1)π/2}        (B.5)

The left-hand eigenvector matrix of B(a, b, c) can be written

    X^(-1) = ( 2/(M+1) ) (c/a)^{(m-1)/2} sin( mjπ/(M+1) ),   m = 1, 2, ..., M,   j = 1, 2, ..., M        (B.6)

In this case notice that if a = -1 and c = 1,

    (c/a)^{(m-1)/2} = e^{-i(m-1)π/2}

B.2 Generalized Eigensystem for Simple Tridiagonals

This system is defined as follows:

    B(M : a, b, c) x = λ B(M : d, e, f) x

that is, a simple tridiagonal with elements (a, b, c) on the left-hand side and one with elements (d, e, f) on the right-hand side. In this case one can show after some algebra that

    det[ B(a - λ_m d, b - λ_m e, c - λ_m f) ] = 0

if

    b - λ_m e + 2 sqrt( (a - λ_m d)(c - λ_m f) ) cos( mπ/(M+1) ) = 0,   m = 1, 2, ..., M        (B.7)

If we define

    θ_m = mπ/(M+1),   p_m = cos θ_m        (B.8)

the eigenvalues are

    λ_m = [ eb - 2(cd + af)p_m² + 2 p_m sqrt( (ec - fb)(ea - bd) + (cd - af)² p_m² ) ] / ( e² - 4 f d p_m² )        (B.9)

The right-hand eigenvectors are

    x_m = [ (a - λ_m d)/(c - λ_m f) ]^{(j-1)/2} sin( j θ_m ),   m = 1, 2, ..., M,   j = 1, 2, ..., M

These relations are useful in studying relaxation methods.

B.3 The Inverse of a Simple Tridiagonal

The inverse of B(a, b, c) can also be written in analytic form. Let D_M represent the determinant of B(M : a, b, c):

    D_M ≡ det[ B(M : a, b, c) ]

Defining D_0 to be 1, it is simple to derive the first few determinants, thus

    D_0 = 1
    D_1 = b
    D_2 = b² - ac
    D_3 = b³ - 2abc

One can also find the recursion relation

    D_M = b D_{M-1} - ac D_{M-2}        (B.10)

Eq. B.10 is a linear O∆E, the solution of which was discussed in Section 4.2. Its characteristic polynomial P(E) is P(E) = E² - bE + ac, and the two roots to P(σ) = 0 result in the solution

    D_M = (1/sqrt(b² - 4ac)) { [ (b + sqrt(b² - 4ac))/2 ]^{M+1} - [ (b - sqrt(b² - 4ac))/2 ]^{M+1} },   M = 0, 1, 2, ...        (B.11)

where we have made use of the initial conditions D_0 = 1 and D_1 = b. In the limiting case when b² - 4ac = 0, one can show that

    D_M = (M + 1) (b/2)^M

260 APPENDIX B.1 Standard Tridiagonals B.n=DM B.n(.4 Eigensystems of Circulant Matrices Bp(M : a b c ) (B. SOME PROPERTIES OF TRIDIAGONAL MATRICES 2 D .aD2 D1 a4 D0 .cD1 7 6 aD 7 aD1 1 1 2 1 2 5 44 .aD2 D3 c4D0 3 .D1DD .4.cD3 7 D4 Then for M = 4 and for M = 5 2 D4 .1 (.1 n=m+1 m+2 M dmn = Dm.c3 D0 3 3 16 cD1 c2 D 7 2 B .D DD2 .mDn.1DM .m =DM Diagonal: n=m=1 2 M dmm = DM .D1 DD .cD1D1 B .2aD2 .cD3 c2D2 .cD2 c2D1 .c)n.2 D 3 .4) tridiagonal matrix .m =DM M n=1 2 M .c3 D1 7 7 c2D2 7 7 5 .D DD1 .4.a3 D1 a2 D2 .1 = D 6 a 2 6 aD1 2 2 2 2 D1 5 6 .1 = D 6 .aD3 The general element dmn is Upper triangle: m=1 2 M .c3D1 6 aD cD1 c2 D 3 1 6 .a)m.1DM .12) Consider the circulant (see Section 3.1 Lower triangle: m =n+1 n+2 dmn = DM .a3 D 2D D 4 D3D1 1 a 1 1 .a3 D0 a2D1 .

The eigenvalues are

    λ_m = b + (a + c) cos( 2πm/M ) - i(a - c) sin( 2πm/M ),   m = 0, 1, ..., M - 1        (B.13)

The right-hand eigenvector that satisfies B_p(a, b, c) x_m = λ_m x_m is

    x_m = (x_j)_m = e^{ij(2πm/M)},   j = 0, 1, ..., M - 1        (B.14)

where i ≡ sqrt(-1), and the right-hand eigenvector matrix has the form

    X = (x_jm) = e^{ij(2πm/M)},   j = 0, 1, ..., M - 1,   m = 0, 1, ..., M - 1

The left-hand eigenvector matrix with elements x' is

    X^(-1) = (x'_mj) = (1/M) e^{-im(2πj/M)},   m = 0, 1, ..., M - 1,   j = 0, 1, ..., M - 1

Note that both X and X^(-1) are symmetric and that X^(-1) = (1/M) X*, where X* is the conjugate transpose of X.

B.4.2 General Circulant Systems

Notice that the eigenvectors in Eq. B.14 do not depend on the elements a, b, c in the matrix. In fact, all circulant matrices of order M have the same set of linearly independent eigenvectors, even if they are completely dense, and examination shows that the elements in these eigenvectors correspond to the elements in a complex harmonic analysis or complex discrete Fourier series. An example of a dense circulant matrix of order M = 4 is

    | b_0  b_1  b_2  b_3 |
    | b_3  b_0  b_1  b_2 |
    | b_2  b_3  b_0  b_1 |
    | b_1  b_2  b_3  b_0 |        (B.15)

The eigenvectors are always given by Eq. B.14, but, although the eigenvectors of a circulant matrix are independent of its elements, the eigenvalues are not. For the element indexing shown in Eq. B.15 they have the general form

    λ_m = Σ_{j=0}^{M-1} b_j e^{i(2πjm/M)},   m = 0, 1, ..., M - 1

of which Eq. B.13 is a special case.
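The claim that every circulant shares the eigenvectors of Eq. B.14 can be checked on a dense example (numpy sketch; the first-row entries b_j are arbitrary, not from the original text):

```python
import numpy as np

M = 6
b = np.array([4.0, -1.0, 0.5, 2.0, 0.0, -3.0])   # arbitrary first row b_0..b_5

# Dense circulant with the indexing of Eq. B.15: element (m, n) is b_{(n-m) mod M}
C = np.array([[b[(n - m) % M] for n in range(M)] for m in range(M)])

for k in range(M):
    x = np.exp(1j * 2 * np.pi * np.arange(M) * k / M)   # eigenvector, Eq. B.14
    lam = np.sum(b * x)                                 # lambda_k = sum_j b_j e^{i 2 pi j k / M}
    assert np.allclose(C @ x, lam * x)
```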

....B. 7 6 ..17) . 7 6 7 6 x2 7 4 5 x1 This leads to the subsystem of order M which has the form 2 3 b a 6a b a 7 6 7 6 7 6 a .. .. For example. 7 6 . one can show from previous results that the eigenvalues of eq.. SOME PROPERTIES OF TRIDIAGONAL MATRICES Consider a mesh with an even number of interior points such as that shown in Fig.7 6 7 m6 .1.16 are (2m .... 7 6 76 7 6 . 7 6. B.5 Special Cases Found From Symmetries 262 APPENDIX B. 7 6 a .16) By folding the known eigenvectors of B (2M : a b a) about the center. we seek the set of eigenvectors ~ m for which x 2b a 3 2 x1 3 6a b a 7 6 x2 7 6 76 . B.. One can seek from the tridiagonal matrix B (2M : a b a ) the eigenvector subset that has even symmetry when spanning the interval 0 x . 1) m = b + 2a cos 2M + 1 ! m=1 2 M (B. 7 6 7~ 7 xm = B (M : a ~ a)~ m = 6 b x 6 7 . 7 = 6 76 7 6 76 7 6 7 6 4 54 5 a b a 7 6 x2 7 a b x1 2 3 6 x1 7 6 x2 7 6 . a 7 6 . a 6 7 6 7 6 a b a 7 4 5 a b+a x m~ m (B.

7 6 7 ~ b a) = 6 B (M : a . 1) m = b + 2a cos 2M and the corresponding eigenvectors are Line of Symmetry 263 x =0 x= j0 = 1 2 3 4 5 6 M0 j= 1 2 3 M a.6 Special Cases Involving Boundary Conditions We consider two special cases for the matrix operator representing the 3-point central di erence approximation for the second derivative @ 2 =@x2 at all points away from the boundaries. a 7 6 7 6 7 6 4 5 a b a7 2a b By folding the known eigenvalues of B (2M . An odd{numbered mesh Figure B... 1) x 2M + 1 j =1 2 M Imposing symmetry about the same interval but for a mesh with an odd number of points.6. see Fig.. An even-numbered mesh Line of Symmetry x =0 x= j0 = 1 2 3 4 5 M0 j= 1 2 3 M b. SPECIAL CASES INVOLVING BOUNDARY CONDITIONS and the corresponding eigenvectors are ~ m = sin j (2m . 1 : a b a) about the center. .B. one can show from previous results that the eigenvalues of eq. B. 1) x 2M ! j=1 2 M B. combined with special conditions imposed at the boundaries. B. leads to the matrix 2b a 3 6a b a 7 6 7 6 a ..1.1 { Symmetrical folds for special cases m=1 2 M ! ~ m = sin j (2m .17 are (2m .
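Eq. B.17 for the folded even-symmetric subsystem can be confirmed numerically (numpy sketch; M, a, b are arbitrary sample values, not from the original text):

```python
import numpy as np

M, a, b = 6, 1.0, -2.0

# Order-M subsystem of Eq. B.16: tridiagonal (a, b, a) with b+a in the corner
Bsub = (np.diag(b * np.ones(M))
        + np.diag(a * np.ones(M - 1), -1)
        + np.diag(a * np.ones(M - 1), 1))
Bsub[-1, -1] = b + a

m = np.arange(1, M + 1)
analytic = b + 2.0 * a * np.cos((2 * m - 1) * np.pi / (2 * M + 1))   # Eq. B.17

assert np.allclose(np.sort(np.linalg.eigvals(Bsub).real), np.sort(analytic))
```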

When the boundary conditions are Dirichlet on both sides, the operator and its eigensystem are

    | -2  1           |
    |  1 -2  1        |        λ_m = -2 + 2 cos( mπ/(M+1) )
    |     .  .  .     |        (x_j)_m = sin( jmπ/(M+1) )        (B.18)
    |     1 -2  1     |
    |        1 -2     |

When one boundary condition is Dirichlet and the other is Neumann (and a diagonal preconditioner is applied to scale the last equation), they are

    | -2  1           |
    |  1 -2  1        |        λ_m = -2 + 2 cos( (2m-1)π/(2M+1) )
    |     .  .  .     |        (x_j)_m = sin( j(2m-1)π/(2M+1) )        (B.19)
    |     1 -2  1     |
    |        1 -1     |

Note: In both cases

    m = 1, 2, ..., M,   j = 1, 2, ..., M

and, writing the argument of the cosine as θ, λ = -2 + 2 cos θ = -4 sin²(θ/2).
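Both boundary-condition cases can be verified directly (numpy sketch; M is an arbitrary sample value, not part of the original text):

```python
import numpy as np

M = 7
# Second-difference operator, Dirichlet conditions on both sides
A = (np.diag(-2.0 * np.ones(M))
     + np.diag(np.ones(M - 1), -1)
     + np.diag(np.ones(M - 1), 1))

m = np.arange(1, M + 1)
dirichlet = -2.0 + 2.0 * np.cos(m * np.pi / (M + 1))
assert np.allclose(np.sort(np.linalg.eigvals(A).real), np.sort(dirichlet))

# Dirichlet-Neumann: the scaled last row becomes [ ... 1 -1 ]
A[-1, -1] = -1.0
mixed = -2.0 + 2.0 * np.cos((2 * m - 1) * np.pi / (2 * M + 1))
assert np.allclose(np.sort(np.linalg.eigvals(A).real), np.sort(mixed))
```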

Appendix C

THE HOMOGENEOUS PROPERTY OF THE EULER EQUATIONS

The Euler equations have a special property that is sometimes useful in constructing numerical methods. In order to examine this property, let us first inspect Euler's theorem on homogeneous functions. Consider first the scalar case. If F(u, v) satisfies the identity

    F(αu, αv) = α^n F(u, v)        (C.1)

for a fixed n, F is called homogeneous of degree n. Differentiating both sides with respect to α and setting α = 1 (since the identity holds for all α), we find

    u ∂F/∂u + v ∂F/∂v = n F(u, v)        (C.2)

Consider next the theorem as it applies to systems of equations. If the vector F(Q) satisfies the identity

    F(αQ) = α^n F(Q)        (C.3)

for a fixed n, F is said to be homogeneous of degree n and we find

    [∂F/∂Q] Q = n F(Q)        (C.4)

Now it is easy to show, by direct use of Eq. C.3, that both E and F in Eqs. 2.11 and 2.12 are homogeneous of degree 1, and their Jacobians, A and B, are homogeneous of degree 0 (actually the latter is a direct consequence of the former).¹ This being the case, we notice that the expansion of the flux vector in the vicinity of t_n, which, according to Eq. 6.105, can be written in general as

    E = E_n + A_n(Q - Q_n) + O(h²)
    F = F_n + B_n(Q - Q_n) + O(h²)        (C.5)

can be written

    E = A_n Q + O(h²)
    F = B_n Q + O(h²)        (C.6)

since the terms E_n - A_n Q_n and F_n - B_n Q_n are identically zero for homogeneous vectors of degree 1; see Eq. 6.106. Notice also that, under this condition, the constant term drops out of Eq. 6.106.

As a final remark, we notice from the chain rule that for any vectors F and Q

    [∂F(Q)/∂x] = [∂F/∂Q][∂Q/∂x] = A [∂Q/∂x]        (C.7)

We notice also that for a homogeneous F of degree 1, F = AQ and

    [∂F/∂x] = A [∂Q/∂x] + [∂A/∂x] Q        (C.8)

Therefore, if F is homogeneous of degree 1,

    [∂A/∂x] Q = 0        (C.9)

in spite of the fact that individually [∂A/∂x] and Q are not equal to zero.

¹ Note that this depends on the form of the equation of state. The Euler equations are homogeneous if the equation of state can be written in the form p = ρ f(ε), where ε is the internal energy per unit mass.
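The homogeneity property is easy to confirm for the 1-D Euler flux with an ideal-gas equation of state (p = (γ - 1)(e - ρu²/2), a special case of p = ρ f(ε)). The sketch below (numpy; the state Q is an arbitrary sample, not from the original text) checks F(αQ) = αF(Q) and Euler's theorem [∂F/∂Q]Q = F(Q), with the Jacobian built by central finite differences:

```python
import numpy as np

gamma = 1.4   # ratio of specific heats (ideal gas assumed)

def flux(Q):
    """1-D Euler flux vector for Q = (rho, rho*u, e)."""
    rho, m, e = Q
    u = m / rho
    p = (gamma - 1.0) * (e - 0.5 * rho * u**2)
    return np.array([m, m * u + p, u * (e + p)])

Q = np.array([1.2, 0.6, 2.5])   # arbitrary admissible state

# Homogeneity of degree 1: F(alpha Q) = alpha F(Q)
for alpha in (0.5, 2.0, 7.0):
    assert np.allclose(flux(alpha * Q), alpha * flux(Q))

# Euler's theorem with n = 1: [dF/dQ] Q = F(Q); Jacobian by central differences
eps = 1e-6
A = np.column_stack([(flux(Q + eps * dq) - flux(Q - eps * dq)) / (2 * eps)
                     for dq in np.eye(3)])
assert np.allclose(A @ Q, flux(Q), atol=1e-6)
```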


