# Draft Notes ME 608

Numerical Methods in Heat, Mass, and Momentum Transfer

Instructor: Jayathi Y. Murthy, School of Mechanical Engineering, Purdue University

Spring 2002

© 1998 J.Y. Murthy and S.R. Mathur. All Rights Reserved.

Contents

1 Mathematical Modeling
  1.1 Conservation Equations
    1.1.1 Discussion
    1.1.2 Conservation Form
  1.2 Governing Equations
    1.2.1 The Energy Equation
    1.2.2 The Momentum Equation
    1.2.3 The Species Equation
  1.3 The General Scalar Transport Equation
  1.4 Mathematical Classification of Partial Differential Equations
    1.4.1 Elliptic Partial Differential Equations
    1.4.2 Parabolic Partial Differential Equations
    1.4.3 Hyperbolic Partial Differential Equations
    1.4.4 Behavior of the Scalar Transport Equation
  1.5 Closure

2 Numerical Methods
  2.1 Overview
  2.2 Mesh Terminology and Types
    2.2.1 Regular and Body-fitted Meshes
    2.2.2 Structured, Block Structured, and Unstructured Meshes
    2.2.3 Conformal and Non-Conformal Meshes
    2.2.4 Cell Shapes
    2.2.5 Node-Based and Cell-Based Schemes
  2.3 Discretization Methods
    2.3.1 Finite Difference Methods
    2.3.2 Finite Element Methods
    2.3.3 Finite Volume Method
  2.4 Solution of Discretization Equations
    2.4.1 Direct Methods
    2.4.2 Iterative Methods
  2.5 Accuracy, Consistency, Stability and Convergence
    2.5.1 Accuracy
    2.5.2 Consistency
    2.5.3 Stability
    2.5.4 Convergence
  2.6 Closure

3 The Diffusion Equation: A First Look
  3.1 Two-Dimensional Diffusion in Rectangular Domain
    3.1.1 Discretization
    3.1.2 Discussion
  3.2 Boundary Conditions
    3.2.1 Dirichlet Boundary Condition
    3.2.2 Neumann Boundary Condition
    3.2.3 Mixed Boundary Condition
  3.3 Unsteady Conduction
    3.3.1 The Explicit Scheme
    3.3.2 The Fully-Implicit Scheme
    3.3.3 The Crank-Nicholson Scheme
  3.4 Diffusion in Polar Geometries
  3.5 Diffusion in Axisymmetric Geometries
  3.6 Finishing Touches
    3.6.1 Interpolation of Γ
    3.6.2 Source Linearization and Treatment of Non-Linearity
    3.6.3 Under-Relaxation
  3.7 Discussion
  3.8 Truncation Error
    3.8.1 Spatial Truncation Error
    3.8.2 Temporal Truncation Error
  3.9 Stability Analysis
  3.10 Closure

4 The Diffusion Equation: A Closer Look
  4.1 Diffusion on Orthogonal Meshes
  4.2 Non-Orthogonal Meshes
    4.2.1 Discussion
  4.3 Discrete Equation for Non-Orthogonal Meshes
    4.3.1 Secondary Gradient Calculation
  4.4 Boundary Conditions
  4.5 Gradient Calculation
    4.5.1 Structured Meshes
    4.5.2 Unstructured Meshes
  4.6 Influence of Secondary Gradients on Coefficients
  4.7 Implementation Issues
    4.7.1 Data Structures
    4.7.2 Overall Solution Loop
  4.8 Closure

5 Convection
  5.1 Two-Dimensional Convection and Diffusion in A Rectangular Domain
    5.1.1 Central Differencing
    5.1.2 Upwind Differencing
  5.2 Convection-Diffusion on Non-Orthogonal Meshes
    5.2.1 Central Difference Approximation
    5.2.2 Upwind Differencing Approximation
  5.3 Accuracy of Upwind and Central Difference Schemes
    5.3.1 An Illustrative Example
    5.3.2 False Diffusion and Dispersion
  5.4 First-Order Schemes Using Exact Solutions
    5.4.1 Exponential Scheme
    5.4.2 Hybrid Scheme
    5.4.3 Power Law Scheme
    5.4.4 Discussion
  5.5 Unsteady Convection
    5.5.1 1D Finite Volume Discretization
    5.5.2 Central Difference Scheme
    5.5.3 First Order Upwind Scheme
    5.5.4 Error Analysis
    5.5.5 Lax-Wendroff Scheme
  5.6 Higher-Order Schemes
    5.6.1 Second-Order Upwind Schemes
    5.6.2 Third-Order Upwind Schemes
    5.6.3 Discussion
  5.7 Higher-Order Schemes for Unstructured Meshes
  5.8 Discussion
  5.9 Boundary Conditions
    5.9.1 Inflow Boundaries
    5.9.2 Outflow Boundaries
    5.9.3 Geometric Boundaries
  5.10 Closure

6 Fluid Flow: A First Look
  6.1 Discretization of the Momentum Equation
  6.2 Discretization of the Continuity Equation
  6.3 The Staggered Grid
  6.4 Solution Methods
  6.5 Pressure Level and Incompressibility
  6.6 The SIMPLE Algorithm
    6.6.1 The Pressure Correction Equation
    6.6.2 Overall Algorithm
    6.6.3 Discussion
  6.7 The SIMPLER Algorithm
    6.7.1 Overall Algorithm
    6.7.2 Discussion
  6.8 The SIMPLEC Algorithm
  6.9 Optimal Underrelaxation for SIMPLE
  6.10 Discussion
  6.11 Non-Orthogonal Structured Meshes
  6.12 Unstructured Meshes
  6.13 Closure

7 Fluid Flow: A Closer Look
  7.1 Velocity and Pressure Checkerboarding
    7.1.1 Discretization of Momentum Equations
    7.1.2 Discretization of Continuity Equation
    7.1.3 Pressure Checkerboarding
    7.1.4 Velocity Checkerboarding
  7.2 Co-Located Formulation
  7.3 The Concept of Added Dissipation
  7.4 Accuracy of Added Dissipation Scheme
  7.5 Discussion
  7.6 Two-Dimensional Co-Located Variable Formulation
    7.6.1 Discretization of Momentum Equations
    7.6.2 Momentum Interpolation for Face Velocity
  7.7 SIMPLE Algorithm for Co-Located Variables
    7.7.1 Velocity and Pressure Corrections
    7.7.2 Pressure Correction Equation
    7.7.3 Overall Solution Procedure
    7.7.4 Discussion
  7.8 Underrelaxation and Time-Step Dependence
  7.9 Co-Located Formulation for Non-Orthogonal and Unstructured Meshes
    7.9.1 Face Normal Momentum Equation
    7.9.2 Momentum Interpolation
  7.10 The SIMPLE Algorithm for Non-Orthogonal and Unstructured Meshes
    7.10.1 Discussion
  7.11 Closure

8 Linear Solvers
  8.1 Direct vs Iterative Methods
  8.2 Storage Strategies
  8.3 Tri-Diagonal Matrix Algorithm
  8.4 Line by Line TDMA
  8.5 Jacobi and Gauss Seidel Methods
  8.6 General Iterative Methods
  8.7 Convergence of Jacobi and Gauss Seidel Methods
  8.8 Analysis of Iterative Methods
  8.9 Multigrid Methods
    8.9.1 Coarse Grid Correction
    8.9.2 Geometric Multigrid
    8.9.3 Algebraic Multigrid
    8.9.4 Agglomeration Strategies
    8.9.5 Cycling Strategies and Implementation Issues
  8.10 Closure

Chapter 1

Mathematical Modeling

In order to simulate fluid flow, heat transfer, and other related physical phenomena, it is necessary to describe the associated physics in mathematical terms. Nearly all the physical phenomena of interest to us in this book are governed by principles of conservation, and are expressed in terms of partial differential equations expressing these principles. For example, the momentum equations express the principle of conservation of linear momentum, and the energy equation expresses the conservation of total energy. In this chapter we derive a typical conservation equation and examine its mathematical properties.

1.1 Conservation Equations

Typical governing equations describing the conservation of mass, momentum, energy, or chemical species are written in terms of specific quantities, i.e., quantities expressed on a per unit mass basis. For example, the momentum equation expresses the principle of conservation of linear momentum in terms of the momentum per unit mass, i.e., the velocity. The equation for conservation of chemical species expresses the conservation of the mass of the species in terms of its mass fraction.

Let us consider a specific quantity φ, which may be momentum per unit mass, energy per unit mass, or any other such quantity. Consider a control volume of size Δx × Δy × Δz, shown in Figure 1.1. We want to express the variation of φ in the control volume over time. Let us assume that φ is governed by a conservation principle that states

  Accumulation of φ in the control volume over time Δt
    = Net influx of φ into the control volume
    + Net generation of φ inside the control volume   (1.1)
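The word statement in Equation 1.1 is nothing more than bookkeeping over a control volume, and can be sketched directly. This is a minimal illustration, not part of the notes; the flux and generation values are invented:

```python
# Bookkeeping sketch of Equation 1.1:
#   accumulation over dt = (net influx + net generation) * dt
# All values (influx, outflux, generation) are illustrative.

def step(phi_total, influx, outflux, generation, dt):
    """Advance the total amount of phi in a control volume by one time step."""
    accumulation = (influx - outflux + generation) * dt
    return phi_total + accumulation

# March a few steps and confirm the balance closes exactly.
phi = 10.0
for _ in range(5):
    phi = step(phi, influx=3.0, outflux=1.0, generation=0.5, dt=0.1)

# 5 steps, each adding (3.0 - 1.0 + 0.5) * 0.1 = 0.25
print(phi)  # 11.25
```

The same bookkeeping, applied to an infinitesimal control volume, is what produces the differential conservation equation derived below.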

[Figure 1.1: Control Volume, of size Δx × Δy × Δz, with flux Jx entering the face at x and Jx+Δx leaving the face at x + Δx]

The accumulation of φ in the control volume over time Δt is given by

  (ρ φ ΔV)|t+Δt − (ρ φ ΔV)|t   (1.2)

Here, ρ is the density of the fluid, ΔV is the volume of the control volume (Δx Δy Δz), and t is time. The net generation of φ inside the control volume over time Δt is given by

  S ΔV Δt   (1.3)

where S is the generation of φ per unit volume. S is also sometimes called the source term.

Let us consider the remaining term, the net influx of φ into the control volume. Let Jx represent the flux of φ coming into the control volume through the face at x, and Jx+Δx the flux leaving the face at x + Δx. Similar fluxes exist on the y and z faces respectively. The net influx of φ into the control volume over time Δt is

  (Jx − Jx+Δx) Δy Δz Δt + (Jy − Jy+Δy) Δx Δz Δt + (Jz − Jz+Δz) Δx Δy Δt   (1.4)

We have not yet said what physical mechanisms cause the influx of φ. For the physical phenomena of interest to us, φ is transported by two primary mechanisms: diffusion due to molecular collision, and convection due to the motion of the fluid. In many cases, the diffusion flux may be written as

  Jdiffusion,x = −Γ ∂φ/∂x   (1.5)

The convective flux may be written as

  Jconvection,x = ρ u φ   (1.6)
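The two flux expressions (1.5) and (1.6) can be sketched as small functions. The parameter values and the linear profile here are made up purely for illustration:

```python
# Sketch of the two transport mechanisms of Equations 1.5-1.6.
# Gamma, rho, u, and the profile phi(x) = 2x are invented values.

def diffusion_flux(gamma, dphi_dx):
    # J_diffusion = -Gamma * dphi/dx   (Equation 1.5)
    return -gamma * dphi_dx

def convection_flux(rho, u, phi):
    # J_convection = rho * u * phi     (Equation 1.6)
    return rho * u * phi

# For phi(x) = 2x, dphi/dx = 2 everywhere.
J_d = diffusion_flux(gamma=0.5, dphi_dx=2.0)    # diffusion down the gradient
J_c = convection_flux(rho=1.0, u=3.0, phi=2.0)  # carried by the flow
J_total = J_c + J_d
print(J_total)  # 5.0
```

Note the sign convention: diffusion transports φ down the gradient, which is why (1.5) carries a minus sign.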

Here, the velocity field is given by the vector V = u i + v j + w k, and ρu|x is the mass flux through the control volume face at x. Thus the net flux through the face at x may be written as

  Jx = ρ u φ − Γ ∂φ/∂x   (1.7)

Similar expressions may be written for the y and z directions respectively.

Accumulating terms, and dividing by ΔV Δt, Equation 1.1 may be written as

  [ (ρφ)|t+Δt − (ρφ)|t ] / Δt = (Jx − Jx+Δx)/Δx + (Jy − Jy+Δy)/Δy + (Jz − Jz+Δz)/Δz + S   (1.8)

Taking the limit Δx, Δy, Δz, Δt → 0, we get

  ∂(ρφ)/∂t = −∂Jx/∂x − ∂Jy/∂y − ∂Jz/∂z + S   (1.9)

It is convenient to write Equation 1.9 as

  ∂(ρφ)/∂t + ∂(ρuφ)/∂x + ∂(ρvφ)/∂y + ∂(ρwφ)/∂z
    = ∂/∂x (Γ ∂φ/∂x) + ∂/∂y (Γ ∂φ/∂y) + ∂/∂z (Γ ∂φ/∂z) + S

or, in vector notation,

  ∂(ρφ)/∂t + ∇·(ρVφ) = ∇·(Γ∇φ) + S   (1.10)

1.1.1 Discussion

It is worth noting the following about the above derivation:

• The differential form is derived by considering balances over a finite control volume.

• Though we have chosen a hexahedral control volume on which to do the conservation balance, we can, in principle, choose any shape. We should get the same final governing differential equation regardless of the shape of the volume chosen to do the derivation.
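The passage from the discrete balance (1.8) to the differential form (1.9) can be checked numerically: the face-flux difference over a shrinking control volume tends to −dJ/dx. A sketch with an arbitrary test flux J(x) = sin x:

```python
import math

# The limit taken between Equations 1.8 and 1.9, checked in 1D:
# (J(x) - J(x + dx)) / dx  ->  -dJ/dx  as dx -> 0.
# J(x) = sin(x) is an arbitrary smooth test flux, not from the notes.

def J(x):
    return math.sin(x)

x = 1.0
exact = -math.cos(x)  # -dJ/dx at x = 1
for dx in (0.1, 0.01, 0.001):
    approx = (J(x) - J(x + dx)) / dx
    print(dx, approx)

# The printed values approach -cos(1), roughly -0.5403, as dx shrinks.
```

The rate at which the discrete balance approaches the derivative is exactly the kind of truncation-error question taken up in Chapter 3.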

• The conservation equation is written in terms of a specific quantity φ, which may be energy per unit mass (J/kg), momentum per unit mass (m/s), or some similar quantity.

• The conservation equation is written on a per unit volume per unit time basis. The generation term in Equation 1.10, for example, is the generation of φ per unit volume per unit time. Thus, if φ were energy per unit mass, S would be the generation of energy per unit volume per unit time.

1.1.2 Conservation Form

Equation 1.10 represents the conservative or divergence form of the conservation equation. This form is characterized by the fact that in steady state, in the absence of generation, the divergence of the flux is zero:

  ∇·J = 0   (1.11)

where J = Jx i + Jy j + Jz k. The divergence of J represents the net efflux per unit volume of J. Thus, the conservative form is a direct statement about the conservation of φ in terms of the physical fluxes (convection and diffusion). By using the continuity equation, we may write the non-conservative form of Equation 1.10 as

  ∂(ρφ)/∂t + ρ V·∇φ = Γ ∇·∇φ + ∇Γ·∇φ + S   (1.12)

The non-conservative form does not have a direct interpretation of this sort. Numerical methods that are developed with the divergence form as a starting point can be made to reflect the conservation property exactly if care is taken. Those that start from Equation 1.12 can be made to approximate conservation in some limiting sense, but not exactly.

1.2 Governing Equations

The governing equations for fluid flow, heat and mass transfer, as well as other transport equations, may be represented by the conservative form, Equation 1.10. Let us now consider some specific cases of the conservation equation for φ.

1.2.1 The Energy Equation

The general form of the energy equation is quite elaborate, though it can also be cast into the general form of Equation 1.10. For simplicity, let us assume low-speed flow and negligible viscous dissipation. For this case, the energy equation may be written in terms of the specific enthalpy h as

  ∂(ρh)/∂t + ∇·(ρVh) = ∇·(k∇T) + Sh   (1.13)
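The practical content of the conservative form is that, when each cell's balance is written in terms of face fluxes, the interior face fluxes cancel in pairs as the cell balances are summed, leaving only the boundary fluxes. A discrete scheme built on the divergence form can inherit this property exactly. A minimal sketch with invented face-flux values:

```python
# Discrete conservation from the divergence form: sum the per-cell flux
# balances of a 1D row of 4 cells and watch interior fluxes telescope away.
# The face-flux values are arbitrary illustrative numbers.

face_flux = [4.0, 2.5, -1.0, 0.5, 3.0]   # fluxes at faces 0..4 of 4 cells

# Net efflux from cell i is the difference of its two face fluxes.
cell_balance = [face_flux[i + 1] - face_flux[i] for i in range(4)]

total = sum(cell_balance)
print(total, face_flux[-1] - face_flux[0])  # both are -1.0
```

Because the sum collapses to the boundary fluxes alone, the discrete solution conserves φ over the whole domain to machine roundoff, which is the sense in which divergence-form schemes are "conservative".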

where k is the thermal conductivity and T is the temperature. For ideal gases and incompressible substances,

  dh = Cp dT   (1.14)

so that Equation 1.13 may be written as

  ∂(ρh)/∂t + ∇·(ρVh) = ∇·((k/Cp) ∇h) + Sh   (1.15)

Comparing Equation 1.15 with Equation 1.10 shows that the energy equation can be cast into the form of the general conservation equation, with φ = h, Γ = k/Cp and S = Sh.

1.2.2 The Momentum Equation

The momentum equation for a Newtonian fluid in the direction x may be written as

  ∂(ρu)/∂t + ∇·(ρVu) = ∇·(µ∇u) − ∂p/∂x + Su   (1.16)

Here, ∂p/∂x is the pressure gradient, and Su contains those parts of the stress tensor not appearing directly in the diffusion term. We see that Equation 1.16 has the same form as the general conservation equation 1.10, with φ = u, Γ = µ, and S = −∂p/∂x + Su.

1.2.3 The Species Equation

Consider the transport of a mixture of chemical species. The equation for the conservation of mass of a chemical species i may be written in terms of its mass fraction Yi, where Yi is defined as the mass of species i per unit mass of mixture. If Fick's law is assumed valid, the governing conservation equation is

  ∂(ρYi)/∂t + ∇·(ρVYi) = ∇·(Γi ∇Yi) + Ri   (1.17)

Γi is the diffusion coefficient for Yi in the mixture and Ri is the rate of formation of Yi through chemical reactions. Again we see that Equation 1.17 has the same form as the general conservation equation 1.10, with φ = Yi, Γ = Γi, and S = Ri.

1.3 The General Scalar Transport Equation

We have seen that the equations governing fluid flow, heat and mass transfer can be cast into a single general form which we shall call the general scalar transport equation:

  ∂(ρφ)/∂t + ∇·(ρVφ) = ∇·(Γ∇φ) + S   (1.18)

If numerical methods can be devised to solve this equation, we will have a framework within which to solve the equations for flow, heat, and mass transfer.
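The correspondences above can be collected in one place. The following table-as-code is only a summary device, not part of the notes' method; the symbols are the notes' own (φ, Γ, S):

```python
# How each specific equation maps onto the general scalar transport
# equation (1.18). Purely a summary of the text above.

general_form = {
    "energy":   {"phi": "h",  "Gamma": "k/Cp",    "S": "Sh"},
    "momentum": {"phi": "u",  "Gamma": "mu",      "S": "-dp/dx + Su"},
    "species":  {"phi": "Yi", "Gamma": "Gamma_i", "S": "Ri"},
}

for eq, terms in general_form.items():
    print(f"{eq:9s} phi={terms['phi']:3s} Gamma={terms['Gamma']:8s} S={terms['S']}")
```

A solver written once for (1.18) can then be specialized to any of these equations by substituting the appropriate φ, Γ and S.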

1.4 Mathematical Classification of Partial Differential Equations

The general scalar transport equation is a second-order partial differential equation (PDE) governing the spatial and temporal variation of φ. If the properties ρ and Γ, or the generation term S, are functions of φ, it is non-linear. Ignoring non-linearities for the moment, it is instructive to consider a general second-order PDE given by

  a φxx + b φxy + c φyy + d φx + e φy + f φ + g = 0   (1.19)

The coefficients a, b, c, d, e, f and g are functions of the coordinates (x, y), but not of φ itself. The behavior of Equation 1.19 may be classified according to the sign of the discriminant

  b² − 4ac   (1.20)

If b² − 4ac < 0, the PDE is called elliptic. If b² − 4ac = 0, the PDE is called parabolic. If b² − 4ac > 0, the PDE is called hyperbolic. Let us consider typical examples of each type of equation.

1.4.1 Elliptic Partial Differential Equations

Let us consider steady heat conduction in a one-dimensional slab, as shown in Figure 1.2. The governing equation is

  d/dx (k dT/dx) = 0   (1.21)

with the boundary conditions

  T(0) = T0,  T(L) = TL   (1.22)

For constant k, the solution is given by

  T(x) = T0 + (TL − T0) x/L   (1.23)

[Figure 1.2: Conduction in a One-Dimensional Slab]

This simple problem illustrates important properties of elliptic PDEs. These are:

1. The temperature at any point x in the domain is influenced by the temperatures on both boundaries.

2. In the absence of source terms, T(x) is bounded by the temperatures on the boundaries. It cannot be either higher or lower than the boundary temperatures.

It is desirable when devising numerical schemes that these basic properties be reflected in the characteristics of the scheme.

1.4.2 Parabolic Partial Differential Equations

Consider unsteady conduction in the slab in Figure 1.2. If k, ρ and Cp are constant, Equation 1.13 may be written in terms of the temperature T as

  ∂T/∂t = α ∂²T/∂x²   (1.24)

where α = k/(ρCp) is the thermal diffusivity. The initial and boundary conditions are given by

  T(x, 0) = Ti(x)
  T(0, t) = T0
  T(L, t) = T0   (1.25)

Using a separation of variables technique, we may write the solution to this problem as

  T(x, t) = T0 + Σ (n = 1 to ∞) Bn sin(nπx/L) exp(−α n²π² t / L²)   (1.26)

where

  Bn = (2/L) ∫ from 0 to L (Ti(x) − T0) sin(nπx/L) dx,  n = 1, 2, 3, …   (1.27)

We note the following about the solution:

1. Only initial conditions are required, i.e., conditions at t = 0. No final conditions are required, for example conditions at t → ∞. We do not need to know the future to solve this problem!

2. The initial conditions only affect future temperatures, not past temperatures.

3. The initial conditions influence the temperature at every point in the domain for all future times. The amount of influence decreases with time, and may affect different spatial points to different degrees.

4. A steady state is reached for t → ∞. Here, the solution becomes independent of Ti(x), and the problem recovers its elliptic spatial behavior.

5. The temperature is bounded by its initial and boundary conditions in the absence of source terms.

6. It is clear from this problem that the variable t behaves very differently from the variable x. The variation in t admits only one-way influences, whereas the variable x admits two-way influences, just as with elliptic PDEs. It is sometimes useful to consider particular coordinates to be elliptic or parabolic. Here, t is sometimes referred to as the marching or parabolic direction. Spatial variables may also behave in this way, for example, the axial direction in a pipe flow.

1.4.3 Hyperbolic Partial Differential Equations

Let us consider the one-dimensional flow of a fluid in a channel, as shown in Figure 1.3. The velocity of the fluid, U, is a constant, with U > 0. The properties ρ and Cp are constant, and k = 0. For t > 0, the fluid upstream of the channel entrance is held at temperature T0. The governing equation and boundary conditions are given by

  ∂(ρCpT)/∂t + ∂(ρCpUT)/∂x = 0   (1.28)

with

  T(x, 0) = Ti for x > 0
  T(0, t) = T0 for t > 0   (1.29)

You can convince yourself that Equation 1.28 is hyperbolic by differentiating it once with respect to either t or x and finding the discriminant. The solution to this problem is

  T(x, t) = T(x − Ut, 0)   (1.30)

or, to put it another way,

  T(x, t) = Ti for t < x/U
  T(x, t) = T0 for t > x/U   (1.31)

The solution is essentially a step in T traveling in the positive x direction with a velocity U, as shown in Figure 1.4. We should note the following about the solution:

1. The upstream boundary condition (x = 0) affects the solution in the domain. Conditions downstream of the domain do not affect the solution in the domain.

2. The inlet boundary condition propagates with a finite speed, U.

3. The inlet boundary condition is not felt at point x until t = x/U.

[Figure 1.3: Convection of a Step Profile]

[Figure 1.4: Temperature Variation with Time]

1.4.4 Behavior of the Scalar Transport Equation

The general scalar transport equation we derived earlier (Equation 1.10) has much in common with the partial differential equations we have seen here. In most engineering situations, the equation exhibits mixed behavior, with the diffusion terms tending to bring in elliptic influences, and the unsteady and convection terms bringing in parabolic or hyperbolic influences. The convection side of the scalar transport equation exhibits hyperbolic behavior. The elliptic diffusion equation is recovered if we assume steady state and no flow; the same problem solved in the unsteady state exhibits parabolic behavior. The ideal numerical scheme should be able to reproduce these influences correctly.

Though it is possible to devise numerical methods which exploit the particular nature of the general scalar transport equation in certain limiting cases, we will not do that here. We will concentrate on developing numerical methods which are general enough to handle the mixed behavior of the general transport equation.

1.5 Closure

In this chapter, we have seen that many physical phenomena of interest to us are governed by conservation equations. These conservation equations are derived by writing balances over finite control volumes. We have seen that the conservation equations governing the transport of momentum, heat and other specific quantities have a common form embodied in the general scalar transport equation. This equation has unsteady, convection, diffusion and source terms. By studying the behavior of canonical elliptic, parabolic and hyperbolic equations, we begin to understand the behavior of these different terms in determining the behavior of the computed solution. When we study fluid flow in greater detail, we will have to deal with coupled sets of equations, as opposed to a single scalar transport equation. These sets can also be analyzed in terms similar to the discussion above.
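As a closing check on the classification rule of Equation 1.20, the discriminant test can be applied mechanically to the canonical examples of Section 1.4. This is a small sketch; the particular coefficient values are illustrative:

```python
# Classification by the discriminant b^2 - 4ac of Equation 1.20, for PDEs of
# the form a*phi_xx + b*phi_xy + c*phi_yy + ... = 0 (t playing the role of y).

def classify(a, b, c):
    d = b * b - 4.0 * a * c
    if d < 0:
        return "elliptic"
    if d == 0:
        return "parabolic"
    return "hyperbolic"

# Steady 2D conduction (Laplace): phi_xx + phi_yy = 0
print(classify(1.0, 0.0, 1.0))    # elliptic

# Unsteady 1D conduction: alpha*T_xx - T_t = 0, so a = alpha, b = 0, c = 0
print(classify(0.5, 0.0, 0.0))    # parabolic

# Wave equation obtained by differentiating (1.28): phi_xx - phi_tt/U^2 = 0
print(classify(1.0, 0.0, -0.25))  # hyperbolic
```

The same three signs of the discriminant are what the unsteady, diffusion, and convection terms of the general transport equation each tend to impose on the mixed problem.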


Mesh generation divides the domain of interest into elements or cells. on the other hand. The numerical solution. 2. In arriving at these discrete equations for φ we will be required to assume how φ varies between grid points i.e. Most widely used methods for discretization require local proﬁle assumptions. consistency. aims to provide us with values of φ at a discrete number of points in the domain. and associates with each element or cell one or more discrete    21 . The process of converting our governing transport equation into a set of equations for the discrete values of φ is called the discretization process and the speciﬁc methods employed to bring about this conversion are called discretization methods. That is. The discrete values of φ are typically described by algebraic equations relating the values at grid points to each other. as well as a method for their solution. Fundamental to the development of a numerical method is the idea of discretization. The conversion of a differential equation into a set of discrete algebraic equations requires the discretization of space. A typical mesh is shown in Figure 2. stability and convergence. and identify the main components of the solution method. we prescribe how φ varies in the local neighborhood surrounding a grid point. to make proﬁle assumptions. The development of numerical methods focuses on both the derivation of the discrete set of algebraic equations.1 Overview Our objective here is to develop a numerical method for solving the general scalar transport equation. we examine numerical methods for solving this type of equation. This is accomplished by means of mesh generation. though we may also see them referred to as nodes or cell centroids. We also examine ways of characterizing our numerical methods in terms of accuracy.. but not over the entire domain. we saw that physical phenomena of interest to us could be described by a general scalar transport equation. In this chapter. 
An analytical solution to a partial differential equation gives us the value of φ as a function of the independent variables (x y z t ).1. These points are called grid points. depending on the method.Chapter 2 Numerical Methods In the previous chapter.

It is these values of φ we wish to compute.) Since we wish to get an answer to the original differential equation.e.Cell Vertex Figure 2.. When the number of grid points is small. The rate at which it tends to the exact solution depends on the type of proﬁle assumptions made in obtaining the discretization. the departure of the discrete solution from the exact solution is expected to be large. For simplicity. The path to solution determines whether we are successful in obtaining a solution. all well-behaved discretization methods should tend to the exact solution when a large enough number of grid points is employed. depends only on the discretization process. 22 . let us say that the accuracy of the numerical solution. we shall not pursue this line of investigation here. its closeness to the exact solution. A well-behaved numerical scheme will tend to the exact solution as the number of grid points is increased.e. it is appropriate to ask whether our algebraic equation set really gives us this.1: An Example of a Mesh values of φ . and not on the methods employed to solve the discrete set (i. We should also distinguish between the discretized equations and the methods employed to solve them. the path to solution).. i. No matter what discretization method is employed. (For some non-linear problems. But it does not determine the ﬁnal answer. and how much time and effort it will cost us. For our purposes. the path to solution can determine which of several possible solutions is obtained.

faces and edges are the same. the mesh is divided into blocks. These shapes can be meshed by regular grids. The fundamental unit of the mesh is the cell (sometimes called the element). A cell is surrounded by faces. which meet at nodes or vertices. 2. and the 23 .g. as shown in Figure 2.5 shows a block-structured mesh. stair stepping occurs at domain boundaries. such an approximation of the boundary may not be acceptable. cylinders.2 Mesh Terminology and Types The physical domain is discretized by meshing or gridding it. A variety of mesh types are encountered in practice.3 are examples of structured meshes.2 Structured. In three dimensions. however. grid lines are not necessarily orthogonal to each other. e. in ﬂows dominated by wall shear. Figure 2.2 in describing our meshes. every interior vertex in the domain is connected to the same number of neighbor vertices. Here. In two dimensions.3(b). and Unstructured Meshes The meshes shown in Figure 2. and conform to the boundaries of the domain.2. We shall use the terminology shown in Figure 2. 2. and curve to conform to the irregular geometry.2. our interest lies in analyzing domains which are regular in shape: rectangles. Here. These meshes are also sometimes called orthogonal meshes. If regular grids are used in these geometries.4. the face is a surface surrounded by edges. These are described below.(We shall use the terms mesh and grid interchangeably in this book). spheres.3(a).2: Mesh Terminology 2. Here. For many practical problems. When the physics at the boundary are important in determining the solution.Node (Vertex) Cell Centroid Cell Face Figure 2.. An example is shown in Figure 2. Block Structured. as shown in Figure 2. Associated with each cell is the cell centroid. cubes. the domains of interest are irregularly shaped and regular meshes may not sufﬁce. The grid lines are orthogonal to each other.1 Regular and Body-ﬁtted Meshes In many cases.

Figure 2.3: Regular and Body-Fitted Meshes

Figure 2.4: Stair-Stepped Mesh

mesh within each block is structured, but the arrangement of the blocks themselves is not necessarily structured. Figure 2.6 shows an unstructured mesh. Here, each vertex is connected to an arbitrary number of neighbor vertices. Though mesh structure imposes restrictions, structured quadrilaterals and hexahedra are well-suited for flows with a dominant direction, such as boundary-layer flows. However, as computational fluid dynamics is becoming more widely used for analyzing industrial flows, unstructured meshes are becoming necessary to handle complex geometries. Unstructured meshes impose fewer topological restrictions on the user, and as a result, make it easier to mesh very complex geometries.

Figure 2.5: Block-Structured Mesh

2.2.3 Conformal and Non-Conformal Meshes

The meshes in Figures 2.5 and 2.6 are conformal meshes. In contrast, an example of a non-conformal mesh is shown in Figure 2.7. Here, the vertices of a cell or element may fall on the faces of neighboring cells or elements.

2.2.4 Cell Shapes

Meshes may be constructed using a variety of cell shapes. The most widely used are quadrilaterals and hexahedra, and methods for generating good-quality structured meshes for these shapes have existed for some time now. More recently, triangles and tetrahedra are increasingly being used, and techniques for their generation are rapidly reaching maturity. As of this writing, there are no general

Figure 2.6: Unstructured Mesh

Figure 2.7: Non-Conformal Mesh

purpose techniques for generating unstructured hexahedra. Another recent trend is the use of hybrid meshes: for example, prisms are used in boundary layers, transitioning to tetrahedra in the free-stream, as shown in Figure 2.9. In this book, we will develop numerical methods capable of using all these cell shapes.

Figure 2.8: Cell Shapes: (a) Triangle, (b) Tetrahedron, (c) Quadrilateral, (d) Hexahedron, (e) Prism, and (f) Pyramid

2.2.5 Node-Based and Cell-Based Schemes

Numerical methods which store their primary unknowns at the node or vertex locations are called node-based or vertex-based schemes. Those which store them at the cell centroid are called cell-based schemes. Finite element methods are typically node-based schemes, and many finite volume methods are cell-based. For structured and block-structured meshes composed of quadrilaterals or hexahedra, the number of cells is approximately equal to the number of nodes, and the spatial resolution of both storage schemes is similar for the same mesh. For other cell shapes, there may be quite a big difference in the number of nodes and cells in the mesh. For triangles, for example, there are, on average, twice as many cells as nodes. This fact must be taken into account in deciding whether a given mesh provides adequate resolution for a given problem. From the point of view of developing numerical methods, both schemes have advantages and disadvantages, and the choice will depend

on what we wish to achieve.

Figure 2.9: Hybrid Mesh in Boundary Layer

2.3 Discretization Methods

So far, we have alluded to the discretization method, but have not said specifically what method we will use to convert our general transport equation to a set of discrete algebraic equations. A number of popular methods are available for doing this.

2.3.1 Finite Difference Methods

Finite difference methods approximate the derivatives in the governing differential equation using truncated Taylor series expansions. Consider a one-dimensional scalar transport equation with a constant diffusion coefficient and no unsteady or convective terms:

$$\Gamma \frac{d^2\phi}{dx^2} + S = 0 \qquad (2.1)$$

We wish to discretize the diffusion term. Referring to the one-dimensional mesh shown in Figure 2.10, we write

$$\phi_3 = \phi_2 + \left(\frac{d\phi}{dx}\right)_2 \Delta x + \left(\frac{d^2\phi}{dx^2}\right)_2 \frac{(\Delta x)^2}{2} + O\left((\Delta x)^3\right) \qquad (2.2)$$

$$\phi_1 = \phi_2 - \left(\frac{d\phi}{dx}\right)_2 \Delta x + \left(\frac{d^2\phi}{dx^2}\right)_2 \frac{(\Delta x)^2}{2} - O\left((\Delta x)^3\right) \qquad (2.3)$$

The term $O((\Delta x)^3)$ indicates that the terms that follow have a dependence on $(\Delta x)^n$ where $n \ge 3$. Subtracting Equation 2.3 from Equation 2.2 gives

$$\left(\frac{d\phi}{dx}\right)_2 = \frac{\phi_3 - \phi_1}{2\Delta x} + O\left((\Delta x)^2\right) \qquad (2.4)$$

By adding the two equations together, we can write

$$\left(\frac{d^2\phi}{dx^2}\right)_2 = \frac{\phi_3 + \phi_1 - 2\phi_2}{(\Delta x)^2} + O\left((\Delta x)^2\right) \qquad (2.5)$$
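The second-order behavior claimed for Equation 2.5 can be checked numerically. The sketch below (the test function, evaluation point, and step sizes are illustrative choices, not from the text) halves Δx repeatedly and confirms that the error of the central-difference second derivative falls by roughly a factor of four each time:

```python
import numpy as np

def central_second_derivative(f, x, dx):
    # Equation 2.5: (phi_3 + phi_1 - 2*phi_2) / dx^2 with points x+dx, x-dx, x.
    return (f(x + dx) + f(x - dx) - 2.0 * f(x)) / dx**2

# Illustrative test: phi(x) = sin(x), whose exact second derivative is -sin(x).
x0 = 1.0
exact = -np.sin(x0)
errors = [abs(central_second_derivative(np.sin, x0, dx) - exact)
          for dx in (0.1, 0.05, 0.025)]
ratios = [errors[i] / errors[i + 1] for i in range(len(errors) - 1)]
print([round(r, 1) for r in ratios])  # each entry close to 4.0
```

Halving Δx quarters the error, which is exactly the $O((\Delta x)^2)$ behavior of the truncation term.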

Finite difference methods do not explicitly exploit the conservation principle in deriving discrete equations. Though they yield discrete equations that look similar to those of other methods for simple cases, they are not guaranteed to do so in more complicated cases, for example on unstructured meshes.

Figure 2.10: One-Dimensional Mesh

By including the diffusion coefficient and dropping terms of $O((\Delta x)^2)$ or smaller, we can write

$$\Gamma \left(\frac{d^2\phi}{dx^2}\right)_2 \approx \Gamma\, \frac{\phi_3 + \phi_1 - 2\phi_2}{(\Delta x)^2} \qquad (2.6)$$

The source term S is evaluated at the point 2 using

$$S_2 = S(\phi_2) \qquad (2.7)$$

Substituting Equations 2.6 and 2.7 into Equation 2.1 gives the equation

$$\frac{2\Gamma}{(\Delta x)^2}\,\phi_2 = \frac{\Gamma}{(\Delta x)^2}\,\phi_1 + \frac{\Gamma}{(\Delta x)^2}\,\phi_3 + S_2 \qquad (2.8)$$

This is the discrete form of Equation 2.1. By obtaining an equation like this for every point in the mesh, we obtain a set of algebraic equations in the discrete values of $\phi$. This equation set may be solved by a variety of methods which we will discuss later in the book.

2.3.2 Finite Element Methods

We consider again the one-dimensional diffusion equation, Equation 2.1. There are different kinds of finite element methods; let us look at a popular variant, the Galerkin finite element method. Let $\tilde{\phi}$ be an approximation to $\phi$. Since $\tilde{\phi}$ is only an approximation, it does not satisfy Equation 2.1 exactly, so that there is a residual R:

$$\Gamma \frac{d^2\tilde{\phi}}{dx^2} + S = R \qquad (2.9)$$

We wish to find a $\tilde{\phi}$ such that

$$\int_{domain} W R\, dx = 0 \qquad (2.10)$$

where W is a weight function. Equation 2.10 requires that the residual R become zero in a weighted sense. In order to generate a set of discrete equations we use a family of

It is possible to start the discretization process with a direct statement of conservation on the control volume.11. as in Equation 1..11) The weight functions Wi are typically local in that they are non-zero over element i. The cell faces are denoted by w and e. Typically this variation is also local. Let us store discrete values of φ at cell centroids.weight functions Wi . Let us examine the discretization process by looking at one-dimensional diffusion with a source term: dφ d Γ dx dx   £ S¢ 0 (2.3 Finite Volume Method The ﬁnite volume method (sometimes called the control volume method) divides the domain in to a ﬁnite number of non-overlapping cells or control volumes over which conservation of φ is enforced in a discrete sense. For example we may assume that φ assumes a piece-wise linear proﬁle between points 1 and 2 and between points 2 and 3 in Figure 2.10. Thus. Further.3. We now turn to a method which employs conservation as a tool for developing discrete equations.e. we have made no approximation. rather than a single weight function. 2. Performing the integration in Equation 2. We start by integrating Equation 2. it does not enforce the conservation principle in its original form. Thus far. This yields Aw so that e d dφ Γ dx dx   dx £ A w Sdx ¢ e e 0 (2. The Galerkin ﬁnite element method requires that the weight and shape functions be the same. with cells as shown in Figure 2. i. P and E . we require ¢  CBDBDB A domain WiRdx ¢ 0 i ¢ 1  2 CBDBIB N (2.11 results in a set of algebraic equations in the nodal values of φ which may be solved by a variety of methods.14) We note that this equation can also be obtained by writing a heat balance over the cell P from ﬁrst principles. assume how φ varies between nodes.12 over the cell P. we assume a shape function for φ . Alternatively we may start with the differential equation and integrate it over the control volume. where N is the number of grid points. 
We should note here that because the Galerkin ﬁnite element method only requires the residual to be zero in some weighted sense. We focus on the cell associated with P.9 in the previous chapter. Let us assume the face areas to be unity. 31 .12) Consider a one-dimensional mesh. denoted by W . i 1 2 N .13) φ dφ  Γd  £ Sdx ¢ Γ © dx  e dx  w A w 0 (2. but are zero everywhere else in the domain.

Figure 2.11: Arrangement of Control Volumes

We now make a profile assumption, i.e., we make an assumption about how $\phi$ varies between cell centroids. If we assume that $\phi$ varies linearly between cell centroids, we may write

$$\Gamma_e \frac{\phi_E - \phi_P}{\delta x_e} - \Gamma_w \frac{\phi_P - \phi_W}{\delta x_w} + \bar{S}\Delta x = 0 \qquad (2.15)$$

Here $\bar{S}$ is the average value of S in the control volume. We note that the above equation is no longer exact, because of the approximation in assuming that $\phi$ varies in a piecewise-linear fashion between grid points. Collecting terms, we obtain

$$a_P \phi_P = a_E \phi_E + a_W \phi_W + b \qquad (2.16)$$

where

$$a_E = \frac{\Gamma_e}{\delta x_e}, \quad a_W = \frac{\Gamma_w}{\delta x_w}, \quad a_P = a_E + a_W, \quad b = \bar{S}\Delta x \qquad (2.17)$$

Equations similar to Equation 2.16 may be derived for all cells in the domain, yielding a set of algebraic equations, as before; these may be solved using a variety of direct or iterative methods. We note the following about the discretization process.

1. The process starts with the statement of conservation over the cell. We then find cell values of $\phi$ which satisfy this conservation statement. Thus conservation is guaranteed for each cell, regardless of mesh size.
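The discretization of Equations 2.15-2.17 can be assembled and solved directly. The sketch below is a minimal illustration on a unit domain with unit face areas; the Dirichlet boundary treatment (half-cell distances from the boundary cells to the boundary faces) is an assumption made for illustration, since boundary conditions are only taken up in the next chapter:

```python
import numpy as np

def solve_1d_diffusion(n_cells, gamma, s_bar, phi_left, phi_right):
    # Assemble a_P*phi_P = a_E*phi_E + a_W*phi_W + b (Eqs. 2.16-2.17) on a
    # uniform mesh of n_cells cells spanning [0, 1], with unit face areas.
    dx = 1.0 / n_cells
    A = np.zeros((n_cells, n_cells))
    b = np.full(n_cells, s_bar * dx)            # b = S_bar * dx  (Eq. 2.17)
    for i in range(n_cells):
        a_w = gamma / (dx if i > 0 else dx / 2.0)   # half-cell at boundaries
        a_e = gamma / (dx if i < n_cells - 1 else dx / 2.0)
        A[i, i] = a_w + a_e                     # a_P = a_E + a_W
        if i > 0:
            A[i, i - 1] = -a_w
        else:
            b[i] += a_w * phi_left              # known boundary value
        if i < n_cells - 1:
            A[i, i + 1] = -a_e
        else:
            b[i] += a_e * phi_right
    return np.linalg.solve(A, b)

# With S_bar = 0 the exact solution is linear, and the piecewise-linear
# profile assumption reproduces it exactly at the cell centroids.
n = 10
phi = solve_1d_diffusion(n, gamma=1.0, s_bar=0.0, phi_left=0.0, phi_right=1.0)
centroids = (np.arange(n) + 0.5) / n
print(np.max(np.abs(phi - centroids)))  # effectively zero (round-off)
```

The direct dense solve stands in for the solution methods discussed later; only the coefficient assembly is the point here.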

2. Conservation does not guarantee accuracy, however. The solution for $\phi$ may be inaccurate, but conservative.

3. The quantity $(\Gamma\, d\phi/dx)_e$ is the diffusion flux on the e face. The cell balance is written in terms of face fluxes, so the gradient of $\phi$ must be evaluated at the faces of the cell.

4. The profile assumptions for $\phi$ and S need not be the same.

We will examine additional properties of this discretization in the next chapter.
2.4 Solution of Discretization Equations
All the discretization methods described here result in a set of discrete algebraic equations which must be solved to obtain the discrete values of φ . These equations may be linear (i.e. the coefﬁcients are independent of φ ) or they may be non-linear (i.e. the coefﬁcients are functions of φ ). The solution techniques are independent of the discretization method, and represent the path to solution. For the linear algebraic sets we will encounter in this book, we are guaranteed that there is only one solution, and if our solution method gives us a solution, it is the solution we want. All solution methods (i.e. all paths to solution) which arrive at a solution will give us the same solution for the same set of discrete equations. For non-linear problems, we do not have this guarantee, and the answer we get may depend on factors like the initial guess, and the actual path to solution. Though this is an important issue in computing ﬂuid ﬂows, we will not address it here. Solution methods may be broadly classiﬁed as direct or iterative. We consider each brieﬂy below.

2.4.1 Direct Methods
Using one of the discretization methods described previously, we may write the resulting system of algebraic equations as

$$\mathbf{A}\boldsymbol{\phi} = \mathbf{B} \qquad (2.18)$$

where $\mathbf{A}$ is the coefficient matrix, $\boldsymbol{\phi} = [\phi_1, \phi_2, \ldots]^T$ is a vector consisting of the discrete values of $\phi$, and $\mathbf{B}$ is the vector resulting from the source terms. Direct methods solve the equation set 2.18 using the methods of linear algebra. The simplest direct method is inversion, whereby $\boldsymbol{\phi}$ is computed from

$$\boldsymbol{\phi} = \mathbf{A}^{-1}\mathbf{B} \qquad (2.19)$$

A solution for $\boldsymbol{\phi}$ is guaranteed if $\mathbf{A}^{-1}$ can be found. However, the operation count for inverting an $N \times N$ matrix is $O(N^3)$; consequently, inversion is almost never employed in practical problems. More efficient methods for linear systems are available. For the discretization methods of interest here, $\mathbf{A}$ is sparse, and for structured meshes it is banded. For certain types of equations, for example for pure diffusion, the matrix is symmetric. Matrix manipulation can take into account the special structure of $\mathbf{A}$ in devising efficient solution techniques for Equation 2.18. We will study one such method, the tri-diagonal matrix algorithm (TDMA), in a later chapter.

Direct methods are not widely used in computational fluid dynamics because of their large computational and storage requirements. Most industrial CFD problems today involve hundreds of thousands of cells, with 5-10 unknowns per cell even for simple problems. Thus the matrix $\mathbf{A}$ is usually very large, and most direct methods become impractical for these large problems. Furthermore, $\mathbf{A}$ is usually non-linear, so that the direct method must be embedded within an iterative loop to update non-linearities in $\mathbf{A}$. Thus, the direct method is applied over and over again, making it all the more time-consuming.
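The TDMA itself is deferred to a later chapter; as a preview, a minimal sketch of the algorithm (also known as the Thomas algorithm) is given below. Note that here the off-diagonal arguments are the raw matrix entries, signs included, which is an illustrative convention rather than the text's $a_P \phi_P = a_E \phi_E + a_W \phi_W + b$ form:

```python
def tdma(a_w, a_p, a_e, b):
    # Tri-diagonal matrix algorithm: solves
    #   a_w[i]*x[i-1] + a_p[i]*x[i] + a_e[i]*x[i+1] = b[i]
    # in O(N) work with one forward-elimination sweep (computing the
    # recurrence x[i] = p[i]*x[i+1] + q[i]) and one back-substitution sweep.
    n = len(b)
    p = [0.0] * n
    q = [0.0] * n
    for i in range(n):
        denom = a_p[i] + (a_w[i] * p[i - 1] if i > 0 else 0.0)
        p[i] = -a_e[i] / denom
        q[i] = (b[i] - (a_w[i] * q[i - 1] if i > 0 else 0.0)) / denom
    x = [0.0] * n
    x[-1] = q[-1]
    for i in range(n - 2, -1, -1):
        x[i] = p[i] * x[i + 1] + q[i]
    return x

# The 3x3 system [[2,-1,0],[-1,2,-1],[0,-1,2]] x = [1,0,1] has solution [1,1,1].
x = tdma([0.0, -1.0, -1.0], [2.0, 2.0, 2.0], [-1.0, -1.0, 0.0], [1.0, 0.0, 1.0])
print([round(v, 9) for v in x])  # [1.0, 1.0, 1.0]
```

The O(N) cost, against $O(N^3)$ for general elimination, is what makes banded structure worth exploiting.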

2.4.2 Iterative Methods
Iterative methods are the most widely used solution methods in computational fluid dynamics. These methods employ a guess-and-correct philosophy which progressively improves the guessed solution by repeated application of the discrete equations. Let us consider an extremely simple iterative method, the Gauss-Seidel method. The overall solution loop for the Gauss-Seidel method may be written as follows:

1. Guess the discrete values of $\phi$ at all grid points in the domain.

2. Visit each grid point in turn. Update $\phi$ using

$$\phi_P = \frac{a_E \phi_E + a_W \phi_W + b}{a_P} \qquad (2.20)$$

The neighbor values $\phi_E$ and $\phi_W$ are required for the update of $\phi_P$. These are assumed known at prevailing values. Thus, points which have already been visited will have recently updated values of $\phi$, and those that have not will have old values.

3. Sweep the domain until all grid points are covered. This completes one iteration.

4. Check if an appropriate convergence criterion is met. We may, for example, require that the maximum change in the grid-point values of $\phi$ be less than 0.1%. If the criterion is met, stop. Else, go to step 2.

The iteration procedure described here is not guaranteed to converge to a solution for arbitrary combinations of $a_P$, $a_E$ and $a_W$. Convergence of the process is guaranteed for linear problems if the Scarborough criterion is satisfied. The Scarborough criterion requires that

$$\frac{|a_E| + |a_W|}{|a_P|}\;\begin{cases} \le 1 & \text{for all grid points} \\ < 1 & \text{for at least one grid point} \end{cases} \qquad (2.21)$$

Matrices which satisfy the Scarborough criterion have diagonal dominance. We note that direct methods do not require the Scarborough criterion to be satisfied to obtain a
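The four steps above can be sketched directly. The test problem below (ten grid points, unit coefficients, end values folded into b) is an illustrative choice, not from the text; its coefficients satisfy the Scarborough criterion, so the sweep converges:

```python
def gauss_seidel(a_e, a_w, a_p, b, phi, tol=1e-6):
    # phi holds the initial guess (step 1).
    iterations = 0
    while True:
        max_change = 0.0
        for i in range(len(phi)):                 # step 2: visit each point
            east = phi[i + 1] if i + 1 < len(phi) else 0.0
            west = phi[i - 1] if i > 0 else 0.0
            new = (a_e[i] * east + a_w[i] * west + b[i]) / a_p[i]
            max_change = max(max_change, abs(new - phi[i]))
            phi[i] = new                          # prevailing values are used
        iterations += 1                           # step 3: one full sweep done
        if max_change < tol:                      # step 4: convergence check
            return phi, iterations

# Illustrative 1D problem: a_P = 2, a_E = a_W = 1, with end values 0 and 1
# folded into b; the exact discrete solution is phi_i = (i + 1)/11.
n = 10
phi, iters = gauss_seidel([1.0] * n, [1.0] * n, [2.0] * n,
                          [0.0] * (n - 1) + [1.0], [0.0] * n)
print(max(abs(p - (i + 1) / 11.0) for i, p in enumerate(phi)) < 1e-3)  # True
```

Only the current sweep's values are stored, which is the low-storage property noted below; the coefficients could equally be computed on the fly inside the loop.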

solution; we can always obtain a solution to our linear set of equations as long as our coefficient matrix is not singular.

The Gauss-Seidel scheme can be implemented with very little storage. All that is required is storage for the discrete values of $\phi$ at the grid points; the coefficients $a_P$, $a_E$, $a_W$ and b can be computed on the fly if desired, since the entire coefficient matrix for the domain is not required when updating the value of $\phi$ at any grid point. Also, the iterative nature of the scheme makes it particularly suitable for non-linear problems. If the coefficients depend on $\phi$, they may be updated using prevailing values of $\phi$ as the iterations proceed. Nevertheless, the Gauss-Seidel scheme is rarely used in practice for solving the systems encountered in CFD: the rate of convergence of the scheme decreases to unacceptably low levels if the system of equations is large. In a later chapter, we will use a multigrid method to accelerate the rate of convergence of this scheme and make it usable as a practical tool.

2.5 Accuracy, Consistency, Stability and Convergence

In this section, we turn to certain important properties of numerical methods.

2.5.1 Accuracy

Accuracy refers to the correctness of a numerical solution when compared to an exact solution. In most cases, however, we do not know the exact solution, so it is more useful to talk of the truncation error of a discretization method. The truncation error associated with the diffusion term using the finite difference method is $O((\Delta x)^2)$, as shown by Equation 2.5. This simply says that if $d^2\phi/dx^2$ is represented by the first term in Equation 2.5, the terms that are neglected are of $O((\Delta x)^2)$. Thus, if we double the x-direction mesh, we expect the truncation error to decrease by a factor of four. The truncation error of a discretization scheme is the largest truncation error of the individual terms in the equation being discretized. The order of a discretization method is n if its truncation error is $O((\Delta x)^n)$. (For unsteady problems, both spatial and temporal truncation errors must be considered.) If we refine the mesh, the error of a higher-order method is guaranteed to decrease more rapidly with mesh refinement than that of a method of lower order. It is important to understand that the truncation error tells us how fast the error will decrease with mesh refinement, but is not an indicator of how high the error is on the current mesh; even methods of very high order may yield inaccurate results on a given mesh.

2.5.2 Consistency

A consistent numerical method is one for which the truncation error tends to vanish as the mesh becomes finer and finer. We are guaranteed this if the truncation error is some power of the mesh spacing $\Delta x$ (or $\Delta t$). Sometimes we may come across schemes

where the truncation error of the method is $O(\Delta x / \Delta t)$. In such cases, consistency is not guaranteed unless $\Delta x$ is decreased faster than $\Delta t$. Consistency is a very important property: without it, we have no guarantee that mesh refinement will improve our solution.

2.5.3 Stability

The previous two properties refer to the behavior of the discretization method. Stability, in contrast, is a property of the path to solution. An iterative solution method is unstable or divergent if it fails to yield a solution to the discrete set. It is also possible to speak of the stability of time-marching schemes. When solving unsteady problems, we will use numerical methods which compute the solution at discrete instants of time, using the solution at one or more previous time steps as initial conditions. Stability analysis allows us to determine whether errors in the solution remain bounded as time marching proceeds; depending on the properties of the method, solution errors may be amplified or damped. An unstable time-marching scheme would not be able to reach steady state in an unsteady heat conduction problem, for example (assuming that a steady state exists). It is possible to analyze iterative and time-marching methods using stability analysis, but this is most convenient for linear problems, and is usually too difficult for most realistic problems: non-linearities in the governing equations, boundary conditions, as well as coupling between multiple governing equations, make a formal analysis difficult. In reality, the practitioner of CFD must rely on experience and intuition in devising stable solution methods.

2.5.4 Convergence

We distinguish between two popular usages of the term convergence. We may say that an iterative method has converged to a solution, or that we have obtained convergence using a particular method. By this we mean that our iterative method has successfully obtained a solution to our discrete algebraic equation set. We may also speak of convergence to mesh independence. By this, we mean the process of mesh refinement, and its use in obtaining solutions that are essentially invariant with further refinement. We shall use the term in both senses in this book.

2.6 Closure

In this chapter, we have presented a broad overview of discretization and introduced terminology associated with numerical methods. We have learned that there are a number of different philosophies for discretizing the scalar transport equation. Of these, only the finite volume method enforces conservation on each cell, and thus ensures that both local and global conservation are guaranteed no matter how coarse the mesh. Using any of these discretization methods, we obtain a discretized set of algebraic equations which must be solved; we may choose to solve this set using either direct or iterative methods. In the next chapter, we consider the finite volume method in more detail, and study the properties of the discretizations it produces when applied to diffusion problems.

Chapter 3

The Diffusion Equation: A First Look

In this chapter we turn our attention to an important physical process, namely diffusion. Diffusion operators are common in heat, mass and momentum transfer, and can also be used to model electrostatics, radiation, and other physics. We consider the discretization and solution of the scalar transport equation for both steady and unsteady diffusion problems. We will attempt to relate the properties of our discrete equations to the behavior of the canonical partial differential equations we studied previously. The methodology we develop in this chapter will allow us to examine more complicated mesh types and physics in later chapters.

3.1 Two-Dimensional Diffusion in Rectangular Domain

Let us consider the steady two-dimensional diffusion of a scalar $\phi$ in a rectangular domain. From Equation 1.10, the governing scalar transport equation may be written as

$$\nabla \cdot \mathbf{J} = S \qquad (3.1)$$

where $\mathbf{J} = J_x \mathbf{i} + J_y \mathbf{j}$ is the diffusion flux vector and is given by

$$\mathbf{J} = -\Gamma \nabla \phi \qquad (3.2)$$

In Cartesian geometries, the gradient operator is given by

$$\nabla = \mathbf{i}\,\frac{\partial}{\partial x} + \mathbf{j}\,\frac{\partial}{\partial y} \qquad (3.3)$$

We note that Equation 3.1 is written in conservative or divergence form. When $\Gamma$ is constant and S is zero, the equation defaults to the familiar Laplace equation; when $\Gamma$ is constant and S is non-zero, the Poisson equation is obtained.

Figure 3.1: Two-Dimensional Control Volume

3.1.1 Discretization

The arrangement of cells under consideration is shown in Figure 3.1. We focus on cell P and its neighbors, the cells E, W, N and S. Discrete values of $\phi$ are stored at cell centroids. We also store the diffusion coefficient $\Gamma$ at cell centroids. The faces e, w, n and s are associated with area vectors $\mathbf{A}_e$, $\mathbf{A}_w$, $\mathbf{A}_n$ and $\mathbf{A}_s$. The vectors are positive pointing outwards from the cell P. The volume of the cell P is $\Delta V = \Delta x\, \Delta y$.

We begin the process of discretization by integrating Equation 3.1 over the cell P:

$$\int_{\Delta V} \nabla \cdot \mathbf{J}\, dV = \int_{\Delta V} S\, dV \qquad (3.4)$$

Next, we apply the divergence theorem to yield

$$\oint_A \mathbf{J} \cdot d\mathbf{A} = \int_{\Delta V} S\, dV \qquad (3.5)$$

The first integral represents the integral over the control surface A of the cell. We have made no approximations thus far. We now make a profile assumption about the flux vector $\mathbf{J}$. We assume that $\mathbf{J}$ varies linearly over each face of the cell P, so that it may be represented by its value at the face centroid. We also assume that the mean value of the source term S over the control volume is $\bar{S}$. Thus, Equation 3.5 may be written as

$$(\mathbf{J}\cdot\mathbf{A})_e + (\mathbf{J}\cdot\mathbf{A})_w + (\mathbf{J}\cdot\mathbf{A})_n + (\mathbf{J}\cdot\mathbf{A})_s = \bar{S}\,\Delta V \qquad (3.6)$$

or, more compactly,

$$\sum_f \mathbf{J}_f \cdot \mathbf{A}_f = \bar{S}\,\Delta V \qquad (3.7)$$

The face areas $\mathbf{A}_e$ and $\mathbf{A}_w$ are given by

$$\mathbf{A}_e = \Delta y\, \mathbf{i}, \qquad \mathbf{A}_w = -\Delta y\, \mathbf{i} \qquad (3.8)$$

The other area vectors may be written analogously. Further,

$$\mathbf{J}_e \cdot \mathbf{A}_e = -\Gamma_e \Delta y \left(\frac{\partial \phi}{\partial x}\right)_e, \qquad \mathbf{J}_w \cdot \mathbf{A}_w = \Gamma_w \Delta y \left(\frac{\partial \phi}{\partial x}\right)_w \qquad (3.9)$$

The transport in the other directions may be written analogously. In order to complete the discretization process, we make one more round of profile assumptions: we assume that $\phi$ varies linearly between cell centroids. Thus, Equation 3.9 may be written as

$$\mathbf{J}_e \cdot \mathbf{A}_e = -\Gamma_e \Delta y\, \frac{\phi_E - \phi_P}{\delta x_e}, \qquad \mathbf{J}_w \cdot \mathbf{A}_w = \Gamma_w \Delta y\, \frac{\phi_P - \phi_W}{\delta x_w} \qquad (3.10)$$

Similar expressions may be written for the other fluxes. Let us assume that the source term S has the form

$$\bar{S} = S_C + S_P \phi_P \qquad (3.11)$$

with $S_P \le 0$; we say that S has been linearized. We will see later how general forms of S can be written in this way. Substituting Equations 3.10 and 3.11 into Equation 3.6 yields a discrete equation for $\phi_P$:

$$a_P \phi_P = a_E \phi_E + a_W \phi_W + a_N \phi_N + a_S \phi_S + b \qquad (3.12)$$

where

$$a_E = \frac{\Gamma_e \Delta y}{\delta x_e}, \quad a_W = \frac{\Gamma_w \Delta y}{\delta x_w}, \quad a_N = \frac{\Gamma_n \Delta x}{\delta y_n}, \quad a_S = \frac{\Gamma_s \Delta x}{\delta y_s}$$
$$a_P = a_E + a_W + a_N + a_S - S_P \Delta x \Delta y, \qquad b = S_C \Delta x \Delta y \qquad (3.13)$$

Equation 3.12 may be written in a more compact form as

$$a_P \phi_P = \sum_{nb} a_{nb}\, \phi_{nb} + b \qquad (3.14)$$

Here, the subscript nb denotes the cell neighbors E, W, N and S.

3.1.2 Discussion

We make the following important points about the discretization we have done so far:

1. The discrete equation expresses a balance of discrete fluxes (the J's) and the source term. Thus conservation over individual control volumes is guaranteed.

2. However, overall conservation in the calculation domain is not guaranteed unless the diffusion transfer from one cell enters the next cell. In writing the balance for cell E, for example, we must ensure that the flux used on its face e is the same $\mathbf{J}_e \cdot \mathbf{A}_e$ used for cell P.

3. The coefficients $a_P$ and $a_{nb}$ are all of the same sign; in our case they are all positive. This has physical meaning: if the temperature at E is increased, we would expect the temperature at P to increase, not decrease. (The solution to our elliptic partial differential equation also has this property.) Many higher-order schemes do not have this property. This does not mean these schemes are wrong; it means they do not have a property we would like to have if at all possible.
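The coefficient definitions of Equation 3.13 are easily tabulated in code. The sketch below (unit mesh spacing and diffusion coefficient are illustrative values, not from the text) also checks the property discussed above: with $S_P \le 0$, $a_P$ is at least the sum of the neighbor coefficients:

```python
def cell_coefficients(gamma, dx, dy, dxe, dxw, dyn, dys, s_c, s_p):
    # Equation 3.13 for an interior cell (a single Gamma for simplicity).
    a_e = gamma * dy / dxe
    a_w = gamma * dy / dxw
    a_n = gamma * dx / dyn
    a_s = gamma * dx / dys
    a_p = a_e + a_w + a_n + a_s - s_p * dx * dy
    b = s_c * dx * dy
    return a_e, a_w, a_n, a_s, a_p, b

a_e, a_w, a_n, a_s, a_p, b = cell_coefficients(
    1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, s_c=2.0, s_p=-0.5)
print(a_p - (a_e + a_w + a_n + a_s))  # 0.5, i.e. -S_P * dx * dy
```

With $S_P = 0$ the difference vanishes and $\phi_P$ becomes a pure weighted average of its neighbors, which is the boundedness property discussed next.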

4. When $S_P = 0$, $a_P = \sum_{nb} a_{nb}$, and Equation 3.14 may then be written as

$$\phi_P = \sum_{nb} \frac{a_{nb}}{a_P}\, \phi_{nb} \qquad (3.16)$$

where

$$\sum_{nb} \frac{a_{nb}}{a_P} = 1 \qquad (3.17)$$

Since $\phi_P$ is the weighted sum of its neighbor values, it is always bounded by them. By extension, $\phi_P$ is always bounded by the boundary values of $\phi$. We notice that this property is also shared by our canonical elliptic equation. In addition, when S = 0, we notice that $\phi$ and $\phi + C$ are solutions to Equation 3.14; this is also true of the original differential equation. The solution can be made unique by specifying boundary conditions on $\phi$ which fix the value of $\phi$ at some point on the boundary.

5. When $S_P \ne 0$, $\phi_P$ need not be bounded in this manner, and can overshoot or undershoot its boundary values, but this is perfectly physical. The amount of overshoot is determined by the magnitudes of $S_C$ and $S_P$ with respect to the $a_{nb}$'s. We require that $S_P$ in Equation 3.11 be negative. This also has physical meaning: if, for example, S is a temperature source, we do not want a situation where, as T increases, S increases indefinitely. We control this behavior in the numerical scheme by insisting that $S_P$ be kept negative.

3.2 Boundary Conditions

A typical boundary control volume is shown in Figure 3.2; a boundary control volume is one which has one or more faces on the boundary. Let us consider the discretization process for a near-boundary control volume centered about the cell centroid P, with a face on the boundary. The boundary face centroid is denoted by b. Discrete values of $\phi$ are stored at cell centroids, as before. In addition, we store discrete values of $\phi$ at the centroids of boundary faces. Integrating the governing transport equation over the cell P as before yields

$$(\mathbf{J}\cdot\mathbf{A})_e + (\mathbf{J}\cdot\mathbf{A})_n + (\mathbf{J}\cdot\mathbf{A})_s + (\mathbf{J}\cdot\mathbf{A})_b = \bar{S}\,\Delta V \qquad (3.18)$$

The fluxes on the interior faces are discretized as before. The face area vector of the boundary face is $\mathbf{A}_b$, and points outward from the cell P as shown; it is given by

$$\mathbf{A}_b = -\Delta y\, \mathbf{i} \qquad (3.19)$$

Let us assume that the boundary flux $\mathbf{J}_b$ is given by its value at the boundary face centroid. Thus

$$\mathbf{J}_b = -\Gamma_b \nabla \phi_b \qquad (3.20)$$

Figure 3.2: Boundary Control Volume

so that

$$\mathbf{J}_b \cdot \mathbf{A}_b = \Gamma_b \Delta y\, (\nabla \phi_b \cdot \mathbf{i}) \qquad (3.21)$$

Assuming that $\phi$ varies linearly between b and P, we write

$$\mathbf{J}_b \cdot \mathbf{A}_b = \Gamma_b \Delta y\, \frac{\phi_P - \phi_b}{\delta x_b} \qquad (3.22)$$

The specification of boundary conditions involves either specifying the unknown boundary value $\phi_b$, or, alternatively, the boundary flux $\mathbf{J}_b$. Let us consider some common boundary conditions next.

3.2.1 Dirichlet Boundary Condition

The boundary condition is given by

$$\phi_b = \phi_{b,given} \qquad (3.23)$$

Using $\phi_{b,given}$ in Equation 3.22, and including $\mathbf{J}_b \cdot \mathbf{A}_b$ in Equation 3.18, yields the following discrete equation for the boundary cell P:

$$a_P \phi_P = a_E \phi_E + a_N \phi_N + a_S \phi_S + b \qquad (3.24)$$

where

$$a_E = \frac{\Gamma_e \Delta y}{\delta x_e}, \quad a_N = \frac{\Gamma_n \Delta x}{\delta y_n}, \quad a_S = \frac{\Gamma_s \Delta x}{\delta y_s}, \quad a_b = \frac{\Gamma_b \Delta y}{\delta x_b}$$
$$a_P = a_E + a_N + a_S + a_b - S_P \Delta x \Delta y, \qquad b = a_b\, \phi_{b,given} + S_C \Delta x \Delta y \qquad (3.25)$$

We note the following important points about the above discretization:

1. At Dirichlet boundaries, $a_P > a_E + a_N + a_S$. This property ensures that the Scarborough criterion is satisfied for problems with Dirichlet boundary conditions.

2. $\phi_P$ is guaranteed to be bounded by the values of $\phi_E$, $\phi_N$, $\phi_S$ and $\phi_b$ if $S_C$ and $S_P$ are zero. This is in keeping with the behavior of the canonical elliptic partial differential equation we encountered earlier.
3.2.2 Neumann Boundary Condition
Here, we are given the normal gradient of $\phi$ at the boundary:

$$-\left(\Gamma \nabla \phi\right)_b \cdot \mathbf{i} = q_{b,given} \qquad (3.26)$$

We are in effect given the flux $\mathbf{J}_b$ at Neumann boundaries:

$$\mathbf{J}_b \cdot \mathbf{A}_b = -q_{b,given}\, \Delta y \qquad (3.27)$$

We may thus include $\mathbf{J}_b \cdot \mathbf{A}_b$ directly in Equation 3.18 to yield the following discrete equation for the boundary cell P:

$$a_P \phi_P = a_E \phi_E + a_N \phi_N + a_S \phi_S + b \qquad (3.28)$$

where

$$a_E = \frac{\Gamma_e \Delta y}{\delta x_e}, \quad a_N = \frac{\Gamma_n \Delta x}{\delta y_n}, \quad a_S = \frac{\Gamma_s \Delta x}{\delta y_s}$$
$$a_P = a_E + a_N + a_S - S_P \Delta x \Delta y, \qquad b = q_{b,given}\, \Delta y + S_C \Delta x \Delta y \qquad (3.29)$$

We note the following about the discretization at the boundary cell P:

1. $a_P = a_E + a_N + a_S$ at Neumann boundaries if S = 0.

2. If both $q_{b,given}$ and S are zero, $\phi_P$ is bounded by its neighbors. Otherwise, $\phi_P$ can exceed (or fall below) the neighbor values of $\phi$. This is admissible: if heat were being added at the boundary, for example, we would expect the temperature in the region close to the boundary to be higher than that in the interior.

3. Once $\phi_P$ is computed, the boundary value $\phi_b$ may be computed using Equation 3.22:

$$\phi_b = \frac{q_{b,given} + \left(\Gamma_b / \delta x_b\right)\phi_P}{\Gamma_b / \delta x_b} \qquad (3.30)$$
3.2.3 Mixed Boundary Condition
The mixed boundary condition is given by

$$-\left(\Gamma \nabla \phi\right)_b \cdot \mathbf{i} = h_b\left(\phi_\infty - \phi_b\right) \qquad (3.31)$$

Since $\mathbf{A}_b = -\Delta y\, \mathbf{i}$, we are given that

$$\mathbf{J}_b \cdot \mathbf{A}_b = -h_b\left(\phi_\infty - \phi_b\right)\Delta y \qquad (3.32)$$

Using Equation 3.22 we may write

$$\Gamma_b\, \frac{\phi_P - \phi_b}{\delta x_b} = -h_b\left(\phi_\infty - \phi_b\right) \qquad (3.33)$$

We may thus write $\phi_b$ as

$$\phi_b = \frac{h_b \phi_\infty + \left(\Gamma_b / \delta x_b\right)\phi_P}{h_b + \left(\Gamma_b / \delta x_b\right)} \qquad (3.34)$$

Using Equation 3.34 to eliminate $\phi_b$ from Equation 3.33, we may write

$$\mathbf{J}_b \cdot \mathbf{A}_b = -R_{eq}\left(\phi_\infty - \phi_P\right)\Delta y \qquad (3.35)$$

where

$$R_{eq} = \frac{h_b \left(\Gamma_b / \delta x_b\right)}{h_b + \left(\Gamma_b / \delta x_b\right)} \qquad (3.36)$$

We are now ready to write the discretization equation for the boundary control volume P. Substituting the boundary flux from Equation 3.35 into Equation 3.18 gives the following discrete equation for $\phi_P$:

$$a_P \phi_P = a_E \phi_E + a_N \phi_N + a_S \phi_S + b \qquad (3.37)$$

where

$$a_E = \frac{\Gamma_e \Delta y}{\delta x_e}, \quad a_N = \frac{\Gamma_n \Delta x}{\delta y_n}, \quad a_S = \frac{\Gamma_s \Delta x}{\delta y_s}, \quad a_b = R_{eq}\, \Delta y$$
$$a_P = a_E + a_N + a_S + a_b - S_P \Delta x \Delta y, \qquad b = R_{eq}\, \Delta y\, \phi_\infty + S_C \Delta x \Delta y \qquad (3.38)$$

We note the following about the above discretization:

1. $a_P > a_E + a_N + a_S$ for the boundary cell P if S = 0. Thus, mixed boundaries are like Dirichlet boundaries in that the boundary condition helps ensure that the Scarborough criterion is met.

2. The cell value $\phi_P$ is bounded by its neighbor values $\phi_E$, $\phi_N$ and $\phi_S$, and the external value $\phi_\infty$.

3. The boundary value $\phi_b$ may be computed from Equation 3.34 once a solution has been obtained. It is bounded by $\phi_P$ and $\phi_\infty$, as shown by Equation 3.34.
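Equations 3.34 and 3.36 behave sensibly in their limits, which is easy to check numerically. In the sketch below (all values illustrative), a very large $h_b$ drives $\phi_b$ toward $\phi_\infty$, recovering a Dirichlet-like boundary, while $h_b \to 0$ drives $R_{eq}$ to zero, an adiabatic boundary:

```python
def mixed_boundary(h_b, gamma_b, dx_b, phi_inf, phi_p):
    g = gamma_b / dx_b
    r_eq = h_b * g / (h_b + g)                        # Equation 3.36
    phi_b = (h_b * phi_inf + g * phi_p) / (h_b + g)   # Equation 3.34
    return r_eq, phi_b

# h_b -> infinity: Dirichlet-like limit, phi_b -> phi_inf.
_, phi_b_hi = mixed_boundary(1e9, 1.0, 0.1, 350.0, 300.0)
# h_b -> 0: adiabatic limit, R_eq -> 0.
r_eq_lo, _ = mixed_boundary(0.0, 1.0, 0.1, 350.0, 300.0)
print(round(phi_b_hi, 3), r_eq_lo)  # 350.0 0.0
```

$R_{eq}$ is simply the series combination of the convective conductance $h_b$ and the conductive conductance $\Gamma_b/\delta x_b$ of the half-cell next to the wall.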
¤ ! ¦£ ¤ We are given initial conditions φ x  y 0 ¦ .
∂ ρφ ∂t

Let us now consider the unsteady counterpart of Equation 3.1: ∇ J

% ¢

S

(3.39)

As we saw in a previous chapter, time is a “marching” coordinate. By knowing the initial condition, and taking discrete time steps ∆t , we wish to obtain the solution for φ at a each discrete time instant. In order to discretize Equation 3.39, we integrate it over the control volume as usual. We also integrate it over the time step ∆t , i.e., from t to t ∆t .

  ∫_t^(t+∆t) ∫_∆V [∂(ρφ)/∂t] dV dt + ∫_t^(t+∆t) ∫_∆V ∇ · J dV dt = ∫_t^(t+∆t) ∫_∆V S dV dt    (3.40)

Applying the divergence theorem as before, we obtain

  ∫_∆V [(ρφ)¹ − (ρφ)⁰] dV + ∫_t^(t+∆t) ∫_A J · dA dt = ∫_t^(t+∆t) ∫_∆V S dV dt    (3.41)

The superscripts 1 and 0 in the first integral denote the values at the times t + ∆t and t respectively. Let us consider each term in turn. If we assume that the cell centroid value (ρφ)_P prevails over the control volume, we may write

  ∫_∆V ρφ dV = (ρφ)_P ∆V    (3.42)
where nb denotes E, W, N and S. Further

  a_E = Γ_e ∆y/(δx)_e
  a_W = Γ_w ∆y/(δx)_w
  a_N = Γ_n ∆x/(δy)_n
  a_S = Γ_s ∆x/(δy)_s
  a⁰_P = ρ∆V/∆t
  a_P = f ∑_nb a_nb + a⁰_P − f S_P ∆V
  b = [f S_C + (1 − f) S⁰_C + (1 − f) S⁰_P φ⁰_P] ∆V    (3.52)

It is useful to examine the behavior of Equation 3.51 for a few limiting cases.

3.3.1 The Explicit Scheme

If we set f = 0, we obtain the explicit scheme. In this limit, we obtain the following discrete equations:

  a_P φ_P = ∑_nb a_nb φ⁰_nb + b + (a⁰_P − ∑_nb a_nb) φ⁰_P    (3.53)

and

  a_E = Γ_e ∆y/(δx)_e
  a_W = Γ_w ∆y/(δx)_w
  a_N = Γ_n ∆x/(δy)_n
  a_S = Γ_s ∆x/(δy)_s
  a⁰_P = ρ∆V/∆t
  a_P = a⁰_P
  b = [S⁰_C + S⁰_P φ⁰_P] ∆V    (3.54)

We notice the following about the discretization:

1. The right hand side of Equation 3.53 contains values exclusively from the previous time t. Thus, it is possible for us to evaluate the right hand side completely, and find the value of φ_P at time t + ∆t, given the condition at time t. This means that the flux and source terms are evaluated using values exclusively from the previous time step. We do not need to solve a set of linear algebraic equations to find φ_P. This type of scheme is very simple and convenient.

2. The explicit scheme corresponds to the assumption that φ⁰_P prevails over the entire time step. If steady state is reached through time marching, we are assured that our solution upon reaching steady state is the same as that we would have obtained if we had solved a steady problem in the first place.

3. We see that the term multiplying φ⁰_P can become negative when

  a⁰_P < ∑_nb a_nb    (3.55)

When a⁰_P < ∑_nb a_nb, we see that an increase in φ at the previous time instant can cause a decrease in φ at the current instant. This type of behavior is not possible with a parabolic partial differential equation. We can avoid this by requiring a⁰_P ≥ ∑_nb a_nb. For a uniform mesh and constant properties, this restriction can be shown, for one-, two- and three-dimensional cases respectively, to be

  ∆t ≤ ρ(∆x)²/(2Γ)    (3.56)

  ∆t ≤ ρ(∆x)²/(4Γ)    (3.57)

and

  ∆t ≤ ρ(∆x)²/(6Γ)    (3.58)

This condition is sometimes called the von Neumann stability criterion in the literature.

4. Though simple and convenient, the scheme suffers from a serious drawback: it requires that the forward time step be limited by the square of the mesh size. It requires us to take smaller and smaller time steps as our mesh is refined, leading to very long computational times. This dependence on (∆x)² is very restrictive in practice.

5. We will see later in the chapter that the explicit scheme has a truncation error of O(∆t); the error reduces only linearly with time-step refinement.

3.3.2 The Fully-Implicit Scheme

The fully-implicit scheme is obtained by setting f = 1 in Equation 3.51. In this limit, we obtain the following discrete equation for φ_P:

  a_P φ_P = ∑_nb a_nb φ_nb + b + a⁰_P φ⁰_P    (3.59)
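A minimal sketch of the explicit update for one-dimensional unsteady diffusion (our own illustration, not code from the notes): constant ρ and Γ, no source, Dirichlet values held at the ends, and a time step chosen at half the limit of Equation 3.56. Only old-time values appear on the right hand side.

```python
def explicit_step(phi_old, rho, gamma, dx, dt):
    # One explicit (f = 0) step of Eq. 3.53 on a uniform 1D mesh, S = 0:
    # a_E = a_W = Gamma/dx and a_P = a0_P = rho*dx/dt for interior cells.
    a_nb = gamma / dx
    a0_p = rho * dx / dt
    phi = phi_old[:]                       # end values held fixed (Dirichlet)
    for i in range(1, len(phi_old) - 1):
        phi[i] = (a_nb * (phi_old[i - 1] + phi_old[i + 1])
                  + (a0_p - 2.0 * a_nb) * phi_old[i]) / a0_p
    return phi

rho, gamma, dx = 1.0, 1.0, 0.1                 # assumed example values
dt = 0.5 * rho * dx * dx / (2.0 * gamma)       # half the limit of Eq. 3.56
phi = [0.0] * 11
phi[0], phi[-1] = 100.0, 0.0                   # boundary values
for _ in range(2000):                          # march to steady state
    phi = explicit_step(phi, rho, gamma, dx, dt)
```

Marching long enough recovers the linear steady profile, and with the time step inside the stability limit every updated value remains a convex combination of old values, so the solution stays bounded by the boundary values.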

with

  a_E = Γ_e ∆y/(δx)_e
  a_W = Γ_w ∆y/(δx)_w
  a_N = Γ_n ∆x/(δy)_n
  a_S = Γ_s ∆x/(δy)_s
  a⁰_P = ρ∆V/∆t
  a_P = ∑_nb a_nb + a⁰_P − S_P ∆V
  b = S_C ∆V    (3.60)

We note the following important points about the implicit scheme:

1. The fully-implicit scheme corresponds to assuming that φ_P prevails over the entire time step.

2. The solution at time t + ∆t requires the solution of a set of algebraic equations.

3. In the absence of source terms, a_P = ∑_nb a_nb + a⁰_P, i.e., the Scarborough criterion is satisfied. We may consider φ⁰_P to be the time neighbor of φ_P: we are guaranteed that φ_P is bounded by the values of its spatial neighbors at t + ∆t and by the value at point P at the previous time.

4. There is no time step restriction on the fully-implicit scheme. We can take as big a time step as we wish without getting an unrealistic φ_P. This is in keeping with the behavior of the canonical parabolic partial differential equation. Because of this property, the scheme is frequently used in CFD practice.

5. When ∆t → ∞, we see that the discrete equation for steady state is recovered. Also, if we reach steady state by time marching, we recover the discrete algebraic set governing steady diffusion.

6. We will see later in this chapter that the truncation error of the fully-implicit scheme is O(∆t), i.e., it is a first-order accurate scheme, so the rate of error reduction with time step is rather slow. Though the coefficients it produces have useful properties, physical plausibility does not imply accuracy – it is possible to get plausible but inaccurate answers if our time steps are too big.
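In practice each fully-implicit step requires a linear solve; in 1D the system is tridiagonal and is commonly solved with the Thomas algorithm (TDMA). The sketch below (our own minimal 1D example, not code from the notes) shows that even a time step far beyond the explicit limit yields bounded, physically plausible values.

```python
def tdma(a_w, a_p, a_e, b):
    # Thomas algorithm for a_P[i]*x[i] = a_W[i]*x[i-1] + a_E[i]*x[i+1] + b[i].
    n = len(b)
    p, q = [0.0] * n, [0.0] * n
    for i in range(n):
        denom = a_p[i] - (a_w[i] * p[i - 1] if i else 0.0)
        p[i] = a_e[i] / denom
        q[i] = (b[i] + (a_w[i] * q[i - 1] if i else 0.0)) / denom
    x = [0.0] * n
    x[-1] = q[-1]
    for i in range(n - 2, -1, -1):
        x[i] = p[i] * x[i + 1] + q[i]
    return x

def implicit_step(phi_old, rho, gamma, dx, dt, phi_left, phi_right):
    # One fully-implicit (f = 1) step on a uniform 1D mesh, S = 0.
    n = len(phi_old)
    a_nb = gamma / dx
    a0_p = rho * dx / dt
    a_w = [a_nb] * n
    a_e = [a_nb] * n
    a_p = [2.0 * a_nb + a0_p] * n
    b = [a0_p * v for v in phi_old]
    b[0] += a_nb * phi_left;  a_w[0] = 0.0     # fold Dirichlet values into b
    b[-1] += a_nb * phi_right; a_e[-1] = 0.0
    return tdma(a_w, a_p, a_e, b)

phi = [0.0] * 9                                 # interior cells only
for _ in range(50):
    # dt = 10 is ~2000x the explicit limit for this mesh; still stable.
    phi = implicit_step(phi, 1.0, 1.0, 0.1, 10.0, 100.0, 0.0)
```

With such a large time step the scheme essentially jumps to the steady solution in a step or two; the result is plausible, but, as noted above, plausibility at large ∆t does not imply time accuracy.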

3.3.3 The Crank-Nicholson Scheme

The Crank-Nicholson scheme is obtained by setting f = 0.5. With this value of f, our discrete equation becomes

  a_P φ_P = ∑_nb a_nb (0.5 φ_nb + 0.5 φ⁰_nb) + b + (a⁰_P − 0.5 ∑_nb a_nb) φ⁰_P    (3.61)

with

  a_E = Γ_e ∆y/(δx)_e
  a_W = Γ_w ∆y/(δx)_w
  a_N = Γ_n ∆x/(δy)_n
  a_S = Γ_s ∆x/(δy)_s
  a⁰_P = ρ∆V/∆t
  a_P = 0.5 ∑_nb a_nb + a⁰_P − 0.5 S_P ∆V
  b = 0.5 (S_C + S⁰_C) ∆V + 0.5 S⁰_P φ⁰_P ∆V    (3.62)

We note the following about the Crank-Nicholson scheme:

1. For a⁰_P < 0.5 ∑_nb a_nb the term multiplying φ⁰_P becomes negative, leading to the possibility of unphysical solutions. Indeed, any value of f different from one will have this property.

2. The Crank-Nicholson scheme essentially makes a linear assumption about the variation of φ_P with time between t and t + ∆t. Consequently, the scheme has a truncation error of O(∆t²). We will see later in this chapter that though this leads to a possibility of negative coefficients for large time steps, the error in our solutions, if the scheme is used with care, can be reduced more rapidly with time-step refinement than with the other schemes we have encountered thus far.

3.4 Diffusion in Polar Geometries

Since Equation 3.1 is written in vector form, it may be used to describe diffusive transport in other coordinate systems as well. Indeed, much of our derivation thus far can be applied with little change to other systems. Let us consider two-dimensional polar geometries next. A typical control volume is shown in Figure 3.3. The control volume is located in the r-θ plane and is bounded by surfaces of constant r and θ. The grid point P is located at the cell centroid. The volume of the control volume is ∆V = r_P ∆θ ∆r.

[Figure 3.3: Control Volume in Polar Geometry — cell P with neighbors E, W, N, S; face spacings δθ_e, δθ_w, δr_n, δr_s; cell widths ∆θ and ∆r; unit vectors e_r and e_θ]

We assume ∂φ/∂x = 0, so that all transport is confined to the r-θ plane. We also assume steady diffusion, though the unsteady counterpart is easily derived. Integrating Equation 3.1 over the control volume as before, and applying the divergence theorem, yields Equation 3.6:

  (J · A)_e + (J · A)_w + (J · A)_n + (J · A)_s = S̄ ∆V    (3.63)

The face area vectors are given by

  A_e = ∆r e_θ
  A_w = −∆r e_θ
  A_n = r_n ∆θ e_r
  A_s = −r_s ∆θ e_r    (3.64)

We recall that the diffusive flux J is given by

  J = −Γ ∇φ    (3.65)

For polar geometries the gradient operator is given by

  ∇ = e_r ∂/∂r + e_θ (1/r) ∂/∂θ    (3.66)

Thus the fluxes on the faces are given by

  J_e · A_e = −Γ_e ∆r (1/r_e) (∂φ/∂θ)_e
  J_w · A_w = Γ_w ∆r (1/r_w) (∂φ/∂θ)_w
  J_n · A_n = −Γ_n r_n ∆θ (∂φ/∂r)_n
  J_s · A_s = Γ_s r_s ∆θ (∂φ/∂r)_s    (3.67)

Assuming that φ varies linearly between grid points yields

  J_e · A_e = −Γ_e ∆r (φ_E − φ_P)/(r_e δθ_e)
  J_w · A_w = −Γ_w ∆r (φ_W − φ_P)/(r_w δθ_w)
  J_n · A_n = −Γ_n r_n ∆θ (φ_N − φ_P)/(δr)_n
  J_s · A_s = −Γ_s r_s ∆θ (φ_S − φ_P)/(δr)_s    (3.68)

The source term may be written as

  ∫_∆V S dV = (S_C + S_P φ_P) ∆V    (3.69)

Collecting terms, we may write the discrete equation for the cell P as

  a_P φ_P = a_E φ_E + a_W φ_W + a_N φ_N + a_S φ_S + b    (3.70)

where

  a_E = Γ_e ∆r/(r_e δθ_e)
  a_W = Γ_w ∆r/(r_w δθ_w)
  a_N = Γ_n r_n ∆θ/(δr)_n
  a_S = Γ_s r_s ∆θ/(δr)_s
  a_P = a_E + a_W + a_N + a_S − S_P ∆V
  b = S_C ∆V    (3.71)

We note the following about the above discretization:
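The coefficients of Equation 3.71 can be assembled directly for a given cell. The sketch below (our own, with assumed uniform cell-geometry inputs: face radii r_P ± ∆r/2 and equal neighbor spacings) builds them for a single polar cell and confirms that a_P = a_E + a_W + a_N + a_S when S_P = 0.

```python
import math

def polar_coeffs(r_p, dr, dtheta, gamma, s_p=0.0):
    # Coefficients of Eq. 3.71 for a uniform polar mesh: neighbor spacings
    # (delta theta)_e = (delta theta)_w = dtheta, (delta r)_n = (delta r)_s = dr.
    r_n, r_s = r_p + 0.5 * dr, r_p - 0.5 * dr
    r_e = r_w = r_p
    vol = r_p * dtheta * dr                    # cell volume (per unit depth)
    a_e = gamma * dr / (r_e * dtheta)
    a_w = gamma * dr / (r_w * dtheta)
    a_n = gamma * r_n * dtheta / dr
    a_s = gamma * r_s * dtheta / dr
    a_p = a_e + a_w + a_n + a_s - s_p * vol
    return a_e, a_w, a_n, a_s, a_p

coeffs = polar_coeffs(r_p=1.0, dr=0.1, dtheta=math.pi / 18, gamma=2.0)
```

Note that a_N > a_S for the same Γ: the outer face is longer (area r_n ∆θ versus r_s ∆θ), which is exactly the geometric effect the polar face areas carry.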

1. Regardless of the shape of the control volume, the basic process is the same: the integration of the conservation equation over the control volume and the application of the divergence theorem results in a control volume balance equation. The only differences are in the form of the face area vectors and the gradient operator for the coordinate system. The latter manifests itself in the expressions for the distances between grid points.

2. The polar coordinate system is orthogonal, i.e., e_r and e_θ are always perpendicular to each other. Because the control volume faces are aligned with the coordinate directions, the line joining the cell centroids (P and E, for example) is perpendicular to the face (e, for example).

3. As a result, the flux normal to the face can be written purely in terms of the cell centroid φ values for the two cells sharing the face. We will see in the next chapter that additional terms appear when the mesh is non-orthogonal, i.e., when the line joining the cell centroids is not perpendicular to the face.

3.5 Diffusion in Axisymmetric Geometries

A similar procedure can be used to derive the discrete equation for axisymmetric geometries. A typical control volume is shown in Figure 3.4, and is located in the r-x plane. The grid point P is located at the cell centroid. Since the problem is axisymmetric, ∂φ/∂θ = 0. We assume steady conduction. The volume of the control volume is ∆V = r_P ∆r ∆x. The face area vectors are given by

  A_e = r_e ∆r i
  A_w = −r_w ∆r i
  A_n = r_n ∆x e_r
  A_s = −r_s ∆x e_r    (3.72)

For axisymmetric geometries the gradient operator is given by

  ∇ = e_r ∂/∂r + i ∂/∂x    (3.73)

Thus the fluxes on the faces are given by

  J_e · A_e = −Γ_e r_e ∆r (∂φ/∂x)_e
  J_w · A_w = Γ_w r_w ∆r (∂φ/∂x)_w
  J_n · A_n = −Γ_n r_n ∆x (∂φ/∂r)_n
  J_s · A_s = Γ_s r_s ∆x (∂φ/∂r)_s    (3.74)

[Figure 3.4: Control Volume in Axisymmetric Geometry — cell P with neighbors E, W, N, S in the r-x plane; cell widths ∆r and ∆x; spacings (δx)_e, (δx)_w, (δr)_n, (δr)_s; unit vectors e_r and i]

Assuming that φ varies linearly between grid points yields

  J_e · A_e = −Γ_e r_e ∆r (φ_E − φ_P)/(δx)_e
  J_w · A_w = −Γ_w r_w ∆r (φ_W − φ_P)/(δx)_w
  J_n · A_n = −Γ_n r_n ∆x (φ_N − φ_P)/(δr)_n
  J_s · A_s = −Γ_s r_s ∆x (φ_S − φ_P)/(δr)_s    (3.75)

The source term may be written as

  ∫_∆V S dV = (S_C + S_P φ_P) ∆V    (3.76)

Collecting terms, we may write the discrete equation for the cell P as

  a_P φ_P = a_E φ_E + a_W φ_W + a_N φ_N + a_S φ_S + b    (3.77)

where

  a_E = Γ_e r_e ∆r/(δx)_e
  a_W = Γ_w r_w ∆r/(δx)_w
  a_N = Γ_n r_n ∆x/(δr)_n
  a_S = Γ_s r_s ∆x/(δr)_s
  a_P = a_E + a_W + a_N + a_S − S_P ∆V
  b = S_C ∆V    (3.78)

3.6 Finishing Touches

A few issues remain before we have truly finished discretizing our diffusion equation. We deal with these below.

3.6.1 Interpolation of Γ

We notice in our discretization that the diffusion flux is evaluated at the face of the control volume. As a result, we must specify the face value of the diffusion coefficient Γ. Since we store Γ at cell centroids, we must find a way to interpolate Γ to the face. Referring to the notation in Figure 3.5, it is possible to simply interpolate Γ linearly as:

  Γ_e = f_e Γ_P + (1 − f_e) Γ_E    (3.79)

where

  f_e = 0.5 ∆x_E/(δx)_e    (3.80)

As long as Γ is smoothly varying, this is a perfectly adequate interpolation. However, when φ is used to represent energy or temperature, step jumps in Γ may be encountered at conjugate boundaries. It is useful to devise an interpolation procedure which accounts for these jumps. Our desire is to represent the interface flux correctly. Let us assume locally one-dimensional diffusion, and let J_e be the magnitude of the flux vector J_e. In this limit, we may write

  J_e = (φ_P − φ_E) / [0.5 ∆x_P/Γ_P + 0.5 ∆x_E/Γ_E]    (3.81)

Thus, an equivalent interface diffusion coefficient may be defined as

  (δx)_e/Γ_e = 0.5 ∆x_P/Γ_P + 0.5 ∆x_E/Γ_E    (3.82)

[Figure 3.5: Diffusion Transport at Conjugate Boundaries — cells P and E of widths ∆x_P and ∆x_E sharing the face e, with centroid spacing (δx)_e and interface flux J_e]

The term (δx)_e/Γ_e may be seen as a resistance to the diffusion transfer between P and E. We may write Γ_e as

  Γ_e = [(1 − f_e)/Γ_P + f_e/Γ_E]^(−1)    (3.83)

Equation 3.83 represents a harmonic mean interpolation for Γ. The properties of this interpolation may be better understood if we consider the case f_e = 0.5, i.e., the face e lies midway between P and E. In this case,

  Γ_e = 2 Γ_P Γ_E/(Γ_P + Γ_E)    (3.84)

In the limit Γ_P ≫ Γ_E, we get Γ_e → 2Γ_E. This is as expected, since the high-diffusion-coefficient region does not offer any resistance; the effective resistance is that offered by the cell E, corresponding to a distance of 0.5 ∆x_E.

It is important to realize that the use of harmonic mean interpolation for discontinuous diffusion coefficients is exact only for one-dimensional diffusion. Nevertheless, its use for multi-dimensional situations has an important advantage: nothing special need be done to treat conjugate interfaces. We simply treat solid and fluid cells as a part of the same domain, with different diffusion coefficients stored at cell centroids. With harmonic-mean interpolation, the discontinuity in temperature gradient at the conjugate interface is correctly captured.
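The difference between the linear and harmonic interpolations of Equations 3.79 and 3.83 is easy to see in code. This sketch (our own, assuming f_e = 0.5 and equal cell widths) shows that for a large conductivity jump the harmonic mean reproduces the series-resistance behavior, while linear interpolation badly overestimates the interface conductance.

```python
def gamma_linear(gp, ge, fe=0.5):
    # Eq. 3.79: linear interpolation of the diffusion coefficient.
    return fe * gp + (1.0 - fe) * ge

def gamma_harmonic(gp, ge, fe=0.5):
    # Eq. 3.83: harmonic interpolation; for fe = 0.5 this is 2*gp*ge/(gp+ge).
    return 1.0 / ((1.0 - fe) / gp + fe / ge)

gp, ge = 1000.0, 1.0           # strong step jump in Gamma at a conjugate face
g_lin = gamma_linear(gp, ge)   # ~500: ignores the resistance of cell E
g_har = gamma_harmonic(gp, ge) # ~2*ge = 2: resistance dominated by cell E
```

The harmonic value is controlled by the low-Γ cell, exactly as the resistance argument above requires.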

3.6.2 Source Linearization and Treatment of Non-Linearity

Our goal is to reduce our differential equation to a set of algebraic equations in the discrete values of φ. When the scalar transport is non-linear, the resulting algebraic set is also non-linear. Non-linearity can arise from a number of different sources. For example, in diffusion problems, the diffusion coefficient may be a function of φ, such as in the case of temperature-dependent thermal conductivity. The source term S may also be a function of φ. In radiative heat transfer in participating media, for example, the source term in the energy equation contains fourth powers of the temperature.

There are many ways to treat non-linearities. Here, we will treat non-linearities through Picard iteration. In this method, the coefficients a_P, a_nb, S_C and S_P are evaluated using prevailing values of φ, i.e., the values that exist at the current iteration. They are updated as φ is updated by iteration. Let the prevailing value of φ be called φ*. We said previously that the source term S could be written in the form

  S = S_C + S_P φ_P    (3.85)

We now examine how this can be done when S is a non-linear function of φ. We write a Taylor series expansion for S about its prevailing value S* = S(φ*):

  S = S* + (∂S/∂φ)* (φ_P − φ*)    (3.86)

so that

  S_C = S* − (∂S/∂φ)* φ*
  S_P = (∂S/∂φ)*    (3.87)

Here, (∂S/∂φ)* is the gradient evaluated at the prevailing value φ*. For most problems of interest to us, ∂S/∂φ is negative, resulting in a negative S_P. This ensures that the source tends to decrease as φ increases, providing a stabilizing effect, and allows us to satisfy the Scarborough criterion: a negative S_P makes a_P ≥ ∑_nb a_nb in our discretization. However, this type of dependence is not always guaranteed. In an explosion or a fire, for example, the application of a high temperature (the lighting of a match) causes energy to be released, which increases the temperature further. (The counter-measure is provided by the fact that the fuel is eventually consumed and the fire burns out.) When we have a strong non-linearity in a temperature source term, and our initial guess is far from the solution, we may get large oscillations in the temperature computed during the first few iterations, making it difficult for the iteration to proceed. In such cases, we often employ under-relaxation.

3.6.3 Under-Relaxation

When using iterative methods for obtaining solutions, or when iterating to resolve non-linearities, it is frequently necessary to control the rate at which variables are changing during iterations.

Let the current iterate of φ_P be φ*_P. We know that φ_P satisfies

  a_P φ_P = ∑_nb a_nb φ_nb + b    (3.88)

so that after the system has been solved for the current iteration, we expect to compute a value of φ_P of

  φ_P = (∑_nb a_nb φ_nb + b)/a_P    (3.89)

We do not, however, want φ_P to change as much as Equation 3.89 implies. The change in φ_P from one iteration to the next is given by

  (∑_nb a_nb φ_nb + b)/a_P − φ*_P    (3.90)

We wish to make φ_P change only by a fraction α of this change. Thus

  φ_P = φ*_P + α [(∑_nb a_nb φ_nb + b)/a_P − φ*_P]    (3.91)

Collecting terms, we may write

  (a_P/α) φ_P = ∑_nb a_nb φ_nb + b + [(1 − α)/α] a_P φ*_P    (3.92)

We note the following about Equation 3.92:

1. With α = 1, the original discrete equation is recovered.

2. When the iterations converge to a solution, i.e., when φ_P = φ*_P, both under-relaxed and un-under-relaxed equations yield the same final solution. So we are assured that under-relaxation is only a change in the path to solution, and not in the discretization itself.

3. For 0 < α < 1, we are assured that a_P/α > ∑_nb a_nb. This allows us to satisfy the Scarborough criterion.

4. We note the similarity of under-relaxation to time-stepping. The terms a_P/α and [(1 − α)/α] a_P φ*_P represent the effect of the unsteady terms, and the initial guess acts as the initial condition.

5. A value of α close to unity allows the solution to move quickly towards convergence, but may be more prone to divergence. A low value keeps the solution close to the initial guess, but keeps the solution from diverging. The optimum value of α depends strongly on the nature of the system of equations we are solving, on how strong the non-linearities are, on grid size and so on. We use intuition and experience in choosing an appropriate value.

6. Though over-relaxation (α > 1) is a possibility, we will for the most part be using α < 1.
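Source linearization (Equations 3.86–3.87) and under-relaxation (Equation 3.91) work together in a Picard loop. The sketch below is our own example problem, not one from the notes: a single-cell balance a_P φ = ∑ a_nb φ_nb + (S_C + S_P φ)∆V with a radiative-type source S(φ) = A − Bφ⁴. The source is re-linearized about the prevailing value φ* each iteration, S_P = −4Bφ*³ ≤ 0 as recommended, and the update is relaxed by α.

```python
def solve_cell(a_nb_sum, phi_nb_avg, vol, A, B, alpha=0.7, iters=200):
    # Picard iteration with source linearization (Eqs. 3.86-3.87) and
    # under-relaxation (Eq. 3.91) for the source S(phi) = A - B*phi**4.
    phi = 0.0                                   # initial guess
    for _ in range(iters):
        s_star = A - B * phi ** 4               # S evaluated at prevailing phi*
        dsdphi = -4.0 * B * phi ** 3            # (dS/dphi)* <= 0 for phi >= 0
        s_c = s_star - dsdphi * phi             # Eq. 3.87
        s_p = dsdphi
        a_p = a_nb_sum - s_p * vol              # negative S_P enlarges a_P
        phi_new = (a_nb_sum * phi_nb_avg + s_c * vol) / a_p
        phi = phi + alpha * (phi_new - phi)     # Eq. 3.91
    return phi

# Assumed illustrative values: sum(a_nb) = 4, neighbor average = 1, V = 1.
phi = solve_cell(a_nb_sum=4.0, phi_nb_avg=1.0, vol=1.0, A=5.0, B=1.0)
```

On convergence φ satisfies the non-linear balance 4(1 − φ) + 5 − φ⁴ = 0; the negative S_P both stabilizes the iteration and keeps a_P ≥ ∑ a_nb throughout.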

[Figure 3.6: Uniform Arrangement of Control Volumes — cells W, P and E of equal width ∆x, with faces w and e]

Thus the assumption that φ varies linearly between grid points leads to a truncation error of O(∆x²) in dφ/dx. For non-uniform meshes, the O(∆x²) terms in the Taylor series expansion do not cancel upon subtraction, leaving a formal truncation error of O(∆x) in dφ/dx. The gradient approximation is therefore O(∆x²) only for uniform meshes. The mean-value approximation is O(∆x²) even for a non-uniform mesh. We see that all the approximations in a steady diffusion equation are O(∆x²), so the discretization scheme has a truncation error of O(∆x²).

3.8.2 Temporal Truncation Error

Let us consider the fully-implicit scheme. The profile assumptions made in discretizing the unsteady diffusion equation using this scheme are:

1. The cell centroid values (ρφ)¹_P and (ρφ)⁰_P in the unsteady term are assumed to represent the average value for the cell.

2. The spatial assumptions are as described above.

3. The flux and source terms from time t + ∆t are assumed to prevail over the time step ∆t.

The first assumption is equivalent to the mean value assumption analyzed above and engenders a spatial error of O(∆x²). We have already seen that the other spatial assumptions result in a truncation error of O(∆x²). Let us examine the truncation error implicit in item 3. In effect, we wish to evaluate a term of the type

  ∫_t^(t¹) S(t) dt    (3.100)
Let Φ represent the exact solution to Equation 3.104. By exact we mean that it is the solution to the discrete equation, obtained using a computer with infinite precision. However, all real computers available to us have finite precision; thus, we must contend with round-off error. Let this finite-precision solution be given by φ. The error ε is given by

  ε = φ − Φ    (3.106)

We ask the following question: Will the explicit scheme cause our error ε to grow, or will the process of time-stepping keep the error within bounds? Substituting Equation 3.106 into Equation 3.104 yields

  a_P (Φ_P + ε_P) = a_E (Φ⁰_E + ε⁰_E) + a_W (Φ⁰_W + ε⁰_W) + (a⁰_P − a_E − a_W)(Φ⁰_P + ε⁰_P)    (3.107)

As before, the Φ and ε terms superscripted 0 represent the values at the time step t, and the un-superscripted values represent the values at time t + ∆t. Since Φ_P is the exact solution, it satisfies Equation 3.104. Therefore, the Φ terms in Equation 3.107 cancel, leaving

  a_P ε_P = a_E ε⁰_E + a_W ε⁰_W + (a⁰_P − a_E − a_W) ε⁰_P    (3.108)

Let us expand the error ε in an infinite series

  ε(x, t) = ∑_m e^(σ_m t) e^(iλ_m x),  m = 0, 1, 2, …, M    (3.109)

where σ_m may be either real or complex, and λ_m is given by

  λ_m = mπ/L,  m = 0, 1, 2, …, M    (3.110)

L is the width of the domain. Equation 3.109 essentially represents the spatial dependence of the error by the sum of periodic functions e^(iλ_m x), and the time dependence by e^(σ_m t). If σ_m is real and greater than zero, the error grows with time; if σ_m is real and less than zero, the error is damped. If σ_m is complex, the solution is oscillatory in time. We wish to find the amplification factor

  ε(x, t + ∆t)/ε(x, t)    (3.111)

If the amplification factor is greater than one, our error grows with time step, and our scheme is unstable. If it is less than one, our error is damped in the process of time marching, and our scheme is stable. Since the diffusion equation for constant Γ and zero S is linear, we may exploit the principle of superposition. Therefore, we can examine the stability consequences of a single error term

  ε = e^(σ_m t) e^(iλ_m x)    (3.112)

rather than the summation in Equation 3.109. Substituting Equation 3.112 into Equation 3.108, we get

  a_P e^(σ_m (t+∆t)) e^(iλ_m x) = e^(σ_m t) [a_E e^(iλ_m (x+∆x)) + a_W e^(iλ_m (x−∆x)) + (a⁰_P − a_E − a_W) e^(iλ_m x)]    (3.113)

Dividing through by a_P e^(σ_m t) e^(iλ_m x) we get

  e^(σ_m ∆t) = (a_E/a_P) e^(iλ_m ∆x) + (a_W/a_P) e^(−iλ_m ∆x) + (a⁰_P − a_E − a_W)/a_P    (3.114)

For a uniform mesh a_E = a_W = Γ/∆x and a⁰_P = a_P = ρ∆x/∆t. Thus

  e^(σ_m ∆t) = [Γ∆t/ρ(∆x)²] [e^(iλ_m ∆x) + e^(−iλ_m ∆x)] + [1 − 2Γ∆t/ρ(∆x)²]    (3.115)

Using β = λ_m ∆x and the identities

  e^(iβ) + e^(−iβ) = 2 cos β = 2 − 4 sin²(β/2)    (3.116)

we get

  e^(σ_m ∆t) = 1 − [4Γ∆t/ρ(∆x)²] sin²(β/2)    (3.117)

We recognize that the amplification factor is

  ε(x, t + ∆t)/ε(x, t) = e^(σ_m ∆t) = 1 − [4Γ∆t/ρ(∆x)²] sin²(β/2)    (3.118)

We require that

  |ε(x, t + ∆t)| ≤ |ε(x, t)|    (3.119)

or

  |1 − [4Γ∆t/ρ(∆x)²] sin²(β/2)| ≤ 1    (3.120)

If 1 − 4Γ∆t/ρ(∆x)² ≥ 0, we require

  4Γ∆t/ρ(∆x)² ≥ 0    (3.121)

This is always guaranteed since all terms are positive. If, on the other hand, 1 − 4Γ∆t/ρ(∆x)² < 0, we require

  4Γ∆t/ρ(∆x)² ≤ 2    (3.122)

or

  ∆t ≤ ρ(∆x)²/(2Γ)    (3.123)

We recognize this criterion to be the same as Equation 3.56. Using an error expansion of the type

  ε(x, y, t) = e^(σ_m t) e^(iλ_m x) e^(iλ_n y)    (3.124)
we may derive an equivalent criterion for two-dimensional uniform meshes (∆x = ∆y) as

  ∆t ≤ ρ(∆x)²/(4Γ)    (3.125)

A similar analysis may also be done to obtain the criterion for three-dimensional situations. It is also possible to show, using a similar analysis, that the fully-implicit and Crank-Nicholson schemes are unconditionally stable.

Though the von Neumann stability analysis and our heuristic requirement of positive coefficients have yielded the same criterion in the case of the explicit scheme, we should not conclude that the two will always yield identical results. The von Neumann analysis yields a time-step limitation required to keep round-off errors bounded. However, it is possible to get bounded but unphysical solutions. This happens in the case of the Crank-Nicholson scheme, which the von Neumann stability analysis classifies as unconditionally stable. We know that for a⁰_P < 0.5 ∑_nb a_nb it is possible to get unphysical results. In this case, the von Neumann stability analysis tells us that the oscillations in our solution will remain bounded, but it cannot guarantee that they will be physically plausible.

Von Neumann stability analysis is a classic tool for analyzing the stability of linear problems. However, we see right away that it would be substantially more complicated to do the above analysis if Γ were a function of φ, or if we had non-linear source terms. It becomes very difficult to use when we solve coupled non-linear equations such as the Navier-Stokes equations. For these, we must rely on intuition and experience for guidance. In practice, we use von Neumann stability analysis to give us guidance on the baseline behavior of idealized systems, realizing that the coupled non-linear equations we really want to solve will probably have much more stringent restrictions.

3.10 Closure

In this chapter, we have completed the discretization of the diffusion equation on regular orthogonal meshes for Cartesian, polar and axisymmetric geometries. We have seen that our discretization guarantees physically plausible solutions for both steady and unsteady problems, and that if we use the fully-implicit scheme, our temporal discretization is first-order accurate. For uniform meshes, we have shown that our spatial discretization is formally second-order accurate. We have also seen how to handle non-linearities, and how to conduct a stability analysis for the explicit scheme as applied to the diffusion equation. In the next chapter, we will address the issue of mesh non-orthogonality and understand the resulting complications.


Chapter 4

The Diffusion Equation: A Closer Look

In the last chapter, we saw how to discretize the diffusion equation for regular (orthogonal) meshes composed of quadrilaterals. In this chapter, we address non-orthogonal meshes, both structured and unstructured. We will also address the special issues associated with the computation of gradients on unstructured meshes. We will see that non-orthogonality leads to extra terms in our discrete equations which destroy some of the properties we had particularly prized in our discretization scheme.

4.1 Diffusion on Orthogonal Meshes

Let us consider a mesh consisting of convex polyhedra, as shown in Figure 4.1. Regardless of the shape of the cells, any face f in the mesh is shared by only two neighbor cells. We shall consider the mesh to be orthogonal if the line joining the centroids of adjacent cells is orthogonal to the shared face f. This is guaranteed for the equilateral triangular mesh in the figure. Consider the steady diffusion equation:

  ∇ · J = S    (4.1)

where

  J = −Γ ∇φ    (4.2)

We focus on the cell C0. A detail of the cells C0 and C1 is shown in Figure 4.2. To discretize Equation 4.1, we integrate it over the control volume C0 as before:

  ∫_∆V₀ ∇ · J dV = ∫_∆V₀ S dV    (4.3)

Applying the divergence theorem, we get

  ∫_A J · dA = ∫_∆V₀ S dV    (4.4)

[Figure 4.1: Orthogonal Mesh — an equilateral triangular mesh; the face f is shared by cells C0 and C1]

[Figure 4.2: Arrangement of Cells in Orthogonal Mesh — cells C0 and C1 share the face f with area vector A_f; e_ξ points along the line joining the centroids (distance ∆ξ) and e_η lies along the face]

Thus  ∑ J f % A f ¢ SC £ f SPφ0 ∆  ¥ ¢ £ 0 (4.We now make a proﬁle assumption. We assume that J can be approximated by its value at the centroid of the face f . This is easy to arrange for a rectangular mesh. and φ 0 and φ1 are the discrete values of φ at the cell centroids.9) f We see that if the line joining the centroids is perpendicular to the face. as we saw in the previous chapter. the unit vectors are perpendicular to each other. The unit vector e η is tangential to the face f .5) The summation in the ﬁrst term is over all the faces of the control volume C0. We see from Figure 4.2. the normal diffusion transport J f A f only depends on the gradient of φ in the coordinate direction ξ and not on η . 69 . let us make a linear proﬁle assumption for the variation of φ ξ between the centroids of cells C0 and C1. We may write the face ﬂux vector J f in the coordinate system ξ η as )  (4. as shown in Figure 4. The unit vector e ξ is aligned with the line joining the centroids. Thus % Jf Af % ¢ © Γf Af φ ¤ ¦ 1 ∆ξ © φ0  (4. We see that if the mesh is orthogonal. Jf ¢ ¢ © ) ¤ ¦ f ∂φ © Γf  ∂x i £ Γ f ∇φ ∂φ j ∂y In order to evaluate ∂ φ ∂ x and ∂ φ ∂ y at the face f .6) f © Jf ¢ © Γf  ∂φ e ∂ξ ξ £ ∂φ e ∂η η  (4. the diffusion transport normal to the face f can be written purely in terms of values of φ at the centroids of the cells sharing the face. Because the mesh is orthogonal. We realize that since cell centroid values of φ are available along the eξ direction.2. we need cell centroid values of φ which are suitably placed along the x and y axes.7) The area vector A f may be written as Af Therefore ¢ A f eξ Γf Af (4. Consider the coordinate directions ξ and η in Figure 4. We must take an alternative approach. and A f is the area vector on the face f pointing outwards from cell C0.8) Jf Af % ¢ © )  ∂φ ∂ξ  (4. Also.2 that such values are not available for the mesh under consideration.10) where ∆ξ is the distance between the cell centroids. As before. Further. 
it is easy for us to write ∂ φ ∂ ξ . let us assume that S S C SP φ0 as before.

So far we have looked at the transport through a single face f. Substituting Equation 4.10 into Equation 4.5 and summing over all the faces of the cell C0 yields an equation of the form

  a_P φ_P = ∑_nb a_nb φ_nb + b    (4.11)

where

  a_nb = (Γ_f A_f/∆ξ)_nb,  nb = 1, 2, …, M
  a_P = ∑_nb a_nb − S_P ∆V₀
  b = S_C ∆V₀    (4.12)

In the above equations, nb denotes the face neighbors of the cell under consideration, P, and M is the number of face neighbors. If the cell is a triangle, there are three face neighbors, and M = 3 in the above equation. As with our orthogonal rectangular mesh in the previous chapter, we get a well-behaved set of discrete equations. We note the following:

1. The scheme is conservative because we use the conservation principle to derive the discrete equations. The face flux J_f leaves one cell and enters the adjacent cell, and whatever profile assumptions are used to evaluate J_f are used consistently for both cells C0 and C1. Thus overall conservation is guaranteed. It is worth noting here that J_f should be thought of as belonging to the face f rather than to either cell C0 or C1.

2. In the absence of source terms, a_P = ∑_nb a_nb. We are thus guaranteed that φ_P is bounded by its face neighbors when S = 0.

3. We have not made any assumptions about mesh structure; here, the "neighbor" cells are simply those which share faces with the cell under consideration. It is, however, usually difficult to generate orthogonal meshes without some type of structure.

4. In the development above, we have not used the fact that the cell under consideration is a triangle, and we need not make the assumption of two-dimensionality. The above development holds for three-dimensional situations, and does not place any restrictions on the number of neighbors. All we have required is that the cell be a polyhedron and that the line joining the cell centroids be orthogonal to the face.

5. The cell P shares vertices with other cells, but these do not appear in the discretization.

6. It is necessary for the mesh to consist of convex polyhedra. If the cells are not convex, the cell centroid may not fall inside the cell.

7. It is possible to solve the discrete equation set using the Gauss-Seidel iterative scheme.

8. All the development from the previous chapter regarding unsteady flow, interpolation of Γ_f and linearization of source terms may be carried over unchanged.
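On an unstructured orthogonal mesh, the coefficients of Equation 4.12 are naturally assembled in a loop over faces: each face contributes Γ_f A_f/∆ξ to the two cells that share it. A minimal sketch (our own data structures — a face list of (cell0, cell1, area, ∆ξ) tuples — not from the notes):

```python
def assemble(n_cells, faces, gamma, sc=0.0, sp=0.0, vol=1.0):
    # Face-based assembly of Eq. 4.12: a_nb = Gamma_f * A_f / dxi per face;
    # a_P = sum(a_nb) - S_P*V; b = S_C*V (uniform Gamma and V assumed).
    a_nb = [dict() for _ in range(n_cells)]    # neighbor coefficients per cell
    for c0, c1, area, dxi in faces:
        coeff = gamma * area / dxi
        a_nb[c0][c1] = a_nb[c0].get(c1, 0.0) + coeff  # flux leaves c0 ...
        a_nb[c1][c0] = a_nb[c1].get(c0, 0.0) + coeff  # ... and enters c1
    a_p = [sum(nbrs.values()) - sp * vol for nbrs in a_nb]
    b = [sc * vol] * n_cells
    return a_nb, a_p, b

# Three cells in a row, 0 - 1 - 2, unit face areas, centroid spacing 0.5.
faces = [(0, 1, 1.0, 0.5), (1, 2, 1.0, 0.5)]
a_nb, a_p, b = assemble(3, faces, gamma=2.0)
```

Because each face writes the same coefficient into both of its cells, conservation and the property a_P = ∑ a_nb (for S = 0) fall out of the assembly automatically.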

[Figure 4.3: Non-Orthogonal Mesh — cells C0 and C1 share the face f (vertices a and b), with face area vector A_f, centroid-line direction e_ξ (distance ∆ξ) and face-tangent direction e_η (spacing ∆η)]

4.2 Non-Orthogonal Meshes

In practice, it is rarely possible to use orthogonal meshes in industrial geometries because of their complexity. Our interest therefore lies in developing methods which can deal with non-orthogonal meshes, and preferably, with unstructured meshes. We will start by looking at general non-orthogonal meshes and derive expressions for structured meshes as a special case. Consider the cells C0 and C1 shown in Figure 4.3. We consider this mesh to be non-orthogonal because the line joining the cell centroids C0 and C1 is not perpendicular to the face f. As before, we consider the steady diffusion equation

  ∇ · J = S    (4.13)

and focus on the cell C0. We integrate the equation over the control volume C0 and apply the divergence theorem as before. Assuming that J_f, the flux at the face centroid, prevails over the face f, we obtain:

  ∑_f J_f · A_f = (S_C + S_P φ₀) ∆V₀    (4.14)

The area vector A_f is given by

  A_f = A_x i + A_y j    (4.15)

Thus far, the procedure is the same as that for an orthogonal mesh. However, writing J_f in terms of ∂φ/∂x and ∂φ/∂y is not useful since we do not have cell centroid values of φ along these directions. Let us consider instead the coordinate system (ξ, η) on the face f. The unit vector e_ξ is parallel to the line joining the centroids. The unit vector e_η is tangential to the face. However, since the mesh is not

Figure 4.4: Definition of Face Tangential Directions in (a) Two Dimensions and (b) Three Dimensions

Equation 4.32 writes the secondary gradient term as the difference of the total transport through the face f and the transport in the direction eξ. Both terms require the face gradient ∇φf. It is possible to compute ∇φf directly at the face using the methods we will present in a later section. However, it is often more convenient to store ∇φ at the cells C0 and C1. Assuming that the gradient of φ in each cell is a constant, we may find an average as

(∇φ)f = [ (∇φ)0 + (∇φ)1 ] / 2   (4.33)

Thus the problem of computing the secondary gradient is reduced to the problem of computing the cell gradient of φ; its computation is addressed in a later section.

4.2.3 Discrete Equation for Non-Orthogonal Meshes

The discretization procedure at the face f can be repeated for each of the faces of the cell C0. The resulting discrete equation may be written in the form

aP φP = ∑nb anb φnb + b   (4.34)

The unstructured mesh formulation, which does not place any restrictions on the number of neighbors, can of course be applied to structured meshes as well. However, it is usually simpler and less expensive to exploit mesh structure when it exists.

where

anb = [ Γf (Af · Af) / (∆ξ (Af · eξ)) ]nb ,  nb = 1, 2, ..., M
aP = ∑nb anb − SP ∆V0
b = SC ∆V0 + ∑f Sf   (4.35)

As before, nb denotes the cell neighbors of the cell under consideration, P. The quantities Γf, eξ, ∆ξ, Af and Sf correspond to the face f shared by the cell P and the neighbor cell nb; here Sf is the secondary gradient term on that face. We see that the primary terms result in a coefficient matrix which has the same structure as for orthogonal meshes. The above discretization ensures that aP = ∑nb anb if S = 0. However, we no longer have the guarantee that φP is bounded by its cell neighbors. This is because the secondary gradient term Sf involves gradients of φ which must be evaluated from the cell centroid values by interpolation. As we will see later, this term can result in the possibility of spatial oscillations in the computed values of φ.

Though the formulation has been done for steady diffusion, the methods for unsteady diffusion outlined in the previous chapter are readily applicable here. Similarly, the methods for source term linearization and under-relaxation are also readily applicable. The treatment of interfaces with step jumps in Γ requires a modest change, which we leave as an exercise to the reader.
4.3 Boundary Conditions
Consider the cell C0 with the face b on the boundary, as shown in Figure 4.5. The cell value φ0 is stored at the centroid of the cell C0. As with regular meshes, we store a discrete value φb at the centroid of the boundary face b. The boundary area vector Ab points outwards from the cell C0 as shown. The cell balance for cell C0 is given as before by

∑f Jf · Af + Jb · Ab = (SC + SP φ0) ∆V0   (4.36)

Here the summation over f in the first term on the left hand side is over all the interior faces of cell C0, and the second term represents the transport of φ through the boundary face. We have seen how the interior fluxes are discretized. Let us now consider the boundary term Jb · Ab. The boundary transport term is written as

Jb · Ab = −Γb (∇φ)b · Ab   (4.37)

We define the direction ξ to be the direction connecting the cell centroid to the face centroid, as shown in Figure 4.5, and the direction η to be tangential to the boundary face.

Figure 4.5: Boundary Control Volume for Non-Orthogonal Mesh

Decomposing the flux Jb in the ξ and η directions as before, we obtain

Jb · Ab = −Γb (Ab · Ab)/(Ab · eξ) (∂φ/∂ξ)b + Γb [ (Ab · Ab)/(Ab · eξ) (eξ · eη) − Ab · eη ] (∂φ/∂η)b   (4.38)

As before, we define the secondary diffusion term

Sb = Γb [ (Ab · Ab)/(Ab · eξ) (eξ · eη) − Ab · eη ] (∂φ/∂η)b   (4.39)

We note that if eξ and eη are perpendicular to each other, Sb is zero. Thus, the total transport across the boundary face b is given by

Jb · Ab = −Γb (Ab · Ab)/(Ab · eξ) (∂φ/∂ξ)b + Sb   (4.40)

As before, we make a linear profile assumption for the variation of φ between the points C0 and b. This yields

Jb · Ab = −Γb (Ab · Ab)/(∆ξb (Ab · eξ)) (φb − φ0) + Sb   (4.41)

As with interior faces, we are faced with the question of how to compute (∂φ/∂η) at the face b. For structured meshes, and for two-dimensional unstructured meshes, we may use interpolation to obtain the vertex values φc and φd in Figure 4.5. For three-dimensional unstructured meshes, however, the boundary face tangential directions are not uniquely defined. Consequently, we write the boundary secondary gradient term as the difference of the total and primary terms:

Sb = −Γb (∇φ)b · Ab + Γb (Ab · Ab)/(Ab · eξ) (∇φ)b · eξ   (4.42)

Further assuming that

(∇φ)b ≈ (∇φ)0   (4.43)

we may write the secondary gradient term at the boundary as

Sb = −Γb (∇φ)0 · Ab + Γb (Ab · Ab)/(Ab · eξ) (∇φ)0 · eξ   (4.44)

We will see in a later section how the cell gradient (∇φ)0 is computed.

Having seen how to discretize the boundary ﬂux, we now turn to the application of boundary conditions.

Dirichlet Boundary Condition

At Dirichlet boundaries, we are given the value of φb:

φb = φb,given   (4.45)

Using φb,given in Equation 4.41 and including Jb · Ab in the boundary cell balance yields a discrete equation of the following form:

aP φP = ∑nb anb φnb + ab φb,given + b   (4.46)

where

anb = [ Γf (Af · Af) / (∆ξ (Af · eξ)) ]nb ,  nb = 1, 2, ..., M
ab = Γb (Ab · Ab) / (∆ξb (Ab · eξ))
aP = ∑nb anb + ab − SP ∆V0
b = SC ∆V0 + ∑f Sf + Sb   (4.47)

Here, nb denotes the interior cell neighbors of the cell under consideration, P. The quantities Γf, eξ, ∆ξ, Af and Sf correspond to the face f shared by the cell P and the neighbor cell nb. In the ab term, eξ corresponds to the direction shown in Figure 4.5. As with interior cells, we see that the primary terms result in a coefficient matrix which has the same structure as for orthogonal meshes. The above discretization ensures that aP ≥ ∑nb anb if S = 0. However, because of the secondary gradient terms, we no longer have the guarantee that φP is bounded by its neighbors.
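As a concrete sketch of Equations 4.46 and 4.47, the helper below computes the boundary coefficient ab and its contribution ab φb,given for a single Dirichlet face; the numbers and function names are hypothetical, chosen for illustration only:

```python
# Folding one Dirichlet boundary face into the cell-P equation.
# a_b = Gamma_b * (A_b . A_b) / (dxi_b * (A_b . e_xi)); names hypothetical.

def dot(u, v):
    return sum(ui * vi for ui, vi in zip(u, v))

def dirichlet_contribution(gamma_b, A_b, e_xi, dxi_b, phi_given):
    """Return (a_b, a_b * phi_given) for one Dirichlet boundary face."""
    a_b = gamma_b * dot(A_b, A_b) / (dxi_b * dot(A_b, e_xi))
    return a_b, a_b * phi_given

# Orthogonal boundary face of area 0.1, centroid 0.05 from the cell centroid,
# boundary value phi_given = 300:
a_b, db = dirichlet_contribution(1.0, (0.1, 0.0), (1.0, 0.0), 0.05, 300.0)
print(a_b, db)   # → 2.0 600.0
```

a_b is added to aP, and a_b φb,given joins the right hand side, exactly as in Equation 4.47; the secondary term Sb would be added separately once the cell gradient is available.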

Figure 4.6: Arrangement of Cells in Cartesian Mesh

Figure 4.7 shows a non-orthogonal mesh in the (x, y) plane and the corresponding transformed mesh in the (ξ, η) plane. We write

(∂φ/∂ξ)P = (φE − φW) / (2∆ξ)   (4.52)
(∂φ/∂η)P = (φN − φS) / (2∆η)   (4.53)

Knowing the cell gradients, we can compute the face value required in the secondary diffusion term (Equation 4.29) by averaging. For example, for a uniform mesh,

(∂φ/∂η)f = [ (∂φ/∂η)P + (∂φ/∂η)E ] / 2   (4.54)

An alternative approach would be to write the face derivative in terms of the values at the nodes a and b:

(∂φ/∂η)f = (φb − φa) / ∆η   (4.55)

For Cartesian grids, it is easy to show that the two approximations are equivalent if the nodal values are obtained by linear interpolation. For an equispaced Cartesian grid this means that the nodal value φa is

φa = (φP + φE + φSE + φS) / 4   (4.56)

The second interpretation (Equation 4.55) is useful because it suggests one way of obtaining derivatives at faces for a general unstructured grid, where we no longer have

orthogonal directions along which to apply the finite-difference approximation.

Figure 4.7: Structured Mesh in (a) Physical Space and (b) Mapped Space

Unstructured Meshes

The method outlined above is applicable for arbitrary unstructured grids in two dimensions: we can use Equation 4.55 for a given face f, with the nodal values obtained by some averaging procedure over the surrounding cells. However, as we commented earlier, the extension to three dimensions is not straightforward. This is because in 3D we no longer have unique directions in the plane of the face with which to form a finite-difference approximation. Also, in many instances we need to know all three components of the derivative at cell centers, not just the derivatives in the plane of the cell face. Therefore we need to seek other ways of calculating derivatives that are applicable for arbitrary grids.

Gradient Theorem Approach

One approach is suggested by the gradient theorem, which states that for any closed volume ∆V0 surrounded by a surface A,

∫∆V0 ∇φ dV = ∮A φ dA   (4.57)

where dA is the outward-pointing incremental area vector. To obtain a discrete version of Equation 4.57, we make our usual round of profile assumptions. First, we assume that the gradient in the cell is constant, and that the face centroid value is representative of the face average. This yields

∇φ = (1/∆V0) ∑f φf Af   (4.58)
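A minimal sketch of the discrete gradient theorem estimate, Equation 4.58, for a single unit square cell. Here the face values are taken directly from a known linear field; in practice they would be interpolated from the neighboring cell centroid values:

```python
# Green-Gauss cell gradient: grad(phi) ~ (1/V0) * sum_f phi_f * A_f  (Eq. 4.58)
# Unit square cell, phi = 2x + 3y sampled at the face centroids.

def green_gauss(face_phi_and_areas, volume):
    gx = sum(phi_f * A[0] for phi_f, A in face_phi_and_areas) / volume
    gy = sum(phi_f * A[1] for phi_f, A in face_phi_and_areas) / volume
    return gx, gy

phi = lambda x, y: 2.0 * x + 3.0 * y
faces = [                           # (phi at face centroid, outward area vector)
    (phi(1.0, 0.5), (1.0, 0.0)),    # east
    (phi(0.0, 0.5), (-1.0, 0.0)),   # west
    (phi(0.5, 1.0), (0.0, 1.0)),    # north
    (phi(0.5, 0.0), (0.0, -1.0)),   # south
]
print(green_gauss(faces, 1.0))      # → (2.0, 3.0), exact for a linear field
```

The estimate is exact for linear fields with exact face values; the accuracy of the method in practice is set by how the face values φf are interpolated.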

Least Squares Approach

Consider the cell C0 shown in Figure 4.8, surrounded by its neighbor cells. Here ∆r1 is the vector from the centroid of cell 0 to the centroid of cell 1. We rewrite Equation 4.62 as

∆r1 = ∆x1 i + ∆y1 j   (4.63)

Figure 4.8: Arrangement of Cells in Unstructured Mesh

Assuming a locally linear variation of φ, we require

(∂φ/∂x)0 ∆x1 + (∂φ/∂y)0 ∆y1 = φ1 − φ0   (4.64)

We require that the same be true at all other cells surrounding cell C0. For a cell Cj the equation reads

(∂φ/∂x)0 ∆xj + (∂φ/∂y)0 ∆yj = φj − φ0   (4.65)

It is convenient to assemble all the equations in a matrix form as follows:

M d = ∆φ   (4.66)

Here M is the J × 2 matrix

M = [ ∆x1 ∆y1 ; ∆x2 ∆y2 ; ... ; ∆xJ ∆yJ ]   (4.67)

d is the vector of the components of the gradient of φ at cell C0,

d = [ (∂φ/∂x)0 , (∂φ/∂y)0 ]T   (4.68)

and ∆φ is the vector of differences of φ,

∆φ = [ φ1 − φ0 , φ2 − φ0 , ... , φJ − φ0 ]T   (4.69)

Figure 4.9: Least Squares Calculation of Cell Gradient

Equation 4.66 represents J equations in two unknowns. Since in general J is larger than 2, this is an over-determined system. Physically, this means that we cannot assume a linear profile for φ around the cell C0 such that it exactly reconstructs the known solution at all of its neighbors. We can only hope to find a solution that fits the data in the best possible way, i.e., a solution for which the RMS value of the difference between the neighboring cell values and the reconstructed values is minimized. From Equation 4.65 we know that the difference between the reconstructed value and the cell value for cell Cj is given by

Rj = ∆xj (∂φ/∂x)0 + ∆yj (∂φ/∂y)0 − (φj − φ0)   (4.70)

The sum of the squares of the error over all the neighboring cells is

R = ∑j Rj²   (4.71)

Let a = (∂φ/∂x)0 and b = (∂φ/∂y)0. Equation 4.71 can then be written as

R = ∑j [ a ∆xj + b ∆yj − (φj − φ0) ]²   (4.72)

Our objective is to find a and b, the components of the gradient at cell C0, such that R is minimized. Recall that the standard way of solving the problem is to differentiate R with respect to a and b and set the result to zero:

∂R/∂a = 0
∂R/∂b = 0   (4.73)

This gives us two equations in the two unknowns. It is easy to show that this equation set is the same as that obtained by multiplying Equation 4.66 by the transpose of the matrix M:

MT M d = MT ∆φ   (4.74)

MT M is a 2 × 2 matrix that can be easily inverted analytically to yield the required gradient ∇φ. For the solution to exist, the matrix M must be non-singular, i.e., it must have linearly independent columns so that its rank is 2. Physically, this implies that we must involve at least 3 non-collinear points to compute the gradient. Another way of understanding this requirement is to note that assuming a linear variation means that φ is expressed as

φ = A + Bx + Cy   (4.75)

Since this involves three unknowns, we need at least three points at which φ is specified in order to determine the gradient.

We note that the least squares approach places no restrictions on cell shape and does not require a structured mesh. It also does not require us to choose only face neighbors for the reconstruction, and it is easily extended to three dimensions. We should note here that since M is purely a function of geometry, the inversion only needs to be done once. In practical implementations, we would compute a matrix of weights (MT M)−1 MT for each cell; the gradient for any scalar can then be computed easily by multiplying this matrix with the difference vector ∆φ.
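The 2 × 2 normal equations of Equation 4.74 can be solved in closed form, as the sketch below shows; the cell arrangement and function name are hypothetical:

```python
# Least-squares cell gradient (Eqs. 4.66-4.74): solve M^T M d = M^T dphi
# for d = (dphi/dx, dphi/dy) at cell C0, with an analytic 2x2 inverse.

def ls_gradient(deltas, dphi):
    """deltas: (dx_j, dy_j) to neighbor centroids; dphi: phi_j - phi_0."""
    sxx = sum(dx * dx for dx, _ in deltas)
    sxy = sum(dx * dy for dx, dy in deltas)
    syy = sum(dy * dy for _, dy in deltas)
    bx = sum(dx * dp for (dx, _), dp in zip(deltas, dphi))
    by = sum(dy * dp for (_, dy), dp in zip(deltas, dphi))
    det = sxx * syy - sxy * sxy          # zero if the points are collinear
    return ((syy * bx - sxy * by) / det, (sxx * by - sxy * bx) / det)

# phi = 1 + 2x + 3y sampled at three non-collinear neighbors of C0 at origin:
deltas = [(1.0, 0.0), (0.0, 1.0), (1.0, 1.0)]
dphi = [2.0 * dx + 3.0 * dy for dx, dy in deltas]   # phi_j - phi_0
print(ls_gradient(deltas, dphi))     # → (2.0, 3.0)
```

Note that only the φ-differences change from scalar to scalar; the geometric sums could be inverted once per cell and stored as weights, as described above.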

The overall idea is the same as for regular meshes.

4.7 Closure

In this chapter, we have seen how to discretize the diffusion equation on non-orthogonal meshes, both structured and unstructured. The procedure involves a balance of diffusive fluxes on the faces of the cell with the source inside the cell. However, we have seen that the face fluxes may no longer be written purely in terms of the neighbor cell values if the mesh is non-orthogonal: an extra secondary gradient term appears. To compute this term, we require face gradients of φ, which may be averaged from cell gradients, and we have seen how cell gradients may be computed for structured and unstructured meshes. We have also seen how much of the calculation is amenable to a face-based data structure. In the next chapter, we address the discretization of the convective term and, finally, the solution of a complete scalar transport equation.


Chapter 5

Convection

In this chapter, we turn to the remaining term in the general transport equation for the scalar φ, namely the convection term, and the special problems associated with it. Once we have addressed this term, we will have developed a tool for solving the complete scalar transport equation, or the convection-diffusion equation, as it is sometimes called in the literature. We will see how to difference this term in the framework of the finite volume method, and we will address both regular and non-orthogonal meshes.

In the development that follows, we will assume that the fluid flow is known; i.e., the velocity vector V is known at all the requisite points in the domain. In reality, of course, the fluid flow would have to be computed; we will address the calculation of the fluid flow in later chapters.

5.1 Two-Dimensional Convection and Diffusion in A Rectangular Domain

Let us consider a two-dimensional rectangular domain such as that shown in Figure 5.1. We seek to determine how the scalar φ is transported in the presence of a given fluid flow field. The equation governing steady scalar transport in the domain is given by

∇ · J = S   (5.1)

where J is given by

J = ρVφ − Γ∇φ   (5.2)

Here, ρ is the density of the fluid and V is its velocity,

V = u i + v j   (5.3)

The domain has been discretized using a regular Cartesian mesh. For the sake of clarity, let us assume that ∆x and ∆y are constant, i.e., that the mesh is uniform in each of the directions x and y. As before, we store discrete values of φ at cell centroids, and we integrate Equation 5.1 over the cell of interest, P, so that

∫∆V ∇ · J dV = ∫∆V S dV   (5.4)

Figure 5.1: Convection on a Cartesian Mesh

Applying the divergence theorem, we write

∮A J · dA = S ∆VP   (5.5)

As usual, we assume that J is constant over each of the faces e, w, n and s of the cell P, and that the face centroid value is representative of the face average. Also, we assume S = SC + SP φP as before. Thus

Je · Ae + Jw · Aw + Jn · An + Js · As = (SC + SP φP) ∆VP   (5.6)

The face area vectors are given by

Ae = ∆y i
Aw = −∆y i
An = ∆x j
As = −∆x j   (5.7)

Thus far the discretization process is identical to our procedure in previous chapters. Let us now consider one of the transport terms, say, Je · Ae. This is given by

Je · Ae = (ρuφ)e ∆y − Γe ∆y (∂φ/∂x)e   (5.8)

We see from Equation 5.8 that the convection component has the form

Fe φe   (5.9)

where Fe is the mass flow rate through the face e:

Fe = (ρu)e ∆y   (5.10)

We see that writing the face flux Je requires two types of information: the face value φe, and the face gradient (∂φ/∂x)e. We already know how to write the face gradient. We have seen that the diffusive term on the face e may be written as

−De (φE − φP)   (5.11)

where

De = Γe ∆y / (δx)e   (5.12)

Similar diffusion terms can be written for the other faces as well. The quantity

Pe = F/D = ρ u δx / Γ   (5.13)

is called the Peclet number, and measures the relative importance of convection and diffusion in the transport of φ. If the length scale δx is a cell length scale, Pe is referred to as the cell Peclet number. For the purposes of this chapter, we shall assume that Fe is given.

We note that the face value, φe, is required to determine the transport due to convection at the face. Once φe is determined, it is then simply a matter of doing the same operation on all the faces. We turn now to different methods for writing the face value φe in terms of the cell centroid values.

5.1.1 Central Differencing

The problem of discretizing the convection term reduces to finding an interpolation for φe from the cell centroid values of φ. One approximation we can use is the central-difference approximation. Here, we assume that φ varies linearly between grid points. For a uniform mesh, we may write

φe = (φE + φP) / 2   (5.14)

so that the convective transport through the face is

Fe (φE + φP) / 2   (5.15)

We note with trepidation that φE and φP appear with the same sign in Equation 5.15. Similar expressions may be written for the convective contributions on the other faces. Collecting the convection and diffusion terms on all the faces, and writing the discrete equation for the cell P, we obtain

aP φP = ∑nb anb φnb + b   (5.16)

where

aE = De − Fe/2
aW = Dw + Fw/2
aN = Dn − Fn/2
aS = Ds + Fs/2
aP = ∑nb anb + (Fe − Fw + Fn − Fs) − SP ∆VP
b = SC ∆VP   (5.17)

In the above equations,

De = Γe ∆y/(δx)e ,  Dw = Γw ∆y/(δx)w ,  Dn = Γn ∆x/(δy)n ,  Ds = Γs ∆x/(δy)s
Fe = (ρu)e ∆y ,  Fw = (ρu)w ∆y ,  Fn = (ρv)n ∆x ,  Fs = (ρv)s ∆x   (5.18)

The term

(Fe − Fw + Fn − Fs)   (5.19)

represents the net mass outflow from the cell P. If the underlying flow field satisfies the continuity equation, we would expect this term to be zero.

Let us consider the case when the velocity vector V = u i + v j is such that u > 0 and v > 0. We see that as long as Fe < 2De and Fn < 2Dn, we are guaranteed positive coefficients. For Fe > 2De, we see that aE becomes negative; similarly, for Fn > 2Dn, aN becomes negative. (Similar behavior would occur with aW and aS if the velocity vector reverses sign.) That is, the face Peclet numbers Pee = Fe/De < 2 and Pen = Fn/Dn < 2 are required for uniform meshes. These negative coefficients mean that though aP = ∑nb anb for S = 0, we are not guaranteed that φP is bounded by its neighbors, or that the solution is physically plausible. Furthermore, we are not guaranteed the convergence of the Gauss-Seidel iterative scheme, since the equation violates the Scarborough criterion. For a given velocity field and physical properties, we can meet this Peclet number criterion by refining the mesh.

For many practical situations, however, the resulting mesh may be very fine, and the storage and computational requirements may be too large to afford.

5.1.2 Upwind Differencing

When we examine the discretization procedure described above, we realize that the reason we encounter negative coefficients is the arithmetic averaging in Equation 5.14. We now consider an alternative differencing procedure called the upwind differencing scheme. In this scheme, the face value of φ is set equal to the upwind cell centroid value. Thus, for face e in Figure 5.1, we write

φe = φP if Fe > 0
φe = φE if Fe < 0   (5.20)

These expressions essentially say that the value of φ on the face is determined entirely by the mesh direction from which the flow is coming to the face. Similar expressions may be written on the other faces. Using Equation 5.20 in the cell balance for cell P, and the diffusion term discretization in Equation 5.11, we may write the following discrete equation for cell P:

aP φP = ∑nb anb φnb + b   (5.21)

where

aE = De + Max(−Fe, 0)
aW = Dw + Max(Fw, 0)
aN = Dn + Max(−Fn, 0)
aS = Ds + Max(Fs, 0)
aP = ∑nb anb + (Fe − Fw + Fn − Fs) − SP ∆VP
b = SC ∆VP   (5.22)

Here,

Max(a, b) = a if a > b, b otherwise   (5.23)

We see that the upwind scheme yields positive coefficients, and aP = ∑nb anb if the flow field satisfies the continuity equation and S = 0. Thus, we are guaranteed that φP is bounded by its neighbors. We will see in later sections that though the upwind scheme produces a coefficient matrix that guarantees physically plausible results and is ideally suited for iterative linear solvers, it can smear discontinuous profiles of φ even in the absence of diffusion. We will examine other higher-order differencing schemes which do not have this characteristic.
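The contrast between the two schemes is easy to see by evaluating the east coefficient of Equations 5.17 and 5.22 at different cell Peclet numbers; the sketch below uses D = 1 and hypothetical Peclet values:

```python
# East coefficient a_E on a uniform 1-D mesh: central vs upwind.
def a_E(F, D, scheme):
    if scheme == "central":
        return D - F / 2.0                  # Eq. 5.17
    return D + max(-F, 0.0)                 # upwind, Eq. 5.22

for Pe in (1.0, 4.0):                       # cell Peclet number F/D, with D = 1
    print(Pe, a_E(Pe, 1.0, "central"), a_E(Pe, 1.0, "upwind"))
# → 1.0 0.5 1.0
# → 4.0 -1.0 1.0
# central turns negative once the cell Peclet number exceeds 2; upwind never does
```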

Figure 5.2: Convection on a Non-Orthogonal Mesh

5.2 Convection-Diffusion on Non-Orthogonal Meshes

For non-orthogonal meshes, both structured and unstructured, counterparts of the central difference and upwind schemes are easily derived. We start with the integration of the convection-diffusion equation as usual: we integrate it over the cell C0 shown in Figure 5.2, and apply the divergence theorem to get

∮A J · dA = (SC + SP φ0) ∆V0   (5.24)

As before, we assume that the flux on a face may be written in terms of the face centroid value, so that

∑f Jf · Af = (SC + SP φ0) ∆V0   (5.25)

where the summation is over the faces f of the cell. The flux is given by

Jf = (ρVφ)f − (Γ∇φ)f   (5.26)

The transport of φ at the face f may thus be written as

Jf · Af = Ff φf − Γf ∇φ · Af   (5.27)

We define the face mass flow rate as

Ff = (ρV)f · Af   (5.28)

This is the mass flow rate out of the cell C0. We have already seen in the previous chapter that the diffusion transport at the face may be written as

Γf ∇φ · Af = Γf (Af · Af)/(∆ξ (Af · eξ)) (φ1 − φ0) + Sf   (5.29)

where Sf is the secondary gradient term of the previous chapter.

Defining

Df = Γf (Af · Af) / (∆ξ (Af · eξ))   (5.30)

we write the net transport across the face f as

Jf · Af = Ff φf − Df (φ1 − φ0) − Sf   (5.31)

As before, the convective transport of φ at the face requires the evaluation of the face value φf. Just as with structured meshes, we may find φf through either a central difference or an upwind approximation.

5.2.1 Central Difference Approximation

The simplest central difference approximation is to write

φf = (φ0 + φ1) / 2   (5.32)

With this approximation, the following discrete equation is obtained:

aP φP = ∑nb anb φnb + b

where

anb = Df − Ff / 2
aP = ∑nb anb + ∑f Ff − SP ∆V0
b = SC ∆V0 + ∑f Sf   (5.33)

The quantity ∑f Ff is the sum of the outgoing mass fluxes for the cell, and is expected to be zero if the underlying flow field satisfies mass balance. We see that, just as with regular meshes, it is possible to get negative coefficients using central differencing: anb < 0 if Ff/Df > 2, i.e., if the face Peclet number Pef > 2.

5.2.2 Upwind Differencing Approximation

Under the upwind differencing approximation, we write

φf = φ0 if Ff > 0
φf = φ1 otherwise   (5.34)

Using this approximation in the discrete cell balance, we get

aP φP = ∑nb anb φnb + b   (5.35)

where

anb = Df + Max(−Ff, 0)
aP = ∑nb anb + ∑f Ff − SP ∆V0
b = SC ∆V0 + ∑f Sf   (5.36)

with Sf the secondary gradient term on face f. As with structured meshes, we see that anb is always guaranteed positive. Thus, the discretization for unstructured meshes yields a coefficient structure that is very similar to that for regular meshes. In both cases, the upwind scheme produces positive coefficients, but, as we will see in the next section, this comes at the cost of accuracy, while the central difference scheme introduces the possibility of negative coefficients for cell Peclet numbers greater than 2 for uniform meshes.

5.3 Accuracy of Upwind and Central Difference Schemes

Let us consider the truncation error associated with the upwind and central difference schemes. Let us assume a uniform mesh. Using a Taylor series expansion about the face point e, we may write

φP = φe − (∆x/2)(dφ/dx)e + (1/2)(∆x/2)²(d²φ/dx²)e − O(∆x³)   (5.38)
φE = φe + (∆x/2)(dφ/dx)e + (1/2)(∆x/2)²(d²φ/dx²)e + O(∆x³)   (5.39)

From Equation 5.38, we see that

φe = φP + O(∆x)   (5.40)

Recall that the upwind scheme uses φe = φP when Fe > 0; we see that upwind differencing is only first-order accurate. Adding Equations 5.38 and 5.39, dividing by two, and rearranging terms, we get

φe = (φP + φE)/2 − (∆x²/8)(d²φ/dx²)e + O(∆x³)   (5.41)

We see that the central difference approximation is second-order accurate.

5.3.1 An Illustrative Example

Consider the convection of a scalar φ over the square domain shown in Figure 5.4. The left and bottom boundaries are held at φ = 0 and φ = 1 respectively. The flow field in the domain is given by

V = 1.0 i + 1.0 j   (5.42)
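The truncation errors in Equations 5.40 and 5.41 can be verified numerically. The sketch below halves ∆x and watches how fast each interpolation error shrinks for a smooth profile (φ = sin x is an arbitrary choice):

```python
# Numerical check of the face-interpolation errors: upwind is O(dx),
# central is O(dx^2), so halving dx should halve vs. quarter the error.
import math

def errors(dx):
    xe = 1.0                                 # face location
    phi_P = math.sin(xe - dx / 2)            # upwind cell value
    phi_E = math.sin(xe + dx / 2)
    exact = math.sin(xe)
    return abs(phi_P - exact), abs((phi_P + phi_E) / 2 - exact)

e1_up, e1_cd = errors(0.1)
e2_up, e2_cd = errors(0.05)
print(round(e1_up / e2_up, 1), round(e1_cd / e2_cd, 1))  # → 2.0 4.0
```

The observed ratios of 2 and 4 under mesh halving are exactly the first- and second-order behavior derived above.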

so that the velocity vector is aligned with the diagonal. We wish to compute the distribution of φ in the domain using the upwind and central difference schemes. The flow is governed by the domain Peclet number Pe = ρ|V|L/Γ. For Γ = 0, i.e., Pe → ∞, the exact solution is φ = 1 below the diagonal and φ = 0 above the diagonal. For other values of Pe, we expect a diffusion layer in which 0 < φ < 1; the diffusion layer is wider for smaller Pe.

We compute the steady convection-diffusion problem in the domain using 13 × 16 quadrilateral cells, and consider the case Pe → ∞. Figure 5.5 shows the predicted φ values along the vertical centerline of the domain (x = 0.5). We see that the upwind scheme smears the φ profile, so that there is a diffusion layer even when there is no physical diffusion. The central difference scheme, on the other hand, shows unphysical oscillations in the value of φ; in this problem the cell Peclet number is infinite no matter how fine the mesh, so it is not possible to control these oscillations by refining the mesh.

The main drawback of the first-order upwind scheme is that it is very diffusive. To understand the reasons behind this, we develop a model equation for the scheme.

5.3.2 False Diffusion and Dispersion

We can gain greater insight into the behavior of the upwind and central difference schemes through the use of model equations. Consider the case of steady two-dimensional convection with no diffusion:

∂(ρuφ)/∂x + ∂(ρvφ)/∂y = 0   (5.43)

Consider the case of a constant velocity field, with u > 0, v > 0, and ρ constant, on the computational domain shown in Figure 5.1. Using upwind differencing, we obtain the following discrete form:

ρu (φP − φW)/∆x + ρv (φP − φS)/∆y = 0   (5.44)

Figure 5.3: One-Dimensional Control Volume

Figure 5.4: Convection and Diffusion in a Square Domain

Figure 5.5: Variation of φ Along the Vertical Centerline

Expanding φW and φS about φP using a Taylor series, we get

φW = φP − ∆x (∂φ/∂x) + (∆x²/2!)(∂²φ/∂x²) − (∆x³/3!)(∂³φ/∂x³) + ···   (5.45)
φS = φP − ∆y (∂φ/∂y) + (∆y²/2!)(∂²φ/∂y²) − (∆y³/3!)(∂³φ/∂y³) + ···   (5.46)

All derivatives in the above equations are evaluated at P. Using the above expressions in Equation 5.44 and rearranging, we obtain

ρu (∂φ/∂x) + ρv (∂φ/∂y) = (ρu∆x/2)(∂²φ/∂x²) + (ρv∆y/2)(∂²φ/∂y²) + O(∆x², ∆y²)   (5.47)

The differential equation derived in this manner is referred to as the equivalent or modified equation: it represents the continuous equation that our finite-volume numerical scheme is effectively modeling. For simplicity, let us consider the case when u = v and ∆x = ∆y. Equation 5.47 reduces to

ρu (∂φ/∂x + ∂φ/∂y) = (ρu∆x/2)(∂²φ/∂x² + ∂²φ/∂y²) + O(∆x²)   (5.50)

The left hand side of Equation 5.50 is our original differential equation, and the right hand side represents the truncation error. We find that the leading order error term in Equation 5.50 is O(∆x), as expected for a first-order scheme. The right hand side of Equation 5.50 looks similar to the diffusion term in the convection-diffusion equation; it is interesting to compare Equation 5.50 with the one-dimensional form of the convection-diffusion equation (Equation 5.1). Thus we see that although we are trying to solve a pure convection problem, applying the upwind scheme means that effectively we are getting the solution to a problem with some diffusion. This phenomenon is variously called artificial, false, or numerical diffusion (or viscosity, in the context of momentum equations). The artificial diffusion coefficient is proportional to the grid size, so we would expect its effects to decrease as we refine the grid, but it is always present.

The same analysis for the two-dimensional central difference scheme starts with Equation 5.43 and the discrete equation

ρu (φE − φW)/(2∆x) + ρv (φN − φS)/(2∆y) = 0   (5.51)

Expanding φW about φP using a Taylor series, we get

φW = φP − ∆x (∂φ/∂x) + (∆x²/2!)(∂²φ/∂x²) − (∆x³/3!)(∂³φ/∂x³) + ···   (5.52)
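The smearing predicted by the modified equation is easy to reproduce. With u = v, ρ constant, ∆x = ∆y and no physical diffusion, the upwind discrete form (Equation 5.44) reduces to φP = (φW + φS)/2, which can be swept explicitly from the inflow boundaries; the grid size below is arbitrary:

```python
# Pure convection of a step across a square with u = v: with first-order
# upwind and dx = dy, Eq. 5.44 gives phi_P = (phi_W + phi_S)/2, so the
# discontinuity along the diagonal is smeared by false diffusion.
N = 8
phi = [[0.0] * (N + 1) for _ in range(N + 1)]   # phi[i][j]; i: x, j: y
for i in range(N + 1):
    phi[i][0] = 1.0          # bottom boundary: phi = 1
for j in range(1, N + 1):
    phi[0][j] = 0.0          # left boundary: phi = 0
for i in range(1, N + 1):    # sweep in the flow direction
    for j in range(1, N + 1):
        phi[i][j] = 0.5 * (phi[i - 1][j] + phi[i][j - 1])

# Exact solution is a sharp step (1 below the diagonal, 0 above); upwind
# instead produces intermediate values near the diagonal:
print(phi[4][4])             # → 0.5 on the diagonal, smeared from the step
```

Refining the grid narrows this numerical diffusion layer but never removes it, consistent with the O(∆x) artificial diffusion coefficient derived above.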

Similarly,

φE = φP + ∆x (∂φ/∂x) + (∆x²/2!)(∂²φ/∂x²) + (∆x³/3!)(∂³φ/∂x³) + ···   (5.53)

Subtracting the two equations, we get

(φE − φW)/(2∆x) = ∂φ/∂x + (∆x²/3!)(∂³φ/∂x³) + ···   (5.54)

A similar procedure in the y-direction yields

(φN − φS)/(2∆y) = ∂φ/∂y + (∆y²/3!)(∂³φ/∂y³) + ···   (5.55)

As before, we consider the case when u = v and ∆x = ∆y. Substituting Equations 5.54 and 5.55 into Equation 5.51, we get

ρu (∂φ/∂x + ∂φ/∂y) = −(ρu∆x²/3!)(∂³φ/∂x³ + ∂³φ/∂y³) + O(∆x³)   (5.56)

We see that the leading truncation term is O(∆x²), as expected in a second-order scheme. However, though we set out to solve a pure convection equation, the effective equation we solve using the central difference scheme contains a third derivative term on the right hand side. This term is responsible for dispersion, i.e., for the oscillatory behavior of φ. Thus, the upwind scheme leads to false or artificial diffusion which tends to smear sharp gradients, whereas the central-difference scheme tends to be dispersive.

5.4 First-Order Schemes Using Exact Solutions

A number of first-order schemes for the convection-diffusion equation have been published in the literature which treat the convection and diffusion terms together, rather than discretizing them separately. These schemes use local profile assumptions which are approximations to the exact solution of a local convection-diffusion equation. We present these here for historical completeness. Their behavior in multi-dimensional situations has the same characteristics as the upwind scheme.

5.4.1 Exponential Scheme

The one-dimensional convection-diffusion equation with no source term may be written as

∂(ρuφ)/∂x − ∂/∂x (Γ ∂φ/∂x) = 0 ,  0 ≤ x ≤ L   (5.57)

The boundary conditions are

φ = φ0 at x = 0
φ = φL at x = L   (5.58)

The exact solution to the problem is

(φ − φ0)/(φL − φ0) = [ exp(Pe x/L) − 1 ] / [ exp(Pe) − 1 ]   (5.59)

where Pe is the Peclet number given by

Pe = ρuL/Γ   (5.60)

We wish to use this exact solution in making profile assumptions. Consider the one-dimensional mesh shown in Figure 5.3 and the convection-diffusion equation

∂(ρuφ)/∂x = ∂/∂x (Γ ∂φ/∂x) + S   (5.61)

We wish to obtain a discrete equation for the point P. Integrating Equation 5.61 over the cell P yields

Je Ae − Jw Aw = (SC + SP φP) ∆VP   (5.62)

Assuming, for this 1-D case, that Ae = Aw, we may write

Je = (ρuφ)e − Γe (dφ/dx)e
Jw = (ρuφ)w − Γw (dφ/dx)w   (5.63, 5.64)

As before, we must make profile assumptions to evaluate the face values φe and φw and the gradients (dφ/dx)e and (dφ/dx)w. We assume that the profile φ(x) between the points P and E may be taken from the exact solution, Equation 5.59, applied locally, and we use this profile to evaluate both φ and dφ/dx at the face. This yields

Je Ae = Fe [ φP + (φP − φE)/(exp(Pee) − 1) ]   (5.65)

where

Pee = (ρu/Γ)e (δx)e = Fe/De   (5.66)

A similar expression may be written for the w face. Collecting terms yields the following discrete equation for φP:

aP φP = aE φE + aW φW + b

where

aE = Fe / [ exp(Fe/De) − 1 ]
aW = Fw exp(Fw/Dw) / [ exp(Fw/Dw) − 1 ]
aP = aE + aW + (Fe − Fw) − SP ∆VP
b = SC ∆VP   (5.67, 5.68)

This scheme always yields positive coefficients and bounded solutions. For the case of S = 0, in one-dimensional situations, it will yield the exact solution regardless of mesh size or Peclet number. Of course, this is not true when a source term exists or when the situation is multi-dimensional; it is possible to show that for these general situations the scheme is only first-order accurate. Because exponentials are expensive to compute, researchers have created schemes which approximate the behavior of the coefficients obtained using the exponential scheme. These include the hybrid and power law schemes which are described below.

5.4.2 Hybrid Scheme

The hybrid scheme seeks to approximate the behavior of the discrete coefficients from the exponential scheme by reproducing their limiting behavior correctly. The coefficient aE in the exponential scheme may be written as

aE/De = Pee / [ exp(Pee) − 1 ]   (5.69)

A plot of aE/De is shown in Figure 5.6. It shows the following limiting behavior:

aE/De → 0 for Pee → ∞
aE/De → −Pee for Pee → −∞
aE/De = 1 − Pee/2 at Pee = 0 (the tangent at Pee = 0)   (5.70)

The hybrid scheme models aE/De using the three bounding tangents shown in Figure 5.6:

aE/De = −Pee for Pee < −2
aE/De = 1 − Pee/2 for −2 ≤ Pee ≤ 2
aE/De = 0 for Pee > 2   (5.71)

The overall discrete equation for the cell P is given by

aP φP = aE φE + aW φW + b   (5.72)
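The quality of the fit is easy to inspect by evaluating the exact coefficient variation (Equation 5.69) against the hybrid tangents (Equation 5.71); the sample Peclet numbers below are arbitrary:

```python
# a_E/D_e as a function of the cell Peclet number: exact (exponential
# scheme, Eq. 5.69) versus the hybrid three-tangent model (Eq. 5.71).
import math

def a_exact(pe):
    return pe / (math.exp(pe) - 1.0) if pe != 0.0 else 1.0

def a_hybrid(pe):
    # max of the three bounding tangents reproduces the piecewise fit
    return max(-pe, 1.0 - pe / 2.0, 0.0)

for pe in (-4.0, -1.0, 1.0, 4.0):
    print(pe, round(a_exact(pe), 3), a_hybrid(pe))
```

The hybrid coefficient matches the exponential limits exactly for |Pee| → ∞ and deviates most near |Pee| = 2, which is precisely where the piecewise tangents join.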

Figure 5.6: Variation of aE/De with Pee, showing the exact variation, the tangents aE/De = −Pee and aE/De = 1 − Pee/2, and the limit aE/De = 0 (Adapted from Patankar (1980))

In terms of the flow and diffusion quantities, the hybrid coefficients are

aE = max(−Fe, De − Fe/2, 0)
aW = max(Fw, Dw + Fw/2, 0)
aP = aE + aW + (Fe − Fw) − SP ΔV
b = SC ΔV    (5.73)

5.4.3 Power Law Scheme

Here, the objective is to curve-fit the aE/De curve using a fifth-order polynomial, rather than to use the bounding tangents as the hybrid scheme does. The power-law expression for aE/De may be written as

aE/De = max[0, (1 − 0.1 |Pee|)^5] + max(0, −Pee)    (5.74)

This expression has the advantage that it is less expensive to compute than the exponential scheme, while reproducing its behavior closely.

5.5 Unsteady Convection

Let us now focus our attention on unsteady convection. For simplicity we will set ρ to unity and Γ to zero. The convection-diffusion equation (Equation 5.1) then takes the

form

∂φ/∂t + ∂(uφ)/∂x = 0    (5.75)

Equation 5.75 is also known as the linear advection or linear wave equation. It is termed linear because the convection speed u is not a function of the convected quantity φ. We saw in a previous chapter that mathematically it is classified as a hyperbolic equation. Even though Equation 5.75 appears to be quite simple, historically it has been one of the most widely studied equations in CFD: it provides a great deal of insight into the treatment of the more complex, non-linear, coupled equations that govern high speed flows.

To complete the specification of the problem we need to define the initial and boundary conditions. Let us consider a domain of length L, and let the initial solution be given by a spatial function φ0:

φ(x, 0) = φ0(x)    (5.76)

The exact solution to this problem is given by

φ(x, t) = φ0(x − ut)    (5.77)

In other words, the exact solution is simply the initial profile translated by a distance ut, without distortion or smearing of the profile. Consider the convection of two initial profiles by a convection velocity u = 1 in a domain of length L: one a single sine wave, the other a square wave. The boundary conditions for both problems are φ(0, t) = φ(L, t) = 0. Figure 5.7 shows the initial solution as well as the solution at t = 0.25; the profiles have merely shifted to the right by 0.25u.

Figure 5.7: Exact solutions for linear convection of (a) sine wave (b) square wave

We will use these examples to determine whether our discretization schemes are able to predict this translation accurately. Many different approaches

have been developed for its numerical solution, but in this book we will concentrate mostly on modern techniques that form the basis of solution algorithms for the Euler and Navier-Stokes equations.

In general, we are interested in two kinds of problems. In some instances we might be interested in the transient evolution of the solution itself; often, however, only the steady-state solution (i.e., the solution as ∂/∂t → 0) is of interest. In the previous chapter we saw that we could obtain the steady-state solution by solving a sparse system of nominally linear algebraic equations, iterating for non-linearities. An alternative to iterations is to use time-marching, and to obtain the steady-state solution as the culmination of an unsteady process. Since the equation is hyperbolic in the time coordinate and the solution depends only on the past and not on the future, it seems logical to devise methods that yield the solution at successive instants in time, starting with the initial solution. With this general framework in hand, let us look at some specific schemes.

5.5.1 1D Finite Volume Discretization

For simplicity, let us examine the discretization of Equation 5.75. We assume a uniform mesh with a cell width Δx. Integrating Equation 5.75 over the cell P yields

(∂φ/∂t)P Δx + ue φe − uw φw = 0    (5.78)

Since u is constant, we may rearrange the equation to write

(∂φ/∂t)P + u (φe − φw)/Δx = 0    (5.79)

Since for the linear problem the velocity u is known everywhere, the problem of determining the face flux simply reduces to determining the face values φe and φw.

5.5.2 Central Difference Scheme

Using the explicit time discretization we developed in a previous chapter, together with central differencing for the face values, we may write

(φP − φP⁰)/Δt + u (φE⁰ − φW⁰)/(2Δx) = 0    (5.80)

where, as per our convention, the un-superscripted values denote values at the current time level and the superscript “0” denotes values at the previous time level. In the explicit scheme, φP for every cell is a function only of the (known) solution at the previous time level. We have shown from a truncation error analysis that the central difference scheme is second-order accurate in space, and explicit time discretization is first-order accurate in time; thus the scheme described above is second-order accurate in space and first-order accurate in time. Applying the von Neumann stability analysis to this scheme, however, shows that it is unconditionally unstable. As such it is not usable, but there are several important lessons one can learn from it.
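The instability is easy to confirm with a von Neumann (Fourier) analysis: substituting φj = Gⁿ e^(ijθ) into Equation 5.80 gives the amplification factor G = 1 − iν sin θ, whose magnitude is at least unity for every mode and exceeds unity whenever ν sin θ ≠ 0. A small numerical check (our own sketch):

```python
import math

def amplification(nu, theta):
    """|G| for the explicit central scheme: G = 1 - i*nu*sin(theta)."""
    return abs(1.0 - 1j * nu * math.sin(theta))

for nu in (0.1, 0.5, 1.0):
    # scan the mode angles theta in [0, 2*pi] for the worst amplification
    worst = max(amplification(nu, k * math.pi / 50) for k in range(101))
    print(f"nu = {nu:3.1f}   max |G| over the sampled modes = {worst:.4f}")
```

Reducing the time step (smaller ν) reduces the growth per step but never eliminates it, which is why no stability limit exists for this scheme.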

The first point to note is that the instability is not caused by either the spatial or the temporal discretization alone, but by the combination of the two. Indeed, we get very different behavior if we simply use an implicit temporal discretization while retaining the central difference scheme for the spatial term:

(φP − φP⁰)/Δt + u (φE − φW)/(2Δx) = 0    (5.81)

It is easy to show that the resulting implicit scheme is unconditionally stable. However, it is important to realize that stability does not guarantee physical plausibility in the solution. Gathering terms,

φP = φP⁰ − (uΔt/2Δx) φE + (uΔt/2Δx) φW    (5.82)

We see that the coefficient of φE is negative for u > 0, and that of φW is negative for u < 0. The solution is therefore not guaranteed to be bounded by the values of its spatial and temporal neighbors. Furthermore, the Scarborough criterion is not satisfied, making it difficult to use iterative solvers. Since the scheme is implicit, it also requires the solution of a linear equation set at each time step.

5.5.3 First Order Upwind Scheme

Using the upwind difference scheme for spatial discretization and an explicit time discretization, we obtain the following scheme (written for u > 0):

(φP − φP⁰)/Δt + u (φP⁰ − φW⁰)/Δx = 0    (5.83)

We have shown that the upwind differencing scheme is only first-order accurate in space, and the explicit scheme is also only first-order accurate in time. Stability analysis reveals that the scheme is stable as long as

ν = uΔt/Δx ≤ 1    (5.84)

The quantity ν = uΔt/Δx is known as the Courant or CFL number, after Courant, Friedrichs and Lewy [1, 2], who first analyzed the convergence characteristics of such schemes. Explicit schemes usually have a stability limit of this kind, which dictates the maximum Courant number that can be used; this limits the time step, and makes the use of time marching with explicit schemes undesirable for steady-state problems. It is interesting to note that our heuristic requirement that all spatial and temporal neighbor coefficients be positive is also met when the above condition is satisfied. This is easily seen by writing Equation 5.83 in the form

φP = (1 − ν) φP⁰ + ν φW⁰    (5.85)

We therefore expect the scheme to also be monotone when it is stable. To see how well the upwind scheme performs for unsteady problems, let us apply it to the convection of the single sine and square waves described above.
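Equation 5.85 can be marched directly. The sketch below (our own minimal setup, matching the 50-cell, ν = 0.5, 25-step test discussed next) convects the square wave and confirms that the solution stays within the initial bounds while the discontinuities are smeared:

```python
# First-order upwind, explicit time marching (Eq. 5.85):
#   phi_P = (1 - nu) * phi_P_old + nu * phi_W_old
# Square-wave convection with u = 1 on 50 cells, nu = 0.5, 25 steps.
n, nu, steps = 50, 0.5, 25

# Square wave initial profile: phi = 1 on 0.1 < x/L < 0.3, else 0
phi = [1.0 if 0.1 < (i + 0.5) / n < 0.3 else 0.0 for i in range(n)]

for _ in range(steps):
    old = phi[:]
    # phi(0, t) = 0 at the inflow boundary; the upwind neighbor of cell i is i-1
    phi = [(1.0 - nu) * old[i] + nu * (old[i - 1] if i > 0 else 0.0)
           for i in range(n)]

# Monotone: the solution stays within the initial bounds [0, 1],
# but the square wave's plateau has been eroded by numerical diffusion.
print(f"min = {min(phi):.4f}, max = {max(phi):.4f}")
```

Because each new value is a convex combination of old values, boundedness is guaranteed whenever ν ≤ 1; the price is the smearing visible in the reduced maximum.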

Figure 5.8: First order upwind solutions for linear convection of (a) sine wave (b) square wave

To compute this numerical solution we use an equispaced solution mesh of 50 cells. Using ν = 0.5, we apply the explicit scheme for 25 time steps, and compare the resulting solutions at t = 0.25 with the exact solutions in Figure 5.8. The exact solution, as expected, is the initial profile translated to the right by a distance of 0.25. Our numerical solution is similarly shifted, but we note that the sharp discontinuities, in either the variable itself or its slope, have been smoothened out considerably. We also observe that the profiles in both examples are monotonic. In the case of the sine wave, the peak amplitude has decreased, but nowhere has the solution exceeded the initial bounds.

5.5.4 Error Analysis

We can develop a model equation for the transient form of the upwind scheme (Equation 5.83) using techniques similar to those we used for the steady state, to obtain

∂φ/∂t + u ∂φ/∂x = (uΔx/2)(1 − ν) ∂²φ/∂x² + …    (5.86)

We see that, as expected, the transient form of the upwind scheme also suffers from numerical dissipation, with an artificial diffusion coefficient (uΔx/2)(1 − ν) which is now a function of the Courant number. We know that physically the effect of diffusion (or viscosity) is to smoothen out gradients; the numerical diffusion present in the upwind scheme acts in a similar manner, and this is why the profiles in Figure 5.8 are smoothened. Although this results in a loss of accuracy, this same artificial dissipation is also responsible for the stability of the scheme: the artificial viscosity damps out any errors that arise during the course of time marching (or iterations) and prevents these errors from growing. Note also that for ν > 1 the numerical viscosity of the upwind scheme would be negative. We can now appreciate the physical reason behind the CFL condition.

When the CFL condition is violated, the numerical viscosity is negative and thus causes any errors to grow.

Let us now analyze the explicit central difference scheme (Equation 5.80) we saw earlier. The corresponding modified equation is

∂φ/∂t + u ∂φ/∂x = −(u²Δt/2) ∂²φ/∂x² + …    (5.87)

This scheme has a negative artificial viscosity in all cases, and therefore it is not very surprising that it is unconditionally unstable. It can also be shown that the implicit version (Equation 5.81) has the following modified equation:

∂φ/∂t + u ∂φ/∂x = (u²Δt/2) ∂²φ/∂x² + …    (5.88)

Thus this scheme is stable, but also suffers from artificial diffusion.

5.5.5 Lax-Wendroff Scheme

A large number of schemes have been developed to overcome the shortcomings of the explicit central difference scheme that we discussed in the previous section. Of these, the most important is the Lax-Wendroff scheme, since it forms the basis of several well-known schemes used for the solution of the Euler and compressible Navier-Stokes equations. The principal idea is to remove the negative artificial diffusion of the explicit central difference scheme by adding an equal amount of positive diffusion. That is, we seek to solve

∂φ/∂t + u ∂φ/∂x = (u²Δt/2) ∂²φ/∂x²    (5.89)

rather than the original convection equation. Discretizing the second derivative using linear profile assumptions, as in previous chapters, we obtain the following explicit equation for the cell P:

(φP − φP⁰)/Δt + u (φE⁰ − φW⁰)/(2Δx) = (u²Δt/2)(φE⁰ − 2φP⁰ + φW⁰)/Δx²    (5.90)

It is possible to show that this scheme is second-order accurate in both space and time, and stability analysis shows that it is stable for ν ≤ 1. Applying it to the sine wave convection problem, we see that it resolves the smoothly varying regions of the profile much better than the upwind scheme (see Figure 5.9(a)). However, in regions of slope discontinuity we see spurious “wiggles”, and such non-monotonic behavior is even more pronounced in the presence of discontinuities, as shown in Figure 5.9(b). We also note that the solution in this case exceeds the initial bounds: at some locations it is higher than one, while in other places it is negative. If φ is a physical variable, such as a species concentration or the turbulence kinetic energy, that is always supposed to be positive, such behavior could cause a lot of difficulties in our numerical procedure. The reasons for this behavior can once again be understood by examining the truncation error. The modified equation for the Lax-Wendroff scheme is

∂φ/∂t + u ∂φ/∂x = −(uΔx²/6)(1 − ν²) ∂³φ/∂x³ + O(Δx³)    (5.91)
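A quick check of the Lax-Wendroff update (Equation 5.90) on the sine-wave problem (our own sketch; periodic boundaries are used here for convenience instead of the zero boundary values of the text):

```python
import math

# Lax-Wendroff for u = 1: 50 cells, nu = 0.5, 25 steps, so t = 25 * 0.5 / 50 = 0.25
n, nu, steps = 50, 0.5, 25
x = [(i + 0.5) / n for i in range(n)]
phi = [math.sin(2 * math.pi * xi) for xi in x]

for _ in range(steps):
    old = phi[:]
    # Eq. 5.90 rearranged for phi_P at the new time level; periodic indexing
    phi = [old[i]
           - 0.5 * nu * (old[(i + 1) % n] - old[i - 1])
           + 0.5 * nu ** 2 * (old[(i + 1) % n] - 2 * old[i] + old[i - 1])
           for i in range(n)]

# Exact solution: the sine wave shifted right by u * t = 0.25
exact = [math.sin(2 * math.pi * (xi - 0.25)) for xi in x]
err = max(abs(p - e) for p, e in zip(phi, exact))
print(f"max error = {err:.4f}, max |phi| = {max(abs(p) for p in phi):.4f}")
```

For this smooth, single-frequency profile the error stays small and the amplitude is essentially preserved, in line with the discussion above; it is on discontinuous profiles that the dispersive wiggles appear.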

Figure 5.9: Lax-Wendroff solutions for linear convection of (a) sine wave (b) square wave

The leading-order error term is proportional to the third derivative of φ; there is no diffusion-like term. The errors produced by second-order schemes are dispersive in nature, whereas those produced by first-order schemes are dissipative. In wave-mechanics terms, dispersion refers to the phenomenon that alters the frequency content of a signal, while dissipation refers to the reduction in amplitude. The effect of numerical dispersion is to cause phase errors between the different frequencies; this is characteristic of second-order schemes, and is the reason why wiggles in the solution do not get damped. The first-order upwind scheme, on the other hand, does not alter the phase differences, but damps all modes. For smooth profiles that contain few frequencies, second-order schemes work very well, and the lack of numerical diffusion preserves the amplitude. However, in the case of discontinuities, which are composed of many frequencies, the phase errors appear as wiggles.

All the numerical schemes we saw in previous sections were derived by making separate profile assumptions for the spatial and temporal variations. The Lax-Wendroff scheme is different in that the spatial profile assumption is tied to the temporal discretization. This becomes more clear if we re-arrange Equation 5.90 in the following form:

φP = φP⁰ − (uΔt/Δx) { [(φE⁰ + φP⁰)/2 − (uΔt/2Δx)(φE⁰ − φP⁰)] − [(φP⁰ + φW⁰)/2 − (uΔt/2Δx)(φP⁰ − φW⁰)] }

We recognize the terms in square brackets as the face values defining the fluxes at the e and w faces respectively. If we use this scheme to compute a steady-state solution, the equation that is satisfied at convergence is

[(φE + φP)/2 − (uΔt/2Δx)(φE − φP)] − [(φP + φW)/2 − (uΔt/2Δx)(φP − φW)] = 0    (5.92)

The consequence of this is that the final answer we obtain depends on the time-step size Δt! Although the solution still has a spatial truncation error of O(Δx²), this dependence on the time step is clearly not physical.

In this particular case, the resulting error in the final solution may be small, since stability requirements restrict the time step size. We will see in later chapters that a similar path dependence of the converged solution can occur even in iterative schemes if we are not careful.

5.6 Higher-Order Schemes

We have seen thus far that both the upwind and central difference schemes have severe limitations, the former due to artificial diffusion and the latter due to dispersion. Therefore there has been a great deal of research aimed at improving the accuracy of the upwind scheme by using higher-order interpolation, while controlling the severity of spatial oscillations. These higher-order schemes aim to obtain at least a second-order truncation error.

5.6.1 Second-Order Upwind Schemes

We may derive a second-order upwind scheme by making a linear profile assumption. In order to write φe in terms of cell centroid values, we write a Taylor series expansion for φ in the neighborhood of the upwind point P (taking Fe > 0):

φ(x) = φP + (x − xP) (∂φ/∂x)P + [(x − xP)²/2!] (∂²φ/∂x²)P + O(Δx³)    (5.93)

Thus far, we have assumed that, for the purposes of writing the face value φe, the profile of φ is essentially constant, i.e., for Fe > 0,

φe = φP    (5.94)

This is equivalent to retaining only the first term of the expansion. Instead of a constant profile assumption for φ, we may use higher-order profile assumptions, such as linear or quadratic. Evaluating Equation 5.93 at xe = xP + Δx/2, and retaining the first two terms of the expansion, we obtain

φe = φP + (Δx/2) (∂φ/∂x)P    (5.95)

This assumption has a truncation error of O(Δx²). In order to write φe purely in terms of cell centroid values, we must also write (∂φ/∂x)P in terms of cell centroid values. On our one-dimensional grid we can represent the derivative at P using either a forward, backward or central difference formula, to give us three different second-order schemes. If we write (∂φ/∂x)P using the central difference

(∂φ/∂x)P = (φE − φW)/(2Δx)    (5.96)

we obtain

φe = φP + (φE − φW)/4    (5.97)

or, adding and subtracting φP/4, we may write

φe = φP + (φP − φW)/4 + (φE − φP)/4    (5.98)

This scheme is referred to as the Fromm scheme in the literature. If instead we write (∂φ/∂x)P using the backward difference

(∂φ/∂x)P = (φP − φW)/Δx    (5.99)

we obtain

φe = φP + (φP − φW)/2    (5.100)

This scheme is referred to in the literature as the Beam-Warming scheme.

5.6.2 Third-Order Upwind Schemes

We may derive third-order accurate schemes by retaining the second derivative in the Taylor series expansion:

φ(x) = φP + (x − xP) (∂φ/∂x)P + [(x − xP)²/2!] (∂²φ/∂x²)P + …    (5.101)

Evaluating this at xe = xP + Δx/2 gives

φe = φP + (Δx/2) (∂φ/∂x)P + (Δx²/8) (∂²φ/∂x²)P    (5.102)

Using cell-centroid values to write the derivatives,

(∂φ/∂x)P = (φE − φW)/(2Δx) + O(Δx²)
(∂²φ/∂x²)P = (φE − 2φP + φW)/Δx² + O(Δx²)    (5.103)

we may write

φe = φP + (φE − φW)/4 + (φE − 2φP + φW)/8    (5.104)

Re-arranging, we may write

φe = (φP + φE)/2 − (φE + φW − 2φP)/8    (5.105)

This scheme is called the QUICK scheme (Quadratic Upwind Interpolation for Convective Kinetics) [3]. It may be viewed as a parabolic correction to linear interpolation for φe. We can emphasize this by introducing a curvature factor C, such that

φe = (φP + φE)/2 − C (φE + φW − 2φP)    (5.106)

with C = 1/8 recovering the QUICK scheme.
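The face-value formulas can be compared in a few lines (our own check): on a quadratic profile, the quadratic-based QUICK formula reproduces the face value exactly, while the linear-profile schemes do not.

```python
# Face-value formulas for phi_e (flow in +x, Fe > 0) on a uniform grid.

def fromm(phi_W, phi_P, phi_E):         # Eq. 5.97
    return phi_P + (phi_E - phi_W) / 4.0

def beam_warming(phi_W, phi_P, phi_E):  # Eq. 5.100
    return phi_P + (phi_P - phi_W) / 2.0

def quick(phi_W, phi_P, phi_E):         # Eq. 5.105
    return (phi_P + phi_E) / 2.0 - (phi_E + phi_W - 2.0 * phi_P) / 8.0

# Sample the profile phi(x) = x**2 at centroids x = 0, 1, 2 (dx = 1);
# the east face of cell P lies at x = 1.5, so the exact face value is 2.25.
phi_W, phi_P, phi_E = 0.0, 1.0, 4.0
print(fromm(phi_W, phi_P, phi_E),
      beam_warming(phi_W, phi_P, phi_E),
      quick(phi_W, phi_P, phi_E))
```

Only QUICK returns 2.25 here; Fromm and Beam-Warming return the values their linear profiles imply, illustrating the extra order of accuracy bought by the curvature term.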

The second- and third-order schemes we have seen here may be combined into a single expression for φe using

φe = φP + [(1 + κ)(φE − φP) + (1 − κ)(φP − φW)]/4    (5.107)

For κ = 1 we get the familiar central difference scheme, κ = −1 yields the Beam-Warming scheme, κ = 0 the Fromm scheme, and κ = 1/2 the QUICK scheme. We see from the signs of the terms in Equation 5.107 that it is possible to produce negative coefficients in our discrete equation using these higher-order schemes. However, the extent of the resulting spatial oscillations is substantially smaller than that obtained with the central difference scheme.

5.6.3 Implementation Issues

In order to write the face value φe, we must typically know cell centroid values beyond the two cells straddling the face. For the QUICK scheme, for example, we must know the values φW, φP and φE; a similar stencil is required for φw, involving the value φWW. If iterative solvers are used to solve the resulting set of discrete equations, it is important to ensure that the Scarborough criterion is satisfied by the nominally linear equations presented to the iterative solver. Consequently, typical implementations of higher-order schemes use deferred correction strategies, whereby the higher-order terms are included as corrections to an upwind flux. Thus, the convective transport Fe φe for Fe > 0 is written as

Fe φe = Fe φP + Fe { [(φE + φP)/2 − (φE + φW − 2φP)/8] − φP }    (5.108)

Here, the first term on the right hand side represents the upwind flux, and the second term is a correction term representing the difference between the QUICK and upwind fluxes; it is evaluated using the prevailing value of φ. The upwind term is included in the calculation of the coefficients aP and anb, while the correction term is included in the b term. Since the upwind scheme gives us coefficients that satisfy the Scarborough criterion, we are assured that the iterative solver will converge every outer iteration. Just as with non-linear problems, however, we have no guarantee that the outer iterations will themselves converge, and it is sometimes necessary to use good initial guesses and under-relaxation strategies to obtain convergence. At convergence, the prevailing and current values of φ coincide, and the resulting solution satisfies the QUICK scheme.

5.7 Higher-Order Schemes for Unstructured Meshes

All the higher-order schemes we have seen so far assume line structure: the face value is written in terms of cell centroid values along a mesh line, such as φWW, φW and φP. This presents a big problem for unstructured meshes, since no such line structure exists, and we can no longer write the face value purely in terms of the cell centroid values on either side of the face. We present here a second-order accurate upwind scheme suitable for unstructured meshes. Higher-order schemes for unstructured meshes are an area of active research, and new ideas continue to emerge.
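A minimal sketch of the deferred-correction strategy (our own setup; the problem, constants and the crude boundary treatment are invented for illustration, not taken from the text):

```python
# Steady 1-D convection-diffusion, phi(0) = 0, phi(L) = 1, constant F = rho*u > 0
# and D = Gamma/dx.  The implicit coefficients are first-order upwind, so each
# inner Gauss-Seidel sweep is well behaved; the QUICK-minus-upwind face
# correction enters the source term b and is refreshed each outer iteration.
n = 20
F, D = 5.0, 1.0              # grid Peclet number F/D = 5: upwind alone is diffusive
phi = [0.0] * n

def quick_minus_upwind(w, p, e):
    # QUICK face value (Eq. 5.105) minus the upwind face value phi_P, for F > 0
    return (p + e) / 2.0 - (e + w - 2.0 * p) / 8.0 - p

for outer in range(200):
    for i in range(n):
        phi_w = phi[i - 1] if i > 0 else 0.0       # left boundary value
        phi_e = phi[i + 1] if i < n - 1 else 1.0   # right boundary value
        a_w, a_e = D + F, D                        # upwind implicit coefficients
        b = 0.0
        # higher-order correction on interior faces only (QUICK needs phi_WW)
        if 1 < i < n - 1:
            b -= F * quick_minus_upwind(phi[i - 1], phi[i], phi[i + 1])  # east face
            b += F * quick_minus_upwind(phi[i - 2], phi[i - 1], phi[i])  # west face
        phi[i] = (a_w * phi_w + a_e * phi_e + b) / (a_w + a_e)

print(f"phi near the outflow boundary: {[round(v, 4) for v in phi[-3:]]}")
```

The point of the structure is visible in the update line: the matrix the solver sees is always the diagonally dominant upwind one, while the QUICK behavior is recovered through `b` as the outer iterations converge.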

Figure 5.10: Schematic for Second-Order Scheme

Our starting point is the multi-dimensional equivalent of Equation 5.94. If Ff > 0, we may write φ using a Taylor series expansion about the centroid of the upwind cell C0, as shown in Figure 5.10:

φ(x, y) = φ0 + ∇φ0 · Δr + O(|Δr|²)    (5.109)

Here, Δr is given by

Δr = (x − x0) i + (y − y0) j    (5.110)

To find the face value φf, we evaluate Equation 5.109 at Δr = Δr0, the vector from the cell centroid to the face centroid:

φf = φ0 + ∇φ0 · Δr0 + O(|Δr0|²)    (5.111)

As with structured meshes, the problem now turns to the evaluation of ∇φ0. We have already seen in the previous chapter several methods for the calculation of the cell-centered gradient; any of these methods may be used to provide ∇φ0.

5.8 Discussion

We have touched upon a number of different first- and second-order schemes for discretizing the convection term, and we have seen that all the schemes we have discussed here have drawbacks. The first-order schemes are diffusive, whereas the central difference and higher-order differencing schemes exhibit non-monotonicity to varying degrees. All the schemes we have seen employ linear coefficients, i.e., for linear problems, the discrete coefficients are independent of φ; thus, in principle, we do not require outer iterations unless a deferred correction strategy is adopted. A number of researchers have sought to control the spatial oscillations inherent in higher-order schemes by limiting cell gradients so as to ensure monotonicity. These schemes typically employ non-linear coefficients, which are adjusted to ensure that adjacent cell values are smoothly varying [4]. Though we do not address this class of discretization schemes here, they nevertheless represent an important arena of research.

5.9 Boundary Conditions

When dealing with the diffusion equation, we classified boundaries according to what information was specified. At Dirichlet boundaries, the value of φ itself was specified, whereas at Neumann boundaries, the gradient of φ was specified. For convection problems we must further distinguish between flow boundaries, where flow enters or leaves the computational domain, and geometric boundaries. Flow boundaries occur in a problem because we cannot include the entire universe in our computational domain and are forced to consider only a subset; we must then supply the appropriate information that represents the part of the universe that we are not considering, but that is essential to the problem we are considering. For example, while analyzing the exhaust manifold of an automobile we might not include the combustion chamber and the external airflow, but then we must specify information about the temperature, velocity etc. of the flow as it leaves the combustion chamber and enters our computational domain. The geometric boundaries in such a problem would be the external walls of the manifold, as well as the surfaces of any components inside the manifold. Flow boundaries may further be classified as inflow and outflow boundaries. We consider each in turn.

5.9.1 Inflow Boundaries

At inflow boundaries, we are given the inlet velocity distribution, as well as the value of φ, φgiven. Consider the boundary cell shown in Figure 5.11; the dashed line shows the inflow boundary. The boundary flux Jb is given by

Jb · Ab = ρ Vb · Ab φb − (Γ ∇φ)b · Ab    (5.112)

The discrete equation for the cell is given by

Jb · Ab + Σf Jf · Af = (SC + SP φ0) ΔV0    (5.113)

The summation in the second term is over the interior faces of the cell; we have already seen how to deal with the interior fluxes Jf. Using φb = φgiven, and writing the diffusion flux as in the previous chapter in terms of the distance Δξ and the unit vector eξ between the boundary face and the cell centroid (Figure 5.11),

Jb · Ab = ρ Vb · Ab φgiven − Γb [(φ0 − φgiven)/Δξ] (Ab · Ab)/(Ab · eξ)    (5.114)

Note that at an inflow boundary the flow enters the domain, so that

Vb · Ab < 0    (5.115)

This boundary flux may be incorporated into the cell balance, Equation 5.113, for the cell C0.

Figure 5.11: Cell Near Inflow Boundary

5.9.2 Outflow Boundaries

At outflow boundaries, we do not require the value of φ to be specified: it is determined by the physical processes in the domain. Indeed, in the exhaust manifold example, we would certainly need to know the temperature of the fluid where it entered the domain, but the temperature at the outlet would be determined as part of the solution, and thus cannot be specified a priori. At an outflow boundary, the flow leaves the domain, so that

Vb · Ab > 0    (5.116)

Further, we assume that

(Γ ∇φ)b · Ab = 0    (5.117)

That is, the diffusive component of the flux normal to the boundary is zero. The cell balance for a cell near an outflow boundary, such as that shown in Figure 5.12, is given by Equation 5.113. Using a first-order upwind scheme, we may write φb = φ0, so that the net boundary flux at the outflow boundary is

Jb · Ab = ρ Vb · Ab φ0    (5.118)

Thus, the value of φ at the boundary is determined by the interior solution, and is convected to the boundary by the exiting flow. This result makes physical sense. There are two key implicit assumptions in writing Equation 5.118. The first is that the convective flux is more important than the diffusive flux; in effect, we have assumed that the

Figure 5.12: Cell Near Outflow Boundary

local grid Peclet number at the outflow face is infinitely large. For the exhaust manifold example, this means that the flow rate is high enough that conditions downstream of our boundary do not affect the solution inside the domain. If the flow rate were not sufficiently high compared to the diffusion, and if, say, we were solving for the temperature in an exhaust manifold where the fluid was cooled by conduction through the walls, then we would expect a lower temperature downstream of our domain to cause a lower temperature inside the domain as well. The second assumption is that the flow is directed out of the domain at all points on the boundary.

It is very important to place flow boundaries at appropriate locations. Inflow boundaries should be placed at locations where we have sufficient data, either from another numerical simulation or from experimental observations. Outlet boundaries should be placed such that the conditions downstream have no influence on the solution. Consider the flow past a backward-facing step, as shown in Figure 5.13. If we choose location A as the outflow boundary, we cut across the recirculation bubble. In this situation, the flow is not directed out of the domain at all points on the boundary, and we would have to specify the φ values associated with the incoming portions of the flow for the problem to be well-posed; these are not usually available to us. Location B, located well past the recirculation zone, is a much better choice for an outflow boundary.

5.9.3 Geometric Boundaries

At geometric boundaries, such as the external walls of an exhaust manifold, the normal component of the velocity is zero:

Vb · Ab = 0    (5.119)

Figure 5.13: Recirculation At Domain Boundary

Consequently, the boundary flux Jb is purely diffusive, and is given by

Jb = −(Γ ∇φ)b    (5.120)

At geometric boundaries, we are typically given either Dirichlet, Neumann or mixed boundary conditions; we have already shown how these may be discretized in a previous chapter.

5.10 Closure

In this chapter, we have addressed the discretization of the convection-diffusion equation. We have seen that the convection term requires the evaluation of φ at the faces of the cell, for both structured and unstructured meshes. If the face value is interpolated using a central difference scheme, the solution may exhibit spatial oscillations at high Peclet numbers. The upwind scheme, on the other hand, smears discontinuities, though the solution is bounded. We have also examined a class of upwind-weighted second-order and third-order schemes; all of these higher-order schemes yield solutions which can have spatial oscillations. In developing these schemes for convection, we have assumed that the flow field is known. In the next chapter we shall turn to the task of computing the flow and pressure fields.


Chapter 6

Fluid Flow: A First Look

We have thus far considered convection and diffusion of a scalar in the presence of a known flow field. In this chapter, we shall examine the particular issues associated with the computation of the flow field itself. The primary obstacle is the fact that the pressure field is unknown; the extra equation available for its determination is the continuity equation. The calculation of the flow field is further complicated by the coupling between the momentum and continuity equations, especially for incompressible flows. We shall examine how to deal with this coupling, and we shall also see how to develop formulations suitable for unstructured meshes.

6.1 Discretization of the Momentum Equation

Consider the two-dimensional rectangular domain shown in Figure 6.1. Let us assume steady state and, for simplicity, a Newtonian fluid, though the issues raised here apply to other rheologies as well. Let us also assume for the moment that the velocity vector V and the pressure p are stored at the cell centroids. The momentum equations in the x and y directions may be written as:

∇ · (ρ V u) = ∇ · (μ ∇u) − ∇p · i + Su    (6.1)
∇ · (ρ V v) = ∇ · (μ ∇v) − ∇p · j + Sv    (6.2)

In the above equations, the stress tensor term has been split so that a portion of the normal stress appears in the diffusion term, and the rest is contained in Su and Sv. The reader may wish to confirm that

Su = ∂/∂x (μ ∂u/∂x) + ∂/∂y (μ ∂v/∂x) − (2/3) ∂/∂x (μ ∇ · V) + fu    (6.3)
Sv = ∂/∂y (μ ∂v/∂y) + ∂/∂x (μ ∂u/∂y) − (2/3) ∂/∂y (μ ∇ · V) + fv    (6.4)

Here, fu and fv contain the body force components in the x and y directions respectively. We see that Equations 6.1 and 6.2 have the same form as the general scalar transport equation, and as such we know how to discretize most of the terms: each equation contains a convection term, a diffusion term, a source term (Su or Sv) containing the body force and the remnants of the stress tensor term, and a pressure gradient term, which we have written separately. In deriving discrete equations, we integrate the governing equations over the cell volume; this results in the integration of the pressure gradient over the control volume. Let us consider the pressure gradient term. Applying the gradient theorem,

∫ΔV ∇p dV = ∮A p dA    (6.5)

Assuming that the pressure at the face centroid represents the mean value on the face, we write

∮A p dA · i = Σf pf Af · i    (6.6)

The face area vectors are

Ae = Δy i,  Aw = −Δy i,  An = Δx j,  As = −Δx j    (6.7)

Thus, for the discrete u-momentum equation, the pressure gradient term is

−Σf pf Af · i = (pw − pe) Δy    (6.8)

Similarly, for the discrete v-momentum equation, the pressure gradient term is

−Σf pf Af · j = (ps − pn) Δx    (6.9)

Completing the discretization, the discrete u- and v-momentum equations may be written as

aP uP = Σnb anb unb + (pw − pe) Δy + bu    (6.10)
aP vP = Σnb anb vnb + (ps − pn) Δx + bv    (6.11)

Figure 6.1: Orthogonal Mesh

The next step is to find the face pressures pe, pw, pn and ps. If we assume that the pressure varies linearly between cell centroids, we may write, for a uniform grid,

pe = (pP + pE)/2
pw = (pW + pP)/2
pn = (pP + pN)/2
ps = (pS + pP)/2    (6.12)

Therefore the pressure gradient terms in the momentum equations become

(pw − pe) Δy = [(pW − pE)/2] Δy
(ps − pn) Δx = [(pS − pN)/2] Δx    (6.13)

Given a pressure field, we thus know how to discretize the momentum equations. However, the pressure field is not known a priori; it must be computed as part of the solution, and the extra equation we need for its computation is the continuity equation. Let us examine its discretization next.

6.2 Discretization of the Continuity Equation

For steady flow, the continuity equation takes the form

∇ · (ρ V) = 0    (6.14)

and must interpolate the cell centroid values to the face. the discrete continuity equation for the cell P is (6.Integrating over the cell P and applying the divergence theorem. a checkerboard velocity pattern of the type shown in Figure 6. and is computed as a part of the solution. We see also that the pressure gradient term in the u-momentum equation (Equation 6. In practice.13) involves pressures that are 2∆x apart on the mesh. we may write ρ V dA ∑ ρ V f A f (6. we may assume d ρ uf e ∆y hd ρ uf w ∆y  d ρ vf n ∆x hd ρ vf s ∆x  d ρ uf e d ρ uf w d ρ vf n d ρ vf s     d ρ uf P  d ρ uf E 2 ρ u d f W  d ρ uf P 2 ρ v d f P  d ρ vf N 2 d ρ vf S  d ρ vf P 0 (6. Consequently. Instead. the momentum equations would not be able to distinguish it from a completely uniform pressure ﬁeld. the tendency towards checkerboarding manifests itself in unphysical wiggles in the velocity and pressure ﬁelds.15) z Using V  A t  f d f t ui  vj and Equations 6.16) t d f v` ∆{ ∇ ρV d z A ρ V dA t (6. The same is true for the v-momentum equation. 124 d ρ uf E ∆y d ρ uf W ∆y  d ρ vf N ∆x hd ρ vf S ∆x  0 (6.7. and does not involve the pressure at the point P. We should emphasize that these wiggles are a property of the spatial discretization and would be obtained regardless of the method used to solve the discrete equations. For a uniform grid. the checkerboarding would persist in the ﬁnal solution. perfect checkerboarding is rarely encountered because of irregularities in the mesh. This means that if a checkerboarded pressure ﬁeld were imposed on the mesh during iteration. If the momentum equations can sustain this pattern. we get z Assuming that the ρ V on the face is represented by its face centroid value. it is possible to create pressure ﬁelds whose gradients exactly compensate the checkerboarding of momentum transport implied by the checkerboarded velocity ﬁeld. 
Since the pressure gradient is not given a priori.17) 2 Gathering terms.19) .2 can be sustained by the continuity equation. the ﬁnal pressure and velocity ﬁelds would exhibit checkerboarding. it would persist at convergence. boundary conditions and physical properties. Under these circumstances. we may write the discrete continuity equation as We do not have the face velocities available to us directly. If the continuity equation were consistent with this pressure ﬁeld as well.18) We realize that continuity equation for cell P does not contain the velocity for cell P.
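The problem can be made concrete in a few lines. For a 1D row of cells carrying a checkerboarded pressure field of the type shown in Figure 6.2 (alternating 100 and 200), the linearly interpolated pressure-gradient source of Equation 6.13 vanishes identically; the values below are a sketch, not taken from the text.

```python
# A 1D "checkerboard" pressure field: 100, 200, 100, 200, ...
p = [100.0 if i % 2 == 0 else 200.0 for i in range(8)]
dy = 1.0

# With linear face interpolation, the u-momentum pressure source for an
# interior cell P reduces to (p_W - p_E) * dy / 2  (Equation 6.13)
sources = [(p[i - 1] - p[i + 1]) * dy / 2.0 for i in range(1, 7)]

# Every interior cell sees zero pressure force: the momentum equations
# cannot distinguish this field from a uniform pressure field.
assert all(s == 0.0 for s in sources)
```

Because p_W and p_E always have the same parity on the checkerboard, every interpolated difference cancels exactly, which is the 2Δx decoupling described above.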

Figure 6.2: Checkerboard Velocity Field

6.3 The Staggered Grid

A popular remedy for checkerboarding is the use of a staggered mesh. A typical staggered mesh arrangement is shown in Figure 6.3. We distinguish between the main cell or control volume and the staggered cell or control volume. The pressure is stored at the centroids of the main cells. The velocity components are stored on the faces of the main cells as shown, and are associated with the staggered cells: the u velocity is stored on the e and w faces, and the v velocity is stored on the n and s faces. Scalars such as enthalpy or species mass fraction are stored at the centroids of the main cells, as in previous chapters. All properties, such as density and Γ, are also stored at the main grid points. We note that the mesh for the u-momentum equation consists of non-overlapping cells which fill the domain completely. This is also true for the v-momentum equation and the continuity equation. The control volumes for u and v overlap each other and the main cell P.

The main cell P is used to discretize the continuity equation as before:

  (ρu)_e Δy − (ρu)_w Δy + (ρv)_n Δx − (ρv)_s Δx = 0    (6.20)

However, no further interpolation of velocity is necessary, since discrete velocities are available directly where required, without interpolating as in Equations 6.18. Thus the possibility of velocity checkerboarding is eliminated.

The staggered control volumes are used to write momentum balances. The procedure is the same as above, except that the pressure gradient term may be written directly in terms of the pressures on the faces of the momentum control volumes. Thus, for the discrete momentum equation for the velocity u_e, the pressure gradient term is

  (p_P − p_E) Δy    (6.21)

Similarly, for the velocity v_n, the pressure gradient term is

  (p_P − p_N) Δx    (6.22)

Thus, we no longer have a dependency on pressure values that are 2Δx apart, and the possibility of pressure checkerboarding is also eliminated.

6.4 Discussion

The staggered mesh provides an ingenious remedy for the checkerboarding problem by locating discrete pressures and velocities exactly where required. We note further that the mass flow rates

  F_e = (ρu)_e Δy
  F_w = (ρu)_w Δy
  F_n = (ρv)_n Δx
  F_s = (ρv)_s Δx    (6.23)

are available at the main cell faces, where they are needed for the discretization of the convective terms in scalar transport equations.

The u_e momentum equation may be written as

  a_e u_e = Σ_nb a_nb u_nb + Δy (p_P − p_E) + b_e    (6.24)

Here, nb refers to the momentum control volume neighbors. For the u_e momentum equation, the neighbors nb involve the u velocities at the points ee, nne, w and sse shown in Figure 6.4. Similarly, the equation for v_n may be written as

  a_n v_n = Σ_nb a_nb v_nb + Δx (p_P − p_N) + b_n    (6.25)

A similar stencil influences v_n. The discrete continuity equation is given by Equation 6.20, as before. At this point, our discretization of the continuity and momentum equations is essentially complete.

Figure 6.3: Staggered Mesh

We now turn to issues related to the solution of these equations.

Figure 6.4: Neighbors for u_e Momentum Control Volume

6.5 Solution Methods

Thus far, we have examined issues related to the discretization of the continuity and momentum equations. It is not yet clear how this equation set is to be solved to obtain u, v and p. (Note that because of grid staggering, the coefficients a_P and a_nb are different for the u and v equations.) We turn to this matter next.

The discretization affects the accuracy of the final answer we obtain. The solution path determines whether we obtain a solution at all, and how much computer time and memory we require to obtain it. Though we have not emphasized this, our solution philosophy thus far has been to solve our discrete equations iteratively. For the purposes of this book, the final solution we obtain is considered independent of the path used to obtain it, and dependent only on the discretization. (This is not true in general for non-linear problems, where the solution path may determine which of several possible solutions is captured.)

When solving multiple differential equations, a convenient way is to solve them sequentially. For example, when computing the transport of N_s chemical species, one option is to employ a solution loop of the type:

1. For species i = 1, ..., N_s:
   (a) Discretize the governing equation for species i, assuming prevailing values to resolve non-linearities and dependences on the mass fractions of the other species j.
   (b) Solve for the mass fractions of species i at the cell centroids.
2. If all species mass fractions have converged, stop. Else, go to step 1.

That is. Solve for mass fractions of species i at cell centroids. especially for non-linear problems where the M matrix (or a related matrix) would have to be recomputed every iteration. we may wish to solve the entire problem directly using a linear system of the type Mφ b (6. In order to solve a set of discrete equations iteratively. For compressible ﬂows. For example. The density does appear in the continuity equation. It is this choice that forces us to associate each governing differential equation with a solution variable. If we intend to use the continuity equation to solve for pressure. iterative methods. sequential iterative solution procedures are frequently adopted because of low storage requirements and reasonable convergence rate. If we were to use a direct method. This practice is especially popular in the compressible ﬂow community. Thus. pressure and density are related through an equation of state. If we are not worried about storage or computational time. There are a number of methods in the literature [5] which use the density as a primary variable rather than pressure. we use the discrete energy equation to solve for the temperature. where N is the number of cells and Ns is the number of species.and density-based schemes is directly tied to our decision to solve our governing equations sequentially and iteratively. It is possible to ﬁnd the density using the continuity equation. They are very popular in the incompressible ﬂow community. and b is a column vector also of size N Ns . It is important to realize that the necessity for pressure. If all species mass fractions have converged. it is necessary to ﬁnd a way to introduce the pressure into the continuity equation.26) where M is a coefﬁcient matrix of size N Ns N Ns . the density is unrelated to the pressure and cannot be used instead. and to deduce the pressure from it for use in the momentum equations. 
However there is a difﬁculty associated with the sequential solution of the continuity and momentum equations for incompressible ﬂows. else go to step 1. For incompressible ﬂows. φ is a column vector of size N N s . Methods which use pressure as the solution variable are called pressure-based methods. our intent is to solve for the entire set of N Ns species mass fractions in the calculation domain simultaneously. Similarly. Conversely. this type of simultaneous solution is still not affordable. but for incompressible ﬂows. pressure-based methods have also been developed which may be used for compressible ﬂows [7]. we intend to use the discrete u-momentum equation to solve for the u-velocity. a class of methods called the artiﬁcial compressibility methods have been developed which seek to ascribe a small (but ﬁnite) compressibility to incompressible ﬂows in order to facilitate numerical solution through density-based methods [6]. if we want to use sequential. and solve for the discrete k k d k  fk d k f k 128 . Such methods are called density-based methods. it is necessary to associate the discrete set with a particular variable. For most practical applications. Other alternatives are possible. For practical CFD problems. we encounter a problem for incompressible ﬂows because the pressure does not appear in the continuity equation directly. stop. assuming prevailing values to resolve non-linearities and dependences on the mass fractions of species j 2.
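The sequential philosophy can be illustrated with a toy example: two coupled "species" equations at a single cell, each solved in turn with the other held at its prevailing value, repeating until both have converged. The coupling coefficients below are hypothetical and chosen only so that the outer iteration contracts.

```python
# Sequential (outer-iteration) solution of two coupled equations.
# Each "solve" freezes the other variable at its prevailing value,
# as in the species loop above.  Coefficients are hypothetical.
def solve_sequential(tol=1e-12, max_outer=200):
    y1, y2 = 0.0, 0.0                # initial guess
    for _ in range(max_outer):
        y1_new = 0.5 * y2 + 0.2      # "solve" equation 1, y2 frozen
        y2_new = 0.3 * y1_new + 0.1  # "solve" equation 2, y1 frozen
        if abs(y1_new - y1) < tol and abs(y2_new - y2) < tol:
            return y1_new, y2_new
        y1, y2 = y1_new, y2_new
    return y1, y2

y1, y2 = solve_sequential()
# The converged values satisfy both equations simultaneously
assert abs(y1 - (0.5 * y2 + 0.2)) < 1e-10
assert abs(y2 - (0.3 * y1 + 0.1)) < 1e-10
```

A direct method would instead assemble both equations into one matrix system (the M φ = b of Equation 6.26) and solve it in a single step, at the cost of storing and factoring the larger matrix.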

For practical CFD problems, direct solution of the coupled continuity and momentum equations (three variables, u, v and p, over N cells, using a domain-wide matrix of size 3N × 3N) is still out of reach for most problems of practical interest, even with the power of today's computers. Other methods, including local direct solutions at each cell coupled to an iterative sweep, have been proposed [8], but are not pursued here. In this chapter, we shall concentrate on developing pressure-based methods suitable for incompressible flows, but which can be extended to compressible flows as well. We emphasize that these methods define the path to solution, and not the discretization technique.

6.6 The SIMPLE Algorithm

The SIMPLE (Semi-Implicit Method for Pressure-Linked Equations) algorithm and its variants are a set of pressure-based methods widely used in the incompressible flow community [9]. These methods seek to create an equation for the pressure by using the discrete momentum equations. The primary idea behind SIMPLE is to create a discrete equation for the pressure (or, alternatively, a related quantity called the pressure correction) from the discrete continuity equation (Equation 6.20). Since the continuity equation contains discrete face velocities, we need some way to relate these discrete velocities to the discrete pressure field. The SIMPLE algorithm uses the discrete momentum equations to derive this connection.

Let p* represent the discrete pressure field which is used in the solution of the momentum equations. If the pressure field p* is only a guess or a prevailing iterate, the discrete u* and v* fields obtained by solving the momentum equations

  a_e u*_e = Σ_nb a_nb u*_nb + Δy (p*_P − p*_E) + b_e
  a_n v*_n = Σ_nb a_nb v*_nb + Δx (p*_P − p*_N) + b_n    (6.27)

will not, in general, satisfy the discrete continuity equation (Equation 6.20). Similar expressions may be written for u*_w and v*_s. We propose a correction to the starred velocity field such that the corrected values satisfy Equation 6.20:

  u = u* + u'
  v = v* + v'    (6.28)

Correspondingly, we wish to correct the existing pressure field p* with

  p = p* + p'    (6.29)

If we subtract Equations 6.27 from Equations 6.24 and 6.25, we obtain

  a_e u'_e = Σ_nb a_nb u'_nb + Δy (p'_P − p'_E)
  a_n v'_n = Σ_nb a_nb v'_nb + Δx (p'_P − p'_N)    (6.30)

Similar expressions may be written for u'_w and v'_s. Equations 6.30 represent the dependence of the velocity corrections u' and v' on the pressure corrections p'. In effect, they tell us how the velocity field will respond when the pressure gradient is increased or decreased.

We now make an important simplification. We approximate Equations 6.30 as

  a_e u'_e = Δy (p'_P − p'_E)
  a_n v'_n = Δx (p'_P − p'_N)    (6.31)

Defining

  d_e = Δy / a_e,  d_n = Δx / a_n    (6.32)

we write Equations 6.31 as

  u'_e = d_e (p'_P − p'_E)
  v'_n = d_n (p'_P − p'_N)    (6.33)

so that

  u_e = u*_e + d_e (p'_P − p'_E)
  v_n = v*_n + d_n (p'_P − p'_N)    (6.34)

Further, using Equations 6.23, we may write the face flow rates obtained after the solution of the momentum equations as

  F*_e = ρ_e u*_e Δy
  F*_n = ρ_n v*_n Δx    (6.35)

The corrected face flow rates are given by

  F_e = F*_e + F'_e
  F_n = F*_n + F'_n    (6.36)

with

  F'_e = ρ_e d_e Δy (p'_P − p'_E)
  F'_n = ρ_n d_n Δx (p'_P − p'_N)    (6.37)

Similar expressions may be written for F_w and F_s. Thus far, we have derived expressions describing how the face flow rates vary if the pressure difference across the face is changed. We now turn to the task of creating an equation for the pressure from the discrete continuity equation.

6.6.1 The Pressure Correction Equation

We now consider the discrete continuity equation. The starred velocities u* and v*, obtained by solving the momentum equations using the prevailing pressure field p*, do not satisfy the discrete continuity equation. Thus

  (ρu*)_e Δy − (ρu*)_w Δy + (ρv*)_n Δx − (ρv*)_s Δx ≠ 0

or, in terms of F*, we have

  F*_e − F*_w + F*_n − F*_s ≠ 0    (6.38)

We require our corrected velocities, given by Equations 6.28, to satisfy continuity. Alternately, the corrected face flow rates, given by Equations 6.36, must satisfy continuity. Thus

  F_e − F_w + F_n − F_s = 0    (6.39)

or, using Equations 6.36,

  (F*_e + F'_e) − (F*_w + F'_w) + (F*_n + F'_n) − (F*_s + F'_s) = 0    (6.40)

Using Equations 6.37 for the flow rate corrections,

  F'_e = ρ_e d_e Δy (p'_P − p'_E),  F'_w = ρ_w d_w Δy (p'_W − p'_P)
  F'_n = ρ_n d_n Δx (p'_P − p'_N),  F'_s = ρ_s d_s Δx (p'_S − p'_P)    (6.41)

and rearranging terms, we may write an equation for the pressure correction p'_P. Defining

  a_E = ρ_e d_e Δy
  a_W = ρ_w d_w Δy
  a_N = ρ_n d_n Δx
  a_S = ρ_s d_s Δx    (6.42)

and

  b = F*_w − F*_e + F*_s − F*_n    (6.43)

the pressure correction equation takes the form

  a_P p'_P = Σ_nb a_nb p'_nb + b,  with a_P = Σ_nb a_nb    (6.44)

We note that the source term b in the pressure correction equation is the mass source for the cell P. If the face flow rates F* satisfy the discrete continuity equation (i.e., b is zero), we see that p' = constant satisfies Equation 6.44. Thus, the pressure correction equation yields non-constant corrections only as long as the velocity fields produced by the momentum equations do not satisfy continuity. Once these velocity fields satisfy the discrete continuity equation, the pressure correction equation yields a constant correction. In this limit, differences of p' are zero, and no velocity corrections are obtained. If the constant correction value is chosen to be zero (we will see why this is possible in a later section), the pressure is not corrected either. Thus, convergence is obtained once the velocities predicted by the momentum equations satisfy the continuity equation.
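A minimal sketch of assembling these pressure-correction coefficients for one interior cell, with hypothetical face values of ρ, d and the starred flow rates (uniform mesh, Δx = Δy = 1):

```python
# Assemble the pressure-correction coefficients and mass source for
# one interior cell.  All face data below are hypothetical.
def p_prime_coeffs(rho, d, dx, dy, F_star):
    """rho, d: per-face dicts (e, w, n, s); F_star: starred flow rates."""
    a = {
        "E": rho["e"] * d["e"] * dy,
        "W": rho["w"] * d["w"] * dy,
        "N": rho["n"] * d["n"] * dx,
        "S": rho["s"] * d["s"] * dx,
    }
    aP = sum(a.values())                      # a_P = sum of neighbor coefficients
    # b is the mass imbalance left by the starred velocity field
    b = F_star["w"] - F_star["e"] + F_star["s"] - F_star["n"]
    return a, aP, b

rho = dict(e=1.0, w=1.0, n=1.0, s=1.0)
d = dict(e=0.2, w=0.2, n=0.2, s=0.2)
a, aP, b = p_prime_coeffs(rho, d, dx=1.0, dy=1.0,
                          F_star=dict(e=1.1, w=1.0, n=0.95, s=1.0))

assert abs(aP - sum(a.values())) < 1e-12
assert abs(b - (-0.05)) < 1e-12   # nonzero b: starred field violates continuity
```

Note that b vanishes exactly when the starred flow rates already balance, in which case the equation admits a constant p' solution, as discussed above.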

6.6.2 Overall Algorithm
The overall procedure for the SIMPLE algorithm is the following:

1. Guess the pressure field p*.
2. Discretize and solve the momentum equations using the guessed value p* for the pressure source terms. This yields the u* and v* fields.
3. Find the mass flow rates F* using the starred velocity fields. Hence find the pressure correction source term b.
4. Discretize and solve the pressure correction equation, and obtain the p' field.
5. Correct the pressure field using Equation 6.29 and the velocities using Equations 6.28. The corrected velocity field satisfies the discrete continuity equation exactly.
6. Solve the discrete equations for any scalars φ if desired, using the continuity-satisfying velocity field for the convection terms.
7. If the solution is converged, stop. Else, go to step 2.













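Steps 3–5 of this loop can be sketched on a one-dimensional row of four cells with hypothetical starred flow rates. The pressure-correction equation is solved here by Gauss–Seidel sweeps, the product ρ_f d_f A_f is lumped into a single constant on every interior face, and the two boundary flow rates are taken as given (so they are not corrected); this is a sketch of the correction mechanics, not the text's own code.

```python
# Steps 3-5 of SIMPLE on a 1D row of 4 cells (5 faces, 0..4).
rhod = 2.0                            # lumped rho_f * d_f * A_f, hypothetical
F = [1.0, 1.2, 0.9, 1.05, 1.0]        # starred flow rates; faces 0 and 4 given
b = [F[i] - F[i + 1] for i in range(4)]   # mass imbalance per cell

p = [0.0, 0.0, 0.0, 0.0]              # p'; p[0] fixed to set the level
for _ in range(500):                  # Gauss-Seidel sweeps on cells 1..3
    p[1] = (rhod * p[0] + rhod * p[2] + b[1]) / (2 * rhod)
    p[2] = (rhod * p[1] + rhod * p[3] + b[2]) / (2 * rhod)
    p[3] = (rhod * p[2] + b[3]) / rhod     # east face of cell 3 is a boundary

# Correct interior face flow rates; boundary faces stay as given
Fc = F[:]
for f in (1, 2, 3):
    Fc[f] = F[f] + rhod * (p[f - 1] - p[f])

# Every cell now satisfies discrete continuity
assert all(abs(Fc[i] - Fc[i + 1]) < 1e-9 for i in range(4))
```

The corrected field is uniform here because the boundary flow rates balance; cell 0 is satisfied automatically once cells 1–3 are, by overall mass conservation, which is why one p' value can be fixed arbitrarily.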
6.6.3 Discussion
The pressure correction equation is a vehicle by which the velocity and pressure fields are nudged towards a solution that satisfies both the discrete continuity and momentum equations. If we start with an arbitrary guess of pressure and solve the momentum equations, there is no guarantee that the resulting velocity field will satisfy the continuity equation. Indeed, the b term in the pressure correction equation is a measure of the resulting mass imbalance. The pressure correction equation corrects the pressure and velocity fields so as to annihilate this mass imbalance. Thus, once we solve the pressure correction equation and correct the u* and v* fields using Equations 6.28, the corrected velocity field satisfies the discrete continuity equation exactly. It will no longer satisfy the discrete momentum equations, however, and the iteration between the two equations continues until the pressure and velocity fields satisfy both sets of equations.

It is important to realize that because we are solving for the pressure correction rather than the pressure itself, the omission of the Σ_nb a_nb u'_nb and Σ_nb a_nb v'_nb terms in deriving the pressure correction equation is of no consequence as far as the final answers are concerned. This is easily seen if we consider what happens in the final iteration. In the final iteration, u*, v* and p* satisfy the momentum equations, and the b term in the p' equation is zero. As discussed earlier, the p' equation then does not generate any corrections, and u = u*, v = v* and p = p* hold true. Thus, the p' equation plays no role in the final iteration. Only the discrete momentum equations and the discrete continuity equation (embodied in the b term) determine the final answer.

The dropping of the Σ_nb a_nb u'_nb and Σ_nb a_nb v'_nb terms does have consequences for the rate of convergence, however. The u-velocity correction in Equation 6.30, for example, is a function of both the velocity correction term Σ_nb a_nb u'_nb and the pressure











         










correction term. If we drop the Σ_nb a_nb u'_nb term, we place the entire burden of correcting the u-velocity upon the pressure correction. The resulting corrected velocity will satisfy the continuity equation all the same, but the resulting pressure is over-corrected. Indeed, because of this over-correction of pressure, the SIMPLE algorithm is prone to divergence unless underrelaxation is used. We underrelax the momentum equations using underrelaxation factors α_u and α_v in the manner outlined in previous chapters. In addition, the pressure correction is not applied in its entirety as in Equation 6.29. Instead, only a part of the pressure correction is applied:

  p = p* + α_p p'    (6.45)

The underrelaxation factor α_p is chosen to be less than one in order to compensate for the over-correction of pressure. It is important to emphasize that we do not underrelax the velocity corrections in the manner of Equation 6.45. The entire point of the pressure correction procedure is to create velocity corrections such that the corrected velocity fields satisfy continuity; underrelaxing the velocity corrections would destroy this feature. Thus, the SIMPLE algorithm approaches convergence through a set of intermediate continuity-satisfying fields. The computation of transported scalars such as enthalpy or species mass fraction is therefore done just after the velocity correction step (step 6). This ensures that the face flow rates used in discretizing the φ transport equation are exactly continuity-satisfying at every iteration.

6.6.4 Boundary Conditions
We have already dealt with the boundary conditions for scalar transport in the previous chapter; these apply to the momentum equations as well. We turn now to boundary conditions for the pressure. Two common boundary conditions are considered here: given normal velocity, and given static pressure. A third condition, given stagnation pressure and flow angle, is also encountered, but we will not address it here.

At a given-velocity boundary, we are given the normal component of the velocity vector V_b at the boundary. This type of boundary could involve inflow or outflow boundaries, or boundaries normal to which there is no flow, such as walls. Consider the near-boundary cell shown in Figure 6.5. Our objective is to derive the pressure correction equation for the cell C0. Integrating the continuity equation for the cell C0 in the usual fashion, we have

  F_e − F_b + F_n − F_s = 0    (6.46)

We know how to write the interior face flow rates F_e, F_n and F_s in terms of the pressure corrections. For F_b, no such expansion in terms of the pressure correction is necessary, because F_b is known, and is given by

  F_b = ρ_b u_b Δy

This known flow rate is incorporated directly into the mass balance equation for the cell C0. When all boundaries are given-velocity boundaries, we must ensure that the specified

boundary velocities satisfy overall mass balance over the entire computational domain; we must check the specified boundary velocities, and impose a mass balance on them, before the computation is begun. Otherwise the problem would not be well-posed.

Figure 6.5: Near-Boundary Cell for Pressure

At a given-pressure boundary, the static pressure p is given on the boundary, and the pressure correction p'_b is set equal to zero there.

6.6.5 Pressure Level and Incompressibility

For incompressible flows, where the density is not a function of pressure, the level of pressure in the domain is not set by the governing equations. The reader may verify that p and p + C are both solutions to the governing differential equations: differences in pressure are unique, but the individual pressure values themselves are not. This is in contrast to compressible flows, where the density is a function of pressure, and the pressure is made unique through the equation of state.

For incompressible flows, it is common to encounter situations which are best modeled with given-velocity boundary conditions on all boundaries. We are given velocity boundary conditions that are continuity-satisfying in an overall sense, but the pressure is not specified anywhere. If we divide the computational domain into N cells, only N − 1 unique continuity equations result; therefore we do not have enough equations for the N pressure (or pressure correction) unknowns. This situation may be remedied by setting the pressure at one cell centroid arbitrarily; alternatively, we may set p' = 0 in one cell in the domain. (Even if we did not set p' to zero, the result would still be converged; the pressure level would rise by a constant every iteration, but this is acceptable since only differences of pressure are relevant when all boundaries are given-velocity boundaries.) We should emphasize that this situation only occurs if all boundaries are given-velocity boundaries. If one or more given-pressure boundaries are present, the pressure level is set by the boundary values, and the problem does not arise.
From a computational viewpoint, we may interpret this situation in the following way. Let us go back to the pressure correction equation, Equation 6.44, and its behavior at convergence. We have said that the pressure correction becomes a constant at convergence, which we are free to set equal to zero. With all given-velocity boundaries, the specified boundary flow rates satisfy overall mass balance, so at convergence the mass source b vanishes in every cell; the constant value predicted by the pressure correction will thus come out to be zero, and the pressure p* sees no corrections at convergence. If one or more given-pressure boundaries are present, p' = 0 is set at at least one boundary face; in this case also, the pressure correction predicts zero corrections at convergence.

6.7 The SIMPLER Algorithm

The SIMPLE algorithm has been widely used in the literature. Nevertheless, there have been a number of attempts to accelerate its convergence, and one such algorithm is SIMPLER (SIMPLE-Revised) [10]. One of the drawbacks of the SIMPLE algorithm is the approximate nature of the pressure correction equation. Because the Σ_nb a_nb u'_nb and Σ_nb a_nb v'_nb terms are dropped in its derivation, the pressure corrections resulting from it are too large, and must be underrelaxed; since optimal underrelaxation values are problem-dependent and rarely known a priori, this slows down convergence. The velocity corrections, on the other hand, are good: it is appropriate to use the pressure correction equation to correct the velocities, and so guarantee that the corrected velocities satisfy the continuity equation.

A good way of understanding this is to consider what the SIMPLE algorithm does when we know the velocity field, but do not know the pressure field. A good guess of the velocity field is of no use when using the SIMPLE algorithm unless it is accompanied by a good pressure guess: if we solve the momentum equations with a guessed pressure field p*, we destroy the original (good) velocity field, and then embark on a long iterative process to recover it. We would prefer an algorithm which can recover the correct pressure field immediately if the exact velocity field is known. With SIMPLER, we retain the pressure correction equation for correcting the velocities, while finding another way to compute the pressure.

We derive the pressure equation by rearranging the discrete momentum equations as follows:

  u_e = (Σ_nb a_nb u_nb + b_e) / a_e + d_e (p_P − p_E)    (6.47)
  v_n = (Σ_nb a_nb v_nb + b_n) / a_n + d_n (p_P − p_N)    (6.48)

By defining

  û_e = (Σ_nb a_nb u_nb + b_e) / a_e
  v̂_n = (Σ_nb a_nb v_nb + b_n) / a_n    (6.49)

we may write

  u_e = û_e + d_e (p_P − p_E)
  v_n = v̂_n + d_n (p_P − p_N)    (6.50)

Furthermore, we may define the flow rates

  F̂_e = ρ_e û_e Δy
  F̂_n = ρ_n v̂_n Δx    (6.51)

so that

  F_e = F̂_e + ρ_e d_e Δy (p_P − p_E)
  F_n = F̂_n + ρ_n d_n Δx (p_P − p_N)    (6.52)

Substituting Equations 6.52 into the discrete continuity equation (Equation 6.20), we obtain the following equation for the pressure:

  a_P p_P = Σ_nb a_nb p_nb + b    (6.53)

where

  a_E = ρ_e d_e Δy
  a_W = ρ_w d_w Δy
  a_N = ρ_n d_n Δx
  a_S = ρ_s d_s Δx
  a_P = Σ_nb a_nb
  b = F̂_w − F̂_e + F̂_s − F̂_n    (6.54)

The form of the pressure equation is identical to that of the pressure correction equation, and the a_P and a_nb coefficients are identical to those governing the pressure correction. The b term, however, is different, and involves the velocities û and v̂ rather than the prevailing velocities u* and v*. We should emphasize that although this b term looks similar in form to that in the p' equation, it does not represent the mass imbalance. Another important difference is that no approximations have been made in deriving the pressure equation. Thus, if the velocity field is exact, the correct pressure field will be recovered.

6.7.1 Overall Algorithm

The SIMPLER solution loop takes the following form:

1. Guess the velocity field.
2. Compute û and v̂ from Equations 6.49.
3. Compute the flow rates F̂, and hence the source term b in the pressure equation.
4. Solve the pressure equation (Equation 6.53) using the guessed velocity field, and obtain the pressure.
5. Solve the momentum equations using the pressure field just computed, to obtain u* and v*.
6. Compute the mass source term b in the pressure correction equation, and solve the pressure correction equation to obtain p'.
7. Correct u* and v* using Equations 6.28, to obtain u and v. Do not correct the pressure!

8. Solve for any scalars φ of interest.
9. Check for convergence. If converged, stop; else, go to step 2.

Because the pressure correction p' is not used to correct the pressure, no underrelaxation of the pressure correction in the manner of Equation 6.45 is required. At this point in the loop we have a continuity-satisfying velocity field. The momentum equations must still be underrelaxed, to account for non-linearities and also for the sequential nature of the solution procedure. The pressure equation may itself be underrelaxed if desired, but this is not usually required.

6.7.2 Discussion

The SIMPLER algorithm has been shown to perform better than SIMPLE. This is primarily because the SIMPLER algorithm does not require a good pressure guess (which is difficult to provide in any case); it generates the pressure field from a guess of the velocity field, which is easier to provide. Thus, the SIMPLER algorithm does not share the SIMPLE algorithm's tendency to destroy a good velocity field guess.

On the other hand, the SIMPLER algorithm solves for two pressure variables, the actual pressure and the pressure correction; i.e., it uses one extra equation, and therefore involves more computational effort. The pressure correction solver in the SIMPLE loop typically accounts for half the computational effort during an iteration. This is because the pressure correction equation is frequently solved with given-velocity boundary conditions, and the lack of Dirichlet boundary conditions for p' causes linear solvers to converge slowly; the same is true of the pressure equation. Thus, adding an extra pressure equation in the SIMPLER algorithm increases the computational effort by about 50%. Furthermore, the momentum equation coefficients are needed in two places, to find û and v̂ for the pressure equation, and later, to solve the momentum equations. To avoid computing them twice, storage for each of the coefficient sets is required. This is also true for the pressure coefficients.

6.8 The SIMPLEC Algorithm

The SIMPLE-Corrected (SIMPLEC) algorithm [11] attempts to cure the primary failing of the SIMPLE algorithm, i.e., the neglect of the Σ_nb a_nb u'_nb and Σ_nb a_nb v'_nb terms in writing Equations 6.33. Instead of ignoring these terms completely, the SIMPLEC algorithm approximates the neighbor corrections by using the cell correction:

  Σ_nb a_nb u'_nb ≈ u'_e Σ_nb a_nb
  Σ_nb a_nb v'_nb ≈ v'_n Σ_nb a_nb    (6.55)

Thus, the velocity corrections take the form

  u'_e = [Δy / (a_e − Σ_nb a_nb)] (p'_P − p'_E)
  v'_n = [Δx / (a_n − Σ_nb a_nb)] (p'_P − p'_N)    (6.56)

Redefining d_e and d_n as

  d_e = Δy / (a_e − Σ_nb a_nb)
  d_n = Δx / (a_n − Σ_nb a_nb)    (6.57)

the rest of the procedure is the same as that for the SIMPLE algorithm, except that the pressure correction need not be underrelaxed as in Equation 6.45. We should note, however, that the d_e and d_n definitions require the momentum equations to be underrelaxed to prevent the denominator from going to zero. The SIMPLEC algorithm has been shown to converge faster than SIMPLE, and it does not have the computational overhead of the SIMPLER algorithm. However, it does share with SIMPLE the property that a good velocity field guess would be destroyed in the initial iterations unless accompanied by a good guess of the pressure field.

6.9 Optimal Underrelaxation for SIMPLE

It is possible to make SIMPLE duplicate the speed-up exhibited by SIMPLEC through a judicious choice of underrelaxation factors. The SIMPLE procedure employs a pressure correction of the type

  p = p* + α_p p'    (6.58)

whereas the SIMPLEC algorithm employs a pressure correction of the type

  p = p* + p'    (6.59)

Rather than solve for p', let us make the SIMPLE algorithm solve for a variable p̂ defined as

  p̂ = α_p p'    (6.60)

We may think of SIMPLE as then solving for p̂ rather than p'. Now let us examine the p' (equivalently, p̂) equation solved by SIMPLEC. It has the form

  a_P p̂_P = Σ_nb a_nb p̂_nb + b    (6.61)

Now we turn to the SIMPLE algorithm. If we cast its p' equation in p̂ form, we get

  a_P p̂_P = Σ_nb a_nb p̂_nb + α_p b    (6.62)

Let us assume for simplicity that the momentum equation coefficients have no S_P terms, so that

  a_e = Σ_nb a_nb / α_u    (6.63)

Dividing Equation 6.62 through by α_p, the pressure correction coefficients may be written as

  a_nb = (α_u / α_p) ρ Δy² / Σ_nb a_nb    (6.64)

For SIMPLEC, Equation 6.61 has the form

  a_P p̂_P = Σ_nb a_nb p̂_nb + b    (6.65)

with the coefficients a_nb having the form

  a_nb = ρ Δy² / (a_e − Σ_nb a_nb) = [α_u / (1 − α_u)] ρ Δy² / Σ_nb a_nb    (6.66)

If we require the a_nb coefficient in Equation 6.64 to be equal to that in Equation 6.66, we conclude that

  α_p = 1 − α_u    (6.67)

Thus, we see that if we use Equation 6.67 in underrelaxing the momentum equations and the pressure correction, we would essentially reproduce the iterations computed by SIMPLEC.

6.10 Discussion

We have seen three different possibilities for a sequential solution of the continuity and momentum equations using a pressure-based scheme. These are, however, variations on the basic theme of sequential solution, and the same basic advantages and disadvantages obtain. A number of other variants and improvements of the basic SIMPLE procedure are also available in the literature. All these procedures have the advantage of low storage and reasonably good performance over a broad range of problems, and for many problems the user would not find much difference in the performance of the different SIMPLE variants. However, they are known to require a large number of iterations in problems with large body forces resulting from buoyancy, swirl and other agents, and in strongly non-linear cases divergence may occur despite underrelaxation. Many of these difficulties are a result of the sequential nature of the momentum and continuity solutions; the two equations are strongly coupled through the pressure gradient term when strong body forces are present. A variety of coupled solvers have been developed which seek to replace the sequential solution of the momentum and continuity equations through tighter coupling of the two equations (see [8] for example). These procedures usually incur a storage penalty, but exhibit substantial convergence acceleration. This continues to be an active area of research.
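The two d_e definitions and the optimal under-relaxation relation can be checked with a small sketch; all coefficient values below are hypothetical, chosen only so that a_e > Σ a_nb holds.

```python
# SIMPLE vs. SIMPLEC face coefficient d_e, and the optimal SIMPLE
# pressure under-relaxation factor.  Coefficients are hypothetical.
A_e = 1.0                          # face area (dy)
alpha_u = 0.8                      # momentum under-relaxation factor
sum_anb = 8.0                      # sum of neighbor momentum coefficients
a_e = sum_anb / alpha_u            # no S_P terms assumed -> a_e = 10

d_simple = A_e / a_e               # SIMPLE: neighbor corrections dropped
d_simplec = A_e / (a_e - sum_anb)  # SIMPLEC: neighbor corrections approximated

alpha_p = 1.0 - alpha_u            # optimal pressure under-relaxation (Eq. 6.67)

# SIMPLEC's larger d gives larger velocity corrections for the same p',
# which is why its pressure correction needs no further under-relaxation.
assert d_simplec > d_simple
```

Note how the SIMPLEC denominator a_e − Σ a_nb shrinks as α_u approaches one, which is the divide-by-zero hazard mentioned above.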

to replace the sequential solution of the momentum and continuity equations with a tighter coupling of the two equations (see [8], for example). These procedures usually incur a storage penalty, but exhibit substantial convergence acceleration. This area continues to be an active area of research.

6.11 Non-Orthogonal Structured Meshes

When structured non-orthogonal meshes are used, great care must be taken in the choice of velocity components. In principle, the staggered mesh procedures described above may be used; however, it is not possible to use Cartesian velocity components with a staggered mesh. Consider the cells P and E in Figure 6.6(a). Here we consider a 90 degree elbow. If we store the u and v velocities on the faces as shown, similar to our practice on regular meshes, it is possible to encounter cells where the stored face velocity component is tangential to the face, making it difficult to discretize the continuity equation correctly. (Other options, such as storing both components of the Cartesian velocity at all faces, have also been tried; though formulations of this type can be worked out, the result is not very elegant. For example, overlapping momentum control volumes result.)

Figure 6.6: Velocity Systems for Structured Body-fitted Meshes: (a) Cartesian, and (b) Grid-Following

To overcome this, it is necessary to use grid-following velocities, i.e., velocities whose orientation is defined with respect to the local face, as illustrated in Figure 6.6(b). On each face we use the velocity component normal to the face in question. These velocity components are guaranteed never to become tangential to the face, because they turn as the mesh turns. Any velocity set at a fixed (non-zero) angle to the local face would do as well; for example, the component along the line joining the cell centroids could be stored instead.

Two primary grid-following, or curvilinear, velocity systems have been used in the literature for this purpose: the covariant and the contravariant velocity systems. In Figure 6.7(a), the eξ vector is aligned along the line joining the cell centroids, and the eη vector is aligned tangential to the face. The vectors eξ and eη are called the covariant basis vectors, and the velocity components along these basis directions are called the covariant velocity components.

Alternatively, contravariant basis vectors may be used, as shown in Figure 6.7(b). Here, the basis vector eξ is perpendicular to the face, and the basis vector eη is perpendicular to the line joining the cell centroids. The contravariant velocities are aligned along these basis directions. Each face stores one contravariant component, the component perpendicular to the face.

Figure 6.7: Curvilinear Velocity Components: (a) Covariant, and (b) Contravariant

The SIMPLE family of algorithms may be developed for a staggered mesh discretization using either of these two velocity systems. Karki and Patankar [7], for example, have used a covariant velocity formulation to develop a staggered mesh SIMPLE algorithm for body-fitted structured meshes.

However, curvilinear velocities are not particularly easy entities to deal with. When the momentum equations are written for the curvilinear velocity components, additional curvature terms appear, just as they do in cylindrical-polar or spherical coordinates. These terms are extremely cumbersome to derive in general curvilinear coordinates. Furthermore, since the covariant and contravariant basis vectors change direction with respect to an invariant Cartesian system, the momentum equations written in these curvilinear directions cannot be cast in conservative form. Since Newton's laws of motion conserve linear momentum, the momentum equations written in Cartesian coordinates may always be cast in conservative form; momentum in the curvilinear directions, however, is not conserved. Thus we are not guaranteed conservation, though researchers have proposed clever cures for this problem.

In addition, velocity gradients are required in other equations, for example in the production terms of turbulence models, or for the strain rates in non-Newtonian rheologies. These quantities are extremely cumbersome to derive in curvilinear coordinates. To overcome this, researchers have computed the flow field in curvilinear velocity components but stored both the Cartesian and the curvilinear velocities, using the Cartesian velocity components for all such manipulations (see Karki and Patankar [7], for example). Though these efforts have generally been successful, at least for laminar flow problems, curvilinear velocities remain cumbersome and are difficult to interpret and visualize in complex domains.

Figure 6.8: Storage Arrangement for Node-Based Unequal-Order Finite Volume Scheme (pressure and velocity storage locations)

6.12 Unstructured Meshes

For unstructured meshes, the staggered mesh discretization method we have derived so far is almost entirely useless, since no obvious mesh staggering is possible; even if we could use curvilinear velocity components, it is not easy to identify a workable staggered mesh. In the finite element community, a class of unequal-order methods has been developed which interpolates pressure to lower order than velocity. In the work by Baliga and Patankar, for example, triangular macro-elements are employed, as shown in Figure 6.8. The macro-element is subdivided into 4 sub-elements. Pressure is stored at the nodes of the macro-element, and velocity is stored on the vertices of the sub-elements. This arrangement has been shown to prevent checkerboarding. Node-based finite-volume schemes [12] have been developed which employ the same unequal-order idea. However, pressure is resolved on only one-fourth the mesh size of the velocity in two dimensions, prompting concerns about accuracy. For cell-based schemes, there is no obvious counterpart of this unequal-order arrangement.

Because of these difficulties, efforts have been underway to do away with staggering altogether and to develop techniques known variously as non-staggered, or co-located, methods. These techniques store pressure and velocity at the same physical location and attempt to solve the problem of checkerboarding through clever interpolation techniques. They also do away with the necessity for curvilinear velocity formulations, and use Cartesian velocities in the development. We will address this class of techniques in the next chapter. We should emphasize that the material to be presented in the next chapter changes only the discretization practice; we may still use sequential and iterative techniques such as SIMPLE to solve the resulting set of discrete equations.

6.13 Closure

In this chapter, we have developed staggered mesh based discretization techniques for the continuity and momentum equations. Staggering was shown to be necessary to prevent checkerboarding in the velocity and pressure fields. We then developed sequential and iterative pressure-based techniques called SIMPLE, SIMPLER and SIMPLEC for solving this set of discrete equations. We have seen, however, that the staggered discretization developed in this chapter is not easily extended to body-fitted and unstructured meshes: the use of staggered mesh methods based on curvilinear velocities is cumbersome and uninviting for structured meshes, and for unstructured meshes no workable staggering is apparent.


Chapter 7

Fluid Flow: A Closer Look

In this chapter, we turn to the problem of discretizing the continuity and momentum equations using a non-staggered, or co-located, mesh, storing both pressure and velocity at the cell centroid. We saw in the last chapter that storing pressure and velocity at the same location, i.e., at the cell centroids, leads to checkerboarding in the pressure and velocity fields. We circumvented this on Cartesian meshes by using staggered storage of pressure and velocity, and we used Cartesian velocity components as our primary variables.

For Cartesian meshes, staggered meshes are somewhat cumbersome to use: staggering requires the storage of geometry information for the main as well as the staggered u and v control volumes, and it increases coding complexity. For body-fitted meshes, we saw in the previous chapter that staggered meshes could only be used if grid-following velocities were employed, and that this option is also not entirely optimal. For unstructured meshes, it is not obvious how to define staggered pressure and velocity control volumes. As a result, recent research has focused on developing formulations which employ Cartesian velocity components and store both pressure and velocity at the cell centroid; specialized interpolation schemes are used to prevent checkerboarding.

The change to a co-located or non-staggered storage scheme is a change in the discretization practice. We will continue to use the SIMPLE family of algorithms for the solution of the discrete equations. The iterative methods used to solve the resulting discrete set are the same as those in the previous chapter, albeit with a few minor changes to account for the change in storage scheme.

7.1 Velocity and Pressure Checkerboarding

Co-located or non-staggered methods store pressure and velocity at the cell centroid. Thus u, v and p are stored at the cell centroid P, as are other scalars φ. In the discussion that follows, we use Cartesian velocity components u and v, defined in a global coordinate system. For the purposes of this chapter, let us assume an orthogonal, one-dimensional, uniform mesh; we will address two-dimensional and non-orthogonal meshes in a later section, once the basic idea is clear. The mesh and associated nomenclature are shown in Figure 7.1.

Figure 7.1: Co-Located Storage of Pressure and Velocity

The principle of conservation is enforced on the cell P, and a unit area of cross section (Δy = 1) has been assumed.

7.1.1 Discretization of Momentum Equation

The discretization of the u-momentum equation for the cell P follows the principles outlined in previous chapters and yields the following discrete equation:

    aP uP = Σnb anb unb + buP + (pw − pe)                      (7.1)

The neighbors nb for this co-located arrangement include the velocities at the points E and W. The pressures at the faces e and w are not known, since pressure is stored at the cell centroids.

Figure 7.2: Control Volumes for Velocity Interpolation

Consequently, interpolation is required. Adopting a linear interpolation for the face pressures on a uniform mesh, we may write

    aP uP = Σnb anb unb + buP + (pW − pE)/2                    (7.2)

Similarly, for uE, we may write

    aE uE = Σnb anb unb + buE + (pP − pEE)/2                   (7.3)

We see that the u-momentum equation at the point P does not involve the pressure at the point P; it involves only the pressures at the cells on either side. The same is true for uE. Thus, the u-momentum equation can support a checkerboarded solution for pressure. If we retain this type of discretization for the pressure term in the momentum equation, we must make sure that the discretization of the continuity equation somehow disallows pressure checkerboarding.

7.1.2 Discretization of Continuity Equation

In order to discretize the continuity equation, we integrate it over the cell P as before and apply the divergence theorem. This yields

    Fe − Fw = 0                                                (7.4)

where

    Fe = ρe ue Δy
    Fw = ρw uw Δy                                              (7.5)

As we discussed in the previous chapter, we must interpolate u from the cell centroid values to the face in order to find ue and uw. If we use a linear interpolation,

    ue = (uP + uE)/2
    uw = (uW + uP)/2                                           (7.6)

Substituting these relations into Equation 7.4 and assuming unit Δy and constant density yields

    ρe uE − ρw uW = 0                                          (7.7)

since the uP contributions cancel.

7.1.3 Pressure Checkerboarding

Let us now consider the question of whether Equation 7.7 can support a checkerboarded pressure field. Dividing Equations 7.2 and 7.3 by their respective center coefficients aP and aE, we have

    uP = ûP + (1/aP)(pW − pE)/2
    uE = ûE + (1/aE)(pP − pEE)/2                               (7.8)
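The blindness of Equations 7.2 and 7.3 to a checkerboarded pressure field is easy to verify numerically. The short sketch below is illustrative only (the field values are made up); it evaluates the pressure term (pW − pE)/2 for a uniform and for a checkerboarded field:

```python
# Demonstration (illustrative, not from the notes): the central
# pressure difference (pW - pE)/2 in Eq. 7.2 cannot distinguish a
# checkerboarded pressure field from a uniform one.
uniform      = [1.0] * 8
checkerboard = [1.0, 2.0] * 4      # p alternates 1, 2, 1, 2, ...

def pressure_force(p, i):
    """Pressure term (pW - pE)/2 seen by the u-momentum eq. of cell i."""
    return (p[i - 1] - p[i + 1]) / 2.0

for i in range(1, 7):
    print(pressure_force(uniform, i), pressure_force(checkerboard, i))
# Both fields produce zero pressure force in every interior cell, so
# the momentum equation supports the checkerboarded solution.
```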

where

    ûP = (Σnb anb unb + buP)/aP
    ûE = (Σnb anb unb + buE)/aE                                (7.9)

If we average uP and uE using Equation 7.8 to find ue in Equation 7.7, we obtain

    ue = (ûP + ûE)/2 + (1/4aP)(pW − pE) + (1/4aE)(pP − pEE)    (7.10)

A similar equation can be written for uw:

    uw = (ûW + ûP)/2 + (1/4aW)(pWW − pP) + (1/4aP)(pW − pE)    (7.11)

We see right away that any checkerboarded pressure field which sets pW = pE and pWW = pP = pEE will be seen as a uniform pressure field by ue and uw. Our discrete continuity equation does nothing to filter the spurious oscillatory modes in the pressure field supported by the momentum equation. Since the same type of checkerboarding is supported by the discrete momentum equations, a checkerboarded pressure field can persist in the final solution if boundary conditions permit.

7.1.4 Velocity Checkerboarding

In addition to pressure checkerboarding, the linear interpolation of cell velocities also introduces velocity checkerboarding: the resulting discrete continuity equation, Equation 7.7, does not involve the cell-centered velocity uP, so the continuity equation supports a checkerboarded velocity field. Such a checkerboarded velocity field implies checkerboarded momenta in the cell momentum balance. If we enforce momentum balance on the cell, we will in effect create a pressure field to offset these checkerboarded momenta, and this pressure field must of necessity be checkerboarded. In order to prevent checkerboarding in the final solution, we must ensure that the discretization of either the momentum or the continuity equation provides a filter to remove these oscillatory modes.

7.2 Co-Located Formulation

Co-located formulations prevent pressure checkerboarding by devising interpolation procedures which express the face velocities ue and uw in terms of adjacent, rather than alternate, pressure values. These face velocities are used to write the one-dimensional discrete continuity equation (Equation 7.7). As we shall see, the face velocity ue is then no longer defined purely as a linear interpolant of the two adjacent cell values; an additional term, called the added dissipation, prevents velocity checkerboarding.

As we saw in the previous section, the discrete one-dimensional momentum equations for the cells P and E yield

    uP = ûP + dP (pW − pE)/2                                   (7.12)
    uE = ûE + dE (pP − pEE)/2                                  (7.13)

where dP = 1/aP and dE = 1/aE. Writing the continuity equation for the cell P, we have

    Fe − Fw = 0                                                (7.14)

where

    Fe = ρe ue
    Fw = ρw uw                                                 (7.15)

As before, unit cross-sectional area is assumed. If we interpolate ue linearly, we get

    ue = (uP + uE)/2
       = (ûP + ûE)/2 + dP (pW − pE)/4 + dE (pP − pEE)/4        (7.16)

Instead of interpolating linearly, we interpolate the û component linearly between P and E, but write the pressure gradient term directly in terms of the adjacent cell-centroid pressures pP and pE. That is, we use

    ue = (ûP + ûE)/2 + de (pP − pE)                            (7.17)

where

    de = (dP + dE)/2                                           (7.18)

A similar expression may be written for uw. It is important to understand the manipulation that has been done in obtaining Equation 7.17: we have removed the pressure gradient term resulting from the linear interpolation of the velocities (which involves the pressures pW, pE, pP and pEE) and added a new pressure gradient term written in terms of the pressure difference pP − pE. This type of interpolation is referred to in the literature as momentum interpolation; it is also sometimes referred to as an added dissipation scheme. It was proposed, with small variations, by different researchers in the early 1980's [13, 14, 15]
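A small numerical sketch (assumed uniform coefficients, not from the notes) makes the difference between Equations 7.16 and 7.17 concrete: the linear interpolant of Equation 7.16 is blind to a checkerboarded pressure field, while the momentum-interpolated face velocity of Equation 7.17 responds to it:

```python
# Sketch of momentum interpolation (Eq. 7.17) versus linear
# interpolation (Eq. 7.16); dP = dE = de = d is assumed constant.
d = 0.25

def ue_linear(uhat, p, i):
    # Eq. 7.16: pressure enters only through alternate cells
    return (uhat[i] + uhat[i+1]) / 2 \
         + d/4 * (p[i-1] - p[i+1]) + d/4 * (p[i] - p[i+2])

def ue_momentum(uhat, p, i):
    # Eq. 7.17: pressure taken from the two adjacent cells
    return (uhat[i] + uhat[i+1]) / 2 + d * (p[i] - p[i+1])

uhat = [1.0] * 6
checker = [1.0, 2.0, 1.0, 2.0, 1.0, 2.0]
print(ue_linear(uhat, checker, 2))    # -> 1.0, blind to the checkerboard
print(ue_momentum(uhat, checker, 2))  # -> 0.75, feels it through pP - pE
```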

for use with pressure-based solvers. Ideas similar to it have also been used in the compressible flow community with density-based solvers.

Because the face velocities are written in terms of adjacent rather than alternate pressures, a continuity-satisfying velocity field is not able to ignore a checkerboarded pressure field. Thus, the continuity equation does not permit such a pressure field to persist, even though the momentum equation contains a pressure gradient term that can support a checkerboarded pattern. Momentum interpolation also prevents checkerboarding of the velocity field, by not interpolating the velocities linearly.

Another useful way to think about momentum interpolation is to consider the face velocity ue to be a sort of staggered velocity. The momentum interpolation formula, Equation 7.17, may be interpreted as a momentum equation for the staggered velocity ue. Instead of deriving the staggered momentum equation from first principles, the momentum interpolation procedure derives it by interpolating û linearly, and adding the pressure gradient appropriate for the staggered cell. (Recall that the quantity û contains the convection, diffusion and source contributions of the momentum equation.) By not using an actual staggered-cell discretization, momentum interpolation avoids the creation of staggered cell geometry, and makes it possible to use the idea for unstructured meshes. The face velocities ue and uw are then used to write the discrete continuity equation for the cell P.

7.3 The Concept of Added Dissipation

It is useful to understand why momentum interpolation is also referred to as an added dissipation scheme. Combining the first expression in Equation 7.16 with Equation 7.17, the change made by momentum interpolation is

    ue = (uP + uE)/2 − [dP (pW − pE)/4 + dE (pP − pEE)/4] + de (pP − pE)

For simplicity, let us assume that dP = dE = de. This would be the case if the mesh were uniform and the flow field, diffusion coefficients and source terms were constant over the cells P and E. Let us look at the pressure terms. Rearranging, we have

    −(dP/4)[(pW − pE) + (pP − pEE)] + dP (pP − pE)
        = −(dP/4)[(pW − 2pP + pE) − (pP − 2pE + pEE)]          (7.21)

Using a Taylor series expansion, we can show that

    (∂²p/∂x²)P = (pW − 2pP + pE)/Δx² + O(Δx²)                  (7.22)

Similarly,

    (∂²p/∂x²)E = (pP − 2pE + pEE)/Δx² + O(Δx²)                 (7.23)

The pressure term in the momentum interpolation scheme may thus be written as

    −(dP/4) Δx² [(∂²p/∂x²)P − (∂²p/∂x²)E]                      (7.24)

so that, dividing and multiplying by a further Δx,

    ue = (uP + uE)/2 + (dP/4)(∂³p/∂x³)e Δx³                    (7.25)

A similar expression may be written for uw:

    uw = (uW + uP)/2 + (dP/4)(∂³p/∂x³)w Δx³                    (7.26)

Now, let us look at the continuity equation. If we write the continuity equation for constant ρ and divide through by Δx, we get

    (ue − uw)/Δx = 0                                           (7.27)

Substituting for ue and uw from Equations 7.25 and 7.26, we get

    (uE − uW)/(2Δx) + (dP/4)(∂⁴p/∂x⁴)P Δx³ = 0                 (7.28)

Using a Taylor series expansion, we may show that

    (uE − uW)/(2Δx) = (∂u/∂x)P + O(Δx²)                        (7.29)

so that Equation 7.28 may be written as

    (∂u/∂x)P + (dP/4)(∂⁴p/∂x⁴)P Δx³ = 0                        (7.30)

We see that momentum interpolation is equivalent to solving a continuity equation with an added fourth derivative of pressure. Even derivatives are frequently referred to in the literature as dissipation, hence the name added dissipation scheme.

7.4 Accuracy of the Added Dissipation Scheme

Let us now examine the accuracy of the momentum interpolation, or added dissipation, scheme. Using Equation 7.21, the interpolation scheme may be written as

    ue = (uP + uE)/2 − (dP/4)[(pW − 2pP + pE) − (pP − 2pE + pEE)]   (7.31)

Our intent is to write the continuity equation using this interpolation for the face velocity.

Let us consider the first term, (uP + uE)/2. Using Taylor series expansions about the face e, we may write

    uP = ue − (∂u/∂x)e Δx/2 + (∂²u/∂x²)e Δx²/8 + O(Δx³)
    uE = ue + (∂u/∂x)e Δx/2 + (∂²u/∂x²)e Δx²/8 + O(Δx³)        (7.32)

Adding the two equations and dividing by two yields

    (uP + uE)/2 = ue + O(Δx²)                                  (7.33)

Thus, the truncation error in writing the first term in Equation 7.31 is O(Δx²). We already saw from Equation 7.22 that the pressure term may be written as

    −(dP/4)[(pW − 2pP + pE) − (pP − 2pE + pEE)] = (dP/4)(∂³p/∂x³)e Δx³   (7.34)

so that the total expression for ue is

    ue = (uP + uE)/2 + (dP/4)(∂³p/∂x³)e Δx³                    (7.35)

We see that the pressure term we have added is O(Δx³), which is of higher order than the second-order truncation error of the linear interpolation. Consequently, adding the dissipation term does not change the formal second-order accuracy of the underlying discretization scheme.

7.5 Discussion

Thus far, we have been looking at how to interpolate the face velocity in order to circumvent the checkerboarding problem for co-located arrangements. We have seen that momentum interpolation is equivalent to solving the continuity equation with an extra dissipation term for the pressure. An extremely important point is that the discrete continuity equation (Equation 7.14) is written in terms of the face velocities ue and uw; it is not written directly in terms of the cell-centered velocities uP and uE. Thus, in a co-located formulation, at convergence, it is the face velocities that directly satisfy the discrete continuity equation. The cell-centered velocities directly satisfy the discrete momentum equations; they satisfy the continuity equation only indirectly,
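The O(Δx³) behaviour of the added pressure term in Equation 7.31 can be confirmed numerically. The sketch below is illustrative (it uses p(x) = sin x as a smooth test field); halving Δx should reduce the term by about a factor of 2³ = 8:

```python
import math

# Numeric check (illustrative) that the added pressure term of
# Eq. 7.31 scales like dx^3 for a smooth pressure field.
def dissipation(x, dx, d=1.0):
    p = math.sin
    pW, pP, pE, pEE = p(x - dx), p(x), p(x + dx), p(x + 2*dx)
    return -d/4 * ((pW - 2*pP + pE) - (pP - 2*pE + pEE))

t1 = dissipation(1.0, 0.02)
t2 = dissipation(1.0, 0.01)
print(t1 / t2)   # close to 8, i.e. third-order behaviour
```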

through their role in defining ue and uw. Conversely, the face velocities ue and uw directly satisfy the continuity equation, but do not satisfy any discrete momentum equation, since no direct momentum conservation over a staggered cell is ever written for them. They satisfy momentum conservation only indirectly, in the sense that they satisfy the momentum interpolation formula.

7.6 Two-Dimensional Co-Located Variable Formulation

Let us generalize our development to two-dimensional regular meshes before considering how to solve our discrete set of equations. We will use this development as a stepping stone to developing a SIMPLE algorithm for solving the discrete set. Though the properties of the momentum interpolation scheme are less clearly evident in two dimensions, everything we have said about the one-dimensional case is also true in two dimensions.

7.6.1 Discretization of Momentum Equations

Using the procedures described earlier, we may derive the discrete u- and v-momentum equations for the cell-centered velocities uP and vP:

    auP uP = Σnb aunb unb + buP + Δy (pW − pE)/2
    avP vP = Σnb avnb vnb + bvP + Δx (pS − pN)/2               (7.36)

Here, the coefficients auP and aunb are the coefficients of the u-momentum equation for the cell P, and avP and avnb are the coefficients of the v-momentum equation. Similar discrete equations may be written for the neighboring cells. We note that the pressure gradient terms involve pressures 2Δx and 2Δy apart, respectively.

7.6.2 Momentum Interpolation

Consider the face e in Figure 7.1, and the u momentum equations for the cells P and E. Dividing the discrete momentum equations by their center coefficients, we obtain

    uP = ûP + duP (pW − pE)/2
    uE = ûE + duE (pP − pEE)/2                                 (7.37)

where

    duP = Δy/auP
    duE = Δy/auE                                               (7.38)

The factor Δy appears because the pressure gradient term is multiplied by it in a two-dimensional mesh. Using momentum interpolation for the face velocity ue, we obtain

    ue = ûe + de (pP − pE)                                     (7.39)

where

    ûe = (ûP + ûE)/2
    de = (duP + duE)/2                                         (7.40)

By analogy, we may define equivalent quantities at the other faces:

    duW = Δy/auW
    dvN = Δx/avN
    dvS = Δx/avS                                               (7.41)

and write the face velocities for the other faces as

    uw = ûw + dw (pW − pP)
    vn = v̂n + dn (pP − pN)
    vs = v̂s + ds (pS − pP)                                     (7.42)

with

    dw = (duW + duP)/2
    dn = (dvP + dvN)/2
    ds = (dvS + dvP)/2                                         (7.43)

7.7 SIMPLE Algorithm for Co-Located Variables

Having defined the face velocities as momentum-interpolants of the cell-centered values, we now turn to the question of how to write the discrete continuity equation, and how
43) 7. we may write face velocities for the other faces as: uw vn vs with  dw p pW  pP q   vˆn dn p pP  pN q  vˆs  ds p pS  pP q  dw dn ds (7.42)    u dW v dP v dS 2  u dP v dN v dP 2 2 (7.39) Using momentum interpolation for the face velocity u e . we now turn to the question of how to write the discrete continuity equation.7 SIMPLE Algorithm for Co-Located Variables Having deﬁned the face velocities as momentum-interpolants of cell centered values.38) The factor ∆y appears because the pressure gradient term is multiplied by it in a twodimensional mesh. we obtain ue where u ˆe de  u ˆe de pP d  2 2 pE f (7.40)   u ˆw u ˆP u dP u ˆE u dE (7.where u dP u dE       ∆y au P ∆y au E (7.41) By analogy. By analogy we may deﬁne equivalent quantities at other faces: u dW v dN v dS ∆y u aW ∆x av N ∆x av S (7. and how 154 .

Thus. The procedure is similar to that adopted in the previous chapter. 7. uw .47)   Fe Fn where Fe   Fn  (7.7. We wish to correct the face velocities (and the face ﬂow rates) such that the corrected ﬂow rates satisfy the discrete continuity equation. we may write face ﬂow rate corrections pP Fe Fn (7.49)  p  q  p   q     vn   vˆn dn p pP   pN q vs   vˆs  ds p pS  pP q  u ˆe de pP pE u ˆw dw pW pP 155   (7. We must now devise a way to formulate a pressure correction equation that can be used with the co-located variables.to solve the discrete set.45)  ue   vn   (7.1 Velocity and Pressure Corrections As before. The face velocities u e . we wish to use the SIMPLE algorithm. vn and vs found by interpolating the u and v to the face using momentum interpolation are not guaranteed to satisfy the discrete continuity equation. and to use these expressions to derive a pressure correction equation. let u and v denote the solution to the discrete momentum equations using a guessed pressure ﬁeld p . In keeping with our philosophy of using sequential iterative solutions.        Fw Fn  Fs   0 (7. albeit with a few small changes. Thus.48) Fe      We now seek to express u and v in terms of the cell pressure corrections p .46)  pP  pP   Correspondingly.44)  The face mass ﬂow rates F are deﬁned in terms of the momentum-interpolated face Fe Fe Fn velocities as        ρe ue ∆y ρn vn ∆x   Similar expressions may be written for Fw and Fs . From our face velocity deﬁnitions. we write ue uw  Fn  ρe ∆yue ρn ∆xvn   (7.50) . we propose face velocity corrections ue vn and the cell pressure corrections ue vn  (7.

In keeping with the SIMPLE algorithm, we drop the û' and v̂' terms and approximate the velocity corrections as

    u'e = de (p'P − p'E)
    u'w = dw (p'W − p'P)
    v'n = dn (p'P − p'N)
    v's = ds (p'S − p'P)                                       (7.51)

This is analogous to dropping the Σnb anb u'nb terms in the staggered formulation. The corresponding face flow rate corrections are

    F'e = ρe Δy de (p'P − p'E)
    F'w = ρw Δy dw (p'W − p'P)
    F'n = ρn Δx dn (p'P − p'N)
    F's = ρs Δx ds (p'S − p'P)                                 (7.52)

We notice that the velocity corrections in Equation 7.51 correct the face velocities, but not the cell-centered velocities. For later use, we write the cell-centered velocity corrections by analogy:

    u'P = duP (p'W − p'E)/2
    v'P = dvP (p'S − p'N)/2                                    (7.53)

7.7.2 Pressure Correction Equation

To derive the pressure correction equation for the co-located formulation, we write the continuity equation in terms of the corrected face flow rates:

    F*e + F'e − (F*w + F'w) + F*n + F'n − (F*s + F's) = 0      (7.54)

Substituting from Equation 7.52 for the F' values, we obtain the pressure correction equation

    aP p'P = Σnb anb p'nb + b                                  (7.55)

with

    aE = ρe de Δy
    aW = ρw dw Δy
    aN = ρn dn Δx
    aS = ρs ds Δx
    aP = aE + aW + aN + aS
    b  = F*w − F*e + F*s − F*n                                 (7.56)

We see that the pressure correction equation has the same structure as that for the staggered grid formulation. The source term in the pressure correction equation is the mass imbalance resulting from the solution of the momentum equations.
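The mechanics of Equations 7.51 to 7.56 can be illustrated with a one-dimensional analogue. The sketch below is not from the notes (the guessed flow rates and the coefficient a = ρ de Δy are made up); it assembles the p' system for a row of cells, corrects the interior face flow rates, and verifies that every cell then satisfies discrete continuity:

```python
# One-dimensional analogue (illustrative values) of the co-located
# pressure-correction step: F' = a (p'_P - p'_E) on interior faces.
a = 0.5                               # a_nb = rho * d_e * dy, assumed
F = [1.0, 1.2, 0.9, 1.1, 1.0, 1.0]    # guessed face flow rates F*
n = len(F) - 1                        # number of cells

# Tridiagonal p' system; boundary faces (0 and n) are held fixed.
lo = [0.0] * n; di = [0.0] * n; up = [0.0] * n
b  = [F[i] - F[i + 1] for i in range(n)]   # cell mass imbalances
for i in range(n):
    if i + 1 <= n - 1:       # interior east face
        di[i] += a; up[i] = -a
    if i >= 1:               # interior west face
        di[i] += a; lo[i] = -a
lo[0] = 0.0; di[0] = 1.0; up[0] = 0.0; b[0] = 0.0   # pin reference p'

# Thomas algorithm for the tridiagonal system
for i in range(1, n):
    w = lo[i] / di[i - 1]
    di[i] -= w * up[i - 1]
    b[i]  -= w * b[i - 1]
pc = [0.0] * n
pc[-1] = b[-1] / di[-1]
for i in range(n - 2, -1, -1):
    pc[i] = (b[i] - up[i] * pc[i + 1]) / di[i]

Fc = F[:]
for f in range(1, n):                 # correct interior faces only
    Fc[f] += a * (pc[f - 1] - pc[f])

imbalance = [Fc[i] - Fc[i + 1] for i in range(n)]
print(imbalance)                      # ~0 in every cell
```

As in the two-dimensional scheme, the corrections annihilate the mass imbalance of every cell in a single linear solve.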

7.7.3 Overall Solution Procedure

The overall SIMPLE solution procedure for co-located meshes takes the following form:

1. Guess the pressure field p*.
2. Solve the u and v momentum equations using the prevailing pressure field p* to obtain u* and v* at the cell centroids.
3. Compute the face mass flow rates F* using momentum interpolation to obtain the face velocities (Equation 7.45).
4. Solve the p' equation (Equation 7.55).
5. Correct the face flow rates using the flow rate corrections (Equations 7.49 and 7.52).
6. Correct the cell-centered velocities uP and vP using Equation 7.53.
7. Correct the cell pressure using Equation 7.48; in keeping with the SIMPLE algorithm, underrelax the pressure correction as p = p* + αp p'.
8. Solve for other scalars φ if desired.
9. Check for convergence. If converged, stop. Else go to 2.

7.7.4 Discussion

We see that the overall SIMPLE procedure is very similar to that for staggered meshes. However, we should note a very important difference. The pressure correction equation contains a source term b which is the mass imbalance in the cell P, and the computed pressure corrections are designed to annihilate this mass imbalance. Because the flow rate corrections are written directly in terms of the pressure corrections, we are assured that the corrected face flow rates in step 5 satisfy the discrete continuity equation identically at each iteration of the SIMPLE procedure, and also at convergence. Thus, the face flow rates (and, by implication, the face velocities) satisfy the discrete continuity equation at step 5 in each iteration.

However, the cell-centered velocities, either before or after the correction in step 6, are never guaranteed to satisfy the discrete continuity equation. This is because the flow rates F are not written directly using uP and vP; the momentum-interpolated values are used instead. The cell-velocity correction in step 6 is designed to speed up convergence, but it does nothing to make uP and vP satisfy the discrete continuity equation. By the same token, the cell-centered velocities satisfy the discrete momentum equations, but not the discrete continuity equation; the face velocities satisfy the discrete continuity equation, but do not satisfy a discrete momentum equation directly. This curious disconnect between the cell-center and face velocities is an inherent property of co-located schemes.

The solution of passive scalars φ in step 8 employs the continuity-satisfying face flow rates F in discretizing the convective terms. The cell-centered velocities are never used for this purpose, since they do not satisfy the discrete continuity equation for the cell.

7.8 Underrelaxation and Time-Step Dependence

We have thus far said very little about the role of underrelaxation in developing our co-located formulation. Majumdar [16] has shown that unless care is taken in underrelaxing the face velocities, the resulting co-located formulation is underrelaxation dependent. That is, the final solution depends on the underrelaxation employed, and different underrelaxation factors may lead to different solutions. This is clearly extremely undesirable.

Consider the face velocity ue, which may be written as

    ue = ûe + de (pP − pE)                                     (7.57)

Recall that de involves the average of 1/auP and 1/auE, the center coefficients of the u momentum equations at the points P and E; ûe also contains auP and auE in its denominator. Let us say that ûe and de in Equation 7.40 correspond to un-underrelaxed values of auP and auE. If the momentum equations are underrelaxed, the cell-centered velocities satisfy

    uP = α [ûP + duP (pW − pE)/2] + (1 − α) u°P                (7.58)
    uE = α [ûE + duE (pP − pEE)/2] + (1 − α) u°E               (7.59)

Using momentum interpolation as before, we obtain

    ue = α [ûe + de (pP − pE)] + (1 − α) ulinear               (7.60)

where ulinear = (u°P + u°E)/2 is the prevailing linearly interpolated face value. In order for a variable φ to be underrelaxation-independent at convergence, the underrelaxation expression must have the form

    φ = α φcomputed + (1 − α) φ                                (7.61)

so that, at convergence, φcomputed = φ, and the above expression recovers φ regardless of what underrelaxation is used. We see that the underrelaxed face velocity in Equation 7.60 does not have this form: the lagged quantity is the linear interpolant ulinear, not ue itself. Since ue is never equal to ulinear, the value of ue is underrelaxation dependent, even at convergence. The remedy is to use an underrelaxation of the form

    ue = α [ûe + de (pP − pE)] + (1 − α) u°e                   (7.62)

where u°e is the value of ue from the previous iteration. Here ûe and de are computed using the un-underrelaxed momentum equations for the cells P and E, and the face velocity is then underrelaxed separately to obtain the desired form. We note that this requires the storage of the face velocity ue, since we cannot underrelax ue without storing it.
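Majumdar's observation can be reproduced with a scalar fixed-point iteration. In the sketch below, all numbers are assumed for illustration: lagging the face velocity itself converges to the same ue for any α, while lagging the linear interpolant leaves an α-dependent answer:

```python
# Illustrative fixed-point check of the face-velocity underrelaxation
# remedy; uhat_e, de, dp and u_lin are made-up frozen values.
uhat_e, de, dp = 2.0, 0.5, -0.4        # frozen u-hat and (pP - pE)
u_lin = 1.7                            # prevailing linear interpolant
target = uhat_e + de * dp              # un-relaxed momentum interpolation

def converge(alpha, lag_face_velocity):
    ue = 0.0
    for _ in range(200):
        lagged = ue if lag_face_velocity else u_lin
        ue = alpha * target + (1 - alpha) * lagged
    return ue

for alpha in (0.3, 0.7):
    print(converge(alpha, True), converge(alpha, False))
# Lagging ue itself gives target = 1.8 for both alphas; lagging the
# linear interpolant gives a different answer for each alpha.
```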

The discrete momentum equations for the cell C0 may now be written:

    a_0^u u_0 = ∑_nb a_nb^u u_nb + b_0^u - (∇p)_0 · i Δ𝒱_0    (7.64)
    a_0^v v_0 = ∑_nb a_nb^v v_nb + b_0^v - (∇p)_0 · j Δ𝒱_0    (7.65)

Using the gradient theorem, we may write

    (∇p)_0 Δ𝒱_0 = ∫ ∇p d𝒱 = ∮ p dA = ∑_f p_f A_f    (7.66)

where we have assumed that the pressure at the face centroid prevails over the face. The face area vector is given by

    A_f = A_x i + A_y j    (7.67)

so that the pressure gradient terms in the u and v momentum equations may be written as

    (∇p)_0 · i Δ𝒱_0 = ∑_f p_f A_x ,   (∇p)_0 · j Δ𝒱_0 = ∑_f p_f A_y    (7.68)

Having discretized the pressure gradient term, p_f must be written in terms of the cell-centroid values of the pressure. In keeping with our co-located mesh technique, p_f is interpolated linearly to the face. For a uniform mesh this interpolation would take the form

    p_f = (p_0 + p_1)/2    (7.69)

For non-uniform meshes, we may use the reconstructed value

    p_f = [ p_0 + (∇p)_0 · r_0 + p_1 + (∇p)_1 · r_1 ] / 2    (7.70)

where r_0 and r_1 are the distances from the cell centroids of cells C0 and C1 to the face centroid. In either event, the momentum equations can support a checkerboarded pressure field, as with regular meshes. Since

    ∑_f A_f = 0    (7.71)

the linear interpolation of Equation 7.69 eliminates the cell pressure p_0 from the pressure gradient term of cell C0. If (∇p)_0 and (∇p)_1 are computed using the same type of linear assumptions, the summation with the reconstructed values of Equation 7.70 will behave in the same way.

For future use, let us write the pressure gradient term in the cell as

    ∑_f p_f A_f = (∇p)_0 Δ𝒱_0    (7.72)

where (∇p)_0 denotes the average pressure gradient in the cell.
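The gradient-theorem evaluation above is easy to check numerically. The sketch below (cell geometry and pressure field invented for illustration) applies (∇p)_0 Δ𝒱_0 = ∑_f p_f A_f to a unit square cell with a linear pressure field, for which the reconstruction is exact:

```python
# Green-Gauss cell gradient, grad(p) * V = sum over faces of p_f * A_f,
# for a unit square cell and the linear field p(x, y) = 2x + 3y.
# Face pressures are taken at face centroids; area vectors point outward.
faces = [
    # (face centroid, outward area vector A_f = (Ax, Ay))
    ((0.5, 0.0), (0.0, -1.0)),  # south
    ((1.0, 0.5), (1.0, 0.0)),   # east
    ((0.5, 1.0), (0.0, 1.0)),   # north
    ((0.0, 0.5), (-1.0, 0.0)),  # west
]

def p(x, y):
    return 2.0 * x + 3.0 * y

vol = 1.0  # cell volume (area in 2D)
gx = sum(p(*c) * a[0] for c, a in faces) / vol
gy = sum(p(*c) * a[1] for c, a in faces) / vol
print(gx, gy)  # recovers the exact gradient (2, 3) for a linear field
```

For a linear field the face-centroid pressures are exact, so the discrete sum reproduces the gradient exactly; on a distorted mesh or a non-linear field it is only an approximation.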

Here, a_0^u and a_0^v denote the center coefficients of the u and v momentum equations, and b_0^u and b_0^v the source terms.

In a co-located variable formulation, the coefficients of the u and v momentum equations are equal to each other when there are no body forces present. This is because the flow rates governing convection are the same for all φ's, and the diffusion coefficient for the u and v momentum equations is the same, being equal to the viscosity μ. Away from the boundaries, the only difference between the center coefficients a_0^u and a_0^v occurs because of source terms with S_p components which act preferentially in the x or y directions. At Dirichlet boundaries, and for most boundary conditions, the coefficient modifications for both velocity directions are the same; the same is true at inlet and outflow boundaries. The main difference occurs at symmetry boundaries aligned with either the x or y directions. But for these exceptions, the u momentum coefficient set (a_0^u and a_nb^u) is equal to the coefficients of the v momentum equation (a_0^v and a_nb^v).

7.9.1 Face Normal Momentum Equation

Let us consider the face f between the cells C0 and C1. Since it is the face normal velocity that appears in the discrete continuity equation for cells C0 and C1, it is necessary to understand the form taken by the momentum equation for the velocity in the face normal direction. The face normal vector n is given by

    n = A_f / |A_f| = n_x i + n_y j    (7.73)

Let V_0^n denote the component of the cell-centered velocity at cell C0 in the direction of the face normal n. This velocity is given by

    V_0^n = V_0 · n = u_0 n_x + v_0 n_y    (7.74)

Under these circumstances, i.e., with the u and v coefficient sets equal, the momentum equation in the cell C0 for the velocity in the face normal direction may be written as:

    a_0^n V_0^n = ∑_nb a_nb^n V_nb^n + b_0^n - (∇p)_0 · n Δ𝒱_0    (7.75)

where

    a_0^n = a_0^u = a_0^v ,   a_nb^n = a_nb^u = a_nb^v    (7.76)
    b_0^n = b_0^u n_x + b_0^v n_y    (7.77)

We note further that the pressure gradient term may be written as

    (∇p)_0 · n Δ𝒱_0 = (∂p/∂n)_0 Δ𝒱_0    (7.78)
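Equations 7.73-7.74 amount to a two-line computation. A quick numerical sketch (the face area vector and velocity values are invented for illustration):

```python
import math

# face area vector A_f = (Ax, Ay); made-up values for illustration
Ax, Ay = 3.0, 4.0
mag = math.hypot(Ax, Ay)
nx, ny = Ax / mag, Ay / mag     # n = A_f / |A_f|   (Eq. 7.73)

u0, v0 = 2.0, 1.0               # cell-centered velocity components at C0
V0n = u0 * nx + v0 * ny         # V_0^n = V_0 . n   (Eq. 7.74)
print(V0n)                      # 2.0 * 0.6 + 1.0 * 0.8 = 2.0
```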

Dividing Equation 7.75 by a_0^n, we may write

    V_0^n = V̂_0^n - (Δ𝒱_0 / a_0^n) (∇p)_0 · n    (7.79)

Using similar procedures, we may write, for cell C1,

    V_1^n = V̂_1^n - (Δ𝒱_1 / a_1^n) (∇p)_1 · n    (7.80)

Here

    V̂_0^n = û_0 n_x + v̂_0 n_y
    V̂_1^n = û_1 n_x + v̂_1 n_y    (7.81)

7.9.2 Momentum Interpolation for Face Velocity

The momentum interpolation procedure is applied to the normal velocity at the face. Let the linearly interpolated face normal velocity be given by V̄_f. On a uniform mesh

    V̄_f = (V_0^n + V_1^n)/2    (7.82)

For non-uniform meshes, face values of u and v may be reconstructed to the face and averaged in the manner of Equation 7.70, and a face normal velocity found using Equation 7.74. The momentum-interpolated face normal velocity is given by

    V_f = V̄_f + (Δ𝒱_f / a_f^n) [ (∇p̄)_f · n - (∂p/∂n)_f ]    (7.83)

In writing Equation 7.83, we are removing the mean normal pressure gradient and adding in a face normal pressure gradient directly. Here, the quantities Δ𝒱_f and a_f^n represent the cell volume and center coefficient associated with the face. These may be chosen in a number of different ways, as long as the associated truncation error is kept O(Δx³). For the purposes of this chapter, we choose them as

    Δ𝒱_f = (Δ𝒱_0 + Δ𝒱_1)/2 ,   a_f^n = (a_0^n + a_1^n)/2    (7.84)

The pressure gradient (∇p̄)_f is the mean pressure gradient at the face and is given by

    (∇p̄)_f = [ (∇p)_0 + (∇p)_1 ] / 2    (7.85)

The quantity (∂p/∂n)_f is the face value of the pressure gradient.

As before, the discrete continuity equation for the cell C0 is not satisfied by u* and v*. We postulate face flow rate corrections F'_f such that

    ∑_f ( F*_f + F'_f ) = 0    (7.93)

where F*_f are the face mass flow rates computed from the momentum-satisfying velocities u* and v*. Correspondingly, we postulate face normal velocity corrections

    V'_f = d_f ( p'_0 - p'_1 )    (7.94)

Here the correction to V̂_f has been dropped in keeping with the SIMPLE algorithm. The corresponding face flow rate corrections are

    F'_f = ρ_f d_f A_f ( p'_0 - p'_1 )    (7.95)

We also postulate a cell pressure correction p'_0. As with regular meshes, we define cell velocity corrections

    u'_0 = - ( ∑_f p'_f A_x ) / a_0^u    (7.96)
    v'_0 = - ( ∑_f p'_f A_y ) / a_0^v    (7.97)

with face pressure corrections

    p'_f = ( p'_0 + p'_1 ) / 2    (7.98)

Substituting Equations 7.94 and 7.96 into the discrete continuity equation (Equation 7.93) yields a pressure correction equation for the cell center pressure. We may write this in the form

    a_P p'_P = ∑_nb a_nb p'_nb + b    (7.99)

where

    a_nb = ρ_f d_f A_f
    a_P  = ∑_nb a_nb
    b    = - ∑_f F*_f    (7.100)

7.10.1 Discussion

The broad structure of the pressure correction equation is the same as for regular meshes. The overall SIMPLE algorithm takes the same form, and is not repeated here. However, a few important points must be made.


Chapter 8

Linear Solvers

As we have seen in the earlier chapters, implicit schemes result in a system of linear equations of the form

    A x = b    (8.1)

Here A is an N × N matrix and x is a vector of the unknowns. The efficient solution of such systems is an important component of any CFD analysis. Linear systems also arise in numerous other engineering and scientific applications, and a large number of techniques have been developed for their solution. However, the systems of equations that we deal with in CFD have certain distinguishing characteristics that we need to bear in mind while selecting the appropriate algorithms.

One important characteristic of our linear systems is that they are very sparse, i.e., there are a large number of zeroes in the matrix A. Recall that the discrete equation at a cell has non-zero coefficients for only the neighboring cells. Thus, for a two-dimensional structured quadrilateral grid, out of the N² entries in the matrix, only about 5N of them are non-zero. It would seem to be a good idea to seek solution methods that take advantage of the sparse nature of our matrix.

Depending on the structure of the grid, the matrix might also have a specific fill pattern, i.e., a pattern in the location of the non-zero entries. The system of equations resulting from a one-dimensional grid, for example, has non-zero entries only on the diagonal and two adjacent "lines" on either side. For a mesh of 5 cells, the matrix has the form

        | x x 0 0 0 |
        | x x x 0 0 |
    A = | 0 x x x 0 |    (8.2)
        | 0 0 x x x |
        | 0 0 0 x x |

Here x denotes the non-zero entries. As we shall see shortly, linear systems involving such matrices, known as tri-diagonal matrices, can be solved easily and form the basis of some solution methods for more general matrices as well. We also note that two- and three-dimensional structured grids similarly result in banded matrices, although the exact structure of these bands depends on how the cells are numbered. Once again, it would seem advantageous to exploit the band structure of our matrix, both for storage and solution techniques.

Another important characteristic of our linear systems is that in many instances they are approximate. By this we mean that the coefficients of the matrix and/or the source vector b are themselves subject to change after we have solved the equation. This may be because of coupling between different equations (e.g., the mass flux appearing in the convective coefficients of the energy equation), variable properties (temperature-dependent thermal conductivity, for example), or other non-linearities in the governing equations. Whatever the underlying reason, the implication for the linear solver is that it may not be really worthwhile to solve the system to machine accuracy. Since we are going to recompute the coefficient matrix and solve the linear problem once again, it is usually sufficient if we obtain only an approximate solution to any given linear system. Also, as we are iterating, we usually have a good initial guess for the unknown, and linear solvers that can take advantage of this are obviously desirable.

8.1 Direct vs Iterative Methods

Linear solution methods can broadly be classified into two categories, direct or iterative. Direct methods, such as Gauss elimination, LU decomposition etc., typically do not take advantage of matrix sparsity and involve a fixed number of operations to obtain the final solution, which is determined to machine accuracy. They also do not take advantage of any initial guess of the solution. Given the characteristics of the linear systems outlined above, it is easy to see why they are rarely used in CFD applications. Iterative methods, on the other hand, can easily be formulated to take advantage of the matrix sparsity. Since these methods successively improve the solution by the application of a fixed number of operations, we can stop the process when the solution at any given outer iteration¹ has been obtained to a sufficient level of accuracy, and not have to incur the expense of obtaining the machine-accurate solution. As the outer iterations progress and we have better initial guesses for the iterations of the linear solver, the effort required during the linear solution also decreases. Iterative methods are therefore preferred, and we shall devote the bulk of this chapter to such methods.

8.2 Storage Strategies

As we have already noted, a large number of the entries of our coefficient matrix are zero. Consequently, it is very inefficient to use a two-dimensional array structure to store our matrix. In this section we consider some smarter ways of storing only the non-zero entries that will still allow us to perform any of the matrix operations that the solution algorithm might require. The exact way of doing this will depend on the nature of the grid.

¹To distinguish the iterations being performed because of equation non-linearity and/or inter-equation coupling from the iterations being performed to obtain the solution for a given linear system, we use the term "outer iteration" for the former and "inner iteration", or simply iteration, for the latter.

For a one-dimensional grid of N cells, we could store the diagonal and the two lines parallel to it using three one-dimensional arrays of length N. Following the notation we have used in the previous chapters, we label these arrays AP, AE and AW. The non-zero entries of the matrix A can then be obtained as

    AP(i) =  A(i,i)    (8.3)
    AE(i) = -A(i,i+1)    (8.4)
    AW(i) = -A(i,i-1)    (8.5)

so that row i of Ax = b reads AP(i)X(i) = AE(i)X(i+1) + AW(i)X(i-1) + B(i), in keeping with the coefficient convention of the previous chapters. It is therefore easy to interpret any matrix operation involving the coefficient matrix A in terms of these coefficient arrays.

For a two-dimensional structured grid of NI × NJ cells, it is usually convenient to refer to the cells using a double index notation, and therefore we could use 5 two-dimensional arrays of dimension NI × NJ to store the AP, AE, AW, AN and AS coefficients. Alternatively, one might prefer to number the cells using a single index notation and store the coefficients using 5 one-dimensional arrays of size NI × NJ instead. In either case, because of the grid structure, we implicitly know the indices of the neighboring cells and thus the position of these coefficients in the matrix A.

For unstructured grids, however, the connectivity of the matrix must be stored explicitly. Another difficulty is caused by the fact that the number of neighbors is not fixed. Therefore we cannot use the approach mentioned above of storing coefficients as aP, aE, aW etc. arrays. We will look at one strategy that is often used in these cases.

Consider a mesh of N cells and let n_i represent the number of neighbors of cell i. The total number of neighbor coefficients that we need to store is then given by

    B = ∑_{i=1}^{N} n_i    (8.6)

We allocate two arrays of length B: one of integers (labelled NBINDEX) and one of floating point numbers (labelled COEFF). We also allocate one other integer array of length N+1, labelled CINDEX, which is defined as

    CINDEX(1) = 1    (8.7)
    CINDEX(i) = CINDEX(i-1) + n_{i-1}    (8.8)

[Figure 8.1: Storage Scheme for Unstructured Mesh Coefficient Matrix. For the four-cell mesh shown, CINDEX = (1, 3, 5, 7, 9) and NBINDEX = (2, 4, 1, 3, 2, 4, 3, 1).]
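This storage scheme is essentially the compressed sparse row layout, and matrix operations translate directly into loops over CINDEX ranges. The sketch below (0-based Python indexing; the four-cell mesh of Fig. 8.1 with invented coefficient values a_P = 4, a_nb = 1) forms the product y = Ax, where the off-diagonal entries of A are -a_nb:

```python
# Sparse matrix-vector product using the CINDEX/NBINDEX/COEFF storage of
# Sec. 8.2, shifted to 0-based indexing (CINDEX 1 3 5 7 9 -> 0 2 4 6 8).
AP = [4.0, 4.0, 4.0, 4.0]           # center coefficients
CINDEX = [0, 2, 4, 6, 8]
NBINDEX = [1, 3, 0, 2, 1, 3, 2, 0]  # neighbours 2 4 | 1 3 | 2 4 | 3 1 (1-based)
COEFF = [1.0] * 8                   # neighbour coefficients a_nb

def matvec(x):
    y = []
    for i in range(len(AP)):
        acc = AP[i] * x[i]
        for n in range(CINDEX[i], CINDEX[i + 1]):
            acc -= COEFF[n] * x[NBINDEX[n]]   # off-diagonals of A are -a_nb
        y.append(acc)
    return y

print(matvec([1.0, 1.0, 1.0, 1.0]))  # -> [2.0, 2.0, 2.0, 2.0]
```

Each row touches only its own CINDEX slice, so the cost is proportional to the number of non-zeros, not to N².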

This is illustrated in Fig. 8.1, which shows the contents of CINDEX and NBINDEX for a two-dimensional unstructured grid. The idea is that the indices of the neighbours of cell i are stored in the array NBINDEX at the locations j given by CINDEX(i) ≤ j < CINDEX(i+1). The corresponding coefficients for these neighbors are stored in the same locations in the COEFF array. Finally, the center coefficient is stored in a separate array AP of length N.

8.3 Tri-Diagonal Matrix Algorithm

Although the bulk of this chapter is concerned with iterative solution techniques, for the tridiagonal linear system arising out of a one-dimensional problem there is a particularly simple direct solution method that we consider first. The idea is essentially the same as Gaussian elimination; however, the sparse, tri-diagonal pattern of the matrix allows us to obtain the solution in O(N) operations. This is accomplished in two steps. First, the matrix is upper-triangularized, i.e., the entries below the diagonal are successively eliminated starting with the second row. The last equation thus has only one unknown and can be solved. The solution for the other equations can then be obtained by working our way back from the last to the first unknown, in a process known as back-substitution. Using the storage strategy described by Eqs. 8.3-8.5, the algorithm can be written in the form shown in Fig. 8.2.

    TDMA(AP,AE,AW,B,X)
      for i = 2 to N
        r = AW(i)/AP(i-1)
        AP(i) = AP(i) - r*AE(i-1)
        B(i) = B(i) + r*B(i-1)
      X(N) = B(N)/AP(N)
      for i = N-1 down to 1
        X(i) = (B(i) + AE(i)*X(i+1))/AP(i)

    Figure 8.2: Tri-Diagonal Matrix Algorithm
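The figure transcribes directly into executable form. The sketch below uses 0-based Python indexing and the coefficient convention AP(i)X(i) = AE(i)X(i+1) + AW(i)X(i-1) + B(i); the example system is invented so that the exact solution is all ones:

```python
def tdma(ap, ae, aw, b):
    """Thomas algorithm for AP(i)x(i) = AE(i)x(i+1) + AW(i)x(i-1) + B(i)."""
    ap, b = list(ap), list(b)          # work on copies
    n = len(ap)
    for i in range(1, n):              # forward elimination
        r = aw[i] / ap[i - 1]
        ap[i] -= r * ae[i - 1]
        b[i] += r * b[i - 1]
    x = [0.0] * n
    x[n - 1] = b[n - 1] / ap[n - 1]
    for i in range(n - 2, -1, -1):     # back-substitution
        x[i] = (b[i] + ae[i] * x[i + 1]) / ap[i]
    return x

# 3-cell example whose exact solution is approximately [1, 1, 1]
print(tdma([2.0, 2.0, 2.0], [1.0, 1.0, 0.0], [0.0, 1.0, 1.0], [1.0, 0.0, 1.0]))
```

Both loops are O(N): one pass down for elimination, one pass up for back-substitution.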

8.4 Line by Line TDMA

Linear systems arising from two- or three-dimensional structured grids also have a regular fill pattern. Unfortunately, unlike the one-dimensional problem, there are no simple methods analogous to the TDMA that we saw in the previous section for the direct solution of such systems. However, using the TDMA we can devise iterative methods. Consider, for example, the two-dimensional structured grid shown in Fig. 8.3. We will assume that the coefficient matrix is stored using the strategy discussed in Sec. 8.2, i.e., in 5 two-dimensional arrays AP, AE, AW, AN and AS. The equation at the point (I, J) is then given by

    AP(I,J) X(I,J) = AE(I,J) X(I+1,J) + AW(I,J) X(I-1,J)
                   + AN(I,J) X(I,J+1) + AS(I,J) X(I,J-1) + B(I,J)    (8.9)

We also assume that we have a guess for the solution everywhere. We rewrite this equation as

    AP(I,J) X(I,J) = AE(I,J) X(I+1,J) + AW(I,J) X(I-1,J)
                   + AN(I,J) X*(I,J+1) + AS(I,J) X*(I,J-1) + B(I,J)    (8.10)

where the superscript * denotes guessed values. The right hand side of Eq. 8.10 is thus considered to be known, and only X(I,J), X(I+1,J) and X(I-1,J) are considered to be unknowns. Writing similar equations for all the cells (i, J), i = 1 ... NI (shown by the dotted oval in Fig. 8.3), we obtain a system which has the same form as the tridiagonal system, which we can then solve using the TDMA. This gives us values for X(i, J) for all cells i along the j = J line. However, this is not the exact solution but only an approximate one, since we had to guess the values of X(i, J+1) and X(i, J-1) in building up the tri-diagonal system.

[Figure 8.3: Structured Grid for Line by Line TDMA]

We can now apply the same process along the next line.

In doing so, we will use the recently computed values wherever available. As noted above, these are only approximate values, but hopefully they are better approximations than our initial guess. Once we have applied the process for all the j lines, we will have updated the value of each X(i,j). The sequence of operations whereby all the values of X(i,j) are updated once is referred to as a sweep. The overall procedure can be described with the pseudocode shown in Fig. 8.4.

    for j = 1 to NJ
      for i = 1 to NI
        AP1D(i) = AP(i,j)
        AE1D(i) = AE(i,j)
        AW1D(i) = AW(i,j)
        B1D(i)  = B(i,j)
        if (j > 1)  B1D(i) = B1D(i) + AS(i,j)*X(i,j-1)
        if (j < NJ) B1D(i) = B1D(i) + AN(i,j)*X(i,j+1)
      TDMA(AP1D,AE1D,AW1D,B1D,X1D)
      for i = 1 to NI
        X(i,j) = X1D(i)

    Figure 8.4: Line By Line TDMA Algorithm

As in all iterative methods, we will try to improve the solution by repeating the process. If we repeat the process in Fig. 8.4, we notice that visiting the j lines in sequence from 1 to NJ means that all cells have seen the influence of the boundary at y = 0, but only the cells at j = NJ have seen the influence of the y = 1 boundary. If we apply the algorithm of Fig. 8.4 again, this time the cells at j = NJ - 1 will see this influence (since they will use the values obtained at j = NJ during the present update), but it will take several repetitions before cells near j = 1 see any influence of the boundary at y = 1. One easy way of removing this bias is to visit the j lines in the reverse order during the second update, i.e., first visiting all j lines in increasing order and then in decreasing order. With this symmetric visiting sequence we ensure that all cells in the domain see the influence of both boundaries as soon as possible. Thus, for the line by line TDMA, an iteration may consist of two sweeps; an iteration is the sequence that is repeated multiple times.

Of course, it is not mandatory to apply the TDMA for constant j lines only. We could apply the same process along a constant i line, considering only the j direction neighbours implicitly.
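The algorithm of Fig. 8.4 can be sketched end to end on a toy problem (everything below is invented for illustration: a 3 × 3 grid with a_P = 4, neighbour coefficients of 1 where a neighbour exists, and B chosen so that the exact solution is X = 1 everywhere):

```python
# Line-by-line TDMA on a 3x3 structured system (0-based indexing), with
# symmetric j-line sweeps as described in the text.
def tdma(ap, ae, aw, b):
    ap, b = list(ap), list(b)
    n = len(ap)
    for i in range(1, n):
        r = aw[i] / ap[i - 1]
        ap[i] -= r * ae[i - 1]
        b[i] += r * b[i - 1]
    x = [0.0] * n
    x[-1] = b[-1] / ap[-1]
    for i in range(n - 2, -1, -1):
        x[i] = (b[i] + ae[i] * x[i + 1]) / ap[i]
    return x

NI = NJ = 3
AP = [[4.0] * NJ for _ in range(NI)]
AE = [[1.0 if i < NI - 1 else 0.0 for _ in range(NJ)] for i in range(NI)]
AW = [[1.0 if i > 0 else 0.0 for _ in range(NJ)] for i in range(NI)]
AN = [[1.0 if j < NJ - 1 else 0.0 for j in range(NJ)] for _ in range(NI)]
AS = [[1.0 if j > 0 else 0.0 for j in range(NJ)] for _ in range(NI)]
# choose B so that X = 1 everywhere satisfies the equations exactly
B = [[AP[i][j] - AE[i][j] - AW[i][j] - AN[i][j] - AS[i][j]
      for j in range(NJ)] for i in range(NI)]
X = [[0.0] * NJ for _ in range(NI)]

for _ in range(20):  # each iteration = forward + reverse sweep over j lines
    for j in list(range(NJ)) + list(range(NJ - 1, -1, -1)):
        b1d = [B[i][j]
               + (AS[i][j] * X[i][j - 1] if j > 0 else 0.0)
               + (AN[i][j] * X[i][j + 1] if j < NJ - 1 else 0.0)
               for i in range(NI)]
        x1d = tdma([AP[i][j] for i in range(NI)],
                   [AE[i][j] for i in range(NI)],
                   [AW[i][j] for i in range(NI)], b1d)
        for i in range(NI):
            X[i][j] = x1d[i]

err = max(abs(X[i][j] - 1.0) for i in range(NI) for j in range(NJ))
print(err)  # iterates converge toward the exact solution X = 1
```

Each j line is solved implicitly in i while the north/south terms are lagged, exactly as in Eq. 8.10.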

Depending on the coordinate direction along which information transfer is most critical, sweeping by visiting i or j lines might be the most optimal. In general cases, however, it is useful to combine both. Thus one iteration of the line by line TDMA would consist of visiting, say, each of the i lines first in increasing order and then in decreasing order, followed by similar symmetric sweeps along the j lines (and k lines in three-dimensional problems).

8.5 Jacobi and Gauss-Seidel Methods

For matrices resulting from unstructured grids, like the one shown in Fig. 8.1, no line-by-line procedure is possible. Instead, we must use more general update methods. The simplest of these are the Jacobi and Gauss-Seidel methods. In both cases, the cells are visited in sequence, and at each cell i the value of x_i is updated by writing its equation as

    A(i,i) x_i = b_i - ∑_nb A(i,nb) x_nb    (8.11)

where the summation is over all the neighbors of cell i. The two methods differ in the values of the neighboring x_i that are employed. In the case of the Jacobi method, the "old" values of x are used for all the neighbors, whereas in the Gauss-Seidel method, the latest values of x are used at all the neighbours that have already been updated during the current sweep, and old values are used for the neighbours that are yet to be visited. In general, the Gauss-Seidel method has better convergence characteristics than the Jacobi method and is therefore most widely used, although the latter is sometimes used on vector and parallel hardware. As in the case of the line by line TDMA, the order of visiting the cells is reversed for the next sweep so as to avoid directional bias. Using the storage scheme outlined in Sec. 8.2, one iteration of the Gauss-Seidel method can be expressed in the pseudo-code shown in Fig. 8.5.

8.6 General Iterative Methods

The general principle in all iterative methods is that, given an approximate solution x^k, we seek to obtain a better approximation x^{k+1}, and then repeat the whole process. We define the error at any given iteration as

    e^k = x - x^k    (8.12)

where x is the exact solution. Of course, since we don't know the exact solution, we also don't know the error at any iteration. However, it is possible to check how well any given solution satisfies the equation by examining the residual, defined as

    r^k = b - A x^k    (8.13)

As the residual approaches zero, the solution approaches the exact solution. We can determine the relation between the error and the residual using Eqs. 8.12 and 8.13 as

    A e^k = r^k    (8.14)

    Gauss_Seidel(AP,CINDEX,NBINDEX,COEFF,B,X)
      for sweep = 1 to 2
        if (sweep = 1)
          IBEG = 1; IEND = N; ISTEP = 1
        else
          IBEG = N; IEND = 1; ISTEP = -1
        for i = IBEG to IEND stepping by ISTEP
          r = B(i)
          for n = CINDEX(i) to CINDEX(i+1)-1
            j = NBINDEX(n)
            r = r + COEFF(n)*X(j)
          X(i) = r/AP(i)

    Figure 8.5: Symmetric Gauss-Seidel Sweep

Here COEFF stores the neighbor coefficients a_nb, so the update at cell i is X(i) = (B(i) + ∑_nb a_nb X(nb))/AP(i).
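An executable version of Fig. 8.5 (a sketch; 0-based indexing, with a made-up four-cell chain whose exact solution is all ones):

```python
# Symmetric Gauss-Seidel sweeps on the CINDEX/NBINDEX/COEFF storage of
# Sec. 8.2; COEFF holds the neighbour coefficients a_nb, so the update is
# x_i = (b_i + sum of a_nb * x_nb) / a_P.
def gauss_seidel(ap, cindex, nbindex, coeff, b, x, sweeps=1):
    n = len(ap)
    for _ in range(sweeps):
        for order in (range(n), range(n - 1, -1, -1)):  # forward, then backward
            for i in order:
                r = b[i]
                for k in range(cindex[i], cindex[i + 1]):
                    r += coeff[k] * x[nbindex[k]]
                x[i] = r / ap[i]
    return x

# 1D chain of 4 cells: a_P = 2, neighbour coefficients 1, and b chosen so
# that the exact solution is x = [1, 1, 1, 1].
ap = [2.0, 2.0, 2.0, 2.0]
cindex = [0, 1, 3, 5, 6]
nbindex = [1, 0, 2, 1, 3, 2]
coeff = [1.0] * 6
b = [1.0, 0.0, 0.0, 1.0]
x = gauss_seidel(ap, cindex, nbindex, coeff, b, [0.0] * 4, sweeps=30)
print(x)  # approaches [1, 1, 1, 1]
```

Because the cell loop reuses freshly updated values of x, this is Gauss-Seidel; replacing x in place with a copy of the previous sweep's values would give the Jacobi method instead.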

Thus the error satisfies the same set of equations as the solution, with the residual r replacing the source vector b. This is an important property that we will make use of in devising multigrid schemes.

Using Eq. 8.14 and the definition of the error, we obtain

    x = x^k + A⁻¹ r^k    (8.15)

Most iterative methods are based on approximating this expression as

    x^{k+1} = x^k + B r^k    (8.16)

where B is some approximation of the inverse of A that can be computed inexpensively. For example, it can be shown that the Jacobi method is obtained when B = D⁻¹, where D is the diagonal part of the matrix A.

Another way of expressing any iterative scheme, which we will find useful in later analysis, is

    x^{k+1} = P x^k + g    (8.17)

where P is known as the iteration matrix. Let D, -L and -U denote the diagonal, strictly lower and strictly upper triangular parts of A, obtained by splitting it as

    A = D - L - U    (8.18)

For the Jacobi method the iteration matrix is given by P = D⁻¹(L + U), while for the Gauss-Seidel method it is P = (D - L)⁻¹U.

8.7 Convergence of Jacobi and Gauss-Seidel Methods

Although the Jacobi and Gauss-Seidel methods are very easy to implement and are applicable for matrices with arbitrary fill patterns, their usefulness is limited by their slow convergence characteristics. The usual observation is that residuals drop quickly during the first few iterations, but afterwards the iterations "stall". This is specially pronounced for large matrices.

To demonstrate this behavior, let us consider the following 1D Poisson equation over a domain of length L:

    ∂²φ/∂x² = s(x)    (8.19)

with specified Dirichlet boundary conditions φ(0) = φ_0 and φ(L) = φ_L. Recall that this equation results from our 1D scalar transport equation in the pure diffusion limit if we choose a diffusion coefficient of unity. If we discretize this equation on a grid of N equispaced control volumes using the method outlined in Chapter 3, we will obtain a linear system of the form

          |  3 -1            | | φ_1     |   | -h s_1 + 2φ_0/h |
          | -1  2 -1         | | φ_2     |   | -h s_2          |
    (1/h) |    ... ... ...   | |  ...    | = |  ...            |    (8.20)
          |      -1  2 -1    | | φ_{N-1} |   | -h s_{N-1}      |
          |           -1   3 | | φ_N     |   | -h s_N + 2φ_L/h |

where h = L/N and h s_i represents the source term integrated over the cell. Another simplification we make is to choose φ_0 = φ_L = 0, as well as s(x) = 0. Thus the exact solution to this problem is simply φ(x) = 0, and the error at any iteration is then simply the current value of the variable φ.

We can now study the behavior of iterative schemes by starting with arbitrary initial guesses. In order to distinguish the convergence characteristics for different error profiles, we will solve the problem with initial guesses given by

    φ_i = sin( kπ x_i / L )    (8.21)

Equation 8.21 represents Fourier modes, and k is known as the wavenumber. Figure 8.6 shows these modes over the domain for a few values of k. Note that for low values of k we get "smooth" profiles, while for higher wavenumbers the profiles are very oscillatory.

[Figure 8.6: Fourier Modes on N = 64 grid, for k = 1, 2 and 8]

Starting with these Fourier modes, we apply the Gauss-Seidel method for 50 iterations on a grid with N = 64. To judge how the solution is converging, we plot the maximum φ_i (which is also the maximum error). The results are shown in Fig. 8.7(a). We see that when we start with an initial guess corresponding to k = 1, the maximum error has reduced by less than 20% even after 50 iterations, but with a guess of the k = 16 Fourier mode, the error reduces by over 99% after only 10 iterations.

In general cases, our initial guess will of course contain more than one Fourier mode. To see what the scheme does in such cases, we start with an initial guess consisting of the k = 2, 8 and 16 modes, i.e.,

    φ_i = (1/3) [ sin(2π x_i/L) + sin(8π x_i/L) + sin(16π x_i/L) ]    (8.22)

In this case we see from Fig. 8.7(b) that the error drops rapidly at first but then stalls.

[Figure 8.7: Convergence of Gauss-Seidel method on N = 64 grid for (a) initial guesses consisting of single wavenumbers, (b) initial guess consisting of multiple modes]

Another way of looking at the effect of the iterative scheme is to plot the solution after 10 iterations, as shown in Fig. 8.8. We see that the amplitude is not significantly reduced when the initial guess consists of low wavenumber modes, but it is greatly reduced for the high wavenumber modes. Interesting results are obtained for the mixed mode initial guess given by Eq. 8.22: the oscillatory component has vanished, leaving a smooth mode error profile.

These numerical experiments begin to tell us the reasons behind the typical behavior of the Gauss-Seidel scheme. It is very effective at reducing high wavenumber errors. This accounts for the rapid drop in residuals at the beginning when one starts with an arbitrary initial guess. Once these oscillatory components have been removed, we are left with smooth error profiles on which the scheme is not very effective, and thus convergence stalls.

Using our sample problem, we can also verify another commonly observed shortcoming of the Gauss-Seidel iterative scheme, viz., that the convergence deteriorates as the grid is refined. Retaining the same form of initial guess and using k = 2, we solve the problem on a grid that is twice as fine, i.e., N = 128. The resulting convergence plot, shown in Fig. 8.9, indicates that the convergence becomes even worse. On the finer grid we can resolve more modes, and again the higher ones among those converge quickly, but the lower modes appear more "smooth" on the finer grid and hence converge more slowly. We also note from Fig. 8.9 that the converse is also true: on a coarse grid with N = 32, the convergence is quicker for the same mode. It appears that the same error profile behaves as a less smooth profile when solved on a coarser grid.
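The smoothing experiment above is easy to replicate. The sketch below applies plain Gauss-Seidel sweeps to the system of Eq. 8.20 with φ_0 = φ_L = 0 and s = 0 (so the iterate itself is the error), starting from single Fourier modes; the low-wavenumber error barely moves while the high-wavenumber error is almost eliminated:

```python
# Gauss-Seidel smoothing of single Fourier modes on the N-cell system of
# Eq. 8.20 (boundary cells carry the factor 3; h cancels out when b = 0).
import math

def gs_max_error(N, k, iters):
    phi = [math.sin(k * math.pi * (i + 0.5) / N) for i in range(N)]
    for _ in range(iters):
        for i in range(N):
            w = phi[i - 1] if i > 0 else 0.0
            e = phi[i + 1] if i < N - 1 else 0.0
            ap = 2.0 if 0 < i < N - 1 else 3.0
            phi[i] = (w + e) / ap
    return max(abs(v) for v in phi)

low = gs_max_error(64, 1, 10)    # smooth k = 1 mode: decays very slowly
high = gs_max_error(64, 16, 10)  # oscillatory k = 16 mode: nearly gone
print(low, high)
```

Ten iterations leave most of the k = 1 error intact but remove well over 90% of the k = 16 error, in line with Fig. 8.7(a).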

[Figure 8.8: Initial and final solution after 10 Gauss-Seidel iterations on N = 64 grid for (a) initial guess consisting of k = 2, (b) initial guess consisting of k = 8, (c) initial guess consisting of multiple modes]

Some of the methods for accelerating the convergence of iterative solvers are based upon this property.

8.8 Analysis of Iterative Methods

For simple linear systems like the one we used in the examples above, and for simple iterative schemes like Jacobi, it is possible to understand the reasons for the convergence behaviour analytically. We will not go into details here, but briefly describe some of the important results.

Using the iterative scheme expressed in the form of Eq. 8.17, we can show that the error at any iteration n is related to the initial error by the following expression:

    e^n = P^n e^0    (8.23)

In order for the error to reduce with iterations, the spectral radius of the iteration matrix (which is the largest absolute eigenvalue of the matrix) must be less than one, and the rate of convergence depends on how small this spectral radius is.

The eigenvalues of the Jacobi iteration matrix are closely related to the eigenvalues of the matrix A, and the two matrices have the same eigenvectors. Now, if we choose the linear system to be

          |  2 -1            | | φ_1     |   | -h s_1 + φ_0/h |
          | -1  2 -1         | | φ_2     |   | -h s_2         |
    (1/h) |    ... ... ...   | |  ...    | = |  ...           |    (8.24)
          |      -1  2 -1    | | φ_{N-1} |   | -h s_{N-1}     |
          |           -1   2 | | φ_N     |   | -h s_N + φ_L/h |

This is the system one obtains using a finite difference discretization of Eq. 8.19 on an equispaced mesh consisting of N interior nodes, and is only slightly different from the system we used in the previous section; the reason for choosing this form instead of Eq. 8.20 is that it is much easier to study analytically. Both systems have similar convergence characteristics.

[Figure 8.9: Convergence of Gauss-Seidel method on different sized grids (N = 32, 64, 128) for initial guess corresponding to the k = 2 mode]

For this system we can find analytical expressions for the eigenvalues of the corresponding Jacobi iteration matrix. They are given by

    λ_k = 1 - 2 sin²( kπ / 2N ) ,   k = 1, 2, ..., N    (8.25)

The j-th component of the eigenvector corresponding to the eigenvalue λ_k is given by

    w_k(j) = sin( jkπ / N ) ,   j = 1, 2, ..., N    (8.26)

The eigenvectors of the Jacobi iteration matrix thus turn out to be the same as the Fourier modes we used in the previous section as starting guesses. Now, if our initial error is decomposed into Fourier modes, we can write it in terms of these eigenvectors as

    e^0 = ∑_k α_k w_k    (8.27)

Substituting this expression in Eq. 8.23 and using the definition of an eigenvalue, we obtain

    e^n = ∑_k α_k λ_k^n w_k    (8.28)

We see from this expression that the k-th mode of the initial error is reduced by a factor λ_k^n after n iterations. From Eq. 8.25 we note that the largest eigenvalue occurs for k = 1, and therefore it is easy to see why the lower modes are the slowest to converge. Also note that the magnitude of the largest eigenvalue increases as N increases; this indicates the reason
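The eigenpair relation of Eqs. 8.25-8.26 can be verified numerically. In the sketch below the unknowns are taken at j = 1, ..., N-1 with φ = 0 at j = 0 and j = N (the indexing assumption under which the relation is exact); the Jacobi update of a Fourier mode returns the same mode scaled by λ_k:

```python
# Check that the Jacobi row action on the system of Eq. 8.24 maps the mode
# w_k(j) = sin(j*k*pi/N) to lambda_k * w_k, with lambda_k = 1 - 2 sin^2(k*pi/2N).
import math

N = 16
for k in (1, 5, 12):
    w = [math.sin(j * k * math.pi / N) for j in range(N + 1)]  # w[0] = w[N] = 0
    lam = 1.0 - 2.0 * math.sin(k * math.pi / (2 * N)) ** 2
    for j in range(1, N):
        jacobi = 0.5 * (w[j - 1] + w[j + 1])   # row action of P = D^{-1}(L + U)
        assert abs(jacobi - lam * w[j]) < 1e-12
print("eigenpair relation verified")
```

The identity behind the check is sin((j-1)θ) + sin((j+1)θ) = 2 sin(jθ) cos(θ), with cos(kπ/N) = 1 - 2 sin²(kπ/2N).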

behind our other observation, that convergence on coarser grids is better than that on finer grids for the same mode.

Although we have analyzed the convergence behaviour of a very simple scheme on a simple matrix, these conclusions hold true in general. The eigenvalues of the matrix A, as well as those of the iteration matrix, play an important role in determining the convergence characteristics. Our insistence on maintaining diagonal dominance and positive coefficients is motivated by the requirement of keeping the spectral radius below unity. Practices such as linearizing source terms and underrelaxation, which we need in order to handle non-linearities, also help in reducing the spectral radius. However, in many cases, e.g., the pressure correction equation, we must handle stiff linear systems, i.e., those with spectral radius close to 1. Many iterative schemes have been devised to handle stiff systems. They usually involve some form of preconditioning to improve the eigenvalues of the iteration matrix. In general, these methods are a lot more complicated to implement than the simple Jacobi and Gauss-Seidel methods we have studied so far. We will not discuss any of them here, but rather look at another strategy, which is based on the improved convergence characteristics of the simple iterative schemes on coarser meshes.

8.9 Multigrid Methods

We saw in the previous section that the reason for the slow convergence of the Gauss-Seidel method is that it is only effective at removing high frequency errors. We also observed that low frequency modes appear more oscillatory on coarser grids, and that the Gauss-Seidel iterations are then more effective. These observations suggest that we could accelerate the convergence of these iterative linear solvers if we could somehow involve coarser grids. The two main questions we need to answer are (1) what problem should be solved on the coarse grid, and (2) how should we make use of the coarse grid information in the fine grid solution.

Certain constraints can be easily identified. We know that in general the accuracy of the solution depends on the discretization; therefore we would require that our final solution be determined only by the finest grid that we are employing. This means that the coarse grids can only provide us with corrections or guesses to the fine grid solution, and as the fine grid residuals approach zero (i.e., as the fine grid solution approaches the exact answer) the influence of any coarse levels should also approach zero. One consequence of this requirement is that it is enough to solve only an approximate problem at the coarse levels, since its solution will never govern the final accuracy we achieve.

One strategy for involving coarse levels might be to solve the original differential equation on a coarse grid. Once we have a converged solution on this coarse grid, we could interpolate it to a finer grid. Of course, the interpolated solution will not in general satisfy the discrete equations at the fine level, but it would probably be a better approximation than an arbitrary initial guess. We can repeat the process recursively on ever finer grids till we reach the desired grid. The disadvantage with this strategy, known as nested iteration, is that it solves the problem fully on all coarse grids even though we are only interested in the finest grid solution.

First of all.9. More importantly however.9. 8.14). the physical problem may not be well resolved on the coarse grid and thus the solution on coarse grid may not always be a good guess for the ﬁne grid solution. i. Of course. known as coarse grid correction is based on the error equation (Eq. We expect that on the coarser grid the smooth error will be more oscillatory and therefore convergence will be better. Instead we could try to estimate it by solving Eq.2 Geometric Multigrid Having developed a general idea of how coarse grids might be used to accelerate convergence.29 on a coarse grid and how exactly we intend to use the errors estimated from the coarse levels but we can already see that the strategy outlined above has the desired properties. note that if the ﬁne grid solution is exact. the error in the estimation of the ﬁne grid error) will have smooth components. 8. At this stage we can simply solve the linear analytically.29 on a coarser grid. 8. We still have to specify exactly what we mean by solving Eq. To distinguish between the matrices and 181 . although using Gauss-Seidel iteration usually sufﬁces as well. It also does not make use of any guess we might have for the ﬁnest level grid from previous outer iterations. Although we don’t know the error we do know that the error at this stage is likely to be smooth and that further iterations will not reduce it quickly.29 with the residual r0 b Ax0 and an initial guess of zero error. Thus we are guaranteed that the ﬁnal solution is only determined by the ﬁnest level discretization. 8.e.. Smooth error modes may arise only on the ﬁner grids and then the convergence will still be limited by them. let us now look at some details. We can thus apply the same strategy for its solution that we used for the ﬁnest grid.29 as a linear problem in its own right. 8. solve for its error on an even coarser mesh.e.1 Coarse Grid Correction A more useful strategy. i. In addition. 
This can be continued recursively on successively coarser meshes till we reach one with a few number (2-4) of cells. 8. When we have obtained a satisfactory solution for e we can use it to correct our ﬁne grid solution. ¦ ª 8..1 with an initial guess x 0 is identical to solving Eq. We can now view Eq.solution. 8. we will achieve convergence right away and not waste any time on further coarse level iterations. Thus any approximations we make in the coarse level problem only effect the convergence rate and not the ﬁnal ﬁnest grid solution. 8. Another useful characteristic of this approach is that we use coarse level to only estimate ﬁne level errors. the residual will be zero and thus the solution of the coarse level equation will also be zero. since we start with a zero initial guess for the coarse level error. even on the coarse level the error (which of course now is the error in Eq. Now suppose that after some iterations on the ﬁnest grid we have a solution x.29) Note also that solving Eq. Recall that the error e satisﬁes the same set of of equations as our solution if we replace the source vector by the residual: Ae ¦ r (8.29.
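The mechanics of the error equation are easy to check on a tiny example: for any approximate solution x0, solving A e = r exactly with r = b − A x0 and adding the correction recovers the exact solution. A minimal sketch with an illustrative 2×2 system (the values are arbitrary and the helper names are ours):

```python
def solve2(A, b):
    """Direct solution of a 2x2 system by Cramer's rule."""
    det = A[0][0] * A[1][1] - A[0][1] * A[1][0]
    return [(b[0] * A[1][1] - A[0][1] * b[1]) / det,
            (A[0][0] * b[1] - b[0] * A[1][0]) / det]

def residual2(A, x, b):
    """r = b - A x."""
    return [b[0] - A[0][0] * x[0] - A[0][1] * x[1],
            b[1] - A[1][0] * x[0] - A[1][1] * x[1]]

A = [[4.0, 1.0], [1.0, 3.0]]
b = [9.0, 7.0]
x0 = [1.0, 1.0]                    # some rough approximate solution
r = residual2(A, x0, b)            # residual of the approximate solution
e = solve2(A, r)                   # error equation A e = r (Eq. 8.29)
x = [x0[0] + e[0], x0[1] + e[1]]   # corrected solution
```

In multigrid, of course, the error equation is solved only approximately (and on a coarser level), so the correction improves the solution rather than finishing it in one step.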

8.9.2 Geometric Multigrid

Having developed a general idea of how coarse grids might be used to accelerate convergence, let us now look at some details. To distinguish between the matrices and vectors at different grid levels, we shall employ a parenthesized superscript, starting with (0) for the finest grid and increasing it as the grid is coarsened; a second superscript, if present, will denote the iteration number. Thus the problem to be solved at level l is given by

    A(l) x(l) = b(l)                                               (8.30)

and the residual at level l after k iterations is

    r(l),k = b(l) − A(l) x(l),k                                    (8.31)

We have already seen how to discretize and iterate on the finest level (l = 0) grid of N cells and compute the residual r(0). We also know that for the coarse levels (l > 0), the unknown x(l) represents the estimate of the error at the l − 1 level, and that the source vector b(l) is somehow to be based on the residual r(l−1). The next step is to define the exact means of doing these coarse grid operations and the intergrid transfers.

The simplest way of obtaining the coarse grid for our sample 1D problem is to merge cells in pairs to obtain a grid of N/2 cells, as shown in Fig. 8.10. The resulting grid is similar to the original grid, with a cell width of 2h.

Figure 8.10: Coarsening for a 1D grid: (a) fine level (l = 0), cells i = 1, …, 8; (b) coarse level (l = 1), cells I = 1, …, 4.

We can now apply our usual discretization procedure to the differential equation (remembering that the unknown is a correction to the fine level unknown) and obtain a linear system of the same form as Eq. 8.20. The coarse level matrix is given by

    A(1) = (1/2h) | -3   1                 |
                  |  1  -2   1             |
                  |      .    .    .       |
                  |           1  -2   1    |
                  |                1  -3   |                       (8.32)

We also note from Fig. 8.10 that the cell centroid of a coarse level cell lies midway between the centroids of its parent fine level cells. The source term for a coarse level cell can thus be obtained by averaging the residuals at the parent cells:

    b(l+1)_i = ( r(l)_{2i−1} + r(l)_{2i} ) / 2                     (8.33)

This operation of transferring the residual from a fine level to the coarse level is known as restriction and is denoted by the operator I_l^{l+1}, defined by

    b(l+1) = I_l^{l+1} r(l)                                        (8.34)

For our equispaced grid we used averaging as the restriction operator; in the general case we will need to use some form of interpolation operator. We already know that the starting guess for the coarse level unknowns, x(l+1),0, is zero. Thus we now have all the information needed to iterate the coarse level system and obtain an estimate for the error.

After doing some iterations on the coarse grid (and perhaps recursively repeating the process at further coarse levels), we would like to make use of the errors calculated at the coarse level to correct the current guess for the solution at the finer level. The process of transferring this correction back to the finer level is known as prolongation and is denoted by the operator I_{l+1}^{l}. The correction of the solution at the fine level using the coarse level solution is written as

    x(l) = x(l) + I_{l+1}^{l} x(l+1)                               (8.35)

The simplest prolongation operator that we can use on our 1D grid (Fig. 8.10) is to apply the correction from a coarse level cell to both of its parent fine level cells, i.e., to use zeroth order interpolation. A more sophisticated approach is to use linear interpolation; for cell 2 at the fine level, for example, we use

    x(0)_2 = x(0)_2 + ( (3/4) x(1)_1 + (1/4) x(1)_2 )              (8.36)

The strategy outlined in this section is known as geometric multigrid, because we made use of the grid geometry and the differential equation at the coarse levels in order to arrive at the linear system to be solved. In the one-dimensional case, the coarse level cells had the same type of geometry as those at the fine level, and this made the discretization process straightforward. For multidimensional cases, however, the cell shapes obtained by agglomerating fine level cells can be very different. As shown in Fig. 8.11, the coarse level cells may not even be convex. With such nested grid hierarchies it may not be feasible to discretize the original differential equation on the coarse level cells.
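The 1D transfer operators just described can be sketched in a few lines of Python (0-based indexing here, whereas the text uses 1-based; the names, and the clamped end-cell treatment in the linear variant, are our own simplifications):

```python
def restrict(r):
    """Averaging restriction (Eq. 8.33): coarse cell i receives the mean of
    its two parent fine cells 2i and 2i+1."""
    return [(r[2 * i] + r[2 * i + 1]) / 2.0 for i in range(len(r) // 2)]

def prolong_zeroth(e):
    """Zeroth order prolongation: each coarse correction is applied
    unchanged to both parent fine cells."""
    out = []
    for v in e:
        out.extend([v, v])
    return out

def prolong_linear(e):
    """Linear-interpolation prolongation in the spirit of Eq. 8.36: fine cell
    centroids sit a quarter cell from their coarse centroid, giving 3/4-1/4
    weights; the outermost cells fall back to the nearest coarse value."""
    n, out = len(e), []
    for I in range(n):
        left = e[I - 1] if I > 0 else e[I]
        right = e[I + 1] if I < n - 1 else e[I]
        out.append(0.75 * e[I] + 0.25 * left)    # left child of coarse cell I
        out.append(0.75 * e[I] + 0.25 * right)   # right child of coarse cell I
    return out
```

For a linearly varying coarse correction, the linear prolongation reproduces the linear profile at interior fine cells, which the zeroth order operator cannot do.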

Figure 8.11: Nested coarsening for a 2D grid: (a) l = 0, (b) l = 1, (c) l = 2.

Figure 8.12: Independent coarsening for a 2D grid: (a) l = 0, (b) l = 1, (c) l = 2.

One approach to get around this problem is to use a sequence of non-nested, independent grids, as shown in Fig. 8.12. In this case there are no common faces between any two grids. The main problem with this approach is that the prolongation and restriction operators become very complicated, since they involve multidimensional interpolation. Also, in some instances we may not have a formal differential equation that can be discretized to obtain the coarse level system; the pressure correction equation that we derived algebraically, for example. For these reasons, it is useful to devise methods of obtaining coarse grid linear systems that do not depend on the geometry or the differential equation. We will look at such algebraic multigrid methods in the next section.

8.9.3 Algebraic Multigrid

The general principles behind algebraic multigrid methods are the same as those for the geometric multigrid method we saw in the last section. The main difference is that the coarse level system is derived purely from the fine level system, without reference to the underlying grid geometry or the physical principle that led to the fine level system. Instead of thinking in terms of agglomerating two or more fine level cells to obtain the geometry of a coarse level cell, we speak of agglomerating the equations at those cells to directly obtain the linear equation corresponding to that coarse cell. Likewise, the multigrid principle is that the errors at level l are to be estimated from the unknowns at level l + 1.

For the moment, let us consider the 1D grid and the coarse grid levels that we used in the geometric multigrid section above; for instance, cells 1 and 2 at level 0 are combined to obtain the equation for cell 1 at level 1, as shown in Fig. 8.10. We will see shortly how to select the equations to be agglomerated in the general case. Let i and i + 1 be the indices of the fine level equations that we will agglomerate to produce the coarse level equation with index I. Writing out the error equation (Eq. 8.14) for indices i and i + 1, we have

    A(0)_{i,i−1} e(0)_{i−1} + A(0)_{i,i} e(0)_{i} + A(0)_{i,i+1} e(0)_{i+1} = r(0)_{i}          (8.37)

    A(0)_{i+1,i} e(0)_{i} + A(0)_{i+1,i+1} e(0)_{i+1} + A(0)_{i+1,i+2} e(0)_{i+2} = r(0)_{i+1}  (8.38)

We assume that the error for both the parent cells i and i + 1 is the same and is obtained from x(1)_{I}, i.e., e(0)_{i} = e(0)_{i+1} = x(1)_{I}; similarly, e(0)_{i−1} = x(1)_{I−1} and e(0)_{i+2} = x(1)_{I+1}. Substituting these relations and adding Eqs. 8.37 and 8.38 gives us the required coarse level equation for index I:

    A(0)_{i,i−1} x(1)_{I−1} + ( A(0)_{i,i} + A(0)_{i,i+1} + A(0)_{i+1,i} + A(0)_{i+1,i+1} ) x(1)_{I}
        + A(0)_{i+1,i+2} x(1)_{I+1} = r(0)_{i} + r(0)_{i+1}                                     (8.39)

Thus we see that the coefficients for the coarse level matrix have been obtained by summing up coefficients of the fine level matrix, without any use of the geometry or the differential equation. The source term for the coarse level cells turns out to be the sum of the residuals at the constituent fine level cells; this is equivalent to the use of addition as the restriction operator. Our derivation above implies zeroth order interpolation as the prolongation operator.
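The coefficient summing that produced Eq. 8.39 is mechanical, so it can be written once for arbitrary groupings. A small sketch, assuming the matrix is held as a dictionary of coefficients (a storage format chosen here purely for clarity; the names are ours):

```python
def agglomerate(A_fine, groups):
    """Coarse level matrix formed by summing fine level coefficients.
    A_fine: {(i, j): a_ij}; groups: list of index lists, one per coarse cell."""
    owner = {i: I for I, members in enumerate(groups) for i in members}
    A_coarse = {}
    for (i, j), a in A_fine.items():
        key = (owner[i], owner[j])
        A_coarse[key] = A_coarse.get(key, 0.0) + a
    return A_coarse
```

Applied to a 4-cell tridiagonal fine system with pairwise groups, this reproduces the structure derived above: diagonal and mutual off-diagonal coefficients within a group collapse into the coarse diagonal, and only the links crossing group boundaries survive as coarse off-diagonals.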

8.9.4 Agglomeration Strategies

In the 1D case that we have seen so far, agglomeration was a simple matter of combining the equations at cells 2I − 1 and 2I to obtain the coarse level system of equations. For linear systems resulting from two- or three-dimensional structured grids, the same idea can be applied in one or more directions simultaneously, combining groups of equations at a time. Besides simplifying the book-keeping, this practice has the additional advantage of maintaining the penta- or septa-diagonal form of the linear system. This permits the use of relaxation methods such as line-by-line TDMA on the coarse level systems as well as the fine level systems.

For unstructured grids, however, we need to devise more general agglomeration criteria. One useful practice is to try to combine cells that have the largest mutual coefficients. This creates coarse level grids³ that allow the optimal transfer of boundary information to interior regions and thus accelerates convergence. To implement such a coarsening procedure, we associate a coarse index with each fine level cell and initialize it to zero; we also initialize a coarse level cell counter, C = 1. We then visit the cells (or equations) in sequence. If a cell has not been grouped (i.e., its coarse index is 0), we group it and n of its neighbours for which the coefficients A_ij are the largest (i.e., we assign them the coarse level index C) and increment C by 1. Best multigrid performance is usually observed for n = 2, i.e., for a coarse level grid that consists of roughly half the number of cells of the finest level. If such a division is continued till we have just 2 or 3 cells at the coarsest level, the total memory required for storing all the coarse levels is roughly equal to that required for the finest level.

We have already seen that the coefficients of the coarse level matrix are obtained by summing up the appropriate coefficients of the fine level matrix. Proceeding in the same manner that we used to derive Eq. 8.39, we can show that

    A(l+1)_IJ = Σ_{i ∈ G_I} Σ_{j ∈ G_J} A(l)_ij                    (8.40)

where G_I denotes the set of fine level cells that belong to the coarse level cell I. Likewise, the source vector for the coarse level equation is obtained by summing up the residuals of the constituent fine level cells:

    b(l+1)_I = Σ_{i ∈ G_I} r(l)_i                                  (8.41)

The coarse level matrices can be stored using the same storage strategies outlined earlier for the finest level.

The agglomeration strategy outlined above is very effective in accelerating the convergence rate of the linear solver. The linear systems encountered in CFD applications are frequently stiff; this stiffness is a result of a number of factors: large aspect-ratio geometries, large conductivity ratios in conjugate heat transfer problems, disparate grid sizes typical of unstructured meshes, and others.

³ Even though we are not concerned with the actual geometry of the coarse level grids in algebraic multigrid, it is nevertheless quite useful to visualize the effective grids resulting from the coarsening in order to understand the behaviour of the method.
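The visiting-and-grouping procedure described above can be sketched as follows (a simplified greedy version; the dictionary format and names are ours, and coarse index 0 means "not yet grouped" as in the text):

```python
def coarsen_by_coefficient(coeffs, num_cells, n=1):
    """Greedy agglomeration sketch: visit cells in sequence; group each
    ungrouped cell with up to n of its ungrouped neighbours having the
    largest |A_ij|. coeffs: {(i, j): a_ij} for the off-diagonal links.
    Returns the coarse index assigned to each fine cell."""
    coarse = [0] * num_cells
    C = 1                                   # coarse level cell counter
    nbrs = {}
    for (i, j), a in coeffs.items():
        nbrs.setdefault(i, []).append((abs(a), j))
    for i in range(num_cells):
        if coarse[i] == 0:
            coarse[i] = C
            picked = 0
            for strength, j in sorted(nbrs.get(i, []), reverse=True):
                if coarse[j] == 0 and picked < n:
                    coarse[j] = C           # strongest available neighbour joins
                    picked += 1
            C += 1
    return coarse
```

On a uniform 1D chain this simply recovers pairwise merging; with a strongly anisotropic link the strong neighbour is preferred, which is the behaviour exploited in the examples below.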

Figure 8.13: Conduction in a composite domain: multigrid coarsening (levels 0, 1, 2, 4, 6, and 8 shown).

Consider the situation depicted in Fig. 8.13. A composite domain consists of a low-conductivity outer region surrounding a highly conducting inner square domain. The ratio of conductivities is 1000; a ratio of this order would occur, for example, for a copper block in air. The temperature is specified on the four external walls of the domain. Coefficients resulting from the diffusion term scale as kA/∆x, where A is a typical face area and ∆x is a typical cell length scale. For interface cells in the highly conducting region, adjacent to the low-conducting region, the coefficients to interior cells are approximately three orders of magnitude bigger than the coefficients to cells in the low-conducting region. Convergence of typical linear solvers is inhibited by this large anisotropy of coefficients for cells bordering the interface of the two regions. Dirichlet boundary conditions, which set the level of the temperature field, are only available at the outer boundaries of the domain. Information transfer from the outer boundary to the interior region is inhibited because the large-coefficient terms overwhelm the boundary information transferred through the small-coefficient terms.

An agglomeration strategy which clusters cell neighbours with the largest coefficients results in the coarse levels shown in Fig. 8.13. At the coarsest level, the domain consists of a single cell in the high-conducting region and another in the low-conducting region, and the associated coefficient matrix has coefficients of the same order. The temperature level of the inner region is set primarily by the multigrid corrections at this level, and this results in very fast convergence.

Figure 8.14: Orthotropic conduction: multigrid coarsening (levels 0–5 shown).

Another example is shown in Fig. 8.14. The problem involves orthotropic conduction in a triangular domain, with temperature distributions given on all boundaries [17]. The material has a non-zero conductivity k_ηη in the η direction, aligned at π/3 radians from the horizontal; the conductivity k_ξξ in the direction perpendicular to η is zero. Thus, the primary direction of information transfer is the η direction. Mesh agglomeration based on coefficient size results in coarse-level meshes aligned with η, as shown. Since all faces with normals in the ξ direction have zero coefficients, the coarse level mesh correctly captures the direction of information transfer.

We should note here that coefficient-based coarsening strategies are desirable even on structured grids. Although coarse levels created by agglomerating complete grid lines in each grid direction have the advantage of preserving the grid structure and permitting the use of the same line-by-line relaxation schemes as used on the finest level, they do not always result in optimal multigrid acceleration in general situations, since coefficient anisotropies are not always aligned along lines.

Algebraic multigrid methods used with sequential solution procedures have the advantage that the agglomeration strategy can be equation-specific: the discrete coefficients for the specific governing equation can be used to create coarse mesh levels. Since the coarsening is based on the coefficients of the linearized equations, it also changes appropriately as the solution evolves. This is especially useful for non-linear and/or transient problems. In some applications, however, the mutual coupling between the governing equations is the main cause of convergence degradation. Geometric multigrid methods that solve the coupled problem on a sequence of coarser meshes may offer better performance in such cases.

8.9.5 Cycling Strategies and Implementation Issues

The attractiveness of multigrid methods lies in the fact that significant convergence acceleration can be achieved just by using simple relaxation sweeps on a sequence of coarse meshes. Various strategies can be devised for the manner in which the coarse levels are visited. Multigrid cycles can broadly be classified into two categories: (1) fixed cycles, which repeat a set pattern of coarse grid visits, and (2) flexible cycles, which invoke coarse grid relaxations as and when they are needed. We will look at both of these ideas next. The general principles of these cycling strategies are applicable to both geometric and algebraic multigrid methods, but we shall concentrate on the latter. With algebraic multigrid it is usually desirable to keep coarsening the grid till there are only two or three cells left; for geometric multigrid, the coarsest possible grid size might be dictated by the minimum number of cells required to reasonably discretize the governing equation.

Fixed Cycles

We have seen that the coarse level source vector is computed from the residuals at the previous fine level, and thus it changes every time we visit a coarse level. The coarse level matrix, on the other hand, is only a function of the coefficients of the fine level matrix and thus remains constant. The starting point in all fixed grid methods is therefore to compute all the coarse level coefficients.

The simplest fixed cycle is known as the V-cycle and consists of two legs. In the first leg we start with the finest level and perform a fixed number of relaxation sweeps, then transfer the residuals to the next coarse level and relax on that level, continuing till we reach the coarsest level. After finishing the sweeps on the coarsest level we start the upward leg, i.e., we use the solution from the current level to correct the solution at the next finer level, perform some relaxation sweeps at that finer level, and continue the process till we reach the finest level. This cycle is graphically illustrated in Fig. 8.15(a), where each circle represents relaxation sweeps and the down and up arrows represent restriction and prolongation operators, respectively. The coarsest level establishes an average solution over the entire domain, which is then refined by the relaxation sweeps on the upward leg.

The two parameters defining the V-cycle are the numbers of sweeps performed on the down and up legs, ν1 and ν2 respectively. The two need not be equal; in many applications it is most efficient to have ν1 = 0, i.e., to not do any sweeps on the down leg but to simply keep injecting the residuals till the coarsest level. We should note, however, that since the coarse grids only provide an estimate of the error, it is generally a good idea to always have a non-zero ν2 in order to ensure that the solution satisfies the discrete equation at the current level.

For very stiff systems, the V-cycle may not be sufficient, and more coarse level iterations are called for. This can be achieved by using a µ-cycle, which is best understood through a recursive definition. One can think of the V-cycle as a fixed grid cycle where, if we haven't reached the coarsest grid, the cycle is recursively applied once at each coarse grid. The µ-cycle can then be thought of as a cycle which is recursively applied µ times at each successive level. These cycling strategies can be expressed rather compactly using recursion, and this makes them very easy to implement, especially in a computer language such as C that allows recursion (i.e., a function is allowed to call itself).
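Because the µ-cycle is defined recursively, the number of relaxation visits to each level follows directly from the recursion. A small counter mirroring that definition (the function name is ours):

```python
def level_visits(mu, lmax):
    """Times each level 0..lmax is relaxed in one mu-cycle (mu = 1 is the
    V-cycle): a cycle visits its own level once and is applied mu times
    at the next coarser level."""
    count = [0] * (lmax + 1)

    def cycle(l):
        count[l] += 1
        if l < lmax:
            for _ in range(mu):     # recursive application, mu times
                cycle(l + 1)

    cycle(0)
    return count
```

With µ = 1 each level is touched once per cycle, while µ = 2 visits level l a total of 2^l times, which is why larger µ becomes expensive when the number of levels is large.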

The commonly used version is the one corresponding to µ = 2, which is also known as the W-cycle. It is illustrated in Fig. 8.15(b). Because of the recursiveness, a W-cycle involves a lot of coarse level relaxation sweeps, especially when the number of levels is large. A slight variant of the W-cycle, known as the F-cycle, involves somewhat fewer coarse level sweeps, but still more than the V-cycle. It can be thought of as a fixed cycle where one recursively applies one fixed cycle followed by a V-cycle at each successive level. It is illustrated in Fig. 8.15(c).

Figure 8.15: Relaxation and grid transfer sequences for some fixed cycles: (a) V-cycle, (b) W-cycle, (c) F-cycle, shown for levels l = 0, …, 3.

All the fixed grid cycles we have seen so far can be expressed very compactly in the recursive pseudocode shown in Fig. 8.16. The entire linear solver can then be expressed using the code shown in Fig. 8.17. Here α is the termination criterion, which determines how accurately the system is to be solved, and nmax is the maximum number of fixed multigrid cycles allowed; ||x|| represents some suitable norm of the vector x. Usually the L2 norm (i.e., the RMS value) or the L∞ norm (i.e., the largest value) is employed.

Flexible Cycles

For linear systems that are not very stiff, it is not always economical to use all multigrid levels all the time in a regular pattern. For such cases the use of flexible cycles is preferred. Here, we monitor the residuals after every sweep on a given grid level. If the residual reduction ratio is above a specified rate β, we transfer the problem to the next coarse level and continue sweeps at that level; if the ratio is below β, we continue sweeps at the current level till the termination criterion is met. When we meet the termination criterion at any level, the solution at that level is used to correct the solution at the next finer level, and the process continues.

In practical implementations, a limit is imposed on the number of relaxation sweeps allowed at any level. The flexible cycle can also be described compactly in a recursive form, as shown in Fig. 8.18.

Fixed Cycle(l, cycle type)
{
    Perform ν1 relaxation sweeps on A(l) x(l) = b(l).
    if (l < lmax)
    {
        r(l) = b(l) − A(l) x(l).
        b(l+1) = I_l^{l+1} r(l);  x(l+1) = 0.
        Fixed Cycle(l+1, cycle type).
        if (cycle type = µ-CYCLE)
            repeat Fixed Cycle(l+1, cycle type) (µ − 1) more times.
        else if (cycle type = F-CYCLE)
            Fixed Cycle(l+1, V-CYCLE).
        x(l) = x(l) + I_{l+1}^{l} x(l+1).
    }
    Perform ν2 relaxation sweeps on A(l) x(l) = b(l).
}

Figure 8.16: Fixed Cycle Algorithm

Solve(A(0), x(0), b(0), α, cycle type)
{
    Compute all coarse level matrices.
    Compute r0 = ||b(0) − A(0) x(0)||.
    for n = 1 to nmax
    {
        Fixed Cycle(0, cycle type).
        Compute rn = ||b(0) − A(0) x(0)||.
        if rn / r0 ≤ α, return.
    }
}

Figure 8.17: Driver Algorithm for Fixed Cycle

Flexible Cycle(l)
{
    Compute r0 = ||b(l) − A(l) x(l)||.  Set rold = r0.
    while (total number of sweeps on level l not exhausted)
    {
        Perform ν1 relaxation sweeps on A(l) x(l) = b(l).
        Compute r = ||b(l) − A(l) x(l)||.
        if r / r0 ≤ α, return.
        if (r / rold > β) and (l < lmax)
        {
            r(l) = b(l) − A(l) x(l).
            Compute A(l+1) if this is the first visit to level l+1.
            b(l+1) = I_l^{l+1} r(l);  x(l+1) = 0.
            Flexible Cycle(l+1).
            x(l) = x(l) + I_{l+1}^{l} x(l+1).
        }
        else
            Set rold = r.
    }
}

Figure 8.18: Flexible Cycle Algorithm

8.10 Closure

In this chapter, we examined different approaches to solving the linear equation sets that result from discretization. We saw that the only viable approaches for most fluid flow problems are iterative methods. The line-by-line TDMA algorithm may be used for structured meshes, but is not suitable for unstructured meshes.

Iterative methods like Gauss-Seidel or Jacobi iteration do not have adequate rates of convergence; by the same token, they are also inadequate on fine meshes. We saw that these schemes are good at reducing high-frequency errors, but cannot reduce low-frequency errors. To accelerate these schemes, we examined geometric and algebraic multigrid schemes, which use coarse mesh solutions for the error to correct the fine mesh solution. These schemes have been shown in the literature to substantially accelerate linear solver convergence, and are a very efficient way to solve unstructured linear systems.
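To close, the pieces developed in this chapter can be combined into one short runnable sketch: Gauss-Seidel smoothing, algebraic pairwise coarsening by summing coefficients and residuals (Eqs. 8.40 and 8.41), zero initial guess for the coarse error, zeroth order prolongation, and the recursive V-cycle of Fig. 8.16. All names are ours, the matrix is stored as a plain coefficient dictionary, and the test problem is the 1D model system; this is a minimal illustration, not an optimized solver.

```python
import math

def residual(A, x, b):
    """r = b - A x for A stored as {(i, j): coefficient}."""
    r = list(b)
    for (i, j), a in A.items():
        r[i] -= a * x[j]
    return r

def gauss_seidel(A, x, b, sweeps):
    """Plain Gauss-Seidel relaxation, updating x in place."""
    n = len(x)
    off = [[] for _ in range(n)]
    diag = [0.0] * n
    for (i, j), a in A.items():
        if i == j:
            diag[i] = a
        else:
            off[i].append((j, a))
    for _ in range(sweeps):
        for i in range(n):
            s = b[i]
            for j, a in off[i]:
                s -= a * x[j]
            x[i] = s / diag[i]

def v_cycle(A, x, b, nu1=1, nu2=2):
    """One V-cycle with pairwise algebraic agglomeration: coarse coefficients
    are sums of fine ones (Eq. 8.40), the coarse source is the sum of fine
    residuals (Eq. 8.41), and prolongation is zeroth order."""
    n = len(x)
    if n <= 3:                        # coarsest level: just relax a lot
        gauss_seidel(A, x, b, 50)
        return
    gauss_seidel(A, x, b, nu1)        # pre-smoothing (nu1 sweeps)
    r = residual(A, x, b)
    owner = [i // 2 for i in range(n)]
    Ac, bc = {}, [0.0] * ((n + 1) // 2)
    for (i, j), a in A.items():       # summed coefficients (Eq. 8.40)
        key = (owner[i], owner[j])
        Ac[key] = Ac.get(key, 0.0) + a
    for i in range(n):                # summed residuals (Eq. 8.41)
        bc[owner[i]] += r[i]
    xc = [0.0] * len(bc)              # zero initial guess for the coarse error
    v_cycle(Ac, xc, bc, nu1, nu2)     # recursion, as in Fig. 8.16
    for i in range(n):
        x[i] += xc[owner[i]]          # zeroth order prolongation and correction
    gauss_seidel(A, x, b, nu2)        # post-smoothing (nu2 sweeps)
```

Applied to a 64-cell 1D model system, a few dozen of these cycles reduce the residual norm by several orders of magnitude, whereas plain Gauss-Seidel on the fine grid alone is held back by the smooth, low-frequency error modes discussed earlier.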


Bibliography

[1] R. Courant, K. Friedrichs, and H. Lewy. Über die partiellen Differenzengleichungen der mathematischen Physik. Mathematische Annalen, 100:32–74, 1928.

[2] R. Courant, K. Friedrichs, and H. Lewy. On the partial difference equations of mathematical physics. IBM J. Res. Dev., 11:215–234, 1967.

[3] B.P. Leonard. A stable and accurate convective modelling procedure based on quadratic upstream interpolation. Computer Methods in Applied Mechanics and Engineering, 19:59–98, 1979.

[4] T.J. Barth and D.C. Jespersen. The design and application of upwind schemes on unstructured meshes. AIAA 89-0366, 1989.

[5] D.A. Anderson, J.C. Tannehill, and R.H. Pletcher. Computational Fluid Mechanics and Heat Transfer. McGraw-Hill, New York, 1984.

[6] A.J. Chorin. A numerical method for solving incompressible viscous flow problems. Journal of Computational Physics, 2(1):12–26, 1967.

[7] K.C. Karki and S.V. Patankar. Pressure based calculation procedure for viscous flows at all speeds in arbitrary configurations. AIAA Journal, 27(9):1167–1174, 1989.

[8] S.V. Patankar and D.B. Spalding. A calculation procedure for heat, mass and momentum transfer in three-dimensional parabolic flows. International Journal of Heat and Mass Transfer, 15:1787–1805, 1972.

[9] S.V. Patankar. Numerical Heat Transfer and Fluid Flow. Hemisphere Publishing Corporation, 1980.

[10] S.P. Vanka. Block-implicit multigrid solution of Navier-Stokes equations in primitive variables. Journal of Computational Physics, 65:138–158, 1986.

[11] J.P. Van Doormal and G.D. Raithby. Enhancements of the SIMPLE method for predicting incompressible fluid flows. Numerical Heat Transfer, 7:147–163, 1984.

[12] B.R. Baliga and S.V. Patankar. A control-volume finite element method for two-dimensional fluid flow and heat transfer. Numerical Heat Transfer, 6:245–261, 1983.

[13] C.M. Rhie and W.L. Chow. A numerical study of the turbulent flow past an isolated airfoil with trailing edge separation. AIAA Journal, 21:1525–1532, 1983.

[14] C. Hsu. A Curvilinear-Coordinate Method for Momentum, Heat and Mass Transfer in Domains of Irregular Geometry. PhD thesis, University of Minnesota, 1981.

[15] M. Peric. A Finite Volume Method for the Prediction of Three Dimensional Fluid Flow in Complex Ducts. PhD thesis, University of London, August 1985.

[16] S. Majumdar. Role of underrelaxation in momentum interpolation for calculation of flow with nonstaggered grids. Numerical Heat Transfer, 13:125–132, 1988.

[17] J.Y. Murthy and S.R. Mathur. Computation of anisotropic conduction using unstructured meshes. Journal of Heat Transfer, 120:583–591, August 1998.