Lecture Notes Summer Term 2007
R. Verfürth
Fakultät für Mathematik, Ruhr-Universität Bochum
Contents

Chapter I. Fundamentals 7
I.1. Modelization 7
I.1.1. Lagrangian and Eulerian representation 7
I.1.2. Velocity 7
I.1.3. Transport theorem 8
I.1.4. Conservation of mass 9
I.1.5. Cauchy theorem 9
I.1.6. Conservation of momentum 10
I.1.7. Conservation of energy 11
I.1.8. Constitutive laws 11
I.1.9. Compressible Navier-Stokes equations in conservative form 13
I.1.10. Euler equations 13
I.1.11. Compressible Navier-Stokes equations in non-conservative form 13
I.1.12. Instationary incompressible Navier-Stokes equations 15
I.1.13. Stationary incompressible Navier-Stokes equations 16
I.1.14. Stokes equations 16
I.1.15. Initial and boundary conditions 17
I.2. Notations and basic results 18
I.2.1. Domains and functions 18
I.2.2. Differentiation of products 18
I.2.3. Integration by parts formulae 19
I.2.4. Weak derivatives 19
I.2.5. Sobolev spaces and norms 20
I.2.6. Friedrichs and Poincaré inequalities 21
I.2.7. Finite element partitions 21
I.2.8. Finite element spaces 22
I.2.9. Approximation properties 23
I.2.10. Nodal shape functions 23
I.2.11. A quasi-interpolation operator 26
I.2.12. Bubble functions 27

Chapter II. Stationary linear problems 29
II.1. Discretization of the Stokes equations. A first attempt 29
II.1.1. The Poisson equation revisited 29
II.1.2. A variational formulation of the Stokes equations 29
II.1.3. A naive discretization of the Stokes equations 30
II.1.4. Possible remedies 32
II.2. Mixed finite element discretizations of the Stokes equations 32
II.2.1. Saddle-point formulation of the Stokes equations 32
II.2.2. General structure of mixed finite element discretizations of the Stokes equations 33
II.2.3. A first attempt 34
II.2.4. A necessary condition for a well-posed mixed discretization 34
II.2.5. A second attempt 35
II.2.6. The inf-sup condition 36
II.2.7. A stable low-order element with discontinuous pressure approximation 37
II.2.8. Stable higher-order elements with discontinuous pressure approximation 37
II.2.9. Stable low-order elements with continuous pressure approximation 38
II.2.10. Stable higher-order elements with continuous pressure approximation 39
II.2.11. A priori error estimates 39
II.3. Petrov-Galerkin stabilization 40
II.3.1. Motivation 40
II.3.2. The mini element revisited 40
II.3.3. General form of Petrov-Galerkin stabilizations 43
II.3.4. Choice of stabilization parameters 44
II.3.5. Choice of spaces 44
II.3.6. Structure of the discrete problem 44
II.4. Non-conforming methods 44
II.4.1. Motivation 44
II.4.2. The Crouzeix-Raviart element 45
II.4.3. Construction of a local solenoidal basis 46
II.5. Stream-function formulation 48
II.5.1. Motivation 48
II.5.2. The curl operators 49
II.5.3. Stream-function formulation of the Stokes equations 49
II.5.4. Variational formulation of the biharmonic equation 50
II.5.5. A non-conforming discretization of the biharmonic equation 51
II.5.6. Mixed finite element discretizations of the biharmonic equation 52
II.6. Solution of the discrete problems 53
II.6.1. General structure of the discrete problems 53
II.6.2. The Uzawa algorithm 54
II.6.3. The conjugate gradient algorithm revisited 55
II.6.4. An improved Uzawa algorithm 57
II.6.5. The multigrid algorithm 58
II.6.6. Smoothing 61
II.6.7. Prolongation 62
II.6.8. Restriction 64
II.6.9. Variants of the CG-algorithm for indefinite problems 65
II.7. A posteriori error estimation and adaptive grid refinement 67
II.7.1. Motivation 67
II.7.2. General structure of the adaptive algorithm 68
II.7.3. A residual a posteriori error estimator 68
II.7.4. Error estimators based on the solution of auxiliary problems 70
II.7.5. Marking strategies 73
II.7.6. Regular refinement 74
II.7.7. Additional refinement 75
II.7.8. Required data structures 76

Chapter III. Stationary nonlinear problems 79
III.1. Discretization of the stationary Navier-Stokes equations 79
III.1.1. Variational formulation 79
III.1.2. Fixed-point formulation 79
III.1.3. Existence and uniqueness results 80
III.1.4. Finite element discretization 80
III.1.5. Fixed-point formulation of the discrete problem 81
III.1.6. Properties of the discrete problem 81
III.1.7. Symmetrization 82
III.1.8. A priori error estimates 82
III.1.9. A warning example 83
III.1.10. Upwind methods 87
III.1.11. The streamline-diffusion method 88
III.2. Solution of the discrete nonlinear problems 89
III.2.1. General structure 89
III.2.2. Fixed-point iteration 90
III.2.3. Newton iteration 91
III.2.4. Path tracking 91
III.2.5. A nonlinear CG-algorithm 92
III.2.6. Operator splitting 94
III.2.7. Multigrid algorithms 95
III.3. Adaptivity for nonlinear problems 95
III.3.1. General structure 95
III.3.2. A residual a posteriori error estimator 96
III.3.3. Error estimators based on the solution of auxiliary problems 96

Chapter IV. Instationary problems 99
IV.1. Discretization of the instationary Navier-Stokes equations 99
IV.1.1. Variational formulation 99
IV.1.2. Existence and uniqueness results 100
IV.1.3. Numerical methods for ordinary differential equations revisited 100
IV.1.4. Method of lines 102
IV.1.5. Rothe's method 104
IV.1.6. Space-time finite elements 104
IV.1.7. The transport-diffusion algorithm 105
IV.2. Space-time adaptivity 107
IV.2.1. Overview 107
IV.2.2. A residual a posteriori error estimator 108
IV.2.3. Time adaptivity 109
IV.2.4. Space adaptivity 110
IV.3. Discretization of compressible and inviscid problems 110
IV.3.1. Systems in divergence form 110
IV.3.2. Finite volume schemes 111
IV.3.3. Construction of the partitions 113
IV.3.4. Relation to finite element methods 115
IV.3.5. Construction of the numerical fluxes 115

Summary 119
Bibliography 123
Index 125
CHAPTER I
Fundamentals
I.1. Modelization
I.1.1. Lagrangian and Eulerian representation. Consider a volume Ω occupied by a fluid which moves under the influence of interior and exterior forces. We denote by η ∈ Ω the position of an arbitrary particle at time t = 0 and by

    x = Φ(η, t)

its position at time t > 0. The basic assumptions are:
• Φ(·, t) : Ω → Ω is for all times t ≥ 0 a bijective differentiable mapping,
• its inverse mapping is differentiable,
• the transformation Φ(·, t) is orientation-preserving,
• Φ(·, 0) is the identity.

There are two ways to look at the fluid flow:
(1) We fix η and look at the trajectory t ↦ Φ(η, t). This is the Lagrangian representation. Correspondingly, η is called the Lagrangian coordinate. The Lagrangian coordinate system moves with the fluid.
(2) We fix the point x and look at the trajectory t ↦ Φ(·, t)^{-1}(x) which passes through x. This is the Eulerian representation. Correspondingly, x is called the Eulerian coordinate. The Eulerian coordinate system is fixed.
I.1.2. Velocity. We denote by

    DΦ = (∂Φ_i/∂η_j)_{1≤i,j≤3}

the Jacobian matrix of Φ(·, t) and by

    J = det DΦ

its Jacobian determinant. The preservation of orientation is equivalent to

    J(η, t) > 0 for all t > 0 and all η ∈ Ω.

The velocity of the flow at point x = Φ(η, t) is defined by

    v(x, t) = ∂Φ/∂t (η, t),  x = Φ(η, t).
I.1.3. Transport theorem. Consider an arbitrary volume V ⊂ Ω and denote by

    V(t) = Φ(·, t)(V) = {Φ(η, t) : η ∈ V}

its shape at time t > 0. Then the following transport theorem holds for all differentiable mappings f : Ω × (0, ∞) → R:

    d/dt ∫_{V(t)} f(x, t) dx = ∫_{V(t)} [ ∂f/∂t (x, t) + div(f v)(x, t) ] dx.
Example I.1.1. We consider a one-dimensional model situation, i.e. Ω = (a, b) is an interval and x and η are real numbers. Then V = (α, β) and V(t) = (α(t), β(t)) are intervals, too. The transformation rule for integrals gives

    ∫_{V(t)} f(x, t) dx = ∫_{α(t)}^{β(t)} f(x, t) dx
                        = ∫_α^β f(Φ(η, t), t) ∂Φ/∂η (η, t) dη.

Differentiating this equality and using the transformation rule we get

    d/dt ∫_{V(t)} f(x, t) dx
    = ∫_α^β d/dt [ f(Φ(η, t), t) ∂Φ/∂η (η, t) ] dη
    = ∫_α^β ∂f/∂t (Φ(η, t), t) ∂Φ/∂η (η, t) dη
        (this term equals ∫_{V(t)} ∂f/∂t (x, t) dx)
    + ∫_α^β ∂f/∂x (Φ(η, t), t) ∂Φ/∂t (η, t) ∂Φ/∂η (η, t) dη
        (here ∂Φ/∂t (η, t) = v(Φ(η, t), t), so this term equals ∫_{V(t)} ∂f/∂x (x, t) v(x, t) dx)
    + ∫_α^β f(Φ(η, t), t) ∂/∂t ∂Φ/∂η (η, t) dη
        (here ∂/∂t ∂Φ/∂η = ∂/∂η v(Φ(η, t), t) = ∂v/∂x (Φ(η, t), t) ∂Φ/∂η (η, t), so this term equals ∫_{V(t)} f(x, t) ∂v/∂x (x, t) dx)
    = ∫_{V(t)} ∂f/∂t (x, t) dx + ∫_{V(t)} ∂f/∂x (x, t) v(x, t) dx + ∫_{V(t)} f(x, t) ∂v/∂x (x, t) dx
    = ∫_{V(t)} [ ∂f/∂t (x, t) + ∂/∂x {f(x, t) v(x, t)} ] dx.

This proves the transport theorem in one dimension.
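The one-dimensional computation above can also be checked numerically. The following sketch assumes, purely for illustration, the flow Φ(η, t) = η·e^t (so that v(x, t) = x) and the integrand f(x, t) = x² on V = (1, 2); both choices are mine, not from the text:

```python
import math

# Transport theorem check in 1D for Phi(eta, t) = eta * exp(t),
# hence v(x, t) = x, with f(x, t) = x^2 and V = (1, 2), V(t) = (e^t, 2e^t).

def midpoint(g, a, b, n=4000):
    # composite midpoint rule for the integral of g over (a, b)
    h = (b - a) / n
    return sum(g(a + (i + 0.5) * h) for i in range(n)) * h

def mass(t):
    # left-hand side: I(t) = int_{V(t)} f(x, t) dx
    return midpoint(lambda x: x * x, math.exp(t), 2 * math.exp(t))

def rhs(t):
    # right-hand side: int_{V(t)} (df/dt + d/dx(f v)) dx;
    # here the integrand is 0 + d/dx(x^3) = 3 x^2
    return midpoint(lambda x: 3 * x * x, math.exp(t), 2 * math.exp(t))

t0, dt = 0.3, 1e-4
lhs = (mass(t0 + dt) - mass(t0 - dt)) / (2 * dt)  # d/dt I(t), central difference
rel_err = abs(lhs - rhs(t0)) / rhs(t0)
```

Both sides evaluate to 7·e^{3t}, so the relative discrepancy is dominated by the quadrature and differencing errors.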
I.1.4. Conservation of mass. We denote by ρ(x, t) the density of the fluid. Then

    ∫_{V(t)} ρ(x, t) dx

is the total mass of the volume V(t). The conservation of mass and the transport theorem therefore imply that

    0 = d/dt ∫_{V(t)} ρ(x, t) dx = ∫_{V(t)} [ ∂ρ/∂t (x, t) + div(ρv)(x, t) ] dx.

This gives the equation for the conservation of mass:

    ∂ρ/∂t + div(ρv) = 0 in Ω × (0, ∞).
I.1.5. Cauchy theorem. Fluid and continuum mechanics are based on three fundamental assumptions concerning the interior forces:
• interior forces act via the surface of a volume V(t),
• interior forces only depend on the normal direction of the surface of the volume,
• interior forces are additive and continuous.

Due to the Cauchy theorem these assumptions imply that the interior forces acting on a volume V(t) must be of the form

    ∫_{∂V(t)} T · n dS

with a tensor T : Ω → R^{3×3}. Here, as usual, ∂V(t) denotes the boundary of the volume V(t), n is the unit outward normal, and dS denotes the surface element.
The integral theorem of Gauss then yields

    ∫_{∂V(t)} T · n dS = ∫_{V(t)} div T dx.
I.1.6. Conservation of momentum. The total momentum of a volume V(t) in the fluid is given by

    ∫_{V(t)} ρ(x, t) v(x, t) dx.

Due to the transport theorem its temporal variation equals

    d/dt ∫_{V(t)} ρ(x, t) v(x, t) dx
    = ( d/dt ∫_{V(t)} ρ(x, t) v_i(x, t) dx )_{1≤i≤3}
    = ( ∫_{V(t)} [ ∂/∂t (ρv_i)(x, t) + div(ρv_i v)(x, t) ] dx )_{1≤i≤3}
    = ∫_{V(t)} [ ∂/∂t (ρv)(x, t) + div(ρv ⊗ v)(x, t) ] dx,

where

    v ⊗ u = (v_i u_j)_{1≤i,j≤3}.

Due to the conservation of momentum this quantity must be balanced by exterior and interior forces. Exterior forces must be of the form

    ∫_{V(t)} ρf dx.

The Cauchy theorem tells us that the interior forces are of the form

    ∫_{V(t)} div T dx.

Hence the conservation of momentum takes the integral form

    ∫_{V(t)} [ ∂/∂t (ρv) + div(ρv ⊗ v) ] dx = ∫_{V(t)} [ ρf + div T ] dx.

This gives the equation for the conservation of momentum:

    ∂/∂t (ρv) + div(ρv ⊗ v) = ρf + div T in Ω × (0, ∞).
I.1.7. Conservation of energy. We denote by e the total energy. The transport theorem implies that its temporal variation is given by

    d/dt ∫_{V(t)} e dx = ∫_{V(t)} [ ∂e/∂t (x, t) + div(ev)(x, t) ] dx.

Due to the conservation of energy this quantity must be balanced by the energies due to the exterior and interior forces and by the change of the internal energy.
The contributions of the exterior and interior forces are given by

    ∫_{V(t)} ρf · v dx

and

    ∫_{∂V(t)} n · T · v dS = ∫_{V(t)} div(T · v) dx.

Due to the Cauchy theorem and the integral theorem of Gauss, the change of internal energy must be of the form

    ∫_{∂V(t)} σ · n dS = ∫_{V(t)} div σ dx

with a vector field σ : Ω → R^3.
Hence the conservation of energy takes the integral form

    ∫_{V(t)} [ ∂e/∂t + div(ev) ] dx = ∫_{V(t)} [ ρf · v + div(T · v) + div σ ] dx.

This gives the equation for the conservation of energy:

    ∂e/∂t + div(ev) = ρf · v + div(T · v) + div σ in Ω × (0, ∞).
I.1.8. Constitutive laws. The equations for the conservation of mass, momentum and energy must be complemented by constitutive laws. These are based on the following fundamental assumptions:
• T only depends on the gradient of the velocity.
• The dependence on the velocity gradient is linear.
• T is symmetric. (Due to the Cauchy theorem this is a consequence of the conservation of angular momentum.)
• In the absence of internal friction, T is diagonal and proportional to the pressure, i.e. all interior forces act in the normal direction.
• e = ρε + ½ρ|v|². (ε is called the internal energy and is often identified with the temperature.)
• σ is proportional to the variation of the internal energy, i.e. σ = α∇ε.

The conditions on T imply that it must be of the form

    T = 2λD(v) + μ(div v)I − pI.

Here,

    I = (δ_{ij})_{1≤i,j≤3}

denotes the unit tensor,

    D(v) = ½(∇v + ∇v^t) = ½(∂v_i/∂x_j + ∂v_j/∂x_i)_{1≤i,j≤3}

is the deformation tensor,

    p = p(ρ, ε)

denotes the pressure, and λ, μ ∈ R are the dynamic viscosities of the fluid.
The equation for the pressure is also called the equation of state. For an ideal gas, e.g., it takes the form p(ρ, ε) = (γ − 1)ρε with γ > 1.
Note that div v is the trace of the deformation tensor D(v).
Example I.1.2. The physical dimension of the dynamic viscosities λ and μ is kg m⁻¹ s⁻¹ and is called Poise; the one of the density ρ is kg m⁻³. In §I.1.12 (p. 15) we will introduce the kinematic viscosity ν = λ/ρ. Its physical dimension is m² s⁻¹ and is called Stokes. Table I.1.1 gives some relevant values of these quantities.

Table I.1.1. Viscosities of some fluids and gases

                     λ [kg m⁻¹ s⁻¹]   ρ [kg m⁻³]   ν [m² s⁻¹]
Water 20 °C          1.005·10⁻³       1000          1.005·10⁻⁶
Alcohol 20 °C        1.19·10⁻³        790           1.506·10⁻⁶
Ether 20 °C          2.43·10⁻⁵        716           3.394·10⁻⁸
Glycerine 20 °C      1.499            1260          1.190·10⁻³
Air 0 °C, 1 atm      1.71·10⁻⁵        1.293         1.322·10⁻⁵
Hydrogen 0 °C        8.4·10⁻⁶         8.99·10⁻²     9.344·10⁻⁵
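The ν-column of Table I.1.1 is simply λ/ρ; a quick sanity check of a few rows (values copied from the table):

```python
# Recompute the kinematic viscosity nu = lambda / rho for some rows of
# Table I.1.1 (lambda in kg m^-1 s^-1, rho in kg m^-3, nu in m^2 s^-1).
fluids = {
    "Water 20C": (1.005e-3, 1000.0, 1.005e-6),
    "Glycerine 20C": (1.499, 1260.0, 1.190e-3),
    "Air 0C 1atm": (1.71e-5, 1.293, 1.322e-5),
}
computed = {name: lam / rho for name, (lam, rho, _) in fluids.items()}
```

Each computed value agrees with the tabulated ν to the printed precision.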
I.1.9. Compressible Navier-Stokes equations in conservative form. Collecting all results we obtain the compressible Navier-Stokes equations in conservative form:

    ∂ρ/∂t + div(ρv) = 0

    ∂(ρv)/∂t + div(ρv ⊗ v) = ρf + 2λ div D(v) + μ grad div v − grad p
                            = ρf + λΔv + (λ + μ) grad div v − grad p

    ∂e/∂t + div(ev) = ρf · v + 2λ div[D(v) · v] + μ div[(div v) v] − div(pv) + αΔε
                    = {ρf + 2λ div D(v) + μ grad div v − grad p} · v
                      + λD(v) : D(v) + μ(div v)² − p div v + αΔε

    p = p(ρ, ε)

    e = ρε + ½ρ|v|².
I.1.10. Euler equations. In the inviscid case, i.e. λ = μ = 0, the compressible Navier-Stokes equations in conservative form reduce to the so-called Euler equations for an ideal gas:

    ∂ρ/∂t + div(ρv) = 0
    ∂(ρv)/∂t + div(ρv ⊗ v + pI) = ρf
    ∂e/∂t + div(ev + pv) = ρf · v + αΔε
    p = p(ρ, ε)
    e = ρε + ½ρ|v|².
I.1.11. Compressible Navier-Stokes equations in non-conservative form. Inserting the first equation of §I.1.9 (p. 13) in the left-hand side of the second equation yields

    ∂(ρv)/∂t + div(ρv ⊗ v)
    = (∂ρ/∂t)v + ρ ∂v/∂t + (div(ρv))v + ρ(v · ∇)v
    = ρ[∂v/∂t + (v · ∇)v].

Inserting the first two equations of §I.1.9 in the left-hand side of the third equation implies

    λD(v) : D(v) + μ(div v)² − p div v + αΔε
    = −{ρf + 2λ div D(v) + μ grad div v − grad p} · v
      + ∂(ρε + ½ρ|v|²)/∂t + div(ρεv + ½ρ|v|² v)
    = −{∂(ρv)/∂t + div(ρv ⊗ v)} · v
      + ε ∂ρ/∂t + ε div(ρv) + ρ ∂ε/∂t + ρv · grad ε
      + ½|v|² ∂ρ/∂t + ½|v|² div(ρv) + ½ρ ∂|v|²/∂t + ρv · grad(½|v|²)
    = −ρv · [∂v/∂t + (v · ∇)v]
      + ρ ∂ε/∂t + ρv · grad ε
      + ρv · ∂v/∂t + ρv · [(v · ∇)v]
    = ρ ∂ε/∂t + ρv · grad ε.

With these simplifications we obtain the compressible Navier-Stokes equations in non-conservative form:

    ∂ρ/∂t + div(ρv) = 0

    ρ[∂v/∂t + (v · ∇)v] = ρf + λΔv + (λ + μ) grad div v − grad p

    ρ[∂ε/∂t + v · grad ε] = λD(v) : D(v) + μ(div v)² − p div v + αΔε

    p = p(ρ, ε).
I.1.12. Instationary incompressible Navier-Stokes equations. Next we assume that the density ρ is constant, replace p by p/ρ, denote by

    ν = λ/ρ

the kinematic viscosity, and suppress the third equation in §I.1.11 (p. 13) (conservation of energy). This yields the instationary incompressible Navier-Stokes equations:

    div v = 0
    ∂v/∂t + (v · ∇)v = f + νΔv − grad p.

Remark I.1.3. We say that a fluid is incompressible if the volume of any subdomain V ⊂ Ω remains constant for all times t > 0, i.e.

    ∫_{V(t)} dx = ∫_V dx for all t > 0.

The transport theorem then implies that

    0 = d/dt ∫_{V(t)} dx = ∫_{V(t)} div v dx.

Hence, incompressibility is equivalent to the equation

    div v = 0.
Remark I.1.4. The incompressible Navier-Stokes equations are often rescaled to a non-dimensional form. To this end we introduce a reference length L, a reference time T, a reference velocity U, a reference pressure P, and a reference force F and introduce new variables and quantities by x = Ly, t = Tτ, v = Uu, p = Pq, f = Fg. Physical reasons suggest to fix T = L/U. When rewriting the momentum equation in the new quantities we obtain

    Fg = f = (U/T) ∂u/∂τ + (U²/L)(u · ∇_y)u − (νU/L²)Δ_y u + (P/L)∇_y q
           = (νU/L²) [ (L²/(νT)) ∂u/∂τ + (LU/ν)(u · ∇_y)u − Δ_y u + (PL/(νU))∇_y q ].

The physical dimension of F and νU/L² is m s⁻². The quantities L²/(νT), PL/(νU), and LU/ν are dimensionless. The quantities L, U, and ν are determined by the physical problem. The quantities F and P are fixed by the conditions F = νU/L² and PL/(νU) = 1. Thus there remains only the dimensionless parameter

    Re = LU/ν.

It is called the Reynolds number and is a measure for the complexity of the flow. It was introduced in 1883 by Osborne Reynolds (1842 – 1912).

Example I.1.5. Consider an airplane at cruising speed and an oil vessel. The relevant quantities for the airplane are

    U ≈ 900 km h⁻¹ ≈ 250 m s⁻¹
    L ≈ 50 m
    ν ≈ 1.322·10⁻⁵ m² s⁻¹
    Re ≈ 9.455·10⁸.

The corresponding quantities for the oil vessel are

    U ≈ 20 knots ≈ 36 km h⁻¹ ≈ 10 m s⁻¹
    L ≈ 300 m
    ν ≈ 1.005·10⁻⁶ m² s⁻¹
    Re ≈ 2.985·10⁹.
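The two Reynolds numbers of Example I.1.5 follow directly from Re = LU/ν; a minimal sketch with the values given above:

```python
# Reynolds numbers of Example I.1.5, Re = L * U / nu.
def reynolds(U, L, nu):
    # U: velocity [m/s], L: length [m], nu: kinematic viscosity [m^2/s]
    return L * U / nu

re_airplane = reynolds(250.0, 50.0, 1.322e-5)  # about 9.455e8
re_vessel = reynolds(10.0, 300.0, 1.005e-6)    # about 2.985e9
```

Both values reproduce the figures quoted in the example to the printed precision.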
I.1.13. Stationary incompressible Navier-Stokes equations. As a next step of simplification, we assume that we are in a stationary regime, i.e. all temporal derivatives vanish. This yields the stationary incompressible Navier-Stokes equations:

    div v = 0
    −νΔv + (v · ∇)v + grad p = f.

I.1.14. Stokes equations. In a last step we linearize the Navier-Stokes equations at the velocity v = 0 and rescale the viscosity ν to 1. This gives the Stokes equations:

    div v = 0
    −Δv + grad p = f.

Remark I.1.6. All CFD algorithms solve complicated flow problems via a nested sequence of simplification steps similar to the described procedure. Therefore it is mandatory to have at one's disposal efficient discretization schemes and solvers for simplified problems such as the Stokes equations, which lie at the heart of the innermost loop.
I.1.15. Initial and boundary conditions. The equations of §§I.1.9 – I.1.12 are time-dependent. They must be completed by initial conditions for the velocity v, the internal energy ε and – for the problems of §§I.1.9 – I.1.11 – the density ρ.
The Euler equations of §I.1.10 (p. 13) are hyperbolic and require boundary conditions on the inflow boundary.
The problems of §§I.1.9 and I.1.11 – I.1.14 are of second order in space and require boundary conditions on the whole boundary Γ = ∂Ω. For the energy equations in §§I.1.9 (p. 13) and I.1.11 (p. 13) one may choose either Dirichlet (prescribed energy) or Neumann (prescribed energy flux) boundary conditions. A mixture of both conditions is also possible.
The situation is less evident for the mass and momentum equations in §§I.1.9 and I.1.11 – I.1.14. Around 1845, Sir George Gabriel Stokes (1819 – 1903) suggested that – due to the friction – the fluid will adhere to the boundary. This leads to the Dirichlet boundary condition

    v = 0 on Γ.

More generally one can impose the condition v = v_Γ with a given boundary velocity v_Γ.
Around 1827, Pierre Louis Marie Henri Navier (1785 – 1836) had suggested the more general boundary condition

    λ_n v · n + (1 − λ_n) n · T · n = 0
    λ_t [v − (v · n)n] + (1 − λ_t)[T · n − (n · T · n)n] = 0

on Γ with parameters λ_n, λ_t ∈ [0, 1] that depend on the actual flow problem. More generally one may replace the homogeneous right-hand sides by given known functions.
The first equation of Navier refers to the normal components of v and n · T, the second equation refers to the tangential components of v and n · T. The special case λ_n = λ_t = 1 obviously yields the no-slip condition of Stokes. The case λ_n = 1, λ_t = 0 on the other hand gives the slip condition

    v · n = 0
    T · n − (n · T · n)n = 0.

The question which boundary condition is a better description of reality was decided in the 19th century by experiments with pendulums submerged in a viscous fluid. They were in favour of the no-slip condition of Stokes. The viscosities of the fluids, however, were similar to that of honey and the velocities were only a few meters per second. When dealing with much higher velocities or much smaller viscosities, the slip condition of Navier is more appropriate. A particular example for this situation is the re-entry of a space vehicle. Other examples are coating problems where the location of the boundary is an unknown and is determined by the interaction of capillary and viscous forces.
I.2. Notations and basic results
I.2.1. Domains and functions. The following notations concerning domains and functions will frequently be used:

    Ω               open, bounded, connected set in R^n, n ∈ {2, 3};
    Γ               boundary of Ω, supposed to be Lipschitz-continuous;
    n               exterior unit normal to Ω;
    p, q, r, ...    scalar functions with values in R;
    u, v, w, ...    vector fields with values in R^n;
    S, T, ...       tensor fields with values in R^{n×n};
    I               unit tensor;
    ∇               gradient;
    div             divergence;
                    div u = Σ_{i=1}^n ∂u_i/∂x_i,
                    div T = ( Σ_{i=1}^n ∂T_{ij}/∂x_i )_{1≤j≤n};
    Δ = div ∇       Laplace operator;
    D(u) = ½(∂u_i/∂x_j + ∂u_j/∂x_i)_{1≤i,j≤n}
                    deformation tensor;
    u · v           inner product of vectors;
    S : T           inner product of tensors;
    u ⊗ v = (u_i v_j)_{1≤i,j≤n}
                    tensorial product.
I.2.2. Differentiation of products. The product formula for differentiation yields the following formulae for the differentiation of products of scalar functions, vector fields and tensor fields:

    div(pu) = ∇p · u + p div u,
    div(T · u) = (div T) · u + T : D(u),
    div(u ⊗ v) = (div u)v + (u · ∇)v.
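These identities can be spot-checked with finite differences. A sketch for the first formula in two dimensions, with the arbitrary smooth choices p(x, y) = x²y and u = (sin y, xy) (both picked only for illustration):

```python
import math

# Finite-difference check of div(p u) = grad p . u + p div u at one point.
h = 1e-5

def p(x, y):
    return x * x * y

def u(x, y):
    return (math.sin(y), x * y)

def pu(x, y):
    s = p(x, y)
    u1, u2 = u(x, y)
    return (s * u1, s * u2)

def div2(F, x, y):
    # central-difference divergence of a 2D vector field F
    return ((F(x + h, y)[0] - F(x - h, y)[0])
            + (F(x, y + h)[1] - F(x, y - h)[1])) / (2 * h)

x0, y0 = 0.7, 0.4
gp = ((p(x0 + h, y0) - p(x0 - h, y0)) / (2 * h),
      (p(x0, y0 + h) - p(x0, y0 - h)) / (2 * h))
u1, u2 = u(x0, y0)
lhs = div2(pu, x0, y0)
rhs = gp[0] * u1 + gp[1] * u2 + p(x0, y0) * div2(u, x0, y0)
```

Up to the O(h²) differencing error, both sides agree.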
I.2.3. Integration by parts formulae. The above product formulae and the Gauss theorem for integrals give rise to the following integration by parts formulae:

    ∫_Γ p u · n dS = ∫_Ω ∇p · u dx + ∫_Ω p div u dx,
    ∫_Γ n · T · u dS = ∫_Ω (div T) · u dx + ∫_Ω T : D(u) dx,
    ∫_Γ n · (u ⊗ v) dS = ∫_Ω (div u)v dx + ∫_Ω (u · ∇)v dx.
I.2.4. Weak derivatives. Recall that Ā denotes the closure of a set A ⊂ R^n.

Example I.2.1. For the sets

    A = {x ∈ R³ : x₁² + x₂² + x₃² < 1}      (open unit ball)
    B = {x ∈ R³ : 0 < x₁² + x₂² + x₃² < 1}  (punctured open unit ball)
    C = {x ∈ R³ : 1 < x₁² + x₂² + x₃² < 2}  (open annulus)

we have

    Ā = {x ∈ R³ : x₁² + x₂² + x₃² ≤ 1}      (closed unit ball)
    B̄ = {x ∈ R³ : x₁² + x₂² + x₃² ≤ 1}      (closed unit ball)
    C̄ = {x ∈ R³ : 1 ≤ x₁² + x₂² + x₃² ≤ 2}  (closed annulus).

Given a continuous function φ : R^n → R, we denote its support by

    supp φ = closure of {x ∈ R^n : φ(x) ≠ 0}.

The set of all functions that are infinitely differentiable and have their support contained in Ω is denoted by C₀^∞(Ω):

    C₀^∞(Ω) = {φ ∈ C^∞(Ω) : supp φ ⊂ Ω}.
Remark I.2.2. The condition "supp φ ⊂ Ω" is a nontrivial one, since supp φ is closed and Ω is open. Functions satisfying this condition vanish at the boundary of Ω together with all their derivatives.
Given a sufficiently smooth function φ and a multi-index α ∈ N^n, we denote its partial derivatives by

    D^α φ = ∂^{α₁+...+αₙ} φ / (∂x₁^{α₁} ... ∂xₙ^{αₙ}).

Given two functions φ, ψ ∈ C₀^∞(Ω), the Gauss theorem for integrals yields for every multi-index α ∈ N^n the identity

    ∫_Ω D^α φ ψ dx = (−1)^{α₁+...+αₙ} ∫_Ω φ D^α ψ dx.
This identity motivates the definition of the weak derivative:

Given two integrable functions φ, ψ ∈ L¹(Ω) and a multi-index α ∈ N^n, ψ is called the α-th weak derivative of φ if and only if the identity

    ∫_Ω ψ ρ dx = (−1)^{α₁+...+αₙ} ∫_Ω φ D^α ρ dx

holds for all functions ρ ∈ C₀^∞(Ω). In this case we write ψ = D^α φ.
Remark I.2.3. For smooth functions, the notions of classical and weak derivatives coincide. However, there are functions which are not differentiable in the classical sense but which have a weak derivative (cf. Example I.2.4 below).

Example I.2.4. The function |x| is not differentiable in (−1, 1), but it is differentiable in the weak sense. Its weak derivative is the piecewise constant function which equals −1 on (−1, 0) and 1 on (0, 1).
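Example I.2.4 can be verified numerically: with ψ = sign, the defining identity ∫ ψρ dx = −∫ |x| ρ′ dx should hold for test functions vanishing at ±1. The test function ρ(x) = (1 − x²)³(x + 0.3) below is my own choice and only C² rather than C₀^∞, which already suffices for a single integration by parts:

```python
import math

# Check that psi(x) = sign(x) is the weak derivative of phi(x) = |x| on
# (-1, 1): int psi * rho dx must equal -int |x| * rho' dx for rho
# vanishing at the endpoints.

def rho(x):
    # test function, vanishes at x = -1 and x = 1
    return (1 - x * x) ** 3 * (x + 0.3)

def drho(x):
    # derivative of rho
    return -6 * x * (1 - x * x) ** 2 * (x + 0.3) + (1 - x * x) ** 3

n = 40000
h = 2.0 / n
xs = [-1 + (i + 0.5) * h for i in range(n)]  # midpoint-rule nodes
lhs = sum(math.copysign(1.0, x) * rho(x) for x in xs) * h
rhs = -sum(abs(x) * drho(x) for x in xs) * h
```

The two quadrature sums agree up to the O(h²) midpoint-rule error.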
I.2.5. Sobolev spaces and norms. We will frequently use the following Sobolev spaces and norms:

    H^k(Ω) = {φ ∈ L²(Ω) : D^α φ ∈ L²(Ω) for all α ∈ N^n with α₁ + ... + αₙ ≤ k},

    |φ|_k = ( Σ_{α ∈ N^n, α₁+...+αₙ = k} ‖D^α φ‖²_{L²(Ω)} )^{1/2},

    ‖φ‖_k = ( Σ_{l=0}^k |φ|_l² )^{1/2} = ( Σ_{α ∈ N^n, α₁+...+αₙ ≤ k} ‖D^α φ‖²_{L²(Ω)} )^{1/2},

    H₀¹(Ω) = {φ ∈ H¹(Ω) : φ = 0 on Γ},

    L₀²(Ω) = {p ∈ L²(Ω) : ∫_Ω p = 0},

    H^{1/2}(Γ) = {ψ ∈ L²(Γ) : ψ = φ|_Γ for some φ ∈ H¹(Ω)},

    ‖ψ‖_{1/2,Γ} = inf{‖φ‖₁ : φ ∈ H¹(Ω), φ|_Γ = ψ}.
Note that all derivatives are to be understood in the weak sense.
Remark I.2.5. The space H^{1/2}(Γ) is called the trace space of H¹(Ω); its elements are called traces of functions in H¹(Ω). Except in one dimension, n = 1, H¹-functions are in general not continuous and do not admit point values (cf. Example I.2.6 below). A function, however, which is piecewise differentiable is in H¹(Ω) if and only if it is globally continuous. This is crucial for finite element functions.

Example I.2.6. The function |x| is not differentiable, but it is in H¹((−1, 1)). In two dimensions, the function ln(|ln(√(x₁² + x₂²))|) is an example of an H¹-function that is not continuous and does not admit a point value at the origin. In three dimensions, a similar example is given by ln(√(x₁² + x₂² + x₃²)).
Example I.2.7. Consider the open unit ball

    Ω = {x ∈ R^n : x₁² + ... + xₙ² < 1}

in R^n and the functions

    φ_α(x) = {x₁² + ... + xₙ²}^{α/2},  α ∈ R.

Then we have

    φ_α ∈ H¹(Ω) ⟺ α ≥ 0 if n = 2,  α > 1 − n/2 if n > 2.
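The threshold α > 1 − n/2 can be made plausible numerically: up to a constant, the H¹-seminorm of φ_α is the radial integral ∫₀¹ α² r^{2α−2} r^{n−1} dr. A sketch for n = 3 (threshold α > −1/2), truncating the singular integral at a small ε of my choosing:

```python
# Radial part of |phi_alpha|_1^2 on the unit ball in R^n, truncated at eps:
# int_eps^1 alpha^2 * r^(2*alpha - 2 + n - 1) dr.
# For n = 3 this stays bounded as eps -> 0 iff alpha > -1/2.

def seminorm_sq(alpha, n, eps, m=200000):
    # composite midpoint rule on (eps, 1)
    h = (1.0 - eps) / m
    expo = 2 * alpha - 2 + n - 1
    return sum(alpha * alpha * (eps + (i + 0.5) * h) ** expo
               for i in range(m)) * h

inside = seminorm_sq(-0.4, 3, 1e-4)        # above the threshold: bounded
outside_far = seminorm_sq(-0.6, 3, 1e-2)   # below the threshold ...
outside_near = seminorm_sq(-0.6, 3, 1e-4)  # ... grows as eps shrinks
```

For α = −0.4 the truncated integral converges (to 0.8 in the limit), while for α = −0.6 it grows like ε^{−0.2}.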
I.2.6. Friedrichs and Poincaré inequalities. The following inequalities are fundamental:

    ‖φ‖₀ ≤ c_Ω |φ|₁ for all φ ∈ H₀¹(Ω)            (Friedrichs inequality),
    ‖φ‖₀ ≤ c′_Ω |φ|₁ for all φ ∈ H¹(Ω) ∩ L₀²(Ω)   (Poincaré inequality).

The constants c_Ω and c′_Ω depend on the domain Ω and are proportional to its diameter.
I.2.7. Finite element partitions. The finite element discretizations are based on partitions of the domain Ω into non-overlapping simple subdomains. The collection of these subdomains is called a partition and is labeled T. The members of T, i.e. the subdomains, are called elements and are labeled K.
Any partition T has to satisfy the following conditions:
• Ω̄ = Ω ∪ Γ is the union of all elements in T.
• (Affine equivalence) Each K ∈ T is either a triangle or a parallelogram, if n = 2, or a tetrahedron or a parallelepiped, if n = 3.
• (Admissibility) Any two elements in T are either disjoint or share a vertex or a complete edge or – if n = 3 – a complete face.
• (Shape regularity) For any element K, the ratio of its diameter h_K to the diameter ρ_K of the largest ball inscribed into K is bounded independently of K.

Remark I.2.8. In two dimensions, n = 2, shape regularity means that the smallest angles of all elements stay bounded away from zero.
In practice one usually not only considers a single partition T, but complete families of partitions which are often obtained by successive local or global refinements. Then the ratio h_K/ρ_K must be bounded uniformly with respect to all elements and all partitions.

For any partition T we denote by h_T or simply h the maximum of the element diameters:

    h = h_T = max{h_K : K ∈ T}.

Remark I.2.9. In two dimensions triangles and parallelograms may be mixed (cf. Figure I.2.1). In three dimensions tetrahedra and parallelepipeds can be mixed provided prismatic elements are also incorporated. The condition of affine equivalence may be dropped. It, however, considerably simplifies the analysis since it implies constant Jacobians for all element transformations.

[Figure I.2.1. Mixture of triangular and quadrilateral elements]
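The shape-regularity quantity h_K/ρ_K is easy to compute for a triangle: h_K is the longest edge and ρ_K = 2·area/s (with s the semiperimeter) is the diameter of the inscribed circle. A minimal sketch:

```python
import math

# h_K / rho_K for a triangle: h_K = longest edge, rho_K = diameter of
# the inscribed circle = 2 * area / semiperimeter.

def shape_ratio(a, b, c):
    def dist(p, q):
        return math.hypot(p[0] - q[0], p[1] - q[1])
    e = [dist(a, b), dist(b, c), dist(c, a)]
    s = sum(e) / 2  # semiperimeter
    area = math.sqrt(s * (s - e[0]) * (s - e[1]) * (s - e[2]))  # Heron
    rho = 2 * area / s  # diameter of the inscribed circle
    return max(e) / rho

equilateral = shape_ratio((0, 0), (1, 0), (0.5, math.sqrt(3) / 2))  # sqrt(3)
sliver = shape_ratio((0, 0), (1, 0), (0.5, 0.05))  # degenerate: much larger
```

The equilateral triangle attains the optimal ratio √3; flattening the triangle makes the ratio blow up, which is exactly what shape regularity forbids.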
I.2.8. Finite element spaces. For any multi-index α ∈ N^n we set for abbreviation

    |α|₁ = α₁ + ... + αₙ,
    |α|_∞ = max{α_i : 1 ≤ i ≤ n},
    x^α = x₁^{α₁} ... xₙ^{αₙ}.

With any element K in any partition T and any integer number k we associate a space R_k(K) of polynomials by

    R_k(K) = span{x^α : |α|₁ ≤ k}, if K is a triangle or a tetrahedron,
    R_k(K) = span{x^α : |α|_∞ ≤ k}, if K is a parallelogram or a parallelepiped.

With this notation we define finite element spaces by

    S^{k,−1}(T) = {φ : Ω → R : φ|_K ∈ R_k(K) for all K ∈ T},
    S^{k,0}(T) = S^{k,−1}(T) ∩ C(Ω̄),
    S₀^{k,0}(T) = S^{k,0}(T) ∩ H₀¹(Ω) = {φ ∈ S^{k,0}(T) : φ = 0 on Γ}.

Note that k may be 0 for the first space, but must be at least 1 for the second and third space.
Example I.2.10. For a triangle, we have

    R₁(K) = span{1, x₁, x₂},
    R₂(K) = span{1, x₁, x₂, x₁², x₁x₂, x₂²}.

For a parallelogram on the other hand, we have

    R₁(K) = span{1, x₁, x₂, x₁x₂},
    R₂(K) = span{1, x₁, x₂, x₁x₂, x₁², x₁²x₂, x₁²x₂², x₁x₂², x₂²}.
I.2.9. Approximation properties. The finite element spaces defined above satisfy the following approximation properties:

    inf_{φ_T ∈ S^{k,−1}(T)} ‖φ − φ_T‖₀ ≤ c h^{k+1} |φ|_{k+1}
        for all φ ∈ H^{k+1}(Ω), k ∈ N,

    inf_{φ_T ∈ S^{k,0}(T)} |φ − φ_T|_j ≤ c h^{k+1−j} |φ|_{k+1}
        for all φ ∈ H^{k+1}(Ω), j ∈ {0, 1}, k ∈ N*,

    inf_{φ_T ∈ S₀^{k,0}(T)} |φ − φ_T|_j ≤ c h^{k+1−j} |φ|_{k+1}
        for all φ ∈ H^{k+1}(Ω) ∩ H₀¹(Ω), j ∈ {0, 1}, k ∈ N*.
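The first estimate with k = 1 predicts an O(h²) L²-error, which is easy to observe for piecewise-linear interpolation in one dimension. A sketch, using (as an illustrative choice of mine) the smooth function sin(πx) on (0, 1):

```python
import math

# Empirical L2 convergence of piecewise-linear interpolation of
# sin(pi x) on a uniform grid of n cells: the error should be O(h^2).

def l2_error(n, m=50):
    err2 = 0.0
    for k in range(n):
        x0, x1 = k / n, (k + 1) / n
        f0, f1 = math.sin(math.pi * x0), math.sin(math.pi * x1)
        h = (x1 - x0) / m
        for j in range(m):
            # midpoint rule for the squared error on cell (x0, x1)
            x = x0 + (j + 0.5) * h
            interp = f0 + (f1 - f0) * (x - x0) / (x1 - x0)
            err2 += (math.sin(math.pi * x) - interp) ** 2 * h
    return math.sqrt(err2)

rate = l2_error(16) / l2_error(32)  # close to 4 = 2^2 for an O(h^2) error
```

Halving h divides the error by about 4, consistent with the exponent k + 1 = 2.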
I.2.10. Nodal shape functions. N_T denotes the set of all element vertices.
For any vertex x ∈ N_T the associated nodal shape function is denoted by λ_x. It is the unique function in S^{1,0}(T) that equals 1 at the vertex x and that vanishes at all other vertices y ∈ N_T \ {x}.
The support of a nodal shape function λ_x is denoted by ω_x and consists of all elements that share the vertex x (cf. Figure I.2.2).

[Figure I.2.2. Some examples of domains ω_x]

The nodal shape functions can easily be computed elementwise from the coordinates of the element's vertices.

[Figure I.2.3. Enumeration of the vertices of triangles and parallelograms]
Example I.2.11. (1) Consider a triangle K with vertices a₀, ..., a₂ numbered counterclockwise (cf. Figure I.2.3). Then the restrictions to K of the nodal shape functions λ_{a₀}, ..., λ_{a₂} are given by

    λ_{a_i}(x) = det(x − a_{i+1}, a_{i+2} − a_{i+1}) / det(a_i − a_{i+1}, a_{i+2} − a_{i+1}),  i = 0, ..., 2,

where all indices have to be taken modulo 3.
(2) Consider a parallelogram K with vertices a₀, ..., a₃ numbered counterclockwise (cf. Figure I.2.3). Then the restrictions to K of the nodal shape functions λ_{a₀}, ..., λ_{a₃} are given by

    λ_{a_i}(x) = [det(x − a_{i+2}, a_{i+3} − a_{i+2}) / det(a_i − a_{i+2}, a_{i+3} − a_{i+2})]
               · [det(x − a_{i+2}, a_{i+1} − a_{i+2}) / det(a_i − a_{i+2}, a_{i+1} − a_{i+2})],  i = 0, ..., 3,

where all indices have to be taken modulo 4.
(3) Consider a tetrahedron K with vertices a₀, ..., a₃ enumerated as in Figure I.2.4. Then the restrictions to K of the nodal shape functions λ_{a₀}, ..., λ_{a₃} are given by

    λ_{a_i}(x) = det(x − a_{i+1}, a_{i+2} − a_{i+1}, a_{i+3} − a_{i+1}) / det(a_i − a_{i+1}, a_{i+2} − a_{i+1}, a_{i+3} − a_{i+1}),  i = 0, ..., 3,

where all indices have to be taken modulo 4.
(4) Consider a parallelepiped K with vertices a₀, ..., a₇ enumerated as in Figure I.2.4. Then the restrictions to K of the nodal shape functions λ_{a₀}, ..., λ_{a₇} are given by

    λ_{a_i}(x) = [det(x − a_{i+1}, a_{i+3} − a_{i+1}, a_{i+5} − a_{i+1}) / det(a_i − a_{i+1}, a_{i+3} − a_{i+1}, a_{i+5} − a_{i+1})]
               · [det(x − a_{i+2}, a_{i+3} − a_{i+2}, a_{i+6} − a_{i+2}) / det(a_i − a_{i+2}, a_{i+3} − a_{i+2}, a_{i+6} − a_{i+2})]
               · [det(x − a_{i+4}, a_{i+5} − a_{i+4}, a_{i+6} − a_{i+4}) / det(a_i − a_{i+4}, a_{i+5} − a_{i+4}, a_{i+6} − a_{i+4})],  i = 0, ..., 7,

where all indices have to be taken modulo 8.

[Figure I.2.4. Enumeration of the vertices of tetrahedra and parallelepipeds (the vertex a₂ of the parallelepiped is hidden)]
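A sketch of formula (1) for the triangle, checking the nodal property λ_{a_i}(a_j) = δ_{ij} and the partition of unity at an interior point (the concrete vertex coordinates are an arbitrary choice):

```python
# Nodal shape functions on a triangle via the determinant formula of
# Example I.2.11 (1); vertices numbered counterclockwise, indices mod 3.

def det2(u, v):
    return u[0] * v[1] - u[1] * v[0]

def lam(i, x, a):
    # lambda_{a_i}(x) = det(x - a_{i+1}, a_{i+2} - a_{i+1})
    #                 / det(a_i - a_{i+1}, a_{i+2} - a_{i+1})
    p, q = a[(i + 1) % 3], a[(i + 2) % 3]
    num = det2((x[0] - p[0], x[1] - p[1]), (q[0] - p[0], q[1] - p[1]))
    den = det2((a[i][0] - p[0], a[i][1] - p[1]), (q[0] - p[0], q[1] - p[1]))
    return num / den

a = [(0.0, 0.0), (2.0, 0.0), (0.5, 1.5)]
deltas = [[lam(i, a[j], a) for j in range(3)] for i in range(3)]
partition = sum(lam(i, (0.8, 0.4), a) for i in range(3))  # equals 1
```

The matrix `deltas` is the 3×3 identity, and the three functions sum to 1, in line with Remark I.2.12 below on partitions of unity.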
Remark I.2.12. For every element (triangle, parallelogram, tetrahedron, or parallelepiped) the sum of all nodal shape functions corresponding to the element's vertices is identically equal to 1 on the element.
The functions λ_x, x ∈ N_T, form a basis of S^{1,0}(T). The bases of higher-order spaces S^{k,0}(T), k ≥ 2, consist of suitable products of functions λ_x corresponding to appropriate vertices x.
Example I.2.13. (1) Consider again a triangle K with its vertices
numbered as in Example I.2.11 (1). Then the nodal basis of S^{2,0}(T)|_K
consists of the functions

  λ_{a_i}[λ_{a_i} − λ_{a_{i+1}} − λ_{a_{i+2}}],  i = 0, …, 2,
  4λ_{a_i}λ_{a_{i+1}},  i = 0, …, 2,

where the functions λ_a are as in Example I.2.11 (1) and where all
indices have to be taken modulo 3. Another basis of S^{2,0}(T)|_K,
called the hierarchical basis, consists of the functions

  λ_{a_i},  i = 0, …, 2,
  4λ_{a_i}λ_{a_{i+1}},  i = 0, …, 2.
(2) Consider again a parallelogram K with its vertices numbered as
in Example I.2.11 (2). Then the nodal basis of S^{2,0}(T)|_K consists of
the functions

  λ_{a_i}[λ_{a_i} − λ_{a_{i+1}} + λ_{a_{i+2}} − λ_{a_{i+3}}],  i = 0, …, 3,
  4λ_{a_i}[λ_{a_{i+1}} − λ_{a_{i+2}}],  i = 0, …, 3,
  16λ_{a_0}λ_{a_2},

where the functions λ_a are as in Example I.2.11 (2) and where all
indices have to be taken modulo 4. The hierarchical basis of S^{2,0}(T)|_K
consists of the functions

  λ_{a_i},  i = 0, …, 3,
  4λ_{a_i}[λ_{a_{i+1}} − λ_{a_{i+2}}],  i = 0, …, 3,
  16λ_{a_0}λ_{a_2}.
(3) Consider again a tetrahedron K with its vertices numbered as in
Example I.2.11 (3). Then the nodal basis of S^{2,0}(T)|_K consists of
the functions

  λ_{a_i}[λ_{a_i} − λ_{a_{i+1}} − λ_{a_{i+2}} − λ_{a_{i+3}}],  i = 0, …, 3,
  4λ_{a_i}λ_{a_j},  0 ≤ i < j ≤ 3,

where the functions λ_a are as in Example I.2.11 (3) and where all
indices have to be taken modulo 4. The hierarchical basis consists of
the functions

  λ_{a_i},  i = 0, …, 3,
  4λ_{a_i}λ_{a_j},  0 ≤ i < j ≤ 3.
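A quick sketch can confirm the nodal property of the quadratic basis of part (1), i.e. that each basis function equals 1 at exactly one of the six nodes (three vertices and three edge midpoints) and 0 at the others. The triangle below is an arbitrary choice for illustration.

```python
import numpy as np

# Hypothetical triangle with vertices A[0], A[1], A[2].
A = [np.array([0.0, 0.0]), np.array([3.0, 0.0]), np.array([1.0, 2.0])]

def bary(x):
    """Barycentric coordinates lambda_{a_0..2}(x) of the point x."""
    M = np.array([[A[0][0], A[1][0], A[2][0]],
                  [A[0][1], A[1][1], A[2][1]],
                  [1.0, 1.0, 1.0]])
    return np.linalg.solve(M, np.array([x[0], x[1], 1.0]))

def quad_basis(x):
    """The six functions of Example I.2.13 (1): vertex functions
    lambda_i [lambda_i - lambda_{i+1} - lambda_{i+2}], then the
    edge functions 4 lambda_i lambda_{i+1}."""
    l = bary(x)
    vertex = [l[i] * (l[i] - l[(i + 1) % 3] - l[(i + 2) % 3]) for i in range(3)]
    edge = [4.0 * l[i] * l[(i + 1) % 3] for i in range(3)]
    return vertex + edge

# Nodes: vertices a_0..2 followed by the edge midpoints (a_i + a_{i+1})/2.
nodes = A + [(A[i] + A[(i + 1) % 3]) / 2.0 for i in range(3)]
for j, z in enumerate(nodes):
    vals = quad_basis(z)
    for k in range(6):
        assert abs(vals[k] - (1.0 if k == j else 0.0)) < 1e-12
print("quadratic basis is nodal at vertices and edge midpoints")
```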
I.2.11. A quasi-interpolation operator. We will frequently use
the quasi-interpolation operator R_T : L^1(Ω) → S^{1,0}_0(T) which is
defined by

  R_T φ = Σ_{x ∈ N_T ∩ Ω} λ_x (1/|ω_x|) ∫_{ω_x} φ dx.

Here, |ω_x| denotes the area, if n = 2, respectively the volume, if
n = 3, of the set ω_x, the union of all elements that share the
vertex x.
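The defining idea, replacing point values by patch averages, can be illustrated with a one-dimensional analogue (this 1D setting is an assumption for illustration, not part of the text): the coefficient at an interior vertex x_i is the mean value of φ over the patch ω_i = (x_{i−1}, x_{i+1}).

```python
import numpy as np

# 1D sketch of a Clement-type quasi-interpolation: nodal coefficients
# are patch means, so no point values of phi are ever used.
x = np.linspace(0.0, 1.0, 11)            # mesh vertices, h = 0.1

def patch_mean(phi, i, m=2000):
    """Mean value of phi over omega_i = (x_{i-1}, x_{i+1}),
    computed with the composite trapezoidal rule."""
    s = np.linspace(x[i - 1], x[i + 1], m + 1)
    v = phi(s)
    return (0.5 * v[0] + v[1:-1].sum() + 0.5 * v[-1]) / m

# Constants are reproduced exactly at every interior vertex.
for i in range(1, len(x) - 1):
    assert abs(patch_mean(lambda s: np.full_like(s, 3.0), i) - 3.0) < 1e-12

# For a smooth phi the averaged value is close to phi(x_i), although
# the operator is well defined for any phi in L^1.
assert abs(patch_mean(np.sin, 5) - np.sin(x[5])) < 1e-3
print("quasi-interpolation averaging sketch verified")
```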
The operator R_T satisfies the following local error estimates for all
φ ∈ H^1_0(Ω) and all elements K ∈ T:

  ‖φ − R_T φ‖_{L^2(K)} ≤ c_1 h_K ‖φ‖_{H^1(ω̄_K)},
  ‖φ − R_T φ‖_{L^2(∂K)} ≤ c_2 h_K^{1/2} ‖φ‖_{H^1(ω̄_K)}.

Here, ω̄_K denotes the set of all elements that share at least a vertex
with K (cf. Figure I.2.5).
Figure I.2.5. Examples of domains ω̄_K
Remark I.2.14. The operator R_T is called a quasi-interpolation
operator since it does not interpolate a given function φ at the
vertices x ∈ N_T. In fact, point values are not defined for H^1
functions. For functions with more regularity which are at least in
H^2(Ω), the situation is different. For those functions point values do
exist and the classical nodal interpolation operator
I_T : H^2(Ω) ∩ H^1_0(Ω) → S^{1,0}_0(T) can be defined by the relation
(I_T φ)(x) = φ(x) for all vertices x ∈ N_T.
I.2.12. Bubble functions. For any element K ∈ T we define an
element bubble function by

  ψ_K = α_K ∏_{x ∈ N_T ∩ K} λ_x,

  α_K = 27 if K is a triangle,
        256 if K is a tetrahedron,
        16 if K is a parallelogram,
        64 if K is a parallelepiped.

It has the following properties:

  0 ≤ ψ_K(x) ≤ 1 for all x ∈ K,
  ψ_K(x) = 0 for all x ∉ K,
  max_{x ∈ K} ψ_K(x) = 1.
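For the triangle case these properties are easy to verify in code: in barycentric coordinates the element bubble is ψ_K = 27 λ_0 λ_1 λ_2, and 27 = 3³ normalizes the maximum, which is attained at the barycenter.

```python
import numpy as np

# Element bubble on a triangle in barycentric coordinates (I.2.12).
def psi(l0, l1, l2):
    return 27.0 * l0 * l1 * l2

# Maximum value 1 at the barycenter (l0 = l1 = l2 = 1/3) ...
assert abs(psi(1/3, 1/3, 1/3) - 1.0) < 1e-12

# ... zero on the element boundary (some barycentric coordinate is 0) ...
assert psi(0.0, 0.4, 0.6) == 0.0

# ... and 0 <= psi_K <= 1 at random interior points.
rng = np.random.default_rng(0)
for _ in range(1000):
    l = rng.dirichlet([1.0, 1.0, 1.0])   # random barycentric triple
    assert 0.0 <= psi(*l) <= 1.0 + 1e-12
print("element bubble properties verified on the triangle")
```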
We denote by E_T the set of all edges, if n = 2, and of all faces, if
n = 3, of all elements in T. With each edge respectively face E ∈ E_T
we associate an edge respectively face bubble function by

  ψ_E = β_E ∏_{x ∈ N_T ∩ E} λ_x,

  β_E = 4 if E is a line segment,
        27 if E is a triangle,
        16 if E is a parallelogram.

It has the following properties:

  0 ≤ ψ_E(x) ≤ 1 for all x ∈ ω_E,
  ψ_E(x) = 0 for all x ∉ ω_E,
  max_{x ∈ ω_E} ψ_E(x) = 1.

Here ω_E is the union of all elements that share E (cf. Figure I.2.6).
Note that ω_E consists of two elements, if E is not contained in the
boundary Γ, and of exactly one element, if E is a subset of Γ.
Figure I.2.6. Examples of domains ω_E (E is marked bold.)
With each edge respectively face E ∈ E_T we finally associate a unit
vector n_E orthogonal to E and denote by [·]_E the jump across E in
direction n_E. If E is contained in the boundary Γ the orientation of
n_E is fixed to be the one of the exterior normal. Otherwise it is not
fixed.

Remark I.2.15. [·]_E depends on the orientation of n_E but quantities
of the form [n_E φ]_E are independent of this orientation.
CHAPTER II

Stationary linear problems

II.1. Discretization of the Stokes equations. A first attempt

II.1.1. The Poisson equation revisited. We recall the Poisson
equation

  −Δφ = f in Ω,
  φ = 0 on Γ,

and its variational formulation: Find φ ∈ H^1_0(Ω) such that

  ∫_Ω ∇φ · ∇ψ dx = ∫_Ω fψ dx

holds for all ψ ∈ H^1_0(Ω).
The Poisson equation and this variational formulation are equivalent
in the sense that any solution of the Poisson equation is a solution of
the variational problem and that, conversely, any solution of the
variational problem which is twice continuously differentiable is a
solution of the Poisson equation. Moreover, one can prove that the
variational problem admits a unique solution.
Standard finite element discretizations simply choose a partition T
of Ω and an integer k ≥ 1 and replace in the variational problem
H^1_0(Ω) by the finite dimensional space S^{k,0}_0(T). This gives rise
to a linear system of equations with a symmetric, positive definite,
square matrix, called the stiffness matrix. If, e.g., k = 1, the size
of the resulting discrete problem is given by the number of vertices in
T which do not lie on the boundary Γ.
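The procedure just described can be sketched in its simplest setting, a one-dimensional analogue −u'' = f on (0, 1) with u(0) = u(1) = 0, P1 elements on a uniform mesh, and the hypothetical right-hand side f = 1 (so the exact solution is u(x) = x(1 − x)/2). The unknowns are exactly the interior nodal values.

```python
import numpy as np

N = 16                                   # number of subintervals
h = 1.0 / N
x = np.linspace(0.0, 1.0, N + 1)

# Stiffness matrix (1/h) tridiag(-1, 2, -1) on the interior vertices;
# it is symmetric and positive definite, as stated above.
A = (np.diag(2.0 * np.ones(N - 1))
     - np.diag(np.ones(N - 2), 1)
     - np.diag(np.ones(N - 2), -1)) / h
b = h * np.ones(N - 1)                   # integral of f = 1 against each hat

u = np.linalg.solve(A, b)
exact = x[1:-1] * (1.0 - x[1:-1]) / 2.0

# For this 1D model problem the P1 solution is exact at the vertices.
assert np.max(np.abs(u - exact)) < 1e-10
print("P1 discretization reproduces the exact nodal values")
```

The nodal exactness shown here is a special feature of the 1D problem; in higher dimensions one only gets the convergence rates discussed later.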
II.1.2. A variational formulation of the Stokes equations.
Now we look at the Stokes equations of §I.1.14 (p. 16) with no-slip
boundary condition

  −Δu + grad p = f in Ω,
  div u = 0 in Ω,
  u = 0 on Γ.

The space

  V = {u ∈ H^1_0(Ω)^n : div u = 0}

is a subspace of H^1_0(Ω)^n which has similar properties. For every
p ∈ H^1(Ω) and any u ∈ V the integration by parts formulae of §I.2.3
(p. 19) imply that

  ∫_Ω ∇p · u dx = −∫_Ω p div u dx = 0.

We therefore obtain the following variational formulation of the
Stokes equations: Find u ∈ V such that

  ∫_Ω ∇u : ∇v dx = ∫_Ω f · v dx

holds for all v ∈ V.
As for the Poisson problem, any velocity field which solves the
Stokes equations also solves this variational problem. Moreover, one
can prove that the variational problem admits a unique solution. But we
have lost the pressure! This is an apparent gap between the
differential equation and the variational problem which is not present
for the Poisson equation.
II.1.3. A naive discretization of the Stokes equations.
Although the variational problem of the previous section does not
incorporate the pressure, it nevertheless gives information about the
velocity field. Therefore one may be tempted to use it as a starting
point for a discretization process similar to the one for the Poisson
equation. This would yield a symmetric, positive definite system of
linear equations for a discrete velocity field and would have the
appealing side-effect that standard solvers such as preconditioned
conjugate gradient algorithms would be available. The following example
shows that this approach leads to a dead end.
Figure II.1.1. Courant triangulation (the triangles K_0, K_1, K_2 are marked)
Example II.1.1. We consider the unit square Ω = (0, 1)^2 and divide
it into N^2 squares of equal size. Each square is further divided into
two triangles by connecting its top-left corner with its bottom-right
corner (cf. Figure II.1.1). This partition is known as a Courant
triangulation. The length of the triangles' short sides is h = 1/N.
For the discretization of the Stokes equations we replace V by
V(T) = S^{1,0}_0(T)^2 ∩ V, i.e. we use continuous, piecewise linear,
solenoidal finite element functions. Choose an arbitrary function
v_T ∈ V(T). We first consider the triangle K_0 which has the origin as
a vertex. Since all its vertices are situated on the boundary, we have
v_T = 0 on K_0. Next we consider the triangle K_1 which shares its long
side with K_0. Denote by z_1 the vertex of K_1 which lies in the
interior of Ω. Taking into account that v_T = 0 on Γ and div v_T = 0 in
K_1, applying the integration by parts formulae of §I.2.3 (p. 19) and
evaluating the line integrals with the help of the trapezoidal rule, we
conclude that

  0 = ∫_{K_1} div v_T dx = ∫_{∂K_1} v_T · n_{K_1} dS
    = (h/2) v_T(z_1) · (e_1 + e_2).

Here n_{K_1} is the unit exterior normal to K_1 and e_1, e_2 denote the
canonical basis vectors in R^2. Next we consider the triangle K_2,
which is adjacent to the vertical boundary of Ω and to K_1. With the
same arguments as for K_1 we conclude that

  0 = ∫_{K_2} div v_T dx = ∫_{∂K_2} v_T · n_{K_2} dS
    = (h/2) v_T(z_1) · (e_1 + e_2) − (h/2) v_T(z_1) · e_2
    = −(h/2) v_T(z_1) · e_2,

where the first term vanishes by the previous identity. Since
v_T(z_1) · e_1 = −v_T(z_1) · e_2 we obtain

  v_T(z_1) = 0

and consequently v_T = 0 on K_0 ∪ K_1 ∪ K_2. Repeating this argument,
we first conclude that v_T vanishes on all squares that are adjacent to
the left boundary of Ω and then that v_T vanishes on all of Ω. Hence we
have V(T) = {0}. Thus this space is not suited for the approximation
of V.
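The conclusion V(T) = {0} can also be checked by linear algebra: assemble the matrix B of the elementwise constraints "div v = 0 on each triangle" acting on the interior nodal velocities of the Courant triangulation, and verify that its null space is trivial. The sketch below does this for the small hypothetical value N = 4.

```python
import numpy as np

N = 4
h = 1.0 / N
idx = {}                                 # interior vertex -> first dof index
for j in range(1, N):
    for i in range(1, N):
        idx[(i, j)] = 2 * len(idx)
n_u = 2 * len(idx)                       # = 2 (N-1)^2 velocity unknowns

def grads(p):
    """Gradients of the three barycentric coordinates on a triangle."""
    M = np.array([[p[0][0], p[0][1], 1.0],
                  [p[1][0], p[1][1], 1.0],
                  [p[2][0], p[2][1], 1.0]])
    return np.linalg.solve(M, np.eye(3))[:2, :].T   # row k = grad lambda_k

rows = []
for j in range(N):
    for i in range(N):
        # split each square by its top-left -- bottom-right diagonal
        for tri in ([(i, j), (i + 1, j), (i, j + 1)],
                    [(i + 1, j), (i + 1, j + 1), (i, j + 1)]):
            g = grads([(a * h, b * h) for a, b in tri])
            row = np.zeros(n_u)
            for k, v in enumerate(tri):
                if v in idx:                   # boundary vertices carry v = 0
                    row[idx[v]] = g[k][0]      # d/dx of the first component
                    row[idx[v] + 1] = g[k][1]  # d/dy of the second component
            rows.append(row)
B = np.array(rows)

# rank B = n_u, so "div v = 0 elementwise" forces v = 0:  V(T) = {0}.
assert np.linalg.matrix_rank(B) == n_u
print("V(T) = {0} confirmed for N =", N)
```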
Remark II.1.2. One can prove that S^{k,0}_0(T)^2 ∩ V is suited for
the approximation of V only if k ≥ 5. This is no practical choice.
II.1.4. Possible remedies. The previous sections show that the
Poisson and Stokes equations are fundamentally different and that the
discretization of the Stokes problem is a much more delicate task than
that of the Poisson equation.
There are several possible remedies for the difficulties presented
above:
• Relax the divergence constraint: This leads to mixed finite elements.
• Add consistent penalty terms: This leads to stabilized Petrov-Galerkin formulations.
• Relax the continuity condition: This leads to non-conforming methods.
• Look for an equivalent differential equation without the divergence constraint: This leads to the stream-function formulation.
In the following sections we will investigate all these possibilities
and will see that each has its particular pitfalls.
II.2. Mixed finite element discretizations of the Stokes equations

II.2.1. Saddle-point formulation of the Stokes equations.
We recall the Stokes equations with no-slip boundary condition

  −Δu + grad p = f in Ω,
  div u = 0 in Ω,
  u = 0 on Γ.

If p is any pressure solving the Stokes equations and if c is any
constant, obviously p + c is also an admissible pressure for the Stokes
equations. Hence, the pressure is at most unique up to an additive
constant. We try to fix this constant by the normalization

  ∫_Ω p dx = 0.

If v ∈ H^1_0(Ω)^n and p ∈ H^1(Ω) are arbitrary, the integration by
parts formulae of §I.2.3 (p. 19) imply that

  ∫_Ω ∇p · v dx = ∫_Γ p v · n dS − ∫_Ω p div v dx = −∫_Ω p div v dx,

since v vanishes on Γ. These observations lead to the following mixed
variational formulation of the Stokes equations:
Find u ∈ H^1_0(Ω)^n and p ∈ L^2_0(Ω) such that

  ∫_Ω ∇u : ∇v dx − ∫_Ω p div v dx = ∫_Ω f · v dx for all v ∈ H^1_0(Ω)^n,
  ∫_Ω q div u dx = 0 for all q ∈ L^2_0(Ω).

It has the following properties:
• Any solution u, p of the Stokes equations is a solution of the above
variational problem.
• Any solution u, p of the variational problem which is sufficiently
smooth is a solution of the Stokes equations.
• The variational problem is the Euler-Lagrange equation corresponding
to the constrained minimization problem

  Minimize (1/2) ∫_Ω |∇u|^2 dx − ∫_Ω f · u dx
  subject to ∫_Ω q div u dx = 0 for all q ∈ L^2_0(Ω).

Hence it is a saddle-point problem, i.e. u is a minimizer, p is a
maximizer, and p is the Lagrange multiplier corresponding to the
constraint div u = 0.
A deep mathematical result is:

  The mixed variational formulation of the Stokes equations admits a
  unique solution u, p. This solution depends continuously on the
  force f, i.e.

    |u|_1 + ‖p‖_0 ≤ c_Ω ‖f‖_0

  with a constant c_Ω which only depends on the domain Ω.
II.2.2. General structure of mixed finite element
discretizations of the Stokes equations. We choose a partition T
of Ω and two associated finite element spaces X(T) ⊂ H^1_0(Ω)^n for the
velocity and Y(T) ⊂ L^2_0(Ω) for the pressure. Then we replace in the
mixed variational formulation of the Stokes equations the space
H^1_0(Ω)^n by X(T) and the space L^2_0(Ω) by Y(T).
This gives rise to the following general mixed finite element
discretization of the Stokes equations:
Find u_T ∈ X(T) and p_T ∈ Y(T) such that

  ∫_Ω ∇u_T : ∇v_T dx − ∫_Ω p_T div v_T dx = ∫_Ω f · v_T dx
    for all v_T ∈ X(T),
  ∫_Ω q_T div u_T dx = 0 for all q_T ∈ Y(T).
Remark II.2.1. Obviously, any particular mixed finite element
discretization of the Stokes equations is determined by specifying the
partition T and the spaces X(T) and Y(T). The condition
X(T) ⊂ H^1_0(Ω)^n implies that the discrete velocities are globally
continuous and vanish on the boundary. The condition Y(T) ⊂ L^2_0(Ω)
implies that the discrete pressures have vanishing mean value, i.e.
∫_Ω p_T dx = 0 for all p_T ∈ Y(T). The condition
∫_Ω q_T div u_T dx = 0 for all q_T ∈ Y(T) in general does not imply
div u_T = 0.
II.2.3. A first attempt. We consider the unit square Ω = (0, 1)^2
and the Courant triangulation of Example II.1.1 (p. 30). We choose

  X(T) = S^{1,0}_0(T)^2,
  Y(T) = S^{0,−1}(T) ∩ L^2_0(Ω),

i.e. a continuous, piecewise linear velocity approximation and a
piecewise constant pressure approximation on a triangular grid. Assume
that u_T, p_T is a solution of the discrete problem. Then div u_T is
piecewise constant. The condition ∫_Ω q_T div u_T dx = 0 for all
q_T ∈ Y(T) therefore implies div u_T = 0. Hence we conclude from
Example II.1.1 that u_T = 0. Thus this mixed finite element
discretization has the same defects as the one of Example II.1.1.
II.2.4. A necessary condition for a well-posed mixed
discretization. We consider an arbitrary mixed finite element
discretization of the Stokes equations with spaces X(T) for the
velocity and Y(T) for the pressure. A minimal requirement for such a
discretization obviously is its well-posedness, i.e. that it admits a
unique solution. Hence the stiffness matrix must be invertible.
Denote by n_u the dimension of X(T) and by n_p the one of Y(T) and
choose arbitrary bases for these spaces. Then the stiffness matrix has
the form

  ( A    B )
  ( B^T  0 )

with a symmetric positive definite n_u × n_u matrix A and a rectangular
n_u × n_p matrix B. Since A is symmetric positive definite, the
stiffness matrix is invertible if and only if the matrix B has rank
n_p. Since the rank of a rectangular k × m matrix is at most min{k, m},
this leads to the following necessary condition for a well-posed mixed
discretization:

  n_u ≥ n_p.

Let's have a look at the example of the previous section in the light
of this condition. Denote by N^2 the number of the squares. An easy
calculation then yields n_u = 2(N − 1)^2 and n_p = 2N^2 − 1. Hence we
have n_p > n_u and the discretization cannot be well-posed.
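The dimension counts behind this condition are simple enough to tabulate; the sketch below reproduces them for the P1/P0 pair above and for the Q1/Q0 pair of the next section (the formulas are taken directly from the text).

```python
# Dimension counts for the necessary condition n_u >= n_p on an N x N
# grid of the unit square.
def p1_p0_dims(N):
    n_u = 2 * (N - 1) ** 2      # two components per interior vertex
    n_p = 2 * N ** 2 - 1        # one constant per triangle, mean value zero
    return n_u, n_p

def q1_q0_dims(N):
    n_u = 2 * (N - 1) ** 2      # two components per interior vertex
    n_p = N ** 2 - 1            # one constant per square, mean value zero
    return n_u, n_p

for N in (4, 8, 16, 32):
    n_u, n_p = p1_p0_dims(N)
    assert n_p > n_u            # P1/P0 violates the condition for every N
    n_u, n_p = q1_q0_dims(N)
    assert n_u >= n_p           # Q1/Q0 satisfies it for N >= 4
print("P1/P0 always violates n_u >= n_p; Q1/Q0 satisfies it for N >= 4")
```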
II.2.5. A second attempt. As in §II.2.3 (p. 34) we consider the
unit square and divide it into N^2 squares of equal size with N even.
Contrary to §II.2.3 the squares are not further subdivided. We choose
the spaces

  X(T) = S^{1,0}_0(T)^2,
  Y(T) = S^{0,−1}(T) ∩ L^2_0(Ω),

i.e. a continuous, piecewise bilinear velocity approximation and a
piecewise constant pressure approximation on a quadrilateral grid. This
discretization is often referred to as the Q1/Q0 element.
A simple calculation yields the dimensions n_u = 2(N − 1)^2 and
n_p = N^2 − 1. Hence the condition n_u ≥ n_p of the previous section is
satisfied provided N ≥ 4.
  +1  −1  +1  −1
  −1  +1  −1  +1
  +1  −1  +1  −1
  −1  +1  −1  +1

Figure II.2.1. Checkerboard mode
Denote by K_{i,j}, 0 ≤ i, j ≤ N − 1, the square which has the lower
left corner (ih, jh) where h = 1/N. We now look at the particular
pressure p̂_T ∈ Y(T) which has the constant value (−1)^{i+j} on K_{i,j}
(cf. Figure II.2.1). It is frequently called the checkerboard mode.
Consider an arbitrary velocity v_T ∈ X(T). Using the integration by
parts formulae of §I.2.3 (p. 19) and evaluating the line integrals with
the trapezoidal rule we obtain

  ∫_{K_{i,j}} p̂_T div v_T dx
  = (−1)^{i+j} ∫_{K_{i,j}} div v_T dx
  = (−1)^{i+j} ∫_{∂K_{i,j}} v_T · n_{K_{i,j}} dS
  = (−1)^{i+j} (h/2) [ (v_T · e_1)((i+1)h, jh) + (v_T · e_1)((i+1)h, (j+1)h)
                     − (v_T · e_1)(ih, (j+1)h) − (v_T · e_1)(ih, jh)
                     + (v_T · e_2)((i+1)h, (j+1)h) + (v_T · e_2)(ih, (j+1)h)
                     − (v_T · e_2)(ih, jh) − (v_T · e_2)((i+1)h, jh) ].

Due to the homogeneous boundary condition, summation with respect to
all indices i, j yields

  ∫_Ω p̂_T div v_T dx = 0.

Since v_T was arbitrary, the solution of the discrete problem cannot be
unique: One may add p̂_T to any pressure solution and obtains a new
one.
This phenomenon is known as the checkerboard instability. It shows
that the condition of the previous section is only a necessary one and
does not imply the well-posedness of the discrete problem.
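The cancellation claimed above can be verified directly: for arbitrary Q1 nodal velocities vanishing on the boundary, the trapezoidal-rule boundary sums weighted with (−1)^{i+j} add up to zero, so the checkerboard pressure is orthogonal to every discrete divergence.

```python
import numpy as np

N = 6
h = 1.0 / N
rng = np.random.default_rng(1)
u = rng.standard_normal((N + 1, N + 1))   # nodal values of v_T . e_1
w = rng.standard_normal((N + 1, N + 1))   # nodal values of v_T . e_2
u[0, :] = u[-1, :] = u[:, 0] = u[:, -1] = 0.0   # no-slip boundary
w[0, :] = w[-1, :] = w[:, 0] = w[:, -1] = 0.0

# Sum of the square-by-square boundary integrals from the text.
total = 0.0
for i in range(N):
    for j in range(N):
        total += (-1) ** (i + j) * (h / 2) * (
            u[i + 1, j] + u[i + 1, j + 1] - u[i, j + 1] - u[i, j]
            + w[i + 1, j + 1] + w[i, j + 1] - w[i, j] - w[i + 1, j])
assert abs(total) < 1e-12
print("checkerboard mode is orthogonal to all discrete divergences")
```

Each interior nodal value enters four neighbouring squares with alternating signs, which is exactly why the sum collapses.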
II.2.6. The inf-sup condition. In 1974 Franco Brezzi published
the following necessary and sufficient condition for the well-posedness
of a mixed finite element discretization of the Stokes equations:

  inf_{p_T ∈ Y(T)\{0}} sup_{u_T ∈ X(T)\{0}}
    (∫_Ω p_T div u_T dx) / (|u_T|_1 ‖p_T‖_0) ≥ β > 0.

A pair X(T), Y(T) of finite element spaces satisfying this condition
is called stable. The same notion refers to the corresponding
discretization.
Remark II.2.2. In practice one usually considers not only a single
partition T, but complete families of partitions which are often
obtained by successive local or global refinements. Usually they are
labeled T_h where the index h indicates that the mesh-size gets smaller
and smaller. In this situation, the above condition of Franco Brezzi
must be satisfied uniformly, i.e. the number β must be independent of
the mesh-size.

Remark II.2.3. The above condition is known under various names. A
prosaic one is inf-sup condition. Another one, mainly used in western
countries, is Babuška-Brezzi condition. A third one, which is popular
in eastern Europe and Russia, is Ladyzhenskaya-Babuška-Brezzi
condition, in short LBB condition. The additional names honour older
results of Olga Ladyzhenskaya and Ivo Babuška which, in a certain
sense, laid the ground for the result of Franco Brezzi.
In the following sections we give a catalogue of various stable
elements. It incorporates the most popular elements, but it is not
complete. We distinguish between discontinuous and continuous pressure
approximations. The first variant sometimes gives a better
approximation of the incompressibility condition; the second variant
often leads to smaller discrete systems while retaining the accuracy
of a comparable discontinuous pressure approximation.
II.2.7. A stable low-order element with discontinuous pressure
approximation. This discretization departs from the unstable Q1/Q0
element. It is based on the observation that the instability of the
Q1/Q0 element is due to the fact that the velocity space cannot balance
pressure jumps across element boundaries. As a remedy the velocity
space is enriched by edge respectively face bubble functions (cf.
§I.2.12 (p. 27)) which give control on the pressure jumps.

  The pair
    X(T) = S^{1,0}_0(T)^n ⊕ span{ψ_E n_E : E ∈ E_T},
    Y(T) = S^{0,−1}(T) ∩ L^2_0(Ω)
  is stable.

Since the bubble functions ψ_E are contained in S^{n,0}(T) we obtain as
a corollary:

  The pair
    X(T) = S^{n,0}_0(T)^n,
    Y(T) = S^{0,−1}(T) ∩ L^2_0(Ω)
  is stable.
II.2.8. Stable higher-order elements with discontinuous
pressure approximation. For the following result we assume that T
exclusively consists of triangles or tetrahedra. Moreover, for every
positive integer ℓ we denote by

  P_ℓ = span{x_1^{α_1} · … · x_n^{α_n} : α_1 + … + α_n = ℓ}

the space of homogeneous polynomials of degree ℓ. With these
restrictions and notations the following result can be proven:

  For every m ≥ n the pair
    X(T) = [S^{m,0}_0(T) ⊕ span{ψ_K φ : K ∈ T, φ ∈ P_{m−2}}]^n,
    Y(T) = S^{m−1,−1}(T) ∩ L^2_0(Ω)
  is stable.

Since the element bubble functions ψ_K are polynomials of degree n + 1
we obtain as a corollary:

  For every m ≥ n the pair
    X(T) = S^{n+m−1,0}_0(T)^n,
    Y(T) = S^{m−1,−1}(T) ∩ L^2_0(Ω)
  is stable.
II.2.9. Stable low-order elements with continuous pressure
approximation. Given a partition T we denote by T_{1/2} the refined
partition which is obtained by connecting in every element the
midpoints of its edges. With this notation the following results were
established in the early 1980s:

  The mini element
    X(T) = [S^{1,0}_0(T) ⊕ span{ψ_K : K ∈ T}]^n,
    Y(T) = S^{1,0}(T) ∩ L^2_0(Ω)
  is stable.

  The Taylor-Hood element
    X(T) = S^{2,0}_0(T)^n,
    Y(T) = S^{1,0}(T) ∩ L^2_0(Ω)
  is stable.

  The modified Taylor-Hood element
    X(T) = S^{1,0}_0(T_{1/2})^n,
    Y(T) = S^{1,0}(T) ∩ L^2_0(Ω)
  is stable.
II.2.10. Stable higher-order elements with continuous pressure
approximation. The stability of higher order elements with continuous
pressure approximation was established only in the early 1990s:

  For every k ≥ 3 the higher order Taylor-Hood element
    X(T) = S^{k,0}_0(T)^n,
    Y(T) = S^{k−1,0}(T) ∩ L^2_0(Ω)
  is stable.
II.2.11. A priori error estimates. We consider a partition T
with mesh-size h and a stable pair X(T), Y(T) of corresponding finite
element spaces for the discretization of the Stokes equations. We
denote by k_u and k_p the maximal polynomial degrees k and m such that
S^{k,0}_0(T)^n ⊂ X(T) and either S^{m,−1}(T) ∩ L^2_0(Ω) ⊂ Y(T) when
using a discontinuous pressure approximation or
S^{m,0}(T) ∩ L^2_0(Ω) ⊂ Y(T) when using a continuous pressure
approximation. Set

  k = min{k_u − 1, k_p}.

Further we denote by u, p the unique solution of the saddle-point
formulation of the Stokes equations (cf. §II.2.1 (p. 32)) and by
u_T, p_T the unique solution of the discrete problem under
consideration.
With these notations the following a priori error estimates can be
proven:

  Assume that u ∈ H^{k+2}(Ω)^n ∩ H^1_0(Ω)^n and
  p ∈ H^{k+1}(Ω) ∩ L^2_0(Ω). Then

    |u − u_T|_1 + ‖p − p_T‖_0 ≤ c_1 h^{k+1} ‖f‖_k.

  If in addition Ω is convex, then

    ‖u − u_T‖_0 ≤ c_2 h^{k+2} ‖f‖_k.

  The constants c_1 and c_2 only depend on the domain Ω.
Example II.2.4. Table II.2.1 collects the numbers k_u, k_p, and k for
the examples of the previous sections.

  Table II.2.1. Parameters of various stable elements

  pair                               k_u   k_p   k
  §II.2.7                            1     0     0
  §II.2.8                            m     m−1   m−1
  mini element                       1     1     0
  Taylor-Hood element                2     1     1
  modified Taylor-Hood element       1     1     0
  higher order Taylor-Hood element   k     k−1   k−1
Remark II.2.5. The above regularity assumptions are not realistic.
For a convex polygonal domain, one in general only has
u ∈ H^2(Ω)^n ∩ H^1_0(Ω)^n and p ∈ H^1(Ω) ∩ L^2_0(Ω). Therefore,
independently of the actual discretization, one in general only gets an
O(h) error estimate. If Ω is not convex, but has re-entrant corners,
the situation is even worse: One in general only gets an O(h^α) error
estimate with an exponent 1/2 ≤ α < 1 which depends on the largest
interior angle at a boundary vertex of Ω. This poor convergence
behaviour can only be remedied by an adaptive grid refinement based on
an a posteriori error control.

Remark II.2.6. The differentiation index of the pressure always is one
less than the differentiation index of the velocity. Therefore,
discretizations with k_p = k_u − 1 are optimal in that they do not
waste degrees of freedom by choosing too large a velocity space or too
small a pressure space.
II.3. Petrov-Galerkin stabilization

II.3.1. Motivation. The results of §II.2 show that one can
stabilize a given pair of finite element spaces by either reducing the
number of degrees of freedom in the pressure space or by increasing the
number of degrees of freedom in the velocity space. This result is
reassuring but sometimes not practical since it either reduces the
accuracy of the pressure or increases the number of unknowns and thus
the computational work. Sometimes one would prefer to stick to a given
pair of spaces and to have at hand a different stabilization process
which neither changes the number of unknowns nor the accuracy of the
finite element spaces. A particular example for this situation is the
popular equal order interpolation where X(T) = S^{m,0}_0(T)^n and
Y(T) = S^{m,0}(T) ∩ L^2_0(Ω).
In this section we will devise a stabilization process with the
desired properties. For its motivation we will have another look at the
mini element of §II.2.9 (p. 38).
II.3.2. The mini element revisited. We consider a triangulation
T of a two dimensional domain Ω and set for abbreviation

  B(T) = [span{ψ_K : K ∈ T}]^2.

The discretization of the Stokes equations with the mini element of
§II.2.9 (p. 38) then takes the form:
Find u_T ∈ S^{1,0}_0(T)^2 ⊕ B(T) and p_T ∈ S^{1,0}(T) ∩ L^2_0(Ω) such
that

  ∫_Ω ∇u_T : ∇v_T dx − ∫_Ω p_T div v_T dx = ∫_Ω f · v_T dx
    for all v_T ∈ S^{1,0}_0(T)^2 ⊕ B(T),
  ∫_Ω q_T div u_T dx = 0
    for all q_T ∈ S^{1,0}(T) ∩ L^2_0(Ω).

For simplicity we assume that f ∈ L^2(Ω)^2.
We split the velocity u_T into a "linear part" u_{T,L} ∈ S^{1,0}_0(T)^2
and a "bubble part" u_{T,B} ∈ B(T):

  u_T = u_{T,L} + u_{T,B}.

Since u_{T,B} vanishes on the element boundaries, integration by parts
yields for every v_T ∈ S^{1,0}_0(T)^2

  ∫_Ω ∇u_{T,B} : ∇v_T dx = Σ_{K∈T} ∫_K ∇u_{T,B} : ∇v_T dx
                         = −Σ_{K∈T} ∫_K u_{T,B} · Δv_T dx = 0,

since Δv_T vanishes elementwise.
Hence, we have

  ∫_Ω ∇u_{T,L} : ∇v_T dx − ∫_Ω p_T div v_T dx = ∫_Ω f · v_T dx
    for all v_T ∈ S^{1,0}_0(T)^2.
The bubble part of the velocity has the representation

  u_{T,B} = Σ_{K∈T} α_K ψ_K

with coefficients α_K ∈ R^2. Inserting the test function v_T = e_i ψ_K
with i ∈ {1, 2} and K ∈ T in the momentum equation, we obtain

  ∫_K f · e_i ψ_K dx
  = ∫_Ω f · (ψ_K e_i) dx
  = ∫_Ω ∇u_{T,L} : ∇(ψ_K e_i) dx + ∫_Ω ∇u_{T,B} : ∇(ψ_K e_i) dx
    − ∫_Ω p_T div(ψ_K e_i) dx
  = α_{K,i} ∫_K |∇ψ_K|^2 dx + ∫_K (∂p_T/∂x_i) ψ_K dx,  i = 1, 2,

where the first term after the second equality sign vanishes as above
and the pressure term has been integrated by parts. Thus

  α_K = (∫_K [f − ∇p_T] ψ_K dx) (∫_K |∇ψ_K|^2 dx)^{−1}

for all K ∈ T. For abbreviation we set

  γ̃_K = (∫_K |∇ψ_K|^2 dx)^{−1}.
Next we insert the representation of u_{T,B} in the continuity
equation. This yields for all q_T ∈ S^{1,0}(T) ∩ L^2_0(Ω)

  0 = ∫_Ω q_T div u_T dx
    = ∫_Ω q_T div u_{T,L} dx + Σ_{K∈T} ∫_K q_T div(α_K ψ_K) dx
    = ∫_Ω q_T div u_{T,L} dx
      − Σ_{K∈T} γ̃_K (∫_K ψ_K ∇q_T dx) · (∫_K ψ_K [f − ∇p_T] dx),

where we used ∫_K q_T div(α_K ψ_K) dx = −∫_K ψ_K α_K · ∇q_T dx and the
above representation of α_K.
This proves that u_{T,L} ∈ S^{1,0}_0(T)^2, p_T ∈ S^{1,0}(T) ∩ L^2_0(Ω)
solve the modified problem

  ∫_Ω ∇u_{T,L} : ∇v_T dx − ∫_Ω p_T div v_T dx = ∫_Ω f · v_T dx
    for all v_T ∈ S^{1,0}_0(T)^2,
  ∫_Ω q_T div u_{T,L} dx + c_T(p_T, q_T) = χ_T(q_T)
    for all q_T ∈ S^{1,0}(T) ∩ L^2_0(Ω),

where

  c_T(p_T, q_T) = Σ_{K∈T} γ_K ∫_K ∇p_T · ∇q_T dx,
  χ_T(q_T) = Σ_{K∈T} γ̃_K (∫_K ψ_K ∇q_T dx) · (∫_K ψ_K f dx),
  γ_K = γ̃_K |K|^{−1} (∫_K ψ_K dx)^2.

Note that γ_K is proportional to h_K^2.
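The scaling γ_K ∝ h_K² can be verified exactly on triangles using the classical barycentric integration formula ∫_K λ_1^a λ_2^b λ_3^c dx = 2|K| a! b! c! / (a+b+c+2)! together with ψ_K = 27 λ_1 λ_2 λ_3 from §I.2.12. The triangle below is a hypothetical reference choice.

```python
import numpy as np
from math import factorial

def tri_data(p):
    """Area and gradients of the barycentric coordinates of triangle p."""
    M = np.array([[p[0][0], p[0][1], 1.0],
                  [p[1][0], p[1][1], 1.0],
                  [p[2][0], p[2][1], 1.0]])
    area = 0.5 * abs(np.linalg.det(M))
    G = np.linalg.solve(M, np.eye(3))[:2, :].T   # row i = grad lambda_i
    return area, G

def bary_int(area, a, b, c):
    """Exact integral of l1^a l2^b l3^c over the triangle."""
    return 2.0 * area * factorial(a) * factorial(b) * factorial(c) \
        / factorial(a + b + c + 2)

def gammas(p):
    area, G = tri_data(p)
    # grad psi_K = 27 * sum_i (product of the two other lambdas) grad l_i
    e = np.eye(3, dtype=int)
    grad2 = 0.0
    for i in range(3):
        for j in range(3):
            expo = (1 - e[i]) + (1 - e[j])       # exponents of l1, l2, l3
            grad2 += 27.0 ** 2 * bary_int(area, *expo) * (G[i] @ G[j])
    g_tilde = 1.0 / grad2                        # (int |grad psi_K|^2)^-1
    int_psi = 27.0 * bary_int(area, 1, 1, 1)     # = 0.45 * area
    return g_tilde, g_tilde * (int_psi ** 2) / area

p = [np.array([0.0, 0.0]), np.array([1.0, 0.0]), np.array([0.0, 1.0])]
g1, gam1 = gammas(p)
g2, gam2 = gammas([2.0 * q for q in p])          # triangle scaled by s = 2
assert abs(gam2 / gam1 - 4.0) < 1e-12            # gamma_K ~ h_K^2
print("gamma_K scales like h_K^2, as claimed")
```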
The new problem can be interpreted as a P1/P1 discretization of the
differential equation

  −Δu + grad p = f in Ω,
  div u − αΔp = −α div f in Ω,
  u = 0 on Γ,

with a suitable penalty parameter α. Taking into account that Δu_T
vanishes elementwise, the discrete problem does not change if we also
add the term αΔ div u to the left-hand side of the second equation.
This shows that in total we may add the divergence of the momentum
equation as a penalty. Since the penalty vanishes for the exact
solution of the Stokes equations, this approach is also called a
consistent penalty.
II.3.3. General form of Petrov-Galerkin stabilizations. Given
a partition T of Ω we choose two corresponding finite element spaces
X(T) ⊂ H^1_0(Ω)^n and Y(T) ⊂ L^2_0(Ω). For every element K ∈ T and
every edge respectively face E ∈ E_T we further choose non-negative
parameters δ_K and δ_E.
With these notations we consider the following discrete problem:
Find u_T ∈ X(T) and p_T ∈ Y(T) such that

  ∫_Ω ∇u_T : ∇v_T dx − ∫_Ω p_T div v_T dx = ∫_Ω f · v_T dx,

  ∫_Ω q_T div u_T dx
    + Σ_{K∈T} δ_K h_K^2 ∫_K [−Δu_T + ∇p_T] · ∇q_T dx
    + Σ_{E∈E_T} δ_E h_E ∫_E [p_T]_E [q_T]_E dS
    = Σ_{K∈T} δ_K h_K^2 ∫_K f · ∇q_T dx

for all v_T ∈ X(T) and all q_T ∈ Y(T). Recall that [·]_E denotes the
jump across E.

Remark II.3.1. The terms involving δ_K correspond to the equation

  div[−Δu + ∇p] = div f.

The terms involving δ_E are only relevant for discontinuous pressure
approximations. When using a continuous pressure approximation, they
vanish.
II.3.4. Choice of stabilization parameters. With the notations
of the previous section we set

  δ_max = max{max_{K∈T} δ_K, max_{E∈E_T} δ_E},

  δ_min = min{min_{K∈T} δ_K, min_{E∈E_T} δ_E} if the pressures are discontinuous,
  δ_min = min_{K∈T} δ_K if the pressures are continuous.

A good choice of the stabilization parameters then is determined by the
condition

  δ_max ≈ δ_min.
II.3.5. Choice of spaces. The a priori error estimates are
optimal when the polynomial degree of the pressure approximation is one
less than the polynomial degree of the velocity approximation. Hence a
standard choice is

  X(T) = S^{k,0}_0(T)^n,
  Y(T) = S^{k−1,0}(T) ∩ L^2_0(Ω) for a continuous pressure approximation,
  Y(T) = S^{k−1,−1}(T) ∩ L^2_0(Ω) for a discontinuous pressure approximation,

where k ≥ 1 for discontinuous pressure approximations and k ≥ 2 for
continuous pressure approximations. These choices yield O(h^k) error
estimates provided the solution of the Stokes equations is sufficiently
smooth.
II.3.6. Structure of the discrete problem. The stiffness matrix
of a Petrov-Galerkin scheme has the form

  (  A   B )
  ( −B^T C )

with symmetric, positive definite, square matrices A and C and a
rectangular matrix B. The entries of C are by a factor of h^2 smaller
than those of A. Recall that C = 0 for the discretizations of §II.2.
II.4. Non-conforming methods

II.4.1. Motivation. Up to now we have always imposed the
condition X(T) ⊂ H^1_0(Ω)^n, i.e. the discrete velocities had to be
continuous. Now we will relax this condition. We hope that we will be
rewarded by fewer difficulties with respect to the incompressibility
constraint.
One can prove that, nevertheless, the velocities must enjoy some
minimal continuity: On each edge the mean value of the velocity jump
across this edge must vanish. Otherwise the consistency error will be
at least of order O(1).
II.4.2. The Crouzeix-Raviart element. We consider a
triangulation T of a two dimensional domain Ω and set

  X(T) = {v_T : Ω → R^2 : v_T|_K ∈ R_1(K) for all K ∈ T,
          v_T is continuous at the midpoints of interior edges,
          v_T vanishes at the midpoints of boundary edges},
  Y(T) = S^{0,−1}(T) ∩ L^2_0(Ω).

This pair of spaces is called the Crouzeix-Raviart element. Its
degrees of freedom are the velocity vectors at the midpoints of
interior edges and the pressures at the barycentres of elements. The
corresponding discrete problem is:
Find u_T ∈ X(T) and p_T ∈ Y(T) such that

  Σ_{K∈T} ∫_K ∇u_T : ∇v_T dx − Σ_{K∈T} ∫_K p_T div v_T dx = ∫_Ω f · v_T dx,
  Σ_{K∈T} ∫_K q_T div u_T dx = 0

for all v_T ∈ X(T) and all q_T ∈ Y(T).
The following results can be proven:

  The Crouzeix-Raviart discretization admits a unique solution u_T, p_T.
  The continuity equation div u_T = 0 is satisfied elementwise.
  If Ω is convex, the following error estimates hold:

    ( Σ_{K∈T} |u − u_T|^2_{H^1(K)} )^{1/2} + ‖p − p_T‖_0 = O(h),
    ‖u − u_T‖_0 = O(h^2).

However, the Crouzeix-Raviart discretization has several drawbacks too:
• Its accuracy deteriorates drastically in the presence of re-entrant corners.
• It has no higher order equivalent.
• It has no three-dimensional equivalent.
II.4.3. Construction of a local solenoidal basis. One of the
main attractions of the Crouzeix-Raviart element is the fact that it
allows the construction of a local solenoidal basis for the velocity
space and that in this way the computation of the velocity and pressure
can be decoupled.
For the construction of the solenoidal basis we denote by
• NT the number of triangles,
• NE_0 the number of interior edges,
• NV_0 the number of interior vertices,
• V(T) = {u_T ∈ X(T) : div u_T = 0} the space of solenoidal velocities.
The quantities NT, NE_0, and NV_0 are connected via Euler's formula

  NT − NE_0 + NV_0 = 1.
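Euler's formula and the dimension count below it are easy to check on a concrete family of meshes, e.g. the Courant triangulation of the unit square with N² squares from Example II.1.1 (an assumed test case, since the formula holds for any triangulation of a simply connected domain).

```python
# Counts for the Courant triangulation of the unit square.
def courant_counts(N):
    NT = 2 * N * N                        # two triangles per square
    NV0 = (N - 1) ** 2                    # interior vertices
    # interior edges: horizontal, vertical, and one diagonal per square
    NE0 = N * (N - 1) + N * (N - 1) + N * N
    return NT, NE0, NV0

for N in (1, 2, 5, 10):
    NT, NE0, NV0 = courant_counts(N)
    assert NT - NE0 + NV0 == 1            # Euler's formula
    # dimension count for the solenoidal Crouzeix-Raviart space:
    assert 2 * NE0 - (NT - 1) == NE0 + NV0
print("Euler's formula and the dimension count check out")
```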
Since X(T ) and Y (T ) satisfy the infsup condition, an elementary
calculation using this formula yields
dimV (T ) = dimX(T ) −dimY (T )
= 2NE
0
−(NT −1)
= NE
0
+ NV
0
.
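The intermediate steps of this calculation can be spelled out explicitly: $\dim X(\mathcal{T}) = 2\,NE_0$, since every interior-edge midpoint carries two velocity components, and $\dim Y(\mathcal{T}) = NT - 1$, since the piecewise constant pressures satisfy one mean-value constraint. The cancellation then is:

```latex
\dim V(\mathcal T)
  = \underbrace{2\,NE_0}_{\dim X(\mathcal T)} - \underbrace{(NT-1)}_{\dim Y(\mathcal T)}
  = NE_0 + \underbrace{\bigl(NE_0 - NT + 1\bigr)}_{=\, NV_0 \text{ by Euler's formula}}
  = NE_0 + NV_0 .
```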
With every edge $E \in \mathcal{E}_\mathcal{T}$ we associate a unit tangential vector $t_E$ and a piecewise linear function $\varphi_E$ which equals 1 at the midpoint of $E$ and which vanishes at all other midpoints of edges. With this notation we set
\[
w_E = \varphi_E t_E \quad \text{for all } E \in \mathcal{E}_\mathcal{T}.
\]
These functions obviously are linearly independent. For every triangle $K$ and every edge $E$ we have
\[
\int_K \operatorname{div} w_E \, dx = \int_{\partial K} n_K \cdot w_E \, dS = 0.
\]
Since $\operatorname{div} w_E$ is piecewise constant, this implies that the functions $w_E$ are contained in $V(\mathcal{T})$.
With every vertex $x \in \mathcal{N}_\mathcal{T}$ we associate the set $\mathcal{E}_x$ of all edges which have $x$ as an endpoint. Starting with an arbitrary edge in $\mathcal{E}_x$ we enumerate the remaining edges in $\mathcal{E}_x$ consecutively in counter-clockwise orientation. For $E \in \mathcal{E}_x$ we denote by $t_{E,x}$ and $n_{E,x}$ two orthogonal unit vectors such that $t_{E,x}$ is tangential to $E$, points away from $x$, and satisfies $\det(t_{E,x}, n_{E,x}) > 0$. With these notations we set (cf. Figure II.4.1)
\[
w_x = \sum_{E \in \mathcal{E}_x} \frac{1}{|E|} \varphi_E n_{E,x}.
\]
The functions $w_x$ obviously are linearly independent. Since the $w_E$ are tangential to the edges and the $w_x$ are normal to the edges, both sets of functions together are linearly independent.
[Figure II.4.1. The function $w_x$ (diagram not reproduced).]
Next consider a triangle $K \in \mathcal{T}$ and a vertex $x \in \mathcal{N}_\mathcal{T}$ of $K$ and denote by $E_1$, $E_2$ the two edges of $K$ sharing the vertex $x$. Denote by $m_{E_i}$ the midpoint of $E_i$, $i = 1, 2$. Using the integration by parts formulae of §I.2.3 (p. 19) and evaluating line integrals with the help of the midpoint formula, we conclude that
\[
\int_K \operatorname{div} w_x \, dx = \int_{\partial K} w_x \cdot n_K \, dS
= \sum_{i=1}^2 |E_i| \, w_x(m_{E_i}) \cdot n_K
= \sum_{i=1}^2 n_{E_i,x} \cdot n_K = 0.
\]
This proves that $w_x \in V(\mathcal{T})$.
Hence we have
\[
V(\mathcal{T}) = \operatorname{span} \{ w_x, w_E : x \in \mathcal{N}_\mathcal{T}, \; E \in \mathcal{E}_\mathcal{T} \}
\]
and the discrete velocity field of the Crouzeix-Raviart discretization is given by the problem:

Find $u_T \in V(\mathcal{T})$ such that
\[
\sum_{K \in \mathcal{T}} \int_K \nabla u_T : \nabla v_T \, dx = \int_\Omega f \cdot v_T \, dx
\]
for all $v_T \in V(\mathcal{T})$.
This is a linear system of equations with $NE_0 + NV_0$ equations and unknowns and with a symmetric, positive definite stiffness matrix. The stiffness matrix, however, has a condition of $O(h^{-4})$ compared with a condition of $O(h^{-2})$ of the larger, symmetric, but indefinite stiffness matrix of the original mixed problem.
The above problem only yields the velocity. The pressure, however, can be computed by a simple post-processing process. To describe it, denote by $m_E$ the midpoint of any edge $E \in \mathcal{E}_\mathcal{T}$. Using the integration by parts formulae of §I.2.3 (p. 19) and evaluating line integrals with the help of the midpoint formula, we conclude that
\[
\int_\Omega f \cdot \varphi_E n_E \, dx - \sum_{K \in \mathcal{T}} \int_K \nabla u_T : \nabla(\varphi_E n_E) \, dx
= - \sum_{K \in \mathcal{T}} \int_K p_T \operatorname{div}(\varphi_E n_E) \, dx
= - \int_E [\varphi_E p_T]_E \, dS
= - |E| \, [\varphi_E(m_E) p_T]_E
= - |E| \, [p_T]_E.
\]
Hence, we can compute the pressure jumps edge by edge by evaluating the left-hand side of this equation. Starting with a triangle $K$ which has an edge on the boundary $\Gamma$, we set $\tilde p_T = 0$ on this triangle and compute it on the remaining triangles by passing through adjacent triangles and adding the pressure jump corresponding to the common edge. Finally, we compute
\[
P = \sum_{K \in \mathcal{T}} \frac{|K|}{|\Omega|} \, \tilde p_T|_K
\]
and subtract $P$ from $\tilde p_T$. This yields the pressure $p_T$ of the Crouzeix-Raviart discretization, which has mean value zero.
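The final mean-value correction is easy to sketch in code. The following fragment is illustrative only (class and variable names are not from the lecture code); it subtracts the weighted mean $P$ from provisional elementwise pressures stored one value per triangle.

```java
public class PressureNormalize {
    // Subtract the |Omega|-weighted mean from provisional elementwise
    // pressures so that the result has mean value zero.
    static void normalize(double[] pTilde, double[] area) {
        double omega = 0.0, P = 0.0;
        for (int k = 0; k < pTilde.length; k++) {
            omega += area[k];          // |Omega| = sum of element areas |K|
            P += area[k] * pTilde[k];  // weighted sum of element pressures
        }
        P /= omega;                    // P = sum_K (|K|/|Omega|) pTilde|_K
        for (int k = 0; k < pTilde.length; k++)
            pTilde[k] -= P;
    }

    public static void main(String[] args) {
        double[] p = { 1.0, 3.0 };     // provisional pressures on two triangles
        double[] a = { 0.5, 0.5 };     // element areas
        normalize(p, a);
        System.out.println(p[0] + " " + p[1]); // mean value is now zero
    }
}
```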
II.5. Streamfunction formulation
II.5.1. Motivation. The methods of the previous sections all discretize the variational formulation of the Stokes equations of §II.1.2 (p. 29) and yield simultaneously approximations for the velocity and pressure. The discrete velocity fields in general are not solenoidal, i.e. the incompressibility constraint is satisfied only approximately. In §II.4 we obtained an exactly solenoidal approximation but had to pay for this by abandoning the conformity. In this section we will consider another variational formulation of the Stokes equations which leads to conforming solenoidal discretizations. As we will see, this advantage has to be paid for by other drawbacks.
Throughout this section we assume that $\Omega$ is a two-dimensional, simply connected polygonal domain.
II.5.2. The curl operators. The subsequent analysis is based on two curl operators which correspond to the rot operator in three dimensions and which are defined as follows:
\[
\operatorname{curl} \varphi = \begin{pmatrix} - \dfrac{\partial \varphi}{\partial y} \\[4pt] \dfrac{\partial \varphi}{\partial x} \end{pmatrix},
\qquad
\operatorname{curl} v = \frac{\partial v_1}{\partial y} - \frac{\partial v_2}{\partial x}.
\]
They fulfil the following chain rules
\[
\operatorname{curl}(\operatorname{curl} \varphi) = -\Delta \varphi \quad \text{for all } \varphi \in H^2(\Omega),
\]
\[
\operatorname{curl}(\operatorname{curl} v) = -\Delta v + \nabla(\operatorname{div} v) \quad \text{for all } v \in H^2(\Omega)^2,
\]
and the following integration by parts formula
\[
\int_\Omega v \cdot \operatorname{curl} \varphi \, dx = \int_\Omega \operatorname{curl} v \, \varphi \, dx + \int_\Gamma \varphi \, v \cdot t \, dS
\quad \text{for all } v \in H^1(\Omega)^2, \; \varphi \in H^1(\Omega).
\]
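The first chain rule can be verified by a direct computation from the definitions above:

```latex
\operatorname{curl}(\operatorname{curl}\varphi)
  = \operatorname{curl}\begin{pmatrix} -\partial_y \varphi \\ \partial_x \varphi \end{pmatrix}
  = \partial_y\bigl(-\partial_y \varphi\bigr) - \partial_x\bigl(\partial_x \varphi\bigr)
  = -\Delta \varphi .
```

The vector-valued identity follows by the same componentwise computation.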
Here $t$ denotes a unit tangent vector to the boundary $\Gamma$.
The following deep mathematical result is fundamental:

A vector field $v : \Omega \to \mathbb{R}^2$ is solenoidal, i.e. $\operatorname{div} v = 0$, if and only if there is a unique stream function $\varphi : \Omega \to \mathbb{R}$ such that $v = \operatorname{curl} \varphi$ in $\Omega$ and $\varphi = 0$ on $\Gamma$.
II.5.3. Streamfunction formulation of the Stokes equations. Let $u$, $p$ be the solution of the Stokes equations with exterior force $f$ and homogeneous boundary conditions and denote by $\psi$ the stream function corresponding to $u$. Since
\[
u \cdot t = 0 \quad \text{on } \Gamma,
\]
we conclude that in addition
\[
\frac{\partial \psi}{\partial n} = t \cdot \operatorname{curl} \psi = 0 \quad \text{on } \Gamma.
\]
Inserting this representation of $u$ in the momentum equation and applying the operator curl, we obtain
\[
\operatorname{curl} f = \operatorname{curl} \{ -\Delta u + \nabla p \}
= -\Delta (\operatorname{curl} u) + \underbrace{\operatorname{curl}(\nabla p)}_{=0}
= -\Delta (\operatorname{curl}(\operatorname{curl} \psi))
= \Delta^2 \psi.
\]
This proves that the stream function $\psi$ solves the biharmonic equation
\[
\Delta^2 \psi = \operatorname{curl} f \quad \text{in } \Omega, \qquad
\psi = 0 \quad \text{on } \Gamma, \qquad
\frac{\partial \psi}{\partial n} = 0 \quad \text{on } \Gamma.
\]
Conversely, one can prove: if $\psi$ solves the above biharmonic equation, there is a unique pressure $p$ with mean value 0 such that $u = \operatorname{curl} \psi$ and $p$ solve the Stokes equations. In this sense, the Stokes equations and the biharmonic equation are equivalent.

Remark II.5.1. Given a solution $\psi$ of the biharmonic equation and the corresponding velocity $u = \operatorname{curl} \psi$, the pressure is determined by the equation $f + \Delta u = \nabla p$. But there is no constructive way to solve this problem. Hence, the biharmonic equation is only capable of yielding the velocity field of the Stokes equations.
II.5.4. Variational formulation of the biharmonic equation. With the help of the Sobolev space
\[
H^2_0(\Omega) = \Bigl\{ \varphi \in H^2(\Omega) : \varphi = \frac{\partial \varphi}{\partial n} = 0 \text{ on } \Gamma \Bigr\}
\]
the variational formulation of the biharmonic equation of the previous section is given by:

Find $\psi \in H^2_0(\Omega)$ such that
\[
\int_\Omega \Delta \psi \, \Delta \varphi \, dx = \int_\Omega \operatorname{curl} f \, \varphi \, dx
\]
for all $\varphi \in H^2_0(\Omega)$.
Remark II.5.2. The above variational problem is the Euler-Lagrange equation corresponding to the minimization problem
\[
J(\psi) = \frac12 \int_\Omega (\Delta \psi)^2 \, dx - \int_\Omega \operatorname{curl} f \, \psi \, dx \to \min \quad \text{in } H^2_0(\Omega).
\]
Therefore it is tempting to discretize the biharmonic equation by replacing in its variational formulation the space $H^2_0(\Omega)$ by a finite element space $X(\mathcal{T}) \subset H^2_0(\Omega)$. But this would require $C^1$ elements! The lowest polynomial degree for which $C^1$ elements exist is five!
II.5.5. A nonconforming discretization of the biharmonic equation. One possible remedy to the difficulties described in Remark II.5.2 is to drop the $C^1$ continuity of the finite element functions. This leads to nonconforming discretizations of the biharmonic equation. The most popular one is based on the so-called Morley element. It is a triangular element, i.e. the partition $\mathcal{T}$ exclusively consists of triangles. The corresponding finite element space is given by
\[
M(\mathcal{T}) = \{ \varphi \in S^{2,-1}(\mathcal{T}) : \varphi \text{ is continuous at the vertices};\;
n_E \cdot \nabla \varphi \text{ is continuous at the midpoints of edges} \}.
\]
The degrees of freedom are
• the values of $\varphi$ at the vertices and
• the values of $n_E \cdot \nabla \varphi$ at the midpoints of edges.
The discrete problem is given by:

Find $\psi_T \in M(\mathcal{T})$ such that
\[
\sum_{K \in \mathcal{T}} \int_K \Delta \psi_T \, \Delta \varphi_T \, dx
= \sum_{K \in \mathcal{T}} \int_K \operatorname{curl} f \, \varphi_T \, dx
\]
for all $\varphi_T \in M(\mathcal{T})$.
One can prove that this discrete problem admits a unique solution. The stiffness matrix is symmetric, positive definite, and has a condition of $O(h^{-4})$.
There is a close relation between this problem and the Crouzeix-Raviart discretization of §II.4 (p. 44):

If $\psi_T \in M(\mathcal{T})$ is the solution of the Morley element discretization of the biharmonic equation, then $u_T = \operatorname{curl} \psi_T$ is the solution of the Crouzeix-Raviart discretization of the Stokes equations.
II.5.6. Mixed finite element discretizations of the biharmonic equation. Another remedy to the difficulties described in Remark II.5.2 (p. 51) is to use a saddle-point formulation of the biharmonic equation and a corresponding mixed finite element discretization. This saddle-point formulation is obtained by introducing the vorticity $\omega = \operatorname{curl} u$. It is given by:

• Find $\psi \in H^1_0(\Omega)$, $\omega \in L^2(\Omega)$ and $\lambda \in H^1(\Omega)$ such that
\[
\int_\Omega \operatorname{curl} \lambda \cdot \operatorname{curl} \varphi \, dx = \int_\Omega f \cdot \operatorname{curl} \varphi \, dx,
\]
\[
\int_\Omega \omega \theta \, dx - \int_\Omega \lambda \theta \, dx = 0,
\]
\[
\int_\Omega \operatorname{curl} \psi \cdot \operatorname{curl} \mu \, dx - \int_\Omega \omega \mu \, dx = 0
\]
for all $\varphi \in H^1_0(\Omega)$, $\theta \in L^2(\Omega)$ and $\mu \in H^1(\Omega)$,
• and find $p \in H^1(\Omega) \cap L^2_0(\Omega)$ such that
\[
\int_\Omega \nabla p \cdot \nabla q \, dx = \int_\Omega (f - \operatorname{curl} \lambda) \cdot \nabla q \, dx
\]
for all $q \in H^1(\Omega) \cap L^2_0(\Omega)$.
Remark II.5.3. Compared with the saddle-point formulation of the Stokes equations of §II.1.2 (p. 29), the above problem imposes weaker regularity conditions on the velocity $u = \operatorname{curl} \psi$ and stronger regularity conditions on the pressure $p$. The two quantities $\omega$ and $\lambda$ are both vorticities. The second equation means that they are equal in a weak sense. But this equality is not a pointwise one, since both quantities have different regularity properties. The computation of the pressure is decoupled from the computation of the other quantities.
For the mixed finite element discretization we depart from a triangulation $\mathcal{T}$ of $\Omega$, choose an integer $\ell \ge 1$, and set $k = \max\{1, \ell - 1\}$. Then the discrete problem is given by:
Find $\psi_T \in S^{\ell,0}_0(\mathcal{T})$, $\omega_T \in S^{\ell,-1}(\mathcal{T})$ and $\lambda_T \in S^{\ell,0}(\mathcal{T})$ such that
\[
\int_\Omega \operatorname{curl} \lambda_T \cdot \operatorname{curl} \varphi_T \, dx = \int_\Omega f \cdot \operatorname{curl} \varphi_T \, dx,
\]
\[
\int_\Omega \omega_T \theta_T \, dx - \int_\Omega \lambda_T \theta_T \, dx = 0,
\]
\[
\int_\Omega \operatorname{curl} \psi_T \cdot \operatorname{curl} \mu_T \, dx - \int_\Omega \omega_T \mu_T \, dx = 0
\]
for all $\varphi_T \in S^{\ell,0}_0(\mathcal{T})$, $\theta_T \in S^{\ell,-1}(\mathcal{T})$ and $\mu_T \in S^{\ell,0}(\mathcal{T})$,
and find $p_T \in S^{k,0}(\mathcal{T}) \cap L^2_0(\Omega)$ such that
\[
\int_\Omega \nabla p_T \cdot \nabla q_T \, dx = \int_\Omega (f - \operatorname{curl} \lambda_T) \cdot \nabla q_T \, dx
\]
for all $q_T \in S^{k,0}(\mathcal{T}) \cap L^2_0(\Omega)$.
One can prove that this discrete problem is well-posed and that its solution satisfies the following a priori error estimate
\[
\|\psi - \psi_T\|_1 + \|\omega - \omega_T\|_0 + \|p - p_T\|_0
\le c \Bigl\{ h^{m-1} \bigl( \|\psi\|_{m+1} + \|\Delta \psi\|_m + \|p\|_{\max\{1, m-1\}} \bigr) + h^m \|\Delta \psi\|_m \Bigr\},
\]
where $1 \le m \le \ell$ depends on the regularity of the solution of the biharmonic equation. In the lowest order case $\ell = 1$ this estimate does not imply convergence. But, using a more sophisticated analysis, one can prove in this case an $O(h^{1/2})$ error estimate provided $\psi \in H^3(\Omega)$, i.e. $u \in H^2(\Omega)^2$.
II.6. Solution of the discrete problems
II.6.1. General structure of the discrete problems. For the discretization of the Stokes equations we choose a partition $\mathcal{T}$ of the domain $\Omega$ and finite element spaces $X(\mathcal{T})$ and $Y(\mathcal{T})$ for the approximation of the velocity and the pressure, respectively. With these choices we either consider a mixed method as in §II.2 (p. 32) or a Petrov-Galerkin one as in §II.3 (p. 40). Denote by
• $n_u$ the dimension of $X(\mathcal{T})$ and by
• $n_p$ the dimension of $Y(\mathcal{T})$, respectively.
Then the discrete problem has the form
\[
\begin{pmatrix} A & B \\ B^T & -\delta C \end{pmatrix}
\begin{pmatrix} u \\ p \end{pmatrix}
= \begin{pmatrix} f \\ \delta g \end{pmatrix}
\tag{II.6.1}
\]
with
• $\delta = 0$ for the mixed methods of §II.2,
• $\delta > 0$ and $\delta \approx 1$ for the Petrov-Galerkin methods of §II.3,
• a square, symmetric, positive definite $n_u \times n_u$ matrix $A$ with condition of $O(h^{-2})$,
• a rectangular $n_u \times n_p$ matrix $B$,
• a square, symmetric, positive definite $n_p \times n_p$ matrix $C$ with condition of $O(1)$,
• a vector $f$ of dimension $n_u$ discretizing the exterior force, and
• a vector $g$ of dimension $n_p$ which equals 0 for the mixed methods of §II.2 and which results from the stabilization terms on the right-hand sides of the methods of §II.3.
Remark II.6.1. For simplicity, we drop in this section the index $\mathcal{T}$ indicating the discretization. Moreover, we identify finite element functions with their coefficient vectors with respect to a nodal basis. Thus, $u$ and $p$ are now vectors of dimension $n_u$ and $n_p$, respectively.
The stiffness matrix
\[
\begin{pmatrix} A & B \\ B^T & -\delta C \end{pmatrix}
\]
of the discrete problem (II.6.1) is symmetric, but indefinite, i.e. it has positive and negative real eigenvalues. Correspondingly, well-established methods such as the conjugate gradient algorithm cannot be applied. This is the main difficulty in solving the linear systems arising from the discretization of the Stokes equations.

Remark II.6.2. The indefiniteness of the stiffness matrix results from the saddle-point structure of the Stokes equations. This cannot be remedied by any stabilization procedure.
II.6.2. The Uzawa algorithm. The Uzawa algorithm is probably the simplest algorithm for solving the discrete problem (II.6.1).

Algorithm II.6.3. (Uzawa algorithm)
(0) Given: an initial guess $p^0$ for the pressure, a tolerance $\varepsilon > 0$, and a relaxation parameter $\omega > 0$.
Sought: an approximate solution of the discrete problem (II.6.1).
(1) Set $i = 0$.
(2) Apply a few Gauss-Seidel iterations to the linear system $Au = f - Bp^i$ and denote the result by $u^{i+1}$. Compute
\[
p^{i+1} = p^i + \omega \{ B^T u^{i+1} - \delta g - \delta C p^i \}.
\]
(3) If
\[
\| A u^{i+1} + B p^{i+1} - f \| + \| B^T u^{i+1} - \delta C p^{i+1} - \delta g \| \le \varepsilon,
\]
return $u^{i+1}$ and $p^{i+1}$ as approximate solution; stop. Otherwise increase $i$ by 1 and go to step 2.
Remark II.6.4. (1) The relaxation parameter $\omega$ usually is chosen in the interval $(1, 2)$; a typical choice is $\omega = 1.5$.
(2) $\|\cdot\|$ denotes any vector norm. A popular choice is the scaled Euclidean norm, i.e. $\|v\| = \sqrt{\frac{1}{n_u} v \cdot v}$ for the velocity part and $\|q\| = \sqrt{\frac{1}{n_p} q \cdot q}$ for the pressure part.
(3) The problem $Au = f - Bp^i$ is a discrete version of two, if $\Omega \subset \mathbb{R}^2$, or three, if $\Omega \subset \mathbb{R}^3$, Poisson equations for the components of the velocity field.
(4) The Uzawa algorithm iterates on the pressure. Therefore it is sometimes called a pressure correction scheme.

The Uzawa algorithm is very simple, but extremely slow. Therefore it cannot be recommended for practical use. We have given it nevertheless, since it is the basis for the more efficient algorithm of §II.6.4 (p. 57).
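To make the structure of Algorithm II.6.3 concrete, here is a minimal, self-contained sketch for the special case $\delta = 0$ and $g = 0$ on a tiny $2 \times 2$ example (all names are illustrative). Since $A$ is diagonal here, the few Gauss-Seidel sweeps of step (2) are replaced by an exact solve.

```java
public class UzawaSketch {
    // Uzawa iteration for the saddle-point system
    //   A u + B p = f,  B^T u = 0   (delta = 0, g = 0),
    // with a 2x2 diagonal A and a scalar pressure p.
    static double[] solve(double[][] A, double[] B, double f0, double f1,
                          double omega, double eps) {
        double p = 0.0, u0 = 0.0, u1 = 0.0;
        for (int i = 0; i < 1000; i++) {
            u0 = (f0 - B[0] * p) / A[0][0];  // exact solve of A u = f - B p
            u1 = (f1 - B[1] * p) / A[1][1];  // (A is diagonal in this sketch)
            double divRes = B[0] * u0 + B[1] * u1;  // residual B^T u
            if (Math.abs(divRes) <= eps) break;
            p += omega * divRes;             // pressure correction step
        }
        return new double[] { u0, u1, p };
    }

    public static void main(String[] args) {
        double[][] A = { { 2.0, 0.0 }, { 0.0, 2.0 } };
        double[] B = { 1.0, 1.0 };
        // exact solution of this system: u = (-0.5, 0.5), p = 2
        double[] sol = solve(A, B, 1.0, 3.0, 1.5, 1e-10);
        System.out.println(sol[0] + " " + sol[1] + " " + sol[2]);
    }
}
```

With $\omega = 1.5$ the divergence residual is damped by the factor $|1 - \omega| = 0.5$ per step, illustrating the (slow) linear convergence of the method.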
II.6.3. The conjugate gradient algorithm revisited. The algorithm of the next section is based on the conjugate gradient algorithm (CG algorithm). For completeness we recapitulate this algorithm.

Algorithm II.6.5. (Conjugate gradient algorithm)
(0) Given: a linear system of equations $Lx = b$ with a symmetric, positive definite matrix $L$, an initial guess $x^0$ for the solution, and a tolerance $\varepsilon > 0$.
Sought: an approximate solution of the linear system.
(1) Compute
\[
r^0 = b - Lx^0, \qquad d^0 = r^0, \qquad \gamma_0 = r^0 \cdot r^0.
\]
Set $i = 0$.
(2) If $\gamma_i < \varepsilon^2$, return $x^i$ as approximate solution; stop. Otherwise go to step 3.
(3) Compute
\[
s^i = L d^i, \qquad \alpha_i = \frac{\gamma_i}{d^i \cdot s^i}, \qquad
x^{i+1} = x^i + \alpha_i d^i, \qquad r^{i+1} = r^i - \alpha_i s^i,
\]
\[
\gamma_{i+1} = r^{i+1} \cdot r^{i+1}, \qquad \beta_i = \frac{\gamma_{i+1}}{\gamma_i}, \qquad
d^{i+1} = r^{i+1} + \beta_i d^i.
\]
Increase $i$ by 1 and go to step 2.

The convergence rate of the CG algorithm is given by $\frac{\sqrt{\kappa} - 1}{\sqrt{\kappa} + 1}$, where $\kappa$ is the condition of $L$ and equals the ratio of the largest to the smallest eigenvalue of $L$.
The following Java methods implement the CG algorithm.

// CG algorithm
public void cg() {
    int iter = 0;
    double alpha, beta, gamma;
    res = multiply(a, x);
    res = add(b, res, 1.0, -1.0);
    d = new double[dim];
    copy(d, res);
    gamma = innerProduct(res, res)/dim;
    residuals[0] = Math.sqrt(gamma);
    while ( iter < maxit - 1 && residuals[iter] > tol ) {
        iter++;
        b = multiply(a, d);
        alpha = innerProduct(d, b)/dim;
        alpha = gamma/alpha;
        accumulate(x, d, alpha);
        accumulate(res, b, -alpha);
        beta = gamma;
        gamma = innerProduct(res, res)/dim;
        residuals[iter] = Math.sqrt(gamma);
        beta = gamma/beta;
        d = add(res, d, 1.0, beta);
    }
} // end of cg

// copy vector v to vector u
public void copy(double[] u, double[] v) {
    for( int i = 0; i < dim; i++ )
        u[i] = v[i];
} // end of copy

// multiply matrix c with vector y
public double[] multiply(double[][] c, double[] y) {
    double[] result = new double[dim];
    for( int i = 0; i < dim; i++ )
        result[i] = innerProduct(c[i], y);
    return result;
} // end of multiply

// return s*u + t*v
public double[] add(double[] u, double[] v, double s, double t) {
    double[] w = new double[dim];
    for( int i = 0; i < dim; i++ )
        w[i] = s*u[i] + t*v[i];
    return w;
} // end of add

// inner product of two vectors
public double innerProduct(double[] u, double[] v) {
    double prod = 0;
    for( int i = 0; i < dim; i++ )
        prod += u[i]*v[i];
    return prod;
} // end of inner product

// add s times vector v to vector u
public void accumulate(double[] u, double[] v, double s) {
    for( int i = 0; i < dim; i++ )
        u[i] += s*v[i];
} // end of accumulate
II.6.4. An improved Uzawa algorithm. Since the matrix $A$ is positive definite, we may solve the first equation in the system (II.6.1) for the unknown $u$,
\[
u = A^{-1}(f - Bp),
\]
and insert the result in the second equation:
\[
B^T A^{-1}(f - Bp) - \delta C p = \delta g.
\]
This gives a problem which only incorporates the pressure:
\[
[B^T A^{-1} B + \delta C] \, p = B^T A^{-1} f - \delta g.
\tag{II.6.2}
\]
One can prove that the matrix $B^T A^{-1} B + \delta C$ is symmetric, positive definite, and has a condition of $O(1)$, i.e. the condition does not increase when refining the mesh. Therefore one may apply the CG algorithm to problem (II.6.2). The convergence rate then is independent of the mesh-size and does not deteriorate when refining the mesh. This approach, however, requires the evaluation of $A^{-1}$, i.e. problems of the form $Av = g$ must be solved in every iteration. These are two, if $\Omega \subset \mathbb{R}^2$, or three, if $\Omega \subset \mathbb{R}^3$, discrete Poisson equations for the components of the velocity $v$. The crucial idea now is to solve these auxiliary problems only approximately with the help of a standard multigrid algorithm for the Poisson equation.
This idea results in the following algorithm.

Algorithm II.6.6. (Improved Uzawa algorithm)
(0) Given: an initial guess $p^0$ for the pressure and a tolerance $\varepsilon > 0$.
Sought: an approximate solution of the discrete problem (II.6.1).
(1) Apply a multigrid algorithm with starting value zero and tolerance $\varepsilon$ to the linear system $Av = f - Bp^0$ and denote the result by $u^0$. Compute
\[
r^0 = B^T u^0 - \delta g - \delta C p^0, \qquad d^0 = r^0, \qquad \gamma_0 = r^0 \cdot r^0.
\]
Set $u^0 = 0$ and $i = 0$.
(2) If $\gamma_i < \varepsilon^2$, compute $p = p^0 + p^i$, apply a multigrid algorithm with starting value zero and tolerance $\varepsilon$ to the linear system $Av = f - Bp$, denote the result by $u$, and return $u$, $p$ as approximate solution of problem (II.6.1); stop.
Otherwise go to step 3.
(3) Apply a multigrid algorithm with starting value $u^i$ and tolerance $\varepsilon$ to the linear system $Av = Bd^i$ and denote the result by $u^{i+1}$. Compute
\[
s^i = B^T u^{i+1} + \delta C d^i, \qquad \alpha_i = \frac{\gamma_i}{d^i \cdot s^i}, \qquad
p^{i+1} = p^i + \alpha_i d^i, \qquad r^{i+1} = r^i - \alpha_i s^i,
\]
\[
\gamma_{i+1} = r^{i+1} \cdot r^{i+1}, \qquad \beta_i = \frac{\gamma_{i+1}}{\gamma_i}, \qquad
d^{i+1} = r^{i+1} + \beta_i d^i.
\]
Increase $i$ by 1 and go to step 2.

Remark II.6.7. The improved Uzawa algorithm is a nested iteration: the outer iteration is a CG algorithm for problem (II.6.2), the inner iteration is a standard multigrid algorithm for discrete Poisson equations. The convergence rate of the improved Uzawa algorithm does not deteriorate when refining the mesh. It usually lies in the range of 0.5 to 0.8. For the inner loop usually 2 to 4 multigrid iterations are sufficient.
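The outer CG iteration never needs the matrix $B^T A^{-1} B + \delta C$ explicitly; it only needs its action on a vector, and each action costs one inner Poisson-type solve. The following matrix-free sketch (illustrative names; $\delta = 0$, and the inner multigrid solve is replaced by an exact diagonal solve) shows this structure.

```java
import java.util.function.UnaryOperator;

public class SchurCG {
    // CG applied to the pressure Schur complement S = B^T A^{-1} B.
    // S is passed as an operator: each application hides one inner
    // solve with A (a multigrid cycle in practice, exact here).
    static double[] cg(UnaryOperator<double[]> S, double[] b, double eps) {
        int n = b.length;
        double[] x = new double[n];
        double[] r = b.clone();          // r = b - S * 0
        double[] d = r.clone();
        double gamma = dot(r, r);
        while (gamma >= eps * eps) {
            double[] s = S.apply(d);     // one inner solve per CG step
            double alpha = gamma / dot(d, s);
            for (int j = 0; j < n; j++) { x[j] += alpha * d[j]; r[j] -= alpha * s[j]; }
            double gammaNew = dot(r, r);
            double beta = gammaNew / gamma;
            gamma = gammaNew;
            for (int j = 0; j < n; j++) d[j] = r[j] + beta * d[j];
        }
        return x;
    }

    static double dot(double[] u, double[] v) {
        double p = 0;
        for (int j = 0; j < u.length; j++) p += u[j] * v[j];
        return p;
    }

    public static void main(String[] args) {
        double[] aDiag = { 2.0, 2.0 };           // A = diag(2, 2), B = I
        UnaryOperator<double[]> S = d -> {
            double[] v = new double[d.length];   // v = A^{-1} B d (inner solve)
            for (int j = 0; j < d.length; j++) v[j] = d[j] / aDiag[j];
            return v;                            // B^T v = v since B = I
        };
        double[] rhs = { 0.5, 1.5 };             // B^T A^{-1} f for f = (1, 3)
        double[] p = cg(S, rhs, 1e-12);
        System.out.println(p[0] + " " + p[1]);
    }
}
```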
II.6.5. The multigrid algorithm. The multigrid algorithm is based on a sequence of meshes $\mathcal{T}_0, \ldots, \mathcal{T}_R$, which are obtained by successive local or global refinement, and associated discrete problems $L_k x_k = b_k$, $k = 0, \ldots, R$, corresponding to a partial differential equation. The finest mesh $\mathcal{T}_R$ corresponds to the problem that we actually want to solve. In our applications, the differential equation is either the Stokes problem or the Poisson equation. In the first case $L_k$ is the stiffness matrix of problem (II.6.1) corresponding to the partition $\mathcal{T}_k$. The vector $x_k$ then incorporates the velocity and the pressure approximation. In the second case $L_k$ is the upper left block $A$ in problem (II.6.1) and $x_k$ only incorporates the discrete velocity.
The multigrid algorithm has three ingredients:
• a smoothing operator $M_k$, which should be easy to evaluate and which at the same time should give a reasonable approximation to $L_k^{-1}$,
• a restriction operator $R_{k,k-1}$, which maps functions on a fine mesh $\mathcal{T}_k$ to the next coarser mesh $\mathcal{T}_{k-1}$, and
• a prolongation operator $I_{k-1,k}$, which maps functions from a coarse mesh $\mathcal{T}_{k-1}$ to the next finer mesh $\mathcal{T}_k$.
For a concrete multigrid algorithm these ingredients must be specified. This will be done in the next sections. Here, we discuss the general form of the algorithm and its properties.
Algorithm II.6.8. (MG($k$, $\mu$, $\nu_1$, $\nu_2$, $L_k$, $b_k$, $x_k$): one iteration of the multigrid algorithm on mesh $\mathcal{T}_k$)
(0) Given: the level number $k$ of the actual mesh, parameters $\mu$, $\nu_1$, and $\nu_2$, the stiffness matrix $L_k$ of the actual discrete problem, the actual right-hand side $b_k$, and an initial guess $x_k$.
Sought: improved approximate solution $x_k$.
(1) If $k = 0$, compute
\[
x_0 = L_0^{-1} b_0;
\]
stop. Otherwise go to step 2.
(2) (Pre-smoothing) Perform $\nu_1$ steps of the iterative procedure
\[
x_k = x_k + M_k (b_k - L_k x_k).
\]
(3) (Coarse grid correction)
(a) Compute
\[
b_{k-1} = R_{k,k-1}(b_k - L_k x_k), \qquad x_{k-1} = 0.
\]
(b) Perform $\mu$ iterations of MG($k-1$, $\mu$, $\nu_1$, $\nu_2$, $L_{k-1}$, $b_{k-1}$, $x_{k-1}$) and denote the result by $x_{k-1}$.
(c) Compute
\[
x_k = x_k + I_{k-1,k} x_{k-1}.
\]
(4) (Post-smoothing) Perform $\nu_2$ steps of the iterative procedure
\[
x_k = x_k + M_k (b_k - L_k x_k).
\]
The following Java method implements the multigrid method with $\mu = 1$. Note that the recursion has been resolved.

// one MG cycle
// non-recursive version
public void mgcycle( int lv, SMatrix a ) {
    int level = lv;
    count[level] = 1;
    while( count[level] > 0 ) {
        while( level > 1 ) {
            smooth( a, 1 );            // pre-smoothing
            a.mgResidue( res, x, bb ); // compute residue
            restrict( a );             // restrict
            level--;
            count[level] = cycle;
        }
        smooth( a, 0 );                // solve on coarsest grid
        count[level]--;
        boolean prolongate = true;
        while( level < lv && prolongate ) {
            level++;
            prolongateAndAdd( a );     // prolongate to finer grid
            smooth( a, -1 );           // post-smoothing
            count[level]--;
            if( count[level] > 0 )
                prolongate = false;
        }
    }
} // end of mgcycle
The variables and methods used have the following meaning:
• lv number of the actual level, corresponds to $k$,
• a stiffness matrix on the actual level, corresponds to $L_k$,
• x actual approximate solution, corresponds to $x_k$,
• bb actual right-hand side, corresponds to $b_k$,
• res actual residual, corresponds to $b_k - L_k x_k$,
• smooth perform the smoothing,
• mgResidue compute the residual,
• restrict perform the restriction,
• prolongateAndAdd compute the prolongation and add the result to the current approximation.
[Figure II.6.1. Schematic presentation of a multigrid algorithm with V-cycle and three grids (diagram not reproduced). The labels have the following meaning: S smoothing, R restriction, P prolongation, E exact solution.]
Remark II.6.9. (1) The parameter $\mu$ determines the complexity of the algorithm. Popular choices are $\mu = 1$, called V-cycle, and $\mu = 2$, called W-cycle. Figure II.6.1 gives a schematic presentation of the multigrid algorithm for the case $\mu = 1$ and $R = 2$ (three meshes). Here, S denotes smoothing, R restriction, P prolongation, and E exact solution.
(2) The number of smoothing steps per multigrid iteration, i.e. the parameters $\nu_1$ and $\nu_2$, should not be chosen too large. A good choice for positive definite problems such as the Poisson equation is $\nu_1 = \nu_2 = 1$. For indefinite problems such as the Stokes equations a good choice is $\nu_1 = \nu_2 = 2$.
(3) If $\mu \le 2$, one can prove that the computational work of one multigrid iteration is proportional to the number of unknowns of the actual discrete problem.
(4) Under suitable conditions on the smoothing algorithm, which is determined by the matrix $M_k$, one can prove that the convergence rate of the multigrid algorithm is independent of the mesh-size, i.e. it does not deteriorate when refining the mesh. These conditions will be discussed in the next section. In practice one observes convergence rates of 0.1 – 0.5 for positive definite problems such as the Poisson equation and of 0.3 – 0.7 for indefinite problems such as the Stokes equations.
II.6.6. Smoothing. The symmetric Gauss-Seidel algorithm is the most popular smoothing algorithm for positive definite problems such as the Poisson equation. This corresponds to the choice
\[
M_k = (D_k - U_k^T) D_k^{-1} (D_k - U_k),
\]
where $D_k$ and $U_k$ denote the diagonal and the strictly upper diagonal part of $L_k$, respectively. The following Java method implements the Gauss-Seidel algorithm for smoothing.
// one Gauss-Seidel sweep (x and b are defined on all levels)
// dir = 0: forward and backward
// dir = 1: only forward
// dir = -1: only backward
public void mgSGS( double[] x, double[] b, int dir ) {
    double y;
    if ( dir >= 0 ) { // forward sweep
        for ( int i = dim[ lv-1 ]; i < dim[lv]; i++ ) {
            y = b[i];
            for ( int j = r[i]; j < r[ i+1 ]; j++ )
                y -= m[j]*x[ dim[lv-1]+c[j] ];
            x[i] += y/m[ d[i] ];
        }
    }
    if ( dir <= 0 ) { // backward sweep
        for ( int i = dim[lv] - 1; i >= dim[ lv-1 ]; i-- ) {
            y = b[i];
            for ( int j = r[i]; j < r[ i+1 ]; j++ )
                y -= m[j]*x[ dim[lv-1]+c[j] ];
            x[i] += y/m[ d[i] ];
        }
    }
} // end of mgSGS
The variables have the following meaning:
• lv number of the actual level, corresponds to $k$ in the multigrid algorithm,
• dim dimensions of the discrete problems on the various levels,
• m stiffness matrix on all levels, corresponds to the $L_k$'s,
• x approximate solution on all levels, corresponds to the $x_k$'s,
• b right-hand side on all levels, corresponds to the $b_k$'s.
For indefinite problems such as the Stokes equations the most popular smoothing algorithm is the squared Jacobi iteration. This is the Jacobi iteration applied to the squared system $L_k^T L_k x_k = L_k^T b_k$ and corresponds to the choice
\[
M_k = \omega^{-2} L_k^T
\]
with a suitable damping parameter satisfying $\omega > 0$ and $\omega = O(h_K^{-2})$.
The following Java method implements this algorithm:
// one squared Richardson iteration (x and b are defined on all levels)
// res is an auxiliary array to store the residual
public void mgSR( double[] x, double[] b, double[] res, double om ) {
    copy( res, b, 0, dim[lv-1], dim[lv] );
    for( int i = 0; i < dim[lv] - dim[ lv-1 ]; i++ )
        for( int j = r[ dim[lv-1]+i ]; j < r[ dim[lv-1]+i+1 ]; j++ )
            res[i] -= m[j]*x[ dim[lv-1]+c[j] ];
    for( int i = 0; i < dim[lv] - dim[ lv-1 ]; i++ )
        for( int j = r[ dim[lv-1]+i ]; j < r[ dim[lv-1]+i+1 ]; j++ )
            x[ dim[lv-1]+c[j] ] += m[j]*res[i]/om;
} // end of mgSR
The variable om is the damping parameter ω. The other variables and
methods are as explained before.
Another class of popular smoothing algorithms for the Stokes equations is given by the so-called Vanka methods. The idea is to loop through patches of elements and to solve exactly the equations associated with the nodes inside the actual patch while retaining the current values of the variables associated with the nodes outside the actual patch.
There are many possible choices for the patches.
One extreme case obviously consists in choosing exactly one node at a time. This yields the classical Gauss-Seidel method and is not applicable to indefinite problems, since it in general diverges for those problems.
Another extreme case obviously consists in choosing all elements. This of course is not practicable, since it would result in an exact solution of the complete discrete problem.
In practice, one chooses patches that consist of
• a single element, or
• the two elements that share a given edge or face, or
• the elements that share a given vertex.
II.6.7. Prolongation. Since the partition $\mathcal{T}_k$ of level $k$ always is a refinement of the partition $\mathcal{T}_{k-1}$ of level $k-1$ (cf. §§II.7.6 (p. 74), II.7.7 (p. 75)), the corresponding finite element spaces are nested, i.e. finite element functions corresponding to level $k-1$ are contained in the finite element space corresponding to level $k$. Therefore, the values of a coarse-grid function corresponding to level $k-1$ at the nodal points corresponding to level $k$ are obtained by evaluating the nodal basis functions corresponding to $\mathcal{T}_{k-1}$ at the requested points. This defines the interpolation operator $I_{k-1,k}$.
[Figure II.6.2. Partitions of a triangle; expressions of the form $i+1$ have to be taken modulo 3 (diagram not reproduced).]
Figures II.6.2 and II.6.3 show various partitions of a triangle and of a square, respectively (cf. §§II.7.6 (p. 74), II.7.7 (p. 75)). The numbers outside the element indicate the enumeration of the element vertices and edges. Thus, e.g. edge 2 of the triangle has the vertices 0 and 1 as its endpoints. The numbers +0, +1 etc. inside the elements indicate the enumeration of the child elements. The remaining numbers inside the elements give the enumeration of the vertices of the child elements.
Example II.6.10. Consider a piecewise constant approximation, i.e. $S^{0,-1}(\mathcal{T})$. The nodal points are the barycentres of the elements. Every element in $\mathcal{T}_{k-1}$ is subdivided into several smaller elements in $\mathcal{T}_k$. The nodal value of a coarse-grid function at the barycentre of a child element in $\mathcal{T}_k$ then is its nodal value at the barycentre of the parent element in $\mathcal{T}_{k-1}$.
Example II.6.11. Consider a piecewise linear approximation, i.e. $S^{1,0}(\mathcal{T})$. The nodal points are the vertices of the elements. The refinement introduces new vertices at the midpoints of some edges of the parent element and possibly – when using quadrilaterals – at the barycentre of the parent element. The nodal value at the midpoint of an edge is the average of the nodal values at the endpoints of the edge. Thus, e.g. the value at vertex 1 of child +0 is the average of the values at vertices 0 and 1 of the parent element. Similarly, the nodal value at
[Figure II.6.3. Partitions of a square; expressions of the form $i+1$ have to be taken modulo 4 (diagram not reproduced).]
the barycentre of the parent element is the average of the nodal values
at the four element vertices.
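For the piecewise linear case on a triangle this prolongation is just nodal interpolation. The following sketch is illustrative only (the array layout is invented): the three coarse vertex values are kept, and the three new midpoint values are the averages over the corresponding edge endpoints, using the convention from above that edge $i$ lies opposite vertex $i$.

```java
public class ProlongationSketch {
    // Prolongate a piecewise linear coarse-grid function on one triangle:
    // entries 0..2 of the result are the old vertex values, entries 3..5
    // are the new midpoint values of edges 0..2 (edge i opposite vertex i).
    static double[] prolongate(double[] vertexValues) {
        double[] fine = new double[6];
        for (int i = 0; i < 3; i++)
            fine[i] = vertexValues[i];              // old vertices are kept
        for (int i = 0; i < 3; i++)                 // midpoint of edge i is the
            fine[3 + i] = 0.5 * (vertexValues[(i + 1) % 3]   // average of its
                               + vertexValues[(i + 2) % 3]); // two endpoints
        return fine;
    }

    public static void main(String[] args) {
        // edge 2 joins vertices 0 and 1, so its midpoint value is (0+1)/2
        double[] fine = prolongate(new double[] { 0.0, 1.0, 2.0 });
        System.out.println(fine[5]);
    }
}
```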
II.6.8. Restriction. The restriction is computed by expressing the nodal basis functions corresponding to the coarse partition $\mathcal{T}_{k-1}$ in terms of the nodal basis functions corresponding to the fine partition $\mathcal{T}_k$ and inserting this expression in the variational formulation. This results in a lumping of the right-hand side vector which, in a certain sense, is the transpose of the interpolation.
Example II.6.12. Consider a piecewise constant approximation, i.e. $S^{0,-1}(\mathcal{T})$. The nodal shape function of a parent element is the sum of the nodal shape functions of the child elements. Correspondingly, the components of the right-hand side vector corresponding to the child elements are all added and associated with the parent element.
Example II.6.13. Consider a piecewise linear approximation, i.e.
$S^{1,0}(\mathcal{T})$. The nodal shape function corresponding to a vertex of a
parent triangle takes the value 1 at this vertex, the value $\frac{1}{2}$ at the
midpoints of the two edges sharing the given vertex, and the value 0 on
the remaining edges. If we label the current vertex by $a$ and the midpoints
of the two edges emanating from $a$ by $m_1$ and $m_2$, this results
in the following formula for the restriction on a triangle:
$$R_{k,k-1}\psi(a) = \psi(a) + \frac{1}{2}\{\psi(m_1) + \psi(m_2)\}.$$
When considering a quadrilateral, we must take into account that the
nodal shape functions take the value $\frac{1}{4}$ at the barycentre $b$ of the parent
quadrilateral. Therefore the restriction on a quadrilateral is given by
the formula
$$R_{k,k-1}\psi(a) = \psi(a) + \frac{1}{2}\{\psi(m_1) + \psi(m_2)\} + \frac{1}{4}\psi(b).$$
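The two restriction formulae translate into code directly. The following Java fragment is a sketch; passing the nodal values of $\psi$ at $a$, $m_1$, $m_2$, and $b$ as explicit arguments is a convention chosen for this illustration only, not the interface used in the notes.

```java
// Sketch of the elementwise restriction for piecewise linear elements.
// The argument layout (vertex value, two midpoint values, optional
// barycentre value) is an assumption made for this illustration.
public class Restriction {

    // Restriction at a vertex a of a parent triangle:
    // R psi(a) = psi(a) + (1/2)(psi(m1) + psi(m2))
    static double restrictTriangle(double psiA, double psiM1, double psiM2) {
        return psiA + 0.5 * (psiM1 + psiM2);
    }

    // On a quadrilateral the barycentre value enters with weight 1/4:
    // R psi(a) = psi(a) + (1/2)(psi(m1) + psi(m2)) + (1/4) psi(b)
    static double restrictQuadrilateral(double psiA, double psiM1,
                                        double psiM2, double psiB) {
        return psiA + 0.5 * (psiM1 + psiM2) + 0.25 * psiB;
    }
}
```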
Remark II.6.14. An efficient implementation of the prolongation and
restriction loops through all elements and performs the prolongation
or restriction elementwise. This process is similar to the usual elementwise
assembly of the stiffness matrix and the load vector.
II.6.9. Variants of the CG-algorithm for indefinite problems.
The CG-algorithm can only be applied to symmetric positive
definite systems of equations. For non-symmetric or indefinite systems
it in general breaks down. There are, however, various variants of the
CG-algorithm which can be applied to these problems.

A naive approach consists in applying the CG-algorithm to the
squared system $L_k^T L_k x_k = L_k^T b_k$, which is symmetric and positive
definite. This approach cannot be recommended since squaring the system
squares its condition number and thus at least doubles the required
number of iterations.

A more efficient algorithm is the stabilized bi-conjugate
gradient algorithm, BiCGstab for short. The underlying idea
roughly is to solve simultaneously the original problem $L_k x_k = b_k$ and
its adjoint $L_k^T y_k = b_k$.
Algorithm II.6.15. (Stabilized bi-conjugate gradient algorithm BiCGstab)
(0) Given: a linear system of equations $Lx = b$, an initial guess
$x_0$ for the solution, and a tolerance $\varepsilon > 0$.
Sought: an approximate solution of the linear system.
(1) Compute
$$r_0 = b - Lx_0,$$
and set
$$\overline{r}_0 = r_0, \quad v_{-1} = 0, \quad p_{-1} = 0, \quad
\alpha_{-1} = 1, \quad \rho_{-1} = 1, \quad \omega_{-1} = 1.$$
Set $i = 0$.
(2) If
$$r_i \cdot r_i < \varepsilon^2,$$
return $x_i$ as approximate solution; stop. Otherwise go to step 3.
(3) Compute
$$\rho_i = \overline{r}_0 \cdot r_i, \qquad
\beta_{i-1} = \frac{\rho_i \, \alpha_{i-1}}{\rho_{i-1} \, \omega_{i-1}}.$$
If
$$|\beta_{i-1}| < \varepsilon,$$
there is a possible breakdown; stop. Otherwise compute
$$p_i = r_i + \beta_{i-1}\{p_{i-1} - \omega_{i-1} v_{i-1}\}, \qquad
v_i = L p_i, \qquad
\alpha_i = \frac{\rho_i}{\overline{r}_0 \cdot v_i}.$$
If
$$|\alpha_i| < \varepsilon,$$
there is a possible breakdown; stop. Otherwise compute
$$s_i = r_i - \alpha_i v_i, \qquad
t_i = L s_i, \qquad
\omega_i = \frac{t_i \cdot s_i}{t_i \cdot t_i},$$
$$x_{i+1} = x_i + \alpha_i p_i + \omega_i s_i, \qquad
r_{i+1} = s_i - \omega_i t_i.$$
Increase $i$ by 1 and go to step 2.
The following Java method implements the BiCGstab algorithm.
Note that it performs an automatic restart upon breakdown at most
MAXRESTART times.
// preconditioned stabilized bi-conjugate gradient method
// (helper conventions assumed here: add(z,u,v,c1,c2) sets z = c1*u + c2*v,
//  acc(z,u,c) performs z += c*u)
public void pBiCgStab( SMatrix a, double[] b ) {
    set(ab, 0.0, dim);
    a.mvmult(ab, x);                       // ab = A*x
    add(res, b, ab, 1.0, -1.0, dim);       // res = b - A*x
    set(s, 0.0, dim);
    set(rf, 0.0, dim);
    double alpha, beta, gamma, rho, omega;
    it = 0;
    copy(rf, res, dim);                    // shadow residual rf = r0
    int restarts = 1;                      // restart in case of breakdown
    while ( it < maxit && norm(res) > tolerance &&
            restarts < MAXRESTART ) {
        restarts++;
        rho = 1.0; alpha = 1.0; omega = 1.0;
        set(d, 0.0, dim);
        set(v, 0.0, dim);
        while ( it < maxit && norm(res) > tolerance ) {
            gamma = rho*omega;
            if( Math.abs( gamma ) < EPS ) break;  // breakdown, do restart
            rho = innerProduct(rf, res, dim);
            beta = rho*alpha/gamma;
            acc(d, v, -omega, dim);               // d = p - omega*v
            add(d, res, d, 1.0, beta, dim);       // p = r + beta*(p - omega*v)
            a.mvmult(v, d);                       // v = A*p
            gamma = innerProduct(rf, v, dim);
            if( Math.abs( gamma ) < EPS ) break;  // breakdown, do restart
            alpha = rho/gamma;
            add(s, res, v, 1.0, -alpha, dim);     // s = r - alpha*v
            a.mvmult(ab, s);                      // ab = t = A*s
            gamma = innerProduct(ab, ab, dim);
            omega = innerProduct(ab, s, dim)/gamma;
            acc(x, d, alpha, dim);                // x += alpha*p
            acc(x, s, omega, dim);                // x += omega*s
            add(res, s, ab, 1.0, -omega, dim);    // r = s - omega*t
            it++;
        }
    }
} // end of pBiCgStab
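For readers without the surrounding applet classes, the following self-contained Java variant of the BiCGstab iteration operates on a dense matrix; the helper methods matVec and dot replace SMatrix and the vector utilities and are illustrative assumptions, as is the breakdown handling by returning null.

```java
// A self-contained dense-matrix BiCGstab mirroring Algorithm II.6.15.
public class BiCgStabDemo {

    static double[] matVec(double[][] A, double[] u) {
        double[] y = new double[u.length];
        for (int i = 0; i < u.length; i++)
            for (int j = 0; j < u.length; j++)
                y[i] += A[i][j] * u[j];
        return y;
    }

    static double dot(double[] u, double[] w) {
        double s = 0.0;
        for (int i = 0; i < u.length; i++) s += u[i] * w[i];
        return s;
    }

    // Approximate solution of A x = b with zero initial guess,
    // or null on breakdown.
    static double[] solve(double[][] A, double[] b, double tol, int maxit) {
        int n = b.length;
        double[] x = new double[n];
        double[] r = b.clone();            // r0 = b - A*0
        double[] rBar = r.clone();         // shadow residual
        double[] p = new double[n], v = new double[n];
        double alpha = 1.0, rho = 1.0, omega = 1.0;
        for (int it = 0; it < maxit && Math.sqrt(dot(r, r)) > tol; it++) {
            if (Math.abs(rho * omega) < 1e-300) return null;  // breakdown
            double rhoNew = dot(rBar, r);
            double beta = (rhoNew / rho) * (alpha / omega);
            rho = rhoNew;
            for (int i = 0; i < n; i++)       // p = r + beta*(p - omega*v)
                p[i] = r[i] + beta * (p[i] - omega * v[i]);
            v = matVec(A, p);
            alpha = rho / dot(rBar, v);
            double[] s = new double[n];       // s = r - alpha*v
            for (int i = 0; i < n; i++) s[i] = r[i] - alpha * v[i];
            if (Math.sqrt(dot(s, s)) < tol) { // converged at the half step
                for (int i = 0; i < n; i++) x[i] += alpha * p[i];
                return x;
            }
            double[] t = matVec(A, s);
            omega = dot(t, s) / dot(t, t);    // omega = (t.s)/(t.t)
            for (int i = 0; i < n; i++) {
                x[i] += alpha * p[i] + omega * s[i];
                r[i] = s[i] - omega * t[i];   // r = s - omega*t
            }
        }
        return x;
    }
}
```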
II.7. A posteriori error estimation and adaptive grid
reﬁnement
II.7.1. Motivation. Suppose we have computed the solution of
a discretization of a partial diﬀerential equation such as the Stokes
equations. What is the error of our computed solution?
A priori error estimates do not help in answering this question.
They only describe the asymptotic behaviour of the error. They tell
us how fast it will converge to zero when reﬁning the underlying mesh.
But for a given mesh and discretization they give no information on
the actual size of the error.
Another closely related problem is the question about the spatial
distribution of the error. Where is it large, where is it small? Obviously
we want to concentrate our resources in areas of a large error.
In summary, we want to compute an approximate solution of the
partial diﬀerential equation with a given tolerance and a minimal
amount of work. This task is achieved by adaptive grid reﬁnement
based on a posteriori error estimation.
Throughout this section we denote by $X(\mathcal{T})$ and $Y(\mathcal{T})$ finite element
spaces for the velocity and pressure, respectively, associated with
a given partition $\mathcal{T}$ of the domain $\Omega$. With these spaces we associate
either a mixed method as in §II.2 (p. 32) or a Petrov-Galerkin method
as in §II.3 (p. 40). The solution of the corresponding discrete problem
is denoted by $u_\mathcal{T}$, $p_\mathcal{T}$, whereas $u$, $p$ denotes the solution of the Stokes
equations.
II.7.2. General structure of the adaptive algorithm. The
adaptive algorithm has a general structure which is independent of the
particular diﬀerential equation.
Algorithm II.7.1. (General adaptive algorithm)
(0) Given: A partial diﬀerential equation and a tolerance ε.
Sought: An approximate solution with an error less than ε.
(1) Construct an initial admissible partition $\mathcal{T}_0$ and set up the
associated discrete problem. Set $i = 0$.
(2) Solve the discrete problem corresponding to $\mathcal{T}_i$.
(3) For each element $K \in \mathcal{T}_i$ compute an estimate $\eta_K$ of the error
on $K$ and set
$$\eta_i = \Big( \sum_{K \in \mathcal{T}_i} \eta_K^2 \Big)^{1/2}.$$
(4) If
$$\eta_i \le \varepsilon,$$
return the current discrete solution as approximate solution;
stop. Otherwise go to step 5.
(5) Determine a subset $\overline{\mathcal{T}}_i$ of elements in $\mathcal{T}_i$ that must be refined.
Determine $\mathcal{T}_{i+1}$ such that it is an admissible partition and such
that all elements in $\overline{\mathcal{T}}_i$ are refined. Set up the discrete problem
corresponding to $\mathcal{T}_{i+1}$. Increase $i$ by 1 and go to step 2.
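The structure of Algorithm II.7.1 can be illustrated by the following Java toy model: each "element" carries only its error indicator, marking uses the maximum strategy of §II.7.5, and refining an element replaces it by two children whose indicators are halved. These model choices are illustrative assumptions, not part of the lecture notes.

```java
import java.util.ArrayList;
import java.util.List;

// Toy version of the general adaptive algorithm (Algorithm II.7.1).
public class AdaptiveLoop {

    // Global estimate eta = (sum of eta_K^2)^(1/2).
    static double globalEstimate(List<Double> eta) {
        double s = 0.0;
        for (double e : eta) s += e * e;
        return Math.sqrt(s);
    }

    // Maximum strategy: mark K if eta_K >= theta * eta_max.
    static List<Integer> mark(List<Double> eta, double theta) {
        double max = 0.0;
        for (double e : eta) max = Math.max(max, e);
        List<Integer> marked = new ArrayList<>();
        for (int k = 0; k < eta.size(); k++)
            if (eta.get(k) >= theta * max) marked.add(k);
        return marked;
    }

    // Run the adaptive loop until the global estimate drops below eps;
    // returns the number of refinement steps performed.
    static int refineUntil(List<Double> eta, double eps, double theta) {
        int iterations = 0;
        while (globalEstimate(eta) > eps) {
            List<Integer> marked = mark(eta, theta);
            List<Double> next = new ArrayList<>();
            for (int k = 0; k < eta.size(); k++) {
                if (marked.contains(k)) {         // "refine": two children
                    next.add(eta.get(k) / 2.0);
                    next.add(eta.get(k) / 2.0);
                } else {
                    next.add(eta.get(k));
                }
            }
            eta.clear();
            eta.addAll(next);
            iterations++;
        }
        return iterations;
    }
}
```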
In order to make the adaptive algorithm operative we must obviously
specify the following ingredients:
• an algorithm that computes the $\eta_K$'s (a posteriori error estimation),
• a rule that selects the elements in $\overline{\mathcal{T}}_i$ (marking strategy),
• a rule that refines the elements in $\overline{\mathcal{T}}_i$ (regular refinement),
• an algorithm that constructs the partition $\mathcal{T}_{i+1}$ (additional refinement).

Remark II.7.2. The $\eta_K$'s are usually called (a posteriori) error
estimators or (a posteriori) error indicators.
II.7.3. A residual a posteriori error estimator. The simplest
and most popular a posteriori error estimator is the residual error
estimator. For the Stokes equations it is given by
$$\eta_K = \Big\{ h_K^2 \, \| f + \Delta u_\mathcal{T} - \nabla p_\mathcal{T} \|_{L^2(K)}^2
+ \| \operatorname{div} u_\mathcal{T} \|_{L^2(K)}^2
+ \frac{1}{2} \sum_{\substack{E \in \mathcal{E}_\mathcal{T} \\ E \subset \partial K}}
h_E \, \| [ n_E \cdot ( \nabla u_\mathcal{T} - p_\mathcal{T} I ) ]_E \|_{L^2(E)}^2
\Big\}^{1/2}.$$
Note that:
• The first term is the residual of the discrete solution with respect
to the momentum equation $-\Delta u + \nabla p = f$ in its strong form.
• The second term is the residual of the discrete solution with
respect to the continuity equation $\operatorname{div} u = 0$ in its strong form.
• The third term contains the boundary terms that appear when
switching from the strong to the weak form of the momentum
equation by using an integration by parts formula elementwise.
• The pressure jumps vanish when using a continuous pressure
approximation.
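Once the local residual norms have been computed, assembling $\eta_K$ is a one-line combination of the three contributions. The following Java sketch assumes the squared norms and the edge data are already given; computing them requires the actual finite element solution and is not shown.

```java
// Assembling the residual error indicator eta_K from precomputed
// local quantities (an illustrative interface, not the notes' code).
public class ResidualEstimator {

    // hK: element diameter;
    // momentum2: squared L2(K) norm of f + Lap(uT) - grad(pT);
    // div2:      squared L2(K) norm of div uT;
    // hE[i], jump2[i]: length and squared L2 norm of the stress jump
    //                  on the i-th edge of K.
    static double etaK(double hK, double momentum2, double div2,
                       double[] hE, double[] jump2) {
        double sum = hK * hK * momentum2 + div2;
        for (int i = 0; i < hE.length; i++)
            sum += 0.5 * hE[i] * jump2[i];   // factor 1/2: edges are shared
        return Math.sqrt(sum);
    }
}
```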
One can prove that the residual error estimator yields the following
upper and lower bounds on the error:
$$\|u - u_\mathcal{T}\|_1 + \|p - p_\mathcal{T}\|_0 \le c^* \Big( \sum_{K \in \mathcal{T}} \eta_K^2 \Big)^{1/2},$$
$$\eta_K \le c_* \Big\{ \|u - u_\mathcal{T}\|_{H^1(\omega_K)} + \|p - p_\mathcal{T}\|_{L^2(\omega_K)}
+ h_K \|f - f_\mathcal{T}\|_{L^2(\omega_K)} \Big\}.$$
Here, $f_\mathcal{T}$ is the $L^2$ projection of $f$ onto $S^{0,-1}(\mathcal{T})$ and $\omega_K$ denotes the
union of all elements that share an edge with $K$, if $\Omega \subset \mathbb{R}^2$, or a face,
if $\Omega \subset \mathbb{R}^3$ (cf. Figure II.7.1). The $f - f_\mathcal{T}$ term is of higher order. The
constant $c^*$ depends on the constant $c_\Omega$ in the stability result at the end
of §II.2.1 (p. 32) and on the shape parameter $\max_{K \in \mathcal{T}} \frac{h_K}{\rho_K}$ of the partition
$\mathcal{T}$. The constant $c_*$ depends on the shape parameter of $\mathcal{T}$ and on the
polynomial degree of the spaces $X(\mathcal{T})$ and $Y(\mathcal{T})$.
Remark II.7.3. The lower bound on the error is a local one whereas
the upper bound is a global one. This is not by chance. The lower error
bound involves the differential operator, which is a local one: local variations
in the velocity and pressure result in local variations of the force
terms. The upper error bound on the other hand involves the inverse
of the differential operator, which is a global one: local variations of the
exterior forces result in a global change of the velocity and pressure.

[Figure II.7.1: Domain $\omega_K$ for a triangle and a parallelogram]
Remark II.7.4. An error estimator which yields an upper bound on
the error is called reliable. An estimator which yields a lower bound
on the error is called efficient. Any decent error estimator must be
reliable and efficient.
II.7.4. Error estimators based on the solution of auxiliary
problems. Two other classes of popular error estimators are based
on the solution of auxiliary discrete local problems with Neumann and
Dirichlet boundary conditions. For their description we denote by $k_u$
and $k_p$ the maximal polynomial degrees $k$ and $m$ such that
• $S_0^{k,0}(\mathcal{T})^n \subset X(\mathcal{T})$
and either
• $S^{m,-1}(\mathcal{T}) \cap L_0^2(\Omega) \subset Y(\mathcal{T})$ when using a discontinuous pressure
approximation or
• $S^{m,0}(\mathcal{T}) \cap L_0^2(\Omega) \subset Y(\mathcal{T})$ when using a continuous pressure
approximation.
Set
$$k_T = \max\{k_u + n, \, k_p - 1\}, \qquad k_E = \max\{k_u - 1, \, k_p\},$$
where $n$ is the space dimension, i.e. $\Omega \subset \mathbb{R}^n$.
For the Neumann-type estimator we introduce for every element
$K \in \mathcal{T}$ the spaces
$$X(K) = \operatorname{span}\{\psi_K v, \ \psi_E w :
v \in R_{k_T}(K)^n, \ w \in R_{k_E}(E)^n, \ E \in \mathcal{E}_\mathcal{T} \cap \partial K\},$$
$$Y(K) = \operatorname{span}\{\psi_K q : q \in R_{k_u - 1}(K)\},$$
where $\psi_K$ and $\psi_E$ are the element and edge respectively face bubble
functions defined in §I.2.12 (p. 27). One can prove that the definition
of $k_T$ ensures that the local discrete problem

Find $u_K \in X(K)$ and $p_K \in Y(K)$ such that
$$\int_K \nabla u_K : \nabla v_K \,dx - \int_K p_K \operatorname{div} v_K \,dx
= \int_K \{f + \Delta u_\mathcal{T} - \nabla p_\mathcal{T}\} \cdot v_K \,dx
+ \int_{\partial K} [n_K \cdot (\nabla u_\mathcal{T} - p_\mathcal{T} I)]_{\partial K} \cdot v_K \,dS,$$
$$\int_K q_K \operatorname{div} u_K \,dx = \int_K q_K \operatorname{div} u_\mathcal{T} \,dx$$
for all $v_K \in X(K)$ and all $q_K \in Y(K)$

has a unique solution. With this solution we define the Neumann
estimator by
$$\eta_{N,K} = \Big\{ |u_K|_{H^1(K)}^2 + \|p_K\|_{L^2(K)}^2 \Big\}^{1/2}.$$
The above local problem is a discrete version of the Stokes equations
with Neumann boundary conditions
$$-\Delta v + \operatorname{grad} q = \overline{f} \quad \text{in } K, \qquad
\operatorname{div} v = g \quad \text{in } K, \qquad
n_K \cdot \nabla v - q n_K = b \quad \text{on } \partial K,$$
where the data $\overline{f}$, $g$, and $b$ are determined by the exterior force $f$ and
the discrete solution $u_\mathcal{T}$, $p_\mathcal{T}$.
For the Dirichlet-type estimator we denote – as in the previous subsection
– for every element $K \in \mathcal{T}$ by $\omega_K$ the union of all elements in
$\mathcal{T}$ that share an edge with $K$, if $n = 2$, or a face, if $n = 3$ (cf. Figure II.7.1 (p.
70)) and set
$$\overline{X}(K) = \operatorname{span}\{\psi_{K'} v, \ \psi_E w :
v \in R_{k_T}(K')^n, \ K' \in \mathcal{T} \cap \omega_K, \
w \in R_{k_E}(E)^n, \ E \in \mathcal{E}_\mathcal{T} \cap \partial K\},$$
$$\overline{Y}(K) = \operatorname{span}\{\psi_{K'} q :
q \in R_{k_u - 1}(K'), \ K' \in \mathcal{T} \cap \omega_K\}.$$
Again, one can prove that the definition of $k_T$ ensures that the local
discrete problem

Find $\overline{u}_K \in \overline{X}(K)$ and $\overline{p}_K \in \overline{Y}(K)$ such that
$$\int_{\omega_K} \nabla \overline{u}_K : \nabla v_K \,dx
- \int_{\omega_K} \overline{p}_K \operatorname{div} v_K \,dx
= \int_{\omega_K} f \cdot v_K \,dx
- \int_{\omega_K} \nabla u_\mathcal{T} : \nabla v_K \,dx
+ \int_{\omega_K} p_\mathcal{T} \operatorname{div} v_K \,dx,$$
$$\int_{\omega_K} q_K \operatorname{div} \overline{u}_K \,dx
= \int_{\omega_K} q_K \operatorname{div} u_\mathcal{T} \,dx$$
for all $v_K \in \overline{X}(K)$ and all $q_K \in \overline{Y}(K)$

has a unique solution. With this solution we define the Dirichlet estimator by
$$\eta_{D,K} = \Big\{ |\overline{u}_K|_{H^1(\omega_K)}^2
+ \|\overline{p}_K\|_{L^2(\omega_K)}^2 \Big\}^{1/2}.$$
This local problem is a discrete version of the Stokes equations with
Dirichlet boundary conditions
$$-\Delta v + \operatorname{grad} q = \overline{f} \quad \text{in } \omega_K, \qquad
\operatorname{div} v = g \quad \text{in } \omega_K, \qquad
v = 0 \quad \text{on } \partial\omega_K,$$
where the data $\overline{f}$ and $g$ are determined by the exterior force $f$ and the
discrete solution $u_\mathcal{T}$, $p_\mathcal{T}$.
One can prove that both estimators are reliable and efficient and
equivalent to the residual estimator:
$$\|u - u_\mathcal{T}\|_1 + \|p - p_\mathcal{T}\|_0 \le c_1 \Big( \sum_{K \in \mathcal{T}} \eta_{N,K}^2 \Big)^{1/2},$$
$$\eta_{N,K} \le c_2 \Big\{ \|u - u_\mathcal{T}\|_{H^1(\omega_K)} + \|p - p_\mathcal{T}\|_{L^2(\omega_K)}
+ h_K \|f - f_\mathcal{T}\|_{L^2(\omega_K)} \Big\},$$
$$\eta_K \le c_3 \, \eta_{N,K}, \qquad
\eta_{N,K} \le c_4 \Big( \sum_{K' \subset \omega_K} \eta_{K'}^2 \Big)^{1/2},$$
$$\|u - u_\mathcal{T}\|_1 + \|p - p_\mathcal{T}\|_0 \le c_5 \Big( \sum_{K \in \mathcal{T}} \eta_{D,K}^2 \Big)^{1/2},$$
$$\eta_{D,K} \le c_6 \Big\{ \|u - u_\mathcal{T}\|_{H^1(\overline{\omega}_K)}
+ \|p - p_\mathcal{T}\|_{L^2(\overline{\omega}_K)}
+ h_K \|f - f_\mathcal{T}\|_{L^2(\overline{\omega}_K)} \Big\},$$
$$\eta_K \le c_7 \, \eta_{D,K}, \qquad
\eta_{D,K} \le c_8 \Big( \sum_{K' \subset \overline{\omega}_K} \eta_{K'}^2 \Big)^{1/2}.$$
Here, $\overline{\omega}_K$ is the union of all elements in $\mathcal{T}$ that share at least a vertex
with $K$ (cf. Figure I.2.5 (p. 27)). The constants $c_1, \ldots, c_8$ only depend
on the shape parameter of the partition $\mathcal{T}$, the polynomial degree of the
discretization, and the stability parameter $c_\Omega$ of the Stokes equations.
Remark II.7.5. The computation of the estimators $\eta_{N,K}$ and $\eta_{D,K}$
obviously is more expensive than that of the residual estimator $\eta_K$.
This is compensated by a higher accuracy. The residual estimator,
on the other hand, is completely satisfactory for identifying the regions
for mesh refinement. Thus it is recommended to use the cheaper
residual estimator for refining the mesh and one of the more expensive
Neumann-type or Dirichlet-type estimators for determining the actual
error on the final mesh.
II.7.5. Marking strategies. There are two popular marking
strategies for determining the set $\overline{\mathcal{T}}_i$ in the general adaptive algorithm:
the maximum strategy and the equilibration strategy.

Algorithm II.7.6. (Maximum strategy)
(0) Given: a partition $\mathcal{T}$, error estimates $\eta_K$ for the elements
$K \in \mathcal{T}$, and a threshold $\theta \in (0, 1)$.
Sought: a subset $\overline{\mathcal{T}}$ of marked elements that should be refined.
(1) Compute
$$\eta_{\mathcal{T},\max} = \max_{K \in \mathcal{T}} \eta_K.$$
(2) If
$$\eta_K \ge \theta \, \eta_{\mathcal{T},\max},$$
mark $K$ for refinement and put it into the set $\overline{\mathcal{T}}$.
Algorithm II.7.7. (Equilibration strategy)
(0) Given: a partition $\mathcal{T}$, error estimates $\eta_K$ for the elements
$K \in \mathcal{T}$, and a threshold $\theta \in (0, 1)$.
Sought: a subset $\overline{\mathcal{T}}$ of marked elements that should be refined.
(1) Compute
$$\Theta_\mathcal{T} = \sum_{K \in \mathcal{T}} \eta_K^2.$$
Set $\Sigma_\mathcal{T} = 0$ and $\overline{\mathcal{T}} = \emptyset$.
(2) If
$$\Sigma_\mathcal{T} \ge \theta \, \Theta_\mathcal{T},$$
return $\overline{\mathcal{T}}$; stop. Otherwise go to step 3.
(3) Compute
$$\overline{\eta}_{\mathcal{T},\max} = \max_{K \in \mathcal{T} \setminus \overline{\mathcal{T}}} \eta_K.$$
(4) For all elements $K \in \mathcal{T} \setminus \overline{\mathcal{T}}$ check whether
$$\eta_K = \overline{\eta}_{\mathcal{T},\max}.$$
If this is the case, put $K$ in $\overline{\mathcal{T}}$ and add $\eta_K^2$ to $\Sigma_\mathcal{T}$. Otherwise
skip $K$.
When all elements have been treated, return to step 2.

At the end of this algorithm the set $\overline{\mathcal{T}}$ satisfies
$$\sum_{K \in \overline{\mathcal{T}}} \eta_K^2 \ge \theta \sum_{K \in \mathcal{T}} \eta_K^2.$$
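The equilibration strategy can be sketched in Java as follows; representing the indicators by a plain array and returning the indices of the marked elements are assumptions made for this illustration.

```java
import java.util.ArrayList;
import java.util.List;

// Sketch of the equilibration strategy (Algorithm II.7.7): repeatedly
// mark the elements with the currently largest indicator until the
// marked elements carry the fraction theta of the total squared estimate.
public class EquilibrationStrategy {

    static List<Integer> mark(double[] eta, double theta) {
        double total = 0.0;                    // Theta_T = sum eta_K^2
        for (double e : eta) total += e * e;
        List<Integer> marked = new ArrayList<>();
        boolean[] isMarked = new boolean[eta.length];
        double sum = 0.0;                      // Sigma_T
        while (sum < theta * total) {
            // largest indicator among the unmarked elements
            double max = 0.0;
            for (int k = 0; k < eta.length; k++)
                if (!isMarked[k]) max = Math.max(max, eta[k]);
            // mark every element attaining this maximum
            for (int k = 0; k < eta.length; k++)
                if (!isMarked[k] && eta[k] == max) {
                    isMarked[k] = true;
                    marked.add(k);
                    sum += eta[k] * eta[k];
                }
        }
        return marked;
    }
}
```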
Both marking strategies yield comparable results. The maximum
strategy obviously is cheaper than the equilibration strategy. In both
strategies, a large value of $\theta$ leads to small sets $\overline{\mathcal{T}}$, i.e. very few elements
are marked. A small value of $\theta$ leads to large sets $\overline{\mathcal{T}}$, i.e. nearly all
elements are marked. A popular and well established choice is $\theta \approx 0.5$.
II.7.6. Regular refinement. Elements that are marked for refinement
usually are refined by connecting the midpoints of their edges.
The resulting elements are called red. The corresponding refinement
is called regular.

Triangles and quadrilaterals are thus subdivided into four smaller
triangles and quadrilaterals that are similar to the parent element, i.e.
have the same angles. Thus the shape parameter of the elements does
not change.

This is illustrated by the top-left triangle of Figure II.6.2 (p. 63)
and by the top square of Figure II.6.3 (p. 64). The numbers outside
the elements indicate the local enumeration of edges and vertices of the
parent element. The numbers inside the elements close to the vertices
indicate the local enumeration of the vertices of the child elements.
The numbers +0, +1 etc. inside the elements give the enumeration
of the children. Note that the enumeration of new elements and new
vertices is chosen in such a way that triangles and quadrilaterals may
be treated simultaneously with a minimum of case selections.
Parallelepipeds are also subdivided into eight smaller similar parallelepipeds
by joining the midpoints of edges.

For tetrahedrons, the situation is more complicated. Joining the
midpoints of edges introduces four smaller similar tetrahedrons at the
vertices of the parent tetrahedron plus a double pyramid in its interior.
The latter is subdivided into four small tetrahedrons by cutting
it along two orthogonal planes. These tetrahedrons, however, are not
similar to the parent tetrahedron. But there are rules which determine
the cutting planes such that a repeated refinement according to these
rules leads to at most four similarity classes of elements originating
from a parent element. Thus these rules guarantee that the shape parameter
of the partition does not deteriorate during a repeated adaptive
refinement procedure.
II.7.7. Additional refinement. Since not all elements are refined
regularly, we need additional refinement rules in order to avoid
hanging nodes (cf. Figure II.7.2) and to ensure the admissibility of
the refined partition. These rules are illustrated in Figures II.6.2 (p.
63) and II.6.3 (p. 64).

[Figure II.7.2: Example of a hanging node]

For abbreviation we call the resulting elements green, blue, and
purple. They are obtained as follows:
• a green element by bisecting exactly one edge,
• a blue element by bisecting exactly two edges,
• a purple quadrilateral by bisecting exactly three edges.

In order to avoid too acute or too obtuse triangles, the blue and
green refinements of triangles obey the following two rules:
• In a blue refinement of a triangle, the longest one of the refinement
edges is bisected first.
• Before performing a green refinement of a triangle it is checked
whether the refinement edge is part of an edge which has been
bisected during the last $n_g$ generations. If this is the case, a
blue refinement is performed instead.

The second rule is illustrated in Figure II.7.3. The cross in the left part
represents a hanging node which should be eliminated by a green refinement.
The right part shows the blue refinement which is performed
instead. Here the cross represents the new hanging node which is created
by the blue refinement. Numerical experiments indicate that the
optimal value of $n_g$ is 1. Larger values result in an excessive blow-up
of the refinement zone.
[Figure II.7.3: Forbidden green refinement and substituting blue refinement]
II.7.8. Required data structures. In this subsection we briefly
describe the required data structures for a Java or C++ implementation
of an adaptive finite element algorithm. For simplicity we consider only
the two-dimensional case. Note that the data structures are independent
of the particular differential equation and apply to all engineering
problems which require the approximate solution of partial differential
equations.

The class NODE realizes the concept of a node, i.e. of a vertex of a
grid. It has three members c, t, and d.
The member c stores the coordinates in Euclidean 2-space. It is a
double array of length 2.
The member t stores the type of the node. It equals 0 if it is an
interior point of the computational domain. It is k, k > 0, if the node
belongs to the k-th component of the Dirichlet boundary part of the
computational domain. It equals −k, k > 0, if the node is on the k-th
component of the Neumann boundary.
The member d gives the address of the corresponding degree of freedom.
It equals −1 if the corresponding node is not a degree of freedom, e.g.
since it lies on the Dirichlet boundary. This member takes into account
that not every node actually is a degree of freedom.

The class ELEMENT realizes the concept of an element. Its member
nv determines the element type, i.e. triangle or quadrilateral. Its
members v and e realize the vertex and edge information, respectively.
Both are integer arrays of length 4.
It is assumed that v[3] = −1 if nv = 3.
A value e[i] = −1 indicates that the corresponding edge is on a straight
part of the boundary. Similarly e[i] = −k − 2, k ≥ 0, indicates that
the endpoints of the corresponding edge are on the k-th curved part of
the boundary. A value e[i] = j ≥ 0 indicates that edge i of the current
element is adjacent to element number j. Thus the member e describes
the neighbourhood relation of elements.
The members p, c, and t realize the grid hierarchy and give the number
of the parent, the number of the first child, and the refinement type,
respectively. In particular we have

t ∈ {0} if the element is not refined,
t ∈ {1, . . . , 4} if the element is refined green,
t ∈ {5} if the element is refined red,
t ∈ {6, . . . , 24} if the element is refined blue,
t ∈ {25, . . . , 100} if the element is refined purple.
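A minimal Java sketch of the NODE and ELEMENT classes described above reads as follows; the field names follow the text, while the nesting, default values, and everything else are illustrative assumptions.

```java
// Minimal sketch of the mesh data structures described in the text.
public class MeshData {

    static class Node {
        double[] c = new double[2]; // coordinates in Euclidean 2-space
        int t;                      // 0 interior, k>0 Dirichlet, -k Neumann
        int d = -1;                 // degree of freedom, -1 if none
    }

    static class Element {
        int nv;                     // 3 = triangle, 4 = quadrilateral
        int[] v = new int[4];       // vertex numbers, v[3] = -1 if nv == 3
        int[] e = new int[4];       // neighbour / boundary encoding
        int p;                      // number of the parent element
        int c;                      // number of the first child
        int t;                      // refinement type (0, green, red, ...)
    }
}
```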
At first sight it may seem strange to keep the information about nodes
and elements in different classes. But this approach has several advantages:
• It minimizes the storage requirement. The coordinates of a
node must be stored only once. If nodes and elements are represented
by a common structure, these coordinates are stored
4–6 times.
• The elements represent the topology of the grid, which is independent
of the particular position of the nodes. If nodes and
elements are represented by different structures it is much easier
to implement mesh smoothing algorithms which affect the
position of the nodes but do not change the mesh topology.

When creating a hierarchy of adaptively refined grids, the nodes
are completely hierarchical, i.e. a node of grid $\mathcal{T}_i$ is also a node of any
grid $\mathcal{T}_j$ with $j > i$. Since in general the grids are only partly refined,
the elements are not completely hierarchical. Therefore, all elements
of all grids are stored.

The information about the different grids is implemented by the
class LEVEL. Its members nn, nt, nq, and ne give the number of nodes,
triangles, quadrilaterals, and edges, respectively, of a given grid. The members
first and last give the addresses of the first element of the current
grid and of the first element of the next grid, respectively. The member
dof yields the number of degrees of freedom of the corresponding
discrete finite element problems.

For demonstration purposes we refer to the Java applet ALF
(Adaptive Linear Finite elements) which implements the described data
structures for scalar differential equations of 2nd order and which together
with a user guide is available at the addresses:
www.rub.de/num1/demo/alf/ALFApplet
www.rub.de/num1/demo/alf/ALFUserguide.htm
CHAPTER III
Stationary nonlinear problems
III.1. Discretization of the stationary Navier-Stokes equations

III.1.1. Variational formulation. We recall the stationary incompressible
Navier-Stokes equations of §I.1.13 (p. 16) with no-slip
boundary condition and the rescaling of Remark I.1.4 (p. 15):
$$-\Delta u + \mathrm{Re}\,(u \cdot \nabla)u + \operatorname{grad} p = f \quad \text{in } \Omega,$$
$$\operatorname{div} u = 0 \quad \text{in } \Omega,$$
$$u = 0 \quad \text{on } \Gamma,$$
where $\mathrm{Re} > 0$ is the Reynolds number. The variational formulation of
this problem is given by:

Find $u \in H_0^1(\Omega)^n$ and $p \in L_0^2(\Omega)$ such that
$$\int_\Omega \nabla u : \nabla v \,dx - \int_\Omega p \operatorname{div} v \,dx
+ \int_\Omega \mathrm{Re}\,[(u \cdot \nabla)u] \cdot v \,dx = \int_\Omega f \cdot v \,dx,$$
$$\int_\Omega q \operatorname{div} u \,dx = 0$$
for all $v \in H_0^1(\Omega)^n$ and all $q \in L_0^2(\Omega)$.

It has the following properties:
• Any solution $u$, $p$ of the Navier-Stokes equations is a solution
of the above variational problem.
• Any solution $u$, $p$ of the variational problem which is sufficiently
smooth is a solution of the Navier-Stokes equations.
III.1.2. Fixed-point formulation. For the mathematical analysis
it is convenient to write the variational formulation of the Navier-Stokes
equations as a fixed-point equation. To this end we denote by
$T$ the Stokes operator which associates with each $g$ the unique
solution $v = Tg$ of the Stokes equations:

Find $v \in H_0^1(\Omega)^n$ and $q \in L_0^2(\Omega)$ such that
$$\int_\Omega \nabla v : \nabla w \,dx - \int_\Omega q \operatorname{div} w \,dx
= \int_\Omega g \cdot w \,dx,$$
$$\int_\Omega r \operatorname{div} v \,dx = 0$$
for all $w \in H_0^1(\Omega)^n$ and all $r \in L_0^2(\Omega)$.

Then the variational formulation of the Navier-Stokes equations takes
the equivalent fixed-point form
$$u = T(f - \mathrm{Re}\,(u \cdot \nabla)u).$$
III.1.3. Existence and uniqueness results. The following existence
and uniqueness results can be proven for the variational formulation
of the Navier-Stokes equations:
• The variational problem admits at least one solution.
• Every solution of the variational problem satisfies the a priori bound
$$|u|_1 \le \|f\|_0.$$
• The variational problem admits a unique solution provided
$$\gamma\,\mathrm{Re}\,\|f\|_0 < 1,$$
where the constant $\gamma$ only depends on the domain
$\Omega$ and can be estimated by
$$\gamma \le \operatorname{diam}(\Omega)^{\frac{4-n}{2}}\, 2^{\frac{n-1}{2}}.$$
• Every solution of the variational problem has the
same regularity properties as the solution of the
Stokes equations.
• The mapping which associates with $\mathrm{Re}$ a solution
of the variational problem is differentiable. Its derivative
with respect to $\mathrm{Re}$ is a continuous linear
operator which is invertible with a continuous inverse
for all but finitely many values of $\mathrm{Re}$, i.e.
there are only finitely many turning or bifurcation
points.
III.1.4. Finite element discretization. For the finite element
discretization of the Navier-Stokes equations we choose a partition $\mathcal{T}$
of the domain $\Omega$ and corresponding finite element spaces $X(\mathcal{T})$ and
$Y(\mathcal{T})$ for the velocity and pressure. These spaces have to satisfy the
inf-sup condition of §II.2.6 (p. 36). We then replace in the variational
problem the spaces $H_0^1(\Omega)^n$ and $L_0^2(\Omega)$ by $X(\mathcal{T})$ and $Y(\mathcal{T})$, respectively.
This leads to the following discrete problem:

Find $u_\mathcal{T} \in X(\mathcal{T})$ and $p_\mathcal{T} \in Y(\mathcal{T})$ such that
$$\int_\Omega \nabla u_\mathcal{T} : \nabla v_\mathcal{T} \,dx
- \int_\Omega p_\mathcal{T} \operatorname{div} v_\mathcal{T} \,dx
+ \int_\Omega \mathrm{Re}\,[(u_\mathcal{T} \cdot \nabla)u_\mathcal{T}] \cdot v_\mathcal{T} \,dx
= \int_\Omega f \cdot v_\mathcal{T} \,dx,$$
$$\int_\Omega q_\mathcal{T} \operatorname{div} u_\mathcal{T} \,dx = 0$$
for all $v_\mathcal{T} \in X(\mathcal{T})$ and all $q_\mathcal{T} \in Y(\mathcal{T})$.
III.1.5. Fixed-point formulation of the discrete problem.
The finite element discretization of the Navier-Stokes equations can be
written in a fixed-point form similar to the variational problem. To this
end we denote by $T_\mathcal{T}$ the discrete Stokes operator which associates
with every $g$ the solution $v_\mathcal{T} = T_\mathcal{T} g$ of the discrete Stokes problem:

Find $v_\mathcal{T} \in X(\mathcal{T})$ and $q_\mathcal{T} \in Y(\mathcal{T})$ such that
$$\int_\Omega \nabla v_\mathcal{T} : \nabla w_\mathcal{T} \,dx
- \int_\Omega q_\mathcal{T} \operatorname{div} w_\mathcal{T} \,dx
= \int_\Omega g \cdot w_\mathcal{T} \,dx,$$
$$\int_\Omega r_\mathcal{T} \operatorname{div} v_\mathcal{T} \,dx = 0$$
for all $w_\mathcal{T} \in X(\mathcal{T})$ and all $r_\mathcal{T} \in Y(\mathcal{T})$.

The discrete Navier-Stokes problem then takes the equivalent fixed-point
form
$$u_\mathcal{T} = T_\mathcal{T}(f - \mathrm{Re}\,(u_\mathcal{T} \cdot \nabla)u_\mathcal{T}).$$
Note that – as for the variational problem – this equation also
determines the pressure via the operator $T_\mathcal{T}$.
III.1.6. Properties of the discrete problem. The discrete
problem has similar properties as the variational problem:
• The discrete problem admits at least one solution.
• Every solution of the discrete problem satisfies the a priori bound
$$|u_\mathcal{T}|_1 \le \mathrm{Re}\,\|f\|_0.$$
• The discrete problem admits a unique solution provided
$$\gamma\,\mathrm{Re}\,\|f\|_0 < 1,$$
where the constant $\gamma$ is the same as for the variational problem.
• The mapping which associates with $\mathrm{Re}$ a solution
of the discrete problem is differentiable. Its derivative
with respect to $\mathrm{Re}$ is a continuous linear
operator which is invertible with a continuous inverse
for all but finitely many values of $\mathrm{Re}$, i.e.
there are only finitely many turning or bifurcation
points.
Remark III.1.1. The number of turning or bifurcation points of the
discrete problem of course depends on the partition $\mathcal{T}$ and on the choice
of the spaces $X(\mathcal{T})$ and $Y(\mathcal{T})$.
III.1.7. Symmetrization. The integration by parts formulae of
§I.2.3 (p. 19) imply for all $u, v, w \in H_0^1(\Omega)^n$ the identity
$$\int_\Omega [(u \cdot \nabla)v] \cdot w \,dx
= -\int_\Omega [(u \cdot \nabla)w] \cdot v \,dx
+ \int_\Omega v \cdot w \operatorname{div} u \,dx.$$
If $u$ is solenoidal, i.e. $\operatorname{div} u = 0$, this in particular yields
$$\int_\Omega [(u \cdot \nabla)v] \cdot w \,dx
= -\int_\Omega [(u \cdot \nabla)w] \cdot v \,dx$$
for all $v, w \in H_0^1(\Omega)^n$.

This symmetry in general is violated for the discrete problem since
$\operatorname{div} u_\mathcal{T} \ne 0$. To enforce the symmetry, one often replaces in the discrete
problem the term
$$\int_\Omega \mathrm{Re}\,[(u_\mathcal{T} \cdot \nabla)u_\mathcal{T}] \cdot v_\mathcal{T} \,dx$$
by
$$\frac{1}{2}\int_\Omega \mathrm{Re}\,[(u_\mathcal{T} \cdot \nabla)u_\mathcal{T}] \cdot v_\mathcal{T} \,dx
- \frac{1}{2}\int_\Omega \mathrm{Re}\,[(u_\mathcal{T} \cdot \nabla)v_\mathcal{T}] \cdot u_\mathcal{T} \,dx.$$
III.1.8. A priori error estimates. As in §II.2.11 (p. 39) we
denote by $k_u$ and $k_p$ the maximal polynomial degrees $k$ and $m$ such
that
• $S_0^{k,0}(\mathcal{T})^n \subset X(\mathcal{T})$
and either
• $S^{m,-1}(\mathcal{T}) \cap L_0^2(\Omega) \subset Y(\mathcal{T})$ when using a discontinuous pressure
approximation or
• $S^{m,0}(\mathcal{T}) \cap L_0^2(\Omega) \subset Y(\mathcal{T})$ when using a continuous pressure
approximation.
Set
$$k = \min\{k_u - 1, \, k_p\}.$$
Further we denote by $u$, $p$ a solution of the variational formulation
of the Navier-Stokes equations and by $u_\mathcal{T}$, $p_\mathcal{T}$ a solution of the discrete
problem.
With these notations the following a priori error estimates can be
proved:

Assume that
• $u \in H^{k+2}(\Omega)^n \cap H_0^1(\Omega)^n$ and $p \in H^{k+1}(\Omega) \cap L_0^2(\Omega)$,
• $h\,\mathrm{Re}\,\|f\|_0$ is sufficiently small, and
• $u$ is no turning or bifurcation point of the variational problem.
Then
$$|u - u_\mathcal{T}|_1 + \|p - p_\mathcal{T}\|_0 \le c_1 h^{k+1} \mathrm{Re}^2 \|f\|_k.$$
If in addition $\Omega$ is convex, then
$$\|u - u_\mathcal{T}\|_0 \le c_2 h^{k+2} \mathrm{Re}^2 \|f\|_k.$$
The constants $c_1$ and $c_2$ only depend on the domain $\Omega$.
Remark III.1.2. (1) As for the Stokes equations, the above regularity
assumptions are not realistic for practical problems.
(2) The condition "$h\,\mathrm{Re}\,\|f\|_0$ sufficiently small" can in general not be
quantified. Therefore it cannot be checked for a given discretization.
(3) One cannot conclude from the computed discrete solution whether
the analytical solution is a turning or bifurcation point or not.
(4) For most practical examples, the right-hand side of the above error
estimates behaves like $O(h\,\mathrm{Re}^2)$ or $O(h^2\,\mathrm{Re}^2)$. Thus they require
unrealistically small mesh-sizes for large Reynolds numbers.

These observations show that the a priori error estimates are of
purely academic interest. Practical information can only be obtained
from a posteriori error estimates.
III.1.9. A warning example. We consider the one-dimensional
Navier-Stokes equations
$$-u'' + \mathrm{Re}\, u u' = 0 \quad \text{in } I = (-1, 1),$$
$$u(-1) = 1, \qquad u(1) = -1.$$
Since
$$u u' = \Big(\frac{1}{2} u^2\Big)',$$
we conclude that
$$-u' + \frac{\mathrm{Re}}{2} u^2 = c$$
is constant.

To determine the constant $c$, we integrate the above equation and
obtain
$$2c = \int_{-1}^1 c \,dx
= \int_{-1}^1 \Big( -u' + \frac{\mathrm{Re}}{2} u^2 \Big) dx
= 2 + \frac{\mathrm{Re}}{2} \underbrace{\int_{-1}^1 u^2 \,dx}_{\ge 0}.$$
This shows that $c \ge 1$ and that we may write $c = \gamma^2$ with a different
constant $\gamma \ge 1$. Hence every solution of the differential equation
satisfies
$$u' = \frac{\mathrm{Re}}{2} u^2 - \gamma^2.$$
Therefore it must be of the form
$$u(x) = \beta_{\mathrm{Re}} \tanh(\alpha_{\mathrm{Re}} x)$$
with suitable parameters $\alpha_{\mathrm{Re}}$ and $\beta_{\mathrm{Re}}$ depending on $\mathrm{Re}$.

The boundary conditions imply that
$$\beta_{\mathrm{Re}} = -\frac{1}{\tanh(\alpha_{\mathrm{Re}})}.$$
Hence we have
$$u(x) = -\frac{\tanh(\alpha_{\mathrm{Re}} x)}{\tanh(\alpha_{\mathrm{Re}})}.$$
Inserting this expression in the differential equation yields
$$0 = \Big( -u' + \frac{\mathrm{Re}}{2} u^2 \Big)'
= \Big( \frac{\alpha_{\mathrm{Re}}}{\tanh(\alpha_{\mathrm{Re}})}
+ \Big[ \frac{\mathrm{Re}}{2} - \alpha_{\mathrm{Re}} \tanh(\alpha_{\mathrm{Re}}) \Big] u^2 \Big)'
= 2 \Big[ \frac{\mathrm{Re}}{2} - \alpha_{\mathrm{Re}} \tanh(\alpha_{\mathrm{Re}}) \Big] u u'.$$
Since neither $u$ nor its derivative $u'$ vanishes identically, we obtain the
defining relation
$$2\alpha_{\mathrm{Re}} \tanh(\alpha_{\mathrm{Re}}) = \mathrm{Re}$$
for the parameter $\alpha_{\mathrm{Re}}$.

Due to the monotonicity of $\tanh$, this equation admits for every
$\mathrm{Re} > 0$ a unique solution $\alpha_{\mathrm{Re}}$. Since $\tanh(x) \approx 1$ for $x \gg 1$, we have
$\alpha_{\mathrm{Re}} \approx \frac{\mathrm{Re}}{2}$ for $\mathrm{Re} \gg 1$. Hence, the solution $u$ has a sharp interior layer
at the origin for large values of $\mathrm{Re}$. Figure III.1.1 depicts the solution
$u$ for $\mathrm{Re} = 100$.

[Figure III.1.1: Solution of the one-dimensional Navier-Stokes equations for Re = 100]
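The defining relation $2\alpha_{\mathrm{Re}} \tanh(\alpha_{\mathrm{Re}}) = \mathrm{Re}$ has no closed-form solution, but since the left-hand side is increasing in $\alpha$ it can be solved by bisection. The following Java sketch does so on the bracket $[0, \mathrm{Re}]$, which is valid for $\mathrm{Re} \ge 1$; the bracket choice and iteration count are assumptions of this illustration.

```java
// Solve 2*alpha*tanh(alpha) = Re for alpha by bisection.
public class LayerParameter {

    static double alpha(double re) {
        // g(alpha) = 2*alpha*tanh(alpha) is increasing, g(0) = 0 < Re,
        // and g(Re) = 2*Re*tanh(Re) > Re for Re >= 1.
        double lo = 0.0, hi = re;
        for (int i = 0; i < 200; i++) {
            double mid = 0.5 * (lo + hi);
            if (2.0 * mid * Math.tanh(mid) < re) lo = mid;
            else hi = mid;
        }
        return 0.5 * (lo + hi);
    }
}
```

For Re = 100 the computed root is essentially Re/2 = 50, confirming the sharp-layer asymptotics above.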
For the discretization, we choose an integer $N \ge 1$, divide the
interval $(-1, 1)$ into $N + 1$ small subintervals of equal length $h = \frac{2}{N+1}$,
and use continuous piecewise linear finite elements on the resulting
mesh. We denote by $u_i$, $0 \le i \le N + 1$, the value of the discrete
solution at the mesh-point $x_i = -1 + ih$ and evaluate all integrals with
the Simpson rule. Since all integrands are piecewise polynomials of
degree at most 2, this integration is exact. With these notations,
the discretization results in the following finite difference scheme:
$$\frac{2u_i - u_{i-1} - u_{i+1}}{h}
+ \frac{\mathrm{Re}}{6}(u_i - u_{i-1})(2u_i + u_{i-1})
+ \frac{\mathrm{Re}}{6}(u_{i+1} - u_i)(2u_i + u_{i+1}) = 0
\quad \text{for } 1 \le i \le N,$$
$$u_0 = 1, \qquad u_{N+1} = -1.$$
For its solution, we try the ansatz
$$u_i = \begin{cases} 1 & \text{for } i = 0, \\
\delta & \text{for } 1 \le i \le N, \\
-1 & \text{for } i = N + 1 \end{cases}$$
with an unknown constant $\delta$. When inserting this ansatz in the difference
equations, we see that the equations corresponding to $2 \le i \le N - 1$
are satisfied independently of the value of $\delta$. The equations for
$i = 1$ and $i = N$ on the other hand result in
$$0 = \frac{\delta - 1}{h} + \frac{\mathrm{Re}}{6}(\delta - 1)(2\delta + 1)
= \frac{\delta - 1}{h}\Big[ 1 + \frac{\mathrm{Re}\,h}{6}(2\delta + 1) \Big]$$
and
$$0 = \frac{\delta + 1}{h} + \frac{\mathrm{Re}}{6}(-\delta - 1)(2\delta - 1)
= \frac{\delta + 1}{h}\Big[ 1 - \frac{\mathrm{Re}\,h}{6}(2\delta - 1) \Big],$$
respectively.

If $\mathrm{Re}\,h = 6$, these equations have the two solutions $\delta = 1$ and $\delta = -1$.
Both solutions obviously have nothing in common with the solution of
the differential equation.
If Reh ,= 6, the above equations have no solution. Hence, our ansatz
does not work. A more detailed analysis, however, shows that for
Reh ≥ 6 the discrete solution has nothing in common with the solu
tion of the diﬀerential equation.
In summary, we see that the ﬁnite element discretization yields a qual
itatively correct approximation only when h <
6
Re
. Hence we have to
expect a prohibitively small meshsize for reallife problems.
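The algebra for the two boundary-node equations is easy to verify numerically. The following sketch (illustrative code, not from the notes) evaluates both residuals for the constant ansatz and confirms that they only vanish simultaneously when Re·h = 6:

```python
def boundary_residuals(delta, h, Re):
    """Residuals of the discrete equations at i = 1 and i = N for the
    ansatz u_0 = 1, u_1 = ... = u_N = delta, u_{N+1} = -1."""
    r1 = (delta - 1.0) / h + Re / 6.0 * (delta - 1.0) * (2.0 * delta + 1.0)
    rN = (delta + 1.0) / h + Re / 6.0 * (-delta - 1.0) * (2.0 * delta - 1.0)
    return r1, rN
```

With Re = 100 and h = 6/Re both residuals vanish for δ = 1 and for δ = −1; for any other mesh size no constant δ satisfies both equations at once.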
When using a standard symmetric finite difference approximation, things do not change. The difference equations then take the form

(2u_i − u_{i−1} − u_{i+1})/h + (Re/2) u_i (u_{i+1} − u_{i−1}) = 0  for 1 ≤ i ≤ N,
u_0 = 1,
u_{N+1} = −1.

When Re h = 2, our ansatz now leads to the same solution as before. Again, one can prove that one obtains a completely useless discrete solution when Re h ≥ 2. Thus the principal result does not change in this case. Only the critical mesh-size is reduced by a factor 3.
Next, we try a backward difference approximation of the first order derivative. This results in the difference equations

(2u_i − u_{i−1} − u_{i+1})/h + Re u_i (u_i − u_{i−1}) = 0  for 1 ≤ i ≤ N,
u_0 = 1,
u_{N+1} = −1.
When trying our ansatz, we see that the equations corresponding to 2 ≤ i ≤ N − 1 are again satisfied independently of δ. The equations for i = 1 and i = N now take the form

0 = (δ − 1)/h + Re δ(δ − 1)
  = ((δ − 1)/h) [ 1 + Re h δ ]

and

0 = (δ + 1)/h + Re δ · 0
  = (δ + 1)/h,

respectively. Thus, in the case Re h = 1 we obtain the unique solution δ = −1. A more refined analysis shows that we obtain a qualitatively correct discrete solution only if h ≤ 1/Re.
Finally, we try a forward difference approximation of the first order derivative. This yields the difference equations

(2u_i − u_{i−1} − u_{i+1})/h + Re u_i (u_{i+1} − u_i) = 0  for 1 ≤ i ≤ N,
u_0 = 1,
u_{N+1} = −1.
Our ansatz now results in the two conditions

0 = (δ − 1)/h + Re δ · 0
  = (δ − 1)/h

and

0 = (δ + 1)/h + Re δ(−1 − δ)
  = ((δ + 1)/h) [ 1 − Re h δ ].

For Re h = 1 this yields the unique solution δ = 1. A qualitatively correct discrete solution is obtained only if h ≤ 1/Re.
These experiences show that we need a true upwinding, i.e. a backward difference approximation where u > 0 and a forward difference approximation where u < 0. Thus the upwind direction – and with it the discretization – depends on the solution of the discrete problem!
III.1.10. Upwind methods. The previous section shows that we must modify the finite element discretization of §III.1.4 (p. 80) in order to obtain qualitatively correct approximations also for "large" values of h Re.
One possibility to achieve this goal is to use an upwind difference approximation for the convective derivative (u_T·∇)u_T. To describe the idea we consider only the lowest order method.
In a first step, we approximate the integral involving the convective derivative by a one-point quadrature rule

∫_Ω [(u_T·∇)u_T]·v_T dx ≈ Σ_{K∈T} |K| [(u_T(x_K)·∇)u_T(x_K)]·v_T(x_K).
Here |K| is the area or volume, respectively, of the element K and x_K denotes its barycentre. When using a first order approximation for the velocity, i.e. X(T) = S^{1,0}_0(T)ⁿ, this does not deteriorate the asymptotic convergence rate of the finite element discretization.
In a second step, we replace the convective derivative by a suitable upwind difference

(u_T(x_K)·∇)u_T(x_K) ≈ ( |u_T(x_K)| / |x_K − y_K| ) ( u_T(x_K) − u_T(y_K) ).

Here | · | denotes the Euclidean norm in Rⁿ and y_K is the intersection of the half-line {x_K − s u_T(x_K) : s > 0} with the boundary of K (cf. Figure III.1.2).
Figure III.1.2. Upwind difference
In a last step, we replace u_T(y_K) by I_T u_T(y_K), where I_T u_T denotes the linear interpolate of u_T in the vertices of the edge or face, respectively, of K which contains y_K. If, e.g., in Figure III.1.2 |y_K − b| = (1/4)|a − b|, we have I_T u_T(y_K) = (1/4) u_T(a) + (3/4) u_T(b).
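How y_K and the interpolated value can be computed is sketched below (illustrative Python with a hypothetical triangle and data, not part of the notes): for each edge of K we intersect the half-line x_K − s u with the edge, keep the smallest s > 0, and interpolate linearly between the two edge endpoints.

```python
def upwind_point(xK, u, vertices):
    """Intersect {xK - s*u : s > 0} with the boundary of the triangle
    given by `vertices`; return (edge index, s, edge parameter t)."""
    best = None
    for k in range(3):
        p, q = vertices[k], vertices[(k + 1) % 3]
        # Solve xK - p = s*u + t*(q - p) for (s, t) by Cramer's rule.
        a11, a12 = u[0], q[0] - p[0]
        a21, a22 = u[1], q[1] - p[1]
        det = a11 * a22 - a12 * a21
        if abs(det) < 1e-14:
            continue                      # ray parallel to this edge
        b1, b2 = xK[0] - p[0], xK[1] - p[1]
        s = (b1 * a22 - a12 * b2) / det
        t = (a11 * b2 - b1 * a21) / det
        if s > 1e-12 and -1e-12 <= t <= 1.0 + 1e-12:
            if best is None or s < best[1]:
                best = (k, s, min(max(t, 0.0), 1.0))
    return best

def interpolate_at_upwind_point(xK, u, vertices, values):
    """Linear interpolation of vertex values at y_K."""
    k, s, t = upwind_point(xK, u, vertices)
    return (1.0 - t) * values[k] + t * values[(k + 1) % 3]
```

For a linear function the interpolated value at y_K is exact, which is what makes the construction first-order accurate.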
For "large" values of h Re, this upwinding yields a better approximation than the straightforward discretization of §III.1.4 (p. 80). For sufficiently small mesh-sizes, the error converges to zero linearly with h. The upwind direction depends on the discrete solution. This has the awkward side-effect that the discrete nonlinear problem is not differentiable and leads to severe complications for the solution of the discrete problem.

III.1.11. The streamline-diffusion method. The streamline-diffusion method has an upwind effect, but avoids the differentiability problem of the upwind scheme of the previous section. In particular, the resulting discrete problem is differentiable and can be solved with a Newton method. Moreover, the streamline-diffusion method simultaneously has a stabilizing effect with respect to the incompressibility constraint. In this respect it generalizes the Petrov-Galerkin method of §II.3 (p. 40).
The idea is to add an artificial viscosity in the streamline direction. Thus the solution is slightly "smeared" in its smooth direction while retaining its steep gradient in the orthogonal direction. The artificial viscosity is added via a suitable penalty term which is consistent in the sense that it vanishes for any solution of the Navier-Stokes equations.
Retaining the notations of §III.1.4 (p. 80), the streamline-diffusion discretization is given by:
Find u_T ∈ X(T) and p_T ∈ Y(T) such that

∫_Ω ∇u_T : ∇v_T dx − ∫_Ω p_T div v_T dx + ∫_Ω Re [(u_T·∇)u_T]·v_T dx
  + Σ_{K∈T} δ_K h_K² ∫_K Re [ −f − ∆u_T + ∇p_T + Re (u_T·∇)u_T ] · [ (u_T·∇)v_T ] dx
  + Σ_{K∈T} α_K δ_K ∫_K div u_T div v_T dx
  = ∫_Ω f·v_T dx

∫_Ω q_T div u_T dx
  + Σ_{K∈T} δ_K h_K² ∫_K [ −∆u_T + ∇p_T + Re (u_T·∇)u_T ] · ∇q_T dx
  + Σ_{E∈E_T} δ_E h_E ∫_E [p_T]_E [q_T]_E dS
  = Σ_{K∈T} δ_K h_K² ∫_K f·∇q_T dx

for all v_T ∈ X(T) and all q_T ∈ Y(T).

Recall that [·]_E denotes the jump across E.
The stabilization parameters δ_K and δ_E are chosen as described in §II.3.4 (p. 44). The additional parameter α_K has to be nonnegative and can be chosen equal to zero. Computational experiments, however, suggest that the choice α_K ≈ 1 is preferable. When setting Re = 0 and α_K = 0, the above discretization reduces to the Petrov-Galerkin discretization of the Stokes equations presented in §II.3.3 (p. 43).
III.2. Solution of the discrete nonlinear problems

III.2.1. General structure. For the solution of discrete nonlinear problems which result from a discretization of a nonlinear partial differential equation one can proceed in two ways:
• One applies a nonlinear solver, such as e.g. the Newton method, to the nonlinear differential equation and then discretizes the resulting linear partial differential equations.
• One directly applies a nonlinear solver, such as e.g. the Newton method, to the discrete nonlinear problems. The resulting discrete linear problems can then be interpreted as discretizations of suitable linear differential equations.
Both approaches are often equivalent and yield comparable approximations. In this section we will follow the first approach since it requires less notation. All algorithms can easily be reinterpreted in the second sense described above.
We recall the fixed-point formulation of the Navier-Stokes equations of §III.1.2 (p. 79),

(III.2.1)  u = T(f − Re (u·∇)u),

with the Stokes operator T which associates with each g the unique solution v = Tg of the Stokes equations

−∆v + grad q = g in Ω
div v = 0 in Ω
v = 0 on Γ.

Most of the algorithms require the solution of discrete Stokes equations or of slight variations thereof. This can be achieved with the methods of §II.6 (p. 53).
III.2.2. Fixed-point iteration. The fixed-point iteration is given by

u^{i+1} = T(f − Re (u^i·∇)u^i).

It results in the following algorithm:

Algorithm III.2.1. (Fixed-point iteration)
(0) Given: an initial guess u⁰ and a tolerance ε > 0.
    Sought: an approximate solution of the stationary incompressible Navier-Stokes equations.
(1) Set i = 0.
(2) Solve the Stokes equations
    −∆u^{i+1} + ∇p^{i+1} = f − Re (u^i·∇)u^i in Ω
    div u^{i+1} = 0 in Ω
    u^{i+1} = 0 on Γ.
(3) If |u^{i+1} − u^i|₁ ≤ ε, return u^{i+1}, p^{i+1} as approximate solution; stop. Otherwise increase i by 1 and go to step 2.

The fixed-point iteration converges if Re² ‖f‖₀ ≤ 1. The convergence rate is approximately 1 − Re² ‖f‖₀. Therefore this algorithm can only be recommended for problems with very small Reynolds numbers.
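A one-dimensional caricature of Algorithm III.2.1 can be sketched as follows (illustrative Python, not from the notes): the "Stokes solve" is replaced by a finite difference Poisson solve for −u″ = g on (0, 1) with homogeneous Dirichlet conditions, and the model problem is −u″ + Re u u′ = f.

```python
import numpy as np

def fixed_point_burgers(Re, f=1.0, n=50, tol=1e-12, maxit=200):
    """Fixed-point iteration u <- A^{-1}(f - Re*u*u') for the model
    problem -u'' + Re*u*u' = f on (0,1), u(0) = u(1) = 0."""
    h = 1.0 / n
    m = n - 1                                  # number of interior nodes
    A = (np.diag(2.0 * np.ones(m)) - np.diag(np.ones(m - 1), 1)
         - np.diag(np.ones(m - 1), -1)) / h**2
    u = np.zeros(m)
    for it in range(maxit):
        # central difference of u, using the zero boundary values
        up = np.gradient(np.concatenate(([0.0], u, [0.0])), h)[1:-1]
        u_new = np.linalg.solve(A, f - Re * u * up)
        if np.max(np.abs(u_new - u)) < tol:
            return u_new, it
        u = u_new
    raise RuntimeError("no convergence")
```

For small Re the iteration contracts quickly; for larger Re it slows down and eventually diverges, mirroring the convergence condition stated above.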
III.2.3. Newton iteration. Equation (III.2.1) can be rewritten in the form

F(u) = u − T(f − Re (u·∇)u) = 0.

We may apply Newton's method to F. Then we must solve in each step a linear problem of the form

g = DF(u)v = v + Re T((u·∇)v + (v·∇)u).

This results in the following algorithm:

Algorithm III.2.2. (Newton iteration)
(0) Given: an initial guess u⁰ and a tolerance ε > 0.
    Sought: an approximate solution of the stationary incompressible Navier-Stokes equations.
(1) Set i = 0.
(2) Solve the modified Stokes equations
    −∆u^{i+1} + ∇p^{i+1} + Re (u^i·∇)u^{i+1} + Re (u^{i+1}·∇)u^i = f + Re (u^i·∇)u^i in Ω
    div u^{i+1} = 0 in Ω
    u^{i+1} = 0 on Γ.
(3) If |u^{i+1} − u^i|₁ ≤ ε, return u^{i+1}, p^{i+1} as approximate solution; stop. Otherwise increase i by 1 and go to step 2.

The Newton iteration converges quadratically. However, the initial guess must be "close" to the sought solution; otherwise the iteration may diverge. To avoid this, one can use a damped Newton iteration. The modified Stokes problems incorporate convection and reaction terms which are proportional to the Reynolds number. For large Reynolds numbers this may cause severe difficulties with the solution of the modified Stokes problems.
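The same one-dimensional model can be treated with Newton's method (illustrative sketch, not from the notes): the Jacobian of u ↦ −u″ + Re u u′ is v ↦ −v″ + Re(u v′ + v u′), mirroring the convection and reaction terms of the modified Stokes problems.

```python
import numpy as np

def newton_burgers(Re, f=1.0, n=50, tol=1e-10, maxit=50):
    """Newton iteration for -u'' + Re*u*u' = f on (0,1), u(0)=u(1)=0,
    discretized by finite differences."""
    h = 1.0 / n
    m = n - 1
    I = np.eye(m)
    A = (2.0 * I - np.diag(np.ones(m - 1), 1) - np.diag(np.ones(m - 1), -1)) / h**2
    D = (np.diag(np.ones(m - 1), 1) - np.diag(np.ones(m - 1), -1)) / (2.0 * h)
    u = np.zeros(m)
    for it in range(maxit):
        up = D @ u
        F = A @ u + Re * u * up - f
        if np.max(np.abs(F)) < tol:
            return u, it
        # Jacobian: A + Re*(diag(u') + diag(u)*D), the 1D analogue of the
        # convection/reaction terms in the modified Stokes problem
        J = A + Re * (np.diag(up) + np.diag(u) @ D)
        u = u - np.linalg.solve(J, F)
    raise RuntimeError("no convergence")
```

Starting from u = 0 the first step is exactly one Poisson solve; for moderate Re the residual then drops quadratically.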
III.2.4. Path tracking. Instead of the single problem (III.2.1) we now look at a whole family of Navier-Stokes equations with a parameter λ:

u_λ = T(f − λ(u_λ·∇)u_λ).

Its solutions u_λ depend differentiably on the parameter λ. The derivative v_λ = du_λ/dλ solves the modified Stokes equations

v_λ = −T( λ(v_λ·∇)u_λ + λ(u_λ·∇)v_λ + (u_λ·∇)u_λ ).

If we know the solution u_{λ₀} corresponding to the parameter λ₀, we can compute v_{λ₀} and may use u_{λ₀} + (λ₁ − λ₀)v_{λ₀} as initial guess for the Newton iteration applied to the problem with parameter λ₁ > λ₀. If
λ₁ − λ₀ is not too large, a few Newton iterations will yield a sufficiently good approximation of u_{λ₁}.
This idea leads to the following algorithm:

Algorithm III.2.3. (Path tracking)
(0) Given: a Reynolds number Re, an initial parameter 0 ≤ λ < Re, an increment ∆λ > 0, and a tolerance ε > 0.
    Sought: an approximate solution of the stationary incompressible Navier-Stokes equations with Reynolds number Re.
(1) Set u_λ = 0.
(2) Apply a few Newton iterations to the Navier-Stokes equations with Reynolds number λ, initial guess u_λ, and tolerance ε. Denote the result by u_λ.
(3) If λ = Re, return the velocity u_λ and the corresponding pressure p_λ as approximate solution; stop. Otherwise go to step 4.
(4) Solve the modified Stokes equations
    −∆v_λ + ∇q_λ + λ(u_λ·∇)v_λ + λ(v_λ·∇)u_λ = f − λ(u_λ·∇)u_λ in Ω
    div v_λ = 0 in Ω
    v_λ = 0 on Γ.
(5) Replace u_λ by u_λ + ∆λ v_λ and λ by min{Re, λ + ∆λ}. Go to step 2.

The path-tracking algorithm should be combined with a step-length control: if the Newton algorithm in step 2 does not converge sufficiently well, the increment ∆λ should be reduced.
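The predictor-corrector structure can be illustrated on a scalar analogue (a hypothetical toy problem, not from the notes): track solutions of x + λx² = 1 from λ = 0 up to a target value. Differentiating gives dx/dλ = −x²/(1 + 2λx), the analogue of the modified Stokes problem in step 4.

```python
def track(lam_target, dlam=0.5, tol=1e-12):
    """Continuation in lambda for x + lam*x**2 = 1, starting from the
    known solution x = 1 at lam = 0."""
    lam, x = 0.0, 1.0
    while lam < lam_target:
        # predictor: Euler step along the solution path (step 5)
        v = -x * x / (1.0 + 2.0 * lam * x)
        step = min(dlam, lam_target - lam)
        x += step * v
        lam += step
        # corrector: Newton iterations at the new parameter value (step 2)
        for _ in range(50):
            F = x + lam * x * x - 1.0
            if abs(F) < tol:
                break
            x -= F / (1.0 + 2.0 * lam * x)
    return x
```

Because the predictor lands close to the new solution branch, only a few Newton corrections are needed per parameter step.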
III.2.5. A nonlinear CG-algorithm. The idea is to apply a nonlinear CG-algorithm to the least-squares minimization problem

minimize (1/2) | u − T(f − Re (u·∇)u) |₁².

It leads to the following algorithm:

Algorithm III.2.4. (Nonlinear CG-algorithm of Polak-Ribière)
(0) Given: an initial guess u⁰ and a tolerance ε > 0.
    Sought: an approximate solution of the stationary incompressible Navier-Stokes equations.
(1) Compute the solutions z⁰ and g̃⁰ of the Stokes problems
    −∆z⁰ + ∇r⁰ = Re (u⁰·∇)u⁰ − f in Ω
    div z⁰ = 0 in Ω
    z⁰ = 0 on Γ
and
    −∆g̃⁰ + ∇s̃⁰ = Re {(u⁰ + z⁰)·(∇u⁰) − (u⁰·∇)(u⁰ + z⁰)} in Ω
    div g̃⁰ = 0 in Ω
    g̃⁰ = 0 on Γ.
Set i = 0 and w⁰ = g⁰ = u⁰ + z⁰ + g̃⁰.
(2) If
    ( ∫_Ω |∇w^i|² dx )^{1/2} ≤ ε,
return the velocity u^i and the corresponding pressure p^i as approximate solution; stop. Otherwise go to step 3.
(3) Compute the solutions z^i_1 and z^i_2 of the Stokes problems
    −∆z^i_1 + ∇r^i_1 = Re {(u^i·∇)w^i + (w^i·∇)u^i} in Ω
    div z^i_1 = 0 in Ω
    z^i_1 = 0 on Γ
and
    −∆z^i_2 + ∇r^i_2 = Re (w^i·∇)w^i in Ω
    div z^i_2 = 0 in Ω
    z^i_2 = 0 on Γ.
Compute
    α = −∫_Ω ∇(u^i + z^i) : ∇(w^i + z^i_1) dx,
    β = ∫_Ω |∇(w^i + z^i_1)|² dx + ∫_Ω ∇(u^i + z^i) : ∇z^i_2 dx,
    γ = −(3/2) ∫_Ω ∇(w^i + z^i_1) : ∇z^i_2 dx,
    δ = (1/2) ∫_Ω |∇z^i_2|² dx.
Determine the smallest positive zero ρ_i of
    α + βρ + γρ² + δρ³
and set
    u^{i+1} = u^i − ρ_i w^i,
    z^{i+1} = z^i − ρ_i z^i_1 + (1/2) ρ_i² z^i_2.
Compute the solution g̃^{i+1} of the Stokes problem
    −∆g̃^{i+1} + ∇s̃^{i+1} = Re {(u^{i+1} + z^{i+1})·(∇u^{i+1}) − (u^{i+1}·∇)(u^{i+1} + z^{i+1})} in Ω
    div g̃^{i+1} = 0 in Ω
    g̃^{i+1} = 0 on Γ
and set
    g^{i+1} = g̃^{i+1} + u^{i+1} + z^{i+1}.
Compute
    σ^{i+1} = ( ∫_Ω |∇g^{i+1}|² dx )^{−1} ∫_Ω ∇(g^{i+1} − g^i) : ∇g^{i+1} dx
and set
    w^{i+1} = g^{i+1} + σ^{i+1} w^i.
Increase i by 1 and go to step 2.

This algorithm has the advantage that it only requires the solution of Stokes problems and thus avoids the difficulties associated with large convection and reaction terms.
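The Polak-Ribière update can be isolated from the Stokes machinery by running the same scheme on a plain quadratic least-squares functional (illustrative sketch, not from the notes): with an exact line search and the PR formula for σ, the method reproduces linear CG and minimizes ½xᵀAx − bᵀx in about n steps.

```python
import numpy as np

def polak_ribiere_quadratic(A, b, tol=1e-10, maxit=100):
    """Nonlinear CG with the Polak-Ribiere update, applied to the
    quadratic functional 0.5*x^T A x - b^T x (gradient g = A x - b)."""
    x = np.zeros(len(b))
    g = A @ x - b
    d = -g                                   # initial search direction
    for _ in range(maxit):
        if np.linalg.norm(g) < tol:
            return x
        rho = -(g @ d) / (d @ (A @ d))       # exact line search (quadratic)
        x = x + rho * d
        g_new = A @ x - b
        sigma = g_new @ (g_new - g) / (g @ g)  # Polak-Ribiere factor
        d = -g_new + sigma * d
        g = g_new
    return x
```

In the algorithm above the role of the line search is played by the smallest positive zero of the cubic α + βρ + γρ² + δρ³, and the role of the gradient g by the functions g^i.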
III.2.6. Operator splitting. The idea is to decouple the difficulties associated with the nonlinear convection term and with the incompressibility constraint. This leads to the following algorithm:

Algorithm III.2.5. (Operator splitting)
(0) Given: an initial guess u⁰, a damping parameter ω ∈ (0, 1), and a tolerance ε > 0.
    Sought: an approximate solution of the stationary incompressible Navier-Stokes equations.
(1) Set i = 0.
(2) Solve the Stokes equations
    2ω u^{i+1/4} − ∆u^{i+1/4} + ∇p^{i+1/4} = 2ω u^i + f − Re (u^i·∇)u^i in Ω
    div u^{i+1/4} = 0 in Ω
    u^{i+1/4} = 0 on Γ.
(3) Solve the nonlinear Poisson equations
    ω u^{i+3/4} − ∆u^{i+3/4} + Re (u^{i+3/4}·∇)u^{i+3/4} = ω u^{i+1/4} + f − ∇p^{i+1/4} in Ω
    u^{i+3/4} = 0 on Γ.
(4) Solve the Stokes equations
    2ω u^{i+1} − ∆u^{i+1} + ∇p^{i+1} = 2ω u^{i+3/4} + f − Re (u^{i+3/4}·∇)u^{i+3/4} in Ω
    div u^{i+1} = 0 in Ω
    u^{i+1} = 0 on Γ.
(5) If |u^{i+1} − u^i|₁ ≤ ε, return u^{i+1}, p^{i+1} as approximate solution; stop. Otherwise increase i by 1 and go to step 2.
The nonlinear problem in step 3 is solved with one of the algorithms presented in the previous subsections. This task is simplified by the fact that the incompressibility condition and the pressure are now missing.
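A scalar caricature of the splitting (purely illustrative, not from the notes): take L(u) = u for the linear "Stokes" part and N(u) = u³ for the nonlinear part, and solve L(u) + N(u) = f, i.e. u + u³ = 2 with solution u = 1. Steps 2-4 then become three scalar solves per sweep.

```python
def split_solve(f=2.0, omega=0.5, tol=1e-10, maxit=200):
    """Operator-splitting iteration for the scalar model u + u**3 = f,
    with linear part L(u) = u and nonlinear part N(u) = u**3."""
    def solve_nonlinear(rhs, omega):
        # Newton for omega*b + b + b**3 = rhs (the 'nonlinear' step)
        b = 1.0
        for _ in range(100):
            F = omega * b + b + b**3 - rhs
            if abs(F) < 1e-14:
                break
            b -= F / (omega + 1.0 + 3.0 * b * b)
        return b

    u = 0.0
    for i in range(maxit):
        # step 2: linear solve, nonlinearity taken from the old iterate
        a = (2.0 * omega * u + f - u**3) / (2.0 * omega + 1.0)
        # step 3: nonlinear solve
        bmid = solve_nonlinear(omega * a + f, omega)
        # step 4: linear solve again
        u_new = (2.0 * omega * bmid + f - bmid**3) / (2.0 * omega + 1.0)
        if abs(u_new - u) < tol:
            return u_new, i
        u = u_new
    raise RuntimeError("no convergence")
```

The damping parameter ω controls how strongly the previous iterate anchors each substep; for this toy problem any ω ∈ (0, 1) converges quickly.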
III.2.7. Multigrid algorithms. When applying the multigrid algorithm of §II.6.5 (p. 58) to nonlinear problems one only has to modify the pre- and post-smoothing steps. This can be done in two possible ways:
• Apply a few iterations of the Newton algorithm to the nonlinear problem, combined with very few iterations of a classical iterative scheme, such as e.g. the Gauß-Seidel iteration, for the auxiliary linear problems that must be solved during the Newton iteration.
• Successively choose a node and the corresponding equation, freeze all unknowns that do not belong to the current node, and solve for the current unknown by applying a few Newton iterations to the resulting nonlinear equation in one unknown.
The second variant is often called the nonlinear Gauß-Seidel algorithm.
Alternatively one can apply a variant of the multigrid algorithm of §II.6.5 (p. 58) to the linear problems that must be solved in the algorithms described above. Due to the lower costs for implementation, this variant is often preferred in practice.

III.3. Adaptivity for nonlinear problems

III.3.1. General structure. The general structure of an adaptive algorithm as described in §II.7.2 (p. 68) directly applies to nonlinear problems. One only has to adapt the a posteriori error estimator. The marking strategy, the regular refinement, the additional refinement, and the data structures described in §II.7.5 (p. 73) – §II.7.8 (p. 76) do not change.
Throughout this section we denote by X(T) and Y(T) finite element spaces for the velocity and pressure, respectively, associated with a given partition T of the domain Ω. With these spaces we associate a discretization of the stationary incompressible Navier-Stokes equations as described in §III.1 (p. 79). The computed solution of the corresponding discrete problem is denoted by u_T, p_T, whereas u, p denotes a solution of the Navier-Stokes equations.
III.3.2. A residual a posteriori error estimator. The residual a posteriori error estimator for the Navier-Stokes equations is given by

η_K = { h_K² ‖f + ∆u_T − ∇p_T − Re (u_T·∇)u_T‖²_{L²(K)} + ‖div u_T‖²_{L²(K)}
        + (1/2) Σ_{E∈E_T, E⊂∂K} h_E ‖[n_E·(∇u_T − p_T I)]_E‖²_{L²(E)} }^{1/2}.

When comparing it with the residual estimator for the Stokes equations, we observe that the nonlinear convection term has been added to the element residual.
Under suitable conditions on the solution of the Navier-Stokes equations, one can prove that – as in the linear case – the residual error estimator is reliable and efficient.
III.3.3. Error estimators based on the solution of auxiliary problems. The error estimators based on the solution of auxiliary local discrete problems are very similar to those in the linear case. In particular, the nonlinearity only enters into the data; the local problems themselves remain linear.
We recall the notations of §II.7.4 (p. 70) and set

X(K) = span{ ψ_K v, ψ_E w : v ∈ R_{k_T}(K)ⁿ, w ∈ R_{k_E}(E)ⁿ, E ∈ E_T ∩ ∂K },
Y(K) = span{ ψ_K q : q ∈ R_{k_u − 1}(K) },
X̄(K) = span{ ψ_K v, ψ_E w : v ∈ R_{k_T}(K′)ⁿ, K′ ∈ T ∩ ω_K, w ∈ R_{k_E}(E)ⁿ, E ∈ E_T ∩ ∂K },
Ȳ(K) = span{ ψ_K q : q ∈ R_{k_u − 1}(K′), K′ ∈ T ∩ ω_K },

where

k_T = max{ k_u(k_u − 1), k_u + n, k_p − 1 },
k_E = max{ k_u − 1, k_p },
and k_u and k_p denote the polynomial degrees of the velocity and pressure approximation, respectively. Note that the definition of k_T differs from the one in §II.7.4 (p. 70) and takes into account the polynomial degree of the nonlinear convection term.
With these notations we consider the following discrete Stokes problem with Neumann boundary conditions:

Find u_K ∈ X(K) and p_K ∈ Y(K) such that

∫_K ∇u_K : ∇v_K dx − ∫_K p_K div v_K dx
  = ∫_K { f + ∆u_T − ∇p_T − Re (u_T·∇)u_T }·v_K dx + ∫_{∂K} [n_K·(∇u_T − p_T I)]_{∂K}·v_K dS

∫_K q_K div u_K dx = ∫_K q_K div u_T dx

for all v_K ∈ X(K) and all q_K ∈ Y(K).
With the solution of this problem, we define the Neumann estimator by

η_{N,K} = { |u_K|²_{H¹(K)} + ‖p_K‖²_{L²(K)} }^{1/2}.
Similarly we can consider the following discrete Stokes problem with Dirichlet boundary conditions:

Find ū_K ∈ X̄(K) and p̄_K ∈ Ȳ(K) such that

∫_{ω_K} ∇ū_K : ∇v_K dx − ∫_{ω_K} p̄_K div v_K dx
  = ∫_{ω_K} f·v_K dx − ∫_{ω_K} ∇u_T : ∇v_K dx + ∫_{ω_K} p_T div v_K dx − ∫_{ω_K} Re (u_T·∇)u_T·v_K dx

∫_{ω_K} q_K div ū_K dx = ∫_{ω_K} q_K div u_T dx

for all v_K ∈ X̄(K) and all q_K ∈ Ȳ(K).
With the solution of this problem, we define the Dirichlet estimator by

η_{D,K} = { |ū_K|²_{H¹(ω_K)} + ‖p̄_K‖²_{L²(ω_K)} }^{1/2}.

As in the linear case, one can prove that both estimators are reliable and efficient and comparable to the residual estimator. Remark II.7.5 (p. 73) also applies to the nonlinear problem.
CHAPTER IV

Instationary problems

IV.1. Discretization of the instationary Navier-Stokes equations

IV.1.1. Variational formulation. We recall the instationary incompressible Navier-Stokes equations of §I.1.12 (p. 15) with no-slip boundary condition

∂u/∂t − ν∆u + (u·∇)u + grad p = f in Ω × (0, T)
div u = 0 in Ω × (0, T)
u = 0 on Γ × (0, T)
u(·, 0) = u₀ in Ω.

Here, T > 0 is a given final time and u₀ denotes a given initial velocity.
The variational formulation of this problem is given by:

Find a velocity field u with

max_{0<t<T} ∫_Ω |u(x, t)|² dx + ∫₀ᵀ ∫_Ω |∇u(x, t)|² dx dt < ∞

and a pressure p with

∫₀ᵀ ∫_Ω |p(x, t)|² dx dt < ∞

such that

∫₀ᵀ ∫_Ω { −u(x, t)·∂v(x, t)/∂t + ν∇u(x, t) : ∇v(x, t) + [(u(x, t)·∇)u(x, t)]·v(x, t) − p(x, t) div v(x, t) } dx dt
  = ∫₀ᵀ ∫_Ω f(x, t)·v(x, t) dx dt + ∫_Ω u₀(x)·v(x, 0) dx

∫₀ᵀ ∫_Ω q(x, t) div u(x, t) dx dt = 0

holds for all v with

max_{0<t<T} ∫_Ω { |∂v(x, t)/∂t|² + |∇v(x, t)|² } dx < ∞

and all q with

max_{0<t<T} ∫_Ω |q(x, t)|² dx < ∞.
IV.1.2. Existence and uniqueness results. The variational formulation of the instationary incompressible Navier-Stokes equations admits at least one solution. In two space dimensions, this solution is unique. In three space dimensions, uniqueness of the solution can only be guaranteed within a restricted class of more regular functions. But the existence of such a more regular solution cannot be guaranteed.
Any solution of the instationary incompressible Navier-Stokes equations behaves like √t for small times t. For larger times the smoothness with respect to time is better. This singular behaviour for small times must be taken into account for the time discretization.

IV.1.3. Numerical methods for ordinary differential equations revisited. Before presenting the various discretization schemes for the time-dependent Navier-Stokes equations, we shortly recapitulate some basic facts on numerical methods for ordinary differential equations.
Suppose that we have to solve an initial value problem

(IV.1.1)  dy/dt = F(y, t),  y(0) = y₀

in Rⁿ on the time interval (0, T).
We choose an integer N ≥ 1 and intermediate times 0 = t₀ < t₁ < … < t_N = T and set τ_i = t_i − t_{i−1}, 1 ≤ i ≤ N. The approximation to y(t_i) is denoted by y^i.
The simplest and most popular method is the θ-scheme. It is given by

y⁰ = y₀
(y^i − y^{i−1})/τ_i = θ F(y^i, t_i) + (1 − θ) F(y^{i−1}, t_{i−1}),  1 ≤ i ≤ N,

where θ ∈ [0, 1] is a fixed parameter. The choice θ = 0 yields the explicit Euler scheme, θ = 1 corresponds to the implicit Euler scheme, and the choice θ = 1/2 gives the Crank-Nicolson scheme. The θ-scheme is implicit unless θ = 0. It is A-stable provided θ ≥ 1/2. The θ-scheme is of order 1, if θ ≠ 1/2, and of order 2, if θ = 1/2.
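For the scalar test equation y′ = λy, one θ-step multiplies y by the stability function R(z) = (1 + (1 − θ)z)/(1 − θz) with z = λτ, and A-stability for θ ≥ 1/2 means |R(z)| ≤ 1 whenever Re z ≤ 0. A quick illustrative check (not part of the notes):

```python
def theta_stability(z, theta):
    """Amplification factor of one theta-scheme step for y' = lambda*y,
    with z = lambda*tau (complex values allowed)."""
    return (1.0 + (1.0 - theta) * z) / (1.0 - theta * z)
```

For z = −3 the explicit Euler factor is −2 (blow-up), while implicit Euler gives 1/4 and Crank-Nicolson −1/5, both damped.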
Another class of popular methods is given by the various Runge-Kutta schemes. They take the form

y⁰ = y₀
y^{i,j} = y^{i−1} + τ_i Σ_{k=1}^{r} a_{jk} F(t_{i−1} + c_k τ_i, y^{i,k}),  1 ≤ j ≤ r,
y^i = y^{i−1} + τ_i Σ_{k=1}^{r} b_k F(t_{i−1} + c_k τ_i, y^{i,k}),  1 ≤ i ≤ N,

where 0 ≤ c₁ ≤ … ≤ c_r ≤ 1. The number r is called the stage number of the Runge-Kutta scheme. The scheme is called explicit, if a_{jk} = 0 for all k ≥ j; otherwise it is called implicit.
For ease of notation one usually collects the numbers c_k, a_{jk}, b_k
in a table of the form

c₁  | a₁₁  a₁₂  …  a₁ᵣ
⋮   | ⋮          ⋮
c_r | a_{r1}  a_{r2}  …  a_{rr}
    | b₁  b₂  …  b_r
The Euler and Crank-Nicolson schemes are Runge-Kutta schemes. They correspond to the tables

explicit Euler (r = 1):
0 | 0
  | 1

implicit Euler (r = 1):
1 | 1
  | 1

Crank-Nicolson (r = 2):
0 | 0    0
1 | 1/2  1/2
  | 1/2  1/2

Strongly diagonal implicit Runge-Kutta schemes (in short: SDIRK schemes) are particularly well suited for the discretization of time-dependent partial differential equations since they combine high order with good stability properties. The simplest representatives of this class are called SDIRK 2 and SDIRK 5. They are both A-stable
and have orders 3 and 4, respectively. They are given by the data

SDIRK 2:
(3 + √3)/6 | (3 + √3)/6   0
(3 − √3)/6 | −√3/3        (3 + √3)/6
           | 1/2          1/2

SDIRK 5:
1/4   | 1/4        0           0        0       0
3/4   | 1/2        1/4         0        0       0
11/20 | 17/50      −1/25       1/4      0       0
1/2   | 371/1360   −137/2720   15/544   1/4     0
1     | 25/24      −49/48      125/16   −85/12  1/4
      | 25/24      −49/48      125/16   −85/12  1/4
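For the linear test problem y′ = λy the stage equations of a DIRK scheme can be solved one after another, since each stage only couples to itself on the diagonal. The sketch below (illustrative, not from the notes) runs the SDIRK 2 tableau above and checks its third order by halving the step size:

```python
import math

SQ3 = math.sqrt(3.0)
A_SDIRK2 = [[(3.0 + SQ3) / 6.0, 0.0],
            [-SQ3 / 3.0, (3.0 + SQ3) / 6.0]]
B_SDIRK2 = [0.5, 0.5]

def dirk_linear(lam, y0, T, n, A, b):
    """Integrate y' = lam*y on [0, T] with n steps of a DIRK scheme
    given by the Butcher data (A, b)."""
    tau, y = T / n, y0
    r = len(b)
    for _ in range(n):
        k = [0.0] * r                      # stage values Y_j
        for j in range(r):
            # Y_j = y + tau*lam*sum_l a_{jl} Y_l; only l = j is implicit
            rhs = y + tau * lam * sum(A[j][l] * k[l] for l in range(j))
            k[j] = rhs / (1.0 - tau * lam * A[j][j])
        y = y + tau * lam * sum(b[l] * k[l] for l in range(r))
    return y
```

Halving the step size should reduce the error by roughly 2³ = 8 for a third-order scheme.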
IV.1.4. Method of lines. This is the simplest discretization scheme for time-dependent partial differential equations.
We choose a partition T of the spatial domain Ω and associated finite element spaces X(T) and Y(T) for the velocity and pressure, respectively. These spaces must satisfy the inf-sup condition of §II.2.6 (p. 36). Then we replace in the variational formulation the velocities u, v and the pressures p, q by discrete functions u_T, v_T and p_T, q_T which depend upon time and which – for every time t – have their values in X(T) and Y(T), respectively. Next, we choose bases for the spaces X(T) and Y(T) and identify u_T and p_T with their coefficient vectors with respect to these bases. Then we obtain the following differential algebraic equation for u_T and p_T:

du_T/dt = f_T − νA_T u_T − B_T p_T − N_T(u_T)
B_Tᵀ u_T = 0.
Denoting by w_i, 1 ≤ i ≤ dim X(T), and r_j, 1 ≤ j ≤ dim Y(T), the basis functions of X(T) and Y(T), respectively, the coefficients of the stiffness matrices A_T and B_T and of the vectors f_T and N_T(u_T) are given by

A_{T,ij} = ∫_Ω ∇w_i(x) : ∇w_j(x) dx,
B_{T,ij} = −∫_Ω r_j(x) div w_i(x) dx,
f_{T,i} = ∫_Ω f(x, t)·w_i(x) dx,
N_T(u_T)_i = ∫_Ω [(u_T(x, t)·∇)u_T(x, t)]·w_i(x) dx.

Formally, this problem can be cast in the form (IV.1.1) by working with test functions v_T in the space

V(T) = { v_T ∈ X(T) : ∫_Ω q_T div v_T dx = 0 for all q_T ∈ Y(T) }

of discretely solenoidal functions. The function F in (IV.1.1) is then given by

F(y, t) = f_T − νA_T y − N_T(y).
If we apply the θ-scheme and denote by u^i_T and p^i_T the approximate values of u_T and p_T at time t_i, we obtain the fully discrete scheme

u⁰_T = I_T u₀
(u^i_T − u^{i−1}_T)/τ_i = −B_T p^i_T + θ [ f_T(t_i) − νA_T u^i_T − N_T(u^i_T) ]
                          + (1 − θ) [ f_T(t_{i−1}) − νA_T u^{i−1}_T − N_T(u^{i−1}_T) ]
B_Tᵀ u^i_T = 0,  1 ≤ i ≤ N,
where I_T : H¹₀(Ω)ⁿ → X(T) denotes a suitable interpolation operator (cf. §I.2.11 (p. 26)).
The equation for u^i_T, p^i_T can be written in the form

(1/τ_i) u^i_T + θνA_T u^i_T + B_T p^i_T + θN_T(u^i_T) = g^i
B_Tᵀ u^i_T = 0

with

g^i = (1/τ_i) u^{i−1}_T + θ f_T(t_i) + (1 − θ) [ f_T(t_{i−1}) − νA_T u^{i−1}_T − N_T(u^{i−1}_T) ].

Thus it is a discrete version of a stationary incompressible Navier-Stokes equation. Hence the methods of §III.2 (p. 89) can be used for its solution.
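A minimal method-of-lines sketch (illustrative, not from the notes) for the linear heat equation u_t = ν u_xx on (0, 1) with homogeneous Dirichlet data: here A_T is replaced by the 1D finite difference Laplacian (so the mass matrix is the identity), and each step solves (I/τ + θνA)u^i = (I/τ − (1 − θ)νA)u^{i−1}.

```python
import numpy as np

def heat_theta(nu=1.0, n=100, tau=1e-3, T=0.1, theta=0.5):
    """theta-scheme for u_t = nu*u_xx on (0,1), u(0)=u(1)=0, with
    initial value sin(pi*x); returns (interior grid, solution at T)."""
    h = 1.0 / n
    x = np.linspace(0.0, 1.0, n + 1)[1:-1]        # interior nodes
    m = n - 1
    A = (2.0 * np.eye(m) - np.diag(np.ones(m - 1), 1)
         - np.diag(np.ones(m - 1), -1)) / h**2
    u = np.sin(np.pi * x)
    left = np.eye(m) / tau + theta * nu * A       # implicit part
    right = np.eye(m) / tau - (1.0 - theta) * nu * A  # explicit part
    for _ in range(round(T / tau)):
        u = np.linalg.solve(left, right @ u)
    return x, u
```

The exact solution decays like exp(−νπ²t) sin(πx), which the Crank-Nicolson choice θ = 1/2 reproduces to second order in τ.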
Due to the condition number of O(h⁻²) of the stiffness matrix A_T, one must choose the parameter θ ≥ 1/2. Due to the singularity of the solution of the Navier-Stokes equations at time 0, it is recommended to first perform a few implicit Euler steps, i.e. θ = 1, and then to switch to the Crank-Nicolson scheme, i.e. θ = 1/2.
The main drawback of the method of lines is the fixed (w.r.t. time) spatial mesh. Thus singularities which move through the domain Ω in the course of time cannot be resolved adaptively. This either leads to a very poor approximation or to an enormous overhead due to an excessively fine spatial mesh.
IV.1.5. Rothe's method. In the method of lines we first discretize in space and then in time. This order is reversed in Rothe's method. We first apply a θ-scheme or a Runge-Kutta method to the instationary Navier-Stokes equations. This leaves us in every time-step with a stationary Navier-Stokes equation.
For the θ-scheme, e.g., we obtain the problems

(1/τ_i) u^i − νθ∆u^i + θ(u^i·∇)u^i + grad p^i
  = θ f(·, t_i) + (1/τ_i) u^{i−1} + (1 − θ){ f(·, t_{i−1}) + ν∆u^{i−1} − (u^{i−1}·∇)u^{i−1} } in Ω
div u^i = 0 in Ω
u^i = 0 on Γ.

The stationary Navier-Stokes equations are then discretized in space.
The main difference to the method of lines is the possibility to choose a different partition and spatial discretization at each time level. This obviously has the advantage that we may apply an adaptive mesh refinement on each time level separately. The main drawback of Rothe's method is the lack of a mathematically sound step-size control for the temporal discretization. If we always use the same spatial mesh and the same spatial discretization, Rothe's method of course yields the same discrete solution as the method of lines.
IV.1.6. Space-time finite elements. This approach circumvents the drawbacks of the method of lines and of Rothe's method. When combined with a suitable space-time adaptivity as described in the next section, it can be viewed as a Rothe's method with a mathematically sound step-size control in space and time.
To describe the method we choose as in §IV.1.3 (p. 100) an integer N ≥ 1 and intermediate times 0 = t₀ < t₁ < … < t_N = T and set τ_i = t_i − t_{i−1}, 1 ≤ i ≤ N. With each time t_i we associate a partition T_i of the spatial domain Ω and corresponding finite element spaces X(T_i) and Y(T_i) for the velocity and pressure, respectively. These spaces must satisfy the inf-sup condition of §II.2.6 (p. 36). We denote by u^i_T ∈ X(T_i) and p^i_T ∈ Y(T_i) the approximations of the velocity and pressure, respectively, at time t_i. The discrete problem is then given by:
Find u⁰_T ∈ X(T₀) such that

∫_Ω u⁰_T·v⁰_T dx = ∫_Ω u₀·v⁰_T dx

for all v⁰_T ∈ X(T₀) and determine u^i_T ∈ X(T_i) and p^i_T ∈ Y(T_i) for i = 1, …, N successively such that

(1/τ_i) ∫_Ω u^i_T·v^i_T dx + θν ∫_Ω ∇u^i_T : ∇v^i_T dx − ∫_Ω p^i_T div v^i_T dx + Θ ∫_Ω [(u^i_T·∇)u^i_T]·v^i_T dx
  = (1/τ_i) ∫_Ω u^{i−1}_T·v^i_T dx + θ ∫_Ω f(x, t_i)·v^i_T dx + (1 − θ) ∫_Ω f(x, t_{i−1})·v^i_T dx
    − (1 − θ)ν ∫_Ω ∇u^{i−1}_T : ∇v^i_T dx − (1 − Θ) ∫_Ω [(u^{i−1}_T·∇)u^{i−1}_T]·v^i_T dx

∫_Ω q^i_T div u^i_T dx = 0

holds for all v^i_T ∈ X(T_i) and q^i_T ∈ Y(T_i).

The parameter θ is chosen either equal to 1/2 or equal to 1. The parameter Θ is chosen either equal to θ or equal to 1.
Note that u⁰_T is the L² projection of u₀ onto X(T₀) and that the right-hand side of the equation for u^i_T involves the L² projection of functions in X(T_{i−1}) onto X(T_i). Roughly speaking, these projections make the difference between the space-time finite elements and Rothe's method. Up to these projections one also recovers the method of lines when all partitions and finite element spaces are fixed with respect to time.
On each time level we have to solve a discrete version of a stationary Navier-Stokes equation. This is done with the methods of §III.2 (p. 89).
IV.1.7. The transport-diffusion algorithm. This method is based on the observation that due to the transport theorem of §I.1.3 (p. 8) the term

∂u/∂t + (u·∇)u

is the material derivative along the trajectories of the flow. Therefore

[ ∂u/∂t + (u·∇)u ](x, t)

is approximated by the backward difference

[ u(x, t) − u(y(t − τ), t − τ) ] / τ,

where s ↦ y(s) is the trajectory which passes at time t through the point x. The remaining terms are approximated in a standard finite element way.
To describe the algorithm more precisely, we retain the notations of the previous subsection and introduce in addition on each partition T_i a quadrature formula

∫_Ω φ dx ≈ Σ_{x∈Q_i} μ_x φ(x)

which is exact at least for piecewise constant functions. The most important examples are given by:

Q_i the barycentres x_K of the elements in T_i,  μ_{x_K} = |K|;
Q_i the vertices x in T_i,  μ_x = |ω_x|;
Q_i the midpoints x_E of the edges in T_i,  μ_{x_E} = |ω_E|.

Here, |ω| denotes the area, if n = 2, or the volume, if n = 3, of a set ω ⊂ Rⁿ, and ω_x and ω_E are the unions of all elements that share the vertex x or the edge E, respectively (cf. Figures I.2.2 (p. 24) and I.2.6 (p. 28)).
With these notations the transport-diffusion algorithm is given by:
Find u⁰_T ∈ X(T₀) such that

Σ_{x∈Q₀} μ_x u⁰_T(x)·v⁰_T(x) = Σ_{x∈Q₀} μ_x u₀(x)·v⁰_T(x)

for all v⁰_T ∈ X(T₀). For i = 1, …, N successively, solve the initial value problems (transport step)

dy_x(t)/dt = u^{i−1}_T(y_x(t), t) for t_{i−1} < t < t_i,
y_x(t_i) = x

for all x ∈ Q_i and find u^i_T ∈ X(T_i) and p^i_T ∈ Y(T_i) such that (diffusion step)

(1/τ_i) Σ_{x∈Q_i} μ_x u^i_T(x)·v^i_T(x) + θν ∫_Ω ∇u^i_T : ∇v^i_T dx − ∫_Ω p^i_T div v^i_T dx
  = (1/τ_i) Σ_{x∈Q_i} μ_x u^{i−1}_T(y_x(t_{i−1}))·v^i_T(x) + θ ∫_Ω f(x, t_i)·v^i_T dx
    + (1 − θ) ∫_Ω f(x, t_{i−1})·v^i_T dx − (1 − θ)ν ∫_Ω ∇u^{i−1}_T : ∇v^i_T dx

∫_Ω q^i_T div u^i_T dx = 0

holds for all v^i_T ∈ X(T_i) and q^i_T ∈ Y(T_i).

In contrast to the previous algorithms we now only have to solve discrete Stokes problems at the different time levels. This is the reward for the necessity to solve the initial value problems for the y_x and to evaluate functions in X(T_i) at the points y_x(t_{i−1}), which are not nodal degrees of freedom.
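The transport step can be illustrated in one dimension for pure advection u_t + a u_x = 0 with constant a (illustrative sketch, not from the notes): the characteristic through (x, t_i) is traced back to y_x(t_{i−1}) = x − aτ, and the old solution is evaluated there by piecewise linear interpolation, the 1D analogue of I_T u_T(y_K).

```python
import numpy as np

def semi_lagrangian_advection(a, u0, x, tau, nsteps):
    """Constant-coefficient advection u_t + a*u_x = 0 solved by
    characteristic backtracking and piecewise linear interpolation."""
    u = u0.copy()
    for _ in range(nsteps):
        y = x - a * tau                 # foot of the characteristic
        u = np.interp(y, x, u)          # evaluate the old solution at y_x
    return u
```

When a·τ is an exact multiple of the grid spacing the interpolation is exact and the scheme just shifts the profile; otherwise the interpolation introduces some numerical diffusion, which is the price for the unconditional stability of the transport step.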
IV.2. Space-time adaptivity

IV.2.1. Overview. When compared to adaptive discretizations of stationary problems, an adaptive discretization scheme for instationary problems requires some additional features:
• We need an error estimator which gives upper and lower bounds on the total error, i.e. on both the errors introduced by the spatial and the temporal discretization.
• The error estimator must allow a distinction between the temporal and the spatial part of the error.
• We need a strategy for adapting the time-step size.
• The refinement strategy for the spatial discretization must also allow the coarsening of elements in order to take account of singularities which in the course of time move through the spatial domain.
For the Navier-Stokes equations some special difficulties arise in addition:
• The velocities are not exactly solenoidal. This introduces an additional consistency error which must be properly estimated and balanced.
• The convection term may be dominant. Therefore the a posteriori error estimators must be robust, i.e. the ratio of upper and lower error bounds must stay bounded uniformly with respect to large values of Re‖u‖₁ or (1/ν)‖u‖₁.
IV.2.2. A residual a posteriori error estimator. The error
estimator now consists of two components: a spatial part and a temporal
one. The spatial part is a straightforward generalization of the
residual estimator of §III.2.2 (p. 90) for the stationary Navier-Stokes
equations. With the notations of §IV.1.6 (p. 104) the spatial contribution
on time-level i is given by

η_h^i = { Σ_{K∈T_i} h_K² ‖θ f(·, t_i) + (1 − θ) f(·, t_{i−1})
            − (u_T^i − u_T^{i−1})/τ_i
            + θν∆u_T^i + (1 − θ)ν∆u_T^{i−1} − ∇p_T^i
            − Θ(u_T^i · ∇)u_T^i − (1 − Θ)(u_T^{i−1} · ∇)u_T^{i−1}‖²_{L²(K)}
        + Σ_{K∈T_i} ‖div u_T^i‖²_{L²(K)}
        + Σ_{E∈E_{T_i}} h_E ‖[n_E · (θν∇u_T^i + (1 − θ)ν∇u_T^{i−1} − p_T^i I)]_E‖²_{L²(E)} }^{1/2}.

Recall that E_{T_i} denotes the set of all interior edges, if n = 2, respectively
interior faces, if n = 3, in T_i.
In order to obtain a robust estimator, we have to invest some work
in the computation of the temporal part of the estimator. For this
part we have to solve on each time-level the following discrete Poisson
equations:

Find ū_T^i ∈ S₀^{1,0}(T_i)^n such that

ν ∫_Ω ∇ū_T^i : ∇v_T^i dx
  = ∫_Ω [((Θ u_T^i + (1 − Θ) u_T^{i−1}) · ∇)(u_T^i − u_T^{i−1})] · v_T^i dx

holds for all v_T^i ∈ S₀^{1,0}(T_i)^n.
The temporal contribution of the error estimator on time-level i
then is given by

η_τ^i = { ν |u_T^i − u_T^{i−1}|₁² + ν |ū_T^i|₁²
        + Σ_{K∈T_i} h_K² ‖((Θ u_T^i + (1 − Θ) u_T^{i−1}) · ∇)(u_T^i − u_T^{i−1})
            + ν∆ū_T^i‖²_{L²(K)}
        + Σ_{E∈E_{T_i}} h_E ‖[ν∇ū_T^i]_E‖²_{L²(E)} }^{1/2}.
With these ingredients the residual a posteriori error estimator for
the instationary incompressible Navier-Stokes equations takes the form

{ Σ_{i=1}^N τ_i [ (η_h^i)² + (η_τ^i)² ] }^{1/2}.
IV.2.3. Time adaptivity. Assume that we have solved the discrete
problem up to time-level i − 1 and that we have computed the
error estimators η_h^{i−1} and η_τ^{i−1}. Then we set

t_i = min{T, t_{i−1} + τ_{i−1}}   if η_τ^{i−1} ≈ η_h^{i−1},
t_i = min{T, t_{i−1} + 2τ_{i−1}}  if η_τ^{i−1} ≤ ½ η_h^{i−1}.

In the first case we retain the previous time-step; in the second case
we try a larger time-step.
Next, we solve the discrete problem on time-level i with the current
value of t_i and compute the error estimators η_h^i and η_τ^i.
If η_τ^i ≈ η_h^i, we accept the current time-step and continue with the
space adaptivity, which is described in the next subsection.
If η_τ^i ≥ 2η_h^i, we reject the current time-step. We replace t_i by
½(t_{i−1} + t_i) and repeat the solution of the discrete problem on
time-level i and the computation of the error estimators.
The described strategy obviously aims at balancing the two contributions
η_h^i and η_τ^i of the error estimator.
IV.2.4. Space adaptivity. For time-dependent problems the spatial
adaptivity must also allow for a local mesh coarsening. Hence, the
marking strategies of §II.7.5 (p. 73) must be modified accordingly.
Assume that we have solved the discrete problem on time-level i
with an actual time-step τ_i and an actual partition T_i of the spatial
domain Ω and that we have computed the estimators η_h^i and η_τ^i.
Moreover, suppose that we have accepted the current time-step and want to
optimize the partition T_i.
We may assume that T_i currently is the finest partition in a hierarchy
T_i^0, …, T_i^ℓ of nested, successively refined partitions, i.e. T_i^ℓ = T_i
and T_i^j is a (local) refinement of T_i^{j−1}, 1 ≤ j ≤ ℓ.
Now, we go back m generations in the grid-hierarchy to the partition
T_i^{ℓ−m}. Due to the nestedness of the partitions, each element K ∈ T_i^{ℓ−m}
is the union of several elements K′ ∈ T_i. Each K′ gives a contribution
to η_h^i. We add these contributions and thus obtain for every K ∈ T_i^{ℓ−m}
an error estimator η_K. With these estimators we then perform M
steps of the marking strategy of §II.7.5 (p. 73). This yields a new
partition T_i^{ℓ−m+M} which usually is different from T_i. We replace T_i by
this partition, solve the corresponding discrete problem on time-level i
and compute the new error estimators η_h^i and η_τ^i.
If the newly calculated error estimators satisfy η_h^i ≈ η_τ^i, we accept
the current partition T_i and proceed with the next time-level.
If η_h^i ≥ 2η_τ^i, we successively refine the partition T_i as described
in §II.7 (p. 67) with the η_h^i as error estimators until we arrive at a
partition which satisfies η_h^i ≈ η_τ^i. When this goal is achieved we accept
the spatial discretization and proceed with the next time-level.
Typical values for the parameters m and M are 1 ≤ m ≤ 3 and
m ≤ M ≤ m + 2.
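The coarsening step — collecting the fine-grid contributions of the descendants of a coarse element — can be sketched as follows. The data layout (`children`, `eta_fine`) is an illustrative assumption; the contributions are combined in the root-sum-square sense, consistent with the definition of η_h^i as a square root of summed squared terms.

```python
# Sketch of the coarsening step: each element of the coarse partition
# T_i^{l-m} collects the estimator contributions of its descendants in
# the fine partition T_i.  `children` maps a coarse element to the fine
# elements it is the union of; `eta_fine` holds the elementwise
# contributions to eta_h^i.

import math

def coarse_estimators(children, eta_fine):
    """Return eta_K for every coarse element K as the root-sum-square
    of the contributions of its fine sub-elements."""
    return {K: math.sqrt(sum(eta_fine[k] ** 2 for k in kids))
            for K, kids in children.items()}
```

The resulting η_K then drive the M marking steps on the coarse partition.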
IV.3. Discretization of compressible and inviscid problems

IV.3.1. Systems in divergence form. In this section we consider
problems of the following form:

Given a domain Ω ⊂ R^n with n = 2 or n = 3, an integer m ≥ n, a
vector field g : R^m × Ω × (0, ∞) → R^m, a vector field M : R^m → R^m,
a tensor field F : R^m → R^{m×n}, and a vector field U_0 : Ω → R^m, we are
looking for a vector field U : Ω × (0, ∞) → R^m such that

∂M(U)/∂t + div F(U) = g(U, x, t)  in Ω × (0, ∞),
U(·, 0) = U_0  in Ω.
Such a problem, which has to be complemented with suitable boundary
conditions, is called a system (of differential equations) in
divergence form.
Note that the divergence is taken row-wise, i.e.

div F(U) = ( Σ_{j=1}^n ∂F(U)_{i,j}/∂x_j )_{1≤i≤m}.

The tensor field F is called the flux of the system. It is often split
into an advective flux F^adv, which contains no derivatives, and a
viscous flux F^visc, which contains spatial derivatives, i.e.

F = F^adv + F^visc.
Example IV.3.1. The compressible Navier-Stokes equations and the
Euler equations (with α = 0) of §I.1.9 (p. 13) and §I.1.10 (p. 13) fit
into this framework. For both equations we have

m = n + 2,  U = (ρ, v, e)^T,

M(U) = (ρ, ρv, e)^T,
F^adv(U) = (ρv, ρv ⊗ v + pI, ev + pv)^T,
g = (0, ρf, f · v)^T.

For the Navier-Stokes equations the viscous forces yield a viscous
flux

F^visc(U) = (0, T + pI, (T + pI)v + σ)^T.

These equations are complemented by the constitutive equations
for p, T, and σ given in §I.1.8 (p. 11).
IV.3.2. Finite volume schemes. Finite volume schemes are particularly
suited for the discretization of systems in divergence form.
To describe the idea in its simplest form, we choose a time-step
τ > 0 and a partition T of the domain Ω. The partition may consist of
arbitrary polyhedra. From §I.2.7 (p. 21) we only retain the condition
that the subdomains must not overlap.
In a first step, we fix an i and an element K ∈ T. Then we integrate
the system over the set K × [(i − 1)τ, iτ]. This yields

∫_{(i−1)τ}^{iτ} ∫_K ∂M(U)/∂t dx dt + ∫_{(i−1)τ}^{iτ} ∫_K div F(U) dx dt
  = ∫_{(i−1)τ}^{iτ} ∫_K g(U, x, t) dx dt.
Using the integration by parts formulae of §I.2.3 (p. 19), we obtain for
the left-hand side

∫_{(i−1)τ}^{iτ} ∫_K ∂M(U)/∂t dx dt
  = ∫_K M(U(x, iτ)) dx − ∫_K M(U(x, (i − 1)τ)) dx,

∫_{(i−1)τ}^{iτ} ∫_K div F(U) dx dt
  = ∫_{(i−1)τ}^{iτ} ∫_{∂K} F(U) · n_K dS dt,

where n_K denotes the exterior unit normal of K.
Now, we assume that U is piecewise constant with respect to space
and time. Denoting by U_K^i and U_K^{i−1} its constant values on K at times
iτ and (i − 1)τ respectively, we obtain

∫_K M(U(x, iτ)) dx ≈ |K| M(U_K^i),
∫_K M(U(x, (i − 1)τ)) dx ≈ |K| M(U_K^{i−1}),

where |K| denotes the area, if n = 2, respectively volume, if n = 3, of
the element K.
Next we approximate the flux term

∫_{(i−1)τ}^{iτ} ∫_{∂K} F(U) · n_K dS dt ≈ τ ∫_{∂K} F(U_K^{i−1}) · n_K dS.

The right-hand side is approximated by

∫_{(i−1)τ}^{iτ} ∫_K g(U, x, t) dx dt ≈ τ |K| g(U_K^{i−1}, x_K, (i − 1)τ),

where x_K denotes a point in K, which is fixed a priori, e.g. its
barycentre.
Finally, we approximate the boundary integral for the flux by a
numerical flux

τ ∫_{∂K} F(U_K^{i−1}) · n_K dS
  ≈ τ Σ_{K′∈T, ∂K∩∂K′∈E_T} |∂K ∩ ∂K′| F_T(U_K^{i−1}, U_{K′}^{i−1}),

where E_T denotes the set of all edges, if n = 2, respectively faces, if
n = 3, in T and where |∂K ∩ ∂K′| is the length respectively area of
the common edge or face of K and K′.
With these approximations the simplest finite volume scheme
for a system in divergence form is given by:

For every element K ∈ T compute

U_K^0 = (1/|K|) ∫_K U_0(x) dx.

For i = 1, 2, … successively compute for all elements K ∈ T

M(U_K^i) = M(U_K^{i−1})
  − τ Σ_{K′∈T, ∂K∩∂K′∈E_T} (|∂K ∩ ∂K′| / |K|) F_T(U_K^{i−1}, U_{K′}^{i−1})
  + τ g(U_K^{i−1}, x_K, (i − 1)τ).
For a concrete finite volume scheme we of course have to specify
• the partition T and
• the numerical flux F_T.
This will be the subject of the following sections.
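Before turning to these ingredients, the update formula can be illustrated in the simplest scalar setting: m = n = 1, M(U) = U, F(U) = aU, g = 0 on a uniform periodic mesh with the upwind numerical flux. All of these choices are illustrative assumptions, not fixed by the text.

```python
# Minimal illustration of the finite volume update for a 1D scalar
# conservation law u_t + (a u)_x = 0 (so M(U) = U, F(U) = aU, g = 0)
# on a uniform periodic mesh with cell size h, using the upwind
# numerical flux for a > 0.

def fv_step(U, a, tau, h):
    """One explicit finite volume step with upwind flux (a > 0)."""
    n = len(U)
    # flux through the left face of cell k: F_T = a * U_{k-1} (upwind)
    flux = [a * U[(k - 1) % n] for k in range(n)]
    # U_k^i = U_k^{i-1} - tau/h * (F_{k+1/2} - F_{k-1/2})
    return [U[k] - tau / h * (flux[(k + 1) % n] - flux[k])
            for k in range(n)]

U_new = fv_step([1.0, 0.0, 0.0, 0.0], a=1.0, tau=0.5, h=1.0)
```

Note that the update only ever combines cell averages of neighbouring volumes through the numerical flux, and that the total mass Σ_K |K| U_K is conserved when g = 0.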
Remark IV.3.2. In practice one works with a variable time-step and
variable partitions. To this end one chooses an increasing sequence 0 =
t_0 < t_1 < t_2 < … of times and associates with each time t_i a partition
T_i of Ω. Then τ is replaced by τ_i = t_i − t_{i−1}, and K and K′ are elements
in T_{i−1} or T_i. Moreover one has to furnish an interpolation operator
which maps piecewise constant functions with respect to one partition
to piecewise constant functions with respect to another partition.
IV.3.3. Construction of the partitions. An obvious possibility
is to construct a finite volume partition T in the same way as a finite
element partition. In practice, however, one prefers so-called dual
meshes.
To describe the idea, we consider the two-dimensional case, i.e.
n = 2. We start from a standard finite element partition T̄ which
satisfies the conditions of §I.2.7 (p. 21). Then we subdivide each
element K̄ ∈ T̄ into smaller elements by either
• drawing the perpendicular bisectors at the midpoints of edges
of K̄ (cf. Figure IV.3.1) or by
• connecting the barycentre of K̄ with its midpoints of edges (cf.
Figure IV.3.2).
Then the elements in T consist of the unions of all small elements that
share a common vertex in the partition T̄.
Figure IV.3.1. Dual mesh (thick lines) via perpendicular
bisectors of primal mesh (thin lines)
Figure IV.3.2. Dual mesh (thick lines) via barycentres
of primal mesh (thin lines)
Thus the elements in T can be associated with the vertices in N_T.
Moreover, we may associate with each edge in E_T exactly two vertices
in N_T such that the line connecting these vertices intersects the given
edge.
The first construction has the advantage that this intersection is
orthogonal. But this construction also has some disadvantages which
are not present with the second construction:
• The perpendicular bisectors of a triangle may intersect in a
point outside the triangle. The intersection point is within
the triangle only if its largest angle is at most a right one.
• The perpendicular bisectors of a quadrilateral may not intersect
at all. They intersect in a common point inside the quadrilateral
only if it is a rectangle.
• The first construction has no three-dimensional analogue.
IV.3.4. Relation to finite element methods. The fact that
the elements of a dual mesh can be associated with the vertices of a
finite element partition gives a link between finite volume and finite
element methods:
Consider a function ϕ that is piecewise constant on the dual mesh T,
i.e. ϕ ∈ S^{0,−1}(T). With ϕ we associate a continuous piecewise linear
function Φ ∈ S^{1,0}(T̄) corresponding to the finite element partition
T̄ such that Φ(x_K) = ϕ_K for the vertex x_K ∈ N_T corresponding to
K ∈ T.
This link sometimes considerably simplifies the analysis of finite
volume methods. For example it suggests a very simple and natural
approach to a posteriori error estimation and mesh adaptivity for finite
volume methods:
• Given the solution ϕ of the finite volume scheme, compute the
corresponding finite element function Φ.
• Apply a standard a posteriori error estimator to Φ.
• Given the error estimator, apply a standard mesh refinement
strategy to the finite element mesh T̄ and thus construct a
new, locally refined partition T̃.
• Use T̃ to construct a new dual mesh T′. This is the refinement
of T.
IV.3.5. Construction of the numerical fluxes. In order to
construct the numerical flux F_T(U_K^{i−1}, U_{K′}^{i−1}) corresponding to the
common boundary of two elements K and K′ in a finite volume partition,
we split ∂K ∩ ∂K′ into straight edges, if n = 2, or plane faces, if n = 3.
Consider such an edge respectively face E. To simplify the notation,
we denote the adjacent elements by K_1 and K_2 and write U_1 and U_2
instead of U_{K_1}^{i−1} and U_{K_2}^{i−1}, respectively.
We assume that T is the dual mesh to a finite element partition T̄
as described in §IV.3.3 (p. 113). Denote by x_1 and x_2 the two vertices
in N_T such that their connecting line intersects E (cf. Figure IV.3.3).
In order to construct the contribution of E to the numerical flux
F_T(U_1, U_2), we split the flux into an advective part F_{T,adv}(U_1, U_2) and
a viscous part F_{T,visc}(U_1, U_2) similar to the splitting of the analytical
flux F.
For the viscous part of the numerical flux we introduce a local
coordinate system η_1, …, η_n such that the direction η_n is parallel to
the direction x_1 x_2 and such that the other directions are tangential to
E. Note that in general x_1 x_2 will not be orthogonal to E. Then we
Figure IV.3.3. Examples of a common edge E (thick
lines) of two volumes and corresponding nodes x_1 and x_2
of the underlying primal mesh together with their connecting
line (thin lines)
express all derivatives in F^visc in terms of the new coordinate system,
suppress all derivatives not involving η_n, and approximate derivatives
with respect to η_n by difference quotients of the form

(ϕ_1 − ϕ_2) / ‖x_1 − x_2‖_2.

Here, ‖x_1 − x_2‖_2 denotes the Euclidean distance of the two points
x_1 and x_2.
For the advective part of the numerical flux we need some additional
notations. Consider an arbitrary vector V ∈ R^m. Denote by
D(F^adv(V) · n_{K_1}) ∈ R^{m×m} the derivative of F^adv(V) · n_{K_1} with
respect to V. One can prove that for many systems in divergence form
– including the compressible Navier-Stokes and Euler equations – this
matrix can be diagonalized, i.e. there is an invertible m×m matrix
Q(V) and a diagonal m×m matrix ∆(V) such that

Q(V)^{−1} D(F^adv(V) · n_{K_1}) Q(V) = ∆(V).

The entries of ∆(V) are of course the eigenvalues of D(F^adv(V) · n_{K_1}).
For any real number z we set

z^+ = max{z, 0},  z^− = min{z, 0}
and define

∆(V)^± = diag(∆(V)^±_{11}, ∆(V)^±_{22}, …, ∆(V)^±_{mm})

and

C(V)^± = Q(V) ∆(V)^± Q(V)^{−1}.
With these notations, the Steger-Warming approximation
of the advective flux is given by

F_T(U_1, U_2) = C(U_1)^+ U_1 + C(U_2)^− U_2.
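The eigenvalue splitting behind this approximation can be sketched as follows. The sketch assumes the diagonalization D(F^adv(V)·n) = Q diag(λ) Q^{−1} is already known and supplied explicitly; the plain 2×2 matrices and all concrete numbers are illustrative, not tied to a particular system.

```python
# Sketch of the splitting C^± = Q diag(lambda^±) Q^{-1} and of the
# resulting Steger-Warming flux.  Matrices are plain nested lists to
# keep the sketch dependency-free.

def matmul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(len(B)))
             for j in range(len(B[0]))] for i in range(len(A))]

def matvec(A, x):
    return [sum(A[i][j] * x[j] for j in range(len(x))) for i in range(len(A))]

def c_pm(Q, lam, Qinv):
    """C^+ and C^- with C^± = Q diag(lam^±) Q^{-1}."""
    m = len(lam)
    def build(part):
        D = [[part(lam[i]) if i == j else 0.0 for j in range(m)]
             for i in range(m)]
        return matmul(matmul(Q, D), Qinv)
    return build(lambda z: max(z, 0.0)), build(lambda z: min(z, 0.0))

def steger_warming(C1, C2, U1, U2):
    """F_T(U1, U2) = C(U1)^+ U1 + C(U2)^- U2, where each C(U_k) is
    given as a (Q, lam, Qinv) triple of its diagonalization."""
    Cp, _ = c_pm(*C1)
    _, Cm = c_pm(*C2)
    return [a + b for a, b in zip(matvec(Cp, U1), matvec(Cm, U2))]
```

Since C^+ + C^− = C, the flux reduces to the exact advective flux C(U)U whenever U_1 = U_2 = U; the splitting upwinds each characteristic field according to the sign of its eigenvalue.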
Another popular numerical flux is the van Leer approximation,
which is given by

F_T(U_1, U_2)
  = ½ [C(U_1) + C(½(U_1 + U_2))^+ − C(½(U_1 + U_2))^−] U_1
  + ½ [C(U_2) − C(½(U_1 + U_2))^+ + C(½(U_1 + U_2))^−] U_2.
In both schemes one has to compute the derivatives D(F^adv(V) · n_{K_1})
and their eigenvalues and eigenvectors for suitable values
of V. At first sight the van Leer scheme seems to be more costly
than the Steger-Warming scheme since it requires three evaluations
of C(V) instead of two. For the compressible Navier-Stokes and Euler
equations, however, this can be reduced to one evaluation, since for
these equations F^adv(V) · n_{K_1} = C(V)V holds for all V.
Summary

I Fundamentals

1. Modelization
Lagrangian and Eulerian coordinate systems; Lagrangian coordinate
system moves with the fluid; Eulerian coordinate system is
fixed; velocity; transport theorem; conservation of mass; Cauchy
theorem; conservation of momentum; exterior forces; interior forces;
total energy; conservation of energy; constitutive laws; equation of
state; deformation tensor; dynamic viscosities; compressible Navier-Stokes
equations in conservative form; Euler equations; compressible
Navier-Stokes equations in non-conservative form; instationary
incompressible Navier-Stokes equations; incompressible ⇐⇒
div v = 0; kinematic viscosity; Reynolds' number; stationary incompressible
Navier-Stokes equations; Stokes equations; initial and
boundary conditions; no-slip boundary condition; slip boundary
condition
2. Notations and basic results
Notations for domains, functions, vector fields, tensors and derivatives;
differentiation of products; integration by parts formulae;
closure of a set; support of a function; C₀^∞(Ω); weak derivatives;
classical and weak derivatives coincide for smooth functions; weak
derivative of |x|; Sobolev spaces and norms; trace spaces; a piecewise
smooth function is in H¹ iff it is continuous; H¹ functions
do not admit point values; Friedrichs and Poincaré inequalities;
finite element partitions; affine equivalence; admissibility; shape
regularity; in two dimensions: shape regularity ⇐⇒ smallest angle
bounded away from zero; triangles and parallelograms may be
mixed; polynomial spaces R_k(K); finite element spaces S^{k,−1}(T),
S^{k,0}(T), S₀^{k,0}(T); approximation properties; nodal shape functions;
representation of nodal shape functions in terms of element vertices;
hierarchical basis; element bubble functions and their properties;
edge and face bubble functions and their properties; jumps across
edges and faces
II Stationary linear problems

1. Discretization of the Stokes equations. A first attempt
Variational formulation of the Poisson equation and standard finite
element discretization; variational formulation of the Stokes
equations using solenoidal velocities; Courant triangulation; approximation
with continuous, piecewise linear, solenoidal velocities
does not work; discretization with continuous, piecewise polynomial,
solenoidal velocities requires a polynomial degree ≥ 5; remedies:
mixed methods, Petrov-Galerkin stabilization, nonconforming
methods, streamfunction formulation
2. Mixed finite element discretizations of the Stokes equations
Mixed variational formulation of the Stokes equations; relation to
a constrained minimization problem; velocity is a minimizer, pressure
is a maximizer; existence and uniqueness of a solution of the
mixed variational problem; continuous dependence on the force
term; general structure of a mixed finite element discretization of
the Stokes equations; discrete mass equation does not imply that
discrete velocity is solenoidal; piecewise continuous linear velocities
and piecewise constant pressures on a triangular grid do not
work; structure of the discrete mixed problem; necessary condition:
dim X(T) ≥ dim Y(T); checkerboard instability; inf-sup condition;
stable elements; adding edge/face bubble functions to the
velocity space stabilizes the Q1/Q0 element; the pair S₀^{n,0}(T)^n,
S^{0,−1}(T) ∩ L₀²(Ω) is stable; adding element bubble functions to the
velocity space stabilizes the pair S₀^{m,0}(T)^n, S^{m−1,−1}(T) ∩ L₀²(Ω);
the pair S₀^{n+m−1,0}(T)^n, S^{m−1,−1}(T) ∩ L₀²(Ω) is stable; the mini
element, the Taylor-Hood element and the modified Taylor-Hood
element are stable; the pair S₀^{k,0}(T)^n, S^{k−1,0}(T) ∩ L₀²(Ω) is stable
for every k ≥ 3; a priori error estimates; polynomial degree of
the pressure one less than the polynomial degree of the velocity is
optimal
3. Petrov-Galerkin stabilization
Splitting the velocity of the mini element in a linear and a bubble
part yields a P1/P1 discretization of a modified Stokes problem
with a consistent penalty; general form of Petrov-Galerkin discretizations;
choice of stabilization parameters: δ_max ≈ δ_min; choice
of spaces: polynomial degree of pressure one less than polynomial
degree of velocity; structure of the discrete problems
4. Nonconforming methods
Aim: relax the continuity condition for the discrete velocities;
Crouzeix-Raviart element and its properties; a priori error estimates;
drawbacks: difficulties in the presence of re-entrant corners,
no higher-order equivalent, no three-dimensional equivalent; construction
of a local solenoidal basis; computation of the velocity by
solving a symmetric positive definite system with condition O(h^{−4});
computation of the pressure by a simple post-processing
5. Streamfunction formulation
Aim: obtain an exactly solenoidal conforming velocity approximation;
curl operators; chain rules; integration by parts formula;
v : Ω → R² solenoidal ⇐⇒ ∃ streamfunction ϕ with v = curl ϕ;
Stokes equations equivalent to the biharmonic equation for
the streamfunction; biharmonic equation only yields the velocity,
there is no constructive way to compute the pressure; variational
formulation of the biharmonic equation; a conforming discretization
requires a polynomial degree of at least 5; nonconforming discretization
using the Morley element; relation between the Morley element
discretization of the biharmonic equation and the Crouzeix-Raviart
element discretization of the Stokes equations; vorticity; saddle-point
formulation of the biharmonic equation and corresponding
mixed finite element discretization; a priori error estimate
6. Solution of the discrete problems
General structure of the discrete problem; stiffness matrix is indefinite
⇒ standard iterative methods such as CG cannot be applied;
indefiniteness cannot be circumvented; the Uzawa algorithm;
Uzawa is very simple but very slow; Uzawa falls in the category
of pressure correction schemes; the CG algorithm and its properties;
the improved Uzawa algorithm; idea: eliminate the velocity
from the discrete Stokes equations, apply a CG algorithm to
the resulting problem for the pressure and compute velocities approximately
by applying a standard multigrid algorithm to the
corresponding discrete Poisson problems; convergence rate is independent
of the mesh-size; multigrid algorithm; basic ingredients:
smoothing, coarse grid correction, restriction, prolongation; V- and
W-cycles; smoothing methods: symmetric Gauss-Seidel, squared
Richardson, Vanka methods; prolongation; restriction; stabilized
bi-conjugate gradient algorithm for non-symmetric and indefinite
problems
7. A posteriori error estimation and adaptive grid refinement
A priori error estimates only give information on the asymptotic
behavior of the error, they give no information on its actual size
nor on its spatial distribution; general structure of the adaptive
algorithm; ingredients: a posteriori error estimator, marking strategy,
regular refinement, additional refinement; a residual error estimator
for the Stokes equations and its ingredients; global upper and local
lower bounds on the error; reliability and efficiency; error estimators
based on the solution of auxiliary discrete local Stokes problems
with Neumann and Dirichlet boundary conditions; equivalence of
the estimators; maximum and equilibration strategies for marking
elements; regular subdivision of triangles, parallelograms and parallelepipeds
by joining the midpoints of edges; regular subdivision
of tetrahedra requires an additional device for handling the double
pyramid in the interior; green, blue and purple refinement of elements;
additional rules for avoiding too obtuse or too acute elements;
required data structures
III Stationary nonlinear problems

1. Discretization of the stationary Navier-Stokes equations
Variational formulation; Stokes operator; fixed-point formulation;
properties: at least one solution, exactly one solution for sufficiently
small Reynolds' number, solution has the same regularity as the one
of the corresponding Stokes problem, solution is differentiable w.r.t.
the Reynolds' number, at most finitely many turning and bifurcation
points; finite element discretization; finite element spaces must
be stable; discrete Stokes operator; fixed-point formulation of the
discrete problem; discrete problem has properties similar to the analytical
problem; symmetric formulation of the nonlinear term; a
priori error estimates; a one-dimensional example; necessity of true
upwinding with upwind direction depending on the solution; upwind
difference approximation of the convective term; streamline
diffusion method; idea: add additional viscosity in streamline direction,
simultaneous stabilization of convective term and divergence
constraint
2. Solution of the discrete nonlinear problems
Fixed-point iteration, convergence rate 1 − Re² ‖f‖₀ ⇒ practicable
only for small Reynolds' number; Newton iteration, solve in each
step modified Stokes problems with convection; path tracking, step
length control; nonlinear CG-algorithm; operator splitting, idea:
treat nonlinearity and divergence constraint separately; nonlinear
multigrid algorithms, similar to multigrid for linear problems, only
change smoothing
3. Adaptivity for nonlinear problems
General structure of the adaptive algorithm for nonlinear problems
is the same as for linear ones, only the error estimators change; residual
error estimator: add nonlinear convection term to the element
residuals; error estimators based on the solution of auxiliary local
discrete problems: discrete problems remain linear, nonlinearity
only enters in the right-hand side
IV Instationary problems

1. Discretization of the instationary Navier-Stokes equations
Variational formulation; variational formulation admits at least one
solution, in two dimensions the solution is unique, in three dimensions
uniqueness can only be proved within a restricted class of
more regular functions in which existence cannot be guaranteed;
any solution behaves like √t for small times; numerical methods for
the solution of ordinary differential equations: θ-scheme, explicit
and implicit Euler method, Crank-Nicolson scheme, Runge-Kutta
methods, SDIRK schemes, implicit methods are mandatory for the
time-discretization of parabolic problems, SDIRK schemes are particularly
well suited; method of lines: in each time-step a discrete
stationary Navier-Stokes problem must be solved, implicit time-discretization
is mandatory, main drawback is the fixed (w.r.t. time)
spatial mesh; Rothe's method: first discretize in time and then in
space, leads to a stationary Navier-Stokes problem in each time step,
allows to use varying space discretizations in time, main drawback
is missing mathematically sound control of the temporal step-size;
space-time finite elements: choose different spatial meshes and finite
element spaces at each time step, allows a simultaneous spatial
and temporal adaptivity; transport-diffusion algorithm: temporal
derivative and convection term are simultaneously discretized by
backward differences along trajectories, only Stokes problems must
be solved in each time step, but trajectories must be computed at
each time step for every node of a quadrature formula
2. Space-time adaptivity
Required additional features: upper and lower bounds on spatial
and temporal error, possibility for coarsening the spatial mesh, distinction
between spatial and temporal error, bounds must be uniform
w.r.t. the Reynolds' number; residual error estimator, spatial
and temporal contribution; time adaptivity; space adaptivity
3. Discretization of compressible and inviscid problems
Systems in divergence form; flux, splitting into advective and viscous
part; compressible Navier-Stokes and Euler equations as systems
in divergence form; finite volume schemes: partitions, numerical
fluxes; dual meshes, construction via perpendicular bisectors
or via barycentres; relation to finite element methods; construction
of the numerical fluxes: splitting into viscous and advective part,
Steger-Warming scheme, van Leer scheme
Index
, 18

k
, 20
1
2
,Γ
, 21
[[
1
, 22
[[
k
, 20
[[
∞
, 23
⊗, 10
⊗, 18
∂, 9
:, 18
x
α
, 23
A, 19
c
T
, 28
c
x
, 47
A
T
, 23
T , 21
C
∞
0
(Ω), 19
curl, 49
curl, 49
∂
α
1
+...+α
n
∂x
α
1
1
...∂x
α
n
n
, 19
∆, 18
Γ, 18
H
1
0
(Ω), 20
H
1
2
(Γ), 20
H
k
(Ω), 20
H
2
0
(Ω), 50
K, 21
L
2
0
(Ω), 20
NE
0
, 46
NT, 46
NV
0
, 46
Ω, 18
P
, 38
R
T
, 26
Re, 16
R
k
(K), 23
S
k,−1
, 23
S
k,0
, 23
S
k,0
0
, 23
T, 79
T
T
, 81
D, 12
F
adv
, 111
F
visc
, 111
I, 12
T, 9
V , 29
V (T ), 46
n, 9
n
E
, 28
n
E,x
, 47
t
E
, 46
t
E,x
, 47
v, 7
w
E
, 46
w
x
, 47
δ
max
, 44
δ
min
, 44
div, 18
η
D,K
, 72
η
K
, 68
η
N,K
, 71
h, 22
h
T
, 22
h
K
, 22
[]
E
, 28
λ, 12
λ
x
, 24
µ, 12
∇, 18
ν, 15
ω
x
, 24
[ω
x
[, 26
p, 12
ψ
E
, 28
ψ
K
, 27
ρ
K
, 22
supp, 19
a posteriori error estimation, 68
a posteriori error estimator, 68
a posteriori error indicator, 68
additional refinement, 68
admissibility, 22
advective flux, 111
affine equivalence, 22
Babuška-Brezzi condition, 37
Babuška, Ivo, 37
BiCGstab, 65
biharmonic equation, 50
blue element, 75
boundary, 9
Brezzi, Franco, 36
Cauchy theorem, 9
CG algorithm, 55
checkerboard instability, 36
checkerboard mode, 35
coarse grid correction, 59
compressible Navier-Stokes equations in conservative form, 13
compressible Navier-Stokes equations in non-conservative form, 14
conjugate gradient algorithm, 55
conservation of energy, 11
conservation of mass, 9
conservation of momentum, 10
consistent penalty, 43
Courant triangulation, 31
Crank-Nicolson scheme, 101
Crouzeix-Raviart element, 45
deformation tensor, 12, 18
differential algebraic equation, 102
diffusion step, 107
discrete Stokes operator, 81
divergence, 18
dual mesh, 113
dyadic product, 18
dynamic viscosities, 12
edge bubble function, 28
efficient, 70
element, 21
element bubble function, 27
equal order interpolation, 40
equation of state, 12
equilibration strategy, 73
error estimator, 68
error indicator, 68
Euler equations, 13
Eulerian coordinate, 7
Eulerian representation, 7
Euler's formula, 46
explicit Euler scheme, 101
explicit Runge-Kutta scheme, 101
face bubble function, 28
finite volume scheme, 113
fixed-point iteration, 90
flux, 111
Friedrichs inequality, 21
Gauss-Seidel algorithm, 61
general adaptive algorithm, 68
gradient, 18
green element, 75
hanging node, 75
hierarchical basis, 26
higher order Taylor-Hood element, 39
ideal gas, 12
implicit Euler scheme, 101
implicit Runge-Kutta scheme, 101
improved Uzawa algorithm, 57
incompressible, 15
inf-sup condition, 37
inner product, 18
instationary incompressible Navier-Stokes equations, 15
internal energy, 12
Jacobian determinant, 7
Jacobian matrix, 7
kinematic viscosity, 15
Ladyzhenskaya-Babuška-Brezzi condition, 37
Ladyzhenskaya, Olga, 37
Lagrangian coordinate, 7
Lagrangian representation, 7
Laplace operator, 18
LBB condition, 37
marking strategy, 68
maximum strategy, 73
MG algorithm, 59
mini element, 38
mixed finite element discretization of the Stokes equations, 33
mixed variational formulation of the Stokes equations, 32
modified Taylor-Hood element, 38
Morley element, 51
multigrid algorithm, 59
Navier, Claude Louis Marie Henri, 17
Newton iteration, 91
no-slip condition, 17
nodal shape function, 24
nonlinear CG-algorithm of Polak-Ribière, 92
nonlinear Gauß-Seidel algorithm, 95
numerical flux, 112
operator splitting, 94
partition, 21
path tracking, 92
penalty parameter, 43
Poincaré inequality, 21
Poise, 12
post-smoothing, 59
pre-smoothing, 59
pressure, 12
pressure correction scheme, 55
prolongation operator, 59
purple element, 75
Q1/Q0 element, 35
quasi-interpolation operator, 26
red element, 74
reference force, 15
reference length, 15
reference pressure, 15
reference time, 15
reference velocity, 15
regular refinement, 68, 74
reliable, 70
Reynolds, Osborne, 16
residual error estimator, 68
restriction operator, 59
Reynolds' number, 16
robust error estimator, 108
Rothe's method, 104
Runge-Kutta scheme, 101
saddle-point problem, 33
SDIRK scheme, 101
shape regularity, 22
slip condition, 17
smoothing operator, 58
Sobolev space, 20
stabilized biconjugate gradient algorithm, 65
stable, 36
stable discretization, 36
stable element, 36
stable pair, 36
stage number, 101
stationary incompressible Navier-Stokes equations, 16
Steger-Warming approximation, 116
Steger-Warming scheme, 116
stiffness matrix, 29
Stokes, 12
Stokes equations, 16
Stokes, George Gabriel, 17
Stokes operator, 79
streamfunction, 49
streamline-diffusion discretization, 89
streamline-diffusion method, 88
strongly diagonal implicit Runge-Kutta scheme, 101
support, 19
surface element, 10
system in divergence form, 111
system of differential equations in divergence form, 111
Taylor-Hood element, 38
tensorial product, 18
θ-scheme, 100
total energy, 11
trace, 12, 21
trace space, 21
transport step, 106
transport theorem, 8
transport-diffusion algorithm, 106
unit outward normal, 9
unit tensor, 12, 18
upwind difference, 87
Uzawa algorithm, 54
V-cycle, 60
van Leer approximation, 117
van Leer scheme, 117
Vanka method, 62
velocity, 7
viscous flux, 111
vorticity, 52
W-cycle, 60
Contents

Chapter I. Fundamentals 7
I.1. Modelization 7
I.1.1. Lagrangian and Eulerian representation 7
I.1.2. Velocity 7
I.1.3. Transport theorem 8
I.1.4. Conservation of mass 9
I.1.5. Cauchy theorem 9
I.1.6. Conservation of momentum 10
I.1.7. Conservation of energy 11
I.1.8. Constitutive laws 11
I.1.9. Compressible Navier-Stokes equations in conservative form 13
I.1.10. Euler equations 13
I.1.11. Compressible Navier-Stokes equations in non-conservative form 13
I.1.12. Instationary incompressible Navier-Stokes equations 15
I.1.13. Stationary incompressible Navier-Stokes equations 16
I.1.14. Stokes equations 16
I.1.15. Initial and boundary conditions 17
I.2. Notations and basic results 18
I.2.1. Domains and functions 18
I.2.2. Differentiation of products 18
I.2.3. Integration by parts formulae 19
I.2.4. Weak derivatives 19
I.2.5. Sobolev spaces and norms 20
I.2.6. Friedrichs and Poincaré inequalities 21
I.2.7. Finite element partitions 21
I.2.8. Finite element spaces 22
I.2.9. Approximation properties 23
I.2.10. Nodal shape functions 23
I.2.11. A quasi-interpolation operator 26
I.2.12. Bubble functions 27

Chapter II. Stationary linear problems 29
II.1. Discretization of the Stokes equations. A first attempt 29
II.1.1. The Poisson equation revisited 29
II.1.2. A variational formulation of the Stokes equations 29
II.1.3. A naive discretization of the Stokes equations 30
II.1.4. Possible remedies 32
II.2. Mixed finite element discretizations of the Stokes equations 32
II.2.1. Saddle-point formulation of the Stokes equations 32
II.2.2. General structure of mixed finite element discretizations of the Stokes equations 33
II.2.3. A first attempt 34
II.2.4. A necessary condition for a well-posed mixed discretization 34
II.2.5. A second attempt 35
II.2.6. The inf-sup condition 36
II.2.7. A stable low-order element with discontinuous pressure approximation 37
II.2.8. Stable low-order elements with continuous pressure approximation 37
II.2.9. Stable higher-order elements with discontinuous pressure approximation 38
II.2.10. Stable higher-order elements with continuous pressure approximation 39
II.2.11. A priori error estimates 39
II.3. Petrov-Galerkin stabilization 40
II.3.1. Motivation 40
II.3.2. The mini element revisited 40
II.3.3. General form of Petrov-Galerkin stabilizations 43
II.3.4. Choice of stabilization parameters 44
II.3.5. Choice of spaces 44
II.3.6. Structure of the discrete problem 44
II.4. Non-conforming methods 44
II.4.1. Motivation 44
II.4.2. The Crouzeix-Raviart element 45
II.4.3. Construction of a local solenoidal bases 46
II.5. Streamfunction formulation 48
II.5.1. Motivation 48
II.5.2. The curl operators 49
II.5.3. Streamfunction formulation of the Stokes equations 49
II.5.4. Variational formulation of the biharmonic equation 50
II.5.5. A non-conforming discretization of the biharmonic equation 51
II.5.6. Mixed finite element discretizations of the biharmonic equation 52
II.6. Solution of the discrete problems 53
II.6.1. General structure of the discrete problems 53
II.6.2. The Uzawa algorithm 54
II.6.3. The conjugate gradient algorithm revisited 55
II.6.4. An improved Uzawa algorithm 57
II.6.5. The multigrid algorithm 58
II.6.6. Smoothing 61
II.6.7. Prolongation 62
II.6.8. Restriction 64
II.6.9. Variants of the CG-algorithm for indefinite problems 65
II.7. A posteriori error estimation and adaptive grid refinement 67
II.7.1. Motivation 67
II.7.2. General structure of the adaptive algorithm 68
II.7.3. A residual a posteriori error estimator 68
II.7.4. Error estimators based on the solution of auxiliary problems 70
II.7.5. Marking strategies 73
II.7.6. Regular refinement 74
II.7.7. Additional refinement 75
II.7.8. Required data structures 76

Chapter III. Stationary nonlinear problems 79
III.1. Discretization of the stationary Navier-Stokes equations 79
III.1.1. Variational formulation 79
III.1.2. Fixed-point formulation 79
III.1.3. Existence and uniqueness results 80
III.1.4. Finite element discretization 80
III.1.5. Fixed-point formulation of the discrete problem 81
III.1.6. Properties of the discrete problem 81
III.1.7. Symmetrization 82
III.1.8. A priori error estimates 82
III.1.9. A warning example 83
III.2. Upwind methods 87
III.3. The streamline-diffusion method 88
III.4. Solution of the discrete nonlinear problems 89
III.4.1. General structure 89
III.4.2. Fixed-point iteration 90
III.4.3. Newton iteration 91
III.4.4. Path tracking 91
III.4.5. A nonlinear CG-algorithm 92
III.4.6. Operator splitting 94
III.4.7. Nonlinear Gauß-Seidel algorithm 95
III.5. Adaptivity for nonlinear problems 95
III.5.1. General structure 95
III.5.2. A residual a posteriori error estimator 96
III.5.3. Error estimators based on the solution of auxiliary problems 96

Chapter IV. Instationary problems 99
IV.1. Discretization of the instationary Navier-Stokes equations 99
IV.1.1. Variational formulation 99
IV.1.2. Existence and uniqueness results 100
IV.1.3. Numerical methods for ordinary differential equations revisited 100
IV.1.4. Method of lines 102
IV.1.5. Rothe's method 104
IV.1.6. Space-time finite elements 104
IV.1.7. The transport-diffusion algorithm 105
IV.2. Space-time adaptivity 107
IV.2.1. Overview 107
IV.2.2. A residual a posteriori error estimator 108
IV.2.3. Time adaptivity 109
IV.2.4. Space adaptivity 110
IV.3. Discretization of compressible and inviscid problems 110
IV.3.1. Systems in divergence form 110
IV.3.2. Finite volume schemes 111
IV.3.3. Construction of the partitions 113
IV.3.4. Relation to finite element methods 115
IV.3.5. Construction of the numerical fluxes 115

Summary 119
Bibliography 123
Index 125
CHAPTER I

Fundamentals

I.1. Modelization

I.1.1. Lagrangian and Eulerian representation. Consider a volume Ω occupied by a fluid which moves under the influence of interior and exterior forces. We denote by η ∈ Ω the position of an arbitrary particle at time t = 0 and by x = Φ(η, t) its position at time t > 0. The basic assumptions are:
• Φ(·, t) : Ω → Ω is for all times t ≥ 0 a bijective differentiable mapping,
• its inverse mapping is differentiable,
• the transformation Φ(·, t) is orientation-preserving,
• Φ(·, 0) is the identity.
There are two ways to look at the fluid flow:
(1) We fix η and look at the trajectory t → Φ(η, t). This is the Lagrangian representation. Correspondingly η is called the Lagrangian coordinate. The Lagrangian coordinate system moves with the fluid.
(2) We fix the point x and look at the trajectory t → Φ(·, t)⁻¹(x) which passes through x. This is the Eulerian representation. Correspondingly x is called the Eulerian coordinate. The Eulerian coordinate system is fixed.

I.1.2. Velocity. We denote by DΦ = (∂Φ_i/∂η_j)_{1≤i,j≤3} the Jacobian matrix of Φ(·, t) and by J = det DΦ its Jacobian determinant. The preservation of orientation is equivalent to J(η, t) > 0 for all t > 0 and all η ∈ Ω. The velocity of the flow at the point x = Φ(η, t) is defined by
v(x, t) = ∂Φ/∂t(η, t),   x = Φ(η, t).

I.1.3. Transport theorem. Consider an arbitrary volume V ⊂ Ω and denote by V(t) = Φ(·, t)(V) = {Φ(η, t) : η ∈ V} its shape at time t > 0. Then the following transport theorem holds for all differentiable mappings f : Ω × (0, ∞) → R:

d/dt ∫_{V(t)} f(x, t) dx = ∫_{V(t)} { ∂f/∂t(x, t) + div(f v)(x, t) } dx.

Example I.1.1. We consider a one-dimensional model situation: Ω = (a, b) is an interval and x and η are real numbers. Then V = (α, β) and V(t) = (α(t), β(t)) are intervals, too. The transformation rule for integrals gives

∫_{α(t)}^{β(t)} f(x, t) dx = ∫_α^β f(Φ(η, t), t) ∂Φ/∂η(η, t) dη.

Differentiating this equality and using

∂/∂t ∂Φ/∂η(η, t) = ∂/∂η ∂Φ/∂t(η, t) = ∂/∂η v(Φ(η, t), t) = ∂v/∂x(Φ(η, t), t) ∂Φ/∂η(η, t),

we get

d/dt ∫_{V(t)} f(x, t) dx
= ∫_α^β { ∂f/∂t(Φ(η, t), t) + ∂f/∂x(Φ(η, t), t) v(Φ(η, t), t) } ∂Φ/∂η(η, t) dη
  + ∫_α^β f(Φ(η, t), t) ∂v/∂x(Φ(η, t), t) ∂Φ/∂η(η, t) dη
= ∫_{V(t)} { ∂f/∂t(x, t) + ∂/∂x (f v)(x, t) } dx.

This proves the transport theorem in one dimension.

I.1.4. Conservation of mass. We denote by ρ(x, t) the density of the fluid. Then

∫_{V(t)} ρ(x, t) dx

is the total mass of the volume V(t). The conservation of mass and the transport theorem therefore imply that

0 = d/dt ∫_{V(t)} ρ(x, t) dx = ∫_{V(t)} { ∂ρ/∂t(x, t) + div(ρv)(x, t) } dx.

This gives the equation for the conservation of mass:

∂ρ/∂t + div(ρv) = 0   in Ω × (0, ∞).

I.1.5. Cauchy theorem. Fluid and continuum mechanics are based on three fundamental assumptions concerning the interior forces:
• interior forces act via the surface of a volume V(t),
• interior forces are additive and continuous,
• interior forces only depend on the normal direction of the surface of the volume.
Due to the Cauchy theorem these assumptions imply that the interior forces acting on a volume V(t) must be of the form

∫_{∂V(t)} T · n dS

with a tensor T : Ω → R^{3×3}. Here, ∂V(t) denotes the boundary of the volume V(t), n is the unit outward normal, and dS denotes
the surface element. The integral theorem of Gauss then yields

∫_{∂V(t)} T · n dS = ∫_{V(t)} div T dx.

I.1.6. Conservation of momentum. The total momentum of a volume V(t) in the fluid is given by

∫_{V(t)} ρ(x, t) v(x, t) dx.

Due to the transport theorem its temporal variation equals

d/dt ∫_{V(t)} ρ(x, t) v(x, t) dx
= ( d/dt ∫_{V(t)} ρ(x, t) v_i(x, t) dx )_{1≤i≤3}
= ( ∫_{V(t)} { ∂(ρv_i)/∂t(x, t) + div(ρv_i v)(x, t) } dx )_{1≤i≤3}
= ∫_{V(t)} { ∂(ρv)/∂t(x, t) + div(ρv ⊗ v)(x, t) } dx,

where v ⊗ u = (v_i u_j)_{1≤i,j≤3}. Due to the conservation of momentum this quantity must be balanced by exterior and interior forces. Exterior forces must be of the form

∫_{V(t)} ρf dx.

The Cauchy theorem tells us that the interior forces are of the form

∫_{V(t)} div T dx.

Hence the conservation of momentum takes the integral form

∫_{V(t)} { ∂(ρv)/∂t + div(ρv ⊗ v) } dx = ∫_{V(t)} { ρf + div T } dx.

This gives the equation for the conservation of momentum:

∂(ρv)/∂t + div(ρv ⊗ v) = ρf + div T   in Ω × (0, ∞).
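The conservation laws above can be checked numerically in the one-dimensional setting of Example I.1.1. The sketch below assumes the hypothetical velocity field v(x, t) = x, so that the trajectories are Φ(η, t) = η e^t; the continuity equation ∂ρ/∂t + ∂(ρv)/∂x = 0 is then solved by ρ(x, t) = ρ₀(x e^{−t}) e^{−t}, and the mass contained in V(t) = (a e^t, b e^t) must be constant in time.

```python
import math

def rho0(x):
    # arbitrary smooth initial density (an illustrative choice)
    return 1.0 + 0.5 * math.sin(x)

def rho(x, t):
    # exact solution of rho_t + (rho*v)_x = 0 for v(x, t) = x
    return rho0(x * math.exp(-t)) * math.exp(-t)

def mass(a, b, t, n=20000):
    """Midpoint-rule approximation of the mass in V(t) = (a e^t, b e^t)."""
    at, bt = a * math.exp(t), b * math.exp(t)
    h = (bt - at) / n
    return h * sum(rho(at + (i + 0.5) * h, t) for i in range(n))

m0 = mass(0.0, 1.0, 0.0)
m1 = mass(0.0, 1.0, 1.0)
print(abs(m1 - m0))   # close to 0, up to quadrature error
```

The two masses agree because the moving volume V(t) carries exactly the particles that started in V, which is the content of the transport theorem applied to ρ.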
I.1.7. Conservation of energy. We denote by e the total energy. The transport theorem implies that its temporal variation is given by

d/dt ∫_{V(t)} e dx = ∫_{V(t)} { ∂e/∂t(x, t) + div(ev)(x, t) } dx.

Due to the conservation of energy this quantity must be balanced by the energies due to the exterior and interior forces and by the change of the internal energy. The contributions of the exterior and interior forces are given by

∫_{V(t)} ρf · v dx   and   ∫_{∂V(t)} n · T · v dσ = ∫_{V(t)} div(T · v) dx.

Due to the Cauchy theorem and the integral theorem of Gauss, the change of internal energy must be of the form

∫_{∂V(t)} σ · n dS = ∫_{V(t)} div σ dx

with a vector field σ : Ω → R³. Hence the conservation of energy takes the integral form

∫_{V(t)} { ∂e/∂t + div(ev) } dx = ∫_{V(t)} { ρf · v + div(T · v) + div σ } dx.

This gives the equation for the conservation of energy:

∂e/∂t + div(ev) = ρf · v + div(T · v) + div σ   in Ω × (0, ∞).

I.1.8. Constitutive laws. The equations for the conservation of mass, momentum and energy must be complemented by constitutive laws. These are based on the following fundamental assumptions:
• T only depends on the gradient of the velocity.
• The dependence on the velocity gradient is linear.
• T is symmetric. (Due to the Cauchy theorem this is a consequence of the conservation of angular momentum.)
• In the absence of internal friction, all interior forces act in normal direction, i.e. T is diagonal and proportional to the pressure.
• e = ρε + ½ρ|v|². (ε is called the internal energy and is often identified with the temperature.)
• σ is proportional to the variation of the internal energy, i.e. σ = α∇ε.
The conditions on T imply that it must be of the form

T = 2λD(v) + µ(div v)I − pI.

Here,

D(v) = ½(∇v + ∇vᵗ) = ( ½(∂v_i/∂x_j + ∂v_j/∂x_i) )_{1≤i,j≤3}

is the deformation tensor, I = (δ_ij)_{1≤i,j≤3} denotes the unit tensor, p = p(ρ, ε) denotes the pressure, and λ, µ ∈ R are the dynamic viscosities of the fluid. Note that div v is the trace of the deformation tensor D(v). The equation for the pressure is also called the equation of state. For an ideal gas, e.g., it takes the form

p(ρ, ε) = (γ − 1)ρε   with γ > 1.

The physical dimension of the dynamic viscosities λ and µ is kg m⁻¹ s⁻¹ and is called Poise; the one of the density ρ is kg m⁻³. In §I.1.12 (p. 15) we will introduce the kinematic viscosity ν = λ/ρ. Its physical dimension is m² s⁻¹ and is called Stokes. Table I.1.1 gives some relevant values of these quantities.

Table I.1.1. Viscosities of some fluids and gases

                  λ [kg m⁻¹ s⁻¹]   ρ [kg m⁻³]   ν [m² s⁻¹]
Water 20°C        1.005·10⁻³       1000         1.005·10⁻⁶
Alcohol 20°C      1.19·10⁻³        790          1.506·10⁻⁶
Ether 20°C        2.43·10⁻⁴        716          3.394·10⁻⁷
Glycerine 20°C    1.499            1260         1.190·10⁻³
Air 0°C, 1 atm    1.71·10⁻⁵        1.293        1.322·10⁻⁵
Hydrogen 0°C      8.4·10⁻⁶         8.99·10⁻²    9.344·10⁻⁵
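The rows of Table I.1.1 can be cross-checked against the relation ν = λ/ρ. The small script below does this for a few of the entries; the values are the ones reconstructed from the (partly garbled) extracted table, so treat them as illustrative rather than authoritative.

```python
# Consistency check of Table I.1.1: the kinematic viscosity is nu = lambda/rho.
# (lambda, rho, nu) triples as reconstructed from the extracted table.
media = {
    "water 20C":     (1.005e-3, 1000.0, 1.005e-6),
    "alcohol 20C":   (1.19e-3,  790.0,  1.506e-6),
    "glycerine 20C": (1.499,    1260.0, 1.190e-3),
    "air 0C 1atm":   (1.71e-5,  1.293,  1.322e-5),
}
for name, (lam, rho, nu) in media.items():
    rel_dev = abs(lam / rho - nu) / nu
    print(name, rel_dev)   # relative deviation, well below 1 percent
```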
I.1.9. Compressible Navier-Stokes equations in conservative form. Collecting all results we obtain the compressible Navier-Stokes equations in conservative form:

∂ρ/∂t + div(ρv) = 0
∂(ρv)/∂t + div(ρv ⊗ v) = ρf + 2λ div D(v) + µ grad div v − grad p
                        = ρf + λ∆v + (λ + µ) grad div v − grad p
∂e/∂t + div(ev) = ρf · v + 2λ div[D(v) · v] + µ div[(div v)v] − div(pv) + α∆ε
                = {ρf + 2λ div D(v) + µ grad div v − grad p} · v
                  + 2λD(v) : D(v) + µ(div v)² − p div v + α∆ε
p = p(ρ, ε)
e = ρε + ½ρ|v|².

I.1.10. Euler equations. In the inviscid case, i.e. λ = µ = 0, the compressible Navier-Stokes equations in conservative form reduce to the so-called Euler equations for an ideal gas:

∂ρ/∂t + div(ρv) = 0
∂(ρv)/∂t + div(ρv ⊗ v + pI) = ρf
∂e/∂t + div(ev + pv) = ρf · v + α∆ε
p = p(ρ, ε)
e = ρε + ½ρ|v|².

I.1.11. Compressible Navier-Stokes equations in non-conservative form. Inserting the first equation of §I.1.9 (p. 13) in the
left-hand side of the second equation yields

∂(ρv)/∂t + div(ρv ⊗ v)
= (∂ρ/∂t)v + ρ ∂v/∂t + (div(ρv))v + ρ(v · ∇)v
= ρ[∂v/∂t + (v · ∇)v].

Inserting the first two equations of §I.1.9 in the left-hand side of the third equation implies

2λD(v) : D(v) + µ(div v)² − p div v + α∆ε
= −{ρf + 2λ div D(v) + µ grad div v − grad p} · v
  + ∂(ρε + ½ρ|v|²)/∂t + div(ρεv + ½ρ|v|²v)
= −{∂(ρv)/∂t + div(ρv ⊗ v)} · v
  + ε ∂ρ/∂t + ε div(ρv) + ρ ∂ε/∂t + ρv · grad ε
  + ½|v|² ∂ρ/∂t + ½|v|² div(ρv) + ½ρ ∂|v|²/∂t + ρv · grad(½|v|²)
= −ρv · [∂v/∂t + (v · ∇)v] + ρ ∂ε/∂t + ρv · grad ε + ρv · ∂v/∂t + ρv · [(v · ∇)v]
= ρ ∂ε/∂t + ρv · grad ε.

With these simplifications we obtain the compressible Navier-Stokes equations in non-conservative form:

∂ρ/∂t + div(ρv) = 0
ρ[∂v/∂t + (v · ∇)v] = ρf + λ∆v + (λ + µ) grad div v − grad p
ρ[∂ε/∂t + v · grad ε] = 2λD(v) : D(v) + µ(div v)² − p div v + α∆ε
p = p(ρ, ε).
I.1.12. Instationary incompressible Navier-Stokes equations. We say that a fluid is incompressible if the volume of any subdomain V ⊂ Ω remains constant for all times t > 0, i.e.

∫_{V(t)} dx = ∫_V dx   for all t > 0.

The transport theorem then implies that

0 = d/dt ∫_{V(t)} dx = ∫_{V(t)} div v dx.

Hence, incompressibility is equivalent to the equation

div v = 0.

Next we assume that the density ρ is constant, denote by

ν = λ/ρ

the kinematic viscosity, replace p by p/ρ, and suppress the third equation in §I.1.11 (conservation of energy). This yields the instationary incompressible Navier-Stokes equations:

div v = 0
∂v/∂t + (v · ∇)v = f + ν∆v − grad p.

Remark I.1.3. The incompressible Navier-Stokes equations are often rescaled to a non-dimensional form. To this end we introduce a reference length L, a reference time T, a reference velocity U, a reference pressure P, and a reference force F and introduce new variables and quantities by

x = Ly,  t = Tτ,  v = Uu,  p = Pq,  f = Fg.

When rewriting the momentum equation in the new quantities we obtain

F g = (U/T) ∂u/∂τ + (U²/L)(u · ∇_y)u − (νU/L²)∆_y u + (P/L)∇_y q
    = (νU/L²) [ (L²/(νT)) ∂u/∂τ + (LU/ν)(u · ∇_y)u − ∆_y u + (PL/(νU)) ∇_y q ].

The physical dimension of F and of νU/L² is m s⁻². The quantities L²/(νT), LU/ν and PL/(νU) are dimensionless. The quantities L, U, T and ν are determined by the physical problem. The quantities F and P are fixed by the conditions F = νU/L² and PL/(νU) = 1. Physical reasons suggest to fix T = L/U. Thus there remains only the dimensionless parameter
Re = LU/ν.

Remark I.1.4. Re is called the Reynolds number and is a measure for the complexity of the flow. It was introduced in 1883 by Osborne Reynolds (1842 – 1912).

Example I.1.5. Consider an airplane at cruising speed and an oil-vessel. The relevant quantities for the airplane are

U ≈ 900 km h⁻¹ ≈ 250 m s⁻¹,  L ≈ 50 m,  ν ≈ 1.322·10⁻⁵ m² s⁻¹,  Re ≈ 9.455·10⁸.

The corresponding quantities for the oil-vessel are

U ≈ 20 knots ≈ 36 km h⁻¹ ≈ 10 m s⁻¹,  L ≈ 300 m,  ν ≈ 1.005·10⁻⁶ m² s⁻¹,  Re ≈ 2.985·10⁹.

I.1.13. Stationary incompressible Navier-Stokes equations. As a next step of simplification, we assume that we are in a stationary regime, i.e. all temporal derivatives vanish. This yields the stationary incompressible Navier-Stokes equations:

div v = 0
−ν∆v + (v · ∇)v + grad p = f.

I.1.14. Stokes equations. In a last step we linearize the Navier-Stokes equations at the velocity v = 0 and rescale the viscosity ν to 1. This gives the Stokes equations:

div v = 0
−∆v + grad p = f.
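The Reynolds numbers of Example I.1.5 are easy to reproduce from the definition Re = LU/ν:

```python
def reynolds(U, L, nu):
    """Reynolds number Re = L*U/nu (dimensionless)."""
    return L * U / nu

# airplane at cruising speed and oil-vessel, data from Example I.1.5
re_plane  = reynolds(U=250.0, L=50.0,  nu=1.322e-5)
re_vessel = reynolds(U=10.0,  L=300.0, nu=1.005e-6)
print(re_plane, re_vessel)   # about 9.455e8 and 2.985e9
```

Despite its much lower speed, the vessel has the larger Reynolds number, because its viscosity is an order of magnitude smaller and its length an order of magnitude larger.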
Remark I.1.6. All CFD algorithms solve complicated flow problems via a nested sequence of simplification steps similar to the described procedure. Therefore it is mandatory to dispose of efficient discretization schemes and solvers for simplified problems as, e.g., the Stokes equations, which are at the heart of the inmost loop.

I.1.15. Initial and boundary conditions. The problems of §§I.1.9 – I.1.12 are time-dependent. They must be completed by initial conditions for the velocity v, the density ρ, and – for the problems of §§I.1.9 – I.1.11 – the internal energy ε. The equations of §§I.1.11 – I.1.14 are of second order in space and require boundary conditions on the whole boundary Γ = ∂Ω. The Euler equations of §I.1.10 are hyperbolic and require boundary conditions on the inflow-boundary. For the energy equations one may choose either Dirichlet (prescribed energy) or Neumann (prescribed energy flux) boundary conditions. The situation is less evident for the mass and momentum equations. Around 1845, Sir George Gabriel Stokes (1819 – 1903) suggested that – due to the friction – the fluid will adhere at the boundary. This leads to the Dirichlet boundary condition (no-slip condition)

v = 0   on Γ.

More generally one can impose the condition v = v_Γ with a given boundary velocity v_Γ. Around 1827, Claude Louis Marie Henri Navier (1785 – 1836) had suggested the more general boundary condition

λ_n v · n + (1 − λ_n) n · T · n = 0
λ_t [v − (v · n)n] + (1 − λ_t)[T · n − (n · T · n)n] = 0   on Γ

with parameters λ_n, λ_t ∈ [0, 1] that depend on the actual flow problem. The first equation of Navier refers to the normal components of v and n · T, the second one to the tangential components of v and n · T. The special case λ_n = λ_t = 1 obviously yields the no-slip condition of Stokes. The case λ_n = 1, λ_t = 0 on the other hand gives the slip condition

v · n = 0
T · n − (n · T · n)n = 0.

A mixture of both conditions is also possible. More generally one may replace the homogeneous right-hand sides by given known functions. The question
which boundary condition is the better description of the reality was decided in the 19th century by experiments with pendulums submerged in a viscous fluid. They were in favour of the no-slip condition of Stokes. The viscosities of the fluids, however, were similar to that of honey and the velocities were only a few meters per second. When dealing with much higher velocities or much smaller viscosities, the slip condition of Navier is more appropriate. A particular example for this situation is the re-entry of a space vehicle. Other examples are coating problems where the location of the boundary is an unknown and is determined by the interaction of capillary and viscous forces.

I.2. Notations and basic results

I.2.1. Domains and functions. The following notations concerning domains and functions will frequently be used:

Ω: open, bounded, connected set in Rⁿ, n ∈ {2, 3};
Γ: boundary of Ω, supposed to be Lipschitz-continuous;
n: exterior unit normal to Ω;
p, q, r, ...: scalar functions with values in R;
u, v, w, ...: vector fields with values in Rⁿ;
S, T, ...: tensor fields with values in Rⁿˣⁿ;
I: unit tensor;
∇: gradient;
div: divergence, div u = Σ_{i=1}^n ∂u_i/∂x_i, div T = ( Σ_{i=1}^n ∂T_ij/∂x_i )_{1≤j≤n};
∆ = div ∇: Laplace operator;
D(u) = ½( ∂u_i/∂x_j + ∂u_j/∂x_i )_{1≤i,j≤n}: deformation tensor;
u · v: inner product;
S : T: dyadic product (inner product of tensors);
u ⊗ v = (u_i v_j)_{1≤i,j≤n}: tensorial product.

I.2.2. Differentiation of products. The product formula for differentiation yields the following formulae for the differentiation of products of scalar functions, vector fields and tensor fields:

div(pu) = ∇p · u + p div u,
div(u ⊗ v) = (div u)v + (u · ∇)v,
div(T · u) = (div T) · u + T : D(u).
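The first of these product formulae, div(pu) = ∇p · u + p div u, can be checked numerically with central finite differences. The sketch below uses the hypothetical smooth choice p(x, y) = x²y and u = (sin y, cos x):

```python
import math

h = 1e-5  # finite-difference step

def p(x, y):  return x * x * y
def u1(x, y): return math.sin(y)
def u2(x, y): return math.cos(x)

def dx(f, x, y): return (f(x + h, y) - f(x - h, y)) / (2 * h)
def dy(f, x, y): return (f(x, y + h) - f(x, y - h)) / (2 * h)

x0, y0 = 0.3, 0.7
# left-hand side: div(p u)
lhs = dx(lambda x, y: p(x, y) * u1(x, y), x0, y0) \
    + dy(lambda x, y: p(x, y) * u2(x, y), x0, y0)
# right-hand side: grad p . u + p div u
rhs = dx(p, x0, y0) * u1(x0, y0) + dy(p, x0, y0) * u2(x0, y0) \
    + p(x0, y0) * (dx(u1, x0, y0) + dy(u2, x0, y0))
print(abs(lhs - rhs))   # close to 0, up to finite-difference error
```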
I.2.3. Integration by parts formulae. The above product formulae and the Gauss theorem for integrals give rise to the following integration by parts formulae:

∫_Γ p u · n dS = ∫_Ω ∇p · u dx + ∫_Ω p div u dx,
∫_Γ n · T · u dS = ∫_Ω (div T) · u dx + ∫_Ω T : D(u) dx,
∫_Γ n · (u ⊗ v) dS = ∫_Ω (div u)v dx + ∫_Ω (u · ∇)v dx.

I.2.4. Weak derivatives. Given a sufficiently smooth function ϕ and a multi-index α ∈ Nⁿ, we denote its partial derivatives by

Dα ϕ = ∂^{α_1+...+α_n} ϕ / (∂x_1^{α_1} ... ∂x_n^{α_n}).

Given a continuous function ϕ : Rⁿ → R, we denote its support by

supp ϕ = {x ∈ Rⁿ : ϕ(x) ≠ 0}.

Recall that A̅ denotes the closure of a set A ⊂ Rⁿ.

Example I.2.1. For the sets

A = {x ∈ R³ : x₁² + x₂² + x₃² < 1}   (open unit ball),
B = {x ∈ R³ : 0 < x₁² + x₂² + x₃² < 1}   (punctuated open unit ball),
C = {x ∈ R³ : 1 < x₁² + x₂² + x₃² < 2}   (open annulus),

we have

A̅ = {x ∈ R³ : x₁² + x₂² + x₃² ≤ 1}   (closed unit ball),
B̅ = {x ∈ R³ : x₁² + x₂² + x₃² ≤ 1}   (closed unit ball),
C̅ = {x ∈ R³ : 1 ≤ x₁² + x₂² + x₃² ≤ 2}   (closed annulus).

The set of all functions that are infinitely differentiable and have their support contained in Ω is denoted by C₀^∞(Ω):

C₀^∞(Ω) = {ϕ ∈ C^∞(Ω) : supp ϕ ⊂ Ω}.

Remark I.2.2. The condition "supp ϕ ⊂ Ω" is a non-trivial one,
since supp ϕ is closed and Ω is open. Functions satisfying this condition vanish at the boundary of Ω together with all their derivatives.

Given two functions ϕ, ψ ∈ C₀^∞(Ω), the Gauss theorem for integrals yields for every multi-index α ∈ Nⁿ the identity

∫_Ω (Dα ϕ) ψ dx = (−1)^{α_1+...+α_n} ∫_Ω ϕ (Dα ψ) dx.

This identity motivates the definition of the weak derivatives: Given two integrable functions ϕ, ψ ∈ L¹(Ω) and a multi-index α ∈ Nⁿ, ψ is called the α-th weak derivative of ϕ if and only if the identity

∫_Ω ψ ρ dx = (−1)^{α_1+...+α_n} ∫_Ω ϕ (Dα ρ) dx

holds for all functions ρ ∈ C₀^∞(Ω). In this case we write ψ = Dα ϕ.

Remark I.2.3. For smooth functions, the notions of classical and weak derivatives coincide. However, there are functions which are not differentiable in the classical sense but which have a weak derivative (cf. Example I.2.4 below).

I.2.5. Sobolev spaces and norms. We will frequently use the following Sobolev spaces and norms:

H^k(Ω) = {ϕ ∈ L²(Ω) : Dα ϕ ∈ L²(Ω) for all α ∈ Nⁿ with α_1 + ... + α_n ≤ k},
|ϕ|_k = ( Σ_{α_1+...+α_n=k} ∥Dα ϕ∥²_{L²(Ω)} )^{1/2},
∥ϕ∥_k = ( Σ_{l=0}^k |ϕ|²_l )^{1/2} = ( Σ_{α_1+...+α_n≤k} ∥Dα ϕ∥²_{L²(Ω)} )^{1/2},
H₀¹(Ω) = {ϕ ∈ H¹(Ω) : ϕ = 0 on Γ},
L₀²(Ω) = {p ∈ L²(Ω) : ∫_Ω p = 0},
H^{1/2}(Γ) = {ψ ∈ L²(Γ) : ψ = ϕ|_Γ for some ϕ ∈ H¹(Ω)}.

Note that all derivatives are to be understood in the weak sense.

Example I.2.4. The function |x| is not differentiable in (−1, 1), but it is differentiable in the weak sense. Its weak derivative is the piecewise constant function which equals −1 on (−1, 0) and 1 on (0, 1).
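The defining identity of the weak derivative can be verified numerically for Example I.2.4. Below, ϕ(x) = |x| and ψ(x) = sign(x) on (−1, 1); the test function ρ(x) = (1 − x²)³(x + ½) is a hypothetical choice that vanishes at the boundary (which is all the identity needs, since no boundary terms may appear):

```python
def rho_test(x):  # test function vanishing at x = -1 and x = 1
    return (1.0 - x * x) ** 3 * (x + 0.5)

def drho(x):      # its classical derivative
    return -6.0 * x * (1.0 - x * x) ** 2 * (x + 0.5) + (1.0 - x * x) ** 3

def phi(x):       return abs(x)
def psi(x):       return -1.0 if x < 0 else 1.0   # candidate weak derivative

def integrate(f, a, b, n=20000):
    """Composite midpoint rule (n even, so the kink at 0 is a cell boundary)."""
    h = (b - a) / n
    return h * sum(f(a + (i + 0.5) * h) for i in range(n))

left  = integrate(lambda x: psi(x) * rho_test(x), -1.0, 1.0)   # int psi*rho
right = -integrate(lambda x: phi(x) * drho(x), -1.0, 1.0)      # -int phi*rho'
print(left, right)   # both approximately 0.25
```

The common value 0.25 can also be computed by hand: the odd parts of the integrand cancel, leaving 2∫₀¹ x(1 − x²)³ dx = 1/4.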
We further set

∥ψ∥_{1/2,Γ} = inf{∥ϕ∥_1 : ϕ ∈ H¹(Ω), ϕ|_Γ = ψ}.

The space H^{1/2}(Γ) is called the trace space of H¹(Ω); its elements are called traces of functions in H¹(Ω).

Example I.2.5. The function |x| is not differentiable, but it is in H¹((−1, 1)).

Remark I.2.6. Except in one dimension, H¹ functions are in general not continuous and do not admit point values (cf. Example I.2.7 below). A function, however, which is piecewise differentiable is in H¹(Ω) if and only if it is globally continuous. This is crucial for finite element functions.

Example I.2.7. Consider the open unit ball

Ω = {x ∈ Rⁿ : x₁² + ... + xₙ² < 1}

in Rⁿ and the functions

ϕ_α(x) = {x₁² + ... + xₙ²}^{α/2},   α ∈ R.

Then we have

ϕ_α ∈ H¹(Ω) ⟺ α ≥ 0, if n = 2,
ϕ_α ∈ H¹(Ω) ⟺ α > 1 − n/2, if n > 2.

In two dimensions, the function ln(ln(√(x₁² + x₂²))) is an example of an H¹ function that is not continuous and which does not admit a point value in the origin. In three dimensions, a similar example is given by ln(√(x₁² + x₂² + x₃²)).

I.2.6. Friedrichs and Poincaré inequalities. The following inequalities are fundamental:

∥ϕ∥_0 ≤ c_Ω |ϕ|_1   for all ϕ ∈ H₀¹(Ω)   (Friedrichs inequality),
∥ϕ∥_0 ≤ c′_Ω |ϕ|_1   for all ϕ ∈ H¹(Ω) ∩ L₀²(Ω)   (Poincaré inequality).

The constants c_Ω and c′_Ω depend on the domain Ω and are proportional to its diameter.

I.2.7. Finite element partitions. The finite element discretizations are based on partitions of the domain Ω into non-overlapping simple subdomains. The collection of these subdomains is called a partition and is labeled T. The members of T, i.e. the subdomains, are called elements and are labeled K. Any partition T has to satisfy the following conditions:
• Ω̅ = Ω ∪ Γ is the union of all elements in T.
• (Admissibility) Any two elements in T are either disjoint or share a vertex or a complete edge or – if n = 3 – a complete face.
• (Affine equivalence) Each K ∈ T is either a triangle or a parallelogram, if n = 2, or a tetrahedron or a parallelepiped, if n = 3.
• (Shape regularity) For any element K, the ratio of its diameter h_K to the diameter ρ_K of the largest ball inscribed into K is bounded independently of K.

For any partition T we denote by h_T or simply h the maximum of the element diameters:

h = h_T = max{h_K : K ∈ T}.

Remark I.2.8. In practice one usually not only considers a single partition T, but complete families of partitions which are often obtained by successive local or global refinements. Then, the ratio h_K/ρ_K must be bounded uniformly with respect to all elements and all partitions. In two dimensions, shape regularity means that the smallest angles of all elements stay bounded away from zero.

Remark I.2.9. The condition of affine equivalence may be dropped. It, however, considerably simplifies the analysis since it implies constant Jacobians for all element transformations. In two dimensions triangles and parallelograms may be mixed (cf. Figure I.2.1). In three dimensions tetrahedrons and parallelepipeds can be mixed provided prismatic elements are also incorporated.

[Figure I.2.1. Mixture of triangular and quadrilateral elements]

I.2.8. Finite element spaces. For any multi-index α ∈ Nⁿ we set for abbreviation

|α|_1 = α_1 + ... + α_n,   |α|_∞ = max{α_i : 1 ≤ i ≤ n},   x^α = x_1^{α_1} · ... · x_n^{α_n}.

With any element K in any partition T and any integer number k we associate a space R_k(K) of polynomials by

R_k(K) = span{x^α : |α|_1 ≤ k},   if K is a triangle or a tetrahedron,
R_k(K) = span{x^α : |α|_∞ ≤ k},   if K is a parallelogram or a parallelepiped.

With this notation we define finite element spaces by

S^{k,−1}(T) = {ϕ : Ω → R : ϕ|_K ∈ R_k(K) for all K ∈ T},
S^{k,0}(T) = S^{k,−1}(T) ∩ C(Ω̅),
S₀^{k,0}(T) = S^{k,0}(T) ∩ H₀¹(Ω) = {ϕ ∈ S^{k,0}(T) : ϕ = 0 on Γ}.

Note that k may be 0 for the first space, but must be at least 1 for the second and third space.

Example I.2.10. For a triangle, we have

R_1(K) = span{1, x₁, x₂},
R_2(K) = span{1, x₁, x₂, x₁², x₁x₂, x₂²}.

For a parallelogram on the other hand, we have

R_1(K) = span{1, x₁, x₂, x₁x₂},
R_2(K) = span{1, x₁, x₂, x₁x₂, x₁², x₂², x₁²x₂, x₁x₂², x₁²x₂²}.

I.2.9. Approximation properties. The finite element spaces defined above satisfy the following approximation properties:

inf_{ϕ_T ∈ S^{k,−1}(T)} ∥ϕ − ϕ_T∥_0 ≤ c h^{k+1} |ϕ|_{k+1}   for ϕ ∈ H^{k+1}(Ω), k ∈ N,
inf_{ϕ_T ∈ S^{k,0}(T)} ∥ϕ − ϕ_T∥_j ≤ c h^{k+1−j} |ϕ|_{k+1}   for ϕ ∈ H^{k+1}(Ω), j ∈ {0, 1}, k ∈ N*,
inf_{ϕ_T ∈ S₀^{k,0}(T)} ∥ϕ − ϕ_T∥_j ≤ c h^{k+1−j} |ϕ|_{k+1}   for ϕ ∈ H^{k+1}(Ω) ∩ H₀¹(Ω), j ∈ {0, 1}, k ∈ N*.
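The rate h^{k+1} is easy to observe experimentally. The sketch below measures the maximum error of piecewise linear interpolation (k = 1) of the hypothetical smooth function sin(πx) on a uniform 1D grid and estimates the convergence order from two mesh sizes; it should come out close to k + 1 = 2:

```python
import math

def interp_error(n):
    """Max error of piecewise linear interpolation of sin(pi x) on n elements,
    sampled at interior points of each element."""
    h = 1.0 / n
    err = 0.0
    for i in range(n):
        a, b = i * h, (i + 1) * h
        fa, fb = math.sin(math.pi * a), math.sin(math.pi * b)
        for j in range(1, 10):
            x = a + j * h / 10
            lin = fa + (fb - fa) * (x - a) / h
            err = max(err, abs(math.sin(math.pi * x) - lin))
    return err

e1, e2 = interp_error(16), interp_error(32)
rate = math.log(e1 / e2, 2)
print(e1, e2, rate)   # rate close to 2
```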
I.2.10. Nodal shape functions. NT denotes the set of all element vertices. For any vertex x ∈ NT the associated nodal shape function is denoted by λ_x. It is the unique function in S^{1,0}(T) that equals 1 at the vertex x and that vanishes at all other vertices y ∈ NT \ {x}. The support of a nodal shape function λ_x is denoted by ω_x and consists of all elements that share the vertex x (cf. Figure I.2.2).

[Figure I.2.2. Some examples of domains ω_x]

The nodal shape functions can easily be computed elementwise from the coordinates of the element's vertices.

[Figure I.2.3. Enumeration of vertices of triangles and parallelograms]

Example I.2.11. (1) Consider a triangle K with vertices a₀, ..., a₂ numbered counterclockwise (cf. Figure I.2.3). Then the restrictions to K of the nodal shape functions λ_{a₀}, ..., λ_{a₂} are given by

λ_{a_i}(x) = det(x − a_{i+1}, a_{i+2} − a_{i+1}) / det(a_i − a_{i+1}, a_{i+2} − a_{i+1}),   i = 0, ..., 2,

where all indices have to be taken modulo 3.
(2) Consider a parallelogram K with vertices a₀, ..., a₃ numbered counterclockwise (cf. Figure I.2.3). Then the restrictions to K of the nodal shape functions λ_{a₀}, ..., λ_{a₃} are given by

λ_{a_i}(x) = [det(x − a_{i+2}, a_{i+1} − a_{i+2}) / det(a_i − a_{i+2}, a_{i+1} − a_{i+2})]
           · [det(x − a_{i+2}, a_{i+3} − a_{i+2}) / det(a_i − a_{i+2}, a_{i+3} − a_{i+2})],   i = 0, ..., 3,

where all indices have to be taken modulo 4.
(3) Consider a tetrahedron K with vertices a₀, ..., a₃ enumerated as in Figure I.2.4. Then the restrictions to K of the nodal shape functions λ_{a₀}, ..., λ_{a₃} are given by

λ_{a_i}(x) = det(x − a_{i+1}, a_{i+2} − a_{i+1}, a_{i+3} − a_{i+1}) / det(a_i − a_{i+1}, a_{i+2} − a_{i+1}, a_{i+3} − a_{i+1}),   i = 0, ..., 3,

where all indices have to be taken modulo 4.

[Figure I.2.4. Enumeration of vertices of tetrahedra and parallelepipeds (the vertex a₂ of the parallelepiped is hidden)]

(4) Consider a parallelepiped K with vertices a₀, ..., a₇ enumerated as in Figure I.2.4. Then the restrictions to K of the nodal shape functions λ_{a₀}, ..., λ_{a₇} are given by

λ_{a_i}(x) = [det(x − a_{i+1}, a_{i+3} − a_{i+1}, a_{i+5} − a_{i+1}) / det(a_i − a_{i+1}, a_{i+3} − a_{i+1}, a_{i+5} − a_{i+1})]
           · [det(x − a_{i+2}, a_{i+3} − a_{i+2}, a_{i+6} − a_{i+2}) / det(a_i − a_{i+2}, a_{i+3} − a_{i+2}, a_{i+6} − a_{i+2})]
           · [det(x − a_{i+4}, a_{i+5} − a_{i+4}, a_{i+6} − a_{i+4}) / det(a_i − a_{i+4}, a_{i+5} − a_{i+4}, a_{i+6} − a_{i+4})],   i = 0, ..., 7,

where all indices have to be taken modulo 8.
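The determinant formula of Example I.2.11 (1) is straightforward to implement; the short sketch below evaluates the three nodal shape functions of an arbitrary counterclockwise triangle and checks the nodal property λ_{a_i}(a_j) = δ_{ij} and the partition of unity Σ λ_{a_i} = 1:

```python
def det2(u, v):
    """2x2 determinant of the column vectors u and v."""
    return u[0] * v[1] - u[1] * v[0]

def shape_functions(tri, x):
    """Nodal shape functions of a triangle, Example I.2.11 (1),
    indices taken modulo 3."""
    lam = []
    for i in range(3):
        ai, a1, a2 = tri[i], tri[(i + 1) % 3], tri[(i + 2) % 3]
        num = det2((x[0] - a1[0], x[1] - a1[1]), (a2[0] - a1[0], a2[1] - a1[1]))
        den = det2((ai[0] - a1[0], ai[1] - a1[1]), (a2[0] - a1[0], a2[1] - a1[1]))
        lam.append(num / den)
    return lam

tri = [(0.0, 0.0), (2.0, 0.0), (0.5, 1.5)]   # arbitrary counterclockwise triangle
lams = shape_functions(tri, (0.7, 0.4))
print(lams, sum(lams))   # the values sum to 1 (partition of unity)
```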
Remark I.2.12. The functions λ_x, x ∈ NT, form a basis of S^{1,0}(T). The bases of the higher-order spaces S^{k,0}(T), k ≥ 2, consist of suitable products of functions λ_x corresponding to appropriate vertices x.
(4) Consider a parallelepiped K with vertices a_0, …, a_7 enumerated as in Figure I.2.4. Then the restrictions to K of the nodal shape functions λ_{a_0}, …, λ_{a_7} are given by

λ_{a_i}(x) = [det(x − a_{i+1}, a_{i+3} − a_{i+1}, a_{i+5} − a_{i+1}) / det(a_i − a_{i+1}, a_{i+3} − a_{i+1}, a_{i+5} − a_{i+1})] · [det(x − a_{i+2}, a_{i+3} − a_{i+2}, a_{i+6} − a_{i+2}) / det(a_i − a_{i+2}, a_{i+3} − a_{i+2}, a_{i+6} − a_{i+2})] · [det(x − a_{i+4}, a_{i+5} − a_{i+4}, a_{i+6} − a_{i+4}) / det(a_i − a_{i+4}, a_{i+5} − a_{i+4}, a_{i+6} − a_{i+4})],  i = 0, …, 7,

where all indices have to be taken modulo 8.

[Figure I.2.4. Enumeration of the vertices of tetrahedra and parallelepipeds (the vertex a_2 of the parallelepiped is hidden).]

Remark I.2.12. For every element (triangle, parallelogram, tetrahedron, or parallelepiped) the sum of all nodal shape functions corresponding to the element's vertices is identically equal to 1 on the element. The functions λ_x, x ∈ N_T, form a basis of S^{1,0}(T). The bases of higher-order spaces S^{k,0}(T), k ≥ 2, consist of suitable products of functions λ_x corresponding to appropriate vertices x.

Example I.2.13. (1) Consider again a triangle K with its vertices numbered as in Example I.2.11 (1). Then the nodal basis of S^{2,0}(T)|_K consists of the functions

λ_{a_i}[λ_{a_i} − λ_{a_{i+1}} − λ_{a_{i+2}}],  i = 0, …, 2,
4λ_{a_i}λ_{a_{i+1}},  i = 0, …, 2,

where the functions λ_a are as in Example I.2.11 (1) and where all indices have to be taken modulo 3.
Another basis of S^{2,0}(T)|_K, called the hierarchical basis, consists of the functions

λ_{a_i},  i = 0, …, 2,
4λ_{a_i}λ_{a_{i+1}},  i = 0, …, 2,

where the functions λ_a are as in Example I.2.11 (1) and where all indices have to be taken modulo 3.

(2) Consider again a parallelogram K with its vertices numbered as in Example I.2.11 (2). Then the nodal basis of S^{2,0}(T)|_K consists of the functions

λ_{a_i}[λ_{a_i} − λ_{a_{i+1}} + λ_{a_{i+2}} − λ_{a_{i+3}}],  i = 0, …, 3,
4λ_{a_i}[λ_{a_{i+1}} − λ_{a_{i+2}}],  i = 0, …, 3,
16λ_{a_0}λ_{a_2},

where the functions λ_a are as in Example I.2.11 (2) and where all indices have to be taken modulo 4. The hierarchical basis consists of the functions

λ_{a_i},  i = 0, …, 3,
4λ_{a_i}[λ_{a_{i+1}} − λ_{a_{i+2}}],  i = 0, …, 3,
16λ_{a_0}λ_{a_2}.

(3) Consider again a tetrahedron K with its vertices numbered as in Example I.2.11 (3). Then the nodal basis of S^{2,0}(T)|_K consists of the functions

λ_{a_i}[λ_{a_i} − λ_{a_{i+1}} − λ_{a_{i+2}} − λ_{a_{i+3}}],  i = 0, …, 3,
4λ_{a_i}λ_{a_j},  0 ≤ i < j ≤ 3,

where the functions λ_a are as in Example I.2.11 (3) and where all indices have to be taken modulo 4. The hierarchical basis consists of the functions

λ_{a_i},  i = 0, …, 3,
4λ_{a_i}λ_{a_j},  0 ≤ i < j ≤ 3.

I.2.11. A quasi-interpolation operator. We will frequently use the quasi-interpolation operator R_T : L¹(Ω) → S^{1,0}_0(T) which is defined by

R_T φ = Σ_{x ∈ N_T ∩ Ω} λ_x (1/|ω_x|) ∫_{ω_x} φ dx.

Here, |ω_x| denotes the area, if n = 2, respectively volume, if n = 3, of the set ω_x.
The operator R_T satisfies the following local error estimates for all φ ∈ H¹₀(Ω) and all elements K ∈ T:

‖φ − R_T φ‖_{L²(K)} ≤ c₁ h_K ‖φ‖_{H¹(ω_K)},
‖φ − R_T φ‖_{L²(∂K)} ≤ c₂ h_K^{1/2} ‖φ‖_{H¹(ω_K)}.

Here, ω_K denotes the set of all elements that share at least a vertex with K (cf. Figure I.2.5).

[Figure I.2.5. Examples of domains ω_K]

Remark I.2.14. The operator R_T is called a quasi-interpolation operator since it does not interpolate a given function φ at the vertices x ∈ N_T. In fact, point values are not defined for H¹-functions. For functions with more regularity, which are at least in H²(Ω), the situation is different. For those functions point values do exist, and the classical nodal interpolation operator I_T : H²(Ω) ∩ H¹₀(Ω) → S^{1,0}_0(T) can be defined by the relation (I_T φ)(x) = φ(x) for all vertices x ∈ N_T.

I.2.12. Bubble functions. For any element K ∈ T we define an element bubble function by

ψ_K = α_K Π_{x ∈ K ∩ N_T} λ_x,

α_K = 27 if K is a triangle, 16 if K is a parallelogram, 256 if K is a tetrahedron, 64 if K is a parallelepiped.

It has the following properties:

0 ≤ ψ_K(x) ≤ 1 for all x ∈ K,
ψ_K(x) = 0 for all x ∈ ∂K,
max_{x∈K} ψ_K(x) = 1.
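A quick numerical check of the normalization α_K = 27 for the element bubble on a triangle; the reference triangle is used, and this is only an illustrative sketch:

```python
import numpy as np

def bary(a, x):
    """Barycentric coordinates (lambda_0, lambda_1, lambda_2) of x in the triangle a."""
    M = np.column_stack((a[1] - a[0], a[2] - a[0]))
    l1, l2 = np.linalg.solve(M, x - a[0])
    return np.array([1.0 - l1 - l2, l1, l2])

def psi_K(a, x):
    """Element bubble of a triangle: psi_K = 27 * lambda_0 * lambda_1 * lambda_2."""
    return 27.0 * np.prod(bary(a, x))

a = [np.array([0.0, 0.0]), np.array([1.0, 0.0]), np.array([0.0, 1.0])]
barycenter = (a[0] + a[1] + a[2]) / 3.0
print(psi_K(a, barycenter))            # approximately 1.0, the maximum of psi_K
print(psi_K(a, np.array([0.5, 0.0])))  # 0.0: psi_K vanishes on the boundary
```

The maximum value 1 is attained at the barycenter, where all three barycentric coordinates equal 1/3.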
We denote by E_T the set of all edges, if n = 2, respectively of all faces, if n = 3, of all elements in T. With each edge respectively face E ∈ E_T we associate an edge respectively face bubble function by

ψ_E = β_E Π_{x ∈ E ∩ N_T} λ_x,

β_E = 4 if E is a line segment, 27 if E is a triangle, 16 if E is a parallelogram.

It has the following properties:

0 ≤ ψ_E(x) ≤ 1 for all x ∈ ω_E,
ψ_E(x) = 0 for all x ∈ ∂ω_E,
max_{x∈ω_E} ψ_E(x) = 1.

Here, ω_E is the union of all elements that share E (cf. Figure I.2.6). Note that ω_E consists of two elements, if E is not contained in the boundary Γ, and of exactly one element, if E is a subset of Γ.

[Figure I.2.6. Examples of domains ω_E (E is marked bold).]

With each edge respectively face E ∈ E_T we finally associate a unit vector n_E orthogonal to E and denote by [·]_E the jump across E in direction n_E. If E is contained in the boundary Γ, the orientation of n_E is fixed to be the one of the exterior normal. Otherwise it is not fixed.

Remark I.2.15. [·]_E depends on the orientation of n_E, but quantities of the form [n_E · φ]_E are independent of this orientation.
CHAPTER II

Stationary linear problems

II.1. Discretization of the Stokes equations. A first attempt

II.1.1. The Poisson equation revisited. We recall the Poisson equation

−Δφ = f in Ω,  φ = 0 on Γ,

and its variational formulation: Find φ ∈ H¹₀(Ω) such that

∫_Ω ∇φ · ∇ψ dx = ∫_Ω f ψ dx

holds for all ψ ∈ H¹₀(Ω). The Poisson equation and this variational formulation are equivalent in the sense that any solution of the Poisson equation is a solution of the variational problem and that, conversely, any solution of the variational problem which is twice continuously differentiable is a solution of the Poisson equation. Moreover, one can prove that the variational problem admits a unique solution.

Standard finite element discretizations simply choose a partition T of Ω and an integer k ≥ 1 and replace in the variational problem H¹₀(Ω) by the finite-dimensional space S^{k,0}_0(T). This gives rise to a linear system of equations with a symmetric, positive definite, square matrix, called stiffness matrix. If, e.g., k = 1, the size of the resulting discrete problem is given by the number of vertices in T which do not lie on the boundary Γ.

II.1.2. A variational formulation of the Stokes equations. Now we look at the Stokes equations of §I.1.14 (p. 16) with no-slip boundary condition

−Δu + grad p = f in Ω,
div u = 0 in Ω,
u = 0 on Γ.

The space

V = {u ∈ H¹₀(Ω)ⁿ : div u = 0}

is a subspace of H¹₀(Ω)ⁿ which has similar properties. For every p ∈ H¹(Ω) and any u ∈ V the integration by parts formulae of §I.2.3 (p. 19) imply that

∫_Ω ∇p · u dx = −∫_Ω p div u dx = 0.
We therefore obtain the following variational formulation of the Stokes equations: Find u ∈ V such that

∫_Ω ∇u : ∇v dx = ∫_Ω f · v dx

holds for all v ∈ V. As for the Poisson problem, one can prove that the variational problem admits a unique solution. Moreover, any velocity field which solves the Stokes equations also solves this variational problem.

Although the variational problem of the previous section does not incorporate the pressure, it nevertheless gives information about the velocity field. Therefore one may be tempted to use it as a starting point for a discretization process similar to the Poisson equation, using continuous, piecewise linear, solenoidal finite element functions. This would yield a symmetric, positive definite system of linear equations for a discrete velocity field and would have the appealing side-effect that standard solvers such as preconditioned conjugate gradient algorithms were available. But we have lost the pressure! This is an apparent gap between differential equation and variational problem which is not present for the Poisson equation. The following example shows that this approach leads to a dead-end road.

II.1.3. A naive discretization of the Stokes equations.

[Figure II.1.1. Courant triangulation]

Example II.1.1. We consider the unit square Ω = (0, 1)² and divide it into N² squares of equal size. Each square is further divided into two triangles by connecting its top-left corner with its bottom-right corner (cf. Figure II.1.1).
This partition is known as the Courant triangulation. The length of the triangles' short sides is h = 1/N. For the discretization of the Stokes equations we replace V by V(T) = S^{1,0}_0(T)² ∩ V, i.e. we use continuous, piecewise linear, solenoidal finite element functions.

Choose an arbitrary function v_T ∈ V(T). We first consider the triangle K₀ which has the origin as a vertex. Since all its vertices are situated on the boundary, we have v_T = 0 on K₀. Next we consider the triangle K₁ which shares its long side with K₀. Denote by z₁ the vertex of K₁ which lies in the interior of Ω. Taking into account that v_T = 0 on Γ and div v_T = 0 in K₁, applying the integration by parts formulae of §I.2.3 (p. 19) and evaluating line integrals with the help of the trapezoidal rule, we conclude that

0 = ∫_{K₁} div v_T dx = ∫_{∂K₁} v_T · n_{K₁} dS = (h/2) v_T(z₁) · {e₁ + e₂}.

Here n_{K₁} is the unit exterior normal to K₁ and e₁, e₂ denote the canonical basis vectors in R². Next we consider the triangle K₂ which is adjacent to the vertical boundary of Ω and to K₁. With the same arguments as for K₁ we conclude that

0 = ∫_{K₂} div v_T dx = ∫_{∂K₂} v_T · n_{K₂} dS = (h/2) v_T(z₁) · {e₁ + e₂} − (h/2) v_T(z₁) · e₂ = −(h/2) v_T(z₁) · e₂,

since the first term vanishes by the result for K₁. Since v_T(z₁) · e₁ = −v_T(z₁) · e₂, we obtain v_T(z₁) = 0 and consequently v_T = 0 on K₀ ∪ K₁ ∪ K₂. Repeating this argument, we first conclude that v_T vanishes on all squares that are adjacent to the left boundary of Ω and then that v_T vanishes on all of Ω. Hence we have V(T) = {0}. Thus this space is not suited for the approximation of V.

Remark II.1.2. One can prove that S^{k,0}_0(T)² ∩ V is suited for the approximation of V only if k ≥ 5. This is no practical choice.
II.1.4. Possible remedies. The previous sections show that the Poisson and Stokes equations are fundamentally different and that the discretization of the Stokes problem is a much more delicate task than the one for the Poisson equation. There are several possible remedies for the difficulties presented above:

• Relax the divergence constraint: This leads to mixed finite elements.
• Add consistent penalty terms: This leads to stabilized Petrov-Galerkin formulations.
• Relax the continuity condition: This leads to non-conforming methods.
• Look for an equivalent differential equation without the divergence constraint: This leads to the streamfunction formulation.

In the following sections we will investigate all these possibilities and will see that each has its particular pitfalls.

II.2. Mixed finite element discretizations of the Stokes equations

II.2.1. Saddle-point formulation of the Stokes equations. We recall the Stokes equations with no-slip boundary condition

−Δu + grad p = f in Ω,
div u = 0 in Ω,
u = 0 on Γ.

If p is any pressure solving the Stokes equations and if c is any constant, obviously p + c is also an admissible pressure for the Stokes equations. Hence, the pressure is at most unique up to an additive constant. We try to fix this constant by the normalization

∫_Ω p dx = 0.

If v ∈ H¹₀(Ω)ⁿ and p ∈ H¹(Ω) are arbitrary, the integration by parts formulae of §I.2.3 (p. 19) imply that

∫_Ω ∇p · v dx = ∫_Γ p v · n dS − ∫_Ω p div v dx = −∫_Ω p div v dx,

since v vanishes on Γ. These observations lead to the following mixed variational formulation of the Stokes equations:
e. i. MIXED FINITE ELEMENTS 33 1 Find u ∈ H0 (Ω)n and p ∈ L2 (Ω) such that 0 u: Ω vdx − Ω p div vdx = Ω 1 f · vdx for all v ∈ H0 (Ω)n q div udx = 0 Ω for all q ∈ L2 (Ω).II. • The variational problem is the EulerLagrange equation corresponding to the constrained minimization problem 1  u2 dx − f · udx Minimize 2 Ω Ω subject to Ω q div udx = 0 for all q ∈ L2 (Ω).2. i. General structure of mixed ﬁnite element discretizations of the Stokes equations. u1 + p 0 ≤ cΩ f 0 with a constant cΩ which only depends on the domain Ω. • Any solution u. p. p of the variational problem which is suﬃciently smooth is a solution of the Stokes equations. u is a minimizer. A deep mathematical result is: The mixed variational formulation of the Stokes equations admits a unique solution u. II. We choose a partition T of Ω and 1 two associated ﬁnite element spaces X(T ) ⊂ H0 (Ω)n for the velocity and Y (T ) ⊂ L2 (Ω) for the pressure. p is a maximizer and is the Lagrange multiplicator corresponding the constraint div u = 0. 0 It has the following properties: • Any solution u.2.e. This solution depends continuously on the force f . Then we replace in the mixed 0 1 variational formulation of the Stokes equations the space H0 (Ω)n by X(T ) and the space L2 (Ω) by X(T ). 0 This gives rise to the following general mixed finite element discretization of the Stokes equations: Find uT ∈ X(T ) and pT ∈ Y (T ) such that uT : Ω vT dx − Ω pT div vT dx = Ω f · vT dx . 0 Hence it is a saddlepoint problem.2. p of the Stokes equations is a solution of the above variational problem.
for all v_T ∈ X(T),

∫_Ω q_T div u_T dx = 0 for all q_T ∈ Y(T).

Remark II.2.1. The condition X(T) ⊂ H¹₀(Ω)ⁿ implies that the discrete velocities are globally continuous and vanish on the boundary. The condition Y(T) ⊂ L²₀(Ω) implies that the discrete pressures have vanishing mean value, i.e. ∫_Ω p_T dx = 0 for all p_T ∈ Y(T). The condition ∫_Ω q_T div u_T dx = 0 for all q_T ∈ Y(T) in general does not imply div u_T = 0. Obviously, any particular mixed finite element discretization of the Stokes equations is determined by specifying the partition T and the spaces X(T) and Y(T).
II.2.3. A first attempt. We consider the unit square Ω = (0, 1)² and the Courant triangulation of Example II.1.1 (p. 30). We choose

X(T) = S^{1,0}_0(T)²,  Y(T) = S^{0,−1}(T) ∩ L²₀(Ω),

i.e. a continuous, piecewise linear velocity approximation and a piecewise constant pressure approximation on a triangular grid. Assume that u_T, p_T is a solution of the discrete problem. Then div u_T is piecewise constant. The condition ∫_Ω q_T div u_T dx = 0 for all q_T ∈ Y(T) therefore implies div u_T = 0. Hence we conclude from Example II.1.1 that u_T = 0. Thus this mixed finite element discretization has the same defects as the one of Example II.1.1.

II.2.4. A necessary condition for a well-posed mixed discretization. We consider an arbitrary mixed finite element discretization of the Stokes equations with spaces X(T) for the velocity and Y(T) for the pressure. A minimal requirement for such a discretization obviously is its well-posedness, i.e. that it admits a unique solution. Hence the stiffness matrix must be invertible. Denote by n_u the dimension of X(T) and by n_p the one of Y(T) and choose arbitrary bases for these spaces. Then the stiffness matrix has the form

[ A   B ]
[ Bᵀ  0 ]

with a symmetric positive definite n_u × n_u matrix A and a rectangular n_u × n_p matrix B. Since A is symmetric positive definite, the stiffness matrix is invertible if and only if the matrix B has rank n_p. Since the rank of a rectangular k × m matrix is at most min{k, m}, this leads to the following necessary condition for a well-posed mixed discretization:

n_u ≥ n_p.
Let's have a look at the example of the previous section in the light of this condition. Denote by N² the number of the squares. An easy calculation then yields n_u = 2(N − 1)² and n_p = 2N² − 1. Hence we have n_p > n_u and the discretization cannot be well-posed.

II.2.5. A second attempt. As in §II.2.3 (p. 34) we consider the unit square and divide it into N² squares of equal size, with N even. Contrary to §II.2.3 the squares are not further subdivided. We choose the spaces

X(T) = S^{1,0}_0(T)²,  Y(T) = S^{0,−1}(T) ∩ L²₀(Ω),

i.e. a continuous, piecewise bilinear velocity approximation and a piecewise constant pressure approximation on a quadrilateral grid. This discretization is often referred to as the Q1/Q0 element. A simple calculation yields the dimensions n_u = 2(N − 1)² and n_p = N² − 1. Hence the condition n_u ≥ n_p of the previous section is satisfied provided N ≥ 4.
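The two dimension counts can be tabulated for a few mesh sizes; a small sketch with the formulas exactly as derived in the text:

```python
def dims_p1_p0(N):
    """Courant triangulation (2*N^2 triangles): continuous P1 velocities,
    piecewise constant pressures with zero mean."""
    return 2 * (N - 1) ** 2, 2 * N ** 2 - 1

def dims_q1_q0(N):
    """Q1/Q0 element on N^2 squares."""
    return 2 * (N - 1) ** 2, N ** 2 - 1

for N in (4, 8, 16):
    nu_t, np_t = dims_p1_p0(N)
    nu_q, np_q = dims_q1_q0(N)
    print(N, (nu_t, np_t, nu_t >= np_t), (nu_q, np_q, nu_q >= np_q))
# P1/P0 violates n_u >= n_p for every N, while Q1/Q0 satisfies it for N >= 4 --
# and is nevertheless unstable, as the checkerboard mode shows.
```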
Denote by K_{i,j}, 0 ≤ i, j ≤ N − 1, the square which has the lower left corner (ih, jh), where h = 1/N. We now look at the particular pressure p̂_T ∈ Y(T) which has the constant value (−1)^{i+j} on K_{i,j} (cf. Figure II.2.1). It is frequently called the checkerboard mode.

[Figure II.2.1. Checkerboard mode]

Consider an arbitrary velocity v_T ∈ X(T). Using the integration by parts formulae of §I.2.3 (p. 19) and evaluating line integrals with the trapezoidal rule, we obtain

∫_{K_{ij}} p̂_T div v_T dx = (−1)^{i+j} ∫_{K_{ij}} div v_T dx = (−1)^{i+j} ∫_{∂K_{ij}} v_T · n_{K_{ij}} dS
= (−1)^{i+j} (h/2) { (v_T · e₁)((i+1)h, jh) + (v_T · e₁)((i+1)h, (j+1)h) − (v_T · e₁)(ih, jh) − (v_T · e₁)(ih, (j+1)h) + (v_T · e₂)(ih, (j+1)h) + (v_T · e₂)((i+1)h, (j+1)h) − (v_T · e₂)(ih, jh) − (v_T · e₂)((i+1)h, jh) }.

Due to the homogeneous boundary condition, summation with respect to all indices i, j yields

∫_Ω p̂_T div v_T dx = 0.

Since v_T was arbitrary, the solution of the discrete problem cannot be unique: One may add p̂_T to any pressure solution and obtains a new one. This phenomenon is known as the checkerboard instability. It shows that the condition of the previous section is only a necessary one and does not imply the well-posedness of the discrete problem.

II.2.6. The inf-sup condition. In 1974 Franco Brezzi published the following necessary and sufficient condition for the well-posedness of a mixed finite element discretization of the Stokes equations:

inf_{p_T ∈ Y(T)\{0}} sup_{u_T ∈ X(T)\{0}} [ ∫_Ω p_T div u_T dx ] / ( ‖u_T‖₁ ‖p_T‖₀ ) ≥ β > 0.

A pair X(T), Y(T) of finite element spaces satisfying this condition is called stable. The same notion refers to the corresponding discretization.

Remark II.2.2. In practice one usually not only considers a single partition T, but complete families of partitions which are often obtained by successive local or global refinements. Usually they are labelled T_h, where the index h indicates that the mesh-size gets smaller and smaller.
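The cancellation in the sum over i, j can also be verified numerically; the mesh size and the random nodal values below are arbitrary:

```python
import numpy as np

rng = np.random.default_rng(0)
N = 8
h = 1.0 / N
# nodal values of an arbitrary Q1 velocity field; v[i, j, c] is the c-th
# component at the node (i*h, j*h); the no-slip condition zeroes the boundary
v = rng.standard_normal((N + 1, N + 1, 2))
v[0, :, :] = v[N, :, :] = v[:, 0, :] = v[:, N, :] = 0.0

total = 0.0
for i in range(N):
    for j in range(N):
        # integral of div v_T over K_ij = boundary integral of v_T . n,
        # each edge evaluated exactly by the trapezoidal rule
        div_int = (h / 2) * (
            v[i + 1, j, 0] + v[i + 1, j + 1, 0] - v[i, j, 0] - v[i, j + 1, 0]
            + v[i, j + 1, 1] + v[i + 1, j + 1, 1] - v[i, j, 1] - v[i + 1, j, 1]
        )
        total += (-1) ** (i + j) * div_int  # weight with the checkerboard mode

print(abs(total))  # vanishes up to rounding: the checkerboard mode is spurious
```

Each interior nodal value enters the four adjacent squares with alternating signs, so the contributions cancel pairwise.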
In this situation the above condition of Franco Brezzi must be satisfied uniformly, i.e. the number β must be independent of the mesh-size.

Remark II.2.3. The above condition is known under various names. A prosaic one is inf-sup condition. Another one, mainly used in western countries, is Babuška-Brezzi condition. A third one, which is popular in eastern Europe and Russia, is Ladyzhenskaya-Babuška-Brezzi condition, in short LBB-condition. The additional names honour older results of Olga Ladyzhenskaya and Ivo Babuška which, in a certain sense, lay the ground for the result of Franco Brezzi.

In the following sections we give a catalogue of various stable elements. It incorporates the most popular elements, but it is not complete. We distinguish between discontinuous and continuous pressure approximations. The first variant sometimes gives a better approximation of the incompressibility condition; the second variant often leads to smaller discrete systems while retaining the accuracy of a comparable discontinuous pressure approximation.

II.2.7. A stable low-order element with discontinuous pressure approximation. This discretization departs from the unstable Q1/Q0 element. It is based on the observation that the instability of the Q1/Q0 element is due to the fact that the velocity space cannot balance pressure jumps across element boundaries. As a remedy the velocity space is enriched by edge respectively face bubble functions (cf. §I.2.12 (p. 27)) which give control on the pressure jumps. The pair

X(T) = S^{1,0}_0(T)ⁿ ⊕ span{ψ_E n_E : E ∈ E_T},
Y(T) = S^{0,−1}(T) ∩ L²₀(Ω)

is stable. Since the bubble functions ψ_E are contained in S^{n,0}(T), we obtain as a corollary: The pair

X(T) = S^{n,0}_0(T)ⁿ,
Y(T) = S^{0,−1}(T) ∩ L²₀(Ω)

is stable.

II.2.8. Stable higher-order elements with discontinuous pressure approximation. For the following result we assume that
T exclusively consists of triangles or tetrahedra. Moreover, for every positive integer ℓ we denote by

P_ℓ = span{x₁^{α₁} · … · x_n^{α_n} : α₁ + … + α_n = ℓ}

the space of homogeneous polynomials of degree ℓ. With these restrictions and notations the following result can be proven:

For every m ≥ n the pair

X(T) = [S^{m,0}_0(T) ⊕ span{ψ_K φ : K ∈ T, φ ∈ P_{m−2}}]ⁿ,
Y(T) = S^{m−1,−1}(T) ∩ L²₀(Ω)

is stable.

Since the element bubble functions ψ_K are polynomials of degree n + 1, we obtain as a corollary: For every m ≥ n the pair

X(T) = S^{n+m−1,0}_0(T)ⁿ,
Y(T) = S^{m−1,−1}(T) ∩ L²₀(Ω)

is stable.

II.2.9. Stable low-order elements with continuous pressure approximation. Given a partition T, we denote by T_{1/2} the refined partition which is obtained by connecting in every element the midpoints of its edges. With this notation the following results were established in the early 1980s:

The mini element

X(T) = [S^{1,0}_0(T) ⊕ span{ψ_K : K ∈ T}]ⁿ,
Y(T) = S^{1,0}(T) ∩ L²₀(Ω)

is stable.

The Taylor-Hood element

X(T) = S^{2,0}_0(T)ⁿ,
Y(T) = S^{1,0}(T) ∩ L²₀(Ω)

is stable.

The modified Taylor-Hood element

X(T) = S^{1,0}_0(T_{1/2})ⁿ,
Y(T) = S^{1,0}(T) ∩ L²₀(Ω)
is stable.

II.2.10. Stable higher-order elements with continuous pressure approximation. The stability of higher-order elements with continuous pressure approximation was established only in the early 1990s:

For every k ≥ 3 the higher-order Taylor-Hood element

X(T) = S^{k,0}_0(T)ⁿ,
Y(T) = S^{k−1,0}(T) ∩ L²₀(Ω)

is stable.

II.2.11. A priori error estimates. We consider a partition T with mesh-size h and a stable pair X(T), Y(T) of corresponding finite element spaces for the discretization of the Stokes equations. We denote by k_u and k_p the maximal polynomial degrees k and m such that S^{k,0}_0(T)ⁿ ⊂ X(T) and either S^{m,−1}(T) ∩ L²₀(Ω) ⊂ Y(T), when using a discontinuous pressure approximation, or S^{m,0}(T) ∩ L²₀(Ω) ⊂ Y(T), when using a continuous pressure approximation. Set k = min{k_u − 1, k_p}. Further we denote by u, p the unique solution of the saddle-point formulation of the Stokes equations (cf. §II.2.1 (p. 32)) and by u_T, p_T the unique solution of the discrete problem under consideration. With these notations the following a priori error estimates can be proven:

Assume that u ∈ H^{k+2}(Ω)ⁿ ∩ H¹₀(Ω)ⁿ and p ∈ H^{k+1}(Ω) ∩ L²₀(Ω). Then

‖u − u_T‖₁ + ‖p − p_T‖₀ ≤ c₁ h^{k+1} ‖f‖_k.

If in addition Ω is convex, then

‖u − u_T‖₀ ≤ c₂ h^{k+2} ‖f‖_k.

The constants c₁ and c₂ only depend on the domain Ω.

Table II.2.1 collects the numbers k_u, k_p, and k for the examples of the previous sections.
Table II.2.1. Parameters of various stable elements

pair                              | k_u | k_p | k
§II.2.7                           | 1   | 0   | 0
§II.2.8                           | m   | m−1 | m−1
mini element                      | 1   | 1   | 0
Taylor-Hood element               | 2   | 1   | 1
modified Taylor-Hood element      | 1   | 1   | 0
higher-order Taylor-Hood element  | k   | k−1 | k−1

Remark II.2.5. The above regularity assumptions are not realistic. For a convex polygonal domain, one in general only has u ∈ H²(Ω)ⁿ ∩ H¹₀(Ω)ⁿ and p ∈ H¹(Ω) ∩ L²₀(Ω). Hence, independently of the actual discretization, one in general only gets an O(h) error estimate. If Ω is not convex, but has re-entrant corners, the situation is even worse: One in general only gets an O(h^α) error estimate with an exponent ½ ≤ α < 1 which depends on the largest interior angle at a boundary vertex of Ω. This poor convergence behaviour can only be remedied by an adaptive grid refinement based on an a posteriori error control.

Remark II.2.6. The differentiation index of the pressure always is one less than the differentiation index of the velocity. Therefore, discretizations with k_p = k_u − 1 are optimal in the sense that they do not waste degrees of freedom by choosing too large a velocity space or too small a pressure space.

II.3. Petrov-Galerkin stabilization

II.3.1. Motivation. The results of §II.2 show that one can stabilize a given pair of finite element spaces by either reducing the number of degrees of freedom in the pressure space or by increasing the number of degrees of freedom in the velocity space. This result is reassuring but sometimes not practical, since it either reduces the accuracy of the pressure or increases the number of unknowns and thus the computational work. Sometimes one would prefer to stick to a given pair of spaces and to have at hand a different stabilization process which neither changes the number of unknowns nor the accuracy of the finite element spaces.
A particular example for this situation is the popular equal-order interpolation where X(T) = S^{m,0}_0(T)ⁿ and Y(T) = S^{m,0}(T) ∩ L²₀(Ω). In this section we will devise a stabilization process with the desired properties.

II.3.2. The mini element revisited. For its motivation we will have another look at the mini element of §II.2.9 (p. 38). We consider a triangulation T of a two-dimensional domain Ω and set for abbreviation

B(T) = span{ψ_K : K ∈ T}.
The discretization of the Stokes equations with the mini element of §II.2.9 (p. 38) then takes the form: Find u_T ∈ S^{1,0}_0(T)² ⊕ B(T)² and p_T ∈ S^{1,0}(T) ∩ L²₀(Ω) such that

∫_Ω ∇u_T : ∇v_T dx − ∫_Ω p_T div v_T dx = ∫_Ω f · v_T dx for all v_T ∈ S^{1,0}_0(T)² ⊕ B(T)²,
∫_Ω q_T div u_T dx = 0 for all q_T ∈ S^{1,0}(T) ∩ L²₀(Ω).

We split the velocity u_T into a "linear part" u_{T,L} ∈ S^{1,0}_0(T)² and a "bubble part" u_{T,B} ∈ B(T)²:

u_T = u_{T,L} + u_{T,B}.

For simplicity we assume that f ∈ L²(Ω)². Since u_{T,B} vanishes on the element boundaries, elementwise integration by parts yields for every v_T ∈ S^{1,0}_0(T)²

∫_Ω ∇u_{T,B} : ∇v_T dx = −Σ_{K∈T} ∫_K u_{T,B} · Δv_T dx = 0,

since Δv_T vanishes on every element. The bubble part of the velocity has the representation

u_{T,B} = Σ_{K∈T} α_K ψ_K

with coefficients α_K ∈ R². Inserting the test function v_T = e_i ψ_K with i ∈ {1, 2} and K ∈ T into the momentum equation, we obtain

∫_K f · e_i ψ_K dx = ∫_Ω f · (ψ_K e_i) dx = ∫_Ω ∇u_T : ∇(ψ_K e_i) dx − ∫_Ω p_T div(ψ_K e_i) dx.
Since u_{T,L} is piecewise linear and ψ_K vanishes on ∂K, we have ∫_Ω ∇u_{T,L} : ∇(ψ_K e_i) dx = 0, while

∫_Ω ∇u_{T,B} : ∇(ψ_K e_i) dx = α_{K,i} ∫_K |∇ψ_K|² dx,
−∫_Ω p_T div(ψ_K e_i) dx = ∫_K (∂p_T/∂x_i) ψ_K dx.

This yields

α_{K,i} ∫_K |∇ψ_K|² dx + ∫_K (∂p_T/∂x_i) ψ_K dx = ∫_K f · e_i ψ_K dx,  i = 1, 2,

and thus

α_K = γ̃_K ∫_K [f − ∇p_T] ψ_K dx for all K ∈ T, where γ̃_K = ( ∫_K |∇ψ_K|² dx )^{−1}.

Next we insert the representation of u_{T,B} into the continuity equation. This yields for all q_T ∈ S^{1,0}(T) ∩ L²₀(Ω)

0 = ∫_Ω q_T div u_T dx
= ∫_Ω q_T div u_{T,L} dx + Σ_{K∈T} ∫_K q_T div(α_K ψ_K) dx
= ∫_Ω q_T div u_{T,L} dx − Σ_{K∈T} ∫_K ψ_K α_K · ∇q_T dx
= ∫_Ω q_T div u_{T,L} dx − Σ_{K∈T} γ̃_K ( ∫_K ψ_K ∇q_T dx ) · ( ∫_K ψ_K [f − ∇p_T] dx ).

This proves that u_{T,L} ∈ S^{1,0}_0(T)² and p_T ∈ S^{1,0}(T) ∩ L²₀(Ω) solve the modified problem

∫_Ω ∇u_{T,L} : ∇v_T dx − ∫_Ω p_T div v_T dx = ∫_Ω f · v_T dx for all v_T ∈ S^{1,0}_0(T)²,
∫_Ω q_T div u_{T,L} dx + c_T(p_T, q_T) = χ_T(q_T) for all q_T ∈ S^{1,0}(T) ∩ L²₀(Ω),

where

c_T(p_T, q_T) = Σ_{K∈T} γ_K ∫_K ∇p_T · ∇q_T dx,
χ_T(q_T) = Σ_{K∈T} γ̃_K ( ∫_K ψ_K ∇q_T dx ) · ( ∫_K ψ_K f dx ),
γ_K = γ̃_K ( ∫_K ψ_K dx )² |K|^{−1}.

Note that γ_K is proportional to h_K².
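On the reference triangle the constants γ̃_K and γ_K can be evaluated in closed form with the integration formula ∫_T x^p y^q dx = p! q! / (p + q + 2)!. The following exact computation is only a sketch based on the formulas above:

```python
from fractions import Fraction as F
from math import factorial

def tri_int(p, q):
    """Exact value of the integral of x^p * y^q over the reference triangle
    {x >= 0, y >= 0, x + y <= 1}."""
    return F(factorial(p) * factorial(q), factorial(p + q + 2))

# psi_K = 27*x*y*(1 - x - y) on the reference triangle
int_psi = 27 * (tri_int(1, 1) - tri_int(2, 1) - tri_int(1, 2))

# d(psi_K)/dx = 27*y*(1 - 2x - y); expand its square monomial by monomial;
# by the x <-> y symmetry the y-derivative contributes the same amount
int_dx_sq = 729 * (tri_int(0, 2) - 4 * tri_int(1, 2) - 2 * tri_int(0, 3)
                   + 4 * tri_int(2, 2) + 4 * tri_int(1, 3) + tri_int(0, 4))
int_grad_sq = 2 * int_dx_sq

gamma_tilde = 1 / int_grad_sq                  # (int_K |grad psi_K|^2 dx)^(-1)
gamma = gamma_tilde * int_psi ** 2 / F(1, 2)   # gamma_K with |K| = 1/2
print(int_psi, int_grad_sq, gamma_tilde, gamma)  # 9/40 81/10 10/81 1/80
```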
The new problem can be interpreted as a P1/P1 discretization of the differential equation

−Δu + grad p = f in Ω,
div u − αΔp = −α div f in Ω,
u = 0 on Γ,

with a suitable penalty parameter α. Taking into account that Δu_T vanishes elementwise, the discrete problem does not change if we also add the term α div Δu to the left-hand side of the second equation. This shows that in total we may add the divergence of the momentum equation as a penalty. Since the penalty vanishes for the exact solution of the Stokes equations, this approach is also called a consistent penalty.

II.3.3. General form of Petrov-Galerkin stabilizations. Given a partition T of Ω, we choose two corresponding finite element spaces X(T) ⊂ H¹₀(Ω)ⁿ and Y(T) ⊂ L²₀(Ω). For every element K ∈ T and every edge respectively face E ∈ E_T we further choose non-negative parameters δ_K and δ_E. With these notations we consider the following discrete problem:

Find u_T ∈ X(T) and p_T ∈ Y(T) such that

∫_Ω ∇u_T : ∇v_T dx − ∫_Ω p_T div v_T dx = ∫_Ω f · v_T dx,

∫_Ω q_T div u_T dx + Σ_{K∈T} δ_K h_K² ∫_K [−Δu_T + ∇p_T] · ∇q_T dx + Σ_{E∈E_T} δ_E h_E ∫_E [p_T]_E [q_T]_E dS = Σ_{K∈T} δ_K h_K² ∫_K f · ∇q_T dx

for all v_T ∈ X(T) and all q_T ∈ Y(T).

Remark II.3.1. Recall that [·]_E denotes the jump across E. The terms involving δ_E are only relevant for discontinuous pressure approximations; when using a continuous pressure approximation, they vanish. The terms involving δ_K correspond to the equation div[−Δu + ∇p] = div f.
II.3.4. Choice of spaces. The a priori error estimates are optimal when the polynomial degree of the pressure approximation is one less than the polynomial degree of the velocity approximation. Hence a standard choice is

X(T) = S^{k,0}_0(T)ⁿ,
Y(T) = S^{k−1,0}(T) ∩ L²₀(Ω) for a continuous pressure approximation,
Y(T) = S^{k−1,−1}(T) ∩ L²₀(Ω) for a discontinuous pressure approximation,

where k ≥ 1 for discontinuous pressure approximations and k ≥ 2 for continuous pressure approximations.

II.3.5. Choice of stabilization parameters. With the notations of the previous section we set

δ_max = max{max_{K∈T} δ_K, max_{E∈E_T} δ_E},  δ_min = min{min_{K∈T} δ_K, min_{E∈E_T} δ_E}

if the pressures are discontinuous, and

δ_max = max_{K∈T} δ_K,  δ_min = min_{K∈T} δ_K

if the pressures are continuous. A good choice of the stabilization parameters then is determined by the condition

δ_max ≈ δ_min.

These choices yield O(h^k) error estimates provided the solution of the Stokes equations is sufficiently smooth.

II.3.6. Structure of the discrete problem. The stiffness matrix of a Petrov-Galerkin scheme has the form

[ A    B ]
[ −Bᵀ  C ]

with symmetric, positive definite, square matrices A and C and a rectangular matrix B. The entries of C are by a factor of h² smaller than those of A. Recall that C = 0 for the discretizations of §II.2.

II.4. Nonconforming methods

II.4.1. Motivation. Up to now we have always imposed the condition X(T) ⊂ H¹₀(Ω)ⁿ, i.e. the discrete velocities had to be continuous. Now we will relax this condition. We hope that we will be
rewarded by fewer difficulties with respect to the incompressibility constraint. One can prove that, nevertheless, the velocities must enjoy some minimal continuity: On each edge the mean value of the velocity jump across this edge must vanish. Otherwise the consistency error will be at least of order O(1).

II.4.2. The Crouzeix-Raviart element. We consider a triangulation T of a two-dimensional domain Ω and set

X(T) = {v_T : Ω → R² : v_T|_K ∈ R₁(K)² for all K ∈ T, v_T is continuous at the midpoints of interior edges, and v_T vanishes at the midpoints of boundary edges},
Y(T) = S^{0,−1}(T) ∩ L²₀(Ω).

This pair of spaces is called the Crouzeix-Raviart element. Its degrees of freedom are the velocity vectors at the midpoints of interior edges and the pressures at the barycentres of the elements. The corresponding discrete problem is:

Find u_T ∈ X(T) and p_T ∈ Y(T) such that

Σ_{K∈T} ∫_K ∇u_T : ∇v_T dx − Σ_{K∈T} ∫_K p_T div v_T dx = ∫_Ω f · v_T dx for all v_T ∈ X(T),
Σ_{K∈T} ∫_K q_T div u_T dx = 0 for all q_T ∈ Y(T).

The following results can be proven: The Crouzeix-Raviart discretization admits a unique solution u_T, p_T, and the continuity equation div u_T = 0 is satisfied elementwise.
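On a single triangle the Crouzeix-Raviart shape functions can be written via the barycentric coordinates as φ_i = 1 − 2λ_i, where the index i refers to the edge opposite the vertex a_i; this representation is a standard identity, not spelled out in the text above. A small check:

```python
import numpy as np

def bary(a, x):
    """Barycentric coordinates of x in the triangle a = [a_0, a_1, a_2]."""
    M = np.column_stack((a[1] - a[0], a[2] - a[0]))
    l1, l2 = np.linalg.solve(M, x - a[0])
    return np.array([1.0 - l1 - l2, l1, l2])

def cr_shape(a, i, x):
    """Crouzeix-Raviart shape function attached to the edge opposite vertex a_i."""
    return 1.0 - 2.0 * bary(a, x)[i]

a = [np.array([0.0, 0.0]), np.array([1.0, 0.0]), np.array([0.0, 1.0])]
mids = [(a[1] + a[2]) / 2, (a[0] + a[2]) / 2, (a[0] + a[1]) / 2]  # m_i opposite a_i

for i in range(3):
    print([round(cr_shape(a, i, m), 12) for m in mids])  # identity pattern
```

Each function equals 1 at the midpoint of its own edge and 0 at the other two midpoints, which are exactly the degrees of freedom of the element.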
If Ω is convex, the following error estimates hold:

( Σ_{K∈T} ‖u − u_T‖²_{H¹(K)} )^{1/2} + ‖p − p_T‖₀ = O(h),
‖u − u_T‖₀ = O(h²).

However, the Crouzeix-Raviart discretization has several drawbacks too:
• Its accuracy deteriorates drastically in the presence of re-entrant corners.
• It has no higher-order equivalent.
• It has no three-dimensional equivalent.

One of the main attractions of the Crouzeix-Raviart element is the fact that it allows the construction of a local solenoidal basis for the velocity space and that in this way the computation of the velocity and pressure can be decoupled.

II.4.3. Construction of a local solenoidal basis. For the construction of the solenoidal basis we denote by
• NT the number of triangles,
• NE₀ the number of interior edges,
• NV₀ the number of interior vertices,
• V(T) = {u_T ∈ X(T) : div u_T = 0} the space of solenoidal velocities.

The quantities NT, NE₀, and NV₀ are connected via Euler's formula

NT − NE₀ + NV₀ = 1.

Since X(T) and Y(T) satisfy the inf-sup condition, an elementary calculation using this formula yields

dim V(T) = dim X(T) − dim Y(T) = 2·NE₀ − (NT − 1) = NE₀ + NV₀.

With every edge E ∈ E_T we associate a unit tangential vector t_E and a piecewise linear function φ_E which equals 1 at the midpoint of E and which vanishes at all other midpoints of edges. With this notation we set

w_E = φ_E t_E for all E ∈ E_T.

These functions obviously are linearly independent. For every triangle K and every edge E we have

∫_K div w_E dx = ∫_{∂K} n_K · w_E dS = 0.

Since div w_E is piecewise constant, this implies that the functions w_E are contained in V(T).
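Euler's formula and the resulting dimension count can be checked for the Courant triangulation of the unit square; the closed-form counts below were derived by hand for this particular structured mesh:

```python
def counts(N):
    """Triangle / interior-edge / interior-vertex counts for the Courant
    triangulation of the unit square with N^2 squares, each cut in two."""
    NT = 2 * N * N
    NE0 = 3 * N * N - 2 * N   # all edges (3N^2 + 2N) minus the 4N boundary edges
    NV0 = (N - 1) ** 2
    return NT, NE0, NV0

for N in (1, 2, 5, 10):
    NT, NE0, NV0 = counts(N)
    assert NT - NE0 + NV0 == 1        # Euler's formula
    dim_X = 2 * NE0                   # two velocity components per interior edge
    dim_Y = NT - 1                    # piecewise constants with zero mean
    print(N, NT, NE0, NV0, dim_X - dim_Y == NE0 + NV0)
```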
With every vertex x ∈ NT we associate the set Ex of all edges which have x as an endpoint. Starting with an arbitrary edge in Ex we enumerate the remaining edges in Ex consecutively in counter-clockwise orientation. For E ∈ Ex we denote by tE,x and nE,x two orthogonal unit vectors such that tE,x is tangential to E, points away from x, and satisfies det(tE,x , nE,x) > 0. With these notations we set (cf. Figure II.4.1)
(II.4.1) wx = Σ_{E∈Ex} (1/|E|) ϕE nE,x .
(Figure II.4.1 sketches the function wx.)
The functions wx obviously are linearly independent. Next consider a triangle K ∈ T and a vertex x ∈ NT of K, denote by E1, E2 the two edges of K sharing the vertex x, and denote by mEi the midpoint of Ei. Using the integration by parts formulae of §I.2.3 (p. 19) and evaluating line integrals with the help of the midpoint formula, we conclude that
∫_K div wx dx = ∫_{∂K} wx · nK dS = Σ_{i=1,2} |Ei| wx(mEi) · nK = Σ_{i=1,2} nEi,x · nK = 0.
This proves that wx ∈ V (T ). Since the wE are tangential to the edges and the wx are normal to the edges, both sets of functions together are linearly independent. Hence we have
V (T ) = span{wx , wE : x ∈ NT , E ∈ ET } .
Since X(T ) and Y (T ) satisfy the inf-sup condition, the discrete velocity field of the Crouzeix-Raviart discretization is given by the problem: Find uT ∈ V (T ) such that
Σ_{K∈T} ∫_K ∇uT : ∇vT dx = ∫_Ω f · vT dx for all vT ∈ V (T ).
This is a linear system of equations with NE0 + NV0 equations and unknowns and with a symmetric, positive definite stiffness matrix. The stiffness matrix, however, has a condition of O(h^{-4}) compared with a condition of O(h^{-2}) of the larger, symmetric, but indefinite stiffness matrix of the original mixed problem.
The above problem only yields the velocity. The pressure, however, can be computed by a simple post-processing process. To describe it, denote by mE the midpoint of any edge E ∈ ET. Using the integration by parts formulae of §I.2.3 (p. 19) and evaluating line integrals with the help of the midpoint formula, we conclude that
∫_Ω f · ϕE nE dx − Σ_{K∈T} ∫_K ∇uT : ∇(ϕE nE) dx = − Σ_{K∈T} ∫_K pT div(ϕE nE) dx = − ∫_E [ϕE pT]E dS = −|E| ϕE(mE) [pT]E = −|E| [pT]E .
Hence, we can compute the pressure jumps edge by edge by evaluating the left-hand side of this equation. Starting with a triangle K which has an edge on the boundary Γ, we set p̃T = 0 on this triangle and compute p̃T on the remaining triangles by passing through adjacent triangles and adding the pressure jump corresponding to the common edge. Finally, we compute the mean value P = (1/|Ω|) Σ_{K∈T} ∫_K p̃T dx and subtract P from p̃T. This yields the pressure pT of the Crouzeix-Raviart discretization, which has mean value zero.
II.5. Streamfunction formulation
II.5.1. Motivation. The methods of the previous sections all discretize the variational formulation of the Stokes equations of §II.1.2 (p. 29)
and yield simultaneously approximations for the velocity and the pressure. The discrete velocity fields in general are not solenoidal, i.e. the incompressibility constraint is satisfied only approximately. In §II.4 we obtained an exactly solenoidal approximation but had to pay for this by abandoning the conformity. In this section we will consider another variational formulation of the Stokes equations which leads to conforming solenoidal discretizations. As we will see, this advantage has to be paid for by other drawbacks. Throughout this section we assume that Ω is a two-dimensional, simply connected polygonal domain.
II.5.2. The curl operators. The subsequent analysis is based on two curl operators which correspond to the rot operator in three dimensions and which are defined as follows:
curl ϕ = (−∂ϕ/∂y , ∂ϕ/∂x) for ϕ ∈ H^1(Ω),
curl v = ∂v2/∂x − ∂v1/∂y for v ∈ H^1(Ω)^2.
They fulfil the following chain rules
curl(curl ϕ) = −∆ϕ for all ϕ ∈ H^2(Ω),
curl(curl v) = −∆v + ∇(div v) for all v ∈ H^2(Ω)^2,
and the following integration by parts formula
∫_Ω v · curl ϕ dx = ∫_Ω (curl v) ϕ dx + ∫_Γ ϕ v · t dS for all v ∈ H^1(Ω)^2, ϕ ∈ H^1(Ω).
Here t denotes a unit tangent vector to the boundary Γ. The following deep mathematical result is fundamental:
A vector field v : Ω → R^2 is solenoidal, i.e. div v = 0, if and only if there is a unique stream function ϕ : Ω → R such that v = curl ϕ in Ω and ϕ = 0 on Γ.
II.5.3. Streamfunction formulation of the Stokes equations. Let u, p be the solution of the Stokes equations with exterior
force f and homogeneous boundary conditions and denote by ψ the stream function corresponding to u. Since u · t = 0 on Γ, we conclude that in addition ∂ψ/∂n = t · curl ψ = 0 on Γ. Inserting the representation u = curl ψ in the momentum equation and applying the operator curl, we obtain
curl f = curl{−∆u + ∇p} = −∆(curl u) + curl(∇p) = −∆(curl(curl ψ)) = ∆^2 ψ,
since curl(∇p) = 0. This proves that the stream function ψ solves the biharmonic equation
∆^2 ψ = curl f in Ω, ψ = 0 on Γ, ∂ψ/∂n = 0 on Γ.
Conversely, one can prove: If ψ solves the above biharmonic equation, then there is a unique pressure p with mean value 0 such that u = curl ψ and p solve the Stokes equations. In this sense, the Stokes equations and the biharmonic equation are equivalent.
Remark II.5.1. The biharmonic equation is only capable of yielding the velocity field of the Stokes equations. Given a solution ψ of the biharmonic equation and the corresponding velocity u = curl ψ, the pressure is determined by the equation ∇p = f + ∆u. But there is no constructive way to solve this problem.
II.5.4. Variational formulation of the biharmonic equation. With the help of the Sobolev space
H^2_0(Ω) = {ϕ ∈ H^2(Ω) : ϕ = ∂ϕ/∂n = 0 on Γ}
the variational formulation of the biharmonic equation of the previous section is given by:
Find ψ ∈ H^2_0(Ω) such that ∫_Ω ∆ψ ∆ϕ dx = ∫_Ω (curl f) ϕ dx for all ϕ ∈ H^2_0(Ω).
The above variational problem is the Euler-Lagrange equation corresponding to the minimization problem
J(ψ) = (1/2) ∫_Ω (∆ψ)^2 dx − ∫_Ω (curl f) ψ dx → min in H^2_0(Ω).
Remark II.5.2. It is tempting to discretize the biharmonic equation by replacing in its variational formulation the space H^2_0(Ω) by a finite element space X(T ) ⊂ H^2_0(Ω). But this would require C^1 elements! The lowest polynomial degree for which C^1 elements exist is five!
II.5.5. A nonconforming discretization of the biharmonic equation. One possible remedy to the difficulties described in Remark II.5.2 is to drop the C^1 continuity of the finite element functions. This leads to nonconforming discretizations of the biharmonic equation. The most popular one is based on the so-called Morley element. It is a triangular element, i.e. the partition T exclusively consists of triangles. The corresponding finite element space is given by
M(T ) = {ϕ ∈ S^{2,-1}(T ) : ϕ is continuous at the vertices; nE · ∇ϕ is continuous at the midpoints of edges}.
The degrees of freedom are
• the values of ϕ at the vertices and
• the values of nE · ∇ϕ at the midpoints of edges.
The discrete problem is given by: Find ψT ∈ M(T ) such that
Σ_{K∈T} ∫_K ∆ψT ∆ϕT dx = Σ_{K∈T} ∫_K (curl f) ϕT dx for all ϕT ∈ M(T ).
One can prove that this discrete problem admits a unique solution. The stiffness matrix is symmetric, positive definite and has a condition of O(h^{-4}).
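For reference, the Morley discrete problem typesets as follows; the sums run over the elements because functions in M(T) are only piecewise H^2:

```latex
\text{Find } \psi_T \in M(T) \text{ such that}\quad
\sum_{K\in T}\int_K \Delta\psi_T\,\Delta\varphi_T\,dx
  = \sum_{K\in T}\int_K (\operatorname{curl} f)\,\varphi_T\,dx
\quad\text{for all } \varphi_T \in M(T).
```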
There is a close relation between this problem and the Crouzeix-Raviart discretization of §II.4.2 (p. 44): If ψT ∈ M(T ) is the solution of the Morley element discretization of the biharmonic equation, then uT = curl ψT is the solution of the Crouzeix-Raviart discretization of the Stokes equations.
II.5.6. Mixed finite element discretizations of the biharmonic equation. Another remedy to the difficulties described in Remark II.5.2 (p. 51) is to use a saddle-point formulation of the biharmonic equation and a corresponding mixed finite element discretization. This saddle-point formulation is obtained by introducing the vorticity ω = curl u. It is given by:
• Find ψ ∈ H^1_0(Ω), ω ∈ L^2(Ω) and λ ∈ H^1(Ω) such that
∫_Ω curl λ · curl ϕ dx = ∫_Ω f · curl ϕ dx for all ϕ ∈ H^1_0(Ω),
∫_Ω ω θ dx − ∫_Ω λ θ dx = 0 for all θ ∈ L^2(Ω),
∫_Ω curl ψ · curl µ dx − ∫_Ω ω µ dx = 0 for all µ ∈ H^1(Ω);
• and find p ∈ H^1(Ω) ∩ L^2_0(Ω) such that
∫_Ω ∇p · ∇q dx = ∫_Ω (f − curl λ) · ∇q dx for all q ∈ H^1(Ω) ∩ L^2_0(Ω).
Remark II.5.3. The two quantities ω and λ are both vorticities. The second equation means that they are equal in a weak sense. But this equality is not a pointwise one, since both quantities have different regularity properties. Compared with the saddle-point formulation of the Stokes equations of §II.2.1 (p. 32), the above problem imposes weaker regularity conditions on the velocity u = curl ψ and stronger regularity conditions on the pressure p. The computation of the pressure is decoupled from the computation of the other quantities.
For the mixed finite element discretization we depart from a triangulation T of Ω, choose an integer ℓ ≥ 1 and set k = max{1, ℓ − 1}. Then the discrete problem is given by:
Find ψT ∈ S^{ℓ,0}_0(T ), ωT ∈ S^{ℓ,-1}(T ) and λT ∈ S^{ℓ,0}(T ) such that
∫_Ω curl λT · curl ϕT dx = ∫_Ω f · curl ϕT dx for all ϕT ∈ S^{ℓ,0}_0(T ),
∫_Ω ωT θT dx − ∫_Ω λT θT dx = 0 for all θT ∈ S^{ℓ,-1}(T ),
∫_Ω curl ψT · curl µT dx − ∫_Ω ωT µT dx = 0 for all µT ∈ S^{ℓ,0}(T ),
and find pT ∈ S^{k,0}(T ) ∩ L^2_0(Ω) such that
∫_Ω ∇pT · ∇qT dx = ∫_Ω (f − curl λT) · ∇qT dx for all qT ∈ S^{k,0}(T ) ∩ L^2_0(Ω).
One can prove that this discrete problem is well-posed and that its solution satisfies an a priori error estimate of order O(h^{m-1}), where 1 ≤ m ≤ ℓ depends on the regularity of the solution of the biharmonic equation. In the lowest order case ℓ = 1 this estimate does not imply convergence. But, using a more sophisticated analysis, one can prove in this case an O(h^{1/2}) error estimate provided ψ ∈ H^3(Ω), u ∈ H^2(Ω)^2 and p ∈ H^1(Ω).
II.6. Solution of the discrete problems
II.6.1. General structure of the discrete problems. For the discretization of the Stokes equations we choose a partition T of the domain Ω and finite element spaces X(T ) and Y (T ) for the approximation of the velocity and the pressure, respectively. With these choices we either consider a mixed method as in §II.2 (p. 32) or a Petrov-Galerkin one as in §II.3 (p. 40). Denote by
• nu the dimension of X(T ) and by
• np the dimension of Y (T ),
respectively. Then the discrete problem has the form
(II.6.1)  [ A B ; B^T −δC ] [ u ; p ] = [ f ; δg ].
Remark II.6.1. For simplicity, we drop in this section the index T indicating the discretization, and we identify finite element functions with their coefficient vectors with respect to a nodal basis. Thus, u and p are now vectors of dimension nu and np, respectively. Here,
• A is a square, symmetric, positive definite nu × np matrix, more precisely nu × nu, with condition of O(h^{-2}),
• B is a rectangular nu × np matrix,
• C is a square, symmetric, positive definite np × np matrix with condition of O(1),
• f is a vector of dimension nu discretizing the exterior force, and
• g is a vector of dimension np which equals 0 for the mixed methods of §II.2 and which results from the stabilization terms on the right-hand sides of the methods of §II.3.
Moreover, δ = 0 for the mixed methods of §II.2, and δ > 0 with δ ≈ 1 for the Petrov-Galerkin methods of §II.3.
Remark II.6.2. The stiffness matrix [ A B ; B^T −δC ] of the discrete problem (II.6.1) is symmetric, but indefinite, i.e. it has positive and negative real eigenvalues. Correspondingly, well-established methods such as the conjugate gradient algorithm cannot be applied. The indefiniteness of the stiffness matrix results from the saddle-point structure of the Stokes equations. This cannot be remedied by any stabilization procedure. It is the main difficulty in solving the linear systems arising from the discretization of the Stokes equations.
II.6.2. The Uzawa algorithm. The Uzawa algorithm is probably the simplest algorithm for solving the discrete problem (II.6.1).
Algorithm II.6.3. (Uzawa algorithm)
(0) Given: an initial guess p0 for the pressure, a tolerance ε > 0 and a relaxation parameter ω > 0. Sought: an approximate solution of the discrete problem (II.6.1).
(1) Set i = 0.
(2) Apply a few Gauss-Seidel iterations to the linear system Au = f − Bpi and denote the result by ui+1. Compute pi+1 = pi + ω{B^T ui+1 − δg − δCpi}.
(3) If ‖Aui+1 + Bpi+1 − f‖ + ‖B^T ui+1 − δCpi+1 − δg‖ ≤ ε
return ui+1 and pi+1 as approximate solution; stop. Otherwise increase i by 1 and go to step 2.
Remark II.6.4. (1) The relaxation parameter ω usually is chosen in the interval (1, 2).
(2) ‖·‖ denotes any vector norm. A popular choice is the scaled Euclidean norm, i.e. ‖v‖ = ((1/nu) v · v)^{1/2} for the velocity part and ‖q‖ = ((1/np) q · q)^{1/2} for the pressure part.
(3) The problem Au = f − Bpi is a discrete version of two, if Ω ⊂ R^2, or three, if Ω ⊂ R^3, Poisson equations for the components of the velocity field.
(4) The Uzawa algorithm iterates on the pressure. Therefore it is sometimes called a pressure correction scheme.
The Uzawa algorithm is very simple, but extremely slow. Therefore it cannot be recommended for practical use. We have given it nevertheless, since it is the basis for the more efficient algorithm of §II.6.4 (p. 57).
II.6.3. The conjugate gradient algorithm revisited. The algorithm of the next section is based on the conjugate gradient algorithm (CG algorithm). For completeness we recapitulate this algorithm. Its convergence rate is given by (√κ − 1)/(√κ + 1), where κ is the condition of L and equals the ratio of the largest to the smallest eigenvalue of L.
Algorithm II.6.5. (Conjugate gradient algorithm)
(0) Given: a linear system of equations Lx = b with a symmetric, positive definite matrix L, an initial guess x0 for the solution, and a tolerance ε > 0. Sought: an approximate solution of the linear system.
(1) Compute r0 = b − Lx0, d0 = r0, γ0 = r0 · r0. Set i = 0.
(2) If γi < ε^2 return xi as approximate solution; stop. Otherwise go to step 3.
(3) Compute
si = L di, αi = γi / (di · si), xi+1 = xi + αi di, ri+1 = ri − αi si,
γi+1 = ri+1 · ri+1, βi = γi+1 / γi, di+1 = ri+1 + βi di.
Increase i by 1 and go to step 2.
The following Java methods implement the CG algorithm.
// CG algorithm
public void cg() {
    int iter = 0;
    double alpha, beta, gamma;
    res = multiply(a, x);
    res = add(b, res, 1.0, -1.0);            // res = b - L*x
    gamma = innerProduct(res, res)/dim;
    residuals[0] = Math.sqrt(gamma);
    copy(d, res);
    while ( iter < maxit-1 && residuals[iter] > tol ) {
        iter++;
        double[] s = multiply(a, d);         // s = L*d
        alpha = innerProduct(d, s)/dim;
        alpha = gamma/alpha;
        accumulate(x, d, alpha);             // x += alpha*d
        accumulate(res, s, -alpha);          // res -= alpha*s
        beta = gamma;
        gamma = innerProduct(res, res)/dim;
        residuals[iter] = Math.sqrt(gamma);
        beta = gamma/beta;
        d = add(res, d, 1.0, beta);          // d = res + beta*d
    }
} // end of cg

// copy vector v to vector u
public void copy(double[] u, double[] v) {
    for( int i = 0; i < dim; i++ )
        u[i] = v[i];
} // end of copy

// multiply matrix c with vector y
public double[] multiply(double[][] c, double[] y) {
    double[] result = new double[dim];
    for( int i = 0; i < dim; i++ )
        result[i] = innerProduct(c[i], y);
    return result;
} // end of multiply

// return s*u + t*v
public double[] add(double[] u, double[] v, double s, double t) {
    double[] w = new double[dim];
    for( int i = 0; i < dim; i++ )
        w[i] = s*u[i] + t*v[i];
    return w;
} // end of add
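The methods above rely on class fields (a, b, x, res, d, dim, maxit, tol, residuals) defined elsewhere in the notes. As a self-contained sketch of the same iteration, the following standalone program (names of our own choosing, not from the original listing) applies CG to a small symmetric positive definite system and checks that the final residual vanishes:

```java
// Minimal standalone CG solve of a 3x3 SPD system; illustrative only.
public class CgDemo {
    static double[] multiply(double[][] a, double[] x) {
        int n = x.length;
        double[] y = new double[n];
        for (int i = 0; i < n; i++)
            for (int j = 0; j < n; j++) y[i] += a[i][j] * x[j];
        return y;
    }
    static double dot(double[] u, double[] v) {
        double s = 0.0;
        for (int i = 0; i < u.length; i++) s += u[i] * v[i];
        return s;
    }
    public static void main(String[] args) {
        double[][] L = {{4, 1, 0}, {1, 3, 1}, {0, 1, 2}}; // symmetric positive definite
        double[] b = {1, 2, 0};
        double[] x = new double[3];             // initial guess x0 = 0
        double[] r = b.clone();                 // r0 = b - L*x0 = b
        double[] d = r.clone();
        double gamma = dot(r, r);
        for (int iter = 0; iter < 100 && gamma > 1e-24; iter++) {
            double[] s = multiply(L, d);        // s = L*d
            double alpha = gamma / dot(d, s);
            for (int i = 0; i < 3; i++) { x[i] += alpha * d[i]; r[i] -= alpha * s[i]; }
            double gammaNew = dot(r, r);
            double beta = gammaNew / gamma;
            for (int i = 0; i < 3; i++) d[i] = r[i] + beta * d[i];
            gamma = gammaNew;
        }
        double[] res = multiply(L, x);
        for (int i = 0; i < 3; i++) res[i] -= b[i];
        System.out.println(Math.sqrt(dot(res, res)) < 1e-10); // residual is tiny
    }
}
```

In exact arithmetic CG solves a 3 × 3 system in at most three iterations, which the residual check confirms.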
// inner product of two vectors
public double innerProduct(double[] u, double[] v) {
    double prod = 0;
    for( int i = 0; i < dim; i++ )
        prod += u[i]*v[i];
    return prod;
} // end of innerProduct

// add s times vector v to vector u
public void accumulate(double[] u, double[] v, double s) {
    for( int i = 0; i < dim; i++ )
        u[i] += s*v[i];
} // end of accumulate

II.6.4. An improved Uzawa algorithm. Since the matrix A is positive definite, we may solve the first equation in the system (II.6.1) for the unknown u,
u = A^{-1}(f − Bp),
and insert the result in the second equation:
B^T A^{-1}(f − Bp) − δCp = δg.
This gives a problem which only incorporates the pressure:
(II.6.2)  [B^T A^{-1} B + δC] p = B^T A^{-1} f − δg.
One can prove that the matrix B^T A^{-1} B + δC is symmetric, positive definite and has a condition of O(1), i.e. the condition does not increase when refining the mesh. Therefore one may apply the CG algorithm to problem (II.6.2). The convergence rate then is independent of the mesh-size and does not deteriorate when refining the mesh. This approach, however, requires the evaluation of A^{-1}, i.e. problems of the form Av = g must be solved in every iteration. These are two, if Ω ⊂ R^2, or three, if Ω ⊂ R^3, discrete Poisson equations for the components of the velocity v. The crucial idea now is to solve these auxiliary problems only approximately with the help of a standard multigrid algorithm for the Poisson equation. This idea results in the following algorithm.
Algorithm II.6.6. (Improved Uzawa algorithm)
(0) Given: an initial guess p0 for the pressure and a tolerance ε > 0. Sought: an approximate solution of the discrete problem (II.6.1).
(1) Apply a multigrid algorithm with starting value zero and tolerance ε to the linear system Av = f − Bp0 and denote the result by u0. Compute r0 = B^T u0 − δg − δCp0, d0 = r0, γ0 = r0 · r0.
Set i = 0.
(2) If γi < ε^2: compute p = p0 + pi, apply a multigrid algorithm with starting value zero and tolerance ε to the linear system Av = f − Bp, denote the result by u, and return u, p as approximate solution of problem (II.6.1); stop. Otherwise go to step 3.
(3) Apply a multigrid algorithm with starting value ui and tolerance ε to the linear system Av = Bdi and denote the result by ui+1. Compute
si = B^T ui+1 + δCdi, αi = γi / (di · si), pi+1 = pi + αi di, ri+1 = ri − αi si,
γi+1 = ri+1 · ri+1, βi = γi+1 / γi, di+1 = ri+1 + βi di.
Increase i by 1 and go to step 2.
Remark II.6.7. The improved Uzawa algorithm is a nested iteration: the outer iteration is a CG algorithm for problem (II.6.2), the inner iteration is a standard multigrid algorithm for discrete Poisson equations. For the inner loop usually 2 to 4 multigrid iterations are sufficient. The convergence rate of the improved Uzawa algorithm does not deteriorate when refining the mesh. It usually lies in the range of 0.5 to 0.8.
II.6.5. The multigrid algorithm. The multigrid algorithm is based on a sequence of meshes T0, ..., TR, which are obtained by successive local or global refinement, and associated discrete problems Lk xk = bk, k = 0, ..., R, corresponding to a partial differential equation. The finest mesh TR corresponds to the problem that we actually want to solve. In our applications, the differential equation is either the Stokes problem or the Poisson equation. In the first case Lk is the stiffness matrix of problem (II.6.1) corresponding to the partition Tk; the vector xk then incorporates the velocity and the pressure approximation. In the second case Lk is the upper left block A in problem (II.6.1) and xk only incorporates the discrete velocity.
The multigrid algorithm has three ingredients:
• a smoothing operator Mk, which should be easy to evaluate and which at the same time should give a reasonable approximation to Lk^{-1},
• a restriction operator Rk,k-1, which maps functions on a fine mesh Tk to the next coarser mesh Tk-1, and
• a prolongation operator Ik-1,k, which maps functions from a coarse mesh Tk-1 to the next finer mesh Tk.
For a concrete multigrid algorithm these ingredients must be specified. This will be done in the next sections. Here, we discuss the general form of the algorithm and its properties.
Algorithm II.6.8. (MG(k, µ, ν1, ν2, Lk, bk, xk): one iteration of the multigrid algorithm on mesh Tk)
(0) Given: the level number k of the actual mesh, parameters µ, ν1 and ν2, the stiffness matrix Lk of the actual discrete problem, the actual right-hand side bk, and an initial guess xk. Sought: improved approximate solution xk.
(1) If k = 0, compute x0 = L0^{-1} b0; stop. Otherwise go to step 2.
(2) (Pre-smoothing) Perform ν1 steps of the iterative procedure xk = xk + Mk(bk − Lk xk).
(3) (Coarse grid correction)
(a) Compute fk-1 = Rk,k-1(bk − Lk xk) and set xk-1 = 0.
(b) Perform µ iterations of MG(k−1, µ, ν1, ν2, Lk-1, fk-1, xk-1) and denote the result by xk-1.
(c) Compute xk = xk + Ik-1,k xk-1.
(4) (Post-smoothing) Perform ν2 steps of the iterative procedure xk = xk + Mk(bk − Lk xk).
The following Java method implements the multigrid method with µ = 1. Note that the recursion has been resolved.

// one MG cycle, nonrecursive version
public void mgcycle( int lv, SMatrix a ) {
    int level = lv;
    count[level] = 1;
    while( count[level] > 0 ) {
        while( level > 1 ) {
            smooth( a, 1 );               // presmoothing
            a.mgResidue( res, x, bb );    // compute residue
            restrict( a );                // restrict
            level--;
            count[level] = cycle;
        }
        smooth( a, 1 );                   // solve on coarsest grid
        count[level]--;
        boolean prolongate = true;
        while( level < lv && prolongate ) {
            level++;
            prolongateAndAdd( a );        // prolongate to finer grid
            smooth( a, 0 );               // postsmoothing
            count[level]--;
            if( count[level] > 0 )
                prolongate = false;
        }
    }
} // end of mgcycle
The variables and methods used above have the following meaning:
• lv number of the actual level, corresponds to k;
• a stiffness matrix on the actual level, corresponds to Lk;
• x actual approximate solution, corresponds to xk;
• bb actual right-hand side, corresponds to bk;
• res actual residual, corresponds to bk − Lk xk;
• smooth performs the smoothing;
• mgResidue computes the residual;
• restrict performs the restriction;
• prolongateAndAdd computes the prolongation and adds the result to the current approximation.
(Figure II.6.1: schematic presentation of a multigrid algorithm with V-cycle and three grids; the labels mean S smoothing, R restriction, P prolongation, E exact solution.)
Remark II.6.9. (1) The parameter µ determines the complexity of the algorithm. Popular choices are µ = 1, called V-cycle, and µ = 2, called W-cycle. Figure II.6.1 gives a schematic presentation of the multigrid algorithm for the case µ = 1 and R = 2 (three meshes).
.6. // // // // one GaussSeidel sweep (x and b are defined on all levels) dir = 0: forward and backward dir = 1: only forward dir = 1. (4) Under suitable conditions on the smoothing algorithm. The following Java method implements the GaussSeidel algorithm for smoothing. x[i] += y/m[ d[i] ]. should not be chosen too large. only backward public void mgSGS( double[] x. In practice one observes convergence rates of 0.e. j < r[ i+1 ]. II.3 – 0. which is determined by the matrix Mk . i < dim[lv]. (3) If µ ≤ 2. int dir ) { double y. if ( dir >= 0 ) { // forward sweep for ( int i = dim[ lv1 ]. for ( int j = r[i].1 – 0. SOLUTION OF THE DISCRETE PROBLEMS 61 (2) The number of smoothing steps per multigrid iteration. one can prove that the convergence rate of the multigrid algorithm is independent of the meshsize. j++ ) y = m[j]*x[ dim[lv1]+c[j] ]. j++ ) y = m[j]*x[ dim[lv1]+c[j] ]. the parameters ν1 and ν2 .) { y = b[i]. one can prove that the computational work of one multigrid iteration is proportional to the number of unknowns of the actual discrete problem. i++ ) { y = b[i]. it does not deteriorate when reﬁning the mesh. } } } // end of mgSGS The variables have the following meaning: • lv number of actual level.II.5 for positive deﬁnite problems such as the Poisson equation and of 0. Smoothing. A good choice for positive deﬁnite problems such as the Poisson equation is ν1 = ν2 = 1. double[] b. The symmetric GaussSeidel algorithm is the most popular smoothing algorithm for positive deﬁnite problems such as the Poisson equation. i.6. i >= dim[ lv1 ]. x[i] += y/m[ d[i] ]. This corresponds to the choice −1 T Mk = (Dk − Uk )Dk (Dk − Uk ). These conditions will be discussed in the next section.1.7 for indeﬁnite problems such as the Stokes equations.e. for ( int j = r[i].6. i. j < r[ i+1 ]. corresponds to k in the multigrid algorithm. where Dk and Uk denote the diagonal and the strictly upper diagonal part of Lk respectively. 
} } if ( dir <= 0 ) { // backward sweep for ( int i = dim[lv] . i. For indeﬁnite problems such as the Stokes equations a good choice is ν1 = ν2 = 2.
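To illustrate the smoothing property discussed above, the following self-contained sketch (a dense-matrix toy with matrix and threshold of our own choosing, not the sparse mgSGS listing) performs one symmetric Gauss-Seidel sweep, i.e. a forward followed by a backward sweep, on the 1D Poisson matrix and checks that an oscillatory error is damped strongly:

```java
// Dense sketch of one symmetric Gauss-Seidel sweep (forward then backward)
// on the 1D Poisson matrix tridiag(-1, 2, -1); shows the residual norm dropping.
public class SgsDemo {
    static double residualNorm(double[][] a, double[] x, double[] b) {
        int n = x.length;
        double s = 0.0;
        for (int i = 0; i < n; i++) {
            double r = b[i];
            for (int j = 0; j < n; j++) r -= a[i][j] * x[j];
            s += r * r;
        }
        return Math.sqrt(s);
    }
    static void sweep(double[][] a, double[] x, double[] b, boolean forward) {
        int n = x.length;
        for (int k = 0; k < n; k++) {
            int i = forward ? k : n - 1 - k;
            double r = b[i];
            for (int j = 0; j < n; j++) if (j != i) r -= a[i][j] * x[j];
            x[i] = r / a[i][i];   // solve the i-th equation exactly
        }
    }
    public static void main(String[] args) {
        int n = 8;
        double[][] a = new double[n][n];
        double[] b = new double[n];
        double[] x = new double[n];
        for (int i = 0; i < n; i++) {
            a[i][i] = 2.0;
            if (i > 0) a[i][i - 1] = -1.0;
            if (i < n - 1) a[i][i + 1] = -1.0;
            b[i] = 1.0;
            x[i] = (i % 2 == 0) ? 1.0 : -1.0;  // oscillatory initial error
        }
        double r0 = residualNorm(a, x, b);
        sweep(a, x, b, true);    // forward Gauss-Seidel
        sweep(a, x, b, false);   // backward Gauss-Seidel
        double r1 = residualNorm(a, x, b);
        System.out.println(r1 < 0.5 * r0);  // high-frequency error is damped strongly
    }
}
```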
The remaining variables have the following meaning:
• dim dimensions of the discrete problems on the various levels;
• x approximate solution on all levels, corresponds to the xk's;
• b right-hand side on all levels, corresponds to the bk's;
• m stiffness matrix on all levels, corresponds to the Lk's.
For indefinite problems such as the Stokes equations the most popular smoothing algorithm is the squared Jacobi iteration. This is the Jacobi iteration applied to the squared system Lk^T Lk xk = Lk^T bk and corresponds to the choice Mk = ω^{-2} Lk^T with a suitable damping parameter satisfying ω > 0 and ω = O(h^{-2}). The following Java method implements this algorithm:

// one squared Richardson iteration (x and b are defined on all levels)
// res is an auxiliary array to store the residual
public void mgSR( double[] x, double[] b, double[] res, double om ) {
    copy( res, b, dim[lv-1], dim[lv] );
    for( int i = 0; i < dim[lv] - dim[ lv-1 ]; i++ )
        for( int j = r[ dim[lv-1]+i ]; j < r[ dim[lv-1]+i+1 ]; j++ )
            res[i] -= m[j]*x[ dim[lv-1]+c[j] ];
    for( int i = 0; i < dim[lv] - dim[ lv-1 ]; i++ )
        for( int j = r[ dim[lv-1]+i ]; j < r[ dim[lv-1]+i+1 ]; j++ )
            x[ dim[lv-1]+c[j] ] += m[j]*res[i]/om;
} // end of mgSR

The variable om is the damping parameter ω.
Another class of popular smoothing algorithms for the Stokes equations is given by the so-called Vanka methods. The idea is to loop through patches of elements and to solve exactly the equations associated with the nodes inside the actual patch while retaining the current values of the variables associated with the nodes outside the actual patch. There are many possible choices for the patches. In practice, one chooses patches that consist of
• a single element, or
• the two elements that share a given edge or face, or
• the elements that share a given vertex.
One extreme case obviously consists in choosing exactly one node at a time. This yields the classical Gauss-Seidel method and is not applicable to indefinite problems, since it in general diverges for those problems. The other extreme case obviously consists in choosing all elements. This of course is not practicable since it would result in an exact solution of the complete discrete problem.
II.6.7. Prolongation. Since the partition Tk of level k always is a refinement of the partition Tk-1 of level k−1 (cf. §§II.7.6 (p. 74) and II.7.7 (p. 75)), the corresponding finite element spaces are nested, i.e. finite element functions corresponding to level k−1 are contained in the finite element space corresponding to level k.
Therefore, the values of a coarse-grid function corresponding to level k−1 at the nodal points corresponding to level k are obtained by evaluating the nodal basis functions corresponding to Tk-1 at the requested points. This defines the interpolation operator Ik-1,k. Every element in Tk-1 is subdivided into several smaller elements in Tk. The refinement introduces new vertices at the midpoints of some edges of the parent element and possibly, when using quadrilaterals, at the barycentre of the parent element. Figures II.6.2 and II.6.3 show various partitions of a triangle and of a square, respectively. The numbers outside an element indicate the enumeration of the element vertices and edges; e.g. edge 2 of the triangle has the vertices 0 and 1 as its endpoints. The numbers +0, +1 etc. inside the elements indicate the enumeration of the child elements. The remaining numbers inside the elements give the enumeration of the vertices of the child elements.
(Figure II.6.2: partitions of a triangle; expressions of the form i + 1 have to be taken modulo 3.)
Example II.6.10. Consider a piecewise constant approximation, i.e. S^{0,-1}(T ). The nodal points are the barycentres of the elements. The nodal value of a coarse-grid function at the barycentre of a child element in Tk then is its nodal value at the barycentre of the parent element in Tk-1.
Example II.6.11. Consider a piecewise linear approximation, i.e. S^{1,0}(T ). The nodal points are the vertices of the elements. The nodal value at the midpoint of an edge is the average of the nodal values at the endpoints of the edge; e.g. the value at vertex 1 of child +0 is the average of the values at vertices 0 and 1 of the parent element. Similarly, the nodal value at
the barycentre of the parent element is the average of the nodal values at the four element vertices.
(Figure II.6.3: partitions of a square; expressions of the form i + 1 have to be taken modulo 4.)
II.6.8. Restriction. The restriction is computed by expressing the nodal basis functions corresponding to the coarse partition Tk-1 in terms of the nodal basis functions corresponding to the fine partition Tk and inserting this expression in the variational formulation. This results in a lumping of the right-hand side vector which, in a certain sense, is the transpose of the interpolation.
Example II.6.12. Consider a piecewise constant approximation, i.e. S^{0,-1}(T ). The nodal shape function of a parent element is the sum
of the nodal shape functions of the child elements. Correspondingly, the components of the right-hand side vector corresponding to the child elements are all added and associated with the parent element.
Example II.6.13. Consider a piecewise linear approximation, i.e. S^{1,0}(T ). The nodal shape function corresponding to a vertex of a parent triangle takes the value 1 at this vertex, the value 1/2 at the midpoints of the two edges sharing the given vertex and the value 0 on the remaining edges. If we label the current vertex by a and the midpoints of the two edges emanating from a by m1 and m2, this results in the following formula for the restriction on a triangle:
Rk,k-1 ψ(a) = ψ(a) + (1/2){ψ(m1) + ψ(m2)}.
When considering a quadrilateral, we must take into account that the nodal shape functions take the value 1/4 at the barycentre b of the parent quadrilateral. Therefore the restriction on a quadrilateral is given by the formula
Rk,k-1 ψ(a) = ψ(a) + (1/2){ψ(m1) + ψ(m2)} + (1/4) ψ(b).
Remark II.6.14. An efficient implementation of the prolongation and restriction loops through all elements and performs the prolongation or restriction elementwise. This process is similar to the usual elementwise assembly of the stiffness matrix and the load vector.
II.6.9. Variants of the CG algorithm for indefinite problems. The CG algorithm can only be applied to symmetric positive definite systems of equations. For nonsymmetric or indefinite systems it in general breaks down. However, there are various variants of the CG algorithm which can be applied to these problems. A naive approach consists in applying the CG algorithm to the squared system Lk^T Lk xk = Lk^T bk, which is symmetric and positive definite. This approach cannot be recommended since squaring the system squares its condition number and thus at least doubles the required number of iterations. A more efficient algorithm is the stabilized biconjugate gradient algorithm, shortly BiCGstab.
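The statement that the restriction is, in a certain sense, the transpose of the interpolation can be checked on a toy example. The following sketch (a hypothetical 1D piecewise linear setting, not part of the original code) builds the prolongation matrix of a uniformly refined 1D mesh and verifies that applying its transpose reproduces the restriction formula ψ(a) + (1/2){ψ(m1) + ψ(m2)} at an interior coarse node:

```java
// 1D P1 toy example: coarse mesh {0, .5, 1}, fine mesh {0, .25, .5, .75, 1}.
// Prolongation P interpolates linearly; restriction R is its transpose.
public class TransferDemo {
    public static void main(String[] args) {
        double[][] p = new double[5][3];  // fine nodes x coarse nodes
        for (int c = 0; c < 3; c++) p[2 * c][c] = 1.0;  // coinciding nodes
        for (int c = 0; c < 2; c++) {                   // new fine midpoints
            p[2 * c + 1][c] = 0.5;
            p[2 * c + 1][c + 1] = 0.5;
        }
        // restriction of a fine-grid vector psi: R*psi = P^T * psi
        double[] psi = {1.0, 2.0, 3.0, 4.0, 5.0};
        double[] r = new double[3];
        for (int c = 0; c < 3; c++)
            for (int f = 0; f < 5; f++) r[c] += p[f][c] * psi[f];
        // interior coarse node (fine node 2): psi(a) + (psi(m1) + psi(m2))/2
        System.out.println(r[1] == 3.0 + 0.5 * (2.0 + 4.0));
    }
}
```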
The underlying idea roughly is to solve simultaneously the original problem L_k x_k = b_k and its adjoint problem L_k^T y_k = b_k.

Algorithm II.6.15. (Stabilized biconjugate gradient algorithm BiCGstab)
(0) Given: a linear system of equations Lx = b, an initial guess x0 for the solution, and a tolerance ε > 0. Sought: an approximate solution of the linear system.
(1) Compute r0 = b − Lx0
and set

r̄0 = r0,  v−1 = 0,  p−1 = 0,  α−1 = 1,  ρ−1 = 1,  ω−1 = 1.

Set i = 0.
(2) If ri · ri < ε² return xi as approximate solution; stop. Otherwise go to step 3.
(3) Compute

ρi = r̄0 · ri ,  βi−1 = (ρi αi−1)/(ρi−1 ωi−1).

If |βi−1| < ε there is a possible breakdown; stop. Otherwise compute

pi = ri + βi−1 {pi−1 − ωi−1 vi−1},  vi = L pi ,  αi = ρi/(r̄0 · vi).

If |αi| < ε there is a possible breakdown; stop. Otherwise compute

si = ri − αi vi ,  ti = L si ,  ωi = (ti · si)/(ti · ti),
xi+1 = xi + αi pi + ωi si ,  ri+1 = si − ωi ti .

Increase i by 1 and go to step 2.

The following Java method implements the BiCGstab algorithm. Note that it performs an automatic restart upon breakdown at most MAXRESTART times.
// preconditioned stabilized biconjugate gradient method
public void pBiCgStab( SMatrix a, double[] b ) {
   double alpha, beta, gamma, rho, omega;
   set(ab, 0.0, dim);
   a.mvmult(ab, x);                     // ab = L*x
   add(res, b, ab, 1.0, -1.0, dim);     // res = b - L*x
   set(s, 0.0, dim);
   set(rf, 0.0, dim);
   it = 0;
   copy(rf, res, dim);
   int restarts = 1;
   // restart in case of breakdown
   while ( it < maxit && norm(res) > tolerance && restarts < MAXRESTART ) {
      restarts++;
      copy(rf, res, dim);                     // shadow residual for the new cycle
      set(d, 0.0, dim);
      set(v, 0.0, dim);
      alpha = 1.0;
      rho = 1.0;
      omega = 1.0;
      while ( it < maxit && norm(res) > tolerance ) {
         gamma = rho*omega;
         if( Math.abs( gamma ) < EPS ) break; // breakdown, do restart
         rho = innerProduct(rf, res, dim);
         beta = rho*alpha/gamma;
         add(d, d, v, 1.0, -omega, dim);      // d = d - omega*v
         add(d, res, d, 1.0, beta, dim);      // d = res + beta*(d - omega*v)
         a.mvmult(v, d);                      // v = L*d
         gamma = innerProduct(rf, v, dim);
         if( Math.abs( gamma ) < EPS ) break; // breakdown, do restart
         alpha = rho/gamma;
         add(s, res, v, 1.0, -alpha, dim);    // s = res - alpha*v
         a.mvmult(ab, s);                     // ab = L*s
         omega = innerProduct(ab, s, dim)/innerProduct(ab, ab, dim);
         acc(x, d, s, alpha, omega, dim);     // x = x + alpha*d + omega*s
         add(res, s, ab, 1.0, -omega, dim);   // res = s - omega*ab
         it++;
      }
   }
}  // end of pBiCgStab

II.7. A posteriori error estimation and adaptive grid refinement

II.7.1. Motivation. Suppose we have computed the solution of a discretization of a partial differential equation such as the Stokes equations. What is the error of our computed solution? A priori error estimates do not help in answering this question. They only describe the asymptotic behaviour of the error: they tell us how fast the error will converge to zero when refining the underlying mesh, but for a given mesh and discretization they give no information on the actual size of the error. Another closely related problem is the question about the spatial distribution of the error. Where is it large, where is it small? Obviously we want to concentrate our resources in areas of a large error. In summary, we want to compute an approximate solution of the partial differential equation with a given tolerance and a minimal amount of work. This task is achieved by adaptive grid refinement based on a posteriori error estimation.

Throughout this section we denote by X(T) and Y(T) finite element spaces for the velocity and pressure, respectively, associated with
a given partition T of the domain Ω. With these spaces we associate either a mixed method as in §II.2 (p. 32) or a Petrov-Galerkin method as in §II.3.3 (p. 40). The solution of the corresponding discrete problem is denoted by u_T, p_T, whereas u, p denotes the solution of the Stokes equations.

II.7.2. General structure of the adaptive algorithm. The adaptive algorithm has a general structure which is independent of the particular differential equation.

Algorithm II.7.1. (General adaptive algorithm)
(0) Given: A partial differential equation and a tolerance ε. Sought: An approximate solution with an error less than ε.
(1) Construct an initial admissible partition T_0 and set up the associated discrete problem. Set i = 0.
(2) Solve the discrete problem corresponding to T_i.
(3) For each element K ∈ T_i compute an estimate η_K of the error on K and set η_i = { Σ_{K∈T_i} η_K² }^{1/2}.
(4) If η_i ≤ ε, return the current discrete solution as approximate solution; stop. Otherwise go to step 5.
(5) Determine a subset T̃_i of elements in T_i that must be refined. Determine T_{i+1} such that it is an admissible partition and such that all elements in T̃_i are refined. Set up the discrete problem corresponding to T_{i+1}. Increase i by 1 and go to step 2.

Remark II.7.2. The η_K's are usually called (a posteriori) error estimators or (a posteriori) error indicators. In order to make the adaptive algorithm operative we must obviously specify the following ingredients:
• an algorithm that computes the η_K's (a posteriori error estimation),
• a rule that selects the elements in T̃_i (marking strategy),
• a rule that refines the elements in T̃_i (regular refinement),
• an algorithm that constructs the partition T_{i+1} (additional refinement).
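The structure of Algorithm II.7.1 can be sketched in code on a one-dimensional toy problem. In the following Java sketch the "solve" step is omitted, the estimator η_K = h_K |f(mid_K)| is only a stand-in for a real error estimator, and a simple maximum-type marking rule is used; the model function f and all names are our own choices, not from these notes.

```java
import java.util.ArrayList;
import java.util.List;

// Toy adaptive loop on (-1, 1): estimate -> mark -> refine until the total
// estimated error drops below a tolerance. f has a peak at x = 0, so the
// loop should grade the mesh towards the origin.
public class AdaptiveLoop {

    static double f(double x) { return 1.0 / (0.01 + x * x); }   // peak at 0

    // stand-in local estimator on the interval (a, b)
    static double eta(double a, double b) {
        return (b - a) * Math.abs(f(0.5 * (a + b)));
    }

    static double totalEta(List<Double> p) {
        double s = 0.0;
        for (int i = 0; i + 1 < p.size(); i++) {
            double e = eta(p.get(i), p.get(i + 1));
            s += e * e;
        }
        return Math.sqrt(s);
    }

    static List<Double> adapt(double eps, int maxIter) {
        List<Double> p = new ArrayList<>(List.of(-1.0, 0.0, 1.0)); // T_0
        for (int it = 0; it < maxIter && totalEta(p) > eps; it++) {
            double max = 0.0;
            for (int i = 0; i + 1 < p.size(); i++)
                max = Math.max(max, eta(p.get(i), p.get(i + 1)));
            List<Double> q = new ArrayList<>();
            for (int i = 0; i + 1 < p.size(); i++) {
                double a = p.get(i), b = p.get(i + 1);
                q.add(a);
                if (eta(a, b) >= 0.5 * max) q.add(0.5 * (a + b)); // mark & bisect
            }
            q.add(1.0);
            p = q;
        }
        return p;
    }

    public static void main(String[] args) {
        List<Double> p = adapt(1.0, 200);
        System.out.println(p.size() + " points, eta = " + totalEta(p));
    }
}
```

The resulting partition is strongly graded towards the peak of f, mimicking the behaviour of the adaptive algorithm on problems with local singularities.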
II.7.3. A residual a posteriori error estimator. The simplest and most popular a posteriori error estimator is the residual error estimator. For the Stokes equations it is given by

η_K = { h_K² ‖f + Δu_T − ∇p_T‖²_{L²(K)} + ‖div u_T‖²_{L²(K)} + (1/2) Σ_{E∈E_T, E⊂∂K} h_E ‖[n_E · (∇u_T − p_T I)]_E‖²_{L²(E)} }^{1/2}.

Here, f_T is the L² projection of f onto S^{0,−1}(T) and ω_K denotes the union of all elements that share an edge, if Ω ⊂ R², or a face, if Ω ⊂ R³, with K (cf. Figure II.7.1). Note that:
• The first term is the residual of the discrete solution with respect to the momentum equation −Δu + ∇p = f in its strong form. The f − f_T term is of higher order.
• The second term is the residual of the discrete solution with respect to the continuity equation div u = 0 in its strong form.
• The third term contains the boundary terms that appear when switching from the strong to the weak form of the momentum equation by using an integration by parts formula elementwise. The pressure jumps vanish when using a continuous pressure approximation.

One can prove that the residual error estimator yields the following upper and lower bounds on the error:

‖u − u_T‖₁ + ‖p − p_T‖₀ ≤ c* { Σ_{K∈T} η_K² }^{1/2},

η_K ≤ c_* { ‖u − u_T‖_{H¹(ω_K)} + ‖p − p_T‖_{L²(ω_K)} + h_K ‖f − f_T‖_{L²(ω_K)} }.

The constant c* depends on the constant c_Ω in the stability result at the end of §II.2.1 (p. 32) and on the shape parameter max_{K∈T} h_K/ρ_K of the partition T. The constant c_* depends on the shape parameter of T and on the polynomial degree of the spaces X(T) and Y(T).

Remark II.7.3. The lower bound on the error is a local one whereas the upper bound is a global one. This is not by chance. The lower error bound involves the differential operator, which is a local one: local variations in the velocity and pressure result in local variations of the force terms.
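A one-dimensional analogue of the residual estimator can be coded in a few lines. The following Java sketch uses the Poisson model problem −u'' = f with P1 elements instead of the Stokes equations, so the pressure and divergence terms drop out and only the element residual and the jumps of the discrete gradient remain; u_T is taken as the nodal interpolant of the exact solution u(x) = 1 − x² (so f = 2). All names are ours.

```java
// 1D analogue of the residual error estimator on a uniform mesh of (-1, 1):
// eta_K^2 = h_K^2 ||f||^2_{L2(K)} + (1/2) sum over interior endpoints of
// h_K * [u_T']^2. The total estimator should decay like O(h).
public class ResidualEstimator1D {

    static double estimate(int n) {            // n elements on (-1, 1)
        double h = 2.0 / n;
        double[] x = new double[n + 1];
        for (int i = 0; i <= n; i++) x[i] = -1.0 + i * h;
        double total2 = 0.0;
        for (int k = 0; k < n; k++) {
            // element residual: f + u_T'' = 2 on K (u_T is piecewise linear)
            double elem = h * h * (4.0 * h);   // h_K^2 * ||f||^2_{L2(K)}
            // jumps of u_T' at the interior endpoints of K, weight 1/2 each
            double jump2 = 0.0;
            if (k > 0)     jump2 += 0.5 * h * sq(slope(x, k) - slope(x, k - 1));
            if (k < n - 1) jump2 += 0.5 * h * sq(slope(x, k + 1) - slope(x, k));
            total2 += elem + jump2;
        }
        return Math.sqrt(total2);
    }

    static double slope(double[] x, int k) {   // slope of interpolant on element k
        return (u(x[k + 1]) - u(x[k])) / (x[k + 1] - x[k]);
    }

    static double u(double x) { return 1.0 - x * x; }
    static double sq(double t) { return t * t; }

    public static void main(String[] args) {
        System.out.println(estimate(20) / estimate(40)); // ~2: estimator is O(h)
    }
}
```

Halving h halves the estimator, matching the linear convergence one expects for P1 elements in the energy norm.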
The upper error bound on the other hand involves the inverse of the differential operator, which is a global one: local variations of the exterior forces result in a global change of the velocity and pressure.

Figure II.7.1. Domain ω_K for a triangle and a parallelogram

II.7.4. Error estimators based on the solution of auxiliary problems. Two other classes of popular error estimators are based on the solution of auxiliary discrete local problems with Neumann and Dirichlet boundary conditions, respectively.

An error estimator, which yields an upper bound on the error, is called reliable. An estimator, which yields a lower bound on the error, is called efficient. Any decent error estimator must be reliable and efficient.

For the description of both estimators we denote by k_u and k_p the maximal polynomial degrees k and m such that
• S^{k,0}_0(T)ⁿ ⊂ X(T) and either
• S^{m,−1}(T) ∩ L²₀(Ω) ⊂ Y(T) when using a discontinuous pressure approximation or
• S^{m,0}(T) ∩ L²₀(Ω) ⊂ Y(T) when using a continuous pressure approximation.
Set k_T = max{k_u + n, k_p − 1} and k_E = max{k_u − 1, k_p}, where n is the space dimension, i.e. Ω ⊂ Rⁿ.

For the Neumann-type estimator we introduce for every element K ∈ T the spaces

X(K) = span{ψ_K v, ψ_E w : v ∈ R_{k_T}(K)ⁿ, w ∈ R_{k_E}(E)ⁿ, E ∈ E_T ∩ ∂K},
Y(K) = span{ψ_K q : q ∈ R_{k_u−1}(K)},

where ψ_K and ψ_E are the element and edge respectively face bubble functions defined in §I.2.12 (p. 27). One can prove that the definition of k_T ensures that the local discrete problem
has a unique solution:

Find u_K ∈ X(K) and p_K ∈ Y(K) such that

∫_K ∇u_K : ∇v_K dx − ∫_K p_K div v_K dx = ∫_K {f + Δu_T − ∇p_T} · v_K dx + ∫_{∂K} [n_K · (∇u_T − p_T I)]_{∂K} · v_K dS,
∫_K q_K div u_K dx = − ∫_K q_K div u_T dx

for all v_K ∈ X(K) and all q_K ∈ Y(K). With this solution we define the Neumann estimator by

η_{N,K} = { |u_K|²_{H¹(K)} + ‖p_K‖²_{L²(K)} }^{1/2}.

The above local problem is a discrete version of the Stokes equations with Neumann boundary conditions

−Δv + grad q = f in K,  div v = g in K,  n_K · ∇v − q n_K = b on ∂K,

where the data f, g, and b are determined by the exterior force f and the discrete solution u_T, p_T.

For the Dirichlet-type estimator we denote – as in the previous subsection – for every element K ∈ T by ω_K the union of all elements in T that share an edge, if n = 2, or a face, if n = 3, with K (cf. Figure II.7.1 (p. 70)) and set

X(K) = span{ψ_{K'} v, ψ_E w : v ∈ R_{k_T}(K')ⁿ, w ∈ R_{k_E}(E)ⁿ, K' ∈ T ∩ ω_K, E ∈ E_T ∩ ω_K},
Y(K) = span{ψ_{K'} q : q ∈ R_{k_u−1}(K'), K' ∈ T ∩ ω_K}.

Again, one can prove that the definition of k_T ensures that the local discrete problem
has a unique solution:

Find u_K ∈ X(K) and p_K ∈ Y(K) such that

∫_{ω_K} ∇u_K : ∇v_K dx − ∫_{ω_K} p_K div v_K dx = ∫_{ω_K} f · v_K dx − ∫_{ω_K} ∇u_T : ∇v_K dx + ∫_{ω_K} p_T div v_K dx,
∫_{ω_K} q_K div u_K dx = − ∫_{ω_K} q_K div u_T dx

for all v_K ∈ X(K) and all q_K ∈ Y(K). With this solution we define the Dirichlet estimator by

η_{D,K} = { |u_K|²_{H¹(ω_K)} + ‖p_K‖²_{L²(ω_K)} }^{1/2}.

This local problem is a discrete version of the Stokes equations with Dirichlet boundary conditions

−Δv + grad q = f in ω_K,  div v = g in ω_K,  v = 0 on ∂ω_K,

where the data f and g are determined by the exterior force f and the discrete solution u_T, p_T.

One can prove that both estimators are reliable and efficient and equivalent to the residual estimator:

‖u − u_T‖₁ + ‖p − p_T‖₀ ≤ c₁ { Σ_{K∈T} η_{N,K}² }^{1/2},
η_{N,K} ≤ c₂ { ‖u − u_T‖_{H¹(ω_K)} + ‖p − p_T‖_{L²(ω_K)} + h_K ‖f − f_T‖_{L²(ω_K)} },
η_K ≤ c₃ η_{N,K},
η_{N,K} ≤ c₄ { Σ_{K'⊂ω̃_K} η_{K'}² }^{1/2},
and

‖u − u_T‖₁ + ‖p − p_T‖₀ ≤ c₅ { Σ_{K∈T} η_{D,K}² }^{1/2},
η_{D,K} ≤ c₆ { ‖u − u_T‖_{H¹(ω_K)} + ‖p − p_T‖_{L²(ω_K)} + h_K ‖f − f_T‖_{L²(ω_K)} },
η_K ≤ c₇ η_{D,K},
η_{D,K} ≤ c₈ { Σ_{K'⊂ω̃_K} η_{K'}² }^{1/2}.

Here, ω̃_K denotes the union of all elements in T that share at least a vertex with K. The constants c₁, ..., c₈ only depend on the shape parameter of the partition T, the polynomial degree of the discretization, and the stability parameter c_Ω of the Stokes equations.

Remark II.7.5. The computation of the estimators η_{N,K} and η_{D,K} obviously is more expensive than that of the residual estimator η_K. This is recompensed by a higher accuracy. The residual estimator, on the other hand, is completely satisfactory for identifying the regions for mesh-refinement. Thus it is recommended to use the cheaper residual estimator for refining the mesh and one of the more expensive Neumann-type or Dirichlet-type estimators for determining the actual error on the final mesh.

II.7.5. Marking strategies. There are two popular marking strategies for determining the set T̃_i in the general adaptive algorithm: the maximum strategy and the equilibration strategy.

Algorithm II.7.6. (Maximum strategy)
(0) Given: a partition T, error estimates η_K for the elements K ∈ T, and a threshold θ ∈ (0, 1). Sought: a subset T̃ of marked elements that should be refined.
(1) Compute η_{T,max} = max_{K∈T} η_K.
(2) If η_K ≥ θ η_{T,max}, mark K for refinement and put it into the set T̃; otherwise skip K.

Algorithm II.7.7. (Equilibration strategy)
(0) Given: a partition T, error estimates η_K for the elements K ∈ T, and a threshold θ ∈ (0, 1). Sought: a subset T̃ of marked elements that should be refined.
(1) Compute Θ_T = Σ_{K∈T} η_K². Set Σ_T = 0 and T̃ = ∅.
(2) If Σ_T ≥ θ Θ_T, return T̃; stop. Otherwise go to step 3.
(3) Compute η̃_{T,max} = max_{K∈T\T̃} η_K.
(4) For all elements K ∈ T\T̃ check whether η_K = η̃_{T,max}. If this is the case, put K into T̃ and add η_K² to Σ_T. When all elements have been treated, return to step 2.

At the end of this algorithm the set T̃ satisfies Σ_{K∈T̃} η_K² ≥ θ Σ_{K∈T} η_K².

Both marking strategies yield comparable results. The maximum strategy obviously is cheaper than the equilibration strategy. In both strategies, a large value of θ leads to small sets T̃, i.e. very few elements are marked, whereas a small value of θ leads to large sets T̃, i.e. nearly all elements are marked. A popular and well established choice is θ ≈ 0.5.

II.7.6. Regular refinement. Elements that are marked for refinement usually are refined by connecting the midpoints of their edges. The resulting elements are called red; the corresponding refinement is called regular. Triangles and quadrilaterals are thus subdivided into four smaller triangles and quadrilaterals that are similar to the parent element, i.e. have the same angles. Thus the shape parameter of the elements does not change. This is illustrated by the top-left triangle of Figure II.6.2 (p. 63) and by the top square of Figure II.6.3 (p. 64). The numbers outside the elements indicate the local enumeration of the edges and vertices of the parent element. The numbers inside the elements close to the vertices indicate the local enumeration of the vertices of the child elements. The numbers +0, +1 etc. inside the elements give the enumeration of the children.
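The two marking strategies above can be sketched in a few lines of Java. Given the array of local estimates, the maximum strategy is a single comparison per element; for the equilibration strategy we use an equivalent sorting-based variant of the greedy loop in Algorithm II.7.7. Class and method names are our own.

```java
import java.util.ArrayList;
import java.util.Arrays;
import java.util.Comparator;
import java.util.List;

// Sketch of the maximum and equilibration marking strategies; both take the
// local error estimates eta[] and a threshold theta and return the indices
// of the marked elements.
public class MarkingStrategies {

    static List<Integer> maximumStrategy(double[] eta, double theta) {
        double max = 0.0;
        for (double e : eta) max = Math.max(max, e);
        List<Integer> marked = new ArrayList<>();
        for (int k = 0; k < eta.length; k++)
            if (eta[k] >= theta * max) marked.add(k);
        return marked;
    }

    static List<Integer> equilibrationStrategy(double[] eta, double theta) {
        double total = 0.0;
        for (double e : eta) total += e * e;
        Integer[] idx = new Integer[eta.length];
        for (int k = 0; k < eta.length; k++) idx[k] = k;
        Arrays.sort(idx, Comparator.comparingDouble(k -> -eta[k])); // largest first
        List<Integer> marked = new ArrayList<>();
        double sum = 0.0;
        for (int k : idx) {
            if (sum >= theta * total) break;   // marked part carries theta*Theta_T
            marked.add(k);
            sum += eta[k] * eta[k];
        }
        return marked;
    }

    public static void main(String[] args) {
        double[] eta = {0.9, 0.1, 0.5, 0.45, 0.05};
        System.out.println(maximumStrategy(eta, 0.5));       // [0, 2, 3]
        System.out.println(equilibrationStrategy(eta, 0.5)); // [0]
    }
}
```

The example illustrates the remark on the threshold: with the same θ the equilibration strategy may mark far fewer elements than the maximum strategy when a single element dominates the error.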
If this is the case. Parallelepipeds are also subdivided into eight smaller similar parallelepipeds by joining the midpoints of edges. +1 etc.2. In order to avoid too acute or too abstuse triangles. II. Figure II. • a purple quadrilateral by bisecting exactly three edges. ¨rr ¨¨ rr ¨ r r ¨¨ • r ¨ ¨ rr ¨ ¨ r r ¨¨ r Figure II. . 63) and II. blue. a blue reﬁnement is performed instead. Example of a hanging node For abbreviation we call the resulting elements green.3 (p.2 (p. Joining the midpoints of edges introduces four smaller similar tetrahedrons at the vertices of the parent tetrahedron plus a double pyramid in its interior. For tetrahedrons. These tetrahedrons. the situation is more complicated.6. Since not all elements are reﬁned regularly. They are obtained as follows: • a green element by bisecting exactly one edge.2) and to ensure the admissibility of the reﬁned partition.7. we need additional reﬁnement rules in order to avoid hanging nodes (cf. are not similar to the parent tetrahedron. and purple. the longest one of the reﬁnement edges is bisected ﬁrst. The latter one is subdivided into four small tetrahedrons by cutting it along two orthogonal planes.II.7. Note that the enumeration of new elements and new vertices is chosen in such a way that triangles and quadrilaterals may be treated simultaneously with a minimum of case selections. • a blue element by bisecting exactly two edges. Thus these rules guarantee that the shape parameter of the partition does not deteriorate during a repeated adaptive reﬁnement procedure.6. inside the elements give the enumeration of the children. But there are rules which determine the cutting planes such that a repeated reﬁnement according to these rules leads to at most four similarity classes of elements originating from a parent element.7. Additional reﬁnement. the blue and green reﬁnement of triangles obey to the following two rules: • In a blue reﬁnement of a triangle. 64).7.7. 
• Before performing a green refinement of a triangle it is checked whether the refinement edge is part of an edge which has been bisected during the last n_g generations. If this is the case, a blue refinement is performed instead.
These rules are illustrated in Figures II.6.2 (p. 63) and II.6.3 (p. 64).
The second rule is illustrated in Figure II.7.3. The cross in the left part represents a hanging node which should be eliminated by a green refinement. The right part shows the blue refinement which is performed instead; here the cross represents the new hanging node which is created by the blue refinement. Numerical experiments indicate that the optimal value of n_g is 1. Larger values result in an excessive blow-up of the refinement zone.

Figure II.7.3. Forbidden green refinement and substituting blue refinement

II.7.8. Required data structures. In this subsection we shortly describe the required data structures for a Java or C++ implementation of an adaptive finite element algorithm. For simplicity we consider only the two-dimensional case. Note that the data structures are independent of the particular differential equation and apply to all engineering problems which require the approximate solution of partial differential equations.

The class NODE realizes the concept of a node, e.g. of a vertex of a grid. It has three members c, t, and d. The member c stores the coordinates in Euclidean 2-space; it is a double array of length 2. The member t stores the type of the node: it equals 0 if the node is an interior point of the computational domain; it equals −k, k > 0, if the node belongs to the k-th component of the Dirichlet boundary part of the computational domain; it equals k, k > 0, if the node is on the k-th component of the Neumann boundary. The member d gives the address of the corresponding degree of freedom. This member takes into account that not every node actually is a degree of freedom. It equals −1 if the corresponding node is not a degree of freedom, e.g. since it lies on the Dirichlet boundary.

The class ELEMENT realizes the concept of an element, i.e. of a triangle or quadrilateral. Its member nv determines the element type. Its members v and e realize the vertex and edge informations, respectively. Both are integer arrays of length 4. It is assumed that v[3] = −1 if nv = 3. A value e[i] = −1 indicates that the corresponding edge is on a straight part of the boundary. Similarly e[i] = −k − 2, k > 0, indicates that the endpoints of the corresponding edge are on the k-th curved part of the boundary. A value e[i] = j ≥ 0 indicates that edge i of the current
element is adjacent to element number j. Thus the member e describes the neighbourhood relation of the elements. The members p, c, and t realize the grid hierarchy and give the number of the parent, the number of the first child, and the refinement type, respectively. In particular we have

t ∈ {0} if the element is not refined,
t ∈ {1, ..., 4} if the element is refined green,
t ∈ {5} if the element is refined red,
t ∈ {6, ..., 24} if the element is refined blue,
t ∈ {25, ..., 100} if the element is refined purple.

At first sight it may seem strange to keep the information about nodes and elements in different classes. But this approach has several advantages:
• It minimizes the storage requirement. The coordinates of a node must be stored only once. If nodes and elements are represented by a common structure, these coordinates are stored 4-6 times.
• The elements represent the topology of the grid, which is independent of the particular position of the nodes. If nodes and elements are represented by different structures, it is much easier to implement mesh smoothing algorithms which affect the position of the nodes but do not change the mesh topology.

When creating a hierarchy of adaptively refined grids, the nodes are completely hierarchical, i.e. a node of grid T_i is also a node of any grid T_j with j > i. Since in general the grids are only partly refined, the elements are not completely hierarchical. Therefore, all elements of all grids are stored.

The information about the different grids is implemented by the class LEVEL. Its members nn, nt, nq, and ne give the number of nodes, triangles, quadrilaterals, and edges, respectively, of a given grid. The members first and last give the addresses of the first element of the current grid and of the first element of the next grid, respectively. The member dof yields the number of degrees of freedom of the corresponding discrete finite element problems.
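A minimal Java sketch of these classes may look as follows. The field names follow the text; everything else (class names, the wrapping class, the example in main) is our own illustration, not the actual ALF code.

```java
// Minimal sketch of the NODE / ELEMENT / LEVEL data structures described
// in the text, for the two-dimensional case.
public class MeshData {

    static class Node {
        double[] c = new double[2]; // coordinates in Euclidean 2-space
        int t;                      // type: 0 interior, k>0 Neumann, -k Dirichlet
        int d;                      // address of the degree of freedom, -1 if none
    }

    static class Element {
        int nv;                     // 3 = triangle, 4 = quadrilateral
        int[] v = new int[4];       // vertex indices, v[3] = -1 if nv == 3
        int[] e = new int[4];       // neighbour of edge i, or boundary code
        int p, c, t;                // parent, first child, refinement type
    }

    static class Level {
        int nn, nt, nq, ne;         // numbers of nodes, triangles, quads, edges
        int first, last;            // first element of this grid / of the next grid
        int dof;                    // number of degrees of freedom
    }

    public static void main(String[] args) {
        Element tri = new Element();
        tri.nv = 3;
        tri.v = new int[]{0, 1, 2, -1}; // triangle: fourth vertex slot unused
        tri.t = 0;                      // not refined
        System.out.println(tri.nv == 3 && tri.v[3] == -1);
    }
}
```

Keeping nodes and elements in separate classes mirrors the storage and mesh-smoothing advantages discussed above.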
For demonstration purposes we refer to the Java applet ALF (Adaptive Linear Finite elements), which implements the described data structures for scalar differential equations of 2nd order and which, together with a user guide, is available at the addresses:
www.rub.de/num1/demo/alf/ALFApplet
www.rub.de/num1/demo/alf/ALFUserguide.htm
CHAPTER III
Stationary nonlinear problems
III.1. Discretization of the stationary Navier-Stokes equations

III.1.1. Variational formulation. We recall the stationary incompressible Navier-Stokes equations of §I.1.13 (p. 16) with no-slip boundary condition and the rescaling of Remark I.1.4 (p. 15):

−Δu + Re (u · ∇)u + grad p = f in Ω,
div u = 0 in Ω,
u = 0 on Γ,
where Re > 0 is the Reynolds number. The variational formulation of this problem is given by:

Find u ∈ H^1_0(Ω)ⁿ and p ∈ L²₀(Ω) such that

∫_Ω ∇u : ∇v dx − ∫_Ω p div v dx + ∫_Ω Re [(u · ∇)u] · v dx = ∫_Ω f · v dx,
∫_Ω q div u dx = 0

for all v ∈ H^1_0(Ω)ⁿ and all q ∈ L²₀(Ω).
It has the following properties:
• Any solution u, p of the Navier-Stokes equations is a solution of the above variational problem.
• Any solution u, p of the variational problem, which is sufficiently smooth, is a solution of the Navier-Stokes equations.

III.1.2. Fixed-point formulation. For the mathematical analysis it is convenient to write the variational formulation of the Navier-Stokes equations as a fixed-point equation. To this end we denote by T the Stokes operator which associates with each g the unique solution v = T g of the Stokes equations:
Find v ∈ H^1_0(Ω)ⁿ and q ∈ L²₀(Ω) such that

∫_Ω ∇v : ∇w dx − ∫_Ω q div w dx = ∫_Ω g · w dx,
∫_Ω r div v dx = 0

for all w ∈ H^1_0(Ω)ⁿ and all r ∈ L²₀(Ω).

Then the variational formulation of the Navier-Stokes equations takes the equivalent fixed-point form

u = T(f − Re (u · ∇)u).

III.1.3. Existence and uniqueness results. The following existence and uniqueness results can be proven for the variational formulation of the Navier-Stokes equations:
• The variational problem admits at least one solution.
• Every solution of the variational problem satisfies the a priori bound ‖u‖₁ ≤ ‖f‖₀.
• The variational problem admits a unique solution provided γ Re ‖f‖₀ < 1, where the constant γ only depends on the domain Ω and can be bounded in terms of the diameter of Ω and the space dimension n.
• Every solution of the variational problem has the same regularity properties as the solution of the Stokes equations.
• The mapping, which associates with Re a solution of the variational problem, is differentiable. Its derivative with respect to Re is a continuous linear operator which is invertible with a continuous inverse for all but finitely many values of Re, i.e. there are only finitely many turning or bifurcation points.

III.1.4. Finite element discretization. For the finite element discretization of the Navier-Stokes equations we choose a partition T of the domain Ω and corresponding finite element spaces X(T) and Y(T) for the velocity and pressure. These spaces have to satisfy the
inf-sup condition of §II.2.6 (p. 36). We then replace in the variational problem the spaces H^1_0(Ω)ⁿ and L²₀(Ω) by X(T) and Y(T), respectively. This leads to the following discrete problem:

Find u_T ∈ X(T) and p_T ∈ Y(T) such that

∫_Ω ∇u_T : ∇v_T dx − ∫_Ω p_T div v_T dx + ∫_Ω Re [(u_T · ∇)u_T] · v_T dx = ∫_Ω f · v_T dx,
∫_Ω q_T div u_T dx = 0

for all v_T ∈ X(T) and all q_T ∈ Y(T).

III.1.5. Fixed-point formulation of the discrete problem. The finite element discretization of the Navier-Stokes equations can be written in a fixed-point form similar to the variational problem. To this end we denote by T_T the discrete Stokes operator which associates with every g the solution v_T = T_T g of the discrete Stokes problem:

Find v_T ∈ X(T) and q_T ∈ Y(T) such that

∫_Ω ∇v_T : ∇w_T dx − ∫_Ω q_T div w_T dx = ∫_Ω g · w_T dx,
∫_Ω r_T div v_T dx = 0

for all w_T ∈ X(T) and all r_T ∈ Y(T).

The discrete Navier-Stokes problem then takes the equivalent fixed-point form

u_T = T_T(f − Re (u_T · ∇)u_T).

Note that – as for the variational problem – this equation also determines the pressure via the operator T_T.

III.1.6. Properties of the discrete problem. The discrete problem has similar properties as the variational problem:
• The discrete problem admits at least one solution.
• Every solution of the discrete problem satisfies the a priori bound ‖u_T‖₁ ≤ ‖f‖₀.
• The discrete problem admits a unique solution provided γ Re ‖f‖₀ < 1, where the constant γ is the same as for the variational problem.
• The mapping, which associates with Re a solution of the discrete problem, is differentiable. Its derivative with respect to Re is a continuous linear operator which is invertible with a continuous inverse for all but finitely many values of Re, i.e. there are only finitely many turning or bifurcation points.

Remark III.1.7. The number of turning or bifurcation points of the discrete problem of course depends on the partition T and on the choice of the spaces X(T) and Y(T).

III.1.7. Symmetrization. The integration by parts formulae of §I.2.3 (p. 19) imply for all u, v, w ∈ H^1_0(Ω)ⁿ the identity

∫_Ω [(u · ∇)v] · w dx = − ∫_Ω [(u · ∇)w] · v dx − ∫_Ω (v · w) div u dx.

If u is solenoidal, i.e. div u = 0, this in particular yields

∫_Ω [(u · ∇)v] · w dx = − ∫_Ω [(u · ∇)w] · v dx for all v, w ∈ H^1_0(Ω)ⁿ.

This antisymmetry in general is violated for the discrete problem, since div u_T ≠ 0. To enforce the symmetry, one often replaces in the discrete problem the term

∫_Ω Re [(u_T · ∇)u_T] · v_T dx

by

(1/2) ∫_Ω Re [(u_T · ∇)u_T] · v_T dx − (1/2) ∫_Ω Re [(u_T · ∇)v_T] · u_T dx.
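The point of the symmetrization is that the skew-symmetrized form vanishes for equal arguments whatever the discrete velocity does. This can be checked on a toy one-dimensional periodic finite-difference version of the convective term; the form b and all names below are our own illustration, not a finite element implementation.

```java
// The skew-symmetrized convective form bTilde(u; v, w) = 1/2 b(u; v, w)
// - 1/2 b(u; w, v) vanishes for w = v even when u is not divergence free,
// while the unsymmetrized form b(u; v, v) in general does not.
public class SkewSymmetrization {

    // b(u; v, w) = sum_i u_i * (central difference of v at i) * w_i * h
    static double b(double[] u, double[] v, double[] w, double h) {
        int n = u.length;
        double s = 0.0;
        for (int i = 0; i < n; i++) {
            double dv = (v[(i + 1) % n] - v[(i + n - 1) % n]) / (2.0 * h);
            s += u[i] * dv * w[i] * h;
        }
        return s;
    }

    static double bTilde(double[] u, double[] v, double[] w, double h) {
        return 0.5 * (b(u, v, w, h) - b(u, w, v, h));
    }

    public static void main(String[] args) {
        int n = 16;
        double h = 1.0 / n;
        double[] u = new double[n], v = new double[n];
        for (int i = 0; i < n; i++) {
            u[i] = Math.sin(2 * Math.PI * i * h);       // "compressible" velocity
            v[i] = Math.cos(2 * Math.PI * i * h) + 1.0;
        }
        System.out.println(b(u, v, v, h));       // in general nonzero
        System.out.println(bTilde(u, v, v, h));  // exactly zero
    }
}
```

In a Galerkin setting this exact antisymmetry is what restores the energy estimate for the discrete problem.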
III.1.8. A priori error estimates. As in §II.3.3 (p. 39) we denote by k_u and k_p the maximal polynomial degrees k and m such that
• S^{k,0}_0(T)ⁿ ⊂ X(T) and either
• S^{m,−1}(T) ∩ L²₀(Ω) ⊂ Y(T) when using a discontinuous pressure approximation or
• S^{m,0}(T) ∩ L²₀(Ω) ⊂ Y(T) when using a continuous pressure approximation.
Set k = min{k_u − 1, k_p}. Further we denote by u, p a solution of the variational formulation of the Navier-Stokes equations and by u_T, p_T a solution of the discrete problem. With these notations the following a priori error estimates can be proved: Assume that
• u ∈ H^{k+2}(Ω)ⁿ ∩ H^1_0(Ω)ⁿ and p ∈ H^{k+1}(Ω) ∩ L²₀(Ω),
• h Re ‖f‖₀ is sufficiently small, and
• u is no turning or bifurcation point of the variational problem;
then

‖u − u_T‖₁ + ‖p − p_T‖₀ ≤ c₁ h^{k+1} Re² ‖f‖_k.

If in addition Ω is convex, then

‖u − u_T‖₀ ≤ c₂ h^{k+2} Re² ‖f‖_k.

The constants c₁ and c₂ only depend on the domain Ω.

Remark III.1.9.
(1) As for the Stokes equations, the above regularity assumptions are not realistic for practical problems.
(2) The condition "h Re ‖f‖₀ sufficiently small" can in general not be quantified. Therefore it cannot be checked for a given discretization.
(3) One cannot conclude from the computed discrete solution whether the analytical solution is a turning or bifurcation point or not.
(4) For most practical examples, the right-hand side of the above error estimates behaves like O(h Re²) or O(h² Re²). Thus they require unrealistically small meshsizes for large Reynolds numbers.
These observations show that the a priori error estimates are of purely academic interest. Practical informations can only be obtained from a posteriori error estimates.

III.1.9. A warning example. We consider the one-dimensional Navier-Stokes equations

−u'' + Re u u' = 0 in I = (−1, 1),
u(−1) = 1,  u(1) = −1.

Since u u' = ((1/2) u²)',
we conclude that

−u' + (Re/2) u² = c

is constant. To determine the constant c, we integrate this equation and obtain

2c = ∫_{−1}^{1} {−u' + (Re/2) u²} dx = u(−1) − u(1) + (Re/2) ∫_{−1}^{1} u² dx = 2 + (Re/2) ∫_{−1}^{1} u² dx ≥ 2.

This shows that c ≥ 1 and that we may write c = γ² with a constant γ ≥ 1. Hence every solution of the differential equation satisfies

u' = (Re/2) u² − γ².

Therefore it must be of the form

u(x) = β_Re tanh(α_Re x)

with suitable parameters α_Re and β_Re depending on Re. The boundary conditions imply that β_Re = −1/tanh(α_Re), hence

u(x) = − tanh(α_Re x)/tanh(α_Re).

Inserting this expression in the differential equation yields

0 = −u'' + Re u u' = 2 { Re/2 − α_Re tanh(α_Re) } u u'.

Since neither u nor its derivative u' vanish identically, we obtain the defining relation

2 α_Re tanh(α_Re) = Re

for the parameter α_Re. Due to the monotonicity of tanh, this equation admits for every Re > 0 a unique solution α_Re. Since tanh(x) ≈ 1 for x ≫ 1, we have α_Re ≈ Re/2 for Re ≫ 1. Hence, the solution u has a sharp interior layer at the origin for large values of Re.
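The defining relation for α_Re is easy to solve numerically, since the left-hand side is increasing in α. A short Java sketch (our own, using simple bisection) confirms that α_Re is essentially Re/2 already for moderately large Re:

```java
// Solves the defining relation 2*alpha*tanh(alpha) = Re for alpha by
// bisection; 2*alpha*tanh(alpha) is increasing for alpha >= 0, so the
// bracket [0, max(1, Re)] always contains the unique root.
public class AlphaRe {

    static double solve(double re) {
        double lo = 0.0, hi = Math.max(1.0, re);
        for (int k = 0; k < 200; k++) {
            double mid = 0.5 * (lo + hi);
            if (2.0 * mid * Math.tanh(mid) < re) lo = mid; else hi = mid;
        }
        return 0.5 * (lo + hi);
    }

    public static void main(String[] args) {
        System.out.println(solve(100.0)); // ~ 50, i.e. alpha_Re is close to Re/2
    }
}
```

For Re = 100 the layer width is thus of order 1/α_Re ≈ 0.02, which explains the sharp transition visible in the plot below.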
Figure III.1.1 depicts the solution u for Re = 100.

Figure III.1.1. Solution of the one-dimensional Navier-Stokes equations for Re = 100

For the discretization, we choose an integer N ≥ 1, divide the interval (−1, 1) into N + 1 small subintervals of equal length h = 2/(N+1), and use continuous piecewise linear finite elements on the resulting mesh. We denote by u_i, 0 ≤ i ≤ N + 1, the value of the discrete solution at the meshpoint x_i = −1 + ih and evaluate all integrals with the Simpson rule. Since all integrands are piecewise polynomials of degree at most 2, this is an exact integration. With these notations, the discretization results in the following finite difference scheme:

(2u_i − u_{i−1} − u_{i+1})/h + (Re/6)(u_i − u_{i−1})(2u_i + u_{i−1}) + (Re/6)(u_{i+1} − u_i)(2u_i + u_{i+1}) = 0 for 1 ≤ i ≤ N,
u_0 = 1,  u_{N+1} = −1.

For its solution, we try the ansatz

u_0 = 1,  u_i = δ for 1 ≤ i ≤ N,  u_{N+1} = −1

with an unknown constant δ. When inserting this ansatz in the difference equations, we see that the equations corresponding to 2 ≤ i ≤ N − 1 are satisfied independently of the value of δ. The equations for i = 1 and i = N on the other hand result in

0 = (δ − 1)/h + (Re/6)(δ − 1)(2δ + 1)
If Reh = 6. STATIONARY NONLINEAR PROBLEMS = and 0= Reh δ−1 1+ (2δ + 1) h 6 δ + 1 Re + (−δ − 1)(2δ − 1) h 6 δ+1 Reh = 1− (2δ − 1) h 6 respectively. Hence. If Reh = 6. our ansatz now leads to the same solution as before. however. A more detailed analysis. Next. we see that equations corresponding to 2 ≤ i ≤ N − 1 are again satisﬁed independently of δ. things do not change. one can prove that one obtains a completely useless discrete solution when Reh ≥ 2. This results in the diﬀerence equations 2ui − ui−1 − ui+1 + Reui (ui − ui−1 ) = 0 for 1 ≤ i ≤ N h u0 = 1 uN +1 = −1. When using a standard symmetric ﬁnite diﬀerence approximation. our ansatz does not work. these equations have the two solutions δ = 1 and δ = −1. the above equations have no solution. for 1 ≤ i ≤ N When Reh = 2. The equations for i = 0 and i = N now take the form δ−1 + Reδ(δ − 1) 0= h δ−1 = [1 + Rehδ] h .86 III. Thus the principal result does not change in this case. Again. The diﬀerence equations then take the form 2ui − ui−1 − ui+1 Re + ui (ui+1 − ui−1 ) = 0 h 2 u0 = 1 uN +1 = −1. we see that the ﬁnite element discretization yields a qual6 itatively correct approximation only when h < Re . shows that for Reh ≥ 6 the discrete solution has nothing in common with the solution of the diﬀerential equation. When trying our ansatz. Both solutions obviously have nothing in common with the solution of the diﬀerential equation. Hence we have to expect a prohibitively small meshsize for reallife problems. Only the critical meshsize is reduced by a factor 3. In summary. we try a backward diﬀerence approximation of the ﬁrst order derivative.
and

0 = (δ + 1)/h + Re δ (δ − δ) = (δ + 1)/h,

respectively. The second equation forces δ = −1, and inserting this into the first equation shows that it is solvable only when Re h = 1. Hence, for Re h = 1 this ansatz yields the unique solution δ = −1. A more refined analysis shows that we obtain a qualitatively correct discrete solution only if h ≤ 1/Re.

Finally, we try a forward difference approximation of the first order derivative. This yields the difference equations

(2u_i − u_{i−1} − u_{i+1})/h + Re u_i (u_{i+1} − u_i) = 0 for 1 ≤ i ≤ N,
u_0 = 1,  u_{N+1} = −1.

Our ansatz now results in the two conditions

0 = (δ − 1)/h + Re δ (δ − δ) = (δ − 1)/h

and

0 = (δ + 1)/h + Re δ(−1 − δ) = ((δ + 1)/h) {1 − Re h δ}.

Thus, in the case Re h = 1 we obtain the unique solution δ = 1. Again, a qualitatively correct discrete solution is obtained only if h ≤ 1/Re. Hence we have to expect a prohibitively small meshsize for real-life problems.

These experiences show that we need a true upwinding, i.e. a backward difference approximation where u > 0 and a forward difference approximation where u < 0. Thus the upwind direction and – with it – the discretization depend on the solution of the discrete problem!

III.1.10. Upwind methods. The previous subsection shows that we must modify the finite element discretization of §III.1.4 (p. 80) in order to obtain qualitatively correct approximations also for "large" values of h Re. One possibility to achieve this goal is to use an upwind difference approximation for the convective derivative (u_T · ∇)u_T. To describe the idea we consider only the lowest order method. In a first step, we approximate the integral involving the convective derivative by a one-point quadrature rule

∫_Ω [(u_T · ∇)u_T] · v_T dx ≈ Σ_{K∈T} |K| [(u_T(x_K) · ∇)u_T(x_K)] · v_T(x_K).
Here |K| is the area, respectively volume, of the element K and x_K denotes its barycentre. In a second step, we replace the convective derivative by a suitable upwind difference:
    (u_T(x_K) · ∇)u_T(x_K) ≈ (|u_T(x_K)| / |x_K − y_K|) (u_T(x_K) − u_T(y_K)).
Here |·| denotes the Euclidean norm in R^n and y_K is the intersection of the half-line {x_K − s u_T(x_K) : s > 0} with the boundary of K (cf. Figure III.1).

[Figure III.1. Upwind difference: the upstream point y_K lies on the edge of K with vertices a and b.]

In a last step, we replace u_T(y_K) by I_T u_T(y_K), where I_T u_T denotes the linear interpolate of u_T in the vertices of the edge, respectively face, of K which contains y_K. If in Figure III.1, e.g., |y_K − b| = (1/4)|a − b|, we have I_T u_T(y_K) = (1/4) u_T(a) + (3/4) u_T(b).

The upwind direction depends on the discrete solution. This has the awkward side-effect that the discrete nonlinear problem is not differentiable and leads to severe complications for the solution of the discrete problem. For "large" values of hRe, this upwinding yields a better approximation than the straightforward discretization of §III.1.4 (p. 80). For sufficiently small mesh-sizes, it does not deteriorate the asymptotic convergence rate of the finite element discretization: when using a first order approximation for the velocity, i.e. X(T) = S^{1,0}_0(T)^n, the error converges to zero linearly with h.

III.1.11. The streamline-diffusion method. The idea is to add an artificial viscosity in the streamline direction. In this respect it generalizes the Petrov-Galerkin method of §II.3.2 (p. 40). The streamline-diffusion method has an upwind effect, but avoids the differentiability problem of the upwind scheme of the previous section: the resulting discrete problem is differentiable and can be solved with a Newton method. Moreover, the streamline-diffusion method simultaneously has a stabilizing effect with respect to the incompressibility constraint. Thus the solution is slightly "smeared" in the streamline direction while retaining its steep gradient in the orthogonal direction.
The artificial viscosity is added via a suitable penalty term which is consistent in the sense that it vanishes for any solution of the Navier-Stokes equations. Retaining the notations of §III.1.4 (p. 80), the streamline-diffusion discretization is given by:

Find u_T ∈ X(T) and p_T ∈ Y(T) such that
    ∫_Ω ∇u_T : ∇v_T dx + ∫_Ω Re [(u_T · ∇)u_T] · v_T dx − ∫_Ω p_T div v_T dx
      + Σ_{K∈T} δ_K h_K² ∫_K Re [−f − ∆u_T + ∇p_T + Re(u_T · ∇)u_T] · [(u_T · ∇)v_T] dx
      + Σ_{K∈T} α_K δ_K ∫_K div u_T div v_T dx
    = ∫_Ω f · v_T dx
and
    ∫_Ω q_T div u_T dx
      + Σ_{K∈T} δ_K h_K² ∫_K [−∆u_T + ∇p_T + Re(u_T · ∇)u_T] · ∇q_T dx
      + Σ_{E∈E_T} δ_E h_E ∫_E [p_T]_E [q_T]_E dS
    = Σ_{K∈T} δ_K h_K² ∫_K f · ∇q_T dx
for all v_T ∈ X(T) and all q_T ∈ Y(T). Recall that [·]_E denotes the jump across E. The stabilization parameters δ_K and δ_E are chosen as described in §II.3.4 (p. 44). The additional parameter α_K has to be non-negative and can be chosen equal to zero. Computational experiments, however, suggest that the choice α_K ≈ 1 is preferable. When setting Re = 0 and α_K = 0, the above discretization reduces to the Petrov-Galerkin discretization of the Stokes equations presented in §II.3.3 (p. 43).

III.2. Solution of the discrete nonlinear problems

III.2.1. General structure. For the solution of discrete nonlinear problems which result from a discretization of a nonlinear partial differential equation one can proceed in two ways:
• One applies a nonlinear solver, such as e.g. the Newton method, to the nonlinear differential equation and then discretizes the resulting linear partial differential equations.
• One directly applies a nonlinear solver, such as e.g. the Newton method, to the discrete nonlinear problems. The resulting discrete linear problems can then be interpreted as discretizations of suitable linear differential equations.
Both approaches often are equivalent and yield comparable approximations. In this section we will follow the first approach since it requires less notation. All algorithms can easily be re-interpreted in the second sense described above. Most of the algorithms require the solution of discrete Stokes equations or of slight variations thereof. This can be achieved with the methods of §II.6 (p. 53).

III.2.2. Fixed-point iteration. We recall the fixed-point formulation of the Navier-Stokes equations of §III.1 (p. 79):
(III.2.1)    u = T(f − Re(u · ∇)u)
with the Stokes operator T which associates with each g the unique solution v = T g of the Stokes equations
    −∆v + grad q = g   in Ω,
    div v = 0   in Ω,
    v = 0   on Γ.
The fixed-point iteration is given by
    u^{i+1} = T(f − Re(u^i · ∇)u^i).
It results in the following algorithm:

Algorithm III.2.1. (Fixed-point iteration)
(0) Given: an initial guess u^0 and a tolerance ε > 0. Sought: an approximate solution of the stationary incompressible Navier-Stokes equations.
(1) Set i = 0.
(2) Solve the Stokes equations
    −∆u^{i+1} + grad p^{i+1} = f − Re(u^i · ∇)u^i   in Ω,
    div u^{i+1} = 0   in Ω,
    u^{i+1} = 0   on Γ.
(3) If ‖u^{i+1} − u^i‖_1 ≤ ε, return u^{i+1}, p^{i+1} as approximate solution; stop. Otherwise increase i by 1 and go to step 2.

The fixed-point iteration converges if Re² ‖f‖_0 ≤ 1. The convergence rate approximately is 1 − Re² ‖f‖_0. Therefore this algorithm can only be recommended for problems with very small Reynolds numbers.
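The structure of Algorithm III.2.1 can be sketched on a small algebraic toy problem (all names and the toy operators below are made up): a fixed SPD matrix A stands in for the discrete Stokes operator and a quadratic map N for the convective term, so that every step only requires one linear solve with the same matrix A:

```python
import numpy as np

def fixed_point(A, f, Re, N, tol=1e-10, maxit=200):
    """Toy fixed-point iteration u^{i+1} = A^{-1}(f - Re*N(u^i)):
    each step is a single *linear* solve with the fixed matrix A."""
    u = np.zeros_like(f)
    for i in range(maxit):
        u_new = np.linalg.solve(A, f - Re * N(u))
        if np.linalg.norm(u_new - u) <= tol:
            return u_new, i + 1
        u = u_new
    raise RuntimeError("no convergence: Reynolds number too large?")

A = np.array([[4.0, -1.0], [-1.0, 4.0]])   # stands in for the discrete Stokes operator
f = np.array([1.0, 2.0])
N = lambda u: u * u                        # stands in for the convective term
Re = 0.5
u, its = fixed_point(A, f, Re, N)
residual = np.linalg.norm(A @ u + Re * N(u) - f)
print(its, residual)
```

As in the text, the iteration is only a contraction for small Re; increasing Re in this sketch makes the iteration count grow and eventually the iteration diverges.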
III.2.3. Newton iteration. Equation (III.2.1) can be rewritten in the form
    F(u) = u − T(f − Re(u · ∇)u) = 0.
We may apply Newton's method to F. Then we must solve in each step a linear problem of the form
    g = DF(u)v = v + Re T((u · ∇)v + (v · ∇)u).
This results in the following algorithm:

Algorithm III.2.2. (Newton iteration)
(1) Given: an initial guess u^0 and a tolerance ε > 0. Sought: an approximate solution of the stationary incompressible Navier-Stokes equations.
(2) Set i = 0.
(3) Solve the modified Stokes equations
    −∆u^{i+1} + grad p^{i+1} + Re(u^i · ∇)u^{i+1} + Re(u^{i+1} · ∇)u^i = f + Re(u^i · ∇)u^i   in Ω,
    div u^{i+1} = 0   in Ω,
    u^{i+1} = 0   on Γ.
(4) If ‖u^{i+1} − u^i‖_1 ≤ ε, return u^{i+1}, p^{i+1} as approximate solution; stop. Otherwise increase i by 1 and go to step 3.

The Newton iteration converges quadratically. However, the initial guess must be "close" to the sought solution; otherwise the iteration may diverge. To avoid this, one can use a damped Newton iteration. The modified Stokes problems incorporate convection and reaction terms which are proportional to the Reynolds number. For large Reynolds numbers this may cause severe difficulties with the solution of the modified Stokes problems.

III.2.4. Path tracking. Instead of the single problem (III.2.1) we now look at a whole family of Navier-Stokes equations with a parameter λ:
    u_λ = T(f − λ(u_λ · ∇)u_λ).
Its solutions u_λ depend differentiably on the parameter λ. The derivative v_λ = du_λ/dλ solves the modified Stokes equations
    v_λ = −T(λ(v_λ · ∇)u_λ + λ(u_λ · ∇)v_λ + (u_λ · ∇)u_λ).
If we know the solution u_{λ0} corresponding to the parameter λ0, we can compute v_{λ0} and may use u_{λ0} + (λ1 − λ0) v_{λ0} as initial guess for the Newton iteration applied to the problem with parameter λ1 > λ0.
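For the same algebraic toy problem as before (all names hypothetical), Newton's method applied to F(u) = A u + Re·N(u) − f solves in each step a linear system with the Jacobian A + Re·N′(u^i) – the toy analogue of the modified Stokes problem – and typically needs only a handful of iterations:

```python
import numpy as np

def newton(A, f, Re, N, dN, tol=1e-12, maxit=50):
    """Toy Newton iteration for F(u) = A u + Re*N(u) - f = 0:
    each step solves a linear system with the Jacobian A + Re*dN(u)."""
    u = np.zeros_like(f)
    for i in range(maxit):
        F = A @ u + Re * N(u) - f
        if np.linalg.norm(F) <= tol:
            return u, i
        J = A + Re * dN(u)                  # toy 'modified Stokes' operator
        u = u - np.linalg.solve(J, F)
    raise RuntimeError("Newton did not converge; damp the step or path-track")

A = np.array([[4.0, -1.0], [-1.0, 4.0]])
f = np.array([1.0, 2.0])
N  = lambda u: u * u
dN = lambda u: np.diag(2.0 * u)             # Jacobian of the toy nonlinearity
u, its = newton(A, f, 0.5, N, dN)
res = np.linalg.norm(A @ u + 0.5 * N(u) - f)
print(its, res)
```

Compared with the fixed-point sketch above, the quadratic convergence is visible in the iteration count; the price is that the linear system changes in every step.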
stop.2. Denote the result by uλ . and a tolerance ε > 0.92 III. λ + ∆λ}. A nonlinear CGalgorithm.2. (Nonlinear CGalgorithm of Polak` Ribiere) (0) Given: an initial guess u0 and a tolerance ε > 0. the increment ∆λ should be reduced. Sought: an approximate solution of the stationary incompressible NavierStokes equations.2. (Path tracking) (0) Given: a Reynolds’ number Re. (2) Apply a few Newton iterations to the NavierStokes equations with Reynolds’ number λ. +λ(vλ · )uλ = {f − λ(uλ · div vλ = 0 vλ = 0 (5) 5. (3) If λ = Re return the velocity uλ and the corresponding pressure pλ as approximate solution. Sought: an approximate solution of the stationary incompressible NavierStokes equations with Reynolds’ number Re.5. and tolerance ε. an initial parameter 0 ≤ λ < Re. a few Newton iterations will yield a suﬃciently good approximation of vλ1 . ˜ (1) Compute the solutions z0 and g0 the Stokes problems −∆z0 + r0 = Re(u0 · z0 = 0 )u0 − f in Ω in Ω on Γ div z0 = 0 .4. Go to step 2. The idea is to apply a nonlinear CGalgorithm to the leastsquares minimization problem 1 minimize u − T (f − Re(u · 2 It leads to the following algorithm: )u)2 . initial guess uλ . Replace uλ by uλ + ∆λvλ and λ by min{Re. (4) Solve the modiﬁed Stokes equations −∆vλ + qλ + λ(uλ · )vλ )uλ } in Ω in Ω on Γ. III. The path tracking algorithm should be combined with a steplength control: If the Newton algorithm in step 2 does not converge suﬃciently well. This idea leads to the following algorithm: Algorithm III. 1 Algorithm III.3. Otherwise go to step 4. (1) Set uλ = 0. an increment ∆λ > 0. STATIONARY NONLINEAR PROBLEMS λ1 − λ0 is not too large.
2 2 Ω Determine the smallest positive zero ρi of α + βρ + γρ2 + δρ3 and set ui+1 = ui − ρi wi 1 2 zi+1 = zi − ρi zi + ρi zi .III. (2) If 1/2 0  wi 2 dx Ω ≤ε return the velocity ui and the corresponding pressure pi as approximate solution. Otherwise go to step 3.2. stop. SOLUTION OF THE DISCRETE NONLINEAR PROBLEMS 93 and −∆˜ 0 + g s0 = Re{(u0 + z0 ) · ( u0 ) − (u0 · ˜ 0 )(u0 + z0 )} in Ω in Ω on Γ. div zi 2 zi 2 Compute α=− =0 =0 (ui + zi ) : Ω (wi + zi )dx 1 (ui + zi ) : Ω β= Ω  (wi + zi )2 dx + 1 zi dx 2 zi dx 2 3 γ=− (wi + zi ) : 1 2 Ω 1 δ=  zi 2 . (3) Compute the solutions zi and zi of the Stokes problems 2 1 −∆zi + 1 i r1 = Re{(ui · )wi + (wi · )ui } in Ω in Ω on Γ div zi = 0 1 zi 1 and −∆zi + 2 i r2 = Re(wi · =0 )wi in Ω in Ω on Γ. ˜ div g = 0 ˜ g =0 ˜ Set i = 0 and w0 = g0 = u0 + z0 + g0 . 1 2 2 .
III.6.2.5. a damping parameter ω ∈ (0. Compute −1 )(ui+1 + zi+1 )} in Ω in Ω on Γ σ i+1 = Ω  g i+1 2  dx Ω (gi+1 − gi ) : gi+1 dx and set wi+1 = gi+1 + σ i+1 wi . This leads to the following algorithm: Algorithm III. (3) Solve the Stokes equations 2ωui+ 4 − ∆ui+ 4 + 1 1 pi+ 4 = 2ωui + f − Re(ui · )ui in Ω in Ω on Γ. (Operator splitting) (1) Given: an initial guess u0 . 1). Increase i by 1 and go to step 2. Operator splitting. Sought: an approximate solution of the stationary incompressible NavierStokes equations. and a tolerance ε > 0. The idea is to decouple the diﬃculties associated with the nonlinear convection term and with the incompressibility constraint. =0 . This algorithm has the advantage that it only requires the solution of Stokes problems and thus avoids the diﬃculties associated with large convection and reaction terms.2. 1 1 div ui+ 4 = 0 u i+ 1 4 =0 (4) Solve the nonlinear Poisson equations ωui+ 4 − ∆ui+ 4 + Re(ui+ 4 · 3 3 3 )ui+ 4 = ωui+ 4 + f − u i+ 3 4 3 1 pi+ 4 1 in Ω on Γ.94 III. STATIONARY NONLINEAR PROBLEMS ˜ Compute the solution gi+1 of the Stokes problem −∆˜ i+1 + g si+1 = Re{(ui+1 + zi+1 ) · ( ui+1 ) ˜ − (ui+1 · ˜ div gi+1 = 0 ˜ gi+1 = 0 and set ˜ gi+1 = gi+1 + ui+1 + zi+1 . (2) Set i = 0.
3.2. 58) to nonlinear problems one only has to modify the pre. stop. One only has to adapt the a posteriori error estimator. the regular reﬁnement.5 (p.2 (p.7.and postsmoothing steps. and solve for the current unknown by applying a few Newton iterations to the resulting nonlinear equation in one unknown. 58) to the linear problems that must be solved in the algorithms described above. for the auxiliary linear problems that must be solved during the Newton iteration. The marking strategy.3.6. and the data structures described in §II.1. Otherwise increase i by 1 and go to step 2. 73) – §II. pi+1 as approximate solution.7.3. 68) directly applies to nonlinear problems. Alternatively one can apply a variant of the multigrid algorithm of §II. The nonlinear problem in step 3 is solved with one of the algorithms presented in the previous subsections. III. When applying the multigrid algorithm of §II. GaußSeidel iteration. The general structure of an adaptive algorithm as described in §II. ADAPTIVITY FOR NONLINEAR PROBLEMS 95 (5) Solve the Stokes equations 2ωui+1 − ∆ui+1 + pi+1 = 2ωui+ 4 + f − Re(ui+ 4 · div ui+1 = 0 ui+1 = 0 (6) If ui+1 − ui 1 ≤ ε return ui+1 . • Successively choose a node and the corresponding equation. this variant is often preferred in practice. Due to the lower costs for implementation. the additional reﬁnement.g.III. such as e.8 (p.7. .5 (p. General structure. 76) do not change. This can be done in two possible ways: • Apply a few iterations of the Newton algorithm to the nonlinear problem combined with very few iterations of a classical iterative scheme.5 (p. Adaptivity for nonlinear problems III. The second variant is often called nonlinear GaußSeidel algorithm. freeze all unknowns that do not belong to the current node.7.6. 3 3 3 )ui+ 4 in Ω in Ω on Γ. Multigrid algorithms. III. This task is simpliﬁed by the fact that the incompressibility condition and the pressure are now missing.
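The nonlinear Gauß-Seidel algorithm described above can be sketched on a small algebraic toy system (the system and all names below are made up): each unknown is visited in turn, all other unknowns are frozen, and a few scalar Newton steps are applied to the corresponding single equation.

```python
import numpy as np

def nonlinear_gauss_seidel(F_k, dF_k, u, sweeps=50, newton_steps=3):
    """Nonlinear Gauss-Seidel: visit each unknown u[k] in turn, freeze the
    others, and apply a few scalar Newton steps to the k-th equation."""
    n = len(u)
    for _ in range(sweeps):
        for k in range(n):
            for _ in range(newton_steps):
                u[k] -= F_k(k, u) / dF_k(k, u)
    return u

# toy system: 4*u_k + u_k^3 - (sum of neighbours) = b_k  (diagonally dominant)
b = np.array([1.0, 2.0, 3.0])
def F_k(k, u):
    nb = (u[k-1] if k > 0 else 0.0) + (u[k+1] if k < len(u)-1 else 0.0)
    return 4.0*u[k] + u[k]**3 - nb - b[k]
def dF_k(k, u):
    return 4.0 + 3.0*u[k]**2

u = nonlinear_gauss_seidel(F_k, dF_k, np.zeros(3))
res = max(abs(F_k(k, u)) for k in range(3))
print(u, res)
```

The low implementation cost is apparent: no global Jacobian is ever assembled, which is why this variant is attractive as a multigrid smoother.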
With these spaces we associate a discretization of the stationary incompressible NavierStokes equations as described in §III. we observe that the nonlinear convection term has been added to the element residual. III. pT . Error estimators based on the solution of auxiliary problems. III. w ∈ RkE (E)n . . K ∈ T ∩ ωK }. kE = max{ku − 1.7. E ∈ ET ∩ ∂K}. E ∈ ET ∩ ∂K}. 70) and set X(K) = span{ψK v . Under suitable conditions on the solution of the NavierStokes equations.3. respectively associated with a given partition T of the domain Ω. Y (K) = span{ψK q : q ∈ Rku −1 (K)}. We recall the notations of §II. one can prove that – as in the linear case – the residual error estimator is reliable and eﬃcient. whereas u. K ∈ T ∩ ωK . ψE w : v ∈ RkT (K)n . A residual a posteriori error estimator. The computed solution of the corresponding discrete problem is denoted by uT . Y (K) = span{ψK q : q ∈ Rku −1 (K ) . ψE w : v ∈ RkT (K )n . When comparing it with the residual estimator for the Stokes equations. X(K) = span{ψK v . kp }. the local problems itself remain linear. In particular the nonlinearity only enters into the data. p denotes a solution of the NavierStokes equations. where kT = max{ku (ku − 1).96 III.1 (p. w ∈ RkE (E)n . 79). The error estimators based on the solution of auxiliary local discrete problems are very similar to those in the linear case. ku + n. STATIONARY NONLINEAR PROBLEMS Throughout this section we denote by X(T ) and Y (T ) ﬁnite element spaces for the velocity and pressure.4 (p.2. The residual a posteriori error estimator for the NavierStokes equations is given by ηK = h2 f + ∆uT − K + div uT + 1 2 2 L2 (K) 2 L2 (E) 1/2 pT − Re(uT · )uT 2 L2 (K) hE [nE · ( uT − pT I)]E E∈ET E⊂∂K .3. kp − 1}.3.
4 (p. we deﬁne the Neumann estimator by ηN. With these notations we consider the following discrete Stokes problem with Neumann boundary conditions Find uK ∈ X(K) and pK ∈ Y (K) such that uK : K vK dx {f + ∆uT − K − K pK div vK dx = pT )uT } · vK dx − Re(uT · + ∂K [nK · ( uT − pT I)]∂K · vK dS qK div uT dx qK div uK dx = K K for all vK ∈ X(K) and all qK ∈ Y (K).K = uK 2 1 (K) + pK H 2 L2 (K) 1/2 .3. Similarly we can consider the following discrete Stokes problem with Dirichlet boundary conditions Find uK ∈ X(K) and pK ∈ Y (K) such that uK : ωK vK dx f · vK dx − ωK ωK − ωK pK div vK dx = + uT : vK dx pT div vK dx ωK − ωK Re(uT · qK div uT dx )uT · vK dx qK div uK dx = ωK ωK .III.7. With the solution of this problem. ADAPTIVITY FOR NONLINEAR PROBLEMS 97 and ku and kp denote the polynomial degrees of the velocity and pressure approximation respectively. Note that the deﬁnition of kT diﬀers from the one in §II. 70) and takes into account the polynomial degree of the nonlinear convection term.
for all v_K ∈ X(K) and all q_K ∈ Y(K). With the solution of this problem, we define the Dirichlet estimator by
    η_{D,K} = ( ‖u_K‖²_{H¹(ω_K)} + ‖p_K‖²_{L²(ω_K)} )^{1/2}.
As in the linear case, one can prove that both estimators are reliable and efficient and comparable to the residual estimator. Remark II.7.5 (p. 73) also applies to the nonlinear problem.
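The structure of such residual estimators is easiest to see in one space dimension. The following sketch (a hypothetical 1D analogue for the Poisson problem −u″ = f, not the Navier-Stokes estimators defined above; all names are made up) computes η from the element residuals and the derivative jumps of a P1 finite element solution and checks that the estimator decreases under refinement:

```python
import numpy as np

def p1_solution(N, f):
    """P1 FEM for -u'' = f on (0,1), u(0)=u(1)=0, uniform mesh, N interior nodes."""
    h = 1.0 / (N + 1)
    A = (np.diag(2.0*np.ones(N)) - np.diag(np.ones(N-1), 1)
         - np.diag(np.ones(N-1), -1)) / h
    x = np.linspace(h, 1.0 - h, N)
    b = h * f(x)                       # lumped load vector
    return np.concatenate(([0.0], np.linalg.solve(A, b), [0.0])), h

def residual_estimator(u, h, f):
    """eta^2 = sum_K h_K^2 ||f + u_h''||_K^2 + sum_E h_E [u_h']_E^2
    (u_h'' = 0 inside each element for P1; midpoint rule for the norms)."""
    mid = 0.5 * (np.arange(len(u)-1) + np.arange(1, len(u))) * h
    elem = h**2 * f(mid)**2 * h        # h_K^2 * ||f||_{L2(K)}^2
    du = np.diff(u) / h                # piecewise constant derivative
    jump = h * np.diff(du)**2          # h_E * [u_h']^2 at interior nodes
    return np.sqrt(elem.sum() + jump.sum())

f = lambda x: np.pi**2 * np.sin(np.pi * x)
u1, h1 = p1_solution(15, f)
u2, h2 = p1_solution(31, f)
eta1 = residual_estimator(u1, h1, f)
eta2 = residual_estimator(u2, h2, f)
print(eta1, eta2)
```

Both contributions scale like h here, so halving the mesh-size roughly halves the estimator, mirroring the first-order convergence of the P1 discretization.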
CHAPTER IV

Instationary problems

IV.1. Discretization of the instationary Navier-Stokes equations

IV.1.1. Variational formulation. We recall the instationary incompressible Navier-Stokes equations of §I.1.12 (p. 15) with no-slip boundary condition:
    ∂u/∂t − ν∆u + (u · ∇)u + grad p = f   in Ω × (0, T),
    div u = 0   in Ω × (0, T),
    u = 0   on Γ × (0, T),
    u(·, 0) = u_0   in Ω.
Here, T > 0 is a given final time and u_0 denotes a given initial velocity. The variational formulation of this problem is given by:

Find a velocity field u with
    max_{0<t<T} ∫_Ω |u(x, t)|² dx + ∫_0^T ∫_Ω |∇u(x, t)|² dx dt < ∞
and a pressure p with
    ∫_0^T ∫_Ω |p(x, t)|² dx dt < ∞
such that
    ∫_0^T ∫_Ω { −u(x, t) · ∂v(x, t)/∂t + ν ∇u(x, t) : ∇v(x, t)
                + [(u(x, t) · ∇)u(x, t)] · v(x, t) − p(x, t) div v(x, t) } dx dt
    = ∫_0^T ∫_Ω f(x, t) · v(x, t) dx dt + ∫_Ω u_0(x) · v(x, 0) dx
and
    ∫_0^T ∫_Ω q(x, t) div u(x, t) dx dt = 0
holds for all v with
    max_{0<t<T} ∫_Ω { |v(x, t)|² + |∇v(x, t)|² + |∂v(x, t)/∂t|² } dx < ∞
and all q with
    max_{0<t<T} ∫_Ω |q(x, t)|² dx < ∞.

IV.1.2. Existence and uniqueness results. The variational formulation of the instationary incompressible Navier-Stokes equations admits at least one solution. In two space dimensions, this solution is unique. In three space dimensions, uniqueness of the solution can only be guaranteed within a restricted class of more regular functions; but the existence of such a more regular solution cannot be guaranteed. Any solution of the instationary incompressible Navier-Stokes equations behaves like √t for small times t. For larger times the smoothness with respect to time is better. This singular behaviour for small times must be taken into account for the time-discretization.

IV.1.3. Numerical methods for ordinary differential equations revisited. Before presenting the various discretization schemes for the time-dependent Navier-Stokes equations, we shortly recapitulate some basic facts on numerical methods for ordinary differential equations. Suppose that we have to solve an initial value problem
(IV.1.1)    dy/dt = F(y, t) in R^n,   y(0) = y_0
on the time interval (0, T). We choose an integer N ≥ 1 and intermediate times 0 = t_0 < t_1 < … < t_N = T and set τ_i = t_i − t_{i−1}, 1 ≤ i ≤ N. The approximation to y(t_i) is denoted by y_i. The simplest and most popular method is the θ-scheme. It is given by
    y_0 = y_0,
    (y_i − y_{i−1})/τ_i = θ F(y_i, t_i) + (1 − θ) F(y_{i−1}, t_{i−1}),   1 ≤ i ≤ N,
where θ ∈ [0, 1] is a fixed parameter. The choice θ = 0 yields the explicit Euler scheme, θ = 1 corresponds to the implicit Euler scheme, and the choice θ = 1/2 gives the Crank-Nicolson scheme. The θ-scheme is implicit unless θ = 0. It is of order 1, and of order 2 if θ = 1/2. It is A-stable provided θ ≥ 1/2.

Another class of popular methods is given by the various Runge-Kutta schemes. They take the form
    y_0 = y_0,
    y_{i,j} = y_{i−1} + τ_i Σ_{k=1}^r a_{jk} F(t_{i−1} + c_k τ_i, y_{i,k}),   1 ≤ j ≤ r,
    y_i = y_{i−1} + τ_i Σ_{k=1}^r b_k F(t_{i−1} + c_k τ_i, y_{i,k}),   1 ≤ i ≤ N.
The number r is called the stage number of the Runge-Kutta scheme. The scheme is called explicit if a_{jk} = 0 for all k ≥ j; otherwise it is called implicit. For the ease of notation one usually collects the numbers c_k, a_{jk}, b_k, where 0 ≤ c_1 ≤ … ≤ c_r ≤ 1, in a table of the form

    c_1 | a_11 a_12 … a_1r
     …  |  …
    c_r | a_r1 a_r2 … a_rr
    ----+------------------
        | b_1  b_2  … b_r

The Euler and Crank-Nicolson schemes are Runge-Kutta schemes. They correspond to the tables

    0 | 0        1 | 1        0 | 0    0
    --+---       --+---       1 | 1/2  1/2
      | 1          | 1        --+---------
                                | 1/2  1/2

with r = 1, r = 1, and r = 2, respectively.

Strongly diagonal implicit Runge-Kutta schemes (in short SDIRK schemes) are particularly well suited for the discretization of time-dependent partial differential equations since they combine high order with good stability properties. The simplest representatives of this class are called SDIRK2 and SDIRK5. They are both A-stable
and have orders 3 and 4, respectively. They are given by the data
SDIRK2:

    (3+√3)/6 | (3+√3)/6       0
    (3−√3)/6 |   −√3/3    (3+√3)/6
    ---------+---------------------
             |     1/2        1/2

SDIRK5:

      1/4 |     1/4
      3/4 |     1/2        1/4
    11/20 |   17/50      −1/25       1/4
      1/2 | 371/1360  −137/2720   15/544      1/4
        1 |   25/24     −49/48    125/16   −85/12    1/4
    ------+-----------------------------------------------
          |   25/24     −49/48    125/16   −85/12    1/4
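To see the schemes at work, the following sketch (illustrative code, all names made up) implements the θ-scheme and the SDIRK2 scheme above for the linear test equation y′ = λy, for which the implicit stage equations can be solved in closed form, and checks the orders experimentally: halving τ reduces the error by about 2 for the implicit Euler scheme (θ = 1), about 4 for Crank-Nicolson (θ = 1/2), and about 8 for the order-3 SDIRK2 scheme.

```python
import math

def theta_scheme(theta, lam, y0, T, N):
    """theta-scheme for y' = lam*y; the implicit part is solved exactly."""
    tau, y = T / N, y0
    for _ in range(N):
        y = (y + tau*(1-theta)*lam*y) / (1 - tau*theta*lam)
    return y

def sdirk2(lam, y0, T, N):
    """2-stage SDIRK scheme (order 3) for y' = lam*y with the tableau above:
    gamma = (3+sqrt(3))/6, a21 = -sqrt(3)/3, b = (1/2, 1/2)."""
    g = (3 + math.sqrt(3)) / 6
    a21 = -math.sqrt(3) / 3
    tau, y = T / N, y0
    for _ in range(N):
        k1 = lam * y / (1 - tau*g*lam)                 # stage 1: y1 = y + tau*g*k1
        k2 = lam * (y + tau*a21*k1) / (1 - tau*g*lam)  # stage 2: y2 = y + tau*(a21*k1 + g*k2)
        y = y + tau * 0.5 * (k1 + k2)
    return y

lam, y0, T = -1.0, 1.0, 1.0
exact = y0 * math.exp(lam * T)
def err(method, N):
    return abs(method(N) - exact)

e_ie = [err(lambda N: theta_scheme(1.0, lam, y0, T, N), N) for N in (20, 40)]
e_cn = [err(lambda N: theta_scheme(0.5, lam, y0, T, N), N) for N in (20, 40)]
e_sd = [err(lambda N: sdirk2(lam, y0, T, N), N) for N in (20, 40)]
print(e_ie[0]/e_ie[1], e_cn[0]/e_cn[1], e_sd[0]/e_sd[1])
```

For a partial differential equation the stage equations are of course genuinely implicit systems; the closed-form stage solve here is only possible because the test equation is linear and scalar.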
IV.1.4. Method of lines. This is the simplest discretization scheme for timedependent partial diﬀerential equations. We choose a partition T of the spatial domain Ω and associated ﬁnite element spaces X(T ) and Y (T ) for the velocity and pressure, respectively. These spaces must satisfy the infsup condition of §II.2.6 (p. 36). Then we replace in the variational formulation the velocities u, v and the pressures p, q by discrete functions uT , vT and pT , qT which depend upon time and which – for every time t – have their values in X(T ) and Y (T ), respectively. Next, we choose a bases for the spaces X(T ) and Y (T ) and identify uT and pT with their coeﬃcient vectors with respect to the bases. Then we obtain the following differential algebraic equation for uT and pT duT = fT − νAT uT − BT pT − NT (uT ) dt T BT uT = 0. Denoting by wi , 1 ≤ i ≤ dim X(T ), and rj , 1 ≤ j ≤ dim Y (T ), the bases functions of X(T ) and Y (T ) respectively, the coeﬃcients of the stiﬀness matrices AT and BT and of the vectors fT and NT (uT ) are given by AT ij =
    A_{T,ij} = ∫_Ω ∇w_i(x) : ∇w_j(x) dx,
    B_{T,ij} = −∫_Ω r_j(x) div w_i(x) dx,
    f_{T,i} = ∫_Ω f(x, t) · w_i(x) dx,
    N_T(u_T)_i = ∫_Ω [(u_T(x, t) · ∇)u_T(x, t)] · w_i(x) dx.
Formally, this problem can be cast in the form (IV.1.1) by working with testfunctions vT in the space V (T ) = {vT ∈ X(T ) :
    ∫_Ω q_T div v_T dx = 0 for all q_T ∈ Y(T)}
of discretely solenoidal functions. The function F in (IV.1.1) is then given by F(y, t) = f_T − νA_T y − N_T(y). If we apply the θ-scheme and denote by u^i_T and p^i_T the approximate values of u_T and p_T at time t_i, we obtain the fully discrete scheme
    u^0_T = I_T u_0,
    (u^i_T − u^{i−1}_T)/τ_i = −B_T p^i_T + θ [f_T(t_i) − νA_T u^i_T − N_T(u^i_T)]
                              + (1 − θ) [f_T(t_{i−1}) − νA_T u^{i−1}_T − N_T(u^{i−1}_T)],
    B_T^T u^i_T = 0,   1 ≤ i ≤ N,
where I_T : H^1_0(Ω)^n → X(T) denotes a suitable interpolation operator (cf. §I.2.11 (p. 26)). The equation for u^i_T, p^i_T can be written in the form
    (1/τ_i) u^i_T + θνA_T u^i_T + B_T p^i_T + θN_T(u^i_T) = g^i,
    B_T^T u^i_T = 0
with
    g^i = (1/τ_i) u^{i−1}_T + θ f_T(t_i) + (1 − θ) [f_T(t_{i−1}) − νA_T u^{i−1}_T − N_T(u^{i−1}_T)].
Thus it is a discrete version of a stationary incompressible Navier-Stokes equation. Hence the methods of §III.2 (p. 89) can be used for its solution. Due to the condition number of O(h^{−2}) of the stiffness matrix A_T, one must choose the parameter θ ≥ 1/2. Due to the singularity of the solution of the Navier-Stokes equations at time 0, it is recommended to first perform a few implicit Euler steps, i.e. θ = 1, and then to switch to the Crank-Nicolson scheme, i.e. θ = 1/2. The main drawback of the method of lines is the fixed (w.r.t. time) spatial mesh. Thus singularities, which move through the domain Ω in the course of time, cannot be resolved adaptively. This either leads
to a very poor approximation or to an enormous overhead due to an excessively fine spatial mesh.

IV.1.5. Rothe's method. In the method of lines we first discretize in space and then in time. This order is reversed in Rothe's method. We first apply a θ-scheme or a Runge-Kutta method to the instationary Navier-Stokes equations. This leaves us in every time-step with a stationary Navier-Stokes equation. For the θ-scheme, e.g., we obtain the problems
    (1/τ_i) u^i − νθ∆u^i + θ(u^i · ∇)u^i + grad p^i
        = θ f(·, t_i) + (1/τ_i) u^{i−1} + (1 − θ){f(·, t_{i−1}) + ν∆u^{i−1} − (u^{i−1} · ∇)u^{i−1}}   in Ω,
    div u^i = 0   in Ω,
    u^i = 0   on Γ.
The stationary NavierStokes equations are then discretized in space. The main diﬀerence to the methods of lines is the possibility to choose a diﬀerent partition and spatial discretization at each timelevel. This obviously has the advantage that we may apply an adaptive meshreﬁnement on each timelevel separately. The main drawback of Rothe’s method is the lack of a mathematically sound stepsize control for the temporal discretization. If we always use the same spatial mesh and the same spatial discretization, Rothe’s method of course yields the same discrete solution as the method of lines. IV.1.6. Spacetime ﬁnite elements. This approach circumvents the drawbacks of the method of lines and of Rothe’s method. When combined with a suitable spacetime adaptivity as described in the next section, it can be viewed as a Rothe’s method with a mathematically sound stepsize control in space and time. To describe the method we choose as in §IV.1.3 (p. 100) an integer N ≥ 1 and intermediate times 0 = t0 < t1 < . . . < tN = T and set τi = ti − ti−1 , 1 ≤ i ≤ N . With each time ti we associate a partition Ti of the spatial domain Ω and corresponding ﬁnite element spaces X(Ti ) and Y (Ti ) for the velocity and pressure, respectively. These spaces must satisfy the infsup condition of §II.2.6 (p. 36). We denote by ui ∈ X(Ti ) and pi ∈ Y (Ti ) the approximations of the velocity and T T pressure, respectively at time ti . The discrete problem is then given by
89).7. IV. The param2 eter Θ is chosen either equal to θ or equal to 1. . Roughly speaking. . . Up to these projections one also recovers the method of lines when all partitions and ﬁnite element spaces are ﬁxed with respect to time.1. On each timelevel we have to solve a discrete version of a stationary NavierStokes equation.IV. This is done with the methods of §III. The parameter θ is chosen either equal to 1 or equal to 1. these projections make the diﬀerence between the spacetime ﬁnite elements and Rothe’s method. This method is based on the observation that due to the transport theorem of §I. INSTATIONARY NAVIERSTOKES EQUATIONS 105 Find u0 ∈ X(T0 ) such that T 0 u0 · vT dx = T Ω 0 for all vT ∈ X(T0 ) and determine i = 1. ti ) · vT dx Ω i f (x. N successively such that 1 i ui · vT dx τi Ω T Ω i uT 0 u0 · vT dx ∈ X(Ti ) and pi ∈ Y (Ti ) for T +θν Ω ui : T − Ω i vT dx i pi div vT dx T i )ui ] · vT dx = T +Θ Ω [(ui · T 1 τi +θ i−1 i uT · vT dx Ω i f (x.2 (p.1.3 (p. 8) the term ∂u + (u · )u ∂t .1. ti−1 ) · vT dx Ω i−1 uT : Ω i−1 [(uT · Ω i vT dx i )ui−1 ] · vT dx T + (1 − θ) + (1 − θ)ν + (1 − Θ) i qT div ui dx = 0 T holds for all Ω i vT i ∈ X(Ti ) and qT ∈ Y (Ti ). . that u0 is the L2 projection of u0 onto X(T0 ) and that the T righthand side of the equation for ui involves the L2 projection of T functions in X(Ti−1 ) onto X(Ti ). The transportdiﬀusion algorithm. Note.
2 (p. µxK = K. With these notations the transportdiffusion algorithm is given by: Find u0 ∈ X(T0 ) such that T 0 µx u0 (x) · vT (x) = T x∈Q0 0 vT x∈Q0 0 µx u0 (x) · vT (x) for all ∈ X(T0 ). 28)). solve the initial value problems (transport step) dyx (t) = ui−1 (yx (t). . For i = 1.6 (p. ω denotes the area.2.2. . if n = 2. of a set ω ⊂ Rn and ωx and ωE are the unions of all elements that share the vertex x or the edge E respectively (cf. 24) and I. Figures I. if n = 3.106 IV. t − τ ) τ where s → y(s) is the trajectory which passes at time t through the point x. Qi the vertices x in Ti . Qi the midpoints xE of the edges in Ti . The remaining terms are approximated in a standard ﬁnite element way. Therefore ∂u + (u · ∂t )u (x. or the volume. The most important examples are given by Qi the barycentres xK of the elements in Ti . . t) for ti−1 < t < ti T dt yx (ti ) = x . Here. . INSTATIONARY PROBLEMS is the material derivative along the trajectories of the ﬂow. µxE = ωE . N successively. µx = ωx . t) − u(y(t − τ ). t) is approximated by the backward diﬀerence u(x. we retain the notations of the previous subsection and introduce in addition on each partition Ti a quadrature formula ϕdx ≈ Ω x∈Qi µx ϕ(x) which is exact at least for piecewise constant functions. To describe the algorithm more precisely.
When compared to adaptive discretizations of stationary problems an adaptive discretization scheme for instationary problems requires some additional features: • We need an error estimator which gives upper and lower bounds on the total error. This is the reward for the necessity to solve the initial value problems for the yx and to evaluate functions in X(Ti ) at the points yx (ti−1 ) which are not nodal degrees of freedom. IV.2. • The error estimator must allow a distinction between the temporal and spatial part of the error. In contrast to the previous algorithms we now only have to solve discrete Stokes problems at the diﬀerent timelevels. Spacetime adaptivity IV. on both the errors introduced by the spatial and the temporal discretization. i. For the NavierStokes equations some special diﬃculties arise in addition: . Overview.e.2. ti ) · vT dx Ω i f (x. SPACETIME ADAPTIVITY 107 for all x ∈ Qi and ﬁnd ui ∈ X(Ti ) and pi ∈ Y (Ti ) such T T that (diffusion step) 1 i µx ui (x) · vT (x) T τi x∈Q i +θν Ω ui : T − Ω i vT dx i pi div vT dx = T 1 τi i µx ui−1 (yx (ti−1 )) · vT (x) T x∈Qi i f (x. • We need a strategy for adapting the timestep size.1.IV.2. • The reﬁnement strategy for the spatial discretization must also allow the coarsening of elements in order to take account of singularities which in the course of time move through the spatial domain. ti−1 ) · vT dx Ω +θ + (1 − θ) + (1 − θ)ν ui−1 : T Ω i vT dx i qT div ui dx = 0 T Ω i i holds for all vT ∈ X(Ti ) and qT ∈ Y (Ti ).
2.108 IV. INSTATIONARY PROBLEMS • The velocities are not exactly solenoidal. IV.2. ti ) + (1 − θ)f (·. respectively interior faces. Recall that ETi denotes the set of all interior edges. if n = 2.1.6 (p. • The convection term may be dominant. The error estimator now consists of two components: a spatial part and a temporal one. if n = 3.2 (p. 104) the spatial contribution on timelevel i is given by i ηh = K∈Ti h2 K i−1 ui − uT T θf (·. With the notations of §IV. For this part we have to solve on each timelevel the following discrete Poisson equations: 1. the ratio of upper and lower error bounds must stay bounded uniformly with re1 spect to large values of Re u 1 or ν u 1 . i. Therefore the a posteriori error estimators must be robust. A residual a posteriori error estimator. we have to invest some work in the computation of the temporal part of the estimator. In order to obtain a robust estimator. This introduces an additional consistency error which must be properly estimated and balanced. The spatial part is a straightforward generalization of the residual estimator of §III. in Ti .e.0 Find ui ∈ S0 (Ti )n such that T ν Ω ui : T i vT dx . ti−1 ) − τi + θν∆ui + (1 − θ)∆νui−1 − T T − Θ(ui · T + K∈Ti pi T )ui−1 T 2 L2 (K) )ui − (1 − Θ)(ui−1 · T T div ui T 2 L2 (K) + E∈ETi hE [nE · (θν ui + (1 − θ)ν ui−1 T T 1 2 − pi I)]E T 2 L2 (E) . 90) for the stationary NavierStokes equation.2.
ti−1 + τi−1 } if ητ ≈ ηh . We replace ti by 1 (t + ti ) and repeat the solution of the discrete problem on time2 i−1 level i and the computation of the error estimators. we accept the current timestep and continue with the space adaptivity. Next. The temporal contribution of the error estimator on timelevel i then is given by i ητ = νui − ui−1 2 + νui 2 T 1 1 T T + K∈Ti i−1 h2 ((Θui + (1 − Θ)uT ) · T K i−1 )(ui − uT ) T + ν∆ui T 2 L2 (K) 1/2 + E∈ETi hE [ν · ui ]E 2 2 (E) T L . which is described in the next subsection. Time adaptivity. Assume that we have solved the discrete problem up to timelevel i − 1 and that we have computed the i−1 i−1 error estimators ηh and ητ . IV. we solve the discrete problem on timelevel i with the current i i value of ti and compute the error estimators ηh and ητ . .0 i holds for all vT ∈ S0 (Ti )n .2.2. i i If ητ ≈ ηh . Then we set ti = i−1 i−1 min{T. With these ingredients the residual a posteriori error estimator for the instationary incompressible NavierStokes equations takes the form N 1/2 i 2 ηh i 2 ητ τi i=1 + .3. ti−1 + 2τi−1 } if ητ ≤ 1 ηh . SPACETIME ADAPTIVITY 109 = Ω [((Θui + (1 − Θ)ui−1 ) · T T i−1 i )(ui − uT )] · vT dx T 1. we reject the current timestep. 2 In the ﬁrst case we retain the previous timestep. i−1 i−1 min{T.IV. i i If ητ ≥ 2ηh . in the second case we try a larger time step.
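The time-step control above can be sketched as a small driver loop (hypothetical names; the routine that solves the discrete problem and evaluates the two estimators is stubbed out): the step is rejected and halved when η_τ ≥ 2η_h, accepted when the two contributions are balanced, and the next step is doubled when η_τ ≤ η_h/2.

```python
def adapt_time_steps(solve_and_estimate, t0, T, tau0):
    """Time adaptivity: solve on [t, t+tau], compare the temporal and spatial
    estimators (eta_h, eta_tau), and reject/accept/enlarge the step."""
    t, tau, accepted = t0, tau0, []
    while t < T - 1e-12:
        tau = min(tau, T - t)
        eta_h, eta_tau = solve_and_estimate(t, t + tau)
        if eta_tau >= 2.0 * eta_h:           # reject: temporal error dominates
            tau *= 0.5
            continue
        accepted.append((t, t + tau))        # accept the current time-step
        t += tau
        if eta_tau <= 0.5 * eta_h:           # enlarge: temporal error is small
            tau *= 2.0
    return accepted

# stub: pretend eta_h is fixed while eta_tau grows linearly with the step size
def estimator_stub(t, t_next):
    tau = t_next - t
    return 0.1, tau                          # (eta_h, eta_tau)

steps = adapt_time_steps(estimator_stub, 0.0, 1.0, 0.8)
print(len(steps), steps[0], steps[-1])
```

With the stub above the controller first rejects the too-large initial step several times and then marches with a balanced step size to the final time.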
IV.2.3. Space adaptivity. Assume that we have solved the discrete problem on time-level i with an actual time-step τ_i and an actual partition T_i of the spatial domain Ω and that we have computed the estimators η_h^i and η_τ^i. The described strategy obviously aims at balancing the two contributions η_h^i and η_τ^i of the error estimator. For time-dependent problems the spatial adaptivity must also allow for a local mesh coarsening. Hence, the marking strategies of §II.7 (p. 73) must be modified accordingly.
Now, suppose that we have accepted the current time-step and want to optimize the partition T_i. We may assume that T_i currently is the finest partition in a hierarchy T_i^0, ..., T_i^ℓ of nested, successively refined partitions, i.e. T_i^ℓ = T_i and T_i^j is a (local) refinement of T_i^{j−1}, 1 ≤ j ≤ ℓ. We go back m generations in the grid-hierarchy to the partition T_i^{ℓ−m}. Due to the nestedness of the partitions, each element K' ∈ T_i^{ℓ−m} is the union of several elements K ∈ T_i. Each K gives a contribution to η_h^i. We add these contributions and thus obtain for every K' ∈ T_i^{ℓ−m} an error estimator η_{K'}. With these estimators we then perform M steps of the marking strategy of §II.7. This yields a new partition T_i^{ℓ−m+M} which usually is different from T_i. We replace T_i by this partition, solve the corresponding discrete problem on time-level i and compute the new error estimators η_h^i and η_τ^i.
If the newly calculated error estimators satisfy η_h^i ≈ η_τ^i, we accept the current partition T_i and proceed with the next time-level.
If η_h^i ≥ 2 η_τ^i, we successively refine the partition T_i as described in §II.7 (p. 67) with the η_K as error estimators until we arrive at a partition which satisfies η_h^i ≈ η_τ^i. When this goal is achieved we accept the spatial discretization and proceed with the next time-level.
Typical values for the parameters m and M are 1 ≤ m ≤ 3 and m ≤ M ≤ m + 2.

IV.3. Discretization of compressible and inviscid problems

IV.3.1. Systems in divergence form. In this section we consider problems of the following form: Given a domain Ω ⊂ Rⁿ with n = 2 or n = 3, an integer m ≥ n, a vector field M : Rᵐ → Rᵐ, a tensor field F : Rᵐ → Rᵐˣⁿ, a vector field g : Rᵐ × Ω × (0, ∞) → Rᵐ, and a vector field U_0 : Ω → Rᵐ, we are looking for a vector field U : Ω × (0, ∞) → Rᵐ such that

∂M(U)/∂t + div F(U) = g(U, x, t)   in Ω × (0, ∞),
U(·, 0) = U_0                      in Ω.
Such a problem, which has to be complemented with suitable boundary conditions, is called a system (of differential equations) in divergence form. The tensor field F is called the flux of the system. Note that the divergence is taken row-wise, i.e.

div F(U) = ( Σ_{j=1}^n ∂F(U)_{i,j}/∂x_j )_{1 ≤ i ≤ m}.

The flux is often split into an advective flux F_adv, which contains no derivatives, and a viscous flux F_visc, which contains spatial derivatives, i.e. F = F_adv + F_visc.

Example IV.3.1. The compressible Navier-Stokes equations of §I.1.9 (p. 13) and the Euler equations (with α = 0) of §I.1.10 (p. 13) fit into this framework. For both equations we have

m = n + 2,   U = (ρ, ρv, e)ᵀ,   M(U) = (ρ, ρv, e)ᵀ,
F_adv(U) = ( ρv , ρv ⊗ v + pI , ev + pv )ᵀ,   g = ( 0 , ρf , ρf · v )ᵀ.

For the Navier-Stokes equations the viscous forces yield a viscous flux

F_visc(U) = ( 0 , T + pI , (T + pI) · v + σ )ᵀ.

These equations are complemented by the constitutive equations for p, T, and σ given in §I.1.8 (p. 11).

IV.3.2. Finite volume schemes. Finite volume schemes are particularly suited for the discretization of systems in divergence form. To describe the idea in its simplest form, we choose a time-step τ > 0 and a partition T of the domain Ω.
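As a sanity check of Example IV.3.1, the advective flux F_adv(U) ∈ Rᵐˣⁿ can be evaluated directly. The sketch below does this for n = 2 and closes the system with an ideal-gas pressure p = (γ − 1)(e − ½ρ|v|²); this closure is an assumption standing in for the constitutive equations of §I.1.8:

```python
import numpy as np

GAMMA = 1.4  # ideal-gas exponent (assumption; the notes take p from §I.1.8)

def f_adv(U):
    """Advective flux F_adv(U) of the 2-d compressible Euler/Navier-Stokes
    system in the conservative variables U = (rho, rho*v1, rho*v2, e),
    m = n + 2 = 4.  Returns the m x n matrix (rho*v, rho*v(x)v + p*I, e*v + p*v)."""
    rho, m1, m2, e = U
    v = np.array([m1, m2]) / rho
    p = (GAMMA - 1.0) * (e - 0.5 * rho * (v @ v))   # ideal-gas pressure (assumption)
    F = np.empty((4, 2))
    F[0] = rho * v                                   # mass row:      rho v
    F[1:3] = rho * np.outer(v, v) + p * np.eye(2)    # momentum rows: rho v (x) v + p I
    F[3] = (e + p) * v                               # energy row:    e v + p v
    return F
```

For a fluid at rest only the pressure block of ρv ⊗ v + pI survives, so the momentum rows reduce to pI and the mass and energy rows vanish.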
The partition may consist of arbitrary polyhedra. From §I.2.7 (p. 21) we only retain the condition that the subdomains must not overlap.
In a first step, we assume that U is piecewise constant with respect to space and time. Denote by U_K^i and U_K^{i−1} its constant values on K ∈ T at times iτ and (i − 1)τ, respectively. Now, we fix an i and an element K ∈ T. Then we integrate the system over the set K × [(i − 1)τ, iτ]. This yields

∫_{(i−1)τ}^{iτ} ∫_K ∂M(U)/∂t dx dt + ∫_{(i−1)τ}^{iτ} ∫_K div F(U) dx dt = ∫_{(i−1)τ}^{iτ} ∫_K g(U, x, t) dx dt.

Using the integration by parts formulae of §I.2.3 (p. 19), we obtain for the left-hand side

∫_{(i−1)τ}^{iτ} ∫_K ∂M(U)/∂t dx dt = ∫_K M(U(x, iτ)) dx − ∫_K M(U(x, (i − 1)τ)) dx,
∫_{(i−1)τ}^{iτ} ∫_K div F(U) dx dt = ∫_{(i−1)τ}^{iτ} ∫_{∂K} F(U) · n_K dS dt,

where n_K denotes the exterior unit normal of K. Since U is piecewise constant, we obtain

∫_K M(U(x, iτ)) dx ≈ |K| M(U_K^i),   ∫_K M(U(x, (i − 1)τ)) dx ≈ |K| M(U_K^{i−1}),

where |K| denotes the area, if n = 2, respectively volume, if n = 3, of the element K. Next we approximate the flux term

∫_{(i−1)τ}^{iτ} ∫_{∂K} F(U) · n_K dS dt ≈ τ ∫_{∂K} F(U_K^{i−1}) · n_K dS.

The right-hand side is approximated by

∫_{(i−1)τ}^{iτ} ∫_K g(U, x, t) dx dt ≈ τ |K| g(U_K^{i−1}, x_K, (i − 1)τ),

where x_K denotes a point in K, which is fixed a priori, e.g. its barycentre. Finally, we approximate the boundary integral for the flux by a numerical flux
τ ∫_{∂K} F(U_K^{i−1}) · n_K dS ≈ τ Σ_{K' ∈ T, ∂K ∩ ∂K' ∈ E_T} |∂K ∩ ∂K'| F_T(U_K^{i−1}, U_{K'}^{i−1}),

where E_T denotes the set of all edges, if n = 2, respectively faces, if n = 3, in T and where |∂K ∩ ∂K'| is the length, respectively area, of the common edge or face of K and K'. With these approximations the simplest finite volume scheme for a system in divergence form is given by:

For every element K ∈ T compute
U_K^0 = (1/|K|) ∫_K U_0(x) dx.
For i = 1, 2, ..., successively compute for all elements K ∈ T
M(U_K^i) = M(U_K^{i−1}) − τ Σ_{K' ∈ T, ∂K ∩ ∂K' ∈ E_T} (|∂K ∩ ∂K'| / |K|) F_T(U_K^{i−1}, U_{K'}^{i−1}) + τ g(U_K^{i−1}, x_K, (i − 1)τ).

For a concrete finite volume scheme we of course have to specify
• the partition T and
• the numerical flux F_T.
This will be the subject of the following sections.

Remark IV.3.2. In practice one works with a variable time-step and variable partitions. To this end one chooses an increasing sequence 0 = t_0 < t_1 < t_2 < ... of times and associates with each time t_i a partition T_i of Ω. Then τ is replaced by τ_i = t_i − t_{i−1}, and K and K' are elements in T_{i−1} or T_i. Moreover one has to furnish an interpolation operator which maps piecewise constant functions with respect to one partition to piecewise constant functions with respect to another partition.
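In the simplest one-dimensional setting (a uniform periodic grid, M(u) = u, F(u) = a·u with a > 0, g = 0, and the upwind choice of F_T that anticipates §IV.3.5) the boxed scheme becomes a three-line update. This model sketch is an illustration, not a scheme from the notes:

```python
import numpy as np

def fv_step(u, a, tau, h):
    """One step of the finite volume update above for the scalar 1-d model
    M(u) = u, F(u) = a*u (a > 0), g = 0, on a uniform periodic grid with
    cells of size |K| = h.  The numerical flux at each face is the upwind value."""
    flux = a * u                          # F_T at the right face of every cell
    # each cell has two faces; |face|/|K| = 1/h, and the two normals have opposite sign
    return u - (tau / h) * (flux - np.roll(flux, 1))
```

With a·τ/h = 1 the update transports the cell values by exactly one cell: the well-known exactness of first-order upwinding at unit CFL number. For any step size the update is conservative, since the face fluxes telescope.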
IV.3.3. Construction of the partitions. An obvious possibility is to construct a finite volume partition T in the same way as a finite element partition. In practice, however, one prefers so-called dual meshes. To describe the idea, we consider the two-dimensional case. We start from a standard finite element partition T̃ which satisfies the conditions of §I.2.7 (p. 21). Then we subdivide each element K ∈ T̃ into smaller elements by either
• drawing the perpendicular bisectors at the midpoints of the edges of K (cf. Figure IV.3.1) or by
• connecting the barycentre of K with the midpoints of its edges (cf. Figure IV.3.2).
Then the elements in T consist of the unions of all small elements that share a common vertex in the partition T̃.

[Figure IV.3.1: Dual mesh (thick lines) via perpendicular bisectors of primal mesh (thin lines)]

[Figure IV.3.2: Dual mesh (thick lines) via barycentres of primal mesh (thin lines)]

Thus the elements in T can be associated with the vertices in N_T̃. Moreover, we may associate with each edge in E_T exactly two vertices in N_T̃ such that the line connecting these vertices intersects the given edge. The first construction has the advantage that this intersection is orthogonal. But this construction also has some disadvantages which are not present with the second construction:
• The perpendicular bisectors of a triangle may intersect in a point outside the triangle. The intersection point is within the triangle only if its largest angle is at most a right one.
• The perpendicular bisectors of a quadrilateral may not intersect at all. They intersect in a common point inside the quadrilateral only if it is a rectangle.
• The first construction has no three-dimensional analogue.

IV.3.4. Relation to finite element methods. The fact that the elements of a dual mesh can be associated with the vertices of a finite element partition gives a link between finite volume and finite element methods: Consider a function ϕ that is piecewise constant on the dual mesh T, i.e. ϕ ∈ S^{0,−1}(T). With ϕ we associate a continuous piecewise linear function Φ ∈ S^{1,0}(T̃) corresponding to the finite element partition T̃ such that Φ(x_K) = ϕ_K for the vertex x_K ∈ N_T̃ corresponding to K ∈ T. This link sometimes considerably simplifies the analysis of finite volume methods. For example it suggests a very simple and natural approach to a posteriori error estimation and mesh adaptivity for finite volume methods:
• Given the solution ϕ of the finite volume scheme compute the corresponding finite element function Φ.
• Apply a standard a posteriori error estimator to Φ.
• Given the error estimator apply a standard mesh refinement strategy to the finite element mesh T̃ and thus construct a new, locally refined partition T̃'. This is the refinement of T̃.
• Use T̃' to construct a new dual mesh T'.
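In one space dimension the first two steps of this list take only a few lines: each dual box corresponds to a primal vertex, so the cell values ϕ_K can be read directly as vertex values of Φ, and the jumps of Φ′ across the vertices provide a simple residual-type indicator. A sketch under these assumptions:

```python
import numpy as np

def fv_to_fe(vertices, phi):
    """Identify the dual-box values phi (one value per primal vertex) with the
    continuous piecewise linear Phi on the primal 1-d mesh, Phi(x_K) = phi_K,
    and return Phi together with the gradient jumps of Phi at interior vertices."""
    vertices = np.asarray(vertices, float)
    phi = np.asarray(phi, float)
    Phi = lambda x: np.interp(x, vertices, phi)     # piecewise linear interpolant
    slopes = np.diff(phi) / np.diff(vertices)       # Phi' on each primal element
    eta = np.abs(np.diff(slopes))                   # jump indicator per interior vertex
    return Phi, eta
```

Marking the vertices with the largest η, refining the adjacent primal elements, and rebuilding the dual boxes completes the adaptive loop sketched above.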
IV.3.5. Construction of the numerical fluxes. In order to construct the numerical flux F_T(U_K^{i−1}, U_{K'}^{i−1}) corresponding to the common boundary of two elements K and K' in a finite volume partition, we assume that T is the dual mesh to a finite element partition T̃ as described in §IV.3.3 (p. 113). To simplify the notation, we denote the adjacent elements by K_1 and K_2 and write U_1 and U_2 instead of U_{K_1}^{i−1} and U_{K_2}^{i−1}. Moreover, we split ∂K_1 ∩ ∂K_2 into straight edges, if n = 2, or plane faces, if n = 3. Consider such an edge, respectively face, E. Denote by x_1 and x_2 the two vertices in N_T̃ such that their connecting line intersects E (cf. Figure IV.3.3). Note that in general the segment x_1 x_2 will not be orthogonal to E.
In order to construct the contribution of E to the numerical flux F_T(U_1, U_2), we split the flux into an advective part F_{T,adv}(U_1, U_2) and a viscous part F_{T,visc}(U_1, U_2) similar to the splitting of the analytical flux F. For the viscous part of the numerical flux we introduce a local coordinate system η_1, ..., η_n such that the direction η_n is parallel to the direction x_1 x_2 and such that the other directions are tangential to E.
[Figure IV.3.3: Examples of a common edge E (thick lines) of two volumes and corresponding nodes x_1 and x_2 of the underlying primal mesh together with their connecting line (thin lines)]

Then we express all derivatives in F_visc in terms of the new coordinate system, suppress all derivatives not involving η_n, and approximate derivatives with respect to η_n by difference quotients of the form (ϕ_1 − ϕ_2)/‖x_1 − x_2‖_2. Here, ‖x_1 − x_2‖_2 denotes the Euclidean distance of the two points x_1 and x_2.
For the advective part of the numerical flux we need some additional notations. Consider an arbitrary vector V ∈ Rᵐ. Denote by D(F_adv(V) · n_{K_1}) ∈ Rᵐˣᵐ the derivative of F_adv(V) · n_{K_1} with respect to V. One can prove that for many systems in divergence form – including the compressible Navier-Stokes and Euler equations – this matrix can be diagonalized, i.e. there is an invertible m × m matrix Q(V) and a diagonal m × m matrix ∆(V) such that

Q(V)^{−1} D(F_adv(V) · n_{K_1}) Q(V) = ∆(V).

The entries of ∆(V) are of course the eigenvalues of D(F_adv(V) · n_{K_1}). For any real number z we set z⁺ = max{z, 0} and z⁻ = min{z, 0} and define

∆(V)^± = diag( ∆(V)_{11}^±, ..., ∆(V)_{mm}^± )   and   C(V)^± = Q(V) ∆(V)^± Q(V)^{−1},

so that C(V) = C(V)⁺ + C(V)⁻ = D(F_adv(V) · n_{K_1}). With these notations, the Steger-Warming approximation of the advective flux is given by

F_T(U_1, U_2) = C(U_1)⁺ U_1 + C(U_2)⁻ U_2.

Another popular numerical flux is the van Leer approximation, which is given by

F_T(U_1, U_2) = (1/2) [ C(U_1) + C((1/2)(U_1 + U_2))⁺ − C((1/2)(U_1 + U_2))⁻ ] U_1
              + (1/2) [ C(U_2) − C((1/2)(U_1 + U_2))⁺ + C((1/2)(U_1 + U_2))⁻ ] U_2.

In both schemes one has to compute the derivatives D(F_adv(V) · n_{K_1}) and their eigenvalues and eigenvectors for suitable values of V. At first sight the van Leer scheme seems to be more costly than the Steger-Warming scheme since it requires three evaluations of C(V) instead of two. For the compressible Navier-Stokes and Euler equations, however, this can be reduced to one evaluation since for these equations

F_adv(V) · n_{K_1} = C(V) V

holds for all V.
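Both fluxes are easy to exercise in the linear, constant-coefficient case F_adv(U) · n_{K_1} = A U, where C(V) = A for every V. The sketch below builds C⁺ and C⁻ by the eigendecomposition above; it illustrates the formulas only, not a full Euler implementation:

```python
import numpy as np

def split(A):
    """C^+ and C^- with C^± = Q diag(lambda^±) Q^{-1} for a diagonalizable A."""
    lam, Q = np.linalg.eig(A)
    Qinv = np.linalg.inv(Q)
    Cp = (Q * np.maximum(lam, 0.0)) @ Qinv   # Q diag(lambda^+) Q^{-1}
    Cm = (Q * np.minimum(lam, 0.0)) @ Qinv   # Q diag(lambda^-) Q^{-1}
    return Cp, Cm

def steger_warming(A, U1, U2):
    """F_T(U1, U2) = C(U1)^+ U1 + C(U2)^- U2; here C(V) = A is constant."""
    Cp, Cm = split(A)
    return Cp @ U1 + Cm @ U2

def van_leer(A, U1, U2):
    """Van Leer flux with C evaluated at U1, U2 and the mean (all equal to A here)."""
    Cp, Cm = split(A)
    return 0.5 * (A + Cp - Cm) @ U1 + 0.5 * (A - Cp + Cm) @ U2
```

Since C⁺ + C⁻ = C, both fluxes are consistent: F_T(U, U) = C U. In the constant-coefficient case they even coincide, because (1/2)(C + C⁺ − C⁻) = C⁺ and (1/2)(C − C⁺ + C⁻) = C⁻.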
Summary

I Fundamentals

1. Modelization
Lagrangian and Eulerian coordinate systems, Lagrangian coordinate system moves with the fluid, Eulerian coordinate system is fixed, velocity, transport theorem, conservation of mass, Cauchy theorem, interior forces, exterior forces, conservation of momentum, conservation of energy, total energy, constitutive laws, equation of state, deformation tensor, dynamic viscosities, kinematic viscosity, compressible Navier-Stokes equations in conservative form, Euler equations, compressible Navier-Stokes equations in non-conservative form, incompressible ⇐⇒ div v = 0, instationary incompressible Navier-Stokes equations, Reynolds' number, stationary incompressible Navier-Stokes equations, Stokes equations, initial and boundary conditions, no-slip boundary condition, slip boundary condition

2. Notations and basic results
Notations for domains, functions, vector fields, tensors and derivatives, closure of a set, support of a function, C_0^∞(Ω), differentiation of products, integration by parts formulae, weak derivatives, classical and weak derivatives coincide for smooth functions, weak derivative of |x|, Sobolev spaces and norms, H¹ functions do not admit point values, a piecewise smooth function is in H¹ iff it is continuous, trace spaces, Friedrichs and Poincaré inequalities, finite element partitions, admissibility, affine equivalence, shape-regularity, in two dimensions: shape regularity ⇐⇒ smallest angle bounded away from zero, triangles and parallelograms may be mixed, Courant triangulation, polynomial spaces R_k(K), finite element spaces S^{k,0}(T), S_0^{k,0}(T), S^{k,−1}(T), approximation properties, nodal shape functions, representation of nodal shape functions in terms of element vertices, hierarchical basis, element bubble functions and their properties, edge and face bubble functions and their properties, jumps across edges and faces

II Stationary linear problems

1. Discretization of the Stokes equations. A first attempt
Variational formulation of the Poisson equation and standard finite element discretization, variational formulation of the Stokes equations using solenoidal velocities, discretization with continuous, piecewise linear, solenoidal velocities does not work, approximation with continuous, piecewise polynomial, solenoidal velocities requires a polynomial degree ≥ 5, remedies: mixed methods, Petrov-Galerkin stabilization, non-conforming methods, streamfunction formulation
2. Mixed finite element discretizations of the Stokes equations
Mixed variational formulation of the Stokes equations, existence and uniqueness of a solution of the mixed variational problem, continuous dependence on the force term, relation to a constrained minimization problem, velocity is a minimizer, pressure is a maximizer, general structure of a mixed finite element discretization of the Stokes equations, structure of the discrete mixed problem, necessary condition: dim X(T) ≥ dim Y(T), piecewise continuous linear velocities and piecewise constant pressures on a triangular grid do not work, checkerboard instability, inf-sup condition, a priori error estimates, the pair S_0^{k,0}(T)ⁿ, S^{k−1,0}(T) ∩ L_0²(Ω) is stable for every k ≥ 3, the mini element, the Taylor-Hood element and the modified Taylor-Hood element are stable, the pair S_0^{n+m−1,0}(T)ⁿ, S^{m−1,−1}(T) ∩ L_0²(Ω) is stable, adding element bubble functions to the velocity space stabilizes the pair S_0^{m,0}(T)ⁿ, S^{m−1,−1}(T) ∩ L_0²(Ω), adding edge/face bubble functions to the velocity space stabilizes low-order pairs with discontinuous pressures, choice of spaces: polynomial degree of the pressure one less than the polynomial degree of the velocity is optimal

3. Petrov-Galerkin stabilization
Splitting the velocity of the mini element in a linear and a bubble part yields a P1/P1 discretization of a modified Stokes problem with a consistent penalty, general form of Petrov-Galerkin discretizations, choice of stabilization parameters: δ_max ≈ δ_min, discrete mass equation does not imply that the discrete velocity is solenoidal, a priori error estimates, structure of the discrete problems

4. Non-conforming methods
Aim: relax the continuity condition for the discrete velocities, the Crouzeix-Raviart element and its properties, a priori error estimates, construction of a local solenoidal basis, computation of the velocity by solving a symmetric positive definite system with condition O(h⁻⁴), computation of the pressure by a simple postprocessing

5. Streamfunction formulation
Aim: obtain an exactly solenoidal conforming velocity approximation, v : Ω → R² solenoidal ⇐⇒ ∃ streamfunction ϕ with v = curl ϕ, vorticity, curl operators, chain rules, integration by parts formula, Stokes equations equivalent to the biharmonic equation for the streamfunction, no three-dimensional equivalent, variational formulation of the biharmonic equation, biharmonic equation only yields the velocity, there is no constructive way to compute the pressure, a conforming discretization requires a polynomial degree of at least 5, saddle-point formulation of the biharmonic equation and corresponding mixed finite element discretization, non-conforming discretization using the Morley element, relation between the Morley element discretization of the biharmonic equation and the Crouzeix-Raviart element discretization of the Stokes equations, no higher-order equivalent, drawbacks: difficulties in the presence of re-entrant corners
6. Solution of the discrete problems
General structure of the discrete problem, stiffness matrix is indefinite ⇒ standard iterative methods such as CG cannot be applied, indefiniteness cannot be circumvented, idea: eliminate the velocity from the discrete Stokes equations and apply a CG algorithm to the resulting problem for the pressure, the Uzawa algorithm, Uzawa falls in the category of pressure correction schemes, Uzawa is very simple but very slow, the improved Uzawa algorithm: compute velocities approximately by applying a standard multigrid algorithm to the corresponding discrete Poisson problems, the CG algorithm and its properties, stabilized bi-conjugate gradient algorithm for non-symmetric and indefinite problems, multigrid algorithm, basic ingredients: smoothing, prolongation, restriction, coarse grid correction, V- and W-cycles, smoothing methods: symmetric Gauss-Seidel, squared Richardson, Vanka methods, convergence rate is independent of the mesh-size

7. A posteriori error estimation and adaptive grid refinement
A priori error estimates only give information on the asymptotic behavior of the error, they give no information on its actual size nor on its spatial distribution, general structure of the adaptive algorithm, ingredients: a posteriori error estimator, marking strategy, a residual error estimator for the Stokes equations and its ingredients, global upper and local lower bounds on the error, reliability and efficiency, error estimators based on the solution of auxiliary discrete local Stokes problems with Neumann and Dirichlet boundary conditions, equivalence of the estimators, maximum and equilibration strategies for marking elements, regular refinement, regular subdivision of triangles, parallelograms and parallelepipeds by joining the midpoints of edges, regular subdivision of tetrahedra requires an additional device for handling the double pyramid in the interior, additional refinement, green, blue and purple refinement of elements, additional rules for avoiding too obtuse or acute elements, required data structures

III Stationary nonlinear problems

1. Discretization of the stationary Navier-Stokes equations
Variational formulation, the Reynolds' number, symmetric formulation of the nonlinear term, properties: at least one solution, exactly one solution for sufficiently small Reynolds' number, at most finitely many turning and bifurcation points, solution is differentiable w.r.t. the Reynolds' number, fixed-point formulation, Stokes operator, finite element discretization, finite element spaces must be stable, fixed-point formulation of the discrete problem, discrete Stokes operator, discrete problem has properties similar to the analytical problem, a priori error estimates, solution has the same regularity as the one of the corresponding Stokes problem, upwind difference approximation of the convective term, a one-dimensional example, necessity of true upwinding with upwind direction depending on the solution, streamline-diffusion method, idea: add additional viscosity in streamline direction, simultaneous stabilization of convective term and divergence constraint

2. Solution of the discrete nonlinear problems
Fixed-point iteration, convergence rate 1 − Re²‖f‖_0 ⇒ practicable only for small Reynolds' number, Newton iteration, solve in each step modified Stokes problems with convection, path tracking, nonlinear CG-algorithm, nonlinear multigrid algorithms, similar to multigrid for linear problems: only change the smoothing

3. Adaptivity for nonlinear problems
General structure of the adaptive algorithm for nonlinear problems is the same as for linear ones, only the error estimators do change, residual error estimator: add the nonlinear convection term to the element residuals, error estimators based on the solution of auxiliary local discrete problems: discrete problems remain linear, nonlinearity only enters in the right-hand side

IV Instationary problems

1. Discretization of the instationary Navier-Stokes equations
Variational formulation, variational formulation admits at least one solution, in two dimensions the solution is unique, in three dimensions uniqueness can only be proved within a restricted class of more regular functions in which existence cannot be guaranteed, any solution behaves like √t for small times, numerical methods for the solution of ordinary differential equations: θ-scheme, explicit and implicit Euler method, Crank-Nicolson scheme, Runge-Kutta methods, SDIRK schemes, implicit methods are mandatory for the time-discretization of parabolic problems, SDIRK schemes are particularly well suited, implicit time-discretization leads to a stationary Navier-Stokes problem in each time step, bounds must be uniform w.r.t. the Reynolds' number, method of lines: in each time-step a discrete stationary Navier-Stokes problem must be solved, main drawback is the fixed (w.r.t. time) spatial mesh, Rothe's method: first discretize in time and then in space, allows to use varying space discretizations in time, main drawback is missing mathematically sound control of the temporal step-size, space-time finite elements: choose different spatial meshes and finite element spaces at each time step, allows a simultaneous spatial and temporal adaptivity, operator splitting, idea: treat nonlinearity and divergence constraint separately, transport-diffusion algorithm: temporal derivative and convection term are simultaneously discretized by backward differences along trajectories, only Stokes problems must be solved in each time step, but trajectories must be computed at each time step for every node of a quadrature formula

2. Space-time adaptivity
Distinction between spatial and temporal error, residual error estimator, spatial and temporal contribution, required additional features: upper and lower bounds on the spatial and temporal error, step length control, possibility for coarsening the spatial mesh, time adaptivity, space adaptivity

3. Discretization of compressible and inviscid problems
Systems in divergence form, flux, splitting into advective and viscous part, compressible Navier-Stokes and Euler equations as systems in divergence form, finite volume schemes: partitions, numerical fluxes, dual meshes, construction via perpendicular bisectors or via barycentres, relation to finite element methods, construction of the numerical fluxes: splitting into viscous and advective part, Steger-Warming scheme, van Leer scheme