
Numerical method II lecture notes

Finite difference method

Joe Huang

June 15, 2022


Contents

I Boundary Value Problems and Iterative Methods 7


1 Finite difference approximation 9
1.1 Truncation errors . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 9
1.2 Deriving the finite difference approximations . . . . . . . . . . . . . . . . . . 10
1.3 Second order derivative . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 12
1.4 Richardson’s Extrapolation . . . . . . . . . . . . . . . . . . . . . . . . . . . . 12

2 Steady States and Boundary Value Problems (BVP) 15


2.1 The heat equation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 15
2.2 The steady state problem . . . . . . . . . . . . . . . . . . . . . . . . . . . . 15
2.3 A Finite Difference Method . . . . . . . . . . . . . . . . . . . . . . . . . . . 16
2.4 Error in grid function . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 17
2.5 Neumann’s boundary conditions . . . . . . . . . . . . . . . . . . . . . . . . . 21
2.6 Variable coefficients . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 22
2.7 Nonlinear equations . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 23

3 Elliptic Equations 27
3.1 Steady state heat equation . . . . . . . . . . . . . . . . . . . . . . . . . . . . 27
3.2 5-point Stencil . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 27
3.3 Matrix conditioning . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 29
3.4 9-point Laplacian . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 30
3.5 Higher order methods . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 31
3.5.1 Fourth order differencing . . . . . . . . . . . . . . . . . . . . . . . . . 31
3.6 Method for solving linear systems . . . . . . . . . . . . . . . . . . . . . . . . 31
3.6.1 Gaussian elimination . . . . . . . . . . . . . . . . . . . . . . . . . . . 31

4 Iterative methods 33
4.1 Jacobi and Gauss-Seidel . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 33
4.2 Analysis of matrix splitting methods . . . . . . . . . . . . . . . . . . . . . . 34
4.2.1 Jacobi’s method . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 34
4.2.2 Gauss-Seidel . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 34
4.2.3 Error and convergence . . . . . . . . . . . . . . . . . . . . . . . . . . 35
4.2.4 Rate of convergence . . . . . . . . . . . . . . . . . . . . . . . . . . . . 35


4.2.5 Successive Over Relaxation (SOR) . . . . . . . . . . . . . . . . . . . 37

II Initial Value Problems 41


5 The Initial Value Problem (IVP) for Ordinary Differential Equations 43
5.1 Finite difference method . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 43
5.1.1 Multistep method . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 44
5.2 One step error . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 45
5.3 Taylor series method . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 45
5.4 Runge-Kutta methods (RK) . . . . . . . . . . . . . . . . . . . . . . . . . 45
5.4.1 2nd order Runge-Kutta method (RK) . . . . . . . . . . . . . . 45
5.4.2 General 2-stage RK . . . . . . . . . . . . . . . . . . . . . . . . . . . . 46
5.4.3 4th order RK . . . . . . . . . . . . . . . . . . . . . . . . . . . . 47
5.5 Linear multistep method (LMM) . . . . . . . . . . . . . . . . . . . . . . . . 47
5.5.1 LTE . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 48

6 Zero stability and convergence 51


6.1 Convergence . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 51
6.2 General linear case . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 52
6.2.1 Nonlinear case . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 52
6.3 Zero stability of general 1-step methods . . . . . . . . . . . . . . . . . . . . . 53
6.3.1 Consistency . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 53
6.3.2 Convergence . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 54
6.4 Zero stability and convergence of LMM . . . . . . . . . . . . . . . . . . . . . 54
6.4.1 Error estimates . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 54
6.5 Difference equation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 55

7 Absolute Stability for Ordinary Differential Equations 59


7.1 Generalization of difference equation . . . . . . . . . . . . . . . . . . . . . . 59
7.2 Absolute stability . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 60
7.3 Stability regions for linear multistep methods . . . . . . . . . . . . . . . . . 62
7.4 Boundary locus method . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 64
7.5 Linear systems . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 66
7.6 Stiff systems . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 67
7.6.1 Numerical methods for stiff problems . . . . . . . . . . . . . . . . . . 67
7.7 L-Stability . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 67
7.7.1 Backward differentiation formula methods (BDF) . . . . . . . . . . . 68

8 Diffusion Equation and Parabolic Problems 69


8.1 Local truncation error and order of accuracy . . . . . . . . . . . . . . . . . . 70
8.2 Method of line discretization . . . . . . . . . . . . . . . . . . . . . . . . . . . 71
8.3 Convergence of method I . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 72


8.3.1 Convergence in matrix form . . . . . . . . . . . . . . . . . . . . . . . 74


8.4 Von Neumann analysis . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 75
8.4.1 Fourier analysis . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 75
8.4.2 Von Neumann analysis . . . . . . . . . . . . . . . . . . . . . . . 77
8.4.3 Discrete Fourier analysis . . . . . . . . . . . . . . . . . . . . . . . . . 78
8.5 Energy estimates . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 79
8.5.1 Energy method . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 80
8.6 Stability analysis of Crank-Nicolson method . . . . . . . . . . . . . . . . 81
8.6.1 Method of lines . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 82
8.6.2 Energy method . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 82
8.6.3 Von Neumann analysis . . . . . . . . . . . . . . . . . . . . . . . . . . 83
8.7 2D heat equation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 83

9 Advection Equations and Hyperbolic Systems 85


9.1 Introduction . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 85
9.1.1 Advection equation . . . . . . . . . . . . . . . . . . . . . . . . . . . . 85
9.1.2 Linear systems . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 86
9.1.3 Nonlinear systems of conservation laws . . . . . . . . . . . . . . . . . 88
9.2 Difference schemes . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 88
9.2.1 Centered difference scheme . . . . . . . . . . . . . . . . . . . . . . . . 89
9.2.2 Downwind scheme . . . . . . . . . . . . . . . . . . . . . . . . . . . . 90
9.2.3 Upwind scheme . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 90
9.2.4 Lax-Friedrichs scheme (LxF) . . . . . . . . . . . . . . . . . . . . 93
9.3 Von Neumann analysis . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 94
9.3.1 Centered difference scheme . . . . . . . . . . . . . . . . . . . . . . . . 95
9.3.2 Downwind scheme . . . . . . . . . . . . . . . . . . . . . . . . . . . . 95
9.3.3 Upwind scheme . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 96
9.4 Lax-Wendroff scheme (LW) . . . . . . . . . . . . . . . . . . . . . . . . . 96
9.4.1 Taylor series approach . . . . . . . . . . . . . . . . . . . . . . . . . . 96
9.4.2 Accuracy . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 97
9.4.3 Stability . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 97
9.4.4 Fourier analysis . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 98
9.5 Phase error . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 98
9.6 Characteristic tracing and interpolation . . . . . . . . . . . . . . . . . . . . . 99
9.7 The modified equation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 100
9.7.1 Modified equation for LxF . . . . . . . . . . . . . . . . . . . . . . . . 101
9.8 Hyperbolic systems . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 102
9.9 Numerical methods for hyperbolic systems . . . . . . . . . . . . . . . . . . . 102
9.9.1 Lax-Wendroff . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 103
9.9.2 Upwind . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 103
9.10 Initial boundary value problems . . . . . . . . . . . . . . . . . . . . . . . . . 104
9.11 Nonlinear advection equation . . . . . . . . . . . . . . . . . . . . . . . . . . 104


Introduction
We collect some basic linear algebra facts in this opening section. They are all standard, so we omit the proofs.

(I) A symmetric square matrix has a complete set of eigenvectors and all real eigenvalues.

(II) Suppose λ is an eigenvalue of A. Then λ − σ is an eigenvalue of A − σI, where I is the identity matrix of the same size as A.

(III) Suppose the matrix A is invertible and λ is an eigenvalue of A. Then 1/λ is an eigenvalue of A^{-1}.

(IV) If || · ||_α, || · ||_β are equivalent norms, then there exist constants c, C such that

c|| · ||_α ≤ || · ||_β ≤ C|| · ||_α

(V) ||A||_1 = max_{1≤j≤n} Σ_{i=1}^{n} |a_{ij}| (maximum column sum)

(VI) ||A||_∞ = max_{1≤i≤n} Σ_{j=1}^{n} |a_{ij}| (maximum row sum)

(VII) ||A||_2 = sqrt(ρ(A^T A)), where ρ(B) is the spectral radius of B, the maximum modulus over all eigenvalues of B.

(VIII) If A is symmetric, then ||A||_2 = ρ(A).

Part I

Boundary Value Problems and Iterative Methods
Chapter 1

Finite difference approximation

Here we present an introduction to the finite difference approximation method. Let u(x) be a smooth function. Suppose we wish to approximate u'(x) at a point x = x0 using the definition:

u'(x0) = lim_{h→0} (u(x0 + h) − u(x0))/h

There are several common variations:

(1) D+ u(x0) = (u(x0 + h) − u(x0))/h : forward difference

(2) D− u(x0) = (u(x0) − u(x0 − h))/h : backward difference

(3) D0 u(x0) = (u(x0 + h) − u(x0 − h))/(2h) : centered difference

1.1 Truncation errors

A natural question to ask here is how accurate is the approximation?

Definition 1.1.1. Let E(h) be the error (truncation error) of the approximation with step size h. If E(h) ∼ Ch^p, we say the approximation is pth order accurate.


Order of accuracy using Taylor series


Since we assume u(x) to be smooth, it has a Taylor series expansion. As an example, we examine the centered difference:

u(x0 + h) = u(x0) + hu'(x0) + (1/2)h²u''(x0) + (1/6)h³u'''(x0) + O(h⁴)

u(x0 − h) = u(x0) − hu'(x0) + (1/2)h²u''(x0) − (1/6)h³u'''(x0) + O(h⁴)

D0 u(x0) = (u(x0 + h) − u(x0 − h))/(2h) = (1/(2h))(2hu'(x0) + (1/3)h³u'''(x0) + O(h⁵))

|D0 u(x0) − u'(x0)| ∼ Ch²,  C = |u'''(x0)|/6, for h small enough.

Therefore, the centered difference scheme is a second order approximation.
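These orders can be checked numerically. Below is a minimal sketch; the test function u = sin and the point x0 = 1 are illustrative choices, not taken from the notes.

```python
import numpy as np

# Difference quotients for u'(x0); u = sin and x0 = 1.0 are
# illustrative choices, not from the notes.
u, du = np.sin, np.cos
x0 = 1.0

def D_plus(u, x0, h):  return (u(x0 + h) - u(x0)) / h          # forward
def D_minus(u, x0, h): return (u(x0) - u(x0 - h)) / h          # backward
def D_zero(u, x0, h):  return (u(x0 + h) - u(x0 - h)) / (2*h)  # centered

def observed_order(D):
    # Halving h scales the error by about 2^p for a p-th order formula.
    e1 = abs(D(u, x0, 0.1) - du(x0))
    e2 = abs(D(u, x0, 0.05) - du(x0))
    return np.log2(e1 / e2)

print(observed_order(D_plus))   # about 1: first order
print(observed_order(D_zero))   # about 2: second order
```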

1.2 Deriving the finite difference approximations


Method of undetermined coefficients
Suppose we wish to approximate u'(x0) using u(x0), u(x0 − h), u(x0 − 2h). We assume our scheme has the form

Du(x0) = au(x0) + bu(x0 − h) + cu(x0 − 2h)

where a, b, c are constants to be determined. Taylor expanding about x = x0, we get:

Du(x0) = (a + b + c)u(x0) − (b + 2c)hu'(x0) + (b + 4c)(h²/2)u''(x0) − (b + 8c)(h³/6)u'''(x0) + h.o.t.

Since we seek Du(x0) − u'(x0) to be at least O(h²), we require:

a + b + c = 0,  b + 2c = −1/h,  b + 4c = 0  ⇒  a = 3/(2h),  b = −2/h,  c = 1/(2h)

We thus derive a second order scheme:

D2 u(x0) = (1/(2h))(3u(x0) − 4u(x0 − h) + u(x0 − 2h))
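The derived one-sided scheme can be sanity-checked numerically; in the sketch below, the test function u = exp and the point x0 = 0.3 are illustrative assumptions.

```python
import numpy as np

# One-sided scheme D2 u(x0) = (3u(x0) - 4u(x0-h) + u(x0-2h)) / (2h);
# u = exp and x0 = 0.3 are illustrative choices.
u, du, x0 = np.exp, np.exp, 0.3

def D2(h):
    return (3*u(x0) - 4*u(x0 - h) + u(x0 - 2*h)) / (2*h)

errs = [abs(D2(h) - du(x0)) for h in (0.1, 0.05, 0.025)]
orders = [np.log2(errs[i] / errs[i+1]) for i in range(2)]
print(orders)   # each close to 2, confirming second order
```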

Polynomial interpolation
We can approximate u(x) by p(x), an interpolating polynomial at chosen points:

p(xi) = u(xi), i = 0, · · · , n  ⇒  u(x) ≈ p(x) and u'(x) ≈ p'(x)

There are a couple of ways in which one can construct the interpolating polynomial. As an example, suppose we wish to find a polynomial p(x) of degree 3 satisfying p(xi) = ui, i = 0, 1, 2, 3.


(I) Regular interpolation using the Vandermonde matrix:

p(x) = a + bx + cx² + dx³
p(x0) = a + bx0 + cx0² + dx0³ = u0
 ⋮
p(x3) = a + bx3 + cx3² + dx3³ = u3

We have

[1  x0  x0²  x0³] [a]   [u0]
[1  x1  x1²  x1³] [b] = [u1]
[1  x2  x2²  x2³] [c]   [u2]
[1  x3  x3²  x3³] [d]   [u3]

(II) We can reduce the size of the Vandermonde matrix by setting a = u0 and:

p(x) = u0 + b(x − x0) + c(x − x0)² + d(x − x0)³
p(x1) = u0 + b(x1 − x0) + c(x1 − x0)² + d(x1 − x0)³ = u1
 ⋮

The system then becomes

[x1 − x0  (x1 − x0)²  (x1 − x0)³] [b]   [u1 − u0]
[x2 − x0  (x2 − x0)²  (x2 − x0)³] [c] = [u2 − u0]
[x3 − x0  (x3 − x0)²  (x3 − x0)³] [d]   [u3 − u0]

(III) Newton's form of the interpolating polynomial:

p(x) = a + b(x − x0) + c(x − x0)(x − x1) + d(x − x0)(x − x1)(x − x2)
p(x0) = a = u0
p(x1) = a + b(x1 − x0) = u1
p(x2) = a + b(x2 − x0) + c(x2 − x0)(x2 − x1) = u2
p(x3) = a + b(x3 − x0) + c(x3 − x0)(x3 − x1) + d(x3 − x0)(x3 − x1)(x3 − x2) = u3

This is a linear, triangular, 4 × 4 system, which is easier to solve than the previous two.

(IV) Lagrange interpolation
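Newton's triangular system can be solved by forward substitution. A minimal sketch, using hypothetical data points:

```python
import numpy as np

# Newton's form: the coefficients solve a lower triangular system, so
# a triangular solve (forward substitution) suffices.
# The data points below are hypothetical.
x = np.array([0.0, 1.0, 2.0, 4.0])
f = np.array([1.0, 3.0, 2.0, 5.0])
n = len(x)

# Column k holds the products (x_i - x_0)...(x_i - x_{k-1}).
M = np.ones((n, n))
for k in range(1, n):
    M[:, k] = M[:, k-1] * (x - x[k-1])
coef = np.linalg.solve(np.tril(M), f)   # triangular system: O(n^2) work

def p(t):
    # Evaluate the Newton form a + b(t-x0) + c(t-x0)(t-x1) + ...
    total, prod = 0.0, 1.0
    for k in range(n):
        total += coef[k] * prod
        prod *= (t - x[k])
    return total

assert np.allclose([p(t) for t in x], f)   # p interpolates the data
```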

Problem 1.2.1. Let p(x) be the quadratic polynomial interpolating u(x) at x0 , x0 −h, x0 −2h.
Show that p0 (x0 ) = D2 u(x0 ).


1.3 Second order derivative


We may derive approximations to the second order derivative using similar methods.

(I) Taylor series:

D² u(x0) = (1/h²)(u(x0 − h) − 2u(x0) + u(x0 + h))

(II) Undetermined coefficients: require the approximation to be exact for cubic polynomials.

(III) Use lower order derivative approximations, since u''(x) = (d/dx) u'(x):

D² u(x0) = D+(D− u(x0))
         = (1/h)(D− u(x0 + h) − D− u(x0))
         = (1/h)((u(x0 + h) − u(x0))/h − (u(x0) − u(x0 − h))/h)
         = (1/h²)(u(x0 − h) − 2u(x0) + u(x0 + h))
         = D−(D+ u(x0))

This is called the centered difference approximation to the second derivative.

1.4 Richardson’s Extrapolation


Recall the Taylor expansion for the forward difference:

D+ u(x0) = u'(x0) + (h/2)u''(x0) + (h²/6)u'''(x0) + · · ·
         = u'(x0) + c1 h + c2 h² + c3 h³ + · · ·    (ck independent of h)

Now, we wish to obtain a higher order approximation from this scheme. The idea is to compute D+ u(x0) with step sizes h, h/2, h/4, and then combine the results to eliminate the O(h) terms.

A0,0 = D+^h u(x0) = u'(x0) + c1 h + c2 h² + c3 h³ + · · ·
A1,0 = D+^{h/2} u(x0) = u'(x0) + c1 (h/2) + c2 (h/2)² + c3 (h/2)³ + · · ·
A1,1 = 2A1,0 − A0,0 = u'(x0) + d2 h² + d3 h³ + · · ·
A2,0 = D+^{h/4} u(x0) = u'(x0) + c1 (h/4) + c2 (h/4)² + c3 (h/4)³ + · · ·
A2,1 = 2A2,0 − A1,0 = u'(x0) + d2 (h/2)² + d3 (h/2)³ + · · ·


In general, we have:

Aj,1 = 2Aj,0 − Aj−1,0
Aj,2 = (4Aj,1 − Aj−1,1)/3
Aj,3 = (8Aj,2 − Aj−1,2)/7
Figure 1.1 demonstrates the procedure.

A0,0

A1,0 A1,1
A2,0 A2,1 A2,2
A3,0 A3,1 A3,2 A3,3
Figure 1.1: Illustration of Richardson Extrapolation
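The table of Figure 1.1 can be built programmatically; a minimal sketch, with u = sin, x0 = 1, and h = 0.1 as illustrative choices:

```python
import numpy as np

# Richardson extrapolation table A[j][k] from forward differences with
# steps h / 2^j; u = sin, x0 = 1.0, h = 0.1 are illustrative choices.
u, du, x0, h = np.sin, np.cos, 1.0, 0.1

def A0(j):
    hj = h / 2**j
    return (u(x0 + hj) - u(x0)) / hj        # A_{j,0} = D+ with step h/2^j

levels = 4
A = [[A0(j)] for j in range(levels)]
for j in range(1, levels):
    for k in range(1, j + 1):
        # A_{j,k} = (2^k A_{j,k-1} - A_{j-1,k-1}) / (2^k - 1)
        A[j].append((2**k * A[j][k-1] - A[j-1][k-1]) / (2**k - 1))

print(abs(A[0][0] - du(x0)))    # O(h) error of the raw forward difference
print(abs(A[3][3] - du(x0)))    # much smaller after extrapolation
```

Each column of the table eliminates one more power of h, matching the formulas above.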

Chapter 2

Steady States and Boundary Value Problems (BVP)

Boundary value problems typically arise as the steady state limit of a time-dependent problem.

2.1 The heat equation


Example 2.1.1. Heat flow in a heat conducting rod.

Let U(x, t) be the temperature, ψ(x, t) the heat source (or sink), k(x) the heat conduction coefficient, and k(x)Ux the heat flux (rate of heat flow).
Heat balance in a rod segment:

∆xU(x, t + ∆t) ≈ ∆xU(x, t) + ∆t(k(x + ∆x)Ux(x + ∆x, t) − k(x)Ux(x, t)) + ∆t∆xψ(x, t)

Divide both sides by ∆x∆t:

(U(x, t + ∆t) − U(x, t))/∆t ≈ (k(x + ∆x)Ux(x + ∆x, t) − k(x)Ux(x, t))/∆x + ψ(x, t)

Letting ∆x, ∆t → 0, we get

ut = (k(x)ux)x + ψ(x, t)

with boundary conditions

u(a, t) = α(t), u(b, t) = β(t)   (Dirichlet BC's)
or
ux(a, t) = σ1(t), ux(b, t) = σ2(t)   (Neumann BC's)

2.2 The steady state problem


The steady state problem assumes that ψ, α, β are all time independent. As t → ∞, U approaches its steady state. Thus, we arrive at the following problem:

u'' = f := −ψ,  with u(a) = α, u(b) = β  or  u'(a) = σ1, u'(b) = σ2


Note that for Neumann boundary conditions, we encounter the following issues:

(I) Non-uniqueness:
u'' = 0,  u'(0) = u'(1) = 0
If u is a solution, then u + c is also a solution for any constant c. The boundary condition implies no heat flux through the boundaries, so u is a constant.

(II) Nonexistence:
u'' = f,  f = −ψ < 0 ⇒ positive heat source
With the boundary conditions u'(0) = u'(1) = 0, there is no heat flux through the boundaries. Thus, we cannot expect a steady state solution.

(III) If at least one boundary condition is Dirichlet, then we are fine.

2.3 A Finite Difference Method


We consider the following BVP:

u''(x) = f(x) on (0, 1),  u(0) = α, u(1) = β

Figure 2.1: Grid points, 0 = x0 < x1 < · · · < xn+1 = 1

As shown in Figure 2.1, we discretize the interval into n + 1 equally-spaced subintervals, where

0 = x0, x1, · · · , xn+1 = 1 : grid points,  u0, u1, · · · , un+1 : grid function values,

uj ≈ u(xj);  u0 = α, un+1 = β,  fj = f(xj)

The numerical scheme uses the grid values to approximate u''(x). For example, the centered difference scheme

D²u(xj) = (uj−1 − 2uj + uj+1)/h² ≈ u''(xj) = fj

gives

(uj−1 − 2uj + uj+1)/h² = fj,  j = 1, · · · , n


Recall u0 = α, un+1 = β:

j = 1:  (α − 2u1 + u2)/h² = f1
j = n:  (un−1 − 2un + β)/h² = fn

In matrix form, we have AU = F, where A is the n × n tridiagonal matrix

A = (1/h²) tridiag(1, −2, 1)

and

U = (u1, u2, · · · , un)^T,  F = (f1 − α/h², f2, · · · , fn−1, fn − β/h²)^T

Note this is a linear, tridiagonal, diagonally dominant system (|a_ii| ≥ Σ_{j≠i} |a_ij|). Thus, we have a unique solution

U = A^{-1} F
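The scheme AU = F can be assembled and solved directly. A minimal sketch with a manufactured solution (u = sin(πx) is an illustrative choice, not from the notes):

```python
import numpy as np

# Solve u'' = f, u(0) = alpha, u(1) = beta with the centered scheme.
# Manufactured example (illustrative): u = sin(pi x), so
# f = -pi^2 sin(pi x) and alpha = beta = 0.
n = 50
h = 1.0 / (n + 1)
x = np.linspace(h, 1 - h, n)            # interior grid points x_1..x_n
f = -np.pi**2 * np.sin(np.pi * x)
alpha, beta = 0.0, 0.0

A = (np.diag(-2.0*np.ones(n)) + np.diag(np.ones(n-1), 1)
     + np.diag(np.ones(n-1), -1)) / h**2
F = f.copy()
F[0] -= alpha / h**2                    # boundary corrections
F[-1] -= beta / h**2

U = np.linalg.solve(A, F)
err = np.max(np.abs(U - np.sin(np.pi * x)))
print(err)                              # O(h^2) global error
```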

2.4 Error in grid function


A natural question to ask is how good is the approximation? We already know the centered
difference approximation is second order. We wish to achieve the same order for the numerical
solution as well.
Definition 2.4.1. Let U = (u1, · · · , un)^T be the numerical solution and Û = (u(x1), · · · , u(xn))^T the exact solution evaluated on the grid. We define the global error E := U − Û.
The goal here is to show E = O(h²) as h → 0 for the finite difference scheme.

Grid function norms and norm equivalence

For a grid function E on [a, b] (so that nh ≤ b − a):

||E||_1 = h Σ_j |E_j|,  ||E||_2 = (h Σ_j |E_j|²)^{1/2},  ||E||_p = (h Σ_j |E_j|^p)^{1/p},  ||E||_∞ = max_j |E_j|

h ||E||_∞ ≤ ||E||_1 ≤ nh ||E||_∞ = (b − a) ||E||_∞
√h ||E||_∞ ≤ ||E||_2 ≤ √(nh) ||E||_∞ = √(b − a) ||E||_∞
√h ||E||_2 ≤ ||E||_1 ≤ √(nh) ||E||_2 = √(b − a) ||E||_2

As h → 0, norm equivalence does not carry over uniformly: the constants depend on the dimension n, which grows as h → 0. (In linear algebra, all norms on a fixed finite dimensional space are equivalent.)


Local Truncation Error (LTE)

The scheme is

(1/h²)(uj−1 − 2uj + uj+1) = fj,  j = 1, · · · , n

We substitute the exact solution into the difference approximation:

τj = (1/h²)(u(xj−1) − 2u(xj) + u(xj+1)) − fj
   = u''(xj) + (h²/12)u''''(xj) + O(h⁴) − f(xj)
   = (h²/12)u''''(xj) + O(h⁴)    (since f(xj) = u''(xj))

Thus, τj = O(h²) as h → 0.

Definition 2.4.2. The local truncation error vector is τ = (τ1, τ2, · · · , τn)^T.

Global error

Recall:
AU = F and AÛ = F + τ  ⇒  A(U − Û) = −τ, i.e., AE = −τ

Alternatively, componentwise,

(1/h²)(Ej−1 − 2Ej + Ej+1) = −τj,  j = 1, · · · , n

For Dirichlet boundary conditions, we have E0 = En+1 = 0.
Now, let A^h denote the discretization matrix with step size h. Then

A^h E^h = −τ^h  ⇔  E^h = −(A^h)^{-1} τ^h  ⇒  ||E^h|| = ||(A^h)^{-1} τ^h|| ≤ ||(A^h)^{-1}|| ||τ^h||

Definition 2.4.3.
(a) Consistency: the method is consistent if ||τ^h|| → 0 as h → 0.
(b) Stability: the method is stable if ||(A^h)^{-1}|| is bounded for all sufficiently small h.
(c) Convergence: the method is convergent if ||E^h|| → 0 as h → 0.

Intuitively, if we have both consistency and stability, we should be able to get convergence. However, it is usually much harder to show stability than consistency.


l² stability

A^h = (1/h²) tridiag(1, −2, 1)

We note A^h is a symmetric tridiagonal matrix, so (A^h)^{-1} is also symmetric. Let λk denote the eigenvalues of A^h and rk the corresponding eigenvectors. Recall, for symmetric matrices:

||A||_2 = ρ(A) = max_k |λk|,  ||A^{-1}||_2 = ρ(A^{-1}) = max_k |λk^{-1}| = 1/min_k |λk|

Problem 2.4.1. Show that the eigenvalues and eigenvectors of A^h are λk = (2/h²)(cos(kπh) − 1), k = 1, 2, · · · , n, and (rk)_j = sin(kπjh), j = 1, · · · , n.

Problem 2.4.2. min_k |λk| = |λ1| = |(2/h²)(cos(πh) − 1)| = |−π² + O(h²)|. Therefore, |λ1| → π² as h → 0.

Hence

||(A^h)^{-1}||_2 → 1/π² as h → 0,  and  ||E^h||_2 ≤ ||(A^h)^{-1}||_2 ||τ^h||_2 ≈ (1/π²)(h²/12)||u''''||_2
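The eigenvalue formula from Problem 2.4.1 is easy to verify numerically (the grid size n = 20 below is an arbitrary choice):

```python
import numpy as np

# Verify lambda_k = (2/h^2)(cos(k pi h) - 1) for A^h (n = 20 is arbitrary).
n = 20
h = 1.0 / (n + 1)
A = (np.diag(-2.0*np.ones(n)) + np.diag(np.ones(n-1), 1)
     + np.diag(np.ones(n-1), -1)) / h**2

k = np.arange(1, n + 1)
lam = 2 / h**2 * (np.cos(k * np.pi * h) - 1)

assert np.allclose(np.sort(np.linalg.eigvalsh(A)), np.sort(lam))
# The smallest-magnitude eigenvalue is close to -pi^2:
print(lam.max())
```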

l∞ stability

To show O(h²) convergence in l∞, we need to show ||A^{-1}||_∞ ≤ C.

Claim 2.4.1. A^{-1} e_j equals the j th column of the matrix A^{-1}.

Proof. e_j = (0, · · · , 0, 1, 0, · · · , 0)^T, where the 1 appears in the j th entry. If we let a_1, · · · , a_n denote the rows of A^{-1}, we see

A^{-1} e_j = (a_1 · e_j, · · · , a_n · e_j)^T

which is exactly the j th column of A^{-1}.
Claim 2.4.2. The solution v = A^{-1} e_j of Av = e_j is the vector obtained by evaluating the scaled Green's function hG(x; x_j) on the grid, that is,

v_i = hG(x_i; x_j) = { h(x_j − 1)x_i,  x_i ≤ x_j
                     { h(x_i − 1)x_j,  x_i > x_j

Proof. The solution to Av = e_j must satisfy

(v_{i−1} − 2v_i + v_{i+1})/h² = { 0,  i ≠ j
                                { 1,  i = j

Here v = (v_1, v_2, · · · , v_n)^T. Now we consider four cases:


(I) i < j. In this case, all three entries use the first branch, and we verify

(v_{i−1} − 2v_i + v_{i+1})/h² = (h(x_j − 1)x_{i−1} − 2h(x_j − 1)x_i + h(x_j − 1)x_{i+1})/h²
= ((x_j − 1)/h)(x_i − h − 2x_i + x_i + h) = 0

(II) i = j. In this case, we verify

(v_{j−1} − 2v_j + v_{j+1})/h² = (h(x_j − 1)x_{j−1} − 2h(x_j − 1)x_j + h(x_{j+1} − 1)x_j)/h²
= (1/h)((x_j − 1)(x_j − h) − 2(x_j − 1)x_j + (x_j + h − 1)x_j)
= (1/h)(x_j² − hx_j − x_j + h − 2x_j² + 2x_j + x_j² + hx_j − x_j)
= 1

(III) i = j + 1. In this case, we verify

(v_{i−1} − 2v_i + v_{i+1})/h² = (h(x_j − 1)x_j − 2h(x_i − 1)x_j + h(x_{i+1} − 1)x_j)/h²
= (1/h)(x_j² − x_j − 2x_j(x_j + h − 1) + x_j(x_j + 2h − 1)) = 0

(IV) i > j + 1. In this case, all three entries use the second branch, and we verify

(v_{i−1} − 2v_i + v_{i+1})/h² = (h(x_{i−1} − 1)x_j − 2h(x_i − 1)x_j + h(x_{i+1} − 1)x_j)/h²
= (x_j/h)(x_i − h − 1 − 2x_i + 2 + x_i + h − 1) = 0

This proves the vector with entries v_i = hG(x_i; x_j) satisfies Av = e_j.

Claim 2.4.3. Consider the matrix G whose elements are G_{ij} = hG(x_i; x_j). Each element of G is bounded in magnitude by h.

Proof. We observe that there are two cases (recall 0 ≤ x_k ≤ 1):

(I) i ≤ j:

|G_{ij}| = |hG(x_i; x_j)| = h|(x_j − 1)x_i| ≤ h|x_j − 1||x_i| ≤ h


(II) i > j:

|G_{ij}| = |h(x_i − 1)x_j| ≤ h|x_i − 1||x_j| ≤ h

Claim 2.4.4. ||A^{-1}||_∞ = ||G||_∞ ≤ 1.

Proof. ||A^{-1}||_∞ equals the maximum absolute row sum of A^{-1}. By the previous two claims, ||A^{-1}||_∞ ≤ n × h < 1. Thus, this method is l∞-stable.
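Claims 2.4.2 through 2.4.4 can be checked numerically: the entries of A^{-1} should match hG(x_i; x_j), and the maximum absolute row sum should stay below 1 (n = 15 below is an arbitrary choice):

```python
import numpy as np

# Check (A^{-1})_{ij} = h G(x_i; x_j) and the infinity-norm bound.
n = 15
h = 1.0 / (n + 1)
x = np.linspace(h, 1 - h, n)
A = (np.diag(-2.0*np.ones(n)) + np.diag(np.ones(n-1), 1)
     + np.diag(np.ones(n-1), -1)) / h**2

G = np.empty((n, n))
for i in range(n):
    for j in range(n):
        G[i, j] = h * ((x[j] - 1)*x[i] if x[i] <= x[j] else (x[i] - 1)*x[j])

assert np.allclose(np.linalg.inv(A), G)      # Claim 2.4.2
print(np.abs(G).sum(axis=1).max())           # ||A^{-1}||_inf, below 1
```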

2.5 Neumann’s boundary conditions


We first consider the following problem:

u'' = f,  u'(0) = σ, u(1) = β

Using the same discretization as in Figure 2.1, we get:

(1/h²)(uj−1 − 2uj + uj+1) = fj,  j = 1, · · · , n − 1
(1/h²)(un−1 − 2un + β) = fn,  (j = n)

Now, we can approximate u'(0) as:

(u1 − u0)/h = σ  ⇔  (1/h²)(−2u0 + 2u1) = 2σ/h

In matrix form, the unknowns are (u0, u1, · · · , un)^T, the coefficient matrix is (1/h²) tridiag(1, −2, 1) except that its first row is (1/h²)(−2, 2, 0, · · · , 0), and the right hand side is (2σ/h, f1, · · · , fn−1, fn − β/h²)^T.

Problem 2.5.1. Consider the 2-point BVP u'' = f, u'(0) = σ, u(1) = β with the scheme derived above, where the boundary condition at j = 0 is imposed by either

(i)  (U1 − U0)/h = σ

or

(ii) (U1 − U0)/h = σ + (h/2) f0

(I) Compute the local truncation error (LTE) at the interior points. Show that the LTE at j = 0 is O(h) for (i) and O(h²) for (ii).

(II) Show that the method is l²-stable. (To find the eigenvalues of the matrix A^h, first find the eigenfunctions of the corresponding differential operator ∂²/∂x² with boundary conditions u'(0) = u(1) = 0.)

2.6 Variable coefficients


A more general linear variable coefficient 2nd order equation takes the form:

a(x)u_xx + b(x)u_x + c(x)u = f(x)

Discretization yields

a_j (u_{j−1} − 2u_j + u_{j+1})/h² + b_j (u_{j+1} − u_{j−1})/(2h) + c_j u_j = f_j

or, collecting terms,

(1/h²)[(a_j − (h/2)b_j)u_{j−1} + (h²c_j − 2a_j)u_j + (a_j + (h/2)b_j)u_{j+1}] = f_j

In matrix form, (1/h²) times the tridiagonal matrix with j th row

(a_j − (h/2)b_j,  h²c_j − 2a_j,  a_j + (h/2)b_j)

applied to U equals the right hand side

(f_1 − (a_1 − (h/2)b_1)α/h²,  f_2,  · · · ,  f_{n−1},  f_n − (a_n + (h/2)b_n)β/h²)^T

This tridiagonal system, if nonsingular, can be solved directly. However, this method is not always the best approach.

Example 2.6.1. Variable heat conduction coefficient k(x).

(k(x)u_x)_x = f,  u(0) = α, u(1) = β

or, expanded,

k(x)u_xx + k'(x)u_x = f    (k, k' given)
An alternative method is more suitable for this case, in which we approximate the derivative using half-points:

(k(x)u_x)_{j+1/2} ≈ k_{j+1/2} (u_{j+1} − u_j)/h

Thus,

((k(x)u_x)_x)_j ≈ (k_{j−1/2} u_{j−1} − (k_{j−1/2} + k_{j+1/2})u_j + k_{j+1/2} u_{j+1})/h²

In matrix form, we have the tridiagonal matrix

(1/h²) tridiag(k_{j−1/2},  −(k_{j−1/2} + k_{j+1/2}),  k_{j+1/2}),  j = 1, · · · , n,

whose first row is (−(k_{1/2} + k_{3/2}), k_{3/2}, 0, · · · ) and whose last row ends with (k_{n−1/2}, −(k_{n−1/2} + k_{n+1/2})). It is a symmetric negative definite matrix provided k(x) > 0, and it satisfies a maximum principle.
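The half-point scheme can be assembled and tested against a manufactured solution; in the sketch below, k(x) = 1 + x and u = sin(πx) are illustrative choices, not from the notes.

```python
import numpy as np

# Half-point scheme for (k(x) u')' = f with u(0) = u(1) = 0.
# Illustrative manufactured choice: k = 1 + x, u = sin(pi x), so
# f = k'u' + ku'' = pi cos(pi x) - (1 + x) pi^2 sin(pi x).
n = 80
h = 1.0 / (n + 1)
x = np.linspace(h, 1 - h, n)
k = lambda s: 1 + s
kp = k(x + h/2)                        # k_{j+1/2}
km = k(x - h/2)                        # k_{j-1/2}

A = (np.diag(-(km + kp)) + np.diag(kp[:-1], 1) + np.diag(km[1:], -1)) / h**2
f = np.pi*np.cos(np.pi*x) - (1 + x)*np.pi**2*np.sin(np.pi*x)

U = np.linalg.solve(A, f)
err = np.max(np.abs(U - np.sin(np.pi * x)))
print(err)                             # O(h^2) global error
```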

2.7 Nonlinear equations


Consider the pendulum problem shown in Figure 2.2: a mass m swings on a rigid rod of length L, making angle θ with the vertical. We define the following variables:

θ(t) : angular displacement
θ̇(t) : angular velocity
Lθ̇(t) : tangential velocity
Lθ̈(t) : tangential acceleration
−mg sin θ : tangential force

Figure 2.2: Nonlinear pendulum

Newton's law F = ma gives −mg sin θ = mLθ̈(t), so

θ̈(t) = −(g/L) sin θ    (take g/L = 1)  ⇔  θ̈(t) = −sin θ

This is a second order nonlinear equation, so we need two auxiliary conditions:

θ(0), θ̇(0)  ⇒  IVP
θ(0), θ(T)  ⇒  BVP
Note if θ is small, sin(θ) ≈ θ. Thus, the equation reduces to:

θ̈(t) = −θ :  the linearized pendulum problem

For the general problem:

θ̈(t) = −sin θ,  θ(0) = α, θ(T) = β

we discretize the time:

t_j = jh,  j = 0, 1, · · · , n + 1,  with (n + 1)h = T, i.e., h = T/(n + 1)

We thus have

(θ_{j−1} − 2θ_j + θ_{j+1})/h² + sin(θ_j) = 0,  j = 1, · · · , n,
θ_0 = α,  θ_{n+1} = β


To solve this nonlinear system of equations, we use Newton's method. Let

θ = (θ_1, θ_2, · · · , θ_n)^T and seek G(θ) = 0, where G : R^n → R^n,

G(θ)_j = (θ_{j−1} − 2θ_j + θ_{j+1})/h² + sin(θ_j) :  j th component

If θ* denotes the solution and θ^k the current iterate, then

0 = G(θ*) ≈ G(θ^k) + (∂G/∂θ)(θ^k)(θ* − θ^k),

so we define the next iterate θ^{k+1} = θ^k + δ^k, where δ^k solves

(∂G/∂θ)(θ^k) δ^k = −G(θ^k)  ⇔  δ^k = −((∂G/∂θ)(θ^k))^{-1} G(θ^k)

Here ∂G/∂θ is the Jacobian matrix:

(∂G/∂θ)_{ij} = ∂G(θ)_i/∂θ_j = { 1/h²,            j = i − 1
                               { cos θ_i − 2/h²,  j = i
                               { 1/h²,            j = i + 1
                               { 0,               otherwise

The linear system for δ^k can be solved by Gaussian elimination. Note:

(1) Each Newton iteration requires solving a tridiagonal system.

(2) Newton's method converges only if the initial guess is sufficiently close.

(3) Different initial guesses may converge to different solutions.
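A minimal sketch of the Newton iteration for the discretized pendulum BVP; the values T = 2, α = 0.7, β = 0.5 and the straight-line initial guess are illustrative assumptions.

```python
import numpy as np

# Newton iteration for the discretized pendulum BVP; the data
# T = 2, alpha = 0.7, beta = 0.5 and the linear initial guess
# are illustrative assumptions.
T, n = 2.0, 99
h = T / (n + 1)
alpha, beta = 0.7, 0.5
t = np.linspace(h, T - h, n)

def G(th):
    full = np.concatenate(([alpha], th, [beta]))
    return (full[:-2] - 2*full[1:-1] + full[2:]) / h**2 + np.sin(th)

def jac(th):
    J = (np.diag(-2.0*np.ones(n)) + np.diag(np.ones(n-1), 1)
         + np.diag(np.ones(n-1), -1)) / h**2
    return J + np.diag(np.cos(th))     # derivative of sin(theta_i)

theta = alpha + (beta - alpha) * t / T
for _ in range(30):
    delta = np.linalg.solve(jac(theta), -G(theta))
    theta += delta
    if np.max(np.abs(delta)) < 1e-12:  # quadratic convergence: few steps
        break

print(np.max(np.abs(G(theta))))        # residual after convergence
```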

Accuracy

θ : numerical solution,  G(θ) = 0
θ̂ : exact solution evaluated on the grid,  G(θ̂) = τ

The local truncation error is given by substituting the exact solution:

τ_j = (θ(t_{j−1}) − 2θ(t_j) + θ(t_{j+1}))/h² + sin(θ(t_j)) = O(h²)

Stability

G(θ) − G(θ̂) = −τ

Linearizing,

G(θ) ≈ G(θ̂) + (∂G/∂θ)(θ − θ̂)

θ − θ̂ = −(∂G/∂θ)^{-1} τ :  global error

||E|| ≤ ||(∂G/∂θ)^{-1}|| ||τ||

The nonlinear finite difference scheme is stable if ||(∂G/∂θ)^{-1}|| ≤ C for sufficiently small h. Note we drop the O(||E||²) terms in the linearization, assuming they are small (this can be shown).

Chapter 3

Elliptic Equations

3.1 Steady state heat equation


u_t = (k(x, y)u_x)_x + (k(x, y)u_y)_y + ψ,  k(x, y) > 0

Steady state: u_t = 0 and ψ independent of t; take k = 1. Then

u_xx + u_yy = f(x, y) :  Poisson's equation

∇²u = u_xx + u_yy ≡ ∆u :  the Laplacian operator

We consider the following problem:

∇²u = u_xx + u_yy = f on (0, 1) × (0, 1), together with boundary conditions, on the grid

x_i = i∆x, y_j = j∆y, ∆x = ∆y = h;  u_{ij} ≈ u(x_i, y_j),  i, j = 0, 1, · · · , n + 1

3.2 5-point Stencil


We approximate the Laplacian using the 5-point stencil shown in Figure 3.1:

(u_{i−1,j} − 2u_{i,j} + u_{i+1,j})/h² + (u_{i,j−1} − 2u_{i,j} + u_{i,j+1})/h² = f_{i,j}

(1/h²)(u_{i−1,j} + u_{i+1,j} + u_{i,j−1} + u_{i,j+1} − 4u_{i,j}) = f_{i,j},  i, j = 1, · · · , n

Figure 3.1: 5-point stencil

To put this scheme in matrix form, we need to order the grid points.

I. Order grid points by rows

As illustrated in Figure 3.2a, we order the points row by row.

Figure 3.2: (a) Order by row; (b) Red-black ordering

In matrix form, AU = F with

A = (1/h²) blocktridiag(I, T, I),  T = tridiag(1, −4, 1),  I : n × n identity

Note A is an n² × n² block tridiagonal matrix, while T and I are n × n. The vector U is ordered as

U = (u_{11}, u_{21}, u_{31}, u_{41}, · · · , u_{12}, · · · )^T

II. Red-black ordering

As shown in Figure 3.2b, we order the grid points in a red-black (checkerboard) sequence.

Accuracy

τ_{i,j} = (1/h²)[u(x_i − h, y_j) + u(x_i + h, y_j) + u(x_i, y_j − h) + u(x_i, y_j + h) − 4u(x_i, y_j)] − f(x_i, y_j)
        = (h²/12)(u_xxxx + u_yyyy) + O(h⁴) = O(h²)

Stability

Recall, stability means ||A^{-1}|| ≤ C as h → 0.

Problem 3.2.1. Show that the eigenvectors and eigenvalues of the 5-point Laplacian are

(r_{p,q})_{i,j} = sin(pπih) sin(qπjh),  λ_{p,q} = (2/h²)[(cos(pπh) − 1) + (cos(qπh) − 1)]

Claim 3.2.1. The method is l²-stable.

Proof. From the problem, the eigenvalues of A^{-1} are 1/λ_{p,q}, so

||A^{-1}||_2 = ρ(A^{-1}) = 1/|λ_{1,1}| ≈ 1/(2π²) as h → 0

Thus, the method is l²-stable.
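The row-ordered 5-point matrix can be built compactly with Kronecker products; a minimal sketch with a manufactured solution (u = sin(πx)sin(πy) is an illustrative choice):

```python
import numpy as np

# 5-point Laplacian: A = (I kron T1 + T1 kron I)/h^2 with
# T1 = tridiag(1, -2, 1) reproduces blocktridiag(I, T, I)/h^2,
# T = tridiag(1, -4, 1). Manufactured: u = sin(pi x) sin(pi y).
n = 30
h = 1.0 / (n + 1)
g = np.linspace(h, 1 - h, n)
X, Y = np.meshgrid(g, g, indexing='ij')

T1 = np.diag(-2.0*np.ones(n)) + np.diag(np.ones(n-1), 1) + np.diag(np.ones(n-1), -1)
A = (np.kron(np.eye(n), T1) + np.kron(T1, np.eye(n))) / h**2

uex = np.sin(np.pi*X) * np.sin(np.pi*Y)
f = -2 * np.pi**2 * uex                 # f = Laplacian of u

U = np.linalg.solve(A, f.ravel()).reshape(n, n)
err = np.max(np.abs(U - uex))
print(err)                              # O(h^2) global error
```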

3.3 Matrix conditioning


Ax = b : linear system
Ax* = b* : linear system with slightly perturbed right hand side
x = A^{-1}b,  x* = A^{-1}b*

Given δb = b − b*, we wish to know how big δx = x − x* is.

(I) A(x − x*) = b − b*  ⇒  x − x* = A^{-1}(b − b*)  ⇒  ||x − x*|| ≤ ||A^{-1}|| ||b − b*||

(II) b = Ax  ⇒  ||b|| ≤ ||A|| ||x||  ⇒  1/||x|| ≤ ||A||/||b||

Combining the two,

||x − x*||/||x|| ≤ ||A|| ||A^{-1}|| ||b − b*||/||b||

Definition 3.3.1. The condition number Cond(A) of a matrix A is:

Cond(A) = ||A|| ||A^{-1}||

Observe that the relative error in the solution, ||x − x*||/||x||, is at most Cond(A) times the relative perturbation ||b − b*||/||b||. Thus, if Cond(A) is large, a small perturbation in the right hand side may still yield a large error in the solution. We call a matrix A with large Cond(A) ill-conditioned.


5-point Laplacian

We examine the system of the 5-point Laplacian:

||A||_2 = ρ(A) = |λ_{n,n}| = |(2/h²)[cos(nπh) − 1 + cos(nπh) − 1]| ≈ 8/h²
||A^{-1}||_2 = ρ(A^{-1}) = 1/|λ_{1,1}| ≈ 1/(2π²)
Cond_2(A) = (8/h²)(1/(2π²)) = 4/(π²h²)

Thus, the system becomes ill-conditioned as h → 0.
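The predicted growth can be observed numerically; the sketch below uses the 1D second-difference matrix, whose eigenvalue formulas give the same estimate Cond_2(A) ≈ 4/(π²h²):

```python
import numpy as np

# Condition number growth: for the second-difference matrix the
# eigenvalue formulas give Cond_2(A) ~ 4 / (pi^2 h^2) as h -> 0.
for n in (10, 20, 40):
    h = 1.0 / (n + 1)
    A = (np.diag(-2.0*np.ones(n)) + np.diag(np.ones(n-1), 1)
         + np.diag(np.ones(n-1), -1)) / h**2
    print(n, np.linalg.cond(A), 4 / (np.pi**2 * h**2))
```

Doubling the number of grid points roughly quadruples the condition number.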

3.4 9-point Laplacian


In the literature, the notation ∇²_5 u_{i,j} denotes the 5-point Laplacian. An alternative approach is the 9-point Laplacian shown in Figure 3.3, denoted ∇²_9 u_{i,j}.

Figure 3.3: 9-point Laplacian (point 0 at the center, points 1-4 its horizontal and vertical neighbors, points 5-8 its diagonal neighbors)

Here u_k denotes the solution evaluated at the k th point of Figure 3.3. In the previous discussion, we used the 5 points u_0, u_1, u_2, u_3, u_4 to get:

∇²_5 u_{i,j} = (u_1 + u_2 + u_3 + u_4 − 4u_0)/h²    (I)

Alternatively, we can use the points on the diagonals, u_0, u_5, u_6, u_7, u_8. In this way, we get:

∇̃²_5 u_{i,j} = (u_5 + u_6 + u_7 + u_8 − 4u_0)/(2h²)    (II)

The 9-point Laplacian is defined as:

∇²_9 u_{i,j} = (4(I) + 2(II))/6 = (4(u_1 + u_2 + u_3 + u_4 − 4u_0) + (u_5 + u_6 + u_7 + u_8 − 4u_0))/(6h²)
= ∇²u + (h²/12)(u_xxxx + 2u_xxyy + u_yyyy) + O(h⁴)

where u_xxxx + 2u_xxyy + u_yyyy = ∇²(∇²u) = ∇²f.

Some observations
(i) The method is 2nd order accurate.
(ii) If f = 0 or ∇²f = 0 (f is harmonic), then the method is 4th order accurate.
(iii) If f(x, y) is known, then we can compute ∇²f. Using this, we can increase the order of accuracy:
$$\nabla_9^2u_{i,j} = f_{i,j} + \frac{h^2}{12}\nabla^2f_{i,j} : \text{4th order scheme}$$
(iv) If only the grid values f_{i,j} are known, then ∇²₅f_{i,j} = ∇²f_{i,j} + O(h²). Thus:
$$\nabla_9^2u_{i,j} = f_{i,j} + \frac{h^2}{12}\nabla_5^2f_{i,j} : \text{4th order scheme}$$
The observations above can be classified as the method of deferred corrections.
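These observations are easy to verify numerically. The sketch below (Python; the test function u = sin x sin y and the sample point are our choices, not from the notes) applies the 9-point stencil to a smooth function: the plain stencil has O(h²) error, while subtracting the (h²/12)∇⁴u correction leaves an O(h⁴) remainder.

```python
import math

def u(x, y):          # smooth test function (our choice)
    return math.sin(x) * math.sin(y)

def lap9(x, y, h):
    """9-point Laplacian: [4*(edge sum) + (corner sum) - 20 u0] / (6 h^2)."""
    edges = u(x + h, y) + u(x - h, y) + u(x, y + h) + u(x, y - h)
    corners = (u(x + h, y + h) + u(x + h, y - h)
               + u(x - h, y + h) + u(x - h, y - h))
    return (4.0 * edges + corners - 20.0 * u(x, y)) / (6.0 * h * h)

x0, y0 = 0.3, 0.4
lap = -2.0 * u(x0, y0)   # exact Laplacian of sin(x) sin(y)
bih = 4.0 * u(x0, y0)    # exact biharmonic u_xxxx + 2 u_xxyy + u_yyyy

def errors(h):
    e2 = abs(lap9(x0, y0, h) - lap)                          # plain: O(h^2)
    e4 = abs(lap9(x0, y0, h) - (lap + h * h / 12.0 * bih))   # corrected: O(h^4)
    return e2, e4

e2a, e4a = errors(0.1)
e2b, e4b = errors(0.05)
print(e2a / e2b, e4a / e4b)   # ratios ~4 and ~16 when h is halved
```

Halving h divides the plain error by about 4 and the corrected error by about 16, confirming 2nd and 4th order behavior respectively.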

3.5 Higher order methods


The methods we have introduced thus far are second order approximations for solving BVPs. There are a few ways to obtain a higher order approximation.

3.5.1 Fourth order differencing


MORE LEC 7

3.6 Method for solving linear systems


To solve a linear system, there are usually two approaches. A direct method, such as Gaussian elimination, yields an exact solution in a finite number of operations. An iterative method starts with an initial guess for the solution and improves the guess with each iteration until it is good enough; it yields an approximate solution.

3.6.1 Gaussian elimination


Gaussian elimination is a direct method that transforms Ax = b into Ux = b′, where U is an upper triangular matrix. Note that every matrix admits a PA = LU decomposition. The new system can be solved by backward substitution; the transformation itself is carried out through elementary row operations.

Operation counts
For GE, the operation count is O(N³), where N is the number of unknowns (N = n² for our 2D grid). For a banded system, the operation count is reduced to O(p²N), where p is the bandwidth.
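For a tridiagonal system (p = 1), banded elimination reduces to the classical Thomas algorithm: one forward elimination sweep and one backward substitution, O(N) operations in total. A minimal sketch (pure Python; the function name and the small test system are our choices):

```python
def thomas(a, b, c, d):
    """Solve a tridiagonal system.
    a = subdiagonal (length n-1), b = diagonal (length n),
    c = superdiagonal (length n-1), d = right-hand side (length n)."""
    n = len(b)
    cp, dp = [0.0] * n, [0.0] * n
    cp[0], dp[0] = c[0] / b[0], d[0] / b[0]
    for i in range(1, n):                      # forward elimination
        m = b[i] - a[i - 1] * cp[i - 1]
        cp[i] = c[i] / m if i < n - 1 else 0.0
        dp[i] = (d[i] - a[i - 1] * dp[i - 1]) / m
    x = [0.0] * n
    x[-1] = dp[-1]
    for i in range(n - 2, -1, -1):             # backward substitution
        x[i] = dp[i] - cp[i] * x[i + 1]
    return x

# A = tridiag(1, 2, 1) with exact solution x = (1, 2, 3), so d = A x = (4, 8, 8)
print(thomas([1.0, 1.0], [2.0, 2.0, 2.0], [1.0, 1.0], [4.0, 8.0, 8.0]))
```

Each unknown costs a constant number of operations, so the total work is linear in N, in contrast to O(N³) for dense elimination.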

Chapter 4

Iterative methods

Iterative methods can be used to generate successively more accurate approximations.

4.1 Jacobi and Gauss-Seidel


We consider the standard BVP:
$$u'' = f \ \iff\ \frac{u_{j-1} - 2u_j + u_{j+1}}{h^2} = f_j$$
Here the notation u^{(k)} represents quantities at the kth iteration, k = 0, 1, ⋯. We introduce two methods:
$$u_j^{(k+1)} = \frac12\left(u_{j-1}^{(k)} + u_{j+1}^{(k)}\right) - \frac{h^2}{2}f_j \qquad \text{Jacobi's method (J)}$$
$$u_j^{(k+1)} = \frac12\left(u_{j-1}^{(k+1)} + u_{j+1}^{(k)}\right) - \frac{h^2}{2}f_j \qquad \text{Gauss-Seidel's method (GS)}$$
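The two sweeps differ only in whether the left neighbor comes from the old or the freshly updated iterate. A minimal Python sketch (the model problem u = sin(πx), the grid size, and the sweep count are our choices):

```python
import math

def solve_bvp(f, n, sweeps, method="jacobi"):
    """Iteratively solve u'' = f on (0,1), u(0) = u(1) = 0, with the
    Jacobi or Gauss-Seidel sweep from the formulas above."""
    h = 1.0 / (n + 1)
    u = [0.0] * (n + 2)                       # interior unknowns + boundary zeros
    fj = [f(j * h) for j in range(n + 2)]
    for _ in range(sweeps):
        if method == "jacobi":
            old = u[:]                        # Jacobi: all neighbors from old iterate
            for j in range(1, n + 1):
                u[j] = 0.5 * (old[j - 1] + old[j + 1]) - 0.5 * h * h * fj[j]
        else:                                 # Gauss-Seidel: left neighbor already updated
            for j in range(1, n + 1):
                u[j] = 0.5 * (u[j - 1] + u[j + 1]) - 0.5 * h * h * fj[j]
    return u

# u'' = -pi^2 sin(pi x) has exact solution u = sin(pi x)
f = lambda x: -math.pi ** 2 * math.sin(math.pi * x)
n = 20
u = solve_bvp(f, n, 2000, "jacobi")
err = max(abs(u[j] - math.sin(math.pi * j / (n + 1))) for j in range(n + 2))
print(err)   # after many sweeps only the O(h^2) discretization error remains
```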

In matrix form:
$$A = M - N$$
$$Ax = F \iff (M - N)x = F \iff Mx = Nx + F$$
$$Mx^{(k+1)} = Nx^{(k)} + F : \text{iterative method}$$
$$A = D - L - U, \qquad D : \text{diagonal}, \quad L : \text{strictly lower triangular part}, \quad U : \text{strictly upper triangular part}$$
where
$$D = \frac{1}{h^2}\begin{pmatrix}-2 & & \\ & \ddots & \\ & & -2\end{pmatrix}, \qquad L = \frac{1}{h^2}\begin{pmatrix}0 & & \\ -1 & \ddots & \\ & \ddots & 0\end{pmatrix}, \qquad U = \frac{1}{h^2}\begin{pmatrix}0 & -1 & \\ & \ddots & \ddots \\ & & 0\end{pmatrix}$$


4.2 Analysis of matrix splitting methods

4.2.1 Jacobi’s method

In matrix form, we have
$$M = D = -\frac{2}{h^2}I$$
$$N = L + U = -\frac{1}{h^2}\begin{pmatrix}0 & 1 & & & \\ 1 & 0 & 1 & & \\ & \ddots & \ddots & \ddots & \\ & & 1 & 0 & 1\\ & & & 1 & 0\end{pmatrix}$$
$$Mu^{(k+1)} = Nu^{(k)} + F$$
$$u^{(k+1)} = M^{-1}Nu^{(k)} + M^{-1}F = \frac12\begin{pmatrix}0 & 1 & & & \\ 1 & 0 & 1 & & \\ & \ddots & \ddots & \ddots & \\ & & 1 & 0 & 1\\ & & & 1 & 0\end{pmatrix}u^{(k)} - \frac{h^2}{2}F$$

4.2.2 Gauss-Seidel

In matrix form, we have
$$M = D - L, \qquad N = U$$
$$u^{(k+1)} = (D - L)^{-1}\left(Uu^{(k)} + F\right)$$
so each sweep solves a lower triangular system by forward substitution.


4.2.3 Error and convergence


Iterative solution: Mu^{(k+1)} = Nu^{(k)} + F
Exact solution: Au* = F ⇒ Mu* = Nu* + F
$$M(\underbrace{u^{(k+1)} - u^*}_{e^{(k+1)}}) = N(\underbrace{u^{(k)} - u^*}_{e^{(k)}})$$
$$e^{(k+1)} = M^{-1}Ne^{(k)}, \qquad e^{(k)} = u^{(k)} - u^* : \text{error in kth iterate}$$
$$M^{-1}N = G : \text{iteration matrix}$$
$$e^{(k+1)} = Ge^{(k)} : \text{linear convergence}$$
$$e^{(k)} = G^ke^{(0)}, \qquad e^{(0)} : \text{error in initial guess}$$
$$\|e^{(k)}\| \le \|G^k\|\,\|e^{(0)}\|$$
Linear convergence means the error is directly proportional to the error in the previous iteration. The iteration converges from any initial guess if G^k → 0 as k → ∞. We assume G is diagonalizable:
$$G = R\Lambda R^{-1}, \qquad R : \text{matrix of right eigenvectors of } G, \quad \Lambda : \text{diagonal matrix of eigenvalues } \lambda_k$$
$$G^k = GG\cdots G = R\Lambda^kR^{-1}$$
If all |λ_k| < 1, then G^k → 0 and the method converges. The rate of convergence is determined by the largest eigenvalue of G:
$$\|e^{(k)}\|_2 \le \|R\Lambda^kR^{-1}\|_2\|e^{(0)}\|_2 \le \|R\|_2\|\Lambda^k\|_2\|R^{-1}\|_2\|e^{(0)}\|_2 = \rho(G)^k\,\mathrm{cond}_2(R)\,\|e^{(0)}\|_2$$
If G is normal (GGᵀ = GᵀG), then cond₂(R) = 1 and ‖e^{(k)}‖₂ ≤ ρ(G)^k‖e^{(0)}‖₂.

4.2.4 Rate of convergence


$$\|e^{(k+1)}\| \le c\|e^{(k)}\| : \text{method is linearly convergent}$$
Newton's method satisfies ‖e^{(k)}‖ ≤ c‖e^{(k−1)}‖² and is quadratically convergent.

Jacobi’s method
$$A = D - L - U$$
$$G = D^{-1}(L + U) = D^{-1}(D - A) = I - D^{-1}A = I + \frac{h^2}{2}A$$
$$\text{eigenvalues of } G : 1 + \frac{h^2}{2}\lambda_k, \quad \lambda_k = \frac{2}{h^2}(\cos(k\pi h) - 1) \ \iff\ 1 + \frac{h^2}{2}\lambda_k = \cos(k\pi h), \quad k = 1, 2, \cdots, n$$


Note here:
• All eigenvalues of G satisfy |cos(kπh)| < 1, thus the method converges!
• ρ(G) = cos(πh) = 1 − ½(πh)² + O(h⁴) < 1; since ρ(G) → 1 as h → 0, the method converges slowly!
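The slow convergence is easy to observe. With f = 0 the error is the iterate itself, and starting from the slowest Fourier mode sin(πjh), each Jacobi sweep multiplies the error by exactly ρ(G_J) = cos(πh). A Python sketch (the grid size is our choice):

```python
import math

# With f = 0 the exact solution is u = 0, so the iterate IS the error.
n = 30
h = 1.0 / (n + 1)
u = [math.sin(math.pi * j * h) for j in range(n + 2)]   # slowest mode as error
for sweep in range(3):
    old, prev_norm = u[:], max(abs(v) for v in u)
    for j in range(1, n + 1):                           # one Jacobi sweep, f = 0
        u[j] = 0.5 * (old[j - 1] + old[j + 1])
    ratio = max(abs(v) for v in u) / prev_norm
    print(ratio, math.cos(math.pi * h))   # per-sweep reduction equals cos(pi h)
```

The measured reduction factor matches cos(πh) to machine precision, since sin(πjh) is an exact eigenvector of the Jacobi sweep.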

Operation count
‖e^{(k)}‖ ≤ ρ^k‖e^{(0)}‖ with ‖e^{(0)}‖ = O(1). To reduce the error by a factor of ε:
$$\rho^k \approx \varepsilon \ \Rightarrow\ k \approx \log(\varepsilon)/\log(\rho) : \text{\# of iterations}$$
Since our method is second order, we want ε to be second order as well:
$$\varepsilon = ch^2$$
$$k \approx \frac{\log(\varepsilon)}{\log(\rho)} \approx \frac{\log(ch^2)}{\log(1 - \frac12(\pi h)^2)} \approx \frac{\log(c) + 2\log(h)}{-\frac12(\pi h)^2} = O\!\left(\frac{-2\log n}{-\frac12\pi^2 n^{-2}}\right) = O\!\left(\frac{4}{\pi^2}n^2\log(n)\right) : \text{\# of iterations}$$
Multiplying by the total number of points (with the banded GE cost O(p²N) listed for comparison):
1D: n × O(n² log n) = O(n³ log n); GE: O(1² · n) = O(n), p = 1, N = n
2D: n² × O(n² log n) = O(n⁴ log n); GE: O(n² · n²) = O(n⁴), p = n, N = n²
3D: n³ × O(n² log n) = O(n⁵ log n); GE: O((n²)² · n³) = O(n⁷), p = n², N = n³

Gauss-Seidel
Note: A = D − L − U , e(k+1) = GGS e(k) , where

GGS = (D − L)−1 U : iteration matrix

Claim 4.2.1. If A is block tridiagonal, then ρ(GGS ) = ρ(GJ )2 .


Proof. Let
$$A = \begin{pmatrix}b_1 & c_1 & & \\ a_2 & b_2 & \ddots & \\ & \ddots & \ddots & c_{n-1}\\ & & a_n & b_n\end{pmatrix}, \qquad A_\lambda = \begin{pmatrix}b_1 & \lambda^{-1}c_1 & & \\ \lambda a_2 & b_2 & \ddots & \\ & \ddots & \ddots & \lambda^{-1}c_{n-1}\\ & & \lambda a_n & b_n\end{pmatrix}$$
where
$$A_\lambda = D_\lambda AD_\lambda^{-1}, \qquad D_\lambda = \mathrm{diag}(\lambda, \lambda^2, \cdots, \lambda^n)$$


If λ ≠ 0, then |A| = |A_λ|. Now, we examine G_J and G_GS. First, recall:
$$G_J = D^{-1}(L + U), \qquad G_{GS} = (D - L)^{-1}U$$
We can relate the eigenvalues of G_J and G_GS:
$$\lambda^n|G_J - \lambda I| = \lambda^n|D^{-1}(L + U) - \lambda I| = \lambda^n|D^{-1}|\,|L + U - \lambda D| = |D^{-1}|\,|\lambda(L + U) - \lambda^2D|$$
$$= |D^{-1}|\,|\lambda^2L + U - \lambda^2D| = |D^{-1}|\,|U - \lambda^2(D - L)|$$
$$= |D^{-1}|\,|D - L|\,|(D - L)^{-1}U - \lambda^2I| = |D^{-1}|\,|D - L|\,|G_{GS} - \lambda^2I|$$
(the second line uses the similarity A_λ above to move a factor of λ from the superdiagonal onto the subdiagonal). Therefore, if λ is an eigenvalue of G_J, then λ² is an eigenvalue of G_GS.

Some comments:
• For A tridiagonal, Jacobi converges if and only if GS converges.
• If they converge, GS converges faster.
• One GS iteration is worth approximately 2 Jacobi iterations.

Operation count
To reduce the error by a factor ε = ch²:
$$k = \log(\varepsilon)/\log(\rho) \approx O\!\left(\frac{2\log(h)}{\log(1 - \pi^2h^2)}\right) = O\!\left(\frac{2}{\pi^2}n^2\log(n)\right)$$

4.2.5 Successive Over-Relaxation (SOR)

$$DU^{(k+1)} = (L + U)U^{(k)} + F : \text{Jacobi}$$
$$= (L + U - D + D)U^{(k)} + F = DU^{(k)} - \underbrace{(AU^{(k)} - F)}_{\text{Jacobi residual}}$$
$$DU^{(k+1)} = LU^{(k+1)} + UU^{(k)} + F : \text{Gauss-Seidel}$$
$$= DU^{(k)} - \underbrace{((D - U)U^{(k)} - LU^{(k+1)} - F)}_{\text{GS residual}}$$
SOR scales the correction by a parameter ω:
$$DU^{(k+1)} = DU^{(k)} - \omega\left((D - U)U^{(k)} - LU^{(k+1)} - F\right)$$


We see GS moves the approximation in the right direction, but is often too conservative. Instead, we can consider the following scheme:
$$U_j^{GS} = \frac12\left(U_{j-1}^{(k+1)} + U_{j+1}^{(k)}\right) - \frac{h^2}{2}f_j$$
$$U_j^{(k+1)} = U_j^{(k)} + \omega\left(U_j^{GS} - U_j^{(k)}\right)$$
where ω is a scalar parameter. If ω = 1, we recover the GS scheme. If ω > 1, we move further than GS suggests; thus we get successive over-relaxation (SOR). Written out,
$$u_j^{(k+1)} = u_j^{(k)} + \omega\left(\frac{u_{j-1}^{(k+1)} + u_{j+1}^{(k)} - h^2f_j}{2} - u_j^{(k)}\right)$$
$$\frac{1}{\omega}\left(-\frac{2}{h^2}\right)u_j^{(k+1)} + \frac{1}{h^2}u_{j-1}^{(k+1)} = \frac{1-\omega}{\omega}\left(-\frac{2}{h^2}\right)u_j^{(k)} - \frac{1}{h^2}u_{j+1}^{(k)} + f_j$$
In matrix form:
$$\underbrace{\frac{1}{\omega}(D - \omega L)}_{M}U^{(k+1)} = \underbrace{\frac{1}{\omega}\left[(1-\omega)D + \omega U\right]}_{N}U^{(k)} + F$$
$$A = D - L - U = M - N$$
$$G_{SOR} = M^{-1}N = (D - \omega L)^{-1}\left[(1-\omega)D + \omega U\right]$$

We also need to know the optimal ω. We want ‖G_SOR‖ to be as small as possible, so ω_optimal should give us the G_SOR with the smallest spectral radius.

Claim 4.2.2. If ρ(GSOR ) < 1 then 0 < ω < 2.

Proof.
$$\rho(G_{SOR})^n = \left(\max_i|\lambda_i(G_{SOR})|\right)^n \ge \prod_{i=1}^n|\lambda_i(G_{SOR})| = |\det G_{SOR}|$$
$$= |\det(M^{-1}N)| = \frac{|\det N|}{|\det M|} = \frac{|(1-\omega)^n\det D|}{|\det D|} = |1-\omega|^n$$
$$|1-\omega|^n \le \rho(G_{SOR})^n < 1 \ \Rightarrow\ -1 < 1-\omega < 1 \ \Rightarrow\ 0 < \omega < 2$$

Problem 4.2.1. If A is symmetric and positive definite and A = M − N with M invertible, then:

(1) Mᵀ + N is symmetric.

(2) If Mᵀ + N is also positive definite, then ρ(G) < 1.


Jacobi
M = D, N = L + U ⇒ Mᵀ + N is symmetric and positive definite ⇒ ρ(G_J) < 1.

SOR
M = (1/ω)(D − ωL), N = (1/ω)[(1 − ω)D + ωU]. If A is symmetric positive definite, i.e. Lᵀ = U, then Mᵀ + N = ((2 − ω)/ω)D is symmetric. If 0 < ω < 2, then Mᵀ + N is also positive definite, and by the problem above we have ρ(G_SOR) < 1.

Claim 4.2.3. If A is block tridiagonal, symmetric, and positive definite, then
$$\omega^* = \frac{2}{1 + \sqrt{1 - \rho(G_J)^2}} \quad \text{is optimal.}$$
Therefore,
$$\rho(G_{\omega^*}) = \omega^* - 1 < \rho(G_{GS}) < \rho(G_J) < 1$$
For our problem,
$$\omega^* = \frac{2}{1 + \sqrt{1 - \rho(G_J)^2}} = \frac{2}{1 + \sqrt{1 - \cos^2(\pi h)}} = \frac{2}{1 + \sin(\pi h)} \approx \frac{2}{1 + \pi h} \approx 2(1 - \pi h)$$
$$\rho_{opt} = \rho(G_{\omega^*}) = \omega^* - 1 \approx 1 - 2\pi h$$

Operation count
Let k be the number of iterations:
$$k = \log(\varepsilon)/\log(\rho) = \log(ch^2)/\log(1 - 2\pi h) \approx O\!\left(\frac{2\log h}{-2\pi h}\right) = O(n\log(n))$$
Compare this with O(n² log n) for Jacobi and GS. Typically, the convergence rate is very sensitive to the choice of ω; it is better to slightly overestimate ω* than to slightly underestimate it. Even with optimal SOR, ρ(G_{ω*}) → 1 as h → 0.
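The speedup is easy to demonstrate. The sketch below (Python; the grid size, tolerance, and initial error are our choices, not from the notes) counts relaxation sweeps needed to damp an initial error, for GS (ω = 1) versus SOR with ω* = 2/(1 + sin πh):

```python
import math

def sweeps_to_tol(omega, n, tol=1e-8, maxit=100000):
    """Count relaxation sweeps (for f = 0, so the exact solution is 0)
    until the iterate drops below tol. omega = 1 is Gauss-Seidel."""
    h = 1.0 / (n + 1)
    u = [math.sin(math.pi * j * h) for j in range(n + 2)]   # slow-mode initial error
    for it in range(1, maxit + 1):
        for j in range(1, n + 1):
            ugs = 0.5 * (u[j - 1] + u[j + 1])               # GS value (f = 0)
            u[j] = u[j] + omega * (ugs - u[j])              # over-relaxed update
        if max(abs(v) for v in u[1:n + 1]) < tol:
            return it
    return maxit

n = 30
h = 1.0 / (n + 1)
omega_star = 2.0 / (1.0 + math.sin(math.pi * h))
print(sweeps_to_tol(1.0, n), sweeps_to_tol(omega_star, n))  # GS vs optimal SOR
```

On this grid SOR reaches the tolerance in far fewer sweeps than GS, consistent with ρ_opt ≈ 1 − 2πh versus ρ_GS ≈ 1 − π²h².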

Part II

Initial Value Problems

Chapter 5

The Initial Value Problem (IVP) for


Ordinary Differential Equations

We consider the following general first order ODE problem:
$$u'(t) = f(u(t), t), \qquad u, f \in \mathbb{R}^n, \qquad u(0) = \eta$$
This is called the initial value problem for an ODE. Note every ODE system can be converted to a first-order autonomous system
$$u'(t) = f(u(t)), \qquad u, f \in \mathbb{R}^n, \qquad u(0) = \eta$$
Recall from ODE theory that
Theorem 5.0.1. If f is uniformly Lipschitz continuous on a domain D for 0 ≤ t ≤ T , i.e.
there exists a Lipschitz constant L such that |f (u) − f (u∗ )| ≤ L|u − u∗ | for all u, u∗ ∈ D, the
IVP u0 = f (u), u(0) = η has a unique solution for all η ∈ D.

5.1 Finite difference method


For the ODE problem, we discretize in time:
k : time step
t₀, t₁, t₂, ⋯, with t_n = nk : discrete times
u⁰, u¹, u², ⋯, with uⁿ ≈ u(t_n), u⁰ = η

Euler’s method (Forward Euler)

$$u'(t_n) \approx D_+u^n$$
$$\frac{u^{n+1} - u^n}{k} = f(u^n) \ \Rightarrow\ u^{n+1} = u^n + kf(u^n), \quad n = 0, 1, 2, \cdots$$
From the initial data u⁰, we can compute u¹, u², and so on. This is a time marching scheme, and it is explicit.
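A minimal Python sketch of the forward Euler march (the test problem u' = −u is our choice, not from the notes); halving k should roughly halve the error, confirming first order accuracy:

```python
import math

def forward_euler(f, u0, T, nsteps):
    """March u' = f(u) from u(0) = u0 to time T with nsteps forward Euler steps."""
    k = T / nsteps
    u = u0
    for _ in range(nsteps):
        u = u + k * f(u)
    return u

# u' = -u, u(0) = 1: exact solution e^{-T}.
f = lambda u: -u
e1 = abs(forward_euler(f, 1.0, 1.0, 100) - math.exp(-1.0))
e2 = abs(forward_euler(f, 1.0, 1.0, 200) - math.exp(-1.0))
print(e1 / e2)   # ~2: halving k halves the error (first order)
```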

Backward Euler
$$u'(t_{n+1}) \approx D_-u^{n+1}$$
$$\frac{u^{n+1} - u^n}{k} = f(u^{n+1}) \ \Rightarrow\ u^{n+1} = u^n + kf(u^{n+1}), \quad n = 0, 1, 2, \cdots$$
This is an implicit scheme: it requires solving a (generally nonlinear) equation to find u^{n+1}.

Trapezoid method
$$\frac{u^{n+1} - u^n}{k} = \frac12\left(f(u^n) + f(u^{n+1})\right) \ \Rightarrow\ u^{n+1} = u^n + \frac{k}{2}\left(f(u^n) + f(u^{n+1})\right), \quad n = 0, 1, 2, \cdots$$
Implicit, second order accurate scheme.

5.1.1 Multistep method


The above methods are all one-step methods, in the sense that u^{n+1} is determined solely by uⁿ. If we wish to obtain higher order accuracy, we can use a multistep method.

Midpoint (leap frog)
$$u'(t_n) \approx D_0u^n$$
$$\frac{u^{n+1} - u^{n-1}}{2k} = f(u^n) \ \Rightarrow\ u^{n+1} = u^{n-1} + 2kf(u^n)$$
Explicit, second order method. It needs another method to start it up.

2nd order backward differentiation formula

$$u'(t_{n+1}) \approx D_2u^{n+1} : \text{one-sided 2nd order approximation}$$
$$\frac{3u^{n+1} - 4u^n + u^{n-1}}{2k} = f(u^{n+1})$$
Implicit, 2nd order.

Truncation Error
The LTE for forward Euler:
$$\tau^n = \frac{u(t_{n+1}) - u(t_n)}{k} - f(u(t_n)) = \frac{k}{2}u''(t_n) + O(k^2)$$
Thus, it is a first order approximation.


5.2 One step error


Suppose uⁿ is given exactly; what is the error in u^{n+1}?
$$\text{(i)}\quad u^{n+1} = u^n + kf(u^n)$$
$$\text{(ii)}\quad u(t_{n+1}) = u(t_n) + kf(u(t_n)) + k\tau^n$$
Taking (ii) − (i) and assuming uⁿ = u(t_n):
$$u(t_{n+1}) - u^{n+1} = k\tau^n = \frac{k^2}{2}u''(t_n) + O(k^3) = \mathcal{L}^n, \qquad \mathcal{L}^n : \text{one-step error}$$
We note here Lⁿ = O(k²); to obtain the solution at time t = nk we must take n = t/k steps. This leads to a total error of n × O(k²) = t × O(k). Thus a one-step error of O(k²) implies the global error is O(k).

5.3 Taylor series method


Here is another method to approximate u(t_{n+1}):
$$u(t_{n+1}) = u(t_n) + ku'(t_n) + \frac{k^2}{2}u''(t_n) + \cdots$$
$$u' = f(u)$$
$$u'' = (f(u))' = f'(u)u' = f'(u)f(u)$$
$$u''' = (f'(u)f(u))' = f''(u)f^2(u) + (f'(u))^2f(u)$$
Forward Euler is the first order member of this family.
Example 5.3.1.
$$u' = \sin^2u \ \Rightarrow\ u'' = 2\sin u\cos u \cdot u' = 2\sin^3u\cos u$$
The Taylor series method gets very messy in the higher order terms.

5.4 Runge-kutta methods (RK)


A different approach to obtain higher order approximation is through multistage methods,
where intermediate values of the solution and its derivative are generated and used within a
single time step.

5.4.1 2nd order Runge-kutta method (RK)


$$u^* = u^n + \frac{k}{2}f(u^n) \approx u^{n+\frac12}, \qquad u^{n+1} = u^n + kf(u^*)$$


Accuracy
$$\tau^n = \frac{u(t_{n+1}) - u(t_n)}{k} - f\!\left(u(t_n) + \frac{k}{2}f(u(t_n))\right)$$
$$f\!\left(u(t_n) + \frac{k}{2}f(u(t_n))\right) = f(u(t_n)) + \frac{k}{2}f'(u(t_n))\underbrace{f(u(t_n))}_{=u'(t_n)} + O(k^2) = u'(t_n) + \frac{k}{2}u''(t_n) + O(k^2)$$
$$\tau^n = \frac{ku'(t_n) + \frac{k^2}{2}u''(t_n) + O(k^3)}{k} - \left(u'(t_n) + \frac{k}{2}u''(t_n) + O(k^2)\right) = O(k^2)$$
We may also check this on a linear ODE: u' = λu.
$$u^{n+1} = u^n + \lambda k\left(u^n + \frac{\lambda k}{2}u^n\right) = \left(1 + \lambda k + \frac{(\lambda k)^2}{2}\right)u^n$$
$$u(t) = u_0e^{\lambda t} : \text{exact solution}$$
$$u(t_{n+1}) = u_0e^{\lambda(t_n+k)} = e^{\lambda k}u(t_n) = \left(1 + \lambda k + \frac{(\lambda k)^2}{2} + \cdots\right)u(t_n)$$
Numerical solution:
$$u^{n+1} = e^{\lambda k}u^n + O(k^3)$$
Thus, the one-step error is O(k³), which implies the truncation error is O(k²). It doesn't automatically follow that the method is 2nd order accurate for nonlinear problems.

5.4.2 General 2-stage RK

$$F_0 = f(u^n), \qquad F_1 = f(u^n + akF_0), \qquad u^{n+1} = u^n + k(bF_0 + cF_1)$$
How do we find a, b, c to maximize accuracy?
$$\tau^n = \frac{u(t_{n+1}) - u(t_n)}{k} - \left(bf(u(t_n)) + cf(u(t_n) + akf(u(t_n)))\right)$$
$$= \frac{u(t_{n+1}) - u(t_n)}{k} - \left(b\underbrace{f(u(t_n))}_{u'(t_n)} + c\left(\underbrace{f(u(t_n))}_{u'(t_n)} + ak\underbrace{f'(u(t_n))f(u(t_n))}_{u''(t_n)} + O(k^2)\right)\right)$$
$$= (1 - (b + c))u'(t_n) + k\left(\tfrac12 - ac\right)u''(t_n) + O(k^2)$$
Thus, we require
$$b + c = 1, \qquad ac = \tfrac12 \ \Rightarrow\ \tau^n = O(k^2) : \text{a 1-parameter family}$$


Example 5.4.1. Take a = ½, b = 0, c = 1:
$$F_0 = f(u^n), \qquad F_1 = f\!\left(u^n + \frac{k}{2}F_0\right), \qquad u^{n+1} = u^n + kF_1$$
This is the midpoint method!


Example 5.4.2. Take a = 1, b = c = ½:
$$F_0 = f(u^n), \qquad F_1 = f(u^n + kF_0), \qquad u^{n+1} = u^n + \frac{k}{2}(F_0 + F_1)$$
This is the Trapezoid method!

5.4.3 4-stage RK

$$F_0 = f(u^n)$$
$$F_1 = f\!\left(u^n + \frac{k}{2}F_0\right)$$
$$F_2 = f\!\left(u^n + \frac{k}{2}F_1\right)$$
$$F_3 = f(u^n + kF_2)$$
$$u^{n+1} = u^n + \frac{k}{6}(F_0 + 2F_1 + 2F_2 + F_3)$$
Problem 5.4.1. Check that the method is 4th order accurate on a linear problem.
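A numerical version of this check (a Python sketch; the linear problem u' = −u and the step sizes are our choices): halving k should reduce the error by about 2⁴ = 16.

```python
import math

def rk4_step(f, u, k):
    """One step of the 4-stage RK method above."""
    F0 = f(u)
    F1 = f(u + 0.5 * k * F0)
    F2 = f(u + 0.5 * k * F1)
    F3 = f(u + k * F2)
    return u + (k / 6.0) * (F0 + 2.0 * F1 + 2.0 * F2 + F3)

def global_error(k):
    """Error at T = 1 for u' = -u, u(0) = 1 (exact solution e^{-t})."""
    u, n = 1.0, round(1.0 / k)
    f = lambda v: -v
    for _ in range(n):
        u = rk4_step(f, u, k)
    return abs(u - math.exp(-1.0))

r = global_error(0.1) / global_error(0.05)
print(r)   # ~16 = 2^4: fourth order on this linear problem
```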

5.5 Linear multistep method (LMM)


A general r-step linear multistep method takes the following form:
$$\sum_{j=0}^r \alpha_ju^{n+j} = k\sum_{j=0}^r \beta_jf(u^{n+j}) \tag{5.5.1}$$
u^{n+r} is computed using u^{n+r−1}, u^{n+r−2}, ⋯, uⁿ. Note if β_r = 0, then the method is explicit.


Adams-Bashforth method
$$u^{n+r} = u^{n+r-1} + k\sum_{j=0}^{r-1}\beta_jf(u^{n+j}) : \text{explicit}$$
$$\alpha_r = 1, \quad \alpha_{r-1} = -1, \quad \alpha_j = 0 \text{ otherwise}; \quad \beta_r = 0, \text{ the other } \beta_j \text{ chosen to maximize accuracy}$$

Adams-Moulton method
$$u^{n+r} = u^{n+r-1} + k\sum_{j=0}^{r}\beta_jf(u^{n+j}) : \text{implicit}$$

Nyström method
$$u^{n+r} = u^{n+r-2} + k\sum_{j=0}^{r-1}\beta_jf(u^{n+j}) : \text{explicit}$$
$$r = 2 \ \Rightarrow\ \text{midpoint method}$$

5.5.1 LTE
$$\tau^{n+r} = \frac{1}{k}\left(\sum_{j=0}^r \alpha_ju(t_{n+j}) - k\sum_{j=0}^r \beta_j\underbrace{f(u(t_{n+j}))}_{u'(t_{n+j})}\right) = \frac{1}{k}\left(\sum_{j=0}^r \alpha_ju(t_{n+j}) - k\sum_{j=0}^r \beta_ju'(t_{n+j})\right)$$
Taylor expanding about t_n:
$$u(t_{n+j}) = u(t_n) + jku'(t_n) + \frac{(jk)^2}{2}u''(t_n) + \cdots$$
$$u'(t_{n+j}) = u'(t_n) + jku''(t_n) + \frac{(jk)^2}{2}u'''(t_n) + \cdots$$
$$\tau^{n+r} = \frac{1}{k}\left(\sum_{j=0}^r \alpha_j\right)u(t_n) + \left(\sum_{j=0}^r(j\alpha_j - \beta_j)\right)u'(t_n) + k\left(\sum_{j=0}^r\left(\frac{j^2}{2}\alpha_j - j\beta_j\right)\right)u''(t_n) + \cdots$$
$$\cdots + k^{p-1}\left(\sum_{j=0}^r\left(\frac{j^p}{p!}\alpha_j - \frac{j^{p-1}}{(p-1)!}\beta_j\right)\right)u^{(p)}(t_n) + \cdots$$


For consistency, τ^{n+r} → 0 as k → 0. We need
$$\text{(i)}\quad \sum_{j=0}^r \alpha_j = 0 \qquad\qquad \text{(ii)}\quad \sum_{j=0}^r j\alpha_j = \sum_{j=0}^r \beta_j$$

Problem 5.5.1. Show that these conditions are obtained by requiring this method to be
exact for polynomials of degree 0,1.

Comments on LMM:
• LMMs are not self-starting: they require r starting values u⁰, u¹, ⋯, u^{r−1}, but only u⁰ is given by the ODE, so u¹, ⋯, u^{r−1} must be generated by some other method.
• General rule: one may drop one order of accuracy when generating the starting values (since the one-step error is O(k^{p+1})).
• The method is pth order accurate if τ = O(k^p), i.e.
$$\sum_{j=0}^r\left(\frac{j^q}{q!}\alpha_j - \frac{j^{q-1}}{(q-1)!}\beta_j\right) = 0, \qquad q = 0, 1, \cdots, p$$

Chapter 6

Zero stability and convergence

6.1 Convergence
To discuss convergence for the IVP, we fix T = Nk and check the error in our approximation to u(T). We say a method converges if
$$\lim_{k\to 0,\ Nk = T} U^N = u(T) \tag{6.1.1}$$
Note this requires more and more steps as k → 0. In general, a method might converge on some problems but not all. To say a method is convergent in general, we mean it converges for all problems with all reasonable starting values. For an r-step method:
$$\text{Starting values: } U^0, U^1, \cdots, U^{r-1} \text{ with } \lim_{k\to 0} U^l = \eta \text{ for } l = 0, 1, \cdots, r-1 \tag{6.1.2}$$

More precisely,
Definition 6.1.1. An r-step method is said to be convergent if applying the method to any
ODE u0 = f (u, t) with f (u, t) Lipschitz continuous in u, and with any set of starting values
satisfying (6.1.2), we obtain convergence in the sense of (6.1.1) for every fixed time T > 0
at which the ODE has a unique solution.
Example 6.1.2.
$$u' = \lambda u, \quad u(0) = u_0 \ \Rightarrow\ u(t) = u_0e^{\lambda t}$$
Applying Euler's method:
$$u^1 = (1 + k\lambda)u^0, \quad u^2 = (1 + k\lambda)u^1 = (1 + k\lambda)^2u^0, \quad \cdots, \quad u^N = (1 + \lambda k)^Nu^0$$
As k → 0 with Nk = T fixed, N = T/k:
$$\lim_{k\to 0} U^N = \lim_{k\to 0}(1 + \lambda k)^{T/k}u_0 = \lim_{k\to 0}\left[(1 + \lambda k)^{\frac{1}{\lambda k}}\right]^{\lambda T}u_0 = e^{\lambda T}u_0$$


6.2 General linear case


We now consider the general linear case
$$u' = \lambda u + g(t), \qquad u(0) = u_0$$
If we apply Euler's method to this equation, we obtain:
$$u^{n+1} = u^n + k(\lambda u^n + g^n) = (1 + k\lambda)u^n + kg^n$$
$$\tau^n = \frac{u(t_{n+1}) - u(t_n)}{k} - (\lambda u(t_n) + g(t_n)) = u'(t_n) + \frac{k}{2}u''(t_n) + O(k^2) - (\lambda u(t_n) + g(t_n)) = \frac{k}{2}u''(t_n) + O(k^2) : \text{LTE}$$
Subtracting the numerical update from the exact relation u(t_{n+1}) = (1 + kλ)u(t_n) + kg(t_n) + kτⁿ gives the error recursion
$$e^{n+1} = (1 + k\lambda)e^n - k\tau^n = (1 + k\lambda)^{n+1}e^0 - k\sum_{l=0}^{n}(1 + k\lambda)^l\tau^{n-l}$$
$$(1 + k\lambda) \le e^{|\lambda|k} \ \Rightarrow\ (1 + k\lambda)^l \le e^{|\lambda|kl} \le e^{|\lambda|T}$$
Denote ‖τ‖∞ = max_{0≤l≤n−1}|τ^l|. Then (with e⁰ = 0)
$$|e^n| \le Te^{|\lambda|T}\|\tau\|_\infty, \qquad \|\tau\|_\infty = \max_{0\le l\le n-1}\frac{k}{2}|u''(t_l)| \le \frac{k}{2}M, \quad M = \max|u''(t)|$$
$$|e^n| \le \frac{k}{2}MTe^{|\lambda|T} = O(k) \ \Rightarrow\ |e^n| \to 0 \text{ as } k \to 0$$

6.2.1 Nonlinear case

$$u' = f(u), \qquad f : \text{Lipschitz continuous}, \quad |f(u) - f(v)| \le L|u - v|$$
$$u^{n+1} = u^n + kf(u^n), \qquad u(t_{n+1}) = u(t_n) + kf(u(t_n)) + k\tau^n$$
$$e^{n+1} = e^n + k\left(f(u^n) - f(u(t_n))\right) - k\tau^n$$
$$|f(u^n) - f(u(t_n))| \le L|u^n - u(t_n)| = L|e^n|$$
$$|e^{n+1}| \le (1 + kL)|e^n| + k|\tau^n| \le (1 + kL)^{n+1}|e^0| + k\sum_{l=0}^{n}(1 + kL)^l|\tau^{n-l}|$$
With e⁰ = 0,
$$|e^n| \le k\sum_{l=0}^{n-1}(1 + kL)^l|\tau^{n-1-l}| \le ke^{LT}\sum_{l=0}^{n-1}|\tau^{n-1-l}| \le nke^{LT}\|\tau\|_\infty = T\|\tau\|_\infty e^{LT}$$
So we have eⁿ = O(k). Note the Lipschitz constant L plays the role of λ in the linear case.

6.3 Zero stability of general 1-step methods


One-step methods have the general form
$$u^{n+1} = u^n + k\psi(u^n, k), \qquad \psi \text{ depends on } f$$
Here we assume f is Lipschitz continuous in u with Lipschitz constant L, and ψ is Lipschitz continuous in u with Lipschitz constant L′ (related to L).

6.3.1 Consistency
$$\tau^n = \frac{u(t_{n+1}) - u(t_n)}{k} - \psi(u(t_n), k) = u'(t_n) + \frac{k}{2}u''(t_n) + \cdots - \left[\psi(u(t_n), 0) + k\psi_k(u(t_n), 0) + \cdots\right]$$
The method is consistent if
$$\psi(u, 0) = f(u) \ \Rightarrow\ \tau^n \to 0 \text{ as } k \to 0$$

Example 6.3.1. 2-stage RK:
$$u^{n+1} = u^n + kf\!\left(u^n + \tfrac{k}{2}f(u^n)\right) \ \Rightarrow\ \psi(u, k) = f\!\left(u + \tfrac{k}{2}f(u)\right)$$
$$\psi(u, 0) = f(u), \qquad L' = L + \tfrac{k}{2}L^2$$


6.3.2 Convergence

$$u^{n+1} = u^n + k\psi(u^n, k), \qquad u(t_{n+1}) = u(t_n) + k\psi(u(t_n), k) + k\tau^n$$
$$e^{n+1} = e^n + k\left[\psi(u^n, k) - \psi(u(t_n), k)\right] - k\tau^n$$
$$|e^{n+1}| \le (1 + kL')|e^n| + k|\tau^n| \ \Rightarrow\ |e^n| \le Te^{L'T}\|\tau\|_\infty$$

6.4 Zero stability and convergence of LMM

$$\sum_{j=0}^r \alpha_jU^{n+j} = k\sum_{j=0}^r \beta_jf(U^{n+j}) : \text{r-step LMM}$$
$$\sum_{j=0}^r \alpha_j = 0, \qquad \sum_{j=0}^r j\alpha_j = \sum_{j=0}^r \beta_j : \text{consistency}$$
$$U^0, U^1, \cdots, U^{r-1} \to \eta \text{ as } k \to 0 : \text{starting values}$$

Example 6.4.1.
$$u^{n+2} - 3u^{n+1} + 2u^n = -kf(u^n)$$
$$\sum_j \alpha_j = 1 - 3 + 2 = 0, \qquad \sum_j j\alpha_j = 2\cdot 1 + 1\cdot(-3) + 0\cdot 2 = -1 = \sum_j \beta_j \ \Rightarrow\ \text{method is consistent}$$
Take f = 0: then u′(t) = 0 with u(0) = 0, and the exact solution is u(t) = 0. The numerical solution with u⁰ = 0, u¹ = k behaves as:

N : 5, 10, 20
u^N : 4.2, ∼260, ∼2×10⁶

The numerical solution does not converge.

6.4.1 Error estimates


FILL IN Lecture 11 page 6.


6.5 Difference equation


We review the method for solving a homogeneous linear difference equation (the case u′ = 0). Consider the following scheme:
$$\sum_{j=0}^r \alpha_ju^{n+j} = 0 \tag{6.5.1}$$
Substituting a trial solution uⁿ = ξⁿ into the equation, we get
$$\sum_{j=0}^r \alpha_j\xi^{n+j} = 0 \ \Rightarrow\ \sum_{j=0}^r \alpha_j\xi^j = 0 \ \iff\ \xi \text{ is a root of } \rho(\xi) = \sum_{j=0}^r \alpha_j\xi^j : \text{characteristic polynomial}$$

Suppose ξ₁, ξ₂, ⋯, ξ_r are distinct roots of ρ(ξ). Then we can write uⁿ as a linear combination
$$u^n = c_1\xi_1^n + c_2\xi_2^n + \cdots + c_r\xi_r^n$$
where c₁, c₂, ⋯, c_r are constants determined by the initial values:
$$n = 0:\quad c_1 + c_2 + \cdots + c_r = u^0$$
$$n = 1:\quad c_1\xi_1 + c_2\xi_2 + \cdots + c_r\xi_r = u^1$$
$$\vdots$$
$$n = r-1:\quad c_1\xi_1^{r-1} + c_2\xi_2^{r-1} + \cdots + c_r\xi_r^{r-1} = u^{r-1}$$
In matrix form:
$$\begin{pmatrix}1 & 1 & \cdots & 1\\ \xi_1 & \xi_2 & \cdots & \xi_r\\ \vdots & \vdots & & \vdots\\ \xi_1^{r-1} & \xi_2^{r-1} & \cdots & \xi_r^{r-1}\end{pmatrix}\begin{pmatrix}c_1\\ c_2\\ \vdots\\ c_r\end{pmatrix} = \begin{pmatrix}u^0\\ u^1\\ \vdots\\ u^{r-1}\end{pmatrix}$$
The matrix on the left-hand side is a Vandermonde matrix: it is nonsingular, though typically ill-conditioned.
Example 6.5.1. We revisit the example:

un+2 − 3un+1 + 2un = −kf (un )

The characteristic polynomial is given by:

ρ(ξ) = ξ 2 − 3ξ + 2 = (ξ − 1)(ξ − 2) ⇒ ξ1 = 1, ξ2 = 2

Thus, the general solution is given by

un = c1 1n + c2 2n = c1 + c2 2n


If we take the initial data u⁰ = 0, u¹ = k, then we have
$$c_1 + c_2 = 0, \quad c_1 + 2c_2 = k \ \Rightarrow\ c_2 = k, \ c_1 = -k$$
Thus, we have
$$u^n = k(2^n - 1)$$

It grows exponentially!
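This blow-up is easy to reproduce (a Python sketch; the value of k is our choice). With f = 0 the scheme is the recurrence u^{n+2} = 3u^{n+1} − 2uⁿ, and the iterates track the closed form uⁿ = k(2ⁿ − 1):

```python
# Consistent but divergent scheme u^{n+2} - 3 u^{n+1} + 2 u^n = -k f(u^n),
# applied with f = 0 and starting values u^0 = 0, u^1 = k.
k = 0.01
u = [0.0, k]
for n in range(2, 31):
    u.append(3.0 * u[-1] - 2.0 * u[-2])
print(u[30], k * (2 ** 30 - 1))   # both huge: exponential growth
```

The computed iterate agrees with k(2ⁿ − 1) and has grown by eight orders of magnitude after 30 steps, even though the exact solution is identically zero.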

Claim 6.5.1. The matrix
$$A = \begin{pmatrix}1 & 1 & \cdots & 1\\ \xi_1 & \xi_2 & \cdots & \xi_r\\ \vdots & \vdots & & \vdots\\ \xi_1^{r-1} & \xi_2^{r-1} & \cdots & \xi_r^{r-1}\end{pmatrix}$$
is invertible provided the ξ_j are distinct.

Proof. To show A is invertible, it suffices to show the null space of AT is trivial; that is, the
solution to AT x = 0 is x ≡ 0.
  
$$\begin{pmatrix}1 & \xi_1 & \cdots & \xi_1^{r-1}\\ \vdots & \vdots & & \vdots\\ 1 & \xi_r & \cdots & \xi_r^{r-1}\end{pmatrix}\begin{pmatrix}x_1\\ \vdots\\ x_r\end{pmatrix} = 0$$

If we define

f (ξ) = x1 + x2 ξ + · · · + xr ξ r−1 : polynomial of degree r − 1

Note we have a system of equations:

f (ξ1 ) = f (ξ2 ) = · · · = f (ξr ) = 0 ⇒ f (ξ) has r roots

This implies that f (ξ) is the trivial zero polynomial and x1 = x2 = · · · = xr = 0. Thus, we
have shown A is invertible.

It is noteworthy that if any root satisfies |ξ_j| > 1, then the LMM does not converge. But if all roots satisfy |ξ_j| ≤ 1, will the method converge?

Claim 6.5.2. If ξ1 is a root of ρ(ξ) with multiplicity of 2, then un = nξ1n is a solution of the
difference equation.


Proof. We simply substitute uⁿ = nξ₁ⁿ into (6.5.1), splitting n + j = n + j:
$$\sum_{j=0}^r \alpha_ju^{n+j} = \sum_{j=0}^r \alpha_j(n+j)\xi_1^{n+j} = n\xi_1^n\underbrace{\sum_{j=0}^r \alpha_j\xi_1^j}_{\rho(\xi_1)} + \xi_1^{n+1}\underbrace{\sum_{j=0}^r j\alpha_j\xi_1^{j-1}}_{\rho'(\xi_1)}$$
If ξ₁ is a multiple root, then ρ(ξ₁) = ρ′(ξ₁) = 0. Thus, uⁿ = nξ₁ⁿ is also a solution.

Example 6.5.2. We consider the following scheme:
$$u^{n+2} - 2u^{n+1} + u^n = \frac{k}{2}\left(f(u^{n+2}) - f(u^n)\right)$$
$$\sum_j \alpha_j = 0, \qquad \sum_j j\alpha_j = \sum_j \beta_j = 0 \ \Rightarrow\ \text{consistent!}$$
$$\rho(\xi) = \xi^2 - 2\xi + 1 = (\xi - 1)^2 = 0 \ \Rightarrow\ \xi_1 = \xi_2 = 1$$
$$u^n = c_1\xi_1^n + c_2n\xi_1^n = c_1 + c_2n$$
$$u^0 = 0, \ u^1 = k \ \Rightarrow\ c_1 = 0, \ c_2 = k \ \Rightarrow\ u^n = kn$$
Note here the solution grows linearly, which is not as bad as exponential growth, but the method still does not converge.

Example 6.5.3.
$$u^{n+3} - 2u^{n+2} + \frac54u^{n+1} - \frac14u^n = \frac14kf(u^n)$$
$$\sum_j \alpha_j = 1 - 2 + \frac54 - \frac14 = 0, \qquad \sum_j j\alpha_j = 3\cdot 1 - 2\cdot 2 + \frac54 + 0 = \frac14 = \sum_j \beta_j \ \Rightarrow\ \text{consistent!}$$
$$\rho(\xi) = \xi^3 - 2\xi^2 + \frac54\xi - \frac14 = (\xi - 1)\left(\xi - \frac12\right)^2 \ \Rightarrow\ \xi_1 = 1, \ \xi_2 = \xi_3 = \frac12$$
$$u^n = c_1 + c_2\left(\frac12\right)^n + c_3n\left(\frac12\right)^n, \qquad n\left(\frac12\right)^n \to 0 \text{ as } n \to \infty$$

Claim 6.5.3.
$$\lim_{n\to\infty} n^p|\xi^n| = \begin{cases}\infty & |\xi| > 1\\ 0 & |\xi| < 1\\ 1 & |\xi| = 1,\ p = 0\end{cases}$$

Definition 6.5.4. An r-step LMM is zero-stable if the roots of ρ(ξ) satisfy

(1) |ξj | ≤ 1, j = 1, · · · , r.

(2) |ξj | = 1 ⇒ ξj is a simple root

We then say that ρ(ξ) satisfies the root condition.

Note the root condition (together with consistency) is a necessary and sufficient condition for convergence.

Theorem 6.5.5. If ρ(ξ) has distinct roots ξ₁, ⋯, ξ_j with multiplicities m₁ ≥ 1, ⋯, m_j ≥ 1, then the solution of the difference equation is
$$u^n = (a_{1,0} + a_{1,1}n + \cdots + a_{1,m_1-1}n^{m_1-1})\xi_1^n + \cdots + (a_{j,0} + a_{j,1}n + \cdots + a_{j,m_j-1}n^{m_j-1})\xi_j^n$$

Proof. SKIPPED

Chapter 7

Absolute Stability for Ordinary


Differential Equations

7.1 Generalization of difference equation


We generalize the results from Section 6.5 to the linear case u′ = λu, whose exact solution is u(t) = u₀e^{λt}. Now, we recall the general LMM:
$$\sum_{j=0}^r \alpha_jU^{n+j} = \lambda k\sum_{j=0}^r \beta_jU^{n+j} : \text{r-step LMM}$$
Substituting the trial solution uⁿ = ξⁿ gives
$$\sum_{j=0}^r(\alpha_j - \lambda k\beta_j)\xi^j = 0, \quad \text{a polynomial of degree } r \text{ in } \xi$$
With
$$\rho(\xi) = \sum_{j=0}^r \alpha_j\xi^j, \qquad \sigma(\xi) = \sum_{j=0}^r \beta_j\xi^j$$
this reads ρ(ξ) − λkσ(ξ) = 0.

Consistency
$$\sum_j \alpha_j = 0 \ \Rightarrow\ \rho(1) = 0, \quad \text{with } \rho'(1) \ne 0 \ \text{(stability)}$$
$$\sum_j j\alpha_j = \sum_j \beta_j \ \Rightarrow\ \rho'(1) = \sigma(1)$$

Definition 7.1.1. The stability polynomial Π(ξ) is defined as:
$$\Pi(\xi) = \rho(\xi) - \lambda k\sigma(\xi)$$
For k = 0, ξ = 1 is a simple root. For k small, there is a root ξ₁(k) such that ξ₁(k) → 1 as k → 0. We call ξ₁(k) the principal root of the LMM, while the other roots are called extraneous roots.


Example 7.1.2.
$$u^{n+2} - u^n - 2k\lambda u^{n+1} = 0 : \text{leap frog}$$
$$\Pi(\xi, \lambda k) = \xi^2 - 2k\lambda\xi - 1 = 0$$
$$\xi_1 = \lambda k + \sqrt{(\lambda k)^2 + 1} \to 1 \text{ as } k \to 0$$
$$\xi_2 = \lambda k - \sqrt{(\lambda k)^2 + 1} \to -1 \text{ as } k \to 0$$

Claim 7.1.1. If τⁿ = O(k^r), then ξ₁(k) = e^{λk} + O(k^{r+1}) and
$$u^n = \xi_1(k)^n = \left(e^{\lambda k} + O(k^{r+1})\right)^n = e^{\lambda t} + O(k^r)$$
This justifies the name of the principal root!
Example 7.1.3. Consider the following problem:
$$u' = \lambda u, \quad u(0) = 1, \qquad \text{exact solution: } u(t) = e^{\lambda t}$$
Using leap frog we get:
$$\xi_1 = \lambda k + \sqrt{(\lambda k)^2 + 1} = 1 + \lambda k + \frac12(\lambda k)^2 + \cdots = e^{\lambda k} + O(k^3)$$
$$\xi_2 = \lambda k - \sqrt{(\lambda k)^2 + 1} = -1 + \lambda k - \frac12(\lambda k)^2 + \cdots = -e^{-\lambda k} + O(k^3)$$
We see that ξ₁ⁿ = e^{λt} + O(k²), a good approximation to the actual solution; on the other hand, ξ₂ⁿ = (−1)ⁿe^{−λt} + O(k²) is clearly extraneous. The general solution is given by:
$$u^n = c_1\xi_1^n + c_2\xi_2^n = c_1e^{\lambda t} + c_2(-1)^ne^{-\lambda t} + O(k^2)$$
Using the starting values u⁰ = 1, u¹ = e^{λk}, we get
$$c_1 = 1 + O(k^3), \quad c_2 = O(k^3) \ \Rightarrow\ u^n \to e^{\lambda t} \text{ as } k \to 0,\ n \to \infty,\ nk = t \text{ fixed}$$
Note here that for fixed t = nk, lim_{n→∞} uⁿ = u(t) implies convergence. For k > 0, if Re(λ) > 0, the extraneous solution decays as n → ∞, and vice versa.

7.2 Absolute stability


A numerical method sometimes produces undesired results. To investigate whether a numerical method preserves certain properties of the equation, we need to define another notion of stability.
Consider the following linear systems:
u0 = Au u(0) = u0 A : n × n matrix


Suppose A is diagonalizable. Let r₁, r₂, ⋯, rₙ be the set of eigenvectors and λ₁, λ₂, ⋯, λₙ be the set of eigenvalues. The spectral factorization of A is given by
$$A = R\Lambda R^{-1}, \qquad R = \big(r_1\,|\,r_2\,|\,\cdots\,|\,r_n\big), \qquad \Lambda = \mathrm{diag}(\lambda_1, \cdots, \lambda_n)$$
The general solution is given by
$$u(t) = \alpha_1(t)r_1 + \alpha_2(t)r_2 + \cdots + \alpha_n(t)r_n$$
Note u(0) = α₁(0)r₁ + ⋯ + αₙ(0)rₙ = u₀, so α₁(0), ⋯, αₙ(0) can be determined from the initial data. Then
$$u'(t) = Au \ \iff\ \alpha_1'(t)r_1 + \cdots + \alpha_n'(t)r_n = Au = \alpha_1(t)\lambda_1r_1 + \cdots + \alpha_n(t)\lambda_nr_n$$
Equating coefficients yields αₖ′(t) = λₖαₖ(t) for k = 1, 2, ⋯, n, so
$$\alpha_k(t) = \alpha_k(0)e^{\lambda_kt} \ \iff\ u(t) = \alpha_1(0)e^{\lambda_1t}r_1 + \cdots + \alpha_n(0)e^{\lambda_nt}r_n$$
If Re(λₖ) < 0 for all k = 1, 2, ⋯, n, then u(t) → 0 as t → ∞. This property is usually called Schur stability.
Definition 7.2.1. A numerical method that preserves Schur stability is called absolutely stable.
We consider the simplest example:
$$u'(t) = \lambda u(t), \qquad u(0) = 1$$
$$u^{n+1} = u^n + k\lambda u^n = (1 + \lambda k)u^n : \text{Euler's method}$$
$$e^{n+1} = (1 + \lambda k)e^n - k\tau^n : \text{error propagation}$$
Notice if |1 + λk| > 1, then the error grows exponentially. Thus, Euler's method is absolutely stable if |1 + kλ| ≤ 1; for real λ, this means −2 ≤ λk ≤ 0. We define z = kλ (z is complex in general). We say the interval of absolute stability for Euler's method is [−2, 0].
Notice that this does not contradict zero stability, since
$$|e^n| \le Te^{|\lambda|T}\|\tau\|_\infty = Te^{|\lambda|T}O(k)$$
The error bound can grow exponentially, which means we might lose accuracy for large k. However, for fixed T, as k → 0 we still have eⁿ → 0, so convergence is guaranteed.


7.3 Stability regions for linear multistep methods


We apply a general LMM of the form (5.5.1) to u′ = λu:
$$\sum_{j=0}^r \alpha_ju^{n+j} = \underbrace{\lambda k}_{=z}\sum_{j=0}^r \beta_ju^{n+j} = z\sum_{j=0}^r \beta_ju^{n+j}$$
Substituting a trial solution uⁿ = ξⁿ, we get
$$\sum_{j=0}^r(\alpha_j - z\beta_j)\xi^j = 0$$
This resembles the difference equation from Section 6.5; the characteristic polynomial is replaced by
$$\Pi(\xi, z) = \sum_j(\alpha_j - z\beta_j)\xi^j = \rho(\xi) - z\sigma(\xi) : \text{stability polynomial}$$

Definition 7.3.1. For a given z, a LMM is absolute stable if Π(ξ, z) satisfies the root con-
dition (Definition 6.5.4).

Note a LMM is zero-stable if and only if z = 0 is inside the absolute stability region.

Example 7.3.2. For the forward Euler method, u^{n+1} = (1 + z)uⁿ, we have
$$\Pi(\xi, z) = \xi - (1 + z) = 0 \ \Rightarrow\ \xi_1 = 1 + z$$
$$|\xi_1| \le 1 \ \Rightarrow\ |1 + z| \le 1$$
The stability region is the unit disk centered at −1, as shown in Figure 7.1.

Figure 7.1: Stability region of forward Euler in cyan (the unit disk centered at z = −1)


Figure 7.2: Stability region of backward Euler (the exterior of the unit disk centered at z = 1)

Example 7.3.3. For backward Euler, u^{n+1} = uⁿ + λku^{n+1}, we have
$$\Pi(\xi, z) = (1 - z)\xi - 1 = 0 \ \Rightarrow\ \xi_1 = \frac{1}{1 - z}$$
$$|\xi_1| \le 1 \ \iff\ |1 - z| \ge 1$$
The stability region is the exterior of the disk shown in Figure 7.2. Note the region of absolute stability contains the entire left half-plane; such a method is called A-stable. An A-stable method has no restriction on k for absolute stability.

Example 7.3.4. For the trapezoid method, u^{n+1} = uⁿ + (z/2)(uⁿ + u^{n+1}), we have
$$\Pi(\xi, z) = \left(1 - \frac{z}{2}\right)\xi - \left(1 + \frac{z}{2}\right) = 0 \ \Rightarrow\ \xi_1 = \frac{1 + z/2}{1 - z/2}$$
$$|\xi_1| \le 1 \ \iff\ \mathrm{Re}(z) \le 0$$
The stability region, shown in Figure 7.3, is the left half-plane; the method is also A-stable.

Figure 7.3: Stability region of the trapezoid method (the left half-plane)


Example 7.3.5. For the midpoint method (leap frog), u^{n+1} = u^{n−1} + 2zuⁿ, we have
$$\Pi(\xi, z) = \xi^2 - 2z\xi - 1 = 0 \ \Rightarrow\ \xi_{1,2} = z \pm \sqrt{z^2 + 1}$$
Since ξ₁ξ₂ = −1, |ξ₁| = 1/|ξ₂|. Therefore the method is stable if and only if |ξ₁| = |ξ₂| = 1 (with distinct roots). In other words,
$$z = \frac{\xi^2 - 1}{2\xi} = \frac12\left(\xi - \frac{1}{\xi}\right)$$
Let ξ = a + ib with |ξ|² = a² + b² = 1, so 1/ξ = ξ̄. Then
$$z = \frac12(\xi - \bar\xi) = i\alpha : \text{pure imaginary}$$
with |α| < 1 for absolute stability. The stability region is shown in Figure 7.4.

Figure 7.4: Stability region of the leap frog method (the open segment z = iα, |α| < 1, on the imaginary axis)

7.4 Boundary locus method


In this section, we introduce an easier way to find the absolute stability region. Recall that
$$\Pi(\xi, z) = \rho(\xi) - z\sigma(\xi) : \text{stability polynomial}$$
Notice if z is on the boundary of the absolute stability region, then Π(ξ, z) has at least one root with |ξ(z)| = 1. Denote this root by ξ = e^{iθ} for some θ ∈ [0, 2π]. Now, substitute this root into the stability polynomial:
$$\Pi(e^{i\theta}, z) = \sum_j(\alpha_j - z\beta_j)e^{ij\theta} = 0 \ \Rightarrow\ z(\theta) = \frac{\sum_j \alpha_je^{ij\theta}}{\sum_j \beta_je^{ij\theta}} = \frac{\rho(e^{i\theta})}{\sigma(e^{i\theta})}$$
It is worth noting that every point on the boundary must have this form for some θ; however, not every θ need correspond to a boundary point.
Idea: plot z(θ) for all θ ∈ [0, 2π], then identify all z that are potentially on the boundary.


Example 7.4.1. For Euler's method, we have
$$\xi = 1 + z \ \Rightarrow\ z = \xi - 1 = e^{i\theta} - 1 \ \text{on the boundary}$$
The boundary corresponds to the circle bounding the disk in Figure 7.1. The next step is to determine whether the inside or the outside of the boundary is stable. Observe that the number of roots with |ξ| > 1 cannot change unless z crosses the boundary. Thus, we can test each region by picking a point inside and a point outside:

Inside: pick z = −1; then ξ₁ = 0 satisfies the root condition, so this region is stable.

Outside: pick z = −3; then ξ₁ = −2 does not satisfy the root condition, so this region is unstable.

Thus, we have found the same absolute stability region as derived before.
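The boundary locus and the region test take only a few lines of code (a Python sketch; the function names are ours):

```python
import cmath
import math

def boundary_locus(theta, alpha, beta):
    """z(theta) = rho(e^{i theta}) / sigma(e^{i theta}) for an LMM with
    coefficient lists alpha = [alpha_0, ..., alpha_r], beta = [beta_0, ..., beta_r]."""
    xi = cmath.exp(1j * theta)
    rho = sum(a * xi ** j for j, a in enumerate(alpha))
    sigma = sum(b * xi ** j for j, b in enumerate(beta))
    return rho / sigma

# Forward Euler: rho(xi) = xi - 1, sigma(xi) = 1, so z(theta) = e^{i theta} - 1,
# which traces the circle |z + 1| = 1 as theta runs over [0, 2 pi].
alpha, beta = [-1.0, 1.0], [1.0, 0.0]
for m in range(8):
    z = boundary_locus(2.0 * math.pi * m / 8.0, alpha, beta)
    print(abs(z + 1.0))           # always 1 (up to roundoff)

# Root-condition test at one sample point per region; the root is xi = 1 + z.
print(abs(1.0 + (-1.0)) <= 1.0)   # -> True  (z = -1 inside: stable)
print(abs(1.0 + (-3.0)) <= 1.0)   # -> False (z = -3 outside: unstable)
```

The same function, with other coefficient lists, draws the boundary locus for any LMM.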

Example 7.4.2. For the midpoint method, we have
$$\Pi(\xi, z) = \xi^2 - 2z\xi - 1 = 0 \ \Rightarrow\ z = \frac{\xi^2 - 1}{2\xi} = \frac{e^{2i\theta} - 1}{2e^{i\theta}} = \frac12\left(e^{i\theta} - e^{-i\theta}\right) = i\sin(\theta)$$
Therefore, z is pure imaginary with |z| ≤ 1, i.e. z = iα, |α| ≤ 1, on the boundary. Off this segment, pick z = 3/4: we get ξ₁ = 2, ξ₂ = −1/2, which does not satisfy the root condition. Thus, the stability region is just the boundary segment itself.

As shown in Figure 7.5, the boundary locus may cross itself. In those situations, to determine the stability region, evaluate the roots of Π(ξ, z) at some convenient z inside each region.

Problem 7.4.1. Use the boundary locus method to find the absolute stability region of (i) the 2-stage RK method and (ii) the 2-step BDF method.

Figure 7.5: Stability region of the 4-step Adams-Bashforth method.


7.5 Linear systems


Our discussions so far have focused on stability theory involving a scalar differential equation
u0 = f (u, t). In this section, we will take a look at linear systems u0 = Au, where A is a
constant m × m matrix.
u0 = Au u(0) = u0
Suppose A is diagonalizable. Let r1 , r2 , · · · , rm be the set of eigenvectors and λ1 , λ2 , · · · , λm
be the set of eigenvalues. Recall that
A = RΛR−1 R = (r1 | · · · |rm ) Λ = diag(λ1 , · · · , λm )
A change of coordinates yields:
R^{−1}u′ = R^{−1}Au = R^{−1}AR R^{−1}u = Λ R^{−1}u
Let v = R^{−1}u; then v′ = Λv, v(0) = R^{−1}u_0. We have

v_1′ = λ_1 v_1, v_1(0) given
⋮
v_m′ = λ_m v_m, v_m(0) given

m decoupled equations ⇒ u(t) = Rv(t)

An LMM can be decoupled in the same way.


Example 7.5.1. Applying forward Euler's method to the system, we have
u^{n+1} = u^n + kAu^n
Let v^n = R^{−1}u^n; we can decouple the system:
v^{n+1} = v^n + kΛv^n
For each component:
v_p^{n+1} = v_p^n + kλ_p v_p^n = (1 + kλ_p)v_p^n,  p = 1, ⋯, m
For stability, we require kλ_p to be in the stability region for all p.
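A small numerical sketch of this stability criterion (the matrix A and the step sizes below are illustrative choices, not from the notes): forward Euler stays bounded exactly when every kλ_p satisfies |1 + kλ_p| ≤ 1.

```python
import numpy as np

# Forward Euler on u' = A u; stability requires k*lambda_p inside the
# region |1 + k*lambda| <= 1 for every eigenvalue lambda_p of A.
A = np.array([[-10.0, 0.0],
              [  0.0, -1.0]])   # diagonalizable, lambda = -10, -1
lam = np.linalg.eigvals(A)
u0 = np.array([1.0, 1.0])

def forward_euler(A, u0, k, nsteps):
    u = u0.copy()
    for _ in range(nsteps):
        u = u + k * (A @ u)
    return u

k_stable = 0.1    # |1 + 0.1*(-10)| = 0 and |1 + 0.1*(-1)| = 0.9: both <= 1
k_unstable = 0.3  # |1 + 0.3*(-10)| = 2 > 1
assert all(abs(1 + k_stable * l) <= 1 for l in lam)
assert np.linalg.norm(forward_euler(A, u0, k_stable, 200)) <= np.linalg.norm(u0)
assert np.linalg.norm(forward_euler(A, u0, k_unstable, 200)) > 1e10
```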
Example 7.5.2. Consider
u′ = Au,  A : 2 × 2,  λ_1 << λ_2 << 0
The general solution is given by:
u(t) = α_1 e^{λ_1 t} r_1 + α_2 e^{λ_2 t} r_2 ≈ α_2 e^{λ_2 t} r_2
For this component, we require −2 ≤ kλ_2 < 0 ⇒ k ≤ 2/|λ_2|. But for absolute stability we also need −2 ≤ kλ_1 < 0 ⇒ k ≤ 2/|λ_1| << 2/|λ_2|.
This is an example of a stiff system.


7.6 Stiff systems


To discuss what makes a system stiff, we need to consider the choice of step size:

Accuracy: k ≤ k_acc ⇒ the LTE is acceptably small

Stability: k ≤ k_st ⇒ the method is absolutely stable

Ideally, we should have k_acc ≤ k_st.


Definition 7.6.1. A system is stiff if k_st << k_acc. A good measure of stiffness is the ratio max_p |λ_p| / min_p |λ_p|.

We note that if some λ_p of large modulus lies very close to the imaginary axis, the corresponding solution component oscillates rapidly and is essentially undamped. In this case, a small k_acc must be chosen to resolve the oscillations. Stiffness may also occur in a scalar problem.

Example 7.6.2. We consider the following scalar ODE:

u′ = λ(u − cos(t)) − sin(t),  u(0) = u_0,  λ < 0

The exact solution is given by:

u(t) = (u_0 − 1)e^{λt} + cos(t),  so u(t) → cos(t) exponentially fast
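A quick sketch of the stiffness phenomenon for this example, written with the forcing λ(u − cos t) − sin t so that it matches the stated exact solution; λ = −1000 and k = 0.1 are illustrative choices. Forward Euler would need k ≤ 2/|λ| = 0.002 here, while backward Euler is fine with k = 0.1:

```python
import numpy as np

# Stiff scalar test: u' = lam*(u - cos t) - sin t, exact solution
# u(t) = (u0 - 1) e^{lam t} + cos t  (here u0 = 1, so u(t) = cos t).
lam, u0, k, T = -1.0e3, 1.0, 0.1, 2.0
n = int(round(T / k))

# Forward Euler: unstable since |1 + k*lam| = 99 >> 1.
u = u0
for i in range(n):
    t = i * k
    u = u + k * (lam * (u - np.cos(t)) - np.sin(t))
fe = u

# Backward Euler: the linear implicit step has a closed-form solution.
u = u0
for i in range(n):
    t1 = (i + 1) * k
    u = (u + k * (-lam * np.cos(t1) - np.sin(t1))) / (1 - k * lam)
be = u

assert abs(fe) > 1e10                  # forward Euler blows up
assert abs(be - np.cos(T)) < 1e-2      # backward Euler tracks cos(t)
```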

7.6.1 Numerical methods for stiff problems


To deal with stiffness, region of absolute stability should extend as far as possible into left
half plane (ideally, A-stable).

LMM
So far, we have seen two kinds of LMM:

(I) Methods with bounded stability region: forward Euler, Adams.

(II) Methods with unbounded stability region: backward Euler, Trapezoid

In fact, all explicit methods have bounded stability regions, while some implicit methods also have bounded stability regions. Any A-stable LMM is at most second order accurate (the second Dahlquist barrier).

7.7 L-Stability
Recall that the stability polynomial is:

Π(ξ, z) = ρ(ξ) − zσ(ξ)


For backward Euler:
Π(ξ, z) = (1 − z)ξ − 1 = 0  ⇒  ξ_1 = 1/(1 − z),  |ξ_1| → 0 as |z| → ∞
For the Trapezoid method:
Π(ξ, z) = (1 − z/2)ξ − (1 + z/2) = 0  ⇒  ξ_1 = (1 + z/2)/(1 − z/2),  |ξ_1| → 1 as |z| → ∞
We see that although both methods are A-stable, for large |z| backward Euler is more effective (errors decay faster). Note
(1/z)Π(ξ, z) = (1/z)ρ(ξ) − σ(ξ)
so as |z| → ∞, the roots of Π(ξ, z) approach the roots of σ(ξ).

Definition 7.7.1. A method is L-stable if it is A-stable and the roots ξ_j(z) of Π(ξ, z) tend to 0 as |z| → ∞.

Alternatively, a method is L-stable if it is A-stable and all roots of σ(ξ) are strictly inside the unit circle, |ξ_j| < 1.

In the examples above, backward Euler method is L-stable while Trapezoid method is
not. L-stable methods are in general very good at solving stiff systems.

7.7.1 Backward differentiation formula methods (BDF)


One class of L-stable method is called backward differentiation formula (BDF) method. Note
if we pick σ(ξ) = βr ξ r , then all roots are at ξ = 0. The general form of BDF is given by:
Σ_{j=0}^{r} α_j U^{n+j} = k β_r f(U^{n+r})

In the simplest case r = 1, we get U n+1 = U n +kf (U n+1 ), which corresponds to the backward
Euler method. Other BDF methods are:

r=2: 3U^{n+2} − 4U^{n+1} + U^n = 2kf(U^{n+2})

r=3: 11U^{n+3} − 18U^{n+2} + 9U^{n+1} − 2U^n = 6kf(U^{n+3})

r=4: 25U^{n+4} − 48U^{n+3} + 36U^{n+2} − 16U^{n+1} + 3U^n = 12kf(U^{n+4})
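A minimal sketch of BDF2 applied to the linear test problem u′ = λu (the helper `bdf2` and the exact-value starting step are our own choices); halving k should cut the error by roughly 4, confirming second-order accuracy:

```python
import numpy as np

# BDF2: 3 U^{n+2} - 4 U^{n+1} + U^n = 2k f(U^{n+2}). For f(u) = lam*u the
# implicit step solves in closed form: U^{n+2} = (4 U^{n+1} - U^n) / (3 - 2*k*lam).
def bdf2(lam, u0, k, T):
    n = int(round(T / k))
    u_prev, u = u0, u0 * np.exp(lam * k)   # exact value for the starting step
    for _ in range(n - 1):
        u_prev, u = u, (4 * u - u_prev) / (3 - 2 * k * lam)
    return u

lam, T = -2.0, 1.0
err1 = abs(bdf2(lam, 1.0, 0.01, T) - np.exp(lam * T))
err2 = abs(bdf2(lam, 1.0, 0.005, T) - np.exp(lam * T))
assert 3.0 < err1 / err2 < 5.0   # halving k cuts the error ~4x: second order
```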

Chapter 8

Diffusion Equation and Parabolic


Problems

We consider the following problem:

ut = uxx,  u(x, 0) = u_0(x),  u(0, t) = g_0(t),  u(1, t) = g_1(t),  t ≥ 0

This combines a boundary value problem with an initial value problem. We will discretize space and time as shown in Figure 8.1:

Figure 8.1: Grid for diffusion equation.

u_i^n ≈ u(x_i, t_n),  x_i = ih,  t_n = nk,  i = 0, ⋯, m+1,  n = 0, 1, ⋯,  (m+1)h = 1

We first consider the following schemes:

I.
(u_i^{n+1} − u_i^n)/k = (u_{i−1}^n − 2u_i^n + u_{i+1}^n)/h^2 : explicit (8.0.1)
(forward Euler in time; centered difference in space)


We can also write it as
u_i^{n+1} = u_i^n + (k/h^2)(u_{i−1}^n − 2u_i^n + u_{i+1}^n)
Note here we define r = k/h^2, the ratio of k and h^2.
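A minimal sketch of scheme (8.0.1) in array form (homogeneous Dirichlet data; the grid size and the tiny high-frequency seed are illustrative choices, not from the notes), showing the r ≤ 1/2 restriction derived later in this chapter:

```python
import numpy as np

def explicit_heat_step(u, r):
    """One step of scheme (8.0.1): u_i <- u_i + r*(u_{i-1} - 2u_i + u_{i+1}),
    keeping the boundary values u_0 = u_{m+1} = 0 fixed."""
    v = u.copy()
    v[1:-1] = u[1:-1] + r * (u[:-2] - 2.0 * u[1:-1] + u[2:])
    return v

m = 49
x = np.linspace(0.0, 1.0, m + 2)
# smooth data plus a tiny seed in the fastest (sawtooth-like) grid mode
u0 = np.sin(np.pi * x) + 1e-6 * np.sin(m * np.pi * x)

def run(r, nsteps):
    u = u0.copy()
    for _ in range(nsteps):
        u = explicit_heat_step(u, r)
    return u

assert np.max(np.abs(run(0.5, 200))) <= np.max(np.abs(u0)) + 1e-12  # r <= 1/2: decays
assert np.max(np.abs(run(0.6, 200))) > 1e3                          # r > 1/2: blows up
```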

II. Crank-Nicolson
(u_i^{n+1} − u_i^n)/k = (1/(2h^2))[(u_{i−1}^n − 2u_i^n + u_{i+1}^n) + (u_{i−1}^{n+1} − 2u_i^{n+1} + u_{i+1}^{n+1})] (8.0.2)
This method is the trapezoid method in time with centered differences in space: implicit.

Figure 8.2: Stencils for (8.0.1) and (8.0.2).

8.1 Local truncation error and order of accuracy


LTE for I

τ_i^n = [u(x, t + k) − u(x, t)]/k − (1/h^2)[u(x − h, t) − 2u(x, t) + u(x + h, t)]
= u_t + (k/2)u_tt + (k^2/6)u_ttt + O(k^3) − (u_xx + (h^2/12)u_xxxx + O(h^4))
= (k/2)u_tt − (h^2/12)u_xxxx + h.o.t.   (using u_t = u_xx)
= (k/2 − h^2/12)u_xxxx + h.o.t. = O(k + h^2)   (using u_tt = u_xxxx)
Thus, (8.0.1) is first order accurate in time, second order accurate in space.

Problem 8.1.1. Show Crank-Nicolson method is second order accurate in both space and
time; i.e. τin = O(k 2 + h2 ).


A method is said to be consistent if τ → 0 as k, h → 0. If a method is stable, we expect convergence. For linear PDEs, consistency plus stability is equivalent to convergence; this result is known as the Lax equivalence theorem.

8.2 Method of line discretization


To investigate the stability theory of time-dependent PDEs, we can relate it to the stability theory for time-dependent ODEs. In the method of lines approach, we first discretize in space alone. The equation ut = uxx then becomes:
u_i′(t) = (1/h^2)(u_{i−1}(t) − 2u_i(t) + u_{i+1}(t)),  i = 1, ⋯, m : a system of ODEs


Figure 8.3: Method of lines

As shown in Figure 8.3, we may solve the system of ODEs along vertical lines. In matrix
form:

u(t) = (u_1(t), u_2(t), ⋯, u_m(t))^T,  u′ = Au + g(t) : semi-discrete system

where A is the same tridiagonal matrix as before and g(t) includes the boundary conditions:
A = (1/h^2) tridiag(1, −2, 1),  g(t) = (1/h^2)(g_0(t), 0, ⋯, 0, g_1(t))^T

Once we discretize in time, we will get a fully discrete system. The stability of schemes
in (8.0.1) or (8.0.2) can now be analyzed. We expect the method to be stable if kλp ∈ S; i.e.


if the time step k times any eigenvalue λ_p lies in the absolute stability region of the ODE method. We have seen that the eigenvalues of A are given by:
λ_p = (2/h^2)(cos(pπh) − 1),  p = 1, 2, ⋯, m
Notice that all λ_p are real and negative, with min |λ_p| = |λ_1| ≈ π^2 and max |λ_p| = |λ_m| ≈ 4/h^2.
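This eigenvalue formula is easy to verify numerically (a quick sketch; the size m = 20 is arbitrary):

```python
import numpy as np

# Eigenvalues of A = (1/h^2) tridiag(1, -2, 1), size m x m, h = 1/(m+1),
# should equal lambda_p = (2/h^2)(cos(p*pi*h) - 1), p = 1..m.
m = 20
h = 1.0 / (m + 1)
A = (np.diag(-2.0 * np.ones(m)) + np.diag(np.ones(m - 1), 1)
     + np.diag(np.ones(m - 1), -1)) / h**2
computed = np.sort(np.linalg.eigvalsh(A))
p = np.arange(1, m + 1)
exact = np.sort(2.0 / h**2 * (np.cos(p * np.pi * h) - 1.0))
assert np.allclose(computed, exact)
assert np.all(exact < 0) and np.abs(exact).max() < 4.0 / h**2
```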

Example 8.2.1. For method I ((8.0.1)), the stability region of forward Euler's method on the real axis is −2 ≤ kλ ≤ 0. Thus, we require:
−2 ≤ −4k/h^2 ≤ 0  ⇒  k/h^2 ≤ 1/2
Example 8.2.2. For method II ((8.0.2)), Trapezoid method is A-stable. Thus, there is no
restriction on k.

Our problem becomes stiff as h → 0, since the stiffness ratio is:
|λ_m / λ_1| ≈ 4/(π^2 h^2)
To see what causes stiffness, we go back to a simple example:

ut = uxx,  g_0(t) = g_1(t) = 0,  u(x, 0) = sin(pπx)

The exact solution is given by

u(x, t) = e^{−p^2 π^2 t} sin(pπx)

Low frequencies decay slowly while high frequencies decay rapidly, so different components require different time scales. In the continuous case, we have infinite stiffness; the discrete case has finite stiffness, but as h decreases, m increases and more frequencies are present; hence, the stiffness increases.

8.3 Convergence of method I


For the problem
ut = uxx,  u(x, 0) = u_0(x),  we have ||u(·, t)||_∞ ≤ ||u_0||_∞
If u(x, 0) = sin(pπx), then u(x, t) = e^{−p^2 π^2 t} sin(pπx), which decays in time.

Definition 8.3.1. We define

||u^n||_∞ = max_i |u_i^n|

The method is said to be stable if ||u^n||_∞ ≤ ||u^0||_∞.


Claim 8.3.1. Method I is stable if and only if r = k/h^2 ≤ 1/2.
Proof. (⇐) Assume r ≤ 1/2. Then
u_i^{n+1} = u_i^n + r(u_{i−1}^n − 2u_i^n + u_{i+1}^n) = r u_{i−1}^n + (1 − 2r)u_i^n + r u_{i+1}^n
Taking absolute values on both sides and using the triangle inequality (note 1 − 2r ≥ 0):
|u_i^{n+1}| ≤ r|u_{i−1}^n| + (1 − 2r)|u_i^n| + r|u_{i+1}^n| ≤ r||u^n||_∞ + (1 − 2r)||u^n||_∞ + r||u^n||_∞ = ||u^n||_∞
This holds for all i, thus for the max as well:
||u^{n+1}||_∞ ≤ ||u^n||_∞ ≤ ⋯ ≤ ||u^0||_∞
(⇒) Suppose r > 1/2; we want to show the solution grows. Let u_i^0 = (−1)^i, so ||u^0||_∞ = 1. But we see
u_i^1 = r(−1)^{i−1} + (1 − 2r)(−1)^i + r(−1)^{i+1} = (−1)^{i−1}(r − (1 − 2r) + r) = (−1)^{i−1}(4r − 1)
|u_i^1| = 4r − 1 > 1  ⇒  ||u^1||_∞ > ||u^0||_∞
Continuing in this way, the solution grows by the factor 4r − 1 at every step.
To show convergence, we have the following:
Claim 8.3.2. If r ≤ 1/2, then ||e^n||_∞ ≤ T||τ||_∞, where T = nk.

Proof. Let û_i^n denote the exact solution u(x_i, t_n). Then

u_i^{n+1} = r u_{i−1}^n + (1 − 2r)u_i^n + r u_{i+1}^n (8.3.1)
û_i^{n+1} = r û_{i−1}^n + (1 − 2r)û_i^n + r û_{i+1}^n + kτ_i^n (8.3.2)

Subtracting (8.3.2) from (8.3.1) yields:

e_i^{n+1} = r e_{i−1}^n + (1 − 2r)e_i^n + r e_{i+1}^n − kτ_i^n,  e_i^n = u_i^n − û_i^n : error
Denote ||e^n||_∞ = max_i |e_i^n| and ||τ||_∞ = max_n ||τ^n||_∞. If r ≤ 1/2, then we have
||e^{n+1}||_∞ ≤ r||e^n||_∞ + (1 − 2r)||e^n||_∞ + r||e^n||_∞ + k||τ^n||_∞ = ||e^n||_∞ + k||τ||_∞
Inductively (with e^0 = 0), we have
||e^n||_∞ ≤ ||e^0||_∞ + nk||τ||_∞ = T||τ||_∞


Corollary 8.3.2. If k, h → 0 with r = k/h^2 fixed and r ≤ 1/2, then the method converges.
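A sketch of the convergence result (the grid sizes and r = 0.4 are illustrative): with r fixed, τ = O(k + h^2) = O(h^2), so halving h should cut the max-norm error by about 4.

```python
import numpy as np

def solve(m, r, T):
    """Run method I with r = k/h^2 fixed; return max-norm error vs the
    exact solution e^{-pi^2 t} sin(pi x) at the final time reached."""
    h = 1.0 / (m + 1)
    k = r * h**2
    x = np.linspace(0.0, 1.0, m + 2)
    u = np.sin(np.pi * x)
    nsteps = int(round(T / k))
    for _ in range(nsteps):
        u[1:-1] = u[1:-1] + r * (u[:-2] - 2.0 * u[1:-1] + u[2:])
    exact = np.exp(-np.pi**2 * nsteps * k) * np.sin(np.pi * x)
    return np.max(np.abs(u - exact))

r, T = 0.4, 0.1
e1, e2 = solve(9, r, T), solve(19, r, T)   # h = 0.1 and h = 0.05
assert 3.0 < e1 / e2 < 5.0   # second-order convergence in h
```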


8.3.1 Convergence in matrix form


We denote the approximate solution u^n and the exact solution û^n at time level n by:

u^n = (u_1^n, u_2^n, ⋯, u_m^n)^T,  û^n = (u(x_1, t_n), u(x_2, t_n), ⋯, u(x_m, t_n))^T

In matrix form we have

u^{n+1} = Bu^n (8.3.3)
and
û^{n+1} = Bû^n + kτ^n (8.3.4)
where B = tridiag(r, 1 − 2r, r) is the m × m matrix with 1 − 2r on the diagonal and r on the sub- and superdiagonals.
Subtracting (8.3.4) from (8.3.3):
E^{n+1} = BE^n − kτ^n,  E^n = u^n − û^n
Inductively, we will get
E^n = B^n E^0 − k Σ_{l=0}^{n−1} B^{n−l−1} τ^l = −k Σ_{l=0}^{n−1} B^{n−l−1} τ^l   (E^0 = 0)

Taking the max norm on both sides:

||E^n|| ≤ k Σ_{l=0}^{n−1} ||B^{n−l−1}|| ||τ||

Definition 8.3.3. (I) If ||τ|| → 0 as k, h → 0 with r = k/h^2 fixed, then we have consistency.

(II) If ||B^n|| is uniformly bounded for all k, n with nk ≤ T, then we have stability.

(III) If ||E^n|| → 0 as k, h → 0 with r = k/h^2 fixed, then we have convergence.

Power boundedness of B
Recall that the matrix B for method I is B = tridiag(r, 1 − 2r, r). In the max norm or 1-norm, we have

||B||_∞ = ||B||_1 = 1 if r ≤ 1/2,  and  4r − 1 if r > 1/2

In the 2-norm (B is symmetric), ||B||_2 = ρ(B), where the eigenvalues are
λ_p = 1 + 2r(cos(pπh) − 1) = 1 − 4r sin^2(pπh/2)
If |λ_p| ≤ 1 for all p, then ||B^n||_2 is bounded:

−1 ≤ 1 − 4r sin^2(pπh/2) ≤ 1  ⇒  r ≤ 1/2
2 2
A very important note: k and h cannot tend to 0 independently here; their ratio r = k/h^2 must be held fixed.

8.4 Von Neumann analysis


8.4.1 Fourier analysis
ut = uxx,  −∞ < x < ∞
We look for solutions of the form u(x, t) = e^{ωt + iξx}, where ξ is the wave number and ω is the growth rate. Plugging this into the equation, we get

ωe^{ωt + iξx} = (iξ)^2 e^{ωt + iξx}

Thus, ξ and ω satisfy the dispersion relation:

ω = −ξ^2

We can rewrite the solution as

u(x, t) = e^{−ξ^2 t + iξx}
If ξ = 0, we just have a constant mode. On the other hand, if ξ ≠ 0, we have a mode oscillating in space and decaying in time. We now introduce the method of Fourier analysis.

Fourier transform
f̂(ξ) = (1/2π) ∫_{−∞}^{∞} f(x) e^{−iξx} dx,  −∞ < ξ < ∞

Inverse Fourier transform
f(x) = ∫_{−∞}^{∞} f̂(ξ) e^{iξx} dξ

Parseval's equality
(1/2π) ∫_{−∞}^{∞} |f(x)|^2 dx = ∫_{−∞}^{∞} |f̂(ξ)|^2 dξ

Similarly, in the finite dimensional setting, we consider an orthonormal basis q_1, ⋯, q_n ∈ R^n; i.e.
q_i^T q_j = 0 for i ≠ j,  q_i^T q_i = ||q_i||_2^2 = 1
For any vector f ∈ R^n, we have:

Inverse Fourier transform

f = f̂_1 q_1 + f̂_2 q_2 + ⋯ + f̂_n q_n

Fourier transform
q_i^T f = q_i^T(f̂_1 q_1 + ⋯ + f̂_n q_n) = f̂_i,  where f̂ = (f̂_1, ⋯, f̂_n)^T

Parseval's equality
||f||_2^2 = f^T f = f̂_1^2 + ⋯ + f̂_n^2 = f̂^T f̂ = ||f̂||_2^2

Solution formula
Recall the problem
ut = uxx,  u(x, 0) = f(x)
The Fourier transform yields:
û(ξ, t) = (1/2π) ∫_{−∞}^{∞} u(x, t) e^{−iξx} dx

Differentiating in t and integrating by parts twice (the boundary terms vanish), we see

û_t(ξ, t) = (1/2π) ∫_{−∞}^{∞} u_t(x, t) e^{−iξx} dx = (1/2π) ∫_{−∞}^{∞} u_xx(x, t) e^{−iξx} dx
= (1/2π)[u_x e^{−iξx}]_{−∞}^{∞} + (iξ/2π) ∫_{−∞}^{∞} u_x e^{−iξx} dx
= (iξ/2π)[u e^{−iξx}]_{−∞}^{∞} + ((iξ)^2/2π) ∫_{−∞}^{∞} u(x, t) e^{−iξx} dx = −ξ^2 û(ξ, t)

Thus, we obtain
û_t(ξ, t) = −ξ^2 û(ξ, t),  û(ξ, 0) = f̂(ξ)


The solution is
û(ξ, t) = f̂(ξ) e^{−ξ^2 t}
By the inverse Fourier transform:
u(x, t) = ∫_{−∞}^{∞} û(ξ, t) e^{iξx} dξ = ∫_{−∞}^{∞} f̂(ξ) e^{−ξ^2 t + iξx} dξ

Using Parseval's equality:

(1/2π) ∫_{−∞}^{∞} |u(x, t)|^2 dx = ∫_{−∞}^{∞} |û(ξ, t)|^2 dξ = ∫_{−∞}^{∞} |f̂(ξ)|^2 e^{−2ξ^2 t} dξ
≤ ∫_{−∞}^{∞} |f̂(ξ)|^2 dξ = (1/2π) ∫_{−∞}^{∞} |f(x)|^2 dx
This shows that
||u(·, t)||_2 ≤ ||f||_2 for all t ≥ 0 : stability

8.4.2 Von Neumann analysis

Consider the following scheme:
u_j^{n+1} = u_j^n + r(u_{j−1}^n − 2u_j^n + u_{j+1}^n)

We look for solutions of the form u_j^n = e^{ωnk + iξjh} (so u_j^0 = e^{iξjh}):

e^{ω(n+1)k + iξjh} = e^{ωnk + iξjh} + re^{ωnk}(e^{iξ(j−1)h} − 2e^{iξjh} + e^{iξ(j+1)h})

Dividing both sides by e^{ωnk + iξjh}, we get
e^{ωk} = 1 + r(e^{−iξh} − 2 + e^{iξh}) = 1 + 2r(cos(ξh) − 1) : discrete dispersion relation
Note here
u_j^{n+1} = e^{ωk} u_j^n = g(ξh) u_j^n  ⇒  u_j^n = g(ξh)^n e^{iξjh}
Definition 8.4.1. We call g(ξh) = 1 + 2r(cos(ξh) − 1) the amplification factor.
Remark 8.4.2. (1) If r ≤ 1/2, then |g(ξh)| ≤ 1 for all |ξh| ≤ π. If this restriction is not satisfied, some Fourier components are amplified.
(2) If ξh = π, we have e^{iξjh} = (−1)^j, a sawtooth. This is the fastest oscillating mode on the grid. On the other hand, ξh > π will cause aliasing.
(3) The exact amplification factor is g_e(ξh) = e^{ωk} = e^{−ξ^2 k} = e^{−(ξh)^2 r}. Note:
g(ξh) = 1 + 2r(−(ξh)^2/2 + (ξh)^4/4! − ⋯) = g_e(ξh) + O((ξh)^4)
Thus, the analysis above is suitable for small ξh (long waves, well resolved on the grid) while not so good for large ξh (short waves).
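The amplification factor can be scanned numerically (a minimal sketch; the sample values of r are arbitrary):

```python
import numpy as np

# Amplification factor of method I: g(xi*h) = 1 + 2r*(cos(xi*h) - 1).
# Von Neumann stability requires |g| <= 1 for all |xi*h| <= pi.
xh = np.linspace(-np.pi, np.pi, 2001)

def g(r):
    return 1.0 + 2.0 * r * (np.cos(xh) - 1.0)

assert np.max(np.abs(g(0.5))) <= 1.0 + 1e-12   # r = 1/2: stable
assert np.max(np.abs(g(0.6))) > 1.0            # r > 1/2: sawtooth amplified
# the worst mode is the sawtooth xi*h = pi, where g = 1 - 4r
assert np.isclose(g(0.6)[0], 1.0 - 4 * 0.6)
```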


8.4.3 Discrete Fourier analysis


We will now show a method is stable if |g(ξh)| ≤ 1.

Fourier transform
f̂(ξh) = (h/2π) Σ_{j=−∞}^{∞} f_j e^{−iξjh}

Inverse transform
f_j = ∫_{−π/h}^{π/h} f̂(ξh) e^{iξjh} dξ

Parseval's equality
∫_{−π/h}^{π/h} |f̂(ξh)|^2 dξ = (h/2π) Σ_{j=−∞}^{∞} |f_j|^2

Claim 8.4.1. The method is stable if |g(ξh)| ≤ 1.

Proof. Consider the numerical solution u_j^n on the grid, j = 0, ±1, ±2, ⋯. Then
û^n(ξh) = (h/2π) Σ_j u_j^n e^{−iξjh}
and
û^{n+1}(ξh) = (h/2π) Σ_j u_j^{n+1} e^{−iξjh}
= (h/2π) Σ_j (u_j^n + r(u_{j−1}^n − 2u_j^n + u_{j+1}^n)) e^{−iξjh}
= g(ξh) (h/2π) Σ_j u_j^n e^{−iξjh}
= g(ξh) û^n(ξh)

Thus, we get
û^{n+1}(ξh) = g(ξh) û^n(ξh)  ⇒  û^n(ξh) = g(ξh)^n û^0(ξh)
We see
u_j^n = ∫_{−π/h}^{π/h} û^n(ξh) e^{iξjh} dξ = ∫_{−π/h}^{π/h} û^0(ξh) g(ξh)^n e^{iξjh} dξ

If |g(ξh)| ≤ 1, then we have

(h/2π) Σ_j |u_j^n|^2 = ∫_{−π/h}^{π/h} |û^n(ξh)|^2 dξ = ∫_{−π/h}^{π/h} |û^0(ξh)|^2 |g(ξh)|^{2n} dξ
≤ ∫_{−π/h}^{π/h} |û^0(ξh)|^2 dξ = (h/2π) Σ_j |u_j^0|^2

Recall that ||u^n||_2 ≤ ||u^0||_2 implies l^2 stability. Hence, the proof.

8.5 Energy estimates


In this section, we introduce another method of finding the l2 stability condition. First
consider the following problem:

ut = uxx u(x, 0) = u0 (x) u(0, t) = u(1, t) = 0

Definition 8.5.1. We define the energy as:

||u(·, t)||_2^2 = ∫_0^1 u(x, t)^2 dx

Claim 8.5.1. ||u(·, t)||_2 ≤ ||u_0||_2. Thus, we have l^2 stability.

Proof.
(d/dt)||u(·, t)||_2^2 = (d/dt) ∫_0^1 u(x, t)^2 dx = ∫_0^1 2u u_t dx = ∫_0^1 2u u_xx dx
= 2[u u_x |_0^1 − ∫_0^1 (u_x)^2 dx] ≤ 0
since the boundary term u u_x |_0^1 vanishes. The energy is decaying in time; therefore, ||u(·, t)||_2 ≤ ||u_0||_2.

Integration by parts
(f, g) = ∫_0^1 f(x) g(x) dx : inner product

If f(0) = f(1) = 0, then we have (f, g′) = −(f′, g).


Proof.
(fg)′ = f′g + fg′  ⇒  ∫_0^1 (fg)′ dx = ∫_0^1 f′g dx + ∫_0^1 fg′ dx
where
0 = fg|_0^1 = (f′, g) + (f, g′)  ⇒  (f, g′) = −(f′, g)

Summation by parts
We define:
(f, g)_h = h Σ_j f_j g_j : discrete inner product

Claim 8.5.2. If f_j = 0 for j ≤ 0 and j ≥ N, then

(f, D_− g)_h = −(D_+ f, g)_h

Proof.
D_+(fg)_j = (f_{j+1} g_{j+1} − f_j g_j)/h = f_{j+1} D_+ g_j + (D_+ f_j) g_j
Thus, the sum telescopes:
Σ_j D_+(fg)_j = (f_1 g_1 − f_0 g_0)/h + (f_2 g_2 − f_1 g_1)/h + ⋯ + (f_N g_N − f_{N−1} g_{N−1})/h = (−f_0 g_0 + f_N g_N)/h = 0

Therefore, we have
0 = h Σ_j f_{j+1}(D_+ g_j) + h Σ_j (D_+ f_j) g_j = h Σ_j f_j (D_− g_j) + h Σ_j (D_+ f_j) g_j = (f, D_− g)_h + (D_+ f, g)_h
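The summation-by-parts identity is easy to check numerically (a minimal sketch with random data; the grid size and seed are arbitrary):

```python
import numpy as np

# Check (f, D_minus g)_h = -(D_plus f, g)_h when f vanishes at the ends.
rng = np.random.default_rng(0)
N, h = 50, 0.02
f = np.zeros(N + 1)
g = rng.standard_normal(N + 1)
f[1:N] = rng.standard_normal(N - 1)   # f_0 = f_N = 0

Dm_g = (g[1:] - g[:-1]) / h           # (D_minus g)_j for j = 1..N
Dp_f = (f[1:] - f[:-1]) / h           # (D_plus f)_j  for j = 0..N-1

lhs = h * np.sum(f[1:] * Dm_g)        # (f, D_minus g)_h
rhs = -h * np.sum(Dp_f * g[:-1])      # -(D_plus f, g)_h
assert np.isclose(lhs, rhs)
```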

8.5.1 Energy method

Discrete energy estimates
We will now introduce the discrete energy estimates. First we rewrite the scheme:
u_j^{n+1} = u_j^n + (k/h^2)(u_{j−1}^n − 2u_j^n + u_{j+1}^n) = u_j^n + kD_+D_− u_j^n
The discrete energy is given by:
||u^n||_{2,h}^2 = (u^n, u^n)_h = h Σ_j (u_j^n)^2   (extend u_j^n = 0 for j ≤ 0, j ≥ N)


Claim 8.5.3. If r ≤ 1/2, then ||u^n||_{2,h} ≤ ||u^0||_{2,h}; hence, the method is stable.

Proof.
||u^{n+1}||_{2,h}^2 − ||u^n||_{2,h}^2 = (u^{n+1}, u^{n+1})_h − (u^n, u^n)_h = (u^{n+1} + u^n, u^{n+1} − u^n)_h
= (2u^n + kD_+D_− u^n, kD_+D_− u^n)_h = 2k(u^n, D_+D_− u^n)_h + k^2(D_+D_− u^n, D_+D_− u^n)_h
where, by summation by parts,
(I) : 2k(u^n, D_+D_− u^n)_h = −2k(D_− u^n, D_− u^n)_h = −2k||D_− u^n||_{2,h}^2
(II) : k^2(D_+D_− u^n, D_+D_− u^n)_h = k^2 ||D_+D_− u^n||_{2,h}^2

We need to show k^2 ||D_+D_− u^n||_{2,h}^2 ≤ 2k||D_− u^n||_{2,h}^2. Define S^+ f_j = f_{j+1}, the shift operator. Then
D_+ = (S^+ − I)/h

Using this, we can bound (II):

k^2 ||D_+D_− u^n||_{2,h}^2 = k^2 ||(S^+ D_− u^n − D_− u^n)/h||_{2,h}^2
≤ (k^2/h^2)(||S^+ D_− u^n|| + ||D_− u^n||)^2
= (4k^2/h^2)||D_− u^n||_{2,h}^2

Note ||S^+ D_− u^n|| = ||D_− u^n||, since ||S^+ v|| = ||v||. If r ≤ 1/2, we have

(I) + (II) ≤ (−2k + 4k^2/h^2)||D_− u^n||_{2,h}^2 = 4k(r − 1/2)||D_− u^n||_{2,h}^2 ≤ 0

Thus, ||u^{n+1}||_2 ≤ ||u^n||_2: the energy is decaying in time.

8.6 Stability analysis of the Crank-Nicolson method

We now analyze the stability of method (II) using the same three approaches. Recall the problem:
ut = uxx,  0 < x < 1,  u(0, t) = u(1, t) = 0


8.6.1 Method of lines


In semi-discrete form, we have
u_i′(t) = D_+D_− u_i(t)
Recall the method of lines:
u′(t) = Au,  A = (1/h^2) tridiag(1, −2, 1)
The eigenvalues of A are
λ_p = (2/h^2)(cos(pπh) − 1),  −4/h^2 < λ_p < 0,  p = 1, ⋯, m,  h = 1/(m+1)
For Crank-Nicolson, we have the trapezoid time discretization:
(I − (k/2)A)u^{n+1} = (I + (k/2)A)u^n
Thus
u^{n+1} = (I − (k/2)A)^{−1}(I + (k/2)A)u^n
||u^{n+1}||_2 ≤ ||(I − (k/2)A)^{−1}||_2 ||I + (k/2)A||_2 ||u^n||_2
The eigenvalues of I − (k/2)A lie in (1, 1 + 2r), i.e. ||(I − (k/2)A)^{−1}||_2 ≤ 1 for all r, and the eigenvalues of I + (k/2)A lie in (1 − 2r, 1), so ||I + (k/2)A||_2 ≤ 1 if r ≤ 1. Therefore, we have l^2 stability provided r ≤ 1.
This restriction is not sharp, as the next two analyses show.

8.6.2 Energy method


The scheme can be rewritten as
u^{n+1} = u^n + (k/2)D_+D_−(u^{n+1} + u^n)
Then
(u^{n+1}, u^{n+1})_h − (u^n, u^n)_h = (u^{n+1} + u^n, u^{n+1} − u^n)_h
= (u^{n+1} + u^n, (k/2)D_+D_−(u^{n+1} + u^n))_h
= −(k/2)(D_−(u^{n+1} + u^n), D_−(u^{n+1} + u^n))_h
= −(k/2)||D_−(u^{n+1} + u^n)||_{2,h}^2 ≤ 0
Therefore, ||u^{n+1}||_2 ≤ ||u^n||_2 for all r. Such a method is said to be unconditionally stable.


8.6.3 Von Neumann analysis


(I − (k/2)D_+D_−)u_j^{n+1} = (I + (k/2)D_+D_−)u_j^n (8.6.1)
Recall the Fourier ansatz: u_j^n = g(ξh)^n e^{iξjh} and u_j^{n+1} = g(ξh)u_j^n, with
kD_+D_− u_j^n = 2r(cos(ξh) − 1)u_j^n = −4r sin^2(ξh/2) u_j^n
Therefore, (8.6.1) can be rewritten as
(1 + 2r sin^2(ξh/2)) g(ξh) = 1 − 2r sin^2(ξh/2)
Thus,
|g(ξh)| = |(1 − 2r sin^2(ξh/2)) / (1 + 2r sin^2(ξh/2))| ≤ 1 for all r > 0

By Von Neumann analysis, ||u^n||_2 ≤ ||u^0||_2 for all r: unconditionally stable.
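A quick numerical scan of this amplification factor (the sample values of r are arbitrary) confirms unconditional stability:

```python
import numpy as np

# Crank-Nicolson amplification factor:
# g = (1 - 2r sin^2(xh/2)) / (1 + 2r sin^2(xh/2)), with |g| <= 1 for all r > 0.
xh = np.linspace(-np.pi, np.pi, 1001)
for r in [0.1, 1.0, 10.0, 1000.0]:
    s = 2.0 * r * np.sin(xh / 2) ** 2
    g = (1.0 - s) / (1.0 + s)
    assert np.max(np.abs(g)) <= 1.0
```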

8.7 2D heat equation


LEC 18 skipped

Chapter 9

Advection Equations and Hyperbolic


Systems

9.1 Introduction
9.1.1 Advection equation
We consider the following hyperbolic equation:

ut + aux = 0 −∞<x<∞

A path (x(t), t) defines a curve in the (x, t) plane.

Claim 9.1.1. The solution u(x, t) is constant along the lines dx/dt = a (straight lines).

Proof.
(d/dt) u(x(t), t) = ∂u/∂t + (∂u/∂x)(dx/dt) = u_t + a u_x = 0   (∵ u is a solution)
⇒ u(x, t) = const along dx/dt = a (equivalently dt/dx = 1/a)

As illustrated in Figure 9.1, the line x − at = const is called a characteristic. The solution is constant along characteristics.

Corollary 9.1.1.
u(x, t) = u(x − at, 0) = f(x − at) = f(x_0)

Definition 9.1.2. The solution is a traveling wave with speed a. The domain of dependence of u(x, t) is the single point x_0 = x − at. Such an equation is called a linear advection equation.



Figure 9.1: Characteristic line

9.1.2 Linear systems


u_t + Au_x = 0,  u(x, 0) = f(x),  u = (u_1, u_2, ⋯, u_N)^T,  f = (f_1, f_2, ⋯, f_N)^T
We say the system is hyperbolic if A has real eigenvalues and a complete set of eigenvectors. Let R denote the matrix of right eigenvectors; note this is an invertible matrix. Let Λ = diag(λ_1, ⋯, λ_N) denote the matrix of real eigenvalues.
A = RΛR^{−1} : spectral factorization,  Λ = R^{−1}AR : A diagonalizable
We can transform the original system of equations:
R^{−1}u_t + R^{−1}AR R^{−1}u_x = 0;  set v = R^{−1}u  ⇒  v_t + Λv_x = 0
For each component, we have
(v_k)_t + λ_k (v_k)_x = 0 : linear advection
The eigenvalues of A are called the wave speeds.
Example 9.1.3.
(u_1, u_2)^T_t + [[0, 1], [1, 0]] (u_1, u_2)^T_x = 0,  u_1(x, 0) = f_1(x),  u_2(x, 0) = f_2(x)
We diagonalize the matrix A:
Λ = diag(−1, 1),  R = [[1, 1], [−1, 1]],  R^{−1} = (1/2)[[1, −1], [1, 1]]
Set v = R^{−1}u:
v(x, 0) = R^{−1}u(x, 0) = (1/2)(f_1(x) − f_2(x), f_1(x) + f_2(x))^T
Solving the decoupled system v_t + Λv_x = 0, we get
v_1(x, t) = v_1(x + t, 0) = (1/2)(f_1(x + t) − f_2(x + t)),  v_2(x, t) = v_2(x − t, 0) = (1/2)(f_1(x − t) + f_2(x − t))
Then u(x, t) = Rv(x, t) gives
u_1(x, t) = (1/2)[f_1(x + t) − f_2(x + t) + f_1(x − t) + f_2(x − t)]
u_2(x, t) = (1/2)[−f_1(x + t) + f_2(x + t) + f_1(x − t) + f_2(x − t)]
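A quick sketch verifying the diagonalization and the resulting solution formula (the choice f1 = sin, f2 = cos and the finite-difference check of the PDE are illustrative):

```python
import numpy as np

# Verify the diagonalization in Example 9.1.3: A = R Lambda R^{-1}.
A = np.array([[0.0, 1.0],
              [1.0, 0.0]])
R = np.array([[1.0, 1.0],
              [-1.0, 1.0]])
Lam = np.diag([-1.0, 1.0])
Rinv = 0.5 * np.array([[1.0, -1.0],
                       [1.0, 1.0]])
assert np.allclose(R @ Rinv, np.eye(2))
assert np.allclose(R @ Lam @ Rinv, A)

# Spot-check that the closed-form solution satisfies u_t + A u_x = 0,
# using centered differences in place of the exact derivatives.
f1, f2 = np.sin, np.cos

def u_vec(x, t):
    return np.array([
        0.5 * (f1(x + t) - f2(x + t) + f1(x - t) + f2(x - t)),
        0.5 * (-f1(x + t) + f2(x + t) + f1(x - t) + f2(x - t))])

x, t, eps = 0.3, 0.7, 1e-6
ut = (u_vec(x, t + eps) - u_vec(x, t - eps)) / (2 * eps)
ux = (u_vec(x + eps, t) - u_vec(x - eps, t)) / (2 * eps)
assert np.allclose(ut + A @ ux, 0.0, atol=1e-8)
```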


It is worth noting that in the previous example the solution at (x, t) depends on the two points x + t and x − t. In general, for a hyperbolic system with wave speeds λ_1 < λ_2 < ⋯ < λ_N, the domain of dependence consists of the N points x − λ_p t, as illustrated in Figure 9.2.
Figure 9.2: Domain of dependence (more general case)

Wave equation
The wave equation is given by:

u_tt = a^2 u_xx,  u(x, 0) = f(x),  u_t(x, 0) = g(x)

We introduce the change of variables u_1 = u_x, u_2 = −u_t:
(u_1)_t = (u_x)_t = (u_t)_x = (−u_2)_x
(u_2)_t = −(u_t)_t = −a^2 u_xx = −a^2 (u_1)_x

In matrix form:
(u_1, u_2)^T_t + [[0, 1], [a^2, 0]] (u_1, u_2)^T_x = 0 : hyperbolic system

Spectral factorization yields:
Λ = diag(−a, a),  R = [[1, 1], [−a, a]],  R^{−1} = (1/2a)[[a, −1], [a, 1]]

The initial data are u_1(x, 0) = u_x(x, 0) = f′(x) =: f_1(x) and u_2(x, 0) = −u_t(x, 0) = −g(x) =: f_2(x). Then
v = R^{−1}u = ((1/2)u_1 − (1/2a)u_2, (1/2)u_1 + (1/2a)u_2)^T
v(x, 0) = R^{−1}u(x, 0) = ((1/2)f_1 − (1/2a)f_2, (1/2)f_1 + (1/2a)f_2)^T = ((1/2)f′ + (1/2a)g, (1/2)f′ − (1/2a)g)^T
Since v_1 travels with speed −a and v_2 with speed a, we have
v_1(x, t) = (1/2)f′(x + at) + (1/2a)g(x + at),  v_2(x, t) = (1/2)f′(x − at) − (1/2a)g(x − at)
And
u_x = u_1 = v_1 + v_2 = (1/2)[f′(x + at) + f′(x − at)] + (1/2a)[g(x + at) − g(x − at)]
Integrating in x:
u(x, t) = (1/2)[f(x + at) + f(x − at)] + (1/2a) ∫_{x−at}^{x+at} g(s) ds : d'Alembert's formula

9.1.3 Nonlinear systems of conservation laws

u_t + f(u)_x = 0,  u = (u_1, u_2, ⋯, u_N)^T,  f(u) = (f_1(u), ⋯, f_N(u))^T
u_t + A(u)u_x = 0 : quasilinear form, where A(u) = ∂f(u)/∂u
The system is hyperbolic if A(u) has real eigenvalues and a complete set of eigenvectors.

Example 9.1.4. [Gas dynamics] Let ρ be the density, v the velocity, and p the pressure. The equations are given by:
ρ_t + (ρv)_x = 0
(ρv)_t + (ρv^2 + p(ρ))_x = 0,  p = p(ρ) : equation of state
For smooth solutions, expanding the derivatives gives:
ρ_t + ρv_x + vρ_x = 0,  v_t + vv_x + (p′(ρ)/ρ)ρ_x = 0
In matrix form:
(ρ, v)^T_t + [[v, ρ], [p′(ρ)/ρ, v]] (ρ, v)^T_x = 0
The eigenvalues are λ = v ± √(p′(ρ)), with p′(ρ) = c^2, where c is the speed of sound.

9.2 Difference schemes


In this section, we will introduce a few numerical difference schemes to solve linear advection
equation.
ut + aux = 0 u(x, 0) = f (x) a > 0


Figure 9.3: Stencil for centered difference, downwind, and upwind schemes


9.2.1 Centered difference scheme


The centered difference scheme (9.3a) is given by

(u_j^{n+1} − u_j^n)/k + a(u_{j+1}^n − u_{j−1}^n)/(2h) = 0
In a different form:
u_j^{n+1} = u_j^n − (ν/2)(u_{j+1}^n − u_{j−1}^n)

Here we define
ν = ak/h : the CFL number (nondimensional)

Accuracy
We can check that the method is first order accurate in time and second order in space, i.e.
τ n = O(k + h2 ).

Stability
Assume the bounded domain 0 ≤ x ≤ 1 with periodic boundary conditions u(0, t) = u(1, t). We can use the method of lines to check the stability of this scheme.

u(t) = (u_0(t), u_1(t), ⋯, u_m(t))^T
u_j′(t) = −(a/2h)(u_{j+1}(t) − u_{j−1}(t)),  j = 1, 2, ⋯, m − 1
With the periodic boundary conditions:
u_0′(t) = −(a/2h)(u_1(t) − u_m(t)),  u_m′(t) = −(a/2h)(u_0(t) − u_{m−1}(t))
In matrix form, u′(t) = −(a/2h)Cu, where C has 1 on the superdiagonal, −1 on the subdiagonal, and the matching periodic corner entries.
The matrix is skew-symmetric; hence, all eigenvalues are purely imaginary. For absolute stability of the time discretization, we need the stability region to include part of the imaginary axis. For forward Euler, as used in this scheme, the stability region does not include any part of the imaginary axis except the origin. Thus, this method is unconditionally unstable for fixed ν.


9.2.2 Downwind scheme


The downwind scheme (9.3b) is given by

(u_j^{n+1} − u_j^n)/k + a(u_{j+1}^n − u_j^n)/h = 0
Or
u_j^{n+1} = u_j^n − ν(u_{j+1}^n − u_j^n) = (1 + ν)u_j^n − νu_{j+1}^n

Claim 9.2.1. The method is unconditionally unstable in the ∞-norm.

Proof. Consider u_j^n = (−1)^j, so ||u^n||_∞ = 1. Yet:

u_j^{n+1} = (1 + ν)(−1)^j − ν(−1)^{j+1} = (1 + 2ν)(−1)^j

We have
|u_j^{n+1}| = 1 + 2ν > 1   (∵ ν > 0)


Figure 9.4: Domain of dependence

Here we see that the domain of dependence of u(x, t) is the single point α (see Figure 9.4), while the numerical domain of dependence is {x_j, x_{j+1}, ⋯, x_{j+n}}. In other words, the numerical domain of dependence does not include the one point that matters, α.

9.2.3 Upwind scheme


The upwind scheme (9.3c) is given by

(u_j^{n+1} − u_j^n)/k + a(u_j^n − u_{j−1}^n)/h = 0
Or
u_j^{n+1} = u_j^n − ν(u_j^n − u_{j−1}^n) = (1 − ν)u_j^n + νu_{j−1}^n


Claim 9.2.2. The scheme is l^∞ stable if and only if 0 ≤ ν ≤ 1.

Proof. (⇐)
u_j^{n+1} = (1 − ν)u_j^n + νu_{j−1}^n
Taking absolute values on both sides (note 1 − ν ≥ 0):

|u_j^{n+1}| ≤ (1 − ν)|u_j^n| + ν|u_{j−1}^n| ≤ (1 − ν)||u^n||_∞ + ν||u^n||_∞ = ||u^n||_∞

This holds for all j, thus for max_j as well:

||u^{n+1}||_∞ ≤ ||u^n||_∞

(⇒) Let ν > 1; we will show the scheme is no longer stable. Take u_j^n = (−1)^j, so ||u^n||_∞ = 1. Then

u_j^{n+1} = (1 − ν)(−1)^j + ν(−1)^{j−1} = (1 − 2ν)(−1)^j,  |u_j^{n+1}| = 2ν − 1 > 1

(a) ν < 1 (b) ν > 1

Figure 9.5: Numerical domain of dependence

Figures 9.5a and 9.5b show that the numerical domain of dependence contains the analytic domain of dependence if and only if ν ≤ 1, which suggests the following CFL condition for the stability of a numerical scheme.

CFL condition
The numerical domain of dependence must contain the analytic domain of dependence. This is a necessary, yet not sufficient, condition for stability. Note that the centered difference scheme does satisfy the CFL condition, but it is unconditionally unstable.


Accuracy of upwind scheme

τ_j^n = [u(x, t + k) − u(x, t)]/k + a[u(x, t) − u(x − h, t)]/h
= u_t + (k/2)u_tt + O(k^2) + a(u_x − (h/2)u_xx + O(h^2))
= (k/2)u_tt − (ah/2)u_xx + O(k^2 + h^2)   (u_t + au_x = 0)
= ((k/2)a^2 − (ah/2))u_xx + O(k^2 + h^2)   (u_tt = a^2 u_xx)
= (ah/2)(ν − 1)u_xx + h.o.t.
= O(k + h)

Convergence

Here we assume 0 ≤ ν ≤ 1.
u_j^{n+1} = (1 − ν)u_j^n + νu_{j−1}^n (9.2.1)
u(x_j, t + k) = (1 − ν)u(x_j, t) + νu(x_j − h, t) + kτ_j^n (9.2.2)

Subtracting (9.2.2) from (9.2.1) yields:

e_j^{n+1} = (1 − ν)e_j^n + νe_{j−1}^n − kτ_j^n

Taking absolute values on both sides gives:

|e_j^{n+1}| ≤ (1 − ν)|e_j^n| + ν|e_{j−1}^n| + k|τ_j^n| ≤ (1 − ν)||e^n||_∞ + ν||e^n||_∞ + k||τ^n||_∞ = ||e^n||_∞ + k||τ||_∞

Thus:
||e^n||_∞ ≤ ||e^{n−1}||_∞ + k||τ||_∞ ≤ ⋯ ≤ ||e^0||_∞ + nk||τ||_∞ = nk||τ||_∞   (e^0 = 0)

We have ||e^n||_∞ ≤ t_n O(h); the method converges. Note that if ν = 1, then

u_j^{n+1} = u_{j−1}^n  ⇒  u_j^n = u_{j−n}^0 = f(x_{j−n}) = f(x_j − nh) = f(x_j − at_n)

The scheme is exact! If 0 < ν < 1, the scheme linearly interpolates between u_{j−1}^n and u_j^n.
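A minimal sketch of the upwind scheme on a periodic grid (the grid size, initial data, and the tiny sawtooth seed are illustrative choices), confirming exactness at ν = 1 and the stability threshold:

```python
import numpy as np

def upwind(u0, nu, nsteps):
    """Upwind scheme u_j^{n+1} = (1 - nu) u_j^n + nu u_{j-1}^n, periodic grid."""
    u = u0.copy()
    for _ in range(nsteps):
        u = (1.0 - nu) * u + nu * np.roll(u, 1)
    return u

m = 100
x = np.arange(m) / m
# smooth data plus a tiny sawtooth seed to excite the worst grid mode
u0 = np.exp(np.sin(2 * np.pi * x)) + 1e-8 * (-1.0) ** np.arange(m)

# nu = 1 (k = h/a): exact -- the scheme is a pure shift by one cell per step.
assert np.allclose(upwind(u0, 1.0, 30), np.roll(u0, 30))
# 0 < nu < 1: stable in the max norm.
assert np.max(np.abs(upwind(u0, 0.8, 500))) <= np.max(np.abs(u0)) + 1e-12
# nu > 1: the sawtooth mode is amplified by |1 - 2*nu| each step.
assert np.max(np.abs(upwind(u0, 1.5, 200))) > 1e6
```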



Figure 9.6: Stencil for LxF scheme

9.2.4 Lax Friedrichs scheme (LxF)


One way to stabilize the centered difference scheme is to replace u_j^n in the time derivative by the average of its two neighbors; this is known as the LxF scheme:
u_j^{n+1} = (u_{j−1}^n + u_{j+1}^n)/2 − (ν/2)(u_{j+1}^n − u_{j−1}^n)
= u_j^n − (ν/2)(u_{j+1}^n − u_{j−1}^n) + (1/2)(u_{j−1}^n − 2u_j^n + u_{j+1}^n)
We can write it as
(u_j^{n+1} − u_j^n)/k + (a/2h)(u_{j+1}^n − u_{j−1}^n) = ε (u_{j−1}^n − 2u_j^n + u_{j+1}^n)/h^2,  ε = h^2/(2k)

This looks like an approximation to

ut + aux = εuxx (9.2.3)

(9.2.3) is known as the modified equation (see a later section).

Method of lines (assume periodic BCs)

u′(t) = Bu,  B = −(a/2h)C + (ε/h^2)T,  ε = h^2/(2k)
where C is the periodic centered-difference matrix from before (1 on the superdiagonal, −1 on the subdiagonal, with the matching corner entries) and T is the periodic tridiag(1, −2, 1) matrix. Here the eigenvectors are given by (r_p)_j = e^{i2πpjh}, and the eigenvalues are
λ_p = −(a/h) i sin(2πph) + (2ε/h^2)(cos(2πph) − 1)
For stable forward Euler integration, we need kλ_p ∈ stability region:

kλ_p = cos(2πph) − 1 − iν sin(2πph) = x + iy

Note
x + 1 = cos(2πph),  y = −ν sin(2πph)  ⇒  (x + 1)^2 + (y/ν)^2 = 1
so the points kλ_p lie on an ellipse centered at (−1, 0) with semi-axes 1 and ν. For stability, we require the ellipse to lie inside the stability region of forward Euler. As shown in Figure 9.7, we need |ν| ≤ 1.

(a) |ν| ≤ 1 (b) |ν| > 1

Figure 9.7: Stability region of forward Euler in cyan, the ellipse enclosed by the red curve is
the region of kλp of LxF.

9.3 Von Neumann analysis


The system
ut + aux = 0 u(x, 0) = f (x)

Here we reintroduce Von Neumann analysis. We look for solutions of the form u(x, t) =
eωt+iξx . Plug it into the equation:

ω + aiξ = 0 : dispersion relation

Our solution is
u(x, t) = eiξ(x−at) : no growth/decay in amplitude

Solution formula u(x, 0) = f (x):


Z ∞
1
û(ξ, t) = u(x, t)e−iξx dx
2π −∞


$$\hat{u}_t(\xi, t) = \frac{1}{2\pi}\int_{-\infty}^{\infty} u_t(x, t)e^{-i\xi x}\,dx = \frac{-1}{2\pi}\int_{-\infty}^{\infty} a u_x(x, t)e^{-i\xi x}\,dx = \frac{-a}{2\pi}\left(\Big[u e^{-i\xi x}\Big]_{-\infty}^{\infty} + i\xi\int_{-\infty}^{\infty} u(x, t)e^{-i\xi x}\,dx\right) = -i\xi a\,\hat{u}(\xi, t)$$
Thus, we have
$$\hat{u}_t(\xi, t) = -i\xi a\,\hat{u}(\xi, t), \qquad \hat{u}(\xi, 0) = \hat{f}(\xi) \;\Rightarrow\; \hat{u}(\xi, t) = \hat{f}(\xi)e^{-i\xi at}$$
Thus
$$u(x, t) = \int_{-\infty}^{\infty}\hat{u}(\xi, t)e^{i\xi x}\,d\xi = f(x - at)$$
$$||u(\cdot, t)||_2 = ||u(\cdot, 0)||_2 \quad \text{: stability}$$
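The Fourier solution formula can be verified on a periodic grid with the FFT. The sketch below (grid size, speed, and the smooth initial data are arbitrary choices) multiplies the discrete spectrum by $e^{-i\xi at}$ and checks that the result is the shifted initial data, with the 2-norm preserved:

```python
import numpy as np

# Spectral solve of u_t + a u_x = 0 on [0, 2*pi) with periodic BCs:
# u_hat(xi, t) = f_hat(xi) * exp(-i*xi*a*t), then invert the transform.
N, a, t = 256, 1.5, 0.4
x = np.linspace(0, 2 * np.pi, N, endpoint=False)
f = lambda x: np.exp(np.sin(x))        # smooth periodic initial data
xi = np.fft.fftfreq(N, d=1.0 / N)      # integer wavenumbers on [0, 2*pi)
u_hat = np.fft.fft(f(x)) * np.exp(-1j * xi * a * t)
u = np.real(np.fft.ifft(u_hat))

assert np.allclose(u, f(x - a * t), atol=1e-12)             # pure translation
assert np.isclose(np.linalg.norm(u), np.linalg.norm(f(x)))  # 2-norm preserved
```

Since the exact solution is a translation of $f$, the amplitudes $|\hat{u}(\xi, t)| = |\hat{f}(\xi)|$ never change; this is exactly the "no growth/decay" statement above.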

9.3.1 Centered difference scheme


$$u_j^{n+1} = u_j^n - \frac{\nu}{2}(u_{j+1}^n - u_{j-1}^n)$$
$$g(\xi h) = 1 - \frac{\nu}{2}(e^{i\xi h} - e^{-i\xi h}) = 1 - i\nu\sin(\xi h)$$
$$|g|^2 = 1 + \nu^2\sin^2(\xi h) \geq 1 \quad \text{: unconditionally unstable}$$
(a) Centered difference scheme. \qquad (b) Downwind scheme.

Figure 9.8: The red curve is the region enclosed by $g(\xi h)$.

9.3.2 Downwind scheme


$$u_j^{n+1} = (1 + \nu)u_j^n - \nu u_{j+1}^n$$
$$g(\xi h) = 1 + \nu - \nu e^{i\xi h} \quad \text{: circle of radius } \nu \text{, centered at } (1 + \nu, 0)$$
The scheme is stable only for $\xi h = 0$ (constant solution); otherwise, it is unstable, as shown in Figure 9.8b. $g(\xi h = \pi) = 1 + 2\nu$ corresponds to the largest, fastest-growing mode.

95
CHAPTER 9. ADVECTION EQUATIONS AND HYPERBOLIC SYSTEMS

9.3.3 Upwind scheme


$$u_j^{n+1} = (1 - \nu)u_j^n + \nu u_{j-1}^n, \qquad g(\xi h) = 1 - \nu + \nu e^{-i\xi h}$$
$|g(\xi h)| \leq 1$ if and only if $0 \leq \nu \leq 1$, as shown in Figure 9.9.
(a) $0 < \nu < \frac{1}{2}$ \qquad (b) $\frac{1}{2} \leq \nu \leq 1$ \qquad (c) $\nu > 1$

Figure 9.9: The red curve is the region enclosed by $g(\xi h)$.
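The three amplification factors derived above can be compared numerically. A small sketch (the value of $\nu$ is an arbitrary choice in $(0, 1]$, assuming $a > 0$):

```python
import numpy as np

nu = 0.8                                # CFL number, 0 < nu <= 1
th = np.linspace(-np.pi, np.pi, 721)    # xi*h

g_centered = 1 - 1j * nu * np.sin(th)
g_downwind = 1 + nu - nu * np.exp(1j * th)
g_upwind = 1 - nu + nu * np.exp(-1j * th)

assert np.abs(g_centered).max() > 1          # unconditionally unstable
assert np.abs(g_downwind).max() > 1          # unstable; worst mode at xi*h = pi
assert np.abs(g_upwind).max() <= 1 + 1e-12   # stable for 0 <= nu <= 1
```

Only the upwind curve stays inside the unit disk, matching Figures 9.8 and 9.9.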

9.4 Lax Wendroff scheme (LW)


So far, we have seen two unstable schemes (centered difference, downwind) and two stable
schemes with O(h) order of accuracy (upwind, LxF). What can be done to achieve higher
order of accuracy?

9.4.1 Taylor series approach


Using the Taylor series approach:
$$u^{n+1} = u^n + k u_t + \frac{k^2}{2}u_{tt} + \cdots \qquad u_j^{n+1} = u_j^n + k(u_t)_j^n + \frac{k^2}{2}(u_{tt})_j^n + \cdots$$
Use the PDE to replace time derivatives by space derivatives:
$$u_t = -a u_x \;\longrightarrow\; -a\,\frac{u_{j+1}^n - u_{j-1}^n}{2h}$$
$$u_{tt} = a^2 u_{xx} \;\longrightarrow\; a^2\,\frac{u_{j-1}^n - 2u_j^n + u_{j+1}^n}{h^2}$$
Plugging these in, we get the Lax-Wendroff scheme. Note $\nu = \frac{ak}{h}$:
$$u_j^{n+1} = u_j^n \underbrace{-\,\frac{\nu}{2}(u_{j+1}^n - u_{j-1}^n)}_{\text{centered difference}} \underbrace{+\,\frac{\nu^2}{2}(u_{j-1}^n - 2u_j^n + u_{j+1}^n)}_{\text{viscosity}} \quad \text{: LW scheme}$$


This method stabilizes the centered difference scheme by adding 'viscosity' (diffusion):
$$\frac{u_j^{n+1} - u_j^n}{k} + a\,\frac{u_{j+1}^n - u_{j-1}^n}{2h} = \frac{\nu^2 h^2}{2k}\left(\frac{u_{j-1}^n - 2u_j^n + u_{j+1}^n}{h^2}\right)$$
Here we let
$$\varepsilon = \frac{\nu^2 h^2}{2k} = \frac{a^2 k^2}{h^2}\,\frac{h^2}{2k} = \frac{a^2 k}{2} > 0$$
Note that $\varepsilon$ must be strictly positive here due to the well-posedness of the diffusion equation.

9.4.2 Accuracy
The LTE is given by τ = O(k 2 + h2 ), second order of accuracy in both space and time. We
note that the scheme is exact for ν = −1, 0, 1 as shown in Figure 9.10.
(a) $\nu = 1$: $u_j^{n+1} = u_{j-1}^n$ \qquad (b) $\nu = -1$: $u_j^{n+1} = u_{j+1}^n$ \qquad (c) $\nu = 0$: $u_j^{n+1} = u_j^n$

Figure 9.10: LW scheme; the red line illustrates the interpolation.
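The claimed second order accuracy can be confirmed with a grid refinement study. Below is a minimal sketch of the LW scheme on a periodic grid (the grid sizes, $\nu$, and the sine initial data are arbitrary choices); halving $h$ at fixed $\nu$ should reduce the max-norm error by roughly a factor of 4:

```python
import numpy as np

# Lax-Wendroff on a periodic unit interval, compared against the exact
# translated solution; the error ratio between two grids estimates the order.
def lw_error(N, a=1.0, T=1.0, nu=0.8):
    h = 1.0 / N
    k = nu * h / a
    x = np.arange(N) * h
    u = np.sin(2 * np.pi * x)
    steps = int(round(T / k))
    for _ in range(steps):
        up, um = np.roll(u, -1), np.roll(u, 1)
        u = u - 0.5 * nu * (up - um) + 0.5 * nu**2 * (um - 2 * u + up)
    t = steps * k
    return np.max(np.abs(u - np.sin(2 * np.pi * (x - a * t))))

e1, e2 = lw_error(100), lw_error(200)
assert 3.0 < e1 / e2 < 5.0   # ratio near 4 indicates O(h^2)
```

With $\nu$ held fixed, $k \propto h$, so the observed ratio of about 4 reflects $\tau = O(k^2 + h^2)$.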

9.4.3 Stability
We may view LW as the forward Euler discretization of $U'(t) = AU(t)$, where
$$A = \underbrace{-\frac{a}{2h}\begin{pmatrix} 0 & 1 & & & -1 \\ -1 & 0 & 1 & & \\ & \ddots & \ddots & \ddots & \\ & & -1 & 0 & 1 \\ 1 & & & -1 & 0 \end{pmatrix}}_{\text{imaginary eigenvalues}} + \underbrace{\frac{a^2 k}{2h^2}\begin{pmatrix} -2 & 1 & & & 1 \\ 1 & -2 & 1 & & \\ & \ddots & \ddots & \ddots & \\ & & 1 & -2 & 1 \\ 1 & & & 1 & -2 \end{pmatrix}}_{\text{shift into left half plane}}$$

The eigenvalues are given by
$$\lambda_p = -\frac{ia}{h}\sin(2\pi ph) + \frac{a^2 k}{h^2}(\cos(2\pi ph) - 1)$$
$$k\lambda_p = -i\nu\sin(2\pi ph) + \nu^2(\cos(2\pi ph) - 1) = -\nu^2 + \nu^2\cos(2\pi ph) - i\nu\sin(2\pi ph)$$
For stability, $k\lambda_p$ must lie inside the absolute stability region of forward Euler. The values $k\lambda_p$ trace an ellipse centered at $(-\nu^2, 0)$ with semi-axes $\nu^2$ and $|\nu|$ in the real and imaginary directions, as shown in Figure 9.11. If $|\nu| \leq 1$, $k\lambda_p$ lies inside the stability region for all $p$.


Figure 9.11: If $|\nu| \leq 1$, $k\lambda_p$ lies inside the stability region for all $p$.

9.4.4 Fourier analysis


$$u_j^{n+1} = u_j^n - \frac{\nu}{2}(u_{j+1}^n - u_{j-1}^n) + \frac{\nu^2}{2}(u_{j-1}^n - 2u_j^n + u_{j+1}^n)$$
Substituting $u_j^n = g(\xi h)^n e^{i\xi jh}$ into the scheme, we get

$$g(\xi h) = 1 - i\nu\sin(\xi h) + \nu^2(\cos(\xi h) - 1)$$

For stability:
$$|g|^2 = \big(1 + \nu^2(\cos(\xi h) - 1)\big)^2 + \nu^2\sin^2(\xi h) = 1 + 2\nu^2(\cos(\xi h) - 1) + \nu^4(\cos(\xi h) - 1)^2 + \nu^2(1 - \cos^2(\xi h)) = 1 - 4\nu^2(1 - \nu^2)\sin^4\left(\frac{\xi h}{2}\right)$$
We see that $|g|^2 \leq 1$ for all $\xi h$ if and only if $|\nu| \leq 1$.
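The closed form for $|g|^2$ can be checked against a direct evaluation of the amplification factor (the sampled values of $\nu$ and $\xi h$ are arbitrary):

```python
import numpy as np

# Verify |g|^2 = 1 - 4*nu^2*(1 - nu^2)*sin^4(xi*h/2) for Lax-Wendroff,
# and that |g| <= 1 everywhere exactly when |nu| <= 1.
th = np.linspace(-np.pi, np.pi, 361)
for nu in (0.3, 0.7, 1.0, 1.3):
    g = 1 - 1j * nu * np.sin(th) + nu**2 * (np.cos(th) - 1)
    closed = 1 - 4 * nu**2 * (1 - nu**2) * np.sin(th / 2) ** 4
    assert np.allclose(np.abs(g) ** 2, closed)
    assert (np.abs(g).max() <= 1 + 1e-12) == (nu <= 1)
```

Note that at $\nu = 1$ the factor $|g| \equiv 1$: the scheme is exact and neither dissipates nor amplifies any mode.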

9.5 Phase error


Recall the equation:
$$u_t + a u_x = 0$$
We substitute $u(x, t) = e^{\omega t + i\xi x}$ to get the dispersion relation $\omega = -i\xi a$. Thus
$$u(x, t) = e^{i\xi(x - at)}, \qquad u(x, t + k) = e^{\omega k}u(x, t)$$
$$g_{ex} = e^{\omega k} = e^{-i\xi ak} \quad \text{: exact amplification factor}$$
For the numerical amplification factor, take the upwind scheme as an example:
$$g_{up} = 1 - \nu + \nu e^{-i\xi h} = 1 - \nu(1 - \cos(\xi h)) - i\nu\sin(\xi h)$$
We write
$$g_{up} = |g|e^{i\theta}, \qquad \theta \text{ : phase}$$


$$\theta = \tan^{-1}\frac{\Im g}{\Re g} = \tan^{-1}\frac{-\nu\sin(\xi h)}{1 - \nu(1 - \cos(\xi h))} = \tan^{-1}\left(\frac{-\nu\xi h + O(\xi h)^3}{1 + O(\xi h)^2}\right) = -\nu\xi h + O(\xi h)^3$$
Exact:
$$\frac{\omega k}{-i\xi k} = \frac{-i\xi ak}{-i\xi k} = a \quad \text{: phase speed}$$
Numerical:
$$\frac{i\theta}{-i\xi k} = -\frac{\theta}{\xi k} = a + O(\xi h)^2$$
We note that long waves propagate at the right speed, while short waves propagate at the wrong speed (or even in the wrong direction).
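A quick numerical illustration (the parameter values are arbitrary choices): the numerical phase speed $-\theta/(\xi k)$ of the upwind scheme is close to $a$ for long waves but visibly wrong for short waves:

```python
import numpy as np

# Numerical phase speed of the upwind scheme: c_num = -arg(g_up)/(xi*k).
a, nu, h = 1.0, 0.8, 0.01
k = nu * h / a

def upwind_phase_speed(xi):
    g = 1 - nu + nu * np.exp(-1j * xi * h)
    return -np.angle(g) / (xi * k)

assert abs(upwind_phase_speed(2 * np.pi) - a) < 1e-3        # long wave, xi*h ~ 0.06
assert abs(upwind_phase_speed(0.5 * np.pi / h) - a) > 1e-2  # short wave, xi*h = pi/2
```

The long-wave error shrinks like $O(\xi h)^2$, consistent with the expansion above.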

9.6 Characteristic tracing and interpolation


We have seen that the solution u(x, t) to linear advection equation is constant along each
characteristic. Over one time step, we have

u(xj , tn + k) = u(xj − ak, tn )

The exact value of the solution $u_j^{n+1}$ can be obtained by tracing the characteristic back in time to $t_n$ and using the fact that the solution remains constant along this line. In general, the characteristic will not cross $t_n$ at a grid point but rather between grid points. This calls for interpolation. For example, we might perform a linear interpolation of the data at time $t_n$, which yields:
$$p(x) = u_j^n + (x - x_j)\,\frac{u_j^n - u_{j-1}^n}{h}$$
Using the characteristic, we get
$$u_j^{n+1} = p(x_j - ak) = u_j^n - \frac{ak}{h}(u_j^n - u_{j-1}^n) = u_j^n - \nu(u_j^n - u_{j-1}^n)$$
This is the upwind scheme.
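The equivalence between characteristic tracing with linear interpolation and the upwind formula can be checked directly (grid size, $\nu$, and the initial data are arbitrary choices; `np.interp` with its `period` argument handles the periodic wrap-around):

```python
import numpy as np

# One step of characteristic tracing: u_j^{n+1} = p(x_j - a*k), where p is
# the piecewise linear interpolant of the data at time t_n. For 0 <= nu <= 1
# the foot of the characteristic lies in [x_{j-1}, x_j], and the result
# coincides with the upwind formula exactly.
N, a, nu = 64, 1.0, 0.7
h = 1.0 / N
k = nu * h / a
x = np.arange(N) * h
u = np.exp(np.cos(2 * np.pi * x))

traced = np.interp(x - a * k, x, u, period=1.0)
upwind = u - nu * (u - np.roll(u, 1))
assert np.allclose(traced, upwind)
```

The agreement is exact (to rounding), since linear interpolation at $x_j - ak$ reproduces $u_j^n - \nu(u_j^n - u_{j-1}^n)$ term for term.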

Problem 9.6.1. Show that quadratic interpolation gives the Lax-Wendroff scheme. Identify
special values of the CFL number ν for which the interpolation error vanishes (i.e. the
numerical solution becomes exact).


Problem 9.6.2. The 2nd order upwind scheme for ut + aux = 0 for a > 0 has the form

$$u_j^{n+1} = c_{-2}u_{j-2}^n + c_{-1}u_{j-1}^n + c_0 u_j^n$$

where the coefficients ck are in general polynomials of the CFL number ν (here polynomials
of degree 2). Identify the values of ν for which the scheme becomes exact, and use these
special values of ν to determine ck . Write the resulting scheme. This scheme is called the
Beam-Warming scheme.

9.7 The modified equation


The standard way to estimate error is by using LTE. Here we introduce a slightly different
approach that sheds light on the structure and behavior of the numerical solution.

$$\frac{u_j^{n+1} - u_j^n}{k} + a\,\frac{u_j^n - u_{j-1}^n}{h} = 0 \quad \text{: upwind scheme}$$
Here we assume $u(x, t)$ is a smooth function:
$$\tau = \frac{u(x, t + k) - u(x, t)}{k} + a\,\frac{u(x, t) - u(x - h, t)}{h} = u_t + a u_x + \frac{h}{2}a(\nu - 1)u_{xx} + O(k^2 + h^2)$$
We see that if this scheme is used to approximate $u_t + a u_x = 0$, then $u_j^n - u(x_j, t_n) = O(h)$, i.e. $\tau = O(h)$. On the other hand, if we use it to approximate
$$u_t + a u_x = \frac{h}{2}a(1 - \nu)u_{xx} \qquad (9.7.1)$$
then $\tau = O(h^2)$ and $u_j^n - u(x_j, t_n) = O(h^2)$. In other words, the upwind scheme solves $u_t + a u_x = 0$ only to first order accuracy, while it solves (9.7.1) to second order accuracy. We call (9.7.1) the modified equation; the term $\frac{h}{2}a(1 - \nu)u_{xx}$ is an artificial viscosity (diffusion).

Definition 9.7.1. The modified equation of a hyperbolic equation has the form
$$u_t + a u_x = \varepsilon u_{xx}$$
It is well-posed only if $\varepsilon > 0$.

In the upwind scheme, the CFL condition guarantees $\varepsilon = \frac{h}{2}a(1 - \nu) > 0$.


9.7.1 Modified equation for LxF


Let us consider the LxF scheme:
$$u_j^{n+1} = \frac{u_{j-1}^n + u_{j+1}^n}{2} - \frac{\nu}{2}(u_{j+1}^n - u_{j-1}^n)$$
We can rewrite it as
$$\frac{u_j^{n+1} - u_j^n}{k} + a\,\frac{u_{j+1}^n - u_{j-1}^n}{2h} = \frac{h^2}{2k}\,\frac{u_{j+1}^n - 2u_j^n + u_{j-1}^n}{h^2}$$
Using the LTE or Taylor expansion, we see the modified equation for LxF is:
$$u_t + a u_x = \frac{h}{2}a\left(\frac{1}{\nu} - \nu\right)u_{xx}$$
As with the upwind scheme, $\varepsilon_{LxF} = \frac{h}{2}a(\frac{1}{\nu} - \nu) > 0$ if and only if $\nu^2 < 1$, for arbitrary $a \neq 0$. However, it is worth noting that if $0 < \nu < 1$, then $\varepsilon_{up} < \varepsilon_{LxF}$: LxF is the more diffusive of the two schemes.
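A trivial numerical check of this comparison (the values of $a$, $h$, and the sampled $\nu$ are arbitrary choices):

```python
# Artificial viscosities of the two first order schemes, for a, h > 0:
#   upwind: eps_up  = (h/2) * a * (1 - nu)
#   LxF:    eps_lxf = (h/2) * a * (1/nu - nu)
# For 0 < nu < 1 both are positive and eps_up < eps_lxf.
a, h = 1.0, 0.1
for nu in (0.1, 0.5, 0.9):
    eps_up = 0.5 * h * a * (1 - nu)
    eps_lxf = 0.5 * h * a * (1 / nu - nu)
    assert 0 < eps_up < eps_lxf
```

Both viscosities vanish as $\nu \to 1$, consistent with both schemes becoming exact at $\nu = 1$.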
A scheme is dissipative of order $2s$ ($s > 0$ an integer) if $|g(\xi h)|^2 \approx 1 - O((\xi h)^{2s})$ for all $|\xi h| \leq \pi$.

Example 9.7.2. For the LW scheme:
$$g(\xi h) = 1 - 2\nu^2\sin^2\left(\frac{\xi h}{2}\right) - i\nu\sin(\xi h), \qquad |g|^2 = 1 - 4\nu^2(1 - \nu^2)\sin^4\left(\frac{\xi h}{2}\right) = 1 - 4\nu^2(1 - \nu^2)\,O((\xi h)^4)$$
The LW scheme is dissipative of order 4. We can also investigate its phase error: $g = |g|e^{i\theta}$. Here
$$\theta = \tan^{-1}\frac{\Im g}{\Re g} = \tan^{-1}\frac{-\nu\sin(\xi h)}{1 - 2\nu^2\sin^2(\frac{\xi h}{2})} = -\nu\xi h\left(1 - \frac{1}{6}(1 - \nu^2)(\xi h)^2 + O((\xi h)^3)\right)$$
$$\frac{i\theta}{-i\xi k} = a\left(1 - \frac{1}{6}(1 - \nu^2)(\xi h)^2 + O((\xi h)^3)\right)$$
When $|\nu| < 1$, the numerical phase speed is smaller than $a$: we have a lagging phase, as illustrated in Figure 9.12.

Figure 9.12: (a) Exact solution; (b) Numerical solution.

Remark 9.7.3. Unfortunately, all linear schemes that preserve monotonicity are at most
first order accurate.


9.8 Hyperbolic systems


A general hyperbolic system has the form
$$u_t + Au_x = 0, \qquad u = (u^1, \cdots, u^N)^T$$
We assume $A$ has real eigenvalues and a complete set of eigenvectors. It has the spectral factorization $A = R\Lambda R^{-1}$, where
$$\Lambda = \text{diag}(\lambda_1, \cdots, \lambda_N), \qquad R = (r_1, \cdots, r_N)$$
As before, we do a change of variables:
$$v = R^{-1}u \;\Rightarrow\; v_t + \Lambda v_x = 0, \qquad (v_p)_t + \lambda_p(v_p)_x = 0, \quad p = 1, \cdots, N$$

Example 9.8.1. Suppose we have a $4 \times 4$ system with $\lambda_1 < \lambda_2 < \lambda_3 < \lambda_4$. Then the domain of dependence of $u(x, t)$ is $\{\alpha_1, \alpha_2, \alpha_3, \alpha_4\}$, where the characteristics $x - \lambda_p t = \alpha_p$ trace back from $(x, t)$ to the points $\alpha_4 < \alpha_3 < \alpha_2 < \alpha_1$ on the $x$-axis, as shown in Figure 9.13. We need a method that works for both positive and negative eigenvalues.

Figure 9.13: Domain of dependence of a $4 \times 4$ system.

9.9 Numerical methods for hyperbolic systems


Most of the methods we have seen so far for linear advection equation can be generalized.
For example, the LxF scheme:
$$u_j^{n+1} = \frac{1}{2}(u_{j+1}^n + u_{j-1}^n) - \frac{k}{2h}A(u_{j+1}^n - u_{j-1}^n)$$
We can also generalize the CFL condition to hyperbolic systems:

Definition 9.9.1. The CFL numbers for a hyperbolic system are $\nu_p = \frac{\lambda_p k}{h}$. The CFL condition for a hyperbolic system requires:
$$\max_p |\nu_p| \leq 1$$


9.9.1 Lax-Wendroff
We examine the system $u_t = -Au_x$. Taking a time derivative of both sides gives:
$$u_{tt} = -Au_{xt} = -Au_{tx} = -A(-Au_x)_x = A^2u_{xx}$$
Taylor expansion yields
$$u_j^{n+1} = u_j^n + k(u_t)_j^n + \frac{k^2}{2}(u_{tt})_j^n + \cdots$$
Thus, the generalized LW scheme is:
$$u_j^{n+1} = u_j^n - \frac{kA}{2h}(u_{j+1}^n - u_{j-1}^n) + \frac{k^2A^2}{2h^2}(u_{j-1}^n - 2u_j^n + u_{j+1}^n)$$

Stability in the l2 norm

We generalize the idea of the amplification factor as follows: let $u_j^n = G^n(\xi h)e^{i\xi jh}u^0$. Then
$$G(\xi h) = I - \frac{k}{2h}A(e^{i\xi h} - e^{-i\xi h}) + \frac{k^2}{2h^2}A^2(e^{-i\xi h} - 2 + e^{i\xi h}) = I - i\frac{k}{h}A\sin(\xi h) + \frac{k^2}{h^2}A^2(\cos(\xi h) - 1) \quad \text{: amplification matrix}$$
In the 2-norm, we have
$$||u^n||_2 \leq \max_{|\xi h| \leq \pi}||G^n(\xi h)||_2\,||u^0||_2$$
Therefore, the scheme is stable if and only if the amplification matrix is power bounded. Recalling $A = R\Lambda R^{-1}$, we have
$$G^n(\xi h) = R\underbrace{\left(I - i\frac{k}{h}\Lambda\sin(\xi h) + \frac{k^2}{h^2}\Lambda^2(\cos(\xi h) - 1)\right)^n}_{\text{norm} \leq 1 \text{ if the CFL condition is satisfied}}R^{-1}$$
Thus, the LW scheme is stable in the $l^2$ norm if the CFL condition is satisfied. Note it is not stable in the infinity norm.

9.9.2 Upwind
In the 1D case $u_t + au_x = 0$, we use
$$u_j^{n+1} = \begin{cases} u_j^n - \nu(u_j^n - u_{j-1}^n) & a > 0 \\ u_j^n - \nu(u_{j+1}^n - u_j^n) & a < 0 \end{cases}$$
To deal with both negative and positive eigenvalues, we define:
$$\Lambda^- = \text{diag}(\lambda_1^-, \cdots, \lambda_N^-), \quad \lambda_p^- = \min(0, \lambda_p), \qquad \Lambda^+ = \text{diag}(\lambda_1^+, \cdots, \lambda_N^+), \quad \lambda_p^+ = \max(0, \lambda_p)$$


Similarly, we set
$$A^{\pm} = R\Lambda^{\pm}R^{-1}$$
and
$$u_j^{n+1} = u_j^n - \frac{k}{h}A^-(u_{j+1}^n - u_j^n) - \frac{k}{h}A^+(u_j^n - u_{j-1}^n)$$
For stability, we require $\max_p|\nu_p| \leq 1$.
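A minimal sketch of this split upwind scheme, applied to the symmetric $2 \times 2$ system $u_t + Au_x = 0$ with $A = \begin{pmatrix} 0 & c \\ c & 0 \end{pmatrix}$ (eigenvalues $\pm c$; this example system, the grid, and the CFL number are arbitrary choices, not taken from the text):

```python
import numpy as np

# Flux-split upwind step for a hyperbolic system:
#   u^{n+1}_j = u_j - (k/h) A- (u_{j+1} - u_j) - (k/h) A+ (u_j - u_{j-1})
# with A+- = R diag(lambda+-) R^{-1} built from the spectral factorization.
c, N = 1.0, 128
h = 1.0 / N
k = 0.8 * h / c                       # max_p |nu_p| = 0.8 <= 1
A = np.array([[0.0, c], [c, 0.0]])
lam, R = np.linalg.eigh(A)            # symmetric A: orthonormal eigenvectors
Ap = R @ np.diag(np.maximum(lam, 0)) @ R.T
Am = R @ np.diag(np.minimum(lam, 0)) @ R.T
assert np.allclose(Ap + Am, A)

x = np.arange(N) * h
U = np.stack([np.sin(2 * np.pi * x), np.zeros(N)])   # shape (2, N), periodic
for _ in range(200):
    Up, Um = np.roll(U, -1, axis=1), np.roll(U, 1, axis=1)
    U = U - (k / h) * (Am @ (Up - U) + Ap @ (U - Um))

assert np.all(np.isfinite(U))
assert np.abs(U).max() <= 1.0 + 1e-10   # first order upwind is dissipative
```

In characteristic variables $v = R^{-1}u$, each component is updated by the scalar upwind scheme in its own wind direction, which is exactly what the splitting $A = A^+ + A^-$ accomplishes.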

9.10 Initial boundary value problems


So far, we have analyzed hyperbolic problems with periodic boundary conditions. In this section, we add boundary conditions to the problem, giving what are known as initial boundary value problems (IBVPs).
Under CONSTRUCTION

9.11 Nonlinear advection equation


Under CONSTRUCTION
