Joe Huang
3 Elliptic Equations 27
3.1 Steady state heat equation . . . . . . . . . . . . . . . . . . . . . . . . . . . . 27
3.2 5-point Stencil . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 27
3.3 Matrix conditioning . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 29
3.4 9-point Laplacian . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 30
3.5 Higher order methods . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 31
3.5.1 Fourth order differencing . . . . . . . . . . . . . . . . . . . . . . . . . 31
3.6 Method for solving linear systems . . . . . . . . . . . . . . . . . . . . . . . . 31
3.6.1 Gaussian elimination . . . . . . . . . . . . . . . . . . . . . . . . . . . 31
4 Iterative methods 33
4.1 Jacobi and Gauss-Seidel . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 33
4.2 Analysis of matrix splitting methods . . . . . . . . . . . . . . . . . . . . . . 34
4.2.1 Jacobi’s method . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 34
4.2.2 Gauss-Seidel . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 34
4.2.3 Error and convergence . . . . . . . . . . . . . . . . . . . . . . . . . . 35
4.2.4 Rate of convergence . . . . . . . . . . . . . . . . . . . . . . . . . . . . 35
Introduction
Some basic linear algebra facts are collected at the beginning. Everything here
is standard, so we omit the proofs.
(I) A symmetric square matrix has a complete set of eigenvectors and all real eigenvalues.
(IV) If $\|\cdot\|_\alpha$, $\|\cdot\|_\beta$ are equivalent norms, then there exist constants $c, C > 0$ such that $c\|x\|_\beta \le \|x\|_\alpha \le C\|x\|_\beta$ for all $x$.
Part I
Chapter 1
Finite Difference Approximation
(1) $D_+ u(x_0) = \dfrac{u(x_0 + h) - u(x_0)}{h}$ : forward difference
Definition 1.1.1. Let E(h) be the error or truncation error of the approximation with step
size h. If E(h) ∼ Chp , we say the approximation is pth order accurate.
CHAPTER 1. FINITE DIFFERENCE APPROXIMATION
Polynomial interpolation
We can approximate u(x) by p(x), interpolating polynomial at chosen points:
$p(x_i) = u(x_i), \quad i = 0, \cdots, n \quad \Rightarrow \quad u(x) \approx p(x) \text{ and } u'(x) \approx p'(x)$
There are a couple of ways in which one can construct the interpolating polynomial. As an
example, suppose we wish to find a polynomial $p(x) = a + bx + cx^2 + dx^3$ of degree 3 satisfying $p(x_i) = u_i$, $i = 0, 1, 2, 3$.
We have
$$\begin{pmatrix} 1 & x_0 & x_0^2 & x_0^3 \\ 1 & x_1 & x_1^2 & x_1^3 \\ 1 & x_2 & x_2^2 & x_2^3 \\ 1 & x_3 & x_3^2 & x_3^3 \end{pmatrix} \begin{pmatrix} a \\ b \\ c \\ d \end{pmatrix} = \begin{pmatrix} u_0 \\ u_1 \\ u_2 \\ u_3 \end{pmatrix}$$
(II) We can reduce the size of the Vandermonde system by setting $a = u_0$ and:
This system is linear, triangular, and $4 \times 4$, easier to solve than the previous two.
Problem 1.2.1. Let p(x) be the quadratic polynomial interpolating u(x) at x0 , x0 −h, x0 −2h.
Show that $p'(x_0) = D_2 u(x_0)$.
Now, we wish to obtain a higher order approximation from this scheme. The idea is to
compute $D_+u(x_0)$ with step sizes $h$, $h/2$, $h/4$, then combine them to eliminate the low order error terms:
$$A_{0,0} = D_+^{h} u(x_0) = u'(x_0) + c_1 h + c_2 h^2 + c_3 h^3$$
$$A_{1,0} = D_+^{h/2} u(x_0) = u'(x_0) + c_1 \tfrac{h}{2} + c_2 \left(\tfrac{h}{2}\right)^2 + c_3 \left(\tfrac{h}{2}\right)^3$$
$$A_{1,1} = 2A_{1,0} - A_{0,0} = u'(x_0) + d_2 h^2 + d_3 h^3$$
$$A_{2,0} = D_+^{h/4} u(x_0) = u'(x_0) + c_1 \tfrac{h}{4} + c_2 \left(\tfrac{h}{4}\right)^2 + c_3 \left(\tfrac{h}{4}\right)^3$$
$$A_{2,1} = 2A_{2,0} - A_{1,0} = u'(x_0) + d_2 \left(\tfrac{h}{2}\right)^2 + d_3 \left(\tfrac{h}{2}\right)^3$$
In general, we have:
A0,0
A1,0 A1,1
A2,0 A2,1 A2,2
A3,0 A3,1 A3,2 A3,3
Figure 1.1: Illustration of Richardson Extrapolation
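The tableau above can be sketched in a short routine. This is a minimal illustration, not code from the notes; `richardson_forward_diff` is a hypothetical name, and the elimination weights generalize $A_{1,1} = 2A_{1,0} - A_{0,0}$ to $A_{i,j} = (2^j A_{i,j-1} - A_{i-1,j-1})/(2^j - 1)$.

```python
import math

def richardson_forward_diff(u, x0, h, levels=4):
    """Build the Richardson tableau A[i][j] for D_+ u(x0) with step h/2^i.

    A[i][0] is the plain forward difference; column j removes the O(h^j)
    error term via A[i][j] = (2^j A[i][j-1] - A[i-1][j-1]) / (2^j - 1).
    """
    A = [[(u(x0 + h / 2**i) - u(x0)) / (h / 2**i)] for i in range(levels)]
    for i in range(1, levels):
        for j in range(1, i + 1):
            A[i].append((2**j * A[i][j - 1] - A[i - 1][j - 1]) / (2**j - 1))
    return A

# the diagonal entries converge much faster than the first column
A = richardson_forward_diff(math.sin, 1.0, 0.1)
```

The diagonal entry `A[levels-1][levels-1]` is the most accurate approximation to $u'(x_0)$.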
Chapter 2
Steady States and Boundary Value Problems (BVP)
Boundary value problems typically arise as the steady state limit of a time-dependent problem.
Let $U(x, t)$ be the temperature, $\psi(x, t)$ the heat source (sink), $k(x)$ the heat conduction
coefficient, and $k(x)U_x$ the heat flux, i.e. the rate of heat flow.
Heat balance in a rod segment:
∆xU (x, t + ∆t) ≈ ∆xU (x, t) + ∆t(k(x + ∆x)Ux (x + ∆x, t) − k(x)Ux (x, t)) + ∆t∆xψ(x, t)
Divide both sides by ∆x∆t:
$$\frac{U(x, t + \Delta t) - U(x, t)}{\Delta t} \approx \frac{k(x + \Delta x)U_x(x + \Delta x, t) - k(x)U_x(x, t)}{\Delta x} + \psi(x, t)$$
Let ∆x, ∆t → 0, we get
ut = (k(x)ux )x + ψ(x, t)
$$u(a, t) = \alpha(t), \; u(b, t) = \beta(t) \quad \text{(Dirichlet BCs)} \qquad \text{or} \qquad u_x(a, t) = \sigma_1(t), \; u_x(b, t) = \sigma_2(t) \quad \text{(Neumann BCs)}$$
CHAPTER 2. STEADY STATES AND BOUNDARY VALUE PROBLEMS (BVP)
(I) Non-uniqueness
$$u'' = 0, \qquad u'(0) = u'(1) = 0$$
If $u$ is a solution, then $u + c$ is also a solution for any constant $c$. The boundary conditions
imply no heat flux through the boundaries, so $u$ is constant.
(II) Nonexistence
$$u'' = f, \qquad f = -\psi < 0 \;\Rightarrow\; \text{positive heat source}$$
With the boundary conditions $u'(0) = u'(1) = 0$, there is no heat flux through the
boundaries. Thus, we cannot expect a steady state solution.
Figure 2.1: Grid points $x_0 = 0 < x_1 < \cdots < x_{n+1} = 1$
As shown in Figure 2.1, we discretize the interval into $n + 1$ equally-spaced subintervals of width $h = 1/(n+1)$.
As $h \to 0$, norm equivalence does not carry over. (Linear algebra only tells us that all
norms on a fixed finite-dimensional space are equivalent.)
Global error
Recall:
$$AU = F, \qquad A\hat{U} = F + \tau \quad \Rightarrow \quad A(U - \hat{U}) = -\tau, \text{ i.e. } AE = -\tau$$
Alternatively, componentwise we have
$$\frac{1}{h^2}(E_{j-1} - 2E_j + E_{j+1}) = -\tau_j, \qquad j = 1, \cdots, n$$
For Dirichlet boundary conditions, we have
$$E_0 = E_{n+1} = 0$$
Now we let $A^h$ denote the discretization matrix with step size $h$. Then
$$A^h E^h = -\tau^h \;\Leftrightarrow\; E^h = -(A^h)^{-1}\tau^h \;\Rightarrow\; \|E^h\| = \|(A^h)^{-1}\tau^h\| \le \|(A^h)^{-1}\|\,\|\tau^h\|$$
Definition 2.4.3.
(a) Consistency: method is consistent if ||τ h || → 0 as h → 0.
(b) Stability: method is stable if ||(Ah )−1 || is bounded for small h.
(c) Convergence: method is convergent if ||E h || → 0 as h → 0.
Intuitively, if we have both consistency and stability, we should be able to get convergence.
However, it is much harder to show stability than consistency.
l2 stability
$$A^h = \frac{1}{h^2}\begin{pmatrix} -2 & 1 & & & \\ 1 & -2 & 1 & & \\ & \ddots & \ddots & \ddots & \\ & & 1 & -2 & 1 \\ & & & 1 & -2 \end{pmatrix}$$
We note $A^h$ is a symmetric, tridiagonal matrix, so $(A^h)^{-1}$ is also symmetric. Let $\lambda_k$ denote the eigenvalues
of $A^h$ and $r_k$ the corresponding eigenvectors. Recall:
$$\|A\|_2 = \rho(A) = \max_k |\lambda_k| \qquad \|A^{-1}\|_2 = \rho(A^{-1}) = \max_k |\lambda_k^{-1}| = \frac{1}{\min_k |\lambda_k|}$$
Problem 2.4.1. Show the eigenvalues and eigenvectors of $A^h$ are $\lambda_k = \frac{2}{h^2}(\cos(k\pi h) - 1)$, $k = 1, 2, \cdots, n$, and $(r_k)_j = \sin(k\pi j h)$, $j = 1, \cdots, n$.
Problem 2.4.2. $\min_k |\lambda_k| = |\lambda_1| = |\frac{2}{h^2}(\cos(\pi h) - 1)| = |-\pi^2 + O(h^2)|$. Therefore $|\lambda_1| \to \pi^2$ as $h \to 0$.
$$\|A^{-1}\|_2 = \frac{1}{|\lambda_1|} \to \frac{1}{\pi^2} \text{ as } h \to 0 \qquad \|E^h\|_2 \le \|(A^h)^{-1}\|_2 \|\tau^h\|_2 \approx \frac{1}{\pi^2}\frac{h^2}{12}\|u''''(x_j)\|_2$$
l∞ stability
To show O(h2 ) convergence in l∞ , we need to show ||A−1 ||∞ ≤ C.
Claim 2.4.1. A−1 ej extracts the j th column of the matrix A−1
Proof. $e_j = (0, \cdots, 0, 1, 0, \cdots, 0)^T$, where the 1 appears in the $j$-th entry. If we let $a_1, \cdots, a_n$ denote the
rows of $A^{-1}$, we see
$$A^{-1}e_j = \begin{pmatrix} a_1 \cdot e_j \\ \vdots \\ a_n \cdot e_j \end{pmatrix} = \begin{pmatrix} a_{1j} \\ \vdots \\ a_{nj} \end{pmatrix}$$
which equals the $j$-th column of $A^{-1}$.
Claim 2.4.2. The solution $v = A^{-1}e_j$ of $Av = e_j$ is the vector obtained
by evaluating the Green's function $hG(x; x_j)$ on the grid, that is
$$v_i = hG(x_i; x_j) = \begin{cases} h(x_j - 1)x_i & x_i \le x_j \\ h(x_i - 1)x_j & x_i > x_j \end{cases}$$
Proof. The solution to Av = ej must satisfy the following
$$\frac{v_{i-1} - 2v_i + v_{i+1}}{h^2} = \begin{cases} 0 & i \ne j \\ 1 & i = j \end{cases}$$
Note here v = (v1 , v2 , · · · , vn ). Now we consider four cases:
Claim 2.4.3. Consider the matrix $G$ whose elements are $G_{ij} = hG(x_i; x_j)$. Each
element of $G$ is bounded by $h$.
Proof. Note $0 \le x_k \le 1$. We consider two cases:
(I) $i \le j$:
$$|G_{ij}| = |h(x_j - 1)x_i| \le h|x_j - 1||x_i| \le h$$
(II) i > j
|Gij | = |h(xi − 1)xj | ≤ h|xi − 1||xj | ≤ h
Proof. $\|A^{-1}\|_\infty$ equals the maximum absolute row sum of $A^{-1}$. By the previous two
claims, we have $\|A^{-1}\|_\infty \le n \times h < 1$. Thus, the method is $l^\infty$-stable.
Problem 2.5.1. Consider the 2-point BVP $u'' = f$, $u'(0) = \sigma$, $u(1) = \beta$ with the scheme
derived above, where the Neumann condition at $j = 0$ is discretized by either
(i)
$$\frac{U_1 - U_0}{h} = \sigma \qquad j = 0$$
(ii)
$$\frac{U_1 - U_0}{h} = \sigma + \frac{h}{2}f_0 \qquad j = 0$$
(I) Compute the Local Truncation Error (LTE) for the interior points. Show that the
LTE at j = 0 is O(h) for (i) and O(h2 ) for (ii).
(II) Show that the method is $l^2$-stable. (To find the eigenvalues of the matrix $A^h$, first find the
eigenfunctions of the corresponding differential operator $\partial^2/\partial x^2$ with boundary conditions
$u'(0) = u(1) = 0$.)
Discretization yields
$$a_j \frac{u_{j-1} - 2u_j + u_{j+1}}{h^2} + b_j \frac{u_{j+1} - u_{j-1}}{2h} + c_j u_j = f_j$$
$$\frac{1}{h^2}\left[\left(a_j - \frac{h}{2}b_j\right)u_{j-1} + (h^2 c_j - 2a_j)u_j + \left(a_j + \frac{h}{2}b_j\right)u_{j+1}\right] = f_j$$
In matrix form:
$$\frac{1}{h^2}\begin{pmatrix}
h^2c_1 - 2a_1 & a_1 + \frac{h}{2}b_1 & & & \\
a_2 - \frac{h}{2}b_2 & h^2c_2 - 2a_2 & a_2 + \frac{h}{2}b_2 & & \\
& \ddots & \ddots & \ddots & \\
& & a_{n-1} - \frac{h}{2}b_{n-1} & h^2c_{n-1} - 2a_{n-1} & a_{n-1} + \frac{h}{2}b_{n-1} \\
& & & a_n - \frac{h}{2}b_n & h^2c_n - 2a_n
\end{pmatrix} U = \begin{pmatrix}
f_1 - (a_1 - \frac{h}{2}b_1)\alpha/h^2 \\ f_2 \\ \vdots \\ f_{n-1} \\ f_n - (a_n + \frac{h}{2}b_n)\beta/h^2
\end{pmatrix}$$
This is a tridiagonal system; if it is nonsingular, it can be solved. However, this method is not
the best approach.
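A tridiagonal system like the one above can be solved in $O(n)$ operations by forward elimination and back substitution (the Thomas algorithm). A minimal sketch with hypothetical names:

```python
def thomas_solve(a, b, c, d):
    """Solve a tridiagonal system with subdiagonal a, diagonal b,
    superdiagonal c, and right-hand side d (all lists of length n;
    a[0] and c[-1] are ignored)."""
    n = len(b)
    cp, dp = [0.0] * n, [0.0] * n
    cp[0], dp[0] = c[0] / b[0], d[0] / b[0]
    # forward elimination
    for i in range(1, n):
        m = b[i] - a[i] * cp[i - 1]
        cp[i] = c[i] / m if i < n - 1 else 0.0
        dp[i] = (d[i] - a[i] * dp[i - 1]) / m
    # back substitution
    x = [0.0] * n
    x[-1] = dp[-1]
    for i in range(n - 2, -1, -1):
        x[i] = dp[i] - cp[i] * x[i + 1]
    return x
```

Note this variant does no pivoting, so it assumes the system is diagonally dominant (as the discretizations here typically are).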
or
$$k(x)u_{xx} + k'(x)u_x = f \qquad (k, k' \text{ given})$$
An alternative method is more suitable for this case, in which we approximate the derivatives
using half-points:
$$(k(x)u_x)_{j+\frac{1}{2}} \approx k_{j+\frac{1}{2}}\left(\frac{u_{j+1} - u_j}{h}\right)$$
Thus,
$$\left((k(x)u_x)_x\right)_j \approx \frac{k_{j-\frac{1}{2}}u_{j-1} - (k_{j-\frac{1}{2}} + k_{j+\frac{1}{2}})u_j + k_{j+\frac{1}{2}}u_{j+1}}{h^2}$$
$$\frac{1}{h^2}\begin{pmatrix}
-(k_{\frac{1}{2}} + k_{\frac{3}{2}}) & k_{\frac{3}{2}} & & \\
k_{\frac{3}{2}} & -(k_{\frac{3}{2}} + k_{\frac{5}{2}}) & k_{\frac{5}{2}} & \\
& \ddots & \ddots & \ddots \\
& & k_{n-\frac{1}{2}} & -(k_{n-\frac{1}{2}} + k_{n+\frac{1}{2}})
\end{pmatrix}$$
This matrix is symmetric negative definite provided $k(x) > 0$, and it satisfies a maximum principle.
$$\theta = (\theta_1, \theta_2, \cdots, \theta_n)^T, \qquad G(\theta) = 0, \qquad G : \mathbb{R}^n \to \mathbb{R}^n$$
$$G(\theta)_i = \frac{\theta_{i-1} - 2\theta_i + \theta_{i+1}}{h^2} + \sin(\theta_i) \; : \; i\text{-th component}$$
Linearizing about the current iterate $\theta^k$:
$$G(\theta^*) \approx G(\theta^k) + \frac{\partial G}{\partial \theta}(\theta^k)(\theta^* - \theta^k)$$
$$0 = G(\theta^{k+1}) \approx G(\theta^k) + \frac{\partial G}{\partial \theta}(\theta^k)(\theta^{k+1} - \theta^k)$$
where $\frac{\partial G}{\partial \theta}$ is the Jacobian matrix and $\delta^k = \theta^{k+1} - \theta^k$. We note
$$\left(\frac{\partial G}{\partial \theta}\right)_{ij} = \frac{\partial G(\theta)_i}{\partial \theta_j} = \begin{cases}
\frac{1}{h^2} & j = i - 1 \\
\cos\theta_i - \frac{2}{h^2} & j = i \\
\frac{1}{h^2} & j = i + 1 \\
0 & \text{otherwise}
\end{cases}$$
Note
$$-\frac{\partial G}{\partial \theta}(\theta^k)\delta^k = G(\theta^k) \quad \Leftrightarrow \quad \delta^k = -\left(\frac{\partial G}{\partial \theta}(\theta^k)\right)^{-1} G(\theta^k)$$
This can be solved by Gaussian elimination. Note:
Accuracy
Stability
$$G(\theta) - G(\hat\theta) = -\tau$$
Linearize:
$$G(\theta) = G(\hat\theta) + \frac{\partial G}{\partial \theta}(\theta - \hat\theta)$$
$$\theta - \hat\theta = -\left[\frac{\partial G}{\partial \theta}\right]^{-1}\tau \; : \; \text{global error}$$
$$\|E\| \le \left\|\left[\frac{\partial G}{\partial \theta}\right]^{-1}\right\| \|\tau\|$$
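The Newton iteration above can be sketched as follows. This is a minimal illustration under the assumptions of this section ($G(\theta)_i$ as defined above, Dirichlet values at both ends); `newton_pendulum` and `gauss_solve` are hypothetical names.

```python
import math

def gauss_solve(A, b):
    """Gaussian elimination with partial pivoting (dense, for small n)."""
    n = len(b)
    M = [row[:] + [bi] for row, bi in zip(A, b)]
    for col in range(n):
        p = max(range(col, n), key=lambda r: abs(M[r][col]))
        M[col], M[p] = M[p], M[col]
        for r in range(col + 1, n):
            fac = M[r][col] / M[col][col]
            for c in range(col, n + 1):
                M[r][c] -= fac * M[col][c]
    x = [0.0] * n
    for r in range(n - 1, -1, -1):
        x[r] = (M[r][n] - sum(M[r][c] * x[c] for c in range(r + 1, n))) / M[r][r]
    return x

def newton_pendulum(alpha, beta, n, iters=20):
    """Newton's method for the discretized pendulum BVP
    theta'' + sin(theta) = 0, theta(0) = alpha, theta(1) = beta."""
    h = 1.0 / (n + 1)
    # initial guess: linear interpolation of the boundary values
    th = [alpha + (beta - alpha) * (i + 1) * h for i in range(n)]
    for _ in range(iters):
        bc = lambda i: alpha if i < 0 else (beta if i >= n else th[i])
        G = [(bc(i - 1) - 2 * th[i] + bc(i + 1)) / h**2 + math.sin(th[i])
             for i in range(n)]
        # Jacobian: 1/h^2 off the diagonal, cos(th[i]) - 2/h^2 on it
        J = [[0.0] * n for _ in range(n)]
        for i in range(n):
            J[i][i] = math.cos(th[i]) - 2 / h**2
            if i > 0:
                J[i][i - 1] = 1 / h**2
            if i < n - 1:
                J[i][i + 1] = 1 / h**2
        delta = gauss_solve(J, [-g for g in G])  # J delta = -G
        th = [t + d for t, d in zip(th, delta)]
    return th
```

The Jacobian is tridiagonal, so in practice one would use a tridiagonal solve instead of dense elimination; the dense version is kept here only for clarity.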
Chapter 3
Elliptic Equations
$$u_t = 0, \quad \psi \text{ independent of } t, \quad \text{take } k = 1:$$
$$u_{xx} + u_{yy} = f(x, y) \; : \; \text{Poisson's equation}$$
$$\nabla^2 u = u_{xx} + u_{yy} \equiv \Delta u \; : \; \text{Laplacian operator}$$
(Figure: two orderings of the grid unknowns. (a) Order by row; (b) red-black ordering.)
Matrix form $AU = F$:
$$A = \frac{1}{h^2}\begin{pmatrix} T & I & & \\ I & T & I & \\ & \ddots & \ddots & \ddots \\ & & I & T \end{pmatrix} \qquad T = \begin{pmatrix} -4 & 1 & & \\ 1 & -4 & 1 & \\ & \ddots & \ddots & \ddots \\ & & 1 & -4 \end{pmatrix} \qquad I = \begin{pmatrix} 1 & & \\ & \ddots & \\ & & 1 \end{pmatrix}$$
Note $A$ is an $n^2 \times n^2$ block tridiagonal matrix and $T, I$ are $n \times n$. Under row ordering the vector $U$ is given by:
$$U = (u_{11}, u_{21}, u_{31}, u_{41}, u_{12}, \cdots)^T$$
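The row-by-row ordering and the block structure above can be sketched by assembling the matrix explicitly. A minimal dense illustration (`laplacian_5pt` is a hypothetical name; a sparse format would be used in practice):

```python
def laplacian_5pt(n):
    """Assemble the n^2 x n^2 5-point Laplacian (row-by-row ordering,
    h = 1/(n+1)) as a dense list of lists."""
    h2 = (1.0 / (n + 1)) ** 2
    N = n * n
    A = [[0.0] * N for _ in range(N)]
    for j in range(n):          # grid row
        for i in range(n):      # grid column
            k = j * n + i       # unknown index under row ordering
            A[k][k] = -4.0 / h2
            if i > 0: A[k][k - 1] = 1.0 / h2      # west neighbor
            if i < n - 1: A[k][k + 1] = 1.0 / h2  # east neighbor
            if j > 0: A[k][k - n] = 1.0 / h2      # south neighbor
            if j < n - 1: A[k][k + n] = 1.0 / h2  # north neighbor
    return A
```

Rows for interior points sum to zero; rows touching the boundary are missing the neighbors that were moved to the right-hand side.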
Accuracy
$$\tau_{i,j} = \frac{1}{h^2}\left[u(x_i - h, y_j) + u(x_i + h, y_j) + u(x_i, y_j - h) + u(x_i, y_j + h) - 4u(x_i, y_j)\right] - f(x_i, y_j)$$
$$= \frac{h^2}{12}(u_{xxxx} + u_{yyyy}) + O(h^4) = O(h^2)$$
Stability
Recall, stability means
||A−1 || ≤ c as h → 0
Problem 3.2.1. Show that the eigenvectors and eigenvalues of the 5-point Laplacian are
$$(r_{p,q})_{i,j} = \sin(p\pi i h)\sin(q\pi j h) \qquad \lambda_{p,q} = \frac{2}{h^2}\left[(\cos(p\pi h) - 1) + (\cos(q\pi h) - 1)\right]$$
Claim 3.2.1. The method is l2 -stable.
Proof. From the problem, the eigenvalues of $A^{-1}$ are $\frac{1}{\lambda_{i,j}}$, where
$$\lambda_{i,j} = \frac{2}{h^2}\left[(\cos(i\pi h) - 1) + (\cos(j\pi h) - 1)\right]$$
$$\|A^{-1}\| = \rho(A^{-1}) = \frac{1}{|\lambda_{1,1}|} \approx \frac{1}{2\pi^2} \text{ as } h \to 0$$
Thus, the method is l2 -stable.
5-point Laplacian
We examine the system of 5-point Laplacian:
$$\|A\|_2 = \rho(A) = |\lambda_{n,n}| = \left|\frac{2}{h^2}\left[\cos(n\pi h) - 1 + \cos(n\pi h) - 1\right]\right| \approx \frac{8}{h^2}$$
$$\|A^{-1}\|_2 = \rho(A^{-1}) = \frac{1}{|\lambda_{1,1}|} \approx \frac{1}{2\pi^2}$$
$$\text{Cond}_2(A) = \frac{8}{h^2}\cdot\frac{1}{2\pi^2} = \frac{4}{\pi^2 h^2}$$
Thus, system is ill-conditioned as h → 0.
(Stencil: center point $u_0$ with edge neighbors $u_1, u_2, u_3, u_4$ and diagonal neighbors $u_5, u_6, u_7, u_8$.)
$$\nabla_5^2 u_{i,j} = \frac{u_1 + u_2 + u_3 + u_4 - 4u_0}{h^2} \qquad \text{(I)}$$
Alternatively, we can use the points on the diagonals $u_0, u_5, u_6, u_7, u_8$. In this
way, we get:
$$\tilde\nabla_5^2 u_{i,j} = \frac{u_5 + u_6 + u_7 + u_8 - 4u_0}{2h^2} \qquad \text{(II)}$$
The 9-point Laplacian is defined as the combination
$$\nabla_9^2 u_{i,j} = \frac{2}{3}\nabla_5^2 u_{i,j} + \frac{1}{3}\tilde\nabla_5^2 u_{i,j} = \frac{1}{6h^2}\left[4(u_1 + u_2 + u_3 + u_4) + u_5 + u_6 + u_7 + u_8 - 20u_0\right]$$
Some observations
(i) Method is 2nd order accurate.
(ii) If f = 0 or ∇2 f = 0 (f is harmonic), then method is 4th order accurate.
(iii) If $f(x, y)$ is known, then we can compute $\nabla^2 f$. Using this, we can increase the order of
accuracy:
$$\nabla_9^2 u_{i,j} = f_{i,j} + \frac{h^2}{12}\nabla^2 f_{i,j} \; : \; \text{4th order scheme}$$
(iv) If only the grid values $f_{i,j}$ are known, then $\nabla_5^2 f_{i,j} = \nabla^2 f_{i,j} + O(h^2)$. Thus:
$$\nabla_9^2 u_{i,j} = f_{i,j} + \frac{h^2}{12}\nabla_5^2 f_{i,j} \; : \; \text{4th order scheme}$$
The observations above are an instance of the method of deferred corrections.
Operation counts
For Gaussian elimination, the operation count is $O(N^3)$, where $N = n^2$ is the size of the system. For a banded
system, the operation count is reduced to $O(p^2 N)$, where $p$ is the bandwidth.
Chapter 4
Iterative methods
Iterative methods can be used to generate more and more accurate approximations.
In matrix form:
A=M −N
Ax = F ⇔ (M − N )x = F ⇔ M x = N x + F
M x(k+1) = N x(k) + F : iterative method
$$A = D - L - U \qquad D : \text{diagonal} \quad L : \text{strictly lower triangular} \quad U : \text{strictly upper triangular}$$
where
$$D = \frac{1}{h^2}\begin{pmatrix} -2 & & \\ & \ddots & \\ & & -2 \end{pmatrix} \qquad L = \frac{-1}{h^2}\begin{pmatrix} 0 & & & \\ 1 & \ddots & & \\ & \ddots & \ddots & \\ & & 1 & 0 \end{pmatrix} \qquad U = \frac{-1}{h^2}\begin{pmatrix} 0 & 1 & & \\ & \ddots & \ddots & \\ & & \ddots & 1 \\ & & & 0 \end{pmatrix}$$
$$M = D = \frac{-2}{h^2}I$$
$$N = L + U = \frac{-1}{h^2}\begin{pmatrix} 0 & 1 & & \\ 1 & 0 & 1 & \\ & \ddots & \ddots & \ddots \\ & & 1 & 0 \end{pmatrix}$$
$$M u^{(k+1)} = N u^{(k)} + F$$
$$u^{(k+1)} = M^{-1}N u^{(k)} + M^{-1}F = \frac{1}{2}\begin{pmatrix} 0 & 1 & & \\ 1 & 0 & 1 & \\ & \ddots & \ddots & \ddots \\ & & 1 & 0 \end{pmatrix} u^{(k)} - \frac{h^2}{2}F$$
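Componentwise, the Jacobi update above reads $u_i^{(k+1)} = \frac{1}{2}(u_{i-1}^{(k)} + u_{i+1}^{(k)} - h^2 f_i)$. A minimal sketch with zero Dirichlet boundary values (`jacobi_step` is a hypothetical name):

```python
def jacobi_step(u, f, h):
    """One Jacobi sweep for u'' = f with u = 0 at both boundaries:
    u_i <- (u_{i-1} + u_{i+1} - h^2 f_i) / 2."""
    n = len(u)
    left = lambda i: u[i - 1] if i > 0 else 0.0
    right = lambda i: u[i + 1] if i < n - 1 else 0.0
    return [(left(i) + right(i) - h * h * f[i]) / 2.0 for i in range(n)]

# example: u'' = -2 on (0,1), u(0) = u(1) = 0 has exact solution u(x) = x(1-x),
# which the second difference reproduces exactly for a quadratic
n, h = 9, 0.1
f = [-2.0] * n
u = [0.0] * n
for _ in range(2000):
    u = jacobi_step(u, f, h)
```

Since the second difference is exact for quadratics, the iterates converge to the grid values of $x(1-x)$ itself.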
4.2.2 Gauss-Seidel
$$M = D - L \qquad N = U$$
$$u^{(k+1)} = (D - L)^{-1}\left(U u^{(k)} + F\right), \qquad D - L = \frac{1}{h^2}\begin{pmatrix} -2 & & & \\ 1 & -2 & & \\ & \ddots & \ddots & \\ & & 1 & -2 \end{pmatrix}$$
$$e^{(k)} = u^{(k)} - u^* \; : \; \text{error in the } k\text{-th iterate}$$
$$e^{(k+1)} = M^{-1}N e^{(k)}$$
$$G = M^{-1}N \; : \; \text{iteration matrix}$$
$$e^{(k+1)} = G e^{(k)} \; : \; \text{linear convergence}$$
$$e^{(k)} = G^k e^{(0)} \qquad (e^{(0)} : \text{error in the initial guess})$$
$$\|e^{(k)}\| \le \|G^k\| \, \|e^{(0)}\|$$
Linear convergence means the error is directly related to the error in previous iteration.
We see iteration converges from any initial guess if Gk → 0 as k → ∞. We assume G
diagonalizable:
$$G = R\Lambda R^{-1} \qquad R : \text{matrix of right eigenvectors of } G \qquad \Lambda : \text{diagonal matrix of eigenvalues } \lambda_k$$
$$G^k = GG \cdots G = R\Lambda^k R^{-1}$$
If all $|\lambda_k| < 1$, then $G^k \to 0$ and the method converges. The rate of convergence is determined by the
largest eigenvalue of $G$.
$$\|e^{(k)}\|_2 \le \|R\|_2 \|\Lambda^k\|_2 \|R^{-1}\|_2 \|e^{(0)}\|_2 = \rho(G)^k \,\text{cond}_2(R)\, \|e^{(0)}\|_2$$
If $G$ is normal ($GG^T = G^TG$), e.g. symmetric, then $\text{cond}_2(R) = 1$ and $\|e^{(k)}\|_2 \le \rho(G)^k \|e^{(0)}\|_2$.
Jacobi’s method
$$A = D - L - U$$
$$G = D^{-1}(L + U) = D^{-1}(D - A) = I - D^{-1}A = I + \frac{h^2}{2}A$$
$$\text{eigenvalues of } G : \; 1 + \frac{h^2}{2}\lambda_k, \quad \lambda_k = \frac{2}{h^2}(\cos(k\pi h) - 1) \;\Leftrightarrow\; 1 + \frac{h^2}{2}\lambda_k = \cos(k\pi h), \quad k = 1, 2, \cdots, n$$
Note all eigenvalues of $G$ satisfy $|\cos(k\pi h)| < 1$, thus the method converges!
Operation count
$\|e^{(k)}\| \le \rho^k\|e^{(0)}\|$, $\|e^{(0)}\| = O(1)$. To reduce the error by a factor of $\varepsilon$ we need $\rho^k \le \varepsilon$, i.e. $k \approx \log(\varepsilon)/\log(\rho)$.
Since our method is second order, we want $\varepsilon$ to be second order as well:
$$\varepsilon = ch^2$$
$$k \approx \frac{\log(\varepsilon)}{\log(\rho)} \approx \frac{\log(ch^2)}{\log(1 - \frac{1}{2}(\pi h)^2)} \approx \frac{\log(c) + 2\log(h)}{-\frac{1}{2}(\pi h)^2} \approx O\left(\frac{2\log(h)}{-\frac{1}{2}\pi^2 h^2}\right) = O\left(\frac{-2\log(n)}{-\frac{1}{2}\pi^2 (\frac{1}{n})^2}\right) = O\left(\frac{4}{\pi^2}n^2\log(n)\right) \; : \; \text{\# of iterations}$$
Gauss-Seidel
Note: $A = D - L - U$, $e^{(k+1)} = G_{GS}e^{(k)}$, where $G_{GS} = (D - L)^{-1}U$. The eigenvalues can be analyzed via the similarity transform
$$A_\lambda = D_\lambda A D_\lambda^{-1}, \qquad D_\lambda = \text{diag}(\lambda, \lambda^2, \cdots, \lambda^n)$$
Some comments:
Operation count
To reduce the error by a factor $\varepsilon = ch^2$:
$$k = \log(\varepsilon)/\log(\rho) \approx O\left(\frac{2\log(h)}{\log(1 - \pi^2h^2)}\right) = O\left(\frac{2}{\pi^2}n^2\log(n)\right)$$
We see GS moves approximation in right direction, but is far too conservative. Instead, we
can consider the following scheme:
$$U_j^{GS} = \frac{1}{2}\left(U_{j-1}^{(k+1)} + U_{j+1}^{(k)}\right) - \frac{h^2}{2}f_j$$
$$U_j^{(k+1)} = U_j^{(k)} + \omega\left(U_j^{GS} - U_j^{(k)}\right)$$
We also need to know the optimal $\omega$. Note we need $\|G_{SOR}\|$ to be as small as
possible, so $\omega_{\text{optimal}}$ should give us the $G_{SOR}$ with the smallest spectral radius.
Proof.
Jacobi
M = D, N = L + U ⇒ M T + N is symmetric and pos-definite → ρ(GJ ) < 1
SOR
$M = \frac{1}{\omega}(D - \omega L)$, $N = \frac{1}{\omega}[(1 - \omega)D + \omega U]$. If $A$ is symmetric positive definite, i.e. $L^T = U$,
then $M^T + N = \frac{2 - \omega}{\omega}D$ is symmetric. If $0 < \omega < 2$, $M^T + N$ is also positive definite, and by the
problem, we have $\rho(G_{SOR}) < 1$.
Therefore,
$$\rho(G_{\omega^*}) = \omega^* - 1 < \rho(G_{GS}) < \rho(G_J) < 1$$
Operation count
k is the # of iteration.
$$k = \log(\varepsilon)/\log(\rho) = \log(ch^2)/\log(1 - 2\pi h) \approx O\left(\frac{2\log h}{-2\pi h}\right) = O(n\log(n))$$
Compare with $O(n^2\log n)$ for Jacobi and GS. Typically, the convergence rate is very sensitive to the
choice of $\omega$; it is better to slightly overestimate $\omega^*$ than to slightly underestimate it. Even with
optimal SOR, $\rho(G_{\omega^*}) \to 1$ as $h \to 0$.
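The difference in iteration counts can be observed experimentally. A rough sketch comparing Gauss-Seidel ($\omega = 1$) against SOR with $\omega^* = 2/(1 + \sin(\pi h))$, the optimal value for this 1D model problem; the function name is hypothetical.

```python
import math

def iterations_to_tol(omega, f, n, tol):
    """Count SOR sweeps (omega = 1 gives Gauss-Seidel) needed to push the
    residual of u'' = f, u(0) = u(1) = 0, below tol."""
    h = 1.0 / (n + 1)
    u = [0.0] * n
    def residual():
        r = 0.0
        for i in range(n):
            left = u[i - 1] if i > 0 else 0.0
            right = u[i + 1] if i < n - 1 else 0.0
            r = max(r, abs((left - 2 * u[i] + right) / h**2 - f[i]))
        return r
    sweeps = 0
    while residual() > tol and sweeps < 100000:
        for i in range(n):
            left = u[i - 1] if i > 0 else 0.0
            right = u[i + 1] if i < n - 1 else 0.0
            ugs = (left + right - h * h * f[i]) / 2.0   # Gauss-Seidel value
            u[i] = u[i] + omega * (ugs - u[i])          # SOR relaxation
        sweeps += 1
    return sweeps

n = 31
h = 1.0 / (n + 1)
f = [-2.0] * n
omega_opt = 2.0 / (1.0 + math.sin(math.pi * h))
# SOR with omega_opt needs far fewer sweeps than Gauss-Seidel (omega = 1)
```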
Part II
Chapter 5
The Initial Value Problem (IVP) for Ordinary Differential Equations
$$u'(t_n) \approx D_+(u^n):$$
$$\frac{u^{n+1} - u^n}{k} = f(u^n) \;\Rightarrow\; u^{n+1} = u^n + kf(u^n), \qquad n = 0, 1, 2, \cdots$$
From the initial data $U^0$, we can compute $U^1$, $U^2$, and so on. This is an explicit time-marching
scheme.
CHAPTER 5. THE INITIAL VALUE PROBLEM (IVP) FOR ORDINARY
DIFFERENTIAL EQUATIONS
Backward Euler
$$u'(t_{n+1}) \approx D_-(u^{n+1}):$$
$$\frac{u^{n+1} - u^n}{k} = f(u^{n+1}) \;\Rightarrow\; u^{n+1} = u^n + kf(u^{n+1}), \qquad n = 0, 1, 2, \cdots$$
An implicit scheme: it requires solving a (generally nonlinear) equation to find $u^{n+1}$.
Trapezoid method
$$\frac{u^{n+1} - u^n}{k} = \frac{1}{2}\left(f(u^n) + f(u^{n+1})\right) \;\Rightarrow\; u^{n+1} = u^n + \frac{k}{2}\left(f(u^n) + f(u^{n+1})\right), \qquad n = 0, 1, 2, \cdots$$
Implicit, second order accurate scheme.
$$u'(t_n) \approx D_0(u^n):$$
$$\frac{u^{n+1} - u^{n-1}}{2k} = f(u^n) \;\Rightarrow\; u^{n+1} = u^{n-1} + 2kf(u^n)$$
An explicit, second order method. It needs another method to start it up.
Truncation Error
The LTE for forward Euler
$$\tau^n = \frac{u(t_{n+1}) - u(t_n)}{k} - f(u(t_n)) = \frac{k}{2}u''(t_n) + O(k^2)$$
Thus, it is a first order approximation.
Accuracy
$$\tau^n = \frac{u(t_{n+1}) - u(t_n)}{k} - f\left(u(t_n) + \frac{k}{2}f(u(t_n))\right)$$
$$f\left(u(t_n) + \frac{k}{2}f(u(t_n))\right) = f(u(t_n)) + \frac{k}{2}f'(u(t_n))\underbrace{f(u(t_n))}_{=u'(t_n)} + O(k^2) = u'(t_n) + \frac{k}{2}\underbrace{f'(u(t_n))u'(t_n)}_{=u''(t_n)} + O(k^2)$$
$$\tau^n = \frac{ku'(t_n) + \frac{k^2}{2}u''(t_n) + O(k^3)}{k} - \left(u'(t_n) + \frac{k}{2}u''(t_n) + O(k^2)\right) = O(k^2)$$
We may also check this on a linear ODE $u' = \lambda u$:
$$u^{n+1} = u^n + \lambda k\left(u^n + \frac{k}{2}\lambda u^n\right) = \left(1 + \lambda k + \frac{(\lambda k)^2}{2}\right)u^n$$
$$u(t) = u_0 e^{\lambda t} \; : \; \text{exact solution}$$
$$u(t_{n+1}) = u_0 e^{\lambda(t_n + k)} = e^{\lambda k}u(t_n) = \left(1 + \lambda k + \frac{(\lambda k)^2}{2} + \cdots\right)u(t_n)$$
Numerical solution:
$$u^{n+1} = e^{\lambda k}u^n + O(k^3)$$
Thus, the one-step error is $O(k^3)$, which implies the truncation error is $O(k^2)$. It doesn't automatically
follow that the method is 2nd order accurate for nonlinear problems.
F0 = f (un )
F1 = f (un + akF0 )
un+1 = un + k(bF0 + cF1 )
How do we find a, b, c to maximize accuracy?
$$\tau^n = \frac{u(t_{n+1}) - u(t_n)}{k} - \left(bf(u(t_n)) + cf(u(t_n) + akf(u(t_n)))\right)$$
$$= \frac{u(t_{n+1}) - u(t_n)}{k} - \left(b\underbrace{f(u(t_n))}_{u'(t_n)} + c\left(\underbrace{f(u(t_n))}_{u'(t_n)} + ak\underbrace{f(u(t_n))f'(u(t_n))}_{u''(t_n)} + O(k^2)\right)\right)$$
$$= (1 - (b + c))u'(t_n) + k\left(\frac{1}{2} - ac\right)u''(t_n) + O(k^2)$$
Thus, we require
$$\begin{cases} b + c = 1 \\ ac = \frac{1}{2} \end{cases} \;\Rightarrow\; \tau^n = O(k^2) \; : \; \text{1-parameter family}$$
Example 5.4.1. $a = \frac{1}{2}$, $b = 0$, $c = 1$:
$$F_0 = f(u^n)$$
$$F_1 = f\left(u^n + \frac{k}{2}F_0\right)$$
$$u^{n+1} = u^n + kF_1$$
The classical 4-stage Runge-Kutta method:
$$F_0 = f(u^n)$$
$$F_1 = f\left(u^n + \frac{k}{2}F_0\right)$$
$$F_2 = f\left(u^n + \frac{k}{2}F_1\right)$$
$$F_3 = f(u^n + kF_2)$$
$$u^{n+1} = u^n + \frac{k}{6}(F_0 + 2F_1 + 2F_2 + F_3)$$
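The stage formulas above translate directly into code. A minimal sketch (`rk4_solve` is an illustrative name), which also lets us observe the expected $O(k^4)$ error decay on $u' = u$:

```python
import math

def rk4_solve(f, u0, T, N):
    """Classical 4-stage Runge-Kutta for u' = f(u): N steps of size k = T/N."""
    k = T / N
    u = u0
    for _ in range(N):
        F0 = f(u)
        F1 = f(u + 0.5 * k * F0)
        F2 = f(u + 0.5 * k * F1)
        F3 = f(u + k * F2)
        u = u + (k / 6.0) * (F0 + 2 * F1 + 2 * F2 + F3)
    return u

# on u' = u over [0, 1], halving k should cut the error by roughly 2^4 = 16
err1 = abs(rk4_solve(lambda u: u, 1.0, 1.0, 10) - math.e)
err2 = abs(rk4_solve(lambda u: u, 1.0, 1.0, 20) - math.e)
```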
Problem 5.4.1. Check that the method is 4th order accurate on a linear problem.
$U^{n+r}$ is computed using $U^{n+r-1}, U^{n+r-2}, \cdots, U^n$. Note if $\beta_r = 0$, then the method is explicit.
Adams-Bashforth method
$$u^{n+r} = u^{n+r-1} + k\sum_{j=0}^{r-1}\beta_j f(u^{n+j}) \; : \; \text{explicit}$$
Adams-Moulton method
$$u^{n+r} = u^{n+r-1} + k\sum_{j=0}^{r}\beta_j f(u^{n+j}) \; : \; \text{implicit}$$
Nyström method
$$u^{n+r} = u^{n+r-2} + k\sum_{j=0}^{r-1}\beta_j f(u^{n+j}) \; : \; \text{explicit}$$
$$r = 2 \;\Rightarrow\; \text{midpoint method}$$
5.5.1 LTE
$$\tau^{n+r} = \frac{1}{k}\left(\sum_{j=0}^r \alpha_j u(t_{n+j}) - k\sum_{j=0}^r \beta_j \underbrace{f(u(t_{n+j}))}_{u'(t_{n+j})}\right) = \frac{1}{k}\left(\sum_{j=0}^r \alpha_j u(t_{n+j}) - k\sum_{j=0}^r \beta_j u'(t_{n+j})\right)$$
Expanding in Taylor series about $t_n$:
$$u(t_{n+j}) = u(t_n) + jku'(t_n) + \frac{(jk)^2}{2}u''(t_n) + \cdots$$
$$u'(t_{n+j}) = u'(t_n) + jku''(t_n) + \frac{(jk)^2}{2}u'''(t_n) + \cdots$$
$$\tau^{n+r} = \frac{1}{k}\left(\sum_{j=0}^r \alpha_j\right)u(t_n) + \left(\sum_{j=0}^r (j\alpha_j - \beta_j)\right)u'(t_n) + k\left(\sum_{j=0}^r \left(\frac{j^2}{2}\alpha_j - j\beta_j\right)\right)u''(t_n) + \cdots + k^{p-1}\left(\sum_{j=0}^r \left(\frac{j^p}{p!}\alpha_j - \frac{j^{p-1}}{(p-1)!}\beta_j\right)\right)u^{(p)}(t_n) + \cdots$$
Problem 5.5.1. Show that these conditions are obtained by requiring this method to be
exact for polynomials of degree 0,1.
Comments on LMM:
General rule: one may drop one order of accuracy in generating the starting values (the one-step
error is $O(k^{p+1})$).
Chapter 6
Zero Stability and Convergence
6.1 Convergence
To discuss convergence for IVP, we fix T = nk and check the error in our approximations to
u(T ). We say a method converges if
lim U N = u(T ) (6.1.1)
k→0
Note it requires more steps as k → 0. In general, a method might converge on some problems
but not all. To say a method is convergent in general, we mean it converges for all problems
with all reasonable starting values. For r-step method:
$$\text{Starting values: } U^0, U^1, \cdots, U^{r-1} \text{ with } \lim_{k\to 0} U^l = \eta \text{ for } l = 0, 1, \cdots, r-1 \qquad (6.1.2)$$
More precisely,
Definition 6.1.1. An r-step method is said to be convergent if applying the method to any
ODE u0 = f (u, t) with f (u, t) Lipschitz continuous in u, and with any set of starting values
satisfying (6.1.2), we obtain convergence in the sense of (6.1.1) for every fixed time T > 0
at which the ODE has a unique solution.
Example 6.1.2.
$$u' = \lambda u, \quad u(0) = u_0 \;\to\; u(t) = u_0e^{\lambda t}$$
$$u^1 = u^0 + k\lambda u^0 = (1 + k\lambda)u^0$$
$$u^2 = u^1 + k\lambda u^1 = (1 + k\lambda)u^1 = (1 + k\lambda)^2u^0$$
$$\vdots$$
$$u^N = (1 + \lambda k)^N u^0$$
As $k \to 0$ with $Nk = T$ fixed, $N = \frac{T}{k}$:
$$\lim_{k\to 0}U^N = \lim_{k\to 0}(1 + \lambda k)^{T/k}u_0 = \lim_{k\to 0}\left[(1 + \lambda k)^{\frac{1}{\lambda k}}\right]^{\lambda T}u_0 = e^{\lambda T}u_0$$
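The limit above is easy to check numerically. A small sketch (names are illustrative):

```python
import math

def euler_final(lam, u0, T, N):
    """Forward Euler for u' = lam*u: u^N = (1 + lam*k)^N u0 with k = T/N."""
    k = T / N
    return u0 * (1.0 + lam * k) ** N

# as k -> 0 (N -> infinity), u^N approaches u0 * exp(lam*T)
vals = [euler_final(-2.0, 1.0, 1.0, N) for N in (10, 100, 1000)]
```

The error shrinks roughly like $O(k)$, consistent with forward Euler being first order.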
CHAPTER 6. ZERO STABILITY AND CONVERGENCE
u0 = λu + g(t) u(0) = u0
So we have en = O(k). Note Lipschitz constant L plays the role of λ in the linear case.
6.3.1 Consistency
$$\tau^n = \frac{u(t_{n+1}) - u(t_n)}{k} - \psi(u(t_n), k)$$
$$= u'(t_n) + \frac{k}{2}u''(t_n) + \cdots - \left[\psi(u(t_n), 0) + k\psi_k(u(t_n), 0) + \cdots\right]$$
The method is consistent if
$$\psi(u, 0) = f(u) \;\Rightarrow\; \tau^n \to 0 \text{ as } k \to 0$$
6.3.2 Convergence
$$\sum_{j=0}^r \alpha_j U^{n+j} = k\sum_{j=0}^r \beta_j f(U^{n+j}) \; : \; r\text{-step LMM}$$
$$\sum_{j=0}^r \alpha_j = 0, \qquad \sum_{j=0}^r j\alpha_j = \sum_{j=0}^r \beta_j \; : \; \text{consistency}$$
$$U^0, U^1, \cdots, U^{r-1} \to \eta \text{ as } k \to 0$$
Example 6.4.1.
Take $f = 0$: $u'(t) = 0$, $u(0) = 0$, so the exact solution is $u(t) = 0$. The numerical solution
with starting values $u^0 = 0$, $u^1 = k$ gives:
$$N: \quad 5 \qquad 10 \qquad 20$$
$$u^N: \quad 4.2 \qquad \approx 260 \qquad \approx 2 \times 10^6$$
Suppose $\xi_1, \xi_2, \cdots, \xi_r$ are distinct roots of $\rho(\xi)$; then we can write $u^n$ as a linear combination
$$u^n = c_1\xi_1^n + c_2\xi_2^n + \cdots + c_r\xi_r^n$$
where $c_1, c_2, \cdots, c_r$ are constants determined by the initial values:
$$n = 0: \quad c_1 + c_2 + \cdots + c_r = u^0$$
$$n = 1: \quad c_1\xi_1 + c_2\xi_2 + \cdots + c_r\xi_r = u^1$$
$$\vdots$$
$$n = r-1: \quad c_1\xi_1^{r-1} + c_2\xi_2^{r-1} + \cdots + c_r\xi_r^{r-1} = u^{r-1}$$
In matrix form:
$$\begin{pmatrix} 1 & 1 & \cdots & 1 \\ \xi_1 & \xi_2 & \cdots & \xi_r \\ \vdots & \vdots & & \vdots \\ \xi_1^{r-1} & \xi_2^{r-1} & \cdots & \xi_r^{r-1} \end{pmatrix}\begin{pmatrix} c_1 \\ c_2 \\ \vdots \\ c_r \end{pmatrix} = \begin{pmatrix} u^0 \\ u^1 \\ \vdots \\ u^{r-1} \end{pmatrix}$$
The matrix on the left side is a Vandermonde matrix. It is nonsingular (for distinct $\xi_i$) but ill-conditioned.
Example 6.5.1. We revisit the example:
$$\rho(\xi) = \xi^2 - 3\xi + 2 = (\xi - 1)(\xi - 2) \;\Rightarrow\; \xi_1 = 1, \; \xi_2 = 2$$
$$u^n = c_1 1^n + c_2 2^n = c_1 + c_2 2^n$$
With $u^0 = 0$, $u^1 = k$: $c_1 + c_2 = 0$ and $c_1 + 2c_2 = k$, so $c_1 = -k$, $c_2 = k$. Thus, we have
$$u^n = k(2^n - 1)$$
It grows exponentially!
Claim 6.5.1. The matrix
$$A = \begin{pmatrix} 1 & 1 & \cdots & 1 \\ \xi_1 & \xi_2 & \cdots & \xi_r \\ \vdots & \vdots & & \vdots \\ \xi_1^{r-1} & \xi_2^{r-1} & \cdots & \xi_r^{r-1} \end{pmatrix}$$
is invertible when $\xi_1, \cdots, \xi_r$ are distinct.
Proof. To show $A$ is invertible, it suffices to show the null space of $A^T$ is trivial; that is, the only
solution to $A^Tx = 0$ is $x \equiv 0$:
$$\begin{pmatrix} 1 & \xi_1 & \cdots & \xi_1^{r-1} \\ \vdots & \vdots & & \vdots \\ 1 & \xi_r & \cdots & \xi_r^{r-1} \end{pmatrix}\begin{pmatrix} x_1 \\ \vdots \\ x_r \end{pmatrix} = 0$$
If we define $f(\xi) = x_1 + x_2\xi + \cdots + x_r\xi^{r-1}$, then $f$ is a polynomial of degree at most $r - 1$ with $r$ distinct roots $\xi_1, \cdots, \xi_r$. This implies that $f(\xi)$ is the trivial zero polynomial and $x_1 = x_2 = \cdots = x_r = 0$. Thus, we
have shown $A$ is invertible.
It is noteworthy that if any of the roots satisfies $|\xi_j| > 1$, then the LMM does not converge. But if
all roots satisfy $|\xi_j| \le 1$, will the method converge?
Claim 6.5.2. If ξ1 is a root of ρ(ξ) with multiplicity of 2, then un = nξ1n is a solution of the
difference equation.
Example 6.5.2.
$$u^{n+2} - 2u^{n+1} + u^n = \frac{k}{2}\left(f(u^{n+2}) - f(u^n)\right)$$
$$\sum \alpha_j = 0 \qquad \sum j\alpha_j = \sum\beta_j = 0 \;\Rightarrow\; \text{consistent!}$$
$$\rho(\xi) = \xi^2 - 2\xi + 1 = (\xi - 1)^2 = 0 \;\Rightarrow\; \xi_1 = \xi_2 = 1$$
$$u^n = c_1\xi_1^n + c_2n\xi_1^n = c_1 + c_2n$$
$$u^0 = 0, \; u^1 = k \;\Rightarrow\; c_1 = 0, \; c_2 = k \;\Rightarrow\; u^n = kn$$
Note here the solution grows linearly, not as badly as exponential growth, but the method still does not
converge.
Example 6.5.3.
$$u^{n+3} - 2u^{n+2} + \frac{5}{4}u^{n+1} - \frac{1}{4}u^n = \frac{1}{4}kf(u^n)$$
$$\sum\alpha_j = 1 - 2 + \frac{5}{4} - \frac{1}{4} = 0 \qquad \sum j\alpha_j = 1 \cdot 3 - 2 \cdot 2 + \frac{5}{4} + 0 = \frac{1}{4} = \sum\beta_j \;\Rightarrow\; \text{consistent!}$$
$$\rho(\xi) = \xi^3 - 2\xi^2 + \frac{5}{4}\xi - \frac{1}{4} = (\xi - 1)\left(\xi - \frac{1}{2}\right)^2$$
$$\Rightarrow\; \xi_1 = 1, \quad \xi_2 = \xi_3 = \frac{1}{2}$$
$$u^n = c_1 + c_2\left(\frac{1}{2}\right)^n + c_3 n\left(\frac{1}{2}\right)^n$$
$$n\left(\frac{1}{2}\right)^n \to 0 \text{ as } n \to \infty$$
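The root computations in this example, and the root condition itself, are easy to verify numerically. A small sketch (function names are illustrative):

```python
def rho(xi):
    """Characteristic polynomial of Example 6.5.3:
    rho(xi) = xi^3 - 2 xi^2 + (5/4) xi - 1/4 = (xi - 1)(xi - 1/2)^2."""
    return xi**3 - 2 * xi**2 + 1.25 * xi - 0.25

def root_condition(roots_with_mult):
    """Root condition: |xi| <= 1 for every root, with strict inequality
    for repeated roots. Input: list of (root, multiplicity) pairs."""
    return all(abs(xi) <= 1 and (m == 1 or abs(xi) < 1)
               for xi, m in roots_with_mult)

ok = root_condition([(1.0, 1), (0.5, 2)])  # Example 6.5.3 satisfies it
```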
Claim 6.5.3.
$$\lim_{n\to\infty} n^p|\xi^n| = \begin{cases} \infty & |\xi| > 1 \\ 0 & |\xi| < 1 \\ 1 & |\xi| = 1, \; p = 0 \end{cases}$$
Root condition:
(1) $|\xi_j| \le 1$, $j = 1, \cdots, r$.
(2) If $\xi_j$ is a repeated root, then $|\xi_j| < 1$.
Note the root condition is a necessary and sufficient condition for convergence.
Proof. SKIPPED
Chapter 7
Absolute Stability for Ordinary Differential Equations
Consistency
$$\sum\alpha_j = 0 \;\Rightarrow\; \rho(1) = 0 \qquad (\rho'(1) \ne 0 \text{ for stability})$$
$$\sum_j j\alpha_j = \sum\beta_j \;\Rightarrow\; \rho'(1) = \sigma(1)$$
CHAPTER 7. ABSOLUTE STABILITY FOR ORDINARY DIFFERENTIAL
EQUATIONS
Example 7.1.2.
$$u^{n+2} - u^n - 2k\lambda u^{n+1} = 0 \; : \; \text{leapfrog}$$
$$\Pi(\xi, \lambda k) = \xi^2 - 2k\lambda\xi - 1 = 0$$
$$\xi_1 = \lambda k + \sqrt{(\lambda k)^2 + 1} \to 1 \text{ as } k \to 0$$
$$\xi_2 = \lambda k - \sqrt{(\lambda k)^2 + 1} \to -1 \text{ as } k \to 0$$
We see that $\xi_1^n = e^{\lambda t} + O(k^2)$; this is a good approximation to the actual solution. On the
other hand, $\xi_2^n = (-1)^ne^{-\lambda t} + O(k^2)$ is a clear extraneous root. The general solution is given
by:
$$u^n = c_1\xi_1^n + c_2\xi_2^n = c_1e^{\lambda t} + c_2(-1)^ne^{-\lambda t} + O(k^2)$$
Using the initial conditions $u^0 = 1$, $u^1 = e^{\lambda k}$ we get
$$c_1 = 1 + O(k^3), \quad c_2 = O(k^3) \;\Rightarrow\; u^n \to e^{\lambda t} \text{ as } k \to 0, \; n \to \infty, \; nk = t \text{ fixed}$$
Note here for fixed $t = nk$, $\lim_{n\to\infty} u^n = u(t)$ implies convergence. For $k > 0$, if $\text{Re}(\lambda) > 0$, the extraneous solution decays as $n \to \infty$, and vice versa.
This resembles the difference equation from Section 6.5, with the characteristic polynomial
replaced by:
$$\Pi(\xi, z) = \sum_j(\alpha_j - z\beta_j)\xi^j = \rho(\xi) - z\sigma(\xi) \; : \; \text{stability polynomial}$$
Definition 7.3.1. For a given $z$, a LMM is absolutely stable if $\Pi(\xi, z)$ satisfies the root
condition (Definition 6.5.4).
Note a LMM is zero-stable if and only if z = 0 is inside the absolute stability region.
$$\Pi(\xi, z) = \xi - (1 + z) = 0 \;\Rightarrow\; \xi_1 = 1 + z$$
$$|\xi_1| \le 1 \;\Rightarrow\; |1 + z| \le 1$$
The stability region is the unit disk centered at $z = -1$, as shown in Figure 7.1.
(Figure 7.1: the disk $|z + 1| \le 1$ in the complex $z$-plane.)
(Figure 7.2: stability region in the complex $z$-plane; the marked point is $z = 1$ on the real axis.)
$$\Pi(\xi, z) = \left(1 - \frac{z}{2}\right)\xi - \left(1 + \frac{z}{2}\right) = 0 \;\Rightarrow\; \xi_1 = \frac{1 + z/2}{1 - z/2}$$
$$|\xi_1| \le 1 \;\Leftrightarrow\; |1 + z/2| \le |1 - z/2| \;\Leftrightarrow\; \text{Re}(z) \le 0$$
The stability region, shown in Figure 7.3, is the left half-plane. The method is also A-stable.
(Figure 7.3: the left half-plane $\text{Re}(z) \le 0$.)
Example 7.3.5. For the midpoint (leapfrog) method $u^{n+1} = u^{n-1} + 2zu^n$, we have
$$\Pi(\xi, z) = \xi^2 - 2z\xi - 1 = 0 \;\Rightarrow\; \xi_{1,2} = z \pm \sqrt{z^2 + 1}$$
Since $\xi_1\xi_2 = -1$, $|\xi_1| = \frac{1}{|\xi_2|}$. Therefore the method is stable if and only if $|\xi_1| = |\xi_2| = 1$. In
other words,
$$z = \frac{\xi^2 - 1}{2\xi} = \frac{1}{2}\left(\xi - \frac{1}{\xi}\right)$$
Let $\xi = a + ib$ with $|\xi|^2 = a^2 + b^2 = 1$; then $\frac{1}{\xi} = \bar\xi$ and
$$z = \frac{1}{2}(\xi - \bar\xi) = ib = i\alpha \; : \; \text{pure imaginary}$$
with $|\alpha| < 1$ for absolute stability. The stability region is shown in Figure 7.4.
(Figure 7.4: the open segment from $-i$ to $i$ on the imaginary axis.)
$$\Pi(e^{i\theta}, z) = \sum_j(\alpha_j - z\beta_j)e^{ij\theta} = 0 \;\Rightarrow\; z = \frac{\sum_j \alpha_je^{ij\theta}}{\sum_j \beta_je^{ij\theta}}$$
It is worth noting that every point on the boundary must have this form for some $\theta$. However,
it is possible that not every $\theta$ corresponds to a boundary point.
Idea: plot $z(\theta)$ for all $\theta \in [0, 2\pi]$, then identify all $z$ that are potentially on the boundary.
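The idea above can be sketched by sampling $\theta$. A minimal illustration (`boundary_locus` is a hypothetical name), using forward Euler ($\rho(\xi) = \xi - 1$, $\sigma(\xi) = 1$) as the test case:

```python
import cmath

def boundary_locus(alphas, betas, M=8):
    """Evaluate z(theta) = rho(e^{i theta}) / sigma(e^{i theta}) at M angles,
    where rho(xi) = sum_j alpha_j xi^j and sigma(xi) = sum_j beta_j xi^j."""
    zs = []
    for m in range(M):
        theta = 2.0 * cmath.pi * m / M
        xi = cmath.exp(1j * theta)
        rho = sum(a * xi**j for j, a in enumerate(alphas))
        sigma = sum(b * xi**j for j, b in enumerate(betas))
        zs.append(rho / sigma)
    return zs

# forward Euler: rho(xi) = xi - 1, sigma(xi) = 1, so the locus traces |z + 1| = 1
zs = boundary_locus([-1.0, 1.0], [1.0])
```

In practice one would plot the sampled points and then test one $z$ inside each enclosed region against the root condition.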
The locus corresponds to the disk in Figure 7.1. The next step is to determine whether
the inside or the outside of the boundary is stable. Observe that the number of roots $\xi$ with $|\xi| > 1$ does
not change unless we cross the boundary. Thus, we can test the stability region by picking
a point inside and a point outside:
Outside: pick $z = -3$; then $\xi_1 = -2$, which does not satisfy the root condition; thus, unstable.
Thus, we have found the same absolute stability region as derived before.
$$\Pi(\xi, z) = \xi^2 - 2z\xi - 1 = 0 \;\Rightarrow\; z = \frac{\xi^2 - 1}{2\xi} = \frac{e^{i2\theta} - 1}{2e^{i\theta}} = \frac{1}{2}\left(e^{i\theta} - e^{-i\theta}\right) = i\sin(\theta)$$
Therefore, $z$ is pure imaginary with $|z| \le 1$, i.e. $z = i\alpha$, $|\alpha| \le 1$, on the boundary. Off the
locus, pick $z = \frac{3}{4}$: we get $\xi_1 = 2$, $\xi_2 = -\frac{1}{2}$, which does not satisfy the root condition. Thus,
the stability region is just the boundary segment itself.
As shown in Figure 7.5, boundary locus may cross itself. In those situations, to determine
the stability region, evaluate roots of Π(ξ, z) at some convenient z inside each region.
Problem 7.4.1. Use the boundary locus method to find the absolute stability region of (i) the
2-stage RK and (ii) the 2-step BDF method.
(Figure 7.5: a boundary locus that crosses itself, dividing the plane into several regions.)
We note that if the eigenvalues $\lambda_p$ lie very close to the imaginary axis, then the solution oscillates
rapidly with little damping. In this case, a small $k_{acc}$ is chosen to resolve the oscillations. Stiffness may
also occur in a scalar problem.
LMM
So far, we have seen two kinds of LMM:
In fact, all explicit methods have bounded stability regions, while some implicit methods
also have bounded stability regions. Any A-stable LMM is at most second order accurate (Dahlquist's second barrier).
7.7 L-Stability
Recall that the stability polynomial is:
We see that even though both methods are A-stable, for large $|z|$ backward Euler is more
efficient (errors decay faster).
$$\frac{1}{z}\Pi(\xi, z) = \frac{1}{z}\rho(\xi) - \sigma(\xi)$$
As $|z| \to \infty$, the roots of $\Pi(\xi, z)$ approach the roots of $\sigma(\xi)$.
In the examples above, backward Euler method is L-stable while Trapezoid method is
not. L-stable methods are in general very good at solving stiff systems.
In the simplest case r = 1, we get U n+1 = U n +kf (U n+1 ), which corresponds to the backward
Euler method. Other BDF methods are:
Chapter 8
Diffusion Equation and Parabolic Problems
This combines boundary value problem with initial value problems. We will discretize space
and time as shown in Figure 8.1:
(Figure 8.1: the space-time grid, with spatial indices $i = 0, 1, 2, \cdots, m, m+1$ on the $x$-axis and time levels $n = 1, 2, 3, \cdots$ on the $t$-axis.)
I.
$$\frac{u_i^{n+1} - u_i^n}{k} = \frac{u_{i-1}^n - 2u_i^n + u_{i+1}^n}{h^2} \; : \; \text{explicit} \qquad (8.0.1)$$
(forward Euler in time, centered difference in space)
CHAPTER 8. DIFFUSION EQUATION AND PARABOLIC PROBLEMS
II. Crank-Nicolson
$$\frac{u_i^{n+1} - u_i^n}{k} = \frac{1}{2h^2}\left((u_{i-1}^n - 2u_i^n + u_{i+1}^n) + (u_{i-1}^{n+1} - 2u_i^{n+1} + u_{i+1}^{n+1})\right) \qquad (8.0.2)$$
This method is the trapezoid method in time and centered differences in space: implicit.
(Figure 8.2: stencils of (a) the explicit scheme (8.0.1) and (b) Crank-Nicolson.)
$$\tau_i^n = \frac{u(x, t + k) - u(x, t)}{k} - \frac{1}{h^2}\left(u(x - h, t) - 2u(x, t) + u(x + h, t)\right)$$
$$= u_t + \frac{k}{2}u_{tt} + \frac{k^2}{6}u_{ttt} + O(k^3) - \left(u_{xx} + \frac{h^2}{12}u_{xxxx} + O(h^4)\right)$$
$$= \frac{k}{2}u_{tt} - \frac{h^2}{12}u_{xxxx} + \text{h.o.t.} \qquad (u_t = u_{xx})$$
$$= \left(\frac{k}{2} - \frac{h^2}{12}\right)u_{xxxx} + \text{h.o.t.} = O(k + h^2) \qquad (u_{tt} = u_{xxxx})$$
Thus, (8.0.1) is first order accurate in time, second order accurate in space.
Problem 8.1.1. Show Crank-Nicolson method is second order accurate in both space and
time; i.e. τin = O(k 2 + h2 ).
(Figure 8.3: the grid $x_0, x_1, x_2, \cdots, x_m, x_{m+1}$ on the $x$-axis.)
As shown in Figure 8.3, we may solve the system of ODEs along vertical lines. In matrix
form, $U'(t) = AU(t) + g(t)$, where $A$ is the same as before and $g(t)$ includes the boundary conditions:
$$A = \frac{1}{h^2}\begin{pmatrix} -2 & 1 & & & \\ 1 & -2 & 1 & & \\ & \ddots & \ddots & \ddots & \\ & & 1 & -2 & 1 \\ & & & 1 & -2 \end{pmatrix}, \qquad g(t) = \frac{1}{h^2}\begin{pmatrix} g_0(t) \\ 0 \\ \vdots \\ 0 \\ g_1(t) \end{pmatrix}$$
Once we discretize in time, we will get a fully discrete system. The stability of schemes
in (8.0.1) or (8.0.2) can now be analyzed. We expect the method to be stable if kλp ∈ S; i.e.
if the time step $k$ times any eigenvalue $\lambda_p$ lies in the absolute stability region of the ODE
method. We have seen that the eigenvalues of $A$ are given by:
$$\lambda_p = \frac{2}{h^2}(\cos(p\pi h) - 1), \qquad p = 1, 2, \cdots, m$$
Notice that all $\lambda_p$ are real and negative, with $\min|\lambda_p| = |\lambda_1| \approx \pi^2$ and $\max|\lambda_p| = |\lambda_m| \approx \frac{4}{h^2}$.
Example 8.2.1. For method I ((8.0.1)), the stability region for forward Euler's method is
$-2 \le k\lambda \le 0$. Thus, we require:
$$-2 \le -\frac{4k}{h^2} \le 0 \;\Rightarrow\; \frac{k}{h^2} \le \frac{1}{2}$$
Example 8.2.2. For method II ((8.0.2)), Trapezoid method is A-stable. Thus, there is no
restriction on k.
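The restriction $k/h^2 \le 1/2$ is easy to observe experimentally. A minimal sketch of scheme (8.0.1) applied to sawtooth initial data (`heat_explicit` is an illustrative name):

```python
def heat_explicit(u, r, steps):
    """March the explicit scheme (8.0.1) with r = k/h^2 and zero Dirichlet
    boundary values: u_i <- u_i + r*(u_{i-1} - 2 u_i + u_{i+1})."""
    m = len(u)
    for _ in range(steps):
        left = lambda i: u[i - 1] if i > 0 else 0.0
        right = lambda i: u[i + 1] if i < m - 1 else 0.0
        u = [u[i] + r * (left(i) - 2 * u[i] + right(i)) for i in range(m)]
    return u

# sawtooth data excites the highest-frequency mode:
# with r = 0.4 <= 1/2 it decays; with r = 0.6 it blows up
u0 = [(-1.0) ** i for i in range(20)]
stable = heat_explicit(u0, 0.4, 200)
unstable = heat_explicit(u0, 0.6, 200)
```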
Low frequencies decay slowly while high frequencies decay rapidly, so different time scales
require different $p$. In the continuous case, we have infinite stiffness; in the discrete case the
stiffness is finite, but as $h$ decreases, $m$ increases and more frequencies are present, hence the
stiffness increases.
Claim 8.3.1. Method I is stable if and only if $\frac{k}{h^2} = r \le \frac{1}{2}$.
Proof. (⇐=) we assume r ≤ 21 .
un+1
i =uni + r(uni−1 − 2uni + uni+1 )
=runi−1 + (1 − 2r)uni + runi+1 )
Taking the norm on both sides and by triangle inequality:
|un+1
i | ≤|runi−1 | + |(1 − 2r)uni | + |runi+1 )|
≤r||un ||∞ + (1 − 2r)||un ||∞ + r||un ||∞ = ||un ||∞
This holds for all i, thus for the max as well:
\|u^{n+1}\|_\infty \le \|u^n\|_\infty \le \cdots \le \|u^0\|_\infty
(=⇒) Suppose r > \frac{1}{2}; we want to show the solution grows. We let u_i^0 = (-1)^i, so \|u^0\|_\infty = 1. But we see
u_i^1 = r(-1)^{i-1} + (1-2r)(-1)^i + r(-1)^{i+1} = (-1)^{i-1}(r - (1-2r) + r) = (-1)^{i-1}(4r - 1)
|u_i^1| = 4r - 1 > 1 \;\Rightarrow\; \|u^1\|_\infty > \|u^0\|_\infty
Iterating this argument, the solution grows without bound.
To show convergence, we have the following:
Claim 8.3.2. If r \le \frac{1}{2}, then \|e^n\|_\infty \le \underbrace{T}_{=nk}\|\tau\|_\infty.
Corollary 8.3.2. If k, h \to 0 with r = \frac{k}{h^2} fixed and r \le \frac{1}{2}, then the method converges.
Definition 8.3.3. (I) If \|\tau\| \to 0 as k, h \to 0 with r = \frac{k}{h^2} fixed, then we have consistency.
Power boundedness of B
Recall the matrix B for method I is given by:
B = \begin{pmatrix} 1-2r & r & & & \\ r & 1-2r & r & & \\ & \ddots & \ddots & \ddots & \\ & & r & 1-2r & r \\ & & & r & 1-2r \end{pmatrix}
In the 2-norm, we have
\|B\|_2 = \rho(B), \qquad \lambda_p = 1 + 2r(\cos(p\pi h) - 1) = 1 - 4r\sin^2\left(\frac{p\pi h}{2}\right)
If |\lambda_p| \le 1 for all p, then \|B^n\|_2 is bounded:
-1 \le 1 - 4r\sin^2\left(\frac{p\pi h}{2}\right) \le 1 \;\Rightarrow\; r \le \frac{1}{2}
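The bound can be checked by forming B directly; a small sketch (the grid size m = 50 is an arbitrary choice of ours):

```python
import numpy as np

# Spectral radius of B = tridiag(r, 1-2r, r): eigenvalues 1 - 4r sin^2(p*pi*h/2),
# so rho(B) <= 1 (power boundedness in the 2-norm) exactly when r <= 1/2.

def rho_B(r, m=50):
    B = ((1 - 2 * r) * np.eye(m)
         + r * np.eye(m, k=1)
         + r * np.eye(m, k=-1))
    return np.max(np.abs(np.linalg.eigvalsh(B)))  # B is symmetric

print(rho_B(0.5), rho_B(0.6))  # first stays <= 1, second exceeds 1
```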
A very important note here: k and h cannot tend to 0 independently; the ratio r = k/h^2 must be fixed.
Plugging the ansatz u(x, t) = e^{\omega t + i\xi x} into u_t = u_{xx} gives \omega = -\xi^2.
Fourier transform
\hat{f}(\xi) = \frac{1}{2\pi}\int_{-\infty}^{\infty} f(x)e^{-i\xi x}\,dx, \qquad -\infty < \xi < \infty
Parseval's equality
\frac{1}{2\pi}\int_{-\infty}^{\infty} |f(x)|^2\,dx = \int_{-\infty}^{\infty} |\hat{f}(\xi)|^2\,d\xi
Finite-dimensional analogy: expanding f = \hat{f}_1 q_1 + \cdots + \hat{f}_n q_n in an orthonormal basis q_1, \cdots, q_n,
q_1^T f = q_1^T(\hat{f}_1 q_1 + \hat{f}_2 q_2 + \cdots + \hat{f}_n q_n) = \hat{f}_1 \;\Leftrightarrow\; \hat{f}_i = q_i^T f, \qquad \hat{f} = (\hat{f}_1, \cdots, \hat{f}_n)^T
Parseval's equality becomes
\|f\|_2^2 = f^T f = \hat{f}_1^2 + \cdots + \hat{f}_n^2 = \hat{f}^T\hat{f} = \|\hat{f}\|_2^2
Solution formula
Recall the problem
u_t = u_{xx}, \qquad u(x, 0) = f(x)
The Fourier transform yields:
\hat{u}(\xi, t) = \frac{1}{2\pi}\int_{-\infty}^{\infty} u(x, t)e^{-i\xi x}\,dx
Thus, we obtain
\hat{u}_t(\xi, t) = -\xi^2\hat{u}(\xi, t), \qquad \hat{u}(\xi, 0) = \hat{f}(\xi)
The solution is
\hat{u}(\xi, t) = \hat{f}(\xi)e^{-\xi^2 t}
and by the inverse Fourier transform:
u(x, t) = \int_{-\infty}^{\infty} \hat{u}(\xi, t)e^{i\xi x}\,d\xi = \int_{-\infty}^{\infty} \hat{f}(\xi)e^{-\xi^2 t + i\xi x}\,d\xi
Fourier transform
\hat{f}(\xi h) = \frac{h}{2\pi}\sum_{j=-\infty}^{\infty} f_j e^{-i\xi jh}
Inverse transform
f_j = \int_{-\pi/h}^{\pi/h} \hat{f}(\xi h)e^{i\xi jh}\,d\xi
Parseval's equality
\int_{-\pi/h}^{\pi/h} |\hat{f}(\xi h)|^2\,d\xi = \frac{h}{2\pi}\sum_{j=-\infty}^{\infty} |f_j|^2
Proof. Consider u_j^n, the numerical solution evaluated on the grid points j = 0, \pm 1, \pm 2, \cdots.
\hat{u}^n(\xi h) = \frac{h}{2\pi}\sum_{j=-\infty}^{\infty} u_j^n e^{-i\xi jh}
\hat{u}^{n+1}(\xi h) = \frac{h}{2\pi}\sum_{j=-\infty}^{\infty} u_j^{n+1} e^{-i\xi jh} = \frac{h}{2\pi}\sum_{j=-\infty}^{\infty} \big(u_j^n + r(u_{j-1}^n - 2u_j^n + u_{j+1}^n)\big) e^{-i\xi jh}
= g(\xi h)\,\frac{h}{2\pi}\sum_{j=-\infty}^{\infty} u_j^n e^{-i\xi jh} = g(\xi h)\hat{u}^n(\xi h)
where g(\xi h) = 1 + r(e^{-i\xi h} - 2 + e^{i\xi h}) = 1 + 2r(\cos(\xi h) - 1).
Thus, we get
\hat{u}^{n+1}(\xi h) = g(\xi h)\hat{u}^n(\xi h) \;\Rightarrow\; \hat{u}^n(\xi h) = g^n(\xi h)\hat{u}^0(\xi h)
We see
u_j^n = \int_{-\pi/h}^{\pi/h} \hat{u}^n(\xi h)e^{i\xi jh}\,d\xi = \int_{-\pi/h}^{\pi/h} \hat{u}^0(\xi h)g^n(\xi h)e^{i\xi jh}\,d\xi
Since |g(\xi h)| \le 1 when r \le \frac{1}{2}, Parseval's equality gives
\frac{h}{2\pi}\sum_j |u_j^n|^2 = \int_{-\pi/h}^{\pi/h} |\hat{u}^n(\xi h)|^2\,d\xi \le \int_{-\pi/h}^{\pi/h} |\hat{u}^0(\xi h)|^2\,d\xi = \frac{h}{2\pi}\sum_j |u_j^0|^2
Recall that \|u^n\|_2 \le \|u^0\|_2 implies l^2 stability. Hence the proof is complete.
Proof.
\frac{d}{dt}\|u(\cdot, t)\|_2^2 = \frac{d}{dt}\int_0^1 u(x, t)^2\,dx = \int_0^1 2u(x, t)u_t\,dx = \int_0^1 2uu_{xx}\,dx = 2\Big[\underbrace{uu_x\Big|_0^1}_{=0} - \int_0^1 (u_x)^2\,dx\Big] \le 0
Integration by parts
(f, g) = \int_0^1 f(x)g(x)\,dx : inner product
79
CHAPTER 8. DIFFUSION EQUATION AND PARABOLIC PROBLEMS
Proof.
(fg)' = f'g + fg' \;\Rightarrow\; \int_0^1 (fg)'\,dx = \int_0^1 f'g\,dx + \int_0^1 fg'\,dx
where
0 = fg\Big|_0^1 = (f', g) + (f, g') \;\Rightarrow\; (f, g') = -(f', g)
Summation by parts
We define:
(f, g)_h = h\sum_j f_j g_j : discrete inner product
Proof.
D_+(fg)_j = \frac{f_{j+1}g_{j+1} - f_jg_j}{h} = f_{j+1}D_+g_j + (D_+f_j)g_j
Thus, by telescoping, we have
\sum_j D_+(fg)_j = \frac{f_1g_1 - f_0g_0}{h} + \frac{f_2g_2 - f_1g_1}{h} + \cdots + \frac{f_Ng_N - f_{N-1}g_{N-1}}{h} = \frac{-f_0g_0}{h} + \frac{f_Ng_N}{h} = 0
Therefore, we have
0 = h\sum_j f_{j+1}(D_+g_j) + h\sum_j (D_+f_j)g_j = h\sum_j f_j(D_-g_j) + h\sum_j (D_+f_j)g_j = (f, D_-g)_h + (D_+f, g)_h
Claim 8.5.3. If r \le \frac{1}{2}, then \|u^n\|_{2,h} \le \|u^0\|_{2,h}; hence, the method is stable.
Proof. With the shift operator S_+,
D_+ = \frac{S_+ - I}{h}
k^2\|D_+D_-u^n\|_{2,h}^2 = k^2\Big\|\frac{S_+ - I}{h}D_-u^n\Big\|_{2,h}^2 = k^2\Big\|\frac{S_+D_-u^n - D_-u^n}{h}\Big\|_{2,h}^2 \le \frac{k^2}{h^2}\big(\|S_+D_-u^n\| + \|D_-u^n\|\big)^2 = \frac{4k^2}{h^2}\|D_-u^n\|_{2,h}^2
(I) + (II) \le \Big(-2k + \frac{4k^2}{h^2}\Big)\|D_-u^n\|_{2,h}^2 = 4k\Big(r - \frac{1}{2}\Big)\|D_-u^n\|_{2,h}^2 \le 0
kD_+D_-u_j^n = 2r(\cos(\xi h) - 1)u_j^n = -4r\sin^2\Big(\frac{\xi h}{2}\Big)u_j^n
Therefore, (8.6.1) can be rewritten as
\Big(1 + 2r\sin^2\frac{\xi h}{2}\Big)g(\xi h) = 1 - 2r\sin^2\frac{\xi h}{2}
Thus,
|g(\xi h)| = \Bigg|\frac{1 - 2r\sin^2\frac{\xi h}{2}}{1 + 2r\sin^2\frac{\xi h}{2}}\Bigg| \le 1 \quad \text{for all } r > 0
By Von Neumann analysis, \|u^n\|_2 \le \|u^0\|_2 for all r: unconditionally stable.
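The bound is easy to verify numerically; a sketch sampling g over \xi h \in [0, \pi] for several values of r:

```python
import numpy as np

# Crank-Nicolson amplification factor g = (1 - 2r sin^2)/(1 + 2r sin^2):
# |g| <= 1 for every r > 0, matching the unconditional stability claim.

def max_abs_g(r, n=2001):
    s = np.sin(np.linspace(0.0, np.pi, n) / 2.0) ** 2  # sin^2(xi h / 2)
    g = (1 - 2 * r * s) / (1 + 2 * r * s)
    return np.max(np.abs(g))

print([max_abs_g(r) for r in (0.1, 1.0, 10.0, 1000.0)])  # all <= 1
```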
Chapter 9
9.1 Introduction
9.1.1 Advection equation
We consider the following hyperbolic equation:
u_t + au_x = 0, \qquad -\infty < x < \infty
Proof.
\frac{d}{dt}u(x(t), t) = \frac{\partial u}{\partial t} + \frac{\partial u}{\partial x}\frac{dx}{dt} = \frac{\partial u}{\partial t} + a\frac{\partial u}{\partial x} = 0 \quad \because u \text{ is a solution}
\Rightarrow u(x, t) = \text{const along } \frac{dx}{dt} = a \;\Big(\frac{dt}{dx} = \frac{1}{a}\Big)
As illustrated in Figure 9.1, the line x - at = const is called a characteristic. The solution is constant along characteristics.
Corollary 9.1.1.
u(x, t) = u(x − at, 0) = f (x − at) = f (x0 )
Definition 9.1.2. The solution is a traveling wave with speed a. The domain of dependence of u(x, t) is the point x_0. Such an equation is called the linear advection equation.
CHAPTER 9. ADVECTION EQUATIONS AND HYPERBOLIC SYSTEMS
[Figure 9.1: the characteristic x - at = const through (x, t), with slope dt/dx = 1/a, meeting the x-axis at x_0]
It is worth noting that the solution at (x, t) depends on two points \alpha, \beta in the previous example. In general, the domain of dependence will include one such point for each characteristic, as illustrated in Figure 9.2.
[Figure 9.2: characteristics x - \lambda_1 t = const, x - \lambda_2 t = const, \cdots through (x, t), with \lambda_1 < \lambda_2 < \cdots < \lambda_N]
Wave equation
The wave equation is given by:
u_{tt} = a^2u_{xx}
In matrix form:
\begin{pmatrix} u_1 \\ u_2 \end{pmatrix}_t + \begin{pmatrix} 0 & 1 \\ a^2 & 0 \end{pmatrix}\begin{pmatrix} u_1 \\ u_2 \end{pmatrix}_x = \begin{pmatrix} 0 \\ 0 \end{pmatrix} : hyperbolic system
Thus,
u_1(x, 0) = u_x(x, 0) = f'(x) = f_1(x), \qquad u_2(x, 0) = f_2(x)
v = R^{-1}u = \frac{1}{2a}\begin{pmatrix} a & -1 \\ a & 1 \end{pmatrix}\begin{pmatrix} u_1 \\ u_2 \end{pmatrix} = \begin{pmatrix} \frac{1}{2}u_1 - \frac{1}{2a}u_2 \\ \frac{1}{2}u_1 + \frac{1}{2a}u_2 \end{pmatrix}
v(x, 0) = R^{-1}u(x, 0) = \begin{pmatrix} \frac{1}{2}f_1 - \frac{1}{2a}f_2 \\ \frac{1}{2}f_1 + \frac{1}{2a}f_2 \end{pmatrix}
Thus, we have
v_1(x, t) = \frac{1}{2}f_1(x + at) - \frac{1}{2a}f_2(x + at), \qquad v_2(x, t) = \frac{1}{2}f_1(x - at) + \frac{1}{2a}f_2(x - at)
And
u_x = u_1 = \frac{1}{2}[f'(x + at) + f'(x - at)] + \frac{1}{2a}[g(x + at) - g(x - at)]
u(x, t) = \int^x u_1(s, t)\,ds = \frac{1}{2}[f(x + at) + f(x - at)] + \frac{1}{2a}\int_{x-at}^{x+at} g(s)\,ds : d'Alembert's formula
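The formula can be spot-checked numerically. With f(x) = sin(x) and g = 0 it reduces to the standing wave sin(x) cos(at); the sketch below (function names are ours) evaluates the integral with the trapezoidal rule:

```python
import numpy as np

# d'Alembert: u(x,t) = (f(x+at) + f(x-at))/2 + (1/2a) * int_{x-at}^{x+at} g(s) ds

def dalembert(f, g, x, t, a, n=2000):
    s = np.linspace(x - a * t, x + a * t, n)
    gs = g(s)
    integral = np.sum((gs[1:] + gs[:-1]) * np.diff(s)) / 2  # trapezoidal rule
    return 0.5 * (f(x + a * t) + f(x - a * t)) + integral / (2 * a)

# f(x) = sin(x), g = 0  ->  u(x, t) = sin(x) cos(at)
u_num = dalembert(np.sin, lambda s: 0.0 * s, x=0.7, t=0.3, a=2.0)
print(u_num, np.sin(0.7) * np.cos(2.0 * 0.3))
```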
Example 9.1.4. [Gas dynamics] Let \rho be density, v be velocity, and p be pressure. The equations are given by:
\rho_t + (\rho v)_x = 0
(\rho v)_t + (\rho v^2 + p(\rho))_x = 0, \qquad p = p(\rho) : equation of state
\Rightarrow
\rho_t + \rho v_x + v\rho_x = 0, \qquad v_t + vv_x + \frac{p'(\rho)}{\rho}\rho_x = 0
In matrix form:
\begin{pmatrix} \rho \\ v \end{pmatrix}_t + \begin{pmatrix} v & \rho \\ \frac{p'(\rho)}{\rho} & v \end{pmatrix}\begin{pmatrix} \rho \\ v \end{pmatrix}_x = 0
The eigenvalues are \lambda = v \pm \sqrt{p'(\rho)}, where p'(\rho) = c^2 and c is the speed of sound.
[Figure 9.3: Stencil for centered difference, downwind, and upwind schemes]
\frac{u_j^{n+1} - u_j^n}{k} + a\frac{u_{j+1}^n - u_{j-1}^n}{2h} = 0
In a different form:
u_j^{n+1} = u_j^n - \frac{1}{2}\underbrace{\frac{ak}{h}}_{\nu}(u_{j+1}^n - u_{j-1}^n)
Here we define
\nu = \frac{ak}{h} : CFL number (nondimensional)
Accuracy
We can check that the method is first order accurate in time and second order in space, i.e.
τ n = O(k + h2 ).
Stability
Assume bounded domain 0 ≤ x ≤ 1 and periodic boundary conditions u(0, t) = u(1, t). We
can use method of lines to check the stability of this method.
u_j'(t) = \frac{-a}{2h}(u_{j+1}(t) - u_{j-1}(t)), \qquad j = 1, 2, \cdots, m-1
Since we allow periodic boundary conditions:
u_0'(t) = \frac{-a}{2h}(u_1(t) - u_m(t)), \qquad u_m'(t) = \frac{-a}{2h}(u_0(t) - u_{m-1}(t))
In matrix form:
u'(t) = \frac{-a}{2h}\begin{pmatrix} 0 & 1 & & & -1 \\ -1 & 0 & 1 & & \\ & \ddots & \ddots & \ddots & \\ & & -1 & 0 & 1 \\ 1 & & & -1 & 0 \end{pmatrix}u(t)
The matrix is skew-symmetric; hence, all eigenvalues are purely imaginary. For absolute stability of the time discretization, we need the stability region to include part of the imaginary axis. For forward Euler, as used in this scheme, the stability region does not include any part of the imaginary axis. Thus, this method is unconditionally unstable for fixed \nu.
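A quick experiment confirms the growth; the sketch below (our own setup) advances a smooth periodic profile with \nu = 0.5 and watches the max norm creep upward:

```python
import numpy as np

# Forward Euler + centered differences for u_t + a u_x = 0 (periodic grid).
# The amplification factor g = 1 - i*nu*sin(xi h) has |g|^2 = 1 + nu^2 sin^2(xi h)
# > 1, so every mode grows: unstable for any fixed nu, as argued above.

m = 100
x = np.linspace(0.0, 1.0, m, endpoint=False)
u = np.sin(2 * np.pi * x)
nu = 0.5
norm0 = np.max(np.abs(u))
for _ in range(500):
    u = u - 0.5 * nu * (np.roll(u, -1) - np.roll(u, 1))  # centered in space
norm_end = np.max(np.abs(u))
print(norm0, norm_end)  # the norm has grown even though nu < 1
```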
\frac{u_j^{n+1} - u_j^n}{k} + a\frac{u_{j+1}^n - u_j^n}{h} = 0
Or
u_j^{n+1} = u_j^n - \nu(u_{j+1}^n - u_j^n) = (1 + \nu)u_j^n - \nu u_{j+1}^n
Taking u_j^n = (-1)^j, we have
|u_j^{n+1}| = |(1 + \nu)(-1)^j - \nu(-1)^{j+1}| = 1 + 2\nu > 1 \quad \because \nu > 0
[Figure 9.4: the characteristic x - at = const through (x, t) meets the x-axis at \alpha, to the left of the stencil points x_j, x_{j+1}, \cdots, x_{j+n}]
Here we see the domain of dependence of u(x, t) is \alpha (see Figure 9.4). Meanwhile, the numerical domain of dependence is (x_j, x_{j+1}, \cdots, x_{j+n}). In other words, the numerical domain of dependence does not include the one point that matters, \alpha.
\frac{u_j^{n+1} - u_j^n}{k} + a\frac{u_j^n - u_{j-1}^n}{h} = 0
Or
u_j^{n+1} = u_j^n - \nu(u_j^n - u_{j-1}^n) = (1 - \nu)u_j^n + \nu u_{j-1}^n
Proof. (⇐=)
u_j^{n+1} = (1 - \nu)u_j^n + \nu u_{j-1}^n
Taking absolute values on both sides (note 0 \le \nu \le 1):
|u_j^{n+1}| \le |(1 - \nu)u_j^n| + |\nu u_{j-1}^n| = (1 - \nu)|u_j^n| + \nu|u_{j-1}^n| \le (1 - \nu)\|u^n\|_\infty + \nu\|u^n\|_\infty = \|u^n\|_\infty
(=⇒) Let \nu > 1; we will show the scheme is no longer stable. Take u_j^n = (-1)^j, \|u^n\|_\infty = 1. Then
|u_j^{n+1}| = |(1 - \nu)(-1)^j + \nu(-1)^{j-1}| = (2\nu - 1)\|u^n\|_\infty > 1
[Figures 9.5a, 9.5b: the characteristic x - at = const versus the stencil points x_{j-n}, \cdots, x_j, for \nu > 1 and \nu < 1]
Figures 9.5a and 9.5b show that the numerical domain of dependence contains the analytic domain of dependence if and only if \nu \le 1, which suggests the following CFL condition as a requirement for stability of a numerical scheme.
CFL condition
The numerical domain of dependence must contain the analytic domain of dependence. This is a necessary, yet not sufficient, condition for stability. Note the centered difference scheme does satisfy the CFL condition, but it is unconditionally unstable.
Convergence
Here we assume 0 \le \nu \le 1.
u_j^{n+1} = (1 - \nu)u_j^n + \nu u_{j-1}^n \qquad (9.2.1)
e_j^{n+1} = (1 - \nu)e_j^n + \nu e_{j-1}^n - k\tau_j^n
|e_j^{n+1}| \le (1 - \nu)|e_j^n| + \nu|e_{j-1}^n| + k|\tau_j^n| \le (1 - \nu)\|e^n\|_\infty + \nu\|e^n\|_\infty + k\|\tau^n\|_\infty = \|e^n\|_\infty + k\|\tau\|_\infty
Thus:
\|e^n\|_\infty \le \|e^{n-1}\|_\infty + k\|\tau\|_\infty \le \cdots \le \|e^0\|_\infty + nk\|\tau\|_\infty
If \nu = 1,
u_j^{n+1} = u_{j-1}^n \;\Rightarrow\; u_j^n = u_{j-n}^0 = f(x_j - nh) = f(x_j - at_n)
The scheme is exact! If 0 < \nu < 1, then the scheme interpolates between u_{j-1}^n and u_j^n (linear interpolation).
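Both facts are easy to demonstrate; a sketch on a periodic grid (our own setup):

```python
import numpy as np

# Upwind scheme u^{n+1}_j = (1 - nu) u^n_j + nu u^n_{j-1} on a periodic grid.
# nu = 1: exact shift of one cell per step.  0 <= nu <= 1: max norm non-increasing.

def upwind_step(u, nu):
    return (1 - nu) * u + nu * np.roll(u, 1)

m = 200
x = np.linspace(0.0, 1.0, m, endpoint=False)
u0 = np.exp(-100 * (x - 0.5) ** 2)

u = u0.copy()
for _ in range(50):
    u = upwind_step(u, 1.0)
exact = np.roll(u0, 50)        # profile advected exactly 50 cells

v = u0.copy()
for _ in range(200):
    v = upwind_step(v, 0.5)
print(np.max(np.abs(u - exact)), np.max(np.abs(v)) <= np.max(np.abs(u0)))
```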
Note
x + 1 = \cos(2\pi ph), \qquad y = \nu\sin(2\pi ph) \;\Rightarrow\; \frac{y}{\nu} = \sin(2\pi ph)
These equations satisfy the ellipse
(x + 1)^2 + \Big(\frac{y}{\nu}\Big)^2 = 1
centered at (−1, 0) with major/minor semi-axes 1, ν. For stability, we require the ellipse to
be inside the stability region of forward Euler. As shown in Figure 9.7, we need |ν| ≤ 1.
Figure 9.7: Stability region of forward Euler in cyan; the ellipse enclosed by the red curve is the region of k\lambda_p of LxF.
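The containment can be checked numerically; a sketch (our own) samples the ellipse of k\lambda_p values and tests the forward Euler condition |1 + k\lambda| \le 1:

```python
import numpy as np

# Points k*lambda = (cos(theta) - 1) + i*nu*sin(theta) lie on the ellipse
# (x+1)^2 + (y/nu)^2 = 1.  Forward Euler requires |1 + k*lambda| <= 1, which
# holds on the whole ellipse exactly when |nu| <= 1.

def max_growth_factor(nu, n=4001):
    theta = np.linspace(0.0, 2 * np.pi, n)
    k_lam = (np.cos(theta) - 1) + 1j * nu * np.sin(theta)
    return np.max(np.abs(1 + k_lam))

print(max_growth_factor(0.8), max_growth_factor(1.0), max_growth_factor(1.2))
```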
Here we reintroduce Von Neumann analysis. We look for solutions of the form u(x, t) = e^{\omega t + i\xi x}. Plugging it into the equation gives \omega = -ia\xi. Our solution is
u(x, t) = e^{i\xi(x - at)} : no growth/decay in amplitude
\hat{u}_t(\xi, t) = \frac{1}{2\pi}\int_{-\infty}^{\infty} u_t(x, t)e^{-i\xi x}\,dx = \frac{-1}{2\pi}\int_{-\infty}^{\infty} au_x(x, t)e^{-i\xi x}\,dx = \frac{-a}{2\pi}\Big(ue^{-i\xi x}\Big|_{-\infty}^{\infty} + i\xi\int_{-\infty}^{\infty} u(x, t)e^{-i\xi x}\,dx\Big) = -i\xi a\hat{u}(\xi, t)
Thus, we have
\hat{u}_t(\xi, t) = -i\xi a\hat{u}(\xi, t), \qquad \hat{u}(\xi, 0) = \hat{f}(\xi) \;\Rightarrow\; \hat{u}(\xi, t) = \hat{f}(\xi)e^{-i\xi at}
Thus
u(x, t) = \int_{-\infty}^{\infty} \hat{u}(\xi, t)e^{i\xi x}\,d\xi = f(x - at)
\|u(\cdot, t)\|_2 = \|u(\cdot, 0)\|_2 : stability
[Figure: locus of the amplification factor g(\xi h) for (a) 0 < \nu < \frac{1}{2}, (b) \frac{1}{2} \le \nu \le 1, (c) \nu > 1]
9.4.2 Accuracy
The LTE is given by \tau = O(k^2 + h^2): second order accurate in both space and time. We note that the scheme is exact for \nu = -1, 0, 1 as shown in Figure 9.10.
[Figure 9.10: (a) \nu = 1: u_j^{n+1} = u_{j-1}^n; (b) \nu = -1: u_j^{n+1} = u_{j+1}^n; (c) \nu = 0: u_j^{n+1} = u_j^n]
9.4.3 Stability
We may view LW as a forward Euler discretization of u'(t) = Au(t), where
A = \underbrace{\frac{-a}{2h}\begin{pmatrix} 0 & 1 & & & -1 \\ -1 & 0 & 1 & & \\ & \ddots & \ddots & \ddots & \\ & & -1 & 0 & 1 \\ 1 & & & -1 & 0 \end{pmatrix}}_{\text{imaginary e-values}} + \underbrace{\frac{a^2k}{2h^2}\begin{pmatrix} -2 & 1 & & & 1 \\ 1 & -2 & 1 & & \\ & \ddots & \ddots & \ddots & \\ & & 1 & -2 & 1 \\ 1 & & & 1 & -2 \end{pmatrix}}_{\text{shift into left half plane}}
For stability:
g_{up} = 1 - \nu + \nu e^{-i\xi h} = 1 - \nu(1 - \cos(\xi h)) - i\nu\sin(\xi h)
We see
g_{up} = |g|e^{i\theta}, \qquad \theta : phase
\theta = \tan^{-1}\frac{\Im g}{\Re g} = \tan^{-1}\frac{-\nu\sin(\xi h)}{1 - \nu(1 - \cos(\xi h))} = \tan^{-1}\Big(-\frac{\nu\xi h + O((\xi h)^3)}{1 + O((\xi h)^2)}\Big) = \tan^{-1}(-\nu\xi h + O((\xi h)^3)) \approx -\nu\xi h + O((\xi h)^3)
exact:
\frac{\omega k}{-i\xi k} = \frac{-i\xi ak}{-i\xi k} = a : phase speed
numerical:
\frac{i\theta}{-i\xi k} = a + O((\xi h)^2)
We note that long waves propagate at the right speed; short waves propagate at the wrong speed (or even in the wrong direction).
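The phase error is visible numerically; a sketch (parameter choices are ours):

```python
import numpy as np

# Numerical phase speed of the upwind scheme: theta/(-xi*k) with
# theta = arg(g_up).  Close to a for long waves (small xi*h), off for short ones.

def upwind_phase_speed(xi_h, nu=0.8, a=1.0, h=0.01):
    k = nu * h / a
    g = 1 - nu * (1 - np.cos(xi_h)) - 1j * nu * np.sin(xi_h)
    theta = np.angle(g)
    xi = xi_h / h
    return theta / (-xi * k)

print(upwind_phase_speed(0.01), upwind_phase_speed(2.5))  # ~a vs. noticeably off
```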
Evaluating the interpolant p at the foot of the characteristic:
u_j^{n+1} = p(x_j - ak) = u_j^n - \frac{ak}{h}(u_j^n - u_{j-1}^n) = u_j^n - \nu(u_j^n - u_{j-1}^n)
This is the upwind scheme.
Problem 9.6.1. Show that quadratic interpolation gives the Lax-Wendroff scheme. Identify
special values of the CFL number ν for which the interpolation error vanishes (i.e. the
numerical solution becomes exact).
Problem 9.6.2. The 2nd order upwind scheme for ut + aux = 0 for a > 0 has the form
u_j^{n+1} = c_{-2}u_{j-2}^n + c_{-1}u_{j-1}^n + c_0u_j^n
where the coefficients ck are in general polynomials of the CFL number ν (here polynomials
of degree 2). Identify the values of ν for which the scheme becomes exact, and use these
special values of ν to determine ck . Write the resulting scheme. This scheme is called the
Beam-Warming scheme.
\frac{u_j^{n+1} - u_j^n}{k} + a\frac{u_j^n - u_{j-1}^n}{h} = 0 : upwind scheme
We see that if this scheme is used to approximate u_t + au_x = 0, then u_j^n - u(x, t) = O(h), or \tau = O(h). On the other hand, if we use this to approximate
u_t + au_x = \frac{h}{2}a(1 - \nu)u_{xx} \qquad (9.7.1)
then \tau = O(h^2) and u_j^n - u(x, t) = O(h^2). In other words, the upwind scheme solves u_t + au_x = 0 only to first order accuracy while it solves (9.7.1) to second order accuracy. We call (9.7.1) the modified equation, where \frac{h}{2}a(1 - \nu)u_{xx} is an artificial viscosity (diffusion) term.
Definition 9.7.1. The modified equation of a hyperbolic equation is of the following form:
u_t + au_x = \varepsilon u_{xx}
\theta = \tan^{-1}\frac{\Im g}{\Re g} = \tan^{-1}\frac{-\nu\sin(\xi h)}{1 - 2\nu^2\sin^2(\frac{\xi h}{2})} = -\nu\xi h\Big(1 - \frac{1}{6}(1 - \nu^2)(\xi h)^2 + O((\xi h)^3)\Big)
\frac{i\theta}{-i\xi k} = a\Big(1 - \frac{1}{6}(1 - \nu^2)(\xi h)^2 + O((\xi h)^3)\Big)
When |ν| < 1, we have a lagging phase as illustrated in Figure 9.12.
Remark 9.7.3. Unfortunately, all linear schemes that preserve monotonicity are at most
first order accurate.
u_t + Au_x = 0, \qquad u = (u_1, \cdots, u_N)^T
We assume A has real eigenvalues and a complete set of eigenvectors. It has the spectral factorization A = R\Lambda R^{-1}, where
\Lambda = \text{diag}(\lambda_1, \cdots, \lambda_N), \qquad R = (r_1 | \cdots | r_N)
Example 9.8.1. Suppose we have a 4 \times 4 system with \lambda_1 < \lambda_2 < \lambda_3 < \lambda_4. Then, the domain of dependence of u(x, t) is \{\alpha_1, \alpha_2, \alpha_3, \alpha_4\} as shown in Figure 9.13. We need a method that works for both positive and negative eigenvalues.
[Figure 9.13: characteristics x - \lambda_p t = const through (x, t), meeting the x-axis at \alpha_4 < \alpha_3 < \alpha_2 < \alpha_1]
u_j^{n+1} = \frac{1}{2}(u_{j+1}^n + u_{j-1}^n) - \frac{k}{2h}A(u_{j+1}^n - u_{j-1}^n)
We can also generalize the CFL condition to hyperbolic systems:
Definition 9.9.1. The CFL numbers for a hyperbolic system are \nu_p = \frac{\lambda_pk}{h}. The CFL condition for a hyperbolic system requires:
\max_p|\nu_p| \le 1
9.9.1 Lax-Wendroff
We examine the system u_t = -Au_x. If we take the time derivative on both sides, we get:
u_{tt} = -A(u_t)_x = A^2u_{xx}
Stability in l^2 norm
We generalize the idea of the amplification factor as follows: let u_j^n = G^n(\xi h)e^{i\xi jh}u^0. Then
G(\xi h) = I - \frac{1}{2}\frac{k}{h}A(e^{i\xi h} - e^{-i\xi h}) + \frac{1}{2}\frac{k^2}{h^2}A^2(e^{i\xi h} - 2 + e^{-i\xi h})
= I - i\frac{k}{h}A\sin(\xi h) + \frac{k^2}{h^2}A^2(\cos(\xi h) - 1) : amplification matrix
In the 2-norm, we have
\|u^n\|_2 \le \max_{|\xi h| \le \pi}\|G^n(\xi h)\|_2\,\|u^0\|_2
Therefore, the scheme is stable if and only if the amplification matrix is power bounded. Recalling A = R\Lambda R^{-1}, we have
G^n(\xi h) = R\underbrace{\Big(I - i\frac{k}{h}\Lambda\sin(\xi h) + \frac{k^2}{h^2}\Lambda^2(\cos(\xi h) - 1)\Big)^n}_{\text{diagonal, each entry} \le 1 \text{ in modulus if the CFL condition is satisfied}}R^{-1}
Thus, the LW scheme is stable in the l^2 norm if the CFL condition is satisfied. Note it is not stable in the infinity norm.
9.9.2 Upwind
In the 1D case u_t + au_x = 0, we use
u_j^{n+1} = u_j^n - \nu(u_j^n - u_{j-1}^n) \quad (a > 0), \qquad u_j^{n+1} = u_j^n - \nu(u_{j+1}^n - u_j^n) \quad (a < 0)
Similarly, we have
A^\pm = R\Lambda^\pm R^{-1}
and
u_j^{n+1} = u_j^n - \frac{k}{h}A^-(u_{j+1}^n - u_j^n) - \frac{k}{h}A^+(u_j^n - u_{j-1}^n)
For stability, we require |νp | ≤ 1.
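A small sketch of the splitting, using the 2×2 wave-equation system from earlier in the chapter as the test matrix (the assumption here is the usual one: \Lambda^+ keeps the nonnegative eigenvalues and \Lambda^- the nonpositive ones):

```python
import numpy as np

# Flux splitting A = A+ + A- with A+- = R Lambda+- R^{-1}, where Lambda+
# keeps the nonnegative eigenvalues and Lambda- the nonpositive ones.

a = 2.0
A = np.array([[0.0, 1.0], [a**2, 0.0]])   # eigenvalues +a, -a

lam, R = np.linalg.eig(A)
lam, R = lam.real, R.real                 # hyperbolic: real eigenvalues
Rinv = np.linalg.inv(R)
Aplus = R @ np.diag(np.maximum(lam, 0.0)) @ Rinv
Aminus = R @ np.diag(np.minimum(lam, 0.0)) @ Rinv

print(np.allclose(Aplus + Aminus, A))
# upwind update: u_j - (k/h) * (Aminus @ (u_{j+1} - u_j) + Aplus @ (u_j - u_{j-1}))
```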