
Numerical Methods for Hyperbolic Conservation Laws (AM257)

by Chi-Wang Shu, Semester I 2006, Brown. Send corrections to kloeckner@dam.brown.edu. Any mistakes or omissions in these notes are certainly due to my typing.

Table of contents

1 Theory of One-Dimensional Scalar Conservation Laws
2 Numerics
  2.1 Examples of conservative schemes
    2.1.1 The Godunov Scheme
    2.1.2 The Lax-Friedrichs Scheme
    2.1.3 The local Lax-Friedrichs Scheme
    2.1.4 Roe Scheme
    2.1.5 Engquist-Osher Scheme
    2.1.6 Lax-Wendroff Scheme
    2.1.7 MacCormack Scheme
  2.2 Higher-order TVD Schemes
    2.2.1 General Framework of a Conservative Finite-Volume Scheme
    2.2.2 Generalized MUSCL Scheme
  2.3 Essentially Non-Oscillatory Schemes
  2.4 Weighted ENO Schemes
  2.5 Finite Difference Methods
    2.5.1 Accuracy
    2.5.2 Stability
3 Two Space Dimensions
  3.1 FV methods in 2D
    3.1.1 The Linear Case
    3.1.2 The Nonlinear Case
  3.2 Finite Difference Methods
4 Systems of Conservation Laws
  4.1 A First Attempt: Generalize Methods from AM255
  4.2 How to Generalize Scalar Higher-Order Schemes to Systems
  4.3 The Nonlinear Case
5 Discontinuous Galerkin method
  5.1 Some Theoretical Properties of the Scheme

1 Theory of One-Dimensional Scalar Conservation Laws


We study the scalar conservation law
\[ u_t + f(u)_x = 0, \qquad u(x,0) = u_0(x), \tag{1} \]
where $u$ is a function of $x$ and $t$. Integrating over an interval $[a,b]$ gives
\[ \frac{d}{dt}\int_a^b u(x,t)\,dx = f(u(a,t)) - f(u(b,t)), \tag{2} \]
which is the integral form of (1).


Characteristics: Define a curve $x(t)$ by
\[ \frac{dx(t)}{dt} = f'(u(x(t),t)), \qquad x(0) = x_0. \]
Then
\[ \frac{d}{dt}u(x(t),t) = u_x\,x'(t) + u_t = u_x f'(u(x(t),t)) + u_t = f(u)_x + u_t = 0. \]
So $u(x(t),t) = u(x(0),0) = u_0(x_0)$: the solution is constant along characteristics, which are therefore straight lines. All that holds under the assumption that we have a smooth solution. Which we don't. :( Consider Burgers' equation:
\[ u_t + \left(\frac{u^2}{2}\right)_x = 0, \qquad u(x,0) = \sin(x). \tag{3} \]

Consider the characteristics starting at $\pi/2$ and $3\pi/2$. They intersect and carry different values, so the above theory breaks down: there is no global (in $x$ and $t$) smooth solution of (3). The concept of weak solution helps us out now. Reconsider the integral form:
\[ \frac{d}{dt}\int_a^b u(x,t)\,dx = f(u(a,t)) - f(u(b,t)). \tag{4} \]
For $C^1$ solutions, (1) $\Leftrightarrow$ (4). Attempts at defining weak solutions:

If $u$ satisfies (4) for almost all $(a,b)$, then $u$ is called a weak solution of (1). (physically meaningful, correct)
If for any $\varphi \in C_0^1(\mathbb{R} \times [0,\infty))$,
\[ \int_0^\infty\!\!\int_{-\infty}^{\infty} \big( u\,\varphi_t + f(u)\,\varphi_x \big)\,dx\,dt + \int_{-\infty}^{\infty} u_0(x)\,\varphi(x,0)\,dx = 0, \]
then $u$ is called a weak solution of (1). (more meaningful mathematically: motivated by multiplying by a test function and integrating by parts)

It turns out the two definitions are equivalent. (Not proven here.) Now assume a solution that has two $C^1$ pieces separated by a curve $x = x(t)$ on which no regularity is demanded of $u$. Then, for $a < x(t) < b$,
\begin{align*}
0 &= \frac{d}{dt}\int_a^b u\,dx + f(u(b,t)) - f(u(a,t)) \\
&= \frac{d}{dt}\int_a^{x(t)} u\,dx + \frac{d}{dt}\int_{x(t)}^b u\,dx + f(u(b,t)) - f(u(a,t)) \\
&= u(x(t)^-,t)\,x'(t) + \int_a^{x(t)} u_t\,dx - u(x(t)^+,t)\,x'(t) + \int_{x(t)}^b u_t\,dx + f(u(b,t)) - f(u(a,t)) \\
&= u(x(t)^-,t)\,x'(t) - \int_a^{x(t)} f(u)_x\,dx - u(x(t)^+,t)\,x'(t) - \int_{x(t)}^b f(u)_x\,dx + f(u(b,t)) - f(u(a,t)) \\
&= u(x(t)^-,t)\,x'(t) - f(u(x(t)^-,t)) - u(x(t)^+,t)\,x'(t) + f(u(x(t)^+,t)),
\end{align*}
where the $f(u(a,t))$ and $f(u(b,t))$ terms have cancelled. Now use the shorthand $u^\pm = u(x(t)^\pm,t)$ and write
\[ 0 = f(u^+) - f(u^-) - x'(t)\,(u^+ - u^-). \]
Now distinguish two cases:

$u^- = u^+$: This is fine; the relation holds trivially.

$u^- \ne u^+$: We get the Rankine-Hugoniot jump condition:
\[ x'(t) = \frac{f(u^+) - f(u^-)}{u^+ - u^-}. \]

If $u$ is piecewise $C^1$ and is discontinuous only along isolated curves, and if $u$ satisfies the PDE wherever it is $C^1$ and the Rankine-Hugoniot (RH) condition along all discontinuity curves, then $u$ is a weak solution of (1).

Example 1. Consider the following Riemann problem:
\[ u_t + \left(\frac{u^2}{2}\right)_x = 0, \qquad u(x,0) = \begin{cases} 1 & x < 0, \\ -1 & x > 0. \end{cases} \]
The IC is just propagated in time (the RH speed is $s = \frac{1}{2}(1 + (-1)) = 0$) to form a weak solution. (a shock)

Example 2. Now flip the initial conditions:
\[ u_t + \left(\frac{u^2}{2}\right)_x = 0, \qquad u(x,0) = \begin{cases} -1 & x < 0, \\ 1 & x > 0. \end{cases} \]
The propagated ICs also form a weak solution. But consider
\[ u(x,t) = \begin{cases} -1 & x \le -t, \\ x/t & -t < x < t, \\ 1 & x \ge t. \end{cases} \]
This is also a weak solution. (a rarefaction wave) Oops: weak solutions are not unique. So we need a third category of solutions, called entropy solutions, for which neither uniqueness nor existence poses a big problem. Consider adding an artificial viscosity:
\[ u^\varepsilon_t + f(u^\varepsilon)_x = \varepsilon\, u^\varepsilon_{xx} \]
with a very small $0 < \varepsilon \ll 1$. Then we would wish to define the entropy solution as
\[ u(x,t) = \lim_{\varepsilon \to 0} u^\varepsilon(x,t) \]
in some norm. In fact, this limit is the entropy solution. Call a function $U(u)$ an entropy function if $U''(u) \ge 0$, i.e. if it is convex. Then multiply the conservation law with viscosity by $U'(u^\varepsilon)$:

\begin{align*}
U'(u^\varepsilon)\big(u^\varepsilon_t + f(u^\varepsilon)_x\big) &= \varepsilon\, U'(u^\varepsilon)\, u^\varepsilon_{xx} \\
U(u^\varepsilon)_t + F(u^\varepsilon)_x &= \varepsilon \big(U'(u^\varepsilon) u^\varepsilon_x\big)_x - \varepsilon\, U''(u^\varepsilon)\,(u^\varepsilon_x)^2 \le \varepsilon \big(U'(u^\varepsilon) u^\varepsilon_x\big)_x,
\end{align*}
where
\[ F(u) = \int^u U'(\xi) f'(\xi)\,d\xi, \qquad \text{i.e. } F'(u) = U'(u) f'(u). \]

To support our argument as $\varepsilon \to 0$, once again take a test function $\varphi \in C_0^2(\mathbb{R} \times \mathbb{R}^+)$, $\varphi \ge 0$:
\[ -\int\!\!\int \big( U(u^\varepsilon)\varphi_t + F(u^\varepsilon)\varphi_x \big)\,dx\,dt \le \varepsilon \int\!\!\int \big(U'(u^\varepsilon) u^\varepsilon_x\big)_x \varphi\,dx\,dt = \varepsilon \int\!\!\int U(u^\varepsilon)\,\varphi_{xx}\,dx\,dt \to 0, \]
using $(U'(u^\varepsilon) u^\varepsilon_x) = U(u^\varepsilon)_x$ and integrating by parts twice. DCT allows taking the limit. We get the entropy inequality
\[ \int_0^\infty\!\!\int \big( U(u)\varphi_t + F(u)\varphi_x \big)\,dx\,dt \ge 0 \quad \text{for all } \varphi \ge 0, \]
i.e. $U(u)_t + F(u)_x \le 0$ in the sense of distributions.

Homework #1:

On the domain $[0, 2\pi]$, with periodic BCs, consider
\[ u_t + \left(\frac{u^2}{2}\right)_x = 0, \qquad u(x,0) = 1 + \frac{1}{2}\sin x. \]

Find the maximum $T$ such that $u(\cdot,t) \in C^1$ for $t < T$.

Write a code to solve for $u$ when $t < T$. (Hint: look for an equation implicitly defining $u$; maybe use Newton's method.) Test the code for $(x,t) = (0.1, 0.1)$, $(1, 0.08)$, $(\pi, 0.09)$.

Definition 3. A conservation law is called genuinely nonlinear iff $f''(u) \ne 0$. If $f''(u) > 0$ it is called convex, if $f''(u) < 0$ it is called concave.
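For the homework's hint, here is a minimal sketch. Along characteristics the smooth solution satisfies the implicit relation $u = u_0(x - u\,t)$, which Newton's method solves quickly before shock formation. (The IC $1 + \frac{1}{2}\sin x$ is the reconstructed homework data; the function names are mine.)

```python
import math

def u0(x):
    # initial condition 1 + (1/2) sin x; shock time T = 1/max(-u0') = 2
    return 1.0 + 0.5 * math.sin(x)

def u0p(x):
    return 0.5 * math.cos(x)

def solve_characteristic(x, t, tol=1e-12, maxit=50):
    """Solve u = u0(x - u*t) for Burgers' equation by Newton's method
    (valid for t below the shock time)."""
    u = u0(x)                        # initial guess: ignore the shift
    for _ in range(maxit):
        g = u - u0(x - u * t)        # residual of the characteristic relation
        gp = 1.0 + t * u0p(x - u * t)
        du = g / gp
        u -= du
        if abs(du) < tol:
            break
    return u

u = solve_characteristic(1.0, 0.08)
```

At $t = 0$ the solver must reproduce the initial data exactly, which makes a convenient sanity check.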

Shocks must appear for genuinely nonlinear conservation laws under periodic or compactly supported initial conditions.

Consider a box $[a,b] \times [c,d]$ containing the support of a test function $\varphi \in C_c^\infty(\mathbb{R} \times \mathbb{R}^+)$, and let $u(x,t)$ be piecewise $C^1$ with one discontinuity along the curve $(t, x(t))$.

Figure 1. The box $[a,b] \times [c,d]$ cut by the shock curve $x = x(t)$; the vector $(U, F)^T$ is integrated over the boundary pieces on either side.

Then consider
\[ 0 \le \int_c^d\!\!\int_a^b \big( U(u)\varphi_t + F(u)\varphi_x \big)\,dx\,dt = \int_c^d\!\!\int_a^{x(t)} (\cdots)\,dx\,dt + \int_c^d\!\!\int_{x(t)}^b (\cdots)\,dx\,dt. \]
Since $u$ satisfies $U(u)_t + F(u)_x = 0$ classically on each side, Green's theorem leaves only the boundary terms along the shock:
\[ 0 \le \int \varphi\,\frac{x'(t)\big(U(u^+) - U(u^-)\big) - \big(F(u^+) - F(u^-)\big)}{\sqrt{1 + x'(t)^2}}\,ds. \]
Since $\varphi \ge 0$ is arbitrary, we obtain
\[ x'(t)\big(U(u^-) - U(u^+)\big) - \big(F(u^-) - F(u^+)\big) \le 0. \]
If we introduce the notation $\Delta g \triangleq g(u^+) - g(u^-)$, then this condition becomes
\[ x'(t)\,\Delta U \ge \Delta F. \]


Oleinik entropy condition: For all $u$ between $u^-$ and $u^+$, we need to have
\[ \frac{f(u) - f(u^-)}{u - u^-} \ge x'(t) = s \ge \frac{f(u) - f(u^+)}{u - u^+}, \]
where $s$ is the shock speed, known from the Rankine-Hugoniot condition.

Lax's entropy condition:
\[ f'(u^-) > s > f'(u^+). \]

Figure 2. Illustration of Lax's entropy condition: the characteristics on both sides of $x = x(t)$ are going into the shock.

It is easy to see that the Oleinik condition implies Lax's condition. Unfortunately, the converse does not hold. Lax's entropy condition does not guarantee uniqueness, but it is a necessary condition. However, if $f''(u) \ge c > 0$ uniformly (i.e. the conservation law is genuinely nonlinear and convex), then Lax's entropy condition is sufficient for $u$ to be the entropy solution. For $f''(u) > 0$, Lax's condition becomes even simpler. Consider
\[ f'(u^-) \ge s = \frac{f(u^+) - f(u^-)}{u^+ - u^-} \ge f'(u^+) \]
and note that $f'(u)$ is monotonically increasing, so that the middle part is automatically satisfied. Thus Lax's condition becomes $f'(u^-) \ge f'(u^+)$.

I.e. looking towards the right, we can only jump down.

Theorem 4. The solutions of
\[ u_t + f(u)_x = \varepsilon u_{xx}, \qquad u(x,0) = u_0(x) \]
are $L^1$-contractive. I.e. let $v$ be the solution of
\[ v_t + f(v)_x = \varepsilon v_{xx}, \qquad v(x,0) = v_0(x). \]
Then
\[ \| u(\cdot,t) - v(\cdot,t) \|_{L^1} \le \| u_0 - v_0 \|_{L^1}. \]

Proof. We need to show
\[ \frac{d}{dt}\int |u(x,t) - v(x,t)|\,dx \le 0. \]

Let $x_{j+1/2}(t)$ denote the (isolated) points where $u - v$ changes sign, let $I_j(t) = (x_{j-1/2}(t), x_{j+1/2}(t))$, and let $s_j(t) = \pm 1$ be the sign of $u - v$ on $I_j$.

Figure 3. The functions $u$ and $v$ crossing at the points $x_{j+1/2}(t)$.

Then consider, using Leibniz's rule,
\begin{align*}
\frac{d}{dt}\int |u - v|\,dx
&= \sum_j \frac{d}{dt}\int_{x_{j-1/2}(t)}^{x_{j+1/2}(t)} s_j(t)\,(u - v)\,dx \\
&= \sum_j \Big[ s_j(u-v)\big(x_{j+1/2}\big)\,x'_{j+1/2} - s_j(u-v)\big(x_{j-1/2}\big)\,x'_{j-1/2} + \int_{x_{j-1/2}}^{x_{j+1/2}} s_j\,(u_t - v_t)\,dx \Big],
\end{align*}
where the boundary terms vanish because $u = v$ at the crossing points. Substituting the two PDEs,
\begin{align*}
&= \sum_j s_j(t)\int_{x_{j-1/2}}^{x_{j+1/2}} \big[ -f(u)_x + \varepsilon u_{xx} + f(v)_x - \varepsilon v_{xx} \big]\,dx \\
&= \sum_j s_j(t)\Big[ -f(u) + f(v) + \varepsilon\,(u_x - v_x) \Big]_{x_{j-1/2}(t)}^{x_{j+1/2}(t)} \le 0.
\end{align*}
To see why each bracket is $\le 0$, just look at what is happening at the points $x_{j\pm1/2}$: there $u = v$, so the flux terms cancel, and $s_j(u - v)$ is positive inside $I_j$ and vanishes at the endpoints, so $s_j(u_x - v_x)(x_{j+1/2}) \le 0$ and $s_j(u_x - v_x)(x_{j-1/2}) \ge 0$. $\square$

The entropy solution has a non-increasing total variation: with
\[ \mathrm{TV}(u) \triangleq \sup_{h \ne 0} \int \frac{|u(x+h) - u(x)|}{|h|}\,dx, \]
we get
\[ \mathrm{TV}(u(\cdot,t)) \le \mathrm{TV}(u_0), \]
because the equation is translation invariant, so taking $v_0(x) = u_0(x+h)$ in the contraction estimate bounds each difference quotient.


2 Numerics
Consider
\[ u_t + \left(\frac{u^2}{2}\right)_x = 0, \qquad u(x,0) = \begin{cases} 1 & x < 0, \\ 0 & x \ge 0. \end{cases} \]
The entropy solution is the shock
\[ u(x,t) = \begin{cases} 1 & x \le \frac{1}{2}t, \\ 0 & x > \frac{1}{2}t. \end{cases} \]
Note also that the analytic solution satisfies a maximum principle, i.e.
\[ \min_x u_0(x) \le u(\cdot,t) \le \max_x u_0(x). \]
Remember that for $u_t + a u_x = 0$ (with $a > 0$) we wrote down an upwind scheme:
\[ u_j^{n+1} = u_j^n - a\,\frac{\Delta t}{\Delta x}\,(u_j^n - u_{j-1}^n). \]
Let's write a direct generalization for the (supposedly equivalent) PDE $u_t + u\,u_x = 0$:
\[ u_j^{n+1} = u_j^n - \frac{\Delta t}{\Delta x}\,u_j^n\,(u_j^n - u_{j-1}^n). \]
But for $j \le 0$, $u_j^0 - u_{j-1}^0 = 0$, and for $j > 0$, $u_j^0 = 0$. Altogether,
\[ u_j^{n+1} = u_j^n: \]
the discrete solution never moves, although the exact shock travels at speed $1/2$.
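A minimal sketch demonstrating the failure numerically (the helper names and grid choices are mine): the non-conservative update stays frozen on the Riemann data, while a conservative Godunov update moves the shock at the correct speed $1/2$.

```python
def godunov_flux(ul, ur):
    # Godunov flux for Burgers' f(u) = u^2/2 (convex):
    # min of f over [ul, ur] if ul <= ur, max over [ur, ul] otherwise.
    f = lambda u: 0.5 * u * u
    if ul <= ur:
        return 0.0 if ul <= 0.0 <= ur else min(f(ul), f(ur))
    return max(f(ul), f(ur))

N, lam, steps = 40, 0.5, 20          # lam = dt/dx; CFL = lam * max|u| = 0.5
ic = [1.0] * (N // 2) + [0.0] * (N // 2)
u_nc, u_c = list(ic), list(ic)
for _ in range(steps):
    # non-conservative "upwind" (left boundary value held fixed)
    u_nc = [u_nc[0]] + [u_nc[j] - lam * u_nc[j] * (u_nc[j] - u_nc[j - 1])
                        for j in range(1, N)]
    # conservative Godunov with copied boundary states
    fh = ([godunov_flux(u_c[0], u_c[0])]
          + [godunov_flux(u_c[j - 1], u_c[j]) for j in range(1, N)]
          + [godunov_flux(u_c[-1], u_c[-1])])
    u_c = [u_c[j] - lam * (fh[j + 1] - fh[j]) for j in range(N)]
```

After 20 steps the Godunov shock has moved about 5 cells to the right; the non-conservative solution is bit-for-bit identical to the initial data.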

Bad.

Definition 5. A scheme to solve conservation laws is called conservative iff it can be written as
\[ u_j^{n+1} = u_j^n - \frac{\Delta t}{\Delta x}\big( \hat f_{j+1/2} - \hat f_{j-1/2} \big), \qquad \hat f_{j+1/2} = \hat f(u_{j-p}^n, \dots, u_{j+q}^n), \]
where $\hat f$ is

1. Lipschitz continuous,

2. consistent: $\hat f(u, \dots, u) = f(u)$.

Theorem 6. (Lax-Wendroff) If the solution $\{u_j^n\}$ of a conservative scheme converges (as $\Delta t, \Delta x \to 0$) boundedly a.e. to a function $u(x,t)$, then $u$ is a weak solution of the conservation law.
Proof. Let $\varphi_j^n = \varphi(x_j, t^n)$ for $\varphi \in C_0^1$. Then
\begin{align*}
0 &= \sum_{n,j} \varphi_j^n \left( \frac{u_j^{n+1} - u_j^n}{\Delta t} + \frac{\hat f_{j+1/2} - \hat f_{j-1/2}}{\Delta x} \right) \Delta x\,\Delta t \\
&= -\sum_{n,j} \left( \frac{\varphi_j^n - \varphi_j^{n-1}}{\Delta t}\,u_j^n + \frac{\varphi_{j+1}^n - \varphi_j^n}{\Delta x}\,\hat f_{j+1/2} \right) \Delta x\,\Delta t \;\xrightarrow[\text{DCT, conservativity}]{}\; -\int\!\!\int \big( \varphi_t\,u + \varphi_x\,f(u) \big)\,dx\,dt = 0.
\end{align*}

Remark 7. Above, we used summation by parts:
\[ \sum_{j=j_1}^{j_2} a_j\,(b_j - b_{j-1}) = -\sum_{j=j_1}^{j_2-1} (a_{j+1} - a_j)\,b_j - a_{j_1} b_{j_1-1} + a_{j_2} b_{j_2}. \]


2.1 Examples of conservative schemes


2.1.1 The Godunov Scheme

The Godunov scheme for the conservation law $u_t + f(u)_x = 0$, $u(x,0) = u_0(x)$ was derived from the fact that the Riemann problem
\[ u_t + f(u)_x = 0, \qquad u(x,0) = \begin{cases} u_l & x < 0, \\ u_r & x \ge 0 \end{cases} \]
can be solved exactly. Example: (see above) For Burgers' equation we get
\[ u(x,t) = \begin{cases} \begin{cases} u_l & x < st, \\ u_r & x \ge st, \end{cases} & u_l > u_r, \\[1ex] \begin{cases} u_l & x < u_l t, \\ x/t & u_l t \le x < u_r t, \\ u_r & x \ge u_r t, \end{cases} & u_l < u_r, \end{cases} \]
where
\[ s = \frac{f(u_r) - f(u_l)}{u_r - u_l} = \frac{\frac{1}{2}\big[u_r^2 - u_l^2\big]}{u_r - u_l} = \frac{1}{2}(u_l + u_r). \]
The same technique would work for all convex ($f''(u) > 0$) or concave conservation laws. (Also cf. the book by Toro: 500 pages of Riemann solutions.) Note that conservation laws have finite propagation speed. Suppose we choose a scheme where we consider the solution constant in each cell. (Conceptually, imagine that this value $\bar u_j$ is the cell average over cell $I_j$; this is also how you arrive at $\bar u_j^0$.) If we choose $\Delta x$ and $\Delta t$ such that $\max |f'(u)|\,\Delta t < \Delta x$, then in a sequence of cells $(A, B, C, D, E)$ the solution in cell $C$ at the next time step is not influenced at all by the solution in cells $A$ and $E$. Thus we only need to solve a Riemann problem at each cell interface and we are done. Then
\[ \int_{t^n}^{t^{n+1}}\!\!\int_{x_{j-1/2}}^{x_{j+1/2}} \big( u_t + f(u)_x \big)\,dx\,dt = 0, \]
i.e.
\[ \frac{1}{\Delta x}\int_{x_{j-1/2}}^{x_{j+1/2}} u^{n+1}\,dx - \frac{1}{\Delta x}\int_{x_{j-1/2}}^{x_{j+1/2}} u^n\,dx + \frac{1}{\Delta x}\left[ \int_{t^n}^{t^{n+1}} f(u(x_{j+1/2},t))\,dt - \int_{t^n}^{t^{n+1}} f(u(x_{j-1/2},t))\,dt \right] = 0. \]
Now consider that the Riemann solution $u(x,t)$ is a function of only one variable $\xi = x/t$: the substitution $x \to a x$, $t \to a t$ leaves the PDE and the Riemann ICs invariant. (This is also called self-similarity.) Thus $u$ is constant along the lines $x = x_{j\pm1/2}$, making the last two time integrals trivial. The Godunov scheme can then be written as
\[ \bar u_j^{1} = \bar u_j^{0} - \frac{\Delta t}{\Delta x}\big( f(u^*_{j+1/2}) - f(u^*_{j-1/2}) \big), \]
where $u^*_{j+1/2}$ is the (constant-in-time) Riemann solution at the interface. This is a conservative scheme because the flux $\hat f(\bar u_j^0, \bar u_{j+1}^0)$ depends on the right values (and Lipschitz continuity holds as well, but is a bit tricky to prove). The numerical flux of the Godunov scheme can be written as
\[ \hat f_{j+1/2} = \begin{cases} \min_{u_j \le u \le u_{j+1}} f(u) & u_j < u_{j+1}, \\ \max_{u_{j+1} \le u \le u_j} f(u) & u_j \ge u_{j+1}. \end{cases} \]


2.1.2 The Lax-Friedrichs Scheme

The numerical flux here is
\[ \hat f_{j+1/2} = \frac{1}{2}\big[ f(u_j) + f(u_{j+1}) - \alpha\,(u_{j+1} - u_j) \big], \qquad \alpha = \max_u |f'(u)|. \]

2.1.3 The local Lax-Friedrichs Scheme

\[ \hat f_{j+1/2} = \frac{1}{2}\big[ f(u_j) + f(u_{j+1}) - \alpha_{j+1/2}\,(u_{j+1} - u_j) \big], \qquad \alpha_{j+1/2} = \max_{u \in [u_j, u_{j+1}]} |f'(u)|, \]
where we note that $[u_j, u_{j+1}]$ is meant as a non-empty interval no matter which end of the interval is greater.

2.1.4 Roe Scheme

The numerical flux here is
\[ \hat f_{j+1/2} = \begin{cases} f(u_j) & a_{j+1/2} \ge 0, \\ f(u_{j+1}) & a_{j+1/2} < 0, \end{cases} \qquad a_{j+1/2} = \frac{f(u_{j+1}) - f(u_j)}{u_{j+1} - u_j}, \]
where $a_{j+1/2}$ is the speed of the solution as given by the RH condition.

2.1.5 Engquist-Osher Scheme

The numerical flux here is $\hat f_{j+1/2} = f^+(u_j) + f^-(u_{j+1})$, where
\[ f^+(u) = \int_0^u \max(f'(s), 0)\,ds + f(0), \qquad f^-(u) = \int_0^u \min(f'(s), 0)\,ds. \]
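A minimal sketch of these fluxes for Burgers' equation (where $f'(u) = u$, so $f^\pm$ can be integrated in closed form; the function names are mine):

```python
def f(u):                     # Burgers flux
    return 0.5 * u * u

def lf_flux(ul, ur, alpha=1.0):
    # global Lax-Friedrichs; alpha must bound max|f'| over the data
    return 0.5 * (f(ul) + f(ur) - alpha * (ur - ul))

def llf_flux(ul, ur):
    alpha = max(abs(ul), abs(ur))     # max|f'| over [ul, ur] since f' = u
    return 0.5 * (f(ul) + f(ur) - alpha * (ur - ul))

def roe_flux(ul, ur):
    a = 0.5 * (ul + ur) if ul != ur else ul   # RH speed (f(ur)-f(ul))/(ur-ul)
    return f(ul) if a >= 0 else f(ur)

def eo_flux(ul, ur):
    # Engquist-Osher: f+(u) = int_0^u max(s,0) ds + f(0), f-(u) = int_0^u min(s,0) ds
    return 0.5 * max(ul, 0.0) ** 2 + 0.5 * min(ur, 0.0) ** 2
```

All four are consistent, $\hat f(u, u) = f(u)$; Engquist-Osher differs from Godunov only at transonic shocks (e.g. $u_l = 1$, $u_r = -1$, where it gives $f(u_l) + f(u_r) - f(0) = 1$ instead of $1/2$).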

2.1.6 Lax-Wendroff Scheme

Consider
\[ u_t = -f(u)_x \implies u_{tt} = -f(u)_{xt} = -\big(f(u)_t\big)_x = -\big(f'(u)\,u_t\big)_x = \big(f'(u)\,f(u)_x\big)_x. \]
The general idea is: repeatedly replace time derivatives by space derivatives using the PDE, then discretize the space derivatives by (2nd-order central) FD formulae. Derivation:
\begin{align*}
u_j^{n+1} &= u_j^n + \Delta t\,u_t + \frac{\Delta t^2}{2}\,u_{tt} = u_j^n - \Delta t\,f(u^n)_x + \frac{\Delta t^2}{2}\big(f'(u)f(u)_x\big)_x \\
&\approx u_j^n - \Delta t\,\frac{f(u_{j+1}^n) - f(u_{j-1}^n)}{2\Delta x} + \frac{\Delta t^2}{2\Delta x}\left[ f'(u_{j+1/2}^n)\,\frac{f(u_{j+1}^n) - f(u_j^n)}{\Delta x} - f'(u_{j-1/2}^n)\,\frac{f(u_j^n) - f(u_{j-1}^n)}{\Delta x} \right],
\end{align*}
where
\[ u_{j+1/2}^n = \frac{u_j^n + u_{j+1}^n}{2}. \]
The numerical flux becomes
\[ \hat f_{j+1/2} = \frac{1}{2}\big[ f(u_j) + f(u_{j+1}) - \lambda\,f'(u_{j+1/2})\,(f(u_{j+1}) - f(u_j)) \big], \qquad \lambda = \frac{\Delta t}{\Delta x}. \]

2.1.7 MacCormack Scheme

The idea behind MacCormack is of the predictor-corrector sort:
\begin{align*}
u_j^{n+1/2} &= u_j^n - \lambda\big( f(u_j^n) - f(u_{j-1}^n) \big), \\
u_j^{n+1} &= \frac{1}{2}\Big[ u_j^n + u_j^{n+1/2} - \lambda\big( f(u_{j+1}^{n+1/2}) - f(u_j^{n+1/2}) \big) \Big].
\end{align*}
The numerical flux is a bit ugly:
\[ \hat f_{j+1/2} = \frac{1}{2}\Big[ f(u_j) + f\big( u_{j+1} - \lambda\,(f(u_{j+1}) - f(u_j)) \big) \Big]. \]

Homework #2:

1. Code the Godunov and Lax-Friedrichs schemes for solving a Riemann problem of Burgers' equation. Test the code with
a) $u_l = 1$, $u_r = -0.5$,
b) $u_l = -0.5$, $u_r = 1$,
using $N = 160$ equally spaced points. Show the solution graphically along with the exact solution.

2. Find the formula for the entropy solution of
\[ u_t + f(u)_x = 0, \qquad u(x,0) = \begin{cases} u_l & x < 0, \\ u_r & x > 0, \end{cases} \]
where $f''(u) > 0$.
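A minimal sketch of Homework #2, part 1, for the Godunov scheme (the Riemann data are the values as reconstructed above; grid and CFL choices are illustrative, the helper names mine):

```python
def f(u):
    return 0.5 * u * u

def godunov_flux(ul, ur):
    if ul <= ur:
        return 0.0 if ul <= 0.0 <= ur else min(f(ul), f(ur))
    return max(f(ul), f(ur))

def exact(ul, ur, x, t):
    if ul > ur:                               # shock
        return ul if x < 0.5 * (ul + ur) * t else ur
    if x < ul * t: return ul                  # rarefaction
    if x < ur * t: return x / t
    return ur

N, ul, ur = 160, 1.0, -0.5
dx = 2.0 / N                                  # domain [-1, 1]
xc = [-1.0 + (j + 0.5) * dx for j in range(N)]
u = [ul if x < 0.0 else ur for x in xc]
t, T = 0.0, 0.5
while t < T:
    dt = min(0.5 * dx / max(abs(v) for v in u), T - t)   # CFL 0.5
    fh = [godunov_flux(u[max(j - 1, 0)], u[min(j, N - 1)]) for j in range(N + 1)]
    u = [u[j] - dt / dx * (fh[j + 1] - fh[j]) for j in range(N)]
    t += dt
err = sum(abs(u[j] - exact(ul, ur, xc[j], T)) for j in range(N)) * dx
```

Being monotone, the scheme respects the discrete maximum principle; the $L^1$ error is first-order, concentrated at the shock.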

3. Show that the Godunov flux and the Roe flux are both Lipschitz continuous.

Definition 8. A scheme
\[ u_j^{n+1} = G(u_{j-p-1}, \dots, u_{j+q}) = u_j^n - \lambda\big( \hat f(u_{j-p}, \dots, u_{j+q}) - \hat f(u_{j-p-1}, \dots, u_{j+q-1}) \big) \]
is called a monotone scheme if $G$ is a monotonically nondecreasing function $G(\uparrow, \uparrow, \dots, \uparrow)$ of each argument.

In the special case of 3-point schemes, $\hat f = \hat f(u_j, u_{j+1})$ and
\[ G(u_{j-1}, u_j, u_{j+1}) = u_j - \lambda\big[ \hat f(u_j, u_{j+1}) - \hat f(u_{j-1}, u_j) \big]; \]
the scheme is monotone iff $\hat f(\uparrow, \downarrow)$ plus a restriction on $\lambda$. Clearly, if $\hat f(\uparrow, \downarrow)$, then $G(\uparrow, ?, \uparrow)$. To clean up the middle argument, consider
\[ \frac{\partial G}{\partial u_j} = 1 - \lambda\big[ \hat f_1(u_j, u_{j+1}) - \hat f_2(u_{j-1}, u_j) \big], \]
where $\hat f_i$ is the derivative with respect to the $i$-th argument. If $\lambda(\hat f_1 - \hat f_2) \le 1$, then $G(\uparrow, \uparrow, \uparrow)$.

Examples: The Lax-Friedrichs flux is monotone (under the CFL restriction $\lambda\alpha \le 1$):
\[ \hat f^{LF}(u_j, u_{j+1}) = \frac{1}{2}\big[ f(u_j) + f(u_{j+1}) - \alpha\,(u_{j+1} - u_j) \big] \qquad \text{for } \alpha = \max_u |f'(u)|, \]
\[ \hat f_1^{LF} = \frac{1}{2}\big[ f'(u_j) + \alpha \big] \ge 0, \qquad \hat f_2^{LF} = \frac{1}{2}\big[ f'(u_{j+1}) - \alpha \big] \le 0. \]


Theorem 9. Good properties of monotone schemes:

1. $u_j \le v_j$ for all $j$ ($u \le v$) implies $G(u)_j \le G(v)_j$ for all $j$.

2. Local maximum principle:
\[ \min_{i \in \text{stencil around } j} u_i \le G(u)_j \le \max_{i \in \text{stencil around } j} u_i. \]

3. $L^1$-contraction (this was already obtained for the PDE):
\[ \| G(u) - G(v) \|_{L^1} \le \| u - v \|_{L^1}. \]

4. This immediately implies the Total Variation Diminishing (TVD) property:
\[ \| G(u) \|_{BV} \le \| u \|_{BV}. \]

Proof. 1 is just the definition.

2. Fix $j$. Take
\[ v_i = \begin{cases} \max_{k \in \text{stencil around } j} u_k & i \in \text{stencil around } j, \\ u_i & \text{otherwise}. \end{cases} \]
Then clearly $u_i \le v_i$ for all $i$, so that
\[ G(u)_j \le G(v)_j = v_j = \max_{i \in \text{stencil around } j} u_i, \]
where $G(v)_j = v_j$ because $v$ is constant on the stencil around $j$ and a consistent scheme reproduces constants. The other way around runs in an analogous fashion.

3. Define
\[ a \vee b = \max(a,b), \qquad a \wedge b = \min(a,b), \qquad a^+ = a \vee 0, \qquad a^- = a \wedge 0, \]
and let
\[ w_j \triangleq u_j \vee v_j = v_j + (u_j - v_j)^+. \]
We have $w \ge u$ and $w \ge v$, so by property 1, $G(w)_j \ge G(u)_j$ and $G(w)_j \ge G(v)_j$ for all $j$, hence
\[ G(w)_j - G(v)_j \ge \big( G(u)_j - G(v)_j \big)^+. \tag{$*$} \]
Summing ($*$) over $j$,
\[ \sum_j \big( G(u)_j - G(v)_j \big)^+ \le \sum_j \big( G(w)_j - G(v)_j \big) \overset{(**)}{=} \sum_j (w_j - v_j) = \sum_j (u_j - v_j)^+, \]
where ($**$) holds because we are treating a conservation law, meaning
\[ \sum_j u_j^{n+1} = \sum_j u_j^n \]
for conservative schemes (the flux differences telescope). By symmetry, $\sum_j (G(v)_j - G(u)_j)^+ \le \sum_j (v_j - u_j)^+$. Adding the two inequalities and using $|x| = x^+ + (-x)^+$:
\[ \sum_j |G(u)_j - G(v)_j| \le \sum_j (u_j - v_j)^+ + \sum_j (v_j - u_j)^+ = \sum_j |u_j - v_j|. \]
(This is also called the Crandall-Tartar lemma.)

4: Take $v_j = u_{j+1}$ in 3.
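These properties can be observed numerically. A minimal sketch (assumptions mine: Burgers' flux, periodic grid, Lax-Friedrichs with $\lambda\alpha \le 1$ so the scheme is monotone) checking one step of the $L^1$-contraction and TVD properties on random data:

```python
import random

def f(u):
    return 0.5 * u * u

def lf_step(u, lam, alpha):
    """One Lax-Friedrichs step on a periodic grid; fh[j] is the flux at j-1/2."""
    n = len(u)
    fh = [0.5 * (f(u[j - 1]) + f(u[j]) - alpha * (u[j] - u[j - 1]))
          for j in range(n)]
    return [u[j] - lam * (fh[(j + 1) % n] - fh[j]) for j in range(n)]

def tv(u):
    return sum(abs(u[j] - u[j - 1]) for j in range(len(u)))

random.seed(0)
n, lam, alpha = 32, 0.4, 1.0          # data below lie in [-1,1], so alpha >= max|f'|
u = [random.uniform(-1, 1) for _ in range(n)]
v = [random.uniform(-1, 1) for _ in range(n)]
Gu, Gv = lf_step(u, lam, alpha), lf_step(v, lam, alpha)
l1_before = sum(abs(a - b) for a, b in zip(u, v))
l1_after = sum(abs(a - b) for a, b in zip(Gu, Gv))
```

By Theorem 9, `l1_after <= l1_before` and `tv(Gu) <= tv(u)` must hold for any such data.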


Theorem 10. Solutions of monotone schemes satisfy all entropy conditions.

(Recall that entropy conditions were of the form: pick an entropy function $U(u)$ with $U''(u) \ge 0$; then $U(u)_t + F(u)_x \le 0$, where $F$ is the entropy flux satisfying $F'(u) = U'(u) f'(u)$, i.e.
\[ F(u) = \int_c^u U'(s) f'(s)\,ds.) \]

Proof. We'll prove a particular case, namely
\[ U(u) = |u - c| \quad \text{for any } c \in \mathbb{R}, \qquad U'(u) = \begin{cases} -1 & u < c, \\ 1 & u > c, \end{cases} \qquad U''(u) = 2\delta(u - c) \ge 0. \]
Here we let
\[ F(u) = \operatorname{sign}(u - c)\,(f(u) - f(c)). \]
We claim that the cell entropy inequality is true, i.e.
\[ \frac{U(u_j^{n+1}) - U(u_j^n)}{\Delta t} + \frac{\hat F_{j+1/2} - \hat F_{j-1/2}}{\Delta x} \le 0, \qquad \hat F_{j+1/2} = \hat f(c \vee u)_{j+1/2} - \hat f(c \wedge u)_{j+1/2}. \]
(Observe that we've abused notation a bit: $\hat f(c \vee u)_{j+1/2}$ means $c \vee u_i$ is substituted for each argument $u_i$ of $\hat f_{j+1/2}$.)

First step: show
\[ |u_j^n - c| - \lambda\big( \hat F_{j+1/2} - \hat F_{j-1/2} \big) = G(c \vee u^n)_j - G(c \wedge u^n)_j. \]
Indeed,
\begin{align*}
G(c \vee u)_j &= (c \vee u_j) - \lambda\big[ \hat f(c \vee u)_{j+1/2} - \hat f(c \vee u)_{j-1/2} \big], \\
G(c \wedge u)_j &= (c \wedge u_j) - \lambda\big[ \hat f(c \wedge u)_{j+1/2} - \hat f(c \wedge u)_{j-1/2} \big],
\end{align*}
and subtracting, using $(c \vee u_j) - (c \wedge u_j) = |u_j - c|$, gives the claim. Next, note that
\begin{align*}
\text{I:} \qquad c &= G(c, \dots, c) \le G(c \vee u^n)_j, \\
\text{II:} \qquad u_j^{n+1} &= G(u^n)_j \le G(c \vee u^n)_j,
\end{align*}
where the equality in I is true because if the arguments of $G$ are constant, then only the $u_j^n$ term comes into play, just yielding back the argument. I and II combine to
\[ c \vee u_j^{n+1} \le G(c \vee u^n)_j, \]
and analogously $c \wedge u_j^{n+1} \ge G(c \wedge u^n)_j$. Then
\[ U(u_j^{n+1}) = |u_j^{n+1} - c| = c \vee u_j^{n+1} - c \wedge u_j^{n+1} \le G(c \vee u^n)_j - G(c \wedge u^n)_j = |u_j^n - c| - \lambda\big( \hat F_{j+1/2} - \hat F_{j-1/2} \big). \qquad \square \]
Theorem 11. (Godunov) Monotone schemes are at most first-order accurate.


After this depressing result, we will have to look for different classes of schemes. For example, in order of decreasing strength:

Monotone: see above.

TVD: A scheme is TVD if $\mathrm{TV}(u^{n+1}) \le \mathrm{TV}(u^n)$, where $\mathrm{TV}(u) = \sum_j |u_{j+1} - u_j|$.

Monotonicity-preserving: A scheme is monotonicity-preserving if $\{u_j^n\}_j$ monotone (in $j$) implies $\{u_j^{n+1}\}_j$ monotone.

Let's prove that the above is actually in order of decreasing strength, i.e.

Theorem 12. A TVD scheme is monotonicity-preserving.

Proof. Assume $u_j^n \le u_{j+1}^n$ for all $j$, and suppose there existed a $j_0$ such that $u_{j_0+1}^{n+1} < u_{j_0}^{n+1}$. Modify $u^n$ to be constant outside the stencils used to compute $u_{j_0}^{n+1}$ and $u_{j_0+1}^{n+1}$; this changes neither of these two values. But the reversal of the order of these two values then creates additional variation, so the TVD property is violated. $\square$

Later in this class, a theorem by Godunov will show that, for linear schemes, all the above properties are actually the same, and thus first-order, and thus useless. :-/

Definition 13. A scheme is called a linear scheme if it is linear when applied to a linear PDE:
\[ u_t + a u_x = 0, \qquad a \text{ constant}. \tag{5} \]
A linear scheme for $u_t + u_x = 0$ can be written as
\[ u_j^{n+1} = \sum_{l=-k}^{k} c_l(\lambda)\,u_{j-l}^n, \]
where the $c_l(\lambda)$ are constants which may depend on $\lambda = \Delta t/\Delta x$. A linear scheme for (5) is monotone iff $c_l(\lambda) \ge 0$ for all $l$. This is why they are also called positive schemes.

Theorem 14. For linear schemes, monotonicity-preserving $\implies$ monotone.

Corollary 15. For linear schemes, monotonicity-preserving and TVD schemes are at most first-order accurate.

Proof. (of Theorem 14) If the above linear scheme is monotonicity-preserving, then consider the monotone data
\[ u_i^n = \begin{cases} 0 & i \le \mu, \\ 1 & i > \mu. \end{cases} \]
Then
\[ u_{j+1}^{n+1} - u_j^{n+1} = \sum_{l=-k}^{k} c_l(\lambda)\big( u_{j+1-l}^n - u_{j-l}^n \big) = c_{j-\mu}(\lambda), \]
where we note that $u_{m+1}^n - u_m^n = 1$ if $m = \mu$, and zero otherwise. The requirement of monotonicity preservation makes this difference $\ge 0$ for every $j$, meaning all $c_l(\lambda) \ge 0$, such that the scheme is monotone. $\square$

So, we have
\[ \text{monotone} \implies \text{TVD} \implies \text{monotonicity-preserving (MP)} \implies \text{monotone}, \]
where the last implication only holds for linear schemes.

For a scheme to be consistent, the local truncation error $\tau_j^n$ must vanish if $u$ is a constant solution. For a scheme to be at least first-order accurate, $\tau_j^n = 0$ if $u$ is a linear solution of the PDE. Consider a linear scheme
\[ u_j^{n+1} = \sum_l c_l\,u_{j-l}^n. \]
Plug a constant in there, and we obtain
\[ 1 = \sum_l c_l. \]
Plug the linear solution $u = x - t$ in there, and obtain
\[ j\Delta x - (n+1)\Delta t = \sum_l c_l\big( (j - l)\Delta x - n\Delta t \big) \implies \sum_l l\,c_l = \frac{\Delta t}{\Delta x} = \lambda. \]
For the quadratic solution $u = (x - t)^2$, we would get
\[ \sum_l l^2 c_l = \lambda^2. \]
So, now try to derive a contradiction between any two of the above to refute second order. To that end, define
\[ a = \big( l\sqrt{c_l} \big)_{l=-k}^{k}, \qquad b = \big( \sqrt{c_l} \big)_{l=-k}^{k} \]
(possible since $c_l \ge 0$ for a monotone scheme) and use Cauchy-Schwarz:
\[ \lambda^2 = |a \cdot b|^2 \le \left( \sum_l l^2 c_l \right)\left( \sum_l c_l \right) = \lambda^2, \]
where equality holds only if $a$ and $b$ are linearly dependent, i.e. $l\sqrt{c_l} = \mu\sqrt{c_l}$ with $\mu$ a constant independent of $l$. That forces $c_l = 0$ for all $l \ne \mu$, i.e. the scheme is an exact shift by an integer number of cells, which is only consistent for integer $\lambda$: a trivial exception. Hence:

Theorem 16. (Godunov) A linear monotone (TVD) scheme is at most first-order accurate.

2.2 Higher-order TVD Schemes


Consider $u_t + f(u)_x = 0$, where we will worry about the computation of the spatial derivative now and about the time derivative later. Then we can use the backward difference
\[ \frac{f(u_j) - f(u_{j-1})}{\Delta x} \]
for first-order accuracy, the central difference
\[ \frac{f(u_{j+1}) - f(u_{j-1})}{2\Delta x} \]
for second-order accuracy, or the one-sided difference
\[ \frac{\frac{3}{2}f(u_j) - 2f(u_{j-1}) + \frac{1}{2}f(u_{j-2})}{\Delta x}, \]
which is also second-order accurate, but fully upwinded.

2.2.1 General Framework of a Conservative Finite-Volume Scheme

Consider our conventional notation $I_j = [x_{j-1/2}, x_{j+1/2}]$, where $\Delta x_j = x_{j+1/2} - x_{j-1/2}$. Now integrate the PDE:
\[ \frac{d}{dt}\int_{x_{j-1/2}}^{x_{j+1/2}} u\,dx + f(u(x_{j+1/2},t)) - f(u(x_{j-1/2},t)) = 0. \]
Denote
\[ \bar u_j = \frac{1}{\Delta x_j}\int_{x_{j-1/2}}^{x_{j+1/2}} u\,dx. \]
Then
\[ \frac{d}{dt}\bar u_j + \frac{1}{\Delta x_j}\big[ f(u(x_{j+1/2},t)) - f(u(x_{j-1/2},t)) \big] = 0. \]
A finite volume scheme is of the form
\[ \frac{d}{dt}\bar u_j + \frac{1}{\Delta x_j}\big( \hat f_{j+1/2} - \hat f_{j-1/2} \big) = 0, \]
where $\hat f_{j+1/2}$ is the numerical flux. We want $\hat f_{j+1/2} \approx f(u(x_{j+1/2},t))$. In general $\hat f_{j+1/2} = \hat f(u^-_{j+1/2}, u^+_{j+1/2})$ with a monotone flux $\hat f(\uparrow, \downarrow)$; see below for the case of unknown sign. For the time being, let's assume $f'(u) \ge 0$ and $\hat f_{j+1/2} = f(u^-_{j+1/2})$, which is the numerical flux for Godunov, Roe, and Engquist-Osher in this case.

So we can try to compute $u_{j+1/2}$ using the information $\{\bar u_j, \bar u_{j+1}\}$ or $\{\bar u_{j-1}, \bar u_j\}$:
\begin{align*}
u^{(1)}_{j+1/2} &= \frac{1}{2}(\bar u_j + \bar u_{j+1}) = \bar u_j + \frac{1}{2}(\bar u_{j+1} - \bar u_j), & \hat f^{(1)}_{j+1/2} &= f(u^{(1)}_{j+1/2}), \\
u^{(2)}_{j+1/2} &= \frac{3}{2}\bar u_j - \frac{1}{2}\bar u_{j-1} = \bar u_j + \frac{1}{2}(\bar u_j - \bar u_{j-1}), & \hat f^{(2)}_{j+1/2} &= f(u^{(2)}_{j+1/2}).
\end{align*}
The above fluxes are 2nd-order accurate, and are called the 2nd-order central and upwind flux, respectively. ($u^{(1)}$ is gained from the line connecting the cell averages of $I_j$ and $I_{j+1}$ at the cell centers; $u^{(2)}$ is the same for $I_{j-1}$ and $I_j$.) The step $\{\bar u_j\} \to \{u_{j+1/2}\}$ is called reconstruction.

$\tilde u_j^{(i)} = u^{(i)}_{j+1/2} - \bar u_j$ measures the distance from the cell average $\bar u_j$ to $u^{(i)}_{j+1/2}$. Now define
\[ \operatorname{minmod}(a,b) \triangleq \begin{cases} a & |a| \le |b|,\ ab > 0, \\ b & |b| < |a|,\ ab > 0, \\ 0 & ab \le 0 \end{cases} \]
and set $\tilde u_j \triangleq \operatorname{minmod}\big( \tilde u_j^{(1)}, \tilde u_j^{(2)} \big)$. Then consider
\[ \hat f^{(3)}_{j+1/2} = f(\bar u_j + \tilde u_j). \]

Lemma 17. (Harten) If a scheme can be written as
\[ u_j^{n+1} = u_j^n + \lambda\big( C_{j+1/2}\,\Delta_+ u_j - D_{j-1/2}\,\Delta_- u_j \big) \]
with $C_{j+1/2} \ge 0$, $D_{j+1/2} \ge 0$, $1 - \lambda(C_{j+1/2} + D_{j+1/2}) \ge 0$ and $\lambda = \Delta t/\Delta x$, then it is TVD. As a matter of notation, $\Delta_+ u_j = u_{j+1} - u_j$, $\Delta_- u_j = u_j - u_{j-1}$.

Proof. Write
\begin{align*}
\Delta_+ u_j^{n+1} &= \Delta_+ u_j^n + \lambda\big( C_{j+3/2}\Delta_+ u_{j+1}^n - D_{j+1/2}\Delta_+ u_j^n - C_{j+1/2}\Delta_+ u_j^n + D_{j-1/2}\Delta_- u_j^n \big) \\
&= \big[ 1 - \lambda(C_{j+1/2} + D_{j+1/2}) \big]\Delta_+ u_j^n + \lambda C_{j+3/2}\Delta_+ u_{j+1}^n + \lambda D_{j-1/2}\Delta_+ u_{j-1}^n.
\end{align*}
Thus
\[ |\Delta_+ u_j^{n+1}| \le \big[ 1 - \lambda(C_{j+1/2} + D_{j+1/2}) \big]|\Delta_+ u_j^n| + \lambda C_{j+3/2}|\Delta_+ u_{j+1}^n| + \lambda D_{j-1/2}|\Delta_+ u_{j-1}^n|, \]
and summing over $j$ (shifting indices in the last two sums),
\[ \mathrm{TV}(u^{n+1}) \le \sum_j \big[ 1 - \lambda(C_{j+1/2} + D_{j+1/2}) + \lambda C_{j+1/2} + \lambda D_{j+1/2} \big]|\Delta_+ u_j^n| = \mathrm{TV}(u^n), \]
which proves the claim. $\square$

Next, prove that the scheme we designed above is TVD using Harten's lemma. Rewrite (still assuming $f' \ge 0$)
\[ u_j^{n+1} = u_j^n - \lambda\big[ f(\bar u_j + \tilde u_j) - f(\bar u_{j-1} + \tilde u_{j-1}) \big] = u_j^n - \lambda D_{j-1/2}\Delta_- u_j, \qquad C_{j+1/2} = 0, \]
with, by the mean value theorem,
\[ D_{j-1/2} = \frac{f(\bar u_j + \tilde u_j) - f(\bar u_{j-1} + \tilde u_{j-1})}{\bar u_j - \bar u_{j-1}} = f'(\xi)\left( 1 + \frac{\tilde u_j}{\Delta_- \bar u_j} - \frac{\tilde u_{j-1}}{\Delta_- \bar u_j} \right). \]
By the minmod definition, $0 \le \tilde u_j/\Delta_- \bar u_j \le \frac{1}{2}$ and $0 \le \tilde u_{j-1}/\Delta_- \bar u_j \le \frac{1}{2}$ (the half-differences share the sign of $\Delta_- \bar u_j$ when nonzero), so
\[ 0 \le D_{j-1/2} \le \frac{3}{2}\max|f'|. \]
Thus our scheme is TVD, and we also get a condition on the CFL number: $1 - \lambda D_{j-1/2} \ge 0$ requires
\[ \lambda\,\frac{3}{2}\max|f'| \le 1 \iff \lambda \max|f'| \le \frac{2}{3}. \]

If we use a 2nd-order Runge-Kutta method like
\[ u^{(1)} = L(u^n), \qquad u^{n+1} = \frac{1}{2}\big( u^n + L(u^{(1)}) \big), \]
where $L(\cdot)$ denotes one forward-Euler step of the scheme,


then
\[ \mathrm{TV}(u^{n+1}) \le \frac{1}{2}\mathrm{TV}(u^n) + \frac{1}{2}\mathrm{TV}(L(u^{(1)})) \le \frac{1}{2}\mathrm{TV}(u^n) + \frac{1}{2}\mathrm{TV}(u^{(1)}) \le \frac{1}{2}\mathrm{TV}(u^n) + \frac{1}{2}\mathrm{TV}(u^n) = \mathrm{TV}(u^n), \]
since each Euler step is TVD. The scheme treated here is called MUSCL (monotone upstream-centered scheme for conservation laws).

Homework #3:

1. Prove: Conservative monotone schemes are at most first-order accurate.

2. Prove: For every convex entropy $U(u)$ ($U'' \ge 0$) and a conservative monotone scheme, there exists a consistent entropy flux $\hat F_{j+1/2}$ ($\hat F(u, \dots, u) = F(u)$) such that the following cell entropy inequality holds:
\[ \frac{U(u_j^{n+1}) - U(u_j^n)}{\Delta t} + \frac{\hat F_{j+1/2} - \hat F_{j-1/2}}{\Delta x} \le 0, \]
where $u_j^{n+1} = u_j^n - \lambda(\hat f_{j+1/2} - \hat f_{j-1/2}) = H(u_{j-p}^n, \dots, u_{j+q}^n)$. (We proved this for $U(u) = |u - c|$.)

3. Code:
\[ u_t + \left(\frac{u^2}{2}\right)_x = 0, \qquad u(x,0) = 1 + \frac{1}{2}\sin(\pi x) \quad \text{on } 0 \le x \le 2, \]
to (i) $t = 1.0$ and (ii) $t = 3.0$. Use a uniform grid with $N = 20, 40, 80, 160, 320$. Use

i. first-order Godunov (upwinding),
ii. 2nd-order central ($u^{(1)}$),
iii. 2nd-order upwind ($u^{(2)}$),
iv. MUSCL (minmod).

For (i): tables of $L^1$ errors and orders. For (ii): figures for $N = 40$.

2.2.2 Generalized MUSCL Scheme

We are still considering $u_t + f(u)_x = 0$, with a scheme of the form
\[ u_j^{n+1} = u_j^n - \lambda\big[ \hat f(u^-_{j+1/2}, u^+_{j+1/2}) - \hat f(u^-_{j-1/2}, u^+_{j-1/2}) \big], \]
where $\hat f(\uparrow, \downarrow)$ is a monotone flux. Before we can seriously start considering the above scheme, we need to specify the reconstruction step, which achieves the mapping $\{\bar u_j\} \to \{u^\pm_{j+1/2}\}$.
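As a concrete baseline before the general procedure, the second-order MUSCL reconstruction from the previous section can be sketched as follows (a minimal sketch; periodic boundaries and the function names are my assumptions):

```python
def minmod(a, b):
    # smaller-in-magnitude argument if signs agree, else zero
    if a * b <= 0.0:
        return 0.0
    return a if abs(a) <= abs(b) else b

def muscl_interface_values(ubar):
    """Left interface values u^-_{j+1/2} = ubar_j + minmod(du1, du2)
    on a periodic grid, from the central/upwind half-differences."""
    n = len(ubar)
    out = []
    for j in range(n):
        du1 = 0.5 * (ubar[(j + 1) % n] - ubar[j])   # u^(1)_{j+1/2} - ubar_j
        du2 = 0.5 * (ubar[j] - ubar[j - 1])         # u^(2)_{j+1/2} - ubar_j
        out.append(ubar[j] + minmod(du1, du2))
    return out
```

In smooth monotone data the limiter is inactive (second order); at an extremum it clips the reconstruction back to the cell average.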

Procedure:


From $\{\bar u_j\}$, we obtain the reconstructed functions $P_j(x)$ defined on $I_j = (x_{j-1/2}, x_{j+1/2})$ and then take $u^-_{j+1/2} = P_j(x_{j+1/2})$, $u^+_{j+1/2} = P_{j+1}(x_{j+1/2})$.

Conditions on $P_j$:
\[ \frac{1}{\Delta x}\int_{I_j} P_j(x)\,dx = \bar u_j, \qquad \frac{1}{\Delta x}\int_{I_{j+l}} P_j(x)\,dx = \bar u_{j+l} \text{ for some set of } l \ne 0 \text{ (accuracy)}. \]

3rd-order reconstruction formulas:
\[ u^-_{j+1/2} = \frac{1}{3}\bar u_{j-2} - \frac{7}{6}\bar u_{j-1} + \frac{11}{6}\bar u_j \qquad \text{or} \qquad u^-_{j+1/2} = -\frac{1}{6}\bar u_{j-1} + \frac{5}{6}\bar u_j + \frac{1}{3}\bar u_{j+1}, \]
and correspondingly
\[ u^+_{j+1/2} = \frac{1}{3}\bar u_j + \frac{5}{6}\bar u_{j+1} - \frac{1}{6}\bar u_{j+2}. \]
We could then choose one of these and once more obtain a linear scheme, which is third-order accurate and, by Godunov's theorem, should be oscillatory. Now define
\[ \tilde u_j = u^-_{j+1/2} - \bar u_j, \qquad \tilde{\tilde u}_{j+1} = \bar u_{j+1} - u^+_{j+1/2}, \]
or equivalently $u^-_{j+1/2} = \bar u_j + \tilde u_j$, $u^+_{j+1/2} = \bar u_{j+1} - \tilde{\tilde u}_{j+1}$. Then, remember our previous modification of the reconstruction and do something analogous:
\begin{align*}
\tilde u_j^{\mathrm{mod}} &= \operatorname{minmod}\big( \tilde u_j,\ \bar u_{j+1} - \bar u_j,\ \bar u_j - \bar u_{j-1} \big), \\
\tilde{\tilde u}_j^{\mathrm{mod}} &= \operatorname{minmod}\big( \tilde{\tilde u}_j,\ \bar u_{j+1} - \bar u_j,\ \bar u_j - \bar u_{j-1} \big),
\end{align*}
and with that
\[ u^{-,\mathrm{mod}}_{j+1/2} = \bar u_j + \tilde u_j^{\mathrm{mod}}, \qquad u^{+,\mathrm{mod}}_{j+1/2} = \bar u_{j+1} - \tilde{\tilde u}_{j+1}^{\mathrm{mod}}. \]

To show that this modification does not destroy much accuracy and is in fact TVD, consider
\[ u_j^{n+1} = u_j^n - \lambda\underbrace{\big[ \hat f(u^{-,\mathrm{mod}}_{j+1/2}, u^{+,\mathrm{mod}}_{j+1/2}) - \hat f(u^{-,\mathrm{mod}}_{j+1/2}, u^{+,\mathrm{mod}}_{j-1/2}) \big]}_{(2)} - \lambda\underbrace{\big[ \hat f(u^{-,\mathrm{mod}}_{j+1/2}, u^{+,\mathrm{mod}}_{j-1/2}) - \hat f(u^{-,\mathrm{mod}}_{j-1/2}, u^{+,\mathrm{mod}}_{j-1/2}) \big]}_{(1)}, \]
where these terms correspond to the marked terms in the assumption of Harten's lemma, $u_j^{n+1} = u_j^n + \lambda( \underbrace{C_{j+1/2}\Delta_+ u_j}_{(2)} - \underbrace{D_{j-1/2}\Delta_- u_j}_{(1)} )$. For the $D$ term,
\[ D_{j-1/2} = \frac{\hat f(u^{-,\mathrm{mod}}_{j+1/2}, u^{+,\mathrm{mod}}_{j-1/2}) - \hat f(u^{-,\mathrm{mod}}_{j-1/2}, u^{+,\mathrm{mod}}_{j-1/2})}{\Delta_- \bar u_j} = \underbrace{\hat f_1(\xi, u^{+,\mathrm{mod}}_{j-1/2})}_{\ge 0 \text{ (monotonicity)}} \Big( 1 + \underbrace{\frac{\tilde u_j^{\mathrm{mod}}}{\Delta_- \bar u_j}}_{\in [0,1]} - \underbrace{\frac{\tilde u_{j-1}^{\mathrm{mod}}}{\Delta_- \bar u_j}}_{\in [0,1]} \Big) \ge 0, \]
with the bracket in $[0, 2]$ by the minmod definition. The $C$ term is handled analogously using $\hat f_2 \le 0$, and a suitable CFL restriction gives $1 - \lambda(C_{j+1/2} + D_{j+1/2}) \ge 0$: the scheme is TVD.


Claim: In smooth and monotone regions the scheme maintains its original high-order accuracy. Consider the following Taylor expansions (with $r$ the order of the reconstruction):
\begin{align*}
u^-_{j+1/2} &= u(x_{j+1/2}) + O(\Delta x^r) = u(x_j) + \frac{\Delta x}{2}u_x(x_j) + O(\Delta x^2), \\
\bar u_j &= \frac{1}{\Delta x}\int_{I_j} u(x)\,dx = \frac{1}{\Delta x}\int_{I_j} \Big[ u(x_j) + u_x(x_j)(x - x_j) + \frac{u_{xx}}{2}(x - x_j)^2 + O(\Delta x^3) \Big]\,dx = u(x_j) + O(\Delta x^2), \\
\tilde u_j &= u^-_{j+1/2} - \bar u_j = \frac{\Delta x}{2}u_x(x_j) + O(\Delta x^2), \\
\bar u_{j+1} - \bar u_j &= u(x_{j+1}) - u(x_j) + O(\Delta x^2) = \Delta x\,u_x(x_j) + O(\Delta x^2), \\
\bar u_j - \bar u_{j-1} &= \Delta x\,u_x(x_j) + O(\Delta x^2).
\end{align*}
Observe that the first argument of the minmod function is about half as big as the second and third. In a monotone region $u_x$ is bounded away from zero, so for small $\Delta x$ all three arguments share a sign and the first is smallest in magnitude: minmod returns it unchanged, and the high-order reconstruction survives.

Theorem 18. (Osher) TVD schemes are at most first-order accurate near smooth extrema.

A simple argument by Harten shows something similar. Why are we restricted near smooth extrema? Suppose we are considering $u_t + u_x = 0$, whose exact solution just translates the initial profile. Consider what TVD means here: the discrete maximum is not allowed to grow, so the translated extremum is clipped by an $O(\Delta x^2)$ amount: at most first order!

Figure 4. Why TVD schemes don't do so well near smooth extrema: the exact solution after time $t$ translates the initial condition, but a TVD scheme cannot let the discrete maximum recover, losing $O(\Delta x^2)$ per step.

What routes can we take out of this dilemma? Relax TVD: only demand TVB (total variation boundedness),
\[ \mathrm{TV}(u^{n+1}) \le (1 + C\Delta t)\,\mathrm{TV}(u^n) \qquad \text{or} \qquad \mathrm{TV}(u^{n+1}) \le \mathrm{TV}(u^n) + C\Delta t. \]
Both have the consequence that $\mathrm{TV}(u^n) \le C(T)$ for $n\Delta t \le T$. TVD/TVB is also an important theoretical property: a set of uniformly TVB functions is precompact, which has important consequences for convergence results.

This leads us to using a modified minmod function (min-mod-mod? min-mod2? :-) Replace
\[ \operatorname{minmod}(\tilde u_j,\ \bar u_{j+1} - \bar u_j,\ \bar u_j - \bar u_{j-1}) \qquad \text{by} \qquad \tilde m(\tilde u_j,\ \bar u_{j+1} - \bar u_j,\ \bar u_j - \bar u_{j-1}) \]
with
\[ \tilde m(a, b, c) \triangleq \begin{cases} a & |a| \le M\Delta x^2, \\ \operatorname{minmod}(a, b, c) & \text{otherwise}. \end{cases} \]
We get the following properties:

The scheme is TVB:
\[ \mathrm{TV}(u^{n+1}) \le \mathrm{TV}(u^n) + C M \Delta x^2 N \le \mathrm{TV}(u^n) + C'\Delta t, \]
where $N$ is the total number of cells (so $N\Delta x$ is the domain length and $\Delta x \sim \Delta t$).

The scheme maintains its high-order accuracy in smooth regions including at local extrema: near a smooth extremum $u_x(x_j) = O(\Delta x)$, so
\[ \tilde u_j = \frac{\Delta x}{2}u_x(x_j) + O(\Delta x^2) = O(\Delta x^2), \]
and for suitable $M$ the unmodified first argument is used there. The choice of $M$ represents a tradeoff between oscillation and accuracy. One analysis of DG was carried out using $M = \frac{2}{3}|u_{xx}|$ at extrema.

Discussion of HW #3, Problem 2: Here's how to show the cell entropy inequality in the semidiscrete case. Let $\hat f(\uparrow, \downarrow)$ be a monotone flux and $U''(u) \ge 0$, and
\[ F(u) = \int^u U'(s) f'(s)\,ds \overset{\text{integration by parts}}{=} U'(u)f(u) - \int^u U''(s) f(s)\,ds. \]
The semidiscrete scheme is
\[ \frac{du_j}{dt} + \frac{1}{\Delta x}\big[ \hat f(u_j, u_{j+1}) - \hat f(u_{j-1}, u_j) \big] = 0. \]
Multiplying by $U'(u_j)$:
\[ \frac{dU(u_j)}{dt} + \frac{1}{\Delta x}\,U'(u_j)\big[ \hat f(u_j, u_{j+1}) - \hat f(u_{j-1}, u_j) \big] = 0. \]
Define
\[ \hat F_{j+1/2} = U'(u_j)\,\hat f(u_j, u_{j+1}) - \int^{u_j} U''(u) f(u)\,du, \]
which is consistent: $\hat F(u, u) = U'(u)f(u) - \int^u U''f = F(u)$. Then
\[ \frac{dU(u_j)}{dt} + \frac{1}{\Delta x}\big( \hat F_{j+1/2} - \hat F_{j-1/2} \big) + \frac{1}{\Delta x}\Theta_j = 0, \]
where the leftover ("junky" :)) term is
\begin{align*}
\Theta_j &= \int^{u_j} U''(u) f(u)\,du - U'(u_j)\hat f(u_{j-1}, u_j) + U'(u_{j-1})\hat f(u_{j-1}, u_j) - \int^{u_{j-1}} U''(u) f(u)\,du \\
&= \int_{u_{j-1}}^{u_j} U''(u) f(u)\,du - \big( U'(u_j) - U'(u_{j-1}) \big)\hat f(u_{j-1}, u_j) \\
&= \int_{u_{j-1}}^{u_j} U''(u)\big( f(u) - \hat f(u_{j-1}, u_j) \big)\,du \ge 0,
\end{align*}
so that the cell entropy inequality
\[ \frac{dU(u_j)}{dt} + \frac{\hat F_{j+1/2} - \hat F_{j-1/2}}{\Delta x} = -\frac{\Theta_j}{\Delta x} \le 0 \]
follows once $\Theta_j \ge 0$ is established.


Then, to see $\Theta_j \ge 0$, distinguish cases:

1. $u_{j-1} < u_j$: For $u_{j-1} \le u \le u_j$, consistency and monotonicity give
\[ f(u) - \hat f(u_{j-1}, u_j) = \hat f(u, u) - \hat f(u_{j-1}, u_j) \ge 0, \]
since $\hat f$ is nondecreasing in the first argument ($u \ge u_{j-1}$) and nonincreasing in the second ($u \le u_j$); with $U'' \ge 0$ the integrand is nonnegative.

...and then he cleaned the blackboard. (End of HW discussion)

2.3 Essentially Non-Oscillatory Schemes

This scheme goes back to the idea of the MUSCL scheme,
\[ u^-_{j+1/2} = \bar u_j + \frac{1}{2}\operatorname{minmod}\big( \bar u_{j+1} - \bar u_j,\ \bar u_j - \bar u_{j-1} \big). \]

Recap: Newton interpolation. Suppose we have n points x_j with values y_j. Look for a polynomial of degree ≤ n − 1 such that p(x_j) = y_j. First review Lagrange polynomials and Lagrange interpolation (l_i(x_j) = δ_{ij}). (omitted) Next up, Newton interpolation via divided differences:

  y[x_i] = y_i,
  y[x_i, x_{i+1}] = (y[x_{i+1}] − y[x_i]) / (x_{i+1} − x_i),
  y[x_i, x_{i+1}, x_{i+2}] = (y[x_{i+1}, x_{i+2}] − y[x_i, x_{i+1}]) / (x_{i+2} − x_i).

Then (for four points)

  p(x) = y[x_0] + y[x_0, x_1](x − x_0) + y[x_0, x_1, x_2](x − x_0)(x − x_1) + y[x_0, x_1, x_2, x_3](x − x_0)(x − x_1)(x − x_2).

But we are doing reconstruction, not interpolation. How can we convert reconstruction to interpolation? Consider that we're looking for a p(x) such that

  (1/Δx) ∫_{x_{j−1/2}}^{x_{j+1/2}} p(ξ) dξ = ū_j   for j = 1, 2, ..., m.

Then define

  P(x) = ∫_{x_{1/2}}^{x} p(ξ) dξ

and observe

  P(x_{j+1/2}) = Σ_{l=1}^{j} ∫_{x_{l−1/2}}^{x_{l+1/2}} p(ξ) dξ = Σ_{l=1}^{j} Δx_l ū_l,   j = 0, ..., m.

So the primitive P interpolates known values (the running sums of the cell averages) at the cell boundaries: interpolate those values, then differentiate to recover p.
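The primitive trick can be sketched in a few lines (Python rather than the course's Fortran; the function name and setup are my own, and exactness is only claimed for polynomials of low enough degree):

```python
import numpy as np

def reconstruct_interface(xb, ubar):
    """Given cell boundaries xb[0..m] and cell averages ubar[0..m-1], build the
    Newton interpolant P of the running sums of dx*ubar at the boundaries, and
    return p(x) = P'(x) at the right-most interface."""
    m = len(ubar)
    # Primitive values at the boundaries, with P(x_{1/2}) = 0.
    P = np.concatenate([[0.0], np.cumsum(np.diff(xb) * ubar)])
    # In-place Newton divided differences on the boundary nodes.
    coef = P.astype(float).copy()
    for l in range(1, m + 1):
        coef[l:] = (coef[l:] - coef[l-1:-1]) / (xb[l:] - xb[:-l])
    # Evaluate P and P' at x = xb[-1] by a Horner recurrence with derivative.
    x = xb[-1]
    val, dval = coef[-1], 0.0
    for c, node in zip(coef[-2::-1], xb[-2::-1]):
        dval = val + (x - node) * dval
        val = c + (x - node) * val
    return dval   # p(x_{m+1/2}), the reconstructed interface value

# Exact for a quadratic reconstructed from 3 cells:
xb = np.linspace(0.0, 1.0, 4)
u = lambda x: 3*x**2 - x + 1
Uprim = lambda x: x**3 - 0.5*x**2 + x        # primitive of u
ubar = np.diff(Uprim(xb)) / np.diff(xb)      # exact cell averages
print(reconstruct_interface(xb, ubar), u(1.0))   # both approximately 3.0
```

Since P is a polynomial of one degree higher than p, interpolating P exactly at the m + 1 boundaries reproduces p exactly whenever u itself is a polynomial of degree < m.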

So how do we implement this? (Aargh, Fortran.) This algorithm works only for a uniform mesh:

1. The cell averages ū_1, ū_2, ... are given as ub(1), ub(2), ...

2. Compute the un-divided differences of ū:

do i=1,n
  u(i,0)=ub(i)
enddo
do l=1,m
  do i=1,n-l
    u(i,l)=u(i+1,l-1)-u(i,l-1)
  enddo
enddo

3. At each location j + 1/2, to compute u⁻_{j+1/2}, do

a. Find the origin is(j) of the ENO stencil:

is(j)=j
do l=1,m
  if (abs( u(is(j)-1,l) ) .lt. abs( u(is(j),l) )) is(j) = is(j)-1
enddo

b. Evaluate the reconstruction:

  un(j) = Σ_{l=is(j)}^{is(j)+m} c(l-is(j), j-is(j)) ub(l) ≈ u⁻_{j+1/2}

(consider that l-is(j), j-is(j) ∈ {0, ..., m}).
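For readers who do not speak Fortran, here is a Python transcription of the third-order (m = 2) case, with the standard reconstruction coefficient table written out instead of the generic c(·,·) array. The coefficients are the usual ones for uniform-mesh reconstruction (see e.g. Shu's ENO/WENO lecture notes); the rest is a direct translation of the loops above.

```python
import numpy as np

# Reconstruction coefficients for u_{j+1/2}^- from three cell averages,
# indexed by the left shift r of the stencil {j-r, ..., j-r+2}.
CRJ = {0: (1/3, 5/6, -1/6),    # stencil {j, j+1, j+2}
       1: (-1/6, 5/6, 1/3),    # stencil {j-1, j, j+1}
       2: (1/3, -7/6, 11/6)}   # stencil {j-2, j-1, j}

def eno3(ubar, j):
    """Third-order ENO value of u_{j+1/2}^- from cell averages ubar,
    choosing the stencil with the smallest un-divided differences."""
    i = j                       # origin (left end) of the current stencil
    for l in range(1, 3):       # widen the stencil twice
        left = np.diff(ubar[i-1 : i+l], n=l)[0]
        right = np.diff(ubar[i : i+l+1], n=l)[0]
        if abs(left) < abs(right):
            i -= 1
    c = CRJ[j - i]
    return sum(c[l] * ubar[i + l] for l in range(3))

# Exact for quadratics: cell averages of u(x) = x^2 on a uniform mesh.
dx = 0.1
xc = dx * (np.arange(10) + 0.5)
ubar = xc**2 + dx**2 / 12      # exact cell averages of x^2
print(eno3(ubar, 5))           # -> 0.36 = u(x_{5+1/2}) exactly
```

The stencil choice costs only comparisons of already-computed differences, which is why the Fortran version precomputes the whole table u(i,l).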

2.4 Weighted ENO Schemes


Aside: Why is an interpolation polynomial monotone in the cell containing the discontinuity of a jump function? Suppose we're using 6 points, with the discontinuity in the middle cell. Then the polynomial is of degree five. The mean value theorem tells us that the derivative has zeros in the cells away from the discontinuity, of which there are four. But the derivative is of degree four, so it can have at most four zeros: nice! There isn't one in the middle cell! (End aside)

Idea: Don't choose stencils like ENO; use a weighted sum. Do it like this:

  u⁻_{j+1/2} = w_1 u^{(1)}_{j+1/2} + w_2 u^{(2)}_{j+1/2} + w_3 u^{(3)}_{j+1/2},

where w_1 + w_2 + w_3 = 1 and the u^{(i)}_{j+1/2} are the linear reconstructions on the candidate stencils above. The goal is to choose the weights such that a higher order than with any single u^{(i)}_{j+1/2} is achieved, if the desired smoothness is available. Choose γ_i such that the linear combination of the smaller stencils adds up to a high-order reconstruction on the large stencil. We want

  w_i = γ_i + O(Δx²) in smooth regions;

if the stencil S_i contains a discontinuity, then we would like to have w_i = O(Δx⁴).

We define a smoothness indicator β_i to measure the smoothness of the function in stencil S_i:

  w̃_i = γ_i / (ε + β_i)²,  i = 1, 2, 3,  ε = 10⁻⁶,  w_i = w̃_i / (w̃_1 + w̃_2 + w̃_3).

Shu's graduate student Jiang derived these smoothness indicators:

  β_i = Δx ∫_{I_j} [ (p_i′(x))² + Δx² (p_i″(x))² ] dx.

Homework: Code for Burgers:

  u_t + (u²/2)_x = 0,  u(x, 0) = 1 + (1/2) sin(x).

Give same output as before:

- 3rd order linear, using u⁻_{j+1/2}: ū_{j−1}, ū_j, ū_{j+1} and u⁺_{j+1/2}: ū_j, ū_{j+1}, ū_{j+2}
- 3rd order TVD
- 3rd order TVB (M = 5)
- 3rd order ENO
- 5th order ENO
- 5th order WENO


Use 3rd order Runge-Kutta. (Might need to reduce Δt to see the 5th order accuracy.) (Remember to initialize with cell averages of the initial condition, and compare to cell averages of the exact solution!)
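The reconstruction step of the 5th order WENO scheme can be written down compactly. The following sketch (standard WENO-JS formulas, written here for reference and not as the homework solution) reconstructs u⁻_{j+1/2} from five cell averages on a uniform mesh, with the Jiang-Shu smoothness indicators in closed form:

```python
import numpy as np

def weno5(um2, um1, u0, up1, up2, eps=1e-6):
    """WENO-JS reconstruction of u_{j+1/2}^- from the cell averages
    ubar_{j-2}, ..., ubar_{j+2} (uniform mesh)."""
    # Third-order reconstructions on the three substencils.
    p0 = (2*um2 - 7*um1 + 11*u0) / 6       # stencil {j-2, j-1, j}
    p1 = (-um1 + 5*u0 + 2*up1) / 6         # stencil {j-1, j, j+1}
    p2 = (2*u0 + 5*up1 - up2) / 6          # stencil {j, j+1, j+2}
    # Jiang-Shu smoothness indicators.
    b0 = 13/12*(um2 - 2*um1 + u0)**2 + 1/4*(um2 - 4*um1 + 3*u0)**2
    b1 = 13/12*(um1 - 2*u0 + up1)**2 + 1/4*(um1 - up1)**2
    b2 = 13/12*(u0 - 2*up1 + up2)**2 + 1/4*(3*u0 - 4*up1 + up2)**2
    # Linear weights gamma_i and normalized nonlinear weights w_i.
    g = (1/10, 3/5, 3/10)
    wt = [g[i] / (eps + b)**2 for i, b in enumerate((b0, b1, b2))]
    s = sum(wt)
    return (wt[0]*p0 + wt[1]*p1 + wt[2]*p2) / s

# Smooth data: reconstruct sin at an interface from exact cell averages.
dx = 0.05
X = 0.3   # interface location; cell j+i spans [X+(i-1)*dx, X+i*dx]
ub = [(np.cos(X + (i-1)*dx) - np.cos(X + i*dx)) / dx for i in (-2, -1, 0, 1, 2)]
err = abs(weno5(*ub) - np.sin(X))
print(err)   # very small: the reconstruction is 5th order in smooth regions
```

With the linear weights alone, the combination of the three substencil values reproduces the single 5th-order reconstruction on the large stencil; the nonlinear weights only deviate from them by O(Δx²) in smooth regions.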

2.5 Finite Difference Methods

We are still considering u_t + f(u)_x = 0, which we hope to approximate using

  du_j/dt + (1/Δx)[f̂_{j+1/2} − f̂_{j−1/2}] = 0,  f̂_{j+1/2} = f̂(u_{j−p}, ..., u_{j+q}).

Our requirements are accuracy and stability.

2.5.1 Accuracy

Accuracy means (1/Δx)[f̂_{j+1/2} − f̂_{j−1/2}] = f(u)_x |_{x=x_j} + O(Δx^r).

Lemma 19. (ENO paper by Shu, Osher) If there is a function h(x) (which depends on Δx) s.t.

  f(u(x)) = (1/Δx) ∫_{x−Δx/2}^{x+Δx/2} h(ξ) dξ,

then

  f(u)_x = (1/Δx) [ h(x + Δx/2) − h(x − Δx/2) ]

exactly. All that's needed to obtain a higher-order scheme is now to approximate the function h to a certain degree of accuracy: we want {ĥ_{j+1/2}} ≈ {h(x_{j+1/2})}, given {f(u_j)}. But

  f(u_j) = f(u(x_j)) = (1/Δx) ∫_{x_j−Δx/2}^{x_j+Δx/2} h(ξ) dξ = h̄_j,

so the given point values of f are exactly the cell averages of h, and {h̄_j} = {f(u_j)} → {ĥ_{j+1/2}} is precisely the reconstruction problem from before.

2.5.2 Stability

For the moment, assume f′(u) ≥ 0.

1. TVD Schemes:

a. Use an upwind-biased stencil to compute f̂_{j+1/2}, e.g. {f(u_{j−1}), f(u_j), f(u_{j+1})} → f̂_{j+1/2}. Write f̂_{j+1/2} − f(u_j) = df⁺_j.

b. Limit:

  df⁺(mod)_j = minmod(df⁺_j, f(u_{j+1}) − f(u_j), f(u_j) − f(u_{j−1})),
  f̂(mod)_{j+1/2} = f(u_j) + df⁺(mod)_j.

Then use Harten's Lemma to prove TVDness. We only have the term D_{j−1/2}, since we have a unique wind direction by assumption, in

  u^{n+1}_j = u^n_j + C_{j+1/2}(u^n_{j+1} − u^n_j) − D_{j−1/2}(u^n_j − u^n_{j−1}).

24

Section 3

By brute force, we have D j 1/2 = =

f (u j ) + df j

f (u j ) f (u j 1) + dfj u u j 1 j
f ()

with

f (u j ) f (u j 1) 1 + u j u j 1 f (u j ) f (u j 1) f (u j ) f (u j 1)
0 1 0 1

? ??
+(mod)

f (u j 1) df j 1 u j u j 1
+(mod)

+(mod)

df j 1

+(mod)

+(mod) df j

+(mod) dfj 1

Dj 1/2

2max |f (u)|.
u

In order to lift the condition on the wind direction (f′(u) ≥ 0), we need to consider only a subclass of monotone fluxes, namely those characterized by flux splitting:

  f(u) = f⁺(u) + f⁻(u),  where  df⁺(u)/du ≥ 0,  df⁻(u)/du ≤ 0,

with the flux f̂(u⁻, u⁺) = f⁺(u⁻) + f⁻(u⁺). One such example is Lax-Friedrichs: f^±(u) = (1/2)(f(u) ± αu), where α = max_u |f′(u)|.

Then: use the previous (single-wind-direction) procedure with f⁺(u) instead of f(u) to obtain f̂⁺_{j+1/2}, and the mirror-symmetric (w.r.t. j + 1/2) procedure with f⁻(u) instead of f⁺(u) to obtain f̂⁻_{j+1/2}. Thus we obtain f̂_{j+1/2} = f̂⁺_{j+1/2} + f̂⁻_{j+1/2}.

Summary of FV versus FD:

FV:
- ū_j = (1/Δx) ∫_{x_{j−1/2}}^{x_{j+1/2}} u(x, t) dx
- reconstruction {ū_j} → {u^±_{j+1/2}}
- numerical flux f̂(u⁻_{j+1/2}, u⁺_{j+1/2}): any monotone f̂(↑, ↓)
- Δx arbitrary (meshing unrestricted)

FD:
- u_j = u(x_j, t)
- reconstruction {f(u_j)} → {f̂^±_{j+1/2}}
- numerical flux f̂_{j+1/2} = f̂⁺_{j+1/2} + f̂⁻_{j+1/2}: splittable monotone flux f̂(u⁻, u⁺) = f⁺(u⁻) + f⁻(u⁺)
- Δx uniform or smoothly mappable to uniform
- not much physics in the derivation
3 Two Space Dimensions


Now consider u_t + f(u)_x + g(u)_y = 0. The good news is:

- Theoretical properties of weak solutions, entropy solutions etc. are the same as in 1D.
- All properties of monotone schemes (TVD, entropy condition, L¹-contraction, ...) are still valid in 2D.

Theorem 20. (Goodman & LeVeque) In 2D, TVD schemes are at most first order accurate.

Proof. (Very rough idea) Many things can happen in 2D: [figure omitted: an initial configuration of values 0 and 1 with low total variation can rotate into one with high total variation].

"TVD schemes" in the literature for nD means schemes which are TVD in 1D and are generalized to 2D in a dimension-by-dimension fashion, like this:

  dū_j/dt + (1/Δx)[f̂_{j+1/2} − f̂_{j−1/2}] = 0  with  f̂_{j+1/2} ← {f(u_{j−1}), f(u_j), f(u_{j+1})}

becomes

  dū_{i,j}/dt + (1/Δx)[f̂_{i+1/2,j} − f̂_{i−1/2,j}] + (1/Δy)[ĝ_{i,j+1/2} − ĝ_{i,j−1/2}] = 0

with f̂_{i+1/2,j} ← {f(u_{i−1,j}), f(u_{i,j}), f(u_{i+1,j})}. They really are not TVD in more than one dimension.

One good property we have in more than one dimension is a maximum principle. Given a scheme in Harten form, i.e.

  u^{n+1}_{i,j} = u^n_{i,j} + λ_x [ C_{i+1/2,j}(u^n_{i+1,j} − u^n_{i,j}) − D_{i−1/2,j}(u^n_{i,j} − u^n_{i−1,j}) ]
                + λ_y [ C_{i,j+1/2}(u^n_{i,j+1} − u^n_{i,j}) − D_{i,j−1/2}(u^n_{i,j} − u^n_{i,j−1}) ]

with

  C_{i+1/2,j}, D_{i−1/2,j}, C_{i,j+1/2}, D_{i,j−1/2} ≥ 0,
  1 − λ_x [C_{i+1/2,j} + D_{i−1/2,j}] − λ_y [C_{i,j+1/2} + D_{i,j−1/2}] ≥ 0,

we can proceed as follows:

  u^{n+1}_{i,j} = [1 − λ_x C_{i+1/2,j} − λ_x D_{i−1/2,j} − λ_y C_{i,j+1/2} − λ_y D_{i,j−1/2}] u^n_{i,j}
                + λ_x C_{i+1/2,j} u^n_{i+1,j} + λ_x D_{i−1/2,j} u^n_{i−1,j}
                + λ_y C_{i,j+1/2} u^n_{i,j+1} + λ_y D_{i,j−1/2} u^n_{i,j−1},

where all coefficients are nonnegative and sum to one. Thus

  min(stencil) ≤ u^{n+1}_{i,j} ≤ max(stencil),

because it is a convex combination of the values in the stencil.
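The convex-combination argument is easy to check numerically. A minimal sketch, assuming the simplest instance: first-order upwind for u_t + u_x + u_y = 0 (Harten form with C = 0, D = 1), where the coefficients stay nonnegative whenever λ_x + λ_y ≤ 1:

```python
import numpy as np

def upwind2d(u, lam_x, lam_y):
    """First-order upwind step for u_t + u_x + u_y = 0 on a periodic grid;
    a convex combination of u_{i,j}, u_{i-1,j}, u_{i,j-1} when
    lam_x + lam_y <= 1."""
    return ((1 - lam_x - lam_y) * u
            + lam_x * np.roll(u, 1, axis=0)
            + lam_y * np.roll(u, 1, axis=1))

rng = np.random.default_rng(1)
u = rng.uniform(-1, 1, size=(32, 32))
lo, hi = u.min(), u.max()
for _ in range(50):
    u = upwind2d(u, 0.4, 0.4)
    assert lo - 1e-12 <= u.min() and u.max() <= hi + 1e-12
print("maximum principle holds")
```

No new extrema are created at any step, exactly as the convex-combination bound predicts.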

3.1 FV methods in 2D

Next, let's consider FV methods in 2D. Let

  ū_{i,j} = (1/(Δx Δy)) ∫_{y_{j−1/2}}^{y_{j+1/2}} ∫_{x_{i−1/2}}^{x_{i+1/2}} u(x, y, t) dx dy,


where we note that this composes the cell-averaging operator in y with the cell-averaging operator in x. Next,

  (1/(Δx Δy)) ∫_{y_{j−1/2}}^{y_{j+1/2}} ∫_{x_{i−1/2}}^{x_{i+1/2}} f(u)_x dx dy
  = (1/(Δx Δy)) ∫_{y_{j−1/2}}^{y_{j+1/2}} [ f(u(x_{i+1/2}, y, t)) − f(u(x_{i−1/2}, y, t)) ] dy.

Thus

  d ū_{i,j}/dt
  + (1/Δx) [ (1/Δy) ∫_{y_{j−1/2}}^{y_{j+1/2}} f(u(x_{i+1/2}, y, t)) dy − (1/Δy) ∫_{y_{j−1/2}}^{y_{j+1/2}} f(u(x_{i−1/2}, y, t)) dy ]
  + (1/Δy) [ (1/Δx) ∫_{x_{i−1/2}}^{x_{i+1/2}} g(u(x, y_{j+1/2}, t)) dx − (1/Δx) ∫_{x_{i−1/2}}^{x_{i+1/2}} g(u(x, y_{j−1/2}, t)) dx ] = 0.

The equality (∗) below is what breaks when we switch to a nonlinear equation. FV Scheme:

  d ū_{i,j}/dt + (1/Δx)[f̂_{i+1/2,j} − f̂_{i−1/2,j}] + (1/Δy)[ĝ_{i,j+1/2} − ĝ_{i,j−1/2}] = 0.

3.1.1 The Linear Case

Let's consider a simple case to start:

  u_t + a u_x + b u_y = 0,  f(u) = a u,  g(u) = b u.

In this case, we only have to perform 2 reconstructions per interface, since averaging and the flux commute:

  f̂_{i+1/2,j} = a ũ_{i+1/2,j} (∗)= (1/Δy) ∫_{y_{j−1/2}}^{y_{j+1/2}} f(u(x_{i+1/2}, y, t)) dy,

where ũ_{i+1/2,j} is a 1D reconstruction (in x) from the cell averages, still averaged in y.

3.1.2 The Nonlinear Case

In general, if f(u) and g(u) are nonlinear, then we have to perform one reconstruction for each quadrature point of the edge integral, i.e. many times along one cut line through the stencil:

  {ū_{i,j}} →(1D rec in x) {ũ_{i+1/2,j}} →(1D rec in y) {u_{i+1/2, j+w_k}} → {f(u_{i+1/2, j+w_k})} →(num. int.) {f̂_{i+1/2,j}}

  {ū_{i,j}} →(1D rec in y) {ũ_{i,j+1/2}} →(1D rec in x) {u_{i+w_k, j+1/2}} → {g(u_{i+w_k, j+1/2})} →(num. int.) {ĝ_{i,j+1/2}}

(here the w_k denote quadrature points along the cell edge).

Remark 21. These considerations only matter if we are interested in order of accuracy three or greater. If we are concerned with only second order accuracy, then ū_{i,j} = u(x_i, y_j) + O(Δx², Δy²) is all we need.


3.2 Finite Difference Methods

We are still considering u_t + f(u)_x + g(u)_y = 0, but we switch the focus of our approximation to actual point values u_{i,j} ≈ u(x_i, y_j, t), to get the discretized conservation law

  du_{i,j}/dt + (1/Δx)[f̂_{i+1/2,j} − f̂_{i−1/2,j}] + (1/Δy)[ĝ_{i,j+1/2} − ĝ_{i,j−1/2}] = 0.

We need

  (1/Δx)[f̂_{i+1/2,j} − f̂_{i−1/2,j}] = f(u)_x |_{x=x_i, y=y_j} + O(Δx^r, Δy^r)

for accuracy. This is identical to the 1D routine with fixed j.

4 Systems of Conservation Laws


u_t + f(u)_x = 0, where u is a vector, and so is f. For the moment, x is still only 1-dimensional.

Example 22. Compressible flow:

  u = (ρ, ρv, E)ᵀ,  f(u) = (ρv, ρv² + p, v(E + p))ᵀ,

where ρ is density, v is velocity, E is total energy and p is pressure. For a γ-law gas, for example, we could have the constitutive relationship

  E = p/(γ − 1) + (1/2) ρ v².

E.g. for air γ = 1.4. (Now, drop the bold-for-vector notation.)

4.1 A First Attempt: Generalize Methods from AM255

Example 23. (From 255) If f(u) = A u, then we have the equation

  u_t + A u_x = 0.  (6)

If A has only real eigenvalues and a complete set of eigenvectors, then (6) is called hyperbolic. Consider

  A r_i = λ_i r_i,  so that  A R = R diag(λ_1, ..., λ_n) = R Λ,

where R has the vectors r_i in its columns. Then we obtain R⁻¹ A R = Λ. The rows l_i of R⁻¹ are called the left eigenvectors of A, with l_i A = λ_i l_i and l_i r_j = δ_{ij}. Now, perform a change of variables, namely v = R⁻¹ u, so that

  v_t + Λ v_x = 0.  (7)

The goal for the nonlinear case is to take the lessons from the linear case, but rewrite the scheme for (7) so that it only acts on u. If all the eigenvalues are positive, then we can rewrite the upwind scheme (now reinstating bold-face-for-vector, with index for x location)

  dv_j/dt + (Λ/Δx)[v_j − v_{j−1}] = 0

by multiplying with R:

  du_j/dt + (1/Δx) R Λ R⁻¹ [u_j − u_{j−1}] = 0,  i.e.  du_j/dt + (A/Δx)[u_j − u_{j−1}] = 0.

If we do not have the above eigenvalue condition, then we need a good way to write the resulting system concisely. Why not start with some notation:

  a⁺ := a if a ≥ 0, 0 otherwise;  a⁻ := a if a ≤ 0, 0 otherwise.

Thus |a| = a⁺ − a⁻ and a = a⁺ + a⁻. This notation has natural generalizations to matrices and vectors: Λ^± = diag(λ_1^±, ..., λ_n^±), A^± = R Λ^± R⁻¹. We obtain the following scheme in v, and, after multiplying with R, in u:

  dv_j/dt + (1/Δx)( Λ⁺[v_j − v_{j−1}] + Λ⁻[v_{j+1} − v_j] ) = 0,

  du_j/dt + (1/Δx)( R Λ⁺ R⁻¹[u_j − u_{j−1}] + R Λ⁻ R⁻¹[u_{j+1} − u_j] ) = 0.

Note the slightly ambiguous notation here: A⁺ is not the (entrywise) positive part of A in the above sense, even though A = A⁺ + A⁻ still holds.
4.2 How to Generalize Scalar Higher-Order Schemes to Systems


We are still considering u_t + A u_x = 0.

1. Find the eigenvalues of A, hence Λ. Also find the eigenvectors of A, hence R and R⁻¹.

2. At each point where we need to compute a flux or a reconstruction, say at x_{j+1/2}, do the following:

a. v_i = R⁻¹ u_i (i = j − p, ..., j + q).
b. Apply the scalar subroutine to each component of v to obtain a reconstruction v_{j+1/2}.
c. u_{j+1/2} = R v_{j+1/2}.

Now, why should we do this transformation instead of just applying the scalar subroutine to u? Consider this example:

  (v_1)_t + (v_1)_x = 0,  (v_2)_t − (v_2)_x = 0.

Any combination u of the two is bound to develop two shocks, travelling at different speeds. If however we calculate in v, then we retain the two nicely separated shocks. To drive home the point, ENO always counts on the fact that it can find a stencil near a shock where the function is smooth. For a point trapped between two shocks, this assumption is violated, and we will lose something. Also note that this procedure only makes sense if you are doing something nonlinear in step 2b. Next, note that our discussion is targeted at generalizing to nonlinear conservation laws. Consequently, it is really pointless to actually carry out steps 2a and 2c each time unless the matrix A is actually changing, as it will be.

Note 24. Theorem: All results about stability and convergence carry over to the case of linear systems if the numerical schemes use the above characteristic procedure.


4.3 The Nonlinear Case


If we consider the equation u_t + f(u)_x = 0, then

- There is essentially no theory.
- The numerical procedure is essentially identical to that for the linear system case, performed in (local) characteristic fields.

Additional Homework: (This + HW4 due Nov 29)

1. Add a third order finite difference version to HW4.

[One class's worth of material is missing here. It is available as a separate PDF file called 257-missedclass.pdf, courtesy of Ishani Roy.]


5 The Discontinuous Galerkin Method


u_t + f(u)_x = 0. To begin a FV discretization, we rewrite this as

  (1/Δx_j) ∫_{x_{j−1/2}}^{x_{j+1/2}} (u_t + f(u)_x) dx = 0,

which results in:

  dū_j/dt + (1/Δx_j)[f(u_{j+1/2}) − f(u_{j−1/2})] = 0.

FV in its full glory is

  dū_j/dt + (1/Δx_j)[f̂(u⁻_{j+1/2}, u⁺_{j+1/2}) − f̂(u⁻_{j−1/2}, u⁺_{j−1/2})] = 0,

where, to make this a scheme, we need a monotone flux f̂(u⁻, u⁺), which needs to satisfy the following criteria: f̂(↑, ↓), consistency f̂(u, u) = f(u), Lipschitz continuous.

For DG, we do something different. We multiply the PDE by a test function v, then integrate the result over the interval (x_{j−1/2}, x_{j+1/2}):

  ∫_{x_{j−1/2}}^{x_{j+1/2}} (u_t + f(u)_x) v dx = 0.

Now consider u and v both from a finite-dimensional function space V_h, where h = max_j (x_{j+1/2} − x_{j−1/2}). The space is given by

  V_h = {w : w|_{I_j} ∈ P^k(I_j)},

where I_j = (x_{j−1/2}, x_{j+1/2}) and P^k(I_j) is the collection of polynomials of degree ≤ k on cell I_j. We observe dim V_h = N(k + 1). Then perform integration by parts and write

  ∫_{x_{j−1/2}}^{x_{j+1/2}} u_t v dx − ∫_{x_{j−1/2}}^{x_{j+1/2}} f(u) v_x dx + f(u_{j+1/2}) v_{j+1/2} − f(u_{j−1/2}) v_{j−1/2} = 0.
To make this into a scheme: find u ∈ V_h such that

  ∫_{I_j} u_t v dx − ∫_{I_j} f(u) v_x dx + f(u_{j+1/2}) v_{j+1/2} − f(u_{j−1/2}) v_{j−1/2} = 0   (?)

is true for any test function v ∈ V_h. But the boundary terms marked (?) are meaningless, since the functions are double-valued at the spots in question. To motivate a meaning for the terms, consider the following: if we take the test function

  v = 1 for x ∈ I_j, 0 elsewhere,

we recover (evaluating v from the left at x_{j+1/2} and from the right at x_{j−1/2}, i.e. from inside the cell)

  ∫_{I_j} u_t dx + f(u_{j+1/2}) − f(u_{j−1/2}) = 0,

which is exactly reminiscent of the FV scheme, motivating the replacement

  f(u_{j+1/2}) − f(u_{j−1/2}) → f̂(u⁻_{j+1/2}, u⁺_{j+1/2}) − f̂(u⁻_{j−1/2}, u⁺_{j−1/2})

and thus the scheme

  ∫_{I_j} u_t v dx − ∫_{I_j} f(u) v_x dx + f̂(u⁻_{j+1/2}, u⁺_{j+1/2}) v⁻_{j+1/2} − f̂(u⁻_{j−1/2}, u⁺_{j−1/2}) v⁺_{j−1/2} = 0.
Pick a basis for V_h: for example, we could take

  φ_j^{(0)}(x) = 1_{I_j}(x),  φ_j^{(1)}(x) = (x − x_j) 1_{I_j}(x),  φ_j^{(2)}(x) = (x − x_j)² 1_{I_j}(x),  ...

so that V_h = span{φ_j^{(l)} : 1 ≤ j ≤ N, 0 ≤ l ≤ k}. Then

  u(x, t) = Σ_{l=0}^{k} u_j^{(l)}(t) φ_j^{(l)}(x),  x ∈ I_j.
Now take v = φ_j^{(m)}(x), m = 0, 1, ..., k, and put that into our scheme:

  Σ_{l=0}^{k} (d/dt) u_j^{(l)}(t) ∫_{x_{j−1/2}}^{x_{j+1/2}} φ_j^{(l)}(x) φ_j^{(m)}(x) dx
  − ∫_{x_{j−1/2}}^{x_{j+1/2}} f( Σ_{l=0}^{k} u_j^{(l)}(t) φ_j^{(l)}(x) ) (d/dx) φ_j^{(m)}(x) dx
  + f̂( Σ_{l=0}^{k} u_j^{(l)}(t) φ_j^{(l)}(x_{j+1/2}), Σ_{l=0}^{k} u_{j+1}^{(l)}(t) φ_{j+1}^{(l)}(x_{j+1/2}) ) φ_j^{(m)}(x_{j+1/2})
  − f̂( Σ_{l=0}^{k} u_{j−1}^{(l)}(t) φ_{j−1}^{(l)}(x_{j−1/2}), Σ_{l=0}^{k} u_j^{(l)}(t) φ_j^{(l)}(x_{j−1/2}) ) φ_j^{(m)}(x_{j−1/2}) = 0.

The first term involves the (k+1)×(k+1) matrix with entries ∫_{x_{j−1/2}}^{x_{j+1/2}} φ_j^{(l)}(x) φ_j^{(m)}(x) dx, also called the local mass matrix. Since it is invertible, working with the above yields

  (d/dt) u_j(t) + F(u_{j−1}(t), u_j(t), u_{j+1}(t)) = 0,  where  u_j(t) = (u_j^{(0)}(t), ..., u_j^{(k)}(t))ᵀ,

which, if F is locally Lipschitz (which it is), gives a well-defined scheme. If we have a linear PDE f(u) = A u, where A = A(x, t), then the scheme becomes

  (d/dt) u_j(t) + [B_{j−1} u_{j−1}(t) + C_j u_j(t) + D_{j+1} u_{j+1}(t)] = 0,

where the three matrices B_{j−1}, C_j, D_{j+1} (each of size (k+1)×(k+1)) do not depend on u.


5.1 Some Theoretical Properties of the Scheme

This scheme satisfies the cell entropy inequality for the square entropy U(u) = u²/2. Recall the general entropy inequality: for an entropy U satisfying U″(u) ≥ 0 and a matching flux

  F(u) = ∫^u U′(v) f′(v) dv,

we have U(u)_t + F(u)_x ≤ 0 in some weak sense.

Proof. Take v = u in the scheme:

  ∫_{I_j} u_t u dx − ∫_{I_j} f(u) u_x dx + f̂_{j+1/2} u⁻_{j+1/2} − f̂_{j−1/2} u⁺_{j−1/2} = 0.

Take g(u) = ∫^u f(v) dv, so that g′(u) = f(u) and f(u) u_x = g(u)_x. For U(u) = u²/2 the entropy flux is

  F(u) = ∫^u v f′(v) dv = ∫^u v df(v) = u f(u) − ∫^u f(v) dv = u f(u) − g(u).

Then

  (d/dt) ∫_{I_j} (u²/2) dx − g(u⁻_{j+1/2}) + g(u⁺_{j−1/2}) + f̂_{j+1/2} u⁻_{j+1/2} − f̂_{j−1/2} u⁺_{j−1/2} = 0.

Define

  F̂_{j+1/2} = −g(u⁻_{j+1/2}) + f̂_{j+1/2} u⁻_{j+1/2},

where we observe that F̂ is consistent, i.e. F̂(u, u) = −g(u) + f(u) u = F(u). Then

  (d/dt) ∫_{I_j} (u²/2) dx + F̂_{j+1/2} − F̂_{j−1/2} + Θ_{j−1/2} = 0,  where  Θ_{j−1/2} = F̂_{j−1/2} + g(u⁺_{j−1/2}) − f̂_{j−1/2} u⁺_{j−1/2}.

We would like to show Θ_{j−1/2} ≥ 0 to prove the cell entropy inequality. Suppressing the index j − 1/2,

  Θ = −g(u⁻) + f̂(u⁻, u⁺) u⁻ + g(u⁺) − f̂(u⁻, u⁺) u⁺
    = g(u⁺) − g(u⁻) − f̂(u⁻, u⁺)(u⁺ − u⁻)
    = g′(ξ)(u⁺ − u⁻) − f̂(u⁻, u⁺)(u⁺ − u⁻)        (mean value theorem, ξ between u⁻ and u⁺)
    = (u⁺ − u⁻)(f(ξ) − f̂(u⁻, u⁺))
    = (u⁺ − u⁻)(f̂(ξ, ξ) − f̂(u⁻, u⁺)).

After a simple case distinction on u⁻ ≶ u⁺ and using f̂(↑, ↓), we find that both factors have the same sign, so Θ_{j−1/2} ≥ 0, which completes the proof.