
Review

Elements of the Difference Method


Physics 160, Spring 2014
Discrete Representation of a Continuous Variable

We can approximate a continuous function f as

$f_j \equiv f(x_j)$   (2.3)

and for any

$x_j \le x \le x_{j+1}$   (2.4)

we can interpolate f(x) as

$f^* = \epsilon f_{j+1} + (1 - \epsilon) f_j$   (2.6)

where

$\epsilon \equiv \dfrac{x - x_j}{x_{j+1} - x_j}$   (2.5)
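As a concrete illustration of (2.3)-(2.6), here is a minimal Python sketch of mesh sampling and linear interpolation; the mesh, the test function sin(x), and the helper name interpolate are illustrative choices, not from the text.

```python
import numpy as np

# Mesh representation of a continuous function (eq. 2.3): f_j = f(x_j)
x_mesh = np.linspace(0.0, 2.0 * np.pi, 17)   # mesh points x_j
f_mesh = np.sin(x_mesh)                      # components f_j (example function)

def interpolate(x, x_mesh, f_mesh):
    """Linear interpolation (eqs. 2.5-2.6): f* = eps*f_{j+1} + (1 - eps)*f_j."""
    j = np.searchsorted(x_mesh, x) - 1              # index with x_j <= x <= x_{j+1}
    j = np.clip(j, 0, len(x_mesh) - 2)
    eps = (x - x_mesh[j]) / (x_mesh[j + 1] - x_mesh[j])    # eq. 2.5
    return eps * f_mesh[j + 1] + (1.0 - eps) * f_mesh[j]   # eq. 2.6

print(interpolate(1.0, x_mesh, f_mesh), np.sin(1.0))   # interpolated vs exact value
```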
Discrete Representation of a Continuous Variable

This approximation only retains structures larger than $\Delta x_j$.

Figure 2.2. A function f of the independent variable x is represented as a vector of components f_j on the mesh points j: (a) the mesh representation is a poor approximation to a rapidly varying function; (b) a good approximation for a slowly varying (long-wavelength) function.
Difference Derivatives in Space

If f does not change rapidly over $\Delta$, we approximate the derivative by the centered difference

$D_x f_j \equiv \dfrac{f_{j+1} - f_{j-1}}{2\Delta}$   (2.19)

Figure 2.3. The difference approximation to the first derivative of a function: (a) a poor approximation; (b) a good approximation.
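A short sketch of the centered difference (2.19) on a uniform mesh; the one-sided treatment of the two end points is my own choice for completeness and is not specified in the text.

```python
import numpy as np

def centered_difference(f_mesh, delta):
    """D_x f_j = (f_{j+1} - f_{j-1}) / (2*Delta), eq. (2.19), for interior points."""
    dfdx = np.empty_like(f_mesh)
    dfdx[1:-1] = (f_mesh[2:] - f_mesh[:-2]) / (2.0 * delta)
    # One-sided differences at the two ends (an assumption; the text treats interior points only)
    dfdx[0] = (f_mesh[1] - f_mesh[0]) / delta
    dfdx[-1] = (f_mesh[-1] - f_mesh[-2]) / delta
    return dfdx

x = np.linspace(0.0, 2.0 * np.pi, 65)
# Maximum error of the difference derivative of sin(x) against cos(x)
print(np.max(np.abs(centered_difference(np.sin(x), x[1] - x[0]) - np.cos(x))))
```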
Difference Derivatives in Space

Using Fourier modes,

$u = g e^{ikx}$   (2.20)

$\dfrac{du}{dx} = ikg\,e^{ikx} = iku$   (2.21)


This can be combined with (2.19) to get a difference derivative:

$D_x u = \dfrac{g e^{ikx_{j+1}} - g e^{ikx_{j-1}}}{2\Delta}$   (2.22)
Difference Derivatives in Space

This leads to

$D_x u = \dfrac{g}{2\Delta}\left(e^{ik(x_j+\Delta)} - e^{ik(x_j-\Delta)}\right) = \dfrac{g e^{ikx_j}}{\Delta}\,\dfrac{1}{2}\left(e^{ik\Delta} - e^{-ik\Delta}\right)$   (2.23)

$D_x u = \dfrac{iu}{\Delta}\sin(k\Delta)$   (2.24)

Expanding the sine, the first difference operator gives

$D_x u = iuk\left(1 - \dfrac{k^2\Delta^2}{6}\right) + O(k^4\Delta^4)$   (2.25)

$D_x \simeq \left(1 - \dfrac{k^2\Delta^2}{6}\right)\dfrac{d}{dx} + O(k^4\Delta^4)$   (2.26)
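A quick numerical check of (2.24)-(2.25): the centered difference acts on a Fourier mode like multiplication by $i\sin(k\Delta)/\Delta$ rather than $ik$, so short wavelengths (large $k\Delta$) are increasingly misrepresented. The values of k and $\Delta$ below are illustrative.

```python
import numpy as np

delta = 0.1
for k in (1.0, 5.0, 10.0, 20.0):
    k_eff = np.sin(k * delta) / delta             # eq. (2.24): D_x acts like i * k_eff
    k_series = k * (1.0 - (k * delta) ** 2 / 6)   # leading terms of eq. (2.25)
    print(f"k={k:5.1f}  true k={k:7.3f}  k_eff={k_eff:7.3f}  series={k_series:7.3f}")
```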
General Formulation of the Initial-Value Problem

Time-like derivatives are different from space-like derivatives
Must specify an integration procedure on the time mesh
Applies to any problem with a one-point boundary condition
The initial-value problem is of fundamental importance
General Formulation of the Initial-Value Problem

Given a state vector $u(\mathbf{r}, t)$ in a region $R(\mathbf{r})$, we have an initial-value equation that describes the state of the system,

$\dfrac{du}{dt} = Lu$   (2.33)

where L is in general some nonlinear operator and u is defined on the surface of R for all time.
General Formulation of the Initial-Value Problem

Generic initial-value problem:
Divide time into intervals $\Delta t$
Specific time points are usually denoted with a superscript n
Integrate the equation in time in parallel with the real time of the computer

$t^n = \sum_{n'=1}^{n} \Delta t^{n'}$   (2.39)

Figure 2.5. A time mesh. Initial-value problems are integrated over small time increments $\Delta t^n$.
General Formulation of the Initial-Value Problem

Integration relates the n and n+1 state vectors:

$u^{n+1} = u^n + \int_{t^n}^{t^{n+1}} Lu\,dt$   (2.40)

This can be rewritten with a Taylor expansion, which is usually approximated to second order:

$u^{n+1} = u^n + Lu^n\,\Delta t + \dfrac{(\Delta t)^2}{2}\,\dfrac{d}{dt}(Lu)\Big|_{t^n}$   (2.43)
General Formulation of the Initial-Value Problem

Defining the time derivative can be tricky. Using the time step ahead ($t^{n+1}$),

$u^{n+1} = u^n + Lu^n (1-\alpha)\Delta t + Lu^{n+1}\alpha\Delta t$   (2.44)

$\alpha$ is an interpolation parameter, $0 \le \alpha \le 1$
Second-order accuracy only when $\alpha = \tfrac{1}{2}$
When $\alpha = 0$, the method is explicit:

$u^{n+1} = (I + \Delta t\,L)\,u^n$   (2.45)

$\alpha \ne 0$ gives an implicit method:

$(I - \alpha\Delta t\,L)\,u^{n+1} = (I + (1-\alpha)\Delta t\,L)\,u^n$   (2.46)
General Formulation of the Initial-Value Problem

If we assume that the left-hand side is non-singular,

$u^{n+1} = (I - \alpha\Delta t\,L)^{-1}(I + (1-\alpha)\Delta t\,L)\,u^n \equiv T(\Delta t, \Delta)\,u^n$   (2.47)

T relates the states of the system over points on the time mesh.
For PDEs, T is a difference operator which couples the dependent variables on the space mesh, of step $\Delta$.
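A minimal sketch of the one-parameter family (2.44)-(2.47) for a linear operator L, built here as an assumed second-difference (diffusion) operator on a periodic mesh; the operator, mesh size, and time step are my own illustrative choices. The largest eigenvalue magnitude of T plays the role of the amplification factor.

```python
import numpy as np

def advance_matrix(L, dt, alpha):
    """T(dt) = (I - alpha*dt*L)^{-1} (I + (1-alpha)*dt*L), eq. (2.47)."""
    I = np.eye(L.shape[0])
    return np.linalg.solve(I - alpha * dt * L, I + (1.0 - alpha) * dt * L)

# Example L: second-difference (diffusion) operator on a periodic mesh of step Delta
N, delta, D = 32, 1.0, 1.0
L = (np.roll(np.eye(N), 1, axis=1) - 2 * np.eye(N) + np.roll(np.eye(N), -1, axis=1)) * D / delta**2

dt = 1.0
for alpha in (0.0, 0.5, 1.0):   # explicit, time-centered, fully implicit
    g = np.max(np.abs(np.linalg.eigvals(advance_matrix(L, dt, alpha))))
    print(f"alpha={alpha:3.1f}  max |eigenvalue of T| = {g:.3f}")
```

With these values the explicit choice $\alpha = 0$ amplifies errors, while $\alpha = \tfrac{1}{2}$ and $\alpha = 1$ do not, matching the discussion of explicit versus implicit methods.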
Requirements of a Difference Solution

The integration operator T,

$u^{n+1} = T(\Delta t, \Delta)\,u^n$

Relates time levels
Is not unique
Depends upon choices:
Integration scheme in time
Difference scheme in space
Four important properties:
Consistency
Accuracy
Stability
Efficiency
Consistency of Approximation

We must demand that our difference system is consistent with the differential system.
This can be specified as

$\lim_{\Delta t \to 0}\ \lim_{\Delta \to 0}\ \dfrac{T(\Delta t, \Delta) - I}{\Delta t} = L$   (2.48)

where L is the differential operator and the ratio $\Delta t/\Delta$ is held finite.
Accuracy of Approximation

Truncation error
Arises from the representation of a continuous variable by discrete points
Depends on the mesh intervals in time and space
Round-off error
The computer only stores a finite number of decimal places
Rounding of the smallest bit can have a cumulative effect
Arithmetic on a computer is not exact
Modern computing power makes round-off less of a problem
Stability of Scheme

A numerical method is stable if a small error at any stage produces a smaller cumulative error.
Define the error at step n as $\epsilon^n$. We are interested in the amplification of this error at step n+1.
Calling g the amplification factor,

$\epsilon^{n+1} = g\,\epsilon^n$   (2.49)

Stability requires

$|\epsilon^{n+1}| \le |\epsilon^n|$   (2.50)

$|g\,\epsilon^n| \le |\epsilon^n|$   (2.51)

$|g| \le 1$   (2.52)
Efficiency of Scheme

Finite operation time and memory mean we cannot use an arbitrarily complex scheme.
Efficiency is measured by the total number of operations performed by the central processor of the computer.
Efficiency decreases with greater complexity of the method, while accuracy increases with increasing complexity, so we must compromise between accuracy and efficiency.
Integration of ODEs

We will start simple and develop sophistication.
Consider an ODE

$\dfrac{du}{dt} + f(u, t) = 0$   (2.61)

$u(t_0) = u_0$   (2.62)

In general, f can be complex.
Integrate using tools we know:

$u^{n+1} = u^n - \int_{t^n}^{t^{n+1}} f(u, t)\,dt$   (2.63)

$\Delta t = t^{n+1} - t^n$   (2.64)

Many different methods can be used to approximate the integral.


Euler First-Order Method

Uses the value at $t^n$ to approximate the value at $t^{n+1}$:

$u^{n+1} = u^n - f(u^n, t^n)\,\Delta t$   (2.65)

Figure 2.6. The approximation to $\int_{t^n}^{t^{n+1}} f(u(t), t)\,dt$ in the Euler method (equation 2.65). The shaded region shows the approximation to the integral under the curve.
Euler First-Order Method

Introduce an error $\epsilon^n$:

$u^{n+1} + \epsilon^{n+1} = u^n + \epsilon^n - f(u^n + \epsilon^n, t^n)\,\Delta t$   (2.66)

$f(u^n + \epsilon^n, t^n) \simeq f(u^n, t^n) + \dfrac{\partial f}{\partial u}\Big|_n \epsilon^n + O\!\left((\epsilon^n)^2\right)$   (2.67)

$\epsilon^{n+1} = \epsilon^n - \dfrac{\partial f}{\partial u}\Big|_n \Delta t\,\epsilon^n + O\!\left((\epsilon^n)^2\right)$   (2.68)

So the amplification factor is

$g = 1 - \dfrac{\partial f}{\partial u}\Big|_n \Delta t$   (2.69)
Euler First-Order Method

Three classes of problems:
Growth type, $-\partial f/\partial u|_n > 0$
Decay type, $-\partial f/\partial u|_n < 0$
Oscillatory, $\partial f/\partial u|_n$ imaginary
Must demand stability
Euler First-Order Method

For the decay type, stability requires

$\left|\dfrac{\partial f}{\partial u}\right|_n \Delta t \le 2$   (2.70)

$\Delta t \le \dfrac{2}{|\partial f/\partial u|_n}$   (2.71)

This gives an upper limit on the time step.
Note: we used the process of linearization to find these conditions.
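A sketch of the Euler step (2.65) applied to a decay-type problem $f(u, t) = \gamma u$ (so $du/dt = -\gamma u$), illustrating the limit $\Delta t \le 2/\gamma$ of (2.71); $\gamma$ and the step sizes are illustrative.

```python
import numpy as np

def euler(f, u0, dt, nsteps):
    """Euler first-order method, eq. (2.65): u^{n+1} = u^n - f(u^n, t^n) * dt."""
    u = np.empty(nsteps + 1)
    u[0] = u0
    for n in range(nsteps):
        u[n + 1] = u[n] - f(u[n], n * dt) * dt
    return u

gamma = 1.0
f = lambda u, t: gamma * u              # decay type: du/dt = -gamma*u

for dt in (0.5, 1.9, 2.1):              # below, near, and above the limit 2/gamma
    u = euler(f, 1.0, dt, 40)
    print(f"dt={dt:4.1f}  |u| after 40 steps = {abs(u[-1]):.3e}")
```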
Leapfrog Method

Time-center the integrand, so the method is second-order accurate.

Figure 2.8. The approximation to $\int_{t^n}^{t^{n+2}} f(u(t), t)\,dt$ in the leapfrog method (equations 2.81).
Leapfrog Method
at $t^{n+1}$:   $u^{n+1} = u^{n-1} - f(u^n, t^n)\,2\Delta t$
at $t^{n+2}$:   $u^{n+2} = u^{n} - f(u^{n+1}, t^{n+1})\,2\Delta t$   (2.81)

Requires both $u^0$ and $u^1$
Very sensitive to the accuracy of $u^1$
Can use sub-intervals for the first $\Delta t$
Can also use a higher-order expansion
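A sketch of the leapfrog update (2.81) for an oscillatory problem $f(u, t) = i\omega u$, with $u^1$ generated from $u^0$ by small Euler sub-steps as one of the start-up options mentioned above; $\omega$, $\Delta t$, and the number of sub-steps are illustrative.

```python
import numpy as np

def leapfrog(f, u0, u1, dt, nsteps):
    """Leapfrog method, eq. (2.81): u^{n+1} = u^{n-1} - f(u^n, t^n) * 2*dt."""
    u = np.empty(nsteps + 1, dtype=complex)
    u[0], u[1] = u0, u1
    for n in range(1, nsteps):
        u[n + 1] = u[n - 1] - f(u[n], n * dt) * 2.0 * dt
    return u

omega = 1.0
f = lambda u, t: 1j * omega * u     # oscillatory type: du/dt = -i*omega*u
dt = 0.1                            # satisfies omega*dt <= 1 (eq. 2.86)

# Start-up: build u^1 from u^0 with many small Euler sub-steps
u1 = 1.0 + 0.0j
for _ in range(100):
    u1 -= f(u1, 0.0) * (dt / 100)

u = leapfrog(f, 1.0 + 0.0j, u1, dt, 1000)
print(abs(u[-1]))   # amplitude stays near 1 for the oscillatory case
```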
Leapfrog Method

Applying the same method of error analysis,

$\epsilon^{n+1} = \epsilon^{n-1} - 2\,\dfrac{\partial f}{\partial u}\Big|_n \Delta t\,\epsilon^n$   (2.82)

$g^2 = 1 - 2\,\dfrac{\partial f}{\partial u}\Big|_n \Delta t\,g$   (2.83)

If $\lambda \equiv \dfrac{\partial f}{\partial u}\Big|_n \Delta t$, then

$g = -\lambda \pm \sqrt{1 + \lambda^2}$   (2.84)

The amplification factor is greater than one for real $\lambda$, so the method is not appropriate for decay- or growth-type problems.
Leapfrog Method

Let $\lambda = i\omega\Delta t$; then

$g = -i\omega\Delta t \pm \sqrt{1 - (\omega\Delta t)^2}, \qquad gg^* = 1 \quad (\text{for } \omega\Delta t \le 1)$   (2.85)

So the stability requirement is

$\omega\,\Delta t \le 1.0$, i.e., $\Delta t \le \dfrac{1}{\omega}$   (2.86)

so the timestep must be smaller than the characteristic time scale of the problem.
Explicit Two-Step Method

Use an intermediate timestep in the Euler method.

Figure 2.9. The approximation to $\int_{t^n}^{t^{n+1}} f(u(t), t)\,dt$ in the two-step method (equations 2.91, 2.92). The first step uses the Euler method to "time-center" the integral at $t^{n+1/2}$ (dotted region). The second step evaluates the integral to second-order accuracy (shaded region).
Explicit Two-Step Method

Now we include $u^{n+1/2}$ in our step calculations:

auxiliary:   $u^{n+1/2} = u^n - f(u^n, t^n)\,\dfrac{\Delta t}{2}$   (2.91)

$u^{n+1} = u^n - f(u^{n+1/2}, t^{n+1/2})\,\Delta t$   (2.92)

The error analysis gives

$\epsilon^{n+1} = \left[1 - \dfrac{\partial f}{\partial u}\Big|_{n+1/2}\Delta t\left(1 - \dfrac{\partial f}{\partial u}\Big|_n \dfrac{\Delta t}{2}\right)\right]\epsilon^n$   (2.93)

$g = 1 - \lambda + \tfrac{1}{2}\lambda^2$

where we assume that $\dfrac{\partial f}{\partial u}\Big|_{n+1/2} \simeq \dfrac{\partial f}{\partial u}\Big|_n$ and $\lambda \equiv \dfrac{\partial f}{\partial u}\Big|_n \Delta t$. Stability for real $\lambda$ requires

$\Delta t \le \dfrac{2.0}{|\partial f/\partial u|_n}$   (2.94)
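A sketch of the explicit two-step scheme (2.91)-(2.92) on the same decay-type problem $f(u, t) = \gamma u$ used for the Euler example; the parameter values are illustrative, and the final comparison shows the improved (second-order) accuracy.

```python
import numpy as np

def two_step(f, u0, dt, nsteps):
    """Explicit two-step method, eqs. (2.91)-(2.92)."""
    u = np.empty(nsteps + 1)
    u[0] = u0
    for n in range(nsteps):
        t = n * dt
        u_half = u[n] - f(u[n], t) * dt / 2.0            # auxiliary half step, eq. (2.91)
        u[n + 1] = u[n] - f(u_half, t + dt / 2.0) * dt   # full step, eq. (2.92)
    return u

gamma, dt, nsteps = 1.0, 0.1, 100
f = lambda u, t: gamma * u                               # du/dt = -gamma*u
u = two_step(f, 1.0, dt, nsteps)
print(u[-1], np.exp(-gamma * nsteps * dt))               # numerical vs exact solution
```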
Implicit Second-Order Method

Uses a time average between $t^n$ and $t^{n+1}$:

$u^{n+1} = u^n - \left[f(u^n, t^n) + f(u^{n+1}, t^{n+1})\right]\dfrac{\Delta t}{2}$   (2.95)

Figure 2.10. The approximation to $\int_{t^n}^{t^{n+1}} f(u(t), t)\,dt$ in the implicit second-order method (equation 2.95). A "time-average" for the function f is used.
Implicit Second-Order Method

We can determine the stability:

$g = 1 - \dfrac{\partial f}{\partial u}\Big|_n \dfrac{\Delta t}{2} - \dfrac{\partial f}{\partial u}\Big|_{n+1} \dfrac{\Delta t}{2}\,g$   (2.96)

$g = \dfrac{1 - \dfrac{\partial f}{\partial u}\Big|_n \dfrac{\Delta t}{2}}{1 + \dfrac{\partial f}{\partial u}\Big|_{n+1} \dfrac{\Delta t}{2}}$   (2.97)

The amplification factor is always smaller than or equal to unity.
The price is some added algebraic complexity, since we must solve for $u^{n+1}$.
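A sketch of the implicit second-order step (2.95) for the linear case $f(u, t) = \gamma u$, where the implicit equation can be solved for $u^{n+1}$ in closed form and the update factor is exactly the amplification factor (2.97); the values of $\gamma$ and $\Delta t$ are illustrative.

```python
import numpy as np

def implicit_linear(gamma, u0, dt, nsteps):
    """Implicit second-order method, eq. (2.95), for the linear case f(u,t) = gamma*u.
    u^{n+1} = u^n - (f^n + f^{n+1}) * dt/2  =>  u^{n+1} = g * u^n with g from eq. (2.97)."""
    g = (1.0 - gamma * dt / 2.0) / (1.0 + gamma * dt / 2.0)
    return u0 * g ** np.arange(nsteps + 1)

gamma = 1.0
for dt in (0.1, 10.0):     # stable for any dt, since |g| <= 1
    u = implicit_linear(gamma, 1.0, dt, 50)
    print(f"dt={dt:5.1f}  |u| after 50 steps = {abs(u[-1]):.3e}")
```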
Method Summary

Method | Algorithm | Stability | Order | Amplification factor
Euler | $u^{n+1} = u^n - f^n\,\Delta t$ | Stable for real $\partial f/\partial u$, if $\Delta t < 2/|\partial f/\partial u|$; unstable for imaginary | $\epsilon = O(\Delta t)$ | $g = 1 - \lambda$
Leapfrog | $u^{n+1} = u^{n-1} - f^n\,2\Delta t$ | Unstable for real $\partial f/\partial u$; stable for imaginary, if $\Delta t < 1/\omega$ | $\epsilon = O(\Delta t^2)$ | $g = -\lambda \pm (\lambda^2 + 1)^{1/2}$
Two-Step | $u^{n+1/2} = u^n - f^n\,\Delta t/2$; $u^{n+1} = u^n - f^{n+1/2}\,\Delta t$ | Stable for real $\partial f/\partial u$, if $\Delta t < 2/|\partial f/\partial u|$; marginally stable for imaginary | $\epsilon = O(\Delta t^2)$ | $g = 1 - \lambda + \tfrac{1}{2}\lambda^2$
Implicit | $u^{n+1} = u^n - (f^n + f^{n+1})\,\tfrac{1}{2}\Delta t$ | Stable for real $\partial f/\partial u$; stable for imaginary, for all $\Delta t$ | $\epsilon = O(\Delta t^2)$ | $g = (1 - \tfrac{1}{2}\lambda)/(1 + \tfrac{1}{2}\lambda)$
Adams-Bashforth | $u^{n+1} = u^n - (3f^n - f^{n-1})\,\tfrac{1}{2}\Delta t$ | Stable for real $\partial f/\partial u$, if $\Delta t < 1/|\partial f/\partial u|$; marginally stable for imaginary | $\epsilon = O(\Delta t^2)$ | $g = \tfrac{1}{2}\left[1 - \tfrac{3}{2}\lambda \pm \left(\tfrac{9}{4}\lambda^2 - \lambda + 1\right)^{1/2}\right]$

(Here $\lambda \equiv \partial f/\partial u|_n\,\Delta t$, and $\omega$ is the oscillation frequency for the imaginary case.)
Numerical Differentiation

Centered difference:

$D_x f_j = \dfrac{f_{j+1} - f_{j-1}}{2\Delta}$   (A.1)

Forward difference:

$D_x^{+} f_j = \dfrac{f_{j+1} - f_j}{\Delta}$   (A.2)

Backward difference:

$D_x^{-} f_j = \dfrac{f_j - f_{j-1}}{\Delta}$   (A.3)
Numerical Integration

Make a change of variable $x \to x - x_j$:

$\int_{x_j}^{x_{j+1}} f(x)\,dx = \int_0^{\Delta} f(x)\,dx$   (A.4)

Apply forward differencing:

$\int_0^{\Delta} f(x)\,dx \simeq \int_0^{\Delta}\left[f_j + \dfrac{f_{j+1} - f_j}{\Delta}\,x\right]dx$   (A.5)

We get the Trapezoidal Rule:

$\int_0^{\Delta} f(x)\,dx \simeq \Delta f_j + \tfrac{1}{2}\Delta^2\left(\dfrac{f_{j+1} - f_j}{\Delta}\right) = \Delta\left(\tfrac{1}{2} f_j + \tfrac{1}{2} f_{j+1}\right)$   (A.6)
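A sketch of the trapezoidal rule (A.6) applied cell by cell and summed over a mesh; the test integrand sin(x) on [0, π] is illustrative (exact value 2).

```python
import numpy as np

def trapezoid(f_mesh, delta):
    """Trapezoidal rule, eq. (A.6), summed over all mesh cells."""
    return np.sum(delta * (0.5 * f_mesh[:-1] + 0.5 * f_mesh[1:]))

x = np.linspace(0.0, np.pi, 101)
print(trapezoid(np.sin(x), x[1] - x[0]))   # approximately 2.0
```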
Part 1: Assign Charges to a Spatial Grid
System length L; grid spacing $\Delta = 1$; NG total number of grid points

[Diagram: a 1D grid from 0 to L with grid points 1, 2, 3, ..., i-1, i, i+1, ..., NG and NC cells of size $\Delta = 1$]

A particle's position is a distance $\Delta x$ from grid point i and $(1 - \Delta x)$ from grid point i + 1

[Diagram: a particle between grid points i and i+1, a fraction $\Delta x$ from point i, with $\Delta = 1$]

The goal is to determine the charge density on the grid points from the particle coordinates {x, v}.
Assigning Charge: Nearest Grid Point
The simplest method is to assign all of the charge from a particle to the nearest grid point (NGP).
The nearest grid point i to the particle gets 1 full unit of charge.
The particle size is effectively $\Delta$: the charge is smeared between grid point i and i + 1.

[Diagram: the effective shape of a particle of charge q under NGP, a distance $\Delta x$ from grid point i, between grid points i and i+1]
Assigning Charge: Linear Interpolation
A better method is to use linear interpolation to
distribute charge to the two nearest grid points
Fractional charge is placed on grid points i and i + 1:

[Diagram: a particle of charge q located a distance $\Delta x$ from grid point i; charge $q(1-\Delta x)$ is assigned to point i and $q\,\Delta x$ to point i + 1]

$\rho(i) = q(1 - \Delta x), \qquad \rho(i+1) = q\,\Delta x$


The size of the particle is now effectively more than $\Delta$, since its charge is spread over 2 grid points, giving a smoother charge distribution compared to NGP.
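A sketch of linear-interpolation charge assignment on a grid with $\Delta = 1$; the array layout, the assumed periodic wrap-around, and the example positions are my own choices for illustration.

```python
import numpy as np

def assign_charge_linear(positions, q, NG):
    """Distribute charge q from each particle to its two nearest grid points
    (grid spacing Delta = 1): rho[i] += q*(1 - dx), rho[i+1] += q*dx."""
    rho = np.zeros(NG)
    for x in positions:
        i = int(np.floor(x))            # left grid point
        dx = x - i                      # fractional distance from grid point i
        rho[i % NG] += q * (1.0 - dx)
        rho[(i + 1) % NG] += q * dx     # assumed periodic system
    return rho

positions = np.array([2.25, 2.75, 7.5])     # particle positions in grid units
print(assign_charge_linear(positions, q=1.0, NG=10))
```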
Origin of Charge Weighting Schemes

Charge weighting schemes come from a Taylor expansion:

$f(x) \simeq f(x_0) + \dfrac{x - x_0}{x_1 - x_0}\left[f(x_1) - f(x_0)\right]$

Using only the first term gives NGP: $f(x) = f(x_0)$
Keeping the next term gives $f(x) = f(x_0)(1 - \epsilon) + f(x_1)\,\epsilon$ with $\epsilon = \dfrac{x - x_0}{x_1 - x_0}$, which is equivalent to linear interpolation
Keeping additional terms in the Taylor expansion gives higher-order interpolation schemes
