Lecture 8
[Figure: space-time diagram for u = u(x, t) showing the domain of dependence of the present solution on past values]
Time discretization

  ∂u/∂t + Lu = f   in Ω × (0, T)     time-dependent PDE
  Bu = 0           on Γ × (0, T)     boundary conditions
  u = u₀           in Ω at t = 0     initial condition

Discretize in time: u⁰ ≈ u₀ in Ω,   t^{n+1} = t^n + Δt
Space-time discretization

Space discretization: L ≈ L_h   (method of lines, MOL)
  Unknowns: u_j(t)
  FEM approximation: u_h(x, t) = Σ_{j=1}^N u_j(t) φ_j(x)
  Semi-discretized equations: du_h/dt + L_h u_h = f_h

Time discretization: on (t^n, t^{n+1}),  n = 0, 1, ..., M − 1
  Unknowns: u_i^n ≈ u(x_i, t^n)

The space and time variables are essentially decoupled and can be discretized
independently to obtain a sequence of (nonlinear) algebraic systems

  A(u^{n+1}, u^n) u^{n+1} = b(u^n)
Weak formulation

  ∫_Ω (∂u/∂t + Lu − f) v dx = 0   ∀v ∈ V

that is,

  d/dt (u, v) + a(u, v) = l(v)   ∀v ∈ V,   t ∈ (t^n, t^{n+1})

Differential-Algebraic Equations

  d/dt (u_h, v_h) + a(u_h, v_h) = l(v_h)   ∀v_h ∈ V_h,   t ∈ (t^n, t^{n+1})

In matrix form

  M_C du/dt + Au = b,   t ∈ (t^n, t^{n+1})

where M_C = {m_ij} is the mass matrix and u(t) = [u_1(t), ..., u_N(t)]^T.
Matrix coefficients

  m_ij = (φ_i, φ_j),   a_ij = a(φ_i, φ_j),   b_i = l(φ_i)

Mass lumping

  M_C → M_L = diag{m_i},   m_i = Σ_j m_ij = (φ_i, Σ_j φ_j) = ∫_Ω φ_i dx

since Σ_j φ_j ≡ 1.
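As an illustration of row-sum lumping, the following sketch assembles the consistent P1 mass matrix on a uniform 1D mesh and lumps it; the element count and mesh spacing are illustrative assumptions.

```python
import numpy as np

# Sketch of row-sum mass lumping for P1 finite elements on a uniform
# 1D mesh of N elements with spacing h (both values are illustrative).
def p1_mass_matrices(N, h):
    """Return the consistent mass matrix M_C and its lumped
    counterpart M_L = diag{m_i} with m_i = sum_j m_ij."""
    MC = np.zeros((N + 1, N + 1))
    Mloc = h / 6.0 * np.array([[2.0, 1.0], [1.0, 2.0]])  # local P1 mass matrix
    for e in range(N):                        # assemble element by element
        MC[e:e + 2, e:e + 2] += Mloc
    ML = np.diag(MC.sum(axis=1))              # m_i = integral of phi_i
    return MC, ML

MC, ML = p1_mass_matrices(N=4, h=0.25)
# Lumping preserves the total mass: sum of entries = measure of the domain
print(MC.sum(), ML.trace())
```

Since the basis functions sum to one, both matrices reproduce the length of the interval (here 4 × 0.25 = 1).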
  M_L du/dt + Au = b

Generic form of the semi-discrete problem:

  du/dt + R(u, t) = 0,   u(t^n) = u^n

θ-scheme (0 ≤ θ ≤ 1):

  (u^{n+1} − u^n)/Δt + θ R(u^{n+1}, t^{n+1}) + (1 − θ) R(u^n, t^n) = 0,   n = 0, ..., M − 1

  θ = 0     forward Euler     explicit     O(Δt)
  θ = 1/2   Crank-Nicolson    implicit     O(Δt²)
  θ = 1     backward Euler    implicit     O(Δt)

Applied to M_C du/dt + Au = b, this yields

  M_C (u^{n+1} − u^n)/Δt + A[θ u^{n+1} + (1 − θ) u^n] = b,   0 ≤ θ ≤ 1,   n = 0, ..., M − 1
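As a sanity check, the θ-scheme can be applied to the scalar model problem du/dt + λu = 0, i.e. R(u, t) = λu; the values λ = 1, T = 1 and M = 100 steps below are illustrative choices, not taken from the lecture.

```python
import numpy as np

# Sketch of the theta-scheme for the scalar model problem
# du/dt + lam*u = 0, i.e. R(u, t) = lam*u; lam, T and M are
# illustrative choices.
def theta_step(u, lam, dt, theta):
    """(u_new - u)/dt + theta*lam*u_new + (1-theta)*lam*u = 0, solved for u_new."""
    return u * (1.0 - (1.0 - theta) * lam * dt) / (1.0 + theta * lam * dt)

def solve(lam, T, M, theta, u0=1.0):
    dt, u = T / M, u0
    for _ in range(M):
        u = theta_step(u, lam, dt, theta)
    return u

exact = np.exp(-1.0)                          # u(T) for u' = -u, u(0) = 1
for theta, name in [(0.0, "forward Euler"), (0.5, "Crank-Nicolson"),
                    (1.0, "backward Euler")]:
    err = abs(solve(lam=1.0, T=1.0, M=100, theta=theta) - exact)
    print(f"{name:15s} error = {err:.2e}")
```

The Crank-Nicolson error is O(Δt²) and comes out orders of magnitude smaller than the O(Δt) errors of the two Euler schemes.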
Exact integration

  ∫_{t^n}^{t^{n+1}} (du/dt) dt = u^{n+1} − u^n   ⇒   u^{n+1} = u^n + ∫_{t^n}^{t^{n+1}} f(t, u(t)) dt

By the mean value theorem,

  u^{n+1} = u^n + f(τ, u(τ)) Δt,   τ ∈ (t^n, t^{n+1})
[Figure: quadrature interpretation of the time integral over (t^n, t^{n+1}):
 FE (forward Euler): left endpoint rule; BE (backward Euler): right endpoint rule;
 CN (Crank-Nicolson): trapezoidal rule; LF (leapfrog method): midpoint rule]
Time levels:

  t^n = nΔt,   Δt = T/M,   M = T/Δt,   n = 0, ..., M − 1

Global error:  ε_glob = M ε_loc = O(Δt^p),   p ≥ 1
Model problem: 1D convection-diffusion equation

  ∂u/∂t + v ∂u/∂x = d ∂²u/∂x²   in (0, X) × (0, T)
  u(0) = u(1) = 0,   u|_{t=0} = u₀

so that

  f(t, u(t)) = −v ∂u/∂x + d ∂²u/∂x²

Lagrangian representation: along the characteristics defined by dx(t)/dt = v,

  du(x(t), t)/dt = d ∂²u/∂x²   if d > 0
  du(x(t), t)/dt = 0           if d = 0

[Figure: characteristics of the convection-diffusion equation in the space-time domain between t = 0 and t = T]
Finite difference grid

  x_i = iΔx,   Δx = X/N,   i = 0, ..., N
  t^n = nΔt,   Δt = T/M,   n = 0, ..., M
  u_i^0 = u₀(x_i)

θ-scheme (0 ≤ θ ≤ 1): advance the known values u^n to the unknown values u^{n+1} via

  u_i^{n+1} = u_i^n + [θ f_h^{n+1} + (1 − θ) f_h^n] Δt

[Figure: space-time grid with spacings Δx and Δt; the solution is known at time level t^n and unknown at t^{n+1}]
Central difference / lumped-mass FEM approximation

  (u_i^{n+1} − u_i^n)/Δt = θ [ −v (u_{i+1}^{n+1} − u_{i−1}^{n+1})/(2Δx) + d (u_{i−1}^{n+1} − 2u_i^{n+1} + u_{i+1}^{n+1})/(Δx)² ]
                         + (1 − θ) [ −v (u_{i+1}^n − u_{i−1}^n)/(2Δx) + d (u_{i−1}^n − 2u_i^n + u_{i+1}^n)/(Δx)² ]

for i = 1, ..., N,  n = 0, ..., M − 1, where

  u_i^{n+θ} = θ u_i^{n+1} + (1 − θ) u_i^n,   0 ≤ θ ≤ 1
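A minimal matrix-form sketch of this θ-scheme, assuming X = 1, homogeneous Dirichlet boundary conditions, and an illustrative initial profile sin(πx); for v = 0, d = 1 the exact solution e^{−π²t} sin(πx) is available for comparison.

```python
import numpy as np

# Sketch of the theta-scheme with central differences for
# u_t + v u_x = d u_xx on (0, 1) with u(0) = u(1) = 0.
# The initial profile sin(pi x) and all parameter values are
# illustrative assumptions, not taken from the lecture.
def operator_matrix(N, dx, v, d):
    """(L_h u)_i = -v (u_{i+1}-u_{i-1})/(2 dx) + d (u_{i-1}-2u_i+u_{i+1})/dx^2"""
    Lh = np.zeros((N + 1, N + 1))
    for i in range(1, N):
        Lh[i, i - 1] = v / (2 * dx) + d / dx**2
        Lh[i, i] = -2.0 * d / dx**2
        Lh[i, i + 1] = -v / (2 * dx) + d / dx**2
    return Lh                                 # boundary rows stay zero (Dirichlet)

def theta_solve(v, d, N, M, T, theta, u0):
    dx, dt = 1.0 / N, T / M
    x = np.linspace(0.0, 1.0, N + 1)
    u = u0(x)
    u[0] = u[-1] = 0.0                        # enforce the boundary conditions
    Lh = operator_matrix(N, dx, v, d)
    I = np.eye(N + 1)
    A_imp = I - theta * dt * Lh               # implicit part (time level n+1)
    A_exp = I + (1.0 - theta) * dt * Lh       # explicit part (time level n)
    for _ in range(M):
        u = np.linalg.solve(A_imp, A_exp @ u)
    return x, u

# Crank-Nicolson for pure diffusion: compare with exp(-pi^2 t) sin(pi x)
x, u = theta_solve(v=0.0, d=1.0, N=50, M=50, T=0.1, theta=0.5,
                   u0=lambda s: np.sin(np.pi * s))
err = np.max(np.abs(u - np.exp(-np.pi**2 * 0.1) * np.sin(np.pi * x)))
print(f"max error: {err:.2e}")
```

For θ ≥ 1/2 the scheme is unconditionally stable; for θ = 0 the linear solve degenerates to a matrix-vector product, since A_imp = I.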
Forward Euler (θ = 0):     u_i^{n+1} = h(u_{i−1}^n, u_i^n, u_{i+1}^n)

Backward Euler (θ = 1):    u_i^{n+1} = h(u_{i−1}^{n+1}, u_i^n, u_{i+1}^{n+1})

Crank-Nicolson (θ = 1/2):  u_i^{n+1} = h(u_{i−1}^{n+1}, u_{i−1}^n, u_i^n, u_{i+1}^n, u_{i+1}^{n+1})

Leapfrog time-stepping:    u_i^{n+1} = h(u_{i−1}^n, u_i^{n−1}, u_i^n, u_{i+1}^n)

  u_i^{n+1} = u_i^{n−1} + 2Δt f_h^n   (explicit, three-level)

that is,

  (u_i^{n+1} − u_i^{n−1})/(2Δt) + v (u_{i+1}^n − u_{i−1}^n)/(2Δx) = d (u_{i−1}^n − 2u_i^n + u_{i+1}^n)/(Δx)²

[Figure: stencils of the FE, BE, CN and LF schemes on the space-time grid]
Fractional-step scheme

Given the parameters θ ∈ (0, 1), θ′ = 1 − 2θ, and α ∈ [0, 1], subdivide the time
interval (t^n, t^{n+1}) into three substeps of lengths θΔt, θ′Δt, and θΔt and update
the solution as follows

  Step 1: advance the solution from t^n to t^{n+θ}
  Step 2: advance the solution from t^{n+θ} to t^{n+1−θ}
  Step 3: advance the solution from t^{n+1−θ} to t^{n+1}
Predictor-corrector scheme

1. Predictor (forward Euler):   ũ^{n+1} = u^n + f(t^n, u^n) Δt
2. Corrector (backward Euler):  u^{n+1} = u^n + f(t^{n+1}, ũ^{n+1}) Δt
   or (Crank-Nicolson):         u^{n+1} = u^n + (Δt/2)[f(t^n, u^n) + f(t^{n+1}, ũ^{n+1})]
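A minimal sketch of this predictor-corrector pair with the Crank-Nicolson corrector (this combination is Heun's method); the test equation u′ = −u and the step size are illustrative choices.

```python
import numpy as np

# Sketch of the predictor-corrector pair: forward-Euler predictor
# followed by the Crank-Nicolson corrector (Heun's method).
# The test equation u' = -u is an illustrative choice.
def pc_step(f, t, u, dt):
    u_tilde = u + dt * f(t, u)                            # predictor: forward Euler
    return u + 0.5 * dt * (f(t, u) + f(t + dt, u_tilde))  # corrector: Crank-Nicolson

f = lambda t, u: -u
u, t, dt = 1.0, 0.0, 0.01
for _ in range(100):
    u = pc_step(f, t, u, dt)
    t += dt
print(abs(u - np.exp(-1.0)))                              # second-order accurate
```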
Adams methods

Multistep methods compute u^{n+1} from the solution values at the previous time
levels t^n, t^{n−1}, ..., t^{n−m}, m = 0, 1, ...; the integrand f is approximated by a
polynomial interpolating these values. Global error: ε_glob = O(Δt^p).

[Figure: polynomial interpolation of f using past and present values at t^{n−2}, t^{n−1}, t^n to reach the future value at t^{n+1}]

Adams-Bashforth methods (explicit)

  p = 1:  u^{n+1} = u^n + Δt f(t^n, u^n)   (forward Euler)
  p = 2:  u^{n+1} = u^n + (Δt/2)[3 f(t^n, u^n) − f(t^{n−1}, u^{n−1})]
  p = 3:  u^{n+1} = u^n + (Δt/12)[23 f(t^n, u^n) − 16 f(t^{n−1}, u^{n−1}) + 5 f(t^{n−2}, u^{n−2})]

Adams-Moulton methods (implicit)

  p = 1:  u^{n+1} = u^n + Δt f(t^{n+1}, u^{n+1})   (backward Euler)
  p = 2:  u^{n+1} = u^n + (Δt/2)[f(t^{n+1}, u^{n+1}) + f(t^n, u^n)]   (Crank-Nicolson)
  p = 3:  u^{n+1} = u^n + (Δt/12)[5 f(t^{n+1}, u^{n+1}) + 8 f(t^n, u^n) − f(t^{n−1}, u^{n−1})]
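A sketch of the p = 2 Adams-Bashforth method on the illustrative test equation u′ = −u; since multistep methods are not self-starting, one Heun (RK2) step supplies the missing first value.

```python
import numpy as np

# Sketch of the p = 2 Adams-Bashforth method for u' = f(t, u).
# Multistep methods are not self-starting, so one Heun (RK2) step
# supplies u^1; the test equation u' = -u is an illustrative choice.
def ab2_solve(f, u0, T, M):
    dt = T / M
    u = [u0]
    u_pred = u0 + dt * f(0.0, u0)                # start-up: Heun's method
    u.append(u0 + 0.5 * dt * (f(0.0, u0) + f(dt, u_pred)))
    for n in range(1, M):
        tn = n * dt                              # AB2: extrapolate f linearly
        u.append(u[n] + 0.5 * dt * (3.0 * f(tn, u[n]) - f(tn - dt, u[n - 1])))
    return u[-1]

err = abs(ab2_solve(lambda t, u: -u, 1.0, 1.0, 100) - np.exp(-1.0))
print(f"AB2 error: {err:.2e}")
```

Only one new function evaluation is needed per step; the value f(t^{n−1}, u^{n−1}) is reused from the previous step.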
Predictor-corrector algorithm

1. Compute ũ^{n+1} using an Adams-Bashforth method of order p − 1
2. Compute u^{n+1} using an Adams-Moulton method of order p with the
   predicted value f(t^{n+1}, ũ^{n+1}) instead of f(t^{n+1}, u^{n+1})

Pros and cons of Adams methods
+ methods of any order are easy to derive and implement
+ only one function evaluation per time step is performed
+ error estimators for ODEs can be used to adapt the order
− other methods are needed to start/restart the calculation
− the time step is difficult to change (coefficients are different)
− tend to be unstable and produce nonphysical oscillations
Runge-Kutta methods

Multipredictor-multicorrector algorithms of order p

p = 2:

  ũ^{n+1/2} = u^n + (Δt/2) f(t^n, u^n)
  u^{n+1} = u^n + Δt f(t^{n+1/2}, ũ^{n+1/2})

p = 4:

  ũ^{n+1/2} = u^n + (Δt/2) f(t^n, u^n)
  û^{n+1/2} = u^n + (Δt/2) f(t^{n+1/2}, ũ^{n+1/2})
  ũ^{n+1} = u^n + Δt f(t^{n+1/2}, û^{n+1/2})
  u^{n+1} = u^n + (Δt/6)[f(t^n, u^n) + 2 f(t^{n+1/2}, ũ^{n+1/2}) + 2 f(t^{n+1/2}, û^{n+1/2}) + f(t^{n+1}, ũ^{n+1})]

where the last stage is the corrector (Simpson rule).
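The p = 4 algorithm above is the classical fourth-order Runge-Kutta method; a sketch on the illustrative test equation u′ = −u:

```python
import numpy as np

# Sketch of the classical fourth-order Runge-Kutta method in the
# multipredictor-multicorrector form shown above; u' = -u is an
# illustrative test equation.
def rk4_step(f, t, u, dt):
    u1 = u + 0.5 * dt * f(t, u)                 # predictor at t^{n+1/2}
    u2 = u + 0.5 * dt * f(t + 0.5 * dt, u1)     # corrected midpoint value
    u3 = u + dt * f(t + 0.5 * dt, u2)           # predictor at t^{n+1}
    return u + dt / 6.0 * (f(t, u)              # Simpson-rule corrector
                           + 2.0 * f(t + 0.5 * dt, u1)
                           + 2.0 * f(t + 0.5 * dt, u2)
                           + f(t + dt, u3))

u, dt = 1.0, 0.1
for n in range(10):
    u = rk4_step(lambda t, y: -y, n * dt, u, dt)
print(abs(u - np.exp(-1.0)))                    # O(dt^4) global error
```

Note the four function evaluations per step, in line with the cost remark below.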
General comments

Pros and cons of Runge-Kutta methods
+ self-starting, easy to operate with variable time steps
+ more stable and accurate than Adams methods of the same order
− high-order approximations are rather difficult to derive; p function
  evaluations per time step are required for a pth-order method
− more expensive than Adams methods of comparable order
Adaptive time-stepping strategy

Objective: ||u − u_Δt|| ≈ TOL (prescribed tolerance), where u_Δt denotes the
approximate solution computed with time step Δt. For a second-order scheme

  u_Δt = u + Δt² e(u) + O(Δt⁴)

so comparing m small steps of size Δt with one large step of size mΔt gives

  u_{mΔt} − u_Δt = Δt² (m² − 1) e(u) + O(Δt⁴)

The optimal step Δt* satisfies (Δt*)² ||e(u)|| = TOL, which yields

  Δt* = Δt sqrt( TOL (m² − 1) / ||u_Δt − u_{mΔt}|| )

Moreover, the extrapolated solution (m² u_Δt − u_{mΔt})/(m² − 1) is fourth-order accurate.
Practical implementation

Automatic time step control can be executed as follows.
Given the old solution u^n do:
1. Make one large time step of size mΔt to compute u_{mΔt}
2. Make m small substeps of size Δt to compute u_Δt
3. Evaluate the relative solution changes ||u_Δt − u_{mΔt}||
4. Calculate the optimal value Δt* for the next time step
5. If Δt* < Δt, reset the solution and go back to step 1
6. Set u^{n+1} = u_Δt or perform Richardson extrapolation

Note that the cost per time step increases substantially (u_{mΔt} may be as
expensive to obtain as u_Δt due to slow convergence at large time steps).
On the other hand, adaptive time-stepping enhances the robustness of the
code, the overall efficiency and the credibility of simulation results.
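The six steps above can be sketched as follows, using the Crank-Nicolson scheme (second order, so the Δt² error expansion applies) on the illustrative scalar problem u′ = −u with m = 2; the tolerance and initial step are assumed values.

```python
import numpy as np

# Sketch of the step-doubling time step control, using Crank-Nicolson
# (second order) on the scalar problem u' = -u with m = 2.
# The tolerance tol and initial step dt0 are illustrative values.
def cn_step(lam, u, dt):
    return u * (1.0 - 0.5 * lam * dt) / (1.0 + 0.5 * lam * dt)

def adaptive_solve(lam, T, dt0, tol, m=2):
    t, u, dt, nsteps = 0.0, 1.0, dt0, 0
    while t < T - 1e-12:
        dt = min(dt, (T - t) / m)
        u_big = cn_step(lam, u, m * dt)           # 1. one large step m*dt
        u_small = u
        for _ in range(m):                        # 2. m small substeps
            u_small = cn_step(lam, u_small, dt)
        err = abs(u_small - u_big)                # 3. solution change
        dt_opt = dt * np.sqrt(tol * (m**2 - 1) / max(err, 1e-16))  # 4. optimal step
        if dt_opt < dt:                           # 5. reset and repeat
            dt = dt_opt
            continue
        t, u, nsteps = t + m * dt, u_small, nsteps + 1  # 6. accept u_small
        dt = min(dt_opt, 2.0 * dt)                # grow the step cautiously
    return u, nsteps

u, nsteps = adaptive_solve(lam=1.0, T=1.0, dt0=0.05, tol=1e-8)
print(nsteps, abs(u - np.exp(-1.0)))
```

Capping the growth factor (here at 2) is a common safety measure against overly optimistic step predictions.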
Alternatively, the time step can be controlled by monitoring the relative change

  e_n = ||u^{n+1} − u^n|| / ||u^{n+1}||

of an indicator variable u. If e_n > δ, reject the solution and repeat the time
step using Δt* = (δ/e_n) Δt_n.

PID controller: the next time step is chosen as

  Δt_{n+1} = (e_{n−1}/e_n)^{k_P} (TOL/e_n)^{k_I} (e_{n−1}² / (e_n e_{n−2}))^{k_D} Δt_n

with typical parameter values kP = 0.075, kI = 0.175, kD = 0.01.
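The PID formula can be sketched directly; the sample values of the relative changes and the step size below are illustrative.

```python
# Sketch of the PID step-size controller; e_hist holds the last three
# relative changes (e_{n-2}, e_{n-1}, e_n) of the indicator variable,
# and the sample values below are illustrative.
def pid_dt(dt_n, e_hist, tol, kP=0.075, kI=0.175, kD=0.01):
    e2, e1, e0 = e_hist                      # e_{n-2}, e_{n-1}, e_n
    return ((e1 / e0) ** kP                  # P: react to the trend
            * (tol / e0) ** kI               # I: push e_n toward TOL
            * (e1**2 / (e0 * e2)) ** kD      # D: damp oscillations
            ) * dt_n

# Constant change equal to the tolerance: the step size is kept
print(pid_dt(0.1, (1e-3, 1e-3, 1e-3), tol=1e-3))
# Growing change: the step size is reduced
print(pid_dt(0.1, (1e-3, 1e-3, 2e-3), tol=1e-3))
```

The small exponents make the controller react smoothly rather than with abrupt step-size jumps.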
Pseudo time-stepping

Solutions to boundary value problems of the form Lu = f represent the
steady-state limit of the associated time-dependent problem

  ∂u/∂t + Lu = f,   u(x, 0) = u₀(x)   in Ω
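A sketch of this idea for an assumed model problem Lu = −u″ = π² sin(πx) on (0, 1) with homogeneous Dirichlet conditions, whose steady state is sin(πx); the pseudo time integrator is backward Euler, which permits large pseudo time steps.

```python
import numpy as np

# Sketch of pseudo time-stepping for an assumed model problem:
# Lu = -u'' = pi^2 sin(pi x) on (0, 1), u(0) = u(1) = 0, whose steady
# state is sin(pi x). Backward Euler permits large pseudo time steps.
def pseudo_steady(N=50, dt=1.0, maxit=200, atol=1e-10):
    dx = 1.0 / N
    x = np.linspace(0.0, 1.0, N + 1)
    f = np.pi**2 * np.sin(np.pi * x)
    A = np.zeros((N + 1, N + 1))             # L_h = -d^2/dx^2 (central)
    for i in range(1, N):
        A[i, i - 1] = A[i, i + 1] = -1.0 / dx**2
        A[i, i] = 2.0 / dx**2
    A[0, 0] = A[N, N] = 1.0                  # Dirichlet boundary rows
    I = np.eye(N + 1)
    u = np.zeros(N + 1)
    for it in range(maxit):                  # backward Euler pseudo steps
        u_new = np.linalg.solve(I + dt * A, u + dt * f)
        if np.max(np.abs(u_new - u)) < atol: # steady state reached
            break
        u = u_new
    return x, u_new, it

x, u, it = pseudo_steady()
print(it, np.max(np.abs(u - np.sin(np.pi * x))))
```

Once the solution change drops below the threshold, u satisfies the discrete system A u = f up to that tolerance.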