[Figure: f(x) vs. x for the interpolation data on 0 ≤ x ≤ 3]
Lagrange polynomials
f_n(x) = Σ_{j=0}^{n} c_j L_j(x),   L_i(x) = Π_{j=0, j≠i}^{n} (x − x_j)/(x_i − x_j)
[Figure: the Lagrange basis polynomials L_j(x) on 0 ≤ x ≤ 2]
Lagrange Polynomial
• Useful when the grid points are fixed but
function values may be changing (estimating
the temperature at a point using the
measured temperatures at nearby points)
• The values of the Lagrange polynomials at the
desired point need to be calculated only once
• Then, we just need to multiply these values
with the corresponding temperatures
• What if a new measurement is added?
• The polynomials will need to be recomputed
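A minimal sketch of this use case in Python/NumPy (the grid points and temperatures below are hypothetical illustrative values, not from the slides):

```python
import numpy as np

def lagrange_eval(x_nodes, f_values, x):
    """Evaluate the Lagrange interpolating polynomial at x."""
    total = 0.0
    n = len(x_nodes)
    for i in range(n):
        # L_i(x) = prod_{j != i} (x - x_j) / (x_i - x_j)
        L_i = 1.0
        for j in range(n):
            if j != i:
                L_i *= (x - x_nodes[j]) / (x_nodes[i] - x_nodes[j])
        total += f_values[i] * L_i
    return total

# Temperatures measured at fixed grid points (hypothetical values)
x_nodes = np.array([0.0, 1.0, 2.0])
temps = np.array([20.0, 22.5, 21.0])
print(lagrange_eval(x_nodes, temps, 1.3))
```

If the grid points stay fixed, the L_i(x) values at the desired point can be cached and re-used each time new temperatures arrive.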
Interpolation: Newton’s divided difference
f_n(x) = Σ_{j=0}^{n} c_j φ_j(x),   φ_i(x) = Π_{j=0}^{i−1} (x − x_j)
[Figure: the first few basis functions φ_i(x) on 0 ≤ x ≤ 2]
Newton’s divided difference
f_n(x) = Σ_{j=0}^{n} c_j φ_j(x)
φ_0(x) = 1;  φ_1(x) = x − x_0;  φ_2(x) = (x − x_0)(x − x_1);  φ_3(x) = (x − x_0)(x − x_1)(x − x_2)
• Applying the equality of the function value and the polynomial value at x = x_0:  c_0 = f(x_0).
• At x = x_1:
f(x_1) = c_0 + c_1 (x_1 − x_0)  ⇒  c_1 = [f(x_1) − f(x_0)] / (x_1 − x_0)
Newton’s divided difference
• At x = x_2:
f(x_2) = c_0 + c_1 (x_2 − x_0) + c_2 (x_2 − x_0)(x_2 − x_1)
⇒ c_2 = { f(x_2) − f(x_0) − [f(x_1) − f(x_0)]/(x_1 − x_0) · (x_2 − x_0) } / [ (x_2 − x_0)(x_2 − x_1) ]
      = { [f(x_2) − f(x_1)]/(x_2 − x_1) − [f(x_1) − f(x_0)]/(x_1 − x_0) } / (x_2 − x_0)
Newton’s divided difference
• The divided difference notation:
f[x_j, x_i] = [f(x_j) − f(x_i)] / (x_j − x_i) = f[x_i, x_j]
f[x_k, x_j, x_i] = ( f[x_k, x_j] − f[x_j, x_i] ) / (x_k − x_i) = f[x_i, x_j, x_k] = ...
f[x_n, x_{n−1}, ..., x_1, x_0] = ( f[x_n, x_{n−1}, ..., x_2, x_1] − f[x_{n−1}, ..., x_1, x_0] ) / (x_n − x_0)
• Example: data x = 0, 1, 2 with f(x) = 1, 3, 7. The divided difference table gives
c_0 = 1;  c_1 = f[x_1, x_0] = 2;  c_2 = f[x_2, x_1, x_0] = 1
f_2(x) = 1 + 2(x − 0) + 1(x − 0)(x − 1) = 1 + x + x^2
Newton’s divided difference: Error
• The remainder may be written as:
R_n(x) = f(x) − f_n(x) = φ_{n+1}(x) f[x, x_n, x_{n−1}, ..., x_1, x_0]
where
φ_{n+1}(x) = (x − x_0)(x − x_1)(x − x_2)...(x − x_n)
[Figure: interpolation with n = 20 equally spaced points on −1 ≤ x ≤ 1; the interpolating polynomial oscillates strongly near the ends]
Spline Interpolation
• Using piece-wise polynomial interpolation
• Given (xk , f (xk )) k = 0,1,2,..., n
• Interpolate using “different” polynomials
between smaller segments
• Easiest: Linear between each successive pair
• Problem: First and higher derivatives would be discontinuous at the nodes
[Figure: piecewise-linear interpolant of f(x) on −1 ≤ x ≤ 1]
Spline Interpolation
• Most common: Cubic spline
• Given (xk , f (xk )) k = 0,1,2,..., n
• Interpolate using the cubic splines:
[Figure: spline segments; segment i spans x_i to x_{i+1}; x_0 and x_n are corner (end) nodes, the rest are interior nodes]
Spline Interpolation
• Total n segments => 2n d.o.f. (4n cubic coefficients, minus the 2n conditions that each segment match the data at its two ends)
• Equality of first and second derivatives at the interior nodes: 2(n−1) constraints
• Need 2 more constraints (discussed later)!
• How to obtain the coefficients?
• The second derivative of the cubic spline is linear within a segment. Write it as
S_i''(x) = [ (x_{i+1} − x) S_i''(x_i) + (x − x_i) S_i''(x_{i+1}) ] / (x_{i+1} − x_i)
Spline Interpolation
• Integrate it twice:
S_i(x) = [ (x_{i+1} − x)^3 S_i''(x_i) + (x − x_i)^3 S_i''(x_{i+1}) ] / [ 6(x_{i+1} − x_i) ] + C_1 x + C_2
• Apply S_i(x_i) = f(x_i) and S_i(x_{i+1}) = f(x_{i+1}):
f(x_i) = (x_{i+1} − x_i)^2 S_i''(x_i)/6 + C_1 x_i + C_2
f(x_{i+1}) = (x_{i+1} − x_i)^2 S_i''(x_{i+1})/6 + C_1 x_{i+1} + C_2
Spline Interpolation
• Resulting in
S_i(x) = [ (x_{i+1} − x)^3 S_i''(x_i) + (x − x_i)^3 S_i''(x_{i+1}) ] / [ 6(x_{i+1} − x_i) ]
       + [ f(x_i)/(x_{i+1} − x_i) − (x_{i+1} − x_i) S_i''(x_i)/6 ] (x_{i+1} − x)
       + [ f(x_{i+1})/(x_{i+1} − x_i) − (x_{i+1} − x_i) S_i''(x_{i+1})/6 ] (x − x_i)
• Continuity of the first derivative at node x_i gives
S_{i−1}'(x_i) = (x_i − x_{i−1}) S_{i−1}''(x_i)/3 + (x_i − x_{i−1}) S_{i−1}''(x_{i−1})/6 + [ f(x_i) − f(x_{i−1}) ] / (x_i − x_{i−1})
• Second derivative is also continuous
• We get a tridiagonal system
(x_i − x_{i−1}) S''_{i−1} + 2(x_{i+1} − x_{i−1}) S''_i + (x_{i+1} − x_i) S''_{i+1}
   = 6 [ f(x_{i+1}) − f(x_i) ] / (x_{i+1} − x_i) − 6 [ f(x_i) − f(x_{i−1}) ] / (x_i − x_{i−1})
Spline Interpolation
• What are the 2 more required constraints?
Clamped: The function is clamped at each corner node, forcing both ends to have a known fixed slope, say s_0 and s_n. This implies S'(x_0) = s_0 and S'(x_n) = s_n
Natural: Curvature at the corner nodes is zero, i.e., S''(x_0) = S''(x_n) = 0
Not-a-knot: The first and last interior nodes have C3 continuity, i.e., they do not act as knots: S_0(x) ≡ S_1(x) and S_{n−2}(x) ≡ S_{n−1}(x)
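A minimal sketch of the tridiagonal solve for the nodal second derivatives, assuming natural end conditions (the data values are the example from the slides; a general-purpose banded solver would be more efficient than the dense solve used here):

```python
import numpy as np

def natural_cubic_second_derivs(x, f):
    """Solve the tridiagonal system above for S''(x_i), with S''(x_0) = S''(x_n) = 0."""
    n = len(x) - 1                      # number of segments
    A = np.zeros((n + 1, n + 1))
    b = np.zeros(n + 1)
    A[0, 0] = A[n, n] = 1.0             # natural end conditions
    for i in range(1, n):
        A[i, i - 1] = x[i] - x[i - 1]
        A[i, i]     = 2.0 * (x[i + 1] - x[i - 1])
        A[i, i + 1] = x[i + 1] - x[i]
        b[i] = 6.0 * ((f[i + 1] - f[i]) / (x[i + 1] - x[i])
                      - (f[i] - f[i - 1]) / (x[i] - x[i - 1]))
    return np.linalg.solve(A, b)

x = np.array([0.0, 1.0, 2.0, 3.0])
f = np.array([1.0, 0.5, 0.2, 0.1])
print(natural_cubic_second_derivs(x, f))
```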
• Putting values for the last segment (x_2 = 2 to x_3 = 3, with f(x_2) = 0.2, f(x_3) = 0.1, S''(x_2) = 0.2319, S''(x_3) = 0.03025):
S_2(x) = [ (3 − x)^3 (0.2319) + (x − 2)^3 (0.03025) ] / 6
       + (0.2 − 0.2319/6)(3 − x) + (0.1 − 0.03025/6)(x − 2)
[Figure: the cubic spline fit to the data on 0 ≤ x ≤ 3]
If the data, f(x), may have uncertainty, we do not want the approximating function
to pass through ALL data points. Regression minimizes the “error”.
Regression
• Given (xk , f (xk )) k = 0,1,2,..., n
• Fit an approximating function such that it is
“closest” to the data points
• Mostly polynomial, of degree m (m<n)
• Sometimes trigonometric functions
• As before, assume the approximation as
f_m(x) = Σ_{j=0}^{m} c_j φ_j(x)
Regression: Least Squares
• Minimize the sum of squares of the difference between the function and the data:
E = Σ_{k=0}^{n} [ f(x_k) − Σ_{j=0}^{m} c_j φ_j(x_k) ]^2
• Results in m+1 linear equations (that is why the term Linear Regression): [A]{c}={b}. Called the Normal Equations.
a_ij = Σ_{k=0}^{n} φ_i(x_k) φ_j(x_k)   and   b_i = Σ_{k=0}^{n} φ_i(x_k) f(x_k)
Regression: Least Squares
• For example, using the conventional form, φ_j = x^j, the (m+1)×(m+1) normal equations are (all sums over k = 0 to n):

| Σ1        Σx_k       Σx_k^2     ...  Σx_k^m     | | c_0 |   | Σ f(x_k)       |
| Σx_k      Σx_k^2     Σx_k^3     ...  Σx_k^{m+1} | | c_1 |   | Σ x_k f(x_k)   |
| Σx_k^2    Σx_k^3     Σx_k^4     ...  Σx_k^{m+2} | | c_2 | = | Σ x_k^2 f(x_k) |
| ...       ...        ...        ...  ...        | | ... |   | ...            |
| Σx_k^m    Σx_k^{m+1} Σx_k^{m+2} ...  Σx_k^{2m}  | | c_m |   | Σ x_k^m f(x_k) |
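A sketch of assembling and solving these normal equations with NumPy (the data are those of the worked example that follows):

```python
import numpy as np

def polyfit_normal_equations(x, f, m):
    """Least-squares polynomial fit of degree m via the normal equations,
    with a_ij = sum x_k^(i+j) and b_i = sum x_k^i f(x_k)."""
    A = np.zeros((m + 1, m + 1))
    b = np.zeros(m + 1)
    for i in range(m + 1):
        b[i] = np.sum(x**i * f)
        for j in range(m + 1):
            A[i, j] = np.sum(x**(i + j))
    return np.linalg.solve(A, b)      # coefficients c_0 ... c_m

x = np.array([0.0, 1.0, 2.0, 3.0, 4.0])
f = np.array([1.0, 0.5, 0.2, 0.1, 0.05882])
c = polyfit_normal_equations(x, f, 2)
print(c, np.polyval(c[::-1], 2.6))    # fitted coefficients and the estimate of f(2.6)
```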
Least Squares: Example
• From the following data (n = 4), estimate f(2.6) using regression with a quadratic polynomial (m = 2):

  x    | 0 | 1   | 2   | 3   | 4
  f(x) | 1 | 0.5 | 0.2 | 0.1 | 0.05882

The sums are Σ1 = 5, Σx_k = 10, Σx_k^2 = 30, Σx_k^3 = 100, Σx_k^4 = 354, Σf(x_k) = 1.85882, Σx_k f(x_k) = 1.43528, Σx_k^2 f(x_k) = 3.14112, so the normal equations become

| 5   10   30  | | c_0 |   | 1.85882 |
| 10  30   100 | | c_1 | = | 1.43528 |
| 30  100  354 | | c_2 |   | 3.14112 |
[Figure: data points and the fitted quadratic on 0 ≤ x ≤ 4]
Least Squares: Orthogonal polynomials
• Equidistant points x_k, k = 0, 1, 2, ..., n
• Minimize  Σ_{k=0}^{n} [ f(x_k) − Σ_{j=0}^{m} c_j φ_j(x_k) ]^2   =>  [A]{c} = {b}
a_ij = Σ_{k=0}^{n} φ_i(x_k) φ_j(x_k)   and   b_i = Σ_{k=0}^{n} φ_i(x_k) f(x_k)
• Choose orthonormal basis functions: known as Gram's polynomials, or discrete Tchebycheff polynomials -- denote them by G_i(x).
• Normalize the data range from −1 to 1.
• This implies x_i = −1 + 2i/n
Least Squares: Orthogonal polynomials
• G_i(x) is a polynomial of degree i.
Σ_{k=0}^{n} G_0(x_k) G_0(x_k) = 1  ⇒  G_0(x) = 1/√(n+1)
• Assume G_1(x) = d_0 + d_1 x
Σ_{k=0}^{n} (1/√(n+1)) (d_0 + d_1 x_k) = 0  ⇒  d_0 = 0   since Σ x_k = 0
Σ_{k=0}^{n} (d_0 + d_1 x_k)^2 = 1  ⇒  d_1 = 1/√( Σ_{k=0}^{n} x_k^2 ) = 1/√( Σ_{k=0}^{n} (−1 + 2k/n)^2 )
Gram polynomials
• Therefore:
d_1 = 1/√( Σ_{k=0}^{n} (1 + 4k^2/n^2 − 4k/n) ) = √( 3n / [ (n+1)(n+2) ] )
• Recursive relation:
G_{i+1}(x) = α_i x G_i(x) − (α_i / α_{i−1}) G_{i−1}(x)   for i = 1, 2, ..., n−1
G_0(x) = 1/√(n+1);   G_1(x) = x √( 3n / [ (n+1)(n+2) ] );   α_i = [ n/(i+1) ] √( (2i+1)(2i+3) / [ (n−i)(n+i+2) ] )
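A sketch of evaluating the Gram polynomials with this recursion (illustrative only; vectorized over the evaluation points x):

```python
import numpy as np

def gram_polys(x, n, m):
    """Evaluate Gram (discrete Tchebycheff) polynomials G_0..G_m at points x,
    for n+1 equidistant data points normalized to [-1, 1]."""
    x = np.asarray(x, dtype=float)
    G = np.zeros((m + 1, x.size))
    G[0] = 1.0 / np.sqrt(n + 1)
    if m >= 1:
        G[1] = x * np.sqrt(3.0 * n / ((n + 1) * (n + 2)))
    alpha_prev = n * np.sqrt(3.0 / (n * (n + 2.0)))        # alpha_0
    for i in range(1, m):
        alpha_i = (n / (i + 1.0)) * np.sqrt((2*i + 1) * (2*i + 3)
                                            / ((n - i) * (n + i + 2.0)))
        G[i + 1] = alpha_i * x * G[i] - (alpha_i / alpha_prev) * G[i - 1]
        alpha_prev = alpha_i
    return G
```

For n = 4 this reproduces G_0 = 1/√5, G_1 = x√(2/5) and G_2 = √(2/7)(2x² − 1) used in the example below.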
Gram polynomials: Example
• From the following data (n = 4), estimate f(2.6) using regression with a quadratic polynomial (m = 2):

  t    | 0 | 1   | 2   | 3   | 4
  f(t) | 1 | 0.5 | 0.2 | 0.1 | 0.05882

• Normalize: x = t/2 − 1
• For n = 4, we get  G_0(x) = 1/√5;   G_1(x) = x √(2/5);   G_2(x) = √(2/7) (2x^2 − 1)
• Normal Equations (sums over k = 0 to 4):

| 1 0 0 | | c_0 |   | Σ G_0(x_k) f(x_k) |   |  0.831290 |
| 0 1 0 | | c_1 | = | Σ G_1(x_k) f(x_k) | = | −0.721746 |
| 0 0 1 | | c_2 |   | Σ G_2(x_k) f(x_k) |   |  0.298702 |
Gram polynomials: Example
f_2(x) = (0.8313/√5) − 0.7217 √(2/5) x + 0.2987 √(2/7) (2x^2 − 1)
• f(t = 2.6) = f(x = 0.3) = 0.1039
• Same as before
[Figure: data points and the fitted quadratic]
• How to estimate the goodness of the fit?
• Coefficient of determination: based on
S_t = Σ_{k=0}^{n} [ f(x_k) − f̄ ]^2,   where  f̄ = Σ_{k=0}^{n} f(x_k) / (n+1)
S_r = Σ_{k=0}^{n} [ f(x_k) − f_m(x_k) ]^2
• Setting up the normal equations: minimize
E = Σ_{k=0}^{n} [ f(x_k) − Σ_{j=0}^{m} c_j φ_j(x_k) ]^2
• Derivative w.r.t. c_i (i from 0 to m):
∂E/∂c_i = 0  ⇒  2 Σ_{k=0}^{n} [ f(x_k) − Σ_{j=0}^{m} c_j φ_j(x_k) ] (−φ_i(x_k)) = 0
...
∂E/∂c_m = 0  ⇒  2 Σ_{k=0}^{n} [ f(x_k) − Σ_{j=0}^{m} c_j x_k^j ] (−x_k^m) = 0
• Coefficient of determination:
r^2 = (S_t − S_r) / S_t
• r is called the correlation coefficient
• r^2 < 0.3: poor fit;  r^2 > 0.8: good fit
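A small sketch of computing r² for a fit (f_fit holds the fitted values at the data points; the function is generic, not tied to a particular example):

```python
import numpy as np

def r_squared(f, f_fit):
    f_bar = np.mean(f)                 # average of the data
    S_t = np.sum((f - f_bar) ** 2)     # total sum of squares
    S_r = np.sum((f - f_fit) ** 2)     # residual sum of squares
    return (S_t - S_r) / S_t
```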
Multiple Regression
• For a function of 2 (or more) variables: given (x_k, y_k, f(x_k, y_k)), k = 0, 1, 2, ..., n
• Minimize  Σ_{i=0}^{n} [ f(x_i, y_i) − Σ_{k=0}^{m2} Σ_{j=0}^{m1} c_{j,k} x_i^j y_i^k ]^2
Nonlinear Regression
• Minimize  Σ_{k=0}^{n} [ f(x_k) − f_m(x_k, c_0, c_1, ..., c_m) ]^2
• The normal equations are nonlinear:  A c = b  with  A = A(c)
• May be solved using the Newton method
Numerical Differentiation and Integration
• Given data (x_k, f(x_k)), k = 0, 1, 2, ..., n
• Numerical Differentiation: estimate the derivatives of the function from the data
• Numerical Integration: estimate the integral, e.g., from measured flow velocities in a pipe, estimate the discharge Q = ∫_0^R 2πr v dr
Numerical Differentiation
• Estimate the derivatives of a function from
given data
(xk , f (xk )) k = 0,1,2,..., n
• Start with the first derivative
• Simplest: The difference of the function values
at two consecutive points divided by the
difference in the x values
• Finite Difference: the analytical derivative takes the limit ∆x → 0, but we use a finite ∆x
• What if we want more accurate estimates?
First Derivative
• For simplicity, let us use fi for f(xi)
• Assume that the x’s are arranged in increasing
order (xn>xn-1>…>x0).
• For estimating the first derivative at x_i:
  – Forward difference:   f_i' = (f_{i+1} − f_i) / (x_{i+1} − x_i)
  – Backward difference:  f_i' = (f_i − f_{i−1}) / (x_i − x_{i−1})
  – Central difference:   f_i' = (f_{i+1} − f_{i−1}) / (x_{i+1} − x_{i−1})
First Derivative
• Most of the time, the function is "measured" at equal intervals
• Assume that x_n − x_{n−1} = x_{n−1} − x_{n−2} = ... = x_1 − x_0 = h
• Then, the first derivative at x_i:
  – Forward difference:   f_i' = (f_{i+1} − f_i) / h
  – Backward difference:  f_i' = (f_i − f_{i−1}) / h
  – Central difference:   f_i' = (f_{i+1} − f_{i−1}) / (2h)
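A minimal sketch of the three estimates for a known test function (sin, with exact derivative cos, used purely for illustration):

```python
import numpy as np

def first_derivative(f, x, h):
    """Forward, backward, and central difference estimates of f'(x)."""
    fwd = (f(x + h) - f(x)) / h
    bwd = (f(x) - f(x - h)) / h
    ctr = (f(x + h) - f(x - h)) / (2.0 * h)
    return fwd, bwd, ctr

print(first_derivative(np.sin, 1.0, 0.1), np.cos(1.0))
```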
First Derivative: Error Analysis
• What is the error in these approximations?
• As an example, if the exact function is a straight line, the estimate would have no error
• For forward difference, use Taylor's series:
f_{i+1} = f_i + h f'(x_i) + (h^2/2) f''(x_i) + ... + (h^m/m!) f^[m](x_i) + (h^{m+1}/(m+1)!) f^[m+1](ζ_f),   ζ_f ∈ (x_i, x_{i+1})
• Correcting the forward difference at x_i with an estimate of the f'' term:
f'(x_i) ≈ (f_{i+1} − f_i)/h + (−h/2) [ (f_{i+2} − f_{i+1})/h − (f_{i+1} − f_i)/h ] / h = (−3f_i + 4f_{i+1} − f_{i+2}) / (2h)
Combine two estimates: Richardson Extrapolation
• O(h) accurate:  f'(x_i) = (f_{i+1} − f_i)/h + O(h)
• Write   f'(x_i) = (f_{i+1} − f_i)/h + E + O(h^2)
• and     f'(x_i) = (f_{i+2} − f_i)/(2h) + 2E + O(h^2)
• Eliminate E:
2 f'(x_i) − f'(x_i) = 2 (f_{i+1} − f_i)/h − (f_{i+2} − f_i)/(2h)
f'(x_i) = (−3f_i + 4f_{i+1} − f_{i+2}) / (2h) + O(h^2)
First Derivative: Taylor’s series
f_{i+1} = f_i + h f'(x_i) + (h^2/2) f''(x_i) + (h^3/6) f'''(x_i) + (h^4/4!) f''''(ζ_f1)
f_{i+2} = f_i + 2h f'(x_i) + (4h^2/2) f''(x_i) + (8h^3/6) f'''(x_i) + (16h^4/4!) f''''(ζ_f2)
with ζ_f1 ∈ (x_i, x_{i+1}) and ζ_f2 ∈ (x_i, x_{i+2})
• O(h^2) accurate:  f_i' = (−3f_i + 4f_{i+1} − f_{i+2}) / (2h),   Error: −(h^2/3) f'''(x_i) + O(h^3)
• General Method: write
f_i' = (1/h)(c_i f_i + c_{i+1} f_{i+1} + c_{i+2} f_{i+2})
     = [ (c_i + c_{i+1} + c_{i+2})/h ] f_i + (c_{i+1} + 2c_{i+2}) f'(x_i) + (h/2)(c_{i+1} + 4c_{i+2}) f''(x_i) + (h^2/6)(c_{i+1} + 8c_{i+2}) f'''(x_i) + ...
• Equate coefficients:
c_i + c_{i+1} + c_{i+2} = 0;   c_{i+1} + 2c_{i+2} = 1;   c_{i+1} + 4c_{i+2} = 0
=> c_i = −3/2;  c_{i+1} = 2;  c_{i+2} = −1/2
Backward difference
• Similarly, for backward difference, O(h^2) accurate:
f_i' = (1/h)(c_i f_i + c_{i−1} f_{i−1} + c_{i−2} f_{i−2})
     = [ (c_i + c_{i−1} + c_{i−2})/h ] f_i − (c_{i−1} + 2c_{i−2}) f'(x_i) + (h/2)(c_{i−1} + 4c_{i−2}) f''(x_i) − (h^2/6)(c_{i−1} + 8c_{i−2}) f'''(x_i) + ...
• Equate coefficients:
c_i + c_{i−1} + c_{i−2} = 0;   c_{i−1} + 2c_{i−2} = −1;   c_{i−1} + 4c_{i−2} = 0
=> f_i' = (3f_i − 4f_{i−1} + f_{i−2}) / (2h),   Error: (h^2/3) f'''(x_i) + O(h^3)
Central Difference
• And, for central difference, O(h^4) accurate:
f_i' = (1/h)(c_{i−2} f_{i−2} + c_{i−1} f_{i−1} + c_i f_i + c_{i+1} f_{i+1} + c_{i+2} f_{i+2})
     = [ (c_{i−2} + c_{i−1} + c_i + c_{i+1} + c_{i+2})/h ] f_i + (−2c_{i−2} − c_{i−1} + c_{i+1} + 2c_{i+2}) f'(x_i)
       + (h/2)(4c_{i−2} + c_{i−1} + c_{i+1} + 4c_{i+2}) f''(x_i) + (h^2/6)(−8c_{i−2} − c_{i−1} + c_{i+1} + 8c_{i+2}) f'''(x_i)
       + (h^3/24)(16c_{i−2} + c_{i−1} + c_{i+1} + 16c_{i+2}) f''''(x_i) + ...
• Equate coefficients:
c_{i−2} + c_{i−1} + c_i + c_{i+1} + c_{i+2} = 0
−2c_{i−2} − c_{i−1} + c_{i+1} + 2c_{i+2} = 1
4c_{i−2} + c_{i−1} + c_{i+1} + 4c_{i+2} = 0
−8c_{i−2} − c_{i−1} + c_{i+1} + 8c_{i+2} = 0
16c_{i−2} + c_{i−1} + c_{i+1} + 16c_{i+2} = 0
=> f_i' = (f_{i−2} − 8f_{i−1} + 0·f_i + 8f_{i+1} − f_{i+2}) / (12h),   Error: (h^4/30) f^[5](x_i) + O(h^6)
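A short sketch of solving this coefficient system numerically (method of undetermined coefficients) to recover the 5-point stencil derived above:

```python
import numpy as np
from math import factorial

# Offsets j = -2,...,2; demand that (1/h) sum_j c_j f_{i+j} reproduces only f'.
offsets = np.array([-2.0, -1.0, 0.0, 1.0, 2.0])
A = np.vstack([offsets**p / factorial(p) for p in range(5)])
rhs = np.array([0.0, 1.0, 0.0, 0.0, 0.0])   # only the first-derivative term survives
c = np.linalg.solve(A, rhs)
print(c)   # expected: [ 1/12, -8/12, 0, 8/12, -1/12 ]
```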
General formulation
• In general, for the nth derivative:
f_i^[n] = (1/h^n) Σ_{j=−nb}^{nf} c_{i+j} f_{i+j}
• Errors of the simple formulas:
  – Forward difference:   f'(x_i) − f_i' = −(h/2) f''(ζ_f)
  – Backward difference:  f'(x_i) − f_i' = (h/2) f''(ζ_b)
  – Central difference:   f'(x_i) − f_i' = −(h^2/6) f'''(ζ_c)
Increasing accuracy
• Richardson Extrapolation:
f_i' = (−3f_i + 4f_{i+1} − f_{i+2}) / (2h);   Error: O(h^2)
f_i' = (f_{i−2} − 8f_{i−1} + 8f_{i+1} − f_{i+2}) / (12h);   Error: O(h^4)
Numerical Differentiation: Uneven spacing
• What if the given data is not equally spaced: (x_k, f(x_k)), k = 0, 1, 2, ..., n
• Forward and backward difference formulas for the first derivative will still be valid:
f_i' ≈ (f_{i+1} − f_i)/(x_{i+1} − x_i),   f_i' ≈ (f_i − f_{i−1})/(x_i − x_{i−1})
• Central difference?  f_i' ≈ (f_{i+1} − f_{i−1})/(x_{i+1} − x_{i−1})
• Second derivative (equal spacing):
  – Backward difference:  f_i'' = (f_i − 2f_{i−1} + f_{i−2}) / h^2
  – Central difference, O(h^2):  f_i'' = (f_{i−1} − 2f_i + f_{i+1}) / h^2
Central Difference: Richardson method
• Combine 2 estimates of O(h^2) accuracy:
f''(x_i) = (f_{i−1} − 2f_i + f_{i+1})/h^2 + E + O(h^4)
f''(x_i) = (f_{i−2} − 2f_i + f_{i+2})/(4h^2) + 4E + O(h^4)
• Equivalently, by the method of undetermined coefficients:
c_{i−2} + c_{i−1} + c_i + c_{i+1} + c_{i+2} = 0
−2c_{i−2} − c_{i−1} + c_{i+1} + 2c_{i+2} = 0
(1/2)(4c_{i−2} + c_{i−1} + c_{i+1} + 4c_{i+2}) = 1
−8c_{i−2} − c_{i−1} + c_{i+1} + 8c_{i+2} = 0
16c_{i−2} + c_{i−1} + c_{i+1} + 16c_{i+2} = 0
=> f_i'' = (−f_{i−2} + 16f_{i−1} − 30f_i + 16f_{i+1} − f_{i+2}) / (12h^2),   Error: (h^4/90) f^[6](x_i) + O(h^6)
Numerical Differentiation: Example
• Given: location of an object at different times

  Time (s)      | 0    | 1    | 2    | 3     | 4     | 5     | 6
  Location (cm) | 0.00 | 2.61 | 6.91 | 13.85 | 24.70 | 41.25 | 65.86

[Figure: location vs. time]
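A sketch of estimating the object's velocity from this table (forward difference at the first point, backward at the last, central elsewhere; h = 1 s):

```python
import numpy as np

t = np.arange(7.0)
s = np.array([0.00, 2.61, 6.91, 13.85, 24.70, 41.25, 65.86])
v = np.empty_like(s)
v[0]    = (s[1] - s[0]) / (t[1] - t[0])          # forward difference
v[-1]   = (s[-1] - s[-2]) / (t[-1] - t[-2])      # backward difference
v[1:-1] = (s[2:] - s[:-2]) / (t[2:] - t[:-2])    # central differences
print(v)
```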
Numerical Integration
• Given data (x_k, f(x_k)), k = 0, 1, 2, ..., n, assumed to be
  – in increasing order
  – equidistant (with ∆x = h)
  – with x_0 = a and x_n = b
• E.g., discharge estimation from velocity measurements:  Q = ∫_0^R 2πr v dr
• Approximating f(x) linearly within segment i and integrating:
Ĩ_i = ∫_0^h [ f_{i−1} + x (f_i − f_{i−1})/h ] dx = h (f_{i−1} + f_i)/2
which is the area of the trapezoid
Trapezoidal Rule
• The desired integral is written as
Ĩ = Σ_{i=1}^{n} Ĩ_i = h [ f_0/2 + Σ_{i=1}^{n−1} f_i + f_n/2 ]
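A minimal sketch of the composite trapezoidal rule for equally spaced samples:

```python
def trapezoid(f_values, h):
    """Composite trapezoidal rule for equally spaced samples f_0 ... f_n."""
    return h * (f_values[0] / 2.0 + sum(f_values[1:-1]) + f_values[-1] / 2.0)
```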
• How to find the error?
• Take the ith segment:
E_i = I_i − Ĩ_i = ∫_{x_{i−1}}^{x_i} { f(x) − [ f_{i−1} + (x − x_{i−1})(f_i − f_{i−1})/h ] } dx
    = ∫_{x_{i−1}}^{x_i} (x − x_{i−1})(x − x_i) f[x, x_{i−1}, x_i] dx
where (see the proof outline below)
f[x_n, x_{n−1}, ..., x_1, x_0] = f^[n](ζ)/n!,   ζ ∈ (x_0, x_n)
Rolle’s Theorem Proof Outline
• The Newton interpolating polynomial is
f_n(x) = Σ_{j=0}^{n} c_j φ_j(x),   φ_i(x) = Π_{j=0}^{i−1} (x − x_j)
• φi is an ith-degree polynomial
• f(x)−fn(x) is zero at n+1 points
f ’(x)−f ’n(x) is zero at “at least” n points, one
within each segment
f ’’(x)−f ’’n(x) is zero at n−1 points, one within
each of the segments of the previous “bullet”
Rolle’s Theorem Proof Outline
• Extending this argument:
f [n](x)−fn [n](x) is zero at some point ζ, within the
interval (x0,xn)
• Since φi is an ith-degree polynomial, its nth
derivative will be zero for i<n
• And nth derivative of φn is “n!”
• Therefore,
f[x_n, x_{n−1}, ..., x_1, x_0] = f^[n](ζ)/n!,   ζ ∈ (x_0, x_n)
Trapezoidal Rule: Error estimate
E_i = ∫_{x_{i−1}}^{x_i} (x − x_{i−1})(x − x_i) f[x, x_{i−1}, x_i] dx
    = ∫_0^h x (x − h) [ f''(ζ_i*)/2! ] dx,   where ζ_i* ∈ (x_{i−1}, x_i)
• Use the second mean value theorem for integrals (note that x(x − h) is uniformly non-positive on (0, h)):
E_i = [ f''(ζ_i)/2! ] ∫_0^h x (x − h) dx = − h^3 f''(ζ_i) / 12
Numerical Integration
• Given data (x_k, f(x_k)), k = 0, 1, 2, ..., n
• Estimate I = ∫_a^b f(x) dx
[Figure: f(x), its piecewise-linear approximation f_1(x), and the error e(x) on the segment (x_{i−1}, x_i)]
f(x) − [ f_{i−1} + (x − x_{i−1})(f_i − f_{i−1})/h ] = (x − x_{i−1})(x − x_i) f[x, x_{i−1}, x_i]
Trapezoidal Rule: Error estimate
• Therefore,
E_i = ∫_{x_{i−1}}^{x_i} (x − x_{i−1})(x − x_i) f[x, x_{i−1}, x_i] dx
[Figure: the weight (x − x_{i−1})(x − x_i) and the interpolation error for an example function on −1 ≤ x ≤ 1]
Proof Outline
E_i = ∫_0^h x (x − h) [ f''(ζ_i*)/2! ] dx,   where ζ_i* ∈ (x_{i−1}, x_i)
• Use the second mean value theorem for integrals:
E_i = [ f''(ζ_i)/2! ] ∫_0^h x (x − h) dx = − h^3 f''(ζ_i) / 12
Trapezoidal Rule: Error estimate
The total error is
E = I − Ĩ = Σ_{i=1}^{n} E_i = − (h^3/12) Σ_{i=1}^{n} f''(ζ_i) = − (b − a) h^2 f̄'' / 12
where the average value of the second derivative is given by
f̄'' = Σ_{i=1}^{n} f''(ζ_i) / n = Σ_{i=1}^{n} f''(ζ_i) / [ (b − a)/h ]
Trapezoidal Rule: Error estimate
• The error in one segment is O(h3), and the
total error over the interval (a,b) is O(h2)
• Implies that if we reduce the step size to half,
error in each segment will be reduced to 1/8,
but overall error reduces to 1/4 (since the
number of segments is doubled!)
Trapezoidal Rule: Example
• The velocity of an object is measured (x-direction):

  Time (s)     | 0    | 1    | 2    | 3    | 4     | 5     | 6
  Speed (cm/s) | 2.00 | 3.33 | 5.44 | 8.65 | 13.36 | 20.13 | 29.60

[Figure: speed vs. time]

• Error −(b − a) h^2 f̄''/12 should vary from about −0.3 h^2 to −1.7 h^2 (T.V. = 65.86 cm)
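A short sketch applying the trapezoidal rule to this speed data (h = 1 s); the true distance quoted above is 65.86 cm:

```python
import numpy as np

v = np.array([2.00, 3.33, 5.44, 8.65, 13.36, 20.13, 29.60])
d = 1.0 * (v[0] / 2 + v[1:-1].sum() + v[-1] / 2)   # composite trapezoidal rule
print(d)
```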
Simpson’s Rule
• Sum over all sub-intervals (assume n is even), where Ĩ_i = (h/3)(f_{i−1} + 4f_i + f_{i+1}):
Ĩ = Σ_{i=1,3,5,...,n−1} Ĩ_i = (h/3) [ f_0 + 4 Σ_{i=1,3,5,...,n−1} f_i + 2 Σ_{i=2,4,6,...,n−2} f_i + f_n ]
Simpson’s Rule: Error Estimate
• Error in the ith sub-interval:
E_i = ∫_{−h}^{h} (x + h) x (x − h) f[x, x_{i−1}, x_i, x_{i+1}] dx
    = [ f[x, x_{i−1}, x_i, x_{i+1}] ∫_{−h}^{x} (x + h) x (x − h) dx ]_{−h}^{h}
      − ∫_{−h}^{h} ( d f[x, x_{i−1}, x_i, x_{i+1}]/dx ) [ ∫_{−h}^{x} (x + h) x (x − h) dx ] dx
• Note that ∫_{−h}^{h} (x + h) x (x − h) dx = 0 and ∫_{−h}^{x} (x + h) x (x − h) dx is non-negative for x between (−h, h).
Simpson’s Rule: Error Estimate
• Using d f[x, x_{i−1}, x_i, x_{i+1}]/dx = f^iv(ζ_i)/4! (shown below):
E_i = − [ f^iv(ζ_i)/4! ] ∫_{−h}^{h} [ ∫_{−h}^{x} (x + h) x (x − h) dx ] dx
    = − [ f^iv(ζ_i)/4! ] ∫_{−h}^{h} ( x^4/4 − h^2 x^2/2 + h^4/4 ) dx
    = − h^5 f^iv(ζ_i) / 90
• Sub-interval error is O(h^5)
• Total error, O(h^4):
E = I − Ĩ = Σ_{i=1,3,5,...,n−1} E_i = − h^5 Σ_{i=1,3,5,...,n−1} f^iv(ζ_i) / 90 = − (b − a) h^4 f̄^iv / 180
Simpson’s Rule: Example
• The velocity of an object is measured (x-direction):

  Time (s)     | 0    | 1    | 2    | 3    | 4     | 5     | 6
  Speed (cm/s) | 2.00 | 3.33 | 5.44 | 8.65 | 13.36 | 20.13 | 29.60

• h = 1 s:
d = [ 2 + 4×(3.33 + 8.65 + 20.13) + 2×(5.44 + 13.36) + 29.60 ] × 1/3 = 65.88 cm
• h = 3 s:
d = (2 + 4×8.65 + 29.60) × 3/3 = 66.20 cm
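A sketch of the composite Simpson's 1/3 rule applied to the same data (should reproduce the 65.88 cm obtained above for h = 1 s):

```python
import numpy as np

def simpson13(f_values, h):
    """Composite Simpson's 1/3 rule; the number of segments must be even."""
    f = np.asarray(f_values, dtype=float)
    assert (len(f) - 1) % 2 == 0
    return (h / 3.0) * (f[0] + 4.0 * f[1:-1:2].sum() + 2.0 * f[2:-1:2].sum() + f[-1])

v = [2.00, 3.33, 5.44, 8.65, 13.36, 20.13, 29.60]
print(simpson13(v, 1.0))
```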
Simpson’s Rule: Example – Error analysis
• The fourth derivative of the data is roughly constant at 0.12 cm/s^5
• Error −(b − a) h^4 f̄^iv / 180 should be equal to about −0.004 h^4
• Trapezoidal Rule (recap):
E_i = ∫_{x_{i−1}}^{x_i} (x − x_{i−1})(x − x_i) f[x, x_{i−1}, x_i] dx,   f[x, x_{i−1}, x_i] = f''(ζ)/2!
E_i = − h^3 f''(ζ_i)/12,   E = − (b − a) h^2 f̄''/12
Simpson’s 1/3rd Rule
Ĩ_i = (h/3)(f_{i−1} + 4f_i + f_{i+1})
Ĩ = (h/3) [ f_0 + 4 Σ_{i=1,3,5,...,n−1} f_i + 2 Σ_{i=2,4,6,...,n−2} f_i + f_n ]

Simpson's Rule: Error Estimate
• Error of the quadratic fit on (x_{i−1}, x_{i+1}):
e(x) = f(x) − f_2(x) = (x − x_{i−1})(x − x_i)(x − x_{i+1}) c_3(x)
c_3(x) = { f(x) − [ f_{i−1} + (x − x_{i−1}) f[x_{i−1}, x_i] + (x − x_{i−1})(x − x_i) f[x_{i−1}, x_i, x_{i+1}] ] } / [ (x − x_{i−1})(x − x_i)(x − x_{i+1}) ]
[Figure: f(x), the quadratic f_2(x), and the error e(x) on (x_{i−1}, x_{i+1})]
e(x) = (x − x_{i−1})(x − x_i)(x − x_{i+1}) f[x, x_{i−1}, x_i, x_{i+1}]
Simpson’s Rule: Error Estimate
• Error in the ith sub-interval:
E_i = ∫_{−h}^{h} (x + h) x (x − h) f[x, x_{i−1}, x_i, x_{i+1}] dx
    = [ f[x, x_{i−1}, x_i, x_{i+1}] ∫_{−h}^{x} (x + h) x (x − h) dx ]_{−h}^{h}
      − ∫_{−h}^{h} ( d f[x, x_{i−1}, x_i, x_{i+1}]/dx ) [ ∫_{−h}^{x} (x + h) x (x − h) dx ] dx
• The derivative of the divided difference:
d f[x, x_{i−1}, x_i, x_{i+1}]/dx = lim_{ε→0} ( f[x + ε, x_{i−1}, x_i, x_{i+1}] − f[x, x_{i−1}, x_i, x_{i+1}] ) / ε
  = lim_{ε→0} f[x + ε, x, x_{i−1}, x_i, x_{i+1}] = f^iv(ζ_i)/4!,   ζ_i ∈ (x_{i−1}, x_{i+1})
Simpson’s Rule: Error Estimate
E_i = − [ f^iv(ζ_i)/4! ] ∫_{−h}^{h} [ ∫_{−h}^{x} (x + h) x (x − h) dx ] dx = − h^5 f^iv(ζ_i)/90
• Sub-interval error is O(h^5)
• Total error, O(h^4):
E = I − Ĩ = Σ_{i=1,3,5,...,n−1} E_i = − h^5 Σ f^iv(ζ_i)/90 = − (b − a) h^4 f̄^iv / 180
• Alternatively, derive the Simpson weights by undetermined coefficients, Ĩ_i = h(c_{i−1} f_{i−1} + c_i f_i + c_{i+1} f_{i+1}):
• f(x) = 1:  I = ∫_{−h}^{h} dx = 2h = h(c_{i−1} + c_i + c_{i+1})  ⇒  c_{i−1} + c_i + c_{i+1} = 2
• f(x) = x:  I = ∫_{−h}^{h} x dx = 0 = h(−c_{i−1} h + c_i·0 + c_{i+1} h)  ⇒  −c_{i−1} + c_{i+1} = 0
• f(x) = x^2:  I = ∫_{−h}^{h} x^2 dx = 2h^3/3 = h(c_{i−1} h^2 + c_i·0 + c_{i+1} h^2)  ⇒  c_{i−1} + c_{i+1} = 2/3
⇒  c_{i−1} = 1/3;  c_i = 4/3;  c_{i+1} = 1/3
Improving accuracy: Most common technique
• Richardson Extrapolation: estimate I_i = ∫_{x_{i−1}}^{x_{i+1}} f(x) dx with step h and step 2h, and combine:
Ĩ_i = (1/3) [ 4 · (h/2)(f_{i−1} + 2f_i + f_{i+1}) − (2h/2)(f_{i−1} + f_{i+1}) ]
• Again getting Simpson's 1/3 rule, with error of the order h^4, as seen earlier.
• Romberg algorithm: recursive combination, using integral estimates Ĩ_{h,k} of order k and step sizes h and 2h:
Ĩ_{h,k+2} = ( 2^k Ĩ_{h,k} − Ĩ_{2h,k} ) / ( 2^k − 1 )
Romberg Integration
• Algorithm: start with the trapezoidal rule, with step sizes h, 2h, 4h, ...
• Since the error is O(h^2), k = 2 and  Ĩ_{h,4} = ( 4 Ĩ_{h,2} − Ĩ_{2h,2} ) / 3
• Similarly,  Ĩ_{2h,4} = ( 4 Ĩ_{2h,2} − Ĩ_{4h,2} ) / 3
• Combine two O(h^4) estimates:  Ĩ_{h,6} = ( 16 Ĩ_{h,4} − Ĩ_{2h,4} ) / 15
• Any order of accuracy could be achieved, if we have enough points. We only need to know the trapezoidal rule!
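A sketch of the Romberg table built from trapezoidal estimates, assuming equally spaced samples whose count allows repeated coarsening (e.g. 2^(levels−1) divides the number of segments):

```python
import numpy as np

def romberg(f_values, h, levels=3):
    """Romberg integration starting from trapezoidal estimates with steps h, 2h, 4h, ..."""
    f = np.asarray(f_values, dtype=float)
    # trapezoidal estimates I(h), I(2h), I(4h), ...
    T = [(h * 2**k) * (f[::2**k][0]/2 + f[::2**k][1:-1].sum() + f[::2**k][-1]/2)
         for k in range(levels)]
    R = [T]
    for m in range(1, levels):             # successive Richardson combinations
        fac = 4.0**m
        R.append([(fac * R[m-1][i] - R[m-1][i+1]) / (fac - 1.0)
                  for i in range(levels - m)])
    return R[-1][0]                        # highest-order estimate
```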
Romberg Integration: Example
• h = 2 s: d = (2/2 + 5.44 + 13.36 + 29.60 + 59.90/2) × 2 = 158.72 cm
• h = 4 s: d = (2/2 + 13.36 + 59.92/2) × 4 = 177.28 cm
• O(h^4) with h, 2h: 4/3 × 154.03 − 158.72/3 = 152.47 cm
• O(h^4) with 2h, 4h: 4/3 × 158.72 − 177.28/3 = 152.53 cm
• O(h^6) with h, 2h, 4h: 16/15 × 152.47 − 152.53/15 = 152.46 cm
Improving accuracy: Newton-Cotes and Adams
• Newton-Cotes: use a higher-degree interpolating polynomial and integrate over the entire sub-interval
  – Trapezoidal
  – Simpson's 1/3
  – Simpson's 3/8:  E_i = −(3h^5/80) f^iv(ζ_i);  total error = −(b − a) h^4 f̄^iv / 80
Adams Method
• Adams: use a higher-degree interpolating polynomial and integrate over only one segment; useful in "open" methods
• E.g., quadratic using x_{i−1}, x_i, x_{i+1}:
Ĩ_{i+1} = ∫_0^h { f_{i−1} + (x + h)(f_i − f_{i−1})/h + (x + h) x [ (f_{i+1} − f_i)/h − (f_i − f_{i−1})/h ] / (2h) } dx
        = (h/12)( −f_{i−1} + 8f_i + 5f_{i+1} )
Open and Semi-open Integration
• Given data (xk , f (xk )) k = 0,1,2,..., n
• Estimate I = ∫_a^b f(x) dx
• Open integration:  a < x_0 AND b > x_n
• Semi-open integration:  a < x_0 OR b > x_n

Semi-open Integration
• We discuss only semi-open integration
• Assume a = x_0;  b = x_n + h
• Trapezoidal rule: linear interpolation in the last segment; integrate by extrapolating up to b
Ĩ_{n+1} = ∫_h^{2h} [ f_{n−1} + x (f_n − f_{n−1})/h ] dx = (h/2)(3f_n − f_{n−1})
Semi-open Integration
• The estimate of I is, therefore,
Ĩ = Σ_{i=1}^{n+1} Ĩ_i = (h/2) [ f_0 + 2 Σ_{i=1}^{n−1} f_i + f_n + 3f_n − f_{n−1} ]
  = (h/2) [ f_0 + 2 Σ_{i=1}^{n−2} f_i + f_{n−1} + 4f_n ]
• The error in the extrapolated segment is
E_{n+1} = ∫_h^{2h} x (x − h) [ f''(ζ*)/2 ] dx = 5h^3 f''(ζ)/12,   ζ ∈ (x_{n−1}, b)
Gauss Quadrature (2-point)
• Approximate I = ∫_a^b f(x) dx ≈ c_0 f(x_0) + c_1 f(x_1), choosing both the weights c_i and the points x_i so that polynomials of as high a degree as possible are integrated exactly.
• For f(x) = x:  ∫_a^b f(x) dx = c_0 f(x_0) + c_1 f(x_1)  ⇒  c_0 x_0 + c_1 x_1 = (b^2 − a^2)/2
• Transform to the standard interval z ∈ (−1, 1):  I_x = [ (b − a)/2 ] I_z
• In the subsequent analysis, we find I_z, not I_x

Numerical Integration of a Function
• The four equations (from f = 1, z, z^2, z^3) then become:
c_0 + c_1 = 2;   c_0 z_0 + c_1 z_1 = 0;   c_0 z_0^2 + c_1 z_1^2 = 2/3;   c_0 z_0^3 + c_1 z_1^3 = 0
resulting in
z_0 = −1/√3;  z_1 = 1/√3;  c_0 = 1;  c_1 = 1
[Figure: the 2-point Gauss rule integrates cubics exactly; examples f(x) = 1 + x + x^2 + x^3 and f(x) = 1 + 2x + 3x^2 + 4x^3 on −1 ≤ x ≤ 1]
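A short sketch verifying that the 2-point rule integrates the cubic from the figure exactly (the true value of its integral over (−1, 1) is 4):

```python
import numpy as np

z = np.array([-1.0, 1.0]) / np.sqrt(3.0)   # Gauss points
w = np.array([1.0, 1.0])                   # weights
f = lambda x: 1 + 2*x + 3*x**2 + 4*x**3
print(np.sum(w * f(z)))                    # 4.0
```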
Gauss Quadrature: General Form
• Let there be n+1 quadrature points: z_0, z_1, ..., z_n
• We have 2n+2 adjustable parameters
• We should be able to exactly integrate all polynomials of degree 2n+1 (and lower)
• All these polynomials must necessarily match the function values at the (n+1) z_i's
• We may write these polynomials using a combination of the Lagrange polynomials, L_i, and the Newton polynomial Π_{i=0}^{n} (z − z_i)

Gauss Quadrature: General Form
• With p_n(z) being an arbitrary polynomial of degree n, we write the exactly integrable polynomial as
f_{2n+1}(z) = Σ_{i=0}^{n} L_i(z) f(z_i) + p_n(z) Π_{i=0}^{n} (z − z_i)
• Requiring ∫_{−1}^{1} f_{2n+1}(z) dz = Σ_{i=0}^{n} c_i f(z_i) gives
Σ_{i=0}^{n} [ ∫_{−1}^{1} L_i(z) dz ] f(z_i) + ∫_{−1}^{1} p_n(z) Π_{i=0}^{n} (z − z_i) dz = Σ_{i=0}^{n} c_i f(z_i)
• This is achieved by letting c_i = ∫_{−1}^{1} L_i(z) dz and choosing the z_i's as the zeroes of an (n+1)th degree polynomial which is orthogonal to ALL polynomials of degree n: the Legendre polynomial
Gauss-Legendre Quadrature
• Recall the first few Legendre polynomials:
P_0(x) = 1;  P_1(x) = x;  P_2(x) = (−1 + 3x^2)/2
P_3(x) = (−3x + 5x^3)/2;  P_4(x) = (3 − 30x^2 + 35x^4)/8
• Any of these is orthogonal to all lower-degree polynomials. Earlier we had seen that P_3(z) is orthogonal to P_0(z), P_1(z), and P_2(z).
• For the 2-point rule (z_i the zeroes of P_2), the weights are
c_0 = ∫_{−1}^{1} L_0(z) dz = [ z/2 − (√3/4) z^2 ]_{−1}^{1} = 1;   c_1 = 1
Gauss-Legendre Quadrature
• For three Gauss points at 0 and ±√(3/5):
L_0(z) = (5/6) z^2 − (5/6)√(3/5) z;   L_1(z) = 1 − (5/3) z^2;   L_2(z) = (5/6) z^2 + (5/6)√(3/5) z
c_0 = ∫_{−1}^{1} L_0(z) dz = [ (5/18) z^3 − (5/12)√(3/5) z^2 ]_{−1}^{1} = 5/9;   c_1 = 8/9;   c_2 = 5/9
[Figure: the 3-point Gauss rule integrates quintics exactly; examples f(x) = 1 + x + x^2 + x^3 + x^4 + x^5 and f(x) = 1 + 2x + 3x^2 + 4x^3 + 5x^4 + 6x^5 on −1 ≤ x ≤ 1]
Abscissa, Weight, and Error for the Gauss-Legendre Quadrature points
W_i = 2 (1 − z_i^2) / [ (n + 1) P_n(z_i) ]^2
• Recall: for the 2-point rule, E = f^iv(ζ)/135; the 4th derivative of the integrand varies over the interval
• Example:  I_z = ∫_{−1}^{1} e^{−(z/2 + 1.5)} / [ (z/2 + 1.5) √(1 − z^2) ] dz   (T.V. = 0.571946)
Gauss-Legendre quadrature gives I_z = 0.568160
• Error in I_z = 3.79×10^−3
Gauss-Tchebycheff Quadrature: Example
• Use 3 points, T_3(z) = 4z^3 − 3z; the zeroes are −√0.75, 0, √0.75
resulting in I_z = 0.571833
• Error in I_z = 1.13×10^−4
Numerical Integration: Improper Integrals
• We have assumed, for I = ∫_a^b f(x) dx:
  • a and b are finite
  • f(x) is defined and continuous in (a, b)
• Improper Integral: when any (or both) of these assumptions is violated
• E.g.,  I = ∫_1^∞ e^{−x^2} dx,   I = ∫_0^1 (cos x / √x) dx
Improper Integrals: Convergence
• An improper integral may or may not converge (i.e., have a finite value)
• We assume that it converges! How to find it?
• If the domain is unbounded, we use a transformation of variable to make it finite:
I = ∫_1^∞ e^{−x^2} dx :   z = 1/x  ⇒  I = ∫_0^1 ( e^{−1/z^2} / z^2 ) dz
• If f(x) is undefined at one end, a semi-open method could be used (or a variable transform):
I = ∫_0^1 (cos x / √x) dx :   z = √x  ⇒  I = ∫_0^1 2 cos z^2 dz
Improper Integrals: Evaluation
erfc(x) = (2/√π) ∫_x^∞ e^{−t^2} dt
• Estimate the value of erfc(1) (T.V. = 0.157299)
• Transformation: y = 1/t
I = ∫_1^∞ e^{−t^2} dt  ⇒  I = ∫_0^1 ( e^{−1/y^2} / y^2 ) dy
• Note that f(y) is undefined at y = 0 (though the limit does exist). Trapezoidal, Simpson, ... cannot be used!
• Use 3-point Gauss quadrature
Improper Integrals: Example
• 3-point Gauss quadrature on the transformed integral
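A sketch of this evaluation with the 3-point Gauss-Legendre rule (mapping z ∈ (−1, 1) to y ∈ (0, 1)); the printed value can be compared with the tabulated T.V. of erfc(1):

```python
import numpy as np

z = np.array([-np.sqrt(3.0/5.0), 0.0, np.sqrt(3.0/5.0)])   # Gauss points
w = np.array([5.0/9.0, 8.0/9.0, 5.0/9.0])                  # weights
y = (z + 1.0) / 2.0                                        # map to (0, 1); dy = dz/2
g = np.exp(-1.0 / y**2) / y**2
I = 0.5 * np.sum(w * g)
print(2.0 / np.sqrt(np.pi) * I)                            # estimate of erfc(1)
```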
• Recall the 2-point rule:  z_0 = −1/√3;  z_1 = 1/√3;  c_0 = 1;  c_1 = 1
∫_{−1}^{1} f(z) dz ≈ Ĩ_z = c_0 f(z_0) + c_1 f(z_1)
• Any cubic that matches f at z_0 and z_1 can be written as
f_3(z) = [ (z − z_1)/(z_0 − z_1) ] f(z_0) + [ (z − z_0)/(z_1 − z_0) ] f(z_1) + (a + bz)(z − z_0)(z − z_1)
Gauss Quadrature: General Form
• Since the cubic polynomial is exactly integrable,
∫_{−1}^{1} { [ (z − z_1)/(z_0 − z_1) ] f(z_0) + [ (z − z_0)/(z_1 − z_0) ] f(z_1) + (a + bz)(z − z_0)(z − z_1) } dz = c_0 f(z_0) + c_1 f(z_1)
which implies that
c_0 = ∫_{−1}^{1} (z − z_1)/(z_0 − z_1) dz,   c_1 = ∫_{−1}^{1} (z − z_0)/(z_1 − z_0) dz,   and   ∫_{−1}^{1} (a + bz)(z − z_0)(z − z_1) dz = 0
Gauss-Legendre Quadrature
• Recall the first few Legendre polynomials:
P_0(x) = 1;  P_1(x) = x;  P_2(x) = (−1 + 3x^2)/2
P_3(x) = (−3x + 5x^3)/2;  P_4(x) = (3 − 30x^2 + 35x^4)/8
• Any of these is orthogonal to all lower-degree polynomials. Earlier we had seen that P_2(z) is orthogonal to P_0(z) and P_1(z), i.e.,
∫_{−1}^{1} P_2(z) P_0(z) dz = ∫_{−1}^{1} P_2(z) P_1(z) dz = 0
W_i = 2 (1 − z_i^2) / [ (n + 1) P_n(z_i) ]^2
First Order ODE's
Given: dy/dt = f(t, y) and y at t = t_0 is y_0
Find: y at t = t_0 + h
Once the value of y is obtained at t_0 + h, we take this as the "known" value and estimate the value at the next time step, and so on.
Note that h could be changed at each step, but generally it is kept constant.
First Order ODE’s : Graphical Representation
[Figures: y(t) over 0 ≤ t ≤ 1.2, and one step of size h = 0.2 approximated using the forward slope, the backward slope, and the midpoint slope]
Alternative formulation: Integration
y_{t_0+h} = y_0 + ∫_{t_0}^{t_0+h} f(t, y) dt
[Figures: f(t, y) over the step; the integral approximated by the rectangular rule and by the trapezoidal rule]
First Order ODE’s: Solution Algorithm
• Use subscript n for “known” point and n+1 for
the “desired” point: given tn,yn,tn+1, find yn+1
• Euler Forward or Explicit method:
y_{n+1} = y_n + h f(t_n, y_n)
Slope approximated by a forward difference:  f(t_n, y_n) = (y_{n+1} − y_n)/h
Or, integral estimated by a rectangular rule:  ∫_{t_n}^{t_{n+1}} f(t, y) dt = h f(t_n, y_n)
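A minimal sketch of the Euler forward stepping loop:

```python
def euler_forward(f, t0, y0, h, n_steps):
    """Euler forward (explicit) method: y_{n+1} = y_n + h f(t_n, y_n)."""
    t, y = t0, y0
    history = [(t, y)]
    for _ in range(n_steps):
        y = y + h * f(t, y)
        t = t + h
        history.append((t, y))
    return history
```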
Euler Method
• Euler Backward or Implicit method:
y_{n+1} = y_n + h f(t_{n+1}, y_{n+1})
Slope approximated by a backward difference:  f(t_{n+1}, y_{n+1}) = (y_{n+1} − y_n)/h
Integral estimated by a backward rectangular rule:  ∫_{t_n}^{t_{n+1}} f(t, y) dt = h f(t_{n+1}, y_{n+1})
Implicit: cannot be solved directly for y_{n+1} (unless f is of a very simple form, e.g. f = −λy)
Single- and Multi-step Methods
• Both of these methods use the slope
(derivative) at a single point (n for the explicit
and n+1 for the implicit): Single-step methods
• The multi-step methods use the slope at more than one point, e.g., the trapezoidal rule:
∫_{t_n}^{t_{n+1}} f(t, y) dt = h [ f(t_n, y_n) + f(t_{n+1}, y_{n+1}) ] / 2
Resulting in the trapezoidal method or Implicit Heun's method:
y_{n+1} = y_n + h [ f(t_n, y_n) + f(t_{n+1}, y_{n+1}) ] / 2
Single- and Multi-step Methods
• Single-step methods may be explicit or
implicit, depending on which point is used:
If slope at n is used- Explicit
If slope at n+1 is used- Implicit
• Multi-step methods also may be explicit or
implicit, depending on how the points are
chosen:
If the “average” slope does not use yn+1- Explicit
E.g.,  y_{n+1} = y_n + h [ f(t_n, y_n) + f(t_{n+1}, y_n + h f(t_n, y_n)) ] / 2
Otherwise- Implicit. E.g., Trapezoidal method
[Figure: one step of Heun's method on 0 ≤ t ≤ 0.2 — use the forward slope to estimate the "end-point" value, then use that estimate in the trapezoidal rule]
Consistency and Stability
• Consistency: The numerical approximation
should represent the original equation as h->0
E.g., Euler Forward –
yn +1 = y n + hf (t n , yn )
Taylor's series:
y_{n+1} = y_n + h f(t_n, y_n) + (h^2/2) f'(t_n, y_n) + ...
The method is consistent and the error in a single step (called the Local Truncation Error) is O(h^2).
Consistency
Euler Backward –
yn +1 = y n + hf (t n +1 , yn +1 )
Taylor’s series:
2
h
yn = y n +1−hf (t n +1 , yn +1 ) + f ′(t n +1 , yn +1 ) − ...
2
The method is consistent and the LTE (local
truncation error) is O(h2).
Both forward and backward are consistent and have
same order of accuracy. Forward may not be STABLE
Stability
• Stability: The numerical solution should be
bounded if the exact solution is bounded
• E.g., first-order decay:  dy/dt = −λy,  y(0) = 1
• Euler Forward:  y_{n+1} = y_n − hλ y_n = (1 − hλ) y_n
  will become unbounded if |1 − hλ| > 1: conditionally stable
• Euler Backward:  y_{n+1} = y_n − hλ y_{n+1}  ⇒  y_{n+1} = y_n / (1 + hλ)
  will not become unbounded: unconditionally stable
• If dy/dt = λy, y(0) = 1, the exact solution is unbounded
Derivation of multi-step methods
• Given:  dy/dt = f(t, y),  y(t_0) = y_0
subscript n is for the "known" point and n+1 for the "desired" point: given t_n, y_n, t_{n+1}, find y_{n+1}
All previous points 0, 1, 2, ..., n−1 are "known"
• Linear: we write the desired value, y_{n+1}, in terms of a linear combination of y_n and the "slopes":
y_{n+1} = y_n + h [ β f_{n+1} + Σ_{i=0}^{k} α_i f_{n−i} ]
• Explicit if β = 0, implicit otherwise.  k = 0, 1, 2, ..., n
Explicit multi-step methods: Adams Bashforth
• Consider k = 2:
y_{n+1} = y_n + h (α_0 f_n + α_1 f_{n−1} + α_2 f_{n−2})
• Use Taylor's series (t_{n−1} = t_n − h; t_{n−2} = t_n − 2h):
y_{n+1} = y_n + h f_n + (h^2/2!) f_n' + (h^3/3!) f_n'' + (h^4/4!) f_n''' + ...
f_{n−1} = f_n − h f_n' + (h^2/2!) f_n'' − (h^3/3!) f_n''' + ...
f_{n−2} = f_n − 2h f_n' + (4h^2/2!) f_n'' − (8h^3/3!) f_n''' + ...
Explicit multi-step methods
• Combine:
y_n + h f_n + (h^2/2!) f_n' + (h^3/3!) f_n'' + (h^4/4!) f_n''' + ...
  = y_n + h α_0 f_n + h α_1 [ f_n − h f_n' + (h^2/2!) f_n'' − (h^3/3!) f_n''' + ... ] + h α_2 [ f_n − 2h f_n' + (4h^2/2!) f_n'' − (8h^3/3!) f_n''' + ... ]
• Match the coefficients:
α_0 + α_1 + α_2 = 1;   −α_1 − 2α_2 = 1/2;   α_1/2 + 2α_2 = 1/6
• And get:
α_0 = 23/12;  α_1 = −4/3;  α_2 = 5/12
Explicit multi-step methods
• Therefore, for k = 2:
y_{n+1} = y_n + h [ (23/12) f_n − (4/3) f_{n−1} + (5/12) f_{n−2} ]
• Non-self starting, since at the start we do not have the values of f_{n−1} and f_{n−2}
• May use a single-step method for the first two steps and then switch to the above formula
• The lowest-order error term for this method is
(h^4/4!) f_n''' + h α_1 (h^3/3!) f_n''' + h α_2 (8h^3/3!) f_n''' = (3h^4/8) f''' = (3h^4/8) y''''
Implicit multi-step methods: Adams Moulton
• Consider k = 1:
y_{n+1} = y_n + β h f_{n+1} + h (α_0 f_n + α_1 f_{n−1})
• Use Taylor's series:
y_{n+1} = y_n + h f_n + (h^2/2!) f_n' + (h^3/3!) f_n'' + (h^4/4!) f_n''' + ...
f_{n−1} = f_n − h f_n' + (h^2/2!) f_n'' − (h^3/3!) f_n''' + ...
f_{n+1} = f_n + h f_n' + (h^2/2!) f_n'' + (h^3/3!) f_n''' + ...
Implicit multi-step methods
• Combine:
y_n + h f_n + (h^2/2!) f_n' + (h^3/3!) f_n'' + (h^4/4!) f_n''' + ...
  = y_n + h α_0 f_n + h α_1 [ f_n − h f_n' + (h^2/2!) f_n'' − (h^3/3!) f_n''' + ... ] + h β [ f_n + h f_n' + (h^2/2!) f_n'' + (h^3/3!) f_n''' + ... ]
• Match the coefficients:
β + α_0 + α_1 = 1;   β − α_1 = 1/2;   β/2 + α_1/2 = 1/6
• And get:
β = 5/12;  α_0 = 2/3;  α_1 = −1/12
Implicit multi-step methods
• Therefore, for k = 1:
y_{n+1} = y_n + h [ (5/12) f_{n+1} + (2/3) f_n − (1/12) f_{n−1} ]
• Non-self starting, since at the start we do not have the value of f_{n−1}
• Implicit, since f_{n+1} appears on the RHS
• The lowest-order error term for this method is
(h^4/4!) f_n''' + h α_1 (h^3/3!) f_n''' − h β (h^3/3!) f_n''' = −(h^4/24) f''' = −(h^4/24) y''''
ESO 208A: Computational Methods in Engineering
Arghya Das
Acknowledgement: Profs. Abhas Singh and Shivam Tripathi (CE)

Ordinary Differential Equations
ODE: Introduction
We will consider general problems of the form:
dy/dt = f(t, y),   y(t_0) = y_0,   t ≥ 0
Solution of this equation is a function y(t).
Starting from t_0, we shall take discrete time steps t_1, t_2, ..., of size h such that t_{n+1} = t_n + h.
Starting from the known initial value y_0, we shall compute values of y at each time step, y_1, y_2, y_3, ..., i.e., tabulate y.
An obvious way can be the Taylor series:
y_{n+1} = y_n + h y_n' + (h^2/2!) y_n'' + (h^3/3!) y_n''' + (h^4/4!) y_n'''' + (h^5/5!) y_n^[5] + (h^6/6!) y_n^[6] + ...
Neglecting h^2 and higher-order terms:
y_{n+1} ≈ y_n + h y_n' = y_n + h f(t_n, y_n)
ODE: Introduction
y_1 = y_0 + h f(t_0, y_0);  similarly y_2 = y_1 + h f(t_1, y_1), and so on.
This method may also be seen as follows: approximating the slope by a forward difference,
dy/dt ≈ (y_1 − y_0)/h = f(t_0, y_0)  ⇒  y_1 = y_0 + h f(t_0, y_0)
Example: k = 2
y_{n+1} = y_n + h (α_0 f_n + α_1 f_{n−1} + α_2 f_{n−2}),   where f_{n−i} = f(t_{n−i}, y_{n−i})
Let's expand all the terms in Taylor's series and equate LHS with RHS!
Multi-Step Methods: Explicit
y_{n+1} = y_n + h (α_0 f_n + α_1 f_{n−1} + α_2 f_{n−2})
Expanding all the terms in Taylor's series:
y_{n+1} = y_n + h f_n + (h^2/2!) f_n' + (h^3/3!) f_n'' + (h^4/4!) f_n''' + ...
Now,
f_{n−1} = f_n − h f_n' + (h^2/2!) f_n'' − (h^3/3!) f_n''' + ...
f_{n−2} = f_n − 2h f_n' + (4h^2/2!) f_n'' − (8h^3/3!) f_n''' + ...
Put these in the original equation!
RHS = y_n + h α_0 f_n + h α_1 [ f_n − h f_n' + (h^2/2!) f_n'' − (h^3/3!) f_n''' + ... ] + h α_2 [ f_n − 2h f_n' + (4h^2/2!) f_n'' − (8h^3/3!) f_n''' + ... ]

Multi-Step Methods: Explicit
Grouping terms, the RHS becomes
y_n + h (α_0 + α_1 + α_2) f_n − h^2 (α_1 + 2α_2) f_n' + (h^3/2)(α_1 + 4α_2) f_n'' − (h^4/6)(α_1 + 8α_2) f_n''' + ...
Multi-Step Methods: Explicit
Equating both sides of  y_{n+1} = y_n + h (α_0 f_n + α_1 f_{n−1} + α_2 f_{n−2}):
α_0 + α_1 + α_2 = 1;   −α_1 − 2α_2 = 1/2;   α_1/2 + 2α_2 = 1/6
⇒ α_0 = 23/12,  α_1 = −4/3,  α_2 = 5/12
y_{n+1} = y_n + h [ (23/12) f_n − (4/3) f_{n−1} + (5/12) f_{n−2} ]
Multi-Step Methods: Explicit
The effective approximation is:
dy/dt ≈ (y_{n+1} − y_n)/h = (23/12) f_n − (4/3) f_{n−1} + (5/12) f_{n−2}   ...(1)
We already have:
y_{n+1} = y_n + h f_n + (h^2/2!) f_n' + (h^3/3!) f_n'' + (h^4/4!) f_n''' + ...   ...(2)
and, collecting the Taylor expansions of f_{n−1} and f_{n−2},
(23/12) f_n − (4/3) f_{n−1} + (5/12) f_{n−2} = f_n + (h/2) f_n' + (h^2/6) f_n'' − (h^3/3) f_n''' + ...   ...(3)
Comparing (2) and (3), the terms up to f_n'' cancel, and the leading difference is the f_n''' term:
(h^4/4!) f_n''' + h α_1 (h^3/3!) f_n''' + h α_2 (8h^3/3!) f_n''' = (3h^4/8) f_n''' = (3/8) h^4 y_n''''
Thus
y_{n+1} = y_n + h [ (23/12) f_n − (4/3) f_{n−1} + (5/12) f_{n−2} ] + (3/8) h^4 y_n'''' + ...
Local truncation error (LTE) of this method is O(h^4)!
The method is non-self starting, i.e., it cannot be started with just the given initial condition y = y_0 at t = t_0 (or 0).
Why??? Because at the start we do not yet have f_{n−1} and f_{n−2}.

Multi-Step Methods: Explicit
y_{n+1} = y_n + h [ (23/12) f_n − (4/3) f_{n−1} + (5/12) f_{n−2} ] + (3/8) h^4 y_n''''
Let us assume that we have obtained y_1 at t_1 = t_0 + h and y_2 at t_2 = t_0 + 2h using another method, and then apply this method for subsequent time steps:
y_3 = y_2 + h [ (23/12) f_2 − (4/3) f_1 + (5/12) f_0 ] + (3/8) h^4 y_2''''
y_4 = y_3 + h [ (23/12) f_3 − (4/3) f_2 + (5/12) f_1 ] + (3/8) h^4 y_3''''
This way, if we apply the method for n time steps, the local errors accumulate:
Σ (3/8) h^4 y_i'''' = n (3/8) h^4 y''''(ζ) = (3/8) (t_n − t_0) h^3 y''''(ζ),   ζ ∈ (t_0, t_n)
so the global truncation error (GTE) is one order lower than the LTE.
GTE of the k-step Adams-Bashforth family (explicit):

  Name            | k | Method                                                                                  | GTE Order
  Euler Forward   | 0 | y_{n+1} = y_n + h f_n                                                                   | h
  Adams-Bashforth | 1 | y_{n+1} = y_n + h [ (3/2) f_n − (1/2) f_{n−1} ]                                         | h^2
  Adams-Bashforth | 2 | y_{n+1} = y_n + h [ (23/12) f_n − (4/3) f_{n−1} + (5/12) f_{n−2} ]                      | h^3
  Adams-Bashforth | 3 | y_{n+1} = y_n + h [ (55/24) f_n − (59/24) f_{n−1} + (37/24) f_{n−2} − (3/8) f_{n−3} ]   | h^4
First Order ODE’s: Solution Algorithm
• Single-step methods:
Euler Forward or Explicit method:  y_{n+1} = y_n + h f(t_n, y_n)
Euler Backward or Implicit method:  y_{n+1} = y_n + h f(t_{n+1}, y_{n+1})
• Multi-step methods:
Implicit Heun's (trapezoidal):  y_{n+1} = y_n + h [ f(t_n, y_n) + f(t_{n+1}, y_{n+1}) ] / 2
Explicit Heun's (or just Heun's method):
y_{n+1} = y_n + h [ f(t_n, y_n) + f(t_{n+1}, y_n + h f(t_n, y_n)) ] / 2
Derivation of multi-step methods
y_{n+1} = y_n + h [ β f_{n+1} + Σ_{i=0}^{k} α_i f_{n−i} ]
• The term in the (..) may be thought of as an "average slope" over the interval (t_n, t_{n+1})
• For explicit methods, the average slope is obtained from a weighted average of a few (= k+1) "previous (i.e., known)" slopes
• For implicit methods, the average slope is obtained from a weighted average of a few "previous" slopes and the "unknown" slope
[Figure: y(t) over 0 ≤ t ≤ 1.2]
Adams Bashforth: Alternative formulation
• For k = 2:
y_{n+1} = y_n + h (α_0 f_n + α_1 f_{n−1} + α_2 f_{n−2})
• Approximate f by a quadratic function (t measured from t_n, nodes at −2h, −h, 0):
f = [ (t + 2h)(t + h) / ((2h)(h)) ] f_n + [ (t + 2h) t / ((−h + 2h)(−h)) ] f_{n−1} + [ (t + h) t / ((−2h + h)(−2h)) ] f_{n−2}

Adams Bashforth: Alternative formulation
• Write  y_{n+1} = y_n + ∫_0^h f dt
• Integrate the quadratic f:
∫_0^h (t + 2h)(t + h) / ((2h)(h)) dt = 23h/12;   ∫_0^h (t + 2h) t / ((−h + 2h)(−h)) dt = −4h/3;   ∫_0^h (t + h) t / ((−2h + h)(−2h)) dt = 5h/12
• Same formula:
y_{n+1} = y_n + h [ (23/12) f_n − (4/3) f_{n−1} + (5/12) f_{n−2} ]
Adams Moulton: Alternative formulation
• For k = 1:
y_{n+1} = y_n + β h f_{n+1} + h (α_0 f_n + α_1 f_{n−1})
• Approximate f by a quadratic function (nodes at −h, 0, h):
f = [ (t + h) t / ((2h)(h)) ] f_{n+1} + [ (t + h)(t − h) / ((h)(−h)) ] f_n + [ t (t − h) / ((−h)(−2h)) ] f_{n−1}

Adams Moulton: Alternative formulation
• Write  y_{n+1} = y_n + ∫_0^h f dt
• Integrate the quadratic f:
∫_0^h (t + h) t / ((2h)(h)) dt = 5h/12;   ∫_0^h (t + h)(t − h) / ((h)(−h)) dt = 2h/3;   ∫_0^h t (t − h) / ((−h)(−2h)) dt = −h/12
• Same formula:
y_{n+1} = y_n + h [ (5/12) f_{n+1} + (2/3) f_n − (1/12) f_{n−1} ]
Another option: Backward Difference methods
• We write the "unknown" slope, f(t_{n+1}, y_{n+1}), in terms of a linear combination of y_{n+1} and the "known" y's (y_n, y_{n−1}, ...)
• Always implicit:  h f_{n+1} = Σ_{i=0}^{k} α_i y_{n+1−i},   k = 0, 1, 2, ..., n+1
• Derivation is similar to the multi-step methods
• E.g., for k = 2:
h f_{n+1} = α_0 y_{n+1} + α_1 y_n + α_2 y_{n−1}

Backward Difference methods
• Use Taylor's series:
f_{n+1} = f_n + h f_n' + (h^2/2!) f_n'' + (h^3/3!) f_n''' + ...
y_{n+1} = y_n + h f_n + (h^2/2!) f_n' + (h^3/3!) f_n'' + (h^4/4!) f_n''' + ...
y_{n−1} = y_n − h f_n + (h^2/2!) f_n' − (h^3/3!) f_n'' + (h^4/4!) f_n''' − ...
• Match the coefficients:
α_0 + α_1 + α_2 = 0;   α_0 − α_2 = 1;   α_0/2 + α_2/2 = 1
• And get:
α_0 = 3/2;  α_1 = −2;  α_2 = 1/2
Backward Difference methods: Alternative View
• Approximate y by a quadratic (nodes at −h, 0, h):
y = [ (t + h) t / ((2h)(h)) ] y_{n+1} + [ (t + h)(t − h) / ((h)(−h)) ] y_n + [ t (t − h) / ((−h)(−2h)) ] y_{n−1}
Differentiating this quadratic at t = h gives the same formula, h f_{n+1} = (3/2) y_{n+1} − 2 y_n + (1/2) y_{n−1}.
Most Common: Runge-Kutta methods
• The “average slope” over the interval (tn, tn+1)
is approximated by a weighted mean of slopes
at “a few” intermediate points in the interval
(tn, tn+1).
• Explicit, since the “intermediate points” are
obtained directly from “known” values at
“previous” points
• General Form:  y_{n+1} = y_n + h Σ_{i=1}^{m} α_i k_i
First Order ODE’s: Example
• Given:  dy/dt = −y − e^{−t};  y(0) = 1
Find: y at t = 0.1, 0.2, 0.3, 0.4, 0.5 (using h = 0.1)
Exact solution:  y = e^{−t} (1 − t)
[Figure: y(t) and dy/dt over 0 ≤ t ≤ 1.2]
Example
• For t = 0.1 (TV = 0.814354):
Euler Forward:  y_{n+1} = y_n + h f(t_n, y_n)
y_{0.1} = 1 + 0.1 (−2) = 0.8
Euler Backward:  y_{n+1} = y_n + h f(t_{n+1}, y_{n+1})
y_{0.1} = 1 + 0.1 (−y_{0.1} − e^{−0.1})   =>  y_{0.1} = 0.826833
Trapezoidal or Implicit Heun's:
y_{n+1} = y_n + h [ f(t_n, y_n) + f(t_{n+1}, y_{n+1}) ] / 2
y_{0.1} = 1 + 0.1 [ −2 + (−y_{0.1} − e^{−0.1}) ] / 2   =>  y_{0.1} = 0.814055
Example
• For t = 0.2 (TV = 0.654985):
Euler Forward:  y_{0.2} = 0.8 + 0.1 (−1.704837) = 0.629516
Euler Backward:  y_{0.2} = 0.826833 + 0.1 (−y_{0.2} − e^{−0.2})   =>  y_{0.2} = 0.677236
Trapezoidal or Implicit Heun's:
y_{0.2} = 0.814055 + 0.1 [ −1.718893 + (−y_{0.2} − e^{−0.2}) ] / 2   =>  y_{0.2} = 0.654452
[Table and figure: TV, Euler Forward, Euler Backward, and Trapezoidal values of y for t = 0 to 0.5]
Example
• Explicit multi-step method, with k = 2:
y_{n+1} = y_n + h [ (23/12) f_n − (4/3) f_{n−1} + (5/12) f_{n−2} ]
• Use the values obtained from the Trapezoidal method
• At t = 0.3 (TV = 0.518573):
y_{0.3} = y_{0.2} + h [ (23/12) f_{0.2} − (4/3) f_{0.1} + (5/12) f_0 ]
       = 0.654452 − 0.1 [ (23/12)(1.473182) − (4/3)(1.718893) + (5/12)(2) ]
       = 0.517944
Example
• Implicit multi-step method, with k = 1:
y_{n+1} = y_n + h [ (5/12) f_{n+1} + (2/3) f_n − (1/12) f_{n−1} ]
• Use the values obtained from the Trapezoidal method
• At t = 0.2 (TV = 0.654985):
y_{0.2} = y_{0.1} + h [ (5/12) f_{0.2} + (2/3) f_{0.1} − (1/12) f_0 ]
y_{0.2} = y_{0.1} + 0.1 [ (5/12)(−y_{0.2} − e^{−0.2}) − (2/3)(1.718893) + (1/12)(2) ]
• => y_{0.2} = 0.654735
Example
• Backward Difference method, k = 2:
h f_{n+1} = (3/2) y_{n+1} − 2 y_n + (1/2) y_{n−1}
• Taylor's series:
y_{n+1} = y_n + h f_n + (h^2/2!) f_n' + (h^3/3!) f_n'' + (h^4/4!) f_n''' + ...
Runge-Kutta methods (2nd order)
k_1 = f(t_n, y_n);   k_2 = f(t_n + αh, y_n + αh k_1)
Expanding k_2 in a Taylor series about (t_n, y_n):
k_2 = f_n + αh ∂f/∂t + αh f_n ∂f/∂y + (α^2 h^2/2) [ ∂^2f/∂t^2 + 2 f_n ∂^2f/∂t∂y + f_n^2 ∂^2f/∂y^2 ] + ...
Substitute in  y_{n+1} = y_n + w_1 h k_1 + w_2 h k_2  and compare with the Taylor series
y_{n+1} = y_n + h f_n + (h^2/2!) f_n' + (h^3/3!) f_n'' + ...
where
f' = df/dt = ∂f/∂t + (dy/dt) ∂f/∂y = ∂f/∂t + f ∂f/∂y
f'' = d/dt (df/dt) = ∂/∂t ( ∂f/∂t + f ∂f/∂y ) + f ∂/∂y ( ∂f/∂t + f ∂f/∂y )
Runge-Kutta methods
• Equating coefficients:
h:   1 = w_1 + w_2
h^2:  (1/2)(∂f/∂t + f ∂f/∂y) = w_2 α (∂f/∂t + f ∂f/∂y)  ⇒  w_2 α = 1/2
(one parameter remains free; e.g., α = 1/2 gives the midpoint method and α = 1 gives Heun's method)
4th order Runge-Kutta method: Example
• Given:  dy/dt = −y − e^{−t};  y(0) = 1
Find: y at t = 0.2 (using h = 0.2).  TV = 0.654985.  Exact solution: y = e^{−t}(1 − t)

4th order Runge-Kutta method: Example
• Fourth-order R-K method:
k_1 = f(t_n, y_n);   k_2 = f(t_n + h/2, y_n + (h/2) k_1)
k_3 = f(t_n + h/2, y_n + (h/2) k_2);   k_4 = f(t_n + h, y_n + h k_3)
y_{n+1} = y_n + (h/6)(k_1 + 2k_2 + 2k_3 + k_4)
• t_0 = 0, y_0 = 1:  k_1 = −1 − e^{0} = −2;  k_2 = −(1 + 0.1×(−2)) − e^{−0.1} = −1.70484;  k_3 = −(1 + 0.1×(−1.70484)) − e^{−0.1} = −1.73435;  k_4 = −(1 + 0.2×(−1.73435)) − e^{−0.2} = −1.47186
• y_{0.2} = 1 + (0.2/6)(−2 − 2×1.70484 − 2×1.73435 − 1.47186) = 0.654992
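A short sketch reproducing this single RK4 step in code and comparing with the exact solution:

```python
import numpy as np

def f(t, y):
    return -y - np.exp(-t)      # dy/dt = -y - e^(-t)

t, y, h = 0.0, 1.0, 0.2
k1 = f(t, y)
k2 = f(t + h/2, y + h/2 * k1)
k3 = f(t + h/2, y + h/2 * k2)
k4 = f(t + h,   y + h   * k3)
y_new = y + h/6 * (k1 + 2*k2 + 2*k3 + k4)
print(y_new, np.exp(-0.2) * (1 - 0.2))   # numerical vs. exact y(0.2)
```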
Error Analysis
• The “local truncation error (LTE)” is the error
over one interval (tn, tn+1)
• The “global truncation error (GTE)” is the
error over the entire time period (t0, tn+1)
• E.g., Euler Forward
yn +1 = y n + hf (t n , yn )
• Compare with Taylor's series:
y_{n+1} = y_n + h f(t_n, y_n) + (h^2/2) f'(t_n, y_n) + ...
• Local error is O(h^2)
• Global?
Global Truncation Error
• We start with t_0, where y_0 is known exactly
• y_1 = y_0 + h f(t_0, y_0)
• Error (LTE = GTE here):  (h^2/2) y''(ζ_0),  ζ_0 ∈ (t_0, t_1)
• LTE in the second step is (h^2/2) y''(ζ_1),  ζ_1 ∈ (t_1, t_2)
• GTE after two steps is (h^2/2) y''(ζ_0) + (h^2/2) y''(ζ_1) + H.O.T.
• Proceeding similarly, GTE up to t_{n+1}:
= (h^2/2) Σ_{i=0}^{n} y''(ζ_i),   ζ_i ∈ (t_i, t_{i+1})
= [ (t_{n+1} − t_0) h^2 / (2h) ] ȳ'' = [ (t_{n+1} − t_0) h / 2 ] ȳ''
Global Truncation Error
• A numerical scheme for solving the ODE is
called a kth order scheme, if the GTE is O(hk)
• The LTE is O(hk+1)
• For example, 2nd order R-K method have LTE
O(h3) and GTE O(h2)
• Let us take the same example and solve by the
Mid-point method (2nd order R-K) and the 4th-
order R-K method.
• dy/dt = − y − e−t ; y(0)=1
• Use h=0.1, 0.2, and 0.3 and solve up to t=0.6
RK2 (midpoint) and RK4 errors for the example (numerical − exact):

  h = 0.1                               h = 0.2                             h = 0.3
  t    RK2 error    RK4 error           t    RK2 error    RK4 error         t    RK2 error    RK4 error
  0.1  −0.00052338  −0.00000023413      0.2  −0.00404791  −0.00000732       0.3  −0.01321485  −0.00005439
  0.2  −0.00093252  −0.00000041628      0.4  −0.00642561  −0.00001157       0.6  −0.01870549  −0.00007630
  0.3  −0.00124582  −0.00000055494      0.6  −0.00764207  −0.00001369
  0.4  −0.00147906  −0.00000065736
  0.5  −0.00164579  −0.00000072977
  0.6  −0.00175758  −0.00000077747
Stability Analysis
• Stability: The numerical solution should be
bounded if the exact solution is bounded
• Different from “error,” a stable solution could
have large errors
• A numerical scheme may be stable for all
values of time-step (unconditionally stable) or
only for time-step less than a threshold
(conditionally stable)
• Also, a numerical scheme with the same time-step may be stable for some ODEs and unstable for some others

Linear Stability Analysis
• We perform only a Linear Stability analysis
• Expand the function f (i.e., dy/dt) in a Taylor's series and ignore the higher-order terms:
f(t, y) = f(t_0, y_0) + (t − t_0) ∂f/∂t |_(t_0,y_0) + (y − y_0) ∂f/∂y |_(t_0,y_0) + ...
• The behaviour is then governed by dy/dt = λy with λ = ∂f/∂y = λ_r + i λ_i
Linear Stability Analysis
• We now look at the stability region of various numerical methods.
• Start with the Euler Forward:
y_{n+1} = y_n + h f(t_n, y_n) = y_n (1 + λ_r h + i λ_i h)
• Define an amplification factor, σ, as the ratio of y at two consecutive time steps (y_{n+1}/y_n):
σ = 1 + λ_r h + i λ_i h
• For the solution to be bounded, |σ| must be ≤ 1
• The stability region is, therefore, given by
(1 + λ_r h)^2 + λ_i^2 h^2 ≤ 1
Linear Stability Analysis: Euler Forward
• The stability region is a circle of radius 1, centered at (−1, 0) in the (λ_r h, λ_i h) plane
• For real negative values of λ, the condition is |λh| ≤ 2
[Figure: stability region of Euler Forward]
Linear Stability Analysis: Euler Backward
• Similarly, for Euler Backward:
y_{n+1} = y_n + h f(t_{n+1}, y_{n+1})  ⇒  y_{n+1} = y_n / (1 − λ_r h − i λ_i h)
σ = 1 / (1 − λ_r h − i λ_i h)
• For the solution to be bounded, |σ| must be ≤ 1
• The stability region is, therefore, given by
(λ_r h − 1)^2 + λ_i^2 h^2 ≥ 1
Linear Stability Analysis: Trapezoidal method
• For the Trapezoidal method:
y_{n+1} = y_n + h [ f(t_n, y_n) + f(t_{n+1}, y_{n+1}) ] / 2  ⇒  y_{n+1} = y_n (1 + λ_r h/2 + i λ_i h/2) / (1 − λ_r h/2 − i λ_i h/2)
Linear Stability Analysis
• For the linearized problem, y_n = y_0 e^{n(λ_r h + i λ_i h)}
• The analytical solution is bounded for all negative λ_r
• Stability Region:
[Figure: the left half of the (λ_r h, λ_i h) plane]
Linear Stability Analysis: Euler Backward
• The stability region is the outside of a circle of radius 1, centered at (1, 0)
• For real negative values of λ, the method is unconditionally stable
[Figure: stability region of Euler Backward]
Linear Stability Analysis: Trapezoidal method
• For the Trapezoidal method:
y_{n+1} = y_n + h [ f(t_n, y_n) + f(t_{n+1}, y_{n+1}) ] / 2  ⇒  y_{n+1} = y_n (1 + λ_r h/2 + i λ_i h/2) / (1 − λ_r h/2 − i λ_i h/2)
• |σ| ≤ 1 whenever λ_r ≤ 0: the stability region is the entire left half-plane
[Figure: stability region of the Trapezoidal method]
Predictor-Corrector methods
• Implicit methods are stable but require
solution of a nonlinear equation at each
step
• Explicit methods require less
computational effort per step but may
need a very small time-step for stability
• Avoid the nonlinear equation solution, by
predicting the “unknown” value using
explicit method and then correcting it
using implicit
Predictor-Corrector methods
• For example, Heun’s method:
Predictor: y^p_{n+1} = y_n + h f(t_n, y_n)
Corrector: y^c_{n+1} = y_n + (h/2)[f(t_n, y_n) + f(t_{n+1}, y^p_{n+1})]
• Why stop at one step only? Iterate using
the corrected value in the implicit step.
y^(0)_{n+1} = y_n + h f(t_n, y_n)
y^(i)_{n+1} = y_n + (h/2)[f(t_n, y_n) + f(t_{n+1}, y^(i−1)_{n+1})]
• Repeat till convergence
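A minimal sketch of Heun's predictor-corrector with the corrector iterated to convergence, applied to the earlier example dy/dt = −y − e^(−t), y(0) = 1. The tolerance and iteration cap are assumptions of mine, not values from the notes.

```python
import math

def f(t, y):                           # the example ODE: dy/dt = -y - exp(-t)
    return -y - math.exp(-t)

def heun_step(t, y, h, tol=1e-10, max_iter=50):
    y_new = y + h * f(t, y)            # predictor (explicit Euler)
    for _ in range(max_iter):          # corrector (trapezoidal rule), iterated
        y_next = y + 0.5 * h * (f(t, y) + f(t + h, y_new))
        if abs(y_next - y_new) < tol:
            return y_next
        y_new = y_next
    return y_new

t, y, h = 0.0, 1.0, 0.1
while t < 0.6 - 1e-12:
    y = heun_step(t, y, h)
    t += h
print(t, y)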
Predictor-Corrector : Milne’s method
• Milne’s method (multi-step):
• Non-self starting
• Uses Simpson’s 1/3 methodology
• Predictor: interpolate a quadratic using n-2, n-
1, and n; integrate over n-3 to n+1
• Corrector: interpolate a quadratic using n-1, n,
and n+1; integrate over n-1 to n+1
Milne's method: Predictor
• Approximate f by a quadratic through (t_{n−2}, f_{n−2}), (t_{n−1}, f_{n−1}), (t_n, f_n), taking the local origin of t at t_n:
f ≈ [(t + h)t / (2h²)] f_{n−2} − [(t + 2h)t / h²] f_{n−1} + [(t + 2h)(t + h) / (2h²)] f_n
• Integrate from t = −3h to t = h (i.e., from t_{n−3} to t_{n+1}):
y^(0)_{n+1} = y_{n−3} + (4h/3)(2 f_{n−2} − f_{n−1} + 2 f_n)
Milne's method: Corrector
• Approximate f by a quadratic through (t_{n−1}, f_{n−1}), (t_n, f_n), (t_{n+1}, f(t_{n+1}, y^(i−1)_{n+1})):
f ≈ [t(t − h) / (2h²)] f_{n−1} − [(t + h)(t − h) / h²] f_n + [(t + h)t / (2h²)] f(t_{n+1}, y^(i−1)_{n+1})
• Integrate from t = −h to t = h:
y^(i)_{n+1} = y_{n−1} + (h/3)[f_{n−1} + 4 f_n + f(t_{n+1}, y^(i−1)_{n+1})]
Predictor-Corrector : Adams method
• Adams method:
• Uses Adams-Bashforth (explicit) and
Adams-Moulton (implicit)
• For Example, take the 4th order method
• Predictor: interpolate a cubic using n-3, n-2, n-
1, and n; integrate over n to n+1
• Corrector: interpolate a cubic using n-2, n-1,
n, and n+1; integrate over n to n+1
Adams method: Predictor
• Approximate f by a cubic through f_{n−3}, f_{n−2}, f_{n−1}, f_n (nodes at t = −3h, −2h, −h, 0):
f ≈ [(t + 2h)(t + h)t / ((−h)(−2h)(−3h))] f_{n−3} + [(t + 3h)(t + h)t / ((h)(−h)(−2h))] f_{n−2}
  + [(t + 3h)(t + 2h)t / ((2h)(h)(−h))] f_{n−1} + [(t + 3h)(t + 2h)(t + h) / ((3h)(2h)(h))] f_n
• Integrate from 0 to h:
y^(0)_{n+1} = y_n + ∫₀ʰ f dt = y_n + h[−(3/8) f_{n−3} + (37/24) f_{n−2} − (59/24) f_{n−1} + (55/24) f_n]
Adams method: Corrector
• Approximate f by a cubic through f_{n−2}, f_{n−1}, f_n, f^(i−1)_{n+1} (nodes at t = −2h, −h, 0, h):
f ≈ [(t + h)t(t − h) / ((−h)(−2h)(−3h))] f_{n−2} + [(t + 2h)t(t − h) / ((h)(−h)(−2h))] f_{n−1}
  + [(t + 2h)(t + h)(t − h) / ((2h)(h)(−h))] f_n + [(t + 2h)(t + h)t / ((3h)(2h)(h))] f^(i−1)_{n+1}
• Integrate from 0 to h:
y^(i)_{n+1} = y_n + h[(1/24) f_{n−2} − (5/24) f_{n−1} + (19/24) f_n + (3/8) f^(i−1)_{n+1}]
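A minimal sketch of the 4th-order Adams predictor-corrector. The notes do not specify the start-up procedure, so RK4 starting steps and a single corrector pass are my assumptions; function names are illustrative.

```python
import math

def rk4_step(f, t, y, h):
    k1 = f(t, y)
    k2 = f(t + h/2, y + h/2 * k1)
    k3 = f(t + h/2, y + h/2 * k2)
    k4 = f(t + h, y + h * k3)
    return y + h/6 * (k1 + 2*k2 + 2*k3 + k4)

def abm4(f, t0, y0, h, n_steps):
    t, y = [t0], [y0]
    for _ in range(3):                      # start-up: the method is not self-starting
        y.append(rk4_step(f, t[-1], y[-1], h)); t.append(t[-1] + h)
    fv = [f(ti, yi) for ti, yi in zip(t, y)]
    for n in range(3, n_steps):
        # Adams-Bashforth predictor
        yp = y[n] + h*(-(3/8)*fv[n-3] + (37/24)*fv[n-2] - (59/24)*fv[n-1] + (55/24)*fv[n])
        # Adams-Moulton corrector (one pass)
        fp = f(t[n] + h, yp)
        yc = y[n] + h*((1/24)*fv[n-2] - (5/24)*fv[n-1] + (19/24)*fv[n] + (3/8)*fp)
        y.append(yc); t.append(t[n] + h); fv.append(f(t[-1], yc))
    return t, y

# Example use on dy/dt = -y - exp(-t), y(0) = 1
t, y = abm4(lambda t, y: -y - math.exp(-t), 0.0, 1.0, 0.1, 6)
print(t[-1], y[-1])
```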
System of ODEs
• If we have several dependent variables,
yi, i from 1 to m
• Derivatives could be functions of time
and one or more ys
• Initial conditions on all ys should be given
• The system may be expressed as
dy1/dt = f1(t, y1, y2, ..., ym)
dy2/dt = f2(t, y1, y2, ..., ym)
...
dym/dt = fm(t, y1, y2, ..., ym)
with y1(t=0) = y1,0; y2(t=0) = y2,0; ...; ym(t=0) = ym,0
Higher order ODEs
• If we have a higher order ODE, it could be
converted into a system of ODEs
• For example, c2 d²y/dt² + c1 dy/dt + c0 y = f(t)
• Could be expressed as (using y1 = y and y2 = dy/dt):
dy1/dt = f1(t, y1, y2) = y2
dy2/dt = f2(t, y1, y2) = [f(t) − c0 y1 − c1 y2] / c2
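A minimal sketch of this conversion in code. The coefficients and forcing function in the example call are hypothetical, chosen only to illustrate the interface.

```python
import math

def make_system(c2, c1, c0, forcing):
    """Return f(t, y) for the system y1' = y2, y2' = (f(t) - c0*y1 - c1*y2)/c2."""
    def f(t, y):
        y1, y2 = y
        return [y2, (forcing(t) - c0 * y1 - c1 * y2) / c2]
    return f

# Hypothetical example: y'' + 3y' + 2y = 6 e^t  ->  c2 = 1, c1 = 3, c0 = 2
f = make_system(1.0, 3.0, 2.0, lambda t: 6 * math.exp(t))
print(f(0.0, [0.0, 1.0]))   # derivatives of [y1, y2] at t = 0
```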
Higher order ODEs
• The only problem is with the boundary
conditions
• There are two boundary conditions on y
• If both are specified at t=“0” (e.g., y0 and
dy/dt0): Initial Value Problem (IVP)
• If these are specified at different points
(e.g., y0 and yT): Boundary Value Problem
(BVP)
• Problems discussed till now were IVPs
Higher order ODEs
• The higher order IVP is readily convertible
into a system of IVPs
• The BVPs require different technique and
will be discussed later
• For now, we will look at only a system of
IVPs, and will not consider higher-order
IVPs separately, since these are
equivalent!
System of ODEs
• All the methods described earlier for a
single ODE, are applicable for a system
• Explicit methods pose no problem
• Implicit methods require the solution of a
nonlinear system of algebraic equations
• Vector notation is used to write
d{y}/dt = {f}, with {y}|_{t=0} = {y0}
• where,
{y} = {y1, y2, ..., ym}ᵀ; {f} = {f1, f2, ..., fm}ᵀ; {y0} = {y1,0, y2,0, ..., ym,0}ᵀ
System of ODEs: Euler Forward
• Euler Forward method gives:
{y}n+1 = {y}n + h{ f }n
• Or, in expanded form:
y1,n+1 = y1,n + h f1(t_n, y1,n, y2,n, ..., ym,n)
y2,n+1 = y2,n + h f2(t_n, y1,n, y2,n, ..., ym,n)
...
ym,n+1 = ym,n + h fm(t_n, y1,n, y2,n, ..., ym,n)
• For the example system d{y}/dt = [5.6  −26.4; 26.4  −106.6]{y} (used again below), this gives
{y1; y2}_{n+1} = {y1; y2}_n + h [5.6  −26.4; 26.4  −106.6] {y1; y2}_n = {(1 + 5.6h) y1 − 26.4h y2; 26.4h y1 + (1 − 106.6h) y2}_n
Stability of a System of IVPs
• Analytical solution is y1 = 0.8 e^(−t) + 0.2 e^(−100t)
• k = 6 (BDF6):
h f_{n+1} = (49/20) y_{n+1} − 6 y_n + (15/2) y_{n−1} − (20/3) y_{n−2} + (15/4) y_{n−3} − (6/5) y_{n−4} + (1/6) y_{n−5}
Stiff Systems: Gear’s Method
• Use BDF1 (Backward Euler) for “a few”
time steps with step size h
• Then use BDF2 with a step size of 2h
• Since BDF2 requires yn-1 also, BDF1 has to
be used for at least 2 steps of h
• Similarly, after at least 3 steps of BDF2
with step size 2h, we could switch to
BDF3 with step size 4h; and so on
Gear’s Method
• This allows us to use the “equal spaced”
formulae derived for BDF
• We could also use unequal spacing of
previous points, when changing the step
size, but it requires re-derivation
• The recommended size of the initial time
step is 1/| λmax |
• BDF7 is unstable. BDF6 is stable but not
robust (stability region is small)
• Therefore, sometimes we stop at BDF5
Gear’s Method: Example
• Same Problem:
d{y}/dt = [5.6  −26.4; 26.4  −106.6] {y}
• Start with h=0.01 (=1/| λmax |)
• Euler Implicit for 2 steps, to get
{y}_{0.01} = {0.892079; 0.598020};  {y}_{0.02} = {0.834237; 0.396059}
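A minimal sketch reproducing these two Backward Euler (BDF1) steps. The initial condition {1; 1} is an assumption on my part (it is consistent with the tabulated values but is not stated explicitly in this part of the notes).

```python
import numpy as np

A = np.array([[5.6, -26.4],
              [26.4, -106.6]])      # the example system dy/dt = A y
y = np.array([1.0, 1.0])            # assumed initial condition
h = 0.01                            # = 1/|lambda_max|, eigenvalues are -1 and -100

I = np.eye(2)
for step in range(2):               # two Backward Euler (BDF1) steps: solve (I - hA) y_new = y
    y = np.linalg.solve(I - h * A, y)
    print(f"t = {(step + 1) * h:.2f}  y = {y}")
# Expected output: approximately [0.892079, 0.598020] and [0.834237, 0.396059]
```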
y(0)=ya; y(1)=yb
• Linear BVP: p(x), q(x), and r linear in y
Boundary Value problems: Methods of solution
• Convert into a system of equations –
Shooting Method
• Approximate the derivatives by finite
differences: Direct Method
• Only Linear BVPs are considered
p(x) d²y/dx² + q(x) dy/dx + r1(x) y = r0(x)
• Solution domain (a,b) and specified
conditions ya and yb (or could be y’b, or
any combination of y and y’)
Boundary Value problems: Shooting Method
• Convert into two first-order ODEs
p(x) d²y/dx² + q(x) dy/dx + r1(x) y = r0(x)
• y1 => y; y2 => dy/dx
dy1/dx = f1(x, y1, y2) = y2
dy2/dx = f2(x, y1, y2) = [r0(x) − r1(x) y1 − q(x) y2] / p(x)
• Boundary conditions: y1 (a) = ya ; y1 (b) = yb
• For IVP, we need y2(a), which is not given
• Assume y2(a), solve IVP, compare y1(b)
Shooting Method
• Generally, the computed y1(b) will not be
equal to the given yb
• Assume a different y2(a), solve IVP till b,
to obtain another value of y1(b)
• Use a linear interpolation/extrapolation
to estimate the y2(a) which will result in
y1(b) equal to yb.
• Solve the IVP again with this value of
y2(a). For linear problems, the solution
could be obtained by linear interpolation.
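A minimal sketch of this linear shooting procedure (the example on the next slides uses a 2nd-order R-K integrator; here a plain Euler IVP solver is substituted for brevity, and all function names are my own). Two assumed slopes are integrated to x = b and the correct slope is obtained by linear interpolation/extrapolation.

```python
import math

def f(x, y):
    # Example linear BVP written as a system: y'' + 3y' + 2y = 6 e^x
    y1, y2 = y
    return [y2, 6 * math.exp(x) - 2 * y1 - 3 * y2]

def solve_ivp_euler(f, x0, y0, h, x_end):
    # Simple explicit Euler march (any IVP method could be substituted here)
    x, y = x0, list(y0)
    while x < x_end - 1e-12:
        dy = f(x, y)
        y = [y[i] + h * dy[i] for i in range(2)]
        x += h
    return y

def shoot(f, a, b, ya, yb, s0, s1, h):
    yb0 = solve_ivp_euler(f, a, [ya, s0], h, b)[0]   # y(b) for assumed slope s0
    yb1 = solve_ivp_euler(f, a, [ya, s1], h, b)[0]   # y(b) for assumed slope s1
    s = s0 + (yb - yb0) * (s1 - s0) / (yb1 - yb0)    # linear interpolation/extrapolation
    return s, solve_ivp_euler(f, a, [ya, s], h, b)

s, y_end = shoot(f, 0.0, 1.0, 0.0, 0.0, 0.0, 1.0, 0.05)
print("estimated y'(0) =", s, "  y(1) =", y_end[0])
```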
Shooting Method: Example
• Second-order equation:
d²y/dx² + 3 dy/dx + 2y = 6e^x
• Boundary conditions: y(0) = y(1) = 0
• Write as:
dy1/dx = y2
dy2/dx = 6e^x − 2y1 − 3y2
• Solve the IVP with the 2nd-order R-K scheme (stage evaluated at x_n + 3h/4):
y1,n+1 = y1,n + (h/3)[ y2,n + 2( y2,n + (3h/4)(6e^(x_n) − 2y1,n − 3y2,n) ) ]
y2,n+1 = y2,n + (h/3)[ (6e^(x_n) − 2y1,n − 3y2,n)
        + 2( 6e^(x_n + 3h/4) − 2( y1,n + (3h/4) y2,n ) − 3( y2,n + (3h/4)(6e^(x_n) − 2y1,n − 3y2,n) ) ) ]
Shooting Method: Example
• First assume y2(0) = 0 (left table), then y2(0) = 1 (right table). Use h = 0.2
x y1 y2 f1 f2 | x y1 y2 f1 f2
0 0 0 0 6.00000 0 0 1 1 3
0.15 0 0.9 0.9 4.27101 0.15 0.15 1.45 1.45 2.321005
0.2 0.12 0.969467 0.2 0.26 1.509467
x y1 y2 f1 f2 x y1 y2 f1 f2
0.2 0.12 0.969467 0.969467 4.18001 0.2 0.26 1.509467 1.509467 2.280014
0.35 0.26542 1.59647 1.59647 3.19416 0.35 0.48642 1.85147 1.85147 1.987156
0.4 0.397494 1.674023 0.4 0.607494 1.926423
x y1 y2 f1 f2 x y1 y2 f1 f2
0.4 0.397494 1.674023 1.674023 3.13389 0.4 0.607494 1.926423 1.926423 1.956693
0.55 0.648597 2.144106 2.144106 2.67000 0.55 0.896457 2.219926 2.219926 1.946824
0.6 0.794976 2.238949 0.6 1.031912 2.316445
x y1 y2 f1 f2 x y1 y2 f1 f2
0.6 0.794976 2.238949 2.238949 2.62591 0.6 1.031912 2.316445 2.316445 1.919553
0.75 1.130819 2.632836 2.632836 2.54185 0.75 1.379379 2.604378 2.604378 2.130108
0.8 1.295284 2.752924 0.8 1.533592 2.72843
x y1 y2 f1 f2 x y1 y2 f1 f2
0.8 1.295284 2.752924 2.752924 2.50390 0.8 1.533592 2.72843 2.72843 2.100772
0.95 1.708223 3.12851 3.12851 2.71228 0.95 1.942857 3.043546 3.043546 2.497908
1 1.895947 3.281489 1 2.121294 3.201536
Shooting Method: Example
• For y2(0)=0, y1(1)=1.896
• For y2(0)=1, y1(1)=2.121
• Specified value is y1(1)=0
• Linear extrapolation => y2(0) = −8.41348
• Solve the IVP again with this value of y2(0)
• In fact, for this linear problem we do not need to
solve again: the solution can be obtained by linear
extrapolation of the values computed for the two
assumed derivative values
Shooting Method: Example
• Solution of IVP with y2(0)= -8.41348
x y1 y2 f1 f2
0 0 -8.41348 -8.41348 31.24043
0.15 -1.26202 -3.72741 -3.72741 20.67728
0.2 -1.05789 -3.57381
x y1 y2 f1 f2
0.2 -1.05789 -3.57381 -3.57381 20.16562
0.35 -1.59396 -0.54897 -0.54897 13.34922
0.4 -1.36934 -0.44954
x y1 y2 f1 f2
0.4 -1.36934 -0.44954 -0.44954 13.03824
0.55 -1.43677 1.506197 1.506197 8.75446
0.6 -1.19848 1.586939
x y1 y2 f1 f2
0.6 -1.19848 1.586939 1.586939 8.56886
0.75 -0.96044 2.872267 2.872267 6.00608
0.8 -0.70971 2.959006
x y1 y2 f1 f2
0.8 -0.70971 2.959006 2.959006 5.89566
0.95 -0.26586 3.843354 3.843354 4.51592
1 0 3.954172
[Figure: y vs. x for the shooting-method example, numerical solution compared with the analytical solution]
Shooting Method: Different Boundary Conditions
• What if dy/dx is specified at “b”?
• Same methodology, compare y2(b)
• If both y and dy/dx are specified at b?
• IVP with a negative h
• If dy/dx specified at a and y at b?
• Assume two different y1(a), solve the IVP
and compare y1(b)
• For nonlinear problems, more iterations
are needed. To avoid that: Direct Method
Boundary Value problems: Direct Method
• Approximate the derivatives by finite
differences using a grid of points
(generally equally spaced)
• Take linear equation:
p(x) d²y/dx² + q(x) dy/dx + r1(x) y = r0(x)
• with the boundary conditions
y (a ) = ya ; y (b) = yb
• Let (a,b) be divided into n equal intervals
[h=(b-a)/n]
Direct Method
• The grid points are called Nodes
• Let the node numbers be denoted by 0
(at a),1,2,...,i-1,i,i+1,...,n-1,n (at b)
• The derivatives at the nodes are
approximated by appropriate finite
difference formula (generally central)
• For example,
p(x) d²y/dx² + q(x) dy/dx + r1(x) y = r0(x)
Direct Method
p(x) d²y/dx² + q(x) dy/dx + r1(x) y = r0(x)
[Figure: y vs. x, comparing the Direct and Shooting method solutions]
Direct Method: Example
• Let us change the right boundary
condition to y’(1)=4.0687
• Using ghost node (node 6) at x=1.2,
y6 = y4 + 2 × 0.2 × 4.0687 = y4 + 1.6275
[ −48   32.5    0      0      0   ] [y1]   [  6e^0.2 ]
[ 17.5  −48    32.5    0      0   ] [y2]   [  6e^0.4 ]
[  0    17.5  −48    32.5     0   ] [y3] = [  6e^0.6 ]
[  0     0    17.5  −48     32.5  ] [y4]   [  6e^0.8 ]
[  0     0     0     50    −48    ] [y5]   [ −36.5841]
[Figure: y vs. x for the derivative boundary-condition example, numerical vs. analytical solution]
Direct Method
• At each node, we get an equation relating
the y values at nodes i-1, i, and i+1 (or
more, if higher order finite difference
formula is used)
ai ,i −1 yi −1 + ai ,i yi + ai ,i +1 yi +1 = bi
• where:
a_{i,i−1} = p(x_i)/h² − q(x_i)/(2h);   a_{i,i} = −2p(x_i)/h² + r1(x_i);
a_{i,i+1} = p(x_i)/h² + q(x_i)/(2h);   b_i = r0(x_i)
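A minimal sketch of assembling and solving this system for Dirichlet boundary conditions, applied to the example BVP from the shooting-method slides. The function name is illustrative, and a dense solve is used where a tridiagonal (Thomas) solver would normally be preferred.

```python
import numpy as np

def solve_linear_bvp(p, q, r1, r0, a, b, ya, yb, n):
    """Direct (finite-difference) solution of p y'' + q y' + r1 y = r0 on (a, b),
    with Dirichlet conditions y(a) = ya, y(b) = yb and n equal intervals."""
    h = (b - a) / n
    x = np.linspace(a, b, n + 1)
    A = np.zeros((n - 1, n - 1))
    rhs = np.zeros(n - 1)
    for k in range(1, n):                        # interior nodes 1 .. n-1
        xi = x[k]
        lo = p(xi)/h**2 - q(xi)/(2*h)            # a_{i,i-1}
        di = -2*p(xi)/h**2 + r1(xi)              # a_{i,i}
        up = p(xi)/h**2 + q(xi)/(2*h)            # a_{i,i+1}
        i = k - 1
        A[i, i] = di
        rhs[i] = r0(xi)
        if i > 0:     A[i, i-1] = lo
        else:         rhs[i] -= lo * ya          # known boundary value moved to RHS
        if i < n - 2: A[i, i+1] = up
        else:         rhs[i] -= up * yb
    y = np.linalg.solve(A, rhs)                  # a Thomas solver could be used instead
    return x, np.concatenate(([ya], y, [yb]))

# Example from the notes: y'' + 3y' + 2y = 6 e^x, y(0) = y(1) = 0, h = 0.2
x, y = solve_linear_bvp(lambda x: 1.0, lambda x: 3.0, lambda x: 2.0,
                        lambda x: 6*np.exp(x), 0.0, 1.0, 0.0, 0.0, 5)
print(np.round(y, 4))
```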
Direct Method: Boundary Conditions
• Virtual, Imaginary, or Ghost Node:
Add a fictitious (ghost) node n+1 outside the domain; the equation at node n can then be written in the same form as at interior nodes
Approximate the derivative boundary condition by a central difference:
(y_{n+1} − y_{n−1}) / (2h) = y′_b ⇒ y_{n+1} = y_{n−1} + 2h y′_b
Partial Differential Equations
• Two or more independent variables
Vibration of a string: y=f(x,t)
Steady-state temperature of a plate, T=f(x,y)
Transient temperature in a cube, T=f(t,x,y,z)
• Need Initial and/or Boundary conditions
Diffusion Equation: ∂c/∂t = D ∂²c/∂x²
Advection-Diffusion Equation: ∂c/∂t + u ∂c/∂x = D ∂²c/∂x²
Partial Differential Equations: Examples
Diffusion Equation in 3D:
∂c/∂t = Dx ∂²c/∂x² + Dy ∂²c/∂y² + Dz ∂²c/∂z²
3D Advection-Diffusion Equation:
∂c/∂t + u ∂c/∂x + v ∂c/∂y + w ∂c/∂z = Dx ∂²c/∂x² + Dy ∂²c/∂y² + Dz ∂²c/∂z²
Wave Equation: ∂²φ/∂t² = u² ∂²φ/∂x²
Needs two initial conditions and two b.c.
• Classifications of PDEs helps us in
identifying the appropriate IC/BC
• On the basis of Characteristics
• These are the hyper-planes (line, if 2
independent variables; plane if 3), along
which “information” propagates
Partial Differential Equations: Characteristics
• The governing equations become simpler
along the characteristics
• For example, a first-order PDE in 2
independent variables reduces to an ODE
along the characteristic lines
• These also help in identifying the
“domain (or region) of influence” and the
“domain (or region) of dependence”
• Which helps in proper selection of
initial/boundary conditions
Partial Differential Equations: Characteristics
• Consider the “pure advection”
∂c/∂t + u ∂c/∂x = 0
• Clearly, the information propagates at
the velocity u: Characteristics
[Figure: characteristic lines in the x–t plane]
Partial Differential Equations: Characteristics
• c is constant along these lines
• How do we find the characteristics?
• Define a new variable
ξ = ξ(t, x)
• Partial derivatives:
∂c/∂t = (∂c/∂ξ)(∂ξ/∂t)
∂c/∂x = (∂c/∂ξ)(∂ξ/∂x)
Partial Differential Equations: Characteristics
• From the governing equation:
(∂c/∂ξ)(∂ξ/∂t + u ∂ξ/∂x) = 0
• Resulting in ξ = x − ut
• Along the lines, ξ=constant, dx/dt=u
• Governing equation becomes dc/dt=0
• c is constant along a characteristic
line (known as Riemann Invariant)
Characteristic Lines
• Let us now consider a set of two first-
order nonlinear equations: “channel flow”
∂y/∂t + V ∂y/∂x + y ∂V/∂x = 0
∂y/∂x + (V/g) ∂V/∂x + (1/g) ∂V/∂t = f(x, t)
• Multiply 2nd eqn. by α and add to 1st
∂y/∂t + V ∂y/∂x + y ∂V/∂x + α[∂y/∂x + (V/g) ∂V/∂x + (1/g) ∂V/∂t] = α f(x, t)
Characteristic Lines
• Write it as
[∂y/∂t + (V + α) ∂y/∂x] + (α/g)[∂V/∂t + (V + gy/α) ∂V/∂x] = α f(x, t)
• For conversion to ODE, should have
V + α = V + gy/α ⇒ α = ±√(gy)
• i.e., ξ = x − (V + √(gy))t and η = x − (V − √(gy))t
• Along these lines, with f = 0,
±√(g/y) dy/dt + dV/dt = 0
Characteristic Lines
• Or
d(V ± 2√(gy))/dt = 0
[Figure: the +ve and −ve characteristics through a point P in the x–t plane, indicating its region of influence and region of dependence]
Partial Differential Equations: Characteristics
• Similarly, for a second-order equation
• Use ξ and η
• A general second-order PDE is
A ∂²φ/∂x² + B ∂²φ/∂x∂y + C ∂²φ/∂y² + D ∂φ/∂x + E ∂φ/∂y + Fφ = G
• We consider only Linear : all the
coefficients may be f (x,y only)
• We switch to the compact notation
Aφ xx + Bφ xy + Cφ yy + Dφ x + Eφ y + Fφ = G
Partial Differential Equations: Characteristics
• With ξ(x,y) and η(x,y)
φ_x = φ_ξ ξ_x + φ_η η_x
φ_xx = ξ_x(φ_ξ ξ_x + φ_η η_x)_ξ + η_x(φ_ξ ξ_x + φ_η η_x)_η
• And so on...
• The PDE is then written, in terms of ξ and η, in the same form with transformed coefficients:
A′φ_ξξ + B′φ_ξη + C′φ_ηη + D′φ_ξ + E′φ_η + F′φ = G′
Partial Differential Equations: Classification
• Example: Laplace equation
∂²φ/∂x² + ∂²φ/∂y² = 0
• a = 1, b = 0, c = 1; b² − ac = −1 < 0: an elliptic equation
Partial Differential Equations: Numerical Solution
∂²φ/∂x² + ∂²φ/∂y² = 0
• For a general case, Kx(x,y) and Ky(x,y)
need to be used in the two terms
• The objective is to find the value of φ at
all points within the solution domain
Laplace Equation
• Rectangular domain, lengths Lx and Ly
• Elliptic equation: No characteristic lines,
need boundary conditions on ALL
boundaries
• Assume Dirichlet B.C. on all boundaries,
φ (0,y)= l; φ (Lx,y)= r; φ (x,0)= b; φ
(x,Ly)= t; at the left, right, bottom, and top
• Use a uniform grid, spacing ∆x and ∆y,
which may not be equal
• How to find φ at each “grid point (node)”
[Figure: uniform grid on the rectangle 0 ≤ x ≤ Lx, 0 ≤ y ≤ Ly with spacing ∆x, ∆y; boundary values φ = l, r, b, t on the left, right, bottom, and top; interior nodes numbered 1–9; node (i, j) with neighbours (i±1, j) and (i, j±1)]
Although there are 25 nodes, only 9 unknowns, shown by circles
(Also note the discontinuity at the corner nodes)
Laplace Equation
• At node (i,j), the discretized form is
(φ_{i−1,j} − 2φ_{i,j} + φ_{i+1,j}) / ∆x² + (φ_{i,j−1} − 2φ_{i,j} + φ_{i,j+1}) / ∆y² = 0
• Node 5 (taking ∆x = ∆y):
φ2 + φ4 − 4φ5 + φ6 + φ8 = 0
• Node 2
b + φ1 − 4φ2 + φ3 + φ5 = 0
• Node 1
b + l − 4φ1 + φ2 + φ4 = 0
Laplace Equation
• We get nine linear equations in nine
unknowns. Five non-zero diagonals, but
two are “away from” the main diagonal
[ −4  1  0  1  0  0  0  0  0 ] [φ1]   [ −b−l ]
[  1 −4  1  0  1  0  0  0  0 ] [φ2]   [ −b   ]
[  0  1 −4  0  0  1  0  0  0 ] [φ3]   [ −b−r ]
[  1  0  0 −4  1  0  1  0  0 ] [φ4]   [ −l   ]
[  0  1  0  1 −4  1  0  1  0 ] [φ5] = [  0   ]
[  0  0  1  0  1 −4  0  0  1 ] [φ6]   [ −r   ]
[  0  0  0  1  0  0 −4  1  0 ] [φ7]   [ −l−t ]
[  0  0  0  0  1  0  1 −4  1 ] [φ8]   [ −t   ]
[  0  0  0  0  0  1  0  1 −4 ] [φ9]   [ −r−t ]
Laplace Equation
• The system of equations could be solved
by using direct methods
• For sparse matrices, iterative solution is
more efficient, specially for large systems
• Let us consider the bottom boundary at
zero potential and the other three at 100
• The RHS vector is
{-100,0,-100,-100,0,-100,-200,-100,-200}T
• And the solution is
{57.1,47.3,57.1,81.3,75.0,81.3,92.9,90.2,92.9}T
Laplace Equation: Iterative solution
• The Gauss-Seidel iterations are written as
φ_{i,j} = (1/4)(φ_{i−1,j} + φ_{i,j−1} + φ_{i+1,j} + φ_{i,j+1})
• With appropriate values for the nodes
near the boundaries
• Use the starting guess as 100 at all nodes
• In the first iteration:
φ1 = (0 + 100 + 100 + 100)/4 = 75
φ2 = (0 + 75 + 100 + 100)/4 = 68.75 ......
Node -> 1 2 3 4 5 6 7 8 9
Iter 0 100.00 100.00 100.00 100.00 100.00 100.00 100.00 100.00 100.00
Iter 1 75.00 68.75 67.19 93.75 90.63 89.45 98.44 97.27 96.68
Iter 2 65.63 55.86 61.33 88.67 82.81 85.21 96.48 93.99 94.80
Iter 3 61.13 51.32 59.13 85.11 78.91 83.21 94.78 92.12 93.83
Iter 4 59.11 49.29 58.12 83.20 76.95 82.23 93.83 91.15 93.35
Iter 5 58.12 48.30 57.63 82.23 75.98 81.74 93.34 90.67 93.10
Iter 6 57.63 47.81 57.39 81.74 75.49 81.49 93.10 90.42 92.98
Iter 7 57.39 47.57 57.26 81.49 75.24 81.37 92.98 90.30 92.92
Iter 8 57.26 47.44 57.20 81.37 75.12 81.31 92.92 90.24 92.89
Iter 9 57.20 47.38 57.17 81.31 75.06 81.28 92.89 90.21 92.87
Iter 10 57.17 47.35 57.16 81.28 75.03 81.27 92.87 90.19 92.86
Iter 11 57.16 47.34 57.15 81.27 75.02 81.26 92.86 90.19 92.86
Iter 12 57.15 47.33 57.15 81.26 75.01 81.25 92.86 90.18 92.86
Iter 13 57.15 47.33 57.14 81.25 75.00 81.25 92.86 90.18 92.86
Iter 14 57.14 47.32 57.14 81.25 75.00 81.25 92.86 90.18 92.86
Iter 15 57.14 47.32 57.14 81.25 75.00 81.25 92.86 90.18 92.86
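A minimal sketch of this Gauss-Seidel sweep for the same example (bottom boundary at 0, the other three at 100, 3×3 interior nodes, starting guess 100). The array layout and sweep order are my own choices; the iterates should approach the converged values tabulated above.

```python
import numpy as np

nx = ny = 3                                  # interior unknowns per direction
phi = np.full((ny + 2, nx + 2), 100.0)       # grid including boundary rows/columns
phi[0, :] = 0.0                              # bottom boundary at zero potential
# left, right and top boundaries remain at 100; interior starting guess is 100

for it in range(100):
    for j in range(1, ny + 1):               # sweep bottom-to-top, left-to-right
        for i in range(1, nx + 1):
            phi[j, i] = 0.25 * (phi[j, i-1] + phi[j, i+1] + phi[j-1, i] + phi[j+1, i])

print(np.round(phi[1:-1, 1:-1], 2))          # compare with the converged table values
```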
Laplace Equation
• What if Neumann B.C.?
• Assume Dirichlet B.C. on bottom and left
boundaries: φ (x,0)= b; φ (0,y)= l;
and Neumann at the right and top:
∂φ/∂x (Lx,y)= r; ∂φ/∂y(x,Ly)= t
• We have already seen how to modify the
equation at the boundary nodes for
Dirichlet B.C.
• For Neumann boundary, we could use
Ghost node or backward difference
[Figure: grid with Dirichlet conditions φ = b (bottom) and φ = l (left), and Neumann conditions ∂φ/∂x = r (right) and ∂φ/∂y = t (top); the unknowns now include the right and top boundary nodes, numbered 1–16]
Number of unknowns has increased from 9 to 16
Laplace Equation: Neumann B.C.
• Using backward difference, O(h2):
• Node 8
φ6 − 4φ7 + 3φ8 = 2∆xr
• Node 14
φ6 − 4φ10 + 3φ14 = 2∆yt
• Node 16? Does not occur in other Eqns.
Could be computed separately!
Laplace Equation: Example of derivative B.C.
• Let us consider the bottom boundary at
zero potential, left at 100, and the right
and top to be insulated (zero gradient)
• Use the Gauss-Seidel method
• Some equations are:
φ1 = (0 + 100 + φ2 + φ5 ) / 4
φ4 = (− φ2 + 4φ3 ) / 3
φ6 = (φ2 + φ5 + φ7 + φ10 ) / 4
φ15 = (− φ7 + 4φ11 ) / 3
Node -> 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15
Iter 0 100.00 100.00 100.00 100.00 100.00 100.00 100.00 100.00 100.00 100.00 100.00 100.00 100.00 100.00 100.00
Iter 1 75.00 68.75 67.19 66.67 93.75 90.63 89.45 89.06 98.44 97.27 96.68 96.48 100.00 99.48 99.09
Iter 2 65.63 55.86 52.99 52.04 88.67 82.81 80.39 79.58 96.48 93.86 92.46 91.99 99.09 97.55 96.48
Iter 3 61.13 49.24 45.42 44.14 85.11 77.15 73.65 72.48 94.51 90.42 88.13 87.37 97.65 94.84 92.96
Iter 4 58.59 45.29 40.77 39.26 82.56 72.98 68.59 67.13 92.66 87.15 84.02 82.97 96.02 91.88 89.16
Iter 5 56.96 42.68 37.63 35.95 80.65 69.77 64.64 62.93 90.96 84.15 80.23 78.92 94.39 88.95 85.43
Iter 6 55.83 40.81 35.35 33.53 79.14 67.18 61.42 59.50 89.42 81.45 76.81 75.26 92.85 86.20 81.93
Iter 7 54.99 39.38 33.58 31.65 77.90 65.04 58.73 56.63 88.05 79.02 73.74 71.98 91.43 83.69 78.74
Iter 8 54.32 38.24 32.15 30.13 76.85 63.21 56.43 54.17 86.83 76.87 71.00 69.05 90.15 81.42 75.86
Iter 9 53.77 37.28 30.96 28.85 75.95 61.63 54.44 52.05 85.74 74.95 68.58 66.45 89.01 79.39 73.29
Iter 10 53.31 36.48 29.94 27.77 75.17 60.26 52.71 50.19 84.78 73.25 66.42 64.15 87.98 77.58 71.00
Iter 50 50.02 30.68 22.60 19.91 69.40 50.07 39.80 36.37 77.50 60.39 50.12 46.69 80.20 63.82 53.56
Iter 60 50.01 30.65 22.56 19.87 69.37 50.02 39.73 36.30 77.47 60.32 50.03 46.61 80.16 63.75 53.47
Iter 70 50.00 30.64 22.55 19.86 69.37 50.01 39.71 36.28 77.46 60.30 50.01 46.58 80.15 63.73 53.44
Iter 80 50.00 30.64 22.55 19.85 69.36 50.00 39.71 36.28 77.45 60.30 50.00 46.57 80.15 63.73 53.43
Node 16 value is obtained as 50, from both top boundary and right boundary conditions
Slow convergence. Neumann conditions make the equation “not diagonally dominant”
Laplace Equation: Mixed B.C.
• Sometimes the boundary condition is
specified as a linear combination of the
dependent variable and its derivative
• Known as Third Type, Mixed, or Robin
• E.g., convective heat transfer
∂φ/∂x = k (φ−φ0)
• Similar procedure as Neumann. E.g.,
(φ_{i−2,j} − 4φ_{i−1,j} + 3φ_{i,j}) / (2∆x) = k(φ_{i,j} − φ0)
φ_{i−2,j} − 4φ_{i−1,j} + (3 − 2∆x k)φ_{i,j} = −2∆x k φ0
Advection-Diffusion Equation
• Mass transport in moving fluids
• We consider only 1-D, homogeneous case
∂c/∂t + u ∂c/∂x = D ∂²c/∂x²
[Figure: space–time grid, node i along x (0 … m, L = m∆x) and time level n along t]
Advection-Diffusion Equation: Discretization
∂c/∂t + u ∂c/∂x = D ∂²c/∂x²
• Forward difference for time and central
for space
(c_i^{n+1} − c_i^n)/∆t + u (c_{i+1}^n − c_{i−1}^n)/(2∆x) = D (c_{i+1}^n − 2c_i^n + c_{i−1}^n)/∆x²
• Assumption: uniform step size, constant
velocity and dispersion
c_i^{n+1} = [u∆t/(2∆x) + D∆t/∆x²] c_{i−1}^n + [1 − 2D∆t/∆x²] c_i^n + [−u∆t/(2∆x) + D∆t/∆x²] c_{i+1}^n
Advection-Diffusion Equation: Discretization
c_i^{n+1} = [u∆t/(2∆x) + D∆t/∆x²] c_{i−1}^n + [1 − 2D∆t/∆x²] c_i^n + [−u∆t/(2∆x) + D∆t/∆x²] c_{i+1}^n
• Not applicable at i=0 and i=m
• Not needed, since c is given
• Explicit, nodal values at any time step
directly from those at previous time step
• For better stability, we could use implicit
(c_i^{n+1} − c_i^n)/∆t + u (c_{i+1}^{n+θ} − c_{i−1}^{n+θ})/(2∆x) = D (c_{i+1}^{n+θ} − 2c_i^{n+θ} + c_{i−1}^{n+θ})/∆x²
where c_i^{n+θ} = (1 − θ) c_i^n + θ c_i^{n+1}
Time-Weighting
• θ denotes the weight assigned to the
“unknown” time step
• Sometimes, the weighting factor θ is
replaced by 1-μ, with μ being the weight
assigned to the “known” time step
• We will use μ: c_i^{n+1−μ} = μ c_i^n + (1 − μ) c_i^{n+1}
• We use the Courant number, C = u∆t/∆x, and
the Grid Peclet number, Pg = u∆x/D
(the Peclet no. based on domain length is
given by Pe = uL/D)
Courant and Peclet Numbers
• Courant no. represents the number of grid-
lengths travelled in one time step
• Peclet number represents the relative
influence of advection and dispersion
• The nodal equation becomes
[−(1−μ)(C/2 + C/Pg)] c_{i−1}^{n+1} + [1 + 2(1−μ)C/Pg] c_i^{n+1} + [(1−μ)(C/2 − C/Pg)] c_{i+1}^{n+1} =
[μ(C/2 + C/Pg)] c_{i−1}^n + [1 − 2μC/Pg] c_i^n + [μ(−C/2 + C/Pg)] c_{i+1}^n
• And, over a full step ∆t using the half-time-step (n+½) values:
c_i^{n+1} = c_i^n + [u∆t/(2∆x) + D∆t/∆x²] c_{i−1}^{n+½} − [2D∆t/∆x²] c_i^{n+½} + [−u∆t/(2∆x) + D∆t/∆x²] c_{i+1}^{n+½}
Advection-Diffusion Equation: Example
• Given: c(0,x)=1−x/3, c(t,0)=1, c(t,3)=0 (c in
kg/m3; x in m, t in s); u= 1 m/s; D= 2 m2/s
• Use: ∆x=1 m; ∆t=1 s
• Find: c after 1 s at x=1 m and 2 m
• C = 1, Pg = 0.5
• Explicit: c_i^{n+1} = [C/2 + C/Pg] c_{i−1}^n + [1 − 2C/Pg] c_i^n + [−C/2 + C/Pg] c_{i+1}^n
c_i^{n+1} = 2.5 c_{i−1}^n − 3 c_i^n + 1.5 c_{i+1}^n
• At 1 s: c_1^1 = 2.5(1) − 3(2/3) + 1.5(1/3) = 1;  c_2^1 = 2.5(2/3) − 3(1/3) + 1.5(0) = 2/3
• Fully implicit (μ = 0), at 1 s:
−2.5c_0^1 + 5c_1^1 − 1.5c_2^1 = c_1^0 ⇒ 5c_1^1 − 1.5c_2^1 = 19/6
−2.5c_1^1 + 5c_2^1 − 1.5c_3^1 = c_2^0 ⇒ −2.5c_1^1 + 5c_2^1 = 1/3
• Solution: 0.7686, 0.4510
Advection-Diffusion Equation: Example
• Crank-Nicolson or Time-centered (μ = 0.5):
−(1/2)(C/2 + C/Pg) c_{i−1}^{n+1} + (1 + C/Pg) c_i^{n+1} + (1/2)(C/2 − C/Pg) c_{i+1}^{n+1} =
(1/2)(C/2 + C/Pg) c_{i−1}^n + (1 − C/Pg) c_i^n + (1/2)(−C/2 + C/Pg) c_{i+1}^n
• At 1 s:
−1.25c_0^1 + 3c_1^1 − 0.75c_2^1 = 1.25c_0^0 − c_1^0 + 0.75c_2^0 ⇒ 3c_1^1 − 0.75c_2^1 = 2.083
−1.25c_1^1 + 3c_2^1 − 0.75c_3^1 = 1.25c_1^0 − c_2^0 + 0.75c_3^0 ⇒ −1.25c_1^1 + 3c_2^1 = 0.5
• Corrector (using the half-time-step values):
c_i^{n+1} = c_i^n + [u∆t/(2∆x) + D∆t/∆x²] c_{i−1}^{n+½} − [2D∆t/∆x²] c_i^{n+½} + [−u∆t/(2∆x) + D∆t/∆x²] c_{i+1}^{n+½}
c_i^1 = c_i^0 + 2.5c_{i−1}^{½} − 4c_i^{½} + 1.5c_{i+1}^{½} ⇒ c_1^1 = 0.5833; c_2^1 = 0.4167
Advection-Diffusion Equation: Neumann B.C.
• Assume Dirichlet B.C. at x = 0 and Neumann
at L: c(t,0) = 1, ∂c/∂x(t,L) = 0, and zero
initial condition: c(0,x) = 0
• Uniform grid, space ∆x and time ∆t: c_i^n
[Figure: space–time grid, node i along x (0 … m, L = m∆x) and time level n along t]
Neumann B.C. – Explicit Method
c_i^{n+1} = [u∆t/(2∆x) + D∆t/∆x²] c_{i−1}^n + [1 − 2D∆t/∆x²] c_i^n + [−u∆t/(2∆x) + D∆t/∆x²] c_{i+1}^n
• Not applicable at i = 0 (not needed)
• Use a ghost node (m+1)
• Approximate the derivative by central
difference: (c_{m+1}^n − c_{m−1}^n) / (2∆x) = 0
• The equation at node m:
c_m^{n+1} = [2D∆t/∆x²] c_{m−1}^n + [1 − 2D∆t/∆x²] c_m^n
Neumann B.C.: Example
• Given: c(0,x)=1−x/3, c(t,0)=1, ∂c/ ∂x(t,3)=0;
u= 1 m/s; D= 2 m2/s
• Use: ∆x=1 m; ∆t=1 s
• Find: c after 1 s at x=1 m and 2 m
• C = 1, Pg = 0.5
• Explicit: c_i^{n+1} = [C/2 + C/Pg] c_{i−1}^n + [1 − 2C/Pg] c_i^n + [−C/2 + C/Pg] c_{i+1}^n
c_i^{n+1} = 2.5 c_{i−1}^n − 3 c_i^n + 1.5 c_{i+1}^n
• At 1 s:
c_1^1 = 2.5c_0^0 − 3c_1^0 + 1.5c_2^0 = 2.5 − 2 + 0.5 = 1
c_2^1 = 2.5c_1^0 − 3c_2^0 + 1.5c_3^0 = 5/3 − 1 = 2/3
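A minimal sketch of one explicit step for this Neumann-boundary example, with the ghost-node update at the last node. Array handling is my own choice; the interior values after 1 s reproduce the worked example above.

```python
import numpy as np

u, D = 1.0, 2.0
dx, dt = 1.0, 1.0
m = 3                                     # nodes 0..3 on 0 <= x <= 3
x = np.arange(m + 1) * dx
c = 1.0 - x / 3.0                         # initial condition c(0, x) = 1 - x/3

a = u*dt/(2*dx) + D*dt/dx**2              # coefficient of c[i-1]  (= 2.5 here)
b = 1.0 - 2*D*dt/dx**2                    # coefficient of c[i]    (= -3 here)
d = -u*dt/(2*dx) + D*dt/dx**2             # coefficient of c[i+1]  (= 1.5 here)

c_new = c.copy()
c_new[1:m] = a*c[0:m-1] + b*c[1:m] + d*c[2:m+1]                  # interior nodes
c_new[m] = (2*D*dt/dx**2)*c[m-1] + (1 - 2*D*dt/dx**2)*c[m]       # ghost-node (zero-gradient) BC
c_new[0] = 1.0                                                   # Dirichlet BC at x = 0
print(c_new)    # after 1 s: c1 = 1 and c2 = 2/3, as in the worked example
```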
Advection-Diffusion Equation
∂c/∂t + u ∂c/∂x = D ∂²c/∂x²
c_i^{n+1} = [u∆t/(2∆x) + D∆t/∆x²] c_{i−1}^n + [1 − 2D∆t/∆x²] c_i^n + [−u∆t/(2∆x) + D∆t/∆x²] c_{i+1}^n
[Figure: space–time grid, node i along x (0 … m, L = m∆x) and time level n along t]
Time-Weighting
c_i^{n+1−μ} = μ c_i^n + (1 − μ) c_i^{n+1}
[−(1−μ)(C/2 + C/Pg)] c_{i−1}^{n+1} + [1 + 2(1−μ)C/Pg] c_i^{n+1} + [(1−μ)(C/2 − C/Pg)] c_{i+1}^{n+1} =
[μ(C/2 + C/Pg)] c_{i−1}^n + [1 − 2μC/Pg] c_i^n + [μ(−C/2 + C/Pg)] c_{i+1}^n
Wave Equation
∂²φ/∂t² = u(x)² ∂²φ/∂x²
• u is velocity, may vary with x (we assume
it to be constant), ϕ is a scalar (could be
displacement, electric/magnetic field…)
• Need two initial and two boundary
conditions: E.g. ϕ(0,x) and ∂ϕ/∂t(0,x);
ϕ(t,0) and ϕ(t,L).
Wave Equation
• Consider the vibration of a string fixed
between two supports
[Figure: a string of length L fixed at both ends]
∂²φ/∂t² = u² ∂²φ/∂x²
(φ_i^{n−1} − 2φ_i^n + φ_i^{n+1}) / ∆t² = u² (φ_{i−1}^n − 2φ_i^n + φ_{i+1}^n) / ∆x²
• How to decide the step-size?
• Need to look at characteristics
• a = 1, b = 0, c = −u²; b² − ac = u² > 0
• Hyperbolic equation: Two sets of
characteristics, with slopes 1/u and −1/u
[Figure: the two characteristics through a point bound its region of influence (forward in time) and domain of dependence (backward in time)]
Wave Equation
• Dirichlet B.C. : φ(t,0)=0, φ(t,L)=0
• Initial displacement: e.g. φ(0,x)= sin (πx/L)
• Initial velocity, e.g., ∂φ/∂t(0,x) = 0
[Figure: space–time grid; the explicit stencil for node (i, n+1) uses (i−1, n), (i, n), (i+1, n) and (i, n−1), defining the numerical domain of dependence]
Hyperbolic Equation
• A necessary condition for convergence:
the numerical domain of dependence
must contain the physical domain of
dependence
• Known as the Courant-Friedrichs-Lewy
(CFL) condition
• u∆t ≤ ∆x for the explicit scheme
• The Courant number, C≤ 1
• Implicit schemes, no limit on C for
convergence (should be small for accuracy)
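A minimal sketch of the explicit scheme for the vibrating string, with the CFL condition respected (C = 1). The grid size, number of steps, and the first-step treatment (using the zero-initial-velocity condition) are my own illustrative choices.

```python
import numpy as np

L, u = 1.0, 1.0                       # illustrative string length and wave speed
m = 20
dx = L / m
C = 1.0                               # Courant number; CFL requires C <= 1 here
dt = C * dx / u

x = np.linspace(0.0, L, m + 1)
phi_old = np.sin(np.pi * x / L)       # initial displacement phi(0, x) = sin(pi x / L)

# First step, using the zero-initial-velocity condition (phi^{-1} = phi^{1})
phi = phi_old.copy()
phi[1:-1] = phi_old[1:-1] + 0.5 * C**2 * (phi_old[:-2] - 2*phi_old[1:-1] + phi_old[2:])

for n in range(100):                  # subsequent steps: explicit scheme, ends held at zero
    phi_new = np.zeros_like(phi)
    phi_new[1:-1] = (2*phi[1:-1] - phi_old[1:-1]
                     + C**2 * (phi[:-2] - 2*phi[1:-1] + phi[2:]))
    phi_old, phi = phi, phi_new

print(phi[m // 2])                    # displacement at the mid-point of the string
```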
Wave Equation: Implicit Scheme
• The spatial derivative may be written as a
weighted average at the three time levels (weights ¼, ½, ¼):
(φ_i^{n−1} − 2φ_i^n + φ_i^{n+1}) / ∆t² = u² [ (1/4)(φ_{i−1}^{n−1} − 2φ_i^{n−1} + φ_{i+1}^{n−1})/∆x² + (1/2)(φ_{i−1}^n − 2φ_i^n + φ_{i+1}^n)/∆x² + (1/4)(φ_{i−1}^{n+1} − 2φ_i^{n+1} + φ_{i+1}^{n+1})/∆x² ]
Wave Equation: Implicit Scheme
• Results in a tridiagonal system:
−(C²/4) φ_{i−1}^{n+1} + (1 + C²/2) φ_i^{n+1} − (C²/4) φ_{i+1}^{n+1}
= (C²/4) φ_{i−1}^{n−1} − (1 + C²/2) φ_i^{n−1} + (C²/4) φ_{i+1}^{n−1} + (C²/2) φ_{i−1}^n + (2 − C²) φ_i^n + (C²/2) φ_{i+1}^n
• Thomas algorithm. Non-self starting!
• If the initial velocity is zero, at the first
time step we could apply the central
difference and write φ_i^{n+1} = φ_i^{n−1}
Implicit Scheme: Start-up
• The equations at the first time-step are:
−(C²/2) φ_{i−1}^1 + (2 + C²) φ_i^1 − (C²/2) φ_{i+1}^1 = (C²/2) φ_{i−1}^0 + (2 − C²) φ_i^0 + (C²/2) φ_{i+1}^0
• At Node 2:
−0.5φ_1^1 + 3φ_2^1 − 0.5φ_3^1 = 0.5φ_1^0 + φ_2^0 + 0.5φ_3^0 ⇒ −0.5φ_1^1 + 3φ_2^1 = −0.06495
(c_{i,j}^{n+1} − c_{i,j}^n)/∆t = D[ (c_{i−1,j}^{n+1} − 2c_{i,j}^{n+1} + c_{i+1,j}^{n+1})/∆x² + (c_{i,j−1}^{n+1} − 2c_{i,j}^{n+1} + c_{i,j+1}^{n+1})/∆y² ]
[Figure: 2-D grid with interior nodes 1–9; node (i, j) and its four neighbours (i±1, j), (i, j±1)]
9 unknowns: At each time step a banded matrix is formed
Can we make it tridiagonal for faster solution?
Alternating Direction Implicit Scheme
• Use implicit in one direction (say, x) at
“half time step” and explicit in other (y)
• Use implicit in the other direction (y) at
the next “half time step” and explicit in x
(c_{i,j}^{n+½} − c_{i,j}^n)/(∆t/2) = D[ (c_{i−1,j}^{n+½} − 2c_{i,j}^{n+½} + c_{i+1,j}^{n+½})/∆x² + (c_{i,j−1}^n − 2c_{i,j}^n + c_{i,j+1}^n)/∆y² ]
(c_{i,j}^{n+1} − c_{i,j}^{n+½})/(∆t/2) = D[ (c_{i−1,j}^{n+½} − 2c_{i,j}^{n+½} + c_{i+1,j}^{n+½})/∆x² + (c_{i,j−1}^{n+1} − 2c_{i,j}^{n+1} + c_{i,j+1}^{n+1})/∆y² ]
• Could reverse the order in the next time
step (implicit in y for the first half)
[Figure: the same 2-D grid with the interior nodes renumbered column-by-column (1, 2, 3 up the first column; …; 7, 8, 9 up the third) for the y-implicit half-step]
Node numbers need to be modified accordingly in the two half-steps
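A minimal, generic sketch of one ADI step for the 2-D diffusion equation with fixed (Dirichlet) boundary values; it is not tied to a particular example from the notes, and the function names and test grid are my own.

```python
import numpy as np

def thomas(a, b, c, d):
    """Solve a tridiagonal system: a = sub-, b = main, c = super-diagonal, d = RHS."""
    n = len(d)
    cp, dp = np.zeros(n), np.zeros(n)
    cp[0], dp[0] = c[0] / b[0], d[0] / b[0]
    for i in range(1, n):
        m = b[i] - a[i] * cp[i-1]
        cp[i] = c[i] / m
        dp[i] = (d[i] - a[i] * dp[i-1]) / m
    x = np.zeros(n)
    x[-1] = dp[-1]
    for i in range(n - 2, -1, -1):
        x[i] = dp[i] - cp[i] * x[i+1]
    return x

def adi_step(c, D, dt, dx, dy):
    """One ADI time step for dc/dt = D (c_xx + c_yy); boundary values of c held fixed."""
    ny, nx = c.shape
    rx, ry = D * dt / (2 * dx**2), D * dt / (2 * dy**2)
    half = c.copy()
    # Half step 1: implicit in x, explicit in y (row by row)
    for j in range(1, ny - 1):
        rhs = c[j, 1:-1] + ry * (c[j-1, 1:-1] - 2*c[j, 1:-1] + c[j+1, 1:-1])
        rhs[0]  += rx * c[j, 0]
        rhs[-1] += rx * c[j, -1]
        n = nx - 2
        half[j, 1:-1] = thomas(np.full(n, -rx), np.full(n, 1 + 2*rx), np.full(n, -rx), rhs)
    new = c.copy()
    # Half step 2: implicit in y, explicit in x (column by column)
    for i in range(1, nx - 1):
        rhs = half[1:-1, i] + rx * (half[1:-1, i-1] - 2*half[1:-1, i] + half[1:-1, i+1])
        rhs[0]  += ry * c[0, i]
        rhs[-1] += ry * c[-1, i]
        n = ny - 2
        new[1:-1, i] = thomas(np.full(n, -ry), np.full(n, 1 + 2*ry), np.full(n, -ry), rhs)
    return new

# Small illustration: 5x5 grid, boundaries held at 0, a unit concentration at the centre
c = np.zeros((5, 5)); c[2, 2] = 1.0
c = adi_step(c, D=1.0, dt=0.01, dx=0.1, dy=0.1)
print(np.round(c, 4))
```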
PDE’s Discussed
• Laplace Equation
∂²φ/∂x² + ∂²φ/∂y² = 0
• Discretization
(φ_{i−1,j} − 2φ_{i,j} + φ_{i+1,j}) / ∆x² + (φ_{i,j−1} − 2φ_{i,j} + φ_{i,j+1}) / ∆y² = 0
PDE’s Discussed
• Advection-Diffusion Equation
∂c/∂t + u ∂c/∂x = D ∂²c/∂x²
• Explicit:
(c_i^{n+1} − c_i^n)/∆t + u (c_{i+1}^n − c_{i−1}^n)/(2∆x) = D (c_{i+1}^n − 2c_i^n + c_{i−1}^n)/∆x²
• Time-weighted (μ):
[−(1−μ)(u∆t/(2∆x) + D∆t/∆x²)] c_{i−1}^{n+1} + [1 + 2(1−μ)D∆t/∆x²] c_i^{n+1} + [(1−μ)(u∆t/(2∆x) − D∆t/∆x²)] c_{i+1}^{n+1} =
[μ(u∆t/(2∆x) + D∆t/∆x²)] c_{i−1}^n + [1 − 2μD∆t/∆x²] c_i^n + [μ(−u∆t/(2∆x) + D∆t/∆x²)] c_{i+1}^n
PDE’s Discussed
• Wave Equation
∂²φ/∂t² = u² ∂²φ/∂x²
• Explicit:
(φ_i^{n−1} − 2φ_i^n + φ_i^{n+1}) / ∆t² = u² (φ_{i−1}^n − 2φ_i^n + φ_{i+1}^n) / ∆x²
• Implicit:
(φ_i^{n−1} − 2φ_i^n + φ_i^{n+1}) / ∆t² = u² [ (1/4)(φ_{i−1}^{n−1} − 2φ_i^{n−1} + φ_{i+1}^{n−1})/∆x² + (1/2)(φ_{i−1}^n − 2φ_i^n + φ_{i+1}^n)/∆x² + (1/4)(φ_{i−1}^{n+1} − 2φ_i^{n+1} + φ_{i+1}^{n+1})/∆x² ]
PDE’s Discussed
• 2-D transient diffusion
∂c/∂t = D (∂²c/∂x² + ∂²c/∂y²)
• Explicit:
(c_{i,j}^{n+1} − c_{i,j}^n)/∆t = D[ (c_{i−1,j}^n − 2c_{i,j}^n + c_{i+1,j}^n)/∆x² + (c_{i,j−1}^n − 2c_{i,j}^n + c_{i,j+1}^n)/∆y² ]
• Implicit:
(c_{i,j}^{n+1} − c_{i,j}^n)/∆t = D[ (c_{i−1,j}^{n+1} − 2c_{i,j}^{n+1} + c_{i+1,j}^{n+1})/∆x² + (c_{i,j−1}^{n+1} − 2c_{i,j}^{n+1} + c_{i,j+1}^{n+1})/∆y² ]
Truncation Error and Stability
• Similar to what was done for ODE
• Truncation error is obtained by comparing
the discretized “difference form” with the
Taylor’s series
• Stability is obtained by looking at the
amplification
• Covered very briefly here and for very
simple cases
Truncation Error : Laplace Equation
∂²φ/∂x² + ∂²φ/∂y² = 0;   (φ_{i−1,j} − 2φ_{i,j} + φ_{i+1,j})/∆x² + (φ_{i,j−1} − 2φ_{i,j} + φ_{i,j+1})/∆y² = 0
• Need the Taylor's series expansions
φ_{i±1,j} = [φ ± ∆x ∂φ/∂x + (∆x²/2!) ∂²φ/∂x² ± (∆x³/3!) ∂³φ/∂x³ + (∆x⁴/4!) ∂⁴φ/∂x⁴ + ...]_{i,j}
φ_{i,j±1} = [φ ± ∆y ∂φ/∂y + (∆y²/2!) ∂²φ/∂y² ± (∆y³/3!) ∂³φ/∂y³ + (∆y⁴/4!) ∂⁴φ/∂y⁴ + ...]_{i,j}
• Not needed here, but for later use:
φ_{i+1,j+1} = [φ + ∆x ∂φ/∂x + ∆y ∂φ/∂y + (1/2!)(∆x ∂/∂x + ∆y ∂/∂y)²φ + (1/3!)(∆x ∂/∂x + ∆y ∂/∂y)³φ + ...]_{i,j}
Truncation Error : Laplace Equation
• Difference equation at i,j
(φ_{i−1,j} − 2φ_{i,j} + φ_{i+1,j})/∆x² + (φ_{i,j−1} − 2φ_{i,j} + φ_{i,j+1})/∆y² = 0
∂²φ/∂x² + (∆x²/12) ∂⁴φ/∂x⁴ + ... + ∂²φ/∂y² + (∆y²/12) ∂⁴φ/∂y⁴ + ... = 0
∂²φ/∂x² + ∂²φ/∂y² = −(∆x²/12) ∂⁴φ/∂x⁴ − (∆y²/12) ∂⁴φ/∂y⁴ + ...
• Truncation Error: (T.V. − Approx Value) of ∇²φ
(∆x²/12) ∂⁴φ/∂x⁴ + (∆y²/12) ∂⁴φ/∂y⁴ + ...
• Second order in both x and y, as expected
Truncation Error : Diffusion Equation
∂c/∂t = D ∂²c/∂x²
• Time-weighted scheme:
[−(1−μ) D∆t/∆x²] c_{i−1}^{n+1} + [1 + 2(1−μ)D∆t/∆x²] c_i^{n+1} + [−(1−μ) D∆t/∆x²] c_{i+1}^{n+1} =
[μ D∆t/∆x²] c_{i−1}^n + [1 − 2μD∆t/∆x²] c_i^n + [μ D∆t/∆x²] c_{i+1}^n
• Use the Taylor’s series expansions (similar to
the Laplace equation with i,j, now we have i
for space and n for time). We will now need
the i±1,n+1 expressions also.
Truncation Error : Diffusion Equation
• Difference equation at i,n simplifies to
∂c/∂t − D ∂²c/∂x² = −(∆t/2) ∂²c/∂t² + (1 − μ)∆t D ∂³c/∂t∂x² − (∆t²/6) ∂³c/∂t³ + (1 − μ)(∆t²/2) D ∂⁴c/∂t²∂x² + D(∆x²/12) ∂⁴c/∂x⁴ + ...
• Using ∂c/∂t = D ∂²c/∂x² to replace the time derivatives, the ∂⁴c/∂x⁴ term becomes [D²∆t(μ − 1/2) − D∆x²/12] ∂⁴c/∂x⁴
• Which may be made to vanish by choosing
D∆t/∆x² = 1 / [12(μ − 1/2)]
(naturally, works only for μ > 1/2; could use explicit, μ = 1)
Truncation Error : Pure advection
∂c/∂t + u ∂c/∂x = 0
• Explicit scheme, with central difference:
c_i^{n+1} = [u∆t/(2∆x)] c_{i−1}^n + c_i^n − [u∆t/(2∆x)] c_{i+1}^n
• Using Taylor's series (and replacing all time
derivatives by spatial derivatives):
∂c/∂t + u ∂c/∂x = −(u²∆t/2) ∂²c/∂x² + (u³∆t²/6 − u∆x²/6) ∂³c/∂x³ + ...
• Therefore, the general solution of the
recursive equation −z_{k−1} + (2 − λ)z_k − z_{k+1} = 0
is z_k = c1 r1^k + c2 r2^k for k = 0, 1, 2, ..., m
Stability Analysis: Matrix method
z_k = c1 r1^k + c2 r2^k for k = 0, 1, 2, ..., m
r1,2 = [2 − λ ± √(λ² − 4λ)] / 2
• Write r1,2 as e^(±iθ), with
cos θ = (2 − λ)/2 and sin θ = √(4λ − λ²)/2
• We get sin mθ = 0 = sin jπ
Stability Analysis: Matrix method
• Resulting in the eigenvalues of B as:
λ_j = 2 − 2 cos(jπ/m); j = 1, 2, ..., m − 1
• The largest magnitude will be for j=m-1,
and is approximately 4
• The eigenvalues of A are obtained on
multiplying by −(D/∆x2)
• Stability limit is given by |λmax ∆t| ≤ 2
• D∆t/ ∆x2 ≤ 1/2
Stability Analysis: von Neumann method
• The solution is assumed to be of the form
φ = Σ_{k=−∞}^{∞} T_k(t) X_k(x)
Stability Analysis: von Neumann method
• Amplification for any "wave number" k
σ = 1 + (2D∆t/∆x²)(cos k∆x − 1)
• Most critical for cos k∆x = −1:
|1 − 4D∆t/∆x²| ≤ 1 ⇒ D∆t/∆x² ≤ 1/2