Claus Führer
Lund University
claus@maths.lth.se
C. Führer: FMN081/FMN041/NUMA12-2008
Unit 0: Preface
• These notes serve as a skeleton for the course. Together with the assignments, they document the course outline and course content.
• Unless stated otherwise, all references in the notes refer to the textbook by Süli and Mayers.
• The notes are a guide to reading the textbook; they are not a textbook themselves.
Unit 1: Basic Iterative Schemes in 1D
Problem:
Given: f : R → R
Find: x ∈ R such that f(x) = 0.
x is called a root or a zero of f.

(Figure: graph of a function crossing zero on [0, 10].)
1.1 Existence
Theorem. [1.1]
f : [a, b] ⊂ R → R, continuous and f (a)f (b) < 0,
i.e. they have different signs
Then
There exists ξ ∈ [a, b] with f (ξ) = 0 .
(Figure: a function with a single root and a double root.)
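Theorem 1.1 is the basis of the bisection method: repeatedly halve the interval, keeping the half on which the sign change persists. The notes use MATLAB; as an illustration, a minimal Python sketch:

```python
def bisect(f, a, b, tol=1e-10):
    """Bisection: keep halving [a, b] while preserving f(a)*f(b) < 0 (Theorem 1.1)."""
    fa, fb = f(a), f(b)
    if fa * fb >= 0:
        raise ValueError("f(a) and f(b) must have different signs")
    while b - a > tol:
        m = (a + b) / 2
        fm = f(m)
        if fm == 0:
            return m
        if fa * fm < 0:      # sign change in the left half
            b, fb = m, fm
        else:                # sign change in the right half
            a, fa = m, fm
    return (a + b) / 2

root = bisect(lambda x: x**2 - 2, 0.0, 2.0)   # converges to sqrt(2)
```

Each step halves the interval, so after k steps the error is at most (b − a)/2ᵏ.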
f (x) = 0 ⇔ x = g(x)
Examples:
g(x) = x − f (x)
g(x) = x + αf (x)
1.4 Brouwer’s Fixed Point Theorem
1.5 Example
The equation f(x) = exp(x) − 2x − 1 = 0 can be rewritten as the fixed point problem x = g(x) with g(x) = ln(2x + 1).

(Figure: graphs of exp(x) − 2x − 1 and ln(2x + 1) on [0.5, 2.5].)
1.6 Fixed Point Iteration
xk+1 = g(xk )
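A generic driver for the iteration xk+1 = g(xk) can be sketched as follows (Python used here for illustration; the function names are my own):

```python
import math

def fixed_point(g, x0, tol=1e-12, maxit=100):
    """Iterate x_{k+1} = g(x_k) until successive iterates differ by less than tol."""
    x = x0
    for _ in range(maxit):
        x_new = g(x)
        if abs(x_new - x) < tol:
            return x_new
        x = x_new
    return x

# the example from Unit 1.5: x = ln(2x + 1)
xi = fixed_point(lambda x: math.log(2*x + 1), 1.0)
```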
1.7 Example
x =
Columns 1 through 10
1.0000 1.0986 1.1623 1.2013 1.2246 1.2381 1.2460 1.2504 1.2530 1.25
Columns 11 through 18
1.10 Example
g(x) = ln(2x + 1)
For x ≥ 1 we have |g′(x)| = 2/(2x + 1) ≤ 2/3, so we set L = 2/3.
1.11 Speed of convergence
Thus
  |xk+1 − xk| / |xk − xk−1| ≤ L.
1.12 Example: Speed of convergence
>> diff(x(2:end))./diff(x(1:end-1))
ans =
Columns 1 through 10
0.6457 0.6134 0.5946 0.5838 0.5776 0.5740 0.5720 0.5709 0.5702 0.56
Columns 11 through 16
1.13 Error bounds
We get
  |xk − ξ| = |g(xk−1) − g(ξ)| ≤ L|xk−1 − ξ| ≤ L(|xk−1 − xk| + |xk − ξ|)
and consequently:
  |xk − ξ| ≤ L/(1 − L) · |xk − xk−1|
  |xk − ξ| ≤ Lᵏ/(1 − L) · |x1 − x0|
With εk := |xk − ξ| the iteration converges linearly if
  lim_{k→∞} εk+1/εk = µ with µ ∈ (0, 1).
• Superlinear convergence if µ = 0.
• Sublinear convergence if µ = 1.
1.16 Rate of Convergence (Cont.)
(Figure: f(x) = exp(x) − x − 2 with its two roots x1 and x2.)
1.18 Roots of a function Cont.
(Figure: three fixed-point reformulations — g(x) = ln(x + 2), g(x) = exp(x) − 2 and g(x) = x(exp(x) − x)/2; the last one has an extra fixed point.)
1.19 Relaxation
A first attempt is
  xk+1 = xk − λf(xk)
with λ ≠ 0.
Let ξ fulfill f(ξ) = 0. We select λ such that the relaxed iteration converges fast to ξ if x0 is near ξ.
1.20 Relaxation (Cont.)
g′(x) = 1 − λf′(x)

Theorem. [1.7]
Let f and f′ be continuous, f(ξ) = 0 and f′(ξ) ≠ 0.
Then
there exist real numbers λ and δ such that the relaxed iteration converges to ξ for all x0 ∈ [ξ − δ, ξ + δ].

Letting λ depend on x, we can make the optimal choice λ(x) = 1/f′(x). This leads to

Definition. [1.6]
Newton’s method for solving f(x) = 0 is defined by
  xk+1 = xk − f(xk)/f′(xk).
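Definition 1.6 in code, as a hedged Python sketch (applied to the example f(x) = exp(x) − 2x − 1 from Unit 1.5):

```python
import math

def newton(f, fprime, x0, tol=1e-12, maxit=50):
    """Newton's method: x_{k+1} = x_k - f(x_k)/f'(x_k)."""
    x = x0
    for _ in range(maxit):
        dx = -f(x) / fprime(x)
        x += dx
        if abs(dx) < tol:
            break
    return x

r = newton(lambda x: math.exp(x) - 2*x - 1,
           lambda x: math.exp(x) - 2, 1.0)
```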
1.22 Newton’s Method (Cont.)
(Figure: Newton’s method — the tangent at x0 gives x1, the tangent at x1 gives x2.)

With εk := |xk − ξ| the method converges with order q if
  lim_{k→∞} εk+1/εkᵠ = µ.
• If q = 2 it converges quadratically
1.24 Newton’s Method Convergence (Cont.)
  |f″(x)| / |f′(y)| ≤ A   ∀ x, y ∈ Iδ

Secant Method
  xk+1 = xk − f(xk) · (xk − xk−1) / (f(xk) − f(xk−1))

and the simplified Newton iteration with frozen derivative
  xk+1 = xk − f(xk)/f′(x0)
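The secant formula above as a Python sketch (illustrative translation of the scheme, not code from the notes):

```python
def secant(f, x0, x1, tol=1e-12, maxit=50):
    """Secant method: f' is replaced by a difference quotient of the last two iterates."""
    f0, f1 = f(x0), f(x1)
    for _ in range(maxit):
        if f1 == f0:                          # guard against division by zero
            break
        x2 = x1 - f1 * (x1 - x0) / (f1 - f0)
        x0, f0 = x1, f1
        x1, f1 = x2, f(x2)
        if abs(x1 - x0) < tol:
            break
    return x1

r = secant(lambda x: x**2 - 2, 1.0, 2.0)
```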
Ö1: Exercise Notes
f (x) = e−x − x = 0
[x, fx, excit, diagn] = fzero(@(x) exp(-x)-x, [0.1, 2])
Ö1.2: Exercise Notes (Cont.)
x = 5.671432904097838e-01
fx = 1.110223024625157e-16
excit = 1
diagn =
intervaliterations: 0
iterations: 7
funcCount: 9
algorithm: ’bisection, interpolation’
message: ’Zero found in the interval [0.1, 2]’
Ö1.3: Exercise Notes (Cont.)
Ö1.5: Exercise Notes (Cont.)
• Avoid maxit and determine during the initialization step the required
number of iterations.
• The code should handle the case when the zero is exactly hit.
Ö1.6: Exercise Notes (Cont.)
We solve the problem now by fixed point iteration and rewrite it as a fixed
point problem:
x = e−x.
The interval [0.1, 1] is mapped into itself and the map is contractive with Lipschitz constant L = e⁻⁰·¹ < 1. The fixed point iteration reads
  xk+1 = e^{−xk}.
Ö1.7: Exercise Notes (Cont.)
In MATLAB
x(1) = 0.1;
for i = 1:30
    x(i+1) = exp(-x(i));
end
Ö1.7: Exercise Notes (Cont.)
Ö1.8: Exercise Notes (Cont.)
Ö1.8: Exercise Notes (Cont.)
Unit 2: Linear Systems, Vector Spaces
• v1 + v2 = v2 + v1
2.1: Vector Spaces (Cont.)
• α∈R⇒α·v ∈V
• (α + β) · v = α · v + β · v
• α · (β · u) = (αβ) · u
• α · (v1 + v2) = α · v1 + α · v2
• 1·v =v
In this course:
• ....
2.3: Example of Vector Spaces
Not a vector space: the set of all polynomials of degree n which have the property p(2) = 5 (it is not closed under addition: (p + q)(2) = 10 ≠ 5).
2.4: Basis, Coordinates
The number of basis vectors determines the dimension of the vector space.
2.5: Norms
• ‖v‖ ≥ 0, and ‖v‖ = 0 ⇔ v = 0
• ‖λv‖ = |λ| ‖v‖
• ‖u + v‖ ≤ ‖u‖ + ‖v‖
2.6: Vector Norms
2.7: Examples
• 1-norm: ‖v‖1 = Σ_{i=1}^{n} |vi|
• ∞-norm: ‖v‖∞ = max_{i=1:n} |vi|
2.8: Unit Circle
(Figure: unit circles of the 1-, 2- and ∞-norms in R².)
2.10: Convergence
Theorem. [-]
If dim V < ∞ and if ‖·‖p and ‖·‖q are norms on V, then there exist constants c, C > 0 such that for all v ∈ V
  c‖v‖q ≤ ‖v‖p ≤ C‖v‖q.
Definition. [2.10]
Let k · k be a given vector norm. The corresponding (subordinate) matrix
norm is defined as
  ‖A‖ = max_{v ∈ Rⁿ, v ≠ 0} ‖Av‖ / ‖v‖
(Figure: the unit circle and its image under A; here ‖A‖ = 2.6180.)
2.13: How to compute matrix norms
∞-norm (Th. 2.7): ‖A‖∞ = max_{i=1:n} Σ_{j=1}^{n} |aij|   (maximal row sum)

2-norm (Th. 2.9): ‖A‖2 = ( max_{i=1:n} λi(AᵀA) )^{1/2},

where λi(AᵀA) is the ith eigenvalue of AᵀA.
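These formulas can be checked numerically; a small NumPy sketch (the test matrix is an arbitrary example of mine):

```python
import numpy as np

A = np.array([[1.0, -2.0],
              [3.0,  4.0]])

norm_inf = np.max(np.abs(A).sum(axis=1))                 # maximal row sum
norm_2   = np.sqrt(np.max(np.linalg.eigvalsh(A.T @ A)))  # sqrt of largest eigenvalue of A^T A
```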
2.14: Condition of a Problem
  ŷ − y = f(x + δx) − f(x) = f′(x)δx + (1/2!) f″(x + θδx)(δx)²
2.15: Condition of a Problem (Cont.)
  (ŷ − y)/y = ( x f′(x)/f(x) ) · (δx/x) + O(δx²)
2.16: Condition of a Problem (Cont.)
In general
  Condx(f) = ‖f′(x)‖   and   condx(f) = ‖x‖ ‖f′(x)‖ / ‖f(x)‖
and we get
2.17: Examples
Example 1: (Summation)
Problem: f : R² → R with (x1, x2)ᵀ ↦ x1 + x2
Jacobian: f′(x1, x2) = (1, 1)
In the 1-norm: Condx1,x2(f) = 1 and
  condx1,x2(f) = (|x1| + |x2|) / |x1 + x2|
Problem if two nearly identical numbers are subtracted.
2.18: Examples (Cont.)
Problem: f : b ↦ A⁻¹b
Jacobian: f′(b) = A⁻¹
  condb(f) = ‖b‖ ‖A⁻¹‖ / ‖A⁻¹b‖ ≤ ‖A‖ ‖x‖ ‖A⁻¹‖ / ‖x‖ = ‖A‖ ‖A⁻¹‖ =: κ(A)
2.19: Examples (Cont.)
Example:
A = [ 10³  0 ; 0  10⁻³ ],   b = [ 1 ; 0 ],   δb = [ 0 ; 10⁻⁵ ]
(see the exercises of the current week and Exercise 2.14 in the book.)
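For this diagonal matrix the amplification is easy to reproduce; a NumPy sketch with the data above:

```python
import numpy as np

A  = np.diag([1e3, 1e-3])
b  = np.array([1.0, 0.0])
db = np.array([0.0, 1e-5])

kappa = np.linalg.cond(A)                  # ||A|| * ||A^{-1}|| = 1e6 in the 2-norm

x  = np.linalg.solve(A, b)
dx = np.linalg.solve(A, b + db) - x

rel_in  = np.linalg.norm(db) / np.linalg.norm(b)   # relative perturbation of b
rel_out = np.linalg.norm(dx) / np.linalg.norm(x)   # relative change of the solution
```

A perturbation of size 10⁻⁵ in b produces a large relative change in x, in line with κ(A) = 10⁶.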
Unit 3: Systems of nonlinear functions
F(x) = [ f1(x1, x2) ; f2(x1, x2) ] = [ x1² + x2² − 1 ; 5x1² + 21x2² − 9 ] = 0

with the four roots

ξ1 = (−√3/2, 1/2)ᵀ   ξ2 = (√3/2, 1/2)ᵀ
ξ3 = (−√3/2, −1/2)ᵀ  ξ4 = (√3/2, −1/2)ᵀ

(Figure: the zero sets of f1 and f2 intersecting in the four roots.)
3.1 Fixed Point Iteration in Rn
g(x) = [ 3/4  1/3 ; 0  3/4 ] x

It follows that
  ‖g(x) − g(y)‖ = ‖A(x − y)‖ ≤ ‖A‖ ‖x − y‖.
Thus L = ‖A‖.
But ‖A‖1 = ‖A‖∞ = 13/12 > 1, while ‖A‖2 = 0.9350 < 1.
We start a fixed point iteration in the last example with x0 = (1, 1)ᵀ and get the following norms:

(Figure: 1-, 2- and ∞-norms of the fixed point iterates over 10 iterations.)
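Reproducing this numerically (NumPy sketch, my own illustration): the 2-norm of A is below one, so the iterates contract even though the 1- and ∞-norms exceed one.

```python
import numpy as np

A = np.array([[3/4, 1/3],
              [0.0, 3/4]])

n1   = np.linalg.norm(A, 1)        # 13/12 > 1
ninf = np.linalg.norm(A, np.inf)   # 13/12 > 1
n2   = np.linalg.norm(A, 2)        # about 0.935 < 1: contraction in the 2-norm

x = np.array([1.0, 1.0])
norms = []
for k in range(50):
    x = A @ x
    norms.append(np.linalg.norm(x))
```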
3.5: The Jacobian
F(x) = [ f1(x1, x2) ; f2(x1, x2) ] = [ x1² + x2² − 1 ; 5x1² + 21x2² − 9 ] = 0

Then

JF(x) = [ 2x1  2x2 ; 10x1  42x2 ]
3.7: Jacobian: Numerical Computation
function [J] = jacobian(func, x, fx)
% computes the Jacobian of a function
n = length(x);
if nargin == 2
    fx = feval(func, x);
end
eps = 1.e-8;          % could be made better
xperturb = x;
for i = 1:n
    xperturb(i) = xperturb(i) + eps;
    J(:,i) = (feval(func, xperturb) - fx)/eps;
    xperturb(i) = x(i);
end
3.8: Newton’s method in Rn
  JF(x(k)) Δx = −F(x(k))
and compute the next iterate by adding the Newton increment Δx:
  x(k+1) = x(k) + Δx
Solving linear systems in MATLAB is done with the \-command — not with the command inv!
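For the example system from Unit 3 the scheme reads as follows (Python sketch; the starting value (1, 1)ᵀ is my own choice):

```python
import numpy as np

def F(x):
    return np.array([x[0]**2 + x[1]**2 - 1,
                     5*x[0]**2 + 21*x[1]**2 - 9])

def JF(x):
    return np.array([[ 2*x[0],  2*x[1]],
                     [10*x[0], 42*x[1]]])

x = np.array([1.0, 1.0])
for k in range(20):
    dx = np.linalg.solve(JF(x), -F(x))   # solve the linear system, never invert
    x = x + dx
    if np.linalg.norm(dx) < 1e-12:
        break
```

Started near ξ2, the iteration converges to (√3/2, 1/2)ᵀ.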
3.10: Homotopy Method
Finding a good starting value x(0) for Newton’s method is a crucial problem.
Assume that F0 is a known function with a known zero x∗, i.e. F0(x∗) = 0, and define the homotopy H(x, s) := sF(x) + (1 − s)F0(x). Note that H(x, 0) = 0 is the problem with the known solution and H(x, 1) = 0 is the original problem F(x) = 0.
3.11: Homotopy Method (Cont.)
3.12: Homotopy Method (Cont.)
We now discretize the interval into 0 = s0 < s1 < · · · < sn = 1 and solve a sequence of nonlinear systems with Newton’s method:
  H(x, si) = 0
3.13: Homotopy Method: Example
We saw previously that Newton’s method for this problem fails to converge
if started with |x0| > 1.34.
3.14: Homotopy Method: Example (Cont.)
x0 = 4;
homotop = @(x,s) atan(x) + (s-1)*atan(4);
homotopderiv = @(x) 1/(1+x^2);
s = linspace(0, 1, 11);
xast = zeros(length(s), 1);
xast(1) = x0;
for i = 2:11
    [xast(i), iter, ier] = newtonh(homotop, homotopderiv, ...
                                   xast(i-1), 15, 1.e-8, s(i));
    if (ier == 1)
        disp('divergence')
        break
    end
end
(Figure: the homotopy path x(s) for s ∈ [0, 1].)
Ö 3.1: Exercise Notes (Homework 2)
For the task to find the equilibrium of the truck we write the following Newton solver in
MATLAB
function [x, conv] = newtonRaph(fun, x0, tol, jakob)
% initialization
x(:,1) = x0; xk = x0; maxit = 100; conv = 0; n = length(x0);
E = eye(n); h = 1.e-10;
for i = 1:maxit                    % newton-loop
    fx = feval(fun, xk);
    if nargin == 3                 % if called with three arguments ...
        for j = 1:n                % numerical Jacobian
            Jac(:,j) = (feval(fun, xk + h*E(:,j)) - fx)/h;
        end
    else
        Jac = feval(jakob, xk);    % analytic Jacobian
    end
    deltax = -Jac\fx; xk = xk + deltax; x(:,i) = xk;
    if norm(deltax) < tol          % convergence test
        conv = 1;
        break
    end
end
Ö 3.2: Exercise Notes (Cont.)
We also write a wrapper for the truck-file. This eliminates the second parameter without the need of changing truck_acc.m:

function a = truckwrap(p)
a = truck_acc(p, zeros(9,1));
Ö 3.3: Exercise Notes (Cont.)
Ö 3.4: Exercise Notes (Cont.)
Computing Newton-fractals:
Here we plot two pictures, one which depicts the fractals according to Task
2 and another where we plot the numbers of iterations for each start value.
The main m-file looks like this:
% suppress warnings concerning nearly singular matrices
warning off all
[X, Y] = meshgrid(-1:.005:1, -2:.01:2);
for i = 1:size(Y,1)
    for j = 1:size(X,2)
        [A(i,j), B(i,j)] = newtonfrac([X(i,j); Y(i,j)]);
    end
end
% Which-Root-Fractal (see task 2)
figure(1); pcolor(A); shading interp
% How-many-Iterations fractal
figure(2); pcolor(B); shading interp
Ö 3.5: Exercise Notes (Cont.)
The Newton method is implemented for this task as follows:

function [root, it] = newtonfrac(x0)
x = x0; r1 = [1; 0]; r2 = 1/2*[-1; sqrt(3)]; r3 = 1/2*[-1; -sqrt(3)];
for i = 1:40
    it = i;
    fx  = [x(1)^3 - 3*x(1)*x(2)^2 - 1; 3*x(1)^2*x(2) - x(2)^3];
    Jac = [3*x(1)^2 - 3*x(2)^2, -6*x(1)*x(2); ...
           6*x(1)*x(2), 3*x(1)^2 - 3*x(2)^2];
    dx = -Jac\fx; x = x + dx;
    if abs(dx) < 1.e-7
        break
    end
end
if norm(x - r1) < 1.e-5
    root = 1;
elseif norm(x - r2) < 1.e-5
    root = 2;
elseif norm(x - r3) < 1.e-5
    root = 3;
else                % no root
    root = -1;
end
Ö 3.6: Exercise Notes (Cont.)
Unit 4: Polynomial Interpolation
  p(xk) = yk,   k = 0 : n
holds.
(Figure: an interpolation polynomial of 6th degree through seven measurements.)
( x0ⁿ  x0ⁿ⁻¹  ⋯  x0  1 ) ( an   )   ( y0 )
( x1ⁿ  x1ⁿ⁻¹  ⋯  x1  1 ) ( an−1 ) = ( y1 )
(  ⋮                   ) (  ⋮   )   (  ⋮ )
( xnⁿ  xnⁿ⁻¹  ⋯  xn  1 ) ( a0   )   ( yn )
or V a = y.
V is called a Vandermonde matrix.
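In NumPy the Vandermonde approach is two lines; a sketch with invented data sampled from a cubic (interpolation then reproduces the cubic exactly):

```python
import numpy as np

xk = np.array([0.0, 1.0, 2.0, 3.0])
yk = xk**3 - 2*xk + 1                 # invented data taken from a cubic

V = np.vander(xk)                     # rows [x^3, x^2, x, 1], as in V a = y
a = np.linalg.solve(V, yk)            # coefficients a_n, ..., a_0

p15 = np.polyval(a, 1.5)
```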
4.3 Vandermonde approach in MATLAB
4.4 Vandermonde approach in MATLAB
4.5 Lagrange Polynomials
4.6 Lagrange Polynomials (Cont.)
(Figure: Lagrange basis polynomials for the measurement nodes on [0, 6].)
4.8 The vector space P n
Lagrange representation: p(x) = Σ_{k=0}^{n} yk Lk(x)
4.9 The vector space P n (Cont.)
It is easy to show that these really are bases (linearly independent elements).
4.10 Inner Product Space
• (v, w) = (w, v)
• (v + w, u) = (v, u) + (w, u)
then V is called an inner product space and (·, ·) an inner product.
4.11 Inner Product Space (Examples)
4.12 Inner Product Space - Orthogonality
Definition. [9.2] Let V be an inner product space and let two elements
p, q ∈ V have the property (p, q) = 0, then they are called orthogonal. One
writes p⊥q or p = q ⊥.
Lagrange polynomials form an orthogonal basis with respect to the inner product (p, q) = Σ_{i=0}^{n} p(xi) q(xi).
4.13 Inner Products and Norms
Examples:
• In Pⁿ: ‖p‖2 = ( ∫_a^b p(x)² dx )^{1/2}
• In Pⁿ: ‖p‖xi = ( Σ_{i=0}^{n} p(xi)² )^{1/2}

  min_{p ∈ Pⁿ} ‖f − p‖
4.15 Interpolating functions - Example
Interpolate f(x) = exp(x) at the nodes x0 = −1, x1 = 0, x2 = 1:

L1(x) = (x − x0)(x − x2) / ((x1 − x0)(x1 − x2)) = 1 − x²

L2(x) = (x − x0)(x − x1) / ((x2 − x0)(x2 − x1)) = x(x + 1)/2

(Figure: exp(x), the interpolation polynomial p(x), and the interpolation error, of size ≈ 0.08.)
4.17 Interpolating functions - Error
  f(x) − p(x) = 1/(n+1)! · f⁽ⁿ⁺¹⁾(ξ) · (x − x0)⋯(x − xn)

  |f(x) − p(x)| ≤ 1/(n+1)! · Mn+1(f) · |(x − x0)⋯(x − xn)|
4.18 Interpolating functions - Error (Cont.)
Consequently
  ‖f − p‖∞ ≤ 1/(n+1)! · Mn+1(f) · max_{x∈[a,b]} |(x − x0)⋯(x − xn)|
If there is the possibility to select the xi freely, one can minimize the
interpolation error for a given n.
(Figure: interpolation of exp(x) and the corresponding interpolation error.)
4.21 Chebyshev Polynomials-Properties
• Tn(x̄k) = 0 for x̄k = cos( (2k−1)π/(2n) ), k = 1, …, n
4.22 Minimality Property of Chebyshev Polynomials
  |P(ξ)| ≥ |an| / 2ⁿ⁻¹.

  f(x) − p(x) = 1/(n+1)! · f⁽ⁿ⁺¹⁾(ξ)(x − x0)⋯(x − xn)

  [a, b] → [−1, 1],   x ↦ 2(x − a)/(b − a) − 1
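The effect of choosing Chebyshev points instead of equidistant nodes is drastic for Runge’s example f(x) = 1/(1 + 25x²) — a standard illustration, not taken from these notes. A NumPy sketch:

```python
import numpy as np

def cheb_nodes(n):
    """Chebyshev points cos((2k-1)pi/(2n)), k = 1..n, on [-1, 1]."""
    k = np.arange(1, n + 1)
    return np.cos((2*k - 1) * np.pi / (2*n))

f = lambda x: 1.0 / (1 + 25*x**2)
xx = np.linspace(-1, 1, 1001)
n = 15

nodes_e = np.linspace(-1, 1, n)                  # equidistant nodes
pe = np.polyfit(nodes_e, f(nodes_e), n - 1)
err_equi = np.max(np.abs(np.polyval(pe, xx) - f(xx)))

nodes_c = cheb_nodes(n)                          # Chebyshev nodes
pc = np.polyfit(nodes_c, f(nodes_c), n - 1)
err_cheb = np.max(np.abs(np.polyval(pc, xx) - f(xx)))
```

With equidistant nodes the error blows up near the interval ends; with Chebyshev nodes it stays small.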
Unit 5: Cubic Splines
5.1: Cubic Splines
5.2: Cubic Splines - Construction
By fixing the 4m free coefficients ai, bi, ci, di, i = 0 : m − 1 the entire spline
is fixed.
5.3: Cubic Splines-Construction
We can define two extra boundary conditions. One has several alternatives:
5.6: Natural Splines Construction (Cont.)
Thus bi = σi/2.
From
  σi+1 = 6 ai h + 2 bi
and Condition (5) we get
  ai = (σi+1 − σi)/(6h).
5.7: Natural Splines Construction (Cont.)
... and after inserting the highlighted expressions for ai and bi we get
  yi+1 = (σi+1 − σi)/(6h) · h³ + σi/2 · h² + ci h + yi.
5.8: Natural Splines Construction (Cont.)
Inserting now the expressions for ai, bi and ci, using Condition (2) and simplifying finally gives the central recursion formula
  σi−1 + 4σi + σi+1 = 6/h² · (yi+1 − 2yi + yi−1),   i = 1 : m − 1,
with the natural boundary conditions σ0 = σm = 0.
5.9: Natural Splines Construction (Cont.)
( 4 1        ) ( σ1   )         ( y2 − 2y1 + y0     )
( 1 4 1      ) ( σ2   )         ( y3 − 2y2 + y1     )
(   ⋱ ⋱ ⋱   ) (  ⋮   )  = 6/h² (        ⋮          )
(      1 4   ) ( σm−1 )         ( ym − 2ym−1 + ym−2 )
First, this system is solved and then the coefficients ai, bi, ci, di are determined by the highlighted equations.
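The construction can be sketched numerically. Assuming an equidistant grid with step h, the tridiagonal system above is solved as follows (Python illustration with invented data):

```python
import numpy as np

def natural_spline_sigma(y, h):
    """Solve the tridiagonal system for sigma_1, ..., sigma_{m-1}; sigma_0 = sigma_m = 0."""
    m = len(y) - 1
    T = 4*np.eye(m - 1) + np.eye(m - 1, k=1) + np.eye(m - 1, k=-1)
    rhs = 6/h**2 * (y[2:] - 2*y[1:-1] + y[:-2])
    sigma = np.zeros(m + 1)                 # natural conditions at both ends
    sigma[1:-1] = np.linalg.solve(T, rhs)
    return sigma

y = np.array([0.0, 1.0, 0.0, -1.0, 0.0])   # invented data on an equidistant grid
sigma = natural_spline_sigma(y, h=1.0)
```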
5.10: Splines in MATLAB
Example:
x = 0:10; y = sin(x);
xx = 0:.25:10;
yy = spline(x, y, xx);
plot(x, y, 'o', xx, yy)
5.11: Splines in MATLAB (Cont.)
Theorem. [-]
Let s∗ ∈ V be a cubic spline satisfying the natural boundary conditions. Then
  ‖s∗″‖2 ≤ ‖f″‖2   ∀f ∈ V.
5.13: Minimality Property of Cubic Splines (Cont.)
Proof: Let f ∈ V; then there is a g ∈ C² with g(xi) = 0 such that f(x) = s∗(x) + g(x).
We then obtain ‖f″‖2² = ‖s∗″ + g″‖2² = ‖s∗″‖2² + 2(s∗″, g″) + ‖g″‖2² with
  (s∗″, g″) := ∫_{x0}^{xm} s∗″(x) g″(x) dx.
We have to show that (s∗″, g″) = 0.
Integration by parts gives
  (s∗″, g″) = [s∗″(x) g′(x)]_{x0}^{xm} − ∫_{x0}^{xm} s∗‴(x) g′(x) dx.
From the natural boundary conditions it follows that
  [s∗″(x) g′(x)]_{x0}^{xm} = 0.
As s∗ is a piecewise cubic polynomial, we get for the last term
  ∫_{x0}^{xm} s∗‴(x) g′(x) dx = Σ_{i=1}^{m} αi ∫_{xi−1}^{xi} g′(x) dx = Σ_{i=1}^{m} αi (g(xi) − g(xi−1)) = 0.
5.14: The Space of Cubic Splines
(x − a)ⁿ₊ = { (x − a)ⁿ  if x ≥ a
           { 0          if x < a
5.16: Cubic B-Splines
5.17: The Space of Cubic Splines (Cont.)
Definition.
The functions Nik, i = 1 : 4 + m + 3 − k, defined recursively as follows are called B-splines:

  Ni1(x) := { 0 if ξi = ξi+1
            { 1 if x ∈ [ξi, ξi+1)
            { 0 else

and
  Nik(x) := (x − ξi)/(ξi+k−1 − ξi) · Ni,k−1(x) + (ξi+k − x)/(ξi+k − ξi+1) · Ni+1,k−1(x),

where we use the convention 0/0 = 0 if nodes coincide.
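The recursion translates directly into code. A plain Python sketch (0-based indices and function names are my own):

```python
def N(i, k, x, xi):
    """Cox-de Boor recursion for the B-spline N_{i,k} on the knot vector xi."""
    if k == 1:
        return 1.0 if (xi[i] != xi[i+1] and xi[i] <= x < xi[i+1]) else 0.0
    def frac(num, den):
        return num / den if den != 0 else 0.0   # convention 0/0 = 0
    return (frac(x - xi[i],   xi[i+k-1] - xi[i])   * N(i,   k-1, x, xi)
          + frac(xi[i+k] - x, xi[i+k]   - xi[i+1]) * N(i+1, k-1, x, xi))

knots = [0, 0, 0, 0, 1, 2, 3, 4, 5, 5, 5, 5]    # the grid from Section 5.20
total = sum(N(i, 4, 2.5, knots) for i in range(len(knots) - 4))
```

The sum of all cubic B-splines at an interior point is 1, the partition-of-unity property stated in Section 5.21.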
5.18: B-Splines Cont.
5.19: B-Splines Basis
5.20: B-Splines Basis (Cont.)
(Figure: the cubic B-spline basis on the grid 0, 0, 0, 0, 1, 2, 3, 4, 5, 5, 5, 5.)
5.21: B-Splines Basis Representation
  s = Σ_{i=1}^{m+3} di Ni4

and in particular

  1 = Σ_{i=1}^{m+3} Ni4.

The coefficients di are called de Boor points.
5.22: B-Splines Properties
1. Ni4(x) ≠ 0 only for x ∈ [ξi, ξi+4]: local support (sv: lokalt stöd)
2. Ni4(x) ≥ 0: non-negative

(Figure: a cubic spline together with its de Boor points di.)
Ö 5.1 : B-Splines (Version 1)
Ö 5.2 : B-Splines (Version 2)
Unit 6: L2 Space
  ‖f‖2 = ( ∫_a^b f(x)² dx )^{1/2}      (f, g)2 = ∫_a^b f(x) g(x) dx
6.1: Best Approximation
(Figure: best L²-approximation of a function on [0, 1] by a constant.)

Alternative problem: find min_{p ∈ P⁰} ‖f − p‖∞, the best max-norm approximation, with ‖f‖∞ = max_{x∈[a,b]} |f(x)|.
6.3: Best Approximation (Cont.)
Reformulation
Minimize
  E(c0, c1, …, cn) = ∫₀¹ (f(x) − pn(x))² dx
    = ∫₀¹ f(x)² dx − 2 Σ_{j=0}^{n} cj ∫₀¹ f(x) xʲ dx + Σ_{j=0}^{n} Σ_{k=0}^{n} cj ck ∫₀¹ x^{k+j} dx
6.4: Best Approximation (Cont.)
( (xⁿ, xⁿ)    (xⁿ, xⁿ⁻¹)    ⋯  (xⁿ, 1)   ) ( cn   )   ( (f, xⁿ)   )
( (xⁿ⁻¹, xⁿ)  (xⁿ⁻¹, xⁿ⁻¹)  ⋯  (xⁿ⁻¹, 1) ) ( cn−1 ) = ( (f, xⁿ⁻¹) )
(     ⋮                                  ) (  ⋮   )   (     ⋮     )
( (1, xⁿ)     (1, xⁿ⁻¹)     ⋯  (1, 1)    ) ( c0   )   ( (f, 1)    )
with the coefficient matrix =: M
6.5: Hilbert Matrices
Hilbert matrices are extremely ill conditioned. Therefore this way to compute the best polynomial pn is very sensitive to perturbations.

(Figure: condition number of the Hilbert matrix versus matrix dimension; it grows exponentially.)
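The entries of M for the monomial basis on [0, 1] are (xⁱ, xʲ) = 1/(i + j + 1), i.e. a Hilbert matrix; its conditioning can be checked directly:

```python
import numpy as np

def hilbert(n):
    """Hilbert matrix: entries 1/(i + j + 1) = (x^i, x^j) on [0, 1]."""
    i, j = np.indices((n, n))
    return 1.0 / (i + j + 1)

# condition numbers for a few dimensions -- they explode quickly
conds = [np.linalg.cond(hilbert(n)) for n in (4, 8, 12)]
```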
6.6: Orthogonal Polynomials
If pn is represented as
  pn(x) = Σ_{i=0}^{n} γi ϕi(x),
where the ϕi form a basis of Pⁿ and the γi are real coefficients, then the system becomes

( (ϕ0, ϕ0)  (ϕ0, ϕ1)  ⋯  (ϕ0, ϕn) ) ( γ0 )   ( (f, ϕ0) )
( (ϕ1, ϕ0)  (ϕ1, ϕ1)  ⋯  (ϕ1, ϕn) ) ( γ1 ) = ( (f, ϕ1) )
(    ⋮                            ) (  ⋮ )   (    ⋮    )
( (ϕn, ϕ0)  (ϕn, ϕ1)  ⋯  (ϕn, ϕn) ) ( γn )   ( (f, ϕn) )
with the coefficient matrix =: M
6.7: Orthogonal Polynomials (Cont.)
6.8: Orthogonal Polynomials (Cont.)
holds.
Note, ”exact degree” j means ϕj(x) = aj xʲ + aj−1 xʲ⁻¹ + ⋯ with aj ≠ 0.
6.9: Gram–Schmidt Orthogonalisation
Gram–Schmidt Orthogonalisation
Let ϕ0 = 1.
Ansatz: ϕ1 = x − α0 ϕ0.
Orthogonality (ϕ0, ϕ1) = 0 gives α0 = (ϕ0, x)/(ϕ0, ϕ0).
In general: ϕj = xʲ − Σ_{i=0}^{j−1} αi ϕi with αi = (ϕi, xʲ)/(ϕi, ϕi).
6.10: Orthogonal Polynomials - Examples
Example 1:
Let (f, g) = ∫₀¹ f(x)g(x) dx. Then (p. 261 f)
  ϕ0(x) = 1
  ϕ1(x) = x − 1/2
  ϕ2(x) = x² − x + 1/6
  ϕ3(x) = x³ − 3/2·x² + 3/5·x − 1/20
  ⋮
6.11: Orthogonal Polynomials - Examples
Example 2:
Let (f, g) = ∫₋₁¹ f(x)g(x) dx. Then (p. 263 f)
  ϕ0(x) = 1
  ϕ1(x) = x
  ϕ2(x) = x² − 1/3
  ϕ3(x) = x³ − 3/5·x
  ⋮

Example 3:
Let
  (f, g) = ∫₋₁¹ 1/√(1 − x²) · f(x)g(x) dx.
6.12: Orthogonal Polynomials - Properties
  βk+1 := (x ϕk, ϕk)w / (ϕk, ϕk)w      γk+1 := (ϕk, ϕk)w / (ϕk−1, ϕk−1)w
We consider now the finite dimensional case and ask for the best
approximation in Rm:
Let V ⊂ Rᵐ be an n-dimensional subspace.
Consider the problem:
Let b ∈ Rᵐ. Find a vector v∗ ∈ V such that
  ‖b − v∗‖2 ≤ ‖b − v‖2   ∀v ∈ V.
6.15 Finite Dimensional Case (Cont.)
6.16 Finite Dimensional case (Cont.)
If we form a matrix A which has the basis vectors vi as columns, then M = AᵀA and we obtain the so-called normal equations
  AᵀA x = Aᵀ b
and
  v∗ = A(AᵀA)⁻¹Aᵀ b =: P b.
Note that P is a projection matrix describing the orthogonal projection of Rᵐ onto V.
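A small NumPy illustration (the data are invented): the solution of the normal equations leaves a residual orthogonal to the columns of A and agrees with `numpy.linalg.lstsq`.

```python
import numpy as np

A = np.array([[1.0, 0.0],
              [1.0, 1.0],
              [1.0, 2.0]])            # basis vectors as columns
b = np.array([1.0, 2.0, 2.0])

x = np.linalg.solve(A.T @ A, A.T @ b)   # normal equations A^T A x = A^T b
v = A @ x                               # orthogonal projection of b onto R(A)
residual = b - v
```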
6.17 Finite Dimensional case: Least Squares Method
Ax = b
This system has a solution only if b ∈ R(A), where R(A) denotes the range space of A (sv: bildrum, kolonnrum).
6.18 Least Squares Method
  min_{x ∈ Rⁿ} ‖Ax − b‖
6.19 Least Squares Method
Ö6 : Exercise Notes (Homework 5)
We give a symbolic code for orthogonal polynomials and demonstrate the best approximation by monomials, Legendre and Chebyshev polynomials.
Ö6.1 : Gram-Schmidt/ Symbolic Toolbox
function poly = ortpoly(x, n, omega, a, b)
% computes orthogonal polynomials by applying
% Gram-Schmidt orthogonalisation
% symbolic variables x, omega
for i = 0:n
    sum = 0;
    for j = 0:i-1
        alpha = int(omega*poly(j+1)*x^i, a, b)/...
                int(omega*poly(j+1)*poly(j+1), a, b);
        sum = sum + alpha*poly(j+1);
    end
    poly(i+1) = x^i - sum;
end
Ö6.2 : Result
>> syms x
>> omega = 1/sqrt(1-x^2);
>> cheby = ortpoly(x, 5, omega, -1, 1)

cheby =
[ 1, x, x^2-1/2, x^3-3/4*x, x^4+1/8-x^2, x^5+5/16*x-5/4*x^3]
Ö6.3 : Best approximation monomial basis
Ö6.4 : Best approximation monomial basis (2)
and plotting:
xplot = linspace(-1, 1, 500);
subplot(2,1,1), plot(xplot, yplot)
subplot(2,1,2), plot(xplot, atan(xplot) - yplot)
Ö6.5 : Best approximation Legendre basis (1)
function y = legendre(x, n)
switch n
    case 0
        y = ones(size(x));
    case 1
        y = x;
    case 2
        y = 1/2*(3*x.^2 - 1);
    case 3
        y = 1/2*(5*x.^3 - 3*x);
    case 4
        y = 1/8*(35*x.^4 - 30*x.^2 + 3);
    case 5
        y = 1/8*(63*x.^5 - 70*x.^3 + 15*x);
end
Ö6.6 : Best approximation Legendre basis (2)
function coeff = bestLegendre
for i = 0:5
    m(i+1) = quad(@(x) legendre(x, 5-i).^2, -1, 1, 1.e-9);
    b(i+1) = quad(@(x) legendre(x, 5-i).*atan(x), -1, 1, 1.e-9);
    coeff(i+1) = b(i+1)/m(i+1);
end
Ö6.7 : Best approximation Legendre basis (3)
xplot = linspace(-1, 1, 500);
sum = zeros(size(xplot));
for i = 0:5
    sum = sum + coeff(i+1)*legendre(xplot, 5-i);
end
plot(xplot, sum)
Ö6.8 : Legendre Best Approximation (with symbolic
toolbox)
omega=x^0;
legendre=ortpoly(x,5,omega,-1,1);
bestpoly=0;
for ii=0:5
i=ii+1;
coeff(i)=int(omega*atan(x)*legendre(i),-1,1)/...
int(omega*legendre(i)*legendre(i),-1,1);
bestpoly=bestpoly+coeff(i)*legendre(i);
end;
cpoly=sym2poly(bestpoly);
xplot=linspace(-1,1,200);
yplot=polyval(cpoly,xplot);
subplot(1,2,1), plot(xplot,yplot,'b')
subplot(1,2,2), plot(xplot,atan(xplot)-yplot)
Ö6.9 : Legendre Best Approximation
(Figure: the best Legendre approximation of atan on [−1, 1] and its error, of magnitude about 10⁻³.)
Ö6.10 : Chebyshev Best Approximation
(Figure: the best Chebyshev approximation of atan on [−1, 1] and its error, also of magnitude about 10⁻³.)
Unit 7: Numerical Integration, Quadrature
Problem
We will first discuss methods which operate on [a, b] and then so-called composite methods
which act on small subintervals.
7.1: Newton–Cotes Formulae
where k, the number of interpolation points, uniquely defines the weights wk and so the
method.
We know from the interpolation chapter that pn can be expressed by Lagrange polynomials
  pn(x) = Σ_{k=0}^{n} f(xk) Lk(x)
7.3: Newton–Cotes Formulae (Cont.)
Thus
  wk = ∫_a^b Lk(x) dx.

Examples:
n = 1: a = x0 < x1 = b
gives
  p1(x) = f(a)·(x − b)/(a − b) + f(b)·(x − a)/(b − a)
        = 1/(b − a) · [(b − x) f(a) + (x − a) f(b)]
7.4: Trapezoidal Rule
Integration ∫_a^b p1(x) dx gives the

Trapezoidal Rule
  ∫_a^b f(x) dx ≈ (b − a) ( 1/2·f(a) + 1/2·f(b) )
7.5: Simpson’s Rule
Simpson’s rule
  ∫_a^b f(x) dx ≈ (b − a) ( 1/6·f(a) + 4/6·f((a + b)/2) + 1/6·f(b) )
7.6: Basic Requirements
7.7: Basic Requirements (Cont.)
7.8: Error estimates
7.9: Error estimates (Cont.)
Trapezoidal Rule: |E1(f)| ≤ (b − a)³/12 · M2
Simpson’s rule:   |E2(f)| ≤ (b − a)⁴/196 · M3
For deriving composite formulas we modify our notation (which deviates a bit from the
book) and introduce some terms, which we will meet in the Runge–Kutta chapter again:
We consider the integral
  I₀¹(f) = ∫₀¹ f(x) dx
and write the Newton–Cotes formulas as
  Î₀¹(f) = Σ_{i=1}^{s} bi f(ci).
We call s the stage number, bi the weights and ci the nodes of the method.
7.11: Composite Formulas
To decrease the error, we partition the interval [a,b] into m subintervals a = x0 < · · · <
xm and split the integral into a sum of integrals:
  ∫_a^b f(x) dx = Σ_{i=0}^{m−1} ∫_{xi}^{xi+1} f(x) dx = h Σ_{i=0}^{m−1} ∫₀¹ f(xi + hξ) dξ
7.12: Composite Formulas
7.13: Example: Composite Trapezoidal Rule
  ∫_a^b f(x) dx ≈ h ( 1/2·f(x0) + f(x1) + ⋯ + f(xm−1) + 1/2·f(xm) )

with h = (b − a)/m.
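The h² behaviour of the composite trapezoidal rule (see the error estimates) can be observed by halving h. A Python sketch with ∫₀¹ eˣ dx as my test integral:

```python
import math

def comp_trapezoid(f, a, b, m):
    """Composite trapezoidal rule with m subintervals, h = (b - a)/m."""
    h = (b - a) / m
    return h * (0.5*f(a) + 0.5*f(b) + sum(f(a + i*h) for i in range(1, m)))

exact = math.e - 1                     # integral of exp over [0, 1]
e10 = abs(comp_trapezoid(math.exp, 0.0, 1.0, 10) - exact)
e20 = abs(comp_trapezoid(math.exp, 0.0, 1.0, 20) - exact)
ratio = e10 / e20                      # close to 4 = 2^2 for an order-2 method
```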
7.14: Example: Composite Simpson Rule
  ∫_a^b f(x) dx ≈ h/6 · ( f(x0) + 4f(x0 + h/2) + 2f(x1) + 4f(x1 + h/2) + 2f(x2) + ⋯
                          ⋯ + 2f(xm−1) + 4f(xm−1 + h/2) + f(xm) )
Note, the book expresses the composite Simpson rule in a slightly different way. There, h
is twice as large as here.
7.15: Error of Composite Rules
The global error of composite rules is just the sum of the (local) errors in each subinterval:
  Composite Simpson’s rule: |E2(f)| ≤ … ≤ (b − a) · h⁴/2880 · M4

Definition.
If the global error behaves like O(h^q), then q is called the order of the method.
For higher order methods larger step sizes h give the same accuracy as small step sizes for
lower order methods.
7.17 Order plots
In a loglog plot the order of a method can be visualized by comparing the approximate
solution to the exact solution for different step sizes (see also Homework 5):
7.18 Construction of a method with optimal order
The cost of the method is related to the number of f evaluations and this is related to
the number of stages s.
  ∫₀¹ M(x) p(x) dx = 0   ∀p ∈ Pm−1[0, 1],
7.20 Example: 3 stage method of order 4
Consider m = 1, s = 3:
  0 = ∫₀¹ (t − c1)(t − c2)(t − c3) · 1 dt
    = 1/4 − (c1 + c2 + c3)/3 + (c1c2 + c1c3 + c2c3)/2 − c1c2c3

⇒
  c3 = ( 1/4 − (c1 + c2)/3 + c1c2/2 ) / ( 1/3 − (c1 + c2)/2 + c1c2 )
Theorem.
A method with s stages has maximal order 2s.
Theorem.
There is a method of order 2s. It is uniquely defined by taking cj as the roots of the sth
Legendre polynomial Ps(2t − 1), t ∈ [0, 1].
(Proof: given in the lecture; a consequence of the orthogonality of the Legendre polynomials.)
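For s = 2 the nodes are the roots of P2(2t − 1) = 6t² − 6t + 1, i.e. 1/2 ∓ √3/6, with weights 1/2 each. The order-4 property can be checked directly (Python sketch):

```python
import math

c1 = 0.5 - math.sqrt(3)/6     # roots of the shifted Legendre polynomial 6t^2 - 6t + 1
c2 = 0.5 + math.sqrt(3)/6
b1 = b2 = 0.5

def gauss2(f):
    return b1*f(c1) + b2*f(c2)

err3 = abs(gauss2(lambda t: t**3) - 1/4)   # exact: order 2s = 4 integrates degree 3
err4 = abs(gauss2(lambda t: t**4) - 1/5)   # no longer exact
```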
7.23 Gauß–Legendre Methods (Examples)
Unit 8 : Finite Element Method in 1D
This problem is called a Sturm-Liouville problem. The boundary conditions are called
Dirichlet boundary conditions or sometimes essential boundary conditions, in mechanics: kinematic boundary
conditions.
8.1 : Sturm-Liouville problems
The problem (BVP), (BC) will now be reformulated as a variational equation (VE) and a variational problem (VP).
8.2 : Sobolev spaces
Note, H 1(a, b) and H01(a, b) are linear spaces. HE1 (a, b) is an affine linear space,
i.e. v, u ∈ HE1 (a, b) ⇒ v − u ∈ H01(a, b).
8.3 : Sobolev norm
• inner product (u, v)H¹ = ∫_a^b u(x)v(x) + u′(x)v′(x) dx and the
• corresponding norm ‖u‖H¹ = (u, u)H¹^{1/2}.
8.4 : Variational Problem, Variational Equation, BVP
8.5 : Bilinear Form and L2 inner product
This and the L2-inner product allow a compact notation for the variational functional J:
  J(w) = 1/2 · A(w, w) − (f, w)2
8.6 : Characterization of a Solution, Variational Equation
The following theorem gives a necessary and sufficient condition for a solution of (VP):

Theorem. [14.1]
u ∈ H^1_E(a, b) minimizes J over H^1_E(a, b) if and only if

A(u, v) = (f, v)_2   ∀v ∈ H^1_0(a, b)   (VE)

(variational equation).

Theorem. [14.2]
If u ∈ H^1_E(a, b) ∩ C^2(a, b) solves (BVP), then it is also a solution of the variational equation (VE).

As (VE) might have solutions with less smoothness (i.e. not in C^2), one defines:

Definition. [14.3]
A solution u ∈ H^1_E(a, b) of the variational equation (VE) is called a weak solution of the BVP.
8.8 : Uniqueness
Theorem. [14.3] The boundary value problem (BVP) has at most one weak solution in H^1_E(a, b).

Summary:

(VP) ⟺ (VE) ⟸ (BVP),

where the implication (VE) ⟹ (BVP) holds only under additional smoothness of the weak solution.
We consider a finite dimensional affine linear subspace S^h_E ⊂ H^1_E(a, b),

S^h_E = { w^h ∈ H^1_E(a, b) : w^h = ψ(x) + Σ_{i=1}^n w_i φ_i(x) }

where ψ is any function fulfilling the boundary conditions and the φ_i are linearly independent functions fulfilling the homogeneous boundary conditions.

Find u^h ∈ S^h_E such that

J(u^h) = min_{w^h ∈ S^h_E} J(w^h)   (VP^h)
8.10: Discretization - Galerkin method (Cont.)
8.11: Discretization - Galerkin method (Cont.)
8.12: Discretization - Stiffness Matrix
8.13: Discretization - Linear Finite Element
A particular choice of discrete spaces is given by the linear finite element approach (FEM):
8.14: Linear Finite Element (Cont.)
And the function ψ, which fulfils the boundary conditions, can be given as:

In total:

u^h(x) = ψ(x) + Σ_{i=1}^{n-1} u_i φ_i(x)

The choice of linear splines with their support [x_{i-1}, x_{i+1}] has as a consequence that the stiffness matrix is tridiagonal.
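A small sketch (my own illustration, not from the notes) of the hat functions φ_i on a uniform grid: φ_i and φ_j overlap only for |i - j| ≤ 1, so every entry A(φ_i, φ_j) with |i - j| > 1 vanishes, which is exactly what makes the stiffness matrix tridiagonal.

```python
def hat(i, x, grid):
    # piecewise linear basis function phi_i with support [x_{i-1}, x_{i+1}]
    xm, xi, xp = grid[i - 1], grid[i], grid[i + 1]
    if xm <= x <= xi:
        return (x - xm) / (xi - xm)
    if xi < x <= xp:
        return (xp - x) / (xp - xi)
    return 0.0

n = 5
grid = [j / n for j in range(n + 1)]     # uniform grid on [0, 1]

# phi_1 and phi_3 have disjoint supports, so their product vanishes everywhere
overlap = max(hat(1, x / 100, grid) * hat(3, x / 100, grid) for x in range(101))
```

Evaluating `overlap` confirms that the product φ_1·φ_3 is identically zero on the whole interval.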
8.16: Energy Norm Error Estimate
Under the assumptions on p and r we can define a norm and an inner product by A:
8.17: Energy Norm Error Estimate (Cont.)
We apply the theorem above now to estimate the finite element error and to get the order
of the method:
Let u(xi) denote the (unknown) exact solution at the grid points.
Then
I^h(u)(x) = ψ(x) + Σ_{i=1}^{n-1} u(x_i) φ_i(x)

is a linear spline function in S^h_E which interpolates the exact solution. It can be inserted as "w^h" into the second statement of Cea's theorem:

||u - u^h||_A ≤ ||u - I^h(u)||_A
8.17: Energy Norm Error Estimate (Cont.)
The linear spline interpolation error in the energy norm can be estimated (Corollary 14.1):
||u - I^h(u)||_A^2 ≤ C (h/π)^2 ||u''||_2^2

||u - u^h||_A ≤ √C (h/π) ||u''||_2

Thus, the error is linear in h.
Ö7 : Eccentric Bending of a rod
function [xint,u]=galbas(n)
% Galerkin discretization of E*I*w'' + F*w = (a/b)*F*x, w(0)=w(b)=0
E=208*10^9;I=0.036;a=0.06;b=3;F=180000;  % material data and load
xint=linspace(0,b,n);                    % grid points
h=xint(2)-xint(1);                       % uniform step size
% tridiagonal matrix for the n-2 interior coefficients
M=diag((-2*E*I/h+h*F)*ones(n-2,1))+ ...
  diag(E*I/h*ones(n-3,1),1)+ ...
  diag(E*I/h*ones(n-3,1),-1);
hf=zeros(n-2,1);                         % right hand side
for k=1:n-2
  hf(k)=a/b*h*F*xint(k+1);
end
u=M\hf;                                  % solve for the nodal values
Ö7.1 : Eccentric Bending of a rod
This is what you get when you call the program with n = 20:

[Figure: displacement u(x) (scale 10^-6) over 0 ≤ x ≤ 3]
Unit 9: Initial Value Problems
In rigid body mechanics this problem occurs in the form of equations of motion, where n describes the number of degrees of freedom.
In mechanics, due to the Newton-Euler laws, initial value problems occur in implicit second order form:

ÿ(t) = M(y(t))^{-1} f(t, y(t), ẏ(t))
9.2: Second order Systems (Cont.)
ÿ(t) = M(y(t))^{-1} f(t, y(t), ẏ(t))

ẏ_2(t) = M(y_1(t))^{-1} f(t, y_1(t), y_2(t))   (the time derivative of the velocity is the acceleration)
9.3: Second order Systems (Cont.)
ẏ_2(t) = M(y_1(t))^{-1} f(t, y_1(t), y_2(t))   (the time derivative of the velocity is the acceleration)
9.4: Linear Differential Equation Systems
d/dt (α(t), α̇(t))^T = [ 0  1 ; -g/l  0 ] (α(t), α̇(t))^T,   i.e.   ẏ(t) = A y(t)
9.5: Linear Differential Equation Systems - Eigenvalues
[Figure: solution components of a stable system (bounded) and an unstable system (growing)]

We want a numerical method to reflect the stability properties of the original (linear) problem.
9.7: Directional field
Scalar differential equations can be illustrated by their directional field, a plot which assigns a slope to every point.

[Figure: directional field]
9.8: Euler’s method
We denote
• exact solution at ti by yi
• approximate solution at ti by ui
9.9: Euler’s method (Cont.)
(u_{i+1} - u_i)/h_i = f(t_i, u_i)   ⇒   u_{i+1} = u_i + h_i f(t_i, u_i)   (explicit Euler)

(u_{i+1} - u_i)/h_i = f(t_{i+1}, u_{i+1})   ⇒   u_{i+1} = u_i + h_i f(t_{i+1}, u_{i+1})   (implicit Euler)

(The method is called implicit because the unknown u_{i+1} occurs on both sides.)
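Both schemes can be sketched on the linear test problem ẏ = λy, where the implicit equation is solvable in closed form (an illustration of mine, not code from the notes):

```python
import math

lam, h = -2.0, 0.1

def explicit_euler_step(u):
    # u_{i+1} = u_i + h f(t_i, u_i) with f(t, y) = lam * y
    return u + h * lam * u

def implicit_euler_step(u):
    # u_{i+1} = u_i + h*lam*u_{i+1}  =>  u_{i+1} = u_i / (1 - h*lam)
    return u / (1 - h * lam)

u_exp = u_imp = 1.0
for _ in range(10):                 # integrate from t = 0 to t = 1
    u_exp = explicit_euler_step(u_exp)
    u_imp = implicit_euler_step(u_imp)
# both values approximate y(1) = exp(-2)
```

For a general nonlinear f the implicit step cannot be solved in closed form; that case is treated in section 9.17.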
9.10: Euler’s explicit method
[Figure: explicit Euler solutions for step sizes h = 3 and h = 1.5]
9.11: Euler’s implicit method
[Figure: implicit Euler solutions for step sizes h = 3 and h = 1.5]
9.12: Stability behavior of Euler’s method
ẏ(t) = λ y(t)

The equation is stable if Re(λ) ≤ 0. For Re(λ) < 0 the solution decays exponentially (lim_{t→∞} y(t) = 0).
9.13: Stability behavior of Euler’s method (Cont.)
u_{i+1} = u_i + hλ u_i,   i.e.   u_{i+1} = (1 + hλ)^{i+1} u_0

The numerical solution is decaying (stable) if |1 + hλ| ≤ 1.

[Figure: stability region of explicit Euler, the disk with radius 1 around -1 in the hλ-plane]
9.14: Stability behavior of Euler’s method (Cont.)
u_{i+1} = u_i + hλ u_{i+1}

This gives u_{i+1} = (1/(1 - hλ))^{i+1} u_0.

The numerical solution is decaying (stable) if |1 - hλ| ≥ 1.

[Figure: stability region of implicit Euler, the exterior of the disk with radius 1 around +1 in the hλ-plane]
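The two stability conditions can be checked directly. A tiny sketch (my own) confirming, for λ = -5, that explicit Euler only decays for h ≤ 2/5 while implicit Euler decays for every h > 0:

```python
def explicit_euler_decays(h, lam):
    # explicit Euler is stable iff |1 + h*lam| <= 1
    return abs(1 + h * lam) <= 1

def implicit_euler_decays(h, lam):
    # implicit Euler is stable iff |1 - h*lam| >= 1
    return abs(1 - h * lam) >= 1

lam = -5.0
ok_small = explicit_euler_decays(0.40, lam)   # h = 2/5: still stable
ok_large = explicit_euler_decays(0.41, lam)   # h > 2/5: unstable
imp_any = all(implicit_euler_decays(h / 10, lam) for h in range(1, 100))
```

The value h = 0.41 is exactly the step size used in the plot of the following slide.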
9.15: Stability behavior of Euler’s method (Cont.)
[Figure: explicit Euler with λ = -5, h = 0.41: the numerical solution oscillates and grows]
9.16: Stability behavior of Euler’s method (Cont.)
Conclusion:
For stable ODEs with a fast decaying solution (Re(λ) ≪ -1) or highly oscillatory modes (Im(λ) ≫ 1), the explicit Euler method demands small step sizes.
This makes the method inefficient for these so-called stiff systems.
9.17: Implementation of implicit methods
Two ways to solve for u_{i+1} (k is the iteration counter, i the integration step counter):

• Fixed point iteration: u_{i+1}^{(k+1)} = u_i + h_i f(t_{i+1}, u_{i+1}^{(k)}) =: φ(u_{i+1}^{(k)})

• Newton iteration:
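A sketch of the fixed point variant for a single implicit Euler step (my own illustration; the function name is an assumption). For convergence, φ must be a contraction, which roughly requires h·L < 1 with L the Lipschitz constant of f:

```python
def implicit_euler_step_fp(f, t_next, u_i, h, k_max=100, tol=1e-12):
    # fixed point iteration u^{(k+1)} = u_i + h f(t_{i+1}, u^{(k)})
    u = u_i + h * f(t_next - h, u_i)      # predictor: one explicit Euler step
    for _ in range(k_max):
        u_new = u_i + h * f(t_next, u)
        if abs(u_new - u) < tol:
            return u_new
        u = u_new
    raise RuntimeError("fixed point iteration did not converge")

# y' = -y: one step of size h = 0.1 from u_0 = 1;
# the implicit equation has the exact solution 1 / (1 + 0.1)
u1 = implicit_euler_step_fp(lambda t, y: -y, 0.1, 1.0, 0.1)
```

Here h·L = 0.1 < 1, so the iteration converges quickly; for stiff problems this condition forces tiny steps, which is why Newton iteration is preferred there.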
9.18: Implementation of implicit methods (Cont.)
When should fixed point iteration and when Newton iteration be used?
9.19: Implementation of implicit methods (Cont.)
9.20: Multistep Methods
Multistep methods are methods which require starting values from several previous steps.
9.21: Adams Methods
9.22: Adams Methods (Cont.)
Thus

y_{n+1} = y(t_{n+1}) = y_n + ∫_{t_n}^{t_{n+1}} f(τ, y(τ)) dτ

Let u_{n+1-i}, i = 1, …, k be previously computed values and π_k^p the unique polynomial which interpolates

f(t_{n+1-i}, u_{n+1-i}),   i = 1, …, k.

Then we define a method by

u_{n+1} = u_n + ∫_{t_n}^{t_{n+1}} π_k^p(τ) dτ
9.23: Adams Methods (Cont.)
Integration gives:

u_{n+1} = u_n + h ( (3/2) f(t_n, u_n) - (1/2) f(t_{n-1}, u_{n-1}) )

This is the Adams–Bashforth-2 method.
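As a sketch (my own; the missing starting value u_1 is generated with one explicit Euler step, a common choice), Adams–Bashforth-2 applied to y' = -y:

```python
import math

def ab2(f, t0, y0, h, n):
    # Adams-Bashforth-2: u_{n+1} = u_n + h(3/2 f_n - 1/2 f_{n-1})
    t, u = [t0, t0 + h], [y0, y0 + h * f(t0, y0)]   # u_1 from explicit Euler
    for i in range(1, n):
        u.append(u[i] + h * (1.5 * f(t[i], u[i]) - 0.5 * f(t[i-1], u[i-1])))
        t.append(t[i] + h)
    return t, u

t, u = ab2(lambda t, y: -y, 0.0, 1.0, 0.01, 100)
# u[-1] approximates y(1) = exp(-1) with an O(h^2) error
```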
9.24: Adams Methods (Cont.)
u_{n+1} = u_n + h Σ_{i=1}^k β_{k-i} f(t_{n+1-i}, u_{n+1-i})

Here the unknown value f(t_{n+1}, u_{n+1}) is taken as an additional interpolation point.
9.25: Adams Methods (Cont.)
Examples (Adams–Moulton):

u_{n+1} = u_n + h Σ_{i=0}^k β̄_{k-i} f(t_{n+1-i}, u_{n+1-i})
9.26: Starting a multistep method
9.27: Adams Method - Stability
[Figure: stability regions of the Adams methods (AB-1…3, AM-0…3) and the BDF methods (BDF-1…6)]
9.28: BDF methods
BDF methods (BDF = backward differentiation formula) are constructed directly from the differential equation. We seek a polynomial which satisfies the ODE in one point, t_{n+1}, and interpolates k previous values:

π_{k+1}(t_{n+1-i}) = u_{n+1-i},   i = 0, …, k
π̇_{k+1}(t_{n+1}) = f(t_{n+1}, u_{n+1}).
9.29: BDF methods (Cont.)
Working out these conditions and using again the Lagrange polynomial approach gives formulas of the following type:

Σ_{i=0}^k α_{k-i} u_{n+1-i} = h f(t_{n+1}, u_{n+1})

Examples:

k = 3:   u_{n+1} = (18/11) u_n - (9/11) u_{n-1} + (2/11) u_{n-2} + (6/11) h f(t_{n+1}, u_{n+1})
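For k = 2 the corresponding formula is u_{n+1} = (4/3) u_n - (1/3) u_{n-1} + (2/3) h f(t_{n+1}, u_{n+1}) (the standard BDF-2 coefficients; this example is my own sketch, not from the notes). Applied to y' = λy, the implicit equation can be solved in closed form, with u_1 taken from one implicit Euler step:

```python
import math

def bdf2_linear(lam, y0, h, n):
    # BDF-2 for y' = lam*y; the linear implicit equation is solved exactly
    u = [y0, y0 / (1 - h * lam)]          # u_1 from one implicit Euler step
    for i in range(1, n):
        rhs = (4/3) * u[i] - (1/3) * u[i-1]
        u.append(rhs / (1 - (2/3) * h * lam))
    return u

u = bdf2_linear(-1.0, 1.0, 0.01, 100)
# u[-1] approximates y(1) = exp(-1)
```

This is also the scheme the course project asks for, there combined with a predictor and a nonlinear corrector.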
9.30: BDF - Stability

The region of stability shrinks when k increases, but remains unbounded for k ≤ 6:

[Figure: stability regions of BDF-1 to BDF-6]
9.31: Multistep methods in General

Σ_{i=0}^k α_{k-i} u_{n+1-i} - h_n Σ_{i=0}^k β_{k-i} f(t_{n+1-i}, u_{n+1-i}) = 0.

e_n := y(t_n) - u_n,   with n = t_n/h.

If for exact starting values e_n = O(h), then the method is said to be convergent. More precisely, a method is convergent of order p if

e_n = O(h^p).
9.33: Local Residual
To make a statement about the behavior of the global error, we have to introduce and
study first the local residual:
l(y, t_n, h) := Σ_{i=0}^k α_{k-i} y(t_{n-i}) - h Σ_{i=0}^k β_{k-i} ẏ(t_{n-i})

is called the local residual of the method.
9.34: Example
The local residual of this method is given by

l(y, t + h, h) = y(t + h) - y(t) - h ( (5/12) ẏ(t + h) + (8/12) ẏ(t) - (1/12) ẏ(t - h) ).
9.35: Example (Cont.)
The order of consistency of this method is 3.
9.36: Order of Consistency
Conditions for higher order of consistency are given by the following theorem:
Theorem. A linear multistep method has the order of consistency p if the following p + 1 conditions on its coefficients are met:

Σ_{i=0}^k α_i = 0

Σ_{i=0}^k (i α_i - β_i) = 0

Σ_{i=0}^k ( (i^j / j!) α_i - (i^{j-1} / (j-1)!) β_i ) = 0,   j = 2, …, p.
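These conditions can be checked mechanically. A sketch (my own) verifying them for the Adams–Bashforth-2 method, written with k = 2 as α = (0, -1, 1) and β = (-1/2, 3/2, 0): the conditions hold up to j = 2 and fail for j = 3, so the order of consistency is 2.

```python
from math import factorial

alpha = [0.0, -1.0, 1.0]      # Adams-Bashforth-2, k = 2
beta = [-0.5, 1.5, 0.0]

def condition(j):
    # j = 0: sum of alpha_i;
    # j >= 1: sum of i^j/j! * alpha_i - i^(j-1)/(j-1)! * beta_i
    if j == 0:
        return sum(alpha)
    return sum(i**j / factorial(j) * a - i**(j - 1) / factorial(j - 1) * b
               for i, (a, b) in enumerate(zip(alpha, beta)))

residuals = [condition(j) for j in range(4)]
# residuals[0..2] vanish, residuals[3] = 5/12 != 0
```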
9.37: Asymptotic Form of Local Residual
A method of order of consistency p has a local residual of the form

l(y, t, h) = c_{p+1} h^{p+1} y^{(p+1)}(t) + O(h^{p+2}).
9.38: Global Error Propagation
and

Σ_{i=0}^k α_{k-i} u_{n-i} - h_n Σ_{i=0}^k β_{k-i} A u_{n-i} = 0

gives a recursion for the global error:

Σ_{i=0}^k α_{k-i} e_{n-i} - h_n Σ_{i=0}^k β_{k-i} A e_{n-i} = l(y, t_n, h).
9.39: Global Error Propagation (Cont.)
With

E_n := (e_n, e_{n-1}, …, e_{n-k+1})^T ∈ R^{k·n_y}

this recursion formula can be written in one-step form as

E_{n+1} = Φ_n(h) E_n + M_n
9.40: Global Error Propagation (Cont.)
with

Φ_n(h) := [ -A_k^{-1}A_{k-1}   -A_k^{-1}A_{k-2}   ⋯   -A_k^{-1}A_1   -A_k^{-1}A_0
                  I                   0            ⋯        0              0
                  0                   I            ⋯        0              0
                  ⋮                                         ⋮              ⋮
                  0                   0            ⋯        I              0 ],

M_n := ( A_k^{-1} l(y, t_n, h), 0, …, 0 )^T.
9.41: Global Error Propagation (Cont.)
From this formula we see how the global error of a multistep method is built up. There is in
every step a (local) contribution Mn, which is of the size of the local residual. Therefore,
a main task is to control the integration in such a way that this contribution is kept small.
The effect of these local residuals on the global error is influenced by Φn(h). The local
effects can be damped or amplified depending on the properties of the propagation matrix
Φn(h). This leads to the discussion of the stability properties of the method and its
relation to the stability of the problem.
9.42: Stability
A multistep method is stable if all eigenvalues of Φ(h) are within the unit circle and those on its boundary are simple.
9.43: Organisation of a Multistep code
9.44: Runge–Kutta Methods
Runge–Kutta methods are one-step methods, i.e. they have the generic form

u_{n+1} = u_n + h φ_h(t_n, u_n)

with a method dependent increment function φ_h. In contrast to multistep methods, the transition from one step to the next is based on data of the most recent step only.
9.45: Runge–Kutta Methods: Basic Scheme
U_1 = u_n

U_i = u_n + h Σ_{j=1}^{i-1} a_{ij} f(t_n + c_j h, U_j),   i = 2, …, s

u_{n+1} = u_n + h Σ_{i=1}^s b_i f(t_n + c_i h, U_i).
9.46: Runge–Kutta Methods: Example
U_1 = u_n

U_2 = u_n + (h/2) f(t_n, U_1)

u_{n+1} = u_n + h f(t_n + h/2, U_2)

For this method the increment function reads

φ_h(t, u) := f( t + h/2, u + (h/2) f(t, u) ).
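The method above (the explicit midpoint rule) takes only a few lines; a sketch of mine, applied to y' = -y:

```python
import math

def midpoint_step(f, t, u, h):
    U2 = u + 0.5 * h * f(t, u)           # U_2 = u_n + (h/2) f(t_n, U_1)
    return u + h * f(t + 0.5 * h, U2)    # u_{n+1} = u_n + h f(t_n + h/2, U_2)

u, t, h = 1.0, 0.0, 0.01
for _ in range(100):                     # integrate y' = -y to t = 1
    u = midpoint_step(lambda t, y: -y, t, u, h)
    t += h
# u approximates exp(-1) with an O(h^2) error
```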
9.47: Runge–Kutta Methods: Stage derivatives
k_1 = f(t_n, u_n)

k_i = f(t_n + c_i h, u_n + h Σ_{j=1}^{i-1} a_{ij} k_j),   i = 2, …, s

u_{n+1} = u_n + h Σ_{i=1}^s b_i k_i.
9.48: Runge–Kutta Methods: Butcher Tableau
c_1 |
c_2 | a_21
c_3 | a_31  a_32
 ⋮  |  ⋮          ⋱
c_s | a_s1  a_s2  ⋯  a_s,s-1
    | b_1   b_2   ⋯  b_{s-1}   b_s

or, in compact notation, (c | A; b^T).
9.49: Butcher Tableau - RK4
 0   |
1/2  | 1/2
1/2  |  0   1/2
 1   |  0    0    1
     | 1/6  2/6  2/6  1/6
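A Butcher tableau translates directly into code. A generic explicit Runge–Kutta step (a sketch of mine), driven here by the classical RK4 coefficients above:

```python
import math

def rk_step(f, t, u, h, A, b, c):
    # one explicit Runge-Kutta step defined by the Butcher tableau (A, b, c)
    k = []
    for i in range(len(b)):
        U = u + h * sum(A[i][j] * k[j] for j in range(i))
        k.append(f(t + c[i] * h, U))
    return u + h * sum(bi * ki for bi, ki in zip(b, k))

# classical RK4 tableau
A = [[0, 0, 0, 0], [0.5, 0, 0, 0], [0, 0.5, 0, 0], [0, 0, 1, 0]]
b = [1/6, 2/6, 2/6, 1/6]
c = [0, 0.5, 0.5, 1]

u, t, h = 1.0, 0.0, 0.1
for _ in range(10):                      # integrate y' = -y to t = 1
    u = rk_step(lambda t, y: -y, t, u, h, A, b, c)
    t += h
```

Swapping in another explicit tableau changes the method without touching the stepping code.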
9.50: Order of a Runge–Kutta Method
The global error of a Runge–Kutta method at tn is defined in the same way as for multistep
methods
e_n := y(t_n) - u_n,

with n = t_n/h. A Runge–Kutta method has order p if e_n = O(h^p).
9.51: Embedded Runge-Kutta Methods
The local error can be estimated by comparing the result after one step with two methods
with different order:
c_1 |
c_2 | a_21
c_3 | a_31  a_32
 ⋮  |  ⋮          ⋱
c_s | a_s1  a_s2  ⋯  a_s,s-1
    | b_1^(p)    b_2^(p)    ⋯  b_{s-1}^(p)    b_s^(p)
    | b_1^(p+1)  b_2^(p+1)  ⋯  b_{s-1}^(p+1)  b_s^(p+1)
9.52: Runge–Kutta–Fehlberg 2(3)
 0   |
 1   |  1
1/2  | 1/4  1/4
     | 1/2  1/2   0
     | 1/6  1/6  4/6

It uses 3 stages.
9.53: Stability of RK4(5)
[Figure: stability regions of DOPRI4 and DOPRI5]
Ö8 : MATLAB ODE example
ODE:

α̈(t) = -(g/l) sin(α(t))

ODE in first order form:

α̇_1(t) = α_2(t)
α̇_2(t) = -(g/l) sin(α_1(t))

Initial values:

α_1 = π/2 and α_2 = 0
Ö8.1 : Pendulum - right hand side function
function ydot=pendel(t,y)
%
g=9.81;
l=1;
%
alpha1=y(1);
alpha2=y(2);
%
ydot(1)=alpha2;
ydot(2)=-g/l*sin(alpha1);
ydot=ydot’;
Ö8.2 : Pendulum Simulation
Ö8.3 : Pendulum Simulation (Cont.)
The blue curve depicts α(t), the green one its derivative, and the circles indicate the step sizes:

[Figure: pendulum simulation over 0 ≤ t ≤ 10]
Ö8.4 : Implicit Euler Code
function [t,y]=impeul(odefun,tspan,y0,n)
t0=tspan(1); te=tspan(2); t=linspace(t0,te,n);h=t(2)-t(1);
y(:,1)=y0;
for i=1:n-1 % integrator loop
% predict
ypred=y(:,i)+h*feval(odefun,t(i),y(:,i));
y(:,i+1)=ypred;
% corrector with fixed point iteration
for j=1:2 % we use a fixed number of iterations
% and do not check convergence
y(:,i+1)=y(:,i)+h*feval(odefun,t(i+1),y(:,i+1));
end
end
Ö8.5 : Implicit Euler Damping
[Figure: implicit Euler solution with n = 200 steps: clearly visible numerical damping]
Ö8.6 : Implicit Euler Damping
[Figure: implicit Euler solution with n = 2000 steps: the numerical damping is much weaker]
10.1: Outlook
This course is a basic appetizer for continued work and education in numerical analysis
and applied mathematics.
There are courses on
11.1: Appendix
Claus Führer, 2008-01-16
Assignment 1
Last hand-in day: 23 January 2008 at 13:00h
Goal: Making practical experiments with fixed point and Newton iteration
Task 1 Consider the problem of finding all zeros of the function f(x) = 2x - tan x. Transform this function into the two different fixed point problems

x = (1/2) tan x   or   x = arctan(2x).

Try to perform fixed point iteration with these functions, discuss convergence and estimate the error after 10 iteration steps (a posteriori estimate) in the convergent case.
Construct another fixed point problem which has as fixed points the roots of the function f. Discuss also for this third alternative the convergence properties as above.
Task 2 Write in MATLAB three functions, one which performs one step
of Newton-iteration with exact derivative, one which does the same,
but with a finite-difference approximation of the derivative and one
for the secant method. Solve with these functions the problem f (x) =
arctan(x) = 0 by calling them in MATLAB within an appropriate
for-loop.
Task 4 Read Sec. 1.7 in the course book. It contains two examples, which
demonstrate interesting phenomena observed when performing fixed
point iteration and Newton iteration. Program fixed point iteration
for the logistic equation (1.33) and demonstrate the phenomena by
your computations. Be prepared to present your results in class on
Tuesday, January 25.
Good luck!
Claus Führer, 2008-01-23
Assignment 2
Last hand-in day: 30 January 2008 at 13:00h
Goal: Matlab experience with Newton iteration in Rn , mechanical example,
fractal structure of Newton convergence regions, a property of the condition
number
[Figure: 2D truck model with chassis (body 2), cabin (body 4) and loading area (body 5), described by coordinates p_1, …, p_9]
p(0) = (0.52, 2.16, 0.0, 0.52, 2.68, 0.0, −1.5, 2.7, 0.0)T .
Download the MATLAB file truck acc.m which evaluates the right hand side of the differential equation describing the motion of the truck. That m-file will also need spridamp.m and relat.m. Compute the equilibrium position of the truck by using Newton's method. Discuss the rate of convergence.
Task 2 We will now construct a fractal figure using Newton's method. Consider the roots of the equation system

F(x) = ( x_1^3 - 3 x_1 x_2^2 - 1 ; 3 x_1^2 x_2 - x_2^3 ) = 0

Good luck!
Claus Führer, 2008-01-30
Assignment 3
Last hand-in day: 6 February 2008 at 13:00h
Goal: To get experience with Polynomial interpolation, Chebyshev polyno-
mials, Runge’s phenomenon
Task 1 This task is just to get experience with simple data interpolation
and extrapolation. In the table below you find three columns with
energy data from my house in Södra Sandby. Construct the unique
polynomial which interpolates the energy consumption and another
which interpolates the temperature. Plot the curves. Evaluate these
polynomials to see which energy I will consume on Tuesday, February 6, and find out what the average temperature will be on that day.
Note, you get in all three cases the same polynomial, but in totally different representations, i.e. your plots should be identical in all three cases (if you did everything correctly).
interval [−1, 1] and plot ωn (x). Set n = 5 and try out different choices
of interpolation points. Can you recommend an optimal one? Test even
the case n = 15.
Task 4 Now we visualize the error. To this end, interpolate the function

f(x) = 1 / (1 + 25x^2)
on an equidistant grid in the interval [−1, 1] with n = 3, 9, 15. Con-
struct also an interpolation polynomial on a grid with Chebyshev
points.
Good luck!
Claus Führer, 2008-02-05
Assignment 4
Last hand-in day: 13 February 2008 at 15:00h
Goal: To get experience with splines
            E              F              G              H
a         93.576667419    8.834924130   16.             9.519259302
b          2.747477419   20             12.            20.5
c          —             58.558326413   55.            49.5
ξ_min    -39.764473993  -49.662510381  -62.764705882  -70.0
ξ_max    -38.426669071  -39.764473993  -49.662510381  -62.764705882
[Figure 1: The S1002 wheel profile and its sectors of polynomial description (sectors A–H), plotted over -80 to 60]
Describe this wheel profile by means of a natural cubic spline. To this end, download the file s1002.m, which contains the above description of the S1002 wheel standard, and generate from it the data which you then use to build an interpolating spline with your programs from Tasks 1 and 2. Plot the resulting curve.
Good luck!
Claus Führer, 2008-01-30
Assignment 5
Last hand-in day: 20 February 2008 at 13:00h
Goal: To get experience with Polynomial interpolation, Chebyshev polyno-
mials, Runge’s phenomenon
Task 1 Compute and plot the polynomial p5 which best approximates the
function f (x) = arctan(x) in the interval [−1, 1]. Make three different
approaches
1. Use a monomial basis for this task, set up a Hilbert matrix and solve for the coefficients. Use the inner product (f, g) = ∫_{-1}^{1} f(x) g(x) dx.

2. Use Legendre polynomials as a basis and the same inner product.

3. Use the inner product (f, g) = ∫_{-1}^{1} (1/√(1 - x^2)) f(x) g(x) dx instead and use Chebyshev polynomials as a basis.

To compute integrals of nonpolynomial expressions use MATLAB's command quad, i.e. do not make symbolic computations. Study the influence of different integration tolerances (TOL).
Task 3 Write two MATLAB programs mysimpson and mygauss3, which compute a Simpson and a 3-stage Gauss approximation to the integral ∫_a^b f(x) dx for a given function and a given number of steps.

Good luck!
Claus Führer, 2008-02-20
Assignment 6
Last hand-in day: 27 February 2008 at 15:00h
Goal: To solve a simple boundary value problem with Galerkin’s method
and test a multistep method for an initial value problem.
Task 1 Consider a steel rod with a quadratic cross section, which is loaded eccentrically by a force F, see figure. The differential equation for the displacement of its central line (dotted line) is given by

E I w''(x) + F w(x) = (a/b) F x

where E is Young's modulus 208 GPa, the moment of inertia of the cross section is I = 0.036 m^4, the length b = 3 m and the height a = 0.06 m. The force is F = 180 kN. The rod is supported at both ends, which leads to the boundary conditions

w(0) = w(b) = 0.
Task 2 We compute once more (see Assignment 2) the steady state position of the 2D-truck model. Now we compute it by solving its differential equation. We take as initial conditions p(0) from Assignment 2 and v(0) = 0 (9 components). The differential equation is given by the file trurhs.m and the corresponding subprograms, which can be downloaded as a zip-file from the web.

Good luck!
Claus Führer, 2008-02-26
There will be consulting hours even for the project (see the web page for
dates).
Scenario: Consider the pendulum equation from the lecture with the same initial conditions. This time we place an obstacle (ett hinder, sv.) at the angle α_obst = (2/3)π, see picture. The obstacle will be hit at an unknown time t_obst. Let's say the pendulum then has an (also unknown) angular velocity α̇_obst. When the obstacle is hit, we have to restart the integration of the problem with new initial conditions (α_obst, -α̇_obst)^T.
Your task is to integrate the pendulum numerically over many periods and to show how the obstacle influences its trajectory. Furthermore we are interested in t_obst, α_obst, and α̇_obst. The integration should be performed with a 2-step BDF method. To start the integration, use an implicit Euler step first.
Organization of your work:
1. Write a MATLAB code, which performs a single step of the im-
plicit Euler method.
2. Write a MATLAB code performing a single step of BDF-2. (coef-
ficients of the method and of its predictor, see at the end of this
project page)
[Figure: pendulum with an obstacle at angle α_obst]
3. Combine these programs and solve the problem without the ob-
stacle. Check your solution by comparing it to the results from
the lecture notes.
4. Write a MATLAB program, which computes an interpolation
polynomial, for three consecutive solution points un+1 , un , un−1 .
5. Call this MATLAB function after every integration step, so that
you have in a typical step tn → tn+1 the coefficients of a polyno-
mial p(t) available. Note, that this polynomial is a vector valued
function p(t) = (pα (t), pα̇ (t))T with two components one interpo-
lating angles and another interpolating angular velocities.
6. Check if the integration passed the obstacle. To this end, you have to check whether the function p_α(t) - α_obst = 0 has a solution in [t_n, t_{n+1}]. If you find a solution of this nonlinear equation, call it t_obst. It is the time when the obstacle is hit.
7. Compute αobst and α̇obst by evaluating the polynomial.
8. Restart your integration by setting the initial time to tobst and
by setting new initial conditions (as given above).
9. Perform your integration over 5 sec and with an appropriate fixed
step size h.
10. Make a plot of your result. A phase plot (α versus α̇) is quite
instructive.
Note, this exercise tries to put the content of the several chapters of the course into a common, application oriented setting. Here you should check to what extent you understood the material of the course and whether you can handle practical problems. In the theoretical exam you will be asked a couple of questions around this problem and the other home assignments.
This exercise may appear hard, as it consists of the combination of many different steps. There will be help as usual. So just try.
Good luck!
Flow Chart 1.1: Generic Integrator Call

Initialization:
  a) model data: mass, gravity, etc.
  b) integrator control data: initial stepsize, tolerance, etc.
tout = tstart:Dtout:tend
ODE solver (see Flowchart 1.2):
  on return: x(tout), error code
Error?
  Yes → Break
  No  → Postprocessing
Flow Chart 1.2: Generic Single Step Integrator Call

ODEstep solver (see Flowchart 1.3):
  on return: x(tout), error code
Error?
  No  → t = t + h, continue
  Yes → Step size too small?
          Yes → Error handling, Break
          No  → Redo step with new step size
Flow Chart 1.3: Generic Single Step Integrator Organisation

Predict
New Jacobian needed?
  Yes → Compute Jacobian, Jacnew = 1
  No  → Jacnew = 0
Corrector iteration: convergence?
  Yes → Estimate Error
          Error < Tol → Accept step, increase step size
          Error ≥ Tol → Reject step, redo the step with h = h/2,
                        decrease step size and require a new Jacobian
  No  → Jacnew = 0?
          Yes → Redo the step with the same step size and require a new Jacobian
          No  → Redo the step with h = h/2