Numerical Methods

Claus Führer
Lund University
claus@maths.lth.se
Lund, Jan/Feb. 2008
Unit 0: Preface
• These notes serve as a skeleton for the course. Together with the assignments, they document the course outline and course content.
• All references in the notes refer to the textbook by Süli and Mayers if not otherwise stated.
• The notes are a guide to reading the textbook. They are not a textbook.
• Please report misprints or your comments to the teacher.
Unit 1: Basic Iterative Schemes in 1D
Problem:
Given: f : R →R
Find: x ∈ R such that f(x) = 0.
x is called a root or a zero of f.
1.1 Existence
Theorem. [1.1]
f : [a, b] ⊂ R → R continuous and f(a)f(b) < 0,
i.e. f(a) and f(b) have different signs.
Then
There exists ξ ∈ [a, b] with f(ξ) = 0 .
Repeat the definition of multiplicity of a root!
Note: roots of even multiplicity cannot be detected by this theorem.
1.2 Example
[Figure: a function with a double root and a single root]
The algorithm based on this theorem is the bisection method.
(Intervallhalveringsmetoden). See also MATLAB's fzero.
1.3 Fixed Point Formulation
We rewrite the problem
f(x) = 0 ⇔ x = g(x)
x is called a fixed point of g.
Examples:
g(x) = x −f(x)
g(x) = x +αf(x)
1.4 Brouwer's Fixed Point Theorem
Theorem. [1.2] (Brouwer's Fixed Point Theorem)
g : [a, b] ⊂ R → R, continuous and g(x) ∈ [a, b],
i.e. g maps [a, b] into itself.
Then
There exists ξ ∈ [a, b] with g(ξ) = ξ
1.5 Example
Root-finding problem: 0 = exp(x) −2x −1
Fixed-point problem: x = ln(2x + 1)
[Figure: graphs of exp(x)−2x−1 (root-finding form) and ln(2x+1) (fixed-point form)]
1.6 Fixed Point Iteration
Definition. [1.1] The iteration
x_{k+1} = g(x_k)
is called fixed point iteration or simple iteration.
If it converges, then to a fixed point of g:
ξ = lim_{k→∞} x_{k+1} = lim_{k→∞} g(x_k) = g(lim_{k→∞} x_k) = g(ξ)
1.7 Example
>> x(1)=1;for k=1:17, x(k+1)=log(2*x(k)+1);end
>> x
x =
Columns 1 through 10
1.0000 1.0986 1.1623 1.2013 1.2246 1.2381 1.2460 1.2504 1.2530 1.2545
Columns 11 through 18
1.2553 1.2558 1.2561 1.2562 1.2563 1.2564 1.2564 1.2564
1.8 Contractions
Definition. [1.2] A function g is called a contraction (or contractive) on [a, b]
if there exists a constant L < 1 such that
|g(x) − g(y)| ≤ L|x − y| ∀x, y ∈ [a, b]
The interval [x, y] shrinks to [g(x), g(y)].
If g is differentiable, then |g′(x)| < 1 for all x ∈ [a, b].
1.9 Contraction Mapping Theorem
Theorem. [1.3] g : [a, b] ⊂ R →R, continuous and g(x) ∈ [a, b],
g contractive in [a, b]
Then
• g has a unique fixed point in [a, b]
• the iteration x_{k+1} = g(x_k) converges
1.10 Example
g(x) = ln(2x + 1)
By the mean value theorem (medelvärdessats):
|g(x) − g(y)| ≤ max_η |g′(η)| |x − y|
and g′(x) = 2/(2x + 1). g′(x) is a decreasing function.
Thus max_{x∈[1,2]} |g′(x)| = |g′(1)| = 2/3.
We set L = 2/3.
1.11 Speed of convergence
Let’s check the speed of convergence:
|x_{k+1} − x_k| = |g(x_k) − g(x_{k−1})| ≤ L|x_k − x_{k−1}|
Thus
|x_{k+1} − x_k| / |x_k − x_{k−1}| ≤ L
1.12 Example: Speed of convergence
>> diff(x(2:end))./diff(x(1:end-1))
ans =
Columns 1 through 10
0.6457 0.6134 0.5946 0.5838 0.5776 0.5740 0.5720 0.5709 0.5702 0.5698
Columns 11 through 16
0.5696 0.5695 0.5694 0.5694 0.5694 0.5694
1.13 Error bounds
We get
|x_k − ξ| = |g(x_{k−1}) − g(ξ)|
          ≤ L|x_{k−1} − ξ|
          ≤ L(|x_{k−1} − x_k| + |x_k − ξ|)
and consequently:
|x_k − ξ| ≤ (L/(1 − L)) |x_k − x_{k−1}|
This is called an a-posteriori estimate.
1.14 Error bounds (Cont.)
Analogously we can derive an a-priori bound
|x_k − ξ| ≤ (L^k/(1 − L)) |x_1 − x_0|
How many iterations do we need in our example for an accuracy of |x_k − ξ| ≤ 10^{−8}?
See also Th. 1.4 in the book.
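A quick way to answer this question in MATLAB (a sketch, using L = 2/3 from overhead 1.10 and the iterates from overhead 1.7):
L   = 2/3;                        % Lipschitz constant, see overhead 1.10
x0  = 1; x1 = log(2*x0+1);        % first two iterates of x = ln(2x+1)
tol = 1.e-8;
% smallest k with L^k/(1-L)*|x1-x0| <= tol
k = ceil(log(tol*(1-L)/abs(x1-x0))/log(L))
This gives k = 43, which agrees with the slow linear convergence observed above.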
1.15 Rate of Convergence
Definition. [1.4] Assume lim_{k→∞} x_k = ξ.
• Linear convergence, if |x_k − ξ| < ε_k and lim_{k→∞} ε_{k+1}/ε_k = µ with µ ∈ (0, 1)
• Superlinear convergence if µ = 0.
• Sublinear convergence if µ = 1.
• Asymptotic rate of convergence: ρ = −log_{10} µ
1.16 Rate of Convergence (Cont.)
ρ large ⇒ fast (linear) convergence
ρ small ⇒ slow (linear) convergence
Check example on overhead 1.13 again.
1.17 Roots of a function
Example (p.17): f(x) = e^x − x − 2 = 0
[Figure: graph of exp(x) − x − 2 with roots x_1 and x_2]
1.18 Roots of a function Cont.
e^x − x − 2 = 0 ⇔ x = ln(x + 2) ⇔ x = e^x − 2 ⇒ x = x(e^x − x)/2
[Figure: the three fixed-point maps ln(x+2), exp(x)−2 and x(exp(x)−x)/2, with fixed points x_1 and x_2; the third map has an extra fixed point at x = 0]
1.19 Relaxation
We now construct g systematically from f.
A first attempt is
Definition. [1.5] A relaxed version of a fixed point iteration with f is
x_{k+1} = x_k − λf(x_k)
with λ ≠ 0.
Let ξ fulfill f(ξ) = 0. We select λ such that the relaxed iteration converges fast to ξ, if x_0 is near ξ.
1.20 Relaxation (Cont.)
Iteration function: g(x) = x −λf(x)
Contraction is determined by max |g′(x)| in a neighborhood of ξ:
g′(x) = 1 − λf′(x)
Some (loose) concepts:
• λ should have the same sign as f′(x), why?
• x_0 should be sufficiently near ξ
• λ not too large
1.20 Relaxation (Cont.)
Theorem. [1.7]
Let f and f′ be continuous, f(ξ) = 0 and f′(ξ) ≠ 0.
Then
There exist real numbers λ and δ such that the relaxed iteration converges to ξ for all x_0 ∈ [ξ − δ, ξ + δ].
How to find such a λ, which also guarantees fast convergence?
1.21 Newton’s Method
We modify the relaxed iteration to
x_{k+1} = x_k − λ(x_k)f(x_k)
and make the optimal choice λ(x) = 1/f′(x). This leads to
Definition. [1.6]
Newton’s method for solving f(x) = 0 is defined by
x_{k+1} = x_k − f(x_k)/f′(x_k)
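A minimal MATLAB sketch of this iteration (the function, its derivative and the starting value are illustrative assumptions, taken from the example on overhead 1.5):
f      = @(x) exp(x) - 2*x - 1;   % example problem from overhead 1.5
fprime = @(x) exp(x) - 2;
x = 2;                            % starting value, assumed close to the root
for k = 1:25
    dx = f(x)/fprime(x);          % Newton correction
    x  = x - dx;
    if abs(dx) < 1.e-12, break, end
end
x                                 % approx. 1.2564, cf. overhead 1.7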
1.22 Newton’s Method (Cont.)
[Figure: Newton's method, with iterates x_0, x_1, x_2 and the tangents used to construct them]
Tangent: y = f′(x_k)(x − x_k) + f(x_k)
1.23 Newton’s Method Convergence
Definition. [1.7] Assume lim_{k→∞} x_k = ξ.
• The iteration converges with order q iff |x_k − ξ| < ε_k and lim_{k→∞} ε_{k+1}/ε_k^q = µ
• If q = 2 it converges quadratically
1.24 Newton’s Method Convergence (Cont.)
Newton's method converges locally (as it is a fixed point iteration with a locally contractive function).
It also converges quadratically:
Theorem. [1.8] Let f, f′, f″ be continuous in I_δ := [ξ − δ, ξ + δ], f(ξ) = 0 and f′(ξ) ≠ 0.
If there exists a constant A with
|f″(x)|/|f′(y)| ≤ A ∀x, y ∈ I_δ
and x_0 ∈ I_h := [ξ − h, ξ + h] with h = min(1/A, δ), then Newton's method converges quadratically to ξ.
1.25 Variants of Newton’s Method
Secant Method
x_{k+1} = x_k − f(x_k) (x_k − x_{k−1})/(f(x_k) − f(x_{k−1}))
Simplified Newton's Method
x_{k+1} = x_k − f(x_k)/f′(x_0)
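A sketch of the secant method in MATLAB (same example problem as above; the two starting values are assumptions):
f  = @(x) exp(x) - 2*x - 1;
x0 = 1; x1 = 2;                              % two starting values
for k = 1:30
    x2 = x1 - f(x1)*(x1-x0)/(f(x1)-f(x0));   % secant step
    x0 = x1; x1 = x2;
    if abs(x1-x0) < 1.e-12, break, end
end
x1
Note that no derivative of f is needed, at the price of a slightly lower convergence order.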
Ö1: Exercise Notes
We consider the problem
f(x) = e^{−x} − x = 0
and solve it first by MATLAB’s function fzero:
[x,fx,excit,diagn]=fzero(@(x) exp(-x)-x,[0.1,2])
Note here the use of anonymous functions in MATLAB.
(see also http://www.mathworks.com/access/helpdesk/help/techdoc/index.html?/access/helpdesk/help/techdoc/
matlab_prog/f4-70115.html)
Ö1.2: Exercise Notes (Cont.)
Here is the result (use format long e!):
x = 5.671432904097838e-01
fx = 1.110223024625157e-16
excit = 1
diagn =
intervaliterations: 0
iterations: 7
funcCount: 9
algorithm: ’bisection, interpolation’
message: ’Zero found in the interval [0.1, 2]’
Ö1.3: Exercise Notes (Cont.)
We write now our own bisection code
function [root,exit,it]=mybisec(myfunc,start,Tol)
% initialization
a=start(1); b=start(2); maxit=50; it=0;
fa=feval(myfunc,a); fb=myfunc(b);
if fa*fb > 0
    disp('wrong interval taken')
    exit=0;
    return
end
% iteration
for i=1:maxit
    c=(a+b)/2;
    fc=feval(myfunc,c);
    if fa*fc < 0
        b=c;
    else
        a=c;
        fa=feval(myfunc,a);
    end
    if b-a < Tol
        exit=1;
        root=(a+b)/2;
        it=i;
        break
    end
end
Check in the MATLAB help pages the use of feval !
Ö1.4: Exercise Notes (Cont.)
Here is the result (use format long e!):
>> [r,e,it]=mybisec(@(x) exp(-x)-x,[0,1],1.e-6)
r = 5.671429634094238e-01
e =1
it =20
Ö1.5: Exercise Notes (Cont.)
How could the code mybisec be improved?
• Avoid maxit and determine during the initialization step the required number of iterations.
• Silently it was assumed that b > a. This should be checked during initialization.
• The code should handle the case when the zero is exactly hit.
Ö1.6: Exercise Notes (Cont.)
We now solve the problem by fixed point iteration and rewrite it as a fixed point problem:
x = e^{−x}
The interval [0.1, 1] is mapped into itself and the map is contractive with Lipschitz constant
L = max |g′(x)| = e^{−0.1} ≈ 0.905.
We expect slow convergence of the fixed point iteration:
x_{k+1} = e^{−x_k}
Ö1.7: Exercise Notes (Cont.)
In MATLAB
x(1)=0.1;
for i=1:30
    x(i+1)=exp(-x(i));
end
Ö1.7: Exercise Notes (Cont.)
... which results in
>> format compact
>> x(1:2)
ans =
0.100000000000000 0.904837418035960
>> x(30:31)
ans =
0.567143328627341 0.567143268734953
Ö1.8: Exercise Notes (Cont.)
The error can be estimated via the a posteriori estimate
L=0.905;
est=abs(x(end)-x(end-1))*L/(1-L)
which gives est=5.705e−07 whereas the exact error is eps=3.053e−07.
Ö1.8: Exercise Notes (Cont.)
To determine the convergence rate one may use the following command:
rate=diff(x(2:end))./diff(x(1:end-1))
Note the MATLAB operations ./ .* .^ (and check their meaning).
Unit 2: Linear Systems, Vector Spaces
Definition. [–] A set V is called a linear space or a vector space over R if there are two operations + and · with the following properties:
• v_1, v_2 ∈ V ⇒ v_1 + v_2 ∈ V
• v_1 + v_2 = v_2 + v_1
• v_1 + (v_2 + v_3) = (v_1 + v_2) + v_3
• There is an element 0_V ∈ V with 0_V + v = v + 0_V = v
• There is an element −v ∈ V with v + (−v) = 0_V
2.1: Vector Spaces (Cont.)
• α ∈ R ⇒ α · v ∈ V
• (α + β) · v = α · v + β · v
• α · (β · u) = (αβ) · u
• α · (v_1 + v_2) = α · v_1 + α · v_2
• 1 · v = v
One can then easily show 0 · v = 0 and −1 · v = −v.
The elements v of V are called vectors, the elements of R scalars.
2.2: Example of Vector Spaces
In linear algebra: R^n, the null space and the column space of a matrix.
In this course:
• the space of all n × m matrices
• the space of all polynomials of degree n: P_n
• the space of all continuous functions C[a, b]
• the space of all functions with a continuous first derivative, C^1[a, b]
• ....
2.3: Example of Vector Spaces
Not a vector space: The set of all polynomials of degree n which have the
property p(2) = 5.
2.4: Basis, Coordinates
One describes a certain element in a vector space by giving its coordinates in
a given basis. (Recall these corresponding definitions from linear algebra).
The number of basis vectors determines the dimension of the vector space.
C[a, b] is a vector space with infinite dimension, P_n has finite dimension.
2.5: Norms
First we recall properties of the absolute value (absolut belopp) of a real
number:
v ∈ R → |v| = { v if v ≥ 0; −v if v < 0 }
• |v| ≥ 0 and |v| = 0 ⇔ v = 0
• |λv| = |λ||v|
• |u + v| ≤ |u| + |v|
2.6: Vector Norms
We generalize the definition of the absolute value of a real number to norms
of vectors and matrices (later also of functions):
Definition. [2.6] Let V be a linear space. A function ‖·‖ : V → R is called a norm if for all u, v ∈ V and all λ ∈ R:
• ‖v‖ ≥ 0 and ‖v‖ = 0 ⇔ v = 0 (positivity)
• ‖λv‖ = |λ| ‖v‖ (homogeneity)
• ‖u + v‖ ≤ ‖u‖ + ‖v‖ (triangle inequality)
2.7: Examples
Examples of norms in R^n:
• 1-norm: ‖v‖_1 = Σ_{i=1}^{n} |v_i|
• 2-norm (Euclidean norm): ‖v‖_2 = (Σ_{i=1}^{n} v_i^2)^{1/2}
• ∞-norm: ‖v‖_∞ = max_{i=1:n} |v_i|
2.8: Unit Circle
The unit circle is the set of all vectors of norm 1.
[Figure: unit circles of the 1-, 2- and ∞-norms]
2.10: Convergence
Theorem. [-]
If dim V < ∞ and if ‖·‖_p and ‖·‖_q are norms on V, then there exist constants c, C such that for all v ∈ V
c‖v‖_q ≤ ‖v‖_p ≤ C‖v‖_q
Norms in finite dimensional spaces are equivalent.
Sequences convergent in one norm are convergent in all others.
2.11: Matrix norms
A matrix defines a linear map.
Definition. [2.10]
Let ‖·‖ be a given vector norm. The corresponding (subordinate) matrix norm is defined as
‖A‖ = max_{v∈R^n\{0}} ‖Av‖/‖v‖
i.e. the largest relative change of a vector when mapped by A.
2.12: Matrix norms
Example: A = [ 2 1 ; 1 1 ]
[Figure: the image of the unit circle under A]
‖A‖ = 2.6180
2.13: How to compute matrix norms
Matrix norms are computed by applying the following formulas:
1-norm (Th. 2.8): ‖A‖_1 = max_{j=1:n} Σ_{i=1}^{n} |a_{ij}| (maximal column sum)
∞-norm (Th. 2.7): ‖A‖_∞ = max_{i=1:n} Σ_{j=1}^{n} |a_{ij}| (maximal row sum)
2-norm (Th. 2.9): ‖A‖_2 = max_{i=1:n} √(λ_i(A^T A))
where λ_i(A^T A) is the i-th eigenvalue of A^T A.
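These formulas are easy to check in MATLAB, e.g. for the matrix of the previous overhead (a small sketch; the built-in norm computes the subordinate norms directly):
A = [2 1; 1 1];
n1   = max(sum(abs(A),1));          % maximal column sum
ninf = max(sum(abs(A),2));          % maximal row sum
n2   = sqrt(max(eig(A'*A)));        % largest eigenvalue of A'*A
[n1 ninf n2]
[norm(A,1) norm(A,inf) norm(A,2)]   % built-in check, same values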
2.14: Condition of a Problem
A mathematical problem can be viewed as a function mapping in-data to out-data (solution):
f : D ⊂ V → W
The condition number is a measure of the sensitivity of the out-data with respect to perturbations in the in-data.
Scalar case: y = f(x) and ŷ = f(x + δx):
ŷ − y = f(x + δx) − f(x) = f′(x)δx + (1/2!) f″(x + θδx)(δx)^2
2.15: Condition of a Problem (Cont.)
Scalar case, relative:
(ŷ − y)/y = (x f′(x)/f(x)) (δx/x) + O(δx^2)
absolute (local) condition number: Cond_x = |f′(x)|
relative (local) condition number: cond_x(f) = |x f′(x)/f(x)|
2.16: Condition of a Problem (Cont.)
In general:
absolute (local) condition number: Cond_x = ‖f′(x)‖
relative (local) condition number: cond_x(f) = ‖x‖ ‖f′(x)‖ / ‖f(x)‖
and we get
|rel. output error| ≤ cond_x(f) |rel. input error|
2.17: Examples
Example 1: (Summation)
Problem: f : R^2 → R^1 with (x_1, x_2)^T → x_1 + x_2
Jacobian: f′((x_1, x_2)^T) = (1, 1)
In the 1-norm: Cond_{x_1,x_2}(f) = 1,
cond_{x_1,x_2}(f) = (|x_1| + |x_2|)/|x_1 + x_2|
Problem if two nearly identical numbers are subtracted.
2.18: Examples (Cont.)
Example 2: (Linear Systems)
Problem: f : R^n → R^n with b → x = A^{−1}b
Jacobian: f′(b) = A^{−1}
In the 1-norm: Cond_b(f) = ‖A^{−1}‖,
cond_b(f) = ‖b‖ ‖A^{−1}‖ / ‖A^{−1}b‖ ≤ ‖A‖ ‖x‖ ‖A^{−1}‖ / ‖x‖ = ‖A‖ ‖A^{−1}‖ =: κ(A)
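In MATLAB κ(A) is available directly; a quick sketch with the matrix of the next overhead (inv is used here only to illustrate the formula, not as a recommended computation):
A = [1e3 0; 0 1e-3];
kappa = norm(A,1)*norm(inv(A),1)    % = 1e6
cond(A,1)                           % the built-in gives the same value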
2.19: Examples (Cont.)
The estimate is sharp, i.e. there is a worst-case perturbation.
Example:
A = [ 10^3 0 ; 0 10^{−3} ], b = (1, 0)^T, δb = (0, 10^{−5})^T
(see the exercises of the current week and Exercise 2.14 in the book.)
Unit 3: Systems of nonlinear functions
Example (p. 107)
F(x) = ( f_1(x_1, x_2) ; f_2(x_1, x_2) ) = ( x_1^2 + x_2^2 − 1 ; 5x_1^2 + 21x_2^2 − 9 ) = 0
[Figure: the curves f_1 = 0 and f_2 = 0 and their four intersection points]
ξ_1 = (−√3/2, 1/2)^T, ξ_2 = (√3/2, 1/2)^T, ξ_3 = (−√3/2, −1/2)^T, ξ_4 = (√3/2, −1/2)^T
3.1 Fixed Point Iteration in R^n
Definition. [4.1] Let g : D ⊂ R^n → R^n, D closed (cf. p. 105), x_0 ∈ D and g(D) ⊂ D.
The iteration
x_{k+1} = g(x_k)
is called fixed point iteration or simple iteration.
Similarly to the scalar case we define
Definition. [4.2] Let g : D ⊂ R^n → R^n, D closed (cf. p. 105).
If there is a constant L < 1 such that
‖g(x) − g(y)‖ ≤ L‖x − y‖
then g is called contractive.
3.2 Contractivity and norms
Contractivity depends on the choice of a norm.
Here is a function which is contractive in one norm, but not in another:
g(x) = [ 3/4 1/3 ; 0 3/4 ] x
It follows
‖g(x) − g(y)‖ = ‖A(x − y)‖ ≤ ‖A‖ ‖x − y‖
Thus L = ‖A‖.
But ‖A‖_1 = ‖A‖_∞ = 13/12 and ‖A‖_2 = 0.9350.
g is contractive in the 2-norm and dissipative in the others.
3.3 Contractivity and norms (Cont.)
We start a fixed point iteration in the last example with x = (1, 1)^T and get the following norms:
[Figure: 1-, 2- and ∞-norms of the fixed point iterates]
(see also "equivalence of norms")
3.4 Fixed Point Theorem in R^n
Theorem. [4.1] Let g : D ⊂ R^n → R^n, D closed, g(D) ⊂ D.
If there is a norm such that g is contractive, then g has a unique fixed point ξ ∈ D and the fixed point iteration converges.
Let J(x) be the Jacobian (functional matrix → flerdim) of g.
If ‖J(ξ)‖ < 1 then the fixed point iteration converges in a neighborhood of ξ. (Th. 4.2)
3.5: The Jacobian
Newton’s method requires first derivatives.
We recall the definition (see calculus in several variables)
Definition. [4.3] Let f : D ⊂ R^n → R^n, x ∈ D.
The n × n matrix
J_f(x) = [ ∂f_1/∂x_1  ∂f_1/∂x_2  ···  ∂f_1/∂x_n ;
           ∂f_2/∂x_1  ∂f_2/∂x_2  ···  ∂f_2/∂x_n ;
           ⋮ ;
           ∂f_n/∂x_1  ∂f_n/∂x_2  ···  ∂f_n/∂x_n ]
is called the Jacobian or functional matrix of f at x.
3.6: Jacobian: Example
F(x) = ( f_1(x_1, x_2) ; f_2(x_1, x_2) ) = ( x_1^2 + x_2^2 − 1 ; 5x_1^2 + 21x_2^2 − 9 ) = 0
Then
J_F(x) = [ 2x_1 2x_2 ; 10x_1 42x_2 ]
3.7: Jacobian: Numerical Computation
Often the Jacobian is not analytically available and it has to be computed
numerically.
It can be computed column wise by finite differences
function [J]=jacobian(func,x,fx)
% computes the Jacobian of a function
n=length(x);
if nargin==2
    fx=feval(func,x);
end
eps=1.e-8;  % could be made better
xperturb=x;
for i=1:n
    xperturb(i)=xperturb(i)+eps;
    J(:,i)=(feval(func,xperturb)-fx)/eps;
    xperturb(i)=x(i);
end;
3.8: Newton's method in R^n
Newton's method for systems of equations is a direct generalization of the scalar case:
Definition. [4.5] The recursion
x^{(k+1)} = x^{(k)} − J_F(x^{(k)})^{−1} F(x^{(k)})
with J_F(x) being the Jacobian of F is called Newton's method.
Note, in order to avoid confusion with the i-th component of a vector, we now write the iteration counter as a superscript x^{(i)} and no longer as a subscript x_i.
3.9: Newton’s method: Implementation remarks
For implementing Newton's method no matrix inverse is computed (this is too expensive); instead we solve linear equation systems:
J_F(x^{(k)}) ∆x = −F(x^{(k)})
and compute the next iterate by adding the Newton increment ∆x:
x^{(k+1)} = x^{(k)} + ∆x
Solving linear systems is done in MATLAB with the \-command, not with the command inv!!!
3.10: Homotopy Method
Finding a good starting value x^{(0)} for Newton's method is a crucial problem.
The homotopy method (continuation method, successive loading method) can be used to generate a good starting value.
Assume that F_0 is a known function with a known zero x*, i.e. F_0(x*) = 0.
We construct a parameter-dependent function
H(x, s) = sF(x) + (1 − s)F_0(x), s ∈ [0, 1]
and note that H(x, 0) = 0 is the problem with the known solution and H(x, 1) = 0 is the original problem F(x) = 0.
3.11: Homotopy Method (Cont.)
An example for F_0(x) is just the trivial choice
F_0(x) := F(x) − F(x*)
This gives the homotopy function
H(x, s) = F(x) + (s − 1)F(x*)
for a given vector x*.
3.12: Homotopy Method (Cont.)
As the solution of H(x, s) = 0 depends on s, we denote it by x*(s).
We now discretize the interval into 0 = s_0 < s_1 < ··· < s_n = 1 and solve a sequence of nonlinear systems with Newton's method:
H(x, s_i) = 0
Each iteration is started with the solution x*(s_{i−1}) of the preceding problem.
3.13: Homotopy Method: Example
Let's consider the scalar problem arctan(x) = 0 again.
We saw previously that Newton's method for this problem fails to converge if started with |x_0| > 1.34.
Assume we have an initial guess x* = 4 instead. We set up a homotopy function
H(x, s) = arctan(x) + (s − 1) arctan(4) = 0
and discretize s by selecting s_i = i · 0.1, i = 0 : 10.
3.14: Homotopy Method: Example (Cont.)
x0=4;
homotop=@(x,s) atan(x)+(s-1)*atan(4);
homotopderiv=@(x) 1/(1+x^2);
s=linspace(0,1,11);
xast=zeros(length(s),1);
xast(1)=x0;
for i=2:11
    [xast(i),iter,ier]=newtonh(homotop,homotopderiv,...
        xast(i-1),15,1.e-8,s(i));
    if (ier==1)
        disp('divergence')
        break
    end
end
Note, we had to provide newtonh with an extra parameter s.
3.15: Homotopy Method: Example (Cont.)
The resulting homotopy path:
[Figure: homotopy path x(s) for s ∈ [0, 1], from x(0) = 4 down to x(1) = 0]
Ö 3.1: Exercise Notes (Homework 2)
For the task to find the equilibrium of the truck we write the following Newton solver in
MATLAB
function [x,conv]=newtonRaph(fun,x0,tol,jakob)
% initialization
x(:,1)=x0; xk=x0; maxit=100; conv=0; n=length(x0);
E=eye(n); h=1.e-10;
for i=1:maxit  % newton-loop
    fx=feval(fun,xk);
    if nargin==3  % if called with three arguments ...
        for j=1:n  % numerical Jacobian
            Jac(:,j)=(feval(fun,xk+h*E(:,j))-fx)/h;
        end
    else
        Jac=feval(jakob,xk);  % analytic Jacobian
    end
    deltax=-Jac\fx; xk=xk+deltax; x(:,i)=xk;
    if norm(deltax)<tol  % convergence test
        conv=1;
        break
    end
end
Ö 3.2: Exercise Notes (Cont.)
We also write a wrapper for the truck file. This eliminates the second parameter without the need of changing truck_acc.m:
function a=truck_wrap(p)
a=truck_acc(p,zeros(9,1));
and execute in the MATLAB command window:
>> p0=[0.52, 2.16, 0.0, 0.52, 2.68, 0.0, -1.5, 2.7, 0.0]';
>> [x,conv]=newtonRaph(@truck_wrap,p0,1.e-12);
The solution is then:
Columns 1 through 8
  0.5000  2.0000  0.0000  0.5000  2.9000  0.0000  -1.4600  2.9000
Column 9
  0.0000
Ö 3.3: Exercise Notes (Cont.)
Now let's test the rate of convergence:
>> for i=1:m, nx(i)=norm(x(:,i)); end
>> diff(nx(2:end))./diff(nx(1:end-1))
ans =
  0.3309  0.1073  0.0430  0.0006  0.0000  0
This indicates superlinear convergence.
Ö 3.4: Exercise Notes (Cont.)
Computing Newton-fractals:
Here we plot two pictures, one which depicts the fractals according to Task 2 and another where we plot the number of iterations for each start value.
The main m-file looks like this:
% suppress warnings concerning nearly singular matrices
warning off all
[X,Y]=meshgrid(-1:.005:1,-2:.01:2);
for i=1:size(Y,1)
    for j=1:size(X,2)
        [A(i,j),B(i,j)]=newtonfrac([X(i,j);Y(i,j)]);
    end
end
% Which-Root fractal (see task 2)
figure(1); pcolor(A); shading interp
% How-Many-Iterations fractal
figure(2); pcolor(B); shading interp
Ö 3.5: Exercise Notes (Cont.)
For this task the Newton method is implemented as follows:
function [root,it]=newtonfrac(x0)
x=x0; r1=[1;0]; r2=1/2*[-1;sqrt(3)]; r3=1/2*[-1;-sqrt(3)];
for i=1:40
    it=i;
    fx=[x(1)^3-3*x(1)*x(2)^2-1; 3*x(1)^2*x(2)-x(2)^3];
    Jac=[3*x(1)^2-3*x(2)^2, -6*x(1)*x(2);...
         6*x(1)*x(2), 3*x(1)^2-3*x(2)^2];
    dx=-Jac\fx; x=x+dx;
    if abs(dx)<1.e-7
        break
    end
end
if norm(x-r1) < 1.e-5
    root=1;
elseif norm(x-r2) < 1.e-5
    root=2;
elseif norm(x-r3) < 1.e-5
    root=3;
else % no root
    root=-1;
end
Ö 3.6: Exercise Notes (Cont.)
And here are the results:
Unit 4: Polynomial Interpolation
(see course book p. 180)
We denote (as above) by P_n the linear space (vector space) of all polynomials of (max-)degree n.
Definition. [–] Let (x_i, y_i), i = 0 : n, be n + 1 pairs of real numbers (typically measurement data).
A polynomial p ∈ P_n interpolates these data points if
p(x_k) = y_k, k = 0 : n
holds.
We assume in the sequel that the x_i are distinct.
4.1: Polynomial Interpolation
[Figure: an interpolation polynomial of 6th degree through seven measurements]
How do we determine such a polynomial?
4.2: Vandermonde Approach
Ansatz: p(x) = a_n x^n + a_{n−1} x^{n−1} + ··· + a_1 x + a_0
Interpolation conditions:
p(x_i) = a_n x_i^n + a_{n−1} x_i^{n−1} + ··· + a_1 x_i + a_0 = y_i, 0 ≤ i ≤ n
In matrix form
[ x_0^n  x_0^{n−1}  ···  x_0  1 ]   [ a_n     ]   [ y_0 ]
[ x_1^n  x_1^{n−1}  ···  x_1  1 ]   [ a_{n−1} ]   [ y_1 ]
[  ⋮                            ] · [  ⋮      ] = [  ⋮  ]
[ x_n^n  x_n^{n−1}  ···  x_n  1 ]   [ a_0     ]   [ y_n ]
or V a = y.
V is called a Vandermonde matrix.
4.3 Vandermonde approach in MATLAB
polyfit sets up V and solves for a (the coefficients).
Alternatively, vander sets up V and a = V\y solves for a.
polyval evaluates the polynomial for given x values.
n + 1 points determine a polynomial of (max-)degree n.
Obs! n is input to polyfit.
4.4 Vandermonde approach in MATLAB
Essential steps to generate and plot an interpolation polynomial:
• Computing the coefficients (polyfit, vander etc)
• Generating x-values for plotting, e.g.
xval=[0:0.1:100] or xval=linspace(0,100,1000)
• Evaluating the polynomial, e.g. yval=polyval(coeff,xval)
• Plotting, e.g. plot(xval,yval)
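Putting these steps together, a small sketch with made-up data points:
x = 0:6;  y = [1 3 -1 2 4 0 2];   % hypothetical measurements
coeff = polyfit(x,y,6);           % degree 6 = number of points - 1
xval  = linspace(0,6,500);
yval  = polyval(coeff,xval);
plot(x,y,'o',xval,yval)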
4.5 Lagrange Polynomials
We now take another approach to compute the interpolation polynomial.
Definition. [–] The polynomials L_k^n ∈ P_n with the property
L_k^n(x_i) = { 0 if k ≠ i; 1 if k = i }
are called Lagrange polynomials.
4.6 Lagrange Polynomials (Cont.)
It is easy to check that
L_k^n(x) = ∏_{i=0, i≠k}^{n} (x − x_i)/(x_k − x_i)
The interpolation polynomial p can be written as
p(x) = Σ_{k=0}^{n} y_k L_k^n(x)
Check that it indeed fulfills the interpolation conditions!
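A direct, if numerically naive, MATLAB sketch of these two formulas (the function name is made up for illustration):
function y = lagrangeval(xnodes,ynodes,x)
% evaluates p(x) = sum_k y_k * L_k(x) at the points x
n = length(xnodes);
y = zeros(size(x));
for k = 1:n
    Lk = ones(size(x));
    for i = [1:k-1, k+1:n]        % product over all i ~= k
        Lk = Lk.*(x-xnodes(i))/(xnodes(k)-xnodes(i));
    end
    y = y + ynodes(k)*Lk;
end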
4.7 Lagrange Polynomials: Example
Lagrange polynomials of degree 3:
[Figure: the Lagrange polynomials of degree 3 on the nodes 0, 0.33, 0.66, 1]
4.8 The vector space P_n
We have two ways to express a polynomial:
Monomial representation: p(x) = Σ_{k=0}^{n} a_k x^k
Lagrange representation: p(x) = Σ_{k=0}^{n} y_k L_k^n(x)
They describe the same polynomial (as the interpolation polynomial is unique).
4.9 The vector space P_n (Cont.)
We introduced two bases in P_n:
Monomial basis {1, x, x^2, ..., x^n}, coordinates a_k, k = 0 : n
Lagrange basis {L_0^n(x), L_1^n(x), ..., L_n^n(x)}, coordinates y_k, k = 0 : n
It is easy to show that these really are bases (linearly independent elements).
4.10 Inner Product Space
Definition. [9.1] Let V be a linear space and (·,·) : V × V → R a map with the properties
• (v, v) ≥ 0 and (v, v) = 0 ⇔ v = 0,
• (αv, w) = α(v, w) for α ∈ R,
• (v, w) = (w, v),
• (v + w, u) = (v, u) + (w, u);
then V is called an inner product space and (·,·) an inner product.
4.11 Inner Product Space (Examples)
• R^n is an inner product space with the inner product
(v, u) = Σ_{i=1}^{n} v_i u_i = v^T u (the scalar product)
• P_n is an inner product space with the inner product
(p, q)_{x_i} = Σ_{i=0}^{n} p(x_i)q(x_i) (pointwise inner product)
• P_n is an inner product space with the inner product
(p, q)_2 = ∫_a^b p(x)q(x)dx (L^2 inner product)
4.12 Inner Product Space - Orthogonality
Definition. [9.2] Let V be an inner product space and let two elements p, q ∈ V have the property (p, q) = 0; then they are called orthogonal. One writes p ⊥ q.
Lagrange polynomials form an orthogonal basis with respect to the inner product (p, q) = Σ_{i=0}^{n} p(x_i)q(x_i).
4.13 Inner Products and Norms
Inner products induce norms, so-called inner-product norms:
‖v‖ := (v, v)^{1/2}
Examples:
• In P_n: ‖p‖_2 = (∫_a^b p(x)^2 dx)^{1/2}
• In P_n: ‖p‖_{x_i} = (Σ_{i=0}^{n} p(x_i)^2)^{1/2}
• In R^n: the scalar product and the (Euclidean) length of a vector
But ‖p‖_∞ := max_{x∈[a,b]} |p(x)| is not an inner product norm.
4.14 Interpolating functions
Let f : R → R be a function; we try to describe it by a polynomial.
Best approximation of a function:
min_{p∈P_n} ‖f − p‖
The result depends on the norm we choose.
4.15 Interpolating functions - Example
Example [6.1, p. 183]: f(x) = e^x on [−1, 1]. We select x_0 = −1, x_1 = 0, x_2 = 1 and compute the interpolation polynomial:
L_0(x) = (x − x_1)(x − x_2)/((x_0 − x_1)(x_0 − x_2)) = (1/2)x(x − 1)
L_1(x) = (x − x_0)(x − x_2)/((x_1 − x_0)(x_1 − x_2)) = 1 − x^2
L_2(x) = (x − x_0)(x − x_1)/((x_2 − x_0)(x_2 − x_1)) = (1/2)x(x + 1)
[Figure: exp(x) and the interpolation polynomial p(x) on [−1, 1]]
Thus p(x) = (1/2)x(x − 1)e^{−1} + (1 − x^2)e^0 + (1/2)x(x + 1)e^1.
4.16 Interpolating functions - Example (Cont.)
Error:
[Figure: the interpolation error exp(x) − p(x) on [−1, 1]]
4.17 Interpolating functions - Error
Theorem. [6.2] Let f : R → R have n + 1 continuous derivatives on [a, b] and let x_i ∈ [a, b], i = 0 : n. Then there exists a ξ = ξ(x) ∈ [a, b] with
f(x) − p(x) = (1/(n + 1)!) f^{(n+1)}(ξ)(x − x_0)···(x − x_n)
This gives the estimate
|f(x) − p(x)| ≤ (1/(n + 1)!) M_{n+1}(f) |(x − x_0)···(x − x_n)|
with M_{n+1}(f) = max_{x∈[a,b]} |f^{(n+1)}(x)|.
4.18 Interpolating functions - Error (Cont.)
Consequently
‖f − p‖_∞ ≤ (1/(n + 1)!) M_{n+1}(f) max_{x∈[a,b]} |(x − x_0)···(x − x_n)|
If there is the possibility to select the x_i freely, one can minimize the interpolation error for a given n.
See Chebyshev polynomials.
4.19 Chebyshev polynomials
Definition. [8.2] The functions
T_n(x) = cos(n arccos(x)), n = 0, 1, 2, ...
are called Chebyshev polynomials on [−1, 1].
They are indeed polynomials: set δ = arccos(x).
The identity
cos((n + 1)δ) + cos((n − 1)δ) = 2 cos δ cos nδ
gives T_{n+1}(x) = 2xT_n(x) − T_{n−1}(x). With T_0(x) = 1, T_1(x) = x it becomes obvious that T_n(x) ∈ P_n.
4.20 Chebyshev Polynomials (Cont.)
[Figure: the Chebyshev polynomials T_1, T_2, T_3, T_4 on [−1, 1]]
4.21 Chebyshev Polynomials-Properties
Lemma. [8.2]
• The T_n have integer coefficients.
• The leading coefficient is a_n = 2^{n−1}.
• T_{2n} is even, T_{2n+1} is odd.
• |T_n(x)| ≤ 1 for x ∈ [−1, 1], and |T_n(x_k)| = 1 for x_k := cos(kπ/n).
• T_n(1) = 1, T_n(−1) = (−1)^n.
• T_n(x̄_k) = 0 for x̄_k = cos((2k − 1)π/(2n)), k = 1, ..., n.
4.22 Minimality Property of Chebyshev Polynomials
Theorem. [-]
1. Let P ∈ P_n([−1, 1]) have leading coefficient a_n ≠ 0. Then there exists a ξ ∈ [−1, 1] with
|P(ξ)| ≥ |a_n|/2^{n−1}.
2. Let ω ∈ P_n([−1, 1]) have leading coefficient a_n = 1. Then the scaled Chebyshev polynomial T_n/2^{n−1} has the minimality property
‖T_n/2^{n−1}‖_∞ ≤ min_ω ‖ω‖_∞
Proof: see lecture
4.23 Chebyshev Polynomials and Optimal Interpolation
We apply this theorem to the result on the approximation error of polynomial interpolation and conclude for [a, b] = [−1, 1]:
The approximation error
f(x) − p(x) = (1/(n + 1)!) f^{(n+1)}(ξ)(x − x_0)···(x − x_n)
is minimal if (x − x_0)···(x − x_n) = T_{n+1}/2^n, i.e. if the x_i are the roots of the (n + 1)st Chebyshev polynomial, the so-called Chebyshev points.
In case [a, b] ≠ [−1, 1] we have to consider the map
[a, b] → [−1, 1], x → 2(x − a)/(b − a) − 1
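A small MATLAB sketch computing Chebyshev interpolation points on a general interval [a, b], using the inverse of this map (degree and interval are assumptions):
n = 6; a = 0; b = 2;                  % assumed degree and interval
k = 1:n+1;
xbar = cos((2*k-1)*pi/(2*(n+1)));     % roots of T_{n+1} on [-1,1]
x    = a + (b-a)*(xbar+1)/2;          % mapped back to [a,b]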
Unit 5: Cubic Splines
Let K = {x_0, ..., x_m} be a set of given knots with
a = x_0 < x_1 < ··· < x_m = b
Definition. [11.2] A function s ∈ C^2[a, b] is called a cubic spline on [a, b] if s is a cubic polynomial s_i in each interval [x_i, x_{i+1}].
It is called a cubic interpolating spline if s(x_i) = y_i for given values y_i.
5.1: Cubic Splines
Interpolating cubic splines need two additional conditions to be uniquely defined.
Definition. [11.3] A cubic interpolating spline s is called a natural spline if
s″(x_0) = s″(x_m) = 0
5.2: Cubic Splines - Construction
We construct an interpolating spline in a different but equivalent way than in the textbook:
Ansatz for the m piecewise polynomials:
s_i(x) = a_i(x − x_i)^3 + b_i(x − x_i)^2 + c_i(x − x_i) + d_i
By fixing the 4m free coefficients a_i, b_i, c_i, d_i, i = 0 : m−1, the entire spline is fixed.
5.3: Cubic Splines-Construction
We need 4m conditions to fix the coefficients:
(1) s_i(x_i) = y_i, for i = 0 : m−1
(2) s_{m−1}(x_m) = y_m, 1 condition
(3) s_i(x_{i+1}) = s_{i+1}(x_{i+1}), for i = 0 : m−2
(4) s′_i(x_{i+1}) = s′_{i+1}(x_{i+1}), for i = 0 : m−2
(5) s″_i(x_{i+1}) = s″_{i+1}(x_{i+1}), for i = 0 : m−2
These are 4m − 2 conditions. We need two extra.
5.4: Cubic Splines-Boundary Conditions
We can define two extra boundary conditions. One has several alternatives:
Natural spline: s″_0(x_0) = 0 and s″_{m−1}(x_m) = 0
End-slope spline: s′_0(x_0) = y′_0 and s′_{m−1}(x_m) = y′_m
Periodic spline: s′_0(x_0) = s′_{m−1}(x_m) and s″_0(x_0) = s″_{m−1}(x_m)
Not-a-knot spline: s‴_0(x_1) = s‴_1(x_1) and s‴_{m−2}(x_{m−1}) = s‴_{m−1}(x_{m−1})
We consider here natural splines.
MATLAB uses splines with a not-a-knot condition.
5.5: Natural Splines Construction
s_i(x) = a_i(x − x_i)^3 + b_i(x − x_i)^2 + c_i(x − x_i) + d_i
Let h = x_{i+1} − x_i (equidistant spacing). Then
s′_i(x_{i+1}) = 3a_i h^2 + 2b_i h + c_i
s″_i(x_{i+1}) = 6a_i h + 2b_i
From condition (1) we get d_i = y_i.
We introduce new variables for the second derivatives at x_i, i.e.
σ_i := s″(x_i) = s″_i(x_i) = 6a_i(x_i − x_i) + 2b_i = 2b_i, i = 0 : m−1
5.6: Natural Splines Construction (Cont.)
Thus b_i = σ_i/2.
From
σ_{i+1} = 6a_i h + 2b_i
and condition (5) we get
a_i = (σ_{i+1} − σ_i)/(6h)
By condition (3) we get
y_{i+1} = a_i h^3 + b_i h^2 + c_i h + y_i
5.7: Natural Splines Construction (Cont.)
... and after inserting the highlighted expressions for a_i and b_i we get
y_{i+1} = ((σ_{i+1} − σ_i)/(6h)) h^3 + (σ_i/2) h^2 + c_i h + y_i
From that we get c_i:
c_i = (y_{i+1} − y_i)/h − h(2σ_i + σ_{i+1})/6
Using now condition (4) gives a relation between c_i and c_{i+1}:
c_{i+1} = 3a_i h^2 + 2b_i h + c_i
5.8: Natural Splines Construction (Cont.)
Inserting now the expressions for a_i, b_i and c_i, using condition (2) and simplifying finally gives the central recursion formula:
σ_{i−1} + 4σ_i + σ_{i+1} = 6 (y_{i+1} − 2y_i + y_{i−1})/h^2
with i = 1, ..., m−1.
We consider now natural boundary conditions:
σ_0 = σ_m = 0
5.9: Natural Splines Construction (Cont.)
Finally we rewrite all this as a system of linear equations:
[ 4 1           ]   [ σ_1     ]           [ y_2 − 2y_1 + y_0         ]
[ 1 4 1         ]   [ σ_2     ]           [ y_3 − 2y_2 + y_1         ]
[   ⋱ ⋱ ⋱       ] · [  ⋮      ] = (6/h^2) [  ⋮                       ]
[       1 4 1   ]   [         ]           [                          ]
[         1 4   ]   [ σ_{m−1} ]           [ y_m − 2y_{m−1} + y_{m−2} ]
First this system is solved, and then the coefficients a_i, b_i, c_i, d_i are determined by the highlighted equations.
5.10: Splines in MATLAB
Example:
x = 0:10; y = sin(x);
xx = 0:.25:10;
yy = spline(x,y,xx);
plot(x,y,'o',xx,yy)
5.11: Splines in MATLAB (Cont.)
Example for just computing the σ_i:
A=diag(4*ones(m-1,1))+diag(ones(m-2,1),-1)+diag(ones(m-2,1),+1);
rhs=6/h^2*diff(diff(y))
sigma=A\rhs'
sigma=[0;sigma;0]
See also ppval, find, unmkpp, mkpp.
(The suffix or prefix pp stands for piecewise polynomial.)
Note, alternatively A can also be constructed by the command toeplitz(r).
5.12: Minimality Property of Cubic Splines
Let V ⊂ C^2 be the set of functions which interpolate the points (x_i, y_i), i = 0 : m.
Theorem. [-]
Let s* ∈ V be a cubic spline satisfying the natural boundary conditions. Then
‖s*″‖_2 ≤ ‖f″‖_2 ∀f ∈ V.
5.13: Minimality Property of Cubic Splines (Cont.)
Proof: Let f ∈ V; then there is a g ∈ C^2 with g(x_i) = 0 such that f(x) = s*(x) + g(x).
We then obtain ‖f″‖_2^2 = ‖s*″ + g″‖_2^2 = ‖s*″‖_2^2 + 2(s*″, g″) + ‖g″‖_2^2 with
(s*″, g″) := ∫_{x_0}^{x_m} s*″(x) g″(x) dx
We have to show that (s*″, g″) = 0.
Integration by parts gives
(s*″, g″) = [s*″(x) g′(x)]_{x_0}^{x_m} − ∫_{x_0}^{x_m} s*‴(x) g′(x) dx
From the natural boundary conditions it follows that [s*″(x) g′(x)]_{x_0}^{x_m} = 0.
As s* is a piecewise cubic polynomial, s*‴ is piecewise constant, and we get for the last term
∫_{x_0}^{x_m} s*‴(x) g′(x) dx = Σ_{i=1}^{m} α_i ∫_{x_{i−1}}^{x_i} g′(x) dx = Σ_{i=1}^{m} α_i (g(x_i) − g(x_{i−1})) = 0
with some constants α_i.
5.14: The Space of Cubic Splines
S is a linear space; its dimension depends on the number of knots
x_0 < x_1 < ··· < x_m
We construct a basis for S:
Definition. [11.3] The positive part of (x − a)^n, n > 0, is defined by
(x − a)^n_+ = { (x − a)^n, x ≥ a; 0, x < a }
Note: the functions (x − x_k)^3_+ are cubic splines.
5.15: The Space of Cubic Splines (Cont.)
The set B := {1, x, x^2, (x − x_0)^3_+, ..., (x − x_{m−1})^3_+} is a basis of S, and dim S = m + 3.
This basis is not convenient for computations.
5.16: Cubic B-Splines
We extend the grid to
ξ_1 = ... = ξ_4 < ξ_5 < ... < ξ_{m+4} = ... = ξ_{m+7}
with ξ_{4+i} = x_i for i = 0, ..., m
5.17: The Space of Cubic Splines (Cont.)
Definition.
The functions N_{ik}, i = 1 : 4 + m + 3 − k, defined recursively as follows, are called B-splines:
N_{i1}(x) := { 0 if ξ_i = ξ_{i+1}; 1 if x ∈ [ξ_i, ξ_{i+1}); 0 else }
and
N_{ik} := ((x − ξ_i)/(ξ_{i+k−1} − ξ_i)) N_{i,k−1} + ((ξ_{i+k} − x)/(ξ_{i+k} − ξ_{i+1})) N_{i+1,k−1}
where we use the convention 0/0 = 0 if nodes coincide.
5.18: B-Splines Cont.
5.19: B-Splines Basis
The B-splines N_{i4} are cubic splines.
Theorem. The B-splines N_{i4}, i = 1, ..., m + 3, form a basis of S, the space of cubic splines.
5.20: B-Splines Basis (Cont.)
[Figure: cubic B-spline basis on the grid 0,1,2,3,4,5 with repeated end knots 0,0,0 and 5,5,5]
5.21: B-Splines Basis Representation
Any spline s ∈ S has a unique basis representation
s = Σ_{i=1}^{m+3} d_i N_{i4}
and in particular
1 = Σ_{i=1}^{m+3} N_{i4}
The coefficients d_i are called de Boor points.
5.22: B-Splines Properties
1. N_{i4}(x) ≠ 0 only for x ∈ [ξ_i, ξ_{i+4}]: local support (sv: lokalt stöd)
2. N_{i4}(x) ≥ 0: non-negative
3. N_{i4} ∈ S if ξ_i ≠ ξ_{i+4}: B-splines are splines
Because of this the coefficients have a graphical meaning.
5.23: B-Splines: Control Polygon
[Figure: a cubic spline with its control polygon; the de Boor point d_2 is marked]
For more see Course FMN100: Numerical Methods in CAGD
Ö5: Exercise Notes (Homework 4)
Here we give two versions of a MATLAB m-file for computing B-splines.
First an m-file which is a straightforward MATLAB implementation of the
definition.
Then a recursive implementation.
Ö 5.1: B-Splines (Version 1)
function y=bspline(x,xnode,i,k)
% function y=bspline(x,xnode,i,k)
% x     evaluation point
% xnode grid vector
% i     number of B-spline
% k     degree (level) of B-spline
% C. Fuhrer, 2005
N=zeros(k,k);
level=1;
for j=0:k-level
    if (xnode(i+j)==xnode(i+j+1))
        N(j+1,level)=0;
    else
        if (xnode(i+j)<=x & x < xnode(i+j+1))
            N(j+1,level)=1;
        end
    end
end
for level=2:1:k
    for j=0:k-level
        denom=(xnode(i+j+level-1)-xnode(i+j));
        if (denom==0)
            fac1=0;
        else
            fac1=(x-xnode(i+j))/denom;
        end
        denom=(xnode(i+j+level)-xnode(i+j+1));
        if (denom==0)
            fac2=0;
        else
            fac2=(xnode(i+j+level)-x)/denom;
        end
        N(j+1,level)=fac1*N(j+1,level-1)+ ...
                     fac2*N(j+1+1,level-1);
    end
end
y=N(1,k);
Ö 5.2: B-Splines (Version 2)
... and here is the recursive version:
function [N] = bspline(x,i,k,grid)
if k==1
    if grid(i) == grid(i+1)
        N=0;
    else
        if x >= grid(i) & x < grid(i+1)
            N=1;
        else
            N=0;
        end
    end
else
    denom1=grid(i+k-1)-grid(i);
    if denom1==0
        fact1=0;
    else
        fact1=(x-grid(i))/denom1;
    end
    denom2=grid(i+k)-grid(i+1);
    if denom2==0
        fact2=0;
    else
        fact2=(grid(i+k)-x)/denom2;
    end
    N=fact1*bspline(x,i,k-1,grid)+...
      fact2*bspline(x,i+1,k-1,grid);  % note: k-1 in both recursive calls
end
Unit 6: L^2 Space
Space of all square integrable functions:
L^2(a, b) := { f | ∫_a^b f(x)^2 dx < ∞ }
Norm and inner product in L^2:
‖f‖_2 = (∫_a^b f(x)^2 dx)^{1/2}
(f, g)_2 = ∫_a^b f(x)g(x)dx
6.1: Best Approximation
The problem in this unit is:
Find a polynomial p_n ∈ P_n such that
‖f − p_n‖_2 = min_{q∈P_n} ‖f − q‖_2
p_n is the polynomial of best approximation in the 2-norm.
6.2: Best Approximation
Example [9.4]: Find the best p ∈ P_0 to f(x) = 1 − e^{−20x}.
[Figure: best L^2 approximation and best max-norm approximation of f]
Alternative problem:
Find min_{p∈P_0} ‖f − p‖_∞ with ‖f‖_∞ = max_{x∈[a,b]} |f(x)|.
6.3: Best Approximation (Cont.)
For simplicity [a, b] = (0, 1).
Ansatz: p_n(x) = c_0 + c_1 x + c_2 x^2 + ... + c_n x^n.
Reformulation: minimize
E(c_0, c_1, ..., c_n) = ∫_0^1 (f(x) − p_n(x))^2 dx
= ∫_0^1 f(x)^2 dx − 2 Σ_{j=0}^{n} c_j ∫_0^1 f(x) x^j dx + Σ_{j=0}^{n} Σ_{k=0}^{n} c_j c_k ∫_0^1 x^{k+j} dx
6.4: Best Approximation (Cont.)
Condition for a minimum:
∂E(c_0, c_1, ..., c_n)/∂c_i = 0, i = 0 : n
Working this out gives
[ (x^n, x^n)      (x^n, x^{n−1})      ···  (x^n, 1)     ]   [ c_n     ]   [ (f, x^n)     ]
[ (x^{n−1}, x^n)  (x^{n−1}, x^{n−1})  ···  (x^{n−1}, 1) ]   [ c_{n−1} ]   [ (f, x^{n−1}) ]
[  ⋮                                                    ] · [  ⋮      ] = [  ⋮           ]
[ (1, x^n)        (1, x^{n−1})        ···  (1, 1)       ]   [ c_0     ]   [ (f, 1)       ]
where the coefficient matrix is denoted M.
6.5: Hilbert Matrices
The matrix M is called a Hilbert matrix.
In MATLAB: M=hilb(n) (in reverse order!).
Hilbert matrices are extremely ill-conditioned. Therefore this way to compute the best polynomial p_n is very sensitive to perturbations.
[Figure: condition number vs. matrix dimension]
6.6: Orthogonal Polynomials
If p_n is represented as
p_n(x) = γ_0 φ_0(x) + γ_1 φ_1(x) + ... + γ_n φ_n(x)
where the φ_i form a basis of P_n and the γ_i are real coefficients, then the system becomes
[ (φ_0, φ_0)  (φ_0, φ_1)  ···  (φ_0, φ_n) ]   [ γ_0 ]   [ (f, φ_0) ]
[ (φ_1, φ_0)  (φ_1, φ_1)  ···  (φ_1, φ_n) ]   [ γ_1 ]   [ (f, φ_1) ]
[  ⋮                                      ] · [  ⋮  ] = [  ⋮       ]
[ (φ_n, φ_0)  (φ_n, φ_1)  ···  (φ_n, φ_n) ]   [ γ_n ]   [ (f, φ_n) ]
with this matrix again denoted M.
6.7: Orthogonal Polynomials (Cont.)
... and if the basis functions satisfy
(φ_i, φ_j) = 0 if i ≠ j, (φ_i, φ_j) ≠ 0 if i = j,
then the system simplifies to the diagonal system
diag((φ_0, φ_0), (φ_1, φ_1), ..., (φ_n, φ_n)) · (γ_0, γ_1, ..., γ_n)^T = ((f, φ_0), (f, φ_1), ..., (f, φ_n))^T
6.8: Orthogonal Polynomials (Cont.)
This motivates the following definition:
Definition. [9.4] The (infinite) sequence of polynomials φ_j, j = 0, 1, ..., is called a system of orthogonal polynomials if φ_j has exact degree j and
(φ_i, φ_j) = 0 if i ≠ j, (φ_i, φ_j) ≠ 0 if i = j
holds.
Note, "exact degree j" means φ_j(x) = a_j x^j + a_{j−1} x^{j−1} + ··· with a_j ≠ 0.
6.9: Gram–Schmidt Orthogonalisation
Does such an orthogonal system exist?
Answer: we just construct it.
Gram–Schmidt orthogonalisation:
Let φ_0 = 1.
Ansatz: φ_1 = x − α_0 φ_0.
Orthogonality: (φ_0, φ_1) = 0; this gives α_0 = (φ_0, x)/(φ_0, φ_0).
In general: φ_j = x^j − Σ_{i=0}^{j−1} α_i φ_i with α_i = (φ_i, x^j)/(φ_i, φ_i).
6.10: Orthogonal Polynomials - Examples
Example 1:
Let (f, g) = ∫_0^1 f(x)g(x)dx. Then (p. 261 f)
φ_0(x) = 1
φ_1(x) = x − 1/2
φ_2(x) = x^2 − x + 1/6
φ_3(x) = x^3 − (3/2)x^2 + (3/5)x − 1/20
...
6.11: Orthogonal Polynomials - Examples
Example 2:
Let (f, g) = ∫_{−1}^{1} f(x)g(x)dx. Then (p. 263 f)
φ_0(x) = 1
φ_1(x) = x
φ_2(x) = x^2 − 1/3
φ_3(x) = x^3 − (3/5)x
...
These polynomials are called Legendre polynomials.
6.11: Orthogonal Polynomials - Examples
Example 3:
Let
(f, g) = ∫_{−1}^{1} (1/√(1 − x^2)) f(x)g(x)dx
Then (p. 263 f) we get (again) the Chebyshev polynomials.
6.12: Orthogonal Polynomials - Properties
Orthogonal polynomials and three-term recursions are related to each other by the following theorem:
Theorem.
Orthogonal polynomials with leading coefficient one satisfy the three-term recursion
φ_{k+1}(x) = (x − β_{k+1}) φ_k(x) − γ_{k+1}^2 φ_{k−1}(x)
with φ_{−1}(x) := 0, φ_0(x) := 1 and
β_{k+1} := (xφ_k, φ_k)_w / (φ_k, φ_k)_w, γ_{k+1}^2 := (φ_k, φ_k)_w / (φ_{k−1}, φ_{k−1})_w
(The proof is based on Gram-Schmidt orthogonalization again.)
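A small numerical sketch of this recursion for the Legendre case (weight w = 1 on [−1, 1], cf. Example 2 above); polynomials are kept as MATLAB coefficient vectors and ip approximates the inner product with quad:
ip = @(p,q) quad(@(x) polyval(p,x).*polyval(q,x),-1,1,1.e-10);
phiold = 0;                       % phi_{-1} = 0
phi    = 1;                       % phi_0  = 1
for k = 0:3
    beta = ip(conv([1 0],phi),phi)/ip(phi,phi);
    if k==0, gamma2 = 0; else gamma2 = ip(phi,phi)/ip(phiold,phiold); end
    pnew = conv([1 -beta],phi);   % (x - beta)*phi_k
    pnew(end-length(phiold)+1:end) = ...
        pnew(end-length(phiold)+1:end) - gamma2*phiold;
    phiold = phi; phi = pnew      % displays phi_1, ..., phi_4
end
The output reproduces x, x^2 − 1/3, x^3 − (3/5)x, ... from Example 2.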
6.13: Orthogonal Polynomials - Properties
We note an important property of orthogonal polynomials:
Theorem. [9.4] Let φ_j, j = 0, 1, ..., be a system of orthogonal polynomials on [a, b].
Then all roots of φ_j are distinct, real and lie in (a, b).
We will use this result later in the unit about integration.
6.14 Finite Dimensional case
We consider now the finite dimensional case and ask for the best approximation in R^m:
Let V ⊂ R^m be an n-dimensional subspace.
Consider the problem:
Let b ∈ R^m. Find a vector v* ∈ V such that
‖v* − b‖_2 = min_{v∈V} ‖v − b‖_2
where now ‖·‖_2 denotes the usual norm in R^m.
6.15 Finite Dimensional Case (Cont.)
We apply exactly the same approach as in the infinite dimensional case:
Let v_1, v_2, ..., v_n be a basis of V and v = Σ_{i=1}^{n} α_i v_i. Then the α_i are given by
[ (v_1, v_1)  (v_1, v_2)  ···  (v_1, v_n) ]   [ α_1 ]   [ (b, v_1) ]
[ (v_2, v_1)  (v_2, v_2)  ···  (v_2, v_n) ]   [ α_2 ]   [ (b, v_2) ]
[  ⋮                                      ] · [  ⋮  ] = [  ⋮       ]
[ (v_n, v_1)  (v_n, v_2)  ···  (v_n, v_n) ]   [ α_n ]   [ (b, v_n) ]
where (·,·) is the scalar product in R^m and the matrix is again denoted M.
(Compare M with the Hilbert matrix approach above.)
6.16 Finite Dimensional case (Cont.)
If we form a matrix A which has the basis vectors v_i as columns, then M = A^T A and we obtain the so-called normal equations
A^T A x = A^T b
with x = (α_1, ..., α_n)^T.
The best approximating vector v* = Σ_{i=1}^{n} α_i v_i can then be written as v* = Ax, or
v* = A(A^T A)^{−1} A^T b =: P b
Note that P is a projection matrix describing the orthogonal projection of R^m onto V.
6.17 Finite Dimensional case: Least Squares Method
An application is the solution of an overdetermined linear system
Ax = b
where A is an m×n matrix (n < m) and b an m×1 vector.
This system has a solution only if b ∈ R(A), where R(A) denotes the range space of A (bildrum, kolonnrum (sv.)).
6.18 Least Squares Method
We consider the problem
min_{v∈R(A)} ‖v − b‖
and as all v ∈ R(A) can be written as v = Ax, this problem becomes
min_{x∈R^n} ‖Ax − b‖
x is the least squares solution of the overdetermined linear equation system.
The normal equations (see above) then give the solution:
x = (A^T A)^{−1} A^T b
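In MATLAB the backslash operator solves this least squares problem directly; a sketch with made-up data, fitting a straight line:
t = (0:9)'; b = 2*t + 1 + 0.1*randn(10,1);   % hypothetical noisy measurements
A = [t ones(10,1)];                          % model b ~ x(1)*t + x(2)
x = A\b                                      % least squares solution
xne = (A'*A)\(A'*b)                          % same via the normal equations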
6.19 Least Squares Method
This equation is rarely evaluated in the form it is written.
Effective numerical methods will be developed in the course "Numerical linear algebra".
Applications of the least squares method occur in parameter estimation, statistics and nearly everywhere where you have to handle a large amount of measurements.
Ö6: Exercise Notes (Homework 5)
Topic: Best Approximations
We give a symbolic code for orthogonal polynomials and demonstrate the best approximation by monomials, Legendre and Chebyshev polynomials.
Ö6.1: Gram-Schmidt / Symbolic Toolbox
function poly=ortpoly(x,n,omega,a,b)
% computes orthogonal polynomials by applying
% Gram-Schmidt orthogonalization
% symbolic variables x, omega
for i=0:n
    sum=0;
    for j=0:i-1,
        alpha=int(omega*poly(j+1)*x^i,a,b)/...
              int(omega*poly(j+1)*poly(j+1),a,b);
        sum=sum+alpha*poly(j+1);
    end
    poly(i+1)=x^i-sum;
end
Ö6.2: Result
>> syms x
>> omega=1/sqrt(1-x^2);
>> cheby=ortpoly(x,5,omega,-1,1)
cheby =
[ 1, x, x^2-1/2, x^3-3/4*x, x^4+1/8-x^2, x^5+5/16*x-5/4*x^3]
Ö6.3: Best approximation, monomial basis
Computing the coefficients
function [coeff]=monombest(n)
% computes the best approximation of fun by a polynomial
% of max degree n (Hilbert matrix method)
for i=0:n
    for j=0:n
        M(i+1,j+1)=quad(@(x) x.^(n-i).*x.^(n-j),-1,1,1.e-9);
    end
    b(i+1)=quad(@(x) atan(x).*x.^(n-i),-1,1,1.e-9);
end
coeff=M\b';
Ö6.4: Best approximation, monomial basis (2)
and plotting:
xplot=linspace(-1,1,500);
yplot=polyval(coeff,xplot);   % coeff from monombest, highest power first
subplot(2,1,1), plot(xplot,yplot)
subplot(2,1,2), plot(xplot,atan(xplot)-yplot)
Ö6.5: Best approximation, Legendre basis (1)
function y=legendre(x,n)
switch n
    case 0
        y=1*ones(size(x));
    case 1
        y=x;
    case 2
        y=1/2*(3*x.^2-1);
    case 3
        y=1/2*(5*x.^3-3*x);
    case 4
        y=1/8*(35*x.^4-30*x.^2+3);
    case 5
        y=1/8*(63*x.^5-70*x.^3+15*x);
end
Ö6.6: Best approximation, Legendre basis (2)
function coeff=bestLegendre
for i=0:5
    m(i+1)=quad(@(x) legendre(x,5-i).^2,-1,1,1.e-9);
    b(i+1)=quad(@(x) legendre(x,5-i).*atan(x),-1,1,1.e-9);
    coeff(i+1)=b(i+1)/m(i+1);
end
Ö6.7: Best approximation, Legendre basis (3)
xplot=linspace(-1,1,500);
sum=zeros(size(xplot));
for i=0:5
    sum=sum+coeff(i+1)*legendre(xplot,5-i);
end
plot(xplot,sum)
Ö6.8: Legendre Best Approximation (with symbolic toolbox)
omega=x^0;
legendre=ortpoly(x,5,omega,-1,1);
bestpoly=0;
for ii=0:5
i=ii+1;
coeff(i)=int(omega*atan(x)*legendre(i),-1,1)/...
int(omega*legendre(i)*legendre(i),-1,1);
bestpoly=bestpoly+coeff(i)*legendre(i);
end;
cpoly=sym2poly(bestpoly);
xplot=linspace(-1,1,200);
yplot=polyval(cpoly,xplot);
subplot(1,2,1), plot(xplot,yplot,'b')
subplot(1,2,2), plot(xplot,atan(xplot)-yplot)
Ö6.9: Legendre Best Approximation
[Figure: best Legendre approximation of atan(x) (left) and its error, of order 10^{−3} (right)]
Ö6.10: Chebyshev Best Approximation
% Best Chebyshev fit
omega=1/sqrt(1-x^2); eps=1.e-8;
cheby=ortpoly(x,5,omega,-1,1);
bestpoly=0;
for ii=0:5
    i=ii+1; c=sym2poly(cheby(i));
    denom=quad(@atancheby,-1+eps,1-eps,[],[],c)
    coeff(i)=denom/int(omega*cheby(i)*cheby(i),-1,1);
    bestpoly=bestpoly+coeff(i)*cheby(i);
end;
cpoly=sym2poly(bestpoly);
xplot=linspace(-1,1,200);
yplot=polyval(cpoly,xplot);
subplot(1,2,1), plot(xplot,yplot,'b')
subplot(1,2,2), plot(xplot,atan(xplot)-yplot)
Ö6.11: Chebyshev Best Approximation
[Figure: best Chebyshev approximation of atan(x) (left) and its error, of order 10^{−3} (right)]
Unit 7: Numerical Integration, Quadrature
In the last unit we saw the need for computing integrals:
Problem
Compute in an efficient way an accurate approximation to
∫_a^b f(x)dx
We will first discuss methods which operate on [a, b] and then so-called composite methods
which act on small subintervals.
7.1: Newton–Cotes Formulae
Let's denote the exact integral by
I_a^b(f) := ∫_a^b f(x)dx
and an approximation by Î_a^b(f).
Consider equidistant nodes a = x_0 < x_1 < ... < x_n = b.
Then we define approximations by (see Eq. (7.2))
Î_a^b(f) = I_a^b(p_n)
where p_n is the interpolation polynomial of f on the grid.
7.2: Newton–Cotes Formulae (Cont.)
This concept leads to the so-called Newton–Cotes formulae:
Î_a^b(f) = Σ_{k=0}^{n} w_k f(x_k)
where the number of interpolation points uniquely defines the weights w_k and so the method.
We know from the interpolation chapter that p_n can be expressed by Lagrange polynomials:
p_n(x) = Σ_{k=0}^{n} f(x_k) L_k(x)
7.3: Newton–Cotes Formulae (Cont.)
Thus
w_k = ∫_a^b L_k(x)dx
Examples:
n = 1: a = x_0 < x_1 = b
gives
p_1(x) = ((x − b)/(a − b)) f(a) + ((x − a)/(b − a)) f(b) = (1/(b − a)) [(b − x)f(a) + (x − a)f(b)]
7.4: Trapezoidal Rule
Integrating, ∫_a^b p_1(x)dx, gives the
Trapezoidal Rule
∫_a^b f(x)dx ≈ (b − a) ((1/2)f(a) + (1/2)f(b))
7.5: Simpson’s Rule
n = 2: a = x_0 < x_1 = (b + a)/2 < x_2 = b
gives in an analogous way (p. 203):
Simpson's rule
∫_a^b f(x)dx ≈ (b − a) ((1/6)f(a) + (4/6)f((b + a)/2) + (1/6)f(b))
7.6: Basic Requirements
• One requires that constants are integrated exactly, i.e.
I_a^b(1) = Î_a^b(1)
This has as a consequence that Σ_{k=0}^{n} w_k = b − a, why?
• Further, as integration is monotone,
f > g ⇒ ∫_a^b f(x)dx > ∫_a^b g(x)dx
we require also that the method is monotone:
f > g ⇒ Î_a^b(f) > Î_a^b(g)
7.7: Basic Requirements (Cont.)
Monotonicity requires positive weights w_k, why?
Newton–Cotes formulae are monotone up to n = 8.
7.8: Error estimates
The error of an integration formula is defined as
E_n(f) = ∫_a^b f(x)dx − Σ_{k=0}^{n} w_k f(x_k)
From the formula of the interpolation error we directly get
Theorem. [7.1] Let f ∈ C^{n+1}([a, b]). Then
|E_n(f)| ≤ (M_{n+1}/(n + 1)!) ∫_a^b |(x − x_0) ··· (x − x_n)| dx
with M_{n+1} = max_{x∈[a,b]} |f^{(n+1)}(x)|.
7.9: Error estimates (Cont.)
This estimate gives the following estimates (p. 205):
Trapezoidal rule: |E_1(f)| ≤ ((b − a)^3/12) M_2
Simpson's rule: |E_2(f)| ≤ ((b − a)^4/196) M_3
The last estimate can be improved (Th. 7.2):
|E_2(f)| ≤ ((b − a)^5/2880) M_4
C. F¨ uhrer: FMN081/FMN041/NUMA12-2008
174
7.10: Composite Formulas
For deriving composite formulas we modify our notation (which deviates a bit from the
book) and introduce some terms which we will meet again in the Runge–Kutta chapter:

We consider the integral
    I_0^1(f) = \int_0^1 f(x)\,dx
and write the Newton–Cotes formulas
    \hat{I}_0^1(f) = \sum_{i=1}^s b_i f(c_i).

We call s the stage number, the b_i the weights and the c_i the nodes of the method.
C. F¨ uhrer: FMN081/FMN041/NUMA12-2008
175
7.11: Composite Formulas
To decrease the error, we partition the interval [a, b] into m subintervals
a = x_0 < · · · < x_m = b and split the integral into a sum of integrals:
    \int_a^b f(x)\,dx = \sum_{i=0}^{m-1} \int_{x_i}^{x_{i+1}} f(x)\,dx
                      = h \sum_{i=0}^{m-1} \int_0^1 f(x_i + h\xi)\,d\xi
(for equidistant subintervals of length h). Each subintegral is then approximated by a
Newton–Cotes formula.
C. F¨ uhrer: FMN081/FMN041/NUMA12-2008
176
7.12: Composite Formulas
This then gives the following approach:
    \int_a^b f(x)\,dx \approx h \sum_{i=0}^{m-1} \sum_{j=1}^s b_j f(x_i + c_j h)
C. F¨ uhrer: FMN081/FMN041/NUMA12-2008
177
7.13: Example: Composite Trapezoidal Rule
[Figure: trapezoidal rule on one subinterval [x_i, x_{i+1}]]

This gives for the composite trapezoidal rule the following formula in the case of m
equidistant intervals:
    \int_a^b f(x)\,dx \approx h \left( \frac{1}{2} f(x_0) + f(x_1) + \cdots + f(x_{m-1}) + \frac{1}{2} f(x_m) \right)
with h = (b-a)/m.
C. F¨ uhrer: FMN081/FMN041/NUMA12-2008
178
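For illustration, a minimal MATLAB sketch of this formula (not part of the original slides;
the function name trapcomp is ours):

function I=trapcomp(f,a,b,m)
% Composite trapezoidal rule with m equidistant subintervals;
% f must accept vector arguments
x=linspace(a,b,m+1);          % nodes x_0,...,x_m
h=(b-a)/m;
fx=f(x);                      % f evaluated at all nodes
I=h*(0.5*fx(1)+sum(fx(2:end-1))+0.5*fx(end));

% Example call:
% >> trapcomp(@sin,0,pi,100)   % approx 2, error O(h^2)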
7.14: Example: Composite Simpson Rule
In the same way the composite Simpson rule becomes
    \int_a^b f(x)\,dx \approx \frac{h}{6} \Big( f(x_0) + 4 f(x_0 + \tfrac{h}{2}) + 2 f(x_1) + 4 f(x_1 + \tfrac{h}{2}) + 2 f(x_2) + \cdots
                              \cdots + 2 f(x_{m-1}) + 4 f(x_{m-1} + \tfrac{h}{2}) + f(x_m) \Big)

Note, the book expresses the composite Simpson rule in a slightly different way; there, h
is twice as large as here.
C. F¨ uhrer: FMN081/FMN041/NUMA12-2008
179
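The composite Simpson rule with midpoints can be coded the same way (a sketch matching
the formula above; the name simpcomp is ours):

function I=simpcomp(f,a,b,m)
% Composite Simpson rule, h as in the lecture notes: m subintervals,
% with an extra evaluation at each midpoint x_i+h/2
x=linspace(a,b,m+1); h=(b-a)/m;
xm=x(1:end-1)+h/2;            % midpoints
fx=f(x); fm=f(xm);
I=h/6*(fx(1)+fx(end)+2*sum(fx(2:end-1))+4*sum(fm));

% Example call:
% >> simpcomp(@exp,0,1,10)     % approx e-1, error O(h^4)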
7.15: Error of Composite Rules
The global error of composite rules is just the sum of the (local) errors in each subinterval:

Composite trapezoidal rule:
    |E_1(f)| ≤ \sum_{i=0}^{m-1} \frac{h^3}{12} M_2 = m \frac{h^3}{12} M_2 = (b-a) \frac{h^2}{12} M_2

Composite Simpson's rule:
    |E_2(f)| ≤ ... ≤ (b-a) \frac{h^4}{2880} M_4

where we used the fact mh = b-a.
C. F¨ uhrer: FMN081/FMN041/NUMA12-2008
180
7.16: Order of a method
We observe from the error formulas:
• When h → 0 the error decreases as fast as h^q
  (q = 2 for the trapezoidal rule, q = 4 for Simpson's rule).
• The method is exact for polynomials up to degree q-1
  (M_q is then 0).
Definition.
q is called the order of the method.
For higher order methods larger step sizes h give the same accuracy as small step sizes for
lower order methods.
C. F¨ uhrer: FMN081/FMN041/NUMA12-2008
181
7.17 Order plots
In a loglog plot the order of a method can be visualized by comparing the approximate
solution to the exact solution for different step sizes (see also Homework 5):
C. F¨ uhrer: FMN081/FMN041/NUMA12-2008
182
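Such an order plot can be produced, for example, as follows (a sketch using the trapcomp
helper from above; the exact value 2 of \int_0^π sin x dx serves as reference):

f=@sin; a=0; b=pi; Iexact=2;
ms=2.^(2:10);                         % numbers of subintervals
for j=1:length(ms)
    h(j)=(b-a)/ms(j);
    err(j)=abs(trapcomp(f,a,b,ms(j))-Iexact);
end
loglog(h,err,'o-'), xlabel('h'), ylabel('error')
% the slope of the line is the order q (here approximately 2)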
7.18 Construction of a method with optimal order
The cost of the method is related to the number of f evaluations and this is related to
the number of stages s.
The accuracy of the method is determined by its order.
What is the highest order for a method with s stages?
This will be investigated by the next theorems.
C. F¨ uhrer: FMN081/FMN041/NUMA12-2008
183
7.19 Order Criterion
Theorem. [-] Define a method \hat{I}_0^1 by (c_i, b_i)_{i=1}^s with order k ≥ s, and set
    M(x) := (x-c_1)(x-c_2) \cdots (x-c_s) ∈ P_s[0, 1].
The order of \hat{I}_0^1 is at least s+m iff
    \int_0^1 M(x) p(x)\,dx = 0   ∀ p ∈ P_{m-1}[0, 1],
i.e. M ⊥ P_{m-1}[0, 1] in L^2.

(Proof: see lecture)
C. F¨ uhrer: FMN081/FMN041/NUMA12-2008
184
7.20 Example: 3 stage method of order 4
Consider m = 1, s = 3:
    0 = \int_0^1 (t-c_1)(t-c_2)(t-c_3) \cdot 1\,dt
      = \frac{1}{4} - \frac{1}{3}(c_1+c_2+c_3) + \frac{1}{2}(c_1 c_2 + c_1 c_3 + c_2 c_3) - c_1 c_2 c_3

Solving for c_3:
    c_3 = \frac{ 1/4 - (c_1+c_2)/3 + c_1 c_2/2 }{ 1/3 - (c_1+c_2)/2 + c_1 c_2 }

Thus, there are two degrees of freedom in designing such a method.
C. F¨ uhrer: FMN081/FMN041/NUMA12-2008
185
7.21 Order cannot exceed 2s
Theorem.
A method with s stages has maximal order 2s.
(Proof: Given in the lecture. Consequence of the preceding theorem.)
C. F¨ uhrer: FMN081/FMN041/NUMA12-2008
186
7.22 Gauß–Legendre Methods (Definition)
Theorem.
There is a method of order 2s. It is uniquely defined by taking the c_j as the roots of the
s-th shifted Legendre polynomial P_s(2t-1), t ∈ [0, 1].
(Proof: Given in the lecture. Consequence of the orthogonality of the Legendre polynomials.)
C. F¨ uhrer: FMN081/FMN041/NUMA12-2008
187
7.23 Gauß–Legendre Methods (Examples)
• s = 1 gives the midpoint rule
      \hat{I}_0^1(f) = f(1/2)
• s = 2: Exercise.
• s = 3 gives a 6th-order method
      \hat{I}_0^1(f) = \frac{5}{18} f\!\left(\frac{1}{2} - \frac{\sqrt{15}}{10}\right) + \frac{8}{18} f\!\left(\frac{1}{2}\right) + \frac{5}{18} f\!\left(\frac{1}{2} + \frac{\sqrt{15}}{10}\right).
These methods are called Gauß-Legendre quadrature rules.
C. F¨ uhrer: FMN081/FMN041/NUMA12-2008
188
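For illustration, a sketch of the s = 3 rule transformed to a general interval [a, b] by the
map x = a + (b-a)t (the function name gauss3 is ours):

function I=gauss3(f,a,b)
% 3-stage Gauss-Legendre quadrature on [a,b], order 6
c=[1/2-sqrt(15)/10, 1/2, 1/2+sqrt(15)/10];   % nodes on [0,1]
w=[5/18, 8/18, 5/18];                        % weights
I=(b-a)*sum(w.*f(a+(b-a)*c));

% Example call:
% >> gauss3(@(x)exp(x),0,1)    % approx e-1, with 6th-order accuracy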
Unit 8 : Finite Element Method in 1D
We consider the following boundary value problem
    -\frac{d}{dx}\!\left( p(x) \frac{du}{dx}(x) \right) + r(x) u(x) = f(x)    (BVP)
with the boundary conditions
    u(a) = r_1,   u(b) = r_2.    (BC)

This problem is called a Sturm–Liouville problem. The boundary conditions are called
Dirichlet boundary conditions or sometimes essential boundary conditions; in mechanics:
kinematic boundary conditions.
C. F¨ uhrer: FMN081/FMN041/NUMA12-2008
189
8.1 : Sturm-Liouville problems
We assume p(x) ≥ c_0 > 0 (ellipticity) and r(x) ≥ 0.

The problem (BVP), (BC) will now be reformulated as a variational equation (VE) and a
variational problem (VP). We will numerically solve the variational equation (VE) instead.
C. F¨ uhrer: FMN081/FMN041/NUMA12-2008
190
8.2 : Sobolev spaces
We will need the following function spaces:
• H^1(a, b): the set of all functions v with the property
      \int_a^b v(x)^2 + v'(x)^2\,dx < ∞,
• H^1_E(a, b) ⊂ H^1(a, b): the set of all functions v ∈ H^1(a, b)
  with the property v(a) = r_1, v(b) = r_2,
• H^1_0(a, b) ⊂ H^1(a, b): the set of all functions v ∈ H^1(a, b)
  with the property v(a) = 0, v(b) = 0.

Note, H^1(a, b) and H^1_0(a, b) are linear spaces. H^1_E(a, b) is an affine linear space,
i.e. v, u ∈ H^1_E(a, b) ⇒ v - u ∈ H^1_0(a, b).
C. F¨ uhrer: FMN081/FMN041/NUMA12-2008
191
8.3 : Sobolev norm
We use in Sobolev spaces the
• inner product (u, v)_{H^1} = \int_a^b u(x)v(x) + u'(x)v'(x)\,dx and the
• corresponding norm ‖u‖_{H^1} = (u, u)_{H^1}^{1/2}.
C. F¨ uhrer: FMN081/FMN041/NUMA12-2008
192
8.4 : Variational Problem, Variational Equation, BVP
The boundary value problem can be stated as a variational problem or as a variational
equation. The relationship between these will be stated in the following theorems.
Consider the functional J : H^1_E(a, b) → R,
    J(w) = \frac{1}{2} \int_a^b p(x) w'(x)^2 + r(x) w(x)^2\,dx - \int_a^b f(x) w(x)\,dx,
and the corresponding variational problem (VP):

Find u ∈ H^1_E(a, b) such that
    J(u) = \min_{w ∈ H^1_E(a,b)} J(w)    (VP)
C. F¨ uhrer: FMN081/FMN041/NUMA12-2008
193
8.5 : Bilinear Form and L^2 Inner Product
We introduce the symmetric bilinear form
    A(u, w) = \int_a^b p(x) u'(x) w'(x) + r(x) u(x) w(x)\,dx.
This and the L^2 inner product allow a compact notation for the variational functional J:
    J(w) = \frac{1}{2} A(w, w) - (f, w)_2
C. F¨ uhrer: FMN081/FMN041/NUMA12-2008
194
8.6 : Characterization of a Solution, Variational Equation
The following theorem gives a necessary and sufficient condition for a solution of (VP):

Theorem. [14.1]
u ∈ H^1_E(a, b) minimizes J over H^1_E(a, b) if and only if
    A(u, v) = (f, v)_2   ∀ v ∈ H^1_0(a, b)    (VE)
(variational equation; proof given in lecture).
C. F¨ uhrer: FMN081/FMN041/NUMA12-2008
195
8.7 : Weak Solutions of BVP
By multiplying (BVP) by an arbitrary v ∈ H^1_0(a, b), taking integrals and applying
integration by parts one sees that:

Theorem. [14.2]
If u ∈ H^1_E(a, b) ∩ C^2(a, b) solves (BVP), then it is also a solution of the variational
equation (VE).

As (VE) might have solutions with less smoothness (i.e. not in C^2), one defines:

Definition. [14.3]
A solution u ∈ H^1_E(a, b) of the variational equation (VE) is called a weak solution of the
BVP.
C. F¨ uhrer: FMN081/FMN041/NUMA12-2008
196
8.8 : Uniqueness
As a consequence of the assumptions p(x) ≥ c_0 > 0 and r(x) ≥ 0 we obtain uniqueness:

Theorem. [14.3] The boundary value problem (BVP) has at most one weak solution in
H^1_E(a, b).

Summary:
    (VP) ⟺ (VE) ⟸ (⟹) (BVP)
Here, the relation in parentheses is only valid if the solution is in C^2.
C. F¨ uhrer: FMN081/FMN041/NUMA12-2008
197
8.9: Discretization - Galerkin method
We consider a finite dimensional affine linear subspace S^h_E ⊂ H^1_E(a, b),
    S^h_E = { w_h ∈ H^1_E(a, b) : w_h = ψ(x) + \sum_{i=1}^n w_i φ_i(x) }
where ψ is any function fulfilling the boundary conditions and the φ_i are linearly
independent functions fulfilling the homogeneous boundary conditions.

Find u_h ∈ S^h_E such that
    J(u_h) = \min_{w_h ∈ S^h_E} J(w_h)    (VP_h)
C. F¨ uhrer: FMN081/FMN041/NUMA12-2008
198
8.10: Discretization - Galerkin method (Cont.)
The solution is characterized by the discrete variational equations:

Theorem. The discrete variational equations
    A(u_h, v_h) = (f, v_h)_2   ∀ v_h ∈ S^h_0    (VE_h)
with S^h_0 = { v_h = \sum_{i=1}^n v_i φ_i(x) } have a unique solution.
C. F¨ uhrer: FMN081/FMN041/NUMA12-2008
199
8.11: Discretization - Galerkin method (Cont.)
Note that (VE_h) is equivalent to the n equations
    A(u_h, φ_j) = (f, φ_j)_2,   j = 1, ..., n.
With u_h(x) = ψ(x) + \sum_{i=1}^n u_i φ_i(x) these equations can be written as
    \sum_{j=1}^n K_{ij} u_j = b_i,   i = 1, ..., n
with K_{ij} = A(φ_i, φ_j) and b_i = (f, φ_i)_2 - A(ψ, φ_i).
C. F¨ uhrer: FMN081/FMN041/NUMA12-2008
200
8.12: Discretization - Stiffness Matrix
... or as a linear equation system

    \underbrace{\begin{pmatrix} K_{11} & K_{12} & \cdots & K_{1n} \\ & K_{22} & \cdots & K_{2n} \\ \text{symm.} & & \ddots & \vdots \\ & & & K_{nn} \end{pmatrix}}_{\text{stiffness matrix}}
    \begin{pmatrix} u_1 \\ u_2 \\ \vdots \\ u_n \end{pmatrix}
    =
    \underbrace{\begin{pmatrix} b_1 \\ b_2 \\ \vdots \\ b_n \end{pmatrix}}_{\text{load vector}}
C. F¨ uhrer: FMN081/FMN041/NUMA12-2008
201
8.13: Discretization - Linear Finite Element
A particular choice of discrete spaces is given by the linear finite element approach (FEM):

Grid: a = x_0 < x_1 < x_2 < ... < x_n = b
Basis functions: φ_i, i = 1, ..., n, with φ_i linear spline functions ("hat" functions):

    φ_i(x) := \begin{cases} (x - x_{i-1})/(x_i - x_{i-1}) & x ∈ [x_{i-1}, x_i] \\ (x_{i+1} - x)/(x_{i+1} - x_i) & x ∈ [x_i, x_{i+1}] \\ 0 & \text{else} \end{cases}
C. F¨ uhrer: FMN081/FMN041/NUMA12-2008
202
8.14: Linear Finite Element (Cont.)
And the function ψ, which fulfils the boundary conditions, can be given as
    ψ(x) = A φ_0(x) + B φ_n(x).
In total:
    u_h(x) = ψ(x) + \sum_{i=1}^{n-1} u_i φ_i(x)

The choice of linear splines with their support [x_{i-1}, x_{i+1}] has as consequence that
    K_{ij} = 0  for |i - j| > 1:
K is a symmetric tridiagonal matrix (see spline section).
C. F¨ uhrer: FMN081/FMN041/NUMA12-2008
203
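For constant coefficients p and r on an equidistant grid, the entries K_ij = A(φ_i, φ_j)
can be computed in closed form. A minimal MATLAB sketch (the name femassemble is ours,
and constant p, r is our simplifying assumption):

function K=femassemble(p,r,a,b,n)
% Stiffness matrix K_ij = A(phi_i,phi_j) for constant p, r and
% hat functions on an equidistant grid with n interior nodes
h=(b-a)/(n+1);
d =(2*p/h + 2*r*h/3)*ones(n,1);     % int p*phi_i'^2 + r*phi_i^2
od=( -p/h +   r*h/6)*ones(n-1,1);   % neighboring hats overlap on one subinterval
K=diag(d)+diag(od,1)+diag(od,-1);   % symmetric tridiagonal, as stated above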
8.15: Linear Finite Element - Galerkin Orthogonality
Theorem. [14.6, Céa's Lemma]
The discrete solution can be viewed as a projection of the exact solution onto S^h_E:
• A(u - u_h, v_h) = 0   ∀ v_h ∈ S^h_0
• A(u - u_h, u - u_h) = \min_{w_h ∈ S^h_E} A(u - w_h, u - w_h)
(Proof: see lecture)
C. F¨ uhrer: FMN081/FMN041/NUMA12-2008
204
8.16: Energy Norm Error Estimate
Under the assumptions on p and r we can define a norm and an inner product by A:
• A(u, v) is an inner product,
• ‖u‖_A = A(u, u)^{1/2} is a norm, the so-called energy norm.
C. F¨ uhrer: FMN081/FMN041/NUMA12-2008
205
8.17: Energy Norm Error Estimate (Cont.)
We now apply the theorem above to estimate the finite element error and to get the order
of the method:

Let u(x_i) denote the (unknown) exact solution at the grid points. Then
    I_h(u)(x) = ψ(x) + \sum_{i=1}^{n-1} u(x_i) φ_i(x)
is a linear spline function in S^h_E which interpolates the exact solution.
It can be inserted as "w_h" into the second statement of Céa's lemma:
    ‖u - u_h‖_A ≤ ‖u - I_h(u)‖_A
C. F¨ uhrer: FMN081/FMN041/NUMA12-2008
206
8.17: Energy Norm Error Estimate (Cont.)
The linear spline interpolation error in the energy norm can be estimated (Corollary 14.1):
    ‖u - I_h(u)‖_A^2 ≤ C^2 \left(\frac{h}{π}\right)^2 ‖u''‖_2^2
with C^2 = P + (h/π)^2 R and P = \max_{x∈[a,b]} p(x), R = \max_{x∈[a,b]} r(x).

And finally (Corollary 14.2):
    ‖u - u_h‖_A ≤ \frac{h}{π} C ‖u''‖_2

Thus, the error is linear in h.
C. F¨ uhrer: FMN081/FMN041/NUMA12-2008
207
Ö7 : Eccentric Bending of a Rod
Here comes a student's solution of the bar problem in Assignment 6:
function [xint,u]=galbas(n)
% Galerkin solution of E*I*w'' + F*w = (a/b)*F*x with linear splines
E=208*10^9; I=0.036; a=0.06; b=3; F=180000;   % model data (SI units)
xint=linspace(0,b,n);                % grid with n points
h=xint(2)-xint(1);
% tridiagonal system matrix for the interior nodes
M=diag((-2*E*I/h+h*F)*ones(n-2,1))+ ...
  diag(E*I/h*ones(n-3,1),1)+ ...
  diag(E*I/h*ones(n-3,1),-1);
% load vector for the right-hand side (a/b)*F*x
for k=1:n-2
    hf(k)=a/b*h*F*xint(k+1);
end
u=M\hf';                             % solve for the interior values
C. F¨ uhrer: FMN081/FMN041/NUMA12-2008
208
Ö7.1 : Eccentric Bending of a Rod
Here is what you get when you call the program with n = 20:
[Figure: computed displacement over [0, 3], on the order of 10^{-6}]
C. F¨ uhrer: FMN081/FMN041/NUMA12-2008
209
Unit 9: Initial Value Problems
We now consider problems of the type
    ẏ(t) = f(t, y(t)),   y(t_0) = y_0   (initial value)
where f : R × R^n → R^n is called the right-hand side function of the problem.

In rigid body mechanics this problem occurs as equations of motion, where n describes the
number of degrees of freedom. y is called the state vector of the system.
C. F¨ uhrer: FMN081/FMN041/NUMA12-2008
210
9.1: Initial Value Problems - Second order Systems
In mechanics, due to the Newton–Euler laws, initial value problems occur in implicit
second-order form:
    M(y(t)) ÿ(t) = f(t, y(t), ẏ(t)),   y(t_0) = η_0,  ẏ(t_0) = η_1   (initial values)
M(y(t)) is a positive definite (thus invertible) mass matrix:
    ÿ(t) = M(y(t))^{-1} f(t, y(t), ẏ(t))
C. F¨ uhrer: FMN081/FMN041/NUMA12-2008
211
9.2: Second order Systems (Cont.)
    ÿ(t) = M(y(t))^{-1} f(t, y(t), ẏ(t))

Transformation to first-order (standard) form:
Set y_1(t) = y(t) (position) and y_2(t) = ẏ(t) (velocity). Then
    ẏ_1(t) = y_2(t)                                (time derivative of position is velocity)
    ẏ_2(t) = M(y_1(t))^{-1} f(t, y_1(t), y_2(t))   (time derivative of velocity is acceleration)
C. F¨ uhrer: FMN081/FMN041/NUMA12-2008
212
9.3: Second order Systems (Cont.)
    ẏ_1(t) = y_2(t)
    ẏ_2(t) = M(y_1(t))^{-1} f(t, y_1(t), y_2(t))

This gives a system of the type
    Ẏ(t) = F(t, Y(t)),   Y(t_0) = Y_0   (initial value)
where Y(t), Y_0 ∈ R^{2n}.
C. F¨ uhrer: FMN081/FMN041/NUMA12-2008
213
9.4: Linear Differential Equation Systems
Often (e.g. in control theory) the differential equation is linear:
    ẏ(t) = A y(t) + B u(t)
where A is an n × n system matrix, B an n × m input matrix and u(t) a known input
function (excitation).

Example: the linearized pendulum
    \underbrace{\begin{pmatrix} \dot{α}(t) \\ \ddot{α}(t) \end{pmatrix}}_{ẏ(t)}
    = \underbrace{\begin{pmatrix} 0 & 1 \\ -g/l & 0 \end{pmatrix}}_{A}
      \underbrace{\begin{pmatrix} α(t) \\ \dot{α}(t) \end{pmatrix}}_{y(t)}
C. F¨ uhrer: FMN081/FMN041/NUMA12-2008
214
9.5: Linear Differential Equation Systems - Eigenvalues
The eigenvalues λ of A indicate stability (see course Linear Systems):

[Figure: four example solutions, stable vs. unstable]

Re(λ) ≤ 0 ⇒ stable,  Re(λ) > 0 ⇒ unstable,  Im(λ) ≠ 0 ⇒ oscillatory
C. F¨ uhrer: FMN081/FMN041/NUMA12-2008
215
9.6: Stability Goal
We want a numerical method to reflect the stability properties of the original (linear)
problem.
C. F¨ uhrer: FMN081/FMN041/NUMA12-2008
216
9.7: Direction field
Scalar differential equations can be illustrated by their direction field, a plot which assigns
a slope to every point.

Direction field of ẏ(t) = -0.5 y(t) and one solution curve:
[Figure: direction field with a solution curve]
C. F¨ uhrer: FMN081/FMN041/NUMA12-2008
217
9.8: Euler’s method
We partition the time interval
    t_0 < t_1 < t_2 < · · · < t_i < t_{i+1} < · · · < t_e
and call h_i := t_{i+1} - t_i the step size.

We denote
• the exact solution at t_i by y_i,
• the approximate solution at t_i by u_i.
C. F¨ uhrer: FMN081/FMN041/NUMA12-2008
218
9.9: Euler’s method (Cont.)
We replace the derivative in the differential equation by a difference quotient and obtain
either

• Euler's explicit method
      \frac{u_{i+1} - u_i}{h_i} = f(t_i, u_i)   ⇒   u_{i+1} = u_i + h_i f(t_i, u_i)

• or Euler's implicit method
      \frac{u_{i+1} - u_i}{h_i} = f(t_{i+1}, u_{i+1})   ⇒   u_{i+1} = u_i + h_i f(t_{i+1}, u_{i+1})

(The method is called implicit because the unknown u_{i+1} occurs on both sides.)
C. F¨ uhrer: FMN081/FMN041/NUMA12-2008
219
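A minimal MATLAB sketch of the explicit Euler method with constant step size (the function
name expeul is ours; compare the implicit Euler code in Ö8.4 below):

function [t,y]=expeul(odefun,tspan,y0,n)
% Explicit Euler with n equidistant grid points
t=linspace(tspan(1),tspan(2),n); h=t(2)-t(1);
y(:,1)=y0;
for i=1:n-1
    y(:,i+1)=y(:,i)+h*feval(odefun,t(i),y(:,i));
end

% Example call (pendulum right-hand side from Ö8.1):
% >> [t,y]=expeul(@pendel,[0 10],[pi/2;0],1000);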
9.10: Euler’s explicit method
[Figure: explicit Euler approximations with h = 3 and h = 1.5 in the direction field]
C. F¨ uhrer: FMN081/FMN041/NUMA12-2008
220
9.11: Euler’s implicit method
[Figure: implicit Euler approximations with h = 3 and h = 1.5 in the direction field]
C. F¨ uhrer: FMN081/FMN041/NUMA12-2008
221
9.12: Stability behavior of Euler’s method
We consider the so-called linear test equation
    ẏ(t) = λ y(t)
where λ ∈ C is a system parameter which mimics the eigenvalues of linear systems of
differential equations.

The equation is stable if Re(λ) ≤ 0. In this case the solution is exponentially decaying
(lim_{t→∞} y(t) = 0).

When is the numerical solution u_i also decaying, lim_{i→∞} u_i = 0?
C. F¨ uhrer: FMN081/FMN041/NUMA12-2008
222
9.13: Stability behavior of Euler’s method (Cont.)
Explicit Euler discretization of the linear test equation:
    u_{i+1} = u_i + hλ u_i
This gives u_{i+1} = (1 + hλ)^{i+1} u_0.

The solution is decaying (stable) if |1 + hλ| ≤ 1.

[Figure: stability region of explicit Euler, the disk of radius 1 around -1 in the hλ-plane]
C. F¨ uhrer: FMN081/FMN041/NUMA12-2008
223
9.14: Stability behavior of Euler’s method (Cont.)
Implicit Euler discretization of the linear test equation:
    u_{i+1} = u_i + hλ u_{i+1}
This gives u_{i+1} = \left(\frac{1}{1-hλ}\right)^{i+1} u_0.

The solution is decaying (stable) if |1 - hλ| ≥ 1.

[Figure: stability region of implicit Euler, the exterior of the disk of radius 1 around +1 in the hλ-plane]
C. F¨ uhrer: FMN081/FMN041/NUMA12-2008
224
9.15: Stability behavior of Euler’s method (Cont.)
Explicit Euler's instability for fast decaying equations:
[Figure: oscillating, growing explicit Euler approximation for λ = -5, h = 0.41]
C. F¨ uhrer: FMN081/FMN041/NUMA12-2008
225
9.16: Stability behavior of Euler’s method (Cont.)
Conclusion:
For stable ODEs with fast decaying solutions (Re(λ) ≪ -1)
or highly oscillatory modes (Im(λ) ≫ 1),
the explicit Euler method demands small step sizes.
This makes the method inefficient for these so-called stiff systems.
Alternative: the implicit Euler method.
C. F¨ uhrer: FMN081/FMN041/NUMA12-2008
226
9.17: Implementation of implicit methods
Implicit Euler method: u_{i+1} = u_i + h_i f(t_{i+1}, u_{i+1})

Two ways to solve for u_{i+1} (k is the iteration counter, i the integration step counter):

• Fixed point iteration:
      u^{(k+1)}_{i+1} = \underbrace{u_i + h_i f(t_{i+1}, u^{(k)}_{i+1})}_{= φ(u^{(k)}_{i+1})}

• Newton iteration:
      u_{i+1} = u_i + h_i f(t_{i+1}, u_{i+1})  ⇔  \underbrace{u_{i+1} - u_i - h_i f(t_{i+1}, u_{i+1})}_{= F(u_{i+1})} = 0
      F'(u^{(k)}_{i+1}) Δu_{i+1} = -F(u^{(k)}_{i+1}),   u^{(k+1)}_{i+1} = u^{(k)}_{i+1} + Δu_{i+1}
C. F¨ uhrer: FMN081/FMN041/NUMA12-2008
227
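A sketch of one Newton-based implicit Euler step with a finite-difference Jacobian (the
names impeulstep and numjac_fd are ours; a production code would reuse the Jacobian over
several steps, see the flowcharts in the appendix):

function unew=impeulstep(odefun,t,u,h)
% One implicit Euler step solved by Newton iteration
unew=u+h*feval(odefun,t,u);            % predictor: explicit Euler
for k=1:5                              % fixed small number of Newton steps
    F=unew-u-h*feval(odefun,t+h,unew);
    J=eye(length(u))-h*numjac_fd(odefun,t+h,unew);
    unew=unew-J\F;
end

function J=numjac_fd(odefun,t,u)
% Forward-difference approximation of df/dy
n=length(u); f0=feval(odefun,t,u); J=zeros(n);
for j=1:n
    e=zeros(n,1); e(j)=sqrt(eps)*max(1,abs(u(j)));
    J(:,j)=(feval(odefun,t,u+e)-f0)/e(j);
end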
9.18: Implementation of implicit methods (Cont.)
These iterations are performed at every integration step!
They are started with the explicit Euler method as a so-called predictor:
    u^{(0)}_{i+1} = u_i + h_i f(t_i, u_i)

When should fixed point iteration and when Newton iteration be used?
The key is contractivity!
Let's check the linear test equation again: ẏ = λy.
Contractivity: |φ'(u)| = |hλ| < 1.
C. F¨ uhrer: FMN081/FMN041/NUMA12-2008
228
9.19: Implementation of implicit methods (Cont.)
If the differential equation is
• nonstiff: explicit Euler, or implicit Euler with fixed point iteration,
• stiff: implicit Euler with Newton iteration.
C. F¨ uhrer: FMN081/FMN041/NUMA12-2008
229
9.20: Multistep Methods
Multistep methods are methods which require starting values from several previous steps.
There are two important families of multistep methods
• Adams methods (explicit: Adams–Bashforth as predictor, implicit: Adams–Moulton as
corrector), used together with fixed point iteration for nonstiff problems,
• BDF methods (implicit), together with Newton iteration used for stiff problems
C. F¨ uhrer: FMN081/FMN041/NUMA12-2008
230
9.21: Adams Methods
For deriving Adams methods we transform the ODE
    ẏ = f(t, y)  with  y(t_0) = y_0
into its integral form
    y(t) = y_0 + \int_{t_0}^t f(τ, y(τ))\,dτ
and partition the interval into
    t_0 < t_1 < · · · < t_i < t_{i+1} = t_i + h_i < · · · < t_e
C. F¨ uhrer: FMN081/FMN041/NUMA12-2008
231
9.22: Adams Methods (Cont.)
Thus
    y_{n+1} = y(t_{n+1}) = y_n + \int_{t_n}^{t_{n+1}} f(τ, y(τ))\,dτ

Let u_{n+1-i}, i = 1, ..., k, be previously computed values and π^p_k the unique polynomial
which interpolates
    f(t_{n+1-i}, u_{n+1-i}),   i = 1, ..., k.
Then we define a method by
    u_{n+1} = u_n + \int_{t_n}^{t_{n+1}} π^p_k(τ)\,dτ
C. F¨ uhrer: FMN081/FMN041/NUMA12-2008
232
9.23: Adams Methods (Cont.)
Example (2-step method, k = 2):
Using the concept of Lagrange polynomials gives
    π^p_k(t) = f(t_n, u_n) L_0(t) + f(t_{n-1}, u_{n-1}) L_1(t)
             = f(t_n, u_n) \frac{t - t_{n-1}}{t_n - t_{n-1}} + f(t_{n-1}, u_{n-1}) \frac{t - t_n}{t_{n-1} - t_n}

Integration gives (for constant step size h):
    u_{n+1} = u_n + h \left( \frac{3}{2} f(t_n, u_n) - \frac{1}{2} f(t_{n-1}, u_{n-1}) \right)

This is the Adams–Bashforth-2 method.
C. F¨ uhrer: FMN081/FMN041/NUMA12-2008
233
9.24: Adams Methods (Cont.)
Adams–Bashforth methods are explicit; their general form is
    u_{n+1} = u_n + h \sum_{i=1}^k β_{k-i} f(t_{n+1-i}, u_{n+1-i})

Adams–Moulton methods are constructed in a similar way.
Here the unknown value f(t_{n+1}, u_{n+1}) is taken as an additional interpolation point.
This makes the method implicit.
C. F¨ uhrer: FMN081/FMN041/NUMA12-2008
234
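A constant step size MATLAB sketch of Adams–Bashforth-2, started with one explicit Euler
step (the function name ab2 is ours):

function [t,y]=ab2(odefun,tspan,y0,n)
% Adams-Bashforth-2 with constant step size
t=linspace(tspan(1),tspan(2),n); h=t(2)-t(1);
y(:,1)=y0;
y(:,2)=y0+h*feval(odefun,t(1),y0);        % one Euler step as starting value
for i=2:n-1
    y(:,i+1)=y(:,i)+h*(3/2*feval(odefun,t(i),y(:,i)) ...
                      -1/2*feval(odefun,t(i-1),y(:,i-1)));
end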
9.25: Adams Methods (Cont.)
Examples (Adams–Moulton):
    k = 0 :  u_{n+1} = u_n + h f(t_{n+1}, u_{n+1})                            (implicit Euler method)
    k = 1 :  u_{n+1} = u_n + h \left( \frac{1}{2} f(t_{n+1}, u_{n+1}) + \frac{1}{2} f(t_n, u_n) \right)    (trapezoidal rule)
    k = 2 :  u_{n+1} = u_n + h \left( \frac{5}{12} f(t_{n+1}, u_{n+1}) + \frac{8}{12} f(t_n, u_n) - \frac{1}{12} f(t_{n-1}, u_{n-1}) \right)

The general form is
    u_{n+1} = u_n + h \sum_{i=0}^k \bar{β}_{k-i} f(t_{n+1-i}, u_{n+1-i})
C. F¨ uhrer: FMN081/FMN041/NUMA12-2008
235
9.26: Starting a multistep method
To start a multistep method one applies
• the initial value
• then a one-step method, to get two values
• then a two-step method, to get three values
• and so on ...
C. F¨ uhrer: FMN081/FMN041/NUMA12-2008
236
9.27: Adams Method - Stability
The region of stability shrinks when k increases:

[Figure: stability regions of the Adams–Bashforth (AB-1,2,3), Adams–Moulton (AM-0,...,3)
and BDF (BDF-1,...,6) methods]
C. F¨ uhrer: FMN081/FMN041/NUMA12-2008
237
9.28: BDF methods
BDF methods (BDF = backward differentiation formula) are constructed directly from the
differential equation. We seek a polynomial which satisfies the ODE in one point, t_{n+1},
and interpolates k previous ones:

Define the k-th degree polynomial π_{k+1} by
    π_{k+1}(t_{n+1-i}) = u_{n+1-i},   i = 0, ..., k
    \dot{π}_{k+1}(t_{n+1}) = f(t_{n+1}, u_{n+1}).
C. F¨ uhrer: FMN081/FMN041/NUMA12-2008
238
9.29: BDF methods (Cont.)
Working out these conditions and using again the Lagrange polynomial approach gives
formulas of the following type:
    \sum_{i=0}^k α_{k-i} u_{n+1-i} = h f(t_{n+1}, u_{n+1})

Examples:
    k = 1 :  u_{n+1} = u_n + h f(t_{n+1}, u_{n+1})                                       (implicit Euler method)
    k = 2 :  u_{n+1} = \frac{4}{3} u_n - \frac{1}{3} u_{n-1} + h \frac{2}{3} f(t_{n+1}, u_{n+1})                  (BDF-2)
    k = 3 :  u_{n+1} = \frac{18}{11} u_n - \frac{9}{11} u_{n-1} + \frac{2}{11} u_{n-2} + h \frac{6}{11} f(t_{n+1}, u_{n+1})   (BDF-3)
C. F¨ uhrer: FMN081/FMN041/NUMA12-2008
239
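A sketch of one BDF-2 step, solving the implicit relation by Newton iteration (the function
name bdf2step is ours; it reuses the finite-difference Jacobian helper numjac_fd from the
implicit Euler sketch in Section 9.17):

function unew=bdf2step(odefun,t,un,unm1,h)
% One BDF-2 step: u_{n+1} = 4/3 u_n - 1/3 u_{n-1} + 2/3 h f(t_{n+1},u_{n+1})
unew=un+h*feval(odefun,t,un);                  % explicit Euler predictor
for k=1:5                                      % Newton iteration
    G=unew-4/3*un+1/3*unm1-2/3*h*feval(odefun,t+h,unew);
    J=eye(length(un))-2/3*h*numjac_fd(odefun,t+h,unew);
    unew=unew-J\G;
end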
9.30: BDF - Stability
The region of stability shrinks when k increases,
but remains unbounded for k ≤ 6:

[Figure: stability regions of the BDF-1,...,6 methods (same plot as in Section 9.27)]
C. F¨ uhrer: FMN081/FMN041/NUMA12-2008
240
9.31: Multistep Methods in General
The general form of a linear multistep method reads
    \sum_{i=0}^k α_{k-i} u_{n+1-i} - h_n \sum_{i=0}^k β_{k-i} f(t_{n+1-i}, u_{n+1-i}) = 0.

For starting a multistep method, k starting values u_0, ..., u_{k-1} are required.
C. F¨ uhrer: FMN081/FMN041/NUMA12-2008
241
9.32: Global Error
The quantity of interest is the global error of the method at a given time point t_n:
    e_n := y(t_n) - u_n,   with n = t_n/h.

If, for exact starting values, e_n = O(h), then the method is said to be convergent. More
precisely, a method is convergent of order p if
    e_n = O(h^p).
C. F¨ uhrer: FMN081/FMN041/NUMA12-2008
242
9.33: Local Residual
To make a statement about the behavior of the global error, we first have to introduce and
study the local residual:

Definition. Let y be a differentiable function. Then the quantity
    l(y, t_n, h) := \sum_{i=0}^k α_{k-i} y(t_{n-i}) - h \sum_{i=0}^k β_{k-i} ẏ(t_{n-i})
is called the local residual of the method.
C. F¨ uhrer: FMN081/FMN041/NUMA12-2008
243
9.34: Example
The local residual of the two-step implicit Adams method is defined by
    l(y, t+h, h) = y(t+h) - y(t) - h \left( \frac{5}{12} ẏ(t+h) + \frac{8}{12} ẏ(t) - \frac{1}{12} ẏ(t-h) \right).
C. F¨ uhrer: FMN081/FMN041/NUMA12-2008
244
9.35: Example (Cont.)
Taylor expansion leads to
    l(y, t, h) =   h ẏ(t) + \frac{1}{2} h^2 ÿ(t) + \frac{1}{6} h^3 y^{(3)}(t) + \frac{1}{24} h^4 y^{(4)}(t) + ...
                 - \frac{5}{12} h ẏ(t) - \frac{5}{12} h^2 ÿ(t) - \frac{5}{24} h^3 y^{(3)}(t) - \frac{5}{72} h^4 y^{(4)}(t) + ...
                 - \frac{8}{12} h ẏ(t)
                 + \frac{1}{12} h ẏ(t) - \frac{1}{12} h^2 ÿ(t) + \frac{1}{24} h^3 y^{(3)}(t) - \frac{1}{72} h^4 y^{(4)}(t) + ...
               = - \frac{1}{24} h^4 y^{(4)}(t) + ...

Thus the implicit two-step Adams method has order of consistency 3.
C. F¨ uhrer: FMN081/FMN041/NUMA12-2008
245
9.36: Order of Consistency
Conditions for higher order of consistency are given by the following theorem:

Theorem. A linear multistep method has order of consistency p if the following
p + 1 conditions on its coefficients are met:
    \sum_{i=0}^k α_i = 0
    \sum_{i=0}^k (i α_i - β_i) = 0
    \sum_{i=0}^k \left( \frac{1}{j!} i^j α_i - \frac{1}{(j-1)!} i^{j-1} β_i \right) = 0,   j = 2, ..., p.
C. F¨ uhrer: FMN081/FMN041/NUMA12-2008
246
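These conditions are easy to check numerically. A MATLAB sketch for the two-step implicit
Adams method, where the coefficient vectors are our transcription of the method into the
α_i, β_i notation (i = 0, 1, 2 corresponding to t_{n-1}, t_n, t_{n+1}):

% u_{n+1} - u_n = h(5/12 f_{n+1} + 8/12 f_n - 1/12 f_{n-1})
alpha=[0 -1 1]; beta=[-1/12 8/12 5/12]; i=0:2;
disp(sum(alpha))                          % 0
disp(sum(i.*alpha-beta))                  % 0
for j=2:4
    disp(sum(i.^j/factorial(j).*alpha - i.^(j-1)/factorial(j-1).*beta))
end
% the conditions hold up to j = 3; the j = 4 value is -1/24,
% so the order of consistency is 3, as derived above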
9.37: Asymptotic Form of Local Residual
The local residual of a method with order of consistency p takes the form
    l(y, t, h) = c_{p+1} h^{p+1} y^{(p+1)}(t) + O(h^{p+2}).

Adams–Bashforth methods have order of consistency k, Adams–Moulton methods have
order of consistency k + 1, and BDF methods have order of consistency k.
C. F¨ uhrer: FMN081/FMN041/NUMA12-2008
247
9.38: Global Error Propagation
Consider (for simplicity) the linear differential equation ẏ = Ay.
The difference of
    \sum_{i=0}^k α_{k-i} y(t_{n-i}) - h_n \sum_{i=0}^k β_{k-i} A y(t_{n-i}) = l(y, t_n, h)
and
    \sum_{i=0}^k α_{k-i} u_{n-i} - h_n \sum_{i=0}^k β_{k-i} A u_{n-i} = 0
gives a recursion for the global error:
    \sum_{i=0}^k α_{k-i} e_{n-i} - h_n \sum_{i=0}^k β_{k-i} A e_{n-i} = l(y, t_n, h).
C. F¨ uhrer: FMN081/FMN041/NUMA12-2008
248
9.39: Global Error Propagation (Cont.)
By introducing the vector
    E_n := \begin{pmatrix} e_n \\ e_{n-1} \\ \vdots \\ e_{n-k+1} \end{pmatrix} ∈ R^{k n_y}
this recursion formula can be written in one-step form as
    E_{n+1} = Φ_n(h) E_n + M_n
C. F¨ uhrer: FMN081/FMN041/NUMA12-2008
249
9.40: Global Error Propagation (Cont.)
with
    Φ_n(h) := \begin{pmatrix}
        -A_k^{-1} A_{k-1} & -A_k^{-1} A_{k-2} & \cdots & -A_k^{-1} A_1 & -A_k^{-1} A_0 \\
        I & 0 & \cdots & 0 & 0 \\
        0 & I & \cdots & 0 & 0 \\
        \vdots & & \ddots & & \vdots \\
        0 & 0 & \cdots & I & 0
    \end{pmatrix},
    A_i := (α_i I - h β_i A),
and
    M_n := \begin{pmatrix} -A_k^{-1} l(y, t_n, h) \\ 0 \\ \vdots \\ 0 \end{pmatrix}.
C. F¨ uhrer: FMN081/FMN041/NUMA12-2008
250
9.41: Global Error Propagation (Cont.)
From this formula we see how the global error of a multistep method is built up. There is in
every step a (local) contribution M_n, which is of the size of the local residual. Therefore,
a main task is to control the integration in such a way that this contribution is kept small.
The effect of these local residuals on the global error is influenced by Φ_n(h). The local
effects can be damped or amplified depending on the properties of the propagation matrix
Φ_n(h). This leads to the discussion of the stability properties of the method and its
relation to the stability of the problem.
C. F¨ uhrer: FMN081/FMN041/NUMA12-2008
251
9.42: Stability
The stability requirement is
    ‖Φ(h)^n‖ < C
with C independent of n, which is equivalent to:
All eigenvalues of Φ(h) are within the unit circle and those on its boundary are simple.
C. F¨ uhrer: FMN081/FMN041/NUMA12-2008
252
9.43: Organisation of a Multistep code
see Flowcharts (separate file).
C. F¨ uhrer: FMN081/FMN041/NUMA12-2008
253
9.44: Runge–Kutta Methods
Runge–Kutta methods are one-step methods, i.e. they have the generic form
    u_{n+1} := u_n + h φ_h(t_n, u_n)
with a method-dependent increment function φ_h. In contrast to multistep methods, the
transition from one step to the next is based on data of the most recent step only.
C. F¨ uhrer: FMN081/FMN041/NUMA12-2008
254
9.45: Runge–Kutta Methods: Basic Scheme
The basic construction scheme is
    U_1 = u_n
    U_i = u_n + h \sum_{j=1}^{i-1} a_{ij} f(t_n + c_j h, U_j),   i = 2, ..., s
    u_{n+1} = u_n + h \sum_{i=1}^s b_i f(t_n + c_i h, U_i).

s is called the number of stages.
C. F¨ uhrer: FMN081/FMN041/NUMA12-2008
255
9.46: Runge–Kutta Methods: Example
By taking s = 2, a_21 = 1/2, b_1 = 0, b_2 = 1, c_1 = 0, and c_2 = 1/2 the following
scheme is obtained:
    U_1 = u_n
    U_2 = u_n + \frac{h}{2} f(t_n, U_1)
    u_{n+1} = u_n + h f(t_n + \frac{h}{2}, U_2)

For this method the increment function reads
    φ_h(t, u) := f\!\left( t + \frac{h}{2},\, u + \frac{h}{2} f(t, u) \right).
C. F¨ uhrer: FMN081/FMN041/NUMA12-2008
256
9.47: Runge–Kutta Methods: Stage derivatives
Normally, Runge–Kutta methods are written in an equivalent form by substituting
k_i := f(t_n + c_i h, U_i):
    k_1 = f(t_n, u_n)
    k_i = f(t_n + c_i h, u_n + h \sum_{j=1}^{i-1} a_{ij} k_j),   i = 2, ..., s
    u_{n+1} = u_n + h \sum_{i=1}^s b_i k_i.
C. F¨ uhrer: FMN081/FMN041/NUMA12-2008
257
9.48: Runge–Kutta Methods: Butcher Tableau
The coefficients of a Runge–Kutta method are collected in a compact form, the
Butcher tableau:

    c_1  |
    c_2  | a_21
    c_3  | a_31  a_32
    ...  | ...
    c_s  | a_s1  a_s2  ...  a_s,s-1
    -----+---------------------------
         | b_1   b_2   ...  b_{s-1}  b_s

or, in short,
    c | A
    --+---
      | b^T
with A = (a_{ij}) and a_{ij} = 0 for j ≥ i.
C. F¨ uhrer: FMN081/FMN041/NUMA12-2008
258
9.49: Butcher Tableau - RK4
The classical 4-stage Runge–Kutta method reads in this notation:

    0    |
    1/2  | 1/2
    1/2  | 0    1/2
    1    | 0    0    1
    -----+---------------------
         | 1/6  2/6  2/6  1/6
C. F¨ uhrer: FMN081/FMN041/NUMA12-2008
259
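A direct MATLAB transcription of this tableau as one step of the classical method (the
function name rk4step is ours):

function unew=rk4step(odefun,t,u,h)
% One step of the classical 4-stage Runge-Kutta method
k1=feval(odefun,t,u);
k2=feval(odefun,t+h/2,u+h/2*k1);
k3=feval(odefun,t+h/2,u+h/2*k2);
k4=feval(odefun,t+h,u+h*k3);
unew=u+h*(k1+2*k2+2*k3+k4)/6;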
9.50: Order of a Runge–Kutta Method
The global error of a Runge–Kutta method at t_n is defined in the same way as for multistep
methods:
    e_n := y(t_n) - u_n,   with n = t_n/h.
A Runge–Kutta method has order p if e_n = O(h^p).
C. F¨ uhrer: FMN081/FMN041/NUMA12-2008
260
9.51: Embedded Runge-Kutta Methods
The local error can be estimated by comparing the results after one step of two methods
of different order:

    c_1  |
    c_2  | a_21
    c_3  | a_31  a_32
    ...  | ...
    c_s  | a_s1  a_s2  ...  a_s,s-1
    -----+-----------------------------------------------
         | b^p_1      b^p_2      ...  b^p_{s-1}      b^p_s
         | b^{p+1}_1  b^{p+1}_2  ...  b^{p+1}_{s-1}  b^{p+1}_s

(Butcher tableau of an embedded pair of two RK methods: both methods share the same
stages and differ only in the weights.)
C. F¨ uhrer: FMN081/FMN041/NUMA12-2008
261
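A sketch of how such a pair yields a local error estimate and a step size proposal (the
function name embedstep, the weight vectors bp, bp1 and the tolerance logic are our
assumptions, not a specific library routine):

function [unew,err,hnew]=embedstep(k,u,h,bp,bp1,p,TOL)
% One step of an embedded RK pair: the columns of k are the stage
% derivatives f(t_n+c_i*h,U_i) as in Section 9.47; bp and bp1 are
% the weight row vectors of the order-p and order-(p+1) methods
ulow =u+h*(k*bp');           % result of the order-p method
uhigh=u+h*(k*bp1');          % result of the order-(p+1) method
unew =uhigh;                 % continue with the higher-order result
err  =norm(uhigh-ulow);      % local error estimate
hnew =h*(TOL/err)^(1/(p+1)); % standard step size proposal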
9.52: Runge–Kutta–Fehlberg 2(3)
One method of low order in that class is the RKF2(3) method:

    0    |
    1    | 1
    1/2  | 1/4  1/4
    -----+----------------
         | 1/2  1/2  0
         | 1/6  1/6  4/6

It uses 3 stages.
C. F¨ uhrer: FMN081/FMN041/NUMA12-2008
262
9.53: Stability of RK4(5)
The stability plot of the RK4(5) pair (MATLAB's ode45):
[Figure: stability regions of the DOPRI4 and DOPRI5 methods]
C. F¨ uhrer: FMN081/FMN041/NUMA12-2008
263
Ö8 : MATLAB ODE Example
Let's simulate the pendulum with MATLAB.

ODE:
    \ddot{α}(t) = -\frac{g}{l} \sin(α(t))

ODE in first-order form:
    \dot{α}_1(t) = α_2(t)
    \dot{α}_2(t) = -\frac{g}{l} \sin(α_1(t))

Initial values: α_1 = π/2 and α_2 = 0.
C. F¨ uhrer: FMN081/FMN041/NUMA12-2008
264
Ö8.1 : Pendulum - Right-Hand Side Function
In MATLAB, define first the right-hand side function:
function ydot=pendel(t,y)
%
g=9.81;
l=1;
%
alpha1=y(1);
alpha2=y(2);
%
ydot(1)=alpha2;
ydot(2)=-g/l*sin(alpha1);
ydot=ydot’;
C. F¨ uhrer: FMN081/FMN041/NUMA12-2008
265
Ö8.2 : Pendulum Simulation
We may use ode15s to integrate this problem:
>> ode15s(@pendel,[0 10],[pi/2,0]);
ode15s is a multistep method with variable step size and order.
C. F¨ uhrer: FMN081/FMN041/NUMA12-2008
266
Ö8.3 : Pendulum Simulation (Cont.)
The blue curve depicts α(t), the green curve its derivative, and the circles indicate the steps:
[Figure: ode15s solution of the pendulum on [0, 10]]
C. F¨ uhrer: FMN081/FMN041/NUMA12-2008
267
Ö8.4 : Implicit Euler Code
function [t,y]=impeul(odefun,tspan,y0,n)
t0=tspan(1); te=tspan(2); t=linspace(t0,te,n);h=t(2)-t(1);
y(:,1)=y0;
for i=1:n-1 % integrator loop
% predict
ypred=y(:,i)+h*feval(odefun,t(i),y(:,i));
y(:,i+1)=ypred;
% corrector with fixed-point iteration
for j=1:2 % we use a fixed number of iterations
% and do not check convergence
y(:,i+1)=y(:,i)+h*feval(odefun,t(i+1),y(:,i+1));
end
end
C. F¨ uhrer: FMN081/FMN041/NUMA12-2008
268
Ö8.5 : Implicit Euler Damping
We see the damping effect of implicit Euler if we take few steps:
[Figure: damped numerical pendulum solution, n = 200]
C. F¨ uhrer: FMN081/FMN041/NUMA12-2008
269
Ö8.6 : Implicit Euler Damping (Cont.)
... and here with 10 times more steps:
[Figure: pendulum solution with n = 2000]
C. F¨ uhrer: FMN081/FMN041/NUMA12-2008
270
10.1: Outlook
This course is a basic appetizer for continued work and education in numerical analysis
and applied mathematics.
There are courses on
• Numerical Methods for ODE/PDE
• Finite Volume Methods
• Numerical Methods in Computer Graphics
• Numerical Linear Algebra
• Simulation Tools
and hopefully soon a course on Numerical Optimization...
C. F¨ uhrer: FMN081/FMN041/NUMA12-2008
271
11.1: Appendix
Here we attach for complete course documentation
• the six home assignments
• the final exam preparing project
• the ODE code flowchart
C. F¨ uhrer: FMN081/FMN041/NUMA12-2008
272
Claus Führer, 2008-01-16
FMN081/FMN041/NUMA12: Computational Methods in Physics, Mechanics
Numerical Analysis, Matematikcentrum
Assignment 1
Last hand-in day: 23 January 2008 at 13:00h
Goal: Making practical experiments with fixed point and Newton iteration
Task 1 Consider the problem of finding all zeros of the function f(x) =
    2x - tan x. Transform this function into the two different fixed point
    problems
        x = (1/2) tan x   or   x = arctan(2x).
    Try to perform fixed point iteration with these functions, discuss
    convergence and estimate the error after 10 iteration steps (a posteriori
    estimate) in the convergent case.
    Construct another fixed point problem which has as fixed points the
    roots of the function f. Discuss also for this third alternative the
    convergence properties as above.
Task 2 Write in MATLAB three functions, one which performs one step
of Newton-iteration with exact derivative, one which does the same,
but with a finite-difference approximation of the derivative and one
for the secant method. Solve with these functions the problem f(x) =
arctan(x) = 0 by calling them in MATLAB within an appropriate
for-loop.
Task 3 Study by experiments the convergence behavior of these three
    methods when applied to the function above. What is the order of
    convergence? Determine for each method the convergence interval, i.e.
    an interval around the solution where you can observe convergence
    for any x_0.
Task 4 Read Sec. 1.7 in the course book. It contains two examples, which
demonstrate interesting phenomena observed when performing fixed
point iteration and Newton iteration. Program fixed point iteration
for the logistic equation (1.33) and demonstrate the phenomena by
your computations. Be prepared to present your results in class on
Tuesday, January 25.
Lycka till!
1
Claus Führer, 2008-01-23
FMN041/FMN081: Computational Methods in Physics
NUMA12: Numerical Approximations
Numerical Analysis, Matematikcentrum
Assignment 2
Last hand-in day: 30 January 2008 at 13:00h
Goal: MATLAB experience with Newton iteration in R^n, a mechanical example,
the fractal structure of Newton convergence regions, and a property of the
condition number.
Task 1 We consider the mathematical model of the truck (see figure).

    [Figure: 2D truck model with reference frame (body 0), rear wheel (body 1),
    chassis (body 2), front wheel (body 3), cabin (body 4), loading area (body 5)
    and coordinates p_1, ..., p_9]

    The task is to compute an equilibrium position for the truck starting
    from an initial position given by the coordinates
        p^{(0)} = (0.52, 2.16, 0.0, 0.52, 2.68, 0.0, -1.5, 2.7, 0.0)^T.
    Download the MATLAB file truck_acc.m, which evaluates the right-hand
    side of the differential equation describing the motion of the truck.
    That m-file will also need spridamp.m and relat.m. Compute the
    equilibrium position of the truck by using Newton's method.
    Discuss the rate of convergence.
1
Task 2 We will now construct a fractal figure using Newton's method.
    Consider the roots of the equation system
        F(x) = \begin{pmatrix} x_1^3 - 3 x_1 x_2^2 - 1 \\ 3 x_1^2 x_2 - x_2^3 \end{pmatrix} = 0

    1. Write down Newton's method applied specifically to this problem.
    2. Write an m-file called rootz31.m that, given an initial vector
       x^{(0)} ∈ R^2, applies a fixed number k = 15 of iterations of Newton's
       method. Use this file to find the three zeros of F:
           x_b = (1, 0)^T,  x_g = (-1/2, \sqrt{3}/2)^T  and  x_r = (-1/2, -\sqrt{3}/2)^T.
    3. Study now the dependence of Newton's method on the initial vector
       x^{(0)}. Write an m-file called newtonfractal.m following these steps:
       • Use the meshgrid command to set up a grid of 100^2 points in
         the set G = [-1, 1] × [-2, 2]. You obtain two matrices X and
         Y where a specific grid point is defined as p_ij = (X_{i1}, Y_{1j})^T.
       • Build a matrix A such that a_ij = 1 if Newton's method
         started with p_ij converges to the first root, x_b, a_ij = 2 if
         it converges to the second root, x_g, and a_ij = 3 if it converges
         to the third root, x_r. If the process diverges, set a_ij = 0.
       • Visualize the resulting matrix A to observe the resulting
         pattern. Use for this end the MATLAB commands pcolor
         and shading.
       • Why is this figure called a fractal? Explain the term local
         convergence from the figure.
Task 3 Let A be a symmetric matrix. Show that its condition number
    with respect to the 2-norm is |λ_max| / |λ_min|, where λ_max is the largest
    eigenvalue of A and λ_min the smallest.
Lycka till!
2
Claus Führer, 2008-01-30
FMN041/FMN081: Computational Methods in Physics
NUMA12: Numerical Approximations
Numerical Analysis, Matematikcentrum
Assignment 3
Last hand-in day: 6 February 2008 at 13:00h
Goal: To get experience with Polynomial interpolation, Chebyshev polyno-
mials, Runge’s phenomenon
Task 1 This task is just to get experience with simple data interpolation
    and extrapolation. In the table below you find three columns with
    energy data from my house in Södra Sandby. Construct the unique
    polynomial which interpolates the energy consumption and another
    which interpolates the temperature. Plot the curves. Evaluate these
    polynomials to see how much energy I will consume on Tuesday,
    February 6, and find out what the average temperature will be on that day.
Day Temperature (C) Energy (kwh)
080127 -1.9 109.26
080128 -3.7 92.4
080129 -5.77 115.33
080130 2.53 107.77
080131 4.32 61.14
Solve this task with three methods
• by using MATLAB’s commands polyfit and polyval,
• by using Vandermonde’s approach
• by using Lagrange polynomials.
Note, you get in all three cases the same polynomial but in totally
different representations. I.e. your plots should be identical in all three
cases (if you made things correctly).
Task 2 When interpolating a function with a polynomial using x_i, i = 0 : n,
    as interpolation points, the error has the form
        |f(x) - p(x)| = \frac{1}{(n+1)!} |f^{(n+1)}(ξ)| |(x - x_0)(x - x_1) \cdots (x - x_n)|.
    Therefore, the polynomial ω_n(x) = (x-x_0)(x-x_1) \cdots (x-x_n) influences
    the size of the interpolation error.
    Put n distinct (!) interpolation points in different locations in the
    interval [-1, 1] and plot ω_n(x). Set n = 5 and try out different choices
    of interpolation points. Can you recommend an optimal one? Test also
    the case n = 15.
Task 3 Select in Task 2 the interpolation points as Chebyshev points and
compare the resulting curve with your previous results.
Task 4 Now we visualize the error. To this end, interpolate the function
        f(x) = \frac{1}{1 + 25 x^2}
    on an equidistant grid in the interval [-1, 1] with n = 3, 9, 15. Construct
    also an interpolation polynomial on a grid with Chebyshev points.
Task 5 Prove the following property of the Lagrange polynomials L_i ∈ P_n:
        \sum_{i=0}^n L_i(x) = 1   ∀ x
Lycka till!
2
Claus Führer, 2008-02-05
FMN081: Computational Methods in Mechanics
Numerical Analysis, Matematikcentrum
Assignment 4
Last hand-in day: 13 February 2008 at 15:00h
Goal: To get experience with splines
Task 1 Write a program coeff=cubspline(xint,yint), which takes as input
    the x- and y-values of m + 1 interpolation points and returns an
    m × 4 coefficient matrix of the natural spline which interpolates the
    data. The i-th row of this matrix contains the coefficients a_i, b_i, c_i, d_i
    of the cubic subpolynomial s_i of the spline.
    The program may be written for equidistant node points x_i (h constant).
Task 2 Write a program yval=cubsplineval(coeff,xint,xval), which
evaluates the spline at xval. Test both programs on a simple test
problem of your choice.
Task 3 We will now discuss a wheel profile function that is based on the
    standard profile S1002, which is defined sectionwise by polynomials up to
    degree 8. The profile and its sections are shown in Fig. 1. The polynomials
    are defined by

    Section A:  F(s) = a_A - b_A s
    Section B:  F(s) = a_B - b_B s + c_B s^2 - d_B s^3 + e_B s^4 - f_B s^5 + g_B s^6 - h_B s^7 + i_B s^8
    Section C:  F(s) = -a_C - b_C s - c_C s^2 - d_C s^3 - e_C s^4 - f_C s^5 - g_C s^6 - h_C s^7
    Section D:  F(s) = a_D - \sqrt{b_D^2 - (s + c_D)^2}
    Section E:  F(s) = -a_E - b_E s
    Section F:  F(s) = a_F + \sqrt{b_F^2 - (s + c_F)^2}
    Section G:  F(s) = a_G + \sqrt{b_G^2 - (s + c_G)^2}
    Section H:  F(s) = a_H + \sqrt{b_H^2 - (s + c_H)^2}

    with the coefficients

             A              B                 C                 D
    a        1.364323640    0.0               4.320221063e+3    16.446
    b        0.066666667    3.358537058e-2    1.038384026e+3    13.
    c        -              1.565681624e-3    1.065501873e+2    26.210665
    d        -              2.810427944e-5    6.051367875e+0    -
    e        -              5.844240864e-8    2.054332446e-1    -
    f        -              1.562379023e-8    4.169739389e-3    -
    g        -              5.309217349e-15   4.687195829e-5    -
    h        -              5.957839843e-12   2.252755540e-7    -
    i        -              2.646656573e-13   -                 -
    ξ_min    32.15796       -26               -35               -38.426669071
    ξ_max    60              32.15796         -26               -35

             E               F                 G                 H
    a        93.576667419    8.834924130       16.               9.519259302
    b        2.747477419     20                12.               20.5
    c        -               58.558326413      55.               49.5
    ξ_min    -39.764473993   -49.662510381     -62.764705882     -70.0
    ξ_max    -38.426669071   -39.764473993     -49.662510381     -62.764705882

    [Figure 1: The S1002 wheel profile and its sectors of polynomial description (A-H)]

    Describe this wheel profile by means of a natural cubic spline. To this
    end, download the file s1002.m, which contains the above description
    of the S1002 wheel standard, and generate from it data which you
    then use to generate an interpolating spline with your programs from
    Tasks 1 and 2. Plot the resulting curve.
Task 4 Write a program y=bspline(x,xint,i) which computes the i-th
    cubic B-spline function y = N_{i,4}(x). Use this program to plot the curve
    given by the de Boor points d = [1, 2, -2, 4, -5, 0, 2, 2] and the node
    vector xint = [0, 0, 0, 0, 1, 2, 3, 4, 5, 5, 5, 5].
Lycka till!
2
Claus Führer, 2008-01-30
FMN041/FMN081: Computational Methods in Physics
NUMA12: Numerical Approximations
Numerical Analysis, Matematikcentrum
Assignment 5
Last hand-in day: 20 February 2008 at 13:00h
Goal: To get experience with best approximation in the least-squares sense
and with numerical quadrature
Task 1 Compute and plot the polynomial p_5 which best approximates the
    function f(x) = arctan(x) in the interval [-1, 1]. Make three different
    approaches:
    1. Use a monomial basis for this task, set up a Hilbert matrix
       and solve for the coefficients. Use the inner product
       (f, g) = \int_{-1}^1 f(x) g(x)\,dx.
    2. Use Legendre polynomials as a basis and the same inner product.
    3. Use the inner product (f, g) = \int_{-1}^1 \frac{1}{\sqrt{1-x^2}} f(x) g(x)\,dx instead
       and use Chebyshev polynomials as a basis.
    To compute integrals of nonpolynomial expressions use MATLAB's
    command quad, i.e. don't make symbolic computations. Study the
    influence of different integration tolerances (TOL).
Task 3 Write two MATLAB programs, mysimpson and mygauss3, which
    compute a Simpson and a 3-stage Gauss approximation to the integral
    \int_a^b f(x)\,dx for a given function and a given number of steps.
Task 4 Apply these methods to compute the arc length of the logarithmic
    spiral, which is given in parametric form by
        x(t) = 3 cos(t) exp(-0.04 t)
        y(t) = 3 sin(t) exp(-0.04 t)
    with t ∈ [0, 100]. (The arc length is defined by
    s = \int_a^b \sqrt{x'(t)^2 + y'(t)^2}\,dt.)
Task 5 Take a nonpolynomial function of your choice of which you know the
    exact integral. Use this function to determine the approximation error
    of your Simpson and Gauss-3 methods. Compare their performance by
    making an effort/precision plot. Plot the step size versus the error in a
    loglog diagram and discuss the order of the methods.
Lycka till!
1
Claus Führer, 2008-02-20
FMN081: Computational Methods in Mechanics
Numerical Analysis, Matematikcentrum
Assignment 6
Last hand-in day: 27 February 2008 at 15:00h
Goal: To solve a simple boundary value problem with Galerkin’s method
and test a multistep method for an initial value problem.
Task 1 Consider a steel rod with a quadratic cross section which is loaded
    eccentrically by a force F, see figure. The differential equation for the
    displacement of its central line (dotted line) is given by
        E I w''(x) + F w(x) = \frac{a}{b} F x
    where E is Young's modulus, 208 GPa, the cross section moment I = 0.036 m^4,
    the length b = 3 m and the height a = 0.06 m. The force is F = 180 kN.
    The rod is supported at both ends, which leads to the boundary conditions
        w(0) = w(b) = 0.

    [Figure: rod with eccentric load F]

    The origin of the coordinate system is assumed in the left support.
    • Compute numerically the bending w(x) by using Galerkin's method
      together with a linear spline space. Choose an appropriate size of
      the discretization parameter h.
    • The exact solution is
          w(x) = a \left( \frac{x}{b} - \frac{\sin(λx)}{\sin(λb)} \right)
      with λ = \sqrt{F/(EI)}. Decrease h successively and make a plot of
      the error versus the step size. Which order of the error do you
      observe?
    • Is the stiffness matrix M of the problem positive or negative
      definite? Give your answer for a specific h.
    • The bending moment is defined as E I w''(x). Plot the bending
      moment and determine its extremal values.
1
Task 2 We compute once more (see Assignment 2) the steady state position
    of the 2D truck model. Now we compute it by solving its differential
    equation. We take as initial conditions p(0) from Assignment 2
    and v(0) = 0 (9 components). The differential equation is given by
    the file trurhs.m and the corresponding subprograms, which can be
    downloaded as a zip-file from the web.
    • Solve this task with MATLAB's ODE solver ode15s.
    • Write a 3-step Adams–Moulton method myAdams3 and solve the
      same task with your code and constant step size. Use fixed point
      iteration for the corrector iteration and start your code with three
      steps of the Euler method.
2
Claus Führer, 2008-02-26
FMN081: Computational Methods in Mechanics
Numerical Analysis, Matematikcentrum
Assignment 7 (Exam Preparation)
Last hand-in day: – no hand in –
Goal: This practical project is intended as a practical preparation for the
final exam. In contrast to the homework, it is recommended that you work
on it individually. Clearly, you can discuss your answers and problems with
everybody, and you may also use all the help you find in books, lecture notes etc.
But work out your own solution. In case you get stuck with MATLAB, note
that for the exam it is not MATLAB which matters; it is the understanding of
the assignment problems and of the solution methods which will be the topic
of the exam. Think also about possible alternative solution techniques, if any.
You need not hand in anything, but don't forget to bring your answers to
the final exam, where we will pose questions related to all homeworks
and to this project in particular.
There will be consulting hours also for the project (see the web page for
dates).
Scenario: Consider the pendulum equation from the lecture with the same
    initial conditions. This time we place an obstacle (ett hinder, sv.) at the
    angle α_obst = \frac{2}{3} π, see picture. The obstacle will be hit at an unknown
    time t_obst. Let's say that the pendulum then has an (also unknown) angular
    velocity \dot{α}_obst. When the obstacle is hit, we have to restart the integration
    of the problem with the new initial conditions (α_obst, -\dot{α}_obst)^T.
Your task is to integrate the pendulum numerically over many periods
    and to show how the obstacle influences its trajectory. Furthermore,
    we are interested in t_obst, α_obst and \dot{α}_obst. The integration should be
    performed with a BDF 2-step method. To start the integration, use an
    implicit Euler step first.
Organization of your work:
1. Write a MATLAB code which performs a single step of the implicit
   Euler method.
2. Write a MATLAB code performing a single step of BDF-2 (for the
   coefficients of the method and of its predictor, see the end of this
   project page).
1
[Figure: pendulum with obstacle at angle α_obst]
3. Combine these programs and solve the problem without the obstacle.
   Check your solution by comparing it to the results from the lecture notes.
4. Write a MATLAB program which computes an interpolation polynomial
   for three consecutive solution points u_{n+1}, u_n, u_{n-1}.
5. Call this MATLAB function after every integration step, so that in a
   typical step t_n → t_{n+1} you have the coefficients of a polynomial
   p(t) available. Note that this polynomial is a vector valued function
   p(t) = (p_α(t), p_{α̇}(t))^T with two components, one interpolating
   angles and the other interpolating angular velocities.
6. Check if the integration passed the obstacle. To this end, you have to
   check if the function p_α(t) - α_obst = 0 has a solution in [t_n, t_{n+1}].
   If you find a solution of this nonlinear equation, you call it t_obst.
   It is the time when the obstacle is hit.
7. Compute α_obst and \dot{α}_obst by evaluating the polynomial.
8. Restart your integration by setting the initial time to t_obst and
   by setting new initial conditions (as given above).
9. Perform your integration over 5 s with an appropriate fixed
   step size h.
10. Make a plot of your result. A phase plot (α versus α̇) is quite
    instructive.
Note, this exercise tries to put the content of the several chapters of the
course into a common application-oriented setting. Here you should just
check to what extent you understood the material of the course and that
you can handle practical problems. In the theoretical exam you will be asked
a couple of questions around this problem and the other home assignments.
This exercise may appear hard, as it consists of a combination of many
different steps. There will be help as usual. So just try.
2
Lycka till!
3
Flow Chart 1.1: Generic Integrator Call
    Initialization:
        a) model data: mass, gravity, etc.
        b) integrator control data: initial step size, tolerance, etc.
    For tout = tstart : Δtout : tend:
        ODE solver (see Flow Chart 1.2); on return: x(tout), error code.
        Error? Yes: error handling, break. No: save data.
    Postprocessing.

Flow Chart 1.2: Generic Single Step Integrator Call
    First step? Yes: initialization h = h_0.
    While t < tout:
        ODE step solver (see Flow Chart 1.3); on return: x(tout), error code.
        Error? No: t = t + h.
               Yes: step size too small?
                    No: redo step with new step size.
                    Yes: error handling, break.

Flow Chart 1.3: Generic Single Step Integrator Organisation
    Predict.
    New Jacobian needed?
        Yes: compute Jacobian, Jac_new = 1.
        No: Jac_new = 0.
    Corrector iteration (Newton iteration); return: solution or error code.
    Convergence?
        No: if Jac_new = 0, redo the step with the same step size and require
            a new Jacobian; otherwise redo the step with h = h/2 and require
            a new Jacobian.
        Yes: estimate error.
             Error < Tol?
                 Yes: accept step, increase step size.
                 No: reject step, decrease step size.