
Exam Review Exercises - Tutorial 12

Tiago Salvador
Department of Mathematics and Statistics, McGill University
tiago.saldanhasalvador@mail.mcgill.ca
November 26, 2013
1. (a) State the "Fixed Point Theorem" which gives sufficient conditions for an iteration $x_{n+1} = g(x_n)$ to converge to a fixed point.

Solution. The Fixed Point Theorem states the following: let $g \in C[a,b]$ be such that $g'$ exists on $(a,b)$. Suppose that

i. $g(x) \in [a,b]$ for all $x \in [a,b]$;
ii. there is a constant $k$ with $0 < k < 1$ such that $|g'(x)| \le k$ for all $x \in (a,b)$.

Then for any $x_0 \in [a,b]$, the sequence defined by $x_{n+1} = g(x_n)$, $n \ge 0$, converges to the unique fixed point $x^* \in [a,b]$.
(b) Consider the iteration with $g(x) = x + \frac{1}{2}(2 - e^x)$.

i. Show that the iteration has a fixed point $x^* = \log(2)$.

Solution. The fixed point $x^*$ is such that $x^* = g(x^*)$. We then have
\[ g(x) = x \iff x + \tfrac{1}{2}(2 - e^x) = x \iff 2 - e^x = 0 \iff x = \log(2), \]
and so we are done.
ii. Show that the scheme satisfies all the conditions of the Fixed Point Theorem on the interval [0, 1].

Solution. We have $g'(x) = 1 - \frac{1}{2}e^x$ and $g'(x) = 0 \iff x = \log(2)$. Hence

x      |  0   |   |  log(2)  |   |  1
g'(x)  |  +   | + |    0     | - |  -
g(x)   | 1/2  | ↗ |  log(2)  | ↘ |  ≈ 0.64

and therefore $g(x) \in [1/2, \log(2)] \subset [0,1]$ for all $x \in [0,1]$. This means that the first condition of the Fixed Point Theorem is satisfied. We now observe that $g''(x) = -\frac{e^x}{2} < 0$, so $g'$ is decreasing and the maximum of $|g'|$ on $[0,1]$ is attained at either $x = 0$ or $x = 1$. Thus
\[ |g'(x)| \le \max\{|g'(0)|, |g'(1)|\} = \max\{\tfrac{1}{2}, \tfrac{e}{2} - 1\} = \tfrac{1}{2} =: k \]
for all $x \in [0,1]$. Since $0 < k < 1$, the second condition of the Fixed Point Theorem is also satisfied.
iii. What is the order of convergence of the scheme?

Solution. To determine the order of convergence of the scheme we need to look at $g'(x^*)$:
\[ g'(x^*) = 1 - \tfrac{1}{2}e^{x^*} = 1 - \tfrac{1}{2}e^{\log(2)} = 0. \]
Since $g'(x^*) = 0$ while $g''(x^*) = -\tfrac{1}{2}e^{\log(2)} = -1 \ne 0$, the scheme has order of convergence 2.
iv. Let $x_0 = 0.5$ and compute $x_3$.

Solution.

x_0   x_1        x_2        x_3        log(2)
0.5   0.675639   0.692995   0.693147   0.693147
v. What is the relative error of $x_3$ as an approximation to $\log(2)$?

Solution. The relative error is given by
\[ \frac{|x_3 - x^*|}{|x^*|} \approx 1.67467 \times 10^{-8}. \]
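These iterates are easy to reproduce with a short script; the helper name `fixed_point` and the step count are illustrative choices, not part of the exercise.

```python
import math

def fixed_point(g, x0, n_steps):
    """Run the fixed-point iteration x_{k+1} = g(x_k) for n_steps steps."""
    xs = [x0]
    for _ in range(n_steps):
        xs.append(g(xs[-1]))
    return xs

g = lambda x: x + 0.5 * (2.0 - math.exp(x))  # g(x) = x + (2 - e^x)/2

xs = fixed_point(g, 0.5, 3)
for k, x in enumerate(xs):
    print(f"x_{k} = {x:.6f}")
print(f"relative error of x_3: {abs(xs[3] - math.log(2)) / math.log(2):.5e}")
```

The near-doubling of correct digits from one iterate to the next is the quadratic convergence established in part iii.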
2. Given

i        0   1   2    3
x_i      0   1   2    3
f(x_i)   2   3   10   29

construct the appropriate table of divided differences and hence state

i. the polynomial of degree 3 which interpolates f at $x_0$, $x_1$, $x_2$ and $x_3$;
ii. the polynomial of degree 2 which interpolates f at $x_1$, $x_2$ and $x_3$.

Solution. The table of divided differences is given by

x_i   f[x_i]   f[x_i, x_{i+1}]   f[x_i, x_{i+1}, x_{i+2}]   f[x_i, ..., x_{i+3}]
0     2
               1
1     3                          3
               7                                            1
2     10                         6
               19
3     29

We then have

i. $p(x) = 2 + x + 3x(x-1) + x(x-1)(x-2) = 2 + x^3$;
ii. $p(x) = 3 + 7(x-1) + 6(x-1)(x-2) = 8 - 11x + 6x^2$.
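The top diagonal of the table (the coefficients 2, 1, 3, 1 of the Newton form in part i) can be computed programmatically; a sketch, with helper names of my choosing:

```python
def newton_coeffs(xs, ys):
    """Top diagonal of the divided-difference table:
    f[x_0], f[x_0,x_1], ..., f[x_0,...,x_n]."""
    coeffs = list(ys)
    n = len(xs)
    for j in range(1, n):
        # update from the bottom so lower entries are still order j-1 differences
        for i in range(n - 1, j - 1, -1):
            coeffs[i] = (coeffs[i] - coeffs[i - 1]) / (xs[i] - xs[i - j])
    return coeffs

def newton_eval(coeffs, xs, x):
    """Evaluate the Newton-form polynomial by nested multiplication."""
    result = coeffs[-1]
    for k in range(len(coeffs) - 2, -1, -1):
        result = result * (x - xs[k]) + coeffs[k]
    return result

xs, ys = [0, 1, 2, 3], [2, 3, 10, 29]
c = newton_coeffs(xs, ys)
print(c)  # Newton-form coefficients of the interpolating cubic
```

Evaluating the Newton form anywhere should agree with the simplified polynomial $2 + x^3$.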
3. Consider the forward difference approximation
\[ f'(x_0) \approx N_1(h) = \frac{f(x_0 + h) - f(x_0)}{h} \]
and the data

x      0   0.1        0.2       0.4
f(x)   0   0.099833   0.19867   0.38942
(a) Using Taylor series, or otherwise, show that $f'(x_0) = N_1(h) + c_1 h + c_2 h^2 + O(h^3)$.

Solution. The Taylor series tells us that
\[ f(x_0 + h) = f(x_0) + h f'(x_0) + \frac{h^2}{2} f''(x_0) + \frac{h^3}{6} f'''(x_0) + O(h^4). \]
Hence
\[ f'(x_0) = N_1(h) - \frac{h}{2} f''(x_0) - \frac{h^2}{6} f'''(x_0) + O(h^3), \]
as desired, with $c_1 = -\frac{1}{2} f''(x_0)$ and $c_2 = -\frac{1}{6} f'''(x_0)$.
(b) Use Richardson extrapolation to find $N_2(h)$ such that $f'(x_0) = N_2(h) + k_2 h^2 + O(h^3)$ and $N_3(h)$ such that $f'(x_0) = N_3(h) + O(h^3)$. (The formula for $N_2(h)$ should involve $N_1(h)$ and $N_1(h/2)$.)
Solution. If we replace h by h/2 in the expression in (a), we get
\[ f'(x_0) = N_1\!\left(\frac{h}{2}\right) + c_1 \frac{h}{2} + c_2 \frac{h^2}{4} + \dots \]
Subtracting the equation in (a) from twice the above equation, we eliminate the term involving $c_1$ and get
\[ f'(x_0) = 2 N_1\!\left(\frac{h}{2}\right) - N_1(h) + c_2 \left( \frac{h^2}{2} - h^2 \right) + \dots \]
Set
\[ N_2(h) := 2 N_1\!\left(\frac{h}{2}\right) - N_1(h). \]
Hence
\[ f'(x_0) = N_2(h) + k_2 h^2 + O(h^3), \qquad k_2 = -\frac{c_2}{2}. \]
Therefore $N_2(h)$ is an approximation of order $O(h^2)$ to $f'(x_0)$. Replacing now h by h/2 in the expression above yields
\[ f'(x_0) = N_2\!\left(\frac{h}{2}\right) + \frac{k_2}{4} h^2 + O(h^3). \]
Subtracting the previous equation from four times this one and dividing by 3, we get
\[ f'(x_0) = \frac{4 N_2\!\left(\frac{h}{2}\right) - N_2(h)}{3} + O(h^3). \]
Therefore an $O(h^3)$ approximation to $f'(x_0)$ is given by
\[ N_3(h) := \frac{4 N_2\!\left(\frac{h}{2}\right) - N_2(h)}{3}. \]
(c) Taking $x_0 = 0$, evaluate $N_1(0.1)$, $N_1(0.2)$ and $N_1(0.4)$, and use these values to evaluate $N_2(h)$ for two values of h and $N_3(h)$ for one value of h.

Solution.

h        0.4       0.2       0.1
N_1(h)   0.97355   0.99335   0.99833
N_2(h)   1.01315   1.00331   -
N_3(h)   1.00003   -         -
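The tabulated data are consistent with $f(x) = \sin(x)$; treating that as an assumption, the table can be regenerated as below. The last digits can differ slightly from the table, which was built from f values rounded to five figures.

```python
import math

def N1(f, x0, h):
    """Forward difference approximation to f'(x0)."""
    return (f(x0 + h) - f(x0)) / h

def N2(f, x0, h):
    """First Richardson extrapolation: eliminates the O(h) error term."""
    return 2 * N1(f, x0, h / 2) - N1(f, x0, h)

def N3(f, x0, h):
    """Second Richardson extrapolation: eliminates the O(h^2) term as well."""
    return (4 * N2(f, x0, h / 2) - N2(f, x0, h)) / 3

f, x0 = math.sin, 0.0
for h in (0.4, 0.2, 0.1):
    print(f"N1({h}) = {N1(f, x0, h):.5f}")
print(f"N2(0.4) = {N2(f, x0, 0.4):.5f}, N2(0.2) = {N2(f, x0, 0.2):.5f}")
print(f"N3(0.4) = {N3(f, x0, 0.4):.5f}")
```

Each extrapolation reuses only previously computed $N_1$ values, so the extra accuracy costs no new function evaluations.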
4. (a) Define the degree of accuracy (also known as the degree of precision) of a quadrature formula $I_h(f)$ for approximating the integral
\[ I(f) = \int_a^b f(x)\, dx. \]

Solution. The degree of accuracy of a quadrature formula $I_h(f)$ for approximating the integral $I(f)$ is the largest positive integer n such that $I(x^k) = I_h(x^k)$ for each $k = 0, \dots, n$.
(b) Find the degree of accuracy p of the quadrature formula
\[ I_h(f) = \frac{3}{2} h \left[ f(x_1) + f(x_2) \right] \]
where $a = x_0$, $b = x_3$ and $h = x_{i+1} - x_i$.

Solution. We start by observing that $h = \frac{b-a}{3}$ and $x_i = a + ih = b - (3 - i)h$. Taking $f(x) = 1$, we get
\[ I(f) = b - a \]
and
\[ I_h(f) = \frac{3}{2} h (1 + 1) = 3h = b - a. \]
Taking now $f(x) = x$ leads to
\[ I(f) = \frac{b^2 - a^2}{2} \]
and
\[ I_h(f) = \frac{3}{2} h (a + h + b - h) = \frac{b - a}{2} (b + a) = \frac{b^2 - a^2}{2}. \]
Taking now $f(x) = x^2$ we get
\[ I(f) = \frac{b^3 - a^3}{3} \]
and
\[ I_h(f) = \frac{3h}{2} \left[ (a + h)^2 + (b - h)^2 \right] = \frac{b - a}{2} \left( a^2 + 2ha + h^2 + b^2 - 2hb + h^2 \right) = \frac{b - a}{18} \left( 5a^2 + 8ab + 5b^2 \right) \ne \frac{b^3 - a^3}{3}. \]
Therefore the degree of accuracy is 1.
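A quick numerical sanity check of this conclusion; the test interval $[0,1]$ is an arbitrary choice.

```python
def I_h(f, a, b):
    """Quadrature rule (3/2) h [f(x_1) + f(x_2)] with h = (b - a)/3."""
    h = (b - a) / 3
    return 1.5 * h * (f(a + h) + f(a + 2 * h))

a, b = 0.0, 1.0
for k in range(4):
    approx = I_h(lambda x: x ** k, a, b)
    exact = (b ** (k + 1) - a ** (k + 1)) / (k + 1)  # integral of x^k over [a, b]
    print(f"k = {k}: I_h = {approx:.6f}, I = {exact:.6f}")
```

The rule matches the exact integral for k = 0 and k = 1 but not k = 2, confirming degree of accuracy 1.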
(c) Find constants α, β and γ such that the degree of accuracy of the quadrature formula
\[ I_h(f) = h \left[ \alpha f(a) + \beta f(a + \gamma h) \right] \]
is as large as possible, where $h = b - a$.

Solution. Taking $f(x) = 1$ leads to
\[ I(f) = b - a \]
and
\[ I_h(f) = h(\alpha + \beta) = (b - a)(\alpha + \beta). \]
We then get the equation $\alpha + \beta = 1$. Taking now $f(x) = x$ leads to
\[ I(f) = \frac{b^2 - a^2}{2} = \frac{h}{2}(b + a) \]
and
\[ I_h(f) = h \left[ \alpha a + \beta (a + \gamma h) \right]. \]
We now get the equation $\alpha a + \beta(a + \gamma h) = \frac{b+a}{2}$. Finally, taking $f(x) = x^2$ we get
\[ I(f) = \frac{b^3 - a^3}{3} = \frac{a^2 + ab + b^2}{3} h \]
and
\[ I_h(f) = h \left[ \alpha a^2 + \beta (a + \gamma h)^2 \right]. \]
This leads to the equation
\[ \alpha a^2 + \beta (a + \gamma h)^2 = \frac{a^2 + ab + b^2}{3}. \]
We then get the nonlinear system
\[ \begin{cases} \alpha + \beta = 1 \\ \alpha a + \beta a + \beta \gamma h = \frac{b+a}{2} \\ \alpha a^2 + \beta (a + \gamma h)^2 = \frac{a^2 + ab + b^2}{3} \end{cases} \]
which has the solution
\[ \alpha = \frac{1}{4}, \qquad \beta = \frac{3}{4}, \qquad \gamma = \frac{2}{3}. \]
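One can confirm numerically that these constants make the rule exact through degree 2 but not degree 3; the function name and the test interval below are mine, not from the exercise.

```python
def two_point_rule(f, a, b, alpha=0.25, beta=0.75, gamma=2 / 3):
    """Quadrature rule h [alpha f(a) + beta f(a + gamma h)] with h = b - a."""
    h = b - a
    return h * (alpha * f(a) + beta * f(a + gamma * h))

a, b = 1.0, 3.0
for k in range(4):
    approx = two_point_rule(lambda x: x ** k, a, b)
    exact = (b ** (k + 1) - a ** (k + 1)) / (k + 1)  # integral of x^k over [a, b]
    print(f"k = {k}: rule = {approx:.6f}, exact = {exact:.6f}")
```

Two function evaluations achieving degree 2 beats the trapezoidal rule (degree 1 with two evaluations), which is why placing the free node at $a + \frac{2}{3}h$ pays off.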
5. (a) What is the key difference between Lagrange and Hermite interpolation? What is the difference between a clamped and a natural cubic spline?

Solution. The key difference is that in Hermite interpolation we match not only the function values at the nodes (as in Lagrange interpolation) but also the derivatives. In a clamped cubic spline we impose the extra conditions $S'(x_0) = f'(x_0)$ and $S'(x_n) = f'(x_n)$, while in a natural cubic spline we impose the conditions $S''(x_0) = S''(x_n) = 0$.

(b) A natural cubic spline S on [0, 2] has the formula
\[ S(x) = \begin{cases} S_0(x) = 1 + 2x - x^3, & 0 \le x < 1 \\ S_1(x) = a + b(x-1) + c(x-1)^2 + d(x-1)^3, & 1 \le x \le 2. \end{cases} \]
Find a, b, c and d.
Solution. We have
\[ S'(x) = \begin{cases} S_0'(x) = 2 - 3x^2, & 0 \le x < 1 \\ S_1'(x) = b + 2c(x-1) + 3d(x-1)^2, & 1 \le x \le 2 \end{cases} \]
and
\[ S''(x) = \begin{cases} S_0''(x) = -6x, & 0 \le x < 1 \\ S_1''(x) = 2c + 6d(x-1), & 1 \le x \le 2. \end{cases} \]
We know that $S \in C^2[0,2]$. Therefore
\[ \begin{cases} S_1(1) = S_0(1) \\ S_1'(1) = S_0'(1) \\ S_1''(1) = S_0''(1) \end{cases} \]
which leads to
\[ a = 2, \qquad b = -1, \qquad c = -3. \]
Since it is a natural cubic spline, $S''(2) = 0$ and so $-6 + 6d = 0 \iff d = 1$.
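These conditions are easy to verify numerically; note $S_0(1) = 1 + 2 - 1 = 2$, which is where $a = 2$ comes from. The helper names below are mine.

```python
def S0(x):
    return 1 + 2 * x - x ** 3

def S1(x, a=2.0, b=-1.0, c=-3.0, d=1.0):
    t = x - 1
    return a + b * t + c * t ** 2 + d * t ** 3

def second_derivative(f, x, eps=1e-5):
    """Central difference approximation to f''(x); exact for cubics up to rounding."""
    return (f(x + eps) - 2 * f(x) + f(x - eps)) / eps ** 2

# Continuity of S at the knot x = 1 and the natural end condition S''(2) = 0
print(S0(1), S1(1))
print(second_derivative(S1, 2.0))
```

The same finite-difference check applied at x = 0 confirms the other natural condition, $S''(0) = 0$, which the given piece $S_0$ already satisfies.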
6. (a) Use the Taylor expansion
\[ f(x^*) = f(x) + (x^* - x) f'(x) + \frac{(x^* - x)^2}{2} f''(\xi) \]
for $f \in C^2[a,b]$ to derive Newton's method for approximating a root $x^*$ of the equation $f(x) = 0$.

Solution. The idea is that, given a guess $x_n$, we want to compute a new guess $x_{n+1}$ such that $f(x_{n+1}) \approx 0$. Hence, ignoring the error term in the Taylor expansion and taking $x^* = x_{n+1}$ and $x = x_n$, we get
\[ 0 = f(x_n) + (x_{n+1} - x_n) f'(x_n) \]
and so
\[ x_{n+1} = x_n - \frac{f(x_n)}{f'(x_n)}, \]
which is indeed Newton's method, as desired.
(b) Show that Newton's method can be written as a fixed point iteration
\[ x_{n+1} = g(x_n) \]
for a suitable choice of g(x).

Solution. We just need to take
\[ g(x) = x - \frac{f(x)}{f'(x)}. \]
(c) Furthermore, show that $g'(x^*) = 0$ provided that $f'(x^*) \ne 0$. What can we say about the fixed point iteration in such a case?

Solution. We have that
\[ g'(x) = 1 - \frac{f'(x) f'(x) - f(x) f''(x)}{[f'(x)]^2} = \frac{f(x) f''(x)}{[f'(x)]^2}. \]
Hence, since $f(x^*) = 0$ and $f'(x^*) \ne 0$, we get $g'(x^*) = 0$. In this case the fixed point iteration will converge quadratically to $x^*$ provided $x_0$ is close enough to $x^*$, i.e., if $x_0 \in [x^* - \delta, x^* + \delta]$ for some $\delta > 0$.
(d) Find $\lim_{x \to x^*} g'(x)$ for Newton's method when $f'(x^*) = 0$ but $f''(x^*) \ne 0$. What can we say about the fixed point iteration in such a case?

Solution. Since we assume that $f(x^*) = f'(x^*) = 0$ and $f''(x^*) \ne 0$, we can write $f(x) = (x - x^*)^2 r(x)$ with $r(x^*) \ne 0$. Then
\[ f'(x) = 2(x - x^*) r(x) + (x - x^*)^2 r'(x) = (x - x^*) \left( 2r(x) + (x - x^*) r'(x) \right) \]
and
\[ f''(x) = 2r(x) + (x - x^*) k(x) \]
for some function k. Then
\[ \lim_{x \to x^*} g'(x) = \lim_{x \to x^*} \frac{f(x) f''(x)}{[f'(x)]^2} = \lim_{x \to x^*} \frac{(x - x^*)^2 r(x) \left( 2r(x) + (x - x^*) k(x) \right)}{(x - x^*)^2 \left( 2r(x) + (x - x^*) r'(x) \right)^2} = \frac{2 r(x^*)^2}{4 r(x^*)^2} = \frac{1}{2}. \]
In this case the fixed point iteration will converge only linearly to $x^*$, provided $x_0$ is close enough to $x^*$.
(e) The root $x^* = 5$ of $f(x) = x^3 - 9x^2 + 15x + 25$ is approximated using Newton's method with $x_0 = 3$. What is the order of convergence?

Solution. We have $f(x) = (x - 5)^2 (x + 1)$, so $x^* = 5$ is a double root. Computing the first two iterations gives $x_1 = \frac{13}{3}$ and $x_2 = \frac{211}{45}$. Hence this fixed point iteration indeed converges to $x^* = 5$, and using (d) we conclude that the order of convergence is 1.
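A short experiment illustrates the linear convergence at the double root: the successive error ratios approach $g'(x^*) = \frac{1}{2}$. The iteration count is an arbitrary choice.

```python
def newton(f, fprime, x0, n_steps):
    """Newton's method: x_{n+1} = x_n - f(x_n)/f'(x_n)."""
    xs = [x0]
    for _ in range(n_steps):
        x = xs[-1]
        xs.append(x - f(x) / fprime(x))
    return xs

f = lambda x: x ** 3 - 9 * x ** 2 + 15 * x + 25   # = (x - 5)^2 (x + 1)
fprime = lambda x: 3 * x ** 2 - 18 * x + 15

xs = newton(f, fprime, 3.0, 20)
errors = [abs(x - 5) for x in xs]
# successive error ratios tend to 1/2, the signature of linear convergence
for e0, e1 in zip(errors, errors[1:6]):
    print(f"{e1 / e0:.4f}")
```

Each step roughly halves the error instead of squaring it, matching the limit computed in (d).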
7. Consider the initial value problem
\[ y'(t) = f(t, y(t)) = \lambda y(t), \quad 0 \le t \le T, \qquad y(0) = \alpha > 0, \quad \lambda < 0. \]
Suppose you approximate the solution $y(\cdot)$ using the Runge-Kutta method
\[ y_{i+1} = y_i + \frac{1}{4} h f(t_i, y_i) + \frac{3}{4} h f\!\left( t_i + \frac{2}{3} h,\; y_i + \frac{2}{3} h f(t_i, y_i) \right), \quad i = 0, \dots, N - 1, \qquad y_0 = \alpha, \]
with time step h.
(a) Show that $y(t_{i+1}) = e^{\lambda h} y(t_i)$.

Solution. The solution is given by $y(t) = \alpha e^{\lambda t}$. Since $t_i = ih$, we have
\[ \frac{y(t_{i+1})}{y(t_i)} = \frac{\alpha e^{\lambda t_{i+1}}}{\alpha e^{\lambda t_i}} = e^{\lambda (t_{i+1} - t_i)} = e^{\lambda h} \]
and so we are done.
(b) Show that $y_{i+1} = \left( 1 + h\lambda + \frac{(h\lambda)^2}{2} \right) y_i$.

Solution. We have
\[ y_{i+1} = y_i + \frac{h}{4} \lambda y_i + \frac{3}{4} h \lambda \left( y_i + \frac{2}{3} h \lambda y_i \right) = y_i + h\lambda y_i + \frac{(h\lambda)^2}{2} y_i, \]
as desired.
(c) Under what conditions on h does $\lim_{i \to \infty} y_i = 0$?

Solution. We need $\left| 1 + h\lambda + \frac{(h\lambda)^2}{2} \right| < 1$. Let us look at the function $g(x) = 1 + x + \frac{x^2}{2}$. We have $g'(x) = 1 + x$, and so the function has a minimum at $x = -1$, since it is convex. Additionally,
\[ g(x) = 1 \iff x \left( 1 + \frac{x}{2} \right) = 0 \iff x = 0 \,\vee\, x = -2. \]
Then, since $g(-1) = \frac{1}{2} > -1$,
\[ |g(h\lambda)| < 1 \iff -2 < h\lambda < 0 \iff 0 < h < -\frac{2}{\lambda}, \]
since $\lambda < 0$.
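The stability bound can be observed numerically; with $\lambda = -10$ the threshold is $h = -2/\lambda = 0.2$. Both the value of λ and the step sizes below are illustrative choices.

```python
def amplification_factor(h, lam):
    """Per-step factor of the scheme: y_{i+1} = g(h*lam) * y_i."""
    z = h * lam
    return 1 + z + z ** 2 / 2

lam = -10.0
for h in (0.1, 0.19, 0.21, 0.3):
    factor = amplification_factor(h, lam)
    behaviour = "decays" if abs(factor) < 1 else "does not decay"
    print(f"h = {h}: |g(h*lam)| = {abs(factor):.4f} -> {behaviour}")
```

Just below the threshold the iterates still decay, while just above it they grow, exactly as the condition $0 < h < -2/\lambda$ predicts.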
(d) Define the local truncation error $\tau_{i+1}(h)$ and show that for this problem
\[ \tau_{i+1}(h) = \frac{h^2 \lambda^3}{6} y(\xi) \]
where $\xi \in (t_i, t_{i+1})$.

Solution. The local truncation error is given by
\[ \tau_{i+1}(h) = \frac{y(t_{i+1}) - \left( 1 + h\lambda + \frac{(h\lambda)^2}{2} \right) y(t_i)}{h}. \]
Using the Taylor expansion we have
\[ y(t_{i+1}) = y(t_i) + h y'(t_i) + \frac{h^2}{2} y''(t_i) + \frac{h^3}{6} y'''(\xi) = y(t_i) + h\lambda y(t_i) + \frac{(h\lambda)^2}{2} y(t_i) + \frac{(h\lambda)^3}{6} y(\xi), \]
where $\xi \in (t_i, t_{i+1})$. Hence
\[ \tau_{i+1}(h) = \frac{h^2 \lambda^3}{6} y(\xi). \]