Root Finding
Improved Algorithms for Root Finding
Error Analysis

Outline
Root Finding
    Fixed Point Iteration
    Detour: Non-unique Fixed Points...
Error Analysis
    Convergence
    Practical Application of the Theorems
    Roots of Higher Multiplicity

Joe Mahaffy, <mahaffy@math.sdsu.edu>
Department of Mathematics
http://www-rohan.sdsu.edu/jmahaffy
Spring 2010
Quick Recap
The fixed point problem
    g(x) = x
is related to a root finding problem: to convert it to a root finding problem, define
    f(x) = g(x) − x.
Then the fixed point p of g(x) satisfies
    f(p) = 0.
Example: cos(p) = p
With a starting value of p = 0.3 we get:
Iteration #7: p = 0.71278594551835, cos(p) = 0.75654296195845, so the next iterate is p = 0.75654296195845.
(Figure: the iterates of p → cos(p) plotted against y = x, closing in on the fixed point.)
What we learn from the analysis will help us find good root finding strategies.
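The iteration above can be sketched in a few lines of Python; `fixed_point` is a hypothetical helper (the name and tolerances are not from the slides):

```python
import math

def fixed_point(g, p0, tol=1e-10, max_iter=100):
    """Iterate p_{n+1} = g(p_n) until two successive iterates agree to tol."""
    p = p0
    for n in range(1, max_iter + 1):
        p_new = g(p)
        if abs(p_new - p) < tol:
            return p_new, n
        p = p_new
    return p, max_iter

p, n = fixed_point(math.cos, 0.3)
print(p)  # the fixed point of cos(x) = x, approx 0.7390851332
```

Since |d/dx cos(x)| = |sin(x)| is roughly 0.67 near the fixed point, the error shrinks by about that factor per step: convergence is linear but reliable.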
Theorem: Existence and Uniqueness of the Fixed Point (1 of 2)
a. If f ∈ C[a, b] and f(x) ∈ [a, b] for all x ∈ [a, b], then f has a fixed point p ∈ [a, b].
b. If, in addition, f′(x) exists on (a, b) and |f′(x)| ≤ k < 1 for all x ∈ (a, b), then the fixed point in [a, b] is unique.
Proof of Uniqueness (2 of 2)
Suppose p ≠ q are both fixed points in [a, b]. By the Mean Value Theorem there is an r between p and q with
    (f(p) − f(q)) / (p − q) = f′(r).
Then
    |p − q| = |f(p) − f(q)| = |f′(r)| |p − q| ≤ k |p − q| < |p − q|.
The contradiction |p − q| < |p − q| shows that the supposition p ≠ q is false. Hence, the fixed point is unique.
Theorem: Convergence of Fixed Point Iteration
Suppose both part (a) and part (b) of the previous theorem (here restated) are satisfied:
a. If f ∈ C[a, b] and f(x) ∈ [a, b] for all x ∈ [a, b], then f has a fixed point p ∈ [a, b]. (Brouwer fixed point theorem)
b. If, in addition, f′(x) exists on (a, b) and |f′(x)| ≤ k < 1 for all x ∈ (a, b), then the fixed point is unique.
c. Then, for any p_0 ∈ [a, b], the sequence defined by
    p_n = f(p_{n-1}),  n = 1, 2, . . . ,
converges to the unique fixed point p ∈ [a, b].
That's great news! We can use any starting point, and we are guaranteed to find the fixed point.

Proof: By the Mean Value Theorem {MVT} there is an r between p_{n-1} and p so that
    |p_n − p| = |f(p_{n-1}) − f(p)| = |f′(r)| |p_{n-1} − p| ≤ k |p_{n-1} − p|    (by b)
Applying this bound repeatedly gives |p_n − p| ≤ k^n |p_0 − p| → 0 as n → ∞.
Example: x³ + 4x² − 10 = 0 (1 of 2)
Fixed point iteration converges quickly for this example:
Iteration #4: p = 1.3652, p³ + 4p² − 10 ≈ −0.0001, and g3(p) = 1.3652, so the new iterate is p = 1.3652.
(Figure: y = g3(x) and y = x on the interval [1, 1.9]; the iterates home in on the intersection.)
Example: x³ + 4x² − 10 = 0 (2 of 2)
The iteration function used above is
    g3(x) = x − (x³ + 4x² − 10) / (3x² + 8x).
(Figure: y = g3(x) and y = x on [1, 1.9].)
(Continued...)

Detour: Non-unique Fixed Points... (1 of 6)
Consider the family of maps f_a(x) = 1 − a x² on [−1, 1].
(Figure: f_a(x) for several values of a, plotted against y = x.)

Detour: Non-unique Fixed Points... (2 of 6)
The fixed point of f_a in [−1, 1] is
    p* = (−1 + √(1 + 4a)) / (2a);
the other root is outside the interval [−1, 1]. At the fixed point,
    f_a′(p*) = −2a · (−1 + √(1 + 4a)) / (2a) = 1 − √(1 + 4a),
so
    |f_a′(p*)| = √(1 + 4a) − 1 > 0.
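As a quick sanity check, the iteration function g3 can be run directly. The starting value p_0 = 1.5 is an assumption for illustration (the slide only shows iteration #4):

```python
def g3(x):
    """Fixed point form of x**3 + 4*x**2 - 10 = 0 (a Newton-style rewrite)."""
    return x - (x**3 + 4*x**2 - 10) / (3*x**2 + 8*x)

p = 1.5  # assumed starting value
for _ in range(4):
    p = g3(p)
print(round(p, 4))  # 1.3652
```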
Detour: Non-unique Fixed Points... (3 of 6)
Let's look at the theoretical fixed point, and the computed values of the fixed point iteration, for values of a between 0 and 1.
(Figure: bifurcation diagram for f(x) = 1 − a x²; horizontal axis: parameter range (a), vertical axis: fixed point / orbit range.)

Detour: Non-unique Fixed Points... (4 of 6)
It turns out that when |f_a′(p*)| > 1 the fixed point exists, but is no longer attractive, i.e. the fixed point iteration does not converge to it. Instead, f_a(f_a(x)) has a unique attracting fixed point for a in the range [0.75, 1] (at least): the iterates settle into a 2-orbit of f_a(x).
For some other critical value of a, the fixed point for f_a(f_a(x)) (the 2-orbit of f_a(x)) also becomes unstable... it breaks into a 4-orbit.
Detour: Non-unique Fixed Points... (5 of 6)
(Figure: bifurcation diagram of the 2-orbit branches; horizontal axis: parameter range (a).)

Detour: Non-unique Fixed Points... (6 of 6)
(Figure: bifurcation diagram over a larger parameter range (a), up to a ≈ 1.8, showing the cascade of period doublings.)
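The bifurcation diagrams sketched above can be reproduced numerically. This is an illustrative sketch, not code from the slides; the helper `orbit_points` and its transient/sample counts are assumptions:

```python
def orbit_points(a, n_transient=500, n_keep=64):
    """Iterate f_a(x) = 1 - a*x**2, discard transients, return the attractor points."""
    x = 0.5
    for _ in range(n_transient):
        x = 1.0 - a * x * x
    pts = set()
    for _ in range(n_keep):
        x = 1.0 - a * x * x
        pts.add(round(x, 6))
    return sorted(pts)

# Below a = 0.75 the iteration settles on the single attractive fixed point;
# just above, it alternates on a stable 2-orbit.
print(len(orbit_points(0.5)))   # 1
print(len(orbit_points(0.9)))   # 2
```

Plotting `orbit_points(a)` against a for a grid of parameter values reproduces the bifurcation diagram.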
Improved Algorithms for Root Finding
(Newton's Method, The Secant Method, The Regula Falsi Method, Quick Summary)

Quick Summary
We have seen that fixed point iteration and root finding are strongly related, but it is not always easy to find a good fixed-point formulation for solving the root-finding problem.
In the next section we will add three new algorithms for root finding:
1. Regula Falsi
2. Secant Method
3. Newton's Method
Newton's Method: Derivation (1 of 2)
Suppose f ∈ C²[a, b], and let x̄ be an approximation to the root x*. Taylor expansion of f about x̄:
    f(x) = f(x̄) + (x − x̄) f′(x̄) + ((x − x̄)² / 2) f″(ξ(x)),   ξ(x) ∈ [x, x̄].
Newton's Method: Derivation (2 of 2)
Set x = x* and use f(x*) = 0; dropping the second order term (assuming |x* − x̄| is small) gives
    0 ≈ f(x̄) + (x* − x̄) f′(x̄),  so  x* ≈ x̄ − f(x̄) / f′(x̄).
This leads to Newton's method:
    x_n = x_{n-1} − f(x_{n-1}) / f′(x_{n-1}).
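A minimal sketch of the iteration in Python; the helper name and stopping rule are assumptions, not from the slides:

```python
def newton(f, fprime, x0, tol=1e-12, max_iter=50):
    """Newton's method: x_n = x_{n-1} - f(x_{n-1}) / f'(x_{n-1})."""
    x = x0
    for _ in range(max_iter):
        step = f(x) / fprime(x)
        x -= step
        if abs(step) < tol:
            return x
    raise RuntimeError("Newton iteration did not converge")

f  = lambda x: x**3 + 4*x**2 - 10
fp = lambda x: 3*x**2 + 8*x
print(newton(f, fp, 1.0))  # approx 1.3652300134141
```

Stopping on the step size |x_n − x_{n-1}| is a common practical choice; it is not a rigorous error bound.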
Example: Newton's Method for x³ + 4x² − 10 = 0
With starting value p_0 = 1:
    p_1 = p_0 − (p_0³ + 4p_0² − 10) / (3p_0² + 8p_0) = 1.45454545454545
    p_2 = p_1 − (p_1³ + 4p_1² − 10) / (3p_1² + 8p_1) = 1.36890040106952
(Figures: the tangent lines at p_0 and p_1; each x-intercept gives the next iterate.)
Theorem: Convergence of Newton's Method
Let f ∈ C²[a, b]. If x* ∈ [a, b] with f(x*) = 0 and f′(x*) ≠ 0, then there exists a δ > 0 such that Newton's method,
    x_n = x_{n-1} − f(x_{n-1}) / f′(x_{n-1}) = g(x_{n-1}),
generates a sequence {x_n}, n = 1, 2, . . . , converging to x* for any initial approximation x_0 ∈ [x* − δ, x* + δ].
The theorem is interesting, but quite useless for practical purposes. In practice: pick a starting value x_0 and iterate a few steps. Either the iterates converge quickly to the root, or it will be clear that convergence is unlikely.
Proof (sketch): Newton's method is fixed point iteration with g(x) = x − f(x)/f′(x), for which
    g′(x) = 1 − ([f′(x)]² − f(x) f″(x)) / [f′(x)]² = f(x) f″(x) / [f′(x)]².
By assumption, f(x*) = 0 and f′(x*) ≠ 0, so g′(x*) = 0. By continuity |g′(x)| ≤ k < 1 in some neighborhood of x*... Hence the fixed point iteration will converge. (Gory details in the book.)
The Secant Method (1 of 3)
Newton's method requires the derivative f′(x_{n-1}). The secant method replaces it by a difference quotient:
    f′(x_{n-1}) = lim_{x → x_{n-1}} (f(x) − f(x_{n-1})) / (x − x_{n-1}) ≈ (f(x_{n-2}) − f(x_{n-1})) / (x_{n-2} − x_{n-1}).
The Secant Method (2 of 3)
Substituting this approximation into Newton's iteration gives
    x_n = x_{n-1} − f(x_{n-1}) / [ (f(x_{n-2}) − f(x_{n-1})) / (x_{n-2} − x_{n-1}) ]
        = x_{n-1} − f(x_{n-1}) (x_{n-2} − x_{n-1}) / (f(x_{n-2}) − f(x_{n-1})).

The Secant Method (3 of 3)
(Figure: successive secant lines through (x_{n-2}, f(x_{n-2})) and (x_{n-1}, f(x_{n-1})); each x-intercept becomes the next iterate.)
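A sketch of the secant iteration; the helper name and stopping rule are assumptions:

```python
def secant(f, x0, x1, tol=1e-12, max_iter=100):
    """Secant method: Newton's method with f' replaced by a difference quotient."""
    f0, f1 = f(x0), f(x1)
    for _ in range(max_iter):
        x2 = x1 - f1 * (x0 - x1) / (f0 - f1)
        if abs(x2 - x1) < tol:
            return x2
        x0, f0 = x1, f1
        x1, f1 = x2, f(x2)
    raise RuntimeError("secant iteration did not converge")

print(secant(lambda x: x**3 + 4*x**2 - 10, 1.0, 2.0))  # approx 1.3652300134141
```

Note that only one new function evaluation is needed per iteration: the previous value f(x_{n-1}) is reused.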
The Regula Falsi Method
Like bisection, regula falsi maintains a bracket [a_{n-1}, b_{n-1}] containing a sign change, but it uses the secant point instead of the midpoint:
    s_n = b_{n-1} − f(b_{n-1}) (a_{n-1} − b_{n-1}) / (f(a_{n-1}) − f(b_{n-1})).
Update as in the bisection method:
    if f(a_{n-1}) f(s_n) > 0, then a_n = s_n, b_n = b_{n-1};
    if f(a_{n-1}) f(s_n) < 0, then a_n = a_{n-1}, b_n = s_n.
Regula Falsi is seldom used (it can run into some issues to be explored soon), but illustrates how bracketing can be incorporated.
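The update rules above translate directly into code. This is an illustrative sketch (helper name, tolerance, and stopping rule are assumptions):

```python
def regula_falsi(f, a, b, tol=1e-10, max_iter=200):
    """Regula Falsi: secant point of the bracket, bisection-style bracket update."""
    fa, fb = f(a), f(b)
    assert fa * fb < 0, "need a sign change on [a, b]"
    s = a
    for _ in range(max_iter):
        s_old, s = s, b - fb * (a - b) / (fa - fb)
        fs = f(s)
        if abs(s - s_old) < tol or fs == 0.0:
            return s
        if fa * fs > 0:     # root is in [s, b]
            a, fa = s, fs
        else:               # root is in [a, s]
            b, fb = s, fs
    return s

root = regula_falsi(lambda x: x**3 + 4*x**2 - 10, 1.0, 2.0)
print(root)  # approx 1.36523
```

Unlike the secant method, the root stays bracketed; the price is that one endpoint can stall, which slows convergence.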
Summary: Next Iterate
Method        Next Iterate
Bisection     Midpoint of the bracketing interval, m_{n+1} = (a_n + b_n)/2:
              if f(m_{n+1}) f(b_n) < 0, then {a_{n+1} = m_{n+1}, b_{n+1} = b_n},
              else {a_{n+1} = a_n, b_{n+1} = m_{n+1}}.
Regula Falsi  x_{n+1} = x_n − f(x_n) (x_n − x_{n-1}) / (f(x_n) − f(x_{n-1})), keeping a bracket around the root.
Secant        x_{n+1} = x_n − f(x_n) (x_n − x_{n-1}) / (f(x_n) − f(x_{n-1})).
Newton        x_{n+1} = x_n − f(x_n) / f′(x_n).
Summary: Convergence
Method        Convergence
Bisection     Linear; the bracket is halved in each iteration.
Regula Falsi  Typically linear; one endpoint of the bracket may stall.
Secant        Superlinear (order ≈ 1.618) for simple roots.
Newton        Quadratic for simple roots.

Summary: Cost
Method        Cost
Bisection     One new function evaluation per iteration; cheap bookkeeping.
Regula Falsi  Higher cost per iteration compared with Secant (conditional statements); requires more iterations than Secant.
Secant        One new function evaluation per iteration.
Newton        One function evaluation and one derivative evaluation per iteration.

Summary: Comments
Method        Comments
Bisection     Guaranteed to converge once a sign change is bracketed; slow.
Regula Falsi  Keeps a bracket around the root, but is seldom used in practice.
Secant        No derivative needed; may fail without good starting values.
Newton        Fastest, but requires f′(x) and a good starting point.

For each scheme, keep in mind:
How to start
How to update
Can the scheme break?
Can we fix breakage? (How???)
Relation to Fixed-Point Iteration
In the next section we will discuss the convergence in more detail.
Error Analysis
(Convergence, Practical Application of the Theorems, Roots of Higher Multiplicity)
Definition
Suppose the sequence {p_n}, n = 0, 1, 2, . . . , converges to p, with p_n ≠ p for all n. If positive constants λ and α exist with
    lim_{n → ∞} |p_{n+1} − p| / |p_n − p|^α = λ,
then {p_n} converges to p of order α, with asymptotic error constant λ.
An iterative technique of the form p_n = g(p_{n-1}) is said to be of order α if the sequence {p_n} converges to the solution p = g(p) of order α.
Bottom line: High order (α) means faster convergence (more desirable). λ has an effect, but is less important than the order.
Example: Linear vs. Quadratic Convergence (1 of 2)
Suppose {p_n} converges linearly and {q_n} converges quadratically to 0, with
    lim_{n → ∞} |p_{n+1}| / |p_n| = λ_p,   lim_{n → ∞} |q_{n+1}| / |q_n|² = λ_q.
Roughly, then,
    |p_n| ≈ λ_p^n |p_0|,   |q_n| ≈ λ_q |q_{n-1}|² ≈ λ_q^(2^n − 1) |q_0|^(2^n).
Example: Linear vs. Quadratic Convergence (2 of 2)
With λ_p = λ_q = 0.9 and p_0 = q_0 = 1:

n    p_n           q_n
0    1             1
1    0.9           0.9
2    0.81          0.729
3    0.729         0.4782969
4    0.6561        0.205891132094649
5    0.59049       0.0381520424476946
6    0.531441      0.00131002050863762
7    0.4782969     0.00000154453835975
8    0.43046721    0.0000000000021470
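The table can be regenerated directly from the closed forms p_n = 0.9^n and q_n = 0.9^(2^n − 1):

```python
# Linear sequence: p_n = 0.9**n.  Quadratic sequence: q_n = 0.9**(2**n - 1).
for n in range(9):
    p_n = 0.9 ** n
    q_n = 0.9 ** (2 ** n - 1)
    print(f"{n}  {p_n:.8f}  {q_n:.17f}")
```

By n = 8 the quadratic sequence has dropped to about 2e-12, while the linear one has only reached about 0.43.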
Theorem: Linear Convergence of Fixed Point Iteration
Let g ∈ C[a, b] be such that g(x) ∈ [a, b] for all x ∈ [a, b]. Suppose, in addition, that g′(x) is continuous on (a, b) and there is a positive constant k < 1 so that
    |g′(x)| ≤ k,  for all x ∈ (a, b).
If g′(p) ≠ 0, then for any starting point p_0 ∈ [a, b] the sequence
    p_n = g(p_{n-1}),  n = 1, 2, . . . ,
converges only linearly to the unique fixed point p ∈ [a, b].

Proof
The existence and uniqueness of the fixed point follows from the fixed point theorem. By the Mean Value Theorem there is ξ_n ∈ (p_n, p) so that
    p_{n+1} − p = g(p_n) − g(p) = g′(ξ_n) (p_n − p).
Since p_n → p, we also have ξ_n → p. Thus,
    lim_{n → ∞} |p_{n+1} − p| / |p_n − p| = lim_{n → ∞} |g′(ξ_n)| = |g′(p)|,
so when g′(p) ≠ 0 the convergence is linear, with asymptotic error constant |g′(p)|.
Theorem
Let p* be a solution of p = g(p). Suppose g′(p*) = 0, and g″(x) is continuous and strictly bounded by M on an open interval I containing p*. Then there exists a δ > 0 such that, for p_0 ∈ [p* − δ, p* + δ], the sequence defined by p_n = g(p_{n-1}) converges at least quadratically to p*. Moreover, for sufficiently large n,
    |p_{n+1} − p*| < (M/2) |p_n − p*|².
Practical Application: Deriving Newton's Method
Solve: x = g(x), where we look for an iteration function of the form
    g(x) = x − φ(x) f(x),
first with φ a constant, then with φ(x) differentiable. Differentiating,
    d/dx [x − φ(x) f(x)] = 1 − φ′(x) f(x) − φ(x) f′(x);
at x = p* we have f(p*) = 0, so
    g′(p*) = 1 − φ(p*) f′(p*).
For quadratic convergence we want this to be zero; that's true if
    φ(p*) = 1 / f′(p*).
Choosing φ(x) = 1 / f′(x) gives
    g(x) = x − f(x) / f′(x),
which is exactly Newton's Method.
Multiplicity of Zeroes (1 of 2)
Definition: f(x) has a zero of multiplicity m at p* if f(x) can be written as
    f(x) = (x − p*)^m q(x),  where lim_{x → p*} q(x) ≠ 0.
Basically, q(x) is the part of f(x) which does not contribute to the zero of f(x).
If m = 1 then we say that f(x) has a simple zero.

Multiplicity of Zeroes (2 of 2)
Theorem: f ∈ C^m[a, b] has a zero of multiplicity m at p* in (a, b) if and only if
    0 = f(p*) = f′(p*) = · · · = f^(m-1)(p*),  but f^(m)(p*) ≠ 0.

Roots of Higher Multiplicity (1 of 3)
Apply Newton's method to f(x) = (x − p*)^m q(x):
    g(x) = x − (x − p*)^m q(x) / [ m (x − p*)^(m-1) q(x) + (x − p*)^m q′(x) ]
         = x − (x − p*) q(x) / [ m q(x) + (x − p*) q′(x) ].
At x = p* the factor multiplying (x − p*) evaluates to
    q(p*) / [ m q(p*) + (p* − p*) q′(p*) ] = 1/m,
so g′(p*) = 1 − 1/m ≠ 0 for m > 1: Newton's method converges only linearly at a multiple root.
Roots of Higher Multiplicity (2 of 3)
Fix: define μ(x) = f(x)/f′(x), which has a simple zero at p*, and apply Newton's method to μ(x):
    g(x) = x − μ(x) / μ′(x)
         = x − [f(x)/f′(x)] / [ ([f′(x)]² − f(x) f″(x)) / [f′(x)]² ]
         = x − f(x) f′(x) / ( [f′(x)]² − f(x) f″(x) ).

Roots of Higher Multiplicity (3 of 3)
The resulting iteration,
    x_{n+1} = x_n − f(x_n) f′(x_n) / ( [f′(x_n)]² − f(x_n) f″(x_n) ),
converges quadratically even at roots of higher multiplicity.
Drawbacks:
We have to compute f″(x): more expensive and possibly another source of numerical and/or measurement errors.
We have to compute a more complicated expression in each iteration: more expensive.
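The modified iteration can be sketched as follows. The helper name and the test polynomial (chosen to have a double root at x = 1) are assumptions for illustration:

```python
def modified_newton(f, fp, fpp, x0, tol=1e-12, max_iter=50):
    """Newton's method applied to mu(x) = f(x)/f'(x):
    x_{n+1} = x_n - f*f' / (f'**2 - f*f'')."""
    x = x0
    for _ in range(max_iter):
        fx, fpx, fppx = f(x), fp(x), fpp(x)
        step = fx * fpx / (fpx * fpx - fx * fppx)
        x -= step
        if abs(step) < tol:
            return x
    return x

# f(x) = (x - 1)**2 * (x + 2) has a zero of multiplicity 2 at x = 1.
f   = lambda x: (x - 1)**2 * (x + 2)
fp  = lambda x: 2*(x - 1)*(x + 2) + (x - 1)**2
fpp = lambda x: 2*(x + 2) + 4*(x - 1)
print(modified_newton(f, fp, fpp, 2.0))  # converges to the double root at x = 1
```

On this example, plain Newton creeps in linearly (error roughly halved per step, since 1 − 1/m = 1/2), while the modified iteration regains quadratic convergence.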