
Numerical Methods

Roots of Nonlinear Equations


“Find the solutions of f(x) = 0, where
the function f is given.”
The solutions (values of x) are known as the
roots of the equation f (x) = 0, or the zeroes of
the function f (x).
y = f(x)
 Contains three elements: an input value x, an output
value y and the rule f for computing y.
 The function is said to be given if the rule f is
specified.
 In numerical computing, the rule is invariably a
computer algorithm.
 As long as the algorithm produces an output y for
each input x, it qualifies as a function.
Roots or Zeroes
May be real.
May be complex (seldom computed except for polynomials, where they are most significant, as in damped vibrations).
The approximate locations of the roots are best
determined by plotting the function.
Extraction of Roots
 All methods of finding roots are iterative procedures
that require a starting point, i.e., an estimate of the
root.
 An estimate can be crucial; a bad starting value may fail to converge, or may converge to the “wrong” root.
 It is highly advisable to go a step further and bracket the root (determine its lower and upper bounds) before passing the problem to a root-finding algorithm.
Stopping Criteria
εa ≤ εs
“approximate relative error vs. specified tolerance”
where
εa = |(xpresent − xprevious) / xpresent| × 100%
εs = (0.5 × 10^(2−n))%

xpresent = estimated root at present iteration
xprevious = estimated root at previous iteration
n = no. of significant figures
Stopping Criteria (simplified)
 For simplicity, one may instead stop iterating once the function value itself is sufficiently close to zero, i.e., when

|f(xpresent)| ≤ 0.00005
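The two stopping tests above can be sketched in Python; a minimal illustration, where the function names and the `n_sig_figs` parameter are our own:

```python
def approx_rel_error_pct(x_present, x_previous):
    """Approximate relative error εa, in percent."""
    return abs((x_present - x_previous) / x_present) * 100.0

def converged(x_present, x_previous, n_sig_figs):
    """Scarborough criterion: εa ≤ εs, with εs = 0.5 × 10^(2−n) percent."""
    eps_s = 0.5 * 10.0 ** (2 - n_sig_figs)
    return approx_rel_error_pct(x_present, x_previous) <= eps_s
```

With n = 3 significant figures, εs = 0.05%, so two successive estimates such as 1.29270 and 1.29269 would satisfy the test.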
Methods for Roots of Nonlinear
Equations
Direct or Incremental Search Method
Interval-Halving Method
Method of False-Position
Newton-Raphson 1st Method
Newton-Raphson 2nd Method
Secant Method
Fixed-Point Iteration
Bairstow’s Method
Direct or Incremental Search Method

 If f(x1) and f(x2) have opposite signs, then there is at least one root in the interval (x1 ≤ xroot ≤ x2).
 If the interval is small enough, it is likely to contain a single root.
 The zeroes of f(x) can thus be detected by evaluating the function at intervals ∆x and looking for a change in sign.
Direct or Incremental Search Method
Algorithm:
1. Estimate the initial interval (xi, xi+1) that will bound
the root (f(xi) and f(xi+1) should be opposite in
signs), where xi+1 = xi + ∆x
2. Evaluate f(xi) and f(xi+1).
3. Check the product of f(xi) and f(xi+1):
if (+): xi+1 becomes the new lower bound xi. Then
generate a new xi+1 using same ∆x
if (-): estimate a smaller ∆x and generate a new
xi+1.
4. Repeat 2 & 3 until the stopping criterion is satisfied.
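The steps above can be sketched in Python; a minimal illustration (function and variable names are ours; the tolerance follows the simplified |f| ≤ 0.00005 criterion):

```python
import math

def f(x):
    # test function used in the example that follows
    return math.exp(-x) - math.cos(x)

def incremental_search(f, xi, dx, tol=5e-5, max_iter=200):
    """Advance the lower bound while f keeps its sign; halve the step
    whenever the interval (xi, xi + dx) brackets the root."""
    for _ in range(max_iter):
        x_next = xi + dx
        if abs(f(x_next)) <= tol:      # simplified stopping criterion
            return x_next
        if f(xi) * f(x_next) > 0:      # (+): same sign, slide lower bound up
            xi = x_next
        else:                          # (−): root bracketed, shrink the step
            dx /= 2.0
    return xi
```

Starting from xi = 1 with ∆x = 0.5, this reproduces the simulation table below, stopping near x ≈ 1.2927.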
Example
 Test function: f(x) = e^(-x) − cos x
[Plot of f(x) = e^(-x) − cos x for 0 ≤ x ≤ 3.5; the sign change near x ≈ 1.29 locates the root sought]

 Initial estimates: xi = 1, xi+1 = 1.5 and ∆x = 0.5


Simulation
Iteration (i)   xi   ∆x   xi+1   f(xi)   f(xi+1)   f(xi)f(xi+1)
1 1 0.5 1.5 -0.17242 0.15239 -
2 1 0.25 1.25 -0.17242 -0.02882 +
3 1.25 0.25 1.5 -0.02882 0.15239 -
4 1.25 0.125 1.375 -0.02882 0.05829 -
5 1.25 0.0625 1.3125 -0.02882 0.01371 -
6 1.25 0.03125 1.28125 -0.02882 -0.00783 +
7 1.28125 0.03125 1.3125 -0.00783 0.01371 -
8 1.28125 0.01563 1.29688 -0.00783 0.00288 -
9 1.28125 0.00782 1.28907 -0.00783 -0.00249 +
10 1.28907 0.00782 1.29689 -0.00249 0.00289 -
Simulation (part 2)

Iteration (i)   xi   ∆x   xi+1   f(xi)   f(xi+1)   f(xi)f(xi+1)
11 1.28907 0.00391 1.29298 -0.00249 0.00020 -
12 1.28907 0.00196 1.29103 -0.00249 -0.00114 +
13 1.29103 0.00196 1.29299 -0.00114 0.00020 -
14 1.29103 0.00098 1.29201 -0.00114 -0.00047 +
15 1.29201 0.00098 1.29299 -0.00047 0.00020 -
16 1.29201 0.00049 1.29250 -0.00047 -0.00013 +
17 1.29250 0.00049 1.29299 -0.00013 0.00020 -
18 1.29250 0.00025 1.29275 -0.00013 0.00004
Bisection Method of Bolzano
 uses the same principle as
incremental search: if there is a root
in the interval (x1, x2), then f(x1)f(x2)
< 0.
 In order to halve the interval, we compute f(x3), where x3 = ½(x1 + x2) is the midpoint of the interval.
 If f(x2)f(x3) < 0, then the root must lie in (x3, x2), and we record this by replacing the original bound x1 by x3.
 Otherwise, the root lies in (x1, x3),
in which case x2 is replaced by x3.
Bisection Method of Bolzano
Algorithm:
1. Estimate the initial interval (xi, xi+1) that will bound
the root (f(xi) and f(xi+1) should be opposite in
signs).
2. Compute xr, where xr = ½(xi + xi+1).
3. Evaluate f(xi), f(xi+1) and f(xr).
4. Check the product of f(xi) and f(xr):
if (+): xr becomes the new lower bound xi. Then
generate a new xr using step 2.
if (-): xr becomes the new upper bound xi+1. Then
generate a new xr using step 2.
5. Repeat 3 & 4 until the stopping criterion is satisfied.
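The bisection steps above, as a short Python sketch (names are ours; same simplified |f| ≤ 0.00005 criterion):

```python
import math

def f(x):
    return math.exp(-x) - math.cos(x)

def bisect(f, lo, hi, tol=5e-5, max_iter=100):
    """Interval halving: keep whichever half still brackets the root."""
    if f(lo) * f(hi) > 0:
        raise ValueError("f(lo) and f(hi) must be opposite in sign")
    for _ in range(max_iter):
        xr = 0.5 * (lo + hi)           # midpoint of the current bracket
        if abs(f(xr)) <= tol:
            return xr
        if f(lo) * f(xr) > 0:          # (+): xr becomes the new lower bound
            lo = xr
        else:                          # (−): xr becomes the new upper bound
            hi = xr
    return 0.5 * (lo + hi)
```

With lo = 1 and hi = 1.5 this follows the simulation table below, converging near x ≈ 1.2927.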
Example
 Test function: f(x) = e^(-x) − cos x
[Plot of f(x) = e^(-x) − cos x for 0 ≤ x ≤ 3.5; the sign change near x ≈ 1.29 locates the root sought]

 Initial estimates: xi = 1, xi+1 = 1.5 and xr = 1.25


Simulation
Iteration (i)   xi   xi+1   xr   f(xi)   f(xi+1)   f(xr)   f(xi)f(xr)
1 1 1.5 1.25 -0.17242 0.15239 -0.02882 +
2 1.25 1.5 1.375 -0.02882 0.15239 0.05829 -
3 1.25 1.375 1.3125 -0.02882 0.05829 0.01371 -
4 1.25 1.3125 1.28125 -0.02882 0.01371 -0.00783 +
5 1.28125 1.3125 1.29688 -0.00783 0.01371 0.00288 -
6 1.28125 1.29688 1.28907 -0.00783 0.00288 -0.00249 +
7 1.28907 1.29688 1.29298 -0.00249 0.00288 0.00020 -
8 1.28907 1.29298 1.29103 -0.00249 0.00020 -0.00114 +
9 1.29103 1.29298 1.29201 -0.00114 0.00020 -0.00047 +
10 1.29201 1.29298 1.29250 -0.00047 0.00020 -0.00013 +
11 1.29250 1.29298 1.29274 -0.00013 0.00020 0.00003
Method of False Position
(Regula Falsi)
 Similarly to the bisection method,
the false position or regula falsi
method starts with the initial
solution interval [xi, xi+1] that is
believed to contain the solution of
f(x) = 0.
 Approximating the curve of f(x) on [xi, xi+1] by a straight line connecting the two points (xi, f(xi)) and (xi+1, f(xi+1)), it takes as the estimated root the point at which the straight line crosses the x axis:

xr = [xi+1 f(xi) − xi f(xi+1)] / [f(xi) − f(xi+1)]
Method of False Position
(Regula Falsi)
Algorithm:
1. Estimate the initial interval (xi, xi+1) that will bound
the root.
2. Compute the new root xr, where

xr = [xi+1 f(xi) − xi f(xi+1)] / [f(xi) − f(xi+1)]

3. Evaluate f(xi), f(xi+1) and f(xr).


4. Check the product of f(xi) and f(xr):
if (+): xr becomes the new lower bound xi. Then
generate a new xr using step 2.
if (-): xr becomes the new upper bound xi+1. Then
generate a new xr using step 2.
5. Repeat 3 & 4 until the stopping criterion is satisfied.
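A minimal Python sketch of the regula falsi steps above (names are ours):

```python
import math

def f(x):
    return math.exp(-x) - math.cos(x)

def false_position(f, lo, hi, tol=5e-5, max_iter=100):
    """Regula falsi: the next estimate is where the chord through
    (lo, f(lo)) and (hi, f(hi)) crosses the x axis."""
    for _ in range(max_iter):
        xr = (hi * f(lo) - lo * f(hi)) / (f(lo) - f(hi))
        if abs(f(xr)) <= tol:
            return xr
        if f(lo) * f(xr) > 0:          # (+): xr becomes the new lower bound
            lo = xr
        else:                          # (−): xr becomes the new upper bound
            hi = xr
    return xr
```

With lo = 1 and hi = 1.5, the first chord crossing is xr ≈ 1.26542, matching the simulation table below.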
Example
 Test function: f(x) = e^(-x) − cos x
[Plot of f(x) = e^(-x) − cos x for 0 ≤ x ≤ 3.5; the sign change near x ≈ 1.29 locates the root sought]
 Initial estimates: xi = 1, xi+1 = 1.5 and
xr = 1.265416357
Simulation

Iteration (i)   xi   xi+1   xr   f(xi)   f(xi+1)   f(xr)   f(xi)f(xr)
1 1 1.5 1.26542 -0.17242 0.15239 -0.01853 +
2 1.26542 1.5 1.29085 -0.01853 0.15239 -0.00127 +
3 1.29085 1.5 1.29257 -0.00127 0.15239 -0.00008 +
4 1.29257 1.5 1.29269 -0.00008 0.15239 -0.00001
Newton-Raphson 1st Method
(Method of Tangents)
 best-known method of
finding roots for a good
reason: it is simple and fast.
 only drawback of the
method is that it uses the
derivative f’(x) of the
function as well as the
function f(x) itself
 derived from the Taylor series expansion:

xi+1 = xi − f(xi) / f'(xi)
Newton-Raphson 1st Method
(Method of Tangents)
Algorithm:
1. Estimate the initial root approximation xi.
2. Compute the new root xi+1, where

xi+1 = xi − f(xi) / f'(xi)
This will be the next iteration’s xi.
3. Evaluate f(xi) and f(xi+1).
4. Repeat 2 & 3 until the stopping criterion is satisfied.
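The method of tangents above, sketched in Python (names are ours):

```python
import math

def f(x):
    return math.exp(-x) - math.cos(x)

def fprime(x):
    return -math.exp(-x) + math.sin(x)

def newton_raphson(f, fprime, x, tol=5e-5, max_iter=50):
    """Newton-Raphson 1st method: xi+1 = xi − f(xi)/f'(xi)."""
    for _ in range(max_iter):
        x = x - f(x) / fprime(x)       # tangent-line update
        if abs(f(x)) <= tol:
            return x
    return x
```

From x = 1 the first update gives x ≈ 1.36408, and the iteration reaches x ≈ 1.2927 in three steps, as in the simulation table below.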
Example
 Test function: f(x) = e^(-x) − cos x
[Plot of f(x) = e^(-x) − cos x for 0 ≤ x ≤ 3.5; the sign change near x ≈ 1.29 locates the root sought]
 Initial estimate: xi = 1

f(x) = e^(-x) − cos x and f'(x) = −e^(-x) + sin x


xi+1 = 1.36407505
Simulation

Iteration (i)   xi   xi+1   f(xi)   f'(xi)   f(xi+1)

1 1 1.36408 -0.17242 0.47359 0.05036

2 1.36408 1.29442 0.05036 0.72309 0.00119

3 1.29442 1.29270 0.00119 0.68800 0.00000


Newton-Raphson’s 2nd Method
Calculus-based, derived from the 1st method and a truncated Taylor series expansion.
1st and 2nd order derivatives are used to improve the approximation of the estimated root for f(x) = 0.
Newton-Raphson’s 2nd Method
Algorithm:
1. Estimate the initial root approximation xi.
2. Compute the new root xi+1, where

xi+1 = xi + [ f"(xi) / (2 f'(xi)) − f'(xi) / f(xi) ]^(−1)
This will be the next iteration’s xi.
3. Evaluate f(xi) and f(xi+1).
4. Repeat 2 & 3 until the stopping criterion is satisfied.
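The 2nd method's update, sketched in Python (names are ours):

```python
import math

def f(x):
    return math.exp(-x) - math.cos(x)

def f1(x):
    return -math.exp(-x) + math.sin(x)

def f2(x):
    return math.exp(-x) + math.cos(x)

def newton_raphson_2(f, f1, f2, x, tol=5e-5, max_iter=50):
    """Newton-Raphson 2nd method:
    xi+1 = xi + [ f''(xi)/(2 f'(xi)) − f'(xi)/f(xi) ]^(−1)."""
    for _ in range(max_iter):
        if abs(f(x)) <= tol:           # check first: the update divides by f(x)
            return x
        x = x + 1.0 / (f2(x) / (2.0 * f1(x)) - f1(x) / f(x))
    return x
```

From x = 1 the first update gives x ≈ 1.26987 and the second x ≈ 1.29269, matching the simulation table below.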
Example
 Test function: f(x) = e^(-x) − cos x
[Plot of f(x) = e^(-x) − cos x for 0 ≤ x ≤ 3.5; the sign change near x ≈ 1.29 locates the root sought]
 Initial estimate: xi = 1

f(x) = e^(-x) − cos x, f'(x) = −e^(-x) + sin x and f"(x) = e^(-x) + cos x
xi+1 = 1.269868
Simulation

Iteration (i)   xi   xi+1   f(xi)   f'(xi)   f"(xi)   f(xi+1)

1 1 1.26987 -0.17242 0.47359 0.90818 -0.01554

2 1.26987 1.29269 -0.01554 0.67419 0.57728 0.00000


Secant Method
 The Newton-Raphson method involves nonelementary forms (derivatives, integrals, etc.), so it is desirable to have an algorithm that converges just as fast yet involves only evaluations of f(x), not f'(x).
 Similar to regula falsi, but using two initial points that need not bracket the root:

xi+1 = xi − f(xi)(xi − xi−1) / [f(xi) − f(xi−1)]
Secant Method
Algorithm:
1. Estimate the two initial points xi and xi-1.
2. Evaluate f(xi) and f(xi-1).
3. Compute the new root xi+1, where

xi+1 = xi − f(xi)(xi − xi−1) / [f(xi) − f(xi−1)]
This will be the next iteration’s xi. The current xi value will
be transferred to the next iteration’s xi-1.
4. Repeat 2 & 3 until the stopping criterion is satisfied.
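The secant steps above, sketched in Python (names are ours):

```python
import math

def f(x):
    return math.exp(-x) - math.cos(x)

def secant(f, x_prev, x, tol=5e-5, max_iter=50):
    """Secant method: the chord slope replaces the derivative in Newton's update."""
    for _ in range(max_iter):
        x_new = x - f(x) * (x - x_prev) / (f(x) - f(x_prev))
        x_prev, x = x, x_new           # current xi becomes next iteration's xi−1
        if abs(f(x)) <= tol:
            return x
    return x
```

With xi−1 = 0.5 and xi = 1 the first update gives x ≈ 1.87410, and the iteration reaches x ≈ 1.2927 in five steps, matching the simulation table below.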
Example
 Test function: f(x) = e^(-x) − cos x
[Plot of f(x) = e^(-x) − cos x for 0 ≤ x ≤ 3.5; the sign change near x ≈ 1.29 locates the root sought]
 Initial estimates: xi = 1, xi-1 = 0.5
xi+1 = 1.874097878
Simulation
Iteration (i)   xi-1   xi   xi+1   f(xi-1)   f(xi)   f(xi+1)

1 0.5 1 1.87410 -0.27105 -0.17242 0.45217

2 1 1.87410 1.24130 -0.17242 0.45217 -0.03456

3 1.87410 1.24130 1.28623 0.45217 -0.03456 -0.00443

4 1.24130 1.28623 1.29284 -0.03456 -0.00443 0.00010

5 1.28623 1.29284 1.29270 -0.00443 0.00010 0.00000


Fixed-Point Iteration
also known as Picard iteration/functional
iteration/contraction theorem
The objective of this method is to construct an auxiliary function g(x) from f(x) such that the root x0 of f(x0) = 0 satisfies x0 = g(x0), i.e., x0 is a fixed point of g.
The method converges regardless of the initial estimate within an interval x1 ≤ x0 ≤ x2, provided g maps that interval into itself and |g'(x)| < 1 on it (i.e., g is a contraction).
Example
Evaluate: x = ∛(6 + ∛(6 + ∛(6 + ⋯)))

Cubing both sides gives x³ = 6 + ∛(6 + ∛(6 + ⋯)), and the nested radical on the right is x itself, so

x³ − 6 = x

The auxiliary function is then

g(x) = ∛(x + 6)
Simulation
Iteration (i)   xi   g(xi)
1 0 1.81712
2 1.81712 1.98464
3 1.98464 1.99872
4 1.99872 1.99989
5 1.99989 1.99999
6 1.99999 2.00000
7 2.00000 2.00000
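A sketch of the iteration for this example (function names and the tolerance are ours):

```python
def fixed_point(g, x, tol=1e-6, max_iter=200):
    """Iterate x ← g(x) until successive estimates agree within tol."""
    for _ in range(max_iter):
        x_new = g(x)
        if abs(x_new - x) <= tol:
            return x_new
        x = x_new
    return x

def g(x):
    # auxiliary function for x^3 − 6 = x, i.e. x = (x + 6)^(1/3)
    return (x + 6.0) ** (1.0 / 3.0)
```

Starting from x = 0 this reproduces the table above, converging to the fixed point 2; here |g'(x)| = 1/(3(x + 6)^(2/3)) < 1 near the fixed point, so the iteration contracts.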
Bairstow’s Method
 A special problem associated with polynomials Pn(x)
is the possibility of complex roots. Newton’s method,
the secant method, and Muller’s method all can find
complex roots if complex arithmetic is used and
complex initial approximations are specified.
 When polynomials with real coefficients have complex roots, they occur in conjugate pairs, each pair corresponding to a quadratic factor of the polynomial Pn(x).
 This method extracts quadratic factors from a
polynomial using only real arithmetic.
Bairstow’s Method
Consider the general nth-degree polynomial,
Pn(x):
Pn(x) = an x^n + an−1 x^(n−1) + ⋯ + a0
Let’s factor out a quadratic factor from Pn(x).
Thus,
Pn(x) = (x² − rx − s) Qn−2(x) + remainder

or

Pn(x) = (x² − rx − s)(bn x^(n−2) + bn−1 x^(n−3) + ⋯ + b3 x + b2) + remainder
Bairstow’s Method
 When the remainder is zero, (x2 – rx – s) is an exact
factor of Pn(x).
 This is best understood by synthetic division done
iteratively until r and s converge to a specific value.
 Roots are extracted two at a time once r and s are determined. The remaining polynomial Qn−2(x) is two degrees lower; hence, the method can be repeated until all roots are solved.
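A compact Python sketch of Bairstow's iteration for polynomials of degree ≥ 3 (the coefficient ordering, names, and tolerance are our choices; the ∆r, ∆s system comes from the standard two-stage synthetic division):

```python
def bairstow(a, r, s, tol=1e-10, max_iter=100):
    """Extract a quadratic factor x^2 − r x − s from the polynomial with
    coefficients a (highest degree first; degree >= 3 assumed).
    Returns the converged (r, s) and the quotient coefficients."""
    n = len(a) - 1
    for _ in range(max_iter):
        # First synthetic division: b holds the quotient and remainder terms
        b = [0.0] * (n + 1)
        b[0] = a[0]
        b[1] = a[1] + r * b[0]
        for k in range(2, n + 1):
            b[k] = a[k] + r * b[k - 1] + s * b[k - 2]
        # Second synthetic division (on b) gives the sensitivities c
        c = [0.0] * n
        c[0] = b[0]
        c[1] = b[1] + r * c[0]
        for k in range(2, n):
            c[k] = b[k] + r * c[k - 1] + s * c[k - 2]
        # Solve the 2x2 linear system for the corrections dr, ds:
        #   c[n-2]*dr + c[n-3]*ds = -b[n-1]
        #   c[n-1]*dr + c[n-2]*ds = -b[n]
        det = c[n - 2] ** 2 - c[n - 1] * c[n - 3]
        dr = (-b[n - 1] * c[n - 2] + b[n] * c[n - 3]) / det
        ds = (-b[n] * c[n - 2] + b[n - 1] * c[n - 1]) / det
        r, s = r + dr, s + ds
        if abs(dr) < tol and abs(ds) < tol:
            break
    return r, s, b[:n - 1]
```

For f(x) = x³ − 3x² + 4x − 2 with starting guesses r = 1.5, s = −2.5 (the example below), the first corrections are ∆r ≈ 0.192308, ∆s = 0.75, and the iteration converges to r = 2, s = −2, i.e., the factor x² − 2x + 2 with roots 1 ± j, leaving the linear quotient x − 1.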
Example
Find the roots of f(x) = x³ − 3x² + 4x − 2 = 0
From the given equation,
a3 = 1.0
a2 = −3.0
a1 = 4.0
a0 = −2.0
We initially guess values for r and s. For this
problem, we’ll have r1 = 1.5 and s1 = -2.5
Simulation

              1        -3        4        -2
r1 = 1.5                1.5     -2.25    -1.125
s1 = -2.5                       -2.5      3.75
              1        -1.5     -0.75     0.625
r1 = 1.5                1.5      0
s1 = -2.5                       -2.5
              1         0       -3.25

The corrections ∆r and ∆s then satisfy the linear system

(0.0)∆r + (1.0)∆s = 0.75
(−3.25)∆r + (0.0)∆s = −0.625
Simulation
Solving simultaneously
∆r = 0.192308 and ∆s = 0.75

And r2 = r1 + ∆r = 1.5 + 0.192308 = 1.692308


s2 = s1 + ∆s = −2.5 + 0.75 = −1.75

Then we use the new set of values of r and s


until such time that…
Simulation

Iteration (i)   r           s           ∆r          ∆s
1               1.5         -2.5        0.192308    0.75
2               1.692308    -1.75       0.278352    -0.144041
3               1.97066     -1.894041   0.034644    -0.110091
4               2.005304    -2.004132   -0.005317   0.004173
5               1.999987    -1.999959   0.000012    -0.000041
6               1.999999    -2          0           0

Final synthetic division with r6 = 1.999999 and s6 = −2:

                 1          -3          4           -2
r6 = 1.999999               1.999999    -2.000001    0
s6 = -2                                 -2           2
                 1          -1.000001   0            0
r6 = 1.999999               1.999999    2
s6 = -2                                 -2
                 1          0.999998    0
Simulation
Then the quadratic factor is

x² − rx − s = x² − 2.0x + 2.0 = 0

where the two roots are determined by the quadratic formula:

x = [−b ± √(b² − 4ac)] / (2a) = [−(−2.0) ± √((−2.0)² − 4(1.0)(2.0))] / [2(1.0)] = 1 + j, 1 − j

The remaining polynomial Qn−2(x) can be reduced further by repeated application of Bairstow's algorithm until all roots have been determined.
