
Roots of Nonlinear Equations
Roots of Equations

It is not trivial to determine the roots!

Roots of Non-Linear Equations: f(x) = 0
where f may belong to any class of functions: algebraic, polynomial, trigonometric, hyperbolic, logarithmic, exponential, etc.

The methods can broadly be classified into five types:

• Graphical method
• Bracketing methods: Bisection, Regula-Falsi
• Open methods: Fixed point, Newton-Raphson, Secant, Muller
• Special methods for polynomials: Bairstow's
• Hybrid methods: Brent's

Background assumed (MTH 101): intermediate value theorem; nested interval theorem; Cauchy sequences and convergence; Taylor's and Maclaurin's series; etc.
Graphical Method
Involves plotting the f(x) curve and reading the solution at the intersection of the curve with the x-axis.
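A minimal sketch of the graphical method in Python (not from the slides), assuming the example function f(x) = e^(–x) – x that appears later in these notes; the root is read off where the curve crosses the x-axis.

    import numpy as np
    import matplotlib.pyplot as plt

    # Assumed example function for illustration: f(x) = exp(-x) - x
    f = lambda x: np.exp(-x) - x

    x = np.linspace(0.0, 1.0, 200)
    plt.plot(x, f(x), label="f(x) = exp(-x) - x")
    plt.axhline(0.0, color="black", linewidth=0.8)  # the x-axis
    plt.xlabel("x")
    plt.ylabel("f(x)")
    plt.legend()
    plt.show()  # the curve crosses the x-axis near x = 0.567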
Bracketing Methods
• Intermediate value theorem: Let f be a continuous function on [a, b] and let f(a) < s < f(b); then there exists at least one x with a < x < b such that f(x) = s.
• Bracketing methods are an application of this theorem with s = 0.
• Nested interval theorem: For each n, let In = [an, bn] be a sequence of (non-empty) bounded intervals of real numbers such that In+1 ⊆ In and (bn – an) → 0 as n → ∞; then the intersection of all the In contains only one point.
• This guarantees the convergence of the bracketing methods to the root.
• In bracketing methods, a sequence of nested intervals is generated such that each interval satisfies the intermediate value theorem with s = 0. The method then converges to the root, namely the single point specified by the nested interval theorem. The methods differ only in the way they generate the nested intervals.
Intermediate Value Theorem
(Figure: illustration of the intermediate value theorem.)
Mean Value Theorem
If f(x) is defined and continuous on the interval [a, b] and differentiable on (a, b), then there is at least one number c in the interval (a, b) (that is, a < c < b) such that
  f'(c) = (f(b) – f(a)) / (b – a)
A special case of the Mean Value Theorem is Rolle's Theorem, which applies when f(a) = f(b), so that f'(c) = 0.
Bisection Method
• Principle: Choose an initial interval based on the intermediate value theorem and halve the interval at each iteration step to generate the nested intervals.
• Initialize: Choose a0 and b0 such that f(a0)f(b0) < 0. This is done by trial and error.
• Iteration step k:
  – Compute the mid-point mk+1 = (ak + bk)/2 and the functional value f(mk+1)
  – If f(mk+1) = 0, mk+1 is the root. (It's your lucky day!)
  – If f(ak)f(mk+1) < 0: ak+1 = ak and bk+1 = mk+1; else, ak+1 = mk+1 and bk+1 = bk
• After n iterations: size of the interval dn = (bn – an) = 2^(–n) (b0 – a0); stop if dn ≤ ε
• Estimate the root (x = α, say!) as: α = mn+1 ± 2^(–(n+1)) (b0 – a0)
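A minimal Python sketch of the bisection scheme above (not the Functions.xlsx implementation used in the course); the function name bisection and the tolerance are illustrative.

    import math

    def bisection(f, a0, b0, tol=1e-6, max_iter=100):
        """Halve the bracketing interval until its size d_n is <= tol."""
        a, b = a0, b0
        if f(a) * f(b) >= 0:
            raise ValueError("f(a0) and f(b0) must have opposite signs")
        for _ in range(max_iter):
            m = (a + b) / 2.0              # mid-point m_{k+1}
            fm = f(m)
            if fm == 0.0:                  # lucky day: exact root
                return m
            if f(a) * fm < 0:              # root lies in [a, m]
                b = m
            else:                          # root lies in [m, b]
                a = m
            if (b - a) <= tol:             # d_n = (b_n - a_n) <= eps
                break
        return (a + b) / 2.0               # estimate alpha = m_{n+1}

    # Example: root of exp(-x) - x on [0, 2]; true value is about 0.5671
    print(bisection(lambda x: math.exp(-x) - x, 0.0, 2.0))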
Bisection Method
(Figure: successive halving of the bracketing interval towards the root.)
Regula-Falsi or Method of False Position
• Principle: In place of the mid-point, the function is assumed to be linear within the interval and the root of this linear function is chosen.
• Initialize: Choose a0 and b0 such that f(a0)f(b0) < 0. This is done by trial and error.
• Iteration step k:
  – The straight line passing through the two points (ak, f(ak)) and (bk, f(bk)) is given by:
    y – f(ak) = [(f(bk) – f(ak)) / (bk – ak)] (x – ak)
  – The root of this line at y = 0 is:
    mk+1 = ak – f(ak)(bk – ak) / (f(bk) – f(ak)) = [ak f(bk) – bk f(ak)] / (f(bk) – f(ak))
  – If f(mk+1) = 0, mk+1 is the root. (It's your lucky day!)
  – If f(ak)f(mk+1) < 0: ak+1 = ak and bk+1 = mk+1; else, ak+1 = mk+1 and bk+1 = bk
• After n iterations: size of the interval dn = (bn – an); stop if dn ≤ ε. (Since one end point may stay fixed, the change in successive estimates |mk+1 – mk| is often a more reliable stopping criterion.)
• Estimate the root (x = α, say!) as: α ≈ mn+1
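A minimal Python sketch of the false-position update above, under the same assumptions as the bisection sketch; the change in successive estimates is used as the stopping test.

    import math

    def regula_falsi(f, a0, b0, tol=1e-6, max_iter=100):
        """False position: replace the mid-point by the x-intercept of the chord."""
        a, b = a0, b0
        if f(a) * f(b) >= 0:
            raise ValueError("f(a0) and f(b0) must have opposite signs")
        m_old = a
        for _ in range(max_iter):
            fa, fb = f(a), f(b)
            m = (a * fb - b * fa) / (fb - fa)  # root of the chord through (a, fa), (b, fb)
            fm = f(m)
            if fm == 0.0 or abs(m - m_old) <= tol:
                return m
            if fa * fm < 0:                    # keep the sub-interval that brackets the root
                b = m
            else:
                a = m
            m_old = m
        return m

    # Example: root of exp(-x) - x on [0, 2]; true value is about 0.5671
    print(regula_falsi(lambda x: math.exp(-x) - x, 0.0, 2.0))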
Regula-Falsi or Method of False Position
(Figure: the chord through (ak, f(ak)) and (bk, f(bk)) on the curve y = f(x) crosses the x-axis at mk+1.)
Example Problem
Find the root of f(x) = e^(–x) – x = 0 by the Regula-Falsi method with the initial interval [0, 2].
• True solution = 0.5671
• Set up a scheme as follows:

  Itr.   xl      xu       xk       er (%)
  0      0       2        0.6981   186
  1      0       0.6981   0.5818   20
  2      0       0.5818   0.5687   2.2
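The first estimate in the table can be checked directly from the false-position formula; the function f(x) = e^(–x) – x is an assumption inferred from the tabulated values.

    import math

    f = lambda x: math.exp(-x) - x
    a, b = 0.0, 2.0
    m = (a * f(b) - b * f(a)) / (f(b) - f(a))  # first false-position estimate
    print(m)                                   # about 0.698, the xk at iteration 0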
Example Problem
Find the root x of the given function, where x is in m.

A. Solve by the Bisection Method
• Initial guess: xL = 0 m; xU = 2R = 0.11 m (Functions.xlsx)
• 11 iterations to converge to x = 0.062 m

B. Solve by the Regula-Falsi Method
• Initial guess: xL = 0 m; xU = 2R = 0.11 m (Functions.xlsx)
• 4 iterations to converge to x = 0.062 m
Bracketing Methods: Advantages and Disadvantages
• Advantages
  – Convergence to a root is guaranteed (it may not find all the roots, though!)
  – Simple to program
  – Computation of the derivative is not needed
• Disadvantages
  – Slow convergence
  – When there is more than one root, this approach may find only one of them.
Open Methods: Fixed Point
• Problem: f(x) = 0, find a root x = α such that f(α) = 0
• Re-arrange the function f(x) = 0 into the form x = g(x)
• Iteration: xk+1 = g(xk)
• Stopping criterion: |xk+1 – xk| ≤ ε
• Convergence: after n iterations,
  – At the root: α = g(α), so α – xn+1 = g(α) – g(xn)
  – Mean Value Theorem: for some ξ between xn and α,
    (α – xn+1) = g'(ξ)(α – xn), or en+1 = g'(ξ) en
  – Condition for convergence: |g'(ξ)| < 1
  – As n → ∞, ξ → α and en+1/en → g'(α) = constant (linear convergence)
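A minimal Python sketch of the fixed-point iteration xk+1 = g(xk) with the stopping test above; the name fixed_point and the tolerance are illustrative.

    def fixed_point(g, x0, tol=1e-6, max_iter=100):
        """Iterate x_{k+1} = g(x_k) until successive estimates agree to tol."""
        x = x0
        for _ in range(max_iter):
            x_new = g(x)
            if abs(x_new - x) <= tol:   # stopping criterion |x_{k+1} - x_k| <= eps
                return x_new
            x = x_new
        raise RuntimeError("did not converge; |g'(x)| may be >= 1 near the root")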
Open Methods: Fixed Point
(Figure: plots of y = g(x) against y = x, with the iterates x0, x1, x2, x3 and the root α at the intersection of the two curves.)
Example Problem
Solve f(x) = e^(–x) – x = 0 by fixed-point iteration with x = g(x) = e^(–x).
• True solution = 0.5671
• Set up a scheme as follows:

  Iteration   x        g(x)     er (%)
  1           0        1        100
  2           1        0.3678   172
  3           0.3678   0.6922   47
  4           0.6922   0.5004   38
  5           0.5004   0.6064   17
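The tabulated iterates and approximate errors can be reproduced with a few lines of Python, again assuming g(x) = e^(–x).

    import math

    g = lambda x: math.exp(-x)
    x = 0.0
    for k in range(1, 6):
        x_new = g(x)
        err = abs(x_new - x) / abs(x_new) * 100  # approximate relative error, %
        print(k, round(x, 4), round(x_new, 4), round(err))
        x = x_new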
Example Problem
Find the root x of the given function, where x is in m.

A. Solve by the Fixed Point Method
• Rearrange f(x) = 0 into the form x = g(x)
• Initial guesses: x = 0.001, 0.01, 0.003 m DO NOT CONVERGE (Functions.xlsx)
• Initial guess: x = 0.08 m; it takes 5 iterations to converge to x = 0.062 m
Truncation Error:
- the error committed when a limiting process is truncated before one has reached the limiting value
- e.g., approximating a derivative by a finite difference: f'(x) ≈ [f(x + Δx) – f(x)] / Δx
- Taylor series expansion of a function:
  f(x) = f(x0) + f'(x0)(x – x0) + f''(x0)(x – x0)^2/2! + f'''(x0)(x – x0)^3/3! + … + f^(n)(x0)(x – x0)^n/n! + Rn

Taylor polynomial approximation:
If the remainder Rn is omitted, the right side of the equation is the Taylor polynomial approximation to f(x).

- Zeroth order approximation: f(x) ≈ f(x0)
- First order approximation: f(x) ≈ f(x0) + f'(x0)(x – x0)
- Second order approximation: f(x) ≈ f(x0) + f'(x0)(x – x0) + f''(x0)(x – x0)^2/2!
Taylor's Theorem:
If a function f can be differentiated (n+1) times in an interval I containing x0, then for each x in I there exists a ξ between x and x0 such that
  f(x) = f(x0) + f'(x0)(x – x0) + f''(x0)(x – x0)^2/2! + … + f^(n)(x0)(x – x0)^n/n! + Rn,
with the remainder
  Rn = f^(n+1)(ξ)(x – x0)^(n+1)/(n+1)!
(This form of the remainder follows from the Theorem of Mean for Integrals.)

Roots of Nonlinear Equations: Order of Convergence
• Denote by x(i) the estimate of the root at the ith iteration.
• If an iterative sequence x(0), x(1), x(2), … converges to the root α (which means the true error at iteration i is e(i) = α – x(i)), and if
  lim (i → ∞) |e(i+1)| / |e(i)|^p = C,
then p is the order of convergence and C the asymptotic error constant. The convergence is called linear if p = 1, quadratic if p = 2, and cubic if p = 3 (p does not have to be an integer).
• For the bisection method, using the approximate error as a proxy for the true error (as we approach the root, these tend to be the same),
  |e(i+1)| / |e(i)| = 1/2,
implying linear convergence and an error constant of 0.5.
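A short sketch (not part of the slides) that estimates p numerically from a sequence of true errors, using p ≈ ln(|e(i+1)|/|e(i)|) / ln(|e(i)|/|e(i−1)|); the Newton-Raphson errors for f(x) = x^2 − 2 are used purely as illustrative data.

    import math

    def estimate_order(errors):
        """Estimate the order of convergence p from a list of true errors |e(i)|."""
        p_vals = []
        for i in range(1, len(errors) - 1):
            p = math.log(errors[i + 1] / errors[i]) / math.log(errors[i] / errors[i - 1])
            p_vals.append(p)
        return p_vals

    # Illustrative data: Newton-Raphson on f(x) = x^2 - 2 starting at x = 1.5
    root = math.sqrt(2.0)
    x, errs = 1.5, []
    for _ in range(4):
        errs.append(abs(x - root))
        x = x - (x * x - 2.0) / (2.0 * x)  # Newton step
    print(estimate_order(errs))            # values approach p = 2 (quadratic)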
Roots of Nonlinear Equations: Order of Convergence
• For linearly convergent methods: C must be less than 1.
• For superlinearly convergent methods (p > 1): not necessary.
• For convergent methods: the approximate error at any iteration is representative of the true error at the previous iteration:
  x(i+1) – x(i) ≈ α – x(i) = e(i)
• For the bisection method, the approximate error at the nth iteration is
  Δx(0) / 2^n,  where Δx(0) = b(0) – a(0) is the size of the initial interval,
which means the number of iterations for a desired accuracy is known a priori.
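For instance, the iteration count can be computed before running bisection; a minimal sketch, with an illustrative initial interval size and tolerance:

    import math

    dx0, eps = 0.11, 1e-4                 # illustrative Δx(0) and desired tolerance
    n = math.ceil(math.log2(dx0 / eps))   # smallest n with Δx(0)/2^n <= eps
    print(n)                              # 11 iterations, known a priori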
Open Methods: Newton-Raphson
• Problem: f(x) = 0, find a root x = α such that f(α) = 0
• Principle: Approximate the function as a straight line having the same slope as the original function at the point of iteration.
(Figure: tangents to y = f(x) at x0 and x1 generating the successive estimates x1 and x2.)
Open Methods: Newton-Raphson
• Problem: f(x) = 0, find a root x = α such that f(α) = 0
• Iteration step 0: Taylor's Theorem at x = x0:
  f(x) = f(x0) + f'(x0)(x – x0) + f''(ξ)(x – x0)^2/2!, for some ξ between x0 and x
• Iteration step k:
  f(x) = f(xk) + f'(xk)(x – xk) + f''(ξ)(x – xk)^2/2!, for some ξ between xk and x
• Assumptions: Neglect the 2nd and higher order terms and assume that the root is arrived at in the (k+1)th iteration, i.e., f(xk+1) = 0
• Iteration formula: xk+1 = xk – f(xk)/f'(xk)
• Start with x = x0 (guess!). Stopping criterion: |xk+1 – xk| ≤ ε
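A minimal Python sketch of the Newton-Raphson iteration above; the derivative is supplied by the caller, and the names are illustrative.

    import math

    def newton_raphson(f, dfdx, x0, tol=1e-8, max_iter=50):
        """Iterate x_{k+1} = x_k - f(x_k)/f'(x_k) until successive estimates agree."""
        x = x0
        for _ in range(max_iter):
            fx, dfx = f(x), dfdx(x)
            if dfx == 0.0:
                raise ZeroDivisionError("f'(x) = 0: the method gets stuck")
            x_new = x - fx / dfx
            if abs(x_new - x) <= tol:  # stopping criterion
                return x_new
            x = x_new
        raise RuntimeError("did not converge")

    # Example: root of exp(-x) - x; true value is about 0.5671
    print(newton_raphson(lambda x: math.exp(-x) - x,
                         lambda x: -math.exp(-x) - 1.0, x0=0.0))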
Open Methods: Newton-Raphson
• Convergence: Taylor's Theorem about xn, evaluated at the root α, after n iterations:
  0 = f(α) = f(xn) + f'(xn)(α – xn) + f''(ξ)(α – xn)^2/2!, for some ξ between xn and α
• Re-arrange (divide by f'(xn) and use xn+1 = xn – f(xn)/f'(xn)):
  0 = (xn – xn+1) + (α – xn) + [f''(ξ)/(2! f'(xn))](α – xn)^2
  or (α – xn+1) = –[f''(ξ)/(2 f'(xn))](α – xn)^2
  or en+1 = –[f''(ξ)/(2 f'(xn))] en^2
• As n → ∞, en+1/en^2 → –f''(α)/(2 f'(α)) = constant (quadratic convergence)
Open Methods: Newton-Raphson
• Advantages:
  – Faster convergence (quadratic)
• Disadvantages:
  – Need to calculate the derivative
  – The Newton-Raphson method may get stuck (e.g., when f'(xk) is close to zero or the initial guess is poor)!
(Figure: tangent construction on y = f(x) taking x0 to x1.)
Open Methods: Secant
• Principle: Use a difference approximation for the slope or derivative in the Newton-Raphson method. This is equivalent to approximating the tangent with a secant.
• Problem: f(x) = 0, find a root x = α such that f(α) = 0
(Figure: secants through successive points on y = f(x) generating the estimates x2 and x3 from x0 and x1.)
Open Methods: Secant
• Problem: f(x) = 0, find a root x = α such that f(α) = 0
• Initialize: choose two points x0 and x1 and evaluate f(x0) and f(x1)
• Approximation: f'(xk) ≈ [f(xk) – f(xk–1)] / (xk – xk–1); replace this in the Newton-Raphson formula
• Iteration formula: xk+1 = xk – f(xk)(xk – xk–1) / [f(xk) – f(xk–1)]
• Stopping criterion: |xk+1 – xk| ≤ ε
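A minimal Python sketch of the secant iteration above, replacing the derivative in Newton-Raphson by the difference quotient; the names are illustrative.

    import math

    def secant(f, x0, x1, tol=1e-8, max_iter=50):
        """Iterate x_{k+1} = x_k - f(x_k)(x_k - x_{k-1}) / (f(x_k) - f(x_{k-1}))."""
        x_prev, x = x0, x1
        f_prev, fx = f(x_prev), f(x)
        for _ in range(max_iter):
            if fx == f_prev:
                raise ZeroDivisionError("flat secant: f(x_k) equals f(x_{k-1})")
            x_new = x - fx * (x - x_prev) / (fx - f_prev)
            if abs(x_new - x) <= tol:  # stopping criterion
                return x_new
            x_prev, f_prev = x, fx
            x, fx = x_new, f(x_new)
        raise RuntimeError("did not converge")

    # Example: root of exp(-x) - x; true value is about 0.5671
    print(secant(lambda x: math.exp(-x) - x, 0.0, 1.0))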
