Numerical Methods
CEng2112
Chapter 2
Solving non-linear equations of one variable
Contents:-
2.1 Locating roots
2.2 Numerical methods of solving a non-linear equation
I. Bracketing Methods
 Bisection Method
 Method of false position
II. Open Methods
 Iteration Method (Fixed point Iteration)
 Secant Method
 Newton - Raphson Method

Objectives:- At the end of this chapter students should be able to:


 Understand the difference between bracketing and open methods
 Know the graphical interpretation of each method
 Identify the advantage and disadvantage of each method.
 Know the features of each method:
 The Procedure/Implementation of the method
 The condition of Convergence

Introduction:

Activity: Identify the difference between


 Algebraic and transcendental equations
 Linear equations and non-linear equations
 Systems of linear equations and systems of non-linear equations

A problem of great importance in science and engineering is that of determining the roots/zeros of an
equation of the form f ( x )=0.

Definition (Root or solution of an equation):


 A number r for which f(r) = 0 is called a root of the equation f(x) = 0
 Geometrically, the roots of f(x) = 0 are the points at which the graph of f(x) crosses or touches the x-
axis, i.e. the x-intercepts

A root of an equation may be simple or multiple;


 A number r is a simple root of f(x) = 0 if f(r) = 0 and f′(r) ≠ 0. Then we can write f(x) as
f(x) = (x − r) g(x), g(r) ≠ 0. Example: x = 1 is a simple root of f(x) = x^3 + x − 2 = 0, because we
can write f(x) = (x − 1)(x^2 + x + 2).
 A number r is a multiple root of f(x) = 0 with multiplicity m if we can write f(x) as
f(x) = (x − r)^m g(x), g(r) ≠ 0. Example: x = 2 is a multiple root of f(x) = x^3 − 3x^2 + 4 = 0 with
multiplicity 2 (a double root), because we can write f(x) = (x − 2)^2 (x + 1).


Remark:- In this chapter, we focus on different numerical methods of approximating real simple roots
of a non-linear equation of a single variable f ( x )=0.

Types of non-linear equations:


Algebraic equations:
 Polynomial equations
Transcendental equations:
 Exponential equations
 Logarithmic equations
 Trigonometric equations
 Hyperbolic equations

A fundamental technique in computer science for solving equations is iteration. Iteration is a process
which is repeated until an answer (root) is achieved, i.e. it produces a sequence of numbers that,
hopefully, converges towards a limit which is a root. The first values of this sequence are called initial
guesses (initial values).

Common Iterative numerical methods can be classified as:


Bracketing methods: These methods start with guesses that bracket (contain) the root and then
systematically reduce the width of the bracket.
 Bisection
 False position
Open methods: These methods also involve systematic trial-and-error iterations but do not
require that the initial guesses contain the root.
 Fixed point iteration
 Newton - Raphson method
 Secant method
2.1 Locating roots
Root locating theorem: From the intermediate value theorem of calculus, if f(x) is continuous on
[a, b] and f(a) f(b) < 0, then there is at least one real number r ∈ [a, b] such that f(r) = 0.
⟹ f crosses or touches the x-axis at x = r
⟹ f has a root at x = r
Locating the roots has limited practical value because it is not precise. However, locating the roots
can be utilized to obtain rough estimates of roots. These estimates can be employed as starting guesses
for the numerical methods.
Example: Give an interval that locates a solution of:
a. f(x) = x^2 − 2x + 1 = 0
b. f(x) = sin x − x + 1/2 = 0
c. f(x) = 2^x + x^2 = 0
d. f(x) = 2^x − x^2 = 0
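
As a rough illustration (not part of the original notes), such locating intervals can be found by scanning a larger interval for sign changes. The following is a minimal sketch, assuming Python; the function name locate_roots and the step size h are illustrative choices.

# Sketch (assumption: Python): scan [a, b] with step h and report subintervals
# on which f changes sign, i.e. candidate brackets for a root.
import math

def locate_roots(f, a, b, h=0.25):
    brackets = []
    x = a
    while x < b:
        x_next = min(x + h, b)
        if f(x) * f(x_next) < 0:      # sign change => a root lies in [x, x_next]
            brackets.append((x, x_next))
        x = x_next
    return brackets

# Part b above: f(x) = sin x - x + 1/2 changes sign on [1.25, 1.5]
print(locate_roots(lambda x: math.sin(x) - x + 0.5, 0.0, 3.0))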

2.2 Numerical methods of solving a non-linear equation f ( x )=0

These methods are based on the idea of successive approximations. We start with one or two initial
approximations to the root and obtain a sequence of approximations {x_k}, k = 0, 1, 2, …, which will
converge to the exact root.
Since we cannot perform an infinite number of iterations, we need a criterion to stop the iterations. We
use one or both of the following criteria:
(i) The equation f(x) = 0 is satisfied to a given accuracy, i.e. f(x_k) is bounded by an error
tolerance ε:
|f(x_k)| ≤ ε
(ii) The magnitude of the difference between two successive iterates is smaller than a given
accuracy or error bound ε:
|x_{k+1} − x_k| ≤ ε
For example, if we require two decimal place accuracy, then we iterate until |x_{k+1} − x_k| ≤ 0.005.
If we require three decimal place accuracy, then we iterate until |x_{k+1} − x_k| ≤ 0.0005.

Here, we will examine each of these iterative methods for solving a given non-linear equation f(x) = 0.

I. Bracketing Methods:- these methods contain (bracket) the root within an interval.


a. Bisection Method:- The bisection method is a root-finding algorithm which repeatedly
bisects an interval and then selects the subinterval in which a root must lie for further processing.

 Procedure: Suppose f (x) is continuous on an interval [a, b]


 Check that f ( a ) f ( b ) < 0 (i.e the function must have opposite signs at the end points)
 Let m be the first estimate of the root, given by m = (a + b)/2, which is the midpoint of
the interval
 Make the following evaluations to determine the subinterval in which the root lies:
i. If f ( m) =0 , then m is the root of f .
ii. If f ( m) f ( a ) <0 , then the root lies in the subinterval [a , m]
iii. If f ( m) f ( a ) >0 , then the root lies in the subinterval [ m ,b ]
 Continue the process until either we get the exact root or we may have an approximate
root with the required degree of accuracy.

The Bisection method generates a sequence {m_n} of midpoints of the successively reduced intervals, i.e.
m_n = (a_n + b_n)/2 for all n ≥ 1, where a_1 = a and b_1 = b
 Condition of convergence Theorem: Assume that f is continuous on [a, b] and f ( a ) f ( b ) < 0,
then
 There exists a number rϵ [ a , b ] such that f ( r )=0.

 The sequence {m_n} converges to the zero x = r. That is, lim_{n→∞} m_n = r.

 Termination criteria for a given tolerance (acceptance) or error ϵ :


The iteration process will be terminated after n steps when the length of the interval [a_n, b_n]
becomes very small, i.e. b_n − a_n ≅ 0 ⟹ a_n ≅ b_n.
Absolute error (e_A) = |m_new − m_old| ≤ (b − a)/2^n
Now, to have an absolute error within a tolerance ε, the number of iterations n should satisfy:
⟹ ε · 2^n > (b − a)
⟹ n > log_2((b − a)/ε)
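
The procedure above can be turned into a short program. The following is a minimal sketch, assuming Python; the function name bisection, the default tolerance eps and the iteration cap max_iter are illustrative choices, not part of these notes.

# Sketch (assumption: Python) of the bisection procedure described above.
def bisection(f, a, b, eps=1e-5, max_iter=100):
    if f(a) * f(b) >= 0:
        raise ValueError("f(a) and f(b) must have opposite signs")
    for _ in range(max_iter):
        m = (a + b) / 2.0                      # midpoint of the current bracket
        if f(m) == 0 or (b - a) / 2.0 <= eps:  # exact root, or bracket small enough
            return m
        if f(a) * f(m) < 0:
            b = m                              # root lies in [a, m]
        else:
            a = m                              # root lies in [m, b]
    return (a + b) / 2.0

# Example 1 below: root of x^3 - 2x - 5 = 0 in [2, 3]
print(bisection(lambda x: x**3 - 2*x - 5, 2, 3))   # about 2.0945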

Example 1: Approximate the root of f(x) = x^3 − 2x − 5 = 0 in [2, 3].

f(2) = −1 and f(3) = 16, and f is continuous on [2, 3]. Thus f has at least one root between 2 and 3.

n    a_n       b_n      m_n        f(m_n)
1    2         3        2.5         5.6250 > 0
2    2         2.5      2.25        1.8906 > 0
3    2         2.25     2.125       0.3457 > 0
4    2         2.125    2.0625     −0.3513 < 0
5    2.0625    2.125    2.09375    −0.0089 < 0

 The desired approximation is m_5 = 2.09375, provided |m_5 − m_4| = |2.09375 − 2.0625| = 0.03125 ≤ ε for the required tolerance ε.
Example 2: Determine approximately how many iterations are necessary to solve
f(x) = x^3 + 4x^2 − 10 = 0 with an accuracy of ε = 10^−5, for a_1 = 1 and b_1 = 2.

Solution: The number of iterations n required is given by

⟹ n > log_2((b − a)/ε) = log_2((2 − 1)/10^−5) = log_2(10^5)
⟹ n > 5 log_2 10 = 5 × 3.3219 = 16.6096
It requires 17 iterations to obtain an approximation with an accuracy of 10^−5.
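
The same bound can be checked in one line; a small sketch, assuming Python:

# Sketch (assumption: Python): smallest integer n with n > log2((b - a)/eps)
import math
n = math.ceil(math.log2((2 - 1) / 1e-5))
print(n)   # 17, matching the hand computation above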

 Advantages and Disadvantages:


 It is very easy to implement
 Very reliable and has good error bounds

 Although it is reliable, it converges slowly.


 The bisection method gives only a range where the root exists, rather than a single estimate
for the root's location.
 If f has several simple roots in the interval [a,b], then the bisection method will find one of
them.
 We can determine the number of iterations for a given tolerance (acceptable) of error ϵ .

b. Method of False position/ Regula Falsi Method /linear interpolation method

One of the shortcomings of the bisection method is that, in dividing the interval [a, b], no account is
taken of the magnitudes of f(a) and f(b). For example, if f(a) is much closer to zero than f(b),
then it is likely that the root is closer to a than to b.
An alternative method, false position, exploits this graphical insight by joining the points (a, f(a)) and
(b, f(b)) with a straight line (chord).
⟹ The intersection of this line with the x-axis represents an improved estimate of the root.

 Procedure: In this method we choose two points x_1 and x_2 such that f(x_1) and f(x_2) are of opposite signs.
Hence, a root of f(x) = 0 must lie between these points, given that f is continuous on [x_1, x_2].

 Now the equation of the chord joining the two points (x_1, f(x_1)) and (x_2, f(x_2)) is given
by
(y − f(x_1))/(x − x_1) = (f(x_2) − f(x_1))/(x_2 − x_1)

 The intersection of this line with the x-axis is the x-intercept, say x_3, which is the first
estimate of the root:
x_3 = (x_1 f(x_2) − x_2 f(x_1))/(f(x_2) − f(x_1))
⟹ x_3 = x_2 − f(x_2)(x_2 − x_1)/(f(x_2) − f(x_1))

 Check the functional value of f ( x 3 )


 If f(x_3) = 0, then x_3 is the exact root of f(x) = 0
 If f(x_3) < 0, the root is in [x_3, x_2] (assuming f(x_2) > 0). In this case, join the points
(x_3, f(x_3)) and (x_2, f(x_2)) by a straight line to get a better approximate root x_4.

 If f(x_3) > 0, then the root is in [x_1, x_3]. If so, join the points (x_1, f(x_1)) and (x_3, f(x_3))
by a straight line to get a better approximate root x_4.
 Continue bracketing the root in this way until the root is estimated adequately by
x_n = b_n − f(b_n)(b_n − a_n)/(f(b_n) − f(a_n)), for the interval [a_n, b_n] that contains the root

 Condition of convergence Theorem:

Assume that f is continuous on [a, b] and there exists a number r ∈ [a, b] such that f(r) = 0. If
f(a) and f(b) have opposite signs, and

x_n = b_n − f(b_n)(b_n − a_n)/(f(b_n) − f(a_n))

represents the sequence of points generated by the false position process, then the sequence {x_n}
converges to the zero x = r.

That is, lim_{n→∞} x_n = r.

 Termination criteria for a given tolerance of error ϵ : Here, we use the size of |f(x_n)| as the
stopping criterion, i.e. we iterate until |f(x_n)| < ϵ.
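
A minimal implementation sketch, assuming Python, of the false position procedure with this stopping test; the function name false_position and the parameters eps and max_iter are illustrative choices.

# Sketch (assumption: Python) of the regula falsi (false position) method.
def false_position(f, a, b, eps=1e-6, max_iter=100):
    if f(a) * f(b) >= 0:
        raise ValueError("f(a) and f(b) must have opposite signs")
    x = a
    for _ in range(max_iter):
        # x-intercept of the chord joining (a, f(a)) and (b, f(b))
        x = b - f(b) * (b - a) / (f(b) - f(a))
        if abs(f(x)) < eps:               # stopping criterion |f(x_n)| < eps
            return x
        if f(a) * f(x) < 0:
            b = x                         # root lies in [a, x]
        else:
            a = x                         # root lies in [x, b]
    return x

# Example below: root of x^3 - 2x - 5 = 0 in [2, 3]
print(false_position(lambda x: x**3 - 2*x - 5, 2, 3))   # about 2.0946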

Example: Find a root of the equation

f(x) = x^3 − 2x − 5 = 0 in [2, 3], using the regula falsi method up to the 4th iteration.
Solution:
The formula for the method is
x_3 = x_2 − f(x_2)(x_2 − x_1)/(f(x_2) − f(x_1))
Since f(2) = −1 and f(3) = 16, the root lies between 2 and 3.
 x_3 = 3 − 16(3 − 2)/(16 − (−1)) = 3 − 16/17 = (51 − 16)/17 = 35/17
x_3 ≅ 2.0588
Now find f(x_3) = (2.0588)^3 − 2(2.0588) − 5 ≈ −0.3911 < 0
 The root is in [x_3, x_2]
Now use
x_4 = x_3 − f(x_3)(x_3 − x_2)/(f(x_3) − f(x_2))
    = 2.0588 − (−0.3911)(2.0588 − 3)/(−0.3911 − 16)
 x_4 ≈ 2.0813
x_5 ≈ 2.0897 . . . etc. (the iterates approach the exact root ≈ 2.0946)

 Advantages and Disadvantages:


 Advantages: usually converges faster than the bisection method
 Limitations:
o When there is a discontinuity over an interval.
o When there are distinct roots.
o Over a "large" interval.

II. Open Methods:- these methods do not bracket the root within an interval.

a. Fixed-point Iteration: This method involves rewriting the equation f(x) = 0 in the form
x = g(x). There are many ways of rewriting f(x) = 0 in this form. The function g(x) is called the
iteration function.
This method needs one initial value (initial approximate root) and an iteration function.

Definition (Fixed Point):- A fixed point of a function g(x) is a number p such that p = g(p)
A fixed point is not a root of the equation g(x) = 0; it is a solution of the equation
x = g(x). Geometrically, the fixed points of a function g(x) are the point(s) of intersection of the
curve y = g(x) and the line y = x.

Now, finding a root of f(x) = 0 is the same as finding a number p such that p = g(p).

 Procedure:
A function g(x) for computing successive terms is needed, together with a starting value x_0,
such that x = g(x). Then a sequence of values {x_n} is obtained using the iterative rule
x_{n+1} = g(x_n).

Definition (Fixed Point Iteration): The iteration x n+1=g ( x n ) for n ≥ 0 is called fixed point
iteration.

The sequence has the pattern

x_0 (initial value)
x_1 = g(x_0)
x_2 = g(x_1)
⋮
x_{n+1} = g(x_n)

Convergence of an iteration method depends on the choice of the iteration function g(x ), and a
suitable initial approximation x 0, to the root.

 Condition of convergence Theorem:



Theorem: Assume that g(x ) is a continuous function and that { x n }n=0 is a sequence generated
by fixed point iteration. If lim x n= p, then p is a fixed point of g(x ).
n→∞

The following two theorems establish conditions for the existence of a fixed point and the
convergence of the fixed-point iteration process to a fixed point.

Theorem (Existence and uniqueness): Assume that g(x) is continuous on [a, b]. Then we
have the following conclusions.

(i). If the range of the mapping y=g (x) satisfies y ∈[a ,b ] for all x ∈[a , b ], then g has a
fixed point in [a , b].
(ii). Furthermore, suppose that g′(x) is defined over (a, b) and that a positive constant
k < 1 exists with |g′(x)| ≤ k for all x ∈ (a, b); then g has a unique fixed point p in [a, b].

Theorem (conditions of convergence): Assume that the following hypothesis hold true.

(a) p is a fixed point of a function g,


 (b) g and g′ are continuous on [a, b],
(c) k is a positive constant,
(d) x 0 ϵ (a , b), and
(e) g ( x ) ∈[a , b] for all x ∈[a , b ]. Then we have the following conclusions.

(i). If |g' (x )|≤ k < 1 for all x ∈[a , b ] , then the iteration x n+1=g ( x n ) will converge to
the unique fixed point p ∈(a , b). In this case, p is said to be an attractive fixed point.

(ii). If |g′(x)| > 1 for all x ∈ [a, b], then the iteration x_{n+1} = g(x_n) will not converge to p.
In this case, p is said to be a repelling fixed point and the iteration exhibits local
divergence.
 Termination criteria within a given tolerance error ϵ :- The stopping criterion of the method
of fixed point iteration is |f ( x n )| < ϵ
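
A minimal sketch, assuming Python, of fixed point iteration with the stopping test |f(x_n)| < ε; the function name fixed_point and the parameters are illustrative choices.

# Sketch (assumption: Python) of fixed point iteration x_{n+1} = g(x_n).
import math

def fixed_point(g, f, x0, eps=1e-5, max_iter=100):
    x = x0
    for _ in range(max_iter):
        x = g(x)                          # one iteration step
        if abs(f(x)) < eps:               # stopping criterion |f(x_n)| < eps
            return x
    return x                              # may not have converged

# Example below: f(x) = 3x e^x - 1 = 0 with g(x) = (1/3) e^(-x), x0 = 1
print(fixed_point(lambda x: math.exp(-x) / 3.0,
                  lambda x: 3 * x * math.exp(x) - 1,
                  1.0))                   # about 0.2576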
Example: Use the method of simple iteration to find the root of the function f(x) = 3x e^x − 1 = 0; use
x_0 = 1.
Solution
3x e^x − 1 = 0 ⟹ x = 1/(3e^x) ⟹ x = (1/3) e^(−x)
Take g(x) = (1/3) e^(−x)

 x_{n+1} = g(x_n)
 x_1 = g(x_0) = (1/3) e^(−1) = 0.12263
 x_2 = g(x_1) = (1/3) e^(−0.12263) = 0.29486
 x_3 = g(x_2) = (1/3) e^(−0.29486) = 0.24821
 x_4 = g(x_3) = (1/3) e^(−0.24821) = 0.26007
 x_5 = g(x_4) = (1/3) e^(−0.26007) = 0.25700

 Using only five iterations, the approximate root of f(x) = 0 is 0.257

Example 2: Use the method of simple iteration to find the root of f(x) = x^3 − 2 = 0. Take x_0 = 1.2.
Solution: Let us consider the following two different rearrangements:

I:  x^3 − 2 = 0 ⟺ x = x^3 + x − 2,        so g_1(x) = x^3 + x − 2,       x_0 = 1.2
II: x^3 − 2 = 0 ⟺ x = (2 + 5x − x^3)/5,   so g_2(x) = (2 + 5x − x^3)/5,  x_0 = 1.2

I:  x_1 = g_1(x_0) = 0.928          II:  x_1 = g_2(x_0) = 1.2544
    x_2 = g_1(x_1) = −0.273              x_2 = g_2(x_1) = 1.2596
    x_3 = g_1(x_2) = −2.293              x_3 = g_2(x_2) = 1.2599
    x_4 = g_1(x_3) = −16.349             x_4 = g_2(x_3) = 1.25992
    ⋮                                    ⋮

But the exact root is x = 2^(1/3) ≈ 1.2599: the iteration with g_1 diverges, while the iteration with g_2 converges.


Remark: The simple iteration may fail to converge. This usually happens when the slope of
g(x ) (in absolute value) near the root is too large.
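
This remark can be observed directly by running both rearrangements g_1 and g_2 of Example 2 side by side; a small sketch, assuming Python (the slope statements in the comments follow from g_1′(x) = 3x^2 + 1 and g_2′(x) = 1 − (3/5)x^2):

# Sketch (assumption: Python): the two rearrangements of x^3 - 2 = 0 from Example 2.
g1 = lambda x: x**3 + x - 2           # |g1'(x)| > 1 near the root -> diverges
g2 = lambda x: (2 + 5*x - x**3) / 5   # |g2'(x)| < 1 near the root -> converges

x1 = x2 = 1.2
for n in range(4):
    x1, x2 = g1(x1), g2(x2)
    print(n + 1, x1, x2)              # compare with the table above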
Exercise: Observe the convergence of the solution of f(x) = x^3 − 2 = 0, x_0 = 1.2, such that g(x) = 2/x^2.

b. The Secant method:- The secant method is an open method of root-finding algorithm that
uses a succession of roots of secant lines to better approximate a root of a function f. It
resembles the method of false position, except that no attempt is made to ensure that the root
remains enclosed. This method needs two initial values (initial approximate roots).
 Procedure: Let x = r be a root of f(x), i.e. f(r) = 0, and let x_0 and x_1 be two initial
approximate roots of f.
 Draw the secant line passing through (x_0, f(x_0)) and (x_1, f(x_1))

 Find the point at which this line crosses the x-axis, say x_2. From the slope of the
line, we have
(f(x_1) − f(x_0))/(x_1 − x_0) = f(x_1)/(x_1 − x_2)
⟹ x_2 = x_1 − f(x_1)(x_1 − x_0)/(f(x_1) − f(x_0))

 Further approximations x_3, x_4, . . . are computed using the iteration

x_{n+1} = x_n − f(x_n)(x_n − x_{n−1})/(f(x_n) − f(x_{n−1})), for n ≥ 1

 Condition of convergence Theorem:


The iterates xn of the secant method converge to a root of f, if the initial values x0 and x1 are
sufficiently close to the root.
If the initial values are not close to the root, then there is no guarantee that the secant
method converges.
 Termination criteria within a given tolerance error ϵ :- The stopping criterion for this
method is the size of |f ( x n )|≤ ϵ
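
A minimal sketch, assuming Python, of the secant iteration with this stopping test; the function name secant and the parameters are illustrative choices.

# Sketch (assumption: Python) of the secant method with the |f(x_n)| stopping test.
import math

def secant(f, x0, x1, eps=1e-8, max_iter=100):
    for _ in range(max_iter):
        x2 = x1 - f(x1) * (x1 - x0) / (f(x1) - f(x0))   # secant step
        if abs(f(x2)) < eps:
            return x2
        x0, x1 = x1, x2                                  # shift the two points
    return x1

# Example below: f(x) = e^x - x - 2 = 0 with x0 = 1, x1 = 2
print(secant(lambda x: math.exp(x) - x - 2, 1.0, 2.0))   # about 1.1462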
Example: Use the secant method to approximate the root of f(x) = e^x − x − 2 = 0 with initial
approximations x_0 = 1 and x_1 = 2, iterating until |f(x_n)| < Tol = 10^−8.
Solution
x_0 = 1, f(x_0) = −0.2817
x_1 = 2, f(x_1) = 3.3891
⟹ x_2 = x_1 − f(x_1)(x_1 − x_0)/(f(x_1) − f(x_0)) = 2 − 3.3891(2 − 1)/(3.3891 + 0.2817) = 1.0767

And f(x_2) ≈ −0.1416
⟹ x_3 = x_2 − f(x_2)(x_2 − x_1)/(f(x_2) − f(x_1)) = 1.0767 − (−0.1416)(1.0767 − 2)/(−0.1416 − 3.3891) = 1.1137
And f(x_3) = e^1.1137 − 1.1137 − 2 = −0.0680
⟹ x_4 = x_3 − f(x_3)(x_3 − x_2)/(f(x_3) − f(x_2)) = 1.1479
And f(x_4) = 0.0036
⟹ x_5 = 1.1462
And f(x_5) = −0.00005
....
 Advantages and Disadvantages:
Convergence of the approximate roots to the exact root is not guaranteed. However, the
process is simpler, because the sign of f(x_n) is not tested, and it often converges faster.
Exercise: Use the regula falsi method to find the root of f(x) = (1/4)x + cos x = 0 until
|f(x_n)| < 5 × 10^−6, in the interval [2, 3] (Hint: use radian measure).

c. Newton - Raphson Method:- This open method needs one initial value (approximate root)

 Procedure:
Let x_0 denote the known approximate value of the root of f(x) = 0. Then, using the Taylor series
expansion about x_0, we expand f(x) = 0 as:
f(x) = f(x_0) + f′(x_0)(x − x_0) + (1/2!) f″(x_0)(x − x_0)^2 + . . .
Neglecting the second- and higher-order terms, we get:
f(x_0) + f′(x_0)(x − x_0) = 0
x − x_0 = −f(x_0)/f′(x_0)  ⟹  x = x_0 − f(x_0)/f′(x_0)
Replacing x by x_1 we get:
x_1 = x_0 − f(x_0)/f′(x_0)
Hence, successive approximate roots are given by the Newton–Raphson iteration:
x_{n+1} = x_n − f(x_n)/f′(x_n)
OR, geometrically, the Newton–Raphson method consists in replacing the part of the curve between
the point (x_0, f(x_0)) and the x-axis by the tangent line to the curve at the point (x_0, f(x_0)), as
shown below:

 From the figure, a is the root of f(x) = 0, i.e. f(a) = 0

 And x_0 is the initial approximate root of a
 Let x_1 be the x-intercept of the tangent line

 Since the slope of the tangent is f′(x_0) = f(x_0)/(x_0 − x_1), we have x_1 = x_0 − f(x_0)/f′(x_0)
 Similarly, successive approximate roots are given by the Newton–Raphson iteration:
x_{n+1} = x_n − f(x_n)/f′(x_n)

 Condition of convergence Theorem:


Assume that f is continuous on [a, b] and there exists a number p ∈ [a, b] where f(p) = 0.
If f′(p) ≠ 0, then there exists a δ > 0 such that the sequence {p_k}, k = 0, 1, 2, …, defined by the iteration
p_{k+1} = g(p_k) = p_k − f(p_k)/f′(p_k), for k = 0, 1, . . .
will converge to p for any initial approximation p_0 ∈ [p − δ, p + δ].
 Termination criteria for a given tolerance of error ϵ :- Use the value |f ( x n )|< ϵ as stopping
criterion for Newton’s method.
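
A minimal sketch, assuming Python, of the Newton–Raphson iteration with this stopping test; the function name newton and the parameters are illustrative choices.

# Sketch (assumption: Python) of the Newton-Raphson iteration x_{n+1} = x_n - f(x_n)/f'(x_n).
def newton(f, df, x0, eps=1e-8, max_iter=50):
    x = x0
    for _ in range(max_iter):
        x = x - f(x) / df(x)              # Newton step
        if abs(f(x)) < eps:               # stopping criterion |f(x_n)| < eps
            return x
    return x

# Example 1 below: x^3 - 2x - 5 = 0 with x0 = 2
print(newton(lambda x: x**3 - 2*x - 5,
             lambda x: 3*x**2 - 2,
             2.0))                        # about 2.0945515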

Example 1: Use the Newton–Raphson method to find a root of the equation x^3 − 2x − 5 = 0, with
initial value x_0 = 2.
Solution: f(x) = x^3 − 2x − 5
f′(x) = 3x^2 − 2
x_{n+1} = x_n − f(x_n)/f′(x_n), n = 0, 1, 2 . . .

 x_{n+1} = x_n − (x_n^3 − 2x_n − 5)/(3x_n^2 − 2)
x_0 = 2:  x_1 = x_0 − (x_0^3 − 2x_0 − 5)/(3x_0^2 − 2) = 2 − (−1)/10 = 2.1
x_2 = 2.094568 is the approximate root in the 2nd iteration.

Example 2: Find the positive root of f(x) = x^2 − 16 = 0 using the Newton–Raphson method, taking x_0 = 5,
up to the 3rd iteration.

 nth root algorithm:- The principal nth root A^(1/n), A ≥ 0, of a positive real number A is the positive
real solution of the equation
x = A^(1/n) ⟹ x^n = A
Let f(x) = x^n − A = 0

There is a very fast-converging nth root algorithm for finding A^(1/n), namely Newton's method.
The general iteration scheme of Newton's method to solve f(x) = 0 is:

1. Make an initial guess x0


2. Set x_{k+1} = x_k − f(x_k)/f′(x_k)
3. Repeat step 2 until the desired precision is reached.

The nth root problem can be viewed as searching for a zero of the function f(x) = x^n − A.

So the derivative is f′(x) = n x^(n−1) and the iteration rule is

x_{k+1} = x_k − f(x_k)/f′(x_k)
        = x_k − (x_k^n − A)/(n x_k^(n−1))
        = x_k − x_k/n + A/(n x_k^(n−1))
        = (1/n)[(n − 1) x_k + A/x_k^(n−1)],

leading to the general nth root algorithm.
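
A minimal sketch, assuming Python, of this general nth root algorithm; the function name nth_root and the stopping test on |x_{k+1} − x_k| are illustrative choices.

# Sketch (assumption: Python) of the general nth root algorithm derived above.
def nth_root(A, n, x0=1.0, eps=1e-10, max_iter=100):
    x = x0
    for _ in range(max_iter):
        x_new = ((n - 1) * x + A / x**(n - 1)) / n    # Newton update for x^n - A = 0
        if abs(x_new - x) < eps:
            return x_new
        x = x_new
    return x

# Example below: cube root of 3 with x0 = 1
print(nth_root(3, 3, x0=1.0))   # about 1.4422496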

Example: Find the cube root of 3 (3^(1/3)) using the Newton–Raphson method, taking x_0 = 1.

