Numerical Methods
CEng2112
Chapter 2
Solving non-linear equations of one variable
Contents:
2.1 Locating roots
2.2 Numerical methods of solving a non-linear equation
    I. Bracketing Methods
       Bisection Method
       Method of False Position
    II. Open Methods
       Iteration Method (Fixed-Point Iteration)
       Secant Method
       Newton-Raphson Method
Introduction:
A problem of great importance in science and engineering is that of determining the roots/zeros of an equation of the form f(x) = 0. Such equations include:
    Polynomial equations
    Transcendental equations:
        Exponential equations
        Logarithmic equations
        Trigonometric equations
        Hyperbolic equations
These methods are based on the idea of successive approximations. We start with one or two initial approximations to the root and obtain a sequence of approximations {x_k}, k = 0, 1, 2, ..., which converges to the exact root.
Since we cannot perform an infinite number of iterations, we need a criterion to stop. We use one or both of the following criteria:
(i) The equation f(x) = 0 is satisfied to a given accuracy, i.e. f(x_k) is bounded by an error tolerance ε:
    |f(x_k)| ≤ ε
(ii) The magnitude of the difference between two successive iterates is smaller than a given accuracy or error bound ε:
    |x_{k+1} − x_k| ≤ ε
For example, if we require two-decimal-place accuracy, we iterate until |x_{k+1} − x_k| ≤ 0.005; if we require three-decimal-place accuracy, we iterate until |x_{k+1} − x_k| ≤ 0.0005.
Here, we examine each of these iterative methods for solving a given non-linear equation f(x) = 0.
I. Bracketing Methods
a. The Bisection Method:- The bisection method generates a sequence {m_n} of midpoints of the successively halved interval, i.e.
    m_n = (a_n + b_n)/2  for all n ≥ 1, where a_1 = a and b_1 = b.
Condition of Convergence (Theorem): Assume that f is continuous on [a, b] and f(a)·f(b) < 0. Then:
(i) There exists a number r ∈ [a, b] such that f(r) = 0.
(ii) The sequence {m_n} converges to the zero x = r, that is, lim (n→∞) m_n = r.
The iteration process is terminated after n steps, when the length of the interval (a_n, b_n) becomes very small, i.e. b_n − a_n ≅ 0, so that a_n ≅ b_n.
After n bisections the bracketing interval has length (b − a)/2^n, so
    Absolute error (e_A) = |m_new − m_old| ≤ (b − a)/2^n < ε.
Hence, for an absolute error within a tolerance ε, the number of iterations n should satisfy
    ε·2^n > (b − a)  ⟹  n > log_2((b − a)/ε).
Example: Apply the bisection method to f(x) = x^3 − 2x − 5 = 0 on [2, 3] (note f(2) = −1 < 0 and f(3) = 16 > 0):

    n     a_n       b_n       m_n        f(m_n)
    1     2         3         2.5         5.6250 > 0
    2     2         2.5       2.25        1.8906 > 0
    3     2         2.25      2.125       0.3457 > 0
    4     2         2.125     2.0625     −0.3513 < 0
    5     2.0625    2.125     2.09375    −0.0089 < 0
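The table above can be reproduced with a short program. Below is a minimal Python sketch of the bisection method (the helper name `bisect` and its default tolerance are our own choices, not from the text); it also checks the iteration-count bound n > log_2((b − a)/ε):

```python
import math

def bisect(f, a, b, tol=1e-6):
    """Bisection: repeatedly halve [a, b], keeping the half with the sign change."""
    if f(a) * f(b) >= 0:
        raise ValueError("f(a) and f(b) must have opposite signs")
    while b - a > tol:
        m = 0.5 * (a + b)
        if f(m) == 0:
            return m           # landed exactly on the root
        if f(a) * f(m) < 0:
            b = m              # root lies in [a, m]
        else:
            a = m              # root lies in [m, b]
    return 0.5 * (a + b)

# f(x) = x^3 - 2x - 5 on [2, 3], as in the table above
f = lambda x: x**3 - 2*x - 5
root = bisect(f, 2, 3)

# Iterations predicted by n > log2((b - a)/eps) for eps = 1e-6:
n_required = math.ceil(math.log2((3 - 2) / 1e-6))
print(root, n_required)   # root ≈ 2.09455, n_required = 20
```

With tol = 1e-6 the final midpoint agrees with the exact root r ≈ 2.094551 to about six decimal places, consistent with the bound above.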
b. Method of False Position (Regula Falsi):- One shortcoming of the bisection method is that, in dividing the interval [a, b], no account is taken of the magnitudes of f(a) and f(b). For example, if f(a) is much closer to zero than f(b), then the root is likely to be closer to a than to b.
An alternative method, false position, exploits this graphical insight by joining the points (a, f(a)) and (b, f(b)) with a straight line (chord). The intersection of this line with the x-axis gives an improved estimate of the root.
Procedure: In this method we choose two points x_1 and x_2 such that f(x_1) and f(x_2) are of opposite signs. Hence a root of f(x) = 0 must lie between these points, given that f is continuous on [x_1, x_2]. The equation of the chord joining the two points (x_1, f(x_1)) and (x_2, f(x_2)) is
    (y − f(x_1))/(x − x_1) = (f(x_2) − f(x_1))/(x_2 − x_1).
Setting y = 0, the intersection of this chord with the x-axis gives the first estimate x_3 of the root:
    x_3 = (x_1 f(x_2) − x_2 f(x_1))/(f(x_2) − f(x_1))
        = x_2 − f(x_2)(x_2 − x_1)/(f(x_2) − f(x_1)).
If f(x_3) > 0 (assuming f(x_1) < 0 < f(x_2)), then the root lies in [x_1, x_3]; join the points (x_1, f(x_1)) and (x_3, f(x_3)) by a straight line to get a better approximate root x_4. Continue the process, bracketing the root, until the root is estimated adequately by
    x_n = b_n − f(b_n)(b_n − a_n)/(f(b_n) − f(a_n)),
for the interval [a_n, b_n] that contains the root.
Theorem (convergence): Assume that f is continuous on [a, b] and there exists a number r ∈ [a, b] such that f(r) = 0. If f(a) and f(b) have opposite signs, and
    x_n = b_n − f(b_n)(b_n − a_n)/(f(b_n) − f(a_n))
represents the sequence of points generated by the false-position process, then the sequence {x_n} converges to the zero x = r.
Termination criterion for a given error tolerance ε: here we use the size of |f(x)| as the stopping criterion, i.e. we iterate until |f(x_n)| < ε.
Example: Find a root of f(x) = x^3 − 2x − 5 = 0 in [2, 3], using the regula falsi method, up to the 4th iteration.
Solution: The formula for the method is
    x_3 = x_2 − f(x_2)(x_2 − x_1)/(f(x_2) − f(x_1)).
Since f(2) = −1 and f(3) = 16, the root lies between 2 and 3. Taking x_1 = 2 and x_2 = 3:
    x_3 = 3 − 16(3 − 2)/(16 − (−1)) = 3 − 16/17 = (51 − 16)/17 = 35/17 ≅ 2.0588.
Now f(x_3) = (2.0588)^3 − 2(2.0588) − 5 ≈ −0.3911 < 0, so the root lies in [x_3, x_2]. Next use
    x_4 = x_3 − f(x_3)(x_3 − x_2)/(f(x_3) − f(x_2))
        = 2.0588 − (−0.3911)(2.0588 − 3)/(−0.3911 − 16)
    x_4 ≈ 2.0813.
Similarly, x_5 ≈ 2.0896, etc.; the iterates approach the root r ≈ 2.0946 from the left.
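The computation above can be checked with a short program. Below is a minimal Python sketch of regula falsi (the helper name `false_position` and its default tolerance are our own choices, not from the text):

```python
def false_position(f, a, b, tol=5e-5, max_iter=100):
    """Regula falsi: the new estimate is the x-intercept of the chord through
    (a, f(a)) and (b, f(b)); keep the subinterval where the sign changes."""
    fa, fb = f(a), f(b)
    if fa * fb >= 0:
        raise ValueError("f(a) and f(b) must have opposite signs")
    x = a
    for _ in range(max_iter):
        x = b - fb * (b - a) / (fb - fa)   # x-intercept of the chord
        fx = f(x)
        if abs(fx) < tol:                  # stopping rule |f(x_n)| < eps
            break
        if fa * fx < 0:
            b, fb = x, fx                  # root in [a, x]
        else:
            a, fa = x, fx                  # root in [x, b]
    return x

# The worked example above: f(x) = x^3 - 2x - 5 on [2, 3]
f = lambda x: x**3 - 2*x - 5
print(false_position(f, 2, 3))   # ≈ 2.09455
```

On this function the left endpoint creeps toward the root while b stays at 3, matching the one-sided behaviour of the iterates in the worked example.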
II. Open Methods:- These methods do not confine the root within an interval; no bracketing is required.
a. Fixed-Point Iteration: This method involves writing the equation f(x) = 0 in the form x = g(x). There are many ways of rewriting f(x) = 0 in this form. The function g(x) is called the iteration function.
This method needs one initial value (an initial approximate root) and an iteration function.
Definition (Fixed Point):- A fixed point of a function g(x) is a number p such that p = g(p).
Note that a fixed point is not a root of the equation g(x) = 0; it is a solution of the equation x = g(x). Geometrically, the fixed points of a function g(x) are the point(s) of intersection of the curve y = g(x) and the line y = x.
Thus, finding a root of f(x) = 0 is the same as finding a number p such that p = g(p).
Procedure:
A function g(x) for computing successive terms is needed, together with a starting value x_0. A sequence of values {x_n} is then obtained using the iterative rule x_{n+1} = g(x_n).
Definition (Fixed-Point Iteration): The iteration x_{n+1} = g(x_n) for n ≥ 0 is called fixed-point iteration:
    x_0 (initial value)
    x_1 = g(x_0)
    x_2 = g(x_1)
    ...
    x_{n+1} = g(x_n)
Convergence of an iteration method depends on the choice of the iteration function g(x) and on a suitable initial approximation x_0 to the root.
The following two theorems establish conditions for the existence of a fixed point and the
convergence of the fixed-point iteration process to a fixed point.
Theorem (Existence and uniqueness): Assume that g(x) is continuous on [a, b]. Then we have the following conclusions.
(i) If the range of the mapping y = g(x) satisfies y ∈ [a, b] for all x ∈ [a, b], then g has a fixed point in [a, b].
(ii) Furthermore, if g'(x) is defined over (a, b) and a positive constant k < 1 exists with |g'(x)| ≤ k for all x ∈ (a, b), then g has a unique fixed point p in [a, b].
Theorem (Conditions of convergence): Assume that the hypotheses above hold.
(i) If |g'(x)| ≤ k < 1 for all x ∈ [a, b], then the iteration x_{n+1} = g(x_n) will converge to the unique fixed point p ∈ (a, b). In this case, p is said to be an attractive fixed point.
(ii) If |g'(x)| > 1 for all x ∈ [a, b], then the iteration x_{n+1} = g(x_n) will not converge to p. In this case, p is said to be a repelling fixed point and the iteration exhibits local divergence.
Termination criterion for a given error tolerance ε:- The stopping criterion of the method of fixed-point iteration is |f(x_n)| < ε.
Example: Use the method of simple iteration to find the root of f(x) = 3x e^x − 1 = 0; use x_0 = 1.
Solution: From 3x e^x − 1 = 0 we get x = 1/(3e^x) = (1/3)e^(−x). Take g(x) = (1/3)e^(−x) and iterate x_{n+1} = g(x_n):
    x_1 = g(x_0) = (1/3)e^(−1) = 0.12263
    x_2 = g(x_1) = (1/3)e^(−0.12263) = 0.29486
    x_3 = g(x_2) = (1/3)e^(−0.29486) = 0.24821
    x_4 = g(x_3) = (1/3)e^(−0.24821) = 0.26007
    x_5 = g(x_4) = (1/3)e^(−0.26007) = 0.25700
The iterates oscillate toward, and converge to, the root r ≈ 0.2576.
Example 2: Use the method of simple iteration to find the root of f(x) = x^3 − 2 = 0. Take x_0 = 1.2.
Solution: Let us consider the following two different rearrangements:
    I.  x^3 − 2 = 0  ⇔  x = x^3 + x − 2,        so g_1(x) = x^3 + x − 2,       x_0 = 1.2
    II. x^3 − 2 = 0  ⇔  x = (2 + 5x − x^3)/5,   so g_2(x) = (2 + 5x − x^3)/5,  x_0 = 1.2
    x_1 = g_1(x_0) = 0.928                       x_1 = g_2(x_0) = 1.2544
Near the root r = 2^(1/3) ≈ 1.2599 we have |g_1'(x)| = |3x^2 + 1| > 1, so iteration with g_1 diverges, whereas |g_2'(x)| = |(5 − 3x^2)/5| < 1, so iteration with g_2 converges. The choice of iteration function is decisive.
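Both examples can be run with one small fixed-point driver. Below is a minimal Python sketch (the helper name `fixed_point` and its tolerance are our own assumptions, not from the text):

```python
import math

def fixed_point(g, x0, tol=1e-5, max_iter=100):
    """Iterate x_{n+1} = g(x_n) until two successive iterates agree within tol."""
    x = x0
    for _ in range(max_iter):
        x_new = g(x)
        if abs(x_new - x) < tol:
            return x_new
        x = x_new
    return x

# Example 1 above: f(x) = 3x e^x - 1 = 0, rewritten as x = (1/3) e^(-x)
r1 = fixed_point(lambda x: math.exp(-x) / 3.0, 1.0)

# Example 2 above: f(x) = x^3 - 2 = 0 with the convergent form g_2
r2 = fixed_point(lambda x: (2 + 5*x - x**3) / 5.0, 1.2)

print(r1, r2)   # r1 ≈ 0.2576, r2 ≈ 1.2599 (= 2^(1/3))
```

Trying the divergent form g_1(x) = x^3 + x − 2 in the same driver makes the iterates blow up, which is the point of Example 2.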
b. The Secant Method:- The secant method is an open root-finding method that uses a succession of roots of secant lines to better approximate a root of a function f. It closely resembles the method of false position, except that no attempt is made to ensure that the root remains enclosed. This method needs two initial values (initial approximate roots).
Procedure: Let x = r be the root of f(x), i.e. f(r) = 0, and let x_0 and x_1 be two initial approximate roots of f.
Draw the secant line through (x_0, f(x_0)) and (x_1, f(x_1)), and find the point x_2 at which this line crosses the x-axis. From the slope of the line,
    (f(x_1) − f(x_0))/(x_1 − x_0) = f(x_1)/(x_1 − x_2)
    ⇒ x_2 = x_1 − f(x_1)(x_1 − x_0)/(f(x_1) − f(x_0)).
Example: Use the secant method to find a root of f(x) = e^x − x − 2 = 0, starting from x_0 = 1 and x_1 = 2, so that f(x_0) = e − 3 ≈ −0.2817 and f(x_1) = e^2 − 4 ≈ 3.3891. Then
    x_2 = x_1 − f(x_1)(x_1 − x_0)/(f(x_1) − f(x_0)) = 2 − 3.3891(2 − 1)/(3.3891 + 0.2817) ≈ 1.0767,
and f(x_2) ≈ −0.1416.
    x_3 = x_2 − f(x_2)(x_2 − x_1)/(f(x_2) − f(x_1)) = 1.0767 − (−0.1416)(1.0767 − 2)/(−0.1416 − 3.3891) ≈ 1.1137,
and f(x_3) = e^(1.1137) − 1.1137 − 2 ≈ −0.0680.
    x_4 = x_3 − f(x_3)(x_3 − x_2)/(f(x_3) − f(x_2)) ≈ 1.1479
BC [Hawassa University], page 14 of 15
and f(x_4) ≈ 0.0036, so
    x_5 ≈ 1.1462, with f(x_5) ≈ −0.00005, and so on.
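The iteration above can be reproduced with a short program. Below is a minimal Python sketch of the secant method (the helper name `secant` and the stopping tolerance are our own choices, not from the text):

```python
import math

def secant(f, x0, x1, tol=1e-4, max_iter=50):
    """Secant method: like regula falsi, but the two most recent points are
    always kept, whether or not they bracket the root."""
    for _ in range(max_iter):
        f0, f1 = f(x0), f(x1)
        x2 = x1 - f1 * (x1 - x0) / (f1 - f0)
        if abs(f(x2)) < tol:               # stopping rule |f(x_n)| < eps
            return x2
        x0, x1 = x1, x2                    # slide the two-point window forward
    return x1

# The worked example above: f(x) = e^x - x - 2 with x0 = 1, x1 = 2
f = lambda x: math.exp(x) - x - 2
print(secant(f, 1.0, 2.0))   # ≈ 1.1462
```

The printed value matches x_5 ≈ 1.1462 from the hand computation, since |f(x_5)| ≈ 5 × 10^(−5) already satisfies the tolerance.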
Advantages and Disadvantages:
Convergence of the approximations to the exact root is not guaranteed, since the root is not kept bracketed. However, the process is simpler, because the sign of f(x_n) is not tested, and it often converges faster than the bracketing methods.
Exercise: Use the regula falsi method to find the root of f(x) = (1/4)x + cos x = 0 in the interval [2, 3], iterating until |f(x_n)| < 5 × 10^(−6) (Hint: use radian measure).
c. Newton-Raphson Method:- This open method needs one initial value (an approximate root).
Procedure:
Let x_0 denote the known approximate value of the root of f(x) = 0. Expanding f(x) in a Taylor series about x_0,
    f(x) = f(x_0) + f'(x_0)(x − x_0) + (1/2!)f''(x_0)(x − x_0)^2 + ...
Neglecting the second- and higher-order terms and setting f(x) = 0, we get
    f(x_0) + f'(x_0)(x − x_0) = 0
    ⇒ x − x_0 = −f(x_0)/f'(x_0), i.e. x = x_0 − f(x_0)/f'(x_0).
Calling this improved value x_1, we get
    x_1 = x_0 − f(x_0)/f'(x_0).
Hence, successive approximate roots are given by the Newton-Raphson iteration
    x_{n+1} = x_n − f(x_n)/f'(x_n).
Alternatively, viewed geometrically, the Newton-Raphson method consists of replacing the part of the curve between the point (x_0, f(x_0)) and the x-axis by the tangent line to the curve at (x_0, f(x_0)). Since the slope of this tangent is f'(x_0) = f(x_0)/(x_0 − x_1), we again obtain x_1 = x_0 − f(x_0)/f'(x_0), and successive approximate roots are given by the same Newton-Raphson iteration
    x_{n+1} = x_n − f(x_n)/f'(x_n).
Example 1: Use the Newton-Raphson method to find a root of the equation x^3 − 2x − 5 = 0, with initial value x_0 = 2.
Solution: f(x) = x^3 − 2x − 5 and f'(x) = 3x^2 − 2, so
    x_{n+1} = x_n − f(x_n)/f'(x_n) = x_n − (x_n^3 − 2x_n − 5)/(3x_n^2 − 2),  n = 0, 1, 2, ...
With x_0 = 2:
    x_1 = 2 − (8 − 4 − 5)/(12 − 2) = 2 − (−1)/10 = 2.1
    x_2 = 2.1 − (0.061)/(11.23) ≈ 2.094568 is the approximate root in the 2nd iteration.
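The two hand iterations above extend naturally to a loop. Below is a minimal Python sketch of the Newton-Raphson iteration (the helper name `newton` and its stopping rule |x_{n+1} − x_n| < tol are our own choices, not from the text):

```python
def newton(f, df, x0, tol=1e-6, max_iter=50):
    """Newton-Raphson: x_{n+1} = x_n - f(x_n)/f'(x_n)."""
    x = x0
    for _ in range(max_iter):
        step = f(x) / df(x)
        x = x - step
        if abs(step) < tol:    # successive iterates agree within tol
            break
    return x

# Example 1 above: x^3 - 2x - 5 = 0 with x0 = 2
root = newton(lambda x: x**3 - 2*x - 5, lambda x: 3*x**2 - 2, 2.0)
print(root)   # ≈ 2.0945515
```

Note how quickly it converges: x_1 = 2.1, x_2 ≈ 2.094568, and the next iterate already agrees with the root to about ten decimal places, reflecting the quadratic convergence of the method.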
Example 2: Find the positive root of f(x) = x^2 − 16 = 0 using the Newton-Raphson method, taking x_0 = 5, up to the 3rd iteration.
nth Root Algorithm:- The principal nth root A^(1/n) of a non-negative real number A is the non-negative real solution of the equation
    x = A^(1/n), i.e. x^n = A.
Let f(x) = x^n − A = 0. Newton's method gives a very fast-converging algorithm for finding A^(1/n): viewing the nth-root problem as searching for a zero of f(x) = x^n − A, the general iteration scheme
    x_{k+1} = x_k − f(x_k)/f'(x_k)
becomes
    x_{k+1} = x_k − (x_k^n − A)/(n·x_k^(n−1))
            = x_k − x_k/n + A/(n·x_k^(n−1))
            = (1/n)[(n − 1)x_k + A/x_k^(n−1)],
leading to the general nth root algorithm.
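The scheme can be sketched in a few lines of Python (the helper name `nth_root`, the starting guess, and the tolerance are our own choices, not from the text):

```python
def nth_root(A, n, tol=1e-12, max_iter=200):
    """Newton's method for f(x) = x^n - A:
       x_{k+1} = ((n - 1) x_k + A / x_k^(n-1)) / n."""
    if A < 0:
        raise ValueError("A must be non-negative")
    if A == 0:
        return 0.0
    x = A if A >= 1 else 1.0    # any positive starting guess works
    for _ in range(max_iter):
        x_new = ((n - 1) * x + A / x ** (n - 1)) / n
        if abs(x_new - x) < tol:
            return x_new
        x = x_new
    return x

print(nth_root(16, 2))   # ≈ 4.0 (cf. Example 2 above: sqrt(16))
print(nth_root(2, 3))    # ≈ 1.25992 (cube root of 2)
```

For n = 2 this reduces to the classical Babylonian (Heron) method x_{k+1} = (x_k + A/x_k)/2.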