CHAPTER TWO .......................................................................................................................................... 2
Nonlinear equations ...................................................................................................................................... 2
2.1. Locating roots ................................................................................................................................... 2
2.2. Bisection method .............................................................................................................................. 2
2.3. Interpolation (false position) method ............................................................................................. 7
2.4. Iteration Methods............................................................................................................................. 9
2.5. Newton-Raphson Method .............................................................................................................. 10
2.5.1. Multiple roots and modified newton’s method ..................................................................... 14
2.6. The Secant Method ........................................................................................................................ 15
2.7. Conditions for convergence (Convergence of the Iteration Methods) ...................................... 17
1|Page
CHAPTER TWO
Nonlinear equations
This chapter deals with ways of solving nonlinear equations using different types of numerical
methods.
The following equations are called nonlinear equations:
Quadratic equations, given by ax² + bx + c = 0, a ≠ 0
Cubic equations, given by ax³ + bx² + cx + d = 0, a ≠ 0
General polynomial equations of degree n ≥ 4, given by
aₙxⁿ + aₙ₋₁xⁿ⁻¹ + ⋯ + a₁x + a₀ = 0, aₙ ≠ 0
Equations which contain transcendental functions such as eˣ, ln x, sin x, cos x, sinh x,
etc., for example: sin x + cos x = 0
e⁻ˣ + x² − x + 1 = 0
2.1. Locating roots
The Root-Finding Problem: Given a function 𝒇(𝒙), find 𝒙 = 𝜶 such that 𝒇(𝜶) = 𝟎. The
number 𝜶 is called a root of 𝒇(𝒙) = 𝟎 or a zero of the function 𝒇(𝒙).
The function f(x) may be algebraic or transcendental (for example trigonometric); the best-known
examples of algebraic functions are polynomials. In this chapter, we shall describe iterative
methods, i.e. methods that generate successive approximations to the exact solution by applying a
numerical algorithm to the previous approximation. The indirect or iterative methods are further
divided into two categories: bracketing and open methods. The bracketing methods require limits
between which the root lies, whereas the open methods require only an initial estimate of the
solution. The bisection and false position methods are two well-known examples of bracketing
methods. Among the open methods, the secant and Newton-Raphson methods are the most commonly used.
2.2. Bisection method
Suppose we have an equation f(x) = 0, where f is continuous on [a, b] and f(a) < 0 and
f(b) > 0, or vice versa (assume the first case here). To find a root of f(x) = 0 lying in
the interval [a, b], we bisect the interval at its midpoint, i.e.
m = (a + b)/2    (1)
and then
1. If 𝑓(𝑚) = 0, then 𝑚 is the exact root of the equation.
2. If 𝑓(𝑚) < 0, then the root lies in the interval [𝑚, 𝑏].
3. If 𝑓(𝑚) > 0, then the root lies in the interval [𝑎, 𝑚].
Then the newly reduced interval, denoted [a₁, b₁], is again halved and the same test is
made. Finally, at some stage in the process, we either hit the exact root of f(x) = 0 or obtain a
finite sequence of nested intervals [a, b] ⊇ [a₁, b₁] ⊇ ⋯ ⊇ [aₙ, bₙ] such that f(aₙ) < 0 and
f(bₙ) > 0, whose length is within the prescribed tolerance. The process then terminates and the
approximate root is mₙ = (aₙ + bₙ)/2.
Stopping Criteria
Since this is an iterative method, we must fix some stopping criterion that allows the
iteration to stop. Here are some commonly used stopping criteria.
Let ε be the tolerance; that is, we would like to obtain the root with an error of at most ε.
Accept x = mₖ as a root of f(x) = 0 if any of the following criteria is satisfied:
1. |f(mₖ)| ≤ ε (the functional value is less than or equal to the tolerance);
2. |mₖ − mₖ₋₁| ≤ ε (the absolute change is less than or equal to the tolerance);
3. |mₖ − mₖ₋₁| / |mₖ| ≤ ε (the relative change is less than or equal to the tolerance);
4. (b − a)/2ᵏ ≤ ε (the length of the interval after k iterations is less than or equal to the tolerance).
Since after n bisections the length of the bracketing interval is L₀/2ⁿ, where L₀ = b − a is the
initial length, and the error cannot be larger than the length of the interval, then, knowing ε,
it is always easy to predict the number of iterations n:
L₀/2ⁿ ≤ ε ⟹ 2ⁿ ≥ L₀/ε ⟹ n ln 2 ≥ ln(L₀/ε) ⟹ n ≥ [ln(L₀) − ln(ε)] / ln 2
Therefore, the number of iterations n needed in the Bisection Method to obtain an accuracy
(error tolerance) of ε is given by
n ≥ [ln(b − a) − ln(ε)] / ln 2    (2)
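The bisection procedure and the iteration-count formula (2) can be sketched in Python; the function name `bisection` and the loop structure below are illustrative, not part of the original text:

```python
import math

def bisection(f, a, b, eps):
    """Halve the bracket [a, b] until its length is at most eps."""
    if f(a) * f(b) > 0:
        raise ValueError("f(a) and f(b) must have opposite signs")
    while (b - a) > eps:
        m = (a + b) / 2
        if f(m) == 0:
            return m              # exact root found
        if f(a) * f(m) < 0:
            b = m                 # root lies in [a, m]
        else:
            a = m                 # root lies in [m, b]
    return (a + b) / 2

f = lambda x: x**3 - 5*x + 3

# Equation (2): iterations needed for eps = 0.001 on [0, 1]
n = math.ceil((math.log(1 - 0) - math.log(0.001)) / math.log(2))
root = bisection(f, 0, 1, 0.001)
print(n, root)   # n = 10 bisections suffice; root ≈ 0.6567
```

Note that the predicted count n = 10 agrees with the worked example below, where the interval shrinks below 0.001 after ten halvings.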
Remarks: Since the number of iterations n needed to achieve a certain accuracy depends upon
the initial length of the interval containing the root, it is desirable to choose the initial
interval [a, b] as small as possible.
Example 2.2 Solve the equation 𝑥 3 − 5𝑥 + 3 = 0 using the Bisection method in the interval
[0,1] with error 𝜀 = 0.001.
𝑓 is continuous in the given interval [0,1] and 𝑓(0) = 3 > 0, 𝑓(1) = −1 < 0. Hence, there is a
root of the equation in the given interval.
m₁ = (a + b)/2 = (0 + 1)/2 = 0.5 and f(0.5) = 0.625 > 0
Then the root lies in the interval [0.5, 1] and e₁ = |0.5 − 1| = 0.5 > 0.001
m₂ = (0.5 + 1)/2 = 0.75 and f(0.75) = −0.328125 < 0
Then the root lies in the interval [0.5, 0.75] and e₂ = |0.5 − 0.75| = 0.25 > 0.001
m₃ = (0.5 + 0.75)/2 = 0.625 and f(0.625) = 0.11914 > 0
Then the root lies in the interval [0.625, 0.75] and e₃ = |0.625 − 0.75| = 0.125 > 0.001
m₄ = (0.625 + 0.75)/2 = 0.6875 and f(0.6875) = −0.112548 < 0
Then the root lies in the interval [0.625, 0.6875] and e₄ = |0.625 − 0.6875| = 0.0625 > 0.001
m₅ = (0.625 + 0.6875)/2 = 0.65625 and f(0.65625) = 0.001373 > 0
m₆ = (0.65625 + 0.6875)/2 = 0.671875 and f(0.671875) = −0.05607986 < 0
e₆ = |0.65625 − 0.671875| = 0.015625 > 0.001
m₇ = (0.65625 + 0.671875)/2 = 0.6640625 and f(0.6640625) = −0.02747488 < 0
m₈ = (0.65625 + 0.6640625)/2 = 0.66015625
m₉ = (0.65625 + 0.66015625)/2 = 0.658203125
m₁₀ = (0.65625 + 0.658203125)/2 = 0.657226562
Since |0.65625 − 0.657226562| = 0.000976562 < 0.001 = ε, the process terminates and the
approximate root of the equation is m₁₀ = 0.657226562. Table 1 shows the entire iteration
procedure of the bisection method.
iter a b m=(a+b)/2 f(a) f(b) f(m) swap |b−a|
1 0 1 0.5 3 −1 0.625 a=m 1
2 0.5 1 0.75 0.625 −1 −0.328125 b=m 0.5
3 0.5 0.75 0.625 0.625 −0.328125 0.11914 a=m 0.25
4 0.625 0.75 0.6875 0.11914 −0.328125 −0.112548 b=m 0.125
5 0.625 0.6875 0.65625 0.11914 −0.112548 0.001373 a=m 0.0625
6 0.65625 0.6875 0.671875 0.001373 −0.112548 −0.05607986 b=m 0.03125
7 0.65625 0.671875 0.6640625 0.001373 −0.056079 −0.02747488 b=m 0.015625
8 0.65625 0.6640625 0.66015625 0.001373 −0.027474 −0.01308 b=m 0.0078125
9 0.65625 0.66015625 0.658203125 0.001373 −0.01308 −0.00586 b=m 0.0039062
10 0.65625 0.658203125 0.657226562 0.001373 −0.00586 −0.002246 b=m 0.0019531
11 0.65625 0.657226562 0.656738281 0.001373 −0.002246 −0.00043678 b=m 0.0009765
Table 1: solution of x³ − 5x + 3 = 0 using the bisection method on [0, 1] with ε = 0.001.
2.3. Interpolation (false position) method
This method is also based on the intermediate value theorem. In this method, as in the bisection
method, we choose two points aₙ and bₙ such that f(aₙ) and f(bₙ) are of opposite signs, that is,
f(aₙ)f(bₙ) < 0. The intermediate value theorem then guarantees that a zero of f lies between aₙ
and bₙ, provided f is a continuous function.
The curve 𝑦 = 𝑓(𝑥) is not generally a straight line. Assume that 𝑓 is continuous in the interval
[𝑎, 𝑏] and 𝑓(𝑎)𝑓(𝑏) < 0.
(Figure: the chord joining (a, f(a)) and (b, f(b)) crosses the x-axis at the approximate solution.)
Now, the equation of the straight line which passes through the points (a, f(a)) and (b, f(b)) is
given by:
(y − f(a))/(x − a) = (f(b) − f(a))/(b − a)
Setting y = 0 and solving for x,
x = a − f(a)(b − a)/(f(b) − f(a)) = (a f(b) − b f(a))/(f(b) − f(a))
Therefore, the approximate root of the equation f(x) = 0 is given by the formula
x̃ = (a f(b) − b f(a))/(f(b) − f(a))
Suppose that f(a) < 0 and f(b) > 0; then, as in the Bisection method, we follow three cases.
Example: Solve x³ − x² − 1 = 0 using the false position method with a₀ = 1 and b₀ = 2, so that
f(a₀) = −1 and f(b₀) = 3.
x₀ = (a₀ f(b₀) − b₀ f(a₀))/(f(b₀) − f(a₀)) = ((1)(3) − (2)(−1))/(3 − (−1)) = 1.25 and
f(x₀) = f(1.25) = −0.609375. Here f(x₀) has the same sign as f(a₀), and so the root must lie in
the interval [x₀, b₀] = [1.25, 2]. Next we set a₁ = x₀ and b₁ = b₀ to get the next approximation:
x₁ = (a₁ f(b₁) − b₁ f(a₁))/(f(b₁) − f(a₁)) = ((1.25)(3) − (2)(−0.609375))/(3 − (−0.609375))
= 1.37662337 and f(x₁) = f(1.37662337) = −0.2862640.
Again f(x₁) has the same sign as f(a₁), so the root lies in [1.37662337, 2]; thus we set a₂ = x₁
and b₂ = b₁. Continuing in this manner, the iteration converges to the root (r = 1.465558).
iter a b c f(c) |b−a|
0 1.0000 2.0000 1.250000 −0.609375 1.000000
1 1.2500 2.0000 1.376623 −0.286264 0.750000
2 1.3766 2.0000 1.430925 −0.117660 0.623377
3 1.4309 2.0000 1.452402 −0.045671 0.569075
4 1.4524 2.0000 1.460613 −0.017331 0.547598
5 1.4606 2.0000 1.463712 −0.006520 0.539387
6 1.4637 2.0000 1.464875 −0.002445 0.536288
7 1.4649 2.0000 1.465310 −0.000916 0.535125
8 1.4653 2.0000 1.465474 −0.000343 0.534690
9 1.4655 2.0000 1.465535 −0.000128 0.534526
10 1.4655 2.0000 1.465558 −0.000048 0.534465
Table 2: solution of x³ − x² − 1 = 0 using the false position method with a₀ = 1 and b₀ = 2.
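The false position update above can be sketched in Python; the function name `false_position` and the `|f(c)| ≤ ε` stopping rule are illustrative choices, not part of the original text:

```python
def false_position(f, a, b, eps, max_iter=50):
    """Regula falsi: take the x-intercept of the chord through
    (a, f(a)) and (b, f(b)) as the next approximation."""
    fa, fb = f(a), f(b)
    if fa * fb > 0:
        raise ValueError("f(a) and f(b) must have opposite signs")
    c = a
    for _ in range(max_iter):
        c = (a * fb - b * fa) / (fb - fa)   # approximate-root formula
        fc = f(c)
        if abs(fc) <= eps:                  # stop when |f(c)| is small enough
            break
        if fa * fc < 0:                     # root in [a, c]
            b, fb = c, fc
        else:                               # root in [c, b]
            a, fa = c, fc
    return c

f = lambda x: x**3 - x**2 - 1
root = false_position(f, 1.0, 2.0, 1e-4)
print(root)   # ≈ 1.46556, matching Table 2
```

With these starting values the right endpoint stays fixed at b = 2, which is why the |b − a| column in Table 2 shrinks so slowly.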
2.4. Iteration Methods
Rewrite the equation f(x) = 0 in the form x = g(x). Starting from an initial approximation x₀,
compute
x₁ = g(x₀)
x₂ = g(x₁)
⋮
xₙ₊₁ = g(xₙ)
If the sequence of values {xₙ}ₙ₌₁^∞ converges to p, then
lim(n→∞) xₙ₊₁ = p = lim(n→∞) g(xₙ) = g(p) (by continuity of g),
and p is called a fixed point of the mapping g.
Example: Solve the equation x³ − 2x − 5 = 0 using the method of successive approximation.
Solution: Let x³ − 2x − 5 = 0 ⟹ x(x² − 2) = 5 ⟹ x² − 2 = 5/x ⟹ x = √(5/x + 2) = g(x)
Or x³ − 2x − 5 = 0 ⟹ 2x = x³ − 5 ⟹ x = (x³ − 5)/2 = g(x)
Or x³ − 2x − 5 = 0 ⟹ x³ = 2x + 5 ⟹ x = ∛(2x + 5) = g(x)
Now, let us use the first rearrangement and the initial value x₀ = 0.5; then
x₁ = √(5/x₀ + 2) = √(5/0.5 + 2) = 3.4641
x₂ = √(5/x₁ + 2) = √(5/3.4641 + 2) = 1.8556
x₃ = √(5/x₂ + 2) = √(5/1.8556 + 2) = 2.1666
Consider the equation f(x) = 0, which has the root a and can be written as the fixed-point
problem g(x) = x. If the following conditions hold:
1. g(x) and g′(x) are continuous functions;
2. |g′(a)| < 1;
then the fixed-point iteration scheme based on the function g will converge to a.
Alternatively, if |𝑔′(𝑎)| > 1 then the iteration will not converge to 𝑎.
(Note that when |𝑔′(𝑎)| = 1 no conclusion can be reached.)
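The scheme xₙ₊₁ = g(xₙ) can be sketched in Python for the example above; the function name `fixed_point` and the stopping rule on successive iterates are illustrative choices:

```python
def fixed_point(g, x0, eps=1e-6, max_iter=100):
    """Iterate x_{n+1} = g(x_n) until successive iterates agree to eps."""
    x = x0
    for _ in range(max_iter):
        x_new = g(x)
        if abs(x_new - x) <= eps:
            return x_new
        x = x_new
    raise RuntimeError("iteration did not converge")

# first rearrangement of x^3 - 2x - 5 = 0:  x = sqrt(5/x + 2)
g = lambda x: (5 / x + 2) ** 0.5
root = fixed_point(g, 0.5)
print(root)   # ≈ 2.09455, the real root of x^3 - 2x - 5 = 0
```

This rearrangement converges because |g′(a)| ≈ 0.27 < 1 at the root; the second rearrangement, g(x) = (x³ − 5)/2, has |g′(a)| > 1 there and would diverge.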
Exercise 2.3 Solve the following equations using the method of successive approximation.
2.5. Newton-Raphson Method
More generally, we write xₙ₊₁ in terms of xₙ, f(xₙ) and f′(xₙ) for n = 0, 1, 2, … by means of
the Newton-Raphson formula
xₙ₊₁ = xₙ − f(xₙ)/f′(xₙ)    (5)
In addition, one should check whether the number of iterations has exceeded the maximum number of
iterations allowed. If so, the algorithm must be terminated and the user notified.
Example: Use Newton's method to compute the root of the function f(x) = x³ − x² − 1 to an
accuracy (tolerance) of 10⁻⁴; use x₀ = 1.
Solution: The derivative of the function is f′(x) = 3x² − 2x, so f′(x₀) = f′(1) = 1 and
f(x₀) = f(1) = −1.
The first approximation (iteration): x₁ = x₀ − f(x₀)/f′(x₀) = 1 − (−1)/1 = 2; then
f′(x₁) = f′(2) = 8 and f(x₁) = f(2) = 3.
The second iteration: x₂ = x₁ − f(x₁)/f′(x₁) = 2 − 3/8 = 1.625; then f′(x₂) = f′(1.625) =
4.671875.
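Formula (5) and the example above can be sketched in Python; the function name `newton` and the `|f(xₙ)| ≤ ε` stopping rule are illustrative choices:

```python
def newton(f, fprime, x0, eps=1e-4, max_iter=50):
    """Newton-Raphson: x_{n+1} = x_n - f(x_n)/f'(x_n), formula (5)."""
    x = x0
    for _ in range(max_iter):
        fx = f(x)
        if abs(fx) <= eps:       # stop when |f(x_n)| is within tolerance
            return x
        x = x - fx / fprime(x)
    return x

f  = lambda x: x**3 - x**2 - 1
fp = lambda x: 3*x**2 - 2*x
root = newton(f, fp, 1.0)        # x0 = 1, as in the example
print(root)   # ≈ 1.46557
```

Starting from x₀ = 1, the iterates 2, 1.625, 1.4858, 1.4660, … reproduce the hand computation and reach the tolerance after about five steps.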
Newton's method can also be used to compute the reciprocal of a number A. Take
f(x) = 1/x − A, so that f′(x) = −1/x². Then
xₙ₊₁ = xₙ − f(xₙ)/f′(xₙ) = xₙ − (1/xₙ − A)/(−1/xₙ²)
This leads to the general formula
xₙ₊₁ = xₙ(2 − A xₙ)
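A short numerical check of this division-free formula; the helper name `reciprocal` and the starting guess are illustrative, and convergence requires 0 < x₀ < 2/A:

```python
def reciprocal(A, x0, n_iter=10):
    """Division-free Newton iteration for 1/A: x_{n+1} = x_n (2 - A x_n)."""
    x = x0
    for _ in range(n_iter):
        x = x * (2 - A * x)
    return x

# 0 < x0 < 2/A is needed for convergence; x0 = 0.3 works for A = 3
approx = reciprocal(3.0, 0.3)
print(approx)   # converges to 1/3 to machine precision
```

Because the error eₙ = 1 − A xₙ satisfies eₙ₊₁ = eₙ², the number of correct digits roughly doubles each step, which is the quadratic convergence discussed in section 2.7.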
d) Root Jumping: In some cases where the function is oscillating and has a number of roots, one
may choose an initial guess close to a root. However, the guesses may jump and converge to
some other root.
Note: A good strategy for avoiding failure to converge would be to use the bisection method for
a few steps (to give an initial estimate and make sure the sequence of guesses is going in the
right direction) followed by Newton’s method, which should converge very fast at this point.
2.5.1. Multiple roots and modified newton’s method
A multiple root (double, triple, etc.) occurs where the function is tangent to the x-axis. If m
is the multiplicity of the root, then the modified Newton method is given by the formula
xₙ₊₁ = xₙ − m f(xₙ)/f′(xₙ)    (9)
For a more general modified Newton-Raphson method for multiple roots, define a new function
u(x) = f(x)/f′(x) and iterate
xₙ₊₁ = xₙ − u(xₙ)/u′(xₙ)
The new function u(x) has only simple roots.
Example 1: The function f(x) = x⁵ − 11x⁴ + 46x³ − 90x² + 81x − 27 = (x − 1)²(x − 3)³
has a root of multiplicity 2 at x = 1 and of multiplicity 3 at x = 3. Use the modified Newton's
method to approximate the roots with x₀ = 10 and an allowable tolerance of 0.000001. (Exercise)
Example 2: The function f(x) = x³ − 3x + 2 = (x + 2)(x − 1)² has a root of multiplicity 2 at
x = 1 and a simple root at x = −2. Use the modified Newton's method to approximate the roots
with x₀ = 1.3 and an allowable tolerance of 0.000001. (Exercise)
Example 3: The function f(x) = x³ − 7x² + 11x − 5 has a root of multiplicity 2 at x = 1. Use the
modified Newton's method to approximate the roots with x₀ = 0 and an allowable tolerance of
0.000001. Then use Newton's method to approximate the roots with the same x₀ = 0 and tolerance
of 0.000001, and compare the two approximated roots.
Solution: For the modified Newton's method, f′(x) = 3x² − 14x + 11 and the multiplicity is m = 2,
so
xₙ₊₁ = xₙ − 2(xₙ³ − 7xₙ² + 11xₙ − 5)/(3xₙ² − 14xₙ + 11)
See Table 5.1 for the iteration.
3 1.000000 −0.000000 0.000001 0.000999
4 1.000000 0.000000 0.000000 0.000000
The root of the equation at the 4th iteration is 1 (root = 1.0000).
Table 5.1: solution of x³ − 7x² + 11x − 5 = 0 using the modified Newton's method with x₀ = 0 and
m = 2.
For Newton's method,
xₙ₊₁ = xₙ − (xₙ³ − 7xₙ² + 11xₙ − 5)/(3xₙ² − 14xₙ + 11)
See Table 5.2 for the iterations.
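The comparison in Example 3 can be sketched in Python; the helper name `newton_iterations` and the step-size stopping rule are illustrative, and setting m = 1 recovers the standard method:

```python
def newton_iterations(f, fp, x0, m=1, eps=1e-6, max_iter=100):
    """Newton step scaled by the multiplicity m (formula (9)); m = 1 is
    the standard method. Returns (root, iterations used)."""
    x = x0
    for k in range(1, max_iter + 1):
        x_new = x - m * f(x) / fp(x)
        if abs(x_new - x) <= eps:
            return x_new, k
        x = x_new
    return x, max_iter

f  = lambda x: x**3 - 7*x**2 + 11*x - 5     # = (x - 1)^2 (x - 5)
fp = lambda x: 3*x**2 - 14*x + 11
root_mod, k_mod = newton_iterations(f, fp, 0.0, m=2)   # modified, m = 2
root_std, k_std = newton_iterations(f, fp, 0.0, m=1)   # standard Newton
print(k_mod, k_std)   # the modified method needs far fewer iterations
```

On the double root at x = 1, standard Newton converges only linearly (the error roughly halves each step), while the m = 2 variant restores quadratic convergence, matching the four-iteration result in Table 5.1.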
2.6. The Secant Method
Given two initial approximations x₀ and x₁, x₂ is the x-intercept of the secant line passing
through (x₀, f(x₀)) and (x₁, f(x₁)); x₃ is the x-intercept of the secant line passing through
(x₁, f(x₁)) and (x₂, f(x₂)); and so on. That is why the method is called the secant method.
Generally, the secant method can be expressed as
xₙ₊₁ = xₙ − f(xₙ)(xₙ₋₁ − xₙ)/(f(xₙ₋₁) − f(xₙ)) = (xₙ f(xₙ₋₁) − xₙ₋₁ f(xₙ))/(f(xₙ₋₁) − f(xₙ))    (11)
This is of course, the same formula as for the method of false position.
Algorithm for the secant method
Let x₀ and x₁ be two initial approximations. For n = 1, 2, …, maximum iterations:
xₙ₊₁ = (xₙ f(xₙ₋₁) − xₙ₋₁ f(xₙ))/(f(xₙ₋₁) − f(xₙ))
A suitable stopping criterion is
|f(xₙ)| ≤ ε, |xₙ₊₁ − xₙ| ≤ ε or |xₙ₊₁ − xₙ|/|xₙ₊₁| ≤ ε,
where ε is a specified tolerance value.
Continuing this process, the iteration converges to the root (r = 1.4655713); see Table 6 below.
iter xn f(xn) f(xn+1)-f(xn) |xn+1-xn|
0 1.000000 -1.000000
1 1.250000 -0.609375 -3.609375 0.750000
2 1.376623 -0.286264 0.323111 0.126623
3 1.488807 0.083463 0.369727 0.112184
4 1.463482 -0.007322 -0.090786 0.025325
5 1.465525 -0.000163 0.007160 0.002043
6 1.465571 0.000000 0.000163 0.000046
Table 6: solution of x³ − x² − 1 = 0 using the secant method with x₀ = 1 and x₁ = 2.
Hence, the approximate root is 1.4656, at the seventh iteration.
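The algorithm above can be sketched in Python; the function name `secant` and the step-size stopping rule are illustrative choices, and the guard reflects the failure mode f(xₙ) = f(xₙ₋₁) noted below:

```python
def secant(f, x0, x1, eps=1e-6, max_iter=50):
    """Secant iteration using formula (11)."""
    f0, f1 = f(x0), f(x1)
    for _ in range(max_iter):
        if f1 == f0:
            raise ZeroDivisionError("f(x_n) = f(x_{n-1}): method fails")
        x2 = (x1 * f0 - x0 * f1) / (f0 - f1)   # formula (11)
        if abs(x2 - x1) <= eps:
            return x2
        x0, f0 = x1, f1                        # shift the two points forward
        x1, f1 = x2, f(x2)
    return x1

f = lambda x: x**3 - x**2 - 1
root = secant(f, 1.0, 2.0)
print(root)   # ≈ 1.4655712, matching Table 6
```

Note that only one new function evaluation, f(x₂), is needed per iteration; the other value is reused from the previous step.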
Advantages of the secant method
1. It converges at a faster-than-linear rate, so it is more rapidly convergent than the bisection
method.
2. It does not require the derivative of the function, as Newton's method does.
3. It requires only one function evaluation per iteration, compared with Newton's method, which
requires two (f and f′).
Disadvantages of secant method
1. It may not converge.
2. The method fails to converge when f(xₙ) = f(xₙ₋₁), since the denominator in (11) vanishes.
3. There is no guaranteed error bound for the computed iterates.
4. It is likely to have difficulty if 𝑓′(𝛼) = 0. This means the x-axis is tangent to the graph of
𝑓 (𝑥) at 𝑥 = 𝛼, where 𝛼 is a root of 𝑓(𝑥).
2.7. Conditions for convergence (Convergence of the Iteration Methods)
We now study the rate at which the iteration methods converge to the exact root, if the initial
approximation is sufficiently close to the desired root. Define the error of approximation at the
nth iteration for n = 0, 1, 2, … as
εₙ = xₙ − α    (12)
Definition An iterative method is said to be of order 𝑝 or has the rate of convergence 𝑝, if 𝑝 is
the largest positive real number for which there exists a finite constant 𝐶 ≠ 0 , such that
|𝜀𝑛+1 | ≤ 𝐶|𝜀𝑛 |𝑝 (13)
The constant 𝐶, which is independent of 𝑛 is called the asymptotic error constant and it depends
on the derivatives of 𝑓(𝑥) at 𝑥 = 𝛼.
Let us now obtain the orders of the methods that were derived earlier.
Method of false position
Substituting xₙ = εₙ + α, xₙ₊₁ = εₙ₊₁ + α, x₀ = ε₀ + α, we expand each term in Taylor series
and simplify using the fact that f(α) = 0. We obtain the error equation as
εₙ₊₁ = C ε₀ εₙ, where C = f″(α)/(2f′(α)).
Since ε₀ is finite and fixed, the error equation becomes |εₙ₊₁| = C*|εₙ|, where C* = |C ε₀|.
Hence, the method of false position has order 1, i.e. a linear rate of convergence.
Fixed point iteration method
We have xₙ₊₁ = g(xₙ) and α = g(α). Subtracting, we get
xₙ₊₁ − α = g(xₙ) − g(α) = g(α + xₙ − α) − g(α) = [g(α) + (xₙ − α)g′(α) + ⋯] − g(α)
or εₙ₊₁ = εₙ g′(α) + O(εₙ²).
Therefore, |εₙ₊₁| = C|εₙ| with C = |g′(α)|.
Hence, the fixed point iteration method has order 1 or a linear rate of convergence.
Newton-Raphson method
The method is given by xₙ₊₁ = xₙ − f(xₙ)/f′(xₙ), f′(xₙ) ≠ 0. Substituting xₙ = εₙ + α and
xₙ₊₁ = εₙ₊₁ + α, we obtain
εₙ₊₁ + α = εₙ + α − f(εₙ + α)/f′(εₙ + α).
Expand the terms in Taylor's series.
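The remaining steps of the expansion can be sketched as follows, assuming f′(α) ≠ 0 and using f(α) = 0:

```latex
\varepsilon_{n+1}
  = \varepsilon_n
    - \frac{\varepsilon_n f'(\alpha) + \tfrac{1}{2}\varepsilon_n^{2} f''(\alpha) + \cdots}
           {f'(\alpha) + \varepsilon_n f''(\alpha) + \cdots}
  = \varepsilon_n
    - \varepsilon_n\Bigl(1 + \tfrac{1}{2}\varepsilon_n \tfrac{f''(\alpha)}{f'(\alpha)}\Bigr)
      \Bigl(1 - \varepsilon_n \tfrac{f''(\alpha)}{f'(\alpha)} + \cdots\Bigr)
  = \frac{f''(\alpha)}{2 f'(\alpha)}\,\varepsilon_n^{2} + O(\varepsilon_n^{3}).
```

Thus |εₙ₊₁| ≤ C|εₙ|² with C = |f″(α)/(2f′(α))|, so Newton's method has order p = 2, i.e. a quadratic rate of convergence.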
Remark: What is the importance of defining the order or rate of convergence of a method?
Suppose that we are using Newton’s method for computing a root of 𝑓(𝑥) = 0. Let us assume
that at a particular stage of iteration, the error in magnitude in computing the root is 10−1 = 0.1.
We observe from Newton’s method which has quadratic rate of convergence, that in the next
iteration, the error behaves like 𝐶(0.1)2 = 𝐶(10−2 ).
That is, we may possibly get an accuracy of two decimal places. Because of the quadratic
convergence of the method, we may possibly get an accuracy of four decimal places in the next
iteration. However, it also depends on the value of C. From this discussion, we conclude that
both fixed point iteration and regula-falsi methods converge slowly as they have only linear rate
of convergence. Further, Newton’s method converges at least twice as fast as the fixed point
iteration and regula-falsi methods.
iter a b c f(c) |b−a|
0 1.0000 2.0000 1.714286 −0.204082 1.000000
1 1.7143 2.0000 1.740741 −0.006859 0.285714
2 1.7407 2.0000 1.741627 −0.000229 0.259259
The root of the equation at the 3rd iteration is root = 1.7417.
The root of the equation at the 3rd iteration is root = 1.4142.