
Numerical Methods to Find a Root of an Algebraic or Transcendental Equation


Md. Kamrujjaman, Ph.D ∗
Department of Mathematics, University of Dhaka,
Dhaka 1000, Bangladesh

May 5, 2017

1 Introduction
In this chapter, we discuss the following numerical methods:

1. Bisection method

2. False Position method

3. Newton-Raphson method

4. Fixed point iteration method

2 Convergence
If a sequence x0, x1, x2, · · · converges to a limit x, the errors en = xn − x must get smaller as n approaches infinity. Our interest is in decreasing the magnitudes |en|, so we consider the ratio of successive errors |en+1|/|en| and want this ratio to be less than 1. If, as n goes to infinity, this ratio approaches a nonzero constant m < 1,

lim_{n→∞} |en+1|/|en| = m < 1,

then the order of convergence is 1 (first order). In other words, the sequence converges linearly.

E-mail: kamrujjaman@du.ac.bd

The generalized form is as follows:

lim_{n→∞} |en+1|/|en|^γ = m < 1,

for some positive power γ, known as the order of convergence. When γ = 1, as we saw, we have linear convergence, while γ = 2 gives quadratic convergence. Quadratic convergence is vastly faster than linear convergence and is desirable for high performance.
Stopping Criterion: When should the iteration stop?
Answer: We decide on a predetermined tolerance ϵ and stop the sequence when the error is smaller than the tolerance (|en| < ϵ). This is called a stopping condition. Our application might instead require |f(x)| < ϵ, which would be a different stopping condition. Making the error term |en| small is the more common criterion.
If |en| < ϵ, can we be sure that the value xn is within ϵ of the unknown solution x? In a perverse situation, the error could start growing again and only much later decrease monotonically to zero. Thus it is best to understand the nature of the problem and, perhaps, even experiment with very long sequences to be sure we are not being misled.
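As a quick illustration, the order γ can be estimated numerically from three successive errors, since |en+1| ≈ m|en|^γ gives γ ≈ ln(|en+1|/|en|)/ln(|en|/|en−1|). The following minimal Fortran sketch applies this formula; the error values are illustrative stand-ins for a quadratically convergent iteration, not output of any particular method.

program conv_order
implicit none
real :: e1, e2, e3, gamma
! illustrative successive errors |e_{n-1}|, |e_n|, |e_{n+1}|
e1 = 1.0e-2
e2 = 1.0e-4
e3 = 1.0e-8
! gamma = ln(e3/e2)/ln(e2/e1), from |e_{n+1}| ~ m |e_n|^gamma
gamma = log(e3/e2)/log(e2/e1)
write(*,*)'Estimated order of convergence =', gamma
end program conv_order

For the values above, the estimate is γ = 2, i.e. quadratic convergence.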

3 Bracketing Methods
The following root finding methods will be introduced for the bracketing case:
1. Bisection method
2. False Position method
Let us first state an important theorem that guarantees the existence of a root of a function in [a, b].
Theorem 1. If a function f(x) is continuous in the interval [a, b] and f (a)f (b) < 0, then
the equation f (x) = 0 has at least one real root in the interval (a, b).

4 Bisection Method
The bisection method is the easiest to implement numerically and is based on the intermediate value theorem. Its main disadvantage is that convergence is slow; otherwise it is a good choice of method.
Using this method our goal is to construct a sequence x0 , x1 , x2 , · · · that converges to
the root x = c that solves f (x) = 0. Initially, we choose x0 and x1 such that x0 < c < x1
and we say that two points x0 and x1 bracket the root. With f (c) = 0, we want f (x0 ) and
f (x1 ) to be of opposite sign such that f (x0 ) ∗ f (x1 ) < 0.
Next we assign x2 to be the midpoint of x0 and x1, that is,

x2 = (x0 + x1)/2,

or equivalently,

x2 = x0 + (x1 − x0)/2.
Then determine the sign of f(x2); the value of x3 is chosen as either the midpoint of x0 and x2 or the midpoint of x2 and x1, depending on whether x0 and x2 or x2 and x1 bracket the root. The root, therefore, stays bracketed at all times. The algorithm proceeds in this fashion and is typically stopped when the width of the bracketing interval is smaller than some required precision.

4.1 Convergence of Bisection method


After satisfying the condition f (a) ∗ f (b) < 0 on [a,b], the next step is to find the midpoint
of the interval:
c = (a + b)/2.
We keep going until the length of the interval |a − b| is smaller than our chosen ϵ. Each time, the new interval is exactly half the length of the previous one, so the bracketed interval width is reduced by a factor of 1/2 at each step; at the end of the nth step the new interval is [an, bn], of length |b − a|/2^n, such that

|b − a|/2^n ≤ ϵ,

where ϵ is the expected accuracy. After simplification,

n ≥ ln(|b − a|/ϵ)/ln 2.    (4.1)

In fact, the above inequality produces the number of iterations needed to achieve the accuracy ϵ. In particular, for |b − a| = 1 and ϵ = 10^−3, it is seen that

n ≥ 10.
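Inequality (4.1) is easy to evaluate in code. A minimal Fortran sketch, using the illustrative values |b − a| = 1 and ϵ = 10^−3 from above:

program bisect_steps
implicit none
real :: a, b, eps
integer :: n
a = 0.0
b = 1.0
eps = 1.0e-3
! n >= ln(|b-a|/eps)/ln 2, rounded up to the next integer
n = ceiling(log(abs(b-a)/eps)/log(2.0))
write(*,*)'Iterations needed:', n
end program bisect_steps

It prints n = 10 (the ceiling of 9.97), in agreement with the hand computation.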

There are three important questions to check the convergence of the solution:

1. What is the order of convergence here?
2. What is the sequence xn?
3. Does this method always converge?

The iterate is the midpoint xn = (an + bn)/2 of the current bracket. Since the root is bracketed, we can put an upper bound on the error at each step, namely, the root can be no farther than half the length of the interval. Clearly, this bound decreases by a factor of 2 in each iterative step, so the convergence is linear. And yes, the method always converges: we make progress by bracketing the root at every step, which makes the method supremely reliable.

4.2 Algorithm-Bisection
The pseudocode for the bisection algorithm is as follows:

1. Choose a, b and ϵ.

2. Does f(a) = 0? If yes, quit the program with the answer x = a.

3. Does f(b) = 0? If yes, quit with the answer x = b.

4. If f(a)f(b) > 0, quit, saying the root is not bracketed.

5. Begin loop

6. midpoint c = (a + b)/2.
If f(c) = 0, quit with the answer x = c

7. If |a − c| < ϵ, quit with the approximate solution x = c.

8. If f(a)f(c) < 0, set a = c, otherwise, set b = c.

9. Repeat 6-8 until convergence.

4.3 Fortran program of Bisection algorithm


program bisection
implicit none
real,parameter :: tol=0.00001
!================================================
!= choose a, b such that f(a)*f(b) < 0, where f is the function
!= the midpoint c = (a+b)/2
!================================================
real :: a, b, c, f
10 read(*,*) a, b
20 if (f(a)*f(b).lt.0) then
c = (a+b)/2.0
else
write(6,*) 'Change for another a & b'
goto 10
endif
! keep the half-interval whose endpoints still bracket the root
if (f(a)*f(c).lt.0) then
b = c
else
a = c
endif
if (abs(b-a).gt.tol) goto 20
write(6,*) 'The Root is=', c
stop
end
!=====Functions=====================
real function f(x)
implicit none
real :: x
f = exp(-x)-sin(x)
return
end
!=====Output=========================
1 2
Change for another a & b
0 0.5
Change for another a & b
0.5 1.0
The Root is= 0.588539124
!=====Program end======================

4.4 Examples of Bisection Method


Example 1. Find the root of the equation x^3 − x − 1 = 0 by the method of bisection correct up to two decimal places.

Example 2. Find the root of the equation x^2 − 4x − 10 = 0 using the bisection method.

Example 3. Obtain a root correct up to 3 decimal places for each of the following equations using the bisection method: (i) x^3 + x^2 − 1 = 0, and (ii) x^3 − 18 = 0.

Example 4. Find the root of x − cos(x) = 0 correct up to 4 decimal places using the bisection method.

Example 5. Find a root correct up to 3 decimal places for each of the following equations using the bisection method: (i) x^3 − 4x − 9 = 0, and (ii) xe^x − 1 = 0.

Example 6. Find a root correct up to four decimal places for each of the following equations using the bisection method: (i) x^3 − x − 4 = 0, and (ii) x^3 + x − 1 = 0.

5 False Position (Regula Falsi) Method


The false-position method is a modification of the bisection method: a root is trapped in a sequence of intervals of decreasing size. Rather than selecting the midpoint of each interval, this method uses the point where the secant line intersects the x-axis. If it is known that the root lies in [a, b], then it is reasonable to approximate the function on the interval by interpolating the points (a, f(a)) and (b, f(b)).
This method is based upon the principle that any portion of a smooth curve is basically
straight for a small distance. Thus we guess that the graph y=f(x) is a straight line between
the points (a, f(a)) and (b, f(b)). Make sure that these two points are on opposite sides of
the horizontal (x) axis.
The equation of the chord joining the two points (a, f(a)) and (b, f(b)) is:

(y − f(a))/(x − a) = (f(b) − f(a))/(b − a).    (5.1)

It is also noted that the secant line over the interval [a, b] is the chord between (a, f(a)) and (b, f(b)). The two right triangles formed by the secant line and the x-axis (see Figure 1) are similar, which means that

(b − c)/f(b) = (c − a)/(−f(a)).    (5.2)
The method consists in replacing the portion of the curve between the two points (a, f(a)) and (b, f(b)) by the chord joining these points, and taking the point of intersection of the chord with the x-axis as an approximation to the root. Setting y = 0 and x = c in (5.1), the point of intersection is

c = a − f(a) (b − a)/(f(b) − f(a)).    (5.3)

Similarly, we can write the other approximation

c = b − f(b) (a − b)/(f(a) − f(b)).    (5.4)

Finally, it is easy to show that

c = b − f(b) (a − b)/(f(a) − f(b)) = (bf(a) − af(b))/(f(a) − f(b)) = a − f(a) (b − a)/(f(b) − f(a)) = (af(b) − bf(a))/(f(b) − f(a)),    (5.5)

and we may use any of these equivalent relations; in particular,

c = (af(b) − bf(a))/(f(b) − f(a))    (5.6)

serves as the formula for the False Position method.
We then compute f(c) and proceed to the next step with the interval [a, c] if f (a) ∗ f (c) < 0
or to the interval [c, b] if f (c) ∗ f (b) < 0. In the general case:
1. The Regula-Falsi method starts with an interval [a0, b0] containing a root, i.e. f(a0) and f(b0) are of opposite signs.

2. The method uses intervals [an, bn] that contain the root in almost the same way that the bisection method does.

3. Instead of finding the midpoint of the interval, it finds where the secant line joining (an, f(an)) and (bn, f(bn)) crosses the x-axis and then selects it to be the new endpoint. At the nth step, it computes

cn = (an f(bn) − bn f(an))/(f(bn) − f(an)).    (5.7)

If f(an) and f(cn) have the same sign, then set an+1 = cn and bn+1 = bn; otherwise, set an+1 = an and bn+1 = cn. The process is repeated until the root is approximated sufficiently well.
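A minimal Fortran sketch of this iteration, in the style of the bisection program of Section 4.3, is given below; the test function (the same f(x) = e^−x − sin(x) as before), the bracket [0.5, 1], and the tolerance are illustrative choices, not part of the general method.

program falseposition
implicit none
real, parameter :: tol = 1.0e-5
integer, parameter :: nmax = 100
real :: a, b, c, fa, fb, fc, f
integer :: n
a = 0.5
b = 1.0
fa = f(a)
fb = f(b)
! assume f(a)*f(b) < 0 so that [a,b] brackets a root
do n = 1, nmax
c = (a*fb - b*fa)/(fb - fa)   ! secant crosses the x-axis, formula (5.7)
fc = f(c)
if (abs(fc) < tol) exit
if (fa*fc > 0.0) then         ! root lies in [c, b]
a = c
fa = fc
else                          ! root lies in [a, c]
b = c
fb = fc
end if
end do
write(6,*) 'The Root is=', c
stop
end
!=====Function=====================
real function f(x)
implicit none
real :: x
f = exp(-x) - sin(x)
return
end

With these inputs it converges to the same root, approximately 0.5885, as the bisection program.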

5.1 Halting conditions of linear interpolation method


The halting conditions for the false-position method are different from those of the bisection method. If you trace the sequence of iterates of the false-position method for a concave-up function, you will note that, precisely because the function is concave up, only the left bound is ever updated. Thus, instead of checking the width of the interval, we check the change in the endpoints to determine when to stop.

The Effect of Non-linear Functions
If we cannot assume that the function is well approximated by a linear function, then applying the false-position method can give worse results than the bisection method. For a highly nonlinear function with a poor selection of interval, for example f(x) = x^10 − 1 on [0, 1.3], the false-position method is significantly slower than the bisection method. Such a situation can be recognized and compensated for by falling back on the bisection method for two or three iterations and then resuming with the false-position method.

Figure 1: Graphical illustration of the False Position method.

5.2 Error analysis of linear interpolation method


The error analysis for the false-position method is not as easy as for the previous (bisection) method. Of the two bracketing points a and b, if one of the end points becomes fixed, it can be shown that

• It is still an O(h) operation, i.e. the iteration converges linearly, at the same rate as the bisection method;

• Usually the method is faster, but possibly slower (check for a highly non-linear
function!).

• For differentiable functions, the closer the fixed end point is to the exact solution,
the faster the convergence.

Suppose the end point b is fixed and the left end a is sufficiently close to the root c that the function f(x) is closely approximated by its Taylor series, that is,

f(a) ≈ (a − c) f′(c) + O(h^2).

The line interpolating the points (a, f(a)) and (b, f(b)) is then essentially parallel to the line interpolating (c, 0) and (b, f(b)). Since c is the root and the point b is fixed, the change in a will be proportional to the difference between the slope of the line through (c, 0) and (b, f(b)) and the derivative at c. To see this, let the error be h = a − c; if we are sufficiently close to the root, then f(a) ≈ h f′(c), and the slope of the line connecting (a, f(a)) and (b, f(b)) is approximately f(b)/(b − c).

Remark 1. A note about the slope, in detail: the secant slope approximates the derivative,

f′(c) ≈ (f(b) − f(a))/(b − a);

if a → c, then f(a) → f(c) = 0, so that the slope tends to (f(b) − 0)/(b − c) = f(b)/(b − c).

Therefore, the closer b is to c, the better an approximation f(b)/(b − c) is to the derivative f′(c), and therefore, the faster the convergence.

5.3 Examples of False Position Method


Example 7. Consider the function f(x) = x^2 − 3 to find the root. Let ϵ1 = 0.01, ϵ2 = 0.01 and start with the interval [1, 2].

Example 8. Consider finding the root of f(x) = e^−x (3.2 sin(x) − 0.5 cos(x)) on the interval [3, 4]. Let ϵ1 = 0.01, ϵ2 = 0.01.

Example 9. Approximate the root of f(x) = x^3 − 3 with the false-position method starting with the interval [1, 2] and use ϵ1 = 0.1, ϵ2 = 0.1. Use five decimal digits of accuracy.

Example 10. Approximate the root of f(x) = x^2 − 10 with the false-position method starting with the interval [3, 4] and use ϵ1 = 0.1, ϵ2 = 0.1.

Example 11. Solve for a positive root of (i) x^3 − 4x + 1 = 0, and (ii) x^2 − ln(x) − 12 = 0 by the Regula-Falsi method.

Example 12. Find the positive root of x^3 = 2x + 5 by the False Position method correct up to 4 decimal places.

Example 13. Find a root of the equation (i) x^3 − 3x − 5 = 0, and (ii) xe^x − 2 = 0 by the method of False Position correct up to 3 decimal places.

Example 14. Find the real root lying between 1 and 2 of the equation x^3 − 3x + 1 = 0 correct up to 3 decimal places by the Regula-Falsi method.

Example 15. Solve the equation 3x − cos(x) − 1 = 0 by the False Position method correct up to 4 decimal places.

Example 16. Solve for a real root of the equation x^3 − 5x + 3 = 0 by the Regula-Falsi method correct up to 2 decimal places.

Example 17. Find an approximate root of x log10(x) − 1.2 = 0 by the method of False Position correct up to 3 decimal places.
Example 18. Highly nonlinear: Approximate the root of f(x) = x^10 − 1 with the false-position method starting with the interval [0, 1.3], and compare the solution with the bisection method.

6 Open Root Finding Methods


1. Newton Raphson Method
2. Fixed Point Iteration

7 Newton-Raphson Method
Newton-Raphson method is a technique for finding a root of a single-valued scalar function f(x). In Newton's method, it is assumed that the function f is differentiable. Since f(x) has a continuous derivative, any point on the curve of f(x), if we observe it closely enough, looks like a straight line. So why not approximate the function at (x0, f(x0)) by the straight line that is tangent to the curve at that point?
Remark 2. Note that the equation of the tangent line is y − y0 = m(x − x0), with m = f′(x0).
One can easily deduce the formula for this line: the linear polynomial h(x) = (x − x0)f′(x0) is zero at x0 and has slope f′(x0); therefore, if we add f(x0) to h(x), the resulting line is tangent to the curve at the given point. Thus, for a particular point (x0, f(x0)) on the graph of f, there is a tangent line which is a rather good approximation to the curve in the vicinity of that point. Analytically, this means that the linear function

g(x) = (x − x0)f′(x0) + f(x0)

is close to the given function f(x) near x0. At the point x = x0, the two functions f and g are equal. We take the root of g(x) (say x = x1 such that g(x1) = 0) as an approximation to the root of f(x). The root of g can be found easily:

x1 = x0 − f(x0)/f′(x0).
Thus, starting with an approximation x0, we pass to a new point x1 obtained from the preceding formula. The process can be iterated to produce a sequence of points:

x2 = x1 − f(x1)/f′(x1),
x3 = x2 − f(x2)/f′(x2),

and so on. Under a favorable choice of x0, the sequence of points will approach a zero of f(x).
Geometrical explanation of Newton's method: The line y = g(x) is tangent to the curve y = f(x) at (x0, f(x0)); its slope is f′(x0), and it intersects the horizontal (x) axis at the point x1.

7.1 Interpretation of Newton’s method in different way


Let us assume again that x0 is an initial approximation to a root of f. We seek a correction h which, added to x0, gives the precise root x0 + h. Obviously, we then have f(x0 + h) = 0. If f is a sufficiently well-behaved function, it has a Taylor series at x0 and one can write

f(x0 + h) = f(x0) + hf′(x0) + (h^2/2)f″(x0) + · · · = 0.

Is the determination of h from this equation easy? Of course not. Therefore, we give up the expectation of arriving at the true root in one step and seek only an approximation to h. This can be obtained by ignoring all but the first two terms in the series:

f(x0) + hf′(x0) = 0,

which implies h = −f(x0)/f′(x0). Thus our new approximation is

x1 = x0 + h = x0 − f(x0)/f′(x0),    f′(x0) ≠ 0,
and the process can be repeated. If the Newton-Raphson method is described in terms of a sequence x0, x1, · · ·, then the following recursive formula applies:

xn+1 = xn − f(xn)/f′(xn),    n = 0, 1, 2, · · ·

Question: The more interesting and natural question is whether

lim_{n→∞} xn = c,

where c is the desired root of the given function f.

7.2 Summary: how to solve a problem by Newton's method
1. Problem: Given an equation of a single variable, f(x) = 0, find a value x0, known as a root of f, such that f(x0) = 0.

2. Assumption: Let us assume that the given function f(x) is continuous and has a
continuous derivative.

3. Tools: We will use sampling (initial guess), the derivative, and iteration. For error
analysis, use Taylor series.

4. Requirements: We have an initial assumption x0 of the root.

5. Iteration: Given the approximation xn, the next approximation xn+1 is defined by the Newton formula xn+1 = xn − f(xn)/f′(xn).

6. Stopping criteria: There are three conditions which may cause the iteration pro-
cess to halt:
1. We halt if both of the following conditions are met:
(i) The step between successive iterates is sufficiently small, |xn+1 − xn | < ϵ1 , and
(ii) The function evaluated at the point xn+1 is sufficiently small, |f (xn+1 )| < ϵ2 .
2. If the first derivative f′(xn) = 0, the iteration process fails and we halt (zero divisor!).
3. If we have iterated some maximum number of times, say N, and have not met
Condition 1, we halt and indicate that a solution was not found.

If we halt due to Condition 1, we state that xn+1 is our approximation to the root.
If we halt due to either Condition 2 or 3, we may either choose a different initial
guess x0 , or state that a solution may not exist.

7.3 Convergence Analysis


First let us show an illustration with f(x) = x^3 − 2x^2 + x − 3, starting with x0 = 3. Of course, f′(x) = 3x^2 − 4x + 1. To see in greater detail the rapid convergence of Newton's method, we use arithmetic with double the normal precision in the program and obtain the results shown in Table 1.
Notice the doubling of the accuracy in f(x) (and also in x) until the maximum precision of the computer is encountered. Indeed, the number of correct figures in the answer is nearly doubled at each successive step: in the example above, we have first 0 and then 1, 2, 3, 6, 12, 24, · · · accurate digits from successive Newton iterations. Five or six steps of Newton's method often suffice to yield full machine precision in the determination of a root. There is a theoretical basis for this dramatic performance, as we shall now see.

Table 1: Newton's method to find the root of f(x) = x^3 − 2x^2 + x − 3.
n    xn    |f(xn)|

0 3.0 9.0
1 2.4375 2.04
2 2.21303 27224 73144 5 0.256
3 2.17555 49386 14368 4 6.46 × 10−3
4 2.17456 01006 55071 4 4.48 × 10−6
5 2.17455 94102 93284 1 1.97 × 10−12

Given a function f, we seek a root c of f. Assume that f possesses two continuous derivatives f′ and f″, and that c is a simple root, that is, f′(c) ≠ 0. If the initial point x0 is sufficiently close to c, the Newton-Raphson method converges quadratically to c. This means that the errors in successive steps obey an inequality of the form

|c − xn+1| ≤ α|c − xn|^2.

We shall establish this fact presently, but first, an informal interpretation of the inequality
may be helpful. Suppose, for simplicity, that α = 1. Suppose also that xn is an estimate of
the root c that differs from it by at most one unit in the k-th decimal place. This means
that

|c − xn| ≤ 10^−k.

The two inequalities above imply that

|c − xn+1| ≤ 10^−2k.

In other words, xn+1 differs from c by at most one unit in the (2k)-th decimal place. So xn+1
has approximately twice as many correct digits as xn ! This is the doubling of significant
digits alluded to previously.
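For completeness, here is a brief sketch of where this inequality comes from. Expanding f in a Taylor series about xn and evaluating at the root c gives, for some ξn between c and xn,

0 = f(c) = f(xn) + (c − xn)f′(xn) + ((c − xn)^2/2)f″(ξn).

Dividing by f′(xn) and using xn+1 = xn − f(xn)/f′(xn) yields

c − xn+1 = −(f″(ξn)/(2f′(xn)))(c − xn)^2,

so near a simple root the constant α may be taken as approximately |f″(c)|/(2|f′(c)|).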

7.4 Limitations of Newton-Raphson Method


Newton's formula suggests that there are three situations where this method may not converge quickly:
1. the choice of approximation x0 is far away from the exact root;
2. the second-derivative term is too large;
3. the derivative at xn is close to zero.
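As a classical illustration of the first situation, a poor starting point can prevent convergence altogether: for f(x) = x^3 − 2x + 2 with x0 = 0, we get x1 = 0 − f(0)/f′(0) = 0 − 2/(−2) = 1 and x2 = 1 − f(1)/f′(1) = 1 − 1/1 = 0, so the iterates cycle between 0 and 1 forever and never approach the real root near x ≈ −1.77.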

7.5 Examples of Newton-Raphson Method


Example 19. Approximate the root of f(x) = cos(x) + 2 sin(x) + x^2 with Newton's method starting with x0 = 0.0 and use ϵf = 0.002, ϵab = 0.001.
Solution:
We consider x0 = 0 as our initial approximation with ϵf = 0.002, ϵab = 0.001 and we will
halt after a maximum of N = 20 iterations.
Consider the recursive formula of Newton's method,

xn+1 = xn − f(xn)/f′(xn),    n = 0, 1, 2, · · ·
Find the derivative of the given function, f′(x) = −sin(x) + 2 cos(x) + 2x. We will use four decimal digit arithmetic to find a solution; the resulting iteration is shown in Table 2.

Table 2: Newton's method to find the root of f(x) = cos(x) + 2 sin(x) + x^2.
n    xn        xn+1      |f(xn+1)|   |xn+1 − xn|
0     0.0000   -0.5000    0.1688      0.5000
1    -0.5000   -0.6368    0.0205      0.1368
2    -0.6368   -0.6589    0.0008      0.0221
3    -0.6589   -0.6598    0.0006      0.0009
Hence after four iterations, both halting conditions are met, i.e. |f (xn+1 )| < ϵf and |xn+1 −
xn | < ϵab , and therefore, the approximation is -0.6598.

Example 20. Approximate the root of f(x) = e^−x cos(x) with Newton's method starting with x0 = 1.3 and use tol = 1e−5.
Note: Ignore the column f(xn+1) from the previous table; just check |xn − xn+1| < tol to halt the iterations.

Example 21. Find the root of the equation x^3 − 3x − 5 = 0 by the Newton-Raphson method correct up to four decimal places.

Example 22. Find the root of the equation x^4 − x − 10 = 0 by the Newton-Raphson method correct up to three decimal places near x = 2.0.

Example 23. Find the root of the equation 12 by the Newton-Raphson method correct
up to 4 decimal places.

Example 24. Find the root of the following equations by the Newton-Raphson method
correct up to 3 decimal places:
(i) x log10(x) − 1.2 = 0, (ii) sin(x) + x^2 − 1 = 0, (iii) 2x − 3 sin(x) − 5 = 0, and (iv) x^4 + x^2 − 80 = 0.

Example 25. Find the root of the following equations by the Newton-Raphson method
correct up to 4 decimal places:
(i) x − cos(x) = 0, (ii) cos(x) − xe^x = 0, (iii) sin(x) − 10(x − 1) = 0, and (iv) 3x − cos(x) − 1 = 0.

Example 26. Find the root of the equation 2x^3 − 3x − 6 = 0 by the Newton-Raphson method correct up to 4 decimal places.

7.6 Algorithm of Newton-Raphson


The pseudocode for the Newton-Raphson algorithm is as follows:

1. Ask for an initial assumption x0 and ϵ

2. If f (x0 ) = 0, quit with answer x = x0 .

3. Begin loop

4. q = x0 − f (x0 )/f ′ (x0 ), where f ′ (x0 ) ̸= 0

5. If |q − x0 | < ϵ, quit with approximate answer x = q

6. set x0 = q

7. Repeat steps 4-6 until convergence.

7.7 Fortran program of Newton’s algorithm


!====================================
!=== Newton-Raphson Method to find a root of an equation
!====================================
PROGRAM NEWTONR
integer N
N=0
write(*,15)
15 format(1x,'Newton Raphson method to find a root')
write(*,*)'Enter initial guess and tolerance:'
read(*,*) x, tol
! halt if the derivative vanishes (zero divisor)
10 IF(g(x).EQ.0)THEN
write(*,*)'Change initial root.'
STOP
ENDIF
!=====main formula====================
y = x-(f(x)/g(x))
IF(abs(f(y)).LT.tol) GOTO 20
N = N+1
IF(N.GT.500)THEN
write(*,*)'Error has occurred.'
STOP
ENDIF
x=y
GOTO 10
20 write(*,30) x
30 format(2x,'Approximate root= ',F10.5)
STOP
END
!=====Given function f================
function f(x)
f=x**6-x**4-x**3-1.0
return
END
!=====1st derivative of function f: g(x)=f'(x)===
function g(x)
g=6.0*x**5-4.0*x**3-3.0*x**2
return
END
!=====Output========================
Newton Raphson method to find a root
Enter initial guess and tolerance:
10.0 0.00001
Approximate root= 1.40372
!=====End===========================

8 Fixed Point Iteration Method


Consider the nonlinear equation to find the approximate solution

f (x) = 0 (8.1)

In the fixed-point method, an alternative approach is to recast problem (8.1) in the form

x = g(x)    (8.2)

for a related nonlinear function g. For the fixed-point problem, we seek a point where the curve g intersects the diagonal line y = x; any solution of equation (8.2), that is, a fixed point of g(x), is a solution of equation (8.1).

Algorithm:
Start from any point x0 and consider the recursive process

xn+1 = g(xn ), n = 0, 1, 2, · · · (8.3)

If g is continuous and xn converges to some c, then it is clear that c is a fixed point of g and hence a solution of the equation (8.1). Moreover, for large n, xn can be considered an approximate solution of the equation (8.1) if the sequence converges sufficiently well.
Cauchy criterion:
There is an analogous uniform Cauchy condition that provides a necessary and sufficient
condition for a sequence of functions to converge uniformly.
Statement:
A sequence (fn) of functions fn : A → R is uniformly Cauchy on A if for every ϵ > 0 there exists N ∈ N such that m, n > N implies |fm(x) − fn(x)| < ϵ for all x ∈ A.

First, let us illustrate the above discussion with an example.


Example 27. It is easy to check that there is a solution of the equation x^3 − 7x + 2 = 0 in [0, 1]. We rewrite the equation in the form

x = (x^3 + 2)/7 = g(x)

and define the process

xn+1 = (xn^3 + 2)/7.

Check that if 0 ≤ x0 ≤ 1, then xn satisfies the Cauchy criterion and hence converges to a root of the above equation.
It is clear from the above example that the convergence of the process (8.3) depends on
g and the starting point x0 . Moreover, in general, showing the convergence of the sequence
xn obtained from the iterative process is not easy. So we ask the following question.
Question:
Under what assumptions on g and x0 does the algorithm converge? When does the sequence xn obtained from the iterative process (8.3) converge?
The following result is a consequence of the mean value theorem.
Theorem 2. Let g : [a, b] → [a, b] be a differentiable function such that

|g′(x)| ≤ α < 1 for all x ∈ [a, b].

Then g has exactly one fixed point c in [a, b] and the sequence xn defined by the process (8.3), with any starting point x0 ∈ [a, b], converges to c.
Proof. By the intermediate value property, g has a fixed point, say c. The convergence of (xn) to c follows from the inequalities

|xn − c| = |g(xn−1) − g(c)| ≤ α|xn−1 − c| ≤ α^2|xn−2 − c| ≤ · · · ≤ α^n|x0 − c| → 0.

If c1 is another fixed point, then

|c − c1| = |g(c) − g(c1)| ≤ α|c − c1| < |c − c1|,

a contradiction. This implies that c = c1.
Example 28. Let us return to the problem given in Example 27, where g(x) = (x^3 + 2)/7. Then g maps [0, 1] into [0, 1] and

|g′(x)| = 3x^2/7 ≤ 3/7 < 1 for all x ∈ [0, 1].

Hence, by the previous theorem, the sequence (xn) defined by the process

xn+1 = (xn^3 + 2)/7

converges to a root of x^3 − 7x + 2 = 0.

Let us illustrate with one more example.

Example 29. Apply the fixed-point procedure, where g(x) = 1 + 2/x, starting with x0 = 1, to compute a zero of the nonlinear function f(x) = x^2 − x − 2. Graphically, trace the convergence process.
Solution:
The fixed-point algorithm is

xn+1 = 1 + 2/xn.

The first eight steps of the iterative algorithm are x0 = 1, x1 = 3, x2 = 5/3, x3 = 11/5, x4 = 21/11, x5 = 43/21, x6 = 85/43, x7 = 171/85, and x8 = 341/171 ≈ 1.99415. Graphically, these steps spiral into the fixed point 2.
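A minimal Fortran sketch of this iteration is given below; the tolerance and iteration cap are illustrative choices.

program fixedpoint
implicit none
real, parameter :: tol = 1.0e-5
integer, parameter :: nmax = 100
real :: x, xnew
integer :: n
x = 1.0                   ! starting point x0
do n = 1, nmax
xnew = 1.0 + 2.0/x        ! x_{n+1} = g(x_n) with g(x) = 1 + 2/x
if (abs(xnew - x) < tol) exit
x = xnew
end do
write(*,*)'Fixed point =', xnew
end program fixedpoint

Starting from x0 = 1, it reproduces the sequence above and halts near the fixed point 2.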

Question: For a given problem, how many choices of g are there? Is g unique?

For a given nonlinear equation f(x) = 0, there may be many equivalent fixed-point problems x = g(x) with different functions g, some better than others.

1. A simple way to characterize the behavior of an iterative method xn+1 = g(xn): it is locally convergent to x∗ if x∗ = g(x∗) and |g′(x∗)| < 1. By locally convergent, we mean that there is an interval containing x∗ such that the fixed-point method converges for any starting value x0 within that interval.

2. If |g′(x∗)| > 1, then the fixed-point method diverges for any starting point x0 other than x∗.
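For instance, for Example 29 we have g(x) = 1 + 2/x, so g′(x) = −2/x^2 and |g′(2)| = 1/2 < 1 at the fixed point x∗ = 2; the iteration is therefore locally convergent, consistent with the spiral convergence observed above.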

8.1 Examples of Fixed-point iteration Method


Example 30. Find the root of the equation x^2 − ln(x) − 2 = 0 on [1, 2] by the Fixed-point method correct up to three decimal places; take x0 = 1.0.

Example 31. Find the real root of the equation x^3 − 2x^2 − 4 = 0 by the Fixed-point iteration method correct up to three decimal places.

Example 32. Find the real root of the equation tan(x) − x = 0 by iteration method correct
up to three decimal places.

Example 33. Find the root of the following equations by the iteration method correct up to 3 decimal places:
(i) 2x − log10(x) − 7 = 0, (ii) x sin(x) − 1 = 0, and (iii) x^3 − 2x − 5 = 0.

Example 34. Find the real root of the equation sin^2(x) − x^2 + 1 = 0 by the iteration method correct up to three decimal places.

Example 35. Find the real root of the equation x^3 + x^2 − 1 = 0 by the iteration method correct up to three decimal places.

Example 36. Find the real root of the equation cos(x) − xe^x = 0 by the iteration method correct up to three decimal places.

Example 37. Find the real root of the equation (i) 3x − 1 + sin(x) = 0, (ii) e^x − 3x = 0, (iii) 2x − cos(x) − 3 = 0, and (iv) 3x − cos(x) − 1 = 0 by the iteration method correct up to 4 decimal places.

