May 5, 2017
1 Introduction
In this chapter, we discuss the following numerical methods:
1. Bisection method
2. False Position method
3. Newton-Raphson method
4. Fixed-point iteration method
2 Convergence
If a sequence x0 , x1 , x2 , · · · converges to x, the errors en = xn − x must get smaller as n
approaches infinity. Our interest is in decreasing the magnitudes |en |; we therefore examine
the ratio of successive errors |en+1 |/|en | and want this ratio to be less than 1. If, as n goes
to infinity, this ratio approaches a nonzero constant m < 1,

lim_{n→∞} |en+1 |/|en | = m < 1,

this implies that the order of convergence is 1 (first order). In other words, the sequence
converges linearly.
E-mail: kamrujjaman@du.ac.bd
The generalized form is as follows:

lim_{n→∞} |en+1 |/|en |^γ = m < 1,

for some positive power γ, which is known as the order of convergence. When the order
γ = 1, as we saw, we have linear convergence, while γ = 2 gives quadratic convergence.
Quadratic convergence is vastly faster than linear convergence and is desirable for high
performance.
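Given the absolute errors of a computed sequence, γ can be estimated numerically: taking logarithms of the defining limit gives γ ≈ log(|en+1 |/|en |)/log(|en |/|en−1 |). A minimal Python sketch (the helper name convergence_order and the sample sequences are illustrative, not from the text):

```python
import math

def convergence_order(errors):
    """Estimate gamma from three or more successive absolute errors
    using gamma ~ log(e[n+1]/e[n]) / log(e[n]/e[n-1])."""
    e = [abs(x) for x in errors]
    return [math.log(e[n + 1] / e[n]) / math.log(e[n] / e[n - 1])
            for n in range(1, len(e) - 1)]

# A linearly convergent sequence, e_n = 0.5**n: gamma is close to 1.
print(convergence_order([0.5**n for n in range(1, 6)]))
# A quadratically convergent sequence, e_n = 10**(-2**n): gamma is close to 2.
print(convergence_order([10.0**(-(2**n)) for n in range(1, 5)]))
```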
Stopping Criterion: When should the iteration stop?
Answer: We have to decide on a predetermined tolerance ϵ and stop the sequence when
the error is smaller than the tolerance (|en | < ϵ). This is called a stopping condition. Our
application might, instead, require |f (x)| < ϵ, which is a different stopping condition;
making the error term |en | small is the more common criterion.
If |en | < ϵ, can we be sure that the value xn is within ϵ of the unknown solution x?
In a perverse situation, the error could start growing again and only much later decrease
monotonically to zero. Thus it is best to understand the nature of the problem and,
perhaps, even experiment with very long sequences to be sure we are not being misled.
3 Bracketing Methods
The following root finding methods will be introduced for bracketing case:
1. Bisection method
2. False Position method
Let us state one important theorem that guarantees the existence of a root of a function in [a, b].
Theorem 1. If a function f(x) is continuous in the interval [a, b] and f (a)f (b) < 0, then
the equation f (x) = 0 has at least one real root in the interval (a, b).
4 Bisection Method
The bisection method is the easiest to numerically implement and is based on the inter-
mediate value theorem. The main disadvantage is that convergence is slow, otherwise it is
a good choice of method.
Using this method our goal is to construct a sequence x0 , x1 , x2 , · · · that converges to
the root x = c that solves f (x) = 0. Initially, we choose x0 and x1 such that x0 < c < x1
and we say that two points x0 and x1 bracket the root. With f (c) = 0, we want f (x0 ) and
f (x1 ) to be of opposite sign such that f (x0 ) ∗ f (x1 ) < 0.
Next we assign x2 to be the midpoint of x0 and x1 , that is,

x2 = (x0 + x1 )/2,

or equivalently

x2 = x0 + (x1 − x0 )/2.
Then determine the sign of f (x2 ) and accordingly the value of x3 is then chosen as either
the midpoint of x0 and x2 or as the midpoint of x2 and x1 , depending on whether x0 and
x2 bracket the root, or x2 and x1 bracket the root. The root, therefore, stays bracketed at
all times.
The algorithm proceeds in this fashion and is typically stopped when the increment to
the left side of the bracket is smaller than some required precision.
4.2 Algorithm-Bisection
The pseudocode for the bisection algorithm is as follows:
1. Choose a, b and ϵ.
2. If f(a)f(b) > 0, quit: the interval does not bracket a root.
3. If f(a) = 0, quit with the answer x = a.
4. If f(b) = 0, quit with the answer x = b.
5. Begin loop.
6. Compute the midpoint c = (a + b)/2.
7. If f(c) = 0, quit with the answer x = c.
8. If f(a)f(c) < 0, set b = c; otherwise set a = c.
9. If b − a < ϵ, quit with the answer x = c; otherwise repeat the loop.
      write(6,*) "The Root is=", c
      stop
      end
!=====Functions=====================
      real function f(x)
      implicit none
      real :: x
      f = exp(-x) - sin(x)
      return
      end
!=====Output=========================
12
Change for another a & b
0 0.5
Change for another a & b
0.5 1.0
The Root is= 0.588539124
!=====Program end======================
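The bisection pseudocode above (and the Fortran program whose tail and output are shown) can also be sketched in Python; the function name bisection and the default tolerances are illustrative choices, and the test equation exp(−x) − sin(x) = 0 is the one from the Fortran code:

```python
import math

def bisection(f, a, b, eps=1e-6, max_iter=100):
    """Bisection: halve the bracket [a, b] until it is shorter than eps.
    Requires f(a) and f(b) to have opposite signs."""
    if f(a) * f(b) > 0:
        raise ValueError("f(a) and f(b) must have opposite signs")
    for _ in range(max_iter):
        c = (a + b) / 2
        if f(c) == 0 or (b - a) / 2 < eps:
            return c
        if f(a) * f(c) < 0:
            b = c          # root stays bracketed in [a, c]
        else:
            a = c          # root stays bracketed in [c, b]
    return (a + b) / 2

# Same equation as the Fortran program: exp(-x) - sin(x) = 0 on [0.5, 1.0]
print(bisection(lambda x: math.exp(-x) - math.sin(x), 0.5, 1.0))
```

With the bracket [0.5, 1.0] from the printed output, the result agrees with the Fortran program's single-precision root 0.588539 to about five decimal places.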
Example 3. Obtain a root correct up to 3 decimal places for each of the following equations
using bisection method: (i) x3 + x2 − 1 = 0, and (ii) x3 − 18 = 0.
Example 4. Find the root of x − cos(x) = 0 correct up to 4 decimal places using bisection
method.
Example 5. Find a root correct up to 3 decimal places for each of the following equations
using bisection method: (i) x3 − 4x − 9 = 0, and (ii) xex − 1 = 0.
Example 6. Find a root correct up to four decimal places for each of the following equations
using bisection method: (i) x3 − x − 4 = 0, and (ii) x3 + x − 1 = 0.
5 False Position Method
If it is known that the root lies in [a, b], then it is reasonable to approximate the
function on the interval by interpolating the points (a, f(a)) and (b, f(b)).
This method is based upon the principle that any portion of a smooth curve is basically
straight for a small distance. Thus we guess that the graph y=f(x) is a straight line between
the points (a, f(a)) and (b, f(b)). Make sure that these two points are on opposite sides of
the horizontal (x) axis.
The equation of the chord joining the two points (a, f(a)) and (b, f(b)) is

(y − f(a))/(x − a) = (f(b) − f(a))/(b − a).   (5.1)

Note that the secant line over the interval [a, b] is precisely this chord between (a, f(a))
and (b, f(b)). The two right triangles in the figure are similar, which means that

(b − c)/f(b) = (c − a)/(−f(a)).   (5.2)
The method consists of replacing the portion of the curve between the two points (a,
f(a)) and (b, f(b)) by the chord joining these points, and taking the intersection point of
the chord with the x-axis as an approximation to the root. Setting y = 0 and x = c, the
point of intersection of (5.1) is

c = a − f(a) (b − a)/(f(b) − f(a)).   (5.3)

Similarly, we can write the other approximation

c = b − f(b) (a − b)/(f(a) − f(b)).   (5.4)

Finally, it is easy to show that

c = b − f(b) (a − b)/(f(a) − f(b)) = (bf(a) − af(b))/(f(a) − f(b)) = a − f(a) (b − a)/(f(b) − f(a)) = (af(b) − bf(a))/(f(b) − f(a)),   (5.5)

and we can use any of these equivalent relations as a formula for the False Position method:

c = (af(b) − bf(a))/(f(b) − f(a)).   (5.6)
We then compute f(c) and proceed to the next step with the interval [a, c] if f (a) ∗ f (c) < 0
or to the interval [c, b] if f (c) ∗ f (b) < 0. In the general case:
1. The Regula-Falsi method starts with the interval [a0 , b0 ] containing a root, i.e. f (a0 )
and f (b0 ) are of opposite signs.
2. The method uses intervals [an , bn ] that contain roots in almost the same way that
the bisection method does.
3. Instead of finding the midpoint of the interval, it finds where the secant line join-
ing (an , f (an )) and (bn , f (bn )) crosses the x-axis and then selects it to be the new
endpoint.
At the n-th step, it computes

cn = (an f(bn ) − bn f(an ))/(f(bn ) − f(an )).   (5.7)
If f (an ) and f (cn ) have the same sign, then set an+1 = cn and bn+1 = bn ; otherwise,
set an+1 = an and bn+1 = cn . The process is repeated until the root is approximated
sufficiently well.
Thus, instead of checking the width of the interval, we check the change in the end
points to determine when to stop.
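The steps above can be sketched in Python; the function name false_position, the tolerances eps1 and eps2, and the iteration cap are illustrative choices, and the sample problem is the one from Example 9 (x^3 − 3 on [1, 2]):

```python
def false_position(f, a, b, eps1=1e-6, eps2=1e-6, max_iter=100):
    """Regula Falsi: replace bisection's midpoint with the x-intercept of
    the chord through (a, f(a)) and (b, f(b)), as in eq. (5.7).
    Stops when the change in the computed point is below eps1
    or |f(c)| is below eps2."""
    c_old = a
    for _ in range(max_iter):
        c = (a * f(b) - b * f(a)) / (f(b) - f(a))
        if abs(f(c)) < eps2 or abs(c - c_old) < eps1:
            return c
        if f(a) * f(c) < 0:
            b = c          # root bracketed in [a, c]
        else:
            a = c          # root bracketed in [c, b]
        c_old = c
    return c

# Example 9: root of x**3 - 3 on [1, 2]; the exact root is 3**(1/3)
print(false_position(lambda x: x**3 - 3, 1.0, 2.0))
```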
The Effect of Non-linear Functions
If we cannot assume that a function is well approximated by a linear interpolant, then
applying the false-position method can give worse results than the bisection method.
For example, Figure ?? (a highly nonlinear function with a poor selection of interval,
for example f(x) = x^10 − 1 on [0, 1.3]) shows a function where the false-position method
is significantly slower than the bisection method.
Such a situation can be recognized and compensated for by falling back on the bisection
method for two or three iterations and then resuming with the false-position method.
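The slowdown on f(x) = x^10 − 1 over [0, 1.3] can be made concrete by counting iterations of each method. In this rough Python sketch the helper names, the tolerance 1e-3, and the use of the known root x = 1 in the false-position stopping test are all illustrative choices:

```python
# Count iterations of each bracketing method on f(x) = x**10 - 1 over
# [0, 1.3] until the estimate is within 1e-3 of the root x = 1.
f = lambda x: x**10 - 1

def bisection_count(a, b, tol=1e-3):
    n = 0
    while (b - a) / 2 > tol:       # interval half-width above tolerance
        c = (a + b) / 2
        a, b = (a, c) if f(a) * f(c) < 0 else (c, b)
        n += 1
    return n

def false_position_count(a, b, tol=1e-3, max_iter=10000):
    n, c = 0, a
    while abs(c - 1.0) > tol and n < max_iter:   # 1.0 is the known root
        c = (a * f(b) - b * f(a)) / (f(b) - f(a))
        a, b = (a, c) if f(a) * f(c) < 0 else (c, b)
        n += 1
    return n

# False position needs far more iterations here than bisection.
print(bisection_count(0.0, 1.3), false_position_count(0.0, 1.3))
```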
• Usually the method is faster than bisection, but it can be slower (check a highly
non-linear function!).
• For differentiable functions, the closer the fixed end point is to the exact solution,
the faster the convergence.
Suppose the end point b is fixed and the left end, a, is sufficiently close to the root that
the function f(x) is closely approximated by the leading term of its Taylor series about
the root c, that is, f(a) ≈ (a − c) f ′ (c).
The line interpolating the points (a, f(a)) and (b, f(b)) is then essentially parallel to the
line interpolating (c, 0) and (b, f(b)).
Since c is the root and the point b is fixed, the change in a will be proportional to the
difference between the slope of the line through (c, 0) and (b, f(b)) and the derivative at c.
To see this, let the error be h = a − c. If we are sufficiently close to the root, then
f(a) ≈ h f ′ (c), the slope of the line connecting (a, f(a)) and (b, f(b)) is approximately
f(b)/(b − c), and

f ′ (c) ≈ (f(b) − f(a))/(b − a).
Example 8. Consider finding the root of f (x) = e−x (3.2 sin(x)−0.5 cos(x)) on the interval
[3, 4]. Let ϵ1 = 0.01, ϵ2 = 0.01.
Example 9. Approximate the root of f (x) = x3 − 3 with the false-position method starting
with the interval [1, 2] and use ϵ1 = 0.1, ϵ2 = 0.1. Use five decimal digits of accuracy.
Example 10. Approximate the root of f (x) = x2 − 10 with the false-position method
starting with the interval [3, 4] and use ϵ1 = 0.1, ϵ2 = 0.1.
Example 11. Solve for a positive root of (i) x3 − 4x + 1 = 0, and (ii) x2 − ln(x) − 12 = 0
by Regula-Falsi method.
Example 12. Find the positive root of x3 = 2x + 5 by the False Position method correct
up to 4 decimal places.
Example 13. Find a root of the equation (i) x3 − 3x − 5 = 0, and (ii) xex − 2 = 0 by the
method of False Position correct up to 3 decimal places.
Example 14. Find the real root lying between 1 and 2 of the equation x3 − 3x + 1 = 0
correct up to 3 decimal places by the Regula-Falsi method.
Example 15. Solve the equation 3x − cos(x) − 1 = 0 by the False Position method correct
up to 4 decimal places.
Example 16. Solve for a real root of the equation x3 − 5x + 3 = 0 by the Regula-Falsi
method correct up to 2 decimal places.
Example 17. Find an approximate root of x log10 x − 1.2 = 0 by the method of False
Position correct up to 3 decimal places.
Example 18. Highly nonlinear: Approximate the root of f (x) = x10 − 1 with the
false-position method starting with the interval [0, 1.3] and compare the solution with the
bisection method.
7 Newton-Raphson Method
The Newton-Raphson method is a technique for finding a root of a single-valued scalar
function f(x). In Newton's method, it is assumed that the function f is differentiable.
Since f(x) has a continuous derivative, at any point on the curve of f(x), if we look closely
enough, the curve resembles a straight line. Why not, then, approximate the function at
(x0 , f (x0 )) by the straight line that is tangent to the curve at that point?
Remark 2. Note that the equation of tangent line is y − y0 = m(x − x0 ), m = f ′ (x0 )
One can easily deduce the formula for this line: the linear polynomial h(x) = (x −
x0 )f ′ (x0 ) is zero at x0 and has slope f ′ (x0 ); therefore, if we add f (x0 ) to h(x), the
resulting line is tangent to the curve at the given point. Thus, for a particular point
(x0 , f (x0 )) on the graph of f, there is a tangent line, which is a rather good approximation
to the curve in the vicinity of that point. Analytically, this means that the linear function

g(x) = f (x0 ) + (x − x0 )f ′ (x0 )

is close to the given function f(x) near x0 . At the point x = x0 , the two functions f and g
are equal. We take the root of g(x) (say x = x1 such that g(x1 ) = 0) as an approximation
to the root of f(x). Then the root of g can be found easily:

x1 = x0 − f (x0 )/f ′ (x0 ).
Thus, starting with an approximation x0 , we pass to a new point x1 obtained from the
preceding formula. Sequentially, the process can be iterated to produce a sequence of
points:
x2 = x1 − f (x1 )/f ′ (x1 ),

x3 = x2 − f (x2 )/f ′ (x2 ),
and so on. Under favorable choice, the sequence of points will approach a zero of f(x).
Geometrical Explanation of Newton's method: The geometry of Newton's method
is shown in Figure ??. The line y = g(x) is tangent to the curve y = f (x); it intersects
the horizontal (x) axis at the point x1 . The slope of g(x) is f ′ (x0 ).
7.2 Summarize: how to solve a problem by Newton’s method
1. Problem: Given an equation of single variable f (x) = 0, find a value x0 known as
root of f such that f (x0 ) = 0.
2. Assumption: Let us assume that the given function f(x) is continuous and has a
continuous derivative.
3. Tools: We will use sampling (an initial guess), the derivative, and iteration. For the
error analysis, we use the Taylor series.
4. Iteration: starting from the initial guess x0 , compute xn+1 = xn − f (xn )/f ′ (xn ),
n = 0, 1, 2, . . .
5. Stopping criteria: There are three conditions which may cause the iteration pro-
cess to halt:
1. We halt if both of the following conditions are met:
(i) The step between successive iterates is sufficiently small, |xn+1 − xn | < ϵ1 , and
(ii) The function evaluated at the point xn+1 is sufficiently small, |f (xn+1 )| < ϵ2 .
2. If the first derivative f ′ (xn ) = 0, the iteration process fails (division by zero!)
and we halt.
3. If we have iterated some maximum number of times, say N, and have not met
Condition 1, we halt and indicate that a solution was not found.
If we halt due to Condition 1, we state that xn+1 is our approximation to the root.
If we halt due to either Condition 2 or 3, we may either choose a different initial
guess x0 , or state that a solution may not exist.
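The summary above, including the three halting conditions, can be folded into a short Python sketch; eps1, eps2 and N are illustrative defaults, and the test problem is the one from Table 1:

```python
def newton(f, fprime, x0, eps1=1e-10, eps2=1e-10, N=50):
    """Newton-Raphson with the three halting conditions:
    small step and small residual, zero derivative, or too many iterations."""
    for _ in range(N):
        d = fprime(x0)
        if d == 0:                                   # Condition 2: f'(x_n) = 0
            raise ZeroDivisionError("f'(x_n) = 0: iteration fails")
        x1 = x0 - f(x0) / d
        if abs(x1 - x0) < eps1 and abs(f(x1)) < eps2:  # Condition 1: both small
            return x1
        x0 = x1
    raise RuntimeError("no convergence in N iterations")  # Condition 3

# Table 1: root of f(x) = x**3 - 2*x**2 + x - 3 starting at x0 = 3.0
print(newton(lambda x: x**3 - 2 * x**2 + x - 3,
             lambda x: 3 * x**2 - 4 * x + 1, 3.0))
```

Starting from x0 = 3.0 the iterates reproduce the column xn of Table 1 (x1 = 2.4375, and so on).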
Table 1: Newton’s method to find the root of f (x) = x3 − 2x2 + x − 3.
n xn |f (xn )|
0 3.0 9.0
1 2.4375 2.04
2 2.21303 27224 73144 5 0.256
3 2.17555 49386 14368 4 6.46 × 10−3
4 2.17456 01006 55071 4 4.48 × 10−6
5 2.17455 94102 93284 1 1.97 × 10−12
Given a function f, we seek a solution of f (x) = 0. Assume first that f possesses two
continuous derivatives f ′ and f ′′ , and let c be a root of f. A further assumption is that c
is a simple root; that means f ′ (c) ̸= 0. If the initial choice x0 is sufficiently close to c, the
Newton-Raphson method converges quadratically to c. This means that the errors in
successive steps obey an inequality of the form

|c − xn+1 | ≤ α|c − xn |2 .
We shall establish this fact presently, but first, an informal interpretation of the inequality
may be helpful. Suppose, for simplicity, that α = 1. Suppose also that xn is an estimate of
the root c that differs from it by at most one unit in the k-th decimal place. This means
that
|c − xn | ≤ 10−k
Together, these two inequalities imply that
|c − xn+1 | ≤ 10−2k
In other words, xn+1 differs from c by at most one unit in the (2k)-th decimal place. So xn+1
has approximately twice as many correct digits as xn ! This is the doubling of significant
digits alluded to previously.
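The doubling of correct digits can be observed directly. The sketch below applies Newton's method to f(x) = x^2 − 2 (an example chosen here for illustration; its root is √2) and prints the error at each step:

```python
import math

# Newton for f(x) = x**2 - 2: the error roughly squares at every step,
# so the number of correct digits roughly doubles.
x, root = 1.5, math.sqrt(2.0)
errs = []
for n in range(5):
    err = abs(x - root)
    errs.append(err)
    print(n, x, err)
    x = x - (x * x - 2.0) / (2.0 * x)   # Newton step: x - f(x)/f'(x)
```

Each printed error is roughly the square of the previous one, matching the inequality above with α of modest size.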
Table 2: Newton’s method to find the root of f (x) = cos(x) + 2 sin(x) + x2 .
n xn xn+1 |f (xn+1 )| |xn+1 − xn |
Hence, as shown in the Table, after four iterations both halting conditions are met, i.e.
|f (xn+1 )| < ϵf and |xn+1 − xn | < ϵab , and therefore the approximation to the root is
−0.6598.
Example 20. Approximate the root of f (x) = e−x cos(x) with Newton's method starting
with x0 = 1.3 and use tol = 1e−5.
Note: Ignore the column f (xn+1 ) from the previous table; just check |xn − xn+1 | < tol
to halt the iterations.
Example 21. Find the root of the equation x3 −3x−5 = 0 by the Newton-Raphson method
correct up to four decimal places.
Example 22. Find the root of the equation x4 −x−10 = 0 by the Newton-Raphson method
correct up to three decimal places near x = 2.0.
Example 23. Find √12 (the positive root of x2 − 12 = 0) by the Newton-Raphson method
correct up to 4 decimal places.
Example 24. Find the root of the following equations by the Newton-Raphson method
correct up to 3 decimal places:
(i) x log10 (x) − 1.2 = 0, (ii) sin(x) + x2 − 1 = 0, (iii) 2x − 3 sin(x) − 5 = 0 and (iv)
x4 + x2 − 80 = 0.
Example 25. Find the root of the following equations by the Newton-Raphson method
correct up to 4 decimal places:
(i) x−cos(x) = 0, (ii) cos(x)−xex = 0, (iii) sin(x)−10(x−1) = 0 and (iv) 3x−cos(x)−1 =
0.
Example 26. Find the root of the equation 2x3 − 3x − 6 = 0 by the Newton-Raphson
method correct up to 4 decimal places.
The pseudocode for the Newton-Raphson algorithm is as follows:
1. Choose x0 and ϵ.
2. If f (x0 ) = 0, quit with answer x = x0 .
3. Begin loop.
4. Compute q = x0 − f (x0 )/f ′ (x0 ).
5. If |q − x0 | < ϵ, quit with the answer x = q.
6. Set x0 = q and repeat the loop.
      f = x**6 - x**4 - x**3 - 1.0
      return
      end
!=====1st derivative of function f, fd=g(x)=
      function g(x)
      g = 6.0*x**5 - 4.0*x**3 - 3.0*x**2
      return
      end
!=====Output========================
Newton Raphson method to find a root
Enter initial guess and tolerance:
10.0 0.00001
Approximate root= 1.40372
!=====End===========================
8 Fixed-Point Iteration Method
Consider a nonlinear equation

f (x) = 0. (8.1)

In the fixed-point method, an alternative approach is to recast the problem (8.1) in the
following form

x = g(x) (8.2)

for a related nonlinear function g. For the fixed-point problem, we seek a point where the
curve g intersects the diagonal line y = x, in such a way that any solution of equation
(8.2), which is a fixed point of g(x), is a solution of equation (8.1).
Algorithm:
Start from any point x0 and consider the recursive process

xn+1 = g(xn ), n = 0, 1, 2, . . .

If g is continuous and xn converges to some c, then it is clear that c is a fixed point of g and
hence a solution of the equation (8.1). Moreover, for large n, xn can be considered
an approximate solution of the equation (8.1) if the sequence converges sufficiently well.
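The recursive process above is a few lines of Python. In the sketch below, the rearrangement x = (2x + 5)^(1/3) of x^3 − 2x − 5 = 0 (Example 33 (iii)) serves as g; this choice of g is illustrative, picked so that |g′| < 1 near the root:

```python
def fixed_point(g, x0, eps=1e-6, max_iter=100):
    """Iterate x_{n+1} = g(x_n) until successive iterates differ by < eps."""
    for _ in range(max_iter):
        x1 = g(x0)
        if abs(x1 - x0) < eps:
            return x1
        x0 = x1
    raise RuntimeError("no convergence in max_iter iterations")

# Solve x**3 - 2*x - 5 = 0 recast as x = (2*x + 5)**(1/3)
print(fixed_point(lambda x: (2 * x + 5) ** (1 / 3), 2.0))
```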
Cauchy criterion:
There is an analogous uniform Cauchy condition that provides a necessary and sufficient
condition for a sequence of functions to converge uniformly.
Statement:
A sequence (fn ) of functions fn : A → R is uniformly Cauchy on A if for every ϵ > 0 there
exists N ∈ N such that m, n > N implies that |fm (x) − fn (x)| < ϵ for all x ∈ A.
Let us illustrate one more example.
Example 29. Apply the fixed-point procedure, where g(x) = 1+2/x, starting with x0 = 1,
to compute a zero of the nonlinear function f (x) = x2 − x − 2. Graphically, trace the
convergence process.
Solution:
The fixed-point algorithm is
xn+1 = 1 + 2/xn .
First, eight steps of the iterative algorithm are x0 = 1, x1 = 3, x2 = 5/3, x3 = 11/5, x4 =
21/11, x5 = 43/21, x6 = 85/43, x7 = 171/85, and x8 = 341/171 ≈ 1.99415.
In Figure ??, we see that these steps spiral into the fixed point 2.
Whether the iteration converges depends on the derivative of g at the fixed point x∗ :
1. If |g ′ (x∗ )| < 1, then the fixed-point method converges for any starting point x0
sufficiently close to x∗ .
2. If |g ′ (x∗ )| > 1, then the fixed-point method diverges for any starting point x0 other
than x∗ .
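For Example 29, g(x) = 1 + 2/x has the fixed point x∗ = 2 and g′(x) = −2/x^2 , so |g′(2)| = 1/2 < 1 and the iteration converges. A short Python check (illustrative):

```python
# Example 29's iteration: g(x) = 1 + 2/x with fixed point x* = 2.
# |g'(2)| = |-2/4| = 0.5 < 1, so the iterates spiral in toward 2.
g = lambda x: 1 + 2 / x
x = 1.0
for _ in range(25):
    x = g(x)
print(x)   # approaches the fixed point 2
```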
Example 31. Find the real root of the equation x3 − 2x2 − 4 = 0 by Fixed-point iteration
method correct up to three decimal places.
Example 32. Find the real root of the equation tan(x) − x = 0 by iteration method correct
up to three decimal places.
Example 33. Find the root of the following equations by the iteration method correct up
to 3 decimal places:
(i) 2x − log10 (x) − 7 = 0, (ii) x sin(x) − 1 = 0, and (iii) x3 − 2x − 5 = 0.
Example 34. Find the real root of the equation sin2 (x) − x2 + 1 = 0 by iteration method
correct up to three decimal places.
Example 35. Find the real root of the equation x3 + x2 − 1 = 0 by iteration method correct
up to three decimal places.
Example 36. Find the real root of the equation cos(x) − xex = 0 by iteration method
correct up to three decimal places.
Example 37. Find the real root of the equation (i) 3x − √(1 + sin(x)) = 0, (ii) ex − 3x = 0,
(iii) 2x − cos(x) − 3 = 0 and (iv) 3x − cos(x) − 1 = 0 by the iteration method correct up
to 4 decimal places.