
ROOTS OF EQUATIONS – OPEN METHODS

In this part we will understand the difference between bracketing and open methods for locating the roots of equations. In the previous part we learned about bracketing (convergent) methods. We call them convergent because in a bracketing method two bounds always enclose the root and each step brings us closer to it. Open methods, in contrast, do not start with a bracket: we start with one or two points that need not enclose the root. Open methods can therefore diverge as well as converge, but when they do converge they usually give us the desired result much more quickly. In the figure below you can see the difference clearly.

In the figure, (a) is a bracketing method (bisection between the lower bound xl and the upper bound xu), while (b) and (c) are an open method (Newton-Raphson). In (b) and (c) a formula maps xi to xi+1, and the iterations either diverge, as in (b), or converge, as in (c), depending on the given equation and the starting point.

1. Simple Fixed-Point Iteration

As we just mentioned, open methods employ a formula to reach the root. For fixed-point iteration, such a formula can be developed by rearranging the given function as

f(x) = 0 → x = g(x)

This can be done by algebraic manipulation or simply by adding x to both sides of the original equation. The new form x = g(x) gives you a way to predict a new value of x as a function of the old value of x: an initial guess of the root, xi, can be used to compute a new estimate xi+1 with the iterative formula below:
xi+1 = g(xi)

As a general error estimator, you can use the formula below to compute the approximate percent relative error after each iteration.

εa = |(xi+1 − xi) / xi+1| × 100%
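
A minimal sketch of the method in Python is given below (the function name fixed_point, the tolerance es, and the iteration cap max_iter are illustrative choices, not from the text). It simply applies xi+1 = g(xi) and stops when the approximate error falls below the tolerance.

def fixed_point(g, x0, es=1e-5, max_iter=50):
    # Simple fixed-point iteration: repeat x_{i+1} = g(x_i) until the
    # approximate percent relative error drops below the tolerance es.
    x_old = x0
    ea = float("inf")
    for i in range(1, max_iter + 1):
        x_new = g(x_old)                                # x_{i+1} = g(x_i)
        if x_new != 0.0:
            ea = abs((x_new - x_old) / x_new) * 100.0   # error formula above
        if ea < es:
            break
        x_old = x_new
    return x_new, i, ea

For the example below, calling fixed_point(lambda x: math.exp(-x), 0.0) would converge to roughly 0.56714.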

Ex. 1. Use simple fixed-point iteration to locate the root of f(x) = e^−x − x.

The function can be separated directly as xi+1 = e^−xi, and you can start from x0 = 0. As you can see from the table below, each iteration gets closer to the true root, 0.56714329.

 i      xi       |εa|, %    |εt|, %    |εt|i/|εt|i−1
 0    0.0000                100.000
 1    1.0000    100.000      76.322       0.763
 2    0.3679    171.828      35.135       0.460
 3    0.6922     46.854      22.050       0.628
 4    0.5005     38.309      11.755       0.533
 5    0.6062     17.447       6.894       0.586
 6    0.5454     11.157       3.835       0.573
 7    0.5796      5.903       2.199       0.564
 8    0.5601      3.481       1.239       0.569
 9    0.5711      1.931       0.705       0.566
10    0.5649      1.109       0.399
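
The values in the table can be reproduced with a few lines of Python (a sketch using the same g(x) = e^−x and x0 = 0; the print format is my own):

import math

x = 0.0
for i in range(1, 11):
    x_new = math.exp(-x)                      # x_{i+1} = e^(-x_i)
    ea = abs((x_new - x) / x_new) * 100.0     # approximate percent relative error
    print(f"{i:2d}  x = {x_new:.4f}   |ea| = {ea:7.3f} %")
    x = x_new

Running this prints 1.0000, 0.3679, 0.6922, ... with the same approximate errors as in the table, slowly approaching 0.56714329.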

Notice that the true percent relative error in each iteration is roughly proportional to the error from the previous iteration (by a factor of about 0.5 to 0.6, as the last column of the table shows). This property, called linear convergence, is the main characteristic of fixed-point iteration. Besides the rate of convergence, we should also consider the possibility of convergence, that is, whether the iterations converge at all. The concepts of convergence and divergence can be depicted graphically. As with a bracketing method we could simply plot the function and look for where it crosses the x-axis, but here it is more useful to split the equation into two parts:

f1(x) = f2(x)

so that

y1 = f1(x) and
y2 = f2(x)

can be plotted separately.

You can see both plotting options in the figure below: (a) shows the root at the point where f(x) crosses the x-axis, and (b) shows the root at the intersection of the two component functions.
Now we can use this two-curve method to illustrate the convergence and the divergence of fixed-point iteration. We rearrange x = g(x) into the pair of equations

y1 = x and
y2 = g(x)

and plot them separately. The root of f(x) = 0 corresponds to the x value at the intersection of the two curves. The straight line y1 = x and four different shapes for y2 = g(x) are given in the figure below.
In (a), the initial guess x0 is used to determine the corresponding point on the y2 curve, [x0, g(x0)]. The point [x1, x1] is then located by moving horizontally to the left to the y1 curve. These movements are equivalent to the first iteration of the fixed-point method, x1 = g(x0). So, both in the equation and in the plot, a starting value x0 is used to obtain an estimate x1. For the next iteration you move to [x1, g(x1)] and then to [x2, x2], which is equivalent to x2 = g(x1). As you can see from the graph, the iterations approach the root (converge) in (a) and (b), but they diverge in (c) and (d).
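
As a small illustration (not from the text), the two-curve view for Ex. 1 can be plotted with matplotlib; the intersection of y1 = x and y2 = e^−x lies at the root 0.56714329.

import numpy as np
import matplotlib.pyplot as plt

x = np.linspace(0.0, 1.5, 200)
plt.plot(x, x, label="y1 = x")                         # the straight line y1 = x
plt.plot(x, np.exp(-x), label="y2 = g(x) = exp(-x)")   # the component curve y2 = g(x)
plt.axvline(0.56714329, linestyle="--", color="gray")  # the root, where the curves cross
plt.xlabel("x")
plt.legend()
plt.show()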
A theoretical derivation can be used to gain insight into the process. It can be shown that the error in any iteration is linearly proportional to the error from the previous iteration multiplied by the slope of g, evaluated at some point ξ between the root and the current estimate:

Ei+1 = g′(ξ)Ei

If |g′| < 1, the errors decrease with each iteration; in contrast, if |g′| > 1, the errors grow. Also, if the derivative is positive the errors keep the same sign, so the estimates approach (or leave) the root monotonically; if the derivative is negative, the error changes sign in each iteration and the estimates oscillate around the root.
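
For Ex. 1 this criterion is easy to check numerically (my own illustration, not from the text): g(x) = e^−x, so g′(x) = −e^−x.

import math

root = 0.56714329
gprime = -math.exp(-root)   # g'(x) = -e^(-x) evaluated at the root, about -0.567
print(abs(gprime) < 1)      # True: |g'| < 1, so the iteration converges
print(gprime < 0)           # True: negative slope, so the errors alternate in sign

This also explains the behaviour seen in Ex. 1: the error ratio in the last column of the table settles near |g′| ≈ 0.567, and the estimates oscillate around the root.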
