Applied Mathematics and Computation 192 (2007) 20–26

Modified families of Newton, Halley and Chebyshev methods


V. Kanwar a,*, S.K. Tomar b

a University Institute of Engineering and Technology, Panjab University, Chandigarh-160 014, India
b Department of Mathematics, Panjab University, Chandigarh-160 014, India

Abstract

This paper presents new families of Newton-type iterative methods (Newton, Halley and Chebyshev methods) for finding a simple zero of a univariate non-linear equation, permitting f'(x) = 0 in the vicinity of the root. Newton-type iterative methods have a well-known geometric interpretation and admit a geometric derivation from a parabola. These algorithms are comparable to the well-known powerful classical methods of Newton, Halley and Chebyshev, respectively, which can be seen as special cases of these families. The efficiency of the presented methods is demonstrated by numerical examples.
© 2007 Elsevier Inc. All rights reserved.

Keywords: Newton’s method; Halley’s method; Chebyshev’s method; Non-linear equations; Order of convergence

1. Introduction

Newton’s method is a well-known technique for solving non-linear equations. This method fascinates many
researchers because it is applicable to various types of equations such as non-linear equations, systems of non-
linear algebraic equations, differential equations, integral equations and even to random operator equations.
However, as is well known, a major difficulty in the application of Newton's method is the selection of the initial guess, which must be chosen sufficiently close to the true solution in order to guarantee convergence. Finding a criterion for choosing the initial guess is quite cumbersome and, therefore, more effective globally convergent algorithms are still needed.
Halley's method [4] is another famous iteration for solving non-linear equations and is a close relative of Newton's method. Halley's method has third-order convergence, but requires the computation of a second derivative. Halley's irrational method (Euler's method) [2,6,8] is a straightforward variant of Newton's method; Halley's method is somewhat simpler and has some advantage over the irrational method, since the latter involves an expression under a square root. Halley's method is less sensitive to the initial guess, but again it does not work when f'(x) = 0, and it also runs into difficulty when f(x) and f'(x), or f'(x) and f''(x), are simultaneously near zero. Halley's formula can also be derived by using an osculating hyperbola,

* Corresponding author.
E-mail addresses: vmithil@yahoo.co.in (V. Kanwar), sktomar@yahoo.com (S.K. Tomar).


and this is why it is also known as the method of tangent hyperbolas [5]. Similarly, Chebyshev's method [6] is another close relative of Newton's method and is one of the best-known third-order iterative methods for solving non-linear equations. This method, like Newton's method, has poor convergence properties when f'(x) = 0. Therefore, the requirement f'(x) ≠ 0 is again an essential condition for Chebyshev's method. For a good review of these algorithms, some excellent textbooks are available in the literature [1,3,6].
The purpose of this work is to present new classes of iterative techniques having quadratic and cubic convergence, which can be used as an alternative to existing techniques or in some cases where the existing techniques are not successful; the approach of derivation is completely different. Before turning to these techniques, the following results proved by Wu and Wu [9] are important to keep in mind and are reproduced here.
Theorem 1.1. Suppose that f(x) ∈ C¹[a, b] and f'(x) + a' f(x) ≠ 0, where a' is a real parameter; then the equation f(x) = 0 has at most one root in [a, b].

Theorem 1.2. If f(x) ∈ C¹[a, b], f(a)f(b) < 0 and f'(x) + a' f(x) ≠ 0, then the equation f(x) = 0 has a unique root in (a, b).

2. Families of one-point iterative methods

In this section we derive two new families.

2.1. First family

Consider the problem of finding a real and simple root of the non-linear equation

f(x) = 0.   (2.1)

Let

y = f(x)   (2.2)

be the graph of the function f(x), and assume that x0 is the initial guess to the required root r (say) of Eq. (2.1). Assume |h| << 1 and let

x1 = x0 + h   (2.3)

be the first approximation to the required root.
Consider a parabola given by

y^2 = 2ax,   (2.4)

where a ∈ R is a scaling parameter. Parabola (2.4) narrows and widens as |a| → 0 and |a| → ∞, respectively. Let this parabola meet the curve y = f(x) at the point {x1, f(x1)}; then Eq. (2.4) becomes

f^2(x1) = 2a x1.   (2.5)

The equation of the normal to the parabola at the point {x1, f(x1)} is given by

y - f(x1) = -a' f(x1)(x - x1),   (2.6)

where a' = 1/a.
This normal meets the x-axis at the point (x0, 0), which gives

-f(x1) = -a' f(x1)(x0 - x1).   (2.7)

Case (i): Newton-type methods
The first approximate root x1 is given by (2.3), which needs the value of the correction factor h. Here we shall obtain the value of h from (2.7). In order to find the correction factor h to our approximate root, we expand f(x1) by means of Taylor's theorem about x0 and, retaining only the terms linear in h, we obtain

f(x0) + h f'(x0) = -a' h f(x0),   (2.8)

giving the value of h as

h = -f(x0) / {f'(x0) + a' f(x0)}.   (2.9)

Thus, the first approximation to the required root is


x1 = x0 - f(x0) / {f'(x0) + a' f(x0)}.   (2.10)

Now repeat the process after shifting the parabola towards the right until two successive approximations are close enough. Thus, the general formula for successive approximations is given by

x_{n+1} = x_n - f(x_n) / {f'(x_n) + a' f(x_n)},   n = 0, 1, ...   (2.11)

This formula looks like Newton's formula and describes the one-parameter (a') family of Newton-type iteration formulae for finding the root of Eq. (2.1). Mathematically, it can be seen that formula (2.11) reduces to Newton's formula in the absence of the parameter (a').
The scaling parameter (a') in (2.11) is chosen so as to give the largest value of the denominator [9]. In order to make this happen, the parameter (a') is chosen according to the rule

a' = -m^2  if f(x_n) f'(x_n) ≤ 0,
a' = +m^2  if f(x_n) f'(x_n) ≥ 0,   (2.12)

where m ∈ R. With this choice the term a' f(x_n) has the same sign as f'(x_n), so the two terms in the denominator of (2.11) reinforce rather than cancel each other.
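For illustration, one iteration loop for the Newton-type family (2.11) with the sign rule (2.12) can be sketched in a few lines of Python; the function name, the default m = 1 and the stopping tolerance are illustrative choices and not part of the paper.

def newton_type(f, fprime, x0, m=1.0, tol=1e-12, max_iter=100):
    """Newton-type family (2.11): x <- x - f(x)/(f'(x) + a'*f(x)), with a' = +/- m^2 by (2.12)."""
    x = x0
    for _ in range(max_iter):
        fx, fpx = f(x), fprime(x)
        a1 = -m * m if fx * fpx <= 0 else m * m   # sign rule (2.12)
        step = fx / (fpx + a1 * fx)
        x -= step
        if abs(step) < tol:
            break
    return x

# Example 1 of the paper: x^10 - 1 = 0; plain Newton struggles from x0 = 0.1, this iteration does not.
print(newton_type(lambda x: x**10 - 1, lambda x: 10 * x**9, 0.1))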
Case (ii): Halley-type methods
In Eq. (2.7), expanding the function f(x1) by Taylor's theorem around x0 and retaining the terms up to O(h^2), we get

h[h{f''(x0) + 2a' f'(x0)} + 2{f'(x0) + a' f(x0)}] + 2f(x0) = 0,   (2.13)

and, simplifying, we get

h = -2f(x0) / [h{f''(x0) + 2a' f'(x0)} + 2{f'(x0) + a' f(x0)}].   (2.14)

Approximating h on the right-hand side of (2.14) by the correction term -f(x0)/{f'(x0) + a' f(x0)} given in (2.9), we obtain

h = -2f(x0){f'(x0) + a' f(x0)} / [2{f'(x0) + a' f(x0)}^2 - f(x0){f''(x0) + 2a' f'(x0)}].   (2.15)

Thus the first approximation to the required root is

x1 = x0 - 2f(x0){f'(x0) + a' f(x0)} / [2{f'(x0) + a' f(x0)}^2 - f(x0){f''(x0) + 2a' f'(x0)}].   (2.16)

The general formula for successive approximations can be written as

x_{n+1} = x_n - 2f(x_n){f'(x_n) + a' f(x_n)} / [2{f'(x_n) + a' f(x_n)}^2 - f(x_n){f''(x_n) + 2a' f'(x_n)}],   n = 0, 1, 2, ...   (2.17)

Formula (2.17) is a Halley-type formula for finding the root of the equation. Mathematically, we see that formula (2.17) reduces to Halley's formula [4] in the absence of the parameter (a'). Numerically, the parameter (a') in (2.17) is chosen to maximize the absolute value of the denominator.
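In the same spirit, a Python sketch of the Halley-type family (2.17) follows; the helper name, the default m = 1 and the stopping rule are illustrative assumptions, with a' again fixed by the sign rule (2.12).

import math

def halley_type(f, fprime, fsecond, x0, m=1.0, tol=1e-12, max_iter=100):
    """Halley-type family (2.17)."""
    x = x0
    for _ in range(max_iter):
        fx, fpx, fppx = f(x), fprime(x), fsecond(x)
        a1 = -m * m if fx * fpx <= 0 else m * m            # sign rule (2.12)
        d = fpx + a1 * fx
        denom = 2 * d * d - fx * (fppx + 2 * a1 * fpx)     # denominator of (2.17)
        step = 2 * fx * d / denom
        x -= step
        if abs(step) < tol:
            break
    return x

# Example 2 of the paper: exp(x^2 + 7x - 30) - 1 = 0 with root r = 3, started from x0 = 3.5.
u = lambda x: x * x + 7 * x - 30
print(halley_type(lambda x: math.exp(u(x)) - 1,
                  lambda x: (2 * x + 7) * math.exp(u(x)),
                  lambda x: ((2 * x + 7) ** 2 + 2) * math.exp(u(x)), 3.5))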
Case (iii): Chebyshev-type methods
Eq. (2.13), on manipulation, gives

h = -f(x0)/{f'(x0) + a' f(x0)} - h^2 {f''(x0) + 2a' f'(x0)} / [2{f'(x0) + a' f(x0)}].   (2.18)

Using the result of formula (2.9) on the right-hand side of (2.18), the general formula for successive approximations is given by

x_{n+1} = x_n - f(x_n)/{f'(x_n) + a' f(x_n)} - (1/2) f^2(x_n){f''(x_n) + 2a' f'(x_n)} / {f'(x_n) + a' f(x_n)}^3.   (2.19)

Formula (2.19) is a Chebyshev-type formula for solving non-linear equations. Mathematically, we see that formula (2.19) reduces to Chebyshev's formula [6] in the absence of the parameter (a'). The results presented can be further generalised to obtain Chebyshev-type fourth- and fifth-order iterative methods. Again, the parameter (a') in (2.19) is chosen to maximize the absolute value of the denominator.
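A matching sketch of the Chebyshev-type family (2.19), under the same illustrative assumptions (helper name, m = 1, stopping tolerance):

def chebyshev_type(f, fprime, fsecond, x0, m=1.0, tol=1e-12, max_iter=100):
    """Chebyshev-type family (2.19): Newton-type step plus a second-derivative correction."""
    x = x0
    for _ in range(max_iter):
        fx, fpx, fppx = f(x), fprime(x), fsecond(x)
        a1 = -m * m if fx * fpx <= 0 else m * m    # sign rule (2.12)
        d = fpx + a1 * fx
        step = fx / d + 0.5 * fx * fx * (fppx + 2 * a1 * fpx) / d**3
        x -= step
        if abs(step) < tol:
            break
    return x

# Example 3 of the paper: x^3 + 4x^2 - 10 = 0, r = 1.36523..., from x0 = 2.
print(chebyshev_type(lambda x: x**3 + 4 * x**2 - 10,
                     lambda x: 3 * x**2 + 8 * x,
                     lambda x: 6 * x + 8, 2.0))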
Case (iv): Irrational Halley-type methods
Solving the quadratic Eq. (2.18) for h, the following iterative formula can be obtained:

x_{n+1} = x_n - 2f(x_n) / [{f'(x_n) + a' f(x_n)} + sqrt({f'(x_n) + a' f(x_n)}^2 - 2f(x_n){f''(x_n) + 2a' f'(x_n)})].

This formula describes a one-parameter (a') family of irrational Halley-type methods. For a' = 0, the irrational Halley method is recovered.
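A brief sketch of the irrational Halley-type family is given below. The sign of the square-root term and the guard against a negative discriminant are illustrative choices: the sign is matched to that of f'(x) + a' f(x) so that the denominator is largest in magnitude, which recovers the classical irrational Halley step when a' = 0.

import math

def irrational_halley_type(f, fprime, fsecond, x0, m=1.0, tol=1e-12, max_iter=100):
    x = x0
    for _ in range(max_iter):
        fx, fpx, fppx = f(x), fprime(x), fsecond(x)
        a1 = -m * m if fx * fpx <= 0 else m * m        # sign rule (2.12)
        d = fpx + a1 * fx
        disc = d * d - 2 * fx * (fppx + 2 * a1 * fpx)  # expression under the square root
        disc = max(disc, 0.0)                          # pragmatic guard against a complex step
        step = 2 * fx / (d + math.copysign(math.sqrt(disc), d))
        x -= step
        if abs(step) < tol:
            break
    return x

# Example 4 of the paper: cos x - x = 0 from x0 = 0.
print(irrational_halley_type(lambda x: math.cos(x) - x,
                             lambda x: -math.sin(x) - 1,
                             lambda x: -math.cos(x), 0.0))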

2.2. Second family

The problem of finding a real and simple root of the non-linear Eq. (2.1) is now treated on the basis of a one-point approximation by the parabola

y = a^2 f(x0)(x - x0)^2,   (2.20)

where a ∈ R is a scaling parameter and x0 is the initial guess for the required root.
Let (x1, f(x1)) be the point of intersection of (2.20) with the curve y = f(x). Since parabola (2.20) passes through this point, with x1 = x0 + h Eq. (2.20) takes the form

f(x0 + h) = a^2 f(x0) h^2.   (2.21)
Case (i): Parabolic version of Newton's method
Expanding the left-hand side of (2.21) by Taylor's theorem and retaining the terms up to O(h^2) (excluding the term containing the second derivative), we get

h = -2f(x0) / [f'(x0) ± sqrt(f'^2(x0) + 4a^2 f^2(x0))].   (2.22)

In (2.22), the sign in the denominator should be chosen so that the denominator is largest in magnitude. With this choice, the first approximation is given by

x1 = x0 - 2f(x0) / [f'(x0) ± sqrt(f'^2(x0) + 4a^2 f^2(x0))].   (2.23)
Repeating the process, the general formula for successive approximations is therefore given by

x_{n+1} = x_n - 2f(x_n) / [f'(x_n) ± sqrt(f'^2(x_n) + 4a^2 f^2(x_n))],   n ≥ 0.   (2.24)

Relation (2.24) defines the quadratically convergent parabolic method for solving Eq. (2.1). If we let a = 0 in (2.24) (taking the sign that makes the denominator 2f'(x_n)), we obtain Newton's iteration formula, which is the simplest case of this family.
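A sketch of the parabolic family (2.24) follows; the sign is implemented, as stated above, so that the denominator has the largest magnitude, and the default a = 1 is an illustrative choice.

import math

def parabolic_newton(f, fprime, x0, a=1.0, tol=1e-12, max_iter=100):
    """Parabolic family (2.24); the denominator stays away from zero even when f'(x) = 0."""
    x = x0
    for _ in range(max_iter):
        fx, fpx = f(x), fprime(x)
        rad = math.sqrt(fpx * fpx + 4 * a * a * fx * fx)
        step = 2 * fx / (fpx + math.copysign(rad, fpx))   # sign chosen to maximize |denominator|
        x -= step
        if abs(step) < tol:
            break
    return x

# Example 1 of the paper from x0 = 0, where classical Newton fails because f'(0) = 0.
print(parabolic_newton(lambda x: x**10 - 1, lambda x: 10 * x**9, 0.0))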
Case (ii): Some other formulae
Expanding (2.21) by Taylor's theorem around x0 and retaining the terms up to O(h^2) (excluding the term containing the second derivative), we get

h = -f(x0) / {f'(x0) - a^2 h f(x0)}.   (2.25)

We again use the correction (2.9) on the right-hand side of (2.25) to obtain some new second-order iterative processes. Thus we obtain the following iterative formula:

x_{n+1} = x_n - f(x_n){f'(x_n) + a' f(x_n)} / [f'(x_n){f'(x_n) + a' f(x_n)} + a^2 f^2(x_n)],   n ≥ 0.   (2.26)

Needless to say, Newton's method is recovered when a' and a approach zero.
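A short sketch of formula (2.26); treating a and a' as fixed user-supplied parameters here is an illustrative simplification.

def modified_newton(f, fprime, x0, a=1.0, a1=1.0, tol=1e-12, max_iter=100):
    """Formula (2.26): x <- x - f*(f' + a'*f) / (f'*(f' + a'*f) + a^2*f^2)."""
    x = x0
    for _ in range(max_iter):
        fx, fpx = f(x), fprime(x)
        d = fpx + a1 * fx
        step = fx * d / (fpx * d + a * a * fx * fx)
        x -= step
        if abs(step) < tol:
            break
    return x

# Example 3 of the paper from x0 = 2.
print(modified_newton(lambda x: x**3 + 4 * x**2 - 10, lambda x: 3 * x**2 + 8 * x, 2.0))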

3. Analysis of convergence

Hernández [7] has proved that the degree of logarithmic convexity has a close relationship with the speed of convergence of Newton's method.

Definition. Let f : [a, b] ⊂ R → R be a convex, twice differentiable function on an interval [a, b] and let x0 ∈ [a, b] be such that f'(x0) ≠ 0. Then the degree of logarithmic convexity of f at x0 is defined as

L_f(x0) = f(x0) f''(x0) / f'^2(x0).

Theorem 3.1. Let x = r be a root of f(x) = 0, with f'(r) ≠ 0 and f'(x) + a' f(x) ≠ 0, and let I ⊂ R be an interval containing the point x = r. Suppose that |L_f(x)| < 1 and {1 + a' f(x)/f'(x)}^2 ≥ 1 for all x ∈ I. Then the sequence <x_{n+1}> defined by (2.11) converges to the unique root r, provided the initial guess x0 is chosen in I.

Proof. Let us write Eq. (2.11) as

g(x) = x - f(x) / {f'(x) + a' f(x)},   f'(x) + a' f(x) ≠ 0.   (3.1)

Differentiating both sides with respect to x, we obtain

g'(x) = 1 + {L_f(x) - 1} / p^2,   (3.2)

where p = 1 + a' f(x)/f'(x).
From (2.11) and (3.1), we have

x_{n+1} = g(x_n),   (3.3)
which is a fixed-point iteration formula; this means that the equation x = g(x) is equivalent to f(x) = 0.
Since x = r is a root of f(x) = 0, i.e. of x = g(x), we have

r = g(r).   (3.4)

If x0 and x1 are two successive approximations to r, then

x1 = g(x0).   (3.5)

Subtracting (3.5) from (3.4), we obtain

r - x1 = g(r) - g(x0).   (3.6)

Since by hypothesis |L_f(x)| < 1 for all x ∈ I, and (a') is chosen in such a way that p^2 ≥ 1, we deduce from (3.2) that |g'(x)| < 1 for all x ∈ I, and it follows that |g'(x)| ≤ k < 1 for all x ∈ I.
Since 0 < k < 1, the sequence <x_{n+1}> therefore converges to r as n → ∞.
By means of the mean value theorem, one can prove that the root obtained is unique.
Now we present the mathematical proof of the order of convergence of techniques (2.11), (2.17) and (2.19); the order of convergence of the remaining techniques can be proved similarly.

Theorem 3.2. Let f : D → R, where D is an open interval. Assume that f has first and second derivatives in D. If f(x) has a simple root r ∈ D, f'(x) + a' f(x) ≠ 0 and x0 is an initial guess sufficiently close to r, then formula (2.11) satisfies the following error equation:

e_{n+1} = (C2 + a') e_n^2 + O(e_n^3),   (3.7)

where C2 = (1/2!) f''(r)/f'(r).

Proof. Let r be a simple root and let e_n be the error at the nth iteration; then f(r) = 0, f'(r) ≠ 0 and x_n = r + e_n. Using Taylor's expansion,

f(x_n) = f(r + e_n) = f'(r){e_n + C2 e_n^2 + O(e_n^3)},   (3.8)

and

f'(x_n) = f'(r + e_n) = f'(r){1 + 2C2 e_n + 3C3 e_n^2 + O(e_n^3)},   (3.9)

where C3 = (1/3!) f'''(r)/f'(r).
Using the Taylor expansions of f and f', relation (2.11) gives rise to the error equation

e_{n+1} = (C2 + a') e_n^2 + O(e_n^3).   (3.10)
This means that the family of iteration formulae (2.11) has quadratic convergence.
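The error equation (3.7) is easy to check numerically: the ratio e_{n+1}/e_n^2 should approach C2 + a' for the Newton-type iteration (2.11). The test equation, the fixed choice a' = 1 and the reference value of the root used below are illustrative.

f = lambda x: x**3 + 4 * x**2 - 10
fp = lambda x: 3 * x**2 + 8 * x
fpp = lambda x: 6 * x + 8
r = 1.3652300134140969          # reference root of f (assumed known)
a1 = 1.0                        # a' held fixed for this check
C2 = 0.5 * fpp(r) / fp(r)

x = 1.5
for _ in range(4):
    e_old = x - r
    x = x - f(x) / (fp(x) + a1 * f(x))   # one step of (2.11)
    print((x - r) / e_old**2, "should approach", C2 + a1)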
Theorem 3.3. Let f : D → R, where D is an open interval. Assume that f has first, second and third derivatives in D. If f(x) has a simple root r ∈ D, f'(x) + a' f(x) ≠ 0 and x0 is an initial guess sufficiently close to r, then formulae (2.17) and (2.19) satisfy the following error equations:

e_{n+1} = {C2^2 - C3 + a'(C2 + a')} e_n^3 + O(e_n^4),   (3.11)

and

e_{n+1} = {2C2^2 - C3 + 4a'^2 + 3a' C2} e_n^3 + O(e_n^4),   (3.12)

respectively, where e_n = x_n - r and C_j = (1/j!){f^(j)(r)/f'(r)}, j = 2, 3, ....
The proof follows along the same lines as that of the previous theorem.

4. Numerical examples

Now we employ the modified Newton-type, Halley-type and Chebyshev-type iterative methods to solve some non-linear equations and compare them with Newton's method, Halley's method and Chebyshev's method. The computational results are displayed in Table 1. Calculations are performed using double-precision arithmetic. Here formulae (2.11), (2.17) and (2.19) are tested with a' = 1/2.

Example 1. x^10 - 1 = 0, r = 1.

Example 2. exp(x^2 + 7x - 30) - 1 = 0, r = 3.

Table 1
Comparison of iterative methods (number of iterations)

Example   Initial guess   Newton      Newton-type   Halley      Halley-type   Chebyshev   Chebyshev-type
1         0.0             Fails       12            Fails       7             Fails       8
1         0.1             Divergent   12            14          7             Divergent   9
1         2.0             10          11            6           6             7           7
2         0.0             Divergent   20            17          12            Divergent   14
2         1.5             Divergent   12            11          7             Divergent   8
2         3.5             11          11            6           6             7           7
3         0.0             Fails       6             Fails       4             Fails       3
3         0.1             9           6             5           4             70          11
3         3               5           7             3           4             4           5
4         -3              89          6             Divergent   5             5           5
4         0               4           4             3           4             3           3
4         3               6           6             4           3             Divergent   5

Example 3. x^3 + 4x^2 - 10 = 0, r = 1.3652300134140969.

Example 4. cos x - x = 0, r = 0.7390851332151607.
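For reference, the comparison in Table 1 can be reproduced in outline with a small driver such as the one below. The stopping criterion, the iteration cap and the value m^2 = 1/2 in the sign rule are assumptions, so the iteration counts will only roughly match those reported in the table.

import math

def classical_newton(f, fp, x0, tol=1e-10, cap=100):
    x = x0
    for n in range(1, cap + 1):
        if fp(x) == 0:
            return "fails"                     # Newton breaks down where f'(x) = 0
        x_new = x - f(x) / fp(x)
        if abs(x_new - x) < tol:
            return n
        x = x_new
    return "divergent"

def newton_type(f, fp, x0, m2=0.5, tol=1e-10, cap=100):
    x = x0
    for n in range(1, cap + 1):
        a1 = -m2 if f(x) * fp(x) <= 0 else m2  # sign rule (2.12)
        x_new = x - f(x) / (fp(x) + a1 * f(x))
        if abs(x_new - x) < tol:
            return n
        x = x_new
    return "divergent"

# Example 3 (f'(0) = 0, so classical Newton fails from x0 = 0) and Example 4 for comparison.
f3, fp3 = lambda x: x**3 + 4 * x**2 - 10, lambda x: 3 * x**2 + 8 * x
f4, fp4 = lambda x: math.cos(x) - x, lambda x: -math.sin(x) - 1
print(classical_newton(f3, fp3, 0.0), newton_type(f3, fp3, 0.0))
print(classical_newton(f4, fp4, 0.0), newton_type(f4, fp4, 0.0))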

5. Remarks and conclusion

One may ask why the first approximation is taken as the point of intersection of the normal to the parabola with the x-axis, instead of the point of intersection of the tangent to the curve with the x-axis (Newton's method). The answer is that a tangent to the parabola will not work. For example, if the polynomial equation f(x) = 0 has its roots in [0, ∞), then the parabola y^2 = 2ax, a > 0, has all of its tangents meeting the x-axis somewhere in (-∞, 0], which is of no interest at all. A similar conclusion holds if f(x) has its roots in (-∞, 0]. This study presents several formulae of second and third order for solving non-linear equations. The Newton-type methods are extensions of Newton's method and have a well-known geometric derivation. These new methods remove the severe requirement f'(x) ≠ 0 in the vicinity of the required root. Numerical examples show that, in cases where f'(x) = 0 near the root or where global convergence is at issue, these modified methods may be efficient and perform better than the other known methods of the same order. Furthermore, the Halley-type methods can be extended, as Halley's method has been, to systems of non-linear equations (for example, the problem of the square root of matrices), and this justifies the introduction of the proposed methods in practice. The results presented in this paper can further be applied to obtain fourth- or higher-order convergent iterative methods.

References

[1] A.M. Ostrowski, Solution of Equations in Euclidean and Banach Spaces, third ed., Academic Press, New York, 1973.
[2] A. Melman, Geometry and convergence of Euler's and Halley's methods, SIAM Rev. 39 (4) (1997) 728–735.
[3] C.T. Kelley, Iterative Methods for Linear and Nonlinear Equations, SIAM, Philadelphia, PA, 1995.
[4] E. Halley, A new, exact and easy method for finding the roots of any equations generally, without any previous reduction (Latin), Philos. Trans. Roy. Soc. London 18 (1694) 136–148; English translation: Philos. Trans. Roy. Soc. London (abridged) 3 (1809) 640–649.
[5] G.S. Salehov, On the convergence of the process of tangent hyperbolas, Dokl. Akad. Nauk SSSR 82 (1952) 525–528 (in Russian).
[6] J.F. Traub, Iterative Methods for the Solution of Equations, Prentice-Hall, Englewood Cliffs, NJ, 1964.
[7] M.A. Hernández, Newton–Raphson's method and convexity, Zb. Rad. Prirod.-Mat. Fak. Ser. Mat. 22 (1) (1993) 159–166.
[8] T.R. Scavo, J.B. Thoo, On the geometry of Halley's method, Am. Math. Mon. 102 (1995) 417–426.
[9] X. Wu, H.W. Wu, On a class of quadratic convergence iteration formulae without derivatives, Appl. Math. Comput. 107 (2000) 77–80.
