Novi Sad J. Math. Vol. 27, No. 1, 1997, 1-17

DIFFERENT ACCELERATION PROCEDURES OF NEWTON'S METHOD

J. A. Ezquerro, M. A. Hernández
University of La Rioja, Department of Mathematics and Computation
C/ Luis de Ulloa s/n, 26004 Logroño, Spain

Abstract

Three different procedures to accelerate Newton's method are considered. All of them are based on the influence of convexity of a real function. From these accelerations we define three iterative methods and a family of iterations. Finally, we see which of the methods shows the fastest convergence to a solution of a nonlinear equation.

AMS Mathematics Subject Classification (1991): 65H05
Key words and phrases: nonlinear equations, Newton method, Halley method, Convex Acceleration of Newton's method, Chebyshev method, convexity

1. Introduction

We analyse the influence of the convexity of a curve y = f(x) on the velocity of convergence of the sequence defined by Newton's method (of second order):

    x_{n+1} = F(x_n) = x_n - f(x_n)/f'(x_n),   n >= 0.   (1)

To measure the convexity of the curve y = f(x) we use the degree of logarithmic convexity [11]:

    L_f(x) = f(x) f''(x) / f'(x)^2,   (2)

if f'(x) != 0. It is a measure of convexity at each point of the curve. Observe that F'(x) = L_f(x), where F is defined in (1).

Taking into account the geometrical interpretation of Newton's method [15], we deduce that the smaller the convexity of the curve y = f(x), the faster the convergence rate of sequence (1) to a unique solution x* of the equation

    f(x) = 0.   (3)

In particular, if y = f(x) is a line, we have x_1 = x*.

Three procedures of acceleration of Newton's method are given. The first one consists of reducing directly the degree of logarithmic convexity of f: from the function f, two functions with a lower degree of logarithmic convexity than f are provided. In the second procedure, named global approximation, the curve y = f(x) is approximated by the tangent line at (x*, 0). Note that lines are the curves with the lowest degree of logarithmic convexity.
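Definitions (1) and (2) are easy to evaluate numerically. The following Python sketch (the helper names are ours, not the paper's) iterates Newton's method and computes the degree of logarithmic convexity, here tried on f(x) = x^2 - 2:

```python
import math

def newton(f, fp, x0, n_steps):
    """Newton's sequence (1): x_{n+1} = x_n - f(x_n)/f'(x_n)."""
    xs = [x0]
    for _ in range(n_steps):
        x = xs[-1]
        xs.append(x - f(x) / fp(x))
    return xs

def degree_log_convexity(f, fp, fpp, x):
    """Degree of logarithmic convexity (2): L_f(x) = f(x) f''(x) / f'(x)^2."""
    return f(x) * fpp(x) / fp(x) ** 2

# Example function (ours): f(x) = x^2 - 2, with positive root sqrt(2).
f = lambda x: x * x - 2
fp = lambda x: 2 * x
fpp = lambda x: 2.0

xs = newton(f, fp, 2.0, 6)
```

At the root, f vanishes, so L_f(x*) = 0, consistent with the remark that F'(x) = L_f(x) makes x* a superattracting fixed point of F.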
Finally, in the procedure called local approximation, the curve y = f(x) is approximated by means of the line y = f'(x_n)(x - x*) in a neighbourhood of each point (x_n, f(x_n)).

By these three procedures, pointwise accelerations are obtained. Consequently, independent iterative processes are thus defined.

2. Influence of convexity in Newton's method

We extend the justification of the above-mentioned ideas and complete a study given by Hernández in [12]. Let f be a real function defined on an interval [a, b]. Let us assume that f satisfies the following Fourier conditions in [a, b]: f(a) < 0 < f(b), f'(x) > 0 and f''(x) > 0. For other situations, it suffices to change f(x) for f(-x), -f(x) or -f(-x). It is obvious that, under the above conditions, there exists a unique solution x* of equation (3). Set x_0 in [a, b] with f(x_0) > 0.

On the other hand, let g be a function satisfying the same conditions as f in [a, b]. Let us assume that x* is also the unique solution of g(x) = 0 in [a, b]. Let y_0 = x_0, and consider for all n >= 0 the sequence

    y_{n+1} = G(y_n) = y_n - g(y_n)/g'(y_n).   (4)

By the degree of logarithmic convexity of the functions f and g, we will compare the velocity of convergence of sequences (1) and (4). When the velocity of convergence of two sequences with the same order of convergence is compared, the problem of accessibility to the solution x* of equation (3) appears. So, we study this problem first for Newton's method.

Lemma 2.1. Let f satisfy the above Fourier conditions in [a, b]. Let x_0 in [a, b] with f(x_0) > 0 and let {x_n} be the sequence defined by (1). If x_{n-1} != x* and x_n = x*, then f(x) = ax + b with a, b in R, for all x in (x*, x_{n-1}). Furthermore, x_{n+k} = x* for all k in N.

Proof. From F defined in (1) we deduce that F(x) <= x* in (x*, x_{n-1}), since F is a nondecreasing function in [x*, b] and F(x_{n-1}) = x_n = x*. By means of the Mean Value theorem, we get

    F(x) - x* = F(x) - F(x*) = F'(w)(x - x*)

for some w in (x*, x), and then, since F'(w) = L_f(w) >= 0, F(x) >= x*.
Therefore, F(x) = x* for x in (x*, x_{n-1}) and consequently f(x) = ax + b in (x*, x_{n-1}) with a, b in R. On the other hand, it is clear that x_{n+k} = x* for all k in N, since x* is a fixed point of F. []

Now we compare the velocity of the sequences defined in (1) and (4) with a result given by Hernández [12].

Theorem 2.1. Let f and g satisfy the above Fourier conditions in [a, b] and let both of them have the same solution x* in [a, b]. Let f(x_0) > 0 and g(x_0) > 0 for x_0 in [a, b]. If L_f(x) >= L_g(x) >= 0 in [a, b] - {x*}, then sequence (4) converges to x* faster than sequence (1) for y_0 = x_0.

Notice that if we consider f(x_0) < 0, we have to assure that x_1 <= b and y_1 <= b in order to guarantee that sequences (1) and (4) lie in [a, b]. Then both sequences will be decreasing from x_1 and y_1, since L_f(x) <= L_g(x) in (x_0, x*). Therefore, with slight modifications we obtain a result analogous to Theorem 2.1.

Example 1. Let f(x) = x^3/216 - 1 and g(x) = x^2/36 - 1 be two functions with the same zero x* = 6 in [3, 10]. From (2) we obtain

    L_f(x) = 2/3 - 144/x^3   and   L_g(x) = 1/2 - 18/x^2.

These functions are increasing and convex in [3, 10]. Therefore, by Theorem 2.1, the sequence {y_n} converges to x* = 6 faster than {x_n} if y_0 = x_0 in [3, 10] and f(x_0) > 0. On the other hand, if we choose x_0 = 3 in [3, 10], where f(3) < 0, then 10 >= F(3) = 10 and 10 >= G(3) = 7.5. Therefore x_n, y_n in [6, 10] for all n >= 1; see Table 1.

    n        x_n                  y_n
    0    3.0000000000000      3.0000000000000
    1   10.0000000000000      7.5000000000000
    2    7.3866666666667      6.1500000000000
    3    6.2440237430147      6.0018292682927
    4    6.0094124974239      6.0000002787669
    5    6.0000147350265      6.0000000000000
    6    6.0000000000362      6.0000000000000

    Table 1

Next, we extend the previous situation. The following result solves the problem that the Fourier conditions are insufficient to get convergence for Newton's method when f'' changes sign in [a, b].

Theorem 2.2.
Let f be a sufficiently differentiable function that satisfies f(a)f(b) < 0, f'(x) != 0 and |L_f(x)| <= M < 1 in [a, b]. Then the iterative method defined by (1) converges to x* for any x_0 in [a, b] such that a <= F(x_0) <= b.

Proof. Since F is a contractive function, it follows that |x_2 - x*| <= M|x_1 - x*|. By mathematical induction, it is easy to check that |x_{n+1} - x*| <= M^n |x_1 - x*| <= |x_1 - x*| for all n in N. Besides, from x_1 in [a, b] it follows that x_n in [a, b] for all n in N. Now, the convergence of (1) to x* is deduced in a similar way. []

Note that the last result allows improvement of the widely used Fourier conditions.

Example 2. Let us consider f(x) = -x^3 + 3x^2 - 2 in the interval [7/10, 10/7]. Note that this function has an inflexion point in [7/10, 10/7]; hence f'' changes sign in that interval. The function defined from (2) is

    L_f(x) = 2(x - 1)^2 (x^2 - 2x - 2) / (3x^2 (x - 2)^2),

and it satisfies |L_f(x)| < 1 in [7/10, 10/7]. If we choose x_0 = 1.6, then x_1 = 0.775 in [7/10, 10/7]. Therefore, by Theorem 2.2, the sequence {x_n} given by (1) converges to the solution x* = 1 in [7/10, 10/7]; see Table 2.

    n        x_n
    0   1.6000000000000000
    1   0.7750000000000000
    2   1.0079986833443050
    3   0.9999996588133421
    4   1.0000000000000000

    Table 2

Example 3. Consider f(x) = 1/2 + sin x. Then L_f(x) = -sin x (1 + 2 sin x) / (2 cos^2 x). It is easy to check that |L_f(x)| < 1 in [-1.00297, 0.634867]. We choose x_0 = 0.6 for Theorem 2.2 and obtain x_1 = -0.6899... in [-1.00297, 0.634867]. Consequently, the sequence {x_n} defined by (1) lies in [-1.00297, 0.634867] and converges to the solution x* = -0.5235987755982988711; see Table 3.

    n         x_n
    0    0.60000000000000000000
    1   -0.6899509655978506667
    2   -0.5129726247150719697
    3   -0.5235667752006047706
    4   -0.5235987753027045709
    5   -0.5235987755982988737
    6   -0.5235987755982988705
    7   -0.5235987755982988742
    8   -0.5235987755982988711
    9   -0.5235987755982988711

    Table 3

Note that the Fourier conditions are satisfied in [-π/2, c], where c < 0.
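Theorem 2.2 and Example 3 are easy to check numerically. A minimal sketch in double-precision arithmetic (the helper name is ours): Newton's method started at x_0 = 0.6 for f(x) = 1/2 + sin x, whose root in the interval is x* = -π/6, even though f'' changes sign at 0 and the Fourier conditions fail.

```python
import math

def newton(f, fp, x0, n_steps):
    """Newton's sequence (1): x_{n+1} = x_n - f(x_n)/f'(x_n)."""
    xs = [x0]
    for _ in range(n_steps):
        x = xs[-1]
        xs.append(x - f(x) / fp(x))
    return xs

# Example 3: f(x) = 1/2 + sin x, f'(x) = cos x; the root is x* = -pi/6.
f = lambda x: 0.5 + math.sin(x)
fp = lambda x: math.cos(x)

xs = newton(f, fp, 0.6, 12)
```

The first iterate reproduces Table 3 (x_1 ≈ -0.68995) and the sequence settles at -π/6 to machine precision.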
Therefore, we have extended the domain of initial points from which Newton's method can be applied.

After proving that the convexity of the function f influences the rate of convergence of Newton's method, we obtain different sequences that converge faster than Newton's one. We call these sequences accelerations of Newton's iteration, and they are pointwise accelerations. That is to say, we obtain y_{n+1} = G(x_n), where G is defined in (4), so that y_{n+1} is closer to x* than x_{n+1}. As a consequence, we will be able to define new iterative processes of the form x_{n+1} = G(x_n) from the function G.

3. Convex acceleration of Newton's method by means of direct reduction of the degree of logarithmic convexity of the function

Throughout this section f and g are two increasing and convex functions in [a, b] that satisfy |L_f(x)| >= |L_g(x)| for x in (x*, x_0]. Let x_0 in [a, b] with f(x_0) > 0. We analyse two different ways to accelerate Newton's sequence.

Firstly, we consider the function [13]

    g(x) = f(x) / (1 + α f(x)),

where 1 + α f(x) > 0 in [a, b] and α >= 0. It is obvious that g(x) > 0 in (x*, b] under the previous statements. Then we have

    g'(x) = f'(x)/(1 + α f(x))^2   and   g''(x) = (f''(x)(1 + α f(x)) - 2α f'(x)^2)/(1 + α f(x))^3.

As L_f(x*) = 0, there exists an x_0 in [a, b] such that L_f(x) < 2 in (x*, x_0]. Moreover, if α satisfies

    α <= min_{x in (x*, x_0]} { U[f](x)/(2 - L_f(x)) },

where U[f](x) = f''(x)/f'(x)^2, then g is increasing and convex in (x*, x_0] (this function and its connection with Whittaker's method are studied in [10]). On the other hand,

    L_g(x) = L_f(x) - α f(x)(2 - L_f(x)),

and L_f(x) >= L_g(x) >= 0 in (x*, x_0]. So, we define a uniparametric family of accelerations of sequence (1):

    y_{α,n+1} = x_{α,n} - g(x_{α,n})/g'(x_{α,n}) = x_{α,n} - (f(x_{α,n})/f'(x_{α,n})) (1 + α f(x_{α,n})),   α >= 0.   (5)

From Theorem 2.1 it follows that (5) converges to x* faster than Newton's sequence {x_n}. Moreover, (5) is a family of accelerations, since (1) and (5) are decreasing and y_{α,n} <= x_n for all α >= 0 and n in N.
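In particular, a Newton step applied to g = f/(1 + αf) collapses to the closed form in (5), since g/g' = (f/f')(1 + αf). A small numerical check of this identity; the test function f(x) = e^x + x and the value of α are our choices, not the paper's:

```python
import math

def family_step(f, fp, x, alpha):
    """One step of (5): x - (f(x)/f'(x)) * (1 + alpha*f(x))."""
    return x - f(x) / fp(x) * (1 + alpha * f(x))

f = lambda x: math.exp(x) + x        # increasing and convex on R
fp = lambda x: math.exp(x) + 1

alpha, x = 0.05, 1.0                 # alpha small, so 1 + alpha*f(x) > 0 here
g = lambda t: f(t) / (1 + alpha * f(t))
gp = lambda t: fp(t) / (1 + alpha * f(t)) ** 2

newton_step_on_g = x - g(x) / gp(x)          # Newton applied to g ...
closed_form = family_step(f, fp, x, alpha)   # ... equals the closed form in (5)
```

Both numbers agree to machine precision, and the accelerated step lands closer to the root x* ≈ -0.567143 of e^x + x than the plain Newton step from the same point.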
Secondly, we consider the function

    g(x) = f(x)/√(f'(x)),

introduced by Alefeld [1] to define Halley's method as a variant of Newton's method. Thus

    g'(x) = √(f'(x)) (1 - L_f(x)/2),

so if L_f(x) < 2 and L_{f'}(x) < 3/2 in (x*, x_0], where L_{f'}(x) = f'(x)f'''(x)/f''(x)^2, then g is increasing and convex in (x*, x_0]. Besides,

    L_g(x) = L_f(x)^2 (3 - 2 L_{f'}(x)) / (2 - L_f(x))^2;

hence L_f(x) >= L_g(x) >= 0 in (x*, x_0] if and only if (L_f(x) - 2)^2 - L_f(x)(3 - 2 L_{f'}(x)) >= 0. To get L_f(x) >= L_g(x) in (x*, x_0] it is enough to choose x_0 so that

    L_f(x) <= (1/2) (7 - 2m - √((7 - 2m)^2 - 16))   in (x*, x_0],

where m = min_{x in [x*, x_0]} { L_{f'}(x) }. Under these assumptions, we obtain by Theorem 2.1 an acceleration of Newton's method defined for all n >= 0 by

    y_{n+1} = x_n - g(x_n)/g'(x_n) = x_n - (f(x_n)/f'(x_n)) (2/(2 - L_f(x_n))).   (6)

Example 4. Let us consider the function f(x) = e^x + x. We see from Table 4 that the sequence {y_n} defined by (6) converges to the solution x* = -0.5671432904097839 of the equation f(x) = 0 in [-2, 2]. Besides, it is an acceleration of the sequence (1) starting from x_0 = 2. We obtain that m = L_{f'}(2) = 1.13534 and

    (1/2)(7 - 2m - √((7 - 2m)^2 - 16)) = 1.10306.

Thus, it is easy to check that L_f(x) <= 1.10306 in R.

    n         x_n                      y_n
    0    2.0000000000000000       2.0000000000000000
    1    0.8807970779778824      -0.2070451959228786
    2   -0.0842749600983386      -0.5683407447276397
    3   -0.5193066837383489      -0.5671432903624338
    4   -0.5667232231976213      -0.5671432904097839
    5   -0.5671432584762297      -0.5671432904097839
    6   -0.5671432904097837      -0.5671432904097839

    Table 4

4. Global convex acceleration of Newton's method

This acceleration procedure was introduced by Hernández in [12]. The curve y = f(x) is approximated by the tangent line at (x*, 0) (see Figure 1). For that, it suffices to consider f in C^2[a, b] and the limited Taylor formula of f in a neighbourhood of x*:

    f(x) ≈ f(x*) + f'(x*)(x - x*) + (f''(x*)/2!)(x - x*)^2,

and we consider the function
    g(x) = f(x) - (f''(x*)/2!)(x - x*)^2.

[Figure 1: the curve y = f(x) and the tangent line y = f'(x*)(x - x*) at (x*, 0).]

But this function runs into a problem: x* is an unknown value. As our purpose is to accelerate the sequence (1), we must obtain a new value y_{n+1} = G(x_n), where G is defined in (4), closer to x* than x_{n+1}. Thereby, we have to evaluate g and g' at x_n. So we approximate

    (i)  f''(x*) ≈ f''(x_n),
    (ii) (x_n - x*)^k ≈ (x_n - x_{n+1})^k for k = 1, 2, since lim_{n→∞} (x_n - x*)/(x_n - x_{n+1}) = 1.

Therefore, we consider

    g(x_n) ≈ f(x_n) - (f''(x_n)/2)(x_n - x_{n+1})^2 = f(x_n)(1 - L_f(x_n)/2)

and

    g'(x_n) ≈ f'(x_n) - f''(x_n)(x_n - x_{n+1}) = f'(x_n)(1 - L_f(x_n)),

in order to define the sequence

    y_{n+1} = x_n - g(x_n)/g'(x_n) = x_n - (f(x_n)/f'(x_n)) (1 - L_f(x_n)/2)/(1 - L_f(x_n)).   (7)

To see that (7) is an acceleration of (1) [12], it suffices to show that

    lim_{n→∞} |y_n - x*| / |x_n - x*| = 0.

Example 5. Given the function f(x) = e^x/x - 5, we show in Table 5 that sequence (7) converges to the solution x* = 2.542641357773526 of f(x) = 0 in [1, 4] faster than Newton's one.

    n        x_n                  y_n
    0   3.500000000000000    3.500000000000000
    1   2.839835893846803    2.441271065123373
    2   2.577023717097117    2.542750966419476
    3   2.543144242820421    2.542641357773588
    4   2.542641466706540    2.542641357773526
    5   2.542641357773532    2.542641357773526

    Table 5

5. Local convex acceleration of Newton's method

The third acceleration procedure, which we call local approximation, consists of approximating the curve y = f(x) by lines. When the sequence {x_n} given by (1) is obtained, for each point x_n of (1) the curve y = f(x) is approximated by the line y = f'(x_n)(x - x*) in a neighbourhood of x_n (see Figure 2).

[Figure 2: the curve y = f(x) and the line y = f'(x_{n-1})(x - x*).]

Let us consider f in C^2[a, b] and the limited Taylor formula

    f(x) ≈ f(x_n) + f'(x_n)(x - x_n) + (f''(x_n)/2!)(x - x_n)^2.   (8)

So we consider g(x) = f'(x_n)(x - x*). Now we have to approximate g(x_n) and g'(x_n), to get y_{n+1} = G(x_n), where G is given in (4).
Taking into account (8), for 2 = 2*: a(n) ~ flan) + PEN ar — 29) and wean tan n(fE" we obtain g(tn) ~ fan) ( ce 5hi(tn)) : On the other hand, it is obvious that /(tn) = f'(an). Therefore (20) Sen) (, 41 Unt = 2m — iC) ~~ Fn) (1+ ptrlen)- 1) Using the same argument as for sequence (7), it is proved that (9) is an acceleration of (1). Example 6. Let us consider again f(z) = “-—°”. We see from Table 6 e that the sequence {yn} defined by (9) converges to x* = 2.542641357773526 faster than (1). tn Yn. 3.500000000000000 } 3.500600000000000 2.839835893846803 | 2.659283282924826 2.577023717097117 | 2.543020336792808 2.543144242829421 | 2.542641357787998 2.542641466706540 | 2.542641357773526 2.542641357773532 } 2.542641357773526 anennols Table 6 12 J.A.Ezquerro, M.A. Hernandez 6. Iterative processes obtained by means of convex acceleration of Newton’s method We have obtained four accelerations of Newton’s method as a consequence of convexity of the function f. All of them have the form yn41 = G(tn), where G is defined in (4) and, therefore, independent iterative processes may be considered. In this section we define iterative processes, and give a first study of the convergence according to the degree of logarithmic convexity of f. The acceleration (5) provides the family of iterations tant = Fon FEAF 4 afltom))» 20. (10) Let us consider the f increasing and convex in (a,0]. Denote [20,06] if tao <2" < a,6,0,0 >= ' [4,0] if tao >2* and m <4,b,fa,9 >= . UTF\(2) 2€ {5 = He}. ‘Then, we have the following result that has been proved in {13}. Theorem 6.1. Let rao € [a,b] with f(ra,0) > 0 and |Ls(a)| < 2 in < ,b,%a0 >. If 0S aS m, then the sequence given by (10) is decreasing and converges quadratically to z*. Moreover, if0 , the sequences {x5,n} converges to x* faster than {tan}- If f(a.) < 0 and b— 2a, > fie, under the same assumptions as mentioned above in Theorem 6.1, we get an analogous result. 
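The comparison in Theorem 6.1 is easy to observe numerically. A sketch with f(x) = e^x + x (our example, not the paper's; α = 0.1 is assumed here to lie below the bound m for this f and this starting point):

```python
import math

def family_iterates(f, fp, x0, alpha, n):
    """Sequence (10): x_{a,n+1} = x_{a,n} - (f/f')(1 + alpha*f); alpha = 0 is Newton."""
    xs = [x0]
    for _ in range(n):
        x = xs[-1]
        xs.append(x - f(x) / fp(x) * (1 + alpha * f(x)))
    return xs

f = lambda x: math.exp(x) + x    # increasing and convex; unique zero near -0.567143
fp = lambda x: math.exp(x) + 1
root = -0.5671432904097838

newton_xs = family_iterates(f, fp, 2.0, 0.0, 10)   # alpha = 0: Newton's method
accel_xs = family_iterates(f, fp, 2.0, 0.1, 10)    # alpha = 0.1: accelerated member of (10)
```

Both sequences decrease towards x*, and already at n = 2 the accelerated iterate is far closer to the root than Newton's, in line with the "faster for larger α" statement of the theorem.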
An interesting feature of the family (10) is that, for an efficiency index similar to Newton's method, we obtain an iterative process that converges to x* faster than Newton's iteration.

Example 7. Let us consider the function f(x) = ln(2/(2 - x)) in [-3/2, 3/2], whose only zero is x* = 0. The function

    U[f](x)/(2 - L_f(x)) = 1/(2 - L_f(x))

is nondecreasing in [-3/2, 3/2]. From Theorem 6.1 it follows that the fastest iterative method of (10) is given by α = m ≈ 0.390648. Therefore, the iterative process of the family (10) with α = 0.390648 is given by the sequence {x_{0.390648,n}}. From Table 7 we see that this sequence converges to the solution x* = 0 of the equation f(x) = 0 faster than Newton's sequence {x_{0,n}}.

    n        x_{0,n}                 x_{0.390648,n}
    0   1.500000000000000000    1.500000000000000000
    1   0.806852819440054700    0.431442208860817500
    2   0.190529451739077100    0.014114389234717540
    3   0.009378120633087785    0.000011006483878148
    4   0.000022021734024151    0.000000000000000000

    Table 7

The acceleration (6) provides the well-known Halley method, or method of tangent hyperbolas ([1], [2], [4], [6], [14], [16]):

    x_{n+1} = F_1(x_n) = x_n - (f(x_n)/f'(x_n)) (2/(2 - L_f(x_n))),   n >= 0.   (11)

The global approximation procedure provides the iteration

    x_{n+1} = F_2(x_n) = x_n - (f(x_n)/f'(x_n)) (1 + (1/2) L_f(x_n)/(1 - L_f(x_n))),   n >= 0,   (12)

named Convex Acceleration of Newton's method or Super-Halley method ([7], [9], [12]). Finally, by the local approximation procedure we obtain the Chebyshev iteration ([3], [5]):

    x_{n+1} = F_3(x_n) = x_n - (f(x_n)/f'(x_n)) (1 + (1/2) L_f(x_n)),   n >= 0.   (13)

On the other hand, it is well known [8] that a method given by the expression

    x_{n+1} = x_n - (f(x_n)/f'(x_n)) H(L_f(x_n)),   n >= 0,

where H(0) = 1, H'(0) = 1/2 and |H''(0)| < +∞, is of third order. Therefore (11), (12) and (13) are cubically convergent.

Now we give a convergence result depending on L_f and L_{f'} for the previous three methods. Consider f increasing, convex and sufficiently differentiable in [a, b].

Theorem 6.2. Let x_0 in [a, b] with f(x_0) > 0 and L_{f'}(x) <= 0 in [a, b].
(i) If L_f(x) < 2 in [a, b], then the sequence {x_n} given by (11) is decreasing and converges to x*.

(ii) If L_f(x) < 1 in [a, b], then the sequence {x_n} given by (12) is decreasing and converges to x*.

(iii) The sequence {x_n} given by (13) is decreasing and converges to x*.

Proof. It follows that x_0 > x* from f(x_0) > 0. By the Mean Value theorem we have x_1 - x* = F_1'(w_0)(x_0 - x*) for some w_0 in (x*, x_0). Taking into account that

    F_1'(x) = (L_f(x)/(2 - L_f(x)))^2 (3 - 2 L_{f'}(x)),

we deduce that F_1'(x) >= 0 in [x*, b]. So x_1 >= x*. Now it is easy to prove by mathematical induction that x_n >= x* for all n in N. Moreover,

    x_{n+1} = x_n - (f(x_n)/f'(x_n)) (2/(2 - L_f(x_n))) <= x_n

for all n >= 0. Thereby, the sequence (11) is decreasing and converges to some u in [a, b] with u >= x*. Making n → ∞ in (11), we get

    u = u - (f(u)/f'(u)) (2/(2 - L_f(u))),

and consequently f(u) = 0. But under the above hypotheses, x* is the unique solution of equation (3) in [a, b]; therefore u = x* and (i) is proved.

On the other hand, taking into account that

    F_2'(x) = (L_f(x)^2/(2(1 - L_f(x))^2)) (L_f(x) - L_{f'}(x))   and   F_3'(x) = (1/2) L_f(x)^2 (3 - L_{f'}(x)),

the cases (ii) and (iii) are analogous to the previous one (see [12] for (ii)). []

Note that the sequences defined by (11), (12) and (13) are increasing and converge to x* under the same assumptions of the previous theorem but with f(x_0) < 0. Furthermore, we deduce the same result for the Chebyshev method if L_{f'}(x) <= 3 in [a, b], and for the Halley method if L_f(x) < 2 and L_{f'}(x) <= 3/2 in [a, b]. However, to compare the rates of convergence of the three methods we need the same assumptions, in order to get simultaneous convergence of (11), (12) and (13).

Theorem 6.3. Under the assumptions of Theorem 6.2 and starting from the same initial point, the rate of convergence of sequence (12) is higher than that of sequence (11), and the latter is higher than that of sequence (13).

Proof. Let {x_n}, {y_n} and {z_n} be the sequences defined by (11), (12) and (13) respectively. Since x_0 = y_0 = z_0 and the three sequences are decreasing, we expect that y_n <= x_n <= z_n for all n in N.
This can be proved by mathematical induction. Let n = 1; then

    y_1 - x_1 = F_2(x_0) - F_1(x_0) = (f(x_0)/f'(x_0)) (2/(2 - L_f(x_0)) - (2 - L_f(x_0))/(2(1 - L_f(x_0)))) <= 0

and

    x_1 - z_1 = F_1(x_0) - F_3(x_0) = (f(x_0)/f'(x_0)) (1 + (1/2) L_f(x_0) - 2/(2 - L_f(x_0))) <= 0.

Now we assume that y_{n-1} <= x_{n-1} <= z_{n-1}. Taking into account that F_1, F_2 and F_3 are nondecreasing functions in [x*, x_0], it is easy to show that

    y_n - x_n = F_2(y_{n-1}) - F_1(x_{n-1}) <= F_2(x_{n-1}) - F_1(x_{n-1}) <= 0

and

    x_n - z_n = F_1(x_{n-1}) - F_3(z_{n-1}) <= F_1(z_{n-1}) - F_3(z_{n-1}) <= 0.

So the induction is completed. []

Example 8. Consider the function f(x) = x - cos x. This function is increasing and convex in [0, π/2]. From (2) it follows that

    L_f(x) = (x - cos x) cos x / (1 + sin x)^2   and   L_{f'}(x) = -sin x (1 + sin x) / cos^2 x;

then L_f(x) < 1 and L_{f'}(x) <= 0 in [0, π/2]. By Theorem 6.2 the sequences (11), (12) and (13) converge to the solution x* = 0.7390851332151606428 of f(x) = 0 in [0, π/2]. In Table 8 we compare the convergence rates of the three sequences.

    n        z_n                       x_n                       y_n
    0   1.0000000000000000000    1.0000000000000000000    1.0000000000000000000
    1   0.7412215390677832768    0.7408730950803435706    0.7404989832636941698
    2   0.7390851348155419594    0.7390851338775818840    0.7390851334050131377
    3   0.7390851332151606451    0.7390851332151606499    0.7390851332151606428
    4   0.7390851332151606435    0.7390851332151606428    0.7390851332151606428
    5   0.7390851332151606428    0.7390851332151606428    0.7390851332151606428

    Table 8

References

[1] Alefeld, G., On the convergence of Halley's method, Amer. Math. Monthly 88 (1981), 530-536.

[2] Altman, M., Concerning the method of tangent hyperbolas for operator equations, Bull. Acad. Pol. Sci., Série Sci. Math., Ast. et Phys. 9 (1961), 633-637.

[3] Argyros, I.K., Chen, D., Results on the Chebyshev method in Banach spaces, Proyecciones 12, 2 (1993), 119-128.

[4] Candela, V., Marquina, A., Recurrence relations for rational cubic methods I: The Halley method, Computing 44 (1990), 169-184.
[5] Candela, V., Marquina, A., Recurrence relations for rational cubic methods II: The Chebyshev method, Computing 45 (1990), 355-367.

[6] Chen, D., On the convergence of the Halley method for nonlinear equation of one variable, Tamkang J. Math. 24, 4 (1993), 461-467.

[7] Chen, D., Argyros, I.K., Qian, Q.S., A local convergence theorem for the super-Halley method in a Banach space, Appl. Math. Lett. 7, 5 (1994), 49-52.

[8] Gander, W., On Halley's iteration method, Amer. Math. Monthly 92 (1985), 131-134.

[9] Gutiérrez, J.M., Hernández, M.A., Convex acceleration of Newton's method, submitted.

[10] Hernández, M.A., An acceleration procedure of the Whittaker method by means of convexity, Zb. Rad. Prirod.-Mat. Fak. Ser. Mat. 20, 1 (1990), 27-38.

[11] Hernández, M.A., A note on Halley's method, Numer. Math. 59, 3 (1991), 273-276.

[12] Hernández, M.A., Newton-Raphson's method and convexity, Zb. Rad. Prirod.-Mat. Fak. Ser. Mat. 22, 1 (1992), 159-166.

[13] Hernández, M.A., Salanova, M.A., A family of Newton type iterative processes, Intern. J. Computer Math. 51 (1994), 205-214.

[14] Safiev, R.A., The method of tangent hyperbolas, Sov. Math. Dokl. 4 (1963), 482-485.

[15] Traub, J.F., On Newton-Raphson iteration, Amer. Math. Monthly 74 (1967), 996-998.

[16] Yamamoto, T., On the method of tangent hyperbolas in Banach spaces, J. Comput. Appl. Math. 21 (1988), 75-86.

Received by the editors January 10, 1996.
