
1.4 Newton's Method
Newton's method is a general procedure that can be applied in many diverse situations. When
specialized to the problem of locating a zero of a real-valued function of a real variable, it is
often called Newton-Raphson iteration. In general, Newton's method is faster than the
bisection method and Fixed-Point iteration, since its convergence is quadratic rather than
linear. Once the quadratic convergence becomes effective, that is, once the values of the Newton
sequence are sufficiently close to the root, the convergence is so rapid that only a few more values are
needed. Unfortunately, the method is not guaranteed to converge in all cases. For this reason, Newton's
method is often combined with another, slower method in a hybrid method that is
globally convergent.
Suppose that we have a function $f$ whose zeros are to be determined numerically. Let $r$ be a
zero of $f(x)$ and let $x$ be an approximation to $r$. If $f''$ exists and is continuous, then by
Taylor's Theorem,

$$0 = f(r) = f(x + h) = f(x) + h f'(x) + O(h^2),$$

where $h = r - x$. If $h$ is small (that is, $x$ is near $r$), then it is reasonable to ignore the
$O(h^2)$ term and solve the remaining equation for $h$. If we do this, the result is
$h = -f(x)/f'(x)$. If $x$ is an approximation to $r$, then $x - f(x)/f'(x)$ should be a better
approximation to $r$.
Newton's method begins with an estimate $x_0$ of $r$ and then defines inductively

$$x_{n+1} = x_n - \frac{f(x_n)}{f'(x_n)} \qquad (n \ge 0).$$

Newton's Algorithm

$$x_0 = \text{initial guess}$$
$$x_{n+1} = x_n - \frac{f(x_n)}{f'(x_n)} \qquad \text{for } n = 0, 1, 2, \ldots$$
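To make the algorithm concrete, here is a minimal Python sketch of the iteration above. The names (newton, f, fprime) and the stopping rule based on a residual tolerance and an iteration cap are illustrative choices, not part of the method's statement.

```python
def newton(f, fprime, x0, tol=1e-12, max_iter=50):
    """Approximate a zero of f using x_{n+1} = x_n - f(x_n)/f'(x_n)."""
    x = x0
    for _ in range(max_iter):
        fx = f(x)
        if abs(fx) < tol:        # residual is small enough; accept x
            return x
        x = x - fx / fprime(x)   # the Newton step
    return x                     # last iterate if the tolerance was not met
```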

Before examining the theoretical basis for Newton's method, let's give a graphical
interpretation of it. From the description already given, we can say that Newton's method
involves linearizing the function: $f$ is replaced by a linear function. The usual way
of doing this is to replace $f$ by the first two terms in its Taylor series. Thus, if

$$f(x) = f(c) + f'(c)(x - c) + \frac{1}{2!} f''(c)(x - c)^2 + \cdots,$$

then the linearization (at $c$) produces the linear function

$$l(x) = f(c) + f'(c)(x - c). \tag{1}$$

Note that $l$ is a good approximation to $f$ in the vicinity of $c$; in fact we have
$l(c) = f(c)$ and $l'(c) = f'(c)$. Thus, the linear function has the same value and the same
slope as $f$ at the point $c$. So in Newton's method we are constructing the tangent line to $f$ at a
point near $r$, and finding where that tangent line intersects the $x$-axis.

Example: Find the Newton iteration formula for $x^3 + x - 1 = 0$.
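For this equation $f'(x) = 3x^2 + 1$, so the iteration reads $x_{n+1} = x_n - (x_n^3 + x_n - 1)/(3x_n^2 + 1)$. A short run with the sketch given after the algorithm above (the starting guess $x_0 = 1$ is chosen only for illustration):

```python
root = newton(lambda x: x**3 + x - 1, lambda x: 3 * x**2 + 1, x0=1.0)
print(root)  # about 0.6823278, the real root of x^3 + x - 1
```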

Error Analysis (Quadratic convergence of Newton’s method)

Now we analyze the errors in Newton's method. By errors, we mean the quantities
$e_n = x_n - r$. (We are not considering round-off errors.)
Let us assume that $f''$ is continuous and that $r$ is a simple zero of $f$, so that
$f(r) = 0 \ne f'(r)$.

From the definition of the Newton iteration, we have

$$e_{n+1} = x_{n+1} - r = x_n - \frac{f(x_n)}{f'(x_n)} - r = e_n - \frac{f(x_n)}{f'(x_n)} = \frac{e_n f'(x_n) - f(x_n)}{f'(x_n)} = \frac{e_n f'(x_n) - \big(f(x_n) - f(r)\big)}{f'(x_n)}. \tag{2}$$
By Taylor's Theorem, we have

$$f(x_n) - f(r) = f'(x_n)(x_n - r) - \frac{1}{2} e_n^2 f''(\xi_n),$$

where $\xi_n$ lies between $x_n$ and $r$. Substituting this into Equation (2) gives

$$e_{n+1} = \frac{1}{2}\,\frac{f''(\xi_n)}{f'(x_n)}\, e_n^2 \approx \frac{1}{2}\,\frac{f''(r)}{f'(r)}\, e_n^2 = c\, e_n^2. \tag{3}$$
Suppose that $c \approx 1$ and $|e_n| \approx 10^{-4}$. Then by Equation (3), we have
$|e_{n+1}| \approx 10^{-8}$ and $|e_{n+2}| \approx 10^{-16}$. Thus only a few additional
iterations are needed to reach the limit of machine precision.


This equation tells us that $e_{n+1}$ is roughly a constant times $e_n^2$. This desirable state of
affairs is called quadratic convergence. It accounts for the apparent doubling of precision with
each iteration of Newton's method in many applications.

Definition (Quadratically convergent): An iterative method is quadratically convergent if

$$M = \lim_{n \to \infty} \frac{|e_{n+1}|}{e_n^2} < \infty.$$
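As a quick numerical illustration of this definition (a sketch, reusing $f(x) = x^3 + x - 1$ from the earlier example), the ratios $|e_{n+1}|/e_n^2$ computed below should settle near the constant $\tfrac{1}{2} f''(r)/f'(r)$ from Equation (3); the reference root is obtained simply by iterating a few extra times.

```python
f = lambda x: x**3 + x - 1
fp = lambda x: 3 * x**2 + 1

# High-accuracy reference root: iterate until the step is negligible.
r = 1.0
for _ in range(60):
    r = r - f(r) / fp(r)

# Track the errors e_n = x_n - r and print the ratios |e_{n+1}| / e_n^2.
x = 1.0
errors = [x - r]
for _ in range(4):
    x = x - f(x) / fp(x)
    errors.append(x - r)

for e_prev, e_next in zip(errors, errors[1:]):
    print(abs(e_next) / e_prev**2)   # approaches (1/2) f''(r) / f'(r)
```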

We have yet to establish the convergence of the method. By Equation (3), the idea of the
proof is simple: if $|e_n|$ is small and if $\tfrac{1}{2}\left|f''(\xi_n)/f'(x_n)\right|$ is not too large, then $|e_{n+1}|$ will be
smaller than $|e_n|$. Define a quantity $c(\delta)$ dependent on $\delta$ by

$$c(\delta) = \frac{1}{2}\,\frac{\max_{|x - r| \le \delta} |f''(x)|}{\min_{|x - r| \le \delta} |f'(x)|} \qquad (\delta > 0). \tag{4}$$

We select $\delta$ small enough to be sure that the denominator in Equation (4) is positive, and
then, if necessary, we decrease $\delta$ so that $\delta\, c(\delta) < 1$. This is possible because, as $\delta$
converges to $0$, $c(\delta)$ converges to $\tfrac{1}{2}\left|f''(r)/f'(r)\right|$, and so $\delta\, c(\delta)$ converges to $0$. Having
fixed $\delta$, set $\rho = \delta\, c(\delta)$. Suppose that we start the Newton iteration with a point $x_0$
satisfying $|x_0 - r| \le \delta$. Then $|e_0| \le \delta$ and $|\xi_0 - r| \le \delta$. Hence, by the definition of $c(\delta)$,
we have

$$\frac{1}{2}\left|\frac{f''(\xi_0)}{f'(x_0)}\right| \le c(\delta).$$

Therefore, Equation (3) yields

$$|x_1 - r| = |e_1| \le e_0^2\, c(\delta) = |e_0|\,|e_0|\, c(\delta) \le |e_0|\, \delta\, c(\delta) = \rho\, |e_0| \le |e_0| \le \delta.$$

This shows that the next point, $x_1$, also lies within $\delta$ units of $r$. Hence, the argument can
be repeated, with the results

$$|e_1| \le \rho\,|e_0|, \qquad |e_2| \le \rho\,|e_1| \le \rho^2\,|e_0|, \qquad |e_3| \le \rho\,|e_2| \le \rho^3\,|e_0|.$$

In general, we have

$$|e_n| \le \rho^n\,|e_0|.$$

Since $0 \le \rho < 1$, we have $\lim_{n \to \infty} \rho^n = 0$, and so $\lim_{n \to \infty} |e_n| = 0$.

Summarizing, we obtain the following theorem on Newton’s method.

Theorem (Theorem on Newton’s method)


Let $f$ be twice continuously differentiable and $f(r) = 0$. If $f'(r) \ne 0$, then Newton's
method is locally and quadratically convergent to $r$ and satisfies

$$|x_{n+1} - r| \le c\,|x_n - r|^2 \qquad (n \ge 0).$$

In some situations Newton’s method can be guaranteed to converge from an arbitrary starting
point. We give one such theorem as a sample.

Theorem (Theorem on Newton’s method for a convex function)


If $f$ belongs to $C^2(\mathbb{R})$, is increasing, is convex, and has a zero, then the zero is unique, and
Newton's method will converge to it from any starting point.

Proof. Recall that a function $f$ is convex if $f''(x) > 0$ for all $x$. Since $f$ is increasing,
$f' > 0$ on $\mathbb{R}$. By Equation (3), $e_{n+1} > 0$. Thus, $x_n > r$ for $n \ge 1$. Since $f$ is increasing,
$f(x_n) > f(r) = 0$. Hence, by Equation (2), $e_{n+1} < e_n$. Thus, the sequences $\{e_n\}$ and $\{x_n\}$ are
decreasing and bounded below (by $0$ and $r$, respectively). Therefore, the limits $e^* = \lim_{n \to \infty} e_n$
and $x^* = \lim_{n \to \infty} x_n$ exist. Letting $n \to \infty$ in Equation (2), we obtain $e^* = e^* - f(x^*)/f'(x^*)$,
whence $f(x^*) = 0$ and $x^* = r$.

Example: Find an efficient method for computing square roots using Newton’s method.
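One standard answer (a sketch; the details below are an illustration rather than something stated in the text): apply Newton's method to $f(x) = x^2 - a$ for $a > 0$. Since $f'(x) = 2x$, the iteration becomes $x_{n+1} = x_n - (x_n^2 - a)/(2x_n) = \tfrac{1}{2}(x_n + a/x_n)$, which costs only one division, one addition, and one halving per step.

```python
def newton_sqrt(a, x0=None, tol=1e-15, max_iter=60):
    """Approximate sqrt(a) for a > 0 via x_{n+1} = (x_n + a/x_n) / 2."""
    x = a if x0 is None else x0          # any positive starting guess will do
    for _ in range(max_iter):
        x_new = 0.5 * (x + a / x)
        if abs(x_new - x) <= tol * abs(x_new):
            return x_new
        x = x_new
    return x

print(newton_sqrt(2.0))  # roughly 1.41421356237
```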

The theorem above does not say that Newton's method always converges quadratically. See the
following example.

Example: Use Newton's method to find a root of $f(x) = x^2$.
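Here the zero $r = 0$ is a double root, so $f'(r) = 0$ and the hypothesis of the theorem above fails. A short worked computation (not spelled out in the text) makes the behavior visible:

$$x_{n+1} = x_n - \frac{x_n^2}{2x_n} = \frac{x_n}{2}, \qquad\text{hence}\qquad e_{n+1} = \frac{e_n}{2}.$$

The error is merely halved at each step, so the convergence is linear with rate $\tfrac{1}{2}$ rather than quadratic.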

Theorem (Linear convergence of Newton’s Method)


Assume that the $(m+1)$-times continuously differentiable function $f$ on $[a, b]$ has a
root of multiplicity $m$ at $r$. Then Newton's Method is locally and linearly convergent to $r$.

Newton's Method, like the Fixed-Point Method, may not converge to a root.

Example: Use Newton's method to find a root of $f(x) = -x^4 + 3x^2 + 2$ with $x_0 = 1$.
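Here $f'(x) = -4x^3 + 6x$, and starting from $x_0 = 1$ the iterates can be checked by hand: $f(1) = 4$ and $f'(1) = 2$, so $x_1 = 1 - 4/2 = -1$; then $f(-1) = 4$ and $f'(-1) = -2$, so $x_2 = -1 - 4/(-2) = 1$. The iteration cycles between $1$ and $-1$ and never approaches a zero of $f$, even though real zeros exist (near $\pm 1.89$). A small sketch that prints the cycle (the names are illustrative):

```python
f = lambda x: -x**4 + 3 * x**2 + 2
fp = lambda x: -4 * x**3 + 6 * x

x = 1.0
for n in range(6):
    print(n, x)            # prints 1.0, -1.0, 1.0, ... : a 2-cycle
    x = x - f(x) / fp(x)
```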
