
# ECM6 Computational Methods


Lecture 6: Newton's Method
Brian G. Higgins
Department of Chemical Engineering & Materials Science
University of California, Davis
April 2014, Hanoi, Vietnam

ECM6Lecture6Vietnam_2014.nb

## Topics for Lecture 6

In this lecture we give a brief overview of the following topics:

1. Overview of Newton's method
2. Accuracy estimates of iterative methods
3. Examples
4. Convergence tests
5. Example of convergence tests

Evaluate the following expression to avoid symbol clashes:

Remove[f, g, f1, g1, sol1, sol2, x0, x1, Dx, g3, iterates]


## Stability of Fixed Points

Suppose we are given an iteration function

x_{n+1} = f(x_n)

and we have a fixed point p such that p = f(p). To assess the local stability of the fixed point, we linearize the map about the fixed point. Let ε_n denote a small perturbation to the fixed point:

x_n = p + ε_n,  x_{n+1} = p + ε_{n+1}

Hence the map becomes

p + ε_{n+1} = f(p + ε_n) = f(p) + (∂f/∂x)|_p (x_n - p) + ⋯ = f(p) + (∂f/∂x)|_p ε_n + ⋯

Simplifying gives

ε_{n+1} = (∂f/∂x)|_p ε_n    (1)

To find a solution to the above expression we look for solutions of the form ε_n = q^n u. Substituting gives

q^{n+1} u = (∂f/∂x)|_p q^n u  ⟹  q = (∂f/∂x)|_p

Hence ε_n = C_1 [(∂f/∂x)|_p]^n, and ε_n → 0 as n → ∞ if

-1 < (∂f/∂x)|_p < 1

and ε_n → ∞ as n → ∞ if

|(∂f/∂x)|_p| > 1

To summarize:

(i) if |(∂f/∂x)|_p| < 1, then the fixed point p is attracting;

(ii) if |(∂f/∂x)|_p| > 1, then the fixed point p is repelling;

(iii) if |(∂f/∂x)|_p| = 1, then the fixed point p is neither attracting nor repelling.

We can also define a basin of attraction for a fixed point: Suppose p is a fixed point, then the basin of attraction of p consists of all x such that f^[n](x) → p as n increases without bound.
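The stability criterion above is easy to check numerically. Here is a minimal Python sketch (our own illustration, not part of the lecture's Mathematica code) that estimates f'(p) by a central finite difference and compares its magnitude with 1; the example map cos(x) and its fixed point are illustrative choices.

```python
import math

def derivative(f, x, h=1e-6):
    """Central finite-difference estimate of f'(x)."""
    return (f(x + h) - f(x - h)) / (2 * h)

def is_attracting(f, p):
    """A fixed point p = f(p) is attracting when |f'(p)| < 1."""
    assert abs(f(p) - p) < 1e-6, "p is not a fixed point"
    return abs(derivative(f, p)) < 1

# Example: f(x) = cos(x) has a fixed point near 0.739085 with |f'(p)| = |sin p| < 1.
p = 0.7390851332151607  # solves cos(p) = p
print(is_attracting(math.cos, p))  # True
```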

Example 4:
Consider the 1-D map

x_{n+1} = f(x_n) = μ x_n (1 - x_n)

First we need to find the fixed points of the map, i.e.

p = f(p) = μ p (1 - p)

Solving for p we find

(i) p = 0

(ii) 1 = μ - μ p  ⟹  p = (μ - 1)/μ

The Jacobian in this case is

(∂f/∂x)|_p = μ (1 - 2p)

If p = 0, then

(∂f/∂x)|_{p=0} = μ (1 - 2p)|_{p=0} = μ

Thus the fixed point is stable if -1 < μ < 1. For the fixed point p = (μ - 1)/μ, the Jacobian becomes

(∂f/∂x)|_{p=(μ-1)/μ} = μ (1 - 2p)|_{p=(μ-1)/μ} = 2 - μ

Hence p = (μ - 1)/μ is attracting if |2 - μ| < 1, that is if 1 < μ < 3, and is repelling if μ > 3.
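As a quick numerical check of this result, here is a small Python sketch (our addition; the value μ = 2.5 and the starting point 0.1 are illustrative) that iterates the logistic map and confirms the orbit settles on p = (μ - 1)/μ when 1 < μ < 3.

```python
def logistic_orbit(mu, x0, n):
    """Iterate x_{n+1} = mu*x*(1-x) n times starting from x0."""
    x = x0
    for _ in range(n):
        x = mu * x * (1 - x)
    return x

mu = 2.5                  # inside the attracting range 1 < mu < 3
p = (mu - 1) / mu         # predicted fixed point, p = 0.6
x_final = logistic_orbit(mu, 0.1, 100)
print(x_final)            # the orbit settles on p, i.e. ~0.6
```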

Example 1
In the first example we consider

f(x) = e^x - 3x² = 0

which we transform to

x = g(x) = ln(3) + ln(x²)

First let us plot the function f(x) for a range of values of x

Plot[Exp[x] - 3 x^2, {x, -3, 5}, Frame -> True, FrameLabel -> {"x", "f(x)"}, PlotStyle -> Blue]
[Plot of f(x) = e^x - 3x² for -3 ≤ x ≤ 5]
It follows from the plot that f(x) = 0 has 1 negative and 2 positive roots. For reference the roots are

p1 = -0.458962
p2 = 0.910008
p3 = 3.73308

In the next plot we show the LHS (= x) and the RHS (= ln(3) + ln(x²)) for a range of x values. (Note that in Mathematica ln(x) is written Log[x].)

Plot[{x, Log[3] + Log[x^2]}, {x, -3, 5}, Frame -> True,
 FrameLabel -> {"x", "x,g(x)"}, PlotStyle -> {Red, Blue}, PlotRange -> {-10, 5}]
[Plot of x and g(x) for -3 ≤ x ≤ 5]

As before we see that there are 3 roots (these are fixed points of g(x) such that p = g(p)). Let us use NestList to generate the sequence for 30 iterations starting with x0 = 1. Here is the result

g[x_] := Log[3] + Log[x^2]
NestList[g, 1, 30] // N

{1., 1.09861, 1.28671, 1.60279, 2.0421, 2.52657, 2.95234, 3.26381,
 3.4644, 3.58369, 3.6514, 3.68883, 3.70923, 3.72026, 3.7262, 3.72939,
 3.7311, 3.73202, 3.73251, 3.73277, 3.73292, 3.73299, 3.73303, 3.73305,
 3.73307, 3.73307, 3.73308, 3.73308, 3.73308, 3.73308, 3.73308}

Note that we have converged to a fixed point p3 = 3.73308..., which is one of the desired roots of f(x) = 0. We can readily test other initial values of x0 to find that the iteration always converges to the same fixed point
NestList[g, -2, 30] // N

{-2., 2.48491, 2.91908, 3.24115, 3.45047, 3.57563, 3.6469,
 3.68637, 3.70789, 3.71954, 3.72581, 3.72918, 3.73099, 3.73196, 3.73248,
 3.73276, 3.73291, 3.73299, 3.73303, 3.73305, 3.73306, 3.73307, 3.73307,
 3.73308, 3.73308, 3.73308, 3.73308, 3.73308, 3.73308, 3.73308, 3.73308}

Consequently, our iteration method does not allow us to determine the other 2 roots. Here is a graphical representation of the iterations, starting with x = 8:

[Plot of the iterates of g(x) starting from x = 8]

Let us now check the local stability of the fixed points. For this calculation we need to compute |(∂g/∂x)|_p|, which is given by

Abs[g'[x]]

2/Abs[x]

Let us evaluate the derivative at the fixed points {p1 = -0.458962, p2 = 0.910008, p3 = 3.73308}

Map[Abs[g'[#]] &, {-0.458962, 0.910008, 3.73308}]

{4.35766, 2.19778, 0.535751}

It follows that only p3 = 3.73308 is a stable fixed point, which is why our iteration scheme converges to this value.
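The same experiment is easy to reproduce outside Mathematica. Here is a Python sketch (our addition; math.log plays the role of Log): iterating g(x) = ln 3 + ln(x²) converges to the stable fixed point near 3.73308 regardless of the (nonzero) starting value, just as NestList showed.

```python
import math

def g(x):
    """The iteration function g(x) = ln(3) + ln(x^2)."""
    return math.log(3) + math.log(x * x)

results = []
for x0 in (1.0, -2.0, 8.0):
    x = x0
    for _ in range(60):   # plenty of iterations: |g'(p)| ~ 0.536 < 1
        x = g(x)
    results.append(round(x, 5))
print(results)  # [3.73308, 3.73308, 3.73308]
```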

Example 2
In this example we consider the following equation we wish to solve

f(x) = 0,  f(x) = e^{-x} - 3x

and we construct the following iteration function

x = g1(x) = (1/3) e^{-x}

Note that

x = (1/3) e^{-x}  ⟺  f(x) = 0

Here is a plot of the LHS (= x) and the RHS (= e^{-x}/3)

g1[x_] := Exp[-x]/3
Plot[{x, g1[x]}, {x, 0, 1}, Frame -> True,
 FrameLabel -> {"x", "x,g1(x)"}, PlotStyle -> {Red, Blue}]
[Plot of x and g1(x) for 0 ≤ x ≤ 1]
We see that there is a single positive root. Let us generate a sequence using x0 = 0.2, which is approximately the value of the root.

NestList[g1, 0.2, 20] // N

{0.2, 0.27291, 0.25372, 0.258636, 0.257368, 0.257695, 0.25761,
 0.257632, 0.257627, 0.257628, 0.257628, 0.257628, 0.257628, 0.257628,
 0.257628, 0.257628, 0.257628, 0.257628, 0.257628, 0.257628, 0.257628}

It is evident the sequence converges to the desired fixed point, and a quick check on stability confirms that the fixed point is stable, viz., |(∂g1/∂x)|_p| < 1

Abs[g1'[0.257628]]

0.257628

Now suppose we construct our iteration function as

x = g2(x) = e^{-x} - 2x

Note that

x = e^{-x} - 2x  ⟺  f(x) = 0

g2[x_] := Exp[-x] - 2 x
Plot[{x, g2[x]}, {x, 0, 1}, Frame -> True,
 FrameLabel -> {"x", "x,g2(x)"}, PlotStyle -> {Red, Blue}]
[Plot of x and g2(x) for 0 ≤ x ≤ 1]

It is evident from the plot that the function g2(x) has the same fixed point

NestList[g2, 0.2, 7] // N

{0.2, 0.418731, -0.17958, 1.55588, -2.90075, 23.9892, -47.9784, 6.86679×10^20}

It is clear that after 7 iterations our sequence is diverging. A check on the stability of the fixed point confirms that |(∂g2/∂x)|_p| > 1

Abs[g2'[0.257628]]

2.77288

Summary
Thus when we construct an iteration function, we must ensure that the fixed point defined by the iteration function is stable. If not, our iteration scheme will fail. This assumes of course that we know the value of the fixed point. In general this is not the case (if we did, there would be no reason to use an iteration method!). Thus in practice we can estimate the root and then use the estimate to check stability. One way of estimating the root is by plotting the function g(x) over a range of values of x.


## Background on Newton's Method

We consider a function f(x) and let p be a root of f(x) such that

f(p) = 0

Now suppose that x_k is an approximation of the solution to f(x) = 0. We Taylor expand the function f(x) about the point x_k to obtain the following linear equation

f(x) = f(x_k) + (∂f/∂x)|_{x_k} (x - x_k) + ⋯

We next set x = p, so that Δx ≡ x - x_k = p - x_k represents the deviation from the root of f(x). Since f(p) = 0, it then follows that

f(x_k) + (∂f/∂x)|_{x_k} (p - x_k) ≈ 0

We can write this equation as

p ≈ g(x_k) ≡ x_k - f(x_k) / (∂f/∂x)|_{x_k}    (2)

The RHS of Eq. (2) is our approximation to the root. Our task then is to find an iterate of g(x) such that the RHS of (2) is equal to p. More generally we can write

x_{k+1} = g(x_k) = x_k - f(x_k) / (∂f/∂x)|_{x_k}    (3)
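The iteration in Eq. (3) takes only a few lines to implement. Below is a minimal Python sketch (our own; the function names, tolerance, and test function are illustrative), applied to the earlier example f(x) = e^x - 3x² whose roots are near -0.458962 and 3.73308.

```python
import math

def newton(f, fprime, x0, tol=1e-12, max_iter=50):
    """Iterate x_{k+1} = x_k - f(x_k)/f'(x_k) until |x_{k+1} - x_k| < tol."""
    x = x0
    for _ in range(max_iter):
        x_new = x - f(x) / fprime(x)
        if abs(x_new - x) < tol:
            return x_new
        x = x_new
    return x

f = lambda x: math.exp(x) - 3 * x * x
fp = lambda x: math.exp(x) - 6 * x
print(newton(f, fp, -0.5))   # -0.458962...
print(newton(f, fp, 3.1))    #  3.73308...
```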

## Stability of Newton's Method

Equation (2) shows that the root of f(x) is a fixed point of g(x). We can also check on the stability of the fixed point represented by the iteration map. The fixed point iteration will be stable if

|∂g/∂x| < 1

Taking

g(x) = x - f(x) / (∂f/∂x)

we find

∂g/∂x = 1 - [(∂f/∂x)² - f ∂²f/∂x²] / (∂f/∂x)² = f (∂²f/∂x²) / (∂f/∂x)²

Thus at the fixed point we require

| f (∂²f/∂x²) / (∂f/∂x)² | < 1

for the fixed point iteration to converge. Recall that at the fixed point f(p) = 0. Hence the convergence requirement is satisfied unless (∂f/∂x)|_p = 0.

## Simple Example using Newton's Method

Consider a previous example where we needed to determine the roots of

f(x) = e^x - 3x² = 0

A plot of the function is shown here

Plot[Exp[x] - 3 x^2, {x, -3, 5}, Frame -> True, FrameLabel -> {"x", "f(x)"}, PlotStyle -> Blue]
[Plot of f(x) = e^x - 3x² for -3 ≤ x ≤ 5]

At each root, (∂f/∂x)|_p ≠ 0. Thus our fixed point iteration should converge. Let us test it out using the following functions

f[x_] := Exp[x] - 3 x^2
g[x_] := x - f[x]/f'[x]

For the root near -0.5 we get

NestList[g, -0.5, 10]

{-0.5, -0.46022, -0.458964, -0.458962, -0.458962,
 -0.458962, -0.458962, -0.458962, -0.458962, -0.458962, -0.458962}

For the root near x = 1 we get

NestList[g, 1.2, 10]

{1.2, 0.94229, 0.910592, 0.910008, 0.910008,
 0.910008, 0.910008, 0.910008, 0.910008, 0.910008, 0.910008}

And for the root near x = 3 we get

NestList[g, 3.1, 10]

{3.1, 4.94328, 4.33804, 3.94022, 3.76555,
 3.73402, 3.73308, 3.73308, 3.73308, 3.73308, 3.73308}

This iteration method for finding roots is called Newton's method. A potential drawback of Newton's method is that it requires a calculation of the derivative of the function. When applied to large sets of equations this can be a time-consuming calculation.


## Accuracy Estimates of Iterative Algorithms

Suppose we have an iterative algorithm trying to find a root of f(x) such that f(p) = 0. If x_k is the k-th iterate of the algorithm then the error of x_k is

e_k = p - x_k    (4)

We can think of e_k as the amount that must be added to x_k to get the value of p. An algorithm converges if successive e_k become smaller, i.e.

|e_{k-1}| > |e_k| > |e_{k+1}|

That is

|e_k| → 0, as k → ∞

Since we do not in general know p at the outset, the error indicator defined by (4) is not terribly useful. Instead we use information about the iterates to judge accuracy. Since we know

x_0, x_1, x_2, …, x_k, x_{k+1}, …

we therefore form the increments from the iterate values

Δx_0 = x_1 - x_0,  Δx_1 = x_2 - x_1,  Δx_2 = x_3 - x_2, …,  Δx_k = x_{k+1} - x_k, …

Now if the convergence is rapid we would expect

Δx_k = x_{k+1} - x_k ≈ p - x_k = e_k,  if |e_{k+1}| << |e_k|

Thus a judgement of when to stop the iteration is when Δx_k is small in some suitably defined way. We note that by its definition Δx_k approximates the error of x_k and not of x_{k+1}.

## Estimating the Accuracy

Suppose we want our iterate x_k to approximate p to N decimal places; then we can require

|e_k| < 0.5 × 10^{-N}

Further, Δx_k / x_k approximates the relative error of x_k. So in summary we can say

Absolute Difference Test:  |Δx_k| < C × 10^{-N}, then stop iteration

Relative Difference Test:  |Δx_k| < C × 10^{-N} |x_k|, then stop iteration

where C is a suitable constant, usually taken as 1.0, and N is normally less than the machine precision, usually 16 digits. Note that the absolute difference test depends on the size of p. If p is say 60000, then on a machine with machine precision of 16 digits, we can expect at most 11 digits of accuracy after the decimal point. Recall that Mathematica's Accuracy function gives you this information

{Accuracy[60 000.0], Accuracy[0.06]}

{11.1764, 17.1764}

As a rule then it makes sense to use the relative difference test, rather than the absolute difference test, to stop an iteration algorithm.
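The two stopping tests can be sketched in Python as follows (our addition, with C = 1; the function and variable names are ours). Both tests are applied to the earlier iteration function g(x) = ln 3 + ln(x²), whose stable fixed point is near 3.73308.

```python
import math

def iterate_until(g, x0, N=6, relative=True, max_iter=100):
    """Iterate x -> g(x); stop when |dx| < 10^-N (absolute)
    or |dx| < 10^-N * |x| (relative). Returns (x, iterations used)."""
    x = x0
    for k in range(max_iter):
        x_new = g(x)
        dx = abs(x_new - x)
        tol = 10.0 ** (-N) * (abs(x_new) if relative else 1.0)
        x = x_new
        if dx < tol:
            return x, k + 1
    return x, max_iter

g = lambda x: math.log(3) + math.log(x * x)   # the earlier iteration function
root_rel, n_rel = iterate_until(g, 1.0, N=6, relative=True)
root_abs, n_abs = iterate_until(g, 1.0, N=6, relative=False)
print(round(root_rel, 5), round(root_abs, 5))  # both near 3.73308
```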


Example
Consider the following function

f1(x) = 150047.623 e^{0.005x} - 800.135 e^{0.005x} x + e^{0.005x} x²

We want to find the roots of this function using Newton's method. Let us define the function

Plot of Function

f1[x_] := 150047.623 Exp[0.005 x] - 800.135 Exp[0.005 x] x + Exp[0.005 x] x^2

Here is a plot of the function

Plot[f1[x], {x, 0, 600}, Frame -> True, FrameLabel -> {"x", "f(x)"}, PlotStyle -> Blue]
[Plot of f1(x) for 0 ≤ x ≤ 600]

We see that the roots are near x = 300 and x = 500. As before we can define Newton's method using the following fixed point iteration algorithm based upon

x_{k+1} = x_k - f1(x_k) / f1'(x_k)

Newton's Method

g1[x_] := x - f1[x]/f1'[x]

Let us try out this iteration using NestList with x = 410, and 10 iterations

sol1 = NestList[g1, 410, 10]

{410, 76.1101, 624.259, 562.295, 522.121,
 503.91, 500.238, 500.1, 500.099, 500.099, 500.099}

We converge to the root at x = 500.099. We can check the accuracy of our result by evaluating the Accuracy of the last few iterates. First we can determine the number of digits after the decimal place

Accuracy[sol1[[11]]]

13.2555

So we have 13 digits of accuracy after the decimal point. The actual value of the root stored in the computer is

sol2 = sol1[[{9, 10, 11}]]; sol2 // InputForm

{500.09940269270874, 500.09940269234096, 500.09940269234113}

Next we can evaluate the value of the function at the 9th, 10th and 11th iterates

Map[f1[x] /. x -> # &, sol2]

{8.96864×10^-7, -4.65661×10^-10, 0.}

## Iterating using a For Loop

It is clear that we are converging to the solution with increasing accuracy. The trouble with using NestList is that there is no simple way to apply a test at each step of the iteration and stop the evaluation at an arbitrary point. Applying a test within a For loop is relatively straightforward. Here we apply the absolute test; that is, the iteration stops when |Δx_k| < 0.001.

For[i = 1; x0 = 410; Dx = 0.1, Abs[Dx] > 0.001, i++,
 x1 = N[x0 - f1[x0]/f1'[x0]]; Print[x1]; Dx = x1 - x0; x0 = x1]

76.1101
624.259
562.295
522.121
503.91
500.238
500.1
500.099

Here is the value of the last iterate

x0 // InputForm

500.09940269270874

We can test how closely the equation f(x) = 0 is satisfied by evaluating

f1[x] /. x -> x0

8.96864×10^-7

Clearly we can readily increase the accuracy by decreasing the tolerance on Δx_k. If we want to use the relative error test we proceed as follows

For[i = 1; x0 = 410; Dx = 1, Abs[Dx/x0] > 0.001, i++,
 x1 = N[x0 - f1[x0]/f1'[x0]]; Print[x1]; Dx = x1 - x0; x0 = x1]


76.1101
624.259
562.295
522.121
503.91
500.238
500.1

f1[x] /. x -> x0
0.467589

Decreasing the relative error tolerance gives a satisfactory result

For[i = 1; x0 = 410; Dx = 1, Abs[Dx/x0] > 0.0001, i++,
 x1 = N[x0 - f1[x0]/f1'[x0]]; Print[x1]; Dx = x1 - x0; x0 = x1]

76.1101
624.259
562.295
522.121
503.91
500.238
500.1
500.099

f1[x] /. x -> x0

8.96864×10^-7


## Convergence Rates of Iterative Algorithms

Suppose that we have an iterative algorithm such that a constant C_L exists such that

|Δx_{k-1}| ≈ C_L |Δx_{k-2}|,  |Δx_k| ≈ C_L |Δx_{k-1}|,  |Δx_{k+1}| ≈ C_L |Δx_k|, …

Then C_L is called the linear convergence constant, and the algorithm that generates the x_k is called linearly convergent if C_L < 1 and linearly divergent if C_L > 1.

On the other hand if we have a constant C_Q such that

|Δx_{k-1}| ≈ C_Q |Δx_{k-2}|²,  |Δx_k| ≈ C_Q |Δx_{k-1}|²,  |Δx_{k+1}| ≈ C_Q |Δx_k|², …

then C_Q is called the quadratic convergence constant, and the algorithm generating the iterates is called quadratically convergent. If an algorithm converges quadratically we view it as exhibiting rapid convergence, while if we have linear convergence we view the algorithm as having slow convergence. Let us test these ideas on a few algorithms.
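Before turning to the examples, here is a small Python sketch of the idea (our own toy; the map x → x/2 + 1, with fixed point 2, is an illustrative choice): C_L can be estimated directly from the successive increments Δx_k.

```python
def increments(g, x0, n):
    """Return the increments |x_{k+1} - x_k| for n iterations of g."""
    xs = [x0]
    for _ in range(n):
        xs.append(g(xs[-1]))
    return [abs(b - a) for a, b in zip(xs, xs[1:])]

# Map x -> x/2 + 1 has fixed point 2 and g'(2) = 1/2, so we expect C_L = 0.5.
dx = increments(lambda x: 0.5 * x + 1.0, 0.0, 12)
ratios = [b / a for a, b in zip(dx, dx[1:])]
print(ratios[-1])  # 0.5, i.e. linear convergence with C_L = 0.5
```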

Example 1
Consider the following recursive algorithm studied earlier

x_{n+1} = (1/b) [(b - 1) x_n + a / x_n]

Recall this formula is used to compute the square root of a real number a.
We define the function

g3[b_, x_] := (1/b) ((b - 1) x + 78.8/x)

With this in mind we use NestList with a pure function taking the value b = 1.5 and a = 78.8

iterates = NestList[g3[1.5, #] &, 9, 20]

{9, 8.83704, 8.89036, 8.87248, 8.87842, 8.87644, 8.8771,
 8.87688, 8.87695, 8.87693, 8.87694, 8.87694, 8.87694, 8.87694,
 8.87694, 8.87694, 8.87694, 8.87694, 8.87694, 8.87694, 8.87694}
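As an aside, the linear constant of this square-root map can be predicted: differentiating gives g'(x) = [(b - 1) - a/x²]/b, which at the fixed point x = √a evaluates to (b - 2)/b, so for b = 1.5 we expect C_L = |b - 2|/b = 1/3. The following Python sketch (our addition; the closed form above is our observation, not stated in the lecture) reproduces both the iterates and this ratio.

```python
a, b = 78.8, 1.5
g3 = lambda x: ((b - 1) * x + a / x) / b   # the square-root map with b = 1.5

x = 9.0
dx = []           # increments |x_{n+1} - x_n|
for _ in range(20):
    x_new = g3(x)
    dx.append(abs(x_new - x))
    x = x_new

ratios = [t / s for s, t in zip(dx, dx[1:])]
print(round(x, 5), round(ratios[-1], 3))  # 8.87694 0.333
```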

To form the error estimates Δx_k, we partition the list into overlapping pairs

partList = Partition[iterates, 2, 1]

{{9, 8.83704}, {8.83704, 8.89036}, {8.89036, 8.87248}, {8.87248, 8.87842},
 {8.87842, 8.87644}, {8.87644, 8.8771}, {8.8771, 8.87688}, {8.87688, 8.87695},
 {8.87695, 8.87693}, {8.87693, 8.87694}, {8.87694, 8.87694}, {8.87694, 8.87694},
 {8.87694, 8.87694}, {8.87694, 8.87694}, {8.87694, 8.87694}, {8.87694, 8.87694},
 {8.87694, 8.87694}, {8.87694, 8.87694}, {8.87694, 8.87694}, {8.87694, 8.87694}}

Then we use Map to create the error estimate Δx_k at each iteration step


Dx = Map[Abs[#[[2]] - #[[1]]] &, partList]

{0.162963, 0.0533193, 0.0178797, 0.00594788, 0.00198396,
 0.000661171, 0.000220407, 0.0000734671, 0.0000244892, 8.16305×10^-6,
 2.72102×10^-6, 9.07006×10^-7, 3.02336×10^-7, 1.00779×10^-7, 3.35928×10^-8,
 1.11976×10^-8, 3.73254×10^-9, 1.24418×10^-9, 4.14726×10^-10, 1.38241×10^-10}

To check for linear convergence we form the ratios Δx_{k+1}/Δx_k as follows

Map[#[[2]]/#[[1]] &, Partition[Dx, 2, 1]]

{0.327186, 0.335332, 0.332662, 0.333557, 0.333259, 0.333358,
 0.333325, 0.333336, 0.333332, 0.333334, 0.333333, 0.333333,
 0.333333, 0.333333, 0.333333, 0.333333, 0.333333, 0.333333, 0.333332}

This shows us that our algorithm converges linearly with

Δx_{k+1} / Δx_k = C_L ≈ 0.333

We can combine the above code fragments into a single compound statement as follows

LinearConvergenceTest = (iterates = NestList[g3[1.5, #] &, 9, 20];
  partList = Partition[iterates, 2, 1];
  Dx = Map[Abs[#[[2]] - #[[1]]] &, partList];
  Map[#[[2]]/#[[1]] &, Partition[Dx, 2, 1]])

{0.327186, 0.335332, 0.332662, 0.333557, 0.333259, 0.333358,
 0.333325, 0.333336, 0.333332, 0.333334, 0.333333, 0.333333,
 0.333333, 0.333333, 0.333333, 0.333333, 0.333333, 0.333333, 0.333332}

We can also test for quadratic convergence by simply modifying the last line of the code

QuadraticConvergenceTest = (iterates = NestList[g3[1.5, #] &, 9, 20];
  partList = Partition[iterates, 2, 1];
  Dx = Map[Abs[#[[2]] - #[[1]]] &, partList];
  Map[#[[2]]/#[[1]]^2 &, Partition[Dx, 2, 1]])

{2.00773, 6.28914, 18.6056, 56.0799, 167.977, 504.194, 1512.32,
 4537.22, 13611.4, 40834.4, 122503., 367509., 1.10253×10^6, 3.30758×10^6,
 9.92275×10^6, 2.97683×10^7, 8.93048×10^7, 2.67914×10^8, 8.0374×10^8}

It is clear that the ratios Δx_{k+1} / (Δx_k)² do not approach a constant C_Q. Thus, as determined previously, our algorithm converges linearly.

Example 2
Consider the following function

f(x) = e^x - 3x²

Thus we define the following functions

f[x_] := Exp[x] - 3 x^2
g[x_] := x - f[x]/f'[x]

Here is the output of NestList with a starting value of x = -20

NestList[N[g[#]] &, -20, 10]

{-20, -10., -5., -2.50079, -1.26263, -0.690041,
 -0.490353, -0.459708, -0.458963, -0.458962, -0.458962}

Newton's algorithm converges to the desired root x = -0.458962. Let us test the convergence rate and see if it is linear

LinearConvergenceTest = (iterates = NestList[N[g[#]] &, -20, 10];
  partList = Partition[iterates, 2, 1];
  Dx = Map[Abs[#[[2]] - #[[1]]] &, partList];
  Map[#[[2]]/#[[1]] &, Partition[Dx, 2, 1]])

{0.5, 0.499844, 0.495419, 0.462451, 0.348747,
 0.153461, 0.0243245, 0.000590959, 3.49192×10^-7}

Clearly there is no constant C_L for these iterates. We can also test for quadratic convergence and find

QuadraticConvergenceTest = (iterates = NestList[N[g[#]] &, -20, 10];
  partList = Partition[iterates, 2, 1];
  Dx = Map[Abs[#[[2]] - #[[1]]] &, partList];
  Map[#[[2]]/#[[1]]^2 &, Partition[Dx, 2, 1]])

{0.05, 0.0999688, 0.19823, 0.373498, 0.609073, 0.768503, 0.793766, 0.792798, 0.792706}

The ratios approach a constant C_Q ≈ 0.793, so Newton's method converges quadratically for this problem.
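As a cross-check of the constant that QuadraticConvergenceTest approaches, we can use the standard error relation for Newton's method, e_{k+1} ≈ [f''(p)/(2 f'(p))] e_k². The short Python sketch below (our own addition) evaluates this at the root p = -0.458962 of f(x) = e^x - 3x² and recovers a value close to 0.7927.

```python
import math

p = -0.458962                 # root of f(x) = e^x - 3x^2
fp = math.exp(p) - 6 * p      # f'(p)
fpp = math.exp(p) - 6         # f''(p)
CQ = abs(fpp / (2 * fp))      # predicted quadratic constant
print(round(CQ, 3))  # 0.793
```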