
1. Introduction

Finding one or more roots of the equation

f(x) = 0

is one of the most commonly occurring problems of applied mathematics. In most cases explicit solutions are not available, and we must be satisfied with being able to find a root to any specified degree of accuracy. The numerical procedures for finding the roots are called iterative methods.

Definition 1.1 (Simple and multiple root). A root having multiplicity one is called a simple root. For example, f(x) = (x − 1)(x − 2) has simple roots at x = 1 and x = 2, but g(x) = (x − 1)^2 has a root of multiplicity 2 at x = 1, which is therefore not a simple root.

A root with multiplicity m ≥ 2 is called a multiple (or repeated) root. For example, in the equation (x − 1)^2 = 0, x = 1 is a multiple (double) root.

If a polynomial has a multiple root, its derivative also shares that root.

More precisely, let α be a root of the equation f(x) = 0, and imagine writing it in the factored form

f(x) = (x − α)^m φ(x)

with some integer m ≥ 1 and some continuous function φ(x) for which φ(α) ≠ 0. Then we say that α is a root of f(x) of multiplicity m.

Definition 1.2 (Convergence). A sequence {x_n} converging to α is said to converge with order p if there exists a constant c such that

lim_{n→∞} |x_{n+1} − α| / |x_n − α|^p = c.

The constant c is known as the asymptotic error constant. Two cases are given special attention.

(i) If p = 1 (and c < 1), the sequence is linearly convergent.
(ii) If p = 2, the sequence is quadratically convergent.

Definition 1.3. Let {ε_n} be a sequence which converges to zero, and let {x_n} be any sequence. If there exist a constant c > 0 and an integer N > 0 such that

|x_n| ≤ c |ε_n|,   n ≥ N,

then we write x_n = O(ε_n).

Now we study some iterative methods to solve the non-linear equations.

2. The Bisection Method

2.1. Method. Let f(x) be a continuous function on some given interval [a, b] satisfying the condition f(a) f(b) < 0. Then, by the Intermediate Value Theorem, the function f(x) must have at least one root in [a, b]. The bisection method repeatedly bisects the interval [a, b] and then selects the subinterval in which a root must lie for further processing. It is a very simple and robust method, but it is also relatively slow. Usually [a, b] is chosen to contain only one root α, but the following algorithm for the bisection method will always converge to some root in [a, b], since f(a) f(b) < 0.

Algorithm: To determine a root of f(x) = 0 that is accurate within a specified tolerance value ε, given values a and b such that f(a) f(b) < 0:

Repeat:
  Define c = (a + b)/2.
  If f(a) f(c) < 0, then set b = c; otherwise set a = c.
Until |a − b| ≤ ε (tolerance value).
Print root as c.
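As an illustration, the algorithm can be sketched in Python (the function name, signature, and default tolerance here are our own choices, not part of the algorithm statement):

```python
def bisect(f, a, b, tol=1e-6):
    """Approximate a root of f in [a, b], assuming f(a)*f(b) < 0."""
    if f(a) * f(b) >= 0:
        raise ValueError("f(a) and f(b) must have opposite signs")
    while abs(b - a) > tol:
        c = a + (b - a) / 2          # midpoint, in the overflow-safe form
        if f(a) * f(c) < 0:
            b = c                    # root lies in [a, c]
        else:
            a = c                    # root lies in [c, b]
    return a + (b - a) / 2

# Smallest positive root of x^3 - 5x + 1 = 0 in [0, 1] (cf. Example 1)
root = bisect(lambda x: x**3 - 5*x + 1, 0.0, 1.0)
```

Each pass halves the bracketing interval, which is exactly the linear convergence with rate 1/2 proved below.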

Example 1. Perform five iterations of the bisection method to obtain the smallest positive root of the equation x^3 − 5x + 1 = 0.

Sol. We write f(x) = x^3 − 5x + 1.
Since f(0) > 0 and f(1) < 0, the smallest positive root lies in the interval (0, 1).¹ Taking a_0 = 0 and b_0 = 1, we obtain c_1 = (a_0 + b_0)/2 = 0.5.
Now f(c_1) = −1.375, so f(a_0) f(c_1) < 0.
This implies the root lies in the interval [0, 0.5].
We now take a_1 = 0 and b_1 = 0.5; then c_2 = (a_1 + b_1)/2 = 0.25,
f(c_2) = −0.2343, and f(a_1) f(c_2) < 0,
which implies the root lies in the interval [0, 0.25].
Applying the same procedure, we obtain the further iterations given in the following table.
The root lies in (0.1875, 0.21875), and we take the midpoint (0.1875 + 0.21875)/2 = 0.203125 as the root α.

2.2. Convergence analysis. Now we analyze the convergence of the iterations.

¹Choice of initial approximations: Initial approximations to the root are often known from the physical significance of the problem. Graphical methods can be used to locate a zero of f(x) = 0, and any value in the neighborhood of the root can be taken as the initial approximation.
If the given equation f(x) = 0 can be written as f_1(x) − f_2(x) = 0, then the points of intersection of the graphs y = f_1(x) and y = f_2(x) give the roots of the equation. Any value in the neighborhood of such a point can be taken as the initial approximation.

 k | a_{k-1} | b_{k-1} |  c_k    | f(a_{k-1}) f(c_k)
---|---------|---------|---------|------------------
 1 | 0       | 1       | 0.5     | < 0
 2 | 0       | 0.5     | 0.25    | < 0
 3 | 0       | 0.25    | 0.125   | > 0
 4 | 0.125   | 0.25    | 0.1875  | > 0
 5 | 0.1875  | 0.25    | 0.21875 | < 0

Theorem 2.1. Suppose that f ∈ C[a, b] and f(a) f(b) < 0. The bisection method generates a sequence {c_k} approximating a zero α of f with linear convergence.

Proof. Let [a_0, b_0], [a_1, b_1], ... denote the successive intervals produced by the bisection algorithm. Thus

a = a_0 ≤ a_1 ≤ a_2 ≤ ... ≤ b_0 = b,
b = b_0 ≥ b_1 ≥ b_2 ≥ ... ≥ a_0 = a.

This implies {a_n} and {b_n} are monotonic and bounded, and hence convergent. Since

b_1 − a_1 = (1/2)(b_0 − a_0),
b_2 − a_2 = (1/2)(b_1 − a_1) = (1/2^2)(b_0 − a_0),
........................
b_n − a_n = (1/2^n)(b_0 − a_0),

we have

lim_{n→∞} (b_n − a_n) = 0.

Taking limits,

lim a_n = lim b_n = α (say).

By continuity, lim f(a_n) = f(lim a_n) = f(α). Since f(a_n) f(b_n) < 0, it follows that

lim f(a_n) f(b_n) = f^2(α) ≤ 0,

which forces f(α) = 0; i.e., the common limit of {a_n} and {b_n} is a zero of f in [a, b].

Let c_{n+1} = (a_n + b_n)/2. Then, since a_n ≤ α ≤ b_n for all n,

|α − c_{n+1}| = |lim_{m→∞} a_m − (a_n + b_n)/2| ≤ |b_n − (a_n + b_n)/2|
             = (1/2)|b_n − a_n| = (1/2^{n+1})|b_0 − a_0|.

By the definition of convergence, we can say that the bisection method converges linearly with rate 1/2.

Note: 1. From the statement of the bisection algorithm, it is clear that the algorithm always converges; however, it can be very slow.
2. Computing c_k: It might happen that at a certain iteration k, computing c_k = (a_k + b_k)/2 gives overflow. It is safer to compute c_k as

c_k = a_k + (b_k − a_k)/2.

Stopping criteria: Since this is an iterative method, we must determine some stopping criterion that will allow the iteration to stop. The criterion "|f(c_k)| very small" can be misleading, since it is possible to have |f(c_k)| very small even when c_k is not close to the root.

Let us now find the minimum number of iterations N needed with the bisection method to achieve a certain desired accuracy ε. The interval length after N iterations is (b_0 − a_0)/2^N. So, to obtain an accuracy of ε, we must have

(b_0 − a_0)/2^N ≤ ε,

or

N ≥ [log(b_0 − a_0) − log ε] / log 2.

Note that the number N depends only on the initial interval [a_0, b_0] bracketing the root.

Example 2. Find the minimum number of iterations N required by the bisection algorithm to approximate the root in the interval [2.5, 4] of x^3 − 6x^2 + 11x − 6 = 0 with error tolerance 10^{−3}.

Sol. The number of iterations satisfies

N ≥ [log(4 − 2.5) − log 10^{−3}] / log 2 = 10.55.

Thus, a minimum of 11 iterations will be needed to obtain the desired accuracy using the bisection method.
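The iteration-count bound is easy to evaluate directly; a minimal sketch (the helper name is our own):

```python
import math

def bisection_iterations(a, b, tol):
    """Smallest N with (b - a)/2**N <= tol."""
    return math.ceil((math.log(b - a) - math.log(tol)) / math.log(2))

n = bisection_iterations(2.5, 4.0, 1e-3)   # the setting of Example 2
```

Note that the bound depends only on the interval length and the tolerance, never on f itself.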

3. Iteration methods based on a first-degree equation

3.1. The Secant Method. Let f(x) = 0 be the given non-linear equation, and let x_0 and x_1 be two approximations of the root. Let y = ax + b be the equation of the secant line joining the two points (x_0, f(x_0)) and (x_1, f(x_1)) on the curve y = f(x).
Let the intersection point of the secant line with the x-axis be (x_2, 0). Then

x_2 = −b/a,   a ≠ 0.

To determine a and b we use the two interpolation conditions

f_0 = f(x_0) = a x_0 + b,
f_1 = f(x_1) = a x_1 + b.

Solving these two equations, we obtain

a = (f_1 − f_0)/(x_1 − x_0),   b = (x_1 f_0 − x_0 f_1)/(x_1 − x_0).

The next approximation to the root is then given by

x_2 = −b/a = (x_0 f_1 − x_1 f_0)/(f_1 − f_0) = x_1 − [(x_1 − x_0)/(f_1 − f_0)] f_1.

This is called the secant or chord method, and successive iterations are given by

x_{k+1} = x_k − [(x_k − x_{k−1})/(f_k − f_{k−1})] f_k,   k = 1, 2, . . .

Geometrically, in this method we replace the curve y = f(x) by the straight line or chord passing through (x_{k−1}, f_{k−1}) and (x_k, f_k), and we take the point of intersection of the straight line with the x-axis as the next approximation to the root.

Algorithm:
1. Give inputs and take two initial guesses x_0 and x_1.
2. Compute the iteration

x_2 = x_1 − [(x_1 − x_0)/(f_1 − f_0)] f_1.

3. If |f(x_2)| < ε (error tolerance), then stop and print the root.
4. Otherwise set x_0 = x_1, x_1 = x_2 and repeat step 2. Also check whether the number of iterations has exceeded the maximum number of iterations.
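A Python sketch of the algorithm (the function name, tolerance, and iteration cap are our own choices; the starting guesses x0 = 0, x1 = 1 for the equation of Example 3 are an assumption consistent with the iterates reported there):

```python
import math

def secant(f, x0, x1, tol=1e-8, max_iter=50):
    """Secant method for f(x) = 0, started from two guesses x0, x1."""
    f0, f1 = f(x0), f(x1)
    for _ in range(max_iter):
        x2 = x1 - (x1 - x0) / (f1 - f0) * f1   # chord meets the x-axis
        if abs(f(x2)) < tol:
            return x2
        x0, f0 = x1, f1
        x1, f1 = x2, f(x2)
    return x1

# Root of cos(x) - x*exp(x) = 0, as in Example 3
root = secant(lambda x: math.cos(x) - x * math.exp(x), 0.0, 1.0)
```

Unlike bisection, the method needs no sign change between the two starting guesses, but it can fail when f_k = f_{k−1}.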

Example 3. Apply the secant method to find the root of the equation cos x − x e^x = 0.

Sol. Let f(x) = cos x − x e^x.
The successive iterations of the secant method are given by

x_{k+1} = x_k − [(x_k − x_{k−1})/(f_k − f_{k−1})] f_k,   k = 1, 2, . . .

Starting with the initial approximations x_0 = 0 and x_1 = 1, we obtain

x_2 = 0.3146653378,
x_3 = 0.4467281466,

etc.

3.2. Convergence analysis.

Theorem 3.1. Let f ∈ C^2[a, b]. If α is a simple root of f(x) = 0, then the secant method generates a sequence {x_n} converging to α for any initial approximations x_0, x_1 near to α.

Proof. We assume that α is a simple root of f(x) = 0, so f(α) = 0. Let x_k = α + ε_k.
An iterative method is said to have order of convergence p if

|x_{k+1} − α| = C |x_k − α|^p,

or equivalently

|ε_{k+1}| = C |ε_k|^p.

Successive iterations in the secant method are given by

x_{k+1} = x_k − [(x_k − x_{k−1})/(f_k − f_{k−1})] f_k,   k = 1, 2, . . .

Substituting x_k = α + ε_k gives the error equation

ε_{k+1} = ε_k − (ε_k − ε_{k−1}) f(α + ε_k) / [f(α + ε_k) − f(α + ε_{k−1})].

Expanding in Taylor series about α and using f(α) = 0, we obtain

ε_{k+1} = ε_k − [(ε_k − ε_{k−1}) (ε_k f'(α) + ½ ε_k^2 f''(α) + ...)] / [(ε_k − ε_{k−1}) f'(α) + ½ (ε_k^2 − ε_{k−1}^2) f''(α) + ...]

= ε_k − [ε_k + ½ ε_k^2 f''(α)/f'(α) + ...] [1 + ½ (ε_{k−1} + ε_k) f''(α)/f'(α) + ...]^{−1}

= ε_k − [ε_k + ½ ε_k^2 f''(α)/f'(α) + ...] [1 − ½ (ε_{k−1} + ε_k) f''(α)/f'(α) + ...]

= ½ [f''(α)/f'(α)] ε_k ε_{k−1} + O(ε_k^2 ε_{k−1} + ε_k ε_{k−1}^2).

Therefore

ε_{k+1} ≈ A ε_k ε_{k−1},   where A = ½ f''(α)/f'(α).

This relation is called the error equation. Now, by the definition of the order of convergence, we expect a relation of the form

ε_{k+1} = C ε_k^p.

It follows that ε_k = C ε_{k−1}^p, i.e. ε_{k−1} = (ε_k / C)^{1/p}. Hence

C ε_k^p = A ε_k C^{−1/p} ε_k^{1/p}
⟹ ε_k^p = A C^{−(1+1/p)} ε_k^{1+1/p}.

Comparing the powers of ε_k on both sides, we get

p = 1 + 1/p,

which gives two values of p; one is p = (1 + √5)/2 ≈ 1.618 and the other is negative (we neglect the negative value, since the order of convergence is non-negative). Therefore, the order of convergence of the secant method is about 1.618: superlinear, but less than 2.
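The superlinear behavior can be observed numerically. The experiment below is our own construction: it runs the secant iteration on f(x) = x^2 − 2 (root √2) and estimates the order from three consecutive errors.

```python
import math

# Secant errors on f(x) = x^2 - 2, root sqrt(2)
f = lambda x: x * x - 2.0
alpha = math.sqrt(2.0)
x0, x1 = 1.0, 2.0
errs = [abs(x0 - alpha), abs(x1 - alpha)]
for _ in range(5):
    x2 = x1 - (x1 - x0) / (f(x1) - f(x0)) * f(x1)
    errs.append(abs(x2 - alpha))
    x0, x1 = x1, x2

# Empirical order estimates p_k = log(e_{k+1}/e_k) / log(e_k/e_{k-1});
# they approach (1 + sqrt(5))/2 as the transient dies out
orders = [math.log(errs[k + 1] / errs[k]) / math.log(errs[k] / errs[k - 1])
          for k in range(2, 5)]
```

Early estimates are noisy, but the decay e_{k+1} ≈ A e_k e_{k−1} is already visible after a few steps.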

3.3. Newton Method. Let f(x) = 0 be the given non-linear equation, and let x_0 be an approximation of the root. Let y = ax + b be the equation of the tangent line at the point (x_0, f(x_0)) on the curve y = f(x).
Let the intersection point of the tangent line with the x-axis be (x_1, 0). Then

x_1 = −b/a,   a ≠ 0.

Since the line passes through (x_0, f(x_0)),

f(x_0) = a x_0 + b,

and since the line is tangent to the curve at x_0, its slope is

f'(x_0) = a.

Hence

b = f(x_0) − f'(x_0) x_0.

Now

x_1 = −b/a = −[f(x_0) − f'(x_0) x_0]/f'(x_0) = x_0 − f(x_0)/f'(x_0).

This is called the Newton method, and successive iterations are given by

x_{k+1} = x_k − f(x_k)/f'(x_k),   k = 0, 1, . . .

The method can be obtained directly from the secant method by taking the limit x_{k−1} → x_k. In the limiting case the chord joining the points (x_{k−1}, f_{k−1}) and (x_k, f_k) becomes the tangent at (x_k, f_k). In this case the problem of finding the root of the equation is equivalent to finding the point of intersection of the tangent to the curve y = f(x) at the point (x_k, f_k) with the x-axis.

Algorithm: Let f : R → R be a differentiable function. The following algorithm computes an approximate solution x* of the equation f(x) = 0.

1. Choose an initial guess x_0.
2. For k = 0, 1, 2, . . . do:
   If |f(x_k)| is sufficiently small, then set x* = x_k and return x*.
3. Compute x_{k+1} = x_k − f(x_k)/f'(x_k).
   If |x_{k+1} − x_k| is sufficiently small, then set x* = x_{k+1} and return x*.
4. End (for main loop).
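The algorithm translates directly into Python (function name and tolerances are our own choices):

```python
def newton(f, df, x0, tol=1e-10, max_iter=50):
    """Newton iteration x_{k+1} = x_k - f(x_k)/f'(x_k)."""
    x = x0
    for _ in range(max_iter):
        fx = f(x)
        if abs(fx) < tol:            # residual test
            return x
        x_new = x - fx / df(x)       # Newton step
        if abs(x_new - x) < tol:     # increment test
            return x_new
        x = x_new
    return x

# sqrt(2) as the root of x^2 - 2, starting from x0 = 1
root = newton(lambda x: x * x - 2, lambda x: 2 * x, 1.0)
```

Both stopping tests from the algorithm are included; in practice the increment test is usually the more reliable of the two.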

Example 4. Apply the Newton method to f(x) = x^2 − 2 = 0, starting with x_0 = 1, to compute √2.

Sol. Since f'(x) = 2x, in the Newton method we obtain the next iterate from the previous iterate x_k by

x_{k+1} = x_k − (x_k^2 − 2)/(2 x_k) = x_k/2 + 1/x_k.

Thus

x_1 = 1/2 + 1/1 = 1.5,
x_2 = 1.5/2 + 1/1.5 = 1.41666667,
x_3 = 1.41421569,
x_4 = 1.41421356,
x_5 = 1.41421356.

Since the fourth and fifth iterates agree to eight decimal places, we assume that 1.41421356 is a correct solution of f(x) = 0 to at least eight decimal places.

Example 5. Perform four iterations of the Newton method to obtain the approximate value of 17^{1/3}, starting with x_0 = 2.0.

Sol. Let x = 17^{1/3}, which implies x^3 = 17. Let f(x) = x^3 − 17 = 0. The Newton approximations are given by

x_{k+1} = x_k − (x_k^3 − 17)/(3 x_k^2) = (2 x_k^3 + 17)/(3 x_k^2),   k = 0, 1, 2, . . .

Thus x_1 = 2.75, x_2 = 2.582645, x_3 = 2.571332, x_4 = 2.571282, etc.


Once the Newton method catches scent of the root, it usually hunts it down with amazing speed. But since the method is based only on local information, namely f(x_k) and f'(x_k), the Newton method's sense of smell is deficient.
If the initial estimate is not close enough to the root, the Newton method may not converge, or may converge to the wrong root. The successive estimates may converge to the root too slowly, or may not converge at all.
The following example shows that the choice of initial guess is very important for convergence.

Example 6. Use the Newton method to find a non-zero solution of x = 2 sin x.

Sol. Let f(x) = x − 2 sin x. Then f'(x) = 1 − 2 cos x, and the Newton iteration is

x_{k+1} = x_k − f(x_k)/f'(x_k) = x_k − (x_k − 2 sin x_k)/(1 − 2 cos x_k) = 2(sin x_k − x_k cos x_k)/(1 − 2 cos x_k).

Let x_0 = 1.1. The next six estimates, to 3 decimal places, are:

x_1 = 8.453, x_2 = 5.256, x_3 = 203.384, x_4 = 118.019, x_5 = 87.471, x_6 = 203.637.

Therefore the iterations diverge.
Note that choosing x_0 = π/3 ≈ 1.0472 leads to immediate disaster, since then 1 − 2 cos x_0 = 0 and therefore x_1 does not exist. The trouble was caused by the choice of x_0.
Let us see whether we can do better. Draw the curves y = x and y = 2 sin x. A quick sketch shows that they meet a bit past π/2. If we take x_0 = 1.5, the next five estimates are

x_1 = 2.076558, x_2 = 1.910507, x_3 = 1.895622, x_4 = 1.895494, x_5 = 1.895494.

Example 7. Find the point on the curve y = ln x which is closest to the origin, using the Newton method.

Sol. Let (x, ln x) be a general point on the curve, and let S(x) be the square of the distance from (x, ln x) to the origin. Then

S(x) = x^2 + ln^2 x.

We want to minimize the distance; this is equivalent to minimizing the square of the distance. Now the minimization process takes the usual route. Note that S(x) is only defined when x > 0. We have

S'(x) = 2x + 2 (ln x)/x = (2/x)(x^2 + ln x).

Our problem thus comes down to solving the equation S'(x) = 0. We can use the Newton method directly on S'(x), but the calculations are more pleasant if we observe that S'(x) = 0 is equivalent to x^2 + ln x = 0. Applying the Newton method to this equation gives the relation

x_{k+1} = x_k − (x_k^2 + ln x_k)/(2 x_k + 1/x_k).

We need a suitable starting point x_0. Experimentation with a calculator suggests that we take x_0 = 0.65. Then x_1 = 0.6529181 and x_2 = 0.65291864.
Since x_1 agrees with x_2 to 5 decimal places, we can decide that, to 5 places, the minimum distance occurs at x = 0.65292.

3.4. Convergence Analysis.

Theorem 3.2. Let f ∈ C^2[a, b]. If α is a simple root of f(x) = 0 and f'(α) ≠ 0, then the Newton method generates a sequence {x_n} converging to α for any initial approximation x_0 near to α.

Proof. We assume that α is a simple root of f(x) = 0, so f(α) = 0. Let x_k = α + ε_k. Successive iterations in the Newton method are given by

x_{k+1} = x_k − f(α + ε_k)/f'(α + ε_k),   k = 0, 1, . . .

Expanding f and f' in Taylor series about α and using f(α) = 0, we obtain the error equation

ε_{k+1} = ε_k − [ε_k f'(α) + ½ ε_k^2 f''(α) + ...] / [f'(α) + ε_k f''(α) + ...]

= ε_k − ε_k [1 + ½ ε_k f''(α)/f'(α) + ...] [1 + ε_k f''(α)/f'(α) + ...]^{−1}

= ε_k − ε_k [1 + ½ ε_k f''(α)/f'(α) + ...] [1 − ε_k f''(α)/f'(α) + ...]

= ½ [f''(α)/f'(α)] ε_k^2 + O(ε_k^3)

= C ε_k^2,   where C = ½ f''(α)/f'(α).

This error analysis shows that the Newton method has second-order (quadratic) convergence.
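The quadratic error law ε_{k+1} ≈ C ε_k^2 can be checked numerically. The experiment below is our own construction, using f(x) = x^2 − 2, for which C = f''(α)/(2 f'(α)) = 1/(2√2) ≈ 0.3536 at α = √2.

```python
import math

# Ratios e_{k+1}/e_k^2 for Newton on f(x) = x^2 - 2 should approach 1/(2*sqrt(2))
alpha = math.sqrt(2.0)
x = 1.0
errs = [abs(x - alpha)]
for _ in range(4):
    x = x - (x * x - 2.0) / (2.0 * x)    # Newton step
    errs.append(abs(x - alpha))

ratios = [errs[k + 1] / errs[k] ** 2 for k in range(1, 4)]
```

For this particular f the identity x_{k+1} − √2 = (x_k − √2)^2 / (2 x_k) holds exactly, so the ratios equal 1/(2 x_k) and tend to 1/(2√2).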

Theorem 3.3. Let f(x) be twice continuously differentiable on the finite interval [a, b], and let the following conditions be satisfied:

(i) f(a) f(b) < 0;
(ii) f'(x) ≠ 0 for all x ∈ [a, b];
(iii) either f''(x) ≥ 0 or f''(x) ≤ 0 for all x ∈ [a, b];
(iv) at the end points a, b,

|f(a)|/|f'(a)| < b − a,   |f(b)|/|f'(b)| < b − a.

Then the Newton method converges to the unique solution α of f(x) = 0 in [a, b] for any choice of x_0 ∈ [a, b].

Some comments about these conditions: Conditions (i) and (ii) guarantee that there is one and only one solution in [a, b]. Condition (iii) states that the graph of f(x) is concave either from above or from below, and together with condition (ii) it implies that f'(x) is monotone on [a, b]. Added to these, condition (iv) states that the tangent to the curve at either endpoint intersects the x-axis within the interval [a, b]. The proof of the theorem is left as an exercise for interested readers.

Example 8. Find an interval containing the smallest positive zero of f(x) = e^{−x} − sin x which satisfies the conditions of the previous theorem for convergence of the Newton method.

Sol. With f(x) = e^{−x} − sin x, we have f'(x) = −e^{−x} − cos x and f''(x) = e^{−x} + sin x.
We choose [a, b] = [0, 1]. Since f(0) = 1 and f(1) = −0.47, we have f(a) f(b) < 0, so condition (i) is satisfied.
Since f'(x) < 0 for all x ∈ [0, 1], condition (ii) is satisfied; and since f''(x) > 0 for all x ∈ [0, 1], condition (iii) is satisfied.
Finally, since f(0) = 1 and f'(0) = −2,

|f(0)|/|f'(0)| = 1/2 < b − a = 1,

and since f(1) = −0.47 and f'(1) = −0.90,

|f(1)|/|f'(1)| = 0.52 < 1.

This verifies condition (iv). The Newton iteration will therefore converge for any choice of x_0 in [0, 1].

Example 9. Find all the roots of cos x − x^2 − x = 0 to five decimal places.

Sol. f(x) = cos x − x^2 − x = 0 has two roots, one in the interval (−2, −1) and one in (0, 1). Applying the Newton method,

x_{n+1} = x_n − (cos x_n − x_n^2 − x_n)/(−sin x_n − 2 x_n − 1).

Taking x_0 = −1.5, we obtain the root in the interval (−2, −1).
Starting with x_0 = 0.5, we obtain the root in (0, 1), and the iterations are given by

x_1 = 0.55145650, x_2 = 0.55001049, x_3 = 0.55000935.


3.5. Newton method for multiple roots. Let α be a root of f(x) = 0 with multiplicity m. In this case we can write

f(x) = (x − α)^m φ(x),

and

f(α) = f'(α) = ... = f^{(m−1)}(α) = 0,   f^{(m)}(α) ≠ 0.

Now, by Taylor expansion about α with x_k = α + ε_k,

f(x_k) = f(α + ε_k) = (ε_k^m / m!) f^{(m)}(α) + (ε_k^{m+1} / (m+1)!) f^{(m+1)}(α) + ...

f'(x_k) = f'(α + ε_k) = (ε_k^{m−1} / (m−1)!) f^{(m)}(α) + (ε_k^m / m!) f^{(m+1)}(α) + ...

Therefore

ε_{k+1} = ε_k − f(α + ε_k)/f'(α + ε_k)

= ε_k − [(ε_k^m / m!) f^{(m)}(α) (1 + (ε_k/(m+1)) f^{(m+1)}(α)/f^{(m)}(α) + ...)] / [(ε_k^{m−1} / (m−1)!) f^{(m)}(α) (1 + (ε_k/m) f^{(m+1)}(α)/f^{(m)}(α) + ...)]

= ε_k − (ε_k/m) [1 + (ε_k/(m+1)) f^{(m+1)}(α)/f^{(m)}(α) + ...] [1 + (ε_k/m) f^{(m+1)}(α)/f^{(m)}(α) + ...]^{−1}

= ε_k (1 − 1/m) + O(ε_k^2).

This implies the method has a linear rate of convergence (with rate 1 − 1/m) for multiple roots. However, when the multiplicity of the root is known in advance, we can modify the method to increase the order of convergence. We consider

x_{k+1} = x_k − e f(x_k)/f'(x_k),

where e is an arbitrary constant to be determined. If α is a multiple root with multiplicity m, then the error equation becomes

ε_{k+1} = ε_k (1 − e/m) + O(ε_k^2).

If the method is to have a quadratic rate of convergence, then 1 − e/m = 0, which implies e = m. Therefore, for multiple roots the modified Newton method is

x_{k+1} = x_k − m f(x_k)/f'(x_k).
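A sketch of the modified method in Python (function name and tolerances are our own choices), tried on a polynomial with a known double root:

```python
def modified_newton(f, df, x0, m, tol=1e-10, max_iter=100):
    """Newton step scaled by the known multiplicity m: x - m*f(x)/f'(x)."""
    x = x0
    for _ in range(max_iter):
        fx = f(x)
        if fx == 0.0:                 # landed exactly on the root
            return x
        step = m * fx / df(x)
        x -= step
        if abs(step) < tol:
            return x
    return x

# f(x) = (x - 2)^2 (x - 3) = x^3 - 7x^2 + 16x - 12 has a double root at x = 2
f  = lambda x: x**3 - 7*x**2 + 16*x - 12
df = lambda x: 3*x**2 - 14*x + 16
root = modified_newton(f, df, 1.0, m=2)
```

With m = 2 the first iterate from x_0 = 1 is 1.8, matching the modified-Newton run in Example 11 below; the unscaled method needs many more steps.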

Example 10. The function f(x) = e^x − x − 1 has a zero of multiplicity 2 at x = 0. Show that the Newton method with x_0 = 1 converges to this zero, but not quadratically.

Sol. We have f(x) = e^x − x − 1, f'(x) = e^x − 1 and f''(x) = e^x.
Now f(0) = 1 − 0 − 1 = 0, f'(0) = 1 − 1 = 0 and f''(0) = 1. Therefore f has a zero of multiplicity 2 at x = 0.
Starting with x_0 = 1, the iterations x_{k+1} = x_k − f(x_k)/f'(x_k) are given by

x_1 = 0.58198, x_2 = 0.31906, x_3 = 0.16800, x_4 = 0.08635, x_5 = 0.04380, x_6 = 0.02206.

The error is roughly halved at each step, consistent with the linear rate 1 − 1/m = 1/2, so the convergence is linear rather than quadratic.

Example 11. The equation f(x) = x^3 − 7x^2 + 16x − 12 = 0 has a double root at x = 2.0. Starting with x_0 = 1, find the root correct to three decimals.

Sol. First we apply the ordinary Newton method; successive iterations are given by

x_{k+1} = x_k − (x_k^3 − 7 x_k^2 + 16 x_k − 12)/(3 x_k^2 − 14 x_k + 16),   k = 0, 1, 2, . . .

x_1 = 1.4, x_2 = 1.652632, x_3 = 1.806484, x_4 = 1.89586,
x_5 = 1.945653, x_6 = 1.972144, x_7 = 1.985886, x_8 = 1.992894,
x_9 = 1.996435, x_10 = 1.998214, x_11 = 1.999106, x_12 = 1.999553.

The root correct to 3 decimal places is x_12 ≈ 2.000.

If we apply the modified Newton method (with m = 2), then

x_{k+1} = x_k − 2 (x_k^3 − 7 x_k^2 + 16 x_k − 12)/(3 x_k^2 − 14 x_k + 16),   k = 0, 1, 2, . . .

x_1 = 1.8, x_2 = 1.984615, x_3 = 1.999884.

The root correct to 3 decimal places is 2.000, and in this case we need far fewer iterations to reach the desired accuracy.

Example 12. Apply the Newton method with x_0 = 0.8 to the equation f(x) = x^3 − x^2 − x + 1 = 0, and verify the first order of convergence. Then apply the modified Newton method with m = 2 and verify the order of convergence.

Sol. The Newton iterations are

x_{k+1} = x_k − (x_k^3 − x_k^2 − x_k + 1)/(3 x_k^2 − 2 x_k − 1).

x_1 = 0.905882, x_2 = 0.954132, x_3 = 0.977338, x_4 = 0.988734.

Since the exact root is α = 1 (a double root, as f(x) = (x − 1)^2 (x + 1)), the errors in the approximations are

ε_0 = |α − x_0| = 0.2 = 0.2 × 10^0
ε_1 = |α − x_1| = 0.094118 = 0.94 × 10^{−1}
ε_2 = |α − x_2| = 0.045868 = 0.46 × 10^{−1}
ε_3 = |α − x_3| = 0.022662 = 0.23 × 10^{−1}
ε_4 = |α − x_4| = 0.011266 = 0.11 × 10^{−1}

which shows linear convergence (the error is almost halved in consecutive steps).

The iterations in the modified Newton method are given by

x_{k+1} = x_k − 2 (x_k^3 − x_k^2 − x_k + 1)/(3 x_k^2 − 2 x_k − 1).

x_1 = 1.011765, x_2 = 1.000034, x_3 = 1.000000.

Now the errors in the approximations are

ε_0 = |α − x_0| = 0.2 = 0.2 × 10^0
ε_1 = |α − x_1| = 0.011765 = 0.12 × 10^{−1}
ε_2 = |α − x_2| = 0.000034 = 0.34 × 10^{−4}

which verifies the second-order convergence.

4. General theory for one-point iteration methods

We now consider solving an equation x = g(x) for a root α by the iteration

x_{n+1} = g(x_n),   n ≥ 0,

with some initial guess x_0. For example, the Newton method fits this pattern with

g(x) = x − f(x)/f'(x)   and   x_{n+1} = g(x_n).

Each solution of x = g(x) is called a fixed point of g.
For example, consider solving x^2 − a = 0, a > 0. We can rewrite it as

1. x = x^2 + x − a, or more generally x = x + c(x^2 − a), c ≠ 0;
2. x = a/x;
3. x = (x + a/x)/2.

Let a = 3, x_0 = 2. The resulting iterates are:

n    1. x_{n+1} = x_n^2 + x_n − 3    2. x_{n+1} = 3/x_n    3. x_{n+1} = (x_n + 3/x_n)/2
0    2.0                              2.0                    2.0
1    3.0                              1.5                    1.75
2    9.0                              2.0                    1.732143
3    87.0                             1.5                    1.732051

Now √3 = 1.732051 to six decimals, and it is clear that the third choice works while the other two do not. Why do the first two fail? Which rearrangement will converge and which will not is answered by the convergence result below, which requires |g'(α)| < 1 in a neighborhood of α.
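The three rearrangements are easy to try in code (the helper name is our own; choice 1 is omitted from the run because its iterates blow up, as the table shows):

```python
def fixed_point(g, x0, n):
    """Apply n steps of x_{k+1} = g(x_k) and return the final iterate."""
    x = x0
    for _ in range(n):
        x = g(x)
    return x

a = 3.0
g2 = lambda x: a / x                  # |g2'(sqrt(3))| = 1: oscillates forever
g3 = lambda x: 0.5 * (x + a / x)      # g3'(sqrt(3)) = 0: converges rapidly
x = fixed_point(g3, 2.0, 6)
```

The derivative at the fixed point explains the table: g2 cycles between 2.0 and 1.5 because |g2'(√3)| = 1, while g3 (the Newton rearrangement) converges quadratically because g3'(√3) = 0.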

Lemma 4.1. Let g(x) be a continuous function on [a, b], and assume that a ≤ g(x) ≤ b for all x ∈ [a, b], i.e. g([a, b]) ⊆ [a, b]. Then x = g(x) has at least one solution in [a, b].

Proof. Let g be continuous on [a, b] with a ≤ g(x) ≤ b for all x ∈ [a, b]. Consider ψ(x) = g(x) − x.
If g(a) = a or g(b) = b, the proof is trivial, so we assume that a ≠ g(a) and b ≠ g(b). Since a ≤ g(x) ≤ b, this implies g(a) > a and g(b) < b. Now

ψ(a) = g(a) − a > 0   and   ψ(b) = g(b) − b < 0.

Since ψ is continuous and ψ(a) ψ(b) < 0, by the Intermediate Value Theorem ψ has at least one zero in [a, b]; i.e., there exists some α ∈ [a, b] such that g(α) = α.
Graphically, the roots are the intersection points of y = x and y = g(x), as shown in the figure.

Theorem 4.2 (Contraction Mapping Theorem). Let g and g' be continuous functions on [a, b], and assume that g satisfies a ≤ g(x) ≤ b for all x ∈ [a, b]. Furthermore, assume that there is a constant 0 < λ < 1 such that

λ = max_{a≤x≤b} |g'(x)|.

Then:
1. x = g(x) has a unique solution α in the interval [a, b].
2. The iterates x_{n+1} = g(x_n), n ≥ 0, converge to α for any choice of x_0 ∈ [a, b].
3. |α − x_n| ≤ (λ^n / (1 − λ)) |x_1 − x_0|, n ≥ 0.
4. lim_{n→∞} |α − x_{n+1}| / |α − x_n| = |g'(α)|.

Proof. Since g and g' are continuous on [a, b] and a ≤ g(x) ≤ b for all x ∈ [a, b], by the previous Lemma there exists at least one solution to x = g(x). By the Mean Value Theorem, for any x, y ∈ [a, b] there exists c between x and y such that

g(x) − g(y) = g'(c)(x − y),

so that

|g(x) − g(y)| ≤ λ |x − y|,   0 < λ < 1,   x, y ∈ [a, b].

1. Suppose x = g(x) has two solutions, say α and β, in [a, b]; then α = g(α) and β = g(β). Now

|α − β| = |g(α) − g(β)| ≤ λ |α − β|
⟹ (1 − λ) |α − β| ≤ 0.

Since 0 < λ < 1, this forces α = β. Hence x = g(x) has a unique solution in [a, b], which we call α.

2. To check the convergence of the iterates {x_n}, we first observe that they all remain in [a, b]: if x_n ∈ [a, b], then x_{n+1} = g(x_n) ∈ [a, b]. Now, for some c_n between α and x_n,

|α − x_{n+1}| = |g(α) − g(x_n)| = |g'(c_n)| |α − x_n|,

so that

|α − x_{n+1}| ≤ λ |α − x_n| ≤ λ^2 |α − x_{n−1}| ≤ ... ≤ λ^{n+1} |α − x_0|.

As n → ∞, λ^n → 0, which implies x_n → α.

3. To find the bound, note that

|α − x_0| = |α − x_1 + x_1 − x_0| ≤ |α − x_1| + |x_1 − x_0| ≤ λ |α − x_0| + |x_1 − x_0|,

so

(1 − λ) |α − x_0| ≤ |x_1 − x_0|   ⟹   |α − x_0| ≤ |x_1 − x_0| / (1 − λ).

Therefore

|α − x_n| ≤ λ^n |α − x_0| ≤ (λ^n / (1 − λ)) |x_1 − x_0|.

4. From part 2,

|α − x_{n+1}| / |α − x_n| = |g'(c_n)|

with c_n between α and x_n. Since x_n → α, we also have c_n → α, and hence

lim_{n→∞} |α − x_{n+1}| / |α − x_n| = |g'(α)|.
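The a priori bound in part 3 can be tested numerically. The example below is our own: g(x) = (x + 3/x)/2 maps [1.5, 2] into itself, and on that interval max|g'| = |g'(1.5)| = 1/6, so we may take λ = 1/6.

```python
import math

# Verify |alpha - x_n| <= lam**n / (1 - lam) * |x_1 - x_0| for g(x) = (x + 3/x)/2
g = lambda x: 0.5 * (x + 3.0 / x)
alpha = math.sqrt(3.0)
lam = 1.0 / 6.0
x0 = 2.0
x1 = g(x0)

x, errors, bounds = x0, [], []
for n in range(5):
    errors.append(abs(alpha - x))
    bounds.append(lam**n / (1.0 - lam) * abs(x1 - x0))
    x = g(x)

ok = all(e <= b for e, b in zip(errors, bounds))
```

The actual errors shrink far faster than the bound here, because g'(√3) = 0 makes this particular iteration quadratically convergent; the bound only uses the worst-case contraction factor.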

Remark 4.1. If 0 < |g'(α)| < 1, the formula

lim_{n→∞} |α − x_{n+1}| / |α − x_n| = |g'(α)|

proves that the convergence is exactly linear, with no higher order of convergence being possible. In this case, the value of |g'(α)| is the linear rate of convergence.
In practice, we do not use the theorem above directly. The main reason is that it is difficult to find an interval [a, b] for which the condition a ≤ g(x) ≤ b is satisfied. Therefore, we use the theorem in the following practical way.

Corollary 4.3. Let g and g' be continuous on some interval c < x < d containing the fixed point α, and assume that |g'(α)| < 1. Then there is an interval [a, b] around α for which the hypotheses, and hence the conclusions, of the theorem are true.
On the contrary, if |g'(α)| > 1, then the iteration method x_{n+1} = g(x_n) will not converge to α. When |g'(α)| = 1, no conclusion can be drawn, and even if convergence occurs, the method would be far too slow to be practical.

Remark 4.2. The possible behavior of the fixed-point iterates {x_n} is shown in the figure for various values of g'(α). To see the convergence, consider the case of x_1 = g(x_0), the height of y = g(x) at x_0. We bring the number x_1 back to the x-axis by using the line y = x and the height y = x_1. We continue this with each iterate, obtaining a stair-step behavior when g'(α) > 0. When g'(α) < 0, the iterates oscillate around the fixed point α, as can be seen in the figure. In the first figure (top) the iterates converge monotonically, in the second they are oscillatory convergent, in the third they diverge, and in the last they are oscillatory divergent.

Theorem 4.4. Let α be a root of x = g(x), and let g(x) be p times continuously differentiable for all x near α, for some p ≥ 2. Furthermore assume that

g'(α) = · · · = g^{(p−1)}(α) = 0.   (4.1)

Then, if the initial guess x_0 is sufficiently close to α, the iteration

x_{n+1} = g(x_n),   n ≥ 0,

converges to α with order p, and

lim_{n→∞} (α − x_{n+1}) / (α − x_n)^p = (−1)^{p−1} g^{(p)}(α)/p!.

Proof. Assume x_0 is chosen near enough to α that the iterates remain near α, with g satisfying the conditions in equation (4.1) stated above. Expand g(x_n) about α:

x_{n+1} = g(x_n) = g(α + (x_n − α))
= g(α) + (x_n − α) g'(α) + · · · + ((x_n − α)^{p−1}/(p−1)!) g^{(p−1)}(α) + ((x_n − α)^p/p!) g^{(p)}(ξ_n)

for some ξ_n between x_n and α. Using equation (4.1) and g(α) = α, we obtain

x_{n+1} = α + ((x_n − α)^p / p!) g^{(p)}(ξ_n),

so that

(x_{n+1} − α)/(x_n − α)^p = g^{(p)}(ξ_n)/p!.

Equivalently,

(α − x_{n+1})/(α − x_n)^p = (−1)^{p−1} g^{(p)}(ξ_n)/p!,

and letting n → ∞ (so that ξ_n → α) gives the stated limit.

Figure 4. Convergent and non-convergent sequences x_{n+1} = g(x_n).

Remark: The Newton method can be analyzed by this result. Here

g(x) = x − f(x)/f'(x),   g'(x) = f(x) f''(x)/[f'(x)]^2,

so at a simple root α (where f(α) = 0 and f'(α) ≠ 0),

g'(α) = 0,   g''(α) = f''(α)/f'(α) ≠ 0,

since f'(α) ≠ 0 and f''(α) is either positive or negative. By Theorem 4.4, the Newton method therefore converges with order p = 2.

Example 13. Use a fixed-point iteration to determine a solution accurate to within 10^{−4} of x = tan x, for x in [4, 5].

Sol. Using g(x) = tan x and x_0 = 4 gives x_1 = g(x_0) = tan 4 = 1.158, which is not in the interval [4, 5]. So we need a different fixed-point function.
If we note that x = tan x implies

1/x = 1/tan x,

then

x = x + 1/tan x − 1/x.

Starting with x_0 = 4 and taking g(x) = x + 1/tan x − 1/x, we obtain

x_1 = 4.61369, x_2 = 4.49596, x_3 = 4.49341, x_4 = 4.49341.

As x_3 and x_4 agree to five decimals, it is reasonable to assume that these values are sufficiently accurate.
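The rearranged iteration is easy to reproduce (a minimal sketch; the iteration count is our own choice):

```python
import math

# Fixed-point iteration x_{k+1} = x_k + 1/tan(x_k) - 1/x_k
# for the solution of x = tan(x) in [4, 5]
g = lambda x: x + 1.0 / math.tan(x) - 1.0 / x
x = 4.0
for _ in range(20):
    x = g(x)
```

A short calculation shows g'(x) = 1/x^2 − cot^2(x), which vanishes at the fixed point (where cot α = 1/α); that is why this rearrangement converges so quickly while g(x) = tan x fails.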

Example 14. Consider the equation x^3 − 7x + 2 = 0 in [0, 1]. Write a fixed-point iteration which will converge to the solution.

Sol. We rewrite the equation in the form x = (x^3 + 2)/7 and define the fixed-point iteration

x_{n+1} = (x_n^3 + 2)/7.

Now g(x) = (x^3 + 2)/7; then g : [0, 1] → [0, 1] and |g'(x)| = 3x^2/7 ≤ 3/7 < 1 for all x ∈ [0, 1].
Hence, by the Contraction Mapping Theorem, the sequence {x_n} defined above converges to the unique solution of the given equation in [0, 1]. Starting with x_0 = 0.5, we compute

x_1 = 0.303571429, x_2 = 0.28971083, x_3 = 0.289188016.

Therefore the root correct to three decimals is 0.289.

Example 15. The iterates x_{n+1} = 2 − (1 + c) x_n + c x_n^3 will converge to α = 1 for some values of the constant c (provided that x_0 is sufficiently close to α). Find the values of c for which convergence occurs. For what values of c, if any, is the convergence quadratic?

Sol. This is a fixed-point iteration x_{n+1} = g(x_n) with

g(x) = 2 − (1 + c) x + c x^3.

Note that α = 1 is a fixed point, since g(1) = 2 − (1 + c) + c = 1. For convergence we need |g'(α)| < 1:

g'(x) = −(1 + c) + 3 c x^2,   so   |g'(1)| = |2c − 1| < 1   ⟹   0 < c < 1.

For quadratic convergence we need

g'(α) = 0 and g''(α) ≠ 0.

This gives c = 1/2 (and then g''(1) = 6c = 3 ≠ 0).

Example 16. How should the constant a be chosen to ensure the fastest possible convergence of the iteration

x_{n+1} = (a x_n + x_n^{−2} + 1)/(a + 1)?

Sol. Let

lim x_n = lim x_{n+1} = α.

Then

α = (a α + α^{−2} + 1)/(a + 1)   ⟹   α^3 − α^2 − 1 = 0.

Therefore, the above formula finds the roots of the equation f(x) = x^3 − x^2 − 1 = 0.
Now substituting x_n = α + ε_n and x_{n+1} = α + ε_{n+1}, we get

(a + 1)(α + ε_{n+1}) = a(α + ε_n) + α^{−2} (1 + ε_n/α)^{−2} + 1,

which implies

(a + 1) ε_{n+1} = (a − 2/α^3) ε_n + O(ε_n^2).

Therefore, for the fastest convergence we take a = 2/α^3, where α is the root of the equation x^3 − x^2 − 1 = 0 and can be computed by the Newton method.

Example 17. To compute the root of the equation

e^{−x} = 3 log_e x,

using the formula

x_{n+1} = x_n − [3 log_e x_n − exp(−x_n)]/p,

show that p ≈ 3 gives rapid convergence.

Sol. Substituting x_n = α + ε_n, the error satisfies

ε_{n+1} = ε_n − (1/p)[3 log_e(α + ε_n) − exp(−α − ε_n)]
= ε_n − (1/p)[3 log_e α + 3 log_e(1 + ε_n/α) − exp(−α) exp(−ε_n)]
= ε_n − (1/p)[3 log_e α + 3(ε_n/α − ε_n^2/(2α^2) + O(ε_n^3)) − exp(−α)(1 − ε_n + ε_n^2/2 − ...)].

Since α is the exact root, exp(−α) − 3 log_e α = 0, and therefore the error equation is

ε_{n+1} = [1 − (1/p)(3/α + e^{−α})] ε_n + O(ε_n^2).

The coefficient of ε_n vanishes, giving the fastest convergence, when

p = 3/α + e^{−α}.

To compute α we apply the Newton method to f(x) = 3 log_e x − e^{−x} with x_0 = 1.5, obtaining

x_1 = 1.053213, x_2 = 1.113665, x_3 = 1.115447, x_4 = 1.115448.

Taking α = 1.11545, we obtain p = 3/α + e^{−α} ≈ 3.02. Hence p = 3 gives rapid convergence.

Exercises

(1) Given the following equations: (i) x^4 − x − 10 = 0, (ii) x − e^{−x} = 0. Find initial approximations for the smallest positive root. Use these to find the root correct to three decimals with the secant and Newton methods.
(2) Find all solutions of e^{2x} = x + 6, correct to 4 decimal places, using the Newton method.
(3) Use the bisection method to find the indicated root of the following equations. Use an error tolerance of ε = 0.0001.
(a) The real root of x^3 − x^2 − x − 1 = 0.
(b) The smallest positive root of cos x = 1/2 + sin x.
(c) The real roots of x^3 − 2x − 1 = 0.
(4) Suppose that

f(x) = e^{−1/x^2}, x ≠ 0;   f(x) = 0, x = 0.

The function f is continuous everywhere, in fact differentiable arbitrarily often everywhere, and 0 is the only solution of f(x) = 0. Show that if x_0 = 0.0001, it takes more than one hundred million iterations of the Newton method to get below 0.00005.
(5) A calculator is defective: it can only add, subtract, and multiply. Use the equation 1/x = 1.37, the Newton method, and the defective calculator to find 1/1.37 correct to 8 decimal places.
(6) Use the Newton method to find the smallest and the second smallest positive roots of the equation tan x = 4x, correct to 4 decimal places.
(7) What is the order of convergence of the iteration

x_{n+1} = x_n (x_n^2 + 3a)/(3 x_n^2 + a)?

(8) … does the iteration x_{n+1} = 1 + x_n converge to any of these solutions (assuming x_0 is chosen sufficiently close to α)?

(9) (a) Consider applying the Newton method to the function

f(x) = √x, x ≥ 0;   f(x) = −√(−x), x < 0,

with the root α = 0. What is the behavior of the iterates? Do they converge, and if so, at what rate?
(b) Do the same as in (a), but with

f(x) = ∛(x^2), x ≥ 0;   f(x) = −∛(x^2), x < 0.

(10) Find all positive roots of the equation

∫_0^x e^{−t^2} dt = 1/10.

Hint: f(x) = 10 ∫_0^x e^{−t^2} dt − 1 = 0. This equation has two positive roots in the intervals (0, 1) and (1, 2).

(11) Show that

x_{n+1} = x_n (x_n^2 + 3a)/(3 x_n^2 + a),   n ≥ 0,

converges to √a with

lim_{n→∞} (√a − x_{n+1})/(√a − x_n)^3 = 1/(4a),

assuming x_0 has been chosen sufficiently close to the root.

(12) Show that the following two sequences have convergence of the second order with the same limit √a:

        (i)  x_{n+1} = (1/2) x_n (1 + a/x_n^2),        (ii) x_{n+1} = (1/2) x_n (3 − x_n^2/a).

If x_n is a suitably close approximation to √a, show that the error in the first formula for x_{n+1} is about one-third of that in the second formula, and deduce that the formula

        x_{n+1} = (x_n/8) (6 + 3a/x_n^2 − x_n^2/a)

gives a sequence with third-order convergence.
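The one-third relationship in (12) is easy to observe numerically (a = 2 and x_n = 1.5 are my own illustrative values):

```python
import math

a, x = 2.0, 1.5
r = math.sqrt(a)
e1 = 0.5 * x * (1 + a / x**2) - r     # error of formula (i)
e2 = 0.5 * x * (3 - x**2 / a) - r     # error of formula (ii)
# e1 and e2 have opposite signs and |e1| is roughly |e2| / 3,
# so the combination (3*(i) + (ii))/4 cancels the leading error term.
```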

(13) Suppose α is a zero of multiplicity m of f, where f^(m) is continuous on an open interval containing α. Show that the fixed-point method x = g(x) with the following g has second-order convergence:

        g(x) = x − m f(x)/f'(x).
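For (13), a quick experiment with the double root of f(x) = (x − 1)^2 shows why the factor m matters: plain Newton only halves the error at each step, while the modified step with m = 2 lands on the root immediately for this particular f (one-step exactness is special to a pure quadratic; in general the gain is second-order convergence):

```python
f  = lambda x: (x - 1.0)**2          # double root at x = 1, so m = 2
fp = lambda x: 2.0 * (x - 1.0)

x_plain, x_mod = 2.0, 2.0
x_plain -= f(x_plain) / fp(x_plain)  # plain Newton: halves the error
x_mod   -= 2 * f(x_mod) / fp(x_mod)  # modified step g(x) = x - m f/f'
# x_plain == 1.5 (error halved), x_mod == 1.0 (exact for this quadratic)
```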


Bibliography

[Gerald]   C. F. Gerald and P. O. Wheatley. Applied Numerical Analysis, 7th edition, Pearson, 2003.

[Atkinson] K. Atkinson and W. Han. Elementary Numerical Analysis, 3rd edition, John Wiley & Sons, 2004.

[Jain]     M. K. Jain, S. R. K. Iyengar, and R. K. Jain. Numerical Methods for Scientific and Engineering Computation, 6th edition, New Age International Publishers, New Delhi, 2012.
