
Fixed Point Iteration

Univ.-Prof. Dr.-Ing. habil. Josef BETTEN
RWTH Aachen University
Mathematical Models in Materials Science and Continuum Mechanics
Augustinerbach 4-20
D-52056 Aachen, Germany
<betten@mmw.rwth-aachen.de>

Abstract
This worksheet is concerned with finding numerical solutions of non-linear equations
in a single unknown. Using MAPLE 12 the fixed-point iteration has been applied to
some examples.
Keywords: zero form and fixed point form; LIPSCHITZ constant; a-priori and
a-posteriori error estimation; BANACH's fixed-point theorem

Introduction
A value x = p is called a fixed point of a given function g(x) if g(p) = p. In finding the
solution x = p of f(x) = 0 one can define functions g(x) with a fixed point at x = p in
several ways, for example as g(x) = x - f(x) or as g(x) = x - h(x)*f(x), where h(x) is a
continuous function that does not vanish on the interval [a, b] under consideration.
The iteration process is expressed by
> restart:
> x[n+1]:=g(x[n]); # n = 0,1,2,...
x[n+1] := g(x[n])
with a selected initial value for n = 0 in the neighbourhood of x = p.
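Outside MAPLE, the same scheme can be sketched in a few lines of Python (the function name, tolerance and iteration cap below are illustrative choices, not taken from the worksheet):

```python
import math

# Minimal sketch of the iteration x[n+1] = g(x[n]); the tolerance and the
# iteration cap are illustrative assumptions, not worksheet values.
def fixed_point(g, x0, tol=1e-10, max_iter=100):
    """Iterate x = g(x) until two successive values agree within tol."""
    x = x0
    for n in range(max_iter):
        x_new = g(x)
        if abs(x_new - x) < tol:
            return x_new, n + 1
        x = x_new
    raise RuntimeError("no convergence within max_iter iterations")

# first example of the worksheet: g(x) = cos(x), initial value 0.5
p, n = fixed_point(math.cos, 0.5)
```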

BANACH's Fixed-Point Theorem
Let g(x) be a continuous function that maps the interval [a, b] into itself. Assume,
in addition, that g'(x) exists on (a, b) and that a constant L ∈ [0, 1) exists with
> restart:
> abs(diff(g(x),x))<=L;
| d/dx g(x) | ≤ L
for all x in [a, b]. Then, for any selected initial value in [a, b] the sequence defined by
> x[n+1]:=g(x[n]); # n = 0,1,2,..
x[n+1] := g(x[n])
converges to the unique fixed point p in [a, b]. The constant L is known as the LIPSCHITZ
constant. Based upon the mean value theorem we arrive from the above assumption at
> abs(g(x)-g(xi))<=L*abs(x-xi);
| g(x) − g(ξ) | ≤ L | x − ξ |
for all x and xi in [a, b]. The BANACH fixed-point theorem is sometimes called the
contraction mapping principle.
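The contraction property behind the theorem can be spot-checked numerically. The following Python sketch (an illustration, not part of the worksheet) samples random pairs for g(x) = cos(x) on [0, 1], where L = max |g'| = sin(1) ≈ 0.84:

```python
import math
import random

# Spot-check of |g(x) - g(xi)| <= L*|x - xi| for g(x) = cos(x) on [0, 1];
# on this interval L = sin(1) is a valid LIPSCHITZ constant, since |g'| = |sin|.
g = math.cos
L = math.sin(1.0)

random.seed(0)  # deterministic sampling
for _ in range(1000):
    x, xi = random.uniform(0.0, 1.0), random.uniform(0.0, 1.0)
    assert abs(g(x) - g(xi)) <= L * abs(x - xi) + 1e-15
```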
From the fixed-point theorem one can prove the following error estimates:
a-priori error estimate:
> abs(x[k]-p)<=(L^k/(1-L))*abs(x[1]-x[0]); alpha:=rhs(%);

| x[k] − p | ≤ (L^k/(1−L)) | x[1] − x[0] |

α := (L^k/(1−L)) | x[1] − x[0] |
>
The rate of convergence depends on the factor L^k: the smaller the value of L, the faster
the convergence, which is very slow if the LIPSCHITZ constant L is close to one.
The necessary number of iterations for a given error "epsilon" can be calculated by
the following formula [see equation (8.36) in: BETTEN, J.: Finite Elemente für Ingenieure 2,
zweite Auflage, 2004, Springer-Verlag, Berlin / Heidelberg / New York]:
> iterations[epsilon]>=ln((1-L)*epsilon/abs(x[1]-x[0]))/ln(L);
ln( (1 − L) ε / | x[1] − x[0] | ) / ln(L) ≤ iterations_ε
>
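The iteration-count formula can be evaluated directly; here is a hedged Python sketch with the illustrative values L = 0.5 and |x[1] - x[0]| = 0.5 (these numbers are chosen for demonstration, they are not fixed by the worksheet at this point):

```python
import math

# A-priori bound on the number of iterations for a tolerated error eps:
# k >= ln((1 - L)*eps/|x1 - x0|)/ln(L), as derived above.
def required_iterations(L, x0, x1, eps):
    return math.log((1.0 - L) * eps / abs(x1 - x0)) / math.log(L)

k = required_iterations(0.5, 0.0, 0.5, 1e-3)  # about 10 iterations
```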
a-posteriori error estimate:
> restart:
> abs(x[k]-p)<=(L/(1-L))*abs(x[k]-x[k-1]); beta:=rhs(%);

| x[k] − p | ≤ (L/(1−L)) | x[k] − x[k−1] |

β := (L/(1−L)) | x[k] − x[k−1] |
where
> restart:


> alpha>=beta;
β ≤ α

Examples
In the following some examples of fixed point iterations are discussed. The first example is concerned with finding the solution of f(x) = 0, where
> restart:
> f(x):=x-cos(x);
f(x) := x − cos(x)
Using the MAPLE command "fsolve" we immediately arrive at the solution:
> Digits:=4; p:=fsolve(f(x)=0);
Digits := 4
p := 0.7391
The fixed point form is x = g(x) with g(x) = cos(x). The graph y = g(x) is contained in the square { (x, y) | x ∈ [0, 1], y ∈ [0, 1] }, id est: " g " maps the interval [0, 1] to itself. Thus, one can read the fixed point from the following Figure:
(Figure: y = x and y = cos(x) on [0, 1] with the fixed point marked; title "Fixed Point at x = 0.7391")
The fixed point iteration is given as follows:
> x[0]:=0.5; x[1]:=evalf(subs(x=0.5,cos(x)));
x[0] := 0.5
x[1] := 0.8776
> for i from 2 to 23 do x[i]:=evalf(subs(x=%,cos(x))) od;
x[2] := 0.6390    x[3] := 0.8027    x[4] := 0.6948    x[5] := 0.7682
x[6] := 0.7192    x[7] := 0.7523    x[8] := 0.7301    x[9] := 0.7451
x[10] := 0.7350   x[11] := 0.7418   x[12] := 0.7373   x[13] := 0.7403
x[14] := 0.7383   x[15] := 0.7396   x[16] := 0.7387   x[17] := 0.7393
x[18] := 0.7389   x[19] := 0.7392   x[20] := 0.7390   x[21] := 0.7391
x[22] := 0.7391   x[23] := 0.7391
Using the starting point x[0] = 0.5 we need 21 iterations in order to arrive at the solution p = 0.7391. If we choose x[0] = 0 or x[0] = 1 we arrive at the fixed point after 23 or 22 iterations, respectively.
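The iteration counts for the different starting values can be re-run as a Python sketch; since the worksheet uses 4-digit Maple arithmetic (Digits := 4), the counts obtained here with double precision may differ by an iteration or two:

```python
import math

# Count how many iterations of x = cos(x) are needed until the iterate
# agrees with p = 0.7391 to four decimals, for the starting values 0, 0.5, 1.
p = 0.7390851332

def count_iterations(x):
    n = 0
    while round(x, 4) != round(p, 4):
        x, n = math.cos(x), n + 1
    return n

counts = {x0: count_iterations(x0) for x0 in (0.0, 0.5, 1.0)}
```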

The next example is concerned with finding the solution of f(x) = 0, where
> restart:
> f(x):=x-(1/2)*(sin(x)+cos(x));
f(x) := x − (1/2) sin(x) − (1/2) cos(x)
Using the MAPLE command "fsolve" we immediately arrive at the solution:
> p:=fsolve(f(x)=0);
p := 0.7048120020
The fixed point form is x = g(x) with g(x) = [sin(x) + cos(x)]/2. Thus, one can read the fixed point from the following Figure:
(Figure: y = x and y = g(x) on [0, 1]; title "Fixed Point at x = 0.7048")
The fixed point iteration is given as follows:
> g(x):=(sin(x)+cos(x))/2;
g(x) := (1/2) sin(x) + (1/2) cos(x)
> x[0]:=0; x[1]:=evalf(subs(x=0,g(x)));
x[0] := 0
x[1] := 0.50
> for i from 2 to 10 do x[i]:=evalf(subs(x=%,g(x))) od;
x[2] := 0.6785040503
x[3] := 0.7030708012
x[4] := 0.7047118221
x[5] := 0.7048062961
x[6] := 0.7048116773
x[7] := 0.7048119834
x[8] := 0.7048120009
x[9] := 0.7048120019
x[10] := 0.7048120020
We see, the 10th iteration x[10] is identical to the above MAPLE solution. The following steps are concerned with both the a-priori and the a-posteriori error estimate. At first we determine the LIPSCHITZ constant from the absolute derivative of the function g(x):
> absolute_derivative:=abs(Diff(g(xi),xi))=abs(diff(g(x),x));
| d/dξ g(ξ) | = | (1/2) cos(x) − (1/2) sin(x) |
(Figure: | g'(x) | on [0, 1]; title "Absolute Derivative of g(x)")
The greatest value of the absolute derivative in the neighbourhood of the expected fixed point, here in the interval x = [0, 1], may be assumed to be the LIPSCHITZ constant:
> L:=evalf(subs(x=0,abs(diff(g(x),x))),2);
L := 0.50
Assuming " i " iterations, the a-priori error estimate is given by:
> for i from 1 by 3 to 10 do a_priori_estimate[i]:=evalf((L^i/(1-L))*abs(x[1]-x[0]),5) od;
a_priori_estimate[1] := 0.50000
a_priori_estimate[4] := 0.062500
a_priori_estimate[7] := 0.0078125
a_priori_estimate[10] := 0.00097655
The necessary number of iterations for a given error "epsilon" can be calculated by the formula derived above:
> iterations[epsilon]:=evalf(ln((1-L)*epsilon/abs(x[1]-x[0]))/ln(L),3);
iterations_ε := −1.44 ln(1.00 ε)
> for i from 1 to 5 do iterations[epsilon=10^(-i)]:=evalf(ln((1-L)*10^(-i)/abs(x[1]-x[0]))/ln(L),2) od;
iterations_{ε=1/10} := 3.3
iterations_{ε=1/100} := 6.6
iterations_{ε=1/1000} := 10.
iterations_{ε=1/10000} := 13.
iterations_{ε=1/100000} := 17.
The necessary a-priori iterations can alternatively be expressed as follows:
> a_priori_iterations[epsilon]:=-1.44*ln(1.00*epsilon);
a_priori_iterations_ε := −1.44 ln(1.00 ε)
> for k in [0.1,0.01,0.001,0.0001,0.000001,0.1*10^(-9)] do a_priori_iterations[epsilon=k]:=evalf(-1.44*ln(k),2) od;
a_priori_iterations_{ε=0.1} := 3.3
a_priori_iterations_{ε=0.01} := 6.6
a_priori_iterations_{ε=0.001} := 9.9
a_priori_iterations_{ε=0.0001} := 13.
a_priori_iterations_{ε=0.000001} := 20.
a_priori_iterations_{ε=0.1e-9} := 33.
The a-posteriori error estimate after the 10th iteration is given by:
> a_posteriori_estimate[k]:=(L_/(1-L_))*abs(x[k]-x[k-1]);
a_posteriori_estimate[k] := (L_/(1−L_)) | x[k] − x[k−1] |
> x[k]:=x[10]; x[k-1]:=x[9];
x[k] := 0.7048120020
x[k−1] := 0.7048120019
> a_posteriori_estimate[10]:=subs({k=10,L_=L},%);
a_posteriori_estimate[10] := 0.1000000000e-9
The 10th iteration is identical to the MAPLE solution: x[10] = p. The necessary a-priori iterations with the above a-posteriori error ε = 0.1*10^(-9) are given by:
> a_priori_iterations[epsilon=0.1*10^(-9)]:=evalf(-1.44*ln(0.1*10^(-9)),2);
a_priori_iterations_{ε=0.1e-9} := 33.
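Both error bounds can be checked against the true error in plain Python; the sketch below (illustrative, not part of the Maple session) verifies that the a-posteriori bound (L/(1-L))*|x[k] - x[k-1]| with L = 0.5 indeed dominates |x[k] - p| at every step:

```python
import math

# g(x) = (sin(x) + cos(x))/2 with the worksheet's LIPSCHITZ constant L = 0.5;
# the a-posteriori bound must dominate the true error |x[k] - p|.
g = lambda x: 0.5 * (math.sin(x) + math.cos(x))
p = 0.7048120020
L = 0.5

x_prev, x = 0.0, g(0.0)          # x[0] = 0, x[1] = 0.5
for _ in range(9):               # checks k = 1 .. 9
    bound = (L / (1.0 - L)) * abs(x - x_prev)
    assert abs(x - p) <= bound
    x_prev, x = x, g(x)
```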

Another example is concerned with the zero form f(x) = 0, where
> restart:
> f(x):=1+cosh(x)*cos(x);
f(x) := 1 + cosh(x) cos(x)
The MAPLE command "fsolve" immediately furnishes the solution:
> p:=fsolve(f(x)=0);
p := 1.875104069
In this example we have to introduce a continuous function h(x) according to BANACH's theorem. A suitable function is given by:
> h(x):=-exp(-x);
h(x) := −e^(−x)
> g(x):=x-h(x)*f(x);
g(x) := x + e^(−x) (1 + cosh(x) cos(x))
The fixed point form is x = g(x) with g(x) = x − h(x)*f(x). Thus, one can read the fixed point from the following Figure:
(Figure: y = x and y = g(x) with the squares [1, 3] × [1, 3] and [1.5, 2] × [1.5, 2]; title "Fixed Point at x = 1.8751")
In this Figure we have drawn two squares, in which the fixed point x = 1.8751 is contained. That is: the operator " g " maps an interval [a, b] into itself. Both x and y = g(x) are elements of [a, b] = [1, 3] or, alternatively, of [a, b] = [1.5, 2]. In the following the fixed point iteration is discussed based upon the two squares. Firstly, let us discuss the iteration based upon the greater square. After that we compare these results with the rate of convergence based upon the smaller square. We will see that the rate of convergence is faster by considering the smaller square.
The LIPSCHITZ constant can be determined from the absolute derivative of the function g(x):
> absolute_derivative:=abs(Diff(g(xi),xi))=abs(diff(g(x),x)):
> simplify(%):
> subs({sinh(x)=(exp(x)-exp(-x))/2,cosh(x)=(exp(x)+exp(-x))/2},%):
> simplify(%);
| d/dξ g(ξ) | = | −1 + e^(−x) + e^(−2x) cos(x) + (1/2) sin(x) + (1/2) e^(−2x) sin(x) |
(Figure: | g'(x) | on [1, 3], valid for both squares; title "Absolute Derivative of g(x)")
The greatest value of the derivative in this Figure can be considered as the LIPSCHITZ constant:
> L:=evalf(subs(x=3,abs(diff(g(x),x))),4);
L := 0.8824
Assuming the starting point x[0] = 2, we arrive at the following fixed point iteration:
> x[0]:=2; x[1]:=evalf(subs(x=2,g(x)));
x[0] := 2
x[1] := 1.923450867
> for i from 2 to 21 do x[i]:=evalf(subs(x=%,g(x))) od;
x[2] := 1.893171371
x[3] := 1.881762275
x[4] := 1.877544885
x[5] := 1.875997102
x[6] := 1.875430574
x[7] := 1.875223412
x[8] := 1.875147687
x[9] := 1.875120010
x[10] := 1.875109895
x[11] := 1.875106198
x[12] := 1.875104847
x[13] := 1.875104353
x[14] := 1.875104173
x[15] := 1.875104107
x[16] := 1.875104083
x[17] := 1.875104074
x[18] := 1.875104071
x[19] := 1.875104070
x[20] := 1.875104069
x[21] := 1.875104069
We see, the 20th iteration x[20] is identical to the MAPLE solution p = 1.875104069. The following steps illustrate both the a-priori and the a-posteriori error estimate. Assuming " i " iterations, the a-priori estimate is given by:
> alpha:=Lambda^k*abs(xi[1]-xi[0])/(1-Lambda);
α := Λ^k | ξ[1] − ξ[0] | / (1 − Λ)
> alpha:=evalf(subs({k=20,Lambda=L,xi[0]=x[0],xi[1]=x[1]},%),4);
α := 0.05363
> for i in [1,5,10,15,20,30,40,50] do a_priori_estimate[i]:=evalf(L^i*abs(x[1]-x[0])/(1-L),4) od;
a_priori_estimate[1] := 0.5777
a_priori_estimate[5] := 0.3503
a_priori_estimate[10] := 0.1874
a_priori_estimate[15] := 0.1003
a_priori_estimate[20] := 0.05363
a_priori_estimate[30] := 0.01535
a_priori_estimate[40] := 0.004392
a_priori_estimate[50] := 0.001257
The necessary number of iterations for a given error " epsilon " can be calculated by the formula derived above:
> iterations[epsilon]:=evalf(ln((1-L)*epsilon/abs(x[1]-x[0]))/ln(L),4);
iterations_ε := −7.994 ln(1.527 ε)
> for i in [0.5777,0.3503,0.1003,0.05363,0.01535,0.004392,0.001257] do iterations[epsilon=i]:=evalf(subs(epsilon=i,iterations[epsilon]),2) od;
iterations_{ε=0.5777} := 1.0
iterations_{ε=0.3503} := 5.0
iterations_{ε=0.1003} := 15.
iterations_{ε=0.05363} := 20.
iterations_{ε=0.01535} := 30.
iterations_{ε=0.004392} := 40.
iterations_{ε=0.001257} := 50.
The a-posteriori error estimate after the 20th iteration is given by:
> x[20]:=1.875104069; x[19]:=1.875104070;
x[20] := 1.875104069
x[19] := 1.875104070
> beta:=a_posteriori_estimate[20]=L*abs(x[20]-x[19])/(1-L);
β := a_posteriori_estimate[20] = 0.7503401361e-8
Inserting this value into the a-priori estimate, we find the following number of necessary iterations:
> a_priori_iterations[epsilon=evalf(0.75*10^(-8),2)]:=evalf(subs(epsilon=0.75*10^(-8),iterations[epsilon]),4);
a_priori_iterations_{ε=0.75e-8} := 146.2
Besides the fixed point p = 1.875104069 the function f(x) = 1 + cosh(x)*cos(x) has the following zeros:
> for i from 1 to 8 do ZERO[i-1,i]:=fsolve(f(x)=0,x,i-1..i) od;
ZERO[0,1] := fsolve(1 + cosh(x) cos(x) = 0, x, 0 .. 1)
ZERO[1,2] := 1.875104069
ZERO[2,3] := fsolve(1 + cosh(x) cos(x) = 0, x, 2 .. 3)
ZERO[3,4] := fsolve(1 + cosh(x) cos(x) = 0, x, 3 .. 4)
ZERO[4,5] := 4.694091133
ZERO[5,6] := fsolve(1 + cosh(x) cos(x) = 0, x, 5 .. 6)
ZERO[6,7] := fsolve(1 + cosh(x) cos(x) = 0, x, 6 .. 7)
ZERO[7,8] := 7.854757438
(An unevaluated fsolve call indicates that there is no zero in the corresponding subinterval.)
(Figure: f(x) = 1 + cosh(x)*cos(x); title "f(x) := 1 + cosh(x)*cos(x)")
In the above discussion we have considered the range x = [1, 3], where the LIPSCHITZ constant is L ∈ [0, 1). In this range we find the fixed point p = 1.875104069. Note, in the above considered interval x = [1, 3] the absolute derivative of the function g(x) is smaller than one:
> x[derivative=1]:=evalf(fsolve(abs(diff(g(x),x))=1,x,3..4),3);
x_{derivative=1} := 3.22
We see, instead of the interval x = [1, 3] it would be possible and more comfortable to consider the range x = [1.5, 2]. The absolute derivative of the function g(x) in the interval x = [0, 5] is illustrated in the following Figure:
(Figure: | g'(x) | on [0, 5]; title "Absolute Derivative of g(x)")
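As a cross-check, the iteration for this example can be reproduced in Python (an illustrative re-run; the worksheet's Maple session is authoritative for the printed digits):

```python
import math

# g(x) = x + exp(-x)*(1 + cosh(x)*cos(x)) for the zero of f(x) = 1 + cosh(x)*cos(x);
# starting at x[0] = 2 the iteration converges to p = 1.875104069.
g = lambda x: x + math.exp(-x) * (1.0 + math.cosh(x) * math.cos(x))

x = 2.0
for _ in range(30):
    x = g(x)
```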

The LIPSCHITZ constant can now be determined from the absolute derivative | g'(x) | on the smaller square, in contrast to the foregoing results:
> restart:
> f(x):=1+cosh(x)*cos(x);
f(x) := 1 + cosh(x) cos(x)
> p:=fsolve(f(x)=0);
p := 1.875104069
> h(x):=-exp(-x);
h(x) := −e^(−x)
> g(x):=x-h(x)*f(x);
g(x) := x + e^(−x) (1 + cosh(x) cos(x))
(Figure: y = x and y = g(x) on the square [1.5, 2] × [1.5, 2]; title "Fixed Point at x = 1.8751")
Then we arrive at the following details:
> absolute_derivative:=abs(Diff(g(xi),xi))=simplify(abs(diff(g(x),x))):
> simplify(subs({sinh(x)=(exp(x)-exp(-x))/2,cosh(x)=(exp(x)+exp(-x))/2},%));
| d/dξ g(ξ) | = | −1 + e^(−x) + e^(−2x) cos(x) + (1/2) sin(x) + (1/2) e^(−2x) sin(x) |
(Figure: | g'(x) | on [1.5, 2]; title "Absolute Derivative | g'(x) |")
The greatest value of the derivative in this Figure can be considered as the LIPSCHITZ constant:
> L:=evalf(subs(x=2,abs(diff(g(x),x))),4);
L := 0.4090
However, the fixed point " p " is "not yet" known. Inserting the fixed point p = 1.8751 into the absolute derivative, one arrives at an optimal value:
> L[opt]:=evalf(subs(x=1.8751,abs(diff(g(x),x))),4);
L[opt] := 0.3651
In the following, the results are based upon the value L = 0.4090 in contrast to the foregoing value L = 0.8824. From the foregoing fixed point iteration we know the following data:
> x[0]:=2; x[1]:=1.923450867; x[19]:=1.875104070; x[20]:=1.875104069; L:=0.4090;
x[0] := 2
x[1] := 1.923450867
x[19] := 1.875104070
x[20] := 1.875104069
L := 0.4090
We see, the 20th iteration x[20] is identical to the MAPLE solution found by using the MAPLE command fsolve. The following steps illustrate both the a-priori and the a-posteriori error estimate. With the improved LIPSCHITZ constant L = 0.4090 in contrast to L = 0.8824 we calculate:
> alpha:=Lambda^k*abs(xi[1]-xi[0])/(1-Lambda);
α := Λ^k | ξ[1] − ξ[0] | / (1 − Λ)
> alpha:=evalf(subs({k=20,Lambda=L,xi[0]=x[0],xi[1]=x[1]},%),2);
α := 0.30e-8
Assuming " i " iterations, the a-priori estimate is given by:
> for i in [1,5,10,15,20] do a_priori_estimate[i]:=evalf(L^i*abs(x[1]-x[0])/(1-L),2) od;
a_priori_estimate[1] := 0.069
a_priori_estimate[5] := 0.0020
a_priori_estimate[10] := 0.000022
a_priori_estimate[15] := 0.27e-6
a_priori_estimate[20] := 0.30e-8
The necessary number of iterations for a tolerated error " epsilon " can be calculated by the formula derived above:
> iterations[epsilon]:=evalf(ln((1-L)*epsilon/abs(x[1]-x[0]))/ln(L),4);
iterations_ε := −1.119 ln(7.675 ε)
> for i in [0.069,0.002,0.000022,0.27*10^(-6),0.30*10^(-8)] do iterations[epsilon=i]:=evalf(subs(epsilon=i,iterations[epsilon]),2) od;
iterations_{ε=0.069} := 0.69
iterations_{ε=0.002} := 4.6
iterations_{ε=0.000022} := 9.6
iterations_{ε=0.27e-6} := 14.
iterations_{ε=0.30e-8} := 20.
The a-posteriori error estimate after the 20th iteration is given by:
> beta:=Lambda*abs(xi[k]-xi[k-1])/(1-Lambda);
β := Λ | ξ[k] − ξ[k−1] | / (1 − Λ)
> beta:=simplify(subs({k=20,Lambda=L,xi[k]=x[20],xi[k-1]=x[19]},%));
β := 0.6920473773e-9
> beta:=evalf(%,2);
β := 0.69e-9
> Q:=Alpha/Beta=evalf(alpha/beta,3);
Q := Α/Β = 4.35
Inserting the value of beta = 0.69*10^(-9) into the a-priori estimate, based upon L = 0.4090, we find again the necessary iterations:
> a_priori_iterations[epsilon=evalf(0.69*10^(-9),2)]:=evalf(subs(epsilon=0.69*10^(-9),iterations[epsilon]),2);
a_priori_iterations_{ε=0.69e-9} := 21.
Comparing the above results, we see that the rate of convergence essentially depends on the factor L^k in the formula for alpha: the smaller the value of L, the faster the convergence, which is very slow if the LIPSCHITZ constant L is close to one.
The next example is concerned with the zero form f(x) = x − exp(x^2 − 2) = 0:
> restart:
> f(x):=x-exp(x^2-2); g(x):=x-f(x);
f(x) := x − e^(x^2 − 2)
g(x) := e^(x^2 − 2)
The MAPLE command "fsolve" immediately furnishes the solution:
> p:=fsolve(f(x)=0);
p := 0.1379348256
The fixed point form is x = g(x) with g(x) = exp(x^2 − 2). Thus, one can read the fixed point from the following Figure:
(Figure: y = x and y = g(x) on [0, 0.2]; title "Fixed Point at x = 0.1379")

The fixed point iteration is given as follows:
> x[0]:=0.12; x[1]:=evalf(subs(x=0.12,g(x)));
x[0] := 0.12
x[1] := 0.1372982105
> for i from 2 to 7 do x[i]:=evalf(subs(x=%,g(x))) od;
x[2] := 0.1379106591
x[3] := 0.1379339061
x[4] := 0.1379347905
x[5] := 0.1379348242
x[6] := 0.1379348256
x[7] := 0.1379348256
We see, the 6th iteration x[6] is identical to the above MAPLE solution. The following steps are concerned with both the a-priori and the a-posteriori error estimate. At first we determine the LIPSCHITZ constant from the absolute derivative | g'(x) |:
> absolute_derivative:=abs(Diff(g(xi),xi))=diff(g(x),x);
| d/dξ g(ξ) | = 2 x e^(x^2 − 2)
(Figure: | g'(x) | on [0, 0.2]; title "Absolute Derivative | g'(x) |")
The greatest value of the absolute derivative in the neighbourhood of the expected fixed point, here in the interval x = [0, 0.2], may be assumed to be the LIPSCHITZ constant:
> L:=evalf(subs(x=0.2,diff(g(x),x)),2);
L := 0.056
Assuming " i " iterations, the a-priori error estimate is given by:
> for i from 1 to 6 do a_priori_estimate[i]:=evalf((L^i/(1-L))*abs(x[1]-x[0]),2) od;
a_priori_estimate[1] := 0.0012
a_priori_estimate[2] := 0.000066
a_priori_estimate[3] := 0.38e-5
a_priori_estimate[4] := 0.20e-6
a_priori_estimate[5] := 0.12e-7
a_priori_estimate[6] := 0.66e-9
The necessary number of iterations for a given error "epsilon" can be calculated by the formula derived above:
> iterations[epsilon]>=ln((1-Lambda)*epsilon/abs(xi[1]-xi[0]))/ln(Lambda);
ln( (1 − Λ) ε / | ξ[1] − ξ[0] | ) / ln(Λ) ≤ iterations_ε
> iterations[epsilon]:=evalf(subs({Lambda=L,xi[1]=x[1],xi[0]=x[0]},lhs(%)),4);
iterations_ε := −0.3470 ln(54.56 ε)
> for i in [0.0012,0.000066,0.38*10^(-5),0.20*10^(-6),0.12*10^(-7),0.66*10^(-9)] do iterations[epsilon=i]:=evalf(subs(epsilon=i,iterations[epsilon]),2) od;
iterations_{ε=0.0012} := 0.94
iterations_{ε=0.000066} := 2.0
iterations_{ε=0.38e-5} := 3.0
iterations_{ε=0.20e-6} := 3.9
iterations_{ε=0.12e-7} := 4.9
iterations_{ε=0.66e-9} := 6.0
The a-posteriori error estimate after the 6th iteration is given by:
> x[5]:=0.1372982105; x[6]:=0.1379348242; L:=0.056;

x[5] := 0.1379348242
x[6] := 0.1379348256
L := 0.056
> beta:=a_posteriori_estimate[6]=L*abs(x[6]-x[5])/(1-L);
β := a_posteriori_estimate[6] = 0.8305084746e-10
> alpha:=a_priori_error_estimate[6]=L^6*abs(x[1]-x[0])/(1-L);
α := a_priori_error_estimate[6] = 0.5651416893e-9
> Q:=Alpha/Beta=evalf(0.565142*10^(-9)/(0.83051*10^(-10)),4);
Q := Α/Β = 6.805
Inserting the value of beta = 0.8305*10^(-10) into the a-priori estimate, we find again the necessary iterations:
> a_priori_iterations[epsilon=evalf(0.8305*10^(-10),2)]:=evalf(subs(epsilon=0.8305*10^(-10),iterations[epsilon]),2);
a_priori_iterations_{ε=0.80e-10} := 6.7
In the last example the convergence is very fast, since the LIPSCHITZ constant L = 0.056 is very small.

LEGENDRE Polynomials
Evaluating integrals numerically, the GAUSS-LEGENDRE quadrature is very convenient. In order to apply this method one needs the so-called GAUSS points, which are identical to the zeros of the LEGENDRE polynomials [BETTEN, J.: Finite Elemente für Ingenieure 2, zweite Auflage, Springer-Verlag, Berlin / Heidelberg / New York 2004].
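To illustrate why the GAUSS points matter, here is a small pure-Python sketch of the 4-point GAUSS-LEGENDRE rule; the nodes are the zeros of the LEGENDRE polynomial P(4, x), and the weights are the standard tabulated values (quoted here for illustration, they are not computed in the worksheet):

```python
# 4-point GAUSS-LEGENDRE quadrature on [-1, 1]: nodes = zeros of P(4, x),
# standard weights; the rule is exact for polynomials up to degree 2*4 - 1 = 7.
nodes = [-0.8611363116, -0.3399810436, 0.3399810436, 0.8611363116]
weights = [0.3478548451, 0.6521451549, 0.6521451549, 0.3478548451]

def gauss4(f):
    return sum(w * f(x) for w, x in zip(weights, nodes))

approx = gauss4(lambda x: x**6)  # exact integral of x^6 over [-1, 1] is 2/7
```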

Using the MAPLE package orthopoly we immediately find the LEGENDRE polynomials:
> restart:
> with(orthopoly):
> Legendre[n]:=P(n,x);
Legendre[n] := P(n, x)
> for i in [0,1,4,5,8] do Legendre[i]:=P(i,x) od;
Legendre[0] := 1
Legendre[1] := x
Legendre[4] := (35/8) x^4 − (15/4) x^2 + 3/8
Legendre[5] := (63/8) x^5 − (35/4) x^3 + (15/8) x
Legendre[8] := (6435/128) x^8 − (3003/32) x^6 + (3465/64) x^4 − (315/32) x^2 + 35/128
the zeros of which in the interval (0, 1) are given as follows:
> for i in [1,4,5,8] do ZERO[i]:=fsolve(P(i,x)=0,x,0..1) od;
ZERO[1] := 0.
ZERO[4] := 0.3399810436, 0.8611363116
ZERO[5] := 0., 0.5384693101, 0.9061798459
ZERO[8] := 0.1834346425, 0.5255324099, 0.7966664774, 0.9602898565
The roots of the LEGENDRE polynomials lie in the interval (-1, 1) and have a symmetry with respect to the origin, as has been illustrated in the following Figure:
(Figure: P(1,x), P(3,x), P(4,x), P(5,x) on [-1, 1]; title "LEGENDRE Polynomials P(n,x)")
Instead of using the MAPLE command fsolve in finding the zeros of the LEGENDRE polynomials, we are going to use the fixed point iteration in the following. In the first example we consider the polynomial P(4,x):
> restart:
> with(orthopoly):
> Legendre[P][4]:=P(4,x);
Legendre[P][4] := (35/8) x^4 − (15/4) x^2 + 3/8
> f(x):=8*%;
f(x) := 35 x^4 − 30 x^2 + 3
> ZERO[0,1]:=fsolve(f(x)=0,x,0..0.5);
ZERO[0,1] := 0.3399810436
From the polynomial f(x) we read the following fixed point forms:
> g[1](x):=x+f(x);
g[1](x) := x + 3 + 35 x^4 − 30 x^2
> g[2](x):=sqrt(1/10+7*x^4/6);
g[2](x) := sqrt(90 + 1050 x^4) / 30
> g[4](x):=(6*x^2/7-3/35)^(1/4);
g[4](x) := (6 x^2/7 − 3/35)^(1/4)
One can show that only g[2] is compatible with BANACH's fixed-point theorem:
> g(x):=g[2](x);
g(x) := sqrt(90 + 1050 x^4) / 30
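The selected form g[2] can be iterated outside MAPLE as well; this short Python sketch (illustrative only) reproduces the zero 0.3399810436:

```python
import math

# Fixed point iteration with g[2](x) = sqrt(1/10 + 7*x^4/6), started at 0.3,
# converging to the LEGENDRE zero 0.3399810436 of P(4, x).
g2 = lambda x: math.sqrt(0.1 + 7.0 * x**4 / 6.0)

x = 0.3
for _ in range(40):
    x = g2(x)
```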

Thus, one can read the fixed point from the following Figure:
(Figure: y = x and y = g(x) on the square [0.3, 0.4] × [0.3, 0.4]; title "Fixed Point at x = 0.34")
This Figure shows that the operator " g " maps the interval x = [0.3, 0.4] into itself: the graph y = g(x) is contained in the square Q = { (x, y) | x ∈ [0.3, 0.4], y ∈ [0.3, 0.4] }. The absolute derivative is given by:
> abs(Diff(g(xi),xi))=abs(diff(g(x),x));
| d/dξ g(ξ) | = 70 x^3 / sqrt(90 + 1050 x^4)
(Figure: | g'(x) | on [0.3, 0.4]; title "Absolute Derivative | g'(x) |")
We see, in the neighbourhood of the expected fixed point x = p the absolute derivative is less than one. The LIPSCHITZ constant can be assumed to be the greatest value in (p − δ, p + δ):

> L:=(p-delta,p+delta)[max];
L := (p − δ, p + δ)[max]
> L:=evalf(subs(x=0.4,abs(diff(g(x),x))),2);
L := 0.41
With the starting point x[0] = 0.3 we get the following fixed point iteration:
> x[0]:=0.3; x[1]:=subs(x=0.3,g(x));
x[0] := 0.3
x[1] := 0.3308322838
> for i from 2 to 15 do x[i]:=subs(x=%,g(x)) od;
x[2] := 0.3376030997
x[3] := 0.3393458083
x[4] := 0.3398101553
x[5] := 0.3399349860
x[6] := 0.3399686240
x[7] := 0.3399776943
x[8] := 0.3399801403
x[9] := 0.3399808000
x[10] := 0.3399809780
x[11] := 0.3399810260
x[12] := 0.3399810390
x[13] := 0.3399810423
x[14] := 0.3399810433
x[15] := 0.3399810437
The 15th iteration x[15] is "identical" to the MAPLE solution based upon the command fsolve. Assuming " i " iterations, the a-priori error estimate is given by:
> for i from 3 by 4 to 15 do a_priori_estimate[i]:=evalf((L^i/(1-L))*abs(x[1]-x[0]),2) od;
a_priori_estimate[3] := 0.0036
a_priori_estimate[7] := 0.000096
a_priori_estimate[11] := 0.28e-5
a_priori_estimate[15] := 0.81e-7
The necessary number of iterations for a given error " epsilon " can be calculated by the formula derived above:
> iterations[epsilon]:=evalf(ln((1-L)*epsilon/abs(x[1]-x[0]))/ln(L),4);
iterations_ε := −1.122 ln(19.16 ε)
> for i in [0.0036,0.000096,0.28*10^(-5),0.81*10^(-7)] do iterations[epsilon=i]:=evalf(subs(epsilon=i,iterations[epsilon]),2) od;
iterations_{ε=0.0036} := 3.0
iterations_{ε=0.000096} := 6.9
iterations_{ε=0.28e-5} := 11.
iterations_{ε=0.81e-7} := 14.
The a-posteriori error estimate after the 15th iteration is given by:
> beta:=a_posteriori_estimate[15]=L*abs(x[15]-x[14])/(1-L);
β := a_posteriori_estimate[15] = 0.2779661017e-9
> beta:=evalf(%,2);
β := a_posteriori_estimate[15] = 0.28e-9
> alpha:=a_priori_estimate[15];
α := 0.81e-7
> Q:=A/B=evalf(0.81*10^(-7)/(0.69*10^(-9)),4);
Q := A/B = 117.4
Inserting the value of beta = 0.28*10^(-9) into the a-priori estimate, we find the necessary a-priori iterations of i = 21 instead of 15 iterations with alpha = 0.81*10^(-7):
> a_priori_iterations[epsilon=evalf(0.28*10^(-9),2)]:=evalf(subs(epsilon=0.28*10^(-9),iterations[epsilon]),2);
a_priori_iterations_{ε=0.28e-9} := 21.
The next example is concerned with finding the zero of the LEGENDRE polynomial P(5, x) in the interval x = [0.5, 0.6]. The results are briefly listed in the following:
> restart:
> with(orthopoly):
> Legendre[p][5]:=P(5,x);
Legendre[p][5] := (63/8) x^5 − (35/4) x^3 + (15/8) x
> f(x):=8*%;
f(x) := 63 x^5 − 70 x^3 + 15 x
> ZERO[0.5,0.6]:=evalf(fsolve(f(x)=0,x,0.5..0.6),8);
ZERO[0.5,0.6] := 0.53846931
From the polynomial f(x) we read the following fixed point forms:
> g[0](x):=x+f(x);
g[0](x) := 16 x + 63 x^5 − 70 x^3

> g[1](x):=(70*x^3-63*x^5)/15;
g[1](x) := (14/3) x^3 − (21/5) x^5
> g[3](x):=((15*x+63*x^5)/70)^(1/3);
g[3](x) := ( (3/14) x + (9/10) x^5 )^(1/3)
> g[5](x):=((70*x^3-15*x)/63)^(1/5);
g[5](x) := ( (10/9) x^3 − (5/21) x )^(1/5)
One can show that only g[3] is compatible with BANACH's fixed-point theorem, since g[3] maps the interval x = [0.5, 0.6] to itself and the absolute derivative | g'[3] | is smaller than one:
> g(x):=g[3](x);
g(x) := ( (3/14) x + (9/10) x^5 )^(1/3)
(Figure: y = x and y = g(x) on the square [0.5, 0.6] × [0.5, 0.6]; title "Fixed Point at x = 0.538")
In this Figure the graph y = g(x) is contained in the square Q = { (x, y) | x ∈ [0.5, 0.6], y ∈ [0.5, 0.6] }; the operator " g " maps the interval x = [0.5, 0.6] into itself. The absolute derivative is given by:
> abs(Diff(g(xi),xi))=abs(diff(g(x),x));
| d/dξ g(ξ) | = ( 3/14 + (9/2) x^4 ) / ( 3 ( (3/14) x + (9/10) x^5 )^(2/3) )
(Figure: | g'(x) | on [0.5, 0.6]; title "Absolute Derivative | g'(x) |")
We see, in the neighbourhood of the expected fixed point x = p the absolute derivative is less than one. The LIPSCHITZ constant can be assumed to be the greatest value in the range (p − δ, p + δ):
> L:=(p-delta,p+delta)[max];
L := (p − δ, p + δ)[max]
> L:=evalf(subs(x=0.6,abs(diff(g(x),x))),2);
L := 0.76
With the starting point x[0] = 0.5 we receive the following fixed point iteration:
> x[0]:=0.5; x[1]:=evalf(subs(x=0.5,g(x)),8);
x[0] := 0.5
x[1] := 0.51333184
> for i from 2 to 25 do x[i]:=evalf(subs(x=%,g(x)),8) od;
x[2] := 0.52180783
x[3] := 0.52732399
x[4] := 0.53096887
x[5] := 0.53340154
x[6] := 0.53503604
x[7] := 0.53613917
x[8] := 0.53688593
x[9] := 0.53739248
x[10] := 0.53773657
x[11] := 0.53797051
x[12] := 0.53812967
x[13] := 0.53823800
x[14] := 0.53831176
x[15] := 0.53836199
x[16] := 0.53839620
x[17] := 0.53841950
x[18] := 0.53843538
x[19] := 0.53844620
x[20] := 0.53845357
x[21] := 0.53845859
x[22] := 0.53846200
x[23] := 0.53846433
x[24] := 0.53846591
x[25] := 0.53846699
The 25th iteration x[25] is "identical" to the MAPLE solution based on the command fsolve. Assuming " i " iterations, the a-priori error estimate is given by:
> for i from 5 by 5 to 25 do a_priori_estimate[i]:=evalf((L^i/(1-L))*abs(x[1]-x[0]),4) od;
a_priori_estimate[5] := 0.01406
a_priori_estimate[10] := 0.003563
a_priori_estimate[15] := 0.0009033
a_priori_estimate[20] := 0.0002290
a_priori_estimate[25] := 0.00005808
The necessary number of iterations for an allowable error " epsilon " can be calculated by the formula derived above:
> iterations[epsilon]:=evalf(ln((1-L)*epsilon/abs(x[1]-x[0]))/ln(L),5);
iterations_ε := −3.6438 ln(18.005 ε)
> for i in [0.014,0.003563,0.0009033,0.000229,0.0000581] do iterations[epsilon=i]:=evalf(subs(epsilon=i,iterations[epsilon]),2) od;
iterations_{ε=0.014} := 5.0
iterations_{ε=0.003563} := 10.
iterations_{ε=0.0009033} := 15.
iterations_{ε=0.000229} := 20.
iterations_{ε=0.0000581} := 25.
The a-posteriori error estimate after the 25th iteration is given by:
> beta:=a_posteriori_estimate[25]=L*abs(x[25]-x[24])/(1-L);
β := a_posteriori_estimate[25] = 0.3420000000e-5
> beta:=evalf(%,3);
β := a_posteriori_estimate[25] = 0.342e-5
> alpha:=a_priori_estimate[25];
α := 0.00005808
> Q:=Alpha/Beta=evalf(alpha/rhs(beta),4);
Q := Α/Β = 16.98
Inserting the value of beta = 0.342*10^(-5) into the a-priori estimate, we find the necessary a-priori iterations of i = 35 instead of 25 iterations with alpha = 0.5808*10^(-4):
> a_priori_iterations[epsilon=evalf(0.342*10^(-5),3)]:=evalf(subs(epsilon=0.342*10^(-5),iterations[epsilon]),2);
a_priori_iterations_{ε=0.342e-5} := 35.
In the same way as before one can find the zeros of the CHEBYSHEV polynomials T(n, x), which are also orthogonal polynomials contained in the MAPLE package orthopoly. Some examples have been discussed by BETTEN, J.: Finite Elemente für Ingenieure 2, zweite Auflage, Springer-Verlag, Berlin / Heidelberg / New York 2004.
Instead of the fixed point iteration one can utilize the NEWTON or NEWTON-RAPHSON iteration for solving root-finding problems. NEWTON's method is one of the most well-known and powerful numerical methods; its convergence order is at least two. In general, a sequence with a high order of convergence converges more rapidly than a sequence with a lower order. In contrast, the fixed point iteration is only linearly convergent. Applications of the fixed point iteration to systems of nonlinear equations have been discussed in more detail, for instance, by BETTEN, J.: "NEWTON's Method in Comparison with the Fixed Point Iteration", published as a worksheet in the MAPLE Application Center.
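The closing remark on convergence orders can be made concrete with a small Python comparison (an illustration added here, not part of the worksheet's Maple code): NEWTON's method for f(x) = x - cos(x) reaches the stopping tolerance in a handful of steps, while the linearly convergent iteration x = cos(x) needs dozens:

```python
import math

# NEWTON's method versus the fixed point iteration for f(x) = x - cos(x);
# both are run to the same residual tolerance.
f = lambda x: x - math.cos(x)
df = lambda x: 1.0 + math.sin(x)
tol = 1e-12

x_newton, newton_steps = 0.5, 0
while abs(f(x_newton)) > tol:
    x_newton -= f(x_newton) / df(x_newton)   # quadratic convergence
    newton_steps += 1

x_fixed, fixed_steps = 0.5, 0
while abs(f(x_fixed)) > tol:
    x_fixed = math.cos(x_fixed)              # linear convergence, L ~ 0.67
    fixed_steps += 1
```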