
3. Nonlinear Equations

One of the most frequently occurring problems in scientific work is to find the roots of equations of the form

    f(x) = 0    (1)

i.e., zeros of the function f(x). The process of solving for the zeros of an equation is known as root finding. The function f(x) may be given explicitly as, for example, a polynomial in x, or as a transcendental function.

There are two distinct phases in finding the roots of a nonlinear equation:

1. bounding the solution
2. refining the solution

Some general philosophy of root finding:

1. Bounding methods should bracket a root, if possible.
2. Good initial approximations are extremely important.

3. For smoothly varying functions, most algorithms will always converge if the initial approximation is close enough.
4. When a problem is to be solved only once or a few times, the efficiency of the method is not of major concern.
5. Polynomials can be solved by any of the methods for solving nonlinear equations; however, the special techniques applicable to polynomials should be considered.

Iterative Methods

The solution process begins with one or more guesses at the solution being sought and then refines these guesses according to the specific rules of the method. Typically, this results in the generation of a sequence of estimates of the solution which, we hope, will converge to the true solution. It is, of course, necessary to analyze the methods to establish conditions under which they do indeed converge.

Iterative Methods

Iterative processes begin with an initial approximation x_0 to a desired real root x = α, obtained by rough graphical methods or otherwise. A recurrence relation is then used to generate a sequence of successive approximations x_1, x_2, …, x_n, … which converges (in a certain associated class of cases) to the limit α.

One such method is that of successive substitutions, in which the equation is rewritten as

    x = F(x)    (2)

and use is then made of the simple recurrence relation

    x_{n+1} = F(x_n)    (3)

Convergence or divergence of the sequence of approximations to α may depend upon the particular form F chosen. In order to see why this is so, we first assume that F(x) possesses a continuous derivative on the closed interval bounded by x_n and α, and then notice that α = F(α).
Iterative Methods

Since α = F(α), the recurrence (3) implies the relation

    x_{n+1} − α = F(x_n) − F(α) = (x_n − α) F′(ξ_n)    (4)

where ξ_n lies between x_n and α. If the iteration converges, so that x_n → α, then also F′(ξ_n) → F′(α) as n → ∞. Temporarily excluding the cases when F′(α) = 0 and F′(α) = ±1, we deduce that

    x_{n+1} − α ~ (x_n − α) F′(α)

and hence also that

    x_n − α ~ C [F′(α)]^n    (n → ∞)    (5)

where C is a certain constant. This deviation would in fact grow unboundedly in magnitude with increasing n if it were true that |F′(α)| > 1. Thus it appears that, in order for the iteration to converge to x = α as an infinite sequence, it is necessary that |F′(α)| ≤ 1.
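As a concrete illustration (not part of the original slides), the successive-substitution recurrence (3) can be sketched in Python; the stopping rule and the sample equation x = cos x are illustrative choices:

```python
import math

def successive_substitutions(F, x0, tol=1e-10, max_iter=100):
    """Iterate x_{n+1} = F(x_n) until successive iterates agree to tol."""
    x = x0
    for _ in range(max_iter):
        x_new = F(x)
        if abs(x_new - x) <= tol:
            return x_new
        x = x_new
    raise RuntimeError("iteration did not converge")

# For F(x) = cos(x), |F'(alpha)| ≈ 0.67 < 1 at the root alpha ≈ 0.739085,
# so the iteration is asymptotically stable and converges (slowly).
root = successive_substitutions(math.cos, 1.0)
```

Since |F′(α)| ≈ 0.67 here, each step reduces the error by roughly that factor, consistent with (5).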

Iterative Methods Iterative Methods


If we define the convergence factor as the ratio When > 1, convergence to could occur
of the error in to the error in , it follows that only in a finite number of steps, in consequence of
if is near , then ≈ ′( ) an improbably fortunate choice of the initial
approximation (such as = )
The number ′( ) may be called the asymptotic
convergence factor
When = 1, the asymptotic behavior of the
Unless ≤ 1, a small error in is increased in corresponding approximation sequence is
magnitude by the iteration, and we then say that the unpredictable without further information
iteration is asymptotically unstable at

Iterative Methods

Finally, when F′(α) = 0, a sufficiently small value of |x_n − α| certainly leads to a smaller value of |x_{n+1} − α|, so that asymptotic stability is present, but (5) no longer describes the nature of the convergence when it exists.

If |F′(α)| < 1, so that the iteration is asymptotically stable at α, and if the initial approximation x_0 is sufficiently near to α, the sequence of iterates will indeed converge to α, in such a way that ultimately the successive approximations

• tend toward α from one direction if 0 < F′(α) < 1
• oscillate about α with decreasing amplitude if −1 < F′(α) < 0

In the special case when F′(α) = 0, the nature of the convergence depends upon the behavior of the higher derivatives of F near x = α.
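A small numerical check of the two regimes above (a sketch; the particular functions are illustrative assumptions — the fixed point of x = √(x + 1) is the golden ratio, with 0 < F′(α) < 1, while x = cos x has −1 < F′(α) < 0):

```python
import math

def iterate(F, x0, n):
    """Return the iterates x_0, x_1, ..., x_n of x_{k+1} = F(x_k)."""
    xs = [x0]
    for _ in range(n):
        xs.append(F(xs[-1]))
    return xs

# 0 < F'(alpha) < 1: one-sided, monotone approach to alpha ≈ 1.618034
mono = iterate(lambda x: math.sqrt(x + 1), 1.0, 6)

# -1 < F'(alpha) < 0: successive iterates straddle alpha ≈ 0.739085
osc = iterate(math.cos, 1.0, 6)
```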
Iterative Methods

More generally, for any differentiable function f(x), if an interval [a, b] can be found such that f(a) and f(b) have opposite signs, and if f′(x) is of constant sign in [a, b], then certainly f(x) has one and only one zero x = α inside [a, b].

If the equation f(x) = 0 is written as x = F(x) in such a way that

    |F′(x)| ≤ K < 1    (6)

when a ≤ x ≤ b, then assuredly the iteration x_{n+1} = F(x_n) is asymptotically stable at α.

Furthermore, if x_0 is taken to be inside or at one end of [a, b], it then follows from

    x_1 − α = F(x_0) − F(α) = (x_0 − α) F′(ξ_0)

that

    |x_1 − α| ≤ K |x_0 − α| < |x_0 − α|

so that x_1 is closer to α than x_0. Consequently, one may be led to conclude that also |x_2 − α| ≤ K |x_1 − α| ≤ K² |x_0 − α| and, by induction, that |x_n − α| ≤ K^n |x_0 − α|, so that x_n will necessarily converge to α as n → ∞.

Iterative Methods Iterative Methods


An additional condition which clearly is sufficient to This follows simply from the fact that if is in [ , ],
ensure that and all subsequent ’s remain in then
[ , ], and hence that convergence to will indeed − = −
follow in the preceding situation, is the requirement = − −[ − ]
that ( ) be such that ≤ ( ) ≤ for all such
that ≤ ≤ = − [1 − ′ ]

Another sufficient condition for convergence with where < < . Hence, since (7) guarantees that
any choice of in [ , ], assuming again that is 0<1− < 1 , it follows − has the
known to lie in that interval, is that same sign as − and has a smaller magnitude

0< < 1 when < < (7) Thus is between and and hence also in
[ , ]

Iterative Methods Iterative Methods


In view of (5), we may notice that if 0 < | ′( )| < 1, − −
and if the iteration = converges to , ≈
− −
then the relation
which yields the estimate
− ~ (8)
will be valid to some and , independent of −

when is sufficiently large −2 +
If we rewrite this relation with replaced by + 1
or, equivalently,
and by + 2, and eliminate the unknown and
from the resultant three relations, we may deduce − ∆
the approximation ≈ − ≡ −
−2 + ∆
Iterative Methods

where

    Δx_n ≡ x_{n+1} − x_n
    Δ²x_n ≡ Δx_{n+1} − Δx_n = x_{n+2} − 2 x_{n+1} + x_n

Thus, if three successive iterates x_n, x_{n+1}, and x_{n+2} are known, this relation affords an extrapolation which may be expected to provide an improved estimate of α when the iteration converges. This procedure for accelerating convergence is often called Aitken's Δ² process.

Iterative Methods

In a wide class of related methods for dealing with (1), a recurrence formula of the type

    x_{n+1} = x_n − f(x_n)/m_n    (9)

is used, with a suitable definition of the auxiliary sequence m_1, m_2, …, m_n, …
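Aitken's Δ² process above takes three successive iterates and extrapolates; a minimal sketch (the sample iteration x = cos x is an illustrative assumption):

```python
import math

def aitken(x0, x1, x2):
    """Aitken extrapolation: alpha ≈ x_n - (dx)^2 / (d2x)."""
    dx = x1 - x0             # forward difference of the iterates
    d2x = x2 - 2 * x1 + x0   # second forward difference
    return x0 - dx * dx / d2x

# Three early iterates of x_{n+1} = cos(x_n):
x0 = 1.0
x1 = math.cos(x0)
x2 = math.cos(x1)
improved = aitken(x0, x1, x2)  # noticeably closer to alpha ≈ 0.739085 than x2
```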

Iterative Methods

Since f(α) = 0, the recurrence relation (9) implies the relation

    x_{n+1} − α = (x_n − α) − [f(x_n) − f(α)]/m_n = (x_n − α)[1 − f′(ξ_n)/m_n]    (10)

where ξ_n is between x_n and α.

Thus the convergence factor at the nth stage is given, to a first approximation, by 1 − f′(α)/m_n when x_n is near α, and, unless this factor is smaller than unity in magnitude, so that

    0 < f′(α)/m_n < 2    (11)

when n is large, convergence of x_n to α generally cannot be obtained.

The requirement x_{n+1} = α, where f(α) = 0, would imply that

    m_n = [0 − f(x_n)]/(α − x_n)    (12)

so that m_n would then represent the slope of the secant line joining the points (x_n, f(x_n)) and (α, 0).

FIGURE 1

Thus it is desirable to define the sequence m_n in such a way that this situation is approximated at each stage of the calculation.

False Position Method

The iteration is initiated by finding x_0 and x_1 such that f(x_0) and f(x_1) are of opposite signs, and by defining m_1 as the slope of the secant (Fig. 2), so that

    x_2 = x_1 − f(x_1)/m_1,   m_1 = [f(x_1) − f(x_0)]/(x_1 − x_0)    (13)

In each following iteration, m_n is taken as the slope of the line joining (x_n, f(x_n)) and the most recently determined point at which the ordinate differs in sign from that at x_n.
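A sketch of the false position iteration described above (the bracketing test, the tolerance, and the sample equation x³ = 2 are illustrative assumptions):

```python
def false_position(f, x0, x1, tol=1e-12, max_iter=200):
    """Regula falsi: keep a bracketing pair and iterate x2 = x1 - f(x1)/m."""
    f0, f1 = f(x0), f(x1)
    if f0 * f1 > 0:
        raise ValueError("f(x0) and f(x1) must have opposite signs")
    for _ in range(max_iter):
        m = (f1 - f0) / (x1 - x0)   # slope of the secant, eq. (13)
        x2 = x1 - f1 / m
        f2 = f(x2)
        if abs(f2) <= tol:
            return x2
        # keep the endpoint whose ordinate differs in sign from f(x2)
        if f2 * f1 < 0:
            x0, f0 = x1, f1
        x1, f1 = x2, f2
    return x1

root = false_position(lambda x: x**3 - 2, 1.0, 2.0)  # cube root of 2
```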
False Position Method

FIGURE 2

Secant Method

Can be described by the iteration formula

    x_{n+1} = x_n − f(x_n)(x_n − x_{n−1})/[f(x_n) − f(x_{n−1})] ≡ x_n − f(x_n)/f[x_{n−1}, x_n]    (14)

with the divided-difference notation

    f[x_{n−1}, x_n] = [f(x_n) − f(x_{n−1})]/(x_n − x_{n−1})
Bisection Method

Consists of

• evaluating f(x) at the midpoint x_m of the interval [x_1, x_2] at the ends of which f(x) has opposite signs
• discarding that one of x_1 and x_2 at which the ordinate has the same sign as f(x_m)
• repeating the process until half the length of the subinterval inside which α continues to be trapped is within the prescribed error tolerance

Newton-Raphson Method

Algebraically, the method derives from the familiar Taylor series expansion of a function in the neighborhood of a point,

    f(x + h) = f(x) + h f′(x) + (h²/2) f″(x) + ⋯    (15)

It consists of taking m_n in (9) as the slope of the curve y = f(x) at the point x_n, so that (9) becomes

    x_{n+1} = x_n − f(x_n)/f′(x_n)    (16)
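The three bisection steps listed above can be sketched as follows (tolerance and sample equation are illustrative assumptions):

```python
def bisect(f, x1, x2, tol=1e-12):
    """Bisection on [x1, x2], at whose ends f has opposite signs."""
    f1 = f(x1)
    if f1 * f(x2) > 0:
        raise ValueError("f(x1) and f(x2) must have opposite signs")
    # stop when half the bracketing interval is within tolerance
    while abs(x2 - x1) / 2 > tol:
        xm = 0.5 * (x1 + x2)
        fm = f(xm)
        if f1 * fm > 0:        # discard the endpoint with matching sign
            x1, f1 = xm, fm
        else:
            x2 = xm
    return 0.5 * (x1 + x2)

root = bisect(lambda x: x**3 - 2, 1.0, 2.0)
```

Each step halves the interval, so the error bound after n steps is (x_2 − x_1)/2^{n+1} regardless of the function's behavior inside the bracket.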

Newton-Raphson Method

FIGURE 3

This iteration is seen to be also the special case of (3) in which

    F(x) = x − f(x)/f′(x)

and hence

    F′(x) = f(x) f″(x)/[f′(x)]²

Thus if f′(α) ≠ 0 and f″(α) is finite, there follows F′(α) = 0, so that the convergence factor tends to zero as x_n → α and n → ∞.
Newton-Raphson Method

In order to examine the behavior of the error x_n − α, we rewrite (16) in the equivalent form

    x_{n+1} − α = (x_n − α) − f(x_n)/f′(x_n)    (17)

and recall that

    −f(x_n) = (α − x_n) f′(x_n) + ½ (α − x_n)² f″(ξ_n)

where ξ_n lies between x_n and α, if f″(x) is continuous in the interval, so that (17) becomes

    x_{n+1} − α = ½ [f″(ξ_n)/f′(x_n)] (x_n − α)²    (18)

Thus, if the iteration converges to α, there follows

    x_{n+1} − α ~ [f″(α)/(2 f′(α))] (x_n − α)²    (n → ∞)    (19)

provided that f′(α) and f″(α) are both finite and nonzero.
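A sketch of the Newton-Raphson recurrence (16); per (19) the error is roughly squared at each step once x_n is near α (the sample equation x² = 2 and the stopping rule are illustrative assumptions):

```python
def newton(f, fprime, x0, tol=1e-12, max_iter=50):
    """Newton-Raphson, eq. (16): x_{n+1} = x_n - f(x_n)/f'(x_n)."""
    x = x0
    for _ in range(max_iter):
        dx = f(x) / fprime(x)   # Newton correction
        x -= dx
        if abs(dx) <= tol:
            return x
    raise RuntimeError("iteration did not converge")

# f(x) = x^2 - 2: quadratic convergence to sqrt(2) from x0 = 1
root = newton(lambda x: x * x - 2, lambda x: 2 * x, 1.0)
```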

Systems of Nonlinear Equations

Some of the methods in the preceding sections are readily generalized to the treatment of two or more simultaneous nonlinear equations. Thus, for example, the two simultaneous equations

    f(x, y) = 0,   g(x, y) = 0    (20)

can be written (in various ways) in equivalent forms

    x = F(x, y),   y = G(x, y)    (21)

and the method of successive substitutions can be based on the recurrence formulas

    x_{n+1} = F(x_n, y_n),   y_{n+1} = G(x_n, y_n)    (22)

When the iteration converges to the true solution pair, say, x = α and y = β, it can be shown that the errors in the nth iterates tend to be described by the relations

    x_n − α ≈ c_1 λ_1^n + c_2 λ_2^n,   y_n − β ≈ d_1 λ_1^n + d_2 λ_2^n

where c_1, c_2, d_1, and d_2 are constants, independent of n, and where λ_1 and λ_2 are the roots of the equation
    | F_x − λ    F_y     |
    | G_x        G_y − λ | = 0

or

    λ² − (F_x + G_y) λ + (F_x G_y − F_y G_x) = 0    (23)

with the partial derivatives evaluated at (α, β), if λ_1 ≠ λ_2 at that point.

The constants c_1, c_2 and d_1, d_2 will be conjugate complex if the same is true of λ_1, λ_2.

Systems of Nonlinear Equations

Thus the iteration will be asymptotically stable at (α, β) if and only if the roots λ_1 and λ_2 are smaller than unity in absolute value, the necessary and sufficient conditions for which are

    |F_x + G_y| ≤ F_x G_y − F_y G_x + 1 < 2    (24)

A more stringent pair of conditions, which is sufficient (but generally not necessary) for asymptotic stability, is of the form

    |F_x| + |F_y| < 1,   |G_x| + |G_y| < 1    (25)
Systems of Nonlinear Equations

The Newton-Raphson iteration, as applied to the solution of (20), is based on the result of replacing (α, β) by (x_{n+1}, y_{n+1}) in the right-hand members of the Taylor expansions

    0 = f(α, β) = f(x_n, y_n) + (α − x_n) f_x(x_n, y_n) + (β − y_n) f_y(x_n, y_n) + ⋯

    0 = g(α, β) = g(x_n, y_n) + (α − x_n) g_x(x_n, y_n) + (β − y_n) g_y(x_n, y_n) + ⋯    (26)

and neglecting nonlinear terms in α − x_n and β − y_n, so that the recurrence formulas are of the form

    (x_{n+1} − x_n) f_x(x_n, y_n) + (y_{n+1} − y_n) f_y(x_n, y_n) = −f(x_n, y_n)
    (x_{n+1} − x_n) g_x(x_n, y_n) + (y_{n+1} − y_n) g_y(x_n, y_n) = −g(x_n, y_n)

Rather than resolve these equations for x_{n+1} and y_{n+1}, it is usually convenient to solve them, as written, for the corrections Δx_n ≡ x_{n+1} − x_n and Δy_n ≡ y_{n+1} − y_n, which are to be added to x_n and y_n to yield the following iterates.

    Δx_n f_x(x_n, y_n) + Δy_n f_y(x_n, y_n) = −f(x_n, y_n)
    Δx_n g_x(x_n, y_n) + Δy_n g_y(x_n, y_n) = −g(x_n, y_n)

Thus,

    x_{n+1} = x_n + Δx_n,   y_{n+1} = y_n + Δy_n

The above equations are applied repetitively until either one of the following convergence criteria is satisfied:

    |Δx_n| ≤ ε_1,   |Δy_n| ≤ ε_2
or
    |f(x_{n+1}, y_{n+1})| ≤ ε_3,   |g(x_{n+1}, y_{n+1})| ≤ ε_4