1 Solution of Algebraic and Transcendental Equations

While roots can be found directly for algebraic equations of fourth order or lower, and for a few special transcendental equations, in practice we need to solve equations of higher order and also arbitrary transcendental equations such as

sin x = 0;   e^(√x) = π;   tan x = x.

As analytic solutions are often either too cumbersome or simply do not exist, we need to find an approximate method of solution. This is where numerical analysis comes into the picture.

1.1 Some Useful Observations

• The total number of roots an algebraic equation can have is the same as its degree.

• An algebraic equation can have at most as many positive roots as the number of changes of sign in f(x).

• An algebraic equation can have at most as many negative roots as the number of changes of sign in f(−x).

• In an algebraic equation with real coefficients, complex roots occur in conjugate pairs.

• If f(x) = a₀xⁿ + a₁xⁿ⁻¹ + a₂xⁿ⁻² + ... + a_{n−1}x + a_n with roots α₁, α₂, ..., α_n, then the following hold good:
  • Σ_i α_i = −a₁/a₀
  • Σ_{i<j} α_i α_j = a₂/a₀
  • Π_i α_i = (−1)ⁿ a_n/a₀

• If f(x) is continuous in the interval [a, b] and f(a)f(b) < 0, then a root must exist in the interval (a, b).

1.3 Convergence

A numerical method to solve equations can be a long process. We would like to know whether the method will lead to a solution (close to the exact solution) or will lead us away from the solution. If the method leads to the solution, then we say that the method is convergent; otherwise, the method is said to be divergent. That is, in the case of linear and non-linear interpolation, convergence means that the error tends to 0.

1.4 Rate of Convergence

Various methods converge to the root at different rates. That is, some methods are slow to converge and it takes a long time to arrive at the root, while other methods can lead us to the root faster. This is in general a compromise between ease of calculation and time.

For a computer program, however, it is generally better to look at methods which converge quickly. The rate of convergence could be linear or of some higher order. The higher the order, the faster the method converges.

If e_i is the magnitude of the error in the ith iteration, ignoring sign, then the order is n if e_{i+1}/e_iⁿ is approximately constant.

It is also important to note that the chosen method will converge only if e_{i+1} < e_i.

1.5 Bisection Method

This is one of the simplest methods and is strongly based on the property of intervals. To find a root using this method, the first thing to do is to find an interval [a, b] such that f(a) · f(b) < 0. Bisect this interval to get a point (c, f(c)). Choose one of a or b so that the sign of f(c) is opposite to the sign of the ordinate at that point. Use this as the new interval and proceed until you get the root within the desired accuracy.

Example  Solve x³ − 2x − 5 = 0 correct up to 2 decimal places.

f(x) = x³ − 2x − 5
⇒ f(2) · f(3) < 0, and for 2 decimal places ϵ = (1/2) · 10⁻²
⇒ i ≥ [log(3 − 2) − log(0.005)] / log 2 = 2.3010/0.3010 ≈ 8
⇒ x₁ = 2.5
f(x₁) = 5.625 > 0
⇒ x₂ = 2.25 ⇒ ... ⇒ x₈ = 2.09

1.6 False Position Method

The false position method (sometimes called the regula falsi method) is essentially the same as the bisection method, except that instead of bisecting the interval, we find where the chord joining the two endpoints meets the X axis. The roots are calculated using the equation of the chord, i.e. putting y = 0 in

y − f(x₀) = [f(x₁) − f(x₀)] / (x₁ − x₀) · (x − x₀)

The rate of convergence is still linear but faster than that of the bisection method. Both these methods will fail if f has a double root.
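The bisection procedure described above is easy to program. The following is a minimal Python sketch (our own illustration; the function name and tolerance handling are not from the text), applied to the worked example x³ − 2x − 5 = 0 on [2, 3]:

```python
def bisect(f, a, b, tol):
    """Bisection: repeatedly halve [a, b], keeping the half where f changes sign."""
    if f(a) * f(b) >= 0:
        raise ValueError("f(a) and f(b) must have opposite signs")
    while (b - a) / 2 > tol:
        c = (a + b) / 2          # bisect the interval
        if f(a) * f(c) <= 0:     # sign change in [a, c]: root lies there
            b = c
        else:                    # otherwise the root lies in [c, b]
            a = c
    return (a + b) / 2

# The worked example: x^3 - 2x - 5 = 0 on [2, 3], to 2 decimal places
f = lambda x: x**3 - 2 * x - 5
root = bisect(f, 2.0, 3.0, 0.5e-2)
# the returned midpoint is within the tolerance of the true root 2.0945...
```

Note that the loop condition mirrors the error bound of the section below: each iteration halves the bracketing interval, so the midpoint is always within half the interval width of the root.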
1.5.1 Error Analysis

The maximum error after the ith iteration using this process will be given as

ϵ_i = |b − a| / 2^i
⇒ i ≥ [log(b − a) − log ϵ_i] / log 2

As the interval at each iteration is halved, we have ϵ_{i+1}/ϵ_i = 1/2. Thus this method converges linearly.

If we are interested in the number of iterations the bisection method needs to converge to a root within a certain tolerance, then we can use the formula for the maximum error.

Example  How many iterations do you need to get the root if you start with a = 1 and b = 2 and the tolerance is 10⁻⁴?

The error ϵ_i needs to be smaller than 10⁻⁴. Use the formula for the maximum error:

ϵ_i = 2⁻ⁱ · (2 − 1) < 10⁻⁴
2⁻ⁱ < 10⁻⁴
log₁₀ 2⁻ⁱ < log₁₀ 10⁻⁴
−i · log₁₀ 2 < −4
i > 4 / log₁₀ 2 = 13.29 ⇒ i = 14

Hence 14 iterations will ensure an approximation to the root accurate to 10⁻⁴. Note: the error analysis only gives a bound on the error; the actual error may be much smaller.

1.6.1 Example

Consider f(x) = x² − 1. We already know the roots of this equation, so we can easily check how fast the regula falsi method converges.

For our initial guess, we'll use the interval [0, 2]. Since f is concave upwards and increasing, a quick sketch of the geometry shows that the chord will always intersect the x-axis to the left of the solution. This can be confirmed by a little algebra. We'll call our nth iteration of the interval [a_n, 2].

The chord intersects the x-axis when

−(a_n² − 1) = [(2² − 1) − (a_n² − 1)] / (2 − a_n) · (x − a_n)

Rearranging and simplifying gives

x = (1 + 2a_n) / (2 + a_n)

Since this is always less than the root, it is also a_{n+1}. The difference between a_n and the root is e_n = a_n − 1, and

e_{n+1} = e_n / (2 + a_n)

This is always smaller in magnitude than e_n when a_n is positive. When a_n approaches 1, each extra iteration reduces the error by two-thirds, rather than by one-half as the bisection method would. The error is reduced by a roughly constant factor at each step, so the convergence of this method is linear.

In this case, the lower end of the interval tends to the root, and the minimum error tends to zero, but the upper limit and maximum error remain fixed. This is not uncommon.
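The behaviour derived above can be confirmed by running the method. Below is a small Python sketch of regula falsi (our own code, not part of the original text) on f(x) = x² − 1 with starting interval [0, 2]:

```python
def false_position(f, a, b, n_iter):
    """Regula falsi: replace the bisection midpoint with the chord's x-intercept."""
    for _ in range(n_iter):
        c = b - f(b) * (b - a) / (f(b) - f(a))  # chord meets the x-axis (y = 0)
        if f(a) * f(c) <= 0:                    # sign change in [a, c]
            b = c
        else:                                   # sign change in [c, b]
            a = c
    return a, b

f = lambda x: x * x - 1
a, b = false_position(f, 0.0, 2.0, 20)
# The lower end a_n creeps up to the root 1 while the upper end stays fixed at 2,
# and each step shrinks the error e_n = a_n - 1 by a factor 1/(2 + a_n) -> 1/3.
```

Running it reproduces the analysis: a_n tends to the root while b remains 2 throughout.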
1.7 Fixed Point Iteration (or Staircase method or x = g(x) method or Iterative method)
for any choice of x₀ in [a, b]. In this case, g′(x) = 3x² + 2, and the convergence condition is

1 > |g′(x)| = 3x² + 2  ⇒  −1 > 3x²

which no x can satisfy, so this iteration will not converge.

2) Another rearrangement is

x_{n+1} = 2 − x_n³

Here g′(x) = −3x², so the condition |g′(x)| < 1 requires x² < 1/3. Since this range does not include the root, this method won't converge either.

3) Another obvious rearrangement is

x_{n+1} = ∛(2 − x_n)

The convergence condition is

(1/3)(2 − x_n)^(−2/3) < 1  ⇒  (2 − x_n)⁻² < 3³  ⇒  |x_n − 2| > 1/√27

which does hold near the root x = 1, so this iteration converges.

4) A further rearrangement is

x_{n+1} = 2 / (x_n² + 1)

with the convergence condition

4|x| / (1 + x²)² < 1,  i.e.  4|x| < (1 + x²)²

1.7.1 Error Analysis

We define the error at the nth step to be e_n = x_n − x, where x is the root. Then we have

e_{n+1} = x_{n+1} − x = g(x_n) − g(x) ≈ g′(x) e_n

So, when |g′(x)| < 1, this sequence converges to a root, and the error after n steps will be approximately proportional to g′(x)ⁿ. Because the relationship between e_{n+1} and e_n is linear, we say that this method converges linearly, if it converges at all.

When g(x) = f(x) + x this means that if

x_{n+1} = x_n + f(x_n)

converges to a root x of f, then g′(x) = f′(x) + 1, so we need −2 < f′(x) < 0 near the root.
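These convergence conditions are easy to check numerically. The sketch below is our own illustration; it assumes the equation being rearranged is x³ + x − 2 = 0, with root x = 1, which is consistent with both iterations shown. It runs the divergent rearrangement x_{n+1} = 2 − x_n³ and the convergent one x_{n+1} = ∛(2 − x_n):

```python
def iterate(g, x0, n):
    """Fixed point iteration x_{n+1} = g(x_n), returning the final iterate."""
    x = x0
    for _ in range(n):
        x = g(x)
    return x

# x_{n+1} = 2 - x_n^3: |g'(x)| = 3x^2 > 1 at the root x = 1,
# so even a start close to the root runs away.
x = iterate(lambda x: 2 - x**3, 1.1, 6)

# x_{n+1} = (2 - x_n)^(1/3): |g'(1)| = 1/3 < 1, so the iteration converges
# linearly, shrinking the error by roughly a factor of 3 per step.
y = iterate(lambda x: (2 - x) ** (1 / 3), 1.1, 40)
```

After a handful of steps the first sequence has left the interval entirely, while the second settles onto the root.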
For f(x) = x² − a, Newton's iteration gives

x_{n+1} = x_n − f(x_n)/f′(x_n) = x_n − (x_n² − a)/(2x_n) = (x_n² + a)/(2x_n) = (1/2)(x_n + a/x_n)
This method is easily implemented, even with just pen
and paper, and has been used to rapidly estimate square
roots since long before Newton.
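As a quick illustration (our own sketch, not part of the original text), a few iterations starting from x₀ = 1 with a = 2 already give √2 to better than 12 significant digits:

```python
import math

def newton_sqrt(a, x0, n):
    """Iterate x_{n+1} = (x_n + a/x_n)/2, Newton's method for x^2 - a = 0."""
    x = x0
    for _ in range(n):
        x = (x + a / x) / 2
    return x

x = newton_sqrt(2.0, 1.0, 6)
# The number of correct digits roughly doubles each step (quadratic convergence):
# 1.5, 1.4167, 1.414216, 1.41421356237...
```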
The nth error is e_n = x_n − √a, so we have

e_{n+1} = [(e_n + √a)² + a] / [2(√a + e_n)] − √a
        = [2a + 2e_n√a + e_n²] / [2(√a + e_n)] − √a
        = [2√a(√a + e_n) + e_n²] / [2(√a + e_n)] − √a
        = e_n² / [2(√a + e_n)]

For convergence we need

|e_{n+1}| = e_n² / [2(√a + e_n)] < |e_n|

so, assuming e_n is positive, it converges when

e_n < 2(e_n + √a)

which always holds. Since e_{n+1} is proportional to e_n², the convergence is quadratic.
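The error recurrence above can be confirmed numerically. Note that √a + e_n is just x_n, so e_{n+1} = e_n²/(2x_n) is an exact algebraic identity, and the check below (our own sketch, with a = 2) holds to rounding error at every step:

```python
import math

a = 2.0
x = 3.0                        # start above sqrt(a), so e_n = x_n - sqrt(a) > 0
for _ in range(5):
    e = x - math.sqrt(a)       # current error e_n
    x_next = (x + a / x) / 2   # one Newton step
    e_next = x_next - math.sqrt(a)
    # e_{n+1} = e_n^2 / (2(sqrt(a) + e_n)) = e_n^2 / (2 x_n)
    assert abs(e_next - e * e / (2 * x)) < 1e-12
    x = x_next
```

Because the error is squared at each step, five iterations from x₀ = 3 already bring the iterate within about 10⁻¹⁴ of √2.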