Determination of roots of polynomials and transcendental equations by the Newton-Raphson, Secant and Bairstow's methods.

Motivation: Let us look at a set of problems of scientific and engineering interest to get a feel for what root finding is and why we need to find roots. Later we learn how to find them.

Problem 1: Suppose we are asked to cut, from a thin iron sheet of 5 sq. mts area, a rectangular piece with one of its sides 1.25 mts longer than the other and its area being 0.875 sq. mts. What will be the length of the smallest side?

Say, length of the smallest side = x mts; then the length of the other side = (x + 1.25) mts, and the area of the rectangle = x(x + 1.25) sq. mts.
i.e. x(x + 1.25) = 0.875, i.e.

x^2 + 1.25x - 0.875 = 0, say.    ...(1)

So you need to solve a quadratic equation to find the required quantity, i.e. we have to find the roots of a quadratic equation. We know that the roots of a quadratic equation

a x^2 + b x + c = 0    ...(2)

are given by

x = (-b +- sqrt(b^2 - 4ac)) / (2a)    ...(3)
Problem 2: Concepts of thermodynamics are used extensively in their work by, say, aerospace, mechanical and chemical engineers. Here, the zero-pressure specific heat of dry air, c_p, is related to the temperature T by a polynomial expression c_p = c_p(T).

Now, determine the temperature that corresponds to a specific heat of 1.2 kJ/(kg K). So, here we have to solve c_p(T) = 1.2, i.e. find the roots of

f(T) = c_p(T) - 1.2 = 0    ...(4)
Problem 3: The concentration of a pollutant bacteria C in a lake decreases as per a model C = C(t), where t is the time variable. Determine the time required for the bacteria concentration to be reduced to 9. Here, we have to find the roots of

C(t) - 9 = 0    ...(5)
Problem 4: The volume V of liquid in a hollow horizontal cylinder of radius r and length L is related to the depth of the liquid h by the standard relation

V = [ r^2 cos^{-1}((r - h)/r) - (r - h) sqrt(2rh - h^2) ] L

Determine h for the given values of V, r and L. Here we have to find the roots of the above equation for h.    ...(6)
So we have seen that finding the roots of

f(x) = 0

is very important in finding the solution of several scientific and engineering problems. The equation may be a polynomial equation or a transcendental equation.

Polynomial Equations: Polynomial equations in one independent variable x are a simple class of algebraic equations, represented as follows:

f(x) = a_n x^n + a_{n-1} x^{n-1} + ... + a_1 x + a_0 = 0, with a_n non-zero.

A polynomial of degree n has n roots. These roots may be real or complex. Examples: the quadratic equation (1) of Problem 1, or the polynomial equation for c_p(T) in Problem 2.

Transcendental Equations: These equations include trigonometric, exponential or logarithmic functions. Examples: equations such as (6) of Problem 4, which involves inverse trigonometric and square-root terms.
We may note that the examples are nonlinear functions.

Methods of solution: Some of the ways of finding the roots or solutions of f(x) = 0 are:

• Direct analytical methods
• Graphical approach
• Iterative methods, etc.

Direct analytical methods: One may be able to find a closed-form mathematical expression for the solution (root) of f(x) = 0. For example, for the quadratic equation (2), we have the solutions given by (3). However, a large number of equations cannot be solved by direct analytical methods.

Graphical approach: This approach involves plotting the given function and determining the points where it crosses the x-axis. These points, read off approximately from the plot, represent approximate values of the roots of the function.
Example: To find the positive roots of f(x) = 0, rewrite the equation in the form f1(x) = f2(x) and plot the two functions f1 and f2. The x-coordinate of the point of intersection of the two curves gives the required positive root of the given function. Clearly this approach is cumbersome and time consuming.

Iterative Methods: Starting with an initial guess solution, these methods generate a sequence of estimates to the solution which is expected to converge to the true solution. They are grouped into two categories:

(a) Bracketing methods
(b) Open methods

(a) Bracketing methods: These methods exploit the fact that a function typically changes sign in the vicinity of a root. They start with two initial guesses that bracket the root and then systematically reduce the width of the bracket until the solution is obtained to a desired accuracy. The popular bracketing methods are: (a) Bisection method, (b) False Position (or Regula Falsi) method, (c) Improved or Modified Regula Falsi method.

(b) Open methods: These methods are based on formulas that require only a single starting (guess) value of the solution, or two starting values that do not necessarily bracket the root. They may sometimes diverge, i.e. move away from the true root, as the computation progresses. However, when the open methods converge, they do so much more quickly than the bracketing methods. Some of the popular open methods are: (a) Secant method, (b) Newton-Raphson method, (c) Bairstow's method, (d) Muller's method, etc.
(a) Bisection Method: This is one of the simplest and most reliable iterative methods for the solution of a nonlinear equation. This method is also known as the binary chopping or half-interval method. Given a function f(x) which is real and continuous on an interval [a, b], with f(a) and f(b) of opposite sign, i.e. f(a) f(b) < 0, there is at least one real root of f(x) = 0 in (a, b).

Algorithm:
Given a function f(x) continuous on an interval [a, b] satisfying the bisection method starting criterion f(a) f(b) < 0, carry out the following steps to find a root of f(x) = 0:

(1) Set a_0 = a, b_0 = b.
(2) For n = 0, 1, 2, ... until satisfied, do:
    Compute the midpoint x_n = (a_n + b_n)/2.
    If f(a_n) f(x_n) < 0, set a_{n+1} = a_n, b_{n+1} = x_n;
    otherwise set a_{n+1} = x_n, b_{n+1} = b_n.
    The new interval [a_{n+1}, b_{n+1}] again contains a root of f(x) = 0.
Note: (1) The subscripts in a_n, b_n, x_n etc. denote the iteration number: [a_0, b_0] is the interval for the zeroth or starting iteration, and [a_n, b_n] is the interval for the n-th iteration.

(2) An iterative process must be terminated at some stage. 'Until satisfied' refers to the solution convergence criterion used for stopping the execution process. We must have an objective criterion for deciding when to stop the process. We may use one of the following criteria, depending on the behaviour of the function (monotonous / steep variation / increasing / decreasing):

(i) |f(x_n)| <= epsilon_1 (tolerable absolute error in function values)
(ii) |(x_{n+1} - x_n)/x_{n+1}| <= epsilon_2 (tolerable relative error in the solution)
(iii) |x_{n+1} - x_n| <= epsilon_3 (difference in two consecutive iterates)
(iv) f(x_n) tends to 0 as n increases (value of the function tending to zero)
Usually epsilon_1, epsilon_2, epsilon_3 are referred to as tolerance values; they are fixed by us depending on the level of accuracy we desire in the solution, for example epsilon = 10^{-5}, 10^{-6}, etc.

Example: Solve f(x) = 2x^3 - 2.5x - 5 = 0 for the root in the interval [1, 2] by the bisection method.

Solution: Given f(x) = 2x^3 - 2.5x - 5, we have f(1) = -5.5 < 0 and f(2) = 6 > 0, so f(1) f(2) < 0 and there is a root for the given function in [1, 2]. Set a_0 = 1, b_0 = 2.
Details of the remaining steps are provided in the table below:

Bisection Method

 n     a_n             b_n             x_n             f(x_n)
 0     1.0000000000    2.0000000000    1.5000000000    -2.0000000000
 1     1.5000000000    2.0000000000    1.7500000000     1.3437500000
 2     1.5000000000    1.7500000000    1.6250000000    -0.4804687500
 3     1.6250000000    1.7500000000    1.6875000000     0.3920898438
 4     1.6250000000    1.6875000000    1.6562500000    -0.0538940430
 5     1.6562500000    1.6875000000    1.6718750000     0.1666488647
 6     1.6562500000    1.6718750000    1.6640625000     0.0557680130
 7     1.6562500000    1.6640625000    1.6601562500     0.0007849932
 8     1.6562500000    1.6601562500    1.6582031250    -0.0265924782
 9     1.6582031250    1.6601562500    1.6591796875    -0.0129132364
10     1.6591796875    1.6601562500    1.6596679688    -0.0060664956
11     1.6596679688    1.6601562500    1.6599121094    -0.0026413449
12     1.6599121094    1.6601562500    1.6600341797    -0.0009283243
13     1.6600341797    1.6601562500    1.6600952148    -0.0000717027
14     1.6600952148    1.6601562500    1.6601257324     0.0003566360
15     1.6600952148    1.6601257324    1.6601104736     0.0001424643
16     1.6600952148    1.6601104736    1.6601028442     0.0000353802
17     1.6600952148    1.6601028442    1.6600990295    -0.0000181614
18     1.6600990295    1.6601028442    1.6601009369     0.0000086094
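The bisection steps above can be sketched in a few lines of code. One assumption is made here: the tabulated values (e.g. f(1.5) = -2.0, f(1.75) = 1.34375) are consistent with f(x) = 2x^3 - 2.5x - 5, so that polynomial is used as the test function; it is inferred from the table, not stated explicitly in the text.

```python
# Bisection method sketch. Assumes f is continuous on [a, b] and f(a)f(b) < 0.
def bisect(f, a, b, tol=1e-7, max_iter=100):
    fa, fb = f(a), f(b)
    if fa * fb > 0:
        raise ValueError("f(a) and f(b) must have opposite signs")
    x = (a + b) / 2.0
    for _ in range(max_iter):
        x = (a + b) / 2.0          # midpoint of the current bracket
        fx = f(x)
        if abs(fx) <= tol or (b - a) / 2.0 <= tol:
            return x
        if fa * fx < 0:            # root lies in [a, x]
            b, fb = x, fx
        else:                      # root lies in [x, b]
            a, fa = x, fx
    return x

# test function inferred from the tabulated values above
f = lambda x: 2 * x**3 - 2.5 * x - 5
root = bisect(f, 1.0, 2.0)
print(root)                        # root near 1.66010
```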
Example: Solve the equation f(x) = 0 for the root in the interval [0.5, 1.5] by the bisection method.

Bisection Method

 n     a_n             b_n             x_n             f(x_n)
 0     0.5000000000    1.5000000000    1.0000000000     3.1720056534
 1     0.5000000000    1.0000000000    0.7500000000     0.6454265714
 2     0.5000000000    0.7500000000    0.6250000000    -1.0943561792
 3     0.6250000000    0.7500000000    0.6875000000    -0.1919542551
 4     0.6875000000    0.7500000000    0.7187500000     0.2357951254
 5     0.6875000000    0.7187500000    0.7031250000     0.0240836944
 6     0.6875000000    0.7031250000    0.6953125000    -0.0834089667
 7     0.6953125000    0.7031250000    0.6992187500    -0.0295295101
 8     0.6992187500    0.7031250000    0.7011718750    -0.0026894973
 9     0.7011718750    0.7031250000    0.7021484375     0.0107056862
10     0.7011718750    0.7021484375    0.7016601562     0.0040097744
11     0.7011718750    0.7016601562    0.7014160156     0.0006612621
12     0.7011718750    0.7014160156    0.7012939453    -0.0010144216
13     0.7012939453    0.7014160156    0.7013549805    -0.0001766436
14     0.7013549805    0.7014160156    0.7013854980     0.0002420362
15     0.7013549805    0.7013854980    0.7013702393     0.0000326998
16     0.7013549805    0.7013702393    0.7013626099    -0.0000715650
17     0.7013626099    0.7013702393    0.7013664246    -0.0000194324
18     0.7013664246    0.7013702393    0.7013683319     0.0000069206
Exercise: Find the solutions of the following problems, accurate to within the stated tolerance, using the bisection method.
(1)
False Position or Regula Falsi method:
The bisection method converges slowly. There, while defining the new interval, the only use made of the function f is in checking whether f(a_n) f(x_n) < 0, i.e. in checking the sign, but not in actually calculating the end point of the new interval. The False Position or Regula Falsi method uses f not only in deciding the new interval, as in the bisection method, but also in calculating one of the end points of the new interval. Here the new estimate x_n on the interval [a_n, b_n] is calculated as the weighted average

x_n = (a_n f(b_n) - b_n f(a_n)) / (f(b_n) - f(a_n))

(f(a_n) and f(b_n) have opposite signs). The algorithm for computing a root of the function f(x) = 0 by this method is given below.

Algorithm: Given a function f(x) continuous on an interval [a, b] satisfying the criterion f(a) f(b) < 0, carry out the following steps to find a root of f(x) = 0 in [a, b]:

(1) Set a_0 = a, b_0 = b.
(2) For n = 0, 1, 2, ... until the convergence criterion is satisfied, do:
    Compute x_n = (a_n f(b_n) - b_n f(a_n)) / (f(b_n) - f(a_n)).
    If f(a_n) f(x_n) < 0, then set a_{n+1} = a_n, b_{n+1} = x_n;
    otherwise set a_{n+1} = x_n, b_{n+1} = b_n.

Note: Use any one of the convergence criteria discussed earlier under the bisection method. For the sake of carrying out a comparative study, we will stick both to the same convergence criterion as before, i.e. |f(x_n)| <= epsilon (say), and to the same example problems.

Example: Solve f(x) = 2x^3 - 2.5x - 5 = 0 for the root in the interval [1, 2] by the Regula Falsi method.

Solution: Since f(1) = -5.5 < 0 and f(2) = 6 > 0, we go ahead with finding the root of the given function f(x) in [1, 2]. Set a_0 = 1, b_0 = 2 and proceed with the iteration. Iteration details are provided below in tabular form:
Regula Falsi Method

 n     a_n             b_n             x_n             f(x_n)
 0     1.0000000000    2.0000000000    1.4782608747    -2.2348976135
 1     1.4782608747    2.0000000000    1.6198574305    -0.5488323569
 2     1.6198574305    2.0000000000    1.6517157555    -0.1169833690
 3     1.6517157555    2.0000000000    1.6583764553    -0.0241659321
 4     1.6583764553    2.0000000000    1.6597468853    -0.0049594725
 5     1.6597468853    2.0000000000    1.6600278616    -0.0010169938
 6     1.6600278616    2.0000000000    1.6600854397    -0.0002089010
 7     1.6600854397    2.0000000000    1.6600972414    -0.0000432589
 8     1.6600972414    2.0000000000    1.6600997448    -0.0000081223

Note: One may note that the Regula Falsi method has converged faster than the bisection method.
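A minimal sketch of the Regula Falsi iteration, under the same assumption that the running example is f(x) = 2x^3 - 2.5x - 5 (inferred from the tabulated values, not stated in the text):

```python
# Regula Falsi sketch. Assumes f(a) * f(b) < 0 on entry.
def regula_falsi(f, a, b, eps=1e-7, max_iter=100):
    fa, fb = f(a), f(b)
    x = a
    for _ in range(max_iter):
        x = (a * fb - b * fa) / (fb - fa)   # weighted average of the end points
        fx = f(x)
        if abs(fx) <= eps:
            return x
        if fa * fx < 0:                     # root lies in [a, x]
            b, fb = x, fx
        else:                               # root lies in [x, b]
            a, fa = x, fx
    return x

f = lambda x: 2 * x**3 - 2.5 * x - 5        # inferred example function
root = regula_falsi(f, 1.0, 2.0)
print(root)                                 # root near 1.66010
```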
Geometric Interpretation of the Regula Falsi Method:

Let us plot the polynomial considered in the above example and trace the movement of x_n and of the new intervals with each iteration. From the figure, one can verify that the weighted average x_n is the x-coordinate of the point of intersection, with the x-axis, of the secant to f(x) passing through the points (a_n, f(a_n)) and (b_n, f(b_n)). Since here f(x) is concave upward and increasing, the secant always lies above the curve. Hence x_n always lies to the left of the zero. If f(x) were concave downward and increasing, x_n would always lie to the right of the zero.
Example: Solve f(x) = 0 for the root in the interval [0.5, 1.5] by the Regula Falsi method.
Regula Falsi Method

 n     a_n             b_n             x_n             f(x_n)
 0     0.5000000000    1.5000000000    0.8773435354     2.1035263538
 1     0.5000000000    0.8773435354    0.7222673893     0.2828366458
 2     0.5000000000    0.7222673893    0.7032044530     0.0251714624
 3     0.5000000000    0.7032044530    0.7015219927     0.0021148270
 4     0.5000000000    0.7015219927    0.7013807297     0.0001767781
 5     0.5000000000    0.7013807297    0.7013689280     0.0000148928
 6     0.5000000000    0.7013689280    0.7013679147     0.0000009526

Exercise:
1) Solve f(x) = 0 for the root in the interval [2, 3] by the Regula Falsi method.
2) Find the solution of f(x) = 0 in the interval [1, 2], accurate to within the stated tolerance, using the Regula Falsi method.
Modified Regula Falsi method: In this method an improvement over the Regula Falsi method is obtained by replacing the secant by straight lines of ever-smaller slope until the iterate falls on the other side of the zero of f(x). The various steps of the method are given in the algorithm below:

Algorithm: Given a function f(x) continuous on an interval [a, b] satisfying the criterion f(a) f(b) < 0, carry out the following steps to find a root of f(x) = 0 in [a, b]:

(1) Set a_0 = a, b_0 = b, F = f(a_0), G = f(b_0).
(2) For n = 0, 1, 2, ..., until the convergence criterion is satisfied, do:
    (a) Compute x_n = (a_n G - b_n F) / (G - F) and f(x_n).
    (b) If f(a_n) f(x_n) < 0, set a_{n+1} = a_n, b_{n+1} = x_n, G = f(x_n);
        also, if f(x_n) f(x_{n-1}) > 0, set F = F/2.
        Otherwise set a_{n+1} = x_n, b_{n+1} = b_n, F = f(x_n);
        also, if f(x_n) f(x_{n-1}) > 0, set G = G/2.
    (For n = 0, take f(x_{-1}) to be f(a_0).)

Example: Solve f(x) = 2x^3 - 2.5x - 5 = 0 for the root in the interval [1, 2] by the Modified Regula Falsi method.

Solution: Since f(1) = -5.5 < 0 and f(2) = 6 > 0, we go ahead with finding the root of the given function f(x) in [1, 2]. Setting a_0 = 1, b_0 = 2 and following the above algorithm, the results are provided in the table below:
Modified Regula Falsi Method

 n     a_n             b_n             x_n             f(x_n)
 0     1.0000000000    2.0000000000    1.4782608747    -2.2348976135
 1     1.4782608747    2.0000000000    1.7010031939     0.5908976793
 2     1.4782608747    1.7010031939    1.6544258595    -0.0793241411
 3     1.6544258595    1.7010031939    1.6599385738    -0.0022699926
 4     1.6599385738    1.7010031939    1.6602516174     0.0021237291
 5     1.6599385738    1.6602516174    1.6601003408     0.0000002435
The geometric view of the example is provided in the figure below:
Example: Solve f(x) = 0 for the root in the interval [0.5, 1.5] by the Modified Regula Falsi method.
Modified Regula Falsi Method

 n     a_n             b_n             x_n             f(x_n)
 0     0.5000000000    1.5000000000    0.8773435354     2.1035263538
 1     0.5000000000    0.8773435354    0.7222673893     0.2828366458
 2     0.5000000000    0.7222673893    0.6871531010    -0.1967970580
 3     0.6871531010    0.7222673893    0.7015607357     0.0026464546
 4     0.6871531010    0.7015607357    0.7013695836     0.0000239155
 5     0.6871531010    0.7013695836    0.7013661265    -0.0000235377
 6     0.7013661265    0.7013695836    0.7013678551    -0.0000003363
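The modified method can be sketched as below. The halving rule implemented here (halve the retained end point's stored function value whenever two successive iterates fall on the same side of the root) reproduces the first example's table, but it is an inferred reading of the algorithm, not a verbatim transcription:

```python
# Modified Regula Falsi sketch. Assumes f(a) * f(b) < 0 on entry.
def modified_regula_falsi(f, a, b, eps=1e-7, max_iter=100):
    F, G = f(a), f(b)
    fx_prev = F                      # treat x_{-1} as a for the first test
    x = a
    for _ in range(max_iter):
        x = (a * G - b * F) / (G - F)
        fx = f(x)
        if abs(fx) <= eps:
            return x
        if F * fx < 0:               # root lies in [a, x]
            b, G = x, fx
            if fx * fx_prev > 0:     # same side twice: flatten the a-side line
                F = F / 2.0
        else:                        # root lies in [x, b]
            a, F = x, fx
            if fx * fx_prev > 0:     # same side twice: flatten the b-side line
                G = G / 2.0
        fx_prev = fx
    return x

f = lambda x: 2 * x**3 - 2.5 * x - 5  # inferred example function
root = modified_regula_falsi(f, 1.0, 2.0)
print(root)                           # root near 1.66010
```

With f(x) = 2x^3 - 2.5x - 5 on [1, 2], the first few iterates of this sketch match the tabulated values 1.47826..., 1.70100..., 1.65442... above.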
Secant Method: Like the Regula Falsi method and the bisection method, this method also requires two initial estimates of the root of f(x) = 0, but unlike those earlier methods it gives up the demand of bracketing the root. Like the Regula Falsi method, this method too retains the use of secants throughout while tracking the root of f(x) = 0. The secant joining the points (x_{n-1}, f(x_{n-1})) and (x_n, f(x_n)) is given by

y = f(x_n) + ((f(x_n) - f(x_{n-1})) / (x_n - x_{n-1})) (x - x_n)

Say it intersects the x-axis at x_{n+1}; then, setting y = 0 and x = x_{n+1} above,

x_{n+1} = x_n - f(x_n)(x_n - x_{n-1}) / (f(x_n) - f(x_{n-1}))

Now replace the pair (x_{n-1}, x_n) by (x_n, x_{n+1}) and repeat the process to get x_{n+2}, and so on. The method is algorithmically described below:

Algorithm: Given f(x), two initial points x_0, x_1 and the required level of accuracy epsilon, carry out the following steps to find a root of f(x) = 0:

(1) Set n = 1.
(2) For n = 1, 2, ... until the convergence criterion is satisfied, do:
    Compute x_{n+1} = x_n - f(x_n)(x_n - x_{n-1}) / (f(x_n) - f(x_{n-1})).
Example: Solve f(x) = 2x^3 - 2.5x - 5 = 0 for the root by the secant method to an accuracy of epsilon.

Solution: Set x_0 = 1, x_1 = 2 and compute x_2. Repeat the process with the pair (x_1, x_2), and so on, till you get an iterate x_{n+1} s.t. |f(x_{n+1})| <= epsilon. These results are tabulated below:
Secant Method

 n     x_{n-1}         x_n             x_{n+1}         f(x_{n+1})
 0     1.0000000000    2.0000000000    1.4782608747    -2.2348976135
 1     2.0000000000    1.4782608747    1.6198574305    -0.5488323569
 2     1.4782608747    1.6198574305    1.6659486294     0.0824255496
 3     1.6198574305    1.6659486294    1.6599303484    -0.0023854144
 4     1.6659486294    1.6599303484    1.6600996256    -0.0000097955
The geometrical visualization of the root-tracking procedure by the secant method for the above example is shown in the figure.
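The secant iteration above can be sketched as follows; as before, f(x) = 2x^3 - 2.5x - 5 is the test function inferred from the tables:

```python
# Secant method sketch. Does not require the two starting points to bracket a root.
def secant(f, x0, x1, eps=1e-7, max_iter=100):
    f0, f1 = f(x0), f(x1)
    x2 = x1
    for _ in range(max_iter):
        x2 = x1 - f1 * (x1 - x0) / (f1 - f0)   # intersection of secant with x-axis
        f2 = f(x2)
        if abs(f2) <= eps:
            return x2
        x0, f0, x1, f1 = x1, f1, x2, f2        # slide the pair forward
    return x2

f = lambda x: 2 * x**3 - 2.5 * x - 5           # inferred example function
root = secant(f, 1.0, 2.0)
print(root)                                    # root near 1.66010
```

Starting from x_0 = 1, x_1 = 2, the first computed iterate is 1.47826..., matching the table above.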
Exercise: Find the solutions, accurate to within the stated tolerance, of the following problems using the secant method.
(1)
(2)
Convergence of the secant method:

Let alpha be the root of f(x) = 0, and let e_n = x_n - alpha and e_{n+1} = x_{n+1} - alpha be the errors at the n-th and (n+1)-th iterations, where x_n and x_{n+1} are the corresponding approximations of alpha. If

|e_{n+1}| = C |e_n|^p, where C is a constant,    ...(i)

then p is the rate (order) of convergence of the method by which {x_n} is generated.

Claim: The secant method has superlinear convergence.

Proof: The iteration scheme for the secant method is given by

x_{n+1} = x_n - f(x_n)(x_n - x_{n-1}) / (f(x_n) - f(x_{n-1}))    ...(ii)

Say x_k = alpha + e_k, i.e. e_k = x_k - alpha is the error in the n-th estimate.    ...(iii)

Using (iii) in (ii) we get

e_{n+1} = (e_{n-1} f(x_n) - e_n f(x_{n-1})) / (f(x_n) - f(x_{n-1}))    ...(iv)

By Taylor expansion about alpha, using f(alpha) = 0,

f(x_n) = f(alpha + e_n) = e_n f'(alpha) + (1/2) e_n^2 f''(alpha) + ...    ...(v)

and similarly for f(x_{n-1}); also, by the Mean Value Theorem,

f(x_n) - f(x_{n-1}) = (x_n - x_{n-1}) f'(xi) for some xi in the interval (x_{n-1}, x_n).    ...(vi)

Using (v), (vi) in (iv) and retaining the leading terms, we get

e_{n+1} ~ (f''(alpha) / (2 f'(alpha))) e_n e_{n-1} = C* e_n e_{n-1}    ...(vii)

By the definition of rate of convergence, the method is of order p if

|e_{n+1}| = K |e_n|^p, and hence also |e_n| = K |e_{n-1}|^p.    ...(viii)

From (vii) and (viii) we get

K |e_n|^p = C* |e_n| |e_{n-1}| = C* |e_n| (|e_n|/K)^{1/p}

i.e., equating the powers of |e_n|,

p = 1 + 1/p, i.e. p^2 - p - 1 = 0.    ...(ix)

From (viii), (ix) we get

p = (1 + sqrt(5))/2 ~ 1.618, with 1 < p < 2.
Hence the convergence is superlinear.

Example: Solve f(x) = 0 for the root in the interval [0.5, 1.5] by the secant method.

Secant Method

 n     x_{n-1}         x_n             x_{n+1}         f(x_{n+1})
 0     0.5000000000    1.5000000000    0.8773435354     2.1035263538
 1     1.5000000000    0.8773435354    0.4212051630    -4.2280626297
 2     0.8773435354    0.4212051630    0.7258019447     0.3298732340
 3     0.4212051630    0.7258019447    0.7037572265     0.0327354670
 4     0.7258019447    0.7037572265    0.7013285756    -0.0005388701
 5     0.7037572265    0.7013285756    0.7013679147     0.0000009526
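The error relation e_{n+1} ~ C e_n e_{n-1} derived above can be checked numerically. The sketch below generates secant iterates for the inferred polynomial example and prints the ratios e_{n+1}/(e_n e_{n-1}), which should settle near a constant (here close to |f''(alpha)/(2 f'(alpha))|):

```python
# Numerical check of the secant error relation e_{n+1} ~ C e_n e_{n-1}.
def secant_error_ratios(f, x0, x1, steps=8):
    xs = [x0, x1]
    for _ in range(steps):
        a, b = xs[-2], xs[-1]
        xs.append(b - f(b) * (b - a) / (f(b) - f(a)))
    alpha = xs[-1]                        # converged iterate taken as the root
    errs = [abs(x - alpha) for x in xs[:-2]]
    return [e2 / (e1 * e0)
            for e0, e1, e2 in zip(errs, errs[1:], errs[2:]) if e1 * e0 > 0]

f = lambda x: 2 * x**3 - 2.5 * x - 5      # inferred example function
ratios = secant_error_ratios(f, 1.0, 2.0)
print(ratios)                             # ratios settle near a constant
```

The near-constant ratio is consistent with convergence of order p = (1 + sqrt(5))/2 ~ 1.618.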
Newton-Raphson Method: Unlike the earlier methods, this method requires only one appropriate starting point x_0 as an initial approximation to the root of the function f(x). At (x_0, f(x_0)) a tangent to f(x) is drawn. The equation of this tangent is given by

y - f(x_0) = f'(x_0)(x - x_0)

The point of intersection, say x_1, of this tangent with the x-axis (y = 0) is taken to be the next approximation to the root of f(x) = 0. So, on substituting y = 0 in the tangent equation, we get

x_1 = x_0 - f(x_0)/f'(x_0)

If |f(x_1)| <= epsilon (say), we have got an acceptable approximate root; otherwise we replace x_0 by x_1, draw a tangent to f(x) at (x_1, f(x_1)), and consider its intersection with the x-axis, say x_2, as an improved approximation to the root of f(x) = 0. If |f(x_2)| > epsilon, we iterate the above process till the convergence criterion is satisfied. This geometrical description of the method may be clearly visualized in the figure below:
The various steps involved in calculating the root of f(x) = 0 by the Newton-Raphson method are described compactly in the algorithm below.

Algorithm: Given a continuously differentiable function f(x) and an initial approximation x_0 to the root of f(x) = 0, the steps involved in calculating an approximation x_{n+1} to the root, s.t. |f(x_{n+1})| <= epsilon, are:

(1) Calculate f(x_0), f'(x_0) and set n = 0.
(2) For n = 0, 1, 2, ... until the convergence criterion is satisfied, do:
    Calculate x_{n+1} = x_n - f(x_n)/f'(x_n).
Remark (1): This method converges faster than the earlier methods. In fact the method converges at a quadratic rate. We will prove this later.

Remark (2): This method can be derived directly from the Taylor expansion of f(x) in the neighbourhood of the root alpha of f(x) = 0. The starting approximation x_0 is to be chosen properly, so that the first-order Taylor series approximation of f(x) in the neighbourhood of x_0 leads to x_1, an improved approximation to alpha, i.e.

f(x) = f(x_0) + (x - x_0) f'(x_0) + ((x - x_0)^2 / 2!) f''(x_0) + ...

Setting f(x) = 0 and neglecting (x - x_0)^2 and its higher powers, we get

f(x_0) + (x - x_0) f'(x_0) = 0, i.e. x = x_1 = x_0 - f(x_0)/f'(x_0)

Now the successive approximations x_2, x_3, ..., x_{n+1}, etc. may be calculated by the iterative formula:

x_{n+1} = x_n - f(x_n)/f'(x_n), n = 0, 1, 2, ...
Remark (3): One may also derive the above iteration formula starting with the iteration formula for the secant method. In a way this may help one to visualize the Newton-Raphson method as an improvement over the secant method. So, let us consider the iteration formula for the secant method, i.e.

x_{n+1} = x_n - f(x_n)(x_n - x_{n-1}) / (f(x_n) - f(x_{n-1}))

Here (f(x_n) - f(x_{n-1})) / (x_n - x_{n-1}) is the slope of the secant to the curve y = f(x) passing through the points (x_{n-1}, f(x_{n-1})) and (x_n, f(x_n)). This also represents the slope of a tangent to the curve parallel to the secant, intersecting the x-axis between x_{n-1} and x_n. If f(x) is differentiable, one may as well approximate this slope by f'(x_n), and thus arrive at the Newton-Raphson iteration formula.
Example: Solve f(x) = 2x^3 - 2.5x - 5 = 0 for the root in [1, 2] by the Newton-Raphson method.

Solution: Given f(x) = 2x^3 - 2.5x - 5, we have f'(x) = 6x^2 - 2.5. Take x_0 = 2. Since |f(x_1)| > epsilon, we repeat the process. Results are tabulated below:

Newton-Raphson Method

 n     x_n             x_{n+1}         f(x_{n+1})
 0     2.0000000000    1.7209302187    0.8910911679
 1     1.7209302187    1.6625729799    0.0347661413
 2     1.6625729799    1.6601046324    0.0000604780
 3     1.6601046324    1.6601003408    0.0000002435
Example: Solve f(x) = 0 for the root in [0.5, 1.5] by the Newton-Raphson method.

Solution: Say x_0 = 0.5. The results are tabulated below:

Newton-Raphson Method

 n     x_n             x_{n+1}         f(x_{n+1})
 0     0.5000000000    0.6934901476    0.1086351126
 1     0.6934901476    0.7013291121    0.0005313741
 2     0.7013291121    0.7013678551    0.0000003363
Exercise: Find the solutions, accurate to within the stated tolerance, of the following problems using the Newton-Raphson method.
Convergence of the Newton-Raphson method: Suppose alpha is a root of f(x) = 0 and x_n is an estimate of alpha. Then by Taylor series expansion we have

0 = f(alpha) = f(x_n) + (alpha - x_n) f'(x_n) + ((alpha - x_n)^2 / 2!) f''(xi)    ...(1*)

for some xi between alpha and x_n. By the Newton-Raphson method, we know that

x_{n+1} = x_n - f(x_n)/f'(x_n), i.e. f(x_n) = (x_n - x_{n+1}) f'(x_n)    ...(2*)

Using (2*) in (1*) we get

0 = (x_n - x_{n+1}) f'(x_n) + (alpha - x_n) f'(x_n) + ((alpha - x_n)^2 / 2) f''(xi)

i.e. (alpha - x_{n+1}) f'(x_n) = -((alpha - x_n)^2 / 2) f''(xi)

Say e_n = alpha - x_n and e_{n+1} = alpha - x_{n+1} denote the errors in the solution at the n-th and (n+1)-th iterations. Then

e_{n+1} = -(f''(xi) / (2 f'(x_n))) e_n^2

i.e. the error at each step is proportional to the square of the error at the previous step; hence the Newton-Raphson method is said to have quadratic convergence.

Note: Alternatively, one can also prove the quadratic convergence of the Newton-Raphson method based on fixed-point theory. It is worth stating a few comments on this approach, as it is a more general approach covering most of the iteration schemes discussed earlier.

A brief discussion on fixed-point iteration: Suppose that we are given a function f(x) on an interval [a, b] for which we need to find a root, i.e. we need to solve

f(x) = 0    ...(i)

Derive from it an equation of the form

x = g(x)    ...(ii)

Any solution of (ii) is called a fixed point, and it is a solution of (i). The function g(x) is called the "iteration function".

Example: Given an equation f(x) = 0, one may rewrite it algebraically in several different forms x = g(x); each such rearrangement gives a possible choice of the iteration function g(x).
Fixed-point iteration: Let alpha be a root of f(x) = 0 and g be an associated iteration function. Say x_0 is the given starting point. Then one can generate a sequence {x_n} of successive approximations of alpha as:

x_1 = g(x_0)
x_2 = g(x_1)
x_3 = g(x_2)
...
x_{n+1} = g(x_n)

This sequence is said to converge to alpha iff x_n tends to alpha as n tends to infinity. Now the natural question that arises is: what are the conditions on g s.t. the sequence x_n tends to alpha as n tends to infinity? Here, we state a few important conditions for such convergence:

(i) g(x) maps I = [a, b] into itself, i.e. a <= g(x) <= b for all x in I.
(ii) The iteration function g is defined and continuous on I = [a, b].
(iii) The iteration function g(x) is differentiable on I, and there exists a constant 0 < K < 1 s.t. |g'(x)| <= K for all x in I.

Theorem: Let g(x) be an iteration function satisfying (i), (ii) and (iii). Then g(x) has exactly one fixed point alpha in I and, starting with any x_0 in I, the sequence {x_n} generated by the fixed-point iteration function converges to alpha.

(iv) If |g'(alpha)| < 1 then the iteration converges near alpha. For rapid convergence it is desirable that g'(alpha) = 0. Under this condition, for the Newton-Raphson method (whose iteration function is g(x) = x - f(x)/f'(x)), one can show that g'(alpha) = 0, i.e. quadratic convergence.
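A sketch of the fixed-point iteration; the iteration function g(x) = cos x is an illustrative choice (not from the notes) that satisfies |g'(x)| = |sin x| < 1 near its fixed point, so the theorem's conditions hold there:

```python
import math

# Fixed-point iteration sketch: iterate x_{n+1} = g(x_n) until successive
# iterates agree to within eps.
def fixed_point(g, x0, eps=1e-10, max_iter=500):
    x = x0
    for _ in range(max_iter):
        x_new = g(x)
        if abs(x_new - x) <= eps:
            return x_new
        x = x_new
    return x

# illustrative iteration function g(x) = cos(x); its fixed point solves
# f(x) = x - cos(x) = 0
root = fixed_point(math.cos, 1.0)
print(root)   # fixed point near 0.739085
```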
Remark 1: One can generalize all the iterative methods to a system of nonlinear equations. For instance, if we have two nonlinear equations f_1(x, y) = 0, f_2(x, y) = 0, then, given a suitable starting point (x_0, y_0), the Newton-Raphson algorithm may be written as follows:

For i = 0, 1, 2, ... until satisfied, do:
    Solve the linear system J(x_i, y_i) (dx, dy)^T = -(f_1(x_i, y_i), f_2(x_i, y_i))^T,
    where J is the Jacobian matrix of partial derivatives of (f_1, f_2) w.r.t. (x, y),
    and set x_{i+1} = x_i + dx, y_{i+1} = y_i + dy.
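For a system of two equations, each Newton step solves the 2x2 linear system J (dx, dy)^T = -(f_1, f_2)^T. A sketch, using the hypothetical test system x^2 + y^2 = 4, x y = 1, chosen purely for illustration:

```python
# Newton-Raphson for a 2x2 nonlinear system, solving each linear step
# by Cramer's rule.
def newton_2d(F, J, x, y, eps=1e-12, max_iter=50):
    for _ in range(max_iter):
        f1, f2 = F(x, y)
        a, b, c, d = J(x, y)             # Jacobian rows: [a b; c d]
        det = a * d - b * c
        dx = (-f1 * d + f2 * b) / det    # Cramer's rule for J (dx,dy)^T = -F
        dy = (-a * f2 + c * f1) / det
        x, y = x + dx, y + dy
        if abs(dx) + abs(dy) <= eps:
            break
    return x, y

# hypothetical test system: x^2 + y^2 - 4 = 0, x*y - 1 = 0
F = lambda x, y: (x * x + y * y - 4.0, x * y - 1.0)
J = lambda x, y: (2 * x, 2 * y, y, x)
x, y = newton_2d(F, J, 2.0, 0.5)         # illustrative starting point
print(x, y)
```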
Exercises: Solve the following systems of equations by the Newton-Raphson method.
(1) Use the initial approximation given.
(2) Use the initial approximation given.
Bairstow's Method

Bairstow's method is an iterative method used to find both the real and complex roots of a polynomial. It is based on the idea of synthetic division of the given polynomial by a quadratic function, and it can be used to find all the roots of a polynomial. Given a polynomial, say,

f_n(x) = a_0 + a_1 x + a_2 x^2 + ... + a_n x^n,    (B.1)

Bairstow's method divides the polynomial by a quadratic function

x^2 - r x - s.    (B.2)

Now the quotient will be a polynomial of degree n - 2,

f_{n-2}(x) = b_2 + b_3 x + ... + b_n x^{n-2},    (B.3)

and the remainder is a linear function

R(x) = b_1 (x - r) + b_0,    (B.4)

i.e. f_n(x) = (x^2 - r x - s) f_{n-2}(x) + b_1 (x - r) + b_0.

Since the quotient f_{n-2}(x) and the remainder R(x) are obtained by standard synthetic division, the coefficients b_i can be obtained by the following recurrence relation:

b_n = a_n,    (B.5a)
b_{n-1} = a_{n-1} + r b_n,    (B.5b)
b_i = a_i + r b_{i+1} + s b_{i+2},  i = n-2, ..., 0.    (B.5c)
If x^2 - r x - s is an exact factor of f_n(x), then the remainder R(x) is zero, and the real/complex roots of x^2 - r x - s are roots of f_n(x). It may be noted that, to start with, the quadratic x^2 - r x - s is considered based on some guess values of r and s. So Bairstow's method reduces to determining values of r and s such that the remainder is zero. For finding such values, Bairstow's method uses a strategy similar to the Newton-Raphson method. Since both b_0 and b_1 are functions of r and s, we can have the Taylor series expansions:

b_1(r + dr, s + ds) = b_1 + (db_1/dr) dr + (db_1/ds) ds + ...,    (B.6a)
b_0(r + dr, s + ds) = b_0 + (db_0/dr) dr + (db_0/ds) ds + ...    (B.6b)
Here the second and higher order terms may be neglected, so that the improvements dr, ds over the guess values r, s may be obtained by equating (B.6a), (B.6b) to zero, i.e.

(db_1/dr) dr + (db_1/ds) ds = -b_1,    (B.7a)
(db_0/dr) dr + (db_0/ds) ds = -b_0.    (B.7b)
To solve the system of equations (B.7a)-(B.7b), we need the partial derivatives of b_0 and b_1 w.r.t. r and s. Bairstow has shown that these partial derivatives can be obtained by a second synthetic division of f_{n-2}(x) by x^2 - r x - s, which amounts to using the recurrence relation (B.5a)-(B.5c) with the a's replaced by the b's and the b's by c's, i.e.

c_n = b_n,    (B.8a)
c_{n-1} = b_{n-1} + r c_n,    (B.8b)
c_i = b_i + r c_{i+1} + s c_{i+2},  i = n-2, ..., 1,    (B.8c)

where

db_0/dr = c_1, db_0/ds = db_1/dr = c_2, db_1/ds = c_3.    (B.9)
The system of equations (B.7a)-(B.7b) may thus be written as

c_2 dr + c_3 ds = -b_1,    (B.10a)
c_1 dr + c_2 ds = -b_0.    (B.10b)

These equations can be solved for dr and ds, which in turn are used to improve the guess values r, s to r + dr, s + ds.
Now we can calculate the percentage approximate errors in (r, s) by

|e_r| = |dr / r| x 100%,  |e_s| = |ds / s| x 100%.    (B.11)

If |e_r| or |e_s| is larger than the iteration-stopping error epsilon, then we repeat the process with the new guess, i.e. r = r + dr, s = s + ds. Otherwise, the roots of f_n(x) corresponding to the factor x^2 - r x - s can be determined by

x = (r +- sqrt(r^2 + 4s)) / 2.    (B.12)
If we want to find all the roots of f_n(x), then at this point we have the following three possibilities:

1. If the quotient polynomial f_{n-2}(x) is a third (or higher) order polynomial, then we can again apply Bairstow's method to the quotient polynomial. The previous values of r, s can serve as the starting guesses for this application.

2. If the quotient polynomial f_{n-2}(x) is a quadratic function, then use (B.12) to obtain its remaining two roots directly.

3. If the quotient polynomial f_{n-2}(x) is a linear function, say b_2 + b_3 x, then the remaining single root is given by x = -b_2 / b_3.

Example: Find all the roots of the given polynomial by Bairstow's method, with the given initial values of r and s.
Solution: Set iteration = 1. Using the recurrence relations (B.5a)-(B.5c) and (B.8a)-(B.8c) we get the coefficients b_i and c_i, and hence the simultaneous equations (B.10a)-(B.10b) for dr and ds; on solving them we get the corrections dr, ds and the improved guess r + dr, s + ds. With the new (r, s) we again compute the b's and c's and solve (B.10a)-(B.10b). Proceeding in this manner, in about ten iterations we obtain converged values of (r, s), with the errors (B.11) below the prescribed tolerance. Now, on using (B.12), we obtain two roots of the polynomial. At this point the quotient is a quadratic equation, and its two roots, obtained from the quadratic formula, are the remaining roots of the given polynomial.

Exercises:
(1) Use the initial approximation given to find a quadratic factor of the form x^2 - r x - s of the polynomial equation using Bairstow's method, and hence find all its roots.

(2) Use the initial approximation given to find a quadratic factor of the form x^2 - r x - s of the polynomial equation using Bairstow's method, and hence find all the roots.
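The whole Bairstow procedure (synthetic division (B.5), the c-recurrence (B.8), the correction system (B.10) and the root formula (B.12)) can be sketched as follows. The test polynomial x^4 - 2x^3 - x + 2 = (x^2 - 3x + 2)(x^2 + x + 1) and the starting guesses r = 2.9, s = -1.9 are illustrative, not taken from the notes:

```python
import cmath

def bairstow_step(a, r, s):
    """One Bairstow iteration on a[0] + a[1] x + ... + a[n] x^n for the
    quadratic factor x^2 - r x - s; returns the updated (r, s)."""
    n = len(a) - 1
    b = [0.0] * (n + 1)
    c = [0.0] * (n + 1)
    b[n] = a[n]                        # (B.5a)
    b[n - 1] = a[n - 1] + r * b[n]     # (B.5b)
    for i in range(n - 2, -1, -1):     # (B.5c)
        b[i] = a[i] + r * b[i + 1] + s * b[i + 2]
    c[n] = b[n]                        # (B.8a)
    c[n - 1] = b[n - 1] + r * c[n]     # (B.8b)
    for i in range(n - 2, 0, -1):      # (B.8c)
        c[i] = b[i] + r * c[i + 1] + s * c[i + 2]
    det = c[2] * c[2] - c[3] * c[1]    # solve (B.10a)-(B.10b) by Cramer's rule
    dr = (-b[1] * c[2] + b[0] * c[3]) / det
    ds = (-b[0] * c[2] + b[1] * c[1]) / det
    return r + dr, s + ds

def bairstow(a, r, s, eps=1e-12, max_iter=100):
    for _ in range(max_iter):
        r_new, s_new = bairstow_step(a, r, s)
        if abs(r_new - r) <= eps and abs(s_new - s) <= eps:
            return r_new, s_new
        r, s = r_new, s_new
    return r, s

# illustrative polynomial: 2 - x - 2x^3 + x^4 = (x^2 - 3x + 2)(x^2 + x + 1)
a = [2.0, -1.0, 0.0, -2.0, 1.0]        # coefficients a_0 ... a_n
r, s = bairstow(a, 2.9, -1.9)          # converges to the factor x^2 - 3x + 2
roots = [(r + cmath.sqrt(r * r + 4 * s)) / 2,
         (r - cmath.sqrt(r * r + 4 * s)) / 2]   # (B.12)
print(r, s, roots)
```

Once (r, s) has converged, the quotient here is the quadratic x^2 + x + 1, whose complex-conjugate roots supply the remaining roots of the quartic (case 2 of the three possibilities above).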