
Lecture Note by S.M Alay-e-Abbas: Solutions of Nonlinear Equations and Roots

Function and Roots: We'll discuss the problem of finding the roots of a function (or of the nonlinear equation f(x) = 0) in one dimension. This is a relatively simple problem that arises quite frequently. Important in its own right (it is one of the oldest of numerical problems: finding the x value for which a given function f(x) is zero), the problem provides us with an opportunity to explore and illustrate the interplay among formal mathematics, numerical analysis, and computational physics. This problem often appears as an intermediate step in the study of a larger problem, but it is sometimes the problem of interest in itself. For low-order polynomials, finding the zero of a function is a trivial problem: if the function is f(x) = x - 3, for example, the equation x - 3 = 0 is simply rearranged to read x = 3, which is the solution. This is called a CLOSED FORM SOLUTION. Closed form solutions for the roots exist for quadratic, cubic, and quartic equations as well, although they become rather cumbersome to use. But no general solution exists for polynomials of fifth order and higher, and many equations involving functions other than polynomials have no analytic solution at all. So what we're really seeking is a general method for finding the root of a nonlinear equation. When expressed in this way, the problem seems anything but trivial.


Graphical Method: To help focus our attention on the problem, let's consider a specific example: f(x) = cos x - x. Such transcendental equations are not (in general) solvable analytically. The first thing we might try is to draw the figure shown below,

in which cos x and x are plotted. The root is simply the horizontal coordinate at which the two curves cross. The eye has no trouble finding this intersection, and the graph can easily be read to determine that the root lies near x = 0.75. But greater accuracy than this is hard to achieve by graphical methods. Furthermore, if there are a large number of roots to find, or if the function is not an easy one to plot, the effectiveness of this graphical method rapidly decreases, and we have no choice but to attempt a solution by numerical means. What we would like is a reliable numerical method that provides accurate results with a minimum of human intervention.
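A minimal sketch of this graphical check, assuming Python with numpy and matplotlib (the plotting range [0, 2] and the number of sample points are chosen only for illustration):

    import numpy as np
    import matplotlib.pyplot as plt

    # Plot y = cos(x) and y = x; the root of cos(x) - x = 0 is where the two curves cross.
    x = np.linspace(0.0, 2.0, 400)
    plt.plot(x, np.cos(x), label="cos x")
    plt.plot(x, x, label="x")
    plt.xlabel("x")
    plt.ylabel("y")
    plt.legend()
    plt.title("Graphical location of the root of cos x - x = 0")
    plt.show()

Reading the crossing off such a plot gives roughly x = 0.75 but, as noted above, not much more precision than that.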


Bisection Method: From the figure, it's clear that a root lies between [0, π/2]; that is, the root is bracketed between these two limits. We can improve our brackets by dividing the interval in half and retaining the half that contains the root. We can then check how small the bracket has become: if it is still too large, we halve the interval again. By doing this repeatedly, the upper and lower limits of the interval approach the root. Since the interval is halved at each step, the method is called the BISECTION METHOD. Given a function f(x) = 0 and an interval that might contain a root, we perform a predetermined number of iterations of the bisection method. The function must be continuous on the interval, so that a sign change guarantees a root. The method works like this: if [a, b] is the interval in which the root of f(x) lies, we first compute f(a) and f(b). Next we evaluate the function at the midpoint c = (a + b)/2. If the product f(a) f(c) is negative, the root lies in [a, c]; if it is positive, the root lies in [c, b]. Example 1: Find the root of the equation f(x) = cos x - x = 0 by the method of bisection. How many iterations are necessary to determine the root to 8 significant figures? (Ans: 0.739085) Bisection works, and it may be foolproof, but it certainly can be slow! Defining the convergence as the difference between the upper and lower bounds on the root, at every iteration the error is halved.

If ε_i is the error at the ith step, then at the next step we have ε_{i+1} = ε_i/2, and the rate of convergence is said to be linear. Given the initial brackets, we can even determine how many iterations are necessary to obtain a specified level of accuracy. Example 2: Find the solution of the equation x^12 - 1 = 0 using the bisection method, and compare your answer by plotting the error as a function of the iteration number. Ans:

Note that the error is halved with each additional iteration. Example 3: Find all the zeros of the function f(x) = cos(10x) + cos(13x) on the interval (0, 3), check your answer graphically, and compare the graphical and bisection results.
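A minimal sketch of the bisection procedure described above, assuming Python (the function name bisect, the tolerance tol, and the bracket [0, π/2] are illustrative choices, not part of the original notes):

    import math

    def bisect(f, a, b, tol=1.0e-8):
        """Halve the bracket [a, b] until it is narrower than tol; return midpoint and count."""
        fa, fb = f(a), f(b)
        if fa * fb > 0.0:
            raise ValueError("f(a) and f(b) must have opposite signs")
        n = 0
        while (b - a) > tol:
            c = 0.5 * (a + b)      # midpoint of the current bracket
            fc = f(c)
            if fa * fc < 0.0:      # root lies in [a, c]
                b, fb = c, fc
            else:                  # root lies in [c, b]
                a, fa = c, fc
            n += 1
        return 0.5 * (a + b), n

    root, n = bisect(lambda x: math.cos(x) - x, 0.0, math.pi / 2.0)
    print(root, n)                 # about 0.739085, after roughly 28 halvings of the bracket

Since the bracket width is halved at every step, reaching a tolerance tol from an initial bracket of width (b - a) takes about log2((b - a)/tol) iterations; for [0, π/2] and tol = 10^-8 this is roughly 28, which is how the iteration count asked for in Example 1 can be estimated in advance.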


Newton-Raphson Method: It would seem that there should be a better way to find the root in fewer iterations, at least for a nice, smoothly varying function such as the one given above. But we need to use information about the function and how it changes, that is, derivative information. That information is most succinctly provided by a Taylor series expansion of the function, i.e.,

f(x) = f(a) + (x - a) f'(a) + (x - a)^2/2! f''(a) + ... + (x - a)^n/n! f^(n)(a) + R_n,

where R_n -> 0 as n -> infinity.

Assume that we have a good guess a for the root, so that (x - a) is a small number. Then, keeping just the first two terms of the Taylor series, we have
f(x) = f(a) + (x - a) f'(a)

We want to find that value of x for which f(x) = 0; setting f(x) equal to zero and solving for x, we quickly find
x = a - f(a)/f'(a)

From the figure below we see that at x = a both the function and its derivative are known; the derivative defines the tangent line to the curve at that point. Assuming that the function doesn't differ too much from a straight line, a good approximation to where the function crosses zero is where the tangent line crosses zero.


This point can be taken as a new guess for the root, the function and its derivative evaluated, and so on. The idea of using one value to generate a better value is called iteration, and it is a very practical technique which we will use often. Changing the notation a little, we can calculate the (i + 1)th value from the ith value by the iterative expression
x_{i+1} = x_i - f(x_i)/f'(x_i)

From this expression, starting from the graphical estimate x_0 = 0.75 and computing x_{i+1} twice, we get

x_1 = 0.739111

and

x_2 = 0.739085,

which is accurate to seven significant figures. Example 4: Use the Newton-Raphson method to find the roots of a Legendre polynomial.

We need to take a suitable starting value as the guess to begin the iteration for the smallest non-negative root. The error for one iteration of this scheme goes like

ε_{i+1} = ε_i - f(x_i)/f'(x_i),

where ε_i = x_i - x* is the error at the ith step and x* is the exact root.

Putting in the value of f(x_i) obtained from the Taylor expansion about x_i, and using the fact that f(x*) = 0 at the exact root, we get

ε_{i+1} = [f''(x_i) / 2f'(x_i)] ε_i^2,

which gives the convergence of the NR method and is termed quadratic: the error at each step is proportional to the square of the error at the previous step.
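A minimal sketch of the Newton-Raphson iteration x_{i+1} = x_i - f(x_i)/f'(x_i), assuming Python; the function name newton, the tolerance, and the use of cos x - x with the graphical starting guess 0.75 are illustrative choices:

    import math

    def newton(f, fprime, x0, tol=1.0e-12, max_iter=20):
        """Newton-Raphson: repeatedly replace x by x - f(x)/f'(x) until the step is tiny."""
        x = x0
        for _ in range(max_iter):
            step = f(x) / fprime(x)
            x = x - step
            if abs(step) < tol:    # the last step size serves as an error estimate
                return x
        raise RuntimeError("Newton-Raphson did not converge")

    f = lambda x: math.cos(x) - x
    fp = lambda x: -math.sin(x) - 1.0
    print(newton(f, fp, 0.75))     # about 0.7390851332, reached in only a few iterations

If the successive steps are printed, the quadratic behaviour derived above can be seen directly: the error is roughly squared from one iteration to the next.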


Secant Method: The basic disadvantage of the Newton-Raphson method is that it needs to evaluate the derivative of f(x). This can be avoided by using the definition of the derivative: we can replace f'(x_i) with the slope of the secant through the two most recent points (a backward difference), defined as
f'(x_i) ≈ [f(x_i) - f(x_{i-1})] / (x_i - x_{i-1})

putting this in
x_{i+1} = x_i - f(x_i)/f'(x_i)

we get
x_{i+1} = x_i - f(x_i) / {[f(x_i) - f(x_{i-1})] / (x_i - x_{i-1})} = x_i - (x_i - x_{i-1}) f(x_i) / [f(x_i) - f(x_{i-1})]

on solving we get
x_{i+1} = [x_{i-1} f(x_i) - x_i f(x_{i-1})] / [f(x_i) - f(x_{i-1})]

Note that this formula requires two starting values, x_0 and x_1, between which the root lies. Example 5: Use the secant method to estimate the root of the equation f(x) = sin x - 5x + 2 = 0, given that x_0 = 0.4 and x_1 = 0.6. Sol: For i = 1 the formula becomes

x_2 = [x_0 f(x_1) - x_1 f(x_0)] / [f(x_1) - f(x_0)]. Putting in the values and solving, we find x_2 = 0.494, x_3 = 0.4950, and x_4 = 0.4950. Hence 0.4950 is the root of the equation.
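A minimal sketch of the secant iteration in the rearranged form above, assuming Python; the function name secant, the tolerance, and the iteration cap are illustrative choices:

    import math

    def secant(f, x0, x1, tol=1.0e-10, max_iter=50):
        """Secant method: x_{i+1} = [x_{i-1} f(x_i) - x_i f(x_{i-1})] / [f(x_i) - f(x_{i-1})]."""
        f0, f1 = f(x0), f(x1)
        for _ in range(max_iter):
            x2 = (x0 * f1 - x1 * f0) / (f1 - f0)
            if abs(x2 - x1) < tol:
                return x2
            x0, f0 = x1, f1        # shift the two retained points forward
            x1, f1 = x2, f(x2)
        raise RuntimeError("secant method did not converge")

    # Example 5: f(x) = sin x - 5x + 2 with x0 = 0.4 and x1 = 0.6
    print(secant(lambda x: math.sin(x) - 5.0 * x + 2.0, 0.4, 0.6))   # about 0.4950, as found above

Note that, unlike Newton-Raphson, no derivative is needed: only the two most recent points and their function values are retained.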

This is not the end of the methods! There are more!
