
Bicol University East Campus

College of Engineering

Legazpi City

Academic Year 2019-2020

Machine Problem 3:

Application of the Newton-Raphson Method

Calculus 1

Luisa Corido Pernia

BSChe 1C

Engr. Junel Bon Borbo

Professor

I. Introduction
The Newton-Raphson method, or Newton Method, is a powerful technique for solving equations
numerically. Like so much of the differential calculus, it is based on the simple idea of linear approximation.
The Newton Method, properly used, usually homes in on a root with devastating efficiency.
Let f(x) be a well-behaved function, and let r be a root of the equation f(x) = 0. We start with an estimate
x0 of r. From x0, we produce an improved estimate x1. From x1, we produce a new estimate x2. From x2, we
produce a new estimate x3. We go on until we are ‘close enough’ to r or until it becomes clear that we are
getting nowhere. The above general style of proceeding is called iterative. Of the many iterative root-
finding procedures, the Newton-Raphson method, with its combination of simplicity and power, is the most
widely used.
The initial estimate is sometimes called x1, but most mathematicians prefer to start counting at 0.
Sometimes the initial estimate is called a “guess.” The Newton Method is usually very good if x0 is close to
r, and can be horrid if it is not. The “guess” x0 should be chosen with care.

II. Objectives
1. Solve at least 10 problems using Newton's method.
2. Estimate the approximate value of the roots of the polynomial equation.
III. Materials and Methods

Materials
 Microsoft Excel
 Microsoft Word
 Calculator
 Ballpen and notebook
 Laptop
 Set of equations to solve

Methods
Suppose you have a function f(x), and you want to find as accurately as possible where it crosses the x-
axis; in other words, you want to solve f(x) = 0. Suppose you know of no way to find an exact solution by
any algebraic procedure, but you can use an approximation, provided it can be made quite close to the true
value.
Newton's method is a way to find a solution to the equation to as many decimal places as you want. It is
what is called an "iterative procedure," meaning that it can be repeated again and again to get an answer
of greater and greater accuracy. Iterative procedures like Newton's method are well suited to programming
for a computer. Newton's method uses the fact that the tangent line to a curve is a good approximation to
the curve near the point of tangency.
To estimate the solution of an equation f(x) = 0, we produce a sequence of approximations that approach
the solution. We find the first estimate by sketching a graph or by guessing. We choose x1 which is close to
the solution according to our estimate. The method then uses the tangent line to the curve at the point (x1,
f(x1)) as an estimate of the path of the curve.
The functions investigated were polynomial, trigonometric, and exponential functions.

Figure 1

We label the point where this tangent to the graph of f(x) cuts the x-axis, x2. Usually x2 is a better
approximation to the solution of f(x) = 0 than x1. We can find a formula for x2 in terms of x1 by using the
equation of the tangent at (x1,f(x1)):
y = f(x1) + f′(x1)(x − x1).
The x-intercept of this line, x2, occurs when y = 0, that is,
f(x1) + f′(x1)(x2 − x1) = 0 or −f(x1) = f′(x1)(x2 − x1).
This implies that
−f(x1)/f′(x1) = x2 − x1, or x2 = x1 − f(x1)/f′(x1).
If f(x2) ≠ 0, we can repeat the process to get a third approximation to the zero as
x3 = x2 − f(x2)/f′(x2)
In the same way we can repeat the process to get a fourth and further approximations to the zero using the
formula:
xn+1 = xn − f(xn)/f′(xn)
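The recursive formula above can be sketched as a short program. Here is a minimal Python implementation; the function names, the tolerance, and the sample equation x² − 2 = 0 are illustrative choices, not part of the original report:

```python
def newton(f, fprime, x0, tol=1e-10, max_iter=50):
    """Iterate x_{n+1} = x_n - f(x_n)/f'(x_n) until |f(x)| is small."""
    x = x0
    for _ in range(max_iter):
        fx = f(x)
        if abs(fx) < tol:      # close enough to a root
            return x
        dfx = fprime(x)
        if dfx == 0:           # Newton's method requires a nonzero slope
            raise ZeroDivisionError("zero derivative at x = %g" % x)
        x = x - fx / dfx       # the Newton update
    return x

# Example: solve x^2 - 2 = 0; the positive root is sqrt(2)
root = newton(lambda x: x * x - 2, lambda x: 2 * x, 1.0)
```

With the starting guess 1.0 the iterates converge to the square root of 2 in a handful of steps, illustrating the quadratic convergence discussed later.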

Figure 2

Note: When applying Newton’s method it may happen that the sequence of approximations gets further
and further away from the zero of the function (or further apart from each other) as the value of n
increases. It may also happen that one of the approximations falls outside the domain of the function. In
this case you should start over with a different approximation.
Excel is a very efficient tool for carrying out Newton's method.

Newton’s Method: Let f(x) = 0 be an equation. Define xn recursively as follows:

xn+1 = xn − f(xn)/f′(xn)

Here f′(xn) refers to the derivative of f(x) at xn.


Property 1: Let xn be defined from f(x) as in Definition 1. If function f is well behaved, and the initial guess
is suitable, then f(xn) ≈ 0 for sufficiently large n.
Observation: Newton’s method requires that f(x) has a derivative and that the derivative not be zero.
Usually the method converges quickly to a solution, but this can depend on the initial guess. Also, in cases
where there are multiple solutions, different initial guesses can lead to different solutions.
Example 1.

f(x) = 3x − cos x − 1

f’(x) = 3 + sin x

First you need to label the columns: one for the estimate (x), a column for the function
evaluations (f(x)), and a column for the slope (f’(x)).
Enter the initial guess in the x column (here column B). In column C write the formula
=3*B129-COS(B129)-1, which substitutes the x value into the f(x) equation, and in
column D write the formula =3+SIN(B129), which substitutes the x value into the
derived equation (f’(x)).
In the next row of column B (cell B130) enter the formula =B129-C129/D129, which
computes the Newton update x − f(x)/f’(x). Select cell B130, move the cursor to its
bottom-right corner until a black plus sign appears, and drag the black plus sign down to
cell B144.

Repeat the fill for column C.

Repeat the fill for column D. That’s it, you have now learned how to use the Newton
Method in Excel.
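The same computation the spreadsheet performs can be checked in a few lines of Python; this is a sketch of the iteration for f(x) = 3x − cos x − 1, with an illustrative starting value of 0.5:

```python
import math

x = 0.5                             # illustrative starting value
for _ in range(20):
    fx = 3 * x - math.cos(x) - 1    # f(x), as in the f(x) column
    dfx = 3 + math.sin(x)           # f'(x), as in the slope column
    x = x - fx / dfx                # the Newton update, as in the x column
residual = 3 * x - math.cos(x) - 1  # should now be essentially zero
```

After a few iterations x stops changing from step to step, exactly as the rows of the spreadsheet stop changing.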

Note: Once the value of x stops changing from row to row and f(x) is small (close to
zero), you have the solution (assuming the process converged).

In that example we notice that Newton’s method did not need many iterations to give us an
approximation in the desired range of accuracy. This will not always be the case. Sometimes it will take
many iterations through the process to get to the desired accuracy, and on occasion it can fail completely.
It is guaranteed to converge if the initial guess x0 is close enough, but it is hard to make a clear statement
about what we mean by ‘close enough’ because this is highly problem specific. A sketch of the graph of
f(x) can help us decide on an appropriate initial guess x0 for a problem. In that example we first graphed
the function to see whether convergence would occur.

This graph serves as a basis for choosing an appropriate initial guess that is close to the root. It is very
efficient and accurate to graph the equation first before proceeding.

IV. Results and Discussion
Example 1
Here we will find the critical points of the function f(x) = sin(x) − x². This means solving the equation f′(x) =
cos(x) − 2x = 0.
To make an educated guess for the initial value x0, we notice that cos(x) − 2x = 0 is the same as cos(x) = 2x,
and we can understand this graphically: the solution is given by the intersection of the graph y = cos(x) with
the line y = 2x.

Figure 3
From this picture, it looks like x0 = 0 is a pretty good place to start. The iteration is given by
xn+1 = xn − (cos(xn) − 2xn)/(−sin(xn) − 2)
The following tool will perform the iteration for you. (Note: you should enter the function as cos(x) -
2*x.) You should find a value of approximately 0.450183.
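The iteration above is easy to reproduce without the tool; a minimal Python sketch starting from x0 = 0, as suggested by the picture:

```python
import math

x = 0.0   # the starting point x0 = 0 read off the graph
for _ in range(20):
    # Newton update for f(x) = cos(x) - 2x, f'(x) = -sin(x) - 2
    x = x - (math.cos(x) - 2 * x) / (-math.sin(x) - 2)
# x is now approximately 0.450183
```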

f(x) = sin(x) − x²
f’(x) = cos(x) − 2x

n    x            f(x)          f'(x)
0    0.86          0.01824256   -1.067563
1    0.877088051  -0.0004032    -1.114783
2    0.876726378  -1.811E-07    -1.113782
3    0.876726215  -3.653E-14    -1.113781
4    0.876726215   0            -1.113781
(rows 5–15 repeat the values of row 4)

[Three charts of the iterate x against the iteration number n, for the different initial guesses used.]

Figure 4. Trigonometric graph of Iteration Result


In this figure, notice that when you guess close to the approximate value the iteration converges almost
instantly. It converges by iteration number 4, where the final approximation is 0.876726215.
We first need a good initial guess, which generally requires some creativity. Notice how the values
of x become closer and closer to the same value. This means that we have found the approximate
solution. In fact, it was obtained after only four relatively painless steps.
Notice that, using the same equation, when we have an initial guess with a large value it takes
many iterations before it converges. It is better to start from a small value, because a large error in the
initial estimate can contribute to non-convergence of the algorithm. In this figure the initial guess was
56,785 and it converged at iteration 21. Compared to the first figure, this one takes many iterations to
converge. Notice how the initial guess affects the convergence to the root; so it is necessary to think
critically and creatively.
When the initial guess was negative it also converged easily; in our trials, negative values between
0 and −1 converged reliably.
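The effect of the initial guess on the iteration count can be reproduced in code. This sketch counts Newton steps on f(x) = sin x − x² for a guess near the root and for the report's large guess; the helper name and tolerance are illustrative:

```python
import math

def iterations_needed(x0, tol=1e-10, max_iter=100):
    """Count Newton steps on f(x) = sin(x) - x^2 until |f(x)| < tol."""
    x = x0
    for n in range(max_iter):
        fx = math.sin(x) - x * x
        if abs(fx) < tol:
            return n, x
        x = x - fx / (math.cos(x) - 2 * x)   # Newton update
    return max_iter, x

near, root_near = iterations_needed(0.86)     # guess close to the root
far, root_far = iterations_needed(56785.0)    # the report's large guess
```

Both runs reach the same root near 0.8767, but the large guess needs many more iterations: for big x each Newton step roughly halves the iterate, so the method spends most of its time just getting down to the right scale.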

Example 2
We solve cos x = 2x, that is,
f(x) = cos x − 2x
f’(x) = −sin x − 2

n    x            f(x)          f'(x)
0    0.5          -0.1224174    -2.479426
1    0.450626693  -0.0010791    -2.43553
2    0.450183648  -8.835E-08    -2.435131
3    0.450183611   0            -2.435131
(rows 4–10 repeat the values of row 3)

[Graphs 1–4: the iterate x against the iteration number n for different initial guesses.]

Figure 5: Graph of the result of iterations

This graph simply tells us that if you guess a large initial value, it will take many iterations before the method
finally converges; one of the reasons is a bad starting point, one that lies outside the range of
guaranteed convergence. For example, in Graph 3 the initial guess was 87,654 and convergence was really
slow: it converged at iteration number 15. By comparison, Graph 1 converged almost instantly, at iteration 4,
with an initial guess of 0.5, while Graph 2 needed many more iterations. This happens when the initial
guess does not lie in the range of guaranteed convergence.

Exponential Function example

Example 1
Let’s try to find a root of the equation f(x) = e^x − 2x = 0. Notice that f’(x) = e^x − 2, so that
xn+1 = xn − (e^xn − 2xn)/(e^xn − 2)
If we try an initial value x0 = 1, we find that
x1 = 0, x2 = 1, x3 = 0, x4 = 1, …
In other words, Newton’s Method fails to produce a solution. Why is this? Because there is no solution to
be found! We can rewrite the equation as e^x = 2x, and the following graph shows that there is no such
solution: the two curves never intersect.
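The two-cycle described above is easy to reproduce; a minimal sketch:

```python
import math

def step(x):
    """One Newton update for f(x) = e^x - 2x, f'(x) = e^x - 2."""
    return x - (math.exp(x) - 2 * x) / (math.exp(x) - 2)

x1 = step(1.0)   # gives 0
x2 = step(x1)    # gives 1 again: the iterates cycle forever
```

Since f has no real zero, the iteration has nothing to converge to; starting at 1 it simply alternates between 1 and 0.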

f(x) = e^x − 2x
f’(x) = e^x − 2

n    x            f(x)          f'(x)
0    0.69          0.61371553   -0.006284
1    98.34594393   5.1416E+42    5.142E+42
2    97.34594393   1.8915E+42    1.891E+42
3    96.34594393   6.9584E+41    6.958E+41
4    95.34594393   2.5599E+41    2.56E+41
5    94.34594393   9.4172E+40    9.417E+40
6    93.34594393   3.4644E+40    3.464E+40
7    92.34594393   1.2745E+40    1.274E+40
8    91.34594393   4.6886E+39    4.689E+39
9    90.34594393   1.7248E+39    1.725E+39
10   89.34594393   6.3453E+38    6.345E+38
11   88.34594393   2.3343E+38    2.334E+38
12   87.34594393   8.5874E+37    8.587E+37
13   86.34594393   3.1591E+37    3.159E+37
14   85.34594393   1.1622E+37    1.162E+37
15   84.34594393   4.2754E+36    4.275E+36

Figure 6: Data analysis

In this figure, notice that no convergence occurs even though the initial guess was close to zero. The graph
vividly shows that there is no convergence: the iterates grow without bound.
n    x           f(x)        f'(x)
0    -23         46          -2
1    1.23E-09    1           -1
2    1           0.718282    0.718282
3    4.66E-09    1           -1
4    1           0.718282    0.718282
5    1.76E-08    1           -1
6    1           0.718282    0.718282
7    6.67E-08    1           -1
8    1           0.718282    0.718282
9    2.53E-07    1           -1
10   1           0.718282    0.718283
11   9.56E-07    0.999999    -1
12   1.000001    0.718283    0.718284
13   3.62E-06    0.999996    -1
14   1.000004    0.718284    0.718292
15   1.37E-05    0.999986    -0.99999

Figure 7: Data analysis


This is another initial guess for the same equation above. If we take a negative value as the initial guess, the
approximations oscillate between values near 0 and 1. If we extended the table, the result would be the
same: the iterates keep alternating.

Example 2.
f(x) = e^(2x) − x − 6
f’(x) = 2e^(2x) − 1
Solution: Let f(x) = e^(2x) − x − 6. We want to find where f(x) = 0.
xn+1 = xn − (e^(2xn) − xn − 6)/(2e^(2xn) − 1) = ((2xn − 1)e^(2xn) + 6)/(2e^(2xn) − 1)
We need to choose an initial estimate x0. This can be done in various ways. We can use a graphing
calculator or a graphing program to graph y = f(x) and eyeball where the graph crosses the x-axis.

It is easy to verify that f(1) is about 0.389 while f(0.95) is about −0.264, so f changes sign between 0.95
and 1, and by the Intermediate Value Theorem there is a root between 0.95 and 1. And since f(0.95) is
closer to 0 than f(1) is, maybe the root is closer to 0.95 than to 1.
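Both the sign change and the iteration itself can be verified programmatically; a sketch (the iteration count is an illustrative choice):

```python
import math

def f(x):
    return math.exp(2 * x) - x - 6

def fprime(x):
    return 2 * math.exp(2 * x) - 1

# Intermediate Value Theorem check: f changes sign on [0.95, 1]
sign_change = f(0.95) < 0 < f(1)

# Newton iteration from x0 = 0.95
x = 0.95
for _ in range(20):
    x = x - f(x) / fprime(x)
# x is now approximately 0.97087
```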

f(x) = e^(2x) − x − 6
f’(x) = 2e^(2x) − 1

n    x            f(x)          f'(x)
0    0.95         -1.7785807     9.3428386
1    1.14036834    2.64351701   18.567771
2    0.997997075   0.36151878   13.719032
3    0.97164545    0.0100438    12.963378
4    0.970870668   8.3777E-06   12.941758
5    0.97087002    5.8433E-12   12.94174
6    0.97087002    0            12.94174
(rows 7–15 repeat the values of row 6)

[Two charts of the iterate x against the iteration number n for further initial guesses.]

Figure 8: Exponential Graph of the Results of Iteration

Notice that the initial estimate x0 = 0.95 converges easily, by iteration number 6. Estimating the
initial value of x requires logical thinking. If we want greater assurance, we can compute f(0.97085) and
f(0.97095) and hope for a sign change, which would show that there is a root between 0.97085 and 0.97095.
There is indeed such a sign change: f(0.97085) is about −2.6×10^−4 while f(0.97095) is about 10^−3.
Notice the second graph: the initial guess was 953 and the result was #NUM!. The reason is that the
initial guess was too large; e^(2x) overflows Excel’s numeric range for such a value.

V. Conclusion
Newton's method is an extremely powerful technique. In general the convergence is quadratic: as the
method converges on the root, the difference between the root and the approximation is squared (so the
number of accurate digits roughly doubles) at each step. However, there are some difficulties with the
method.
One of the difficulties is obtaining the derivative of the function. Newton's method requires that the
derivative be calculated directly. An analytical expression for the derivative may not be easily obtainable
or could be expensive to evaluate. In these situations, it may be appropriate to approximate the derivative
by using the slope of a line through two nearby points on the function.
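Replacing the derivative by the slope through the two most recent points gives the secant method; a minimal sketch of this variant (the function name, tolerances, and the sample equation are illustrative):

```python
def secant(f, x0, x1, tol=1e-12, max_iter=50):
    """Newton-like iteration with the derivative replaced by the
    slope of the line through the two most recent points."""
    for _ in range(max_iter):
        f0, f1 = f(x0), f(x1)
        if abs(f1) < tol:
            return x1
        # slope (f1 - f0)/(x1 - x0) stands in for f'(x1)
        x0, x1 = x1, x1 - f1 * (x1 - x0) / (f1 - f0)
    return x1

root = secant(lambda x: x * x - 2, 1.0, 2.0)   # approximates sqrt(2)
```

Convergence is slightly slower than Newton's method (superlinear rather than quadratic), but no analytical derivative is needed.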
Another difficulty is failure of the method to converge to the root. It is important to review the proof of
quadratic convergence of Newton's method, and specifically the assumptions made in that proof, before
implementing it; when the method fails to converge, it is because those assumptions are not met. In other
words, Newton's method is only guaranteed to converge if certain conditions are satisfied.
Newton's method is guaranteed to converge under certain conditions. One popular set of such conditions is this: if a
function has a root and has a non-zero derivative at that root, and it's continuously differentiable in some interval
around that root, then there's some neighborhood of the root so that if we pick our starting point in that region, the
iterations will converge to the given root.

So why would that fail? Well, the derivative may be zero at the root; the function may fail to be continuously
differentiable; or you may have chosen a bad starting point, one that lies outside the range of guaranteed
convergence. A poor initial estimate can likewise affect convergence to the root: a large error
in the initial estimate can contribute to non-convergence of the algorithm. To overcome this problem one
can often linearize the function that is being optimized using calculus, logs, differentials, or even
evolutionary algorithms. Good initial estimates lie close to the final, globally optimal parameter estimate.

Depending on the context, each one of these may be more or less likely. Degenerate roots (those where the derivative
is 0) are "rare" in general. On the other hand, "most" functions aren't continuous or differentiable at all but most
functions that show up naturally in physics or engineering are. The choice of starting point may be obvious if you
have some idea about the rough location of the root, or it could be totally hit-and-miss.

There are other condition sets which may be more or less helpful. There's no all-encompassing way that captures
precisely when the method fails. Generally speaking, if your function is reasonably smooth and you start at a random
location, you most likely will converge to some root, although if you're really unlucky you may choose a starting
point that's stationary or lies in some short cycle.

