
CME302

COMPUTATIONAL METHODS IN
ENGINEERING
Part 2 – Roots of Equations

Dr. Namık KILIÇ, e-mail: kilicna@mef.edu.tr, 26th Feb 2024


Part 2 – Roots of Equations
Chapter 5 - Bracketing Methods
Chapter 6 - Open Methods
GOALS AND OBJECTIVES
After completing Part Two, you should:
• have sufficient information to successfully approach a wide variety of engineering problems dealing with roots of equations;
• have mastered the techniques;
• have learned to assess their reliability;
• be capable of choosing the best method (or methods) for any particular problem.
GOALS AND OBJECTIVES
1. Understand the graphical interpretation of a root.
2. Know the graphical interpretation of the false-position method and why it is usually
superior to the bisection method.
3. Understand the difference between bracketing and open methods for root
location.
4. Understand the concepts of convergence and divergence; use the two-curve
graphical method to provide a visual manifestation of the concepts.
5. Know why bracketing methods always converge, whereas open methods may
sometimes diverge.
6. Realize that convergence of open methods is more likely if the initial guess is close
to the true root.
7. Understand the concepts of linear and quadratic convergence and their
implications for the efficiencies of the fixed-point-iteration and Newton-Raphson
methods.
8. Know the fundamental difference between the false-position and secant methods
and how it relates to convergence.
9. Understand how Brent’s method combines the reliability of bisection with the
speed of open methods.
CONTENT
Chapter 5 - Bracketing Methods
• 5.1 Graphical Methods
• 5.2 The Bisection Method
• 5.3 The False-Position Method
• 5.4 Incremental Searches and Determining Initial Guesses
CONTENT
These techniques are called bracketing methods because two initial guesses for the root are required. The methods described herein employ different strategies to systematically reduce the width of the bracket. Graphical techniques are also useful for visualizing the properties of the functions and the behavior of the various numerical methods.
Graphical Approaches
Recall the velocity of the falling parachutist:

$$v = \frac{gm}{c}\left(1 - e^{-(c/m)t}\right)$$
Because velocity (v) is explicit (that is, isolated on the left side), it is easy to compute v given the parameters (c and m) and a time (t). But suppose you want to compute the drag coefficient (c) so that a parachutist of mass (m) has a particular velocity at a given time (t)? It is impossible to solve the equation algebraically for c!
But you can re-express the equation as a roots problem.
$$f(c) = \frac{gm}{c}\left(1 - e^{-(c/m)t}\right) - v$$
The desired value of c is the value that makes f(c) = 0. Numerical methods
provide a way to determine this value.
Graphical Approaches
Use the graphical approach to determine the drag
coefficient c needed for a parachutist of mass m = 68.1 kg
to have a velocity of 40 m/s after freefalling for time t =
10 s.
$$f(c) = \frac{gm}{c}\left(1 - e^{-(c/m)t}\right) - v$$
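As a quick check, a minimal MATLAB sketch (assuming g = 9.81 m/s²; the function handle and plotting range are illustrative) that plots f(c) so the zero crossing can be read off the graph:

% Minimal sketch: graphical root location for the parachutist problem.
g = 9.81; m = 68.1; t = 10; v = 40;           % given data (g assumed 9.81)
f = @(c) g*m./c .* (1 - exp(-(c/m)*t)) - v;   % roots function f(c)
c = linspace(4, 20, 200);                     % trial range for c
plot(c, f(c)), grid on, xlabel('c'), ylabel('f(c)')
% The curve crosses zero near c ~ 14.8 kg/s; that crossing is the
% graphical estimate of the drag coefficient.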
Graphical Approaches
Two initial guesses for the root are required. These guesses must “bracket,” or be on either side of, the root. If one root of a real and continuous function f(x) = 0 is bounded by the values x = xl and x = xu, then

$$f(x_l)\cdot f(x_u) < 0$$

(the function changes sign on opposite sides of the root).
Graphical Approaches
Different Cases Between Two Guesses
Parts (a) and (c) indicate that if both f(xl) and f(xu) have the same sign, either there will be no roots or there will be an even number of roots within the interval. Parts (b) and (d) indicate that if the function has different signs at the end points, there will be an odd number of roots in the interval.

Exceptions to these general cases:
(a) A multiple root that occurs when the function is tangential to the x axis; although the end points are of opposite signs, there are an even number of axis intersections for the interval.
(b) A discontinuous function where end points of opposite sign bracket an even number of roots.
Graphical Approaches
The progressive enlargement of f(x) = sin 10x + cos 3x by the computer. Such interactive graphics permit the analyst to determine that two distinct roots exist between x = 4.2 and x = 4.3.
Graphical Depiction of the Bisection Method
Bisection Method
For the arbitrary equation of one variable, f(x) = 0:

1. Pick xl and xu such that they bound the root of interest; check that f(xl) · f(xu) < 0.
2. Estimate the root as the midpoint, xr = (xl + xu)/2, and evaluate f(xr).
3. Determine which subinterval contains the root:
If f(xl) · f(xr) < 0, the root lies in the lower subinterval; set xu = xr and go to step 2.
Termination and Error Estimates
If f(xl) · f(xr) > 0, the root lies in the upper subinterval; set xl = xr and go to step 2.

If f(xl) · f(xr) = 0, the root equals xr = (xl + xu)/2; terminate.

At each step, compare the approximate relative error

$$\varepsilon_a = \left|\frac{x_r^{\,\mathrm{new}} - x_r^{\,\mathrm{old}}}{x_r^{\,\mathrm{new}}}\right|\times 100\%$$

with the prescribed tolerance εs. If εa < εs, stop; otherwise repeat the process.
Graphical Depiction of the Bisection Method
After six iterations, εa finally falls below εs = 0.5%, and the computation can be terminated.
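A minimal MATLAB sketch of this loop, applied to the parachutist problem (g = 9.81 and the bracket are assumed; variable names are illustrative):

% Bisection with the epsilon_a stopping test.
g = 9.81; m = 68.1; t = 10; v = 40;
f  = @(c) g*m./c .* (1 - exp(-(c/m)*t)) - v;
xl = 12; xu = 16;                   % bracket with f(xl)*f(xu) < 0
es = 0.5; ea = 100; xr = xl;        % stopping criterion in percent
while ea > es
    xrold = xr;
    xr = (xl + xu)/2;               % midpoint estimate
    ea = abs((xr - xrold)/xr)*100;  % approximate relative error
    if f(xl)*f(xr) < 0, xu = xr; else xl = xr; end
end
fprintf('root ~ %.4f, ea = %.3f%%\n', xr, ea)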
Evaluation of Bisection Method
When does the bisection method fail to find roots?
• The bisection algorithm is just fine if you are performing a single root evaluation for a function that is easy to evaluate. If multiple crossings exist within the chosen bracket, however, it fails to find all of them.
• In dividing the interval from xl to xu into equal halves, no account is taken of the magnitudes of f(xl) and f(xu).
• Yet if f(xl) is much closer to zero than f(xu), it is likely that the root is closer to xl than to xu.
How Many Iterations Will It Take?
• Length of the first interval: L0 = b − a
• After 1 iteration: L1 = L0/2
• After 2 iterations: L2 = L0/4
• After k iterations: Lk = L0/2^k

Because the interval length bounds the error of the midpoint estimate,

$$\varepsilon_a \le \frac{L_k}{|x_r|}\times 100\% \le \varepsilon_s$$

If E_{a,d} is the desired absolute error, we therefore require Lk = L0/2^k ≤ E_{a,d}.
How Many Iterations Will It Take?
• If the desired absolute magnitude of the error is

$$E_{a,d} = \frac{\varepsilon_s\,|x|}{100\%} = 10^{-4}$$

and L0 = 2, how many iterations will you have to do to get the required accuracy in the solution?

$$10^{-4} = \frac{2}{2^k} \;\Rightarrow\; 2^k = 2\times 10^4 \;\Rightarrow\; k \approx 14.3 \;\Rightarrow\; k = 15$$
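The same count in two lines of MATLAB (variable names illustrative):

L0 = 2; Ead = 1e-4;            % initial interval length, desired error
k = ceil(log2(L0/Ead))         % returns k = 15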
The False-Position Method (Regula-Falsi)
If a real root of f(x) = 0 is bounded by xl and xu, then we can approximate the solution by doing a linear interpolation between the points (xl, f(xl)) and (xu, f(xu)). Using similar triangles, the point where the interpolating line crosses the x axis gives the improved root estimate used in step 2 below.
False Position Algorithm
1. Find a pair of values of x, xl and xu, such that fl = f(xl) < 0 and fu = f(xu) > 0.
2. Estimate the value of the root from the following formula

$$x_r = \frac{x_l f_u - x_u f_l}{f_u - f_l}$$

and evaluate f(xr).
False Position Algorithm
3. Use the new point to replace one of the original points, keeping the two points on opposite sides of the x axis:
• If f(xr) < 0, then xl = xr and fl = f(xr).
• If f(xr) > 0, then xu = xr and fu = f(xr).
• If f(xr) = 0, then you have found the root and need go no further!
4. See if the new xl and xu are close enough for convergence to be declared. If they are not, go back to step 2. (A MATLAB sketch of the loop follows.)
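A minimal false-position sketch in MATLAB (the test function and bracket are assumed for illustration):

% False position (regula falsi) with f(xl) < 0 < f(xu).
f  = @(x) x.^3 - 10;           % assumed example; real root at 10^(1/3)
xl = 1; xu = 3;
es = 1e-4; ea = 1; xr = xl;
while ea > es
    xrold = xr;
    xr = (xl*f(xu) - xu*f(xl)) / (f(xu) - f(xl));  % linear interpolation
    ea = abs((xr - xrold)/xr);
    if f(xr) < 0, xl = xr; else xu = xr; end       % keep opposite signs
end
xr                             % ~2.1544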
False Position Algorithm
The error for false position typically decreases much faster than for bisection because of the more efficient scheme for root location in the false-position method.
Pitfalls of False Position Algorithm
Although a method such as false position is often superior to bisection, there are invariably cases that violate this general conclusion. Therefore, in addition to the approximate-error criterion, the results should always be checked by substituting the root estimate into the original equation and determining whether the result is close to zero.

A major weakness of the false-position method is its one-sidedness: one of the bracketing points tends to stay fixed, which can slow convergence for certain functions.
Incremental Searches and Determining Initial Guesses

A potential problem with an incremental search is the choice of the increment length. If the length is too small, the search can be very time-consuming. On the other hand, if the length is too great, there is a possibility that closely spaced roots might be missed.

A partial remedy for such cases is to compute the first derivative of the function, f′(x), at the beginning and the end of each interval. If the derivative changes sign, it suggests that a minimum or maximum may have occurred and that the interval should be examined more closely for the existence of a possible root.
CONTENT
Chapter 6 - Open Methods
• 6.1 Simple Fixed-Point Iteration
• 6.2 The Newton-Raphson Method
• 6.3 The Secant Method
• 6.4 Brent’s Method
• 6.5 Multiple Roots
• 6.6 Systems of Nonlinear Equations
Open Methods
Open methods are based on formulas that require only a single starting value of x, or two starting values that do not necessarily bracket the root.
Simple Fixed Point Iteration
Starting with an initial guess of x0 = 0, the iterative equation

x_{i+1} = g(x_i)

can be applied to compute successive root estimates. Notice that the true percent relative error for each iteration is roughly proportional (by a factor of about 0.5 to 0.6) to the error from the previous iteration. This property, called linear convergence, is characteristic of fixed-point iteration.
Simple Fixed-point Iteration 1
Rearrange the function f(x) = 0 so that x is on the left side of the equation:

f(x) = 0  ⟹  x = g(x)

x_k = g(x_{k−1}),  x_0 given,  k = 1, 2, ...

▪ Bracketing methods are “convergent.”
▪ Fixed-point methods may sometimes “diverge,” depending on the starting point (the initial guess) and how the function behaves.
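A minimal sketch, assuming the example f(x) = e^{−x} − x rearranged as x = g(x) = e^{−x} (consistent with the x0 = 0 example above):

% Simple fixed-point iteration x_k = g(x_{k-1}).
g = @(x) exp(-x);
x = 0;                         % initial guess x0 = 0
for k = 1:10
    x = g(x);                  % each step shrinks the error by ~0.5-0.6
end
x                              % approaches the root ~0.56714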
Two Curve Method
x = g(x) can be expressed as a pair of equations:
▪ y1 = x
▪ y2 = g(x)
The root of f(x) = 0 corresponds to the abscissa of the intersection of the two curves.
Cobweb Plots
How Fixed-point Iteration Can Work or Can Fail
The errors of successive iterations are related by

$$E_{i+1} = g'(\xi)\,E_i$$

▪ If |g′(ξ)| < 1, the errors shrink and the iteration converges.
▪ If |g′(ξ)| > 1, the errors grow and the iteration diverges.
▪ If |g′(ξ)| = 1, the errors persist and the iteration neither converges nor diverges.
Conclusion
Fixed-point iteration converges if

$$|g'(x)| < 1$$

that is, if the magnitude of the slope of g(x) is less than the slope of the line y = x. When the method converges, the error is roughly proportional to, or less than, the error of the previous step; therefore it is called “linearly convergent.”
Newton-Raphson Method
• Most widely used method.
• Based on a Taylor series expansion:

$$f(x_{i+1}) = f(x_i) + f'(x_i)\,\Delta x + \frac{f''(x_i)}{2!}\,\Delta x^2 + O(\Delta x^3)$$

▪ The root is the value of x_{i+1} when f(x_{i+1}) = 0.
▪ Truncating after the first-derivative term, setting f(x_{i+1}) = 0, and solving for x_{i+1} yields the Newton-Raphson formula:

$$x_{i+1} = x_i - \frac{f(x_i)}{f'(x_i)}$$
Newton-Raphson Method
A convenient method
for functions whose
derivatives can be
evaluated analytically.
It may not be
convenient for
functions whose
derivatives cannot be
evaluated analytically.
Newton-Raphson Method
Use the Newton-Raphson method to estimate the root of f(x) = e−x − x,
employing an initial guess of x0 = 0.
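A minimal sketch of this iteration (the derivative f′(x) = −e^{−x} − 1 is evaluated analytically):

% Newton-Raphson for f(x) = exp(-x) - x with x0 = 0.
f  = @(x) exp(-x) - x;
df = @(x) -exp(-x) - 1;        % analytical derivative
x = 0;
for k = 1:5
    x = x - f(x)/df(x);        % Newton-Raphson formula
end
x                              % ~0.56714329 (the first step gives x = 0.5)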
Newton-Raphson Method
That is, the error is roughly proportional to the square of the previous error; this is termed quadratic convergence.
Four cases where Newton-Raphson exhibits poor
convergence
Aside from slow convergence due to the nature of the function, other difficulties can arise:
• (a) depicts the case where an inflection point [that is, f″(x) = 0] occurs in the vicinity of a root. Notice that iterations beginning at x0 progressively diverge from the root.
• (b) illustrates the tendency of the Newton-Raphson technique to oscillate around a local maximum or minimum. Such oscillations may persist when a near-zero slope is reached.
• (c) shows how an initial guess that is close to one root can jump to a location several roots away. This tendency to move away from the area of interest occurs because near-zero slopes are encountered.
• (d) If a zero slope is encountered, the solution shoots off horizontally and never hits the x axis.
Four cases where Newton-Raphson exhibits poor
convergence
1. A plotting routine should be included in the program.
2. At the end of the computation, the final root estimate should always
be substituted into the original function to verify whether the
result is close to zero. This check partially guards against those
cases where slow or oscillating convergence may lead to a small
value of εa while the solution is still far from a root.
3. The program should always include an upper limit on the number of
iterations to guard against oscillating, slowly convergent, or divergent
solutions that could persist interminably.
4. The program should alert the user and take account of the
possibility that f′(x) might equal zero at any time during the
computation.
The Secant Method
A slight variation of Newton’s method for functions whose
derivatives are difficult to evaluate. For these cases the
derivative can be approximated by a backward finite
divided difference.
$$f'(x_i) \approx \frac{f(x_i) - f(x_{i-1})}{x_i - x_{i-1}}$$

Substituting into the Newton-Raphson formula:

$$x_{i+1} = x_i - f(x_i)\,\frac{x_i - x_{i-1}}{f(x_i) - f(x_{i-1})}, \qquad i = 1, 2, 3, \ldots$$
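A minimal secant sketch reusing the earlier example f(x) = e^{−x} − x (the starting values x0 = 0 and x1 = 1 are assumed for illustration):

% Secant iteration: derivative replaced by a finite divided difference.
f = @(x) exp(-x) - x;
x0 = 0; x1 = 1;
for k = 1:8
    x2 = x1 - f(x1)*(x1 - x0)/(f(x1) - f(x0));  % secant update
    x0 = x1; x1 = x2;          % strict-sequence replacement (no bracketing)
end
x1                             % ~0.56714329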
The Secant Method
Requires two initial estimates
of x , for example, x0, x1.
However, because f(x) is not
required to change signs
between estimates, it is not
classified as a “bracketing”
method.
The secant method has the
same properties as Newton’s
method. Convergence is not
guaranteed for all initial
guesses and functions, f(x).
The Secant Method
The Similarity Between Secant and False Position
• Both use two initial estimates to compute an approximation of the slope of the function.
• The critical difference is how one of the initial values is replaced by the new estimate. Recall that in the false-position method the latest estimate of the root replaces whichever of the original values yielded a function value with the same sign as f(xr). Consequently, the two estimates always bracket the root. Therefore, for all practical purposes, the method always converges because the root is kept within the bracket.
• In contrast, the secant method replaces the values in strict sequence, with the new value xi+1 replacing xi and xi replacing xi−1. As a result, the two values can sometimes lie on the same side of the root. For certain cases, this can lead to divergence.
Comparison of False Position and Secant Methods
Comparison of the false-position and the secant methods. The first iterations (a) and (b) for both techniques are identical. However, for the second iterations (c) and (d), the points used differ. As a consequence, the secant method can diverge, as indicated in (d).
Comparison of False Position and Secant Methods
Although the secant method may be divergent, when it converges, it usually does so at a quicker rate than the false-position method. The inferiority of the false-position method is due to one end staying fixed to maintain the bracketing of the root. This property, which is an advantage in that it prevents divergence, is a shortcoming with regard to the rate of convergence; it makes the finite-difference estimate a less accurate approximation of the derivative.
BRENT’S METHOD
Can we combine the reliability of bracketing with the speed of the open methods?
• In Brent’s method, the bracketing technique is the trusty bisection method, and two different open methods are employed.
• Inverse quadratic interpolation is similar in spirit to the secant method. The secant method is based on computing a straight line that goes through two guesses.
• Now suppose that we had three points. In that case, we could determine a quadratic function of x that goes through the three points. Just as with the linear secant method, the intersection of this parabola with the x axis would represent the new root estimate. Using a curve rather than a straight line often yields a better estimate.
BRENT’S METHOD
The difficulty, that a parabola in x may have no real roots (no x-axis crossing), can be rectified by employing inverse quadratic interpolation. That is, rather than using a parabola in x, we can fit the points with a parabola in y.
BRENT’S METHOD
Develop quadratic equations in both x and y for the data points (1, 2), (2, 1), and (4, 5). For the first, y = f(x), employ the quadratic formula to illustrate that the roots are complex. For the latter, x = g(y), use inverse quadratic interpolation to determine the root estimate.
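A quick MATLAB check of the inverse interpolation step (Lagrange form of x = g(y), evaluated at y = 0; illustrative only):

% Inverse quadratic interpolation through (1,2), (2,1), (4,5).
x = [1 2 4];  y = [2 1 5];
xr = x(1)*(0-y(2))*(0-y(3))/((y(1)-y(2))*(y(1)-y(3))) + ...
     x(2)*(0-y(1))*(0-y(3))/((y(2)-y(1))*(y(2)-y(3))) + ...
     x(3)*(0-y(1))*(0-y(2))/((y(3)-y(1))*(y(3)-y(2)))
% xr = 4.0: the parabola in y crosses y = 0 at x = 4.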
Multiple Roots
▪ A “multiple root” corresponds to a point where a function is tangent to the x axis.
▪ Difficulties:
• The function does not change sign at the multiple root; therefore, bracketing methods cannot be used.
• Both f(x) and f′(x) approach zero at the root, which can cause division by zero in Newton’s and the secant methods.
Examples of Multiple Roots
Determining Multiple Roots
• None of the methods deals with multiple roots efficiently; however, one way to handle the problem is to define the new function u(x) = f(x)/f′(x), which has the same roots as f(x), and apply the Newton-Raphson formula to it:

$$x_{i+1} = x_i - \frac{u(x_i)}{u'(x_i)}$$
Systems of Nonlinear Equations
Consider the simultaneous nonlinear equations

x² + xy = 10
y + 3xy² = 57

Re-expressed as a roots problem:

u(x, y) = x² + xy − 10 = 0
v(x, y) = y + 3xy² − 57 = 0

• Determine the values of (x, y) that make u(x, y) = 0 and v(x, y) = 0 simultaneously.
Multi-dimensional Newton-Raphson 1
• A Taylor series expansion of a function of more than one variable:

$$u_{i+1} = u_i + \frac{\partial u_i}{\partial x}(x_{i+1} - x_i) + \frac{\partial u_i}{\partial y}(y_{i+1} - y_i)$$

$$v_{i+1} = v_i + \frac{\partial v_i}{\partial x}(x_{i+1} - x_i) + \frac{\partial v_i}{\partial y}(y_{i+1} - y_i)$$

• The root of the equations occurs at the values of x and y where u_{i+1} and v_{i+1} equal zero.
Multi-dimensional Newton-Raphson 2
Setting u_{i+1} = v_{i+1} = 0 and rearranging:

$$\frac{\partial u_i}{\partial x}\,x_{i+1} + \frac{\partial u_i}{\partial y}\,y_{i+1} = -u_i + x_i\frac{\partial u_i}{\partial x} + y_i\frac{\partial u_i}{\partial y}$$

$$\frac{\partial v_i}{\partial x}\,x_{i+1} + \frac{\partial v_i}{\partial y}\,y_{i+1} = -v_i + x_i\frac{\partial v_i}{\partial x} + y_i\frac{\partial v_i}{\partial y}$$

• A set of two linear equations with two unknowns, x_{i+1} and y_{i+1}, that can be solved.
Multi-dimensional Newton-Raphson 3
Solving the linear system (for example, by Cramer’s rule) gives

$$x_{i+1} = x_i - \frac{u_i\,\dfrac{\partial v_i}{\partial y} - v_i\,\dfrac{\partial u_i}{\partial y}}{\dfrac{\partial u_i}{\partial x}\dfrac{\partial v_i}{\partial y} - \dfrac{\partial u_i}{\partial y}\dfrac{\partial v_i}{\partial x}}$$

and an analogous expression for y_{i+1}; the denominator is the determinant of the Jacobian.
Matlab Applications - Bisection
bisect
This function uses the method of bisection to compute the root of f(x) = 0 that is known to lie
in the interval (x1,x2).

The number of bisections n required to reduce the interval to tol is computed.

The input argument filter controls the filtering of suspected singularities.

By setting filter = 1, we force the routine to check whether the magnitude of f(x) decreases
with each interval halving.

If it does not, the “root” may not be a root at all, but a singularity, in which case root = NaN is
returned. Since this feature is not always desirable, the default value is filter = 0.
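The original slides showed the listing as an image; below is a minimal re-creation consistent with the description above (the published listing may differ in details):

function root = bisect(func, x1, x2, filter, tol)
% Bisection with optional singularity filtering, per the description above.
if nargin < 5, tol = 1.0e-9; end
if nargin < 4, filter = 0; end           % default: no filtering
f1 = feval(func, x1); if f1 == 0, root = x1; return; end
f2 = feval(func, x2); if f2 == 0, root = x2; return; end
if f1*f2 > 0, error('Root is not bracketed in (x1,x2)'), end
n = ceil(log(abs(x2 - x1)/tol)/log(2));  % bisections needed to reach tol
for i = 1:n
    x3 = 0.5*(x1 + x2); f3 = feval(func, x3);
    % If |f| grows as the interval shrinks, suspect a singularity.
    if (filter == 1) && (abs(f3) > abs(f1)) && (abs(f3) > abs(f2))
        root = NaN; return
    end
    if f3 == 0, root = x3; return; end
    if f2*f3 < 0, x1 = x3; f1 = f3; else x2 = x3; f2 = f3; end
end
root = 0.5*(x1 + x2);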
Matlab Applications – Incremental Search
The function rootsearch looks for a zero of the function f(x) in the interval (a,b).

The search starts at a and proceeds in steps dx toward b.

Once a zero is detected, rootsearch returns its bounds (x1,x2) to the calling program. If a
root was not detected, x1 = x2 = NaN is returned (in MATLAB NaN stands for “not a number”).

After the first root (the root closest to a) has been bracketed, rootsearch can be called again
with a replaced by x2 in order to find the next root. This can be repeated as long as
rootsearch detects a root.
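Again, a minimal re-creation consistent with the description (the published listing may differ):

function [x1, x2] = rootsearch(func, a, b, dx)
% Incremental search for the first zero of func in (a,b), step dx.
x1 = a;       f1 = feval(func, a);
x2 = a + dx;  f2 = feval(func, x2);
while f1*f2 > 0
    if x1 >= b                 % whole interval searched; no sign change
        x1 = NaN; x2 = NaN; return
    end
    x1 = x2; f1 = f2;
    x2 = x1 + dx; f2 = feval(func, x2);
end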
Matlab Applications – Newton Raphson
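The Newton-Raphson listing was likewise shown as an image; a minimal sketch with an assumed interface (func and dfunc are handles to f and f′, x is the initial guess):

function root = newtonRaphson(func, dfunc, x, tol)
% Basic Newton-Raphson iteration with an iteration cap.
if nargin < 4, tol = 1.0e-9; end
for i = 1:30                       % cap to guard against divergence
    fp = feval(dfunc, x);
    if fp == 0, error('Zero slope: f''(x) = 0'), end
    dx = -feval(func, x)/fp;       % Newton-Raphson step
    x = x + dx;
    if abs(dx) < tol, root = x; return; end
end
error('Too many iterations')

For example, newtonRaphson(@(x) exp(-x)-x, @(x) -exp(-x)-1, 0) returns the root ≈ 0.5671 of the earlier example.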