CHE 358
Numerical Methods
for Engineers
2014-15.02

Prepared by:
Dr. K. Mensah-Darkwa
Lesson-04
Roots and Optimization

Roots
• “Roots” problems occur when some
function f can be written in terms of one or
more dependent variables x
• The value of x that makes f(x) = 0 is the root of the equation.
• These problems often occur when a design
problem presents an implicit equation for a
required parameter.


Methods

Roots
• Graphical methods
• Bracketing methods: Bisection, False Position
• Open methods: Newton-Raphson, Secant Method

Fundamental principles used in design problems.


Graphical Methods
• A simple method for obtaining an estimate of the root of the equation f(x) = 0 is to make a plot of the function and observe where it crosses the x-axis.
• The figures below illustrate a number of general ways that a root may occur in an interval prescribed by a lower bound xl and an upper bound xu:

a) Same sign, no roots
b) Different sign, one root
c) Same sign, two roots
d) Different sign, three roots


(Figures: example plots illustrating each of the four cases listed above.)
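As a minimal sketch of this graphical approach in Python (assuming NumPy and Matplotlib; the function shown is only an illustration, not one from the slides):

```python
import numpy as np
import matplotlib.pyplot as plt

# Illustrative function only; substitute the f(x) of interest.
f = lambda x: x**3 - 6*x**2 + 11*x - 6.1

xl, xu = 0.0, 4.0                # lower and upper bounds of the plotting interval
x = np.linspace(xl, xu, 400)

plt.plot(x, f(x))
plt.axhline(0.0, color="k")      # the x-axis; roots lie where the curve crosses it
plt.xlabel("x")
plt.ylabel("f(x)")
plt.show()
```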



Bracketing Method
• Bracketing methods are based on making two
initial guesses that “bracket” the root - that is,
are on either side of the root.

• Brackets are formed by finding two guesses xl and xu where the sign of the function changes; that is, where f(xl) · f(xu) < 0.
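A one-line version of this sign-change test in Python (the function name is just an assumption for illustration):

```python
def brackets_root(f, xl, xu):
    """Return True if f changes sign on [xl, xu], i.e. f(xl) * f(xu) < 0."""
    return f(xl) * f(xu) < 0
```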


Bisection Method
Theorem
• If a function changes sign over an interval, the function value at the midpoint is evaluated.
• The location of the root is then determined as lying within the subinterval where the sign change occurs.
o The absolute error is reduced by a factor of 2 for each iteration.
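A minimal Python sketch of the bisection method as described above (function and parameter names are illustrative, not taken from the slides):

```python
def bisect(f, xl, xu, es=1e-4, maxit=50):
    """Bisection: assumes f(xl) and f(xu) have opposite signs (a valid bracket)."""
    if f(xl) * f(xu) > 0:
        raise ValueError("initial guesses do not bracket a root")
    xr = xl
    for _ in range(maxit):
        xr_old = xr
        xr = (xl + xu) / 2                    # midpoint of the current bracket
        if xr != 0 and abs((xr - xr_old) / xr) < es:
            break                             # approximate relative error small enough
        test = f(xl) * f(xr)
        if test < 0:
            xu = xr                           # sign change lies in the lower subinterval
        elif test > 0:
            xl = xr                           # sign change lies in the upper subinterval
        else:
            break                             # f(xr) is exactly zero
    return xr
```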


Bisection Method
Approximate percent relative error:

ε_a = |(x_r^new - x_r^old) / x_r^new| × 100%

A benefit of the bisection method is that the number of iterations n required to attain a desired absolute error E_a,d can be computed before starting the computation, using

n = log2(Δx^0 / E_a,d)

where Δx^0 = xu - xl is the initial bracket width.
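For instance, a quick check of this iteration count in Python (the bracket width and error target below are assumed values, not from the slides):

```python
import math

dx0 = 1.0                                # initial bracket width, xu - xl
Ead = 0.01                               # desired absolute error
n = math.ceil(math.log2(dx0 / Ead))      # log2(100) is about 6.64, so 7 iterations suffice
```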

False Position
The false position method is another bracketing method, also called linear interpolation.

Theorem
• It determines the next guess not by splitting the bracket in half but by connecting the endpoints with a straight line and determining the location of the x-intercept of that line (xr).
• The value of xr then replaces whichever of the two initial guesses yields a function value with the same sign as f(xr).


False Position
• False-position formula:

x_r = x_u - f(x_u)(x_l - x_u) / (f(x_l) - f(x_u))
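A minimal Python sketch built directly on this formula (names and defaults are illustrative):

```python
def false_position(f, xl, xu, es=1e-4, maxit=50):
    """False position: assumes f(xl) and f(xu) have opposite signs."""
    xr = xl
    for _ in range(maxit):
        xr_old = xr
        # x-intercept of the straight line joining (xl, f(xl)) and (xu, f(xu))
        xr = xu - f(xu) * (xl - xu) / (f(xl) - f(xu))
        if xr != 0 and abs((xr - xr_old) / xr) < es:
            break
        if f(xl) * f(xr) < 0:
            xu = xr        # xu had the same sign as f(xr); replace it
        else:
            xl = xr        # xl had the same sign as f(xr); replace it
    return xr
```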

Bisection vs. False Position


• Bisection does not take into account the
shape of the function; this can be good or
bad depending on the function!

f(x) = x^10 - 1
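As a rough way to see this with the bisect and false_position sketches above (the bracket [0, 1.3] and the tolerance are assumptions for illustration):

```python
f = lambda x: x**10 - 1      # single root in the bracket, at x = 1
root_b  = bisect(f, 0.0, 1.3, es=1e-6, maxit=100)
root_fp = false_position(f, 0.0, 1.3, es=1e-6, maxit=100)
# Bisection halves the bracket regardless of the curve's shape, while false
# position creeps in from one end here because f is nearly flat over most of
# the interval, so it needs many more iterations for the same tolerance.
```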



Example
Determine the roots of the equation below using
(a) the Bisection Method
(b) the False Position Method

f(x) = -12 - 21x + 18x^2 - 2.75x^3

xl = -1
xu = 0
ε_s = 1%
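One possible way to set this example up with the bisect and false_position sketches above (here ε_s = 1% is passed as the fraction 0.01):

```python
f = lambda x: -12 - 21*x + 18*x**2 - 2.75*x**3

xr_a = bisect(f, -1.0, 0.0, es=0.01)            # part (a): bisection
xr_b = false_position(f, -1.0, 0.0, es=0.01)    # part (b): false position
```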


Open Methods
• Open methods differ from bracketing methods in that they require only a single starting value, or two starting values that do not necessarily bracket a root.

o Open methods may diverge as the computation progresses, but when they do converge, they usually do so much faster than bracketing methods.

Newton-Raphson Method
Based on forming the tangent line to the f(x) curve
at some guess x, then following the tangent line to
where it crosses the x-axis.

f'(x_i) = (f(x_i) - 0) / (x_i - x_{i+1})

x_{i+1} = x_i - f(x_i) / f'(x_i)

ε_a = |(x_{i+1} - x_i) / x_{i+1}| × 100%
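A minimal Python sketch of this iteration (the derivative df is supplied analytically; names are illustrative):

```python
def newton_raphson(f, df, x0, es=1e-6, maxit=50):
    """Newton-Raphson: x_{i+1} = x_i - f(x_i) / f'(x_i)."""
    x = x0
    for _ in range(maxit):
        x_new = x - f(x) / df(x)      # follow the tangent line to the x-axis
        if x_new != 0 and abs((x_new - x) / x_new) < es:
            return x_new
        x = x_new
    return x
```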



Newton-Raphson: Pros and Cons

• Pro:
The error of the (i+1)th iteration is roughly proportional to the square of the error of the ith iteration; this is called quadratic convergence.

• Con:
Some functions show slow or poor convergence.


The Secant Methods


• A potential problem in implementing the
Newton-Raphson method is the evaluation
of the derivative - there are certain
functions whose derivatives may be difficult
or inconvenient to evaluate.
• For these cases, the derivative can be
approximated by a backward finite divided
difference:


f'(x_i) ≈ (f(x_{i-1}) - f(x_i)) / (x_{i-1} - x_i)


The Secant Methods
• Substituting this approximation for the derivative into the Newton-Raphson equation gives:

x_{i+1} = x_i - f(x_i)(x_{i-1} - x_i) / (f(x_{i-1}) - f(x_i))

• NB: this method requires two initial estimates of x but does not require an analytical expression of the derivative.
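A minimal Python sketch of the secant iteration (two starting values, no analytical derivative; names are illustrative):

```python
def secant(f, x0, x1, es=1e-6, maxit=50):
    """Secant method: Newton-Raphson with the derivative replaced by a
    backward finite divided difference through the last two iterates."""
    for _ in range(maxit):
        x2 = x1 - f(x1) * (x0 - x1) / (f(x0) - f(x1))
        if x2 != 0 and abs((x2 - x1) / x2) < es:
            return x2
        x0, x1 = x1, x2               # slide the two-point window forward
    return x1
```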


Modified Secant Methods
• Rather than using two arbitrary values to estimate the derivative, an alternative approach involves a fractional perturbation of the independent variable to estimate f'(x):

x_{i+1} = x_i - δx_i f(x_i) / (f(x_i + δx_i) - f(x_i))

where δ = a small perturbation fraction
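A minimal Python sketch of the modified secant iteration (the default δ below is only an assumption):

```python
def modified_secant(f, x0, delta=1e-6, es=1e-6, maxit=50):
    """Modified secant: perturb x by a fraction delta to estimate the derivative."""
    x = x0
    for _ in range(maxit):
        dx = delta * x if x != 0 else delta     # avoid a zero perturbation at x = 0
        x_new = x - dx * f(x) / (f(x + dx) - f(x))
        if x_new != 0 and abs((x_new - x) / x_new) < es:
            return x_new
        x = x_new
    return x
```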


