
Dr. Muhammad Shafiq
Department of Industrial Engineering
University of Engineering and Technology
The idea of direct search methods is to locate the optimum by
iteratively narrowing a predefined interval of uncertainty to a desired
level of accuracy.

It should be noted that the optimum (if it exists) must be a solution of the equation f'(x*) = 0.
This section focuses mainly on the direct search methods applied to
strictly unimodal single-variable functions.

Def.: For a minimization problem, a function f(x) is unimodal if

- x1 < x2 < x* implies that f(x2) < f(x1), and
- x2 > x1 > x* implies that f(x1) < f(x2),

where x* is the minimum point.
Unrestricted Search with Fixed Step Size

Procedure: (for minimization problem)

1. Set i = 1, start with an initial guess point x = x1 and a step size Δ.
2. Determine x2 = x1 + Δ and evaluate f1 = f(x1) and f2 = f(x2).
   a. If f2 < f1: update i = i + 1 and keep stepping in the same direction, checking the condition f_{i+1} < f_i after each step until it is violated. At that point, the search process is terminated and x_i can be taken as the optimum point.
   b. If f2 > f1: determine x2' = x1' − Δ, in which x1' = x1.
      - If f2' < f1': update i = i + 1 and keep stepping in the negative direction, checking the condition f_{i+1} < f_i after each step until it is violated. At that point, the search process is terminated and x_i can be taken as the optimum point.
      - If f2' = f1': the minimum point lies in between x1' and x2', and either x1' or x2' can be taken as the optimum point.
      - If f2' > f1': the minimum point lies in between x2' and x2, and either x2' or x2 can be taken as the optimum point.
   c. If f2 = f1: the minimum point lies in between x1 and x2, and either x1 or x2 can be taken as the optimum point.
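A minimal Python sketch of this fixed-step procedure is given below; the helper name, the test function, and the way ties are handled are illustrative assumptions, not part of the original notes.

```python
def fixed_step_search(f, x1, delta):
    """Unrestricted search with a fixed step size (minimization).
    Returns an approximate minimum point of a unimodal function f."""
    f1 = f(x1)
    f2 = f(x1 + delta)
    if f2 == f1:                 # case c: minimum lies between x1 and x1 + delta
        return x1
    if f2 > f1:                  # case b: try the negative direction
        f2m = f(x1 - delta)
        if f2m >= f1:            # minimum is bracketed around x1; take x1
            return x1
        delta = -delta           # f decreases in the negative direction
    # keep stepping while the function keeps decreasing (condition f_{i+1} < f_i)
    x, fx = x1, f1
    while True:
        x_next = x + delta
        f_next = f(x_next)
        if f_next >= fx:         # condition violated: terminate
            return x
        x, fx = x_next, f_next

# Illustrative usage on an assumed test function:
# fixed_step_search(lambda x: (x - 2.0) ** 2, x1=0.0, delta=0.05)  # returns ~2.0
```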
Dichotomous & Golden Section Methods

Considering a maximization problem, these methods seek the maximum of a unimodal function f(x) over the interval [a, b], which is known to include the optimum point x*.

In general, these two methods have the same procedure as follows:

Procedure: (for maximization problem)

Denote by I_{i-1} = (xL, xR) the current interval of uncertainty at the start of iteration i. At iteration 0: xL = a, xR = b.
Determine x1 , x2 based on the following rules:

Dichotomous:
  x1 = (xR + xL − Δ) / 2
  x2 = (xR + xL + Δ) / 2

Golden Section:
  x1 = xR − ((√5 − 1)/2)(xR − xL)
  x2 = xL + ((√5 − 1)/2)(xR − xL)

Note that the selection of x1, x2 guarantees that xL < x1 < x2 < xR.
The iterative procedure will be conducted in the following manner:

1. If f(x1) > f(x2), then xL < x* < x2. Let xR = x2 and set I_i = (xL, x2).
2. If f(x1) < f(x2), then x1 < x* < xR. Let xL = x1 and set I_i = (x1, xR).
3. If f(x1) = f(x2), then x1 < x* < x2. Let xL = x1 and xR = x2, and set I_i = (x1, x2).

The algorithm terminates at iteration k if I_k ≤ Δ, in which Δ is a predefined level of accuracy.
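The golden-section variant of this procedure can be sketched in Python as follows (function and argument names are illustrative assumptions; the equality case f(x1) = f(x2) is folded into the second branch for brevity):

```python
import math

def golden_section_max(f, a, b, tol):
    """Golden-section search for the maximum of a unimodal f on [a, b].
    Shrinks the interval of uncertainty until its length is at most tol."""
    r = (math.sqrt(5) - 1) / 2           # alpha ~ 0.618
    xL, xR = a, b
    x1 = xR - r * (xR - xL)
    x2 = xL + r * (xR - xL)
    f1, f2 = f(x1), f(x2)
    while (xR - xL) > tol:
        if f1 > f2:                      # maximum lies in (xL, x2)
            xR = x2
            x2, f2 = x1, f1              # old x1 is reused as the new x2
            x1 = xR - r * (xR - xL)
            f1 = f(x1)
        else:                            # maximum lies in (x1, xR)
            xL = x1
            x1, f1 = x2, f2              # old x2 is reused as the new x1
            x2 = xL + r * (xR - xL)
            f2 = f(x2)
    return (xL + xR) / 2                 # any point of the final interval will do
```

A dichotomous version would instead recompute both x1 = (xL + xR − Δ)/2 and x2 = (xL + xR + Δ)/2 at every pass, so it needs two new function evaluations per iteration instead of one.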
Notes:

- In the dichotomous method, x1 and x2 are located symmetrically around the midpoint of the current interval of uncertainty, and the algorithm ensures that the length of the interval of uncertainty will approach the desired accuracy Δ. The drawback of this approach is that it requires the calculation of both f(x1) and f(x2) in each iteration, but only one of the two values will be used in the next iteration.
- The advantage of the golden section method in comparison with the dichotomous method is that one of the two points evaluated in the preceding iteration falls inside the new interval, so its function value can be reused in the current iteration. How?
Consider a general definition of x1 and x2:

x1 = xR − α(xR − xL) and x2 = xL + α(xR − xL), with 0 < α < 1.

The interval of uncertainty at iteration i, I_i, will most likely be either I_i = (xL, x2) or I_i = (x1, xR).

Consider the case I_i = (xL, x2), i.e., x1 is included in I_i. In iteration (i + 1), if we want to reuse x1 from iteration i by setting

x2(iteration i + 1) = x1(iteration i),

then xL + α[x2(iteration i) − xL] = xR − α(xR − xL)

⟹ xL + α[xL + α(xR − xL) − xL] = xR − α(xR − xL)
⟹ α² + α − 1 = 0
⟹ α = (√5 − 1)/2

Consider the case I_i = (x1, xR), i.e., x2 is included in I_i. In iteration (i + 1), if we want to reuse x2 from iteration i by setting

x1(iteration i + 1) = x2(iteration i),

then xR − α[xR − x1(iteration i)] = xL + α(xR − xL)

⟹ xR − α[xR − xR + α(xR − xL)] = xL + α(xR − xL)
⟹ α² + α − 1 = 0
⟹ α = (√5 − 1)/2
The golden section method converges more rapidly than the dichotomous method because:

- In the dichotomous method, the length of the interval of uncertainty is reduced slowly once I approaches Δ, while the golden section method guarantees an α-reduction in successive intervals of uncertainty (i.e., I_{i+1} = α·I_i).
- The golden section method requires less computational effort (only one new function evaluation per iteration)!
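As a quick check of the first point (a worked note based on the point-placement rules above, not part of the original notes): under the dichotomous rules, both candidate intervals (xL, x2) and (x1, xR) have length

I_{i+1} = (I_i + Δ)/2,

so the reduction per iteration is (I_i − Δ)/2, which vanishes as I_i approaches Δ; the interval can never shrink below Δ. Under the golden section rules, I_{i+1} = α·I_i with α = (√5 − 1)/2 ≈ 0.618, a geometric reduction that does not depend on Δ.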
Example:

Maximize
  f(x) = 3x,              0 ≤ x ≤ 2
  f(x) = (1/3)(−x + 20),  2 ≤ x ≤ 3

using Δ = 0.1.

Dichotomous method

Iteration 1: I_0 = (xL, xR) = (0, 3)
x1 = 0.5(3 + 0 − 0.1) = 1.45, f(x1) = 4.35
x2 = 0.5(3 + 0 + 0.1) = 1.55, f(x2) = 4.65
f(x2) > f(x1) ⟹ xL = 1.45, I_1 = (1.45, 3)

Iteration 2: I_1 = (xL, xR) = (1.45, 3)
x1 = 0.5(3 + 1.45 − 0.1) = 2.175, f(x1) = 5.942
x2 = 0.5(3 + 1.45 + 0.1) = 2.275, f(x2) = 5.908
f(x1) > f(x2) ⟹ xR = 2.275, I_2 = (1.45, 2.275)
…
The optimal solution: x* = 2

Golden section method

Iteration 1: I_0 = (xL, xR) = (0, 3)
x1 = 3 − 0.618(3 − 0) = 1.146, f(x1) = 3.438
x2 = 0 + 0.618(3 − 0) = 1.854, f(x2) = 5.562
f(x2) > f(x1) ⟹ xL = 1.146, I_1 = (1.146, 3)

Iteration 2: I_1 = (xL, xR) = (1.146, 3)
x1 = x2(iteration 1) = 1.854, f(x1) = 5.562
x2 = 1.146 + 0.618(3 − 1.146) = 2.292, f(x2) = 5.903
f(x2) > f(x1) ⟹ xL = 1.854, I_2 = (1.854, 3)
…
The optimal solution: x* = 2
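For a numerical cross-check, the golden_section_max sketch given earlier can be run on this example (the piecewise function below restates the example objective; names are illustrative):

```python
def f(x):
    # Example objective: 3x on [0, 2] and (1/3)(20 - x) on [2, 3]
    return 3 * x if x <= 2 else (20 - x) / 3

x_best = golden_section_max(f, a=0.0, b=3.0, tol=0.1)
print(round(x_best, 3))   # expected to land close to the true maximizer x* = 2
```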

Assignment:

1. Develop the programming flowchart for the golden section method.
2. Complete the above example (using Excel).
Interval Halving Method

Target: Seek the minimum of a unimodal function f(x) over the interval [a, b], which is known to include the optimum point x*.
Procedure: (for minimization problem)

1. Divide the current interval L = [a, b] into four equal subintervals using three interior points x1 < x0 < x2 (x0 is the midpoint; x1 and x2 are the quarter points).
2. Determine f(x0), f(x1), f(x2).
3. If
   - f(x2) > f(x0) > f(x1): update L = [a, x0], i.e., set b = x0, and go to step 4.
   - f(x2) < f(x0) < f(x1): update L = [x0, b], i.e., set a = x0, and go to step 4.
   - f(x1) > f(x0) and f(x2) > f(x0): update L = [x1, x2], i.e., set a = x1 and b = x2, and go to step 4.
4. Test whether L ≤ ε: if so, stop; otherwise, go to step 1.

Note: One-half of the current interval of uncertainty will be deleted at each iteration.
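A minimal Python sketch of the interval halving procedure (the function and argument names are assumptions for illustration; reuse of already-computed values between passes is noted in a comment but not implemented):

```python
def interval_halving_min(f, a, b, eps):
    """Interval halving search for the minimum of a unimodal f on [a, b].
    Each pass removes half of the current interval of uncertainty."""
    while (b - a) > eps:
        L = b - a
        x1, x0, x2 = a + L / 4, a + L / 2, b - L / 4   # three interior points
        f1, f0, f2 = f(x1), f(x0), f(x2)
        if f1 < f0 < f2:        # minimum lies in [a, x0]
            b = x0
        elif f1 > f0 > f2:      # minimum lies in [x0, b]
            a = x0
        else:                   # f(x0) is the smallest: minimum lies in [x1, x2]
            a, b = x1, x2
        # In the first two branches x1 (resp. x2) becomes the midpoint of the
        # new interval, so its function value could be reused on the next pass.
    return (a + b) / 2
```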
DIRECT ROOT METHODS

Consider a single-variable function f(x) that is twice continuously differentiable. It should be noted that the necessary condition for the existence of an optimum (either a minimum or a maximum point) at x* is f'(x*) = 0. This condition can be utilized to help find the optimum point.
Newton-Raphson Method

Consider a quadratic approximation of f  x  at x  xi by using


Taylor’s series expansion:

f(x) ≈ f(x_i) + f'(x_i)(x − x_i) + (1/2) f''(x_i)(x − x_i)²

Taking the first derivative of the above expression, we have:

f'(x) ≈ f'(x_i) + f''(x_i)(x − x_i)

So, f'(x_i) + f''(x_i)(x* − x_i) = 0,

or x* = x_i − f'(x_i) / f''(x_i).
If xi is an approximation of x* , the above can be rearranged to obtain
an improved approximation as follows:

x_{i+1} = x_i − f'(x_i) / f''(x_i)

An iterative procedure can then be conducted, and it is considered to have converged when |f'(x_{i+1})| ≤ ε.
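In Python, this iteration can be sketched as follows (the derivative arguments, tolerance, and iteration cap are assumptions added for the illustration):

```python
def newton_raphson_opt(df, d2f, x0, eps, max_iter=100):
    """Newton-Raphson search for a stationary point of f, i.e. a root of f'(x) = 0.
    df and d2f are the first and second derivatives of f."""
    x = x0
    for _ in range(max_iter):
        x = x - df(x) / d2f(x)      # x_{i+1} = x_i - f'(x_i) / f''(x_i)
        if abs(df(x)) <= eps:       # converged: |f'(x_{i+1})| <= eps
            return x
    return x                        # returned without meeting the tolerance

# For the example that follows, one would pass:
#   df  = lambda x: 72*x**3 - 234*x**2 + 241*x - 78
#   d2f = lambda x: 216*x**2 - 468*x + 241
```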
Example:

Determine the possible optimum points (stationary points) of the


function:

f(x) = (3x − 2)² (2x − 3)²

The stationary points are the solutions of f'(x) = 0, which reduces to:

72x³ − 234x² + 241x − 78 = 0

So, they can be determined from the iteration:

x_{k+1} = x_k − (72x_k³ − 234x_k² + 241x_k − 78) / (216x_k² − 468x_k + 241)
Starting with x_0 = 10, the successive iterations are presented below:

 k     x_k          f'(x_k)/f''(x_k)    x_{k+1}
 0     10.000000    2.967892            7.032108
 1     7.032108     1.976429            5.055679
 2     5.055679     1.314367            3.741312
 3     3.741312     0.871358            2.869954
 4     2.869954     0.573547            2.296405
 5     2.296405     0.371252            1.925154
 6     1.925154     0.230702            1.694452
 7     1.694452     0.128999            1.565453
 8     1.565453     0.054156            1.511296
 9     1.511296     0.010864            1.500432
10     1.500432     0.000431            1.500001

The method converges to x* = 1.5. It should be noted that there exist two other stationary points: x = 2/3 and x = 13/12. These points can be found by selecting different starting values (e.g., x_0 = 0.5 and x_0 = 1).
Remarks:

1. Originally, the Newton-Raphson method was developed to find the solution of the equation f(x) = 0 by use of:

f(x) ≈ f(x_k) + f'(x_k)(x − x_k)

⟹ x_{k+1} = x_k − f(x_k) / f'(x_k)

2. Convergence is not always guaranteed. It depends on the initial point: if the starting point is not close to the solution, the process may diverge. So, trial and error should be used to locate a "good" initial point.
Quasi-Newton Method

The Newton-Raphson method requires both the first- and second-order derivatives. When f(x) is not available in closed form or is difficult to differentiate, f'(x) and f''(x) can be approximated by the central differences:

f'(x_i) ≈ [f(x_i + Δx) − f(x_i − Δx)] / (2Δx)

f''(x_i) ≈ [f(x_i + Δx) − 2 f(x_i) + f(x_i − Δx)] / (Δx)²
and hence, the iterative procedure will be performed based on:

x_{i+1} = x_i − Δx [f(x_i + Δx) − f(x_i − Δx)] / {2 [f(x_i + Δx) − 2 f(x_i) + f(x_i − Δx)]}

Stopping criterion: |f'(x_{i+1})| ≈ |[f(x_{i+1} + Δx) − f(x_{i+1} − Δx)] / (2Δx)| ≤ ε
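A Python sketch of this finite-difference (quasi-Newton) iteration, with the iteration cap added as an assumption:

```python
import math

def quasi_newton_min(f, x0, dx, eps, max_iter=100):
    """Quasi-Newton iteration in which f'(x) and f''(x) are replaced by
    central finite-difference approximations with step dx."""
    x = x0
    for _ in range(max_iter):
        fp, f0, fm = f(x + dx), f(x), f(x - dx)
        # x_{i+1} = x_i - dx (f+ - f-) / (2 (f+ - 2 f0 + f-))
        x = x - dx * (fp - fm) / (2 * (fp - 2 * f0 + fm))
        dfdx = (f(x + dx) - f(x - dx)) / (2 * dx)   # finite-difference slope at the new point
        if abs(dfdx) <= eps:                        # stopping criterion
            return x
    return x

# Applied to the example below:
# quasi_newton_min(lambda x: 0.65 - 0.75 / (1 + x * x) - 0.65 * x * math.atan(1 / x),
#                  x0=0.1, dx=0.01, eps=0.01)   # should return a value near 0.48
```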
Example: Find the minimum of the function

f(x) = 0.65 − 0.75/(1 + x²) − 0.65 x tan⁻¹(1/x)

using the Quasi-Newton method with x1 = 0.1 and Δx = 0.01 (select ε = 0.01).

Iteration 1:
f(x1) = −0.188179
f(x1 + Δx) = −0.195512, f(x1 − Δx) = −0.180615
⟹ x2 = 0.377882
Convergence check: |f'(x2)| = 0.137300 > ε

Iteration 2:
f(x2) = −0.303368
f(x2 + Δx) = −0.304662, f(x2 − Δx) = −0.301916
⟹ x3 = 0.465390
Convergence check: |f'(x3)| = 0.017700 > ε

Iteration 3:
f(x3) = −0.309885
f(x3 + Δx) = −0.310004, f(x3 − Δx) = −0.309650
⟹ x4 = 0.480600
Convergence check: |f'(x4)| = 0.000350 < ε

The solution: x* ≈ x4 = 0.480600


Bisection Method

This method is applicable for solving the equation f(x) = 0, where f is a continuous function.

Procedure:

1. Find two initial points a and b such that f(a) and f(b) have opposite signs, so that [a, b] forms a bracket of a root.
2. Determine the midpoint c = (a + b)/2 and compute f(c).
3. If f(a)·f(c) < 0: set b equal to c and return to step 2; if f(b)·f(c) < 0: set a equal to c and return to step 2. This procedure is repeated until the bracket is small enough.

Note that if at any iteration f(c) = 0, the process stops immediately because c is the solution (although this is uncommon, the implementation of this method must guard against this situation).
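A short Python sketch of the bisection procedure (the tolerance argument tol stands in for the informal "small enough" bracket test):

```python
def bisection(f, a, b, tol):
    """Bisection method for solving f(x) = 0 on a bracket [a, b]
    where f(a) and f(b) have opposite signs."""
    if f(a) * f(b) > 0:
        raise ValueError("f(a) and f(b) must have opposite signs")
    while (b - a) > tol:
        c = (a + b) / 2
        fc = f(c)
        if fc == 0:             # c happens to be an exact root: stop immediately
            return c
        if f(a) * fc < 0:       # the root lies in [a, c]
            b = c
        else:                   # the root lies in [c, b]
            a = c
    return (a + b) / 2
```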
