
Three general classes of nonlinear optimization problems can be identified, as follows:
1. One-dimensional unconstrained problems
2. Multidimensional unconstrained problems
3. Multidimensional constrained problems
Problems of the first class are the easiest to solve, whereas those of the third class are the most difficult. In practice, multidimensional constrained problems are usually reduced to multidimensional unconstrained problems which, in turn, are reduced to one-dimensional unconstrained problems. In effect, most of the available nonlinear programming algorithms are based on the minimization of a function of a single variable without constraints. Therefore, efficient one-dimensional optimization algorithms are required if efficient multidimensional unconstrained and constrained algorithms are to be constructed.
The one-dimensional optimization problem is
minimize F = f(x)
where f(x) is a function of one variable. This problem has a solution if f(x) is unimodal in some range of x, i.e., f(x) has only one minimum in some range xL ≤ x ≤ xU, where xL and xU are the lower and upper limits of the minimizer x*.
Two general classes of one-dimensional optimization methods are available, namely, search methods and approximation methods. In search methods, an interval [xL, xU] containing x*, known as a bracket, is established and is then repeatedly reduced on the basis of function evaluations until a reduced bracket [xL,k, xU,k] is obtained which is sufficiently small. The minimizer can be assumed to be at the center of the interval [xL,k, xU,k]. These methods can be applied to any function, and differentiability of f(x) is not essential. In approximation methods, an approximation of the function in the form of a low-order polynomial, usually a second- or third-order polynomial, is assumed. This is then analyzed using elementary calculus and an approximate value of x* is deduced. The interval [xL, xU] is then reduced and the process is repeated several times until a sufficiently precise value of x* is obtained. In these methods, f(x) is required to be continuous and differentiable, i.e., f(x) ∈ C1.
Several one-dimensional optimization approaches are as follows: 1. Dichotomous search 2. Fibonacci search 3. Golden-section search

Dichotomous Search
Consider a unimodal function which is known to have a minimum in the interval [xL, xU]. This interval is said to be the range of uncertainty. The minimizer x* of f(x) can be located by progressively reducing the range of uncertainty until a sufficiently small range is obtained. In search methods, this can be achieved by using the values of f(x) at suitable points. If the value of f(x) is known at a single point xa in the range xL < xa < xU, point x* is equally likely to be in the range xL to xa or xa to xU, as depicted in Fig. 1(a). Consequently, the information available is not sufficient to allow the reduction of the range of uncertainty. However, if the value of f(x) is known at two points, say, xa and xb, an immediate reduction is possible. Three possibilities may arise, namely,
(a) f(xa) < f(xb)
(b) f(xa) > f(xb)
(c) f(xa) = f(xb)
In case (a), x* may be located in the range xL < x* < xa or xa < x* < xb, that is, xL < x* < xb, as illustrated in Fig. 1(a). The possibility xb < x* < xU is definitely ruled out since this would imply that f(x) has two minima: one to the left of xb and one to the right of xb. Similarly, for case (b), we must have xa < x* < xU, as in Fig. 1(b). For case (c), we must have xa < x* < xb, that is, both inequalities xL < x* < xb and xa < x* < xU must be satisfied, as in Fig. 1(c).

The Dichotomous Search Method computes the midpoint (a + b)/2 and then moves slightly to either side of it to obtain two test points, (a + b)/2 - ε/2 and (a + b)/2 + ε/2, where ε is a very small number. The objective is to place the two test points as close together as possible. The procedure continues until it gets within some small interval containing the optimal solution.
Dichotomous Search Algorithm to maximize f(x) over the interval [a, b]
STEP 1: Initialize: Choose a small number ε > 0, such as 0.01. Select a small t such that 0 < t < b - a, called the length of uncertainty for the search. Calculate the number of iterations n as the smallest integer satisfying (1/2)^n (b - a) ≤ t (the small offset ε is neglected in this estimate).

STEP 2: For k = 1 to n, do Steps 3 and 4.
STEP 3: Compute the two test points
x1 = (a + b)/2 - ε/2, x2 = (a + b)/2 + ε/2
STEP 4: (For a maximization problem) If f(x1) ≥ f(x2), then b = x2; else a = x1. Set k = k + 1 and return to Step 3.
STEP 5: Let x* ≈ (a + b)/2, the center of the final interval.

Instead of determining the number of iterations in advance, we may wish to continue until the change in the dependent variable is less than some predetermined amount, say ε; that is, continue to iterate until |f(a) - f(b)| < ε. To minimize a function y = f(x), either maximize -y or switch the directions of the inequality signs in Step 4.
Example
Maximize f(x) = -x² - 2x over the interval -3 ≤ x ≤ 6. Assume the optimal tolerance to be less than 0.2 and choose ε = 0.01. The number of iterations n is the smallest integer with (1/2)^n (6 - (-3)) ≤ 0.2, i.e., 2^n ≥ 45, so n = 6.
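As a sketch of the steps above in code (the objective and bounds are read from the example as f(x) = -x² - 2x on [-3, 6]; everything else is generic):

```python
import math

def dichotomous_search(f, a, b, t=0.2, eps=0.01):
    """Maximize a unimodal f on [a, b] by dichotomous search.

    t   : desired final length of the interval of uncertainty
    eps : small offset that splits the midpoint into two test points
    """
    # Smallest n with (1/2)**n * (b - a) <= t (the offset eps is neglected)
    n = math.ceil(math.log2((b - a) / t))
    for _ in range(n):
        x1 = (a + b) / 2 - eps / 2
        x2 = (a + b) / 2 + eps / 2
        if f(x1) >= f(x2):      # the maximum cannot lie in (x2, b]
            b = x2
        else:                   # the maximum cannot lie in [a, x1)
            a = x1
    return (a + b) / 2          # center of the final interval

f = lambda x: -x**2 - 2 * x     # assumed reading of the example objective
x_star = dichotomous_search(f, -3.0, 6.0)   # close to the true maximizer -1
```

With t = 0.2 the final bracket is well under 0.2 wide, so the returned center lies within the stated tolerance of the maximizer x = -1.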

Fibonacci Search
Fibonacci search is a univariate search technique that can be used to find the maximum (or minimum) of an arbitrary unimodal, univariate objective function. The name Fibonacci search has been attributed to this technique because of the search procedure's dependence on a numerical sequence called the Fibonacci numbers:
Fn = Fn-1 + Fn-2, n = 2, 3, ..., with F0 = 1 and F1 = 1
The Fibonacci sequence

Identifier   Sequence   Fibonacci number
F0           0          1
F1           1          1
F2           2          2
F3           3          3
F4           4          5
F5           5          8
F6           6          13
F7           7          21
F8           8          34
F9           9          55
F10          10         89
F11          11         144
F12          12         233
F13          13         377
F14          14         610
F15          15         987
Etc.         Etc.       Etc.

Step-1: Define the end points of the search, A and B.
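The table can be generated directly from the recurrence (a small helper, indexed with F0 = F1 = 1 as above):

```python
def fibonacci(n):
    """Return the list [F0, F1, ..., Fn] with F0 = F1 = 1."""
    fib = [1, 1]
    for _ in range(2, n + 1):
        fib.append(fib[-1] + fib[-2])
    return fib

fibs = fibonacci(15)   # fibs[6] == 13 and fibs[15] == 987, as in the table
```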

Step-2: Define the number of functional evaluations, N, that are to be used in the search.
Step-3: Define the minimum resolution parameter, ε.
Step-4: Define the initial interval and the first interval of uncertainty as (B - A); therefore L0 = L1 = (B - A).
Step-5: Define the second interval of uncertainty as
L2 = (F(N-1)/F(N)) L1 + ((-1)^N / F(N)) ε
Step-6: Locate the first two functional evaluations at the two symmetric points X1 and X2, defined as follows:
X1 = A + L2
X2 = B - L2
Step-7: Calculate f(X1) and f(X2), and eliminate the interval in which the optimum cannot lie.
Step-8: Use the relationship Ln = Ln-2 - Ln-1 to locate the subsequent points of evaluation within the remaining interval of uncertainty.
Continue to repeat Steps 7 and 8 until N functional evaluations have been executed. The final solution can either be the average of the last two points evaluated (XN and XN-1) or the best (max/min) functional evaluation.
Example
Maximize the function f(x) = -3x² + 21.6x + 1, with a minimum resolution of 0.5, over six functional evaluations. The optimal value of f(x) is assumed to lie in the range (0, 25).
Solution: L0 = 25, L1 = 25
L2 = (F5/F6) L1 + ((-1)^6 / F6) ε = (8/13)(25) + (0.5/13) = 15.4231

The first two functional evaluations are conducted over the range (0, 25):
X1 = 0 + 15.4231 = 15.4231, f(X1) = -379.477
X2 = 25 - 15.4231 = 9.5769, f(X2) = -67.233
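The full procedure (Steps 1-8) can be sketched as follows; the interval, N = 6, and ε = 0.5 are taken from this example, while the bookkeeping for the symmetric partner point is an implementation choice not spelled out in the text:

```python
def fibonacci_search(f, A, B, N=6, eps=0.5):
    """Maximize a unimodal f on [A, B] with N functional evaluations."""
    fibs = [1, 1]                              # F0 = F1 = 1
    while len(fibs) <= N:
        fibs.append(fibs[-1] + fibs[-2])

    L1 = B - A
    # Step 5: second interval of uncertainty
    L2 = (fibs[N - 1] / fibs[N]) * L1 + ((-1) ** N / fibs[N]) * eps

    # Step 6: first two symmetric points (x2 < x1 because L2 > L1 / 2)
    x1, x2 = A + L2, B - L2
    f1, f2 = f(x1), f(x2)

    for _ in range(N - 2):                     # Steps 7 and 8
        if f1 >= f2:                           # maximum cannot lie in [A, x2]
            A = x2
            x_new = A + B - x1                 # symmetric partner of x1
            if x_new < x1:
                x2, f2 = x_new, f(x_new)
            else:
                x1, x2, f1, f2 = x_new, x1, f(x_new), f1
        else:                                  # maximum cannot lie in [x1, B]
            B = x1
            x_new = A + B - x2                 # symmetric partner of x2
            if x_new > x2:
                x1, f1 = x_new, f(x_new)
            else:
                x1, x2, f1, f2 = x2, x_new, f2, f(x_new)

    # report the best of the N evaluations
    return (x1, f1) if f1 >= f2 else (x2, f2)

f = lambda x: -3 * x**2 + 21.6 * x + 1         # objective from the example
x_best, f_best = fibonacci_search(f, 0.0, 25.0)
# x_best ≈ 3.731, f_best ≈ 39.83
```

The first placement reproduces L2 = 15.4231, and the successive points (9.5769, 5.8462, 3.731, 2.115, 4.2304) match the elimination table of the example.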

[Figure: the points X2 = 9.58 and X1 = 15.42 marked on the interval (0, 25)]

Hence the region to the right of X1 = 15.42 can be eliminated. Similarly,

Functional        Interval of       Xn-1      f(Xn-1)    Xn        f(Xn)
evaluations (n)   uncertainty
2                 0, 15.4231        9.5769    -67.233    15.4231   -379.477
3                 0, 9.5769         5.8462    24.744     9.5769    -67.233
4                 0, 5.8462         3.731     39.83      5.8462    24.744
5                 2.115, 5.8462     2.115     32.26      3.731     39.83
6                 2.115, 4.2304     3.731     39.83      4.2304    38.688

At the sixth functional evaluation, the interval of uncertainty is established as 2.115. The best estimate of the optimal solution is given by X5* = 3.731, f(X5*) = 39.83.
Golden Section Search Method
In performing a Fibonacci search, the two primary drawbacks are the a priori specification of the resolution factor and of the number of experiments to be performed. The limit of the ratio of consecutive Fibonacci numbers, F(n-1)/F(n), goes to 0.618 as n → ∞. This is known as the golden ratio or golden section.
Step-1: Define the initial interval of uncertainty as L0 = B - A, where B is the upper bound and A is the lower bound.
Step-2: Determine the first two functional evaluations at points X1 and X2 defined by
X1 = A + 0.618 (B - A)
X2 = B - 0.618 (B - A)
Step-3: Eliminate the appropriate region in which the optimum cannot lie.
Step-4: Determine the new region of uncertainty, defined by
Lj+1 = Lj-1 - Lj, j = 2, 3, ...
where L0 = (B - A), L1 = (B - A), and L2 = X1 - A or L2 = B - X2, depending upon the region eliminated at Step-3.
Step-5: Establish a new functional evaluation using the result of Step-4, evaluate f(x) at this point, and then go to Step-3. Repeat this procedure until a specified convergence criterion is satisfied.
Example
Minimize f(x) = x⁴ - 15x³ + 72x² - 135x. Terminate the search when |f(Xn) - f(Xn-1)| < 0.50. The initial range of x is 1 ≤ x ≤ 15.
Solution: The first two points are placed symmetrically within the interval 1 ≤ x ≤ 15. The golden section ratio places them at
X1 = 1 + 0.618 (15 - 1) = 9.652, f(X1) = 595.70
and
X2 = 15 - 0.618 (15 - 1) = 6.348, f(X2) = -168.82
Therefore the region to the right of X = 9.652 can be eliminated, and the interval of uncertainty after two functional evaluations is 1 ≤ X ≤ 9.652. Similarly,

Functional       Xn-1      f(Xn-1)    Xn        f(Xn)      Interval of         Length
evaluation (n)   (right)              (left)               uncertainty
2                9.652     595.7      6.346     -168.8     1 ≤ X ≤ 9.652       8.652
3                6.346     -168.8     4.304     -100.6     4.304 ≤ X ≤ 9.652   5.348
4                7.609     -114.64    6.346     -168.8     4.304 ≤ X ≤ 7.609   3.305
5                6.346     -168.8     5.566     -147.61    5.566 ≤ X ≤ 7.609   2.043
6                6.828     -166.42    6.346     -168.8     5.566 ≤ X ≤ 6.828   1.262
7                6.346     -168.8     6.048     -163.25    6.048 ≤ X ≤ 6.828   0.780
8                6.53      -169.83    6.346     -168.8     6.346 ≤ X ≤ 6.828   0.482
9                6.643     -169.34    6.53      -169.83    6.346 ≤ X ≤ 6.643   0.297

At iteration number 9, note that f(X9) = -169.34 and f(X8) = -169.83. Hence |f(X9) - f(X8)| = 0.49 < 0.50. Since the termination criterion is satisfied, the golden section search stops at this point. The best answer is given by X* = 6.643 and f(X*) = -169.34.
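The loop used in this example can be sketched as follows (minimization, with the |f(Xn) - f(Xn-1)| < 0.5 stopping rule from the text; the bracket-update bookkeeping is an implementation choice):

```python
def golden_section_minimize(f, A, B, tol=0.5):
    """Minimize a unimodal f on [A, B]; stop when two successive new
    function evaluations differ by less than tol."""
    r = 0.618                                   # limit of F(n-1)/F(n)
    x1, x2 = A + r * (B - A), B - r * (B - A)   # right / left points
    f1, f2 = f(x1), f(x2)
    f_prev = None
    while True:
        if f1 >= f2:                   # minimum cannot lie in [x1, B]
            B, x1, f1 = x1, x2, f2     # old left point becomes the right one
            x2 = B - r * (B - A)
            f_new = f2 = f(x2)
        else:                          # minimum cannot lie in [A, x2]
            A, x2, f2 = x2, x1, f1     # old right point becomes the left one
            x1 = A + r * (B - A)
            f_new = f1 = f(x1)
        if f_prev is not None and abs(f_new - f_prev) < tol:
            break
        f_prev = f_new
    return (x1, f1) if f1 <= f2 else (x2, f2)

f = lambda x: x**4 - 15 * x**3 + 72 * x**2 - 135 * x
x_best, f_best = golden_section_minimize(f, 1.0, 15.0)
# converges into the final bracket of the table, near x ≈ 6.5
```

Run on the example, the evaluated points (9.652, 6.348, 4.305, 7.609, 5.567, 6.829, 6.049, 6.531, 6.645) retrace the table; the function returns the better of the last two interior points rather than the last point evaluated.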

REGULA-FALSI METHOD
The convergence of the bisection method is very slow. It depends only on the choice of the end points of the interval [a, b]. The function f(x) does not have any role in finding the point c (which is just the midpoint of a and b); it is used only to decide the next smaller interval, [a, c] or [c, b]. A better approximation to c can be obtained by taking the straight line L joining the points (a, f(a)) and (b, f(b)) and finding where it intersects the x-axis. To obtain the value of c we can equate the two expressions for the slope m of the line L:
m = (f(b) - f(a)) / (b - a) = (0 - f(b)) / (c - b)
Solving for c gives
c = b - f(b) (b - a) / (f(b) - f(a))

Now the next smaller interval which brackets the root can be obtained by checking the sign of f(a) * f(c):
if f(a) * f(c) < 0, then b = c
if f(a) * f(c) > 0, then a = c
if f(a) * f(c) = 0, then c is the root.
Selecting c by the above expression is called the Regula-Falsi method or False position method.
Algorithm - False Position Scheme
Given a function f(x) continuous on an interval [a, b] such that f(a) * f(b) < 0:
do
    c = (a * f(b) - b * f(a)) / (f(b) - f(a))
    if f(a) * f(c) < 0 then b = c else a = c
while (none of the convergence criteria C1, C2 or C3 is satisfied)
Note that this expression for c is algebraically identical to c = b - f(b)(b - a)/(f(b) - f(a)).
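The scheme in code (a sketch; since the convergence criteria C1-C3 are not spelled out here, a simple tolerance on |f(c)| is used instead):

```python
import math

def regula_falsi(f, a, b, tol=1e-6, max_iter=100):
    """Find a root of f in [a, b] by the false position method.
    Requires f(a) * f(b) < 0 (the root is bracketed)."""
    fa, fb = f(a), f(b)
    c = a
    for _ in range(max_iter):
        c = (a * fb - b * fa) / (fb - fa)   # x-intercept of the chord
        fc = f(c)
        if abs(fc) < tol:                   # stand-in convergence criterion
            break
        if fa * fc < 0:                     # root lies in [a, c]
            b, fb = c, fc
        else:                               # root lies in [c, b]
            a, fa = c, fc
    return c

# the root of 3x + sin(x) - exp(x) = 0 in [0, 0.5] (the numerical example)
root = regula_falsi(lambda x: 3 * x + math.sin(x) - math.exp(x), 0.0, 0.5)
# root ≈ 0.36
```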

The false position method is again bound to converge because it brackets the root throughout its convergence process.
Numerical Example: Find a root of 3x + sin(x) - exp(x) = 0. The graph of this equation is given in the figure. From this it is clear that there is a root between 0 and 0.5 and also another root between 1.5 and 2.0. Now let us consider the function f(x) in the interval [0, 0.5], where f(0) * f(0.5) is less than zero, and use the regula-falsi scheme to obtain the zero of f(x) = 0.

Iteration No.   a       b       c       f(a) * f(c)
1               0       0.5     0.376   1.38 (+ve)
2               0.376   0.5     0.36    -0.102 (-ve)
3               0.376   0.36    0.36    -0.085 (-ve)

So one of the roots of 3x + sin(x) - exp(x) = 0 is approximately 0.36.

Example-2
Find the root of x * cos[x/(x - 2)] = 0 for a = 1 and b = 1.5.

Iteration No.   a       b       c       f(a) * f(c)
1               1       1.5     1.133   0.159 (+ve)
2               1.133   1.5     1.194   0.032 (+ve)
3               1.194   1.5     1.214   3.192E-3 (+ve)
4               1.214   1.5     1.22    2.586E-4 (+ve)
5               1.22    1.5     1.222   1.646E-5 (+ve)
6               1.222   1.5     1.222   3.811E-9 (+ve)

So one of the roots of x * cos[x/(x - 2)] = 0 is approximately 1.222.
Example-3
Find the root of x² = (exp(-2x) - 1) / x for a = -0.5 and b = 0.5.

Iteration No.   a       b          c          f(a) * f(c)
1               -0.5    0.5        0.209      -0.646 (-ve)
2               -0.5    0.208      0.0952     -0.3211 (-ve)
3               -0.5    0.0952     0.0438     -0.1547 (-ve)
4               -0.5    0.0438     0.0201     -0.0727 (-ve)
5               -0.5    0.0201     9.212E-3   -0.0336 (-ve)
6               -0.5    9.212E-3   4.218E-3   -0.015 (-ve)
7               -0.5    4.218E-3   1.931E-3   -7.1E-3 (-ve)
8               -0.5    1.931E-3   8.83E-4    -3.2E-3 (-ve)

So one of the roots of x2 = (exp(-2x) - 1) / x is approximately 8.83E-4.

Example-4
Find the root of exp(x² - 1) + 10 sin(2x) - 5 = 0 for a = 0 and b = 0.5.

Iteration No.   a   b       c       f(a) * f(c)
1               0   0.5     0.272   -2.637 (-ve)
2               0   0.272   0.242   -0.210 (-ve)
3               0   0.242   0.24    -0.014 (-ve)
4               0   0.24    0.24    -2.51E-3 (-ve)

So one of the roots of exp(x² - 1) + 10 sin(2x) - 5 = 0 is approximately 0.24.
Example 5
Find the root of exp(x) - 3x² = 0 for a = 3 and b = 4.

Iteration No.   a       b   c       f(a) * f(c)
1               3       4   3.512   24.137 (+ve)
2               3.512   4   3.681   3.375 (+ve)
3               3.681   4   3.722   0.211 (+ve)
4               3.722   4   3.731   9.8E-3 (+ve)
5               3.731   4   3.733   3.49E-4 (+ve)
6               3.733   4   3.733   1.733E-3 (+ve)

So one of the roots of exp(x) - 3x² = 0 is approximately 3.733.

Example 6
Find the root of tan(x) - x - 1 = 0 for a = 0.5 and b = 1.5.

Iteration No.   a       b     c       f(a) * f(c)
1               0.5     1.5   0.576   0.8836 (+ve)
2               0.576   1.5   0.644   0.8274 (+ve)
3               0.644   1.5   0.705   0.762 (+ve)
4               0.705   1.5   0.76    0.692 (+ve)
5               0.76    1.5   0.808   0.616 (+ve)
6               0.808   1.5   0.851   0.541 (+ve)
...             ...     ...   ...     ...
33              1.128   1.5   1.129   1.859E-4 (+ve)
34              1.129   1.5   1.129   2.947E-6 (+ve)

So one of the roots of tan(x) - x - 1 = 0 is approximately 1.129.
Example-7
Find the root of sin(2x) - exp(x - 1) = 0 for a = 0 and b = 0.5.

Iteration No.   a   b       c       f(a) * f(c)
1               0   0.5     0.305   -0.027 (-ve)
2               0   0.305   0.254   -4.497E-3 (-ve)
3               0   0.254   0.246   -6.384E-4 (-ve)
4               0   0.246   0.245   -9.782E-5 (-ve)
5               0   0.245   0.245   -3.144E-5 (-ve)

So one of the roots of sin(2x) - exp(x - 1) = 0 is approximately 0.245.
Example 8
Find the root between (2, 3) of x³ - 2x - 5 = 0 by using the regula-falsi method.
Given f(x) = x³ - 2x - 5:
f(2) = 2³ - 2(2) - 5 = -1 (negative)
f(3) = 3³ - 2(3) - 5 = 16 (positive)
Let us take a = 2 and b = 3.

The first approximation to the root is x1 and is given by
x1 = (a f(b) - b f(a)) / (f(b) - f(a)) = (2 f(3) - 3 f(2)) / (f(3) - f(2)) = (2 x 16 - 3 x (-1)) / (16 - (-1)) = (32 + 3)/17 = 35/17 = 2.058
Now f(2.058) = 2.058³ - 2 x 2.058 - 5 = 8.716 - 4.116 - 5 = -0.4
The root lies between 2.058 and 3. Taking a = 2.058 and b = 3, we have the second approximation to the root given by
x2 = (a f(b) - b f(a)) / (f(b) - f(a)) = (2.058 x 16 - 3 x (-0.4)) / (16 - (-0.4)) = 2.081
Now f(2.081) = 2.081³ - 2 x 2.081 - 5 = -0.15
The root lies between 2.081 and 3. Taking a = 2.081 and b = 3, the third approximation to the root is given by
x3 = (2.081 x 16 - 3 x (-0.15)) / (16 - (-0.15)) = 2.089, with f(2.089) = -0.062.
Continuing in the same way, the required root is approximately 2.09.
Practice Problems
Find the approximate value of the real root of x log10 x = 1.2 by the regula-falsi method.
Find the root of x e^x = 3 by the regula-falsi method, correct to three decimal places.
Find the root lying between 1 and 2 of f(x) = x³ + 2x² + 10x - 20 (Leonardo's equation) using the regula-falsi method.
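The arithmetic in this example can be checked with a one-step helper (a hypothetical name, applying the same update formula):

```python
def false_position_step(a, b, fa, fb):
    """One regula-falsi update: x-intercept of the chord through
    (a, fa) and (b, fb)."""
    return (a * fb - b * fa) / (fb - fa)

f = lambda x: x**3 - 2 * x - 5

x1 = false_position_step(2, 3, f(2), f(3))    # 35/17 ≈ 2.058
x2 = false_position_step(x1, 3, f(x1), f(3))  # ≈ 2.081
```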

Newton-Raphson method
The Newton-Raphson method considers a linear approximation to the first derivative of the function using the Taylor series expansion. This expression is then equated to zero to find the next guess. If the current point at iteration t is x(t), the point in the next iteration is governed by
x(t+1) = x(t) - f'(x(t)) / f''(x(t))

The iteration process given by the above equation is assumed to have converged when the derivative f'(x(t+1)) is close to zero:
|f'(x(t+1))| ≤ ε
where ε is a small quantity. The following figure depicts the convergence process in the Newton-Raphson method, where x* is the true solution.
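A minimal sketch of the iteration, assuming the standard minimization update x(t+1) = x(t) - f'(x(t))/f''(x(t)) with derivatives supplied analytically (the quadratic objective below is illustrative only, not the text's example function):

```python
def newton_raphson_minimize(df, d2f, x0, eps=0.01, max_iter=50):
    """Minimize by driving the first derivative to zero.

    df, d2f : first and second derivatives of the objective
    eps     : convergence threshold on |f'(x)|
    """
    x = x0
    for _ in range(max_iter):
        x = x - df(x) / d2f(x)       # Newton update applied to f'
        if abs(df(x)) <= eps:        # converged: derivative near zero
            break
    return x

# illustrative objective f(x) = x**2 - 2*x, so f'(x) = 2x - 2, f''(x) = 2
x_min = newton_raphson_minimize(lambda x: 2 * x - 2, lambda x: 2.0, x0=0.1)
# x_min == 1.0 (one Newton step suffices for a quadratic)
```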

Example Find the minimum of the function

using the Newton-Raphson method with the starting point x = 0.1. Use ε = 0.01 in equation (3) for checking the convergence.
Solution
The first and second derivatives of the function f(x) are given by
