
Optimization Techniques

Single-variable Optimization Algorithms
Optimal Design Process

Need for Optimization
• The designer needs to choose the important design variables associated with the design problem.
• The formulation of optimal design problems involves other considerations, such as constraints, the objective function, and variable bounds.
• As shown in the figure, there is usually a hierarchy in the optimal design process:
Choose design variables → Formulate constraints → Formulate objective function → Set up variable bounds → Choose an optimization algorithm → Obtain solutions

Single-variable Optimization Algorithms
• We begin our discussion with optimization algorithms for single-variable, unconstrained functions.
• Single-variable optimization algorithms are the basics.
• Since single-variable functions involve only one variable, the optimization procedures are simple and easy to understand.
• Moreover, these algorithms are used repeatedly as subtasks of many multi-variable optimization methods.
• Therefore, a clear understanding of these algorithms will help in learning more complex algorithms.

Single-variable Optimization Algorithms
• The algorithms described in this chapter can be used to solve minimization problems of the following type:
• Minimize the function f(x), where f(x) is the objective function and x is a real variable.
• The purpose of an optimization algorithm is to find a solution x for which the function f(x) is minimum.
• Two distinct types of algorithms are presented in this chapter:
• Direct search methods use only objective function values to locate the minimum point,
• and gradient-based methods use the first and/or second-order derivatives of the objective function to locate the minimum point.

Optimality Criteria
• Before we present conditions for a point to be an optimal point, we define three different types of optimal points.
• (i) Local optimal point: A point or solution x* is said to be a local optimal point if there exists no point in the neighborhood of x* which is better than x*.
• In the parlance of minimization problems, a point x* is a locally minimal point if no point in the neighborhood has a function value smaller than f(x*).
• (ii) Global optimal point: A point or solution x** is said to be a global optimal point if there exists no point in the entire search space which is better than the point x**.
• Similarly, a point x** is a global minimal point if no point in the entire search space has a function value smaller than f(x**).

Optimality Criteria
• (iii) Inflection point: A point x* is said to be an inflection point if the function value increases locally as x* increases and decreases locally as x* decreases, or if the function value decreases locally as x* increases and increases locally as x* decreases.
• Certain characteristics of the underlying objective function can be exploited to check whether a point is a local minimum, a global minimum, or an inflection point.
• Assuming that the first and second-order derivatives of the objective function f(x) exist in the chosen search space, we may expand the function in a Taylor series at any point x and impose the condition that any other point in the neighborhood has a larger function value.

Optimality Criteria
• The conditions for a point x to be a minimum point are that f'(x) = 0 and f''(x) > 0, where f' and f'' represent the first and second derivatives of the function.
• The first condition alone suggests that the point is either a minimum, a maximum, or an inflection point,
• and both conditions together suggest that the point is a minimum.
• In general, the sufficient conditions of optimality are given as follows:
• Suppose that at the point x* the first derivative is zero, and let n denote the order of the first nonzero higher-order derivative; then
• If n is odd, x* is an inflection point.
• If n is even, x* is a local optimum.
• 1. If that derivative is positive, x* is a local minimum.
• 2. If that derivative is negative, x* is a local maximum.
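This test is mechanical enough to automate. Below is a minimal Python sketch of it using symbolic differentiation; it assumes the sympy library is available, and the helper name classify_point is ours, not from the text:

import sympy as sp

def classify_point(f, x, x_star, max_order=8):
    # Find the order n of the first nonzero derivative of f at x_star
    # and apply the sufficient conditions stated above.
    for n in range(1, max_order + 1):
        d = sp.diff(f, x, n).subs(x, x_star)
        if d != 0:
            if n == 1:
                return "not a stationary point"
            if n % 2 == 1:
                return "inflection point"        # n is odd
            return "local minimum" if d > 0 else "local maximum"  # n is even
    return "inconclusive"

x = sp.Symbol("x")
print(classify_point(x**3, x, 0))  # -> inflection point (n = 3)
print(classify_point(x**4, x, 0))  # -> local minimum (n = 4, derivative 24 > 0)

For f(x) = x⁴, for instance, the first nonzero derivative at x = 0 occurs at n = 4 with value 24 > 0, so the point is a local minimum.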

EXERCISE 2.1.1
• Consider the optimality of the point x = 0 in the function f(x) = x³.
• The function is shown in Figure 2.1. It is clear from the figure that the point x = 0 is an inflection point, since the function value increases for x > 0 and decreases for x < 0 in a small neighborhood of x = 0.

We may use the sufficient conditions for optimality to show this fact.

The first derivative of the function is f'(x) = 3x², which is zero at x = 0.
Searching for a nonzero higher-order derivative at x = 0:
f''(x) = 6x, which is also zero at x = 0, and f'''(x) = 6 (a nonzero number).
Thus, the first nonzero derivative occurs at n = 3, and since n is odd, the point x = 0 is an inflection point.

EXERCISE 2.2.1
Consider the problem: Minimize f(x) = x² + 54/x in the interval (0, 5).
A plot of the function is shown in Figure 2.4.
The plot shows that the minimum lies at x* = 3. The corresponding function value at that point is f(x*) = 27.
By calculating the first and second derivatives at this point, we observe that f'(3) = 0 and f''(3) = 6.
Thus, the point x = 3 is a local minimum point, according to the sufficiency conditions.
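These two derivative values are easy to verify symbolically; a quick check, again assuming sympy is available:

import sympy as sp

x = sp.Symbol("x")
f = x**2 + 54 / x
print(sp.diff(f, x).subs(x, 3))     # f'(3)  -> 0
print(sp.diff(f, x, 2).subs(x, 3))  # f''(3) -> 6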

EXERCISE 2.2.1
• In the following, we try to bracket the minimum of the above function using the exhaustive search method.
• Let us assume that we would like to bracket the minimum point by evaluating at most 11 different function values; this divides the search space into 10 equal sub-intervals, i.e., n = 10.
• Step 1: Let us take x1 = a = 0 and b = 5. Since n = 10, the increment is Δx = (5 − 0)/10 = 0.5.
• We set x2 = 0 + 0.5 = 0.5 and x3 = 0.5 + 0.5 = 1.0.
• Step 2: Computing function values, we find f(0) = ∞, f(0.5) = 108.25, and f(1.0) = 55.00.
• Comparing these, we see that f(x1) > f(x2) > f(x3); thus, the minimum does not lie in the interval (0, 1.0).
• We set x1 = 0.5, x2 = 1.0, x3 = 1.5, and proceed to Step 3.

EXERCISE 2.2.1
• Step 3: At this step, we observe that x3 < 5. Therefore, we move to Step 2. This completes one iteration of the exhaustive search method.
• Since the minimum is not yet bracketed, we continue with the next iteration.
• Step 2: At this iteration, we have function values at x1 = 0.5, x2 = 1.0, and x3 = 1.5: f(0.5) = 108.25, f(1.0) = 55.00, and f(x3) = f(1.5) = 38.25. Thus, f(x1) > f(x2) > f(x3), and the minimum does not lie in the interval (0.5, 1.5). Therefore, we set x1 = 1.0, x2 = 1.5, x3 = 2.0, and move to Step 3.
• Step 3: Once again, x3 < 5.0. Thus, we move to Step 2 for the next iteration.
• Step 2: At this step, we need to compute the function value only at x3 = 2.0. The corresponding function value is f(x3) = f(2.0) = 31.00. Since f(x1) > f(x2) > f(x3), we continue with Step 3. We set x1 = 1.5, x2 = 2.0, and x3 = 2.5.
• Step 3: At this step, x3 < 5.0. Thus, we move to Step 2 for the next iteration.

EXERCISE 2.2.1
• Step 2: The function value at x3 = 2.5 is f(x3) = 27.85. As in previous iterations, we observe that f(x1) > f(x2) > f(x3), and therefore we go to Step 3. The new set of three points is x1 = 2.0, x2 = 2.5, and x3 = 3.0.
• Step 3: Once again, x3 < 5.0. Thus, we move to Step 2.
• Step 2: Here, f(x3) = f(3.0) = 27.00. We observe that f(x1) > f(x2) > f(x3). We set x1 = 2.5, x2 = 3.0, and x3 = 3.5.
• Step 3: Again, x3 < 5.0, and we move to Step 2.
• Step 2: Here, f(x3) = f(3.5) = 27.68. At this iteration we have a different situation: f(x1) > f(x2) < f(x3). This is precisely the condition for termination of the algorithm. Therefore, we have obtained a bound on the minimum: x* ∈ (2.5, 3.5).

EXERCISE 2.2.1
• As already stated, with n = 10 the accuracy of the solution can only be 2(5 − 0)/10 = 1.0, which is exactly the width of the final interval we obtained.
• An interesting point to note is that if we require more precision in the obtained solution, we need to increase n; in other words, we should be prepared to compute more function evaluations.
• For a desired accuracy in the solution, the parameter n has to be chosen accordingly.
• For example, for an accuracy of 0.001, the required n can be obtained by solving the following equation for n: 2(b − a)/n = 0.001.
• When a = 0 and b = 5, about n = 10,000 sub-intervals are required, and on the average (n/2 + 2) = 5,002 function evaluations are required.
• In the above problem, with n = 10,000 the obtained interval is (2.9995, 3.0005).
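The whole procedure walked through above is short enough to write out in code. The following is a minimal Python sketch of the exhaustive search; the function name exhaustive_search and its signature are our assumptions:

def f(x):
    # The exercise function; f(0) is taken as infinity.
    return x**2 + 54.0 / x if x != 0 else float("inf")

def exhaustive_search(f, a, b, n):
    # Bracket the minimum of a unimodal f on (a, b) using n equal sub-intervals.
    dx = (b - a) / n
    x1, x2, x3 = a, a + dx, a + 2 * dx
    f1, f2, f3 = f(x1), f(x2), f(x3)
    while x3 <= b:
        if f1 >= f2 <= f3:          # Step 2: the minimum is bracketed
            return (x1, x3)
        # Step 3: slide the three-point window one increment to the right
        x1, x2, x3 = x2, x3, x3 + dx
        f1, f2, f3 = f2, f3, f(x3)
    return None                     # no interior minimum found in (a, b)

print(exhaustive_search(f, 0.0, 5.0, 10))  # -> (2.5, 3.5)

With n = 10 this reproduces the hand-computed bracket (2.5, 3.5); with n = 10,000 it returns (2.9995, 3.0005), up to floating-point rounding.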

Bounding phase method
• The bounding phase method is used to bracket the minimum of a function.
• This method is guaranteed to bracket the minimum of a unimodal function.
• The algorithm begins with an initial guess and thereby finds a search direction based on two more function evaluations in the vicinity of the initial guess.
• Thereafter, an exponential search strategy is adopted to reach the optimum.
• In the following algorithm, an exponent of two is used, but any other value may very well be used.

Bounding phase method
Algorithm
• Step 1: Choose an initial guess x(0) and an increment Δ. Set k = 0.
• Step 2: If f(x(0) − |Δ|) ≥ f(x(0)) ≥ f(x(0) + |Δ|), then Δ is positive;
• Else if f(x(0) − |Δ|) ≤ f(x(0)) ≤ f(x(0) + |Δ|), then Δ is negative;
• Else go to Step 1.
• Step 3: Set x(k+1) = x(k) + 2^k Δ.
• Step 4: If f(x(k+1)) < f(x(k)), set k = k + 1 and go to Step 3;
• Else the minimum lies in the interval (x(k−1), x(k+1)); terminate.
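A minimal Python sketch of these four steps is given below; the function name bounding_phase and the list-based bookkeeping of the iterates are our assumptions, not part of the algorithm statement:

def f(x):
    return x**2 + 54.0 / x          # the running example function

def bounding_phase(f, x0, delta):
    # Bracket the minimum of a unimodal f starting from the guess x0.
    d = abs(delta)
    f_minus, f0, f_plus = f(x0 - d), f(x0), f(x0 + d)
    if f_minus >= f0 >= f_plus:
        step = d                    # Step 2: f decreases to the right
    elif f_minus <= f0 <= f_plus:
        step = -d                   # Step 2: f decreases to the left
    else:
        # The slide's Step 2 returns to Step 1 here; for a unimodal f
        # this case means (x0 - d, x0 + d) already brackets the minimum.
        return (x0 - d, x0 + d)
    xs = [x0 - step, x0, x0 + step]      # x(-1), x(0), x(1)
    k = 0
    while f(xs[-1]) < f(xs[-2]):         # Step 4: continue while f drops
        k += 1
        xs.append(xs[-1] + 2**k * step)  # Step 3: x(k+1) = x(k) + 2^k * step
    lo, hi = sorted((xs[-3], xs[-1]))    # minimum lies in (x(k-1), x(k+1))
    return (lo, hi)

print(bounding_phase(f, 0.6, 0.5))       # -> (2.1, 8.1), as in Exercise 2.2.2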

Bounding phase method
ACCURACY
• If the chosen Δ is large, the bracketing accuracy of the minimum
point is poor but the bracketing of the minimum is faster.
• On the other hand, if the chosen Δ is small, the bracketing
accuracy is better, but more function evaluations may be
necessary to bracket the minimum.
• This method of bracketing the optimum is usually faster than the exhaustive search method discussed in the previous section.
• We illustrate the working of this algorithm by taking the same
exercise problem.

EXERCISE 2.2.2
• We would like to bracket the minimum of the function f(x) = x² + 54/x using the bounding phase method.
• Step 1: We choose an initial guess x(0) = 0.6 and an increment Δ = 0.5. We also set k = 0.
• Step 2: We calculate three function values: f(x(0) − |Δ|) = f(0.6 − 0.5) = 540.010, f(x(0)) = f(0.6) = 90.360, and f(x(0) + |Δ|) = f(0.6 + 0.5) = 50.301.
• We observe that f(0.1) > f(0.6) > f(1.1). Thus we set Δ = +0.5.
• Step 3: We compute the next guess: x(1) = x(0) + 2⁰Δ = 1.1.
• Step 4: The function value f(x(1)) = 50.301 is less than that at x(0). Thus, we set k = 1 and go to Step 3.
• This completes one iteration of the bounding phase algorithm.

EXERCISE 2.2.2
• Step 3: The next guess is x(2) = x(1) + 2¹Δ = 1.1 + 2(0.5) = 2.1.
• Step 4: The function value at x(2) is 30.124, which is smaller than that at x(1). Thus we set k = 2 and move to Step 3.
• Step 3: We compute x(3) = x(2) + 2²Δ = 2.1 + 4(0.5) = 4.1.
• Step 4: The function value f(x(3)) = 29.981 is smaller than f(x(2)) = 30.124. We set k = 3 and move to Step 3.
• Step 3: The next guess is x(4) = x(3) + 2³Δ = 4.1 + 8(0.5) = 8.1.
• Step 4: The function value at this point is f(8.1) = 72.277, which is larger than f(x(3)) = 29.981. Thus, we terminate with the obtained interval (2.1, 8.1).

Comments after exercise
• With Δ = 0.5, the obtained bracket is wide, but the number of function evaluations required is only 7.
• It is found that with x(0) = 0.6 and Δ = 0.001, the obtained interval is (1.623, 4.695), and the number of function evaluations is 15.
• The algorithm approaches the optimum exponentially, but the accuracy of the obtained interval may not be very good;
• whereas in the exhaustive search method, the number of iterations required to get near the optimum may be large, but the obtained accuracy is good.
• An algorithm with a mixed strategy may be more desirable.
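Both runs are easy to reproduce with the bounding_phase sketch given after the algorithm: bounding_phase(f, 0.6, 0.5) returns (2.1, 8.1), while bounding_phase(f, 0.6, 0.001) returns the interval (1.623, 4.695) quoted here.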

Region-Elimination Methods
• Once the minimum point is bracketed, a more sophisticated algorithm needs to be used to improve the accuracy of the solution.
• In this section, we describe three algorithms that work primarily on the principle of region elimination and require comparatively fewer function evaluations.
• Depending on the function values evaluated at two points, and assuming that the function is unimodal in the chosen search space, it can be concluded that the desired minimum cannot lie in some portion of the search space.
• The fundamental rule for region-elimination methods is described next.

Example
• Let us consider two points x1 and x2 which lie in the interval (a, b) and satisfy x1 < x2.
• For minimization of unimodal functions, we can conclude the following:

If f(x1) > f(x2), then the minimum does not lie in (a, x1).
If f(x1) < f(x2), then the minimum does not lie in (x2, b).
If f(x1) = f(x2), then the minimum does not lie in (a, x1) or (x2, b).
Consider the unimodal function drawn in Figure 2.5 as an example.
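The rule translates directly into code. Below is a minimal Python sketch of a single region-elimination step; the helper name eliminate_region is our assumption:

def f(x):
    return x**2 + 54.0 / x

def eliminate_region(f, a, b, x1, x2):
    # Shrink the interval (a, b) using two interior points x1 < x2.
    f1, f2 = f(x1), f(x2)
    if f1 > f2:
        return (x1, b)   # minimum cannot lie in (a, x1)
    if f1 < f2:
        return (a, x2)   # minimum cannot lie in (x2, b)
    return (x1, x2)      # f1 == f2: both outer regions are eliminated

print(eliminate_region(f, 2.5, 3.5, 2.8, 3.2))  # -> (2.8, 3.5)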

Example
• When the function value at x1 is larger than that at x2, the minimum point x* cannot lie on the left side of x1.
• Thus, we can eliminate the region (a, x1) from further consideration, and we reduce our interval of interest from (a, b) to (x1, b).
• Similarly, the second possibility (f(x1) < f(x2)) can be explained.
• If the third situation occurs, that is, when f(x1) = f(x2) (a rare situation, especially when numerical computations are performed), we can conclude that the regions (a, x1) and (x2, b) can both be eliminated, with the assumption that there exists only one local minimum in the search space (a, b).
• The interval halving algorithm and the Fibonacci search method use the above fundamental rule for region elimination; we will discuss them in the next slides.

