
UNCONSTRAINED PEAK SEEKING METHODS

SEARCH TECHNIQUES FOR SINGLE VARIABLE

A good technique for the optimization of a function of just one variable is essential for two
reasons:

i. Some unconstrained problems inherently involve only one variable.

ii. Techniques for unconstrained and constrained optimization problems generally involve repeated use of a one-dimensional search.

One method of optimizing a function of a single variable is to set up as fine a grid as you wish for the values of x and calculate the function value at every point on the grid. An approximation to the optimum is the best of these function values. Although this is not a very efficient way of finding the optimum, it can yield acceptable results. If, however, we were to use this approach to optimize a multivariable function of more than, say, five variables, the computer time would quite likely become prohibitive, and the accuracy is usually not satisfactory.
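A minimal Python sketch of this grid approach is given below; the test function g(x) and the search interval are placeholders chosen only to illustrate the idea, not taken from this text.

    import numpy as np

    def grid_search(f, lo, hi, n_points=1001):
        """Evaluate f on a uniform grid over [lo, hi] and return the best point."""
        xs = np.linspace(lo, hi, n_points)     # grid of candidate x values
        fs = np.array([f(x) for x in xs])      # function value at every grid point
        i = fs.argmin()                        # index of the smallest value (minimization)
        return xs[i], fs[i]

    # Placeholder example: minimize g(x) = (x - 2)**2 on [0, 5]
    g = lambda x: (x - 2.0) ** 2
    x_best, f_best = grid_search(g, 0.0, 5.0)
    print(x_best, f_best)   # roughly 2.0 and 0.0; accuracy is limited by the grid spacing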

In optimizing a function of a single variable, we recognize that there is no substitute for a good first guess at the starting point of the search. Insight into the problem, as well as previous experience, is therefore often an important factor in the amount of time and effort required to solve a given optimization problem.

The methods for determining the optimum value of a function can be grouped as follows:

1. Methods that use function values or first and second derivatives: these include Newton's method, the quasi-Newton method, and the finite-difference approximation of Newton's method.
2. Polynomial approximation methods: these include the quadratic interpolation method and the cubic interpolation method.

NEWTON’S METHOD

Recall that the first-order necessary condition for a local minimum is f'(x) = 0. Consequently, you can solve the equation f'(x) = 0 by Newton's method to get

xk+1 = xk − f'(xk) / f''(xk)        (1)

making sure at each stage k that f(xk+1) < f(xk) for a minimum.
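A minimal Python sketch of this iteration is given below; the quadratic test function and its derivatives are placeholders chosen only to illustrate the update, not one of the examples that follow.

    def newton_minimize(fprime, fsecond, x0, tol=1e-6, max_iter=50):
        """Newton iteration xk+1 = xk - f'(xk)/f''(xk) for a stationary point."""
        x = x0
        for _ in range(max_iter):
            step = fprime(x) / fsecond(x)   # Newton step; assumes f''(x) != 0
            x_new = x - step
            if abs(x_new - x) <= tol:       # stop when successive iterates are close
                return x_new
            x = x_new
        return x

    # Placeholder example: f(x) = x**2 - 4x, so f'(x) = 2x - 4 and f''(x) = 2
    x_star = newton_minimize(lambda x: 2*x - 4, lambda x: 2.0, x0=10.0)
    print(x_star)   # 2.0, found in a single step because the objective is quadratic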

The advantages of Newton's method are:

1. The procedure is locally, quadratically convergent to the extremum as long as f''(x) ≠ 0.
2. For a quadratic function, the minimum is obtained in one iteration.

The disadvantages of Newton's method are:

1. You have to calculate both f'(x) and f''(x).
2. If f''(x) → 0, the method converges slowly.
3. If the initial point is not close enough to the minimum, the method may not converge.

The effectiveness of this technique is determined by examining the rate of convergence of the method used. Rates of convergence can be expressed in various ways, but a common classification is in terms of the order p of the error. With the error at iteration k defined as

εk = xk − x*        (2)

the sequence {xk} is said to converge with order p if

lim (k→∞) |εk+1| / |εk|^p = c, with 0 ≤ c < ∞        (3)

If p = 2, the order of convergence is said to be quadratic.
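As an illustrative sketch, the order p can be estimated numerically from successive errors; the iterate sequence below is a constructed placeholder whose error roughly squares at each step, not data from this text.

    import math

    def estimate_order(xs, x_star):
        """Estimate the convergence order p from the last three errors,
        using p ~ ln(e_k+1 / e_k) / ln(e_k / e_k-1) with e_k = |x_k - x_star|."""
        e = [abs(x - x_star) for x in xs[-3:]]
        return math.log(e[2] / e[1]) / math.log(e[1] / e[0])

    # Placeholder sequence converging to 1.0 with the error squaring each step
    iterates = [1.5, 1.25, 1.0625, 1.00390625, 1.0000152587890625]
    print(estimate_order(iterates, 1.0))   # prints 2.0, i.e. quadratic behaviour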

Example 1

Using Newton's method, find the maximum value of the function:

with an initial guess of x0 = 2.5. Let the tolerance be ≤ 10^-3.

Solution
The maximum occurs at x = 1.252353 (|x4 − x3| = 0.000338 < 10^-3).

To get Ymax, substitute x = 1.252353 into the function.

Therefore, Ymax = 1.28449.


Example 2

Using Newton's method, find the minimum value of the function:

with an initial guess of x0 = 3. Let the tolerance be ≤ 10^-6.

Solution
Since |x8 − x7| = 10^-7 < 10^-6, we stop the iteration.

The optimum x value is taken as 0.6299606.

k   xk          f'(xk)          f''(xk)         xk+1

0   3           107             108             2.0092593
1   2.0092593   31.44650523     48.44547325     1.3601479
2   1.3601479   9.065107833     22.2000289      0.9518103
3   0.9518103   2.449142686     10.87131365     0.7265254
4   0.7265254   0.533954146     6.334069682     0.6422266
5   0.6422266   0.059558503     4.94946069      0.6301933
6   0.6301933   0.001108972     4.765723271     0.6299606
7   0.6299606   4.09435×10^-7   4.762204456     0.6299605
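The tabulated values of f'(xk) and f''(xk) are consistent with f'(x) = 4x³ − 1 and f''(x) = 12x² (the constant term of f itself cannot be recovered from the table). Assuming those derivative expressions only to reproduce the iteration, a short Python check is:

    fprime  = lambda x: 4*x**3 - 1     # assumed from the tabulated f'(xk) values
    fsecond = lambda x: 12*x**2        # assumed from the tabulated f''(xk) values

    x = 3.0                            # initial guess x0 = 3
    for k in range(8):
        x_new = x - fprime(x) / fsecond(x)          # Newton update, Equation (1)
        print(k, round(x, 7), round(x_new, 7))      # reproduces the rows of the table
        if abs(x_new - x) < 1e-6:                   # stopping tolerance from Example 2
            break
        x = x_new
    print("optimum x ~", x_new)        # about 0.6299605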

POLYNOMIAL APPROXIMATION METHODS

Another class of methods of unidimensional minimization locates a point x near x*, the value of the independent variable corresponding to the minimum of f(x), by extrapolation and interpolation using polynomial approximations as models of f(x). Both quadratic and cubic approximations have been proposed, using function values only or using both function and derivative values. For functions where f'(x) is continuous, these methods are much more efficient than other methods and are now widely used to perform line searches within multivariable optimizers.

a) Quadratic Interpolation

We start with three points x1, x2, and x3 in increasing order (they might be equally spaced), but the extreme points must bracket the minimum. We know that a quadratic function f(x) = a + bx + cx² can be passed exactly through the three points, and that this function can be differentiated and its derivative set equal to zero to yield the minimum of the approximating function:

x* = −b / (2c)        (1)

Suppose that f(x) is evaluated at x1, x2, and x3 to yield f(x1) ≡ f1, f(x2) ≡ f2, and f(x3) ≡ f3. The coefficients b and c can be evaluated from the solution of the three linear equations

f1 = a + bx1 + cx1²        (2)
f2 = a + bx2 + cx2²        (3)
f3 = a + bx3 + cx3²        (4)

via determinants or matrix algebra. Introducing b and c, expressed in terms of x1, x2, x3, f1, f2, and f3, into Equation (1) gives

x* = (1/2) [ (x2² − x3²) f1 + (x3² − x1²) f2 + (x1² − x2²) f3 ] / [ (x2 − x3) f1 + (x3 − x1) f2 + (x1 − x2) f3 ]        (5)
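A direct transcription of Equation (5) into Python is sketched below; the sample points and the test quadratic are placeholders used only to confirm that the formula recovers the minimum of an exact quadratic.

    def quad_interp_min(x1, x2, x3, f1, f2, f3):
        """Estimate the minimizer from three bracketing points via Equation (5)."""
        num = (x2**2 - x3**2)*f1 + (x3**2 - x1**2)*f2 + (x1**2 - x2**2)*f3
        den = (x2 - x3)*f1 + (x3 - x1)*f2 + (x1 - x2)*f3
        return 0.5 * num / den

    # Check on an exact quadratic, f(x) = (x - 1.5)**2 + 2, whose minimum is at x = 1.5
    f = lambda x: (x - 1.5)**2 + 2
    print(quad_interp_min(0.0, 1.0, 3.0, f(0.0), f(1.0), f(3.0)))   # prints 1.5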

To illustrate the first stage in the search procedure, examine the four points in Figure 1 for stage
1.

Figure 1: Two stages of quadratic interpolation.

We want to reduce the initial interval [x1, x3]. By examining the values of f(x) [with the assumptions that f(x) is unimodal and has a minimum], we can discard the interval from x1 to x2 and use the region (x2, x3) as the new interval. The new interval contains three points that can be introduced into Equation (5) to obtain a new estimate of x*, and so on. In general, you evaluate f(x*) and discard from the set {x1, x2, x3} the point that corresponds to the greatest value of f(x), unless a bracket on the minimum of f(x) is lost by doing so, in which case you discard the x so as to maintain the bracket. The specific tests and choices of xi to maintain the bracket are illustrated in Figure 2.
Figure 2: How to maintain a bracket on the minimum in quadratic interpolation
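Combining Equation (5) with the bracket-maintenance idea described above gives a rough sketch of the full search. The bracket-update rule used here (keep three points around the better of the two interior points) is a simplification for illustration, not necessarily the exact logic of Figure 2, and the test function is a placeholder.

    def quad_step(x1, x2, x3, f1, f2, f3):
        """One quadratic-interpolation step, Equation (5), for the triple (x1, x2, x3)."""
        num = (x2**2 - x3**2)*f1 + (x3**2 - x1**2)*f2 + (x1**2 - x2**2)*f3
        den = (x2 - x3)*f1 + (x3 - x1)*f2 + (x1 - x2)*f3
        return 0.5 * num / den

    def quad_search(f, x1, x2, x3, tol=1e-6, max_iter=100):
        """Iterate Equation (5), keeping three points that bracket the minimum."""
        for _ in range(max_iter):
            xs = quad_step(x1, x2, x3, f(x1), f(x2), f(x3))
            if abs(xs - x2) < tol:
                return xs
            # Simple update: keep the better of the two interior points of the sorted
            # quadruple as the new middle point, flanked by its immediate neighbours.
            pts = sorted([x1, x2, x3, xs])
            vals = [f(p) for p in pts]
            i = min(1, 2, key=lambda j: vals[j])
            x1, x2, x3 = pts[i - 1], pts[i], pts[i + 1]
        return x2

    # Placeholder unimodal test function: the minimum of (x - 2)**2 + 1 is at x = 2
    print(quad_search(lambda x: (x - 2.0)**2 + 1.0, 0.0, 1.0, 4.0))   # about 2.0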
