
ONE-DIMENSIONAL

MINIMIZATION
Elimination Methods

Prepared by:
Sherelyn A. Evangelio
OUTLINE
§ Introduction
§ Fibonacci Method
§ Golden Section Method
§ Direct Root Methods
Introduction
§ There are three types of points at which the NLP
max (or min) 𝑓(𝑥)
s.t. 𝑥 ∈ [𝑎, 𝑏]
can have a local maximum or minimum:
Case 1: Points where 𝑎 < 𝑥 < 𝑏 and 𝑓′(𝑥) = 0
Case 2: Points where 𝑓′(𝑥) does not exist
Case 3: Endpoints 𝑎 and 𝑏 of [𝑎, 𝑏]
Case 1
Points Where 𝑎 < 𝑥 < 𝑏 and 𝑓′(𝑥) = 0

THEOREM
• If 𝑓′(𝑥₀) = 0 and 𝑓″(𝑥₀) < 0, then 𝑥₀ is a
local maximum. If 𝑓′(𝑥₀) = 0 and 𝑓″(𝑥₀) > 0,
then 𝑥₀ is a local minimum.
Case 1
THEOREM
• If 𝑓′(𝑥₀) = 0, then:
• If the first nonvanishing (nonzero) derivative at
𝑥₀ is an odd-order derivative, then 𝑥₀ is neither a
local maximum nor a local minimum.
• If the first nonvanishing derivative at 𝑥₀ is
positive and an even-order derivative, then 𝑥₀ is
a local minimum.
• If the first nonvanishing derivative at 𝑥₀ is
negative and an even-order derivative, then 𝑥₀ is
a local maximum.
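The higher-order derivative test above can be sketched directly in code. A minimal Python sketch; the function name and the convention of passing the values 𝑓″(𝑥₀), 𝑓‴(𝑥₀), … as a list are my own:

```python
def classify_stationary_point(higher_derivs):
    """Given f'(x0) = 0 and the values [f''(x0), f'''(x0), ...] at x0,
    apply the higher-order derivative test: find the first
    nonvanishing derivative; odd order -> neither a maximum nor a
    minimum; even order and positive -> local minimum; even order
    and negative -> local maximum."""
    for order, d in enumerate(higher_derivs, start=2):
        if d != 0:
            if order % 2 == 1:
                return "neither"
            return "local minimum" if d > 0 else "local maximum"
    return "inconclusive"

# f(x) = x**3 at x0 = 0: f'' = 0, f''' = 6 -> first nonzero is odd order
print(classify_stationary_point([0, 6]))        # neither
# f(x) = x**4 at x0 = 0: f'' = f''' = 0, f'''' = 24 -> even order, positive
print(classify_stationary_point([0, 0, 24]))    # local minimum
```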
Case 2
Points Where 𝑓′(𝑥) Does Not Exist
Case 3
Endpoints 𝑎 and 𝑏 of [𝑎, 𝑏]

§ If 𝑓′(𝑎) > 0, then 𝑎 is a local minimum.
§ If 𝑓′(𝑎) < 0, then 𝑎 is a local maximum.
§ If 𝑓′(𝑏) > 0, then 𝑏 is a local maximum.
§ If 𝑓′(𝑏) < 0, then 𝑏 is a local minimum.
Example
§ It costs a monopolist $5/unit to produce a
product. If he produces 𝑥 units of the
product, each can be sold for 10 − 𝑥 dollars.
To maximize profit, how much should the
monopolist produce?
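This is a Case 1 check: profit is 𝑃(𝑥) = 𝑥(10 − 𝑥) − 5𝑥 = 5𝑥 − 𝑥², so 𝑃′(𝑥) = 5 − 2𝑥 = 0 gives 𝑥 = 2.5, and 𝑃″(𝑥) = −2 < 0 confirms a maximum. A small Python check (the grid comparison is only a sanity test, not part of the analytic argument):

```python
def profit(x):
    # revenue x*(10 - x) minus production cost 5x
    return x * (10 - x) - 5 * x

x_star = 2.5                      # root of P'(x) = 5 - 2x
print(x_star, profit(x_star))     # 2.5 6.25
# sanity check: no point on a fine grid over [0, 10] does better
assert all(profit(k / 100) <= profit(x_star) for k in range(1001))
```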
Example
§ Find
max 𝑓(𝑥)
s.t. 0 ≤ 𝑥 ≤ 6
where
𝑓(𝑥) = 2 − (𝑥 − 1)² for 0 ≤ 𝑥 < 3
𝑓(𝑥) = −3 + (𝑥 − 4)² for 3 ≤ 𝑥 ≤ 6
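All three cases appear in this example: stationary points inside each piece (𝑥 = 1 and 𝑥 = 4), the point 𝑥 = 3 where 𝑓′ does not exist, and the endpoints 𝑥 = 0 and 𝑥 = 6. A Python sketch that enumerates the candidate points:

```python
def f(x):
    # piecewise objective from the example, defined on [0, 6]
    if 0 <= x < 3:
        return 2 - (x - 1) ** 2
    if 3 <= x <= 6:
        return -3 + (x - 4) ** 2
    raise ValueError("x outside [0, 6]")

# Case 1: x = 1 and x = 4; Case 2: x = 3 (kink); Case 3: x = 0 and x = 6
candidates = [0, 1, 3, 4, 6]
for x in candidates:
    print(x, f(x))
best = max(candidates, key=f)
print("maximum:", best, f(best))    # maximum: 1 2
```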
Seatwork (1/2 YP)
If a monopolist produces 𝑞 units, she can
charge 100 − 4𝑞 dollars/unit. The fixed cost
of production is $50, and the variable per-unit
cost is $2. How can the monopolist maximize
profits? If a sales tax of $2/unit must be paid
by the monopolist, then would she increase or
decrease production?
One-dimensional Minimization Methods

§ Elimination Methods
• Unrestricted Search
• Exhaustive Search
• Dichotomous Search
• Fibonacci Method
• Golden Section Method

§ Interpolation Methods
• Requiring no derivatives: Quadratic
• Requiring derivatives: Cubic, Direct Root Methods
(Newton, Quasi-Newton, Secant)
One-dimensional Minimization Methods

§ The elimination methods can be used to
minimize even discontinuous functions.
§ The quadratic and cubic interpolation
methods fit polynomial approximations
to the given function.
§ The direct root methods are root-finding
methods that can be considered
equivalent to quadratic interpolation.
Fibonacci Method
§ The Fibonacci method can be used to find
the minimum of a function of one variable
even if the function is not continuous. It has
the following limitations:
1. The initial interval of uncertainty, in
which the optimum lies, has to be known.
2. The function being optimized has to be
unimodal in the initial interval of
uncertainty.
3. The exact optimum cannot be located in
this method. Only an interval known as
the final interval of uncertainty will be
known.
4. The number of function evaluations to be
used in the search or the resolution
required has to be specified beforehand.
Fibonacci Method
Example:

Minimize 𝑓(𝑥) = 0.65 − 0.75/(1 + 𝑥²) − 0.65𝑥 tan⁻¹(1/𝑥)
in the interval [0, 3] by the Fibonacci method
using 𝑛 = 6.
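A compact Python sketch of the Fibonacci search (my own implementation, not the textbook tableau). In this simple version the last two interior points coincide, so it ends with a bracket of width 2𝐿₀/𝐹ₙ rather than the textbook 𝐿₀/𝐹ₙ; the usual small δ-perturbation of the final point is omitted for brevity:

```python
import math

def fibonacci_search(f, a, b, n):
    """Minimize a unimodal f on [a, b] with n function evaluations,
    shrinking the interval of uncertainty by Fibonacci ratios."""
    F = [1, 1]                      # F_0 = F_1 = 1
    while len(F) <= n:
        F.append(F[-1] + F[-2])
    k = n
    x1 = a + (F[k - 2] / F[k]) * (b - a)
    x2 = a + (F[k - 1] / F[k]) * (b - a)
    f1, f2 = f(x1), f(x2)
    while k > 2:
        k -= 1
        if f1 < f2:                 # minimum lies in [a, x2]
            b, x2, f2 = x2, x1, f1
            x1 = a + (F[k - 2] / F[k]) * (b - a)
            f1 = f(x1)
        else:                       # minimum lies in [x1, b]
            a, x1, f1 = x1, x2, f2
            x2 = a + (F[k - 1] / F[k]) * (b - a)
            f2 = f(x2)
    return a, b                     # final interval of uncertainty

g = lambda x: 0.65 - 0.75 / (1 + x**2) - 0.65 * x * math.atan(1 / x)
# the search only evaluates interior points, never x = 0 itself
lo, hi = fibonacci_search(g, 0, 3, 6)
print(lo, hi)
```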
Fibonacci Method
Seatwork:

Minimize 𝑓(𝑥) = 𝑥⁵ − 5𝑥³ − 20𝑥 + 5 in the
interval [0, 5] by the Fibonacci method using
𝑛 = 8.
Golden Section Method
§ The golden section method is the same as the
Fibonacci method except that in the Fibonacci
method the total number of experiments to be
conducted has to be specified before beginning
the calculation, whereas this is not required in
the golden section method
§ In this method, we start with the
assumption that we are going to conduct a
large number of experiments
Golden Section Method
Example:

Minimize 𝑓(𝑥) = 0.65 − 0.75/(1 + 𝑥²) − 0.65𝑥 tan⁻¹(1/𝑥)
in the interval [0, 3] by the Golden Section
method using 𝑛 = 6.
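A minimal Python sketch of the golden section search: the same elimination pattern as the Fibonacci search, but with the fixed ratio 0.618… at every step (the function name is mine):

```python
import math

TAU = (math.sqrt(5) - 1) / 2        # golden ratio, about 0.618

def golden_section_search(f, a, b, n):
    """Minimize a unimodal f on [a, b] with n function evaluations,
    shrinking the interval by the factor TAU at every step."""
    x1 = b - TAU * (b - a)
    x2 = a + TAU * (b - a)
    f1, f2 = f(x1), f(x2)
    for _ in range(n - 2):          # two evaluations already done
        if f1 < f2:                 # minimum lies in [a, x2]
            b, x2, f2 = x2, x1, f1
            x1 = b - TAU * (b - a)
            f1 = f(x1)
        else:                       # minimum lies in [x1, b]
            a, x1, f1 = x1, x2, f2
            x2 = a + TAU * (b - a)
            f2 = f(x2)
    return a, b

g = lambda x: 0.65 - 0.75 / (1 + x**2) - 0.65 * x * math.atan(1 / x)
lo, hi = golden_section_search(g, 0, 3, 6)   # never evaluates at x = 0
print(lo, hi)
```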
Golden Section Method
Seatwork:

Minimize 𝑓(𝑥) = 𝑥⁵ − 5𝑥³ − 20𝑥 + 5 in the
interval [0, 5] by the Golden Section method
using 𝑛 = 5.
Comparison of the Elimination Methods

§ The efficiency of an elimination method can
be measured in terms of the ratio of the final
to the initial interval of uncertainty.
Final Intervals of Uncertainty


Number of Experiments for a Specified


Accuracy
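The standard final-to-initial interval ratios 𝐿ₙ/𝐿₀ after 𝑛 experiments can be compared numerically. A Python sketch using the textbook ratios (the dichotomous search's small offset δ is neglected here):

```python
def fib(n):
    # Fibonacci numbers with F_0 = F_1 = 1
    a, b = 1, 1
    for _ in range(n):
        a, b = b, a + b
    return a

# Ln / L0 after n experiments for each elimination method
ratios = {
    "exhaustive search":  lambda n: 2 / (n + 1),
    "dichotomous search": lambda n: 0.5 ** (n / 2),
    "golden section":     lambda n: 0.618 ** (n - 1),
    "fibonacci":          lambda n: 1 / fib(n),
}
for name, r in sorted(ratios.items(), key=lambda kv: kv[1](10)):
    print(f"{name:20s} L10/L0 = {r(10):.5f}")
```

For the same number of experiments, the Fibonacci method gives the smallest final interval, with the golden section method a close second.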
Remarks for Direct Methods
NEWTON METHOD
§ It was originally developed by Newton for
solving nonlinear equations and later
refined by Raphson; hence the method is
also known as the Newton-Raphson method
§ The method requires both the first- and
second-order derivatives
§ If the second derivative is nonzero, it has a
powerful convergence property, known as
quadratic convergence
§ If the starting point for the iterative process
is not close to the true solution, the process
might diverge
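A minimal Python sketch of the iteration 𝑥ᵢ₊₁ = 𝑥ᵢ − 𝑓′(𝑥ᵢ)/𝑓″(𝑥ᵢ) on the assignment function (the starting point 2.5 is my choice; note that from the assignment's 𝑥₁ = 1.0 the very first step lands exactly on 𝑥 = −2, where 𝑓′ = 0 but 𝑓″ = −100 < 0, a local maximum, which illustrates the starting-point caveat above):

```python
def newton_minimize(df, d2f, x, eps=0.01, max_iter=50):
    """Newton-Raphson on f'(x) = 0: x <- x - f'(x)/f''(x).
    Quadratic convergence near the solution, but it can diverge,
    or land on a maximum, from a poor starting point."""
    for _ in range(max_iter):
        if abs(df(x)) < eps:
            return x
        x = x - df(x) / d2f(x)
    return x

df  = lambda x: 5 * x**4 - 15 * x**2 - 20    # f(x) = x**5 - 5x**3 - 20x + 5
d2f = lambda x: 20 * x**3 - 30 * x

x_min = newton_minimize(df, d2f, 2.5)
print(round(x_min, 3), d2f(x_min) > 0)   # 2.0 True  (f'' > 0: a minimum)
x_bad = newton_minimize(df, d2f, 1.0)
print(round(x_bad, 3), d2f(x_bad) > 0)   # -2.0 False (stationary, but a maximum)
```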
QUASI-NEWTON METHOD
§ Central difference formulas are used to
approximate the first and second order
derivatives; however, forward or backward
difference formulas can also be used for
this purpose
§ The iterative process requires the evaluation
of the function at the points 𝑥ᵢ + ∆𝑥 and 𝑥ᵢ −
∆𝑥 in addition to 𝑥ᵢ
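A Python sketch using the central difference formulas (the starting point 2.5 is my choice; as with the Newton method, the assignment's 𝑥₁ = 1.0 jumps straight to the maximizer 𝑥 = −2):

```python
def quasi_newton_minimize(f, x, dx=0.01, eps=0.01, max_iter=100):
    """Newton iteration with derivative-free updates:
      f'(x)  ~ (f(x + dx) - f(x - dx)) / (2 dx)
      f''(x) ~ (f(x + dx) - 2 f(x) + f(x - dx)) / dx**2
    Each step needs f at x, x + dx and x - dx only."""
    for _ in range(max_iter):
        fp = (f(x + dx) - f(x - dx)) / (2 * dx)
        if abs(fp) < eps:
            return x
        fpp = (f(x + dx) - 2 * f(x) + f(x - dx)) / dx**2
        x = x - fp / fpp
    return x

f = lambda x: x**5 - 5 * x**3 - 20 * x + 5
print(round(quasi_newton_minimize(f, 2.5), 3))   # 2.0
```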
SECANT METHOD
§ It is equivalent to assuming a linear equation
for 𝑓′(𝑥), which implies that 𝑓(𝑥) is
approximated by a quadratic equation
§ In some cases 𝑓′(𝑥) varies very slowly with
𝑥; this situation can be identified by noticing
that a point remains unaltered for several
consecutive refits. It is then better to take the
next value of 𝑥ᵢ₊₁ as (𝐴 + 𝐵)/2, the midpoint
of the current bracket, to improve
convergence
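The textbook version maintains a bracket [𝐴, 𝐵] on the root of 𝑓′; the plain two-point secant iteration below is a simplified sketch of the same idea (function name and starting points are my own):

```python
def secant_minimize(df, x0, x1, eps=0.01, max_iter=100):
    """Secant iteration on f'(x) = 0: f' is fitted by the straight
    line through the last two iterates, which is the same as
    approximating f itself by a quadratic."""
    f0, f1 = df(x0), df(x1)
    for _ in range(max_iter):
        if abs(f1) < eps:
            return x1
        x0, x1 = x1, x1 - f1 * (x1 - x0) / (f1 - f0)
        f0, f1 = f1, df(x1)
    return x1

df = lambda x: 5 * x**4 - 15 * x**2 - 20    # f(x) = x**5 - 5x**3 - 20x + 5
print(round(secant_minimize(df, 1.9, 2.1), 3))   # 2.0
```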
Direct Root Methods
Assignment: 1 whole YP

Minimize 𝑓(𝑥) = 𝑥⁵ − 5𝑥³ − 20𝑥 + 5 using

a. Newton Method: 𝑥₁ = 1.0; 𝜀 = 0.01
b. Quasi-Newton Method: 𝑥₁ = 1.0; ∆𝑥 = 0.01; 𝜀 = 0.01
c. Secant Method: 𝑥₁ = 1.0; 𝑡₀ = 0.1; 𝜀 = 0.01
