
One-Dimensional Search

Dr. Senthilmurugan S.,
Chemical Engineering Department,
IIT Guwahati
One-dimensional search: numerical methods

Solving a one-dimensional optimization problem on a computer is always possible. But how efficiently?

Q: How can a general one-dimensional problem be solved efficiently?

A: In general, it is not easy, but there are a few workarounds.

Solution methods and algorithms:

• Derivative-free methods (f′ is hard to find, or f′(x) = 0 is hard to solve):
  interval partitioning methods (bracketing, elimination) and polynomial
  approximation methods (quadratic interpolation).
• Gradient-based methods (work better if f′(x) = 0 is well posed):
  Newton's method and the secant (one-dimensional quasi-Newton) method.



Know About Unidimensional Search

(1) Knowing a search direction, we want to minimize the function value in that direction by numerical methods.

(2) Search Methods in General
2.1. Non-sequential – simultaneous evaluation of f at n points – not an efficient method (unless on a parallel computer).
2.2. Sequential – one evaluation follows the other.
(3) Which type of search is better or best is often problem dependent. Some of the types are:
a. Interval bracketing and polynomial approximation (quadratic interpolation, cubic interpolation, etc.).
b. Newton, quasi-Newton, and secant methods.
c. Region-elimination methods (Fibonacci, golden section, etc.).
d. Random search.

(4) Most methods assume
(a) a unimodal function, (b) that the minimum is bracketed at the start, and (c) that you start in a direction that reduces f.



1. Interval bracketing and elimination

Idea: Iteratively reduce the interval that contains the minimum.
Input: Initial interval [a, b], step size h, tolerance ε, counter k = 1.
Procedure:
Step 1: Find x1 and x2:
    x1 = b − h(b − a)
    x2 = a + h(b − a)
Step 2: Find the reduced interval:
    if f(x1) ≥ f(x2), set a = x1
    if f(x1) < f(x2), set b = x2
Step 3: Check convergence:
    if b − a > ε, go to Step 1
    if b − a ≤ ε, quit
(If h = 0.618, the method is called golden-section search.)
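A minimal Python sketch of this procedure with h = 0.618 (the helper name golden_section and the example function in the usage line are illustrative, not from the slides):

def golden_section(f, a, b, eps=1e-6, h=0.618):
    # Shrink [a, b] until it is shorter than eps, then return the midpoint.
    while b - a > eps:
        x1 = b - h * (b - a)   # interior point nearer a
        x2 = a + h * (b - a)   # interior point nearer b
        if f(x1) >= f(x2):
            a = x1             # minimum lies in [x1, b]
        else:
            b = x2             # minimum lies in [a, x2]
    return 0.5 * (a + b)

# Usage: the minimum of x^2 - 2x on [0, 3] is at x = 1.
xmin = golden_section(lambda x: x**2 - 2*x, 0.0, 3.0)

(For clarity this sketch re-evaluates f at both interior points each pass; a textbook golden-section implementation reuses one of the two evaluations per iteration.)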



2. Quadratic Interpolation

Approximate f(x) by a quadratic q(x) = a + bx + cx², fitted through 3 points.



Quadratic Interpolation

Fitting q(x) = a + bx + cx² through the three points (x1, f(x1)), (x2, f(x2)), (x3, f(x3)) and solving for the coefficients by Cramer's rule gives

$$b = \frac{\begin{vmatrix} 1 & f(x_1) & x_1^2 \\ 1 & f(x_2) & x_2^2 \\ 1 & f(x_3) & x_3^2 \end{vmatrix}}{\begin{vmatrix} 1 & x_1 & x_1^2 \\ 1 & x_2 & x_2^2 \\ 1 & x_3 & x_3^2 \end{vmatrix}}
\qquad
c = \frac{\begin{vmatrix} 1 & x_1 & f(x_1) \\ 1 & x_2 & f(x_2) \\ 1 & x_3 & f(x_3) \end{vmatrix}}{\begin{vmatrix} 1 & x_1 & x_1^2 \\ 1 & x_2 & x_2^2 \\ 1 & x_3 & x_3^2 \end{vmatrix}}$$

(or use Gaussian elimination)



Quadratic Interpolation

The minimum of the fitted quadratic gives the next estimate of the minimizer:

$$x' = x^* = -\frac{b}{2c}$$
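A minimal Python sketch of one quadratic-interpolation step (the helper name quadratic_step and the sample points in the usage line are illustrative): fit the quadratic by solving the 3×3 linear system – the Gaussian-elimination route noted below – and return x* = −b/(2c).

import numpy as np

def quadratic_step(f, x1, x2, x3):
    # Solve the 3x3 system for the coefficients of q(x) = a + b*x + c*x^2.
    A = np.array([[1.0, x1, x1**2],
                  [1.0, x2, x2**2],
                  [1.0, x3, x3**2]])
    a, b, c = np.linalg.solve(A, np.array([f(x1), f(x2), f(x3)]))
    return -b / (2.0 * c)      # stationary point of the fitted quadratic

# Usage (illustrative): one step for f(x) = x^4 - x^2 + 1 from the points -1.7, -0.1, 1.5.
x_star = quadratic_step(lambda x: x**4 - x**2 + 1, -1.7, -0.1, 1.5)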



How to maintain a bracket on the minimum in quadratic interpolation

After each new estimate x*, keep the point with the lowest function value together with its nearest neighbour on either side, so that the three retained points still bracket the minimum.



Example: Application of Quadratic Interpolation

• The function to be minimized is

(Figure: plot of the function, with the values −1.7, −0.1, and 1.5 marked on the x-axis.)



1. Newton's Method

Newton's method for solving an equation f(x) = 0 estimates the root iteratively by

$$x^{(k+1)} = x^{(k)} - \frac{f(x^{(k)})}{f'(x^{(k)})}$$

For the application to optimization (minimization): the necessary condition for f(x) to have a local extremum is f′(x) = 0. Applying Newton's method to this equation gives

$$x^{(k+1)} = x^{(k)} - \frac{f'(x^{(k)})}{f''(x^{(k)})}$$
Examples

Minimize
$$f(x) = a_0 + a_1 x + a_2 x^2$$

Minimize
$$f(x) = x^4 - x^2 + 1$$
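As a worked check of the Newton formula on these two examples: for the quadratic, f′(x) = a_1 + 2a_2 x and f″(x) = 2a_2, so a single step from any x^(0) gives

$$x^{(1)} = x^{(0)} - \frac{a_1 + 2a_2 x^{(0)}}{2a_2} = -\frac{a_1}{2a_2},$$

the exact minimizer (for a_2 > 0). For f(x) = x^4 − x^2 + 1, f′(x) = 4x^3 − 2x and f″(x) = 12x^2 − 2, so

$$x^{(k+1)} = x^{(k)} - \frac{4\,(x^{(k)})^3 - 2x^{(k)}}{12\,(x^{(k)})^2 - 2},$$

which converges to the local minimizer x = 1/\sqrt{2} ≈ 0.707 when started from, e.g., x^(0) = 1.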
Advantages of Newton's Method

(1) Locally quadratically convergent (as long as f″(x) is positive – for a minimum).
(2) For a quadratic function, it reaches the minimum in one step.

Disadvantages

(3) Need to calculate both f′(x) and f″(x).
(4) If f″(x) → 0, the method converges slowly.
(5) If the function has multiple extrema, it may not converge to the global optimum.



2. Finite-Difference Newton Method

Replace the derivatives with finite differences:

$$x^{(k+1)} = x^{(k)} - \frac{\dfrac{f(x^{(k)}+h) - f(x^{(k)}-h)}{2h}}{\dfrac{f(x^{(k)}+h) - 2f(x^{(k)}) + f(x^{(k)}-h)}{h^2}}$$

Disadvantage: now needs additional function evaluations (3 here vs. 2 derivative evaluations for Newton).
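A minimal Python sketch of the finite-difference Newton iteration (the helper name fd_newton_minimize and the stopping rule are illustrative):

def fd_newton_minimize(f, x0, h=0.1, tol=1e-7, max_iter=50):
    x = x0
    for _ in range(max_iter):
        d1 = (f(x + h) - f(x - h)) / (2 * h)            # central difference ~ f'(x)
        d2 = (f(x + h) - 2 * f(x) + f(x - h)) / h**2    # central difference ~ f''(x)
        step = d1 / d2
        x -= step
        if abs(step) < tol:
            break
    return x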



3. Secant (Quasi-Newton) Method

Newton's step can be viewed as solving the linearized optimality condition

$$f'(x^{(k)}) + f''(x^{(k)})\,(x - x^{(k)}) = 0 \qquad (A)$$

The analogous secant equation replaces f″(x^(k)) by a slope m:

$$f'(x^{(k)}) + m\,(x - x^{(k)}) = 0 \qquad (B)$$

The secant method approximates f′(x) by a straight line between two points; equivalently, f″(x) is replaced by the slope m of that line.



Start the secant method by using two points spanning the minimum, at which the first derivatives have opposite signs.

For the next stage, retain either x^(q) or x^(p) so that the pair of derivatives still have opposite signs.
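A minimal Python sketch of this bracketing secant scheme (the helper name secant_minimize and the usage line are illustrative; df is the first derivative of the objective):

def secant_minimize(df, xp, xq, tol=1e-7, max_iter=100):
    # df(xp) and df(xq) are assumed to have opposite signs at the start.
    for _ in range(max_iter):
        m = (df(xq) - df(xp)) / (xq - xp)   # secant slope approximating f''
        x_new = xq - df(xq) / m             # solve f'(xq) + m (x - xq) = 0
        if abs(x_new - xq) < tol:
            return x_new
        if df(x_new) * df(xq) < 0:
            xp = xq                         # keep the pair with opposite-sign derivatives
        xq = x_new
    return xq

# Usage: f(x) = x^2 - 2x has f'(x) = 2x - 2; starting from xp = -3, xq = 3 gives x = 1.
xmin = secant_minimize(lambda x: 2*x - 2, xp=-3.0, xq=3.0)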



Effectiveness of the algorithm: Order of Convergence

Convergence speed can be expressed in various ways. A common measure is the order p, defined by

$$\frac{|x^{(k+1)} - x^*|}{|x^{(k)} - x^*|^{\,p}} \le c$$

• Linear convergence: p = 1 (with c < 1)
• Quadratic convergence: p = 2 (Newton's method)
• p ≈ 1.6 (secant method)
Example 5.2

• The function to be minimized is
• For a starting point of x = 3, minimize f(x) until the change in x is less than 10⁻⁷.
• Use h = 0.1 for the finite-difference method.
• For the quasi-Newton method, use x^(q) = 3 and x^(p) = −3.



Newton method:

$$x^{(k+1)} = x^{(k)} - \frac{f'(x^{(k)})}{f''(x^{(k)})}$$



Quasi-Newton method:

$$x^{(k+1)} = x^{(k)} - \frac{f'(x^{(k)})}{\dfrac{f'(x^{(q)}) - f'(x^{(p)})}{x^{(q)} - x^{(p)}}}$$



For the case of a multimodal function

• Extra logic must be added to the unidimensional search algorithm to ensure that the step size is adjusted to the neighbourhood of the local optimum actually sought.



How a one-dimensional search is applied in a multidimensional problem

$$x^{\text{new}} = x^{\text{old}} + \alpha\, s$$

s is the search direction (a vector); α is a scalar value denoting the distance moved along the search direction. Component-wise,

$$x_1^{\text{new}} = x_1^{\text{old}} + \alpha\, s_1, \qquad x_2^{\text{new}} = x_2^{\text{old}} + \alpha\, s_2, \ \ldots$$

The step length α along s is found by a one-dimensional search, using any of the updates above:

Newton:
$$x^{(k+1)} = x^{(k)} - \frac{f'(x^{(k)})}{f''(x^{(k)})}$$

Finite-difference Newton:
$$x^{(k+1)} = x^{(k)} - \frac{\dfrac{f(x^{(k)}+h) - f(x^{(k)}-h)}{2h}}{\dfrac{f(x^{(k)}+h) - 2f(x^{(k)}) + f(x^{(k)}-h)}{h^2}}$$

Quasi-Newton (secant):
$$x^{(k+1)} = x^{(k)} - \frac{f'(x^{(k)})}{\dfrac{f'(x^{(q)}) - f'(x^{(p)})}{x^{(q)} - x^{(p)}}}$$
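A minimal Python sketch of a line search along a direction s, applying the finite-difference Newton iteration to the scalar function g(α) = f(x_old + α s) (the helper names and the example objective are illustrative):

import numpy as np

def line_search(f, x_old, s, alpha0=0.0, h=1e-4, tol=1e-7, max_iter=50):
    g = lambda alpha: f(x_old + alpha * s)   # objective restricted to the search direction
    alpha = alpha0
    for _ in range(max_iter):
        d1 = (g(alpha + h) - g(alpha - h)) / (2 * h)               # ~ g'(alpha)
        d2 = (g(alpha + h) - 2 * g(alpha) + g(alpha - h)) / h**2   # ~ g''(alpha)
        step = d1 / d2
        alpha -= step
        if abs(step) < tol:
            break
    return x_old + alpha * s

# Usage: one step on f(x) = x1^2 + 4*x2^2 from (2, 1) along the negative gradient.
f = lambda x: x[0]**2 + 4.0 * x[1]**2
x_new = line_search(f, np.array([2.0, 1.0]), np.array([-4.0, -8.0]))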



