
Abstract
Solving unimodal functions has its own importance in solving multimodal functions, which are the centerpieces of machine learning [1]. In an attempt to understand the underlying workings of several optimization techniques used for solving one-dimensional problems, we have implemented the following algorithms: exhaustive search, interval halving, Fibonacci search, and golden section search. We then compare the algorithms from various angles, with the main goal of recognizing the key aspects that drive these algorithms toward the maximum or minimum in the first place. From there, we focus on making a fair comparison between these algorithms across several scrutinizing features, and we draw a number of conclusions.

All the methods discussed in this paper are applicable only to unimodal functions, i.e., functions with a single optimum that increase monotonically on one side of it and decrease monotonically on the other, together with an initial interval of uncertainty in which the optimum is known to lie. The methods make use of the region elimination principle: for a unimodal function, if $f(x_1) < f(x_2)$ at two points $x_1 < x_2$ of the interval, the minimum cannot lie beyond $x_2$, so that part of the interval can be eliminated.
The Exhaustive Search
Also known as brute-force search or simultaneous search, this optimization technique explores the search space at a steady pace, giving every member of the search space an equal chance before arriving at a conclusion. The key idea is that no member of the search space dominates another until the algorithm has finished its exploration. The algorithm does not give the exact minimum or maximum; however, it does provide the region in which the optimum is highly likely to lie.

Exhaustive search can be used to solve problems where the interval in which the optimum is known to lie is finite. The method consists of evaluating the unimodal objective function at a predetermined number of equally spaced points in the interval and reducing the interval of uncertainty using unimodality.

If the function is evaluated at $n$ equally spaced points in the original interval of uncertainty of length $L_0 = x_{\text{final}} - x_{\text{initial}}$, and if the optimum turns out to be at, say, $X_j$, then the next interval of uncertainty is given by

$$L_n = X_{j+1} - X_{j-1} = \frac{2}{n+1} L_0$$

For example, evaluating at $n = 9$ points reduces the interval of uncertainty to $L_0 / 5$.

The following implementation and visuals correspond to the function below:

$$f(x) = \frac{\left(\left(\frac{x}{2}\right)^2 + 3\right)^2}{3456}$$
Algorithm Implementation:
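The Python listing itself did not survive extraction; the following is a minimal sketch of exhaustive search as described above, written against the reconstructed test function. The names and the choice of n = 1000 sample points are illustrative assumptions, not the original code.

def f(x):
    # Reconstructed unimodal test function (an assumption; see above)
    return ((x / 2) ** 2 + 3) ** 2 / 3456

def exhaustive_search(f, a, b, n=1000):
    # Evaluate f at n equally spaced interior points (n + 2 points counting
    # the endpoints) and return the bracket (x_{j-1}, x_{j+1}) around the
    # smallest sampled value; its length is 2 * L0 / (n + 1).
    step = (b - a) / (n + 1)
    xs = [a + i * step for i in range(n + 2)]
    values = [f(x) for x in xs]
    j = values.index(min(values))               # index of the best sample
    return xs[max(j - 1, 0)], xs[min(j + 1, n + 1)]

lo, hi = exhaustive_search(f, -1500, 1500)
print(f"minimum lies in ({lo:.4f}, {hi:.4f})")  # brackets x = 0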

Visualization:
Interval Halving Method
Also known as the bisection method or binary chopping [2], this optimization technique works by deleting half of the interval of uncertainty at every iteration. The function value at the midpoint is evaluated and, using the region elimination principle, the search space is reduced by half.

The procedure divides the search space into four equal parts, evaluates the function at the three interior points, and, based on these values, reduces the interval by half.

After applying the region elimination principle, subsequent intervals of uncertainty are given by

$$L_n = \left(\frac{1}{2}\right)^{(n-1)/2} L_0$$

where $n$ is the number of function evaluations.

The following implementation and visuals correspond to the function below:

$$f(x) = \frac{\left(\left(\frac{x}{2}\right)^2 + 3\right)^2}{3456}$$
Algorithm Implementation:
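Again the original listing was lost to extraction; below is a minimal sketch of interval halving under the same assumptions (reconstructed test function, illustrative tolerance). The midpoint evaluation is carried over between iterations, so each iteration costs two new function evaluations.

def interval_halving(f, a, b, tol=1e-4):
    # Divide [a, b] into four equal parts, compare f at the midpoint x0 and
    # the quarter points x1, x2, and discard half the interval each pass.
    x0 = (a + b) / 2
    f0 = f(x0)                        # midpoint value, reused every iteration
    while (b - a) > tol:
        L = b - a
        x1, x2 = a + L / 4, b - L / 4
        f1, f2 = f(x1), f(x2)         # the two new evaluations
        if f1 < f0:                   # minimum lies in the left half [a, x0]
            b, x0, f0 = x0, x1, f1
        elif f2 < f0:                 # minimum lies in the right half [x0, b]
            a, x0, f0 = x0, x2, f2
        else:                         # minimum lies in the middle half [x1, x2]
            a, b = x1, x2             # x0 is still the midpoint here
    return a, b

f = lambda x: ((x / 2) ** 2 + 3) ** 2 / 3456    # reconstructed test function
print(interval_halving(f, -1500, 1500))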
Visualization:
Fibonacci Search
This method can be used to find the optimum of a function that does not have to be continuous. It differs from the other three methods discussed in this paper in that the interval is not reduced by the same proportion at every iteration. The method makes use of Fibonacci numbers for the interval reduction, hence its name.

We must define the total number of iterations $n$ before initiating the algorithm, and then apply the following formula:

$$L_k = \frac{F_{n-k}}{F_{n-k+1}} L_0$$

where the successive intervals of uncertainty satisfy

$$L_k = L_{k+1} + L_{k+2}, \qquad k = 1, 2, \ldots, n-3$$

The following implementation and visuals correspond to the function below:

$$f(x) = \frac{\left(\left(\frac{x}{2}\right)^2 + 3\right)^2}{3456}$$
Algorithm Implementation:
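A minimal sketch of Fibonacci search under the same assumptions; the iteration count n = 30 and the names are illustrative. As the text notes, n must be fixed before the search begins, and only one new evaluation is needed per iteration after the first.

def fibonacci_search(f, a, b, n=30):
    # Precompute F_0 .. F_{n+1} with F_0 = F_1 = 1 (assumes n >= 2).
    fib = [1, 1]
    while len(fib) <= n + 1:
        fib.append(fib[-1] + fib[-2])
    # Two interior points placed symmetrically by Fibonacci ratios.
    x1 = a + (fib[n - 1] / fib[n + 1]) * (b - a)
    x2 = a + (fib[n] / fib[n + 1]) * (b - a)
    f1, f2 = f(x1), f(x2)
    for k in range(n - 1):
        if f1 > f2:                   # minimum lies in [x1, b]
            a = x1
            x1, f1 = x2, f2           # reuse the surviving evaluation
            x2 = a + (fib[n - k - 1] / fib[n - k]) * (b - a)
            f2 = f(x2)
        else:                         # minimum lies in [a, x2]
            b = x2
            x2, f2 = x1, f1
            x1 = a + (fib[n - k - 2] / fib[n - k]) * (b - a)
            f1 = f(x1)
    return a, b                       # last step places both points at the midpoint

f = lambda x: ((x / 2) ** 2 + 3) ** 2 / 3456    # reconstructed test function
print(fibonacci_search(f, -1500, 1500))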
Golden Search Method
Also known as the Golden Ratio or Golden Section method, this technique explores the interval using the golden ratio: the function is evaluated at distances that form the golden ratio, hence the name.

The inverse of the golden ratio is used in calculating the reduction of the interval of uncertainty, and we do not need to predefine the number of iterations before executing the algorithm. The relation to the Fibonacci method can be seen through the following equation:

$$\frac{F_N}{F_{N-1}} = 1 + \frac{F_{N-2}}{F_{N-1}}$$

which, for large $N$, can be expressed as an approximation of

$$\frac{1}{y} + 1 = y$$

Upon solving this equation (equivalently $y^2 - y - 1 = 0$), we get the positive root $y = \frac{1 + \sqrt{5}}{2} \approx 1.618$, the Golden Ratio.
The following implementation and visuals correspond to the function below:

$$f(x) = \frac{\left(\left(\frac{x}{2}\right)^2 + 3\right)^2}{3456}$$
Algorithm Implementation:
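A minimal sketch of golden-section search under the same assumptions; unlike Fibonacci search, no iteration count is fixed in advance, only a stopping tolerance (the value 1e-4 is illustrative).

def golden_section(f, a, b, tol=1e-4):
    # Interior points sit a fraction 1/phi ~ 0.618 from each end; one of the
    # two evaluations survives the region elimination and is reused.
    inv_phi = (5 ** 0.5 - 1) / 2      # inverse of the Golden Ratio
    x1, x2 = b - inv_phi * (b - a), a + inv_phi * (b - a)
    f1, f2 = f(x1), f(x2)
    while (b - a) > tol:
        if f1 > f2:                   # minimum lies in [x1, b]
            a, x1, f1 = x1, x2, f2
            x2 = a + inv_phi * (b - a)
            f2 = f(x2)
        else:                         # minimum lies in [a, x2]
            b, x2, f2 = x2, x1, f1
            x1 = b - inv_phi * (b - a)
            f1 = f(x1)
    return a, b

f = lambda x: ((x / 2) ** 2 + 3) ** 2 / 3456    # reconstructed test function
print(golden_section(f, -1500, 1500))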
Visualization:
Comparison of Elimination Methods
Several factors were considered in order to make a reasonable and clear comparison
between the methods mentioned in this paper.
The comparison table:

| Feature | Exhaustive Search | Interval Halving | Fibonacci Search | Golden Section |
| --- | --- | --- | --- | --- |
| No. of iterations | | | | |
| No. of function evaluations per iteration | 1 | 2 | 2 in iteration 1; 1 in the rest | 2 in iteration 1; 1 in the rest |
| Time taken | | | | |
| Reduction of search space per iteration | | | | |
| Time complexity | O(n) | O(n) | O(n) | O(n) |
| Space complexity | O(1) | O(1) | O(n) for storing Fibonacci numbers; O(1) for our approach | O(1) |
| No. of iterations required to achieve *** accuracy | | | | |

The programs were implemented in Python and were run over the following unimodal functions:

1. $f_1(x) = \dfrac{\left(\left(\frac{x}{2}\right)^2 + 3\right)^2}{3456}$, $\qquad -1500 < x < 1500$

2. $f_2(x) = e^{\frac{x+42}{34}} + e^{\frac{-x+23}{24}} + 34\,\dfrac{x-56}{31}$, $\qquad -1000 < x < 1000$

3. $f_3(x) = \left(\dfrac{x-3}{43}\right)^4 + \dfrac{27(x-31)}{64}$, $\qquad -100 < x < 100$

4. $f_4(x) = e^{\frac{-x+42}{34}} - e^{\frac{x+23}{24}} + 245\,\dfrac{-(x-56)}{31}$, $\qquad -250 < x < 1000$