
Single-objective optimization algorithms
Analogy

BAJ team: Bob, Anthony, Jennifer

Three friends search a landscape for a buried treasure, guided by GPS
readings: latitude (x), longitude (y), and elevation (z).

Landscape analogy                                Optimization concept
GPS: x, y                                        Variables
GPS: z                                           Objective value
GPS                                              Objective function
All the possible locations (x, y)                Search space
Rain, flood, etc.                                Constraints
Treasure                                         Global optimum
Small coins                                      Local optima
Bob, Anthony, Jennifer                           Search agents
Search strategy (BAJ)                            Optimization algorithm
Days                                             Iterations
The process of finding the best location
to find the treasure                             Optimization
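The analogy can be made concrete with a small sketch. Here the "GPS" is an objective function mapping a location (x, y) to a value z; the Rastrigin function is used as an assumed example landscape, since it has many local minima ("small coins") and one global minimum at the origin ("treasure"):

```python
import math
import random

def elevation(x, y):
    """Objective function (the 'GPS'): maps a location (x, y) to a value z.

    Rastrigin function -- a standard multimodal test landscape with many
    local minima ('small coins') and one global minimum at (0, 0)
    (the 'treasure').
    """
    return (20 + x**2 - 10 * math.cos(2 * math.pi * x)
               + y**2 - 10 * math.cos(2 * math.pi * y))

# Search agents (Bob, Anthony, Jennifer) start at random points in the
# search space: all possible locations with -5 <= x, y <= 5.
random.seed(0)
agents = {name: (random.uniform(-5, 5), random.uniform(-5, 5))
          for name in ("Bob", "Anthony", "Jennifer")}

for name, (x, y) in agents.items():
    print(f"{name} at ({x:+.2f}, {y:+.2f}) reads z = {elevation(x, y):.2f}")
```

Each agent evaluating the objective at its own location is one step of a population-based search; iterating this over several "days" is what the algorithms below formalize.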
Modern vs. Conventional
Deterministic vs. Stochastic
Conventional (deterministic) optimization
algorithms

• Gradient descent algorithm


Gradient descent algorithm

• Efficient for unimodal landscapes
• Not efficient for multimodal landscapes
• Highly depends on the starting point
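The dependence on the starting point can be shown with a short sketch. The 1-D landscape f, the learning rate, and the step count are assumed for illustration; f has a global minimum near x ≈ -1.31 and a local minimum near x ≈ 3.84:

```python
import math

def f(x):
    """Assumed multimodal 1-D landscape: a bowl with a sine ripple."""
    return x**2 + 10 * math.sin(x)

def gradient_descent(x0, lr=0.01, steps=2000):
    """Plain gradient descent: deterministic given the start point x0."""
    x = x0
    for _ in range(steps):
        grad = 2 * x + 10 * math.cos(x)  # analytic derivative of f
        x -= lr * grad
    return x

# Same algorithm, two starting points, two different minima:
for x0 in (-2.0, 5.0):
    x = gradient_descent(x0)
    print(f"start {x0:+.1f} -> x* = {x:+.3f}, f(x*) = {f(x):+.3f}")
```

Starting at -2.0, the descent reaches the global minimum; starting at 5.0, it settles in the local minimum at x ≈ 3.84 and stays there, since the gradient gives no way to climb out of the basin.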
Modern (stochastic) optimization algorithms
Deterministic algorithms

• Gradient descent algorithm
• No random component: the same input gives the same output in every run

Stochastic algorithms

• Modern algorithms
• Contain random components: different runs can give different outputs
Deterministic

Advantages:
• Reliable in finding the same solution
• Require fewer function evaluations
• Fast convergence
Drawbacks:
• Local optima stagnation
• Low chance of finding the global optimum
• High dependency on the initial solution
• Mostly need the gradient

Stochastic

Advantages:
• Avoid local solutions
• Higher chance of finding the global optimum
• Low dependency on the initial solution
• Mostly do not need the gradient
Drawbacks:
• Slow convergence speed
• Different answers in each run
• High local optima avoidance
• Gradient-free mechanism
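The stochastic side of the comparison can be sketched with a minimal random search over the same assumed landscape f as before. It is gradient-free, each run may return a slightly different answer, and sampling the whole interval gives it a high chance of landing in the global basin rather than a local one:

```python
import math
import random

def f(x):
    """Assumed multimodal 1-D landscape (global minimum near x = -1.31)."""
    return x**2 + 10 * math.sin(x)

def random_search(steps=5000, lo=-10.0, hi=10.0, rng=random):
    """Minimal stochastic optimizer: pure random sampling, no gradient.

    Keeps the best point seen so far; different runs explore different
    points, so the returned solution varies slightly from run to run.
    """
    best_x = rng.uniform(lo, hi)
    best_f = f(best_x)
    for _ in range(steps):
        x = rng.uniform(lo, hi)
        fx = f(x)
        if fx < best_f:
            best_x, best_f = x, fx
    return best_x, best_f

for run in range(3):
    x, fx = random_search()
    print(f"run {run}: x* = {x:+.3f}, f(x*) = {fx:+.3f}")
```

This is the simplest possible stochastic method; modern algorithms replace blind sampling with guided randomness (populations, mutation, attraction to good solutions), but the trade-off in the table above is the same: no gradient needed and good local optima avoidance, at the cost of more function evaluations and run-to-run variation.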
