
Differential Evolution Optimization Lecture

DIFFERENTIAL EVOLUTION

DE



Outline

Rationale
Historical notes
DE basics
Implementation (Matlab code)
Improvement strategies
Constraint handling
Conclusions



Rationale
Why are stochastic methods needed?

Mixed real/integer parameter problems (e.g. number of poles/slots)


Constraints (explicit/implicit)
Noisy objective functions (due to numerically evaluated objectives)
Multiminima
Multiobjective
...

Drawback: Very expensive


Target: 1 day run. Parallelism?

| N. evaluations | T [s] | Solver |
| 500.000 | 0.15 | Simple analytical |
| 50.000 | 1.5 | Complex analytical |
| 5.000 | 15 | 2d FEM |



Historical notes
Differential Evolution: what and when
R. Storn and K. Price, "Differential Evolution - A Simple and Efficient Adaptive Scheme for Global Optimization over Continuous Spaces," Tech. Report, International Computer Science Institute (Berkeley), 1995.
R. Storn and K. Price, "Differential Evolution - A Simple and Efficient Heuristic for Global Optimization over Continuous Spaces," Journal of Global Optimization, vol. 11, Dec. 1997, pp. 341-359.



DE Basics
Algorithm: Typical evolutionary scheme

Mutation: Expands the search space


Recombination: Reuses previously successful individuals
Selection (Explicit): Mimics survival-of-the-fittest



DE Basics
Mutation: add difference vector(s) to a base individual in order to explore the search space

DE/rand/1:           vi = xr1 + F1 (xr2 - xr3)
DE/best/1:           vi = xbest + F1 (xr2 - xr3)
DE/rand-to-best/1:   vi = xr1 + F1 (xr2 - xr3) + F2 (xbest - xr1)
DE/curr.-to-best/1:  vi = xi + F1 (xr2 - xr3) + F2 (xbest - xi)
DE/rand/2:           vi = xr1 + F1 (xr2 - xr3 + xr4 - xr5)
DE/best/2:           vi = xbest + F1 (xr2 - xr3 + xr4 - xr5)

vi is the donor; F1 and F2 are mutation factors. (Multiobjective case: discussed later.)
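All the variants above share one mechanism: add scaled difference vectors to a base individual. As a minimal sketch (in Python rather than the lecture's Matlab; the names `mutate_rand_1`, `pop`, and `rng` are illustrative, not from the lecture code), DE/rand/1 can be written as:

```python
import numpy as np

def mutate_rand_1(pop, F, rng):
    """DE/rand/1: v_i = x_r1 + F * (x_r2 - x_r3), with r1, r2, r3 distinct from i."""
    N, D = pop.shape
    donors = np.empty_like(pop)
    for i in range(N):
        # pick three mutually distinct indices, all different from i
        choices = [j for j in range(N) if j != i]
        r1, r2, r3 = rng.choice(choices, size=3, replace=False)
        donors[i] = pop[r1] + F * (pop[r2] - pop[r3])
    return donors

rng = np.random.default_rng(0)
pop = rng.uniform(-1.9, 1.9, size=(20, 2))   # random initial population
donors = mutate_rand_1(pop, F=0.8, rng=rng)
```

The other strategies differ only in the base vector (`xbest`, `xi`) and in the number of difference vectors added.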



DE Basics
Recombination: mix successful solutions from the previous generation with current donors

ui,j = vi,j  if randi,j <= CR or j = irand  (donor component)
ui,j = xi,j  otherwise                      (target component)

ui is the trial vector and CR is the crossover ratio.
The forced index irand ensures that at least one DOF is changed.
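A vectorized Python sketch of this binomial crossover (names `crossover_bin`, `jrand` are illustrative):

```python
import numpy as np

def crossover_bin(target, donor, CR, rng):
    """Binomial crossover: take the donor component where rand < CR or j == j_rand."""
    N, D = target.shape
    mask = rng.random((N, D)) < CR
    jrand = rng.integers(0, D, size=N)   # one forced index per individual
    mask[np.arange(N), jrand] = True     # guarantees at least one donor DOF
    return np.where(mask, donor, target)

rng = np.random.default_rng(1)
target = np.zeros((20, 2))
donor = np.ones((20, 2))
trial = crossover_bin(target, donor, CR=0.5, rng=rng)
```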



DE Basics
Selection: Greedy scheme is key for fast convergence of DE

xi(k+1) = ui(k)  if f(ui(k)) < f(xi(k))
xi(k+1) = xi(k)  otherwise

(Multiobjective case: discussed later.)
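The greedy rule above, sketched in Python (the name `select_greedy` is illustrative):

```python
import numpy as np

def select_greedy(x, fx, u, fu):
    """Keep trial u_i whenever f(u_i) < f(x_i); otherwise keep the target x_i."""
    better = fu < fx
    x_new = np.where(better[:, None], u, x)
    f_new = np.where(better, fu, fx)
    return x_new, f_new

x = np.array([[0.0], [1.0]]); fx = np.array([5.0, 2.0])
u = np.array([[0.5], [2.0]]); fu = np.array([4.0, 3.0])
x_new, f_new = select_greedy(x, fx, u, fu)   # only the first trial survives
```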

The power of DE was shown at the First International Contest on Evolutionary Optimization in May 1996 (IEEE International Conference on Evolutionary Computation), where DE proved to be the best general-purpose algorithm.



Single objective DE
Implementation

Matlab code

clear all; close all; clc
%Function to be minimized
D=2;
objf=inline('4*x1^2-2.1*x1^4+(x1^6)/3+x1*x2-4*x2^2+4*x2^4','x1','x2');
objf=vectorize(objf);
%Initialization of DE parameters
N=20; %population size (total function evaluations will be itmax*N, must be >=5)
itmax=30;
F=0.8; CR=0.5; %mutation factor and crossover ratio
%Problem bounds
a(1:N,1)=-1.9; b(1:N,1)=1.9; %bounds on variable x1
a(1:N,2)=-1.1; b(1:N,2)=1.1; %bounds on variable x2
d=(b-a);
basemat=repmat(int16(linspace(1,N,N)),N,1); %used later
basej=repmat(int16(linspace(1,D,D)),N,1); %used later
%Random initialization of positions
x=a+d.*rand(N,D);
%Evaluate objective for all particles
fx=objf(x(:,1),x(:,2));
%Find best
[fxbest,ixbest]=min(fx);
xbest=x(ixbest,1:D);
%Iterate
for it=1:itmax
    permat=bsxfun(@(x,y) x(randperm(y(1))),basemat,N(ones(N,1)));
    %Generate donors by mutation
    v(1:N,1:D)=repmat(xbest,N,1)+F*(x(permat(1:N,1),1:D)-x(permat(1:N,2),1:D));
    %Perform recombination
    r=repmat(randi([1 D],N,1),1,D);
    muv=((rand(N,D)<CR)+(basej==r))~=0;
    mux=1-muv;
    u(1:N,1:D)=x(1:N,1:D).*mux(1:N,1:D)+v(1:N,1:D).*muv(1:N,1:D);
    %Greedy selection
    fu=objf(u(:,1),u(:,2));
    idx=fu<fx;
    fx(idx)=fu(idx);
    x(idx,1:D)=u(idx,1:D);
    %Find best
    [fxbest,ixbest]=min(fx);
    xbest=x(ixbest,1:D);
end %end loop on iterations
[xbest,fxbest]

Please note that this code actually works!



Single objective DE
Influence of parameters

So far so good, but what about:
Mutation strategy?
Mutation factor?
Crossover ratio?
What impact do they have?
Can general rules be found, or is fiddling necessary?
Can other ideas be incorporated?



Single objective DE
Influence of parameters (mutation strategy)

DE/rand/1/bin

f(x) = sum_{i=1}^{N-1} [ (1 - xi)^2 + 100 (xi+1 - xi^2)^2 ]   (generalized Rosenbrock function)

vi = xr1 + F1 (xr2 - xr3)
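The benchmark above is the generalized Rosenbrock function, which in Python reads (the name `rosenbrock` is illustrative):

```python
import numpy as np

def rosenbrock(x):
    """sum_{i=1}^{N-1} (1 - x_i)^2 + 100 (x_{i+1} - x_i^2)^2, minimum 0 at (1,...,1)."""
    x = np.asarray(x, dtype=float)
    return np.sum((1.0 - x[:-1]) ** 2 + 100.0 * (x[1:] - x[:-1] ** 2) ** 2)
```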
[Figure: convergence plots]
Low-dimensional: 5000 eval.
High-dimensional: curse of dimensionality (1M eval. to reach convergence).



Single objective DE
Influence of parameters (mutation strategy)

DE/rand/2/bin
Add two difference vectors instead of one

vi = xr1 + F1 (xr2 - xr3 + xr4 - xr5)
[Figure: convergence plots]
Not as good as DE/rand/1/bin.


Single objective DE
Influence of parameters (mutation strategy)

DE/best/1/bin
Always start from best
vi = xbest + F1 (xr2 - xr3)
[Figure: convergence plots]
Much faster convergence than DE/rand/1/bin.
Stagnation?



Single objective DE
Influence of parameters (mutation strategy)

DE/best/2/bin
Always start from best but use two difference vectors
vi = xbest + F1 (xr2 - xr3 + xr4 - xr5)
[Figure: convergence plots]
Similar convergence to DE/rand/1/bin in 2D.
Rather poor performance in 30D.



Single objective DE
Influence of parameters (mutation strategy)

DE/rand-to-best/1/bin
Include movement to best (analogy with PSO)
vi = xr1 + F1 (xr2 - xr3) + F2 (xbest - xr1)
[Figure: convergence plots]
Faster convergence than DE/rand/1/bin.
Stagnation?



Single objective DE
Influence of parameters (mutation strategy)

Comparison
And the winner is... DE/current-to-best/1/bin
[Figure: comparison of convergence for all mutation strategies]
The 30D problem is very tough.
Some better ideas are needed!



Improvement strategies
Improvement strategies (more aggressive behaviour)

Aggressive strategy (asynchronous)


DE/rand/1: Use good solutions immediately
[Figure: convergence plots]
Good improvement, but only in 2D.



Improvement strategies
Improvement strategies (more aggressive behaviour)

Aggressive strategy (asynchronous)


DE/current-to-best/1: Use good solutions immediately
[Figure: convergence plots]
Good improvement, but only in 2D.



Improvement strategies
Improvement strategies (Changing F)

Changing F
Improve solution by changing mutation factor
[Figure: convergence plots for different mutation factors]
Faster and/or deeper convergence.
Adaptivity? Yes.
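One common way to add adaptivity to F is dither: draw a fresh mutation factor from an interval each generation instead of fixing it. This variant is not spelled out in the lecture, so the sketch below (names `dithered_F`, `F_lo`, `F_hi`) is an assumed, widely used scheme, not the lecturer's method:

```python
import numpy as np

rng = np.random.default_rng(2)

def dithered_F(F_lo=0.5, F_hi=1.0):
    """Dither: sample a fresh mutation factor per generation from [F_lo, F_hi)."""
    return rng.uniform(F_lo, F_hi)

# one F value per generation for a 100-generation run
Fs = [dithered_F() for _ in range(100)]
```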



Improvement strategies
Improvement strategies (Population size)

Population size
Smaller populations work well
[Figure: convergence plots for different population sizes]
Faster and/or deeper convergence.
Adaptivity? Yes (kill old individuals, add new ones if stagnating).
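The "kill old" half of that rule can be sketched as a simple population reduction that keeps only the best individuals (the name `shrink_population` and the trigger policy are illustrative assumptions; the lecture does not give details):

```python
import numpy as np

def shrink_population(x, fx, keep):
    """Keep only the `keep` best individuals (one simple population-adaptation move)."""
    order = np.argsort(fx)[:keep]
    return x[order], fx[order]

x = np.arange(10, dtype=float).reshape(10, 1)
fx = x[:, 0] ** 2
x2, fx2 = shrink_population(x, fx, keep=5)   # population halved, best retained
```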



Improvement strategies
Improvement strategies (Elitism)

Elitism (NEW)
Compare trial not only with target but also with worst
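One plausible reading of this elitist rule, sketched in Python (the name `select_elitist` and the exact tie-breaking are assumptions; the slide only states the idea):

```python
import numpy as np

def select_elitist(x, fx, u, fu):
    """A trial replaces its target if better; otherwise it may still replace the
    current worst individual, so good trials are not discarded outright."""
    x, fx = x.copy(), fx.copy()
    for i in range(len(fx)):
        if fu[i] < fx[i]:
            x[i], fx[i] = u[i], fu[i]
        else:
            w = np.argmax(fx)            # current worst individual
            if fu[i] < fx[w]:
                x[w], fx[w] = u[i], fu[i]
    return x, fx

x = np.array([[0.0], [1.0], [2.0]]); fx = np.array([1.0, 2.0, 9.0])
u = np.array([[5.0], [6.0], [7.0]]); fu = np.array([3.0, 4.0, 5.0])
x_new, f_new = select_elitist(x, fx, u, fu)   # worst target (9.0) is displaced
```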
[Figure: convergence plots]
Faster and/or deeper convergence.
Useful.



Improvement strategies
Improvement strategies (Coevolution)

Coevolution
Several populations, each working on a subset of the
degrees of freedom
Current best: x = (1, 2), f = 10.0

Population 1 (works on x1): x = (x1, 2)
After iteration: xbest = (1.2, 2), f = 9.9

Population 2 (works on x2): x = (1, x2)
After iteration: xbest = (1, 2.5), f = 9.5

Hopefully: x = (1.2, 2.5), f = 9.0
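A minimal Python sketch of one coevolution step on a subpopulation that owns only some coordinates, the others being frozen at the current global best (the names `coevolve_step`, `dims`, and the toy objective are illustrative):

```python
import numpy as np

def coevolve_step(best, objf, dims, pop, rng, F=0.8):
    """One DE/rand/1-style pass on a subpopulation that only owns `dims`;
    the remaining coordinates are frozen at the current global best."""
    N = len(pop)
    trial_best, f_best = best.copy(), objf(best)
    for i in range(N):
        r1, r2, r3 = rng.choice([j for j in range(N) if j != i], 3, replace=False)
        cand = best.copy()
        cand[dims] = pop[r1] + F * (pop[r2] - pop[r3])   # mutate owned DOFs only
        fc = objf(cand)
        if fc < f_best:
            trial_best, f_best = cand, fc
    return trial_best, f_best

objf = lambda x: (x[0] - 1.2) ** 2 + (x[1] + 2.5) ** 2   # toy objective
rng = np.random.default_rng(3)
best = np.array([1.0, -2.0])
pop1 = rng.uniform(-2, 2, size=(10, 1))   # subpopulation owning x1 only
best, f = coevolve_step(best, objf, [0], pop1, rng)
```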

Recent work suggests that coevolution may exhibit some free lunches; for us, a free sandwich is enough.



Improvement strategies
Improvement strategies (Coevolution)

Coevolution
Several populations, each working on a subset of the
degrees of freedom
[Figure: convergence plots]
Faster convergence.
Useful.



Improvement strategies
Final comparison

Summing up:

[Figure: summary comparison of convergence for the improvement strategies]

Not mentioned here:
Comparison with nearest target
F drawn from Gaussian or Cauchy distributions

The combined use of improvement techniques (aggressive + elitist + coevolutionary) makes it possible to solve high-dimensional 2d FEM problems in 1 day!



Conclusions
Hybrid DE can efficiently solve problems with:

Mixed real/integer parameters


Constraints
Noisy objective functions
Multiminima

Parallel computing allows (simple) stochastic 3d FEM optimization
| N. generations | Pop. size | N. eval. | T single eval. [s] | T eval. | Solver |
| 25.000 | 20 | 500.000 | | idem | Very complex analytical |
| 2.500 | 20 | 50.000 | 30 | idem | Average 2d FEM |
| 250 | 20 | 5.000 | 300 (= 5 min) | idem | Simple 3d FEM |


Additional material
If time permits



DE vs. PSO




Constraint handling
Existing approaches and proposal
Existing approaches:
Lagrange multipliers (continuous only)
Penalty (value?)

Elegant solution:
Domination in a single-objective context
Incorporate in selection operator:
If both candidate and target are feasible, choose the best
If one of candidate/target is feasible and the other infeasible, choose the feasible one
If both candidate and target are infeasible, choose the least infeasible
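These three rules drop straight into the DE selection operator. A Python sketch, assuming a scalar violation measure g that is 0 for feasible points (the name `feasibility_select` is illustrative):

```python
def feasibility_select(f_t, g_t, f_c, g_c):
    """Feasibility rules for the DE selection step.
    f is the objective; g is the total constraint violation (0 means feasible)."""
    feas_t, feas_c = g_t == 0.0, g_c == 0.0
    if feas_t and feas_c:
        return 'candidate' if f_c < f_t else 'target'   # both feasible: best f wins
    if feas_c != feas_t:
        return 'candidate' if feas_c else 'target'      # the feasible one wins
    return 'candidate' if g_c < g_t else 'target'       # least violation wins
```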



Multiobjective DE
Proposed method:
DE/current-to-best/aggressive/elitist/multipopulation with the following modifications:
All targets and all trial individuals are sorted by non-domination and crowding distance in the selection phase (as in NSGA-II)
The best half evolves to the next generation
The best individual is randomly selected from the level-1 (non-dominated) front
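The building block of such non-dominated sorting is the Pareto-dominance test, which for minimization reads:

```python
def dominates(f_a, f_b):
    """True if objective vector f_a Pareto-dominates f_b (minimization):
    no worse in every objective and strictly better in at least one."""
    return (all(a <= b for a, b in zip(f_a, f_b))
            and any(a < b for a, b in zip(f_a, f_b)))
```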



Multiobjective DE
Analytical benchmark (KUR100)

The proposed multiobjective DE dominates NSGA-II and SPEA2 (Strength Pareto Evolutionary Algorithm).



Validation
Constrained single-objective DE: motor benchmark
[Figure: motor geometry]

| Method | Avg. torque [Nm] | Torq. ripple [Nm] | Torque ripple % |
| Reference | 3.99 | 1.24 | 31% |
| TRIBES (PSO) | 3.82 | 0.46 | 12% |
| DE | 3.95 | 0.43 | 11% |



Validation
Constrained multi-objective DE: motor benchmark
Maximize average torque / minimize torque ripple
