Hierarchical Subpopulation Particle Swarm Optimization Algorithm
Chuan Lin, Quanyuan Feng
School of Information Science and Technology, Southwest Jiaotong University, Chengdu 610031, P. R. China
Tab. 2: Mean best fitness value (standard variation) of the four PSO algorithms on the five benchmark functions.

Function     SPSO                  SPSOvon               HSPSO-S               HSPSO-F
Sphere       4.60e-23 (2.52e-22)   3.12e-31 (1.03e-30)   1.93e-34 (8.99e-34)   4.05e-38 (5.27e-38)
Rosenbrock   37.60 (42.18)         47.68 (43.25)         37.4693 (31.3526)     36.9355 (29.3884)
Rastrigin    105.57 (18.93)        90.54 (20.74)         81.7827 (16.9103)     64.4330 (15.5890)
Griewank     0.1527 (0.3574)       0.0138 (0.0170)       0.0059 (0.0087)       0.0033 (0.0049)
Ackley       3.9970 (2.1699)       0.7522 (0.9082)       1.30e-14 (3.37e-15)   9.34e-15 (2.68e-15)

[Figures 3-7 show the convergence curves: mean best fitness (log10) versus the number of iterations (0-3000) for SPSO, SPSOvon, HSPSO-S, and HSPSO-F.]
Fig.3: Convergence of mean best fitness for Sphere.
Fig.4: Convergence of mean best fitness for Rosenbrock.
Fig.5: Convergence of mean best fitness for Rastrigin.
Fig.6: Convergence of mean best fitness for Griewank.
Fig.7: Convergence of mean best fitness for Ackley.

Table 2 shows that HSPSO-F achieves the minimum mean best fitness among the four PSO algorithms on all five test functions, and HSPSO-S ranks second. PSOvon obtains a smaller mean best fitness than PSO-g on every function except Rosenbrock. From Figs. 3-7 we can see that on Sphere, Rosenbrock, and Ackley, HSPSO-F converges fastest and HSPSO-S ranks second; on Griewank, HSPSO-S converges fastest; and on Rastrigin, PSO-g converges fastest at first but stagnates at an early stage. HSPSO-S and HSPSO-F clearly perform better than PSO-g and PSOvon, owing to the specialization and cooperation of the subpopulations in the hierarchy and a good balance between exploration and exploitation. Since the particles at the upper level have comparable performance and good diversity, each particle can provide some special information to the others in its subpopulation. Employing the FIPS algorithm at the upper level, so that a particle shares information with all of its neighbors, is therefore a good strategy, and this explains why HSPSO-F performs best among the four PSO algorithms.
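For illustration, the following minimal Python sketch contrasts the two velocity-update rules compared above: the standard PSO update used by SPSO (and, in HS-PSO, at the bottom level) and the fully informed (FIPS) update that HSPSO-F applies at the upper level. The parameter values (w, c1, c2, chi, phi) are common settings from the PSO literature and are assumptions here, not necessarily the values used in these experiments.

import numpy as np

def spso_velocity(v, x, pbest, gbest, w=0.7298, c1=1.4962, c2=1.4962, rng=np.random):
    # Standard PSO update: the particle is attracted to its own best position
    # (pbest) and the best position found in its neighborhood (gbest).
    r1 = rng.random(x.shape)
    r2 = rng.random(x.shape)
    return w * v + c1 * r1 * (pbest - x) + c2 * r2 * (gbest - x)

def fips_velocity(v, x, neighbor_pbests, chi=0.7298, phi=4.1, rng=np.random):
    # FIPS update: the particle is "fully informed" by the personal bests of
    # ALL K neighbors, each weighted by a coefficient drawn from U(0, phi/K).
    k = len(neighbor_pbests)
    social = sum(rng.random(x.shape) * (phi / k) * (p - x) for p in neighbor_pbests)
    return chi * (v + social)

Because every neighbor contributes to the FIPS update, it profits most when the neighbors carry diverse, high-quality information, which matches the observation above about the upper-level particles.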
5. Conclusions

Based on the metaphor of specialization and cooperation and the hierarchical organization in human society, a novel PSO algorithm, hierarchical subpopulation PSO (HS-PSO), was presented in this paper. In HS-PSO the entire population is divided into several subpopulations arranged in a hierarchy, and the exploration and exploitation abilities can be well balanced by assigning different tasks to the particles at different levels. Two versions of HS-PSO are given in this paper: HSPSO-S, which uses the standard PSO algorithm in all subpopulations, and HSPSO-F, which uses the standard PSO algorithm for the subpopulations at the bottom level but the FIPS algorithm for the upper levels. The simulation results on a set of benchmark functions show that both HSPSO-S and HSPSO-F evidently improve the performance of the PSO algorithm, and that HSPSO-F performs best on both unimodal and multimodal functions. Our further study will focus on the following aspects to extend HS-PSO:
(1) Employing an adaptive hierarchical structure.
(2) Exchanging information between the subpopulations.
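To make the two-level structure summarized above concrete, here is a minimal Python sketch of one HS-PSO iteration, reusing spso_velocity and fips_velocity from the previous sketch. The two-level layout, the Particle class, and the initialization range are illustrative assumptions, not the authors' exact procedure (how particles are assigned to levels is defined earlier in the paper).

import numpy as np

class Particle:
    def __init__(self, dim, rng=np.random):
        self.x = rng.uniform(-5.0, 5.0, dim)   # assumed initialization range
        self.v = np.zeros(dim)
        self.pbest = self.x.copy()
        self.pbest_f = np.inf

def evaluate(p, f):
    # After a move, update the particle's personal best if it improved.
    fx = f(p.x)
    if fx < p.pbest_f:
        p.pbest_f, p.pbest = fx, p.x.copy()

def best_of(subpop):
    return min(subpop, key=lambda q: q.pbest_f)

def hspso_step(bottom_subpops, top_subpop, f, variant="F"):
    # Bottom level: standard PSO inside each subpopulation (exploration).
    for sub in bottom_subpops:
        g = best_of(sub).pbest
        for p in sub:
            p.v = spso_velocity(p.v, p.x, p.pbest, g)
            p.x = p.x + p.v
            evaluate(p, f)
    # Upper level: FIPS for HSPSO-F, standard PSO for HSPSO-S (exploitation).
    g = best_of(top_subpop).pbest
    for p in top_subpop:
        if variant == "F":
            p.v = fips_velocity(p.v, p.x, [q.pbest for q in top_subpop])
        else:
            p.v = spso_velocity(p.v, p.x, p.pbest, g)
        p.x = p.x + p.v
        evaluate(p, f)

# Example: minimize the 30-dimensional Sphere function with two bottom subpopulations.
sphere = lambda z: float(np.sum(z * z))
bottoms = [[Particle(30) for _ in range(10)] for _ in range(2)]
top = [Particle(30) for _ in range(10)]
for _ in range(1000):
    hspso_step(bottoms, top, sphere, variant="F")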
References
[1] J. Kennedy, R. C. Eberhart, Particle swarm optimization. Proceedings of the IEEE International Conference on Neural Networks, pp. 1942-1948, 1995.
[2] Y. Shi, R. C. Eberhart, A modified particle swarm optimizer. Proceedings of the IEEE International Conference on Evolutionary Computation, pp. 69-73, 1998.
[3] A. Chatterjee, K. Pulasinghe, K. Watanabe, et al., A particle-swarm-optimized fuzzy-neural network for voice-controlled robot systems. IEEE Trans. on Industrial Electronics, 52: 1478-1489, 2005.
[4] J. Kennedy, R. Mendes, Population structure and particle swarm performance. CEC 2002, pp. 1671-1676, 2002.
[5] P. N. Suganthan, Particle swarm optimizer with neighborhood operator. CEC 1999, pp. 1958-1962, 1999.
[6] S. Janson, M. Middendorf, A hierarchical particle swarm optimizer and its adaptive variant. IEEE Trans. on Systems, Man, and Cybernetics, Part B: Cybernetics, 35: 1272-1282, 2005.
[7] B. Niu, Y. Zhu, X. X. He, et al., MCPSO: A multi-swarm cooperative particle swarm optimizer. Applied Mathematics and Computation, 185: 1050-1062, 2007.
[8] R. Mendes, J. Kennedy, The fully informed particle swarm: simpler, maybe better. IEEE Trans. on Evolutionary Computation, 8: 204-210, 2004.
[9] J. Kennedy, R. Mendes, Neighborhood topologies in fully informed and best-of-neighborhood particle swarms. IEEE Trans. on Systems, Man, and Cybernetics, Part C: Applications and Reviews, 36: 515-519, 2006.