
Comprehensive Learning Particle Swarm
Optimizer for Solving Multiobjective
Optimization Problems
V.L. Huang,† P.N. Suganthan,* J.J. Liang ‡
School of Electrical and Electronic Engineering, Nanyang Technological
University, 639798 Singapore

This article presents an approach to integrate a Pareto dominance concept into a comprehensive
learning particle swarm optimizer (CLPSO) to handle multiple objective optimization problems.
The multiobjective comprehensive learning particle swarm optimizer (MOCLPSO) also integrates
an external archive technique. Simulation results (obtained using the codes made available on the
Web at http://www.ntu.edu.sg/home/EPNSugan) on six test problems show that the proposed
MOCLPSO, for most problems, is able to find a much better spread of solutions and faster
convergence to the true Pareto-optimal front compared to two other multiobjective optimization
evolutionary algorithms. © 2006 Wiley Periodicals, Inc.

1. INTRODUCTION

Development of evolutionary algorithms to solve multiobjective optimization problems has
attracted much interest recently, and a number of multiobjective evolutionary algorithms have
been suggested.1–5 Although most of these algorithms were developed with two common goals in
mind, namely fast convergence to the Pareto-optimal front and good distribution of solutions
along the front, each algorithm employs a unique combination of specific techniques to achieve
these goals.
The main advantage of evolutionary algorithms (EAs) in solving multiobjective optimization
problems is their ability to find multiple Pareto-optimal solutions in a single run. As particle
swarm optimizers (PSO) also have this ability, there have recently been several proposals to
extend PSO to handle multiobjective problems.6–12 The comprehensive learning particle swarm
optimizer (CLPSO)13 is a variant of PSO and has been demonstrated to possess superior performance
*Author to whom all correspondence should be addressed: e-mail: epnsugan@ntu.edu.sg.



e-mail: huangling@pmail.ntu.edu.sg.

e-mail: liangjing@pmail.ntu.edu.sg.
INTERNATIONAL JOURNAL OF INTELLIGENT SYSTEMS, VOL. 21, 209–226 (2006)
© 2006 Wiley Periodicals, Inc. Published online in Wiley InterScience
(www.interscience.wiley.com). DOI 10.1002/int.20128
in the single objective domain when compared to other variants of PSO. Hence,
extending the CLPSO to multiobjective problems is beneficial.
In this article, we present an approach to extend the CLPSO algorithm to solve multiobjective
optimization problems, which we call the multiobjective comprehensive learning particle swarm
optimizer (MOCLPSO). From the simulation results on several standard test functions, we find
that MOCLPSO overall outperforms two other competitive multiobjective evolutionary algorithms
(MOEAs), namely multiobjective particle swarm optimization (MOPSO)12 and the nondominated
sorting genetic algorithm II (NSGA-II).5
The remainder of the article is organized as follows. Section 2 summarizes CLPSO and reviews
the literature on MOPSO. Section 3 develops the proposed MOCLPSO in detail. Section 4 presents
experimental results on MOCLPSO and compares them with MOPSO and NSGA-II. Section 5
draws conclusions from the study.

2. RELATED WORK

2.1. Comprehensive Learning Particle Swarm Optimizer

Liang et al.13 proposed an improved PSO called CLPSO, which uses a novel learning strategy in
which all particles' historical best information is used to update a particle's velocity. This strategy
preserves the diversity of the swarm and discourages premature convergence. It has been applied
successfully to solve real-world problems.14

In CLPSO, the particle swarm learns from the gbest of the swarm, the particle's own pbest, and
the pbests of all other particles, so that every particle learns from the elite, itself, and other
particles. In this version, m dimensions are randomly chosen to learn from the gbest. Some of the
remaining D − m dimensions are randomly chosen to learn from some randomly chosen particles'
pbests, and the remaining dimensions learn from the particle's own pbest. The pseudo code for
CLPSO is given in Table I. Every 10 iterations, the learning dimensions are randomly reorganized.
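To make the strategy concrete, the exemplar assignment of Equation 2 and the per-dimension velocity update of Equations 3a–3c (Table I) can be sketched as below. This is a minimal Python/NumPy sketch under our own array layout; the function names and the 0-based particle indices are our assumptions, not the authors' code.

```python
import numpy as np

def assign_exemplars(D, m, Pc, NP, rng):
    """Eq. (2): decide, per dimension, whether a particle learns from the
    gbest (a), a randomly chosen particle's pbest (b), or its own pbest."""
    rc = rng.permutation(D)
    a = np.zeros(D, dtype=bool)
    a[rc[:m]] = True                    # m random dimensions follow gbest
    b = (rng.random(D) < Pc) & ~a       # remaining follow another pbest w.p. Pc
    f = rng.integers(0, NP, size=D)     # which particle's pbest to follow (0-based)
    return a, b, f

def clpso_velocity(V, X, pbests, i, gbest, a, b, f, w, rng):
    """Eqs. (3a)-(3c): per-dimension velocity update for particle i."""
    D = len(X)
    target = pbests[i].copy()                    # default: own pbest (3c)
    dims = np.arange(D)
    target[b] = pbests[f[b], dims[b]]            # other particles' pbests (3b)
    target[a] = gbest[a]                         # gbest dimensions (3a)
    return w * V + rng.random(D) * (target - X)
```

As a sanity check, when every pbest, the gbest, and X coincide, the update reduces to w·V.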

2.2. Multiobjective Particle Swarm Optimization

Recently, a number of proposals have been suggested to extend PSO to handle multiobjective
problems. Ray and Liew6 combined Pareto dominance and concepts of evolutionary techniques
with the particle swarm. Their approach uses crowding to maintain diversity and Pareto ranks to
handle constraints. Parsopoulos and Vrahatis7 introduced two methods that extend PSO to handle
multiobjective problems: a weighted aggregation approach and a vector-evaluated PSO. In Ref. 8,
a dynamic neighborhood and a new pbest updating strategy are proposed; only one objective is
optimized in each run. Although these approaches have been shown to find multiple nondominated
solutions on many test problems, researchers realized the need for introducing elitism, as
evidenced in many recent successful MOEAs.
Table I. The pseudo code for CLPSO.

Initialize the swarm:
    Initialize positions and associated velocities of all particles in the population randomly in the
    D-dimensional search space. Evaluate the fitness values of all particles. Set the current position
    as pbest and the current particle with the best fitness value in the whole population as the gbest.
    Vmax = 0.25(Xmax − Xmin)
For k = 1 to max_gen
    w(k) = (w0 − w1) × (max_gen − k) / max_gen + w1,  with w0 = 0.9, w1 = 0.2    (1)
    If mod(k, 10) == 1    // reassign dimensions every 10 generations
        For i = 1 to NP    // NP is the population size
            rc = randperm(D)    // random permutation of D integers
            a_i = zeros(1, D);  b_i = zeros(1, D)
            a_i(rc(1:m)) = 1;   b_i = ⌈rand(1, D) − 1 + Pc⌉    (2)
            f_i = ⌈rand(1, D) · NP⌉    // ⌈·⌉ represents the ceiling operator
        EndFor i
    EndIf
    For i = 1 to NP    // update velocity and position of each particle
        For d = 1 to D    // update V, X of each dimension
            If a_i(d) == 1
                V_i(d) = w(k)·V_i(d) + rand()·(gbest(d) − X_i(d))    (3a)
            ElseIf b_i(d) == 1
                V_i(d) = w(k)·V_i(d) + rand()·(pbest_{f_i(d)}(d) − X_i(d))    (3b)
            Else
                V_i(d) = w(k)·V_i(d) + rand()·(pbest_i(d) − X_i(d))    (3c)
            EndIf
            Limit the velocity: V_i(d) = min(Vmax(d), max(−Vmax(d), V_i(d)))
            X_i(d) = X_i(d) + V_i(d)    (4)
        EndFor d
        If X_i ∈ [Xmin, Xmax]
            Calculate the fitness value of X_i
            Update pbest and gbest if needed, and record gbest_id (which particle's pbest is the gbest)
        EndIf
    EndFor i
    Stop if a stop criterion is satisfied
EndFor k

Further, more researchers have recently become interested in incorporating an external archive
into MOPSO to enhance the convergence properties.
Fieldsend and Singh9 used a dominated tree archive to select the global best individual, based on
a concept of closeness to members of the nondominated set, and maintained a set of previous best
solutions for each particle. Turbulence is incorporated to improve the performance of the
multiobjective PSO. This approach uses an unbounded archive; however, some researchers bound
the archive size to reduce the complexity of archive updating.11,12,15,16

Hu, Eberhart, and Shi15 improved the dynamic neighborhood PSO approach by introducing an
extended memory with a bounded size. Mostaghim and Teich10 proposed a sigma method in
MOPSO for finding the best local guides for each
particle, in order to converge quickly to the Pareto-optimal front with good diversity. In another
paper,11 the same authors use ε-dominance to fix the archive size and compare it with clustering
techniques; they use an initial archive instead of an empty archive for MOPSO.
Li17 extended PSO to multiobjective problems with the nondominated sorting concept of
NSGA-II.5 Later, the author proposed a maximin PSO for multiobjective optimization, which uses
a fitness function derived from the maximin strategy to determine Pareto domination. One
advantage is that no additional clustering or niching technique is needed, because the maximin
fitness function provides both domination and diversity information. Both algorithms showed
competitive performance with the real-coded NSGA-II.
Bartz-Beielstein et al.16 integrated the archiving technique into particle swarm optimization and
also analyzed several modifications and extensions of archiving techniques. Coello and Lechuga18
proposed MOPSO with an external repository similar to the adaptive grid of the Pareto archived
evolution strategy (PAES). This approach selects a global best based on roulette wheel selection
of a hypercube. Coello et al.12 also incorporate a special mutation operator to enhance the
exploratory capabilities. Another improved version (called AMOPSO) is presented by Pulido and
Coello,19 in which a clustering technique is used to divide the population of particles into several
swarms in order to maintain a better distribution of solutions. In the following, we present the
proposed multiobjective CLPSO approach, which incorporates a bounded external archive to save
the nondominated solutions.

3. MULTIOBJECTIVE CLPSO

In CLPSO the swarm population is fixed in size, and its members cannot be replaced, only
adjusted by their pbests and the gbest. However, when extending CLPSO to handle multiobjective
problems, there exists a set of nondominated solutions instead of the single global best individual
of the single-objective CLPSO. In addition, there may not be a single previous best individual for
each member of the swarm when two solutions are nondominated with respect to each other.
Selecting an exemplar for each particle is therefore difficult yet important.

3.1. Selection of Exemplar

3.1.1. Selection of pbest

Several different pbest maintenance and selection strategies exist in the literature.20 In our
proposal, we use the pbest updating strategy of Ref. 12 (see Table II).

Unlike MOPSO,12 we allow the particle to learn from its exemplars until the particle ceases
improving for a number of generations (set to 2 in our approach), to ensure that a particle learns
from good exemplars and to minimize the time wasted on poor directions. Then we reassign the
exemplars for the particle.
Table II. Updating the pbest.

if pbest_i dominates X_i, count = count + 1
if X_i dominates pbest_i, pbest_i = X_i
if pbest_i and X_i are nondominated with respect to each other,
    if rand < 0.5, pbest_i = X_i; else count = count + 1

3.1.2. Selection of gbest


As mentioned above, in CLPSO m dimensions of each particle are randomly chosen to learn from
the gbest. When extending CLPSO to handle a multiobjective problem, however, there exists a set
of nondominated solutions instead of a single global best individual. Because all the nondominated
solutions are equally good individuals, we cannot single one out as the best. Coello et al.12 used
an adaptive grid and applied roulette-wheel selection to select a hypercube, from which they
picked the corresponding particle as exemplar. In our proposed MOCLPSO, we randomly choose
a particle from the nondominated solutions. This random selection approach is fast, with lower
computational complexity than the method used in the MOPSO of Coello et al.12

3.2. External Archive


We use an external archive to keep a historical record of the nondominated solutions obtained
during the search process. Initially, this archive is empty. As the evolution progresses, good
solutions enter the archive, which is updated in every generation. The nondominated solutions
(X_G) obtained at each generation are compared one by one with the current archive (A_G),
which contains the set of nondominated solutions found so far. There are three cases, as illustrated
in Table III: (1) If the new solution is dominated by a member of the external archive, the new
solution is rejected. (2) If the new solution dominates some member(s) of the archive, the
dominated members are deleted and the new solution enters the archive. (3) If the new solution
neither dominates nor is dominated by any archive member, it belongs to the nondominated front
and enters the archive. Many MOEAs, such as MOPSO and PAES, have used this archive strategy.

Table III. Updating the external archive.

If x_G is dominated by any member of A_G,
    discard x_G
else if x_G dominates a set of members D(x_G) in A_G,
    A_G = A_G \ D(x_G)
    A_G = A_G ∪ {x_G}
else (A_G and x_G are nondominated),
    A_G = A_G ∪ {x_G}
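The three cases of Table III collapse into a single update routine. The list-of-pairs archive representation below is our assumption, as is minimization:

```python
def dominates(fu, fv):
    """fu Pareto-dominates fv (minimization)."""
    return all(a <= b for a, b in zip(fu, fv)) and \
           any(a < b for a, b in zip(fu, fv))

def update_archive(archive, x, f_x):
    """Table III: archive is a list of (solution, objective-vector) pairs."""
    if any(dominates(fv, f_x) for _, fv in archive):
        return archive                                  # case 1: reject x_G
    kept = [(s, fv) for s, fv in archive
            if not dominates(f_x, fv)]                  # case 2: drop dominated members
    kept.append((x, f_x))                               # cases 2/3: admit x_G
    return kept
```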

However, because the size of the true nondominated set can be very large, and the complexity of
updating the archive increases with the archive size, the size of the archive is restricted to a
prespecified value. Several density estimation methods are employed in MOEAs to maintain the
archive size when the archive reaches its maximum allowed capacity. PAES and MOPSO use
adaptive hypercubes, in which one chooses an appropriate depth parameter (in PAES) or number
of divisions (in MOPSO) to control the hypercube size. Because the size of the hypercubes adapts
to the bounds of the entire search space, the hypercubes become comparatively large once
solutions converge near the Pareto front.

In our approach, we estimate the density of solutions using the crowding distance,5 which requires
no user-defined parameter. This density estimation method is invoked when the external archive
reaches its maximum size: the crowding distance values of all archive members are calculated and
sorted from largest to smallest, the first Nmax (maximum size of archive) members are kept, and
the remaining ones are deleted from the archive.
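This truncation can be sketched as follows, assuming the NSGA-II crowding distance computed over an N × M matrix of objective values (boundary solutions get infinite distance and are always retained):

```python
import numpy as np

def crowding_distance(F):
    """NSGA-II crowding distance for each row of the N x M objective matrix F."""
    N, M = F.shape
    d = np.zeros(N)
    for m in range(M):
        order = np.argsort(F[:, m])
        d[order[0]] = d[order[-1]] = np.inf       # boundary points kept
        span = F[order[-1], m] - F[order[0], m]
        if span == 0:
            continue
        # each interior point accumulates the normalized gap between neighbors
        d[order[1:-1]] += (F[order[2:], m] - F[order[:-2], m]) / span
    return d

def truncate_archive(members, F, n_max):
    """Keep the n_max members with the largest crowding distance."""
    if len(members) <= n_max:
        return members, F
    keep = np.argsort(-crowding_distance(F))[:n_max]
    return [members[i] for i in keep], F[keep]
```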

3.3. MOCLPSO Algorithm

1) Initialize
    Randomly initialize particle positions.
    Initialize particle velocities: for i = 1 to NP, V_i = 0.
    Evaluate the fitness values of the particles.
2) Optimize
    WHILE the stopping criterion is not satisfied DO
        For i = 1 to NP
            Select an exemplar from the external archive
            Assign each dimension to learn from the gbest, the pbest of this particle, or the
                pbests of other particles, using Equation 2
            Update the particle velocity using Equations 3a, 3b, 3c
            Update the particle position using Equation 4
            Maintain particles in the search space12
            Update pbest if the current position is better than pbest (Table II)
            Evaluate the fitness values of the particle
        End For
        Update the external archive
        Increment the generation count
    END WHILE

3.4. Parameter Settings

(1) Learning probability Pc. The learning probability decides whether a dimension of each
particle learns from the pbest of itself or of other particles. Pc is set to 0.1 in our experiments.
(2) Elitism probability Pm. In the original CLPSO, m dimensions are randomly chosen to learn
from the gbest. Because different problems have different dimensions, it is hard to set a single
value of m for all problems. Hence we use a parameter Pm, called the elitism probability, and the
original m in CLPSO becomes

m = ⌈Pm × D⌉    (5)

(3) Inertia weight w. The inertia weight moderates the impact of the previous velocity on the
current velocity of each particle and balances the global and local search abilities. Shi and
Eberhart21 proposed an inertia weight that decreases linearly with increasing generations. Our
proposed MOCLPSO uses this decreasing inertia weight, as described in Equation 1: the inertia
weight is initialized to a large value to explore the search space globally and quickly, and is then
gradually decreased to perform a finer search.
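The schedule of Equation 1 is a one-liner. The defaults below follow the experimental settings of Section 4.3 (w0 = 0.9, w1 = 0.4); note that Table I quotes w1 = 0.2 for the single-objective CLPSO:

```python
def inertia_weight(k, max_gen, w0=0.9, w1=0.4):
    """Eq. (1): inertia weight decreasing linearly from w0 to w1 over max_gen."""
    return (w0 - w1) * (max_gen - k) / max_gen + w1
```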

4. EXPERIMENTAL RESULTS

4.1. Methodology

We compare the proposed MOCLPSO with two multiobjective evolutionary algorithms that are
representative of the state of the art.

Multiobjective Particle Swarm Optimization: MOPSO was proposed by Coello et al.12 It
incorporates Pareto dominance into particle swarm optimization to handle multiobjective
problems. The algorithm uses an external repository of nondominated solutions to guide the
particles' flight during the evolution; at the completion of the evolution process, the repository
holds the final nondominated solutions. It also incorporates a special mutation operator to enhance
the exploratory capabilities.
Nondominated Sorting Genetic Algorithm II: This algorithm is a revised version of the original
NSGA.1,5 NSGA-II is a fast and elitist MOEA based on nondominated sorting. It first combines
the parent and offspring populations and uses nondominated sorting to classify the entire
population, then selects the best solutions (with respect to fitness and spread). With elitism and a
crowded-comparison operator, NSGA-II is more efficient than the original NSGA.

4.2. Performance Measures

To measure the performance of MOEAs quantitatively, we need performance metrics to evaluate
and compare the algorithms. There are two goals in multiobjective optimization: (1) convergence
to the Pareto-optimal set and (2) diversity of solutions in the Pareto-optimal set. Because these
two goals are distinct, we require two different metrics to evaluate the performance of an MOEA.
Convergence metric (γ): This metric is the average distance between the nondominated solutions
found and the actual Pareto-optimal front:22

    γ = (Σ_{i=1}^{N} d_i) / N    (6)

where N is the number of nondominated solutions obtained with an algorithm and d_i is the
Euclidean distance (in objective space) between the ith nondominated solution and the nearest
member of the actual Pareto-optimal front. A smaller value of γ indicates better convergence
performance.
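Equation 6 can be computed directly once the true front is available as a finite sample of points; how densely the front is sampled is an experimental choice, and this sketch is ours:

```python
import numpy as np

def convergence_metric(nd, pareto):
    """Eq. (6): mean Euclidean distance (in objective space) from each
    obtained nondominated solution to the nearest sampled true-front point."""
    nd, pareto = np.asarray(nd, float), np.asarray(pareto, float)
    # pairwise distances: (len(nd), len(pareto))
    dists = np.linalg.norm(nd[:, None, :] - pareto[None, :, :], axis=2)
    return dists.min(axis=1).mean()
```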
Spread (Δ): Deb et al.5 proposed this metric to measure the spread of the solutions obtained by an
algorithm. It is defined as

    Δ = (Σ_{m=1}^{M} d_m^e + Σ_{i=1}^{N−1} |d_i − d̄|) / (Σ_{m=1}^{M} d_m^e + (N − 1) d̄)    (7)

Here, d_m^e is the Euclidean distance between the extreme solution of the Pareto-optimal front
and the boundary solution of the obtained nondominated set corresponding to the mth objective
function, d_i is the Euclidean distance between neighboring solutions in the obtained nondominated
set, and d̄ is the mean of these distances. Δ is 0 for an ideal distribution, when d_m^e = 0 and all
d_i equal d̄. The smaller the value of Δ, the better the diversity of the nondominated set.
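For the two-objective case, Equation 7 can be sketched as follows. Sorting by the first objective to order solutions along the front, and passing the two true extreme points explicitly, are our assumptions:

```python
import numpy as np

def spread_metric(nd, extremes):
    """Eq. (7), two objectives: nd is the obtained nondominated set (objective
    vectors), extremes the two extreme points of the true Pareto front."""
    nd = np.asarray(nd, dtype=float)
    nd = nd[np.argsort(nd[:, 0])]                       # order along the front
    d = np.linalg.norm(np.diff(nd, axis=0), axis=1)     # consecutive gaps d_i
    d_mean = d.mean()
    de = (np.linalg.norm(nd[0] - extremes[0])           # boundary-to-extreme gaps
          + np.linalg.norm(nd[-1] - extremes[1]))
    return (de + np.abs(d - d_mean).sum()) / (de + len(d) * d_mean)
```

An evenly spaced set whose endpoints coincide with the true extremes yields Δ ≈ 0, matching the "ideal distribution" case in the text.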

4.3. Discussion of Results

In our simulations, all MOEAs are run for a maximum of 10,000 fitness function evaluations
(FES). MOCLPSO uses the following parameter values: population size NP = 50, archive size
A = 100, learning probability Pc = 0.1, and elitism probability Pm = 0.4. MOPSO uses a
population size of 50, a repository size of 100, and 30 divisions for the adaptive grid, with
mutation as presented in Ref. 12. For these two approaches, the inertia weight decreases linearly
over time with w0 = 0.9, w1 = 0.4, and c1 = c2 = 2; we use all members of the archive after
10,000 fitness evaluations to calculate the performance metrics. For NSGA-II (real-coded), we
use a population size of 100, a crossover probability of 0.9, a mutation probability of 1/D (where
D is the number of decision variables), and distribution indexes for the crossover and mutation
operators of ηc = 20 and ηm = 20, as presented in Ref. 5. The population obtained at the end of
100 generations is used to calculate the performance metrics. The results presented in Tables
IV–IX are obtained by running each problem 10 times. The best average results are emphasized
in boldface.

Test Problem 1. Fonseca and Fleming23 proposed a two-objective problem (FON) having a
nonconvex Pareto front:

    Minimize f1(x) = 1 − exp(−Σ_{i=1}^{3} (x_i − 1/√3)²)

    Minimize f2(x) = 1 − exp(−Σ_{i=1}^{3} (x_i + 1/√3)²)

where n = 3 and x_i ∈ [−4, 4]. The optimal solutions are x1 = x2 = x3 ∈ [−1/√3, 1/√3].
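The two FON objectives translate directly into code (a sketch, vectorized with NumPy):

```python
import numpy as np

def fon(x):
    """FON: n = 3, x_i in [-4, 4]; both objectives are minimized."""
    x = np.asarray(x, dtype=float)
    f1 = 1 - np.exp(-np.sum((x - 1 / np.sqrt(3)) ** 2))
    f2 = 1 - np.exp(-np.sum((x + 1 / np.sqrt(3)) ** 2))
    return f1, f2
```

At the Pareto-optimal point x1 = x2 = x3 = 1/√3, f1 = 0 and f2 = 1 − e^(−4).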
Figure 1 shows all nondominated solutions achieved by MOCLPSO, MOPSO, and NSGA-II after
10,000 fitness evaluations (FES) on Test Problem 1; the Pareto-optimal region is also shown. All
three algorithms converged to the Pareto-optimal front. MOCLPSO is somewhat worse than
MOPSO on the convergence metric, but its diversity metric is much smaller than those of the
other two algorithms, as shown in Table IV.

Test Problem 2. Our second test problem was proposed by Kursawe24:

    Minimize f1(x) = Σ_{i=1}^{n−1} (−10 exp(−0.2 √(x_i² + x_{i+1}²)))

    Minimize f2(x) = Σ_{i=1}^{n} (|x_i|^0.8 + 5 sin(x_i³))

where n = 3 and x_i ∈ [−5, 5]. For the optimal solutions, see Ref. 25.
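KUR in code (a sketch; the sum in f1 runs over consecutive coordinate pairs):

```python
import numpy as np

def kur(x):
    """KUR: n = 3, x_i in [-5, 5]; both objectives are minimized."""
    x = np.asarray(x, dtype=float)
    f1 = np.sum(-10 * np.exp(-0.2 * np.sqrt(x[:-1] ** 2 + x[1:] ** 2)))
    f2 = np.sum(np.abs(x) ** 0.8 + 5 * np.sin(x ** 3))
    return f1, f2
```

At the origin, the two pairwise terms of f1 each contribute −10, giving f1 = −20 and f2 = 0.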
The KUR problem has three disconnected Pareto-optimal regions, which may cause difficulty in
finding nondominated solutions in all regions. In Figure 2, both MOPSO and NSGA-II have
trouble finding the entire Pareto front. MOCLPSO, however, performs well, obtaining
nondominated solutions spread over all regions, with the best convergence and diversity metric
values, as shown in Table V.

Test Problems 3–6 are chosen from the Zitzler–Deb–Thiele test set.26

Test Problem 3 (ZDT1). Test Problem 3 has a convex Pareto front:

    Minimize f1(x) = x1

    Minimize f2(x) = g(x)[1 − √(x1/g(x))]

    g(x) = 1 + 9 (Σ_{i=2}^{n} x_i) / (n − 1)

Figure 1. Pareto fronts obtained by MOCLPSO, MOPSO, and NSGA-II on Test Problem 1 (FON).

Table IV. Comparison of performance on Test Problem 1 (FON).

                     Convergence metric γ                  Diversity metric Δ
            MOCLPSO      MOPSO        NSGA-II     MOCLPSO      MOPSO        NSGA-II
Best        0.003091     0.002101     0.003006    0.305820     0.555064     0.422121
Worst       0.004002     0.002883     0.004297    0.482773     0.632461     0.515059
Average     0.003394     0.002441     0.003470    0.379726     0.592735     0.458394
Median      0.003382     0.002409     0.003432    0.373136     0.596043     0.458394
Variance    4.82E-08     6.66E-08     1.53E-07    3.64E-03     7.61E-04     9.26E-04

where n = 30 and x_i ∈ [0, 1]. The optimal solutions are x1 ∈ [0, 1] and x_i = 0, i = 2, ..., n.

The only difficulty an MOEA may face in this problem is the large number of variables. The
nondominated solutions obtained by MOCLPSO, MOPSO, and NSGA-II on Test Problem 3 are
shown in Figure 3 and Table VI. MOCLPSO converges to the Pareto-optimal front with fewer
FES, whereas MOPSO and NSGA-II could not converge within the allotted evaluations.
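ZDT1 in code (a sketch); the same g(x) term reappears in ZDT2 and ZDT3 below, so only f2 changes between them:

```python
import numpy as np

def zdt1(x):
    """ZDT1: n = 30, x_i in [0, 1]; both objectives are minimized."""
    x = np.asarray(x, dtype=float)
    g = 1 + 9 * np.sum(x[1:]) / (len(x) - 1)   # g(x) = 1 on the Pareto set
    f1 = x[0]
    f2 = g * (1 - np.sqrt(f1 / g))
    return f1, f2
```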

Figure 2. Pareto fronts obtained by MOCLPSO, MOPSO, and NSGA-II on Test Problem 2 (KUR).

Table V. Comparison of performance on Test Problem 2 (KUR).

                     Convergence metric γ                  Diversity metric Δ
            MOCLPSO      MOPSO        NSGA-II     MOCLPSO      MOPSO        NSGA-II
Best        0.039446     0.032998     0.014906    0.428320     0.578433     0.628576
Worst       0.048580     0.059463     0.219866    0.540128     0.751789     0.859299
Average     0.041431     0.041893     0.056087    0.481251     0.649151     0.724285
Median      0.041431     0.039463     0.026572    0.481251     0.647882     0.699996
Variance    1.13E-05     8.02E-05     5.04E-03    1.45E-03     3.23E-03     6.37E-03

Test Problem 4 (ZDT2). This problem has a nonconvex Pareto-optimal front:

    Minimize f1(x) = x1

    Minimize f2(x) = g(x)[1 − (x1/g(x))²]

    g(x) = 1 + 9 (Σ_{i=2}^{n} x_i) / (n − 1)


Figure 3. Pareto fronts obtained by MOCLPSO, MOPSO, and NSGA-II on Test Problem 3 (ZDT1).

Table VI. Comparison of performance on Test Problem 3 (ZDT1).

                     Convergence metric γ                  Diversity metric Δ
            MOCLPSO      MOPSO        NSGA-II     MOCLPSO      MOPSO        NSGA-II
Best        0.001945     0.063836     0.046512    0.251628     0.553405     0.434004
Worst       0.002455     0.124011     0.122316    0.351678     0.652837     0.790071
Average     0.002235     0.081128     0.075551    0.304202     0.591477     0.537488
Median      0.002232     0.072365     0.067414    0.307244     0.576528     0.493612
Variance    2.69E-08     4.30E-04     7.51E-04    1.02E-03     1.33E-03     1.35E-02

where n = 30 and x_i ∈ [0, 1]. The optimal solutions are x1 ∈ [0, 1] and x_i = 0, i = 2, ..., n.

Figure 4 shows the graphical results obtained by MOCLPSO, MOPSO, and NSGA-II on Test
Problem 4. Neither MOPSO nor NSGA-II could find the final nondominated front with only
10,000 FES, which is also demonstrated by the

Figure 4. Pareto fronts obtained by MOCLPSO, MOPSO, and NSGA-II on Test Problem 4 (ZDT2).

Table VII. Comparison of performance on Test Problem 4 (ZDT2).

                     Convergence metric γ                  Diversity metric Δ
            MOCLPSO      MOPSO        NSGA-II     MOCLPSO      MOPSO        NSGA-II
Best        0.001452     0.103316     0.095841    0.223482     0.578084     0.549511
Worst       0.001779     0.484534     0.250033    0.283006     0.999964     1.000000
Average     0.001611     0.242976     0.152121    0.257180     0.928701     0.881874
Median      0.001611     0.196015     0.138117    0.256441     0.998995     1.000000
Variance    1.21E-08     1.91E-02     2.47E-03    3.43E-04     2.31E-02     2.96E-02

high values of the convergence and diversity metrics in Table VII. The proposed MOCLPSO,
however, converges to the Pareto-optimal front, as shown in Figure 4.

Test Problem 5 (ZDT3). The fifth problem has several disconnected Pareto-
optimal fronts:

Figure 5. Pareto fronts obtained by MOCLPSO, MOPSO, and NSGA-II on Test Problem 5 (ZDT3).

Table VIII. Comparison of performance on Test Problem 5 (ZDT3).

                     Convergence metric γ                  Diversity metric Δ
            MOCLPSO      MOPSO        NSGA-II     MOCLPSO      MOPSO        NSGA-II
Best        0.005689     0.113631     0.600920    0.503779     0.534863     0.736710
Worst       0.008323     0.227841     0.712746    0.646290     0.662133     0.844782
Average     0.007334     0.157366     0.645840    0.554944     0.607451     0.795274
Median      0.007517     0.145892     0.632486    0.549698     0.607451     0.796255
Variance    6.70E-07     1.33E-03     1.75E-03    2.23E-03     1.61E-03     1.19E-03

    Minimize f1(x) = x1

    Minimize f2(x) = g(x)[1 − √(x1/g(x)) − (x1/g(x)) sin(10πx1)]

    g(x) = 1 + 9 (Σ_{i=2}^{n} x_i) / (n − 1)


Figure 6. Pareto fronts obtained by MOCLPSO, MOPSO, and NSGA-II on Test Problem 6 (ZDT6).

Table IX. Comparison of performance on Test Problem 6 (ZDT6).

                     Convergence metric γ                  Diversity metric Δ
            MOCLPSO      MOPSO        NSGA-II     MOCLPSO      MOPSO        NSGA-II
Best        0.002950     0.527778     2.498893    0.237585     0.788246     0.934297
Worst       0.015873     1.667019     3.707603    0.964462     1.016125     0.983176
Average     0.006283     1.029723     3.073925    0.486495     0.927657     0.954907
Median      0.004871     0.944908     3.023464    0.346380     0.929350     0.954590
Variance    1.77E-05     1.90E-01     1.40E-01    6.75E-02     5.30E-03     3.00E-04

where n = 30 and x_i ∈ [0, 1]. The optimal solutions are x1 ∈ [0, 1] and x_i = 0, i = 2, ..., n.

The Pareto-optimal front of this problem is made up of four disjoint curves. Figure 5 shows that
only MOCLPSO effectively finds nondominated solutions spread over the whole front. See Table
VIII for a comparison of performance on Test Problem 5.
Test Problem 6 (ZDT6). This problem has a nonconvex and nonuniformly spaced Pareto-optimal
front:

    Minimize f1(x) = 1 − exp(−4x1) sin⁶(6πx1)

    Minimize f2(x) = g(x)[1 − (f1(x)/g(x))²]

    g(x) = 1 + 9 [(Σ_{i=2}^{n} x_i) / (n − 1)]^0.25

where n = 10 and x_i ∈ [0, 1]. The optimal solutions are x1 ∈ [0, 1] and x_i = 0, i = 2, ..., n.
Problem 6 is another hard problem. The adverse density of solutions across the Pareto-optimal
front, together with the nonconvex nature of the front, makes it difficult for many multiobjective
optimization algorithms to maintain a well-distributed nondominated set and to converge to the
true Pareto-optimal front. From Table IX, we observe that MOPSO and NSGA-II could not
converge to the true Pareto front after 10,000 FES. Figure 6 also shows that MOPSO and
NSGA-II have difficulty converging to the Pareto-optimal front, whereas MOCLPSO converges
to the true front with a good spread of solutions along it.

From the above experimental results, we find that MOCLPSO converges on all six test problems
with only 10,000 FES, clearly outperforming the other two algorithms on the larger dimensional
problems. On most problems, both the mean and variance values of the performance metrics
obtained by MOCLPSO are smaller than those yielded by the other two MOEAs, which
demonstrates that MOCLPSO is an effective and stable algorithm.

5. CONCLUSIONS
This article presented a novel proposal to extend CLPSO, with an external archive, to tackle
multiobjective optimization problems. We evaluated the proposed approach on six test problems
commonly used in the literature. The results demonstrate that combining CLPSO with a crowding
distance-based archive maintenance strategy yields a simple, effective, and stable multiobjective
evolutionary algorithm. The main advantage of MOCLPSO is that it converges quickly to the true
Pareto-optimal front with fewer FES, while maintaining good diversity along the front. On these
test problems, the proposed MOCLPSO significantly outperforms two other representative
multiobjective evolutionary algorithms.

International Journal of Intelligent Systems DOI 10.1002/int
