Abstract— Solving optimization problems with multiple (often conflicting) objectives is generally a quite difficult goal. Evolutionary Algorithms (EAs) were initially extended and applied during the mid-eighties in an attempt to stochastically solve problems of this generic class. During the past decade a multiplicity of Multiobjective EA (MOEA) techniques have been proposed and applied to many scientific and engineering applications. Our discussion's intent is to rigorously define and execute a quantitative MOEA performance comparison methodology. Almost all comparisons cited in the current literature visually compare algorithmic results, resulting in only relative conclusions. Our methodology gives a basis for absolute conclusions regarding MOEA performance. Selected results from its execution with four MOEAs are presented and described.

1 Introduction

Multiobjective Evolutionary Algorithms (MOEAs) are now a well-established field within Evolutionary Computation, initially developed in the mid-eighties when the first MOEA specifically designed to solve Multiobjective Optimization Problems (MOPs) was implemented. Over 500 publications (Coello, 1999b) have since proposed various MOEA implementations and applications, and to a much lesser extent, underlying MOEA theory.

Solving MOPs with MOEAs presents several unique challenges (e.g., vector-valued fitness and sets of solutions). Space does not permit a detailed presentation of all material necessary to understand this dynamic and rapidly growing research field. For a good introduction to the relevant issues and past EA-based approaches, see Van Veldhuizen (Van Veldhuizen, 1999), Coello Coello (Coello, 1999a), and Fonseca and Fleming (Fonseca and Fleming, 1995). This paper instead focuses on rigorously defining and executing a quantitative MOEA performance comparison methodology.

The remainder of this paper is organized as follows. Section 2 presents relevant MOP concepts while Section 3 outlines a quantitative MOEA experimental methodology. Sections 4 and 5 discuss experimental results and observations, followed by our conclusions in Section 6.

2 MOP Background

MOPs (as a rule) present a possibly uncountable set of solutions, which when evaluated produce vectors whose components represent trade-offs in decision space. We have defined an MOP's global optimum to be the Pareto front determined by evaluating each member of the Pareto optimal solution set (Van Veldhuizen, 1999). Since Pareto concepts are crucial components of the MOP domain and MOEA implementations, their incarnations are now briefly described.

During MOEA execution, a "local" set of Pareto optimal solutions (with respect to the current MOEA generational population) is determined at each EA generation and termed Pcurrent(t), where t represents the generation number. Many MOEA implementations also use a secondary population, storing all/some Pareto optimal solutions found through the generations (Van Veldhuizen, 1999). This secondary population is termed Pknown(t), also annotated with t (representing completion of t generations) to reflect possible changes in its membership during MOEA execution. Pknown(0) is defined as ∅ (the empty set) and Pknown alone as the final, overall set of Pareto optimal solutions returned by an MOEA. Of course, the true Pareto optimal solution set (termed Ptrue) is not explicitly known for MOPs of any difficulty; Ptrue is defined by the functions composing an MOP, and it is fixed and does not change.

Pcurrent(t), Pknown, and Ptrue are sets of MOEA genotypes where each set's phenotypes form a Pareto front. We term the associated Pareto front for each of these solution sets PFcurrent(t), PFknown, and PFtrue. Thus, when using an MOEA to solve MOPs, one implicitly assumes that one of the following conditions holds: PFknown = PFtrue, or that over some norm (Euclidean, RMS, etc.), PFknown ∈ [PFtrue, PFtrue + ε].

3 MOEA Experiments

The major goal of these experiments is to compare well-engineered MOEAs in terms of effectiveness and efficiency.
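Section 2's Pareto definitions translate directly into code. The following is an illustrative Python sketch (the function names are ours, not the paper's, and all objectives are assumed to be minimized) of determining Pcurrent(t) from a generation's objective vectors and maintaining the secondary population Pknown:

```python
def dominates(u, v):
    """True if objective vector u Pareto-dominates v (all objectives minimized):
    u is no worse than v everywhere and strictly better somewhere."""
    return all(a <= b for a, b in zip(u, v)) and any(a < b for a, b in zip(u, v))

def nondominated(vectors):
    """Return the Pareto-optimal (nondominated) subset of a list of vectors."""
    return [u for u in vectors
            if not any(dominates(v, u) for v in vectors if v is not u)]

def update_pknown(pknown, pcurrent):
    """Merge Pcurrent(t) into the secondary population and re-filter, so
    Pknown(t) holds only solutions nondominated over all generations so far."""
    merged = list(pknown) + [u for u in pcurrent if u not in pknown]
    return nondominated(merged)
```

Starting from Pknown(0) = ∅ (an empty list) and calling `update_pknown` once per generation yields the final Pknown returned by the MOEA.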
Figure 1: PFknown/PFtrue Example

d2 = √((3 − 3)² + (6 − 6)²) = 0, d3 = √((5 − 4)² + (4 − 4)²) = 1, and G = √(1.118² + 0² + 1²)/3 = 1.5/3 = 0.5.

3.4.2 Spacing

This metric is a value measuring the spread (distribution) of vectors throughout PFknown. Because PFknown's "beginning" and "end" are known, a suitably defined metric judges how well PFknown is distributed. Schott (Schott, 1995) proposes such a metric measuring the range (distance) variance of neighboring vectors in PFknown. Called spacing, he defines this metric as:

S = √( (1/(n − 1)) Σᵢ₌₁ⁿ (d̄ − dᵢ)² ),

where dᵢ = minⱼ (|f₁ⁱ(x) − f₁ʲ(x)| + |f₂ⁱ(x) − f₂ʲ(x)|), i, j = 1, …, n, d̄ is the mean of all dᵢ, and n is the number of vectors in PFknown.

All MOEAs are executed on the same computational platform for consistency. The host is a Sun Ultra 60 workstation with dual 300 MHz processors and 512 MB RAM, running Solaris 2.5.1. Many other computational platforms would suffice, but this high-end host offers exclusive access. The MOMGA and NPGA, extensions of existing algorithms and specific software, are written in "C" and compiled using the Sun Workshop Compiler version C 4.2. Much of our associated research and related experimentation employs a specialized MATLAB EA-based toolbox; the MOGA and NSGA are written as self-contained "m-files" leveraging pre-defined toolbox routines. These MOEAs are also executed on the Sun platform described previously, but within the MATLAB 5.2 environment. Timing results are not of specific experimental concern here; however, for all experimental MOPs, each MOEA run executes in a matter of minutes.
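Both comparative metrics can be computed directly from their definitions. Below is an illustrative Python sketch (not the authors' code): `generational_distance` implements G with p = 2 as in Section 3.4.1, and `spacing` implements Schott's metric; the point sets used in the test are hypothetical values chosen to be consistent with the Figure 1 worked example.

```python
import math

def generational_distance(pf_known, pf_true):
    """G = sqrt(sum of squared nearest-neighbor Euclidean distances
    from each vector in PFknown to PFtrue) / n."""
    dists = [min(math.dist(u, v) for v in pf_true) for u in pf_known]
    return math.sqrt(sum(d * d for d in dists)) / len(pf_known)

def spacing(pf_known):
    """Schott's spacing: sample standard deviation of each vector's
    minimum Manhattan distance to any other vector in PFknown."""
    n = len(pf_known)
    d = [min(sum(abs(a - b) for a, b in zip(u, v))
             for j, v in enumerate(pf_known) if j != i)
         for i, u in enumerate(pf_known)]
    dbar = sum(d) / n
    return math.sqrt(sum((dbar - di) ** 2 for di in d) / (n - 1))
```

A spacing value of zero indicates that the vectors of PFknown are equidistantly spaced; larger values indicate a more uneven distribution.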
Figure 8: MOGA NVA
MOEA tested (see Section 5.1). Figure 8 shows Nondominated Vector Addition (NVA) values recorded during a MOGA run. This metric is defined as the change in cardinality between PFknown(t) and PFknown(t + 1) (Van Veldhuizen, 1999). This metric's value may remain steady for several generations and sometimes even shows a net loss of solutions from PFknown.

The final generational distance metric (see Section 3.4.1) indicates an MOEA's convergence to PFtrue, but it may also be used to measure each generational population's convergence. In this case, one expects G to generally decrease during MOEA execution if convergence is occurring, although the contextual nature of determining Pareto dominance means metric values may sometimes increase.

6 Conclusion

This paper proposes and implements a quantitative MOEA performance comparison methodology, presenting experimental results and analyses. Using three practical MOP comparative metrics (generational distance, overall nondominated vector generation, and spacing), nonparametric statistical analyses then show NSGA performance to be statistically worse than the other tested MOEAs (over the test problems). It concludes with selected results and observations of related experiments.

References

Coello, C. A. C. (1999a). A Comprehensive Survey of Evolutionary-Based Multiobjective Optimization Techniques. Knowledge and Information Systems, 1(3):269-308.

Coello, C. A. C. (1999b). List of References on Evolutionary Multiobjective Optimization. Online. Available: http://www.lania.mx/EMOO.

Deb, K. (1999). Multi-Objective Genetic Algorithms: Problem Difficulties and Construction of Test Problems. Evolutionary Computation, 7(3):205-230.

Fonseca, C. M. and Fleming, P. J. (1995). An Overview of Evolutionary Algorithms in Multiobjective Optimization. Evolutionary Computation, 3(1):1-16.

Fonseca, C. M. and Fleming, P. J. (1998). Multiobjective Optimization and Multiple Constraint Handling with Evolutionary Algorithms - Part I: A Unified Formulation. IEEE Transactions on Systems, Man and Cybernetics - Part A: Systems and Humans, 28(1):26-37.

Goldberg, D. E., Korb, B., and Deb, K. (1989). Messy Genetic Algorithms: Motivation, Analysis, and First Results. Complex Systems, 3:493-530.

Horn, J., Nafpliotis, N., and Goldberg, D. E. (1994). A Niched Pareto Genetic Algorithm for Multiobjective Optimization. In Michalewicz, Z., editor, Proceedings of the First IEEE Conference on Evolutionary Computation, pages 82-87, Piscataway, NJ. IEEE Service Center.

Jackson, R. H. F., Boggs, P. T., Nash, S. G., and Powell, S. (1991). Guidelines for Reporting Results of Computational Experiments - Report of the Ad Hoc Committee. Mathematical Programming, 49:413-425.

McClave, J. T., Benson, P. G., and Sincich, T. (1998). Statistics for Business and Economics. Prentice Hall, Upper Saddle River, NJ, 7th edition.

Neter, J., Wasserman, W., and Kutner, M. H. (1990). Applied Linear Statistical Models: Regression, Analysis of Variance, and Experimental Designs. Irwin, Boston, MA, 3rd edition.

Schott, J. R. (1995). Fault Tolerant Design Using Single and Multicriteria Genetic Algorithm Optimization. Master's thesis, Department of Aeronautics and Astronautics, Massachusetts Institute of Technology, Cambridge, Massachusetts.

Shaw, K. J., Nortcliffe, A. L., Thompson, M., Love, J., Fonseca, C. M., and Fleming, P. J. (1999). Assessing the Performance of Multiobjective Genetic Algorithms for Optimization of a Batch Process Scheduling Problem. In 1999 Congress on Evolutionary Computation, pages 37-45, Washington, D.C. IEEE Service Center.

Srinivas, N. and Deb, K. (1994). Multiobjective Optimization Using Nondominated Sorting in Genetic Algorithms. Evolutionary Computation, 2(3):221-248.

Van Veldhuizen, D. A. (1999). Multiobjective Evolutionary Algorithms: Classifications, Analyses, and New Innovations. PhD Thesis, AFIT/DS/ENG/99-01, Air Force Institute of Technology, Wright-Patterson AFB.

Van Veldhuizen, D. A. and Lamont, G. B. (2000). Multiobjective Optimization with Messy Genetic Algorithms. In Proceedings of the 2000 Symposium on Applied Computing, pages 470-476. ACM.

Wolpert, D. H. and Macready, W. G. (1997). No Free Lunch Theorems for Optimization. IEEE Transactions on Evolutionary Computation, 1(1):67-82.

Zitzler, E. and Thiele, L. (1999). Multiobjective Evolutionary Algorithms: A Comparative Case Study and the Strength Pareto Approach. IEEE Transactions on Evolutionary Computation, 3(4):257-271.