20.1 Introduction
Multi-objective evolutionary algorithms (MOEAs) are a class of stochastic optimization techniques that simulate biological evolution to solve problems with multiple
objectives. Multi-objective (MO) optimization is a challenging research topic because it involves the simultaneous optimization of several complex objectives in the
Pareto optimal sense and requires researchers to address many issues that are unique
to MO problems. The advances made in MOEA design are the result of two decades of
intense research examining topics such as fitness assignment [9, 18], diversity preservation [4, 16], the balance between exploration and exploitation [1], and elitism [17] in
the context of MO optimization. Although the application of evolutionary multi-objective optimization (EMOO) to real-world problems [8, 24] has been gaining
significant attention from researchers in different fields, there is a distinct lack of
studies investigating the issues of uncertainty in the literature [11].
C.K. Goh and K.C. Tan: Evolving the Tradeoffs between Pareto-Optimality and Robustness
in Multi-Objective Evolutionary Algorithms, Studies in Computational Intelligence (SCI) 51,
457-478 (2007)
© Springer-Verlag Berlin Heidelberg 2007
www.springerlink.com
min_{x∈X^nx} f(x) = {f1(x), f2(x), ..., fM(x)}   (20.1)
and PS_det. On the other hand, the robust Pareto front (PF_rob) and solution set
(PS_rob) will refer to the set of tradeoffs and solutions discovered with explicit robustness
considerations, respectively. Given a particular set of desirable robust properties, the
optimal robust Pareto front and solution set will be denoted as PF*_rob and PS*_rob,
respectively.
The MO problem formulated in the previous section reflects the conventional
methodology adopted in the vast majority of the optimization literature, which assumes that the MO problem is deterministic and that the core optimization concern
is the maximization of solution set quality. However, Pareto optimality does not
necessarily mean that any of the solutions along the tradeoff is desirable or even
implementable in practice. This is primarily because such a deterministic approach
neglects the fact that real-world problems are characterized by uncertainty. Furthermore, with the increasing complexity of modern applications, it is not possible to
model any problem with absolute certainty even with state-of-the-art modeling
techniques.
In order to reduce the consequences of uncertainty on the optimality and practicality of the solution set, factors such as decision variable variation, environmental
variation and modeling uncertainty have to be considered explicitly. Therefore, the
minimization MO problem is redefined as follows.
min_{x∈X^nx} f(x, δx, δe) = {f1(x, δx, δe), f2(x, δx, δe), ..., fM(x, δx, δe)}   (20.2)
over the years with the aim of fulfilling the three optimization goals. Based on
the different cost assignments and selection strategies, EAs for MO optimization
can be broadly classified into two classes: non-Pareto approaches and Pareto-based
approaches. Although the Pareto-based MOEA is the dominant approach, recent studies have revealed the deficiencies of such techniques in handling high-dimensional
problems. Nevertheless, most of the evolutionary techniques for MO optimization
exhibit common characteristics and can be represented by the pseudocode shown in
Fig. 20.1.
Based on the state-of-the-art, MOEAs are characterized by some form of diversity preservation mechanism and elitism. Diversity preservation operators are
applied to ensure diversity and a uniform distribution of solutions. Instances of
diversity mechanisms include niche sharing, crowding, grid mapping, and clustering.
In the case of elitism, it involves 1) the storage of the best solutions found along
the evolutionary process and/or 2) the reinsertion of elitist solutions into the evolving population to improve convergence. A variety of evolutionary operators such as
mutation and crossover have been used in the literature. Note that variation can
also be performed by some form of local search operator or heuristic.
Population Initialization
Fitness Evaluation
Update Archive
While (Stopping criteria not satisfied)
    Selection based on fitness and diversity
    Elitist Selection
    Evolutionary variation
    Fitness Evaluation
    Update Archive
End While
Output Archive

Fig. 20.1. Framework of MOEA
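The loop in Fig. 20.1 can be sketched as follows. This is a minimal, self-contained Python illustration with toy operators (random initialization, blend crossover with Gaussian mutation, and random archive truncation); it is not the chapter's exact implementation, which truncates the archive by niche count and uses Pareto ranking for selection.

```python
import random

def dominates(fa, fb):
    """fa Pareto-dominates fb (minimization): no worse in every objective,
    strictly better in at least one."""
    return all(a <= b for a, b in zip(fa, fb)) and any(a < b for a, b in zip(fa, fb))

def update_archive(archive, candidates, max_size=50):
    """Keep only non-dominated (x, f) pairs; truncate randomly when full
    (the chapter truncates by niche count instead)."""
    for xf in candidates:
        if any(dominates(f_other, xf[1]) for _, f_other in archive):
            continue
        archive = [(x2, f2) for x2, f2 in archive if not dominates(xf[1], f2)]
        archive.append(xf)
    while len(archive) > max_size:
        archive.pop(random.randrange(len(archive)))
    return archive

def moea(evaluate, n_var, pop_size=40, max_gens=50):
    """Skeleton of the loop in Fig. 20.1: initialize, evaluate, update the
    archive, then iterate selection, variation, evaluation and archiving."""
    pop = [[random.random() for _ in range(n_var)] for _ in range(pop_size)]
    archive = update_archive([], [(x, evaluate(x)) for x in pop])
    for _ in range(max_gens):
        # Elitist selection: parents are drawn from the archive
        parents = [random.choice(archive)[0] for _ in range(pop_size)]
        # Evolutionary variation: blend crossover plus Gaussian mutation
        pop = []
        for p in parents:
            q = random.choice(parents)
            child = [min(1.0, max(0.0, 0.5 * (a + b) + random.gauss(0.0, 0.1)))
                     for a, b in zip(p, q)]
            pop.append(child)
        archive = update_archive(archive, [(x, evaluate(x)) for x in pop])
    return archive

# Toy bi-objective problem: minimize f1 = x1 and f2 = 1 - x1 + x2
front = moea(lambda x: (x[0], 1.0 - x[0] + x[1]), n_var=2)
```

The archive returned is guaranteed to be mutually non-dominated, since each insertion first checks for domination by existing members and then evicts members the newcomer dominates.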
Δfi(x) = max_{x′∈[x−δlb, x+δup]} fi(x′) − fi(x)   (20.3)
[Figure (flowchart): Initialize Population, Evaluation process (GA evaluation), Pareto Ranking, Elitist Selection, Uniform Crossover, AMO, Add offspring to population, Update Archive.]
where x′ ∈ [x − δlb, x + δup]. From (20.3), it can be noted that the measure reflects
the degree of variation resulting from the worst objective value.
With respect to the latter issue of incorporating the robust measure, the presence of multiple competing objectives in MO optimization leads to the issue of how
the robust measure can be implemented without a substantial increase in problem
dimensionality. In particular, the objective space will be doubled if the worst case
situation for objective variation is considered independently, i.e. a 20-objective problem will become a 40-objective problem! Therefore, it is more appropriate to
consider the minimization of the worst possible case over all objectives, and the MO
problem to be solved can now be written as
min_{x∈X^nx} f(x, δx, δe) = {f1(x), f2(x), ..., fM(x), fM+1(x)}   (20.4)

where fM+1(x) = max_{i=1,2,...,M} Δfi(x)
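A worst-case measure of this kind can be approximated numerically. The sketch below estimates the additional objective by random sampling of the perturbation neighborhood; the chapter instead evolves the worst case with an embedded GA under Tabu restriction, so plain sampling and the symmetric perturbation bound are simplifying assumptions here.

```python
import random

def worst_case_objective(f, x, delta=0.05, n_samples=200):
    """Approximate f_{M+1}(x) = max_i Delta f_i(x) by sampling perturbed
    points x' in [x - delta, x + delta] and recording the largest
    degradation of any objective (minimization assumed)."""
    f_nom = f(x)
    worst = float("-inf")
    for _ in range(n_samples):
        # Draw a random perturbation of every decision variable
        xp = [xi + random.uniform(-delta, delta) for xi in x]
        f_pert = f(xp)
        # Largest objective-value change caused by this perturbation
        worst = max(worst, max(fp - fn for fp, fn in zip(f_pert, f_nom)))
    return worst
```

A perfectly flat function yields zero variation, while a function that changes with its input yields a positive measure bounded by its sensitivity over the neighborhood.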
its neighborhood. The constrained Pareto ranking presented in this chapter is based
on a constrain-dominance scheme that is similar to the scheme proposed in [3]. In
particular, an individual f1 constrain-dominates the individual f2 if,
1. f1 is feasible and f2 is infeasible.
2. Both solutions are infeasible and f1 dominates f2 in terms of the different
constraint violations.
3. Both solutions are infeasible, incomparable in terms of the different constraint
violations, and f1 dominates f2 in terms of the objective values.
4. Both solutions are feasible and f1 dominates f2 in terms of the objective values.
Actual ranking is based on the scheme presented by Fonseca and Fleming [10],
which assigns the same smallest cost to all non-dominated individuals, while the
dominated individuals are ranked according to how many individuals in the population dominate them. Therefore, the rank of an individual in the population is given
by 1 + DC, where DC is the number of individuals constrain-dominating
the particular individual in the objective domain.
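The four constrain-dominance rules and the 1 + DC ranking can be expressed compactly. The sketch below assumes each individual carries an objective vector and a vector of non-negative constraint violations (all zeros meaning feasible); these data-structure choices are illustrative, not the chapter's.

```python
def dominates(a, b):
    """a Pareto-dominates b under minimization."""
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

def constrain_dominates(obj_a, viol_a, obj_b, viol_b):
    """Constrain-dominance following the four rules in the text."""
    feas_a = all(v == 0 for v in viol_a)
    feas_b = all(v == 0 for v in viol_b)
    if feas_a and not feas_b:              # rule 1: feasible beats infeasible
        return True
    if not feas_a and not feas_b:
        if dominates(viol_a, viol_b):      # rule 2: dominance in violations
            return True
        if not dominates(viol_b, viol_a):  # rule 3: incomparable violations,
            return dominates(obj_a, obj_b) # decide on the objectives
        return False
    if feas_a and feas_b:                  # rule 4: dominance in objectives
        return dominates(obj_a, obj_b)
    return False

def pareto_rank(objs, viols):
    """Fonseca-Fleming style rank: 1 + number of constrain-dominating
    individuals (DC) for each population member."""
    n = len(objs)
    return [1 + sum(constrain_dominates(objs[j], viols[j], objs[i], viols[i])
                    for j in range(n) if j != i)
            for i in range(n)]
```

With three feasible individuals, the two non-dominated ones receive the smallest rank of 1, while the dominated one is ranked by its dominator count.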
The pseudocode of the GA is shown in Fig. 20.3. The GA begins with the
creation of POP_SIZE_GA neighboring points about the individual under evaluation. The Tabu restriction assisted assessment presented in the previous section is
implemented for the evaluation of these individuals. Since the GA seeks to discover the worst case performance, the GA is actually optimizing a MO problem
that maximizes 1) the worst case objective in (20.3) and 2) the worst constraint violation. At every generation, the worst performance found is updated and the
best individual is stored. Elitism is applied by reinserting the best individual
into the mating pool. Binary tournament selection is then conducted on the evolving population to fill up the mating pool. After that, the mating pool undergoes
uniform crossover and AMO.
[Fig. 20.3. Pseudocode of the GA]
archive if it is not dominated by any members in the archive. Likewise, any archive
members dominated by this solution will be removed from the archive. In order to
preserve a diverse solution set, a recurrent truncation process based on the niche
count is used to remove the most crowded archive member when the predetermined
archive size is reached.
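The archive update described above can be sketched as follows. The niche radius and archive size are illustrative assumptions, since the chapter's exact parameter values are not reproduced here.

```python
def dominates(a, b):
    """a Pareto-dominates b under minimization."""
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

def niche_count(f, others, radius):
    """Number of other archive members within `radius` (Euclidean) of f."""
    return sum(sum((a - b) ** 2 for a, b in zip(f, g)) ** 0.5 < radius
               for g in others)

def update_archive(archive, solution, max_size=100, radius=0.1):
    """Insert an objective vector if it is non-dominated, evict members it
    dominates, and recurrently truncate the most crowded member whenever
    the archive exceeds its predetermined size."""
    if any(dominates(a, solution) for a in archive):
        return archive
    archive = [a for a in archive if not dominates(solution, a)]
    archive.append(solution)
    while len(archive) > max_size:
        # Remove the member with the highest niche count (most crowded)
        counts = [niche_count(a, [b for b in archive if b is not a], radius)
                  for a in archive]
        archive.pop(counts.index(max(counts)))
    return archive
```

Dominated candidates never enter the archive, and the truncation loop only ever discards the most crowded member, so boundary solutions in sparse regions tend to survive.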
(20.5)
(20.6)
where x_I and x_II represent the subsets of decision variables associated with solution
optimality and solution robustness respectively. The functions g(·) and r(·) control
the difficulty and landscape sensitivity of the problem respectively. To minimize
any complications arising from problem characteristics, a simple g(·) is used:

g(x_I) = 1 + (1/(|x_I| − 1)) Σ_{i=2}^{|x_I|} x_i   (20.7)
Test problem 1
Test problem 1 is an instantiation of a Type III problem described by Deb and
Gupta [5]. The r(·) function is defined as

r(x_II) = 1 − max_{i=1,...,D} d_i · exp( −Σ_{j=1}^{|x_II|} (x_j − μ_{i,j})² / w_i )   (20.8)

where D = 2. The various parameter settings are shown in Table 20.1.
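The trough-based r(·) function is straightforward to implement. In the sketch below, the trough depths and widths follow Table 20.1 for test problem 1, while the trough centres μ_{i,j} are illustrative placeholders only, since the table's location values are not reproduced in this text.

```python
import math

# Trough depths d_i and widths w_i for test problem 1 (Table 20.1)
D_I = [0.8, 1.0]
W_I = [0.1, 0.006]
# Trough centres mu_{i,j}: hypothetical values chosen for illustration
MU = [[0.3, 0.3], [0.5, 0.5]]

def r(x_ii, d=D_I, w=W_I, mu=MU):
    """Landscape-sensitivity function of (20.8) as reconstructed here:
    r = 1 - max_i d_i * exp(-sum_j (x_j - mu_ij)^2 / w_i)."""
    return 1.0 - max(
        di * math.exp(-sum((xj - mij) ** 2 for xj, mij in zip(x_ii, mui)) / wi)
        for di, wi, mui in zip(d, w, mu))
```

At the centre of a trough with depth 1.0 the function drops to 0, while far from every trough it approaches 1, which is what makes the troughs attractive in the deterministic sense but sensitive to perturbation.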
[Figure: the troughs in the decision space (x1, x2 ∈ [0, 1]) and the feasible and infeasible regions of the tradeoff front (f1 ∈ [15, 16]) for test problem 1.]
Test problem 2
Test problem 2 is similar to test problem 1 except that five troughs are defined in
this problem. The parameter settings are given in Table 20.1. The PF*_det is located
at x1 = 0.3 and x2 = 0.5, where x1, x2 ∈ x_II. While the most robust front is located at
x1 = 0.6 and x2 = 0.8, the most desirable front will depend on the
noise level. The different troughs and Pareto fronts are illustrated in Fig. 20.6 and
Fig. 20.7 respectively. The only unfavorable peak, regardless of noise intensity, is
located at x1 = 0.8 and x2 = 0.2. The main difficulty is the discovery
of the different robustness tradeoffs. In this chapter, |x_II| = 2 for both problems.
[Fig. 20.6 and Fig. 20.7: the troughs in the decision space (x1, x2 ∈ [0, 1]) and the feasible and infeasible regions of the tradeoff front (f1 ∈ [15, 16]) for test problem 2.]
Table 20.1. Trough parameters of the test problems: depths (d_i) and widths (w_i)

Test Problem 1: d1 = 0.8, d2 = 1.0; w1 = 0.1, w2 = 0.006
Test Problem 2: d1 = 0.75, d2 = 0.8, d3 = 0.4, d4 = 1.0, d5 = 0.9; w1 = 0.1, w2 = 0.05, w3 = 0.2, w4 = 0.01, w5 = 0.01
I-Beam Design
The design of the I-beam involves four decision variables, two objectives, as well as
geometric and strength constraints. The two minimization objectives are given by,
1. f1, the cross section area of the beam,

f1(x) = 2x2x4 + x3(x1 − 2x4)   (20.9)

2. f2, the deflection of the beam,

f2(x) = 60,000 / [x3(x1 − 2x4)³ + 2x2x4(4x4² + 3x1(x1 − 2x4))]   (20.10)

where x1, x2, x3 and x4 are the decision variables. The geometric constraints are
given as,
10 ≤ x1 ≤ 80   (20.11)
10 ≤ x2 ≤ 50   (20.12)
0.9 ≤ x3 ≤ 5   (20.13)
0.9 ≤ x4 ≤ 5   (20.14)
while the strength constraint is,

16 − 180,000x1 / [x3(x1 − 2x4)³ + 2x2x4(4x4² + 3x1(x1 − 2x4))] − 15,000x2 / [x3³(x1 − 2x4) + 2x2³x4] ≥ 0   (20.15)
For more information about this problem, interested readers are referred to [7].
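The objectives and constraints above translate directly into code. The sketch below follows one conventional reading of equations (20.9)-(20.15), which were reconstructed from a damaged source, so the exact grouping of terms should be checked against [7].

```python
def ibeam_objectives(x1, x2, x3, x4):
    """f1: cross-section area (20.9); f2: deflection term (20.10)."""
    denom = (x3 * (x1 - 2 * x4) ** 3
             + 2 * x2 * x4 * (4 * x4 ** 2 + 3 * x1 * (x1 - 2 * x4)))
    f1 = 2 * x2 * x4 + x3 * (x1 - 2 * x4)
    f2 = 60000.0 / denom
    return f1, f2

def ibeam_feasible(x1, x2, x3, x4):
    """Geometric bounds (20.11)-(20.14) and strength constraint (20.15)."""
    if not (10 <= x1 <= 80 and 10 <= x2 <= 50
            and 0.9 <= x3 <= 5 and 0.9 <= x4 <= 5):
        return False
    denom = (x3 * (x1 - 2 * x4) ** 3
             + 2 * x2 * x4 * (4 * x4 ** 2 + 3 * x1 * (x1 - 2 * x4)))
    strength = (16
                - 180000.0 * x1 / denom
                - 15000.0 * x2 / (x3 ** 3 * (x1 - 2 * x4) + 2 * x2 ** 3 * x4))
    return strength >= 0
```

At the upper corner of the variable bounds (x1 = 80, x2 = 50, x3 = x4 = 5) the area evaluates to 850 and the design satisfies the strength constraint, while any point outside the geometric bounds is immediately infeasible.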
Fig. 20.8. (a) Variables x1 ∈ x_II and x2 ∈ x_II of the evolved solution set and
(b) final tradeoffs evolved by mRMOEA for test problem 1
Algorithm settings compared: RMOEA, w/o AMO, w/o GA, w/o re-evaluation, w/o elitist selection
Fig. 20.9. (a) Variables x1 ∈ x_II and x2 ∈ x_II of the evolved solution set and
(b) final tradeoffs evolved by RMOEA for test problem 1
The performance of the various settings, in terms of the number of Pareto fronts
that the various settings failed to evolve and the distance to the ideal solution
set in the decision space, is shown in Fig. 20.12(a) and Fig. 20.12(b) respectively.
From the results, the role that AMO plays in allowing RMOEA to locate the various
fronts is clear. Furthermore, the absence of archival re-evaluation or the GA
also prevents the algorithm from evolving the various tradeoffs. Note that the one
peak that RMOEA failed to detect is mostly associated with the unfavorable front
mentioned before. The importance of re-evaluation is also clear from Fig. 20.12(b),
where uncertainty prevents the algorithm from finding a near optimal front.
Fig. 20.10. (a) Variables x1 ∈ x_II and x2 ∈ x_II of the evolved solution set and
(b) final tradeoffs evolved by mRMOEA for test problem 2
I-Beam Design
This section examines the performance of RMOEA on the practical problem of
I-beam design. One of the key issues is to maintain the feasibility of the solution under uncertainties. For comparison, an instance of RMOEA that optimizes
the efficient objective values is applied and denoted as MOEA. The evolved tradeoffs of MOEA and RMOEA for the I-beam problem are plotted in Fig. 20.13 and
Fig. 20.14 respectively. The crosses represent the feasible solutions while the open
circles represent the infeasible solutions.
Fig. 20.11. (a) Variables x1 ∈ x_II and x2 ∈ x_II of the evolved solution set and
(b) final tradeoffs evolved by RMOEA for test problem 2
From the two figures, it can be noted that the I-beam problem represents an
instance of a Type I problem [5], with PF_rob coinciding with PF_det. Nonetheless,
part of the PF_det is now infeasible. Apart from demonstrating that RMOEA is
capable of discovering solutions that remain feasible under the noise level applied,
the importance of considering robustness explicitly during the optimization process
is made clear by comparing the two figures.
Fig. 20.12. (a) Number of fronts undiscovered and (b) Euclidean distance from the
ideal solution for test problem 2
[Fig. 20.13. Evolved tradeoffs of MOEA for the I-beam problem]
20.5 Conclusions
This chapter presents a robust multi-objective evolutionary algorithm for constrained multi-objective optimization. Since the effective Pareto front is sensitive
to the noise intensity applied, the proposed RMOEA explicitly considers the tradeoffs between Pareto optimality and worst case performance. In particular, RMOEA
incorporates the features of GA, Tabu restriction and periodic re-evaluation. It
has been demonstrated that RMOEA, augmented with the additional objective of
the proposed worst case measure, is capable of evolving the tradeoffs between optimality and robustness. In addition, the contributions of the various features to the
overall performance of RMOEA have been examined.
[Fig. 20.14. Evolved tradeoffs of RMOEA for the I-beam problem]
Acknowledgments
The authors would like to thank Yew-Soon Ong for discussions on robustness in
evolutionary computation.
References
1. P. Bosman and D. Thierens, The balance between proximity and diversity in
multiobjective evolutionary algorithms, IEEE Transactions on Evolutionary
Computation, vol. 7, no. 2, pp. 174-188, 2003.
2. J. Branke, Creating robust solutions by means of evolutionary algorithms, in
Proceedings of the 5th International Conference on Parallel Problem Solving
from Nature, pp. 119-128, 1998.
3. K. Deb, Multi-objective Optimization Using Evolutionary Algorithms, John
Wiley & Sons, New York, 2001.
4. K. Deb and D. E. Goldberg, An investigation on niche and species formation in genetic function optimization, in Proceedings of Third International
Conference on Genetic Algorithms, pp. 42-50, 1989.
5. K. Deb and H. Gupta, Introducing robustness in multiobjective optimization,
Kanpur Genetic Algorithms Lab. (KanGAL), Indian Institute of Technology,
Kanpur, India, Technical Report 2004016, 2004.
6. D. Buche, P. Stoll, R. Dornberger and P. Koumoutsakos, Multiobjective evolutionary algorithm for the optimization of noisy combustion processes, IEEE
Transactions on Systems, Man, and Cybernetics, Part C: Applications and Reviews, vol. 32, no. 4, pp. 460-473, 2002.
7. C. A. Coello Coello, An empirical study of evolutionary techniques for multiobjective optimization in engineering design, Ph.D. dissertation, Department
of Computer Science, Tulane University, New Orleans, LA, 1996.
8. C. A. Coello Coello and A. H. Aguirre, Design of combinational logic circuits
through an evolutionary multiobjective optimization approach, Artificial Intelligence for Engineering, Design, Analysis and Manufacture, Cambridge University Press, vol. 16, no. 1, pp. 39-53, 2002.
9. M. Farina and P. Amato, A fuzzy definition of optimality for many-criteria
optimization problems, IEEE Transactions on Systems, Man, and Cybernetics, Part A: Systems and Humans, vol. 34, no. 3, pp. 315-326, 2004.
10. C. M. Fonseca and P. J. Fleming, Genetic algorithms for multiobjective optimization: formulation, discussion and generalization, in Proceedings of the
Fifth International Conference on Genetic Algorithms, pp. 416-423, 1993.
11. C. K. Goh and K. C. Tan, An investigation on noisy environments in evolutionary multiobjective optimization, IEEE Transactions on Evolutionary Computation, in press.
12. D. E. Goldberg and J. Richardson, Genetic algorithms with sharing for multimodal function optimization, in Proceedings of the Second International Conference on Genetic Algorithms, pp. 41-49, 1987.
13. H. Gupta and K. Deb, Handling constraints in robust multi-objective optimization, in Proceedings of the 2005 IEEE Congress on Evolutionary Computation, pp. 25-32, 2005.
14. Y. Jin and J. Branke, Evolutionary optimization in uncertain environments:
a survey, IEEE Transactions on Evolutionary Computation, vol. 9, no. 3,
pp. 303-317, 2005.
15. Y. Jin and B. Sendhoff, Tradeoff between performance and robustness: an evolutionary multiobjective approach, in Proceedings of the Second Conference
on Evolutionary Multi-Criterion Optimization, pp. 237-251, 2003.
16. E. F. Khor, K. C. Tan, T. H. Lee and C. K. Goh, A study on distribution
preservation mechanism in evolutionary multi-objective optimization, Artificial Intelligence Review, vol. 23, no. 1, pp. 31-56, 2005.
17. M. Laumanns, E. Zitzler and L. Thiele, A unified model for multi-objective
evolutionary algorithms with elitism, in Proceedings of the 2000 Congress on
Evolutionary Computation, vol. 1, pp. 46-53, 2000.
18. H. Lu and G. G. Yen, Rank-based multiobjective genetic algorithm and benchmark test function study, IEEE Transactions on Evolutionary Computation,
vol. 7, no. 4, pp. 325-343, 2003.
19. Y. S. Ong, P. B. Nair and K. Y. Lum, Min-Max Surrogate Assisted Evolutionary Algorithm for Robust Aerodynamic Design, IEEE Transactions on
Evolutionary Computation, vol. 10, no. 4, pp.392-404, 2006.
20. T. Ray, Constrained robust optimal design using a multiobjective evolutionary
algorithm, in Proceedings of the 2002 Congress on Evolutionary Computation,
pp. 419-424, 2002.
21. N. Srinivas and K. Deb, Multiobjective optimization using non-dominated
sorting in genetic algorithms, Evolutionary Computation, vol. 2, no. 3,
pp. 221-248, 1994.
22. K. C. Tan, C. Y. Cheong and C. K. Goh, Solving multiobjective vehicle routing problem with stochastic demand via evolutionary computation, European
Journal of Operational Research, in press.
23. K. C. Tan, C. K. Goh, Y. J. Yang and T. H. Lee, Evolving better population
distribution and exploration in evolutionary multi-objective optimization, European Journal of Operational Research, vol. 171, no. 2, pp. 463-495, 2006.
24. K. C. Tan, T. H. Lee, E. F. Khor and D. C. Ang, Design and real-time implementation of a multivariable gyro-mirror line-of-sight stabilization platform,
Fuzzy Sets and Systems, vol. 128, no. 1, pp. 81-93, 2002.
25. S. Tsutsui and A. Ghosh, Genetic algorithms with a robust solution searching
scheme, IEEE Transactions on Evolutionary Computation, vol. 1, no. 3, pp.
201-208, 1997.
26. S. Tsutsui and A. Ghosh, A comparative study on the effects of adding perturbations to phenotypic parameters in genetic algorithms with a robust solution
searching scheme, in Proceedings of the 1999 IEEE International Conference
on Systems, Man, and Cybernetics, pp. 585-591, 1999.
27. E. Zitzler, K. Deb, and L. Thiele, Comparison of multiobjective evolutionary
algorithms: empirical results, Evolutionary Computation, vol. 8, no. 2, pp.
173-195, 2000.
28. E. Zitzler, L. Thiele, M. Laumanns, C. M. Fonseca and V. G. Fonseca, Performance assessment of multiobjective optimizers: An analysis and review, IEEE
Transactions on Evolutionary Computation, vol. 7, no. 2, pp. 117-132, 2003.