
Multi-objective Evolutionary Optimization

Partha P. Biswas, P. N. Suganthan

School of Electrical and Electronic Engineering
Nanyang Technological University, Singapore
parthapr001@e.ntu.edu.sg, epnsugan@ntu.edu.sg

Abstract: Many computational techniques have been known for years to solve multi-objective optimization problems (MOPs). However, the nature of MOPs has been changing, and many more large-scale multimodal MOPs, computationally expensive MOPs, dynamic MOPs, noisy MOPs, etc. are being introduced in the multi-objective optimization domain. Researchers are thus inspired to look beyond conventional approaches and to focus more on evolutionary optimization techniques. Developments in the field of evolutionary algorithms (EAs) over the last few decades have made the EA an effective tool for complex MOPs. This article provides an overview of multi-objective evolutionary algorithms (MOEAs), different frameworks of MOEAs and the application of MOEAs to various MOPs. Performance indicators for MOEAs and some visualization methods for many-objective optimization problems are also briefly covered.

Keywords: Multi-objective evolutionary algorithm (MOEA) · MOEA frameworks · Multi-objective optimization problems (MOPs) · Complicated MOPs · Performance indicators · Visualization methods
1. INTRODUCTION

Optimization is the process of finding the best feasible solution corresponding to the minimum or maximum value of an objective function. The need for optimization arises from the purpose of designing solutions that are cost-effective, efficient and optimal. Optimization is especially important in the fields of engineering, scientific research and finance [1]. For example, a manufacturer seeks the most efficient production process for a good yield, an investor aims to maximize return at minimum risk, and a traffic planner wants to minimize congestion by finding the best possible ways of diverting traffic. In reality, optimization is an intrinsic part of human life and activity.

Figure 1. Illustration of decision-making process in buying a flight ticket (options A-F plotted by price of ticket, $2000-$4000, versus travel time, 16-32 hours)

The optimization process may involve a single objective or multiple objectives. In single-objective optimization, the task is to find the optimal solution that minimizes (or maximizes) one objective function for a system or process. In multi-objective optimization, where more than one objective is involved, the task is to find one or more optimal solutions. The objectives in multi-objective optimization are often conflicting, which means one objective cannot be improved without worsening one or more other objectives. In such a scenario, an extreme solution, i.e. the best solution for one of the objectives, necessitates a compromise on the other objective(s). Thus, choosing a solution based solely on one objective is inadvisable. Let us consider the example of buying a flight ticket, where the price of the ticket and the travel time are the decision-making criteria. The points A, B, C, D, E and F in Figure 1 represent various options for flying between two cities, Singapore and New York. We assume that the difference in travel time is due to the waiting time for connecting flights at transit. 'Option A' is the most expensive, with a ticket price of $4000 but the least travel time of 16 hours. The cheapest ticket is $2000 with a travel time of 32 hours if one takes 'option E'. Here, the decision-making process of flight booking does not involve a single objective of either price or travel time. The traveler has a few options to choose from, with some trade-off between travel time and price. If one selects 'option B' instead of 'option A', he saves on the ticket price by spending more time in transit. Again, if the traveler selects 'option D' instead of 'option E', he has to shell out more money for the ticket, but he saves a few hours. So, if we move along options (points) A, B, C, D and E, we cannot improve one objective without worsening the other. However, the situation is not the same for 'option F'. A flyer who chooses 'option F' is definitely losing: he can go for 'option B' at the same price with less travel time, or he can opt for 'option D' with the same travel duration at a lower price. Therefore, 'option F' can be improved on both objectives. The points A, B, C, D and E are called Pareto optimal points, named after the famous Italian economist Vilfredo Pareto. Pareto solutions are also termed non-dominated solutions, where betterment of one objective leads to deterioration of at least one other objective. On the contrary, solution F is a dominated solution.

The example of flight options involves two objectives; in general, multi-objective optimization can have more than two. Mathematically, the multi-objective problem is represented as:

Minimize: $F(x) = [f_1(x), f_2(x), \dots, f_M(x)]$  (1)
subject to: $x \in \Omega$

where $\Omega$ is the decision vector (variable) space and $M$ is the number of objectives. Suppose $R^M$ is the objective space for $M$ objectives; therefore, for $x \in \Omega$, $F(x) \in R^M$. The decision variable and objective spaces are illustrated in Figure 2. Let us now consider two objective vectors $u, v \in R^M$. The vector $u$ is said to dominate $v$ if and only if $u_i \le v_i$ for every $i \in \{1, 2, \dots, M\}$ and $u_j < v_j$ for at least one index $j \in \{1, 2, \dots, M\}$. A point $x^* \in \Omega$ is 'Pareto optimal' if there is no point $x \in \Omega$ such that $F(x)$ dominates $F(x^*)$; $F(x^*)$ is called a 'Pareto optimal (objective) vector'. In other words, any improvement of a 'Pareto optimal' point in one objective must lead to deterioration of at least one other objective. The set of all 'Pareto optimal' points is called the 'Pareto set' (PS) and the set of all Pareto optimal objective vectors is the 'Pareto front' (PF) [2]. In the example of flight ticket booking, options A, B, C, D and E are the 'Pareto optimal' solutions that form the 'Pareto set' (PS). The image of the 'Pareto set' (PS) in the objective space is the 'Pareto front' (PF).
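
As a concrete illustration of the dominance relation defined above, the following minimal Python sketch (with hypothetical helper names, assuming minimization of all objectives) checks whether one objective vector dominates another and filters a set of vectors down to its non-dominated subset; the intermediate price/time values for options B-D are illustrative, since the figure does not specify them exactly:

```python
from typing import List, Sequence

def dominates(u: Sequence[float], v: Sequence[float]) -> bool:
    """True if u dominates v (minimization): u is no worse in all
    objectives and strictly better in at least one."""
    return all(ui <= vi for ui, vi in zip(u, v)) and \
           any(ui < vi for ui, vi in zip(u, v))

def non_dominated(points: List[Sequence[float]]) -> List[Sequence[float]]:
    """Return the non-dominated subset of a list of objective vectors."""
    return [p for p in points
            if not any(dominates(q, p) for q in points if q is not p)]

# Flight-ticket example: (price in $, travel time in hours)
options = {"A": (4000, 16), "B": (3500, 20), "C": (3000, 24),
           "D": (2500, 28), "E": (2000, 32), "F": (3500, 28)}
front = non_dominated(list(options.values()))
# 'F' (3500, 28) is dominated by 'B' (same price, less time)
# and by 'D' (same time, lower price), so it is excluded.
```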

Figure 2. Decision variable space and objective space (illustrated for M = 3) in multi-objective optimization: the mapping F: Ω → R^M takes x ∈ Ω in the decision variable space to F(x) ∈ R^M in the objective space with axes f1, f2, f3

In earlier days, the approach to solving MOPs was mainly preference-based, where an MOP was converted into a scalar objective function with a preference vector. A classical method such as the weighted sum method could scalarize a set of objectives with the aid of user-defined weights to obtain a single trade-off solution. However, the need for multiple trade-off solutions compelled researchers to look beyond conventional approaches to MOPs. In 1985, Schaffer [3] first proposed a method named the vector-evaluated genetic algorithm (VEGA), where the real application of evolutionary algorithms (EAs) was observed in finding multiple trade-off solutions, or in other words the PF in objective space. Since then, thousands of articles have been published on MOEAs covering generic methodologies, theoretical developments, special methods, and applications in engineering, scientific problems, business, finance, etc. EAs, being stochastic population-based optimization methods, can approximate the whole PF in a single run. For that reason, EAs have been popular in solving multi-objective optimization problems (MOPs), and these EAs are termed multi-objective evolutionary algorithms (MOEAs). This article aims to provide an overview of the theory, application and recent developments of MOEAs.
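
For illustration, the classical weighted sum scalarization mentioned above reduces an MOP to a single objective with user-defined weights; a one-run sketch (illustrative names only) is:

```python
def weighted_sum(f_values, weights):
    """Scalarize a vector of objective values with user-defined weights."""
    return sum(w * f for w, f in zip(weights, f_values))

# One choice of weights yields one trade-off solution; approximating the
# whole Pareto front this way would require many runs with many weights.
score = weighted_sum([0.4, 0.7], [0.5, 0.5])  # 0.55
```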

In the rest of the paper, frameworks of some commonly used MOEAs are presented in Section 2. Section 3 briefly discusses the operators employed in MOEAs. Section 4 is dedicated to MOEAs applied to handle complicated MOPs. Performance indicators for MOEAs are listed in Section 5. An overview of visualization methods and the application domains of MOEAs are provided in Section 6 and Section 7, respectively. The paper ends with concluding remarks in Section 8.

2. MULTI-OBJECTIVE EVOLUTIONARY ALGORITHM FRAMEWORKS

In the design of an MOEA, the framework of the algorithm is a key issue. The framework adopted in the non-dominated sorting genetic algorithm (NSGA) [4,5] has been the most popular in research and applications. It is worthwhile to note that the majority of the algorithms discussed herein strive to find the entire Pareto front (PF) during the optimization process. However, in some MOEA frameworks, such as preference-based, indicator-based and coevolution-based MOEAs, the search process can be guided towards a specific region of the PF based on the decision maker's requirements. Therefore, algorithms following these frameworks can be tuned to generate a partial PF. In this section, we discuss various frameworks of MOEAs.

2.1 Non-dominated sorting genetic algorithms
The non-dominated sorting approach was first proposed by Goldberg in 1989 [6]. The approach is based on Pareto ranking. Usually, the members of the combined population of parents and offspring are categorized based on adjudged ranks. During the process, the non-dominated solutions are assigned rank 1 (or level 1). Subsequently, the rank 1 individuals are removed from the population, the next set of non-dominated solutions is found and labeled rank 2 (level 2), and the process continues until all members are classified into ranks. Figure 3 illustrates the ranking method in non-dominated sorting.

Goldberg's non-dominated sorting approach was first employed in an MOEA by Srinivas and Deb in 1994 [4] in their algorithm named the non-dominated sorting genetic algorithm (NSGA). As suggested by Goldberg, NSGA also adopted niching techniques (e.g. a fitness sharing mechanism) along with non-dominated sorting to maintain population diversity. The deficiencies of NSGA were mainly its computational complexity, lack of elitism and the need to specify a sharing parameter to maintain population diversity. These drawbacks were addressed by Deb et al. in the non-dominated sorting genetic algorithm II (NSGA-II) [5]. The computational complexity of NSGA was $O(MN^3)$, where $M$ is the number of objectives and $N$ is the population size. The complexity is brought down to $O(MN^2)$ in NSGA-II with the introduction of the domination count and dominated set of each solution. For a solution $p$, one needs to find how many solutions dominate $p$ (the domination count) and the set of solutions that $p$ dominates (the dominated set); this requires $O(MN^2)$ comparisons in total. Elitism compares the current population with the previously found best non-dominated solutions, thus helping to attain better convergence in an MOEA; it is implemented in NSGA-II by comparing parents with offspring at each generation and selecting the best among them for the next generation. A parameter-less diversity mechanism that abolishes the need for a sharing parameter is also proposed in NSGA-II: the density of current population members is calculated, followed by crowding comparison. The crowding distance operator is applied when not all individuals of a rank can be accommodated in the next generation. The individual with the lower domination rank is preferred to the one with the higher domination rank; when two individuals belong to the same rank (level), the individual in the less dense region, i.e. with the higher crowding distance, is preferred.

Figure 3. Ranking method in non-dominated sorting (illustrated for M = 2) [5]: level 1, 2 and 3 fronts in the f1-f2 objective space
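
A compact Python sketch of the two NSGA-II building blocks described above (a straightforward $O(MN^2)$ fast non-dominated sort and the crowding distance of one front) might look as follows; this is an illustrative implementation, not the authors' reference code:

```python
import numpy as np

def fast_non_dominated_sort(F: np.ndarray) -> list:
    """F: (N, M) matrix of objective values (minimization).
    Returns a list of fronts, each a list of row indices."""
    N = len(F)
    S = [[] for _ in range(N)]      # solutions dominated by i (dominated set)
    n = np.zeros(N, dtype=int)      # domination count of i
    fronts = [[]]
    for p in range(N):
        for q in range(N):
            if p == q:
                continue
            if np.all(F[p] <= F[q]) and np.any(F[p] < F[q]):
                S[p].append(q)      # p dominates q
            elif np.all(F[q] <= F[p]) and np.any(F[q] < F[p]):
                n[p] += 1           # q dominates p
        if n[p] == 0:
            fronts[0].append(p)     # p belongs to level 1
    i = 0
    while fronts[i]:
        nxt = []
        for p in fronts[i]:
            for q in S[p]:
                n[q] -= 1
                if n[q] == 0:
                    nxt.append(q)
        fronts.append(nxt)
        i += 1
    return fronts[:-1]              # drop trailing empty front

def crowding_distance(F: np.ndarray, front: list) -> np.ndarray:
    """Crowding distance of the members of one front (higher is better)."""
    d = np.zeros(len(front))
    for m in range(F.shape[1]):
        order = np.argsort(F[front, m])
        d[order[0]] = d[order[-1]] = np.inf   # keep boundary solutions
        span = F[front[order[-1]], m] - F[front[order[0]], m]
        if span > 0:
            for k in range(1, len(front) - 1):
                d[order[k]] += (F[front[order[k + 1]], m]
                                - F[front[order[k - 1]], m]) / span
    return d
```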

NSGA-III [7,8] uses the framework of NSGA-II. The motivation behind the development of NSGA-III is many-objective optimization (four or more objectives) using an effective evolutionary multi-objective optimization algorithm. As the number of objectives in an MOP increases, Pareto-dominance based methods such as NSGA-II and the strength Pareto evolutionary algorithm 2 (SPEA2) lose selection pressure, since most solutions become mutually non-dominated, thereby degrading the performance of the algorithms. Like NSGA-II, NSGA-III also uses the Pareto-dominance principle to guide the search process towards the Pareto front (PF). However, a set of well-spread reference points is introduced and updated adaptively to maintain population diversity in NSGA-III. It is observed that the crowding distance operator is not very useful in many-objective optimization. Hence, a more systematic analysis based on the supplied reference points is performed in NSGA-III to discard or select individuals in a generation.

2.2 Multi-objective evolutionary algorithm based on decomposition (MOEA/D)

Zhang and Li [9] proposed the decomposition-based multi-objective evolutionary algorithm (MOEA/D) in 2007. The algorithm applies an aggregation approach after decomposing the MOP into several scalar optimization subproblems; the objective of each subproblem is an aggregation (e.g. a weighted Tchebycheff function) of the constituent objectives of the MOP. Decomposition begins with the initialization of $N$ uniformly spread weight vectors, where $N$ is also the population size. The neighborhood of each weight vector is defined by calculating the Euclidean distance between each pair of weight vectors. $N$ solutions are also initialized, and the MOP is decomposed into $N$ scalar optimization subproblems. Each subproblem is then associated with a weight vector and a solution, and two subproblems are neighbors if their weight vectors are neighbors. Figure 4 shows the distribution of $N$ weight vectors $\lambda^1, \lambda^2, \dots, \lambda^N$ for a bi-objective optimization problem. In Figure 5, the two components $(\lambda_1^m, \lambda_2^m)$ of weight vector $\lambda^m$ are diagrammatically presented. The objective function of the $m$-th subproblem of the two-objective optimization problem is:

$$g^{te}(x \mid \lambda^m, z^*) = \max\{\lambda_1^m |f_1(x) - z_1^*|,\; \lambda_2^m |f_2(x) - z_2^*|\} \qquad (2)$$

Figure 4. Weight vectors λ^1, λ^2, ..., λ^N for a bi-objective optimization problem, spread across the Pareto front in the f1-f2 space

Figure 5. Graphical illustration for a subproblem of a bi-objective function in MOEA/D: the objective vector [f1(x), f2(x)] relative to the reference point (z1*, z2*) along the weight vector λ^m = (λ1^m, λ2^m)

where $z^* = (z_1^*, z_2^*)^T$ is the minimum objective value vector that is treated as the reference point, with elements $z_1^* = \min\{f_1(x) \mid x \in \Omega\}$ and $z_2^* = \min\{f_2(x) \mid x \in \Omega\}$. The general form of the objective function of the $m$-th subproblem of an $M$-objective optimization problem is:

$$g^{te}(x \mid \lambda^m, z^*) = \max_{1 \le i \le M} \{\lambda_i^m |f_i(x) - z_i^*|\} \qquad (3)$$

Minimization of all $N$ such objective functions for the $N$ subproblems is performed simultaneously by MOEA/D. In each generation, the current solution $x$ evolves with its neighboring solutions using differential operators (mutation and crossover).
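
The decomposition in equations (2)-(3) is straightforward to express in code. Below is a minimal Python sketch (with illustrative helper names, not taken from [9]) that builds N uniformly spread weight vectors for a bi-objective problem, finds the T closest neighbors of each, and evaluates the Tchebycheff aggregation of equation (3):

```python
import numpy as np

def uniform_weights_2d(N: int) -> np.ndarray:
    """N uniformly spread weight vectors (λ1, λ2) with λ1 + λ2 = 1."""
    l1 = np.linspace(0.0, 1.0, N)
    return np.column_stack([l1, 1.0 - l1])

def neighborhoods(W: np.ndarray, T: int) -> np.ndarray:
    """Indices of the T nearest weight vectors (Euclidean) for each vector."""
    dist = np.linalg.norm(W[:, None, :] - W[None, :, :], axis=2)
    return np.argsort(dist, axis=1)[:, :T]

def tchebycheff(f: np.ndarray, lam: np.ndarray, z_star: np.ndarray) -> float:
    """Equation (3): g_te(x | λ, z*) = max_i λ_i |f_i(x) - z_i*|."""
    return float(np.max(lam * np.abs(f - z_star)))

# Example: scalar fitness of a candidate on subproblem 50 of N = 101
W = uniform_weights_2d(101)
B = neighborhoods(W, T=10)             # mating restricted to these neighbors
z_star = np.array([0.0, 0.0])          # best objective values seen so far
f_x = np.array([0.4, 0.7])             # objectives of a candidate solution
g_m = tchebycheff(f_x, W[50], z_star)
```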

Several improvements of MOEA/D have been proposed in the recent literature. To balance exploration and exploitation, ref. [10] suggested two different neighborhood structures. While all subproblems are treated equally and given the same computational effort in MOEA/D, different computational efforts are assigned to the subproblems in MOEA/D with dynamic resource allocation (MOEA/D-DRA) [11], based on the calculated utilities of the subproblems. For faster execution on modern multi-core processors, a thread-based parallel version of MOEA/D has been developed by Nebro and Durillo [12]. Superior performance of MOEA/D-DRA has been established with an ensemble of neighborhood sizes in [13]. Most recently, a new decomposition-based MOEA has been proposed with the artificial raindrop algorithm and simulated binary crossover (MOEA/D-ARA+SBX) [14]. The algorithm adopts the framework of MOEA/D but maintains a better balance between convergence and diversity.

2.3 Summation based multi-objective differential evolution (SMODE)

In summation based multi-objective differential evolution (SMODE) [15], the summation of normalized objective values is used to rank the solutions. The normalization operation given in equation (4) brings all objective values within the range [0, 1]:

$$f_i''(x^m) = \frac{f_i(x^m) - f_{i,min}}{f_{i,max} - f_{i,min}} \qquad (4)$$

where $f_i''(x^m)$ is the normalized value of solution $x^m$ for the $i$-th objective, and $f_{i,min}$ and $f_{i,max}$ are the minimum and maximum values of the $i$-th objective function.

Figure 6. SMODE - selection of points in preferential and backup sets [15]: ten solutions in the f1-f2 objective space, with preferential-set points, back-up-set points and bins not scanned (scan percentage 80) marked

The normalized values of all objective functions for a solution are summed up: $F''(x^m) = \sum_{i=1}^{M} f_i''(x^m)$, where $M$ is the number of objective functions. For the $N$ solutions in the population, $F''(x^m)$ is calculated for $m \in \{1, 2, \dots, N\}$, and the solutions are sorted based on the resultant $F''$ values. The solution with the smaller sum of normalized objective values is considered superior to the one with the larger sum; to illustrate, the $p$-th solution in the population is better than the $q$-th solution if $F''(x^p) < F''(x^q)$. Diversified selection is implemented in SMODE after the sorting of solutions. During the evolution process, diversity in SMODE is ensured by maintaining two sets of populations, a preferential set and a backup set. As the name suggests, members of the preferential set get priority to evolve. If there is an insufficient number (less than the population size) of individuals in the preferential set, members from the backup set are selected. When the number of individuals in the preferential set is larger than the population size, the required number of individuals is randomly selected to become parents for the next generation [15]. The objective space is divided equally into 100 bins. In [15], 80% of the bins are scanned, and for each scanned bin the solution with the lowest sum of normalized objective values is selected to enter the preferential set. The solutions that are not selected enter the backup set. Figure 6 shows 10 individuals (points 1 to 10, in solid circles) in the current population of a two-objective optimization problem. If we select 80% of the area for scanning, bins 1 to 8 for both objectives will be scanned; empty bins are not considered for scanning. Solution 1 (point 1) yields the lowest sum of objective values in bin 2, so point 1 is included in the preferential set. Points 4, 3, 5 and 8 are selected into the preferential set for bins 3, 4, 5 and 7, respectively. Based on objective 2, points 1, 4, 3, 5, 8, 2 and 9 enter the preferential set. So, finally, the members of the preferential set are points 1, 2, 3, 4, 5, 8 and 9. Points 6, 7 and 10 (marked in green solid circles) are put into the backup set. Any point in the yellow shaded area is not included in the preferential set, as the selected scan percentage is 80. The percentage of bins to be scanned in each generation is therefore user-defined in [15]. Ref. [16] introduces the concept of a stopping point: the scanning process continues until the stopping point is included in the scan. This method does not require an input for the scan percentage of the gridded area.
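
A minimal sketch of the summation-based ranking (the normalization of equation (4) followed by sorting on the summed values) is given below; the bin-based diversified selection of [15] is omitted, and all names are illustrative:

```python
import numpy as np

def summation_rank(F: np.ndarray) -> np.ndarray:
    """F: (N, M) objective matrix (minimization).
    Returns indices of solutions sorted best-first by the sum of
    normalized objective values, as in SMODE [15]."""
    f_min = F.min(axis=0)
    f_max = F.max(axis=0)
    span = np.where(f_max > f_min, f_max - f_min, 1.0)  # avoid divide-by-zero
    F_norm = (F - f_min) / span          # equation (4): values in [0, 1]
    F_sum = F_norm.sum(axis=1)           # F''(x^m) = sum_i f_i''(x^m)
    return np.argsort(F_sum)             # smaller sum => superior solution

# Example with 4 solutions and 2 objectives
F = np.array([[1.0, 9.0], [2.0, 5.0], [6.0, 2.0], [8.0, 8.0]])
order = summation_rank(F)  # best solution first
```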

2.4 Hybrid MOEAs

As the name suggests, a hybrid MOEA is a hybridization of two or more MOEAs. A hybrid method utilizes the advantages of the techniques of which it is composed. Hybrid MOEAs have been developed to deal with complicated MOPs. The selection of algorithmic techniques and the management of their characteristics are the main challenges one faces in hybridization. Combining global search and local search and uniting the search operators of different algorithms are some of the commonly used concepts in hybridization. In [17], modified particle swarm optimization (PSO) is hybridized with evolutionary algorithms (EAs) to form EA-PSO: PSO is run first to converge, and then the solutions obtained by PSO are utilized by the EA. A quantum operator and a genetic algorithm operator are combined in [18]; the quantum operator works for exploration in the discrete (0-1) hyperspace, while the GA operator operates on good solutions for exploitation. Ref. [19] proposes a hybrid MOEA with the concepts of multiple crossover operators and the personal best (pbest) and global best (gbest) of PSO. The appropriate crossover operator is decided by an adaptive mechanism before exploration is performed based on the pbest and gbest solutions stored in an archive. A fusion of the solutions obtained by three multi-objective optimization algorithms is suggested in [20]; executing parallel runs of all the constituent algorithms and extracting well-distributed solutions proves to improve the diversity and convergence of the solutions.

A special case of the hybrid MOEA is the memetic MOEA, where a local search method is employed to achieve better convergence speed and accuracy of the final solutions [21]. Ref. [22] is one of the first papers to apply a local search method to each solution generated by the genetic operators; the fitness function of a weighted sum of the objectives is utilized to select the parents generating the offspring. To guide the search along a predefined direction in objective space, [23] proposes an iterative approach; directed search descent and directed search continuation are the two methods presented in the paper to allow search along the Pareto set. Multi-objective genetic local search (MOGLS) [24] uses a weighted scalarization function as the fitness function in each generation. A few individuals from the current population are selected based on their fitness values to form the mating pool; the candidates in the mating pool generate the offspring, and subsequently a local search method improves the fitness of the offspring. It is also worthwhile to mention that MOEA/D [9] belongs to the group of memetic algorithms, as the algorithm uses information from neighboring subproblems to update a solution. A self-organizing MOEA that allows mating of a solution only with its neighboring solutions is suggested in [25]. For an M-objective MOP, a self-organizing map (SOM) with (M − 1) latent variables is applied to establish the relationship among current solutions. The computational burden of the algorithm is reduced by performing the SOM training step and the evolution step alternately.

2.5 Preference-based MOEAs

The number of Pareto solutions of an MOP can be very large due to the conflicting nature of the objectives. However, the decision maker (DM) may be interested only in a certain range or set of Pareto solutions. The preference of the DM can be incorporated into an MOEA so that the search process is guided towards the desired solution region. Based on the involvement of the DM at various stages of the optimization, such MOEAs can be classified into a priori, a posteriori and interactive methods [2]. In an a priori method, the preference information is given before the search process; the general approach is to convert the MOP into a scalar optimization problem (SOP) and utilize the SOP to find the desired Pareto optimal solutions. In an a posteriori method, the DM selects the most preferred solutions after a well-distributed Pareto front is obtained by the search process. In an interactive method, intermediate solutions are made available to the DM, who understands the problem better by analyzing them; this helps the DM to provide apt preference information to direct the search process towards the desired solutions.

Inclusion of the DM's preference in an MOEA was first attempted in 1993 by Fonseca and Fleming [26]; in the proposed algorithm, the population members are ranked based on Pareto dominance and the preference information. The use of a reference point on the NSGA-II framework to determine the preferred solutions is proposed in [27]. An interactive MOEA with the concept of the reference direction approach is suggested in [28]; the search process is guided towards the region of the preferred solution(s) with the inputs provided by the DM on reference directions. In [29], the objective space is partitioned into several levels to integrate a priori preference information into the multi-objective optimization. In that algorithm, preference functions are formulated based on the DM's interests, and the functions use meaningful parameters for each objective. No weight selection for the objectives is needed in the method, as a single objective is automatically built after the preference functions convert the objective preferences into numbers. Ref. [30] suggested a preference-based approach to find the knee point on the Pareto front, with the presumption that this point represents the optimum trade-off among the objectives in most cases; the set of weight values is carefully selected for the weighted sum of the objectives in the MOP. The interactive method of conveying the DM's preference is executed through a reference point-based approach in [31]. The preference information is utilized to generate new population members by combining the fitness function and an achievement scalarization function [32]. Recently, methodologies for solving many-objective optimization problems (usually more than three objectives) have been gaining increasing attention; attaining a representative subset of Pareto solutions with a small population for many-objective problems is much more difficult. Ref. [33] proposes a reference vector guided evolutionary algorithm for many-objective optimization, in which the reference vectors can be utilized to aim at a preferred subset of the Pareto front, and convergence and diversity of the solutions in the high-dimensional objective space are achieved with a special scalarization approach. A reference point-based preference relation is introduced in [34] to integrate the preferences of the DM into a many-objective optimization problem without modifying its basic structure. The preference relation not only finds the optimal solution of the achievement scalarization function but also enables the DM to find a set of solutions around that optimal solution; the parameters of the proposed algorithm are intuitively set by the DM. A knee-point driven MOEA for many-objective optimization is introduced in [35]. The algorithm requires no additional measures for maintaining the diversity of the solutions, and the computational complexity of many-objective optimization is notably reduced.
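
For reference, the achievement scalarizing function commonly used in such reference point-based approaches can be written compactly; the sketch below follows the standard Wierzbicki-style form (the augmentation term and all names are illustrative, not taken from [31]-[34]):

```python
import numpy as np

def achievement_scalarization(f: np.ndarray, ref: np.ndarray,
                              w: np.ndarray, rho: float = 1e-6) -> float:
    """Achievement scalarizing function for a reference point `ref`:
    max_i w_i (f_i - ref_i), plus a small augmentation term that breaks
    ties between weakly Pareto optimal points."""
    diff = w * (f - ref)
    return float(np.max(diff) + rho * np.sum(diff))

# A solution whose objectives are close to the DM's aspiration levels
# obtains a smaller (better) scalarized value.
f = np.array([0.3, 0.6])
ref = np.array([0.2, 0.5])    # DM's reference point
w = np.array([1.0, 1.0])
value = achievement_scalarization(f, ref, w)
```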

2.6 Coevolution-based MOEAs

The term 'coevolution' refers to the simultaneous evolution of multiple sub-populations. Coevolution has been proposed to deal with complicated MOPs. Algorithms that maintain an archive population use coevolution, as the operation involves evolving the main population and the archive at the same time; some examples of coevolution with an archive strategy are found in [36-37]. Another way of implementing coevolution is by dividing the main problem into a set of subproblems, with each subproblem using its own population, termed a subpopulation. Two subpopulations can compete and/or cooperate with each other depending on the framework of the algorithm. The final solution set contains solutions from different subpopulations. References [38-40] propose algorithms based on multiple subpopulations.

2.7 Indicator-based MOEAs

Performance indicators such as generational distance and hypervolume provide quantitative information on the Pareto front (PF) obtained for a multi-objective optimization problem. In some of the proposed MOEAs, the indicator values of approximated PFs are utilized to guide the search process and to select suitable solutions. The indicator-based evolutionary algorithm (IBEA) was first suggested in [41] in 2004. An arbitrary indicator is used in this method to compare two candidate solutions. IBEA does not need any diversity preservation mechanism such as fitness sharing, and its computational burden is low as it compares only pairs of individuals instead of the whole approximated PF. The hypervolume-based algorithm in [42] exhibited performance superior to non-dominated sorting algorithms like NSGA-II and SPEA2. The computational complexity of calculating the hypervolume in many-objective optimization is somewhat relieved by objective reduction techniques in [43]. In the fast hypervolume-based many-objective optimization algorithm of [44], the burden of computing the hypervolume is tackled by estimating the hypervolume of an approximation set with Monte Carlo simulation, which makes the algorithm applicable to many-objective optimization.

3. GENETIC OPERATIONS AND SELECTION STRATEGIES IN MOEA

In an evolutionary algorithm, genetic operators such as mutation and crossover create offspring from parents as a step towards the evolution of solutions. In MOEAs, the genetic operators (also known as reproduction operators) and the methods of selecting population members for the next generation play important roles. In this section, we briefly describe some of the reproduction operators used in various MOEAs, followed by selection and population update strategies.

3.1 Reproduction
The conventional genetic or reproduction operators used in single-objective (scalar) optimization can also be used in multi-objective optimization. However, the objective frameworks of single- and multi-objective optimization are quite different. While in scalar optimization the algorithm finds a single point (or a set of points) that optimizes one objective, multi-objective optimization provides a solution set that trades off the concerned objectives at several values. Thus, the operators used in scalar optimization might not be suitable for multi-objective optimization, and a problem-specific study may be necessary to find appropriate reproduction operators.

Differential evolution (DE) [45] based approaches have been popular as reproduction operators for evolving the population in multi-objective optimization. Although the algorithm was developed for scalar objective optimization problems, it is widely applied in multi-objective optimization due to its simplicity and efficacy. The mutation operation in DE creates candidate solutions by taking weighted differences among solutions. A multi-objective DE (MODE) based on orthogonal design for population initialization and Pareto-adaptive ε-dominance for updating the archive is proposed in [46]. MODE variants with a diversity enhancement strategy, a mixed-integer problem handling method and adaptive control parameters are also available in the literature.
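
For concreteness, the classic DE/rand/1/bin reproduction step (weighted-difference mutation followed by binomial crossover) can be sketched as below; this follows the standard DE of [45], with the parameter values shown only as common defaults:

```python
import numpy as np

rng = np.random.default_rng()

def de_rand_1_bin(pop: np.ndarray, i: int, F: float = 0.5,
                  CR: float = 0.9) -> np.ndarray:
    """Create one trial vector for target i from population pop (N, D)
    using DE/rand/1 mutation and binomial crossover."""
    N, D = pop.shape
    r1, r2, r3 = rng.choice([k for k in range(N) if k != i], size=3,
                            replace=False)
    mutant = pop[r1] + F * (pop[r2] - pop[r3])   # weighted difference
    cross = rng.random(D) < CR
    cross[rng.integers(D)] = True                # at least one gene from mutant
    return np.where(cross, mutant, pop[i])

# Example: one offspring for solution 0 of a random population
pop = rng.random((20, 5))
child = de_rand_1_bin(pop, 0)
```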

Particle swarm optimization (PSO) [47] is another population-based stochastic method originally developed for single-objective optimization. One of the difficulties in applying PSO to multi-objective optimization is preserving the non-dominated solutions found during the search process; a secondary population is maintained to tackle this problem. Another difficult proposition in multi-objective PSO (MOPSO) is the selection of the global best and an appropriate local best to facilitate the search process. Ref. [48] proposes clustering the particles into groups and finding the global best (gbest) within each group, while the local best is maintained using a weighted sum of the objectives. In [49], the gbest is selected by a tournament niching method, and Pareto dominance is utilized to find and update the local best. Ref. [50] introduces a generalization of Pareto dominance called preference order; the ranking of all particles, and subsequently the selection of the gbest, is performed using preference order. In the multi-objective comprehensive learning particle swarm optimizer (MOCLPSO) [51], a particle learns from the historical best information of all other particles to update its velocity. This approach helps to maintain diversity and avoid premature convergence of the swarm.

Kirkpatrick et al. introduced a stochastic optimization procedure named simulated annealing (SA) [52] in 1983 for single-objective optimization. The algorithm, inspired by the annealing of metals, has been incorporated into the multi-objective optimization domain due to its effectiveness and simplicity. A multi-objective SA maintains an archive to save the current non-dominated solutions and creates offspring with the aid of reproduction operators. The solution is updated following SA rules after assessing whether the offspring is dominated by the parent. Ref. [53] introduces a domination-based energy function to compute the probability of accepting a new dominated trial solution. A systematic study of the domination of a new offspring with respect to its parent and the archive is performed in [54] to determine the acceptance probability of the new offspring.

Apart from the DE, PSO and SA based approaches discussed above, some other methods for reproduction and creation of offspring have also been tried in MOEAs. Ant colony optimization, quantum-inspired genetic algorithms, tabu search and scatter search are some of the other methods used in the field of multi-objective optimization.

3.2 Selection and update of population

The selection operation in single-objective (scalar) optimization is straightforward: all solutions can be ordered, and any two feasible solutions can be compared based on their fitness values, with the solution of better fitness selected for the next generation. However, in multi-objective optimization, Pareto dominance does not define a complete order, and a pair of solutions cannot always be directly compared based on Pareto dominance. Hence, additional measures are needed in an MOP for the selection of solutions for the next generation. Selection follows the concept of defining either complete orders over individuals or complete orders over populations [32].

To differentiate solutions in an MOEA, the partial order of Pareto domination needs to be extended to a complete order. Usually, a two-stage strategy is employed to implement this. In the first stage, the population is grouped into clusters: an integer value called the rank, denoted $x^{rank}$, is assigned to each individual $x$. Individuals of the same rank are given equal importance, and an individual with a lower rank is preferred to one with a higher rank. In the second stage, individuals of the same rank are assigned real values. The value assigned to an individual is termed the density of that individual, denoted $x^{den}$; an individual with lower density is superior to one with higher density. Mathematically, the complete order of Pareto domination, denoted $\prec_i$, is defined as:

$$x \prec_i y, \;\text{if}\; (x^{rank} < y^{rank}) \;\text{or}\; (x^{rank} = y^{rank} \;\text{and}\; x^{den} < y^{den}) \qquad (5)$$

The ranks are assigned to individuals using domination rank, domination count, domination strength, etc. Among the various density estimation methods proposed in the literature, niching and fitness sharing strategies, crowding distance, K-nearest neighborhood and ε-domination are popularly used.
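
A short sketch of this two-stage comparison (rank first, density second), with hypothetical attribute names, could be used as the sort key in an environmental selection step:

```python
from dataclasses import dataclass

@dataclass
class Individual:
    rank: int        # domination rank (lower is better)
    density: float   # density estimate, e.g. negative crowding distance

def better(x: Individual, y: Individual) -> bool:
    """Complete order of equation (5): x precedes y if it has a lower
    rank, or the same rank and a lower density."""
    return x.rank < y.rank or (x.rank == y.rank and x.density < y.density)

# Sorting a population best-first under the same order
pop = [Individual(2, 0.1), Individual(1, 0.5), Individual(1, 0.2)]
pop.sort(key=lambda s: (s.rank, s.density))
```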

Recall that the population is updated at each generation in an MOEA. Hence, a performance indicator-based selection mechanism defines a complete order over populations. A real value is assigned as the quality indicator of a non-dominated population, and a lower value of the quality indicator is preferred. If $I(P)$ is the quality indicator of a non-dominated population $P$, the full order $\prec_p$ is described as:

$$P \prec_p Q, \;\text{if}\; I(P) < I(Q) \qquad (6)$$

Algorithms using selection based on a performance indicator can also be categorized under the indicator-based MOEAs discussed earlier. A plausible drawback of this selection method is its high execution time.

4. MOEAs FOR COMPLICATED PROBLEMS

MOPs can be constrained, which means an infeasible region exists in the search space. Constrained MOPs are more difficult to deal with than unconstrained or bound-constrained MOPs. Furthermore, the problem can be multimodal with many objectives, computationally expensive, combinatorial and dynamically changing with time. Stochastic noise is also encountered in multi-objective optimization problems, and the optimization process becomes unstable in the presence of noise. All these multimodal, constrained, computationally expensive, dynamic or noisy MOPs pose greater challenges in the domain of MOEAs. This section briefly discusses some MOEAs specially designed to handle complicated MOPs.

4.1 Handling constraints in MOPs

The constraints of an optimization problem can be categorized into equality and inequality constraints. Let $g_i(x)$ and $h_j(x)$ be the sets of inequality and equality constraints, respectively, and let the variable $x$ be bounded by upper ($x^U$) and lower ($x^L$) limits. If $p$ and $(m - p)$ are the numbers of inequality and equality constraints respectively, the search space $\Omega$ of a constrained MOP can be expressed mathematically as:

$$\Omega = \{x : g_i(x) \le 0,\; i = 1, \dots, p;\;\; h_j(x) = 0,\; j = p+1, \dots, m;\;\; x^L \le x \le x^U\} \qquad (7)$$

In general, equality constraints are transformed into inequality constraints with the aid of a tolerance parameter δ. The constraint violation $G_i(x)$ is represented as:

$$G_i(x) = \begin{cases} \max[g_i(x), 0], & i = 1, \dots, p \\ \max[|h_i(x)| - \delta, 0], & i = p+1, \dots, m \end{cases} \qquad (8)$$
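
A minimal sketch of equation (8), computing the total constraint violation of a candidate solution (assuming the constraint functions are supplied as Python callables; the helper names are illustrative):

```python
from typing import Callable, Sequence

def total_violation(x: Sequence[float],
                    g: Sequence[Callable],   # inequality constraints g_i(x) <= 0
                    h: Sequence[Callable],   # equality constraints h_j(x) = 0
                    delta: float = 1e-4) -> float:
    """Sum of the per-constraint violations G_i(x) of equation (8)."""
    v = sum(max(gi(x), 0.0) for gi in g)
    v += sum(max(abs(hj(x)) - delta, 0.0) for hj in h)
    return v

# Example: g1(x) = x0 + x1 - 1 <= 0, h1(x) = x0 - x1 = 0
g = [lambda x: x[0] + x[1] - 1.0]
h = [lambda x: x[0] - x[1]]
violation = total_violation([0.8, 0.4], g, h)  # 0.2 + (0.4 - delta)
```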
Obviously, the search space of a constrained optimization problem is divided into feasible and infeasible regions. Ref. [55] presents a comprehensive survey of the various constraint handling techniques used in evolutionary algorithms. Although most techniques were originally developed for single-objective optimization problems, the majority of them can be extended to deal with constrained MOPs. Here, we focus on constraint handling techniques for MOPs. The penalty function approach is one of the most popular constraint handling methods used in evolutionary algorithms due to its simplicity and ease of implementation. In the penalty function method, the constrained optimization problem is converted into an unconstrained one by adding a suitable penalty term to the original fitness function; that is, the objective fitness is penalized if a constraint violation occurs. Static penalty, dynamic penalty and adaptive penalty are some types of penalty function methods. An effective self-adaptive penalty method is proposed in [56]. The amount of penalty in this method is varied based on the percentage of feasible solutions in the current population: a larger penalty is added for a small percentage of feasible individuals and vice versa. This approach helps to balance the extraction of information present in both feasible and infeasible solutions. In [1], a method called superiority of feasible solutions (SF) is extended to handle constrained MOPs based on Pareto domination. SF compares a pair of solutions, say $x_i$ and $x_j$; $x_i$ is said to be superior to $x_j$ if (a) $x_i$ is feasible and $x_j$ is infeasible, (b) both are infeasible but $x_i$ results in less constraint violation than $x_j$, or (c) both are feasible but $x_i$ dominates $x_j$. The solutions are ranked using the rules of SF, and superior individuals are selected for evolution. Immune algorithms using immune response principles have also been applied to handle constrained MOPs; ref. [57] proposes a novel constrained nonlinear multi-objective optimization immune algorithm (CNMOIA) based on the concepts of Pareto optimality and simple interactive metaphors. Hybrid methods have also been popular in dealing with constrained MOPs. An ensemble of various constraint handling (CH) techniques is suggested in [58] for constrained MOPs. The motivation behind the ensemble method is that no single CH technique can perform effectively on all sorts of constrained optimization problems. Instead of the user selecting an appropriate CH technique for a specific problem, the ensemble method takes on the onus of trying all the methods that form the grouping. The selection of solutions at various stages of the search process follows the rules of the individual constituent techniques, and later the most suitable solutions are allowed to evolve.

4.2 Computationally expensive MOPs

In multi-objective optimization problems, a large number of experiments or simulations might sometimes be necessary to locate the Pareto front, rendering the MOPs computationally or financially expensive. A method capable of producing reasonably good solutions at a lower computational cost is desired for computationally expensive MOPs. In [59], Knowles described the features that computationally expensive MOPs possess. Some of the features are as follows:
1. The problem has multiple and probably incomparable objectives.
2. One evaluation takes several minutes or even hours.
3. The total number of evaluations to be performed is limited by financial, time or resource constraints.
4. Parallel evaluation is not possible, i.e. only one evaluation can be performed at a time.
5. No realistic simulator or other method of approximating the full evaluation is readily available.
6. Noise is low (repeated evaluations yield very similar results).
7. The overall gains in quality (or reductions in cost) that can be achieved are high.
8. The search landscape is locally smooth but multimodal.
9. The dimensionality of the search space is low to medium.

In the last couple of decades, researchers have proposed different approaches to cope with expensive MOPs. One of the most popular and efficient methods of handling single-objective expensive optimization problems is the Gaussian stochastic process (also called surrogate) model, and the method has been successfully extended to expensive MOPs. Ref. [60] utilizes the method within the MOEA/D framework; the algorithm is called MOEA/D-EGO (MOEA/D for efficient global optimization). In the algorithm, a predictive model is built for each subproblem based on the points evaluated thus far, and optimization is performed simultaneously for all subproblems to maximize the expected improvement in metric values. As parallel computing is used in the algorithm, it naturally becomes a useful and efficient algorithm for expensive MOPs. Ref. [59] proposes the algorithm ParEGO, which applies EGO to a randomly selected aggregate function to find out which point to evaluate next. As the algorithm uses only one aggregate function in each iteration, it is suitable for problems where only one evaluation is possible at a time. Most recently, [61] proposes a Gaussian process surrogate model in an evolutionary algorithm for medium-scale (about 20 to 50 decision variables) expensive optimization problems. The method uses a search mechanism that is aware of the surrogate model and dimension reduction techniques for medium-scale MOPs. An algorithm to deal with high-dimensional expensive problems (more than 50 decision variables) is established in [62]. The method searches for global optima through cooperation between a surrogate-assisted particle swarm optimization (PSO) algorithm and a surrogate-assisted social learning-based PSO (SL-PSO). Initially, the solutions evaluated by the fitness function are shared by both algorithms; later, SL-PSO concentrates on exploration while PSO performs the local search.

4.3 Multimodal MOPs

In some cases, a multi-objective optimization problem may have different Pareto sets (i.e. groups of Pareto optimal solutions) with the same objective values. Such MOPs are classified as multimodal MOPs. It is important for the decision maker to know all the different Pareto sets (PSs), as detailed knowledge of the PSs provides greater flexibility in selecting suitable solution sets. As the objective in a multimodal MOP is to find multiple optimal solutions, several niching techniques have been proposed and incorporated into evolutionary algorithms. Ref. [63] presents a bi-objective multi-population genetic algorithm (BMPGA) to solve multimodal MOPs. The first objective used in the algorithm is the original fitness function, while the second is the gradient of the function. The algorithm shows stable niching behavior on some of the tested benchmark problems. Basak et al. [64] propose an improvement of the method with mean distance-based selection. Deb et al. adopt a similar approach in [65]: in order to find both global and local optimal solutions, the scalar objective multimodal optimization problem is converted into a bi-objective optimization such that all solutions (global and local) become members of the resulting weak Pareto sets. As in [63], the second objective in the algorithm is the gradient of the function. Ref. [66] suggests a technique that transforms a multimodal MOP into a multi-objective optimization problem with two conflicting objectives so that an MOEA can be applied directly to the transformed problem; the Pareto optimal solutions obtained for the transformed problem represent the Pareto optimal solutions of the original multimodal MOP. Particle swarm optimization (PSO) based algorithms [67-68] have also been proposed to solve multimodal MOPs. The integrated local search technique in [67] enhances the local search ability of PSO-based algorithms for multimodal optimization problems. Most recently, the ring topology suggested in [68] can identify more Pareto-optimal solutions corresponding to a single objective function value of a multimodal MOP by inducing stable niches.

4.4 Dynamic MOPs

Dynamic MOPs are MOPs that change over time. Real-world problems such as investment optimization, control system design and robot navigation are examples of dynamic MOPs, where the fitness function, parameter space, constraints and location of the optimal front change with time. The dynamic environment poses certain challenges to classical MOEAs. Researchers have been studying scalar objective optimization in dynamic environments for the last few decades; however, the dynamic multi-objective optimization problem (DMOP) is relatively new. In a DMOP, the PF may change over time and rapid rediscovery of the PF is necessary. Thus, a DMOP solver must address diversity and be capable of adaptive exploration.

To deal with the dynamic environment, ref. [69] transforms the DMOP into a fuzzy dynamic scalar optimization problem, and a genetic algorithm (GA) with adaptive mutation and crossover rates is applied directly to the resulting optimization problem. A direction-based neighborhood search algorithm is proposed in [70] to handle DMOPs; an environmental recognition rule is constructed in the algorithm after evaluation of the environment, and the approach is most suitable for a slowly changing environment and parameter space dimension. Ref. [71] describes a forward-looking approach to dynamic MOPs: a forecasting technique is adopted to estimate the location of the optimal solution in variable space, and with the knowledge of previous time steps held in memory, the new Pareto front is predicted. In [72], both competitive and cooperative properties are accounted for in a new coevolutionary paradigm to solve an MOP and to track the dynamic Pareto front. Instead of a fixed decomposition defined at the start of an evolutionary optimization, the (competitive and cooperative) coevolution facilitates adaptive decomposition.

4.5 Noisy MOPs

In real-world MOPs, stochastic noise or uncertainty is often encountered. The optimization process becomes rather unstable in the presence of noise, as the algorithm has to cope with the noise in addition to multiple objectives. A few methods have been proposed in the literature to deal with noise in MOPs. The most commonly used methods are resampling and probabilistic ranking. Resampling effectively solves noisy problems, as the noise is reduced by a factor; however, the method is computationally expensive. Ref. [73] presents a probabilistic ranking method in which the standard deviation is taken into consideration in the evaluation of each solution. Deb et al. [74] suggest an approach similar to the resampling method to find robust solutions of MOPs. The focus of the method is on finding a robust Pareto front instead of the global Pareto-optimal front; the robust solutions are less sensitive to small perturbations in the variables, and the user can set the desired robustness of the problem. Goh et al. [75] introduce a perturbation operator directed by learning experience: past experience is used by the operator to adapt its magnitude and direction so that fast convergence is facilitated. The gene adaptive selection strategy employed in the method helps to escape from local optima and thus prevents premature convergence. In addition, the uncertainties in the problem are dealt with by a probabilistic archive model developed on the concepts of possibility and necessity. The algorithm is proven to perform well in terms of diversity and distribution for both noisy and noiseless problems. Recently, the rolling tide evolutionary algorithm (RTEA) [76] was proposed to address the problem of uncertain evaluations in MOPs. RTEA progressively improves the accuracy of the estimated Pareto set, thus guiding the search towards the true Pareto front. Because the characteristics of the noise can alter during the course of optimization, RTEA is designed to cope with such changing noise characteristics.
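
As a simple illustration of the resampling idea (averaging k independent evaluations shrinks the noise standard deviation by a factor of √k, at k times the evaluation cost; a generic sketch, not taken from [73]-[76]):

```python
import numpy as np

rng = np.random.default_rng(0)

def noisy_eval(x: float, sigma: float = 0.5) -> float:
    """A hypothetical noisy objective: f(x) = x^2 plus Gaussian noise."""
    return x * x + rng.normal(0.0, sigma)

def resampled_eval(x: float, k: int = 25) -> float:
    """Average of k noisy evaluations; noise std is reduced by sqrt(k)."""
    return float(np.mean([noisy_eval(x) for _ in range(k)]))

# One evaluation vs. a 25-sample average at x = 1.0 (true value 1.0)
single = noisy_eval(1.0)
averaged = resampled_eval(1.0)   # much closer to 1.0 on average
```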

5. PERFORMANCE INDICATORS FOR MOEAs

As an MOEA finds an approximated Pareto set of an MOP, it becomes important to assess the quality of the solutions obtained by the algorithm. Performance indicators help to compare the qualities of the Pareto sets obtained by different algorithms. Unary and binary are the two kinds of performance metrics commonly used. While a unary indicator reflects certain qualities of an approximate solution set by assigning a scalar value to the Pareto front, a binary metric indicates the performance disparity between two approximated solution sets. Here, we list some of the unary indicators adopted for MOEAs. Unary indicators have become important in providing optimization-goal-specific information such as the proximity, diversity and distribution of the solution sets. Table 1 lists some of the popularly used unary indicators with references.

Table 1: Some unary indicators used to evaluate quality of solutions in MOEAs

Description                                          Notation   Reference
Hypervolume indicator                                I_H        [77]
R indicator                                          I_R        [77]
Objective vector indicator                           I_O        [78]
Enclosing hypercube indicator                        I_HC       [78]
Unary ε-indicator                                    I_ε1       [78]
Error ratio                                          I_ER       [79]
Generational distance                                I_GD       [79]
Spacing                                              I_S        [79]
Overall non-dominated vector generation and ratio    I_ONVG     [79]
Maximum Pareto front error                           I_ME       [79]
Maximum spread                                       I_MS       [80]
Hyperarea difference                                 I_HD       [81]
Pareto spread                                        I_OS       [81]
Accuracy of the observed Pareto front                I_A        [81]
Cluster                                              I_CL       [81]
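
As an example of how such an indicator is computed, a small sketch of the generational distance I_GD [79] between an approximation set and a reference Pareto front is given below (the standard definition, not tied to any specific toolbox):

```python
import numpy as np

def generational_distance(A: np.ndarray, PF: np.ndarray, p: int = 2) -> float:
    """I_GD: (1/n) * (sum of d_i^p)^(1/p), where d_i is the distance from
    each point of the approximation set A (n x M) to its nearest point
    on the reference front PF (k x M)."""
    # pairwise Euclidean distances between A and PF
    d = np.linalg.norm(A[:, None, :] - PF[None, :, :], axis=2)
    nearest = d.min(axis=1)                 # distance to closest PF point
    return (np.sum(nearest ** p) ** (1.0 / p)) / len(A)

# Example: approximation of the linear front f2 = 1 - f1
PF = np.column_stack([np.linspace(0, 1, 101), 1 - np.linspace(0, 1, 101)])
A = np.array([[0.1, 0.95], [0.5, 0.55], [0.9, 0.15]])
gd = generational_distance(A, PF)  # small value => close to the front
```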

6. VISUALIZATION IN MANY-OBJECTIVE OPTIMIZATION

Due to the stochastic nature of evolutionary algorithms, an MOEA returns different solution sets in different runs. The empirical attainment function (EAF) can describe the probabilistic distribution of several approximated solution sets, and it can therefore be utilized to analyze and compare the performance of an algorithm [82]. Further, as a multidimensional objective space is involved in an MOP, it is important to visualize it. The tasks involved in the visualization of MOPs can be categorized into two groups: visualization of approximated solution sets and visualization of EAFs. 2D and 3D approximation sets can easily be visualized with the aid of scatter plots; however, advanced methods are necessary to visualize four or more objectives. Visualization techniques for exact and approximate EAFs involve a large number of 3D cuboids. This article does not cover the details of EAF visualization; readers may refer to the relevant articles in the references. The methods used for the visualization of approximate solution sets of higher-dimensional problems are briefly discussed herein. These visualization methods should ideally have the following properties:
• Preservation of the Pareto dominance relation between objective vectors
• Preservation of the front shape, range and distribution of vectors in the visualized approximation sets
• Robustness – the addition or removal of a vector should not significantly alter the visualization
• Capability to handle large sets
• Possibility to simultaneously visualize multiple approximation sets for comparison
• Scalability to multiple dimensions
• Simplicity for easy understanding and usage

Some of the commonly used visualization techniques are briefly discussed in this section.

6.1 Scatter plot matrix
This method of visualization is rather straightforward. The vectors are projected onto a lower-dimensional space, disregarding the dimensions that cannot be visualized. Figure 7 shows an example where each scatter plot represents a possible combination of two dimensions of a multi-dimensional vector. The scatter plot matrix is a simple, fast and robust visualization method that fairly preserves the shape of the approximated solution sets.

Figure 7. Scatter plot matrix [82]

6.2 Bubble chart
The bubble chart [83] is similar to the scatter plot matrix. However, instead of several plots in a lower-dimensional space, the bubble chart includes all the information on the solution sets in a single plot. The additional dimensions are visualized using different sizes (4D) and colors (5D) of the bubbles. Figure 8 presents a typical bubble chart where the variable diameters of the bubbles represent the 4th dimension.

Figure 8. Bubble chart [82]

6.3 Radial coordinate visualization
In radial coordinate visualization [84], a non-linear mapping from a multi-dimensional space onto 2D space is executed with the aid of a unit circle. The objectives (also called dimensional anchors) are evenly distributed on the circumference of the circle (see Figure 9). An imaginary spring, holding each objective vector, is attached to each dimensional anchor, and the force exerted by a spring is proportional to the value of the objective to which it is anchored. For example, a vector close to f1 has a higher value in f1 than in any other objective, while a vector at the center of the circle has equal values for all objectives. This method of visualization preserves the distribution of vectors well.

Figure 9. Radial coordinate visualization [82]

6.4 Parallel coordinates
The results of a multi-objective optimization algorithm are most often visualized using parallel coordinates [85]. In this method, n equally spaced vertical lines are drawn to represent the n-dimensional space. An n-dimensional objective vector is represented by a polyline with its vertices on the parallel axes; the i-th coordinate of the objective vector is the position of the polyline's vertex on the i-th axis. Although parallel coordinates are useful for studying dependencies/independencies among objectives, the distribution of objective vectors is difficult to perceive due to the clutter created by numerous polylines, as seen in Figure 10.

Figure 10. Parallel coordinates [82]
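
A parallel coordinates plot of an approximation set takes only a few lines with matplotlib (a generic sketch with made-up random data; objective values are first normalized so the parallel axes are comparable):

```python
import numpy as np
import matplotlib.pyplot as plt

# Each row is one objective vector; each column one objective axis.
rng = np.random.default_rng(1)
F = rng.random((30, 5))

# Normalize each objective to [0, 1] so the parallel axes are comparable.
span = F.max(axis=0) - F.min(axis=0)
F_norm = (F - F.min(axis=0)) / np.where(span > 0, span, 1.0)

for row in F_norm:                       # one polyline per objective vector
    plt.plot(range(F.shape[1]), row, color="steelblue", alpha=0.4)
plt.xticks(range(F.shape[1]), [f"f{i+1}" for i in range(F.shape[1])])
plt.ylabel("normalized objective value")
plt.title("Parallel coordinates of an approximation set")
plt.show()
```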

6.5 Heatmaps
The heatmap is a graphical form of visualization in which objective values are indicated using colors [86]. Like the parallel coordinates plot, the heatmap is useful for studying dependencies/independencies among objectives. In Figure 11, the range of values (0 to 1) of the first objective (f1) is presented with changing colors, and the other objective values of an objective vector are denoted with the colors corresponding to the same values on the f1 scale.

Figure 11. Heatmaps [82]

15
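A heatmap sketch on hypothetical data: each column is one solution, each row one objective, and all values share one color scale. Sorting the solutions by 𝑓1 is an assumption made here for readability (it makes the gradient along the first objective visible), not part of the method itself:

```python
import numpy as np
import matplotlib.pyplot as plt

# Hypothetical approximation set: 40 vectors in a 5-objective space,
# with objective values in [0, 1].
rng = np.random.default_rng(4)
F = rng.random((40, 5))

# Sort the solutions by f1 so the color gradient along the first
# objective is visible (a presentation choice).
F = F[np.argsort(F[:, 0])]

fig, ax = plt.subplots()
# One column per solution, one row per objective, shared color scale.
im = ax.imshow(F.T, aspect="auto", cmap="viridis", vmin=0.0, vmax=1.0)
ax.set_yticks(range(F.shape[1]))
ax.set_yticklabels([f"$f_{k + 1}$" for k in range(F.shape[1])])
ax.set_xlabel("solution index (sorted by $f_1$)")
fig.colorbar(im, label="objective value")
plt.show()
```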
7. THE APPLICATION OF MOEAs

The research field of evolutionary algorithms (EAs) is dynamic: improvement of existing methods and introduction of new algorithm variants are a regular affair. MOEAs have become very popular in the last few decades for real-world multi-objective optimization tasks. Numerous scientific articles and books have been published to showcase the multifaceted applications of MOEAs in virtually every domain. We briefly mention the application domains where MOEAs have seen widespread usage. Table 2 summarizes some of the application areas together with brief statements of the problems and a few selected references. A comprehensive review of MOEAs applied to different real-world problems is beyond the scope of this article; readers may refer to [32,87-89] for more details.

12 Table 2: Some application domains of MOEAs
Domain Problem statement Algorithm applied and references
Automation, control Control scheme design Prioritized multi-objective simulated annealing (PMOSA) [90]
system and robotics Controller design Hybrid multi-objective differential evolution (HMODE) [91]
Robotic system optimization Multi-objective genetic algorithm (MOGA) [92]

Electronics and Analog and digital circuit design Adaptive MOGA [93]
communications Antenna design MOEA/D [94]
Wireless sensor network design MOEA/D [95]
CDMA system design MO clonal selection [96]

Data mining Data mining MOGA [97]


Rule extraction Multi-objective genetic programming (MOGP) [98]

Power and energy Optimal power flow Strength pareto evolutionary algorithm (SPEA) [99],
MOEA/D [100]
Economic dispatch NSGA-II [101], SMODE [16]
Unit commitment MOEA/D [102]
Power equipment location optimization MOEA/D [103]
Distribution network loss minimization MOEA/D [104]

Bioinformatics and Gene regulatory network MO using fuzzy dominance [105]


computational biology Molecular docking Multi-objective particle swarm optimization (MOPSO) [106]
Protein structure prediction Pareto archived evolution strategy (PAES) [107]
Designing sequence of DNA Multi-objective firefly algorithm (MOFA) [108]

Manufacturing Manufacturing process optimization NSGA-II [109]


Facility layout optimization MOGA [110]
Production scheduling NSGA-II [111]
Component design MOGA combined with fuzzy set [112]

Chemical engineering Chemical process design and operation Single population EA, dual population EA [113]
Petroleum refining NSGA [114]
Polymerization process optimization Multi-objective differential evolution (MODE) [115]
Fuel cell design MOGA [116]

Economics and finance Investment portfolio optimization NSGA-II, SPEA2 [117]


Risk-return analysis NSGA-II [118]
Stock trading MOPSO [119]

Pattern recognition and Pattern classification NSGA-II, SPEA2 [120]


image processing Image processing A two-objective EA [121]

Artificial neural Learning process in neural network Time varying MOPSO [122]
network, fuzzy system Fuzzy logic system A novel MOEA [123]

Transportation Logistics management Fuzzy logic guided NSGA-II [124]


Route planning NSGA-II [125]
8. CONCLUSION

In this article, a short survey on multi-objective optimization has been presented. The initial part is devoted to the basics of multi-objective evolutionary algorithms. The non-dominated sorting and decomposition-based algorithm frameworks are discussed in detail, followed by an overview of recent advances in MOEAs for dealing with complicated problems. Visualization techniques and application areas are described in the last couple of sections. It is worthwhile to mention that this article by no means covers all facets of multi-objective optimization; readers may refer to the review articles included in the reference list. Further, though numerous publications are available on evolutionary multi-objective optimization, there is ample scope for work on fronts such as introducing new algorithm frameworks, improving methods for quickly finding an approximate Pareto front, and developing efficient techniques to handle many-objective MOPs. Many more works and publications in this field are thus expected in the near future.

References

[1]. K. Deb, Multi-objective optimization using evolutionary algorithms. Vol. 16. John Wiley & Sons, 2001.
[2]. K. Miettinen, Nonlinear multiobjective optimization. Vol. 12. Springer Science & Business Media, 2012.
[3]. J. D. Schaffer, "Multiple objective optimization with vector evaluated genetic algorithm." Proceedings of the First International Conference on Genetic Algorithms and Their Applications, pp. 93-100, 1985.
[4]. N. Srinivas, K. Deb, "Multiobjective optimization using nondominated sorting in genetic algorithms." Evolutionary Computation 2.3 (1994): 221-248.
[5]. K. Deb, A. Pratap, S. Agarwal, T. Meyarivan, "A fast and elitist multiobjective genetic algorithm: NSGA-II." IEEE Transactions on Evolutionary Computation 6.2 (2002): 182-197.
[6]. D. E. Goldberg, Genetic Algorithms in Search, Optimization, and Machine Learning. Addison-Wesley, 1989.
[7]. K. Deb, H. Jain, "An evolutionary many-objective optimization algorithm using reference-point-based nondominated sorting approach, part I: Solving problems with box constraints." IEEE Transactions on Evolutionary Computation 18.4 (2014): 577-601.
[8]. H. Jain, K. Deb, "An evolutionary many-objective optimization algorithm using reference-point based nondominated sorting approach, part II: Handling constraints and extending to an adaptive approach." IEEE Transactions on Evolutionary Computation 18.4 (2014): 602-622.
[9]. Q. Zhang, H. Li, "MOEA/D: A multiobjective evolutionary algorithm based on decomposition." IEEE Transactions on Evolutionary Computation 11.6 (2007): 712-731.
[10]. H. Li, Q. Zhang, "Multiobjective optimization problems with complicated Pareto sets, MOEA/D and NSGA-II." IEEE Transactions on Evolutionary Computation 13.2 (2009): 284-302.
[11]. Q. Zhang, W. Liu, H. Li, "The performance of a new version of MOEA/D on CEC09 unconstrained MOP test instances." IEEE Congress on Evolutionary Computation (CEC), 2009.
[12]. A. J. Nebro, J. J. Durillo, "A study of the parallelization of the multi-objective metaheuristic MOEA/D." LION 4 (2010): 303-317.
[13]. S. Z. Zhao, P. N. Suganthan, Q. Zhang, "Decomposition-based multiobjective evolutionary algorithm with an ensemble of neighborhood sizes." IEEE Transactions on Evolutionary Computation 16.3 (2012): 442-446.
[14]. Q. Jiang, L. Wang, X. Hei, G. Yu, Y. Lin, X. Lu, "MOEA/D-ARA+SBX: A new multi-objective evolutionary algorithm based on decomposition with artificial raindrop algorithm and simulated binary crossover." Knowledge-Based Systems 107 (2016): 197-218.
[15]. B. Y. Qu, P. N. Suganthan, "Multi-objective evolutionary algorithms based on the summation of normalized objectives and diversified selection." Information Sciences 180.17 (2010): 3170-3181.
[16]. B. Y. Qu, J. J. Liang, Y. S. Zhu, Y. Z. Wang, P. N. Suganthan, "Economic emission dispatch problems with stochastic wind power using summation based multi-objective evolutionary algorithm." Information Sciences 351 (2016): 48-66.
[17]. A. Elhossini, S. Areibi, R. Dony, "Strength Pareto particle swarm optimization and hybrid EA-PSO for multi-objective optimization." Evolutionary Computation 18.1 (2010): 127-156.
[18]. B. B. Li, L. Wang, "A hybrid quantum-inspired genetic algorithm for multiobjective flow shop scheduling." IEEE Transactions on Systems, Man, and Cybernetics, Part B (Cybernetics) 37.3 (2007): 576-591.
[19]. L. Tang, X. Wang, "A hybrid multiobjective evolutionary algorithm for multiobjective optimization problems." IEEE Transactions on Evolutionary Computation 17.1 (2013): 20-45.
[20]. A. Ibrahim, M. V. Martin, S. Rahnamayan, K. Deb, "Fusion-based hybrid many-objective optimization algorithm." IEEE Congress on Evolutionary Computation (CEC), 2017.
[21]. A. Lara, G. Sanchez, C. A. C. Coello, O. Schutze, "HCS: A new local search strategy for memetic multiobjective evolutionary algorithms." IEEE Transactions on Evolutionary Computation 14.1 (2010): 112-132.
[22]. H. Ishibuchi, T. Murata, "A multi-objective genetic local search algorithm and its application to flowshop scheduling." IEEE Transactions on Systems, Man, and Cybernetics, Part C (Applications and Reviews) 28.3 (1998): 392-403.
[23]. O. Schütze, A. Martín, A. Lara, S. Alvarado, E. Salinas, C. A. C. Coello, "The directed search method for multi-objective memetic algorithms." Computational Optimization and Applications 63.2 (2016): 305-332.
[24]. A. Jaszkiewicz, "On the performance of multiple-objective genetic local search on the 0/1 knapsack problem - a comparative experiment." IEEE Transactions on Evolutionary Computation 6.4 (2002): 402-412.
[25]. H. Zhang, A. Zhou, S. Song, Q. Zhang, X. Z. Gao, J. Zhang, "A self-organizing multiobjective evolutionary algorithm." IEEE Transactions on Evolutionary Computation 20.5 (2016): 792-806.
[26]. C. M. Fonseca, P. J. Fleming, "Genetic algorithms for multiobjective optimization: Formulation, discussion and generalization." Proceedings of the International Conference on Genetic Algorithms (ICGA), Vol. 93, 1993.
[27]. K. Deb, J. Sundar, "Reference point based multi-objective optimization using evolutionary algorithms." Proceedings of the 8th Annual Conference on Genetic and Evolutionary Computation. ACM, 2006.
[28]. K. Deb, A. Kumar, "Interactive evolutionary multi-objective optimization and decision-making using reference direction method." Proceedings of the 9th Annual Conference on Genetic and Evolutionary Computation. ACM, 2007.
[29]. J. Sanchis, M. A. Martínez, X. Blasco, "Integrated multiobjective optimization and a priori preferences using genetic algorithms." Information Sciences 178.4 (2008): 931-951.
[30]. L. Rachmawati, D. Srinivasan, "Multiobjective evolutionary algorithm with controllable focus on the knees of the Pareto front." IEEE Transactions on Evolutionary Computation 13.4 (2009): 810-824.
[31]. L. Thiele, K. Miettinen, P. J. Korhonen, J. Molina, "A preference-based evolutionary algorithm for multi-objective optimization." Evolutionary Computation 17.3 (2009): 411-436.
[32]. A. Zhou, B. Y. Qu, H. Li, S. Z. Zhao, P. N. Suganthan, Q. Zhang, "Multiobjective evolutionary algorithms: A survey of the state of the art." Swarm and Evolutionary Computation 1.1 (2011): 32-49.
[33]. R. Cheng, Y. Jin, M. Olhofer, B. Sendhoff, "A reference vector guided evolutionary algorithm for many-objective optimization." IEEE Transactions on Evolutionary Computation 20.5 (2016): 773-791.
[34]. A. López-Jaimes, C. A. C. Coello, "Including preferences into a multiobjective evolutionary algorithm to deal with many-objective engineering optimization problems." Information Sciences 277 (2014): 1-20.
[35]. X. Zhang, Y. Tian, Y. Jin, "A knee point-driven evolutionary algorithm for many-objective optimization." IEEE Transactions on Evolutionary Computation 19.6 (2015): 761-776.
[36]. K. Deb, M. Mohan, S. Mishra, "Evaluating the ε-domination based multi-objective evolutionary algorithm for a quick computation of Pareto-optimal solutions." Evolutionary Computation 13.4 (2005): 501-525.
[37]. Y. Yuan, H. Xu, B. Wang, X. Yao, "A new dominance relation-based evolutionary algorithm for many-objective optimization." IEEE Transactions on Evolutionary Computation 20.1 (2016): 16-37.
[38]. M. Gong, H. Li, E. Luo, J. Liu, J. Liu, "A multiobjective cooperative coevolutionary algorithm for hyperspectral sparse unmixing." IEEE Transactions on Evolutionary Computation 21.2 (2017): 234-248.
[39]. C. K. Goh, K. C. Tan, "A competitive-cooperative coevolutionary paradigm for dynamic multiobjective optimization." IEEE Transactions on Evolutionary Computation 13.1 (2009): 103-127.
[40]. Z. H. Zhan, J. Li, J. Cao, J. Zhang, H. S. H. Chung, Y. H. Shi, "Multiple populations for multiple objectives: A coevolutionary technique for solving multiobjective optimization problems." IEEE Transactions on Cybernetics 43.2 (2013): 445-463.
[41]. E. Zitzler, S. Künzli, "Indicator-based selection in multiobjective search." International Conference on Parallel Problem Solving from Nature. Springer, Berlin, Heidelberg, 2004.
[42]. M. Emmerich, N. Beume, B. Naujoks, "An EMO algorithm using the hypervolume measure as selection criterion." Evolutionary Multi-Criterion Optimization (EMO), Vol. 3410, 2005.
[43]. D. Brockhoff, E. Zitzler, "Improving hypervolume-based multiobjective evolutionary algorithms by using objective reduction methods." IEEE Congress on Evolutionary Computation (CEC), 2007.
[44]. J. Bader, E. Zitzler, "HypE: An algorithm for fast hypervolume-based many-objective optimization." Evolutionary Computation 19.1 (2011): 45-76.
[45]. R. Storn, K. Price, "Differential evolution - a simple and efficient heuristic for global optimization over continuous spaces." Journal of Global Optimization 11.4 (1997): 341-359.
[46]. W. Gong, Z. Cai, "An improved multiobjective differential evolution based on Pareto-adaptive ϵ-dominance and orthogonal design." European Journal of Operational Research 198.2 (2009): 576-601.
[47]. J. Kennedy, R. Eberhart, "Particle swarm optimization." Proceedings of the IEEE International Conference on Neural Networks, Vol. 4, 1995.
[48]. S. Janson, D. Merkle, M. Middendorf, "Molecular docking with multi-objective particle swarm optimization." Applied Soft Computing 8.1 (2008): 666-675.
[49]. D. S. Liu, K. C. Tan, S. Y. Huang, C. K. Goh, W. K. Ho, "On solving multiobjective bin packing problems using evolutionary particle swarm optimization." European Journal of Operational Research 190.2 (2008): 357-382.
[50]. Y. Wang, Y. Yang, "Particle swarm optimization with preference order ranking for multi-objective optimization." Information Sciences 179.12 (2009): 1944-1959.
[51]. S. Z. Zhao, P. N. Suganthan, "Two-lbests based multi-objective particle swarm optimizer." Engineering Optimization 43.1 (2011): 1-17.
[52]. S. Kirkpatrick, C. D. Gelatt, M. P. Vecchi, "Optimization by simulated annealing." Science 220.4598 (1983): 671-680.
[53]. K. I. Smith, R. M. Everson, J. E. Fieldsend, C. Murphy, R. Misra, "Dominance-based multiobjective simulated annealing." IEEE Transactions on Evolutionary Computation 12.3 (2008): 323-342.
[54]. S. Bandyopadhyay, S. Saha, U. Maulik, K. Deb, "A simulated annealing-based multiobjective optimization algorithm: AMOSA." IEEE Transactions on Evolutionary Computation 12.3 (2008): 269-283.
[55]. C. A. C. Coello, "Theoretical and numerical constraint-handling techniques used with evolutionary algorithms: a survey of the state of the art." Computer Methods in Applied Mechanics and Engineering 191.11 (2002): 1245-1287.
[56]. Y. G. Woldesenbet, G. G. Yen, B. G. Tessema, "Constraint handling in multiobjective evolutionary optimization." IEEE Transactions on Evolutionary Computation 13.3 (2009): 514-525.
[57]. Z. Zhang, "Immune optimization algorithm for constrained nonlinear multiobjective optimization problems." Applied Soft Computing 7.3 (2007): 840-857.
[58]. B. Y. Qu, P. N. Suganthan, "Constrained multi-objective optimization algorithm with an ensemble of constraint handling methods." Engineering Optimization 43.4 (2011): 403-416.
[59]. J. Knowles, "ParEGO: A hybrid algorithm with on-line landscape approximation for expensive multiobjective optimization problems." IEEE Transactions on Evolutionary Computation 10.1 (2006): 50-66.
[60]. Q. Zhang, W. Liu, E. Tsang, B. Virginas, "Expensive multiobjective optimization by MOEA/D with Gaussian process model." IEEE Transactions on Evolutionary Computation 14.3 (2010): 456-474.
[61]. B. Liu, Q. Zhang, G. G. Gielen, "A Gaussian process surrogate model assisted evolutionary algorithm for medium scale expensive optimization problems." IEEE Transactions on Evolutionary Computation 18.2 (2014): 180-192.
[62]. C. Sun, Y. Jin, R. Cheng, J. Ding, J. Zeng, "Surrogate-assisted cooperative swarm optimization of high-dimensional expensive problems." IEEE Transactions on Evolutionary Computation (2017).
[63]. J. Yao, N. Kharma, P. Grogono, "Bi-objective multipopulation genetic algorithm for multimodal function optimization." IEEE Transactions on Evolutionary Computation 14.1 (2010): 80-102.
[64]. A. Basak, S. Das, K. C. Tan, "Multimodal optimization using a biobjective differential evolution algorithm enhanced with mean distance-based selection." IEEE Transactions on Evolutionary Computation 17.5 (2013): 666-685.
[65]. K. Deb, A. Saha, "Multimodal optimization using a bi-objective evolutionary algorithm." Evolutionary Computation 20.1 (2012): 27-62.
[66]. Y. Wang, H. X. Li, G. G. Yen, W. Song, "MOMMOP: Multiobjective optimization for locating multiple optimal solutions of multimodal optimization problems." IEEE Transactions on Cybernetics 45.4 (2015): 830-843.
[67]. B. Y. Qu, J. J. Liang, P. N. Suganthan, "Niching particle swarm optimization with local search for multi-modal optimization." Information Sciences 197 (2012): 131-143.
[68]. C. Yue, B. Y. Qu, J. J. Liang, "A multi-objective particle swarm optimizer using ring topology for solving multimodal multi-objective problems." IEEE Transactions on Evolutionary Computation (2017).
[69]. Z. Bingul, "Adaptive genetic algorithms applied to dynamic multiobjective problems." Applied Soft Computing 7.3 (2007): 791-799.
[70]. M. Farina, K. Deb, P. Amato, "Dynamic multiobjective optimization problems: test cases, approximations, and applications." IEEE Transactions on Evolutionary Computation 8.5 (2004): 425-442.
[71]. I. Hatzakis, D. Wallace, "Dynamic multi-objective optimization with evolutionary algorithms: a forward-looking approach." Proceedings of the 8th Annual Conference on Genetic and Evolutionary Computation. ACM, 2006.
[72]. C. K. Goh, K. C. Tan, "A competitive-cooperative coevolutionary paradigm for dynamic multiobjective optimization." IEEE Transactions on Evolutionary Computation 13.1 (2009): 103-127.
[73]. E. Hughes, "Evolutionary multi-objective ranking with uncertainty and noise." Evolutionary Multi-Criterion Optimization. Springer, Berlin, Heidelberg, 2001.
[74]. K. Deb, H. Gupta, "Introducing robustness in multi-objective optimization." Evolutionary Computation 14.4 (2006): 463-494.
[75]. C. K. Goh, K. C. Tan, "An investigation on noisy environments in evolutionary multiobjective optimization." IEEE Transactions on Evolutionary Computation 11.3 (2007): 354-381.
[76]. J. E. Fieldsend, R. M. Everson, "The rolling tide evolutionary algorithm: A multiobjective optimizer for noisy optimization problems." IEEE Transactions on Evolutionary Computation 19.1 (2015): 103-117.
[77]. J. D. Knowles, L. Thiele, E. Zitzler, "A tutorial on the performance assessment of stochastic multiobjective optimizers." TIK-Report 214 (2006).
[78]. E. Zitzler, L. Thiele, M. Laumanns, C. M. Fonseca, V. G. Fonseca, "Performance assessment of multiobjective optimizers: An analysis and review." IEEE Transactions on Evolutionary Computation 7.2 (2003): 117-132.
[79]. D. A. Van Veldhuizen, "Multiobjective evolutionary algorithms: classifications, analyses, and new innovations." Ph.D. dissertation, School of Engineering of the Air Force Institute of Technology, Dayton, Ohio, 1999.
[80]. E. Zitzler, "Evolutionary algorithms for multiobjective optimization: Methods and applications." Ph.D. thesis, ETH Zurich, 1999.
[81]. J. Wu, S. Azarm, "Metrics for quality assessment of a multiobjective design optimization solution set." Journal of Mechanical Design 123.1 (2001): 18-25.
[82]. T. Tušar, "Visualizing solution sets in multiobjective optimization." Ph.D. thesis, Jožef Stefan International Postgraduate School, 2014.
[83]. M. F. Ashby, "Multi-objective optimization in material design and selection." Acta Materialia 48.1 (2000): 359-369.
[84]. P. Hoffman, G. Grinstein, K. Marx, I. Grosse, E. Stanley, "DNA visual and analytic data mining." Proceedings of Visualization '97. IEEE, 1997.
[85]. A. Inselberg, B. Dimsdale, "Parallel coordinates for visualizing multi-dimensional geometry." Computer Graphics 1987. Springer, Tokyo, 1987. 25-44.
[86]. A. Pryke, S. Mostaghim, A. Nazemi, "Heatmap visualization of population based multi objective algorithms." Evolutionary Multi-Criterion Optimization. Springer, Berlin, Heidelberg, 2007.
[87]. A. Trivedi, D. Srinivasan, K. Sanyal, A. Ghosh, "A survey of multiobjective evolutionary algorithms based on decomposition." IEEE Transactions on Evolutionary Computation 21.3 (2017): 440-462.
[88]. C. A. C. Coello, G. B. Lamont, Applications of multi-objective evolutionary algorithms. Vol. 1. World Scientific, 2004.
[89]. K. Deb, "Multi-objective optimization." Search Methodologies. Springer US, 2014. 403-449.
[90]. E. Aggelogiannaki, H. Sarimveis, "A simulated annealing algorithm for prioritized multiobjective optimization - Implementation in an adaptive model predictive control configuration." IEEE Transactions on Systems, Man, and Cybernetics, Part B (Cybernetics) 37.4 (2007): 902-915.
[91]. W. Y. Chiu, "Multiobjective controller design by solving a multiobjective matrix inequality problem." IET Control Theory & Applications 8.16 (2014): 1656-1665.
[92]. R. Datta, S. Pradhan, B. Bhattacharya, "Analysis and design optimization of a robotic gripper using multiobjective genetic algorithm." IEEE Transactions on Systems, Man, and Cybernetics: Systems 46.1 (2016): 16-26.
[93]. S. Zhao, L. Jiao, "Multi-objective evolutionary design and knowledge discovery of logic circuits based on an adaptive genetic algorithm." Genetic Programming and Evolvable Machines 7.3 (2006): 195-210.
[94]. D. Ding, G. Wang, "Modified multiobjective evolutionary algorithm based on decomposition for antenna design." IEEE Transactions on Antennas and Propagation 61.10 (2013): 5301-5307.
[95]. A. Konstantinidis, K. Yang, "Multi-objective energy-efficient dense deployment in Wireless Sensor Networks using a hybrid problem-specific MOEA/D." Applied Soft Computing 11.6 (2011): 4117-4134.
[96]. S. Das, B. Natarajan, D. Stevens, P. Koduru, "Multi-objective and constrained optimization for DS-CDMA code design based on the clonal selection principle." Applied Soft Computing 8.1 (2008): 788-797.
[97]. C. H. Chen, J. S. He, T. P. Hong, "MOGA-based fuzzy data mining with taxonomy." Knowledge-Based Systems 54 (2013): 53-65.
[98]. Y. Zhang, P. I. Rockett, "A generic optimising feature extraction method using multiobjective genetic programming." Applied Soft Computing 11.1 (2011): 1087-1097.
[99]. X. Yuan, B. Zhang, P. Wang, J. Liang, Y. Yuan, Y. Huang, X. Lei, "Multi-objective optimal power flow based on improved strength Pareto evolutionary algorithm." Energy 122 (2017): 70-82.
[100]. J. Zhang, Q. Tang, P. Li, D. Deng, Y. Chen, "A modified MOEA/D approach to the solution of multi-objective optimal power flow problem." Applied Soft Computing 47 (2016): 494-514.
[101]. R. A. Abul'Wafa, "Optimization of economic/emission load dispatch for hybrid generating systems using controlled Elitist NSGA-II." Electric Power Systems Research 105 (2013): 142-151.
[102]. A. Trivedi, D. Srinivasan, K. Pal, C. Saha, T. Reindl, "Enhanced multiobjective evolutionary algorithm based on decomposition for solving the unit commitment problem." IEEE Transactions on Industrial Informatics 11.6 (2015): 1346-1357.
[103]. P. P. Biswas, P. N. Suganthan, G. A. Amaratunga, "Decomposition based multi-objective evolutionary algorithm for windfarm layout optimization." Renewable Energy 115 (2018): 326-337.
[104]. P. P. Biswas, R. Mallipeddi, P. N. Suganthan, G. A. Amaratunga, "A multiobjective approach for optimal placement and sizing of distributed generators and capacitors in distribution network." Applied Soft Computing 60 (2017): 268-280.
[105]. P. Koduru, Z. Dong, S. Das, S. M. Welch, J. L. Roe, E. Charbit, "A multiobjective evolutionary-simplex hybrid approach for the optimization of differential equation models of gene networks." IEEE Transactions on Evolutionary Computation 12.5 (2008): 572-590.
[106]. S. Janson, D. Merkle, M. Middendorf, "Molecular docking with multi-objective particle swarm optimization." Applied Soft Computing 8.1 (2008): 666-675.
[107]. J. C. Calvo, J. Ortega, M. Anguita, "PITAGORAS-PSP: Including domain knowledge in a multi-objective approach for protein structure prediction." Neurocomputing 74.16 (2011): 2675-2682.
[108]. J. M. Chaves-González, M. A. Vega-Rodríguez, "A multiobjective approach based on the behavior of fireflies to generate reliable DNA sequences for molecular computing." Applied Mathematics and Computation 227 (2014): 291-308.
[109]. Z. J. Liu, D. P. Sun, C. X. Lin, X. Q. Zhao, Y. Yang, "Multi-objective optimization of the operating conditions in a cutting process based on low carbon emission costs." Journal of Cleaner Production 124 (2016): 266-275.
[110]. L. García-Hernández, A. Arauzo-Azofra, L. Salas-Morera, H. Pierreval, E. Corchado, "Facility layout design using a multi-objective interactive genetic algorithm to support the DM." Expert Systems 32.1 (2015): 94-107.
[111]. E. Ahmadi, M. Zandieh, M. Farrokh, S. M. Emami, "A multi objective optimization approach for flexible job shop scheduling problem under random machine breakdown by evolutionary algorithms." Computers & Operations Research 73 (2016): 56-66.
[112]. A. A. Aguilar-Lasserre, L. Pibouleau, C. Azzaro-Pantel, S. Domenech, "Enhanced genetic algorithm-based fuzzy multiobjective strategy to multiproduct batch plant design." Applied Soft Computing 9.4 (2009): 1321-1330.
[113]. H. Halsall-Whitney, J. Thibault, "Multi-objective optimization for chemical processes and controller design: Approximating and classifying the Pareto domain." Computers & Chemical Engineering 30.6-7 (2006): 1155-1168.
[114]. N. Bhutani, A. K. Ray, G. P. Rangaiah, "Modeling, simulation, and multi-objective optimization of an industrial hydrocracking unit." Industrial & Engineering Chemistry Research 45.4 (2006): 1354-1372.
[115]. V. Trivedi, S. Prakash, M. Ramteke, "Optimized on-line control of MMA polymerization using fast multi-objective DE." Materials and Manufacturing Processes 32.10 (2017): 1144-1151.
[116]. A. H. Mamaghani, B. Najafi, A. Shirazi, F. Rinaldi, "4E analysis and multi-objective optimization of an integrated MCFC (molten carbonate fuel cell) and ORC (organic Rankine cycle) system." Energy 82 (2015): 650-663.
[117]. K. P. Anagnostopoulos, G. Mamanis, "A portfolio optimization model with three objectives and discrete variables." Computers & Operations Research 37.7 (2010): 1285-1297.
[118]. A. Mukerjee, R. Biswas, K. Deb, A. P. Mathur, "Multi-objective evolutionary algorithms for the risk-return trade-off in bank loan management." International Transactions in Operational Research 9.5 (2002): 583-597.
[119]. A. C. Briza, P. C. Naval Jr., "Stock trading system based on the multi-objective particle swarm optimization of technical indicators on end-of-day market data." Applied Soft Computing 11.1 (2011): 1191-1201.
[120]. G. N. Demir, A. S. Uyar, Ş. Gündüz-Öğüdücü, "Multiobjective evolutionary clustering of web user sessions: a case study in web page recommendation." Soft Computing 14.6 (2010): 579-597.
[121]. B. Lazzerini, F. Marcelloni, M. Vecchio, "A multi-objective evolutionary approach to image quality/compression trade-off in JPEG baseline algorithm." Applied Soft Computing 10.2 (2010): 548-561.
[122]. S. N. Qasem, S. M. Shamsuddin, "Radial basis function network based on time variant multi-objective particle swarm optimization for medical diseases diagnosis." Applied Soft Computing 11.1 (2011): 1427-1438.
[123]. A. B. Cara, C. Wagner, H. Hagras, H. Pomares, I. Rojas, "Multiobjective optimization and comparison of nonsingleton type-1 and singleton interval type-2 fuzzy logic systems." IEEE Transactions on Fuzzy Systems 21.3 (2013): 459-476.
[124]. H. C. Lau, T. M. Chan, W. T. Tsui, F. T. Chan, G. T. Ho, K. L. Choy, "A fuzzy guided multi-objective evolutionary algorithm model for solving transportation problem." Expert Systems with Applications 36.4 (2009): 8255-8268.
[125]. M. Saadatseresht, A. Mansourian, M. Taleai, "Evacuation planning using multiobjective evolutionary optimization approach." European Journal of Operational Research 198.1 (2009): 305-314.
