

International Journal of Machine Learning and Cybernetics
https://doi.org/10.1007/s13042-020-01189-1

ORIGINAL ARTICLE

EMoSOA: a new evolutionary multi‑objective seagull optimization algorithm for global optimization
Gaurav Dhiman1 · Krishna Kant Singh2 · Adam Slowik3 · Victor Chang4 · Ali Riza Yildiz5 · Amandeep Kaur6 ·
Meenakshi Garg7

Received: 27 September 2019 / Accepted: 20 August 2020


© Springer-Verlag GmbH Germany, part of Springer Nature 2020

Abstract
This study introduces an evolutionary multi-objective version of the seagull optimization algorithm (SOA), entitled the Evolutionary Multi-objective Seagull Optimization Algorithm (EMoSOA). In this algorithm, a dynamic archive concept, a grid mechanism, leader selection, and genetic operators are employed, with the capability to cache the non-dominated Pareto optimal solutions. The roulette-wheel method is employed to select appropriate archived solutions. The proposed algorithm is tested and compared with state-of-the-art metaheuristic algorithms over twenty-four standard benchmark test functions. Four real-world engineering design problems are solved using the proposed EMoSOA algorithm to determine its adequacy. The findings of the empirical study indicate that the proposed algorithm performs better than the other algorithms and obtains Pareto optimal solutions with high convergence.

Keywords  Seagull Optimization Algorithm · Multi-objective Optimization · Evolutionary · Pareto · Engineering Design
Problems · Convergence · Diversity

The source codes are available at: http://dhimangaurav.com/.

* Gaurav Dhiman
gdhiman0001@gmail.com

1 Department of Computer Science, Government Bikram College of Commerce, Patiala, Punjab 147001, India
2 Department of Electronics and Communication Engineering, KIET Group of Institution, Delhi‑NCR, Ghaziabad, India
3 Department of Electronics and Computer Science, Koszalin University of Technology, Sniadeckich 2, 75‑453 Koszalin, Poland
4 School of Computing, Engineering and Digital Technologies, Teesside University, Middlesbrough, UK
5 Department of Automotive Engineering, College of Engineering, Uludag University, Grkle, Bursa 16059, Turkey
6 Department of Computer Science and Engineering, Sri Guru Granth Sahib World University, Fatehgarh Sahib, Punjab, India
7 Department of Computer Science, Government Bikram College of Commerce, Patiala, Punjab 147001, India

1 Introduction

In recent decades, metaheuristic optimization techniques have attracted tremendous attention from researchers for addressing real-world search and optimization problems. These techniques are mathematically tractable, relatively affordable, and faster than exhaustive searches. Such approaches aim to achieve near-optimal solutions [1–11]. In general, metaheuristic optimization strategies can be divided into single-objective and multi-objective categories. The goal of single-objective techniques is to find the single globally best solution of one objective function [12]. Nonetheless, multiple objectives must be addressed concurrently in most real-life optimization problems. Because these objectives are usually conflicting in nature, a single-objective approach fails to achieve all the objectives. On the other hand, multi-objective optimization strategies deal with problems consisting of many competing objectives and aim to find the best possible compromise solutions [12–23].

The main problem in multi-objective optimization is modelling the preferences of decision makers with regard to the relative importance of conflicting goals. This task is tackled with three main methods: Priori, Posteriori, and Interactive [24–32]. The


approaches to Priori transform a multi-objective problem into a single-objective problem before the algorithm is run. Each objective function is assigned a weight in the resulting scalar objective function according to the relative importance of its corresponding objective [33]. The methods of Posteriori do not require any previous user knowledge. These approaches produce a set of Pareto optimal solutions which are mathematically equivalent and allow the decision maker to choose one solution. In order to find suitable Pareto optimal solutions, Interactive approaches [24], often referred to as human-in-the-loop approaches, repeatedly consult the decision maker and incorporate his or her choices into the search.

Conversely, multi-objective optimization (MOO) does not give a single solution but a set of compromises among the different goals. For any MOO method it is crucial to come up with multiple well-distributed points on the Pareto front [34]. Even now, MOO approaches are not guaranteed to spread solutions uniformly along the front [35]. The outcome of these multi-dimensional problems is therefore very difficult to predict. The concept of MOO using stochastic techniques was initially introduced by David Schaffer [36]. These techniques are simple, derivative-free tools, which makes them applicable to various problems. These multi-objective methods can be used in different fields of engineering and science [37–39]: bio-informatics [40], civil engineering [41], water resource engineering [42–46], biodiesel [47], artificial intelligence [48–52], power systems [53], fuzzy optimization [54–56], system engineering [57], mechanical engineering [58, 59], software engineering [60], and other domains [37, 38, 61–68]. A few examples of optimization techniques that solve multi-objective problems include the non-dominated sorting genetic algorithm 2 (NSGA-II) [69], the multi-objective evolutionary algorithm based on decomposition (MOEA/D) [70], and multi-objective particle swarm optimization (MOPSO) [71]. Nevertheless, no such technique can solve every optimization problem [59, 60, 72, 73], although these techniques can approximate the true Pareto optimal solutions (POSs) [74, 75]. In this study, a MOO algorithm is introduced which is an advancement of the recently developed seagull optimization algorithm (SOA) [20]. The proposed algorithm is used to solve MOO problems. The main contributions are structured as follows:

• An archive component is added to the SOA algorithm that accumulates all non-dominated Pareto solutions.
• A leader selection method is suggested for choosing, from the archive, the solutions that define the location of the prey.
• A grid mechanism is implemented in the proposed algorithm to eliminate the most crowded regions and thereby strengthen the non-dominated solutions.
• Evolutionary operators such as crossover and mutation are also employed to enhance the convergence and diversity of the proposed algorithm.

The algorithm is therefore entitled the Evolutionary Multi-objective Seagull Optimization Algorithm (EMoSOA). The efficiency of the proposed algorithm is tested on IEEE CEC (Congress on Evolutionary Computation) [76], Zitzler, Deb, and Thiele (ZDT) [77], and Deb, Thiele, Zitzler, and Laumanns (DTLZ) [78] multi-objective test problems. For comparison, six well-known optimization approaches are chosen: multi-objective particle swarm optimization (MOPSO) [71], non-dominated sorting genetic algorithm 2 (NSGA-II) [69], multi-objective evolutionary algorithm based on decomposition (MOEA/D) [70], multi-objective vortex search algorithm (MOVS) [79], multi-objective artificial algae algorithm (MOAAA) [80], and multi-objective spotted hyena optimizer (MOSHO) [81]. Also, four performance metrics are used to appraise these algorithms. Further, the proposed algorithm is validated on four real-life engineering design problems: welded beam design, multiple-disk clutch brake design, pressure vessel design, and 25-bar truss design.

The remainder of this article is organized as follows. Section 2 presents the related works. Section 3 presents the mathematical concepts of the SOA algorithm, followed by the multi-objective optimization definitions in Sect. 4. Section 5 presents the proposed EMoSOA algorithm in detail. Section 6 includes the experimental results and discussions. In Sect. 7, the effectiveness of the proposed algorithm is evaluated on four real-life engineering design problems. Finally, the conclusions and future works are presented in Sect. 8.

2 Related works

Several MOO algorithms have recently been published in the literature. Multi-objective metaheuristic techniques face many issues, such as maintaining a diverse set of solutions, avoiding infeasible ones, and distinguishing the optimal outcomes [82]. The principle of transferring information between the search area and the agents helps solve these problems. In MOO algorithms, the optimal Pareto front should be accomplished within a single run.

Non-dominated sorting genetic algorithm 2 (NSGA-II) [69] is one of the most widely used multi-objective metaheuristic techniques. This technique derives its strong performance from a fast non-dominated sorting procedure and a rigid niche operator. In NSGA-II, a random population is created, and individuals are divided into non-dominated fronts. To support the selection, mutation, and recombination operators, another random population is formed. A new population is generated in each simulation and non-dominated sorting


is performed. Selection into the new population depends entirely on the non-domination level of individuals in the final population. The entire cycle is iterated until the desired results are obtained. Another of the more common MOO algorithms is MOPSO [71]. It is an extended version of the PSO algorithm. The concept of an archive is used in MOPSO for storing and retrieving POSs, and a mutation operator is often used to enhance its efficiency.

Another well-known MOO algorithm is the multi-objective evolutionary algorithm based on decomposition (MOEA/D) [70]. It is based on the principle of decomposing a problem into subproblems that can be treated in parallel. Each of the population-size subproblems is subject to a single objective function, which aggregates several subproblem outcomes. The search method is generally divided into two sub-processes, cooperation and competition. Cooperation between neighborhood members, as the name implies, helps solve the problem mutually: when neighbors hold better solutions, worse solutions are replaced by them. The competitive task consists of finding the best answer by querying neighbours. The computational complexity and convergence speed of MOEA/D are comparable to those of NSGA-II.

The ant colony optimization (ACO) instance for solving MOO problems, named MOACO, was developed by Angus and Woodward [83]. It employs concepts like the pheromone model, the construction process, solution evaluation, and the update process. The non-dominated neighbor immune algorithm (NNIA) was developed for MOO problems by Gong et al. [84]. It depends on the ideas of non-dominated neighbor-based selection, immune and heuristic search operators, and elitism. Yet one of NNIA's big drawbacks is the lack of diversity. This issue has been addressed in the updated edition of NNIA, called NNIA2, in which adaptive mechanisms and K-nearest ranks are used to boost variety.

Özkış and Babalık [79] proposed the multi-objective vortex search algorithm (MOVS). Fast non-dominated sorting and crowding-distance methods are used in the MOVS algorithm. A crossover operation is integrated into the MOVS algorithm in order to enhance the Pareto front convergence capacity of the solutions. Finally, the inverse incomplete gamma function is used to spread the solutions more successfully over the Pareto front. Babalık et al. [80] proposed the multi-objective artificial algae algorithm (MOAAA), which is inspired by the behavior of microalgae cells. Fast non-dominated sorting and crowding-distance methods are used in this algorithm. The convergence efficiency of MOAAA is better than that of other algorithms in terms of well-known performance metrics.

Throughout the literature, other algorithms are also provided to solve MOO problems, such as multi-objective cat swarm optimization (MOCSO) [85], the multi-objective artificial bee colony algorithm (MOABC) [86], the multi-objective flower pollination algorithm (MOFPA) [35], a set-based genetic algorithm [87], a self-adaptive artificial bee colony algorithm [88], a hybrid harmony search and artificial bee colony algorithm [89], and an external-archive-guided multi-objective evolutionary algorithm based on decomposition [90]. While the literature describes many optimization algorithms, there is still no single algorithm that solves all such problems, and newly created optimization algorithms may address problems that have not been solved before. A further progression of the recently developed SOA algorithm is therefore defined in order to find the optimal solutions of multi-objective problems.

3 Seagull optimization algorithm (SOA)

In this section, the inspiration and mathematical modeling of the proposed algorithm are discussed in detail.

3.1 Biological paradigm

Seagulls, scientifically named Laridae, are sea birds which can be found all over the planet. There is a wide range of seagull species with different masses and lengths. Seagulls are omnivorous and eat insects, fish, reptiles, amphibians, earthworms, and so on. The body of most seagulls is covered with white plumage. Seagulls are very intelligent birds. They use bread crumbs to attract fish and produce rain-like sounds with their feet to attract earthworms hidden under the ground. Seagulls can drink both fresh and salt water, which most animals are unable to do; seagulls have a special pair of glands right above their eyes which is specifically designed to flush the salt from their systems through openings in the bill.

Generally, seagulls live in colonies. They use their intelligence to find and attack the prey. The most important things about seagulls are their migrating and attacking behaviors. Migration is defined as the seasonal movement of seagulls from one place to another to find the richest and most abundant food sources that will provide adequate energy [91]. This behavior is described as follows:

• During migration, they travel in a group. The initial positions of seagulls are different to avoid collisions between each other.
• In a group, seagulls can travel towards the direction of the fittest seagull, i.e., a seagull whose fitness value¹ is low as compared to others.

¹ The fitness value is the score assigned by the fitness function, a function which evaluates each member of the population and measures the quality of the represented solution.


Fig. 1  Migration and attacking behaviors of seagulls

Fig. 3  Movement of search agents towards the best neighbour

• Based on the fittest seagull, other seagulls can update their initial positions.

Seagulls frequently attack migrating birds over the sea [92] when they migrate from one place to another. They can make their natural spiral-shaped movement while attacking. A conceptual model of these behaviors is illustrated in Fig. 1. These behaviors can be formulated in such a way that they can be associated with the objective function to be optimized. This makes it possible to formulate a new optimization algorithm. This paper focuses on two natural behaviors of seagulls.

3.2 Mathematical model

The mathematical models of migration and attacking the prey are discussed below.

3.2.1 Migration (exploration)

During migration, the algorithm simulates how the group of seagulls moves from one position to another. In this phase, a seagull should satisfy three conditions:

• Avoiding the collisions: To avoid collisions with neighbours (i.e., other seagulls), an additional variable A is employed to calculate the new position of a search agent (see Fig. 2).

Fig. 2  Collision avoidance between search agents

C⃗s = A × P⃗s(x)   (1)

where C⃗s represents the position of the search agent which does not collide with other search agents, P⃗s represents the current position of the search agent, x indicates the current iteration, and A represents the movement behavior of the search agent in the given search space:

A = fc − (x × (fc / Maxiteration)), where x = 0, 1, 2, …, Maxiteration   (2)

where fc is introduced to control the frequency of employing variable A, which is linearly decreased from fc to 0. In this work, the value of fc is set to 2.

• Movement towards the best neighbour's direction: After avoiding collisions with neighbours, the search agents move towards the direction of the best neighbour (see Fig. 3).


M⃗s = B × (P⃗bs(x) − P⃗s(x))   (3)

where M⃗s represents the movement of the search agent P⃗s towards the best-fit search agent P⃗bs (i.e., the fittest seagull). The behavior of B is randomized, which is responsible for a proper balance between exploration and exploitation. B is calculated as:

B = 2 × A² × rd   (4)

where rd is a random number lying in the range [0, 1].

• Remain close to the best search agent: Lastly, the search agent can update its position with respect to the best search agent, as shown in Fig. 4.

Fig. 4  Convergence towards the best search agent

D⃗s = ∣ C⃗s + M⃗s ∣   (5)

where D⃗s represents the distance between the search agent and the best-fit search agent (i.e., the best seagull, whose fitness value is lowest).

3.2.2 Attacking (exploitation)

The exploitation phase intends to exploit the history and experience of the search process. Seagulls can continuously change the angle of attack as well as their speed during migration. They maintain their altitude using their wings and weight. While attacking the prey, a spiral movement behavior occurs in the air (see Fig. 5). This behavior in the x, y, and z planes is described as follows:

Fig. 5  Natural attacking behavior of seagull

x′ = r × cos(k)   (6)

y′ = r × sin(k)   (7)

z′ = r × k   (8)

r = u × e^(kv)   (9)

where r is the radius of each turn of the spiral, k is a random number in the range [0 ≤ k ≤ 2π], u and v are constants that define the spiral shape, and e is the base of the natural logarithm. The updated position of the search agent is calculated using Eqs. (5)–(9):

P⃗s(x) = (D⃗s × x′ × y′ × z′) + P⃗bs(x)   (10)

where P⃗s(x) saves the best solution and updates the position of the other search agents.

The proposed SOA starts with a randomly generated population. The search agents update their positions with respect to the best search agent during the iteration process. A is linearly decreased from fc to 0, and variable B is responsible for a smooth transition between exploration and exploitation. Hence, SOA is considered a global optimizer (see Algorithm 1) because of its balanced exploration and exploitation capability.

Algorithm 1: Seagull Optimization Algorithm

Input: Seagull population Ps
Output: Optimal search agent Pbs
1: procedure SOA
2:     Initialize the parameters A, B, and Maxiteration
3:     Set fc ← 2
4:     Set u ← 1
5:     Set v ← 1
6:     while (x < Maxiteration) do
7:         Pbs ← ComputeFitness(Ps)   /* Calculate the fitness values of each search agent using the ComputeFitness function */
           /* Migration behavior */
8:         rd ← Rand(0, 1)   /* Generate a random number in range [0, 1] */
9:         k ← Rand(0, 2π)   /* Generate a random number in range [0, 2π] */
           /* Attacking behavior */
10:        r ← u × e^(kv)   /* Generate the spiral behavior during migration */
11:        Calculate the distance D⃗s using Eq. (5)
12:        P⃗ ← x′ × y′ × z′   /* Compute the x, y, z planes using Eqs. (6)–(9) */
13:        P⃗s(x) ← (D⃗s × P⃗) + P⃗bs
14:        x ← x + 1
15:    end while
16:    return Pbs
17: end procedure

1: procedure ComputeFitness(Ps)
2:     for i ← 1 to n do   /* Here, n represents the dimension of a given problem */
3:         FITs[i] ← FitnessFunction(Ps(i, :))   /* Calculate the fitness of each individual */
4:     end for
5:     FITs_best ← BEST(FITs[])   /* Calculate the best fitness value using the BEST function */
6:     return FITs_best
7: end procedure

1: procedure BEST(FITs[])
2:     Best ← FITs[0]
3:     for i ← 1 to n do
4:         if (FITs[i] < Best) then
5:             Best ← FITs[i]
6:         end if
7:     end for
8:     return Best   /* Return the best fitness value */
9: end procedure
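The update rules of Eqs. (1)–(10) can be combined into a compact single-objective sketch. The Python code below is a minimal illustration under our own assumptions (a generic fitness function and boundary clipping to keep agents inside the search space); it is not the authors' reference implementation:

```python
import numpy as np

def soa(fitness, dim, n_agents=30, max_iter=100,
        lb=-10.0, ub=10.0, fc=2.0, u=1.0, v=1.0):
    """Minimal sketch of the seagull optimization algorithm (Eqs. 1-10)."""
    P = np.random.uniform(lb, ub, (n_agents, dim))        # random initial population
    best = P[np.argmin([fitness(p) for p in P])].copy()   # best search agent P_bs
    for x in range(max_iter):
        A = fc - x * (fc / max_iter)                      # Eq. (2): decreases fc -> 0
        for i in range(n_agents):
            rd = np.random.rand()
            B = 2 * A**2 * rd                             # Eq. (4)
            Cs = A * P[i]                                 # Eq. (1): collision avoidance
            Ms = B * (best - P[i])                        # Eq. (3): move towards best
            Ds = np.abs(Cs + Ms)                          # Eq. (5): distance to best
            k = np.random.uniform(0, 2 * np.pi)
            r = u * np.exp(k * v)                         # Eq. (9): spiral radius
            xp, yp, zp = r * np.cos(k), r * np.sin(k), r * k   # Eqs. (6)-(8)
            P[i] = np.clip(Ds * xp * yp * zp + best, lb, ub)   # Eq. (10)
        cand = P[np.argmin([fitness(p) for p in P])]
        if fitness(cand) < fitness(best):                 # keep the best-so-far agent
            best = cand.copy()
    return best
```

For example, minimizing the sphere function f(z) = Σ zᵢ² over [−10, 10]³ drives the best agent towards the origin.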

4 Multi‑objective optimization

A multi-objective technique can be described as an optimization method for problems that have more than one objective function [93]:

Minimize: F(z⃗) = [f1(z⃗), f2(z⃗), …, fn(z⃗)]   (11)

Subject to:

gi(z⃗) ≥ 0, i = 1, 2, …, m   (12)

hi(z⃗) = 0, i = 1, 2, …, p   (13)

where z⃗ = [z1, z2, …, zk]T is the decision variable vector, gi is the ith inequality constraint, hi is the ith equality constraint, m is the number of inequality constraints, and p is the number of equality constraints.

Multi-objective optimization returns several Pareto optimal solutions rather than a single one. Despite the existence of several comparison criteria, the usual relational operators cannot rank these solutions against each other. We therefore need other operators in order to determine the superiority of one candidate solution over another. Edgeworth [94] initially proposed the notion of Pareto supremacy, which was later extended by Pareto [95]; it can be described as follows [96]:

Definition 1  Pareto dominance
Consider the following two solution vectors: r⃗ = (r1, r2, …, rw) and s⃗ = (s1, s2, …, sw). A solution vector

r⃗ will dominate another solution vector s⃗ (symbolized as r⃗ ≺ s⃗) iff:

∀p ∈ {1, 2, …, w}: [fp(r⃗) ≤ fp(s⃗)] ∧ ∃p ∈ {1, 2, …, w}: [fp(r⃗) < fp(s⃗)]   (14)

Definition 2  Pareto optimality
A solution r⃗ ∈ R is said to be Pareto optimal iff:

∄ s⃗ ∈ R ∣ s⃗ ≺ r⃗   (15)

Definition 3  Pareto optimal set
A set that includes all the non-dominated Pareto optimal solutions of a problem is called the Pareto optimal set:

Po = {r⃗ ∈ R ∣ ∄ s⃗ ∈ R : s⃗ ≺ r⃗}   (16)

Definition 4  Pareto optimal front
The set of objective values for the solutions in the Pareto optimal set is called the Pareto optimal front:

Pf = {f(r⃗) ∣ r⃗ ∈ Po}   (17)

5 Proposed evolutionary multi‑objective seagull optimization algorithm (EMoSOA)

5.1 Motivation

A proper balance between exploration and exploitation allows an optimization algorithm to find the globally optimal solutions while solving any metaheuristic optimization problem. Exploration describes the diversity of the global search for new solutions, while exploitation preserves optimal convergence by searching neighborhood solutions [97].

The main concept of the proposed algorithm is based on the natural behaviors of seagulls. Four components have been used to develop a MOO version of the SOA [20]. The foremost are the archive controller and the grid, which store the optimal non-dominated Pareto solutions; the latter are the leader selection approach and the evolutionary operators, which select the most effective solution from the archive with respect to the orientation of the prey.

5.1.1 Archive controller

In a storage space called the archive, all of the best POSs obtained are kept. The controller determines whether to include a particular solution in the list. The archive update rules are given below:

• The current solution should be accepted if the archive is found to be empty.
• If the solution is dominated by an individual inside the archive, then that solution should be discarded.
• If the solution is not dominated by the external population, then that solution should be accepted and stored inside the archive.
• If some archive members are dominated by the new element, then they are discarded from the archive.

5.1.2 Grid

The Pareto fronts are generated by the adaptive grid process [98]. The employed objective space consists of four regions. The grid approach is used to relocate individuals from the created populations when they lie outside the grid area [99]. The grid space is created by an even distribution of hypercubes.

5.1.3 Leader selection mechanism

The principal problem in a multi-objective search space is to determine the new solutions in a given search area. To solve this issue, a leader selection method is used; this technique targets the least crowded regions of the search space. The roulette-wheel selection method is used to pick one of the best solutions within the search boundary. It is described as follows:

Uk = g / Nk   (18)

where g is a constant greater than 1 and Nk is the count of POSs in the kth segment. This method is a conventional approach that uses roulette-wheel proportions to define the contribution of every individual. Its advantage over others is that it always gives every individual a chance to be selected, and it has an efficient time complexity when implemented in parallel. The crossover and mutation strategies employed in this paper are the same as described in the NSGA-II algorithm.
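Definition 1 and the archive-update rules above can be sketched in a few lines of Python; the function names and the list-based archive are our own illustrative assumptions:

```python
import numpy as np

def dominates(r, s):
    """Pareto dominance (Definition 1): r dominates s iff r is no worse
    in every objective and strictly better in at least one (minimization)."""
    r, s = np.asarray(r), np.asarray(s)
    return bool(np.all(r <= s) and np.any(r < s))

def update_archive(archive, new):
    """Apply the archive-controller rules to one candidate objective vector."""
    if not archive:                                  # empty archive: accept
        return [new]
    if any(dominates(a, new) for a in archive):      # dominated by a member: discard
        return archive
    # otherwise accept, and drop members dominated by the new element
    return [a for a in archive if not dominates(new, a)] + [new]
```

Feeding the objective vectors [1, 2], [2, 1], [1.5, 1.5], and [0.5, 0.5] in sequence leaves only [0.5, 0.5] in the archive, since it dominates the other three.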


Table 1  Characteristics of ZDT and DTLZ benchmark test functions

Problems   Properties
ZDT1       Convex
ZDT4       Convex
ZDT2       Concave
ZDT6       Concave
DTLZ2      Concave
DTLZ3      Concave
DTLZ4      Concave
ZDT3       Disconnected
DTLZ7      Disconnected
DTLZ1      Linear
DTLZ5      –
DTLZ6      –

2. The fitness computation of the search agents needs O(Maxiterations × no × np) time, where Maxiterations represents the maximum number of iterations.
3. It needs O(no × (nns + np)) time to update the archive of non-dominated solutions, where nns is the number of non-dominated solutions.
4. Steps 2 and 3 are repeated until the maximum iteration is reached.

Hence, the time complexity of the EMoSOA algorithm is O(Maxiterations × no × (np + nns)).

5.2.2 Space complexity

The space complexity of the proposed EMoSOA is determined during its initialization process, i.e., the memory for the population requires O(no × np) space.

Algorithm 2: Evolutionary Multi-objective Seagull Optimization Algorithm (EMoSOA)

Input: Population Pp
Output: Archive of non-dominated optimal solutions
1: procedure EMoSOA
2:     For each search agent, calculate the corresponding objective values
3:     Find all the non-dominated solutions and initialize the archive with them
4:     while (x < Maxitr) do
5:         for each search agent do
6:             Update the position of the current search agent
7:         end for
8:         Apply mutation and crossover operators on these updated search agents
9:         Compute the objective values of all search agents
10:        Find the non-dominated solutions among the updated search agents
11:        Update the archive with the obtained non-dominated solutions
12:        if archive is full then
13:            Run the grid method to omit one of the most crowded archive members
14:            Add the new solution to the archive
15:        end if
16:        Check if any search agent goes beyond the search space and, if so, adjust it
17:        Compute the objective values of each search agent
18:        x ← x + 1
19:    end while
20:    return archive
21: end procedure
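The roulette-wheel leader selection of Sect. 5.1.3 (Eq. 18) weights each occupied grid segment by Uk = g/Nk, so less crowded segments are more likely to supply the leader. A minimal sketch, assuming a dictionary mapping segment labels to their archive member counts (the labels, the counts, and g = 2 are illustrative assumptions):

```python
import random

def select_leader_segment(segment_counts, g=2.0, rng=random):
    """Roulette-wheel leader selection (Eq. 18): each occupied grid segment k
    gets weight U_k = g / N_k, so less crowded segments are favoured."""
    segments = list(segment_counts)
    weights = [g / segment_counts[k] for k in segments]   # U_k = g / N_k
    return rng.choices(segments, weights=weights, k=1)[0]

# Example: segment 'a' holds 8 archive members, 'b' holds 2;
# 'b' is four times as likely to provide the leader as 'a'.
counts = {"a": 8, "b": 2}
leader_segment = select_leader_segment(counts)
```

In EMoSOA the leader (the prey position) would then be drawn from the archive members stored in the chosen segment.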

5.2 Computational complexity

In this section, the computational complexity of the proposed technique is analyzed in detail.

5.2.1 Time complexity

1. It takes O(no × np) time to initialize the population, where np is the population size and no is the number of objectives.

6 Experimental results and discussions

6.1 Experimental setup

The experiments and all algorithms are implemented in Matlab R2018a (8.3.0.532) and run under Microsoft Windows 10 (64-bit) on a Core i7 processor with 3.20 GHz and 8 GB memory. The proposed MOO algorithm is compared with six well-known optimization approaches: multi-objective particle swarm optimization (MOPSO) [71], non-dominated sorting genetic


algorithm 2 (NSGA-II) [69], multi-objective evolutionary 6.2 Performance metrics


algorithm based on decomposition (MOEA/D) [70], multi-
objective vortex search algorithm (MOVS) [79], multi- To compute the performance of the proposed EMoSOA
objective artificial algae algorithm (MOAAA​) [80], and algorithm, the four different performance metrics have
multi-objective spotted hyena optimizer (MOSHO) [81]. been selected, as: Hypervolume (HV) [100, 101], Δp (p = 1)
Various parameters associated with these approaches have [102–104], Spread [101, 105], and Epsilon ( 𝜀 ) [101, 106].
been kept same according to their original works. The well- The mathematical formulation of these performance met-
known benchmark test functions are used in this paper (see rics are as follows:
Table 1). However, the size of population and maximum ( |Q| )
iterations for these algorithms are fixed to 100 and 1000, respectively. The results are computed based on the average and standard deviation over 30 independent runs. The initial parameters for the MOPSO algorithm are set as [71]:

• 𝜙a = 𝜙b = 2.05
• 𝜙f = 𝜙a + 𝜙b
• Inertia weight: w = 2 / (𝜙f − 2 + √(𝜙f² − 4𝜙f))
• Personal coefficient: c1 = 𝜒 × 𝜙a
• Social coefficient: c2 = 𝜒 × 𝜙b
• Grid inflation parameter: 𝛼 = 0.1
• Leader selection pressure parameter: 𝛽 = 4
• Number of grids: Gridnumber = 10

For NSGA-II, the following parameters are set as [84]:

• Population size (X) = 100
• Crossover probability Pc = 0.8
• Mutation probability Pm = 0.1

For MOEA/D, the following parameters are chosen as [70]:

• Subproblems: N = 100
• Number of neighbours: T = 0.1 × N
• Updated new child maximal copies: M = 0.01 × N
• Probability of selecting parents: Pp = 0.9
• Mutation rate: Mr = 0.5
• Distribution index: Di = 30

The parameters for the MOSHO algorithm are described as [81]:

• Number of grids = 10
• Grid inflation parameter (𝛼) = 0.1
• rd1 and rd2 = [0, 1]
• Group selection parameter (𝛽) = 4
• Parameter h⃗ = [5, 0]
• Parameter M⃗ = [0.5, 1]

Hypervolume (HV) = volume( ⋃_{k=1}^{|Q|} u_k )    (19)

It computes the volume covered by the members of Q in the objective space, where all objectives are to be minimized. For each solution k ∈ Q, a hypercube u_k is constructed with the solution k and a reference point (R_p) as its diagonal corners. The reference point (R_p) is a vector of the worst objective function values.

Generational Distance (GD) = √( Σ_{k=1}^{|Q|} d_k² ) / |Q|    (20)

where d_k is the Euclidean distance between the solution k ∈ Q and the nearest member of the reference set X in the objective space, such that

d_k = min_{j∈|X|} √( Σ_{y=1}^{Y} ( f_y^{(z)} − f_y^{∗(z)} )² )    (21)

where f_y^{∗(z)} is the yth objective function value of the jth member of the set X.

Inverted Generational Distance (IGD) = √( Σ_{m=1}^{n} Ed_m² ) / n    (22)

where Ed_m is the Euclidean distance between the mth true Pareto-optimal solution and the nearest approximated POS in a given set, and n denotes the number of true POSs.

Spread (Δ) = ( Σ_{t=1}^{T} d_t^e + Σ_{l=1}^{|Q|} |d_l − d∗| ) / ( Σ_{t=1}^{T} d_t^e + |Q| d∗ )    (23)

where the d_l are the Euclidean distances between neighbouring solutions, having mean value d∗, and d_t^e is the distance between the extreme solutions of Q corresponding to the tth objective function.

Epsilon(𝜀) = Sup_{x𝜖[0,1]} Min_{p𝜖P} Max{p − x, f(p) − f(x)}    (24)

where f is the Pareto front, P is a set of size k, and p 𝜖-approximates x if Max{p − x, f(p) − f(x)} ≤ 𝜖.
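To make the metric definitions concrete, the sketch below computes GD (Eqs. 20–21) and a two-objective hypervolume (Eq. 19) on small hand-made fronts; the point sets and the reference point are illustrative, not values from the experiments, and all objectives are assumed to be minimized.

```python
import math

def generational_distance(approx, true_front):
    # Eqs. (20)-(21): distance from each obtained solution to its nearest
    # true Pareto-optimal point, combined as sqrt(sum d_k^2) / |Q|.
    total = 0.0
    for a in approx:
        d = min(math.dist(a, t) for t in true_front)
        total += d ** 2
    return math.sqrt(total) / len(approx)

def hypervolume_2d(front, ref):
    # Eq. (19) for two minimized objectives: sweep the non-dominated front
    # sorted by f1 and sum the rectangular strips up to the reference point.
    hv, prev_f2 = 0.0, ref[1]
    for f1, f2 in sorted(front):
        hv += (ref[0] - f1) * (prev_f2 - f2)
        prev_f2 = f2
    return hv

true_front = [(0.0, 1.0), (0.5, 0.5), (1.0, 0.0)]
approx = [(0.1, 1.0), (0.6, 0.5), (1.0, 0.1)]
print(round(generational_distance(approx, true_front), 4))  # 0.0577
print(hypervolume_2d(true_front, (2.0, 2.0)))               # 3.25
```

The hypervolume sweep assumes the front contains only mutually non-dominated points, so f2 decreases as f1 increases once the points are sorted.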

International Journal of Machine Learning and Cybernetics

Table 2  The obtained optimal results using proposed and competitor approaches on IEEE CEC-2009 benchmark test functions
F Performance metrics EMoSOA MOPSO NSGA-II MOEA/D MOVS MOAAA​ MOSHO

UF1 Hypervolume 3.57E−00 3.70E−01 3.89E−01 6.50E−01 3.71E−01 1.63E−01 1.34E−01
6.20E−05 1.90E−02 3.24E−03 2.70E−03 4.85E−02 1.10E−01 6.69E−02
Δp 2.95E−10 3.11E−04 2.30E−03 4.55E−04 5.30E−03 1.49E−02 2.55E−03
6.99E−12 2.99E−04 6.22E−03 1.90E−04 1.60E−03 1.56E−03 1.80E−03
Spread 1.03E−02 7.42E−01 2.13E+00 3.01E−01 1.30E+00 1.91E+00 1.00E+00
1.11E−03 2.30E−01 1.12E−01 1.73E−01 1.51E−01 1.93E−01 1.91E−01
Epsilon 4.80E−04 1.01E−01 2.13E−01 1.07E−02 1.64E−01 5.11E−01 1.71E−01
1.09E−04 3.11E−02 4.40E−02 1.05E−02 1.12E−01 1.57E−01 1.10E−01
UF2 Hypervolume 1.95E−00 5.00E−01 5.00E−01 5.41E−01 4.82E−01 4.15E−01 3.04E−01
1.21E−04 1.03E−03 6.00E−03 2.77E−03 1.12E−02 1.95E−01 1.16E−01
Δp 2.12E−07 1.50E−03 4.77E−03 3.41E−04 2.62E−03 2.20E−03 2.11E−03
1.10E−07 2.85E−05 4.62E−04 1.74E−04 4.85E−04 1.10E−03 1.97E−03
Spread 2.46E−03 3.97E−01 5.03E−01 3.21E−01 6.38E−01 2.02E+00 2.15E−01
2.92E−03 1.22E−01 3.71E−02 4.50E−02 7.11E−02 2.22E−01 1.48E−01
Epsilon 2.04E−03 8.22E−02 2.30E−01 4.81E−02 1.78E−01 2.34E−01 3.70E−02
4.75E−05 8.72E−03 2.33E−02 1.26E−02 4.60E−02 1.51E−01 2.42E−03
UF3 Hypervolume 3.18E−00 1.75E−00 1.13E−00 4.03E−00 1.44E−00 9.01E−00 2.63E−00
2.71E−01 2.50E−02 2.58E−02 2.21E−02 2.35E−02 2.74E−02 1.20E−02
Δp 1.62E−02 4.87E−03 7.95E−03 1.05E−03 2.53E−02 2.47E−02 1.87E−02
1.60E−02 3.30E−04 1.61E−03 4.00E−04 1.88E−03 1.98E−03 1.41E−03
Spread 7.01E−00 4.23E−01 2.31E+00 4.37E−01 2.00E+00 1.08E+00 2.47E+00
5.98E−01 3.18E−01 4.46E−02 2.04E−01 1.02E−01 5.38E−02 1.75E−02
Epsilon 2.40E−00 2.83E−01 2.83E−01 1.30E−01 3.77E−01 4.54E−01 2.95E−01
2.11E−01 2.43E−03 3.53E−02 4.10E−02 4.30E−02 6.47E−02 3.47E−02
UF4 Hypervolume 1.01E−00 1.41E−01 1.45E−01 1.36E−01 1.41E−01 1.51E−01 1.40E−01
1.31E−04 6.70E−03 2.18E−03 1.00E−02 2.12E−03 5.51E−02 1.96E−02
Δp 3.14E−05 1.65E−03 1.30E−03 2.88E−03 1.38E−03 4.61E−03 3.98E−03
1.68E−06 1.03E−04 3.44E−05 4.68E−04 1.31E−04 2.63E−03 1.80E−04
Spread 1.01E−02 3.27E−01 3.12E−01 3.02E−01 7.23E−01 2.03E+00 2.75E−01
2.12E−03 5.82E−02 3.13E−02 8.45E−02 4.31E−02 3.22E−01 3.23E−02
Epsilon 1.11E−03 6.47E−02 4.98E−02 5.80E−02 7.34E−02 1.61E−01 2.71E−02
2.32E−04 4.28E−02 7.14E−03 8.13E−03 7.60E−03 2.11E−01 5.84E−03
UF5 Hypervolume 1.61E−01 2.50E−03 1.71E−02 3.71E−02 5.21E−02 2.95E−02 2.49E−02
1.09E−03 1.56E−02 3.05E−02 3.61E−02 5.00E−02 3.68E−01 1.48E−02
Δp 1.00E−02 1.62E−01 1.88E−01 2.04E−01 1.99E−01 3.26E−00 2.41E−01
1.07E−03 1.51E−01 2.88E−02 1.90E−02 1.37E−02 2.76E−01 1.98E−02
Spread 1.74E−02 5.56E−01 1.25E+00 1.21E+00 1.13E+00 3.82E+00 1.20E+00
3.77E−03 7.01E−02 8.94E−02 4.27E−02 1.33E−01 6.38E−02 3.72E−02
Epsilon 2.17E−03 2.26E+00 6.91E−01 6.41E−01 7.80E−01 9.71E−01 4.96E−01
1.06E−02 4.27E−01 1.58E−01 1.42E−01 1.75E−01 1.59E−01 2.46E−01
UF6 Hypervolume 2.41E−00 6.02E−02 2.95E−01 3.07E−01 2.09E−01 4.14E−01 3.88E−01
1.37E−03 2.47E−02 5.98E−02 9.23E−02 4.39E−02 7.60E−02 3.20E−02
Δp 3.30E−04 8.55E−03 4.67E−03 2.13E−03 2.92E−02 5.22E−02 7.79E−02
1.30E−04 3.55E−03 1.50E−03 1.66E−03 2.78E−03 1.44E−03 7.28E−03
Spread 6.08E−02 8.78E−01 1.18E+00 1.13E+00 1.26E+00 1.11E+00 1.10E+00
3.41E−03 5.73E−02 1.23E−01 8.46E−02 2.69E−01 8.27E−02 4.40E−02
Epsilon 1.01E−02 4.94E−01 4.38E−01 2.54E−01 4.47E−01 7.71E−01 4.38E−01
1.01E−02 1.81E−01 1.48E−01 1.47E−01 1.90E−01 2.66E−01 1.08E−01


Table 2  (continued)
F Performance metrics EMoSOA MOPSO NSGA-II MOEA/D MOVS MOAAA​ MOSHO

UF7 Hypervolume 1.32E−02 2.94E−01 2.28E−01 3.90E−01 1.48E−01 1.21E−00 1.85E−01
5.35E−02 2.56E−03 7.84E−03 2.33E−05 7.81E−05 7.81E−03 2.98E−03
Δp 2.60E−02 5.37E−03 6.77E−03 1.50E−04 5.48E−02 2.17E−02 3.36E−02
1.44E−04 1.66E−03 1.05E−03 1.43E−05 1.70E−03 2.31E−03 1.74E−03
Spread 1.80E−02 7.51E−01 1.06E+00 2.48E−01 1.17E+00 1.00E+00 4.60E−00
1.16E−03 5.62E−02 1.10E−01 1.74E−01 7.76E−02 1.57E−02 2.57E−02
Epsilon 1.04E−03 1.81E−01 3.27E−01 3.41E−02 5.07E−01 7.00E−01 2.61E−01
2.56E−04 1.06E−01 1.41E−01 2.16E−02 1.18E−01 1.36E−01 1.80E−01
UF8 Hypervolume 2.58E−00 2.04E−02 2.26E−01 1.80E−01 1.92E−02 1.28E−01 1.01E−01
2.57E−04 1.68E−02 3.06E−02 1.00E−01 2.31E−02 7.95E−02 1.78E−02
Δp 2.33E−02 1.91E−03 1.80E−03 2.15E−01 4.10E−03 3.86E−03 4.94E−03
5.10E−03 1.00E−04 3.44E−05 4.16E−03 3.85E−04 8.08E−04 1.85E−03
Spread 5.06E−00 8.15E−01 6.76E−01 3.68E−01 6.06E−01 7.61E−01 4.78E−01
4.12E−00 7.21E−02 7.04E−02 6.91E−01 1.10E−01 1.28E−01 1.57E−01
Epsilon 7.02E−00 6.68E−01 5.23E−01 6.00E−01 8.71E−01 7.04E−01 5.96E−01
2.13E−00 7.50E−02 1.10E−01 8.53E−01 4.21E−02 1.30E−01 2.65E−01
UF9 Hypervolume 3.08E−03 5.88E−02 1.53E−01 1.75E−00 8.42E−02 3.56E−01 2.38E−01
5.32E−00 4.71E−02 5.16E−02 3.46E−01 2.50E−02 3.77E−02 1.10E−02
Δp 3.38E−02 1.67E−03 4.34E−03 5.91E−03 3.77E−03 1.21E−03 2.22E−03
5.11E−03 2.26E−04 1.50E−04 3.47E−03 2.30E−04 1.03E−04 1.75E−03
Spread 1.00E−02 7.20E−01 7.51E−01 5.62E−01 6.54E−01 6.35E−01 3.58E−01
6.36E−01 4.04E−02 5.82E−02 5.40E−02 6.44E−02 5.11E−02 2.45E−02
Epsilon 2.02E−02 7.20E−01 4.13E−01 5.55E−01 7.94E−01 3.80E−01 5.84E−01
4.14E−01 1.41E−01 5.45E−02 4.76E−02 3.15E−02 1.91E−02 4.67E−02
UF10 Hypervolume 3.61E−01 1.00E+00 3.52E+03 5.91E−02 3.04E−02 2.37E−02 3.91E−02
1.30E−01 0.00E+00 1.15E−02 4.10E−01 1.96E−02 1.85E−02 2.01E−02
Δp 5.77E−02 2.78E−02 2.12E−03 3.00E−03 1.82E−03 5.51E−03 3.21E−03
3.90E−02 5.72E−03 2.01E−03 1.80E−03 3.17E−04 2.46E−04 1.45E−03
Spread 1.61E−02 5.52E−01 6.65E−01 4.20E−01 8.17E−01 1.23E+00 3.52E−01
4.81E−01 4.06E−02 5.27E−02 7.61E−01 8.12E−02 1.22E−01 1.85E−01
Epsilon 3.60E−02 1.81E+00 1.04E+00 4.32E−01 7.85E−01 7.67E−01 5.01E−01
4.52E−00 1.44E−01 1.54E−01 6.16E−01 1.06E−01 1.02E−01 1.84E−01

6.3 Performance evaluation

6.3.1 Results based on the IEEE CEC-2009 (UF1–UF10) test functions

The proposed technique is contrasted with recently proposed techniques on the IEEE CEC-2009 benchmark in Table 2. EMoSOA obtains the optimal solutions for UF1, UF2, UF4, UF5, and UF6. On the UF3 test function, MOPSO obtains a decent Spread value in comparison with the other techniques. Compared with the competing algorithms, MOEA/D produces promising results with respect to the Δp and Epsilon efficiency metrics. The proposed EMoSOA provides the best results on the UF7, UF9, and UF10 test functions. For the UF8 test function, NSGA-II yields the best values of Δp and Epsilon in contrast to the current techniques.

6.3.2 Results based on the ZDT (ZDT1–ZDT6) test functions

The juxtaposition of EMoSOA with the existing methods on the ZDT benchmark problems is illustrated in Table 3. On the ZDT suite, the proposed algorithm performs better than the other algorithms in most cases with respect to the Δp, Spread, and Epsilon efficiency metrics. Figure 6 indicates the POSs acquired using the existing techniques. The NSGA-II, MOEA/D, and MOAAA contributions degrade the


Table 3  The obtained optimal results using proposed and competitor approaches on ZDT benchmark test functions
F Performance Metrics EMoSOA MOPSO NSGA-II MOEA/D MOVS MOAAA​ MOSHO

ZDT1 Hypervolume 5.55E−00 5.61E−01 5.61E−01 5.63E−01 5.55E−01 5.61E−01 5.61E−01
2.81E−03 1.41E−04 1.61E−04 2.55E−04 7.57E−04 2.04E−04 1.45E−04
Δp 2.33E−03 1.32E−04 1.17E−04 5.31E−05 3.02E−04 1.10E−04 3.65E−04
3.02E−04 4.01E−07 2.36E−06 1.43E−06 1.43E−04 2.20E−06 1.01E−06
Spread 1.31E−03 5.86E−02 2.76E−01 1.84E−01 5.48E−01 1.42E−01 1.30E−01
1.10E−05 1.03E−02 1.57E−02 1.81E−03 2.77E−02 1.61E−02 1.21E−02
Epsilon 4.48E−01 4.86E−03 1.33E−02 2.02E−03 1.30E−02 7.91E−03 6.07E−02
4.72E−01 1.57E−04 1.01E−03 2.81E−04 1.17E−02 5.67E−04 2.40E−04
ZDT2 Hypervolume 2.28E−02 2.27E−01 2.25E−01 2.58E−01 2.12E−00 2.25E−01 2.26E−01
2.70E−06 7.08E−05 2.10E−04 1.57E−04 5.53E−04 5.13E−04 1.08E−04
Δp 1.02E−06 1.44E−04 1.36E−04 3.97E−05 2.51E−04 1.21E−04 1.31E−04
1.33E−07 1.56E−06 2.40E−05 5.01E−06 1.34E−05 3.20E−06 1.75E−06
Spread 1.01E−00 5.67E−02 3.60E−01 1.83E−01 5.70E−01 1.48E−01 1.55E−01
8.11E−01 6.84E−03 2.51E−02 1.49E−02 3.21E−02 1.45E−02 2.86E−02
Epsilon 1.21E−04 4.45E−03 1.82E−02 1.67E−03 2.28E−02 7.48E−03 2.06E−03
1.78E−03 1.35E−04 1.31E−03 2.61E−04 1.87E−02 5.15E−04 7.72E−04
ZDT3 Hypervolume 4.11E−00 4.13E−01 4.14E−01 4.15E−01 4.10E+01 4.13E−01 4.15E−01
1.21E−03 2.86E−04 2.49E−04 2.08E−05 1.81E−03 1.30E−04 4.81E−04
Δp 3.12E−06 1.56E−04 1.32E−04 3.13E−05 1.01E−03 2.20E−04 1.01E−04
2.30E−04 1.07E−06 3.48E−06 4.54E−06 4.28E−04 1.87E−05 3.73E−05
Spread 1.00E−02 6.04E−01 6.41E−01 2.13E+00 8.01E−01 7.37E−01 5.05E−01
1.16E−04 2.37E−03 1.31E−02 2.37E−03 1.61E−02 3.73E−03 1.52E−03
Epsilon 2.90E−04 4.70E−03 7.86E−03 3.10E−03 2.42E−01 8.61E−03 4.93E−03
5.73E−03 4.48E−04 1.55E−03 1.24E−05 1.53E−01 1.13E−03 1.47E−03
ZDT4 Hypervolume 5.61E−00 1.00E+01 5.47E−01 5.64E−01 5.51E−01 5.48E−01 5.44E−01
3.43E−03 1.00E−07 2.52E−03 3.94E−05 3.63E−03 7.61E−03 2.45E−02
Δp 3.31E−03 2.40E−01 2.77E−04 3.10E−05 2.11E−04 3.70E−03 1.68E−04
1.81E−03 1.37E−02 4.98E−05 2.03E−05 1.11E−04 1.21E−03 1.01E−04
Spread 1.01E−02 7.85E−01 2.58E−01 1.29E−01 8.11E−01 2.05E−01 1.07E−01
1.97E−04 6.17E−02 2.14E−02 2.01E−03 1.87E−01 1.35E−01 1.06E−02
Epsilon 1.08E−04 4.13E+00 1.75E−02 1.93E−03 1.86E−02 5.36E−02 2.27E−02
1.00E−05 1.90E+00 7.11E−03 1.02E−04 1.05E−02 4.28E−02 1.72E−02
ZDT6 Hypervolume 2.18E+00 4.56E−02 3.81E−01 3.04E−02 2.90E−01 2.77E−02 2.25E−01
3.32E−04 4.41E−05 1.45E−03 5.10E−08 1.36E−06 1.71E−03 1.38E−03
Δp 1.71E−03 1.36E−03 3.80E−04 2.60E−05 2.06E−04 3.94E−04 2.26E−04
3.10E−04 1.08E−06 6.06E−05 5.40E−08 2.81E−05 2.36E−05 1.31E−05
Spread 1.10E−02 7.00E−01 2.65E−01 1.69E−01 6.58E−01 1.37E−01 2.21E−01
1.23E−05 3.78E−01 2.98E−02 1.62E−04 1.92E−01 2.01E−02 1.62E−02
Epsilon 1.21E−04 3.85E−03 1.50E−02 1.63E−03 1.36E−02 1.55E−02 2.00E−02
2.12E−05 4.15E−04 2.16E−03 1.42E−01 2.65E−03 4.45E−05 1.16E−04

benchmark of ZDT2. The proposed algorithm provides powerful performance on the ZDT2, ZDT3, and ZDT6 benchmarks.

6.3.3 Results based on the DTLZ (DTLZ1–DTLZ9) test functions

Table 4 displays the analytical results on the DTLZ benchmark problems for MOPSO, NSGA-II, MOEA/D, MOVS, MOAAA, and MOSHO. On the well-known DTLZ1, DTLZ2, DTLZ3, DTLZ5, DTLZ6, and DTLZ9 benchmark functions, the proposed algorithm provides promising, statistically significant performance. MOEA/D and NSGA-II outperform the other comparison methods on DTLZ4 with respect to the Δp and Epsilon efficiency indicators. MOVS provides the optimum values for


Fig. 6  The obtained Pareto solutions by the EMoSOA technique on the ZDT test functions (panels ZDT1, ZDT2, ZDT3, ZDT4, and ZDT6; each panel plots the true and obtained Pareto fronts)

Hypervolume. The results clearly show that the proposed algorithm converges on the DTLZ test benchmarks.

6.4 Influence of parameters

1. Number of iterations: To evaluate the effect of the number of iterations on the results of EMoSOA, we performed 100, 500, 800, and 1000 iterations of the proposed algorithm. The findings in Table 5 suggest that the convergence rate improves as the number of iterations of EMoSOA rises.
2. Number of search agents: Search-agent sizes of 50, 80, 100, and 200 are taken for simulation purposes. It has been observed from Table 6 that with a population size of 100, EMoSOA produces better optimum solutions.
3. Influence of selection approach: The proposed algorithm was tested on the ZDT1, ZDT3, and ZDT6 test problems using tournament and roulette-wheel selection methods in order to evaluate the EMoSOA algorithm


Table 4  The obtained optimal results using proposed and competitor approaches on DTLZ benchmark test functions
F Performance Metrics EMoSOA MOPSO NSGA-II MOEA/D MOVS MOAAA​ MOSHO

DTLZ1 Hypervolume 3.61E−02 2.47E−01 5.36E−01 2.55E+01 4.58E−01 6.55E−01 3.10E−01
3.32E−01 4.55E−02 2.24E−01 4.93E−03 2.72E−02 1.96E−02 1.01E−02
Δp 2.44E−05 1.52E−03 2.77E−03 3.23E−04 4.01E−03 3.45E−04 4.78E−04
1.10E−04 1.48E−04 2.80E−04 4.41E−05 1.01E−04 6.20E−06 3.10E−04
Spread 1.32E−02 4.94E−01 1.01E+00 1.84E−01 2.71E+00 6.48E−01 3.96E−01
2.16E−00 2.91E−00 2.01E−01 3.23E−01 2.88E−01 2.57E−00 2.77E−01
Epsilon 2.61E−02 2.95E−02 1.46E−03 3.34E−02 1.03E−03 4.25E−03 1.06E−02
3.71E−01 2.92E−02 4.22E−01 1.61E−02 1.66E−01 1.48E−02 1.01E−02
DTLZ2 Hypervolume 1.08E+00 1.93E−01 1.62E−02 2.03E−01 4.26E−02 3.03E−01 2.96E−01
1.06E−04 1.62E−03 2.95E−03 4.20E−02 1.11E−02 1.86E−03 1.14E−03
Δp 2.53E−05 2.47E−03 2.66E−04 5.91E−04 2.18E−02 3.72E−04 3.94E−04
2.10E−06 7.05E−04 3.00E−05 5.41E−05 4.38E−03 3.10E−05 3.02E−05
Spread 2.31E−02 5.45E−01 3.82E−01 6.41E−01 6.55E−01 4.45E−01 3.82E−01
4.77E−04 1.72E−01 6.53E−03 1.40E−02 4.72E−01 2.04E−02 5.27E−02
Epsilon 2.68E−03 5.61E−02 1.36E−01 3.03E−02 2.72E−01 7.40E−02 5.04E−02
5.15E−01 7.21E−03 5.36E−02 2.56E−01 3.72E−01 1.12E−02 3.73E−02
DTLZ3 Hypervolume 1.08E−01 3.71E−01 5.23E−02 3.62E−01 4.22E−01 1.00E+01 3.02E−01
7.36E−02 1.62E−02 4.24E−04 1.73E−02 1.34E−03 0.00E+00 4.56E−03
Δp 2.46E−05 4.21E−03 4.04E−04 1.48E−03 1.21E−01 1.01E−01 2.03E−02
1.11E−05 2.13E−03 2.32E−04 5.40E−04 2.91E−02 3.58E−02 1.40E−02
Spread 1.38E−04 3.61E−02 1.31E+00 3.62E−02 1.42E−03 1.21E+00 1.91E−02
3.43E−01 2.72E−01 6.25E−02 1.62E−02 4.21E−01 1.08E−01 1.62E−01
Epsilon 2.17E−03 2.73E−01 4.72E−01 2.83E−02 2.21E−01 5.36E+00 2.21E−01
1.43E−03 2.26E−02 4.76E−02 5.62E−02 7.73E−02 1.71E+00 3.82E−02
DTLZ4 Hypervolume 1.61E−01 1.73E−01 2.32E−02 1.73E−01 2.41E+00 2.11E−01 3.95E−02
1.30E−01 2.72E−03 5.88E−02 2.62E−02 4.22E−03 8.65E−03 4.26E−03
Δp 1.32E−03 3.60E−02 2.68E−02 1.13E−04 3.78E−02 3.34E−03 2.15E−02
3.10E−06 1.01E−04 4.30E−03 1.08E−04 5.48E−03 3.42E−03 3.91E−03
Spread 1.77E−01 2.07E−01 4.72E−02 5.63E−01 4.13E−03 2.56E−01 4.04E−02
4.10E−01 2.52E−02 5.62E−01 2.20E−02 4.54E−02 1.28E−01 3.81E−02
Epsilon 4.04E−01 3.01E−02 5.72E−03 2.62E−02 1.41E−01 2.22E−01 3.61E−02
7.71E−01 4.51E−02 8.13E−03 7.62E−02 7.32E−03 2.74E−01 2.93E−02
DTLZ5 Hypervolume 1.48E+01 1.62E−03 2.12E−02 4.23E−01 4.92E−02 1.95E−01 2.32E−02
1.31E−05 1.82E−04 4.05E−02 2.82E−02 4.22E−02 1.46E−02 1.98E−02
Δp 1.34E−04 1.34E−01 2.20E−01 3.52E−03 2.61E−02 2.76E−01 1.91E−02
2.71E−05 6.35E−02 2.61E−02 5.87E−04 2.25E−02 1.30E−02 1.03E−03
Spread 2.87E−00 4.62E−01 1.35E+00 4.25E−01 1.03E+00 1.42E−02 6.21E−01
4.01E−01 4.61E−02 6.95E−03 4.21E−02 3.13E−04 4.46E−02 3.28E−02
Epsilon 2.13E−03 2.72E+00 3.10E−01 5.93E−01 3.62E−01 6.74E−01 3.05E−01
2.45E−00 3.16E−01 1.68E−01 3.89E−01 1.01E−01 2.21E−01 1.94E−01
DTLZ6 Hypervolume 1.62E+00 2.01E−03 1.91E−01 3.12E−02 1.17E−01 1.04E−01 1.52E−02
1.10E−04 1.55E−02 6.82E−02 1.24E−03 3.42E−02 4.51E−02 1.01E−02
Δp 2.01E−04 1.32E−02 2.51E−02 4.20E−03 3.26E−02 2.10E−02 1.11E−02
1.66E−04 2.65E−03 4.11E−03 2.61E−03 5.35E−03 2.11E−03 2.84E−03
Spread 2.44E−00 4.17E−01 2.41E+00 3.11E−01 1.25E−01 1.11E+00 1.41E−01
5.76E−01 1.20E−02 2.82E−01 4.06E−01 1.62E−01 6.17E−02 3.74E−02
Epsilon 1.46E−04 5.94E−01 3.62E−03 2.64E−02 2.42E−01 6.71E−01 3.33E−02
6.15E−02 1.81E−03 1.51E−01 1.36E−04 5.35E−01 1.72E−01 5.81E−03


Table 4  (continued)
F Performance Metrics EMoSOA MOPSO NSGA-II MOEA/D MOVS MOAAA​ MOSHO

DTLZ7 Hypervolume 1.08E−01 2.52E−01 4.17E−03 3.10E−01 2.48E+00 1.50E−01 3.11E−02
1.31E−01 2.62E−02 2.08E−04 2.82E−03 4.81E−01 6.25E−02 6.71E−03
Δp 3.11E−02 4.51E−03 5.97E−03 2.37E−03 3.90E−02 2.67E−02 5.41E−03
2.76E−03 1.12E−03 2.47E−04 2.26E−03 5.93E−03 1.80E−03 2.91E−03
Spread 3.66E−01 4.41E−01 1.82E+00 2.23E−01 2.72E−03 1.02E−01 1.27E−02
5.24E−01 7.03E−02 1.52E−03 2.36E−01 1.14E−04 2.90E−02 4.41E−03
Epsilon 6.77E−01 7.62E−01 1.16E−02 2.62E−02 2.07E−01 3.03E−01 1.27E−01
4.17E−01 1.06E−01 1.62E−04 2.34E−02 2.32E−02 1.92E−01 3.31E−02
DTLZ8 Hypervolume 6.61E−04 3.51E+00 1.83E−01 3.87E−03 1.02E−02 1.97E−01 1.38E−01
1.46E−03 1.68E−04 2.93E−02 1.00E−02 3.22E−02 5.92E−02 1.75E−03
Δp 5.45E−02 2.17E−04 4.02E−03 3.74E−02 3.68E−03 2.10E−03 3.52E−03
8.80E−03 1.01E−03 1.67E−03 1.31E−03 5.67E−04 1.46E−03 7.94E−04
Spread 2.86E−01 2.26E−01 1.36E−03 2.38E−01 4.06E−02 4.01E−02 1.36E−02
6.76E−01 2.21E−02 7.52E−03 2.91E−01 2.13E−02 1.98E−02 3.04E−03
Epsilon 1.00E−01 2.68E−01 5.24E−03 2.00E−01 2.23E−02 5.08E−01 1.22E−02
5.18E−01 2.50E−02 1.23E−03 3.22E−01 1.21E−02 1.81E−01 5.96E−03
DTLZ9 Hypervolume 1.61E+00 5.88E−02 2.34E−03 2.75E−02 4.42E−02 2.82E−01 1.97E−01
1.35E−03 4.71E−02 5.16E−02 3.46E−01 2.50E−02 1.93E−02 7.75E−02
Δp 1.46E−04 4.46E−03 5.76E−02 2.23E−03 5.88E−03 4.46E−03 6.03E−03
2.61E−05 2.57E−03 3.32E−03 1.86E−03 3.31E−04 2.90E−04 8.42E−04
Spread 8.86E−01 3.20E−01 3.51E−03 5.75E−01 2.54E−02 3.15E−01 4.08E−02
2.77E−01 7.87E−02 2.82E−01 5.30E−02 7.44E−03 3.93E−01 8.80E−03
Epsilon 8.57E−01 3.20E−01 6.72E−03 5.05E−01 6.66E−02 2.40E−02 3.37E−02
3.98E−01 7.41E−01 5.92E−03 6.76E−02 5.95E−01 1.43E−02 2.78E−02

Table 5  Sensitivity analysis of maximum number of iterations

Iterations ZDT1 (convex) ZDT6 (concave) ZDT3 (disconnected)
100 2.11E−03 4.51E−02 5.36E−01
500 2.38E−02 7.45E−01 6.99E−03
800 4.00E−02 5.14E−02 7.12E−03
1000 5.28E−05 2.84E−05 3.90E−05

Table 6  Sensitivity analysis of number of search agents

Search agents ZDT1 (convex) ZDT6 (concave) ZDT3 (disconnected)
50 3.61E−02 6.56E−03 5.64E−02
80 8.72E−02 1.47E−03 2.46E−02
100 2.41E−05 2.51E−06 4.51E−05
200 5.03E−01 7.01E−04 5.63E−04

performance. The convergence analysis of the selection process for these problems is shown in Fig. 7. From this result, roulette-wheel selection converges towards the optimal solution better than the tournament selection approach.

6.5 Wilcoxon signed-rank test

The literature [107] shows that performance measures alone do not guarantee stronger convergence and diversity since, in certain situations, the obtained solutions are not close to the ideal Pareto front. To address this matter, the Wilcoxon test [108] is performed on the average values of Hypervolume, Δp, Spread, and Epsilon. For each problem, the average difference is determined between each pair of tests. These differences are ranked in ascending order, from the smallest to the largest absolute difference. The proposed algorithm is given a positive rank for a performance measure if it is better than the competing algorithm; otherwise, a negative rank is given. The significance level for the comparison is 0.10, and the positive and negative ranks are summed [108]. Table 7 displays the results of the Wilcoxon test, where +, −, and = indicate that the output of the proposed algorithm is higher than, lower than, or equal to that of the competitive algorithms. Table 7 shows that the proposed
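The ranking procedure just described can be sketched as follows; the per-problem averages are illustrative placeholders (a higher score is assumed to be better), and tied absolute differences, which a full Wilcoxon test handles with averaged ranks, are not treated here.

```python
def signed_rank_sums(scores_a, scores_b):
    # Pairwise differences of the two algorithms' average scores, ranked
    # by absolute value in ascending order (rank 1 = smallest difference).
    diffs = [a - b for a, b in zip(scores_a, scores_b) if a != b]
    w_plus = w_minus = 0.0
    for rank, d in enumerate(sorted(diffs, key=abs), start=1):
        if d > 0:
            w_plus += rank    # algorithm A better on this problem
        else:
            w_minus += rank   # algorithm B better
    return w_plus, w_minus

# Illustrative per-problem averages of a metric where higher is better.
a = [0.90, 0.80, 0.75, 0.95, 0.60]
b = [0.85, 0.82, 0.60, 0.70, 0.52]
print(signed_rank_sums(a, b))  # (14.0, 1.0)
```

In the actual test, the smaller of the two rank sums is then compared against a critical value at the chosen 0.10 significance level.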


Fig. 7  Effect of selection methods on a ZDT1, b ZDT3, and c ZDT6 benchmark test problems (each panel plots f2 against f1 for roulette-wheel and tournament selection)

Table 7  Wilcoxon signed-rank test results between proposed EMoSOA and other algorithms based on average of performance measures

Algorithms Hypervolume Δp Spread Epsilon
EMoSOA = + + +
MOPSO + + = +
NSGA-II − + + =
MOEA/D + + + +
MOVS + + = +
MOAAA + + + +
MOSHO + + + +

7 EMoSOA for engineering design problems

To examine the effectiveness of the proposed algorithm, its performance has been tested on four engineering design problems. Different types of penalty functions exist for handling such multi-constrained engineering problems, such as the static, dynamic, annealing, adaptive, co-evolutionary, and death penalties [109]. The death penalty function discards infeasible solutions outright and does not use the information carried by such solutions, which can be helpful for handling dominated infeasible regions. Nevertheless, due to its low computational cost and simplicity, the proposed algorithm is equipped with the death penalty function to handle the multiple constraints.
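A minimal sketch of the death-penalty scheme described above; the objective values and constraint margins are hypothetical, and constraints are assumed to be expressed as g(x) ≥ 0.

```python
DEATH_PENALTY = float("inf")  # infeasible solutions are effectively discarded

def penalized(objectives, constraints):
    # Death penalty: if any constraint g(x) >= 0 is violated, replace every
    # objective with an infinitely bad value so the point can never dominate.
    if any(g < 0 for g in constraints):
        return tuple(DEATH_PENALTY for _ in objectives)
    return tuple(objectives)

# Hypothetical two-objective points with constraint margins g1, g2.
feasible = penalized((1.2, 3.4), (0.5, 0.1))
infeasible = penalized((0.9, 2.0), (0.5, -0.1))
print(feasible)    # (1.2, 3.4)
print(infeasible)  # (inf, inf)
```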

7.1 Welded beam design

This problem mainly focuses on reducing the fabrication cost while concurrently diminishing the vertical deflection [110] (see Fig. 8). The welded beam design problem includes four optimization variables, as shown in Fig. 8. The comparative study of the best results obtained by the different methods on this problem is described in Table 8. The proposed algorithm obtains a near-optimal solution and shows better results in solving this constrained engineering problem.

algorithm beats all competing algorithms other than NSGA-II, which was superior in terms of Hypervolume.

Fig. 8  Welded beam design problem

7.2 Multiple-disk clutch brake design problem

This problem focuses on decreasing the stopping time ( f1 ) and reducing the mass of the brake system ( f2 ) [111] (see Fig. 9). The problem consists of five decision parameters.
Compared to the other methods, Table 9 indicates the statistical values of the proposed method. The proposed algorithm produces the best results for the decision variables. Overviewed


Table 8  The comparison between different approaches for welded beam problem

Performance Metrics EMoSOA MOPSO NSGA-II MOEA/D MOVS MOAAA MOSHO
Hypervolume 4.17E+01 7.08E−01 6.16E−01 6.93E−01 7.81E−01 6.41E−01 7.58E−01
2.34E−02 5.92E−01 3.21E−01 5.96E−01 7.95E−01 3.86E−01 4.71E−01
Δp 2.70E−03 2.35E−01 1.68E−01 2.35E−02 5.83E−01 6.91E−02 3.33E−02
4.04E−04 5.50E−02 1.82E−02 1.31E−02 5.86E−02 4.52E−02 2.49E−02
Spread 1.36E−02 1.47E−01 8.91E−01 1.77E+00 2.71E−01 4.47E−01 2.65E−01
4.95E−03 7.28E−02 1.77E−01 2.11E+00 5.21E−02 1.17E−01 1.96E−01
Epsilon 8.63E−04 1.03E−01 8.63E−02 3.21E−02 2.23E−01 2.62E−02 3.67E−02
1.11E−04 7.52E−02 1.35E−02 8.74E−03 8.62E−02 4.64E−03 2.01E−03

by Δp, Spread, Epsilon, and Hypervolume, the proposed algorithm performs better than the other algorithms.

Fig. 9  Multiple-disk clutch brake design problem

7.3 Pressure vessel design

Kannan and Kramer [112] proposed this problem to minimize the total cost ( f1 ) and maximize the capacity of storage ( f2 ), as shown in Fig. 10. This problem consists of four design variables. Table 10 demonstrates the comparative study of the proposed algorithm along with the other algorithms. EMoSOA fulfills the minimum-cost and overall objectives. It can be seen that EMoSOA is also able to produce better results than the competitor approaches.

7.4 25-bar truss design

This problem [113, 114] is selected to assess the performance of the proposed algorithm. The design consists of ten static nodes and twenty-five cross-sectional bar members (refer to Fig. 11).
Table 11 depicts the dominant status on the 25-bar truss problem. In comparison with the other algorithms shown in Table 11, the most optimal result is obtained by the proposed algorithm. In terms of the average and standard deviation, the statistical results obtained indicate that the proposed algorithm is superior to the competing algorithms.

8 Conclusions and future work

This article introduced a new evolutionary MOO algorithm, called the evolutionary multi-objective seagull optimization algorithm (EMoSOA). The algorithm mimics the migration and attacking behaviors of seagulls. In this study, four new components are incorporated to solve MOO problems. The first component includes an archive, whose main role is to retrieve and accumulate the best non-dominated solutions. The second component includes a leader selection

Table 9  The comparison between different approaches for multiple-disk clutch brake problem

Performance Metrics EMoSOA MOPSO NSGA-II MOEA/D MOVS MOAAA MOSHO
Hypervolume 3.52E−00 6.85E−01 5.04E−01 7.52E−01 8.01E−01 8.62E−01 5.41E−01
1.42E−02 4.31E−01 3.51E−01 6.62E−01 7.13E−01 6.95E−01 2.10E−01
Δp 1.01E−03 4.14E−02 1.12E−01 6.42E−02 1.20E−01 8.71E−02 4.12E−02
1.43E−04 4.95E−03 1.51E−02 5.02E−02 7.81E−02 3.12E−02 3.50E−02
Spread 1.11E−02 1.35E−01 7.62E−01 2.41E+00 7.42E−01 7.31E−01 2.30E−01
1.15E−02 1.91E−01 3.12E−01 1.34E+00 4.20E−01 3.41E−01 1.37E−01
Epsilon 1.64E−03 1.13E−01 5.02E−02 1.38E−01 1.11E−01 1.92E−02 2.97E−02
4.16E−04 8.83E−02 1.43E−02 7.31E−02 6.31E−02 1.30E−02 1.57E−02


criteria for selecting optimal solutions from the archive. The third component is the grid selection mechanism, and the fourth component comprises the mutation and crossover operators for better exploration and exploitation. Twenty-four well-known test functions are used to validate this algorithm. Finally, four real-life MOO problems have been considered. In this regard, many experimental results show that the proposed algorithm gives the best results in terms of computational cost compared to the existing competing algorithms. Due to its stochastic nature, there is a chance that this algorithm fails on other optimization problems; hence, there is a need to further improve the parameters of this algorithm, and this limitation is recommended as future work. The many-objective version of EMoSOA can also be seen as a future recommendation for solving various challenging real-life complex problems.

Fig. 10  Pressure vessel design problem
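The first component, the external archive of non-dominated solutions, can be sketched as below; minimization is assumed, and the grid mechanism and archive capacity handling are omitted.

```python
def dominates(a, b):
    # a dominates b (minimization): no worse in every objective and
    # strictly better in at least one.
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

def update_archive(archive, candidate):
    # Discard the candidate if any archived point dominates it; otherwise
    # insert it and evict every archived point it dominates.
    if any(dominates(a, candidate) for a in archive):
        return archive
    return [a for a in archive if not dominates(candidate, a)] + [candidate]

archive = []
for point in [(2.0, 2.0), (1.0, 3.0), (1.0, 1.0), (0.5, 4.0)]:
    archive = update_archive(archive, point)
print(archive)  # [(1.0, 1.0), (0.5, 4.0)]
```

After the four updates only the mutually non-dominated points remain: (1.0, 1.0) evicts the first two, and (0.5, 4.0) is incomparable with it.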

Table 10  The comparison between different approaches for pressure vessel problem

Performance Metrics EMoSOA MOPSO NSGA-II MOEA/D MOVS MOAAA MOSHO
Hypervolume 2.02E+00 6.20E−01 5.24E−01 7.94E−01 6.24E−01 6.50E−01 2.10E−01
1.01E−02 2.10E−01 2.41E−01 5.01E−01 6.00E−01 6.83E−01 1.67E−01
Δp 1.16E−05 5.15E−02 3.30E−02 1.58E−02 2.10E−03 2.66E−02 1.27E−03
2.12E−04 1.18E−02 3.44E−03 4.97E−02 2.21E−02 8.91E−02 1.62E−02
Spread 1.65E−02 6.00E−01 3.31E−01 2.32E+00 7.44E−01 8.62E−01 3.17E−01
1.10E−02 1.55E−01 2.61E−01 1.03E+00 5.37E−01 5.34E−01 1.88E−01
Epsilon 1.18E−03 2.48E−01 8.51E−02 2.46E−01 1.41E−01 4.29E−02 1.57E−02
3.20E−04 7.42E−02 6.83E−02 1.34E−01 2.12E−01 2.81E−02 2.17E−02

Fig. 11  25-bar truss design problem [20]


Table 11  The comparison between different approaches for 25-bar truss problem

Performance Metrics EMoSOA MOPSO NSGA-II MOEA/D MOVS MOAAA MOSHO
Hypervolume 1.13E−00 7.58E−01 3.34E−01 6.45E−01 3.04E−01 5.75E−01 2.03E−01
1.15E−02 4.78E−01 1.41E−01 4.01E−01 1.70E−01 4.84E−01 1.30E−01
Δp 2.21E−03 1.55E−01 2.78E−02 1.01E−01 1.67E−01 2.26E−02 4.37E−02
1.41E−03 1.71E−01 1.46E−02 1.33E−01 3.01E−02 1.85E−02 2.58E−02
Spread 1.16E−02 2.58E−01 3.31E−01 1.48E−01 4.06E−01 4.37E−01 2.33E−01
3.32E−03 2.42E−01 2.26E−01 1.17E−01 3.03E−01 1.85E−01 1.01E−01
Epsilon 1.87E−03 1.90E−01 1.81E−01 1.58E−01 2.86E−02 1.41E−01 2.65E−01
1.71E−03 2.01E−02 1.43E−01 1.80E−02 1.75E−02 1.01E−01 1.57E−01

Acknowledgements  This work is partly supported by VC Research (VCR 0000056) for Prof Chang.

Appendix A: Unconstrained multi-objective test problems

See Table 12.

Appendix B: Unconstrained multi-objective test problems

ZDT1:

Minimize: f1(x) = x1
Minimize: f2(x) = g(x) × h(f1(x), g(x))
where,
g(x) = 1 + (9/(N − 1)) Σ_{i=2}^{N} xi
h(f1(x), g(x)) = 1 − √(f1(x)/g(x))
0 ≤ xi ≤ 1, 1 ≤ i ≤ 30

ZDT2:

Minimize: f1(x) = x1
Minimize: f2(x) = g(x) × h(f1(x), g(x))
where,
g(x) = 1 + (9/(N − 1)) Σ_{i=2}^{N} xi
h(f1(x), g(x)) = 1 − (f1(x)/g(x))²
0 ≤ xi ≤ 1, 1 ≤ i ≤ 30

ZDT3:

Minimize: f1(x) = x1
Minimize: f2(x) = g(x) × h(f1(x), g(x))
where,
g(x) = 1 + (9/29) Σ_{i=2}^{N} xi
h(f1(x), g(x)) = 1 − √(f1(x)/g(x)) − (f1(x)/g(x)) sin(10𝜋f1(x))
0 ≤ xi ≤ 1, 1 ≤ i ≤ 30

ZDT4:

Minimize: f1(x) = x1
Minimize: f2(x) = g(x) × [1 − √(x1/g(x))]
where,
g(x) = 1 + 10(n − 1) + Σ_{i=2}^{n} (xi² − 10cos(4𝜋xi))
0 ≤ x1 ≤ 1, −5 ≤ xi ≤ 5, i = 2, …, n

ZDT6:

Minimize: f1(x) = 1 − e^{−4x1} sin⁶(6𝜋x1)
Minimize: f2(x) = g(x) × [1 − (f1(x)/g(x))²]
where,
g(x) = 1 + 9 [ (Σ_{i=2}^{n} xi) / (n − 1) ]^{0.25}
0 ≤ xi ≤ 1, i = 1, 2, …, n

Appendix C: Unconstrained multi-objective test problems

DTLZ1:
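The ZDT definitions above translate directly into code; below is a minimal, purely illustrative sketch of ZDT1 with n = 30 decision variables.

```python
import math

def zdt1(x):
    # ZDT1 as defined in Appendix B: decision vector in [0, 1]^n.
    n = len(x)
    f1 = x[0]
    g = 1.0 + 9.0 * sum(x[1:]) / (n - 1)
    h = 1.0 - math.sqrt(f1 / g)
    return f1, g * h

# On the true Pareto front all tail variables are zero, so g(x) = 1
# and the front is f2 = 1 - sqrt(f1).
f1, f2 = zdt1([0.25] + [0.0] * 29)
print(f1, f2)  # 0.25 0.5
```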


Table 12  Table caption
Name Mathematical formulation Properties

UF1 � � ��2 Bi-objective


2 ∑ j𝜋
f1 = x1 + x − sin 6𝜋x +
∣ J1 ∣ j𝜖J1 j 1
n
� � ��2
√ 2 ∑ j𝜋
f2 = 1 − x + j𝜖J2 xj − sin 6𝜋x1 +
∣ J2 ∣ n
J1 = {j ∣ j is odd and 2 ≤ j ≤ n}, J2 = {j ∣ j is even and 2 ≤ j ≤ n}
UF2 2 ∑ Bi-objective
f1 = x1 + y2
∣ J1 ∣ j𝜖J1 j
√ 2 ∑
f2 = 1 − x + y2
∣ J2 ∣ j𝜖J2 j
J1 = {j ∣ j is odd and 2 ≤ j ≤ n}, J2 = {j ∣ j is even and 2 ≤ j ≤ n}
⎧ � 4j𝜋 � � j𝜋 �
2
⎪ xj − [0.3x1 cos 24𝜋x1 + n + 0.6x1 ]cos 6𝜋x1 + n , if j𝜖J1
yj = ⎨ � 4j𝜋 � � j𝜋 �
⎪ xj − [0.3x12 cos 24𝜋x1 + + 0.6x1 ]sin 6𝜋x1 + , if j𝜖J2
⎩ n n
� � � �
UF3 2 ∑ ∏ 20yj 𝜋 Bi-objective
f1 = x1 + 4 j𝜖J1 y2j − 2 j𝜖J1 cos √ +2
∣ J1 ∣ j
√ � ∑ ∏ � 20yj 𝜋 � �
2
f2 = x1 + 4 j𝜖J1 y2j − 2 j𝜖J2 cos √ +2
∣ J2 ∣ j
J1 = {j ∣ j is odd and 2 ≤ j ≤ n}, J2 = {j ∣ j is even and 2 ≤ j ≤ n}
3(j−2)
0.5(1.0+ )
n−2
yj = xj − x1 , j = 2, 3, … , n
UF4 2 ∑ Bi-objective
f1 = x1 + h(yj )
∣ J1 ∣ j𝜖J1
2 ∑
f2 = 1 − x2 + h(yj )
∣ J2 ∣ j𝜖J2
J1 = {j ∣ j is odd and 2 ≤ j ≤ n}, J2 = {j ∣ j is even and 2 ≤ j ≤ n}
( j𝜋 ) ∣t∣
yj = xj − sin 6𝜋x1 + , j = 2, 3, … , n, h(t) =
n 1 + e2∣t∣
� �
UF5 1 2 ∑ Bi-objective
f1 = x1 + + 𝜖 ∣ sin(2N𝜋x1 ) ∣ + h(yi )
2N ∣ J1 ∣ j𝜖J1
� �
1 2 ∑
f2 = 1 − x1 + + 𝜖 ∣ sin(2N𝜋x1 ) ∣ + h(yi )
2N ∣ J2 ∣ j𝜖J2
J1 = {j ∣ j is odd and 2 ≤ j ≤ n}, J2 = {j ∣ j is even and 2 ≤ j ≤ n}
( j𝜋 )
𝜖 > 0, yj = xj − sin 6𝜋x1 + , j = 2, 3, … , n, h(t) = 2t2 − cos(4𝜋t) + 1
n
� � � �
UF6 �
1

2 ∑ ∏ 20yj 𝜋 Bi-objective
f1 = x1 + max{0, 2 + 𝜖 sin(2N𝜋x1 )} + 4 j𝜖J1 y2j − 2 j𝜖J1 cos √ +1
2N ∣ J1 ∣ j
� � � ∑ ∏ � 20yj 𝜋 �
1 2
f2 = 1 − x1 + max{0, 2 + 𝜖 sin(2N𝜋x1 )} + 4 j𝜖J2 y2j − 2 j𝜖J2 cos √ + 1)
2N ∣ J2 ∣ j
J1 = {j ∣ j is odd and 2 ≤ j ≤ n}, J2 = {j ∣ j is even and 2 ≤ j ≤ n}
( j𝜋 )
𝜖 > 0, yj = xj − sin 6𝜋x1 + , j = 2, 3, … , n
n

13
International Journal of Machine Learning and Cybernetics

Table 12  (continued)
Name Mathematical formulation Properties

UF7 √ 2 ∑ Bi-objective
f1 = 5 x +
1 y2
∣ J1 ∣ j𝜖J1 j
√ 2 ∑
f2 = 1 − 5 x +
1 y2
∣ J2 ∣ j𝜖J2 j

J1 = {j ∣ j is odd and 2 ≤ j ≤ n}, J2 = {j ∣ j is even and 2 ≤ j ≤ n}


( j𝜋 )
𝜖 > 0, yj = xj − sin 6𝜋x1 + , j = 2, 3, … , n
n
UF8 � � j𝜋 �2 � Tri-objective
2 ∑
f1 = cos(0.5x1 𝜋)cos(0.5x2 𝜋) + j𝜖J1 xj − 2x2 sin 2𝜋x1 +
∣ J1 ∣ n
� � j𝜋 �2 �
2 ∑
f2 = cos(0.5x1 𝜋)sin(0.5x2 𝜋) + x − 2x2 sin 2𝜋x1 +
∣ J2 ∣ j𝜖J2 j n
� � � �
2 ∑ j𝜋 2
f3 = sin(0.5x1 𝜋) + x − 2x2 sin 2𝜋x1 +
∣ J3 ∣ j𝜖J3 j n
J1 = {j ∣ 3 ≤ j ≤ n, and j − 1 is a multiplication of 3}
J2 = {j ∣ 3 ≤ j ≤ n, and j − 2 is a multiplication of 3}
J3 = {j ∣ 3 ≤ j ≤ n, and j is a multiplication of 3}
UF9 � � j𝜋 �2 � Tri-objective
2 ∑
f1 = 0.5[max{0, (1 + 𝜖)(1 − 4(2x1 − 1)2 )} + 2x1 ]x2 + j𝜖J1 xj − 2x2 sin 2𝜋x1 +
∣ J1 ∣ n
� � j𝜋 �2 �
2 ∑
f2 = 0.5[max{0, (1 + 𝜖)(1 − 4(2x1 − 1)2 )} + 2x1 ]x2 + j𝜖J2 xj − 2x2 sin 2𝜋x1 +
∣ J2 ∣ n
� � � �
2 ∑ j𝜋 2
f3 = 1 − x2 + x − 2x2 sin 2𝜋x1 +
∣ J3 ∣ j𝜖J3 j n
J1 = {j ∣ 3 ≤ j ≤ n, and j − 1 is a multiplication of 3}
J2 = {j ∣ 3 ≤ j ≤ n, and j − 2 is a multiplication of 3}
J3 = {j ∣ 3 ≤ j ≤ n, and j is a multiplication of 3}, 𝜖 = 0.1
UF10 — Tri-objective
$f_1 = \cos(0.5 x_1 \pi)\cos(0.5 x_2 \pi) + \frac{2}{|J_1|}\sum_{j \in J_1}\left[4y_j^2 - \cos(8\pi y_j) + 1\right]$
$f_2 = \cos(0.5 x_1 \pi)\sin(0.5 x_2 \pi) + \frac{2}{|J_2|}\sum_{j \in J_2}\left[4y_j^2 - \cos(8\pi y_j) + 1\right]$
$f_3 = \sin(0.5 x_1 \pi) + \frac{2}{|J_3|}\sum_{j \in J_3}\left[4y_j^2 - \cos(8\pi y_j) + 1\right]$
$y_j = x_j - 2x_2\sin\!\left(2\pi x_1 + \frac{j\pi}{n}\right),\;\; j = 3, \ldots, n$
$J_1 = \{j \mid 3 \le j \le n \text{ and } j-1 \text{ is a multiple of } 3\}$
$J_2 = \{j \mid 3 \le j \le n \text{ and } j-2 \text{ is a multiple of } 3\}$
$J_3 = \{j \mid 3 \le j \le n \text{ and } j \text{ is a multiple of } 3\}$
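To make the table concrete, the UF functions can be evaluated directly from their definitions. The sketch below (plain Python; the helper names `uf7` and `uf8` and the list-based encoding of $\vec{x}$ are ours, not the authors' code) implements one bi-objective and one tri-objective case. On the respective Pareto sets all $y_j$ (or deviation) terms vanish, which gives a simple sanity check.

```python
import math

def uf7(x):
    """Bi-objective UF7 (sketch): j runs 2..n, 1-based as in the table."""
    n, x1 = len(x), x[0]
    s = [0.0, 0.0]   # accumulated y_j^2 over J1 (odd j) and J2 (even j)
    c = [0, 0]
    for j in range(2, n + 1):
        yj = x[j - 1] - math.sin(6 * math.pi * x1 + j * math.pi / n)
        k = 0 if j % 2 == 1 else 1
        s[k] += yj * yj
        c[k] += 1
    r = x1 ** 0.2    # fifth root of x1
    return r + 2.0 * s[0] / c[0], 1.0 - r + 2.0 * s[1] / c[1]

def uf8(x):
    """Tri-objective UF8 (sketch): J1..J3 partition j = 3..n by j mod 3."""
    n, x1, x2 = len(x), x[0], x[1]
    s = [0.0, 0.0, 0.0]
    c = [0, 0, 0]
    for j in range(3, n + 1):
        d = x[j - 1] - 2.0 * x2 * math.sin(2 * math.pi * x1 + j * math.pi / n)
        # J1: j-1 multiple of 3; J2: j-2 multiple of 3; J3: j multiple of 3
        k = 0 if (j - 1) % 3 == 0 else (1 if (j - 2) % 3 == 0 else 2)
        s[k] += d * d
        c[k] += 1
    t = [2.0 * si / ci for si, ci in zip(s, c)]
    f1 = math.cos(0.5 * x1 * math.pi) * math.cos(0.5 * x2 * math.pi) + t[0]
    f2 = math.cos(0.5 * x1 * math.pi) * math.sin(0.5 * x2 * math.pi) + t[1]
    f3 = math.sin(0.5 * x1 * math.pi) + t[2]
    return f1, f2, f3
```

The remaining UF rows follow the same pattern; only the leading shape terms and the index sets $J$ change.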

DTLZ1:
$\text{Minimize: } f_1(\vec{x}) = \tfrac{1}{2} x_1 (1 + g(\vec{x}))$
$\text{Minimize: } f_2(\vec{x}) = \tfrac{1}{2} (1 - x_1)(1 + g(\vec{x}))$
where
$g(\vec{x}) = 100\left[\,|\vec{x}| + \sum_{x_i \in \vec{x}}\left((x_i - 0.5)^2 - \cos(20\pi(x_i - 0.5))\right)\right]$
$0 \le x_i \le 1,\;\; i = 1, 2, \ldots, n$
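DTLZ1 as written above (with $g$ taken over the whole vector $\vec{x}$) can be checked numerically: at $x_i = 0.5$ for all $i$, the $|\vec{x}|$ term exactly cancels the $-\cos$ terms, so $g = 0$ and the point lies on the Pareto front $f_1 + f_2 = 0.5$. A minimal sketch (our own naming, not the authors' code):

```python
import math

def dtlz1(x):
    """Bi-objective DTLZ1 as formulated above; g is taken over all of x."""
    g = 100.0 * (len(x) + sum((xi - 0.5) ** 2
                              - math.cos(20 * math.pi * (xi - 0.5))
                              for xi in x))
    f1 = 0.5 * x[0] * (1.0 + g)
    f2 = 0.5 * (1.0 - x[0]) * (1.0 + g)
    return f1, f2
```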

DTLZ2:
$\text{Minimize: } f_1(\vec{x}) = (1 + g(\vec{x}))\cos\!\left(\frac{\pi}{2} x_1\right)$
$\text{Minimize: } f_2(\vec{x}) = (1 + g(\vec{x}))\sin\!\left(\frac{\pi}{2} x_1\right)$
where
$g(\vec{x}) = \sum_{x_i \in \vec{x}} (x_i - 0.5)^2$
$0 \le x_i \le 1,\;\; i = 1, 2, \ldots, n$

DTLZ3:
$\text{Minimize: } f_1(\vec{x}) = (1 + g(\vec{x}))\cos\!\left(\frac{\pi}{2} x_1\right)$
$\text{Minimize: } f_2(\vec{x}) = (1 + g(\vec{x}))\sin\!\left(\frac{\pi}{2} x_1\right)$
where
$g(\vec{x}) = 100\left[\,|\vec{x}| + \sum_{x_i \in \vec{x}}\left((x_i - 0.5)^2 - \cos(20\pi(x_i - 0.5))\right)\right]$
$0 \le x_i \le 1,\;\; i = 1, 2, \ldots, n$

DTLZ4:
$\text{Minimize: } f_1(\vec{x}) = (1 + g(\vec{x}))\cos\!\left(\frac{\pi}{2} x_1^\alpha\right)$
$\text{Minimize: } f_2(\vec{x}) = (1 + g(\vec{x}))\sin\!\left(\frac{\pi}{2} x_1^\alpha\right)$
where
$g(\vec{x}) = \sum_{x_i \in \vec{x}} (x_i - 0.5)^2,\quad \alpha = 100$
$0 \le x_i \le 1,\;\; i = 1, 2, \ldots, n$

DTLZ5:
$\text{Minimize: } f_1(\vec{x}) = (1 + g(\vec{x}))\cos\!\left(\frac{1 + 2g(\vec{x})x_1}{4(1 + g(\vec{x}))} \times \frac{\pi}{2}\right)$
$\text{Minimize: } f_2(\vec{x}) = (1 + g(\vec{x}))\sin\!\left(\frac{1 + 2g(\vec{x})x_1}{4(1 + g(\vec{x}))} \times \frac{\pi}{2}\right)$
where
$g(\vec{x}) = \sum_{x_i \in \vec{x}} (x_i - 0.5)^2$
$0 \le x_i \le 1,\;\; i = 2, 3, \ldots, n$

DTLZ6:
$\text{Minimize: } f_1(\vec{x}) = (1 + g(\vec{x}))\cos\!\left(\frac{1 + 2g(\vec{x})x_1}{4(1 + g(\vec{x}))} \times \frac{\pi}{2}\right)$
$\text{Minimize: } f_2(\vec{x}) = (1 + g(\vec{x}))\sin\!\left(\frac{1 + 2g(\vec{x})x_1}{4(1 + g(\vec{x}))} \times \frac{\pi}{2}\right)$
where
$g(\vec{x}) = \sum_{x_i \in \vec{x}} x_i^{0.1}$
$0 \le x_i \le 1,\;\; i = 2, 3, \ldots, n$

DTLZ7:
$\text{Minimize: } f_1(\vec{x}) = x_1$
$\text{Minimize: } f_2(\vec{x}) = (1 + g(\vec{x}))\,h(f_1(\vec{x}), g(\vec{x}))$
where
$g(\vec{x}) = 1 + \frac{9}{|\vec{x}|}\sum_{x_i \in \vec{x}} x_i$
$h(f_1(\vec{x}), g(\vec{x})) = M - \frac{f_1(\vec{x})}{1 + g(\vec{x})}\left(1 + \sin(3\pi f_1(\vec{x}))\right)$
$0 \le x_i \le 1,\;\; 1 \le i \le n$

Appendix D: Constrained engineering design problems

D.1. Welded beam design problem

$\text{Minimize } f_1(\vec{z}) = C = 1.10471 h^2 l + 0.04811 t b (14.0 + l)$
$\text{Minimize } f_2(\vec{z}) = D = \frac{2.1952}{t^3 b}$
Subject to
$g_1(\vec{z}) = 13{,}600 - \tau(\vec{z}) \ge 0$
$g_2(\vec{z}) = 30{,}000 - \sigma(\vec{z}) \ge 0$
$g_3(\vec{z}) = b - h \ge 0$
$g_4(\vec{z}) = P_c(\vec{z}) - 6{,}000 \ge 0$
Variable range
$0.125 \le h, b \le 5.0\ \text{in.},\quad 0.1 \le l, t \le 10.0\ \text{in.}$
where
$\tau(\vec{z}) = \sqrt{(\tau')^2 + (\tau'')^2 + (l\tau'\tau'')\big/\sqrt{0.25(l^2 + (h+t)^2)}}$
$\tau' = \frac{6{,}000}{\sqrt{2}\, h l},\quad \sigma(\vec{z}) = \frac{504{,}000}{t^2 b}$
$\tau'' = \frac{6{,}000\,(14 + 0.5l)\sqrt{0.25(l^2 + (h+t)^2)}}{2\left[0.707 h l \left(l^2/12 + 0.25(h+t)^2\right)\right]}$
$P_c(\vec{z}) = 64{,}746.022\,(1 - 0.0282346\, t)\, t b^3$
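The welded-beam formulation above is straightforward to evaluate. The sketch below (our own helper, not the authors' code; we assume a return convention in which $g_i \ge 0$ means the constraint is satisfied) computes both objectives and the four constraints:

```python
import math

def welded_beam(h, l, t, b):
    """Welded-beam objectives and constraints (sketch).
    Returns (f1, f2, [g1..g4]); gi >= 0 means constraint i is satisfied."""
    tau_p = 6000.0 / (math.sqrt(2.0) * h * l)                        # tau'
    tau_pp = (6000.0 * (14.0 + 0.5 * l)                              # tau''
              * math.sqrt(0.25 * (l ** 2 + (h + t) ** 2))
              / (2.0 * (0.707 * h * l * (l ** 2 / 12.0
                                         + 0.25 * (h + t) ** 2))))
    tau = math.sqrt(tau_p ** 2 + tau_pp ** 2
                    + l * tau_p * tau_pp
                    / math.sqrt(0.25 * (l ** 2 + (h + t) ** 2)))
    sigma = 504000.0 / (t ** 2 * b)
    pc = 64746.022 * (1.0 - 0.0282346 * t) * t * b ** 3
    f1 = 1.10471 * h ** 2 * l + 0.04811 * t * b * (14.0 + l)         # cost C
    f2 = 2.1952 / (t ** 3 * b)                                       # deflection D
    g = [13600.0 - tau, 30000.0 - sigma, b - h, pc - 6000.0]
    return f1, f2, g
```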
D.2. Multiple-disk clutch brake design problem

$\text{Minimize } f_1(\vec{z}) = M = \pi (r_o^2 - r_i^2)\, t\, (Z + 1)\, p_m$
$\text{Minimize } f_2(\vec{z}) = T = \frac{I_z\, \omega}{M_h + M_f}$
Subject to
$g_1(\vec{z}) = r_o - r_i - \Delta R \ge 0$
$g_2(\vec{z}) = L_{max} - (Z + 1)(t + \delta) \ge 0$
$g_3(\vec{z}) = p_{max} - p_{rz} \ge 0$
$g_4(\vec{z}) = p_{max} V_{sr,max} - p_{rz} V_{sr} \ge 0$
$g_5(\vec{z}) = V_{sr,max} - V_{sr} \ge 0$
$g_6(\vec{z}) = M_h - s M_s \ge 0$
$g_7(\vec{z}) = T \ge 0$
$g_8(\vec{z}) = T_{max} - T \ge 0$
$60 \le r_i \le 80\ \text{mm},\quad 90 \le r_o \le 110\ \text{mm},\quad 1.5 \le t \le 3\ \text{mm}$
$0 \le F \le 1000\ \text{N},\quad 2 \le Z \le 9$
where
$p_m = 0.0000078\ \text{kg/mm}^3,\; p_{max} = 1\ \text{MPa},\; \mu = 0.5,\; V_{sr,max} = 10\ \text{m/s}$
$s = 1.5,\; T_{max} = 15\ \text{s},\; n = 250\ \text{rpm},\; M_s = 40\ \text{Nm},\; M_f = 3\ \text{Nm}$
$I_z = 55\ \text{kg}\,\text{m}^2,\; \delta = 0.5\ \text{mm},\; \Delta R = 20\ \text{mm},\; L_{max} = 30\ \text{mm}$
$M_h = \frac{2}{3}\mu F Z\, \frac{r_o^3 - r_i^3}{r_o^2 - r_i^2}\ \text{N\,mm},\quad \omega = \frac{\pi n}{30}\ \text{rad/s},\quad R_{sr} = \frac{2}{3}\,\frac{r_o^3 - r_i^3}{r_o^2 - r_i^2}\ \text{mm}$
$A = \pi (r_o^2 - r_i^2)\ \text{mm}^2,\quad p_{rz} = \frac{F}{A}\ \text{N/mm}^2,\quad V_{sr} = \frac{\pi R_{sr} n}{30}\ \text{mm/s}$

D.3. Pressure vessel design problem

$\text{Minimize } f_1(\vec{z}) = 0.6224\, T_s L R + 1.7781\, T_h R^2 + 3.1661\, T_s^2 L + 19.84\, T_s^2 R$
$\text{Minimize } f_2(\vec{z}) = -\left(\pi R^2 L + 1.333\, \pi R^3\right)$
Subject to
$g_1(\vec{z}) = 0.0193 R - T_s \le 0$
$g_2(\vec{z}) = 0.00954 R - T_h \le 0$
$g_3(\vec{z}) = 0.0625 - T_s \le 0$
$g_4(\vec{z}) = T_s - 5 \le 0$
$g_5(\vec{z}) = 0.0625 - T_h \le 0$
$g_6(\vec{z}) = T_h - 5 \le 0$
$g_7(\vec{z}) = 10 - R \le 0$
$g_8(\vec{z}) = R - 200 \le 0$
$g_9(\vec{z}) = 10 - L \le 0$
$g_{10}(\vec{z}) = L - 240 \le 0$
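Similarly, the pressure-vessel problem can be scripted. In the sketch below (our own naming: $T_s$, $T_h$, $R$, $L$ map to `ts`, `th`, `r`, `ll`), a design is feasible exactly when every $g_i \le 0$:

```python
import math

def pressure_vessel(ts, th, r, ll):
    """Pressure-vessel objectives and constraints (sketch).
    Returns (f1, f2, [g1..g10]); gi <= 0 means constraint i is satisfied."""
    # f1: material/forming/welding cost; f2: negative enclosed volume
    f1 = (0.6224 * ts * ll * r + 1.7781 * th * r ** 2
          + 3.1661 * ts ** 2 * ll + 19.84 * ts ** 2 * r)
    f2 = -(math.pi * r ** 2 * ll + 1.333 * math.pi * r ** 3)
    g = [0.0193 * r - ts, 0.00954 * r - th,
         0.0625 - ts, ts - 5.0,
         0.0625 - th, th - 5.0,
         10.0 - r, r - 200.0,
         10.0 - ll, ll - 240.0]
    return f1, f2, g
```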
