
Engineering with Computers (2021) 37:509–532

https://doi.org/10.1007/s00366-019-00837-7

ORIGINAL ARTICLE

I-GWO and Ex-GWO: improved algorithms of the Grey Wolf Optimizer to solve global optimization problems

Amir Seyyedabbasi1 · Farzad Kiani2

Received: 20 April 2019 / Accepted: 29 July 2019 / Published online: 13 August 2019
© Springer-Verlag London Ltd., part of Springer Nature 2019

Abstract
In this paper, two novel meta-heuristic algorithms inspired by the Grey Wolf Optimizer (GWO) are introduced to solve global optimization problems. In the GWO algorithm, wolves are likely to be located in regions close to each other; therefore, as they close in on the prey (approach the solution), they may crowd into the same or certain regions, and in this case the mechanism that prevents the prey from escaping may not work well. The first proposed algorithm is an expanded model of the GWO algorithm and is called the expanded Grey Wolf Optimizer (Ex-GWO). In this method, as in GWO, alpha, beta, and delta play the role of the three main wolves; however, each of the remaining wolves selects and updates its position according to these first three wolves and the wolves updated before it in each iteration. The other proposed algorithm is based on an incremental model and is therefore called the incremental Grey Wolf Optimizer (I-GWO). In this method, each wolf updates its own position based on all the wolves selected before it. These algorithms can possibly find solutions (prey) more quickly than other algorithms in the same category; however, they may not always guarantee good solutions, because the wolves act dependently on each other. Both algorithms focus on exploration and exploitation. In this paper, the proposed algorithms are simulated over 33 benchmark functions and the results are compared with well-known optimization algorithms. The results indicate that the proposed algorithms provide good solutions for various problems.

Keywords Grey wolf optimizer (GWO) · Optimization algorithm · Meta-heuristic · Swarm intelligence

* Amir Seyyedabbasi
amir.seyedabbasi@gmail.com
Farzad Kiani
farzad.kiyani@gmail.com
1 Computer Engineering Department, Engineering and Natural Sciences Faculty, Istanbul Sabahattin Zaim University, Istanbul, Turkey
2 Computer Engineering Department, Engineering and Architecture Faculty, Istanbul Arel University, Istanbul, Turkey

1 Introduction

A large and growing body of literature has investigated this area, and many authors have proposed new meta-heuristic algorithms. These algorithms are fast becoming a key instrument in complex optimization problems, and the search space is an important issue in every complex problem: as the dimension of a problem increases, the search space grows exponentially, and so the complexity of the problem increases [1, 2]. In some problem functions, the dimension is constant. Meta-heuristic algorithms can provide reasonable solutions within an acceptable time. They may not always guarantee the best solutions, and sometimes the solutions found may not be acceptable; therefore, the performance of the developed algorithms may vary from problem to problem. An algorithm that is very successful in solving one problem may not be good at solving another problem at the same time. A meta-heuristic algorithm is executed based on random inputs and the received outputs, independently of the problem [3]. Meta-heuristic algorithms can be classified according to various criteria, such as single-solution-based or (population-based) multi-solution search. In population-based algorithms, the whole population affects the output, whereas in single-solution-based algorithms, a single solution is evolved along the search iterations [4]. As explained above, randomness, reasonable solutions, acceptable response time, and different solutions for a problem in each run are the characteristics of meta-heuristic algorithms [5]. In the literature, population-based studies have been conducted more often [4]. They are used in many areas of science and engineering



such as engineering design, machine learning, system modeling, industry, and planning in routing problems [4]. The main purposes of meta-heuristics are solving problems faster, solving large problems, and obtaining robust algorithms [4].

As is known, meta-heuristic methods come from the family of optimization algorithms. These algorithms are categorized into two groups: exact and approximate algorithms. Exact algorithms are capable of finding the optimal solution in a precise manner, but they are not efficient enough for difficult or hard optimization problems, and their execution time grows exponentially with the dimensions of the problem [6]. Approximate algorithms are capable of finding good (near-optimal) solutions in a short time for difficult or hard optimization problems. Heuristic and meta-heuristic algorithms are in the category of approximate algorithms. A heuristic algorithm is problem-dependent, whereas a meta-heuristic algorithm is a problem-independent technique. Besides, heuristic algorithms may become trapped in local optima, while meta-heuristic algorithms avoid trapping in local optima through the concepts of exploration and exploitation [3]. These concepts are very important in every meta-heuristic algorithm, because there should be a trade-off between the two of them. Exploration means searching for the best solution over the area(s), and exploitation refers to focusing on the best solution area(s) to reach the best solution. Taken as a whole, in the initial iterations the power of exploration must be high; subsequently, the power of exploitation gradually increases.

Meta-heuristic algorithms are generally classified into three types: evolution-based, physics-based, and swarm intelligence methods. The evolution-based algorithms (EA) are inspired by nature. For solving a given problem in a search space, evolutionary algorithms initially start with a random population (a set of solutions). In these methods, the best solution of each step affects the next generation of individuals. The most popular algorithm in this category is the genetic algorithm (GA) [7], which was inspired by Charles Darwin's theory of evolution. GA mimics generational reproduction and includes selection, crossover, mutation, and elitism phases. This algorithm has been applied in different areas such as soft computing [8], health science [9], and civil engineering [10]. Another EA-based algorithm is differential evolution (DE), which, like GA, mimics evolutionary theory, but there are some differences between them, such as in the selection operators [11]. Some studies and applications based on DE are routing algorithms in wireless sensor networks [12, 13] and electrical engineering [14]. Evolutionary programming (EP) emphasizes the development of behavioral models such as phenotype, heredity, and variation [15, 16]. An application of EP was introduced in reservoir flood operation [17]. The biogeography-based optimizer (BBO) is inspired by natural laws: it describes migration between islands and the factors behind why, and for which island, migration is the best choice [18]. Authors in [19] use the BBO algorithm in the optimization of power system stabilizers. In [20], the authors introduced a hybrid bio-inspired algorithm of GA and BBO for protein domain problems [21]. BBO also solves complex economic load dispatch problems [22].

The physics-based methods generally mimic the physical rules and natural processes of nature. In this kind of algorithm, physical rules have the main effect in the search space. There are several well-known physics-based algorithms. One of the most popular is the gravitational search algorithm (GSA) [22], which was inspired by the law of gravity and mass interactions. GSA has many applications in thermodynamics [23] and energy management systems [24]. In [25], GSA has been used for a face recognition mechanism. Big Bang–Big Crunch (BBBC) [26] mimics the big bang and big crunch theory; like the theory, the algorithm includes two phases. The method in [27] is an example of a BBBC application to optimal power flow problems. Charged system search (CSS) [28] was inspired by Newtonian laws of mechanics and the Coulomb law of electrostatics. In [29], CSS has been used for the emission-constrained economic power dispatch problem. Chemical reaction optimization (CRO) [30] simulates molecules that interact with each other through a sequence of elementary reactions. Grid computing [31], cloud computing [32], and RNA structure prediction [33] are applications of CRO. Central force optimization (CFO) [34] is based on the metaphor of gravitational kinematics. Leak detection is a problem in piping systems that authors in [35] recently solved with CFO. The black hole (BH) [36] algorithm is inspired by the black hole phenomenon. Authors in [37] proposed a method to solve the travelling salesman problem with BH.

The swarm-based methods, in other words swarm intelligence (SI) algorithms, are based on group behaviors. These types of algorithms consist of a group of simple, homogeneous particles that interact with each other and their environment. SI-based algorithms rely on agents that cooperate in the local search space, and the collective behavior of all agents causes convergence near the best solution. In these algorithms, each agent is expected to cooperate with the other agents. Particle swarm optimization (PSO) is the most popular algorithm in this category and was presented by Eberhart et al. [38]. PSO was inspired by the social behavior of birds: in PSO, there are communication channels between the particles, which move in the search space, and a fitness function determines the best solution. PSO can provide good solutions in various optimization problems, and in the literature, different algorithms have been proposed in various fields using this method. One of them is best-route-finding problems. Pathfinding in routing-based problems is a critical issue, especially when the paths change frequently and finding fixed, best routes is difficult. In addition, other parameters


such as power, delay, and delivery rate are effective in the decision mechanism. In this category of problems, PSO can build optimal solutions and routing paths. Some studies in this area include path planning for a mobile robot in [39] and energy efficiency in distributed systems in [40]. Another study is Ant Colony Optimization (ACO), which simulates the behavior of ants finding paths while foraging [41]. In ACO, each ant finds its path according to the experience of the other ants on the path. ACO is an excellent example of an SI algorithm and has been used in many fields such as routing in wireless sensor networks; one critical issue in these networks is finding the optimal route, so authors in [42] used ACO to find optimal paths. In addition, some studies concern disaster relief operations [43] and image edge detection [44]. In this category, another study is the Artificial Bee Colony (ABC), which mimics the behavior of honeybees, for which foraging is one of the main behaviors [45]. In ABC, there are two types of bees, employed and unemployed, which are responsible for the search for rich food sources. Bees benefit from the experiences of other bees in finding a resource: if a specific location has good resources, they will try to go there; in the reverse situation, if resources there run low, the location will be abandoned. ABC has been used in the Internet of Things [46]. Furthermore, authors in [47] used the ABC algorithm to solve the leaf-constrained minimum spanning tree problem. Another method is the Bat-inspired Algorithm (BA), which mimics the echolocation behavior of bats [48]. Solving symmetric and asymmetric traveling salesman problems with BA is an application of this algorithm [49]; in addition, BA has been used in proportional-integral (PI) controllers [50]. The Firefly algorithm (FA) was inspired by the behavior of fireflies [51]. In this algorithm, the flashing characteristics of fireflies attract other fireflies; the flashing carries a message, usually used to send a signal to the opposite sex in the colony. Feature selection [52], truss structures [53], and coverage maximization in wireless sensor networks [54] are applications of FA. The Cuckoo Search (CS) algorithm mimics the cuckoo's nesting behavior [55]. Cuckoos have one instinctive behavior that distinguishes them from other birds: they lay their eggs in the nests of other birds (of other species), which is known as obligate brood parasitism [56]. If the host bird discovers that an egg is not its own, it tries to destroy the egg or migrates to another nest; usually, the cuckoo tries to change its own egg to resemble the host's eggs. CS has many applications in agriculture [57], bioinformatics [58], and wireless sensor networks [59]. The monkey search algorithm mimics the behavior of monkeys searching for food [60]; the algorithm consists of a climb process, a watch-jump process, and a somersault process. Authors in [61] and [62] used this algorithm to solve the 0/1 knapsack problem and in health science, respectively. The Grey Wolf Optimizer (GWO) is another algorithm in the SI category and was introduced in [3]. GWO mimics the social habits of grey wolves and focuses on their hunting mechanism; it is based on the leadership structure of grey wolves. In GWO, there are four types of wolves: alpha, beta, delta, and omega, where all wolves other than the first three are given the name omega. The authors of GWO supposed that the alpha, beta, and delta wolves have better knowledge of the prey's position, so the omega wolves update their own positions based on these three smartest wolves in order to encircle the prey. In the literature, there are many studies that apply GWO to find optimal solutions to various problems. Authors in [63] used GWO for image segmentation, and it has also been used in medical diagnosis [64]. One of the important problems in ad-hoc-based systems is clustering in vehicular networks; authors in [65] used GWO to provide optimal clusters in ad-hoc networks. Load balancing in cloud computing is also significant, and authors in [66] used GWO for resource allocation.

As mentioned above, most studies in the field of meta-heuristic algorithms have focused on exploration and exploitation, and the algorithms proposed in this field maintain the balance between these two phases. For example, in GWO, the authors claimed that there is a trade-off between them. As mentioned before, there are four types of wolves in GWO. Omega wolves update their own positions based on the first three wolves' positions, with the purpose of catching the prey. These three wolves are located at the first three levels of the hierarchy, and the remaining (omega) wolves are defined at the fourth level. In this mechanism, since the position updates of the wolves at the second layer (the fourth and lower levels, the omegas) depend only on three wolves, they can settle densely in the same or certain regions during the prey-catching process. In this case, the escape-prevention mechanism may not work well. In this paper, two new meta-heuristic algorithms inspired by the GWO method are proposed. One of them is the Expanded Grey Wolf Optimizer (Ex-GWO), in which the positions of the wolves in the first layer (alpha, beta, and delta) and of the wolf/wolves previously selected and updated in the second layer are used to update each current wolf's own position. The other method recommended in this paper is the Incremental-based Grey Wolf Optimizer (I-GWO) algorithm. In its working mechanism, the position update of each wolf is related to the wolves that were selected and updated before it; in other words, the positions of n − 1 wolves are considered in the position update of the nth wolf. For both, there is a high probability of finding good solutions for a variety of complex problems; in addition, they can find global solutions quickly in few iterations. The convergence rate to the global solution of the Ex-GWO method is lower than that of the I-GWO method, but it has more balanced behavior and performance in solving many problems. These two algorithms are described in more detail in the third section of the paper.

The rest of the paper is organized as follows.


Section 2 discusses the Grey Wolf Optimizer (GWO) and the two most important algorithms based on GWO in the literature. The Expanded Grey Wolf Optimizer (Ex-GWO) and the Incremental Grey Wolf Optimizer (I-GWO) are introduced in Sect. 3. Section 4 presents simulation results and comparisons on 33 benchmark functions as global optimization problems. Finally, Sect. 5 concludes the work and describes future work.

2 Standard Grey Wolf Optimization

2.1 Grey Wolf Optimizer (GWO)

In 2014, Mirjalili et al. [3] published a paper in which they described a new meta-heuristic optimization algorithm, Grey Wolf Optimization (GWO). GWO simulates the social behavior and leadership hierarchy of grey wolves. The group living and hunting habits of grey wolves are unique to this kind of animal. Grey wolf groups usually consist of 5–12 wolves, and each group consists of four types of grey wolves, namely alpha (α), beta (β), delta (δ), and omega (ω), as shown in Fig. 1. For example, 2 members of a 5-member group are omega wolves. It must be said that each of the omega wolves may have different tasks in the pack.

Fig. 1 Grey wolf hierarchy

In a group, each of these types of grey wolves has a different responsibility. The alpha (α) wolf has a powerful effect in the group as the leader: decisions about a hunt, the sleeping location, and the wake-up time are the responsibilities of the alpha wolf, and these decisions are dictated to the group. Only the alpha wolf may mate. It does not matter whether this wolf is stronger than the others; group management is more important than pack power in hunting. The beta (β) wolf is in the second layer of the pack hierarchy. These wolves are known as co-leaders in the group: they help the alphas in decision-making, can be good substitutes for the alphas, and dictate instructions to the wolves lower in the hierarchy. The delta (δ) wolf is considered the third level of the hierarchy. A delta wolf must follow the instructions of the upper-level wolves, alpha and beta, and is the last wolf that may eat from the hunt. It is important to say that if the delta wolf does not exist in a group, the group encounters internal chaos and problems. The lowest layer of the hierarchy is the omega (ω) grey wolves: if a wolf does not belong to any of the above types, it is an omega. In addition, one of the interesting habits of grey wolves is group hunting. GWO has been used in many fields [63–66]. The authors of the GWO algorithm suggested a mathematical model inspired by grey wolf life, and so the GWO algorithm mimics some real behaviors such as encircling, hunting, and attacking the prey.

2.1.1 Mathematical model in encircling the prey

When hunting, wolves encircle the prey. Mathematically, this is modeled as given in Eqs. 1 and 2; thanks to this, the prey will be surrounded by the new locations of the grey wolves:

$\vec{D} = |\vec{C} \cdot \vec{X}_p(t) - \vec{X}(t)|, \quad (1)$

$\vec{X}(t+1) = \vec{X}_p(t) - \vec{A} \cdot \vec{D}, \quad (2)$

where t is the current iteration, $\vec{A}$ and $\vec{C}$ are coefficient vectors, $\vec{X}$ is the position vector of the grey wolf, and $\vec{X}_p$ is the position vector of the prey. $\vec{A}$, $\vec{C}$, and $\vec{a}$ are calculated as in Eqs. 3, 4, and 5, respectively:

$\vec{A} = 2\vec{a} \cdot \vec{r}_1 - \vec{a}, \quad (3)$

$\vec{C} = 2 \cdot \vec{r}_2, \quad (4)$

$\vec{a} = 2\left(1 - \frac{t}{T}\right), \quad (5)$

where $\vec{a}$ is linearly decreased from 2 to 0 over the course of the iterations; it is used to get closer to the solution range. $\vec{r}_1$ and $\vec{r}_2$ are random vectors in the range [0, 1].

2.1.2 Mathematical model for the hunting mechanism

Grey wolves have the ability to surround the prey's position. The mathematical model supposes that there is no prior idea of the position of the prey; therefore, the alpha, beta, and delta wolves are assumed to have better knowledge about the prey's position. Indeed, alpha (the first best solution), beta, and delta are the three best candidate solutions. Omega wolves renew their positions according to the wolves in the upper layer. The related Eqs. 6, 7, and 8 are proposed in this regard:

$\vec{D}_\alpha = |\vec{C}_1 \cdot \vec{X}_\alpha - \vec{X}|, \quad \vec{D}_\beta = |\vec{C}_2 \cdot \vec{X}_\beta - \vec{X}|, \quad \vec{D}_\delta = |\vec{C}_3 \cdot \vec{X}_\delta - \vec{X}|. \quad (6)$


$\vec{X}_1 = \vec{X}_\alpha - \vec{A}_1 \cdot \vec{D}_\alpha, \quad \vec{X}_2 = \vec{X}_\beta - \vec{A}_2 \cdot \vec{D}_\beta, \quad \vec{X}_3 = \vec{X}_\delta - \vec{A}_3 \cdot \vec{D}_\delta. \quad (7)$

Then

$\vec{X}(t+1) = \frac{\vec{X}_1(t) + \vec{X}_2(t) + \vec{X}_3(t)}{3}. \quad (8)$

2.1.3 Attacking prey

It is worth mentioning that exploration and exploitation are important in meta-heuristic algorithms, and GWO tries to trade off between these two phases. In GWO, the value of $\vec{a}$ decreases from 2 to 0 over the iterations, and the value of $\vec{A}$ decreases along with it. The value of $\vec{A}$ is important to a grey wolf: when $|A| < 1$, the wolves are forced to attack the prey; otherwise, when $|A| > 1$, the wolves try to search for other prey. This realizes the exploration and exploitation concepts. The parameter $\vec{C}$ provides random values in each iteration; the authors emphasize that this value affects exploration at all times, even in the last iteration. The GWO algorithm was tested on 29 functions, divided into two sections (23 benchmark functions and 9 real mechanical and optical engineering problems), and the results were compared with those of other well-known algorithms. The achieved results showed that GWO finds the optimal solution in three benchmark functions. The next parts of this section describe two well-known algorithms based on GWO.
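Before turning to those variants, it may help to see Eqs. 1–8 as one concrete iteration step. The following Python/NumPy sketch is an illustration written for this presentation, not the original authors' code; the population matrix X, the fitness function f, the bounds lb and ub, and the iteration budget T are assumed inputs.

import numpy as np

def gwo_step(X, f, t, T, lb, ub):
    # Rank the pack: the three fittest wolves act as alpha, beta, and delta
    idx = np.argsort([f(x) for x in X])
    leaders = [X[idx[0]], X[idx[1]], X[idx[2]]]
    a = 2 * (1 - t / T)                          # Eq. 5: a decays linearly from 2 to 0
    X_new = np.empty_like(X)
    for k, x in enumerate(X):
        candidates = []
        for x_l in leaders:
            r1, r2 = np.random.rand(x.size), np.random.rand(x.size)
            A = 2 * a * r1 - a                   # Eq. 3
            C = 2 * r2                           # Eq. 4
            D = np.abs(C * x_l - x)              # Eq. 6: distance to a leader
            candidates.append(x_l - A * D)       # Eq. 7: candidate guided by that leader
        X_new[k] = np.mean(candidates, axis=0)   # Eq. 8: average of X1, X2, X3
    return np.clip(X_new, lb, ub)

Because |A| shrinks together with a, early iterations favor exploration (|A| > 1) and later ones exploitation (|A| < 1), exactly as described above.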
2.2 Modified GWO algorithm (mGWO)

Nitin et al. [67] proposed a new version of GWO in 2016. This work focused on a proper balance between exploration and exploitation. The modified GWO algorithm enhances the exploration process by decreasing the value of $\vec{a}$ based on Eq. 9; by decreasing $\vec{a}$, the value of $\vec{A}$ is decreased too. Therefore, mGWO can balance exploration and exploitation to find the global minimum with fast convergence speed. mGWO uses an exponential decay function of $\vec{a}$ over the iterations, where T denotes the maximum number of iterations and t is the current iteration:

$\vec{a} = 2\left(1 - \frac{t^2}{T^2}\right). \quad (9)$

In this algorithm, the modification occurs only in $\vec{a}$; this modification of GWO is classified as an updating-mechanism modification [69]. According to the results in [67], the numbers of iterations devoted to exploration and exploitation are 70% and 30%, respectively. In addition, mGWO was tested on 23 benchmark functions and compared with other meta-heuristic algorithms. The authors declared that mGWO achieved acceptable results; their results show that mGWO can find optimal solutions in ten benchmark functions.
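Since the only change in mGWO is the decay schedule of $\vec{a}$, the contrast with standard GWO can be computed directly. The short sketch below is an illustrative comparison (not code from [67]) of the two schedules over the same iteration budget:

import numpy as np

T = 500
t = np.arange(T + 1)
a_gwo = 2 * (1 - t / T)            # Eq. 5: linear decay from 2 to 0
a_mgwo = 2 * (1 - t**2 / T**2)     # Eq. 9: stays large longer, favoring exploration
# a_mgwo >= a_gwo at every iteration, which is why roughly 70% of the
# iterations are reported as exploration in mGWO
print(a_gwo[250], a_mgwo[250])     # -> 1.0 1.5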
2.3 Enhanced Grey Wolf Optimization (EGWO)

Joshi et al. [68] introduced a novel hunting mechanism for GWO. The authors try to balance exploration and exploitation, and they also focused on improving the convergence rate with better results. The EGWO algorithm adjusts the $\vec{a}$ parameter as a random vector between 0 and 1; therefore, exploration and exploitation are balanced by the parameter $\vec{a}$. The authors claim that adjusting the $\vec{a}$ parameter maintains exploration and prevents getting trapped in local optima. In the EGWO algorithm, all wolves except alpha (α) update their positions for hunting based only on the leader wolf's position. As in GWO, alpha (α) is assumed to have better knowledge about the potential position of the prey, and EGWO uses the alpha (α) position for the hunting habit of the pack. The hunting mechanism is achieved from Eqs. 10, 11, and 12:

$\vec{D}_\alpha = |\vec{C}_1 \cdot \vec{X}_\alpha - \vec{X}|, \quad (10)$

$\vec{X}_1 = \vec{X}_\alpha - \vec{A}_1 \cdot \vec{D}_\alpha, \quad (11)$

$\vec{X}(t+1) = \vec{X}_1. \quad (12)$

They tested the proposed algorithm on 25 selected benchmark functions and, according to the results [68], claimed that the performance of EGWO is promising in terms of better exploration and exploitation of the search space in comparison with the other studied algorithms. Their results show that EGWO can find optimal solutions in 11 benchmark functions.

3 Proposed methods inspired by GWO

In this section, the two proposed methods are explained in order. The two proposed methods have no superiority over each other and are not recommended as completions of each other's deficiencies; in other words, they are not derived versions of one another. These two algorithms can be used to find solutions to different complex problems. As a matter of fact, when applied over the 33 benchmark functions that will be presented in Sect. 4, one of the introduced algorithms (I-GWO) finds the best solution in 17 functions compared to other optimization algorithms in the literature, and the other proposed algorithm (Ex-GWO) is successful in 13 functions. Therefore, on a problem where one of them may not be good, the other proposed algorithm might find a good solution.


3.1 Expanded GWO (Ex-GWO)

As mentioned above, one of the main goals of meta-heuristic algorithms is finding the best optimized solutions. The fitness function is the inseparable basis of optimization algorithms; its value leads the algorithm toward the best solution, whether for maximization or minimization. Population-based optimization algorithms are one type of meta-heuristic algorithm. In these algorithms, a random population is initially generated. This random population (in the search space) is a set of solutions, and as the iterations proceed, this population is improved. The agents (population) of the optimization algorithm try to find the best solution in the search space.

As discussed in the previous section, this paper is inspired by the GWO algorithm and proposes two algorithms. One of them is an expanded version of GWO and is a population-based optimization algorithm. In general, in GWO, the three wolves alpha (α), beta (β), and delta (δ), respectively, have the highest impact on the other wolves, named omega (ω), in the pack. The omega (ω) wolves in the pack must update their positions according to the alpha (α), beta (β), and delta (δ) wolves' positions. Omega (ω) wolves have no knowledge about the prey position; they approach the prey based on the alpha, beta, and delta positions, which GWO considers to have good knowledge about the prey. Omega wolves have to optimize their own positions to be close to the prey, so they update their own positions. It follows that omega wolves may become located in very similar positions or close to each other. In addition, the mechanism to prevent the escape of the prey may not work well.

The proposed Ex-GWO is a novel hunting mechanism inspired by GWO. In this algorithm, we define two layers of hierarchy. The first layer consists of three levels, with each of the alpha, beta, and delta wolves placed in one level. The second layer consists of the other members of the pack (Fig. 2). The positions of the wolves in the first layer (alpha, beta, and delta) and of the wolf/wolves previously selected and updated in the second layer are used to update each current wolf's own position. In this method, as in GWO, alpha, beta, and delta play the role of the main three wolves. However, the next wolves select and update their positions according to the previous wolves and the first three wolves in each iteration. In the proposed method, to prevent omega wolves from becoming located in close-up areas as in GWO, a parameter called $\vec{a}$ is defined for the position updates, but it may not always be a successful metric. Therefore, a mechanism is recommended in which the wolves (omega wolves) in the second layer follow each other and update their own positions. An example of the mechanism is shown in Fig. 3 for the fifth wolf.

The Ex-GWO algorithm uses the positions of the other wolves in the pack, not just the alpha, beta, and delta positions, to find the best solution, as described in Fig. 3. Therefore, there is a significant difference between GWO and Ex-GWO in the hunting mechanism. The biggest weakness of the GWO algorithm is perhaps its treatment of the omega wolves: exploration occurs in half of the iterations and exploitation is dedicated to the other half. The aim of the Ex-GWO algorithm is to establish the exploration and exploitation phases from the initial iterations; it offers a balanced performance between the two phases. The time complexity of this algorithm is not as good as that of GWO, but, thanks to the balanced behavior mechanism, the probability of finding a good solution is higher. In addition, the wolves in the pack minimize the escape paths of the prey, and hence the prey can be caught faster. Figure 4 is an example of the interaction of the wolves with each other, the position updates, and the hunting sieges.

Fig. 2 The hierarchy mechanism in Ex-GWO

Fig. 3 Position updating in the Ex-GWO (e.g., the fifth wolf)

In this algorithm, like GWO, it is supposed that the first, second, and third wolves have better knowledge about the prey position. The fourth wolf updates its own position according to these three wolves' positions. Evidently, the fifth wolf updates its own position based on four wolves


(alpha, beta, delta, and the fourth wolf). The best positions of the first, second, third, and fourth wolves help the fifth wolf to update its own position; in the same way, the fifth wolf will move to a better position, closer to the prey. This technique is applied to each wolf in the pack. The Ex-GWO method represents an innovative alternative hunting mechanism. In this algorithm, some of the parameters that were defined in GWO are revised. Mathematically, a pack has m wolves. The first three wolves are located in the best positions relative to the prey; the remaining wolves update their positions in one course of the iteration. The proposed algorithm uses some control parameters, $\vec{A}$, $\vec{C}$, and $\vec{a}$, as presented in Eqs. 3, 4, and 5, where $\vec{A}$ and $\vec{C}$ give direction to the activities of the wolves. Thanks to this, the wolves do not always go in the same directions. The effect of $\vec{a}$ is on the range of motion, which directs the algorithm to find the solution. For the hunting mechanism, Ex-GWO is proposed based on Eqs. 13, 14, and 15:

$\vec{D}_1 = |\vec{C}_1 \cdot \vec{X}_1 - \vec{X}|, \quad \vec{D}_2 = |\vec{C}_2 \cdot \vec{X}_2 - \vec{X}|, \quad \vec{D}_3 = |\vec{C}_3 \cdot \vec{X}_3 - \vec{X}| \quad (13)$

and

$\vec{X}_1 = \vec{X}_1 - \vec{A}_1 \cdot \vec{D}_1, \quad \vec{X}_2 = \vec{X}_2 - \vec{A}_2 \cdot \vec{D}_2, \quad \vec{X}_3 = \vec{X}_3 - \vec{A}_3 \cdot \vec{D}_3. \quad (14)$

Then

$\vec{X}_n(t+1) = \frac{1}{n-1}\sum_{i=1}^{n-1} X_i(t); \quad n = 4, 5, \ldots, m, \quad (15)$

where n is the currently selected wolf, m is the number of wolves in the pack, t is the iteration, and the parameter i starts from the first wolf and runs over all the wolves that have been selected and updated before the nth one. Finally, wolf n updates its position from the positions of the n − 1 previous wolves in the pack. We believe that with this technique, the wolves follow a rule in updating their own positions. Ex-GWO is explained step by step in the flowchart shown in Fig. 5, and the pseudocode of Ex-GWO is given in Fig. 6. In the next section, the proposed algorithm is tested on 33 benchmark functions and the results are described.

Fig. 4 The mechanism of position update for each wolf to catch prey in Ex-GWO. a Position updating for the fourth wolf and b for the fifth wolf

3.2 Incremental GWO (I-GWO)

This method is inspired by the classical GWO and EGWO algorithms. In this algorithm, it is considered that the alpha wolf has the best knowledge about the prey position, and the other wolves in the pack must follow the alpha wolf in order. In this method, each wolf updates its position based on all the wolves selected before it; in other words, the positions of n − 1 wolves are considered in the position update of the nth wolf (Fig. 7). There is the possibility of finding problem solutions (prey) much faster, in fewer iterations. However, the wolves may not always guarantee finding a good solution, because they act dependently on each other.


Fig. 5 Flowchart of the proposed Ex-GWO algorithm

Initialize the search agent (grey wolf) population X_i (i = 1, 2, ..., m)
Initialize a, A and C
Calculate the fitness of each search agent
X_1 = the best (or dominating) search agent
X_2 = the second best search agent
X_3 = the third best search agent
while (t < maximum number of iterations)
    for each search agent
        update the position of the current search agent by (15)
    end for
    update a by (5)
    update A and C by (3, 4)
    calculate the fitness of all search agents
    update X_1, X_2 and X_3
    insert X_i into the best positions table
    t = t + 1
end while
return X_1

Fig. 6 Pseudocode of Ex-GWO algorithm
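Read alongside Fig. 6, the Ex-GWO loop can be turned into a runnable prototype. The sketch below is a minimal NumPy transcription written for illustration (the paper's experiments were run in MATLAB); the fitness function f to be minimized and the bounds lb and ub are assumed inputs corresponding to the Range column of Table 1.

import numpy as np

def ex_gwo(f, dim, lb, ub, m=30, T=500):
    X = lb + (ub - lb) * np.random.rand(m, dim)    # random initial pack
    for t in range(T):
        X = X[np.argsort([f(x) for x in X])]       # X[0..2] = alpha, beta, delta
        a = 2 * (1 - t / T)                        # Eq. 5
        X_new = X.copy()                           # the three leaders keep their positions
        for n in range(3, m):                      # the 4th, 5th, ..., mth wolf
            cand = []
            for i in range(3):                     # Eqs. 13-14 w.r.t. the three leaders
                r1, r2 = np.random.rand(dim), np.random.rand(dim)
                A, C = 2 * a * r1 - a, 2 * r2      # Eqs. 3-4
                D = np.abs(C * X[i] - X[n])        # Eq. 13
                cand.append(X[i] - A * D)          # Eq. 14
            cand.extend(X_new[3:n])                # wolves already updated this iteration
            X_new[n] = np.clip(np.mean(cand, axis=0), lb, ub)   # Eq. 15
        X = X_new
    return X[np.argmin([f(x) for x in X])]         # best (alpha) position found

For the fourth wolf the mean runs over the three leader candidates only, so Eq. 15 reduces to the GWO update of Eq. 8; from the fifth wolf onward, each newly updated wolf immediately contributes to the update of the next one.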
alpha will always be in the best position. Indeed, if the alpha
is close to the prey, the algorithm can get a solution quickly,
because they act dependent on each other. Therefore, the however, if the opposite is true, it can still find a good solu-
speed of growth and the selection of the right places for tion, but it needs to increase the number of iterations. This
the first wolf are of great importance. In this algorithm, algorithm, which is applied mentioned 33 functions, was the
the second wolf in the pack updates own position by alpha first rank in finding the best solutions with 52% success rate
wolf. Third wolf also updates its position based on alpha (Table 6). However, its obtained results for some functions


However, its results for some functions are the last rank in comparison with the eight other algorithms. This is also considered normal, due to its structure. Detailed evaluations and simulation results are described in the next section.

Fig. 7 The mechanism of position update for each wolf to catch prey in I-GWO

The proposed algorithm uses some control parameters, as presented in Eqs. 3, 4, and 16:

$\vec{a} = 2\left(1 - \frac{t^j}{T^j}\right), \quad (16)$

where $\vec{A}$ and $\vec{C}$ give direction to the activities of the wolves; thanks to this, the wolves do not always go in the same directions. The effect of $\vec{a}$ is on the range of motion, which directs the algorithm to find the solution [67]. The variable j is defined to help increase the number of iterations assigned to exploration. For the hunting mechanism, I-GWO proposes the following Eqs. 17, 18, and 19:

$\vec{D}_\alpha = |\vec{C}_\alpha \cdot \vec{X}_\alpha - \vec{X}| \quad (17)$

and

$\vec{X}_\alpha = \vec{X}_\alpha - \vec{A}_\alpha \cdot \vec{D}_\alpha. \quad (18)$

Then

$\vec{X}_n(t+1) = \frac{1}{n-1}\sum_{i=1}^{n-1} X_i(t); \quad n = 2, 3, \ldots, m, \quad (19)$

where n is the currently selected wolf, m is the number of wolves in the pack, t is the iteration, and the parameter i starts from the first wolf and runs over all the wolves selected and updated before the nth one. I-GWO is explained step by step in the flowchart shown in Fig. 8, and the pseudocode of I-GWO is given in Fig. 9. In the next section, the proposed algorithm is tested on the 33 benchmark functions and the results are described.
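A compact way to see how I-GWO differs is to prototype its update as well. The following NumPy sketch is one illustrative reading of Eqs. 16-19 (again not the authors' MATLAB code): only the alpha wolf is moved by Eqs. 17-18, and every other wolf becomes the mean of the wolves ranked before it at iteration t, as Eq. 19 states.

import numpy as np

def i_gwo(f, dim, lb, ub, m=30, T=500, j=2):
    X = lb + (ub - lb) * np.random.rand(m, dim)    # random initial pack
    for t in range(T):
        X = X[np.argsort([f(x) for x in X])]       # X[0] = alpha, the pack leader
        a = 2 * (1 - t**j / T**j)                  # Eq. 16: larger j keeps exploration longer
        r1, r2 = np.random.rand(dim), np.random.rand(dim)
        A, C = 2 * a * r1 - a, 2 * r2              # Eqs. 3-4
        D = np.abs(C * X[0] - X[0])                # Eq. 17
        X_new = np.empty_like(X)
        X_new[0] = np.clip(X[0] - A * D, lb, ub)   # Eq. 18: only the alpha moves directly
        for n in range(1, m):                      # Eq. 19: the nth wolf is the mean of
            X_new[n] = np.mean(X[:n], axis=0)      # the n-1 wolves ranked before it
        X = X_new
    return X[np.argmin([f(x) for x in X])]

The greediness discussed above is visible here: every wolf's new position is a function of the alpha's trajectory, so a misled alpha misleads the whole pack until later iterations correct it.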


Fig. 8 Flowchart of the proposed I-GWO algorithm

Initialize the search agent (grey wolf) population X_i (i = 1, 2, ..., m)
Initialize a, A and C
Calculate the fitness of each search agent
X_α = the best (or dominating) search agent
while (t < maximum number of iterations)
    for each search agent
        update the position of the current search agent by (19)
    end for
    update a by (16)
    update A and C by (3, 4)
    calculate the fitness of all search agents
    update X_i
    insert X_i into the best positions table
    t = t + 1
end while
return X_1

Fig. 9 Pseudocode of I-GWO algorithm

4 Results and discussion

In this section, the I-GWO and Ex-GWO algorithms are evaluated on 33 benchmark functions and the results are compared with well-known optimization algorithms from the literature. The proposed algorithms are simulated in MATLAB. The details of all benchmark functions tested in this paper are presented in Table 1. These functions were chosen from CEC 2014 [70, 71]. New optimization algorithms, or improved versions of optimization algorithms, must be tested on all types of benchmark functions. Benchmark functions are divided into four groups: unimodal, multimodal, fixed-dimension multimodal, and composite functions. Unimodal functions have one global optimum and no local optima; multimodal functions have more than one local optimum. For each function, there are features such as Dim, Range, and Optima: Dim specifies the dimension of the benchmark function, Range is the boundary of the function's search space between a lower and an upper bound, and Optima indicates the global optimum of the benchmark function. As previously said, the two proposed algorithms can be used to find optimized solutions in different applications and problems. I-GWO can be used in application areas where global results must be obtained quickly with fewer iterations, for example, in learning-based systems. On the other hand, Ex-GWO can be used for routing- and localization-based problems because of its balanced behavior and its ability to encircle the target thoroughly.


Table 1 Benchmark functions used in the current study. Each entry lists the type, dimension (Dim), search range, and global optimum (Optima).

F1 Sphere (unimodal; Dim 30; Range [−100, 100]; Optima 0): $f_1(x)=\sum_{i=1}^{n} x_i^2$
F2 Schwefel 2.22 (unimodal; Dim 30; Range [−10, 10]; Optima 0): $f_2(x)=\sum_{i=1}^{n}|x_i|+\prod_{i=1}^{n}|x_i|$
F3 Schwefel 1.2 (unimodal; Dim 30; Range [−100, 100]; Optima 0): $f_3(x)=\sum_{i=1}^{n}\bigl(\sum_{j=1}^{i}x_j\bigr)^2$
F4 Schwefel 2.21 (unimodal; Dim 30; Range [−100, 100]; Optima 0): $f_4(x)=\max_i\{|x_i|,\ 1\le i\le n\}$
F5 Generalized Rosenbrock (unimodal; Dim 30; Range [−30, 30]; Optima 0): $f_5(x)=\sum_{i=1}^{n-1}\bigl[100(x_{i+1}-x_i^2)^2+(x_i-1)^2\bigr]$
F6 Step (unimodal; Dim 30; Range [−100, 100]; Optima 0): $f_6(x)=\sum_{i=1}^{n}(\lfloor x_i+0.5\rfloor)^2$
F7 Quartic with noise (unimodal; Dim 30; Range [−1.28, 1.28]; Optima 0): $f_7(x)=\sum_{i=1}^{n} i\,x_i^4+\mathrm{random}[0,1)$
F8 Generalized Schwefel (multimodal; Dim 30; Range [−500, 500]; Optima −418.9829 × 5): $f_8(x)=\sum_{i=1}^{n}-x_i\sin(\sqrt{|x_i|})$
F9 Rastrigin (multimodal; Dim 30; Range [−5.12, 5.12]; Optima 0): $f_9(x)=\sum_{i=1}^{n}\bigl[x_i^2-10\cos(2\pi x_i)+10\bigr]$
F10 Ackley (multimodal; Dim 30; Range [−32, 32]; Optima 0): $f_{10}(x)=-20\exp\bigl(-0.2\sqrt{\frac{1}{n}\sum_{i=1}^{n}x_i^2}\bigr)-\exp\bigl(\frac{1}{n}\sum_{i=1}^{n}\cos(2\pi x_i)\bigr)+20+e$
F11 Griewank (multimodal; Dim 30; Range [−600, 600]; Optima 0): $f_{11}(x)=\frac{1}{4000}\sum_{i=1}^{n}x_i^2-\prod_{i=1}^{n}\cos\bigl(\frac{x_i}{\sqrt{i}}\bigr)+1$
F12 Generalized Penalized 1 (multimodal; Dim 30; Range [−50, 50]; Optima 0): $f_{12}(x)=\frac{\pi}{n}\Bigl\{10\sin^2(\pi y_1)+\sum_{i=1}^{n-1}(y_i-1)^2\bigl[1+10\sin^2(\pi y_{i+1})\bigr]+(y_n-1)^2\Bigr\}+\sum_{i=1}^{n}u(x_i,10,100,4)$, where $y_i=1+\frac{x_i+1}{4}$ and $u(x_i,a,k,m)=k(x_i-a)^m$ if $x_i>a$; $0$ if $-a<x_i<a$; $k(-x_i-a)^m$ if $x_i<-a$
F13 Generalized Penalized 2 (multimodal; Dim 30; Range [−50, 50]; Optima 0): $f_{13}(x)=0.1\Bigl\{\sin^2(3\pi x_1)+\sum_{i=1}^{n}(x_i-1)^2\bigl[1+\sin^2(3\pi x_i+1)\bigr]+(x_n-1)^2\bigl[1+\sin^2(2\pi x_n)\bigr]\Bigr\}+\sum_{i=1}^{n}u(x_i,5,100,4)$
F14 Shekel's Foxholes (fixed-dimension; Dim 2; Range [−65, 65]; Optima 1): $f_{14}(x)=\Bigl(\frac{1}{500}+\sum_{j=1}^{25}\frac{1}{j+\sum_{i=1}^{2}(x_i-a_{ij})^6}\Bigr)^{-1}$
F15 Kowalik (fixed-dimension; Dim 4; Range [−5, 5]; Optima 0.00030): $f_{15}(x)=\sum_{i=1}^{11}\Bigl[a_i-\frac{x_1(b_i^2+b_i x_2)}{b_i^2+b_i x_3+x_4}\Bigr]^2$
F16 Six-Hump Camel-Back (fixed-dimension; Dim 2; Range [−5, 5]; Optima −1.0316): $f_{16}(x)=4x_1^2-2.1x_1^4+\frac{1}{3}x_1^6+x_1x_2-4x_2^2+4x_2^4$
F17 Branin (fixed-dimension; Dim 2; Range [−5, 5]; Optima 0.398): $f_{17}(x)=\bigl(x_2-\frac{5.1}{4\pi^2}x_1^2+\frac{5}{\pi}x_1-6\bigr)^2+10\bigl(1-\frac{1}{8\pi}\bigr)\cos x_1+10$
F18 Goldstein–Price (fixed-dimension; Dim 2; Range [−2, 2]; Optima 3): $f_{18}(x)=\bigl[1+(x_1+x_2+1)^2(19-14x_1+3x_1^2-14x_2+6x_1x_2+3x_2^2)\bigr]\times\bigl[30+(2x_1-3x_2)^2(18-32x_1+12x_1^2+48x_2-36x_1x_2+27x_2^2)\bigr]$
F19 Hartman's Family (fixed-dimension; Dim 3; Range [1, 3]; Optima −3.86): $f_{19}(x)=-\sum_{i=1}^{4}c_i\exp\bigl(-\sum_{j=1}^{3}a_{ij}(x_j-p_{ij})^2\bigr)$
F20 Hartman's Family (fixed-dimension; Dim 6; Range [0, 1]; Optima −3.32): $f_{20}(x)=-\sum_{i=1}^{4}c_i\exp\bigl(-\sum_{j=1}^{6}a_{ij}(x_j-p_{ij})^2\bigr)$
F21 Shekel-5 (fixed-dimension; Dim 4; Range [0, 10]; Optima −10.1532): $f_{21}(x)=-\sum_{i=1}^{5}\bigl[(X-a_i)(X-a_i)^T+c_i\bigr]^{-1}$
F22 Shekel-7 (fixed-dimension; Dim 4; Range [0, 10]; Optima −10.4028): $f_{22}(x)=-\sum_{i=1}^{7}\bigl[(X-a_i)(X-a_i)^T+c_i\bigr]^{-1}$
F23 Shekel-10 (fixed-dimension; Dim 4; Range [0, 10]; Optima −10.5363): $f_{23}(x)=-\sum_{i=1}^{10}\bigl[(X-a_i)(X-a_i)^T+c_i\bigr]^{-1}$
F24 Alpine (multimodal; Dim 30; Range [−10, 10]; Optima 0): $f_{24}(x)=\sum_{i=1}^{n}|x_i\sin(x_i)+0.1x_i|$
F25 Beale (multimodal; Dim 2; Range [−4.5, 4.5]; Optima 0): $f_{25}(x)=(1.5-x_1+x_1x_2)^2+(2.25-x_1+x_1x_2^2)^2+(2.625-x_1+x_1x_2^3)^2$
F26 Cigar (multimodal; Dim 30; Range [−10, 10]; Optima 0): $f_{26}(x)=x_1^2+\sum_{i=2}^{n}x_i^2$
F27 Matyas (unimodal; Dim 2; Range [−10, 10]; Optima 0): $f_{27}(x)=0.26(x_1^2+x_2^2)-0.48x_1x_2$
F28 Michalewicz (multimodal; Dim 30; Range [0, π]; Optima −0.966n): $f_{28}(x)=-\sum_{i=1}^{n}\sin(x_i)\sin^{2m}\bigl(\frac{i x_i^2}{\pi}\bigr),\ m=10$
F29 Booth (unimodal; Dim 2; Range [−10, 10]; Optima 0): $f_{29}(x)=(x_1+2x_2-7)^2+(2x_1+x_2-5)^2$
F30 Easom (unimodal; Dim 2; Range [−100, 100]; Optima −1): $f_{30}(x)=-\cos(x_1)\cos(x_2)\exp\bigl(-(x_1-\pi)^2-(x_2-\pi)^2\bigr)$
F31 Sum Squares (unimodal; Dim 30; Range [−10, 10]; Optima 0): $f_{31}(x)=\sum_{i=1}^{n} i\,x_i^2$
F32 Leon (unimodal; Dim 30; Range [−1.2, 1.2]; Optima 0): $f_{32}(x)=100(x_{i+1}-x_i^3)^2+(x_i-1)^2$
F33 Zettl (unimodal; Dim 30; Range [−5, 5]; Optima −0.00379): $f_{33}(x)=(x_i^2+x_{i+1}^2-2x_i)^2+0.25x_i$
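To make the table concrete, a few representative entries translate directly into code. The sketch below implements F1 (Sphere), F9 (Rastrigin), and F10 (Ackley) as they would be evaluated in the 30-dimensional experiments; it is an illustration of the definitions above, not code distributed with the paper.

import numpy as np

def sphere(x):                    # F1: unimodal, global optimum 0 at x = 0
    return np.sum(x**2)

def rastrigin(x):                 # F9: many local optima, global optimum 0 at x = 0
    return np.sum(x**2 - 10 * np.cos(2 * np.pi * x) + 10)

def ackley(x):                    # F10: nearly flat outer region, one deep global minimum
    n = x.size
    return (-20 * np.exp(-0.2 * np.sqrt(np.sum(x**2) / n))
            - np.exp(np.sum(np.cos(2 * np.pi * x)) / n) + 20 + np.e)

x_star = np.zeros(30)             # the known global optimum of all three functions
print(sphere(x_star), rastrigin(x_star), ackley(x_star))   # -> 0.0, 0.0, ~4.4e-16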

4.1 Experimental parameters

To evaluate the performance, the I-GWO and Ex-GWO algorithms are compared with GWO [3], PSO [38], ALO [5], GSA [22], WOA [72], EGWO [69], and MGWO [67]. All of these are well-known population-based optimization algorithms. They are meta-heuristic algorithms over random search spaces, and each algorithm must be run at least ten times to obtain a meaningful result [67]. In most comparisons of meta-heuristic algorithms, authors present the average and standard deviation as reliability criteria. We employed all algorithms under similar conditions in 30 independent runs, with 30 search agents considered for each algorithm; in addition, 500 iterations are fixed for each of the algorithms. The results of the simulations of the algorithms on the benchmark functions are presented in Tables 2, 3, 4, and 5. In addition, Figs. 10, 11, 12, and 13 demonstrate convergence curves for the performance of each of the nine algorithms on each benchmark function.
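This protocol is easy to reproduce around any of the sketches given earlier. The lines below are illustrative only (the statistics in Tables 2-5 come from the authors' MATLAB runs) and reuse the hypothetical i_gwo and sphere functions sketched above:

import numpy as np

# 30 independent runs, 30 search agents, 500 iterations: the settings of this paper
runs = [sphere(i_gwo(sphere, dim=30, lb=-100.0, ub=100.0, m=30, T=500))
        for _ in range(30)]
print("Ave =", np.mean(runs), "Std =", np.std(runs))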
4.2 Exploration and exploitation analysis

The exploration phase visits the different areas of the space of interest and determines productive search areas in optimization algorithms. Exploitation provides the ability to concentrate the search agents in the optimal range to find an optimal solution. According to the simulation results, the Ex-GWO algorithm performs best in 13 of the 33 benchmark functions. This shows that the success rate of this algorithm in finding the best solutions is 39% (Table 6); therefore, these results show the superior performance of Ex-GWO in terms of exploiting the optimum. This is due to the proposed exploitation operators previously introduced in Eqs. 6, 7, and 8. At the same time, according to the results of the comparison tables, I-GWO found the best solutions in 17 functions, a success rate of 52% (Table 6). The results show that both algorithms are good at exploration and exploitation.


Table 2  Comparison result of simulations in nine optimization algorithms for F1–F8


F Ex-GWO I-GWO EGWO MGWO GWO
Ave Std Ave Std Ave Std Ave Std Ave Std

F1 1.45E−31 6.17E−31 0.00E+00 0.00E+00 6.63E−31 2.16E−30 1.96E−36 3.04E−36 6.96E−26 3.09E−25
F2 2.87E+01 6.74E−04 0.00E+00 0.00E+00 2.08E−19 2.08E−19 1.02E−21 1.07E−21 9.80E−17 8.90E−17
F3 1.33E−05 4.24E−06 0.00E+00 0.00E+00 9.25E−04 9.25E−04 2.53E−06 8.53E−06 1.04E−09 2.67E−09
F4 1.67E−04 1.58E−04 0.00E+00 0.00E+00 2.33E−01 2.33E−01 1.23E−09 1.01E−09 7.45E−07 2.06E−06
F5 2.87E+01 6.36E−04 2.90E+01 1.58E−02 2.80E+01 2.80E+01 2.69E+01 5.75E−01 2.70E+01 9.16E−01
F6 0.00E+00 0.00E+00 6.75E+00 3.27E−01 3.33E+00 3.33E+00 6.47E−01 3.62E−01 7.14E−01 3.52E−01
F7 2.19E−15 1.74E−15 7.50E−05 6.74E−05 7.01E−03 7.01E−03 1.21E−03 7.90E−04 3.06E−03 3.99E−03
F8 0.00E+00 0.00E+00 − 2.32E+03 4.41E+02 − 6.62E+03 − 6.62E+03 − 5.60E+03 1.21E+03 − 6.03E+03 8.55E+02
F PSO ALO GSA WOA
Ave Std Ave Std Ave Std Ave Std

F1 6.97E−03 6.75E−03 8.07E−28 1.05E−27 2.62E−16 1.24577E−16 3.12E−72 1.16E−71


F2 4.08E−02 1.34E−01 9.45E−17 7.09E−17 9.45E−02 2.05E−01 1.49E−41 4.56E−41
F3 9.02E+02 3.85E+01 1.52E−05 2.59E−05 8.88E+02 2.71E+02 4.19E+04 1.20E+04
F4 1.83E+00 1.54E+00 8.12E−07 6.07E−07 7.04E+00 2.60E+00 5.49E+01 2.85E+01
F5 9.89E+01 7.20E+01 2.72E+01 8.00E−01 8.92E+01 1.03E+02 2.78E+01 4.19E−01
F6 8.07E−03 1.28E−02 7.90E−01 4.63E−01 1.96251E−16 8.89108E−17 3.13E−01 1.74E−01
F7 4.48E−02 1.29E−02 1.74E−03 6.62E−04 8.02E−02 3.11E−02 2.63E−03 4.41E−03
F8 − 8.39E+03 6.04E+02 − 5.92E+03 1.00E+03 − 2.55E+03 4.19E+02 − 1.03E+04 1.78E+03

The best average values of algorithms are written in bold

Table 3  Comparison result of simulations in nine optimization algorithms for F9–F16


F Ex-GWO I-GWO EGWO MGWO GWO
Ave Std Ave Std Ave Std Ave Std Ave Std

F9 4.98E−07 2.00E−07 0.00E+00 0.00E+00 1.61E+02 1.61E+02 5.55E−01 1.80E+00 1.70E+00 2.85E+00
F10 8.45E−06 4.00E−06 8.88E−16 0.00E+00 1.91E−01 1.91E−01 2.23E−14 4.32E−15 1.01E−13 1.46E−14
F11 4.62E+00 4.31E+00 0.00E+00 0.00E+00 1.37E−02 1.37E−02 4.58E−04 2.51E−03 1.87E−03 4.97E−03
F12 3.03E−03 6.91E−03 1.14E+00 1.94E−01 3.91E+00 3.91E+00 4.44E−02 2.87E−02 3.89E−02 1.79E−02
F13 − 1.03E+00 1.92E−08 3.00E+00 1.70E−03 2.61E+00 2.61E+00 4.54E−01 1.95E−01 6.35E−01 1.85E−01
F14 3.38E+00 3.65E+00 5.30E+00 4.86E+00 7.24E+00 7.24E+00 3.03E+00 3.22E+00 4.13E+00 4.14E+00
F15 6.40E−03 9.30E−03 4.26E−03 1.16E−02 6.30E−03 6.30E−03 2.40E−03 6.09E−03 5.10E−03 8.57E−03
F16 − 1.03E+00 1.72E−08 − 1.03E+00 7.90E−07 − 1.03E+00 − 1.03E+00 − 1.03E+00 6.98E−08 − 1.03E+00 2.35E−08
F PSO ALO GSA WOA
Ave Std Ave Std Ave Std Ave Std

F9 5.66E+01 1.41E+01 2.66E+00 3.32E+00 2.91E+01 6.57E+00 0.00E+00 0.00E+00


F10 6.67E−01 7.18E−01 1.00E−13 1.65E−14 7.57E−02 2.93E−01 3.49E−15 2.63E−15
F11 3.89E−02 2.83E−02 4.28E−03 7.98E−03 2.88E+01 7.02E+00 2.47E−02 7.39E−02
F12 8.68E−02 1.35E−01 5.33E−02 2.58E−02 1.71E+00 9.28E−01 1.87E−02 9.74E−03
F13 1.74E−01 2.56E−01 6.83E−01 2.34E−01 7.84E+00 5.80E+00 4.59E−01 2.09E−01
F14 3.98E+00 2.68E+00 5.05E+00 4.21E+00 6.00E+00 3.74E+00 2.73E+00 3.03E+00
F15 1.39E−03 3.61E−03 3.06E−03 6.90E−03 4.40E−03 2.17E−03 8.02E−04 5.66E−04
F16 − 1.03E+00 6.65E−16 − 1.03E+00 2.59E−08 − 1.03E+00 4.88E−16 − 1.03E+00 1.12E−09

The best average values of algorithms are written in bold


Table 4  Comparison result of simulations in nine optimization algorithms for F17–F24


F Ex-GWO I-GWO EGWO MGWO GWO
Ave Std Ave Std Ave Std Ave Std Ave Std

F17 3.98E−01 2.01E−06 3.98E−01 1.69E−04 3.98E−01 3.98E−01 3.98E−01 1.83E−04 3.98E−01 1.58E−06
F18 3.00E+00 4.32E−05 3.00E+00 1.80E−05 3.00E+00 3.00E+00 5.70E+00 1.48E+01 3.00E+00 4.66E−05
F19 − 3.86E+00 2.34E−03 − 3.86E+00 3.35E−03 − 3.86E+00 − 3.86E+00 − 3.86E+00 1.96E−03 − 3.86E+00 2.21E−03
F20 − 3.32E+00 4.07E−02 − 3.01E+00 1.53E−01 − 3.24E+00 − 3.24E+00 − 3.24E+00 7.60E−02 − 3.25E+00 8.35E−02
F21 − 9.81E+00 1.28E+00 − 8.71E+00 1.90E+00 − 6.72E+00 − 6.72E+00 − 9.06E+00 2.23E+00 − 9.73E+00 1.62E+00
F22 − 1.04E+01 9.70E−01 − 7.54E+00 2.93E+00 − 6.54E+00 − 6.54E+00 − 1.04E+01 4.85E−03 − 1.02E+01 9.70E−01
F23 − 1.05E+01 1.14E−03 − 8.61E+00 2.37E+00 − 7.45E+00 − 7.45E+00 − 1.03E+01 1.48E+00 − 1.05E+01 1.04E−03
F24 1.55E−36 6.28E−36 0.00E+00 0.00E+00 1.92E+01 1.92E+01 6.76E−05 1.92E−04 6.35E−04 7.00E−04
F PSO ALO GSA WOA
Ave Std Ave Std Ave Std Ave Std

F17 3.98E−01 0.00E+00 3.98E−01 1.40E−06 3.98E−01 0.00E+00 3.98E−01 1.67E−05


F18 3.00E+00 1.54E−15 5.70E+00 1.48E+01 3.00E+00 4.16E−15 3.00E+00 6.34E−05
F19 − 3.84E+00 1.41E−01 − 3.86E+00 2.50E−03 -3.86E+00 2.27E−15 − 3.85E+00 2.34E−02
F20 − 3.24E+00 7.33E−02 − 3.26E+00 7.60E−02 − 3.32E+00 2.28E−02 − 3.20E+00 2.06E−01
F21 − 6.57E+00 3.69E+00 − 8.97E+00 2.45E+00 − 6.68E+00 3.78E+00 − 7.78E+00 3.02E+00
F22 − 8.52E+00 3.22E+00 − 1.04E+01 1.25E−03 − 1.02E+01 1.19E+00 − 7.87E+00 3.57E+00
F23 -8.90E+00 1.91E+00 − 1.05E+01 1.11E−03 − 9.39E+00 2.64E+00 − 8.44E+00 3.27E+00
F24 1.53E−01 8.10E−01 6.05E−04 6.89E−04 2.13E−03 2.39E−03 3.83E−49 2.03E−48

The best average values of algorithms are written in bold

In the functions F8–F16, which have many local optima and exponentially increasing dimensions, the algorithms proposed in this paper found the best solutions in five functions in total. In other words, the proposed methods performed superior to the other compared algorithms on some unimodal and multimodal benchmark functions; the detailed results are presented in the remainder of this section. The exploration and exploitation phases are important in meta-heuristic algorithms, and the proposed methods indicate a better trade-off between these two phases. The I-GWO and Ex-GWO algorithms are more proficient regarding global optima. According to Figs. 10, 11, 12, and 13, sudden changes in the initial iterations indicate the exploration phase. For example, the Ex-GWO algorithm's performance on the Sphere function (F1) shows sudden changes in the convergence curve in the initial steps, until about a quarter of the iterations. These changes are necessary to determine the optimal search areas in the search space. In the exploitation phase, the search agents find the local optimum solution and run in a definite direction.

On the other hand, the I-GWO algorithm has a different convergence curve on the same benchmark function (F1). This algorithm runs the exploration phase very rapidly, as a result of its structure, because in I-GWO the leader wolf finds the prey earlier than the other wolves in the pack. In the exploration phase, I-GWO determines the optimal search area in the shortest time. Overall, in the initial iterations, the power of exploration must increase: in this phase, the search agents discover the search space, and then, in the exploitation phase, they try to get near to the optimal solution. This means that in the initial iterations the search algorithm covers a variety of the space, and in the last repetitions the searched areas are examined more closely.

I-GWO and Ex-GWO have better convergence velocity in comparison with the other meta-heuristic algorithms. I-GWO has good performance especially on the functions F1, F2, F3, F4, F9, F10, F11, F17, F18, F19, F24, F26, F28, F31, and F33. Likewise, the Ex-GWO algorithm improves the convergence speed on the F6, F7, F12, F16, F17, F18, F19, F20, F21, F22, F23, F30, and F33 benchmark functions. Both proposed methods try to avoid becoming trapped in local optima.

As shown in Table 2, these findings confirm that I-GWO provides the very best results in comparison to the others on F1, F2, F3, and F4. The global optima of the F1–F4 functions are zero, and I-GWO finds these global optima. I-GWO was unsuccessful on F5, F6, F7, and F8. The results of Ex-GWO indicate that this algorithm has better performance on F6 and F7, where it finds the best optima: on F6, Ex-GWO finds the global optimum; on F7, whose global optimum is zero, the Ex-GWO algorithm was unsuccessful in finding the global optimum but found the best optimal value in comparison with the other algorithms.

Figure 10 shows the convergence curves for the F1 to F8 functions. As shown in Fig. 10, I-GWO reaches the global optimum value on F1, F2, F3, and F4; moreover, Ex-GWO reaches the global optimum on F6. The I-GWO algorithm has good exploration performance in the initial iterations.

Table 5 Comparison result of simulations in nine optimization algorithms for F25–F33

F Ex-GWO I-GWO EGWO MGWO GWO
Ave Std Ave Std Ave Std Ave Std Ave Std

F25 2.54E−02 1.39E−01 9.79E−06 9.68E−06 1.52E−01 1.52E−01 7.62E−02 2.33E−01 1.02E−01 2.63E−01
F26 2.72E−63 1.49E−62 0.00E+00 0.00E+00 3.57E−32 3.57E−32 3.46E−38 7.30E−38 1.90E−29 2.78E−29
F27 2.33E−104 1.05E−103 3.10E−284 0.00E+00 3.09E−119 3.09E−119 3.78E−142 2.07E−141 5.70E−104 2.19E−103
F28 − 2.67E+01 1.67E+00 − 6.19E+00 7.68E−01 − 1.22E+01 − 1.22E+01 − 9.42E+00 9.55E−01 − 1.00E+01 1.88E+00
F29 2.75E−01 2.08E−01 6.36E+00 6.25E+00 9.87E−08 9.87E−08 2.16E−06 1.94E−06 5.33E−07 6.80E−07
F30 − 1.00E+00 8.14E−07 − 1.00E+00 5.19E−05 − 8.33E−01 − 8.33E−01 − 1.00E+00 2.86E−06 − 1.00E+00 8.46E−07
F31 2.15E−70 9.68E−70 0.00E+00 0.00E+00 3.56E−31 3.56E−31 3.28E−37 7.82E−37 2.11E−28 2.69E−28
F32 2.57E−05 7.45E−05 4.71E−01 2.93E−01 1.27E−06 1.27E−06 6.50E−06 7.67E−06 2.22E−06 3.02E−06
F33 − 3.79E−03 1.74E−05 − 3.79E−03 1.61E−03 − 3.79E−03 − 3.79E−03 − 3.79E−03 1.07E−10 − 3.79E−03 4.88E−10

F PSO ALO GSA WOA
Ave Std Ave Std Ave Std Ave Std

F25 5.08E−02 1.93E−01 5.08E−02 1.93E−01 6.03E−20 8.92E−20 1.78E−01 3.28E−01
F26 4.28E−05 4.42E−05 7.61E−30 1.46E−29 2.48E−16 1.20E−16 2.70E−74 1.09E−73
F27 1.59E−46 6.96E−46 9.05E−105 4.68E−104 6.12E−21 7.42E−21 2.39E−192 0.00E+00
F28 − 2.31E+01 2.38E+00 − 9.97E+00 1.69E+00 − 2.53E+01 1.61E+00 − 1.21E+01 2.38E+00
F29 0.00E+00 0.00E+00 4.32E−07 4.20E−07 0.00E+00 0.00E+00 1.47E−03 1.28E−03
F30 − 1.00E+00 0.00E+00 − 1.00E+00 8.56E−07 − 1.00E+00 0.00E+00 − 1.00E+00 1.33E−06
F31 1.33E+01 5.71E+01 7.20E−29 9.30E−29 2.73E−03 1.50E−02 2.94E−75 1.56E−74
F32 7.66E−11 3.53E−10 2.30E−06 3.15E−06 2.34E−02 2.20E−02 4.07E−05 5.56E−05
F33 − 3.79E−03 1.76E−18 − 3.79E−03 3.78E−10 − 3.79E−03 1.76E−18 − 3.79E−03 3.85E−09

The best average values of algorithms are written in bold



Fig. 10  Convergence graph of F1–F8 benchmark functions


Fig. 11  Convergence graph of F9–F16 benchmark functions


Fig. 12  Convergence graph of F17–F24 benchmark functions


Fig. 13  Convergence graph of F25–F33 benchmark functions


Table 6 Success rate of each algorithm over the selected benchmark functions

Algorithm: Ex-GWO I-GWO EGWO MGWO GWO PSO ALO GSA WOA
Success rate: 39% 52% 15% 21% 21% 24% 21% 27% 30%
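The rates in Table 6 are simply the fraction of the 33 benchmarks on which an algorithm attains the best average value, as a quick check confirms (counts taken from the comparison above):

wins = {"I-GWO": 17, "Ex-GWO": 13}
for name, w in wins.items():
    print(name, f"{w / 33:.0%}")   # -> I-GWO 52%, Ex-GWO 39%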

Since exploitation in the proposed hunting mechanism occurs in a balanced way, the proposed algorithms clearly have an advantage over GWO in the hunting mechanism. The proposed I-GWO and Ex-GWO algorithms have fast convergence rates. In the case of F8, WOA finds the best optimum result: in the initial iterations, WOA produces many large values, but after the 50th iteration, it finds values near the global optimum and keeps refining toward smaller values until the last iteration.

Table 3 presents the F9–F16 results for each of the nine meta-heuristic search algorithms. These results highlight that I-GWO achieved the best optimum of the benchmark functions F9, F10, and F11. The global optimum of F12 is zero, and the Ex-GWO algorithm finds the best value for it. In F1, every compared algorithm reached a value equal to the global optimum, which confirms their admissibility. I-GWO and WOA also reached the global optimum in F9. Figure 11 clearly highlights these results for each related function.

Figure 11 illustrates the convergence curve of each algorithm from F9 to F16. The proposed algorithms again have fast convergence curves compared to the other algorithms and, according to the figures, outperform them on F9–F12. The convergence curve of function F13 for the Ex-GWO algorithm shows the exploration of the search agents in the initial iterations; after this search, the agents exploit the best solution.

The statistical results of the algorithms on the F17–F24 benchmark functions are presented in Table 4. I-GWO achieved the global optima of F17, F18, F19, and F24, while Ex-GWO obtained the global and best optima on the F17–F23 benchmark functions. Figure 12 clearly highlights these results for each related function.

It can be seen in Fig. 12 that most algorithms have similar convergence curves on functions F17–F19; all of them find the global optimum from the initial iterations. Ex-GWO has the best optimum value on F20 in comparison with the other algorithms. On functions F21–F24, the exploration of I-GWO and Ex-GWO is clearly visible: exploration in the Ex-GWO algorithm occurs step by step, the search agents search broadly, and then the exploitation phase is carried out.

These findings validate the performance of I-GWO and Ex-GWO on the benchmark functions. Table 5 demonstrates that I-GWO succeeds in finding the best optima of F26, F28, F30, F31, and F33; for functions F26, F30, F31, and F33, it finds the global optima. Ex-GWO finds the best optima of benchmark functions F30 and F33, both of which are global optima. On functions F25–F33, most algorithms find the global optimum. Figure 13 clearly highlights these results for each related function.

As shown in Fig. 13, the convergence curves of the proposed algorithms are faster than those of the others. In the convergence curve for F26, it can be seen that the exploration of Ex-GWO in the initial iterations follows a clear mechanism. Figure 13 also shows that the convergence curves of the proposed algorithms outperform those of the other algorithms.

Table 6 lists the success rate of each algorithm tested on the benchmark functions in this paper. The two proposed methods, I-GWO and Ex-GWO, obtain success rates of 52% and 39%, respectively, outperforming the other algorithms. This rate is calculated over the 33 selected benchmark functions according to the global and best optima. We also observe from Table 6 that WOA and GSA perform well on some benchmark functions.

4.3 Local minima avoidance and convergence behavior analysis

The balance between exploration and exploitation prevents the search from getting trapped in local optima. Control parameters such as a, A, and C have a strong effect on this balance. Sudden changes in the movement of the search agents are necessary over the initial steps of optimization so that the agents explore the search space broadly; toward the end of optimization, these changes should be decreased to emphasize exploitation. Since the proposed Ex-GWO has a balanced behavior, the mechanism proposed in GWO is used to avoid local minima. However, this issue needs to be handled more carefully in the I-GWO algorithm. For this purpose, the formula of the 'a' parameter can be made dynamic, which both avoids local minima and increases the chance of finding better solutions over more iterations. The convergence curves of I-GWO and Ex-GWO show better performance on the benchmark functions. I-GWO performs better than Ex-GWO because, in the I-GWO method, the position of the first leader has more impact. In some cases, if the first leader follows a mistaken position of the hunt, the resulting solution will be wrong, and the other wolves in the pack will follow the wrong position. We avoid this mistake by decreasing the a⃗ parameter in the initial iterations so that an optimal position of the hunt is found. On the other hand, Ex-GWO has a larger execution time, but it guarantees encircling of the hunt: in the first iteration, it uses the experience of the first three wolves, and in such circumstances, encircling the hunt is done excellently. Ex-GWO therefore suffers a larger execution time than GWO.
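To make this concrete, the following sketch contrasts the standard linear decay of 'a' with a faster early decay. The linear schedule and the A and C coefficient vectors are the standard GWO definitions [3]; the a_dynamic form and its exponent k are only an illustrative reading of "decreasing the a parameter in the initial iterations", not the authors' exact formula.

    import numpy as np

    def a_linear(t, T, a0=2.0):
        # Standard GWO schedule [3]: 'a' decreases linearly from a0 to 0.
        return a0 * (1.0 - t / T)

    def a_dynamic(t, T, a0=2.0, k=2.0):
        # Hypothetical variant: for k > 1, 'a' drops faster in the initial
        # iterations, shrinking |A| early so the pack is pulled toward the
        # leader's current estimate of the hunt instead of scattering.
        return a0 * (1.0 - (t / T) ** (1.0 / k))

    def coefficients(a, dim, rng):
        # Standard GWO coefficient vectors [3]: |A| > 1 favors exploration,
        # |A| < 1 favors exploitation; C randomly re-weights the leader.
        r1, r2 = rng.random(dim), rng.random(dim)
        return 2.0 * a * r1 - a, 2.0 * r2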


Table 7  Comparison result of simulations for F1, F3, F9, and F10 in 10, 30, and 100 dimensions

Function  Dim  I-GWO      GWO       EGWO      MGWO
F1        10   0.00E+00   1.83E−22  4.45E−23  2.65E−28
F1        30   0.00E+00   6.96E−26  6.63E−31  1.96E−36
F1        100  0.00E+00   8.12E−32  3.86E−35  3.47E−41
F3        10   0.00E+00   1.85E−01  7.14E−02  3.80E−04
F3        30   0.00E+00   1.04E−09  9.25E−04  2.53E−06
F3        100  0.00E+00   1.68E−07  4.25E−08  4.71E−07
F9        10   0.00E+00   1.23E+00  1.25E+01  1.18E−00
F9        30   0.00E+00   1.70E+00  1.61E+02  5.55E−01
F9        100  0.00E+00   7.36E+04  7.21E+04  1.04E+01
F10       10   4.19E−177  4.08E−28  3.73E−00  4.60E−21
F10       30   8.88E−16   1.01E−13  1.91E−01  2.23E−14
F10       100  6.34E−11   3.24E−10  4.47E−01  5.36E−11

The best average values of algorithms are written in bold

Table 8  Comparison result of simulations for F6, F7, F11, and F12 in 10, 30, and 100 dimensions

Function  Dim  Ex-GWO    GWO       EGWO      MGWO
F6        10   0.00E+00  8.51E−17  2.14E+00  3.30E−07
F6        30   0.00E+00  7.14E−01  3.33E+00  6.47E−01
F6        100  0.00E+00  8.31E+01  9.01E+01  8.25E−00
F7        10   1.20E−78  4.36E−08  2.22E−05  2.52E−18
F7        30   2.19E−15  3.06E−03  7.01E−03  1.21E−03
F7        100  6.37E+03  3.08E+01  3.53E+01  6.64E−02
F11       10   2.08E+00  1.39E−11  2.71E−09  4.47E−05
F11       30   4.62E+00  1.87E−03  1.37E−02  4.58E−04
F11       100  2.89E+02  2.34E+02  5.85E+00  3.18E−01
F12       10   2.71E−25  3.05E+00  2.85E+00  3.79E−03
F12       30   3.03E−03  3.89E−02  3.91E+00  4.44E−02
F12       100  1.74E+01  2.93E+00  8.62E+01  3.86E−01

The best average values of algorithms are written in bold
4.4 Complexity and performance analysis

GWO executes faster than the I-GWO and Ex-GWO algorithms. However, both proposed algorithms are much more successful than GWO in catching the hunt, and they eliminate some of the weaknesses of the GWO method (as mentioned earlier, not being able to quickly circle around the prey and allowing the prey to escape in some cases).

The computational complexity of I-GWO and Ex-GWO is O(n²). The execution time of I-GWO is lower than that of Ex-GWO. In I-GWO, each wolf updates its own position based on all the wolves selected before it: in the first step there is one wolf, and if there are n wolves in a pack, the nth wolf updates its own position based on the positions of the preceding n − 1 wolves. In the Ex-GWO algorithm, on the other hand, the remaining wolves update their positions based on the previously updated wolves and the first three wolves in each iteration. In this case, the execution time of I-GWO is faster than that of the Ex-GWO algorithm.
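As a concrete illustration of this O(n²) behavior, the sketch below chains the standard GWO encircling step [3] through the pack in the order described above. This is one plausible reading of the mechanism rather than the authors' reference implementation; in particular, the first (seed) wolves are simply carried over here, whereas the proposed methods update them with the usual GWO equations.

    import numpy as np

    def encircle(leader, x, a, rng):
        # Standard GWO encircling move of wolf x toward one leader [3]:
        # D = |C * X_l - X|,  X' = X_l - A * D
        r1, r2 = rng.random(x.size), rng.random(x.size)
        A, C = 2.0 * a * r1 - a, 2.0 * r2
        return leader - A * np.abs(C * leader - x)

    def update_pack(pack, a, rng, mode="ex-gwo"):
        # `pack` is an (n, dim) array sorted by fitness (pack[0] = alpha).
        # Every later wolf averages encircling moves toward ALL previously
        # updated wolves, so one iteration performs on the order of
        # 1 + 2 + ... + (n - 1), i.e., O(n^2), encircling steps.
        seed = 1 if mode == "i-gwo" else 3   # I-GWO chains from the leader only
        new = [pack[i].copy() for i in range(seed)]
        for i in range(seed, len(pack)):
            moves = [encircle(g, pack[i], a, rng) for g in new]
            new.append(np.mean(moves, axis=0))
        return np.array(new)

A call such as update_pack(pack, a_dynamic(t, T), np.random.default_rng(0)) would perform one iteration; because each wolf waits for all the wolves before it, I-GWO and Ex-GWO trade extra per-iteration work for the tighter encirclement discussed above.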


4.5 Results, discussion, and analyses

The two proposed algorithms can be used to find optimized solutions in different applications and problems. I-GWO can be used in application areas where global results must be obtained quickly with fewer iterations, such as learning-based systems. On the other hand, Ex-GWO can be used for routing- and localization-based problems because of its balanced behavior and its ability to encircle the target thoroughly. Ex-GWO was the best in 13 functions and I-GWO in 17 functions; therefore, in the success rates of the 9 algorithms in finding the best solutions on the 33 benchmark functions, I-GWO ranks first and Ex-GWO ranks second. Thus, the simulation results and the speed of the convergence curves show that the proposed methods have better performance in optimization problems. As seen in the results of Tables 7 and 8, the proposed methods obtain better results than the other algorithms in different dimensions. For instance, for function F1, the average value of the I-GWO algorithm is 0.00E+00 when the dimension is 10, 30, and 100. In addition, I-GWO performs well on multimodal benchmark functions such as F9 and F10, and the Ex-GWO algorithm performs well on multimodal functions with small dimension sizes, such as F12.

Tables 7 and 8 present the statistical results of the proposed methods in different dimensions. The Ex-GWO algorithm performs well on low-dimensional benchmark functions, while the performance of I-GWO across different dimension sizes is remarkable. Due to its structure, Ex-GWO performs better in small dimensions: in the position-updating phase, the nth wolf updates its own position based on the positions of the n − 1 wolves updated before it, and finding the best solution becomes more difficult as the dimension increases. Nevertheless, I-GWO performs well for different sizes of the dimension. In I-GWO, the main functionality is tied to the leader wolf: once the leader wolf finds the best position, the rest of the wolves follow it, so the dimension size can be neglected in finding the best solution, and I-GWO only has to avoid falling into local optima. In addition, the proposed techniques find the optima of the fixed-dimension benchmark functions; Tables 3 and 4 show these results for both proposed methods, which find the optima of functions F16–F23.

The results of the statistical analyses are presented as boxplots in Fig. 14 for two benchmark functions, one unimodal and one multimodal. As shown in Fig. 14a, the F1 function was evaluated on GWO, I-GWO, Ex-GWO, EGWO, and MGWO; the plotted values were obtained over 30 runs of 500 iterations each. In addition, in Fig. 14b, the F12 function was evaluated on the same algorithms. The boxplots show the maximum and minimum values of each best score, as well as the median and the spread of the values. In Fig. 14a, I-GWO obtained the best score, and its exploitation and exploration phases are presented very well, as mentioned in the previous section; since the structure of the I-GWO algorithm is based on the first (leader) wolf, it runs the exploration phase faster than the other versions of the GWO algorithm. On the other hand, Ex-GWO has the best score among the compared algorithms on F12: as shown in Fig. 14b, the Ex-GWO algorithm starts from the maximum score value and, after the initial iterations, obtains scores near the optimum value in the exploitation phase.

Fig. 14  Boxplot graphs for the two benchmark functions
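The statistics behind these boxplots can be reproduced with a small driver such as the sketch below; the run_algorithm callable, its signature, and the algorithms mapping are hypothetical placeholders, not the authors' code.

    import matplotlib.pyplot as plt

    RUNS, ITERS = 30, 500   # protocol reported above: 30 runs of 500 iterations

    def collect_best_scores(run_algorithm, objective, runs=RUNS, iters=ITERS):
        # One best score per independent run; the boxplot then shows the
        # min/max, median, and spread of these 30 values per algorithm.
        return [run_algorithm(objective, iters=iters, seed=s) for s in range(runs)]

    # Hypothetical usage, assuming `algorithms` maps names to such callables:
    # scores = {name: collect_best_scores(algo, f1) for name, algo in algorithms.items()}
    # plt.boxplot(list(scores.values()), labels=list(scores.keys()))
    # plt.ylabel("Best score on F1 over 30 runs")
    # plt.show()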
in meta-heuristic algorithms. This results show that the
study, these values obtained after 30 times run and 500 itera- I-GWO and Ex-GWO have good performance in compar-
tions. In addition, in Fig. 14b, F12 function was evaluated on ison other seven well-known meta-heuristic algorithms.
the same algorithms. The boxplots show the maximum and Ex-GWO presented the best performance in 13 benchmark
minimum values of each best score, and also the median and functions according to compared with other algorithms
the frequency of the values. In this figure, the I-GWO has that 7 of which are global optima and 6 of them are the
obtained the best score, and the exploitation and explora- best solutions. I-GWO also got best answers in 17 func-
tion phase can be presented very well, as mentioned in the tions that 15 of them are global optima and only two of
previous section. The structure of the I-GWO algorithm is them are best solutions.
based on the first wolf or leader wolf. Therefore, it runs the In both algorithms, there is a high probability of finding
exploration phase in faster than other versions of the GWO good solutions for a variety of complex problems. In addi-
algorithm. On the other hand, the Ex-GWO has the best tion, it can find global solutions quickly in few iterations.
score in comparison of other supposed algorithms in F12. The convergence rate to the global solution in Ex-GWO
As shown in Fig. 14b, the Ex-GWO algorithm starts from method is lower than in the I-GWO method, but it has
the maximum value of the score. After initial iterations, the more balanced behavior and performance in many problem


As a result, the proposed two algorithms can be used to find optimized solutions in different applications and problems. I-GWO can be used in application areas where global results are obtained quickly with fewer iterations, for example, in learning-based systems. On the other hand, Ex-GWO can be used for routing- and localization-based problems because of its balanced behavior and its ability to encircle the target thoroughly. Therefore, the algorithms proposed in this paper have potential applicability in various areas and may be useful to investigators. The following are some of the works planned for the future:

• Use of these algorithms for node localization and path-finding in wireless sensor networks
• Introducing a hybrid mechanism with reinforcement learning and cellular automata methods in vehicular ad-hoc networks
• Use of these algorithms to define optimized fitness functions for calculating weight values in artificial neural networks and deep learning-based applications
• Use of these algorithms to find optimized feature extraction and filtering methods in bioinformatics, such as the analysis of DNA and RNA behaviors

Compliance with ethical standards

Conflict of interest  The authors declare that there are no conflicts of interest regarding the publication of this paper.

References

1. Winston PH (1992) Artificial intelligence, 3rd edn. Addison-Wesley, Boston
2. Yao X, Yong L (1997) Fast evolution strategies. In: International conference on evolutionary programming. Springer, Berlin
3. Mirjalili S, Mirjalili SM, Lewis A (2014) Grey wolf optimizer. Adv Eng Softw 69:46–61
4. Talbi EG (2009) Metaheuristics: from design to implementation, vol 74. Wiley, Hoboken
5. Mirjalili S (2015) The ant lion optimizer. Adv Eng Softw 83:80–98
6. Jamil M, Xin-She Y (2013) A literature survey of benchmark functions for global optimization problems. arXiv preprint arXiv:1308.4008
7. Holland JH (1992) Genetic algorithms. Sci Am 267(1):66–73
8. Chawla P, Chana I, Rana A (2015) A novel strategy for automatic test data generation using soft computing technique. Front Comput Sci 9(3):346–363
9. Gomes GF, de Almeida FA, Junqueira DM, da Cunha Jr SS, Ancelotti AC Jr (2019) Optimized damage identification in CFRP plates by reduced mode shapes and GA-ANN methods. Eng Struct 181:111–123
10. Kilinc M, Caicedo JM (2019) Finding plausible optimal solutions in engineering problems using an adaptive genetic algorithm. Adv Civ Eng 2019:1–9
11. Storn R, Price K (1997) Differential evolution—a simple and efficient heuristic for global optimization over continuous spaces. J Glob Optim 11(4):341–359
12. Sharma R, Vashisht V, Singh AV, Kumar S (2019) Analysis of existing clustering algorithms for wireless sensor networks. In: System performance and management analytics. Springer, Singapore, pp 259–277
13. Mann PS, Singh S (2019) Improved metaheuristic-based energy-efficient clustering protocol with optimal base station location in wireless sensor networks. Soft Comput 23(3):1021–1037
14. Sahu RK, Sekhar GC, Priyadarshani S (2019) Differential evolution algorithm tuned tilt integral derivative controller with filter controller for automatic generation control. Evol Intell. https://doi.org/10.1007/s12065-019-00215-8
15. Yao X, Liu Y, Lin G (1999) Evolutionary programming made faster. IEEE Trans Evol Comput 3(2):82–102
16. Fogel LJ, Owens AJ, Walsh MJ (1966) Artificial intelligence through simulated evolution. Wiley-IEEE Press, Hoboken
17. Zhang X, Luo J, Sun X, Xie J (2019) Optimal reservoir flood operation using a decomposition-based multi-objective evolutionary algorithm. Eng Optim 51(1):42–62
18. Simon D (2008) Biogeography-based optimization. IEEE Trans Evol Comput 12(6):702–713
19. Kasilingam F, Pasupuleti J, Bharatiraja C, Adedayo Y (2019) Power system stabilizer optimization using BBO algorithm for a better damping of rotor oscillations owing to small disturbances. FME Trans 47(1):166–176
20. Kumar M, Om H (2019) A hybrid bio-inspired algorithm for protein domain problems. In: Advances in nature-inspired computing and applications. Springer, Cham, pp 291–311
21. Bhattacharya A, Chattopadhyay PK (2010) Solving complex economic load dispatch problems using biogeography-based optimization. Expert Syst Appl 37(5):3605–3615
22. Rashedi E, Nezamabadi-Pour H, Saryazdi S (2009) GSA: a gravitational search algorithm. Inf Sci 179(13):2232–2248
23. Naserbegi A, Aghaie M, Minuchehr A, Alahyarizadeh G (2018) A novel exergy optimization of Bushehr nuclear power plant by gravitational search algorithm (GSA). Energy 148:373–385
24. Marzband M, Ghadimi M, Sumper A, Domínguez-García JL (2014) Experimental validation of a real-time energy management system using multi-period gravitational search algorithm for microgrids in islanded mode. Appl Energy 128:164–174
25. Chakraborti T, Sharma KD, Chatterjee A (2014) A novel local extrema based gravitational search algorithm and its application in face recognition using one training image per class. Eng Appl Artif Intell 34:13–22
26. Erol OK, Eksin I (2006) A new optimization method: big bang–big crunch. Adv Eng Softw 37(2):106–111
27. Sakthivel S, Pandiyan SA, Marikani S, Selvi SK (2013) Application of big bang big crunch algorithm for optimal power flow problems. Int J Eng Sci 2(4):41–47
28. Kaveh A, Talatahari S (2010) A novel heuristic optimization method: charged system search. Acta Mech 213(3–4):267–289
29. Özyön S, Temurtaş H, Durmuş B, Kuvat G (2012) Charged system search algorithm for emission constrained economic power dispatch problem. Energy 46(1):420–430
30. Lam AY, Li VO (2010) Chemical-reaction-inspired metaheuristic for optimization. IEEE Trans Evol Comput 14(3):381–399
31. Xu J, Lam AY, Li VO (2011) Chemical reaction optimization for task scheduling in grid computing. IEEE Trans Parallel Distrib Syst 22(10):1624–1631


32. Li Z, Li Y, Yuan T, Chen S, Jiang S (2019) Chemical reaction optimization for virtual machine placement in cloud computing. Appl Intell 49(1):220–232
33. Kabir R, Islam R (2019) Chemical reaction optimization for RNA structure prediction. Appl Intell 49(2):352–375
34. Formato RA (2007) Central force optimization: a new metaheuristic with applications in applied electromagnetics. Prog Electromagn Res 77:425–491
35. Haghighi A, Ramos HM (2012) Detection of leakage freshwater and friction factor calibration in drinking networks using central force optimization. Water Resour Manag 26(8):2347–2363
36. Hatamlou A (2013) Black hole: a new heuristic optimization approach for data clustering. Inf Sci 222:175–184
37. Hatamlou A (2018) Solving travelling salesman problem using black hole algorithm. Soft Comput 22(24):8167–8175
38. Kennedy J, Eberhart R (1995) Particle swarm optimization. In: Proceedings of ICNN'95 - international conference on neural networks, Australia, pp 1942–1948
39. Pattanayak S, Agarwal S, Choudhury BB, Sahoo SC (2019) Path planning of mobile robot using PSO algorithm. In: Information and communication technology for intelligent systems. Springer, Singapore, pp 515–522
40. Syahputra R, Robandi I, Ashari M (2015) Reconfiguration of distribution network with distributed energy resources integration using PSO algorithm. Telkomnika 13(3):759
41. Dorigo M, Birattari M (2010) Ant colony optimization. Springer, New York, pp 36–39
42. Okdem S, Karaboga D (2009) Routing in wireless sensor networks using an ant colony optimization (ACO) router chip. Sensors 9(2):909–921
43. Yi W, Kumar A (2007) Ant colony optimization for disaster relief operations. Transp Res Part E Logist Transp Rev 43(6):660–672
44. Tian J, Yu W, Xie S (2008) An ant colony optimization algorithm for image edge detection. In: 2008 IEEE congress on evolutionary computation (IEEE World Congress on Computational Intelligence). IEEE, pp 751–756
45. Karaboga D, Basturk B (2008) On the performance of artificial bee colony (ABC) algorithm. Appl Soft Comput 8(1):687–697
46. Gong D, Han Y, Sun J (2018) A novel hybrid multi-objective artificial bee colony algorithm for blocking lot-streaming flow shop scheduling problems. Knowl Based Syst 148:115–130
47. Singh A (2009) An artificial bee colony algorithm for the leaf-constrained minimum spanning tree problem. Appl Soft Comput 9(2):625–631
48. Yang XS (2010) A new metaheuristic bat-inspired algorithm. In: Nature inspired cooperative strategies for optimization (NICSO 2010). Springer, New York, pp 65–74
49. Osaba E, Yang XS, Diaz F, Lopez-Garcia P, Carballedo R (2016) An improved discrete bat algorithm for symmetric and asymmetric traveling salesman problems. Eng Appl Artif Intell 48:59–71
50. Sathya MR, Ansari MMT (2015) Load frequency control using Bat inspired algorithm based dual mode gain scheduling of PI controllers for interconnected power system. Int J Electr Power Energy Syst 64:365–374
51. Yang XS (2010) Firefly algorithm, stochastic test functions and design optimisation. Int J Bio-Inspir Comput 2:78–84
52. Banati H, Bajaj M (2011) Fire fly based feature selection approach. Int J Comput Sci Issues (IJCSI) 8(4):473
53. Talatahari S, Gandomi AH, Yun GJ (2014) Optimum design of tower structures using firefly algorithm. Struct Des Tall Spec Build 23(5):350–361
54. Tuba E, Tuba M, Beko M (2017) Mobile wireless sensor networks coverage maximization by firefly algorithm. In: 2017 27th international conference Radioelektronika (RADIOELEKTRONIKA). IEEE, pp 1–5
55. Yang XS, Deb S (2009) Cuckoo search via Lévy flights. In: 2009 World congress on nature and biologically inspired computing (NaBIC), pp 210–214
56. Mohamad A, Zain AM, Bazin NEN, Udin A (2013) Cuckoo search algorithm for optimization problems-a literature review. In: Applied mechanics and materials, vol 421. Trans Tech Publications, Zurich, pp 502–506
57. Rath A, Samantaray S, Swain PC (2019) Optimization of the cropping pattern using cuckoo search technique. In: Smart techniques for a smarter planet. Springer, Cham, pp 19–35
58. Arif MA, Mohamad MS, Latif MSA, Deris S, Remli MA, Daud KM, Corchado JM (2018) A hybrid of cuckoo search and minimization of metabolic adjustment to optimize metabolites production in genome-scale models. Comput Biol Med 102:112–119
59. Dhivya M, Sundarambal M (2011) Cuckoo search for data gathering in wireless sensor networks. Int J Mob Commun 9(6):642–656
60. Mucherino A, Seref O (2007) Monkey search: a novel metaheuristic search for global optimization. AIP Conf Proc 953(1):162–173
61. Zhou Y, Chen X, Zhou G (2016) An improved monkey algorithm for a 0–1 knapsack problem. Appl Soft Comput 38:817–830
62. Yi TH, Li HN, Zhang XD (2015) Health monitoring sensor placement optimization for Canton Tower using immune monkey algorithm. Struct Control Health Monit 22(1):123–138
63. Khairuzzaman AKM, Chaudhury S (2017) Multilevel thresholding using grey wolf optimizer for image segmentation. Expert Syst Appl 86:64–76
64. Li Q, Chen H, Huang H, Zhao X, Cai Z, Tong C, Tian X (2017) An enhanced grey wolf optimization based feature selection wrapped kernel extreme learning machine for medical diagnosis. Comput Math Methods Med 2017:1–15
65. Fahad M, Aadil F, Khan S, Shah PA, Muhammad K, Lloret J, Mehmood I (2018) Grey wolf optimization based clustering algorithm for vehicular ad-hoc networks. Comput Electr Eng 70:853–870
66. Mousavi S, Mosavi A, Varkonyi-Koczy AR (2017) A load balancing algorithm for resource allocation in cloud computing. In: International conference on global research and education. Springer, Cham, pp 289–296
67. Mittal N, Singh U, Sohi BS (2016) Modified grey wolf optimizer for global engineering optimization. Appl Comput Intell Soft Comput 8:1–16
68. Faris H, Aljarah I, Al-Betar MA, Mirjalili S (2018) Grey wolf optimizer: a review of recent variants and applications. Neural Comput Appl 30(2):413–435
69. Joshi H, Arora S (2017) Enhanced grey wolf optimization algorithm for global optimization. Fundam Inform 153(3):235–264
70. Liang JJ, Qu BY, Suganthan PN (2013) Problem definitions and evaluation criteria for the CEC 2014 special session and competition on single objective real-parameter numerical optimization. Computational Intelligence Laboratory, Zhengzhou University, Zhengzhou, China and Technical Report, Nanyang Technological University, Singapore, vol 635
71. Liang JJ, Qu BY, Suganthan PN, Chen Q (2014) Problem definitions and evaluation criteria for the CEC 2015 competition on learning-based real-parameter single objective optimization. Technical Report 201411A, Computational Intelligence Laboratory, Zhengzhou University, Zhengzhou, China and Technical Report, Nanyang Technological University, Singapore, vol 29, pp 625–640
72. Mirjalili S, Lewis A (2016) The whale optimization algorithm. Adv Eng Softw 95:51–67

Publisher's Note Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.
