
Applied Soft Computing 60 (2017) 115–134

Contents lists available at ScienceDirect

Applied Soft Computing


journal homepage: www.elsevier.com/locate/asoc

An efficient modified grey wolf optimizer with Lévy flight for optimization tasks

Ali Asghar Heidari, Parham Pahlavani ∗
School of Surveying and Geospatial Engineering, College of Engineering, University of Tehran, Tehran, Iran

Article history:
Received 31 August 2016
Received in revised form 14 May 2017
Accepted 20 June 2017
Available online 1 July 2017

Keywords:
Optimization
Lévy flight
Grey wolf optimizer
Metaheuristic

Abstract

The grey wolf optimizer (GWO) is a new efficient population-based optimizer. The GWO algorithm can reveal an efficient performance compared to other well-established optimizers. However, because of the insufficient diversity of wolves in some cases, a problem of concern is that the GWO can still be prone to stagnation at local optima. In this article, an improved modified GWO algorithm is proposed for solving either global or real-world optimization problems. In order to boost the efficacy of GWO, Lévy flight (LF) and greedy selection strategies are integrated with the modified hunting phases. LF is a class of scale-free walks with randomly-oriented steps according to the Lévy distribution. In order to investigate the effectiveness of the modified Lévy-embedded GWO (LGWO), it was compared with several state-of-the-art optimizers on 29 unconstrained test beds. Furthermore, 30 artificial and 14 real-world problems from CEC2014 and CEC2011 were employed to evaluate the LGWO algorithm. Also, statistical tests were employed to investigate the significance of the results. Experimental results and statistical tests demonstrate that the performance of LGWO is significantly better than GWO and other analyzed optimizers.

© 2017 Elsevier B.V. All rights reserved.

∗ Corresponding author. E-mail addresses: as heidari@ut.ac.ir (A.A. Heidari), pahlavani@ut.ac.ir, pahlavani.parham@gmail.com (P. Pahlavani).
http://dx.doi.org/10.1016/j.asoc.2017.06.044
1568-4946/© 2017 Elsevier B.V. All rights reserved.

1. Introduction

Over recent years, various metaheuristic algorithms (MAs) have been established based on diverse nature-inspired phenomena and philosophies [1]. These stochastic optimizers have been independently applied in different fields of science, which may be due to their differences in the source of inspiration, exploration and exploitation mechanisms, convergence characteristics, and the optimality of the results [2–7]. Some of the successful, popular MAs are differential evolution (DE) [8], particle swarm optimization (PSO) [9], the gravitational search algorithm (GSA) [10], the bat algorithm (BA) [11], the firefly algorithm (FA) [12], and cuckoo search (CS) [13]. The PSO and DE are well-known optimizers whose efficacy and convergence behaviors have been widely investigated in the literature [14]. The BA [11] mimics the echolocation activities of bats. The CS [15] is a nature-inspired technique that simulates the brood parasitism of specific cuckoo species. The GSA [16] is a physics-based, population-based strategy inspired by the gravitational forces amongst interacting masses. The FA [17] is inspired by the behaviour of small flashing insects seen on summer nights.

Immature convergence and stagnation at local optima (LO) are common deficiencies in the majority of metaheuristic algorithms [6,7]. Hence, researchers are often concerned with alleviating them by modifying previous optimizers or proposing new ones [18–22]. One of the latest MAs is the grey wolf optimizer (GWO) [23].

The social life of wolves usually depends on their leadership hierarchy and hunting activities. Inspired by this fact, the population-based GWO was developed in 2014 [23]. The GWO is a new efficient optimizer compared to other methods such as GSA, FA, BA, and PSO. In GWO, each group has a hierarchical arrangement, and every member of the group performs a specific role. In order to perform the hunting task, each wolf is led by specific leaders, which are high-ranked members. The ranking of wolves is based on their mental talents, such as awareness and supervision, not their physical potential. Hence, the three wolves that have a better perception of the probable location of the victim, labeled alpha, beta, and delta, guide the other members throughout hunting missions. The steps of the hunting phase can be categorized as surrounding and attacking the quarry. Then, the other wolves modify their current locations regarding the situation of their leaders in all these phases.

The GWO algorithm is capable of exposing an efficient performance compared to other well-designed MAs such as GSA, evolution strategy (ES), DE, and PSO in dealing with various problems [23]. The last three years have witnessed a rapid growth in the use of GWO for different applications. Saremi et al. studied the use of evolutionary population dynamics (EPD) in the basic GWO [2]. The EPD operator has been developed based on the theory of self-organizing criticality (SOC) [24,25]. In [26], the conventional GWO has been used for a multi-layer perceptron (MLP) training task. In 2015, the GWO with a pattern search (PS) (GWO-PS) was utilized to handle the reliable planning of secure smart grid power systems by considering critical circumstances [27]. In [28], the impact of Singer and Sinusoidal chaotic signals on the performance of GWO for solving feature selection tasks has been studied.

In 2014, Mirjalili et al. [29] compared the performance of the GWO strategy with that of a novel multi-verse optimizer (MVO). Results revealed that the GWO is capable of outperforming not only in exploring the proper regions but also in its exploitive tendency during the search. In 2015, Komaki and Kayvanfar [30] investigated the efficiency of GWO for solving flow shop scheduling tasks that also considered release times. Emary et al. [31] proposed two new binary GWO variants for the feature selection task in a wrapper mode. The results reveal that the binary version outperforms PSO and GA techniques. In 2016, GWO was employed for the mixed heat and power dispatch task in power systems [32]. The simulations confirmed that the quality and consistency of the GWO solutions were preferable to other evaluated optimizers.

In 2016, Medjahed et al. [33] proposed a GWO-based procedure for the hyperspectral band selection task. In 2016, a multi-objective GWO (MOGWO) was also proposed by Mirjalili et al. [34]. In [35], the efficiency of the DE and GWO algorithms was investigated and compared for tackling optimal power flow (OPF) tasks. In 2016, a GWO-based approach was employed to design a wide-area power system stabilizer (WAPSS) [36]. In 2015, Song et al. [37] used GWO to estimate parameters in surface waves. Simulations of both synthetic and field datasets revealed that it can reach high levels of exploration and exploitation and precise, stable convergence to high-quality solutions.

In [38], the GWO was combined with a PM2.5 prediction model and the accuracy of the results was improved. In [39], the original GWO was hybridized with mutation and crossover mechanisms to tackle economic dispatch problems. In [40], the simple GWO was employed for load frequency control (LFC) in power systems. In [41], the simple GWO was evaluated in path planning tasks. The basic GWO has been employed to determine optimum operative conditions in [42] and to design a proportional, integral and derivative (PID) controller in [43].

The GWO can be identified by some significant advantages in comparison with prior population-based optimizers: minimalism, flexibility, and sufficient LO-escaping potential. Moreover, its implementation is very easy, it uses few preliminary parameters, and it can demonstrate acceptable convergence trends in most challenging test cases. The GWO is readily applicable to dissimilar optimization tasks since, in this method, the target problems can be considered as black boxes.

However, because of the insufficient diversity of the wolves in some cases, the agents of GWO still may face the risk of stagnation in LO. This problem may often happen when the conventional GWO cannot perform a smooth transition from exploration to exploitation over the iterations. Similar to PSO, DE, and other optimizers, another probable concern in GWO may be immature convergence to sub-optimum points, which cannot be accepted as the best possible solutions on the global scale. This can be detected when the searching mechanisms of GWO still cannot relieve it from LO toward better ones. Although most of the previous works just utilized the basic version of GWO in tackling different engineering problems [18–25], it is perceived that some modification to the basic searching pattern can still improve its performance and alleviate its convergence errors. One possible way to do this is to reconstruct the hunting mechanism of wolves using Lévy flight (LF)-based patterns.

The LF can be described as a family of scale-free walks with randomly-oriented steps according to the Lévy distribution [44]. After various investigations of the foraging patterns of wildlife, it was discovered and published in the journal Nature that the behaviors of several animals can be interpreted based on the LF concept [45,46]. The authors declared that throughout evolution, organisms' search approaches could be developed in a way that they may discover the best Lévy-triggered patterns. In addition, the motion patterns of several types of animals can approximate optimal Lévy explorations in theory. Strong support has been discovered for Lévy-based search and hunting behaviors among marine predators such as sharks and billfish.

Later, the LF concept was also introduced into nature-inspired optimizers to enhance them and make them more realistic. Yang and Deb [47,48] employed the LF distribution to generate new search agents in the cuckoo search (CS) algorithm. Also, LF was used to improve the basic PSO [49,50], FA [51], and ABC [52]. These studies affirm that LF can considerably enhance the performance of stochastic optimizers.

In GWO, the search agents are attracted toward the probable optimal points by following the first best solutions (leaders) in each generation. However, after these exploitative motions, the search agents of GWO are inclined to the problem of stagnation in LO. The stagnation problems in GWO have motivated us to propose a new modified GWO hybridized with the LF concept for tackling either global or real-world tasks. Hence, to alleviate the stagnation problems in GWO, it is modified here and then hybridized with the LF concept. The main idea behind LGWO is to enhance the exploration of GWO by embedding LF-based patterns into the hunting steps of GWO. The LF can support GWO in regaining the right balance between the exploration and exploitation tendencies of wolves throughout the hunting process. The proposed LGWO is evaluated and compared with different algorithms on several optimization test cases from CEC2005, CEC2011, and CEC2014. The results verify that significant improvements can be discovered in the results of LGWO compared to the other competitors.

This paper is structured as follows: at first, the core characteristics of the conventional GWO are studied in Section 2. Subsequently, the structure of the modified GWO with LF is provided in Section 3. The proposed GWO-based approach is substantiated using a wide set of test beds, and details are discussed in Section 4. Finally, the main concluding remarks are reported in Section 5.

2. An overview of the GWO algorithm

The GWO can be regarded as a robust swarm-based optimizer [30,32–34,37,38]. This algorithm is inspired by the social hierarchy and hunting strategies of grey wolves in the wild. In GWO, the initial population is divided into certain categories including alpha (α), beta (β), delta (δ), and omega (ω). The best wolves are treated as α, β, and δ, and they assist the other wolves (ω) in exploring more favorable regions of the solution space (see Fig. 1).

In the conventional GWO, the motion of wolves is described as [23]:

D = |C · X_p(t) − X(t)|,  (1)

X(t + 1) = X_p(t) − A · D,  (2)
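The encircling move of Eqs. (1)–(2), together with the coefficient rules of Eqs. (3)–(4) below, can be condensed into a few lines. The following Python snippet is an illustrative reimplementation, not the authors' Matlab code; `encircle` is a hypothetical helper name, and the update is applied per dimension:

```python
import random

def encircle(x, x_prey, a):
    """Encircling step of Eqs. (1)-(2), applied per dimension.

    The coefficients follow Eqs. (3)-(4): A is drawn from [-a, a]
    (via 2a*r1 - a) and C from [0, 2] (via 2*r2).
    """
    new = []
    for j in range(len(x)):
        A = 2.0 * a * random.random() - a   # Eq. (3)
        C = 2.0 * random.random()           # Eq. (4)
        D = abs(C * x_prey[j] - x[j])       # Eq. (1)
        new.append(x_prey[j] - A * D)       # Eq. (2)
    return new
```

Note that with a = 0 the move collapses exactly onto the prey, mirroring how the shrinking parameter a gradually shifts GWO from exploration to exploitation.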

Fig. 1. Social hierarchy of wolves and their characteristics in GWO.

Fig. 2. Possible 2D and 3D locations of wolves nearby the prey.

where t is the iteration, A and C are random vectors, X_p is the location of the prey, and X is the position of a wolf. The random A and C vectors are calculated as [23]:

A = 2a · r1 − a,  (3)

C = 2 · r2,  (4)

where a = a0(1 − t/T) is a temporal parameter whose elements are linearly decreased from a0 to 0, and r1 and r2 are random values inside [0, 1]. In the basic GWO, a0 was set to 2. In order to clearly realize the effects of these simple rules, the position vector of wolves and a selection of their neighbors are demonstrated in Fig. 2. These illustrations show that the search agents can reach different situations around the prey by modifying the values of the A and C parameters. In addition, the random values r1 and r2 can assist each search agent in reaching any location among the depicted points in Fig. 2.

These animals are capable of identifying the position of the quarry and enclosing it. The hunting procedure is typically directed by the alpha types. In some conditions, the beta and delta might contribute to hunting as well. In GWO, it has been supposed that the alpha (best solution), beta, and delta can estimate the possible situation of the victim. Hence, the first three best solutions are recorded to lead the other hunters [30].

It is apparent that the end position is situated in a random place inside a hypersphere in the search space. Therefore, the elite solutions will approximate the optimum, and the other hunting agents have to revise their locations around the estimated victim based on stochastically-driven mechanisms. In this regard, the states of wolves are adjusted by Eqs. (5)–(7) [23]:

D_α = |C1 · X_α − X|,  (5)

D_β = |C2 · X_β − X|,  (6)

D_δ = |C3 · X_δ − X|,  (7)

where X_α, X_β, and X_δ denote the locations of the alpha, beta, and delta, respectively; C1, C2, and C3 represent random vectors; and X specifies the location of the present solution. Eqs. (5)–(7) rule the estimated span between the recent solution and the α, β, and δ types, respectively. After estimating the distances, the final state of the updated solutions is determined by [23]:

X1 = X_α − A1 · (D_α),  (8)

X2 = X_β − A2 · (D_β),  (9)

X3 = X_δ − A3 · (D_δ),  (10)

X(t + 1) = 0.33 × (X1 + X2 + X3),  (11)

where A1, A2, and A3 are random vectors and t is the current iteration. The step sizes of the ω wolves are expressed in Eqs. (5)–(7), respectively. The final location of the ω wolves is formulated in Eqs. (8)–(11). An illustration of the aforementioned mechanisms can be perceived in Fig. 3.

The A and C are two random and adaptive vectors that assist GWO in its explorative and exploitative behaviors.

Fig. 3. Movement mechanism of GWO in 3D.
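The full leader-guided update of Eqs. (5)–(11) — one distance estimate per leader, three candidate positions, then their average — can be sketched as follows. This is an illustrative Python rendering (the paper's experiments used Matlab R2013a); `gwo_update` is a hypothetical name, and redrawing the coefficients per leader and per dimension is one common convention, assumed here:

```python
import random

def gwo_update(wolf, alpha, beta, delta, a):
    """One GWO position update for a single wolf (Eqs. (3)-(11))."""
    new_pos = []
    for j in range(len(wolf)):
        candidates = []
        for leader in (alpha, beta, delta):
            A = 2.0 * a * random.random() - a        # Eq. (3)
            C = 2.0 * random.random()                # Eq. (4)
            D = abs(C * leader[j] - wolf[j])         # Eqs. (5)-(7)
            candidates.append(leader[j] - A * D)     # Eqs. (8)-(10)
        # Eq. (11): the paper writes 0.33 x (X1 + X2 + X3), i.e. the mean
        # of the three leader-driven estimates.
        new_pos.append(sum(candidates) / 3.0)
    return new_pos
```

When a reaches 0 the coefficient A vanishes, and the wolf lands on the average of the three leaders' positions, which is the purely exploitative limit of the scheme.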

The vector A is a random value inside [−2a, 2a], where the elements of a are linearly decreased from 2 to 0. Exploration happens when A is larger than 1 or less than −1 (|A| > 1). The C parameter can encourage the exploration tendency while it is larger than 1 (|C| > 1). In addition, the exploitation trend can be improved when |A| < 1 and |C| < 1. Note that A is narrowed linearly over the simulation to give more weight to exploitation as time passes [30,47,53]. However, C is determined randomly to give stress to both exploration and exploitation tendencies at each stage, which is a simple mechanism for alleviating the LO entrapment problem [23,30,35]. The structure of GWO is described in Fig. 4.

3. The modified GWO with LF

The conventional GWO algorithm updates its hunters towards the victim based on the condition of the alpha, beta, and delta (leader wolves). However, the population of GWO is still inclined to stagnation in LO in some cases. Hence, GWO's problems of immature convergence can still be experienced. In some cases, the standard GWO algorithm is not capable of performing a seamless transition from the exploration to the exploitation phase.

To relieve the above-mentioned concerns, LF can be used. The LF can assist GWO in searching based on deeper searching patterns. Using this concept, it can be ensured that GWO handles global searching more efficiently. In this manner, the stagnation problem can also be relieved. In addition, the quality of the candidate solutions should be enhanced in the Lévy-embedded GWO throughout the simulation. For this purpose, three main modifications are proposed: first, the role of the delta wolves in the social hierarchy is played by other wolves; second, the LF concept is embedded into the modified GWO; and third, the greedy selection (GS) strategy is employed in the LF-based modified GWO.

In LGWO, delta wolves are not considered as a specific class in the different phases of the hunting process. In the basic method, the updating and initialization of these types of wolves affect its overall searching performance. It should be noted that the dominance hierarchy of wolves can help GWO memorize better solutions for guiding the remaining members of the population. In LGWO, the social hierarchy can still be effective using three types of wolves: alpha (α), beta (β), and omega (ω). Hence, the position of wolves in LGWO is updated based on the α and β types using the following equation:

X(t) = 0.5 × (X_α − A1 · D_α + X_β − A2 · D_β).  (12)

In LGWO, these movements will be performed toward the best minima over the search space. In these motions, random walks may be appropriate to model the animal motions more realistically [49,52]. Lévy motion is regarded as a variety of non-Gaussian random processes whose random steps are determined based on the Lévy stable distribution [44]. The Lévy distribution can be represented by a clear power-law equation as [49]:

L(s) ∼ |s|^(−1−β), 0 < β ≤ 2,  (13)

where s is the variable and β is the Lévy index controlling the stability of the distribution [49]. The Lévy distribution can be formulated as [49]:

L(s, γ, μ) = √(γ/2π) · exp(−γ/(2(s − μ))) · 1/(s − μ)^(3/2) for 0 < μ < s < ∞, and 0 for s ≤ 0,  (14)

where μ is a shift parameter and γ > 0 is a scale parameter [47,52]. The Lévy distribution can also be reformulated based on the Fourier transform as follows [52]:

F(k) = exp(−α|k|^β), 0 < β ≤ 2,  (15)

where α is a parameter inside [−1, 1], which is referred to as the skewness or scale factor. Referring to the investigations of Deb and Lee in [54,55], different values of β can significantly affect the shape of the LF distribution. When the values of β are small, longer jumps are generated; otherwise, the LF creates smaller jumps using greater values of β [49].

It has been verified that Lévy-based movements can be regarded as optimal searching (hunting) approaches for foragers/hunters in non-destructive foraging circumstances [46]. Several theoretical works have demonstrated Lévy-based patterns in the hunting of wildlife such as monkeys and sharks [45,46,52,56–58]. For instance, marine predators show Lévy-based motion patterns throughout the hunting procedure [46]. Based on [59], exploration (searching) will be optimal for exploring randomly scattered objects when it is performed according to a Lévy walk on an LF-based path with a constant velocity. In this regard, it may be advantageous to revise the hunting process of wolves in GWO according to the LF concept.
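The role of the Lévy index β in Eq. (13) can be seen numerically by sampling jump lengths from a distribution with the matching power-law tail. The sketch below uses simple inverse-transform (Pareto) sampling purely as an illustration of the heavy tail; it is not the Mantegna generator that the paper adopts later, and `levy_jump` is a hypothetical name:

```python
import random

def levy_jump(beta, s_min=1.0):
    """One jump length with tail P(S > s) = (s/s_min)**(-beta),
    the power-law tail of Eq. (13), via inverse-transform sampling."""
    u = 1.0 - random.random()            # uniform in (0, 1]
    return s_min * u ** (-1.0 / beta)

random.seed(1)
heavy = [levy_jump(0.5) for _ in range(10000)]   # small beta: long jumps
light = [levy_jump(1.9) for _ in range(10000)]   # large beta: short jumps
```

With β = 0.5 the largest of the 10,000 jumps is orders of magnitude longer than with β = 1.9, matching the statement above that small β values generate longer jumps.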

Fig. 4. (A) Flowchart of the GWO technique. (B) Effects of LF on the searching history of wolves.

The LF may assist LGWO in mimicking wolves' hunting patterns more realistically and precisely than the original GWO. Therefore, using LF as an alternative can be an effective idea for mitigating the stagnation problems of GWO. Hence, as another modification to GWO, new positions are determined by:

X_new(t) = 0.5 × (X_α − A1 · D_α + X_β − A2 · D_β) + α ⊕ Levi(β), if |A| > 0.5
X_new(t) = 0.5 × (X_α − A1 · D_α + X_β − A2 · D_β), if |A| < 0.5  (16)

where α is a step size that is associated with the scales of the target problem, β is the Lévy index inside [0, 2], and ⊕ symbolizes entry-wise multiplication. Based on the value of |A|, the new operator redistributes some of the wolves to achieve the right balance between exploration and exploitation according to the LF-based jumps. In those cases where the modified GWO (without LF) can obtain superior solutions, there is a possibility in LGWO to use and improve them as better-quality results. Also, this operator can enhance the exploitive behaviors of wolves in the last iterations. Here, α is a random quantity for each dimension of the wolves. Therefore, when |A| > 0.5, the operator is updated as:

X_new(t) = 0.5 × (X_α − A1 · D_α + X_β − A2 · D_β) + rand(size(D)) ⊕ Levi(β),  (17)

where D is the dimension. The Mantegna technique [60] is an accurate strategy to provide stochastic variables whose probability densities converge to the Lévy stable distribution controlled by the parameter α (0.3 < α < 1.99). Therefore, the Mantegna strategy is utilized here to obtain LF throughout the searching process. In this regard, in Eq. (17), the step size can be formulated as:

rand(size(Dim)) ⊕ Levi(β) ∼ 0.01 · (u/|v|^(1/β)) · (X(t) − X_α(t)),  (18)

where the u and v values are attained based on normal distributions:

u ∼ N(0, σ_u²), v ∼ N(0, σ_v²),  (19)

with

σ_u = {Γ(1 + β) · sin(πβ/2) / [Γ((1 + β)/2) · β · 2^((β−1)/2)]}^(1/β), σ_v = 1,  (20)

where Γ represents the conventional gamma function. In this work, the β parameter is not constant; rather, a random value inside the [0, 2] interval is selected in every iteration of the LF process. By this strategy, LF will generate many small and occasionally long-distance jumps. This random β parameter can improve both exploitation and exploration trends over the course of iterations [49]. The visual effect of the LF-based motions on the searching patterns of wolves is illustrated in Fig. 4.

Fig. 5. Pseudo-code of LGWO algorithm:

Initialize a population of N wolves and the parameters a, p, A, C
Set Xα to the best wolf and Xβ to the second best wolf
while (t < T) or (stopping condition) do
    for each wolf
        Update the position of the current wolf by Eq. (16)
        Perform GS by Eq. (21)
    end for
    Update a, p, A, C
    Update Xα and Xβ
    t = t + 1
end while
Return Xα
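Eqs. (16)–(20) can be combined into one move: the Mantegna step of Eqs. (18)–(20) feeds the LF branch of Eq. (16). The following Python sketch is a rendering under stated assumptions (the authors' Matlab code is not reproduced here): `sigma_u`, `levy_step`, and `lgwo_move` are hypothetical names, and |A| is drawn once per wolf, which is one plausible reading of the switch in Eq. (16):

```python
import math
import random

def sigma_u(beta):
    """Sigma_u of Eq. (20); sigma_v is fixed at 1."""
    num = math.gamma(1 + beta) * math.sin(math.pi * beta / 2)
    den = math.gamma((1 + beta) / 2) * beta * 2 ** ((beta - 1) / 2)
    return (num / den) ** (1 / beta)

def levy_step(beta):
    """Mantegna draw u / |v|**(1/beta), Eqs. (18)-(19)."""
    u = random.gauss(0, sigma_u(beta))
    v = random.gauss(0, 1)
    return u / abs(v) ** (1 / beta)

def lgwo_move(x, x_alpha, x_beta, a, beta):
    """Two-leader LGWO update of Eq. (16) with the scaled LF jump of Eq. (18)."""
    A_switch = 2 * a * random.random() - a           # |A| selects the branch
    new = []
    for j in range(len(x)):
        A1 = 2 * a * random.random() - a
        C1 = 2 * random.random()
        A2 = 2 * a * random.random() - a
        C2 = 2 * random.random()
        d_alpha = abs(C1 * x_alpha[j] - x[j])        # Eq. (5)
        d_beta = abs(C2 * x_beta[j] - x[j])          # Eq. (6)
        pos = 0.5 * (x_alpha[j] - A1 * d_alpha + x_beta[j] - A2 * d_beta)
        if abs(A_switch) > 0.5:                      # LF branch of Eq. (16)
            pos += 0.01 * levy_step(beta) * (x[j] - x_alpha[j])  # Eq. (18)
        new.append(pos)
    return new
```

Early in a run, a is large, |A| frequently exceeds 0.5, and the Lévy jumps dominate (exploration); as a shrinks, the plain two-leader mean takes over (exploitation).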
The subsequent remarks explain the aspects by which the modified Lévy-based operator qualifies LGWO to outperform the basic GWO technique:

• The LGWO can still demonstrate the main features of the basic GWO in its exploration and exploitation tendencies.
• The LF-based jumps can redistribute wolves around the fitness landscape to prevent the population from losing diversity and to put more emphasis on the global searching tendency once it is required.
• The LF-based hunting technique permits all artificial hunters to explore and localize the possible situations of the victim more effectually.
• In the case of stagnation, Lévy-triggered searching (hunting) patterns can help GWO jump out of it toward new, better positions.
• LF with a random β can boost both exploration and, occasionally, exploitation trends by consecutively generating a series of small and big jumps during the search.
• Based on |A| and the random jumps of LF, the LGWO has an enhanced tendency for global search and often puts emphasis on more exploration. The LGWO can also lay emphasis on further exploitation in the last steps.

Fig. 6. Flowchart of LGWO algorithm.

Additionally, referring to the greedy selection (GS) strategy from the DE algorithm, the concept of "survival of the fittest" is utilized here with the probability p. According to this strategy, new superior positions in each generation can continue to be more enriched for the next generations, and the worse ones are disregarded. This operator is formulated as:

X(t + 1) = X(t), if f(X_new(t)) > f(X(t)) and r_new < p
X(t + 1) = X_new(t), otherwise  (21)

where r_new and p are random values inside (0, 1), f(X(t)) is the fitness of the last position, and X_new(t) represents the new position attained by Eq. (16). The p value in Eq. (21) is determined randomly inside the interval [0, 1] in each iteration. This choice emphasizes the random nature of LGWO. By integrating GS into LGWO, the searching capabilities are more enriched because each pioneer wolf gets the chance to survive and then share its observed info with other hunters during the next steps of the searching process. Also, it is advantageous for stabilizing the essential exploration and exploitation trends to encourage LGWO to converge to better-quality solutions.

The pseudocode and flowchart of LGWO are reported in Figs. 5 and 6, respectively. In the next section, the proposed LGWO will be evaluated based on diverse numerical instances.

4. Experimental results and discussion

Here, in order to investigate the new LGWO's performance, experimental simulations are implemented for various optimization tasks. These classic benchmark functions have been employed by various researchers to assess their algorithms [23,61–64]. For this purpose, 29 unconstrained problems including a set of composition test cases from CEC 2005 are evaluated [65]. Furthermore, 30 modern benchmarks from CEC 2014 and 14 real-world test cases from CEC2011 are used as the second and third experiments, respectively.
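Looking back at the greedy selection rule of Eq. (21), it can be sketched as below. This assumes minimization (a larger fitness value is worse), and `greedy_select` is an illustrative name rather than the authors' implementation:

```python
import random

def greedy_select(x_old, x_new, f, p):
    """Greedy selection of Eq. (21): keep the previous wolf when the
    candidate is worse (larger f under minimization) and a uniform
    draw r_new falls below p; otherwise accept the candidate."""
    if f(x_new) > f(x_old) and random.random() < p:
        return x_old
    return x_new

def sphere(x):
    """F01 of Table 3, used here only as an example fitness."""
    return sum(v * v for v in x)
```

With p = 1 the rule is purely greedy (a worse candidate is always rejected); a smaller p occasionally admits worse moves, which keeps some diversity in the pack.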

Table 1
Results of LGWO with different classes.

Problem/Algorithm   LGWO-I  LGWO-II  LGWO-III  LGWO-IV  LGWO-V  LGWO-VI
Number of leaders   1       2        3         4        5       6
FC23                1.02    1        1.06      1.06     1.08    1.12
FC24                1.05    1        1.06      1.06     1.06    1.14
FC25                1.12    1        1.15      1.15     10.02   1.23
FC26                1.20    1        1.25      2.25     1.27    1.32
FC27                1.05    1        1.08      1.08     1.12    1.19
FC28                1.02    1        1.03      1.06     1.05    1.26
FC29                1.10    1        1.11      1.10     1.18    2.31
FC30                1.02    1        1.03      2.02     1.56    10.12
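Pairwise comparisons such as those behind Table 1 are judged in this section with the Wilcoxon rank-sum test at the 5% significance level. In practice a statistics package would be used (e.g. `scipy.stats.ranksums`); as a self-contained illustration, its normal-approximation z statistic can be computed as follows (ties receive average ranks; small-sample exact tables are ignored):

```python
import math

def rank_sum_z(a, b):
    """Normal-approximation z statistic of the Wilcoxon rank-sum test;
    |z| > 1.96 indicates a significant difference at the 5% level."""
    pooled = sorted([(v, 0) for v in a] + [(v, 1) for v in b])
    ranks = [0.0] * len(pooled)
    i = 0
    while i < len(pooled):               # assign average ranks over ties
        j = i
        while j < len(pooled) and pooled[j][0] == pooled[i][0]:
            j += 1
        for k in range(i, j):
            ranks[k] = (i + j + 1) / 2.0
        i = j
    n1, n2 = len(a), len(b)
    w = sum(r for r, (v, lab) in zip(ranks, pooled) if lab == 0)
    mean = n1 * (n1 + n2 + 1) / 2.0
    var = n1 * n2 * (n1 + n2 + 1) / 12.0
    return (w - mean) / math.sqrt(var)
```

For example, comparing the samples [1, 2, 3] and [4, 5, 6] yields z ≈ −1.96, sitting right at the 5% two-sided boundary.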

Table 2
The parameters of optimizers.

Optimizer   Description
PSO         c1, c2 = 2, ω2 = 0.9, ω1 = 0.2
CS          pa = 0.25
FA          α = 0.5, β = 0.2, γ = 1
GSA         G0 = 100, α = 20
BA          A = 0.5, r = 0.5, ε = 0.1
DE          F = 0.5, CR = 0.4
GWO         a0 = 2
LGWO        a0 = 2, β ∼ U(0,2), p ∼ U(0,1)

For these experiments, each technique was run on a Windows 7 system using an Intel Core i2, 2 GHz, 4G RAM, and Matlab R2013a. No commercial GWO-based tool was applied in this research.

Before the comprehensive tests, a comparative study is carried out to demonstrate why using three types of wolves in LGWO is preferable to other cases with additional or fewer classes. The LGWO with different classes is evaluated on eight composition functions from CEC 2014 (see Table 8). The complicated fitness landscapes of these problems are necessary to reveal the effects of the number of leaders on the performance of LGWO. For each function, LGWO with 1, 2, 3, 4, 5, and 6 leaders (2, 3, 4, 5, 6, and 7 classes of wolves) ran 30 times. The normalized average results for these problems are reported in Table 1.

Note that these results are normalized in each row according to the lowest values. Therefore, a comparison of results between different rows is neither meaningful nor necessary. Based on the achieved results, the LGWO with three classes of wolves can find better results in comparison to those of the other versions. The reason is that using more leaders emphasizes exploitation rather than exploration, and it seems that this strategy cannot keep LGWO from stagnation behaviors.

Therefore, the proposed modifications can assist LGWO in striking a balance between its exploration and exploitation capacities. The numerical results of LGWO with 3 classes (α, β, ω) for these benchmarks are reported in the next sections.

4.1. Experiment results and analysis on benchmark set 1

In order to assess the efficiency of the proposed LGWO for treating different optimization problems, a series of benchmarks is utilized in experiment 1. Two well-established algorithms are employed to substantiate LGWO in experiment 1: PSO [66] and DE [67]. Moreover, the LGWO is compared with GWO [23] as the original method. It is also compared with BA [68], which is inspired by the echolocation strategies of bats with adaptive pulse rates of emission and loudness; CS [47], which mimics the breeding strategies of a bird family; GSA [69], a well-known optimizer used in many works; and FA [70], a prior robust approach inspired by the life of flashing fireflies. The performance of LGWO is investigated based on 30 independent runs for each problem over 500 internal loops. The initial parameters of the algorithms are reported in Table 2. These parameters are chosen based on the recommendations in the aforementioned works.

Referring to Derrac et al. [71], statistical tests should be used to judge the performance of the evaluated optimizers. Statistical tests reflect each test's results and verify that the differences in the results are statistically significant. To statistically assess the new LGWO compared to the other methodologies, the Wilcoxon rank-sum test [71] at the 5% significance level is also employed for the evaluated test systems.

The formulation of the classic benchmark problems is provided in Table 3. In addition, Table 4 demonstrates the description of the F24-F29 composition test cases [23]. Some of the F01-F23 benchmarks are also selected to be illustrated in Figs. 7–9.

4.1.1. Exploitation analysis

Functions F01–F07 can be classified as unimodal test cases with only one global best. The overall exploitation tendency of the LGWO method can be investigated using these test beds. The standard deviation (STD) and average (Ave) results of LGWO and the other methods are reported in Tables 5–7. Superior results are highlighted inside the corresponding tables. Meanwhile, the optimizers are sorted according to their averages. In addition, the average rank is computed to attain the overall rank of the methods. The summary of the statistical test results is reflected in all tables. In these tables, +, −, and ≈ show that the performance of the LGWO is statistically superior to, inferior to, and similar to the second optimizer, respectively.

It can be perceived from Table 5 that LGWO can obtain very competitive solutions compared to the other algorithms. For F01-F07, the solutions of LGWO demonstrate that it is capable of outperforming the conventional GWO. The solutions of LGWO are also superior to the results of the other evaluated techniques in most of the 7 test cases (F01, F02, F03, F04, and F07). Based on the last row, it can be detected that LGWO is competent to be ranked first. The overall rank of the algorithms indicates that the LGWO can outperform the DE, GWO, CS, FA, GSA, PSO, and BA algorithms, respectively. It can be recognized from the results on F01-F07 that the LGWO is capable of attaining satisfactory solutions with an appropriate exploitation potential. The reason is that the proposed Lévy-embedded mechanisms can effectively stimulate both the exploration and exploitation tendencies of the conventional GWO. Therefore, the proposed LF-based mechanisms increase the propensity of the algorithm for generating more diminutive LF-based jumps based on higher values of the stability index. This feature is advantageous for exploiting new areas nearby the newly explored solutions. For that reason, it is observed that the new algorithmic modifications have enriched the exploitation trends of GWO in dealing with unimodal problems such as the F01 to F07 test cases.

4.1.2. Exploration analysis

Multimodal test beds (F08-F23) are appropriate to validate the exploration potential of different optimizers. Based on the results in Tables 6 and 7, LGWO is capable of exploring very competitive
Table 3
Descriptions of the benchmark problems (C: Characteristics, U: Unimodal, M: Multimodal, H: High-dimensional, L: Low-dimensional, DM: Dimension, OPT: Optimum) [23].

Id | C | Formulation | DM | Limits | OPT
F01 | UH | $F_{01}(x) = \sum_{i=1}^{n} x_i^2$ | 30 | [−100,100] | 0
F02 | UH | $F_{02}(x) = \sum_{i=1}^{n} |x_i| + \prod_{i=1}^{n} |x_i|$ | 30 | [−10,10] | 0
F03 | UH | $F_{03}(x) = \sum_{i=1}^{n} \left( \sum_{j=1}^{i} x_j \right)^2$ | 30 | [−100,100] | 0
F04 | UH | $F_{04}(x) = \max_i \{ |x_i|,\ 1 \le i \le n \}$ | 30 | [−100,100] | 0
F05 | UH | $F_{05}(x) = \sum_{i=1}^{n-1} \left[ 100 (x_{i+1} - x_i^2)^2 + (x_i - 1)^2 \right]$ | 30 | [−30,30] | 0
F06 | UH | $F_{06}(x) = \sum_{i=1}^{n} ([x_i + 0.5])^2$ | 30 | [−100,100] | 0
F07 | UH | $F_{07}(x) = \sum_{i=1}^{n} i x_i^4 + \mathrm{random}[0, 1)$ | 30 | [−1.28,1.28] | 0
F08 | MH | $F_{08}(x) = \sum_{i=1}^{n} -x_i \sin\left(\sqrt{|x_i|}\right)$ | 30 | [−500,500] | −418.9829 × n
F09 | MH | $F_{09}(x) = \sum_{i=1}^{n} \left[ x_i^2 - 10 \cos(2\pi x_i) + 10 \right]$ | 30 | [−5.12,5.12] | 0
F10 | MH | $F_{10}(x) = -20 \exp\left(-0.2 \sqrt{\frac{1}{n} \sum_{i=1}^{n} x_i^2}\right) - \exp\left(\frac{1}{n} \sum_{i=1}^{n} \cos(2\pi x_i)\right) + 20 + e$ | 30 | [−32,32] | 0
F11 | MH | $F_{11}(x) = \frac{1}{4000} \sum_{i=1}^{n} x_i^2 - \prod_{i=1}^{n} \cos\left(\frac{x_i}{\sqrt{i}}\right) + 1$ | 30 | [−600,600] | 0
F12 | MH | $F_{12}(x) = \frac{\pi}{n} \left\{ 10 \sin^2(\pi y_1) + \sum_{i=1}^{n-1} (y_i - 1)^2 \left[ 1 + 10 \sin^2(\pi y_{i+1}) \right] + (y_n - 1)^2 \right\} + \sum_{i=1}^{n} u(x_i, 10, 100, 4)$, where $y_i = 1 + \frac{x_i + 1}{4}$ and $u(x_i, a, k, m) = \begin{cases} k (x_i - a)^m & x_i > a \\ 0 & -a < x_i < a \\ k (-x_i - a)^m & x_i < -a \end{cases}$ | 30 | [−50,50] | 0
F13 | MH | $F_{13}(x) = 0.1 \left\{ \sin^2(3\pi x_1) + \sum_{i=1}^{n} (x_i - 1)^2 \left[ 1 + \sin^2(3\pi x_i + 1) \right] + (x_n - 1)^2 \left[ 1 + \sin^2(2\pi x_n) \right] \right\} + \sum_{i=1}^{n} u(x_i, 5, 100, 4)$ | 30 | [−50,50] | 0
F14 | ML | $F_{14}(x) = \left( \frac{1}{500} + \sum_{j=1}^{25} \frac{1}{j + \sum_{i=1}^{2} (x_i - a_{ij})^6} \right)^{-1}$ | 2 | [−65,65] | 1
F15 | ML | $F_{15}(x) = \sum_{i=1}^{11} \left[ a_i - \frac{x_1 (b_i^2 + b_i x_2)}{b_i^2 + b_i x_3 + x_4} \right]^2$ | 4 | [−5,5] | 0.00030
F16 | ML | $F_{16}(x) = 4 x_1^2 - 2.1 x_1^4 + \frac{1}{3} x_1^6 + x_1 x_2 - 4 x_2^2 + 4 x_2^4$ | 2 | [−5,5] | −1.0316
F17 | ML | $F_{17}(x) = \left( x_2 - \frac{5.1}{4\pi^2} x_1^2 + \frac{5}{\pi} x_1 - 6 \right)^2 + 10 \left( 1 - \frac{1}{8\pi} \right) \cos x_1 + 10$ | 2 | [−5,5] | 0.398
F18 | ML | $F_{18}(x) = \left[ 1 + (x_1 + x_2 + 1)^2 (19 - 14 x_1 + 3 x_1^2 - 14 x_2 + 6 x_1 x_2 + 3 x_2^2) \right] \times \left[ 30 + (2 x_1 - 3 x_2)^2 (18 - 32 x_1 + 12 x_1^2 + 48 x_2 - 36 x_1 x_2 + 27 x_2^2) \right]$ | 2 | [−2,2] | 3
F19 | ML | $F_{19}(x) = -\sum_{i=1}^{4} c_i \exp\left( -\sum_{j=1}^{3} a_{ij} (x_j - p_{ij})^2 \right)$ | 3 | [1,3] | −3.86
F20 | ML | $F_{20}(x) = -\sum_{i=1}^{4} c_i \exp\left( -\sum_{j=1}^{6} a_{ij} (x_j - p_{ij})^2 \right)$ | 6 | [0,1] | −3.32
F21 | ML | $F_{21}(x) = -\sum_{i=1}^{5} \left[ (X - a_i)(X - a_i)^T + c_i \right]^{-1}$ | 4 | [0,10] | −10.1532
F22 | ML | $F_{22}(x) = -\sum_{i=1}^{7} \left[ (X - a_i)(X - a_i)^T + c_i \right]^{-1}$ | 4 | [0,10] | −10.4028
F23 | ML | $F_{23}(x) = -\sum_{i=1}^{10} \left[ (X - a_i)(X - a_i)^T + c_i \right]^{-1}$ | 4 | [0,10] | −10.5363

Table 4
Composition benchmarks [23].

ID | Function | D | Bounds | GM
F24 | $f_1, f_2, \ldots, f_{10}$ = Sphere's Function; $[\sigma_1, \sigma_2, \ldots, \sigma_{10}]$ = [1, 1, ..., 1]; $[\lambda_1, \lambda_2, \ldots, \lambda_{10}]$ = [5/100, 5/100, ..., 5/100] | 10 | [−5,5] | 0
F25 | $f_1, f_2, \ldots, f_{10}$ = Griewank's Function; $[\sigma_1, \ldots, \sigma_{10}]$ = [1, 1, ..., 1]; $[\lambda_1, \ldots, \lambda_{10}]$ = [5/100, 5/100, ..., 5/100] | 10 | [−5,5] | 0
F26 | $f_1, f_2, \ldots, f_{10}$ = Griewank's Function; $[\sigma_1, \ldots, \sigma_{10}]$ = [1, 1, ..., 1]; $[\lambda_1, \ldots, \lambda_{10}]$ = [1, 1, ..., 1] | 10 | [−5,5] | 0
F27 | $f_1, f_2$ = Ackley's Function; $f_3, f_4$ = Rastrigin's Function; $f_5, f_6$ = Weierstrass's Function; $f_7, f_8$ = Griewank's Function; $f_9, f_{10}$ = Sphere's Function; $[\sigma_1, \ldots, \sigma_{10}]$ = [1, 1, ..., 1]; $[\lambda_1, \ldots, \lambda_{10}]$ = [5/32, 5/32, 1, 1, 5/0.5, 5/0.5, 5/100, 5/100, 5/100, 5/100] | 10 | [−5,5] | 0
F28 | $f_1, f_2$ = Rastrigin's Function; $f_3, f_4$ = Weierstrass's Function; $f_5, f_6$ = Griewank's Function; $f_7, f_8$ = Ackley's Function; $f_9, f_{10}$ = Sphere's Function; $[\sigma_1, \ldots, \sigma_{10}]$ = [1, 1, ..., 1]; $[\lambda_1, \ldots, \lambda_{10}]$ = [1/5, 1/5, 5/0.5, 5/0.5, 5/100, 5/100, 5/32, 5/32, 5/100, 5/100] | 10 | [−5,5] | 0
F29 | $f_1, f_2$ = Rastrigin's Function; $f_3, f_4$ = Weierstrass's Function; $f_5, f_6$ = Griewank's Function; $f_7, f_8$ = Ackley's Function; $f_9, f_{10}$ = Sphere's Function; $[\sigma_1, \ldots, \sigma_{10}]$ = [0.1, 0.2, 0.3, 0.4, 0.5, 0.6, 0.7, 0.8, 0.9, 1]; $[\lambda_1, \ldots, \lambda_{10}]$ = [0.1 × 1/5, 0.2 × 1/5, 0.3 × 5/0.5, 0.4 × 5/0.5, 0.5 × 5/100, 0.6 × 5/100, 0.7 × 5/32, 0.8 × 5/32, 0.9 × 5/100, 1 × 5/100] | 10 | [−5,5] | 0

Fig. 7. 3-D map for 2-D forms of F07, F09, F10 benchmarks.

Fig. 8. 3-D map for 2-D forms of F12, F13, F14 benchmarks.



Fig. 9. 3-D map for 2-D forms of F19, F21, F23 benchmarks.

Table 5
Results for unimodal F01–F07 functions.

F GWO CS PSO FA GSA BA DE LGWO

F01 Ave 6.32E − 28 5.78E − 03 0.000128 0.040411 2.12E − 16 0.767411 7.17E − 12 3.17E − 30
STD 5.18E − 05 2.41E − 03 0.000289 0.018201 5.21E − 17 0.688170 2.1E − 14 4.07E − 20
Rank(Test) 2 (+) 6 (+) 5 (+) 7 (+) 3 (+) 8 (+) 4 (+) 1
F02 Ave 8.38E − 17 2.08E − 01 0.048122 0.061744 0.051774 0.327110 1.10E − 07 5.39E − 19
STD 0.034082 3.17E − 02 0.041385 0.017173 0.231239 2.160233 3.35E − 08 0.010729
Rank(Test) 2 (+) 7 (+) 4 (+) 6 (+) 5 (+) 8 (+) 3 (+) 1
F03 Ave 4.09E − 06 2.63E − 01 64.17332 0.050016 455.1004 0.129385 8.23E − 07 8.12E − 08
STD 21.70012 2.97E − 02 18.71810 0.011250 124.7488 0.688058 2.31E − 07 2.053381
Rank(Test) 3 (+) 6 (+) 7 (+) 4 (+) 8 (+) 5 (+) 2 (+) 1
F04 Ave 8.19E − 07 1.43E − 05 1.388550 0.213711 8.021015 0.193882 1.18E − 04 1.17E − 08
STD 1.741844 4.83E − 06 0.325442 0.028923 1.493110 0.641056 1.12E − 05 1.316448
Rank(Test) 2 (+) 3 (+) 7 (+) 6 (+) 8 (+) 5 (+) 4 (−) 1
F05 Ave 24.74168 0.008121 85.03224 3.005223 58.38920 0.321710 0 8.350714
STD 51.82238 0.054326 50.40314 1.563822 56.00025 0.299057 0 5.336001
Rank(Test) 6 (+) 2 (+) 8 (+) 4 (+) 7 (+) 3 (+) 1 (−) 5
F06 Ave 0.798003 6.17E − 04 0.000425 0.045230 8.31E − 14 0.712085 2.88E − 03 2.69E − 04
STD 0.001289 2.80E − 05 1.27E − 05 0.029208 7.70E − 15 0.892204 1.45E − 05 0.000023
Rank(Test) 8 (+) 4 (+) 3 (+) 6 (+) 1 (+) 7 (+) 5 (+) 2
F07 Ave 0.003781 0.028551 0.102744 0.008561 0.088145 0.153102 2.41E − 02 3.02E − 03
STD 0.214005 0.001277 0.042350 0.004512 0.044009 0.108730 1.12E − 03 0.001102
Rank(Test) 2 (+) 5 (≈) 7 (+) 3 (≈) 6 (+) 8 (+) 4 (+) 1

Average Rank 3.571429 4.714286 5.857143 5.142857 5.428571 6.285714 3.285714 1.714286
Overall Rank 3 4 7 5 6 8 2 1
+/−/≈ 7/0/0 6/0/1 7/0/0 6/0/1 7/0/0 7/0/0 5/2/0 45/2/2

Bold values in this table indicate the best result achieved by a specific method (GWO, CS, PSO, etc.) for a considered function (F01, F02, etc.).
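The Average Rank and Overall Rank rows of Table 5 follow mechanically from the per-function Rank rows; the sketch below recomputes them from the ranks listed above (the dictionary layout and variable names are ours, not the paper's):

```python
# Per-function ranks for F01-F07, transcribed from the Rank rows of Table 5.
ranks = {
    "GWO":  [2, 2, 3, 2, 6, 8, 2],
    "CS":   [6, 7, 6, 3, 2, 4, 5],
    "PSO":  [5, 4, 7, 7, 8, 3, 7],
    "FA":   [7, 6, 4, 6, 4, 6, 3],
    "GSA":  [3, 5, 8, 8, 7, 1, 6],
    "BA":   [8, 8, 5, 5, 3, 7, 8],
    "DE":   [4, 3, 2, 4, 1, 5, 4],
    "LGWO": [1, 1, 1, 1, 5, 2, 1],
}

# Average rank per method, then overall rank by sorting the averages
# (the best method has the lowest average rank).
avg = {m: sum(r) / len(r) for m, r in ranks.items()}
order = sorted(avg, key=avg.get)
overall = {m: i + 1 for i, m in enumerate(order)}
print(overall["LGWO"], round(avg["LGWO"], 6))  # → 1 1.714286
```

The recomputed values match the Average Rank (e.g., 3.571429 for GWO, 1.714286 for LGWO) and Overall Rank rows printed in Table 5.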

Table 6
Results for multimodal F08–F13 functions.

F GWO CS PSO FA GSA BA DE LGWO

F08 Ave −4021.351 −2128.913 −4258.206 −1295.180 −2688.746 −1002.711 −1538.1527 −3365.8658
STD 315.447 0.008418 512.4180 302.4846 475.2366 831.0054 582.4522 296.12698
Rank(Test) 2 (+) 5 (−) 1 (+) 7 (+) 4 (+) 8 (+) 6 (+) 3
F09 Ave 0.302682 0.2463220 34.18946 0.283300 21.56339 1.200358 12.414778 0.094579
STD 25.26322 0.001816 8.296435 0.211472 5.1287386 0.642822 9.2482451 21.580073
Rank(Test) 4 (+) 2 (≈) 8 (+) 3 (−) 7 (+) 5 (−) 6 (+) 1
F10 Ave 1.12E − 13 4.01E − 10 0.258511 0.142184 0.096552 0.130025 2.85E − 10 2.12E − 15
STD 0.085041 5.21E − 09 0.489455 0.042488 0.195096 0.064851 3.1E − 08 0.0429752
Rank(Test) 2 (≈) 4 (+) 8 (+) 7 (+) 5 (+) 6 (+) 3 (+) 1
F11 Ave 0.005102 0.185228 0.008922 0.086059 12.33364 1.138114 8.15E − 04 0.0000242
STD 0.006325 0.039805 0.009626 0.028002 4.801125 0.637005 1.41E − 06 0.0000839
Rank(Test) 3 (+) 6 (+) 4 (+) 5 (+) 8 (+) 7 (+) 2 (+) 1
F12 Ave 0.061741 0.0125808 0.007241 0.131795 2.112974 0.409255 8.21E − 03 7.12E − 04
STD 0.039705 4.12E − 09 0.003357 0.253518 0.058956 0.810752 1.17E − 04 0.0030027
Rank(Test) 5 (+) 4 (−) 2 (≈) 6 (+) 8 (+) 7 (+) 3 (+) 1
F13 Ave 0.512798 0.485117 0.017158 0.002966 4.200091 0.362611 5.15E − 06 3.94E − 07
STD 0.005821 6.85E − 08 0.018251 0.001452 2.911045 0.189552 1.52E − 07 0.0002098
Rank(Test) 7 (+) 6 (−) 4 (+) 3 (≈) 8 (+) 5 (+) 2 (−) 1

Average Rank 3.833333 4.5 4.5 5.166667 6.666667 6.333333 3.666667 1.333333
Overall Rank 3 4 4 6 8 7 2 1
Overall +/−/≈ 5/0/1 2/3/1 5/0/1 4/1/1 6/0/0 5/1/0 5/1/0 32/6/4

Bold values in this table indicate the best result achieved by a specific method (GWO, CS, PSO, etc.) for a considered function (F08, F09, etc.).

Table 7
Results for fixed-dimension multimodal F14–F23 test cases.

F GWO CS PSO FA GSA BA DE LGWO

F14 Ave 4.013365 1.423652 3.836673 3.026592 4.980711 3.500296 0.9948522 1.1493786
STD 4.030052 1.30E − 02 3.100205 1.932225 3.955225 2.305466 2.56E − 11 2.9072365
Rank(Test) 7 (+) 3 (+) 6 (+) 4 (+) 8 (+) 5 (+) 1 (+) 2
F15 Ave 0.0005837 5.03E − 04 0.000612 0.0010 0.003569 7.03E − 03 5.48E − 05 2.53E − 06
STD 0.0007215 1.11E − 04 0.000198 4.60E − 04 0.001798 2.11E − 03 1.18E − 06 3.95E − 04
Rank(Test) 4 (+) 3 (+) 5 (+) 6 (+) 7 (+) 8 (+) 2 (+) 1
F16 Ave −1.03163 −1.03163 −1.03163 −1.03160 −1.03163 −1.03163 −1.03163 −1.03163
STD 4.3360058 1.49E − 08 6.27E − 16 1.65E − 07 4.72E − 16 2.452254 3.17E − 11 3.2E − 12
Rank(Test) 1 (≈) 1 (≈) 1 (≈) 8 (≈) 1 (≈) 1 (≈) 1 (≈) 1
F17 Ave 0.397889 0.39795 0.397901 0.397985 0.397887 0.397900 0.397887 0.397887
STD 1.28E − 05 3.24E − 06 0 3.56E − 08 0 1.27E − 02 8.24E − 07 0
Rank(Test) 4 (+) 7 (+) 6 (≈) 8 (+) 1 (≈) 5 (+) 1 (+) 1
F18 Ave 3.000028 3.00135 3.0001 3.012363 3.000000 3.000000 3.000000 3.000000
STD 0.012257 0.00258 7.01E − 06 0.052675 4.29E − 14 0.056167 1.58E − 18 2.40E − 2
Rank(Test) 5 (+) 7 (+) 6 (−) 8 (+) 1 (≈) 1 (≈) 1 (−) 1
F19 Ave −3.86263 −3.86288 −3.86280 −3.86137 −3.86250 −3.86280 −3.86280 −3.86288
STD 2.122054 1.85E − 05 1.89E − 12 3.81E − 03 3.15E − 15 1.104492 1.51E − 20 1.47E − 05
Rank(Test) 6 (−) 1 (+) 3 (−) 8 (−) 7 (+) 3 (−) 3 (+) 1
F20 Ave −3.28635 −3.32185 −3.26632 −3.28415 −3.31862 −3.322084 −3.2176 −3.32101
STD 0.268112 7.21E − 03 0.060413 0.070285 4.15E − 05 0.058045 0.63565 0.038626
Rank(Test) 5 (+) 2 (−) 7 (+) 6 (+) 4 (+) 1 (+) 8 (+) 3
F21 Ave −10.1510 −9.72828 −7.63838 −6.92669 −5.92533 −9.203955 −10.1527 −10.15304
STD 9.120014 0.288102 3.672215 3.257075 3.730022 3.0581205 6.53E − 05 5.1404578
Rank(Test) 3 (+) 4 (−) 6 (+) 7 (+) 8 (+) 5 (+) 2 (+) 1
F22 Ave −10.255746 −9.87298 −7.36982 −10.4008 −9.630195 −9.23499 −10.4028 −10.4028
STD 8.6952230 0.320344 3.255826 1.198224 2.8100546 3.225782 2.625526 2.200478
Rank(Test) 4 (≈) 5 (+) 8 (+) 3 (≈) 6 (+) 7 (+) 1 (+) 1
F23 Ave −10.53435 −9.78223 −9.968005 −10.2182 −10.95211 −10.98552 −10.5364 −10.53640
STD 8.4924069 0.500213 1.980510 1.240585 1.41E − 14 10.29354 1.055221 2.0122878
Rank(Test) 5 (+) 8 (+) 7 (+) 6 (+) 2 (−) 1 (+) 3 (+) 3

Average Rank 4.4 4.1 5.5 6.4 4.5 3.7 2.3 1.5
Overall Rank 5 4 7 8 6 3 2 1
+/−/≈ 7/1/2 7/2/1 6/2/2 7/1/2 6/3/1 7/1/2 8/1/1 48/11/11

Bold values in this table indicate the best result achieved by a specific method (GWO, CS, PSO, etc.) for a considered function (F14, F15, etc.).

solutions on F08-F23 test cases. The LGWO obtains the best solutions among the compared methods for F09, F10, F11, F12, F13, F15, F16, F17, F18, F19, F21, and F22. Based on the overall ranking results in Table 6, LGWO is capable of outperforming the DE, CS, GWO, FA, PSO, and BA algorithms on multimodal cases. Statistical tests also support that the results of LGWO are better than those of the other methods in 76% of the evaluations. Based on the STD index, the accuracy of the results is improved compared to GWO.

The relative performance of LGWO affirms that it has a satisfactory explorative tendency, especially when the target problems (such as F08 and F13) have several LO. The reason is that the proposed Lévy-embedded mechanisms can effectively stimulate both the exploration and exploitation tendencies of the conventional GWO. It was observed that the proposed LF-based mechanisms can enrich the explorative behaviors of GWO by generating longer LF-based jumps. Lower values of the stability index assist the wolves in generating more explorative jumps. This feature can be seen when LGWO needs to explore unseen areas of the problem landscape. Accordingly, the new LF-based operators have assisted GWO in striking a fine balance between global and local search inclinations.

Based on Table 7, it is seen that LGWO's performance is better than that of DE, GWO, and the other compared techniques. Additionally, conventional GWO can still reveal an efficient performance compared to FA, PSO, BA, GSA, and CS, and occasionally surpasses them. According to the statistical results, the LGWO is superior to the other approaches in 48 cases. For F14-F23, the average results of LGWO are statistically better than those of GWO, FA, CS, and BA on 7 functions, PSO and GSA on 6, and DE on 8 independent simulations. For some problems such as F18, F19, and F20, it is observed that the obtained results are very competitive and similar. The reason is that these problems are not challenging enough for the evaluated optimizers. Based on the theorem of "no free lunch" (NFL), a universally best optimizer for all classes of problems does not exist [72,73]. Note that LGWO is not an exception.

Based on the overall results, it can be perceived that the exploration capacity of the GWO is extended as a result of the proposed Lévy-embedded searching steps. Besides, the GS updating strategy in LGWO decreases the chance of the LGWO falling into LO. Hence, the exploration tendency of LGWO is desirable as well.

4.1.3. The LO escaping capacity

Composition test beds can be utilized to critique the performance of a metaheuristic from different aspects. The results in Table 8 indicate that LGWO demonstrates an efficient performance compared to the other algorithms on the F27, F28, and F29 cases. It performs as the second best on the F25 and F26 test cases. According to the total ranks, the LGWO is the most effective method and can find better optima than the previous optimizers. From the statistical tests, it is observed that the performance of LGWO on F24-F29 is meaningfully better than that of the CS, PSO, FA, and BA algorithms for all trials. In these experiments, GWO performs better than DE, which is capable of optimizing the functions as the third best method. It is seen that LGWO is statistically superior in 85% of the pair-wise comparisons.

From Table 8, it can be detected that the modified Lévy-based operators in LGWO can effectively assist this algorithm in regaining a correct balance between exploration and exploitation tendencies. In addition, the effects of the new modifications on LGWO also confirm that LGWO with the GS strategy can efficaciously escape from LO in dealing with composition test cases. In circumstances where GWO is stuck in LO or cannot balance between exploration and exploitation, it was observed that the proposed mechanisms can improve its performance and the quality of the results.
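The greedy selection (GS) strategy credited above with reducing stagnation admits a compact sketch: a newly generated position replaces a wolf's current one only when it improves the fitness. The function below is an illustrative minimization version under our own naming, not the paper's verbatim pseudocode:

```python
from typing import Callable, List

def greedy_update(position: List[float],
                  candidate: List[float],
                  fitness: Callable[[List[float]], float]) -> List[float]:
    """Keep the candidate position only if it improves the fitness
    (minimization); otherwise retain the current position."""
    return candidate if fitness(candidate) < fitness(position) else position

# Example with the sphere function F01(x) = sum(x_i^2):
sphere = lambda x: sum(v * v for v in x)
print(greedy_update([1.0, 1.0], [0.5, 0.5], sphere))  # → [0.5, 0.5]
print(greedy_update([1.0, 1.0], [2.0, 2.0], sphere))  # → [1.0, 1.0]
```

Because a worsening move is never accepted, the best-so-far fitness of each wolf is monotone non-increasing, which is the property that prevents the population from drifting away from promising basins.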

Table 8
Results for composition F24–F29 test cases.

F GWO CS PSO FA GSA BA DE LGWO

F24 Ave 38.36621 102 98 139.7366 2.23E − 15 173.2038 7.19E − 01 23.093361


STD 58.00258 107.38239 82.369 86.00859 1.46E − 15 103.96633 2.52E − 01 31.926050
Rank(Test) 4 (+) 6 (+) 5 (+) 7 (+) 1 (−) 8 (+) 2 (−) 3
F25 Ave 89.30698 149.1103 146.1082 293.3381 193.2573 456.4019 35.89622 67.390577
STD 86.36999 88.85123 15.10336 98.12579 61.09312 136.9227 15.8510 65.120068
Rank(Test) 3 (≈) 5 (+) 4 (+) 7 (+) 6 (+) 8 (+) 1 (+) 2
F26 Ave 62.71469 286.2597 153.9006 729.69083 177.52980 587.230274 164.1057 69.002458
STD 66.43023 79.15931 30.70365 198.1229 85.22842 136.387762 29.85250 65.820246
Rank(Test) 1 (−) 6 (+) 3 (+) 8 (+) 5 (+) 7 (+) 4 (+) 2
F27 Ave 123.1235 401.5247 314.3 810.7268 167.79662 754.387768 327.1741 102.63363
STD 163.9937 98.16459 20.066 106.1199 78.160605 152.113482 22.10388 48.660546
Rank(Test) 2 (+) 6 (+) 4 (+) 8 (+) 3 (≈) 7 (+) 5 (+) 1
F28 Ave 102.1429 212.7639 83.45 122.67393 193.90902 540.095791 62.41042 53.668553
STD 81.25536 205.9728 101.11 209.75633 39.053480 211.13330 37.52588 49.305029
Rank(Test) 4 (+) 7 (+) 3 (+) 5 (+) 6 (+) 8 (+) 2 (+) 1
F29 Ave 43.14216 812.2943 861.42 849.36873 134.07588 817.243657 504.8521 37.80675
STD 84.48573 191.5134 125.81 114.11353 83.90841 142.520364 41.80736 56.383689
Rank(Test) 2 (≈) 5 (+) 8 (+) 7 (+) 3 (+) 6 (+) 4 (+) 1

Average Rank 2.666667 5.833333 4.5 7 4 7.333333 3 1.666667


Overall Rank 2 6 5 7 4 8 3 1
+/−/≈ 3/1/2 6/0/0 6/0/0 6/0/0 4/1//1 6/0/0 5/1/0 36/3/3

Bold values in this table indicate the best result achieved by a specific method (GWO, CS, PSO, etc.) for a considered function (F24, F25, etc.).
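The LF-based jumps discussed in Sections 4.1.1-4.1.3 — long explorative jumps at low values of the stability index, shorter exploitative steps at high values — can be sketched with the Mantegna algorithm, a standard way of drawing Lévy-stable step lengths. This is a generic illustration under our own naming, not the exact LGWO operator:

```python
import math
import random

def levy_step(beta: float, rng: random.Random) -> float:
    """Draw one Levy-distributed step via the Mantegna algorithm.

    beta is the stability index (0 < beta <= 2): smaller beta gives
    heavier tails (longer, more explorative jumps), while beta close
    to 2 yields shorter, more exploitative steps.
    """
    num = math.gamma(1 + beta) * math.sin(math.pi * beta / 2)
    den = math.gamma((1 + beta) / 2) * beta * 2 ** ((beta - 1) / 2)
    sigma_u = (num / den) ** (1 / beta)
    u = rng.gauss(0.0, sigma_u)   # numerator sample
    v = rng.gauss(0.0, 1.0)       # denominator sample
    return u / (abs(v) ** (1 / beta))

rng = random.Random(42)
steps = [levy_step(1.5, rng) for _ in range(1000)]
```

Most draws are small steps, but the heavy tail occasionally produces a very long jump; it is this mixture that the text credits for balancing local search with escapes from LO.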

4.1.4. Convergence analysis

The convergence features of LGWO and the other approaches in handling the F01-F07 functions are compared to those of GWO and PSO in Fig. 10. The PSO is selected because it often serves as a benchmark method with superior convergence styles in the literature [49]. A convergence window is also provided in Fig. 10. It identifies the intervals in which LGWO outperforms GWO. From Fig. 10, it can be found that LGWO converges to superior results as time continues. Over more iterations, LGWO can approximate more accurate solutions in the vicinity of the optimum solutions. In addition, accelerated convergence trends can be realized in the curves of LGWO compared to GWO. This trend confirms that LGWO can lay emphasis on further exploitation and local search in the concluding steps. These plots indicate that the LGWO can effectively improve the fitness of all wolves and exploit enhanced results.

For the F09-F16 benchmarks, the convergence plots are exposed in Fig. 11. According to Fig. 11, LGWO still outperforms GWO in terms of convergence rate in solving F09, F10, F11, F14, F15, F19, F20, F21, and F23. By considering the acceleration of GWO and PSO, the plots of LGWO confirm that it can handle exploration of the search space more quickly.

Based on these figures, it can be discovered that the modified Lévy-based operators and the GS strategy can deepen the searching capabilities of GWO. The proposed LGWO algorithm demonstrates a more efficient performance compared to the other methods, with better convergence trends.

4.2. Experiment results and analysis on benchmark set 2

In this section, 30 IEEE CEC 2014 test beds are utilized to investigate the performance of LGWO. Table 9 reviews some details of these functions, and more about them can be obtained from [74]. These problems have many unique features. In these problems, the search range is [−100,100] and the dimension is fixed to 30.

To judge the efficiency of LGWO, the solutions are compared with some state-of-the-art optimizers: GWO [23]; SOS [75], which mimics the cooperative activities perceived amongst organisms in nature and performs three core phases (mutualism, commensalism, and parasitism); evolution strategy with covariance matrix adaptation (CMA-ES) [74], a well-known evolutionary method; CS [47], which has been improved by the LF random walk procedure; GSA [64], which simulates interactions among masses based on the laws of physics; and differential evolution based on covariance and bimodal distribution (CoBiDE) [76], an amended DE variant.

The parameters of the optimizers are set as in their original articles; hence, μ is set to [N/2] and σ is set to 0.3 in CMA-ES [74]; ps is set to 0.5 and pb is set to 0.4 in CoBiDE [76], and its mutation and crossover operations are set as in [76]; for CS, β is set to 1.50 and p0 is set to 0.25, as in [47]; and for GSA, G0 is equal to 100 and α is equal to 20 [64]. It should be noted that SOS has no initial parameters [75].

The dimension of the test cases is fixed to 30 and the maximum number of function evaluations (NFE) is 3 × 10^5. For a fair assessment, the benchmarks have been tested under the condition of the same NFE and maximum generation (MXG). Hence, the population size has been set to 40 in SOS, since it utilizes four stages, to 75 in CS, and to 150 in the GSA, CoBiDE, and CMA-ES algorithms. The obtained results are exposed in Tables 10–13. For each task, the error values (f(x) − f(x0)) are compared during 30 independent trials. Note that x is the best result when a method ends and x0 is the global optimum. Then, the methods are ranked based on the average values and the overall ranks are compared. The statistical tests of experiment 2 are completed similarly to those of experiment 1.

For the unimodal benchmarks, FC01 to FC03, the statistical results of the compared techniques are reported in Table 10. The results in Table 10 reveal that LGWO has an improved efficacy compared to GWO and the rest of the optimizers in tackling rotated unimodal problems in 30 dimensions. It is also seen that the CoBiDE, CS, GWO, CMA-ES, GSA, and SOS methods can be placed at the next ranks, respectively. Moreover, GWO's performance was mediocre compared to LGWO, CoBiDE, and CS. Note that the CS has LF-based searching mechanisms as well.

According to the findings, one of the main features of LGWO is that it has a promising exploration power compared to GWO. The main reason is that the LF-based walks can improve the competency of wolves in discovering the fruitful areas of the fitness basins. The results on these benchmarks indicate that the LGWO can show an efficient performance for unimodal problems.

For the multimodal test beds, FC04 to FC16, statistical outcomes are reported in Table 11. From Table 11, it is observed that the performance of LGWO appears comparable, but somewhat improved compared to the other approaches. The LGWO ranks first on the shifted rotated functions FC06, FC07, FC10, FC13, FC15, and FC16. However, LGWO

Fig. 10. Convergence curves of the LGWO, GWO, and PSO algorithms for F01-F07 test cases.

Table 9
Brief description of the CEC2014 benchmark Functions (OPT: Optimum, UM: Unimodal, MM: Multimodal, H: Hybrid, CP: Composition, NS: Non-separable, S: Separable, R:
Rotated, AS: Asymmetrical, NSS: Non-separable subcomponents, DFLB: Different features nearby various LO, DFVS: Different features for diverse variables subcomponents).

No. Name Properties OPT

FC01 Rotated high conditioned elliptic Function UM,NS, R, Quadratic ill-conditioned 100
FC02 Rotated bent cigar Function UM,NS, R, Smooth but narrow ridge 200
FC03 Rotated discus Function UM,NS, R, One sensitive direction 300
FC04 Shifted and rotated Rosenbrock’s Function MM, NS, R, Having a narrow valley from LO to 400
global peak
FC05 Shifted and rotated Ackley’s Function MM, NS, R 500
FC06 Shifted and rotated Weierstrass Function MM, NS, R, Continuous but differentiable just 600
on some points
FC07 Shifted and rotated Griewank’s Function MM, NS, R 700
FC08 Shifted Rastrigin’s Function MM, S, Have so many LO 800
FC09 Six Hump Camel Back MM, NS, Have so many LO 900
FC10 Shifted and rotated Rastrigin’s Function MM, S, R, Have so many LO and second 1000
superior LO is not close to the global best
FC11 Shifted and rotated Schwefel’s Function MM, NS, R, Have so many LO and second 1100
superior LO is not close to the global best
FC12 Shifted and rotated Katsuura Function MM, NS, R, Continuous yet not differentiable 1200
FC13 Shifted and rotated HappyCat Function MM, NS, R 1300
FC14 Shifted and rotated HGBat Function MM, NS, R 1400
FC15 Shifted and rotated Expanded Griewank’s plus MM, NS, R 1500
Rosenbrock’s Function
FC16 Shifted and rotated Expanded Scaffer’s F6 MM, NS, R 1600
Function
FC17 Hybrid Function 1 (N = 3) H, MM or UM, NSS, DFVS 1700
FC18 Hybrid Function 2 (N = 3) H, MM or UM, NSS, DFVS 1800
FC19 Hybrid Function 3 (N = 4) H, MM or UM, NSS, DFVS 1900
FC20 Hybrid Function 4 (N = 4) H, MM or UM, NSS, DFVS 2000
FC21 Hybrid Function 5 (N = 5) H, MM or UM, NSS, DFVS 2100
FC22 Hybrid Function 6 (N = 5) H, MM or UM, NSS, DFVS 2200
FC23 Composition Function 1 (N = 5) CP, MM, NS, AS, DFLB 2300
FC24 Composition Function 2 (N = 3) CP, MM, NS, DFLB 2400
FC25 Composition Function 3 (N = 3) CP, MM, NS, AS, DFLB 2500
FC26 Composition Function 4 (N = 5) CP, MM, NS, AS, DFLB 2600
FC27 Composition Function 5 (N = 5) CP, MM, NS, AS, DFLB 2700
FC28 Composition Function 6 (N = 5) CP, MM, NS, AS, DFLB 2800
FC29 Composition Function 7 (N = 3) CP, MM, NS, AS, DFLB, DFVS 2900
FC30 Composition Function 8 (N = 3) CP, MM, NS, AS, DFLB, DFVS 3000
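The "shifted and rotated" construction that recurs throughout Table 9 composes a base function with a shift vector o, a rotation matrix M, and a bias equal to the listed optimum: F(x) = f(M(x − o)) + bias. The official CEC2014 suite uses fixed stored shift and rotation data; the 2-D sketch below uses a simple rotation matrix purely for illustration:

```python
import math

def make_shifted_rotated(base, shift, angle, bias):
    """Build F(x) = base(M (x - shift)) + bias, with M a 2-D rotation
    by the given angle. The global optimum moves to x = shift, where
    F attains the bias value."""
    c, s = math.cos(angle), math.sin(angle)
    def F(x):
        dx = [x[0] - shift[0], x[1] - shift[1]]
        z = [c * dx[0] - s * dx[1], s * dx[0] + c * dx[1]]  # z = M dx
        return base(z) + bias
    return F

# Rastrigin as the base function, biased to 800 as for FC08 in Table 9.
rastrigin = lambda z: sum(v * v - 10 * math.cos(2 * math.pi * v) + 10 for v in z)
F = make_shifted_rotated(rastrigin, shift=[1.5, -2.0], angle=0.7, bias=800.0)
print(F([1.5, -2.0]))  # → 800.0 (the shifted optimum attains the bias)
```

Because the rotation mixes coordinates, separable search strategies lose their advantage, which is why the table distinguishes separable (S) from rotated (R) and non-separable (NS) cases.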

Fig. 11. Convergence curves of the LGWO, GWO, and PSO algorithms for the selected test cases.

Table 10
Statistical results for CEC2014 unimodal test beds.

No. Metrics SOS GSA CS CMA-ES CoBiDE GWO LGWO

FC01 Ave 6.9464E + 07 1.5688E + 07 2.2857E + 05 9.4219E + 04 5.8936E + 00 7.6852E + 03 5.2384E + 00


STD 2.9152E + 07 3.5776E + 06 7.8599E + 02 7.8851E + 04 4.8893E + 00 3.1896E + 02 3.8516E + 00
Rank(test) 7(+) 6(+) 5(+) 4(+) 2(+) 3(≈) 1
FC02 Ave 3.6133E + 09 8.8850E + 03 1.3853E + 02 2.5541E + 10 6.8552E − 02 3.6812E + 04 0
STD 7.7578E + 08 1.6241E + 03 4.7145E + 01 3.8502E + 09 7.1102E − 03 6.8900E + 04 0
Rank(test) 6(+) 4(+) 3(−) 7(+) 2(≈) 5(+) 1
FC03 Ave 2.3789E + 04 7.3158E + 04 1.0476E + 04 1.4469E + 04 5.9536E − 06 2.6852E + 04 1.4118E − 01
STD 1.3521E + 04 4.1273E + 03 5.5381E − 01 5.6612E + 03 6.1587E − 06 7.1130E + 03 2.2357E − 02
Rank(test) 5(+) 7(+) 3(≈) 4(+) 1(−) 6(+) 2

Average rank 6 5.666667 3.666667 5 1.666667 4.666667 1.333333


Overall rank 7 6 3 5 2 4 1
Overall +/−/≈ 3/0/0 3/0/0 1/1/1 3/0/0 1/1/1 2/0/1 13/2/3

Bold values in this table indicate the best result achieved by a specific method (SOS, GSA, CS, etc.) for a considered function (FC01, FC02, etc.).

ranks worse than GSA, CMA-ES, and CS on the shifted rotated FC05 but still performs better than GWO. According to the overall ranking results, LGWO can be considered one of the best techniques compared to the other contending optimizers. Overall, this leads one to recognize that LGWO can show an acceptable performance on shifted rotated cases. According to Simon's opinions, discussed in [77], it is not generally meaningful to announce that a metaheuristic optimizes better, or worse, on rotated functions. Based on the overall statistical results in the last line of Table 11, LGWO's results can be significantly better than those of GWO, GSA, CS, and the other techniques in 57% of the comparisons.

The used LF distribution can conform to the exploration and, on occasion, the exploitation, which ensures that the LGWO can jump out of the LO and detect the fruitful areas of the fitness basins. Accordingly, the utilized LF-based searching patterns can increase the explorative capabilities of LGWO in dealing with the multimodal tasks. In addition, LGWO can provide significantly better results than other techniques such as SOS and CMA-ES. It can explore more

Table 11
Statistical results for CEC2014 multimodal test beds.

No. Metrics SOS GSA CS CMA-ES CoBiDE GWO LGWO

FC04 Ave 4.1439E + 02 3.6846E + 02 7.0723E + 01 2.5216E + 03 1.8689E + 01 1.5783E + 02 3.4587E + 01


STD 7.3228E + 01 2.9801E + 01 2.1300E + 01 5.3596E + 02 5.1328E + 00 5.1708E − 01 1.5437E + 01
Rank(test) 6(+) 5(+) 3(≈) 7(+) 1(−) 4(+) 2
FC05 Ave 2.0685E + 01 1.9999E + 01 2.0154E + 01 2.0000E + 01 2.0615E + 01 2.1062E + 01 2.0552E + 01
STD 5.3856E − 01 4.3911E − 03 2.8509E − 01 2.6288E − 05 3.9885E − 01 3.2385E − 02 3.1102E − 02
Rank(test) 6(+) 1(−) 3(+) 2(−) 5(+) 7(+) 4
FC06 Ave 2.4822E + 01 2.8755E + 01 2.4762E + 01 4.0855E + 01 3.0283E + 01 1.1518E + 01 9.7202E + 00
STD 1.9335E + 00 2.4932E + 00 4.1864E + 00 2.1285E + 00 1.8763E + 00 3.4809E−02 3.4086E + 00
Rank(test) 4(−) 5(−) 3(≈) 7(≈) 6(≈) 2(−) 1
FC07 Ave 3.3820E + 01 1.9852E − 04 7.4485E − 02 2.3115E + 02 2.9522E − 05 6.1837E + 00 0
STD 8.0225E + 00 2.0184E − 05 6.4093E − 02 2.8258E + 01 2.9005E − 02 2.2355E − 03 0
Rank(test) 6(+) 3(≈) 4(≈) 7(+) 2(≈) 5(≈) 1
FC08 Ave 8.0952E + 01 1.4012E + 02 7.1183E + 01 2.8308E + 02 2.8009E + 01 7.9273E + 01 2.9041E + 01
STD 7.8952E + 00 8.4125E + 00 1.1201E + 01 2.2058E + 01 1.2985E + 00 2.6646E−01 2.6399E + 01
Rank(test) 5(≈) 6(−) 3(−) 7(−) 1(−) 4(≈) 2
FC09 Ave 1.7262E + 02 1.6489E + 02 1.7805E + 02 3.2810E + 02 1.7844E + 02 8.6912E + 01 8.1445E + 01
STD 1.8996E + 01 1.2005E + 01 3.4720E + 01 7.6521E + 01 1.8089E + 01 4.8105E−01 3.1568E − 01
Rank(test) 4(+) 3(+) 5(−) 7(+) 6(+) 2(≈) 1
FC10 Ave 2.3950E + 03 3.3531E + 03 2.0203E + 03 2.6105E + 02 2.5822E + 03 2.1463E + 03 2.0025E + 03
STD 1.1958E + 02 3.1885E + 02 1.9374E + 02 1.0589E + 02 1.8727E + 02 2.8312E + 00 4.5224E + 02
Rank(test) 5(+) 7(+) 3(≈) 1(+) 6(+) 4(+) 2
FC11 Ave 4.4831E + 03 4.0567E + 03 4.4904E + 03 1.6864E + 02 5.6353E + 03 2.0121E + 03 2.0017E + 03
STD 4.0843E + 02 4.2429E + 02 3.3801E + 02 1.9833E + 02 2.3537E + 02 6.1086E + 00 6.4105E + 00
Rank(test) 5(+) 4(+) 6(+) 1(+) 7(+) 3(+) 2
FC12 Ave 7.3655E − 01 9.0965E − 02 8.1058E − 01 3.0284E − 01 1.0037E + 00 1.5828E − 01 9.2844E − 02
STD 2.8584E − 01 1.2252E − 03 2.7821E − 01 2.1796E + 00 1.2382E − 01 1.9304E − 03 2.9824E − 01
Rank(test) 5(+) 1(−) 6(−) 4(+) 7(−) 3(+) 2
FC13 Ave 6.8510E−01 4.0259E−01 4.1731E−01 5.5079E + 00 5.5632E−01 4.5388E−01 3.8903E−01
STD 2.0744E + 00 3.5520E−02 4.4655E−02 3.0711E−01 5.6633E−02 7.7562E−03 2.4412E−02
Rank(test) 6(+) 2(+) 3(≈) 7(+) 5(+) 4(+) 1
FC14 Ave 8.3855E + 00 2.3001E−01 5.1779E−01 7.5318E + 01 3.4235E−01 7.2281E−01 4.2998E−01
STD 4.2866E + 00 2.1930E−02 2.6524E−02 8.0755E + 00 2.5744E−02 5.1955E−02 3.7201E−02
Rank(test) 6(+) 1(≈) 4(−) 7(+) 2(≈) 5(+) 3
FC15 Ave 2.5625E + 02 1.2559E + 01 1.3177E + 01 1.0215E + 04 1.5902E + 01 1.7446E + 01 7.5225E + 00
STD 2.1086E + 02 1.9874E + 00 1.8683E + 00 3.2423E + 04 2.2552E + 00 1.4473E − 01 4.0864E + 00
Rank(test) 6(+) 2(+) 3(−) 7(+) 4(−) 5(+) 1
FC16 Ave 1.2023E + 01 1.4804E + 01 1.2314E + 01 1.3785E + 01 1.1005E + 01 1.0852E + 01 1.0630E + 01
STD 3.6985E − 01 2.4403E − 01 1.5985E − 01 5.3125E − 01 2.6522E − 01 1.9525E − 02 2.1370E − 02
Rank(test) 4(+) 7(+) 5(+) 6(+) 3(+) 2(≈) 1

Average rank 5.230769 3.615385 3.923077 5.384615 4.230769 3.846154 1.769231


Overall rank 6 2 4 7 5 3 1
Overall +/−/≈ 11/1/1 7/4/2 3/5/5 10/2/1 6/4/3 8/1/4 45/17/16

Bold values in this table indicate the best result achieved by a specific method (SOS, GSA, CS, etc.) for a considered function (FC04, FC05, etc.).
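The pairwise (+)/(−)/(≈) marks in these tables come from statistical tests over the 30 trial results; the exact test configuration is not restated here, so the sketch below assumes a two-sided Wilcoxon rank-sum test with a normal approximation (the function name and the '~' symbol for ≈ are ours):

```python
import math

def ranksum_mark(a, b, alpha=0.05):
    """Compare two samples of error values (minimization) with a
    two-sided Wilcoxon rank-sum test (normal approximation; ties are
    broken by sample label for simplicity). Returns '+' if sample a
    is significantly better (smaller), '-' if significantly worse,
    and '~' if no significant difference is detected."""
    n1, n2 = len(a), len(b)
    pooled = sorted([(v, "a") for v in a] + [(v, "b") for v in b])
    r1 = sum(i + 1 for i, (_, tag) in enumerate(pooled) if tag == "a")
    mu = n1 * (n1 + n2 + 1) / 2
    sd = math.sqrt(n1 * n2 * (n1 + n2 + 1) / 12)
    z = (r1 - mu) / sd
    p = math.erfc(abs(z) / math.sqrt(2))  # two-sided p-value
    if p >= alpha:
        return "~"
    return "+" if z < 0 else "-"  # low ranks = small errors = better

lgwo = [i / 100 for i in range(1, 31)]       # 30 small error values
other = [1 + i / 100 for i in range(1, 31)]  # 30 clearly larger errors
print(ranksum_mark(lgwo, other))  # → +
```

With 30 trials per method, the normal approximation to the rank-sum statistic is generally considered adequate, which is why such pairwise marks can be computed without exact tables.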

Table 12
Statistical results for CEC2014 hybrid test cases.

No. Algorithms SOS GSA CS CMA-ES CoBiDE GWO LGWO

FC17 Ave 5.5865E + 06 7.2880E + 05 1.2439E + 05 5.4855E + 03 1.6535E + 03 5.3952E + 04 1.6412E + 03


STD 3.6851E + 06 1.2488E + 05 3.0053E + 03 3.6247E + 03 1.1285E + 02 2.0886E + 03 6.9852E + 02
Rank(test) 7(+) 6(+) 5(+) 3(+) 2(−) 4(+) 1
FC18 Ave 4.8625E + 05 3.8621E + 02 1.4215E + 03 1.5185E + 09 4.8155E + 01 1.1455E + 03 1.1397E + 03
STD 2.2476E + 05 1.1468E + 02 2.4209E + 01 3.9258E + 08 8.4600E + 00 3.8632E + 01 2.9023E + 01
Rank(test) 6(+) 2(+) 5(≈) 7(+) 1(−) 4(−) 3
FC19 Ave 4.0660E + 01 1.5678E + 02 1.1816E + 01 2.9845E + 02 2.0292E + 01 1.0789E + 01 9.8420E + 00
STD 2.2358E + 01 2.4896E + 01 5.3125E − 01 4.2523E + 01 9.8552E − 01 2.0721E − 03 1.9852E − 02
Rank(test) 5(+) 6(+) 3(+) 7(+) 4(≈) 2(−) 1
FC20 Ave 1.5949E + 04 8.2447E + 04 1.3593E + 02 4.6121E + 03 3.5379E + 01 8.2490E + 03 8.1683E + 01
STD 1.0025E + 04 1.3421E + 04 3.9858E + 01 3.8802E + 03 4.8922E + 00 5.6202E + 01 4.7058E + 01
Rank(test) 6(+) 7(+) 3(−) 4(+) 1(−) 5(≈) 2
FC21 Ave 7.8598E + 05 1.7879E + 05 1.6696E + 03 6.8602E + 03 7.4978E + 02 6.8250E + 05 1.5413E + 03
STD 6.0551E + 05 3.1904E + 04 1.8149E + 02 2.7562E + 03 1.5436E + 02 2.1136E + 04 2.0715E + 03
Rank(test) 7(+) 5(+) 3(−) 4(≈) 1(−) 6(+) 2
FC22 Ave 5.4507E + 02 9.5109E + 02 3.1138E + 02 1.6104E + 03 2.5742E + 02 3.5737E + 02 2.4171E + 02
STD 1.8229E + 02 1.8234E + 02 9.1532E + 01 2.9209E + 02 7.6962E + 01 2.8799E + 01 2.3741E + 01
Rank(test) 5(+) 6(+) 3(≈) 7(≈) 2(+) 4(−) 1

Average rank 6 5.333333 3.666667 5.333333 1.833333 4.166667 1.666667


Overall rank 7 5 3 5 2 4 1
Overall +/−/≈ 6/0/0 6/0/0 2/2/2 4/0/2 1/4/1 2/3/1 21/9/6

Bold values in this table indicate the best result achieved by a specific method (SOS, GSA, CS, etc.) for a considered function (FC17, FC18, etc.).
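The reporting protocol of experiment 2 — error values f(x) − f(x0) collected over 30 independent trials and summarized by Ave and STD — can be sketched as follows. The stand-in optimizer is purely illustrative, and since the paper does not state whether the sample or population standard deviation is used, the sample form is assumed:

```python
import math
import random

def summarize_errors(optimizer, f_opt, trials=30):
    """Collect error values f(x) - f(x0) over independent trials and
    return their average and sample standard deviation."""
    errors = [optimizer() - f_opt for _ in range(trials)]
    ave = sum(errors) / trials
    std = math.sqrt(sum((e - ave) ** 2 for e in errors) / (trials - 1))
    return ave, std

# Stand-in "optimizer" returning near-optimal objective values around
# the FC01 optimum of 100 (see Table 9).
rng = random.Random(0)
near_optimal = lambda: 100.0 + abs(rng.gauss(0.0, 1e-3))
ave, std = summarize_errors(near_optimal, f_opt=100.0)
```

Ranking methods by these averaged errors, rather than by raw objective values, makes results comparable across functions whose optima sit at different biases (100, 200, ..., 3000 in Table 9).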

Table 13
Statistical results for CEC2014 composition test beds.

No. Algorithms SOS GSA CS CMA-ES CoBiDE GWO LGWO

FC23 Ave 3.4236E + 02 2.0000E + 02 3.4374E + 02 5.7912E + 02 3.1175E + 02 3.2348E + 02 2.0000E + 02


STD 2.1852E + 01 2.3411E − 5 8.1932E − 02 4.9411E + 01 4.7136E − 07 1.6914E − 01 2.8552E − 01
Rank(test) 5(+) 1(−) 6(−) 7(+) 3(−) 4(−) 1
FC24 Ave 2.3617E + 02 2.0008E + 02 2.2139E + 02 2.1203E + 02 2.1839E + 02 2.0000E + 02 2.0000E + 02
STD 9.8953E + 00 7.0555E − 02 1.4508E + 00 7.4908E + 00 1.6855E + 01 2.2863E − 04 2.1785E − 05
Rank(test) 7(+) 3(≈) 6(+) 4(≈) 5(+) 1(≈) 1
FC25 Ave 2.1495E + 02 2.0000E + 02 2.0904E + 02 2.1207E + 02 2.0068E + 02 2.0032E + 02 2.0000E + 02
STD 3.4960E + 00 4.9228E − 07 6.1156E − 01 2.9699E + 00 7.3998E − 03 4.8867E − 03 3.9810E − 03
Rank(test) 7(+) 1(−) 5(≈) 6(+) 4(≈) 3(+) 1
FC26 Ave 1.0093E + 02 1.6935E + 02 1.0087E + 02 1.2533E + 02 1.0209E + 02 1.0021E + 02 1.0007E + 02
STD 2.9009E − 02 2.9081E + 01 4.1891E − 02 5.5098E + 01 1.6960E − 01 3.1820E − 02 3.9998E − 02
Rank(test) 4(≈) 7(+) 3(−) 6(+) 5(≈) 2(−) 1
FC27 Ave 4.9605E + 02 7.6900E + 02 4.1820E + 02 1.0692E + 03 1.0054E + 03 5.4382E + 02 3.9852E + 02
STD 1.5103E + 02 5.6821E + 02 5.6852E + 00 2.3008E + 02 6.7082E + 01 1.9629E + 00 1.0029E + 00
Rank(test) 3(−) 5(+) 2(+) 7(+) 6(+) 4(≈) 1
FC28 Ave 1.3208E + 03 7.6532E + 02 9.1293E + 02 2.7948E + 03 3.5157E + 02 1.6008E + 03 2.0392E + 02
STD 1.0582E + 02 3.0818E + 02 3.9422E + 01 5.9168E + 02 8.1142E + 00 5.5922E − 01 1.8553E − 02
Rank(test) 5(+) 3(+) 4(+) 7(+) 2(+) 6(+) 1
FC29 Ave 2.1568E + 04 2.0005E + 02 1.6858E + 03 3.5228E + 04 2.5652E + 02 9.1764E + 03 2.4683E + 02
STD 1.1852E + 03 4.5785E − 02 2.2585E + 02 5.3369E + 03 8.1125E − 01 1.8147E + 01 2.1915E + 01
Rank(test) 6(+) 1(−) 4(≈) 7(+) 3(+) 5(+) 2
FC30 Ave 3.7319E + 04 2.3115E + 04 3.5887E + 03 6.4777E + 05 7.3153E + 02 3.1357E + 04 3.0228E + 04
STD 2.1242E + 04 2.4398E + 04 6.5226E + 02 1.3141E + 05 9.8485E + 01 6.2146E + 02 5.1873E + 02
Rank(test) 6(+) 3(+) 2(≈) 7(+) 1(−) 5(≈) 4

Average rank 5.375 3 4 6.375 3.625 3.75 1.5


Overall rank 6 2 5 7 3 4 1
Overall +/−/≈ 6/1/1 4/3/1 3/2/3 7/0/1 4/2/2 3/2/3 27/10/11

Bold values in this table indicate the best result achieved by a specific method (SOS, GSA, CS, etc.) for a considered function (FC23, FC24, etc.).
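The (+), (−), and (≈) marks in the "Rank(test)" rows of these tables come from pairwise non-parametric significance tests. As an illustrative sketch of how such a mark can be derived, the snippet below applies a Wilcoxon rank-sum test (normal approximation, no tie correction) at the 5% level to made-up error samples; the data and threshold are assumptions for demonstration, not the paper's actual runs:

```python
import math
import numpy as np

def ranksum_pvalue(x, y):
    """Two-sided Wilcoxon rank-sum test via the normal approximation
    (adequate for ~30 samples per side; assumes no ties)."""
    n1, n2 = len(x), len(y)
    pooled = np.concatenate([x, y])
    order = pooled.argsort()
    ranks = np.empty(n1 + n2)
    ranks[order] = np.arange(1, n1 + n2 + 1)  # 1-based ranks
    r1 = ranks[:n1].sum()                      # rank sum of the first sample
    mean = n1 * (n1 + n2 + 1) / 2.0
    sd = math.sqrt(n1 * n2 * (n1 + n2 + 1) / 12.0)
    z = (r1 - mean) / sd
    return math.erfc(abs(z) / math.sqrt(2))    # two-sided p-value

rng = np.random.default_rng(seed=0)
# Hypothetical final-error samples over 30 runs of two optimizers on one
# benchmark (illustrative values only).
err_lgwo = rng.normal(2.0e2, 1.0, 30)
err_peer = rng.normal(3.2e2, 2.0e1, 30)

p = ranksum_pvalue(err_lgwo, err_peer)
# '+' : peer significantly worse than LGWO, '-' : significantly better,
# '~' : no significant difference at the 5% level (mirroring the tables).
if p >= 0.05:
    mark = "~"
elif err_peer.mean() > err_lgwo.mean():
    mark = "+"
else:
    mark = "-"
print(mark)
```

Here the two samples barely overlap, so the test reports a significant difference and the peer is marked "+".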

precise solutions. These features can be attributed either to the adaptive searching behaviors in GWO or to the enhanced LF-based random walks in LGWO.

The overall statistical results of the evaluated algorithms for the FC17-FC22 hybrid test beds are reported in Table 12. These hybrid benchmarks are more complex than the two earlier categories; hence, the evaluated optimizers can hardly determine the global optimum of these problems.

From Table 12, it is seen that LGWO still provides the best-obtained results on FC17, FC19, and FC22. Meanwhile, LGWO achieved the first rank in treating the hybrid benchmarks. In addition, the superiority of LGWO is statistically significant in most of the cases. The efficacy of LGWO is satisfactory compared to SOS, GSA, and CMA-ES. Note that LGWO still inherits the foremost features and advantages of the basic GWO. It can be perceived that LGWO can realize competitive solutions with a promising performance on complicated tasks. From Table 12, it can also be perceived that LGWO is more fruitful than GWO in dealing with hybrid functions such as FC17, FC20, FC21, and FC22. An implication of these findings is that a suitable balance between the explorative and exploitative modes of the hunting phases has been achieved in LGWO. The proposed algorithmic modifications enhanced the searching capacities of GWO. It is also seen that CoBiDE outperforms the other optimizers on FC18, FC20, and FC21 and is ranked as the second best technique.

Table 13 reflects the results of the different optimizers for the composition functions. From Table 13, it can be realized that the efficiency of LGWO is acceptable compared to SOS, CMA-ES, CS, and GWO based on the average and STD values in dealing with the FC23, FC24, FC25, FC26, FC27, and FC28 test functions. In addition, it was recognized that GSA can outperform the other techniques as the second best method, especially on FC23, FC25, and FC29. In comparison with GSA, LGWO still performs meaningfully better on four cases (FC26, FC27, FC28, and FC30) and similarly on the FC24 function, which has different features nearby its LO. It is seen that the GWO algorithm, which is the fourth best method, performs better than GSA only on the FC24, FC26, and FC27 multimodal problems, while the results of LGWO are better than those of the basic GWO in all eight composition cases. With regard to CoBiDE, LGWO can significantly perform better than

Table 14
Details of the CEC2011 test cases. A complete description of these tasks can be obtained from [78] (D: Dimension, C: Constraints, B: Bound constrained, IE: Inequality, LE: Linear Equality).

No. ID Problem C D

1 T01 Parameter estimation for frequency-modulated (FM) sound waves B 6
2 T05 Tersoff potential function minimization (instance 1) B 30
3 T06 Tersoff potential function minimization (instance 2) B 30
4 T07 Spread spectrum radar polyphase code design B 20
5 T09 Large scale transmission pricing LE 126
6 T10 Circular antenna array design B 12
7 T11.1 Dynamic economic dispatch (instance 1) IE 120
8 T11.2 Dynamic economic dispatch (instance 2) IE 216
9 T11.7 Static economic load dispatch (instance 5) IE 140
10 T11.8 Hydrothermal scheduling (instance 1) IE 96
11 T11.9 Hydrothermal scheduling (instance 2) IE 96
12 T11.10 Hydrothermal scheduling (instance 3) IE 96
13 T.12 Spacecraft trajectory optimization (Messenger) B 26
14 T.13 Spacecraft trajectory optimization (Cassini2) B 22

Table 15
Results of LGWO in comparison with basic algorithms (D: Dimension, S: Significant).

Problem D Metric GA S PSO S DE S ES S GWO S LGWO
T01 6 Ave 9.18E − 05 − 9.23E − 12 − 4.36E − 07 − 7.02E − 02 + 1.96E + 00 + 1.86E + 00
STD 5.34E − 06 2.81E − 15 3.21E − 08 3.18E − 03 2.85E − 02 1.67E − 03
Rank 3 1 2 4 6 5
T05 30 Ave −1.38E + 01 − −1.15E + 01 + −6.60E + 00 + −3.86E + 00 + −3.52E + 01 + −3.56E + 01
STD 2.76E + 00 3.21E + 00 1.79E + 00 1.97E + 00 1.24E + 00 2.34E − 01
Rank 3 4 5 6 2 1
T06 30 Ave −1.63E + 01 + −5.62E + 00 + −2.56E + 01 + −1.13E + 01 + −1.17E + 01 + −2.74E + 01
STD 7.98E − 01 2.84E − 01 3.41E − 01 3.80E − 01 3.96E + 00 2.57E + 00
Rank 3 6 2 5 4 1
T07 20 Ave 9.45E − 01 + 9.82E − 01 + 9.24E − 01 + 9.23E − 01 + 1.12E + 00 + 8.89E − 01
STD 2.41E − 04 5.32E − 03 2.51E − 02 6.98E − 03 8.58E − 02 3.71E − 01
Rank 4 5 3 2 6 1
T09 126 Ave 1.59E + 03 + 3.52E + 01 + 4.58E + 03 + 1.89E + 03 + 2.09E + 04 + 1.98E + 03
STD 5.84E + 00 1.91E + 00 9.68E + 02 8.10E + 00 1.87E + 01 9.59E + 02
Rank 2 1 5 3 6 4
T10 12 Ave −1.18E + 01 + −1.88E + 01 + −1.85E + 01 + −4.27E + 00 + −1.82E + 01 + −2.15E + 01
STD 6.78E + 00 4.21E + 00 6.73E + 00 1.21E + 00 6.27E + 00 1.02E − 02
Rank 5 2 3 6 4 1
T11.1 120 Ave 5.78E + 05 + 2.36E + 05 + 5.74E + 04 + 8.56E + 05 + 1.24E + 05 + 5.81E + 04
STD 1.64E + 03 4.42E + 03 3.64E + 03 4.73E + 03 2.56E + 03 8.15E + 02
Rank 5 4 1 6 3 2
T11.2 216 Ave 6.24E + 06 + 2.68E + 06 + 2.79E + 06 + 8.65E + 06 + 1.86E + 06 + 1.07E + 06
STD 5.57E + 05 3.14E + 05 1.32E + 05 3.19E + 05 7.61E + 04 4.09E + 03
Rank 5 3 4 6 2 1
T11.7 140 Ave 4.85E + 06 + 3.24E + 06 + 1.94E + 06 + 7.45E + 06 + 2.11E + 06 + 1.92E + 06
STD 7.64E + 04 5.79E + 04 8.02E + 05 3.40E + 04 2.78E + 05 3.68E + 04
Rank 5 4 2 6 3 1
T11.8 96 Ave 1.88E + 06 + 6.37E + 06 + 1.23E + 06 + 3.59E + 06 + 1.02E + 06 + 9.46E + 05
STD 6.35E + 04 4.41E + 05 6.68E + 04 5.81E + 04 3.12E + 04 4.20E + 03
Rank 4 6 3 5 2 1
T11.9 96 Ave 3.50E + 06 + 9.82E + 05 + 1.58E + 06 − 7.14E + 06 + 1.29E + 06 + 1.02E + 06
STD 4.01E + 05 1.55E + 04 7.65E + 04 7.63E + 04 1.91E + 05 1.00E + 05
Rank 5 1 4 6 3 2
T11.10 96 Ave 2.38E + 06 + 8.49E + 06 + 1.75E + 06 + 1.12E + 06 + 1.02E + 06 + 9.49E + 05
STD 3.36E + 04 2.54E + 04 5.50E + 04 1.81E + 04 5.47E + 04 6.03E + 03
Rank 5 6 4 3 2 1
T.12 26 Ave 1.59E + 01 + 7.24E + 01 + 1.29E + 01 + 6.89E + 01 + 1.62E + 01 + 1.20E + 01
STD 7.49E − 01 2.84E + 00 2.75E − 01 7.21E − 01 2.83E + 00 1.90E + 00
Rank 3 6 2 5 4 1
T.13 22 Ave 9.10E + 00 + 4.18E + 01 − 1.82E + 01 + 9.49E + 00 + 1.76E + 01 + 1.18E + 01
STD 3.21E − 02 3.73E + 00 8.25E − 01 4.83E − 02 8.41E + 00 2.84E + 00
Rank 1 6 5 2 4 3

Sum of the ranks 53 55 45 65 51 25


Average rank 3.785714 3.928571 3.214286 4.642857 3.642857 1.785714
Overall rank 4 5 2 6 3 1
Overall +/−/≈ 12/2/0 12/2/0 12/2/0 14/0/0 14/0/0 64/6/0

Bold values in this table indicate the best result achieved by a specific method (GA, PSO, DE, etc.) for a considered function (T01, T05, etc.).
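The GS (greedy selection) strategy referred to in this article keeps a newly generated position only when it improves the objective value. A toy sketch under that reading (the objective and positions are illustrative, not the paper's exact hunting update):

```python
import numpy as np

def greedy_select(wolf, candidate, f):
    """Greedy selection: accept the candidate position only if it improves
    fitness (minimization); otherwise the wolf keeps its current position."""
    return candidate if f(candidate) < f(wolf) else wolf

# Toy objective (sphere function) and positions, for illustration only.
sphere = lambda x: float(np.sum(x ** 2))
wolf = np.array([1.0, -2.0])

improving = np.array([0.5, -1.0])   # lower sphere value -> accepted
worsening = np.array([3.0, 4.0])    # higher sphere value -> rejected

new_pos = greedy_select(wolf, improving, sphere)   # candidate is kept
old_pos = greedy_select(wolf, worsening, sphere)   # wolf stays in place
```

This filter is what prevents a long, random Lévy jump from destroying a good solution: the jump is only committed when it pays off.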

other methods on four cases (FC24, FC27, FC28, and FC29) and perform at a similar level on two asymmetrical functions (FC25 and FC26).

Additionally, it was observed that a number of the performance differences between LGWO and its peers are not statistically significant, even though these optimizers do not have similar average values. For example, the performance difference between LGWO and CoBiDE is insignificant on the asymmetrical function FC26. This situation occurs because, when an optimizer such as CoBiDE is executed over a certain number of independent runs, there is a slight possibility of it becoming trapped in LO in some of those runs; in such runs, the algorithm returns a larger fitness cost. This large cost tends to inflate the overall average cost of CoBiDE.

Regarding the overall results in Table 13, LGWO demonstrates a satisfactory performance compared to the other competitors. However, according to the "no free lunch" theorem, no optimizer can outperform all previous metaheuristic algorithms (MAs) in all aspects or on every type of problem [72]. The proposed LGWO cannot be an exception either. The main reason behind the effectiveness of LGWO is that the LF-based jumps can effectively redistribute the search agents to enhance their diversity and to emphasize more explorative steps in case of immature convergence to LO. The new hunting operations have improved the quality of solutions and the searching capacities of GWO. Composition problems are suitable for measuring the LO escaping capabilities of different optimizers. According to the results, LGWO has an improved LO escaping capacity compared to the basic GWO. As in the first experiment, the results affirm that the embedded LF-based searching patterns can relatively alleviate the stagnation problems of GWO. In summary, the results of LGWO are promising or at least competitive with those of the other optimizers according to the optimality of solutions. In the case of LO stagnation, the LF-based motions and the GS strategy can successfully stimulate the exploration and exploitation tendencies of LGWO to become active again. A key feature of LGWO is that it can prevent wolves from falling into LO, optimize with more efficacy, and improve the wolves' searching (hunting) capabilities in challenging cases.

4.3. Experiment 3: results and analysis on real-world problems

In the present section, the efficacy of LGWO is evaluated using 14 real-world problems from IEEE CEC2011. It is worth noting that these investigations are conducted according to the guidelines of

Table 16
The obtained results of LGWO with 1.5 × 10^5 NFE.

Problem Best Median Mean Worst STD

T01 0.00000E + 00 1.22000E + 00 1.86000E + 00 1.11875E + 01 1.67000E − 03
T05 −3.68450E + 01 −3.62350E + 01 −3.56000E + 01 −2.99652E + 01 2.34000E − 01
T06 −2.92000E + 01 −2.86000E + 01 −2.74000E + 01 −2.23875E + 01 2.57000E + 00
T07 5.00000E − 01 6.23006E − 01 8.88955E − 01 1.05585E + 00 3.71000E − 01
T09 4.57000E + 02 9.86240E + 02 1.98000E + 03 2.15845E + 03 9.59000E + 02
T10 −2.17520E + 01 −2.15912E + 01 −2.15000E + 01 −1.85123E + 01 1.02000E − 02
T11.1 5.10000E + 04 5.45120E + 04 5.81280E + 04 6.15135E + 04 8.15000E + 02
T11.2 1.06000E + 06 1.06808E + 06 1.07342E + 06 1.08276E + 06 4.08927E + 03
T11.7 1.92372E + 06 1.92397E + 06 1.92407E + 06 2.06410E + 06 3.68112E + 04
T11.8 9.41381E + 05 9.43219E + 05 9.46331E + 05 9.49024E + 05 4.20179E + 03
T11.9 9.28199E + 05 9.67102E + 05 1.02354E + 06 1.02637E + 06 9.99832E + 04
T11.10 9.45203E + 05 9.48178E + 05 9.49277E + 05 9.51308E + 05 6.02851E + 03
T.12 7.62831E + 00 1.01782E + 01 1.19811E + 01 1.50488E + 01 1.89952E + 00
T.13 8.62711E + 00 9.19733E + 00 1.18372E + 01 1.71474E + 01 2.83750E + 00
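The LF-based exploration discussed in this section relies on heavy-tailed Lévy steps, which are commonly drawn with Mantegna's algorithm [60]. The sketch below assumes a stability exponent beta = 1.5, a typical choice in LF-based optimizers rather than a value confirmed by this paper:

```python
import math
import numpy as np

def levy_steps(n, beta=1.5, rng=None):
    """Draw n Lévy-flight step components via Mantegna's algorithm [60].

    beta is the stability exponent (0 < beta <= 2); 1.5 is an assumed,
    commonly used value, not necessarily the exact LGWO setting.
    """
    rng = rng or np.random.default_rng()
    sigma_u = (math.gamma(1 + beta) * math.sin(math.pi * beta / 2)
               / (math.gamma((1 + beta) / 2) * beta * 2 ** ((beta - 1) / 2))
               ) ** (1 / beta)
    u = rng.normal(0.0, sigma_u, n)
    v = rng.normal(0.0, 1.0, n)
    return u / np.abs(v) ** (1 / beta)

rng = np.random.default_rng(seed=1)
steps = levy_steps(20000, rng=rng)

# The heavy tail is what produces the occasional long, stagnation-breaking
# jumps: the largest step dwarfs the typical (median) step magnitude.
print(np.median(np.abs(steps)), np.abs(steps).max())
```

In an LGWO-style update, such steps would scale the move toward the leading wolves, with the greedy selection retaining the move only if it improves fitness.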

CEC2011 [78,79]. For details about these problems, the reader can refer to [78]. Table 14 reviews the main details of the used CEC2011 functions.

The performance of LGWO is compared with the standard GWO, GA, PSO, DE, and ES as well-established optimizers in the literature. For GA, real coding, roulette wheel selection, single-point crossover with a probability of 1, and a mutation probability of 0.001 are used. For DE, F and CR are 0.5. For PSO, the inertia weight w is 0.8 and the acceleration coefficients c1 and c2 are both 1.0. For ES, μ and λ are set to 50, and the mutation standard deviation σ is equal to 1. These parameters are also those recommended and set by Simon in [80]. Based on CEC2011, the population size of GA, PSO, DE, and ES is set to 50. In addition, all results are attained over 25 independent runs. The maximum NFE is limited to 1.5 × 10^5 for each case.

The obtained results are tabulated in Table 15. Note that the best results are bolded. The overall results of the Wilcoxon test and the ranking results are also embedded in Table 15.

Based on Table 15, it is seen that the conventional GA performs as the best method on the spacecraft trajectory optimization problem (T.13), the simple PSO shows the best performance in three cases (T01, T09, and T11.9), and the ES strategy returns the best results on none of the problems. The proposed LGWO performs best on the majority of the test cases (T05, T06, T07, T10, T11.2, T11.7, T11.8, T11.10, and T.12), while the basic GWO and DE cannot show the best performance on these cases. Note that the results of Wilcoxon's test affirm that the behavior of LGWO is significantly different from that of GWO, DE, PSO, GA, and ES in 91% of the comparisons. In PSO, the moving particles reveal stagnation behaviors and become similar to each other in some steps (loss of diversity). Hence, the basic PSO occasionally cannot recover its power and efficiency in solving some problems. On the other hand, it is seen that the proposed LGWO demonstrates a satisfactory performance in realizing these real-world tasks, whereas some of the results of GWO in this experiment are not desirable. In comparison with the GA, PSO, DE, GWO, and ES algorithms, LGWO's statistical results and ranking results indicate that it can be considered the best technique in this test; based on the overall ranks in Table 15, DE is the second best method on the test cases considered here.

The performance of LGWO in terms of the best, median, mean, worst, and STD of the results can be seen in Table 16. From Table 16, it can be concluded that the proposed LGWO attains satisfactory results in dealing with real-world problems.

It was observed from the last experiment that LGWO can perform better than GWO as a result of its LF-based exploration strategies. Therefore, as can be seen from the results on the T11.2, T11.7, and T11.8 problems, the possibility of stagnation at LO is considerably mitigated in LGWO. Based on |A| and the random jumps of LF, LGWO has an enhanced tendency for global search, and it can often put emphasis on a more extensive exploration of the search space. LGWO can also lay emphasis on further exploitation in the last steps.

5. Conclusions and future works

In this article, a modified GWO with LF-based operators was proposed to alleviate the stagnation problems of the basic optimizer. For the first test, LGWO was compared with GWO, PSO, CS, FA, DE, BA, and GSA based on the quality of solutions in realizing 29 test cases. The results verify that LGWO can obtain very competitive and even better results compared to GWO. Based on the ranking records, LGWO can find better-quality or competitive solutions, and it outperforms the GWO, PSO, DE, GSA, FA, BA, and CS optimizers as the best method. The convergence trends of LGWO are better than the equivalent curves for GWO and PSO. In addition, non-parametric statistical tests affirm that the optimality of solutions is significantly enriched. From the first experiment, it can be concluded that LGWO not only shows an efficient performance, but the stagnation behaviors of GWO are also considerably alleviated.

For the second test, 30 benchmarks from CEC2014 were employed to investigate the effectiveness of the proposed LGWO. The results show that LGWO outperforms GWO and several well-known optimizers due to the effects of the GS strategy and the LF-based patterns on exploration and exploitation. Moreover, LGWO was compared with the GA, PSO, and ES optimizers on 14 practical engineering problems. LGWO was capable of performing particularly well on several real-world engineering problems, and it outperformed the conventional GWO, DE, PSO, GA, and ES. From the results, we can conclude that the proposed LGWO is a simple, efficient optimizer that can be utilized as a powerful approach in dealing with both classic and real-world applications.

For future works, it is possible to design a discrete extension of LGWO. Future research may consist of employing LGWO to tackle specific industrial tasks. Also, the efficiency of LGWO will be compared with that of other GWO-based optimizers for solving various problems. To end with, we hope that this work will motivate other researchers who are working on new MAs and optimization concepts.

Acknowledgment

We would like to gratefully acknowledge the constructive comments and suggestions of the anonymous referees.

References

[1] Y. Atay, I. Koc, I. Babaoglu, H. Kodaz, Community detection from biological and social networks: a comparative analysis of metaheuristic algorithms, Appl. Soft Comput. 50 (2017) 194–211.

[2] S. Saremi, S.Z. Mirjalili, S.M. Mirjalili, Evolutionary population dynamics and grey wolf optimizer, Neural Comput. Appl. 26 (2015) 1257–1263.
[3] Y. Zhou, J. Wang, Y. Zhou, Z. Qiu, Z. Bi, Y. Cai, Differential evolution with guiding archive for global numerical optimization, Appl. Soft Comput. 43 (2016) 424–440.
[4] F. Zhong, H. Li, S. Zhong, A modified ABC algorithm based on improved-global-best-guided approach and adaptive-limit strategy for global optimization, Appl. Soft Comput. 46 (2016) 469–486.
[5] L. Wang, B. Yang, J. Orchard, Particle swarm optimization using dynamic tournament topology, Appl. Soft Comput. 48 (2016) 584–596.
[6] A.A. Heidari, R. Ali Abbaspour, A. Rezaee Jordehi, An efficient chaotic water cycle algorithm for optimization tasks, Neural Comput. Appl. 28 (2017) 57–85.
[7] A.A. Heidari, R. Ali Abbaspour, A. Rezaee Jordehi, Gaussian bare-bones water cycle algorithm for optimal reactive power dispatch in electrical power systems, Appl. Soft Comput. 57 (2017) 657–671.
[8] C. Wang, Y. Hou, The identification of electric load simulator for gun control systems based on variable-structure WNN with adaptive differential evolution, Appl. Soft Comput. 38 (2016) 164–175.
[9] M. Mahi, Ö.K. Baykan, H. Kodaz, A new hybrid method based on particle swarm optimization, ant colony optimization and 3-opt algorithms for traveling salesman problem, Appl. Soft Comput. 30 (2015) 484–490.
[10] S. Mirjalili, A. Lewis, Adaptive gbest-guided gravitational search algorithm, Neural Comput. Appl. 25 (2014) 1569–1584.
[11] S. Yılmaz, E.U. Küçüksille, A new modification approach on bat algorithm for solving optimization problems, Appl. Soft Comput. 28 (2015) 259–275.
[12] A. Baykasoğlu, F.B. Ozsoydan, Adaptive firefly algorithm with chaos for mechanical design optimization problems, Appl. Soft Comput. 36 (2015) 152–164.
[13] J. Huang, L. Gao, X. Li, An effective teaching-learning-based cuckoo search algorithm for parameter optimization problems in structure designing and machining processes, Appl. Soft Comput. 36 (2015) 349–356.
[14] Ş. Gülcü, H. Kodaz, A novel parallel multi-swarm algorithm based on comprehensive learning particle swarm optimization, Eng. Appl. Artif. Intell. 45 (2015) 33–45.
[15] T.T. Nguyen, D.N. Vo, The application of one rank cuckoo search algorithm for solving economic load dispatch problems, Appl. Soft Comput. 37 (2015) 763–773.
[16] G. Sun, A. Zhang, Y. Yao, Z. Wang, A novel hybrid algorithm of gravitational search algorithm with genetic algorithm for multi-level thresholding, Appl. Soft Comput. 46 (2016) 703–730.
[17] A.H. Gandomi, X.-S. Yang, A.H. Alavi, Mixed variable structural optimization using firefly algorithm, Comput. Struct. 89 (2011) 2325–2336.
[18] Y. Cai, J. Wang, Differential evolution with hybrid linkage crossover, Inf. Sci. 320 (2015) 244–287.
[19] S. Mirjalili, Dragonfly algorithm: a new meta-heuristic optimization technique for solving single-objective, discrete, and multi-objective problems, Neural Comput. Appl. 27 (2016) 1053–1073.
[20] R. Vafashoar, M.R. Meybodi, Multi swarm bare bones particle swarm optimization with distribution adaption, Appl. Soft Comput. 47 (2016) 534–552.
[21] J. Pickard, J. Carretero, V. Bhavsar, On the convergence and origin bias of the Teaching-Learning-Based-Optimization algorithm, Appl. Soft Comput. 46 (2016) 115–127.
[22] Z. Hu, Q. Su, X. Yang, Z. Xiong, Not guaranteeing convergence of differential evolution on a class of multimodal functions, Appl. Soft Comput. 41 (2016) 479–487.
[23] S. Mirjalili, S.M. Mirjalili, A. Lewis, Grey wolf optimizer, Adv. Eng. Software 69 (2014) 46–61.
[24] P. Bak, K. Sneppen, Punctuated equilibrium and criticality in a simple model of evolution, Phys. Rev. Lett. 71 (1993) 4083.
[25] A. Lewis, S. Mostaghim, M. Randall, Evolutionary population dynamics and multi-objective optimisation problems, in: Multi-Objective Optimization in Computational Intelligence: Theory and Practice, 2008, pp. 185–206.
[26] S. Mirjalili, How effective is the Grey Wolf optimizer in training multi-layer perceptrons, Appl. Intell. 43 (2015) 150–161.
[27] B. Mahdad, K. Srairi, Blackout risk prevention in a smart grid based flexible optimal strategy using Grey Wolf-pattern search algorithms, Energy Convers. Manage. 98 (2015) 411–429.
[28] E. Emary, H.M. Zawbaa, Impact of chaos functions on modern swarm optimizers, PLoS One 11 (2016) e0158738.
[29] S. Mirjalili, S.M. Mirjalili, A. Hatamlou, Multi-Verse Optimizer: a nature-inspired algorithm for global optimization, Neural Comput. Appl. 27 (2016) 495–513.
[30] G.M. Komaki, V. Kayvanfar, Grey Wolf Optimizer algorithm for the two-stage assembly flow shop scheduling problem with release time, J. Comput. Sci. 8 (2015) 109–120.
[31] E. Emary, H.M. Zawbaa, A.E. Hassanien, Binary grey wolf optimization approaches for feature selection, Neurocomputing 172 (2016) 371–381.
[32] N. Jayakumar, S. Subramanian, S. Ganesan, E. Elanchezhian, Grey wolf optimization for combined heat and power dispatch with cogeneration systems, Int. J. Electr. Power Energy Syst. 74 (2016) 252–264.
[33] S. Medjahed, T.A. Saadi, A. Benyettou, M. Ouali, Gray Wolf Optimizer for hyperspectral band selection, Appl. Soft Comput. 40 (2016) 178–186.
[34] S. Mirjalili, S. Saremi, S.M. Mirjalili, L.d.S. Coelho, Multi-objective grey wolf optimizer: a novel algorithm for multi-criterion optimization, Expert Syst. Appl. 47 (2016) 106–119.
[35] A.A. El-Fergany, H.M. Hasanien, Single and multi-objective optimal power flow using grey wolf optimizer and differential evolution algorithms, Electr. Power Compon. Syst. 43 (2015) 1548–1559.
[36] M. Shakarami, I.F. Davoudkhani, Wide-area power system stabilizer design based on Grey Wolf Optimization algorithm considering the time delay, Electr. Power Syst. Res. 133 (2016) 149–159.
[37] X. Song, L. Tang, S. Zhao, X. Zhang, L. Li, J. Huang, W. Cai, Grey Wolf Optimizer for parameter estimation in surface waves, Soil Dyn. Earthquake Eng. 75 (2015) 147–157.
[38] M. Niu, Y. Wang, S. Sun, Y. Li, A novel hybrid decomposition-and-ensemble model based on CEEMD and GWO for short-term PM2.5 concentration forecasting, Atmos. Environ. 134 (2016) 168–180.
[39] T. Jayabarathi, T. Raghunathan, B. Adarsh, P.N. Suganthan, Economic dispatch using hybrid grey wolf optimizer, Energy 111 (2016) 630–641.
[40] D. Guha, P.K. Roy, S. Banerjee, Load frequency control of interconnected power system using grey wolf optimization, Swarm Evol. Comput. 27 (2016) 97–115.
[41] S. Zhang, Y. Zhou, Z. Li, W. Pan, Grey wolf optimizer for unmanned combat aerial vehicle path planning, Adv. Eng. Software 99 (2016) 121–136.
[42] G. Sodeifian, N.S. Ardestani, S.A. Sajadian, S. Ghorbandoost, Application of supercritical carbon dioxide to extract essential oil from Cleome coluteoides Boiss: experimental, response surface and grey wolf optimization methodology, J. Supercrit. Fluids 114 (2016) 55–63.
[43] P.B. de Moura Oliveira, H. Freire, E.J. Solteiro Pires, Grey wolf optimization for PID controller design with prescribed robustness margins, Soft Comput. (2016) 1–13.
[44] G.M. Viswanathan, V. Afanasyev, S. Buldyrev, E. Murphy, P. Prince, H.E. Stanley, Lévy flight search patterns of wandering albatrosses, Nature 381 (1996) 413–415.
[45] N.E. Humphries, N. Queiroz, J.R. Dyer, N.G. Pade, M.K. Musyl, K.M. Schaefer, D.W. Fuller, J.M. Brunnschweiler, T.K. Doyle, J.D. Houghton, Environmental context explains Lévy and Brownian movement patterns of marine predators, Nature 465 (2010) 1066–1069.
[46] D.W. Sims, E.J. Southall, N.E. Humphries, G.C. Hays, C.J. Bradshaw, J.W. Pitchford, A. James, M.Z. Ahmed, A.S. Brierley, M.A. Hindell, Scaling laws of marine predator search behaviour, Nature 451 (2008) 1098–1102.
[47] X.-S. Yang, S. Deb, Cuckoo search via Lévy flights, in: World Congress on Nature & Biologically Inspired Computing (NaBIC 2009), IEEE (2009) 210–214.
[48] A.K. Bhateja, A. Bhateja, S. Chaudhury, P. Saxena, Cryptanalysis of vigenere cipher using cuckoo search, Appl. Soft Comput. 26 (2015) 315–324.
[49] H. Haklı, H. Uğuz, A novel particle swarm optimization algorithm with Levy flight, Appl. Soft Comput. 23 (2014) 333–345.
[50] R. Jensi, G.W. Jiji, An enhanced particle swarm optimization with levy flight for global optimization, Appl. Soft Comput. 43 (2016) 248–261.
[51] G. Kalantzis, C. Shang, Y. Lei, T. Leventouri, Investigations of a GPU-based levy-firefly algorithm for constrained optimization of radiation therapy treatment planning, Swarm Evol. Comput. 26 (2016) 191–201.
[52] W.A. Hussein, S. Sahran, S.N.H.S. Abdullah, Patch-Levy-based initialization algorithm for bees algorithm, Appl. Soft Comput. 23 (2014) 104–121.
[53] A.P. Piotrowski, J.J. Napiorkowski, P.M. Rowinski, How novel is the novel black hole optimization approach? Inf. Sci. 267 (2014) 191–200.
[54] X.-S. Yang, S. Deb, Multiobjective cuckoo search for design optimization, Comput. Oper. Res. 40 (2013) 1616–1624.
[55] C.-Y. Lee, X. Yao, Evolutionary algorithms with adaptive lévy mutations, in: Proceedings of the 2001 Congress on Evolutionary Computation, IEEE (2001) 568–575.
[56] M.F. Shlesinger, Levy flights: variations on a theme, Physica D 38 (1989) 304–309.
[57] A.O. Gautestad, I. Mysterud, Complex animal distribution and abundance from memory-dependent kinetics, Ecol. Complexity 3 (2006) 44–55.
[58] G. Viswanathan, V. Afanasyev, S.V. Buldyrev, S. Havlin, M. Da Luz, E. Raposo, H.E. Stanley, Lévy flights in random searches, Physica A 282 (2000) 1–12.
[59] G. Viswanathan, E. Raposo, M. Da Luz, Lévy flights and superdiffusion in the context of biological encounters and random searches, Phys. Life Rev. 5 (2008) 133–150.
[60] R.N. Mantegna, Fast, accurate algorithm for numerical simulation of Levy stable stochastic processes, Phys. Rev. E 49 (1994) 4677.
[61] S. Mirjalili, S.M. Mirjalili, A. Hatamlou, Multi-Verse Optimizer: a nature-inspired algorithm for global optimization, Neural Comput. Appl. (2015) 1–19.
[62] S. Mirjalili, A. Lewis, The whale optimization algorithm, Adv. Eng. Software 95 (2016) 51–67.
[63] S. Mirjalili, The ant lion optimizer, Adv. Eng. Software 83 (2015) 80–98.
[64] H. Salimi, Stochastic fractal search: a powerful metaheuristic algorithm, Knowledge-Based Syst. 75 (2015) 1–18.
[65] J.-J. Liang, P.N. Suganthan, K. Deb, Novel composition test functions for numerical global optimization, in: Proceedings 2005 IEEE Swarm Intelligence Symposium (SIS 2005), IEEE (2005) 68–75.
[66] R. Poli, J. Kennedy, T. Blackwell, Particle swarm optimization, Swarm Intell. 1 (2007) 33–57.
[67] R. Storn, K. Price, Differential evolution–a simple and efficient heuristic for global optimization over continuous spaces, J. Global Optim. 11 (1997) 341–359.

[68] X.-S. Yang, A new metaheuristic bat-inspired algorithm, in: Nature Inspired Cooperative Strategies for Optimization (NICSO 2010), Springer, 2010, pp. 65–74.
[69] E. Rashedi, H. Nezamabadi-pour, S. Saryazdi, GSA: a gravitational search algorithm, Inf. Sci. 179 (2009) 2232–2248.
[70] X.-S. Yang, Firefly algorithm, Levy flights and global optimization, in: Research and Development in Intelligent Systems XXVI, Springer, 2010, pp. 209–218.
[71] J. Derrac, S. García, D. Molina, F. Herrera, A practical tutorial on the use of nonparametric statistical tests as a methodology for comparing evolutionary and swarm intelligence algorithms, Swarm Evol. Comput. 1 (2011) 3–18.
[72] Y. Yuan, H. Xu, J. Yang, A hybrid harmony search algorithm for the flexible job shop scheduling problem, Appl. Soft Comput. 13 (2013) 3259–3272.
[73] Y.-C. Ho, D.L. Pepyne, Simple explanation of the no-free-lunch theorem and its implications, J. Optim. Theory Appl. 115 (2002) 549–570.
[74] N. Hansen, S.D. Müller, P. Koumoutsakos, Reducing the time complexity of the derandomized evolution strategy with covariance matrix adaptation (CMA-ES), Evol. Comput. 11 (2003) 1–18.
[75] M.-Y. Cheng, D. Prayogo, Symbiotic organisms search: a new metaheuristic optimization algorithm, Comput. Struct. 139 (2014) 98–112.
[76] Y. Wang, H.-X. Li, T. Huang, L. Li, Differential evolution based on covariance matrix learning and bimodal distribution parameter setting, Appl. Soft Comput. 18 (2014) 232–247.
[77] D. Simon, M.G. Omran, M. Clerc, Linearized biogeography-based optimization with re-initialization and local search, Inf. Sci. 267 (2014) 140–157.
[78] Problem Definitions and Evaluation Criteria for CEC 2011 Competition on Testing Evolutionary Algorithms on Real World Optimization Problems, Technical Report, 2010 (Jadavpur University, India and Nanyang Technological University, Singapore).
[79] Competition on Testing Evolutionary Algorithms on Real-world Numerical Optimization Problems @ CEC11, 2011 (Accessed 28 August 2016) http://www3.ntu.edu.sg/home/epnsugan/index_files/CEC11-RWP/CEC11-RWP.htm.
[80] H. Ma, D. Simon, M. Fei, Z. Chen, On the equivalences and differences of evolutionary algorithms, Eng. Appl. Artif. Intell. 26 (2013) 2397–2407.
