https://doi.org/10.1007/s00366-019-00795-0
ORIGINAL ARTICLE
Abstract
Grey wolf optimizer (GWO) is a recently developed population-based algorithm in the area of nature-inspired optimization. The leading hunters in GWO are responsible for exploring new promising regions of the search space. However, in some circumstances the classical GWO suffers from premature convergence due to stagnation at sub-optimal solutions, and insufficient guidance of the search leads to slow convergence. To alleviate these issues, an improved leadership-based GWO called GLF–GWO is introduced in the present paper. In GLF–GWO, the leaders are updated through a Levy-flight search mechanism. The proposed GLF–GWO algorithm enhances the search efficiency of the leading hunters in GWO and provides better guidance to accelerate the search process. In the GLF–GWO algorithm, greedy selection is introduced to prevent the leaders from diverging away from discovered promising areas of the search space. To validate the efficiency of the GLF–GWO, the standard benchmark suites IEEE CEC 2014 and IEEE CEC 2006 are taken. The proposed GLF–GWO algorithm is also employed to solve some real engineering problems. Experimental results reveal that the proposed GLF–GWO algorithm significantly improves the performance of the classical version of GWO.
Keywords Numerical optimization · Swarm intelligence · No free lunch theorem · Levy-flight search
Engineering with Computers
ability of the search agents towards the promising regions of the search space. The literature [5] also shows that GWO has better potential than some other state-of-the-art algorithms, such as PSO, differential evolution (DE), evolution strategy (ES), and the gravitational search algorithm (GSA), in dealing with various optimization problems. Since its development, GWO has been applied directly and successfully to various application problems. Madadi and Motlagh used GWO to adjust the parameters of PID controllers in DC motors [7]. The performance of GWO has been investigated for training multilayer perceptrons [8] and for the analysis of surface waves [9]. GWO has been successfully applied to the optimal reactive power dispatch problem [10]. In Ref. [11], the load frequency control (LFC) problem is solved using classical GWO. In Ref. [12], GWO has been employed for the unmanned combat aerial vehicle (UCAV) path planning problem. Economic load dispatch problems have also been solved using GWO [13].

The literature shows that GWO has advantages over various optimization techniques because of its simplicity, easy implementation, and ability to avoid local optima. Although it has been suggested [5], based on the performance of classical GWO, that it has better potential than other well-known meta-heuristics such as PSO, DE [14], and GSA [15], in some cases classical GWO faces the issues of a slow convergence rate and stagnation at local optima. To overcome these issues, various improvements have been proposed and implemented on real-life application problems. For example, in [16], an improved GWO (IGWO) has been employed to train q-Gaussian radial basis functional link-net neural networks. To solve the optimal power flow problem, GWO has been used in Ref. [17]. In Ref. [18], GWO has been merged with mutation and crossover to solve the economic load dispatch problem. In Ref. [19], GWO has been hybridized with a hierarchical operator to present different modified variants of GWO. In Ref. [20], grouped GWO has been designed for maximum power point tracking of a doubly fed induction generator. GWO has been hybridized with the Genetic Algorithm (GA) to minimize the potential energy of molecules [21]. For multi-criterion optimization, a multi-objective GWO is developed in Ref. [22]. For a dynamic welding scheduling problem, a hybrid multi-objective GWO has been proposed in Ref. [23]. In Ref. [24], the Levy-flight strategy is introduced to modify the search equations; this algorithm was named LGWO. In LGWO, the delta wolves are ignored for the progression of the search. In Ref. [25], multidirectional search has been introduced in GWO to solve mixed-integer optimization problems. In Ref. [26], a multi-strategy ensemble GWO has been proposed which utilizes three different search strategies to update the wolves; this improved GWO has been used for feature selection problems. In Ref. [27], an ameliorated GWO is proposed to solve economic power load dispatch problems. In Ref. [28], the β-GWO algorithm is proposed by introducing a bridging mechanism based on a chaotic sequence. In Ref. [29], an improved GWO called the augmented grey wolf optimizer is proposed and employed on grid-connected wind power plants. In Ref. [30], opposition-based learning is integrated in GWO to reduce the problem of stagnation at local optima.

Although many attempts have been made to improve the performance of classical GWO, in some cases GWO still faces stagnation in local optima due to insufficient diversity of solutions. Therefore, in the present paper, the leadership efficiency of the leading hunters is improved through a Levy-flight local search, yielding the algorithm called GLF–GWO. In this paper, it is asked why the leaders should be updated using only themselves and wolves of lower fitness. Deriving inspiration from this question, a new mechanism based on Levy-flight local search is introduced for the leading hunters. The Levy-flight search strategy locally explores the promising regions to find more knowledgeable leaders in terms of fitness and feasibility. The Levy-flight strategy is most useful when the pack is trapped in local solutions. To maintain the balance between exploration and exploitation, greedy selection is applied in the proposed algorithm at the end of the search. Levy-flight search was already introduced in Ref. [24], but in a different way: there, the Levy-flight local search is utilized to modify the search equation of GWO, the contribution of the delta wolves is ignored, and the proposed algorithm therefore changes the original structure of GWO. In the present paper, by contrast, the Levy-flight search strategy is introduced for the leading hunters (guiding wolves) only, to explore new promising domains of the search space, while the omega wolves are updated based on the guiding directions provided by the leading hunters; the original structure of the algorithm is therefore kept the same. To examine and validate the performance of the proposed algorithm, standard and complex unconstrained and constrained benchmark problem sets (CEC 2014 and CEC 2006) have been considered.

The rest of this paper is structured as follows: Sect. 2 provides a brief overview of classical GWO. In Sect. 3, the proposed leadership-inspired GWO called GLF–GWO is discussed in detail. Section 4 presents the numerical experimentation and discussion on two benchmark suites, namely IEEE CEC2014 and IEEE CEC2006. In Sect. 5, the proposed algorithm is employed on some engineering applications. The conclusions of the work are presented in Sect. 6.

2 An overview of classical grey wolf optimizer

The grey wolf optimizer (GWO) algorithm was proposed by Mirjalili et al. [5] in 2014. This algorithm mimics the hunting and leadership behavior of grey wolves. The grey
wolves are a species that prefers to hunt its prey in a group. The group, which includes 5–12 wolves, is known as a pack. In the pack, a leadership hierarchy is maintained by dividing the group into four types of wolves: alpha wolves (the dominant wolves of the pack and its decision makers), beta wolves (subordinate to the alpha, acting in the alpha's absence and working as messengers for the alpha), delta wolves (caretakers of the pack, which protect the pack from enemies), and omega wolves (the rest of the wolves, which are permitted to eat last). The alpha, beta, and delta wolves are known as the leading hunters of the pack, and the pack is totally dependent on these wolves. The pack performs the process of hunting prey in three steps [31]: (i) chasing the prey; (ii) encircling the prey; and (iii) attacking the prey. The mathematical model of these steps is described below.

2.1 Social and leadership behavior

To mimic the social and leadership behavior of the wolves, the three fittest solutions in terms of fitness value are selected as alpha, beta, and delta. The rest of the solutions are assumed to be omega wolves. The ω wolves iteratively improve their states with the guidance of the leading hunters.

xt+1 = xp,t − μ ⋅ d, (1)

d = |c ⋅ xp,t − xt|, (2)

μ = 2b ⋅ r1 − b, (3)

c = 2 ⋅ r2, (4)

where xt and xt+1 are the states of a wolf at the t-th and (t + 1)-th iteration, respectively, and xp,t is the state of the prey at the t-th iteration. d is a difference vector, and c and μ are the random coefficients which are employed in GWO to perform the exploration and exploitation of the search space. b is a coefficient which is decreased linearly from 2 to 0 during the run and can be formulated as

b = 2 − 2 ⋅ (t / maximum number of iterations), (5)

and r1, r2 are random numbers lying in the interval (0, 1). The vector b helps the transition from the exploration phase to the exploitation phase.

2.3 Attacking and hunting the prey

Mirjalili et al. [5] modelled the hunting behavior of grey wolves by assuming an equal contribution of the leading hunters at the time of determining the prey location. Therefore, each wolf updates its location by following these leaders as follows:

X1 = xα,t − μα ⋅ dα, (6)

X2 = xβ,t − μβ ⋅ dβ, (7)

X3 = xδ,t − μδ ⋅ dδ, (8)

xt+1 = (X1 + X2 + X3) / 3, (9)

where xα, xβ, and xδ are the locations of the leading wolves, dα, dβ, and dδ are the difference vectors obtained using Eq. (2), and μα, μβ, and μδ are the coefficient vectors obtained with the help of Eq. (3).

2.4 Exploitation and exploration in GWO

Fig. 1 Pseudocode of classical version of GWO

3 Proposed improved leadership-inspired GWO

3.1 Motivation

Since the GWO algorithm mimics the leadership behavior and hunting strategies of the grey wolf pack to update the solutions, the search process is principally dependent on the leading hunters. From the search equations of GWO, it can be seen that all the omega wolves update their state randomly based on the guidance provided by the leading wolves. Moreover, the leading wolves update their state using themselves or lower-fitness wolves, which is not convincing and sometimes may not produce promising guidance for the omega wolves. Therefore, when the leading hunters are trapped in sub-optimal solutions, it is difficult for the pack to move out of those solutions. This situation occurs particularly when the fitness landscape of the problem …

To integrate sufficient guidance and to enhance the exploration of new search regions, a Levy-flight motion, which is a non-Gaussian random process whose random steps are selected from the Levy distribution, is applied to the leading hunters of the grey wolf pack. The Levy distribution can be defined by the power-law equation

L(s) ∼ |s|^(−1−β), 0 < β ≤ 2, (10)

where s is a step length and β is the Levy index. A simple version of the Levy distribution is defined as

L(s, α, γ) = √(α/(2π)) ⋅ exp(−α/(2(s − γ))) ⋅ 1/(s − γ)^(3/2) if 0 < γ < s < ∞, and L(s, α, γ) = 0 if s ≤ 0, (11)

where α is a scale parameter which controls the scale of the distribution and γ is a location or shift parameter. In terms of the Fourier transform, the Levy distribution is defined as

F(k) = exp(−α|k|^β). (12)

The leaders are updated through a random step which is drawn from the Levy distribution; par is a control parameter that controls the step length, and numerically par is selected as a linearly decreasing vector defined as

par = 2 − 2 ⋅ (t / maximum number of iterations). (14)

The parameter par is chosen as a linearly decreasing variable from 2 to 0. The reason for this choice is to maintain the balance between exploration and exploitation. The initial values of the parameter par allow the wolves to explore the search space, so that convergence towards local optima can be avoided. The low values of the parameter par, which are produced after half of the maximum number of iterations, exploit the elite areas of the search space.
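As an illustration (this is not code from the paper), steps following the simple Levy density of Eq. (11) with γ = 0 can be drawn using the standard identity that α/Z² follows the Levy distribution with scale α when Z ∼ N(0, 1); the schedule of Eq. (14) is also included. Function names are ours, not the authors'.

```python
import numpy as np

def levy_step(alpha=1.0, size=1, rng=None):
    # If Z ~ N(0, 1), then alpha / Z**2 follows the Levy distribution
    # with scale alpha (the gamma = 0 case of Eq. (11)).
    rng = np.random.default_rng() if rng is None else rng
    z = rng.standard_normal(size)
    return alpha / z**2

def par_schedule(t, t_max):
    # Eq. (14): control parameter decreasing linearly from 2 to 0.
    return 2.0 - 2.0 * (t / t_max)

rng = np.random.default_rng(42)
steps = levy_step(alpha=1.0, size=5, rng=rng)
# Heavy-tailed: every step is positive, occasionally very large.
print(steps)
print(par_schedule(0, 3000), par_schedule(1500, 3000))  # 2.0 1.0
```

Note that these draws are one-sided; the paper itself generates symmetric (positive or negative) steps with Mantegna's algorithm, described next.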
Thus, the parameter par helps to maintain the balance between exploration and exploitation in the algorithm.

To generate the step length in our Levy-flight strategy, the Mantegna algorithm [32] for the symmetric Levy distribution has been used. The term symmetric means that the step size can be positive or negative. The calculated step length s in the Mantegna algorithm is given by

s = u / |v|^(1/β), (15)

where u and v are two normal stochastic variables with standard deviations σu and σv, that is,

u ∼ N(0, σu²) and v ∼ N(0, σv²),

where σu = [Γ(1 + β) sin(πβ/2) / (Γ((1 + β)/2) ⋅ β ⋅ 2^((β−1)/2))]^(1/β) and σv = 1, with

Γ(1 + β) = ∫₀^∞ x^β e^(−x) dx. (16)

In particular, if β is an integer, then Γ(1 + β) = β!. In the present work, the value of β is fixed as 1.

To establish a balance between exploitation and exploration, a greedy selection is introduced between the previous and present states of the wolves. The greedy selection also helps to avoid the divergence of wolves from already discovered promising areas of the search space. In the GLF–GWO algorithm, the reason for selecting the Levy-flight distribution to update the leaders is its random behavior. The step lengths produced by the Levy-flight distribution are sometimes very large, due to its infinite variance, which helps to avoid stagnation at local optima. The smaller values of the step length locally explore the search space and extract the useful information available in the discovered promising areas of the search space. Thus, the Levy-flight search maintains the balance between exploration and exploitation during the search.

The steps of the proposed GLF–GWO are given in algorithmic form in Fig. 2.

3.3 Constraint handling technique

To update the leaders for unconstrained problems, only the fitness (objective function value) of the wolves is considered, but to update the leaders in constrained problems, the constraints must be handled through some mechanism. The constraint handling mechanism based on constraint violation [33], which is integrated in the GLF–GWO, is stated as follows.

1. Sort the wolf population (P) in increasing order of constraint violation value. Denote the new sorted population by P1.
2. Now, sort the feasible wolves in order of their objective function values (increasing order for a minimization problem). Denote this new population by P2. From the sorted population P2, the top three wolves can be selected as the leading hunters of the pack.

Here, the constraint violation viol_x [33] of any solution x for the general optimization problem

Min f(x), x = (x1, x2, …, xd) ∈ R^d, (17)

s.t. g_i(x) ≤ 0, i = 1, 2, …, p, (18)

h_j(x) = 0, j = 1, 2, …, m, (19)

can be calculated as

viol_x = Σ_{i=1}^{p} G_i(x) + Σ_{j=1}^{m} H_j(x), (20)

where

G_i(x) = g_i(x) if g_i(x) > 0, and 0 otherwise, (21)

H_j(x) = |h_j(x)| if |h_j(x)| − ε > 0, and 0 otherwise, (22)

where ε is a parameter which is fixed as 10⁻⁴ in the present paper. f, h_j, and g_i are the objective function, equality constraints, and inequality constraints, respectively, and m and p represent the numbers of equality and inequality constraints, respectively.

The proposed constraint handling technique is a simple and natural way of picking the best solutions. Its main features are simplicity and a parameter-free structure. Since no parameters other than the algorithm parameters are required, it is easy to implement in any algorithm.

4 Details of benchmark suites and numerical experimentation

In the present section, the proposed GLF–GWO is validated on unconstrained problems of IEEE CEC2014 and constrained problems of IEEE CEC2006. The unconstrained benchmark set contains 30 problems of various categories, such as multimodal, unimodal, composite, and hybrid; the details can be found in Ref. [34]. The constrained problem set has 24 problems of various complexity levels with inequality and/or equality constraints, which can be found in Ref. [35]. In this paper, all the experiments are performed on MATLAB 2010a with a 4 GB RAM system.

All the results are presented as per the guidelines of CEC. The maximum number of function evaluations as a termination criterion is adopted as decided by CEC in Refs. [34, 35]. The size of the population of wolves is taken as 3× the dimension of the problem for each benchmark set.

4.1 Numerical results and discussion on CEC 2014 benchmark set

In this section, the proposed GLF–GWO algorithm is validated on the CEC 2014 benchmark set of unconstrained problems. The 10- and 30-dimensional problems are considered in our study. The numerical results on the CEC2014 problems obtained by the GLF–GWO algorithm and classical GWO are presented in Tables 1 and 2 for the 10- and 30-dimensional test problems, respectively. Tables 1 and 2 provide various statistical measures, such as the minimum (min), standard deviation (SD), median (med), average (avg), and maximum (max) of the absolute error values in the objective function. The absolute error is defined as |F(x) − F(x*)|, where x is the obtained feasible solution and x* is the optimal solution of the problem. F(x) represents the objective function value at vector x.

For both dimensions 10 and 30, the superior performance of the GLF–GWO can be observed in all the statistical measures on the unimodal test problems (F1–F3). The better search efficiency on unimodal problems shows that the Levy-flight local search with small step lengths, together with greedy selection, has enhanced the exploitation ability and convergence rate of the grey wolves in the GLF–GWO algorithm.

The results on the multimodal test functions, which are used to judge the exploration ability of any metaheuristic algorithm, indicate that the GLF–GWO has enhanced the exploration ability of the search agents. In the 10-dimensional multimodal problems F7, F8, F10, F12, and F15, the proposed GLF–GWO outperforms classical GWO in all the statistical measures, namely the mean, median, minimum, maximum, and standard deviation of the error values. In problem F4, the GLF–GWO is better than classical GWO only for the minimum and standard deviation of the error. In F5, F6, and F16, except for the standard deviation, the GLF–GWO is better than the classical GWO. In F9, F13, and F14, except for the minimum value of the error, the GLF–GWO is better than the classical GWO. In F11, the GLF–GWO is better than the classical GWO only for the minimum and maximum values of the error. In the 30-dimensional multimodal problems F4, F5, F7–F9, F13, and F15, the proposed GLF–GWO outperforms classical GWO in all the statistics. In F6, the GLF–GWO is better than classical GWO only for the minimum value of the error. In F10, except for the median and mean values of the error, the GLF–GWO is better than classical GWO. In F11, the GLF–GWO is better than classical GWO only for the median value of the error. In F12 and F14, the GLF–GWO is better than the classical GWO for all the statistics except the minimum value of the error. In F16, the GLF–GWO is better than the classical GWO only for the standard deviation of the error. Overall, the numerical results indicate that the Levy-flight search strategy, which enhances the search ability of the leading wolves, successfully improves the exploration ability of the wolves in the GLF–GWO.

In any optimization algorithm, an appropriate synergy between exploitation and exploration should be present for its proper functioning. This characteristic of an algorithm can be verified through hybrid and composite problems. In the IEEE CEC 2014 benchmark set, the problems from F17 to F22 are hybrid problems, and the problems from F23 to F30 are composite. In these problems, the objective function is designed by combining the features of unimodal
and multimodal problems. In all the 10-dimensional hybrid problems except F18, the proposed GLF–GWO provides better results than classical GWO in terms of the mean, median, minimum, maximum, and standard deviation of the error values. In F18, the GLF–GWO is able to provide only a better minimum error value than classical GWO. In all the 30-dimensional hybrid problems, the proposed GLF–GWO algorithm outperforms classical GWO in all the statistics, namely the mean, median, minimum, maximum, and standard deviation of the error values.

In the 10-dimensional composite problems F23, F24, F29, and F30, the GLF–GWO provides better results in terms of the mean, median, minimum, maximum, and standard deviation of the error values. In F25, except for the maximum and standard deviation, in F26, except for the standard deviation, and in F28, except for the maximum error value, the GLF–GWO outperforms classical GWO in the remaining statistical measures. In F27, the GLF–GWO provides only a better standard deviation than classical GWO. In the 30-dimensional composite problems F23, F26, F29, and F30, the GLF–GWO provides better results in all the statistics compared to the classical GWO. In F24, the GLF–GWO and classical GWO provide the same median and minimum error, while in terms of the other statistics the classical GWO is better. In F25, except for the mean, minimum, and maximum values of the error, and in F27, except for the mean, maximum, and standard deviation of the error, the proposed GLF–GWO is better than classical GWO. In F28, the classical GWO provides better results in terms of all the statistical measures. Hence, the performance comparison between classical GWO and the proposed GLF–GWO on hybrid and composite problems demonstrates the efficacy of the proposed strategies (Levy-flight local search and greedy selection) in maintaining an appropriate balance of exploration and exploitation during the search.

Overall, from the experimental results on the various categories of benchmark problems, it can be concluded that the proposed search mechanism for the leading wolves of the grey wolf pack, based on the Levy-flight distribution, enhances the search efficiency of the wolves in the GLF–GWO algorithm. The results also demonstrate that the GLF–GWO algorithm is a better optimizer than the classical version of GWO.

The diversity in the GLF–GWO can be analyzed by comparing the diversity curves of classical GWO and GLF–GWO. These diversity curves are drawn by considering the average distance between the solutions in each iteration. To calculate the average distance, the Euclidean distance ||.|| between two solutions X = (x1, x2, …, xD) and Y = (y1, y2, …, yD) is used, which is calculated as

||X − Y||₂ = √( Σ_{j=1}^{D} (x_j − y_j)² ), (23)

where D represents the dimension of the problem. From the diversity curves drawn in Figs. 3 and 4, it can be observed that in the initial iterations the average distance between the search agents is high, and it decreases as the number of iterations increases, for both classical GWO and the proposed GLF–GWO. However, in most of the test functions, the average distance in each iteration is higher for the proposed GLF–GWO algorithm than for classical GWO, which shows the better ability of the GLF–GWO to explore new regions of the search space. This shows the effect of improving the search mechanism of the leading wolves through the Levy-flight search strategy in the GLF–GWO.

4.2 Statistical validity of the results

In this section, to confirm that the better results obtained by the proposed GLF–GWO are not just due to chance, the non-parametric Wilcoxon rank-sum test is used. The statistical test is performed at the 0.05 level of significance. The statistical conclusions drawn by applying the Wilcoxon test between classical GWO and the proposed GLF–GWO are presented in Tables 3 and 4 corresponding
to the 10- and 30-dimensional test problems, respectively. In the tables, the symbols '+/=/−' indicate that the GLF–GWO algorithm is significantly better than, equivalent to, or worse than the classical version of GWO, respectively. From the statistical conclusions, it can be seen that the proposed GLF–GWO significantly outperforms the classical GWO.

4.3 Comparison of GLF–GWO with other optimization methods

This section compares the performance of the proposed GLF–GWO with classical GWO [5], PSO [1], variants of GWO such as modified GWO (modGWO) [36], improved GWO (IGWO) [37], opposition-based GWO (OBGWO) [38], and exploration-enhanced GWO (EEGWO) [39], and some recent algorithms such as the sine cosine algorithm (SCA) [40] and the moth-flame optimization (MFO) algorithm [41]. To compare the results, the 30-dimensional problems have been taken. For a fair comparison, 51 independent runs are executed for each test function, and the termination criterion is taken as decided by CEC, which is 10⁴ × D function evaluations. The results obtained by the various comparative optimization methods are shown in Table 5, where the comparison is made by reporting the mean objective function value. The table indicates the efficacy of the proposed GLF–GWO algorithm compared to the other optimization methods.

In all the unimodal problems F1 to F3, the proposed GLF–GWO outperforms classical GWO, the other variants of GWO such as modGWO, OBGWO, IGWO, and EEGWO, and the other optimization methods such as PSO, MFO, and SCA in terms of the mean absolute error in the objective function values. Thus, the analysis of the results obtained by the GLF–GWO and the performance comparison with other algorithms show that the GLF–GWO algorithm is better in exploitation and convergence rate than the comparative algorithms. The results also show that the greedy selection mechanism and the Levy-flight local search strategy, when the produced step length is small, contribute to enhancing the exploitation strength of the GLF–GWO algorithm.

In the multimodal problems F4, F5, F7, and F12–F15, the proposed GLF–GWO provides better results than all the other comparative algorithms. In F6, the GLF–GWO provides better results than the other algorithms except PSO and modGWO. In F8–F10 and F16, except for PSO, the proposed GLF–GWO provides better results than the other algorithms. In F11, except for PSO, modGWO, and IGWO, the GLF–GWO performs better than the other algorithms in terms of providing a better mean error value. Thus, analyzing the comparative performance of the GLF–GWO and the other algorithms on multimodal problems, it can be concluded that the proposed Levy-flight search strategy for updating the leaders of the grey wolf pack has enhanced the explorative ability of all the wolves.

In all the hybrid problems (F17–F22), the proposed GLF–GWO provides a lower mean error value than all the other comparative algorithms. In the composite problems F24, F29, and F30, the proposed GLF–GWO outperforms all the other comparative algorithms. In F23, the GLF–GWO provides better results than all the other comparative algorithms except EEGWO. In F25, the GLF–GWO performs better than MFO and SCA only. In F26, IGWO, MFO, and SCA perform better than the GLF–GWO. In F27, PSO, modGWO, and EEGWO perform better than the GLF–GWO. In F28, the proposed GLF–GWO provides a better mean error value than all the other comparative algorithms except modGWO and MFO. Overall, the performance comparison on the hybrid and composite problems demonstrates the better search ability of the GLF–GWO compared to the other algorithms in most of the problems. From the results, it can also be seen that the proposed Levy-flight search strategy maintains the balance of exploration and exploitation in the algorithm.

Thus, the overall analysis of the results demonstrates that the proposed GLF–GWO algorithm explores the search space more efficiently than the classical GWO by providing a suitable search mechanism to the leading hunters of the pack. The greedy selection also shows its ability to
Fig. 3 Diversity curves (diversity vs. iterations) of classical GWO and the proposed GLF–GWO on CEC 2014 test functions, with panels including F1, F3, F5, F8, F15, and F16
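The diversity measure plotted in Figs. 3 and 4, the average Euclidean distance (Eq. 23) between solutions at each iteration, can be sketched as below; we assume the average is taken over all unordered pairs of wolves, which the text does not state explicitly.

```python
import numpy as np

def diversity(population):
    # Average Euclidean distance (Eq. (23)) over all unordered pairs
    # of solutions; population has shape (n_wolves, dim).
    pop = np.asarray(population, dtype=float)
    n = pop.shape[0]
    total, pairs = 0.0, 0
    for i in range(n):
        for j in range(i + 1, n):
            total += float(np.linalg.norm(pop[i] - pop[j]))
            pairs += 1
    return total / pairs

# Two wolves at (0, 0) and (3, 4): a single pair at distance 5.
print(diversity([[0.0, 0.0], [3.0, 4.0]]))  # 5.0
```

Plotting this quantity per iteration for both algorithms reproduces the kind of curves shown in Figs. 3 and 4.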
Fig. 4 Diversity curves (diversity vs. iterations) of classical GWO and the proposed GLF–GWO on further CEC 2014 test functions
maintain a balance between exploration and exploitation and to avoid the divergence of wolves from already discovered promising areas of the search space.

4.4 Convergence analysis

To analyze the convergence rate of the proposed GLF–GWO algorithm, and to compare its convergence behavior with the other algorithms, convergence curves on CEC2014 benchmark problems are plotted in this section. The convergence curves are plotted in Figs. 5, 6, and 7 by considering the average objective function values achieved over 51 independent runs. In the curves, the iterations of the algorithm are shown on the horizontal axis and the objective function value is depicted on the vertical axis. From the convergence curves, it can be seen that the proposed GLF–GWO provides a better convergence rate than the other improved variants of GWO and some recent optimization methods while solving the benchmark test problems.

Table 3 Statistical validity of results on 10-dimensional CEC 2014 benchmark problems

Function  Conclusion  p value     Function  Conclusion  p value
F1        +           5.15E−10    F16       +           2.51E−02
F2        +           9.17E−03    F17       +           1.53E−03
F3        +           6.53E−10    F18       =           0
F4        +           1.76E−05    F19       +           1.05E−02
F5        +           5.15E−10    F20       +           5.15E−10
F6        =           0           F21       +           2.33E−02
F7        +           7.35E−10    F22       +           2.22E−02
F8        +           4.74E−05    F23       +           5.15E−10
F9        =           0           F24       +           3.31E−04
F10       +           7.65E−04    F25       −           1.30E−02
F11       −           2.45E−02    F26       =           0
F12       +           2.44E−08    F27       =           0
F13       +           3.15E−03    F28       =           0
F14       +           4.10E−02    F29       +           9.93E−07
F15       +           2.66E−07    F30       +           7.55E−03

Table 4 Statistical validity of results on 30-dimensional CEC 2014 benchmark problems

Function  Conclusion  p value     Function  Conclusion  p value
F1        +           5.15E−10    F16       −           3.43E−04
F2        +           5.15E−10    F17       +           1.36E−04
F3        +           5.15E−10    F18       +           1.92E−06
F4        +           5.80E−10    F19       +           1.58E−08
F5        +           5.15E−10    F20       +           5.46E−10
F6        −           1.27E−02    F21       +           3.89E−03
F7        +           5.15E−10    F22       +           0
F8        +           6.15E−10    F23       +           5.15E−10
F9        =           0           F24       −           5.15E−10
F10       −           1.41E−02    F25       =           0
F11       +           1.67E−08    F26       =           0
F12       +           1.78E−07    F27       =           0
F13       +           1.32E−06    F28       −           3.41E−02
F14       +           1.69E−03    F29       +           1.86E−08
F15       +           9.14E−09    F30       +           5.15E−10

4.5 Numerical results and discussion on CEC 2006 benchmark set

Constrained optimization problems are more difficult to solve than unconstrained problems, because various factors such as the type of constraints, the type of objective function, the ratio of the feasible region to the complete search space, and the number of constraints affect the difficulty of the problem. In this section, the IEEE CEC 2006 benchmark set of constrained problems is taken to investigate the effect of improving the leading guidance in the GLF–GWO algorithm. The results obtained by implementing the proposed GLF–GWO algorithm and classical GWO on the IEEE CEC 2006 problems are presented in Table 6. In the GLF–GWO, the constraints of the CEC 2006 problems are handled with the constraint handling technique presented in Sect. 3.3. In Table 6, the worst, best, median, standard deviation, and average of the objective function values are listed.

In all the benchmark test problems given in IEEE CEC 2006 (except problem g10), the GLF–GWO performs better than the classical GWO in terms of the best objective function value. In the test problems g01–g03, g06–g09, g13, g15, g16, g18, and g23, the proposed GLF–GWO provides a better median objective function value than classical GWO. In terms of the mean objective function value, the GLF–GWO is better than the classical GWO in problems g01, g03, g06, g07, g09, g14, g15, and g18. In terms of the maximum (worst) objective function value, the proposed GLF–GWO algorithm is better than classical GWO in g01–g03, g06, g07, g09, g14, g15, g18, and g24. In terms of the standard deviation, the proposed GLF–GWO algorithm is better than classical GWO in the problems g01, g03, g06, g07, g09, g10, g14, g15, g18, and g24. In the problems g08 and g12, the proposed GLF–GWO and classical GWO provide the same values of the objective function in terms of the mean, median, minimum, maximum, and standard deviation. In the problems g05, g17, and g20–g22, both classical GWO and the proposed GLF–GWO fail to provide a feasible solution. Since in these problems, the classical GWO and the GLF–GWO algorithms
other optimization algorithms, the convergence curves for fail to enter in a feasible region; therefore, these are not
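The constraint handling technique of Sect. 3.3 is applied above but not restated in this section. Purely as an illustration, a common choice for comparing solutions of constrained problems in GWO studies is Deb's feasibility rules; the sketch below assumes that choice, and the function names are hypothetical, not taken from the paper.

```python
def total_violation(g_values):
    """Sum of violations for constraints expressed in the form g(x) <= 0."""
    return sum(max(0.0, g) for g in g_values)

def better(f_a, viol_a, f_b, viol_b):
    """Deb's feasibility rules: a feasible solution beats an infeasible one;
    between two feasible solutions the lower objective wins; between two
    infeasible solutions the lower total violation wins.
    Returns True if solution A is preferred over solution B."""
    if viol_a == 0 and viol_b == 0:
        return f_a < f_b
    if viol_a == 0 or viol_b == 0:
        return viol_a == 0
    return viol_a < viol_b
```

Under these rules, an algorithm that never enters the feasible region (as happens above on g05, g17, and g20–g22) can still rank its candidates by total violation, but it reports no feasible objective value.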
Overall, the numerical results show that the Levy-flight search strategy is also effective in solving constrained optimization problems.

To draw concrete conclusions about the significance of the differences in performance between the proposed GLF–GWO and GWO algorithms, statistical analysis is necessary. In the present paper, a non-parametric Wilcoxon rank-sum test is used for this analysis. The test is conducted at a 5% significance level, and the obtained conclusions are shown in Table 7. The statistical results also verify the impact of the Levy-flight search mechanism in updating the leading wolves during the search.

In the present paper, benchmark test problems are used to analyze the performance of the proposed GLF–GWO algorithm. In these test problems, only the objective function value is considered, because metaheuristic algorithms treat the optimization problem as a black box. To analyze the performance of search algorithms on real-world application problems, where the decision parameters are very crucial, posterior distribution analysis of the optimization parameters [42] can be used.

4.6 Comparison of GLF–GWO with other optimization methods

In this section, the search efficiency of the constrained version of the proposed GLF–GWO is compared with the classical GWO [5], PSO [1], variants of GWO such as modified GWO (modGWO) [36], improved GWO (IGWO) [37], opposition-based GWO (OBGWO) [38], and exploration-enhanced GWO (EEGWO) [39], and some recent algorithms such as the sine cosine algorithm (SCA) [40] and the moth-flame optimization (MFO) algorithm [41]. For a fair comparison, 25 runs of each algorithm are conducted, and the termination criteria are the same as those specified by CEC.
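The Levy-flight mechanism used to update the leading wolves is defined earlier in the paper and is not restated in this excerpt. As an illustration only, Levy-distributed step lengths are commonly drawn with Mantegna's algorithm; the sketch below assumes that generator and is not the paper's exact update rule.

```python
import math
import random

def levy_step(beta=1.5):
    """One Levy-distributed step length via Mantegna's algorithm
    (illustrative assumption; beta is the stability index).
    Draws u ~ N(0, sigma_u^2) and v ~ N(0, 1), then returns u / |v|^(1/beta),
    which yields mostly small steps with occasional long jumps."""
    sigma_u = (math.gamma(1 + beta) * math.sin(math.pi * beta / 2)
               / (math.gamma((1 + beta) / 2) * beta * 2 ** ((beta - 1) / 2))) ** (1 / beta)
    u = random.gauss(0.0, sigma_u)
    v = random.gauss(0.0, 1.0)
    return u / abs(v) ** (1 / beta)
```

A leader coordinate could then be perturbed as `leader[j] + scale * levy_step()`, with a greedy selection keeping the new position only if it improves the objective; the occasional long jumps are what help the leaders escape sub-optimal stagnation.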
[Convergence graphs (objective function value versus iterations, 0–3000), including panels for F3 and F5]
The results obtained from the various comparative optimization methods are reported in Table 8. In the table, the results are compared on the basis of the mean objective function value, and only those functions are reported for which the algorithms are able to enter the feasible region of the problem. The table verifies the competitive search ability of the GLF–GWO algorithm compared with the other optimization methods. The results obtained for the constrained problems also demonstrate the benefit of enhancing the leading wolves through the Levy-flight search strategy.

4.7 Computational complexity of the proposed GLF–GWO algorithm

The computational complexity of the GLF–GWO depends on the pack size, the problem dimension, and the maximum number of iterations. Therefore, the complexity of the GLF–GWO algorithm can be calculated by analyzing the steps of the algorithm, which gives O(T · n · d) in big-O notation. Here, T represents the maximum number of iterations, n represents the pack size of the grey wolves, and d is the dimension of the problem.

5 The GLF–GWO algorithm for real-engineering problems

5.1 Design of gear train

This unconstrained optimization problem was introduced by [43]. It is a discrete case study with four decision variables. The objective is to determine the optimum number of teeth of the gears of a train in order to optimize the gear ratio [43]. The discrete decision variables are handled by rounding them to the nearest integer. In mathematical form, the problem is stated as follows:

\min f_1(x) = \left( \frac{1}{6.931} - \frac{\eta_B\, \eta_C}{\eta_A\, \eta_D} \right)^2, \qquad (24)
[Convergence graphs (objective function value versus iterations, 0–3000), including panels for F9 and F10]
[Convergence graphs (objective function value versus iterations, 0–3000), including panels for F20 and F21]
\text{s.t. } 12 \le \eta_A, \eta_B, \eta_C, \eta_D \le 60. \qquad (25)

To obtain the solution of this problem, 30 runs with 300 function evaluations each are executed. To compare the results of the GLF–GWO, the results of various other optimization methods, such as the classical GWO [5], variants of GWO such as modified GWO (modGWO) [36], improved GWO (IGWO) [37], opposition-based GWO (OBGWO) [38], and exploration-enhanced GWO (EEGWO) [39], and some recent algorithms such as the sine cosine algorithm (SCA) [40] and the moth-flame optimization (MFO) algorithm [41], are also presented in Table 9.
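The objective of Eq. (24), with the nearest-integer handling of the tooth counts described above, can be evaluated directly. A minimal sketch follows; the test point (49, 16, 19, 43) is a commonly reported solution for this problem.

```python
def gear_train_objective(x):
    """f1 from Eq. (24): squared error between the target ratio 1/6.931
    and the gear ratio (nB * nC) / (nA * nD). Continuous candidates are
    rounded to the nearest integer tooth count (bounds: 12-60, Eq. (25))."""
    n_a, n_b, n_c, n_d = (round(v) for v in x)
    return (1.0 / 6.931 - (n_b * n_c) / (n_a * n_d)) ** 2
```

Because of the rounding, any continuous candidate within half a tooth of an integer solution evaluates identically to it, which is how a continuous search algorithm such as GLF–GWO handles this discrete problem.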
Table 7 Statistical conclusions obtained by applying the Wilcoxon test on CEC 2006 problems

Function  Conclusion    Function  Conclusion    Function  Conclusion    Function  Conclusion
g01       +             g07       +             g13       =             g19       −
g02       −             g08       +             g14       =             g20       ×
g03       +             g09       +             g15       +             g21       ×
g04       =             g10       −             g16       =             g22       ×
g05       ×             g11       =             g17       ×             g23       =
g06       +             g12       =             g18       =             g24       −
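The +/=/− conclusions in Table 7 come from the two-sided Wilcoxon rank-sum test at the 5% significance level described earlier. A minimal sketch of the test using the normal approximation (adequate for sample sizes such as 25 or 51 runs) is given below; the function names are illustrative.

```python
import math

def average_ranks(values):
    """1-based ranks; tied values share the average of their ranks."""
    order = sorted(range(len(values)), key=lambda i: values[i])
    ranks = [0.0] * len(values)
    i = 0
    while i < len(order):
        j = i
        while j + 1 < len(order) and values[order[j + 1]] == values[order[i]]:
            j += 1
        avg = (i + j) / 2 + 1  # average of ranks i+1 .. j+1
        for k in range(i, j + 1):
            ranks[order[k]] = avg
        i = j + 1
    return ranks

def rank_sum_test(sample_a, sample_b):
    """Two-sided Wilcoxon rank-sum test via the normal approximation.
    Returns (z, p); a difference is significant at the 5% level when p < 0.05."""
    n1, n2 = len(sample_a), len(sample_b)
    ranks = average_ranks(list(sample_a) + list(sample_b))
    r1 = sum(ranks[:n1])                      # rank sum of the first sample
    mean = n1 * (n1 + n2 + 1) / 2
    sd = math.sqrt(n1 * n2 * (n1 + n2 + 1) / 12.0)
    z = (r1 - mean) / sd
    p = math.erfc(abs(z) / math.sqrt(2))      # two-sided p value
    return z, p
```

Applied to the 51-run objective-value samples of two algorithms on one function, p < 0.05 with a lower GLF–GWO rank sum corresponds to a "+" entry, p ≥ 0.05 to "=", and a significantly worse result to "−".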
Table 8 Comparison of mean objective function values on the CEC 2006 problems (one column per compared algorithm)

g01  −5.840       −11.209      −11.842      −13.003      −11.827      −12.440      −5.798       −11.300      −15.000
g02  −0.746       −0.743       −0.783       −0.742       −0.545       −1.502       −1.364       −0.737       −0.729
g03  −0.332       −0.429       −0.194       −0.997       −0.972       0.000        0.000        −0.319       −0.739
g04  −30,593.961  −30,665.177  −30,665.250  −30,664.524  −30,661.612  −30,553.748  −30,609.250  −30,665.372  −30,665.339
g06  −8307.003    −11,700.005  −14,545.430  −6955.063    −6953.716    −7045.380    −29,658.775  −7060.673    −6958.734
g07  186.959      46.995       28.877       −2648.067    −6306.258    −981.876     56.876       37.860       24.722
g08  −0.096       2.175        −0.096       −0.096       −0.096       −0.090       27.908       −0.096       −0.096
g09  680.637      685.077      686.008      735.107      841.479      682.025      688.589      685.326      681.099
g10  8133.462     7581.981     7905.542     7971.016     8578.560     11,027.554   9641.809     7571.202     8276.237
g11  0.800        0.750        0.754        0.763        0.770        0.990        0.750        0.750        0.767
g12  −1.000       −1.000       −1.000       −1.000       −1.000       −0.959       −1.000       −1.000       −1.000
g13  0.627        0.430        −0.810       0.411        −1.000       0.645        −1.000       1.003        1.190
g14  −22.055      −41.790      −41.408      −42.123      −1.000       −40.762      −1.000       −41.826      −41.967
g15  689.719      803.868      765.466      885.155      772.762      886.154      −1.000       968.230      965.773
g16  −1.827       −1.902       −1.902       −1.900       −1.865       −1.804       −1.622       −1.844       −1.835
g18  −0.862       −0.774       −0.737       −0.634       −0.498       −0.805       −0.707       −0.798       −0.823
g19  82.370       38.493       34.494       36.207       38.733       51.779       168.600      38.554       43.077
g23  73.354       72.596       34.494       0.000        38.733       95.697       0.000        −0.001       269.746
g24  −5.333       −5.508       −5.508       −5.508       −5.503       −5.270       −5.297       −5.368       −5.283
Table 9 Results comparison for the gear train design problem
Algorithm  η_A  η_B  η_C  η_D  f_{1,min}

Table 10 Comparison results for the FM-design problem
Algorithm  Min  Avg  Max  SD

In Table 9, various other methods (MBA [44], artificial bee colony (ABC) [45], and augmented Lagrange multiplier (ALM) [46]), which have been applied in the literature to solve the same problem, are also mentioned. The comparison of results confirms the better search efficiency of the proposed GLF–GWO algorithm in determining the number of teeth of a gear train.

5.2 FM-design problem

X_0(t) = x_1 \sin\!\big(5t\phi + 1.5 \sin(4.8t\phi + 2 \sin(4.9t\phi))\big),

where φ = 2π/100. To obtain the solution of this problem, 30 independent runs are conducted with 2 × 10^5 function evaluations, and the obtained solutions are presented in Table 10. To compare the results of the GLF–GWO, the classical GWO and other variants of GWO such as modified GWO (modGWO) [36], improved GWO (IGWO) [37], opposition-based GWO (OBGWO) [38], and exploration-enhanced GWO (EEGWO) [39], and some recent algorithms such as the sine cosine algorithm (SCA) [40] and the moth-flame optimization (MFO) algorithm [41], are also employed on this problem with the same parameter setting. To compare with other algorithms employed in the literature, CPSOH [47, 48] and G-CMA-ES [48, 49] are considered. Table 10 clearly favors the better search efficiency of the proposed GLF–GWO algorithm.

5.3 Design of truss bar

\cdots P - \sigma \le 0, \qquad (29)

g_3(x) = \frac{1}{x_1 + \sqrt{2}\,x_2}\,P - \sigma \le 0, \qquad (33)

0 < x_1, x_2 \le 1. \qquad (34)

Here l = 100 cm, P = 2 kN/cm², and σ = 2 kN/cm². The results obtained using the classical GWO and the proposed GLF–GWO are reported in Table 11. As in [50], 5000 function evaluations are fixed to solve this problem.
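For the FM-design problem, only the wave X0(t) is reproduced above. The sketch below assumes the standard six-parameter FM matching formulation: a candidate wave of the same nested-sine form, the amplitudes and frequencies (a1, w1, a2, w2, a3, w3) as the decision vector, and the summed squared error over t = 0, …, 100 as the objective. The target is taken with unit amplitude; these details are assumptions, not restated from the paper.

```python
import math

PHI = 2 * math.pi / 100  # phi as defined for the FM-design problem

def target_wave(t):
    """Target wave X0(t), assumed with unit amplitude."""
    return math.sin(5 * t * PHI + 1.5 * math.sin(4.8 * t * PHI + 2 * math.sin(4.9 * t * PHI)))

def fm_objective(x):
    """Assumed six-parameter formulation (a1, w1, a2, w2, a3, w3):
    sum of squared errors between the estimated and target waves
    over the sampling points t = 0..100."""
    a1, w1, a2, w2, a3, w3 = x
    err = 0.0
    for t in range(101):
        y = a1 * math.sin(w1 * t * PHI + a2 * math.sin(w2 * t * PHI + a3 * math.sin(w3 * t * PHI)))
        err += (y - target_wave(t)) ** 2
    return err
```

Under this formulation the parameter vector (1, 5, 1.5, 4.8, 2, 4.9) reproduces the target exactly, so the global optimum of the objective is zero; the nested sines make the landscape highly multimodal, which is why the problem is a common metaheuristic benchmark.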
Table 11 Results comparison for the truss bar design problem

Algorithm           x1         x2         f3,min
GLF–GWO             0.788174   0.4096753  263.8969
GWO                 0.787823   0.4106738  263.8974
modGWO              0.7878452  0.4106108  263.8974
IGWO                0.8087516  0.3592847  264.6780
OBGWO               0.7880142  0.4101417  263.8963
EEGWO               0.8087517  0.3592847  264.6780
SCA                 0.78394    0.42219    263.9506
PSO                 0.58959    0.20568    263.8994
MFO                 0.78753    0.41150    263.8968
CS                  0.78867    0.40902    263.9716
Ray and Saini [52]  0.7950     0.3950     264.300
Tsai [53]           0.7880     0.4080     263.680 (infeasible)

In Table 11, the results are compared with variants of GWO such as modified GWO (modGWO) [36], improved GWO (IGWO) [37], opposition-based GWO (OBGWO) [38], and exploration-enhanced GWO (EEGWO) [39], and some recent algorithms such as the sine cosine algorithm (SCA) [40] and the moth-flame optimization (MFO) algorithm [41]. In the table, the results of various other studies (the cuckoo search (CS) algorithm [51], Ray and Saini [52], and Tsai [53]) are also reported. The table confirms the better performance of the GLF–GWO algorithm in finding the optimum compared with the other reported state-of-the-art algorithms.

5.4 Design of speed reducer

The objective of this problem is the minimization of the speed reducer weight subject to several constraints. Mathematically, this problem can be stated as follows:

\min f_4(x) = 0.7854 x_1 x_2^2 \left(3.3333 x_3^2 + 14.9334 x_3 - 43.0934\right) - 1.508 x_1 \left(x_6^2 + x_7^2\right) + 7.4777 \left(x_6^3 + x_7^3\right) + 0.7854 \left(x_4 x_6^2 + x_5 x_7^2\right),

x = \left(x_1, x_2, x_3, x_4, x_5, x_6, x_7\right) = \left(b, m, N, L_1, L_2, D_1, D_2\right), \qquad (35)

\text{s.t. } g_1(x) = \frac{27}{x_1 x_2^2 x_3} - 1 \le 0, \qquad (36)

g_2(x) = \frac{397.5}{x_1 x_2^2 x_3^2} - 1 \le 0, \qquad (37)

g_3(x) = \frac{1.93 x_4^3}{x_2 x_3 x_6^4} - 1 \le 0, \qquad (38)

g_4(x) = \frac{1.93 x_5^3}{x_2 x_3 x_7^4} - 1 \le 0, \qquad (39)

g_5(x) = \frac{\sqrt{\left(\frac{745 x_4}{x_2 x_3}\right)^2 + 1.69 \times 10^7}}{110 x_6^3} - 1 \le 0, \qquad (40)

Fig. 8 Speed reducer
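Equations (35)–(40) can be checked numerically. The sketch below evaluates f4 and only the constraints listed above (the complete speed-reducer formulation includes further constraints not reproduced in this excerpt); the candidate point used in the usage note is illustrative, not a result from the paper.

```python
import math

def speed_reducer_f4(x):
    """Objective of Eq. (35): weight of the speed reducer."""
    x1, x2, x3, x4, x5, x6, x7 = x
    return (0.7854 * x1 * x2**2 * (3.3333 * x3**2 + 14.9334 * x3 - 43.0934)
            - 1.508 * x1 * (x6**2 + x7**2)
            + 7.4777 * (x6**3 + x7**3)
            + 0.7854 * (x4 * x6**2 + x5 * x7**2))

def speed_reducer_g(x):
    """Constraints (36)-(40) in g(x) <= 0 form; the full formulation
    has additional constraints that are not listed above."""
    x1, x2, x3, x4, x5, x6, x7 = x
    return [
        27.0 / (x1 * x2**2 * x3) - 1,
        397.5 / (x1 * x2**2 * x3**2) - 1,
        1.93 * x4**3 / (x2 * x3 * x6**4) - 1,
        1.93 * x5**3 / (x2 * x3 * x7**4) - 1,
        math.sqrt((745 * x4 / (x2 * x3))**2 + 1.69e7) / (110 * x6**3) - 1,
    ]
```

For example, the point (3.5, 0.7, 17, 7.3, 7.8, 3.36, 5.29) satisfies all five listed constraints and gives a weight of roughly 3000, close to the optima commonly reported for this problem.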
44. … constrained engineering optimization problems. Appl Soft Comput 13(5):2592–2612
45. Sharma TK, Pant M, Singh VP (2012) Improved local search in artificial bee colony using golden section search. arXiv preprint arXiv:1210.6128
46. Kannan BK, Kramer SN (1994) An augmented Lagrange multiplier based method for mixed integer discrete continuous optimization and its applications to mechanical design. J Mech Des 116(2):405–411
47. Liang JJ, Qin AK, Suganthan PN, Baskar S (2006) Comprehensive learning particle swarm optimizer for global optimization of multimodal functions. IEEE Trans Evol Comput 10(3):281–295
48. Van Laarhoven PJ, Aarts EH (1987) Simulated annealing. In: Aarts E, Lenstra JK (eds) Simulated annealing: theory and applications. Springer, Dordrecht, pp 7–15
49. Auger A, Hansen N (2005) A restart CMA evolution strategy with increasing population size. In: Evolutionary computation, 2005. The 2005 IEEE Congress on. IEEE, vol 2, pp 1769–1776
50. Nowcki H (1974) Optimization in pre-contract ship design. In: Fujita Y, Lind K, Williams TJ (eds) Computer applications in the automation of shipyard operation and ship design, vol 2. North-Holland, Elsevier, New York, pp 327–338
51. Gandomi AH, Yang XS, Alavi AH (2013) Cuckoo search algorithm: a metaheuristic approach to solve structural optimization problems. Eng Comput 29(1):17–35
52. Ray T, Saini P (2001) Engineering design optimization using a swarm with an intelligent information sharing among individuals. Eng Optim 33(6):735–748
53. Belegundu AD, Arora JS (1985) A study of mathematical programming methods for structural optimization. Part I: theory. Int J Numer Methods Eng 21(9):1583–1599
54. Gandomi AH, Yang XS (2011) Benchmark problems in structural optimization. In: Koziel S, Yang X-S (eds) Computational optimization, methods and algorithms. Springer, Berlin, pp 259–281
55. Mezura-Montes E, Coello CC, Landa-Becerra R (2003) Engineering optimization using simple evolutionary algorithm. In: Tools with artificial intelligence, 2003. Proceedings. 15th IEEE international conference on. IEEE, pp 149–156
56. Akhtar S, Tai K, Ray T (2002) A socio-behavioural simulation model for engineering design optimization. Eng Optim 34(4):341–354

Publisher's Note Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.